Special functions for creating arrays NumPy has several built-in functions that can assist you in creating certain types of arrays: arange(), zeros(), and ones(). Of these, arange() is probably the most useful because it allows you to create an array of numbers by specifying the initial value in the array, the maximum value in the array, and a step size between elements. arange() has three arguments: start, stop, and step: arange([start,] stop[, step,]) The stop argument is required. The default for start is 0 and the default for step is 1. Note that the values in the created array will stop one increment below stop. That is, if arange() is called with stop equal to 9 and step equal to 0.5, then the last value in the returned array will be 8.5.
```python
# Create a variable called b that is equal to a NumPy array containing the numbers 1 through 5
b = np.arange(1, 6, 1)
print(b)

# Create a variable called c that is equal to a NumPy array containing the numbers 0 through 10
c = np.arange(11)
print(c)
```
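A quick check of the exclusive-stop behavior described above, using the stop=9, step=0.5 example from the text:

```python
import numpy as np

# The stop value is exclusive: with stop=9 and step=0.5 the last element is 8.5, not 9.
a = np.arange(0, 9, 0.5)
print(a[-1])   # 8.5
print(a.size)  # 18
```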
winter2017/econ129/python/Econ129_Class_03_Complete.ipynb
letsgoexploring/teaching
mit
The zeros() and ones() functions take as their argument the desired shape of the array to be returned and fill that array with either zeros or ones.
```python
# Construct a 1x5 array of zeros
print(np.zeros(5))

# Construct a 2x2 array of ones
print(np.ones((2, 2)))
```
Math with NumPy arrays A nice aspect of NumPy arrays is that they are optimized for mathematical operations. The standard Python arithmetic operators +, -, *, /, and ** operate element-wise on NumPy arrays, as the following examples indicate.
```python
# Define three 1-dimensional arrays
A = np.array([2, 4, 6])
B = np.array([3, 2, 1])
C = np.array([-1, 3, 2, -4])

# Multiply A by a constant
print(3*A)

# Exponentiate A
print(A**2)

# Add A and B together
print(A+B)

# Exponentiate A with B
print(A**B)

# Add A and C together
print(A+C)
```
The error in the preceding example arises because addition is element-wise and A and C don't have the same shape.
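A small sketch of checking shapes before an element-wise operation, and of slicing the longer array so the addition becomes valid:

```python
import numpy as np

A = np.array([2, 4, 6])
C = np.array([-1, 3, 2, -4])

# A.shape is (3,) and C.shape is (4,), so A + C raises a ValueError.
# Slicing C down to A's length makes the element-wise addition valid.
print(A.shape, C.shape)
print(A + C[:3])  # [1 7 8]
```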
```python
# Compute the sine of the values in A
print(np.sin(A))
```
Iterating through NumPy arrays NumPy arrays are iterable objects, just like lists, strings, tuples, and dictionaries, which means that you can use for loops to iterate through their elements.
```python
# Use a for loop with a NumPy array to print the numbers 0 through 4
for x in np.arange(5):
    print(x)
```
Example: Basel problem One of my favorite math equations is: \begin{align} \sum_{n=1}^{\infty} \frac{1}{n^2} & = \frac{\pi^2}{6} \end{align} We can use an iteration through a NumPy array to approximate the left-hand side and verify the validity of the expression.
```python
# Set N equal to the number of terms to sum
N = 1000

# Initialize a variable called summation equal to 0
summation = 0

# Loop over the numbers 1 through N
for n in np.arange(1, N+1):
    summation = summation + 1/n**2

# Print the approximation and the exact solution
print('approx:', summation)
print('exact: ', np.pi**2/6)
```
Simulate example dataset
```python
# the true tree
tree = toytree.rtree.imbtree(ntips=10, treeheight=1e7)
tree.draw(ts='p');

# setup simulator
subst = {
    "state_frequencies": [0.3, 0.2, 0.3, 0.2],
    "kappa": 0.25,
    "gamma": 0.20,
    "gamma_categories": 4,
}
mod = ipcoal.Model(tree=tree, Ne=1e5, nsamples=2, mut=1e-8, substitution_model=subst)
mod.sim_loci(nloci=1, nsites=10000)
mod.write_concat_to_phylip(name="raxtest", outdir="/tmp", diploid=True)
```
testdocs/analysis/cookbook-raxml-ipcoal.ipynb
dereneaton/ipyrad
gpl-3.0
Infer an ML tree
```python
# init raxml object with input data and (optional) parameter options
rax = ipa.raxml(data="/tmp/raxtest.phy", T=4, N=100)

# print the raxml command string for posterity
print(rax.command)

# run the command (options: block until finished; overwrite existing)
rax.run(block=True, force=True)
```
Draw the inferred tree After inferring a tree you can then visualize it in a notebook using toytree.
```python
# load from the .trees attribute of the raxml object, or from the saved tree file
tre = toytree.tree(rax.trees.bipartitions)

# root and draw the tree
rtre = tre.root("r9")
rtre.draw(tip_labels_align=True, node_sizes=18, node_labels="support");
```
Setting parameters By default, several parameters are pre-set in the raxml object. To remove those parameters from the command string, you can set them to None. Additionally, you can build complex raxml command-line strings by adding almost any parameter to the raxml object init, as below. You probably can't do everything in raxml using this tool; it is only meant as a convenience. You can, of course, always write the raxml command-line string by hand instead.
```python
# init raxml object
rax = ipa.raxml(data="/tmp/raxtest.phy", T=4, N=10)

# parameter dictionary for a raxml object
rax.params

# paths to output files produced by raxml inference
rax.trees
```
Cookbook Most frequently used: perform 100 rapid bootstrap analyses followed by 10 rapid hill-climbing ML searches from random starting trees under the GTRGAMMA substitution model.
```python
rax = ipa.raxml(
    data="/tmp/raxtest.phy",
    name="test-1",
    workdir="analysis-raxml",
    m="GTRGAMMA",
    T=8,
    f="a",
    N=50,
)
print(rax.command)
rax.run(force=True)
```
Another common option: perform N rapid hill-climbing ML analyses from random starting trees, with no bootstrap replicates. Be sure to use the BestTree output from this analysis, since this run mode does not produce a bipartitions output file.
```python
rax = ipa.raxml(
    data="/tmp/raxtest.phy",
    name="test-2",
    workdir="analysis-raxml",
    m="GTRGAMMA",
    T=8,
    f="d",
    N=10,
    x=None,
)
print(rax.command)
rax.run(force=True)
```
Check your files The .info and related log files will be stored in the workdir. Be sure to look at these for further details of your analyses.
```
! cat ./analysis-raxml/RAxML_info.test-1
! cat ./analysis-raxml/RAxML_info.test-2
```
DeepDream This tutorial contains a minimal implementation of DeepDream, as described in this blog post by Alexander Mordvintsev. DeepDream is an experiment that visualizes the patterns learned by a neural network. Similar to when a child watches clouds and tries to interpret random shapes, DeepDream over-interprets and enhances the patterns it sees in an image. It does so by forwarding an image through the network, then calculating the gradient of the image with respect to the activations of a particular layer. The image is then modified to increase these activations, enhancing the patterns seen by the network, and resulting in a dream-like image. This process was dubbed "Inceptionism" (a reference to InceptionNet, and the movie Inception). Let's demonstrate how you can make a neural network "dream" and enhance the surreal patterns it sees in an image.
```python
import tensorflow as tf
import numpy as np
import matplotlib as mpl
import IPython.display as display
import PIL.Image
```
Choose an image to dream-ify For this tutorial, let's use an image of a labrador.
```python
url = 'https://storage.googleapis.com/download.tensorflow.org/example_images/YellowLabradorLooking_new.jpg'

# Download an image and read it into a NumPy array.
def download(url, max_dim=None):
    name = url.split('/')[-1]
    image_path = tf.keras.utils.get_file(name, origin=url)
    img = PIL.Image.open(image_path)
    if max_dim:
        img.thumbnail((max_dim, max_dim))
    return np.array(img)

# Normalize an image
def deprocess(img):
    img = 255*(img + 1.0)/2.0
    return tf.cast(img, tf.uint8)

# Display an image
def show(img):
    display.display(PIL.Image.fromarray(np.array(img)))

# Downsizing the image makes it easier to work with.
original_img = download(url, max_dim=500)
show(original_img)
display.display(display.HTML('Image cc-by: <a href="https://commons.wikimedia.org/wiki/File:Felis_catus-cat_on_snow.jpg">Von.grzanka</a>'))
```
Prepare the feature extraction model Download and prepare a pre-trained image classification model. You will use InceptionV3 which is similar to the model originally used in DeepDream. Note that any pre-trained model will work, although you will have to adjust the layer names below if you change this.
```python
base_model = tf.keras.applications.InceptionV3(include_top=False, weights='imagenet')
```
The idea in DeepDream is to choose a layer (or layers) and maximize the "loss" in a way that the image increasingly "excites" the layers. The complexity of the features incorporated depends on the layers you choose; i.e., lower layers produce strokes or simple patterns, while deeper layers give sophisticated features in images, or even whole objects. The InceptionV3 architecture is quite large (for a graph of the model architecture see TensorFlow's research repo). For DeepDream, the layers of interest are those where the convolutions are concatenated. There are 11 of these layers in InceptionV3, named 'mixed0' through 'mixed10'. Using different layers will result in different dream-like images. Deeper layers respond to higher-level features (such as eyes and faces), while earlier layers respond to simpler features (such as edges, shapes, and textures). Feel free to experiment with the layers selected below, but keep in mind that deeper layers (those with a higher index) will take longer to train on since the gradient computation is deeper.
```python
# Maximize the activations of these layers
names = ['mixed3', 'mixed5']
layers = [base_model.get_layer(name).output for name in names]

# Create the feature extraction model
dream_model = tf.keras.Model(inputs=base_model.input, outputs=layers)
```
Calculate loss The loss is the sum of the activations in the chosen layers. The loss is normalized at each layer so the contribution from larger layers does not outweigh smaller layers. Normally, loss is a quantity you wish to minimize via gradient descent. In DeepDream, you will maximize this loss via gradient ascent.
```python
def calc_loss(img, model):
    # Pass forward the image through the model to retrieve the activations.
    # Converts the image into a batch of size 1.
    img_batch = tf.expand_dims(img, axis=0)
    layer_activations = model(img_batch)
    if len(layer_activations) == 1:
        layer_activations = [layer_activations]

    losses = []
    for act in layer_activations:
        loss = tf.math.reduce_mean(act)
        losses.append(loss)

    return tf.reduce_sum(losses)
```
Gradient ascent Once you have calculated the loss for the chosen layers, all that is left is to calculate the gradients with respect to the image, and add them to the original image. Adding the gradients to the image enhances the patterns seen by the network. At each step, you will have created an image that increasingly excites the activations of certain layers in the network. The method that does this, below, is wrapped in a tf.function for performance. It uses an input_signature to ensure that the function is not retraced for different image sizes or steps/step_size values. See the Concrete functions guide for details.
```python
class DeepDream(tf.Module):
    def __init__(self, model):
        self.model = model

    @tf.function(
        input_signature=(
            tf.TensorSpec(shape=[None, None, 3], dtype=tf.float32),
            tf.TensorSpec(shape=[], dtype=tf.int32),
            tf.TensorSpec(shape=[], dtype=tf.float32),)
    )
    def __call__(self, img, steps, step_size):
        print("Tracing")
        loss = tf.constant(0.0)
        for n in tf.range(steps):
            with tf.GradientTape() as tape:
                # This needs gradients relative to `img`
                # `GradientTape` only watches `tf.Variable`s by default
                tape.watch(img)
                loss = calc_loss(img, self.model)

            # Calculate the gradient of the loss with respect to the pixels of the input image.
            gradients = tape.gradient(loss, img)

            # Normalize the gradients.
            gradients /= tf.math.reduce_std(gradients) + 1e-8

            # In gradient ascent, the "loss" is maximized so that the input image increasingly "excites" the layers.
            # You can update the image by directly adding the gradients (because they're the same shape!)
            img = img + gradients*step_size
            img = tf.clip_by_value(img, -1, 1)

        return loss, img

deepdream = DeepDream(dream_model)
```
Main Loop
```python
def run_deep_dream_simple(img, steps=100, step_size=0.01):
    # Convert from uint8 to the range expected by the model.
    img = tf.keras.applications.inception_v3.preprocess_input(img)
    img = tf.convert_to_tensor(img)
    step_size = tf.convert_to_tensor(step_size)
    steps_remaining = steps
    step = 0
    while steps_remaining:
        if steps_remaining > 100:
            run_steps = tf.constant(100)
        else:
            run_steps = tf.constant(steps_remaining)
        steps_remaining -= run_steps
        step += run_steps

        loss, img = deepdream(img, run_steps, tf.constant(step_size))

        display.clear_output(wait=True)
        show(deprocess(img))
        print("Step {}, loss {}".format(step, loss))

    result = deprocess(img)
    display.clear_output(wait=True)
    show(result)

    return result

dream_img = run_deep_dream_simple(img=original_img, steps=100, step_size=0.01)
```
Taking it up an octave Pretty good, but there are a few issues with this first attempt: The output is noisy (this could be addressed with a tf.image.total_variation loss). The image is low resolution. The patterns appear like they're all happening at the same granularity. One approach that addresses all these problems is applying gradient ascent at different scales. This will allow patterns generated at smaller scales to be incorporated into patterns at higher scales and filled in with additional detail. To do this you can perform the previous gradient ascent approach, then increase the size of the image (which is referred to as an octave), and repeat this process for multiple octaves.
```python
import time
start = time.time()

OCTAVE_SCALE = 1.30

img = tf.constant(np.array(original_img))
base_shape = tf.shape(img)[:-1]
float_base_shape = tf.cast(base_shape, tf.float32)

for n in range(-2, 3):
    new_shape = tf.cast(float_base_shape*(OCTAVE_SCALE**n), tf.int32)
    img = tf.image.resize(img, new_shape).numpy()
    img = run_deep_dream_simple(img=img, steps=50, step_size=0.01)

display.clear_output(wait=True)
img = tf.image.resize(img, base_shape)
img = tf.image.convert_image_dtype(img/255.0, dtype=tf.uint8)
show(img)

end = time.time()
end - start
```
Optional: Scaling up with tiles One thing to consider is that as the image increases in size, so will the time and memory necessary to perform the gradient calculation. The above octave implementation will not work on very large images, or many octaves. To avoid this issue you can split the image into tiles and compute the gradient for each tile. Applying random shifts to the image before each tiled computation prevents tile seams from appearing. Start by implementing the random shift:
```python
def random_roll(img, maxroll):
    # Randomly shift the image to avoid tiled boundaries.
    shift = tf.random.uniform(shape=[2], minval=-maxroll, maxval=maxroll, dtype=tf.int32)
    img_rolled = tf.roll(img, shift=shift, axis=[0, 1])
    return shift, img_rolled

shift, img_rolled = random_roll(np.array(original_img), 512)
show(img_rolled)
```
Here is a tiled equivalent of the deepdream function defined earlier:
```python
class TiledGradients(tf.Module):
    def __init__(self, model):
        self.model = model

    @tf.function(
        input_signature=(
            tf.TensorSpec(shape=[None, None, 3], dtype=tf.float32),
            tf.TensorSpec(shape=[2], dtype=tf.int32),
            tf.TensorSpec(shape=[], dtype=tf.int32),)
    )
    def __call__(self, img, img_size, tile_size=512):
        shift, img_rolled = random_roll(img, tile_size)

        # Initialize the image gradients to zero.
        gradients = tf.zeros_like(img_rolled)

        # Skip the last tile, unless there's only one tile.
        xs = tf.range(0, img_size[1], tile_size)[:-1]
        if not tf.cast(len(xs), bool):
            xs = tf.constant([0])
        ys = tf.range(0, img_size[0], tile_size)[:-1]
        if not tf.cast(len(ys), bool):
            ys = tf.constant([0])

        for x in xs:
            for y in ys:
                # Calculate the gradients for this tile.
                with tf.GradientTape() as tape:
                    # This needs gradients relative to `img_rolled`.
                    # `GradientTape` only watches `tf.Variable`s by default.
                    tape.watch(img_rolled)

                    # Extract a tile out of the image.
                    img_tile = img_rolled[y:y+tile_size, x:x+tile_size]
                    loss = calc_loss(img_tile, self.model)

                # Update the image gradients for this tile.
                gradients = gradients + tape.gradient(loss, img_rolled)

        # Undo the random shift applied to the image and its gradients.
        gradients = tf.roll(gradients, shift=-shift, axis=[0, 1])

        # Normalize the gradients.
        gradients /= tf.math.reduce_std(gradients) + 1e-8

        return gradients

get_tiled_gradients = TiledGradients(dream_model)
```
Putting this together gives a scalable, octave-aware deepdream implementation:
```python
def run_deep_dream_with_octaves(img, steps_per_octave=100, step_size=0.01,
                                octaves=range(-2, 3), octave_scale=1.3):
    base_shape = tf.shape(img)
    img = tf.keras.utils.img_to_array(img)
    img = tf.keras.applications.inception_v3.preprocess_input(img)

    initial_shape = img.shape[:-1]
    img = tf.image.resize(img, initial_shape)
    for octave in octaves:
        # Scale the image based on the octave
        new_size = tf.cast(tf.convert_to_tensor(base_shape[:-1]), tf.float32)*(octave_scale**octave)
        new_size = tf.cast(new_size, tf.int32)
        img = tf.image.resize(img, new_size)

        for step in range(steps_per_octave):
            gradients = get_tiled_gradients(img, new_size)
            img = img + gradients*step_size
            img = tf.clip_by_value(img, -1, 1)

            if step % 10 == 0:
                display.clear_output(wait=True)
                show(deprocess(img))
                print("Octave {}, Step {}".format(octave, step))

    result = deprocess(img)
    return result

img = run_deep_dream_with_octaves(img=original_img, step_size=0.01)

display.clear_output(wait=True)
img = tf.image.resize(img, base_shape)
img = tf.image.convert_image_dtype(img/255.0, dtype=tf.uint8)
show(img)
```
With this many layers, it's going to take a lot of iterations for this network to learn. By the time you're done training these 800 batches, your final test and validation accuracies probably won't be much better than 10%. (It will be different each time, but will most likely be less than 15%.) Using batch normalization, you'll be able to train this same network to over 90% in that same number of batches. Add batch normalization We've copied the previous three cells to get you started. Edit these cells to add batch normalization to the network. For this exercise, you should use tf.layers.batch_normalization to handle most of the math, but you'll need to make a few other changes to your network to integrate batch normalization. You may want to refer back to the lesson notebook to remind yourself of important things, like how your graph operations need to know whether or not you are performing training or inference. If you get stuck, you can check out the Batch_Normalization_Solutions notebook to see how we did things. TODO: Modify fully_connected to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps.
```python
def fully_connected(prev_layer, num_units):
    """
    Create a fully connected layer with the given layer as input and the given number of neurons.

    :param prev_layer: Tensor
        The Tensor that acts as input into this layer
    :param num_units: int
        The size of the layer. That is, the number of units, nodes, or neurons.
    :returns Tensor
        A new fully connected layer
    """
    linear_output = tf.layers.dense(prev_layer, num_units, activation=None)
    normalized_output = tf.layers.batch_normalization(linear_output, training=True)
    layer = tf.nn.relu(normalized_output)
    return layer
```
batch-norm/Batch_Normalization_Exercises.ipynb
elenduuche/deep-learning
mit
TODO: Modify conv_layer to add batch normalization to the convolutional layers it creates. Feel free to change the function's parameters if it helps.
```python
def conv_layer(prev_layer, layer_depth):
    """
    Create a convolutional layer with the given layer as input.

    :param prev_layer: Tensor
        The Tensor that acts as input into this layer
    :param layer_depth: int
        We'll set the strides and number of feature maps based on the layer's depth in the network.
        This is *not* a good way to make a CNN, but it helps us create this example with very little code.
    :returns Tensor
        A new convolutional layer
    """
    strides = 2 if layer_depth % 3 == 0 else 1
    conv_layer_output = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', activation=None)
    normalized_output = tf.layers.batch_normalization(conv_layer_output, training=True)
    conv_layer = tf.nn.relu(normalized_output)
    return conv_layer
```
TODO: Edit the train function to support batch normalization. You'll need to make sure the network knows whether or not it is training, and you'll need to make sure it updates and uses its population statistics correctly.
```python
def train(num_batches, batch_size, learning_rate):
    # Build placeholders for the input samples and labels
    inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
    labels = tf.placeholder(tf.float32, [None, 10])

    # Feed the inputs into a series of 20 convolutional layers
    layer = inputs
    for layer_i in range(1, 20):
        layer = conv_layer(layer, layer_i)

    # Flatten the output from the convolutional layers
    orig_shape = layer.get_shape().as_list()
    layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])

    # Add one fully connected layer
    layer = fully_connected(layer, 100)

    # Create the output layer with 1 node for each class
    logits = tf.layers.dense(layer, 10)

    # Define loss and training operations
    model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))

    # Make sure the batch normalization population statistics update before each training step
    with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
        train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)

    # Create operations to test accuracy
    correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(labels, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

    # Train and test the network
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for batch_i in range(num_batches):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)

            # train this batch
            sess.run(train_opt, {inputs: batch_xs, labels: batch_ys})

            # Periodically check the validation or training loss and accuracy
            if batch_i % 100 == 0:
                loss, acc = sess.run([model_loss, accuracy],
                                     {inputs: mnist.validation.images,
                                      labels: mnist.validation.labels})
                print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
            elif batch_i % 25 == 0:
                loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys})
                print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))

        # At the end, score the final accuracy for both the validation and test sets
        acc = sess.run(accuracy, {inputs: mnist.validation.images, labels: mnist.validation.labels})
        print('Final validation accuracy: {:>3.5f}'.format(acc))
        acc = sess.run(accuracy, {inputs: mnist.test.images, labels: mnist.test.labels})
        print('Final test accuracy: {:>3.5f}'.format(acc))

        # Score the first 100 test images individually.
        # This won't work if batch normalization isn't implemented correctly.
        correct = 0
        for i in range(100):
            correct += sess.run(accuracy, feed_dict={inputs: [mnist.test.images[i]],
                                                     labels: [mnist.test.labels[i]]})
        print("Accuracy on 100 samples:", correct/100)

num_batches = 800
batch_size = 64
learning_rate = 0.002

tf.reset_default_graph()
with tf.Graph().as_default():
    train(num_batches, batch_size, learning_rate)
```
Construct GP model from log file We can reconstruct the GP model from the parsed log file (the on-the-fly training trajectory). Here we build up a GP model with a 2+3-body kernel from the on-the-fly log file.
```python
gp_model = otf_object.make_gp(hyp_no=hyp_no)
gp_model.parallel = True
gp_model.hyp_labels = ['sig2', 'ls2', 'sig3', 'ls3', 'noise']

# write the model to a JSON file
gp_model.write_model('AgI.gp', format='json')
```
docs/source/tutorials/after_training.ipynb
mir-group/flare
mit
The last step, write_model, writes this GP model to a JSON file, so next time we can load the model directly from that file:
```python
from flare.gp import GaussianProcess
gp_model = GaussianProcess.from_file('AgI.gp.json')
```
Map the GP force field & dump LAMMPS coefficient file To use the trained force field with the accelerated MGP version, or in LAMMPS, we need to build an MGP from the GP model. Since 2-body and 3-body terms are both included, we need to set the number of grid points for each in grid_params. We build up an energy mapping. See the MGP tutorial for more explanation of the MGP settings.
```python
from flare.mgp import MappedGaussianProcess

grid_params = {'twobody': {'grid_num': [64]},
               'threebody': {'grid_num': [20, 20, 20]}}

data = gp_model.training_statistics
lammps_location = 'AgI_Molten'

mgp_model = MappedGaussianProcess(grid_params, data['species'], var_map=None,
                                  lmp_file_name='AgI_Molten', n_cpus=1)
mgp_model.build_map(gp_model)
```
The coefficient file for the LAMMPS mgp pair_style is automatically saved once the mapping is done, under the name given by lmp_file_name. Run LAMMPS with MGP pair style With the above coefficient file, we can run a LAMMPS simulation with the mgp pair style. First download our mgp pair style files and compile your LAMMPS executable with the mgp pair style, following the instructions in the Installation section. One way to use it is running lmp_executable < in.lammps > log.lammps with the executable provided in our repository. When creating the input file, be sure to set newton off, pair_style mgp, and pair_coeff * * <lmp_file_name> <chemical_symbols> yes/no yes/no. An example is using the coefficient file AgI_Molten.mgp for the AgI system, with two-body (the 1st yes) together with three-body (the 2nd yes): pair_coeff * * AgI_Molten.mgp Ag I yes yes. Another way is to use the ASE LAMMPS interface.
```python
import os

from flare.utils.element_coder import _Z_to_mass, _element_to_Z
from flare.ase.calculator import FLARE_Calculator
from ase.calculators.lammpsrun import LAMMPS
from ase import Atoms

# create test structure
species = otf_object.gp_species_list[-1]
positions = otf_object.position_list[-1]
forces = otf_object.force_list[-1]
otf_cell = otf_object.header['cell']
structure = Atoms(symbols=species, cell=otf_cell, positions=positions)

# get chemical symbols, masses, etc.
species = gp_model.training_statistics['species']
specie_symbol_list = " ".join(species)
masses = [f"{i} {_Z_to_mass[_element_to_Z[species[i]]]}" for i in range(len(species))]

# set up input params
parameters = {'command': os.environ.get('lmp'),  # set up executable for ASE
              'newton': 'off',
              'pair_style': 'mgp',
              'pair_coeff': [f'* * {lammps_location + ".mgp"} {specie_symbol_list} yes yes'],
              'mass': masses}
files = [lammps_location + ".mgp"]

# create ASE calc
lmp_calc = LAMMPS(label='tmp_AgI', keep_tmp_files=True, tmp_dir='./tmp/',
                  parameters=parameters, files=files, specorder=species)

structure.calc = lmp_calc

# To compute energy, forces and stress
# energy = structure.get_potential_energy()
# forces = structure.get_forces()
# stress = structure.get_stress()
```
The third way to run LAMMPS is through our LAMMPS interface; set the environment variable $lmp to the executable.
```python
from flare import struc
from flare.lammps import lammps_calculator

# lmp coef file is automatically written now every time MGP is constructed

# create test structure
species = otf_object.gp_species_list[-1]
positions = otf_object.position_list[-1]
forces = otf_object.force_list[-1]
otf_cell = otf_object.header['cell']
structure = struc.Structure(otf_cell, species, positions)

atom_types = [1, 2]
atom_masses = [108, 127]
atom_species = [1, 2] * 27

# create data file
data_file_name = 'tmp.data'
data_text = lammps_calculator.lammps_dat(structure, atom_types,
                                         atom_masses, atom_species)
lammps_calculator.write_text(data_file_name, data_text)

# create lammps input
style_string = 'mgp'
coeff_string = '* * {} Ag I yes yes'.format(lammps_location)
lammps_executable = '$lmp'
dump_file_name = 'tmp.dump'
input_file_name = 'tmp.in'
output_file_name = 'tmp.out'
input_text = \
    lammps_calculator.generic_lammps_input(data_file_name, style_string,
                                           coeff_string, dump_file_name)
lammps_calculator.write_text(input_file_name, input_text)

# To run lammps and get forces
# lammps_calculator.run_lammps(lammps_executable, input_file_name,
#                              output_file_name)
# lammps_forces = lammps_calculator.lammps_parser(dump_file_name)
```
We update the movie and user ids so that they are contiguous integers, which we want when using embeddings.
```python
ratings.movieId = ratings.movieId.map(movieid2idx)
ratings.userId = ratings.userId.map(userid2idx)

user_min, user_max, movie_min, movie_max = (ratings.userId.min(), ratings.userId.max(),
                                            ratings.movieId.min(), ratings.movieId.max())
user_min, user_max, movie_min, movie_max

n_users = ratings.userId.nunique()
n_movies = ratings.movieId.nunique()
n_users, n_movies
```
deeplearning1/nbs/lesson4-ma.ipynb
appleby/fastai-courses
apache-2.0
Dot product The most basic model is a dot product of a movie embedding and a user embedding. Let's see how well that works:
```python
user_in = Input(shape=(1,), dtype='int64', name='user_in')
u = Embedding(n_users, n_factors, input_length=1, W_regularizer=l2(1e-4))(user_in)
movie_in = Input(shape=(1,), dtype='int64', name='movie_in')
m = Embedding(n_movies, n_factors, input_length=1, W_regularizer=l2(1e-4))(movie_in)

x = merge([u, m], mode='dot')
x = Flatten()(x)
model = Model([user_in, movie_in], x)
model.compile(Adam(0.001), loss='mse')
model.summary()

model.fit([trn.userId, trn.movieId], trn.rating, batch_size=batch_size, nb_epoch=1,
          validation_data=([val.userId, val.movieId], val.rating))

model.optimizer.lr = 0.01
model.fit([trn.userId, trn.movieId], trn.rating, batch_size=batch_size, nb_epoch=3,
          validation_data=([val.userId, val.movieId], val.rating))

model.optimizer.lr = 0.001
model.fit([trn.userId, trn.movieId], trn.rating, batch_size=batch_size, nb_epoch=6,
          validation_data=([val.userId, val.movieId], val.rating))
```
deeplearning1/nbs/lesson4-ma.ipynb
appleby/fastai-courses
apache-2.0
The best benchmarks are a bit over 0.9, so this model doesn't seem to be working that well... Bias The problem is likely to be that we don't have bias terms - that is, a single bias for each user and each movie representing how positive or negative each user is, and how good each movie is. We can add that easily by simply creating an embedding with one output for each movie and each user, and adding it to our output.
def embedding_input(name, n_in, n_out, reg): inp = Input(shape=(1,), dtype='int64', name=name) return inp, Embedding(n_in, n_out, input_length=1, W_regularizer=l2(reg))(inp) user_in, u = embedding_input('user_in', n_users, n_factors, 1e-4) movie_in, m = embedding_input('movie_in', n_movies, n_factors, 1e-4) def create_bias(inp, n_in): x = Embedding(n_in, 1, input_length=1)(inp) return Flatten()(x) ub = create_bias(user_in, n_users) mb = create_bias(movie_in, n_movies) x = merge([u, m], mode='dot') x = Flatten()(x) x = merge([x, ub], mode='sum') x = merge([x, mb], mode='sum') model = Model([user_in, movie_in], x) model.compile(Adam(0.001), loss='mse') model.summary() model.fit([trn.userId, trn.movieId], trn.rating, batch_size=batch_size, nb_epoch=1, validation_data=([val.userId, val.movieId], val.rating)) model.optimizer.lr=0.01 model.fit([trn.userId, trn.movieId], trn.rating, batch_size=batch_size, nb_epoch=6, validation_data=([val.userId, val.movieId], val.rating)) model.optimizer.lr=0.001 model.fit([trn.userId, trn.movieId], trn.rating, batch_size=batch_size, nb_epoch=10, validation_data=([val.userId, val.movieId], val.rating)) model.fit([trn.userId, trn.movieId], trn.rating, batch_size=batch_size, nb_epoch=5, validation_data=([val.userId, val.movieId], val.rating))
deeplearning1/nbs/lesson4-ma.ipynb
appleby/fastai-courses
apache-2.0
We can use the model to generate predictions by passing a pair of ints - a user id and a movie id. For instance, this predicts that user #3 would really enjoy movie #6.
model.predict([np.array([3]), np.array([6])]) ratings.loc[lambda df: df.userId == 3, :].head() model.predict([np.array([3]), np.array([20])])
deeplearning1/nbs/lesson4-ma.ipynb
appleby/fastai-courses
apache-2.0
We can draw a picture to see how various movies appear on the map of these components. This picture shows the 1st and 3rd components.
import sys stdout = sys.stdout reload(sys) sys.setdefaultencoding('utf-8') sys.stdout = stdout start=50; end=100 X = fac0[start:end] Y = fac2[start:end] plt.figure(figsize=(15,15)) plt.scatter(X, Y) for i, x, y in zip(topMovies[start:end], X, Y): plt.text(x,y,movie_names[movies[i]], color=np.random.rand(3)*0.7, fontsize=14) plt.show()
deeplearning1/nbs/lesson4-ma.ipynb
appleby/fastai-courses
apache-2.0
Neural net Rather than creating a special purpose architecture (like our dot-product with bias earlier), it's often both easier and more accurate to use a standard neural network. Let's try it! Here, we simply concatenate the user and movie embeddings into a single vector, which we feed into the neural net.
user_in, u = embedding_input('user_in', n_users, n_factors, 1e-4) movie_in, m = embedding_input('movie_in', n_movies, n_factors, 1e-4) x = merge([u, m], mode='concat') x = Flatten()(x) x = Dropout(0.3)(x) x = Dense(70, activation='relu')(x) x = Dropout(0.75)(x) x = Dense(1)(x) nn = Model([user_in, movie_in], x) nn.compile(Adam(0.001), loss='mse') nn.summary() nn.fit([trn.userId, trn.movieId], trn.rating, batch_size=batch_size, nb_epoch=8, validation_data=([val.userId, val.movieId], val.rating)) nn.save_weights(model_path+'nn.h5') nn.load_weights(model_path+'nn.h5')
deeplearning1/nbs/lesson4-ma.ipynb
appleby/fastai-courses
apache-2.0
List Operators In this step we list the current operators that exist in the system by using the get_plat_operator function and capturing the output into a variable called ops_list. We then call len() on the ops_list object to count the operators currently configured in the system.
ops_list = get_plat_operator(url=auth.url, auth=auth.creds) print ("There are currently " + str(len(ops_list)) + " operators configured.")
examples/.ipynb_checkpoints/HPE IMC Import Operators-checkpoint.ipynb
HPNetworking/HP-Intelligent-Management-Center
apache-2.0
Shown here is a screen capture of the current operators configured in the HPE IMC system. You can see that it shows the same number of operators as the statement above. note: The screen capture is used for the initial demonstration only. If you are running this notebook against your own IMC server, this screen capture may not match what you have configured on your system. Resetting an Operator Password We will now use the set_operator_password function to set the password of the newly created cyoung account to something a little more secure. Feel free to try using this function to reset the password of another account by replacing the cyoung argument in the function below.
set_operator_password('cyoung', password='newpass', auth=auth.creds, url=auth.url)
examples/.ipynb_checkpoints/HPE IMC Import Operators-checkpoint.ipynb
HPNetworking/HP-Intelligent-Management-Center
apache-2.0
Input Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders: - Input text placeholder named "input" using the TF Placeholder name parameter. - Targets placeholder - Learning Rate placeholder Return the placeholders in the following tuple (Input, Targets, LearningRate)
def get_inputs(): """ Create TF Placeholders for input, targets, and learning rate. :return: Tuple (input, targets, learning rate) """ # TODO: Implement Function inputs = tf.placeholder(tf.int32, [None, None], name='input') targets = tf.placeholder(tf.int32, [None, None], name='target') learn_rate = tf.placeholder(tf.float32, name='learn_rate') return inputs, targets, learn_rate """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_get_inputs(get_inputs)
tv-script-generation/dlnd_tv_script_generation.ipynb
ianhamilton117/deep-learning
mit
Build RNN Cell and Initialize Stack one or more BasicLSTMCells in a MultiRNNCell. - The Rnn size should be set using rnn_size - Initalize Cell State using the MultiRNNCell's zero_state() function - Apply the name "initial_state" to the initial state using tf.identity() Return the cell and initial state in the following tuple (Cell, InitialState)
def get_init_cell(batch_size, rnn_size): """ Create an RNN Cell and initialize it. :param batch_size: Size of batches :param rnn_size: Size of RNNs :return: Tuple (cell, initialize state) """ # TODO: Implement Function lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size) #TODO: Wrap with dropout layer? cell = tf.contrib.rnn.MultiRNNCell([lstm] * 1) #TODO: Try different numbers of layers? initial_state = cell.zero_state(batch_size, tf.float32) initial_state = tf.identity(initial_state, name='initial_state') return cell, initial_state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_get_init_cell(get_init_cell)
tv-script-generation/dlnd_tv_script_generation.ipynb
ianhamilton117/deep-learning
mit
Build the Neural Network Apply the functions you implemented above to: - Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function. - Build RNN using cell and your build_rnn(cell, inputs) function. - Apply a fully connected layer with a linear activation and vocab_size as the number of outputs. Return the logits and final state in the following tuple (Logits, FinalState)
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim): """ Build part of the neural network :param cell: RNN cell :param rnn_size: Size of rnns :param input_data: Input data :param vocab_size: Vocabulary size :param embed_dim: Number of embedding dimensions :return: Tuple (Logits, FinalState) """ # TODO: Implement Function embedding = get_embed(input_data, vocab_size, embed_dim) #TODO: Try a different embed_dim rnn, final_state = build_rnn(cell, embedding) logits = tf.contrib.layers.fully_connected(rnn, vocab_size, activation_fn=None) return logits, final_state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_build_nn(build_nn)
tv-script-generation/dlnd_tv_script_generation.ipynb
ianhamilton117/deep-learning
mit
Neural Network Training Hyperparameters Tune the following parameters: Set num_epochs to the number of epochs. Set batch_size to the batch size. Set rnn_size to the size of the RNNs. Set embed_dim to the size of the embedding. Set seq_length to the length of sequence. Set learning_rate to the learning rate. Set show_every_n_batches to the number of batches the neural network should print progress.
# Number of Epochs num_epochs = 700 # Batch Size batch_size = 1024 # RNN Size rnn_size = 256 # Embedding Dimension Size embed_dim = 300 # Sequence Length seq_length = 30 # Learning Rate learning_rate = 0.01 # Show stats for every n number of batches show_every_n_batches = 5 """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ save_dir = './save'
tv-script-generation/dlnd_tv_script_generation.ipynb
ianhamilton117/deep-learning
mit
Plotting
# creating figure plt.figure(figsize=(6, 4), dpi=120) #plt.plot(x, Y, color='blue') plt.scatter(X, y, color='blue') plt.xlabel("Pizza diameter") plt.ylabel("Pizza price $") plt.title("Pizza price analysis") #plt.xlim(0, 30) #plt.ylim(0, 30) plt.grid(True, color='0.2') plt.autoscale(True)
MasteringML_wSkLearn/01_Linear_Regression.ipynb
atulsingh0/MachineLearning
gpl-3.0
Using linear Regression to predict the pizza price
from sklearn.linear_model import LinearRegression lReg = LinearRegression() lReg.fit(X, y) # predict the price of a 16" pizza (predict expects a 2D array) print("16' pizza price : ", lReg.predict([[16]])[0]) # getting coefficient & intercept print("Coeff : ", lReg.coef_, "\nIntercept : ", lReg.intercept_)
MasteringML_wSkLearn/01_Linear_Regression.ipynb
atulsingh0/MachineLearning
gpl-3.0
Checking the RSS, averaged over the observations (i.e. the mean squared error) - $$ mean([y-f(x)]^2) $$
rss = np.mean((y-lReg.predict(X))**2) rss # also called cost func
MasteringML_wSkLearn/01_Linear_Regression.ipynb
atulsingh0/MachineLearning
gpl-3.0
Calculating variance of X and co-variance of X and y
xm = np.mean(X) print(xm) variance = (np.sum((X - xm)**2))/4 print(variance) # numpy func np.var print(np.var(X, ddof=1)) # ddof=1 applies Bessel's correction ym = np.mean(y) print(ym) covar = np.sum((X-xm)*(y-ym))/4 print(covar) # numpy func np.cov print(np.cov([6,8,10,14,18], [7,9,13,17.5,18])[0][1])
MasteringML_wSkLearn/01_Linear_Regression.ipynb
atulsingh0/MachineLearning
gpl-3.0
now, calculating coeff - $$ \frac{cov(X,y)}{var(X)} $$
coeff = covar / variance coeff # based on coeff we can calc intercept which is y - coeff*x intercept = ym - coeff*xm intercept print(coeff, intercept) print(lReg.coef_, lReg.intercept_) # checking out the 16" pizza price price = 1.96551724138 + (0.976293103448 * 16) print(price) print(lReg.predict([[16]])) # let's test this model on test data X_test = [[8],[9],[11],[16], [12]] y_test = [[11],[8.5],[15], [18],[11]] y_predict = lReg.predict(X_test) y_predict
MasteringML_wSkLearn/01_Linear_Regression.ipynb
atulsingh0/MachineLearning
gpl-3.0
Performance measures, bias, and variance There are two fundamental causes of prediction error: a model's bias and its variance. Bias A model with a high bias will produce similar errors for an input regardless of the training set it was trained with; the model biases its own assumptions about the real relationship over the relationship demonstrated in the training data. Variance A model with high variance, conversely, will produce different errors for an input depending on the training set that it was trained with. A model with high bias is inflexible, but a model with high variance may be so flexible that it models the noise in the training set. That is, a model with high variance over-fits the training data, while a model with high bias under-fits the training data. It can be helpful to visualize bias and variance as darts thrown at a dartboard. Each dart is analogous to a prediction from a different dataset. high bias but low variance - A model with high bias but low variance will throw darts that are far from the bull's eye, but tightly clustered. high bias and high variance - A model with high bias and high variance will throw darts all over the board; the darts are far from the bull's eye and each other. low bias and high variance - A model with low bias and high variance will throw darts that are closer to the bull's eye, but poorly clustered. Finally, a model with low bias and low variance will throw darts that are tightly clustered around the bull's eye. Ideally, a model will have both low bias and low variance, but efforts to decrease one will frequently increase the other. This is known as the bias-variance trade-off. accuracy, precision, and recall $$ ACC = \frac{TP + TN}{TP + TN + FP + FN} $$ $$ P = \frac{TP}{TP + FP} $$ $$ R = \frac{TP}{TP + FN} $$
# Regression error metrics from sklearn import metrics print("Mean Abs Error", metrics.mean_absolute_error(y_test, y_predict)) print("Mean Squared Error", metrics.mean_squared_error(y_test, y_predict)) print("R-squared", lReg.score(X_test, y_test))
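The accuracy, precision, and recall formulas above can be sketched directly from confusion-matrix counts. The TP/TN/FP/FN values below are made-up illustration numbers, not derived from the pizza data:

```python
# Hedged sketch of the ACC, P, and R formulas above, computed from
# hypothetical confusion-matrix counts.
def classification_metrics(tp, tn, fp, fn):
    accuracy = (tp + tn) / (tp + tn + fp + fn)  # (TP + TN) / all predictions
    precision = tp / (tp + fp)                  # TP / predicted positives
    recall = tp / (tp + fn)                     # TP / actual positives
    return accuracy, precision, recall

acc, p, r = classification_metrics(tp=8, tn=5, fp=2, fn=1)
print("accuracy:", acc)   # 0.8125
print("precision:", p)    # 0.8
print("recall:", r)       # 8/9, about 0.889
```

Note the precision/recall trade-off: lowering the decision threshold of a classifier typically raises recall at the cost of precision, and vice versa.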
MasteringML_wSkLearn/01_Linear_Regression.ipynb
atulsingh0/MachineLearning
gpl-3.0
E2E ML on GCP: MLOps stage 4 : formalization: get started with Vertex AI ML Metadata <table align="left"> <td> <a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/ml_ops/stage4/get_started_with_vertex_ml_metadata.ipynb"> <img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab </a> </td> <td> <a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/ml_ops/stage4/get_started_with_vertex_ml_metadata.ipynb"> <img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo"> View on GitHub </a> </td> <td> <a href="https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/vertex-ai-samples/main/notebooks/community/ml_ops/stage4/get_started_with_vertex_ml_metadata.ipynb"> <img src="https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32" alt="Vertex AI logo"> Open in Vertex AI Workbench </a> </td> </table> <br/><br/><br/> Overview This tutorial demonstrates how to use Vertex AI for E2E MLOps on Google Cloud in production. This tutorial covers stage 4 : formalization: get started with Vertex AI ML Metadata. Dataset The dataset used for this tutorial is the UCI Machine Learning 'Dry beans dataset', from: KOKLU, M. and OZKAN, I.A., (2020), "Multiclass Classification of Dry Beans Using Computer Vision and Machine Learning Techniques." In Computers and Electronics in Agriculture, 174, 105507. DOI. Objective In this tutorial, you learn how to use Vertex AI ML Metadata. This tutorial uses the following Google Cloud ML services: Vertex AI ML Metadata Vertex AI Pipelines The steps performed include: Create a Metadatastore resource. Create (record)/List an Artifact, with artifacts and metadata. Create (record)/List an Execution.
Create (record)/List a Context. Add Artifact to Execution as events. Add Execution and Artifact into the Context Delete Artifact, Execution and Context. Create and run a Vertex AI Pipeline ML workflow to train and deploy a scikit-learn model. Create custom pipeline components that generate artifacts and metadata. Compare Vertex AI Pipelines runs. Trace the lineage for pipeline-generated artifacts. Query your pipeline run metadata. Installations Install the packages required for executing the notebook.
import os # The Vertex AI Workbench Notebook product has specific requirements IS_WORKBENCH_NOTEBOOK = os.getenv("DL_ANACONDA_HOME") IS_USER_MANAGED_WORKBENCH_NOTEBOOK = os.path.exists( "/opt/deeplearning/metadata/env_version" ) # Vertex AI Notebook requires dependencies to be installed with '--user' USER_FLAG = "" if IS_WORKBENCH_NOTEBOOK: USER_FLAG = "--user" ! pip3 install --upgrade google-cloud-aiplatform[tensorboard] $USER_FLAG -q ! pip3 install --upgrade google-cloud-pipeline-components $USER_FLAG -q
notebooks/community/ml_ops/stage4/get_started_with_vertex_ml_metadata.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Import Vertex AI SDK Import the Vertex AI SDK into your Python environment.
import google.cloud.aiplatform_v1beta1 as aip_beta
notebooks/community/ml_ops/stage4/get_started_with_vertex_ml_metadata.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Introduction to Vertex AI Metadata The Vertex AI ML Metadata service provides you with the ability to record, and subsequently search and analyze, the artifacts and corresponding metadata produced by your ML workflows. For example, during experimentation one might desire to record the location of the model artifacts, as artifacts, and the training hyperparameters and evaluation metrics as the corresponding metadata. The service supports recording ML metadata both manually and automatically, with the latter occurring when you use Vertex AI Pipelines. Concepts and organization Vertex ML Metadata describes your ML system's metadata as a graph. Artifacts: Artifacts are pieces of data that ML systems consume or produce, such as: datasets, models, or logs. For large artifacts like datasets or models, the artifact record includes the URI where the data is stored. Executions: Executions describe a single step in your ML system's workflow. Events: Executions can depend on artifacts as inputs or produce artifacts as outputs. Events describe the relationship between artifacts and executions to help you determine the lineage of artifacts. For example, an event is created to record that a dataset is used by an execution, and another event is created to record that this execution produced a model. Contexts: Contexts let you group artifacts and executions together in a single, queryable, and typed category. ML artifact lineage Vertex AI ML Metadata provides the ability to understand changes in the performance of your ML system, and analyze the metadata produced by your ML workflow and the lineage of its artifacts. An artifact's lineage includes all the factors that contributed to its creation, as well as artifacts and metadata that descend from this artifact. Learn more about Introduction to Vertex AI ML Metadata Create a MetadataStore resource Each project may have one or more MetadataStore resources.
By default, if none is explicitly created, each project has a default, which is specified as: projects/&lt;project_id&gt;/locations/&lt;region&gt;/metadataStores/&lt;name&gt; You create a MetadataStore resource using the create_metadata_store() method, with the following parameters: parent: The fully qualified subpath for all resources in your project, i.e., projects/<project_id>/locations/<location> metadata_store_id: The name of the MetadataStore resource.
metadata_store = clients["metadata"].create_metadata_store( parent=PARENT, metadata_store_id="my-metadata-store" ) metadata_store_id = str(metadata_store.result())[7:-2] print(metadata_store_id)
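As a mental model for the concepts introduced above, the Artifact/Execution/Event/Context graph can be sketched as a toy in-memory structure. This is a hypothetical illustration only, not the Vertex AI API; the class names are prefixed with "Toy" to avoid shadowing the real aiplatform types used elsewhere in this notebook:

```python
# Toy in-memory sketch (hypothetical; not the Vertex AI API) of how
# Artifacts, Executions, Events, and Contexts form a lineage graph.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ToyArtifact:
    name: str
    uri: str = ""

@dataclass
class ToyEvent:
    artifact: ToyArtifact
    type_: str  # "INPUT" or "OUTPUT"

@dataclass
class ToyExecution:
    name: str
    events: List[ToyEvent] = field(default_factory=list)

@dataclass
class ToyContext:
    name: str
    artifacts: List[ToyArtifact] = field(default_factory=list)
    executions: List[ToyExecution] = field(default_factory=list)

def lineage(artifact: ToyArtifact, executions: List[ToyExecution]) -> List[str]:
    """Names of executions that consumed or produced the given artifact."""
    return [ex.name for ex in executions
            if any(ev.artifact is artifact for ev in ex.events)]

# A dataset flows into a training step, which produces a model.
dataset = ToyArtifact("dataset", "gs://bucket/data.csv")
model_art = ToyArtifact("model", "gs://bucket/model.joblib")
train = ToyExecution("train", [ToyEvent(dataset, "INPUT"),
                               ToyEvent(model_art, "OUTPUT")])
ctx = ToyContext("experiment-1", [dataset, model_art], [train])
print(lineage(model_art, ctx.executions))  # ['train']
```

The real service records the same relationships for you, with the Events carrying the INPUT/OUTPUT direction that makes lineage queries possible.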
notebooks/community/ml_ops/stage4/get_started_with_vertex_ml_metadata.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
List metadata schemas When you create an Artifact, Execution or Context resource, you specify a schema that describes the corresponding metadata. The schemas must be pre-registered for your Metadatastore resource. You can get a list of all registered schemas, default and user defined, using the list_metadata_schemas() method, with the following parameters: parent: The fully qualified resource identifier for the MetadataStore resource. Learn more about Metadata system schemas.
schemas = clients["metadata"].list_metadata_schemas(parent=metadata_store_id) for schema in schemas: print(schema)
notebooks/community/ml_ops/stage4/get_started_with_vertex_ml_metadata.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Create an Artifact resource You create an Artifact resource using the create_artifact() method, with the following parameters: parent: The fully qualified resource identifier to the Metadatastore resource. artifact: The definition of the Artifact resource display_name: The human readable name for the Artifact resource. uri: The uniform resource identifier of the artifact file. May be empty if there is no actual artifact file. labels: User defined labels to assign to the Artifact resource. schema_title: The title of the schema that describes the metadata. metadata: The metadata key/value pairs to associate with the Artifact resource. artifact_id: (optional) A user defined short ID for the Artifact resource.
from google.cloud.aiplatform_v1beta1.types import Artifact artifact_item = Artifact( display_name="my_example_artifact", uri="my_url", labels={"my_label": "value"}, schema_title="system.Artifact", metadata={"param": "value"}, ) artifact = clients["metadata"].create_artifact( parent=metadata_store_id, artifact=artifact_item, artifact_id="myartifactid", ) print(artifact)
notebooks/community/ml_ops/stage4/get_started_with_vertex_ml_metadata.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
List Artifact resources in a Metadatastore You can list all Artifact resources using the list_artifacts() method, with the following parameters: parent: The fully qualified resource identifier for the MetadataStore resource.
artifacts = clients["metadata"].list_artifacts(parent=metadata_store_id) for _artifact in artifacts: print(_artifact)
notebooks/community/ml_ops/stage4/get_started_with_vertex_ml_metadata.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Create an Execution resource You create an Execution resource using the create_execution() method, with the following parameters: parent: The fully qualified resource identifier to the Metadatastore resource. execution: display_name: A human readable name for the Execution resource. schema_title: The title of the schema that describes the metadata. metadata: The metadata key/value pairs to associate with the Execution resource. execution_id: (optional) A user defined short ID for the Execution resource.
from google.cloud.aiplatform_v1beta1.types import Execution execution = clients["metadata"].create_execution( parent=metadata_store_id, execution=Execution( display_name="my_execution", schema_title="system.CustomJobExecution", metadata={"value": "param"}, ), execution_id="myexecutionid", ) print(execution)
notebooks/community/ml_ops/stage4/get_started_with_vertex_ml_metadata.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
List Execution resources in a Metadatastore You can list all Execution resources using the list_executions() method, with the following parameters: parent: The fully qualified resource identifier for the MetadataStore resource.
executions = clients["metadata"].list_executions(parent=metadata_store_id) for _execution in executions: print(_execution)
notebooks/community/ml_ops/stage4/get_started_with_vertex_ml_metadata.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Create a Context resource You create a Context resource using the create_context() method, with the following parameters: parent: The fully qualified resource identifier to the Metadatastore resource. context: display_name: A human readable name for the Context resource. schema_title: The title of the schema that describes the metadata. labels: User defined labels to assign to the Context resource. metadata: The metadata key/value pairs to associate with the Context resource. context_id: (optional) A user defined short ID for the Context resource.
from google.cloud.aiplatform_v1beta1.types import Context context = clients["metadata"].create_context( parent=metadata_store_id, context=Context( display_name="my_context", labels={"my_label": "my_value"}, schema_title="system.Pipeline", metadata={"param": "value"}, ), context_id="mycontextid", ) print(context)
notebooks/community/ml_ops/stage4/get_started_with_vertex_ml_metadata.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
List Context resources in a Metadatastore You can list all Context resources using the list_contexts() method, with the following parameters: parent: The fully qualified resource identifier for the MetadataStore resource.
contexts = clients["metadata"].list_contexts(parent=metadata_store_id) for _context in contexts: print(_context)
notebooks/community/ml_ops/stage4/get_started_with_vertex_ml_metadata.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Add events to Execution resource An Execution resource consists of a sequence of events that occurred during the execution. Each event consists of an artifact that is either an input or an output of the Execution resource. You can add execution events to an Execution resource using the add_execution_events() method, with the following parameters: execution: The fully qualified resource identifier for the Execution resource. events: The sequence of events constituting the execution.
from google.cloud.aiplatform_v1beta1.types import Event clients["metadata"].add_execution_events( execution=execution.name, events=[ Event( artifact=artifact.name, type_=Event.Type.INPUT, labels={"my_label": "my_value"}, ) ], )
notebooks/community/ml_ops/stage4/get_started_with_vertex_ml_metadata.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Combine Artifacts and Executions into a Context A Context is used to group Artifact resources and Execution resources together under a single, queryable, and typed category. Contexts can be used to represent sets of metadata. You can combine a set of Artifact and Execution resources into a Context resource using the add_context_artifacts_and_executions() method, with the following parameters: context: The fully qualified resource identifier of the Context resource. artifacts: A list of fully qualified resource identifiers of the Artifact resources. executions: A list of fully qualified resource identifiers of the Execution resources.
clients["metadata"].add_context_artifacts_and_executions( context=context.name, artifacts=[artifact.name], executions=[execution.name] )
notebooks/community/ml_ops/stage4/get_started_with_vertex_ml_metadata.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Delete an Artifact resource You can delete an Artifact resource using the delete_artifact() method, with the following parameters: name: The fully qualified resource identifier for the Artifact resource.
clients["metadata"].delete_artifact(name=artifact.name)
notebooks/community/ml_ops/stage4/get_started_with_vertex_ml_metadata.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Delete an Execution resource You can delete an Execution resource using the delete_execution() method, with the following parameters: name: The fully qualified resource identifier for the Execution resource.
clients["metadata"].delete_execution(name=execution.name)
notebooks/community/ml_ops/stage4/get_started_with_vertex_ml_metadata.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Introduction to tracking ML Metadata in a Vertex AI Pipeline Vertex AI Pipelines automatically records the metrics and artifacts created when the pipeline is executed. You can then use the SDK to track and analyze the metrics and artifacts across pipeline runs.
from kfp.v2 import compiler, dsl from kfp.v2.dsl import (Artifact, Dataset, Input, Metrics, Model, Output, OutputPath, component, pipeline)
notebooks/community/ml_ops/stage4/get_started_with_vertex_ml_metadata.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Creating a 3-step pipeline with custom components First, you create a pipeline to run on Vertex AI Pipelines, consisting of the following custom components: get_dataframe: Retrieve data from a BigQuery table and convert it into a pandas DataFrame. sklearn_train: Use the pandas DataFrame to train and export a scikit-learn model, along with some metrics. deploy_model: Deploy the exported scikit-learn model to a Vertex AI Endpoint resource. get_dataframe component This component does the following: Creates a reference to a BigQuery table using the BigQuery client library Downloads the BigQuery table and converts it to a shuffled pandas DataFrame Exports the DataFrame to a CSV file sklearn_train component This component does the following: Imports a CSV as a pandas DataFrame Splits the DataFrame into train and test sets Trains a scikit-learn model Logs metrics from the model Saves the model artifacts as a local model.joblib file deploy_model component This component does the following: Uploads the scikit-learn model to a Vertex AI Model resource. Deploys the model to a Vertex AI Endpoint resource.
@component( packages_to_install=["google-cloud-bigquery", "pandas", "pyarrow"], base_image="python:3.9", output_component_file="create_dataset.yaml", ) def get_dataframe(bq_table: str, output_data_path: OutputPath("Dataset")): from google.cloud import bigquery bqclient = bigquery.Client() table = bigquery.TableReference.from_string(bq_table) rows = bqclient.list_rows(table) dataframe = rows.to_dataframe( create_bqstorage_client=True, ) dataframe = dataframe.sample(frac=1, random_state=2) dataframe.to_csv(output_data_path) @component( packages_to_install=["sklearn", "pandas", "joblib"], base_image="python:3.9", output_component_file="beans_model_component.yaml", ) def sklearn_train( dataset: Input[Dataset], metrics: Output[Metrics], model: Output[Model] ): import pandas as pd from joblib import dump from sklearn.model_selection import train_test_split from sklearn.tree import DecisionTreeClassifier df = pd.read_csv(dataset.path) labels = df.pop("Class").tolist() data = df.values.tolist() x_train, x_test, y_train, y_test = train_test_split(data, labels) skmodel = DecisionTreeClassifier() skmodel.fit(x_train, y_train) score = skmodel.score(x_test, y_test) print("accuracy is:", score) metrics.log_metric("accuracy", (score * 100.0)) metrics.log_metric("framework", "Scikit Learn") metrics.log_metric("dataset_size", len(df)) dump(skmodel, model.path + ".joblib") @component( packages_to_install=["google-cloud-aiplatform"], base_image="python:3.9", output_component_file="beans_deploy_component.yaml", ) def deploy_model( model: Input[Model], project: str, region: str, vertex_endpoint: Output[Artifact], vertex_model: Output[Model], ): from google.cloud import aiplatform aiplatform.init(project=project, location=region) deployed_model = aiplatform.Model.upload( display_name="beans-model-pipeline", artifact_uri=model.uri.replace("model", ""), serving_container_image_uri="us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.0-24:latest", ) endpoint = deployed_model.deploy(machine_type="n1-standard-4") # Save data to the output params vertex_endpoint.uri = endpoint.resource_name vertex_model.uri = deployed_model.resource_name
notebooks/community/ml_ops/stage4/get_started_with_vertex_ml_metadata.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Construct and compile the pipeline Next, construct the pipeline:
PIPELINE_ROOT = f"{BUCKET_URI}/pipeline_root/3step" @dsl.pipeline( # Default pipeline root. You can override it when submitting the pipeline. pipeline_root=PIPELINE_ROOT, # A name for the pipeline. name="mlmd-pipeline", ) def pipeline( bq_table: str = "", output_data_path: str = "data.csv", project: str = PROJECT_ID, region: str = REGION, ): dataset_task = get_dataframe(bq_table) model_task = sklearn_train(dataset_task.output) deploy_model(model=model_task.outputs["model"], project=project, region=region)
notebooks/community/ml_ops/stage4/get_started_with_vertex_ml_metadata.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Compile and execute two runs of the pipeline Next, you compile the pipeline and then run two separate instances of the pipeline. In the first instance, you train the model with a small version of the dataset and in the second instance you train it with a larger version of the dataset.
NOW = datetime.now().isoformat().replace(".", ":")[:-7]

compiler.Compiler().compile(pipeline_func=pipeline, package_path="mlmd_pipeline.json")

run1 = aip.PipelineJob(
    display_name="mlmd-pipeline",
    template_path="mlmd_pipeline.json",
    job_id="mlmd-pipeline-small-{}".format(TIMESTAMP),
    parameter_values={"bq_table": "sara-vertex-demos.beans_demo.small_dataset"},
    enable_caching=True,
)

run2 = aip.PipelineJob(
    display_name="mlmd-pipeline",
    template_path="mlmd_pipeline.json",
    job_id="mlmd-pipeline-large-{}".format(TIMESTAMP),
    parameter_values={"bq_table": "sara-vertex-demos.beans_demo.large_dataset"},
    enable_caching=True,
)

run1.run()
run2.run()

run1.delete()
run2.delete()

! rm -f mlmd_pipeline.json *.yaml
notebooks/community/ml_ops/stage4/get_started_with_vertex_ml_metadata.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Compare the pipeline runs Now that you have two completed pipeline runs, you can compare them. Use the get_pipeline_df() method to access the metadata from the runs; the mlmd-pipeline parameter here refers to the name you gave your pipeline. Alternatively, for guidance on inspecting pipeline artifacts and metadata in the Vertex AI Console, see this codelab.
df = aip.get_pipeline_df(pipeline="mlmd-pipeline")
print(df)
notebooks/community/ml_ops/stage4/get_started_with_vertex_ml_metadata.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
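The metadata returned by get_pipeline_df() is a regular pandas DataFrame, so it can be sliced and sorted like any other. A hypothetical sketch: the column names "metric.accuracy" and "metric.dataset_size" come from the metrics logged in the training component above, but the run names and values below are made up.

```python
import pandas as pd

# Mock of the metadata DataFrame; in practice this comes from
# aip.get_pipeline_df(pipeline="mlmd-pipeline").
df = pd.DataFrame({
    "run_name": ["mlmd-pipeline-small", "mlmd-pipeline-large"],
    "metric.dataset_size": [1000.0, 10000.0],
    "metric.accuracy": [88.0, 92.0],
})

# Pick the run with the highest logged accuracy.
best = df.sort_values("metric.accuracy", ascending=False).iloc[0]
```

Sorting on a logged metric like this is a quick way to find the best run without opening the console.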
Visualize the pipeline runs Next, you create a custom visualization with matplotlib to see the relationship between your model's accuracy and the amount of data used for training.
import matplotlib.pyplot as plt

plt.plot(df["metric.dataset_size"], df["metric.accuracy"], label="Accuracy")
plt.title("Accuracy and dataset size")
plt.legend(loc=4)
plt.show()
notebooks/community/ml_ops/stage4/get_started_with_vertex_ml_metadata.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Querying your Metadatastore resource Finally, you query your Metadatastore resource by specifying a filter parameter when calling the list_artifacts() method.
FILTER = f'create_time >= "{NOW}" AND state = LIVE'

artifact_req = {
    "parent": metadata_store_id,
    "filter": FILTER,
}

artifacts = clients["metadata"].list_artifacts(artifact_req)

for _artifact in artifacts:
    print(_artifact)
    clients["metadata"].delete_artifact(name=_artifact.name)
notebooks/community/ml_ops/stage4/get_started_with_vertex_ml_metadata.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Delete a MetadataStore resource You can delete a MetadataStore resource using the delete_metadata_store() method, with the following parameters: name: The fully qualified resource identifier for the MetadataStore resource.
clients["metadata"].delete_metadata_store(name=metadata_store_id)
notebooks/community/ml_ops/stage4/get_started_with_vertex_ml_metadata.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
For practical reasons, we plot the excess shut-time probability densities in the graph below. In all other particulars, it should reproduce Fig. 10 from Hawkes, Jalali and Colquhoun (1992).
from dcprogs.likelihood import missed_events_pdf

fig = plt.figure(figsize=(12, 9))

ax = fig.add_subplot(2, 2, 1)
x = np.arange(0, 10, tau / 100)
pdf = missed_events_pdf(qmatrix, 0.2, nmax=2, shut=True)
ax.plot(x, pdf(x), '-k')
ax.set_xlabel('time $t$ (ms)')
ax.set_ylabel('Shut-time probability density $f_{\\bar{\\tau}=0.2}(t)$')

ax = fig.add_subplot(2, 2, 2)
ax.set_xlabel('time $t$ (ms)')
tau = 0.2
x, x0 = np.arange(0, 5 * tau, tau / 10.0), np.arange(0, 5 * tau, tau / 100)
plot_exponentials(qmatrix, tau, shut=True, ax=ax, x=x, x0=x0)
ax.set_ylabel('Excess shut-time probability density $f_{{\\bar{{\\tau}}={tau}}}(t)$'.format(tau=tau))
ax.set_xlabel('time $t$ (ms)')
ax.yaxis.tick_right()
ax.yaxis.set_label_position("right")

ax = fig.add_subplot(2, 2, 3)
tau = 0.05
x, x0 = np.arange(0, 5 * tau, tau / 10.0), np.arange(0, 5 * tau, tau / 100)
plot_exponentials(qmatrix, tau, shut=True, ax=ax, x=x, x0=x0)
ax.set_ylabel('Excess shut-time probability density $f_{{\\bar{{\\tau}}={tau}}}(t)$'.format(tau=tau))
ax.set_xlabel('time $t$ (ms)')

ax = fig.add_subplot(2, 2, 4)
tau = 0.5
x, x0 = np.arange(0, 5 * tau, tau / 10.0), np.arange(0, 5 * tau, tau / 100)
plot_exponentials(qmatrix, tau, shut=True, ax=ax, x=x, x0=x0)
ax.set_ylabel('Excess shut-time probability density $f_{{\\bar{{\\tau}}={tau}}}(t)$'.format(tau=tau))
ax.set_xlabel('time $t$ (ms)')
ax.yaxis.tick_right()
ax.yaxis.set_label_position("right")

# fig.subplots_adjust(wspace=0.1)
fig.tight_layout()

from dcprogs.likelihood import DeterminantEq, find_root_intervals, find_lower_bound_for_roots
from numpy.linalg import eig

tau = 0.5
determinant = DeterminantEq(qmatrix, tau).transpose()
# print find_lower_bound_for_roots(determinant)
x = np.arange(-100, -3, 0.1)
# plot(x, determinant(x))

matrix = qmatrix.transpose()
qaffa = np.array(np.dot(matrix.af, matrix.fa), dtype='f16')
aa = np.array(matrix.aa, dtype='f16')

def anaH(s):
    from numpy.linalg import det
    from numpy import identity, exp
    arg0 = 1e0 / np.array(-2 - s, dtype='f16')
    arg1 = np.array(-(2 + s) * tau, dtype='f16')
    return qaffa * (exp(arg1) - np.array(1e0, dtype='f16')) * arg0 + aa

def anadet(s):
    from numpy.linalg import det
    from numpy import identity, exp
    s = np.array(s, dtype='f16')
    matrix = s * identity(qaffa.shape[0], dtype='f16') - anaH(s)
    return matrix[0, 0] * matrix[1, 1] * matrix[2, 2] \
         + matrix[1, 0] * matrix[2, 1] * matrix[0, 2] \
         + matrix[0, 1] * matrix[1, 2] * matrix[2, 0] \
         - matrix[2, 0] * matrix[1, 1] * matrix[0, 2] \
         - matrix[1, 0] * matrix[0, 1] * matrix[2, 2] \
         - matrix[2, 1] * matrix[1, 2] * matrix[0, 0]

x = np.arange(-100, -3, 1e-2)
print(find_lower_bound_for_roots(determinant))
print(eig(np.array(anaH(-160), dtype='float64'))[0])
print(anadet(-104))
# plot(x, [anadet(u) for u in x])
exploration/CB.ipynb
DCPROGS/HJCFIT
gpl-3.0
Possible Solution
def neighbour_squares(x, y, num_rows, num_cols):
    """
    (x, y): 0-based index co-ordinate pair.
    num_rows, num_cols: specify the max size of the board

    returns all valid (x, y) coordinates from the starting position.
    """
    offsets = [(-1, -1), (-1, 0), (-1, 1),
               ( 0, -1),          ( 0, 1),
               ( 1, -1), ( 1, 0), ( 1, 1)]
    result = []
    for x2, y2 in offsets:
        px = x + x2
        py = y + y2
        row_check = 0 <= px < num_rows
        col_check = 0 <= py < num_cols
        if row_check and col_check:
            point = (py, px)
            result.append(point)
    return result

def count_bombs(x, y, board):
    """
    returns the number of neighbours of (x, y) that are bombs.
    Max is 8, min is 0.
    """
    num_rows = len(board[0])
    num_cols = len(board)
    squares = neighbour_squares(x, y, num_rows, num_cols)
    bombs_found = 0
    for px, py in squares:
        if board[px][py] == "B":
            bombs_found += 1
    return bombs_found

# Testing...
test_neighbour_squares()
test_count_bombs()
A Beginners Guide to Python/Final Project (Minesweeper)/_04. Getting the Neighbours(HW).ipynb
fluffy-hamster/A-Beginners-Guide-to-Python
mit
Explanation Neighbour Squares 'neighbour_squares' takes an (x, y) co-ordinate pair and returns the neighbours of that square. Often a square has eight neighbours (up left, up right, below, below right, etc.), but squares on the edge of the board have fewer. The purpose of "row_check" and "col_check" is to help avoid possible index errors later on, and also to avoid a possible bug where the index is negative. For example:

    [1,2,3]
    [4,5,6]
    [7,8,9]

square(3) has an index of (0, 2). If we subtract 1 from both indices we get (-1, 1). And in Python the index (-1, 1) is equivalent to (2, 1), thus we end up saying the neighbour of square(3) is square(8). This is only desirable in cases where we want the edges of the board to 'wrap round'.

Carving this logic out into its own function (as opposed to doing everything in the count_bombs function) allows us to be flexible, and this code can be reused (notice that "neighbour_squares" doesn't even take a board as an argument).

Count Bombs The 'count_bombs' function takes an (x, y) position and a board. It then returns how many neighbour squares are in fact bombs. This function can, however, be improved. You may remember me saying in the design lecture that it's usually better to give a function a name that describes what it does, not what it is used for. This function's name ("count_bombs") makes this mistake. To fix this problem, we have to ask ourselves: "What is this function actually doing?" Here's what it is not doing: counting bombs. A bomb in our current implementation is simply the string "B". So what this function is actually doing is counting how many occurrences of the string "B" there are. That's not a very useful function. A much more useful function would count any arbitrary character. So let's change the function name and signature to better reflect this new understanding:

    def count_occurence_of_character_in_neighbour_squares(x, y, array, character):
        ...
And that's a function that we could easily use in a chess game:

    ENEMY_PIECE = "P"
    chess_board = [["_", "P", "_"],
                   ["k", "P", "_"]]

    def can_king_capture_piece(king_position_x, king_position_y, chess_board):
        possible_captures = count_occurence_of_character_in_neighbour_squares(king_position_x, king_position_y, chess_board, ENEMY_PIECE)
        return possible_captures > 0

And it is also a function we could use in a sudoku game to check whether a 3x3 grid is correct (or not):

    def is_3x3_correct(mid_point_x, mid_point_y, grid):
        for number in range(1, 10):
            if count_occurence_of_character_in_neighbour_squares(mid_point_x, mid_point_y, grid, str(number)) != 1:
                return False  # number is either missing or there are duplicates
        return True

Hopefully those two additional examples illustrate how writing flexible code is useful. By thinking about what the function does (as opposed to concentrating on how we use it) we were able to make the code significantly more flexible with minimal work. And it doesn't take much effort to imagine scenarios where we might want to use such a function.

The final improvement worth mentioning is that in the previous homework we actually wrote helper functions that can get the value at a given square. Since we already have that function, we may as well use it.
def get_square(x, y, board):
    """
    Takes a board and returns the value at square (x, y).
    """
    return board[x][y]

def count_occurence_of_character_in_neighbour_squares(x, y, board, character):
    """
    returns the number of neighbours of (x, y) that are equal to 'character'.
    Max is 8, min is 0.
    """
    num_rows = len(board[0])
    num_cols = len(board)
    squares = neighbour_squares(x, y, num_rows, num_cols)
    character_found = 0
    for px, py in squares:
        square_value = get_square(px, py, board)
        if square_value == character:
            character_found += 1
    return character_found
A Beginners Guide to Python/Final Project (Minesweeper)/_04. Getting the Neighbours(HW).ipynb
fluffy-hamster/A-Beginners-Guide-to-Python
mit
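As a rough, self-contained illustration of how these helpers fit together, here is a compact sketch. It uses a simplified convention (x indexes rows, y indexes columns, with no wrap-around) and a made-up board, so it is not the homework's exact solution:

```python
# Compact re-implementation of the two helpers, for illustration only.
def neighbour_squares(x, y, num_rows, num_cols):
    offsets = [(-1, -1), (-1, 0), (-1, 1),
               ( 0, -1),          ( 0, 1),
               ( 1, -1), ( 1, 0), ( 1, 1)]
    # Keep only offsets that stay inside the board.
    return [(x + dx, y + dy) for dx, dy in offsets
            if 0 <= x + dx < num_rows and 0 <= y + dy < num_cols]

def count_occurence_of_character_in_neighbour_squares(x, y, board, character):
    neighbours = neighbour_squares(x, y, len(board), len(board[0]))
    return sum(1 for px, py in neighbours if board[px][py] == character)

# A made-up 3x3 board with two bombs.
board = [["B", "_", "_"],
         ["_", "_", "B"],
         ["_", "_", "_"]]
```

Calling the counting function on the centre square of this board finds both bombs, while a corner square far from them finds none.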
Basic objects The basic object for the panel method can be a 2d or 3d "computation case" (Case2, Case3). It is given the geometry, which consists of several surfaces (Panel2, Panel3). These are planar surfaces (or, in the 2d case, lines) formed from several points (PanelVector2, PanelVector3), which in turn are vectors (Vector2, Vector3) carrying additional parameters. Vector2 & Vector3 Vector2 and Vector3 represent Eigen::Vector2d and Eigen::Vector3d in Python. They provide a number of helpful functions, some of which are shown here:
from __future__ import division # enable python3 division v1 = paraBEM.Vector2(0, -1) v2 = paraBEM.PanelVector2(0, 1) v3 = paraBEM.Vector3(0, 0, 1) v4 = paraBEM.PanelVector3(1, 0, 1)
doc/tutorial/tutorial.ipynb
looooo/paraBEM
gpl-3.0
2d $\longleftrightarrow$ 3d It is possible to convert a 2d vector into a 3d vector, and vice versa. The operators, however, can only ever be applied to vectors of the same dimension.
print(paraBEM.Vector3(v1)) print(paraBEM.Vector2(v3))
doc/tutorial/tutorial.ipynb
looooo/paraBEM
gpl-3.0
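A hypothetical sketch of what this conversion presumably does, written with plain numpy: a 2d vector is embedded into the z = 0 plane, and a 3d vector is projected by dropping its z component. This mirrors the printed output above but is not paraBEM's actual implementation.

```python
import numpy as np

def to_3d(v2):
    # Embed a 2d vector into 3d space at z = 0.
    return np.array([v2[0], v2[1], 0.0])

def to_2d(v3):
    # Project a 3d vector onto the x-y plane by dropping z.
    return np.array([v3[0], v3[1]])
```

With v1 = (0, -1) and v3 = (0, 0, 1) from above, this reproduces the (0, -1, 0) and (0, 0) results.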
Panel2 A Panel2 consists of 2 points of the PanelVector2 class. Some properties that are used frequently in the panel method are stored directly as attributes of the panel. These are, for example, the length l, the orientation n, t and the midpoint center.
l = [paraBEM.PanelVector2(1, 2), paraBEM.PanelVector2(3, 4)] p = paraBEM.Panel2(l) p.l, p.t, p.n, p.center
doc/tutorial/tutorial.ipynb
looooo/paraBEM
gpl-3.0
Panel3 A Panel3 consists of 3 or more points of the PanelVector3 class. Some properties that are used frequently in the panel method are stored directly as attributes of the panel. These are, for example, the area, the orientation n, m, l and the midpoint center.
l = [paraBEM.PanelVector3(1, 2, 0), paraBEM.PanelVector3(3, 4, 1), paraBEM.PanelVector3(0, -1, 0)] p = paraBEM.Panel3(l) p.area, p.n, p.l, p.m, p.center
doc/tutorial/tutorial.ipynb
looooo/paraBEM
gpl-3.0
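For a triangular panel like the one above, these attributes can be computed with the standard triangle formulas. The sketch below uses plain numpy and the same three points; the formulas are textbook geometry, not paraBEM's code.

```python
import numpy as np

# The three panel points from the example above.
pts = np.array([[1.0, 2.0, 0.0],
                [3.0, 4.0, 1.0],
                [0.0, -1.0, 0.0]])

# Cross product of two edge vectors gives a vector normal to the panel
# whose length is twice the triangle area.
cross = np.cross(pts[1] - pts[0], pts[2] - pts[0])
area = 0.5 * np.linalg.norm(cross)   # triangle area
n = cross / np.linalg.norm(cross)    # unit normal
center = pts.mean(axis=0)            # centroid
```

The normal is perpendicular to every edge of the panel, which is easy to verify with a dot product.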
Influence functions The influence functions are the kernel functions of the panel method. They are all solutions of the Laplace equation and are divided into potential and velocity functions (v). The first two arguments of these functions are the target point (target) and the perturbing object (source). The influence functions can be loaded from the pan2d or pan3d package.
SVG(filename='tutorial_files/kernfunktionen_bezeichnung.svg') import paraBEM.pan2d as pan2d target = paraBEM.Vector2(1, 1) source_point = paraBEM.PanelVector2(-1, 0) source_point_1 = paraBEM.PanelVector2(1, 0) source_panel = paraBEM.Panel2([source_point, source_point_1]) print(pan2d.source_2(target, source_point)) print(pan2d.doublet_2(target, source_point, paraBEM.Vector2(1, 0))) print(pan2d.doublet_2_0(target, source_panel)) print(pan2d.doublet_2_0_v(target, source_panel))
doc/tutorial/tutorial.ipynb
looooo/paraBEM
gpl-3.0
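The 2d point kernels have simple closed forms; the sketch below shows the textbook expressions for a unit-strength source and doublet. Sign and normalisation conventions vary between texts, so treat these as illustrative, not as paraBEM's exact definitions.

```python
import numpy as np

def source_2(target, source):
    # Potential of a unit-strength 2d point source: ln(r) / (2*pi).
    r = np.asarray(target, float) - np.asarray(source, float)
    return np.log(np.linalg.norm(r)) / (2.0 * np.pi)

def doublet_2(target, source, direction):
    # Potential of a unit-strength 2d doublet oriented along 'direction'.
    r = np.asarray(target, float) - np.asarray(source, float)
    return -np.dot(direction, r) / (2.0 * np.pi * np.dot(r, r))
```

Both kernels satisfy the Laplace equation away from the singularity; the source potential vanishes at unit distance, and the doublet potential vanishes perpendicular to its axis.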
Creating the panels from the geometry data (paraBEM)
points = [paraBEM.PanelVector2(x, y) for x, y in xy] points += [points[0]] panels = [paraBEM.Panel2([point, points[i+1]]) for i, point in enumerate(points[:-1])]
doc/tutorial/tutorial.ipynb
looooo/paraBEM
gpl-3.0
Creating a Case
case = pan2d.NeumannDoublet0Case2(panels) case.v_inf = paraBEM.Vector2(1, 0) case.run()
doc/tutorial/tutorial.ipynb
looooo/paraBEM
gpl-3.0
Visualization of the computed values
print("lift: ", case.cl) # kein Auftrieb, weil kein Nachlauf definiert ist plt.plot([p.cp for p in panels], c="g") plt.ylabel("$cp$") plt.xlabel("$nr$") plt.show() nx = 200 ny = 200 space_x = np.linspace(-2, 2, nx) space_y = np.linspace(-2, 2, ny) grid = [paraBEM.Vector2(x, y) for y in space_y for x in space_x] velocity = list(map(case.off_body_velocity, grid)) pot = list(map(case.off_body_potential, grid)) file_name = check_path("/tmp/paraBEM_results/cylinder/field.vtk") with open(file_name, "w") as _file: writer = VtkWriter() writer.structed_grid(_file, "cylinder", [nx, ny, 1]) writer.points(_file, grid) writer.data(_file, velocity, name="velocity", _type="VECTORS", data_type="POINT_DATA") writer.data(_file, pot, name="pot", _type="SCALARS", data_type="POINT_DATA") paraview(file_name)
doc/tutorial/tutorial.ipynb
looooo/paraBEM
gpl-3.0
Flow around a cylinder by superposition of a doublet with a parallel flow
from paraBEM.pan2d import doublet_2, doublet_2_v, vortex_2, vortex_2_v source = paraBEM.Vector2(0, 0) # center of the circle def cylinder_field(target, circulation=0, r=1, v_inf=paraBEM.Vector2(1, 0)): direction = paraBEM.Vector2(-1, 0) # direction of doublet (-v_inf) mu = v_inf.norm() * 2 * np.pi * r**2 # solve mu * doublet_2_v(t, s) + v_v_inf == 0 return ( # potential influence mu * doublet_2(target, source, -v_inf) + v_inf.dot(target) + vortex_2(target, source, direction) * circulation, # velocity influence mu * doublet_2_v(target, source, -v_inf) + v_inf + vortex_2_v(target, source) * circulation ) def cp(velocity, v_inf=paraBEM.Vector2(1, 0)): return 1 - velocity.dot(velocity) / v_inf.dot(v_inf)
doc/tutorial/tutorial.ipynb
looooo/paraBEM
gpl-3.0
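The superposition above has a well-known closed form on the cylinder surface: for zero circulation the tangential speed is 2·U·sin(θ), so cp = 1 − 4·sin²(θ), with stagnation points (cp = 1) at θ = 0 and π and a minimum of cp = −3 on the sides. A quick numerical check of that textbook result (independent of paraBEM):

```python
import numpy as np

U = 1.0
theta = np.linspace(0.0, 2.0 * np.pi, 361)

# Surface speed and pressure coefficient for potential flow past a cylinder.
u_surf = 2.0 * U * np.sin(theta)
cp_surf = 1.0 - (u_surf / U) ** 2
```

This is the analytic curve that the plotted cp distribution should reproduce.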
Pressure distribution on the surface
phi = np.linspace(0, np.pi * 2, 100) x = list(np.cos(phi) + source.x) y = list(np.sin(phi) + source.y) xy = list(zip(x, y)) pot, vel = zip(*[cylinder_field(paraBEM.Vector2(xi, yi)) for xi, yi in xy]) _cp = list(map(cp, vel)) vel = [v.norm() for v in vel] plt.axes().set_aspect("equal", "datalim") plt.grid() plt.plot(x, y, label="surface", color="black") plt.plot(x, pot, label="$\\frac{\phi}{r_K u_{\infty}}$", color="b") plt.plot(x, vel, label="$\\frac{u}{u_{\infty}}$", color="r") plt.plot(x, _cp, label="$cp$", color="g") plt.xlabel("$x$") plt.xlim(-2, 3) plt.legend() plt.show()
doc/tutorial/tutorial.ipynb
looooo/paraBEM
gpl-3.0
Potential and velocity distribution in space
x_grid = np.linspace(-2, 2, 100) y_grid = np.linspace(-2, 2, 100) grid = [paraBEM.Vector2(x, y) for x in x_grid for y in y_grid] pot, vel = zip(*[cylinder_field(point) for point in grid]) writer = VtkWriter() filename = check_path("/tmp/paraBEM_results/cylinder.vtk") with open(filename, "w") as _file: writer = VtkWriter() writer.structed_grid(_file, "z_plane", [100, 100, 1]) writer.points(_file, grid) writer.data(_file, pot, name="potential", _type="SCALARS", data_type="POINT_DATA") writer.data(_file, vel, name="velocity", _type="VECTORS", data_type="POINT_DATA") paraview(filename)
doc/tutorial/tutorial.ipynb
looooo/paraBEM
gpl-3.0
Joukowsky airfoil - conformal mapping Pressure distribution on the surface of a Joukowsky airfoil by means of conformal mapping.
from paraBEM.airfoil.conformal_mapping import JoukowskyAirfoil airfoil = JoukowskyAirfoil(midpoint=-0.1 + 0.05j) alpha = np.deg2rad(3) vel = airfoil.surface_velocity(alpha, num=70) vel = np.sqrt(vel.imag ** 2 + vel.real ** 2) cp = airfoil.surface_cp(alpha, num=100) coordinates = airfoil.coordinates(100) plt.grid() plt.axes().set_aspect("equal", "datalim") plt.plot(coordinates.real, coordinates.imag, label="joukowsky -0.1 + 0.05j", c="black", marker="x") plt.plot(coordinates.real, cp, label="$cp$", c="g") plt.legend() plt.xlabel("$x$") plt.show()
doc/tutorial/tutorial.ipynb
looooo/paraBEM
gpl-3.0
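The mapping underlying JoukowskyAirfoil is the Joukowsky transform z = ζ + 1/ζ (with unit reference radius). A minimal sketch: the unit circle centred at the origin maps onto the flat slit [−2, 2], while an offset centre such as the −0.1 + 0.05j used above (with the circle still passing through ζ = 1, the sharp trailing edge) produces a cambered airfoil-like shape. The radius formula below is a standard construction, not necessarily the library's internals.

```python
import numpy as np

def joukowsky(zeta):
    # Joukowsky transform with a = 1.
    return zeta + 1.0 / zeta

phi = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)

# Unit circle -> flat plate on the real axis from -2 to 2.
plate = joukowsky(np.exp(1j * phi))

# Offset circle through zeta = 1 -> airfoil-like contour.
midpoint = -0.1 + 0.05j
r = abs(1.0 - midpoint)
airfoil = joukowsky(midpoint + r * np.exp(1j * phi))
```

The sharp trailing edge comes from the circle passing through ζ = 1, where the transform's derivative vanishes and z = 2.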
Joukowsky airfoil - 2D panel method Pressure distribution on the surface
# translate the complex coordinates to (x, y) coordinates coordiantes = list(zip( airfoil.coordinates(num=70).real, airfoil.coordinates(num=70).imag)) vertices = [paraBEM.PanelVector2(*v) for v in coordiantes[:-1]] vertices[0].wake_vertex = True panels = [paraBEM.Panel2([vertices[i], vertices[i + 1]]) for i in range(len(vertices[:-1]))] panels.append(paraBEM.Panel2([vertices[-1], vertices[0]])) case = pan2d.DirichletDoublet0Source0Case2(panels) case.v_inf = paraBEM.Vector2(np.cos(alpha), np.sin(alpha)) case.run() pan_center_x = [panel.center.x for panel in panels] pan_vel = [panel.velocity.norm() for panel in panels] pan_cp = [panel.cp for panel in panels] plt.grid() plt.axes().set_aspect("equal", "datalim") plt.plot(coordinates.real, coordinates.imag, label="joukowsky -0.1 + 0.05j", c="black", marker="x") plt.plot(pan_center_x, pan_cp, label="$cp$", c="g") plt.legend() plt.xlabel("$x$") plt.show()
doc/tutorial/tutorial.ipynb
looooo/paraBEM
gpl-3.0
Potential and velocity distribution in space
nx = 200 ny = 200 space_x = np.linspace(-3, 3, nx) space_y = np.linspace(-1, 1, ny) grid = [paraBEM.Vector2(x, y) for y in space_y for x in space_x] velocity = list(map(case.off_body_velocity, grid)) pot = list(map(case.off_body_potential, grid)) file_name = check_path("/tmp/paraBEM_results/airfoil_2d_linear/field.vtk") with open(file_name, "w") as _file: writer = VtkWriter() writer.structed_grid(_file, "airfoil", [nx, ny, 1]) writer.points(_file, grid) writer.data(_file, velocity, name="velocity", _type="VECTORS", data_type="POINT_DATA") writer.data(_file, pot, name="pot", _type="SCALARS", data_type="POINT_DATA") paraview(file_name)
doc/tutorial/tutorial.ipynb
looooo/paraBEM
gpl-3.0
Flow around a sphere Superposition of a doublet and a parallel flow
from paraBEM.pan3d import doublet_3, doublet_3_v def sphere_field(target, r=1, v_inf=paraBEM.Vector3(1, 0, 0)): source = paraBEM.Vector3(0, 0, 0) mu = v_inf.norm() * np.pi * r**3 * 2 return ( mu * doublet_3(target, source, -v_inf) + v_inf.dot(target), mu * doublet_3_v(target, source, -v_inf) + v_inf ) def cp_(velocity, v_inf=paraBEM.Vector3(1, 0, 0)): return 1 - velocity.dot(velocity) / v_inf.dot(v_inf)
doc/tutorial/tutorial.ipynb
looooo/paraBEM
gpl-3.0
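As with the cylinder, the sphere superposition has a closed-form surface solution: the surface speed is (3/2)·U·sin(θ), so cp = 1 − (9/4)·sin²(θ) with a minimum of −1.25 at the equator. A quick check of that textbook result (independent of paraBEM):

```python
import numpy as np

U = 1.0
theta = np.linspace(0.0, np.pi, 181)

# Surface speed and pressure coefficient for potential flow past a sphere.
u_surf = 1.5 * U * np.sin(theta)
cp_surf = 1.0 - (u_surf / U) ** 2
```

Note that the 3d suction peak (−1.25) is much weaker than the 2d cylinder's (−3), because the flow can also pass around the sides of the sphere.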
Potential and velocity distribution in space
phi = np.linspace(0, np.pi * 2, 300) x = np.cos(phi) y = np.sin(phi) pot, vel = zip(*[sphere_field( paraBEM.Vector3( np.cos(p), np.sin(p), 0)) for p in phi]) cp = list(map(cp_, vel)) vel = [v.norm() for v in vel] plt.plot(x, y, label="surface", color="black") plt.plot(x, pot, label="$\\frac{\phi}{r_K u_{\infty}}$", color="b") plt.plot(x, vel, label="$\\frac{u}{u_{\infty}}$", color="r") plt.plot(x, cp, label="$cp$", color="g") plt.grid() plt.axes().set_aspect("equal", "datalim") plt.xlabel("$x$") plt.legend() plt.show() nx, ny, nz = 30, 30, 30 x_grid = np.linspace(-2, 2, nx) y_grid = np.linspace(-2, 2, ny) z_grid = np.linspace(-2, 2, nz) grid = [paraBEM.Vector3(x, y, z)for z in z_grid for y in y_grid for x in x_grid] pot, vel = zip(*[sphere_field(point) for point in grid]) writer = VtkWriter() filename = check_path("/tmp/paraBEM_results/sphere.vtk") with open(filename, "w") as _file: writer = VtkWriter() writer.structed_grid(_file, "points", [nx, ny, nz]) writer.points(_file, grid) writer.data(_file, pot, name="potential", _type="SCALARS", data_type="POINT_DATA") writer.data(_file, vel, name="velocity", _type="VECTORS", data_type="POINT_DATA") paraview(filename)
doc/tutorial/tutorial.ipynb
looooo/paraBEM
gpl-3.0
3D panel method: sphere
from paraBEM.mesh import mesh_object from paraBEM.vtk_export import CaseToVTK # create panels from mesh mesh = mesh_object.from_OBJ("../../examples/mesh/sphere_low_tri.obj") # create case from panels case = pan3d.DirichletDoublet0Case3(mesh.panels) # set boundary conditon far away from the body case.v_inf = paraBEM.Vector3(1, 0, 0.) # solve for constant potential inside the object case.run() center_x, surf_vel, surf_cp, surf_pot = [], [], [], [] for panel in case.panels: center_x.append(panel.center.x) surf_vel.append(panel.velocity.norm()) surf_cp.append(panel.cp) surf_pot.append(panel.potential) phi = np.linspace(0, np.pi * 2, 300) x = np.cos(phi) y = np.sin(phi) plt.plot(x, y, label="surface", color="black") plt.scatter(center_x, surf_pot, marker=4, color="b", label="$\\frac{\phi}{r_K u_{\infty}}$") plt.scatter(center_x, surf_vel, marker="+", color="r", label="$\\frac{u}{u_{\infty}}$") plt.scatter(center_x, surf_cp, marker="*", color="g", label="$cp$") plt.grid() plt.axes().set_aspect("equal", "datalim") plt.xlabel("$x$", fontsize=15) plt.legend() plt.show() lin = np.linspace(-0.5, 0.5, 5) grid = [[-2, k, j] for j in lin for k in lin] file_name = "/tmp/paraBEM_results/sphere_case" vtk_writer = CaseToVTK(case, file_name) vtk_writer.write_panels(data_type="cell") vtk_writer.write_field([-2, 2, 20], [-2, 2, 20], [-2, 2, 20]) paraview(file_name + "/panels.vtk")
doc/tutorial/tutorial.ipynb
looooo/paraBEM
gpl-3.0
Flow around a paraglider Creating the "Case"
from openglider.jsonify import load from openglider.utils.distribution import Distribution from openglider.glider.in_out.export_3d import paraBEM_Panels from paraBEM.utils import v_inf_deg_range3 # load glider file_name = "../../examples/openglider/glider/referenz_schirm_berg.json" with open(file_name) as _file: parGlider = load(_file)["data"] parGlider.shape.set_const_cell_dist() glider = parGlider.get_glider_3d() # create the panels and get the trailing edge _, panels, trailing_edge = paraBEM_Panels( glider, midribs=0, profile_numpoints=50, distribution=Distribution.nose_cos_distribution(0.2), num_average=0, symmetric=True) # setup the case with panels and trailing edge case = pan3d.DirichletDoublet0Source0Case3(panels, trailing_edge) # set the boundarycondition far away from the wing case.v_inf = paraBEM.Vector3(*parGlider.v_inf) # create the wake (length, number of wake panels per column) case.create_wake(length=10000, count=10) # set reference values case.mom_ref_point = paraBEM.Vector3(1.25, 0, -5) case.A_ref = parGlider.shape.area # set farfield-factor case.farfield = 5 # chooce between "on_body" or "trefftz" case.drag_calc = "on_body" # if "trefftz" is used, this point represents the position of the trefftz-plane case.trefftz_cut_pos = case.v_inf * 100 # run the case with fixed wake for different aoa (v_inf) polars = case.polars(v_inf_deg_range3(case.v_inf, -15, 15, 10))
doc/tutorial/tutorial.ipynb
looooo/paraBEM
gpl-3.0
Visualization
file_name = "/tmp/paraBEM_results/vtk_glider_case" vtk_writer = CaseToVTK(case, file_name) vtk_writer.write_panels(data_type="cell") vtk_writer.write_wake_panels() vtk_writer.write_body_stream(panels, 100) paraview(file_name + "/panels.vtk")
doc/tutorial/tutorial.ipynb
looooo/paraBEM
gpl-3.0
First, we need to define the dataset name and the variables we want to use.
dh=datahub.datahub(server,version,API_key) dataset='nasa_merra2_global_v2' variable_names = 'T2MMEAN,T2MMAX,T2MMIN' time_start = '1980-01-01T00:00:00' area_name = 'Bering_Strait'
api-examples/arctic_temperature.ipynb
planet-os/notebooks
mit
In this part we define the RBSN dataset key, as we would like to use observational data from stations as well. We also define the station ID; you can see the different station IDs on the RBSN detail page map. Here we choose a station near the Bering Strait. The only requirement is that the station should be located somewhere near the Arctic; however, the graphs work for other places as well.
dataset1 = 'noaa_rbsn_timeseries'
station = '25399'
time_start_synop = '2019-01-01T00:00:00'
time_end = '2019-02-28T23:00:00'  # datetime.datetime.strftime(datetime.datetime.now(), '%Y-%m-%dT%H:%M:%S')
variable = 'temperature'

link = 'https://api.planetos.com/v1/datasets/noaa_rbsn_timeseries/stations/{0}?origin=dataset-details&apikey={1}&count=1000&time_start={2}&time_end={3}&var={4},lat,lon'.format(station, API_key, time_start_synop, time_end, variable)
data = read_data_to_json(link)
api-examples/arctic_temperature.ipynb
planet-os/notebooks
mit
Now we read in the station data, and from it we take the longitude and latitude values so that we can get data for the same location from the MERRA-2 dataset as well.
time_synop = [datetime.datetime.strptime(n['axes']['time'],'%Y-%m-%dT%H:%M:%S') for n in data['entries']][:-54] temp_synop = [n['data']['temperature'] for n in data['entries']][:-54] latitude = data['entries'][0]['axes']['latitude'] longitude = data['entries'][0]['axes']['longitude']
api-examples/arctic_temperature.ipynb
planet-os/notebooks
mit
To start, we use Basemap to create a map of the Arctic region and mark the chosen location with a red dot.
plt.figure(figsize=(10,8)) m = Basemap(projection='npstere',boundinglat=60,lon_0=0,resolution='l') x,y = m(longitude,latitude) m.drawcoastlines() m.drawcountries() m.drawstates() m.drawparallels(np.arange(-80.,81.,20.)) m.drawmeridians(np.arange(-180.,181.,20.)) m.shadedrelief() m.scatter(x,y,50,marker='o',color='red',zorder=4) plt.show()
api-examples/arctic_temperature.ipynb
planet-os/notebooks
mit
Download the data with the package API

- Create package objects
- Send commands for the package creation
- Download the package files

Note that this package holds over 30 years of data, so downloading it might take some time.
package = package_api.package_api(dh,dataset,variable_names,longitude,longitude,latitude,latitude,time_start,time_end,area_name=area_name) package.make_package() package.download_package()
api-examples/arctic_temperature.ipynb
planet-os/notebooks
mit
Work with the downloaded files We start by opening the files with xarray. We also convert the temperatures from Kelvin to degrees Celsius.
dd1 = xr.open_dataset(package.local_file_name)
# Convert from Kelvin to degrees Celsius
dd1['T2MMEAN'] = dd1['T2MMEAN'] - 273.15
api-examples/arctic_temperature.ipynb
planet-os/notebooks
mit