Special functions for creating arrays NumPy has several built-in functions that can assist you in creating certain types of arrays: arange(), zeros(), and ones(). Of these, arange() is probably the most useful because it allows you to create an array of numbers by specifying the initial value in the array, the maximum ...
# Create a variable called b that is equal to a numpy array containing the numbers 1 through 5
b = np.arange(1,6,1)
print(b)

# Create a variable called c that is equal to a numpy array containing the numbers 0 through 10
c = np.arange(11)
print(c)
winter2017/econ129/python/Econ129_Class_03_Complete.ipynb
letsgoexploring/teaching
mit
The zeros() and ones() functions take as an argument the desired shape of the array to be returned and fill that array with either zeros or ones.
# Construct a 1x5 array of zeros
print(np.zeros(5))

# Construct a 2x2 array of ones
print(np.ones([2,2]))
Math with NumPy arrays A nice aspect of NumPy arrays is that they are optimized for mathematical operations. The standard Python arithmetic operators +, -, *, /, and ** operate element-wise on NumPy arrays, as the following examples indicate.
# Define three 1-dimensional arrays
A = np.array([2,4,6])
B = np.array([3,2,1])
C = np.array([-1,3,2,-4])

# Multiply A by a constant
print(3*A)

# Exponentiate A
print(A**2)

# Add A and B together
print(A+B)

# Exponentiate A with B
print(A**B)

# Add A and C together
print(A+C)
The error in the preceding example arises because addition is element-wise and A and C don't have the same shape.
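A minimal sketch (plain NumPy, with the same arrays as above) showing that element-wise addition requires matching shapes:

```python
import numpy as np

A = np.array([2, 4, 6])
C = np.array([-1, 3, 2, -4])

# A (3,) array and a (4,) array cannot be broadcast together,
# so element-wise addition raises a ValueError.
try:
    A + C
except ValueError:
    print('shapes', A.shape, 'and', C.shape, 'are incompatible')
```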
# Compute the sine of the values in A
print(np.sin(A))
Iterating through NumPy arrays NumPy arrays are iterable objects, just like lists, strings, tuples, and dictionaries, which means that you can use for loops to iterate through their elements.
# Use a for loop with a NumPy array to print the numbers 0 through 4
for x in np.arange(5):
    print(x)
Example: Basel problem One of my favorite math equations is: \begin{align} \sum_{n=1}^{\infty} \frac{1}{n^2} & = \frac{\pi^2}{6} \end{align} We can use an iteration through a NumPy array to approximate the left-hand side and verify the validity of the expression.
# Set N equal to the number of terms to sum
N = 1000

# Initialize a variable called summation equal to 0
summation = 0

# Loop over the numbers 1 through N
for n in np.arange(1,N+1):
    summation = summation + 1/n**2

# Print the approximation and the exact solution
print('approx:',summation)
print('exact: ',np.pi**2/6)
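The same sum can be computed without an explicit loop, since the arithmetic operators act element-wise (a vectorized sketch of the loop above):

```python
import numpy as np

N = 1000
# 1/n**2 for n = 1..N, summed in one vectorized step
approx = np.sum(1 / np.arange(1, N + 1)**2)
print('approx:', approx)
print('exact: ', np.pi**2 / 6)
```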
Simulate example dataset
# the true tree
tree = toytree.rtree.imbtree(ntips=10, treeheight=1e7)
tree.draw(ts='p');

# setup simulator
subst = {
    "state_frequencies": [0.3, 0.2, 0.3, 0.2],
    "kappa": 0.25,
    "gamma": 0.20,
    "gamma_categories": 4,
}
mod = ipcoal.Model(tree=tree, Ne=1e5, nsamples=2, mut=1e-8, substitution_model=subst)
...
testdocs/analysis/cookbook-raxml-ipcoal.ipynb
dereneaton/ipyrad
gpl-3.0
Infer an ML tree
# init raxml object with input data and (optional) parameter options
rax = ipa.raxml(data="/tmp/raxtest.phy", T=4, N=100)

# print the raxml command string for posterity
print(rax.command)

# run the command (options: block until finished; overwrite existing)
rax.run(block=True, force=True)
Draw the inferred tree After inferring a tree you can then visualize it in a notebook using toytree.
# load from the .trees attribute of the raxml object, or from the saved tree file
tre = toytree.tree(rax.trees.bipartitions)

# draw the tree
rtre = tre.root("r9")
rtre.draw(tip_labels_align=True, node_sizes=18, node_labels="support");
Setting parameters By default several parameters are pre-set in the raxml object. To remove those parameters from the command string you can set them to None. Additionally, you can build complex raxml command line strings by adding almost any parameter to the raxml object init, like below. You probably can't do everyth...
# init raxml object
rax = ipa.raxml(data="/tmp/raxtest.phy", T=4, N=10)

# parameter dictionary for a raxml object
rax.params

# paths to output files produced by raxml inference
rax.trees
Cookbook Most frequently used: perform 100 rapid bootstrap analyses followed by 10 rapid hill-climbing ML searches from random starting trees under the GTRGAMMA substitution model.
rax = ipa.raxml(
    data="/tmp/raxtest.phy",
    name="test-1",
    workdir="analysis-raxml",
    m="GTRGAMMA",
    T=8,
    f="a",
    N=50,
)
print(rax.command)
rax.run(force=True)
Another common option: Perform N rapid hill-climbing ML analyses from random starting trees, with no bootstrap replicates. Be sure to use the BestTree output from this analysis since it does not produce a bipartitions output file.
rax = ipa.raxml(
    data="/tmp/raxtest.phy",
    name="test-2",
    workdir="analysis-raxml",
    m="GTRGAMMA",
    T=8,
    f="d",
    N=10,
    x=None,
)
print(rax.command)
rax.run(force=True)
Check your files The .info and related log files will be stored in the workdir. Be sure to look at these for further details of your analyses.
! cat ./analysis-raxml/RAxML_info.test-1
! cat ./analysis-raxml/RAxML_info.test-2
DeepDream
import tensorflow as tf
import numpy as np
import matplotlib as mpl
import IPython.display as display
import PIL.Image
site/en/tutorials/generative/deepdream.ipynb
tensorflow/docs
apache-2.0
Choose an image to dream-ify For this tutorial, let's use an image of a labrador.
url = 'https://storage.googleapis.com/download.tensorflow.org/example_images/YellowLabradorLooking_new.jpg'

# Download an image and read it into a NumPy array.
def download(url, max_dim=None):
    name = url.split('/')[-1]
    image_path = tf.keras.utils.get_file(name, origin=url)
    img = PIL.Image.open(image_path)
    if m...
Prepare the feature extraction model Download and prepare a pre-trained image classification model. You will use InceptionV3 which is similar to the model originally used in DeepDream. Note that any pre-trained model will work, although you will have to adjust the layer names below if you change this.
base_model = tf.keras.applications.InceptionV3(include_top=False, weights='imagenet')
The idea in DeepDream is to choose a layer (or layers) and maximize the "loss" in a way that the image increasingly "excites" the layers. The complexity of the features incorporated depends on the layers chosen by you, i.e., lower layers produce strokes or simple patterns, while deeper layers give sophisticated features in ...
# Maximize the activations of these layers
names = ['mixed3', 'mixed5']
layers = [base_model.get_layer(name).output for name in names]

# Create the feature extraction model
dream_model = tf.keras.Model(inputs=base_model.input, outputs=layers)
Calculate loss The loss is the sum of the activations in the chosen layers. The loss is normalized at each layer so the contribution from larger layers does not outweigh smaller layers. Normally, loss is a quantity you wish to minimize via gradient descent. In DeepDream, you will maximize this loss via gradient ascent.
def calc_loss(img, model):
    # Pass forward the image through the model to retrieve the activations.
    # Converts the image into a batch of size 1.
    img_batch = tf.expand_dims(img, axis=0)
    layer_activations = model(img_batch)
    if len(layer_activations) == 1:
        layer_activations = [layer_activations]

    losses = [...
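The per-layer normalization can be illustrated with plain NumPy (a sketch with fake activations, not the tutorial's TensorFlow code): taking the mean of each layer's activations, rather than the sum, means a large layer contributes no more than a small one.

```python
import numpy as np

# Fake activations for two layers of different sizes (illustration only)
layer_activations = [np.ones((8, 8, 256)), np.ones((4, 4, 512))]

# Mean (not sum) per layer, then summed across layers
losses = [act.mean() for act in layer_activations]
total_loss = sum(losses)
print(total_loss)  # each layer contributes 1.0 regardless of its size
```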
Gradient ascent Once you have calculated the loss for the chosen layers, all that is left is to calculate the gradients with respect to the image, and add them to the original image. Adding the gradients to the image enhances the patterns seen by the network. At each step, you will have created an image that increasin...
class DeepDream(tf.Module):
    def __init__(self, model):
        self.model = model

    @tf.function(
        input_signature=(
            tf.TensorSpec(shape=[None,None,3], dtype=tf.float32),
            tf.TensorSpec(shape=[], dtype=tf.int32),
            tf.TensorSpec(shape=[], dtype=tf.float32),)
    )
    def __call__(self, img, steps,...
Main Loop
def run_deep_dream_simple(img, steps=100, step_size=0.01):
    # Convert from uint8 to the range expected by the model.
    img = tf.keras.applications.inception_v3.preprocess_input(img)
    img = tf.convert_to_tensor(img)
    step_size = tf.convert_to_tensor(step_size)
    steps_remaining = steps
    step = 0
    while steps_remain...
Taking it up an octave Pretty good, but there are a few issues with this first attempt: The output is noisy (this could be addressed with a tf.image.total_variation loss). The image is low resolution. The patterns appear like they're all happening at the same granularity. One approach that addresses all these proble...
import time
start = time.time()

OCTAVE_SCALE = 1.30

img = tf.constant(np.array(original_img))
base_shape = tf.shape(img)[:-1]
float_base_shape = tf.cast(base_shape, tf.float32)

for n in range(-2, 3):
    new_shape = tf.cast(float_base_shape*(OCTAVE_SCALE**n), tf.int32)
    img = tf.image.resize(img, new_shape).numpy()
    ...
Optional: Scaling up with tiles One thing to consider is that as the image increases in size, so will the time and memory necessary to perform the gradient calculation. The above octave implementation will not work on very large images, or many octaves. To avoid this issue you can split the image into tiles and compute...
def random_roll(img, maxroll):
    # Randomly shift the image to avoid tiled boundaries.
    shift = tf.random.uniform(shape=[2], minval=-maxroll, maxval=maxroll, dtype=tf.int32)
    img_rolled = tf.roll(img, shift=shift, axis=[0,1])
    return shift, img_rolled

shift, img_rolled = random_roll(np.array(original_img), 512)
sho...
Here is a tiled equivalent of the deepdream function defined earlier:
class TiledGradients(tf.Module):
    def __init__(self, model):
        self.model = model

    @tf.function(
        input_signature=(
            tf.TensorSpec(shape=[None,None,3], dtype=tf.float32),
            tf.TensorSpec(shape=[2], dtype=tf.int32),
            tf.TensorSpec(shape=[], dtype=tf.int32),)
    )
    def __call__(self, img, im...
Putting this together gives a scalable, octave-aware deepdream implementation:
def run_deep_dream_with_octaves(img, steps_per_octave=100, step_size=0.01,
                                octaves=range(-2,3), octave_scale=1.3):
    base_shape = tf.shape(img)
    img = tf.keras.utils.img_to_array(img)
    img = tf.keras.applications.inception_v3.preprocess_input(img)

    initial_shape = img.shape[:-1]
    i...
With this many layers, it's going to take a lot of iterations for this network to learn. By the time you're done training these 800 batches, your final test and validation accuracies probably won't be much better than 10%. (It will be different each time, but will most likely be less than 15%.) Using batch normalizatio...
def fully_connected(prev_layer, num_units):
    """
    Create a fully connected layer with the given layer as input and the given number of neurons.

    :param prev_layer: Tensor
        The Tensor that acts as input into this layer
    :param num_units: int
        The size of the layer. That is, the number of un...
batch-norm/Batch_Normalization_Exercises.ipynb
elenduuche/deep-learning
mit
TODO: Modify conv_layer to add batch normalization to the convolutional layers it creates. Feel free to change the function's parameters if it helps.
def conv_layer(prev_layer, layer_depth):
    """
    Create a convolutional layer with the given layer as input.

    :param prev_layer: Tensor
        The Tensor that acts as input into this layer
    :param layer_depth: int
        We'll set the strides and number of feature maps based on the layer's depth in the...
TODO: Edit the train function to support batch normalization. You'll need to make sure the network knows whether or not it is training, and you'll need to make sure it updates and uses its population statistics correctly.
def train(num_batches, batch_size, learning_rate):
    # Build placeholders for the input samples and labels
    inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
    labels = tf.placeholder(tf.float32, [None, 10])

    # Feed the inputs into a series of 20 convolutional layers
    layer = inputs
    for lay...
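The mechanics this exercise asks for can be sketched in plain NumPy (an illustration of the idea, not the TensorFlow solution; the function and variable names here are made up): while training you normalize with the batch statistics and update running population statistics; at inference you normalize with the population statistics instead.

```python
import numpy as np

def batch_norm(x, running_mean, running_var, training, momentum=0.99, eps=1e-5):
    """Normalize x; update the running population stats only while training."""
    if training:
        mean, var = x.mean(axis=0), x.var(axis=0)
        # exponential moving average of the population statistics
        running_mean = momentum * running_mean + (1 - momentum) * mean
        running_var = momentum * running_var + (1 - momentum) * var
    else:
        mean, var = running_mean, running_var
    return (x - mean) / np.sqrt(var + eps), running_mean, running_var

x = np.random.randn(32, 10) * 5 + 3          # a fake batch, far from zero mean
rm, rv = np.zeros(10), np.ones(10)           # initial population statistics
out, rm, rv = batch_norm(x, rm, rv, training=True)
print(out.mean(), out.std())                 # roughly 0 and 1 after normalization
```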
Construct GP model from log file We can reconstruct the GP model from the parsed log file (the on-the-fly training trajectory). Here we build the GP model with a 2+3 body kernel from the on-the-fly log file.
gp_model = otf_object.make_gp(hyp_no=hyp_no)
gp_model.parallel = True
gp_model.hyp_labels = ['sig2', 'ls2', 'sig3', 'ls3', 'noise']

# write model to a JSON file
gp_model.write_model('AgI.gp', format='json')
docs/source/tutorials/after_training.ipynb
mir-group/flare
mit
The last step, write_model, writes this GP model to a file (JSON here, since we passed format='json'), so next time we can load the model directly from that file as
from flare.gp import GaussianProcess

gp_model = GaussianProcess.from_file('AgI.gp.json')
Map the GP force field & Dump LAMMPS coefficient file To use the trained force field with the accelerated MGP version, or in LAMMPS, we need to build an MGP from the GP model. Since 2-body and 3-body are both included, we need to set up the number of grid points for 2-body and 3-body in grid_params. We build up energy mapping, t...
from flare.mgp import MappedGaussianProcess

grid_params = {'twobody': {'grid_num': [64]},
               'threebody': {'grid_num': [20, 20, 20]}}
data = gp_model.training_statistics
lammps_location = 'AgI_Molten'

mgp_model = MappedGaussianProcess(grid_params, data['species'], var_map=None,
                                  lmp_file_name='AgI...
The coefficient file for the LAMMPS mgp pair_style is automatically saved once the mapping is done, under the name given by lmp_file_name. Run LAMMPS with MGP pair style With the above coefficient file, we can run a LAMMPS simulation with the mgp pair style. First download our mgp pair style files, compile your lammps executable with mg...
import os
from flare.utils.element_coder import _Z_to_mass, _element_to_Z
from flare.ase.calculator import FLARE_Calculator
from ase.calculators.lammpsrun import LAMMPS
from ase import Atoms

# create test structure
species = otf_object.gp_species_list[-1]
positions = otf_object.position_list[-1]
forces = otf_object.f...
The third way to run LAMMPS is through our LAMMPS interface; set the environment variable $lmp to the executable.
from flare import struc
from flare.lammps import lammps_calculator

# lmp coef file is automatically written now every time MGP is constructed

# create test structure
species = otf_object.gp_species_list[-1]
positions = otf_object.position_list[-1]
forces = otf_object.force_list[-1]
otf_cell = otf_object.header['cell'...
We update the movie and user ids so that they are contiguous integers, which is what we want when using embeddings.
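The remapping dictionaries used below (movieid2idx, userid2idx) can be built by enumerating the unique ids, for example (a sketch with made-up ids):

```python
# Map arbitrary ids onto contiguous integers 0..n-1,
# which is what an embedding layer expects as indices.
raw_movie_ids = [10, 32, 10, 7, 32]   # illustration only
movieid2idx = {o: i for i, o in enumerate(sorted(set(raw_movie_ids)))}
print(movieid2idx)  # {7: 0, 10: 1, 32: 2}
```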
ratings.movieId = ratings.movieId.map(movieid2idx)
ratings.userId = ratings.userId.map(userid2idx)

user_min, user_max, movie_min, movie_max = (ratings.userId.min(), ratings.userId.max(),
                                             ratings.movieId.min(), ratings.movieId.max())
user_min, user_max, movie_min, movie_max

n_users = ratings.userId.nunique()
n_mo...
deeplearning1/nbs/lesson4-ma.ipynb
appleby/fastai-courses
apache-2.0
Dot product The most basic model is a dot product of a movie embedding and a user embedding. Let's see how well that works:
user_in = Input(shape=(1,), dtype='int64', name='user_in')
u = Embedding(n_users, n_factors, input_length=1, W_regularizer=l2(1e-4))(user_in)
movie_in = Input(shape=(1,), dtype='int64', name='movie_in')
m = Embedding(n_movies, n_factors, input_length=1, W_regularizer=l2(1e-4))(movie_in)

x = merge([u, m], mode='dot')
x...
The best benchmarks are a bit over 0.9, so this model doesn't seem to be working that well... Bias The problem is likely to be that we don't have bias terms - that is, a single bias for each user and each movie representing how positive or negative each user is, and how good each movie is. We can add that easily by sim...
def embedding_input(name, n_in, n_out, reg):
    inp = Input(shape=(1,), dtype='int64', name=name)
    return inp, Embedding(n_in, n_out, input_length=1, W_regularizer=l2(reg))(inp)

user_in, u = embedding_input('user_in', n_users, n_factors, 1e-4)
movie_in, m = embedding_input('movie_in', n_movies, n_factors, 1e-4)
d...
We can use the model to generate predictions by passing a pair of ints - a user id and a movie id. For instance, this predicts that user #3 would really enjoy movie #6.
model.predict([np.array([3]), np.array([6])])
ratings.loc[lambda df: df.userId == 3, :].head()
model.predict([np.array([3]), np.array([20])])
We can draw a picture to see how various movies appear on the map of these components. This picture shows the 1st and 3rd components.
import sys
stdout = sys.stdout
reload(sys)
sys.setdefaultencoding('utf-8')
sys.stdout = stdout

start=50; end=100
X = fac0[start:end]
Y = fac2[start:end]
plt.figure(figsize=(15,15))
plt.scatter(X, Y)
for i, x, y in zip(topMovies[start:end], X, Y):
    plt.text(x,y,movie_names[movies[i]], color=np.random.rand(3)*0.7, fo...
Neural net Rather than creating a special purpose architecture (like our dot-product with bias earlier), it's often both easier and more accurate to use a standard neural network. Let's try it! Here, we simply concatenate the user and movie embeddings into a single vector, which we feed into the neural net.
user_in, u = embedding_input('user_in', n_users, n_factors, 1e-4)
movie_in, m = embedding_input('movie_in', n_movies, n_factors, 1e-4)

x = merge([u, m], mode='concat')
x = Flatten()(x)
x = Dropout(0.3)(x)
x = Dense(70, activation='relu')(x)
x = Dropout(0.75)(x)
x = Dense(1)(x)
nn = Model([user_in, movie_in], x)
nn.com...
List Operators In this step we list the operators that currently exist in the system by using the get_plat_operator function and capturing the output in a variable called ops_list. We then call len(), which measures the length of the ops_list object, or more simply, counts the number of operators currently confi...
ops_list = get_plat_operator(url=auth.url, auth=auth.creds)

print("There are currently " + str(len(ops_list)) + " operators configured.")
examples/.ipynb_checkpoints/HPE IMC Import Operators-checkpoint.ipynb
HPNetworking/HP-Intelligent-Management-Center
apache-2.0
Shown here is a screen capture of the current operators configured in the HPE IMC system. You can see that there are the same amount of operators as shown in the statement above. Note: the screen capture is used for the initial demonstration only. If you are running this notebook against your own IMC server, this scree...
set_operator_password('cyoung', password='newpass', auth=auth.creds, url=auth.url)
set_operator_password('')
Input Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders: - Input text placeholder named "input" using the TF Placeholder name parameter. - Targets placeholder - Learning Rate placeholder Return the placeholders in the following tuple (Inpu...
def get_inputs():
    """
    Create TF Placeholders for input, targets, and learning rate.

    :return: Tuple (input, targets, learning rate)
    """
    # TODO: Implement Function
    inputs = tf.placeholder(tf.int32, [None, None], name='input')
    targets = tf.placeholder(tf.int32, [None, None], name='target')
    ...
tv-script-generation/dlnd_tv_script_generation.ipynb
ianhamilton117/deep-learning
mit
Build RNN Cell and Initialize Stack one or more BasicLSTMCells in a MultiRNNCell. - The RNN size should be set using rnn_size - Initialize Cell State using the MultiRNNCell's zero_state() function - Apply the name "initial_state" to the initial state using tf.identity() Return the cell and initial state in the follo...
def get_init_cell(batch_size, rnn_size):
    """
    Create an RNN Cell and initialize it.

    :param batch_size: Size of batches
    :param rnn_size: Size of RNNs
    :return: Tuple (cell, initialize state)
    """
    # TODO: Implement Function
    lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
    #TODO: Wrap with dr...
Build the Neural Network Apply the functions you implemented above to: - Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function. - Build RNN using cell and your build_rnn(cell, inputs) function. - Apply a fully connected layer with a linear activation and vocab_size as the number...
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
    """
    Build part of the neural network

    :param cell: RNN cell
    :param rnn_size: Size of rnns
    :param input_data: Input data
    :param vocab_size: Vocabulary size
    :param embed_dim: Number of embedding dimensions
    :return: Tuple (Logi...
Neural Network Training Hyperparameters Tune the following parameters: Set num_epochs to the number of epochs. Set batch_size to the batch size. Set rnn_size to the size of the RNNs. Set embed_dim to the size of the embedding. Set seq_length to the length of sequence. Set learning_rate to the learning rate. Set show_e...
# Number of Epochs
num_epochs = 700
# Batch Size
batch_size = 1024
# RNN Size
rnn_size = 256
# Embedding Dimension Size
embed_dim = 300
# Sequence Length
seq_length = 30
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 5

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT ...
Plotting
# creating figure
plt.figure(figsize=(6, 4), dpi=120)

#plt.plot(x, Y, color='blue')
plt.scatter(X, y, color='blue')
plt.xlabel("Pizza diameter")
plt.ylabel("Pizza price $")
plt.title("Pizza price analysis")
#plt.xlim(0, 30)
#plt.ylim(0, 30)
plt.grid(True, color='0.2')
plt.autoscale(True)
MasteringML_wSkLearn/01_Linear_Regression.ipynb
atulsingh0/MachineLearning
gpl-3.0
Using linear regression to predict the pizza price
from sklearn.linear_model import LinearRegression

lReg = LinearRegression()
lReg.fit(X, y)

# predict the price of a 16" pizza (predict expects a 2D array)
print("16' pizza price : ", lReg.predict([[16]])[0])

# getting coefficient & intercept
print("Coeff : ", lReg.coef_, "\nIntercept : ", lReg.intercept_)
Checking the mean squared error (the cost function) - $$ mean([y-f(x)]^2) $$
rss = np.mean((y - lReg.predict(X))**2)
rss  # the mean of the squared residuals, also called the cost function
Calculating variance of X and co-variance of X and y
xm = np.mean(X)
print(xm)

variance = (np.sum((X - xm)**2))/4
print(variance)

# numpy func np.var
print(np.var(X, ddof=1))  # ddof=1 applies Bessel's correction

ym = np.mean(y)
print(ym)

covar = np.sum((X-xm)*(y-ym))/4
print(covar)

# numpy func np.cov
print(np.cov([6,8,10,14,18], [7,9,13,17.5,18])[0][1])
Now, calculating the coefficient - $$ \frac{cov(X,y)}{var(X)} $$
coeff = covar / variance
coeff

# based on coeff we can calc the intercept, which is y - coeff*x
intercept = ym - coeff*xm
intercept

print(coeff, intercept)
print(lReg.coef_, lReg.intercept_)

# checking out the 16" pizza price
price = 1.96551724138 + (0.976293103448 * 16)
print(price)
print(lReg.predict([[16]]))

# let...
Performance measures, bias, and variance There are two fundamental causes of prediction error: a model's bias and its variance. Bias A model with a high bias will produce similar errors for an input regardless of the training set it was trained with; the model biases its own assumptions about the real relationship ove...
# error metrics
from sklearn import metrics

print("Mean Abs Error ", metrics.mean_absolute_error(y_test, y_predict))
print("Mean Sqrd Error", metrics.mean_squared_error(y_test, y_predict))
print("R^2 score     ", lReg.score(X_test, y_test))
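The score() value printed above is the coefficient of determination R², which can be computed by hand as 1 - SS_res/SS_tot (a sketch with made-up numbers; y_test and y_predict here are illustrative, not the notebook's data):

```python
import numpy as np

y_test = np.array([7, 9, 13, 17.5, 18])    # illustration only
y_predict = np.array([8, 9, 12, 17, 19])

ss_res = np.sum((y_test - y_predict)**2)            # residual sum of squares
ss_tot = np.sum((y_test - y_test.mean())**2)        # total sum of squares
r2 = 1 - ss_res / ss_tot
print(r2)
```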
E2E ML on GCP: MLOps stage 4 : formalization: get started with Vertex AI ML Metadata
import os

# The Vertex AI Workbench Notebook product has specific requirements
IS_WORKBENCH_NOTEBOOK = os.getenv("DL_ANACONDA_HOME")
IS_USER_MANAGED_WORKBENCH_NOTEBOOK = os.path.exists(
    "/opt/deeplearning/metadata/env_version"
)

# Vertex AI Notebook requires dependencies to be installed with '--user'
USER_FLAG = ...
notebooks/community/ml_ops/stage4/get_started_with_vertex_ml_metadata.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Import Vertex AI SDK Import the Vertex AI SDK into your Python environment.
import google.cloud.aiplatform_v1beta1 as aip_beta
Introduction to Vertex AI Metadata The Vertex AI ML Metadata service provides you with the ability to record, and subsequently search and analyze, the artifacts and corresponding metadata produced by your ML workflows. For example, during experimentation one might desire to record the location of the model artifacts, a...
metadata_store = clients["metadata"].create_metadata_store(
    parent=PARENT, metadata_store_id="my-metadata-store"
)
metadata_store_id = str(metadata_store.result())[7:-2]
print(metadata_store_id)
List metadata schemas When you create an Artifact, Execution or Context resource, you specify a schema that describes the corresponding metadata. The schemas must be pre-registered for your Metadatastore resource. You can get a list of all registered schemas, default and user defined, using the list_metadata_schemas() ...
schemas = clients["metadata"].list_metadata_schemas(parent=metadata_store_id)
for schema in schemas:
    print(schema)
Create an Artifact resource You create an Artifact resource using the create_artifact() method, with the following parameters: - parent: The fully qualified resource identifier to the Metadatastore resource. - artifact: The definition of the Artifact resource - display_name: The human readable name for the Artifact resource...
from google.cloud.aiplatform_v1beta1.types import Artifact

artifact_item = Artifact(
    display_name="my_example_artifact",
    uri="my_url",
    labels={"my_label": "value"},
    schema_title="system.Artifact",
    metadata={"param": "value"},
)

artifact = clients["metadata"].create_artifact(
    parent=metadata_st...
List Artifact resources in a Metadatastore You can list all Artifact resources using the list_artifacts() method, with the following parameters: parent: The fully qualified resource identifier for the MetadataStore resource.
artifacts = clients["metadata"].list_artifacts(parent=metadata_store_id)
for _artifact in artifacts:
    print(_artifact)
Create an Execution resource You create an Execution resource using the create_execution() method, with the following parameters: - parent: The fully qualified resource identifier to the Metadatastore resource. - execution: - display_name: A human readable name for the Execution resource. - schema_title: The title of the sche...
from google.cloud.aiplatform_v1beta1.types import Execution

execution = clients["metadata"].create_execution(
    parent=metadata_store_id,
    execution=Execution(
        display_name="my_execution",
        schema_title="system.CustomJobExecution",
        metadata={"value": "param"},
    ),
    execution_id="myexe...
List Execution resources in a Metadatastore You can list all Execution resources using the list_executions() method, with the following parameters: parent: The fully qualified resource identifier for the MetadataStore resource.
executions = clients["metadata"].list_executions(parent=metadata_store_id)
for _execution in executions:
    print(_execution)
Create a Context resource You create a Context resource using the create_context() method, with the following parameters: - parent: The fully qualified resource identifier to the Metadatastore resource. - context: - display_name: A human readable name for the Context resource. - schema_title: The title of the schema that d...
from google.cloud.aiplatform_v1beta1.types import Context

context = clients["metadata"].create_context(
    parent=metadata_store_id,
    context=Context(
        display_name="my_context",
        labels={"my_label": "my_value"},  # labels is a dict of key/value strings
        schema_title="system.Pipeline",
        metadata={"param": "value"},
    ),
    ...
List Context resources in a Metadatastore You can list all Context resources using the list_contexts() method, with the following parameters: parent: The fully qualified resource identifier for the MetadataStore resource.
contexts = clients["metadata"].list_contexts(parent=metadata_store_id)
for _context in contexts:
    print(_context)
notebooks/community/ml_ops/stage4/get_started_with_vertex_ml_metadata.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Add events to an Execution resource An Execution resource consists of a sequence of events that occurred during the execution. Each event consists of an artifact that is either an input or an output of the Execution resource. You can add execution events to an Execution resource using the add_execution_events() method, wi...
from google.cloud.aiplatform_v1beta1.types import Event clients["metadata"].add_execution_events( execution=execution.name, events=[ Event( artifact=artifact.name, type_=Event.Type.INPUT, labels={"my_label": "my_value"}, ) ], )
notebooks/community/ml_ops/stage4/get_started_with_vertex_ml_metadata.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Combine Artifacts and Executions into a Context A Context is used to group Artifact resources and Execution resources together under a single, queryable, and typed category. Contexts can be used to represent sets of metadata. You can combine a set of Artifact and Execution resources into a Context resource using the ad...
clients["metadata"].add_context_artifacts_and_executions( context=context.name, artifacts=[artifact.name], executions=[execution.name] )
notebooks/community/ml_ops/stage4/get_started_with_vertex_ml_metadata.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Delete an Artifact resource You can delete an Artifact resource using the delete_artifact() method, with the following parameters: name: The fully qualified resource identifier for the Artifact resource.
clients["metadata"].delete_artifact(name=artifact.name)
notebooks/community/ml_ops/stage4/get_started_with_vertex_ml_metadata.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Delete an Execution resource You can delete an Execution resource using the delete_execution() method, with the following parameters: name: The fully qualified resource identifier for the Execution resource.
clients["metadata"].delete_execution(name=execution.name)
notebooks/community/ml_ops/stage4/get_started_with_vertex_ml_metadata.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Introduction to tracking ML Metadata in a Vertex AI Pipeline Vertex AI Pipelines automatically records the metrics and artifacts created when the pipeline is executed. You can then use the SDK to track and analyze the metrics and artifacts across pipeline runs.
from kfp.v2 import compiler, dsl from kfp.v2.dsl import (Artifact, Dataset, Input, Metrics, Model, Output, OutputPath, component, pipeline)
notebooks/community/ml_ops/stage4/get_started_with_vertex_ml_metadata.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Creating a 3-step pipeline with custom components First, you create a pipeline to run on Vertex AI Pipelines, consisting of the following custom components: get_dataframe: Retrieve data from a BigQuery table and convert it into a pandas DataFrame. sklearn_train: Use the pandas DataFrame to train and export a scikit-le...
@component( packages_to_install=["google-cloud-bigquery", "pandas", "pyarrow"], base_image="python:3.9", output_component_file="create_dataset.yaml", ) def get_dataframe(bq_table: str, output_data_path: OutputPath("Dataset")): from google.cloud import bigquery bqclient = bigquery.Client() table...
notebooks/community/ml_ops/stage4/get_started_with_vertex_ml_metadata.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Construct and compile the pipeline Next, construct the pipeline:
PIPELINE_ROOT = f"{BUCKET_URI}/pipeline_root/3step" @dsl.pipeline( # Default pipeline root. You can override it when submitting the pipeline. pipeline_root=PIPELINE_ROOT, # A name for the pipeline. name="mlmd-pipeline", ) def pipeline( bq_table: str = "", output_data_path: str = "data.csv", ...
notebooks/community/ml_ops/stage4/get_started_with_vertex_ml_metadata.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Compile and execute two runs of the pipeline Next, you compile the pipeline and then run two separate instances of the pipeline. In the first instance, you train the model with a small version of the dataset and in the second instance you train it with a larger version of the dataset.
NOW = datetime.now().isoformat().replace(".", ":")[:-7] compiler.Compiler().compile(pipeline_func=pipeline, package_path="mlmd_pipeline.json") run1 = aip.PipelineJob( display_name="mlmd-pipeline", template_path="mlmd_pipeline.json", job_id="mlmd-pipeline-small-{}".format(TIMESTAMP), parameter_values={...
notebooks/community/ml_ops/stage4/get_started_with_vertex_ml_metadata.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Compare the pipeline runs Now that you have two completed pipeline runs, you can compare them. You can use the get_pipeline_df() method to access the metadata from the runs. The mlmd-pipeline parameter here refers to the name you gave to your pipeline: Alternatively, for guidance on inspecting pipeline artif...
df = aip.get_pipeline_df(pipeline="mlmd-pipeline") print(df)
notebooks/community/ml_ops/stage4/get_started_with_vertex_ml_metadata.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Visualize the pipeline runs Next, you create a custom visualization with matplotlib to see the relationship between your model's accuracy and the amount of data used for training.
import matplotlib.pyplot as plt plt.plot(df["metric.dataset_size"], df["metric.accuracy"], label="Accuracy") plt.title("Accuracy and dataset size") plt.legend(loc=4) plt.show()
notebooks/community/ml_ops/stage4/get_started_with_vertex_ml_metadata.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Querying your Metadatastore resource Finally, you query your Metadatastore resource by specifying a filter parameter when calling the list_artifacts() method.
FILTER = f'create_time >= "{NOW}" AND state = LIVE' artifact_req = { "parent": metadata_store_id, "filter": FILTER, } artifacts = clients["metadata"].list_artifacts(artifact_req) for _artifact in artifacts: print(_artifact) clients["metadata"].delete_artifact(name=_artifact.name)
notebooks/community/ml_ops/stage4/get_started_with_vertex_ml_metadata.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
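The filter passed to list_artifacts() is a plain string. A small helper (hypothetical, not part of the google-cloud-aiplatform SDK) makes the grammar used above explicit — a timestamp comparison combined with a state equality check:

```python
from datetime import datetime

def build_artifact_filter(since: datetime, state: str = "LIVE") -> str:
    """Build a list_artifacts() filter string: artifacts created at or after
    `since` that are in the given lifecycle state. This helper is an
    illustration of the filter grammar, not an SDK function."""
    return f'create_time >= "{since.isoformat()}" AND state = {state}'

print(build_artifact_filter(datetime(2022, 1, 1)))
# create_time >= "2022-01-01T00:00:00" AND state = LIVE
```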
Delete a MetadataStore resource You can delete a MetadataStore resource using the delete_metadata_store() method, with the following parameters: name: The fully qualified resource identifier for the MetadataStore resource.
clients["metadata"].delete_metadata_store(name=metadata_store_id)
notebooks/community/ml_ops/stage4/get_started_with_vertex_ml_metadata.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
For practical reasons, we plot the excess shut-time probability densities in the graph below. In all other particulars, it should reproduce Fig. 10 from Hawkes, Jalali, and Colquhoun (1992).
from dcprogs.likelihood import missed_events_pdf fig = plt.figure(figsize=(12,9)) ax = fig.add_subplot(2, 2, 1) x = np.arange(0, 10, tau/100) pdf = missed_events_pdf(qmatrix, 0.2, nmax=2, shut=True) ax.plot(x, pdf(x), '-k') ax.set_xlabel('time $t$ (ms)') ax.set_ylabel('Shut-time probability density $f_{\\bar{\\tau}=0....
exploration/CB.ipynb
DCPROGS/HJCFIT
gpl-3.0
Possible Solution
def neighbour_squares(x, y, num_rows, num_cols): """ (x, y) 0-based index co-ordinate pair. num_rows, num_cols: specify the max size of the board returns all valid (x, y) coordinates from starting position. """ offsets = [(-1,-1), (-1,0), (-1,1), ( 0,-1), ( 0,1), ...
A Beginners Guide to Python/Final Project (Minesweeper)/_04. Getting the Neighbours(HW).ipynb
fluffy-hamster/A-Beginners-Guide-to-Python
mit
Explanation Neighbour Squares 'neighbour_squares' takes an (x, y) co-ordinate pair and returns the neighbours of that square. A square usually has eight neighbours (up-left, up-right, below, below-right, etc.); however, squares on the edge of the board have fewer. The purpose of "row_check" and "col_check" is to help avoid ...
def get_square(x, y, board): """ This function takes a board and returns the value at that square(x,y). """ return board[x][y] def count_occurence_of_character_in_neighbour_squares(x, y, board, character): """ returns the number of neighbours of (x,y) that are bombs. Max is 8, min is 0. """...
A Beginners Guide to Python/Final Project (Minesweeper)/_04. Getting the Neighbours(HW).ipynb
fluffy-hamster/A-Beginners-Guide-to-Python
mit
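The two helpers above can be sketched in a self-contained form. The function names follow the notebook; the small board below is illustrative data, not taken from the original:

```python
def neighbour_squares(x, y, num_rows, num_cols):
    """Return all valid neighbour coordinates of (x, y) on a num_rows x num_cols board."""
    offsets = [(-1, -1), (-1, 0), (-1, 1),
               ( 0, -1),          ( 0, 1),
               ( 1, -1), ( 1, 0), ( 1, 1)]
    result = []
    for dx, dy in offsets:
        nx, ny = x + dx, y + dy
        if 0 <= nx < num_rows and 0 <= ny < num_cols:  # drop off-board squares
            result.append((nx, ny))
    return result

def count_character_in_neighbours(x, y, board, character):
    """Count how many neighbours of (x, y) hold the given character (0..8)."""
    rows, cols = len(board), len(board[0])
    return sum(board[nx][ny] == character
               for nx, ny in neighbour_squares(x, y, rows, cols))

# A 3x3 board where '*' marks a bomb (illustrative):
board = [["*", ".", "."],
         [".", ".", "*"],
         [".", ".", "."]]
print(count_character_in_neighbours(1, 1, board, "*"))  # 2
```

A corner square such as (0, 0) has only three neighbours, which is exactly the edge case the bounds check handles.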
Basic objects The base object for the panel method can be a 2d or 3d "computation case" (Case2, Case3). It is given the geometry, consisting of several surfaces (Panel2, Panel3). These are planar surfaces, or lines in the 2D case, which are built from several points (PanelVector2, PanelVector3), ...
from __future__ import division # enable python3 division v1 = paraBEM.Vector2(0, -1) v2 = paraBEM.PanelVector2(0, 1) v3 = paraBEM.Vector3(0, 0, 1) v4 = paraBEM.PanelVector3(1, 0, 1)
doc/tutorial/tutorial.ipynb
looooo/paraBEM
gpl-3.0
2d $\longleftrightarrow$ 3d It is possible to convert a 2d vector into a 3d vector, and vice versa. However, the operators can only ever be applied to vectors of the same dimension.
print(paraBEM.Vector3(v1)) print(paraBEM.Vector2(v3))
doc/tutorial/tutorial.ipynb
looooo/paraBEM
gpl-3.0
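Presumably the 2d-to-3d conversion pads the missing component with zero and the 3d-to-2d conversion drops it. A plain-Python sketch of that convention (an assumption about paraBEM's behaviour, not taken from its source):

```python
def vector2_to_3(v2, z=0.0):
    """Convert a 2d vector to 3d by padding the third component (assumed z = 0)."""
    x, y = v2
    return (x, y, z)

def vector3_to_2(v3):
    """Convert a 3d vector to 2d by dropping the third component."""
    x, y, _ = v3
    return (x, y)

print(vector2_to_3((0, -1)))    # (0, -1, 0.0)
print(vector3_to_2((0, 0, 1)))  # (0, 0)
```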
Panel2 The Panel2 consists of 2 points of the class PanelVector2. Some properties that are used frequently in the panel method are stored directly as attributes of the panel. These are, for example, the length l, the orientation n, t, and the center point center.
l = [paraBEM.PanelVector2(1, 2), paraBEM.PanelVector2(3, 4)] p = paraBEM.Panel2(l) p.l, p.t, p.n, p.center
doc/tutorial/tutorial.ipynb
looooo/paraBEM
gpl-3.0
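For a two-point panel these attributes follow directly from the endpoints. A minimal sketch of the formulas (the sign convention for the normal is an assumption; paraBEM may orient it the other way):

```python
import math

def panel2_properties(p1, p2):
    """Length l, unit tangent t, unit normal n, and midpoint of a 2-point panel."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    l = math.hypot(dx, dy)
    t = (dx / l, dy / l)             # unit tangent along the panel
    n = (t[1], -t[0])                # perpendicular to t (assumed sign convention)
    center = ((p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2)
    return l, t, n, center

l, t, n, center = panel2_properties((1, 2), (3, 4))
print(l, center)  # length ≈ 2.828, center (2.0, 3.0)
```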
Panel3 The Panel3 consists of 3 or more points of the class PanelVector3. Some properties that are used frequently in the panel method are stored directly as attributes of the panel. These are, for example, the area area, the orientation n, m, l, and the center point center.
l = [paraBEM.PanelVector3(1, 2, 0), paraBEM.PanelVector3(3, 4, 1), paraBEM.PanelVector3(0, -1, 0)] p = paraBEM.Panel3(l) p.area, p.n, p.l, p.m, p.center
doc/tutorial/tutorial.ipynb
looooo/paraBEM
gpl-3.0
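For a triangular panel the area and unit normal follow from the cross product of two edge vectors, and the center is the centroid. A sketch using the same triangle as above (a pure-Python illustration, not paraBEM's implementation):

```python
import math

def triangle_properties(a, b, c):
    """Area, unit normal, and centroid of a triangular panel."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    # cross product u x v; its norm equals twice the triangle area
    cross = (u[1] * v[2] - u[2] * v[1],
             u[2] * v[0] - u[0] * v[2],
             u[0] * v[1] - u[1] * v[0])
    norm = math.sqrt(sum(ci ** 2 for ci in cross))
    area = norm / 2
    n = tuple(ci / norm for ci in cross)
    center = tuple((a[i] + b[i] + c[i]) / 3 for i in range(3))
    return area, n, center

area, n, center = triangle_properties((1, 2, 0), (3, 4, 1), (0, -1, 0))
print(area, center)
```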
Influence functions The influence functions are the kernel functions of the panel method. They are all solutions of the Laplace equation and are divided into potential and velocity functions (v). The first two arguments of these functions are the target point (target) and the perturbation object (sourc...
SVG(filename='tutorial_files/kernfunktionen_bezeichnung.svg') import paraBEM.pan2d as pan2d target = paraBEM.Vector2(1, 1) source_point = paraBEM.PanelVector2(-1, 0) source_point_1 = paraBEM.PanelVector2(1, 0) source_panel = paraBEM.Panel2([source_point, source_point_1]) print(pan2d.source_2(target, source_point)) ...
doc/tutorial/tutorial.ipynb
looooo/paraBEM
gpl-3.0
Creating the panels from the geometry data (paraBEM)
points = [paraBEM.PanelVector2(x, y) for x, y in xy] points += [points[0]] panels = [paraBEM.Panel2([point, points[i+1]]) for i, point in enumerate(points[:-1])]
doc/tutorial/tutorial.ipynb
looooo/paraBEM
gpl-3.0
Creating a Case
case = pan2d.NeumannDoublet0Case2(panels) case.v_inf = paraBEM.Vector2(1, 0) case.run()
doc/tutorial/tutorial.ipynb
looooo/paraBEM
gpl-3.0
Visualization of the computed values
print("lift: ", case.cl) # no lift, because no wake is defined plt.plot([p.cp for p in panels], c="g") plt.ylabel("$cp$") plt.xlabel("$nr$") plt.show() nx = 200 ny = 200 space_x = np.linspace(-2, 2, nx) space_y = np.linspace(-2, 2, ny) grid = [paraBEM.Vector2(x, y) for y in space_y for x in space_x] v...
doc/tutorial/tutorial.ipynb
looooo/paraBEM
gpl-3.0
Flow around a cylinder Superposition of a doublet with a uniform flow
from paraBEM.pan2d import doublet_2, doublet_2_v, vortex_2, vortex_2_v source = paraBEM.Vector2(0, 0) # center of the circle def cylinder_field(target, circulation=0, r=1, v_inf=paraBEM.Vector2(1, 0)): direction = paraBEM.Vector2(-1, 0) # direction of doublet (-v_inf) mu = v_inf....
doc/tutorial/tutorial.ipynb
looooo/paraBEM
gpl-3.0
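For the circulation-free cylinder this superposition has a closed form: the surface speed is 2·v∞·sin(φ), so the pressure coefficient is cp = 1 − 4·sin²(φ). A quick check of that analytic result (independent of paraBEM):

```python
import math

def cylinder_surface_cp(phi, v_inf=1.0):
    """Pressure coefficient on a circulation-free cylinder:
    surface speed is 2*v_inf*sin(phi), hence cp = 1 - 4*sin(phi)**2."""
    v = 2 * v_inf * math.sin(phi)
    return 1 - (v / v_inf) ** 2

print(cylinder_surface_cp(0.0))          # 1.0  (stagnation point)
print(cylinder_surface_cp(math.pi / 2))  # -3.0 (maximum speed, top of the cylinder)
```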
Pressure distribution on the surface
phi = np.linspace(0, np.pi * 2, 100) x = list(np.cos(phi) + source.x) y = list(np.sin(phi) + source.y) xy = list(zip(x, y)) pot, vel = zip(*[cylinder_field(paraBEM.Vector2(xi, yi)) for xi, yi in xy]) _cp = list(map(cp, vel)) vel = [v.norm() for v in vel] plt.axes().set_aspect("equal", "datalim") plt.grid() plt.plot(x,...
doc/tutorial/tutorial.ipynb
looooo/paraBEM
gpl-3.0
Potential and velocity distribution in space
x_grid = np.linspace(-2, 2, 100) y_grid = np.linspace(-2, 2, 100) grid = [paraBEM.Vector2(x, y) for x in x_grid for y in y_grid] pot, vel = zip(*[cylinder_field(point) for point in grid]) writer = VtkWriter() filename = check_path("/tmp/paraBEM_results/cylinder.vtk") with open(filename, "w") as _file: writer = Vtk...
doc/tutorial/tutorial.ipynb
looooo/paraBEM
gpl-3.0
Joukowsky airfoil - conformal mapping Pressure distribution on the surface of a Joukowsky airfoil by means of a conformal mapping.
from paraBEM.airfoil.conformal_mapping import JoukowskyAirfoil airfoil = JoukowskyAirfoil(midpoint=-0.1 + 0.05j) alpha = np.deg2rad(3) vel = airfoil.surface_velocity(alpha, num=70) vel = np.sqrt(vel.imag ** 2 + vel.real ** 2) cp = airfoil.surface_cp(alpha, num=100) coordinates = airfoil.coordinates(100) plt.grid() p...
doc/tutorial/tutorial.ipynb
looooo/paraBEM
gpl-3.0
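The conformal map behind this airfoil is the Joukowsky transform z = ζ + 1/ζ, which sends a circle passing through ζ = 1 to a profile with a sharp trailing edge. A minimal sketch using the same midpoint as above (the 70-point sampling is illustrative):

```python
import cmath
import math

def joukowsky(zeta):
    """Joukowsky transform: z = zeta + 1/zeta."""
    return zeta + 1 / zeta

# Circle through zeta = 1 with the midpoint used above (-0.1 + 0.05j);
# the offset midpoint gives the profile its thickness and camber.
mid = -0.1 + 0.05j
radius = abs(1 - mid)
angles = [2 * math.pi * k / 70 for k in range(70)]
profile = [joukowsky(mid + radius * cmath.exp(1j * t)) for t in angles]

# The circle point zeta = 1 maps to the sharp trailing edge at z = 2:
print(joukowsky(1 + 0j))  # (2+0j)
```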
Joukowsky airfoil - 2D panel method Pressure distribution on the surface
# translate the complex coordinates to (x, y) coordinates coordinates = list(zip( airfoil.coordinates(num=70).real, airfoil.coordinates(num=70).imag)) vertices = [paraBEM.PanelVector2(*v) for v in coordinates[:-1]] vertices[0].wake_vertex = True panels = [paraBEM.Panel2([vertices[i], ...
doc/tutorial/tutorial.ipynb
looooo/paraBEM
gpl-3.0
Potential and velocity distribution in space
nx = 200 ny = 200 space_x = np.linspace(-3, 3, nx) space_y = np.linspace(-1, 1, ny) grid = [paraBEM.Vector2(x, y) for y in space_y for x in space_x] velocity = list(map(case.off_body_velocity, grid)) pot = list(map(case.off_body_potential, grid)) file_name = check_path("/tmp/paraBEM_results/airfoil_2d_linear/field.vt...
doc/tutorial/tutorial.ipynb
looooo/paraBEM
gpl-3.0
Flow around a sphere Superposition of a doublet and a uniform flow
from paraBEM.pan3d import doublet_3, doublet_3_v def sphere_field(target, r=1, v_inf=paraBEM.Vector3(1, 0, 0)): source = paraBEM.Vector3(0, 0, 0) mu = v_inf.norm() * np.pi * r**3 * 2 return ( mu * doublet_3(target, source, -v_inf) + v_inf.dot(target), mu * doublet_3_v(target, source, -v_in...
doc/tutorial/tutorial.ipynb
looooo/paraBEM
gpl-3.0
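For the sphere this superposition likewise has a closed form: the surface speed is (3/2)·v∞·sin(θ), so cp = 1 − (9/4)·sin²(θ), with a minimum of −1.25 rather than the cylinder's −3. A quick check of the analytic result:

```python
import math

def sphere_surface_cp(theta, v_inf=1.0):
    """Pressure coefficient on a sphere in uniform flow:
    surface speed is 1.5*v_inf*sin(theta), hence cp = 1 - 2.25*sin(theta)**2."""
    v = 1.5 * v_inf * math.sin(theta)
    return 1 - (v / v_inf) ** 2

print(sphere_surface_cp(0.0))          # 1.0   (stagnation point)
print(sphere_surface_cp(math.pi / 2))  # -1.25 (equator)
```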
Potential and velocity distribution in space
phi = np.linspace(0, np.pi * 2, 300) x = np.cos(phi) y = np.sin(phi) pot, vel = zip(*[sphere_field( paraBEM.Vector3( np.cos(p), np.sin(p), 0)) for p in phi]) cp = list(map(cp_, vel)) vel = [v.norm() for v in vel] plt.plot(x, y, label="surface", color="black...
doc/tutorial/tutorial.ipynb
looooo/paraBEM
gpl-3.0
3D panel method: sphere
from paraBEM.mesh import mesh_object from paraBEM.vtk_export import CaseToVTK # create panels from mesh mesh = mesh_object.from_OBJ("../../examples/mesh/sphere_low_tri.obj") # create case from panels case = pan3d.DirichletDoublet0Case3(mesh.panels) # set boundary condition far away from the body case.v_inf = paraBEM....
doc/tutorial/tutorial.ipynb
looooo/paraBEM
gpl-3.0
Flow around a paraglider Creating the "Case"
from openglider.jsonify import load from openglider.utils.distribution import Distribution from openglider.glider.in_out.export_3d import paraBEM_Panels from paraBEM.utils import v_inf_deg_range3 # load glider file_name = "../../examples/openglider/glider/referenz_schirm_berg.json" with open(file_name) as _file: ...
doc/tutorial/tutorial.ipynb
looooo/paraBEM
gpl-3.0
Visualization
file_name = "/tmp/paraBEM_results/vtk_glider_case" vtk_writer = CaseToVTK(case, file_name) vtk_writer.write_panels(data_type="cell") vtk_writer.write_wake_panels() vtk_writer.write_body_stream(panels, 100) paraview(file_name + "/panels.vtk")
doc/tutorial/tutorial.ipynb
looooo/paraBEM
gpl-3.0
At first, we need to define the dataset name and variables we want to use.
dh=datahub.datahub(server,version,API_key) dataset='nasa_merra2_global_v2' variable_names = 'T2MMEAN,T2MMAX,T2MMIN' time_start = '1980-01-01T00:00:00' area_name = 'Bering_Strait'
api-examples/arctic_temperature.ipynb
planet-os/notebooks
mit
In this part we define the RBSN dataset key, as we would like to use observational data from stations as well. We also define the station id; you can see the different station ids on the RBSN detail page map. Here, we choose a station near the Bering Strait. The only requirement is that the station should locat...
dataset1 = 'noaa_rbsn_timeseries' station = '25399' # time_start_synop = '2019-01-01T00:00:00' time_end = '2019-02-28T23:00:00'#datetime.datetime.strftime(datetime.datetime.now(),'%Y-%m-%dT%H:%M:%S') variable = 'temperature' link = 'https://api.planetos.com/v1/datasets/noaa_rbsn_timeseries/stations/{0}?origin=dataset-d...
api-examples/arctic_temperature.ipynb
planet-os/notebooks
mit
Now we read in the station data and from it define the longitude and latitude values, so that we can get data for the same location from the MERRA2 dataset as well.
time_synop = [datetime.datetime.strptime(n['axes']['time'],'%Y-%m-%dT%H:%M:%S') for n in data['entries']][:-54] temp_synop = [n['data']['temperature'] for n in data['entries']][:-54] latitude = data['entries'][0]['axes']['latitude'] longitude = data['entries'][0]['axes']['longitude']
api-examples/arctic_temperature.ipynb
planet-os/notebooks
mit
First, using Basemap, we create a map of the Arctic region and mark the chosen location with a red dot.
plt.figure(figsize=(10,8)) m = Basemap(projection='npstere',boundinglat=60,lon_0=0,resolution='l') x,y = m(longitude,latitude) m.drawcoastlines() m.drawcountries() m.drawstates() m.drawparallels(np.arange(-80.,81.,20.)) m.drawmeridians(np.arange(-180.,181.,20.)) m.shadedrelief() m.scatter(x,y,50,marker='o',color='red',...
api-examples/arctic_temperature.ipynb
planet-os/notebooks
mit
Download the data with the package API Create package objects Send commands for the package creation Download the package files Note that this package has over 30 years of data, so downloading it might take some time
package = package_api.package_api(dh,dataset,variable_names,longitude,longitude,latitude,latitude,time_start,time_end,area_name=area_name) package.make_package() package.download_package()
api-examples/arctic_temperature.ipynb
planet-os/notebooks
mit
Work with downloaded files We start by opening the files with xarray. We also convert kelvin to degrees Celsius.
dd1 = xr.open_dataset(package.local_file_name) dd1['T2MMEAN'] = dd1['T2MMEAN'] - 273.15
api-examples/arctic_temperature.ipynb
planet-os/notebooks
mit
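The kelvin-to-Celsius conversion is a simple offset of 273.15; a minimal sketch of the same operation on plain Python values (illustrative helper, not part of xarray):

```python
def kelvin_to_celsius(t_k):
    """Convert a temperature, or a list of temperatures, from kelvin to degrees Celsius."""
    if isinstance(t_k, (list, tuple)):
        return [t - 273.15 for t in t_k]
    return t_k - 273.15

print(kelvin_to_celsius(273.15))  # 0.0
print(kelvin_to_celsius([250.0, 300.0]))  # roughly [-23.15, 26.85]
```

With xarray the subtraction broadcasts over the whole array, which is why the single line in the snippet above converts the entire T2MMEAN field at once.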