This is called a hierarchical index, which we will revisit later in the section.
If we have sections of data that we do not wish to import (for example, known bad data), we can populate the skiprows argument:
|
pd.read_csv("../data/microbiome.csv", skiprows=[3,4,6]).head()
|
notebooks/Introduction to Pandas.ipynb
|
fonnesbeck/scientific-python-workshop
|
cc0-1.0
|
If we only want to import a small number of rows from, say, a very large data file we can use nrows:
|
pd.read_csv("../data/microbiome.csv", nrows=4)
|
notebooks/Introduction to Pandas.ipynb
|
fonnesbeck/scientific-python-workshop
|
cc0-1.0
|
Alternatively, if we want to process our data in reasonable chunks, the chunksize argument will return an iterable object that can be employed in a data-processing loop. For example, our microbiome data are organized by bacterial phylum, with 15 patients represented in each:
|
data_chunks = pd.read_csv("../data/microbiome.csv", chunksize=15)
mean_tissue = pd.Series({chunk.Taxon.iloc[0]: chunk.Tissue.mean() for chunk in data_chunks})
mean_tissue
|
notebooks/Introduction to Pandas.ipynb
|
fonnesbeck/scientific-python-workshop
|
cc0-1.0
|
Most real-world data is incomplete, with values missing due to incomplete observation, data entry or transcription error, or other reasons. Pandas will automatically recognize and parse common missing data indicators, including NA and NULL.
|
!cat ../data/microbiome_missing.csv
pd.read_csv("../data/microbiome_missing.csv").head(20)
|
notebooks/Introduction to Pandas.ipynb
|
fonnesbeck/scientific-python-workshop
|
cc0-1.0
|
Above, Pandas recognized NA and an empty field as missing data.
|
pd.isnull(pd.read_csv("../data/microbiome_missing.csv")).head(20)
|
notebooks/Introduction to Pandas.ipynb
|
fonnesbeck/scientific-python-workshop
|
cc0-1.0
|
Unfortunately, there will sometimes be inconsistency with the conventions for missing data. In this example, there is a question mark "?" and a large negative number where there should have been a positive integer. We can specify additional symbols with the na_values argument:
|
pd.read_csv("../data/microbiome_missing.csv", na_values=['?', -99999]).head(20)
|
notebooks/Introduction to Pandas.ipynb
|
fonnesbeck/scientific-python-workshop
|
cc0-1.0
|
These can be specified on a column-wise basis using an appropriate dict as the argument for na_values.
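For illustration, here is a minimal sketch of the dict form; the column names and sentinel values below are made up for the example, not taken from the microbiome file:

```python
from io import StringIO
import pandas as pd

# Hypothetical data: '?' marks a missing Tissue value, -99999 a missing Stool value
csv = StringIO(
    "Taxon,Tissue,Stool\n"
    "Firmicutes,?,-99999\n"
    "Proteobacteria,632,305\n"
)
# Dict keys are column names; the values are treated as missing in that column only
df = pd.read_csv(csv, na_values={'Tissue': ['?'], 'Stool': [-99999]})
```

With this mapping, a '?' appearing in the Stool column would be left alone, which is exactly the per-column control the dict form provides.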
Microsoft Excel
Since so much financial and scientific data ends up in Excel spreadsheets (regrettably), Pandas' ability to directly import Excel spreadsheets is valuable. This support is contingent on having one or two dependencies (depending on what version of Excel file is being imported) installed: xlrd and openpyxl (these may be installed with either pip or easy_install).
Importing Excel data to Pandas is a two-step process. First, we create an ExcelFile object using the path of the file:
|
mb_file = pd.ExcelFile('../data/microbiome/MID1.xls')
mb_file
|
notebooks/Introduction to Pandas.ipynb
|
fonnesbeck/scientific-python-workshop
|
cc0-1.0
|
There is now a read_excel convenience function in Pandas that combines these steps into a single call:
|
mb2 = pd.read_excel('../data/microbiome/MID2.xls', sheet_name='Sheet 1', header=None)
mb2.head()
|
notebooks/Introduction to Pandas.ipynb
|
fonnesbeck/scientific-python-workshop
|
cc0-1.0
|
There are several other data formats that can be imported into Python and converted into DataFrames with the help of built-in or third-party libraries. These include JSON, XML, HDF5, relational and non-relational databases, and various web APIs. These are beyond the scope of this tutorial, but are covered in Python for Data Analysis.
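As a quick taste of one of these, pandas can read JSON directly; a minimal sketch (the records here are invented for the example):

```python
from io import StringIO
import pandas as pd

# Two invented records in JSON "records" orientation
json_str = '[{"patient": 1, "count": 632}, {"patient": 2, "count": 136}]'
df_json = pd.read_json(StringIO(json_str))  # one row per record, one column per key
```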
Pandas Fundamentals
This section introduces the new user to the key functionality of Pandas that is required to use the software effectively.
For some variety, we will leave our digestive tract bacteria behind and employ some baseball data.
|
baseball = pd.read_csv("../data/baseball.csv", index_col='id')
baseball.head()
|
notebooks/Introduction to Pandas.ipynb
|
fonnesbeck/scientific-python-workshop
|
cc0-1.0
|
In addition to using loc to select rows and columns by label, pandas also allows indexing by position using the iloc attribute.
So, we can query rows and columns by absolute position, rather than by name:
|
baseball_newind.iloc[:5, 5:8]
|
notebooks/Introduction to Pandas.ipynb
|
fonnesbeck/scientific-python-workshop
|
cc0-1.0
|
Exercise
You can use the isin method to query a DataFrame based upon a list of values as follows:
data['phylum'].isin(['Firmacutes', 'Bacteroidetes'])
Use isin to find all players that played for the Los Angeles Dodgers (LAN) or the San Francisco Giants (SFN). How many records contain these values?
|
# Write your answer here
|
notebooks/Introduction to Pandas.ipynb
|
fonnesbeck/scientific-python-workshop
|
cc0-1.0
|
Recall earlier we imported some microbiome data using two index columns. This created a 2-level hierarchical index:
|
mb = pd.read_csv("../data/microbiome.csv", index_col=['Taxon','Patient'])
mb.head(10)
|
notebooks/Introduction to Pandas.ipynb
|
fonnesbeck/scientific-python-workshop
|
cc0-1.0
|
This can be customized further by specifying how many values need to be present before a row is dropped via the thresh argument.
|
data.loc[7, 'year'] = np.nan
data
data.dropna(thresh=4)
|
notebooks/Introduction to Pandas.ipynb
|
fonnesbeck/scientific-python-workshop
|
cc0-1.0
|
Try running corr on the entire baseball DataFrame to see what is returned:
|
# Write answer here
|
notebooks/Introduction to Pandas.ipynb
|
fonnesbeck/scientific-python-workshop
|
cc0-1.0
|
As Wes warns in his book, it is recommended that binary storage of data via pickle only be used as a temporary storage format, in situations where speed is relevant. This is because there is no guarantee that the pickle format will not change with future versions of Python.
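With that caveat in mind, the round trip itself is simple; a sketch using a temporary file (the frame contents are invented for the example):

```python
import os
import tempfile
import pandas as pd

df = pd.DataFrame({'player': ['womacto01', 'schilcu01'], 'year': [2006, 2006]})
path = os.path.join(tempfile.mkdtemp(), 'frame.pkl')

df.to_pickle(path)               # fast binary snapshot, not a long-term storage format
restored = pd.read_pickle(path)  # dtypes and index round-trip exactly
```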
Advanced Exercise: Compiling Ebola Data
The data/ebola folder contains summarized reports of Ebola cases from three countries during the recent outbreak of the disease in West Africa. For each country, there are daily reports that contain various information about the outbreak in several cities in each country.
From these data files, use pandas to import them and create a single data frame that includes the daily totals of new cases and deaths for each country.
|
# Write your answer here
|
notebooks/Introduction to Pandas.ipynb
|
fonnesbeck/scientific-python-workshop
|
cc0-1.0
|
Generator network
Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
Variable Scope
Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.
We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use tf.variable_scope, you use a with statement:
python
with tf.variable_scope('scope_name', reuse=False):
# code here
Here's more from the TensorFlow documentation to get another look at using tf.variable_scope.
Leaky ReLU
TensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one. For this you can just take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input (x) values is alpha*x, and the output for positive x is x:
$$
f(x) = \max(\alpha x, x)
$$
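As a sanity check on the formula, here is the same function in plain NumPy, independent of the TensorFlow layer used below:

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    # f(x) = max(alpha * x, x): identity for x >= 0, slope alpha for x < 0
    return np.maximum(alpha * x, x)
```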
Tanh Output
The generator has been found to perform best with a $\tanh$ activation on its output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1.
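The rescaling is a one-liner; a quick sketch with toy pixel values:

```python
import numpy as np

pixels = np.array([0.0, 0.25, 0.5, 1.0])  # toy values in MNIST's [0, 1] range
scaled = pixels * 2 - 1                   # now in [-1, 1], matching tanh's range
```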
Exercise: Implement the generator network in the function below. You'll need to return the tanh output. Make sure to wrap your code in a variable scope, with 'generator' as the scope name, and pass the reuse keyword argument from the function to tf.variable_scope.
|
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
''' Build the generator network.
Arguments
---------
z : Input tensor for the generator
out_dim : Shape of the generator output
n_units : Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out: tanh output of the generator
'''
with tf.variable_scope('generator', reuse=reuse):
# Hidden layer
h1 = tf.layers.dense(inputs=z, units=n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(tf.scalar_mul(alpha, h1), h1)
# Logits and tanh output
logits = tf.layers.dense(inputs=h1, units=out_dim, activation=None)
out = tf.tanh(logits)
return out
|
gan_mnist/Intro_to_GANs_Exercises.ipynb
|
SlipknotTN/udacity-deeplearning-nanodegree
|
mit
|
Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
Exercise: Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. Make sure to wrap your code in a variable scope, with 'discriminator' as the scope name, and pass the reuse keyword argument from the function arguments to tf.variable_scope.
|
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
''' Build the discriminator network.
Arguments
---------
x : Input tensor for the discriminator
n_units: Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out, logits:
'''
with tf.variable_scope('discriminator', reuse=reuse):
# Hidden layer
h1 = tf.layers.dense(inputs=x, units=n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(tf.scalar_mul(alpha, h1), h1)
# Logits and sigmoid output
logits = tf.layers.dense(inputs=h1, units=1, activation=None)
out = tf.sigmoid(logits)
return out, logits
|
gan_mnist/Intro_to_GANs_Exercises.ipynb
|
SlipknotTN/udacity-deeplearning-nanodegree
|
mit
|
Build network
Now we're building the network from the functions defined above.
First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.
Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).
Exercise: Build the network from the functions you defined earlier.
|
tf.reset_default_graph()
# Create our input placeholders
input_real, input_z = model_inputs(real_dim=input_size, z_dim=z_size)
# Generator network here
g_model = generator(z=input_z, out_dim=input_size, alpha=alpha, n_units=g_hidden_size)
# g_model is the generator output
# Discriminator network here
d_model_real, d_logits_real = discriminator(x=input_real, alpha=alpha, n_units=d_hidden_size, reuse=False)
d_model_fake, d_logits_fake = discriminator(x=g_model, alpha=alpha, n_units=d_hidden_size, reuse=True)
|
gan_mnist/Intro_to_GANs_Exercises.ipynb
|
SlipknotTN/udacity-deeplearning-nanodegree
|
mit
|
Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.
Then, in the optimizer we pass the variable lists to the var_list keyword argument of the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.
Exercise: Below, implement the optimizers for the generator and discriminator. First you'll need to get a list of trainable variables, then split that list into two lists, one for the generator variables and another for the discriminator variables. Finally, using AdamOptimizer, create an optimizer for each network that updates the network variables separately.
|
# Optimizers
learning_rate = 0.002
# Get the trainable_variables, split into G and D parts
t_vars = tf.trainable_variables()
g_vars = [generatorVar for generatorVar in t_vars if generatorVar.name.startswith('generator')]
d_vars = [discriminatorVar for discriminatorVar in t_vars if discriminatorVar.name.startswith('discriminator')]
d_train_opt = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(g_loss, var_list=g_vars)
|
gan_mnist/Intro_to_GANs_Exercises.ipynb
|
SlipknotTN/udacity-deeplearning-nanodegree
|
mit
|
Training
|
batch_size = 100
epochs = 100
samples = []
losses = []
saver = tf.train.Saver(var_list = g_vars)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
#batch[1] is the MNIST label, but here we don't need it
# Get images, reshape and rescale to pass to D
batch_images = batch[0].reshape((batch_size, 784))
# Scale from [0,1] to [-1,1]
batch_images = batch_images*2 - 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})
_ = sess.run(g_train_opt, feed_dict={input_z: batch_z})
# At the end of each epoch, get the losses and print them out
train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})
train_loss_g = sess.run(g_loss, {input_z: batch_z})
#Alternative way
#train_loss_g = g_loss.eval({input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
# Sample from generator as we're training for viewing afterwards
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
samples.append(gen_samples)
saver.save(sess, './checkpoints/generator.ckpt')
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
|
gan_mnist/Intro_to_GANs_Exercises.ipynb
|
SlipknotTN/udacity-deeplearning-nanodegree
|
mit
|
Lists can be heterogeneous
|
L3 = [True, '2', 3.0, 4]
[type(item) for item in L3]
tuple(L3)
|
Chapter 1 - Python DS Handbook.ipynb
|
suresh/notebooks
|
mit
|
Creating Arrays from lists
|
import numpy as np
np.array([1, 4, 2, 5, 3])
np.array([3.14, 4, 2, 3])
|
Chapter 1 - Python DS Handbook.ipynb
|
suresh/notebooks
|
mit
|
Creating Arrays from scratch
Especially for larger arrays, it is more efficient to create arrays from scratch using routines built into NumPy. Here are some examples:
|
# create a random integer array
np.random.randint(0, 10, (3, 3))
# create a 3x3 array of uniform random values
np.random.random((3, 3))
# create 3x3 array of normally distributed data
np.random.normal(0, 1, (3,3))
|
Chapter 1 - Python DS Handbook.ipynb
|
suresh/notebooks
|
mit
|
LV2 recovery system analysis
|
# Analyzing LV2 Telemetry and IMU data
# From git, Launch 12
#######################################
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from scipy.integrate import simps
%matplotlib inline
# Graphing helper function
def setup_graph(title='',x_label='', y_label='', fig_size=None):
fig = plt.figure()
if fig_size is not None:
fig.set_size_inches(fig_size[0],
fig_size[1])
ax = fig.add_subplot(111)
ax.set_title(title)
ax.set_xlabel(x_label)
ax.set_ylabel(y_label)
########################################
# Data from the IMU
data = pd.read_csv('IMU_data.csv')
time = data[' [1]Timestamp'].tolist()
time = np.array(time)
# Umbilical disconnect event
t_0 = 117853569585227
# Element wise subtraction
time = np.subtract(time, t_0)
# Convert from ns to s
time = np.divide(time, 1e9)
acceleration = data[' [6]Acc_X'].tolist()
acceleration = np.array(acceleration)
g = 9.81  # standard gravity (m/s^2), used to remove the static offset from Acc_X
acceleration = np.subtract(acceleration, g)
########################################
# Data from the Telemetrum
data_tel = pd.read_csv('Telemetry_data.csv')
time_tel = data_tel['time'].tolist()
time_tel = np.array(time_tel)
acceleration_tel = data_tel['acceleration'].tolist()
acceleration_tel = np.array(acceleration_tel)
setup_graph('Accel vs. Time', 'Time (s)', 'Accel (m/s^2)', (16,7))
plt.plot(time,acceleration,'b-')
plt.plot(time_tel, acceleration_tel,'r-')
plt.legend(['IMU','Telemetry'])
plt.show()
# Drogue analysis
# Plot of only the duration of impulse
setup_graph('Accel vs. Time of Drogue Impact', 'Time (s)', 'Accel (m/s^2)', (10,7))
plt.plot(time[len(time)-8700:35500], acceleration[len(acceleration)-8700:35500],'b-')
plt.plot(time_tel[len(time_tel)-9808:3377], acceleration_tel[len(acceleration_tel)-9808:3377],'r-')
plt.legend(['IMU','Telemetry'])
plt.show()
|
Useful_Misc/Recovery_Initial_Calculations.ipynb
|
psas/lv3.0-recovery
|
gpl-3.0
|
suggestions
Your $dt$ value is a bit off. Also, $dt$ changes, especially when the drogue is deploying.
The other thing that I'd be careful about is your assumption that the deployment force is just the $\Delta v$ for deployment divided by the time. That would mean the force is as spread out as possible, which isn't true.
Also, I think it's a reasonable guess that the IMU was zeroed on the launch pad. This means that
1. The acceleration data isn't quite representative of the load on the parachute/rocket.
1. We can get a calibration factor for the IMU data.
At apogee, right before the drogue deploys, the rocket is basically in freefall, so any accelerometer "should" read $0$. Whatever the IMU reads at apogee is therefore equal to $1 g$. We can then subtract out that value and scale the data so that we get $9.81$ on the launch pad.
|
# how to get the values for dt:
diffs = [t2 - t1 for t1, t2 in zip(time[:-1], time[1:])]
#plt.figure()
#plt.plot(time[1:], diffs)
#plt.title('values of dt')
print('mean dt value:', np.mean(diffs))
# marginally nicer way to keep track of time windows:
ind_drogue= [i for i in range(len(time)) if ((time[i]>34.5) & (time[i]<39))]
# indices where we're basically in freefall:
ind_vomit= [i for i in range(len(time)) if ((time[i]>32) & (time[i]<34.5))]
offset_g= np.mean(acceleration[ind_vomit])
accel_nice = (acceleration-offset_g)*(-9.81/offset_g)
deltaV= sum([accel_nice[i]*diffs[i] for i in ind_drogue])
print('change in velocity (area under the curve):', deltaV)
plt.figure()
plt.plot(time[:2500], acceleration[:2500])
plt.title('launch pad acceleration')
print('"pretty much zeroed" value on the launch pad:', np.mean(acceleration[:2500]))
# Filtering out noise from IMU and Telemetry data w/ scipy
# Using these graphs to estimate max possible impulse felt during LV2
# From IMU
from scipy.signal import savgol_filter
accel_filtered = savgol_filter(acceleration, 201, 2)
# From telemetry
# ***Assuming the filter parameters will work for the telemetry data too***
accel_filtered_tel = savgol_filter(acceleration_tel, 201, 2)
# Plot of filtered IMU and telemetry for comparison
setup_graph('Accel vs. Time (filtered)', 'Time (s)', 'Accel_filtered (m/s^2)', (16,7))
plt.plot(time[1:], accel_filtered[1:],'b-')
plt.plot(time_tel[1:], accel_filtered_tel[1:],'r-')
plt.legend(['IMU','Telemetry'])
plt.show()
# Looks pretty darn close
# Wanted the filtered telemetry instead of IMU because IMU does not include landing and we need terminal velocity
print ("Estimating max possible impulse during LV2 \n")
# Estimate Area Under Curve
areaCurve = simps(abs(accel_filtered), dx=0.00125)
print ("Area under entire filtered curve (m/s) %3.2f" %areaCurve)
# Calculating area under drogue acceleration curve to find impulse
areaCurvesmall = simps(abs(accel_filtered[len(accel_filtered)-8700:35500]), dx=0.00125)
print ("Area under accel curve during drogue impact (m/s) %3.2f" %areaCurvesmall)
# Impulse = mass * integral(accel dt)
impulse = m_tot2 * areaCurvesmall
print ("Impulse (N*s) %3.2f" %impulse)
# If deploy time = approx. 3.5 sec
deploy_time = 3.5
force = impulse/deploy_time # not a conservative assumption
print ("Force (N) %3.2f" %force)
# Finding terminal velocity with height and time
# Assuming average velocity is equivalent to terminal velocity
##because so much of the time after the main is deployed is at terminal velocity
# Using telemetry data
# Main 'chute deployment height (m)
main_deploy_height = 251.6
# Main 'chute deployment time (s)
main_deploy_time = 686.74
# Final height (m)
final_height = 0
# Final time
final_time = 728.51
# Average (terminal) velocity (m/s)
terminal_speed = abs((main_deploy_height - final_height)/(main_deploy_time - final_time))
print ("Terminal speed %3.2f" % terminal_speed)
# This seems accurate, 6 m/s is approx. 20 ft/s which is ideal
# LV2 drag calculations
#######################################
# If we say at terminal velocity, accel = 0,
# Therefore drag force (D2) = weight of system (w_tot2) (N)
# Because sum(forces) = ma = 0, then
D2 = w_tot2
print ("D2 (N) %3.2f" % D2)
# Calculated drag coefficient (Cd2), using D2
Cd2 = (2*D2)/(p*A2*(terminal_speed**2))
print ("Cd2 %3.2f" % Cd2)
# Drag coefficient (Cd_or), from OpenRocket LV2.3
# Compare to AIAA source [4], 0.60
# Compare to calculated from LV2 data, Cd2
Cd_or = 0.59
print ("Cd_or %3.2f" % Cd_or)
# Calculated drag (D_or) (N), from OpenRocket LV2.3
D_or = (Cd_or*p*A2*(terminal_speed**2))/2
print ("D_or (N) %3.2f" % D_or)
|
Useful_Misc/Recovery_Initial_Calculations.ipynb
|
psas/lv3.0-recovery
|
gpl-3.0
|
LV3 Recovery System
=======================================
Overview
Deployment design
<img src='Sketch_ Deployment_Design.jpg' width="450">
Top-level design
<img src='Sketch_ Top-Level_Design.jpg' width="450">
<img src='initial_idea_given_info.jpg' width="450">
Step-by-step
<img src='step_by_step.jpg' width="450">
Deciding on a parachute
Estimating necessary area
|
## **Need to decide on FOS for the weight and then use estimator to calculate necessary area** ##
# Calculating area needed for LV3 parachutes
# Both drogue and main
# From OpenRocket file LV3_L13a_ideal.ork
# Total mass (kg), really rough estimate
m_tot3 = 27.667
# Total weight (N)
w_tot3 = m_tot3 * g
print(w_tot3)
# Assuming drag is equivalent to weight of rocket system
D3 = w_tot3
# v_f3 (m/s), ideal terminal(impact) velocity
v_f3 = 6
# Cd from AIAA example (source 4)
Cd_aiaa = 0.60
# Cd from OpenRocket LV3
Cd_or3 = 0.38
# Printing previous parachute area for reference
print ("Previous parachute area %3.2f" %A2)
# Need to work on this...
"""
# Area needed using LV2 calculations (m^2)
A3_2 = (D3*2)/(Cd2*p*(v_f3**2))
print ("Area using Cd2 %3.2f" %A3_2)
# Area needed using LV2 OpenRocket (m^2)
A3_or2 = (D3*2)/(Cd_or*p*(v_f3**2))
print ("Area using Cd_or %3.2f" %A3_or2)
# Area needed using aiaa info (m^2)
A3_aiaa = (D3*2)/(Cd_aiaa*p*(v_f3**2))
print ("Area using Cd_aiaa %3.2f" %A3_aiaa)
# Area needed using LV3 OpenRocket
A3_or3 = (D3*2)/(Cd_or3*p*(v_f3**2))
print ("Area using Cd_or3 %3.2f" %A3_or3)
# Area estimater
A3 = (D3*2)/(1.5*p*(v_f3**2))
print ("Area estimate %3.2f" %A3)
import math
d_m = (math.sqrt(A3_or3/math.pi))*2
d_ft = d_m / 0.3048
print(d_ft)
"""
|
Useful_Misc/Recovery_Initial_Calculations.ipynb
|
psas/lv3.0-recovery
|
gpl-3.0
|
Define Network
|
with tf.name_scope("data"):
X = tf.placeholder(tf.float32, [None, IMG_SIZE, IMG_SIZE, 1], name="X")
Y = tf.placeholder(tf.float32, [None, NUM_CLASSES], name="Y")
def conv2d(x, W, b, strides=1):
x = tf.nn.conv2d(x, W, strides=[1, strides, strides, 1], padding="SAME")
x = tf.nn.bias_add(x, b)
return tf.nn.relu(x)
def maxpool2d(x, k=2):
return tf.nn.max_pool(x, ksize=[1, k, k, 1], strides=[1, k, k, 1],
padding="SAME")
def network(x, dropout=0.75):
# CONV-1: 5x5 kernel, channels 1 => 32
W1 = tf.Variable(tf.random_normal([5, 5, 1, 32]))
b1 = tf.Variable(tf.random_normal([32]))
conv1 = conv2d(x, W1, b1)
# MAXPOOL-1
conv1 = maxpool2d(conv1, 2)
# CONV-2: 5x5 kernel, channels 32 => 64
W2 = tf.Variable(tf.random_normal([5, 5, 32, 64]))
b2 = tf.Variable(tf.random_normal([64]))
conv2 = conv2d(conv1, W2, b2)
# MAXPOOL-2
conv2 = maxpool2d(conv2, k=2)
# FC1: input=(None, 7, 7, 64), output=(None, 1024)
flatten = tf.reshape(conv2, [-1, 7*7*64])
W3 = tf.Variable(tf.random_normal([7*7*64, 1024]))
b3 = tf.Variable(tf.random_normal([1024]))
fc1 = tf.add(tf.matmul(flatten, W3), b3)
fc1 = tf.nn.relu(fc1)
# Apply Dropout
fc1 = tf.nn.dropout(fc1, dropout)
# Output, class prediction (1024 => 10)
W4 = tf.Variable(tf.random_normal([1024, NUM_CLASSES]))
b4 = tf.Variable(tf.random_normal([NUM_CLASSES]))
pred = tf.add(tf.matmul(fc1, W4), b4)
return pred
# define network
Y_ = network(X, 0.75)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
logits=Y_, labels=Y))
optimizer = tf.train.AdamOptimizer(
learning_rate=LEARNING_RATE).minimize(loss)
correct_pred = tf.equal(tf.argmax(Y, 1), tf.argmax(Y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
tf.summary.scalar("loss", loss)
tf.summary.scalar("accuracy", accuracy)
# Merge all summaries into a single op
merged_summary_op = tf.summary.merge_all()
|
src/tensorflow/02-mnist-cnn.ipynb
|
sujitpal/polydlot
|
apache-2.0
|
Train Network
|
history = []
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
saver = tf.train.Saver()
# tensorboard viz
logger = tf.summary.FileWriter(LOG_DIR, sess.graph)
train_gen = datagen(Xtrain, ytrain, BATCH_SIZE)
num_batches = len(Xtrain) // BATCH_SIZE
for epoch in range(NUM_EPOCHS):
total_loss, total_acc = 0., 0.
for bid in range(num_batches):
Xbatch, Ybatch = next(train_gen)
_, batch_loss, batch_acc, Ybatch_, summary = sess.run(
[optimizer, loss, accuracy, Y_, merged_summary_op],
feed_dict={X: Xbatch, Y:Ybatch})
# write to tensorboard
logger.add_summary(summary, epoch * num_batches + bid)
# accumulate to print once per epoch
total_acc += batch_acc
total_loss += batch_loss
total_acc /= num_batches
total_loss /= num_batches
print("Epoch {:d}/{:d}: loss={:.3f}, accuracy={:.3f}".format(
(epoch + 1), NUM_EPOCHS, total_loss, total_acc))
saver.save(sess, MODEL_FILE, (epoch + 1))
history.append((total_loss, total_acc))
logger.close()
losses = [x[0] for x in history]
accs = [x[1] for x in history]
plt.subplot(211)
plt.title("Accuracy")
plt.plot(accs)
plt.subplot(212)
plt.title("Loss")
plt.plot(losses)
plt.tight_layout()
plt.show()
|
src/tensorflow/02-mnist-cnn.ipynb
|
sujitpal/polydlot
|
apache-2.0
|
Visualize with Tensorboard
We have also requested the loss and accuracy scalars to be logged in our computational graph, so the above charts can also be seen from the built-in tensorboard tool. The scalars are logged to the directory given by LOG_DIR, so we can start the tensorboard tool from the command line:
$ cd ../../data
$ tensorboard --logdir=tf-mnist-cnn-logs
Starting TensorBoard 54 at http://localhost:6006
(Press CTRL+C to quit)
We can then view the [visualizations on tensorboard](http://localhost:6006).
Evaluate Network
|
BEST_MODEL = os.path.join(DATA_DIR, "tf-mnist-cnn-5")
saver = tf.train.Saver()
ys, ys_ = [], []
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
saver.restore(sess, BEST_MODEL)
test_gen = datagen(Xtest, ytest, BATCH_SIZE)
val_loss, val_acc = 0., 0.
num_batches = len(Xtest) // BATCH_SIZE
for _ in range(num_batches):
Xbatch, Ybatch = next(test_gen)
Ybatch_ = sess.run(Y_, feed_dict={X: Xbatch, Y:Ybatch})
ys.extend(np.argmax(Ybatch, axis=1))
ys_.extend(np.argmax(Ybatch_, axis=1))
acc = accuracy_score(ys_, ys)
cm = confusion_matrix(ys_, ys)
print("Accuracy: {:.4f}".format(acc))
print("Confusion Matrix")
print(cm)
|
src/tensorflow/02-mnist-cnn.ipynb
|
sujitpal/polydlot
|
apache-2.0
|
List Comprehensions
List comprehensions provide a concise way to create lists (arrays). Common applications are to make new lists where each element is the result of some operations applied to each member of another sequence.
For example: Create the list: [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
|
squares = [] # create a blank list
for x in range(10): # for loop 0 -> 9
squares.append(x**2) # calculate x**2 for each x, add to end of list
squares
|
Python_Astroplan_Constraints.ipynb
|
UWashington-Astro300/Astro300-A17
|
mit
|
You can do the same thing with:
|
squares = [x**2 for x in range(10)]
squares
|
Python_Astroplan_Constraints.ipynb
|
UWashington-Astro300/Astro300-A17
|
mit
|
You can include if statements:
|
even_squares = []
for x in range(10):
if (x % 2 == 0):
even_squares.append(x**2)
even_squares
|
Python_Astroplan_Constraints.ipynb
|
UWashington-Astro300/Astro300-A17
|
mit
|
You can do the same thing with:
|
even_squares = [x**2 for x in range(10) if (x % 2 == 0)]
even_squares
|
Python_Astroplan_Constraints.ipynb
|
UWashington-Astro300/Astro300-A17
|
mit
|
Now to observations
Let us start with an external list of target objects:
|
target_table = QTable.read('ObjectList.csv', format='ascii.csv')
target_table
targets = [FixedTarget(coord=SkyCoord(ra = RA*u.hourangle, dec = DEC*u.deg), name=Name)
for Name, RA, DEC in target_table]
targets
|
Python_Astroplan_Constraints.ipynb
|
UWashington-Astro300/Astro300-A17
|
mit
|
Observing Night
You are the most junior member of the team, so you get stuck observing on New Year's Eve
|
observe_date = Time("2018-01-01", format='iso')
|
Python_Astroplan_Constraints.ipynb
|
UWashington-Astro300/Astro300-A17
|
mit
|
But, you get to observe in Hawaii
|
my_timezone = pytz.timezone('US/Hawaii')
my_location = Observer.at_site('gemini_north')
observe_start = my_location.sun_set_time(observe_date, which='nearest')
observe_end = my_location.sun_rise_time(observe_date, which='next')
print("Observing starts at {0.iso} UTC".format(observe_start))
print("Observing ends at {0.iso} UTC".format(observe_end))
print("Observing starts at {0} local".format(observe_start.to_datetime(my_timezone)))
print("Observing ends at {0} local".format(observe_end.to_datetime(my_timezone)))
# A complete list of built-in observatories can be found by:
#EarthLocation.get_site_names()
observing_length = (observe_end - observe_start).to(u.h)
print("You can observe for {0:.1f} tonight".format(observing_length))
observing_range = [observe_start, observe_end]
|
Python_Astroplan_Constraints.ipynb
|
UWashington-Astro300/Astro300-A17
|
mit
|
Plot the objects
|
%matplotlib inline
import matplotlib.pyplot as plt
from astroplan import time_grid_from_range
from astroplan.plots import plot_sky, plot_airmass
time_grid = time_grid_from_range(observing_range)
fig,ax = plt.subplots(1,1)
fig.set_size_inches(10,10)
fig.tight_layout()
for my_object in targets:
ax = plot_sky(my_object, my_location, time_grid)
ax.legend(loc=0,shadow=True);
fig,ax = plt.subplots(1,1)
fig.set_size_inches(10,5)
fig.tight_layout()
for my_object in targets:
ax = plot_airmass(my_object, my_location, time_grid)
ax.legend(loc=0,shadow=True);
|
Python_Astroplan_Constraints.ipynb
|
UWashington-Astro300/Astro300-A17
|
mit
|
Observing Constraints
|
from astroplan import AltitudeConstraint, AirmassConstraint
from astroplan import observability_table
constraints = [AltitudeConstraint(20*u.deg, 80*u.deg)]
observing_table = observability_table(constraints, my_location, targets, time_range=observing_range)
print(observing_table)
|
Python_Astroplan_Constraints.ipynb
|
UWashington-Astro300/Astro300-A17
|
mit
|
Let us add another constraint
|
constraints.append(AirmassConstraint(2))
observing_table = observability_table(constraints, my_location, targets, time_range=observing_range)
print(observing_table)
|
Python_Astroplan_Constraints.ipynb
|
UWashington-Astro300/Astro300-A17
|
mit
|
Additional Constraints
from astroplan import &lt;ConstraintName&gt;
AtNightConstraint() - Constrain the Sun to be below the horizon.
MoonIlluminationConstraint(min, max) - Constrain the fractional illumination of the Moon.
MoonSeparationConstraint(min, max) - Constrain the separation between the Moon and the targets.
SunSeparationConstraint(min, max) - Constrain the separation between the Sun and the targets.
|
from astroplan import moon_illumination
moon_illumination(observe_start)
from astroplan import MoonSeparationConstraint
constraints.append(MoonSeparationConstraint(45*u.deg))
observing_table = observability_table(constraints, my_location, targets, time_range=observing_range)
print(observing_table)
fig,ax = plt.subplots(1,1)
fig.set_size_inches(10,5)
fig.tight_layout()
for i, my_object in enumerate(targets):
if observing_table['ever observable'][i]:
ax = plot_airmass(my_object, my_location, time_grid)
ax.legend(loc=0,shadow=True);
|
Python_Astroplan_Constraints.ipynb
|
UWashington-Astro300/Astro300-A17
|
mit
|
Calculate & Plot Correlations
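Before looking at the Spark output, it helps to recall what Pearson's r actually computes; here is a stdlib-only sketch (the data is illustrative, not the car dataset):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))  # ~1.0 for perfectly linear data
```

Spark's `Statistics.corr` computes the same quantity (distributed across the RDD partitions) when `method="pearson"`.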
|
hp = cars_rdd.map(lambda x: x[0][2])
weight = cars_rdd.map(lambda x: x[0][10])
print '%2.3f' % Statistics.corr(hp, weight, method="pearson")
print '%2.3f' % Statistics.corr(hp, weight, method="spearman")
print hp
import pandas as pd
from ggplot import *
%matplotlib inline
df = pd.DataFrame({'HP': hp.collect(),'Weight':weight.collect()})
ggplot(df, aes(x='HP', y='Weight')) +\
geom_point() + labs(title="Car-Attributes", x="Horsepower", y="Weight")
|
extras/010-Linear-Regression.ipynb
|
dineshpackt/Fast-Data-Processing-with-Spark-2
|
mit
|
Coding Exercise
Calculate the correlation between Rear Axle Ratio & the Width
Plot & verify
|
ra_ratio = cars_rdd.map(lambda x: x[0][5])
width = cars_rdd.map(lambda x: x[0][9])
print '%2.3f' % Statistics.corr(ra_ratio, width, method="pearson")
print '%2.3f' % Statistics.corr(ra_ratio, width, method="spearman")
df = pd.DataFrame({'RA Ratio': ra_ratio.collect(),'Width':width.collect()})
ggplot(df, aes(x='RA Ratio', y='Width')) +\
geom_point() + labs(title="Car-Attributes", x="RA Ratio", y="Width")
|
extras/010-Linear-Regression.ipynb
|
dineshpackt/Fast-Data-Processing-with-Spark-2
|
mit
|
Linear Regression
|
from pyspark.mllib.regression import LabeledPoint
from pyspark.mllib.regression import LinearRegressionWithSGD
from pyspark.mllib.regression import LassoWithSGD
from pyspark.mllib.regression import RidgeRegressionWithSGD
from numpy import array
data = [
LabeledPoint(0.0, [0.0]),
LabeledPoint(10.0, [10.0]),
LabeledPoint(20.0, [20.0]),
LabeledPoint(30.0, [30.0])
]
lrm = LinearRegressionWithSGD.train(sc.parallelize(data), initialWeights=array([1.0]))
print lrm
print lrm.weights
print lrm.intercept
lrm.predict([40])
data_test = [
LabeledPoint(5.0, [5.0]),
LabeledPoint(15.0, [15.0]),
LabeledPoint(25.0, [25.0]),
LabeledPoint(35.0, [35.0])
]
data_test_rdd = sc.parallelize(data_test)
valuesAndPreds = data_test_rdd.map(lambda p: (p.label, lrm.predict(p.features)))
#
MSE = valuesAndPreds.map(lambda (v, p): (v - p)**2).reduce(lambda x, y: x + y) / valuesAndPreds.count()
print("Mean Squared Error = " + str(MSE))
valuesAndPreds.take(10)
|
extras/010-Linear-Regression.ipynb
|
dineshpackt/Fast-Data-Processing-with-Spark-2
|
mit
|
TIP: Step size is important
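Step-size sensitivity is a generic property of gradient descent, not of Spark. A stdlib-only sketch (hypothetical `fit_w` helper, fitting noise-free y = x data) shows a too-large step diverging while a small one converges:

```python
def fit_w(xs, ys, step, iters=100):
    """Gradient descent for y = w * x (no intercept), minimizing mean squared error."""
    w = 0.0
    n = len(xs)
    for _ in range(iters):
        # d/dw of mean((w*x - y)^2)
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n
        w -= step * grad
    return w

xs = [0.0, 10.0, 20.0, 30.0]
ys = [0.0, 10.0, 20.0, 30.0]   # true relationship: y = 1.0 * x
print(fit_w(xs, ys, step=1.0))    # huge magnitude: the weight overshoots and diverges
print(fit_w(xs, ys, step=0.001))  # close to the true slope 1.0
```

With `step=1.0` each update overshoots the minimum by a growing amount, which is the same blow-up the Spark run below exhibits with its default step size.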
|
data = [
LabeledPoint(0.0, [0.0]),
LabeledPoint(9.0, [10.0]),
LabeledPoint(22.0, [20.0]),
LabeledPoint(32.0, [30.0])
]
lrm = LinearRegressionWithSGD.train(sc.parallelize(data), initialWeights=array([1.0])) # should be 1.09x -0.60
# Default step size of 1.0 will diverge
print "Step Size 1.0 (Default)"
print lrm
print lrm.weights
print lrm.intercept
print "%3.3f" % lrm.predict([40])
lrm = LinearRegressionWithSGD.train(sc.parallelize(data), initialWeights=array([1.0]), step=0.01) # should be 1.09x -0.60
# Default step size of 1.0 will diverge
print
print "Step Size 0.01"
print lrm
print lrm.weights
print lrm.intercept
print "%3.3f" % lrm.predict([40])
|
extras/010-Linear-Regression.ipynb
|
dineshpackt/Fast-Data-Processing-with-Spark-2
|
mit
|
Step Size 1.0 (Default)
(weights=[-2.4414455467e+173], intercept=0.0)
[-2.4414455467e+173]
0.0
-9765782186791751487210715088039060352220111253693716482748893621300180659099465516818172090556066502587581239328158959270667383329208729580664342023823383430423411112418476032.000
Step Size 0.01
(weights=[1.06428571429], intercept=0.0)
[1.06428571429]
0.0
42.571
|
data = [
LabeledPoint(18.9, [3910.0]),
LabeledPoint(17.0, [3860.0]),
LabeledPoint(20.0, [4200.0]),
LabeledPoint(16.6, [3660.0])
]
lrm = LinearRegressionWithSGD.train(sc.parallelize(data), step=0.00000001) # should be ~ 0.006582x -7.595170
print lrm
print lrm.weights
print lrm.intercept
lrm.predict([4000])
|
extras/010-Linear-Regression.ipynb
|
dineshpackt/Fast-Data-Processing-with-Spark-2
|
mit
|
(weights=[0.00439009869891], intercept=0.0)
[0.00439009869891]
0.0
17.560394795638977
Homework
Convert the car data to labelled points
Partition to Train & Test
Train the three Linear Models
Calculate the MSE for the three models
|
from pyspark.mllib.regression import LabeledPoint
def parse_car_data(x):
# return labelled point
return LabeledPoint(x[0][0],[ x[0][1],x[0][2],x[0][3],x[0][4],x[0][5],
x[0][6],x[0][7],x[0][8],x[0][9],x[0][10],x[0][11] ])
car_rdd_lp = cars_rdd.map(lambda x: parse_car_data(x))
print car_rdd_lp.count()
print car_rdd_lp.first().label
print car_rdd_lp.first().features
|
extras/010-Linear-Regression.ipynb
|
dineshpackt/Fast-Data-Processing-with-Spark-2
|
mit
|
30
18.9
[350.0,165.0,260.0,8.0,2.55999994278,4.0,3.0,200.300003052,69.9000015259,3910.0,1.0]
|
car_rdd_train = car_rdd_lp.filter(lambda x: x.features[9] <= 4000)
car_rdd_train.count()
car_rdd_test = car_rdd_lp.filter(lambda x: x.features[9] > 4000)
car_rdd_test.count()
car_rdd_train.take(5)
car_rdd_test.take(5)
lrm = LinearRegressionWithSGD.train(car_rdd_train, step=0.000000001)
print lrm
print lrm.weights
print lrm.intercept
valuesAndPreds = car_rdd_test.map(lambda p: (p.label, lrm.predict(p.features)))
MSE = valuesAndPreds.map(lambda (v, p): (v - p)**2).reduce(lambda x, y: x + y) / valuesAndPreds.count()
print("Mean Squared Error = " + str(MSE))
|
extras/010-Linear-Regression.ipynb
|
dineshpackt/Fast-Data-Processing-with-Spark-2
|
mit
|
1.6.0 (12/17/15) : Mean Squared Error = 221.828342627
|
valuesAndPreds.take(20)
lrm = LassoWithSGD.train(car_rdd_train, step=0.000000001)
print lrm.weights
print lrm.intercept
valuesAndPreds = car_rdd_test.map(lambda p: (p.label, lrm.predict(p.features)))
MSE = valuesAndPreds.map(lambda (v, p): (v - p)**2).reduce(lambda x, y: x + y) / valuesAndPreds.count()
print("Mean Squared Error = " + str(MSE))
|
extras/010-Linear-Regression.ipynb
|
dineshpackt/Fast-Data-Processing-with-Spark-2
|
mit
|
[7.99133570468e-05,4.11481587054e-05,6.19949476655e-05,3.16472476808e-06,1.2330557013e-06,8.41105433668e-07,1.37323711608e-06,6.74789184347e-05,2.56204587207e-05,0.00112604747864,1.93481200483e-07]
0.0
Mean Squared Error = 105.857623024
|
valuesAndPreds.take(20)
lrm = RidgeRegressionWithSGD.train(car_rdd_train, step=0.000000001)
print lrm.weights
print lrm.intercept
valuesAndPreds = car_rdd_test.map(lambda p: (p.label, lrm.predict(p.features)))
MSE = valuesAndPreds.map(lambda (v, p): (v - p)**2).reduce(lambda x, y: x + y) / valuesAndPreds.count()
print("Mean Squared Error = " + str(MSE))
|
extras/010-Linear-Regression.ipynb
|
dineshpackt/Fast-Data-Processing-with-Spark-2
|
mit
|
From analysing the Newsday archive website we see that the URLs follow a parsable convention:
http://www.newsday.co.tt/archives/YYYY-M-DD.html
So our general approach will be as follows:
1. Generate dates in the expected form between a starting and an ending date
2. Test that the generated dates produce valid URLs (refine step 1 based on the results)
3. Read the content and process it based on our goal for scraping the page
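One wrinkle: the archive URLs use non-zero-padded month and day fields (2017-2-2, not 2017-02-02), and strftime's `%-` flag is platform-specific. A small portable sketch (hypothetical helper name) builds the URLs from the date fields directly:

```python
from datetime import date, timedelta

def newsday_archive_urls(start, num_days):
    """Yield archive URLs for num_days dates counting back from start."""
    base = "http://www.newsday.co.tt/archives/"
    for offset in range(num_days):
        d = start - timedelta(days=offset)
        # month and day without zero padding, matching the site's convention
        yield "{0}{1}-{2}-{3}.html".format(base, d.year, d.month, d.day)

for url in newsday_archive_urls(date(2017, 2, 2), 3):
    print(url)
# http://www.newsday.co.tt/archives/2017-2-2.html
# http://www.newsday.co.tt/archives/2017-2-1.html
# http://www.newsday.co.tt/archives/2017-1-31.html
```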
|
# Step 1 - create a function to generates a list(array) of dates
def genDatesNewsDay(start_date = date.today(), num_days = 3):
    # date_list = [start_date - timedelta(days=x) for x in range(0, num_days)] # generate a list of dates
    # We expand the above one-liner below for beginners' understanding
date_list = []
for d in range(0, num_days):
temp = start_date - timedelta(days=d)
        date_list.append(temp.strftime('%Y-%-m-%-d')) # unpadded month and day (the %- flag is Linux/macOS only); see http://strftime.org/
return date_list
# Step 2 -Test the generated URL to ensure they point to
def traverseDatesNewsDay(func, start_date = date.today(), num_days = 3):
base_url="http://www.newsday.co.tt/archives/"
dates_str_list = genDatesNewsDay(start_date, num_days)
for date in dates_str_list:
url = base_url + date
func(url)
def printUrl(url):
    print(url)
traverseDatesNewsDay(printUrl)
from dateutil.relativedelta import relativedelta
# http://www.guardian.co.tt/archive/2017-02?page=3
base_url = "http://www.guardian.co.tt/archive/"
# print date.today().strftime("%Y-%-m")
dates_str_list = []
page_content_list = []
for i in range(0, 12):
d = date.today() - relativedelta(months=+i)
page_url = base_url + d.strftime("%Y-%-m")
dates_str_list.append(page_url)
try:
page_content_list.append( urllib.urlopen(page_url).read() )
except:
print "Unable to find content for {0}".format(page_url)
print "Generated {0} urls and retrieved {1} pages".format(len(dates_str_list), len(page_content_list))
url = dates_str_list[0]
user_agent = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3004.3 Safari/537.36"
accept_language = "en-GB,en-US;q=0.8,en;q=0.6"
request = urllib2.Request("http://www.guardian.co.tt/archive/2017-2")
request.add_header('User-Agent', user_agent)
request.add_header('Accept-Language', accept_language)
content = urllib2.build_opener().open(request).read()
def fetch_content(url):
user_agent="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3004.3 Safari/537.36"
accept_language="en-GB,en-US;q=0.8,en;q=0.6"
request = urllib2.Request(url)
request.add_header('User-Agent', user_agent)
request.add_header('Accept-Language', accept_language)
content = urllib2.build_opener().open(request).read()
return content
beau = BeautifulSoup(content, "html5lib")
main_block = beau.find(id="block-system-main")
links = main_block.find_all("div", class_="view-content")[0].find_all('a')
last = main_block.find("li", class_="pager-last last")
max_pages = int(last.find("a")['href'].split("=")[1])
pages_list = range(1, max_pages+1)
# len(links)
# url = "http://www.guardian.co.tt/archive/2017-2"
# page = pages_list[0]
# url = "{0}?page={1}".format(url, page)
# content = fetch_content(url)
base_url = "http://www.guardian.co.tt/"
stories = []
stories_links = []
for pg in links:
url = base_url + pg['href']
stories_links.append(url)
stories.append( fetch_content(url) )
first = True
emo_count = {
"anger" : 0,
"disgust": 0,
"fear" : 0,
"joy" : 0,
"sadness": 0
}
socio_count = {
"openness_big5": 0,
"conscientiousness_big5": 0,
"extraversion_big5" : 0,
"agreeableness_big5" : 0,
"emotional_range_big5": 0
}
for story in stories:
beau = BeautifulSoup(story, "html5lib")
# main_block = beau.find("h1", class_="title")
paragraphs = beau.find(id="block-system-main").find_all("p")
page_text = ""
for p in paragraphs:
page_text += p.get_text()
tone_analyzer = getAnalyser()
res = tone_analyzer.tone(page_text)
tone = res['document_tone']['tone_categories']
emo = tone[0]['tones'] # we want the emotional tone
soci= tone[2]['tones'] # we also want the social tone
e_res = processTone(emo)
emo_count[e_res['tone_id']] += 1
s_res = processTone(soci)
socio_count[s_res['tone_id']] += 1
for e in emo_count:
print("{0} articles were classified with the emotion {1}".format(emo_count[e], e))
for s in socio_count:
print("{0} articles were classified as {1}".format(socio_count[s], s))
# Step 3 - Read content and process page
def processPage(page_url):
print("Attempting to read content from {0}".format(page_url))
page_content = urllib.urlopen(page_url).read()
beau = BeautifulSoup(page_content, "html5lib")
tables = beau.find_all("table") #https://www.crummy.com/software/BeautifulSoup/bs4/doc/#find-all
for i in range(0,13):
named_sec = tables[i].h3
if named_sec:
print("i {0} produced {1}".format(i,named_sec))
article_links = beau.find_all("a", 'title')
print("Found {0} tables and {1} articles".format(len(tables), len(article_links)))
# traverseDatesNewsDay(processPage,num_days = 1)
|
Scrape Newsday.ipynb
|
kyledef/jammerwebscraper
|
mit
|
The Purpose (Goal) of Scraping
Our main purpose in developing this exercise was to test the claim that the majority of published news is negative. To do this we need to capture the sentiment of the information extracted from each link. While we could develop sentiment-analysis tools in Python, training and validating them is too much work at this time. We therefore use the IBM Watson Tone Analyzer API, which we selected because it provides a greater amount of detail than a binary positive or negative result.
To use the Watson API from Python:
We installed the pip package
bash
pip install --upgrade watson-developer-cloud
We created an account (free for 30 days)
https://tone-analyzer-demo.mybluemix.net/
Used the API reference to build the application
http://www.ibm.com/watson/developercloud/tone-analyzer/api/v3/?python#
Created a local_settings.py file that contains the credentials retrieved from signing up
|
# Integrating IBM Watson
import json
from watson_developer_cloud import ToneAnalyzerV3
from local_settings import *
def getAnalyser():
tone_analyzer = ToneAnalyzerV3(
username= WATSON_CREDS['username'],
password= WATSON_CREDS['password'],
version='2016-05-19')
return tone_analyzer
# tone_analyzer = getAnalyser()
# tone_analyzer.tone(text='A word is dead when it is said, some say. Emily Dickinson')
def analysePage(page_url):
page_content = urllib.urlopen(page_url).read()
beau = BeautifulSoup(page_content, "html5lib")
tables = beau.find_all("table") #https://www.crummy.com/software/BeautifulSoup/bs4/doc/#find-all
article_links = beau.find_all("a", 'title')
print("Found {0} tables and {1} articles".format(len(tables), len(article_links)))
for i in article_links:
print i
# traverseDatesNewsDay(analysePage,num_days = 1)
page_content = urllib.urlopen("http://www.newsday.co.tt/archives/2017-2-2").read()
beau = BeautifulSoup(page_content, "html5lib")
tables = beau.find_all("table") #https://www.crummy.com/software/BeautifulSoup/bs4/doc/#find-all
article_links = beau.find_all("a", 'title')
print("Found {0} tables and {1} articles".format(len(tables), len(article_links)))
def processTone(tone):
large = tone[0]['score']
large_i = 0
for i in range(1, len(tone)):
if tone[i]['score'] > large:
large = tone[i]['score']
large_i = i
return tone[large_i]
|
Scrape Newsday.ipynb
|
kyledef/jammerwebscraper
|
mit
|
An explanation of the structure of the response is provided in the API reference
https://www.ibm.com/watson/developercloud/tone-analyzer/api/v3/?python#post-tone
|
first = True
emo_count = {
"anger" : 0,
"disgust": 0,
"fear" : 0,
"joy" : 0,
"sadness": 0
}
socio_count = {
"openness_big5": 0,
"conscientiousness_big5": 0,
"extraversion_big5" : 0,
"agreeableness_big5" : 0,
"emotional_range_big5": 0
}
for i in article_links:
res = tone_analyzer.tone(i['title'])
tone = res['document_tone']['tone_categories']
emo = tone[0]['tones'] # we want the emotional tone
soci= tone[2]['tones'] # we also want the social tone
e_res = processTone(emo)
emo_count[e_res['tone_id']] += 1
s_res = processTone(soci)
socio_count[s_res['tone_id']] += 1
for e in emo_count:
print("{0} articles were classified with the emotion {1}".format(emo_count[e], e))
for s in socio_count:
print("{0} articles were classified as {1}".format(socio_count[s], s))
|
Scrape Newsday.ipynb
|
kyledef/jammerwebscraper
|
mit
|
Selecting one seal
|
wd = df.pivot( columns="individual") #row, column, values (optional)
f104 = df.loc[df["individual"] == "F104"]
f104.head()
|
Week-03/04-plotting-seal-data.ipynb
|
scientific-visualization-2016/ClassMaterials
|
cc0-1.0
|
Plotting the seal path
Several steps:
1. Create a map centered around the region
2. Draw coastlines
3. Draw countries
4. Fill oceans and coastline
5. Draw the observations of the seal on the map
|
%matplotlib inline
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
|
Week-03/04-plotting-seal-data.ipynb
|
scientific-visualization-2016/ClassMaterials
|
cc0-1.0
|
Drawing an empty map of the region
|
f104.dtypes
lons = f104["longitude"].values
lons = lons.astype(float)
lats = f104["latitude"].values
lons_c=np.average(lons)
lats_c=np.average(lats)
print (lons_c, lats_c)
#
map = Basemap(projection='ortho', lat_0=lats_c,lon_0=lons_c)
fig=plt.figure(figsize=(12,9))
# draw coastlines, country boundaries, fill continents.
map.drawcoastlines(linewidth=0.25)
map.drawcountries(linewidth=0.25)
map.fillcontinents(color='coral',lake_color='blue')
# draw the edge of the map projection region (the projection limb)
map.drawmapboundary(fill_color='aqua')
|
Week-03/04-plotting-seal-data.ipynb
|
scientific-visualization-2016/ClassMaterials
|
cc0-1.0
|
Plotting seal observations
|
#
map = Basemap(projection='ortho', lat_0=lats_c,lon_0=lons_c)
fig=plt.figure(figsize=(12,9))
# draw coastlines, country boundaries, fill continents.
map.drawcoastlines(linewidth=0.25)
map.drawcountries(linewidth=0.25)
map.fillcontinents(color='coral',lake_color='blue')
# draw the edge of the map projection region (the projection limb)
map.drawmapboundary(fill_color='aqua')
# Seal F104
x, y = map(lons,lats)
map.scatter(x,y,color='r',label='f104')
plt.legend()
|
Week-03/04-plotting-seal-data.ipynb
|
scientific-visualization-2016/ClassMaterials
|
cc0-1.0
|
Plot all zoomed in
|
#
map = Basemap(width=200000,height=100000,projection='lcc', resolution='h',
lat_0=lats_c,lon_0=lons_c)
fig=plt.figure(figsize=(12,9))
ax = fig.add_axes([0.05,0.05,0.9,0.85])
# draw coastlines, country boundaries, fill continents.
map.drawcoastlines(linewidth=0.25)
map.drawcountries(linewidth=0.25)
map.fillcontinents(color='coral',lake_color='blue')
# draw the edge of the map projection region (the projection limb)
map.drawmapboundary(fill_color='aqua')
# create a grid
# draw lat/lon grid lines every 2 degrees.
map.drawmeridians(np.arange(0,360,2), labels=[False, True, True, False])
map.drawparallels(np.arange(-90,90,1), labels=[True, False, False, True])
# Seal f104
x, y = map(lons,lats)
map.scatter(x,y,color='b',label='f104')
plt.legend()
?map
|
Week-03/04-plotting-seal-data.ipynb
|
scientific-visualization-2016/ClassMaterials
|
cc0-1.0
|
Decibels vs. Percentages
Percentages are simple, right? I bought four oranges. I ate two. What percentage of the original four do I have left? 50%. Easy.
How many decibels down in oranges am I? Not so easy, eh? Well the answer is 3. Skim the rest of this notebook to find out why.
Percentage:
The prefix <code>cent</code> indicates one hundred. For example: one <b>cent</b>ury is equal to one-hundred years. Percentage is simply a ratio of numbers compared to the number one-hundred, hence "per-cent", or "per one hundred".
|
# percentage function, v = new value, r = reference value
def perc(v,r):
return 100 * (v / r)
|
dB-vs-perc.ipynb
|
gganssle/dB-vs-perc
|
mit
|
Let's now formalize our oranges percentage answer from above:
|
original = 4
uneaten = 2
print("You're left with", perc(uneaten, original), "% of the original oranges.")
|
dB-vs-perc.ipynb
|
gganssle/dB-vs-perc
|
mit
|
Decibels:
A decibel is simply a different way to represent the ratio of two numbers. It's based on a logarithmic calculation, so that ratios spanning very large and very small ranges can be compared on the same scale (more on this below).
<br><i>For an entertainingly complete history about how decibels were decided upon, read the <a href="https://en.wikipedia.org/wiki/Decibel" target="_blank">Wikipedia article.</a></i>
|
# decibel function, v = new value, r = reference value
def deci(v,r):
return 10 * math.log(v / r, 10)
|
dB-vs-perc.ipynb
|
gganssle/dB-vs-perc
|
mit
|
Let's now formalize our oranges decibel answer from above:
|
print("After lunch you have", round( deci(uneaten, original), 2), "decibels less oranges.")
|
dB-vs-perc.ipynb
|
gganssle/dB-vs-perc
|
mit
|
That's it. Simply calculate the log to base ten of the ratio and multiply by ten.
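To close the loop on the opening question: eating two of four oranges leaves a 50% holding, and the same ratio in decibels is 10·log10(2/4) ≈ −3, which is where the "3 decibels down" answer came from. A self-contained sketch repeating the two formulas above:

```python
import math

def perc(v, r):
    return 100 * (v / r)

def deci(v, r):
    return 10 * math.log(v / r, 10)

print(perc(2, 4))            # 50.0 -- half the oranges remain
print(round(deci(2, 4), 2))  # -3.01 -- "3 dB down", to two decimals
```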
<hr>
Advanced Part
<hr>
Well <b>who cares?</b> I'm just going to use percentages. They're easier to calculate and I don't have to relearn my forgotten-for-decades logarithms.
<br>True, most people use percentages because that's what everyone else does. However, I suggest that decibel loss is a more powerful and perceptually accurate representation of ratios.
Large ratios - small ratios
Imagine you started with four oranges, and had somehow gained three million oranges. Sitting in the middle of your grove, you'd be left with a fairly cumbersome percentage to express:
|
perc(3000000, original)
|
dB-vs-perc.ipynb
|
gganssle/dB-vs-perc
|
mit
|
"I have seventy-five million percent of my original oranges." Yikes. What does that even mean? Use decibels instead:
|
deci(3000000, original)
|
dB-vs-perc.ipynb
|
gganssle/dB-vs-perc
|
mit
|
"I've gained fifty-nine decibels of oranges."
Negative ratios
Additionally, the decibel scale automatically expresses gains and losses as positive and negative values.
|
# Less oranges than original number
print(deci(uneaten, original))
print(perc(uneaten, original))
# More oranges than original number
print(deci(8, original))
print(perc(8, original))
|
dB-vs-perc.ipynb
|
gganssle/dB-vs-perc
|
mit
|
Greater than 100% ratios
There's some ambiguity when a person states she has 130% more oranges than her original number. Does this mean she has 5.2 oranges (which is 30% more than 4 oranges)?
|
perc(5.2, original)
|
dB-vs-perc.ipynb
|
gganssle/dB-vs-perc
|
mit
|
Does she have 9.2 oranges (which is 130% more than 4 oranges)?
|
perc(9.2, original)
|
dB-vs-perc.ipynb
|
gganssle/dB-vs-perc
|
mit
|
Expressed in decibel format, the answer is clear:
|
deci(5.2, original)
|
dB-vs-perc.ipynb
|
gganssle/dB-vs-perc
|
mit
|
She's gained 1.1 decibels in orange holdings.
How do they stack up?
|
width = 100
center = 50
ref = center
percplot = np.zeros(width)
deciplot = np.zeros(width)
for i in range(width):
val = center - (width/2) + i
if val == 0:
val = 0.000001
percplot[i] = perc(val, ref)
deciplot[i] = deci(val, ref)
plt.plot(range(width), percplot, 'r', label="percentage")
plt.plot(range(width), deciplot, 'b', label="decibels")
plt.plot((ref,ref), (-100,percplot[width-1]), 'k', label="reference value")
plt.legend(loc=4)
plt.show()
#plt.savefig('linear_plot.png', dpi=150)
plt.semilogy(range(width), deciplot, 'b', label="decibels")
plt.semilogy(range(width), percplot, 'r', label="percentage")
plt.legend(loc=4)
plt.show()
#plt.savefig('log_plot.png', dpi=150)
|
dB-vs-perc.ipynb
|
gganssle/dB-vs-perc
|
mit
|
Establish a secure connection with HydroShare by instantiating the hydroshare class that is defined within hs_utils. In addition to connecting with HydroShare, this command also sets and prints environment variables for several parameters that will be useful for saving work back to HydroShare.
|
notebookdir = os.getcwd()
hs=hydroshare.hydroshare()
homedir = hs.getContentPath(os.environ["HS_RES_ID"])
os.chdir(homedir)
print('Data will be loaded from and saved to: ' + homedir)
|
tutorials/Observatory_usecase6_observationdata.ipynb
|
ChristinaB/Observatory
|
mit
|
If you are curious about where the data is being downloaded, click on the Jupyter Notebook dashboard icon to return to the File System view. The homedir directory location printed above is where you can find the data and contents you will download to a HydroShare JupyterHub server. At the end of this work session, you can migrate this data to the HydroShare iRODS server as a Generic Resource.
2. Get list of gridded climate points for the watershed
This example uses a shapefile with the watershed boundary of the Sauk-Suiattle Basin, which is stored in HydroShare at the following url: https://www.hydroshare.org/resource/c532e0578e974201a0bc40a37ef2d284/.
The data for our processing routines can be retrieved using the getResourceFromHydroShare function by passing in the global identifier from the url above. In the next cell, we download this resource from HydroShare, and identify that the points in this resource are available for downloading gridded hydrometeorology data, based on the point shapefile at https://www.hydroshare.org/resource/ef2d82bf960144b4bfb1bae6242bcc7f/, which covers the extent of North America and includes the average elevation for each 1/16 degree grid cell. The file must include columns with station numbers, latitude, longitude, and elevation. The headers of these columns must be FID, LAT, LONG_, and ELEV or RASTERVALU, respectively. The station numbers will be used for the remainder of the code to uniquely reference data from each climate station, as well as to identify the minimum, maximum, and average elevation of all of the climate stations. The web service is currently set to a URL for the smallest geographically overlapping extent - e.g. WRF for the Columbia River Basin (to instead pull data from an FTP service, treatgeoself() would need to be edited in the observatory_gridded_hydrometeorology utility).
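The required header layout can be illustrated with a minimal, hypothetical station file (values invented; the real files come from the HydroShare resources referenced above):

```python
import csv
import io

# hypothetical gridded-cell rows: station number, latitude, longitude, elevation (m)
rows = [
    {"FID": 0, "LAT": 48.3125, "LONG_": -121.5625, "ELEV": 512},
    {"FID": 1, "LAT": 48.3125, "LONG_": -121.5000, "ELEV": 655},
]

# write the file with the exact header names the routines expect
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["FID", "LAT", "LONG_", "ELEV"])
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

The RASTERVALU spelling of the elevation column is also accepted, per the note above.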
|
"""
Sauk
"""
# Watershed extent
hs.getResourceFromHydroShare('c532e0578e974201a0bc40a37ef2d284')
sauk = hs.content['wbdhub12_17110006_WGS84_Basin.shp']
|
tutorials/Observatory_usecase6_observationdata.ipynb
|
ChristinaB/Observatory
|
mit
|
Summarize the file availability from each watershed mapping file
|
# map the mappingfiles from usecase1
mappingfile1 = os.path.join(homedir,'Sauk_mappingfile.csv')
mappingfile2 = os.path.join(homedir,'Elwha_mappingfile.csv')
mappingfile3 = os.path.join(homedir,'RioSalado_mappingfile.csv')
t1 = ogh.mappingfileSummary(listofmappingfiles = [mappingfile1, mappingfile2, mappingfile3],
listofwatershednames = ['Sauk-Suiattle river','Elwha river','Upper Rio Salado'],
meta_file=meta_file)
t1
|
tutorials/Observatory_usecase6_observationdata.ipynb
|
ChristinaB/Observatory
|
mit
|
3. Compare Hydrometeorology
This section performs computations and generates plots of the Livneh 2013, Livneh 2016, and WRF 2014 temperature and precipitation data in order to compare them with each other and observations. The generated plots are automatically downloaded and saved as .png files in the "plots" folder of the user's home directory and inline in the notebook.
|
# Livneh et al., 2013
dr1 = meta_file['dailymet_livneh2013']
# Salathe et al., 2014
dr2 = meta_file['dailywrf_salathe2014']
# define overlapping time window
dr = ogh.overlappingDates(date_set1=tuple([dr1['start_date'], dr1['end_date']]),
date_set2=tuple([dr2['start_date'], dr2['end_date']]))
dr
|
tutorials/Observatory_usecase6_observationdata.ipynb
|
ChristinaB/Observatory
|
mit
|
INPUT: gridded meteorology from Jupyter Hub folders
Data frames for each set of data are stored in a dictionary. The inputs to gridclim_dict() include the folder location and name of the hydrometeorology data, the file start and end dates, the analysis start and end dates, and the elevation band to be included in the analysis (max and min elevation). <br/>
Create a dictionary of climate variables for the long-term mean (ltm) using the default elevation option of calculating a high, mid, and low elevation average. The dictionary here is initialized with the Livneh et al., 2013 dataset as the output 'ltm_3bands', which is then passed back in the second time we run gridclim_dict() to add the Salathe et al., 2014 data to the same dictionary.
|
%%time
ltm_3bands = ogh.gridclim_dict(mappingfile=mappingfile1,
metadata=meta_file,
dataset='dailymet_livneh2013',
file_start_date=dr1['start_date'],
file_end_date=dr1['end_date'],
file_time_step=dr1['temporal_resolution'],
subset_start_date=dr[0],
subset_end_date=dr[1])
%%time
ltm_3bands = ogh.gridclim_dict(mappingfile=mappingfile1,
metadata=meta_file,
dataset='dailyvic_livneh2013',
file_start_date=dr1['start_date'],
file_end_date=dr1['end_date'],
file_time_step=dr1['temporal_resolution'],
subset_start_date=dr[0],
subset_end_date=dr[1],
df_dict=ltm_3bands)
sorted(ltm_3bands.keys())
meta = meta_file['dailyvic_livneh2013']['variable_info']
meta_df = pd.DataFrame.from_dict(meta).T
meta_df.loc[['BASEFLOW','RUNOFF'],:]
ltm_3bands['STREAMFLOW_dailyvic_livneh2013']=ltm_3bands['BASEFLOW_dailyvic_livneh2013']+ltm_3bands['RUNOFF_dailyvic_livneh2013']
ltm_3bands['STREAMFLOW_dailyvic_livneh2013']
"""
Sauk-Suiattle
"""
# Watershed extent
hs.getResourceFromHydroShare('c532e0578e974201a0bc40a37ef2d284')
sauk = hs.content['wbdhub12_17110006_WGS84_Basin.shp']
"""
Elwha
"""
# Watershed extent
hs.getResourceFromHydroShare('4aff8b10bc424250b3d7bac2188391e8')
elwha = hs.content["elwha_ws_bnd_wgs84.shp"]
"""
Upper Rio Salado
"""
# Watershed extent
hs.getResourceFromHydroShare('5c041d95ceb64dce8eb85d2a7db88ed7')
riosalado = hs.content['UpperRioSalado_delineatedBoundary.shp']
# generate surface area for each gridded cell
def computegcSurfaceArea(shapefile, spatial_resolution, vardf):
"""
Data-driven computation of gridded cell surface area using the list of gridded cells centroids
shapefile: (dir) the path to the study site shapefile for selecting the UTM boundary
spatial_resolution: (float) the spatial resolution in degree coordinate reference system e.g., 1/16
vardf: (dataframe) input dataframe that contains FID, LAT and LONG references for each gridded cell centroid
return: (mean surface area in meters-squared, standard deviation in surface area)
"""
# ensure projection into WGS84 longlat values
ogh.reprojShapefile(shapefile)
# generate the figure axis
fig = plt.figure(figsize=(2,2), dpi=500)
ax1 = plt.subplot2grid((1,1),(0,0))
# calculate bounding box based on the watershed shapefile
watershed = gpd.read_file(shapefile)
watershed['watershed']='watershed'
watershed = watershed.dissolve(by='watershed')
# extract area centroid, bounding box info, and dimension shape
lon0, lat0 = np.array(watershed.centroid.iloc[0])
minx, miny, maxx, maxy = watershed.bounds.iloc[0]
# generate transverse mercator projection
m = Basemap(projection='tmerc', resolution='h', ax=ax1, lat_0=lat0, lon_0=lon0,
llcrnrlon=minx, llcrnrlat=miny, urcrnrlon=maxx, urcrnrlat=maxy)
# generate gridded cell bounding boxes
midpt_dist=spatial_resolution/2
cat=vardf.T.reset_index(level=[1,2]).rename(columns={'level_1':'LAT','level_2':'LONG_'})
geometry = cat.apply(lambda x:
shapely.ops.transform(m, box(x['LONG_']-midpt_dist, x['LAT']-midpt_dist,
x['LONG_']+midpt_dist, x['LAT']+midpt_dist)), axis=1)
# compute gridded cell area
gc_area = geometry.apply(lambda x: x.area)
plt.gcf().clear()
    return(gc_area.mean(), gc_area.std())
gcSA = computegcSurfaceArea(shapefile=sauk, spatial_resolution=1/16, vardf=ltm_3bands['STREAMFLOW_dailyvic_livneh2013'])
gcSA
# convert mm/s to m/s
df_dict = ltm_3bands
objname = 'STREAMFLOW_dailyvic_livneh2013'
dataset = objname.split('_',1)[1]
gridcell_area = geo_area
exceedance = 10
# convert mmps to mps
mmps = df_dict[objname]
mps = mmps*0.001
# multiply streamflow (mps) with grid cell surface area (m2) to produce volumetric streamflow (cms)
cms = mps.multiply(np.array(geo_area))
# convert m^3/s to cfs; multiply with (3.28084)^3
cfs = cms.multiply((3.28084)**3)
# output to df_dict
df_dict['cfs_'+objname] = cfs
# time-group by month-yearly streamflow volumetric values
monthly_cfs = cfs.groupby(pd.TimeGrouper('M')).sum()
monthly_cfs.index = pd.Series(monthly_cfs.index).apply(lambda x: x.strftime('%Y-%m'))
# output to df_dict
df_dict['monthly_cfs_'+objname] = monthly_cfs
# prepare for Exceedance computations
row_indices = pd.Series(monthly_cfs.index).map(lambda x: pd.datetime.strptime(x, '%Y-%m').month)
months = range(1,13)
Exceed = pd.DataFrame()
# for each month
for eachmonth in months:
month_index = row_indices[row_indices==eachmonth].index
month_res = monthly_cfs.iloc[month_index,:].reset_index(drop=True)
# generate gridded-cell-specific 10% exceedance probability values
    exceed = pd.DataFrame(month_res.apply(lambda x: np.percentile(x, 90), axis=0)).T
# append to dataframe
Exceed = pd.concat([Exceed, exceed])
# set index to month order
Exceed = Exceed.set_index(np.array(months))
# output to df_dict
df_dict['EXCEED{0}_{1}'.format(exceedance,dataset)] = Exceed
# return(df_dict)
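# As a reference for the percentile choice above: an X% exceedance flow is the
# (100 - X)th percentile of the record. A minimal pure-Python sketch
# (hypothetical helper; linear interpolation, matching numpy's default):

```python
def exceedance_flow(flows, exceedance_pct):
    """Return the flow value exceeded exceedance_pct percent of the time."""
    ranked = sorted(flows)
    # fractional rank of the (100 - exceedance)th percentile, 0-indexed
    pos = (100.0 - exceedance_pct) / 100.0 * (len(ranked) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(ranked) - 1)
    return ranked[lo] + (pos - lo) * (ranked[hi] - ranked[lo])
```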
def monthlyExceedence_cfs(df_dict,
                          daily_streamflow_dfname,
                          gridcell_area,
                          exceedance=10,
                          start_date=None,
                          end_date=None):
    """
    Compute monthly volumetric streamflow and its exceedance values from daily streamflow in mm/s

    df_dict: (dict) dictionary of dataframes to read from and write to
    daily_streamflow_dfname: (str) key of the daily streamflow dataframe in mm/s (row_index: date, col_index: gridcell ID)
    gridcell_area: (array) surface area in square meters for each gridded cell
    exceedance: (int) exceedance probability to compute, as a percent; default is 10
    start_date: (datetime) start of the analysis period; default is the start of the record
    end_date: (datetime) end of the analysis period; default is the end of the record
    """
    dataset = daily_streamflow_dfname.split('_', 1)[1]
    # subset the daily streamflow dataframe to the analysis period (a None bound keeps the full record)
    mmps = df_dict[daily_streamflow_dfname].loc[start_date:end_date, :]
    # convert mm/s to m/s
    mps = mmps * 0.001
    # multiply streamflow (m/s) with grid cell surface area (m2) to produce volumetric streamflow (cms)
    cms = mps.multiply(np.array(gridcell_area))
    # convert m^3/s to cfs; multiply with (3.28084)^3
    cfs = cms.multiply((3.28084)**3)
    df_dict['cfs_' + daily_streamflow_dfname] = cfs
    # time-group to month-yearly streamflow volumetric values
    monthly_cfs = cfs.groupby(pd.TimeGrouper('M')).sum()
    monthly_cfs.index = pd.Series(monthly_cfs.index).apply(lambda x: x.strftime('%Y-%m'))
    df_dict['monthly_cfs_' + daily_streamflow_dfname] = monthly_cfs
    # compute the gridded-cell-specific exceedance values for each calendar month
    row_indices = pd.Series(monthly_cfs.index).map(lambda x: pd.datetime.strptime(x, '%Y-%m').month)
    months = range(1, 13)
    Exceed = pd.DataFrame()
    for eachmonth in months:
        month_index = row_indices[row_indices == eachmonth].index
        month_res = monthly_cfs.iloc[month_index, :].reset_index(drop=True)
        # a 10% exceedance value is the 90th percentile of the monthly flows
        exceed = pd.DataFrame(month_res.apply(lambda x: np.percentile(x, 100 - exceedance), axis=0)).T
        Exceed = pd.concat([Exceed, exceed])
    # set index to month order
    Exceed = Exceed.set_index(np.array(months))
    df_dict['EXCEED{0}_{1}'.format(exceedance, dataset)] = Exceed
    return(df_dict)
df_dict = ltm_3bands
objname = 'STREAMFLOW_dailyvic_livneh2013'
dataset = objname.split('_',1)[1]
gridcell_area = geo_area
exceedance = 10
ltm_3bands['STREAMFLOW_dailyvic_livneh2013']=ltm_3bands['BASEFLOW_dailyvic_livneh2013']+ltm_3bands['RUNOFF_dailyvic_livneh2013']
# function [Qex] = monthlyExceedence_cfs(file,startyear,endyear)
# % Load data from specified file
# data = load(file);
# Y=data(:,1);
# MO=data(:,2);
# D=data(:,3);
# t = datenum(Y,MO,D);
# %%%
# % startyear=data(1,1);
# % endyear=data(length(data),1);
# Qnode = data(:,4);
# % Time control indices that cover selected period
# d_start = datenum(startyear,10,01,23,0,0); % 1 hour early to catch record for that day
# d_end = datenum(endyear,09,30,24,0,0);
# idx = find(t>=d_start & t<=d_end);
# exds=(1:19)./20; % Exceedence probabilities from 0.05 to 0.95
# mos=[10,11,12,1:9];
# Qex(1:19,1:12)=0 ; % initialize array
# for imo=1:12;
# mo=mos(imo);
# [Y, M, D, H, MI, S] = datevec(t(idx));
# ind=find(M==mo); % find all flow values in that month in the period identified
# Q1=Qnode(idx(ind)); % input is in cfs
# for iex=1:19
# Qex(iex,imo)=quantile(Q1,1-exds(iex));
# end
# end
# end
ltm_3bands['cfs_STREAMFLOW_dailyvic_livneh2013']
ltm_3bands['monthly_cfs_STREAMFLOW_dailyvic_livneh2013']
ltm_3bands['EXCEED10_dailyvic_livneh2013']
# loop through each month to compute the 10% Exceedance Probability
for eachmonth in range(1,13):
monthlabel = pd.datetime.strptime(str(eachmonth), '%m')
ogh.renderValuesInPoints(vardf=ltm_3bands['EXCEED10_dailyvic_livneh2013'],
vardf_dateindex=eachmonth,
shapefile=sauk.replace('.shp','_2.shp'),
outfilepath='sauk{0}exceed10.png'.format(monthlabel.strftime('%b')),
plottitle='Sauk {0} 10% Exceedance Probability'.format(monthlabel.strftime('%B')),
colorbar_label='cubic feet per second',
cmap='seismic_r')
|
tutorials/Observatory_usecase6_observationdata.ipynb
|
ChristinaB/Observatory
|
mit
|
4. Visualize monthly precipitation spatially using Livneh et al., 2013 Meteorology data
Apply different plotting options:
time-index option <br />
Basemap option <br />
colormap option <br />
projection option <br />
|
%%time
month=3
monthlabel = pd.datetime.strptime(str(month), '%m')
ogh.renderValuesInPoints(vardf=ltm_3bands['month_PRECIP_dailymet_livneh2013'],
vardf_dateindex=month,
shapefile=sauk.replace('.shp','_2.shp'),
outfilepath=os.path.join(homedir, 'SaukPrecip{0}.png'.format(monthlabel.strftime('%b'))),
plottitle='Sauk-Suiattle watershed'+'\nPrecipitation in '+ monthlabel.strftime('%B'),
colorbar_label='Average monthly precipitation (meters)',
spatial_resolution=1/16, margin=0.5, epsg=3857,
basemap_image='Demographics/USA_Social_Vulnerability_Index',
cmap='gray_r')
month=6
monthlabel = pd.datetime.strptime(str(month), '%m')
ogh.renderValuesInPoints(vardf=ltm_3bands['month_PRECIP_dailymet_livneh2013'],
vardf_dateindex=month,
shapefile=sauk.replace('.shp','_2.shp'),
outfilepath=os.path.join(homedir, 'SaukPrecip{0}.png'.format(monthlabel.strftime('%b'))),
plottitle='Sauk-Suiattle watershed'+'\nPrecipitation in '+ monthlabel.strftime('%B'),
colorbar_label='Average monthly precipitation (meters)',
spatial_resolution=1/16, margin=0.5, epsg=3857,
basemap_image='ESRI_StreetMap_World_2D',
cmap='gray_r')
month=9
monthlabel = pd.datetime.strptime(str(month), '%m')
ogh.renderValuesInPoints(vardf=ltm_3bands['month_PRECIP_dailymet_livneh2013'],
vardf_dateindex=month,
shapefile=sauk.replace('.shp','_2.shp'),
outfilepath=os.path.join(homedir, 'SaukPrecip{0}.png'.format(monthlabel.strftime('%b'))),
plottitle='Sauk-Suiattle watershed'+'\nPrecipitation in '+ monthlabel.strftime('%B'),
colorbar_label='Average monthly precipitation (meters)',
spatial_resolution=1/16, margin=0.5, epsg=3857,
basemap_image='ESRI_Imagery_World_2D',
cmap='gray_r')
month=12
monthlabel = pd.datetime.strptime(str(month), '%m')
ogh.renderValuesInPoints(vardf=ltm_3bands['month_PRECIP_dailymet_livneh2013'],
vardf_dateindex=month,
shapefile=sauk.replace('.shp','_2.shp'),
outfilepath=os.path.join(homedir, 'SaukPrecip{0}.png'.format(monthlabel.strftime('%b'))),
plottitle='Sauk-Suiattle watershed'+'\nPrecipitation in '+ monthlabel.strftime('%B'),
colorbar_label='Average monthly precipitation (meters)',
spatial_resolution=1/16, margin=0.5, epsg=3857,
basemap_image='Elevation/World_Hillshade',
cmap='seismic_r')
|
tutorials/Observatory_usecase6_observationdata.ipynb
|
ChristinaB/Observatory
|
mit
|
Visualize monthly precipitation difference between different gridded data products
|
for month in [3, 6, 9, 12]:
monthlabel = pd.datetime.strptime(str(month), '%m')
outfile='SaukLivnehPrecip{0}.png'.format(monthlabel.strftime('%b'))
ax1 = ogh.renderValuesInPoints(vardf=ltm_3bands['month_PRECIP_dailymet_livneh2013'],
vardf_dateindex=month,
shapefile=sauk.replace('.shp','_2.shp'),
basemap_image='ESRI_Imagery_World_2D',
cmap='seismic_r',
plottitle='Sauk-Suiattle watershed'+'\nPrecipitation in '+monthlabel.strftime('%B'),
colorbar_label='Average monthly precipitation (meters)',
outfilepath=os.path.join(homedir, outfile))
|
tutorials/Observatory_usecase6_observationdata.ipynb
|
ChristinaB/Observatory
|
mit
|
Comparison to WRF data from Salathe et al., 2014
|
%%time
ltm_3bands = ogh.gridclim_dict(mappingfile=mappingfile1,
metadata=meta_file,
dataset='dailywrf_salathe2014',
colvar=None,
file_start_date=dr2['start_date'],
file_end_date=dr2['end_date'],
file_time_step=dr2['temporal_resolution'],
subset_start_date=dr[0],
subset_end_date=dr[1],
df_dict=ltm_3bands)
for month in [3, 6, 9, 12]:
monthlabel = pd.datetime.strptime(str(month), '%m')
outfile='SaukSalathePrecip{0}.png'.format(monthlabel.strftime('%b'))
ax1 = ogh.renderValuesInPoints(vardf=ltm_3bands['month_PRECIP_dailywrf_salathe2014'],
vardf_dateindex=month,
shapefile=sauk.replace('.shp','_2.shp'),
basemap_image='ESRI_Imagery_World_2D',
cmap='seismic_r',
plottitle='Sauk-Suiattle watershed'+'\nPrecipitation in '+monthlabel.strftime('%B'),
colorbar_label='Average monthly precipitation (meters)',
outfilepath=os.path.join(homedir, outfile))
def plot_meanTmin(dictionary, loc_name, start_date, end_date):
# Plot 1: Monthly temperature analysis of Livneh data
    # skip if neither Livneh nor WRF minimum temperature summaries are available
    if ('meanmonth_temp_min_liv2013_met_daily' not in dictionary.keys()) and ('meanmonth_temp_min_wrf2014_met_daily' not in dictionary.keys()):
        return
# generate month indices
wy_index=[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
wy_numbers=[10, 11, 12, 1, 2, 3, 4, 5, 6, 7, 8, 9]
month_strings=[ 'Oct', 'Nov', 'Dec', 'Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sept']
# initiate the plot object
fig, ax=plt.subplots(1,1,figsize=(10, 6))
if 'meanmonth_temp_min_liv2013_met_daily' in dictionary.keys():
# Liv2013
plt.plot(wy_index, dictionary['meanmonth_temp_min_liv2013_met_daily'][wy_numbers],'r-', linewidth=1, label='Liv Temp min')
if 'meanmonth_temp_min_wrf2014_met_daily' in dictionary.keys():
# WRF2014
plt.plot(wy_index, dictionary['meanmonth_temp_min_wrf2014_met_daily'][wy_numbers],'b-',linewidth=1, label='WRF Temp min')
if 'meanmonth_temp_min_livneh2013_wrf2014bc_met_daily' in dictionary.keys():
# WRF2014
plt.plot(wy_index, dictionary['meanmonth_temp_min_livneh2013_wrf2014bc_met_daily'][wy_numbers],'g-',linewidth=1, label='WRFbc Temp min')
# add reference line at y=0
plt.plot([1, 12],[0, 0], 'k-',linewidth=1)
plt.ylabel('Temp (C)',fontsize=14)
plt.xlabel('Month',fontsize=14)
plt.xlim(1,12);
plt.xticks(wy_index, month_strings);
plt.tick_params(labelsize=12)
plt.legend(loc='best')
plt.grid(which='both')
plt.title(str(loc_name)+'\nMinimum Temperature\n Years: '+str(start_date.year)+'-'+str(end_date.year)+'; Elevation: '+str(dictionary['analysis_elev_min'])+'-'+str(dictionary['analysis_elev_max'])+'m', fontsize=16)
plt.savefig('monthly_Tmin'+str(loc_name)+'.png')
plt.show()
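# The water-year reindexing used in the plot above can be illustrated with toy
# data (hypothetical values): calendar months are reordered so that October
# (month 10) plots first and September last.

```python
wy_numbers = [10, 11, 12, 1, 2, 3, 4, 5, 6, 7, 8, 9]
# toy monthly means keyed by calendar month number (hypothetical data)
monthly_means = {m: float(m) for m in range(1, 13)}
# values reindexed into water-year order
water_year_order = [monthly_means[m] for m in wy_numbers]
```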
|
tutorials/Observatory_usecase6_observationdata.ipynb
|
ChristinaB/Observatory
|
mit
|
6. Compare gridded model to point observations
Read in SNOTEL data - assess available data
If you want to plot observed snotel point precipitation or temperature with the gridded climate data, set to 'Y'
Give name of Snotel file and name to be used in figure legends.
File format: Daily SNOTEL Data Report - Historic - By individual SNOTEL site, standard sensors (https://www.wcc.nrcs.usda.gov/snow/snotel-data.html)
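Note that the SNOTEL sensor columns are in degrees Fahrenheit and inches, while the gridded products are metric; hedged conversion helpers (hypothetical names, not part of ogh) for the comparison:

```python
def degF_to_degC(temp_f):
    """Convert air temperature from degrees Fahrenheit to Celsius."""
    return (temp_f - 32.0) * 5.0 / 9.0

def inches_to_mm(depth_in):
    """Convert a precipitation increment from inches to millimeters."""
    return depth_in * 25.4

def feet_to_m(elev_ft):
    """Convert station elevation from feet to meters (same factor as 1/3.281 below)."""
    return elev_ft / 3.28084
```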
|
# Sauk
SNOTEL_file = os.path.join(homedir,'ThunderBasinSNOTEL.txt')
SNOTEL_station_name='Thunder Creek'
SNOTEL_file_use_colsnames = ['Date','Air Temperature Maximum (degF)', 'Air Temperature Minimum (degF)','Air Temperature Average (degF)','Precipitation Increment (in)']
SNOTEL_station_elev=int(4320/3.281) # meters
SNOTEL_obs_daily = ogh.read_daily_snotel(file_name=SNOTEL_file,
usecols=SNOTEL_file_use_colsnames,
delimiter=',',
header=58)
# generate the start and stop date
SNOTEL_obs_start_date=SNOTEL_obs_daily.index[0]
SNOTEL_obs_end_date=SNOTEL_obs_daily.index[-1]
# peek
SNOTEL_obs_daily.head(5)
|
tutorials/Observatory_usecase6_observationdata.ipynb
|
ChristinaB/Observatory
|
mit
|
Read in COOP station data - assess available data
https://www.ncdc.noaa.gov/
|
COOP_file=os.path.join(homedir, 'USC00455678.csv') # Sauk
COOP_station_name='Mt Vernon'
COOP_file_use_colsnames = ['DATE','PRCP','TMAX', 'TMIN','TOBS']
COOP_station_elev=int(4.3) # meters
COOP_obs_daily = ogh.read_daily_coop(file_name=COOP_file,
usecols=COOP_file_use_colsnames,
delimiter=',',
header=0)
# generate the start and stop date
COOP_obs_start_date=COOP_obs_daily.index[0]
COOP_obs_end_date=COOP_obs_daily.index[-1]
# peek
COOP_obs_daily.head(5)
#initiate new dictionary with original data
ltm_0to3000 = ogh.gridclim_dict(metadata=meta_file,
mappingfile=mappingfile1,
dataset='dailymet_livneh2013',
file_start_date=dr1['start_date'],
file_end_date=dr1['end_date'],
subset_start_date=dr[0],
subset_end_date=dr[1])
ltm_0to3000 = ogh.gridclim_dict(metadata=meta_file,
mappingfile=mappingfile1,
dataset='dailywrf_salathe2014',
file_start_date=dr2['start_date'],
file_end_date=dr2['end_date'],
subset_start_date=dr[0],
subset_end_date=dr[1],
df_dict=ltm_0to3000)
sorted(ltm_0to3000.keys())
# read in the mappingfile
mappingfile = mappingfile1
mapdf = pd.read_csv(mappingfile)
# select station by first FID
firstStation = ogh.findStationCode(mappingfile=mappingfile, colvar='FID', colvalue=0)
# select station by elevation
maxElevStation = ogh.findStationCode(mappingfile=mappingfile, colvar='ELEV', colvalue=mapdf.loc[:,'ELEV'].max())
medElevStation = ogh.findStationCode(mappingfile=mappingfile, colvar='ELEV', colvalue=mapdf.loc[:,'ELEV'].median())
minElevStation = ogh.findStationCode(mappingfile=mappingfile, colvar='ELEV', colvalue=mapdf.loc[:,'ELEV'].min())
# print(firstStation, mapdf.iloc[0].ELEV)
# print(maxElevStation, mapdf.loc[:,'ELEV'].max())
# print(medElevStation, mapdf.loc[:,'ELEV'].median())
# print(minElevStation, mapdf.loc[:,'ELEV'].min())
# let's compare monthly averages for TMAX using livneh, salathe, and the salathe-corrected livneh
comp = ['month_TMAX_dailymet_livneh2013',
'month_TMAX_dailywrf_salathe2014']
obj = dict()
for eachkey in ltm_0to3000.keys():
if eachkey in comp:
obj[eachkey] = ltm_0to3000[eachkey]
panel_obj = pd.Panel.from_dict(obj)
panel_obj
comp = ['meanmonth_TMAX_dailymet_livneh2013',
'meanmonth_TMAX_dailywrf_salathe2014']
obj = dict()
for eachkey in ltm_0to3000.keys():
if eachkey in comp:
obj[eachkey] = ltm_0to3000[eachkey]
df_obj = pd.DataFrame.from_dict(obj)
df_obj
for each in comp:
    t_res, var, dataset, pub = each.rsplit('_', 3)
    print(t_res, var, dataset, pub)
    ylab_var = meta_file['_'.join([dataset, pub])]['variable_info'][var]['desc']
    ylab_unit = meta_file['_'.join([dataset, pub])]['variable_info'][var]['units']
    print('{0} {1} ({2})'.format(t_res, ylab_var, ylab_unit))
%%time
comp = [['meanmonth_TMAX_dailymet_livneh2013','meanmonth_TMAX_dailywrf_salathe2014'],
['meanmonth_PRECIP_dailymet_livneh2013','meanmonth_PRECIP_dailywrf_salathe2014']]
wy_numbers=[10, 11, 12, 1, 2, 3, 4, 5, 6, 7, 8, 9]
month_strings=[ 'Oct', 'Nov', 'Dec', 'Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep']
fig = plt.figure(figsize=(20,5), dpi=500)
ax1 = plt.subplot2grid((2, 2), (0, 0), colspan=1)
ax2 = plt.subplot2grid((2, 2), (1, 0), colspan=1)
# monthly
for eachsumm in df_obj.columns:
ax1.plot(df_obj[eachsumm])
ax1.legend(loc='upper center', bbox_to_anchor=(0.5, -0.05), fancybox=True, shadow=True, ncol=2, fontsize=10)
plt.show()
fig, ax = plt.subplots()
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
lws=[3, 10, 3, 3]
styles=['b--','go-','y--','ro-']
for col, style, lw in zip(comp, styles, lws):
panel_obj.xs(key=(minElevStation[0][0], minElevStation[0][1], minElevStation[0][2]), axis=2)[col].plot(style=style, lw=lw, ax=ax, legend=True)
ax.legend(loc='upper center', bbox_to_anchor=(0.5, -0.05), fancybox=True, shadow=True, ncol=2)
fig.show()
fig, ax = plt.subplots()
lws=[3, 10, 3, 3]
styles=['b--','go-','y--','ro-']
for col, style, lw in zip(comp, styles, lws):
panel_obj.xs(key=(maxElevStation[0][0], maxElevStation[0][1], maxElevStation[0][2]),
axis=2)[col].plot(style=style, lw=lw, ax=ax, legend=True)
ax.legend(loc='upper center', bbox_to_anchor=(0.5, -0.05), fancybox=True, shadow=True, ncol=2)
fig.show()
|
tutorials/Observatory_usecase6_observationdata.ipynb
|
ChristinaB/Observatory
|
mit
|
Set up VIC dictionary (as an example) to compare to available data
|
vic_dr1 = meta_file['dailyvic_livneh2013']['date_range']
vic_dr2 = meta_file['dailyvic_livneh2015']['date_range']
vic_dr = ogh.overlappingDates(tuple([vic_dr1['start'], vic_dr1['end']]),
tuple([vic_dr2['start'], vic_dr2['end']]))
vic_ltm_3bands = ogh.gridclim_dict(mappingfile=mappingfile,
metadata=meta_file,
dataset='dailyvic_livneh2013',
file_start_date=vic_dr1['start'],
file_end_date=vic_dr1['end'],
file_time_step=vic_dr1['time_step'],
subset_start_date=vic_dr[0],
subset_end_date=vic_dr[1])
sorted(vic_ltm_3bands.keys())
|
tutorials/Observatory_usecase6_observationdata.ipynb
|
ChristinaB/Observatory
|
mit
|
10. Save the results back into HydroShare
<a name="creation"></a>
Using the hs_utils library, the results of the Geoprocessing steps above can be saved back into HydroShare. First, define all of the required metadata for resource creation, i.e. title, abstract, keywords, content files. In addition, we must define the type of resource that will be created, in this case genericresource.
Note: Make sure you save the notebook at this point, so that all notebook changes will be saved into the new HydroShare resource.
|
#execute this cell to list the content of the directory
!ls -lt
|
tutorials/Observatory_usecase6_observationdata.ipynb
|
ChristinaB/Observatory
|
mit
|
Create list of files to save to HydroShare. Verify location and names.
|
ThisNotebook='Observatory_Sauk_TreatGeoSelf.ipynb' #check name for consistency
climate2013_tar = 'livneh2013.tar.gz'
climate2015_tar = 'livneh2015.tar.gz'
wrf_tar = 'salathe2014.tar.gz'
mappingfile = 'Sauk_mappingfile.csv'
!tar -zcf {climate2013_tar} livneh2013
!tar -zcf {climate2015_tar} livneh2015
!tar -zcf {wrf_tar} salathe2014
files=[ThisNotebook, mappingfile, climate2013_tar, climate2015_tar, wrf_tar]
# for each file downloaded onto the server folder, move to a new HydroShare Generic Resource
title = 'Results from testing out the TreatGeoSelf utility'
abstract = 'This is the output from the TreatGeoSelf utility integration notebook.'
keywords = ['Sauk', 'climate', 'Landlab','hydromet','watershed']
rtype = 'genericresource'
# create the new resource
resource_id = hs.createHydroShareResource(abstract,
title,
keywords=keywords,
resource_type=rtype,
content_files=files,
public=False)
|
tutorials/Observatory_usecase6_observationdata.ipynb
|
ChristinaB/Observatory
|
mit
|
Install Required Libraries
Import the libraries required to train this model.
|
import notebook_setup
notebook_setup.notebook_setup()
|
xgboost_synthetic/build-train-deploy.ipynb
|
kubeflow/examples
|
apache-2.0
|
Import the python libraries we will use
We add a comment "fairing:include-cell" to tell the kubeflow fairing preprocessor to keep this cell when converting to python code later
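Conceptually, the preprocessor keeps only the marked code cells; a rough sketch of that filtering (not the actual fairing implementation, which also wires up python-fire entry points):

```python
MARKER = "# fairing:include-cell"

def keep_marked_cells(cells):
    """Keep only code cells whose source contains the marker comment."""
    return [c for c in cells
            if c.get("cell_type") == "code" and MARKER in c.get("source", "")]
```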
|
# fairing:include-cell
import fire
import joblib
import logging
import nbconvert
import os
import pathlib
import sys
from pathlib import Path
import pandas as pd
import pprint
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.impute import SimpleImputer
from xgboost import XGBRegressor
from importlib import reload
from sklearn.datasets import make_regression
from kubeflow.metadata import metadata
from datetime import datetime
import retrying
import urllib3
# Imports not to be included in the built docker image
import util
import kfp
import kfp.components as comp
import kfp.gcp as gcp
import kfp.dsl as dsl
import kfp.compiler as compiler
from kubernetes import client as k8s_client
from kubeflow import fairing
from kubeflow.fairing.builders import append
from kubeflow.fairing.deployers import job
from kubeflow.fairing.preprocessors.converted_notebook import ConvertNotebookPreprocessorWithFire
|
xgboost_synthetic/build-train-deploy.ipynb
|
kubeflow/examples
|
apache-2.0
|
Code to train and predict
In the cells below we define some functions to generate data and train a model
These functions could just as easily be defined in a separate python module
|
# fairing:include-cell
def read_synthetic_input(test_size=0.25):
"""generate synthetic data and split it into train and test."""
# generate regression dataset
X, y = make_regression(n_samples=200, n_features=5, noise=0.1)
train_X, test_X, train_y, test_y = train_test_split(X,
y,
test_size=test_size,
shuffle=False)
imputer = SimpleImputer()
train_X = imputer.fit_transform(train_X)
test_X = imputer.transform(test_X)
return (train_X, train_y), (test_X, test_y)
# fairing:include-cell
def train_model(train_X,
train_y,
test_X,
test_y,
n_estimators,
learning_rate):
"""Train the model using XGBRegressor."""
model = XGBRegressor(n_estimators=n_estimators, learning_rate=learning_rate)
model.fit(train_X,
train_y,
early_stopping_rounds=40,
eval_set=[(test_X, test_y)])
    print("Best RMSE on eval: %.2f with %d rounds" %
          (model.best_score, model.best_iteration+1))
return model
def eval_model(model, test_X, test_y):
"""Evaluate the model performance."""
predictions = model.predict(test_X)
mae=mean_absolute_error(predictions, test_y)
logging.info("mean_absolute_error=%.2f", mae)
return mae
def save_model(model, model_file):
"""Save XGBoost model for serving."""
joblib.dump(model, model_file)
logging.info("Model export success: %s", model_file)
def create_workspace():
    METADATA_STORE_HOST = "metadata-grpc-service.kubeflow" # default DNS of Kubeflow Metadata gRPC service.
METADATA_STORE_PORT = 8080
return metadata.Workspace(
store=metadata.Store(grpc_host=METADATA_STORE_HOST, grpc_port=METADATA_STORE_PORT),
name="xgboost-synthetic",
description="workspace for xgboost-synthetic artifacts and executions")
|
xgboost_synthetic/build-train-deploy.ipynb
|
kubeflow/examples
|
apache-2.0
|
Wrap Training and Prediction in a class
In the cell below we wrap training and prediction in a class
A class provides the structure we will need to eventually use kubeflow fairing to launch separate training jobs and/or deploy the model on Kubernetes
|
# fairing:include-cell
class ModelServe(object):
def __init__(self, model_file=None):
self.n_estimators = 50
self.learning_rate = 0.1
if not model_file:
if "MODEL_FILE" in os.environ:
print("model_file not supplied; checking environment variable")
model_file = os.getenv("MODEL_FILE")
else:
print("model_file not supplied; using the default")
model_file = "mockup-model.dat"
self.model_file = model_file
print("model_file={0}".format(self.model_file))
self.model = None
self._workspace = None
self.exec = self.create_execution()
def train(self):
(train_X, train_y), (test_X, test_y) = read_synthetic_input()
# Here we use Kubeflow's metadata library to record information
# about the training run to Kubeflow's metadata store.
self.exec.log_input(metadata.DataSet(
description="xgboost synthetic data",
name="synthetic-data",
owner="someone@kubeflow.org",
uri="file://path/to/dataset",
version="v1.0.0"))
model = train_model(train_X,
train_y,
test_X,
test_y,
self.n_estimators,
self.learning_rate)
mae = eval_model(model, test_X, test_y)
# Here we log metrics about the model to Kubeflow's metadata store.
self.exec.log_output(metadata.Metrics(
            name="xgboost-synthetic-training-eval",
owner="someone@kubeflow.org",
description="training evaluation for xgboost synthetic",
uri="gcs://path/to/metrics",
metrics_type=metadata.Metrics.VALIDATION,
values={"mean_absolute_error": mae}))
save_model(model, self.model_file)
self.exec.log_output(metadata.Model(
name="housing-price-model",
description="housing price prediction model using synthetic data",
owner="someone@kubeflow.org",
uri=self.model_file,
model_type="linear_regression",
training_framework={
"name": "xgboost",
"version": "0.9.0"
},
hyperparameters={
"learning_rate": self.learning_rate,
"n_estimators": self.n_estimators
},
version=datetime.utcnow().isoformat("T")))
def predict(self, X, feature_names):
"""Predict using the model for given ndarray.
The predict signature should match the syntax expected by Seldon Core
https://github.com/SeldonIO/seldon-core so that we can use
        Seldon to wrap it in a model server and deploy it on Kubernetes
"""
if not self.model:
self.model = joblib.load(self.model_file)
# Do any preprocessing
prediction = self.model.predict(data=X)
# Do any postprocessing
return [[prediction.item(0), prediction.item(1)]]
@property
def workspace(self):
if not self._workspace:
self._workspace = create_workspace()
return self._workspace
def create_execution(self):
r = metadata.Run(
workspace=self.workspace,
name="xgboost-synthetic-faring-run" + datetime.utcnow().isoformat("T"),
description="a notebook run")
return metadata.Execution(
name = "execution" + datetime.utcnow().isoformat("T"),
workspace=self.workspace,
run=r,
description="execution for training xgboost-synthetic")
|
xgboost_synthetic/build-train-deploy.ipynb
|
kubeflow/examples
|
apache-2.0
|
Train your Model Locally
Train your model locally inside your notebook
To train locally we just instantiate the ModelServe class and then call train
|
model = ModelServe(model_file="mockup-model.dat")
model.train()
|
xgboost_synthetic/build-train-deploy.ipynb
|
kubeflow/examples
|
apache-2.0
|
Predict locally
Run prediction inside the notebook using the newly created model
To run prediction we just invoke predict
|
(train_X, train_y), (test_X, test_y) = read_synthetic_input()
ModelServe().predict(test_X, None)
|
xgboost_synthetic/build-train-deploy.ipynb
|
kubeflow/examples
|
apache-2.0
|
Use Kubeflow Fairing to Launch a K8s Job to train your model
Now that we have trained a model locally we can use Kubeflow fairing to
Launch a Kubernetes job to train the model
Deploy the model on Kubernetes
Launching a separate Kubernetes job to train the model has the following advantages
You can leverage Kubernetes to run multiple training jobs in parallel
You can run long running jobs without blocking your kernel
Configure The Docker Registry For Kubeflow Fairing
In order to build docker images from your notebook we need a docker registry where the images will be stored
Below you set some variables specifying a GCR container registry
Kubeflow Fairing provides a utility function to guess the name of your GCP project
|
# Setting up google container repositories (GCR) for storing output containers
# You can use any docker container registry instead of GCR
GCP_PROJECT = fairing.cloud.gcp.guess_project_name()
DOCKER_REGISTRY = 'gcr.io/{}/fairing-job'.format(GCP_PROJECT)
|
xgboost_synthetic/build-train-deploy.ipynb
|
kubeflow/examples
|
apache-2.0
|
Use Kubeflow fairing to build the docker image
First you will use kubeflow fairing's kaniko builder to build a docker image that includes all your dependencies
You use kaniko because you want to be able to run pip to install dependencies
Kaniko gives you the flexibility to build images from Dockerfiles
Kaniko, however, can be slow,
so you will build a base image using Kaniko once, and then every time your code changes you will just build an image
starting from your base image and adding your code to it
You use the kubeflow fairing append builder to enable these fast rebuilds
|
# TODO(https://github.com/kubeflow/fairing/issues/426): We should get rid of this once the default
# Kaniko image is updated to a newer image than 0.7.0.
from kubeflow.fairing import constants
constants.constants.KANIKO_IMAGE = "gcr.io/kaniko-project/executor:v0.14.0"
from kubeflow.fairing.builders import cluster
# output_map is a map of extra files to add to the build context.
# It is a map from source location to the location inside the context.
output_map = {
"Dockerfile": "Dockerfile",
"requirements.txt": "requirements.txt",
}
preprocessor = ConvertNotebookPreprocessorWithFire(class_name='ModelServe', notebook_file='build-train-deploy.ipynb',
output_map=output_map)
if not preprocessor.input_files:
preprocessor.input_files = set()
input_files=["xgboost_util.py", "mockup-model.dat"]
preprocessor.input_files = set([os.path.normpath(f) for f in input_files])
preprocessor.preprocess()
|
xgboost_synthetic/build-train-deploy.ipynb
|
kubeflow/examples
|
apache-2.0
|
Build the base image
You use cluster_builder to build the base image
You only need to perform this again if we change our Docker image or the dependencies we need to install
ClusterBuilder takes as input the DockerImage to use as a base image
You should use the same Jupyter image that you are using for your notebook server so that your environment will be
the same when you launch Kubernetes jobs
|
# Use a stock jupyter image as our base image
# TODO(jlewi): Should we try to use the downward API to default to the image we are running in?
base_image = "gcr.io/kubeflow-images-public/tensorflow-1.14.0-notebook-cpu:v0.7.0"
# We use a custom Dockerfile
cluster_builder = cluster.cluster.ClusterBuilder(registry=DOCKER_REGISTRY,
base_image=base_image,
preprocessor=preprocessor,
dockerfile_path="Dockerfile",
pod_spec_mutators=[fairing.cloud.gcp.add_gcp_credentials_if_exists],
context_source=cluster.gcs_context.GCSContextSource())
cluster_builder.build()
|
xgboost_synthetic/build-train-deploy.ipynb
|
kubeflow/examples
|
apache-2.0
|
Build the actual image
Here you use the append builder to add your code to the base image
Calling preprocessor.preprocess() converts your notebook file to a python file
You are using the ConvertNotebookPreprocessorWithFire
This preprocessor converts ipynb files to py files by doing the following
Removing all cells which don't have a comment # fairing:include-cell
Using python-fire to add entry points for the class specified in the constructor
Calling preprocess() will create the file build-train-deploy.py
You use the AppendBuilder to rapidly build a new docker image by quickly adding some files to an existing docker image
The AppendBuilder is super fast so it's very convenient for rebuilding your images as you iterate on your code
The AppendBuilder will add the converted notebook, build-train-deploy.py, along with any files specified in preprocessor.input_files to /app in the newly created image
|
preprocessor.preprocess()
builder = append.append.AppendBuilder(registry=DOCKER_REGISTRY,
base_image=cluster_builder.image_tag, preprocessor=preprocessor)
builder.build()
|
xgboost_synthetic/build-train-deploy.ipynb
|
kubeflow/examples
|
apache-2.0
|
Launch the K8s Job
You can use kubeflow fairing to easily launch a Kubernetes job to invoke code
You use fairing's Kubernetes job library to build a Kubernetes job
You use pod mutators to attach GCP credentials to the pod
You can also use pod mutators to attach PVCs
Since the ConvertNotebookPreprocessorWithFire is using python-fire you can easily invoke any method inside the ModelServe class just by configuring the command invoked by the Kubernetes job
In the cell below you extend the command to include train as an argument because you want to invoke the train
function
Note: When you invoke train_deployer.deploy, kubeflow fairing will stream the logs from the Kubernetes job. The job will initially show some connection errors because it tries to connect to the metadata server. You can ignore these errors; the job will retry until it's able to connect and then continue
|
pod_spec = builder.generate_pod_spec()
train_deployer = job.job.Job(cleanup=False,
pod_spec_mutators=[
fairing.cloud.gcp.add_gcp_credentials_if_exists])
# Add command line arguments
pod_spec.containers[0].command.extend(["train"])
result = train_deployer.deploy(pod_spec)
|
xgboost_synthetic/build-train-deploy.ipynb
|
kubeflow/examples
|
apache-2.0
|
You can use kubectl to inspect the job that fairing created
|
!kubectl get jobs -l fairing-id={train_deployer.job_id} -o yaml
|
xgboost_synthetic/build-train-deploy.ipynb
|
kubeflow/examples
|
apache-2.0
|
Deploy the trained model to Kubeflow for predictions
Now that you have trained a model, you can use kubeflow fairing to deploy it on Kubernetes
When you call deployer.deploy, fairing will create a Kubernetes Deployment to serve your model
Kubeflow fairing uses the docker image you created earlier
The docker image you created contains your code and Seldon core
Kubeflow fairing uses Seldon to wrap your prediction code, ModelServe.predict, in a REST and gRPC server
|
from kubeflow.fairing.deployers import serving
pod_spec = builder.generate_pod_spec()
module_name = os.path.splitext(preprocessor.executable.name)[0]
deployer = serving.serving.Serving(module_name + ".ModelServe",
service_type="ClusterIP",
labels={"app": "mockup"})
url = deployer.deploy(pod_spec)
|
xgboost_synthetic/build-train-deploy.ipynb
|
kubeflow/examples
|
apache-2.0
|
You can use kubectl to inspect the deployment that fairing created
|
!kubectl get deploy -o yaml {deployer.deployment.metadata.name}
|
xgboost_synthetic/build-train-deploy.ipynb
|
kubeflow/examples
|
apache-2.0
|
Send an inference request to the prediction server
Now that you have deployed the model into your Kubernetes cluster, you can send a REST request to perform inference
The code below reads some data, sends a prediction request, and then prints out the response
|
(train_X, train_y), (test_X, test_y) = read_synthetic_input()
result = util.predict_nparray(url, test_X)
pprint.pprint(result.content)
|
xgboost_synthetic/build-train-deploy.ipynb
|
kubeflow/examples
|
apache-2.0
|
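For context on what util.predict_nparray sends over the wire, here is a hedged stdlib sketch: Seldon's REST protocol wraps inputs in a data/ndarray JSON envelope. The predict helper and the /predict path are assumptions about the endpoint, and the network call only happens if you invoke it against the live service:

```python
import json
from urllib import request


def build_seldon_payload(rows):
    # Seldon's REST protocol wraps inputs in a "data"/"ndarray" envelope
    return {"data": {"ndarray": [list(r) for r in rows]}}


def predict(url, rows):
    # Assumed endpoint path; requires the live Seldon service created above
    payload = json.dumps(build_seldon_payload(rows)).encode("utf-8")
    req = request.Request(url + "/predict", data=payload,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())


payload = build_seldon_payload([[0.1, 0.2], [0.3, 0.4]])
```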
Clean up the prediction endpoint
You can use kubectl to delete the Kubernetes resources for your model
If you want to delete the resources, uncomment the following lines and run them
|
# !kubectl delete service -l app=mockup
# !kubectl delete deploy -l app=mockup
|
xgboost_synthetic/build-train-deploy.ipynb
|
kubeflow/examples
|
apache-2.0
|
Track Models and Artifacts
Using Kubeflow's metadata server you can track models and artifacts
The ModelServe code was instrumented to log executions and outputs
You can access Kubeflow's metadata UI by selecting Artifact Store from the central dashboard
See the Kubeflow documentation for instructions on connecting to Kubeflow's UIs
You can also use the python SDK to read and write entries
This notebook illustrates a bunch of metadata functionality
Create a workspace
Kubeflow metadata uses workspaces as a logical grouping for artifacts, executions, and datasets that belong together
Earlier in the notebook we defined the function create_workspace to create a workspace for this example
You can use that function to return a workspace object and then call list to see all the artifacts in that workspace
|
ws = create_workspace()
ws.list()
|
xgboost_synthetic/build-train-deploy.ipynb
|
kubeflow/examples
|
apache-2.0
|
Create a pipeline to train your model
Kubeflow pipelines makes it easy to define complex workflows to build and deploy models
Below you will define and run a simple one-step pipeline to train your model
Kubeflow pipelines uses experiments to group different runs of a pipeline together
So you start by defining a name for your experiment
Define the pipeline
To create a pipeline you create a function and decorate it with the @dsl.pipeline decorator
You use the decorator to give the pipeline a name and description
Inside the function, each step is defined by a ContainerOp that specifies a container to invoke
You will use the container image that you built earlier using Kubeflow Fairing
Since the Kubeflow Fairing preprocessor added a main function using python-fire, a step in your pipeline can invoke any function in the ModelServe class just by setting the command for the container op
See the pipelines SDK reference for more information
|
@dsl.pipeline(
name='Training pipeline',
description='A pipeline that trains an xgboost model on the synthetic dataset.'
)
def train_pipeline(
):
command=["python", preprocessor.executable.name, "train"]
train_op = dsl.ContainerOp(
name="train",
image=builder.image_tag,
command=command,
).apply(
gcp.use_gcp_secret('user-gcp-sa'),
)
train_op.container.working_dir = "/app"
|
xgboost_synthetic/build-train-deploy.ipynb
|
kubeflow/examples
|
apache-2.0
|
Compile the pipeline
Pipelines need to be compiled before they can be submitted
|
pipeline_func = train_pipeline
pipeline_filename = pipeline_func.__name__ + '.pipeline.zip'
compiler.Compiler().compile(pipeline_func, pipeline_filename)
|
xgboost_synthetic/build-train-deploy.ipynb
|
kubeflow/examples
|
apache-2.0
|
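After compiling, the archive can be submitted as a run. This is a hedged sketch of that submission, assuming the kfp SDK and an in-cluster Kubeflow Pipelines endpoint; the experiment name is an assumption, and the client calls are left commented out because they need a live cluster:

```python
import datetime

EXPERIMENT_NAME = "xgboost-synthetic"  # assumed experiment name


def run_name(pipeline_name):
    # Pipeline runs need distinct names; suffix with a timestamp
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    return pipeline_name + "_" + stamp


# client = kfp.Client()  # default in-cluster Kubeflow Pipelines endpoint
# experiment = client.create_experiment(EXPERIMENT_NAME)
# run = client.run_pipeline(experiment.id,
#                           run_name(pipeline_func.__name__),
#                           pipeline_filename)
name = run_name("train_pipeline")
```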