This is called a hierarchical index, which we will revisit later in the section. If we have sections of data that we do not wish to import (for example, known bad data), we can populate the skiprows argument:
pd.read_csv("../data/microbiome.csv", skiprows=[3,4,6]).head()
notebooks/Introduction to Pandas.ipynb
fonnesbeck/scientific-python-workshop
cc0-1.0
If we only want to import a small number of rows from, say, a very large data file, we can use nrows:
pd.read_csv("../data/microbiome.csv", nrows=4)
Alternatively, if we want to process our data in reasonable chunks, the chunksize argument will return an iterable object that can be employed in a data-processing loop. For example, our microbiome data are organized by bacterial phylum, with 15 patients represented in each:
pd.read_csv("../data/microbiome.csv", chunksize=15)

data_chunks = pd.read_csv("../data/microbiome.csv", chunksize=15)

# .iloc[0] takes the first Taxon in each chunk positionally, since the
# default integer index keeps running across chunks
mean_tissue = pd.Series({chunk.Taxon.iloc[0]: chunk.Tissue.mean() for chunk in data_chunks})
mean_tissue
Most real-world data is incomplete, with values missing due to incomplete observation, data entry or transcription error, or other reasons. Pandas will automatically recognize and parse common missing data indicators, including NA and NULL.
!cat ../data/microbiome_missing.csv

pd.read_csv("../data/microbiome_missing.csv").head(20)
Above, Pandas recognized NA and an empty field as missing data.
pd.isnull(pd.read_csv("../data/microbiome_missing.csv")).head(20)
Unfortunately, there will sometimes be inconsistency with the conventions for missing data. In this example, there is a question mark "?" and a large negative number where there should have been a positive integer. We can specify additional symbols with the na_values argument:
pd.read_csv("../data/microbiome_missing.csv", na_values=['?', -99999]).head(20)
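na_values also accepts a dict mapping column names to their own sentinel lists, so a marker like "?" can be treated as missing in one column without affecting others. A minimal sketch with made-up data (the inline CSV below is illustrative, not the microbiome file):

```python
import pandas as pd
from io import StringIO

# Hypothetical data: "?" marks bad values only in Tissue, -99999 only in Stool
raw = StringIO(
    "Taxon,Patient,Tissue,Stool\n"
    "Firmicutes,1,?,4182\n"
    "Firmicutes,2,136,-99999\n"
)

# A dict for na_values applies each sentinel list only to its own column
df = pd.read_csv(raw, na_values={'Tissue': ['?'], 'Stool': [-99999]})
print(df)
```

With this, -99999 in the Tissue column would survive untouched, while the same value in Stool becomes NaN.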
These can be specified on a column-wise basis using an appropriate dict as the argument for na_values.

Microsoft Excel

Since so much financial and scientific data ends up in Excel spreadsheets (regrettably), Pandas' ability to import Excel spreadsheets directly is valuable. This support is contingent on having one or t...
mb_file = pd.ExcelFile('../data/microbiome/MID1.xls')
mb_file
There is now a read_excel convenience function in Pandas that combines these steps into a single call:
# sheet_name (formerly sheetname) selects the worksheet to read
mb2 = pd.read_excel('../data/microbiome/MID2.xls', sheet_name='Sheet 1', header=None)
mb2.head()
There are several other data formats that can be imported into Python and converted into DataFrames with the help of built-in or third-party libraries. These include JSON, XML, HDF5, relational and non-relational databases, and various web APIs. These are beyond the scope of this tutorial, but are covered in Python fo...
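As a taste of one of those formats, pandas ships a read_json function; a minimal sketch with made-up records (not data from this tutorial):

```python
import pandas as pd
from io import StringIO

# Two illustrative records in JSON "records" orientation
json_data = StringIO(
    '[{"player": "bondsba01", "year": 2006}, {"player": "aardsda01", "year": 2006}]'
)
df = pd.read_json(json_data)
print(df)
```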
baseball = pd.read_csv("../data/baseball.csv", index_col='id')
baseball.head()
In addition to using loc to select rows and columns by label, pandas also allows indexing by position using the iloc attribute. So, we can query rows and columns by absolute position, rather than by name:
baseball_newind.iloc[:5, 5:8]
Exercise

You can use the isin method to query a DataFrame based upon a list of values as follows:

data['phylum'].isin(['Firmacutes', 'Bacteroidetes'])

Use isin to find all players that played for the Los Angeles Dodgers (LAN) or the San Francisco Giants (SFN). How many records contain these values?
# Write your answer here
Recall earlier we imported some microbiome data using two index columns. This created a 2-level hierarchical index:
mb = pd.read_csv("../data/microbiome.csv", index_col=['Taxon','Patient'])
mb.head(10)
This can be customized further by specifying how many values need to be present before a row is dropped via the thresh argument.
data.loc[7, 'year'] = np.nan
data

data.dropna(thresh=4)
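Since the cell above depends on an earlier data frame, here is a self-contained sketch of how thresh behaves (the frame is made up for illustration):

```python
import numpy as np
import pandas as pd

# A small illustrative frame: thresh=2 keeps rows with at least 2 non-null values
frame = pd.DataFrame({'a': [1.0, np.nan, 3.0],
                      'b': [np.nan, np.nan, 6.0],
                      'c': [7.0, np.nan, 9.0]})
kept = frame.dropna(thresh=2)
print(kept)  # the all-NaN middle row is dropped; the others survive
```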
Try running corr on the entire baseball DataFrame to see what is returned:
# Write answer here
As Wes warns in his book, it is recommended that binary storage of data via pickle only be used as a temporary storage format, in situations where speed is relevant. This is because there is no guarantee that the pickle format will not change with future versions of Python. Advanced Exercise: Compiling Ebola Data The d...
# Write your answer here
Generator network Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero ...
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01): ''' Build the generator network. Arguments --------- z : Input tensor for the generator out_dim : Shape of the generator output n_units : Number of units in hidden layer reuse : Reuse the variables...
gan_mnist/Intro_to_GANs_Exercises.ipynb
SlipknotTN/udacity-deeplearning-nanodegree
mit
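The leaky ReLU described above is easy to sketch outside TensorFlow; assuming the usual formulation max(alpha * x, x) for a leak parameter alpha:

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    """Pass positive inputs through unchanged; scale negative inputs by alpha.

    For alpha < 1, max(alpha * x, x) picks x when x > 0 and alpha * x otherwise.
    """
    return np.maximum(alpha * x, x)

x = np.array([-10.0, -1.0, 0.0, 1.0, 10.0])
print(leaky_relu(x))
```

In TensorFlow 1.x graphs like the one in this notebook, the same idea is commonly written as tf.maximum(alpha * h, h) on the hidden-layer output.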
Discriminator The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer. Exercise: Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. Make sure to wrap your code in a...
def discriminator(x, n_units=128, reuse=False, alpha=0.01): ''' Build the discriminator network. Arguments --------- x : Input tensor for the discriminator n_units: Number of units in hidden layer reuse : Reuse the variables with tf.variable_scope alpha : leak pa...
Build network Now we're building the network from the functions defined above. First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z. Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes. Th...
tf.reset_default_graph() # Create our input placeholders input_real, input_z = model_inputs(real_dim=input_size, z_dim=z_size) # Generator network here g_model = generator(z=input_z, out_dim=input_size, alpha=alpha, n_units=g_hidden_size) # g_model is the generator output # Disriminator network here d_model_real, d_l...
Optimizers We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph. For the gener...
# Optimizers learning_rate = 0.002 # Get the trainable_variables, split into G and D parts t_vars = tf.trainable_variables() g_vars = [generatorVar for generatorVar in t_vars if generatorVar.name.startswith('generator')] d_vars = [discriminatorVar for discriminatorVar in t_vars if discriminatorVar.name.startswith('dis...
Training
batch_size = 100 epochs = 100 samples = [] losses = [] saver = tf.train.Saver(var_list = g_vars) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for e in range(epochs): for ii in range(mnist.train.num_examples//batch_size): batch = mnist.train.next_batch(batch_size) ...
Lists can be heterogeneous:
L3 = [True, '2', 3.0, 4]
[type(item) for item in L3]
tuple(L3)
Chapter 1 - Python DS Handbook.ipynb
suresh/notebooks
mit
Creating Arrays from lists
import numpy as np

np.array([1, 4, 2, 5, 3])
np.array([3.14, 4, 2, 3])
Creating Arrays from scratch

Especially for larger arrays, it is more efficient to create arrays from scratch using routines built into NumPy. Here are some examples:
# create a 3x3 array of random integers in the interval [0, 10)
np.random.randint(0, 10, (3, 3))

# create a 3x3 array of uniform random values
np.random.random((3, 3))

# create a 3x3 array of normally distributed values (mean 0, std 1)
np.random.normal(0, 1, (3, 3))
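A few more of the built-in construction routines, for comparison (values chosen arbitrarily):

```python
import numpy as np

z = np.zeros(5)              # five zeros
o = np.ones((2, 3))          # 2x3 array of ones
f = np.full((2, 2), 3.14)    # 2x2 array filled with 3.14
r = np.arange(0, 10, 2)      # like range(): start, stop (exclusive), step
l = np.linspace(0, 1, 5)     # 5 evenly spaced values from 0 to 1 inclusive
print(r)
print(l)
```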
LV2 recovery system analysis
# Analyzing LV2 Telemetry and IMU data # From git, Launch 12 ####################################### import numpy as np import matplotlib.pyplot as plt import pandas as pd from scipy.integrate import simps %matplotlib inline # Graphing helper function def setup_graph(title='',x_label='', y_label='', fig_size=None): ...
Useful_Misc/Recovery_Initial_Calculations.ipynb
psas/lv3.0-recovery
gpl-3.0
Suggestions

Your $dt$ value is a bit off. Also, $dt$ changes, especially when the drogue is deploying. The other thing that I'd be careful about is your assumption that the deployment force is just the $\Delta v$ for deployment divided by the time. That would mean the force is as spread out as possible, which isn't tru...
# how to get the values for dt:
diffs = [t2 - t1 for t1, t2 in zip(time[:-1], time[1:])]
#plt.figure()
#plt.plot(time[1:], diffs)
#plt.title('values of dt')
print('mean dt value:', np.mean(diffs))

# marginally nicer way to keep track of time windows:
ind_drogue = [i for i in range(len(time)) if ((time[i]>34.5) & (time[i]<...
LV3 Recovery System

Overview

Deployment design
<img src='Sketch_ Deployment_Design.jpg' width="450">

Top-level design
<img src='Sketch_ Top-Level_Design.jpg' width="450">
<img src='initial_idea_given_info.jpg' width="450">

Step-by-step
<img src='step_by_step.jpg' width="450">

Dec...
## **Need to decide on FOS for the weight and then use estimator to calculate necessary area** ##
# Calculating area needed for LV3 parachutes
# Both drogue and main
# From OpenRocket file LV3_L13a_ideal.ork

# Total mass (kg), really rough estimate
m_tot3 = 27.667
# Total weight (N)
w_tot3 = m_tot3 * g
print(w_tot3)
# ...
Define Network
with tf.name_scope("data"): X = tf.placeholder(tf.float32, [None, IMG_SIZE, IMG_SIZE, 1], name="X") Y = tf.placeholder(tf.float32, [None, NUM_CLASSES], name="Y") def conv2d(x, W, b, strides=1): x = tf.nn.conv2d(x, W, strides=[1, strides, strides, 1], padding="SAME") x = tf.nn.bias_add(x, b) return ...
src/tensorflow/02-mnist-cnn.ipynb
sujitpal/polydlot
apache-2.0
Train Network
history = [] with tf.Session() as sess: sess.run(tf.global_variables_initializer()) saver = tf.train.Saver() # tensorboard viz logger = tf.summary.FileWriter(LOG_DIR, sess.graph) train_gen = datagen(Xtrain, ytrain, BATCH_SIZE) num_batches = len(Xtrain) // BATCH_SIZE for epoch in range(NUM_E...
Visualize with Tensorboard We have also requested the total_loss and total_accuracy scalars to be logged in our computational graph, so the above charts can also be seen from the built-in tensorboard tool. The scalars are logged to the directory given by LOG_DIR, so we can start the tensorboard tool from the command li...
BEST_MODEL = os.path.join(DATA_DIR, "tf-mnist-cnn-5") saver = tf.train.Saver() ys, ys_ = [], [] with tf.Session() as sess: sess.run(tf.global_variables_initializer()) saver.restore(sess, BEST_MODEL) test_gen = datagen(Xtest, ytest, BATCH_SIZE) val_loss, val_acc = 0., 0. num_batches = len(Xtrain) // ...
List Comprehensions

List comprehensions provide a concise way to create lists (arrays). A common application is to make a new list where each element is the result of some operation applied to each member of another sequence. For example, create the list: [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
squares = []                # create a blank list
for x in range(10):         # for loop over 0 -> 9
    squares.append(x**2)    # calculate x**2 for each x, add to end of list
squares
Python_Astroplan_Constraints.ipynb
UWashington-Astro300/Astro300-A17
mit
You can do the same thing with:
squares = [x**2 for x in range(10)]
squares
You can include if statements:
even_squares = []
for x in range(10):
    if (x % 2 == 0):
        even_squares.append(x**2)
even_squares
You can do the same thing with:
even_squares = [x**2 for x in range(10) if (x % 2 == 0)]
even_squares
Now to observations. Let us start with an external list of target objects:
target_table = QTable.read('ObjectList.csv', format='ascii.csv')
target_table

targets = [FixedTarget(coord=SkyCoord(ra=RA*u.hourangle, dec=DEC*u.deg), name=Name)
           for Name, RA, DEC in target_table]
targets
Observing Night

You are the most junior member of the team, so you get stuck observing on New Year's Eve.
observe_date = Time("2018-01-01", format='iso')
But you get to observe in Hawaii:
my_timezone = pytz.timezone('US/Hawaii') my_location = Observer.at_site('gemini_north') observe_start = my_location.sun_set_time(observe_date, which='nearest') observe_end = my_location.sun_rise_time(observe_date, which='next') print("Observing starts at {0.iso} UTC".format(observe_start)) print("Observing ends at {...
Plot the objects
%matplotlib inline import matplotlib.pyplot as plt from astroplan import time_grid_from_range from astroplan.plots import plot_sky, plot_airmass time_grid = time_grid_from_range(observing_range) fig,ax = plt.subplots(1,1) fig.set_size_inches(10,10) fig.tight_layout() for my_object in targets: ax = plot_sky(my_...
Observing Constraints
from astroplan import AltitudeConstraint, AirmassConstraint
from astroplan import observability_table

constraints = [AltitudeConstraint(20*u.deg, 80*u.deg)]
observing_table = observability_table(constraints, my_location, targets, time_range=observing_range)
print(observing_table)
Let us add another constraint
constraints.append(AirmassConstraint(2))
observing_table = observability_table(constraints, my_location, targets, time_range=observing_range)
print(observing_table)
Additional Constraints

from astroplan import CONSTRAINT

AtNightConstraint() - Constrain the Sun to be below the horizon.
MoonIlluminationConstraint(min, max) - Constrain the fractional illumination of the Moon.
MoonSeparationConstraint(min, max) - Constrain the separation between the Moon and some targets.
SunSeparationCo...
from astroplan import moon_illumination moon_illumination(observe_start) from astroplan import MoonSeparationConstraint constraints.append(MoonSeparationConstraint(45*u.deg)) observing_table = observability_table(constraints, my_location, targets, time_range=observing_range) print(observing_table) fig,ax = plt.su...
Calculate & Plot Correlations
hp = cars_rdd.map(lambda x: x[0][2]) weight = cars_rdd.map(lambda x: x[0][10]) print '%2.3f' % Statistics.corr(hp, weight, method="pearson") print '%2.3f' % Statistics.corr(hp, weight, method="spearman") print hp import pandas as pd from ggplot import * %matplotlib inline df = pd.DataFrame({'HP': hp.collect(),'Weigh...
extras/010-Linear-Regression.ipynb
dineshpackt/Fast-Data-Processing-with-Spark-2
mit
Coding Exercise

Calculate the correlation between Rear Axle Ratio & Width. Plot & verify.
ra_ratio = cars_rdd.map(lambda x: x[0][5]) width = cars_rdd.map(lambda x: x[0][9]) print '%2.3f' % Statistics.corr(ra_ratio, width, method="pearson") print '%2.3f' % Statistics.corr(ra_ratio, width, method="spearman") df = pd.DataFrame({'RA Ratio': ra_ratio.collect(),'Width':width.collect()}) ggplot(df, aes(x='RA Rat...
Linear Regression
from pyspark.mllib.regression import LabeledPoint from pyspark.mllib.regression import LinearRegressionWithSGD from pyspark.mllib.regression import LassoWithSGD from pyspark.mllib.regression import RidgeRegressionWithSGD from numpy import array data = [ LabeledPoint(0.0, [0.0]), LabeledPoint(10.0, [10.0]), La...
TIP : Step Size is important
data = [ LabeledPoint(0.0, [0.0]), LabeledPoint(9.0, [10.0]), LabeledPoint(22.0, [20.0]), LabeledPoint(32.0, [30.0]) ] lrm = LinearRegressionWithSGD.train(sc.parallelize(data), initialWeights=array([1.0])) # should be 1.09x -0.60 # Default step size of 1.0 will diverge print "Step Size 1.0 (Default)...
Step Size 1.0 (Default) (weights=[-2.4414455467e+173], intercept=0.0) [-2.4414455467e+173] 0.0 -9765782186791751487210715088039060352220111253693716482748893621300180659099465516818172090556066502587581239328158959270667383329208729580664342023823383430423411112418476032.000 Step Size 0.01 (weights=[1.06428571429], int...
data = [ LabeledPoint(18.9, [3910.0]), LabeledPoint(17.0, [3860.0]), LabeledPoint(20.0, [4200.0]), LabeledPoint(16.6, [3660.0]) ] lrm = LinearRegressionWithSGD.train(sc.parallelize(data), step=0.00000001) # should be ~ 0.006582x -7.595170 print lrm print lrm.weights print lrm.intercept lrm.predict([4000])
(weights=[0.00439009869891], intercept=0.0)
[0.00439009869891]
0.0
17.560394795638977

Homework
- Convert the car data to labelled points
- Partition into Train & Test
- Train the three linear models
- Calculate the MSE for the three models
from pyspark.mllib.regression import LabeledPoint def parse_car_data(x): # return labelled point return LabeledPoint(x[0][0],[ x[0][1],x[0][2],x[0][3],x[0][4],x[0][5], x[0][6],x[0][7],x[0][8],x[0][9],x[0][10],x[0][11] ]) car_rdd_lp = cars_rdd.map(lambda x: parse_car_data(x)) p...
30 18.9 [350.0,165.0,260.0,8.0,2.55999994278,4.0,3.0,200.300003052,69.9000015259,3910.0,1.0]
car_rdd_train = car_rdd_lp.filter(lambda x: x.features[9] <= 4000) car_rdd_train.count() car_rdd_test = car_rdd_lp.filter(lambda x: x.features[9] > 4000) car_rdd_test.count() car_rdd_train.take(5) car_rdd_test.take(5) lrm = LinearRegressionWithSGD.train(car_rdd_train, step=0.000000001) print lrm print lrm.weights p...
1.6.0 (12/17/15) : Mean Squared Error = 221.828342627
valuesAndPreds.take(20) lrm = LassoWithSGD.train(car_rdd_train, step=0.000000001) print lrm.weights print lrm.intercept valuesAndPreds = car_rdd_test.map(lambda p: (p.label, lrm.predict(p.features))) MSE = valuesAndPreds.map(lambda (v, p): (v - p)**2).reduce(lambda x, y: x + y) / valuesAndPreds.count() print("Mean Squ...
[7.99133570468e-05,4.11481587054e-05,6.19949476655e-05,3.16472476808e-06,1.2330557013e-06,8.41105433668e-07,1.37323711608e-06,6.74789184347e-05,2.56204587207e-05,0.00112604747864,1.93481200483e-07] 0.0 Mean Squared Error = 105.857623024
valuesAndPreds.take(20) lrm = RidgeRegressionWithSGD.train(car_rdd_train, step=0.000000001) print lrm.weights print lrm.intercept valuesAndPreds = car_rdd_test.map(lambda p: (p.label, lrm.predict(p.features))) MSE = valuesAndPreds.map(lambda (v, p): (v - p)**2).reduce(lambda x, y: x + y) / valuesAndPreds.count() print...
From analysing the Newsday archive website, we see that the URL follows a parsable convention: http://www.newsday.co.tt/archives/YYYY-M-DD.html

So our general approach will be as follows:
1. Generate dates in the expected form between a starting and an ending date
2. Test to ensure the dates generated are valid. (...
# Step 1 - create a function to generates a list(array) of dates def genDatesNewsDay(start_date = date.today(), num_days = 3): # date_list = [start - timedelta(days=x) for x in range(0, num_days)] # generate a list of dates # While we expand the above line for beginners understanding date_list = [] for...
Scrape Newsday.ipynb
kyledef/jammerwebscraper
mit
The Purpose (Goal) of Scraping

Our main purpose in developing this exercise was to test the claim that the majority of published news is negative. To do this we need to capture the sentiment of the information extracted from each link. While we could develop sentiment analysis tools in Python, the proc...
# Integrating IBM Watson import json from watson_developer_cloud import ToneAnalyzerV3 from local_settings import * def getAnalyser(): tone_analyzer = ToneAnalyzerV3( username= WATSON_CREDS['username'], password= WATSON_CREDS['password'], version='2016-05-19') return tone_analyzer # tone...
The structure of the response is documented in the API reference: https://www.ibm.com/watson/developercloud/tone-analyzer/api/v3/?python#post-tone
first = True emo_count = { "anger" : 0, "disgust": 0, "fear" : 0, "joy" : 0, "sadness": 0 } socio_count = { "openness_big5": 0, "conscientiousness_big5": 0, "extraversion_big5" : 0, "agreeableness_big5" : 0, "emotional_range_big5": 0 } for i in article_links: res = tone_analy...
Selecting one seal
wd = df.pivot(columns="individual")   # row, column, values (optional)

# .loc replaces the deprecated .ix indexer
f104 = df.loc[df["individual"] == "F104"]
f104.head()
Week-03/04-plotting-seal-data.ipynb
scientific-visualization-2016/ClassMaterials
cc0-1.0
Plotting the seal path

Several steps:
1. Create a map centered on the region
2. Draw coastlines
3. Draw countries
4. Fill oceans and coastline
5. Draw the observations of the seal on the map
%matplotlib inline
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
Drawing an empty map of the region
f104.dtypes lons = f104["longitude"].values lons = lons.astype(np.float) lats = f104["latitude"].values lons_c=np.average(lons) lats_c=np.average(lats) print (lons_c, lats_c) # map = Basemap(projection='ortho', lat_0=lats_c,lon_0=lons_c) fig=plt.figure(figsize=(12,9)) # draw coastlines, country boundaries, fill con...
Plotting seal observations
# map = Basemap(projection='ortho', lat_0=lats_c,lon_0=lons_c) fig=plt.figure(figsize=(12,9)) # draw coastlines, country boundaries, fill continents. map.drawcoastlines(linewidth=0.25) map.drawcountries(linewidth=0.25) map.fillcontinents(color='coral',lake_color='blue') # draw the edge of the map projection region (...
Plot all zoomed in
# map = Basemap(width=200000,height=100000,projection='lcc', resolution='h', lat_0=lats_c,lon_0=lons_c) fig=plt.figure(figsize=(12,9)) ax = fig.add_axes([0.05,0.05,0.9,0.85]) # draw coastlines, country boundaries, fill continents. map.drawcoastlines(linewidth=0.25) map.drawcountries(linewidth=0.25) map....
Decibels vs. Percentages Percentages are simple, right? I bought four oranges. I ate two. What percentage of the original four do I have left? 50%. Easy. How many decibels down in oranges am I? Not so easy, eh? Well the answer is 3. Skim the rest of this notebook to find out why. Percentage: The prefix <code>cent</code...
# percentage function, v = new value, r = reference value
def perc(v, r):
    return 100 * (v / r)
dB-vs-perc.ipynb
gganssle/dB-vs-perc
mit
Let's now formalize our oranges percentage answer from above:
original = 4
uneaten = 2
print("You're left with", perc(uneaten, original), "% of the original oranges.")
Decibels: A decibel is simply a different way to represent the ratio of two numbers. It's based on a logarithmic calculation, so as to compare values with large variance and small variance on the same scale (more on this below). <br><i>For an entertainingly complete history of how decibels were decided upon, read the <a...
import math

# decibel function, v = new value, r = reference value
def deci(v, r):
    return 10 * math.log(v / r, 10)
Let's now formalize our oranges decibel answer from above:
print("After lunch you have", round( deci(uneaten, original), 2), "decibels less oranges.")
That's it. Simply calculate the log to base ten of the ratio and multiply by ten. <hr> Advanced Part <hr> Well <b>who cares?</b> I'm just going to use percentages. They're easier to calculate and I don't have to relearn my forgotten-for-decades logarithms. <br>True, most people use percentages because that's what ever...
perc(3000000, original)
"I have seventy five million percent of my original oranges." Yikes. What does that even mean? Use decibels instead:
deci(3000000, original)
"I've gained fifty-nine decibels of oranges."

Negative ratios

Additionally, the decibel scale automatically distinguishes shrinkage from growth: ratios below one come out as negative decibel values, and ratios above one come out positive.
# Fewer oranges than the original number
print(deci(uneaten, original))
print(perc(uneaten, original))

# More oranges than the original number
print(deci(8, original))
print(perc(8, original))
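Going the other way, a decibel value converts back to a plain ratio by inverting the formula; undeci below is my own helper name, not part of the original notebook:

```python
# Inverse of deci() above: recover the plain ratio v/r from a decibel value.
# This helper is an illustrative addition, not from the original notebook.
def undeci(db):
    return 10 ** (db / 10)

print(undeci(10))   # a 10 dB gain is a factor of 10
print(undeci(0))    # 0 dB means unchanged
```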
Greater than 100% ratios There's some ambiguity when a person states she has 130% more oranges than her original number. Does this mean she has 5.2 oranges (which is 30% more than 4 oranges)?
perc(5.2, original)
Does she have 9.2 oranges (which is 130% more than 4 oranges)?
perc(9.2, original)
Expressed in decibel format, the answer is clear:
deci(5.2, original)
She's gained 1.1 decibels in orange holdings. How do they stack up?
width = 100 center = 50 ref = center percplot = np.zeros(width) deciplot = np.zeros(width) for i in range(width): val = center - (width/2) + i if val == 0: val = 0.000001 percplot[i] = perc(val, ref) deciplot[i] = deci(val, ref) plt.plot(range(width), percplot, 'r', label="percentage") plt.plo...
Establish a secure connection with HydroShare by instantiating the hydroshare class that is defined within hs_utils. In addition to connecting with HydroShare, this command also sets and prints environment variables for several parameters that will be useful for saving work back to HydroShare.
notebookdir = os.getcwd()
hs = hydroshare.hydroshare()
homedir = hs.getContentPath(os.environ["HS_RES_ID"])
os.chdir(homedir)
print('Data will be loaded from and saved to: ' + homedir)
tutorials/Observatory_usecase6_observationdata.ipynb
ChristinaB/Observatory
mit
If you are curious about where the data is being downloaded, click on the Jupyter Notebook dashboard icon to return to the File System view. The homedir directory location printed above is where you can find the data and contents you will download to a HydroShare JupyterHub server. At the end of this work session, yo...
""" Sauk """
# Watershed extent
hs.getResourceFromHydroShare('c532e0578e974201a0bc40a37ef2d284')
sauk = hs.content['wbdhub12_17110006_WGS84_Basin.shp']
Summarize the file availability from each watershed mapping file
# map the mappingfiles from usecase1 mappingfile1 = os.path.join(homedir,'Sauk_mappingfile.csv') mappingfile2 = os.path.join(homedir,'Elwha_mappingfile.csv') mappingfile3 = os.path.join(homedir,'RioSalado_mappingfile.csv') t1 = ogh.mappingfileSummary(listofmappingfiles = [mappingfile1, mappingfile2, mappingfile3], ...
3. Compare Hydrometeorology This section performs computations and generates plots of the Livneh 2013, Livneh 2016, and WRF 2014 temperature and precipitation data in order to compare them with each other and observations. The generated plots are automatically downloaded and saved as .png files in the "plots" folder o...
# Livneh et al., 2013 dr1 = meta_file['dailymet_livneh2013'] # Salathe et al., 2014 dr2 = meta_file['dailywrf_salathe2014'] # define overlapping time window dr = ogh.overlappingDates(date_set1=tuple([dr1['start_date'], dr1['end_date']]), date_set2=tuple([dr2['start_date'], dr2['end_date']])...
INPUT: gridded meteorology from JupyterHub folders

Data frames for each set of data are stored in a dictionary. The inputs to gridclim_dict() include the folder location and name of the hydrometeorology data, the file start and end, the analysis start and end, and the elevation band to be included in the analysis (max...
%%time ltm_3bands = ogh.gridclim_dict(mappingfile=mappingfile1, metadata=meta_file, dataset='dailymet_livneh2013', file_start_date=dr1['start_date'], file_end_date=dr1['end_date'], ...
4. Visualize monthly precipitation spatially using Livneh et al., 2013 Meteorology data

Apply different plotting options:
- time-index option
- Basemap option
- colormap option
- projection option
%%time month=3 monthlabel = pd.datetime.strptime(str(month), '%m') ogh.renderValuesInPoints(vardf=ltm_3bands['month_PRECIP_dailymet_livneh2013'], vardf_dateindex=month, shapefile=sauk.replace('.shp','_2.shp'), outfilepath=os.path.join(homedir,...
Visualize monthly precipitation difference between different gridded data products
for month in [3, 6, 9, 12]: monthlabel = pd.datetime.strptime(str(month), '%m') outfile='SaukLivnehPrecip{0}.png'.format(monthlabel.strftime('%b')) ax1 = ogh.renderValuesInPoints(vardf=ltm_3bands['month_PRECIP_dailymet_livneh2013'], vardf_dateindex=month, ...
tutorials/Observatory_usecase6_observationdata.ipynb
ChristinaB/Observatory
mit
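The cells above build month abbreviations for the plot filenames via `pd.datetime.strptime`, a deprecated alias that was removed in pandas 2.0. The same labels can be produced with the standard library alone:

```python
from datetime import datetime

# Month abbreviations for labeling the quarterly precipitation plots
labels = [datetime.strptime(str(m), "%m").strftime("%b") for m in (3, 6, 9, 12)]
print(labels)  # → ['Mar', 'Jun', 'Sep', 'Dec']
```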
comparison to WRF data from Salathe et al., 2014
%%time ltm_3bands = ogh.gridclim_dict(mappingfile=mappingfile1, metadata=meta_file, dataset='dailywrf_salathe2014', colvar=None, file_start_date=dr2['start_date'], ...
tutorials/Observatory_usecase6_observationdata.ipynb
ChristinaB/Observatory
mit
6. Compare gridded model to point observations Read in SNOTEL data - assess available data If you want to plot observed SNOTEL point precipitation or temperature with the gridded climate data, set to 'Y' Give name of SNOTEL file and name to be used in figure legends. File format: Daily SNOTEL Data Report - Historic ...
# Sauk SNOTEL_file = os.path.join(homedir,'ThunderBasinSNOTEL.txt') SNOTEL_station_name='Thunder Creek' SNOTEL_file_use_colsnames = ['Date','Air Temperature Maximum (degF)', 'Air Temperature Minimum (degF)','Air Temperature Average (degF)','Precipitation Increment (in)'] SNOTEL_station_elev=int(4320/3.281) # meters SN...
tutorials/Observatory_usecase6_observationdata.ipynb
ChristinaB/Observatory
mit
Read in COOP station data - assess available data https://www.ncdc.noaa.gov/
COOP_file=os.path.join(homedir, 'USC00455678.csv') # Sauk COOP_station_name='Mt Vernon' COOP_file_use_colsnames = ['DATE','PRCP','TMAX', 'TMIN','TOBS'] COOP_station_elev=int(4.3) # meters COOP_obs_daily = ogh.read_daily_coop(file_name=COOP_file, usecols=COOP_file_use_colsnames, ...
tutorials/Observatory_usecase6_observationdata.ipynb
ChristinaB/Observatory
mit
Set up VIC dictionary (as an example) to compare to available data
vic_dr1 = meta_file['dailyvic_livneh2013']['date_range'] vic_dr2 = meta_file['dailyvic_livneh2015']['date_range'] vic_dr = ogh.overlappingDates(tuple([vic_dr1['start'], vic_dr1['end']]), tuple([vic_dr2['start'], vic_dr2['end']])) vic_ltm_3bands = ogh.gridclim_dict(mappingfile=mappingfile,...
tutorials/Observatory_usecase6_observationdata.ipynb
ChristinaB/Observatory
mit
10. Save the results back into HydroShare <a name="creation"></a> Using the hs_utils library, the results of the Geoprocessing steps above can be saved back into HydroShare. First, define all of the required metadata for resource creation, i.e. title, abstract, keywords, content files. In addition, we must define the...
#execute this cell to list the content of the directory !ls -lt
tutorials/Observatory_usecase6_observationdata.ipynb
ChristinaB/Observatory
mit
Create list of files to save to HydroShare. Verify location and names.
!tar -zcf {climate2013_tar} livneh2013 !tar -zcf {climate2015_tar} livneh2015 !tar -zcf {wrf_tar} salathe2014 ThisNotebook='Observatory_Sauk_TreatGeoSelf.ipynb' #check name for consistency climate2013_tar = 'livneh2013.tar.gz' climate2015_tar = 'livneh2015.tar.gz' wrf_tar = 'salathe2014.tar.gz' mappingfile = 'Sauk_map...
tutorials/Observatory_usecase6_observationdata.ipynb
ChristinaB/Observatory
mit
Install Required Libraries Import the libraries required to train this model.
import notebook_setup notebook_setup.notebook_setup()
xgboost_synthetic/build-train-deploy.ipynb
kubeflow/examples
apache-2.0
Import the Python libraries we will use We add the comment "fairing:include-cell" to tell the kubeflow fairing preprocessor to keep this cell when converting to Python code later
# fairing:include-cell import fire import joblib import logging import nbconvert import os import pathlib import sys from pathlib import Path import pandas as pd import pprint from sklearn.metrics import mean_absolute_error from sklearn.model_selection import train_test_split from sklearn.impute import SimpleImputer fr...
xgboost_synthetic/build-train-deploy.ipynb
kubeflow/examples
apache-2.0
Code to train and predict In the cells below we define some functions to generate data and train a model. These functions could just as easily be defined in a separate Python module
# fairing:include-cell def read_synthetic_input(test_size=0.25): """generate synthetic data and split it into train and test.""" # generate regression dataset X, y = make_regression(n_samples=200, n_features=5, noise=0.1) train_X, test_X, train_y, test_y = train_test_split(X, ...
xgboost_synthetic/build-train-deploy.ipynb
kubeflow/examples
apache-2.0
Wrap Training and Prediction in a class In the cell below we wrap training and prediction in a class. A class provides the structure we will need to eventually use kubeflow fairing to launch separate training jobs and/or deploy the model on Kubernetes
# fairing:include-cell class ModelServe(object): def __init__(self, model_file=None): self.n_estimators = 50 self.learning_rate = 0.1 if not model_file: if "MODEL_FILE" in os.environ: print("model_file not supplied; checking environment variable") ...
xgboost_synthetic/build-train-deploy.ipynb
kubeflow/examples
apache-2.0
Train your Model Locally Train your model locally inside your notebook. To train locally we just instantiate the ModelServe class and then call train
model = ModelServe(model_file="mockup-model.dat") model.train()
xgboost_synthetic/build-train-deploy.ipynb
kubeflow/examples
apache-2.0
Predict locally Run prediction inside the notebook using the newly created model. To run prediction we just invoke predict
(train_X, train_y), (test_X, test_y) = read_synthetic_input() ModelServe().predict(test_X, None)
xgboost_synthetic/build-train-deploy.ipynb
kubeflow/examples
apache-2.0
Use Kubeflow Fairing to Launch a K8s Job to train your model Now that we have trained a model locally we can use Kubeflow fairing to launch a Kubernetes job to train the model and deploy the model on Kubernetes. Launching a separate Kubernetes job to train the model has the following advantages: You can leverage Kubernet...
# Setting up google container repositories (GCR) for storing output containers # You can use any docker container registry instead of GCR GCP_PROJECT = fairing.cloud.gcp.guess_project_name() DOCKER_REGISTRY = 'gcr.io/{}/fairing-job'.format(GCP_PROJECT)
xgboost_synthetic/build-train-deploy.ipynb
kubeflow/examples
apache-2.0
Use Kubeflow fairing to build the docker image First you will use kubeflow fairing's kaniko builder to build a docker image that includes all your dependencies. You use kaniko because you want to be able to run pip to install dependencies. Kaniko gives you the flexibility to build images from Dockerfiles. Kaniko, however...
# TODO(https://github.com/kubeflow/fairing/issues/426): We should get rid of this once the default # Kaniko image is updated to a newer image than 0.7.0. from kubeflow.fairing import constants constants.constants.KANIKO_IMAGE = "gcr.io/kaniko-project/executor:v0.14.0" from kubeflow.fairing.builders import cluster # ...
xgboost_synthetic/build-train-deploy.ipynb
kubeflow/examples
apache-2.0
Build the base image You use cluster_builder to build the base image. You only need to perform this again if you change your Docker image or the dependencies you need to install. ClusterBuilder takes as input the DockerImage to use as a base image. You should use the same Jupyter image that you are using for your notebook s...
# Use a stock jupyter image as our base image # TODO(jlewi): Should we try to use the downward API to default to the image we are running in? base_image = "gcr.io/kubeflow-images-public/tensorflow-1.14.0-notebook-cpu:v0.7.0" # We use a custom Dockerfile cluster_builder = cluster.cluster.ClusterBuilder(registry=DOCKER_...
xgboost_synthetic/build-train-deploy.ipynb
kubeflow/examples
apache-2.0
Build the actual image Here you use the append builder to add your code to the base image. Calling preprocessor.preprocess() converts your notebook file to a Python file. You are using the ConvertNotebookPreprocessorWithFire. This preprocessor converts ipynb files to py files by doing the following: Removing all ce...
preprocessor.preprocess() builder = append.append.AppendBuilder(registry=DOCKER_REGISTRY, base_image=cluster_builder.image_tag, preprocessor=preprocessor) builder.build()
xgboost_synthetic/build-train-deploy.ipynb
kubeflow/examples
apache-2.0
Launch the K8s Job You can use kubeflow fairing to easily launch a Kubernetes job to invoke code. You use fairing's Kubernetes job library to build a Kubernetes job. You use pod mutators to attach GCP credentials to the pod. You can also use pod mutators to attach PVCs. Since the ConvertNotebookPreprocessorWithFire is using...
pod_spec = builder.generate_pod_spec() train_deployer = job.job.Job(cleanup=False, pod_spec_mutators=[ fairing.cloud.gcp.add_gcp_credentials_if_exists]) # Add command line arguments pod_spec.containers[0].command.extend(["train"]) result = train_deployer.deploy...
xgboost_synthetic/build-train-deploy.ipynb
kubeflow/examples
apache-2.0
You can use kubectl to inspect the job that fairing created
!kubectl get jobs -l fairing-id={train_deployer.job_id} -o yaml
xgboost_synthetic/build-train-deploy.ipynb
kubeflow/examples
apache-2.0
Deploy the trained model to Kubeflow for predictions Now that you have trained a model you can use kubeflow fairing to deploy it on Kubernetes. When you call deployer.deploy, fairing will create a Kubernetes Deployment to serve your model. Kubeflow fairing uses the docker image you created earlier. The docker image you cr...
from kubeflow.fairing.deployers import serving pod_spec = builder.generate_pod_spec() module_name = os.path.splitext(preprocessor.executable.name)[0] deployer = serving.serving.Serving(module_name + ".ModelServe", service_type="ClusterIP", labels={"...
xgboost_synthetic/build-train-deploy.ipynb
kubeflow/examples
apache-2.0
You can use kubectl to inspect the deployment that fairing created
!kubectl get deploy -o yaml {deployer.deployment.metadata.name}
xgboost_synthetic/build-train-deploy.ipynb
kubeflow/examples
apache-2.0
Send an inference request to the prediction server Now that you have deployed the model into your Kubernetes cluster, you can send a REST request to perform inference. The code below reads some data, sends a prediction request, and then prints out the response
(train_X, train_y), (test_X, test_y) = read_synthetic_input() result = util.predict_nparray(url, test_X) pprint.pprint(result.content)
xgboost_synthetic/build-train-deploy.ipynb
kubeflow/examples
apache-2.0
Clean up the prediction endpoint You can use kubectl to delete the Kubernetes resources for your model. If you want to delete the resources, uncomment the following lines and run them
# !kubectl delete service -l app=ames # !kubectl delete deploy -l app=ames
xgboost_synthetic/build-train-deploy.ipynb
kubeflow/examples
apache-2.0
Track Models and Artifacts Using Kubeflow's metadata server you can track models and artifacts. The ModelServe code was instrumented to log executions and outputs. You can access Kubeflow's metadata UI by selecting Artifact Store from the central dashboard. See here for instructions on connecting to Kubeflow's UIs. You ca...
ws = create_workspace() ws.list()
xgboost_synthetic/build-train-deploy.ipynb
kubeflow/examples
apache-2.0
Create a pipeline to train your model Kubeflow pipelines makes it easy to define complex workflows to build and deploy models. Below you will define and run a simple one-step pipeline to train your model. Kubeflow pipelines uses experiments to group different runs of a pipeline together, so you start by defining a name f...
@dsl.pipeline( name='Training pipeline', description='A pipeline that trains an xgboost model for the Ames dataset.' ) def train_pipeline( ): command=["python", preprocessor.executable.name, "train"] train_op = dsl.ContainerOp( name="train", image=builder.image_tag, ...
xgboost_synthetic/build-train-deploy.ipynb
kubeflow/examples
apache-2.0
Compile the pipeline Pipelines need to be compiled
pipeline_func = train_pipeline pipeline_filename = pipeline_func.__name__ + '.pipeline.zip' compiler.Compiler().compile(pipeline_func, pipeline_filename)
xgboost_synthetic/build-train-deploy.ipynb
kubeflow/examples
apache-2.0