markdown | code | path | repo_name | license
|---|---|---|---|---|
Then we import it into a SQLite3 database. The following function automatically guesses the table schema. | from pyensae.sql import import_flatfile_into_database
import_flatfile_into_database("velib_vanves.db3", "velib_vanves.txt", add_key="key") | _doc/notebooks/pyensae_flat2db3.ipynb | sdpython/pyensae | mit |
We check that the database exists: | import os
os.listdir(".") | _doc/notebooks/pyensae_flat2db3.ipynb | sdpython/pyensae | mit |
On Windows, you can use SQLiteSpy to visualize the created table. We use pymyinstall to download it. | try:
    from pymyinstall.installcustom import install_sqlitespy
    exe = install_sqlitespy()
except Exception:
    # we skip the exception:
    # the website can be down...
    exe = None
exe | _doc/notebooks/pyensae_flat2db3.ipynb | sdpython/pyensae | mit |
We just need to run it (see run_cmd). | if exe:
    from pyquickhelper import run_cmd
    run_cmd("SQLiteSpy.exe velib_vanves.db3") | _doc/notebooks/pyensae_flat2db3.ipynb | sdpython/pyensae | mit |
You should be able to see something like (on Windows): | from pyquickhelper.helpgen import NbImage
NbImage('img_nb_sqlitespy.png') | _doc/notebooks/pyensae_flat2db3.ipynb | sdpython/pyensae | mit |
It is easier to use that tool to extract a sample of the data. Once it is ready, you can execute the SQL query in Python and convert the results into a DataFrame. The following code extracts a random sample from the original set. | sql = """SELECT * FROM velib_vanves WHERE key IN ({0})"""
import random
from pyquickhelper.loghelper import noLOG
from pyensae.sql import Database
db = Database("velib_vanves.db3", LOG = noLOG)
db.connect()
mx = db.execute_view("SELECT MAX(key) FROM velib_vanves")[0][0]
rnd_ids = [ random.randint(1,mx) for i in range(... | _doc/notebooks/pyensae_flat2db3.ipynb | sdpython/pyensae | mit |
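The cell above is cut off; a minimal sketch of how it could continue (the sample size of 100 and the pandas conversion are assumptions, not part of the original):
rnd_ids = [random.randint(1, mx) for i in range(100)]   # hypothetical sample size
keys = ",".join(str(i) for i in rnd_ids)
rows = db.execute_view(sql.format(keys))                # run the query built above
import pandas
df_sample = pandas.DataFrame(rows)                      # results as a DataFrame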
<h3 id="mem">Memory Dump</h3>
Once you have a big dataset available in text format, it takes some time to load into memory, and you need to do that again every time you reopen your Python instance. | with open("temp_big_file.txt", "w") as f:
    f.write("c1\tc2\tc3\n")
    for i in range(0, 10000000):
        x = [i, random.random(), random.random()]
        s = [str(_) for _ in x]
        f.write("\t".join(s) + "\n")
os.stat("temp_big_file.txt").st_size
import pandas,time
t = time.perf_counter()
df =... | _doc/notebooks/pyensae_flat2db3.ipynb | sdpython/pyensae | mit |
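The truncated cell presumably times a plain pandas.read_csv of the file just written; a sketch under that assumption:
df = pandas.read_csv("temp_big_file.txt", sep="\t")
print("duration (s)", time.perf_counter() - t)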
It is slow considering that many datasets contain many more features. But we can speed it up by doing a kind of memory dump with to_pickle. | t = time.perf_counter()
df.to_pickle("temp_big_file.bin")
print("duration (s)",time.perf_counter()-t) | _doc/notebooks/pyensae_flat2db3.ipynb | sdpython/pyensae | mit |
And we reload it with read_pickle: | t = time.perf_counter()
df = pandas.read_pickle("temp_big_file.bin")
print("duration (s)",time.perf_counter()-t) | _doc/notebooks/pyensae_flat2db3.ipynb | sdpython/pyensae | mit |
First, fetch from the primary source in S3, as per bug 1312006. We fall back to the GitHub location if this is not available. | import boto3
import botocore
import json
import tempfile
import urllib2
def fetch_schema():
    """Fetch the crash data schema from an s3 location or github location. This
    returns the corresponding JSON schema in a python dictionary."""
    region = "us-west-2"
    bucket = "org-mozilla-telemetry-crashes"
... | reports/socorro_import/ImportCrashData.ipynb | acmiyaguchi/data-pipeline | mpl-2.0 |
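The body of fetch_schema is truncated; a hedged sketch of the S3-first, GitHub-fallback logic it describes (the object key and GitHub URL below are placeholders, not the original values):
def fetch_schema():
    """Sketch: try s3 first, fall back to github (key and URL are placeholders)."""
    region = "us-west-2"
    bucket = "org-mozilla-telemetry-crashes"
    key = "crash_report.json"   # placeholder, not the original path
    try:
        client = boto3.client("s3", region_name=region)
        with tempfile.TemporaryFile() as handle:
            client.download_fileobj(bucket, key, handle)
            handle.seek(0)
            return json.load(handle)
    except botocore.exceptions.ClientError:
        # the s3 bucket may be unavailable; fall back to the github copy
        fallback = ("https://raw.githubusercontent.com/mozilla/socorro/"
                    "master/socorro/schemas/crash_report.json")   # assumed location
        return json.loads(urllib2.urlopen(fallback).read())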
Read crash data as json, convert it to parquet | from datetime import datetime as dt, timedelta, date
from pyspark.sql import SQLContext
def daterange(start_date, end_date):
    for n in range(int((end_date - start_date).days) + 1):
        yield (end_date - timedelta(n)).strftime("%Y%m%d")
def import_day(d, schema, version):
    """Convert JSON data stored in an ... | reports/socorro_import/ImportCrashData.ipynb | acmiyaguchi/data-pipeline | mpl-2.0 |
Downloading the MosMedData: Chest CT Scans with COVID-19 Related Findings
In this example, we use a subset of the
MosMedData: Chest CT Scans with COVID-19 Related Findings.
This dataset consists of lung CT scans with COVID-19 related findings, as well as without such findings.
We will be using the associated radiologic... | # Download url of normal CT scans.
url = "https://github.com/hasibzunair/3D-image-classification-tutorial/releases/download/v0.2/CT-0.zip"
filename = os.path.join(os.getcwd(), "CT-0.zip")
keras.utils.get_file(filename, url)
# Download url of abnormal CT scans.
url = "https://github.com/hasibzunair/3D-image-classificat... | examples/vision/ipynb/3D_image_classification.ipynb | keras-team/keras-io | apache-2.0 |
Loading data and preprocessing
The files are provided in Nifti format with the extension .nii. To read the
scans, we use the nibabel package.
You can install the package via pip install nibabel. CT scans store raw voxel
intensity in Hounsfield units (HU). They range from -1024 to above 2000 in this dataset.
Above 400 a... |
import nibabel as nib
from scipy import ndimage
def read_nifti_file(filepath):
    """Read and load volume"""
    # Read file
    scan = nib.load(filepath)
    # Get raw data
    scan = scan.get_fdata()
    return scan
def normalize(volume):
    """Normalize the volume"""
    min = -1000
    max = 400
    volume[... | examples/vision/ipynb/3D_image_classification.ipynb | keras-team/keras-io | apache-2.0 |
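normalize is cut off above; a standard completion consistent with the stated HU range, clipping to [-1000, 400] and rescaling to [0, 1]:
def normalize(volume):
    """Clip HU values and rescale to [0, 1] (sketch of the truncated cell)."""
    min, max = -1000, 400
    volume[volume < min] = min
    volume[volume > max] = max
    volume = (volume - min) / (max - min)
    return volume.astype("float32")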
Let's read the paths of the CT scans from the class directories. | # Folder "CT-0" consists of CT scans having normal lung tissue,
# no CT-signs of viral pneumonia.
normal_scan_paths = [
os.path.join(os.getcwd(), "MosMedData/CT-0", x)
for x in os.listdir("MosMedData/CT-0")
]
# Folder "CT-23" consist of CT scans having several ground-glass opacifications,
# involvement of lung p... | examples/vision/ipynb/3D_image_classification.ipynb | keras-team/keras-io | apache-2.0 |
Build train and validation datasets
Read the scans from the class directories and assign labels. Downsample the scans to have
shape of 128x128x64. Rescale the raw HU values to the range 0 to 1.
Lastly, split the dataset into train and validation subsets. | # Read and process the scans.
# Each scan is resized across height, width, and depth and rescaled.
abnormal_scans = np.array([process_scan(path) for path in abnormal_scan_paths])
normal_scans = np.array([process_scan(path) for path in normal_scan_paths])
# For the CT scans having presence of viral pneumonia
# assign 1... | examples/vision/ipynb/3D_image_classification.ipynb | keras-team/keras-io | apache-2.0 |
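The cell is truncated after the label assignment; a sketch of the labeling and 70/30 split described above (the cutoff of 70 scans per class is an assumption consistent with the stated dataset size of 200):
abnormal_labels = np.array([1 for _ in range(len(abnormal_scans))])
normal_labels = np.array([0 for _ in range(len(normal_scans))])
# split each class 70/30 into train and validation
x_train = np.concatenate((abnormal_scans[:70], normal_scans[:70]), axis=0)
y_train = np.concatenate((abnormal_labels[:70], normal_labels[:70]), axis=0)
x_val = np.concatenate((abnormal_scans[70:], normal_scans[70:]), axis=0)
y_val = np.concatenate((abnormal_labels[70:], normal_labels[70:]), axis=0)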
Data augmentation
The CT scans are also augmented by rotating them at random angles during training. Since
the data is stored in rank-3 tensors of shape (samples, height, width, depth),
we add a dimension of size 1 at axis 4 to be able to perform 3D convolutions on
the data. The new shape is thus (samples, height, width, depth,... | import random
from scipy import ndimage
@tf.function
def rotate(volume):
    """Rotate the volume by a few degrees"""
    def scipy_rotate(volume):
        # define some rotation angles
        angles = [-20, -10, -5, 5, 10, 20]
        # pick angles at random
        angle = random.choice(angles)
        # rotate ... | examples/vision/ipynb/3D_image_classification.ipynb | keras-team/keras-io | apache-2.0 |
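The next section refers to a train_preprocessing function; its exact body is cut off here, so the following is a hedged sketch consistent with the text (rotate the volume, then add a size-1 channel axis for 3D convolutions):
def train_preprocessing(volume, label):
    """Augment by rotation and add a channel axis (sketch)."""
    volume = rotate(volume)
    volume = tf.expand_dims(volume, axis=3)
    return volume, label
def validation_preprocessing(volume, label):
    """Only add the channel axis; no augmentation at validation time."""
    volume = tf.expand_dims(volume, axis=3)
    return volume, label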
While defining the train and validation data loaders, the training data is passed through
an augmentation function which randomly rotates the volumes at different angles. Note that both
training and validation data are already rescaled to have values between 0 and 1. | # Define data loaders.
train_loader = tf.data.Dataset.from_tensor_slices((x_train, y_train))
validation_loader = tf.data.Dataset.from_tensor_slices((x_val, y_val))
batch_size = 2
# Augment the data on the fly during training.
train_dataset = (
train_loader.shuffle(len(x_train))
.map(train_preprocessing)
.batch(... | examples/vision/ipynb/3D_image_classification.ipynb | keras-team/keras-io | apache-2.0 |
Visualize an augmented CT scan. | import matplotlib.pyplot as plt
data = train_dataset.take(1)
images, labels = list(data)[0]
images = images.numpy()
image = images[0]
print("Dimension of the CT scan is:", image.shape)
plt.imshow(np.squeeze(image[:, :, 30]), cmap="gray")
| examples/vision/ipynb/3D_image_classification.ipynb | keras-team/keras-io | apache-2.0 |
Since a CT scan has many slices, let's visualize a montage of the slices. |
def plot_slices(num_rows, num_columns, width, height, data):
    """Plot a montage of 20 CT slices"""
    data = np.rot90(np.array(data))
    data = np.transpose(data)
    data = np.reshape(data, (num_rows, num_columns, width, height))
    rows_data, columns_data = data.shape[0], data.shape[1]
    heights = [slc[0].sh... | examples/vision/ipynb/3D_image_classification.ipynb | keras-team/keras-io | apache-2.0 |
Define a 3D convolutional neural network
To make the model easier to understand, we structure it into blocks.
The architecture of the 3D CNN used in this example
is based on this paper. |
def get_model(width=128, height=128, depth=64):
    """Build a 3D convolutional neural network model."""
    inputs = keras.Input((width, height, depth, 1))
    x = layers.Conv3D(filters=64, kernel_size=3, activation="relu")(inputs)
    x = layers.MaxPool3D(pool_size=2)(x)
    x = layers.BatchNormalization()(x)
    ... | examples/vision/ipynb/3D_image_classification.ipynb | keras-team/keras-io | apache-2.0 |
Train model | # Compile model.
initial_learning_rate = 0.0001
lr_schedule = keras.optimizers.schedules.ExponentialDecay(
initial_learning_rate, decay_steps=100000, decay_rate=0.96, staircase=True
)
model.compile(
loss="binary_crossentropy",
optimizer=keras.optimizers.Adam(learning_rate=lr_schedule),
metrics=["acc"],
... | examples/vision/ipynb/3D_image_classification.ipynb | keras-team/keras-io | apache-2.0 |
It is important to note that the number of samples is very small (only 200) and we don't
specify a random seed. As such, you can expect significant variance in the results. The full dataset
which consists of over 1000 CT scans can be found here. Using the full
dataset, an accuracy of 83% was achieved. A variability of ... | fig, ax = plt.subplots(1, 2, figsize=(20, 3))
ax = ax.ravel()
for i, metric in enumerate(["acc", "loss"]):
    ax[i].plot(model.history.history[metric])
    ax[i].plot(model.history.history["val_" + metric])
    ax[i].set_title("Model {}".format(metric))
    ax[i].set_xlabel("epochs")
    ax[i].set_ylabel(metric)
    ... | examples/vision/ipynb/3D_image_classification.ipynb | keras-team/keras-io | apache-2.0 |
Make predictions on a single CT scan | # Load best weights.
model.load_weights("3d_image_classification.h5")
prediction = model.predict(np.expand_dims(x_val[0], axis=0))[0]
scores = [1 - prediction[0], prediction[0]]
class_names = ["normal", "abnormal"]
for score, name in zip(scores, class_names):
    print(
        "This model is %.2f percent confident th... | examples/vision/ipynb/3D_image_classification.ipynb | keras-team/keras-io | apache-2.0 |
model
load data | %matplotlib inline
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# define the labels
col_labels=['C2H3', 'C2H6', 'CH2', 'H2CN', 'C2H4', 'H2O2', 'C2H', 'CN',
'heatRelease', 'NCO', 'NNH', 'N2', 'AR', 'psi', 'CO', 'CH4', 'HNCO',
'CH2OH', 'HCCO', 'CH2CO', 'CH', 'mu', 'C2H2... | FPV_ANN/notebooks/.ipynb_checkpoints/fgm_nn_inhouse-checkpoint.ipynb | uqyge/combustionML | mit |
model training
gpu training | import keras.backend as K
from keras.callbacks import LearningRateScheduler
import math
def cubic_loss(y_true, y_pred):
    return K.mean(K.square(y_true - y_pred) * K.abs(y_true - y_pred), axis=-1)
def coeff_r2(y_true, y_pred):
    from keras import backend as K
    SS_res = K.sum(K.square(y_true - y_pred))
    SS_to... | FPV_ANN/notebooks/.ipynb_checkpoints/fgm_nn_inhouse-checkpoint.ipynb | uqyge/combustionML | mit |
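coeff_r2 is cut off mid-definition; a standard R² completion (the K.epsilon() guard against division by zero is an assumption, not from the original):
def coeff_r2(y_true, y_pred):
    SS_res = K.sum(K.square(y_true - y_pred))              # residual sum of squares
    SS_tot = K.sum(K.square(y_true - K.mean(y_true)))      # total sum of squares
    return 1 - SS_res / (SS_tot + K.epsilon())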
prepare data for plotting
GPU data preparation | import keras.backend as K
# model.load_weights("./tmp/weights.best.cntk.hdf5")
x_test_df = pd.DataFrame(in_scaler.inverse_transform(x_test),columns=input_features)
y_test_df = pd.DataFrame(out_scaler.inverse_transform(y_test),columns=labels)
predict_val = model.predict(x_test,batch_size=1024*8)
predict_df ... | FPV_ANN/notebooks/.ipynb_checkpoints/fgm_nn_inhouse-checkpoint.ipynb | uqyge/combustionML | mit |
The optimization problem
The problem we are considering is a mathematical one
<img src="cone.png" width=500px/>
Decisions: r in [0, 10] cm; h in [0, 20] cm
Objectives: minimize S, T
Constraints: V > 200cm<sup>3</sup> | # Few Utility functions
def say(*lst):
    """
    Print without going to a new line
    """
    print(*lst, end="")
    sys.stdout.flush()
def random_value(low, high, decimals=2):
    """
    Generate a random number between low and high.
    decimals indicates the number of decimal places
    """
    return round(rando... | code/5/WS1/tchhabr.ipynb | tarunchhabra26/fss16dst | apache-2.0 |
Great. Now that the class and its basic methods is defined, we move on to code up the GA.
Population
First up is to create an initial population. | def populate(problem, size):
population = []
# TODO 6: Create a list of points of length 'size'
return [problem.generate_one() for _ in xrange(size)]
print (populate(cone,5))
| code/5/WS1/tchhabr.ipynb | tarunchhabra26/fss16dst | apache-2.0 |
Crossover
We perform a single point crossover between two points | def crossover(mom, dad):
# TODO 7: Create a new point which contains decisions from
# the first half of mom and second half of dad
n = len(mom.decisions)
return Point(mom.decisions[:n//2] + dad.decisions[n//2:])
pop = populate(cone,5)
crossover(pop[0], pop[1]) | code/5/WS1/tchhabr.ipynb | tarunchhabra26/fss16dst | apache-2.0 |
Mutation
Randomly change a decision such that | def mutate(problem, point, mutation_rate=0.01):
# TODO 8: Iterate through all the decisions in the problem
# and if the probability is less than mutation rate
# change the decision(randomly set it between its max and min).
for i, d in enumerate(problem.decisions):
if random.random() < mutation_r... | code/5/WS1/tchhabr.ipynb | tarunchhabra26/fss16dst | apache-2.0 |
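The loop body is truncated; one possible completion of the TODO, assuming each decision object exposes low/high bounds:
def mutate(problem, point, mutation_rate=0.01):
    for i, d in enumerate(problem.decisions):
        if random.random() < mutation_rate:
            point.decisions[i] = random_value(d.low, d.high)   # assumed Decision attributes
    return point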
Fitness Evaluation
To evaluate fitness between points we use binary domination. Binary Domination is defined as follows:
* Consider two points one and two.
* For every objective o and t in one and two, o <= t
* For at least one objective o and t in one and two, o < t
Note: Binary Domination is not the best method to evaluate... | def bdom(problem, one, two):
"""
Return if one dominates two
"""
objs_one = problem.evaluate(one)
objs_two = problem.evaluate(two)
if (one == two):
return False
dominates = False
# TODO 9: Return True/False based on the definition
# of bdom above.
first = True
... | code/5/WS1/tchhabr.ipynb | tarunchhabra26/fss16dst | apache-2.0 |
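The TODO can be completed as follows; minimization on all objectives is assumed:
def bdom(problem, one, two):
    objs_one, objs_two = problem.evaluate(one), problem.evaluate(two)
    if one == two:
        return False
    dominates = False
    for o, t in zip(objs_one, objs_two):
        if o > t:             # worse on some objective: cannot dominate
            return False
        elif o < t:
            dominates = True  # strictly better on at least one objective
    return dominates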
Fitness and Elitism
In this workshop we will count the number of points of the population P dominated by a point A as the fitness of point A. This is a very naive measure of fitness since we are using binary domination.
Few prominent alternate methods are
1. Continuous Domination - Section 3.1
2. Non-dominated Sort
def fitness(problem, population, point):
    dominates = 0
    # TODO 10: Evaluate fitness of a point.
    # For this workshop define fitness of a point
    # as the number of points dominated by it.
    # For example if point dominates 5 members of population,
    # then fitness of point is 5.
    for pop in population:... | code/5/WS1/tchhabr.ipynb | tarunchhabra26/fss16dst | apache-2.0 |
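The loop is truncated; a minimal completion matching the TODO (count the members of the population dominated by the point):
def fitness(problem, population, point):
    dominates = 0
    for pop in population:
        if bdom(problem, point, pop):
            dominates += 1
    return dominates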
Load the pretrained weights into the network : | params = pickle.load(open('./data/googlenet/blvc_googlenet.pkl', 'rb'), encoding='iso-8859-1')
model_param_values = params['param values']
#classes = params['synset words']
lasagne.layers.set_all_param_values(net_output_layer, model_param_values)
IMAGE_W=224
print("Loaded Model parameters") | notebooks/2-CNN/6-StyleTransfer/2-Art-Style-Transfer-googlenet_theano.ipynb | mdda/fossasia-2016_deep-learning | mit |
Executing the cell below will iterate through the images in the ./images/art-style/photos directory, so you can choose the one you want | photo_i += 1
photo = plt.imread(photos[photo_i % len(photos)])
photo_rawim, photo = googlenet.prep_image(photo)
plt.imshow(photo_rawim) | notebooks/2-CNN/6-StyleTransfer/2-Art-Style-Transfer-googlenet_theano.ipynb | mdda/fossasia-2016_deep-learning | mit |
Executing the cell below will iterate through the images in the ./images/art-style/styles directory, so you can choose the one you want | style_i += 1
art = plt.imread(styles[style_i % len(styles)])
art_rawim, art = googlenet.prep_image(art)
plt.imshow(art_rawim) | notebooks/2-CNN/6-StyleTransfer/2-Art-Style-Transfer-googlenet_theano.ipynb | mdda/fossasia-2016_deep-learning | mit |
This defines various measures of difference that we'll use to compare the current output image with the original sources. | def plot_layout(combined):
    def no_axes():
        plt.gca().xaxis.set_visible(False)
        plt.gca().yaxis.set_visible(False)
    plt.figure(figsize=(9,6))
    plt.subplot2grid( (2,3), (0,0) )
    no_axes()
    plt.imshow(photo_rawim)
    plt.subplot2grid( (2,3), (1,0) )
    no_axes()
    plt.i... | notebooks/2-CNN/6-StyleTransfer/2-Art-Style-Transfer-googlenet_theano.ipynb | mdda/fossasia-2016_deep-learning | mit |
Here are the GoogLeNet layers that we're going to pay attention to : | layers = [
# used for 'content' in photo - a mid-tier convolutional layer
'inception_4b/output',
# used for 'style' - conv layers throughout model (not same as content one)
'conv1/7x7_s2', 'conv2/3x3', 'inception_3b/output', 'inception_4d/output',
]
#layers = [
# # used for 'content' in photo ... | notebooks/2-CNN/6-StyleTransfer/2-Art-Style-Transfer-googlenet_theano.ipynb | mdda/fossasia-2016_deep-learning | mit |
Precompute layer activations for photo and artwork
This takes ~ 20 seconds | input_im_theano = T.tensor4()
outputs = lasagne.layers.get_output(layers.values(), input_im_theano)
photo_features = {k: theano.shared(output.eval({input_im_theano: photo}))
for k, output in zip(layers.keys(), outputs)}
art_features = {k: theano.shared(output.eval({input_im_theano: art}))
... | notebooks/2-CNN/6-StyleTransfer/2-Art-Style-Transfer-googlenet_theano.ipynb | mdda/fossasia-2016_deep-learning | mit |
Define the overall loss / badness function | losses = []
# content loss
cl = 10 /1000.
losses.append(cl * content_loss(photo_features, gen_features, 'inception_4b/output'))
# style loss
sl = 20 *1000.
losses.append(sl * style_loss(art_features, gen_features, 'conv1/7x7_s2'))
losses.append(sl * style_loss(art_features, gen_features, 'conv2/3x3'))
losses.append(s... | notebooks/2-CNN/6-StyleTransfer/2-Art-Style-Transfer-googlenet_theano.ipynb | mdda/fossasia-2016_deep-learning | mit |
The Famous Symbolic Gradient operation | grad = T.grad(total_loss, generated_image) | notebooks/2-CNN/6-StyleTransfer/2-Art-Style-Transfer-googlenet_theano.ipynb | mdda/fossasia-2016_deep-learning | mit |
Get Ready for Optimisation by SciPy | # Theano functions to evaluate loss and gradient - takes around 1 minute (!)
f_loss = theano.function([], total_loss)
f_grad = theano.function([], grad)
# Helper functions to interface with scipy.optimize
def eval_loss(x0):
    x0 = floatX(x0.reshape((1, 3, IMAGE_W, IMAGE_W)))
    generated_image.set_value(x0)
    ret... | notebooks/2-CNN/6-StyleTransfer/2-Art-Style-Transfer-googlenet_theano.ipynb | mdda/fossasia-2016_deep-learning | mit |
Initialize with the original photo, since going from noise (the code that's commented out) takes many more iterations. | generated_image.set_value(photo)
#generated_image.set_value(floatX(np.random.uniform(-128, 128, (1, 3, IMAGE_W, IMAGE_W))))
x0 = generated_image.get_value().astype('float64')
iteration=0 | notebooks/2-CNN/6-StyleTransfer/2-Art-Style-Transfer-googlenet_theano.ipynb | mdda/fossasia-2016_deep-learning | mit |
Optimize all those losses, and show the image
To refine the result, just keep hitting 'run' on this cell (each iteration is about 60 seconds) : | t0 = time.time()
scipy.optimize.fmin_l_bfgs_b(eval_loss, x0.flatten(), fprime=eval_grad, maxfun=40)
x0 = generated_image.get_value().astype('float64')
iteration += 1
if False:
    plt.figure(figsize=(8,8))
    plt.imshow(googlenet.deprocess(x0), interpolation='nearest')
    plt.axis('off')
    plt.text(270, 25, '# ... | notebooks/2-CNN/6-StyleTransfer/2-Art-Style-Transfer-googlenet_theano.ipynb | mdda/fossasia-2016_deep-learning | mit |
Note about slicing columns from a Numpy matrix
If you want to extract a column i from a Numpy matrix A and keep it as a column vector, you need to use the slicing notation, A[:, i:i+1]. Not doing so can lead to subtle bugs. To see why, compare the following slices. | A = np.array ([[1, 2, 3],
[4, 5, 6],
[7, 8, 9]
], dtype=float)
print "A[:, :] ==\n", A
print "\nA[:, 0] ==\n", A[:, 0]
print "\nA[:, 2:3] == \n", A[:, 2:3]
print "\nAdd columns 0 and 2?"
a0 = A[:, 0]
a1 = A[:, 2:3]
print a0 + a1 | 25--logreg.ipynb | rvuduc/cse6040-ipynbs | bsd-3-clause |
Sample data: Rock lobsters!
As a concrete example of a classification task, consider the results of this experiment. Some marine biologists took a bunch of lobsters of varying sizes (size being a proxy for stage of development), and then tethered and exposed these lobsters to a variety of predators. The outcome that th... | # http://www.stat.ufl.edu/~winner/data/lobster_survive.txt
df_lobsters = pd.read_table ('http://www.stat.ufl.edu/~winner/data/lobster_survive.dat',
sep=r'\s+', names=['CarapaceLen', 'Survived'])
display (df_lobsters.head ())
print "..."
display (df_lobsters.tail ())
sns.violinplot (x="Surv... | 25--logreg.ipynb | rvuduc/cse6040-ipynbs | bsd-3-clause |
Although the classes are distinct in the aggregate, where the median carapace (outer shell) length is around 36 mm for the lobsters that died and 42 mm for those that survived, they are not cleanly separable.
Notation
To develop some intuition and a method, let's now turn to a more general setting and work on synthetic... | df = pd.read_csv ('http://vuduc.org/cse6040/logreg_points_train.csv')
display (df.head ())
print "..."
display (df.tail ()) | 25--logreg.ipynb | rvuduc/cse6040-ipynbs | bsd-3-clause |
Next, let's extract the coordinates as a Numpy matrix of points and the labels as a Numpy column vector labels. Mathematically, the points matrix corresponds to $X$ and the labels vector corresponds to $l$. | points = np.insert (df.as_matrix (['x_1', 'x_2']), 0, 1.0, axis=1)
labels = df.as_matrix (['label'])
print "First and last 5 points:\n", '='*23, '\n', points[:5], '\n...\n', points[-5:], '\n'
print "First and last 5 labels:\n", '='*23, '\n', labels[:5], '\n...\n', labels[-5:], '\n' | 25--logreg.ipynb | rvuduc/cse6040-ipynbs | bsd-3-clause |
Next, let's plot the data as a scatter plot using Plotly. To do so, we need to create separate traces, one for each cluster. Below, we've provided you with a function, make_2d_scatter_traces(), which does exactly that, given a labeled data set as a (points, labels) pair. | def assert_points_2d (points):
"""Checks the dimensions of a given point set."""
assert type (points) is np.ndarray
assert points.ndim == 2
assert points.shape[1] == 3
def assert_labels (labels):
"""Checks the type of a given set of labels (must be integral)."""
assert labels is not None
... | 25--logreg.ipynb | rvuduc/cse6040-ipynbs | bsd-3-clause |
Linear discriminants
Suppose you think that the boundary between the two clusters may be represented by a line. For the synthetic data example above, I hope you'll agree that such a model is not a terrible one.
This line is referred to as a linear discriminant. Any point $x$ on this line may be described by $\theta^T x... | def lin_discr (X, theta):
    # @YOUSE: Part 1 -- Complete this function.
    pass
def heaviside (Y):
    # @YOUSE: Part 2 -- Complete this function
    pass | 25--logreg.ipynb | rvuduc/cse6040-ipynbs | bsd-3-clause |
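These are left as exercises for the reader; one possible solution consistent with the definitions above:
def lin_discr(X, theta):
    return X.dot(theta)        # theta^T x for every point (points are rows of X)
def heaviside(Y):
    return 1.0 * (Y > 0.0)     # 1 where the argument is positive, else 0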
The following is the code to generate the plot; look for the place to try different values of $\theta$ a couple of code cells below. | def heaviside_int (Y):
"""Evaluates the heaviside function, but returns integer values."""
return heaviside (Y).astype (dtype=int)
def assert_discriminant (theta, d=2):
"""
Verifies that the given coefficients correspond to a
d-dimensional linear discriminant ($\theta$).
"""
assert len (the... | 25--logreg.ipynb | rvuduc/cse6040-ipynbs | bsd-3-clause |
An alternative linear discriminant: the logistic or "sigmoid" function
The heaviside function, $H(\theta^T x)$, enforces a sharp boundary between classes around the $\theta^T x=0$ line. The following code produces a contour plot to show this effect. | # Use Numpy's handy meshgrid() to create a regularly-spaced grid of values.
# http://docs.scipy.org/doc/numpy/reference/generated/numpy.meshgrid.html
x1 = np.linspace (-2., +2., 100)
x2 = np.linspace (-2., +2., 100)
x1_grid, x2_grid = np.meshgrid (x1, x2)
h_grid = heaviside (theta[0] + theta[1]*x1_grid + theta[2]*x2_g... | 25--logreg.ipynb | rvuduc/cse6040-ipynbs | bsd-3-clause |
However, as the lobsters example suggests, real data are not likely to be cleanly separable, especially when the number of features we have at our disposal is relatively small.
Since the labels are binary, a natural idea is to give the classification problem a probabilistic interpretation. The logistic function provide... | def logistic (Y):
    # @YOUSE: Implement the logistic function G(y) here
    pass | 25--logreg.ipynb | rvuduc/cse6040-ipynbs | bsd-3-clause |
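The implementation is left as an exercise; one possible solution:
def logistic(Y):
    return 1.0 / (1.0 + np.exp(-Y))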
Plot of your implementation in 1D: | x_logit_1d = np.linspace (-6.0, +6.0, 101)
y_logit_1d = logistic (x_logit_1d)
trace_logit_1d = Scatter (x=x_logit_1d, y=y_logit_1d)
py.iplot ([trace_logit_1d]) | 25--logreg.ipynb | rvuduc/cse6040-ipynbs | bsd-3-clause |
Contour plot of your function: | g_grid = logistic (theta[0] + theta[1]*x1_grid + theta[2]*x2_grid)
trace_logit_grid = Contour (x=x1, y=x2, z=g_grid)
py.iplot ([trace_logit_grid]) | 25--logreg.ipynb | rvuduc/cse6040-ipynbs | bsd-3-clause |
Exercise. Verify the following properties of the logistic function, $G(y)$.
$$
\begin{array}{rcll}
G(y) & = & \dfrac{e^y}{e^y + 1} & \mathrm{(P1)} \\
G(-y) & = & 1 - G(y) & \mathrm{(P2)} \\
\dfrac{dG}{dy} & = & G(y)\,G(-y) & \mathrm{(P3)} \\
\dfrac{d}{dy}\left[ \ln G(y) \right] & = & G(-y) & \mathrm{(P4)}
\end{array}
$$ | MAX_STEP = 100
PHI = 0.1
# Get the data coordinate matrix, X, and labels vector, l
X = points
l = labels.astype (dtype=float)
# Store *all* guesses, for subsequent analysis
thetas = np.zeros ((3, MAX_STEP+1))
for t in range (MAX_STEP):
    # @YOUSE: Fill in this code
    pass
print "Your (hand) solution:", thet... | 25--logreg.ipynb | rvuduc/cse6040-ipynbs | bsd-3-clause |
Exercise. Make a contour plot of the log-likelihood and draw the trajectory taken by the $\theta(t)$ values laid on top of it. | def log_likelihood (theta, l, X):
    # @YOUSE: Complete this function to evaluate the log-likelihood
    pass
n1_ll = 100
x1_ll = np.linspace (-20., 0., n1_ll)
n2_ll = 100
x2_ll = np.linspace (-20., 0., n2_ll)
x1_ll_grid, x2_ll_grid = np.meshgrid (x1_ll, x2_ll)
ll_grid = np.zeros ((n1_ll, n2_ll))
# @YOUSE: Write som... | 25--logreg.ipynb | rvuduc/cse6040-ipynbs | bsd-3-clause |
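One possible completion of log_likelihood, using property P2 (1 - G(y) = G(-y)) to keep the expression compact:
def log_likelihood(theta, l, X):
    y = X.dot(theta)
    # sum over points of l_i * ln G(y_i) + (1 - l_i) * ln G(-y_i)
    return float(np.sum(l * np.log(logistic(y)) + (1.0 - l) * np.log(logistic(-y))))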
<div class="alert alert-success">
<b>EXERCISE</b>: Using groupby(), plot the number of films that have been released each decade in the history of cinema.
</div>
<div class="alert alert-success">
<b>EXERCISE</b>: Use groupby() to plot the number of "Hamlet" films made each decade.
</div>
<div class="alert ale... | df
def normalize(group):
    return (group - group.mean()) / group.std()
df.groupby('key').transform(normalize) | 04 - Groupby operations.ipynb | dpshelio/2015-EuroScipy-pandas-tutorial | bsd-2-clause |
<div class="alert alert-success">
<b>EXERCISE</b>: Calculate the ratio of the number of roles for actors and actresses to the total number of roles per decade, and plot both over time (tip: you need to do a groupby twice, in two steps: once calculating the numbers, and then the ratios).
</div>
Value counts
A useful s... | titles.title.value_counts().head() | 04 - Groupby operations.ipynb | dpshelio/2015-EuroScipy-pandas-tutorial | bsd-2-clause |
You can skip the following steps if you just want to load the word2vec model I provided... but this is the raw data approach. | # We need to unzip the data file to use it:
!gunzip ../data/yelp/yelp_academic_dataset_reviews.json.gz
# Make sure it is there and unzipped:
!ls -al ../data/yelp/
## Make sure this dataset is here and unzipped.
data = []
with open("../data/yelp/yelp_academic_dataset_reviews.json") as handle:
    for line in handle.re... | python/Word2Vec_Yelp.ipynb | arnicas/eyeo_nlp | cc0-1.0 |
What is Word2Vec?
"Generally, word2vec is trained using something called a skip-gram model. The skip-gram model, pictures above, attempts to use the vector representation that it learns to predict the words that appear around a given word in the corpus. Essentially, it uses the context of the word as it is used in a va... | """ An alternate from gensim tutorials - just use all words in the model in a rewiew. No nltk used to split."""
import re
class YelpReviews(object):
"""Iterate over sentences of all plaintext files in a directory """
SPLIT_SENTENCES = re.compile(u"[.!?:]\s+") # split sentences on these characters
def __... | python/Word2Vec_Yelp.ipynb | arnicas/eyeo_nlp | cc0-1.0 |
Now let's do some basic word sentiment stuff again... for the html side! | import nltk
nltk.data.path = ['../nltk_data']
from nltk.corpus import stopwords
english_stops = stopwords.words('english')
revs[0]
tokens = [nltk.word_tokenize(rev) for rev in revs] # this takes a long time. don't run unless you're sure.
mystops = english_stops + [u"n't", u'...', u"'ve"]
def clean_tokens(tokens, s... | python/Word2Vec_Yelp.ipynb | arnicas/eyeo_nlp | cc0-1.0 |
The "AFINN-111.txt" file is another sentiment file. | from collections import defaultdict
sentiment = defaultdict(int)
with open('../data/sentiment_wordlists/AFINN-111.txt') as handle:
    for line in handle.readlines():
        word = line.split('\t')[0]
        polarity = line.split('\t')[1]
        sentiment[word] = int(polarity)
sentiment['pho']
sentiment['good']
s... | python/Word2Vec_Yelp.ipynb | arnicas/eyeo_nlp | cc0-1.0 |
Import statements | import pandas
import numpy as np
import pyprind | Compute distance to roads.ipynb | jacobdein/alpine-soundscapes | mit |
GRASS import statements | import grass.script as gscript
from grass.pygrass.vector import VectorTopo
from grass.pygrass.vector.table import DBlinks | Compute distance to roads.ipynb | jacobdein/alpine-soundscapes | mit |
Function declarations
connect to an attribute table | def connectToAttributeTable(map):
    vector = VectorTopo(map)
    vector.open(mode='r')
    dblinks = DBlinks(vector.c_mapinfo)
    link = dblinks[0]
    return link.table() | Compute distance to roads.ipynb | jacobdein/alpine-soundscapes | mit |
finds the nearest element in a vector map (to) for elements in another vector map (from) <br />
calls the GRASS v.distance command | def computeDistance(from_map, to_map):
upload = 'dist'
result = gscript.read_command('v.distance',
from_=from_map,
to=to_map,
upload=upload,
separator='comma',
flags='p')
return resu... | Compute distance to roads.ipynb | jacobdein/alpine-soundscapes | mit |
selects vector features from an existing vector map and creates a new vector map containing only the selected features <br />
calls the GRASS v.extract command | def extractFeatures(input_, type_, output):
where = "{0} = '{1}'".format(road_type_field, type_)
gscript.read_command('v.extract',
input_=input_,
where=where,
output=output,
overwrite=True) | Compute distance to roads.ipynb | jacobdein/alpine-soundscapes | mit |
Get unique 'roads' types | roads_table = connectToAttributeTable(map=roads)
roads_table.filters.select(road_type_field)
cursor = roads_table.execute()
result = np.array(cursor.fetchall())
cursor.close()
road_types = np.unique(result)
print(road_types) | Compute distance to roads.ipynb | jacobdein/alpine-soundscapes | mit |
Get 'points' attribute table | point_table = connectToAttributeTable(map=points)
point_table.filters.select()
columns = point_table.columns.names()
cursor = point_table.execute()
result = np.array(cursor.fetchall())
cursor.close()
point_data = pandas.DataFrame(result, columns=columns).set_index('cat') | Compute distance to roads.ipynb | jacobdein/alpine-soundscapes | mit |
Loop through 'roads' types and compute the distances from all 'points' | distances = pandas.DataFrame(columns=road_types, index=point_data.index)
progress_bar = pyprind.ProgBar(road_types.size, bar_char='█', title='Progress', monitor=True, stream=1, width=50)
for type_ in road_types:
    # update progress bar
    progress_bar.update(item_id=type_)
    # extract road data based o... | Compute distance to roads.ipynb | jacobdein/alpine-soundscapes | mit |
Export distances table to a csv file | distances.to_csv(distance_table_filename, header=False) | Compute distance to roads.ipynb | jacobdein/alpine-soundscapes | mit |
A generalization using accumulation | mapped = list(accumulate(mapped, accumulating))
mapped
clear_cache()
m,v,r = to_matrix_notation(mapped, f, [n-k for k in range(-2, 19)])
m,v,r
m_sym = m.subs(inverted_fibs, simultaneous=True)
m_sym[:,0] = m_sym[:,0].subs(f[2],f[1])
m_sym[1,2] = m_sym[1,2].subs(f[2],f[1])
m_sym
# the following cell produces an error ... | notebooks/recurrences-unfolding.ipynb | massimo-nocentini/PhD | apache-2.0 |
According to A162741, we can generalize the pattern above: | i = symbols('i')
d = IndexedBase('d')
k_fn_gen = Eq((k+1)*f[n], Sum(d[k,2*k-i]*f[n-i], (i, 0, 2*k)))
d_triangle= {d[0,0]:1, d[n,2*n]:1, d[n,k]:d[n-1, k-1]+d[n-1,k]}
k_fn_gen, d_triangle
mapped = list(accumulate(mapped, accumulating))
mapped
# skip this cell to maintain a math-coherent version
def adjust(term):
    a_wil... | notebooks/recurrences-unfolding.ipynb | massimo-nocentini/PhD | apache-2.0 |
Unfolding a recurrence with generic coefficients | s = IndexedBase('s')
a = IndexedBase('a')
swaps_recurrence = Eq(n*s[n],(n+1)*s[n-1]+a[n])
swaps_recurrence
boundary_conditions = {s[0]:Integer(0)}
swaps_recurrence_spec=dict(recurrence_eq=swaps_recurrence, indexed=s,
index=n, terms_cache=boundary_conditions)
unfolded = do_unfolding_ste... | notebooks/recurrences-unfolding.ipynb | massimo-nocentini/PhD | apache-2.0 |
A curious relation about Fibonacci numbers, in matrix notation | d = 10
m = Matrix(d,d, lambda i,j: binomial(n-i,j)*binomial(n-j,i))
m
f = IndexedBase('f')
fibs = [fibonacci(i) for i in range(50)]
mp = (ones(1,d)*m*ones(d,1))[0,0]
odd_fibs_eq = Eq(f[2*n+1], mp, evaluate=True)
odd_fibs_eq
(m*ones(d,1)) | notebooks/recurrences-unfolding.ipynb | massimo-nocentini/PhD | apache-2.0 |
Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dict... | import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
unique_words = set(text)
vocab_to_int = {word: index f... | tv-script-generation/dlnd_tv_script_generation.ipynb | mu4farooqi/deep-learning-projects | gpl-3.0 |
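The dictionary comprehension is cut off; a minimal completion consistent with the visible code:
def create_lookup_tables(text):
    unique_words = set(text)
    vocab_to_int = {word: index for index, word in enumerate(unique_words)}
    int_to_vocab = {index: word for word, index in vocab_to_int.items()}
    return vocab_to_int, int_to_vocab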
Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:... | """
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Che... | tv-script-generation/dlnd_tv_script_generation.ipynb | mu4farooqi/deep-learning-projects | gpl-3.0 |
Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Inpu... | def get_inputs():
"""
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
"""
return tf.placeholder(dtype=tf.int32, shape=(None, None), name="input"), tf.placeholder(dtype=tf.int32, shape=(None, None), name="target"), tf.placeholder(dtype=tf.f... | tv-script-generation/dlnd_tv_script_generation.ipynb | mu4farooqi/deep-learning-projects | gpl-3.0 |
Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initalize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the follo... |
def get_init_cell(batch_size, rnn_size):
"""
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
"""
cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size)])
return cell, tf.id... | tv-script-generation/dlnd_tv_script_generation.ipynb | mu4farooqi/deep-learning-projects | gpl-3.0 |
Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence. | def get_embed(input_data, vocab_size, embed_dim):
"""
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
"""
embeddings = tf.Varia... | tv-script-generation/dlnd_tv_script_generation.ipynb | mu4farooqi/deep-learning-projects | gpl-3.0 |
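The embedding variable is cut off; a standard TF1-style completion (the uniform initialization range is an assumption):
def get_embed(input_data, vocab_size, embed_dim):
    embeddings = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
    return tf.nn.embedding_lookup(embeddings, input_data)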
Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number... | def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
"""
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logi... | tv-script-generation/dlnd_tv_script_generation.ipynb | mu4farooqi/deep-learning-projects | gpl-3.0 |
Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- Th... | def get_batches(int_text, batch_size, seq_length):
"""
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
"""
total_batches = len(in... | tv-script-generation/dlnd_tv_script_generation.ipynb | mu4farooqi/deep-learning-projects | gpl-3.0 |
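The function body is truncated; one possible completion producing the (number of batches, 2, batch size, sequence length) shape described above:
def get_batches(int_text, batch_size, seq_length):
    words_per_batch = batch_size * seq_length
    total_batches = len(int_text) // words_per_batch
    int_text = int_text[:total_batches * words_per_batch]   # drop the ragged tail
    xdata = np.array(int_text)
    ydata = np.roll(xdata, -1)                              # targets are inputs shifted left by one
    x_batches = np.split(xdata.reshape(batch_size, -1), total_batches, axis=1)
    y_batches = np.split(ydata.reshape(batch_size, -1), total_batches, axis=1)
    return np.array(list(zip(x_batches, y_batches)))        # (n_batches, 2, batch_size, seq_length)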
Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set embed_dim to the size of the embedding.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_e... | # Number of Epochs
num_epochs = 100
# Batch Size
batch_size = 256
# RNN Size
rnn_size = 256
# Embedding Dimension Size
embed_dim = 300
# Sequence Length
seq_length = 10
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 20
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT ... | tv-script-generation/dlnd_tv_script_generation.ipynb | mu4farooqi/deep-learning-projects | gpl-3.0 |
Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTen... | def get_tensors(loaded_graph):
"""
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
"""
return (loaded_graph.get_tensor_by_nam... | tv-script-generation/dlnd_tv_script_generation.ipynb | mu4farooqi/deep-learning-projects | gpl-3.0 |
Choose Word
Implement the pick_word() function to select the next word using probabilities. | def pick_word(probabilities, int_to_vocab):
"""
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
"""
probabilities = np.reshape(pr... | tv-script-generation/dlnd_tv_script_generation.ipynb | mu4farooqi/deep-learning-projects | gpl-3.0 |
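The body is cut off; one possible completion that samples the next word by probability rather than always taking the argmax:
def pick_word(probabilities, int_to_vocab):
    probabilities = np.reshape(probabilities, -1)
    idx = np.random.choice(len(probabilities), p=probabilities)   # sample a word id by probability
    return int_to_vocab[idx]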
Index comparison
See how indexes affect queries
First without an index, then with one |
scratch_db.zips.drop_indexes()
count = scratch_db.zips.find().count()
city_count = scratch_db.zips.find({"city": "FLAGSTAFF"}).count()
city_explain = scratch_db.zips.find({"city": "FLAGSTAFF"}).explain()['executionStats']
print(count)
print(city_count)
print(city_explain)
scratch_db.zips.drop_indexes()
scratch_db.z... | MongoDB.ipynb | 0Rick0/Fontys-DS-GCD | mit |
You can see that the execution with the index is a bit different.
The executionTimeMillis value shows that the second query executes much faster.
This is because the index allows the engine to search the index instead of scanning all the documents.
Some other information about the dataset | print("Amount of cities per state:")
pipeline = [
{"$unwind": "$state"},
{"$group": {"_id": "$state", "count": {"$sum": 1}}},
{"$sort": SON([("count", -1), ("_id", -1)])}
]
results = scratch_db.zips.aggregate(pipeline)
for result in results:
print("State %s: %d" % tuple(result.values()))
print("A... | MongoDB.ipynb | 0Rick0/Fontys-DS-GCD | mit |
Geolocation
MongoDB also has built-in support for geospatial indexes.
These allow you, for example, to search for nearby shops around a given location | scratch_db.zips.create_index([("loc", "2dsphere")])
flagstaff = scratch_db.zips.find_one({"city": "FLAGSTAFF"})
nearby = scratch_db.zips.find({"loc": {
"$near": {
"$geometry": {
'type': 'Point',
'coordinates': flagstaff['loc']
},
"$maxDistance": 50000
}
}})
for c... | MongoDB.ipynb | 0Rick0/Fontys-DS-GCD | mit |
Create a managed tabular dataset from a CSV
A Managed dataset can be used to create an AutoML model or a custom model. | # Create a managed tabular dataset
ds = # TODO 1: Your code goes here(display_name="abalone", gcs_source=[gcs_csv_path])
ds.resource_name | courses/machine_learning/deepdive2/production_ml/labs/sdk_metric_parameter_tracking_for_custom_jobs.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 |
Start a new experiment run to track training parameters and start the training job. Note that this operation will take around 10 mins. | aiplatform.start_run("custom-training-run-1") # Change this to your desired run name
parameters = {"epochs": 10, "num_units": 64}
aiplatform.log_params(parameters)
# Launch the training job
model = # TODO 2: Your code goes here(
ds,
replica_count=1,
model_display_name="abalone-model",
args=[f"--epochs... | courses/machine_learning/deepdive2/production_ml/labs/sdk_metric_parameter_tracking_for_custom_jobs.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 |
Deploy Model and calculate prediction metrics
Deploy model to Google Cloud. This operation will take 10-20 mins. | # Deploy the model
endpoint = # TODO 3: Your code goes here(machine_type="n1-standard-4") | courses/machine_learning/deepdive2/production_ml/labs/sdk_metric_parameter_tracking_for_custom_jobs.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 |
Perform online prediction. | # Perform online prediction using endpoint
prediction = # TODO 4: Your code goes here(test_dataset.tolist())
prediction | courses/machine_learning/deepdive2/production_ml/labs/sdk_metric_parameter_tracking_for_custom_jobs.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 |
Extract all parameters and metrics created during this experiment. | # Extract all parameters and metrics of the experiment
# TODO 5: Your code goes here | courses/machine_learning/deepdive2/production_ml/labs/sdk_metric_parameter_tracking_for_custom_jobs.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 |
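The TODOs in this lab map onto the Vertex AI SDK roughly as follows; a hedged sketch, not the official solution (in particular, `job` is assumed to be a CustomTrainingJob created in a cell not shown here):
ds = aiplatform.TabularDataset.create(display_name="abalone", gcs_source=[gcs_csv_path])   # TODO 1
model = job.run(ds, replica_count=1, model_display_name="abalone-model",
                args=[f"--epochs={parameters['epochs']}"])                               # TODO 2
endpoint = model.deploy(machine_type="n1-standard-4")                                     # TODO 3
prediction = endpoint.predict(test_dataset.tolist())                                      # TODO 4
experiment_df = aiplatform.get_experiment_df()                                            # TODO 5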
XGBoost HP Tuning on AI Platform
This notebook trains a model on AI Platform using Hyperparameter Tuning to predict a car's Miles Per Gallon. It uses the Auto MPG Data Set from the UCI Machine Learning Repository.
Citation: Dua, D. and Karra Taniskidou, E. (2017). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]... | # Replace <PROJECT_ID> and <BUCKET_ID> with proper Project and Bucket ID's:
%env PROJECT_ID <PROJECT_ID>
%env BUCKET_ID <BUCKET_ID>
%env JOB_DIR gs://<BUCKET_ID>/xgboost_job_dir
%env REGION us-central1
%env TRAINER_PACKAGE_PATH ./auto_mpg_hp_tuning
%env MAIN_TRAINER_MODULE auto_mpg_hp_tuning.train
%env RUNTIME_VERSION ... | notebooks/xgboost/HyperparameterTuningWithXGBoostInCMLE.ipynb | GoogleCloudPlatform/cloudml-samples | apache-2.0 |
The data
The Auto MPG Data Set that this sample
uses for training is provided by the UC Irvine Machine Learning
Repository. We have hosted the data on a public GCS bucket gs://cloud-samples-data/ml-engine/auto_mpg/. The data has been pre-processed to remove rows with incomplete data so as not to create additional steps... | %%writefile ./auto_mpg_hp_tuning/train.py
import argparse
import datetime
import os
import pandas as pd
import subprocess
import pickle
from google.cloud import storage
import hypertune
import xgboost as xgb
from random import shuffle
def split_dataframe(dataframe, rate=0.8):
    indices = dataframe.index.values.tol... | notebooks/xgboost/HyperparameterTuningWithXGBoostInCMLE.ipynb | GoogleCloudPlatform/cloudml-samples | apache-2.0 |
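split_dataframe is truncated; a minimal sketch of the random train/test split it begins (the exact partitioning logic is an assumption):
def split_dataframe(dataframe, rate=0.8):
    """Randomly split a dataframe into train/test partitions."""
    indices = dataframe.index.values.tolist()
    shuffle(indices)                         # random.shuffle, imported above
    cut = int(len(indices) * rate)
    train = dataframe.loc[indices[:cut]]
    test = dataframe.loc[indices[cut:]]
    return train, test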
Load the hyperparameter values that are passed to the model during training.
In this tutorial, the XGBoost regressor is used, because it has several parameters that can be used to help demonstrate how to choose HP tuning values. (The range of values is set below in the configuration file for the HP tuning values.)
parser = argparse.ArgumentParser()
parser.add_argument(
'--job-dir', # handled automatically by AI Platform
help='GCS location to write checkpoints and export models',
required=True
)
parser.add_argument(
'--max_depth', # Specified in the config file
h... | notebooks/xgboost/HyperparameterTuningWithXGBoostInCMLE.ipynb | GoogleCloudPlatform/cloudml-samples | apache-2.0 |
Add code to download the data from GCS
In this case, the data is publicly hosted, and AI Platform will be able to use it when training your model. | %%writefile -a ./auto_mpg_hp_tuning/train.py
# Public bucket holding the auto mpg data
bucket = storage.Client().bucket('cloud-samples-data')
# Path to the data inside the public bucket
blob = bucket.blob('ml-engine/auto_mpg/auto-mpg.data')
# Download the data
blob.download_to_filename('auto-mpg.data')
# -----------... | notebooks/xgboost/HyperparameterTuningWithXGBoostInCMLE.ipynb | GoogleCloudPlatform/cloudml-samples | apache-2.0 |
Use the Hyperparameters
Use the Hyperparameter values passed in those arguments to set the corresponding hyperparameters in your application's XGBoost code. | %%writefile -a ./auto_mpg_hp_tuning/train.py
# Create the regressor; here we use an XGBoost regressor to demonstrate the use of HP Tuning.
# Here is where we set the variables used during HP Tuning from
# the parameters passed into the python script
regressor = xgb.XGBRegressor(max_depth=args.max_depth,
... | notebooks/xgboost/HyperparameterTuningWithXGBoostInCMLE.ipynb | GoogleCloudPlatform/cloudml-samples | apache-2.0 |
Report the mean accuracy as hyperparameter tuning objective metric. | %%writefile -a ./auto_mpg_hp_tuning/train.py
# Calculate the mean accuracy on the given test data and labels.
score = regressor.score(test_df[FEATURES], test_df[TARGET])
# The default name of the metric is training/hptuning/metric.
# We recommend that you assign a custom name. The only functional difference is that ... | notebooks/xgboost/HyperparameterTuningWithXGBoostInCMLE.ipynb | GoogleCloudPlatform/cloudml-samples | apache-2.0 |
Export and save the model to GCS | %%writefile -a ./auto_mpg_hp_tuning/train.py
# Export the model to a file
model_filename = 'model.pkl'
with open(model_filename, "wb") as f:
    pickle.dump(regressor, f)
# Example: job_dir = 'gs://BUCKET_ID/xgboost_job_dir/1'
job_dir = args.job_dir.replace('gs://', '') # Remove the 'gs://'
# Get the Bucket Id
buck... | notebooks/xgboost/HyperparameterTuningWithXGBoostInCMLE.ipynb | GoogleCloudPlatform/cloudml-samples | apache-2.0 |