Example output:
{
"deployedModel": {
"id": "7407594554280378368"
}
}
|
# The unique ID for the deployed model
deployed_model_id = result.deployed_model.id
print(deployed_model_id)
|
notebooks/community/migration/UJ9 Custom Training Prebuilt Container XGBoost.ipynb
|
GoogleCloudPlatform/vertex-ai-samples
|
apache-2.0
|
projects.locations.endpoints.predict
Prepare file for online prediction
|
INSTANCES = [[1.4, 1.3, 5.1, 2.8], [1.5, 1.2, 4.7, 2.4]]
|
notebooks/community/migration/UJ9 Custom Training Prebuilt Container XGBoost.ipynb
|
GoogleCloudPlatform/vertex-ai-samples
|
apache-2.0
|
Request
|
prediction_request = {"endpoint": endpoint_id, "instances": INSTANCES}
print(json.dumps(prediction_request, indent=2))
|
notebooks/community/migration/UJ9 Custom Training Prebuilt Container XGBoost.ipynb
|
GoogleCloudPlatform/vertex-ai-samples
|
apache-2.0
|
Example output:
{
"endpoint": "projects/116273516712/locations/us-central1/endpoints/1733903448723685376",
"instances": [
[
1.4,
1.3,
5.1,
2.8
],
[
1.5,
1.2,
4.7,
2.4
]
]
}
Call
|
request = clients["prediction"].predict(endpoint=endpoint_id, instances=INSTANCES)
|
notebooks/community/migration/UJ9 Custom Training Prebuilt Container XGBoost.ipynb
|
GoogleCloudPlatform/vertex-ai-samples
|
apache-2.0
|
Example output:
{
"predictions": [
2.045193195343018,
1.961864471435547
],
"deployedModelId": "7407594554280378368"
}
projects.locations.endpoints.undeployModel
Call
|
request = clients["endpoint"].undeploy_model(
endpoint=endpoint_id, deployed_model_id=deployed_model_id, traffic_split={}
)
|
notebooks/community/migration/UJ9 Custom Training Prebuilt Container XGBoost.ipynb
|
GoogleCloudPlatform/vertex-ai-samples
|
apache-2.0
|
We want to generalize this result to several masses connected by several springs.
The spring constant as a second derivative of potential
The force is related to the potential energy by
$$ F = -\frac{d}{dx}V(x).$$
This equation comes directly from the definition that work is force times distance.
Integrating this, we find the potential energy of a mass on a spring,
$$ V(x) = \frac{1}{2}kx^2. $$
In fact, the spring constant can be defined to be the second derivative of the potential,
$$ k = \frac{d^2}{dx^2} V(x).$$ We take the value of the second derivative at the minimum
of the potential, which assumes that the oscillations are not very far from equilibrium.
We see that Hooke's law is simply
$$F = -\frac{d^2 V(x)}{dx^2} x, $$
where the second derivative is evaluated at the minimum of the potential.
For a general potential, we can write the equation of motion as
$$ \frac{d^2}{dt^2} x = -\frac{1}{m}\frac{d^2V(x)}{dx^2} x.$$
The expression on the right hand side is known as the dynamical matrix,
though this is a trivial 1x1 matrix.
Two masses connected by a spring
Now the potential depends on two coordinates,
$$ V(x_1, x_2) = \frac{1}{2} k (x_1 - x_2 - d)^2,$$
where $d$ is the equilibrium separation of the particles.
Now the force on each particle depends on the positions of both of the particles,
$$
\begin{pmatrix}F_1 \\ F_2\end{pmatrix}
= -
\begin{pmatrix}
\frac{\partial^2 V}{\partial x_1^2} &
\frac{\partial^2 V}{\partial x_1\partial x_2} \\
\frac{\partial^2 V}{\partial x_1\partial x_2} &
\frac{\partial^2 V}{\partial x_2^2} \\
\end{pmatrix}
\begin{pmatrix}x_1 \\ x_2\end{pmatrix}
$$
Performing the derivatives, we find
$$
\begin{pmatrix}F_1 \\ F_2\end{pmatrix}
= -
\begin{pmatrix}
k & -k \\
-k & k \\
\end{pmatrix}
\begin{pmatrix}x_1 \\ x_2\end{pmatrix}
$$
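As a cross-check, this force-constant matrix is just the Hessian of the potential; a minimal symbolic sketch, assuming sympy is available:

```python
import sympy as sp

x1, x2, k, d = sp.symbols('x1 x2 k d')
V = sp.Rational(1, 2) * k * (x1 - x2 - d)**2

# Hessian: the matrix of second partial derivatives of V
H = sp.hessian(V, (x1, x2))
print(H)  # Matrix([[k, -k], [-k, k]])
```

Note that the equilibrium separation $d$ drops out entirely, as it must: a constant shift of the minimum cannot affect the curvature.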
The equations of motion are coupled,
$$
\begin{pmatrix}
\frac{d^2x_1}{dt^2} \\
\frac{d^2x_2}{dt^2} \\
\end{pmatrix}
= -
\begin{pmatrix}
k/m & -k/m \\
-k/m & k/m \\
\end{pmatrix}
\begin{pmatrix}x_1 \\ x_2\end{pmatrix}
$$
To decouple the equations, we find the eigenvalues and eigenvectors.
|
import numpy as np
a = np.array([[1, -1], [-1, 1]])
freq, vectors = np.linalg.eig(a)
vectors = vectors.transpose()
|
HarmonicOscillator.ipynb
|
shumway/srt_bootcamp
|
mit
|
The frequencies of the two modes of vibration are (in multiples of $\sqrt{k/m}$)
|
freq
|
HarmonicOscillator.ipynb
|
shumway/srt_bootcamp
|
mit
|
The first mode is a vibrational mode where the masses vibrate against each other (moving in opposite directions). This can be seen from the eigenvector.
|
vectors[0]
|
HarmonicOscillator.ipynb
|
shumway/srt_bootcamp
|
mit
|
The second mode is a translation mode with zero frequency—both masses move in the same direction.
|
vectors[1]
|
HarmonicOscillator.ipynb
|
shumway/srt_bootcamp
|
mit
|
We can interactively illustrate the vibrational mode.
|
import matplotlib.pyplot as plt
%matplotlib inline
import ipywidgets as widgets  # IPython.html.widgets was moved to the ipywidgets package
def make_plot(t):
fig, ax = plt.subplots()
x,y = np.array([-1,1]), np.array([0,0])
plt.plot(x, y, 'k.')
plt.plot(x + 0.3 * vectors[0] * t, y, 'bo')
plt.xlim(-1.5,1.5)
plt.ylim(-1.5,1.5)
widgets.interact(make_plot, t=(-1,1,0.1))
|
HarmonicOscillator.ipynb
|
shumway/srt_bootcamp
|
mit
|
Finding the dynamical matrix with numerical derivatives
We start from a function $V(x)$. If we want to calculate a derivative,
we just use the difference formula, but don't take the step too small.
Using $\Delta x = 10^{-6}$ is safe.
$$
F = -\frac{dV(x)}{dx} \approx
\frac{V(x+\Delta x) - V(x-\Delta x)}{2\Delta x}
$$
Note that it is more accurate to do this symmetric difference formula
than it would be to use the usual forward derivative from calculus class.
It's easy to see this formula is just calculating the slope of the function using points near $x$.
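The accuracy claim is easy to check directly; a small sketch on a quartic potential (the harmonic case is a degenerate check, since the central formula is exact for quadratics):

```python
def V(x):
    return x**4  # a smooth test potential; the exact derivative at x = 1 is 4

dx = 1e-6
x = 1.0
exact = 4 * x**3

forward = (V(x + dx) - V(x)) / dx              # one-sided formula, error O(dx)
central = (V(x + dx) - V(x - dx)) / (2 * dx)   # symmetric formula, error O(dx**2)

err_forward = abs(forward - exact)
err_central = abs(central - exact)
print(err_forward, err_central)
```

The symmetric error comes out several orders of magnitude smaller than the one-sided error for the same step size.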
|
def V(x):
return 0.5 * x**2
deltax = 1e-6
def F_approx(x):
return ( V(x + deltax) - V(x - deltax) ) / (2 * deltax)
[(x, F_approx(x)) for x in np.linspace(-2,2,9)]
|
HarmonicOscillator.ipynb
|
shumway/srt_bootcamp
|
mit
|
Next, we can find the second derivative by using the difference formula twice.
We find the nice expression,
$$
\frac{d^2V}{dx^2} \approx \frac{V(x+\Delta x) - 2V(x) + V(x-\Delta x)}{(\Delta x)^2}.
$$
This formula has the nice interpretation of comparing the value of $V(x)$ to
the average of points on either side. If it is equal to the average, the line
is straight and the second derivative is zero.
If the average of the outer values is larger than $V(x)$, then the ends curve upward,
and the second derivative is positive.
Likewise, if the average of the outer values is less than $V(x)$, then the ends curve downward,
and the second derivative is negative.
|
def dV2dx2_approx(x):
return ( V(x + deltax) - 2 * V(x) + V(x - deltax) ) / deltax**2
[(x, dV2dx2_approx(x)) for x in np.linspace(-2,2,9)]
|
HarmonicOscillator.ipynb
|
shumway/srt_bootcamp
|
mit
|
Now we can use these derivative formulas to calculate the dynamical matrix
for the two masses on one spring. We'll use $k=1$ and $m=1$ for simplicity.
|
def V2(x1, x2):
return 0.5 * (x1 - x2)**2
x1, x2 = -1, 1
mat = np.array(
    [[(V2(x1+deltax, x2) - 2 * V2(x1,x2) + V2(x1-deltax, x2)) / deltax**2,
      (V2(x1+deltax, x2+deltax) - V2(x1-deltax, x2+deltax)
       - V2(x1+deltax, x2-deltax) + V2(x1-deltax, x2-deltax)) / (2*deltax)**2],
     [(V2(x1+deltax, x2+deltax) - V2(x1-deltax, x2+deltax)
       - V2(x1+deltax, x2-deltax) + V2(x1-deltax, x2-deltax)) / (2*deltax)**2,
      (V2(x1, x2+deltax) - 2 * V2(x1,x2) + V2(x1, x2-deltax)) / deltax**2]]
)
mat
freq, vectors = np.linalg.eig(mat)
vectors = vectors.transpose()
for f,v in zip(freq, vectors):
print("freqency", f, ", eigenvector", v)
|
HarmonicOscillator.ipynb
|
shumway/srt_bootcamp
|
mit
|
I want to count the number of rentals per vehicle ID in reservations.csv, appending these values as a column in vehicles.csv, in order to compare vehicle properties to reservation numbers directly. I also expect that a key factor in customer decisions may be the price difference between the actual and recommended prices, so I create a new column for this parameter as well. Finally, I merge the two dataframes to facilitate easy histogram plotting and analysis using the reservations data as a basis.
|
# Count frequency of rentals and add column to VEHICLE_DATA
veh_id = VEHICLE_DATA[['vehicle_id']].to_numpy()  # .as_matrix() was removed in pandas 1.0
res_id = RESERVATION_DATA[['vehicle_id']].to_numpy()
# Use numpy here to ensure zero counts as a value
n_reservations = np.zeros(len(veh_id))
for i,id in enumerate(veh_id):
n_reservations[i] = len(np.where(id == res_id)[0])
VEHICLE_DATA['num_reservations'] = n_reservations.astype(int)
# Add column with difference between the recommended price and the actual price
VEHICLE_DATA['diff_price'] = VEHICLE_DATA['recommended_price'] - VEHICLE_DATA['actual_price']
# Add a column that 'bucketizes' the number of reservations (low, med, high) categories
VEHICLE_DATA['categorical_reservations'] = pd.cut(VEHICLE_DATA['num_reservations'], 3, labels=["low","medium","high"])
# Merge databases to get vehicle features in the RESERVATION_DATA dataframe
MERGED_DATA = pd.merge(VEHICLE_DATA,RESERVATION_DATA)
# Define columns to plot initially as an exploration step
PLOT_COLUMNS = ['technology', 'num_images', 'street_parked', 'description','actual_price',
'recommended_price','diff_price','num_reservations']
|
Data Science Example Rental Cars.ipynb
|
nigelmathes/Data_Science
|
gpl-3.0
|
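The zero-preserving count above can also be done directly in pandas with `value_counts` and `map`; a sketch on tiny stand-in frames (the real `VEHICLE_DATA` and `RESERVATION_DATA` come from the CSVs):

```python
import pandas as pd

# Stand-in data; in the notebook these frames are read from vehicles.csv / reservations.csv
vehicles = pd.DataFrame({'vehicle_id': [1, 2, 3]})
reservations = pd.DataFrame({'vehicle_id': [1, 1, 3]})

# map() leaves never-reserved vehicles as NaN, so fillna(0) restores the zero counts
counts = reservations['vehicle_id'].value_counts()
vehicles['num_reservations'] = vehicles['vehicle_id'].map(counts).fillna(0).astype(int)
print(vehicles['num_reservations'].tolist())  # [2, 0, 1]
```

This avoids the explicit Python loop over `np.where`, which grows expensive as the vehicle list gets large.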
Finding the most important factors driving the total number of reservations
|
from pandas.plotting import scatter_matrix  # pandas.tools.plotting was removed in pandas 0.25
scatter_matrix(VEHICLE_DATA[PLOT_COLUMNS], diagonal='kde')
plt.show()
|
Data Science Example Rental Cars.ipynb
|
nigelmathes/Data_Science
|
gpl-3.0
|
The plot above shows every measured parameter, along with the newly added price-difference and number-of-reservations parameters, plotted against one another. I used this to try to quickly determine, visually, what parameters might drive the number of reservations. This is probably at the limit, in terms of number of parameters, of what I would just throw onto a scatter plot to inspect. A few things pop out:
* Actual price, recommended price, and the price difference are all correlated (unsurprisingly)
* The recommended price distribution is relatively flat, and it may not play a major role in reservations
* Discrete parameters are hard to interpret here, but don't seem to show trends
The next step is to look at the price trends more closely.
|
plt.rcParams.update({'font.size': LABEL_SIZE})
plt.rc('figure',figsize=(10,8))
VEHICLE_DATA.plot.hexbin(x='actual_price',y='recommended_price',C='num_reservations',reduce_C_function=np.max,
gridsize=25,figsize=(10,8))
plt.ylabel("Recommended Price")
plt.xlabel("Actual Price")
plt.plot([34,90],[34,90],color='orange',linewidth=3)
plt.show()
|
Data Science Example Rental Cars.ipynb
|
nigelmathes/Data_Science
|
gpl-3.0
|
In the above figure, I show the recommended price versus the actual price in a hex density plot, with the color intensity representing the number of reservations. The orange line represents a one-to-one correlation between the two parameters. One can see immediately that there is a high density of reservations corresponding to the one-to-one line, where the actual price very nearly matches the recommended price. I also feel confident, now, that the recommended price does not need to be a free parameter in the future machine learning analysis, instead substituting the price difference.
Buyers naturally want to reserve a car at a price they perceive as fair, and coming close to the recommended price is very important.
|
# Define feature columns to explore with machine learning
FEATURE_COLUMNS = ['technology', 'num_images', 'street_parked', 'description','diff_price','actual_price']
# Random forest regressor for continuous num_reservations
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
TARGET_COLUMN = ['num_reservations']
rf = RandomForestRegressor()
rf.fit(VEHICLE_DATA[FEATURE_COLUMNS], VEHICLE_DATA[TARGET_COLUMN].values.ravel())
print("=====================================================================================")
print("Features Sorted by Score for Regressor:\n")
print(sorted(zip(map(lambda x: round(x, 4), rf.feature_importances_), FEATURE_COLUMNS), reverse=True))
# Random forest classifier for bucketized num_reservations
TARGET_COLUMN = ['categorical_reservations']
rf = RandomForestClassifier()
rf.fit(VEHICLE_DATA[FEATURE_COLUMNS], VEHICLE_DATA[TARGET_COLUMN].values.ravel())
print("\nFeatures Sorted by Score for Classifier:\n")
print(sorted(zip(map(lambda x: round(x, 4), rf.feature_importances_), FEATURE_COLUMNS), reverse=True))
print("=====================================================================================")
|
Data Science Example Rental Cars.ipynb
|
nigelmathes/Data_Science
|
gpl-3.0
|
In the above code, I explore the data set using a Random Forest algorithm. It is a relatively quick and exceptionally versatile way to examine labelled data and derive relationships. In this case, I am using it to "score" the various parameters by how much they contribute to the number of reservations.
I ask the Random Forest algorithm first to classify based upon the raw number of reservations. I then also ask it to examine labels which are "bucketized," or binned, into three categories: low, medium, and high numbers of reservations, effectively smoothing out the options and ensuring that extreme values do not drive perceived trends.
We can conclude that the difference between the actual price of the car and the recommended price of the car is exceptionally important, followed by the actual price of the car, and then followed by the number of characters in the car's description. The number of images, whether or not it can be street parked, and the technology package do not play a large role in getting a car reserved frequently.
|
fig, axes = plt.subplots(nrows=1, ncols=2, sharey=True)
fig.set_figheight(6)
fig.set_figwidth(16)
MERGED_DATA[['actual_price']].plot.hist(bins=15,ax=axes[0])
axes[0].set_xlabel("Actual Price")
MERGED_DATA[['diff_price']].plot.hist(bins=15,ax=axes[1])
plt.xlabel("Price Difference")
plt.show()
|
Data Science Example Rental Cars.ipynb
|
nigelmathes/Data_Science
|
gpl-3.0
|
In the above figures, I examine exactly how the price factors in to the number of reservations. These histograms show the number of reservations on the y-axis as a function of either the actual price or the difference between the actual price and the recommended price.
On the left, it appears that as price goes down, the number of reservations goes up, at least until roughly the $90 mark, where, below this level, the distribution flattens.
On the right, the trend is more pronounced. As the price approaches the recommended price, the number of reservations increases.
Lower prices lead to higher numbers of reservations, up until a point (near $90). At prices below this level, what matters most is how well the actual price matches the recommended price. Customers want to get a deal, and when they get that deal, they want it to be a fair deal.
|
MERGED_DATA[['description']].plot.hist(bins=15,figsize=(8,6))
plt.show()
|
Data Science Example Rental Cars.ipynb
|
nigelmathes/Data_Science
|
gpl-3.0
|
The final important parameter is the length of the description. The above plot shows the frequency of reservations as a function of the description character length. I can quickly conclude that more reservations are made for cars with descriptions less than 50 characters. After this point, the length of the description does not play a major role in one's decision whether or not to reserve a vehicle.
What role does the technology package play?
I already know that the presence of the technology package does not significantly influence the number of reservations. Knowing this, I want to explore whether it has an impact on the type of reservations made.
|
plt.rc('figure',figsize=(8,6))
plt.figure(1)
MERGED_DATA.loc[MERGED_DATA['reservation_type']==1]['technology'].plot.hist(alpha=0.5,title="Hourly",
                                                                            density=True)  # `normed` was removed from matplotlib
plt.xlabel("With or Without Technology")
plt.figure(2)
MERGED_DATA.loc[MERGED_DATA['reservation_type']==2]['technology'].plot.hist(alpha=0.5,title="Daily",
                                                                            density=True)
plt.xlabel("With or Without Technology")
plt.figure(3)
MERGED_DATA.loc[MERGED_DATA['reservation_type']==3]['technology'].plot.hist(alpha=0.5,title="Weekly",
                                                                            density=True)
plt.xlabel("With or Without Technology")
plt.show()
|
Data Science Example Rental Cars.ipynb
|
nigelmathes/Data_Science
|
gpl-3.0
|
In the above plots, I show the normalized frequency of reservation for the three types of reservations. On the x-axis, a 0 represents the absence of the technology package, and a 1 represents a vehicle having the technology. It is visually obvious that a proportionately larger number of hourly reservations are made with vehicles having the technology package. We can statistically support this claim.
|
import scipy.stats
KSstatistic, pvalue = scipy.stats.ks_2samp(MERGED_DATA.loc[MERGED_DATA['reservation_type']==3]['technology'],
MERGED_DATA.loc[MERGED_DATA['reservation_type']==2]['technology'])
print "KS probability that Weekly and Daily reservations are drawn from the same underlying population:\n"
print "P(KS) = {}\n".format(pvalue)
KSstatistic, pvalue = scipy.stats.ks_2samp(MERGED_DATA.loc[MERGED_DATA['reservation_type']==1]['technology'],
MERGED_DATA.loc[MERGED_DATA['reservation_type']==2]['technology'])
print "KS probability that Hourly and Daily reservations are drawn from the same underlying population:\n"
print "P(KS) = {}\n".format(pvalue)
|
Data Science Example Rental Cars.ipynb
|
nigelmathes/Data_Science
|
gpl-3.0
|
Let's experiment with neural networks!
Learning Objectives
Gain familiarity with
Standard NN libraries: Keras and Tensorflow
One standard NN architecture: fully connected ('dense') networks
Two standard tasks performed with NNs: Binary Classification, Multi-class Classification
Diagnostics of NNs
1. History (Loss Function)
2. Receiver Operating Characteristic (ROC) curve and Area Under the Curve (AUC)
Experience fundamental considerations, pitfalls, and strategies when training NNs
Data set preparation (never underestimate the time required for this)
Training set size
Training speed and efficiency
Model fitting (training)
Begin connecting NN functionality to data set structure and problem of interest
Topics not covered
Class (im)balance
Training set diversity
Notes
Our main task will be to Classify Handwritten Digits (Is it a "zero" [0] or a "one" [1]?; ooooh, the suspense). This is very useful though, because it's an opportunity to separate yourself from the science of the data. MNIST is a benchmark data set for machine learning. Astronomy doesn't yet have machine learning-specific benchmark data sets.
Prepare the Data
For this MNIST set, this is the fastest it will probably ever take you to prepare a data set. Ye have been warn-ed.
Download the data
(ooh look it's all stored on Amazon's AWS!)
(pssst, we're in the cloooud, it's the future!)
|
# import MNIST data
(x_train_temp, y_train_temp), (x_test_temp, y_test_temp) = mnist.load_data()
|
Sessions/Session07/Day4/LetsGetNetworking.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
Reshape the data arrays: flatten for ingestion into a Dense layer.
We're going to use a Dense Neural Network Architecture, which takes 1D data structures, not 2D images.
So we need to make the input shape appropriate by 'flattening' each image into a 1D vector.
|
# Flatten the images
img_rows, img_cols = x_train[0].shape[0], x_train[0].shape[1]
img_size = img_rows*img_cols
x_train = x_train.reshape(x_train.shape[0], img_size)
x_test = x_test.reshape(x_test.shape[0], img_size)
print("New shape", x_train.shape)
|
Sessions/Session07/Day4/LetsGetNetworking.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
Create a Neural Net and train it!
Decide model format
|
model = Sequential()
|
Sessions/Session07/Day4/LetsGetNetworking.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
For later: check out the keras documentation to see what other kinds of model formats are available. What might they be useful for?
Add layers to the model sequentially
|
model.add(Dense(units=32, activation='sigmoid', input_shape=(img_size,))) # dense layer of 32 neurons
model.add(Dense(units=num_classes, activation='softmax')) # dense layer of 'num_classes' neurons, because these are the number of options for the classification
#model.add(BatchNormalization())
model.summary() # look at the network output
|
Sessions/Session07/Day4/LetsGetNetworking.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
Review the model summary, noting the major features --- the meaning of each column, the flow downward, the number of parameters at the bottom.
What's the math that leads to the number of parameters in each layer? This is a little bit of fun math to play with. In the long run it can help you understand how to debug model compilation errors, and to design custom networks better. No need to dwell here too long, but give it a try. The ingredients are:
image size
number of units in the layer
number of bias elements per unit
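A sketch of that parameter math for the network above, assuming the flattened 28×28 MNIST input, the 32-unit hidden layer, and 10 output classes: each Dense unit has one weight per input plus one bias.

```python
img_size = 28 * 28   # flattened MNIST image (784 inputs)
hidden_units = 32
num_classes = 10     # assuming the full 10-digit problem

# Each Dense layer: units * (inputs + 1 bias per unit)
params_hidden = hidden_units * (img_size + 1)
params_output = num_classes * (hidden_units + 1)

print(params_hidden)  # 25120
print(params_output)  # 330
```

These two numbers should match the per-layer "Param #" column in `model.summary()`, and their sum the total trainable parameter count.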
Compile the model
Select three key options
1. optimizer: the method for optimizing the weights. "Stochastic Gradient Descent (SGD)" is the canonical method.
2. loss function: the form of the function to encode the difference between the data's true label and the predict label.
3. metric: the function by which the model is evaluated.
|
model.compile(optimizer="sgd", loss='categorical_crossentropy', metrics=['accuracy'])
|
Sessions/Session07/Day4/LetsGetNetworking.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
The optimizer is an element that I sometimes tune if I want more control over the training. Check the Keras docs for the options available for the optimizer.
Fit (read: Train) the model
|
# Training parameters
batch_size = 32 # number of images per gradient update
num_epochs = 5 # number of epochs
validation_split = 0.8 # fraction of the training set held out for validation
history = model.fit(x_train, y_train,
batch_size=batch_size,
epochs=num_epochs,
validation_split=validation_split,
verbose=True)
|
Sessions/Session07/Day4/LetsGetNetworking.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
"Exerciiiiiiisee the neurons! This Network is cleeeah!"
(bonus points if you know the 90's movie, from which this is a near-quote.)
How well does the training go if we don't normalize the data to be between 0 and 1.0?
What is the effect of batch_size on how long it takes the network to achieve 80% accuracy?
What happens if you increase the validation set fraction (and thus reduce the training set size)? Try going really low on the training set size. Can you still get 80% accuracy?
Diagnostics!
Because neural networks are such complex objects, and in this era of experimentation, diagnostics (both standard and tailored) are critical for training NN models.
Evaluate overall model efficacy
Evaluate model on training and test data and compare. This provides summary values that are equivalent to the final value in the accuracy/loss history plots.
|
loss_train, acc_train = model.evaluate(x_train, y_train, verbose=False)
loss_test, acc_test = model.evaluate(x_test, y_test, verbose=False)
print(f'Train acc/loss: {acc_train:.3}, {loss_train:.3}')
print(f'Test acc/loss: {acc_test:.3}, {loss_test:.3}')
|
Sessions/Session07/Day4/LetsGetNetworking.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
Define Confusion Matrix
|
# Function: Convert from categorical back to numerical value
def convert_to_index(array_categorical):
array_index = [np.argmax(array_temp) for array_temp in array_categorical]
return array_index
def plot_confusion_matrix(cm,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
    This function is modified to plot the ConfusionMatrix object.
Normalization can be applied by setting `normalize=True`.
Code Reference :
http://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html
This script is derived from PyCM repository: https://github.com/sepandhaghighi/pycm
"""
plt_cm = []
for i in cm.classes :
row=[]
for j in cm.classes:
row.append(cm.table[i][j])
plt_cm.append(row)
plt_cm = np.array(plt_cm)
if normalize:
plt_cm = plt_cm.astype('float') / plt_cm.sum(axis=1)[:, np.newaxis]
plt.imshow(plt_cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(cm.classes))
plt.xticks(tick_marks, cm.classes, rotation=45)
plt.yticks(tick_marks, cm.classes)
fmt = '.2f' if normalize else 'd'
thresh = plt_cm.max() / 2.
for i, j in itertools.product(range(plt_cm.shape[0]), range(plt_cm.shape[1])):
plt.text(j, i, format(plt_cm[i, j], fmt),
horizontalalignment="center",
color="white" if plt_cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('Actual')
plt.xlabel('Predict')
|
Sessions/Session07/Day4/LetsGetNetworking.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
Plot Confusion Matrix
|
# apply conversion function to data
y_test_ind = convert_to_index(y_test)
y_pred_test_ind = convert_to_index(y_pred_test)
# compute confusion matrix
cm_test = ConfusionMatrix(y_test_ind, y_pred_test_ind)
np.set_printoptions(precision=2)
# plot confusion matrix result
plt.figure()
plot_confusion_matrix(cm_test,title='cm')
|
Sessions/Session07/Day4/LetsGetNetworking.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
Problems
Problem 1: Binary data
Problem 3a: Generate a binary data set from the MNIST data set.
Problem 3b: Prepare the data set in the same way as the multi-class scenario, but appropriately for only two classes.
Problem 3c: Re-train the neural network for the binary classification problem.
Problem 3d: Perform all the diagnostics again.
Problem 3e: Use sklearn to generate a Receiver Operating Characteristic (ROC) curve and the associated AUC.
Problem 3f: Create a new diagnostic that shows examples of the images and how they were classified. For a given architecture and training, plot four columns of images, five rows down. Each row is just another example of something that is classified. The four columns are
True positive
False positive
True negative
False negative
So this will be a grid of images that are examples of classifications. This is important to give a sense of how well the network is classifying different types of objects.
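For Problem 3e, a minimal sklearn sketch on stand-in scores (the real inputs would be `y_test` and the network's predicted probability for the positive class):

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Stand-in labels and scores; replace with the test labels and model predictions
y_true = np.array([0, 0, 1, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.8])

# ROC curve: false-positive vs true-positive rate across score thresholds
fpr, tpr, thresholds = roc_curve(y_true, y_score)
auc = roc_auc_score(y_true, y_score)
print(auc)  # 0.75
```

Plotting `tpr` against `fpr` gives the ROC curve; an AUC of 0.5 is chance level and 1.0 is perfect separation.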
Problem 2: How do the following metrics scale with the data set size? (Constructing a learning curve)
time required to perform 5 epochs.
accuracy and loss at final step
Some advice for this problem:
* create a function or functions that can wrap around the NN architecture, and then output key results.
* watch out for network initialization. If you don't redefine the network model each time, the weights will be preserved from the most recent training.
Sub-problems
1. Try training and evaluation for a number of well-spaced data set sizes and report on the pattern you see. Plot a "learning curve": the loss (or accuracy) as a function of the training set data size. This is an important step in judging what you need for your training set, and in assessing the feasibility of the application.
2. Do you see any change as a function of data set size? If not, go back and train the original network for more epochs. When does the loss plateau? Use that epoch count for the learning curve.
3. Does keras have any model function fit parameters that automatically decide when it will stop? Try using that.
Problem 3: How do the following factors and metrics scale with the number of neurons in the first layer?
time required to perform 10 epochs
accuracy and loss at final step
When developing an NN model, we think about the balance between accuracy, computation time, and network complexity. This exercise is intended to draw out some of the patterns in the confluence of those aspects. The goal of this problem is to produce plots that show general trends.
Problem 4: Apply this analysis to the Fashion MNIST data set
tip: you can use a very similar procedure to load the data as you did for MNIST
hint: "fashion_mnist"
Problem 6: Apply this to some mock strong lensing data
We have mock strong lensing data made with Lenspop.
Download from cloud storage to your remote runtime.
|
!gsutil cp -r gs://lsst-dsfp2018/stronglenses ./
!ls .
|
Sessions/Session07/Day4/LetsGetNetworking.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
Breaking Out Of A For Loop
|
for army in armies:
print(army)
if army == 'Blue Army':
print('Blue Army Found! Stopping.')
break
|
python/exiting_a_loop.ipynb
|
tpin3694/tpin3694.github.io
|
mit
|
Notice that the loop stopped after the conditional if statement was satisfied.
Exiting If the Loop Completed
A loop exits when it completes, but using an else statement we can add an action at the conclusion of the loop if it hasn't been exited earlier with break.
|
for army in armies:
print(army)
if army == 'Orange Army':
break
else:
print('Looped Through The Whole List, No Orange Army Found')
|
python/exiting_a_loop.ipynb
|
tpin3694/tpin3694.github.io
|
mit
|
<h2><font color="red">Challenge!</font></h2>
Copy this sample code and use it to calculate the mass of the muons. Make a histogram of this quantity.
<i>Hint!</i>
Make sure you do this for all the muons! Each collision can produce differing numbers of muons, so take care when you code this up.
Your histogram should look something like the following sketch, though the peak will be at different values.
The value of the peak should be the mass of the particle. <a href="http://en.wikipedia.org/wiki/Muon">Check your answer!</a>
You should also make histograms of the energy and magnitude of momentum ($|p|$). You should see a pretty wide range of values for these, and yet the mass is a very specific number.
|
from IPython.display import Image
Image(filename='images/muons_sketch.jpeg')
# Your code here
|
activities/activity00_cms_muons.ipynb
|
particle-physics-playground/playground
|
mit
|
Suppose we didn't know anything about special relativity and we tried calculating the mass from what we know about classical physics.
$$KE = \frac{1}{2}mv^2 \qquad KE = \frac{p^2}{2m} \qquad m = \frac{p^2}{2KE}$$
Let's interpret the energy from the CMS data as the kinetic energy ($KE$). Then use classical mechanics to calculate the mass of the muon, given the energy/KE and the momentum. What does <b>that</b> histogram look like?
Your histogram should not look like the last one! We know that the classical description of kinematics is not accurate for particles moving at high energies, so don't worry if the two histograms are different. That's the point! :)
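A sketch of both calculations for a single made-up four-vector, in GeV with $c=1$ (the relativistic mass uses the invariant $m^2 = E^2 - |p|^2$; all component values here are hypothetical, not real CMS data):

```python
import math

# Hypothetical four-momentum components, in GeV (c = 1)
E = 10.0
px, py, pz = 4.0, 3.0, 8.66
p = math.sqrt(px**2 + py**2 + pz**2)

m_relativistic = math.sqrt(E**2 - p**2)   # invariant mass
m_classical = p**2 / (2 * E)              # treating E as kinetic energy

print(m_relativistic, m_classical)
```

The two answers disagree wildly for a highly relativistic particle, which is exactly the discrepancy the histograms should show.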
|
# Your code here
|
activities/activity00_cms_muons.ipynb
|
particle-physics-playground/playground
|
mit
|
Android Management API - Quickstart
If you have not yet read the Android Management API Codelab we recommend that you do so before using this notebook. If you opened this notebook from the Codelab then follow the next instructions on the Codelab.
In order to run this notebook, you need:
An Android 6.0+ device.
Setup
The base resource of your Android Management solution is a Google Cloud Platform project. All other resources (Enterprises, Devices, Policies, etc) belong to the project and the project controls access to these resources. A solution is typically associated with a single project, but you can create multiple projects if you want to restrict access to resources.
For this Codelab we have already created a project for you (project ID: android-management-io-codelab).
To create and access resources, you need to authenticate with an account that has edit rights over the project. The account running this Codelab has been given rights over the project above. To start the authentication flow, run the cell below.
To run a cell:
Click anywhere in the code block.
Click the ▶ button in the top-left of the code block.
When you build a server-based solution, you should create a
service account
so you don't need to authorize the access every time.
|
from apiclient.discovery import build
from google_auth_oauthlib.flow import InstalledAppFlow
from random import randint
# This is a public OAuth config, you can use it to run this guide but please use
# different credentials when building your own solution.
CLIENT_CONFIG = {
'installed': {
'client_id':'882252295571-uvkkfelq073vq73bbq9cmr0rn8bt80ee.apps.googleusercontent.com',
'client_secret': 'S2QcoBe0jxNLUoqnpeksCLxI',
'auth_uri':'https://accounts.google.com/o/oauth2/auth',
'token_uri':'https://accounts.google.com/o/oauth2/token'
}
}
SCOPES = ['https://www.googleapis.com/auth/androidmanagement']
# Run the OAuth flow.
flow = InstalledAppFlow.from_client_config(CLIENT_CONFIG, SCOPES)
credentials = flow.run_console()
# Create the API client.
androidmanagement = build('androidmanagement', 'v1', credentials=credentials)
print('\nAuthentication succeeded.')
|
notebooks/codelab_kiosk.ipynb
|
google/android-management-api-samples
|
apache-2.0
|
Select an enterprise
An Enterprise resource binds an organization to your Android Management solution.
Devices and Policies both belong to an enterprise. Typically, a single enterprise
resource is associated with a single organization. However, you can create multiple
enterprises for the same organization based on their needs. For example, an
organization may want separate enterprises for its different departments or regions.
For this Codelab we have already created an enterprise for you. Run the next cell to select it.
|
enterprise_name = 'enterprises/LC02de1hmx'
|
notebooks/codelab_kiosk.ipynb
|
google/android-management-api-samples
|
apache-2.0
|
Create a policy
A Policy is a group of settings that determine the behavior of a managed device
and the apps installed on it. Each Policy resource represents a unique group of device
and app settings and can be applied to one or more devices. Once a device is linked to
a policy, any updates to the policy are automatically applied to the device.
To create a basic policy, run the cell below. You'll see how to create more advanced policies later in this guide.
|
import json
# Create a random policy name to avoid collision with other Codelabs
if 'policy_name' not in locals():
policy_name = enterprise_name + '/policies/' + str(randint(1, 1000000000))
policy_json = '''
{
"applications": [
{
"packageName": "com.google.samples.apps.iosched",
"installType": "FORCE_INSTALLED"
}
],
"debuggingFeaturesAllowed": true
}
'''
androidmanagement.enterprises().policies().patch(
name=policy_name,
body=json.loads(policy_json)
).execute()
|
notebooks/codelab_kiosk.ipynb
|
google/android-management-api-samples
|
apache-2.0
|
Simple k-means aggregation
Initialize an aggregation class object with k-means as the clustering method for eight typical days, without any integration of extreme periods. Alternative clusterMethod values are 'averaging', 'hierarchical' and 'k_medoids'.
|
aggregation = tsam.TimeSeriesAggregation(raw, noTypicalPeriods = 8, hoursPerPeriod = 24,
clusterMethod = 'k_means')
|
examples/aggregation_method_showcase.ipynb
|
FZJ-IEK3-VSA/tsam
|
mit
|
Simple k-medoids aggregation of weeks
Initialize a time series aggregation with k-medoids clustering of eight typical weeks (hoursPerPeriod = 24*7), without any integration of extreme periods.
|
aggregation = tsam.TimeSeriesAggregation(raw, noTypicalPeriods = 8, hoursPerPeriod = 24*7,
                                        clusterMethod = 'k_medoids')
|
examples/aggregation_method_showcase.ipynb
|
FZJ-IEK3-VSA/tsam
|
mit
|
Save the typical periods to a .csv file, with weeks ordered by GHI, for later testing consistency
|
typPeriods.reindex(typPeriods['GHI'].unstack().sum(axis=1).sort_values().index,
level=0).to_csv(os.path.join('results','testperiods_kmedoids.csv'))
|
examples/aggregation_method_showcase.ipynb
|
FZJ-IEK3-VSA/tsam
|
mit
|
AWS (S3, Redshift, Kinesis) + Databricks Spark = Real-time Smart Meter Analytics
Create S3 Bucket
|
import boto3
import botocore

s3 = boto3.client('s3')
s3.list_buckets()
def create_s3_bucket(bucketname):
"""Quick method to create bucket with exception handling"""
s3 = boto3.resource('s3')
exists = True
bucket = s3.Bucket(bucketname)
try:
s3.meta.client.head_bucket(Bucket=bucketname)
except botocore.exceptions.ClientError as e:
error_code = int(e.response['Error']['Code'])
if error_code == 404:
exists = False
if exists:
print 'Bucket {} already exists'.format(bucketname)
else:
s3.create_bucket(Bucket=bucketname, GrantFullControl='dkelly628')
create_s3_bucket('pecanstreetresearch-2016')
|
SmartMeterResearch_Phase2.ipynb
|
dougkelly/SmartMeterResearch
|
apache-2.0
|
Copy Postgres to S3 via Postgres dump to CSV and s3cmd upload
|
# Note: Used s3cmd tools because awscli tools not working in conda env
# 14m rows or ~ 1.2 GB local unzipped; 10min write to CSV and another 10min to upload to S3
# !s3cmd put ~/Users/Doug/PecanStreet/electricity-03-06-2016.csv s3://pecanstreetresearch-2016/electricity-03-06-2016.csv
# 200k rows ~ 15 MB local unzipped; 30 sec write to CSV and 15 sec upload to S3
# !s3cmd put ~/Users/Doug/PecanStreet/weather-03-06-2016.csv s3://pecanstreetresearch-2016/weather-03-06-2016.csv
|
SmartMeterResearch_Phase2.ipynb
|
dougkelly/SmartMeterResearch
|
apache-2.0
|
Amazon Redshift: NoSQL Columnar Data Warehouse
Quick data cleanup before ETL
|
# Quick geohashing before uploading to Redshift
weather_df = pd.read_csv('/Users/Doug/PecanStreet/weather_03-06-2016.csv')
weather_df.groupby(['latitude', 'longitude', 'city']).count()
# Label each weather location with its city based on latitude
weather_df['city'] = ''
weather_df.loc[weather_df.latitude == 30.292432, 'city'] = 'Austin'
weather_df.loc[weather_df.latitude == 40.027278, 'city'] = 'Boulder'
weather_df.city.unique()
weather_df.to_csv('/Users/Doug/PecanStreet/weather-03-07-2016.csv', index=False)
metadata_df = pd.read_csv('/Users/Doug/PecanStreet/dataport-metadata.csv')
metadata_df = metadata_df[['dataid','city', 'state']]
metadata_df.to_csv('/Users/Doug/PecanStreet/metadata.csv', index=False)
# !s3cmd put metadata.csv s3://pecanstreetresearch-2016/metadata/metadata.csv
redshift = boto3.client('redshift')
# redshift.describe_clusters()
# psql -h pecanstreet.czxmxphrw2wv.us-east-1.redshift.amazonaws.com -U dkelly628 -d electricity -p 5439
|
SmartMeterResearch_Phase2.ipynb
|
dougkelly/SmartMeterResearch
|
apache-2.0
|
create table electricity (
dataid integer not null,
localhour timestamp not null distkey sortkey,
use decimal(30,26),
air1 decimal(30,26),
furnace1 decimal(30,26),
car1 decimal(30,26)
);
create table weather (
localhour timestamp not null distkey sortkey,
latitude decimal(30,26),
longitude decimal(30,26),
temperature decimal(30,26),
city varchar(20)
);
create table metadata (
dataid integer distkey sortkey,
city varchar(20),
state varchar(20)
);
|
# Complete
COPY electricity
FROM 's3://pecanstreetresearch-2016/electricity/electricity-03-06-2016.csv'
CREDENTIALS 'aws_access_key_id=AWS_ACCESS_KEY_ID;aws_secret_access_key=AWS_SECRET_ACCESS_KEY'
CSV
IGNOREHEADER 1
dateformat 'auto';
# Complete
COPY weather
FROM 's3://pecanstreetresearch-2016/weather/weather-03-06-2016.csv'
CREDENTIALS 'aws_access_key_id=AWS_ACCESS_KEY_ID;aws_secret_access_key=AWS_SECRET_ACCESS_KEY'
CSV
IGNOREHEADER 1
dateformat 'auto';
# Complete
COPY metadata
FROM 's3://pecanstreetresearch-2016/metadata/metadata.csv'
CREDENTIALS 'aws_access_key_id=AWS_ACCESS_KEY_ID;aws_secret_access_key=AWS_SECRET_ACCESS_KEY'
CSV
IGNOREHEADER 1;
# Query for checking error log; invaluable
select query, substring(filename,22,25) as filename,line_number as line,
substring(colname,0,12) as column, type, position as pos, substring(raw_line,0,30) as line_text,
substring(raw_field_value,0,15) as field_text,
substring(err_reason,0,45) as reason
from stl_load_errors
order by query desc
limit 10;
# All table definitions are stored in pg_table_def table; different from Postgres
SELECT DISTINCT tablename
FROM pg_table_def
WHERE schemaname = 'public'
ORDER BY tablename;
# Returns household, time, city, usage by hour, and temperature for all residents in Austin, TX
SELECT e.dataid, e.localhour, m.city, SUM(e.use), w.temperature
FROM electricity AS e
JOIN weather AS w ON e.localhour = w.localhour
JOIN metadata AS m ON e.dataid = m.dataid
WHERE m.city = 'Austin'
GROUP BY e.dataid, e.localhour, m.city, w.temperature;
# Returns number of participants by city, state
SELECT m.city, m.state, COUNT(e.dataid) AS participants
FROM electricity AS e
JOIN metadata AS m ON e.dataid = m.dataid
GROUP BY m.city, m.state;
# Setup connection to Pecan Street Dataport
import psycopg2
try:
    conn = psycopg2.connect("dbname='electricity' user='dkelly628' host='pecanstreet.czxmxphrw2wv.us-east-1.redshift.amazonaws.com' port='5439' password='password'")
except psycopg2.Error:
    print "Error: Check there aren't any open connections in notebook or pgAdmin"
electricity_df = pd.read_sql("SELECT localhour, SUM(use) AS usage, SUM(air1) AS cooling, SUM(furnace1) AS heating, \
SUM(car1) AS electric_vehicle \
FROM electricity \
WHERE dataid = 7982 AND use > 0 \
AND localhour BETWEEN '2013-10-16 00:00:00'::timestamp AND \
'2016-02-26 08:00:00'::timestamp \
GROUP BY dataid, localhour \
ORDER BY localhour", conn)
electricity_df['localhour'] = electricity_df.localhour.apply(pd.to_datetime)
electricity_df.set_index('localhour', inplace=True)
electricity_df.fillna(value=0.0, inplace=True)
electricity_df[['usage','cooling']].plot(figsize=(18,9), title="Pecan Street Household 7982 Hourly Energy Consumption")
sns.despine();
|
SmartMeterResearch_Phase2.ipynb
|
dougkelly/SmartMeterResearch
|
apache-2.0
|
Databricks Spark Analysis (see Databricks): Batch analytics on S3, Streaming using Amazon Kinesis Stream
Create Amazon Kinesis Stream for writing streaming data to S3
|
kinesis = boto3.client('kinesis')
kinesis.create_stream(StreamName='PecanStreet', ShardCount=2)
kinesis.list_streams()
firehose = boto3.client('firehose')
# firehose.create_delivery_stream(DeliveryStreamName='pecanstreetfirehose', S3DestinationConfiguration={'RoleARN': '', 'BucketARN': 'pecanstreetresearch-2016'})
firehose.list_delivery_streams()
def kinesis_write(stream, data, partition_key):
    """Method that writes a record to a kinesis stream"""
    kinesis = boto3.client('kinesis')
    kinesis.put_record(StreamName=stream, Data=data, PartitionKey=partition_key)
def kinesis_read(stream):
    """Method to read the available records from the first shard of a kinesis stream"""
    kinesis = boto3.client('kinesis')
    shard_id = kinesis.describe_stream(StreamName=stream)['StreamDescription']['Shards'][0]['ShardId']
    iterator = kinesis.get_shard_iterator(StreamName=stream, ShardId=shard_id,
                                          ShardIteratorType='TRIM_HORIZON')['ShardIterator']
    return kinesis.get_records(ShardIterator=iterator)['Records']
|
SmartMeterResearch_Phase2.ipynb
|
dougkelly/SmartMeterResearch
|
apache-2.0
|
Are you running this notebook on an Amazon EC2 t2.micro instance? (If you are using your own machine, please skip this section)
It has been reported that t2.micro instances do not provide sufficient power to complete the conversion in an acceptable amount of time. In the interest of time, please refrain from running the get_numpy_data function. Instead, download the binary file containing the four NumPy arrays you'll need for the assignment. To load the arrays, run the following commands:
arrays = np.load('module-4-assignment-numpy-arrays.npz')
feature_matrix_train, sentiment_train = arrays['feature_matrix_train'], arrays['sentiment_train']
feature_matrix_valid, sentiment_valid = arrays['feature_matrix_valid'], arrays['sentiment_valid']
|
arrays = np.load('module-4-assignment-numpy-arrays.npz')
feature_matrix_train, sentiment_train = arrays['feature_matrix_train'], arrays['sentiment_train']
feature_matrix_valid, sentiment_valid = arrays['feature_matrix_valid'], arrays['sentiment_valid']
|
course-3-classification/module-4-linear-classifier-regularization-assignment-blank.ipynb
|
kgrodzicki/machine-learning-specialization
|
mit
|
Building on logistic regression with no L2 penalty assignment
Let us now build on Module 3 assignment. Recall from lecture that the link function for logistic regression can be defined as:
$$
P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))},
$$
where the feature vector $h(\mathbf{x}_i)$ is given by the word counts of important_words in the review $\mathbf{x}_i$.
We will use the same code as in this past assignment to make probability predictions since this part is not affected by the L2 penalty. (Only the way in which the coefficients are learned is affected by the addition of a regularization term.)
|
def predict_probability(feature_matrix, coefficients):
    '''
    Produces probabilistic estimate for P(y_i = +1 | x_i, w).
    Estimate ranges between 0 and 1.
    '''
    # Take dot product of feature_matrix and coefficients
    ## YOUR CODE HERE
    score = np.dot(feature_matrix, coefficients)
    # Compute P(y_i = +1 | x_i, w) using the link function
    ## YOUR CODE HERE
    predictions = 1 / (1 + np.exp(-score))
    return predictions
|
course-3-classification/module-4-linear-classifier-regularization-assignment-blank.ipynb
|
kgrodzicki/machine-learning-specialization
|
mit
|
Adding L2 penalty
Let us now work on extending logistic regression with L2 regularization. As discussed in the lectures, the L2 regularization is particularly useful in preventing overfitting. In this assignment, we will explore L2 regularization in detail.
Recall from lecture and the previous assignment that for logistic regression without an L2 penalty, the derivative of the log likelihood function is:
$$
\frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right)
$$
Adding L2 penalty to the derivative
It takes only a small modification to add a L2 penalty. All terms indicated in red refer to terms that were added due to an L2 penalty.
Recall from the lecture that the link function is still the sigmoid:
$$
P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))},
$$
We add the L2 penalty term to the per-coefficient derivative of log likelihood:
$$
\frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right) \color{red}{-2\lambda w_j }
$$
The per-coefficient derivative for logistic regression with an L2 penalty is as follows:
$$
\frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right) \color{red}{-2\lambda w_j }
$$
and for the intercept term, we have
$$
\frac{\partial\ell}{\partial w_0} = \sum_{i=1}^N h_0(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right)
$$
Note: As we did in the Regression course, we do not apply the L2 penalty on the intercept. A large intercept does not necessarily indicate overfitting because the intercept is not associated with any particular feature.
Write a function that computes the derivative of log likelihood with respect to a single coefficient $w_j$. Unlike its counterpart in the last assignment, the function accepts five arguments:
* errors vector containing $(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w}))$ for all $i$
* feature vector containing $h_j(\mathbf{x}_i)$ for all $i$
* coefficient containing the current value of coefficient $w_j$.
* l2_penalty representing the L2 penalty constant $\lambda$
* feature_is_constant telling whether the $j$-th feature is constant or not.
|
def feature_derivative_with_L2(errors, feature, coefficient, l2_penalty, feature_is_constant):
# Compute the dot product of errors and feature
## YOUR CODE HERE
derivative = np.dot(errors, feature)
# add L2 penalty term for any feature that isn't the intercept.
if not feature_is_constant:
## YOUR CODE HERE
derivative += -2. * l2_penalty * coefficient
return derivative
|
course-3-classification/module-4-linear-classifier-regularization-assignment-blank.ipynb
|
kgrodzicki/machine-learning-specialization
|
mit
|
Quiz question: Does the term with L2 regularization increase or decrease $\ell\ell(\mathbf{w})$?
The logistic regression function looks almost like the one in the last assignment, with a minor modification to account for the L2 penalty. Fill in the code below to complete this modification.
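The code below calls `compute_log_likelihood_with_L2`, which is defined earlier in the full assignment but not shown in this excerpt. A minimal sketch consistent with the formulas above (the log likelihood of the observed labels minus the L2 penalty, with the intercept excluded from the penalty) would be:

```python
import numpy as np

def compute_log_likelihood_with_L2(feature_matrix, sentiment, coefficients, l2_penalty):
    indicator = (sentiment == +1).astype(int)
    scores = np.dot(feature_matrix, coefficients)
    # log likelihood of the observed labels minus the L2 penalty
    # (coefficients[0] is the intercept and is not penalized)
    lp = np.sum((indicator - 1) * scores - np.log(1. + np.exp(-scores))) \
         - l2_penalty * np.sum(coefficients[1:]**2)
    return lp
```

This mirrors the standard sigmoid log likelihood; the exact helper in the original notebook may differ in detail.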
|
def logistic_regression_with_L2(feature_matrix, sentiment, initial_coefficients, step_size, l2_penalty, max_iter):
coefficients = np.array(initial_coefficients) # make sure it's a numpy array
for itr in xrange(max_iter):
# Predict P(y_i = +1|x_i,w) using your predict_probability() function
## YOUR CODE HERE
predictions = predict_probability(feature_matrix, coefficients)
# Compute indicator value for (y_i = +1)
indicator = (sentiment==+1)
# Compute the errors as indicator - predictions
errors = indicator - predictions
for j in xrange(len(coefficients)): # loop over each coefficient
is_intercept = (j == 0)
# Recall that feature_matrix[:,j] is the feature column associated with coefficients[j].
# Compute the derivative for coefficients[j]. Save it in a variable called derivative
## YOUR CODE HERE
#
derivative = feature_derivative_with_L2(errors, feature_matrix[:,j], coefficients[j], l2_penalty, is_intercept)
# add the step size times the derivative to the current coefficient
## YOUR CODE HERE
coefficients[j] += step_size * derivative
# Checking whether log likelihood is increasing
if itr <= 15 or (itr <= 100 and itr % 10 == 0) or (itr <= 1000 and itr % 100 == 0) \
or (itr <= 10000 and itr % 1000 == 0) or itr % 10000 == 0:
lp = compute_log_likelihood_with_L2(feature_matrix, sentiment, coefficients, l2_penalty)
print 'iteration %*d: log likelihood of observed labels = %.8f' % \
(int(np.ceil(np.log10(max_iter))), itr, lp)
return coefficients
|
course-3-classification/module-4-linear-classifier-regularization-assignment-blank.ipynb
|
kgrodzicki/machine-learning-specialization
|
mit
|
Using the coefficients trained with L2 penalty 0, find the 5 most positive words (with largest positive coefficients). Save them to positive_words. Similarly, find the 5 most negative words (with largest negative coefficients) and save them to negative_words.
Quiz Question. Which of the following is not listed in either positive_words or negative_words?
|
coefficients = list(coefficients_0_penalty[1:]) # exclude intercept
word_coefficient_tuples = [(word, coefficient) for word, coefficient in zip(important_words, coefficients)]
word_coefficient_tuples = sorted(word_coefficient_tuples, key=lambda x:x[1], reverse=True)
positive_words = [word for word, coefficient in word_coefficient_tuples[:5]]
negative_words = [word for word, coefficient in word_coefficient_tuples[-5:]]
|
course-3-classification/module-4-linear-classifier-regularization-assignment-blank.ipynb
|
kgrodzicki/machine-learning-specialization
|
mit
|
Let us observe the effect of increasing L2 penalty on the 10 words just selected. We provide you with a utility function to plot the coefficient path.
|
positive_words = table.sort('coefficients [L2=0]', ascending = False)['word'][0:5]
positive_words
negative_words = table.sort('coefficients [L2=0]')['word'][0:5]
negative_words
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = 10, 6
def make_coefficient_plot(table, positive_words, negative_words, l2_penalty_list):
cmap_positive = plt.get_cmap('Reds')
cmap_negative = plt.get_cmap('Blues')
xx = l2_penalty_list
plt.plot(xx, [0.]*len(xx), '--', lw=1, color='k')
table_positive_words = table.filter_by(column_name='word', values=positive_words)
table_negative_words = table.filter_by(column_name='word', values=negative_words)
del table_positive_words['word']
del table_negative_words['word']
for i in xrange(len(positive_words)):
color = cmap_positive(0.8*((i+1)/(len(positive_words)*1.2)+0.15))
plt.plot(xx, table_positive_words[i:i+1].to_numpy().flatten(),
'-', label=positive_words[i], linewidth=4.0, color=color)
for i in xrange(len(negative_words)):
color = cmap_negative(0.8*((i+1)/(len(negative_words)*1.2)+0.15))
plt.plot(xx, table_negative_words[i:i+1].to_numpy().flatten(),
'-', label=negative_words[i], linewidth=4.0, color=color)
plt.legend(loc='best', ncol=3, prop={'size':16}, columnspacing=0.5)
plt.axis([1, 1e5, -1, 2])
plt.title('Coefficient path')
plt.xlabel('L2 penalty ($\lambda$)')
plt.ylabel('Coefficient value')
plt.xscale('log')
plt.rcParams.update({'font.size': 18})
plt.tight_layout()
|
course-3-classification/module-4-linear-classifier-regularization-assignment-blank.ipynb
|
kgrodzicki/machine-learning-specialization
|
mit
|
Stock Market Similarity Searches: Daily Returns
We have provided a year of daily returns for 379 S&P 500 stocks. We have explicitly excluded stocks with incomplete or missing data. We have pre-loaded 350 stocks in the database, and have excluded 29 stocks for later use in similarity searches.
Data source: <a href='www.stockwiz.com'>www.stockwiz.com</a>
|
# load data
with open('data/returns_include.json') as f:
stock_data_include = json.load(f)
with open('data/returns_exclude.json') as f:
stock_data_exclude = json.load(f)
# keep track of which stocks are included/excluded from the database
stocks_include = list(stock_data_include.keys())
stocks_exclude = list(stock_data_exclude.keys())
# check the number of market days in the year
num_days = len(stock_data_include[stocks_include[0]])
num_days
|
docs/stock_example_returns.ipynb
|
Mynti207/cs207project
|
mit
|
Database Initialization
Let's start by initializing all the database components.
|
# 1. load the database server
# when running from the terminal
# python go_server_persistent.py --ts_length 244 --db_name 'stock_prices'
# here we load the server as a subprocess for demonstration purposes
server = subprocess.Popen(['python', '../go_server_persistent.py',
'--ts_length', str(num_days), '--data_dir', '../db_files', '--db_name', 'stock_returns'])
time.sleep(5) # make sure it loads completely
# 2. load the database webserver
# when running from the terminal
# python go_webserver.py
# here we load the server as a subprocess for demonstration purposes
webserver = subprocess.Popen(['python', '../go_webserver.py'])
time.sleep(5) # make sure it loads completely
# 3. import the web interface and initialize it
from webserver import *
web_interface = WebInterface()
|
docs/stock_example_returns.ipynb
|
Mynti207/cs207project
|
mit
|
Let's pick one of our excluded stocks and carry out a vantage point similarity search.
|
# pick the stock
stock = np.random.choice(stocks_exclude)
print('Stock:', stock)
# run the vantage point similarity search
result = web_interface.vp_similarity_search(TimeSeries(range(num_days), stock_data_exclude[stock]), 1)
stock_match = list(result)[0]
stock_ts = web_interface.select(fields=['ts'], md={'pk': stock_match})[stock_match]['ts']
print('Most similar stock:', stock_match)
# visualize similarity
plt.plot(stock_data_exclude[stock], label='Query:' + stock)
plt.plot(stock_ts.values(), label='Result:' + stock_match)
plt.xticks([])
plt.legend(loc='best')
plt.title('Daily Stock Return Similarity')
plt.show()
|
docs/stock_example_returns.ipynb
|
Mynti207/cs207project
|
mit
|
iSAX Tree Search
Let's pick another one of our excluded stocks and carry out an iSAX tree similarity search. Note that this is an approximate search technique, so it will not always be able to find a similar stock.
|
# pick the stock
stock = np.random.choice(stocks_exclude)
print('Stock:', stock)
# run the isax tree similarity search
result = web_interface.isax_similarity_search(TimeSeries(range(num_days), stock_data_exclude[stock]))
# could not find a match
if result == 'ERROR: NO_MATCH':
print('Could not find a similar stock.')
# found a match
else:
# closest time series
stock_match = list(result)[0]
stock_ts = web_interface.select(fields=['ts'], md={'pk': stock_match})[stock_match]['ts']
print('Most similar stock:', stock_match)
# visualize similarity
plt.plot(stock_data_exclude[stock], label='Query:' + stock)
plt.plot(stock_ts.values(), label='Result:' + stock_match)
plt.xticks([])
plt.legend(loc='best')
plt.title('Daily Stock Return Similarity')
plt.show()
|
docs/stock_example_returns.ipynb
|
Mynti207/cs207project
|
mit
|
Comparing Similarity Searches
Now, let's pick one more random stock, carry out both types of similarity searches, and compare the results.
|
# pick the stock
stock = np.random.choice(stocks_exclude)
print('Stock:', stock)
# run the vantage point similarity search
result = web_interface.vp_similarity_search(TimeSeries(range(num_days), stock_data_exclude[stock]), 1)
match_vp = list(result)[0]
ts_vp = web_interface.select(fields=['ts'], md={'pk': match_vp})[match_vp]['ts']
print('VP search result:', match_vp)
# run the isax similarity search
result = web_interface.isax_similarity_search(TimeSeries(range(num_days), stock_data_exclude[stock]))
# could not find an isax match
if result == 'ERROR: NO_MATCH':
print('iSAX search result: Could not find a similar stock.')
# found a match
else:
# closest time series
match_isax = list(result)[0]
ts_isax = web_interface.select(fields=['ts'], md={'pk': match_isax})[match_isax]['ts']
print('iSAX search result:', match_isax)
# visualize similarity
plt.plot(stock_data_exclude[stock], label='Query:' + stock)
plt.plot(ts_vp.values(), label='Result:' + match_vp)
plt.plot(ts_isax.values(), label='Result:' + match_isax)
plt.xticks([])
plt.legend(loc='best')
plt.title('Daily Stock Return Similarity')
plt.show()
|
docs/stock_example_returns.ipynb
|
Mynti207/cs207project
|
mit
|
Now, we make the predictions based on population. We first import multiple models from the scikit-learn library.
|
from sklearn import ensemble,tree,neighbors
rfr = ensemble.RandomForestRegressor()
dtr = tree.DecisionTreeRegressor()
knr = neighbors.KNeighborsRegressor()
#nnn = neural_network.BernoulliRBM()
|
development_folder/ipc_microsim_sklearn.ipynb
|
UN-DESA-Modelling/Electricity_Consumption_Surveys
|
gpl-3.0
|
Evaluate the accuracy for each model based on cross-validation
These predictions are for the Medium Variance Projections. The arrays below show the accuracy of each model using cross-validation.
|
from sklearn.cross_validation import cross_val_score
print 'Random Forests',cross_val_score(rfr,estimates,medVariance_projections)
print 'Decision Trees',cross_val_score(dtr,estimates,medVariance_projections)
print 'Nearest Neighbors',cross_val_score(knr,estimates,medVariance_projections)
print 'Random Forests',cross_val_score(rfr,estimates,highVariance_projections)
print 'Decision Trees',cross_val_score(dtr,estimates,highVariance_projections)
print 'Nearest Neighbors',cross_val_score(knr,estimates,highVariance_projections)
|
development_folder/ipc_microsim_sklearn.ipynb
|
UN-DESA-Modelling/Electricity_Consumption_Surveys
|
gpl-3.0
|
Using groupby(), plot the number of films that have been released each decade in the history of cinema.
|
titles['decade'] = titles.year // 10 * 10
titles.groupby('decade').size().plot(kind='bar')
|
Tutorial/Exercises-3.ipynb
|
RobbieNesmith/PandasTutorial
|
mit
|
Use groupby() to plot the number of "Hamlet" films made each decade.
|
titles[titles.title=='Hamlet'].groupby('decade').size().sort_index().plot(kind='bar')
|
Tutorial/Exercises-3.ipynb
|
RobbieNesmith/PandasTutorial
|
mit
|
How many leading (n=1) roles were available to actors, and how many to actresses, in each year of the 1950s?
|
cast[(cast.year//10==195)&(cast.n==1)].groupby(['year','type']).size()
|
Tutorial/Exercises-3.ipynb
|
RobbieNesmith/PandasTutorial
|
mit
|
In the 1950s decade taken as a whole, how many total roles were available to actors, and how many to actresses, for each "n" number 1 through 5?
|
cast[(cast.year//10==195)&(cast.n<=5)].groupby(['n','type']).size()
|
Tutorial/Exercises-3.ipynb
|
RobbieNesmith/PandasTutorial
|
mit
|
Use groupby() to determine how many roles are listed for each of the Pink Panther movies.
|
cast[cast.title.str.contains('Pink Panther')].groupby('year').size()
|
Tutorial/Exercises-3.ipynb
|
RobbieNesmith/PandasTutorial
|
mit
|
List, in order by year, each of the films in which Frank Oz has played more than 1 role.
|
frank_oz = cast[cast.name=='Frank Oz'].groupby(['year','title']).size()
frank_oz[frank_oz > 1]
|
Tutorial/Exercises-3.ipynb
|
RobbieNesmith/PandasTutorial
|
mit
|
List each of the characters that Frank Oz has portrayed at least twice.
|
roles = cast[cast.name=='Frank Oz'].groupby('character').size()
roles[roles >= 2]
|
Tutorial/Exercises-3.ipynb
|
RobbieNesmith/PandasTutorial
|
mit
|
Link to What's in Scipy.constants: https://docs.scipy.org/doc/scipy/reference/constants.html
Library Functions in Maths
(and numpy)
|
import numpy as np

x = 4**0.5
print(x)
x = np.sqrt(4)
print(x)
|
IntroductiontoPython/UserDefinedFunction.ipynb
|
karenlmasters/ComputationalPhysicsUnit
|
apache-2.0
|
User Defined Functions
Here we'll practice writing our own functions.
Functions start with
```python
def name(input):
```
and must end with a statement to return the value calculated
```python
return x
```
To run a function your code would look like this:
```python
import numpy as np

def name(input):
    # FUNCTION CODE HERE
    return D

y = int(input("Enter y:"))
D = name(y)
print(D)
```
First, write a function to calculate n factorial. Reminder:
$n! = \prod_{k=1}^{n} k$
~
~
~
~
~
~
~
~
~
~
~
~
~
~
|
def factorial(n):
    f = 1.0
    for k in range(1,n+1):
        f *= k
    return f
print("This programme calculates n!")
n = int(input("Enter n:"))
a = factorial(n)
print("n! = ", a)
|
IntroductiontoPython/UserDefinedFunction.ipynb
|
karenlmasters/ComputationalPhysicsUnit
|
apache-2.0
|
Finding distance to the origin in cylindrical co-ordinates:
|
from math import sqrt, cos, sin
def distance(r,theta,z):
x = r*cos(theta)
y = r*sin(theta)
d = sqrt(x**2+y**2+z**2)
return d
D = distance(2.0,0.1,1.5)
print(D)
|
IntroductiontoPython/UserDefinedFunction.ipynb
|
karenlmasters/ComputationalPhysicsUnit
|
apache-2.0
|
Another Example: Prime Factors and Prime Numbers
Reminder: prime factors are the numbers which divide another number exactly.
Factors of the integer n can be found by dividing by all integers from 2 up to n and checking to see which remainders are zero.
Remainder in python calculated using
python
n % k
|
def factors(n):
    factorlist = []
    k = 2
    while k <= n:
        while n % k == 0:
            factorlist.append(k)
            n //= k
        k += 1
    return factorlist
factorlist = factors(12)
print(factorlist)
print(factors(17556))
print(factors(23))
|
IntroductiontoPython/UserDefinedFunction.ipynb
|
karenlmasters/ComputationalPhysicsUnit
|
apache-2.0
|
Functions like this are useful when you want to make the same calculation many times. The loop below finds all the prime numbers (divisible only by 1 and themselves) from 2 to 100.
|
for n in range(2,100):
if len(factors(n))==1:
print(n)
|
IntroductiontoPython/UserDefinedFunction.ipynb
|
karenlmasters/ComputationalPhysicsUnit
|
apache-2.0
|
Generalization of Taylor FD operators
In the last lesson, we learned how to derive a high-order FD approximation for the second derivative using Taylor series expansion. In the next step we derive a general equation to compute FD operators, following the detailed derivation in "Derivative Approximation by Finite Differences" by David Eberly.
Estimation of arbitrary FD operators by Taylor series expansion
We can approximate the $d-th$ order derivative of a function $f(x)$ with an order of error $p>0$ by a general finite-difference approximation:
\begin{equation}
\frac{h^d}{d!}f^{(d)}(x) = \sum_{i=i_{min}}^{i_{max}} C_i f(x+ih) + \cal{O}(h^{d+p})
\end{equation}
where h is an equidistant grid point distance. By choosing the extreme indices $i_{min}$ and $i_{max}$, you can define forward, backward or central operators. The accuracy of the FD operator is defined by its length and therefore also by the number of
weighting coefficients $C_i$ incorporated in the approximation. The $\cal{O}(h^{d+p})$ terms are neglected.
Formally, we can approximate $f(x+ih)$ by a Taylor series expansion:
\begin{equation}
f(x+ih) = \sum_{n=0}^{\infty} i^n \frac{h^n}{n!}f^{(n)}(x)\nonumber
\end{equation}
Inserting into eq.(1) yields
\begin{align}
\frac{h^d}{d!}f^{(d)}(x) &= \sum_{i=i_{min}}^{i_{max}} C_i \sum_{n=0}^{\infty} i^n \frac{h^n}{n!}f^{(n)}(x) + \cal{O}(h^{d+p})\nonumber
\end{align}
We can move the second sum on the RHS to the front
\begin{align}
\frac{h^d}{d!}f^{(d)}(x) &= \sum_{n=0}^{\infty} \left(\sum_{i=i_{min}}^{i_{max}} i^n C_i\right) \frac{h^n}{n!}f^{(n)}(x) + \cal{O}(h^{d+p})\nonumber
\end{align}
In the FD approximation we only expand the Taylor series up to the term $n=(d+p)-1$
\begin{align}
\frac{h^d}{d!}f^{(d)}(x) &= \sum_{n=0}^{(d+p)-1} \left(\sum_{i=i_{min}}^{i_{max}} i^n C_i\right) \frac{h^n}{n!}f^{(n)}(x) + \cal{O}(h^{d+p})\nonumber
\end{align}
and neglect the $\cal{O}(h^{d+p})$ terms
\begin{align}
\frac{h^d}{d!}f^{(d)}(x) &= \sum_{n=0}^{(d+p)-1} \left(\sum_{i=i_{min}}^{i_{max}} i^n C_i\right) \frac{h^n}{n!}f^{(n)}(x)
\end{align}
Multiplying by $\frac{d!}{h^d}$ leads to the desired approximation for the $d-th$ derivative of the function f(x):
\begin{align}
f^{(d)}(x) &= \frac{d!}{h^d}\sum_{n=0}^{(d+p)-1} \left(\sum_{i=i_{min}}^{i_{max}} i^n C_i\right) \frac{h^n}{n!}f^{(n)}(x)
\end{align}
Treating the approximation in eq.(2) as an equality, the only term in the sum on the right-hand side of the approximation that contains $\frac{h^d}{d!}f^{d}(x)$ occurs when $n = d$, so the coefficient of that term must be 1. The other terms must vanish for there to be equality, so the coefficients of those terms must be 0; therefore, it is necessary that
\begin{equation}
\sum_{i=i_{min}}^{i_{max}} i^n C_i=
\begin{cases}
0, ~~ 0 \le n \le (d+p)-1 ~ \text{and} ~ n \ne d\
1, ~~ n = d
\end{cases}\nonumber\
\end{equation}
This is a set of $d + p$ linear equations in $i_{max} − i_{min} + 1$ unknowns. If we constrain the number of unknowns to be $d+p$, the linear system has a unique solution.
A forward difference approximation occurs if we set $i_{min} = 0$
and $i_{max} = d + p − 1$.
A backward difference approximation can be implemented by setting $i_{max} = 0$ and $i_{min} = −(d + p − 1)$.
A centered difference approximation occurs if we set $i_{max} = −i_{min} = (d + p − 1)/2$, where it appears that $d + p$ is necessarily an odd number. As it turns out, $p$ can be chosen to be even regardless of the parity of $d$, with $i_{max} = \lfloor (d + p − 1)/2 \rfloor$.
We could either implement the resulting linear system as matrix equation as in the previous lesson, or simply use a SymPy function which gives us the FD operators right away.
|
# import SymPy libraries
from sympy import symbols, differentiate_finite, Function
# Define symbols
x, h = symbols('x h')
f = Function('f')
|
04_FD_stability_dispersion/lecture_notebooks/4_general_fd_taylor_operators.ipynb
|
daniel-koehn/Theory-of-seismic-waves-II
|
gpl-3.0
|
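Before handing the job to SymPy, the linear system derived above can also be solved directly. The following sketch (the helper name is ours, not part of the lesson) builds the system $\sum_i i^n C_i = \delta_{nd}$ for the classic central 3-point stencil ($d=2$, $i \in \{-1, 0, 1\}$) and solves it with exact rational arithmetic:

```python
from fractions import Fraction

def fd_coefficients(d, i_vals):
    """Solve sum_i i^n C_i = delta_{n,d} for the FD weights C_i."""
    n_eq = len(i_vals)
    # A[n][k] = i_k^n,  b[n] = 1 if n == d else 0
    A = [[Fraction(i) ** n for i in i_vals] for n in range(n_eq)]
    b = [Fraction(1) if n == d else Fraction(0) for n in range(n_eq)]
    # Gaussian elimination with partial pivoting
    for col in range(n_eq):
        piv = max(range(col, n_eq), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n_eq):
            f = A[r][col] / A[col][col]
            for c in range(col, n_eq):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # back substitution
    C = [Fraction(0)] * n_eq
    for r in reversed(range(n_eq)):
        s = sum(A[r][c] * C[c] for c in range(r + 1, n_eq))
        C[r] = (b[r] - s) / A[r][r]
    return C

C = fd_coefficients(d=2, i_vals=[-1, 0, 1])
print(C)  # [Fraction(1, 2), Fraction(-1, 1), Fraction(1, 2)]
```

Multiplying the resulting $C_i = (\tfrac{1}{2}, -1, \tfrac{1}{2})$ by $d!/h^d = 2/h^2$ recovers the familiar second-derivative stencil $(1, -2, 1)/h^2$.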
☝️ While this code is running, preview the code ahead.
|
# Read the data into a list of strings.
import zipfile
import tensorflow as tf

def read_data(filename):
    """Extract the first file enclosed in a zip file as a list of words."""
    with zipfile.ZipFile(filename) as f:
        data = tf.compat.as_str(f.read(f.namelist()[0])).split()
    return data
words = read_data(filename)
print('Dataset size: {:,} words'.format(len(words)))
# Let's have a peek at the beginning
words[:10]
# ... and a peek at the end
words[-10:]
|
source.ml/jupyterhub.ml/notebooks/zz_old/TensorFlow/Word2Vec/3_word2vec_activity.ipynb
|
shareactorIO/pipeline
|
apache-2.0
|
Notice: None of the words are capitalized and there is no punctuation
Step 2: Build the dictionary
|
vocabulary_size = 50000
def build_dataset(words):
""" Replace rare words with UNK token which stands for "unknown".
It is called a dustbin category, aka sweep the small count items into a single group.
"""
count = [['UNK', -1]]
count.extend(collections.Counter(words).most_common(vocabulary_size - 1))
dictionary = dict()
for word, _ in count:
dictionary[word] = len(dictionary)
data = list()
unk_count = 0
for word in words:
if word in dictionary:
index = dictionary[word]
else:
index = 0 # dictionary['UNK']
unk_count += 1
data.append(index)
count[0][1] = unk_count
reverse_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
return data, count, dictionary, reverse_dictionary
data, count, dictionary, reverse_dictionary = build_dataset(words)
del words # Hint to reduce memory.
data # An index of each word to its rank, so we don't have to reference the strings directly
# dictionary # word: rank
# reverse_dictionary # rank: word
print('Most common words:')
pprint(count[:5])
print('Least common words:')
pprint(count[-5:])
print('Sample data:')
pprint(list(zip(data[:10], [reverse_dictionary[i] for i in data[:10]])))
|
source.ml/jupyterhub.ml/notebooks/zz_old/TensorFlow/Word2Vec/3_word2vec_activity.ipynb
|
shareactorIO/pipeline
|
apache-2.0
|
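To see the UNK scheme in miniature, here is a self-contained toy run of the same logic with a hypothetical vocabulary cap of 3 (the words and sizes below are ours, chosen for illustration):

```python
import collections

toy_words = ['the', 'cat', 'sat', 'on', 'the', 'mat', 'the']
toy_vocab_size = 3  # UNK plus the 2 most common words

count = [['UNK', -1]]
count.extend(collections.Counter(toy_words).most_common(toy_vocab_size - 1))
dictionary = {word: i for i, (word, _) in enumerate(count)}
# every word outside the vocabulary collapses to index 0 (UNK)
data = [dictionary.get(w, 0) for w in toy_words]
print(dictionary)
print(data)
```

The three rare words ('sat', 'on', 'mat') all map to index 0, exactly the "dustbin category" behavior of build_dataset above.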
Step 3: Function to generate a training batch for the skip-gram model.
|
data_index = 0
def generate_batch(batch_size, num_skips, skip_window):
global data_index
assert batch_size % num_skips == 0
assert num_skips <= 2 * skip_window
batch = np.ndarray(shape=(batch_size), dtype=np.int32)
labels = np.ndarray(shape=(batch_size, 1), dtype=np.int32)
span = 2 * skip_window + 1 # [ skip_window target skip_window ]
buffer = collections.deque(maxlen=span)
for _ in range(span):
buffer.append(data[data_index])
data_index = (data_index + 1) % len(data)
for i in range(batch_size // num_skips):
target = skip_window # target label at the center of the buffer
targets_to_avoid = [ skip_window ]
for j in range(num_skips):
while target in targets_to_avoid:
target = random.randint(0, span - 1)
targets_to_avoid.append(target)
batch[i * num_skips + j] = buffer[skip_window]
labels[i * num_skips + j, 0] = buffer[target]
buffer.append(data[data_index])
data_index = (data_index + 1) % len(data)
return batch, labels
batch, labels = generate_batch(batch_size=8,
num_skips=2,
skip_window=1)
for i in range(8):
print(batch[i], reverse_dictionary[batch[i]],
'->', labels[i, 0], reverse_dictionary[labels[i, 0]])
|
source.ml/jupyterhub.ml/notebooks/zz_old/TensorFlow/Word2Vec/3_word2vec_activity.ipynb
|
shareactorIO/pipeline
|
apache-2.0
|
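The buffer logic above can be summarized with a plain-Python sketch (the function name is ours) that enumerates every (center, context) pair inside a window; the skip-gram batches sample num_skips of these pairs per center word:

```python
def skipgram_pairs(seq, skip_window):
    """List all (center, context) pairs within +/- skip_window."""
    pairs = []
    for i, center in enumerate(seq):
        lo = max(0, i - skip_window)
        hi = min(len(seq), i + skip_window + 1)
        for j in range(lo, hi):
            if j != i:  # a word is never its own context
                pairs.append((center, seq[j]))
    return pairs

print(skipgram_pairs(['a', 'b', 'c', 'd'], skip_window=1))
# [('a', 'b'), ('b', 'a'), ('b', 'c'), ('c', 'b'), ('c', 'd'), ('d', 'c')]
```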
Step 5: Begin training.
|
num_steps = 1 # 100001
with tf.Session(graph=graph) as session:
# We must initialize all variables before we use them.
init.run()
print("Initialized")
average_loss = 0
for step in range(num_steps):
batch_inputs, batch_labels = generate_batch(
batch_size, num_skips, skip_window)
feed_dict = {train_inputs : batch_inputs, train_labels : batch_labels}
# We perform one update step by evaluating the optimizer op (including it
# in the list of returned values for session.run()
_, loss_val = session.run([optimizer, loss], feed_dict=feed_dict)
average_loss += loss_val
if step % 2000 == 0:
if step > 0:
average_loss /= 2000
# The average loss is an estimate of the loss over the last 2000 batches.
print("Average loss at step ", step, ": ", average_loss)
average_loss = 0
# Note that this is expensive (~20% slowdown if computed every 500 steps)
if step % 10000 == 0:
sim = similarity.eval()
for i in range(valid_size):
valid_word = reverse_dictionary[valid_examples[i]]
top_k = 8 # number of nearest neighbors
nearest = (-sim[i, :]).argsort()[1:top_k+1]
log_str = "Nearest to '%s':" % valid_word
for k in range(top_k):
close_word = reverse_dictionary[nearest[k]]
log_str = "%s %s," % (log_str, close_word)
print(log_str)
final_embeddings = normalized_embeddings.eval()
|
source.ml/jupyterhub.ml/notebooks/zz_old/TensorFlow/Word2Vec/3_word2vec_activity.ipynb
|
shareactorIO/pipeline
|
apache-2.0
|
What do you see?
How would you describe the relationships?
Let's render and save more samples.
|
def plot_with_labels(low_dim_embs, labels, filename='tsne.png'):
assert low_dim_embs.shape[0] >= len(labels), "More labels than embeddings"
plt.figure(figsize=(18, 18)) #in inches
for i, label in enumerate(labels):
x, y = low_dim_embs[i,:]
plt.scatter(x, y)
plt.annotate(label,
xy=(x, y),
xytext=(5, 2),
textcoords='offset points',
ha='right',
va='bottom')
plt.savefig(filename)
tsne = TSNE(perplexity=30, n_components=2, init='pca', n_iter=5000)
plot_only = 500
low_dim_embs = tsne.fit_transform(final_embeddings[:plot_only,:])
labels = [reverse_dictionary[i] for i in range(plot_only)]
plot_with_labels(low_dim_embs, labels)
|
source.ml/jupyterhub.ml/notebooks/zz_old/TensorFlow/Word2Vec/3_word2vec_activity.ipynb
|
shareactorIO/pipeline
|
apache-2.0
|
Take a look at the saved embeddings, are they good?
Are there any surprises?
What could we do to improve the embeddings?
If you are feeling adventurous, check out the complete TensorFlow version of word2vec
Summary
You need a lot of data to get good word2vec embeddings
The training takes some time
TensorFlow simplifies the model fitting of word2vec
You should always visually inspect your results
<br>
<br>
<br>
Bonus Material
Run Google's TensorFlow code
|
# %run word2vec_basic.py
|
source.ml/jupyterhub.ml/notebooks/zz_old/TensorFlow/Word2Vec/3_word2vec_activity.ipynb
|
shareactorIO/pipeline
|
apache-2.0
|
Preset based - using CoachInterface
The basic method to run Coach directly from Python is through a CoachInterface object, which uses the same arguments as the command line invocation but allows for more flexibility and additional control of the training/inference process.
Let's start with some examples.
Training a preset
In this example, we'll create a very simple graph containing a Clipped PPO agent running with the CartPole-v0 Gym environment. CoachInterface has a few useful parameters such as custom_parameter that enables overriding preset settings, and other optional parameters enabling control over the training process. We'll override the preset's schedule parameters, train with a single rollout worker, and save checkpoints every 10 seconds:
|
coach = CoachInterface(preset='CartPole_ClippedPPO',
# The optional custom_parameter enables overriding preset settings
custom_parameter='heatup_steps=EnvironmentSteps(5);improve_steps=TrainingSteps(3)',
# Other optional parameters enable easy access to advanced functionalities
num_workers=1, checkpoint_save_secs=10)
coach.run()
|
tutorials/0. Quick Start Guide.ipynb
|
NervanaSystems/coach
|
apache-2.0
|
Running each training or inference iteration manually
The graph manager (which was instantiated in the preset) can be accessed from the CoachInterface object. The graph manager simplifies the scheduling process by encapsulating the calls to each of the training phases. Sometimes, it can be beneficial to have a more fine grained control over the scheduling process. This can be easily done by calling the individual phase functions directly:
|
from rl_coach.environments.gym_environment import GymEnvironment, GymVectorEnvironment
from rl_coach.base_parameters import VisualizationParameters
from rl_coach.core_types import EnvironmentSteps
tf.reset_default_graph()
coach = CoachInterface(preset='CartPole_ClippedPPO')
# registering an iteration signal before starting to run
coach.graph_manager.log_signal('iteration', -1)
coach.graph_manager.heatup(EnvironmentSteps(100))
# training
for it in range(10):
# logging the iteration signal during training
coach.graph_manager.log_signal('iteration', it)
# using the graph manager to train and act a given number of steps
coach.graph_manager.train_and_act(EnvironmentSteps(100))
# reading signals during training
training_reward = coach.graph_manager.get_signal_value('Training Reward')
|
tutorials/0. Quick Start Guide.ipynb
|
NervanaSystems/coach
|
apache-2.0
|
Sometimes we may want to track the agent's decisions, log or maybe even modify them.
We can access the agent itself through the CoachInterface as follows.
Note that we also need an instance of the environment to do so. In this case we instantiate a GymEnvironment object with the CartPole GymVectorEnvironment:
|
# inference
env_params = GymVectorEnvironment(level='CartPole-v0')
env = GymEnvironment(**env_params.__dict__, visualization_parameters=VisualizationParameters())
response = env.reset_internal_state()
for _ in range(10):
action_info = coach.graph_manager.get_agent().choose_action(response.next_state)
print("State:{}, Action:{}".format(response.next_state,action_info.action))
response = env.step(action_info.action)
print("Reward:{}".format(response.reward))
|
tutorials/0. Quick Start Guide.ipynb
|
NervanaSystems/coach
|
apache-2.0
|
Non-preset - using GraphManager directly
It is also possible to invoke Coach directly in Python code without defining a preset (which is necessary for CoachInterface) by using the GraphManager object directly. Using Coach this way won't give you access to functionalities such as multi-threading, but it might be convenient if you don't want to define a preset file.
Training an agent with a custom Gym environment
Here we show an example of how to use the GraphManager to train an agent on a custom Gym environment.
We first construct a GymEnvironmentParameters object describing the environment parameters. For Gym environments with vector observations, we can use the more specific GymVectorEnvironment object.
The path to the custom environment is defined in the level parameter and it can be the absolute path to its class (e.g. '/home/user/my_environment_dir/my_environment_module.py:MyEnvironmentClass') or the relative path to the module as in this example. In any case, we can use the custom gym environment without registering it.
Custom parameters for the environment's __init__ function can be passed as additional_simulator_parameters.
|
from rl_coach.agents.clipped_ppo_agent import ClippedPPOAgentParameters
from rl_coach.environments.gym_environment import GymVectorEnvironment
from rl_coach.graph_managers.basic_rl_graph_manager import BasicRLGraphManager
from rl_coach.graph_managers.graph_manager import SimpleSchedule
from rl_coach.architectures.embedder_parameters import InputEmbedderParameters
# Resetting tensorflow graph as the network has changed.
tf.reset_default_graph()
# define the environment parameters
bit_length = 10
env_params = GymVectorEnvironment(level='rl_coach.environments.toy_problems.bit_flip:BitFlip')
env_params.additional_simulator_parameters = {'bit_length': bit_length, 'mean_zero': True}
# Clipped PPO
agent_params = ClippedPPOAgentParameters()
agent_params.network_wrappers['main'].input_embedders_parameters = {
'state': InputEmbedderParameters(scheme=[]),
'desired_goal': InputEmbedderParameters(scheme=[])
}
graph_manager = BasicRLGraphManager(
agent_params=agent_params,
env_params=env_params,
schedule_params=SimpleSchedule()
)
graph_manager.improve()
|
tutorials/0. Quick Start Guide.ipynb
|
NervanaSystems/coach
|
apache-2.0
|
Advanced functionality - proprietary exploration policy, checkpoint evaluation
Agent modules, such as exploration policy, memory and neural network topology can be replaced with proprietary ones. In this example we'll show how to replace the default exploration policy of the DQN agent with a different one that is defined under the Resources folder. We'll also show how to change the default checkpoint save settings, and how to load a checkpoint for evaluation.
We'll start with the standard definitions of a DQN agent solving the CartPole environment (taken from the Cartpole_DQN preset)
|
from rl_coach.agents.dqn_agent import DQNAgentParameters
from rl_coach.base_parameters import VisualizationParameters, TaskParameters
from rl_coach.core_types import TrainingSteps, EnvironmentEpisodes, EnvironmentSteps
from rl_coach.environments.gym_environment import GymVectorEnvironment
from rl_coach.graph_managers.basic_rl_graph_manager import BasicRLGraphManager
from rl_coach.graph_managers.graph_manager import ScheduleParameters
from rl_coach.memories.memory import MemoryGranularity
####################
# Graph Scheduling #
####################
# Resetting tensorflow graph as the network has changed.
tf.reset_default_graph()
schedule_params = ScheduleParameters()
schedule_params.improve_steps = TrainingSteps(4000)
schedule_params.steps_between_evaluation_periods = EnvironmentEpisodes(10)
schedule_params.evaluation_steps = EnvironmentEpisodes(1)
schedule_params.heatup_steps = EnvironmentSteps(1000)
#########
# Agent #
#########
agent_params = DQNAgentParameters()
# DQN params
agent_params.algorithm.num_steps_between_copying_online_weights_to_target = EnvironmentSteps(100)
agent_params.algorithm.discount = 0.99
agent_params.algorithm.num_consecutive_playing_steps = EnvironmentSteps(1)
# NN configuration
agent_params.network_wrappers['main'].learning_rate = 0.00025
agent_params.network_wrappers['main'].replace_mse_with_huber_loss = False
# ER size
agent_params.memory.max_size = (MemoryGranularity.Transitions, 40000)
################
# Environment #
################
env_params = GymVectorEnvironment(level='CartPole-v0')
|
tutorials/0. Quick Start Guide.ipynb
|
NervanaSystems/coach
|
apache-2.0
|
Next, we'll override the exploration policy with our own policy defined in Resources/exploration.py.
We'll also define the checkpoint save directory and interval in seconds.
Make sure the first cell at the top of this notebook is run before the following one, so that module_path and resources_path are added to sys.path.
|
from exploration import MyExplorationParameters
# Overriding the default DQN Agent exploration policy with my exploration policy
agent_params.exploration = MyExplorationParameters()
# Creating a graph manager to train a DQN agent to solve CartPole
graph_manager = BasicRLGraphManager(agent_params=agent_params, env_params=env_params,
schedule_params=schedule_params, vis_params=VisualizationParameters())
# Resources path was defined at the top of this notebook
my_checkpoint_dir = resources_path + '/checkpoints'
# Checkpoints will be stored every 5 seconds to the given directory
task_parameters1 = TaskParameters()
task_parameters1.checkpoint_save_dir = my_checkpoint_dir
task_parameters1.checkpoint_save_secs = 5
graph_manager.create_graph(task_parameters1)
graph_manager.improve()
|
tutorials/0. Quick Start Guide.ipynb
|
NervanaSystems/coach
|
apache-2.0
|
Last, we'll load the latest checkpoint from the checkpoint directory, and evaluate it.
|
import tensorflow as tf
import shutil
# Clearing the previous graph before creating the new one to avoid name conflicts
tf.reset_default_graph()
# Updating the graph manager's task parameters to restore the latest stored checkpoint from the checkpoints directory
task_parameters2 = TaskParameters()
task_parameters2.checkpoint_restore_path = my_checkpoint_dir
graph_manager.create_graph(task_parameters2)
graph_manager.evaluate(EnvironmentSteps(5))
# Cleaning up
shutil.rmtree(my_checkpoint_dir)
|
tutorials/0. Quick Start Guide.ipynb
|
NervanaSystems/coach
|
apache-2.0
|
If you read the article, you will see that Matt used the first 35,000 lyrics of each artist. For the sake of simplicity, I am going to use the artist Jay Z as the subject of our analysis. So let's go and collect the first 35,000 words of the Jay Z lyrical catalog.
How are we going to do this, you might ask? Well, first off, you can go to your favorite search engine and search for Jay Z lyrics. On the other hand, you can actually use the rap annotation site Genius (http://rap.genius.com/) to get all that information and then some. According to Genius, Jay Z has a lot of songs. On average, most rap songs have 3 verses containing sixteen bars, or sixteen sentences, each.
python
35000/(16 * 3)
729
So 16 × 3 = 48, and 48 goes into 35,000 approximately 729 times, which would mean roughly 729 songs. That can't be right; even though Jay Z is prolific, he hasn't written 700+ songs. So I must have gotten my understanding wrong.
|
35000/(16 * 3)
|
HipHopDataExploration/HipHopDataExploration.ipynb
|
omoju/hiphopathy
|
gpl-3.0
|
I proceeded under the false assumption that each lyric was a sentence. Now I can see that each lyric must mean each word instead. This brings me to an essential quality of a good problem solver and computer scientist: the ability to embrace failure. More on that later. So let's go back and re-analyze the numbers. I need to figure out: how many words are in the average rap bar?
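As a quick sanity check on the revised reading (the words-per-bar figure below is a guess for illustration, not a measured value):

```python
# If a bar holds roughly 10 words (assumed), 35,000 words span
# far fewer songs than the 729 computed under the old reading.
words_per_bar = 10           # hypothetical average
bars_per_song = 16 * 3       # 3 verses of 16 bars each
songs = 35000 / float(bars_per_song * words_per_bar)
print(songs)
```

Roughly 73 songs, a much more plausible figure.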
In order to solve this, we need to find every one of Jay Z's lyrics, scrape them off the Internet, and then start number crunching on them. To make this process faster, I have already built something called a web scraper and scraped all his lyrics up till the Holy Grail album. Hopefully that gives us more than enough data to get to 35,000 words.
The lyrics I scraped off the Internet have been compressed in the zip file named `JayZ.zip`. Go ahead and unpack this file to our PythonDataLab folder. Take a look inside: you will see it is made up of a bunch of text files. Take some time and go through some of the text files to see what they look like. For example, the file "JayZ_The Black Album_99 Problems.txt" contains the lyrics to 99 Problems.
Now that we have these files, we are going to use some Python packages (a package is also known as a library) to help us. The Python natural language toolkit (NLTK) is one of the more popular Python libraries that people use for natural language processing. In fact, Matt Daniels used it for his hip-hop vocabulary. You can learn more about NLTK here, http://www.nltk.org/book/.
So let's import the toolkit. Go ahead and fire up Python in the terminal. To import any package in Python, you type in the keyword import, followed by the name of the toolkit.
|
import nltk
|
HipHopDataExploration/HipHopDataExploration.ipynb
|
omoju/hiphopathy
|
gpl-3.0
|
The nltk library has a lot of functions associated with it. The one we are going to use in particular is called a corpus reader. If you look up the definition of a corpus, you will see that it is just a collection of written text. The JayZ folder contains a corpus of Jay Z lyrics. The great thing about nltk is that it comes with built-in support for dozens of corpora. For example, NLTK includes a small selection of texts from Project Gutenberg, which contains some 25,000 free electronic books. To see some of these books we can run the following query
|
nltk.corpus.gutenberg.fileids()
from nltk.corpus import PlaintextCorpusReader
corpus_root = 'JayZ'
wordlist = PlaintextCorpusReader(corpus_root, '.*')
|
HipHopDataExploration/HipHopDataExploration.ipynb
|
omoju/hiphopathy
|
gpl-3.0
|
Ah, there is Shakespeare's Macbeth, Hamlet, as well as Julius Caesar. There is also the King James version of the Bible as well as Jane Austen's Emma. Let's get on to the business of making our JayZ corpus.
Based on my perusal of the nltk book, I know that there is a plaintext corpus reading function named PlaintextCorpusReader that I can use to make my corpus. So from nltk corpus function, which I access through the '.' (dot) operator, I import PlaintextCorpusReader
python
from nltk.corpus import PlaintextCorpusReader
I create a variable that I name corpus_root and assign the full system path location to where I unzipped the JayZ file I downloaded.
python
corpus_root = 'JayZ'
I then call the plain text corpus reader function with the root location and the token '.' that means grab every file in that folder.
python
wordlist = PlaintextCorpusReader(corpus_root, '.*')
I have adapted a function from nltk to reading in my corpus. This function is below. In order to make this function work, we will need to get some additional libraries of functions. In this case, we will need the regular expressions library named "re". So the first thing we need to do is import that package.
|
import re

def create_corpus(wordlist, some_corpus):
    """
    Takes an object of type PlaintextCorpusReader and a list;
    returns the list extended into a flat corpus of words.
    Requires the regular expression package re to work.
    """
    # This is where rap stopwords could be filtered out.
    for fileid in wordlist.fileids():
        raw = wordlist.raw(fileid)
        raw = re.split(r'\W+', raw)  # split the raw text into individual words
        some_corpus.extend(raw)
        print fileid  # show which files were read in
    return some_corpus
|
HipHopDataExploration/HipHopDataExploration.ipynb
|
omoju/hiphopathy
|
gpl-3.0
|
The function does its work when you type in the following
python
the_corpus = create_corpus(wordlist, [])
Now the_corpus contains all the lyrics. I wrote the create_corpus function in a way that shows what lyrics were read in. You should have a similar output to the one below.
The series of words that make up Jay Z's lyrics is now represented by a list data structure, named the_corpus. This list data structure is the same exact computational mental model that we have already acquired with the list data structure we are already familiar with in Snap!. Hopefully, you are beginning to gain a better understanding of how all the computational thinking skills you acquired in your learning of Snap! carries over to solving any computation problem realized in any programming language.
|
the_corpus = create_corpus(wordlist, [])
len(the_corpus)
|
HipHopDataExploration/HipHopDataExploration.ipynb
|
omoju/hiphopathy
|
gpl-3.0
|
We have finally gotten our Jay Z corpus! The data that we collected contains 119,302 words. You can see for yourself by running len (the Python function for figuring out the length of a list) on the_corpus. This is great, because we are interested in the first 35,000 words of the corpora, in order to recreate the data science experiment of the hip-hop vocabulary, which determines the number of unique words used within an artist's first 35,000 lyrics.
The corpus is stored in a list data structure, which lends itself to list manipulation techniques. You can see all the ways you can interact with a list in the Python documentation here https://docs.Python.org/2/tutorial/datastructures.html.
Did you know that 80-90% of time spent on data projects is gathering data and putting it into a format you can analyze? Geez
Let's take a look inside the_corpus, to determine what the first 10 words are.
|
the_corpus[:10]
|
HipHopDataExploration/HipHopDataExploration.ipynb
|
omoju/hiphopathy
|
gpl-3.0
|
Here's a little secret: much of NLP (and data science, for that matter) boils down to counting things. If you've got a bunch of data that needs analyzin' but you don't know where to start, counting things is usually a good place to begin. Sure, you'll need to figure out exactly what you want to count, how to count it, and what to do with the counts, but if you're lost and don't know what to do, just start counting. Some of this content has been adapted from Charlie Greenbacker's A smattering of NLP in Python
These words come from the first text file that was read in, JayZ_American Gangster_American Dreamin.txt. We can go ahead and save all the album titles by typing in Albums = wordlist.fileids()
|
Albums = wordlist.fileids()
Albums[:14]
[fileid for fileid in Albums[:14]]
the_corpus[34990:35000]
|
HipHopDataExploration/HipHopDataExploration.ipynb
|
omoju/hiphopathy
|
gpl-3.0
|
We can now go ahead and figure out the number of unique words used in Jay Z's first 35,000 lyrics. An astute observer will notice that we have not done any data cleaning. For example, take a look inside a slice of the corpus, the last 10 words the_corpus[34990:35000], ['die', 'And', 'even', 'if', 'Jehovah', 'witness', 'bet', 'he', 'll', 'never'], and you will see it has treated the contraction "he'll" as two separate words. The create_corpus function that we used works by treating each contiguous chunk of letters separated by punctuation or whitespace as a word. As a result, contractions like "he'll" get treated as two words. We can use the function lexical_diversity to determine the number of unique words in our Jay Z corpus.
|
def lexical_diversity(my_text_data):
word_count = len(my_text_data)
vocab_size = len(set(my_text_data))
diversity_score = word_count / vocab_size
return diversity_score
|
HipHopDataExploration/HipHopDataExploration.ipynb
|
omoju/hiphopathy
|
gpl-3.0
|
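A quick sanity check on a toy token list first. Note that under Python 2 the division in lexical_diversity truncates to an integer unless one side is made a float; the demo variant below (name ours) guards against that:

```python
def lexical_diversity_demo(my_text_data):
    # same logic as lexical_diversity above, with float() guarding
    # against Python 2 integer division
    word_count = len(my_text_data)
    vocab_size = len(set(my_text_data))
    return float(word_count) / vocab_size

tokens = ['roll', 'out', 'roll', 'out', 'roll']
print(lexical_diversity_demo(tokens))  # 5 tokens / 2 unique words
```

A higher score means each unique word is repeated more often, i.e. lower lexical diversity.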
If we call our function on the Jay Z sliced corpus, it should give us a score.
|
emma = nltk.corpus.gutenberg.words('austen-emma.txt')
print "The lexical diversity score for the first 35,000 words in the Jay Z corpus is", lexical_diversity(the_corpus[:35000])
print "The lexical diversity score for the first 35,000 words in the Emma corpus is", lexical_diversity(emma[:35000])
|
HipHopDataExploration/HipHopDataExploration.ipynb
|
omoju/hiphopathy
|
gpl-3.0
|
Remember we had created a wordlist of type PlaintextCorpusReader. This uses the data structure of a nested list.
PlaintextCorpusReader type is made up of a list of fileids, which are made up of a list of paragraphs, which are in turn made up of a list of sentences, which are in turn made up of a list of words.
words(): list of str
sents(): list of (list of str)
paras(): list of (list of (list of str))
fileids(): list of str
In this section, let's investigate the use of basketball language in Jay Z's lyrics. Go ahead and save all the album titles by typing in Albums = wordlist.fileids(). Notice that the JayZ_ prefix appears before each of the filenames. To strip this prefix from the filename, we can drop the first five characters using fileid[5:].
|
[fileid[5:] for fileid in Albums[:14]]
|
HipHopDataExploration/HipHopDataExploration.ipynb
|
omoju/hiphopathy
|
gpl-3.0
|
We are ready to start mining the data. So let's do a simple analysis on the occurrence of the concept "basketball" in JayZ's lyrics as represented by a list of 40 terms that are common when we talk about basketball.
|
basketball_bag_of_words = ['bounce','crossover','technical',
'shooting','double','jump','goal','backdoor','chest','ball',
'team','block','throw','offensive','point','airball','pick',
'assist','shot','layup','break','dribble','roll','cut','forward',
'move','zone','three-pointer','free','post','fast','blocking','backcourt',
'violation','foul','field','pass','turnover','alley-oop','guard']
|
HipHopDataExploration/HipHopDataExploration.ipynb
|
omoju/hiphopathy
|
gpl-3.0
|
Let's reduce our investigation of this concept to just the American Gangster album, which comprises the first 14 songs in the corpus. We do that by using the command Albums[:14]. Remember that Albums is just a list data type, so we can slice it to its first 14 indexes.
The following code converts each word w in the lyrics to lowercase using w.lower(), then checks whether it starts with any of the "targets", that is, any of the words in basketball_bag_of_words, using startswith(). Thus, it will count words like "turnover," "alley-oop," and so on. All this is enabled by NLTK's built-in function for Conditional Frequency Distribution. You can read more about it here.
|
cfd = nltk.ConditionalFreqDist(
(target, fileid[5:])
for fileid in Albums[:14]
for w in wordlist.words(fileid)
for target in basketball_bag_of_words
if w.lower().startswith(target))
# have inline graphs
#get_ipython().magic(u'matplotlib inline')
%pylab inline
cfd.plot()
|
HipHopDataExploration/HipHopDataExploration.ipynb
|
omoju/hiphopathy
|
gpl-3.0
|
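What ConditionalFreqDist tallies here can be reproduced in plain Python; the toy lyrics below are invented for illustration:

```python
from collections import defaultdict

toy_lyrics = {
    'Song A': ['roll', 'Rolling', 'ball', 'party'],
    'Song B': ['roll', 'team', 'teams'],
}
targets = ['roll', 'team']

# counts[target][song] == number of words starting with target
counts = defaultdict(lambda: defaultdict(int))
for song, words in toy_lyrics.items():
    for w in words:
        for target in targets:
            if w.lower().startswith(target):
                counts[target][song] += 1

print(counts['roll']['Song A'])  # 'roll' and 'Rolling' both match
```

Because of the lowercase-then-startswith test, 'Rolling' and 'teams' are counted under 'roll' and 'team' respectively, just as in the NLTK version.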
From the plot we see that the basketball term "roll" seems to be used extensively in the song Party Life. Let's take a closer look at this phenomenon, and determine if "roll" was used in the "basketball" sense of the term. To do this, we need to see the context in which it was used. What we really need is a concordance. Let's build one.
The first thing I want to do is to create a corpus that only contain words from the American Gangster album.
|
AmericanGangster_wordlist = PlaintextCorpusReader(corpus_root, 'JayZ_American Gangster_.*')
AmericanGangster_corpus = create_corpus(AmericanGangster_wordlist, [])
|
HipHopDataExploration/HipHopDataExploration.ipynb
|
omoju/hiphopathy
|
gpl-3.0
|
Building a concordance gets us into the area of elementary information retrieval (IR)<a href="#fn1" id="ref1">1</a>; think <i>basic search engine</i>. So why do we even need to “normalize” terms? We want to match <b>U.S.A.</b> and <b>USA</b>. Also, when we enter <b>roll</b>, we would like to match <b>Roll</b> and <b>rolling</b>. One way to do this is to stem the word, that is, reduce it down to its base/stem/root form. As such, <b>automate(s)</b>, <b>automatic</b>, <b>automation</b> are all reduced to <b>automat</b>. Most stemmers are pretty basic and just chop off standard affixes indicating things like tense (e.g., "-ed") and possessive forms (e.g., "-'s"). Here, we'll use the most popular English-language stemmer, the Porter stemmer, which comes with NLTK.
Once our tokens are stemmed, we can rest easy knowing that roll, Rolling, Rolls will all stem to roll.
<sup id="fn1">1. Some of this content has been adapted from Dan Jurafsky's <a href="https://web.stanford.edu/class/cs124/">Stanford CS124 class</a><a href="#ref1" title="Jump back to footnote 1 in the text."></a></sup>
|
porter = nltk.PorterStemmer()
stemmed_tokens = [porter.stem(t) for t in AmericanGangster_corpus]
for token in sorted(set(stemmed_tokens))[860:870]:
print token + ' [' + str(stemmed_tokens.count(token)) + ']'
|
HipHopDataExploration/HipHopDataExploration.ipynb
|
omoju/hiphopathy
|
gpl-3.0
|
Now we can go ahead and create a concordance to test if "roll" is used in the basketball (pick and roll) sense or not.
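Note that IndexedText is not defined anywhere in this notebook; it is the stemming/indexing helper from Chapter 3 of the NLTK book. A minimal stdlib sketch of what it does is below, with a stand-in stemmer for the demo (in the notebook itself you would pass nltk.PorterStemmer()):

```python
from collections import defaultdict

class IndexedText(object):
    """Minimal sketch of the NLTK book's IndexedText helper:
    index every token position by its stem, so a concordance query
    for 'roll' also matches 'Rolling', 'rolls', etc. The stemmer
    can be any object with a .stem() method."""

    def __init__(self, stemmer, text):
        self._text = list(text)
        self._stemmer = stemmer
        self._index = defaultdict(list)
        for i, word in enumerate(self._text):
            self._index[self._stem(word)].append(i)

    def positions(self, word):
        # all token positions whose stem matches the query's stem
        return self._index[self._stem(word)]

    def concordance(self, word, width=40):
        wc = width // 4  # words of context on each side
        for i in self.positions(word):
            left = ' '.join(self._text[max(0, i - wc):i])
            right = ' '.join(self._text[i:i + wc])
            print(left[-width:].rjust(width), right[:width])

    def _stem(self, word):
        return self._stemmer.stem(word).lower()

class _DemoStemmer(object):
    """Stand-in stemmer for this sketch only; strips a trailing 'ing'."""
    def stem(self, word):
        w = word.lower()
        return w[:-3] if w.endswith('ing') else w

demo = IndexedText(_DemoStemmer(), ['we', 'roll', 'deep', 'Rolling', 'stones'])
print(demo.positions('roll'))  # matches both 'roll' and 'Rolling'
```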
|
AmericanGangster_lyrics = IndexedText(porter, AmericanGangster_corpus)
AmericanGangster_lyrics.concordance('roll')
print AmericanGangster_wordlist.raw(fileids='JayZ_American Gangster_Party Life.txt')
|
HipHopDataExploration/HipHopDataExploration.ipynb
|
omoju/hiphopathy
|
gpl-3.0
|