# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
import seaborn as sns
sns.set()
import math
from sklearn.neural_network import MLPClassifier, MLPRegressor
from sklearn.linear_model import LinearRegression
from sklearn import datasets, metrics
from sklearn.metrics import roc_curve, auc, confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import label_binarize
from scipy import interp
from itertools import cycle
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras import backend as K
from tensorflow.keras.utils import model_to_dot
from IPython.display import SVG
# -
# ## DNN Classifier
# SKLearn does not specialize in complex neural networks. Instead, it lets you create one simple kind - the vanilla multilayer perceptron. For us, this can be plenty complex, but should you want more control over facets of the process like mixed activation functions, convolution, recurrence, linked layers, etc., you must use a more powerful package like Google's TensorFlow.
#
# Let's start just by creating some simple networks for a classifier.
Xs, y = datasets.make_classification(n_samples=2000, n_features=5)
model = MLPClassifier(hidden_layer_sizes=(10,10), activation='relu', solver='adam', verbose=True,
max_iter = 3000)
# We can fit data just as before with our previous algorithms. For example, a simple Test/Train fit can be done like the following.
X_train, X_test, y_train, y_test = train_test_split(Xs, y, test_size=0.3)
model.fit(X_train,y_train)
print(model.score(X_test,y_test))
# So what exactly just happened?
#
# We can recall the structure of the neural net looked something like:
#
# $$input: (5) - relu(10) - relu(10) - output: softmax(2)$$
#
# Let's examine a little closer what this does.
# ## Relu
x = np.linspace(-5,5,100)
y = list(map(lambda x: max(0,x), x))
sns.lineplot(x=x,y=y).set(title='ReLU - Rectified Linear Unit',xlabel='input',ylabel='output')
# ReLU is coming to be accepted as the default activation function for much of deep learning - the sigmoid and tanh suffer from issues that can push gradients close to zero; look into the vanishing gradient problem for more. However, ReLU is not without issues. It is unlikely you will come across this in our projects, but there is the potential for ReLU cells to 'die', where they evaluate to zero for all inputs. This can be solved using something like the ELU or Leaky ReLU, where the negative side has a small slope that forces the model to continue to pay attention to the cell's value.
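# For comparison, here is a minimal NumPy sketch of the two alternatives mentioned above (the `alpha` values are illustrative defaults, not prescribed by this lecture):

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    # Keep a small slope on the negative side so the gradient never fully vanishes
    return np.where(x > 0, x, alpha * x)

def elu(x, alpha=1.0):
    # Smooth exponential saturation towards -alpha on the negative side
    return np.where(x > 0, x, alpha * (np.exp(x) - 1))
```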
# ## Softmax
#
# The softmax function is the almost exclusive choice for the final layer of categorical classifying dense neural networks as it takes the values of the final layer and calculates probabilities for each of the classes. We will not worry about the computation, since we will likely not end up changing this. In fact, SKLearn does not even give the option to change it in the MLP models.
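# Even though we will not change it, the computation itself is short - a sketch of the softmax, which turns the final layer's raw values into class probabilities:

```python
import numpy as np

def softmax(z):
    # Shift by the max for numerical stability; output is a probability vector
    e = np.exp(z - np.max(z))
    return e / e.sum()

probs = softmax(np.array([2.0, 1.0, 0.1]))
```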
# ## Optimization
# Optimization in neural networks follows the same principle as linear regression! It may seem strange, but if you imagine our weights and biases as the axes on the heatmap we saw a few weeks ago, a similar process can be performed to find the global minimum - the point at which our loss function is smallest.
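# The idea in one dimension - repeatedly step downhill along the gradient - can be sketched in a few lines (toy loss, not one of our models):

```python
# Gradient descent on the toy loss f(w) = (w - 3)**2,
# whose gradient is 2 * (w - 3); the minimum sits at w = 3
w, lr = 0.0, 0.1
for _ in range(100):
    w -= lr * 2 * (w - 3)
```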
# ## Loss Functions
# Loss in neural networks is the function used to calculate how well we are performing on our data, whether predicting classes or regression values. We have a few different functions to consider. Hinge loss is slightly faster to compute, but log-loss (categorical cross-entropy) is generally more accurate. There are others, but these are two of the main functions.
# +
fig, axs = plt.subplots(ncols=2,nrows=2,figsize=(12,12))
x = np.linspace(0.00001,.99999,100).tolist()
y = list(map(lambda x:-1*math.log(x), x))
sns.lineplot(x=x, y=y, ax=axs[0][0]).set(title='Log-Loss: For Positive Class',
xlabel='Predicted Probability',ylabel='Loss')
y = list(map(lambda x:-1*math.log(1-x), x))
sns.lineplot(x=x, y=y, ax=axs[0][1]).set(title='Log-Loss: For Negative Class',
xlabel='Predicted Probability',ylabel='Loss')
y = list(map(lambda x: max(0, 1-x), x))
sns.lineplot(x=x, y=y, ax=axs[1][0]).set(title='Hinge Loss: For Positive Class',
xlabel='Predicted Probability',ylabel='Loss')
y = list(map(lambda x: max(0, 1+x), x))
sns.lineplot(x=x, y=y, ax=axs[1][1]).set(title='Hinge Loss: For Negative Class',
xlabel='Predicted Probability',ylabel='Loss')
# -
# ## DNN Regression
# For regression we have two main options - the first being the same as in linear regression and the second very similar. The Mean Squared Error (MSE) is excellent when there are minimal outliers, but if your data does have outliers, the Mean Absolute Error (MAE) is less sensitive to their impact.
# +
x, y = datasets.make_regression(n_samples=100, n_features=1, noise=5)
fig, axs = plt.subplots(ncols=2,nrows=2,figsize=(18,12))
reg = LinearRegression()
reg.fit(x,y)
pred = reg.predict(x)
mse = str(metrics.mean_squared_error(pred,y).round(3))
mae = str(metrics.mean_absolute_error(pred,y).round(3))
title = 'Regression: MSE = '+mse+', MAE = '+mae
sns.regplot(x=x,y=y,ax=axs[0][0],ci=None).set(title=title,xlabel='X',ylabel='Y')
names=[]
sns.scatterplot(data=pd.DataFrame({'Y':y},index=x.flatten()), ax = axs[1][0])
for i in range(10):
mlpr = MLPRegressor(max_iter=(i+1)*200,activation='relu',)
mlpr.fit(np.array(x).reshape(-1, 1),y)
pred = mlpr.predict(np.array(x).reshape(-1, 1))
names.append(str((i+1)*200)+' Iteration Line')
sns.lineplot(data=pd.DataFrame({'Prediction':pred,'X':x.flatten()}),ax = axs[1][0],x='X',y='Prediction')
mse = str(metrics.mean_squared_error(pred,y).round(3))
mae = str(metrics.mean_absolute_error(pred,y).round(3))
title = 'Regression: MSE = '+mse+', MAE = '+mae
axs[1][0].set_title(title)
_, ind = min((_, idx) for (idx, _) in enumerate(y))
y[ind] = 0
reg = LinearRegression()
reg.fit(x,y)
pred = reg.predict(x)
mse = str(metrics.mean_squared_error(pred,y).round(3))
mae = str(metrics.mean_absolute_error(pred,y).round(3))
title = 'Regression: MSE = '+mse+', MAE = '+mae
sns.regplot(x=x,y=y,ax=axs[0][1],ci=None).set(title=title,xlabel='X',ylabel='Y')
names=[]
sns.scatterplot(data=pd.DataFrame({'Y':y},index=x.flatten()), ax = axs[1][1])
for i in range(10):
mlpr = MLPRegressor(max_iter=(i+1)*200,activation='tanh')
mlpr.fit(np.array(x).reshape(-1, 1),y)
pred = mlpr.predict(np.array(x).reshape(-1, 1))
names.append(str((i+1)*200)+' Iteration Line')
sns.lineplot(data=pd.DataFrame({'Prediction':pred,'X':x.flatten()}), ax = axs[1][1],x='X',y='Prediction')
mse = str(metrics.mean_squared_error(pred,y).round(3))
mae = str(metrics.mean_absolute_error(pred,y).round(3))
title = 'Regression: MSE = '+mse+', MAE = '+mae
axs[1][1].set_title(title)
plt.show()
# -
# We would however like to have more control over the details in the model, as well as access to different combinations of layers, and the ability to create recurrent and convolution layers, the details of which we will cover in another session. For the time being, let's investigate the TensorFlow and Keras implementation to build Dense Neural Networks.
# ## Keras Sequential Structure
# Let's go ahead and build the exact same thing, just in a slightly different way. In Keras, we can declare an ordered series of layers (input, hidden, output) to create a model. Within Keras, there are pre-packaged layers that allow you to do everything from a simple neural net up to convolutional image recognition.
#
# We will first call a method to create a Sequential series of layers. Then, to this we will pass a list of our layers, starting with a dense layer with inputs equal to the dimensions of our inputs. After this, you can pass any series of dense, convolutional, or recurrent layers. Let's focus on the former for now.
Xs, y = datasets.make_classification(n_samples=2000, n_features=5)
print('X:',np.array(Xs).shape,' Y:',np.array(y).shape)
model = tf.keras.Sequential([layers.Dense(10, activation='linear', input_shape=(5,)),
layers.Dense(2, activation='softmax')])
SVG(model_to_dot(model).create(prog='dot', format='svg'))
# Using this model above, reference the documentation to create a layer sequence of
#
# $$Input: 5,\ ReLU:10,\ Sigmoid:5,\ TanH: 2,\ Softmax: 2$$
model = tf.keras.Sequential([layers.Dense(10, activation='relu', input_shape=(5,)),
layers.Dense(5, activation='sigmoid'),
layers.Dense(2, activation='tanh'),
layers.Dense(2, activation='softmax')])
# ## Compiling a Model
# Now that we can assemble a series of layers into a Dense Neural Network (DNN) there are a few more decisions that you must make: the loss function and your accuracy metric. There are hundreds of potential combinations of values for this, so it comes down to selecting the option that best fits your problem _and_ data. Just a few things to take into account are the impact of false positives, the impact of false negatives, the amount of data, if the response variable is categorical, how much time you have to train, and more.
#
# ### Regression
#
# Mean Squared Error: $\frac{1}{n}\sum_{i=1}^{n}(y-\hat{y})^2$
#
# Mean Absolute Error: $\frac{1}{n}\sum_{i=1}^{n}|y-\hat{y}|$
#
# Mean Absolute Percentage Error: $\frac{100\%}{n}\sum_{i=1}^{n} | \frac{(y-\hat{y})}{y}|$
#
# ### Categorical
#
# Cross-Entropy: $−(y\log(p)+(1−y)\log(1−p))$
# As we discussed above, the mean absolute error is more forgiving of outlier values than the mean squared error, the more common choice. The mean absolute percentage error is used less often, but is read as a percentage. This means the downside error is capped at $100\%$ while the upside error is unbounded.
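# The three regression formulas above translate directly into NumPy (the values below are toy numbers, just to show the arithmetic):

```python
import numpy as np

y_true = np.array([3.0, -0.5, 2.0, 7.0])  # toy targets
y_pred = np.array([2.5, 0.0, 2.0, 8.0])   # toy predictions

mse = np.mean((y_true - y_pred) ** 2)                    # mean squared error
mae = np.mean(np.abs(y_true - y_pred))                   # mean absolute error
mape = 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))  # percentage error
```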
#
# However, before we can compile the model, there is one final decision to make. By what method should the algorithm attempt to minimize the loss? In other words, how should the weights and biases be nudged to achieve maximum performance (minimal loss)?
#
# There are many, but we will select a few.
#
# ### RMSProp
# 
#
# As we move towards the optimal point, some parameter directions carry us steadily towards the minimum while others make the iterate oscillate up and down. Large oscillations slow down gradient descent and prevent us from using a larger learning rate.
#
# RMSProp damps these oscillations by dividing each update by a running average of recent squared gradients, letting us converge on the optimal point faster.
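# A minimal NumPy sketch of the RMSProp update rule on the same toy quadratic (the hyperparameter values are illustrative, not the Keras defaults):

```python
import numpy as np

def rmsprop_step(w, grad, cache, lr=0.01, decay=0.9, eps=1e-8):
    # Running average of squared gradients damps the oscillating directions
    cache = decay * cache + (1 - decay) * grad ** 2
    w = w - lr * grad / (np.sqrt(cache) + eps)
    return w, cache

# Minimise the toy loss f(w) = (w - 3)**2 with gradient 2 * (w - 3)
w, cache = 0.0, 0.0
for _ in range(2000):
    w, cache = rmsprop_step(w, 2 * (w - 3), cache)
```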
#
# ### Adam
# As with all of the optimizers, we are trying to solve the gradient descent problem: finding a minimum loss value in a space with dimensions equal to our weights and biases. In other words, given the loss function, we want to discover which values of the weights and biases minimize it. The optimizer repeatedly changes those values by smaller and smaller amounts until we converge on the best value.
#
# Adam combines the squared-gradient scaling above with momentum: it also keeps a running average of the gradients themselves, which lets it speed up down a steep part of the space and slow down as it shallows out, preventing overrunning and having to roll back.
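# A minimal sketch of the Adam update, combining the two running averages with their bias corrections (hyperparameters are illustrative):

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=0.05, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * grad       # momentum: running average of gradients
    v = b2 * v + (1 - b2) * grad ** 2  # RMSProp-style squared-gradient average
    m_hat = m / (1 - b1 ** t)          # bias corrections for the zero init
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Same toy problem: minimise f(w) = (w - 3)**2
w, m, v = 0.0, 0.0, 0.0
for t in range(1, 3001):
    w, m, v = adam_step(w, 2 * (w - 3), m, v, t)
```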
#
# The math behind these is fairly involved, so we will only touch on it briefly. https://blog.paperspace.com/intro-to-optimization-momentum-rmsprop-adam/ is a great article to learn more if you are interested.
#
# ## Categorical Compiling
# Now, let's compile our model!
#Two Classes
model = tf.keras.Sequential([layers.Dense(10, activation='relu', input_shape=(5,)),
layers.Dense(10, activation='relu'),
layers.Dense(2, activation='softmax')])
model.compile(optimizer='adam',loss=tf.keras.losses.binary_crossentropy)
#Five Classes
model = tf.keras.Sequential([layers.Dense(10, activation='relu', input_shape=(5,)),
layers.Dense(10, activation='relu'),
layers.Dense(5, activation='softmax')])
model.compile(optimizer='adam',loss=tf.keras.losses.categorical_crossentropy)
# ## Regression Compiling
# We know that we must use the softmax layer for categorical variables, because our actual goal there is to obtain class probabilities, so we can choose decision boundaries that maximize our chosen metric - the next discussion. Here, since we are not trying to predict a probability, we simply want a continuous value. This can be done by requesting a weighted sum of the final layer with a 'linear' activation - i.e. a linear regression layer.
#Mean Squared Error
model = tf.keras.Sequential([layers.Dense(10, activation='relu', input_shape=(5,)),
layers.Dense(10, activation='relu'),
layers.Dense(1, activation='linear')])
model.compile(optimizer='adam',loss=tf.keras.losses.mean_squared_error)
#Mean Absolute Error
model = tf.keras.Sequential([layers.Dense(10, activation='relu', input_shape=(5,)),
layers.Dense(10, activation='relu'),
layers.Dense(1, activation='linear')])
model.compile(optimizer='adam',loss=tf.keras.losses.mean_absolute_error)
# ## Fitting a Model
# Finally, now that we have a compiled, ready-to-go model, the final step is to train it. In SKLearn, this was as simple as executing the .fit() method. In Keras this is slightly more complicated. We must first decide for how long we would like to fit our model (in epochs, or one run through the training data). We can also decide on the metrics we would like to record.
#
# Note that you can ONLY pass NumPy arrays - no DataFrames.
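# If your data starts life in a pandas DataFrame, convert it before calling `.fit()` (the column names here are hypothetical):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'feat_a': [0.1, 0.2, 0.3],
                   'feat_b': [1.0, 2.0, 3.0],
                   'target': [0, 1, 0]})
X_np = df[['feat_a', 'feat_b']].to_numpy()  # shape (3, 2) ndarray
y_np = df['target'].to_numpy()              # shape (3,) ndarray
```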
#
# For Regression, let's see an example
# +
X, y = datasets.make_regression(n_samples=5000, n_features=5, noise=2)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
model = tf.keras.Sequential([layers.Dense(10, activation='relu', input_shape=(5,)),
layers.Dense(10, activation='relu'),
layers.Dense(1, activation='linear')])
model.compile(optimizer='adam',loss=tf.keras.losses.mean_squared_error,
metrics=[tf.keras.metrics.mean_squared_error,
tf.keras.metrics.mean_absolute_error,
tf.keras.metrics.mean_absolute_percentage_error])
history = model.fit(X_train, y_train, epochs=3, validation_data=(X_test,y_test), verbose=1)
# -
# ## Creating Analytics for Models
# We would now like to be able to look at these accuracies over time to evaluate if we are at the optimal fit, and investigate the performance analytics that we discussed before - specifically the Confusion Matrix and the ROC Curve. Notice above, that in preparation for this step, we asked keras to store the history of the data in a variable called history. Let's train for a few more epochs to get some more datapoints then let's grab out that information.
history = model.fit(X_train, y_train, epochs=15, validation_data=(X_test,y_test), verbose=0)
epochs = pd.DataFrame(history.history)
epochs.index = np.arange(1,16)
epochs
# Let's now do some graphing, let's see how we are performing relative to the train set.
fig, axs = plt.subplots(ncols=3, figsize=(18,6))
sns.lineplot(data=epochs[['mean_squared_error','val_mean_squared_error']],ax=axs[0]).set(
title='Mean Squared Errors',xlabel='Epoch',ylabel='MSE')
sns.lineplot(data=epochs[['mean_absolute_error','val_mean_absolute_error']],ax=axs[1]).set(
title='Mean Absolute Errors',xlabel='Epoch',ylabel='MAE')
sns.lineplot(data=epochs[['mean_absolute_percentage_error',
'val_mean_absolute_percentage_error']],ax=axs[2]).set(
title='Mean Absolute Percentage Errors',xlabel='Epoch',ylabel='MAPE')
# +
#This wasn't in the package, so I consulted StackExchange for something that works.
def r_squared(y_true, y_pred):
SS_res = K.sum(K.square( y_true-y_pred ))
SS_tot = K.sum(K.square( y_true - K.mean(y_true) ) )
return ( 1 - SS_res/(SS_tot + K.epsilon()) )
model.compile(optimizer='adam',loss=tf.keras.losses.mean_squared_error,
metrics=[r_squared])
history = model.fit(X_train, y_train, epochs=15, validation_data=(X_test,y_test), verbose=0)
epochs = pd.DataFrame(history.history)
epochs.index = np.arange(1,16)
plt.figure(figsize=(5,5))
sns.lineplot(data=epochs[['r_squared','val_r_squared']]).set(title='R-Squared',xlabel='Epoch',ylabel='R2')
# -
# Now, we must determine how to extract probabilities from Keras and compute ROC curves and confusion matrices. It's actually quite simple and can reuse many of SKLearn's methods.
#
# First things first, we need a classification problem, layer structure, compiled sequential object, and fitted neural network to get probabilities from.
# +
X, y = datasets.make_moons(n_samples=1000)
X_train, X_test, y_train_l, y_test_l = train_test_split(X, y, test_size=0.25)
y_test = to_categorical(y_test_l)
y_train = to_categorical(y_train_l)
model = tf.keras.Sequential([layers.Dense(10, activation='relu', input_shape=(2,)),
layers.Dense(10, activation='relu'),
layers.Dense(2, activation='softmax')])
model.compile(optimizer='adam',loss=tf.keras.losses.binary_crossentropy,
metrics=[tf.keras.losses.binary_crossentropy])
history = model.fit(X_train, y_train, epochs=10, validation_data=(X_test,y_test), verbose=1)
# -
# Now let's see if we are performing well.
epochs = pd.DataFrame(history.history)
epochs.index = np.arange(1,11)
plt.figure(figsize=(5,5))
sns.lineplot(data=epochs[['loss','val_loss']])
# And grab out the class predictions.
pred = np.argmax(model.predict(X_test), axis=1) # predict_classes was removed in newer TF; argmax over the predicted probabilities is equivalent
# Then, create a confusion matrix.
# +
test_cf = pd.DataFrame(confusion_matrix(y_test_l,pred))
fig, ax = plt.subplots(figsize=(6,6))
ax = sns.heatmap(test_cf,annot=True,fmt='d',cmap='Blues',cbar=False)
ax.set(xlabel='Predicted Class',ylabel='Actual Class',title='Confusion Matrix')
# -
# Or in the multi-class scenario.
# +
X, y = datasets.make_classification(n_samples=1000, n_features=15, n_informative=15,
n_redundant=0,n_classes=5)
X_train, X_test, y_train_l, y_test_l = train_test_split(X, y, test_size=0.25)
y_test = to_categorical(y_test_l)
y_train = to_categorical(y_train_l)
model = tf.keras.Sequential([layers.Dense(10, activation='relu', input_shape=(15,)),
layers.Dense(10, activation='relu'),
layers.Dense(5, activation='softmax')])
model.compile(optimizer='adam',loss=tf.keras.losses.categorical_crossentropy,
metrics=[tf.keras.losses.categorical_crossentropy])
history = model.fit(X_train, y_train, epochs=10, validation_data=(X_test,y_test), verbose=1)
# -
epochs = pd.DataFrame(history.history)
epochs.index = np.arange(1,11)
plt.figure(figsize=(8,8))
sns.lineplot(data=epochs[['loss','val_loss']]).set(title='Loss over Epochs',xlabel='Epochs',
ylabel='Loss')
# +
pred = np.argmax(model.predict(X_test), axis=1) # argmax over predicted probabilities replaces the removed predict_classes
test_cf = pd.DataFrame(confusion_matrix(y_test_l,pred))
fig, ax = plt.subplots(figsize=(6,6))
ax = sns.heatmap(test_cf,annot=True,fmt='d',cmap='Blues',cbar=False)
ax.set(xlabel='Predicted Class',ylabel='Actual Class',title='Confusion Matrix')
# +
pred = np.argmax(model.predict(X_test), axis=1)
test_cf = confusion_matrix(y_test_l,pred)
test_cf = test_cf.astype('float') / test_cf.sum(axis=1)[:, np.newaxis]
test_cf = pd.DataFrame((test_cf))
fig, ax = plt.subplots(figsize=(6,6))
ax = sns.heatmap(test_cf,annot=True,cmap='Blues',cbar=False,fmt='.0%')
ax.set(xlabel='Predicted Class',ylabel='Actual Class',title='Normalized Confusion Matrix')
# -
# For ROC Curves, we need to grab out the curves for each class using the probabilities. Let's see how this is done.
# +
#Binarize
target_b = label_binarize(y, classes=[0,1,2,3,4])
n_classes = target_b.shape[1]
#Split/Train
X_train, X_test, y_train, y_test = train_test_split(X, target_b, test_size=0.25)
history = model.fit(X_train, y_train, epochs=10, validation_data=(X_test,y_test), verbose=1)
#Score
y_scores = model.predict(X_test) # predict returns the softmax probabilities directly; predict_proba was removed in newer TF
# Compute ROC
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(n_classes):
fpr[i], tpr[i], _ = roc_curve(y_test[:, i], y_scores[:,i])
roc_auc[i] = auc(fpr[i], tpr[i])
# +
# Aggregate False Positive Rates
all_fpr = np.unique(np.concatenate([fpr[i] for i in range(n_classes)]))
# Interpolate Curves
mean_tpr = np.zeros_like(all_fpr)
for i in range(n_classes):
mean_tpr += np.interp(all_fpr, fpr[i], tpr[i])
mean_tpr /= n_classes
fpr["macro"] = all_fpr
tpr["macro"] = mean_tpr
roc_auc["macro"] = auc(fpr["macro"], tpr["macro"])
# Plot all ROC curves
plt.figure(figsize=(10,10))
plt.plot(fpr["macro"], tpr["macro"],
label='Macro-Average ROC curve (area = {0:0.2f})'
''.format(roc_auc["macro"]),
color='navy', linestyle=':', linewidth=4)
colors = cycle(['aqua', 'darkorange', 'blue', 'red'])
for i, color in zip(range(n_classes), colors):
plt.plot(fpr[i], tpr[i], color=color, lw=2,
label='ROC curve of class {0} (area = {1:0.5f})'
''.format(i, roc_auc[i]))
plt.plot([0, 1], [0, 1], 'k--', lw=2)
plt.xlim([-0.01, 1.0])
plt.ylim([0.0, 1.01])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Multi-Class ROC Curves')
plt.legend(loc="lower right")
# -
# ## Early Stopping
# Because the choice of the number of epochs is mostly arbitrary, it is sometimes nice to automate it away by asking the model to stop training once it is no longer making progress. The SKLearn MLP module stopped once the loss failed to improve by more than a very small amount for 10 epochs in a row. Let's implement the same idea, where `patience` is the number of epochs to wait before ending the fit.
# +
X, y = datasets.make_classification(n_samples=1000, n_features=15, n_informative=15,
n_redundant=0,n_classes=5)
X_train, X_test, y_train_l, y_test_l = train_test_split(X, y, test_size=0.25)
y_test = to_categorical(y_test_l)
y_train = to_categorical(y_train_l)
model = tf.keras.Sequential([layers.Dense(10, activation='relu', input_shape=(15,)),
layers.Dense(10, activation='relu'),
layers.Dense(5, activation='softmax')])
model.compile(optimizer='adam',loss=tf.keras.losses.categorical_crossentropy,
metrics=[tf.keras.losses.categorical_crossentropy])
callbacks = [EarlyStopping(monitor='val_loss', patience=5)]
history = model.fit(X_train, y_train, epochs=500, validation_data=(X_test,y_test),
callbacks=callbacks,verbose=2)
# -
epochs = pd.DataFrame(history.history)
epochs.index = np.arange(1,len(epochs)+1)
plt.figure(figsize=(8,8))
sns.lineplot(data=epochs[['loss','val_loss']]).set(title='Loss over Epochs',xlabel='Epochs',
ylabel='Loss')
# ## DropOut Layers
# For our last topic today, remember what it means to be a dense neural network: 'dense' refers to the complete interconnectivity from the input layer, through the hidden layers, to the output layer. This huge number of connections means the model can tend to overfit. To prevent this - and to let us safely train for longer - we use dropout layers, which randomly zero a fraction of activations during training. Let's quickly make one.
# +
#X, y = datasets.make_classification(n_samples=1000, n_features=15, n_informative=15,
# n_redundant=0,n_classes=5)
X_train, X_test, y_train_l, y_test_l = train_test_split(X, y, test_size=0.25)
y_test = to_categorical(y_test_l)
y_train = to_categorical(y_train_l)
model = tf.keras.Sequential([layers.Dense(30, activation='relu', input_shape=(15,)),
layers.Dropout(rate=0.5),
layers.Dense(10, activation='relu'),
layers.Dense(5, activation='softmax')])
model.compile(optimizer='adam',loss=tf.keras.losses.categorical_crossentropy,
metrics=[tf.keras.losses.categorical_crossentropy])
callbacks = [EarlyStopping(monitor='val_loss', patience=5)]
history = model.fit(X_train, y_train, epochs=500, validation_data=(X_test,y_test),
callbacks=callbacks,verbose=3)
# -
epochs = pd.DataFrame(history.history)
epochs.index = np.arange(1,len(epochs)+1)
plt.figure(figsize=(8,8))
sns.lineplot(data=epochs[['loss','val_loss']]).set(title='Loss over Epochs',xlabel='Epochs',
ylabel='Loss')
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.6 64-bit
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
mode = 'static'
if mode == 'learning':
standard_coord = pd.read_csv("/Users/aymericvie/Documents/GitHub/evology/evology/research/TransferStatus/GainMatrixSingle/data/neutral_combined/static_standard.csv")
NT_bump_coord = pd.read_csv("/Users/aymericvie/Documents/GitHub/evology/evology/research/TransferStatus/GainMatrixSingle/data/neutral_combined/static_NT_bump.csv")
VI_bump_coord = pd.read_csv("/Users/aymericvie/Documents/GitHub/evology/evology/research/TransferStatus/GainMatrixSingle/data/neutral_combined/static_VI_bump.csv")
TF_bump_coord = pd.read_csv("/Users/aymericvie/Documents/GitHub/evology/evology/research/TransferStatus/GainMatrixSingle/data/neutral_combined/static_TF_bump.csv")
if mode == 'static':
standard_coord = pd.read_csv("/Users/aymericvie/Documents/GitHub/evology/evology/research/TransferStatus/GainMatrixSingle/data/neutral_static/static_standard.csv")
NT_bump_coord = pd.read_csv("/Users/aymericvie/Documents/GitHub/evology/evology/research/TransferStatus/GainMatrixSingle/data/neutral_static/static_NT_bump.csv")
VI_bump_coord = pd.read_csv("/Users/aymericvie/Documents/GitHub/evology/evology/research/TransferStatus/GainMatrixSingle/data/neutral_static/static_VI_bump.csv")
TF_bump_coord = pd.read_csv("/Users/aymericvie/Documents/GitHub/evology/evology/research/TransferStatus/GainMatrixSingle/data/neutral_static/static_TF_bump.csv")
h = 2/256
# +
''' Gain matrix estimation (the 3-sigma outlier filter is currently commented out below) '''
np.set_printoptions(suppress=True)
margin = 3
def Monthlise(dailyreturn):
MonthlyReturn = ((1 + dailyreturn) ** 21) - 1
return MonthlyReturn
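# A quick sanity check of Monthlise (redefined here so the sketch is self-contained): a 0.1% daily return compounds to roughly 2.1% over 21 trading days.

```python
def Monthlise(dailyreturn):
    # Compound a daily return over 21 trading days into a monthly return
    return ((1 + dailyreturn) ** 21) - 1

monthly = Monthlise(0.001)  # 0.1% per day
```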
# standard_coord = pd.read_csv("/Users/aymericvie/Documents/GitHub/evology/evology/research/TransferStatus/GainMatrixSingle/data/static_standard.csv")
clean_standard_coord = pd.DataFrame()
# new_standard_coord = pd.DataFrame()
# new_standard_coord = standard_coord[np.abs(standard_coord['NT_DayReturns']-standard_coord['NT_DayReturns'].mean()) <= (margin*standard_coord['NT_DayReturns'].std())]
# standard_coord_NT_Return = new_standard_coord['NT_DayReturns'].mean()
clean_standard_coord['NT_DayReturns'] = 100 * standard_coord['NT_DayReturns']
standard_coord_NT_Return = Monthlise(np.nanmean(clean_standard_coord['NT_DayReturns']))
SharpeNT = np.nanmean(clean_standard_coord['NT_DayReturns']) / np.nanstd(clean_standard_coord['NT_DayReturns'])
# new_standard_coord = pd.DataFrame()
# new_standard_coord = standard_coord[np.abs(standard_coord['VI_DayReturns']-standard_coord['VI_DayReturns'].mean()) <= (margin*standard_coord['VI_DayReturns'].std())]
# standard_coord_VI_Return = new_standard_coord['VI_DayReturns'].mean()
clean_standard_coord['VI_DayReturns'] = 100 * standard_coord['VI_DayReturns']
standard_coord_VI_Return = Monthlise(np.nanmean(clean_standard_coord['VI_DayReturns']))
SharpeVI = np.nanmean(clean_standard_coord['VI_DayReturns']) / np.nanstd(clean_standard_coord['VI_DayReturns'])
# new_standard_coord = pd.DataFrame()
# new_standard_coord = standard_coord[np.abs(standard_coord['TF_DayReturns']-standard_coord['TF_DayReturns'].mean()) <= (margin*standard_coord['TF_DayReturns'].std())]
# standard_coord_TF_Return = new_standard_coord['TF_DayReturns'].mean()
clean_standard_coord['TF_DayReturns'] = 100 * standard_coord['TF_DayReturns']
standard_coord_TF_Return = Monthlise(np.nanmean(clean_standard_coord['TF_DayReturns']))
print(standard_coord_NT_Return, standard_coord_VI_Return, standard_coord_TF_Return)
SharpeTF = np.nanmean(clean_standard_coord['TF_DayReturns']) / np.nanstd(clean_standard_coord['TF_DayReturns'])
clean_NT_bump_coord = pd.DataFrame()
# new_NT_bump_coord = pd.DataFrame()
# new_NT_bump_coord = NT_bump_coord[np.abs(NT_bump_coord['NT_DayReturns']-NT_bump_coord['NT_DayReturns'].mean()) <= (margin*NT_bump_coord['NT_DayReturns'].std())]
# NT_bump_NT_Return = new_NT_bump_coord['NT_DayReturns'].mean()
clean_NT_bump_coord['NT_DayReturns'] = 100 * NT_bump_coord['NT_DayReturns']
NT_bump_NT_Return = Monthlise(np.nanmean(clean_NT_bump_coord['NT_DayReturns']))
SharpeNTNT = np.nanmean(clean_NT_bump_coord['NT_DayReturns']) / np.nanstd(clean_NT_bump_coord['NT_DayReturns'])
# new_NT_bump_coord = pd.DataFrame()
# new_NT_bump_coord = NT_bump_coord[np.abs(NT_bump_coord['VI_DayReturns']-NT_bump_coord['VI_DayReturns'].mean()) <= (margin*NT_bump_coord['VI_DayReturns'].std())]
# NT_bump_VI_Return = new_NT_bump_coord['VI_DayReturns'].mean()
clean_NT_bump_coord['VI_DayReturns'] = 100 * NT_bump_coord['VI_DayReturns']
NT_bump_VI_Return = Monthlise(np.nanmean(clean_NT_bump_coord['VI_DayReturns']))
SharpeNTVI = np.nanmean(clean_NT_bump_coord['VI_DayReturns']) / np.nanstd(clean_NT_bump_coord['VI_DayReturns'])
# new_NT_bump_coord = pd.DataFrame()
# new_NT_bump_coord = NT_bump_coord[np.abs(NT_bump_coord['TF_DayReturns']-NT_bump_coord['TF_DayReturns'].mean()) <= (margin*NT_bump_coord['TF_DayReturns'].std())]
# NT_bump_TF_Return = new_NT_bump_coord['TF_DayReturns'].mean()
clean_NT_bump_coord['TF_DayReturns'] = 100 * NT_bump_coord['TF_DayReturns']
NT_bump_TF_Return = Monthlise(np.nanmean(clean_NT_bump_coord['TF_DayReturns']))
SharpeNTTF = np.nanmean(clean_NT_bump_coord['TF_DayReturns']) / np.nanstd(clean_NT_bump_coord['TF_DayReturns'])
print(NT_bump_NT_Return, NT_bump_VI_Return, NT_bump_TF_Return)
clean_VI_bump_coord = pd.DataFrame()
# new_VI_bump_coord = pd.DataFrame()
# new_VI_bump_coord = VI_bump_coord[np.abs(VI_bump_coord['NT_DayReturns']-VI_bump_coord['NT_DayReturns'].mean()) <= (margin*VI_bump_coord['NT_DayReturns'].std())]
# VI_bump_NT_Return = new_VI_bump_coord['NT_DayReturns'].mean()
clean_VI_bump_coord['NT_DayReturns'] = 100 * VI_bump_coord['NT_DayReturns']
VI_bump_NT_Return = Monthlise(np.nanmean(clean_VI_bump_coord['NT_DayReturns']))
SharpeVINT = np.nanmean(clean_VI_bump_coord['NT_DayReturns']) / np.nanstd(clean_VI_bump_coord['NT_DayReturns'])
# new_VI_bump_coord = pd.DataFrame()
# new_VI_bump_coord = VI_bump_coord[np.abs(VI_bump_coord['VI_DayReturns']-VI_bump_coord['VI_DayReturns'].mean()) <= (margin*VI_bump_coord['VI_DayReturns'].std())]
# VI_bump_VI_Return = new_VI_bump_coord['VI_DayReturns'].mean()
clean_VI_bump_coord['VI_DayReturns'] = 100 * VI_bump_coord['VI_DayReturns']
VI_bump_VI_Return = Monthlise(np.nanmean(clean_VI_bump_coord['VI_DayReturns']))
SharpeVIVI = np.nanmean(clean_VI_bump_coord['VI_DayReturns']) / np.nanstd(clean_VI_bump_coord['VI_DayReturns'])
# new_VI_bump_coord = pd.DataFrame()
# new_VI_bump_coord = VI_bump_coord[np.abs(VI_bump_coord['TF_DayReturns']-VI_bump_coord['TF_DayReturns'].mean()) <= (margin*VI_bump_coord['TF_DayReturns'].std())]
# VI_bump_TF_Return = new_VI_bump_coord['TF_DayReturns'].mean()
clean_VI_bump_coord['TF_DayReturns'] = 100 * VI_bump_coord['TF_DayReturns'] # read from VI_bump_coord, matching the rest of this section
VI_bump_TF_Return = Monthlise(np.nanmean(clean_VI_bump_coord['TF_DayReturns']))
SharpeVITF = np.nanmean(clean_VI_bump_coord['TF_DayReturns']) / np.nanstd(clean_VI_bump_coord['TF_DayReturns'])
print(VI_bump_NT_Return, VI_bump_VI_Return, VI_bump_TF_Return)
clean_TF_bump_coord = pd.DataFrame()
# new_TF_bump_coord = pd.DataFrame()
# new_TF_bump_coord = TF_bump_coord[np.abs(TF_bump_coord['NT_DayReturns']-TF_bump_coord['NT_DayReturns'].mean()) <= (margin*TF_bump_coord['NT_DayReturns'].std())]
# TF_bump_NT_Return = new_TF_bump_coord['NT_DayReturns'].mean()
clean_TF_bump_coord['NT_DayReturns'] = 100 * TF_bump_coord['NT_DayReturns']
TF_bump_NT_Return = Monthlise(np.nanmean(clean_TF_bump_coord['NT_DayReturns']))
SharpeTFNT = np.nanmean(clean_TF_bump_coord['NT_DayReturns']) / np.nanstd(clean_TF_bump_coord['NT_DayReturns'])
# new_TF_bump_coord = pd.DataFrame()
# new_TF_bump_coord = TF_bump_coord[np.abs(TF_bump_coord['VI_DayReturns']-TF_bump_coord['VI_DayReturns'].mean()) <= (margin*TF_bump_coord['VI_DayReturns'].std())]
# TF_bump_VI_Return = new_TF_bump_coord['VI_DayReturns'].mean()
clean_TF_bump_coord['VI_DayReturns'] = 100 * TF_bump_coord['VI_DayReturns']
TF_bump_VI_Return = Monthlise(np.nanmean(clean_TF_bump_coord['VI_DayReturns']))
SharpeTFVI = np.nanmean(clean_TF_bump_coord['VI_DayReturns']) / np.nanstd(clean_TF_bump_coord['VI_DayReturns'])
# new_TF_bump_coord = pd.DataFrame()
# new_TF_bump_coord = TF_bump_coord[np.abs(TF_bump_coord['TF_DayReturns']-np.nanmean(TF_bump_coord['TF_DayReturns'])) <= (margin*TF_bump_coord['TF_DayReturns'].std())]
# TF_bump_TF_Return = np.nanmean(new_TF_bump_coord['TF_DayReturns'])
clean_TF_bump_coord['TF_DayReturns'] = 100 * TF_bump_coord['TF_DayReturns']
TF_bump_TF_Return = Monthlise(np.nanmean(clean_TF_bump_coord['TF_DayReturns']))
SharpeTFTF = np.nanmean(clean_TF_bump_coord['TF_DayReturns']) / np.nanstd(clean_TF_bump_coord['TF_DayReturns'])
print(TF_bump_NT_Return, TF_bump_VI_Return, TF_bump_TF_Return)
print('Sharpe')
print(SharpeNT, SharpeVI, SharpeTF)
print(SharpeNTNT, SharpeNTVI, SharpeNTTF)
print(SharpeVINT, SharpeVIVI, SharpeVITF)
print(SharpeTFNT, SharpeTFVI, SharpeTFTF)
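# The Sharpe figures printed above are plain mean-over-standard-deviation ratios of the percentage daily return series, with NaNs ignored and no annualisation. A minimal, self-contained sketch of that computation on toy data (independent of the `*_coord` frames used above):

```python
import numpy as np

def daily_sharpe(returns):
    """NaN-aware Sharpe ratio of a daily return series (no annualisation)."""
    returns = np.asarray(returns, dtype=float)
    return np.nanmean(returns) / np.nanstd(returns)

# toy series: the NaN entries are ignored by the nan-aware reductions
r = np.array([0.01, -0.02, 0.03, np.nan, 0.015, np.nan])
print(daily_sharpe(r))
```

# Multiplying by the square root of the number of trading periods per year would annualise the ratio; the notebook reports the raw daily value.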
# +
# standard_coord['TF_DayReturns'].hist()
# standard_coord['VI_DayReturns'].hist()
# standard_coord['NT_DayReturns'].hist()
import matplotlib
import matplotlib.pyplot as plt
bincount = 20
fig, ((ax1, ax2, ax3), (ax4, ax5, ax6), (ax7, ax8, ax9), (ax10, ax11, ax12)) = plt.subplots(4, 3, sharex = True, figsize = (20, 15))
fig.suptitle('Histogram of day returns (%) (NT, VI, TF)')
ax1.hist(standard_coord['NT_DayReturns'], bins = bincount)
ax1.axvline(x=np.nanmean(standard_coord['NT_DayReturns']), color='r', linestyle='dashed', linewidth=2)
ax1.set_title('NT Returns Standard')
ax2.hist(standard_coord['VI_DayReturns'], bins = bincount)
ax2.axvline(x=np.nanmean(standard_coord['VI_DayReturns']), color='r', linestyle='dashed', linewidth=2)
ax2.set_title('VI Returns Standard')
ax3.hist(standard_coord['TF_DayReturns'], bins = bincount)
ax3.axvline(x=np.nanmean(standard_coord['TF_DayReturns']), color='r', linestyle='dashed', linewidth=2)
ax3.set_title('TF Returns Standard')
ax4.hist(NT_bump_coord['NT_DayReturns'], bins = bincount)
ax4.axvline(x=np.nanmean(NT_bump_coord['NT_DayReturns']), color='r', linestyle='dashed', linewidth=2)
ax4.set_title('NT Bump - NT returns')
ax5.hist(NT_bump_coord['VI_DayReturns'], bins = bincount)
ax5.axvline(x=np.nanmean(NT_bump_coord['VI_DayReturns']), color='r', linestyle='dashed', linewidth=2)
ax5.set_title('NT Bump - VI returns')
ax6.hist(NT_bump_coord['TF_DayReturns'], bins = bincount)
ax6.axvline(x=np.nanmean(NT_bump_coord['TF_DayReturns']), color='r', linestyle='dashed', linewidth=2)
ax6.set_title('NT Bump - TF returns')
ax7.hist(VI_bump_coord['NT_DayReturns'], bins = bincount)
ax7.axvline(x=np.nanmean(VI_bump_coord['NT_DayReturns']), color='r', linestyle='dashed', linewidth=2)
ax7.set_title('VI Bump - NT returns')
ax8.hist(VI_bump_coord['VI_DayReturns'], bins = bincount)
ax8.axvline(x=np.nanmean(VI_bump_coord['VI_DayReturns']), color='r', linestyle='dashed', linewidth=2)
ax8.set_title('VI Bump - VI returns')
ax9.hist(VI_bump_coord['TF_DayReturns'], bins = bincount)
ax9.axvline(x=np.nanmean(VI_bump_coord['TF_DayReturns']), color='r', linestyle='dashed', linewidth=2)
ax9.set_title('VI Bump - TF returns')
ax10.hist(TF_bump_coord['NT_DayReturns'], bins = bincount)
ax10.axvline(x=np.nanmean(TF_bump_coord['NT_DayReturns']), color='r', linestyle='dashed', linewidth=2)
ax10.set_title('TF Bump - NT returns')
ax11.hist(TF_bump_coord['VI_DayReturns'], bins = bincount)
ax11.axvline(x=np.nanmean(TF_bump_coord['VI_DayReturns']), color='r', linestyle='dashed', linewidth=2)
ax11.set_title('TF Bump - VI returns')
ax12.hist(TF_bump_coord['TF_DayReturns'], bins = bincount)
ax12.axvline(x=np.nanmean(TF_bump_coord['TF_DayReturns']), color='r', linestyle='dashed', linewidth=2)
ax12.set_title('TF Bump - TF returns')
plt.show()
# +
print('Maartens equation before 9')
print(sum(~np.isnan(clean_standard_coord['NT_DayReturns'])))
print(( 1 / sum(~np.isnan(clean_standard_coord['NT_DayReturns']))))
print(np.nanprod(clean_standard_coord['NT_DayReturns']))
averageNT = (100 * np.nanprod(clean_standard_coord['NT_DayReturns']) - 1) ** ( 1 / sum(~np.isnan(clean_standard_coord['NT_DayReturns'])))
print(averageNT)
averageTF = (np.nanprod(TF_bump_coord['TF_DayReturns']) - 1) ** (1/sum(~np.isnan(TF_bump_coord['TF_DayReturns'])))
print(averageTF)
np.where(TF_bump_coord['TF_DayReturns'] == 0)
print(TF_bump_coord['TF_DayReturns'])
averageTF = (np.nanprod(TF_bump_coord['TF_DayReturns']) - 1) ** (1/sum(~np.isnan(TF_bump_coord['TF_DayReturns'])))
print(averageTF)
print('This gives nan. why?')
# nanprod of the raw daily returns is close to 0, so nanprod - 1 is negative,
# and a negative base raised to a fractional power is nan in real arithmetic.
print(np.nanprod(TF_bump_coord['TF_DayReturns']) - 1)
print(1/sum(~np.isnan(TF_bump_coord['TF_DayReturns'])))
print('Trying average sum')
print(np.nanmean(TF_bump_coord['TF_DayReturns']))
# +
import statsmodels as stats
import scipy
import pingouin as pg
def write_signif(res):
    if res['p-val'][0] < 0.01:
        signif = '^{***}'
    elif res['p-val'][0] < 0.05:
        signif = '^{**}'
    elif res['p-val'][0] < 0.1:
        signif = '^{*}'
    else:
        signif = ''
    return signif
'''
Null hypothesis: means are equal
Alternative hypothesis: means are different
For p-value >= alpha: fail to reject null hypothesis
For p-value < alpha: reject H0 and accept HA
'''
print('--NT ROW--')
res = pg.ttest(1/h * (clean_NT_bump_coord['NT_DayReturns'] - clean_standard_coord['NT_DayReturns']), 0, correction=False, confidence=0.95)
print(res)
signif00 = write_signif(res)
ci00 = res['CI95%'][0]
print(1/h * (clean_NT_bump_coord['NT_DayReturns'] - clean_standard_coord['NT_DayReturns']).mean())
res = pg.ttest(1/h * (clean_VI_bump_coord['NT_DayReturns'] - clean_standard_coord['NT_DayReturns']), 0, correction=False, confidence=0.95)
print(res)
signif01 = write_signif(res)
ci01 = res['CI95%'][0]
print(1/h * (clean_VI_bump_coord['NT_DayReturns'] - clean_standard_coord['NT_DayReturns']).mean())
res = pg.ttest(1/h * (clean_TF_bump_coord['NT_DayReturns'] - clean_standard_coord['NT_DayReturns']), 0, correction=False, confidence=0.95)
print(res)
signif02 = write_signif(res)
ci02 = res['CI95%'][0]
print(1/h * (clean_TF_bump_coord['NT_DayReturns'] - clean_standard_coord['NT_DayReturns']).mean())
print('--VI ROW--')
res = pg.ttest(1/h * (clean_NT_bump_coord['VI_DayReturns'] - clean_standard_coord['VI_DayReturns']), 0, correction=False, confidence=0.95)
print(res)
signif10 = write_signif(res)
ci10 = res['CI95%'][0]
print(1/h * (clean_NT_bump_coord['VI_DayReturns'] - clean_standard_coord['VI_DayReturns']).mean())
res = pg.ttest(1/h * (clean_VI_bump_coord['VI_DayReturns'] - clean_standard_coord['VI_DayReturns']), 0, correction=False, confidence=0.95)
print(res)
signif11 = write_signif(res)
ci11 = res['CI95%'][0]
print(1/h * (clean_VI_bump_coord['VI_DayReturns'] - clean_standard_coord['VI_DayReturns']).mean())
res = pg.ttest(1/h * (clean_TF_bump_coord['VI_DayReturns'] - clean_standard_coord['VI_DayReturns']), 0, correction=False, confidence=0.95)
print(res)
signif12 = write_signif(res)
ci12 = res['CI95%'][0]
print(1/h * (clean_TF_bump_coord['VI_DayReturns'] - clean_standard_coord['VI_DayReturns']).mean())
print('--TF ROW--')
res = pg.ttest(1/h * (clean_NT_bump_coord['TF_DayReturns'] - clean_standard_coord['TF_DayReturns']), 0, correction=False, confidence=0.95)
print(res)
signif20 = write_signif(res)
ci20 = res['CI95%'][0]
print(1/h * (clean_NT_bump_coord['TF_DayReturns'] - clean_standard_coord['TF_DayReturns']).mean())
res = pg.ttest(1/h * (clean_VI_bump_coord['TF_DayReturns'] - clean_standard_coord['TF_DayReturns']), 0, correction=False, confidence=0.95)
print(res)
signif21 = write_signif(res)
ci21 = res['CI95%'][0]
print(1/h * (clean_VI_bump_coord['TF_DayReturns'] - clean_standard_coord['TF_DayReturns']).mean())
res = pg.ttest(1/h * (clean_TF_bump_coord['TF_DayReturns'] - clean_standard_coord['TF_DayReturns']), 0, correction=False, confidence=0.95)
print(res)
signif22 = write_signif(res)
ci22 = res['CI95%'][0]
print(1/h * (clean_TF_bump_coord['TF_DayReturns'] - clean_standard_coord['TF_DayReturns']).mean())
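# Each `pg.ttest(x, 0, ...)` call above is a two-sided one-sample t-test of the mean of the scaled return differences against zero. The same statistic can be cross-checked with SciPy; a sketch on synthetic data (the series below is a stand-in, not the notebook's data):

```python
import numpy as np
from scipy import stats

# synthetic stand-in for the scaled return differences 1/h * (bumped - standard)
rng = np.random.default_rng(0)
diffs = rng.normal(loc=0.5, scale=1.0, size=200)

# two-sided one-sample t-test of the mean against 0, as pg.ttest(x, 0) performs
t_stat, p_val = stats.ttest_1samp(diffs, popmean=0.0)
print(t_stat, p_val)
```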
# +
''' There are different ways to estimate the gain matrix.
Essentially, the DPi/DW is 1/h * [Pi_i(w') - Pi_i(w)]
h = 2/N = 2/256, this is set
Pi_i(w) is the average return at coords w.
For i = TF, the corresponding data is standard_coord['TF_DayReturns']
The data there are means for each period of ind.DailyReturn = (ind.wealth / ind.prev_wealth) - 1
'''
# print(standard_coord['TF_DayReturns'])
# print(TF_bump_coord['TF_DayReturns'])
meanTFStd = np.nanmean(standard_coord['TF_DayReturns'])
meanTFTF= np.nanmean(TF_bump_coord['TF_DayReturns'])
print(meanTFStd, meanTFTF)
''' The value we want in the GM is simple:
1/h * (Pi at w' - Pi at w)
Hence it is '''
term = (1/h) * (meanTFTF - meanTFStd)
print(term)
''' in percentages: '''
print(100 * term)
''' And, is that different from 0? '''
''' Previous test gives same results '''
# res = pg.ttest(1/h * (clean_TF_bump_coord['TF_DayReturns'] - clean_standard_coord['TF_DayReturns']), 0, correction=False, confidence=0.95)
# print(res)
# signif22 = write_signif(res)
# ci22 = res['CI95%'][0]
# print(1/h * (clean_TF_bump_coord['TF_DayReturns'] - clean_standard_coord['TF_DayReturns']).mean())
''' New test '''
res = pg.ttest(1/h * (TF_bump_coord['TF_DayReturns'] - standard_coord['TF_DayReturns']), 0, correction=False, confidence=0.95)
print(res)
signif222 = write_signif(res)
ci222 = res['CI95%'][0]
print(1/h * (TF_bump_coord['TF_DayReturns'] - standard_coord['TF_DayReturns']).mean())
print('----')
''' Here is what the tests are testing '''
tests = (1/h) * (clean_TF_bump_coord['TF_DayReturns'] - clean_standard_coord['TF_DayReturns']).mean()
print(tests)
clean_TF_bump_coord['TF_DayReturns'] = 100 * TF_bump_coord['TF_DayReturns']
TF_bump_TF_Return = Monthlise(np.nanmean(clean_TF_bump_coord['TF_DayReturns']))
clean_standard_coord['TF_DayReturns'] = 100 * standard_coord['TF_DayReturns']
standard_coord_TF_Return = Monthlise(np.nanmean(clean_standard_coord['TF_DayReturns']))
''' What our current GM does'''
GainMatrix22 = round(1/h * (TF_bump_TF_Return - standard_coord_TF_Return),3)
print(str(GainMatrix22) + str(signif22))
print('-----')
standard_coord['A'] = [(((1 + r) ** 21) - 1) for r in standard_coord['TF_DayReturns']]
standard_coord['B'] = [(((1 + r) ** 21) - 1) for r in TF_bump_coord['TF_DayReturns']]
meanTFStd = np.nanmean(standard_coord['A'])
meanTFTF= np.nanmean(standard_coord['B'])
print(meanTFStd, meanTFTF)
''' The value we want in the GM is simple:
1/h * (Pi at w' - Pi at w)
Hence it is '''
term = (1/h) * (meanTFTF - meanTFStd)
print(term)
''' in percentages: '''
print(100 * term)
''' New test '''
res = pg.ttest(1/h * (standard_coord['B'] - standard_coord['A']), 0, correction=False, confidence=0.95)
print(res)
signif222 = write_signif(res)
ci222 = res['CI95%'][0]
print(1/h * (standard_coord['B'] - standard_coord['A']).mean())
# +
np.set_printoptions(suppress=True)
GainMatrix = np.zeros((3,3))
h = 2/256
# h = 2/128
''' It is multiplied by 1/h for the finite difference '''
GainMatrix[0,0] = round(1/h * (NT_bump_NT_Return - standard_coord_NT_Return),3)
GainMatrix[0,1] = round(1/h * (VI_bump_NT_Return - standard_coord_NT_Return),3)
GainMatrix[0,2] = round(1/h * (TF_bump_NT_Return - standard_coord_NT_Return),3)
GainMatrix[1,0] = round(1/h * (NT_bump_VI_Return - standard_coord_VI_Return),3)
GainMatrix[1,1] = round(1/h * (VI_bump_VI_Return - standard_coord_VI_Return),3)
GainMatrix[1,2] = round(1/h * (TF_bump_VI_Return - standard_coord_VI_Return),3)
GainMatrix[2,0] = round(1/h * (NT_bump_TF_Return - standard_coord_TF_Return),3)
GainMatrix[2,1] = round(1/h * (VI_bump_TF_Return - standard_coord_TF_Return),3)
GainMatrix[2,2] = round(1/h * (TF_bump_TF_Return - standard_coord_TF_Return),3)
# print(GainMatrix)
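# The nine assignments above are each one forward finite difference: the bumped-minus-baseline monthly return divided by the bump size h. A self-contained sketch of the same construction, with hypothetical return values (all names and numbers here are illustrative, not the notebook's data):

```python
import numpy as np

h_fd = 2 / 256  # bump size used in the finite difference

# hypothetical illustrative values: baseline monthly returns per strategy,
# and returns observed for each strategy after bumping each strategy's wealth
baseline = {'NT': 1.0, 'VI': 1.2, 'TF': 0.8}
bumped = {
    'NT': {'NT': 1.1, 'VI': 1.15, 'TF': 0.82},
    'VI': {'NT': 0.95, 'VI': 1.3, 'TF': 0.79},
    'TF': {'NT': 1.02, 'VI': 1.18, 'TF': 0.9},
}

strategies = ['NT', 'VI', 'TF']
# row = observed strategy, column = bumped strategy, matching the assignments above
gain = np.array([[(bumped[b][s] - baseline[s]) / h_fd for b in strategies]
                 for s in strategies]).round(3)
print(gain)
```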
# +
from tabulate import tabulate
from texttable import Texttable
import latextable
rows = [['', 'NT', 'VI', 'TF'],
['NT', str(GainMatrix[0,0]) + str(signif00), str(GainMatrix[0,1]) + str(signif01), str(GainMatrix[0,2]) + str(signif02)],
['VI', str(GainMatrix[1,0]) + str(signif10), str(GainMatrix[1,1]) + str(signif11), str(GainMatrix[1,2]) + str(signif12)],
['TF', str(GainMatrix[2,0]) + str(signif20), str(GainMatrix[2,1]) + str(signif21), str(GainMatrix[2,2]) + str(signif22)]]
table = Texttable()
table.set_cols_align(["C"] * 4)
table.set_deco(Texttable.HEADER | Texttable.VLINES | Texttable.BORDER)
table.add_rows(rows)
print('\nTexttable Table:')
print(table.draw())
print(latextable.draw_latex(table,
    caption="Gain matrix at the equal wealth coordinates. Significance is shown for p-values below 0.01 (***), 0.05 (**) and 0.1 (*)."))
# +
from tabulate import tabulate
from texttable import Texttable
import latextable
rows = [['', 'NT', 'VI', 'TF'],
['NT', str(ci00), str(ci01), str(ci02)],
['VI', str(ci10), str(ci11), str(ci12)],
['TF', str(ci20), str(ci21), str(ci22)]]
table = Texttable()
table.set_cols_align(["C"] * 4)
table.set_deco(Texttable.HEADER | Texttable.VLINES | Texttable.BORDER)
table.add_rows(rows)
print('\nTexttable Table:')
print(table.draw())
print(latextable.draw_latex(table, caption="95\% Confidence intervals of the gain matrix entries at the equal wealth coordinates"))
| evology/bin/GainMatrixSingle/GainMatrixNb.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:s2s-future-dragonstone]
# language: python
# name: conda-env-s2s-future-dragonstone-py
# ---
# # Modeling Source-to-Sink systems using FastScape: 2. Basic FastScape use
# 
# ## Using FastScape
#
# Let's first import a few libraries that are essential to use FastScape
import xsimlab as xs # modeling framework used for FastScape development
import xarray as xr # xarray is a python package to work with labelled multi-dimensional arrays
import numpy as np # numpy is the basic numerical package in python
import matplotlib.pyplot as plt # matplotlib is the basic plotting package
#plt.style.use('dark_background')
# %load_ext xsimlab.ipython
# FastScape itself contains several model types, e.g., models that include diffusion (or not), sediment transport/deposition (which we will use later), flexure, etc. Here we will use the *basic_model*, which solves the equation described above, i.e., topographic evolution controlled by the balance between tectonic uplift, stream incision and hillslope diffusion.
from fastscape.models import basic_model
# Let's look at it and compare its guts to the basic equation:
#
# $$\frac{\partial h}{\partial t}=U-K_fA^mS^n+K_d\nabla^2h$$
#
# domain definition:
#
# $$x\in[0,L_x]$$ and $$y\in[0,L_y]$$
#
# boundary conditions:
#
# $$h(x=0, x=L_x, y=0, y=L_y)=0$$
#
# and initial condition:
#
# $$h(t=0) = rand(x,y)$$
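# FastScape solves this equation implicitly and efficiently on 2-D grids; purely to fix ideas, here is a naive explicit 1-D sketch of the uplift and hillslope-diffusion terms only (stream-power incision omitted). This is an illustration, not how FastScape is implemented:

```python
import numpy as np

def step_uplift_diffusion(elev, dx, dt, U=1e-3, Kd=1e-2):
    """One explicit Euler step of dh/dt = U + Kd * d2h/dx2 with h = 0 at both ends."""
    lap = (np.roll(elev, -1) - 2 * elev + np.roll(elev, 1)) / dx**2
    new = elev + dt * (U + Kd * lap)
    new[0] = new[-1] = 0.0  # fixed base-level boundary conditions
    return new

elev = np.zeros(101)           # flat initial topography
for _ in range(100):           # 100 steps of dt = 100 yr
    elev = step_uplift_diffusion(elev, dx=1000.0, dt=100.0)
print(elev.max())              # interior uplifts by roughly U * total time
```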
basic_model
# We can also display the model graphically to examine its various components:
basic_model.visualize()
# To use FastScape, we need to specify the value of the input parameters. For this we use a functionality of the xsimlab package that creates an input to the model with default or empty values for the various necessary parameters:
# +
# # %create_setup basic_model --default --verbose
import xsimlab as xs
ds_in = xs.create_setup(
model=basic_model,
clocks={'time': np.linspace(0,1e7,101)},
input_vars={
# nb. of grid nodes in (y, x)
'grid__shape': [101,101],
# total grid length in (y, x)
'grid__length': [1e5,1e5],
# node status at borders
'boundary__status': 'fixed_value',
# uplift rate
'uplift__rate': 1e-3,
# bedrock channel incision coefficient
'spl__k_coef': 1e-5,
# drainage area exponent
'spl__area_exp': 0.4,
# slope exponent
'spl__slope_exp': 1,
# diffusivity (transport coefficient)
'diffusion__diffusivity': 1e-2,
# random seed
'init_topography__seed': None,
},
output_vars={'topography__elevation': 'time'}
)
# -
# Let's now modify this setup to solve the following problem. We will assume that:
# 1. the uplifting region is 100$\times$100 km;
# 2. we will discretize it using 101$\times$101 nodes;
# 3. all boundary conditions are fixed (base level);
# 4. the uplift rate is 1 mm/yr;
# 5. $K_f=10^{-5}$, $K_d=10^{-2}$, $m=0.4$ and $n=1$
#
# We also need to define a *clock* that will determine the time steps at which we wish to compute the solution. Here, we will compute the solution at 101 time steps from 0 to 10 Myr.
#
# Let's look at what is inside the model set up (which is an xarray dataset...)
ds_in
# Now we can run the model...
ds_in.xsimlab.run(model=basic_model)
# We see that the model has run but nothing has really changed (i.e. no output has been stored). To save any output, we must decide on what needs to be saved. This is done by specifying *output_vars* in *ds_in*. To see what is available, explore the model and its processes. For example, the process *topography* contains the variable *elevation*, which can be accessed using a double underscore: *topography__elevation*:
basic_model.topography
# So we add the output variable *topography__elevation* with an output frequency set by the clock *time*.
#
# Let's run it again, noting that it is very useful to store the output in another dataset so that it can be displayed...
ds_out = ds_in.xsimlab.run(model=basic_model)
ds_out.topography__elevation.isel(time=-1).plot()
# It is also useful to store only a sub-sample of time steps. For this, we can define another clock (*out*). Note that when more than one clock has been defined, one needs to specify which is the *master* clock... This is what has been done in the following cell, which by now should look similar/identical to the one you created above...
# +
# # %create_setup basic_model --default --verbose
import xsimlab as xs
ds_in = xs.create_setup(
model=basic_model,
clocks={'time': np.linspace(0,1e7,101),
'out': np.linspace(0,1e7,11)},
master_clock='time',
input_vars={
# nb. of grid nodes in (y, x)
'grid__shape': [101, 101],
# total grid length in (y, x)
'grid__length': [1e5, 1e5],
# node status at borders
'boundary__status': 'fixed_value',
# uplift rate
'uplift__rate': 1e-3,
# bedrock channel incision coefficient
'spl__k_coef': 1e-5,
# drainage area exponent
'spl__area_exp': 0.4,
# slope exponent
'spl__slope_exp': 1,
# diffusivity (transport coefficient)
'diffusion__diffusivity': 1e-2,
# random seed
'init_topography__seed': None,
},
output_vars={'topography__elevation': 'out'}
)
# -
# Let's now run this new model.
#
# It is also useful to use a progress bar, especially if the model run is long. For this we wrap the basic run instruction in a *context manager*:
with xs.monitoring.ProgressBar():
ds_out = ds_in.xsimlab.run(model=basic_model)
# We can now have a look at the solution. For this we use the *plot()* function, which we apply to the *topography__elevation* variable after selecting the last time step using *.isel(out=-1)*
ds_out.topography__elevation.isel(out=-1).plot()
# There are many ways to inspect/display the model results. You can use hvplot, for example:
# +
import hvplot.xarray
ds_out.topography__elevation.hvplot.image(x='x', y='y',
cmap=plt.cm.viridis,
groupby='out', width=400)
# -
# You can also use a widget called TopoViz3D, part of ipyfastscape developed by <NAME>.
# +
from ipyfastscape import TopoViz3d
app = TopoViz3d(ds_out, canvas_height=600, time_dim="out")
app.components['background_color'].set_color('lightgray')
app.components['vertical_exaggeration'].set_factor(5)
app.components['timestepper'].go_to_time(ds_out.out[-1])
app.show()
# -
# The output can also be saved to many formats, including *netcdf*, which can be loaded into Paraview, for example...
| notebooks/MountainSource/FastScape_S2S_Source_2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Downloading Overlays
#
# This notebook demonstrates how to download an FPGA overlay and examine programmable logic state.
#
# ## 1. Instantiating an overlay
# With the following overlay bundle present in the `overlays` folder, users can instantiate the overlay easily.
#
# * A bitstream file (\*.bit).
# * A tcl file (\*.tcl).
# * A python class (\*.py).
#
# On PYNQ-Z1, for example, the base overlay can be loaded by:
# ```python
# from pynq.overlays.base import BaseOverlay
# overlay = BaseOverlay("base.bit")
# ```
# Users can also use an absolute file path to instantiate the overlay.
# ```python
# from pynq.overlays.base import BaseOverlay
# overlay = BaseOverlay("/home/xilinx/pynq/bitstream/base.bit")
# ```
#
# In the following cell, we get the current bitstream loaded on PL, and try to download it multiple times.
# +
from pynq import PL
from pynq import Overlay
ol = Overlay(PL.bitfile_name)
# -
# Now we can check the download timestamp for this overlay.
ol.download()
ol.timestamp
# ## 2. Examining the PL state
#
# While there can be multiple overlay instances in Python, there is only one bitstream that is currently loaded onto the programmable logic (PL).
#
# This bitstream state is held in the singleton class, PL, and is available for user queries.
PL.bitfile_name
PL.timestamp
# Users can verify whether an overlay instance is currently loaded using the Overlay `is_loaded()` method.
ol.is_loaded()
# ## 3. Overlay downloading overhead
#
# Finally, using Python, we can see the bitstream download time over 50 downloads.
# +
import time
import matplotlib.pyplot as plt
length = 50
time_log = []
for i in range(length):
start = time.time()
ol.download()
end = time.time()
time_log.append((end-start)*1000)
# %matplotlib inline
plt.plot(range(length), time_log, 'ro')
plt.title('Bitstream loading time (ms)')
plt.axis([0, length, 0, 1000])
plt.show()
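# A note on the measurement itself: `time.time()` has limited resolution on some platforms, and `time.perf_counter()` is generally preferred for timing short operations. A hardware-independent sketch of the same measurement pattern (the workload below is only a stand-in for `ol.download()`):

```python
import time

def time_call(fn, *args, repeats=5):
    """Return per-call wall-clock durations in milliseconds."""
    durations = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        durations.append((time.perf_counter() - start) * 1000)
    return durations

# example with a trivial workload standing in for ol.download()
times_ms = time_call(sum, range(100000))
print(len(times_ms))
```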
| pynq/notebooks/common/overlay_download.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # The Exact Cover problem
#
# We first explain the Exact Cover problem.
#
# Consider a set U of natural numbers, together with several groups $V_{1}, V_{2}, \ldots, V_{N}$ containing those numbers. A single number may belong to more than one group. The Exact Cover problem asks us to pick some of the groups $V_{i}$ such that no number appears more than once among the picked groups and, together, they contain exactly the same set of numbers as U.
# The variant that additionally minimises the number of picked groups is called Smallest Exact Cover.
# ## Setup
# We will solve this using wildqat.
# If wildqat is not installed, install it for your environment as follows:
# ```bash
# pip install wildqat
# ```
# Import the required libraries and instantiate a wildqat object.
# +
# %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import wildqat as wq
# -
# ## Building the QUBO
# We construct the QUBO matrix for the problem we want to solve.
#
# Let the set of natural numbers be $U = \{1, \ldots, n\}$ and the groups be $V_{i} \subseteq U(i=1, \ldots, N)$. Let $x_{i} \in \{1, 0\}$ indicate whether the i-th group was picked: 1 if picked, 0 otherwise. We now look for a cost function $E_{A}$ that is minimised when each natural number (call it α) is contained in exactly one of the picked groups.
#
# Taking
#
# $E_{A} = A \sum _ { \alpha = 1 } ^ { n } \left( 1 - \sum _ { i : \alpha \in V _ { i } } x _ { i } \right) ^ { 2 }$
#
# we get $E_{A} = 0$ whenever exactly one picked group contains each natural number α.
#
# Let us convert this into QUBO form. First, expand the square:
#
# $E_{A} = A \sum _ { \alpha = 1 } ^ { n } \{ 1 - 2\sum _ { i : \alpha \in V _ { i } } x _ { i } + ( \sum _ { i : \alpha \in V _ { i } } x _ { i } ) ^ { 2 } \} $
#
# Since we are minimising $E_{A}$, the constant first term inside the braces can be ignored.
# Using $x_{i} \in \{1,0\}$, the second term can be rewritten as
#
# $ - 2\sum _ { i : \alpha \in V _ { i } } x _ { i } = - 2\sum _ { i = j, i : \alpha \in V _ { i }, j : \alpha \in V _ { j } } x _ { i } x _ {j}$
#
# Splitting the third term into the cases i = j and $i \neq j$, it can be rewritten as
#
# $ ( \sum _ { i : \alpha \in V _ { i } } x _ { i } ) ^ { 2 } = \sum _ { i = j, i : \alpha \in V _ { i }, j : \alpha \in V _ { j } } x _ { i } x _ {j} + 2 \sum _ { i \neq j, i : \alpha \in V _ { i }, j : \alpha \in V _ { j } } x _ { i } x _ {j} $
#
# Putting this together,
#
# $E_{A} = A \sum _ { \alpha = 1 } ^ { n } ( - \sum _ { i = j, i : \alpha \in V _ { i }, j : \alpha \in V _ { j } } x _ { i } x _ {j} + 2 \sum _ { i \neq j, i : \alpha \in V _ { i }, j : \alpha \in V _ { j } } x _ { i } x _ {j} )$
#
# which is now in QUBO form.
# +
U = [1,2,3,4,5,6,7,8,9,10]
A = 1
def get_qubo(V):
Q = np.zeros( (len(V), len(V)) )
for i in range(len(V)):
for j in range(len(V)):
for k in range(len(U)):
alpha = U[k]
                in_Vi = V[i].count(alpha) > 0 # is alpha in V[i]?
                in_Vj = V[j].count(alpha) > 0 # is alpha in V[j]?
if i == j and in_Vi:
Q[i][j] += -1
elif i < j and in_Vi and in_Vj:
Q[i][j] += 2
return Q * A
# -
# We also define a function to display the results.
def display_answer(list_x, energies = None, show_graph = False):
print("Result x:", list_x)
text = ""
for i in range(len(list_x)):
if(list_x[i]):
text += str(V[i])
print("Picked {} group(s): {}".format(sum(list_x), text))
    if energies is not None:
        print("Energy:", energies[-1])
    if show_graph:
        plt.plot(energies)
        plt.show()
# Running the following shows that the correct answer is obtained.
V = [ [1,2], [3,4,5,6], [7,8,9,10], [1,3,5], [10] ]
a = wq.opt()
a.qubo = get_qubo(V)
answer = a.sa()
display_answer(answer)
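# For a problem this small, the annealer's answer can be cross-checked by exhaustive search. A self-contained brute-force sketch in pure Python, independent of wildqat:

```python
from itertools import combinations

def exact_covers(U, V):
    """Return all selections of groups in V that partition U exactly."""
    covers = []
    for r in range(1, len(V) + 1):
        for idx in combinations(range(len(V)), r):
            picked = [x for i in idx for x in V[i]]
            if sorted(picked) == sorted(U):
                covers.append(idx)
    return covers

U = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
V = [[1, 2], [3, 4, 5, 6], [7, 8, 9, 10], [1, 3, 5], [10]]
print(exact_covers(U, V))
```

# The unique exact cover here is the index triple (0, 1, 2), i.e. {1,2}, {3,4,5,6} and {7,8,9,10}.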
# ## Making V a bit more complex
# Let us make V a little more complex (by adding two groups) and run again.
V = [ [1,2], [3,4,5,6], [7,8,9,10], [1,3,5], [10], [7,9], [2,4,6,8] ]
a = wq.opt()
a.qubo = get_qubo(V)
answer = a.sa()
display_answer(answer)
# We can see that the correct answer is obtained.
# ## Extending to Smallest Exact Cover
# Next, let us tackle the Smallest Exact Cover problem.
# To minimise the number of picked groups, consider the following $E_{B}$:
#
# $ E _ { B } = B \sum _ { i } x _ { i } $
#
# Again using $x_{i} \in \{1,0\}$, this can be rewritten as
#
# $ E _ { B } = B \sum _ { i, j : i = j } x _ { i } x _ {j} $
#
# We then minimise $E = E_{A} + E_{B}$.
B = A / len(U) * 0.99
def get_qubo_min(Q):
for i in range(len(V)):
Q[i][i] += B
return Q
# ### Trying the previous implementation
# First, let us run the implementation with only $ E _ {A} $, without this extension, five times.
#
# Looking at the results, answers picking three groups and answers picking four groups are mixed together; the smallest selection is not always chosen.
# +
V = [ [1,2,3,4], [5,6,7,8], [9,10], [1,2], [3,4], [5,6], [7,8,9,10]]
for i in range(5):
    print("--- Run {}".format(i+1))
a = wq.opt()
a.qubo = get_qubo(V)
answer = a.sa()
display_answer(answer, a.E)
# -
# ### Trying the new implementation
# Now let us try the implementation that uses $ E _ {A} + E_{B}$.
#
# The results show that the correct answer (three groups) is usually chosen, but occasionally a slightly higher-energy incorrect answer (four groups) is chosen instead.
for i in range(5):
    print("--- Run {}".format(i+1))
a = wq.opt()
a.qubo = get_qubo_min(get_qubo(V))
answer = a.sa()
display_answer(answer, a.E)
# ### An adversarial case
# Finally, we try an adversarial case.
# The correct answer is to pick {1,2}{3}{4}{5}{6}{7}{8}{9}{10}.
#
# The results show that the correct answer is usually chosen, but occasionally a slightly higher-energy incorrect answer is chosen instead.
V = [ [1,2], [3], [4], [5], [6], [7], [8], [9], [10], [2,3,4,5,6,7,8,9,10]]
for i in range(5):
    print("--- Run {}".format(i+1))
a = wq.opt()
a.qubo = get_qubo_min(get_qubo(V))
answer = a.sa()
display_answer(answer, a.E)
| examples_ja/tutorial007_exact_cover.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Try it out: <https://gishub.org/gee-ngrok>
import ee
import geemap
# +
# geemap.update_package()
# +
Map = geemap.Map(center=[37.75, -122.45], zoom=12)
S2 = ee.ImageCollection('COPERNICUS/S2_SR') \
.filterBounds(ee.Geometry.Point([-122.45, 37.75])) \
.filterMetadata('CLOUDY_PIXEL_PERCENTAGE', 'less_than', 10)
vis_params = {"min": 0,
"max": 4000,
"bands": ["B8", "B4", "B3"]}
Map.addLayer(S2, {}, "Sentinel-2", False)
Map.add_time_slider(S2, vis_params)
Map
# -
# 
| examples/notebooks/72_time_slider_gui.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # CPA on WeatherBench forecasts
# Code and Data is from Weather Bench https://github.com/pangeo-data/WeatherBench
import numpy as np
import xarray as xr
import matplotlib.pyplot as plt
import seaborn as sns
from score import *
from urocc import cpa
res = '5.625'
DATADIR = 'D:/'
PREDDIR = 'D:/baselines/'
t850_valid = load_test_data(f'{DATADIR}temperature_850', 't')
cnn_3d = xr.open_dataset(f'{PREDDIR}fccnn_3d.nc')
tigge = xr.open_dataset(f'{PREDDIR}/tigge_{res}deg.nc')
t42 = xr.open_dataset(f'{PREDDIR}/t42_5.625deg.nc')
t63 = xr.open_dataset(f'{PREDDIR}/t63_5.625deg.nc')
lr_3d = xr.open_dataset(f'{PREDDIR}fc_lr_3d.nc')
persistence = xr.open_dataset(f'{PREDDIR}persistence_{res}.nc')
# +
# define computation of RMSE per latitude
def compute_rmse_lat(da_fc, da_true):
error = da_fc - da_true
rmse = np.sqrt(((error)**2).mean(dim= ['time','lon']))
if type(rmse) is xr.Dataset:
rmse = rmse.rename({v: v + '_rmse' for v in rmse})
    else:  # DataArray
        rmse.name = error.name + '_rmse' if error.name is not None else 'rmse'
return rmse
def evaluate_iterative_forecast(fc_iter, da_valid):
rmses = []
for lead_time in fc_iter.lead_time:
fc = fc_iter.sel(lead_time=lead_time)
fc['time'] = fc.time + np.timedelta64(int(lead_time), 'h')
rmses.append(compute_rmse_lat(fc, da_valid))
return xr.concat(rmses, 'lead_time')
def compute_cpa_lon(da_fc, da_true):
    latitude = da_true.lat.values
    longitude = da_true.lon.values
    # restrict both arrays to their common time stamps
    time_stamp = (da_fc - da_true).time.values
    da_fc_2 = da_fc.sel(time=time_stamp)
    da_true_2 = da_true.sel(time=time_stamp)
    cpas = []
    for lat in latitude:
        cpa_lon = []
        a1 = da_true_2.sel(lat=lat)
        b1 = da_fc_2.sel(lat=lat)
        for lon in longitude:
            a = a1.sel(lon=lon).values.flatten()
            b = b1.sel(lon=lon).values.flatten()
            cpa_val = cpa(a, b)
            cpa_lon.append(cpa_val)
        cpas.append(np.mean(cpa_lon))
    return cpas
def evaluate_iterative_cpa_lon(fc_iter, da_valid):
    fc = fc_iter.sel(lead_time=3*24)
    fc['time'] = fc.time + np.timedelta64(int(3*24), 'h')
    return compute_cpa_lon(fc, da_valid)
# -
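# The xarray reductions above amount to a NaN-aware RMSE averaged over selected dimensions. A plain-NumPy sketch of the same idea on a toy array (not tied to the WeatherBench data):

```python
import numpy as np

def rmse_over_axes(forecast, truth, axes):
    """NaN-aware RMSE of forecast against truth, averaged over the given axes."""
    err = forecast - truth
    return np.sqrt(np.nanmean(err ** 2, axis=axes))

# toy field shaped (time, lat, lon): averaging over time and lon leaves one value per latitude
truth = np.zeros((4, 3, 5))
forecast = np.full((4, 3, 5), 2.0)
per_lat_rmse = rmse_over_axes(forecast, truth, axes=(0, 2))
print(per_lat_rmse)
```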
rmse_tigge = evaluate_iterative_forecast(tigge['t'], t850_valid).sel(lead_time=3*24).values
rmse_t42 = evaluate_iterative_forecast(t42['t'], t850_valid).sel(lead_time=3*24).values
rmse_t63 = evaluate_iterative_forecast(t63['t'], t850_valid).sel(lead_time=3*24).values
rmse_persistence = evaluate_iterative_forecast(persistence['t'], t850_valid).sel(lead_time=3*24).values
rmse_lr3d = compute_rmse_lat(lr_3d['t'], t850_valid)
rmse_cnn3d = compute_rmse_lat(cnn_3d['t'], t850_valid)
cpa_tigge = evaluate_iterative_cpa_lon(tigge['t'], t850_valid)
cpa_t42 = evaluate_iterative_cpa_lon(t42['t'], t850_valid)
cpa_t63 = evaluate_iterative_cpa_lon(t63['t'], t850_valid)
cpa_persistence = evaluate_iterative_cpa_lon(persistence['t'], t850_valid)
cpa_lr3d = compute_cpa_lon(lr_3d['t'], t850_valid)
cpa_cnn3d = compute_cpa_lon(cnn_3d['t'], t850_valid)
latitude = tigge.lat.values
col = ["#00BA38", "#00BFC4", "#619CFF", "#F564E3", "#F8766D", "#B79F00"]
plt.plot(rmse_tigge, color=col[0], label = "HRES")
plt.plot(rmse_t63, color=col[1], label="T63")
plt.plot(rmse_t42, color=col[2], label="T42")
plt.plot(rmse_cnn3d, color=col[3], label = "CNN")
plt.plot(rmse_lr3d, color=col[4], label="LR")
plt.plot(rmse_persistence, color=col[5], label="Persistence")
plt.xlabel('Latitude')
plt.ylabel('T850 RMSE [K]')
plt.legend()
ticks = plt.xticks(np.arange(0,32,6), (str(-np.round(latitude[0],2))+"°S", str(-np.round(latitude[6],2))+"°S", str(-np.round(latitude[12],2))+"°S",
str(np.round(latitude[18],2))+"°N", str(np.round(latitude[24],2))+"°N", str(np.round(latitude[30]))+"°N"))
latitude = tigge.lat.values
col = ["#00BA38", "#00BFC4", "#619CFF", "#F564E3", "#F8766D", "#B79F00"]
plt.plot(cpa_tigge, color=col[0], label="HRES")
plt.plot(cpa_t63, color=col[1], label = "T63")
plt.plot(cpa_t42, color=col[2], label="T42")
plt.plot(cpa_cnn3d, color=col[3], label="CNN")
plt.plot(cpa_lr3d, color=col[4], label="LR")
plt.plot(cpa_persistence, color=col[5], label = "Persistence")
plt.xlabel('Latitude')
plt.ylabel('CPA')
plt.legend()
ticks = plt.xticks(np.arange(0,32,6), (str(-np.round(latitude[0],2))+"°S", str(-np.round(latitude[6],2))+"°S", str(-np.round(latitude[12],2))+"°S",
str(np.round(latitude[18],2))+"°N", str(np.round(latitude[24],2))+"°N", str(np.round(latitude[30]))+"°N"))
| example/weather_bench/cpa_rmse_plt.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Trajectory of a single patient
# References
# - https://github.com/MIT-LCP/mimic-workshop/blob/b27eee438a1f62d909dd30d1d458d3516f32b276/intro_to_mimic/01-example-patient-heart-failure.ipynb
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# %matplotlib inline
import sys
sys.path.insert(0, './db')
import db_con
import sqlalchemy
# #### Select one of the ICU stays (276176) of a sepsis patient (25030)
# load chartevents
# chartevents contain variables such as heart rate, respiratory rate, temperature
engine = db_con.get_engine()
pat = pd.read_sql("""
SELECT de.icustay_id, de.mins, de.value, de.valuenum, de.itemid, di.label
FROM (SELECT
de.icustay_id,
         EXTRACT(EPOCH FROM de.charttime - (SELECT intime
                                            FROM icustays
                                            WHERE icustay_id = 256064)) / 60 AS mins,
de.value,
de.valuenum,
de.itemid
FROM chartevents de
WHERE icustay_id = 256064) AS de
INNER JOIN (SELECT
di.label,
di.itemid
FROM d_items di) AS di
ON de.itemid = di.itemid
ORDER BY mins;
""", engine)
pat.head()
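# When computing minutes after admission from a timestamp difference, beware that extracting the "minute" component of an interval gives only the 0-59 field (in PostgreSQL, `EXTRACT(MINUTE FROM interval)`); divide the total seconds by 60 instead (`EXTRACT(EPOCH FROM interval) / 60`). The same distinction exists for pandas timedeltas - a minimal sketch:

```python
import pandas as pd

# Total minutes in an interval: use total_seconds() / 60, not the
# 0-59 "minutes" component of the timedelta.
td = pd.Timestamp("2019-01-02 03:30") - pd.Timestamp("2019-01-01 00:00")
total_mins = td.total_seconds() / 60    # 1650.0
component_mins = td.components.minutes  # 30 (just the minutes field)
```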
# ### Heart rate
pat[pat.label.str.find('Heart Rate')>=0]
# pat[pat.label.str.find('Temp')>=0]
# Change in the patient's heart rate
# Build a time-series chart with time on the x-axis and heart rate on the y-axis.
pat.index[pat.label=='Heart Rate']
x = pat.mins[pat.label=='Heart Rate']
y = pat.valuenum[pat.label=='Heart Rate']
# +
plt.figure(figsize=(20, 5))
plt.plot(x, y, 'k+')
plt.xlabel('minutes after admission')
plt.ylabel('Heart Rate')
plt.title('Heart rate change over time from admission to icu')
# + [markdown] slideshow={"slide_type": "-"}
# The vertical streaks are cases where multiple measurements within one minute produce a range of values.
# -
pat[(pat.mins==19) & (pat.label=='Heart Rate')]
# + [markdown] slideshow={"slide_type": "-"}
# ### Respiratory Rate
# -
pat[pat.label=='Respiratory Rate']
# Did the patient's condition ever cross the alarm thresholds?
# In the ICU, alarm limits are usually set at very high or very low values; the thresholds are sometimes adjusted to reduce false alarms.
pat[pat.label.str.find('Resp Alarm')>=0]
# +
plt.figure(figsize=(20, 10))
plt.plot(pat.mins[pat.label=='Respiratory Rate'],
pat.valuenum[pat.label=='Respiratory Rate'],
'k+')
plt.plot(pat.mins[pat.label=='Resp Alarm - High'],
pat.valuenum[pat.label=='Resp Alarm - High'],
'm--')
plt.plot(pat.mins[pat.label=='Resp Alarm - Low'],
pat.valuenum[pat.label=='Resp Alarm - Low'],
'b--')
plt.xlabel('minutes after admission')
plt.ylabel('Respiratory Rate')
plt.title('Respiratory Rate change over time from admission to icu')
# -
# ## Clinical interpretation
# - At which points would the respiratory-rate alarm likely have sounded?
# - A plausible explanation for the outliers
# ## Body temperature
# Temperature-related labels
pat[pat.label.str.find('Temperature')>=0].label.unique()
pat[pat.label.str.find('Temperature Celsius')>=0].shape
pat[pat.label.str.find('Temperature Fahrenheit')>=0].shape
pat_fah = pat[pat.label.str.find('Temperature Fahrenheit')>=0].copy()
pat_fah = pat_fah.reset_index(drop=True)
# Convert the Fahrenheit temperature values to Celsius
pat_fah['valuenum'] = pat_fah.valuenum.map(lambda x: (x - 32)*(5/9))
pat_fah.head()
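# The conversion above can be captured in a small, easily unit-tested helper (a hypothetical refactoring, not part of the original notebook):

```python
def fahrenheit_to_celsius(temp_f):
    # Standard conversion: C = (F - 32) * 5/9
    return (temp_f - 32) * (5 / 9)
```

# It could then be passed directly to the map, e.g. `pat_fah.valuenum.map(fahrenheit_to_celsius)`.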
pat_temp = pd.concat([pat[pat.label.str.find('Temperature Celsius')>=0],
pat_fah])
pat_temp = pat_temp.sort_values(by='mins')
pat_temp.shape
# +
plt.figure(figsize=(20, 10))
plt.plot(pat_temp.mins, pat_temp.valuenum, 'k+')
plt.plot(pat_temp.mins, [37.5]*len(pat_temp.valuenum), 'r--')
plt.xlabel('minutes after admission')
plt.ylabel('Temperature (Celsius)')
plt.title('Temperature change over time from admission to icu')
plt.ylim(32, 40)
# -
# ### GCS (Glasgow Coma Scale): assessing the patient's level of consciousness
# Commonly used to monitor patients in the ICU. There are three main components: eye, verbal, and motor response.
pat[pat.label=='GCS - Eye Opening'].head()
# +
plt.figure(figsize=(20, 10))
# heart rate
plt.plot(pat.mins[pat.label=='Heart Rate'], pat.valuenum[pat.label=='Heart Rate'], 'b+')
# respiratory rate
plt.plot(pat.mins[pat.label=='Respiratory Rate'], pat.valuenum[pat.label=='Respiratory Rate'], 'r+')
# temperature
plt.plot(pat_temp.mins, pat_temp.valuenum, 'y+')
# GCS plot annotate to avoid overlap
for i, val in enumerate(pat.value[pat.label=='GCS - Eye Opening'].values):
if i%8==0 and i<60:
plt.annotate(val, (pat.mins[pat.label=='GCS - Eye Opening'].values[i], 165))
plt.text(-20, 165, 'GCS - Eye Opening')
for i, val in enumerate(pat.value[pat.label=='GCS - Motor Response'].values):
if i%10==0 and i<60:
plt.annotate(val, (pat.mins[pat.label=='GCS - Motor Response'].values[i], 150))
plt.text(-20, 150, 'GCS - Motor Response')
for i, val in enumerate(pat.value[pat.label=='GCS - Verbal Response'].values):
if i%10==0 and i<60:
plt.annotate(val, (pat.mins[pat.label=='GCS - Verbal Response'].values[i], 135))
plt.text(-20, 135, 'GCS - Verbal Response')
plt.title('Vital Sign and GCS change over time from admission')
plt.xlabel('Time (mins)')
plt.ylabel('Respiratory Rate, Temperature, Heart rate or GCS')
plt.ylim(10, 170)
# -
# ## Clinical interpretation
# - Changes in the patient's level of consciousness
# ## creatinine
pat[pat.label=='Creatinine']
# +
plt.figure(figsize=(20, 10))
plt.plot(pat[pat.label=='Creatinine'].mins, pat[pat.label=='Creatinine'].valuenum, 'k+')
plt.xlabel('minutes after admission')
plt.ylabel('Creatinine')
plt.title('Creatinine change over time from admission to icu')
# +
# from pandas.tools.plotting import lag_plot
# df_creatinine = pat[pat.label=='Creatinine'].sort_values('mins').valuenum
# df_creatinine = df_creatinine.reset_index(drop=True)
# lag_plot(df_creatinine)
# plt.show()
# +
# from pandas.tools.plotting import autocorrelation_plot
# autocorrelation_plot(df_creatinine)
# +
# from statsmodels.tsa.ar_model import AR
# #create train/test datasets
# X = df_creatinine
# train_data = X[0:int(len(X)*(0.7))]
# test_data = X[int(len(X)*(0.7)):]
# # train_data
# # test_data
# +
# #train the autoregression model
# model = AR(train_data)
# model_fitted = model.fit()
# +
# print('The lag value chose is: %s' % model_fitted.k_ar)
# print('The coefficients of the model are:\n %s' % model_fitted.params)
# +
# # make predictions
# predictions = model_fitted.predict(
# start=len(train_data),
# end=len(train_data) + len(test_data)-1,
# dynamic=False)
# predictions
# +
# # create a comparison dataframe
# compare_df = pd.concat(
# [df_creatinine.tail(len(test_data)), predictions.tail(len(train_data))],
# axis=1, ignore_index=True).rename(columns={0: 'actual', 1:'predicted'})
# #plot the two values
# compare_df.plot()
# +
# # root mean square error
# from sklearn.metrics import r2_score
# r2 = r2_score(df_creatinine.tail(len(test_data)), predictions)
# r2
# -
| notebook/0517_patient_projectory.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Set the Jupyter Notebook layout to full width
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
# +
# Import the required Python libraries
# Import the NumPy library for Python, aliased as "num_py"
import numpy as num_py
# Import the PyPlot module from the Matplotlib library, aliased as "py_plot"
from matplotlib import pyplot as py_plot
# Import the display and clear_output modules
from IPython.display import display, clear_output
# -
# Use a black background as the default style for the plots
# built with the PyPlot module
py_plot.style.use("dark_background")
# Define the minimum and maximum x and y coordinates
# of the generated plots for the double-slit experiment
x_min, x_max = 0, 10
y_min, y_max = -5, 5
# Define the number of (x,y) points to generate
# for the propagation of the light waves coming from the slits
num_pontos_xs, num_pontos_ys = 200, 200
# Generate the x and y points,
# computed over the interval defined above,
# for the number of (x,y) points to generate
# for the propagation of the light waves coming from the slits
xs = num_py.linspace(x_min, x_max, num_pontos_xs)
ys = num_py.linspace(y_min, y_max, num_pontos_ys)
# Create the coordinate matrices for the (x,y) points
# generated above
xs_matriz_coordenadas, ys_matriz_coordenadas = num_py.meshgrid(xs, ys, sparse = False)
# Generate the points where the light spectrum is visible
pontos_espectro_luz = num_py.concatenate([xs_matriz_coordenadas.reshape(-1, 1), ys_matriz_coordenadas.reshape(-1, 1)], axis=-1)
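# The meshgrid-plus-reshape idiom above produces one (x, y) row per grid node; a minimal standalone illustration:

```python
import numpy as np

xs = np.linspace(0.0, 1.0, 3)
ys = np.linspace(-1.0, 1.0, 2)
X, Y = np.meshgrid(xs, ys)
# Stack the two coordinate matrices into an (n_points, 2) array of grid points
pts = np.concatenate([X.reshape(-1, 1), Y.reshape(-1, 1)], axis=-1)
```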
# Define the (x,y) coordinates of the 2 light-source slits
fenda_fonte_luz_1 = num_py.array([0, 0.5])
fenda_fonte_luz_2 = num_py.array([0, -0.5])
# Compute the points of the paths along which the light propagates,
# relative to the 2 light-source slits
pontos_propagacao_fenda_fonte_luz_1 = (pontos_espectro_luz - fenda_fonte_luz_1)
pontos_propagacao_fenda_fonte_luz_2 = (pontos_espectro_luz - fenda_fonte_luz_2)
# Define the amplitudes of the 2 light waves
amplitude_onda_luz_1 = 4
amplitude_onda_luz_2 = 4
# Define the constant factor
# for the points of the propagation of the 2 light waves
factor_constante_onda = 20
# Compute the 2 light waves coming from the slits
ondas_luz_1 = amplitude_onda_luz_1*(num_py.sin( factor_constante_onda * (pontos_propagacao_fenda_fonte_luz_1[:, 0]**2 + pontos_propagacao_fenda_fonte_luz_1[:, 1]**2)**0.5))
ondas_luz_2 = amplitude_onda_luz_2*(num_py.sin( factor_constante_onda * (pontos_propagacao_fenda_fonte_luz_2[:, 0]**2 + pontos_propagacao_fenda_fonte_luz_2[:, 1]**2)**0.5))
# Compute the superposition of the 2 light waves coming from the 2 slits
# (constructive or destructive superpositions)
sobreposicao_ondas_luz = (ondas_luz_1 + ondas_luz_2)
# Compute the intensity of the superposition of the light waves
# coming from the 2 slits
intensidade_sobreposicao_ondas_luz = (sobreposicao_ondas_luz**2)
# +
# Plot of the Double-Slit Experiment (<NAME>, 1801) - Light Spectrum
# Define the frame/figure for the double-slit experiment plot,
# according to the minimum and maximum interval of (x,y) points
py_plot.figure(figsize=(10, 8))
py_plot.xlim(x_min, x_max)
py_plot.ylim(y_min, y_max)
# Set the title of the double-slit experiment plot
py_plot.title("Double-Slit Experiment (<NAME>, 1801)\nLight Spectrum as Wave-Particle Duality\n\n")
# Scatter the points of the spread of the light spectrum
# coming from the 2 light slits
py_plot.scatter(pontos_espectro_luz[:, 0], pontos_espectro_luz[:, 1], c = intensidade_sobreposicao_ondas_luz, cmap=py_plot.cm.binary)
# Mark the points at the coordinates of the 2 light slits
py_plot.scatter(*fenda_fonte_luz_1, c="red")
py_plot.scatter(*fenda_fonte_luz_2, c="red")
# -
# Define the interval and speed with which the propagation of the light waves
# coming from the 2 slits is animated
intervalo_animacao_propagacao_ondas_luz = 0.5
velocidade_animacao_propagacao_ondas_luz = 0.02
# Radii of the arcs of the propagation of the light waves
# coming from the 2 slits
raios_arcos_propagacao_ondas_luz = num_py.linspace(x_min, x_max, int((x_max - x_min) / intervalo_animacao_propagacao_ondas_luz))
# Define the number of steps of the loop responsible for generating
# the double-slit experiment animation
num_passos_ciclo_animacao = 100
# +
# Plot of the Double-Slit Experiment (<NAME>, 1801) - Light Spectrum (Animated)
# Create the figure and axes for the animated double-slit experiment plot
figura, eixo = py_plot.subplots(figsize=(10, 8))
# Define the minimum and maximum limits for marking points
# in the animated double-slit experiment plot
py_plot.xlim(x_min, (x_max - 1))
py_plot.ylim(y_min, y_max)
# Lists of circles of the propagation of the light waves
# coming from the 2 slits
circulos_propagacao_ondas_luz_1, circulos_propagacao_ondas_luz_2 = [], []
# For each radius of the arcs of the propagation of the light waves
# coming from the 2 slits
for raio_arco_propagacao_ondas_luz in raios_arcos_propagacao_ondas_luz:
    # Compute the circle of the propagation of the light waves
    # coming from the first slit
circulo_propagacao_ondas_luz_1 = py_plot.Circle(fenda_fonte_luz_1, raio_arco_propagacao_ondas_luz, facecolor=(1, 0, 0, 0), edgecolor="red")
circulos_propagacao_ondas_luz_1.append(eixo.add_artist(circulo_propagacao_ondas_luz_1))
    # Compute the circle of the propagation of the light waves
    # coming from the second slit
circulo_propagacao_ondas_luz_2 = py_plot.Circle(fenda_fonte_luz_2, raio_arco_propagacao_ondas_luz, facecolor=(1, 0, 0, 0), edgecolor="yellow")
circulos_propagacao_ondas_luz_2.append(eixo.add_artist(circulo_propagacao_ondas_luz_2))
# Set the title of the animated double-slit experiment plot
py_plot.title("Animated Double-Slit Experiment (<NAME>, 1801)\nLight Spectrum as Wave-Particle Duality\n\n")
# Mark the points at the coordinates of the 2 light slits
py_plot.scatter(*fenda_fonte_luz_1, c="white")
py_plot.scatter(*fenda_fonte_luz_2, c="white")
# For each step of the loop responsible for generating the double-slit experiment animation
for num_passo_atual_ciclo_animacao in range(num_passos_ciclo_animacao):
    # Set the radii of the circles of the arcs of the propagation of the light waves
    # coming from the 2 slits
[circulo_propagacao_ondas_luz_1.set_radius(raios_arcos_propagacao_ondas_luz[circulo_propagacao_ondas_luz_1_indice]) for circulo_propagacao_ondas_luz_1_indice, circulo_propagacao_ondas_luz_1 in enumerate(circulos_propagacao_ondas_luz_1)]
[circulo_propagacao_ondas_luz_2.set_radius(raios_arcos_propagacao_ondas_luz[circulo_propagacao_ondas_luz_2_indice]) for circulo_propagacao_ondas_luz_2_indice, circulo_propagacao_ondas_luz_2 in enumerate(circulos_propagacao_ondas_luz_2)]
    # Update the radii of the arcs of the propagation of the light waves
    # coming from the 2 slits
raios_arcos_propagacao_ondas_luz = ( (raios_arcos_propagacao_ondas_luz + velocidade_animacao_propagacao_ondas_luz) % (x_max - x_min) )
    # Clear the figure for the animation at the current step,
    # with a wait, before removing the previous step's content
clear_output(wait=True)
    # Show the figure at the current step
display(figura)
# -
# ***
# **Last updated:** June 27, 2021
#
# **Author:** _<NAME>_
#
# **Adapted from the Master's Dissertation/Thesis:**
# * "_Semi-Quantum Conference Key Agreement (ACCSQ)_" - 2021
#
# **© Copyright Qiskrypt, 2021, all rights reserved.**
# ***
# <a rel="license" href="http://creativecommons.org/licenses/by-nc-nd/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-nd/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-nd/4.0/">Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License</a>.
| pt/1. introducao/extras/1.1.4-experimento-dupla-fenda-thomas-young-1801.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:py38] *
# language: python
# name: conda-env-py38-py
# ---
from erddapy import ERDDAP
import pandas as pd
import numpy as np
# +
## settings (move to yaml file for routines)
server_url = 'http://akutan.pmel.noaa.gov:8080/erddap'
maxdepth = 0 #keep all data above this depth
site_str = 'M8'
region = 'bs'
substring = ['bs8'] #search substrings; a list is useful for sites like M2 with multiple ids
prelim=[]
#simple QC: trim preliminary datasets to known-good deployment time windows
data_QC = True
# -
e = ERDDAP(server=server_url)
df = pd.read_csv(e.get_search_url(response='csv', search_for=f'datasets_Mooring AND {region}'))
#print(df['Dataset ID'].values)
# +
from requests.exceptions import HTTPError
dfs = {}
for dataset_id in sorted(df['Dataset ID'].values):
if ('1hr' in dataset_id):
continue
if any(x in dataset_id for x in substring) and not any(x in dataset_id for x in prelim) and ('final' in dataset_id):
print(dataset_id)
try:
d = ERDDAP(server=server_url,
protocol='tabledap',
response='csv'
)
d.dataset_id=dataset_id
d.variables = ['latitude',
'longitude',
'depth',
'Chlorophyll_Fluorescence',
'time',
'timeseries_id']
d.constraints = {'depth>=':maxdepth}
except HTTPError:
print('Failed to generate url {}'.format(dataset_id))
try:
df_m = d.to_pandas(
index_col='time (UTC)',
parse_dates=True,
skiprows=(1,) # units information can be dropped.
)
df_m.sort_index(inplace=True)
df_m.columns = [x[1].split()[0] for x in enumerate(df_m.columns)]
dfs.update({dataset_id:df_m})
except:
pass
if any(x in dataset_id for x in prelim) and ('preliminary' in dataset_id):
print(dataset_id)
try:
d = ERDDAP(server=server_url,
protocol='tabledap',
response='csv'
)
d.dataset_id=dataset_id
d.variables = ['latitude',
'longitude',
'depth',
'Chlorophyll_Fluorescence',
'time',
'timeseries_id']
d.constraints = {'depth>=':maxdepth}
except HTTPError:
print('Failed to generate url {}'.format(dataset_id))
try:
df_m = d.to_pandas(
index_col='time (UTC)',
parse_dates=True,
skiprows=(1,) # units information can be dropped.
)
df_m.sort_index(inplace=True)
df_m.columns = [x[1].split()[0] for x in enumerate(df_m.columns)]
#using preliminary for unfinished datasets - very simple qc
if data_QC:
#overwinter moorings
if '17bs2c' in dataset_id:
df_m=df_m['2017-10-3':'2018-5-1']
if '16bs2c' in dataset_id:
df_m=df_m['2016-10-6':'2017-4-26']
if '17bsm2a' in dataset_id:
df_m=df_m['2017-4-28':'2017-9-22']
if '18bsm2a' in dataset_id:
df_m=df_m['2018-4-30':'2018-10-01']
if '17bs8a' in dataset_id:
df_m=df_m['2017-9-30':'2018-10-1']
if '18bs8a' in dataset_id:
df_m=df_m['2018-10-12':'2019-9-23']
if '16bs4b' in dataset_id:
df_m=df_m['2016-9-26':'2017-9-24']
if '17bs4b' in dataset_id:
df_m=df_m['2017-9-30':'2018-10-1']
if '18bs4b' in dataset_id:
                    df_m=df_m['2018-10-12':'2019-9-23']
if '13bs5a' in dataset_id:
df_m=df_m['2013-8-18':'2014-10-16']
if '14bs5a' in dataset_id:
df_m=df_m['2014-10-16':'2015-9-24']
if '16bs5a' in dataset_id:
df_m=df_m['2016-9-26':'2017-9-24']
if '17bs5a' in dataset_id:
df_m=df_m['2017-9-30':'2018-10-1']
if '18bs5a' in dataset_id:
                    df_m=df_m['2018-10-12':'2019-9-23']
dfs.update({dataset_id:df_m})
except:
pass
# -
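# The per-deployment date trimming above could be table-driven instead of a chain of `if` statements. A hypothetical sketch (the window value is taken from the first entry above):

```python
import pandas as pd

# Known-good deployment windows, keyed by a substring of the dataset id
qc_windows = {"17bs2c": ("2017-10-03", "2018-05-01")}

def trim_to_window(df, dataset_id):
    # Slice a time-indexed frame down to its deployment's QC window, if any
    for key, (start, end) in qc_windows.items():
        if key in dataset_id:
            return df[start:end]
    return df
```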
df_merged = pd.concat(dfs.values()) if dfs else pd.DataFrame()
df_merged.describe()
df_merged = df_merged.dropna()
import matplotlib as mpl
import matplotlib.pyplot as plt
plt.scatter(df_merged.index, y=df_merged['depth'], s=10, c=df_merged['Chlorophyll_Fluorescence'], vmin=0, vmax=10, cmap='inferno')
plt.plot(df_merged.index, df_merged['Chlorophyll_Fluorescence'])
df_merged.to_csv(f'{site_str}_nearsfc_chlor.csv')
| EcoFOCI_Moorings/EcoFOCI_erddap_DataVarSubset.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Example 02: General Use of XGBoostClassifierHyperOpt
# [](https://colab.research.google.com/github/slickml/slick-ml/blob/master/examples/optimization/example_02_XGBoostClassifierHyperOpt.ipynb)
# ### Google Colab Configuration
# +
# # !git clone https://github.com/slickml/slick-ml.git
# # %cd slick-ml
# # !pip install -r requirements.txt
# -
# ### Local Environment Configuration
# # Change path to project root
# %cd ../..
# ### Import Python Libraries
# +
# %load_ext autoreload
# widen the screen
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:95% !important; }</style>"))
# change the path and loading class
import os, sys
import pandas as pd
import numpy as np
import seaborn as sns
# -
# %autoreload
from slickml.optimization import XGBoostClassifierHyperOpt
# ----
# # XGBoostClassifierHyperOpt Docstring
# loading data
df = pd.read_csv("data/clf_data.csv")
df.head()
# define X, y
y = df.CLASS.values
X = df.drop(["CLASS"], axis=1)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y,
test_size=0.2,
shuffle=True,
stratify=y,
random_state=1367)
# +
# define the parameters' bounds
from hyperopt import hp
def get_xgb_params():
""" Define Parameter Space"""
params = {
"nthread": 4,
"booster": "gbtree",
"tree_method": "hist",
"objective": "binary:logistic",
"max_depth": hp.choice("max_depth", range(2, 7)),
"learning_rate": hp.quniform("learning_rate", 0.01, 0.50, 0.01),
"gamma": hp.quniform("gamma", 0, 0.50, 0.01),
"min_child_weight": hp.quniform("min_child_weight", 1, 10, 1),
"subsample": hp.quniform("subsample", 0.1, 1, 0.01),
"colsample_bytree": hp.quniform("colsample_bytree", 0.1, 1.0, 0.01),
"gamma": hp.quniform("gamma", 0.0, 1.0, 0.01),
"reg_alpha": hp.quniform("reg_alpha", 0.0, 1.0, 0.01),
"reg_lambda": hp.quniform("reg_lambda", 0.0, 1.0, 0.01),
}
return params
# -
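# When hand-writing parameter-space dicts like this one, beware that a Python dict literal silently keeps only the last value for a repeated key - an earlier duplicated hyperparameter entry is discarded without warning:

```python
# Duplicate keys in a dict literal: the last assignment wins, silently.
params = {"gamma": (0.0, 0.5), "gamma": (0.0, 1.0)}
```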
# initialize XGBoostClassifierHyperOpt
xho = XGBoostClassifierHyperOpt(num_boost_round=200,
metrics="logloss",
n_splits=3,
shuffle=True,
early_stopping_rounds=20,
func_name="xgb_cv",
space=get_xgb_params(),
max_evals=100,
verbose=False
)
# fit
xho.fit(X_train,y_train)
# ### Best set of parameters from all runs
xho.get_optimization_results()
# ### Results from each trial
import pprint
pprint.pprint(xho.get_optimization_trials().trials)
| examples/optimization/example_02_XGBoostClassifierHyperOpt.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from __future__ import print_function, division, absolute_import
# # Challenges of Streaming Data:
# Building an ANTARES-like Pipeline for Data Management and Discovery
# ========
#
# #### Version 0.1
#
# ***
# By <NAME> 2017 Apr 10
# As we just saw in Gautham's lecture - LSST will produce an unprecedented volume of time-domain information for the astronomical sky. $>37$ trillion individual photometric measurements will be recorded. While the vast, vast majority of these measurements will simply confirm the status quo, some will represent rarities that have never been seen before (e.g., LSST may be the first telescope to discover the electromagnetic counterpart to a LIGO gravitational wave event), which the community will need to know about in ~real time.
#
# Storing, filtering, and serving this data is going to be a huge <del>nightmare</del> challenge. ANTARES, as detailed by Gautham, is one proposed solution to this challenge. In this exercise you will build a miniature version of ANTARES, which will require the application of several of the lessons from earlier this week. Many of the difficult, and essential, steps necessary for ANTARES will be skipped here as they are too time consuming or beyond the scope of what we have previously covered. We will point out these challenges as we come across them.
# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# %matplotlib notebook
# -
# ## Problem 1) Light Curve Data
#
# We begin by ignoring the streaming aspect of the problem (we will come back to that later) and instead we will work with full light curves. The collection of light curves has been curated by Gautham and like LSST it features objects of different types covering a large range in brightness and observations in multiple filters taken at different cadences.
#
# As the focus of this exercise is the construction of a data management pipeline, we have already created a Python `class` to read in the data and store light curves as objects. The data are stored in flat text files with the following format:
#
# |t |pb |flux |dflux |
# |:--------------:|:---:|:----------:|-----------:|
# | 56254.160000 | i | 6.530000 | 4.920000 |
# | 56254.172000 | z | 4.113000 | 4.018000 |
# | 56258.125000 | g | 5.077000 | 10.620000 |
# | 56258.141000 | r | 6.963000 | 5.060000 |
# | . | . | . | . |
# | . | . | . | . |
# | . | . | . | . |
#
# and are named `FAKE0XX.dat`, where `XX` is a running index from `01` to `99`.
# **Problem 1a**
#
# Read in the data for the first light curve file and plot the $g'$ light curve for that source.
# +
# execute this cell
lc = pd.read_csv('testset_for_LSST_DSFP/FAKE001.dat', delim_whitespace=True, comment = '#')
plt.errorbar(np.array(lc['t'].loc[lc['pb'] == 'g']),
             np.array(lc['flux'].loc[lc['pb'] == 'g']),
             np.array(lc['dflux'].loc[lc['pb'] == 'g']), fmt = 'o', color = 'green')
plt.xlabel('MJD')
plt.ylabel('flux')
# -
# As we have many light curve files (in principle as many as 37 billion...), we will define a light curve class to ease our handling of the data.
#
# ** Problem 1b**
#
# Fix the `lc` class definition below.
#
# *Hint* - the only purpose of this problem is to make sure you actually read each line of code below, it is not intended to be difficult.
class ANTARESlc():
'''Light curve object for NOAO formatted data'''
def __init__(self, filename):
'''Read in light curve data'''
DFlc = pd.read_csv(filename, delim_whitespace=True, comment = '#')
self.DFlc = DFlc
self.filename = filename
def plot_multicolor_lc(self):
'''Plot the 4 band light curve'''
fig, ax = plt.subplots()
        g = ax.errorbar(self.DFlc['t'].loc[self.DFlc['pb'] == 'g'],
                        self.DFlc['flux'].loc[self.DFlc['pb'] == 'g'],
                        self.DFlc['dflux'].loc[self.DFlc['pb'] == 'g'],
                        fmt = 'o', color = '#78A5A3', label = r"$g'$")
        r = ax.errorbar(self.DFlc['t'].loc[self.DFlc['pb'] == 'r'],
                        self.DFlc['flux'].loc[self.DFlc['pb'] == 'r'],
                        self.DFlc['dflux'].loc[self.DFlc['pb'] == 'r'],
                        fmt = 'o', color = '#CE5A57', label = r"$r'$")
        i = ax.errorbar(self.DFlc['t'].loc[self.DFlc['pb'] == 'i'],
                        self.DFlc['flux'].loc[self.DFlc['pb'] == 'i'],
                        self.DFlc['dflux'].loc[self.DFlc['pb'] == 'i'],
                        fmt = 'o', color = '#E1B16A', label = r"$i'$")
        z = ax.errorbar(self.DFlc['t'].loc[self.DFlc['pb'] == 'z'],
                        self.DFlc['flux'].loc[self.DFlc['pb'] == 'z'],
                        self.DFlc['dflux'].loc[self.DFlc['pb'] == 'z'],
                        fmt = 'o', color = '#444C5C', label = r"$z'$")
ax.legend(fancybox = True)
ax.set_xlabel(r"$\mathrm{MJD}$")
ax.set_ylabel(r"$\mathrm{flux}$")
# **Problem 1c**
#
# Confirm the corrections made in **1b** by plotting the multiband light curve for the source `FAKE010`.
# +
lc = ANTARESlc('testset_for_LSST_DSFP/FAKE010.dat')
lc.plot_multicolor_lc()
# -
# One thing that we brushed over previously is that the brightness measurements have units of flux, rather than the traditional use of magnitudes. The reason for this is that LSST will measure flux variations via image differencing, which will for some sources in some filters result in a measurement of *negative flux*. (You may have already noticed this in **1a**.) Statistically there is nothing wrong with such a measurement, but it is impossible to convert a negative flux into a magnitude. Thus we will use flux measurements throughout this exercise. [Aside - if you are bored during the next break, I'd be happy to rant about why we should have ditched the magnitude system years ago.]
#
# Using flux measurements will allow us to make unbiased measurements of the statistical distributions of the variations of the sources we care about.
#
# **Problem 1d**
#
# What is `FAKE010`, the source that is plotted above?
#
# *Hint 1* - if you have no idea that's fine, move on.
#
# *Hint 2* - ask Szymon or Tomas...
# **Solution 1d**
#
# `FAKE010` is a transient, as can be seen by the rapid rise followed by a gradual decline in the light curve. In this particular case, we can further guess that `FAKE010` is a Type Ia supernova due to the secondary maxima in the $i'$ and $z'$ light curves. These secondary peaks are not present in any other known type of transient.
# **Problem 1e**
#
# To get a better sense of the data, plot the multiband light curves for sources `FAKE060` and `FAKE073`.
# +
lc60 = ANTARESlc("testset_for_LSST_DSFP/FAKE060.dat")
lc60.plot_multicolor_lc()
lc73 = ANTARESlc("testset_for_LSST_DSFP/FAKE073.dat")
lc73.plot_multicolor_lc()
# -
# ## Problem 2) Data Preparation
#
# While we could create a database table that includes every single photometric measurement made by LSST, this ~37 trillion row db would be enormous without providing a lot of added value beyond the raw flux measurements [while this table is necessary, alternative tables may prove more useful]. Furthermore, extracting individual light curves from such a database will be slow. Instead, we are going to develop summary statistics for every source which will make it easier to select individual sources and develop classifiers to identify objects of interest.
#
# Below we will redefine the `ANTARESlc` class to include additional methods so we can (eventually) store summary statistics in a database table. In the interest of time, we limit the summary statistics to a relatively small list all of which have been shown to be useful for classification (see [Richards et al. 2011](http://iopscience.iop.org/article/10.1088/0004-637X/733/1/10/meta) for further details). The statistics that we include (for now) are:
#
# 1. `Std` -- the standard deviation of the flux measurements
# 2. `Amp` -- the amplitude of flux deviations
# 3. `MAD` -- the median absolute deviation of the flux measurements
# 4. `beyond1std` -- the fraction of flux measurements beyond 1 standard deviation
# 5. the mean $g' - r'$, $r' - i'$, and $i' - z'$ color
#
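# As a concrete example of one of these statistics, `beyond1std` can be computed as follows - a standalone sketch using the sample standard deviation and a simple (unweighted) mean:

```python
import numpy as np

def beyond_1std(flux):
    # Fraction of flux measurements more than one sample standard
    # deviation away from the mean flux.
    flux = np.asarray(flux, dtype=float)
    return np.mean(np.abs(flux - flux.mean()) > np.std(flux, ddof=1))
```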
# **Problem 2a**
#
# Complete the mean color module in the `ANTARESlc` class. Feel free to use the other modules as a template for your work.
#
# *Hint*/*food for thought* - if a source is observed in different filters but the observations are not simultaneous (or quasi-simultaneous), what is the meaning of a "mean color"?
#
# *Solution to food for thought* - in this case we simply want you to take the mean flux in each filter and create a statistic that is $-2.5 \log \frac{\langle f_X \rangle}{\langle f_{Y} \rangle}$, where ${\langle f_{Y} \rangle}$ is the mean flux in band $Y$, while $\langle f_X \rangle$ is the mean flux in band $X$, which can be $g', r', i', z'$. Note that our use of image-difference flux measurements, which can be negative, means you'll need to add some form of case exception if $\langle f_X \rangle$ or $\langle f_Y \rangle$ is negative. In these cases set the color to -999.
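# A minimal standalone sketch of the mean-color statistic just described (simple unweighted means; a class method can follow the same pattern):

```python
import numpy as np

def mean_color(flux_x, flux_y):
    # -2.5 log10(<f_X>/<f_Y>); image-difference fluxes can make either
    # mean negative, in which case the color is undefined -> sentinel -999
    mean_x, mean_y = np.mean(flux_x), np.mean(flux_y)
    if mean_x <= 0 or mean_y <= 0:
        return -999.0
    return -2.5 * np.log10(mean_x / mean_y)
```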
# +
from scipy.stats import skew
class ANTARESlc():
'''Light curve object for NOAO formatted data'''
def __init__(self, filename):
DFlc = pd.read_csv(filename, delim_whitespace=True, comment = '#')
self.DFlc = DFlc
self.filename = filename
def plot_multicolor_lc(self):
'''Plot the 4 band light curve'''
fig, ax = plt.subplots()
        g = ax.errorbar(self.DFlc['t'].loc[self.DFlc['pb'] == 'g'],
                        self.DFlc['flux'].loc[self.DFlc['pb'] == 'g'],
                        self.DFlc['dflux'].loc[self.DFlc['pb'] == 'g'],
                        fmt = 'o', color = '#78A5A3', label = r"$g'$")
        r = ax.errorbar(self.DFlc['t'].loc[self.DFlc['pb'] == 'r'],
                        self.DFlc['flux'].loc[self.DFlc['pb'] == 'r'],
                        self.DFlc['dflux'].loc[self.DFlc['pb'] == 'r'],
                        fmt = 'o', color = '#CE5A57', label = r"$r'$")
        i = ax.errorbar(self.DFlc['t'].loc[self.DFlc['pb'] == 'i'],
                        self.DFlc['flux'].loc[self.DFlc['pb'] == 'i'],
                        self.DFlc['dflux'].loc[self.DFlc['pb'] == 'i'],
                        fmt = 'o', color = '#E1B16A', label = r"$i'$")
        z = ax.errorbar(self.DFlc['t'].loc[self.DFlc['pb'] == 'z'],
                        self.DFlc['flux'].loc[self.DFlc['pb'] == 'z'],
                        self.DFlc['dflux'].loc[self.DFlc['pb'] == 'z'],
                        fmt = 'o', color = '#444C5C', label = r"$z'$")
ax.legend(fancybox = True)
ax.set_xlabel(r"$\mathrm{MJD}$")
ax.set_ylabel(r"$\mathrm{flux}$")
def filter_flux(self):
'''Store individual passband fluxes as object attributes'''
        self.gFlux = self.DFlc['flux'].loc[self.DFlc['pb'] == 'g']
        self.gFluxUnc = self.DFlc['dflux'].loc[self.DFlc['pb'] == 'g']
        self.rFlux = self.DFlc['flux'].loc[self.DFlc['pb'] == 'r']
        self.rFluxUnc = self.DFlc['dflux'].loc[self.DFlc['pb'] == 'r']
        self.iFlux = self.DFlc['flux'].loc[self.DFlc['pb'] == 'i']
        self.iFluxUnc = self.DFlc['dflux'].loc[self.DFlc['pb'] == 'i']
        self.zFlux = self.DFlc['flux'].loc[self.DFlc['pb'] == 'z']
        self.zFluxUnc = self.DFlc['dflux'].loc[self.DFlc['pb'] == 'z']
def weighted_mean_flux(self):
'''Measure (SNR weighted) mean flux in griz'''
if not hasattr(self, 'gFlux'):
self.filter_flux()
weighted_mean = lambda flux, dflux: np.sum(flux*(flux/dflux)**2)/np.sum((flux/dflux)**2)
self.gMean = weighted_mean(self.gFlux, self.gFluxUnc)
self.rMean = weighted_mean(self.rFlux, self.rFluxUnc)
self.iMean = weighted_mean(self.iFlux, self.iFluxUnc)
self.zMean = weighted_mean(self.zFlux, self.zFluxUnc)
def normalized_flux_std(self):
'''Measure standard deviation of flux in griz'''
if not hasattr(self, 'gFlux'):
self.filter_flux()
if not hasattr(self, 'gMean'):
self.weighted_mean_flux()
normalized_flux_std = lambda flux, wMeanFlux: np.std(flux/wMeanFlux, ddof = 1)
self.gStd = normalized_flux_std(self.gFlux, self.gMean)
self.rStd = normalized_flux_std(self.rFlux, self.rMean)
self.iStd = normalized_flux_std(self.iFlux, self.iMean)
self.zStd = normalized_flux_std(self.zFlux, self.zMean)
def normalized_amplitude(self):
'''Measure the normalized amplitude of variations in griz'''
if not hasattr(self, 'gFlux'):
self.filter_flux()
if not hasattr(self, 'gMean'):
self.weighted_mean_flux()
normalized_amplitude = lambda flux, wMeanFlux: (np.max(flux) - np.min(flux))/wMeanFlux
self.gAmp = normalized_amplitude(self.gFlux, self.gMean)
self.rAmp = normalized_amplitude(self.rFlux, self.rMean)
self.iAmp = normalized_amplitude(self.iFlux, self.iMean)
self.zAmp = normalized_amplitude(self.zFlux, self.zMean)
def normalized_MAD(self):
'''Measure normalized Median Absolute Deviation (MAD) in griz'''
if not hasattr(self, 'gFlux'):
self.filter_flux()
if not hasattr(self, 'gMean'):
self.weighted_mean_flux()
normalized_MAD = lambda flux, wMeanFlux: np.median(np.abs((flux - np.median(flux))/wMeanFlux))
self.gMAD = normalized_MAD(self.gFlux, self.gMean)
self.rMAD = normalized_MAD(self.rFlux, self.rMean)
self.iMAD = normalized_MAD(self.iFlux, self.iMean)
self.zMAD = normalized_MAD(self.zFlux, self.zMean)
def normalized_beyond_1std(self):
'''Measure fraction of flux measurements beyond 1 std'''
if not hasattr(self, 'gFlux'):
self.filter_flux()
if not hasattr(self, 'gMean'):
self.weighted_mean_flux()
beyond_1std = lambda flux, wMeanFlux: sum(np.abs(flux - wMeanFlux) > np.std(flux, ddof = 1))/len(flux)
self.gBeyond = beyond_1std(self.gFlux, self.gMean)
self.rBeyond = beyond_1std(self.rFlux, self.rMean)
self.iBeyond = beyond_1std(self.iFlux, self.iMean)
self.zBeyond = beyond_1std(self.zFlux, self.zMean)
    def skew(self):
        '''Measure the skew of the flux measurements (uses scipy.stats.skew)'''
        if not hasattr(self, 'gFlux'):
            self.filter_flux()
        self.gSkew = skew(self.gFlux)
        self.rSkew = skew(self.rFlux)
        self.iSkew = skew(self.iFlux)
        self.zSkew = skew(self.zFlux)
def mean_colors(self):
'''Measure the mean g-r, r-i, and i-z colors'''
if not hasattr(self, 'gFlux'):
self.filter_flux()
if not hasattr(self, 'gMean'):
self.weighted_mean_flux()
        self.gMinusR = -2.5*np.log10(self.gMean/self.rMean) if self.gMean > 0 and self.rMean > 0 else -999
        self.rMinusI = -2.5*np.log10(self.rMean/self.iMean) if self.rMean > 0 and self.iMean > 0 else -999
        self.iMinusZ = -2.5*np.log10(self.iMean/self.zMean) if self.iMean > 0 and self.zMean > 0 else -999
# -
# **Problem 2b**
#
# Confirm your solution to **2a** by measuring the mean colors of source `FAKE010`. Does your measurement make sense given the plot you made in **1c**?
# +
lc = ANTARESlc('testset_for_LSST_DSFP/FAKE010.dat')
lc.filter_flux()
lc.weighted_mean_flux()
lc.mean_colors()
print("The g'-r', r'-i', and i'-z' colors are: {:.3f}, {:.3f}, and {:.3f}, respectively.". format(lc.gMinusR, lc.rMinusI, lc.iMinusZ))
# -
# ## Problem 3) Store the sources in a database
#
# Building (and managing) a database from scratch is a challenging task. For (very) small projects one solution to this problem is to use [`SQLite`](http://sqlite.org/), which is a self-contained, publicly available SQL engine. One of the primary advantages of `SQLite` is that no server setup is required, unlike other popular tools such as PostgreSQL and MySQL. In fact, `SQLite` support is already built into Python, so everything we want to do (create a database, add tables, load data, write queries, etc.) can be done from within Python.
#
# Without diving too deep into the details, here are situations where `SQLite` has advantages and disadvantages [according to their own documentation](http://sqlite.org/whentouse.html):
#
# *Advantages*
#
# 1. Situations where expert human support is not needed
# 2. For basic data analysis (`SQLite` is easy to install and manage for new projects)
# 3. Education and training
#
# *Disadvantages*
#
# 1. Client/Server applications (`SQLite` does not behave well if multiple systems need to access db at the same time)
# 2. Very large data sets (`SQLite` stores entire db in a single disk file, other solutions can store data across multiple files/volumes)
# 3. High concurrency (Only 1 writer allowed at a time for `SQLite`)
#
# From the (limited) lists above, you can see that while `SQLite` is perfect for our application right now, if you were building an actual ANTARES-like system a more sophisticated database solution would be required.
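# The basic `sqlite3` workflow we are about to use (connect, get a cursor, create a table, insert, query) can be sketched with a throwaway in-memory database, entirely separate from `miniANTARES.db`:

```python
import sqlite3

# ":memory:" creates a temporary database that lives only for this connection
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# create a table, insert one row with a parameterized query, and read it back
cur.execute("create table demo(id integer primary key, name text)")
cur.execute("insert into demo(name) values (?)", ("FAKE001",))
cur.execute("select id, name from demo")
rows = cur.fetchall()
print(rows)  # [(1, 'FAKE001')] -- the primary key was assigned automatically
```

Because the database exists only in memory, nothing is written to disk and no cleanup is required.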
# **Problem 3a**
#
# Import sqlite3 into the notebook.
#
# *Hint* - `sqlite3` is part of the Python standard library, so no additional installation should be needed; if the import fails, check your Python installation.
import sqlite3
# Following the `sqlite3` import, we must first connect to the database. If we attempt a connection to a database that does not exist, then a new database is created. Here we will create a new database file, called `miniANTARES.db`.
conn = sqlite3.connect("miniANTARES.db")
# We now have a database connection object, `conn`. To interact with the database (create tables, load data, write queries) we need a cursor object.
cur = conn.cursor()
# Now that we have a cursor object, we can populate the database. As an example we will start by creating a table to hold all the raw photometry (though ultimately we will not use this table for analysis).
#
# *Note* - there are many cursor methods capable of interacting with the database. The most common, [`execute`](https://docs.python.org/3/library/sqlite3.html#sqlite3.Cursor.execute), takes a single `SQL` command as its argument and executes that command. Other useful methods include [`executemany`](https://docs.python.org/3/library/sqlite3.html#sqlite3.Cursor.executemany), which is useful for inserting data into the database, and [`executescript`](https://docs.python.org/3/library/sqlite3.html#sqlite3.Cursor.executescript), which take an `SQL` script as its argument and executes the script.
#
# In many cases, as below, it will be useful to use triple quotes in order to improve the legibility of your code.
cur.execute("""drop table if exists rawPhot""") # drop the table if it already exists
cur.execute("""create table rawPhot(
id integer primary key,
objId int,
t float,
pb varchar(1),
flux float,
dflux float)
""")
# Let's unpack everything that happened in these two commands. First - if the table `rawPhot` already exists, we drop it to start over from scratch. (this is useful here, but should not be adopted as general practice)
#
# Second - we create the new table `rawPhot`, which has 6 columns: `id` - a running index for every row in the table, `objId` - an ID to identify which source the row belongs to, `t` - the time of observation in MJD, `pb` - the passband of the observation, `flux` the observation flux, and `dflux` the uncertainty on the flux measurement. In addition to naming the columns, we also must declare their type. We have declared `id` as the primary key, which means this value will automatically be assigned and incremented for all data inserted into the database. We have also declared `pb` as a variable character of length 1, which is more useful and restrictive than simply declaring `pb` as `text`, which allows any freeform string.
#
# Now we need to insert the raw flux measurements into the database. To do so, we will use the `ANTARESlc` class that we created earlier. As an initial example, we will insert the first 3 observations from the source `FAKE001`.
# +
filename = "testset_for_LSST_DSFP/FAKE001.dat"
lc = ANTARESlc(filename)
objId = int(filename.split('FAKE')[1].split(".dat")[0])
cur.execute("""insert into rawPhot(objId, t, pb, flux, dflux) values {}""".format((objId,) + tuple(lc.DFlc.iloc[0])))
cur.execute("""insert into rawPhot(objId, t, pb, flux, dflux) values {}""".format((objId,) + tuple(lc.DFlc.iloc[1])))
cur.execute("""insert into rawPhot(objId, t, pb, flux, dflux) values {}""".format((objId,) + tuple(lc.DFlc.iloc[2])))
# -
# There are two things to highlight above: (1) we do not specify an id for the data as this is automatically generated, and (2) the data insertion happens via a tuple. In this case, we are taking advantage of the fact that Python tuples can be concatenated:
#
#     (objId,) + tuple(lc.DFlc.iloc[0])
#
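# The same row-at-a-time insert can also be written with `?` placeholders, letting `sqlite3` handle quoting and type conversion rather than splicing values into the SQL string. A sketch against a scratch in-memory copy of `rawPhot`, with a made-up observation tuple:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""create table rawPhot(
                   id integer primary key,
                   objId int, t float, pb varchar(1), flux float, dflux float)""")

objId = 1
obs = (36500.0, 'g', 12.3, 0.4)  # hypothetical (t, pb, flux, dflux) observation
# the same tuple-concatenation trick, but with "?" placeholders instead of .format()
cur.execute("insert into rawPhot(objId, t, pb, flux, dflux) values (?,?,?,?,?)",
            (objId,) + obs)
out = cur.execute("select objId, pb, flux from rawPhot").fetchall()
print(out)  # [(1, 'g', 12.3)]
```

Placeholders also guard against malformed strings breaking the query, which matters once file contents, rather than hand-typed values, are being loaded.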
# While the above example demonstrates the insertion of a single row to the database, it is far more efficient to bulk load the data. To do so we will delete, i.e. `DROP`, the rawPhot table and use some `pandas` manipulation to load the contents of an entire file at once via [`executemany`](https://docs.python.org/3/library/sqlite3.html#sqlite3.Cursor.executemany).
# +
cur.execute("""drop table if exists rawPhot""") # drop the table if it already exists
cur.execute("""create table rawPhot(
id integer primary key,
objId int,
t float,
pb varchar(1),
flux float,
dflux float)
""")
# next 3 lines are already in name space; repeated for clarity
filename = "testset_for_LSST_DSFP/FAKE001.dat"
lc = ANTARESlc(filename)
objId = int(filename.split('FAKE')[1].split(".dat")[0])
data = [(objId,) + tuple(x) for x in lc.DFlc.values] # array of tuples
cur.executemany("""insert into rawPhot(objId, t, pb, flux, dflux) values (?,?,?,?,?)""", data)
# -
# **Problem 3b**
#
# Load all of the raw photometric observations into the `rawPhot` table in the database.
#
# *Hint* - you can use [`glob`](https://docs.python.org/3/library/glob.html) to select all of the files being loaded.
#
# *Hint 2* - you have already loaded the data from `FAKE001` into the table.
# +
import glob
filenames = glob.glob("testset_for_LSST_DSFP/FAKE*.dat")
for filename in filenames[1:]:
lc = ANTARESlc(filename)
objId = int(filename.split('FAKE')[1].split(".dat")[0])
data = [(objId,) + tuple(x) for x in lc.DFlc.values] # array of tuples
cur.executemany("""insert into rawPhot(objId, t, pb, flux, dflux) values (?,?,?,?,?)""", data)
# -
# **Problem 3c**
#
# To ensure the data have been loaded properly, select the $r'$ light curve for source `FAKE010` from the `rawPhot` table and plot the results. Does it match the plot from **1c**?
# +
cur.execute("""select t, flux, dflux
               from rawPhot
               where objId = 10 and pb = 'r'""")
data = cur.fetchall()
data = np.array(data)
fig, ax = plt.subplots()
ax.errorbar(data[:,0], data[:,1], data[:,2], fmt = 'o', color = '#CE5A57')
ax.set_xlabel(r"$\mathrm{MJD}$")
ax.set_ylabel(r"$\mathrm{flux}$")
# -
# Now that we have loaded the raw observations, we need to create a new table to store summary statistics for each object. This table will include everything we've added to the `ANTARESlc` class.
cur.execute("""drop table if exists lcFeats""") # drop the table if it already exists
cur.execute("""create table lcFeats(
id integer primary key,
objId int,
gStd float,
rStd float,
iStd float,
zStd float,
gAmp float,
rAmp float,
iAmp float,
zAmp float,
gMAD float,
rMAD float,
iMAD float,
zMAD float,
gBeyond float,
rBeyond float,
iBeyond float,
zBeyond float,
gSkew float,
rSkew float,
iSkew float,
zSkew float,
gMinusR float,
rMinusI float,
iMinusZ float,
FOREIGN KEY(objId) REFERENCES rawPhot(objId)
)
""")
# This procedure should look familiar from above, with one exception: the addition of the `foreign key` in the `lcFeats` table. The inclusion of the `foreign key` ensures a connected relationship between `rawPhot` and `lcFeats`. In brief, a row cannot be inserted into `lcFeats` unless a corresponding row, i.e. `objId`, exists in `rawPhot`. Additionally, rows in `rawPhot` cannot be deleted if there are dependent rows in `lcFeats`.
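# One caveat worth knowing: the `sqlite3` module does *not* enforce foreign key constraints unless they are explicitly enabled per connection. A minimal sketch with scratch in-memory tables (not `miniANTARES.db`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("PRAGMA foreign_keys = ON")  # enforcement is OFF by default in sqlite3

cur.execute("create table parent(objId int primary key)")
cur.execute("""create table child(objId int,
               foreign key(objId) references parent(objId))""")

cur.execute("insert into parent values (10)")
cur.execute("insert into child values (10)")        # ok: objId 10 exists in parent
orphan_allowed = True
try:
    cur.execute("insert into child values (99)")    # no matching row in parent
except sqlite3.IntegrityError:
    orphan_allowed = False
print(orphan_allowed)  # False: the orphan row was rejected
```

Without the `PRAGMA`, the insert of `objId = 99` would silently succeed, so the constraint in `lcFeats` is only a declaration of intent unless enforcement is turned on.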
# **Problem 3d**
#
# Calculate features for every source in `rawPhot` and insert those features into the `lcFeats` table.
for filename in filenames:
lc = ANTARESlc(filename)
objId = int(filename.split('FAKE')[1].split(".dat")[0])
lc.filter_flux()
lc.weighted_mean_flux()
lc.normalized_flux_std()
lc.normalized_amplitude()
lc.normalized_MAD()
lc.normalized_beyond_1std()
lc.skew()
lc.mean_colors()
feats = (objId, lc.gStd, lc.rStd, lc.iStd, lc.zStd,
lc.gAmp, lc.rAmp, lc.iAmp, lc.zAmp,
lc.gMAD, lc.rMAD, lc.iMAD, lc.zMAD,
lc.gBeyond, lc.rBeyond, lc.iBeyond, lc.zBeyond,
lc.gSkew, lc.rSkew, lc.iSkew, lc.zSkew,
lc.gMinusR, lc.rMinusI, lc.iMinusZ)
cur.execute("""insert into lcFeats(objId,
gStd, rStd, iStd, zStd,
gAmp, rAmp, iAmp, zAmp,
gMAD, rMAD, iMAD, zMAD,
gBeyond, rBeyond, iBeyond, zBeyond,
gSkew, rSkew, iSkew, zSkew,
gMinusR, rMinusI, iMinusZ) values {}""".format(feats))
# **Problem 3e**
#
# Confirm that the data loaded correctly by counting the number of sources with `gAmp` > 2.
#
# How many sources have `gMinusR` = -999?
#
# *Hint* - you should find 9 and 2, respectively.
# +
cur.execute("""select count(*) from lcFeats where gAmp > 2""")
nAmp2 = cur.fetchone()[0]
cur.execute("""select count(*) from lcFeats where gMinusR = -999""")
nNoColor = cur.fetchone()[0]
print("There are {:d} sources with gAmp > 2".format(nAmp2))
print("There are {:d} sources with no measured g' - r' color".format(nNoColor))
# -
# Finally, we close by committing the changes we made to the database.
#
# Note that, strictly speaking, this is not needed; however, were we to update any values in the database then we would need to commit those changes.
conn.commit()
# **mini Challenge Problem**
#
# If there are fewer than 45 minutes remaining, please skip this part.
#
# Earlier it was claimed that bulk loading the data is faster than loading it line by line. For this problem, prove this assertion: use `%%timeit` to profile the two different options (bulk loading with `executemany` vs. loading one photometric measurement at a time via a for loop).
#
# *Hint* - to avoid corruption of your current working database, `miniANTARES.db`, create a new temporary database for the purpose of running this test. Also be careful with the names of your connection and cursor variables.
# +
# %%timeit
# bulk load solution
tmp_conn = sqlite3.connect("tmp1.db")
tmp_cur = tmp_conn.cursor()
tmp_cur.execute("""drop table if exists rawPhot""") # drop the table if it already exists
tmp_cur.execute("""create table rawPhot(
id integer primary key,
objId int,
t float,
pb varchar(1),
flux float,
dflux float)
""")
for filename in filenames:
lc = ANTARESlc(filename)
objId = int(filename.split('FAKE')[1].split(".dat")[0])
data = [(objId,) + tuple(x) for x in lc.DFlc.values] # array of tuples
tmp_cur.executemany("""insert into rawPhot(objId, t, pb, flux, dflux) values (?,?,?,?,?)""", data)
# +
# %%timeit
# line-by-line load solution
tmp_conn = sqlite3.connect("tmp1.db")
tmp_cur = tmp_conn.cursor()
tmp_cur.execute("""drop table if exists rawPhot""") # drop the table if it already exists
tmp_cur.execute("""create table rawPhot(
id integer primary key,
objId int,
t float,
pb varchar(1),
flux float,
dflux float)
""")
for filename in filenames:
lc = ANTARESlc(filename)
objId = int(filename.split('FAKE')[1].split(".dat")[0])
for obs in lc.DFlc.values:
tmp_cur.execute("""insert into rawPhot(objId, t, pb, flux, dflux) values {}""".format((objId,) + tuple(obs)))
# -
# ## Problem 4) Build a Classification Model
#
# One of the primary goals for ANTARES is to separate the wheat from the chaff: given that ~10 million alerts will be issued by LSST on a nightly basis, which single alert (or which 10, or 100) is the most interesting?
#
# Here we will build on the skills developed during the DSFP Session 2 to construct a machine-learning model to classify new light curves.
#
# Fortunately - the data that has already been loaded to miniANTARES.db is a suitable training set for the classifier (we simply haven't provided you with labels just yet). Execute the cell below to add a new table to the database which includes the appropriate labels.
# +
cur.execute("""drop table if exists lcLabels""") # drop the table if it already exists
cur.execute("""create table lcLabels(
objId int,
label int,
foreign key(objId) references rawPhot(objId)
)""")
labels = np.zeros(100)
labels[20:60] = 1
labels[60:] = 2
data = np.append(np.arange(1,101)[np.newaxis].T, labels[np.newaxis].T, axis = 1)
tup_data = [tuple(x) for x in data]
cur.executemany("""insert into lcLabels(objId, label) values (?,?)""", tup_data)
# -
# For now - don't worry about what the labels mean (though if you inspect the light curves you may be able to figure this out...)
#
# **Problem 4a**
#
# Query the database to select features and labels for the light curves in your training set. Store the results of these queries in `numpy` arrays, `X` and `y`, respectively, which are suitable for the various `scikit-learn` machine learning algorithms.
#
# *Hint* - recall that databases do not store ordered results.
#
# *Hint 2* - recall that `scikit-learn` expects `y` to be a 1d array. You will likely need to convert a 2d array to 1d.
# +
cur.execute("""select label
from lcLabels
order by objId asc""")
y = np.array(cur.fetchall()).ravel()
cur.execute("""select gStd, rStd, iStd, zStd,
gAmp, rAmp, iAmp, zAmp,
gMAD, rMAD, iMAD, zMAD,
gBeyond, rBeyond, iBeyond, zBeyond,
gSkew, rSkew, iSkew, zSkew,
gMinusR, rMinusI, iMinusZ
from lcFeats
order by objId asc""")
X = np.array(cur.fetchall())
# -
# **Problem 4b**
#
# Train a SVM model ([`SVC`](http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html#sklearn.svm.SVC) in `scikit-learn`) using a radial basis function (RBF) kernel with penalty parameter, $C = 1$, and kernel coefficient, $\gamma = 0.1$.
#
# Evaluate the accuracy of the model via $k = 5$ fold cross validation.
#
# *Hint* - you may find the [`cross_val_score`](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_val_score.html#sklearn.model_selection.cross_val_score) module helpful.
# +
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
cv_scores = cross_val_score(SVC(C = 1.0, gamma = 0.1, kernel = 'rbf'), X, y, cv = 5)
print("The SVM model produces a CV accuracy of {:.4f}".format(np.mean(cv_scores)))
# -
# The SVM model does a decent job of classifying the data. However, we are going to have 10 million alerts every night, so we need something that runs quickly. For most ML models the training step is slow, while predictions are (relatively) fast.
#
# **Problem 4c**
#
# Pick any other [classification model from `scikit-learn`](http://scikit-learn.org/stable/supervised_learning.html), and "profile" the time it takes to train that model vs. the time it takes to train an SVM model.
#
# Is the model that you have selected faster than SVM?
#
# *Hint* - you should import the model outside your timing loop as we only care about the training step in this case.
from sklearn.ensemble import RandomForestClassifier
rf_clf = RandomForestClassifier()
svm_clf = SVC(C = 1.0, gamma = 0.1, kernel = 'rbf')
# %%timeit
# timing solution for RF model
rf_clf.fit(X,y)
# %%timeit
# timing solution for SVM model
svm_clf.fit(X,y)
# **Problem 4d**
#
# Does the model you selected perform better than the SVM model? Perform a $k = 5$ fold cross validation to determine which model provides superior accuracy.
# +
cv_scores = cross_val_score(RandomForestClassifier(), X, y, cv = 5)
print("The RF model produces a CV accuracy of {:.4f}".format(np.mean(cv_scores)))
# -
# **Problem 4e**
#
# Which model are you going to use in your miniANTARES? Justify your answer.
# *Write solution to **4e** here*
#
# In this case we are going to adopt the SVM model as it is a factor of 20 times faster than RF, while providing nearly identical performance from an accuracy stand point.
# ## Problem 5) Class Predictions for New Sources
#
# Now that we have developed a basic infrastructure for dealing with streaming data, we may reap the rewards of our efforts. We will use our ANTARES-like software to classify newly observed sources.
# **Problem 5a**
#
# Load the light curves for the new observations (found in `full_testset_for_LSST_DSFP/`) into the `rawPhot` table in the database.
#
# *Hint* - ultimately it doesn't matter much one way or another, but you may choose to keep new observations in a table separate from the training data. Up to you.
# +
new_obs_filenames = glob.glob("full_testset_for_LSST_DSFP/FAKE*.dat")
for filename in new_obs_filenames:
lc = ANTARESlc(filename)
objId = int(filename.split('FAKE')[1].split(".dat")[0])
data = [(objId,) + tuple(x) for x in lc.DFlc.values] # array of tuples
cur.executemany("""insert into rawPhot(objId, t, pb, flux, dflux) values (?,?,?,?,?)""", data)
# -
# **Problem 5b**
#
# Calculate features for the new observations and insert those features into the `lcFeats` table.
#
# *Hint* - again, you may want to create a new table for this, up to you.
for filename in new_obs_filenames:
lc = ANTARESlc(filename)
objId = int(filename.split('FAKE')[1].split(".dat")[0])
lc.filter_flux()
lc.weighted_mean_flux()
lc.normalized_flux_std()
lc.normalized_amplitude()
lc.normalized_MAD()
lc.normalized_beyond_1std()
lc.skew()
lc.mean_colors()
feats = (objId, lc.gStd, lc.rStd, lc.iStd, lc.zStd,
lc.gAmp, lc.rAmp, lc.iAmp, lc.zAmp,
lc.gMAD, lc.rMAD, lc.iMAD, lc.zMAD,
lc.gBeyond, lc.rBeyond, lc.iBeyond, lc.zBeyond,
lc.gSkew, lc.rSkew, lc.iSkew, lc.zSkew,
lc.gMinusR, lc.rMinusI, lc.iMinusZ)
cur.execute("""insert into lcFeats(objId,
gStd, rStd, iStd, zStd,
gAmp, rAmp, iAmp, zAmp,
gMAD, rMAD, iMAD, zMAD,
gBeyond, rBeyond, iBeyond, zBeyond,
gSkew, rSkew, iSkew, zSkew,
gMinusR, rMinusI, iMinusZ) values {}""".format(feats))
# **Problem 5c**
#
# Train the model that you adopted in **4e** on the training set, and produce predictions for the newly observed sources.
#
# What is the class distribution for the newly detected sources?
#
# *Hint* - the training set was constructed to have a nearly uniform class distribution, that may not be the case for the actual observed distribution of sources.
# +
svm_clf = SVC(C=1.0, gamma = 0.1, kernel = 'rbf').fit(X, y)
cur.execute("""select gStd, rStd, iStd, zStd,
gAmp, rAmp, iAmp, zAmp,
gMAD, rMAD, iMAD, zMAD,
gBeyond, rBeyond, iBeyond, zBeyond,
gSkew, rSkew, iSkew, zSkew,
gMinusR, rMinusI, iMinusZ
from lcFeats
where objId > 100
order by objId asc""")
X_new = np.array(cur.fetchall())
y_preds = svm_clf.predict(X_new)
print("""There are {:d}, {:d}, and {:d} sources
in classes 0, 1, 2, respectively""".format(*list(np.bincount(y_preds)))) # be careful using bincount
# -
# **Problem 5d**
#
# What does the class distribution tell you about the model?
#
# Does it tell you anything about the survey that has been completed?
# *Write solution to 5d here*
# ## Problem 6) Anomaly Detection
#
# As we learned earlier - one of the primary goals of ANTARES is to reduce the stream of 10 million alerts on any given night to the single (or 10, or 100) most interesting objects. One possible definition of "interesting" is rarity - in which case it would be useful to add some form of anomaly detection to the pipeline. `scikit-learn` has [several different algorithms](http://scikit-learn.org/stable/auto_examples/covariance/plot_outlier_detection.html#sphx-glr-auto-examples-covariance-plot-outlier-detection-py) that can be used for anomaly detection. Here we will employ [isolation forest](http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.IsolationForest.html) which has many parallels to random forests, which we have previously learned about.
#
# In brief, isolation forest builds an ensemble of decision trees where the splitting parameter in each node of the tree is selected randomly. In each tree the number of branches necessary to isolate each source is measured - outlier sources will, on average, require fewer splittings to be isolated than sources in high-density regions of the feature space. Averaging the number of branchings over many trees results in a relative ranking of the anomalousness (*yes, I just made up a word*) of each source.
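# The intuition, that outliers require fewer random splits to isolate, can be checked on a toy 2-D Gaussian cloud with a single planted outlier (synthetic data, unrelated to the light-curve features above):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.RandomState(23)
X_toy = rng.normal(0, 1, size=(100, 2))   # dense 2-D Gaussian cloud
X_toy[0] = [8.0, 8.0]                     # plant one far-away outlier at index 0

iso = IsolationForest(n_estimators=100, random_state=23).fit(X_toy)
scores = iso.decision_function(X_toy)     # lower score = more anomalous
print(int(np.argmin(scores)))             # 0: the planted outlier is isolated fastest
```

The planted point sits many standard deviations from the cloud, so a handful of random axis-aligned splits suffice to separate it, giving it the lowest (most anomalous) score.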
# **Problem 6a**
#
# Using [`IsolationForest`](http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.IsolationForest.html) in `sklearn.ensemble` - determine the 10 most isolated sources in the data set.
#
# *Hint* - for `IsolationForest` you will want to use the `decision_function()` method rather than `predict_proba()`, which is what we have previously used with `sklearn.ensemble` models to get relative rankings from the model.
# +
from sklearn.ensemble import IsolationForest
isoF_clf = IsolationForest(n_estimators = 100)
isoF_clf.fit(X)
anomaly_score = isoF_clf.decision_function(X)
print("The 10 most anomalous sources are: {}".format(np.arange(1,101)[np.argsort(anomaly_score)[:10]]))
# -
# **Problem 6b**
#
# Plot the light curves of the 2 most anomalous sources.
#
# Can you identify why these sources have been selected as outliers?
# +
lc3 = ANTARESlc("testset_for_LSST_DSFP/FAKE003.dat")
lc3.plot_multicolor_lc()
lc9 = ANTARESlc("testset_for_LSST_DSFP/FAKE009.dat")
lc9.plot_multicolor_lc()
# -
# *Write solution to **6b** here*
#
# For source 3 - this is an extremely faint (and therefore likely a supernova) source with a mean flux that is almost 0 in all filters. This is likely why this source was selected as an outlier.
#
# For source 9 - the light curve looks like a supernova, but beyond that I do not have a good guess for why it is considered an anomaly. This may indicate that there aren't really many outliers in the data.
# ## Challenge Problem) Simulate a Real ANTARES
#
# The problem that we just completed features a key difference from the true ANTARES system - namely, all the light curves analyzed had a complete set of observations loaded into the database. One of the key challenges for LSST (and by extension ANTARES) is that the data will be *streaming* - new observations will be available every night, but the full light curves for all sources won't be available until the 10 yr survey is complete. In this problem, you will use the same data to simulate an LSST-like classification problem.
#
# Assume that your training set (i.e. the first 100 sources loaded into the database) were observed prior to LSST, thus, these light curves can still be used in their entirety to train your classification models. For the test set of observations, simulate LSST by determining the min and max observation date and take 1-d quantized steps through these light curves. On each day when there are new observations, update the feature calculations for every source that has been newly observed. Classify those sources and identify possible anomalies.
#
# Here are some things you should think about as you build this software:
#
# 1. Should you use the entire light curves for training-set objects when classifying sources with only a few data points?
# 2. How are you going to handle objects on the first epoch when they are detected?
# 3. What threshold (if any) are you going to set to notify the community about rarities that you have discovered
#
# *Hint* - Since you will be reading these light curves from the database (and not from text files) the `ANTARESlc` class that we previously developed will not be useful. You will (likely) either need to re-write this class to interact with the database or figure out how to massage the query results to comply with the class definitions.
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] papermill={} tags=["awesome-notebooks/Pandas/Pandas_Transform_dataframe_to_dict.ipynb"]
# <img width="10%" alt="Naas" src="https://landen.imgix.net/jtci2pxwjczr/assets/5ice39g4.png?w=160"/>
# + [markdown] papermill={} tags=["awesome-notebooks/Pandas/Pandas_Transform_dataframe_to_dict.ipynb"]
# # Pandas - Transform dataframe to dict
# <a href="https://app.naas.ai/user-redirect/naas/downloader?url=https://raw.githubusercontent.com/jupyter-naas/awesome-notebooks/master/Pandas/Pandas_Transform_dataframe_to_dict.ipynb" target="_parent"><img src="https://naasai-public.s3.eu-west-3.amazonaws.com/open_in_naas.svg"/></a>
# + [markdown] papermill={} tags=["awesome-notebooks/Pandas/Pandas_Transform_dataframe_to_dict.ipynb"]
# **Tags:** #pandas #dict #snippet #yahoofinance #naas_drivers #operations #jupyternotebooks
# + [markdown] papermill={} tags=["awesome-notebooks/Pandas/Pandas_Transform_dataframe_to_dict.ipynb"]
# **Author:** [<NAME>](https://www.linkedin.com/in/florent-ravenel/)
# + [markdown] papermill={} tags=["awesome-notebooks/Pandas/Pandas_Transform_dataframe_to_dict.ipynb"]
# With this template, you can convert a dataframe to a dictionary in pandas WITHOUT the index
# + [markdown] papermill={} tags=["awesome-notebooks/Pandas/Pandas_Transform_dataframe_to_dict.ipynb"]
# ## Input
# + [markdown] papermill={} tags=["awesome-notebooks/Pandas/Pandas_Transform_dataframe_to_dict.ipynb"]
# ### Import libraries
# + papermill={} tags=["awesome-notebooks/Pandas/Pandas_Transform_dataframe_to_dict.ipynb"]
from naas_drivers import yahoofinance
# + [markdown] papermill={} tags=["awesome-notebooks/Pandas/Pandas_Transform_dataframe_to_dict.ipynb"]
# ### Input parameters
# 👉 Here you can change the ticker, timeframe and add moving averages analysis
# + papermill={} tags=["awesome-notebooks/Pandas/Pandas_Transform_dataframe_to_dict.ipynb"]
ticker = "TSLA"
date_from = -100
date_to = 'today'
# + [markdown] papermill={} tags=["awesome-notebooks/Pandas/Pandas_Transform_dataframe_to_dict.ipynb"]
# ## Model
# + [markdown] papermill={} tags=["awesome-notebooks/Pandas/Pandas_Transform_dataframe_to_dict.ipynb"]
# ### Get dataframe from Yahoo Finance
# + papermill={} tags=["awesome-notebooks/Pandas/Pandas_Transform_dataframe_to_dict.ipynb"]
df_yahoo = yahoofinance.get(ticker,
date_from=-5)
df_yahoo
# + [markdown] papermill={} tags=["awesome-notebooks/Pandas/Pandas_Transform_dataframe_to_dict.ipynb"]
# ## Output
# + [markdown] papermill={} tags=["awesome-notebooks/Pandas/Pandas_Transform_dataframe_to_dict.ipynb"]
# ### Transform dataframe
# Parameter: `orient`
# - list: keys are column names, values are lists of column data
# - records: each row becomes a dictionary where key is column name and value is the data in the cell
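# The difference between the two orientations can be seen on a tiny toy frame (made-up values, independent of the Yahoo Finance pull above):

```python
import pandas as pd

toy = pd.DataFrame({"ticker": ["TSLA", "AAPL"], "close": [200.0, 150.0]})

print(toy.to_dict(orient="list"))
# {'ticker': ['TSLA', 'AAPL'], 'close': [200.0, 150.0]}
print(toy.to_dict(orient="records"))
# [{'ticker': 'TSLA', 'close': 200.0}, {'ticker': 'AAPL', 'close': 150.0}]
```

Note that neither orientation carries the index, which is why `records` is the usual choice for row-oriented consumers such as JSON APIs.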
# + papermill={} tags=["awesome-notebooks/Pandas/Pandas_Transform_dataframe_to_dict.ipynb"]
dict_yahoo = df_yahoo.to_dict(orient="records")
dict_yahoo
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: ece_506
# language: python
# name: ece_506
# ---
# ### Based on the example at [pytorch.org](https://pytorch.org/tutorials/intermediate/reinforcement_q_learning.html#replay-memory)
# +
import gym
import math
import random
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from collections import namedtuple
from itertools import count
from PIL import Image
# pytorch imports
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import torchvision.transforms as T
# %matplotlib inline
# +
env = gym.make('CartPole-v0').unwrapped
# set up matplotlib
is_ipython = 'inline' in matplotlib.get_backend()
if is_ipython:
from IPython import display
plt.ion()
# select gpu if available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# -
# ## Replay Memory
# +
Transition = namedtuple('Transition',
('state', 'action', 'next_state', 'reward'))
class ReplayMemory(object):
def __init__(self, capacity):
self.capacity = capacity
self.memory = []
self.position = 0
def push(self, *args):
"""Saves a transition."""
if len(self.memory) < self.capacity:
self.memory.append(None)
self.memory[self.position] = Transition(*args)
self.position = (self.position + 1) % self.capacity
def sample(self, batch_size):
return random.sample(self.memory, batch_size)
def __len__(self):
return len(self.memory)
# -
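# A quick sanity check of `ReplayMemory`: once more than `capacity` transitions are pushed, the position pointer wraps and the oldest entries are overwritten. The class is repeated below (same logic as above) so the cell runs standalone, with toy integers standing in for real tensors:

```python
import random
from collections import namedtuple

Transition = namedtuple('Transition', ('state', 'action', 'next_state', 'reward'))

class ReplayMemory:  # identical logic to the class above, repeated for a standalone cell
    def __init__(self, capacity):
        self.capacity, self.memory, self.position = capacity, [], 0
    def push(self, *args):
        if len(self.memory) < self.capacity:
            self.memory.append(None)
        self.memory[self.position] = Transition(*args)
        self.position = (self.position + 1) % self.capacity
    def sample(self, batch_size):
        return random.sample(self.memory, batch_size)
    def __len__(self):
        return len(self.memory)

memory = ReplayMemory(capacity=3)
for step in range(5):                      # 5 pushes into a 3-slot buffer
    memory.push(step, step, step + 1, 1.0)  # (state, action, next_state, reward)

print(len(memory))                             # 3: capped at capacity
print(sorted(t.state for t in memory.memory))  # [2, 3, 4]: the oldest two were overwritten
```

Sampling uniformly from this buffer (`memory.sample(batch_size)`) is what breaks the temporal correlation between consecutive transitions during training.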
# # DQN
class DQN(nn.Module):
def __init__(self, h, w, outputs):
super(DQN, self).__init__()
self.conv1 = nn.Conv2d(3, 16, kernel_size=5, stride=2)
self.bn1 = nn.BatchNorm2d(16)
self.conv2 = nn.Conv2d(16, 32, kernel_size=5, stride=2)
self.bn2 = nn.BatchNorm2d(32)
self.conv3 = nn.Conv2d(32, 32, kernel_size=5, stride=2)
self.bn3 = nn.BatchNorm2d(32)
def conv2d_size_out(size, kernel_size = 5, stride=2):
return (size - (kernel_size -1) - 1) // stride + 1
convw = conv2d_size_out(conv2d_size_out(conv2d_size_out(w)))
convh = conv2d_size_out(conv2d_size_out(conv2d_size_out(h)))
linear_input_size = convw * convh * 32
self.head = nn.Linear(linear_input_size, outputs)
def forward(self, x):
x = F.relu(self.bn1(self.conv1(x)))
x = F.relu(self.bn2(self.conv2(x)))
x = F.relu(self.bn3(self.conv3(x)))
return self.head(x.view(x.size(0), -1))
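As a worked example of the `conv2d_size_out` arithmetic in `DQN.__init__` (each conv layer maps a spatial size to `(size - (kernel_size - 1) - 1) // stride + 1`), here are the numbers for a hypothetical 40x90 screen, close to the shape this notebook mentions later:

```python
# Worked example of the output-size formula used by DQN.__init__ to size the
# linear head. For kernel_size=5, stride=2 this is (size - 5) // 2 + 1.
def conv2d_size_out(size, kernel_size=5, stride=2):
    return (size - (kernel_size - 1) - 1) // stride + 1

h, w = 40, 90  # hypothetical screen height/width

# three stacked conv layers shrink each dimension three times
convw = conv2d_size_out(conv2d_size_out(conv2d_size_out(w)))
convh = conv2d_size_out(conv2d_size_out(conv2d_size_out(h)))
print(convh, convw)            # 2 8
print(convh * convw * 32)      # 512 input features for the nn.Linear head
```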
# ## Input Extraction
# +
resize = T.Compose([T.ToPILImage(),
                    T.Resize(40, interpolation=Image.CUBIC),  # Image.BICUBIC on newer Pillow versions
T.ToTensor()])
def get_cart_location(screen_width):
world_width = env.x_threshold * 2
scale = screen_width / world_width
return int(env.state[0] * scale + screen_width / 2.0) # middle of cart
def get_screen():
screen = env.render(mode='rgb_array').transpose((2,0,1))
_, screen_height, screen_width = screen.shape
screen = screen[:, int(screen_height*0.4):int(screen_height * 0.8)]
view_width = int(screen_width *0.6)
cart_location = get_cart_location(screen_width)
if cart_location < view_width // 2:
slice_range = slice(view_width)
elif cart_location > (screen_width - view_width // 2):
slice_range = slice(-view_width, None)
else:
slice_range = slice(cart_location - view_width // 2,
cart_location + view_width // 2)
screen = screen[:, :, slice_range]
screen = np.ascontiguousarray(screen, dtype=np.float32) / 255
screen = torch.from_numpy(screen)
return resize(screen).unsqueeze(0).to(device)
env.reset()
plt.figure()
plt.imshow(get_screen().cpu().squeeze(0).permute(1,2,0).numpy(),
interpolation='none')
plt.title("Example extracted screen")
plt.show()
# -
# ## Training
# +
BATCH_SIZE = 128
GAMMA = 0.999
EPS_START = 0.9
EPS_END = 0.05
EPS_DECAY = 200
TARGET_UPDATE = 10
# Get screen size so that we can initialize layers correctly based on shape
# returned from AI gym. Typical dimensions at this point are close to 3x40x90
# which is the result of a clamped and down-scaled render buffer in get_screen()
init_screen = get_screen()
_, _, screen_height, screen_width = init_screen.shape
# Get number of actions from gym action space
n_actions = env.action_space.n
policy_net = DQN(screen_height, screen_width, n_actions).to(device)
target_net = DQN(screen_height, screen_width, n_actions).to(device)
target_net.load_state_dict(policy_net.state_dict())
target_net.eval()
optimizer = optim.RMSprop(policy_net.parameters())
memory = ReplayMemory(10000)
steps_done = 0
def select_action(state):
global steps_done
sample = random.random()
eps_threshold = EPS_END + (EPS_START - EPS_END) * \
math.exp(-1. * steps_done / EPS_DECAY)
steps_done += 1
if sample > eps_threshold:
with torch.no_grad():
# t.max(1) will return largest column value of each row.
# second column on max result is index of where max element was
# found, so we pick action with the larger expected reward.
return policy_net(state).max(1)[1].view(1, 1)
else:
return torch.tensor([[random.randrange(n_actions)]], device=device, dtype=torch.long)
episode_durations = []
def plot_durations(is_pause=True):
plt.figure(2)
plt.clf()
durations_t = torch.tensor(episode_durations, dtype=torch.float)
plt.title('Training...')
plt.xlabel('Episode')
plt.ylabel('Duration')
plt.plot(durations_t.numpy())
# Take 100 episode averages and plot them too
if len(durations_t) >= 100:
means = durations_t.unfold(0, 100, 1).mean(1).view(-1)
means = torch.cat((torch.zeros(99), means))
plt.plot(means.numpy())
if is_pause:
plt.pause(0.001) # pause a bit so that plots are updated
if is_ipython:
display.clear_output(wait=True)
display.display(plt.gcf())
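A standalone sketch of the epsilon-greedy schedule used in `select_action` above: the exploration threshold decays exponentially from `EPS_START` toward `EPS_END` as `steps_done` grows.

```python
import math

# same constants as defined earlier in this notebook
EPS_START, EPS_END, EPS_DECAY = 0.9, 0.05, 200

def eps_threshold(steps_done):
    # probability of taking a random (exploratory) action at this step
    return EPS_END + (EPS_START - EPS_END) * math.exp(-1. * steps_done / EPS_DECAY)

for steps in (0, 200, 1000):
    print(steps, round(eps_threshold(steps), 3))
# 0    0.9   -> fully exploratory at the start
# 200  0.363
# 1000 0.056 -> mostly greedy after a few hundred steps
```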
# + active=""
# env = gym.make('CartPole-v0')
# env.reset()
# plt.figure(figsize=(9,9))
# img = plt.imshow(env.render(mode='rgb_array')) # call this once
#
# for _ in range(200):
# img.set_data(env.render(mode='rgb_array')) # update the data
# display.display(plt.gcf())
# display.clear_output(wait=True)
# obs, reward, done, info = env.step(env.action_space.sample()) # take a random action
#
# env.close()
# -
def optimize_model():
if len(memory) < BATCH_SIZE:
return
transitions = memory.sample(BATCH_SIZE)
# Transpose the batch (see https://stackoverflow.com/a/19343/3343043 for
# detailed explanation). This converts batch-array of Transitions
# to Transition of batch-arrays.
batch = Transition(*zip(*transitions))
# Compute a mask of non-final states and concatenate the batch elements
# (a final state would've been the one after which simulation ended)
non_final_mask = torch.tensor(tuple(map(lambda s: s is not None,
batch.next_state)), device=device, dtype=torch.bool)
non_final_next_states = torch.cat([s for s in batch.next_state
if s is not None])
state_batch = torch.cat(batch.state)
action_batch = torch.cat(batch.action)
reward_batch = torch.cat(batch.reward)
# Compute Q(s_t, a) - the model computes Q(s_t), then we select the
# columns of actions taken. These are the actions which would've been taken
# for each batch state according to policy_net
state_action_values = policy_net(state_batch).gather(1, action_batch)
# Compute V(s_{t+1}) for all next states.
# Expected values of actions for non_final_next_states are computed based
# on the "older" target_net; selecting their best reward with max(1)[0].
# This is merged based on the mask, such that we'll have either the expected
# state value or 0 in case the state was final.
next_state_values = torch.zeros(BATCH_SIZE, device=device)
next_state_values[non_final_mask] = target_net(non_final_next_states).max(1)[0].detach()
# Compute the expected Q values
expected_state_action_values = (next_state_values * GAMMA) + reward_batch
# Compute Huber loss
loss = F.smooth_l1_loss(state_action_values, expected_state_action_values.unsqueeze(1))
# Optimize the model
optimizer.zero_grad()
loss.backward()
for param in policy_net.parameters():
param.grad.data.clamp_(-1, 1)
optimizer.step()
# +
num_episodes = 400
for i_episode in range(num_episodes):
# Initialize the environment and state
env.reset()
last_screen = get_screen()
current_screen = get_screen()
state = current_screen - last_screen
for t in count():
# Select and perform an action
action = select_action(state)
_, reward, done, _ = env.step(action.item())
reward = torch.tensor([reward], device=device)
# Observe new state
last_screen = current_screen
current_screen = get_screen()
if not done:
next_state = current_screen - last_screen
else:
next_state = None
# Store the transition in memory
memory.push(state, action, next_state, reward)
# Move to the next state
state = next_state
# Perform one step of the optimization (on the target network)
optimize_model()
if done:
episode_durations.append(t + 1)
plot_durations(False)
break
# Update the target network, copying all weights and biases in DQN
if i_episode % TARGET_UPDATE == 0:
target_net.load_state_dict(policy_net.state_dict())
print('Complete')
env.render()
env.close()
plt.ioff()
plt.show()
# -
plt.ion()
plot_durations()
plt.show()
| q_learning/dqn_with_pytorch.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
import glob
import os
import matplotlib.pyplot as plt
def GetRelevantTrackingData(fname):
DLC_track = glob.glob(path_for_DLCAnnotation + '\\'+ fname + 'DeepCut_' + '*.h5')[0]
temp = pd.read_hdf(DLC_track)
DLC = temp.loc[slice(None),(slice(None),'proboscisTip')]
DLC = DLC.droplevel(level = [0,1], axis = 1)
return(DLC)
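A toy illustration of the MultiIndex selection and `droplevel` step above, with a made-up scorer name and zero-filled data: DeepLabCut `.h5` tracks carry a three-level column index (scorer, bodypart, coordinate), and after selecting one bodypart we drop the first two levels to keep plain x/y/likelihood columns:

```python
import numpy as np
import pandas as pd

# hypothetical 3-level DLC-style columns: (scorer, bodypart, coordinate)
cols = pd.MultiIndex.from_product(
    [['DeepCut_model'], ['proboscisTip', 'head'], ['x', 'y', 'likelihood']])
temp = pd.DataFrame(np.zeros((3, 6)), columns=cols)

DLC = temp.loc[slice(None), (slice(None), 'proboscisTip')]  # keep one bodypart
DLC = DLC.droplevel(level=[0, 1], axis=1)                   # drop scorer/bodypart
print(list(DLC.columns))   # ['x', 'y', 'likelihood']
```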
# +
# use step 5 in moth learning as starting point to read out all the data that is included
direc = r"../../MothLearning/dataFolders/Output/Step5_FilesWith_TrueTrialAnd_ProboscisDetect_v2/"
mothlist = glob.glob(direc + '*.csv')
# -
path_for_DLCAnnotation = r"G:\My Drive\Tom-Tanvi\Shared With Bing,Tom and Tanvi\Video Analysis\DeepLabCut-ImageAnalysis\take5\outputFromDLC\VideoResults\EntireDataSet"
path_for_visit_frames = r"../../MothLearning/dataFolders/Output/Step5_FilesWith_TrueTrialAnd_ProboscisDetect_v2/"
# parameters used
visits = ['FirstVisit/', 'Later7thVisit/' ,'Later20thVisit/']
# , 'LastVisit/']
visit_num = [0, 6 ,19]
# , -1]
for visit in visits:
outpath = os.path.join(r"../dataFolders/PaperPipelineOutput/v3/RawTracks/" + visit)
if not os.path.exists(outpath):
try:
os.mkdir(outpath)
except OSError:
print ("Creation of the directory %s failed" % outpath)
# +
frameInfo = pd.DataFrame(data = None, columns = ['mothID', 'start', 'stop'])
m = []
fstart = []
fstop = []
for file in mothlist:
mothID = os.path.basename(file)[:-30]
print(mothID)
dlc = GetRelevantTrackingData(mothID)
path_frame_Reference = glob.glob(path_for_visit_frames + mothID + '_*RawDataForExplorationTime.csv')
if len(path_frame_Reference) > 1:
        print('referencing the wrong track file for %s' % mothID)
Visit_info_f = pd.read_csv(path_frame_Reference[0])
Visit_info = Visit_info_f[['MothIN', 'MothOut','ProboscisDetect']]
for visitnum, direc in zip(visit_num, visits):
outpath = os.path.join(r"../dataFolders/PaperPipelineOutput/v3/RawTracks/" + direc)
if len(Visit_info) <= visitnum + 1:
print(mothID + ' has less than %s visits' %str(visitnum + 1))
else:
# print('We are in the loop for %s' %str(visitnum + 1))
fin = Visit_info.iloc[visitnum, 0]
fout = Visit_info.iloc[visitnum, 2]
if np.isnan(fout):
fout = Visit_info.iloc[visitnum, 1]
print(fin, fout)
tracks = dlc[int(fin):int(fout)].copy()
tracks.to_csv(outpath + mothID + '_visit_' + str(visitnum) + '.csv')
del dlc
# -
# When considering the 7th visit, the number of moths with fewer than 7 visits is:
# c-1: 4,
# c-2: 5,
# c-3: 5, and
# c-10: 4
# When considering the 20th visit, the number of moths with fewer than 20 visits is:
# c-1: 10,
# c-2: 16,
# c-3: 14, and
# c-10: 9
| PaperPinelineCodes/0-GetRelevantProboscisTracks.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] pycharm={"name": "#%% md\n"}
# Two independent events can occur at the same or different times, and for
# independent events the probability that both occur is the product
# $$P(A \cap B) = P(A) \, P(B)$$
#
# The probability that at least one of two events occurs is the sum of the two
# probabilities minus the probability that both occur together:
# $$P(A \cup B) = P(A) + P(B) - P(A \cap B)$$
#
# Two disjoint (mutually exclusive) events cannot occur together, so
# $P(A \cap B) = 0$ and
# $$P(A \cup B) = P(A) + P(B)$$
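A quick exhaustive check of the standard rules (the product rule for independent events, the general union rule, and the disjoint special case) using two fair dice, whose 36 outcomes are enumerated rather than simulated:

```python
# Sanity-check probability identities on the sample space of two fair dice.
from fractions import Fraction
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))  # 36 equally likely outcomes

def prob(event):
    return Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

A = lambda o: o[0] == 6          # first die shows 6
B = lambda o: o[1] == 6          # second die shows 6 (independent of A)
C = lambda o: o[0] + o[1] == 2   # sum is 2, i.e. (1,1) -- disjoint from A

# Independent: P(A and B) = P(A) * P(B)
assert prob(lambda o: A(o) and B(o)) == prob(A) * prob(B)
# General union rule: P(A or B) = P(A) + P(B) - P(A and B)
assert prob(lambda o: A(o) or B(o)) == prob(A) + prob(B) - prob(lambda o: A(o) and B(o))
# Disjoint: P(A or C) = P(A) + P(C)
assert prob(lambda o: A(o) or C(o)) == prob(A) + prob(C)
print("all identities hold")
```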
| all_of_statistics/01. Probability.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Gesture Recognition
# In this group project, you are going to build a 3D Conv model that will be able to predict the 5 gestures correctly. Please import the following libraries to get started.
import numpy as np
import os
from scipy.misc import imread, imresize  # note: removed in SciPy >= 1.2; on newer installs use imageio.imread and a PIL/skimage resize
import datetime
# We set the random seed so that the results don't vary drastically.
np.random.seed(30)
import random as rn
rn.seed(30)
from keras import backend as K
import tensorflow as tf
tf.set_random_seed(30)
# In this block, you read the folder names for training and validation. You also set the `batch_size` here. Note that you set the batch size in such a way that you are able to use the GPU in full capacity. You keep increasing the batch size until the machine throws an error.
train_doc = np.random.permutation(open('/notebooks/storage/Final_data/Collated_training/train.csv').readlines())
val_doc = np.random.permutation(open('/notebooks/storage/Final_data/Collated_training/val.csv').readlines())
batch_size = #experiment with the batch size
# ## Generator
# This is one of the most important parts of the code. The overall structure of the generator has been given. In the generator, you are going to preprocess the images (since the dataset contains images of two different dimensions) and create a batch of video frames. You have to experiment with `img_idx`, `y`, `z` and normalization such that you get high accuracy.
def generator(source_path, folder_list, batch_size):
print( 'Source path = ', source_path, '; batch size =', batch_size)
img_idx = #create a list of image numbers you want to use for a particular video
while True:
t = np.random.permutation(folder_list)
num_batches = # calculate the number of batches
for batch in range(num_batches): # we iterate over the number of batches
batch_data = np.zeros((batch_size,x,y,z,3)) # x is the number of images you use for each video, (y,z) is the final size of the input images and 3 is the number of channels RGB
batch_labels = np.zeros((batch_size,5)) # batch_labels is the one hot representation of the output
for folder in range(batch_size): # iterate over the batch_size
imgs = os.listdir(source_path+'/'+ t[folder + (batch*batch_size)].split(';')[0]) # read all the images in the folder
                for idx,item in enumerate(img_idx): # Iterate over the frames/images of a folder to read them in
image = imread(source_path+'/'+ t[folder + (batch*batch_size)].strip().split(';')[0]+'/'+imgs[item]).astype(np.float32)
#crop the images and resize them. Note that the images are of 2 different shape
#and the conv3D will throw error if the inputs in a batch have different shapes
batch_data[folder,idx,:,:,0] = #normalise and feed in the image
batch_data[folder,idx,:,:,1] = #normalise and feed in the image
batch_data[folder,idx,:,:,2] = #normalise and feed in the image
batch_labels[folder, int(t[folder + (batch*batch_size)].strip().split(';')[2])] = 1
yield batch_data, batch_labels #you yield the batch_data and the batch_labels, remember what does yield do
# write the code for the remaining data points which are left after full batches
# Note here that a video is represented above in the generator as (number of images, height, width, number of channels). Take this into consideration while creating the model architecture.
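For concreteness, with hypothetical choices of 18 frames at 84x84 (you will pick your own `x`, `y`, `z` values), the arrays yielded by the generator have these shapes:

```python
import numpy as np

batch_size, x, y, z = 32, 18, 84, 84   # hypothetical choices; tune for your GPU
batch_data = np.zeros((batch_size, x, y, z, 3))   # videos: (frames, H, W, channels)
batch_labels = np.zeros((batch_size, 5))          # one-hot over the 5 gestures

batch_labels[0, 3] = 1   # e.g. first video in the batch belongs to gesture class 3
print(batch_data.shape)   # (32, 18, 84, 84, 3)
print(batch_labels[0])    # [0. 0. 0. 1. 0.]
```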
curr_dt_time = datetime.datetime.now()
train_path = '/notebooks/storage/Final_data/Collated_training/train'
val_path = '/notebooks/storage/Final_data/Collated_training/val'
num_train_sequences = len(train_doc)
print('# training sequences =', num_train_sequences)
num_val_sequences = len(val_doc)
print('# validation sequences =', num_val_sequences)
num_epochs = # choose the number of epochs
print ('# epochs =', num_epochs)
# ## Model
# Here you make the model using different functionalities that Keras provides. Remember to use `Conv3D` and `MaxPooling3D` and not `Conv2D` and `MaxPooling2D` for a 3D convolution model. You would want to use `TimeDistributed` while building a Conv2D + RNN model. Also remember that the last layer is the softmax. Design the network in such a way that the model is able to give good accuracy on the least number of parameters so that it can fit in the memory of the webcam.
# +
from keras.models import Sequential, Model
from keras.layers import Dense, GRU, Flatten, TimeDistributed, BatchNormalization, Activation
from keras.layers.convolutional import Conv3D, MaxPooling3D
from keras.callbacks import ModelCheckpoint, ReduceLROnPlateau
from keras import optimizers
#write your model here
# -
# Now that you have written the model, the next step is to `compile` the model. When you print the `summary` of the model, you'll see the total number of parameters you have to train.
optimiser = #write your optimizer
model.compile(optimizer=optimiser, loss='categorical_crossentropy', metrics=['categorical_accuracy'])
print (model.summary())
# Let us create the `train_generator` and the `val_generator` which will be used in `.fit_generator`.
train_generator = generator(train_path, train_doc, batch_size)
val_generator = generator(val_path, val_doc, batch_size)
# +
model_name = 'model_init' + '_' + str(curr_dt_time).replace(' ','').replace(':','_') + '/'
if not os.path.exists(model_name):
os.mkdir(model_name)
filepath = model_name + 'model-{epoch:05d}-{loss:.5f}-{categorical_accuracy:.5f}-{val_loss:.5f}-{val_categorical_accuracy:.5f}.h5'
checkpoint = ModelCheckpoint(filepath, monitor='val_loss', verbose=1, save_best_only=False, save_weights_only=False, mode='auto', period=1)
LR = # write the ReduceLROnPlateau code here
callbacks_list = [checkpoint, LR]
# -
# The `steps_per_epoch` and `validation_steps` are used by `fit_generator` to decide the number of next() calls it needs to make.
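The if/else logic that follows is just ceiling division; as a sanity check with hypothetical sequence counts, `math.ceil` gives the same step counts:

```python
import math

# hypothetical counts -- the real values come from train_doc / val_doc above
num_train_sequences, batch_size = 663, 64

# equivalent to the if/else ceiling-division blocks
steps_per_epoch = math.ceil(num_train_sequences / batch_size)
print(steps_per_epoch)          # 11 (10 full batches + 1 partial batch)
print(math.ceil(640 / 64))      # 10 (divides evenly, so no extra step)
```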
# +
if (num_train_sequences%batch_size) == 0:
steps_per_epoch = int(num_train_sequences/batch_size)
else:
steps_per_epoch = (num_train_sequences//batch_size) + 1
if (num_val_sequences%batch_size) == 0:
validation_steps = int(num_val_sequences/batch_size)
else:
validation_steps = (num_val_sequences//batch_size) + 1
# -
# Let us now fit the model. This will start training the model and with the help of the checkpoints, you'll be able to save the model at the end of each epoch.
model.fit_generator(train_generator, steps_per_epoch=steps_per_epoch, epochs=num_epochs, verbose=1,
callbacks=callbacks_list, validation_data=val_generator,
validation_steps=validation_steps, class_weight=None, workers=1, initial_epoch=0)
| 15. Gesture Recognition/.ipynb_checkpoints/Neural_Nets_Project_Starter_Code-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # The Robot World
#
# A robot, much like you, perceives the world through its "senses." For example, self-driving cars use video, radar, and lidar to observe the world around them. As cars gather data, they build up a 3D world of observations that tells the car where it is, where other objects (like trees, pedestrians, and other vehicles) are, and where it should be going!
#
# In this section, we'll work first with a 1D and then a 2D representation of the world, for simplicity and because two dimensions are often all you'll need to solve a certain problem.
# * You'll be given a set of quizzes to solve to build up your understanding of robot localization.
# * Try your best to solve these quizzes and consult the solution if you get stuck or want to confirm your answer.
#
# <img src="files/images/lidar.png" width="50%" height="50%">
#
#
# These grid representations of the environment are known as **discrete** representations. Discrete just means a limited number of places a robot can be (ex. in one grid cell). That's because robots, and autonomous vehicles like self-driving cars, use maps to figure out where they are, and maps lend themselves to being divided up into grids and sections.
#
# You'll see **continuous** probability distributions when locating objects that are moving around the robot. Continuous means that these objects can be anywhere around the robot and their movement is smooth.
#
# So, let's start with the 1D case.
# ### Robot World 1-D
#
# First, imagine you have a robot living in a 1-D world. You can think of a 1D world as a one-lane road.
#
# <img src="images/road_1.png" width="50%" height="50%">
#
# We can treat this road as an array, and break it up into grid cells for a robot to understand. In this case, the road is a 1D grid with 5 different spaces. The robot can only move forwards or backwards. If the robot falls off the grid, it will loop back around to the other side (this is known as a cyclic world).
#
# <img src="images/numbered_grid.png" width="50%" height="50%">
#
# ### Uniform Distribution
#
# The robot has a map so that it knows there are only 5 spaces in this 1D world. However, it hasn't sensed anything or moved. For a length of 5 cells (a list of 5 values), what is the probability distribution, `p`, that the robot is in any one of these locations?
#
# Since the robot does not know where it is at first, the probability of being in any space is the same! This is a probability distribution and so the sum of all these probabilities should be equal to 1, so `1/5 spaces = 0.2`. A distribution in which all the probabilities are the same (and we have maximum uncertainty) is called a **uniform distribution**.
#
# importing resources
import matplotlib.pyplot as plt
import numpy as np
# uniform distribution for 5 grid cells
p = [0.2, 0.2, 0.2, 0.2, 0.2]
print(p)
# I'll also include a helper function for visualizing this distribution. The below function, `display_map` will output a bar chart showing the probability that a robot is in each grid space. The y-axis has a range of 0 to 1 for the range of probabilities. For a uniform distribution, this will look like a flat line. You can choose the width of each bar to be <= 1 should you want to space these out.
# +
def display_map(grid, bar_width=1):
if(len(grid) > 0):
x_labels = range(len(grid))
plt.bar(x_labels, height=grid, width=bar_width, color='b')
plt.xlabel('Grid Cell')
plt.ylabel('Probability')
plt.ylim(0, 1) # range of 0-1 for probability values
plt.title('Probability of the robot being at each cell in the grid')
plt.xticks(np.arange(min(x_labels), max(x_labels)+1, 1))
plt.show()
else:
print('Grid is empty')
# call function on grid, p, from before
display_map(p)
# -
# Now, what about if the world was 8 grid cells in length instead of 5?
#
# ### QUIZ: Write a function that takes in the number of spaces in the robot's world (in this case 8), and returns the initial probability distribution `p` that the robot is in each space.
#
# This function should store the probabilities in a list. So in this example, there would be a list with 8 probabilities.
#
# **Solution**
#
# We know that all the probabilities in these locations should sum up to 1. So, one solution is to divide 1 by the number of grid cells, then append that value to a list whose length equals the passed-in number of grid cells.
# ex. initialize_robot(5) = [0.2, 0.2, 0.2, 0.2, 0.2]
def initialize_robot(grid_length):
''' Takes in a grid length and returns
a uniform distribution of location probabilities'''
p = []
# create a list that has the value of 1/grid_length for each cell
for i in range(grid_length):
p.append(1.0/grid_length)
return p
p = initialize_robot(8)
print(p)
display_map(p)
# +
# Here is what this distribution looks like, with some spacing
# so you can clearly see the probability that a robot is in each grid cell
p = initialize_robot(8)
print(p)
display_map(p, bar_width=0.9)
# -
# Now that you know how a robot initially sees a simple 1D world, let's learn about how it can locate itself by moving around and sensing its environment!
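As a tiny preview of the movement step, the cyclic wrap-around described earlier (a robot that falls off the grid loops back to the other side) is just index arithmetic modulo the grid length:

```python
# Sketch of cyclic movement in the 5-cell world: moving off either end wraps
# around, which is position arithmetic modulo the grid length.
def move(position, step, grid_length=5):
    return (position + step) % grid_length

print(move(4, 1))   # falls off the right edge -> wraps to 0
print(move(0, -1))  # falls off the left edge  -> wraps to 4
print(move(2, 7))   # multiple wraps -> 4
```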
| Robot_Localization/1. Robot World.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Section 7: Regression Model with NimbusML
#
# We will create an end-to-end regression model with the wine review dataset. The API of NimbusML is compatible with sklearn, so users who are already familiar with scikit-learn can get started right away. There are also some "advanced" techniques which can be helpful for optimal performance:
#
# 1. NimbusML pipelines
# 2. FileDataStream
# 3. Column operations and roles
# *Let's get started!!*
#
# Note that it would be useful to have this page opened for class reference:
#
# https://docs.microsoft.com/en-us/nimbusml
# ## 1. Quick Start
# The modeling data can be sourced from several different types. Most array-like structures are supported (e.g. lists, numpy arrays, dataframes, series etc.). Let’s look at a simple example.
# +
from nimbusml.linear_model import FastLinearClassifier
X = [[1,2,3],[2,3,4],[-1.2,-1,-7]]
Y = [0,0,1]
model = FastLinearClassifier()
model.fit(X,Y)
model.predict(X)
# -
# We can also use Pipeline to include more than one operator in the model, just like sklearn.
# +
from nimbusml import Pipeline
from nimbusml.preprocessing.missing_values import Handler as Missingval_Handler
model = Pipeline([
Missingval_Handler(), # issues handling integers, input needs to be float
FastLinearClassifier()
])
model.fit(X,Y)
scores, metrics = model.test(X,Y)
metrics
# -
# ## 2. Wine Review Example
# In this section, we are trying to develop a prediction model to use the review data and other information of the wine to predict its price. We will use NimbusML's text featurizer to extract numeric features from the review corpus using **pre-trained** language models.
#
# The dataset contains a mix of numeric, categorical and text features. This section will demonstrate how to build a pipeline of transforms and trainers that does the following.
#
# - Process data directly from files!
# - Filter records
# - New: how to apply transforms to just the columns of interest!
# - Using OneHotVectorizer to encode the categorical features
# - Use of NGramFeaturizer and WordEmbedding transform (a pre-trained DNN model) to convert text to numeric embeddings.
# - Feature selection using the CountSelector
# - Fitting a regression model
# ### 2.1 Data Preprocessing - Stream Data from Files
# +
from nimbusml import FileDataStream
# we don't use pandas DataFrame, but FileDataStream to improve performance
ds_train = FileDataStream.read_csv("data/wine_train.csv")
ds_test = FileDataStream.read_csv("data/wine_test.csv")
ds_train.head(3)
# -
ds_train.schema
# ### 2.2 Model Development
# Based on the data type, we want to develop a pipeline that applies different operators onto different columns. Note that this pipeline can definitely be improved to achieve better accuracy.
from IPython.display import Image
Image(filename='Graphics/1.png')
# +
from nimbusml.preprocessing.missing_values import Filter as Missingval_Filter
from nimbusml.feature_extraction.categorical import OneHotVectorizer
from nimbusml.feature_selection import CountSelector
from nimbusml.feature_extraction.text import NGramFeaturizer
from nimbusml.feature_extraction.text import WordEmbedding
from nimbusml.ensemble import LightGbmRegressor
from nimbusml import Role
# tk = TakeFilter(count = 100) #Always suggested to start with a TakeFilter to quickly examine the pipeline
ft = Missingval_Filter() << ['price']
# ft = Missingval_Filter(columns = ['price']) #Equivalent
onv = OneHotVectorizer() << ['country', 'province', 'region_1', 'variety']
cs = CountSelector(count = 2) << ['country', 'province', 'region_1', 'variety']
ng = NGramFeaturizer(output_tokens_column_name = 'description_TransformedText') << ['description']
we = WordEmbedding(model_kind = 'SentimentSpecificWordEmbedding') << ['description_TransformedText']
lgm = LightGbmRegressor() << {'Feature': ['country', 'province', 'region_1', 'variety',
'description_TransformedText', 'points'],
'Label': 'price'}
# lgm = LightGbmRegressor(feature = ['country', 'province', 'region_1', 'variety',
# 'description_TransformedText', 'points'],
# label = 'price') #Equivalent
model = Pipeline([ft, onv, cs, ng, we, lgm])
model.fit(ds_train)
# -
# Users can specify the input columns for the transform using:
#
# OneHotVectorizer(columns = ['country', 'province', 'region_1', 'variety'])
# or
#
# OneHotVectorizer() << ['country', 'province', 'region_1', 'variety']
# By default, the output column names are the same as the input (overwrite). Users can also specify new output column names, so that both the input and output columns are preserved.
#
# OneHotVectorizer(columns = {'country_out': 'country', 'variety_out': 'variety'})
# or
#
# OneHotVectorizer() << {'country_out': 'country', 'variety_out': 'variety'}
#
# For learners, users need to specify the roles for the columns by using:
#
# FastForestRegressor(feature = ['country', 'province'], label = 'price')
#
# The feature and label are the "roles" users need to specify. Notice that it is equivalent to use the shift operator:
#
# FastForestRegressor() << {Role.Feature: ['country', 'province'], Role.Label: 'price'}
#
# There are well-known column names with default roles: a column named "Features" is treated as the training features, and a column named "Label" is treated as the label by default. Note that these names appear to be case sensitive.
# We can also plot the pipeline using the plot function.
from nimbusml.utils.exports import img_export_pipeline
fig = img_export_pipeline(model,ds_train)
fig
# fig.render("Graphics/ppl1.png") # save this image to files
# ### 2.3 Model Evaluation
metrics, scores = model.test(ds_test, output_scores=True)
metrics
Image(filename='Graphics/2.png')
# ## 3. Recap
#
# In this tutorial, we presented an example to:
#
# 1. Use NimbusML pipeline
# 2. Train the model with FileDataStream
# 3. Column operation for transforms and learners:
#
# For Transforms, always use "columns = " (or "<<" is equivalent)
# For learners, specify roles by using "feature = ", "label = " (or "<< {'Feature': , 'Label': }")
#
# For more details about the package, please refer to:
#
# https://docs.microsoft.com/en-us/nimbusml
# ### Resources
# - [NimbusML FastLinearClassifier](https://docs.microsoft.com/en-us/python/api/nimbusml/nimbusml.linear_model.fastlinearclassifier?view=nimbusml-py-latest)
# - [NimbusML LightGbmRegressor](https://docs.microsoft.com/en-us/python/api/nimbusml/nimbusml.ensemble.lightgbmregressor?view=nimbusml-py-latest)
# - [Machine Learning at Microsoft with ML.NET](https://arxiv.org/pdf/1905.05715.pdf)
| Events and Hacks/OpenHack/Intro-to-Python-and-Machine-Learning/7_Regression Model with NimbusML.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Analysis of the IDP Knowledge Graph
#
# __Authors:__
# <NAME> ([ORCID:0000-0002-5711-4872](http://orcid.org/0000-0002-5711-4872)), _Heriot-Watt University, Edinburgh, UK_
#
# <NAME> ([ORCID:0000-0002-8110-7576](https://orcid.org/0000-0002-8110-7576)), _Heriot-Watt University, Edinburgh, UK_
#
# <NAME> ([ORCID:0000-0003-1691-8425](https://orcid.org/0000-0003-1691-8425)), _University of Padua, Italy_
#
# <NAME> ([ORCID:0000-0001-9224-9820](https://orcid.org/0000-0001-9224-9820)), _University of Padua, Italy_
#
# __License:__ Apache 2.0
#
# __Acknowledgements:__ This notebook was created during the Virtual BioHackathon-Europe 2020.
# ## Introduction
#
# This notebook contains SPARQL queries to perform a data analysis of the Intrinsically Disordered Protein (IDP) Knowledge Graph. The IDP knowledge graph was constructed from Bioschemas markup embedded in DisProt, MobiDb, and Protein Ensemble Database (PED) that was harvested using the Bioschemas Markup Scraper and Extractor and converted into a knowledge graph using the process in this [notebook](https://github.com/elixir-europe/BioHackathon-projects-2020/blob/master/projects/24/IDPCentral/notebooks/ETLProcess.ipynb).
# ### Library Imports
# Import and configure logging library
from datetime import datetime
import logging
logging.basicConfig(
filename='idpQuery.log',
filemode='w',
format='%(levelname)s:%(message)s',
level=logging.INFO)
logging.info('Starting processing at %s' % datetime.now().time())
# Imports from RDFlib
from rdflib import ConjunctiveGraph
# ### Result Display Function
#
# The following function takes the results of a `SPARQL SELECT` query and displays them using an HTML table for human viewing.
def displayResults(queryResult):
    from IPython.display import display, HTML
HTMLResult = '<table><tr style="color:white;background-color:#43BFC7;font-weight:bold">'
# print variable names and build header:
for varName in queryResult.vars:
HTMLResult = HTMLResult + '<td>' + varName + '</td>'
HTMLResult = HTMLResult + '</tr>'
# print values from each row and build table of results
for row in queryResult:
HTMLResult = HTMLResult + '<tr>'
for column in row:
#print("COLUMN:", column)
            if column is not None and str(column) != "":
                HTMLResult = HTMLResult + '<td>' + str(column) + '</td>'
            else:
                HTMLResult = HTMLResult + '<td>' + "N/A" + '</td>'
HTMLResult = HTMLResult + '</tr>'
HTMLResult = HTMLResult + '</table>'
display(HTML(HTMLResult))
# ## Loading IDP-KG
#
# The data is read from an N-Quads file (`IDPKG.nq`). The data is expected to be split across multiple named graphs, one per source from which the data was extracted, with provenance data in the default graph.
idpKG = ConjunctiveGraph()
idpKG.parse("IDPKG.nq", format="nquads")
logging.info("\tIDP-KG has %s statements." % len(idpKG))
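# As a sketch of the expected quad layout (all IRIs below are hypothetical, not taken from the real IDP-KG): each data quad names the graph it belongs to, while provenance statements about a graph carry no graph term and therefore land in the default graph.

```python
# Hypothetical quads illustrating the expected N-Quads layout.
quads = [
    # (subject, predicate, object, graph) - a data quad in a named graph
    ("<https://example.org/protein/P1>",
     "<https://schema.org/name>",
     '"Example protein"',
     "<https://example.org/graph/disprot>"),
    # provenance about the graph itself: no graph term, i.e. default graph
    ("<https://example.org/graph/disprot>",
     "<http://purl.org/pav/retrievedFrom>",
     "<https://example.org/source/disprot>",
     None),
]

named = [q for q in quads if q[3] is not None]
default = [q for q in quads if q[3] is None]
print(len(named), len(default))  # 1 1
```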
# ## Knowledge Graph Statistics
#
# This section reports various statistics about the IDP-KG. The choice of statistics was inspired by the [HCLS Dataset Description Community Profile](https://www.w3.org/TR/hcls-dataset/#s6_6).
# ### Number of Triples
displayResults(idpKG.query("""
SELECT (COUNT(*) AS ?triples)
WHERE {
{ ?s ?p ?o }
UNION
{ GRAPH ?g
{?s ?p ?o }
}
}
"""))
# ### Number of Typed Entities
#
# Note that we use the `DISTINCT` keyword in the query since the same entity can appear in multiple named graphs.
displayResults(idpKG.query("""
SELECT (COUNT(DISTINCT ?s) AS ?entities)
WHERE {
{ ?s a [] }
UNION
{ GRAPH ?g
{ ?s a [] }
}
}
"""))
# ### Number of Unique Subjects
displayResults(idpKG.query("""
SELECT (COUNT(DISTINCT ?s) AS ?subjects)
WHERE {
{ ?s ?p ?o }
UNION
{ GRAPH ?g
{ ?s ?p ?o }
}
}
"""))
# ### Number of Unique Properties
displayResults(idpKG.query("""
SELECT (COUNT(DISTINCT ?p) AS ?properties)
WHERE {
{ ?s ?p ?o }
UNION
{ GRAPH ?g
{ ?s ?p ?o }
}
}
"""))
# ### Number of Unique Objects
displayResults(idpKG.query("""
SELECT (COUNT(DISTINCT ?o) AS ?objects)
WHERE {
{ ?s ?p ?o }
UNION
{ GRAPH ?g
{ ?s ?p ?o }
}
FILTER(!isLiteral(?o))
}
"""))
# ### Number of Unique Classes
displayResults(idpKG.query("""
SELECT (COUNT(DISTINCT ?o) AS ?classes)
WHERE {
{ ?s a ?o }
UNION
{ GRAPH ?g
{ ?s a ?o }
}
}
"""))
# ### Number of Unique Literals
displayResults(idpKG.query("""
SELECT (COUNT(DISTINCT ?o) AS ?objects)
WHERE {
{ ?s ?p ?o }
UNION
{ GRAPH ?g
{ ?s ?p ?o }
}
FILTER(isLiteral(?o))
}
"""))
# ### Number of Graphs
displayResults(idpKG.query("""
SELECT (COUNT(DISTINCT ?g) AS ?graphs)
WHERE {
GRAPH ?g
{ ?s ?p ?o }
}
"""))
# ### Instances per Class
displayResults(idpKG.query("""
PREFIX schema: <https://schema.org/>
PREFIX pav: <http://purl.org/pav/>
SELECT ?Class (COUNT(DISTINCT ?s) AS ?distinctInstances)
WHERE {
GRAPH ?g {
?s a ?Class
}
}
GROUP BY ?Class
ORDER BY ?Class
"""))
# ### Properties and their Occurrence
displayResults(idpKG.query("""
PREFIX schema: <https://schema.org/>
PREFIX pav: <http://purl.org/pav/>
SELECT ?p (COUNT(?p) AS ?triples)
WHERE {
{ ?s ?p ?o }
UNION
{
GRAPH ?g {
?s ?p ?o
}
}
}
GROUP BY ?p
ORDER BY ?p
"""))
# ### Property, number of unique typed subjects, and triples
displayResults(idpKG.query("""
PREFIX schema: <https://schema.org/>
PREFIX pav: <http://purl.org/pav/>
SELECT (COUNT(DISTINCT ?s) AS ?scount) ?stype ?p (COUNT(?p) AS ?triples)
WHERE {
{
?s ?p ?o .
?s a ?stype
}
UNION
{
GRAPH ?g {
?s ?p ?o .
?s a ?stype
}
}
}
GROUP BY ?p ?stype
ORDER BY ?stype ?p
"""))
# ### Number of Unique Typed Objects Linked to a Property
displayResults(idpKG.query("""
PREFIX schema: <https://schema.org/>
PREFIX pav: <http://purl.org/pav/>
SELECT ?p (COUNT(?p) AS ?triples) ?otype (COUNT(DISTINCT ?o) AS ?count)
WHERE {
{ ?s ?p ?o }
UNION
{
GRAPH ?g {
?s ?p ?o
}
}
}
GROUP BY ?p ?otype
ORDER BY ?p
"""))
# ### Triples and Number of Unique Literals Related to a Property
displayResults(idpKG.query("""
PREFIX schema: <https://schema.org/>
PREFIX pav: <http://purl.org/pav/>
SELECT ?p (COUNT(?p) AS ?triples) (COUNT(DISTINCT ?o) AS ?literals)
WHERE {
{ ?s ?p ?o }
UNION
{
GRAPH ?g {
?s ?p ?o
}
}
FILTER (isLiteral(?o))
}
GROUP BY ?p
ORDER BY ?p
"""))
# ### Number of Unique Subject Types Linked to Unique Object Types
displayResults(idpKG.query("""
PREFIX schema: <https://schema.org/>
PREFIX pav: <http://purl.org/pav/>
SELECT (COUNT(DISTINCT ?s) AS ?scount) ?stype ?p ?otype (COUNT(DISTINCT ?o) AS ?ocount)
WHERE {
{
?s ?p ?o .
?s a ?stype .
?o a ?otype .
}
UNION
{
GRAPH ?g {
?s ?p ?o .
?s a ?stype .
?o a ?otype .
}
}
}
GROUP BY ?p ?stype ?otype
ORDER BY ?p
"""))
# ## Find proteins in multiple datasets
#
# Provenance information is stored in the default graph as annotations on the named graphs.
#
# A protein comes from multiple sources if the triple is found in multiple named graphs. The number of named graphs containing the triple indicates the number of sources containing the triple.
displayResults(idpKG.query("""
PREFIX schema: <https://schema.org/>
PREFIX pav: <http://purl.org/pav/>
SELECT ?protein (COUNT(?g) as ?numSources) (GROUP_CONCAT(?source;SEPARATOR=", ") AS ?sources)
WHERE {
GRAPH ?g {
?protein a schema:Protein .
}
?g pav:retrievedFrom ?source .
}
GROUP BY ?protein
HAVING (COUNT(*) > 1)
"""))
# ## Find proteins with annotations in multiple datasets
#
# We are looking for annotations where the protein is common but the annotation is different across the datasets.
#
# First we'll write a query to find the proteins with annotations and return the provenance of where the annotation has come from.
displayResults(idpKG.query("""
PREFIX pav: <http://purl.org/pav/>
PREFIX schema: <https://schema.org/>
SELECT DISTINCT ?protein ?proteinName ?source1 ?annotation1 ?annotation2 ?source2
WHERE {
GRAPH ?g1 {
?protein a schema:Protein ;
schema:hasSequenceAnnotation ?annotation1 .
OPTIONAL {?protein schema:name ?proteinName .}
}
?g1 pav:retrievedFrom ?source1 .
GRAPH ?g2 {
?protein a schema:Protein ;
schema:hasSequenceAnnotation ?annotation2
}
?g2 pav:retrievedFrom ?source2 .
FILTER(?g1 != ?g2)
}
"""))
# The following query finds, for each protein, its name (if known), a count of its sequence annotations, and a count of the sources from which the data was extracted. Results are only returned when there are annotations from more than one source.
displayResults(idpKG.query("""
PREFIX pav: <http://purl.org/pav/>
PREFIX schema: <https://schema.org/>
SELECT ?protein (SAMPLE(?proteinName) AS ?name) (COUNT(distinct ?annotation) AS ?annotationCount) (COUNT(distinct ?source) AS ?sourceCount)
WHERE {
{
SELECT DISTINCT ?protein ?proteinName
WHERE {
GRAPH ?g {
?protein a schema:Protein .
OPTIONAL {?protein schema:name ?proteinName .}
}
}
}
{
SELECT ?annotation ?source ?protein
WHERE {
GRAPH ?g {
?protein schema:hasSequenceAnnotation ?annotation
}
?g pav:retrievedFrom ?source .
}
}
}
GROUP BY ?protein
HAVING (COUNT(distinct ?source) > 1)
ORDER BY DESC(?annotationCount)
"""))
# The following variant of the query lists the annotations and the source from which each annotation came.
displayResults(idpKG.query("""
PREFIX pav: <http://purl.org/pav/>
PREFIX schema: <https://schema.org/>
SELECT ?protein ?proteinName ?annotation ?source
WHERE {
{
SELECT DISTINCT ?protein ?proteinName
WHERE {
GRAPH ?g {
?protein a schema:Protein .
OPTIONAL {?protein schema:name ?proteinName .}
}
}
}
{
SELECT ?annotation ?source ?protein
WHERE {
GRAPH ?g {
?protein schema:hasSequenceAnnotation ?annotation
}
?g pav:retrievedFrom ?source .
}
}
}
ORDER BY ?protein ?annotation
"""))
| projects/24/IDPCentral/notebooks/AnalysisQueries.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Workshop Installation Guide
# # A tutorial series on how to use and develop a WeChat chat-bot
# # A workshop to develop & use an intelligent and interactive chat-bot in WeChat
# ### WeChat is a popular social media app, which has more than 800 million monthly active users.
#
# <img src='http://www.kudosdata.com/wp-content/uploads/2016/11/cropped-KudosLogo1.png' width=30% style="float: right;">
# <img src='wechat_tool/reference/WeChat_SamGu_QR.png' width=10% style="float: right;">
#
# ### http://www.KudosData.com
#
# by: <EMAIL>
#
#
# April 2017 ============= Scan the QR code to become trainer's friend in WeChat ===========>>
# # Option 1: Use Cloud Platform (Difficulty level: Easy, like being a boss)
# !python --version
# !pip install -U html
# !pip install -U pyqrcode
# !pip install -U config
# !pip install -U backports.tempfile
# !mv docs org_docs
# ### Download and install WeChat API-2
# !yes | pip uninstall itchat
# !rm -rf ItChat
# !git clone https://github.com/telescopeuser/ItChat.git
# !cp -r ItChat/* .
# !python setup.py install
# ### Housekeeping after installation
# +
# !rm -rf itchat
# !rm -rf ItChat
# !rm -rf wxpy
# !rm -rf README*
# !rm -rf LICENSE
# !rm -rf MANIFEST*
# !rm -rf mkdocs*
# !rm -rf build
# !rm -rf dist
# !rm -rf docs*
# !rm -rf requirements.txt
# !rm -rf setup.py
# !rm -rf *.egg-info
# !mv org_docs docs
# -
# !pip install -U google-api-python-client
# !pip install -U gTTS
# !apt-get update -y
# !apt-get install libav-tools -y
# !avconv -version
# ### If the above installation steps completed without error, the installation is successful.
# # You are now ready to rock! Go to folder: workshop_blog/wechat_tool, open Notebook and follow...
#
# <img src='./wechat_tool/reference/setup_ref_01.png' width=100% style="float: left;">
print('')
print('+-------------------------------------------------------------------------------------------------+')
print('| www.KudosData.com: Google Cloud Datalab Python 2 setup successful! |')
print('| You are now ready to rock! Go to folder: workshop_blog/wechat_tool, open Notebook and follow... |')
print('+-------------------------------------------------------------------------------------------------+')
| uat_wechat/setup_cloud.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
from tensorflow import keras
tf.keras.backend.clear_session()
# -
from src.models import base_model, mlp_model
from src import custom_losses, custom_metrics, optimizers
from src.data import data
from importlib import reload
reload(mlp_model)
batch_size = 64
n_classes = 6
epochs = 100
img_size = 128
n_channels = 3
paths = data.PATH()
dataset_path = f'{paths.PROCESSED_DATA_PATH}/REI-Dataset/'
model = mlp_model.MLP_Model(batch_size, n_classes, epochs, img_size, n_channels)
train_generator, validation_generator = model.get_image_data_generator(dataset_path, train=True, validation=True)
weights = model.get_class_weights(train_generator.classes, model)
model.compile(loss=custom_losses.weighted_categorical_crossentropy(weights), metrics=['categorical_accuracy'],)
model.model.summary()
steps_per_epoch, validation_steps = len(train_generator.classes), len(validation_generator.classes)
# +
# keras.backend.get_session().run(tf.global_variables_initializer())
# -
model.fit_from_generator(path=dataset_path, train_steps_per_epoch=steps_per_epoch,
validation_steps_per_epoch=validation_steps,
train_generator=train_generator, validation_generator=validation_generator,
evaluate_net=True, use_model_check_point=False,
use_early_stop=True, weighted=True)
model.model_is_trained
model.save_model()
# ### Results
path = '/home/ifranco/tf_real_estate_images_classification/models/MLP_Model/exp_1_2019-11-19__17_07/'
model = base_model.BaseModel.load_model(path)
# +
paths = data.PATH()
dataset_path = f'{paths.PROCESSED_DATA_PATH}/REI-Dataset/'
# -
train_generator, validation_generator, test_generator = model.get_image_data_generator(f'{dataset_path}', train=True, validation=True, test=True,
class_mode_validation='categorical', class_mode_test='categorical')
weights = model.get_class_weights(train_generator.classes, model)
model.compile(loss=custom_losses.weighted_categorical_crossentropy(weights), metrics=['categorical_accuracy'],)
model.scores = model.evaluate_from_generator(f'{dataset_path}', test_generator=validation_generator)
model.scores_test = model.evaluate_from_generator(f'{dataset_path}', test_generator=test_generator)
model.save_model()
| notebooks/MLP_Model_v2.0.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import cv2
# +
def angle_cos(p0, p1, p2):
d1, d2 = (p0-p1).astype('float'), (p2-p1).astype('float')
return abs( np.dot(d1, d2) / np.sqrt( np.dot(d1, d1)*np.dot(d2, d2) ) )
img = cv2.imread('test.jpg')
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# img = cv2.GaussianBlur(img, (3, 3), 0)
# img = cv2.Canny(img, 0, 50, apertureSize=3)
# img = cv2.dilate(img, None)
# img = cv2.erode(img, np.ones((1,1)))
retval, thresh = cv2.threshold(img, 10, 100, cv2.THRESH_TRIANGLE)
bin, contours, hierarchy = cv2.findContours(img, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
contours = sorted(contours, key = cv2.contourArea, reverse = True)#[:100]
squares = []
for cnt in contours:
cnt_len = cv2.arcLength(cnt, True)
cnt = cv2.approxPolyDP(cnt, 0.09*cnt_len, True)
if ((len(cnt) == 4) and
(cv2.contourArea(cnt) > 1000) and
# (cv2.contourArea(cnt) < 50000) and
cv2.isContourConvex(cnt) ):
cnt = cnt.reshape(-1, 2)
max_cos = np.max([angle_cos( cnt[i], cnt[(i+1) % 4], cnt[(i+2) % 4] )
for i in range(4)])
if max_cos < 0.1:
squares.append(cnt)
bin = cv2.cvtColor(bin, cv2.COLOR_GRAY2BGR)
cv2.drawContours( bin, contours,
-1, (0, 0, 255), 1 )
items = [
img,
thresh,
bin
]
fig, axes = plt.subplots(len(items),1, figsize=(len(items)*3,12))
for ax,im in zip(axes.flatten(), items):
plt.sca(ax)
plt.imshow(im, interpolation='nearest', cmap=plt.cm.gray)
# +
img = cv2.imread('test.jpg')
img = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray,0,255,apertureSize = 3)
gray = cv2.cvtColor(gray,cv2.COLOR_GRAY2BGR)
minLineLength = 1
maxLineGap = 1
lines = cv2.HoughLinesP(edges, 3, np.pi/180, 100,
                        minLineLength=minLineLength, maxLineGap=maxLineGap)
# iterate over all detected segments; each entry has shape (1, 4)
for line in lines:
    x1, y1, x2, y2 = line[0]
    cv2.line(gray, (x1, y1), (x2, y2), (0, 255, 0), 2)
# lines = cv2.HoughLines(edges,1,np.pi/180,200)
# for rho,theta in lines[0]:
# a = np.cos(theta)
# b = np.sin(theta)
# x0 = a*rho
# y0 = b*rho
# x1 = int(x0 + 1000*(-b))
# y1 = int(y0 + 1000*(a))
# x2 = int(x0 - 1000*(-b))
# y2 = int(y0 - 1000*(a))
# cv2.line(img,(x1,y1),(x2,y2),(255,0,0),6)
items = [
img,
gray,
edges
]
fig, axes = plt.subplots(len(items),1, figsize=(12, len(items)*6))
for ax,im in zip(axes.flatten(), items):
plt.sca(ax)
plt.imshow(im, interpolation='nearest', cmap=plt.cm.gray)
# -
| notebook/20161120_opencv.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + colab={} colab_type="code" id="f9ySOjrcc0Yp"
##### Copyright 2020 The TensorFlow Authors.
# + cellView="form" colab={} colab_type="code" id="bl9GdT7h0Hxk"
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] colab_type="text" id="WhwgQAn50EZp"
# # TensorFlow Addons Networks : Sequence-to-Sequence NMT with Attention Mechanism
#
# <table class="tfo-notebook-buttons" align="left">
# <td>
# <a target="_blank" href="https://www.tensorflow.org/addons/tutorials/networks_seq2seq_nmt"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
# </td>
# <td>
# <a target="_blank" href="https://colab.research.google.com/github/tensorflow/addons/blob/master/docs/tutorials/networks_seq2seq_nmt.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
# </td>
# <td>
# <a target="_blank" href="https://github.com/tensorflow/addons/blob/master/docs/tutorials/networks_seq2seq_nmt.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
# </td>
# <td>
# <a href="https://storage.googleapis.com/tensorflow_docs/addons/docs/tutorials/networks_seq2seq_nmt.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
# </td>
# </table>
# + [markdown] colab_type="text" id="ip0n8178Fuwm"
# ## Overview
# This notebook gives a brief introduction to the ***Sequence to Sequence Model Architecture***.
# In this notebook we broadly cover four essential topics of Neural Machine Translation:
#
#
# * **Data cleaning**
# * **Data preparation**
# * **Neural Translation Model with Attention**
# * **Final Translation**
#
# At its core, such a model is simply an encoder-decoder architecture. These networks are used for a variety of tasks such as text summarization, machine translation, and image captioning. This tutorial provides a hands-on understanding of the concept, explaining the technical jargon where necessary. We focus on the task of Neural Machine Translation (NMT), which was the very first testbed for seq2seq models.
#
# + [markdown] colab_type="text" id="YNiadLKNLleD"
# ## Setup
# + [markdown] colab_type="text" id="82GcQTsGf414"
# ## Additional Resources:
#
# These are the resources you must download in order to run this notebook:
#
#
# 1. [German-English Dataset](http://www.manythings.org/anki/deu-eng.zip)
#
#
# The dataset must be downloaded in order to run this notebook. Pretrained embeddings could be used instead; here, however, we carry out our own training.
#
# -
# !pip install -U tensorflow-addons
# !pip install nltk scikit-learn
# + colab={} colab_type="code" id="5OIlpST_6ga-"
#download data
print("Downloading Dataset:")
# !wget --quiet http://www.manythings.org/anki/deu-eng.zip
# !unzip -o deu-eng.zip
# + colab={"height": 34} colab_type="code" id="co6-YpBwL-4d" outputId="6571961c-8f50-4333-9b1d-5eb1a157f4f8"
import csv
import string
import re
from typing import List, Tuple
from pickle import dump
from unicodedata import normalize
import numpy as np
import itertools
from pickle import load
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.utils import plot_model
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Embedding
from pickle import load
import random
import tensorflow as tf
from tensorflow.keras.models import load_model
from nltk.translate.bleu_score import corpus_bleu
from sklearn.model_selection import train_test_split
import tensorflow_addons as tfa
# + [markdown] colab_type="text" id="q7gjUT_9XSoj"
# ## Data Cleaning
#
# Our dataset is a German-English translation dataset. It contains 152,820 pairs of English-German phrases, one pair per line, with a tab separating the two languages. Although organized, the dataset needs cleaning before we can work with it; this removes irregularities that could disturb training. We also add a start-of-sentence `<start>` and an end-of-sentence `<end>` token so that the model knows when to start and stop predicting.
# + colab={} colab_type="code" id="6ZIu-TNqKFsd"
# Start of sentence
SOS = "<start>"
# End of sentence
EOS = "<end>"
# Relevant punctuation
PUNCTUATION = set("?,!.")
def load_dataset(filename: str) -> str:
"""
load dataset into memory
"""
with open(filename, mode="rt", encoding="utf-8") as fp:
return fp.read()
def to_pairs(dataset: str, limit: int = None, shuffle=False) -> List[Tuple[str, str]]:
"""
Split dataset into pairs of sentences, discards dataset line info.
e.g.
input -> 'Go.\tGeh.\tCC-BY 2.0 (France) Attribution: tatoeba.org
#2877272 (CM) & #8597805 (Roujin)'
output -> [('Go.', 'Geh.')]
:param dataset: dataset containing examples of translations between
two languages
the examples are delimited by `\n` and the contents of the lines are
delimited by `\t`
:param limit: number that limit dataset size (optional)
:param shuffle: shuffle the lines before selecting examples (default False)
:return: list of pairs
"""
assert isinstance(limit, (int, type(None))), TypeError(
"the limit value must be an integer"
)
lines = dataset.strip().split("\n")
# Randomize dataset order
if shuffle is True:
random.shuffle(lines)
number_examples = limit or len(lines) # if None get all
pairs = []
for line in lines[: abs(number_examples)]:
# take only source and target
src, trg, _ = line.split("\t")
pairs.append((src, trg))
# dataset size check
assert len(pairs) == number_examples
return pairs
def separe_punctuation(token: str) -> str:
"""
Separate punctuation from adjacent characters, if present
"""
if not set(token).intersection(PUNCTUATION):
return token
for p in PUNCTUATION:
token = f" {p} ".join(token.split(p))
return " ".join(token.split())
def preprocess(sentence: str, add_start_end: bool=True) -> str:
"""
- convert to lowercase
- remove tokens containing numbers
- remove special characters
- separate punctuation from words
- add start-of-sentence <start> and end-of-sentence <end> tokens
:param add_start_end: add SOS (start-of-sentence) and EOS (end-of-sentence)
"""
re_print = re.compile(f"[^{re.escape(string.printable)}]")
# convert lowercase and normalizing unicode characters
sentence = (
normalize("NFD", sentence.lower()).encode("ascii", "ignore").decode("UTF-8")
)
cleaned_tokens = []
# tokenize sentence on white space
for token in sentence.split():
# removing non-printable chars form each token
token = re_print.sub("", token).strip()
# ignore tokens with numbers
if re.findall("[0-9]", token):
continue
# add space between words and punctuation eg: "ok?go!" => "ok ? go !"
token = separe_punctuation(token)
cleaned_tokens.append(token)
# rebuild sentence with space between tokens
sentence = " ".join(cleaned_tokens)
# adding a start and an end token to the sentence
if add_start_end is True:
sentence = f"{SOS} {sentence} {EOS}"
return sentence
def dataset_preprocess(dataset: List[Tuple[str, str]]) -> Tuple[List[str], List[str]]:
"""
Returns processed database
:param dataset: list of sentence pairs
:return: list of paralel data e.g.
(['first source sentence', 'second', ...], ['first target sentence', 'second', ...])
"""
source_cleaned = []
target_cleaned = []
for source, target in dataset:
source_cleaned.append(preprocess(source))
target_cleaned.append(preprocess(target))
return source_cleaned, target_cleaned
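# A stripped-down re-implementation of the cleaning steps above (unicode normalization and non-printable filtering omitted for brevity) shows the intended output shape:

```python
import re

SOS, EOS = "<start>", "<end>"

def tiny_preprocess(sentence: str) -> str:
    # lowercase, drop tokens containing digits, pad punctuation with spaces
    tokens = []
    for token in sentence.lower().split():
        if re.search(r"[0-9]", token):
            continue
        token = re.sub(r"([?,!.])", r" \1 ", token)
        tokens.append(" ".join(token.split()))
    # add the start/end markers the decoder relies on
    return f"{SOS} {' '.join(tokens)} {EOS}"

print(tiny_preprocess("Go away!"))  # <start> go away ! <end>
```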
# + [markdown] colab_type="text" id="5nDIELt9RH-w"
# ## Create Dataset
#
# - limit number of examples
# - load dataset into pairs `[('Be nice.', 'Seien Sie nett!'), ('Beat it.', 'Geh weg!'), ...]`
# - preprocessing dataset
# + colab={"height": 119} colab_type="code" id="GMxdlVU1X8yI" outputId="f4977f48-dbe9-4323-ec2a-a9b0cf8b1895"
NUM_EXAMPLES = 10000 # Limit dataset size
# load from .txt
filename = 'deu.txt' #change filename if necessary
dataset = load_dataset(filename)
# get pairs, limited to NUM_EXAMPLES
pairs = to_pairs(dataset, limit=NUM_EXAMPLES)
print(f"Dataset size: {len(pairs)}")
raw_data_en, raw_data_ge = dataset_preprocess(pairs)
# show last 5 pairs
for pair in zip(raw_data_en[-5:],raw_data_ge[-5:]):
print(pair)
# + [markdown] colab_type="text" id="Cfb66QxWYr6A"
# ## Tokenization
# + colab={} colab_type="code" id="3oq60MBPSanQ"
en_tokenizer = tf.keras.preprocessing.text.Tokenizer(filters='')
en_tokenizer.fit_on_texts(raw_data_en)
data_en = en_tokenizer.texts_to_sequences(raw_data_en)
data_en = tf.keras.preprocessing.sequence.pad_sequences(data_en,padding='post')
ge_tokenizer = tf.keras.preprocessing.text.Tokenizer(filters='')
ge_tokenizer.fit_on_texts(raw_data_ge)
data_ge = ge_tokenizer.texts_to_sequences(raw_data_ge)
data_ge = tf.keras.preprocessing.sequence.pad_sequences(data_ge,padding='post')
# + colab={} colab_type="code" id="XH5oSRNeSc1s"
def max_len(tensor):
#print( np.argmax([len(t) for t in tensor]))
return max( len(t) for t in tensor)
# + [markdown] colab_type="text" id="KdM37lNBGXAj"
# ## Model Parameters
# + colab={} colab_type="code" id="EfiBUJM2Et6C"
X_train, X_test, Y_train, Y_test = train_test_split(data_en,data_ge,test_size=0.2)
BATCH_SIZE = 64
BUFFER_SIZE = len(X_train)
steps_per_epoch = BUFFER_SIZE//BATCH_SIZE
embedding_dims = 256
rnn_units = 1024
dense_units = 1024
Dtype = tf.float32 #used to initialize DecoderCell Zero state
# + [markdown] colab_type="text" id="Ff_jQHLhGqJU"
# ## Dataset Preparation
# + colab={"height": 51} colab_type="code" id="b__1hPHVFALO" outputId="88d35286-184c-44e7-a16b-5559f22e2eb1"
Tx = max_len(data_en)
Ty = max_len(data_ge)
input_vocab_size = len(en_tokenizer.word_index)+1
output_vocab_size = len(ge_tokenizer.word_index)+ 1
dataset = tf.data.Dataset.from_tensor_slices((X_train, Y_train)).shuffle(BUFFER_SIZE).batch(BATCH_SIZE, drop_remainder=True)
example_X, example_Y = next(iter(dataset))
print(example_X.shape)
print(example_Y.shape)
# + [markdown] colab_type="text" id="UQRgJcYgapqE"
# ## Defining NMT Model
# + colab={} colab_type="code" id="sGdakRtjaokF"
#ENCODER
class EncoderNetwork(tf.keras.Model):
def __init__(self,input_vocab_size,embedding_dims, rnn_units ):
super().__init__()
self.encoder_embedding = tf.keras.layers.Embedding(input_dim=input_vocab_size,
output_dim=embedding_dims)
self.encoder_rnnlayer = tf.keras.layers.LSTM(rnn_units,return_sequences=True,
return_state=True )
#DECODER
class DecoderNetwork(tf.keras.Model):
def __init__(self,output_vocab_size, embedding_dims, rnn_units):
super().__init__()
self.decoder_embedding = tf.keras.layers.Embedding(input_dim=output_vocab_size,
output_dim=embedding_dims)
self.dense_layer = tf.keras.layers.Dense(output_vocab_size)
self.decoder_rnncell = tf.keras.layers.LSTMCell(rnn_units)
# Sampler
self.sampler = tfa.seq2seq.sampler.TrainingSampler()
# Create attention mechanism with memory = None
self.attention_mechanism = self.build_attention_mechanism(dense_units,None,BATCH_SIZE*[Tx])
self.rnn_cell = self.build_rnn_cell(BATCH_SIZE)
self.decoder = tfa.seq2seq.BasicDecoder(self.rnn_cell, sampler= self.sampler,
output_layer=self.dense_layer)
def build_attention_mechanism(self, units,memory, memory_sequence_length):
return tfa.seq2seq.LuongAttention(units, memory = memory,
memory_sequence_length=memory_sequence_length)
#return tfa.seq2seq.BahdanauAttention(units, memory = memory, memory_sequence_length=memory_sequence_length)
# wrap decodernn cell
def build_rnn_cell(self, batch_size ):
rnn_cell = tfa.seq2seq.AttentionWrapper(self.decoder_rnncell, self.attention_mechanism,
attention_layer_size=dense_units)
return rnn_cell
def build_decoder_initial_state(self, batch_size, encoder_state,Dtype):
decoder_initial_state = self.rnn_cell.get_initial_state(batch_size = batch_size,
dtype = Dtype)
decoder_initial_state = decoder_initial_state.clone(cell_state=encoder_state)
return decoder_initial_state
encoderNetwork = EncoderNetwork(input_vocab_size,embedding_dims, rnn_units)
decoderNetwork = DecoderNetwork(output_vocab_size,embedding_dims, rnn_units)
optimizer = tf.keras.optimizers.Adam()
# + [markdown] colab_type="text" id="NPwcfddTa0oB"
# ## Initializing Training functions
# + colab={} colab_type="code" id="x1BEqVyra2jW"
def loss_function(y_pred, y):
#shape of y [batch_size, ty]
#shape of y_pred [batch_size, Ty, output_vocab_size]
sparsecategoricalcrossentropy = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True,
reduction='none')
loss = sparsecategoricalcrossentropy(y_true=y, y_pred=y_pred)
mask = tf.logical_not(tf.math.equal(y,0)) #output 0 for y=0 else output 1
mask = tf.cast(mask, dtype=loss.dtype)
loss = mask* loss
loss = tf.reduce_mean(loss)
return loss
def train_step(input_batch, output_batch,encoder_initial_cell_state):
#initialize loss = 0
loss = 0
with tf.GradientTape() as tape:
encoder_emb_inp = encoderNetwork.encoder_embedding(input_batch)
a, a_tx, c_tx = encoderNetwork.encoder_rnnlayer(encoder_emb_inp,
initial_state =encoder_initial_cell_state)
#[last step activations,last memory_state] of encoder passed as input to decoder Network
# Prepare correct Decoder input & output sequence data
decoder_input = output_batch[:,:-1] # ignore <end>
#compare logits with timestepped +1 version of decoder_input
decoder_output = output_batch[:,1:] #ignore <start>
# Decoder Embeddings
decoder_emb_inp = decoderNetwork.decoder_embedding(decoder_input)
#Setting up decoder memory from encoder output and Zero State for AttentionWrapperState
decoderNetwork.attention_mechanism.setup_memory(a)
decoder_initial_state = decoderNetwork.build_decoder_initial_state(BATCH_SIZE,
encoder_state=[a_tx, c_tx],
Dtype=tf.float32)
#BasicDecoderOutput
outputs, _, _ = decoderNetwork.decoder(decoder_emb_inp,initial_state=decoder_initial_state,
sequence_length=BATCH_SIZE*[Ty-1])
logits = outputs.rnn_output
#Calculate loss
loss = loss_function(logits, decoder_output)
#Returns the list of all layer variables / weights.
variables = encoderNetwork.trainable_variables + decoderNetwork.trainable_variables
# differentiate loss wrt variables
gradients = tape.gradient(loss, variables)
#grads_and_vars – List of(gradient, variable) pairs.
grads_and_vars = zip(gradients,variables)
optimizer.apply_gradients(grads_and_vars)
return loss
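# The padding mask applied inside `loss_function` can be illustrated without TensorFlow (the per-token losses below are made-up values):

```python
# Made-up per-token losses and target ids; 0 is the padding id
token_losses = [0.5, 1.0, 0.25, 0.75]
target_ids = [4, 7, 0, 0]  # last two positions are padding

# Zero out losses where the target is padding, then average over all
# positions (mirroring tf.reduce_mean applied to the masked loss)
mask = [1.0 if t != 0 else 0.0 for t in target_ids]
masked = [l * m for l, m in zip(token_losses, mask)]
print(sum(masked) / len(masked))  # 0.375
```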
# + colab={} colab_type="code" id="71Lkdx6GFb3A"
#RNN LSTM hidden and memory state initializer
def initialize_initial_state():
return [tf.zeros((BATCH_SIZE, rnn_units)), tf.zeros((BATCH_SIZE, rnn_units))]
# + [markdown] colab_type="text" id="v5uzLcu2bNX3"
# ## Training
# + colab={"height": 1000} colab_type="code" id="PvfD2SknWrt6" outputId="0a427bb7-8184-4076-97ca-f638116ca52b"
epochs = 15
for i in range(1, epochs+1):
encoder_initial_cell_state = initialize_initial_state()
total_loss = 0.0
for ( batch , (input_batch, output_batch)) in enumerate(dataset.take(steps_per_epoch)):
batch_loss = train_step(input_batch, output_batch, encoder_initial_cell_state)
total_loss += batch_loss
if (batch+1)%5 == 0:
print("total loss: {} epoch {} batch {} ".format(batch_loss.numpy(), i, batch+1))
# + [markdown] colab_type="text" id="nDyK-EGqbN5r"
# ## Evaluation
# + colab={"height": 326} colab_type="code" id="y98sfom7SuGy" outputId="00d94338-e841-4bd6-f9e3-509ef1f1a08b"
# In this section we evaluate our model by translating a raw input sentence to German.
# The entire sentence is passed through the model; a greedy sampler is used to run the decoder,
# and the embedding matrix trained on the data is used to generate the embeddings.
input_raw='how are you'
# We have a transcript file containing English-German pairs
# Preprocess X
input_raw = preprocess(input_raw, add_start_end=False)
input_lines = [f'{SOS} {input_raw}']
input_sequences = [[en_tokenizer.word_index[w] for w in line.split()] for line in input_lines]
input_sequences = tf.keras.preprocessing.sequence.pad_sequences(input_sequences,
maxlen=Tx, padding='post')
inp = tf.convert_to_tensor(input_sequences)
#print(inp.shape)
inference_batch_size = input_sequences.shape[0]
encoder_initial_cell_state = [tf.zeros((inference_batch_size, rnn_units)),
tf.zeros((inference_batch_size, rnn_units))]
encoder_emb_inp = encoderNetwork.encoder_embedding(inp)
a, a_tx, c_tx = encoderNetwork.encoder_rnnlayer(encoder_emb_inp,
initial_state =encoder_initial_cell_state)
print('a_tx :', a_tx.shape)
print('c_tx :', c_tx.shape)
start_tokens = tf.fill([inference_batch_size],ge_tokenizer.word_index[SOS])
end_token = ge_tokenizer.word_index[EOS]
greedy_sampler = tfa.seq2seq.GreedyEmbeddingSampler()
decoder_input = tf.expand_dims([ge_tokenizer.word_index[SOS]]* inference_batch_size,1)
decoder_emb_inp = decoderNetwork.decoder_embedding(decoder_input)
decoder_instance = tfa.seq2seq.BasicDecoder(cell = decoderNetwork.rnn_cell, sampler = greedy_sampler,
output_layer=decoderNetwork.dense_layer)
decoderNetwork.attention_mechanism.setup_memory(a)
#pass [ last step activations , encoder memory_state ] as input to decoder for LSTM
print(f"decoder_initial_state = [a_tx, c_tx] : {np.array([a_tx, c_tx]).shape}")
decoder_initial_state = decoderNetwork.build_decoder_initial_state(inference_batch_size,
encoder_state=[a_tx, c_tx],
Dtype=tf.float32)
print(f"""
Compared to simple encoder-decoder without attention, the decoder_initial_state
is an AttentionWrapperState object containing s_prev tensors and context and alignment vector
decoder initial state shape: {np.array(decoder_initial_state).shape}
decoder_initial_state tensor
{decoder_initial_state}
""")
# Since we do not know the target sequence lengths in advance, we use maximum_iterations to limit the translation lengths.
# One heuristic is to decode up to two times the source sentence lengths.
maximum_iterations = tf.round(tf.reduce_max(Tx) * 2)
#initialize inference decoder
decoder_embedding_matrix = decoderNetwork.decoder_embedding.variables[0]
(first_finished, first_inputs,first_state) = decoder_instance.initialize(decoder_embedding_matrix,
start_tokens = start_tokens,
end_token=end_token,
initial_state = decoder_initial_state)
#print( first_finished.shape)
print(f"first_inputs returns the same decoder_input i.e. embedding of {SOS} : {first_inputs.shape}")
print(f"start_index_emb_avg {tf.reduce_sum(tf.reduce_mean(first_inputs, axis=0))}") # mean along the batch
inputs = first_inputs
state = first_state
predictions = np.empty((inference_batch_size,0), dtype = np.int32)
for j in range(int(maximum_iterations)):
outputs, next_state, next_inputs, finished = decoder_instance.step(j,inputs,state)
inputs = next_inputs
state = next_state
outputs = np.expand_dims(outputs.sample_id,axis = -1)
predictions = np.append(predictions, outputs, axis = -1)
# + [markdown] colab_type="text" id="iodjSItQds1t"
# ## Final Translation
# + colab={"height": 102} colab_type="code" id="K6aWFB5IWlH2" outputId="2179c9a3-cb27-447a-ac94-0e5ab2920aff"
#prediction based on our sentence earlier
print("English Sentence:")
print(input_raw)
print("\nGerman Translation:")
for i in range(len(predictions)):
line = predictions[i,:]
    seq = list(itertools.takewhile(lambda index: index != end_token, line))  # stop at EOS
print(" ".join( [ge_tokenizer.index_word[w] for w in seq]))
# + [markdown] colab_type="text" id="g6Av-oPWvRc4"
# ### The accuracy can be improved by implementing:
# * Beam Search or Lexicon Search
# * Bi-directional encoder-decoder model
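# As a rough illustration of the first suggestion, here is a minimal, framework-free sketch of beam search over a toy next-token distribution. The `step_probs` function is a stand-in for a real decoder step, and all tokens and probabilities below are invented for the example:

```python
import math

def beam_search(step_probs, sos, eos, beam_width=3, max_len=10):
    """Keep the beam_width best partial hypotheses by summed log-probability."""
    beams = [([sos], 0.0)]  # (token sequence, cumulative log-prob)
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            if seq[-1] == eos:  # finished hypotheses are carried over unchanged
                candidates.append((seq, score))
                continue
            for token, p in step_probs(seq).items():
                candidates.append((seq + [token], score + math.log(p)))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return beams[0][0]

# Toy distribution: prefer token 1 after SOS, then EOS (token 0).
def step_probs(seq):
    return {1: 0.6, 2: 0.3, 0: 0.1} if seq[-1] != 1 else {0: 0.8, 2: 0.2}

print(beam_search(step_probs, sos=-1, eos=0))
```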
| site/en-snapshot/addons/tutorials/networks_seq2seq_nmt.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# [source](../api/alibi_detect.cd.classifier.rst)
# # Classifier
#
# ## Overview
#
# The classifier-based drift detector [Lopez-Paz and Oquab, 2017](https://openreview.net/forum?id=SJkXfE5xx) simply tries to correctly classify instances from the reference data vs. the test set. If the classifier cannot distinguish the reference data from the test set significantly better than chance according to a chosen metric (which defaults to the classifier accuracy), then no drift is detected. If it can, the test set differs from the reference data and drift is flagged. To leverage all the available reference and test data, stratified cross-validation can be applied and the out-of-fold predictions are used to compute the drift metric. Note that a new classifier is trained for each test set or even for each fold within the test set.
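# The underlying idea can be sketched without alibi-detect at all. Below is a minimal, illustrative version of the classifier two-sample test using only NumPy and a nearest-centroid "classifier"; the data, split, and threshold are all made up for the example and are not the library's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
X_ref = rng.normal(0.0, 1.0, size=(500, 5))   # reference distribution
X_test = rng.normal(1.0, 1.0, size=(500, 5))  # mean-shifted: drift present

# Train/holdout split (mirrors the train_size option rather than n_folds)
ref_tr, ref_ho = X_ref[:250], X_ref[250:]
test_tr, test_ho = X_test[:250], X_test[250:]

# "Train" a nearest-centroid classifier on the training halves
mu_ref, mu_test = ref_tr.mean(axis=0), test_tr.mean(axis=0)

def predict(X):
    # 1 = looks like test data, 0 = looks like reference data
    d_ref = np.linalg.norm(X - mu_ref, axis=1)
    d_test = np.linalg.norm(X - mu_test, axis=1)
    return (d_test < d_ref).astype(int)

accuracy = np.concatenate([predict(ref_ho) == 0, predict(test_ho) == 1]).mean()
threshold = 0.55
print('accuracy: %.3f, drift detected: %s' % (accuracy, accuracy > threshold))
```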
# ## Usage
#
# ### Initialize
#
#
# Parameters:
#
# * `threshold`: Threshold for the drift metric (default is accuracy). Values above the threshold are classified as drift.
#
# * `model`: Classification model used for drift detection.
#
# * `X_ref`: Data used as reference distribution.
#
# * `preprocess_X_ref`: Whether to already preprocess and store the reference data using the `preprocess_fn`. Typically set to *True* since it can reduce the time to detect drift during the `predict` call. It is possible that it needs to be set to *False* if the preprocessing step requires statistics from both the reference and test data, such as the mean or standard deviation.
#
# * `update_X_ref`: Reference data can optionally be updated to the last N instances seen by the detector or via [reservoir sampling](https://en.wikipedia.org/wiki/Reservoir_sampling) with size N. For the former, the parameter equals *{'last': N}* while for reservoir sampling *{'reservoir_sampling': N}* is passed.
#
# * `preprocess_fn`: Function to preprocess the data before computing the data drift metrics.
#
# * `preprocess_kwargs`: Keyword arguments for `preprocess_fn`.
#
# * `metric_fn`: Function computing the drift metric. Takes `y_true` and `y_pred` as input and returns a float: *metric_fn(y_true, y_pred)*. Defaults to accuracy.
#
# * `metric_name`: Optional name for the `metric_fn` used in the return dict. Defaults to `metric_fn.__name__`.
#
# * `train_size`: Optional fraction (float between 0 and 1) of the dataset used to train the classifier. The drift is detected on *1 - train_size*. Cannot be used in combination with `n_folds`.
#
# * `n_folds`: Optional number of stratified folds used for training. The metric is then calculated on all the out-of-fold predictions. This makes it possible to leverage all the reference and test data for drift detection, at the expense of longer computation. If both `train_size` and `n_folds` are specified, `n_folds` is prioritized.
#
# * `seed`: Optional random seed for fold selection.
#
# * `optimizer`: Optimizer used during training of the classifier.
#
# * `compile_kwargs`: Optional additional kwargs for *model.compile()* when compiling the classifier.
#
# * `batch_size`: Batch size used during training of the classifier.
#
# * `epochs`: Number of training epochs for the classifier. Applies to each fold if `n_folds` is specified.
#
# * `verbose`: Verbosity level during the training of the classifier. 0 is silent, 1 a progress bar and 2 prints the statistics after each epoch.
#
# * `fit_kwargs`: Optional additional kwargs for *model.fit()* when fitting the classifier.
#
# * `data_type`: Optionally specify the data type (e.g. tabular, image or time-series). Added to metadata.
#
# Initialized drift detector example:
#
# ```python
# from alibi_detect.cd import ClassifierDrift
# import tensorflow as tf
# from tensorflow.keras.layers import Conv2D, Dense, Flatten, Input
#
# model = tf.keras.Sequential(
# [
# Input(shape=(32, 32, 3)),
# Conv2D(8, 4, strides=2, padding='same', activation=tf.nn.relu),
# Conv2D(16, 4, strides=2, padding='same', activation=tf.nn.relu),
# Conv2D(32, 4, strides=2, padding='same', activation=tf.nn.relu),
# Flatten(),
# Dense(2, activation='softmax')
# ]
# )
#
# cd = ClassifierDrift(threshold=.55, model=model, X_ref=X_ref, n_folds=5, epochs=2)
# ```
# ### Detect Drift
#
# We detect data drift by simply calling `predict` on a batch of instances `X`. `return_metric` equal to *True* will also return the drift metric (e.g. accuracy) and the threshold used by the detector.
#
# The prediction takes the form of a dictionary with `meta` and `data` keys. `meta` contains the detector's metadata while `data` is also a dictionary which contains the actual predictions stored in the following keys:
#
# * `is_drift`: 1 if the sample tested has drifted from the reference data and 0 otherwise.
#
# * `threshold`: user-defined drift threshold for the chosen drift metric.
#
# * `metric_fn.__name__` or the optional `metric_name` kwarg value: drift metric value if `return_metric` equals *True*.
#
#
# ```python
# preds_drift = cd.predict(X, return_metric=True)
# ```
# ### Saving and loading
#
# The drift detectors can be saved and loaded in the same way as other detectors:
#
# ```python
# from alibi_detect.utils.saving import save_detector, load_detector
#
# filepath = 'my_path'
# save_detector(cd, filepath)
# cd = load_detector(filepath)
# ```
# ## Examples
#
# [Drift detection on CIFAR10](../examples/cd_clf_cifar10.nblink)
| doc/source/methods/classifierdrift.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 1.4 Finding the Largest or Smallest N Items
# ## How do you get a list of the N largest or smallest items in a collection?
import heapq
nums = [1,8,23,44,56,12,-2,45,23]
print(heapq.nlargest(3,nums))
print(heapq.nsmallest(3,nums))
portfolio = [
{'name':'IBM','shares':100,'price':91.1},
{'name':'AAPL','shares':50,'price':543.22},
{'name': 'FB', 'shares': 200, 'price': 21.09},
{'name': 'HPQ', 'shares': 35, 'price': 31.75},
{'name': 'YHOO', 'shares': 45, 'price': 16.35},
{'name': 'ACME', 'shares': 75, 'price': 115.65}
]
cheap = heapq.nsmallest(4,portfolio,key = lambda s : s['price'])
expensive = heapq.nlargest(3,portfolio,key = lambda s:s['price'])
print('The four cheapest: %s\nThe three most expensive: %s' % (cheap,expensive))
heapq.heapify(nums)
nums
heapq.heappop(nums) # pop and return the smallest item from the heap
# ### When the number of items you want is small relative to the collection (N << len(items)), nlargest and nsmallest are a great fit<br>If you only want the single smallest or largest item (N = 1), min() and max() are faster<br>If N is close to the size of the collection, it is usually faster to sort first and slice: sorted(items)[:N] and sorted(items)[-N:]
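# A quick sketch contrasting the three approaches described above:

```python
import heapq

nums = [1, 8, 23, 44, 56, 12, -2, 45, 23]

# N is small relative to the collection: use nlargest/nsmallest
top3 = heapq.nlargest(3, nums)

# N == 1: min/max are faster
top1 = max(nums)

# N is close to len(nums): sort once and slice
n = 7
bottom_n = sorted(nums)[:n]
assert bottom_n == heapq.nsmallest(n, nums)
print(top3, top1)
```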
# # 1.5 Implementing a Priority Queue
# ## How do you implement a queue that sorts items by priority, so that each pop operation always returns the highest-priority item?
# +
class PriorityQueue:
def __init__(self):
self._queue = []
self._index = 0
    def push(self,item,priority):
        # Push -priority so the heap pops items from high to low priority;
        # with plain priority the queue would pop from low to high instead.
        heapq.heappush(self._queue,(-priority,self._index,item))
        self._index += 1
    def pop(self):
        return heapq.heappop(self._queue)[-1]
class Item:
def __init__(self,name):
self.name = name
def __repr__(self):
return 'Item({!r})'.format(self.name)
# -
q = PriorityQueue()
q.push(Item('foo'),1)
q.push(Item('bar'),5)
q.push(Item('spqm'),4)
q.push(Item('grok'),1)
q
q.pop() # pop 1
q.pop() # pop 2
q.pop() # pop 3
q.pop() # pop 4
# ### pop 1 returns the highest-priority item<br>pop 3 and pop 4 share the same priority, so they are returned in the order they were inserted into the queue
# ### In the heapq module, heapq.heappush() and heapq.heappop() insert and remove items from the _queue list while guaranteeing that the first item of the heap has the smallest value---this is the key to making the queue's pop operation return the correct item, and both run in O(log N) time, super quick!<br>The _index variable guarantees correct ordering among items with equal priority, as seen with pop 3 and pop 4
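# To see why the _index tie-breaker matters: without it, a tie on priority forces heapq to compare the Item instances themselves, which is undefined. A small illustration:

```python
import heapq

class Item:
    def __init__(self, name):
        self.name = name

# Without an index tie-breaker, equal priorities force a comparison
# between Item instances, which raises TypeError:
heap = []
heapq.heappush(heap, (1, Item('foo')))
try:
    heapq.heappush(heap, (1, Item('bar')))
except TypeError as e:
    print('comparison failed:', e)

# With a monotonically increasing index, ties never reach the Item:
heap, index = [], 0
for name in ('foo', 'bar'):
    heapq.heappush(heap, (1, index, Item(name)))
    index += 1
print([entry[2].name for entry in sorted(heap)])  # insertion order preserved
```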
# # 1.6 Mapping Keys to Multiple Values in a Dictionary
# ## How do you make a dictionary that maps each key to more than one value (a "multidict")?
# ### A dict maps each key to a single value. If you want a key to map to multiple values, you need to store those values in another container, such as a list or a set
d = {
'a':[1,2,3],
'b':[4,5]
}
e = {
'a':{1,2,3},
'b':{4,5}
}
# ### Whether to use a list or a set depends on your requirements: use a list if you want to preserve the insertion order of the elements, and a set if you want to eliminate duplicates<br>You can use defaultdict from the collections module to construct such a dictionary
from collections import defaultdict
# +
d = defaultdict(list)
d['a'].append(1)
d['a'].append(2)
d['b'].append(3)
d = defaultdict(set)
d['a'].add(1)
d['a'].add(2)
d['b'].add(4)
# -
d
# ### Each assignment above creates a brand-new defaultdict (which creates the mapped containers automatically)<br>If you would rather use an ordinary dictionary, you can use its setdefault method instead
d = {} # a regular dictionary
d.setdefault('a',[]).append(1)
d.setdefault('a',[]).append(2)
d.setdefault('b',[]).append(4)
# setdefault creates a fresh instance of the default value on every call (an empty list here)
d
# ### Using a multi-valued mapping dict is simple, but initializing the first value for every key yourself is clumsy
'''
d = {}
pairs = [('a', 1), ('a', 2), ('b', 3)]
for key, value in pairs:
    if key not in d:
        d[key] = []
    d[key].append(value)
'''
# ### But using defaultdict is much simpler and cleaner
'''
d = defaultdict(list)
for key, value in pairs:
d[key].append(value)
'''
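# A runnable version of the grouping pattern above, with pairs as a list of (key, value) tuples:

```python
from collections import defaultdict

pairs = [('a', 1), ('b', 2), ('a', 3)]

d = defaultdict(list)
for key, value in pairs:
    d[key].append(value)

print(dict(d))  # {'a': [1, 3], 'b': [2]}
```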
# # 1.7 Keeping Dictionaries in Order
# ## You want to create a dictionary and control the order of its elements when iterating or serializing it
# ### To control the order of elements in a dictionary, you can use the OrderedDict class from collections; during iteration it preserves the order in which elements were inserted
from collections import OrderedDict
d = OrderedDict()
d['foo'] = 1
d['bar'] = 2
d['spa'] = 3
d['gro'] = 4
# Outputs 'foo 1','bar 2','spa 3','gro 4'
for key in d:
    print(key,d[key])
# ### An OrderedDict is very useful when you want to build a mapping that you will later serialize or encode into another format<br>For example, to precisely control the order of fields appearing in a JSON encoding, build the data with an OrderedDict
import json
json.dumps(d)
# ### Internally, an OrderedDict maintains a doubly linked list ordered by insertion: every new element is appended to the end of the list<br>Reassigning an existing key does not change its position in the order
# ### Note that an OrderedDict is more than twice the size of an ordinary dictionary because of the extra linked list it maintains. So if you are building a data structure involving a large number of OrderedDict instances (for example, reading 100,000 lines of CSV data into a list of OrderedDicts), carefully weigh whether the benefits of OrderedDict outweigh the extra memory overhead.
# # 1.8 Calculating with Dictionaries
# ## How do you perform calculations (min, max, sort, etc.) on the data in a dictionary?
prices = {
'AC':45.34,
'AA':615.2,
'IAM':205.3,
'FB':10.765
}
# To perform calculations on a dict's contents, use zip() to invert its keys and values
min_price = min(zip(prices.values(),prices.keys()))
print('min_price is %s , %s' % min_price[:])
max_price = max(zip(prices.values(),prices.keys()))
print('max_price is %s , %s' % max_price[:])
prices_sorted = sorted(zip(prices.values(),prices.keys()))
prices_sorted
# ### Note that zip() creates an iterator that can only be consumed once
prices_and_names = zip(prices.values(),prices.keys())
print(min(prices_and_names))
print(max(prices_and_names))
# ### ValueError: max() arg is an empty sequence
# --------------------------------------
# ERROR: the second call fails because the zip() iterator was already exhausted by min(), so max() receives an empty sequence
# ### If you run ordinary reductions on the dict without zip(),<br>they operate on the keys, not the values
min(prices)
max(prices)
# The calls above compare the keys alphabetically to find the "smallest" and "largest" key
# ### To get at the numbers instead, extract the values from the dict directly
min(prices.values())
max(prices.values())
# ### Neither approach is quite satisfactory, though: when operating on a dict I usually want both the key and the value<br>This is where the key argument (a lambda here) of min and max helps
print(min(prices,key=lambda k:prices[k]))
print(max(prices,key=lambda k:prices[k]))
# ### The key function above makes min return the key whose value is lowest: the lowest price is 10.765 and its key is FB
# ## The zip() trick used first solves this by "inverting" the dict into a sequence of (value, key) tuples; when comparing such tuples, the value is compared first and the key second---so simple statements do the whole job
# ### If two entries in the dict have the same value, max and min break the tie by comparing the keys
p = {'a':123,'b':123}
print(min(zip(p.values(),p.keys())))
print(max(zip(p.values(),p.keys())))
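# Another common idiom that avoids the one-shot zip() iterator entirely is to feed dict.items() to min/max with a key function selecting the value:

```python
prices = {'AC': 45.34, 'AA': 615.2, 'IAM': 205.3, 'FB': 10.765}

# items() yields (key, value) pairs; select by value but keep both
min_key, min_val = min(prices.items(), key=lambda kv: kv[1])
max_key, max_val = max(prices.items(), key=lambda kv: kv[1])
print(min_key, min_val)  # FB 10.765
print(max_key, max_val)  # AA 615.2
```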
| data_structure_and_algorithm_py3_3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # lowess
# +
from lowess import lowess,tri_cube
import numpy as np
# %matplotlib inline
# -
# 1-d
x = np.random.randn(100)
f = np.cos(x) + 0.2 * np.random.randn(100)
x0 = np.linspace(-2,1,100)
f_hat = lowess(x, f, x0)
import matplotlib.pyplot as plt
fig,ax = plt.subplots(1)
ax.scatter(x,f)
ax.plot(x0,f_hat,'ro')
#2-d
x = np.random.randn(2, 100)
f = -1 * np.sin(x[0]) + 0.5 * np.cos(x[1]) + 0.2*np.random.randn(100)
x0 = np.mgrid[-1:1:.1, -1:1:.1]
x0 = np.vstack([x0[0].ravel(), x0[1].ravel()])
f_hat = lowess(x, f, x0, kernel=tri_cube)
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(x[0], x[1], f)
ax.scatter(x0[0], x0[1], f_hat, color='r')
| lowess-test.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: local-venv
# language: python
# name: local-venv
# ---
# # Pre-trained Word2Vec embeddings
# +
# Play with pre-trained word embeddings
# Need to download GoogleNews-vectors-negative300.bin or other word2vec embeddings and copy to ./data folder
# We will use the gensim library to import the word vectors
import gensim
# Load the word embeddings
# (this is just a simple structure. Each word is a vector)
model = gensim.models.KeyedVectors.load_word2vec_format('./data/GoogleNews-vectors-negative300.bin', binary=True)
# Get the vocabulary (i.e. the unique words that were used to train these embeddings)
# (model.vocab is the gensim 3.x API; gensim >= 4.0 renamed it to model.key_to_index)
vocab = model.vocab.keys()
# Get the size of the Vocabulary
wordsInVocab = len(vocab)
print(f'Vocab length: {wordsInVocab}')
# +
# Let's see the vector of a random word
print(model['computer'])
print(f'Dimensions: {len(model["computer"])}')
# -
# ## Compute similarities
# +
# Compute similarities between words
print(f'Similarity between (dog, cat): {model.similarity("dog", "cat")}')
print(f'Similarity between (king, queen): {model.similarity("king", "queen")}')
print(f'Similarity between (car, computer): {model.similarity("car", "computer")}')
# -
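# Under the hood, model.similarity is the cosine similarity of the two word vectors. A small NumPy check, with synthetic 3-d "embeddings" standing in for the real 300-d ones (the vectors below are invented for illustration):

```python
import numpy as np

def cosine_similarity(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Synthetic 3-d "embeddings" standing in for model['dog'] etc.
dog = np.array([1.0, 2.0, 0.0])
cat = np.array([1.0, 1.8, 0.2])
car = np.array([-2.0, 0.5, 1.0])

print(cosine_similarity(dog, cat))  # close to 1: similar directions
print(cosine_similarity(dog, car))  # negative: dissimilar directions
```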
# Look up most similar words
model.most_similar('france', topn=5)
# +
# Compute similarities between sentences
s1 = "This is a sentence"
s2 = "This is also a sentence"
# Normalize sentences: remove words not in vocabulary
tokens = s1.split()
s1_final = ''
for t in tokens:
if t.lower() in model.vocab:
s1_final += t.lower() + ' '
s1_final = s1_final.strip()
tokens = s2.split()
s2_final = ''
for t in tokens:
if t.lower() in model.vocab:
s2_final += t.lower() + ' '
s2_final = s2_final.strip()
print(f'Sentence 1: {s1_final}')
print(f'Sentence 2: {s2_final}')
sml = model.n_similarity(s1_final.split(), s2_final.split())
print('Similarity = %.3f' % sml)
# -
# ## Vector Arithmetics
# +
# Most famous example:
# "Man to Woman is King to X", what is X? (Answer: Queen)
# We can apply basic arithmetic to Word2Vec vectors:
# King - Man + Woman = ?
# This means that if we take the notion of King and subtract the notion of Man
# and add the notion of Woman, we get the notion of Queen
result = model.most_similar(positive=['woman', 'king'], negative=['man'], topn=1)
print(result)
# +
# Similar examples
result = model.most_similar(positive=['Tokyo', 'France'], negative=['Paris'], topn=1)
print(result)
print('')
result = model.most_similar(positive=['girl', 'prince'], negative=['boy'], topn=1)
print(result)
# +
# Probably not the best results:
# but the model learns what we feed it
result = model.most_similar(positive=['she', 'doctor'], negative=['he'], topn=1)
print(result)
print('')
result = model.most_similar(positive=['woman', 'computer_programmer'], negative=['man'], topn=1)
print(result)
print('')
result = model.most_similar(positive=['she', 'janitor'], negative=['he'], topn=1)
print(result)
# -
# *Gender bias and other types of bias is something we must deal with if we want to move toward ethical and transparent AI solutions*
| word2vec.ipynb |
# <a rel="license" href="http://creativecommons.org/licenses/by-nc-nd/4.0/"> <img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-nd/4.0/88x31.png"/> </a> <br/> This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-nd/4.0/"> Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. </a>
#
# <img src="http://spark-mooc.github.io/web-assets/images/cs110x/movie-camera.png" style="float:right; height: 200px; margin: 10px; border: 1px solid #ddd; border-radius: 15px 15px 15px 15px; padding: 10px"/>
#
# # Predicting Movie Ratings
#
# One of the most common uses of big data is to predict what users want. This allows Google to show you relevant ads, Amazon to recommend relevant products, and Netflix to recommend movies that you might like. This lab will demonstrate how we can use Apache Spark to recommend movies to a user. We will start with some basic techniques, and then use the [Spark ML][sparkml] library's Alternating Least Squares method to make more sophisticated predictions.
#
# For this lab, we will use a subset dataset of 20 million ratings. This dataset is pre-mounted on Databricks and is from the [MovieLens stable benchmark rating dataset](http://grouplens.org/datasets/movielens/). However, the same code you write will also work on the full dataset (though running with the full dataset on Community Edition is likely to take quite a long time).
#
# In this lab:
# * *Part 0*: Preliminaries
# * *Part 1*: Basic Recommendations
# * *Part 2*: Collaborative Filtering
# * *Part 3*: Predictions for Yourself
#
# As mentioned during the first Learning Spark lab, think carefully before calling `collect()` on any datasets. When you are using a small dataset, calling `collect()` and then using Python to get a sense for the data locally (in the driver program) will work fine, but this will not work when you are using a large dataset that doesn't fit in memory on one machine. Solutions that call `collect()` and do local analysis that could have been done with Spark will likely fail in the autograder and not receive full credit.
# [sparkml]: https://spark.apache.org/docs/1.6.2/api/python/pyspark.ml.html
labVersion = 'cs110x.lab2-1.0.0'
# ## Code
#
# This assignment can be completed using basic Python and pySpark DataFrame Transformations and Actions. Libraries other than math are not necessary. With the exception of the ML functions that we introduce in this assignment, you should be able to complete all parts of this homework using only the Spark functions you have used in prior lab exercises (although you are welcome to use more features of Spark if you like!).
#
# We'll be using motion picture data, the same data last year's CS100.1x used. However, in this course, we're using DataFrames, rather than RDDs.
#
# The following cell defines the locations of the data files. If you want to run an exported version of this lab on your own machine (i.e., outside of Databricks), you'll need to download your own copy of the 20-million movie data set, and you'll need to adjust the paths, below.
#
# **To Do**: Run the following cell.
# +
import os
from databricks_test_helper import Test
dbfs_dir = '/databricks-datasets/cs110x/ml-20m/data-001'
ratings_filename = dbfs_dir + '/ratings.csv'
movies_filename = dbfs_dir + '/movies.csv'
# The following line is here to enable this notebook to be exported as source and
# run on a local machine with a local copy of the files. Just change the dbfs_dir,
# above.
if os.path.sep != '/':
# Handle Windows.
ratings_filename = ratings_filename.replace('/', os.path.sep)
    movies_filename = movies_filename.replace('/', os.path.sep)
# -
# ## Part 0: Preliminaries
#
# We read in each of the files and create a DataFrame consisting of parsed lines.
#
# ### The 20-million movie sample
#
# The 20-million movie sample consists of CSV files (with headers), so there's no need to parse the files manually, as Spark CSV can do the job.
# First, let's take a look at the directory containing our files.
display(dbutils.fs.ls(dbfs_dir))
# ### CPU vs I/O tradeoff
#
# Note that we have both compressed files (ending in `.gz`) and uncompressed files. We have a CPU vs. I/O tradeoff here. If I/O is the bottleneck, then we want to process the compressed files and pay the extra CPU overhead. If CPU is the bottleneck, then it makes more sense to process the uncompressed files.
#
# We've done some experiments, and we've determined that CPU is more of a bottleneck than I/O, on Community Edition. So, we're going to process the uncompressed data. In addition, we're going to speed things up further by specifying the DataFrame schema explicitly. (When the Spark CSV adapter infers the schema from a CSV file, it has to make an extra pass over the file. That'll slow things down here, and it isn't really necessary.)
#
# **To Do**: Run the following cell, which will define the schemas.
# +
from pyspark.sql.types import *
ratings_df_schema = StructType(
[StructField('userId', IntegerType()),
StructField('movieId', IntegerType()),
StructField('rating', DoubleType())]
)
movies_df_schema = StructType(
[StructField('ID', IntegerType()),
StructField('title', StringType())]
)
# -
# ### Load and Cache
#
# The Databricks File System (DBFS) sits on top of S3. We're going to be accessing this data a lot. Rather than read it over and over again from S3, we'll cache both
# the movies DataFrame and the ratings DataFrame in memory.
#
# **To Do**: Run the following cell to load and cache the data. Please be patient: The code takes about 30 seconds to run.
# +
from pyspark.sql.functions import regexp_extract
from pyspark.sql.types import *
raw_ratings_df = sqlContext.read.format('com.databricks.spark.csv').options(header=True, inferSchema=False).schema(ratings_df_schema).load(ratings_filename)
ratings_df = raw_ratings_df.drop('Timestamp')
raw_movies_df = sqlContext.read.format('com.databricks.spark.csv').options(header=True, inferSchema=False).schema(movies_df_schema).load(movies_filename)
movies_df = raw_movies_df.drop('Genres').withColumnRenamed('movieId', 'ID')
ratings_df.cache()
movies_df.cache()
assert ratings_df.is_cached
assert movies_df.is_cached
raw_ratings_count = raw_ratings_df.count()
ratings_count = ratings_df.count()
raw_movies_count = raw_movies_df.count()
movies_count = movies_df.count()
print('There are %s ratings and %s movies in the datasets' % (ratings_count, movies_count))
print('Ratings:')
ratings_df.show(3)
print('Movies:')
movies_df.show(3, truncate=False)
assert raw_ratings_count == ratings_count
assert raw_movies_count == movies_count
# -
# Next, let's do a quick verification of the data.
#
# **To do**: Run the following cell. It should run without errors.
assert ratings_count == 20000263
assert movies_count == 27278
assert movies_df.filter(movies_df.title == 'Toy Story (1995)').count() == 1
assert ratings_df.filter((ratings_df.userId == 6) & (ratings_df.movieId == 1) & (ratings_df.rating == 5.0)).count() == 1
# Let's take a quick look at some of the data in the two DataFrames.
#
# **To Do**: Run the following two cells.
display(movies_df)
display(ratings_df)
# ## Part 1: Basic Recommendations
#
# One way to recommend movies is to always recommend the movies with the highest average rating. In this part, we will use Spark to find the name, number of ratings, and the average rating of the 20 movies with the highest average rating and at least 500 reviews. We restrict ourselves to movies with at least 500 reviews because a highly rated movie with only a handful of reviews may not have broad appeal.
# ### (1a) Movies with Highest Average Ratings
#
# Let's determine the movies with the highest average ratings.
#
# The steps you should perform are:
#
# 1. Recall that the `ratings_df` contains three columns:
# - The ID of the user who rated the film
# - the ID of the movie being rated
# - and the rating.
#
# First, transform `ratings_df` into a second DataFrame, `movie_ids_with_avg_ratings`, with the following columns:
# - The movie ID
# - The number of ratings for the movie
# - The average of all the movie's ratings
#
# 2. Transform `movie_ids_with_avg_ratings` to another DataFrame, `movie_names_with_avg_ratings_df` that adds the movie name to each row. `movie_names_with_avg_ratings_df`
# will contain these columns:
# - The movie ID
# - The movie name
# - The number of ratings for the movie
# - The average of all the movie's ratings
#
# **Hint**: You'll need to do a join.
#
# You should end up with something like the following:
# ```
# movie_ids_with_avg_ratings_df:
# +-------+-----+------------------+
# |movieId|count|average |
# +-------+-----+------------------+
# |1831 |7463 |2.5785207021305103|
# |431 |8946 |3.695059244355019 |
# |631 |2193 |2.7273141814865483|
# +-------+-----+------------------+
# only showing top 3 rows
#
# movie_names_with_avg_ratings_df:
# +-------+-----------------------------+-----+-------+
# |average|title |count|movieId|
# +-------+-----------------------------+-----+-------+
# |5.0 |<NAME> (1898)|1 |94431 |
# |5.0 |Serving Life (2011) |1 |129034 |
# |5.0 |Diplomatic Immunity (2009? ) |1 |107434 |
# +-------+-----------------------------+-----+-------+
# only showing top 3 rows
# ```
# +
# TODO: Replace <FILL_IN> with appropriate code
from pyspark.sql import functions as F
# From ratingsDF, create a movie_ids_with_avg_ratings_df that combines the two DataFrames
movie_ids_with_avg_ratings_df = ratings_df.groupBy('movieId').agg(F.count(ratings_df.rating).alias("count"), F.avg(ratings_df.rating).alias("average"))
print('movie_ids_with_avg_ratings_df:')
movie_ids_with_avg_ratings_df.show(3, truncate=False)
# Note: movie_names_df is a temporary variable, used only to separate the steps necessary
# to create the movie_names_with_avg_ratings_df DataFrame.
movie_names_df = movie_ids_with_avg_ratings_df.join(movies_df,movie_ids_with_avg_ratings_df["movieId"]==movies_df["ID"],"inner")
movie_names_df.show(3,truncate=False)
movie_names_with_avg_ratings_df = movie_names_df.select (movie_names_df["movieId"],movie_names_df["title"],movie_names_df["count"],movie_names_df["average"])
print('movie_names_with_avg_ratings_df:')
movie_names_with_avg_ratings_df.show(3, truncate=False)
# +
# TEST Movies with Highest Average Ratings (1a)
Test.assertEquals(movie_ids_with_avg_ratings_df.count(), 26744,
'incorrect movie_ids_with_avg_ratings_df.count() (expected 26744)')
movie_ids_with_ratings_take_ordered = movie_ids_with_avg_ratings_df.orderBy('MovieID').take(3)
_take_0 = movie_ids_with_ratings_take_ordered[0]
_take_1 = movie_ids_with_ratings_take_ordered[1]
_take_2 = movie_ids_with_ratings_take_ordered[2]
Test.assertTrue(_take_0[0] == 1 and _take_0[1] == 49695,
'incorrect count of ratings for movie with ID {0} (expected 49695)'.format(_take_0[0]))
Test.assertEquals(round(_take_0[2], 2), 3.92, "Incorrect average for movie ID {0}. Expected 3.92".format(_take_0[0]))
Test.assertTrue(_take_1[0] == 2 and _take_1[1] == 22243,
'incorrect count of ratings for movie with ID {0} (expected 22243)'.format(_take_1[0]))
Test.assertEquals(round(_take_1[2], 2), 3.21, "Incorrect average for movie ID {0}. Expected 3.21".format(_take_1[0]))
Test.assertTrue(_take_2[0] == 3 and _take_2[1] == 12735,
'incorrect count of ratings for movie with ID {0} (expected 12735)'.format(_take_2[0]))
Test.assertEquals(round(_take_2[2], 2), 3.15, "Incorrect average for movie ID {0}. Expected 3.15".format(_take_2[0]))
Test.assertEquals(movie_names_with_avg_ratings_df.count(), 26744,
'incorrect movie_names_with_avg_ratings_df.count() (expected 26744)')
movie_names_with_ratings_take_ordered = movie_names_with_avg_ratings_df.orderBy(['average', 'title']).take(3)
result = [(r['average'], r['title'], r['count'], r['movieId']) for r in movie_names_with_ratings_take_ordered]
Test.assertEquals(result,
[(0.5, u'13 Fighting Men (1960)', 1, 109355),
(0.5, u'20 Years After (2008)', 1, 131062),
(0.5, u'3 Holiday Tails (Golden Christmas 2: The Second Tail, A) (2011)', 1, 111040)],
'incorrect top 3 entries in movie_names_with_avg_ratings_df')
# -
# ### (1b) Movies with Highest Average Ratings and at least 500 reviews
#
# Now that we have a DataFrame of the movies with highest average ratings, we can use Spark to determine the 20 movies with highest average ratings and at least 500 reviews.
#
# Add a single DataFrame transformation (in place of `<FILL_IN>`, below) to limit the results to movies with ratings from at least 500 people.
# TODO: Replace <FILL IN> with appropriate code
movies_with_500_ratings_or_more = movie_names_with_avg_ratings_df.where(movie_names_with_avg_ratings_df["count"]>= 500).orderBy(movie_names_with_avg_ratings_df["average"].desc())
print('Movies with highest ratings:')
movies_with_500_ratings_or_more.show(20, truncate=False)
# +
# TEST Movies with Highest Average Ratings and at least 500 Reviews (1b)
Test.assertEquals(movies_with_500_ratings_or_more.count(), 4489,
'incorrect movies_with_500_ratings_or_more.count(). Expected 4489.')
top_20_results = [(r['average'], r['title'], r['count']) for r in movies_with_500_ratings_or_more.orderBy(F.desc('average')).take(20)]
Test.assertEquals(top_20_results,
[(4.446990499637029, u'<NAME>mption, The (1994)', 63366),
(4.364732196832306, u'Godfather, The (1972)', 41355),
(4.334372207803259, u'Usual Suspects, The (1995)', 47006),
(4.310175010988133, u"Schindler's List (1993)", 50054),
(4.275640557704942, u'Godfather: Part II, The (1974)', 27398),
(4.2741796572216, u'Seven Samurai (Shichinin no samurai) (1954)', 11611),
(4.271333600779414, u'Rear Window (1954)', 17449),
(4.263182346109176, u'Band of Brothers (2001)', 4305),
(4.258326830670664, u'Casablanca (1942)', 24349),
(4.256934865900383, u'Sunset Blvd. (a.k.a. Sunset Boulevard) (1950)', 6525),
(4.24807897901911, u"One Flew Over the Cuckoo's Nest (1975)", 29932),
(4.247286821705426, u'Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb (1964)', 23220),
(4.246001523229246, u'Third Man, The (1949)', 6565),
(4.235410064157069, u'City of God (Cidade de Deus) (2002)', 12937),
(4.2347902097902095, u'Lives of Others, The (Das leben der Anderen) (2006)', 5720),
(4.233538107122288, u'North by Northwest (1959)', 15627),
(4.2326233183856505, u'Paths of Glory (1957)', 3568),
(4.227123123722136, u'Fight Club (1999)', 40106),
(4.224281931146873, u'Double Indemnity (1944)', 4909),
(4.224137931034483, u'12 Angry Men (1957)', 12934)],
'Incorrect top 20 movies with 500 or more ratings')
# -
# Using a threshold on the number of reviews is one way to improve the recommendations, but there are many other good ways to improve quality. For example, you could weight ratings by the number of ratings.
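# For instance, a damped (Bayesian) average shrinks a movie's mean rating toward the global mean, with the shrinkage controlled by a pseudo-count m. This is only an illustrative sketch in plain Python (the numbers and m=500 are made up); the same formula could be applied to the Spark DataFrames above:

```python
# Damped mean: shrink a movie's average toward the global mean, with the
# amount of shrinkage controlled by a pseudo-count m.
def weighted_rating(avg, count, global_mean, m=500):
    return (count * avg + m * global_mean) / (count + m)

global_mean = 3.5
print(weighted_rating(5.0, 2, global_mean))      # tiny sample: pulled near 3.5
print(weighted_rating(4.4, 60000, global_mean))  # huge sample: stays near 4.4
```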
# ## Part 2: Collaborative Filtering
# In this course, you have learned about many of the basic transformations and actions that Spark allows us to apply to distributed datasets. Spark also exposes some higher level functionality; in particular, Machine Learning using a component of Spark called [MLlib][mllib]. In this part, you will learn how to use MLlib to make personalized movie recommendations using the movie data we have been analyzing.
#
# <img src="https://courses.edx.org/c4x/BerkeleyX/CS100.1x/asset/Collaborative_filtering.gif" alt="collaborative filtering" style="float: right"/>
#
# We are going to use a technique called [collaborative filtering][collab]. Collaborative filtering is a method of making automatic predictions (filtering) about the interests of a user by collecting preferences or taste information from many users (collaborating). The underlying assumption of the collaborative filtering approach is that if a person A has the same opinion as a person B on an issue, A is more likely to have B's opinion on a different issue x than to have the opinion on x of a person chosen randomly. You can read more about collaborative filtering [here][collab2].
#
# The image at the right (from [Wikipedia][collab]) shows an example of predicting a user's rating using collaborative filtering. First, people rate different items (like videos, images, games). Then, the system makes predictions about a user's rating for an item the user has not yet rated. These predictions are built upon the existing ratings of other users whose ratings are similar to the active user's. For instance, in the image the system has predicted that the active user will not like the video.
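#
# The neighborhood idea described above can be sketched in a few lines of plain Python: predict a user's unknown rating as the similarity-weighted average of other users' ratings for that item. (This is only an illustrative sketch of neighborhood-based filtering, not the matrix-factorization approach used later in this lab; all users and ratings below are made up.)

```python
import math

# Toy ratings: user -> {item: rating}. All data here is invented for illustration.
ratings = {
    'alice': {'m1': 5.0, 'm2': 3.0, 'm3': 4.0},
    'bob':   {'m1': 5.0, 'm2': 3.5, 'm3': 4.5, 'm4': 2.0},
    'carol': {'m1': 2.0, 'm2': 5.0, 'm4': 5.0},
}

def cosine_sim(a, b):
    """Cosine similarity over the items two users have both rated."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    dot = sum(a[i] * b[i] for i in common)
    na = math.sqrt(sum(a[i] ** 2 for i in common))
    nb = math.sqrt(sum(b[i] ** 2 for i in common))
    return dot / (na * nb)

def predict(user, item):
    """Similarity-weighted average of other users' ratings for `item`."""
    num, den = 0.0, 0.0
    for other, r in ratings.items():
        if other == user or item not in r:
            continue
        s = cosine_sim(ratings[user], r)
        num += s * r[item]
        den += s
    return num / den if den else None

# Alice has not rated m4; bob (similar taste) rated it 2.0, carol (different taste) 5.0.
print(predict('alice', 'm4'))
```

# Because Bob's ratings are closer to Alice's than Carol's are, Bob's opinion of `m4` gets more weight in the prediction.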
#
# <br clear="all"/>
#
# ----
#
# For movie recommendations, we start with a matrix whose entries are movie ratings by users (shown in red in the diagram below). Each column represents a user (shown in green) and each row represents a particular movie (shown in blue).
#
# Since not all users have rated all movies, we do not know all of the entries in this matrix, which is precisely why we need collaborative filtering. For each user, we have ratings for only a subset of the movies. With collaborative filtering, the idea is to approximate the ratings matrix by factorizing it as the product of two matrices: one that describes properties of each user (shown in green), and one that describes properties of each movie (shown in blue).
#
# <img alt="factorization" src="http://spark-mooc.github.io/web-assets/images/matrix_factorization.png" style="width: 885px"/>
# <br clear="all"/>
#
# We want to select these two matrices such that the error for the users/movie pairs where we know the correct ratings is minimized. The [Alternating Least Squares][als] algorithm does this by first randomly filling the users matrix with values and then optimizing the value of the movies such that the error is minimized. Then, it holds the movies matrix constant and optimizes the value of the user's matrix. This alternation between which matrix to optimize is the reason for the "alternating" in the name.
#
# This optimization is what's being shown on the right in the image above. Given a fixed set of user factors (i.e., values in the users matrix), we use the known ratings to find the best values for the movie factors using the optimization written at the bottom of the figure. Then we "alternate" and pick the best user factors given fixed movie factors.
#
# For a simple example of what the users and movies matrices might look like, check out the [videos from Lecture 2][videos] or the [slides from Lecture 8][slides]
# [videos]: https://courses.edx.org/courses/course-v1:BerkeleyX+CS110x+2T2016/courseware/9d251397874d4f0b947b606c81ccf83c/3cf61a8718fe4ad5afcd8fb35ceabb6e/
# [slides]: https://d37djvu3ytnwxt.cloudfront.net/assets/courseware/v1/fb269ff9a53b669a46d59e154b876d78/asset-v1:BerkeleyX+CS110x+2T2016+type@asset+block/Lecture2s.pdf
# [als]: https://en.wikiversity.org/wiki/Least-Squares_Method
# [mllib]: http://spark.apache.org/docs/1.6.2/mllib-guide.html
# [collab]: https://en.wikipedia.org/?title=Collaborative_filtering
# [collab2]: http://recommender-systems.org/collaborative-filtering/
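# The alternation just described can be sketched with a rank-1 factorization in plain Python: hold the item factors fixed and solve each user factor in closed form over the known entries, then swap roles and repeat. (A toy illustration of the idea only; Spark's ALS is distributed, regularized, and higher-rank, and the ratings below are invented.)

```python
import random

# Toy ratings matrix with missing entries (None). Invented data.
R = [
    [5.0, 3.0, None],
    [4.0, None, 1.0],
    [None, 1.0, 5.0],
]
n_users, n_items = len(R), len(R[0])

random.seed(0)
u = [random.random() for _ in range(n_users)]   # user factors (rank 1)
v = [random.random() for _ in range(n_items)]   # item factors (rank 1)

def sse():
    """Squared error over the known entries only."""
    return sum((R[i][j] - u[i] * v[j]) ** 2
               for i in range(n_users) for j in range(n_items)
               if R[i][j] is not None)

before = sse()
for _ in range(20):
    # Hold v fixed; each u[i] has a closed-form least-squares solution.
    for i in range(n_users):
        num = sum(R[i][j] * v[j] for j in range(n_items) if R[i][j] is not None)
        den = sum(v[j] ** 2 for j in range(n_items) if R[i][j] is not None)
        u[i] = num / den
    # Now hold u fixed and solve each v[j] -- this is the "alternating" step.
    for j in range(n_items):
        num = sum(R[i][j] * u[i] for i in range(n_users) if R[i][j] is not None)
        den = sum(u[i] ** 2 for i in range(n_users) if R[i][j] is not None)
        v[j] = num / den
after = sse()
print('%.4f -> %.4f' % (before, after))
```

# Each closed-form solve can only decrease the squared error, which is why the alternation converges.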
# ### (2a) Creating a Training Set
#
# Before we jump into using machine learning, we need to break up the `ratings_df` dataset into three pieces:
# * A training set (DataFrame), which we will use to train models
# * A validation set (DataFrame), which we will use to choose the best model
# * A test set (DataFrame), which we will use for our experiments
#
# To randomly split the dataset into multiple groups, we can use the pySpark [randomSplit()](http://spark.apache.org/docs/1.6.2/api/python/pyspark.sql.html#pyspark.sql.DataFrame.randomSplit) transformation. `randomSplit()` takes a list of weights and a seed and returns multiple DataFrames.
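# The behavior can be sketched in plain Python: each row is independently assigned to a split with probability proportional to its weight, so the resulting sizes are only approximately 60/20/20, and a fixed seed makes the assignment reproducible. (An illustrative sketch of the semantics, not Spark's implementation.)

```python
import random

def random_split(rows, weights, seed):
    """Assign each row to a bucket with probability proportional to its weight."""
    total = float(sum(weights))
    # Cumulative thresholds, e.g. [0.6, 0.8, 1.0] for weights [.6, .2, .2]
    cum, acc = [], 0.0
    for w in weights:
        acc += w / total
        cum.append(acc)
    rng = random.Random(seed)
    splits = [[] for _ in weights]
    for row in rows:
        x = rng.random()
        for k, threshold in enumerate(cum):
            if x <= threshold:
                splits[k].append(row)
                break
    return splits

rows = list(range(10000))
train, val, test = random_split(rows, [.60, .20, .20], seed=1800009193)
print('%d %d %d' % (len(train), len(val), len(test)))
```

# Note that `len(train)` is close to, but usually not exactly, 6000 — the same reason the exact counts in the cells below vary slightly.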
# +
# TODO: Replace <FILL_IN> with the appropriate code.
# We'll hold out 60% for training, 20% of our data for validation, and leave 20% for testing
seed = 1800009193L
(split_60_df, split_a_20_df, split_b_20_df) = ratings_df.randomSplit([.60,.20,.20],seed)
# Let's cache these datasets for performance
training_df = split_60_df.cache()
validation_df = split_a_20_df.cache()
test_df = split_b_20_df.cache()
print('Training: {0}, validation: {1}, test: {2}\n'.format(
training_df.count(), validation_df.count(), test_df.count())
)
training_df.show(3)
validation_df.show(3)
test_df.show(3)
# +
# TEST Creating a Training Set (2a)
Test.assertEquals(training_df.count(), 12001389, "Incorrect training_df count. Expected 12001389")
Test.assertEquals(validation_df.count(), 4003694, "Incorrect validation_df count. Expected 4003694")
Test.assertEquals(test_df.count(), 3995180, "Incorrect test_df count. Expected 3995180")
Test.assertEquals(training_df.filter((ratings_df.userId == 1) & (ratings_df.movieId == 5952) & (ratings_df.rating == 5.0)).count(), 1)
Test.assertEquals(training_df.filter((ratings_df.userId == 1) & (ratings_df.movieId == 1193) & (ratings_df.rating == 3.5)).count(), 1)
Test.assertEquals(training_df.filter((ratings_df.userId == 1) & (ratings_df.movieId == 1196) & (ratings_df.rating == 4.5)).count(), 1)
Test.assertEquals(validation_df.filter((ratings_df.userId == 1) & (ratings_df.movieId == 296) & (ratings_df.rating == 4.0)).count(), 1)
Test.assertEquals(validation_df.filter((ratings_df.userId == 1) & (ratings_df.movieId == 32) & (ratings_df.rating == 3.5)).count(), 1)
Test.assertEquals(validation_df.filter((ratings_df.userId == 1) & (ratings_df.movieId == 6888) & (ratings_df.rating == 3.0)).count(), 1)
Test.assertEquals(test_df.filter((ratings_df.userId == 1) & (ratings_df.movieId == 4993) & (ratings_df.rating == 5.0)).count(), 1)
Test.assertEquals(test_df.filter((ratings_df.userId == 1) & (ratings_df.movieId == 4128) & (ratings_df.rating == 4.0)).count(), 1)
Test.assertEquals(test_df.filter((ratings_df.userId == 1) & (ratings_df.movieId == 4915) & (ratings_df.rating == 3.0)).count(), 1)
# -
# After splitting the dataset, your training set has about 12 million entries and the validation and test sets each have about 4 million entries. (The exact number of entries in each dataset varies slightly due to the random nature of the `randomSplit()` transformation.)
# ### (2b) Alternating Least Squares
#
# In this part, we will use the Apache Spark ML Pipeline implementation of Alternating Least Squares, [ALS](http://spark.apache.org/docs/1.6.2/api/python/pyspark.ml.html#pyspark.ml.recommendation.ALS). ALS takes a training dataset (DataFrame) and several parameters that control the model creation process. To determine the best values for the parameters, we will use ALS to train several models, and then we will select the best model and use the parameters from that model in the rest of this lab exercise.
#
# The process we will use for determining the best model is as follows:
# 1. Pick a set of model parameters. The most important parameter to model is the *rank*, which is the number of columns in the Users matrix (green in the diagram above) or the number of rows in the Movies matrix (blue in the diagram above). In general, a lower rank will mean higher error on the training dataset, but a high rank may lead to [overfitting](https://en.wikipedia.org/wiki/Overfitting). We will train models with ranks of 4, 8, and 12 using the `training_df` dataset.
#
# 2. Set the appropriate parameters on the `ALS` object:
# * The "User" column will be set to the values in our `userId` DataFrame column.
# * The "Item" column will be set to the values in our `movieId` DataFrame column.
# * The "Rating" column will be set to the values in our `rating` DataFrame column.
#   * We'll use a regularization parameter of 0.1.
#
# **Note**: Read the documentation for the [ALS](http://spark.apache.org/docs/1.6.2/api/python/pyspark.ml.html#pyspark.ml.recommendation.ALS) class **carefully**. It will help you accomplish this step.
# 3. Have the ALS output transformation (i.e., the result of [ALS.fit()](http://spark.apache.org/docs/1.6.2/api/python/pyspark.ml.html#pyspark.ml.recommendation.ALS.fit)) produce a _new_ column
# called "prediction" that contains the predicted value.
#
# 4. Create multiple models using [ALS.fit()](http://spark.apache.org/docs/1.6.2/api/python/pyspark.ml.html#pyspark.ml.recommendation.ALS.fit), one for each of our rank values. We'll fit
# against the training data set (`training_df`).
#
# 5. For each model, we'll run a prediction against our validation data set (`validation_df`) and check the error.
#
# 6. We'll keep the model with the best error rate.
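# The six steps above boil down to a simple selection loop: train one model per candidate rank, score each on the validation set, and keep the argmin. A minimal sketch of that logic, with a stand-in scorer instead of real ALS training (the RMSE numbers are made up to mirror the shape of the results this lab expects):

```python
def select_best(ranks, train_and_score):
    """Return (best_rank, best_error): the argmin of validation error over ranks."""
    best_rank, min_error = None, float('inf')
    for rank in ranks:
        error = train_and_score(rank)
        if error < min_error:
            min_error, best_rank = error, rank
    return best_rank, min_error

# Stand-in scorer: pretend validation RMSE falls as rank grows (invented numbers).
fake_rmse = {4: 0.89, 8: 0.84, 12: 0.81}
best_rank, best_error = select_best([4, 8, 12], fake_rmse.get)
print('best rank: %s, RMSE: %s' % (best_rank, best_error))
```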
#
# #### Why are we doing our own cross-validation?
#
# A challenge for collaborative filtering is how to provide ratings to a new user (a user who has not provided *any* ratings at all). Some recommendation systems choose to provide new users with a set of default ratings (e.g., an average value across all ratings), while others choose to provide no ratings for new users. Spark's ALS algorithm yields a NaN (`Not a Number`) value when asked to provide a rating for a new user.
#
# Using the ML Pipeline's [CrossValidator](http://spark.apache.org/docs/1.6.2/api/python/pyspark.ml.html#pyspark.ml.tuning.CrossValidator) with ALS is thus problematic, because cross validation involves dividing the training data into a set of folds (e.g., three sets) and then using those folds for testing and evaluating the parameters during the parameter grid search process. It is likely that some of the folds will contain users that are not in the other folds, and, as a result, ALS produces NaN values for those new users. When the CrossValidator uses the Evaluator (RMSE) to compute an error metric, the RMSE algorithm will return NaN. This will make *all* of the parameters in the parameter grid appear to be equally good (or bad).
#
# You can read the discussion on [Spark JIRA 14489](https://issues.apache.org/jira/browse/SPARK-14489) about this issue. There are proposed workarounds of having ALS provide default values or having RMSE drop NaN values. Both introduce potential issues. We have chosen to have RMSE drop NaN values. While this does not solve the underlying issue of ALS not predicting a value for a new user, it does provide some evaluation value. We manually implement the parameter grid search process using a for loop (below) and remove the NaN values before using RMSE.
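# The issue and the chosen workaround are easy to see in miniature: a single NaN prediction makes the overall RMSE NaN, while dropping the NaN pairs first yields a usable number. (Plain-Python sketch of the arithmetic only, with made-up prediction/label pairs.)

```python
import math

def rmse(pairs):
    """Root-mean-square error over (prediction, label) pairs."""
    return math.sqrt(sum((p - y) ** 2 for p, y in pairs) / len(pairs))

# One "new user" prediction is NaN (invented numbers).
pairs = [(4.1, 4.0), (3.2, 3.5), (float('nan'), 5.0)]

print(rmse(pairs))       # the single NaN poisons the whole metric
clean = [(p, y) for p, y in pairs if not math.isnan(p)]
print(rmse(clean))       # dropping NaN rows first gives a real RMSE
```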
#
# For a production application, you would want to consider the tradeoffs in how to handle new users.
#
# **Note**: This cell will likely take a couple of minutes to run.
# +
# TODO: Replace <FILL IN> with appropriate code
# This step is broken in ML Pipelines: https://issues.apache.org/jira/browse/SPARK-14489
from pyspark.ml.recommendation import ALS
# Let's initialize our ALS learner
als = ALS()
# Now we set the parameters for the method
als.setMaxIter(5)\
.setSeed(seed)\
.setRegParam(0.1)\
.setUserCol("userId")\
.setItemCol("movieId")\
.setRatingCol("rating")
# Now let's compute an evaluation metric for our test dataset
from pyspark.ml.evaluation import RegressionEvaluator
# Create an RMSE evaluator using the label and predicted columns
reg_eval = RegressionEvaluator(predictionCol="prediction", labelCol="rating", metricName="rmse")
tolerance = 0.03
ranks = [4, 8, 12]
errors = [0, 0, 0]
models = [0, 0, 0]
err = 0
min_error = float('inf')
best_rank = -1
for rank in ranks:
# Set the rank here:
als.setRank(rank)
# Create the model with these parameters.
model = als.fit(training_df)
# Run the model to create a prediction. Predict against the validation_df.
predict_df = model.transform(validation_df)#.select("userId","MovieId","rating")
# Remove NaN values from prediction (due to SPARK-14489)
predicted_ratings_df = predict_df.filter(predict_df.prediction != float('nan'))
# Run the previously created RMSE evaluator, reg_eval, on the predicted_ratings_df DataFrame
error = reg_eval.evaluate(predicted_ratings_df)
errors[err] = error
models[err] = model
print 'For rank %s the RMSE is %s' % (rank, error)
if error < min_error:
min_error = error
best_rank = err
err += 1
als.setRank(ranks[best_rank])
print 'The best model was trained with rank %s' % ranks[best_rank]
my_model = models[best_rank]
# -
# TEST
Test.assertEquals(round(min_error, 2), 0.81, "Unexpected value for best RMSE. Expected rounded value to be 0.81. Got {0}".format(round(min_error, 2)))
Test.assertEquals(ranks[best_rank], 12, "Unexpected value for best rank. Expected 12. Got {0}".format(ranks[best_rank]))
Test.assertEqualsHashed(als.getItemCol(), "18f0e2357f8829fe809b2d95bc1753000dd925a6", "Incorrect choice of {0} for ALS item column.".format(als.getItemCol()))
Test.assertEqualsHashed(als.getUserCol(), "db36668fa9a19fde5c9676518f9e86c17cabf65a", "Incorrect choice of {0} for ALS user column.".format(als.getUserCol()))
Test.assertEqualsHashed(als.getRatingCol(), "3c2d687ef032e625aa4a2b1cfca9751d2080322c", "Incorrect choice of {0} for ALS rating column.".format(als.getRatingCol()))
# ### (2c) Testing Your Model
#
# So far, we used the `training_df` and `validation_df` datasets to select the best model. Since we used these two datasets to determine what model is best, we cannot use them to test how good the model is; otherwise, we would be very vulnerable to [overfitting](https://en.wikipedia.org/wiki/Overfitting). To decide how good our model is, we need to use the `test_df` dataset. We will use the `best_rank` you determined in part (2b) to create a model for predicting the ratings for the test dataset and then we will compute the RMSE.
#
# The steps you should perform are:
# * Run a prediction, using `my_model` as created above, on the test dataset (`test_df`), producing a new `predict_df` DataFrame.
# * Filter out unwanted NaN values (necessary because of [a bug in Spark](https://issues.apache.org/jira/browse/SPARK-14489)). We've supplied this piece of code for you.
# * Use the previously created RMSE evaluator, `reg_eval` to evaluate the filtered DataFrame.
# +
# TODO: Replace <FILL_IN> with the appropriate code
# In ML Pipelines, this next step has a bug that produces unwanted NaN values. We
# have to filter them out. See https://issues.apache.org/jira/browse/SPARK-14489
predict_df = my_model.transform(test_df)
# Remove NaN values from prediction (due to SPARK-14489)
predicted_test_df = predict_df.filter(predict_df.prediction != float('nan'))
# Run the previously created RMSE evaluator, reg_eval, on the predicted_test_df DataFrame
test_RMSE = reg_eval.evaluate(predicted_test_df)
print('The model had a RMSE on the test set of {0}'.format(test_RMSE))
# -
# TEST Testing Your Model (2c)
Test.assertTrue(abs(test_RMSE - 0.809624038485) < tolerance, 'incorrect test_RMSE: {0:.11f}'.format(test_RMSE))
# ### (2d) Comparing Your Model
#
# Looking at the RMSE for the results predicted by the model versus the values in the test set is one way to evaluate the quality of our model. Another way to evaluate the model is to compute the error on a test set where every rating is predicted to be the average rating from the training set.
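# This baseline can be sketched directly: predict the training-set mean for every test rating and compute the RMSE of that constant predictor; any useful model should beat this number. (Plain-Python illustration with made-up ratings.)

```python
import math

train_ratings = [5.0, 3.0, 4.0, 2.0, 4.5]   # made-up training ratings
test_ratings = [4.0, 3.5, 1.0, 5.0]         # made-up test ratings

mean = sum(train_ratings) / len(train_ratings)   # the constant prediction
baseline_rmse = math.sqrt(
    sum((mean - y) ** 2 for y in test_ratings) / len(test_ratings))
print('%.2f %.3f' % (mean, baseline_rmse))
```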
#
# The steps you should perform are:
# * Use the `training_df` to compute the average rating across all movies in that training dataset.
# * Use the average rating that you just determined and the `test_df` to create a DataFrame (`test_for_avg_df`) with a `prediction` column containing the average rating. **HINT**: You'll want to use the `lit()` function,
# from `pyspark.sql.functions`, available here as `F.lit()`.
# * Use our previously created `reg_eval` object to evaluate the `test_for_avg_df` and calculate the RMSE.
# +
# TODO: Replace <FILL_IN> with the appropriate code.
# Compute the average rating
from pyspark.sql.functions import lit
avg_rating_df = training_df.groupBy().avg("rating")
#movie_ids_with_avg_ratings_df = ratings_df.groupBy('movieId').agg(F.count(ratings_df.rating).alias("count"), F.avg(ratings_df.rating).alias("average"))
# Extract the average rating value. (This is row 0, column 0.)
training_avg_rating = avg_rating_df.collect()[0][0]
print('The average rating for movies in the training set is {0}'.format(training_avg_rating))
# Add a column with the average rating
test_for_avg_df = test_df.withColumn('prediction', lit(training_avg_rating))
test_for_avg_df.show()
# Run the previously created RMSE evaluator, reg_eval, on the test_for_avg_df DataFrame
test_avg_RMSE = reg_eval.evaluate(test_for_avg_df)
print("The RMSE on the average set is {0}".format(test_avg_RMSE))
# -
# TEST Comparing Your Model (2d)
Test.assertTrue(abs(training_avg_rating - 3.52547984237) < 0.000001,
'incorrect training_avg_rating (expected 3.52547984237): {0:.11f}'.format(training_avg_rating))
Test.assertTrue(abs(test_avg_RMSE - 1.05190953037) < 0.000001,
                'incorrect test_avg_RMSE (expected 1.05190953037): {0:.11f}'.format(test_avg_RMSE))
# You now have code to predict how users will rate movies!
# ## Part 3: Predictions for Yourself
# The ultimate goal of this lab exercise is to predict what movies to recommend to yourself. In order to do that, you will first need to add ratings for yourself to the `ratings_df` dataset.
# **(3a) Your Movie Ratings**
#
# To help you provide ratings for yourself, we have included the following code to list the names and movie IDs of the 100 highest-rated movies from `movies_with_500_ratings_or_more` which we created in part 1 of the lab.
print 'Most rated movies:'
print '(average rating, movie name, number of reviews, movie ID)'
display(movies_with_500_ratings_or_more.orderBy(movies_with_500_ratings_or_more['average'].desc()).take(100))
# The user ID 0 is unassigned, so we will use it for your ratings. We set the variable `my_user_id` to 0 for you. Next, create a new DataFrame called `my_ratings_df`, with at least 10 of your own movie ratings. Each entry should be formatted as `(my_user_id, movieID, rating)`. As in the original dataset, ratings should be between 1 and 5 (inclusive). If you have not seen at least 10 of these movies, you can increase the parameter passed to `take()` in the above cell until there are 10 movies that you have seen (or you can also guess what your rating would be for movies you have not seen).
# +
# TODO: Replace <FILL IN> with appropriate code
from pyspark.sql import Row
my_user_id = 0
# Note that the movie IDs are the *last* number on each line. A common error was to use the number of ratings as the movie ID.
my_rated_movies = [
(my_user_id,260,4),
(my_user_id,950,4),
(my_user_id,58559,5),
(my_user_id,1196,4),
(my_user_id,593,5),
(my_user_id,79132,5),
(my_user_id,2858,3),
(my_user_id,7153,4),
(my_user_id,4993,4),
(my_user_id,92259,5),
#(my_user_id,1256,2),
# The format of each line is (my_user_id, movie ID, your rating)
# For example, to give the movie "Star Wars: Episode IV - A New Hope (1977)" a five rating, you would add the following line:
# (my_user_id, 260, 5),
]
my_ratings_df = sqlContext.createDataFrame(my_rated_movies, ['userId','movieId','rating'])
print 'My movie ratings:'
display(my_ratings_df.limit(10))
# -
# ### (3b) Add Your Movies to Training Dataset
#
# Now that you have ratings for yourself, you need to add your ratings to the `training` dataset so that the model you train will incorporate your preferences. Spark's [unionAll()](http://spark.apache.org/docs/1.6.2/api/python/pyspark.sql.html#pyspark.sql.DataFrame.unionAll) transformation combines two DataFrames; use `unionAll()` to create a new training dataset that includes your ratings and the data in the original training dataset.
# +
# TODO: Replace <FILL IN> with appropriate code
training_with_my_ratings_df = training_df.unionAll(my_ratings_df)
#training_df
training_with_my_ratings_df.orderBy('userId').show()
print ('The training dataset now has %s more entries than the original training dataset' %
       (training_with_my_ratings_df.count() - training_df.count()))
assert (training_with_my_ratings_df.count() - training_df.count()) == my_ratings_df.count()
# -
# ### (3c) Train a Model with Your Ratings
#
# Now, train a model with your ratings added and the parameters you used in parts (2b) and (2c). Make sure you include **all** of the parameters.
#
# **Note**: This cell will take about 30 seconds to run.
# +
# TODO: Replace <FILL IN> with appropriate code
# Reset the parameters for the ALS object.
als.setPredictionCol("prediction")\
.setMaxIter(5)\
.setSeed(seed)\
.setRegParam(0.1)\
.setUserCol("userId")\
.setItemCol("movieId")\
.setRatingCol("rating")
# Create the model with these parameters.
my_ratings_model = als.fit(training_with_my_ratings_df)
print my_ratings_model
# -
# ### (3d) Check RMSE for the New Model with Your Ratings
#
# Compute the RMSE for this new model on the test set.
# * Run your model (the one you just trained) against the test data set in `test_df`.
# * Then, use our previously-computed `reg_eval` object to compute the RMSE of your ratings.
# +
# TODO: Replace <FILL IN> with appropriate code
my_predict_df = my_ratings_model.transform(test_df)
# Remove NaN values from prediction (due to SPARK-14489)
predicted_test_my_ratings_df = my_predict_df.filter(my_predict_df.prediction != float('nan'))
# Run the previously created RMSE evaluator, reg_eval, on the predicted_test_my_ratings_df DataFrame
test_RMSE_my_ratings = reg_eval.evaluate(predicted_test_my_ratings_df)
print('The model had a RMSE on the test set of {0}'.format(test_RMSE_my_ratings))
# -
# ### (3e) Predict Your Ratings
#
# So far, we have only computed the error of the model. Next, let's predict what ratings you would give to the movies that you did not already provide ratings for.
#
# The steps you should perform are:
# * Filter out the movies you already rated manually. (Use the `my_rated_movie_ids` variable.) Put the results in a new `not_rated_df`.
#
# **Hint**: The [Column.isin()](http://spark.apache.org/docs/1.6.2/api/python/pyspark.sql.html#pyspark.sql.Column.isin)
# method, as well as the `~` ("not") DataFrame logical operator, may come in handy here. Here's an example of using `isin()`:
#
# ```
# > df1 = sqlContext.createDataFrame([("Jim", 10), ("Julie", 9), ("Abdul", 20), ("Mireille", 19)], ["name", "age"])
# > df1.show()
# +--------+---+
# | name|age|
# +--------+---+
# | Jim| 10|
# | Julie| 9|
# | Abdul| 20|
# |Mireille| 19|
# +--------+---+
#
# > names_to_delete = ["Julie", "Abdul"] # this is just a Python list
# > df2 = df1.filter(~ df1["name"].isin(names_to_delete)) # "NOT IN"
# > df2.show()
# +--------+---+
# | name|age|
# +--------+---+
# | Jim| 10|
# |Mireille| 19|
# +--------+---+
# ```
#
# * Transform `not_rated_df` into `my_unrated_movies_df` by:
# - renaming the "ID" column to "movieId"
# - adding a "userId" column with the value contained in the `my_user_id` variable defined above.
#
# * Create a `predicted_ratings_df` DataFrame by applying `my_ratings_model` to `my_unrated_movies_df`.
# +
# TODO: Replace <FILL_IN> with the appropriate code
# Create a list of my rated movie IDs
my_rated_movie_ids = [x[1] for x in my_rated_movies]
print my_rated_movie_ids
# Filter out the movies I already rated.
not_rated_df = movies_df.filter(~ movies_df["ID"].isin(my_rated_movie_ids))
not_rated_df.show()
# Rename the "ID" column to be "movieId", and add a column with my_user_id as "userId".
#my_unrated_movies_df = not_rated_df.withColumn("userId",lit(my_user_id))
my_unrated_movies_df = not_rated_df.select(not_rated_df["ID"].alias("movieId"),not_rated_df["title"]).withColumn("userId",lit(my_user_id))
#my_unrated_movies_df.show()
# Use my_rating_model to predict ratings for the movies that I did not manually rate.
raw_predicted_ratings_df = my_ratings_model.transform(my_unrated_movies_df)
predicted_ratings_df = raw_predicted_ratings_df.filter(raw_predicted_ratings_df['prediction'] != float('nan'))
predicted_ratings_df.show()
# -
# ### (3f) Predict Your Ratings
#
# We have our predicted ratings. Now we can print out the 25 movies with the highest predicted ratings.
#
# The steps you should perform are:
# * Join your `predicted_ratings_df` DataFrame with the `movie_names_with_avg_ratings_df` DataFrame to obtain the ratings counts for each movie.
# * Sort the resulting DataFrame (`predicted_with_counts_df`) by predicted rating (highest ratings first), and remove any ratings with a count of 75 or less.
# * Print the top 25 movies that remain.
# +
# TODO: Replace <FILL_IN> with the appropriate code
predicted_with_counts_df = movie_names_with_avg_ratings_df.join(predicted_ratings_df,movie_names_with_avg_ratings_df.movieId == predicted_ratings_df.movieId,'inner')
predicted_highest_rated_movies_df = predicted_with_counts_df.where(predicted_with_counts_df["count"]> 75).orderBy(predicted_with_counts_df["prediction"].desc())
#display(predicted_highest_rated_movies_df)
print ('My 25 highest rated movies as predicted (for movies with more than 75 reviews):')
display(predicted_highest_rated_movies_df.take(25))
# -
# ## Appendix A: Submitting Your Exercises to the Autograder
#
# This section guides you through Step 2 of the grading process ("Submit to Autograder").
#
# Once you confirm that your lab notebook is passing all tests, you can submit it first to the course autograder and then second to the edX website to receive a grade.
#
# **Note that you can only submit to the course autograder once every 1 minute.**
# ### Step 2(a): Restart your cluster by clicking on the dropdown next to your cluster name and selecting "Restart Cluster".
#
# You can do this step in either notebook, since there is one cluster for your notebooks.
#
# <img src="http://spark-mooc.github.io/web-assets/images/submit_restart.png" alt="Drawing" />
# ### Step 2(b): _IN THIS NOTEBOOK_, click on "Run All" to run all of the cells.
#
# <img src="http://spark-mooc.github.io/web-assets/images/submit_runall.png" alt="Drawing" style="height: 80px"/>
#
# This step will take some time.
#
# Wait for your cluster to finish running the cells in your lab notebook before proceeding.
# ### Step 2(c): Publish this notebook
#
# Publish _this_ notebook by clicking on the "Publish" button at the top.
#
# <img src="http://spark-mooc.github.io/web-assets/images/Lab0_Publish0.png" alt="Drawing" style="height: 150px"/>
#
# When you click on the button, you will see the following popup.
#
# <img src="http://spark-mooc.github.io/web-assets/images/Lab0_Publish1.png" alt="Drawing" />
#
# When you click on "Publish", you will see a popup with your notebook's public link. **Copy the link and set the `notebook_URL` variable in the AUTOGRADER notebook (not this notebook).**
#
# <img src="http://spark-mooc.github.io/web-assets/images/Lab0_Publish2.png" alt="Drawing" />
# ### Step 2(d): Set the notebook URL and Lab ID in the Autograder notebook, and run it
#
# Go to the Autograder notebook and paste the link you just copied into it, so that it is assigned to the `notebook_url` variable.
#
# ```
# notebook_url = "..." # put your URL here
# ```
#
# Then, find the line that looks like this:
#
# ```
# lab = <FILL IN>
# ```
# and change `<FILL IN>` to "CS110x-lab2":
#
# ```
# lab = "CS110x-lab2"
# ```
#
# Then, run the Autograder notebook to submit your lab.
# ### <img src="http://spark-mooc.github.io/web-assets/images/oops.png" style="height: 200px"/> If things go wrong
#
# It's possible that your notebook looks fine to you, but fails in the autograder. (This can happen when you run cells out of order, as you're working on your notebook.) If that happens, just try again, starting at the top of Appendix A.
| Distributed Machine Learning with Apache Spark/cs110_lab2_als_prediction.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
plt.rcParams["figure.figsize"] = [12, 9]
matplotlib.style.use('ggplot')
# %matplotlib inline
# -
# # Data exploration
def read_data(path):
return pd.read_csv(path,
index_col=False,
skipinitialspace=True,
names=['age', 'workclass', 'fnlwgt', 'education', 'education_num',
'marital_status', 'occupation', 'relationship', 'race', 'sex',
'capital_gain', 'capital_loss', 'hours_per_week', 'native_country',
'income']
)
train = read_data('./data/adult/adult.data')
test = read_data('./data/adult/adult.test')
train = train.append(test)
train.head()
train.hist(figsize=(12, 9))
# `age, education_num, hours_per_week, fnlwgt` seem like good candidates as features. Not much information in `capital_gain, capital_loss`.
# ## Some routine stuff
# 1. Convert objects to categories
# 2. Drop duplicates
# 3. Drop NA's - we can potentially impute these values. But always try out the simpler alternative before making it too complicated :)
# +
# for column in train.select_dtypes(['object']).columns:
# train[column] = train[column].astype('category')
## Check for duplicates, nulls
train.drop_duplicates(inplace=True)
train.dropna(inplace=True)
print any(train.duplicated())
print train.isnull().any()
# -
# Let's clean some data
# Use .loc with a column label to avoid chained-indexing assignment, which can silently modify a copy
train.loc[train.income == '>50K.', 'income'] = '>50K'
train.loc[train.income == '<=50K.', 'income'] = '<=50K'
train.income.value_counts()
# ## Intuition 1:
# Higher education should result in more income.
education_subset = train.groupby(['education_num', 'income']).size().reset_index()
education_subset.columns = ['education_num', 'income', 'count']
func = lambda x: float(x['count']) / train[train.education_num == x.education_num].count()[0]
education_subset['percentage'] = education_subset.apply(func, axis=1)
education_subset['education + income'] = education_subset.apply(lambda x: '%s, %s' % (x.education_num, x.income), axis=1)
education_subset.sort().plot(kind='barh', x='education + income', y='percentage', figsize=(12,12))
# Above plot shows percentage of population with respect to education and income, and it seems people with Masters and PhD tend to earn to more (more number of people are in >50K bucket).
# ## Intuition 2:
# People earn more as they get more experience.
train.groupby('income').hist(figsize=(15,12))
# The first plot shows the distribution of age for income <= 50K. Age is used as a proxy for experience; the assumption is that people continue to work as they age and acquire more skills in the process. As per intuition, the number of people making less than 50K decreases with age.
#
# The second plot shows income > 50K. More interestingly, the data shows a peak around 45. This indicates either that there aren't enough people aged 45+ earning more than 50K in the data, or that income decreases as people approach retirement.
# # Feature construction
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
lencoder = LabelEncoder()
oencoder = OneHotEncoder()
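# `LabelEncoder` simply maps each distinct category to an integer code. The idea fits in a few lines of plain Python (a sketch of the concept, not scikit-learn's implementation):

```python
def fit_label_encoder(values):
    """Map each distinct value to an integer code, in sorted order (as sklearn does)."""
    classes = sorted(set(values))
    return {c: i for i, c in enumerate(classes)}

# Made-up sample of the 'sex' column
sex = ['Male', 'Female', 'Female', 'Male']
mapping = fit_label_encoder(sex)
encoded = [mapping[v] for v in sex]
print(mapping)
print(encoded)
```

# One caveat: these integer codes impose an arbitrary ordering, which is why one-hot encoding is often preferred for nominal categories.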
# +
features = pd.DataFrame()
features['age'] = train['age']
features['education_num'] = train['education_num']
features['hours_per_week'] = train['hours_per_week']
features['fnlwgt'] = train['fnlwgt']
features['sex'] = lencoder.fit_transform(train.sex)
features['occupation'] = lencoder.fit_transform(train.occupation)
features['income'] = lencoder.fit_transform(train.income)  # bracket assignment: attribute-style assignment would not create a new column
features.head()
# -
# # Model fitting
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in scikit-learn 0.20
x_train, x_test, y_train, y_test = train_test_split(features.drop('income', axis=1), features.income)
model = RandomForestClassifier()
model.fit(x_train, y_train)
y_hat = model.predict(x_test)
# # Model/Feature Evaluation
from sklearn.metrics import confusion_matrix, accuracy_score
accuracy_score(y_test, y_hat)
confusion_matrix(y_test, y_hat)
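# Accuracy alone can hide class imbalance, so it helps to read precision and recall off the confusion matrix directly. A sketch with a hand-made binary matrix (the numbers are illustrative, not the census results):

```python
import numpy as np

# Rows = true class, columns = predicted class: [[TN, FP], [FN, TP]]
cm = np.array([[50, 10],
               [5, 35]])
tn, fp, fn, tp = cm.ravel()
precision = tp / (tp + fp)   # 35 / 45
recall    = tp / (tp + fn)   # 35 / 40
accuracy  = (tp + tn) / cm.sum()
print(precision, recall, accuracy)
```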
| 03-adult-income-by-census.ipynb |
# ##### Copyright 2021 Google LLC.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# # divisible_by_9_through_1
# <table align="left">
# <td>
# <a href="https://colab.research.google.com/github/google/or-tools/blob/master/examples/notebook/contrib/divisible_by_9_through_1.ipynb"><img src="https://raw.githubusercontent.com/google/or-tools/master/tools/colab_32px.png"/>Run in Google Colab</a>
# </td>
# <td>
# <a href="https://github.com/google/or-tools/blob/master/examples/contrib/divisible_by_9_through_1.py"><img src="https://raw.githubusercontent.com/google/or-tools/master/tools/github_32px.png"/>View source on GitHub</a>
# </td>
# </table>
# First, you must install [ortools](https://pypi.org/project/ortools/) package in this colab.
# !pip install ortools
# +
# Copyright 2010 <NAME> <EMAIL>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Divisible by 9 through 1 puzzle in Google CP Solver.
From http://msdn.microsoft.com/en-us/vcsharp/ee957404.aspx
' Solving Combinatory Problems with LINQ'
'''
Find a number consisting of 9 digits in which each of the digits
from 1 to 9 appears only once. This number must also satisfy these
divisibility requirements:
1. The number should be divisible by 9.
2. If the rightmost digit is removed, the remaining number should
be divisible by 8.
3. If the rightmost digit of the new number is removed, the remaining
number should be divisible by 7.
4. And so on, until there's only one digit (which will necessarily
be divisible by 1).
'''
Also, see
'Intel Parallel Studio: Great for Serial Code Too (Episode 1)'
http://software.intel.com/en-us/blogs/2009/12/07/intel-parallel-studio-great-for-serial-code-too-episode-1/
This model is however generalized to handle any base, for reasonable limits.
The 'reasonable limit' for this model is that base must be between 2..16.
Compare with the following models:
* MiniZinc: http://www.hakank.org/minizinc/divisible_by_9_through_1.mzn
* Comet : http://www.hakank.org/comet/divisible_by_9_through_1.co
* ECLiPSe : http://www.hakank.org/eclipse/divisible_by_9_through_1.ecl
* Gecode : http://www.hakank.org/gecode/divisible_by_9_through_1.cpp
This model was created by <NAME> (<EMAIL>)
Also see my other Google CP Solver models:
http://www.hakank.org/google_or_tools/
"""
import sys
from ortools.constraint_solver import pywrapcp
#
# Decomposition of modulo constraint
#
# This implementation is based on the ECLiPSe version
# mentioned in
# - A Modulo propagator for ECLiPSE'
# http://www.hakank.org/constraint_programming_blog/2010/05/a_modulo_propagator_for_eclips.html
# The ECLiPSe source code:
# http://www.hakank.org/eclipse/modulo_propagator.ecl
#
def my_mod(solver, x, y, r):
if not isinstance(y, int):
solver.Add(y != 0)
lbx = x.Min()
ubx = x.Max()
ubx_neg = -ubx
lbx_neg = -lbx
min_x = min(lbx, ubx_neg)
max_x = max(ubx, lbx_neg)
d = solver.IntVar(max(0, min_x), max_x, "d")
if not isinstance(r, int):
solver.Add(r >= 0)
solver.Add(x * r >= 0)
  if not isinstance(r, int) and not isinstance(y, int):
solver.Add(-abs(y) < r)
solver.Add(r < abs(y))
solver.Add(min_x <= d)
solver.Add(d <= max_x)
solver.Add(x == y * d + r)
#
# converts a number (s) <-> an array of integers (t) in the specific base.
#
def toNum(solver, t, s, base):
tlen = len(t)
solver.Add(
s == solver.Sum([(base**(tlen - i - 1)) * t[i] for i in range(tlen)]))
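# The constraint posted by toNum is just a positional-weight sum; the same arithmetic in plain Python, for intuition outside the solver:

```python
def digits_to_num(t, base=10):
    # Mirrors the weighted sum that toNum posts as a constraint.
    tlen = len(t)
    return sum((base**(tlen - i - 1)) * t[i] for i in range(tlen))

print(digits_to_num([3, 8, 1]))      # base 10
print(digits_to_num([1, 0, 1], 2))   # base 2
```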
# Create the solver.
solver = pywrapcp.Solver("Divisible by 9 through 1")
# data
base = 10  # define the base before first use; the model handles bases 2..16
m = base**(base - 1) - 1
n = base - 1
digits_str = "_0123456789ABCDEFGH"
print("base:", base)
# declare variables
# the digits
x = [solver.IntVar(1, base - 1, "x[%i]" % i) for i in range(n)]
# the numbers, t[0] contains the answer
t = [solver.IntVar(0, m, "t[%i]" % i) for i in range(n)]
#
# constraints
#
solver.Add(solver.AllDifferent(x))
for i in range(n):
mm = base - i - 1
toNum(solver, [x[j] for j in range(mm)], t[i], base)
my_mod(solver, t[i], mm, 0)
#
# solution and search
#
solution = solver.Assignment()
solution.Add(x)
solution.Add(t)
db = solver.Phase(x, solver.CHOOSE_FIRST_UNBOUND, solver.ASSIGN_MIN_VALUE)
solver.NewSearch(db)
num_solutions = 0
while solver.NextSolution():
print("x: ", [x[i].Value() for i in range(n)])
print("t: ", [t[i].Value() for i in range(n)])
print("number base 10: %i base %i: %s" % (t[0].Value(), base, "".join(
[digits_str[x[i].Value() + 1] for i in range(n)])))
print()
num_solutions += 1
solver.EndSearch()
print("num_solutions:", num_solutions)
print("failures:", solver.Failures())
print("branches:", solver.Branches())
print("WallTime:", solver.WallTime())
base = 10
default_base = 10
max_base = 16
| examples/notebook/contrib/divisible_by_9_through_1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] papermill={"duration": 0.047432, "end_time": "2021-07-31T00:08:30.097072", "exception": false, "start_time": "2021-07-31T00:08:30.049640", "status": "completed"} tags=[]
# ### Titanic | ML Classification Prediction Algorithms (79.2% accuracy)
# + papermill={"duration": 1.481329, "end_time": "2021-07-31T00:08:31.626319", "exception": false, "start_time": "2021-07-31T00:08:30.144990", "status": "completed"} tags=[]
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import LogisticRegression
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import accuracy_score
from sklearn.metrics import r2_score
from sklearn.metrics import classification_report
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import Perceptron
from sklearn import svm
import warnings
# + papermill={"duration": 0.052274, "end_time": "2021-07-31T00:08:31.724747", "exception": false, "start_time": "2021-07-31T00:08:31.672473", "status": "completed"} tags=[]
warnings.filterwarnings('ignore')
# + [markdown] papermill={"duration": 0.045848, "end_time": "2021-07-31T00:08:31.815758", "exception": false, "start_time": "2021-07-31T00:08:31.769910", "status": "completed"} tags=[]
# ## EDA
# + papermill={"duration": 0.105399, "end_time": "2021-07-31T00:08:31.966461", "exception": false, "start_time": "2021-07-31T00:08:31.861062", "status": "completed"} tags=[]
df_train = pd.read_csv("../input/titanic/train.csv",index_col="PassengerId")
df_train.head()
# + papermill={"duration": 0.088236, "end_time": "2021-07-31T00:08:32.112478", "exception": false, "start_time": "2021-07-31T00:08:32.024242", "status": "completed"} tags=[]
df_test = pd.read_csv("../input/titanic/test.csv", index_col="PassengerId")
df_test.head()
# + papermill={"duration": 0.055332, "end_time": "2021-07-31T00:08:32.213579", "exception": false, "start_time": "2021-07-31T00:08:32.158247", "status": "completed"} tags=[]
df_train.Parch.unique()
# + papermill={"duration": 0.069469, "end_time": "2021-07-31T00:08:32.330137", "exception": false, "start_time": "2021-07-31T00:08:32.260668", "status": "completed"} tags=[]
df_test.info()
# + papermill={"duration": 0.070011, "end_time": "2021-07-31T00:08:32.446872", "exception": false, "start_time": "2021-07-31T00:08:32.376861", "status": "completed"} tags=[]
Y_test = pd.read_csv("../input/titanic/gender_submission.csv", index_col="PassengerId")
Y_test.head()
# + papermill={"duration": 0.064309, "end_time": "2021-07-31T00:08:32.558440", "exception": false, "start_time": "2021-07-31T00:08:32.494131", "status": "completed"} tags=[]
df_train.info()
# + papermill={"duration": 0.058512, "end_time": "2021-07-31T00:08:32.664406", "exception": false, "start_time": "2021-07-31T00:08:32.605894", "status": "completed"} tags=[]
df_train.isnull().sum()
# + [markdown] papermill={"duration": 0.050721, "end_time": "2021-07-31T00:08:32.763267", "exception": false, "start_time": "2021-07-31T00:08:32.712546", "status": "completed"} tags=[]
# ## Data Cleaning & Encoding
# + papermill={"duration": 0.064494, "end_time": "2021-07-31T00:08:32.877221", "exception": false, "start_time": "2021-07-31T00:08:32.812727", "status": "completed"} tags=[]
df_train[["Embarked","Name"]].groupby(by=["Embarked"],as_index=True).count().sort_values("Name",ascending=False)
# + papermill={"duration": 0.060667, "end_time": "2021-07-31T00:08:32.986527", "exception": false, "start_time": "2021-07-31T00:08:32.925860", "status": "completed"} tags=[]
# The most frequent Embarked value is "S", so we can fill the null values in the Embarked column with it.
most_repeated = "S"
df_train.Embarked.replace(np.nan, most_repeated, inplace=True)
df_test.Embarked.replace(np.nan, most_repeated, inplace=True)
print("the number of null value in Embarked Column =",df_train.Embarked.isnull().sum())
# + papermill={"duration": 0.075527, "end_time": "2021-07-31T00:08:33.110719", "exception": false, "start_time": "2021-07-31T00:08:33.035192", "status": "completed"} tags=[]
# transform Embarked Column to numeric.
Embarked_transform_dict = {"S": 1, "C": 2, "Q": 3}
df_train.Embarked.replace(Embarked_transform_dict, inplace=True)
df_test.Embarked.replace(Embarked_transform_dict, inplace=True)
df_train.head(5)
# The Embarked column is now cleaned and encoded as numeric.
# + papermill={"duration": 0.059087, "end_time": "2021-07-31T00:08:33.219766", "exception": false, "start_time": "2021-07-31T00:08:33.160679", "status": "completed"} tags=[]
print("the number of null value in Cabin Column =", df_train.Cabin.isnull().sum())
# 687 of the 891 values (77%) in the Cabin column are null, so the column is dropped.
# + papermill={"duration": 0.082561, "end_time": "2021-07-31T00:08:33.352112", "exception": false, "start_time": "2021-07-31T00:08:33.269551", "status": "completed"} tags=[]
df_train.drop("Cabin", axis=1, inplace=True)
df_test.drop("Cabin", axis=1, inplace=True)
df_train
# + papermill={"duration": 0.061156, "end_time": "2021-07-31T00:08:33.463617", "exception": false, "start_time": "2021-07-31T00:08:33.402461", "status": "completed"} tags=[]
# Fare column:
# calculate the range of values in the Fare column.
print("Range of Fare column values = ", df_train.Fare.max() - df_train.Fare.min())
# min value is 0.0 and max value is 512.3292;
# this range is divided into bins below.
df_test.Fare.replace(np.nan, df_test.Fare.mean(), inplace=True)
print("Range of Fare column values = ", df_test.Fare.max() - df_test.Fare.min())
# + papermill={"duration": 0.078808, "end_time": "2021-07-31T00:08:33.593046", "exception": false, "start_time": "2021-07-31T00:08:33.514238", "status": "completed"} tags=[]
df_train.Fare = df_train.Fare.astype("int64")
df_test.Fare = df_test.Fare.astype("int64")
# df_train.info()
df_test
# + papermill={"duration": 0.073791, "end_time": "2021-07-31T00:08:33.718640", "exception": false, "start_time": "2021-07-31T00:08:33.644849", "status": "completed"} tags=[]
bins_i = [-1, 50, 100, 150, 200, 250, 300, 350, 400, 450, 500, 550]
labels_i = [1,2,3,4,5,6,7,8,9,10,11]
df_train['stage'] = 0
df_train['stage'] = pd.cut(df_train.Fare, bins=bins_i, labels=labels_i)
df_test['stage'] = 0
df_test['stage'] = pd.cut(df_test.Fare, bins=bins_i, labels=labels_i)
df_train.stage.unique()
# + papermill={"duration": 0.068855, "end_time": "2021-07-31T00:08:33.839641", "exception": false, "start_time": "2021-07-31T00:08:33.770786", "status": "completed"} tags=[]
df_train.Fare = df_train.stage.astype("int64")
df_test.Fare = df_test.stage.astype("int64")
df_train.drop("stage", axis=1, inplace=True)
df_test.drop("stage", axis=1, inplace=True)
# + papermill={"duration": 0.070026, "end_time": "2021-07-31T00:08:33.962265", "exception": false, "start_time": "2021-07-31T00:08:33.892239", "status": "completed"} tags=[]
df_train.head()
# + papermill={"duration": 0.062008, "end_time": "2021-07-31T00:08:34.077172", "exception": false, "start_time": "2021-07-31T00:08:34.015164", "status": "completed"} tags=[]
df_test.Fare.unique()
# + papermill={"duration": 0.064889, "end_time": "2021-07-31T00:08:34.194365", "exception": false, "start_time": "2021-07-31T00:08:34.129476", "status": "completed"} tags=[]
df_train.Ticket.unique()
# + papermill={"duration": 0.062808, "end_time": "2021-07-31T00:08:34.310157", "exception": false, "start_time": "2021-07-31T00:08:34.247349", "status": "completed"} tags=[]
# Drop the Ticket column: it is essentially a unique identifier and carries no useful signal.
df_train.drop("Ticket", axis=1, inplace=True)
df_test.drop("Ticket", axis=1, inplace=True)
# + papermill={"duration": 0.07209, "end_time": "2021-07-31T00:08:34.435088", "exception": false, "start_time": "2021-07-31T00:08:34.362998", "status": "completed"} tags=[]
df_train.head()
# + papermill={"duration": 0.078603, "end_time": "2021-07-31T00:08:34.567349", "exception": false, "start_time": "2021-07-31T00:08:34.488746", "status": "completed"} tags=[]
# Sex column.
Sex_dict = {"male":1, "female":2}
for key, value in Sex_dict.items():
df_train.Sex.replace(key, value, inplace=True)
df_test.Sex.replace(key, value, inplace=True)
df_train.Sex = df_train.Sex.astype("int64")
df_test.Sex = df_test.Sex.astype("int64")
df_train.head()
# + papermill={"duration": 0.096202, "end_time": "2021-07-31T00:08:34.717424", "exception": false, "start_time": "2021-07-31T00:08:34.621222", "status": "completed"} tags=[]
df_train["Title"] = 0
titles = {"Mr": 1, "Miss": 2, "Mrs": 3, "Master": 4, "Rare": 5}
data = [df_train, df_test]
for dataset in data:
# extract titles
dataset['Title'] = dataset.Name.str.extract(' ([A-Za-z]+)\.', expand=False)
# replace titles with a more common title or as Rare
dataset['Title'] = dataset['Title'].replace(['Lady', 'Countess','Capt', 'Col','Don', 'Dr',\
'Major', 'Rev', 'Sir', 'Jonkheer', 'Dona'], 'Rare')
dataset['Title'] = dataset['Title'].replace(['Mlle','Ms'], 'Miss')
dataset['Title'] = dataset['Title'].replace('Mme', 'Mrs')
# convert titles into numbers
dataset['Title'] = dataset['Title'].map(titles)
# filling NaN with 0, to get safe
dataset['Title'] = dataset['Title'].fillna(0)
df_train = df_train.drop(['Name'], axis=1)
df_test = df_test.drop(['Name'], axis=1)
df_train
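# The title regex used above pulls the first alphabetic word that is preceded by a space and followed by a period. A quick standalone check of the pattern on sample names:

```python
import re

pattern = re.compile(r' ([A-Za-z]+)\.')
names = ['Braund, Mr. Owen Harris',
         'Heikkinen, Miss. Laina',
         'Doe, Mme. Jane']
titles_found = [pattern.search(n).group(1) for n in names]
print(titles_found)
```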
# + papermill={"duration": 0.067011, "end_time": "2021-07-31T00:08:34.839411", "exception": false, "start_time": "2021-07-31T00:08:34.772400", "status": "completed"} tags=[]
df_train.isnull().sum()
# + papermill={"duration": 0.09411, "end_time": "2021-07-31T00:08:34.988893", "exception": false, "start_time": "2021-07-31T00:08:34.894783", "status": "completed"} tags=[]
df_Age_train = df_train.loc[pd.notna(df_train.Age)].copy()  # copy to avoid SettingWithCopyWarning
df_Age_train.Age = df_Age_train.Age.astype("float64")
df_Age_train.Age = (df_Age_train.Age - df_Age_train.Age.mean()) / df_Age_train.Age.std()
df_Age_train
# + papermill={"duration": 0.068901, "end_time": "2021-07-31T00:08:35.113716", "exception": false, "start_time": "2021-07-31T00:08:35.044815", "status": "completed"} tags=[]
df_Age_train.Survived.corr(df_Age_train.Age)
# The correlation between Age and Survived is very small, so the Age column is dropped.
# + papermill={"duration": 0.087336, "end_time": "2021-07-31T00:08:35.269086", "exception": false, "start_time": "2021-07-31T00:08:35.181750", "status": "completed"} tags=[]
df_train.drop("Age", axis=1, inplace=True)
df_test.drop("Age", axis=1, inplace=True)
df_train
# + papermill={"duration": 0.088343, "end_time": "2021-07-31T00:08:35.414532", "exception": false, "start_time": "2021-07-31T00:08:35.326189", "status": "completed"} tags=[]
df_test
# + papermill={"duration": 0.070558, "end_time": "2021-07-31T00:08:35.549597", "exception": false, "start_time": "2021-07-31T00:08:35.479039", "status": "completed"} tags=[]
data = [df_train, df_test]
for dataset in data:
dataset['FamilySize'] = dataset['SibSp'] + dataset['Parch'] + 1
# + papermill={"duration": 0.081277, "end_time": "2021-07-31T00:08:35.692510", "exception": false, "start_time": "2021-07-31T00:08:35.611233", "status": "completed"} tags=[]
for dataset in data:
dataset['IsAlone'] = 0
dataset.loc[dataset['FamilySize'] == 1, 'IsAlone'] = 1
print (df_train[['IsAlone', 'Survived']].groupby(['IsAlone'], as_index=False).mean())
# + papermill={"duration": 0.869934, "end_time": "2021-07-31T00:08:36.621548", "exception": false, "start_time": "2021-07-31T00:08:35.751614", "status": "completed"} tags=[]
colormap=plt.cm.RdBu
figure = plt.figure(figsize=(12,12))
sns.heatmap(df_train.corr(),linewidths=0.1,vmax=1.0, square=True, cmap=colormap, linecolor='white', annot=True)
plt.title("Correlations",size=15)
plt.xlabel("Features")
plt.ylabel("Features")
plt.show()
# + papermill={"duration": 0.07039, "end_time": "2021-07-31T00:08:36.752194", "exception": false, "start_time": "2021-07-31T00:08:36.681804", "status": "completed"} tags=[]
columns = ["Pclass","Sex", "Fare", "Embarked","Title","IsAlone"]
X_train = df_train[columns]
Y_train = df_train["Survived"]
len(Y_train)
# + papermill={"duration": 0.071451, "end_time": "2021-07-31T00:08:36.885116", "exception": false, "start_time": "2021-07-31T00:08:36.813665", "status": "completed"} tags=[]
X_test = df_test[columns]
len(X_test)
# + papermill={"duration": 0.070561, "end_time": "2021-07-31T00:08:37.016595", "exception": false, "start_time": "2021-07-31T00:08:36.946034", "status": "completed"} tags=[]
len(Y_test)
# + [markdown] papermill={"duration": 0.06049, "end_time": "2021-07-31T00:08:37.138192", "exception": false, "start_time": "2021-07-31T00:08:37.077702", "status": "completed"} tags=[]
# ## ML Classification Prediction
# + [markdown] papermill={"duration": 0.060868, "end_time": "2021-07-31T00:08:37.260232", "exception": false, "start_time": "2021-07-31T00:08:37.199364", "status": "completed"} tags=[]
# ### 1- SGDClassifier (66%)
# + papermill={"duration": 0.216234, "end_time": "2021-07-31T00:08:37.537713", "exception": false, "start_time": "2021-07-31T00:08:37.321479", "status": "completed"} tags=[]
sgd_clf = SGDClassifier(random_state=42, max_iter=1000, tol=1e-3)
sgd_clf.fit(X_train, Y_train)
Y_pred_SGD = sgd_clf.predict(X_test)
print("the train score of SGD = ",round(sgd_clf.score(X_train, Y_train) *100, 2),"%")
# + [markdown] papermill={"duration": 0.061114, "end_time": "2021-07-31T00:08:37.660434", "exception": false, "start_time": "2021-07-31T00:08:37.599320", "status": "completed"} tags=[]
# ### 2- Random Forest (78%)
# + papermill={"duration": 0.523233, "end_time": "2021-07-31T00:08:38.245385", "exception": false, "start_time": "2021-07-31T00:08:37.722152", "status": "completed"} tags=[]
random_forest = RandomForestClassifier(n_estimators=40, min_samples_leaf=2, max_features=0.1, n_jobs=-1)
random_forest.fit(X_train, Y_train)
Y_pred_Random = random_forest.predict(X_test)
print("the train score of random_forest = ",round(random_forest.score(X_train, Y_train) *100, 2),"%")
# + [markdown] papermill={"duration": 0.061808, "end_time": "2021-07-31T00:08:38.369252", "exception": false, "start_time": "2021-07-31T00:08:38.307444", "status": "completed"} tags=[]
# ### 3- Logistic Regression (76.6%)
# + papermill={"duration": 0.081085, "end_time": "2021-07-31T00:08:38.512735", "exception": false, "start_time": "2021-07-31T00:08:38.431650", "status": "completed"} tags=[]
logistic_regression = LogisticRegression(solver='liblinear',max_iter=1000)
logistic_regression.fit(X_train, Y_train)
Y_pred_Logistic = logistic_regression.predict(X_test)
print("the train score of logistic_regression = ",round(logistic_regression.score(X_train, Y_train) *100, 2),"%")
# + [markdown] papermill={"duration": 0.061855, "end_time": "2021-07-31T00:08:38.636864", "exception": false, "start_time": "2021-07-31T00:08:38.575009", "status": "completed"} tags=[]
# ### 4- Decision Tree (78%)
# + papermill={"duration": 0.080381, "end_time": "2021-07-31T00:08:38.780066", "exception": false, "start_time": "2021-07-31T00:08:38.699685", "status": "completed"} tags=[]
tree = DecisionTreeClassifier(random_state=25)
tree.fit(X_train, Y_train)
Y_pred_Tree= tree.predict(X_test)
print("the score of prediction = ",round(tree.score(X_train, Y_train) * 100,2), "%")
# + papermill={"duration": 0.62375, "end_time": "2021-07-31T00:08:39.466814", "exception": false, "start_time": "2021-07-31T00:08:38.843064", "status": "completed"} tags=[]
scores= cross_val_score(tree, X_train, Y_train, scoring="accuracy", cv=100)
scores.mean()
# + [markdown] papermill={"duration": 0.062116, "end_time": "2021-07-31T00:08:39.590903", "exception": false, "start_time": "2021-07-31T00:08:39.528787", "status": "completed"} tags=[]
# ### 5- SVM (76.55%)
# + papermill={"duration": 0.091723, "end_time": "2021-07-31T00:08:39.746242", "exception": false, "start_time": "2021-07-31T00:08:39.654519", "status": "completed"} tags=[]
clf = svm.SVC(kernel = 'linear')
clf.fit(X_train, Y_train)
Y_predict_svm = clf.predict(X_test)
print("the score of prediction = ",round(clf.score(X_train, Y_train) * 100,2), "%")
# + [markdown] papermill={"duration": 0.064318, "end_time": "2021-07-31T00:08:39.875649", "exception": false, "start_time": "2021-07-31T00:08:39.811331", "status": "completed"} tags=[]
# ### 6- KNeighbors (79.18%)
# + papermill={"duration": 0.148581, "end_time": "2021-07-31T00:08:40.087311", "exception": false, "start_time": "2021-07-31T00:08:39.938730", "status": "completed"} tags=[]
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, Y_train)
Y_pred_KNN= knn.predict(X_test)
print("the score of prediction = ",round(knn.score(X_train, Y_train) * 100,2), "%")
# + papermill={"duration": 0.078601, "end_time": "2021-07-31T00:08:40.228918", "exception": false, "start_time": "2021-07-31T00:08:40.150317", "status": "completed"} tags=[]
output_csv = {"PassengerId":[*range(892,892+len(Y_pred_KNN))], "Survived":Y_pred_KNN}
Y_pre = pd.DataFrame(output_csv)
Y_pre.set_index("PassengerId", drop=True, append=False, inplace=True)
Y_pre.to_csv("/kaggle/working/submission.csv")
# + [markdown] papermill={"duration": 0.062696, "end_time": "2021-07-31T00:08:40.354836", "exception": false, "start_time": "2021-07-31T00:08:40.292140", "status": "completed"} tags=[]
# ### 7- MLPClassifier (77.5%)
# + papermill={"duration": 0.091404, "end_time": "2021-07-31T00:08:40.509083", "exception": false, "start_time": "2021-07-31T00:08:40.417679", "status": "completed"} tags=[]
clf = MLPClassifier(solver='lbfgs', alpha=1e-5, hidden_layer_sizes=(3, 2), random_state=1)
clf.fit(X_train, Y_train)
Y_pred_clf= clf.predict(X_test)
print("the score of prediction = ",round(clf.score(X_train, Y_train) * 100,2), "%")
# + [markdown] papermill={"duration": 0.063204, "end_time": "2021-07-31T00:08:40.635600", "exception": false, "start_time": "2021-07-31T00:08:40.572396", "status": "completed"} tags=[]
# ### 8- GaussianNB (74.6%)
# + papermill={"duration": 0.079824, "end_time": "2021-07-31T00:08:40.778903", "exception": false, "start_time": "2021-07-31T00:08:40.699079", "status": "completed"} tags=[]
gaussian = GaussianNB()
gaussian.fit(X_train, Y_train)
Y_pred_gaussian = gaussian.predict(X_test)
print("the train score for Gaussian = ", round(gaussian.score(X_train, Y_train) * 100, 2), "%")
# + [markdown] papermill={"duration": 0.064504, "end_time": "2021-07-31T00:08:40.906764", "exception": false, "start_time": "2021-07-31T00:08:40.842260", "status": "completed"} tags=[]
# ### 9- Perceptron (78.2%)
# + papermill={"duration": 0.081929, "end_time": "2021-07-31T00:08:41.052498", "exception": false, "start_time": "2021-07-31T00:08:40.970569", "status": "completed"} tags=[]
perceptron = Perceptron()
perceptron.fit(X_train, Y_train)
Y_pred_perceptron = perceptron.predict(X_test)
print("the train score for Perceptron = ",round(perceptron.score(X_train, Y_train) * 100, 2), "%")
# + papermill={"duration": 0.071001, "end_time": "2021-07-31T00:08:41.188024", "exception": false, "start_time": "2021-07-31T00:08:41.117023", "status": "completed"} tags=[]
# # !kaggle competitions download -c titanic
# # !kaggle competitions submit -c titanic -f /kaggle/working/submission.csv -m "submission"
# + papermill={"duration": 0.081363, "end_time": "2021-07-31T00:08:41.333707", "exception": false, "start_time": "2021-07-31T00:08:41.252344", "status": "completed"} tags=[]
model = ["SGDClassifier", "Random Forest", "Logistic Regression", "Decision Tree", "SVM",
"KNeighbors", "MlPClassifier", "GaussianNB", "Perceptron"]
score = [66, 78, 76.6, 78, 76.55, 79.18, 77.5, 74.6, 78.2]
data_dict = {"models": model, "test_score": score}
data_score = pd.DataFrame(data_dict)
data_score.index = data_score.index + 1
data_score.sort_values("test_score",ascending=False)
# + papermill={"duration": 0.064177, "end_time": "2021-07-31T00:08:41.463256", "exception": false, "start_time": "2021-07-31T00:08:41.399079", "status": "completed"} tags=[]
| titanic-survival-prediction-with-ml.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.11 64-bit (''finderenv_new'': conda)'
# name: python3
# ---
# # Graph Neural Network for multiscale solver in the Traveling Salesman Problem
#
# This notebook is designed to predict the probability that each edge belongs to the optimized solution.
#
# In general, the input of the GNN consists of node coordinates ($n_i \in R^{2}$), edge values (distances), and target edges; the network itself stacks $30$ graph convolutional layers and a $3$-layer MLP, with hidden dimension $h=300$ for each layer.
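# A single message-passing step of such a graph convolution can be sketched in numpy: mean-aggregate neighbour features, project with a weight matrix, apply a nonlinearity. (The real model stacks 30 such layers with h=300 and learned weights; everything below is a toy illustration.)

```python
import numpy as np

rng = np.random.default_rng(0)
n, h = 5, 4                      # 5 nodes, toy hidden width (the model uses h=300)
A = np.ones((n, n)) - np.eye(n)  # complete graph, as in TSP instances
X = rng.normal(size=(n, h))      # node features (e.g. embedded coordinates)
W = rng.normal(size=(h, h))      # weight matrix (learned in the model, random here)

deg = A.sum(axis=1, keepdims=True)
X_next = np.maximum(((A @ X) / deg) @ W, 0.0)  # mean-aggregate, project, ReLU
print(X_next.shape)
```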
# +
import os
import numpy as np
import torch
print(torch.cuda.is_available())
from config import *
from utils.process import *
# -
# ## 1. Set the model configuration parameters
#
# The cell below sets a number of parameters that define the model configuration. These parameters are consumed by the `get_config()` function.
# +
notebook_mode = True
viz_mode = False
# model-parameter
# config_path = "configs/tsp20.json"
config_path = "configs/tsp20.json"
config = get_config(config_path)
# -
# ## 2. Build or load the model
#
# You will want to execute either of the two code cells in the subsequent two sub-sections, not both.
# ### 2.1 Create a new model
#
# If you want to create a new model, this is the relevant section for you. If you want to load a previously saved model, skip ahead to section 2.2.
if viz_mode==False:
# tsp20--model
net = main(config, pretrained=False, patience=50, lr_scale=0.001, random_neighbor=False)
# ### 2.2 Load a saved model
#
# If you have previously created and saved a model and would now like to load it, simply execute the next code cell.
if viz_mode==False:
# tsp20--model
net = main(config, pretrained=True, patience=50,
lr_scale=0.01, pretrained_path='./logs/tsp20/attgcn_preprocessor-2021-11-09 18:22:49/best_val_checkpoint.tar',
random_neighbor=False)
| attgcn_preprocessor/train-20.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # HTML Downloader and Preprocessor
import os
import re
import sys
import requests
import numpy as np
import parsel
from urllib.parse import urlparse
sys.path.insert(0, '..')
from autopager.htmlutils import get_every_button_and_a
from autopager.model import page_to_features
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
import ssl
WINDOW_SIZE = "1920,1080"
chrome_options = Options()
chrome_options.binary_location = "/usr/bin/google-chrome"
chrome_options.add_argument(f"--window-size={WINDOW_SIZE}")
chrome_options.add_argument('--headless')
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument('--disable-dev-shm-usage')
ssl._create_default_https_context = ssl._create_unverified_context
import time
DEFAULT_PROJECT_FOLDER = os.path.abspath('..')
DEFAULT_PREDICT_FOLDER = os.path.abspath('..') + '/predict_folder'
DEFAULT_MODEL_FOLDER = os.path.abspath('..') + '/models'
IS_CONTAIN_BUTTON = True
NB_TO_PY = True
SCROLL_PAUSE_TIME = 0.5
def _scroll_to_bottom(driver):
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
def _get_html_from_selenium(url):
    # Pass the options into Chrome; use executable_path to point at the driver binary if needed.
    browser = webdriver.Chrome(options=chrome_options)
    browser.implicitly_wait(5)
    browser.set_page_load_timeout(30)
    # Navigate the browser to the target URL.
    browser.get(url)
    _scroll_to_bottom(browser)
time.sleep(SCROLL_PAUSE_TIME)
html = browser.page_source
browser.quit()
return html
def generate_page_component(url):
html = _get_html_from_selenium(url)
url_obj = urlparse(url)
return {
"html": html,
"parseObj": url_obj,
}
def get_selectors_from_file(html):
sel = parsel.Selector(html)
links = get_every_button_and_a(sel)
xseq = page_to_features(links)
return xseq
if __name__ == '__main__':
# If NB_TO_PY is true, then convert this notebook to a .py file
if NB_TO_PY:
# !jupyter nbconvert --to script preprocessing.ipynb
else:
test_url = "https://kktix.com/events"
page = generate_page_component(test_url)
xseq = get_selectors_from_file(page["html"])
print(xseq[:5])
| autopager/preprocessing.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # SPY Spreadsheet
# +
# use future imports for python 3.x forward compatibility
from __future__ import print_function
from __future__ import unicode_literals
from __future__ import division
from __future__ import absolute_import
# other imports
import pandas as pd
import matplotlib.pyplot as plt
import datetime
from talib.abstract import *
# project imports
import pinkfish as pf
# format price data
pd.options.display.float_format = '{:0.2f}'.format
# %matplotlib inline
# +
import numpy as np
import matplotlib
import itable
# format price data
pd.options.display.float_format = '{:0,.2f}'.format
# set size of inline plots
matplotlib.rcParams['figure.figsize'] = [14, 10]
# -
# Some global data
symbol = 'SPY'
start = datetime.datetime(1900, 1, 1)
end = datetime.datetime.now()
# Fetch symbol data from internet; do not use local cache.
ts = pf.fetch_timeseries(symbol, use_cache=False)
ts.tail()
# Select timeseries between start and end. Back adjust prices relative to adj_close for dividends and splits.
ts = pf.select_tradeperiod(ts, start, end, use_adj=True)
ts.head()
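# pinkfish's `select_tradeperiod(use_adj=True)` handles the back adjustment for us. As a rough sketch of what back adjustment typically means (an illustration only, not pinkfish's actual implementation), each OHLC column is scaled by the dividend/split factor implied by `adj_close / close`:

```python
# Illustrative sketch of back adjustment; pinkfish's internals may differ.
import pandas as pd

def back_adjust(ts):
    # Scale each OHLC column by the factor implied by adj_close/close so the
    # whole series is expressed in back-adjusted prices.
    factor = ts['adj_close'] / ts['close']
    out = ts.copy()
    for col in ['open', 'high', 'low', 'close']:
        out[col] = ts[col] * factor
    return out
```

After adjustment, `close` coincides with `adj_close` and the other columns keep their relative shape on each bar.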
# Add technical indicator: 200 day MA
sma200 = SMA(ts, timeperiod=200)
ts['sma200'] = sma200
ts.tail()
# Add technical indicator: ATR
atr = ATR(ts, timeperiod=14)
ts['atr'] = atr
ts.tail()
# Add technical indicator: 5 day high, and 5 day low
high5 = pd.Series(ts.high).rolling(window=5).max()
low5 = pd.Series(ts.low).rolling(window=5).min()
ts['high5'] = high5
ts['low5'] = low5
ts.tail()
# Add technical indicator: RSI, and 2-period cumulative RSI
# +
rsi2 = RSI(ts, timeperiod=2)
ts['rsi2'] = rsi2
c2rsi2 = pd.Series(ts.rsi2).rolling(window=2).sum()
ts['c2rsi2'] = c2rsi2
# -
ts.tail()
# Add technical indicator: Midpoint
mp = (ts.high + ts.low)/2
ts['mp'] = mp
ts.tail()
# Add technical indicator: SMA10 of midpoint
sma10mp = pd.Series(ts.mp).rolling(window=10).mean()
ts['sma10mp'] = sma10mp
ts.head(10)
# Add technical indicator: Standard Deviation
sd = pd.Series(ts.mp).rolling(window=10).std()
ts['sd'] = sd
ts.tail()
upper = ts.sma10mp + ts.sd*2
lower = ts.sma10mp - ts.sd*2
ts['upper'] = upper
ts['lower'] = lower
ts.tail()
# Select a smaller time frame for use with itable
df = ts['2018-01-01':]
df.head()
# Use itable to format the spreadsheet. A new 5 day high gets a blue highlight; a new 5 day low gets a maroon highlight.
# +
pt = itable.PrettyTable(df, tstyle=itable.TableStyle(theme='theme1'), center=True, header_row=True, rpt_header=20)
pt.update_col_header_style(format_function=lambda x: x.upper(), text_align='right')
pt.update_row_header_style(format_function=lambda x: pd.to_datetime(str(x)).strftime('%Y/%m/%d'), text_align='right')
for col in range(pt.num_cols):
if pt.df.columns[col] == 'volume':
pt.update_cell_style(cols=[col], format_function=lambda x: format(x, '.0f'), text_align='right')
else:
pt.update_cell_style(cols=[col], format_function=lambda x: format(x, '.2f'), text_align='right')
for row in range(pt.num_rows):
if row == 0:
continue
if (pt.df['high5'][row] == pt.df['high'][row]) and \
(pt.df['high5'][row] > pt.df['high'][row-1]):
col = df.columns.get_loc('high5')
pt.update_cell_style(rows=[row], cols=[col], color='blue')
if (pt.df['low5'][row] == pt.df['low'][row]) and \
(pt.df['low5'][row] < pt.df['low'][row-1]):
col = df.columns.get_loc('low5')
pt.update_cell_style(rows=[row], cols=[col], color='maroon')
# -
pt
| examples/spreadsheet/spreadsheet.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# # %load train_model.py
import numpy as np
import os
from alexnet import alexnet
datafile_name = 'training_data-'
datafile_no = 1
prefix_name = '-balanced'
extension = '.npy'
cwd = os.getcwd()
for file_name in os.listdir(cwd):
print('{}-'.format(datafile_no) + file_name)
if file_name.startswith(datafile_name) and file_name.find('-balanced.npy') != -1:
print('{} exists, loading previous data!'.format(file_name))
datafile_no += 1
WIDTH = 160
HEIGHT = 120
LEARNING_RATE = 0.001
EPOCHS = datafile_no
MODELER = 'sikurity'
MODEL_NAME = 'Osori-SelfDrivingWithGTA5_{}_{}_{}-epochs-300K-data.model'.format(MODELER, LEARNING_RATE, EPOCHS)
model = alexnet(WIDTH, HEIGHT, LEARNING_RATE)
for epoch in range(1, EPOCHS):
train_data = np.load(datafile_name + str(epoch) + prefix_name + extension)
train = train_data[:-100]
test = train_data[-100:]
X = np.array([i[0] for i in train]).reshape(-1,WIDTH,HEIGHT,1)
Y = [i[1] for i in train]
test_x = np.array([i[0] for i in test]).reshape(-1,WIDTH,HEIGHT,1)
test_y = [i[1] for i in test]
model.fit({'input': X}, {'targets': Y}, n_epoch=1, validation_set=({'input': test_x}, {'targets': test_y}),
snapshot_step=500, show_metric=True, run_id=MODEL_NAME)
model.save(MODEL_NAME)
# tensorboard --logdir=./log
| train_model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Assignment 3 – Tabular Dataset Preparation
#
# This is Assignment 3 for the Introduction to Deep Learning with PyTorch course (www.leaky.ai). In this assignment you will practice preparing tabular datasets for training a neural network. You will practice applying normalization and standardization techniques. You will also use pandas to convert categorical inputs into numerical values.
#
# ### To Get Started:
#
# 1. Open up a web browser (preferably Chrome)
# 2. Copy the Project GitHub Link: https://github.com/LeakyAI/PyTorch-Overview
# 3. Head over to Google Colab (https://colab.research.google.com)
# 4. Load the notebook: Tabular Dataset Preparation - Start Here.ipynb
# 5. Replace the [TBD]'s with your own code
# 6. Execute the notebook after completing each cell and check your answers using the solution notebook
#
# Good Luck!
#
# ### Key Objectives:
# - Calculate the minimum and maximum values for each input and apply normalization
# - Apply standardization and compare the results
# - Replace categorical inputs with numerical values using one-hot encoding
# ## Part 1 - Standardization and Min Max Normalization
# Import PyTorch and set the seed for reproducible results
import torch
torch.set_printoptions(precision=3,sci_mode=False) # Tensor easier to read
# Create a PyTorch tensor with the following content:
# [[1,100,3,0.01,5000],[0,10,8,-0.002,0.01],[1,25,13,0.04,0.2],[1,45,18,-0.05,0.5]]
data = torch.tensor([[1,100,3,0.01,5000],[0,10,8,-0.002,0.01],[1,25,13,0.04,0.2],[1,45,18,-0.05,0.5]], dtype=torch.float)
print (data)
# #### Answer
# <pre>
# tensor([[ 1.000, 100.000, 3.000, 0.010, 5000.000],
# [ 0.000, 10.000, 8.000, -0.002, 0.010],
# [ 1.000, 25.000, 13.000, 0.040, 0.200],
# [ 1.000, 45.000, 18.000, -0.050, 0.500]])
# </pre>
# ## Normalize the Values
# Here you will apply normalization to the column values.
# Find the minimum and maximum value for each column
# Hint: Make sure you use axis = 0 when calling min and max as we
# want to apply the function calls to the columns (not entire tensor)
maximums = data.max(axis=0)
minimums = data.min(axis=0)
print (f"Max Values: {maximums.values}")
print (f"Min Values: {minimums.values}")
# #### Answer
# <pre>
# Max Values: tensor([ 1.000, 100.000, 18.000, 0.040, 5000.000])
# Min Values: tensor([ 0.000, 10.000, 3.000, -0.050, 0.010])
# </pre>
# Applying normalization to each input
# Use the formula x = (x-min)/(max-min)
dataNormalized = (data - minimums.values) / (maximums.values - minimums.values)
print (dataNormalized)
# #### Answer
# <pre>
# tensor([[ 1.000, 1.000, 0.000, 0.667, 1.000],
# [ 0.000, 0.000, 0.333, 0.533, 0.000],
# [ 1.000, 0.167, 0.667, 1.000, 0.000],
# [ 1.000, 0.389, 1.000, 0.000, 0.000]])
# </pre>
# ### Question
# What observations can be made about using normalization? Does normalization work well in all cases? How about the last column?
# ### Your Answer
# Normalization ensures all our inputs are in the range of 0..1. However, when the data contains outliers, as with the last column, normalization is not ideal as the smaller values will all be close to 0.
# ## Standardize the Values
# Use the following formula:
# xStandardized = (x - xMean) / xStdDeviation
# +
# Calculate the mean and standard deviation of each column
dataMean = data.mean(axis = 0)
dataStDev = data.std(axis = 0)
print (f"Mean : {dataMean}")
print (f"St Dev: {dataStDev}")
# -
# #### Answer
# <pre>
# Mean : tensor([ 0.750, 45.000, 10.500, -0.001, 1250.177])
# St Dev: tensor([ 0.500, 39.370, 6.455, 0.037, 2499.882])
# </pre>
# Standardize the columns using the following formula:
# dataStandardized = (data - mean) / (standardDeviation)
# hint - make sure you use axis=0 as we want these operations
# conducted on the columns (not rows, not entire tensor)
dataStandardized = (data - dataMean) / (dataStDev)
print (dataStandardized)
# #### Answer
# <pre>
# tensor([[ 0.500, 1.397, -1.162, 0.281, 1.500],
# [-1.500, -0.889, -0.387, -0.040, -0.500],
# [ 0.500, -0.508, 0.387, 1.082, -0.500],
# [ 0.500, 0.000, 1.162, -1.322, -0.500]])
# </pre>
# ### Question
# What observations can be made about using standardization? Does standardization work well in all cases? How about the last column?
# ### Your Answer
# Standardization resulted in a larger range (outside of 0..1) but also ensured that the outlier in the last column did not overly affect the other input values in the column. For columns with outliers, it is usually better to apply standardization as opposed to normalization.
#
# ## One-Hot Encoding
# Most tabular datasets contain categorical data. You will need to convert this type of data into numerical data before training. We will use the pandas library to automatically convert the categorical data into numerical values with the get_dummies function.
# Load a categorical dataset using Pandas
import pandas as pd
# !wget https://raw.githubusercontent.com/LeakyAI/PyTorch-Overview/main/cat_data_v1.csv
df = pd.read_csv('cat_data_v1.csv')
# Understand the shape of the data by displaying the value of shape
df.shape
# Show the first portion of the data using head()
df.head()
# Use the describe() function to better understand the data
# and look for missing values
df.describe()
# Drop rows that contain missing values using dropna()
df = df.dropna()
df.describe()
# Create one-hot encoded values for each column using
# the get_dummies function:
OneHot = pd.get_dummies(df, drop_first=True)
OneHot.head()
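# To make the `drop_first=True` behavior concrete, here is a tiny, hypothetical example (the `toy` frame below is illustrative, not part of this dataset): each categorical column with k levels becomes k-1 indicator columns, with the alphabetically first level dropped as the redundant baseline.

```python
# A tiny, hypothetical frame to show what get_dummies(drop_first=True) produces.
import pandas as pd

toy = pd.DataFrame({'color': ['red', 'green', 'blue', 'green']})
encoded = pd.get_dummies(toy, drop_first=True)
# Levels sort alphabetically (blue, green, red); drop_first removes 'blue',
# leaving k-1 = 2 indicator columns.
print(encoded.columns.tolist())  # ['color_green', 'color_red']
```

A row where both indicators are 0 is then unambiguously the dropped baseline level.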
# ### Key Takeaways:
# - You calculated the minimum and maximum values for each input and applied normalization
# - You then applied standardization and compared the results
# - You replaced categorical inputs with numerical values using one-hot encoding
| Tabular Dataset Preparation - Solution.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
df_movieItem = pd.read_csv('MovieItem.csv', delimiter=',',index_col=['id'])
df_movieReview = pd.read_csv('MovieReview.csv', delimiter=',',index_col=['id'])
# sort by index id(also known by rating)
df_movieItem = df_movieItem.sort_index(axis=0)
# overview
print(df_movieItem.describe())
# -
# stars analysis
star_list = []
for stars in df_movieItem['stars']:
star_list += [x.lstrip().replace('"','') for x in stars[1:-1].replace('\'','').split(',')]
# remove duplicates
star_list = list(set(star_list))
# create a dataframe for output
df_star = pd.DataFrame(columns=['stars','avg_rating','num_movie'])
df_star['stars'] = star_list
for index,star in enumerate(df_star['stars']):
mask = df_movieItem['stars'].str.contains(star, regex=False)
df_star.loc[index, 'num_movie'] = len(df_movieItem[mask])
df_star.loc[index, 'avg_rating'] = pd.to_numeric(df_movieItem[mask]['rating'].str[2:-2]).sum(axis=0)/df_star.loc[index, 'num_movie']
df_star.sort_values(['num_movie'],ascending=False).head(10)
# visualize the distribution of movie counts per star
import matplotlib.pyplot as plt
plt.hist(df_star['num_movie'],bins=20)
plt.show()
| .ipynb_checkpoints/Moive_Analysis-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Basis Pursuit Denoising via ADMM
#
# We are given the measurements $b = Ax + e$ where $e$ is a measurement noise.
#
# We solve the problem:
#
# $$
# \tag{L1/L2}
# {\min}_{x} \| x\|_{1} + \frac{1}{2\rho}\| A x - b \|_2^2
# $$
#
# We will work with a sensing matrix $A$ of size $M \times N$ which consists of orthonormal rows.
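# Before using the yall1 ADMM solver below, here is a minimal proximal-gradient (ISTA) sketch of the same (L1/L2) problem in plain NumPy; it is a simpler (and slower) alternative to ADMM, shown only to make the objective concrete. Because the rows of $A$ are orthonormal, $\|A^T A\| = 1$, so a step size of $\rho$ gives the update $x \leftarrow \mathrm{soft}(x - A^T(Ax - b), \rho)$.

```python
# A sketch only: plain-NumPy ISTA for min ||x||_1 + (1/(2*rho))||Ax - b||_2^2.
# This is not the yall1/ADMM solver used in this notebook.
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_l1l2(A, b, rho, iters=500):
    # The smooth part has Lipschitz constant ||A^T A|| / rho = 1/rho for
    # orthonormal rows, so a step size of rho is admissible.
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x - A.T @ (A @ x - b), rho)
    return x
```

On a small noiseless instance this recovers a K-sparse vector up to the O(rho) soft-thresholding bias.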
# %load_ext autoreload
# %autoreload 2
from jax.config import config
config.update("jax_enable_x64", True)
from jax import jit, random
import jax.numpy as jnp
import numpy as np
np.set_printoptions(precision=6)
from jax.numpy.linalg import norm
import matplotlib as mpl
import matplotlib.pyplot as plt
# %matplotlib inline
import cr.sparse as crs
import cr.sparse.dict as crdict
import cr.sparse.data as crdata
from cr.sparse import lop
from cr.sparse.cvx.adm import yall1
# Problem size
M = 2000
N = 20000
K = 200
# Dictionary Setup
A = crdict.random_orthonormal_rows(crs.KEYS[0],M, N)
fig=plt.figure(figsize=(8,6), dpi= 100, facecolor='w', edgecolor='k')
plt.imshow(A, extent=[0, 2, 0, 1])
plt.gray()
plt.colorbar()
plt.title(r'$A$');
x, omega = crdata.sparse_biuniform_representations(crs.KEYS[1], 1, 4, N, K)
fig=plt.figure(figsize=(8,6), dpi= 100, facecolor='w', edgecolor='k')
plt.stem(x, markerfmt='.');
# Convert A into a linear operator
T = lop.real_matrix(A)
T = lop.jit(T)
# Compute the measurements
b0 = T.times(x)
# Generate some Gaussian noise
sigma = 0.01
noise = sigma * random.normal(crs.KEYS[2], (M,))
# Measure the SNR
crs.snr(b0, noise)
# Add measurement noise
b = b0 + noise
fig=plt.figure(figsize=(8,6), dpi= 100, facecolor='w', edgecolor='k')
plt.stem(b, markerfmt='.');
# Solve the BPDN problem
sol = yall1.solve(T, b, rho=sigma)
int(sol.iterations), int(sol.n_times), int(sol.n_trans)
norm(sol.x-x)/norm(x)
# The support of K largest non-zero entries in sol.x
omega_rec = crs.largest_indices(sol.x, K)
common = jnp.intersect1d(omega, omega_rec)
total = jnp.union1d(omega, omega_rec)
support_overlap_ratio = len(common) / len(total)
print(support_overlap_ratio)
fig=plt.figure(figsize=(8,7), dpi= 100, facecolor='w', edgecolor='k')
plt.subplot(211)
plt.title('original')
plt.stem(x, markerfmt='.', linefmt='gray');
plt.subplot(212)
plt.stem(sol.x, markerfmt='.');
plt.title('reconstruction');
# %timeit yall1.solve(T, b, rho=sigma).x.block_until_ready()
1572 / 273
| modules/l1/yall1/examples/basis_pursuit-denoising.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + pycharm={"name": "#%%\n"}
# !pip install git+https://github.com/huggingface/transformers.git
# +
from transformers import DistilBertTokenizerFast
from transformers import TFDistilBertForSequenceClassification
import tensorflow as tf
import json
# + pycharm={"name": "#%%\n"}
#### Import data and prepare data
# + pycharm={"name": "#%%\n"}
# !wget --no-check-certificate \
# https://storage.googleapis.com/laurencemoroney-blog.appspot.com/sarcasm.json \
# -O /tmp/sarcasm.json
# + pycharm={"name": "#%%\n"}
training_size = 20000
with open("/tmp/sarcasm.json", 'r') as f:
datastore = json.load(f)
sentences = []
labels = []
urls = []
for item in datastore:
sentences.append(item['headline'])
labels.append(item['is_sarcastic'])
training_sentences = sentences[0:training_size]
validation_sentences = sentences[training_size:]
training_labels = labels[0:training_size]
validation_labels = labels[training_size:]
# + pycharm={"name": "#%%\n"}
print(len(training_sentences))
print(len(validation_sentences))
# + pycharm={"name": "#%%\n"}
#### Setup BERT and run training
# + pycharm={"name": "#%%\n"}
tokenizer = DistilBertTokenizerFast.from_pretrained('distilbert-base-uncased')
# + pycharm={"name": "#%%\n"}
train_encodings = tokenizer(training_sentences,
truncation=True,
padding=True)
val_encodings = tokenizer(validation_sentences,
truncation=True,
padding=True)
# + pycharm={"name": "#%%\n"}
train_dataset = tf.data.Dataset.from_tensor_slices((
dict(train_encodings),
training_labels
))
val_dataset = tf.data.Dataset.from_tensor_slices((
dict(val_encodings),
validation_labels
))
# + pycharm={"name": "#%%\n"}
# We classify two labels in this example. In case of multiclass classification, adjust num_labels value
model = TFDistilBertForSequenceClassification.from_pretrained('distilbert-base-uncased',
num_labels=2)
# + pycharm={"name": "#%%\n"}
optimizer = tf.keras.optimizers.Adam(learning_rate=5e-5)
model.compile(optimizer=optimizer, loss=model.compute_loss, metrics=['accuracy'])
model.fit(train_dataset.shuffle(100).batch(16),
epochs=3,
batch_size=16,
validation_data=val_dataset.shuffle(100).batch(16))
# + pycharm={"name": "#%%\n"}
model.save_pretrained("/tmp/sentiment_custom_model")
# + pycharm={"name": "#%%\n"}
#### Load saved model and run predict function
# + pycharm={"name": "#%%\n"}
loaded_model = TFDistilBertForSequenceClassification.from_pretrained("/tmp/sentiment_custom_model")
# + pycharm={"name": "#%%\n"}
test_sentence = "With their homes in ashes, residents share harrowing tales of survival after massive wildfires kill 15"
test_sentence_sarcasm = "News anchor hits back at viewer who sent her snarky note about ‘showing too much cleavage’ during broadcast"
# replace to test_sentence_sarcasm variable, if you want to test sarcasm
predict_input = tokenizer.encode(test_sentence,
truncation=True,
padding=True,
return_tensors="tf")
tf_output = loaded_model.predict(predict_input)[0]
print(tf_output)
# + pycharm={"name": "#%%\n"}
tf_prediction = tf.nn.softmax(tf_output, axis=1).numpy()[0]
print(tf_prediction)
# + pycharm={"name": "#%%\n"}
| 02/sentiment-fine-tuning-huggingface.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="aKwi1_4l0wev"
# # Adding a Dataset of Your Own to TFDS
# + colab={} colab_type="code" id="w9nZyRcLhtiX"
import os
import textwrap
import scipy.io
import pandas as pd
from os import getcwd
# + [markdown] colab_type="text" id="wooh61rn2FvF"
# ## IMDB Faces Dataset
#
# This is the largest publicly available dataset of face images with gender and age labels for training.
#
# Source: https://data.vision.ee.ethz.ch/cvl/rrothe/imdb-wiki/
#
# The IMDb Faces dataset provides a separate .mat file which can be loaded with Matlab containing all the meta information. The format is as follows:
# **dob**: date of birth (Matlab serial date number)
# **photo_taken**: year when the photo was taken
# **full_path**: path to file
# **gender**: 0 for female and 1 for male, NaN if unknown
# **name**: name of the celebrity
# **face_location**: location of the face (bounding box)
# **face_score**: detector score (the higher the better). Inf implies that no face was found in the image and the face_location then just returns the entire image
# **second_face_score**: detector score of the face with the second highest score. This is useful to ignore images with more than one face. second_face_score is NaN if no second face was detected.
# **celeb_names**: list of all celebrity names
# **celeb_id**: index of celebrity name
# -
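# The `dob` field is a Matlab serial date number (days, possibly fractional, counted from year 0). A small helper to convert it to a Python `datetime` (the 366-day shift accounts for Matlab counting from year 0 while Python ordinals start at year 1):

```python
# Convert a Matlab serial date number (as in the dob field) to a datetime.
from datetime import datetime, timedelta

def matlab_datenum_to_datetime(datenum):
    # Python ordinals count from year 1, Matlab from year 0: hence the 366-day shift.
    days = int(datenum)
    frac = datenum - days
    return datetime.fromordinal(days) - timedelta(days=366) + timedelta(days=frac)

print(matlab_datenum_to_datetime(730486))  # 2000-01-01 00:00:00
```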
# Next, let's inspect the dataset
# + [markdown] colab_type="text" id="uspGC84pWmjR"
# ## Exploring the Data
# + colab={} colab_type="code" id="sp7bUzZr3ZUQ"
# Inspect the directory structure
imdb_crop_file_path = f"{getcwd()}/../tmp2/imdb_crop"
files = os.listdir(imdb_crop_file_path)
print(textwrap.fill(' '.join(sorted(files)), 80))
# + colab={} colab_type="code" id="1aPlCn9E2PMj"
# Inspect the meta data
imdb_mat_file_path = f"{getcwd()}/../tmp2/imdb_crop/imdb.mat"
meta = scipy.io.loadmat(imdb_mat_file_path)
# + colab={} colab_type="code" id="aFj-jsz-6z-I"
meta
# + [markdown] colab_type="text" id="rnPmrXJ9XAkK"
# ## Extraction
# + [markdown] colab_type="text" id="zOBtgW6U_VgP"
# Let's clear up the clutter by going into the metadata's most useful key (imdb) and exploring all the other keys inside it
# + colab={} colab_type="code" id="fgrZJWOA7RVa"
root = meta['imdb'][0, 0]
# + colab={} colab_type="code" id="BqqaBw6Y7tku"
desc = root.dtype.descr
desc
# + colab={} colab_type="code" id="s3WJXw4G2cPk"
# EXERCISE: Fill in the missing code below.
full_path = root["full_path"][0]
# Do the same for other attributes
names = root["name"][0] # YOUR CODE HERE
dob = root["dob"][0] # YOUR CODE HERE
gender = root["gender"][0] # YOUR CODE HERE
photo_taken = root["photo_taken"][0] # YOUR CODE HERE
face_score = root["face_score"][0] # YOUR CODE HERE
face_locations = root["face_location"][0] # YOUR CODE HERE
second_face_score = root["second_face_score"][0] # YOUR CODE HERE
celeb_names = root["celeb_names"][0] # YOUR CODE HERE
celeb_ids = root["celeb_id"][0] # YOUR CODE HERE
print('Filepaths: {}\n\n'
'Names: {}\n\n'
'Dates of birth: {}\n\n'
'Genders: {}\n\n'
'Years when the photos were taken: {}\n\n'
'Face scores: {}\n\n'
'Face locations: {}\n\n'
'Second face scores: {}\n\n'
'Celeb IDs: {}\n\n'
.format(full_path, names, dob, gender, photo_taken, face_score, face_locations, second_face_score, celeb_ids))
# + colab={} colab_type="code" id="zjKXJU1yEnMb"
print('Celeb names: {}\n\n'.format(celeb_names))
# + [markdown] colab_type="text" id="TT0un3eFXNW-"
# Display all the distinct keys and their corresponding values
# + colab={} colab_type="code" id="rYb98AUtC_fA"
names = [x[0] for x in desc]
names
# + colab={} colab_type="code" id="xJJ9j56hDvnN"
values = {key: root[key][0] for key in names}
values
# + [markdown] colab_type="text" id="lYob5mjgXpuy"
# ## Cleanup
# + [markdown] colab_type="text" id="3YRjp2gpXbRA"
# Pop out the celeb names as they are not relevant for creating the records.
# + colab={} colab_type="code" id="VRi5bcqnFBua"
del values['celeb_names']
names.pop(names.index('celeb_names'))
# + [markdown] colab_type="text" id="V2uhpASzXhuy"
# Let's see how many values are present in each key
# + colab={} colab_type="code" id="4Zu_L_QFEPEm"
for key, value in values.items():
print(key, len(value))
# + [markdown] colab_type="text" id="uJUvw-MBXuKb"
# ## Dataframe
# + [markdown] colab_type="text" id="2_uZu2ZQ_169"
# Now, let's try examining one example from the dataset. To do this, let's load all the attributes that we've extracted just now into a Pandas dataframe
# + colab={} colab_type="code" id="x-O0pLwWAREq"
df = pd.DataFrame(values, columns=names)
df.head()
# + [markdown] colab_type="text" id="w-wdFD8uIyRf"
# The Pandas dataframe may contain some Null values or nan. We will have to filter them later on.
# + colab={} colab_type="code" id="YGsTHc2VIoJh"
df.isna().sum()
# + [markdown] colab_type="text" id="DS-9rLTR065l"
# # TensorFlow Datasets
#
# TFDS provides a way to transform all those datasets into a standard format, do the preprocessing necessary to make them ready for a machine learning pipeline, and provides a standard input pipeline using `tf.data`.
#
# To enable this, each dataset implements a subclass of `DatasetBuilder`, which specifies:
#
# * Where the data is coming from (i.e. its URL).
# * What the dataset looks like (i.e. its features).
# * How the data should be split (e.g. TRAIN and TEST).
# * The individual records in the dataset.
#
# The first time a dataset is used, the dataset is downloaded, prepared, and written to disk in a standard format. Subsequent access will read from those pre-processed files directly.
# + [markdown] colab_type="text" id="6bGCSA-jX0Uw"
# ## Clone the TFDS Repository
#
# The next step will be to clone the GitHub TFDS Repository. For this particular notebook, we will clone a particular version of the repository. You can clone the repository by running the following command:
#
# ```
# # # !git clone https://github.com/tensorflow/datasets.git -b v1.2.0
# ```
#
# However, for simplicity, we have already cloned this repository for you and placed the files locally. Therefore, there is no need to run the above command if you are running this notebook in the Coursera environment.
#
# Next, we set the current working directory to `/datasets/`.
# + colab={} colab_type="code" id="KhYXnLCf5F-Y"
# cd datasets
# + [markdown] colab_type="text" id="6Fct97VEYxlT"
# If you want to contribute to TFDS' repo and add a new dataset, you can use the following script to help you generate a template of the required python file. To use it, you must first clone the tfds repository and then run the following command:
# + colab={} colab_type="code" id="wZ3psFN65G9u" language="bash"
#
# python tensorflow_datasets/scripts/create_new_dataset.py \
# --dataset my_dataset \
# --type image
# + [markdown] colab_type="text" id="a5UbwBVRTmb2"
# If you wish to see the template generated by the `create_new_dataset.py` file, navigate to the folder indicated in the above cell output. Then go to the `/image/` folder and look for a file called `my_dataset.py`. Feel free to open the file and inspect it. You will see a template with placeholders, indicated with the word `TODO`, where you have to fill in the information.
#
# Now we will use IPython's `%%writefile` built-in magic command to write whatever is in the current cell into a file. To create or overwrite a file you can use:
# ```
# # # %%writefile filename
# ```
#
# Let's see an example:
# + colab={} colab_type="code" id="qkspG9KV7X7i"
# %%writefile something.py
x = 10
# + [markdown] colab_type="text" id="TQ--c2h0K6R1"
# Now that the file has been written, let's inspect its contents.
# + colab={} colab_type="code" id="VqBEa9UrK4-Z"
# !cat something.py
# + [markdown] colab_type="text" id="UJT2Mh-bYmYa"
# ## Define the Dataset with `GeneratorBasedBuilder`
#
# Most datasets subclass `tfds.core.GeneratorBasedBuilder`, which is a subclass of `tfds.core.DatasetBuilder` that simplifies defining a dataset. It works well for datasets that can be generated on a single machine. Its subclasses implement:
#
# * `_info`: builds the DatasetInfo object describing the dataset
#
#
# * `_split_generators`: downloads the source data and defines the dataset splits
#
#
# * `_generate_examples`: yields (key, example) tuples in the dataset from the source data
#
# In this exercise, you will use the `GeneratorBasedBuilder`.
#
# ### EXERCISE: Fill in the missing code below.
# + colab={} colab_type="code" id="cYyTvIoO7FqS"
# %%writefile tensorflow_datasets/image/imdb_faces.py
# coding=utf-8
# Copyright 2019 The TensorFlow Datasets Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""IMDB Faces dataset."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import collections
import os
import re
import tensorflow as tf
import tensorflow_datasets.public_api as tfds
_DESCRIPTION = """\
Since the publicly available face image datasets are often of small to medium size, rarely exceeding tens of thousands of images, and often without age information we decided to collect a large dataset of celebrities. For this purpose, we took the list of the most popular 100,000 actors as listed on the IMDb website and (automatically) crawled from their profiles date of birth, name, gender and all images related to that person. Additionally we crawled all profile images from pages of people from Wikipedia with the same meta information. We removed the images without timestamp (the date when the photo was taken). Assuming that the images with single faces are likely to show the actor and that the timestamp and date of birth are correct, we were able to assign to each such image the biological (real) age. Of course, we can not vouch for the accuracy of the assigned age information. Besides wrong timestamps, many images are stills from movies - movies that can have extended production times. In total we obtained 460,723 face images from 20,284 celebrities from IMDb and 62,328 from Wikipedia, thus 523,051 in total.
# As some of the images (especially from IMDb) contain several people we only use the photos where the second strongest face detection is below a threshold. For the network to be equally discriminative for all ages, we equalize the age distribution for training. For more details please see the paper.
"""
_URL = ("https://data.vision.ee.ethz.ch/cvl/rrothe/imdb-wiki/")
_DATASET_ROOT_DIR = 'imdb_crop' # Put the name of the dataset root directory here
_ANNOTATION_FILE = 'imdb.mat' # Put the name of annotation file here (.mat file)
_CITATION = """\
@article{Rothe-IJCV-2016,
author = {<NAME> and <NAME> and <NAME>},
title = {Deep expectation of real and apparent age from a single image without facial landmarks},
journal = {International Journal of Computer Vision},
volume={126},
number={2-4},
pages={144--157},
year={2018},
publisher={Springer}
}
@InProceedings{Rothe-ICCVW-2015,
author = {<NAME> and <NAME> and <NAME>},
title = {DEX: Deep EXpectation of apparent age from a single image},
booktitle = {IEEE International Conference on Computer Vision Workshops (ICCVW)},
year = {2015},
month = {December},
}
"""
# Source URL of the IMDB faces dataset
_TARBALL_URL = "https://data.vision.ee.ethz.ch/cvl/rrothe/imdb-wiki/static/imdb_crop.tar"
class ImdbFaces(tfds.core.GeneratorBasedBuilder):
"""IMDB Faces dataset."""
VERSION = tfds.core.Version("0.1.0")
def _info(self):
return tfds.core.DatasetInfo(
builder=self,
description=_DESCRIPTION,
# Describe the features of the dataset by following this url
# https://www.tensorflow.org/datasets/api_docs/python/tfds/features
features=tfds.features.FeaturesDict({
"image": tfds.features.Image(),# Create a tfds Image feature here
"gender": tfds.features.ClassLabel(num_classes=2), # Create a tfds Class Label feature here for the two classes (Female, Male)
"dob": tf.int32, # YOUR CODE HERE
"photo_taken": tf.int32, # YOUR CODE HERE
"face_location": tfds.features.BBoxFeature(), # Create a tfds Bounding box feature here
"face_score": tf.float32, # YOUR CODE HERE
"second_face_score": tf.float32, # YOUR CODE HERE
"celeb_id": tf.int32 # YOUR CODE HERE
}),
supervised_keys=("image", "gender"),
urls=[_URL],
citation=_CITATION)
def _split_generators(self, dl_manager):
# Download the dataset and then extract it.
extracted_path = dl_manager.download_and_extract([_TARBALL_URL])
# Parsing the mat file which contains the list of train images
def parse_mat_file(file_name):
with tf.io.gfile.GFile(file_name, "rb") as f:
# Add a lazy import for scipy.io and import the loadmat method to
# load the annotation file
dataset = tfds.core.lazy_imports.scipy.io.loadmat(file_name)['imdb'] # YOUR CODE HERE
return dataset
# Parsing the mat file by using scipy's loadmat method
# Pass the path to the annotation file using the downloaded/extracted paths above
meta = parse_mat_file(os.path.join(extracted_path[0], _DATASET_ROOT_DIR, _ANNOTATION_FILE))
# Get the names of celebrities from the metadata
celeb_names = meta[0, 0]['celeb_names'][0] # YOUR CODE HERE
# Create tuples out of the distinct set of genders and celeb names
self.info.features['gender'].names = ('Female', 'Male')# YOUR CODE HERE
self.info.features['celeb_id'].names = tuple([x[0] for x in celeb_names]) # YOUR CODE HERE
return [
tfds.core.SplitGenerator(
name=tfds.Split.TRAIN,
gen_kwargs={
"image_dir": extracted_path[0],
"metadata": meta,
})
]
def _get_bounding_box_values(self, bbox_annotations, img_width, img_height):
"""Function to get normalized bounding box values.
Args:
bbox_annotations: list of bbox values in kitti format
img_width: image width
img_height: image height
Returns:
Normalized bounding box xmin, ymin, xmax, ymax values
"""
ymin = bbox_annotations[0] / img_height
xmin = bbox_annotations[1] / img_width
ymax = bbox_annotations[2] / img_height
xmax = bbox_annotations[3] / img_width
return ymin, xmin, ymax, xmax
def _get_image_shape(self, image_path):
image = tf.io.read_file(image_path)
image = tf.image.decode_image(image, channels=3)
shape = image.shape[:2]
return shape
def _generate_examples(self, image_dir, metadata):
# Add a lazy import for pandas here (pd)
pd = tfds.core.lazy_imports.pandas # YOUR CODE HERE
# Extract the root dictionary from the metadata so that you can query all the keys inside it
root = metadata[0, 0]
"""Extract image names, dobs, genders,
face locations,
year when the photos were taken,
face scores (second face score too),
celeb ids
"""
image_names = root["full_path"][0]
# Do the same for other attributes (dob, genders etc)
dobs = root["dob"][0] # YOUR CODE HERE
genders = root["gender"][0] # YOUR CODE HERE
face_locations = root["face_location"][0] # YOUR CODE HERE
photo_taken_years = root["photo_taken"][0] # YOUR CODE HERE
face_scores = root["face_score"][0] # YOUR CODE HERE
second_face_scores = root["second_face_score"][0] # YOUR CODE HERE
celeb_id = root["celeb_id"][0] # YOUR CODE HERE
# Now create a dataframe out of all the features like you've seen before
df = pd.DataFrame(
list(zip(
image_names,
dobs,
genders,
face_locations,
photo_taken_years,
face_scores,
second_face_scores,
celeb_id
)),
columns = ['image_names', 'dobs', 'genders', 'face_locations', 'photo_taken_years', 'face_scores', 'second_face_scores', 'celeb_ids']
)
# Filter dataframe by only having the rows with face_scores > 1.0
df = df[df['face_scores'] > 1.0] # YOUR CODE HERE
# Remove any records that contain Nulls/NaNs by checking for NaN with .isna()
df = df[~df['genders'].isna()]
df = df[~df['second_face_scores'].isna()] # YOUR CODE HERE
# Cast genders to integers so that mapping can take place
df.genders = df.genders.astype(int) # YOUR CODE HERE
# Iterate over all the rows in the dataframe and map each feature
for _, row in df.iterrows():
# Extract filename, gender, dob, photo_taken,
# face_score, second_face_score and celeb_id
filename = os.path.join(image_dir, _DATASET_ROOT_DIR, row['image_names'][0])
gender = row['genders']
dob = row['dobs']
photo_taken = row['photo_taken_years']
face_score = row['face_scores']
second_face_score = row['second_face_scores']
celeb_id = row['celeb_ids']
# Get the image shape (decode_image returns height first, then width)
image_height, image_width = self._get_image_shape(filename)
# Normalize the bounding boxes by using the face coordinates and the image shape
bbox = self._get_bounding_box_values(row['face_locations'][0],
image_width, image_height)
# Yield a feature dictionary
yield filename, {
"image": filename,
"gender": gender,
"dob": dob,
"photo_taken": photo_taken,
"face_location": tfds.features.BBox(
ymin=min(bbox[0], 1.0),
xmin=min(bbox[1], 1.0),
ymax=min(bbox[2], 1.0),
xmax=min(bbox[3], 1.0)
), # Create a bounding box (BBox) object out of the coordinates extracted
"face_score": face_score,
"second_face_score": second_face_score,
"celeb_id": celeb_id
}
# + [markdown] colab_type="text" id="7Lu65xXYZC8m"
# ## Add an Import for Registration
#
# All subclasses of `tfds.core.DatasetBuilder` are automatically registered when their module is imported such that they can be accessed through `tfds.builder` and `tfds.load`.
#
# If you're contributing the dataset to `tensorflow/datasets`, you must add the module import to its subdirectory's `__init__.py` (e.g. `image/__init__.py`), as shown below:
# + colab={} colab_type="code" id="pKC49eVJXJLe"
# %%writefile tensorflow_datasets/image/__init__.py
# coding=utf-8
# Copyright 2019 The TensorFlow Datasets Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Image datasets."""
from tensorflow_datasets.image.abstract_reasoning import AbstractReasoning
from tensorflow_datasets.image.aflw2k3d import Aflw2k3d
from tensorflow_datasets.image.bigearthnet import Bigearthnet
from tensorflow_datasets.image.binarized_mnist import BinarizedMNIST
from tensorflow_datasets.image.binary_alpha_digits import BinaryAlphaDigits
from tensorflow_datasets.image.caltech import Caltech101
from tensorflow_datasets.image.caltech_birds import CaltechBirds2010
from tensorflow_datasets.image.cats_vs_dogs import CatsVsDogs
from tensorflow_datasets.image.cbis_ddsm import CuratedBreastImagingDDSM
from tensorflow_datasets.image.celeba import CelebA
from tensorflow_datasets.image.celebahq import CelebAHq
from tensorflow_datasets.image.chexpert import Chexpert
from tensorflow_datasets.image.cifar import Cifar10
from tensorflow_datasets.image.cifar import Cifar100
from tensorflow_datasets.image.cifar10_corrupted import Cifar10Corrupted
from tensorflow_datasets.image.clevr import CLEVR
from tensorflow_datasets.image.coco import Coco
from tensorflow_datasets.image.coco2014_legacy import Coco2014
from tensorflow_datasets.image.coil100 import Coil100
from tensorflow_datasets.image.colorectal_histology import ColorectalHistology
from tensorflow_datasets.image.colorectal_histology import ColorectalHistologyLarge
from tensorflow_datasets.image.cycle_gan import CycleGAN
from tensorflow_datasets.image.deep_weeds import DeepWeeds
from tensorflow_datasets.image.diabetic_retinopathy_detection import DiabeticRetinopathyDetection
from tensorflow_datasets.image.downsampled_imagenet import DownsampledImagenet
from tensorflow_datasets.image.dsprites import Dsprites
from tensorflow_datasets.image.dtd import Dtd
from tensorflow_datasets.image.eurosat import Eurosat
from tensorflow_datasets.image.flowers import TFFlowers
from tensorflow_datasets.image.food101 import Food101
from tensorflow_datasets.image.horses_or_humans import HorsesOrHumans
from tensorflow_datasets.image.image_folder import ImageLabelFolder
from tensorflow_datasets.image.imagenet import Imagenet2012
from tensorflow_datasets.image.imagenet2012_corrupted import Imagenet2012Corrupted
from tensorflow_datasets.image.kitti import Kitti
from tensorflow_datasets.image.lfw import LFW
from tensorflow_datasets.image.lsun import Lsun
from tensorflow_datasets.image.mnist import EMNIST
from tensorflow_datasets.image.mnist import FashionMNIST
from tensorflow_datasets.image.mnist import KMNIST
from tensorflow_datasets.image.mnist import MNIST
from tensorflow_datasets.image.mnist_corrupted import MNISTCorrupted
from tensorflow_datasets.image.omniglot import Omniglot
from tensorflow_datasets.image.open_images import OpenImagesV4
from tensorflow_datasets.image.oxford_flowers102 import OxfordFlowers102
from tensorflow_datasets.image.oxford_iiit_pet import OxfordIIITPet
from tensorflow_datasets.image.patch_camelyon import PatchCamelyon
from tensorflow_datasets.image.pet_finder import PetFinder
from tensorflow_datasets.image.quickdraw import QuickdrawBitmap
from tensorflow_datasets.image.resisc45 import Resisc45
from tensorflow_datasets.image.rock_paper_scissors import RockPaperScissors
from tensorflow_datasets.image.scene_parse_150 import SceneParse150
from tensorflow_datasets.image.shapes3d import Shapes3d
from tensorflow_datasets.image.smallnorb import Smallnorb
from tensorflow_datasets.image.so2sat import So2sat
from tensorflow_datasets.image.stanford_dogs import StanfordDogs
from tensorflow_datasets.image.stanford_online_products import StanfordOnlineProducts
from tensorflow_datasets.image.sun import Sun397
from tensorflow_datasets.image.svhn import SvhnCropped
from tensorflow_datasets.image.uc_merced import UcMerced
from tensorflow_datasets.image.visual_domain_decathlon import VisualDomainDecathlon
# EXERCISE: Import your dataset module here
# YOUR CODE HERE
from tensorflow_datasets.image.imdb_faces import ImdbFaces
# + [markdown] colab_type="text" id="QYmgS2SrYXtP"
# ## URL Checksums
#
# If you're contributing the dataset to `tensorflow/datasets`, add a checksums file for the dataset. On first download, the DownloadManager will automatically add the sizes and checksums for all downloaded URLs to that file. This ensures that on subsequent data generation, the downloaded files are as expected.
# + colab={} colab_type="code" id="cvrp-iHuYG_e"
# !touch tensorflow_datasets/url_checksums/imdb_faces.txt
# + [markdown] colab_type="text" id="JwnUAn49U-U8"
# ## Build the Dataset
# + colab={} colab_type="code" id="Y8uKiqWrU_C0"
# EXERCISE: Fill in the name of your dataset.
# The name must be a string.
DATASET_NAME = "imdb_faces" # YOUR CODE HERE
# + [markdown] colab_type="text" id="S7evoTtpon7I"
# We then run the `download_and_prepare` script locally to build it, using the following command:
#
# ```
# # %%bash -s $DATASET_NAME
# python -m tensorflow_datasets.scripts.download_and_prepare \
# --register_checksums \
# --datasets=$1
# ```
#
# **NOTE:** It may take more than 30 minutes to download the dataset and then write all the preprocessed files as TFRecords. Due to the enormous size of the data involved, we are unable to run the above script in the Coursera environment.
# + [markdown] colab_type="text" id="7hNPD2rraN5o"
# ## Load the Dataset
#
# Once the dataset is built you can load it in the usual way, by using `tfds.load`, as shown below:
#
# ```python
# import tensorflow_datasets as tfds
# dataset, info = tfds.load('imdb_faces', with_info=True)
# ```
#
# **Note:** Since we couldn't build the `imdb_faces` dataset due to its size, we are unable to run the above code in the Coursera environment.
# -
# ## Explore the Dataset
#
# Once the dataset is loaded, you can explore it by using the following loop:
#
# ```python
# for feature in tfds.as_numpy(dataset['train']):
# for key, value in feature.items():
# if key == 'image':
# value = value.shape
# print(key, value)
# break
# ```
#
# **Note:** Since we couldn't build the `imdb_faces` dataset due to its size, we are unable to run the above code in the Coursera environment.
#
# The expected output from the code block shown above should be:
#
# ```python
# >>>
# celeb_id 12387
# dob 722957
# face_location [1. 0.56327355 1. 1. ]
# face_score 4.0612864
# gender 0
# image (96, 97, 3)
# photo_taken 2007
# second_face_score 3.6680346
# ```
# + [markdown] colab_type="text" id="BhUO2vXDZw8q"
# # Next steps for publishing
#
# **Double-check the citation**
#
# It's important that DatasetInfo.citation includes a good citation for the dataset. It's hard and important work contributing a dataset to the community and we want to make it easy for dataset users to cite the work.
#
# If the dataset's website has a specifically requested citation, use that (in BibTex format).
#
# If the paper is on arXiv, find it there and click the bibtex link on the right-hand side.
#
# If the paper is not on arXiv, find the paper on Google Scholar and click the double-quotation mark underneath the title and on the popup, click BibTeX.
#
# If there is no associated paper (for example, there's just a website), you can use the BibTeX Online Editor to create a custom BibTeX entry (the drop-down menu has an Online entry type).
#
#
# **Add a test**
#
# Most datasets in TFDS should have a unit test and your reviewer may ask you to add one if you haven't already. See the testing section below.
#
# **Check your code style**
#
# Follow the PEP 8 Python style guide, except TensorFlow uses 2 spaces instead of 4. Please conform to the Google Python Style Guide.
#
# Most importantly, use tensorflow_datasets/oss_scripts/lint.sh to ensure your code is properly formatted; for example, run it against the image directory. See the TensorFlow code style guide for more information.
#
# **Add release notes**
# Add the dataset to the release notes. The release note will be published for the next release.
#
# **Send for review!**
# Send the pull request for review.
#
# For more information, visit https://www.tensorflow.org/datasets/add_dataset
# -
# # Submission Instructions
# +
# Now click the 'Submit Assignment' button above.
# -
# # When you're done or would like to take a break, please run the two cells below to save your work and close the Notebook. This frees up resources for your fellow learners.
# + language="javascript"
# <!-- Save the notebook -->
# IPython.notebook.save_checkpoint();
# + language="javascript"
# <!-- Shutdown and close the notebook -->
# window.onbeforeunload = null
# window.close();
# IPython.notebook.session.delete();
# Source notebook: course-3/week-4/utf-8''TFDS-Week4-Question.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
from pyspark.sql import SparkSession
spark = SparkSession \
.builder \
    .appName("Python Spark SQL Joined Table") \
.config("spark.some.config.option", "some-value") \
.getOrCreate()
# -
from pyspark.ml.linalg import Vectors
df = spark.sparkContext.parallelize([
("assert", Vectors.dense([1, 2, 3]))
]).toDF(["word", "vector"])
df.show()
def extract(row):
    return (row.word, ) + tuple(float(x) for x in row.vector.values)
# One output column per vector element, plus the word itself
df_new = df.rdd.map(extract).toDF(["word", "v1", "v2", "v3"])
df_new.show()
df = spark.sparkContext.parallelize([(1, 2, 3, 'a b c'),
(4, 5, 6, 'd e f'),
(7, 8, 9, 'g h i')]).toDF(['col1', 'col2', 'col3','col4'])
df.show()
df.printSchema()
from pyspark.sql.functions import split, explode
new = df.withColumn('col4',explode(split('col4',' ')))
new.show()
new.printSchema()
df = spark.sparkContext.parallelize([(1, 2, 3, 'a,b,c'),
(4, 5, 6, 'd,e,f'),
(7, 8, 9, 'g,h,i')]).toDF(['col1', 'col2', 'col3','col4'])
new = df.withColumn('col4',explode(split('col4',',')))
new.show()
a = spark.sparkContext.\
parallelize([['a', 'foo'], ['b', 'hem'], ['c', 'haw']]).toDF(['a_id', 'extra'])
a.show()
b = spark.sparkContext.parallelize([['p1', 'a'], ['p2', 'b'], ['p3', 'c']]).toDF(["other", "b_id"])
b.show()
c = a.join(b, a.a_id == b.b_id,'outer')
c.show()
from tqdm import tqdm, tqdm_notebook
for i in tqdm_notebook(range(int(1e4))):
pass
# +
import matplotlib.pyplot as plt
import seaborn as sns
import random
# create an RDD of 100 random numbers
x = [random.normalvariate(0,1) for i in range(100)]
rdd = spark.sparkContext.parallelize(x)
# plot data in RDD - use .collect() to bring data to local
num_bins = 50
n, bins, patches = plt.hist(rdd.collect(), num_bins, density=True, facecolor='green', alpha=0.5)
# +
import pandas as pd
# Let's use UCLA's college admission dataset
file_name = "http://www.ats.ucla.edu/stat/data/binary.csv"
# Creating a pandas dataframe from Sample Data
pandas_df = pd.read_csv(file_name)
# Creating a Spark DataFrame from a pandas dataframe
spark_df = spark.createDataFrame(pandas_df)
spark_df.show(5)
# -
# Source notebook: demo/DataStack.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: ' Zipline environment'
# language: python
# name: zipline
# ---
# <img alt="QuantRocket logo" src="https://www.quantrocket.com/assets/img/notebook-header-logo.png">
#
# © Copyright Quantopian Inc.<br>
# © Modifications Copyright QuantRocket LLC<br>
# Licensed under the [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/legalcode).
#
# <a href="https://www.quantrocket.com/disclaimer/">Disclaimer</a>
# # Statistical Moments - Skewness and Kurtosis
# By <NAME> Nitishinskaya, <NAME>, and <NAME>
# Sometimes mean and variance are not enough to describe a distribution. When we calculate variance, we square the deviations around the mean. In the case of large deviations, we do not know whether they are likely to be positive or negative. This is where the skewness and symmetry of a distribution come in. A distribution is <i>symmetric</i> if the parts on either side of the mean are mirror images of each other. For example, the normal distribution is symmetric. The normal distribution with mean $\mu$ and standard deviation $\sigma$ is defined as
# $$ f(x) = \frac{1}{\sigma \sqrt{2 \pi}} e^{-\frac{(x - \mu)^2}{2 \sigma^2}} $$
# We can plot it to confirm that it is symmetric:
# + jupyter={"outputs_hidden": false}
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as stats
# Plot a standard normal distribution (mean = 0, standard deviation = 1)
xs = np.linspace(-6,6, 300)
normal = stats.norm.pdf(xs)
plt.plot(xs, normal);
# -
# A distribution which is not symmetric is called <i>skewed</i>. For instance, a distribution can have many small positive and a few large negative values (negatively skewed) or vice versa (positively skewed), and still have a mean of 0. A symmetric distribution has skewness 0. Positively skewed unimodal (one mode) distributions have the property that mean > median > mode. Negatively skewed unimodal distributions are the reverse, with mean < median < mode. All three are equal for a symmetric unimodal distribution.
#
# The explicit formula for skewness is:
# $$ S_K = \frac{n}{(n-1)(n-2)} \frac{\sum_{i=1}^n (X_i - \mu)^3}{\sigma^3} $$
#
# Where $n$ is the number of observations, $\mu$ is the arithmetic mean, and $\sigma$ is the standard deviation. The sign of this quantity describes the direction of the skew as described above. We can plot a positively skewed and a negatively skewed distribution to see what they look like. For unimodal distributions, a negative skew typically indicates that the tail is fatter on the left, while a positive skew indicates that the tail is fatter on the right.
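# As a quick numerical check (an illustrative sketch, not part of the original lecture), the skewness formula above can be implemented directly and compared with `scipy.stats.skew(..., bias=False)`, which applies the same sample-size correction; the lognormal sample is only an example:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.lognormal(size=1000)  # a positively skewed sample
n = len(x)
mu = x.mean()
sigma = x.std(ddof=1)  # sample standard deviation

# Direct implementation of the formula above
sk_manual = n / ((n - 1) * (n - 2)) * np.sum((x - mu) ** 3) / sigma ** 3

# scipy's bias-corrected estimator agrees with it
sk_scipy = stats.skew(x, bias=False)
print(sk_manual, sk_scipy)
```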
# + jupyter={"outputs_hidden": false}
# Generate x-values for which we will plot the distribution
xs2 = np.linspace(stats.lognorm.ppf(0.01, .7, loc=-.1), stats.lognorm.ppf(0.99, .7, loc=-.1), 150)
# Positively skewed distribution
lognormal = stats.lognorm.pdf(xs2, .7)
plt.plot(xs2, lognormal, label='Skew > 0')
# Negatively skewed distribution
plt.plot(xs2, lognormal[::-1], label='Skew < 0')
plt.legend();
# -
# Although skew is less obvious when graphing discrete data sets, we can still compute it. For example, below are the skew, mean, and median for AAPL returns 2012-2014. Note that the skew is negative, and so the mean is less than the median.
# + jupyter={"outputs_hidden": false}
from quantrocket.master import get_securities
from quantrocket import get_prices
aapl_sid = get_securities(symbols="AAPL", vendors='usstock').index[0]
start = '2012-01-01'
end = '2015-01-01'
prices = get_prices('usstock-free-1min', data_frequency="daily", sids=aapl_sid, fields='Close', start_date=start, end_date=end)
closes = prices.loc["Close"][aapl_sid]
returns = closes.pct_change()[1:]
print('Skew:', stats.skew(returns))
print('Mean:', np.mean(returns))
print('Median:', np.median(returns))
plt.hist(returns, 30);
# -
# # Kurtosis
#
# Kurtosis attempts to measure the shape of the deviation from the mean. Generally, it describes how peaked a distribution is compared to the normal distribution, called mesokurtic. All normal distributions, regardless of mean and variance, have a kurtosis of 3. A leptokurtic distribution (kurtosis > 3) is highly peaked and has fat tails, while a platykurtic distribution (kurtosis < 3) is broad. Sometimes, however, kurtosis in excess of the normal distribution (kurtosis - 3) is used, and this is the default in `scipy`. A leptokurtic distribution has more frequent large jumps away from the mean than a normal distribution does, while a platykurtic distribution has fewer.
# + jupyter={"outputs_hidden": false}
# Plot some example distributions
plt.plot(xs,stats.laplace.pdf(xs), label='Leptokurtic')
print('Excess kurtosis of leptokurtic distribution:', (stats.laplace.stats(moments='k')))
plt.plot(xs, normal, label='Mesokurtic (normal)')
print('Excess kurtosis of mesokurtic distribution:', (stats.norm.stats(moments='k')))
plt.plot(xs,stats.cosine.pdf(xs), label='Platykurtic')
print('Excess kurtosis of platykurtic distribution:', (stats.cosine.stats(moments='k')))
plt.legend();
# -
# The formula for kurtosis is
# $$ K = \left ( \frac{n(n+1)}{(n-1)(n-2)(n-3)} \frac{\sum_{i=1}^n (X_i - \mu)^4}{\sigma^4} \right ) $$
#
# while excess kurtosis is given by
# $$ K_E = \left ( \frac{n(n+1)}{(n-1)(n-2)(n-3)} \frac{\sum_{i=1}^n (X_i - \mu)^4}{\sigma^4} \right ) - \frac{3(n-1)^2}{(n-2)(n-3)} $$
#
# For a large number of samples, the excess kurtosis becomes approximately
#
# $$ K_E \approx \frac{1}{n} \frac{\sum_{i=1}^n (X_i - \mu)^4}{\sigma^4} - 3 $$
#
# Since above we were considering perfect, continuous distributions, this was the form that kurtosis took. However, for a set of samples drawn for the normal distribution, we would use the first definition, and (excess) kurtosis would only be approximately 0.
#
# We can use `scipy` to find the excess kurtosis of the AAPL returns from before.
# + jupyter={"outputs_hidden": false}
print("Excess kurtosis of returns: ", stats.kurtosis(returns))
# -
# The histogram of the returns shows significant observations beyond 3 standard deviations away from the mean, multiple large spikes, so we shouldn't be surprised that the kurtosis is indicating a leptokurtic distribution.
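# As a side check (an illustrative sketch, not in the original lecture), the large-sample approximation of excess kurtosis above can be verified against `scipy.stats.kurtosis`, which returns excess kurtosis by default; for simulated Laplace data the result should land near 3:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.laplace(size=100_000)  # leptokurtic: excess kurtosis of the Laplace is 3
n = len(x)
mu = x.mean()
sigma = x.std()  # population standard deviation (ddof=0)

# Large-sample approximation of excess kurtosis from the formula above
k_e = np.sum((x - mu) ** 4) / (n * sigma ** 4) - 3

# scipy computes the same biased estimate with its defaults
print(k_e, stats.kurtosis(x))
```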
# # Other standardized moments
#
# It's no coincidence that the variance, skewness, and kurtosis take similar forms. They are the first and most important standardized moments, of which the $k$th has the form
# $$ \frac{E[(X - E[X])^k]}{\sigma^k} $$
#
# The first standardized moment is always 0 $(E[X - E[X]] = E[X] - E[E[X]] = 0)$, so we only care about the second through fourth. All of the standardized moments are dimensionless numbers which describe the distribution, and in particular can be used to quantify how close to normal (having standardized moments $0, 1, 0, 3$) a distribution is.
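# A minimal sketch (illustrative, not from the original lecture) of estimating the $k$th standardized moment from a sample; for normal draws the first four should come out near 0, 1, 0, and 3:

```python
import numpy as np

def standardized_moment(x, k):
    """Estimate E[(X - E[X])^k] / sigma^k from a sample."""
    mu = x.mean()
    sigma = x.std()
    return np.mean((x - mu) ** k) / sigma ** k

rng = np.random.default_rng(2)
x = rng.normal(size=100_000)
# The second moment is exactly 1 here, since sigma is estimated from the same sample
print([round(standardized_moment(x, k), 2) for k in range(1, 5)])
```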
# # Normality Testing Using Jarque-Bera
#
# The Jarque-Bera test is a common statistical test that compares whether sample data has skewness and kurtosis similar to a normal distribution. We can run it here on the AAPL returns to find the p-value for them coming from a normal distribution.
#
# The Jarque-Bera test's null hypothesis is that the data came from a normal distribution. Because of this it can err on the side of not catching a non-normal process if you use a low p-value cutoff. To be safe it can be good to increase your cutoff when using the test.
#
# Remember to treat p-values as binary and not try to read into them or compare them. We'll use a cutoff of 0.05 for our p-value.
#
# ## Test Calibration
#
# Remember that each test is written a little differently across different programming languages. You might not know whether it's the null or alternative hypothesis that the tested data comes from a normal distribution. It is recommended that you use the `?` notation plus online searching to find documentation on the test; plus it is often a good idea to calibrate a test by checking it on simulated data and making sure it gives the right answer. Let's do that now.
# + jupyter={"outputs_hidden": false}
from statsmodels.stats.stattools import jarque_bera
N = 1000
M = 1000
pvalues = np.ndarray((N))
for i in range(N):
# Draw M samples from a normal distribution
X = np.random.normal(0, 1, M);
_, pvalue, _, _ = jarque_bera(X)
pvalues[i] = pvalue
# count number of pvalues below our default 0.05 cutoff
num_significant = len(pvalues[pvalues < 0.05])
print(float(num_significant) / N)
# -
# Great, if properly calibrated we should expect to be wrong $5\%$ of the time at a 0.05 significance level, and this is pretty close. This means that the test is working as we expect.
# + jupyter={"outputs_hidden": false}
_, pvalue, _, _ = jarque_bera(returns)
if pvalue > 0.05:
print('The returns are likely normal.')
else:
print('The returns are likely not normal.')
# -
# This tells us that the AAPL returns likely do not follow a normal distribution.
# ---
#
# **Next Lecture:** [Linear Correlation Analysis](Lecture09-Linear-Correlation-Analysis.ipynb)
#
# [Back to Introduction](Introduction.ipynb)
# ---
#
# *This presentation is for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation for any security; nor does it constitute an offer to provide investment advisory or other services by Quantopian, Inc. ("Quantopian") or QuantRocket LLC ("QuantRocket"). Nothing contained herein constitutes investment advice or offers any opinion with respect to the suitability of any security, and any views expressed herein should not be taken as advice to buy, sell, or hold any security or as an endorsement of any security or company. In preparing the information contained herein, neither Quantopian nor QuantRocket has taken into account the investment needs, objectives, and financial circumstances of any particular investor. Any views expressed and data illustrated herein were prepared based upon information believed to be reliable at the time of publication. Neither Quantopian nor QuantRocket makes any guarantees as to their accuracy or completeness. All information is subject to change and may quickly become unreliable for various reasons, including changes in market conditions or economic circumstances.*
# Source notebook: quant_finance_lectures/Lecture08-Statistical-Moments.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + _cell_guid="79c7e3d0-c299-4dcb-8224-4455121ee9b0" _uuid="d629ff2d2480ee46fbb7e2d37f6b5fab8052498a"
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import datetime
import os
import pandas as pd
import numpy as np
import pkg_resources
import seaborn as sns
import time
import scipy.stats as stats
from sklearn import metrics
from sklearn import model_selection
from keras.preprocessing.text import Tokenizer
from keras.utils import to_categorical
from keras.preprocessing.sequence import pad_sequences
from keras.layers import Embedding
from keras.layers import Input
from keras.layers import Conv1D
from keras.layers import MaxPooling1D
from keras.layers import Flatten
from keras.layers import Dropout
from keras.layers import Dense
from keras.optimizers import RMSprop
from keras.models import Model
from keras.models import load_model
import boto3
from configparser import ConfigParser
from smart_open import smart_open
# -
# ## Load and pre-process the data set
# +
config = ConfigParser()
config_file = ('config.ini')
config.read(config_file)
default = config['aws.data']
aws_key = default['accessKey']
aws_secret = default['secretAccessKey']
bucket_name = 'texttoxicity-train-test'
object_key_train = 'train.csv'
object_key_test ='test.csv'
object_key_sample_submission ='sample_submission.csv'
path_train = 's3://{}:{}@{}/{}'.format(aws_key, aws_secret, bucket_name, object_key_train)
path_test = 's3://{}:{}@{}/{}'.format(aws_key, aws_secret, bucket_name, object_key_test)
path_sample_submission = 's3://{}:{}@{}/{}'.format(aws_key, aws_secret, bucket_name, object_key_sample_submission)
train = pd.read_csv(smart_open(path_train))
test = pd.read_csv(smart_open(path_test))
sample_submission = pd.read_csv(smart_open(path_sample_submission))
# +
print('loaded %d records' % len(train))
# Make sure all comment_text values are strings
train['comment_text'] = train['comment_text'].astype(str)
# List all identities
identity_columns = [
'male', 'female', 'homosexual_gay_or_lesbian', 'christian', 'jewish',
'muslim', 'black', 'white', 'psychiatric_or_mental_illness']
# Convert target and identity columns to booleans
def convert_to_bool(df, col_name):
df[col_name] = np.where(df[col_name] >= 0.5, True, False)
def convert_dataframe_to_bool(df):
bool_df = df.copy()
for col in ['target'] + identity_columns:
convert_to_bool(bool_df, col)
return bool_df
train = convert_dataframe_to_bool(train)
# -
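# The 0.5 thresholding above can be sketched on a tiny hypothetical frame (the values are made up for illustration):

```python
import numpy as np
import pandas as pd

# Hypothetical annotation scores in [0, 1]; anything >= 0.5 becomes True
toy = pd.DataFrame({'target': [0.0, 0.3, 0.5, 0.9],
                    'male':   [0.8, 0.1, 0.5, 0.0]})
for col in ['target', 'male']:
    toy[col] = np.where(toy[col] >= 0.5, True, False)
print(toy.dtypes)  # both columns are now boolean
```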
# ## Split the data into 80% train and 20% validate sets
train_df, validate_df = model_selection.train_test_split(train, test_size=0.2)
print('%d train comments, %d validate comments' % (len(train_df), len(validate_df)))
# ## Create a text tokenizer
# +
MAX_NUM_WORDS = 10000
TOXICITY_COLUMN = 'target'
TEXT_COLUMN = 'comment_text'
# Create a text tokenizer.
tokenizer = Tokenizer(num_words=MAX_NUM_WORDS)
tokenizer.fit_on_texts(train_df[TEXT_COLUMN])
# All comments must be truncated or padded to be the same length.
MAX_SEQUENCE_LENGTH = 250
def pad_text(texts, tokenizer):
return pad_sequences(tokenizer.texts_to_sequences(texts), maxlen=MAX_SEQUENCE_LENGTH)
# -
# ## Define and train a Convolutional Neural Net for classifying toxic comments
# +
EMBEDDINGS_PATH = 'glove.6B.100d.txt'
EMBEDDINGS_DIMENSION = 100
DROPOUT_RATE = 0.3
LEARNING_RATE = 0.00005
NUM_EPOCHS = 5
BATCH_SIZE = 10
def train_model(train_df, validate_df, tokenizer):
# Prepare data
train_text = pad_text(train_df[TEXT_COLUMN], tokenizer)
train_labels = to_categorical(train_df[TOXICITY_COLUMN])
validate_text = pad_text(validate_df[TEXT_COLUMN], tokenizer)
validate_labels = to_categorical(validate_df[TOXICITY_COLUMN])
# Load embeddings
print('loading embeddings')
embeddings_index = {}
with open(EMBEDDINGS_PATH, encoding ='utf-8') as f:
for line in f:
values = line.split()
word = values[0]
coefs = np.asarray(values[1:], dtype='float32')
embeddings_index[word] = coefs
embedding_matrix = np.zeros((len(tokenizer.word_index) + 1,
EMBEDDINGS_DIMENSION))
num_words_in_embedding = 0
for word, i in tokenizer.word_index.items():
embedding_vector = embeddings_index.get(word)
if embedding_vector is not None:
num_words_in_embedding += 1
# words not found in embedding index will be all-zeros.
embedding_matrix[i] = embedding_vector
# Create model layers.
def get_convolutional_neural_net_layers():
"""Returns (input_layer, output_layer)"""
sequence_input = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32')
embedding_layer = Embedding(len(tokenizer.word_index) + 1,
EMBEDDINGS_DIMENSION,
weights=[embedding_matrix],
input_length=MAX_SEQUENCE_LENGTH,
trainable=False)
x = embedding_layer(sequence_input)
x = Conv1D(128, 2, activation='relu', padding='same')(x)
x = MaxPooling1D(5, padding='same')(x)
x = Conv1D(128, 3, activation='relu', padding='same')(x)
x = MaxPooling1D(5, padding='same')(x)
x = Conv1D(128, 4, activation='relu', padding='same')(x)
x = MaxPooling1D(40, padding='same')(x)
x = Flatten()(x)
x = Dropout(DROPOUT_RATE)(x)
x = Dense(128, activation='relu')(x)
preds = Dense(2, activation='softmax')(x)
return sequence_input, preds
# Compile model.
print('compiling model')
input_layer, output_layer = get_convolutional_neural_net_layers()
model = Model(input_layer, output_layer)
model.compile(loss='categorical_crossentropy',
optimizer=RMSprop(lr=LEARNING_RATE),
metrics=['acc'])
# Train model.
print('training model')
model.fit(train_text,
train_labels,
batch_size=BATCH_SIZE,
epochs=NUM_EPOCHS,
validation_data=(validate_text, validate_labels),
verbose=2)
return model
model = train_model(train_df, validate_df, tokenizer)
# -
# ## Generate model predictions on the validation set
MODEL_NAME = 'my_model'
validate_df[MODEL_NAME] = model.predict(pad_text(validate_df[TEXT_COLUMN], tokenizer))[:, 1]
validate_df.head()
# ## Define bias metrics, then evaluate our new model for bias using the validation set predictions
# +
SUBGROUP_AUC = 'subgroup_auc'
BPSN_AUC = 'bpsn_auc' # stands for background positive, subgroup negative
BNSP_AUC = 'bnsp_auc' # stands for background negative, subgroup positive
def compute_auc(y_true, y_pred):
try:
return metrics.roc_auc_score(y_true, y_pred)
except ValueError:
return np.nan
def compute_subgroup_auc(df, subgroup, label, model_name):
subgroup_examples = df[df[subgroup]]
return compute_auc(subgroup_examples[label], subgroup_examples[model_name])
def compute_bpsn_auc(df, subgroup, label, model_name):
"""Computes the AUC of the within-subgroup negative examples and the background positive examples."""
subgroup_negative_examples = df[df[subgroup] & ~df[label]]
non_subgroup_positive_examples = df[~df[subgroup] & df[label]]
    examples = pd.concat([subgroup_negative_examples, non_subgroup_positive_examples])
return compute_auc(examples[label], examples[model_name])
def compute_bnsp_auc(df, subgroup, label, model_name):
"""Computes the AUC of the within-subgroup positive examples and the background negative examples."""
subgroup_positive_examples = df[df[subgroup] & df[label]]
non_subgroup_negative_examples = df[~df[subgroup] & ~df[label]]
    examples = pd.concat([subgroup_positive_examples, non_subgroup_negative_examples])
return compute_auc(examples[label], examples[model_name])
def compute_bias_metrics_for_model(dataset,
subgroups,
model,
label_col,
include_asegs=False):
"""Computes per-subgroup metrics for all subgroups and one model."""
records = []
for subgroup in subgroups:
record = {
'subgroup': subgroup,
'subgroup_size': len(dataset[dataset[subgroup]])
}
record[SUBGROUP_AUC] = compute_subgroup_auc(dataset, subgroup, label_col, model)
record[BPSN_AUC] = compute_bpsn_auc(dataset, subgroup, label_col, model)
record[BNSP_AUC] = compute_bnsp_auc(dataset, subgroup, label_col, model)
records.append(record)
return pd.DataFrame(records).sort_values('subgroup_auc', ascending=True)
bias_metrics_df = compute_bias_metrics_for_model(validate_df, identity_columns, MODEL_NAME, TOXICITY_COLUMN)
bias_metrics_df
# -
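# To make the three AUC slices concrete, here is a small self-contained demo mirroring compute_subgroup_auc and compute_bpsn_auc above. The subgroup, labels, and scores below are invented for illustration, not taken from this dataset; inflating the scores of the subgroup's negative examples depresses both the subgroup AUC and the BPSN AUC relative to the overall AUC:

```python
import pandas as pd
from sklearn import metrics

# Invented toy data: 16 comments, half mentioning an identity ("female"),
# with scores inflated for the subgroup's non-toxic examples
df = pd.DataFrame({
    'female': [True] * 8 + [False] * 8,
    'toxic':  ([True] * 4 + [False] * 4) * 2,
    'score':  [0.9, 0.8, 0.85, 0.95,   # subgroup positives
               0.7, 0.85, 0.75, 0.9,   # subgroup negatives (inflated)
               0.9, 0.8, 0.85, 0.95,   # background positives
               0.1, 0.2, 0.15, 0.05],  # background negatives
})

def auc_on(frame):
    return metrics.roc_auc_score(frame['toxic'], frame['score'])

subgroup_auc = auc_on(df[df['female']])
# BPSN slice: subgroup negatives mixed with background positives
bpsn_auc = auc_on(pd.concat([df[df['female'] & ~df['toxic']],
                             df[~df['female'] & df['toxic']]]))
overall_auc = auc_on(df)
print(subgroup_auc, bpsn_auc, overall_auc)
```

# With these scores the subgroup and BPSN slices both come out at 0.75, versus an overall AUC of 0.875.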
# ## Calculate the final score
# +
def calculate_overall_auc(df, model_name):
true_labels = df[TOXICITY_COLUMN]
predicted_labels = df[model_name]
return metrics.roc_auc_score(true_labels, predicted_labels)
def power_mean(series, p):
total = sum(np.power(series, p))
return np.power(total / len(series), 1 / p)
def get_final_metric(bias_df, overall_auc, POWER=-5, OVERALL_MODEL_WEIGHT=0.25):
bias_score = np.average([
power_mean(bias_df[SUBGROUP_AUC], POWER),
power_mean(bias_df[BPSN_AUC], POWER),
power_mean(bias_df[BNSP_AUC], POWER)
])
return (OVERALL_MODEL_WEIGHT * overall_auc) + ((1 - OVERALL_MODEL_WEIGHT) * bias_score)
get_final_metric(bias_metrics_df, calculate_overall_auc(validate_df, MODEL_NAME))
# -
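# The negative exponent (POWER=-5) makes the generalized mean behave like a soft minimum: one badly-served subgroup drags the bias score down much harder than an arithmetic average would. A quick self-contained check (the AUC values below are invented):

```python
import numpy as np

def power_mean(series, p):
    total = sum(np.power(series, p))
    return np.power(total / len(series), 1 / p)

aucs = np.array([0.95, 0.95, 0.95, 0.60])  # one badly-served subgroup
print(np.mean(aucs))         # arithmetic mean
print(power_mean(aucs, -5))  # generalized mean with p = -5, noticeably lower
```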
# ## Prediction on Test data
sample_submission['prediction'] = model.predict(pad_text(test[TEXT_COLUMN], tokenizer))[:, 1]
sample_submission.to_csv('submission.csv')
sample_submission.head()
t = pd.read_csv('Book1.csv')
t
s = model.predict(pad_text(t[TEXT_COLUMN], tokenizer))
print(s[:, 1])
| Notebook/Bias Determination/CNN/CONVOLUTIONAL NEURAL NETWORK --- BIAS DETECTION FINAL.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.6
# language: python
# name: python36
# ---
# # To the Cloud (and Beyond)
# We've thoroughly explored how to solve the digits problem, so now we'll move the training portion to the cloud. (For the digits problem nothing more needs to run locally, but for other problems you would typically test on a subset locally and then move to the cloud to process the full workload.)
#
# Let's get a few things set up.
#
# The first thing to do is make sure the azureml.core package is installed in your notebook environment. If you are using Azure Notebooks, this is a simple two-step process.
# ## Adding dependencies in Azure Notebooks
# Click "Project Settings".
#
#
#
# Next, select the "Environments" tab, choose "Python 3.6", and finally select `requirements.txt`.
#
#
#
# With these steps you should be ready to go.
#
# **Note**: If you still have problems after the setup above, check that the notebook kernel is set to Python 3.6. If it isn't, you can change it from the notebook menu: Kernel > Change Kernel > Python 3.6.
# +
import json
import time
import azureml
from azureml.core.model import Model
from azureml.core import Workspace, Run, Experiment
from azureml.core.runconfig import RunConfiguration
from azureml.core.conda_dependencies import CondaDependencies
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
from azureml.train.dnn import PyTorch
from azureml.widgets import RunDetails
from torchvision import datasets, transforms
print("Azure ML SDK Version: ", azureml.core.VERSION)
# -
# # Setting up the Azure Machine Learning service
# The first thing we need to do is create an Azure Machine Learning service workspace. There is [documentation](https://docs.microsoft.com/en-us/azure/machine-learning/service/quickstart-get-started#create-a-workspace) on how to do that, and if you are comfortable at the command line there is an [example](https://github.com/sethjuarez/workspacestarter) of setting it up with the Azure CLI. Once the project is set up, uncomment the code below, fill in the settings appropriate to your workspace, and run it to write out the configuration file. After the config file has been written, you can load the workspace programmatically as shown below.
# +
## Use the code below to set up the config file
#subscription_id ='<SUB_ID>'
#resource_group ='<RESOURCE_GROUP>'
#workspace_name = '<WORKSPACE>'
#try:
# ws = Workspace(subscription_id = subscription_id, resource_group = resource_group, workspace_name = workspace_name)
# ws.write_config()
# print('Workspace configuration succeeded. You are all set!')
#except:
# print('Workspace not found. TOO MANY ISSUES!!!')
## After running the code above once, you can use the saved config file
#ws = Workspace.from_config()
# -
# # Cloud compute
# Next we need to define a compute target for the experiments. Since this is a brand-new workspace, feel free to change the cluster name (I call mine 'racer'). The code below tries to get a reference to the cluster and creates it if it doesn't exist. Creating a cluster takes a little while. Also, to avoid surprise charges, make sure you turn the cluster off once your experiments are done (in fact, consider setting min_nodes to 0 so the cluster turns itself off automatically after sitting idle for a while).
#
# **Translator's note**: If GPU-optimized machines aren't available to you (for example on an Azure free trial), or you want to keep costs down, set vm_size to "STANDARD_D2_V2". Setting min_nodes to 1 or more shortens the wait before training starts, but charges can keep accruing if you forget to delete the compute, so be careful. With min_nodes set to 0, the nodes are removed automatically when a run finishes and billing stops.
cluster = 'racer'
try:
compute = ComputeTarget(workspace=ws, name=cluster)
print('Found existing compute target "{}"'.format(cluster))
except ComputeTargetException:
print('Creating new compute target "{}"...'.format(cluster))
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_NC6', min_nodes=1, max_nodes=6)
compute = ComputeTarget.create(ws, cluster, compute_config)
compute.wait_for_completion(show_output=True)
# # Time to experiment
# With a compute target in place, we can package the previous little notebook into a single script that runs in the remote compute environment. I've created train.py [for you](train.py). In fact, if you look at the file you'll see exactly the same concepts we learned in the previous notebook (it's almost identical, with a few extra bits to make passing parameters into the script easier).
#
# The Azure ML service has the concept of an experiment, and each experiment can have multiple runs. Here we use an Estimator object to define how an experiment run should be executed.
#
# ### Skip this part if you don't care what happens behind the scenes
# Behind the scenes, an Estimator is basically the definition of a docker image that hosts your experiment. The best part of all this is that no matter what your experiment uses (even a custom build of TensorFlow, or anything else), it is guaranteed to run - after all, it's a container. Very convenient.
#
# ### Back to our regularly scheduled program
# When you submit the Estimator to run on the Azure ML service, the contents of the current directory are copied and packaged into a new container (everything is uploaded except what is listed in the [.amlignore] file).
#
# Also, since we use 'argparse', we can pass external parameters to the training script as part of the estimator definition.
#
# Run the next three cells and let's see what happens.
#
# **Translator's note**: If you chose a non-GPU vm_size such as "STANDARD_D2_V2" when creating the cloud compute, change "use_gpu=True" to "use_gpu=False" in the cell below.
# +
# create the experiment
mnist = Experiment(ws, 'pytorchmnist')
# script parameters
script_params={
'--epochs': 5,
'--batch': 100,
'--lr': .001,
'--model': 'cnn'
}
# create the Estimator
estimator = PyTorch(source_directory='.',
compute_target=compute,
entry_script='train.py',
script_params=script_params,
use_gpu=True)
run = mnist.submit(estimator)
# -
run
RunDetails(run).show()
# When everything is done it should look something like this:
#
#
#
# Notice that the loss actually decreases over time (on average) and the model's accuracy goes up. Try experimenting with the learning_rate parameter. For more details, see [Tune hyperparameters for your model with Azure Machine Learning service](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-tune-hyperparameters).
#
# Now, you might be wondering how those wonderful charts got there. This is where the Azure ML service adds real practical value on top of your experiment results. With [a few](https://github.com/sethjuarez/pytorchintro/blob/master/train.py#L156-L166) [strategically](https://github.com/sethjuarez/pytorchintro/blob/master/train.py#L121-L122) [placed](https://github.com/sethjuarez/pytorchintro/blob/master/train.py#L142-L143) logging statements, the Azure ML service produced this output. In fact, when a value is logged more than once, a chart is created automatically instead of a single table entry.
# # Models
# Once training is done and the outputs are written, we can inspect the outputs of every run of a given experiment and promote one to an "official" workspace model. This is a great feature, because the important file (the model that might make us rich) usually ends up sitting on a computer named Jeff. These days many people don't even version their models, so run the code below.
run.get_file_names()
model_file = 'outputs/model.pth'
run.download_file(name=model_file, output_file_path='model.pth')
model = Model.register(ws, model_name='PyTorchMNIST', model_path='model.pth',
description='CNN PyTorch Model')
# # Images
# Now that the model is done, if we want to use it in production we need to define how the model should be used. This is also called scoring or inference. The Azure ML service basically requires two methods:
# 1. `init()`
# 2. `run(raw)` - takes in a JSON string and returns a prediction
#
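# For reference, the skeleton of a scoring script looks something like the sketch below. This is an illustrative stand-in, not the actual score.py from this repo: the real init() loads the registered PyTorch model, while here a trivial placeholder "model" stands in so the shape of the two methods is clear.

```python
import json

def init():
    # The real score.py would load the registered model here, e.g. via
    # Model.get_model_path('PyTorchMNIST') and torch.load(...).
    global model
    model = lambda pixels: int(sum(pixels) % 10)  # placeholder "prediction"

def run(raw):
    # raw is a JSON string such as {"image": "0,12,255,..."}
    data = json.loads(raw)
    pixels = [int(x) for x in data['image'].split(',')]
    return json.dumps({'prediction': model(pixels)})

init()
print(run(json.dumps({'image': '1,2,3,4'})))
```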
# First we need to describe the environment the scoring script will run in and write it out to a configuration file.
myenv = CondaDependencies()
myenv.add_pip_package('numpy')
myenv.add_pip_package('torch')
with open('pytorchmnist.yml','w') as f:
print('Writing out {}'.format('pytorchmnist.yml'))
f.write(myenv.serialize_to_string())
print('Done!')
# Next we need to tell the Azure ML service where the scoring script lives. I made score.py [in advance](score.py). If you look at the file, you should easily find both the init() and run(raw) methods. You can also run the file locally to make sure it behaves correctly.
#
# With all of that in place, let's create the image.
#
# ### Skip this part if you don't care what happens behind the scenes
# Basically this creates a docker image from the definition and pushes it to the Azure Container Registry that shows up in your Workspace.
# **Note**: this takes a while
# +
from azureml.core.image import ContainerImage, Image
# create the image
image_config = ContainerImage.image_configuration(execution_script="score.py",
runtime="python",
conda_file="pytorchmnist.yml")
image = Image.create(ws, 'pytorchmnist', [model], image_config)
image.wait_for_creation(show_output=True)
# -
# # Deploy
# You might want to stop after image creation and move the rest of the deployment process to something like Azure Pipelines. If instead you'd like to go ahead and deploy the service to your workspace, use the following.
# +
from azureml.core.webservice import Webservice, AciWebservice
service_name = 'pytorchmnist-svc'
# check for existing service
svcs = [svc for svc in Webservice.list(ws) if svc.name==service_name]
if len(svcs) == 1:
print('Deleting prior {} deployment'.format(service_name))
svcs[0].delete()
# create service
aciconfig = AciWebservice.deploy_configuration(cpu_cores=1,
memory_gb=1,
description='simple MNIST digit detection')
service = Webservice.deploy_from_image(workspace=ws,
image=image,
name=service_name,
deployment_config=aciconfig)
service.wait_for_deployment(show_output=True)
print(service.scoring_uri)
# -
# You could also push the image to ACI or to a workspace Kubernetes cluster.
#
# Sometimes things go wrong... if that happens on a run, take a look at the actual [logs](deploy.log)!
with open('deploy.log','w') as f:
f.write(service.get_logs())
# # Running the service
# The service is now up and running. Let's check that it behaves properly. We can load the test data we've been using and try it with random digits.
digits = datasets.MNIST('data', train=False, download=True,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Lambda(lambda x: x.reshape(28*28))
])
)
print(len(digits))
# You can pick essentially any number up to 10,000 (the size of the test set) as the index. Try it a few times to see how the service behaves.
# +
import torch
from PIL import Image
import matplotlib.pyplot as plt
X, Y = digits[20]
X = X * 255
plt.imshow(255 - X.reshape(28,28), cmap='gray')
# -
# The location of the endpoint we're going to POST to
image_str = ','.join(map(str, X.int().tolist()))
print(image_str)
import json
import requests
service_url = service.scoring_uri
print(service_url)
r = requests.post(service_url, json={'image': image_str })
r.json()
# ## Finally
# I hope this little journey has been helpful! My goal was for you to see that the basics of machine learning aren't so bad after all. Let me know if you have comments, suggestions, or anything that's unclear.
| cloud.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
list_a = [100,200, 300,400,500,600]
list_b = ["Tom","Dickinson", "Harris"]
list_b.extend(list_a)
for var in list_a:
print(var)
print("***********************************")
for var in range(len(list_b)):
print(list_b[-1*(var+1)])
print(list(range(10)))
# -
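# The negative-index loop above walks the extended list back to front; the same traversal is usually written with reversed():

```python
list_a = [100, 200, 300, 400, 500, 600]
list_b = ["Tom", "Dickinson", "Harris"]
list_b.extend(list_a)

# reversed() yields the elements back to front without any index arithmetic
for item in reversed(list_b):
    print(item)
```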
| test.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Titanic CNN
#
# ## Overview:
# - Move the main 13-dimensional features into the central region of the 40*40 image. If they sit at the very start of the array they end up on the image's edge after reshaping, where convolution extracts the features poorly.
# - For long-running training, set up TensorBoard to monitor results instead of manually dragging the page with the mouse to check the latest output every time.
# - Normalize the data with StandardScaler so that no single very large value drowns out the others during convolution.
#
# Reference:
# 1. https://www.kaggle.com/c/titanic#tutorials
# 2. https://www.kaggle.com/sinakhorami/titanic-best-working-classifier
# 3. https://www.kaggle.com/arthurtok/introduction-to-ensembling-stacking-in-python/notebook
#
# ## 1. Preprocess
# ### Import pkgs
# +
import os
import time
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.preprocessing import MinMaxScaler
from IPython.display import display
# %matplotlib inline
# +
cwd = os.getcwd()
date_str = time.strftime("%Y%m%d", time.localtime())
time_str = time.strftime("%Y%m%d_%H%M%S", time.localtime())
model_path = os.path.join(cwd, 'model')
log_path = os.path.join(cwd, 'log')
# -
# ### Import original data as DataFrame
# +
data_train = pd.read_csv('./input/train.csv')
data_test = pd.read_csv('./input/test.csv')
display(data_train.head(2))
display(data_test.head(2))
print(data_train.shape)
print(data_test.shape)
data_train.loc[2, 'Ticket']
# -
# ### Show columns of dataframe
data_train_original_col = data_train.columns
data_test_original_col = data_test.columns
print(data_train_original_col)
print(data_test_original_col)
# data_train0 = data_train.drop(data_train_original_col, axis = 1)
# data_test0 = data_test.drop(data_test_original_col, axis = 1)
# display(data_train0.head(2))
# display(data_test0.head(2))
# ### Preprocess features
# Get survived
survived = data_train['Survived']
# Drop survived to align columns of data_train and data_test
data_train = data_train.drop('Survived', axis = 1)
dataset = data_train.append(data_test)
print(dataset.shape)
dataset_original_columns = dataset.columns
print(dataset_original_columns)
# +
# Pclass
temp = dataset[dataset['Pclass'].isnull()]
if len(temp) == 0:
print('Do not have null value!')
else:
temp.head(2)
dataset['a_Pclass'] = dataset['Pclass']
# display(dataset.head())
# -
# Name
dataset['a_Name_Length'] = dataset['Name'].apply(len)
# display(dataset.head(2))
# Sex
dataset['a_Sex'] = dataset['Sex'].map({'female': 0, 'male': 1}).astype(int)
# display(dataset.head(2))
# Age
dataset['a_Age'] = dataset['Age'].fillna(-1)
dataset['a_Have_Age'] = dataset['Age'].isnull().map({True: 0, False: 1}).astype(int)
# display(dataset[dataset['Age'].isnull()].head(2))
# display(dataset.head(2))
# SibSp and Parch
dataset['a_FamilySize'] = dataset['SibSp'] + dataset['Parch'] + 1
dataset['a_IsAlone'] = dataset['a_FamilySize'].apply(lambda x: 1 if x<=1 else 0)
# display(dataset.head(2))
# Ticket (everyone has a ticket)
dataset['a_Have_Ticket'] = dataset['Ticket'].isnull().map({True: 0, False: 1}).astype(int)
# display(dataset[dataset['Ticket'].isnull()].head(2))
# display(dataset.head(2))
# Fare
dataset['a_Fare'] = dataset['Fare'].fillna(-1)
dataset['a_Have_Fare'] = dataset['Fare'].isnull().map({True: 0, False: 1}).astype(int)
# display(dataset[dataset['Fare'].isnull()].head(2))
# display(dataset.head(2))
# Cabin
dataset['a_Have_Cabin'] = dataset['Cabin'].isnull().map({True: 0, False: 1}).astype(int)
# display(dataset[dataset['Cabin'].isnull()].head(2))
# display(dataset.head(2))
# Embarked
# dataset['Embarked'] = dataset['Embarked'].fillna('N')
dataset['a_Embarked'] = dataset['Embarked'].fillna('N').map({'S': 0, 'C': 1, 'Q': 2, 'N': 3}).astype(int)
dataset['a_Have_Embarked'] = dataset['Embarked'].isnull().map({True: 0, False: 1}).astype(int)
# display(dataset[dataset['Embarked'].isnull()].head(2))
# display(dataset.head(2))
# print(len(dataset.columns))
# Name word segmentation and one-hot encoding
# +
# Name words segmentation
import re
name_words = []
# To keep the columns of data_train and data_test aligned, collect words from the combined dataset
for name in dataset['Name']:
# print(name)
words = re.findall(r"[\w']+", name)
# print(len(words))
# print(words)
for w in words:
if w not in name_words:
name_words.append(w)
# print(len(name_words))
name_words.sort()
# print(name_words)
# -
# Add columns
for w in name_words:
col_name = 'a_Name_' + w
dataset[col_name] = 0
dataset.head(1)
# Name words one-hot encoding
for i, row in dataset.iterrows():
# print(row['Name'])
words = re.findall(r"[\w']+", row['Name'])
for w in words:
if w in name_words:
col_name = 'a_Name_' + w
dataset.loc[i, col_name] = 1
# display(dataset[dataset['a_Name_Braund'] == 1])
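# The iterrows() loop above works, but is slow on larger frames. A vectorized sketch of the same name-word one-hot encoding, shown on a small invented frame for illustration:

```python
import re
import pandas as pd

df = pd.DataFrame({'Name': ['Braund, Mr. Owen', 'Heikkinen, Miss. Laina']})

# Split each name into words, explode to one word per row, then pivot to 0/1 columns
words = df['Name'].str.findall(r"[\w']+")
onehot = (words.explode()
               .str.get_dummies()
               .groupby(level=0).max()
               .add_prefix('a_Name_'))
df = df.join(onehot)
print(df.columns.tolist())
```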
# Cabin segmentation and one-hot encoding
# +
# Get cabin segmentation words
import re
cabin_words = []
# To keep the columns of data_train and data_test aligned, collect cabin letters from the combined dataset
for c in dataset['Cabin']:
# print(c)
if c is not np.nan:
word = re.findall(r"[a-zA-Z]", c)
# print(words[0])
cabin_words.append(word[0])
print(len(cabin_words))
cabin_words.sort()
print(np.unique(cabin_words))
cabin_words_unique = list(np.unique(cabin_words))
# +
def get_cabin_word(cabin):
if cabin is not np.nan:
word = re.findall(r"[a-zA-Z]", cabin)
if word:
return cabin_words_unique.index(word[0])
return -1
dataset['a_Cabin_Word'] = dataset['Cabin'].apply(get_cabin_word)
# dataset['a_Cabin_Word'].head(100)
# +
def get_cabin_number(cabin):
if cabin is not np.nan:
word = re.findall(r"[0-9]+", cabin)
if word:
return int(word[0])
return -1
dataset['a_Cabin_Number'] = dataset['Cabin'].apply(get_cabin_number)
print(dataset.shape)
# dataset['a_Cabin_Number'].head(100)
# -
# +
# Clean data
# Reference:
# 1. https://www.kaggle.com/sinakhorami/titanic-best-working-classifier
# 2. https://www.kaggle.com/arthurtok/introduction-to-ensembling-stacking-in-python/notebook
# full_data = [data_train, data_test]
# for dataset in full_data:
# dataset['a_Name_length'] = dataset['Name'].apply(len)
# #dataset['Sex'] = (dataset['Sex']=='male').astype(int)
# dataset['a_Sex'] = dataset['Sex'].map( {'female': 0, 'male': 1} ).astype(int)
# dataset['a_Age'] = dataset['Age'].fillna(0)
# dataset['a_Age_IsNull'] = dataset['Age'].isnull()
# dataset['a_FamilySize'] = dataset['SibSp'] + dataset['Parch'] + 1
# dataset['a_IsAlone'] = dataset['a_FamilySize'].apply(lambda x: 1 if x<=1 else 0)
# dataset['a_Fare'] = dataset['Fare'].fillna(dataset['Fare'].median())
# #dataset['Has_Cabin'] = dataset['Cabin'].apply(lambda x: 1 if type(x) == str else 0) # same as below
# dataset['a_Has_Cabin'] = dataset['Cabin'].apply(lambda x: 0 if type(x) == float else 1)
# dataset['a_Has_Embarked'] = dataset['Embarked'].isnull()
# dataset['Embarked'] = dataset['Embarked'].fillna('N')
# dataset['a_Embarked'] = dataset['Embarked'].map( {'S': 0, 'C': 1, 'Q': 2, 'N': 3} ).astype(int)
# dataset['Embarked'] = dataset['Embarked'].fillna('S')
# display(data_train.head(2))
# display(data_test.head(2))
# -
# Drop old columns
print(dataset_original_columns)
print(dataset.shape)
full_data = dataset.drop(dataset_original_columns, axis = 1)
print(full_data.shape)
display(full_data.iloc[0:15])
from sklearn.preprocessing import MinMaxScaler, StandardScaler
data = [[0, 0, 0],
[0, -1, 0],
[1, 10, -10],
[1, 15, 10]]
scaler = MinMaxScaler()
print(scaler.fit(data))
print(scaler.transform(data))
print('*'*40)
scaler = StandardScaler()
print(scaler.fit(data))
print(scaler.mean_)
print(scaler.transform(data))
# Normalization
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler(copy=True, with_mean=True, with_std=True)
scaler.fit(full_data)
full_data0 = scaler.transform(full_data)
print(full_data0[0:15, 0:5])
# Put the scaled values back into a DataFrame so the normalized data is what gets split below
full_data = pd.DataFrame(full_data0, index=full_data.index, columns=full_data.columns)
features = full_data.iloc[0:891]
data_test0 = full_data.iloc[891:]
print(features.shape)
print(data_test0.shape)
# display(features.head(2))
# Check and confirm all columns is proccessed
for col in features.columns:
if not col.startswith('a_'):
print(col)
# Shuffle and split the train_data into train, crossvalidation and testing subsets
x_train, x_val, y_train, y_val = train_test_split(features, survived, test_size=0.2, random_state=2017)
# Show distribute of abave data sets
print(x_train.shape)
print(x_val.shape)
print(y_train.shape)
print(y_val.shape)
display(x_train.head(2))
display(y_train.head(2))
# ## Pad feature vectors to 45x45 images and reshape
# +
x_train0 = x_train.to_numpy()
y_train0 = y_train.to_numpy()
x_val0 = x_val.to_numpy()
y_val0 = y_val.to_numpy()
x_test0 = data_test0.to_numpy()
target_shape = (45, 45)
extend_width = target_shape[0]*target_shape[1] - x_train0.shape[1]
print('Target shape: ', target_shape)
print('Extend width: ', extend_width)
print('Before extend:')
print(x_train0.shape)
print(x_val0.shape)
print(x_test0.shape)
x_train_ext = np.zeros((x_train0.shape[0], extend_width))
x_val_ext = np.zeros((x_val0.shape[0], extend_width))
x_test_ext = np.zeros((x_test0.shape[0], extend_width))
x_train0 = np.column_stack((x_train0, x_train_ext))
x_val0 = np.column_stack((x_val0, x_val_ext))
x_test0 = np.column_stack((x_test0, x_test_ext))
print('After extend:')
print(x_train0.shape)
print(x_val0.shape)
print(x_test0.shape)
x_train0 = x_train0.reshape(-1, target_shape[0], target_shape[1], 1)
x_val0 = x_val0.reshape(-1, target_shape[0], target_shape[1], 1)
x_test0 = x_test0.reshape(-1, target_shape[0], target_shape[1], 1)
print('After reshape:')
print(x_train0.shape)
print(x_val0.shape)
print(x_test0.shape)
# -
# ## 2. Build CNN
from keras.utils.np_utils import to_categorical # convert to one-hot-encoding
from keras.models import Sequential
from keras.layers import Dense, Dropout, Input, Flatten, Conv2D, MaxPooling2D, BatchNormalization
from keras.optimizers import Adam
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import LearningRateScheduler, TensorBoard
# +
annealer = LearningRateScheduler(lambda x: 1e-4 * 0.9 ** x)
log_dir = os.path.join(log_path, time_str)
print('log_dir:' + log_dir)
tensorBoard = TensorBoard(log_dir=log_dir)
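# The annealer above multiplies the base learning rate by 0.9 each epoch; the schedule can be sanity-checked by evaluating the same lambda directly:

```python
schedule = lambda epoch: 1e-4 * 0.9 ** epoch

# learning rate for the first few epochs: a geometric decay from 1e-4
rates = [schedule(e) for e in range(5)]
print(rates)
```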
# +
model = Sequential()
# Block 1
model.add(Conv2D(filters = 64, kernel_size = (3, 3), activation='relu', padding = 'Same',
input_shape = (45, 45, 1)))
model.add(BatchNormalization())
model.add(Conv2D(filters = 64, kernel_size = (3, 3), activation='relu', padding = 'Same'))
model.add(BatchNormalization())
model.add(MaxPooling2D(strides=(2,2)))
model.add(Dropout(0.5))
# Block 2
# model.add(Conv2D(filters = 128, kernel_size = (3, 3), activation='relu', padding = 'Same'))
# model.add(BatchNormalization())
# model.add(Conv2D(filters = 128, kernel_size = (3, 3), activation='relu', padding = 'Same'))
# model.add(BatchNormalization())
model.add(Conv2D(filters = 128, kernel_size = (3, 3), activation='relu', padding = 'Same'))
model.add(BatchNormalization())
model.add(Conv2D(filters = 128, kernel_size = (3, 3), activation='relu', padding = 'Same'))
model.add(BatchNormalization())
model.add(MaxPooling2D(strides=(2,2)))
model.add(Dropout(0.5))
# Block 3
# model.add(Conv2D(filters = 256, kernel_size = (3, 3), activation='relu', padding = 'Same'))
# model.add(BatchNormalization())
# model.add(Conv2D(filters = 256, kernel_size = (3, 3), activation='relu', padding = 'Same'))
# model.add(BatchNormalization())
model.add(Conv2D(filters = 256, kernel_size = (3, 3), activation='relu', padding = 'Same'))
model.add(BatchNormalization())
model.add(Conv2D(filters = 256, kernel_size = (3, 3), activation='relu', padding = 'Same'))
model.add(BatchNormalization())
model.add(MaxPooling2D(strides=(2,2)))
model.add(Dropout(0.5))
# Block 4
# model.add(Conv2D(filters = 512, kernel_size = (3, 3), activation='relu', padding = 'Same'))
# model.add(BatchNormalization())
# model.add(Conv2D(filters = 512, kernel_size = (3, 3), activation='relu', padding = 'Same'))
# model.add(BatchNormalization())
# model.add(Conv2D(filters = 512, kernel_size = (3, 3), activation='relu', padding = 'Same'))
# model.add(BatchNormalization())
# model.add(Conv2D(filters = 512, kernel_size = (3, 3), activation='relu', padding = 'Same'))
# model.add(BatchNormalization())
# model.add(MaxPooling2D(strides=(2,2)))
# model.add(Dropout(0.5))
# Block 5
# model.add(Conv2D(filters = 512, kernel_size = (3, 3), activation='relu', padding = 'Same'))
# model.add(BatchNormalization())
# model.add(Conv2D(filters = 512, kernel_size = (3, 3), activation='relu', padding = 'Same'))
# model.add(BatchNormalization())
# model.add(MaxPooling2D(strides=(2,2)))
# model.add(Dropout(0.25))
# Output
model.add(Flatten())
model.add(Dense(1024, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1024, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1024, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1, activation = 'sigmoid'))
model.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
hist = model.fit(x_train0, y_train0,
batch_size = 8,
verbose=1,
epochs = 20,
validation_data=(x_val0, y_val0),
callbacks=[annealer, tensorBoard])
# -
final_loss, final_acc = model.evaluate(x_val0, y_val0, verbose=1)
print("Final loss: {0:.4f}, final accuracy: {1:.4f}".format(final_loss, final_acc))
plt.plot(hist.history['loss'], color='b')
plt.plot(hist.history['val_loss'], color='r')
plt.show()
plt.plot(hist.history['acc'], color='b')
plt.plot(hist.history['val_acc'], color='r')
plt.show()
# ### Predict and Export pred.csv file
train_cols = data_train.columns
for col in data_test0.columns:
if col not in train_cols:
print(col)
# +
import time
import os
project_name = 'Titanic'
step_name = 'Predict'
time_str = time.strftime("%Y%m%d_%H%M%S", time.localtime())
final_acc_str = str(int(final_acc*10000))
run_name = project_name + '_' + step_name + '_' + time_str + '_' + final_acc_str
print(run_name)
cwd = os.getcwd()
pred_file = os.path.join(cwd, 'output', run_name + '.csv')
print(pred_file)
# +
display(data_test0.head(2))
y_data_pred = model.predict(x_test0)
print(y_data_pred.shape)
y_data_pred = np.squeeze(y_data_pred)
print(y_data_pred.shape)
y_data_pred = (y_data_pred > 0.5).astype(int)
print(y_data_pred)
print(data_test['PassengerId'].shape)
passenger_id = data_test['PassengerId']
output = pd.DataFrame( { 'PassengerId': passenger_id , 'Survived': y_data_pred })
output.to_csv(pred_file , index = False)
# +
# display(data_test0.head(2))
# y_data_pred = clfs['RandomForestClassifier'].predict(data_test0.as_matrix())
# print(y_data_pred.shape)
# y_data_pred = np.squeeze(y_data_pred)
# print(y_data_pred.shape)
# print(data_test['PassengerId'].shape)
# passenger_id = data_test['PassengerId']
# output = pd.DataFrame( { 'PassengerId': passenger_id , 'Survived': y_data_pred })
# output.to_csv(pred_file , index = False)
# -
print(run_name)
print('Done!')
| titanic/Titanic CNN.ipynb |
# Copyright (c) Microsoft Corporation. All rights reserved.
#
# Licensed under the MIT License.
# # NDVI Forecast on Area of Interest (AOI)
# This notebook demonstrates how to load the model trained in the previous notebook, 2_train.ipynb, and forecast NDVI for the next 10 days on a new Area of Interest.
#
# ### Import Libraries
# +
# Standard library imports
import json
import pickle
import os
import sys
import requests
from datetime import datetime,timedelta
# Disable unnecessary logs
import logging
logging.disable(sys.maxsize)
import warnings
warnings.filterwarnings("ignore")
# Third party library imports
import numpy as np
import pandas as pd
import rasterio
import tensorflow as tf
from tensorflow import keras
# Local imports
from utils.ard_util import ard_preprocess
from utils.config import farmbeats_config
from utils.constants import CONSTANTS
from utils.satellite_util import SatelliteUtil
from utils.test_helper import get_sat_weather_data, get_timezone
from utils.weather_util import WeatherUtil
# Azure imports
from azure.identity import ClientSecretCredential
# SDK imports
from azure.agrifood.farming import FarmBeatsClient
# -
# ### Farmbeats Configuration
# +
# FarmBeats Client definition
credential = ClientSecretCredential(
tenant_id=farmbeats_config['tenant_id'],
client_id=farmbeats_config['client_id'],
client_secret=farmbeats_config['client_secret'],
authority=farmbeats_config['authority']
)
credential_scopes = [farmbeats_config['default_scope']]
fb_client = FarmBeatsClient(
endpoint=farmbeats_config['instance_url'],
credential=credential,
credential_scopes=credential_scopes,
logging_enable=True
)
# -
# ### Forecast EVI for new AOI
# #### Satellite Data
farmer_id = "contoso_farmer"
boundary_id = "sample-boundary-32"
boundary_geometry = '[[-121.5283155441284,38.16172478418468],[-121.51544094085693,38.16172478418468],[-121.51544094085693,38.16791636919515],[-121.5283155441284,38.16791636919515],[-121.5283155441284,38.16172478418468]]'
timezone = get_timezone(json.loads(boundary_geometry))
end_dt = datetime.strptime(datetime.now(timezone).strftime("%Y-%m-%d"), "%Y-%m-%d")
start_dt = end_dt - timedelta(days=60)
# +
# Create boundary and get satellite and weather data (historical and forecast)
get_sat_weather_data(fb_client,
farmer_id,
boundary_id,
json.loads(boundary_geometry),
start_dt,
end_dt)
# get boundary object
boundary = fb_client.boundaries.get(
farmer_id=farmer_id,
boundary_id=boundary_id
)
# -
boundary.as_dict()
# +
root_dir = CONSTANTS['root_dir']
sat_links = SatelliteUtil(farmbeats_client = fb_client).download_and_get_sat_file_paths(farmer_id, [boundary], start_dt, end_dt, root_dir)
# get the date of the latest available satellite data
end_dt_w = datetime.strptime(
sat_links.sceneDateTime.sort_values(ascending=False).values[0][:10], "%Y-%m-%d"
)
# calculate 30 days from last satellite available date
start_dt_w = end_dt_w - timedelta(days=CONSTANTS["input_days"] - 1)
# -
# #### Weather Data
# get weather data historical
weather_list = fb_client.weather.list(
farmer_id= boundary.farmer_id,
boundary_id= boundary.id,
start_date_time=start_dt_w,
end_date_time=end_dt,
extension_id=farmbeats_config['weather_provider_extension_id'],
weather_data_type= "historical",
granularity="daily")
weather_data = []
for w_data in weather_list:
weather_data.append(w_data)
w_df_hist = WeatherUtil.get_weather_data_df(weather_data)
# +
# get weather data forecast
weather_list = fb_client.weather.list(
farmer_id= boundary.farmer_id,
boundary_id= boundary.id,
start_date_time=end_dt,
end_date_time=end_dt + timedelta(10),
extension_id=farmbeats_config['weather_provider_extension_id'],
weather_data_type= "forecast",
granularity="daily")
weather_data = []
for w_data in weather_list:
weather_data.append(w_data)
w_df_forecast = WeatherUtil.get_weather_data_df(weather_data)
# +
# merge weather data
weather_df = pd.concat([w_df_hist, w_df_forecast], axis=0, ignore_index=True)
with open(CONSTANTS["w_pkl"], "rb") as f:
w_parms, weather_mean, weather_std = pickle.load(f)
# -
# ### Prepare ARD for test boundary
# +
ard = ard_preprocess(
sat_file_links=sat_links,
w_df=weather_df,
sat_res_x=1,
var_name=CONSTANTS["var_name"],
interp_date_start=end_dt_w - timedelta(days=60),
interp_date_end=end_dt_w,
w_parms=w_parms,
input_days=CONSTANTS["input_days"],
output_days=CONSTANTS["output_days"],
ref_tm=start_dt_w.strftime("%d-%m-%Y"),
w_mn=weather_mean,
w_sd=weather_std,
)
frcst_st_dt = end_dt_w
# -
# raise exception if ARD is empty
if ard.shape[0] == 0:
raise Exception("Analysis ready dataset is empty")
# raise exception if data spills into multiple rows
if ard.query("grp1_ > 0").shape[0] > 0:
raise Exception(
"More than one record has been found for more than one pixel"
)
# warning if nans are in input data or data is out of bounds
if (
ard.query("not nan_input_evi").shape[0] > 0
or ard.query("not nan_input_w").shape[0] > 0
or ard.query("not nan_output_w").shape[0] > 0
):
print("Warning: NaNs found in the input data")
if (
ard.query(
"nan_input_evi and nan_input_w and nan_output_w and not input_evi_le1"
).shape[0]
> 0
):
print("Warning: input data outside range of (-1,1) found")
# ### Load Model
# read model and weather normalization stats
model = tf.keras.models.load_model(CONSTANTS["model_trained"], compile=False)
# ### Model Predictions
# +
# model prediction
label = model.predict(
[
np.array(ard.input_evi.to_list()),
np.array(ard.input_weather.to_list()),
np.array(ard.forecast_weather.to_list()),
]
)
label_names = [
(frcst_st_dt + timedelta(days=i + 1)).strftime("%Y-%m-%d")
for i in range(CONSTANTS["output_days"])
]
pred_df = pd.DataFrame(label[:, :, 0], columns=label_names).assign(
lat=ard.lat_.values, long=ard.long_.values
)
# -
pred_df.dropna().head()
# ### Write Output to TIF files
# +
# %matplotlib inline
import time
from IPython import display
from rasterio.plot import show
import shutil
ref_tif = sat_links.filePath.values[0]
with rasterio.open(ref_tif) as src:
ras_meta = src.profile
time_stamp = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")  # filesystem-safe (no colons)
output_dir = "results/model_output_" + time_stamp + "/"
try:
if os.path.exists(output_dir):
shutil.rmtree(output_dir)
os.mkdir(output_dir)
except Exception as e:
print(e)
# -
for coln in pred_df.columns[:-2]:  # Skip last 2 columns: latitude, longitude
try:
data_array = np.array(pred_df[coln]).reshape(src.shape)
with rasterio.open(os.path.join(output_dir, coln + '.tif'), 'w', **ras_meta) as dst:
dst.write(data_array, indexes=1)
except Exception as e:
print(e)
# ### Visualize NDVI Forecast Maps
for coln in pred_df.columns[:-2]:  # Skip last 2 columns: latitude, longitude
try:
src = rasterio.open(os.path.join(output_dir, coln + '.tif'))
show(src.read(), transform=src.transform, title=coln)
#show_hist(src)
display.clear_output(wait=True)
time.sleep(1)
except Exception as e:
print(e)
# ### Next Step
# please go to [4_deploy_azure.ipynb](./4_deploy_azure.ipynb)
| ndvi_forecast/3_test.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Object Detection with SSD
# ### Here we demonstrate detection on example images using SSD with PyTorch
# +
import os
import sys
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
sys.path.append(module_path)
import torch
import torch.nn as nn
import torch.backends.cudnn as cudnn
from torch.autograd import Variable
import numpy as np
import cv2
if torch.cuda.is_available():
torch.set_default_tensor_type('torch.cuda.FloatTensor')
from ssd import build_ssd
# -
# ## Build SSD300 in Test Phase
# 1. Build the architecture, specifying the size of the input image (300),
# and the number of object classes to score (21 for the VOC dataset)
# 2. Next we load pretrained weights on the VOC0712 trainval dataset
net = build_ssd('test', 300, 21) # initialize SSD
net.load_weights('../weights/ssd300_mAP_77.43_v2.pth')
# ## Load Image
# ### Here we just load a sample image from the VOC07 dataset
# image = cv2.imread('./data/example.jpg', cv2.IMREAD_COLOR) # uncomment if dataset not downloaded
# %matplotlib inline
from matplotlib import pyplot as plt
from data import VOCDetection, VOC_ROOT, VOCAnnotationTransform
# here we specify year (07 or 12) and dataset ('test', 'val', 'train')
testset = VOCDetection(VOC_ROOT, [('2007', 'val')], None, VOCAnnotationTransform())
img_id = 61
image = testset.pull_image(img_id)
rgb_image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
# View the sampled input image before transform
plt.figure(figsize=(10,10))
plt.imshow(rgb_image)
plt.show()
# ## Pre-process the input.
# #### Using the torchvision package, we can create a Compose of multiple built-in transform ops to apply
# For SSD, at test time we use a custom BaseTransform callable to
# resize our image to 300x300, subtract the dataset's mean rgb values,
# and swap the color channels for input to SSD300.
x = cv2.resize(image, (300, 300)).astype(np.float32)
x -= (104.0, 117.0, 123.0)
x = x.astype(np.float32)
x = x[:, :, ::-1].copy()
plt.imshow(x)
x = torch.from_numpy(x).permute(2, 0, 1)
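The mean-subtraction, channel-swap, and axis-permute steps above can be wrapped in one helper. A pure-NumPy sketch of those three steps (the cv2 resize is omitted so the snippet carries no OpenCV dependency; the mean values are the ones hard-coded above):

```python
import numpy as np

def base_transform(image, mean=(104.0, 117.0, 123.0)):
    """Mean-subtract, swap BGR->RGB, and permute HWC->CHW for SSD input."""
    x = image.astype(np.float32) - np.array(mean, dtype=np.float32)
    x = x[:, :, ::-1]                                  # BGR -> RGB
    return np.ascontiguousarray(x.transpose(2, 0, 1))  # HWC -> CHW

# shape check on a dummy 300x300 BGR image
dummy = np.full((300, 300, 3), 104.0, dtype=np.float32)
out = base_transform(dummy)
print(out.shape)  # (3, 300, 300)
```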
# ## SSD Forward Pass
# ### Now just wrap the image in a Variable so it is recognized by PyTorch autograd
xx = Variable(x.unsqueeze(0)) # wrap tensor in Variable
if torch.cuda.is_available():
xx = xx.cuda()
y = net(xx)
# ## Parse the Detections and View Results
# Filter outputs with confidence scores lower than a threshold
# Here we choose 60%
# +
from data import VOC_CLASSES as labels
top_k=10
plt.figure(figsize=(10,10))
colors = plt.cm.hsv(np.linspace(0, 1, 21)).tolist()
plt.imshow(rgb_image) # plot the image for matplotlib
currentAxis = plt.gca()
detections = y.data
# scale each detection back up to the image
scale = torch.Tensor(rgb_image.shape[1::-1]).repeat(2)
for i in range(detections.size(1)):
j = 0
    while j < detections.size(2) and detections[0,i,j,0] >= 0.6:
score = detections[0,i,j,0]
label_name = labels[i-1]
display_txt = '%s: %.2f'%(label_name, score)
pt = (detections[0,i,j,1:]*scale).cpu().numpy()
coords = (pt[0], pt[1]), pt[2]-pt[0]+1, pt[3]-pt[1]+1
color = colors[i]
currentAxis.add_patch(plt.Rectangle(*coords, fill=False, edgecolor=color, linewidth=2))
currentAxis.text(pt[0], pt[1], display_txt, bbox={'facecolor':color, 'alpha':0.5})
j+=1
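The parsing loop above can be factored into a reusable helper. A NumPy sketch with a synthetic detections array, assuming the SSD test-head output shape `[1, num_classes, top_k, 5]` (score followed by the four box coordinates):

```python
import numpy as np

def parse_detections(detections, threshold=0.6):
    """Collect (class_index, score, box) for entries above the threshold.
    Entries are assumed sorted by score within each class, as in SSD."""
    results = []
    _, num_classes, top_k, _ = detections.shape
    for i in range(num_classes):
        for j in range(top_k):
            score = detections[0, i, j, 0]
            if score < threshold:
                break  # remaining entries for this class score lower
            results.append((i, float(score), detections[0, i, j, 1:].tolist()))
    return results

# synthetic tensor: only class 1 has one confident detection
dets = np.zeros((1, 3, 5, 5))
dets[0, 1, 0] = [0.9, 0.1, 0.1, 0.5, 0.5]
print(parse_detections(dets))  # [(1, 0.9, [0.1, 0.1, 0.5, 0.5])]
```

Bounding the inner loop by `top_k` also avoids the index overrun the raw `while` loop risks when every entry for a class clears the threshold.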
| demo/demo.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
import cv2 as cv
import numpy as np
import matplotlib.pyplot as plt
lst = [[0 , 0,0 ],
[50 , 50,50 ],
[100,100,100]]
plt.imshow(lst)
# +
lst = [[[100,0,0],[0,100,0],[0,0,100]],
[[150,0,0],[0,150,0],[0,0,150]],
[[200,0,0],[0,200,0],[0,0,200]]]
#RGB
plt.imshow(lst)
# +
lst = [[[61,144,246],[249,144,60],[255,213,97]],
[[150,100,50],[100,150,80],[160,100,150]],
[[200,100,90],[100,200,150],[189,100,200]]]
#RGB
plt.imshow(lst)
# -
| Create Coloured Image/Create Coloured Image.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Exercises - Variables
# # Set 1
# ## Exercise 1
# Assign the value 100.00 to the variable x, and the value 150.00 to the variable y. Then display the results.
# Place your code here
# ## Exercise 2
# Which of the proposed variable names follows the Python naming convention (more than one answer):
#
# a) kolo_naukowe </br>
# b) koloNaukowe </br>
# c) 2022kolo_naukowe </br>
# d) _kngik </br>
# ## Exercise 3
# Which function do we use to check the type of a variable?
#
# a) print() </br>
# b) type() </br>
# c) bool() </br>
# d) types() </br>
# ## Exercise 4
# Assign the values 52.10, 21.05 and 210.05 to the variables fi, lam and h (try to do it in a single line).
# Print the values of the variables using the print function.
# Place your code here
# ## Exercise 5
# Assign the value 46.05 to the variable fi, and the value 36.00 to the variable lam. Then assign the value of lam to fi. Display the result.
# Place your code here
# ## Exercise 6
# Assign the name of a unit of your choice from the GIK faculty to the variable jednostka. Display the result.
# Place your code here
# ## Exercise 7
# Create a variable kurs with the value Python, a variable jednostka with the value KNGiK, and a variable rok with the value 2022. Then display the following text: "Kurs Python z KNGiK 2022". Use an f-string.
# Place your code here
# ## Exercise 8
# Determine the type of the variable `liczba = 120`. Write a code fragment that checks the type of the variable.
liczba = 120
# Place your code here
# ## Exercise 9
# Determine the type of the value ```True```.
# ## Exercise 10
# Create a variable done = False. Change its value to the opposite one and display the result.
# Place your code here
| 1_typy_zmiennych/1_cwiczenia.ipynb |
/ -*- coding: utf-8 -*-
/ ---
/ jupyter:
/ jupytext:
/ text_representation:
/ extension: .q
/ format_name: light
/ format_version: '1.5'
/ jupytext_version: 1.14.4
/ ---
/ + [markdown] cell_id="00000-7236f12d-273c-428f-b21b-648a052196c7" deepnote_cell_type="markdown" tags=[]
/ ### Not sure whose notebook this is but be sure to give it a nice name :D
/ + cell_id="00001-7014e2f5-e9ff-4d8f-9ce2-884bbbe74d6b" deepnote_cell_type="code" deepnote_to_be_reexecuted=false execution_millis=901 execution_start=1617094292567 source_hash="745f61f9" tags=[]
# explorative data analysis
import matplotlib.pyplot as plt
import pylab as P
with open('./data/Train_call.txt', 'r') as fhandle:
lines = fhandle.readlines()
arrays = lines[0].strip('\n')
arrays = arrays.split('\t')
array_dict = {}
array_list = []
for array in arrays[4:]:
array_dict[array] = {}
for i in range(1,24):
array_dict[array][i] = []
array_list.append(array)
for line in lines[1:]:
split_line = line.split('\t')
chromosome = int(split_line[0])
for i in range(0,100):
array_dict[array_list[i]][chromosome] = int(split_line[i+4])
# array_dict[array_name][chromosome] = score (-1,0,1,2)
with open('./data/Train_clinical.txt', 'r') as fhandle:
lines = fhandle.readlines()
label_dict = {}
for line in lines[1:]:
line = line.strip('\n')
line = line.split('\t')
label_dict[line[0]] = line[1]
for i in range(0,100):
per_chrom = []
for j in range(1,24):
per_chrom.append(array_dict[array_list[i]][j])
label = label_dict[array_list[i]]
if label == '"HER2+"':
color = 'red'
elif label == '"Triple Neg"':
color = 'blue'
else:
color = 'green'
plt.plot(range(1,24), per_chrom, c=color, label = color)
print('all arrays colored by label (HR+ = green, HER+ = red, Triple Neg = blue)')
plt.show()
raw_count = {}
for per_label in ['"HER2+"','"Triple Neg"','"HR+"']:
raw_count[per_label] = {}
for j in range(1,24):
raw_count[per_label][j] = {-1:0, 0:0, 1:0, 2:0}
for i in range(0,100):
label = label_dict[array_list[i]]
per_chrom = []
if label == per_label:
for j in range(1,24):
per_chrom.append(array_dict[array_list[i]][j])
raw_count[label][j][array_dict[array_list[i]][j]] += 1
if label == '"HER2+"':
color = 'red'
plt.plot(range(1,24), per_chrom, c=color, label = color)
elif label == '"Triple Neg"':
color = 'blue'
plt.plot(range(1,24), per_chrom, c=color, label = color)
else:
color = 'green'
plt.plot(range(1,24), per_chrom, c=color, label = color)
print(per_label,raw_count[per_label])
plt.show()
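The nested-dictionary counting above can also be expressed with pandas; a sketch on invented toy data (the column names here are hypothetical, not the Train_call schema):

```python
import pandas as pd

# toy long-format table: one row per (label, chromosome) score observation
df = pd.DataFrame({
    "label": ["HER2+", "HER2+", "HR+", "HR+", "HER2+"],
    "chromosome": [17, 17, 6, 6, 17],
    "score": [2, 2, 0, -1, 1],
})

# counts of each score per (label, chromosome), like raw_count above
counts = pd.crosstab([df["label"], df["chromosome"]], df["score"])
print(counts)
```

`crosstab` returns a frame with a (label, chromosome) MultiIndex and one column per observed score, which feeds directly into the bar plots below.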
/ + cell_id="00002-4754d2e8-2e3f-43d8-9c88-e78b1b21395e" deepnote_cell_type="code" deepnote_to_be_reexecuted=false execution_millis=1857 execution_start=1617093925275 source_hash="51ba0547" tags=[]
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
for per_label in ['"HER2+"','"Triple Neg"','"HR+"']:
labels = range(1,24)
minus1 = []
zero = []
one = []
two = []
if per_label == '"HER2+"':
color = 'red'
elif per_label == '"Triple Neg"':
color = 'blue'
else:
color = 'green'
for i in labels:
minus1.append(raw_count[per_label][i][-1])
zero.append(raw_count[per_label][i][0])
one.append(raw_count[per_label][i][1])
two.append(raw_count[per_label][i][2])
x = np.arange(len(labels)) # the label locations
width = 0.2 # the width of the bars
fig, ax = plt.subplots()
rects1 = ax.bar(x - width*1.5, minus1, width, label='-1' )
rects2 = ax.bar(x - width/2, zero, width, label='0')
rects3 = ax.bar(x + width/2, one, width, label ='1')
rects4 = ax.bar(x + width*1.5, two, width, label = '2')
ax.set_ylabel('relative frequency alteration')
ax.set_title(('relative frequencies for '+ per_label+ ' per chromosome'))
ax.set_xticks(x)
ax.set_xticklabels(labels)
ax.legend()
fig.tight_layout()
plt.show()
/ + [markdown] cell_id="00004-3e73e519-a194-4174-b909-bde636e0bd54" deepnote_cell_type="markdown" tags=[]
/
/ + [markdown] cell_id="00004-7fb8b17f-f707-40eb-9470-01b860c50bdd" deepnote_cell_type="markdown" tags=[]
/ HER2
/ - The HER2 receptor is a 1255 amino acid, 185 kD transmembrane glycoprotein located at the long arm of human *chromosome 17 (17q12)* [6]. HER2 is expressed in many tissues and its major role in these tissues is to facilitate excessive/uncontrolled cell growth and tumorigenesis
/ https://www.hindawi.com/journals/mbi/2014/852748/
/
/ HR: hormone receptor positive, either estrogen or progesterone receptor:
/ - ER: ESR1 and ESR2 on the sixth and fourteenth chromosome (6q25.1 and 14q23.2) (https://en.wikipedia.org/wiki/Estrogen_receptor#Genetics)
/ - PR: PR is encoded by a single PGR gene residing on chromosome 11q22 (https://en.wikipedia.org/wiki/Progesterone_receptor)
/
/ TN
/ - Triple-negative breast cancer (TNBC) is defined as a type of breast cancer with negative expression of estrogen (ER), progesterone (PR), and human epidermal growth factor receptor-2 (HER2) (https://breast-cancer-research.biomedcentral.com/articles/10.1186/s13058-020-01296-5#:~:text=Triple%2Dnegative%20breast%20cancer%20(TNBC)%20is%20defined%20as%20a,like%20breast%20cancer%20(BLBC) )
/
| src/raw_data_visualisation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="7r56EFezrQ5X" colab={"base_uri": "https://localhost:8080/"} outputId="1f76dfde-7669-432d-cbcd-b5d5429b90f3"
import keras
from keras.models import *
from keras.layers import *
from keras.regularizers import *
import sklearn
import tensorflow as tf
import keras.optimizers
import sqlite3
import datetime
import math
import seaborn as sns
import numpy as np
import pandas as pd
# !pip install livelossplot
from livelossplot.keras import PlotLossesCallback
# + id="0cwjFmrZrpjK"
conn = sqlite3.connect('database.db')
# + colab={"base_uri": "https://localhost:8080/"} id="DH_4M_JXr3bA" outputId="62ab464d-224f-44e0-eb68-d8eef75c094a"
raw_data = [i for i in conn.execute("SELECT lat, lon, time FROM simulated")]
print(len(raw_data))
print(raw_data[0])
# + id="04T_hKrwtKfG"
def parse_time(time):
time = time.split(".")[0]
return datetime.datetime.strptime(time, "%Y-%m-%d %H:%M:%S").timestamp()
def to_time(sample):
return sample[2]
start_time = min(raw_data, key=lambda x: parse_time(to_time(x)))
end_time = max(raw_data, key=lambda x: parse_time(to_time(x)))
start_time = parse_time(to_time(start_time))
end_time = parse_time(to_time(end_time))
BUCKETS = 1000
processed = []
for i in range(BUCKETS):
processed.append([])
for sample in raw_data:
curr_time = parse_time(to_time(sample))
curr_idx = math.floor((curr_time - start_time - 1) * BUCKETS / (end_time - start_time))
processed[curr_idx].append(sample)
# + colab={"base_uri": "https://localhost:8080/"} id="SKZ5G6mE3R9M" outputId="6434db30-8137-472d-9bcf-3d560a77050d"
def dist(p1, p2):
x = p1[0] - p2[0]
y = p1[1] - p2[1]
return math.sqrt(x**2 + y**2)
def clean_processed(data):
cleaned = []
for point in data:
if cleaned:
min_pt = min(cleaned, key=lambda x: dist(point, x))
if dist(min_pt, point) > 0.0002:
cleaned.append(point)
else:
cleaned.append(point)
return cleaned
print(len(processed[0]))
print(len(clean_processed(processed[0])))
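`clean_processed` compares each point against every previously kept point, which is quadratic in the bucket size. For larger buckets, an approximate grid-based deduplication is much cheaper; a sketch reusing the 0.0002 cutoff above as the cell size:

```python
def clean_grid(points, cell=0.0002):
    """Keep at most one point per cell-sized lat/lon grid cell --
    an approximate, O(n) stand-in for the pairwise-distance filter."""
    seen = set()
    kept = []
    for p in points:
        key = (round(p[0] / cell), round(p[1] / cell))
        if key not in seen:
            seen.add(key)
            kept.append(p)
    return kept

pts = [(40.4434285, -79.9456302), (40.4434286, -79.9456303), (40.5, -79.9)]
print(len(clean_grid(pts)))  # 2: the first two points share a grid cell
```

Unlike the pairwise filter, two nearby points on opposite sides of a cell boundary may both be kept, so this is an approximation that trades a little precision for speed.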
# + colab={"base_uri": "https://localhost:8080/", "height": 143} id="S0dD5YWVCVhh" outputId="0507e792-b511-44b7-f5d8-b86a814c585f"
pd.DataFrame(data={
"X": [1,2,3],
"Y": [5,6,7]
})
# + id="1IJ_7GCyECdh"
def extract_points(key_points, data):
ret = []
for point in data:
min_point = min(key_points, key=lambda x: dist(x, point))
if dist(min_point, point) < 0.002:
ret.append(point)
return ret
# + colab={"base_uri": "https://localhost:8080/", "height": 325} id="y3ygucBsxJU3" outputId="9a677a1b-1432-45c5-f59f-37d5c8d47787"
def plot(data):
x = [a[0] for a in data]
y = [a[1] for a in data]
print(x)
sns.scatterplot(data=pd.DataFrame(data={
"latitude": x,
"longitude": y
}), x="longitude", y="latitude")
#plot(processed[0])
plot(clean_processed(processed[0]))
key_points = [
(40.443428578, -79.94563020846905),
(40.44301507, -79.94203969055374),
(40.442504266, -79.9400592703583),
(40.443939382, -79.94218459934854),
(40.441470496, -79.94208799348534),
(40.445252878, -79.94324726384365)
]
plot(extract_points(key_points, clean_processed(processed[0])))
# + colab={"base_uri": "https://localhost:8080/"} id="NIFL5Z-PFIIZ" outputId="1f4cefbe-0877-4976-cf10-0fab742c219c"
processed[0][0]
# + id="MRTHDfETwWDl"
dist_cutoff = 0.001
def extract_features_bucket(keypoints, points_in_bucket):
result = []
for i in range(len(keypoints)):
sum = 0
cur_keypoint = keypoints[i]
for point in points_in_bucket:
if(dist(cur_keypoint, point) < dist_cutoff):
sum += 1
result.append(sum)
return result
def extract_features_total(keypoints, points):
result = []
for points_in_bucket in points:
result.append(extract_features_bucket(keypoints, points_in_bucket))
df = pd.DataFrame(result, columns = ["hunan", "entropy", "resnick", "abp", "exchange", "underground"])
return df
features = extract_features_total(key_points, processed).to_numpy()
features = features / features.max()
# + colab={"base_uri": "https://localhost:8080/"} id="J1bBFM8MYe7T" outputId="cfeb8f16-5b8a-4033-9403-77a7cd6ab022"
features
# + id="r3-PIIn9To23"
X_data = []
y_data = []
TIME_INTERVAL = 50
for i in range(BUCKETS - TIME_INTERVAL - 1):
X_data.append(features[i:i+TIME_INTERVAL])
y_data.append(features[i + TIME_INTERVAL])
X_data = np.array(X_data)
y_data = np.array(y_data)
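The loop above slides a fixed-length window over the bucketed feature matrix, pairing each window of history with the next row as the prediction target. The same construction on a tiny array (this sketch keeps every possible pair; the original's extra `- 1` drops the final one):

```python
import numpy as np

def make_windows(series, window):
    """Build (X, y) pairs: X[i] is `window` consecutive rows, y[i] the next row."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    return np.array(X), np.array(y)

toy = np.arange(10).reshape(10, 1)   # 10 time steps, 1 feature
Xw, yw = make_windows(toy, window=3)
print(Xw.shape, yw.shape)    # (7, 3, 1) (7, 1)
print(Xw[0].ravel(), yw[0])  # [0 1 2] [3]
```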
# + colab={"base_uri": "https://localhost:8080/"} id="X-dE-Zh-YYEg" outputId="876011a7-c9b8-4d1c-c098-cb302c287633"
X_data[0]
# + id="5pFW8T7dXB_F"
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X_data, y_data, test_size=0.33, random_state=42)
# + id="ZyinOoL6LwQZ" colab={"base_uri": "https://localhost:8080/"} outputId="84dbc617-b083-4763-dcc5-0ced848dd977"
hidden_dim = 16
num_classes = y_data.shape[1]
model = keras.Sequential()
model.add(GRU(hidden_dim, return_sequences=False, input_shape=X_data.shape[1:])) #, kernel_regularizer=L1L2(l1=0.02, l2=0.02)))
model.add(Dense(num_classes, activation="sigmoid"))
model.compile(loss="mean_squared_error", optimizer=tf.keras.optimizers.RMSprop(learning_rate=0.0001))
model.summary()
# + colab={"base_uri": "https://localhost:8080/", "height": 637} id="VaRMcBi0uQpA" outputId="f8643f83-3033-4192-b9dd-ccc35970b863"
history = model.fit(X_train, y_train,
batch_size=16,
epochs=100,
callbacks=[PlotLossesCallback()],
validation_data=(X_test, y_test)
)
# + colab={"base_uri": "https://localhost:8080/"} id="CWmx-9k-tN00" outputId="5a85518e-bf00-43ed-e055-7fdabd0709a5"
# + id="rGMA8zNJtRyi"
| notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # ImageProcess.ipynb
#
# ### This is an attempt to use Canny edge detection and Probabilistic Hough line transformation to programmatically turn the scanned images into square images with the proper orientation.
# +
# imports
import cv2
from matplotlib import pyplot as plt
import os
import numpy as np
import random
# location of the TDS images on disk - change this to match your own system!
IMAGE_DIR = '/home/jaeger/dev/github.com/tds/Playground/TDS_Image_Proj/images'
# +
# load an image from disk
orig_img = cv2.imread(os.path.join(IMAGE_DIR, 'thefan2.jpg'))
# save its size for later use
cols, rows, channels = orig_img.shape
# display the loaded image
plt.figure(figsize=(8, 8), dpi=96)
plt.imshow(orig_img)
plt.show()
# +
# get rotation matrix (2D affine rotation) (cv2.getRotationMatrix2D((center), angle, scale))
M = cv2.getRotationMatrix2D((cols/2, rows/2), -90, 1)
# warp the image using the calculated matrix (cv2.warpAffine(source, transformation matrix, (output size))
img_rot = cv2.warpAffine(orig_img, M, (cols, rows))
# display the newly-rotated image
plt.figure(figsize=(8, 8), dpi=96)
plt.imshow(img_rot)
plt.show()
# +
# convert it to grayscale
gray_img = cv2.cvtColor(img_rot, cv2.COLOR_BGR2GRAY)
# display the grayscale image
plt.figure(figsize=(8, 8), dpi=96)
plt.imshow(gray_img, cmap='gray')
plt.show()
# +
# find edges (Canny edge detection) (cv2.Canny(image, minval, maxval))
edges = cv2.Canny(gray_img, 45, 165)
# display edges
plt.figure(figsize=(8, 8), dpi=96)
plt.imshow(edges, cmap='gray')
plt.show()
# +
# make a copy of the rotated image on which to draw
img_draw = img_rot
# find lines (Probabilistic Hough Transformation) (cv2.HoughLinesP(image, rho, theta, threshold, lines, minlength, maxgap))
lines = cv2.HoughLinesP(edges, 1, np.pi/180, 50, minLineLength=200, maxLineGap=80)
if lines is not None:
for line in lines:
x1, y1, x2, y2 = line[0]
cv2.line(img_draw, (x1, y1), (x2, y2), (255, 255, 0), 1)
# display lines
plt.figure(figsize=(8, 8), dpi=96)
plt.imshow(img_draw)
plt.show()
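To finish the deskewing this notebook sets out to do, one common next step (not implemented above) is to estimate the dominant tilt of the detected segments and rotate by its negative. A sketch on synthetic line endpoints (`HoughLinesP` yields the same `(x1, y1, x2, y2)` tuples):

```python
import math
import numpy as np

def dominant_angle(lines):
    """Median angle (degrees) of line segments given as (x1, y1, x2, y2),
    folded into [-45, 45) so near-horizontal and near-vertical edges agree."""
    angles = []
    for x1, y1, x2, y2 in lines:
        a = math.degrees(math.atan2(y2 - y1, x2 - x1))
        angles.append((a + 45) % 90 - 45)   # fold to [-45, 45)
    return float(np.median(angles))

# synthetic segments, all tilted roughly 2 degrees
segs = [(0, 0, 100, 3), (0, 10, 200, 17), (50, 0, 47, 86)]
print(round(dominant_angle(segs), 1))  # ~2.0
```

The returned angle can be fed straight into `cv2.getRotationMatrix2D` (as in the rotation cell above) to square up the scan; the median makes the estimate robust to a few stray segments.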
| TDS_Image_Proj/code/ImageProcess.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Quantum Classifier
#
# <em> Copyright (c) 2021 Institute for Quantum Computing, Baidu Inc. All Rights Reserved. </em>
# ## Overview
#
# In this tutorial we discuss the principle of the quantum classifier and how to complete a **binary classification** task with a quantum neural network (QNN). Representative early works in this line of research include [Quantum Circuit Learning (QCL)](https://arxiv.org/abs/1803.00745) by Mitarai et al. (2018) [1], Farhi & Neven (2018) [2], and the [Circuit-Centric Quantum Classifiers](https://arxiv.org/abs/1804.00633) of Schuld et al. (2018) [3]. Here we take the first of these, the QCL framework applied to supervised learning, as our example: typically, we first encode classical data into quantum data, then train the parameters of a quantum neural network to obtain an optimal classifier.
# ### Background
#
# In the supervised-learning setting, we are given a dataset $D = \{(x^k,y^k)\}_{k=1}^{N}$ of $N$ labeled data points, where $x^k\in \mathbb{R}^{m}$ is a data point and $y^k \in\{0,1\}$ is the label of the data point $x^k$. **Classification is essentially a decision process: deciding which label a given data point belongs to.** In the quantum-classifier framework, the classifier $\mathcal{F}$ is realized as the combination of a quantum neural network / parameterized quantum circuit with parameters $\theta$, measurements of the quantum system, and classical data post-processing. A good classifier $\mathcal{F}_\theta$ should map every data point in the dataset to its correct label as accurately as possible, $\mathcal{F}_\theta(x^k) \rightarrow y^k$. We therefore take the cumulative distance between the predicted labels $\tilde{y}^{k} = \mathcal{F}_\theta(x^k)$ and the true labels $y^k$ as the loss function $\mathcal{L}(\theta)$ to optimize. For binary classification, one can choose the quadratic loss
#
# $$
# \mathcal{L}(\theta) = \sum_{k=1}^N |\tilde{y}^{k}-y^k|^2. \tag{1}
# $$
#
#
# ### Workflow
#
# Here we outline the procedure for implementing a quantum classifier under the Quantum Circuit Learning (QCL) framework.
#
# 1. Apply parameterized unitary gates $U$ to initialized qubits $\lvert 0 \rangle$, thereby encoding the original classical data point $x^k$ into quantum data $\lvert \psi_{in}\rangle^k$ that a quantum computer can process.
# 2. Pass the input state $\lvert \psi_{in} \rangle^k$ through the parameterized circuit $U(\theta)$ with parameters $\theta$, obtaining the output state $\lvert \psi_{out}\rangle^k = U(\theta)\lvert \psi_{in} \rangle^k$.
# 3. Measure and post-process the state $\lvert \psi_{out}\rangle^k$ produced by the quantum neural network to obtain the estimated label $\tilde{y}^{k}$.
# 4. Repeat steps 2-3 until every data point in the dataset has been processed, then compute the loss function $\mathcal{L}(\theta)$.
# 5. Continuously adjust the parameters $\theta$ via gradient descent or other optimization methods to minimize the loss function. Record the optimal parameters $\theta^*$ found after optimization; we have then learned the optimal classifier $\mathcal{F}_{\theta^*}$.
#
#
#
# 
# <div style="text-align:center">Figure 1: Flowchart of training a quantum classifier </div>
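Steps 1-5 can be illustrated end to end on a single qubit with plain NumPy (a toy sketch, not the Paddle Quantum implementation developed below): encode a scalar feature with $R_y(\arcsin x)$, apply one trainable $R_y(\theta)$ as the "network", read out $\langle Z \rangle$, map it into $[0, 1]$, and stand in for gradient descent with a crude grid search over $\theta$:

```python
import numpy as np

def ry(t):
    """Single-qubit Y-rotation matrix."""
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]])

Z = np.diag([1.0, -1.0])

def predict(x, theta):
    psi = ry(theta) @ ry(np.arcsin(x)) @ np.array([1.0, 0.0])  # steps 1-2
    return (psi @ Z @ psi) / 2 + 0.5                           # step 3

xs = np.array([-0.9, -0.5, 0.5, 0.9])  # toy features
ys = np.array([1.0, 1.0, 0.0, 0.0])    # toy labels

def loss(theta):  # quadratic loss, Eq. (1)
    return sum((predict(x, theta) - y) ** 2 for x, y in zip(xs, ys))

thetas = np.linspace(-np.pi, np.pi, 201)  # crude stand-in for step 5
best = min(thetas, key=loss)
print(loss(best), "<", loss(0.0))
```

Even this one-parameter "network" finds a $\theta$ with lower loss than the untrained circuit, which is the whole training loop in miniature.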
# ## Implementation with Paddle Quantum
#
# First, we import the required packages:
#
# +
import time
import matplotlib
import numpy as np
import paddle
from numpy import pi as PI
from matplotlib import pyplot as plt
import warnings
warnings.filterwarnings("ignore")
from paddle import matmul, transpose
from paddle_quantum.circuit import UAnsatz
from paddle_quantum.utils import pauli_str_to_matrix
# -
# These are the main functions that will be used in this tutorial
__all__ = [
"circle_data_point_generator",
"data_point_plot",
"heatmap_plot",
"Ry",
"Rz",
"Observable",
"U_theta",
"Net",
"QC",
"main",
]
# ### Generating the dataset
#
# For supervised learning, one unavoidable question is: what does the dataset look like? In this tutorial, following the method described in paper [1], we generate a simple binary dataset $\{(x^{k}, y^{k})\}$ with a circular decision boundary, where the data points $x^{k}\in \mathbb{R}^{2}$ and the labels $y^{k} \in \{0,1\}$.
#
# 
# <div style="text-align:center">Figure 2: The generated dataset and the corresponding decision boundary </div>
#
# The generation procedure and its visualization are given in the following code:
# +
# Generator for a binary classification dataset with a circular decision boundary
def circle_data_point_generator(Ntrain, Ntest, boundary_gap, seed_data):
    """
    :param Ntrain: size of the training set
    :param Ntest: size of the test set
    :param boundary_gap: in (0, 0.5), the gap between the two classes
    :param seed_data: random seed
    :return: 'Ntrain' training samples
             'Ntest' test samples
    """
train_x, train_y = [], []
num_samples, seed_para = 0, 0
while num_samples < Ntrain + Ntest:
np.random.seed((seed_data + 10) * 1000 + seed_para + num_samples)
data_point = np.random.rand(2) * 2 - 1
        # if the norm of the data point is less than (0.7 - gap/2), label it 0
if np.linalg.norm(data_point) < 0.7 - boundary_gap / 2:
train_x.append(data_point)
train_y.append(0.)
num_samples += 1
        # if the norm of the data point is greater than (0.7 + gap/2), label it 1
elif np.linalg.norm(data_point) > 0.7 + boundary_gap / 2:
train_x.append(data_point)
train_y.append(1.)
num_samples += 1
else:
seed_para += 1
train_x = np.array(train_x).astype("float64")
train_y = np.array([train_y]).astype("float64").T
    print("Dimensions of the training set: x {} and y {}".format(np.shape(train_x[0:Ntrain]), np.shape(train_y[0:Ntrain])))
    print("Dimensions of the test set: x {} and y {}".format(np.shape(train_x[Ntrain:]), np.shape(train_y[Ntrain:])), "\n")
return train_x[0:Ntrain], train_y[0:Ntrain], train_x[Ntrain:], train_y[Ntrain:]
# Visualize the generated dataset
def data_point_plot(data, label):
    """
    :param data: shape [M, 2], representing M 2-D data points
    :param label: 0 or 1
    :return: plot of the data points
    """
dim_samples, dim_useless = np.shape(data)
plt.figure(1)
for i in range(dim_samples):
if label[i] == 0:
plt.plot(data[i][0], data[i][1], color="r", marker="o")
elif label[i] == 1:
plt.plot(data[i][0], data[i][1], color="b", marker="o")
plt.show()
# +
# Dataset parameters
Ntrain = 200        # size of the training set
Ntest = 100         # size of the test set
boundary_gap = 0.5  # width of the decision boundary
seed_data = 2       # fixed random seed
# Generate the dataset
train_x, train_y, test_x, test_y = circle_data_point_generator(Ntrain, Ntest, boundary_gap, seed_data)
# Print the dataset information
print("Visualization of the {} data points in the training set:".format(Ntrain))
data_point_plot(train_x, train_y)
print("Visualization of the {} data points in the test set:".format(Ntest))
data_point_plot(test_x, test_y)
print("\n Feel free to adjust the dataset parameters above and generate a dataset of your own!")
# -
# ### Data preprocessing
#
# Unlike classical machine learning, a quantum classifier requires an extra preprocessing step: the classical data must be converted into quantum information before it can run on a quantum computer. Let us see how this is done.
#
# First we determine the number of qubits to use. Since our data $\{x^{k} = (x^{k}_0, x^{k}_1)\}$ is two-dimensional, following the encoding scheme in the Mitarai (2018) paper [1] we need at least 2 qubits. We then prepare a series of initial states $|00\rangle$, encode the classical information $\{x^{k}\}$ as a series of quantum gates $U(x^{k})$, and act with them on the initial states. This yields the quantum states $|\psi\rangle^k = U(x^{k})|00\rangle$, completing the encoding from classical to quantum information. Given $m$ qubits to encode a two-dimensional classical data point, the gate is constructed as:
#
# $$
# U(x^{k}) = \otimes_{j=0}^{m-1} R_j^z\big[\arccos(x_{j \, \text{mod} \, 2}\cdot x_{j \, \text{mod} \, 2})\big] R_j^y\big[\arcsin(x_{j \, \text{mod} \, 2}) \big],\tag{2}
# $$
#
# **Note**: In this convention, the first qubit is indexed $j = 0$. For more encoding schemes, see [Robust data encodings for quantum classifiers](https://arxiv.org/pdf/2003.01695.pdf). Readers can also directly use the [encoding schemes](./DataEncoding_CN.ipynb) provided in Paddle Quantum, and are welcome to experiment with novel encodings of their own.
# Since this encoding looks somewhat involved, let us work through a simple example. Suppose we are given the data point $x = (x_0, x_1)= (1,0)$; clearly its label should be 1, corresponding to the **blue** points in the figure above. The 2-qubit gate $U(x)$ corresponding to this data point is
#
# $$
# U(x) =
# \bigg( R_0^z\big[\arccos(x_{0}\cdot x_{0})\big] R_0^y\big[\arcsin(x_{0}) \big] \bigg)
# \otimes
# \bigg( R_1^z\big[\arccos(x_{1}\cdot x_{1})\big] R_1^y\big[\arcsin(x_{1}) \big] \bigg),\tag{3}
# $$
#
#
# Substituting the concrete values, we obtain:
# $$
# U(x) =
# \bigg( R_0^z\big[0\big] R_0^y\big[\pi/2 \big] \bigg)
# \otimes
# \bigg( R_1^z\big[\pi/2\big] R_1^y\big[0 \big] \bigg),
# \tag{4}
# $$
#
# The matrix forms of the commonly used rotation gates are:
#
#
# $$
# R_x(\theta) :=
# \begin{bmatrix}
# \cos \frac{\theta}{2} &-i\sin \frac{\theta}{2} \\
# -i\sin \frac{\theta}{2} &\cos \frac{\theta}{2}
# \end{bmatrix}
# ,\quad
# R_y(\theta) :=
# \begin{bmatrix}
# \cos \frac{\theta}{2} &-\sin \frac{\theta}{2} \\
# \sin \frac{\theta}{2} &\cos \frac{\theta}{2}
# \end{bmatrix}
# ,\quad
# R_z(\theta) :=
# \begin{bmatrix}
# e^{-i\frac{\theta}{2}} & 0 \\
# 0 & e^{i\frac{\theta}{2}}
# \end{bmatrix}. \tag{5}
# $$
#
# The matrix form of this two-qubit gate $U(x)$ can then be written as:
#
# $$
# U(x) =
# \bigg(
# \begin{bmatrix}
# 1 & 0 \\
# 0 & 1
# \end{bmatrix}
# \begin{bmatrix}
# \cos \frac{\pi}{4} &-\sin \frac{\pi}{4} \\
# \sin \frac{\pi}{4} &\cos \frac{\pi}{4}
# \end{bmatrix}
# \bigg)
# \otimes
# \bigg(
# \begin{bmatrix}
# e^{-i\frac{\pi}{4}} & 0 \\
# 0 & e^{i\frac{\pi}{4}}
# \end{bmatrix}
# \begin{bmatrix}
# 1 &0 \\
# 0 &1
# \end{bmatrix}
# \bigg),\tag{6}
# $$
#
# After simplification, acting on the zero-initialized quantum state $|00\rangle$ gives the encoded quantum state $|\psi\rangle$,
#
# $$
# |\psi\rangle =
# U(x)|00\rangle = \frac{1}{2}
# \begin{bmatrix}
# 1-i &0 &-1+i &0 \\
# 0 &1+i &0 &-1-i \\
# 1-i &0 &1-i &0 \\
# 0 &1+i &0 &1+i
# \end{bmatrix}
# \begin{bmatrix}
# 1 \\
# 0 \\
# 0 \\
# 0
# \end{bmatrix}
# = \frac{1}{2}
# \begin{bmatrix}
# 1-i \\
# 0 \\
# 1-i \\
# 0
# \end{bmatrix}.\tag{7}
# $$
#
# Next, let us see how this encoding is implemented in code. Note that the code uses a tensor product identity:
#
# $$
# (U_1 |0\rangle)\otimes (U_2 |0\rangle) = (U_1 \otimes U_2) |0\rangle\otimes|0\rangle
# = (U_1 \otimes U_2) |00\rangle.\tag{8}
# $$
# +
def Ry(theta):
"""
    :param theta: parameter
    :return: Y rotation matrix
"""
return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
[np.sin(theta / 2), np.cos(theta / 2)]])
def Rz(theta):
"""
    :param theta: parameter
    :return: Z rotation matrix
"""
return np.array([[np.cos(theta / 2) - np.sin(theta / 2) * 1j, 0],
[0, np.cos(theta / 2) + np.sin(theta / 2) * 1j]])
# Classical -> quantum data encoder
def datapoints_transform_to_state(data, n_qubits):
"""
    :param data: shape [-1, 2]
    :param n_qubits: number of qubits after encoding
    :return: shape [-1, 1, 2 ^ n_qubits]
"""
dim1, dim2 = data.shape
res = []
for sam in range(dim1):
res_state = 1.
zero_state = np.array([[1, 0]])
        for i in range(n_qubits):
            if i % 2 == 0:
                state_tmp = np.dot(zero_state, Ry(np.arcsin(data[sam][0])).T)
                state_tmp = np.dot(state_tmp, Rz(np.arccos(data[sam][0] ** 2)).T)
                res_state = np.kron(res_state, state_tmp)
            elif i % 2 == 1:
                state_tmp = np.dot(zero_state, Ry(np.arcsin(data[sam][1])).T)
                state_tmp = np.dot(state_tmp, Rz(np.arccos(data[sam][1] ** 2)).T)
                res_state = np.kron(res_state, state_tmp)
res.append(res_state)
res = np.array(res)
return res.astype("complex128")
print("As a test, we input the classical information above:")
print("(x_0, x_1) = (1, 0)")
print("The encoded 2-qubit quantum state is:")
print(datapoints_transform_to_state(np.array([[1, 0]]), n_qubits=2))
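We can cross-check the printed output against the state $|\psi\rangle$ computed by hand in Eq. (7). A small NumPy verification (re-deriving the two single-qubit states with the same rotation matrices and taking their tensor product):

```python
import numpy as np

def ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]])

def rz(t):
    return np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])

x0, x1 = 1.0, 0.0
q0 = rz(np.arccos(x0 ** 2)) @ ry(np.arcsin(x0)) @ np.array([1, 0])
q1 = rz(np.arccos(x1 ** 2)) @ ry(np.arcsin(x1)) @ np.array([1, 0])
psi = np.kron(q0, q1)

# should reproduce (1/2) * [1-i, 0, 1-i, 0] from Eq. (7)
print(np.round(psi, 3))
```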
# -
# ### Building the quantum neural network
#
# Having completed the encoding from classical data to quantum data above, we can now feed these quantum states into the quantum computer. Before that, we still need to design the structure of the quantum neural network we will use.
#
# 
# <div style="text-align:center">Figure 3: Circuit structure of the parameterized quantum neural network </div>
#
#
# For convenience, we will uniformly denote the parameterized quantum neural network above by $U(\boldsymbol{\theta})$. This $U(\boldsymbol{\theta})$ is the key component of our classifier, and it needs a sufficiently complex structure to fit our decision boundary. As with classical neural networks, the design of a quantum neural network is not unique; what is shown here is only one example, and readers are encouraged to design their own. Let us again take the data point $x = (x_0, x_1)= (1,0)$ mentioned earlier as an example. After encoding, we obtained the quantum state $|\psi\rangle$,
#
# $$
# |\psi\rangle =
# \frac{1}{2}
# \begin{bmatrix}
# 1-i \\
# 0 \\
# 1-i \\
# 0
# \end{bmatrix},\tag{9}
# $$
#
# Next we feed this quantum state into our quantum neural network, i.e., we multiply a unitary matrix by a vector, obtaining the processed quantum state $|\varphi\rangle$
#
# $$
# |\varphi\rangle = U(\boldsymbol{\theta})|\psi\rangle,\tag{10}
# $$
#
# If we set all the parameters to $\theta = \pi$, we can write out the matrices explicitly:
#
# $$
# |\varphi\rangle =
# U(\boldsymbol{\theta} =\pi)|\psi\rangle =
# \begin{bmatrix}
# 0 &0 &-1 &0 \\
# -1 &0 &0 &0 \\
# 0 &1 &0 &0 \\
# 0 &0 &0 &1
# \end{bmatrix}
# \cdot
# \frac{1}{2}
# \begin{bmatrix}
# 1-i \\
# 0 \\
# 1-i \\
# 0
# \end{bmatrix}
# = \frac{1}{2}
# \begin{bmatrix}
# -1+i \\
# -1+i \\
# 0 \\
# 0
# \end{bmatrix}.\tag{11}
# $$
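# As a quick sanity check, the encoded state in Eq. (9) can be reproduced with a few lines of plain NumPy. This is a self-contained sketch that re-implements the same Ry/Rz conventions as the helpers above (the lowercase `ry`/`rz` names here are local re-definitions, not part of the tutorial's API):

```python
import numpy as np

def ry(theta):
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]])

def rz(theta):
    # Same convention as above: diag(e^{-i theta/2}, e^{+i theta/2})
    return np.array([[np.exp(-1j * theta / 2), 0],
                     [0, np.exp(1j * theta / 2)]])

x0, x1 = 1.0, 0.0
zero = np.array([[1.0, 0.0]])
# Qubit 0 encodes x0 and qubit 1 encodes x1, exactly as in the encoder above
q0 = zero @ ry(np.arcsin(x0)).T @ rz(np.arccos(x0 ** 2)).T
q1 = zero @ ry(np.arcsin(x1)).T @ rz(np.arccos(x1 ** 2)).T
psi = np.kron(q0, q1)
# Agrees with Eq. (9): (1/2) * [1-i, 0, 1-i, 0]
assert np.allclose(psi, 0.5 * np.array([[1 - 1j, 0, 1 - 1j, 0]]))
```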
# Simulate building the quantum neural network
def U_theta(theta, n, depth):
    """
    :param theta: parameter array of shape [n, depth + 3]
    :param n: number of qubits
    :param depth: circuit depth
    :return: U_theta
    """
    # Initialize the circuit
    cir = UAnsatz(n)
    # First build the generalized rotation layer
    for i in range(n):
        cir.rz(theta[i][0], i)
        cir.ry(theta[i][1], i)
        cir.rz(theta[i][2], i)
    # The default depth is depth = 1
    # Build the entangling layers and the Ry rotation layers
    for d in range(3, depth + 3):
        for i in range(n - 1):
            cir.cnot([i, i + 1])
        cir.cnot([n - 1, 0])
        for i in range(n):
            cir.ry(theta[i][d], i)
    return cir
# ### Measurement and the loss function
#
# After processing the initial quantum state $|\psi\rangle$ with the quantum neural network on the quantum processor (QPU), we need to measure the new quantum state $|\varphi\rangle$ to extract classical information. This classical information is used to evaluate the loss function $\mathcal{L}(\boldsymbol{\theta})$. Finally, a classical computer (CPU) repeatedly updates the QNN parameters $\boldsymbol{\theta}$ to optimize the loss. Here we measure the expectation value of the Pauli $Z$ operator on the first qubit. Concretely,
#
# $$
# \langle Z \rangle =
# \langle \varphi |Z\otimes I\cdots \otimes I| \varphi\rangle,\tag{12}
# $$
#
# As a reminder, the matrix form of the Pauli $Z$ operator is:
#
# $$
# Z := \begin{bmatrix} 1 &0 \\ 0 &-1 \end{bmatrix},\tag{13}
# $$
#
# Continuing our 2-qubit example from above, the expectation value after measurement is:
# $$
# \langle Z \rangle =
# \langle \varphi |Z\otimes I| \varphi\rangle =
# \frac{1}{2}
# \begin{bmatrix}
# -1-i \quad
# -1-i \quad
# 0 \quad
# 0
# \end{bmatrix}
# \begin{bmatrix}
# 1 &0 &0 &0 \\
# 0 &1 &0 &0 \\
# 0 &0 &-1 &0 \\
# 0 &0 &0 &-1
# \end{bmatrix}
# \cdot
# \frac{1}{2}
# \begin{bmatrix}
# -1+i \\
# -1+i \\
# 0 \\
# 0
# \end{bmatrix}
# = 1,\tag{14}
# $$
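# The arithmetic in Eq. (14) is easy to verify numerically; here is a minimal, self-contained NumPy check using the state from Eq. (11):

```python
import numpy as np

phi = 0.5 * np.array([-1 + 1j, -1 + 1j, 0, 0])  # |phi> from Eq. (11)
ZI = np.kron(np.diag([1, -1]), np.eye(2))       # Z on qubit 0, identity on qubit 1
expect_Z = np.vdot(phi, ZI @ phi).real          # <phi| Z tensor I |phi>
assert np.isclose(expect_Z, 1.0)
```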
#
# A curious reader might ask: this measurement result seems to match the original label 1, so does that mean we have already classified this data point successfully? Not quite, because $\langle Z \rangle$ generally takes values in $[-1,1]$. To match our label range $y^{k} \in \{0,1\}$, we still need to map the endpoints of this interval accordingly. The simplest such map is
#
# $$
# \tilde{y}^{k} = \frac{\langle Z \rangle}{2} + \frac{1}{2} + bias \quad \in [0, 1].\tag{15}
# $$
#
# Adding a bias term is a common machine learning trick whose purpose is to keep the decision boundary from being pinned to the origin or to some hyperplane. The bias is usually initialized to 0, and during training the optimizer updates it alongside the parameters $\theta$ to keep $\tilde{y}^{k} \in [0, 1]$. Of course, readers can also choose a more sophisticated map (activation function) such as the sigmoid. After the mapping we can treat $\tilde{y}^{k}$ as our estimated label: $\tilde{y}^{k} < 0.5$ corresponds to label 0, and $\tilde{y}^{k} > 0.5$ corresponds to label 1. To recap the whole pipeline,
#
#
# $$
# x^{k} \rightarrow |\psi^{k}\rangle \rightarrow U(\boldsymbol{\theta})|\psi^{k}\rangle =
# |\varphi^{k}\rangle \rightarrow \langle \varphi^{k} |Z\otimes I\cdots \otimes I| \varphi^{k}\rangle
# = \langle Z \rangle \rightarrow \tilde{y}^{k}.\tag{16}
# $$
#
# Finally, we can define the loss function as the squared loss:
#
# $$
# \mathcal{L} = \sum_{k} |y^{k} - \tilde{y}^{k}|^2.\tag{17}
# $$
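# To make the mapping in Eq. (15) and the loss in Eq. (17) concrete, here is a small NumPy illustration; the expectation values and labels below are made up purely for illustration:

```python
import numpy as np

exp_Z = np.array([1.0, -0.6, 0.2])      # hypothetical <Z> values for three samples
bias = 0.0
y_tilde = exp_Z / 2 + 0.5 + bias        # Eq. (15): map into [0, 1]
labels = (y_tilde > 0.5).astype(int)    # threshold at 0.5 to get predicted labels
y_true = np.array([1, 0, 1])
loss = np.sum((y_true - y_tilde) ** 2)  # Eq. (17): squared loss
```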
#
#
# Generate the Pauli Z operator acting only on the first qubit,
# with the identity acting on every other qubit
def Observable(n):
    """
    :param n: number of qubits
    :return: local observable: Z \otimes I \otimes ... \otimes I
    """
    Ob = pauli_str_to_matrix([[1.0, 'z0']], n)
    return Ob
# Build the whole optimization pipeline
class Net(paddle.nn.Layer):
    """
    Network used to train the model
    """
    def __init__(self,
                 n,      # number of qubits
                 depth,  # circuit depth
                 seed_paras=1,
                 dtype='float64'):
        super(Net, self).__init__()
        self.n = n
        self.depth = depth
        # Initialize the parameter list theta, filled with values drawn uniformly from [0, 2*pi]
        self.theta = self.create_parameter(
            shape=[n, depth + 3],
            default_initializer=paddle.nn.initializer.Uniform(low=0.0, high=2*PI),
            dtype=dtype,
            is_bias=False)
        # Initialize the bias
        self.bias = self.create_parameter(
            shape=[1],
            default_initializer=paddle.nn.initializer.Normal(std=0.01),
            dtype=dtype,
            is_bias=False)
    # Define the forward pass, the loss function, and the validation accuracy
    def forward(self, state_in, label):
        """
        Args:
            state_in: The input quantum state, shape [-1, 1, 2^n]
            label: label for the input state, shape [-1, 1]
        Returns:
            The loss:
                L = ((<Z> + 1)/2 + bias - label)^2
        """
        # Convert Numpy arrays to tensors
        Ob = paddle.to_tensor(Observable(self.n))
        label_pp = paddle.to_tensor(label)
        # Build the circuit with the (randomly initialized) parameters theta
        cir = U_theta(self.theta, n=self.n, depth=self.depth)
        Utheta = cir.U
        # Since Utheta is learned, we use row-vector arithmetic here for speed without affecting training
        state_out = matmul(state_in, Utheta)  # shape [-1, 1, 2 ** n]
        # Measure the expectation value <Z> of the Pauli Z operator
        E_Z = matmul(matmul(state_out, Ob), transpose(paddle.conj(state_out), perm=[0, 2, 1]))
        # Map <Z> to the estimated label
        state_predict = paddle.real(E_Z)[:, 0] * 0.5 + 0.5 + self.bias
        loss = paddle.mean((state_predict - label_pp) ** 2)
        # Compute the accuracy
        is_correct = (paddle.abs(state_predict - label_pp) < 0.5).nonzero().shape[0]
        acc = is_correct / label.shape[0]
        return loss, acc, state_predict.numpy(), cir
# ### Training results and tuning
#
# With all of the above defined, let's see how the training actually performs!
# +
def heatmap_plot(net, N):
    # Generate the grid of data points x_y_
    Num_points = 30
    x_y_ = []
    for row_y in np.linspace(0.9, -0.9, Num_points):
        row = []
        for row_x in np.linspace(-0.9, 0.9, Num_points):
            row.append([row_x, row_y])
        x_y_.append(row)
    x_y_ = np.array(x_y_).reshape(-1, 2).astype("float64")
    # Compute the predictions: heat_data
    input_state_test = paddle.to_tensor(
        datapoints_transform_to_state(x_y_, N))
    loss_useless, acc_useless, state_predict, cir = net(state_in=input_state_test, label=x_y_[:, 0])
    heat_data = state_predict.reshape(Num_points, Num_points)
    # Plot
    fig = plt.figure(1)
    ax = fig.add_subplot(111)
    x_label = np.linspace(-0.9, 0.9, 3)
    y_label = np.linspace(0.9, -0.9, 3)
    ax.set_xticks([0, Num_points // 2, Num_points - 1])
    ax.set_xticklabels(x_label)
    ax.set_yticks([0, Num_points // 2, Num_points - 1])
    ax.set_yticklabels(y_label)
    im = ax.imshow(heat_data, cmap=plt.cm.RdBu)
    plt.colorbar(im)
    plt.show()
def QClassifier(Ntrain, Ntest, gap, N, D, EPOCH, LR, BATCH, seed_paras, seed_data,):
    """
    Quantum binary classifier
    """
    # Generate the dataset
    train_x, train_y, test_x, test_y = circle_data_point_generator(Ntrain=Ntrain, Ntest=Ntest, boundary_gap=gap, seed_data=seed_data)
    # Read off the size of the training set
    N_train = train_x.shape[0]
    paddle.seed(seed_paras)
    # Define the optimization graph
    net = Net(n=N, depth=D)
    # Generally the Adam optimizer gives relatively good convergence;
    # of course you can switch to SGD or RMSprop
    opt = paddle.optimizer.Adam(learning_rate=LR, parameters=net.parameters())
    # Initialize lists to record the accuracy etc.
    summary_iter, summary_test_acc = [], []
    # Optimization loop
    for ep in range(EPOCH):
        for itr in range(N_train // BATCH):
            # Encode the classical data into quantum states |psi>, shape [-1, 2 ** N]
            input_state = paddle.to_tensor(datapoints_transform_to_state(train_x[itr * BATCH:(itr + 1) * BATCH], N))
            # Forward pass to compute the loss
            loss, train_acc, state_predict_useless, cir \
                = net(state_in=input_state, label=train_y[itr * BATCH:(itr + 1) * BATCH])
            if itr % 50 == 0:
                # Compute the accuracy test_acc on the test set
                input_state_test = paddle.to_tensor(datapoints_transform_to_state(test_x, N))
                loss_useless, test_acc, state_predict_useless, t_cir \
                    = net(state_in=input_state_test, label=test_y)
                print("epoch:", ep, "iter:", itr,
                      "loss: %.4f" % loss.numpy(),
                      "train acc: %.4f" % train_acc,
                      "test acc: %.4f" % test_acc)
                # Record the accuracy etc.
                summary_iter.append(itr + ep * N_train)
                summary_test_acc.append(test_acc)
            if (itr + 1) % 151 == 0 and ep == EPOCH - 1:
                print("The trained circuit:")
                print(cir)
            # Backpropagate to minimize the loss
            loss.backward()
            opt.minimize(loss)
            opt.clear_grad()
    # Plot the decision boundary as a heatmap
    heatmap_plot(net, N=N)
    return summary_test_acc
# -
# The functions above are all definitions; next we run the main program.
# +
def main():
    """
    Main function
    """
    time_start = time.time()
    acc = QClassifier(
        Ntrain = 200,     # size of the training set
        Ntest = 100,      # size of the test set
        gap = 0.5,        # width of the decision boundary
        N = 4,            # number of qubits required
        D = 1,            # circuit depth used
        EPOCH = 4,        # number of training epochs
        LR = 0.01,        # learning rate
        BATCH = 1,        # batch size during training
        seed_paras = 19,  # random seed used to initialize the parameters
        seed_data = 2,    # random seed used to generate the dataset
    )
    time_span = time.time() - time_start
    print('The main program ran for', time_span, 'seconds in total')
if __name__ == '__main__':
    main()
# -
# The printed training log shows that, after optimization, the classifier reaches $100\%$ accuracy on both the training set and the test set.
# ---
#
# ## References
#
# [1] Mitarai, Kosuke, et al. Quantum circuit learning. [Physical Review A 98.3 (2018): 032309.](https://arxiv.org/abs/1803.00745)
#
# [2] <NAME>, and <NAME>. Classification with quantum neural networks on near term processors. [arXiv preprint arXiv:1802.06002 (2018).](https://arxiv.org/abs/1802.06002)
#
# [3] Schuld, Maria, et al. Circuit-centric quantum classifiers. [Physical Review A 101.3 (2020): 032308.](https://arxiv.org/abs/1804.00633)
| tutorial/machine_learning/QClassifier_CN.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # colab
# Type an introduction of the package here.
# + hide_input=true
from fastai.gen_doc.nbdoc import *
from fastai.colab import *
# -
# ### Global Variable Definitions:
# + hide_input=true
show_doc(ColabFilteringDataset)
# -
# [<code>ColabFilteringDataset</code>](http://docs.fast.ai/colab.html#ColabFilteringDataset)
# + hide_input=true
show_doc(ColabFilteringDataset.from_csv)
# -
# `ColabFilteringDataset.from_csv`
# + hide_input=true
show_doc(ColabFilteringDataset.from_df)
# -
# `ColabFilteringDataset.from_df`
# + hide_input=true
show_doc(EmbeddingDotBias)
# -
# [<code>EmbeddingDotBias</code>](http://docs.fast.ai/colab.html#EmbeddingDotBias)
# + hide_input=true
show_doc(EmbeddingDotBias.forward)
# -
# `EmbeddingDotBias.forward`
# + hide_input=true
show_doc(get_collab_learner)
# -
# [<code>get_collab_learner</code>](http://docs.fast.ai/colab.html#get_collab_learner)
| docs_src/colab.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Loading LS 7 data from the datacube
#
# This notebook will briefly discuss how to load data from the datacube.
#
# **Requirements:**
# * A running data cube created via 01 - Getting Started
# This next line enables interactive plots for use later in the notebook
# %matplotlib notebook
# ## Importing the datacube module
#
# To start with, we'll import the datacube module, load an instance of the datacube, and call our application *load-data-example*. The application name can be anything; it's used in logging information so you can trace results etc. using the name.
# We'll keep this example simple so that you can see some data quickly. Later, as we look at more specific products and analyses, we'll introduce more variants of the loading process.
import datacube
dc = datacube.Datacube(app='load-data-example')
data = dc.load(product='ls7_usgs_sr_albers',
x=(-2.05, -2.17), y=(8.25, 8.35), measurements=['red', 'green', 'blue'],
output_crs='epsg:3577',resolution=(-30,30))
data
# +
from datacube.storage import masking
# Set all nodata pixels to `NaN`:
data = masking.mask_invalid_data(data)
# Set all invalid data to `NaN` - the valid range for USGS SR is 0 to 10000, but the surface reflectance product can have values just outside this range
# We remove them so the image drawn isn't impacted by them
data = data.where((data >= 0) & (data<=10000))
# Select a time slice from the EO data and combine the bands into a 3 band array
image_array = data[['red', 'green', 'blue']].isel(time=0).to_array()
# Show the image
image_array.plot.imshow(robust=True, figsize=(8, 8))
# + [markdown] raw_mimetype="text/restructuredtext"
# More information on the load function see the API docs: :py:meth:`~datacube.Datacube.load`
# -
# # Show/Hide error information
# Yep, here it is again in case you need it
# +
from IPython.display import HTML
HTML('''<script>
code_show_err=false;
function code_toggle_err() {
if (code_show_err){
$('div.output_error').hide();
$('div.output_stderr').hide();
} else {
$('div.output_error').show();
$('div.output_stderr').show();
}
code_show_err = !code_show_err
}
$( document ).ready(code_toggle_err);
</script>
<form action="javascript:code_toggle_err()"><input type="submit" value="Click here to toggle on/off the error output."></form>''')
| work/hub-notebooks/03b(ingest version) - Loading LS 7 Data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import fastbook
from fastbook import *
from utils import *
from fastai.vision.widgets import *
# +
#Gathering Data
# -
path = Path('vehicletypes')
# +
vehicle_type = 'car', 'truck', 'bus', 'aeroplane', 'ship'
path = Path('vehicletypes')
if not path.exists():
path.mkdir()
for o in vehicle_type:
print('Collecting: ', o)
dest = (path/o)
dest.mkdir(exist_ok = True)
results = search_images_ddg(o, max_n = 200)
download_images(dest, urls=results)
# -
fns = get_image_files(path)
fns
# +
#Unlinking Corrupt images from the path
# -
failed = verify_images(fns)
failed
failed.map(Path.unlink)
print('hello')
# +
#Making datablock for our data
# -
vehicles_dataset = DataBlock(
blocks = (ImageBlock, CategoryBlock),
get_items = get_image_files,
splitter = RandomSplitter(valid_pct = 0.2, seed = 42),
get_y = parent_label,
item_tfms = Resize(128)
)
# +
#Loading our data into dataloaders
# -
dls = vehicles_dataset.dataloaders(path)
dls.valid.show_batch(max_n = 4, nrows = 1)
# +
#Data augmentation
# -
vehicles_dataset = vehicles_dataset.new(item_tfms = Resize(128), batch_tfms = aug_transforms(mult = 2))
dls = vehicles_dataset.dataloaders(path)
dls.train.show_batch(max_n=8, nrows=2, unique = True)
vehicles_dataset = vehicles_dataset.new(
item_tfms=RandomResizedCrop(224, min_scale=0.5),
batch_tfms=aug_transforms())
dls = vehicles_dataset.dataloaders(path)
# +
#Creating learner
# -
learn = cnn_learner(dls, resnet18, metrics=error_rate)
learn.fine_tune(4)
interp = ClassificationInterpretation.from_learner(learn)
interp.plot_confusion_matrix()
interp.plot_top_losses(5, nrows=1)
# +
#Cleaning data
# -
cleaner = ImageClassifierCleaner(learn)
cleaner
# +
#for idx in cleaner.delete(): cleaner.fns[idx].unlink()
#for idx,cat in cleaner.change(): shutil.move(str(cleaner.fns[idx]), path/cat)
# +
#Exporting the model
# -
learn.export(fname='vehexport.pkl')
path = Path()
path.ls(file_exts='.pkl')
learn_inf = load_learner(path/'vehexport.pkl')
learn_inf.dls.vocab
# +
#A simple gui
# -
btn_upload = widgets.FileUpload()
out_pl = widgets.Output()
lbl_pred = widgets.Label()
btn_run = widgets.Button(description='Classify')
def on_click_classify(change):
img = PILImage.create(btn_upload.data[-1])
out_pl.clear_output()
with out_pl: display(img.to_thumb(128,128))
pred, pred_idx, probs = learn_inf.predict(img)
lbl_pred.value = f'Prediction: {pred}; Probability: {probs[pred_idx]:.04f}'
btn_run.on_click(on_click_classify)
btn_upload = widgets.FileUpload()
VBox([widgets.Label('Select your vehicle!'),
btn_upload, btn_run, out_pl, lbl_pred])
| VehicleType Classification.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: nersc_env
# language: python
# name: nersc_env
# ---
import numpy as np
import time
import sys
#
from matplotlib import pyplot as plt
# %matplotlib inline
# ### Real Space LPT ###
#
# In this notebook we give some examples of computing real-space 1-loop halo/matter power spectra in LPT, as well as cross spectra of the "component fields" that comprise the emulator in Modi et al. 2019 (https://arxiv.org/abs/1910.07097).
#
# This is done using the CLEFT class, which is the basic object in the LPT modules.
from velocileptors.LPT.cleft_fftw import CLEFT
# To match the plots in Chen, Vlah & White (2020) let's
# work at z=0.8, and scale our initial power spectrum
# to that redshift:
z,D,f = 0.8,0.6819,0.8076
klin,plin = np.loadtxt("pk.dat",unpack=True)
plin *= D**2
# Initialize the class -- with no wisdom file passed it will
# experiment to find the fastest FFT algorithm for the system.
start= time.time()
cleft = CLEFT(klin,plin)
print("Elapsed time: ",time.time()-start," seconds.")
# You could save the wisdom file here if you wanted:
# mome.export_wisdom(wisdom_file_name)
# ### Halo-Halo Autospectrum in Real Space ###
# This is the basic application of CLEFT, so comes with its own auxiliary function.
#
# All we need to do is make a power spectrum table and call it.
# The parameters we feed it are: b1, b2, bs, b3, alpha, and sn
# The first four are deterministic Lagrangian bias up to third order
# While alpha and sn are the counterterm and stochastic term (shot noise)
pars = [0.70, -1.3, -0.06, 0, 7.4, 1.9e3]
#
start= time.time()
cleft.make_ptable(nk=200)
kv, pk = cleft.combine_bias_terms_pk(*pars)
print("Elapsed time: ",time.time()-start," seconds.")
# +
plt.plot(kv, kv * pk)
plt.xlim(0,0.25)
plt.ylim(850,1120)
plt.ylabel(r'k $P_{hh}(k)$ [h$^{-2}$ Mpc$^2$]')
plt.xlabel('k [h/Mpc]')
plt.show()
# -
# ### Lagrangian Component Spectra ###
# All spectra in LPT can be thought of as sums of cross spectra of bias operators $\delta_X(q)$ shifted to their observed positions $x = q + \Psi$.
#
# Up to third order these operators are $\{1, b_1, b_2, b_s, b_3\}$, not including derivative bias (which is roughly $b_1 \times k^2$) and stochastic contributions (e.g. shot noise).
# +
# Let's explicitly list the components
# Note that the cross spectra are multiplied by a factor of one half.
kv = cleft.pktable[:,0]
spectra = {\
r'$(1,1)$':cleft.pktable[:,1],\
r'$(1,b_1)$':0.5*cleft.pktable[:,2], r'$(b_1,b_1)$': cleft.pktable[:,3],\
r'$(1,b_2)$':0.5*cleft.pktable[:,4], r'$(b_1,b_2)$': 0.5*cleft.pktable[:,5], r'$(b_2,b_2)$': cleft.pktable[:,6],\
r'$(1,b_s)$':0.5*cleft.pktable[:,7], r'$(b_1,b_s)$': 0.5*cleft.pktable[:,8], r'$(b_2,b_s)$':0.5*cleft.pktable[:,9], r'$(b_s,b_s)$':cleft.pktable[:,10],\
r'$(1,b_3)$':0.5*cleft.pktable[:,11],r'$(b_1,b_3)$': 0.5*cleft.pktable[:,12]}
# +
# Plot some of them!
plt.figure(figsize=(15,10))
spec_names = spectra.keys()
for spec_name in spec_names:
plt.loglog(kv, spectra[spec_name],label=spec_name)
plt.ylim(10,3e4)
plt.legend(ncol=4)
plt.xlabel('k [h/Mpc]')
plt.ylabel(r'$P_{ab}$ [(Mpc/h)$^3$]')
plt.show()
# -
# ### Bonus: Cross Spectra with Matter in Real Space ###
# In the language of the component spectra the matter field is just "1."
#
# This means we have straightforwardly
#
# $P_{mm} = P_{11}$
#
# and
#
# $P_{hm} = P_{11} + b_1 P_{1b_1} + b_2 P_{1b_2} + b_s P_{1b_s} + b_3 P_{1b_3} + $ EFT corrections.
# +
# Note that if desired one can also add subleading k^n
# type stochastic corrections to these
def combine_bias_terms_pk_matter(alpha):
kv = cleft.pktable[:,0]
ret = cleft.pktable[:,1] + alpha*kv**2 * cleft.pktable[:,13]
return kv, ret
def combine_bias_terms_pk_crossmatter(b1,b2,bs,b3,alpha):
kv = cleft.pktable[:,0]
ret = cleft.pktable[:,1] + 0.5*b1*cleft.pktable[:,2] \
+ 0.5*b2*cleft.pktable[:,4] + 0.5*bs*cleft.pktable[:,7] + 0.5*b3*cleft.pktable[:,11]\
+ alpha*kv**2 * cleft.pktable[:,13]
return kv, ret
# +
plt.figure(figsize=(10,5))
alpha_mm = 2
b1, b2, bs, b3, alpha_hm = 0.70, -1.3, -0.06, 0, 5
kv, phm = combine_bias_terms_pk_crossmatter(b1,b2,bs,b3,alpha_hm)
kv, pmm = combine_bias_terms_pk_matter(alpha_mm)
plt.plot(kv, kv * pk, label='hh')
plt.plot(kv, kv * phm, label='hm')
plt.plot(kv, kv * pmm, label='mm')
plt.xlim(0,0.25)
plt.ylim(0,1220)
plt.ylabel(r'$k P(k)$ [h$^{-2}$ Mpc$^2$]')
plt.xlabel('k [h/Mpc]')
plt.legend()
plt.show()
| Real Space CLEFT Examples.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import warnings
import os
warnings.filterwarnings("ignore")
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
import tensorflow as tf
x = tf.constant(8)
y = tf.constant(9)
z = tf.multiply(x, y)
sess = tf.Session()
out_z = sess.run(z)
sess.close()
print('The multiplication of x and y: %d' % out_z)
with tf.Session() as sess:
    x = tf.placeholder(tf.float32, name="x")
    y = tf.placeholder(tf.float32, name="y")
    z = tf.multiply(x, y)
    z_output = sess.run(z, feed_dict={x: 8, y: 9})
# No explicit sess.close() needed: the with-block closes the session automatically
print(z_output)
| Chapter03/TensorFlow programming model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: mmseg
# language: python
# name: mmseg
# ---
# +
import os
import sys
from PIL import Image
import shutil
import glob
import numpy as np
import pickle
from itertools import product
import matplotlib.pyplot as plt
import mmcv
from mmcv.runner import get_dist_info, init_dist, load_checkpoint
from mmseg.datasets import build_dataloader, build_dataset
from mmseg.models import build_segmentor
from mmcv.parallel import MMDataParallel, MMDistributedDataParallel
from mmseg.apis import multi_gpu_test, single_gpu_test_multi
import pandas as pd
import argparse
# -
pred_base_dir = '../../'
data_base_dir = '/home/shirakawa/projects/pandar_rain_filter/'
# +
data_cond_list = [
'raining',
'wet_road'
]
# +
model_config_name_list = [
#"unet/fcn_unet_without_aux_s4-d16_32x32_160k_concat_rain_filtering",
#"simple_convnet/fcn_simple_conv_s1_32x32_160k_rain_filtering",
#"simple_convnet/fcn_simple_conv_s1_32x32_40k_rain_filtering"
#"simple_convnet/fcn_simple_conv_s2_32x32_40k_rain_filtering",
"simple_convnet/fcn_simple_conv_s4_32x32_40k_rain_filtering"
]
model_config_name = "fcn_simple_conv_s1_32x32_40k_rain_filtering"
# -
dataset = build_dataset(cfg.data.test)
sum_df = pd.DataFrame([])
for data_cond, model_config_name in product(data_cond_list, model_config_name_list):
checkpoint_file = model_config_name.split('/')[1]
print(data_cond, model_config_name)
pred_dir_name = f'{data_cond}_results/'
#pred_dir = os.path.join(pred_base_dir, pred_dir_name, model_config_name+'.pkl')
#with open(pred_dir, 'rb') as f:
# outputs = pickle.load(f)
parser = argparse.ArgumentParser(
description='mmseg test (and eval) a model')
args = parser.parse_args(args=[])
item = model_config_name
config_file = f'../../configs/{item}.py'
args.config = config_file
args.checkpoint = f'../../work_dirs/{checkpoint_file}/iter_40000.pth'
args.show = True
args.show_dir = f'../../results_real/{data_cond}/{checkpoint_file}/model_predicted'
args.show_original_dir = f'../../results_real/{data_cond}/{checkpoint_file}/predict_labels'
args.eval = "mIoU"
args.eval_options = None
cfg = mmcv.Config.fromfile(args.config)
#assert 1== 0
cfg.data.val.data_root=os.path.join(data_base_dir,data_cond)#f"../../data/{data_cond}"
dataset = build_dataset(cfg.data.val)
distributed = False
data_loader = build_dataloader(
dataset,
samples_per_gpu=1,
workers_per_gpu=cfg.data.workers_per_gpu,
dist=distributed,
shuffle=False)
cfg.model.train_cfg = None
model = build_segmentor(cfg.model, test_cfg=cfg.get('test_cfg'))
checkpoint = load_checkpoint(model, args.checkpoint, map_location='cpu')
model.CLASSES = checkpoint['meta']['CLASSES']
model.PALETTE = checkpoint['meta']['PALETTE']
efficient_test = True
args.show=False
#for concatenated (multi) image input.
model = MMDataParallel(model, device_ids=[0])
outputs = single_gpu_test_multi(model, data_loader, args.show, args.show_dir,
args.show_original_dir, efficient_test)
"/home/shirakawa/projects/openmmlab/KS_work/mmsegmentation/configs/simple_convnet/fc"
cfg.data.val.data_root=os.path.join(data_base_dir,data_cond)#f"../../data/{data_cond}"
dataset = build_dataset(cfg.data.val)
distributed = False
data_loader = build_dataloader(
dataset,
samples_per_gpu=1,
workers_per_gpu=cfg.data.workers_per_gpu,
dist=distributed,
shuffle=False)
cfg.model.train_cfg = None
model = build_segmentor(cfg.model, test_cfg=cfg.get('test_cfg'))
checkpoint = load_checkpoint(model, args.checkpoint, map_location='cpu')
model.CLASSES = checkpoint['meta']['CLASSES']
model.PALETTE = checkpoint['meta']['PALETTE']
# +
efficient_test = True
args.show=False
#for concatenated (multi) image input.
model = MMDataParallel(model, device_ids=[0])
outputs = single_gpu_test_multi(model, data_loader, args.show, args.show_dir,
args.show_original_dir, efficient_test)
# -
sum_res = {}
sum_res['data'] = data_cond
sum_res.update(dataset.evaluate(outputs, args.eval))
os.listdir(os.path.join(data_base_dir,data_cond))
f"../../data/{data_cond}"
save_dir = '../../results_merge'
save_file = 'merge_performance.csv'
os.makedirs(save_dir,exist_ok=True)
sum_df.to_csv(os.path.join(save_dir, save_file))
sum_df
| tools/convert_datasets/Inference_real_data_without_labels-Copy1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Environment (conda_tensorflow_p36)
# language: python
# name: conda_tensorflow_p36
# ---
# +
import matplotlib.pyplot as plt
import os
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID" # see issue #152
os.environ["CUDA_VISIBLE_DEVICES"]="7"
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer, Normalizer
from sklearn.model_selection import StratifiedKFold
import tensorflow as tf
import pickle
import numpy as np
import pandas as pd
from annsa.template_sampling import *
from annsa.load_dataset import load_easy
# -
import tensorflow.contrib.eager as tfe
tf.enable_eager_execution()
# #### Import model, training function
from annsa.model_classes import (dae_model_features,
DAE,
save_model,
train_earlystop)
# ## Load testing dataset
dataset = np.load('../dataset_generation/testing_dataset_full_200keV_log10time_1000.npy')
# +
all_spectra = np.random.poisson(np.add(dataset.item()['sources'], dataset.item()['backgrounds']))
all_keys = np.array(dataset.item()['sources']) # background subtracted output
all_keys_modified = np.log1p(all_keys)/np.max(np.log1p(all_spectra),axis=1)[:,None]
keys_raw = dataset.item()['keys'] # keys necessary for stratified k-folds
# -
# ## Train network
model_id = 'BSDAE_full_fullnetwork-updated'
# +
model_class = DAE
dense_nodes_encoder_choices = [
[64],
[128],
[256],
[128,64],
# [256,128],
# [256,64],
# [512,512,128],
# [1024,1024,128],
# [1024,1024,64],
# [1024,1024,32],
]
dense_nodes_decoder_choices = [
[],
[],
[],
[128],
# [256],
# [256],
# [512,512],
# [1024,1024],
# [1024,1024],
# [1024,1024],
]
scaler_choices = [
make_pipeline(FunctionTransformer(np.log1p, validate=True)),
make_pipeline(FunctionTransformer(np.log1p, validate=True), Normalizer(norm='l1')),
make_pipeline(FunctionTransformer(np.log1p, validate=True), Normalizer(norm='max')),
make_pipeline(FunctionTransformer(np.sqrt, validate=True)),
make_pipeline(FunctionTransformer(np.sqrt, validate=True), Normalizer(norm='l1')),
make_pipeline(FunctionTransformer(np.sqrt, validate=True), Normalizer(norm='max')),
]
# -
def read_scaler_pipeline(scaler):
scaling = scaler.named_steps['functiontransformer'].func.__name__
normalizer = 'None'
if scaler.named_steps.get('normalizer'):
normalizer = scaler.named_steps['normalizer'].norm
scaler_norm_txt = scaling+'_'+normalizer
return scaler_norm_txt
# +
def dnn_nodes_choice_to_txt(cnn_filters_choice):
'''
    Returns the dense-node configuration as text.
'''
cnn_filter_txt = str(cnn_filters_choice).replace('[', '').replace(']', '').replace(', ', '_')
return cnn_filter_txt
def save_features(model_features,
model_id,
hyperparameter_index):
with open('./hyperparameter-search-results/' + model_id + '_' +
str(hyperparameter_index), 'wb+') as f:
pickle.dump(model_features,f)
return None
# +
skf = StratifiedKFold(n_splits=5, random_state=5)
testing_errors = []
all_kf_errors = []
for scaler in scaler_choices:
for dense_nodes_encoder_choice in dense_nodes_encoder_choices:
network_id = dnn_nodes_choice_to_txt(dense_nodes_encoder_choice)
network_id += read_scaler_pipeline(scaler)
model_features = dae_model_features(
learining_rate=10**-3,
l1_regularization_scale=0.0,
dropout_probability=0.0,
batch_size=32,
output_size=1024,
dense_nodes_encoder=dense_nodes_encoder_choice,
dense_nodes_decoder=dense_nodes_encoder_choice[:-1],
            scaler=scaler,  # use the scaler chosen by the hyperparameter loop
activation_function=tf.nn.tanh,
output_function=None)
save_features(model_features,
model_id,
network_id)
k_folds_errors = []
model = model_class(model_features)
optimizer = tf.train.AdamOptimizer(model_features.learining_rate)
_, f1_error = model.fit_batch(
(all_spectra, all_keys_modified),
(all_spectra, all_keys_modified),
optimizer=optimizer,
num_epochs=5,
verbose=1,
obj_cost=model.mse,
earlystop_cost_fn=model.mse,
earlystop_patience=1000,
data_augmentation=model.default_data_augmentation,
augment_testing_data=False,
print_errors=True,
record_train_errors=False)
model.save_weights('./hyperparameter-search-results/'+model_id+'_checkpoint_'+str(network_id)+'_',
save_format='tf')
# -
| examples/source-interdiction/hyperparameter-search/BSDAEHPSearch-kFolds-Full-0.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mp
import astropy.units as u
mp.rcParams['figure.figsize'] = (12, 8)
from context import lens
from lens.sie.inference import *
import emcee
# # A simple Bayesian model
#
#
# ## sources parameters $\theta_{src}$
# * $g_S$
# * $x_S$
# * $y_S$
#
# ## SIE lens parameters $\theta_{SIE}$
# * $b_L$ : Einstein's radius
# * $q_L$ : axis ratio
# * $x_L$
# * $y_L$
# * $\theta_L$ : orientation
#
plt.subplot(221)
x = np.arange(-1,30,0.01)
y = [magnitudePrior(v) for v in x]
plt.plot(x,y)
plt.xlabel("Magnitude prior")
# +
plt.subplot(221)
x = np.arange(-1,20,0.01)
y = [radiusPrior(v) for v in x]
plt.plot(x,y)
plt.xlabel("Einstein's radius prior")
plt.subplot(222)
x = np.arange(-1,2,0.01)
y = [ratioPrior(v) for v in x]
plt.plot(x,y)
plt.xlabel("Lens axis ratio prior")
plt.subplot(223)
x = np.arange(-10,10,0.01)
y = [positionPrior(v) for v in x]
plt.plot(x,y)
plt.xlabel("Position prior")
plt.subplot(224)
x = np.arange(-1,7,0.01)
y = [thetaPrior(v) for v in x]
plt.plot(x,y)
plt.xlabel("Lens orientation prior")
# -
model = np.array([0.1,0.1,18,2,0.5,0,0,0.])
parameter = "xS,yS,gS,bL,qL,xL,yL,thetaL".split(',')
log_prior(model)
# generate simulated true data using the model and more or less realistic errors
error = np.concatenate((np.ones((4,2))*0.001,np.ones((4,1))*0.01),axis=1)
data = np.concatenate((np.array(getImages(model)),error),axis=1)
np.around(data,3)
log_likelihood(model,data)
log_posterior(model,data)
ndim = len(model) # number of parameters in the model
nwalkers = 50 # number of MCMC walkers
nsteps = 1000 # number of MCMC steps
np.random.seed(0)
def init(N):
    """ to initialise each walker's starting value: sets the parameters randomly """
xs = norm.rvs(0,0.2,size=N)
ys = norm.rvs(0,0.2,size=N)
gs = gamma.rvs(10,5,size=N)
xl = norm.rvs(0,0.2,size=N)
yl = norm.rvs(0,0.2,size=N)
b = 2*beta.rvs(2,3,size=N)
q = np.random.uniform(0,1,N)
theta = np.random.uniform(0,np.pi,N)
return np.transpose(np.array([xs,ys,gs,b,q,xl,yl,theta]))
starting_guesses = init(nwalkers)
# check that the initial values make sens with respect to the prior
np.std([log_prior(guess) for guess in starting_guesses])
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior, args=[data])
# %time x = sampler.run_mcmc(starting_guesses, nsteps)
# With a prior on theta that does not restrict the lens orientation, i.e. a multimodal distribution in theta, emcee sampling does not seem to behave very well!
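# One common workaround (an assumption on my part, not something done in this notebook) is to reparameterize the orientation with ellipticity-style components $(\cos 2\theta, \sin 2\theta)$, which removes the $\pi$-periodicity that makes the posterior multimodal in $\theta$. A minimal sketch of the round trip, with hypothetical helper names:

```python
import numpy as np

def theta_to_components(theta):
    # Map the orientation to a pair that is single-valued under theta -> theta + pi
    return np.cos(2 * theta), np.sin(2 * theta)

def components_to_theta(e1, e2):
    # Invert the map, returning theta in [0, pi)
    return 0.5 * np.arctan2(e2, e1) % np.pi

theta = 2.5
e1, e2 = theta_to_components(theta)
assert np.isclose(components_to_theta(e1, e2), theta % np.pi)
```

Sampling in `(e1, e2)` and converting back only when evaluating the lens model keeps each walker in a unimodal region of parameter space.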
def plot_chains(sampler,warmup=100):
fig, ax = plt.subplots(ndim,3, figsize=(12, 12))
samples = sampler.chain[:, warmup:, :].reshape((-1, ndim))
for i in range(ndim):
ax[i,0].plot(sampler.chain[:, :, i].T, '-k', alpha=0.2);
ax[i,0].vlines(warmup,np.min(sampler.chain[:, :, i].T),np.max(sampler.chain[:, :, i].T),'r')
ax[i,1].hist(samples[:,i],bins=100,label=parameter[i]);
ax[i,1].legend()
ax[i,1].vlines(np.median(samples[:,i]),0,10000,lw=1,color='r',label="median")
ax[i,1].vlines(np.median(model[i]),0,5000,lw=1,color='b',label="true")
ax[i,2].hexbin(samples[:,i],samples[:,(i+1)%ndim])#,s=1,alpha=0.1);
plot_chains(sampler)
# tune the starting value around the expected results
# +
np.random.seed(0)
def init2(N):
    """ to initialise each walker's starting value: sets the parameters randomly """
xs = norm.rvs(0.1,0.05,size=N)
ys = norm.rvs(0.1,0.05,size=N)
gs = norm.rvs(18,0.5,size=N)
xl = norm.rvs(0,0.05,size=N)
yl = norm.rvs(0,0.05,size=N)
b = norm.rvs(2,0.1,size=N)
q = norm.rvs(0.5,0.1,size=N)
theta = np.random.uniform(0,np.pi,N)
return np.transpose(np.array([xs,ys,gs,b,q,xl,yl,theta]))
starting_guesses = init2(nwalkers)
# -
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior, args=[data])
# %time x = sampler.run_mcmc(starting_guesses, nsteps)
plot_chains(sampler)
| notebooks/SIE inference.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.1 64-bit
# language: python
# name: python3
# ---
# ---
#
# # Decision and Regression Tree
#
# ## Introduction
#
# Decision Trees (DTs) are a non-parametric supervised learning method used for classification and regression. The goal is to create a model that predicts the value of a target variable by learning simple decision rules inferred from the data features. A tree can be seen as a piecewise constant approximation.
#
# Decision tree builds regression or classification models in the form of a tree structure. It breaks down a dataset into smaller and smaller subsets while, at the same time, an associated decision tree is incrementally developed. The final result is a tree with decision nodes and leaf nodes. A decision node (e.g., Outlook) has two or more branches (e.g., Sunny, Overcast and Rainy), each representing values for the attribute tested. A leaf node (e.g., Hours Played) represents a decision on the numerical target. The topmost decision node in a tree, which corresponds to the best predictor, is called the root node. Decision trees can handle both categorical and numerical data.
#
# <p>
# <img src="Decision_Tree.jpeg" width="500" align="center">
# </p>
#
# The nodes shown above fall under the following types of nodes:
#
# * Root node — node at the top of the tree. This node acts as the input node for feature vectors in the model.
# * Decision nodes — nodes where the variables are evaluated. These nodes have arrows pointing to them and away from them
# * Leaf nodes — final nodes at which the prediction is made
#
# ## Algorithm
#
# The core algorithm for building decision trees, called ID3 and developed by J. R. Quinlan, employs a top-down, greedy search through the space of possible branches with no backtracking. The ID3 algorithm can be used to construct a decision tree for regression by replacing Information Gain with Standard Deviation Reduction.
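# The Standard Deviation Reduction idea can be sketched as follows; this is a minimal illustrative helper of our own (the function name and toy data are not part of the original notebook):

```python
import numpy as np

def sd_reduction(y, mask):
    # standard deviation reduction for a binary split of target y;
    # mask selects the left branch, its complement the right branch
    y = np.asarray(y, dtype=float)
    left, right = y[mask], y[~mask]
    if len(left) == 0 or len(right) == 0:
        return 0.0  # degenerate split: no reduction
    weighted = (len(left) * left.std() + len(right) * right.std()) / len(y)
    return y.std() - weighted

# toy target: splitting at x < 3 cleanly separates low from high values
y = np.array([1.0, 1.1, 0.9, 5.0, 5.2, 4.8])
x = np.arange(6)
print(sd_reduction(y, x < 3))  # large reduction; this is the split ID3 would prefer
```

# At each node, ID3-style regression greedily chooses the split with the largest such reduction.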
#
# ---
#
# ## Implement
#
# To illustrate how decision trees work, we will use a small artificial binary classification dataset loaded from ```classification.csv```, with two features (```age``` and ```interest```) and a binary target (```success```).
#
# First, let's visualize the classification data.
#
# ### Implement 1 - Classification Tree
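# If the ```classification.csv``` file is not at hand, a comparable two-class dataset can be generated with scikit-learn's ```make_moons``` (a sketch; the ```age```/```interest```/```success``` column names are simply reused so the cells below keep working):

```python
import pandas as pd
from sklearn.datasets import make_moons

# two interleaving half-moons; `noise` controls how much the classes overlap
X, y = make_moons(n_samples=500, noise=0.25, random_state=42)

# wrap in a DataFrame with the column names the rest of the notebook expects
df = pd.DataFrame({"age": X[:, 0], "interest": X[:, 1], "success": y.astype(float)})
print(df.shape)  # (500, 3)
```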
# +
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import seaborn as sns
from sklearn.model_selection import train_test_split
# Import the data
df = pd.read_csv("classification.csv")
results = [0.0, 1.0]
colors = ["red", "green"]
fig, ax = plt.subplots(figsize = (10, 8))
for result, color in zip(results, colors):
temp_df = df[df.success == result]
ax.scatter(temp_df.age,
temp_df.interest,
c = color,
label = int(result),
)
ax.set_xlabel("age", fontsize = 15)
ax.set_ylabel("interest", fontsize = 15)
ax.legend()
plt.show()
# -
# Next, we need to split our data into training and testing subsets and visualize the training data.
# +
X = df[["age","interest"]].to_numpy()
y = df["success"].to_numpy()
X_train, X_test, y_train, y_test = train_test_split(X,
y,
test_size=0.35,
random_state=42)
c_train = ["red" if label == 0.0 else "green" for label in y_train]
c_test = ["red" if label == 0.0 else "green" for label in y_test]
plt.figure(figsize=(10, 8))
plt.scatter(X_train[:, 0], X_train[:, 1], c = c_train)
plt.xlabel("feature x_0", fontsize = 15)
plt.ylabel("feature x_1", fontsize = 15)
plt.grid()
plt.show()
# -
# Next we create an instance of the sklearn.tree.DecisionTreeClassifier model and train it by calling the fit() method.
# +
from sklearn.tree import DecisionTreeClassifier
decision_tree = DecisionTreeClassifier(max_depth=5, random_state=42)
decision_tree.fit(X_train, y_train)
# -
# Now that our model has been trained we can visualize the tree structure of our current model.
# +
from sklearn.tree import plot_tree
from sklearn.tree import export_text
plt.figure(figsize=(20, 8))
a = plot_tree(decision_tree,
feature_names = ["age", "interest"],
class_names = ["red", "green"],
rounded = True,
filled = True,
fontsize=14)
plt.show()
# -
# Show the tree structure as text.
tree_rules = export_text(decision_tree,
feature_names = ["age", "interest"])
print(tree_rules, "\n")
# In the following cell, we will show the decision boundaries from our trained model.
# +
from mlxtend.plotting import plot_decision_regions
y = y.astype(int)
plt.figure(figsize = (10, 8))
plot_decision_regions(X, y, clf = decision_tree)
plt.xlabel("feature x_0", fontsize = 15)
plt.ylabel("feature x_1", fontsize = 15)
plt.grid()
plt.show()
# -
# The boundary looks good! There is no sign of overfitting.
#
# As with all machine learning models, we next evaluate our model's performance on the testing data.
#
# First, the confusion matrix.
# +
# predicted values on the testing data
test_pred_decision_tree = decision_tree.predict(X_test)
# Import metrics from sklearn
from sklearn import metrics
# The confusion matrix
confusion_matrix = metrics.confusion_matrix(y_test, test_pred_decision_tree)
# Convert confusion matrix into dataframe
matrix_df = pd.DataFrame(confusion_matrix)
plt.figure(figsize=(10, 8))
ax = plt.axes()
sns.set(font_scale=1.3)
sns.heatmap(matrix_df,
annot = True,
fmt = "g",
ax = ax,
cmap = "magma",
cbar = False)
ax.set_title("Confusion Matrix - Decision Tree")
ax.set_xlabel("Predicted Label", fontsize=15)
ax.set_xticklabels(["red", "green"])
ax.set_ylabel("True Label", fontsize=15)
ax.set_yticklabels(["red", "green"], rotation=0)
plt.show()
# -
# Second, the accuracy scores.
# +
from sklearn.metrics import classification_report
y_pred = decision_tree.predict(X_test)
# Check the performance of our model with a classification report.
print(classification_report(y_test, y_pred))
# -
# The precision is good!
# ---
#
# ### Implement 2 - Regression Tree
#
# Decision Trees are also capable of performing regression tasks.
#
# For the data, we generate a quadratic signal contaminated with white noise.
np.random.seed(6)
m = 240
X = np.random.rand(m, 1)
y = 2 * (X - 0.4) ** 2
wn = np.random.randn(m, 1) / 10
y = y + wn
# First, visualize the generated data.
plt.scatter(X,y)
plt.xlabel("X")
plt.ylabel("y")
plt.title('Quadratic + WN')
# Split the data into training and test sets and use DecisionTreeRegressor for the regression tree.
# +
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.40, random_state=42)
from sklearn.tree import DecisionTreeRegressor
tree_reg = DecisionTreeRegressor(max_depth=3, random_state=42)
tree_reg.fit(X_train, y_train)
# -
# See the details of the decision tree.
# +
tree_rules = export_text(tree_reg)
print(tree_rules, "\n")
plt.figure(figsize=(20, 8))
a = plot_tree(tree_reg,
feature_names = None,
class_names = None,
rounded = True,
filled = True,
fontsize=14)
plt.show()
# -
# Visualize the prediction results.
pred = tree_reg.predict(X_test)
from sklearn.metrics import mean_squared_error
mean_squared_error(y_test, pred, squared=False)
plt.scatter(X_test,pred,c='red')
plt.scatter(X_test,y_test)
plt.xlabel("X")
plt.ylabel("y")
plt.title('Prediction (red) vs True (blue)')
# The regression tree predictions do not look great: with max_depth=3 the tree can only output a few constant values, so the fit is a coarse staircase approximation of the quadratic curve.
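# One way to probe this further (an illustrative sketch with regenerated data of our own, not part of the original analysis) is to compare the test RMSE across several ```max_depth``` settings:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error

# same kind of quadratic + white-noise data as above
rng = np.random.RandomState(6)
X = rng.rand(240, 1)
y = (2 * (X - 0.4) ** 2 + rng.randn(240, 1) / 10).ravel()

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.40, random_state=42)

rmses = {}
for depth in (2, 3, 5, 10):
    reg = DecisionTreeRegressor(max_depth=depth, random_state=42).fit(X_tr, y_tr)
    rmses[depth] = np.sqrt(mean_squared_error(y_te, reg.predict(X_te)))
    print(f"max_depth={depth:2d}  test RMSE={rmses[depth]:.4f}")
```

# A very shallow tree underfits the parabola, while a very deep one starts chasing the noise; an intermediate depth usually gives the lowest test error.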
| Supervised Learning/Decision and Regression Tree/Decision Tree.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Autocorrelation examples
#
# The autocorrelation is defined by
#
# \begin{equation}
# R_{xx}(\tau)=\int_{-\infty}^{\infty}x(t)x(t+\tau)\mathrm{d} t
# \tag{1}
# \end{equation}
#
# We will build some intuition about it and then work through two practical examples.
#
# # Intuition
#
# Let us evaluate what happens with white noise. In particular, we are interested in the product $x(t)x(t+\tau)$ for different values of $\tau$. We will take a portion of this random signal to investigate what happens to the product $x(t)x(t+\tau)$ as we apply different delays $\tau$.
#
# - Note that for $\tau = 0$, $x(t)$ and $x(t+\tau)$ are perfectly aligned in time and the product is maximal
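# This behaviour at $\tau = 0$ is easy to check numerically; the sketch below (independent of the cells that follow) estimates the sample autocorrelation of white noise at a few lags:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0, 1, 5000)  # white noise

def rxx(x, lag):
    # sample autocorrelation at a given integer lag, normalized by N
    n = len(x)
    return np.sum(x[:n - lag] * x[lag:]) / n

print(rxx(x, 0))   # close to the variance (about 1)
print(rxx(x, 10))  # close to 0 for white noise
```

# At lag 0 the estimate approaches the signal variance; at any nonzero lag it averages products of independent samples and tends to zero.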
# import the required libraries
import numpy as np # arrays
import matplotlib.pyplot as plt # plots
from scipy.stats import norm
from scipy import signal
plt.rcParams.update({'font.size': 14})
import IPython.display as ipd # to play signals
import sounddevice as sd
# +
# Sampling frequency and time vector
fs = 200
time = np.arange(0, 20, 1/fs)
# full random signal
xt = np.random.normal(loc = 0, scale = 1, size = len(time))
# Take a portion of this random signal to investigate what happens as we apply different delays tau
taus = np.array([0, 0.1, 0.5, 1, 3])
xt_lag = np.zeros((len(taus), len(time)))
for jt, tau in enumerate(taus):
xt_lag[jt,:] = np.roll(xt, int(tau*fs))
# Plot all the products
fig, axs = plt.subplots(len(taus), 1, figsize = (12, 10))
for jt, tau in enumerate(taus):
axs[jt].plot(time[1000:2000], xt[1000:2000]*xt_lag[jt,1000:2000], linewidth = 1,
label = r"$\tau$ = {:.2f} [s]; $\int = {:.2f}$ ".format(tau, np.sum(xt[1000:2000]*xt_lag[jt,1000:2000]/1000)))
axs[jt].legend(loc = 'upper right')
axs[jt].grid(linestyle = '--', which='both')
axs[jt].set_ylabel(r'$x(t)x(t+\tau)$')
#axs[jt].set_ylim((-2, 10))
axs[0].set_title(r'Product $x(t)x(t+\tau)$')
axs[-1].set_xlabel('Time [s]');
# -
# # Example 1 - White noise
# +
fs = 2000
time = np.arange(0, 2, 1/fs)
# full random signal
xt = np.random.normal(loc = 0, scale = 1, size = len(time))
# plot signal
plt.figure(figsize = (10, 3))
plt.plot(time, xt, linewidth = 1, alpha = 0.7)
plt.grid(linestyle = '--', which='both')
plt.ylabel(r'$x(t)$ [Pa]')
plt.xlim((0, time[-1]))
plt.ylim((-4, 4))
plt.xlabel('Time [s]')
plt.tight_layout()
# +
# Compute the autocorrelation
Rxx = np.correlate(xt, xt, mode = 'same')
tau = np.linspace(-len(Rxx)/(2*fs), len(Rxx)/(2*fs), len(Rxx))
# plot the autocorrelation
plt.figure(figsize = (10, 3))
plt.plot(tau, Rxx/fs, linewidth = 1)
plt.grid(linestyle = '--', which='both')
plt.ylabel(r'$R_{xx}(\tau)$ [Pa$^2$]')
plt.xlim((tau[0], tau[-1]))
plt.ylim((-0.5, 1.2*max(np.abs(Rxx/fs))))
plt.xlabel(r'$\tau$ [s]')
plt.tight_layout()
# -
# # Example 2 - Sine contaminated by noise
#
# Consider a sinusoidal signal contaminated by noise. We have
#
# \begin{equation}
# x(t) = s(t) + n(t),
# \end{equation}
# with
#
# \begin{equation}
# s(t) = A \mathrm{sin}(2\pi f t + \phi)
# \end{equation}
#
# and $n(t)$ a random signal with a normal distribution. The autocorrelation of $x(t)$ is given by
#
# \begin{equation}
# E[x(t)x(t+\tau)] = E[(s(t) + n(t)) \ (s(t+\tau) + n(t+\tau))] \\
# E[x(t)x(t+\tau)] = E[s(t) s(t+\tau) + s(t) n(t+\tau) + n(t) s(t+\tau) + n(t) n(t+\tau)]
# \end{equation}
# and since the expectation operator is linear, we have
#
# \begin{equation}
# E[x(t)x(t+\tau)] = E[s(t) s(t+\tau)] + E[n(t) n(t+\tau)] + E[s(t) n(t+\tau)] + E[n(t) s(t+\tau)]
# \end{equation}
# in which the first two terms represent the autocorrelations of the sinusoid and of the random noise. The last two terms represent cross-correlations between $s(t)$ and $n(t)$, which should tend to zero, since $s(t)$ and $n(t)$ are uncorrelated. Thus
# \begin{equation}
# R_{xx}(\tau) = R_{ss}(\tau) + R_{nn}(\tau)
# \end{equation}
#
# Note that $R_{nn}(\tau)$ is an impulsive signal, as in the previous example, and that
#
# \begin{equation}
# R_{ss}(\tau) = \frac{A^2}{2}\mathrm{cos}(2\pi f \tau),
# \end{equation}
# a cosine function stripped of the phase information of $s(t)$.
#
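# The $A^2/2$ factor can be verified numerically (a quick sketch using the same signal parameters as the cell below; the lag-0 value of $R_{ss}$ should equal $A^2/2$):

```python
import numpy as np

fs, f, A = 2000, 10, 0.9
t = np.arange(0, 20, 1 / fs)
s = A * np.sin(2 * np.pi * f * t + np.pi / 3)

# biased autocorrelation estimate at lag 0: the mean square of the signal
Rss0 = np.sum(s * s) / len(s)
print(Rss0, A**2 / 2)  # both approximately 0.405
```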
# +
fs = 2000
time = np.arange(0, 20, 1/fs)
# sinusoidal signal plus random noise
st = 0.9*np.sin(2*np.pi*10*time + np.pi/3)
nt = np.random.normal(loc = 0, scale = 1, size = len(time))
xt = st + nt
# plot signal
plt.figure(figsize = (10, 3))
plt.plot(time, xt, linewidth = 1, color = 'b', alpha = 0.7)
plt.grid(linestyle = '--', which='both')
plt.ylabel(r'$x(t)$ [Pa]')
plt.xlim((0, 1))
plt.xlabel('Time [s]')
plt.tight_layout()
# +
# Compute the autocorrelation
Rxx = np.correlate(xt, xt, mode = 'same')
tau = np.linspace(-len(Rxx)/(2*fs), len(Rxx)/(2*fs), len(Rxx))
# plot the autocorrelation
plt.figure(figsize = (10, 3))
plt.plot(tau, Rxx/len(time), linewidth = 1, color = 'b')
plt.grid(linestyle = '--', which='both')
plt.ylabel(r'$R_{xx}(\tau)$ [Pa$^2$]')
plt.xlim((-1, 1))
#plt.ylim((-0.5, 1.2*max(np.abs(Rxx/fs))))
plt.xlabel(r'$\tau$ [s]')
plt.tight_layout()
| Aula 54 - Autocorrelacao exemplos/Autocorelacao exemplos.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="GuHDEMxCxh3x"
# # Wrangling Features
# + [markdown] colab_type="text" id="mB9a834Oxh32"
# In this notebook we drop features that might give unfair information, identify missing values, and explore the data.
# + colab={"base_uri": "https://localhost:8080/", "height": 122} colab_type="code" id="_E21rFlUx8-l" outputId="532ffacf-3910-4a3c-dc52-f1d6abd3c4ac"
# if working on Google Drive
goog_drive=False
if goog_drive:
from google.colab import drive
drive.mount('/content/drive')
goog_dir = '/content/drive/My Drive/lending_club_project/'
else:
goog_dir = ''
# + colab={} colab_type="code" id="-aWzqGlNxh33"
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import os
import utils
# %matplotlib inline
# -
# ### Raw Data
# + colab={"base_uri": "https://localhost:8080/", "height": 559} colab_type="code" id="aBHvuC7txh4B" outputId="bbb522d5-c42c-40bb-993e-8f59ba8592e4"
#get directory
df_train_path = os.path.join(goog_dir, 'data','df_train.csv.zip')
df_test_path = os.path.join(goog_dir,'data','df_test.csv.zip')
#download in chunks
df = utils.chunk_loader(df_train_path, index_col=0)
df.head()
# + colab={} colab_type="code" id="rx5tWd21xh4I"
df.shape
# -
# ### Table of Definitions
# + colab={"base_uri": "https://localhost:8080/", "height": 204} colab_type="code" id="96KkD_Ydxh4N" outputId="db6b4828-ba0d-4131-a590-13a6eeadc03f"
#read accompanying excel
excel_dir = os.path.join(goog_dir, 'LCDataDictionary.xlsx')
loan_stats = pd.read_excel(excel_dir, sheet_name='LoanStats')
#fix naming
loan_stats['LoanStatNew'] = loan_stats['LoanStatNew'].str.replace('verified_status_joint',
'verification_status_joint')
loan_stats['LoanStatNew'] = loan_stats['LoanStatNew'].str.replace(u' \xa0', u'')
loan_stats['LoanStatNew'] = loan_stats['LoanStatNew'].str.strip()
loan_stats.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="FTWsyuHWxh4Q" outputId="53ef6d02-b645-4577-860b-b3f24046cd9f"
loan_stats.shape
# + colab={} colab_type="code" id="yNYSJNZkxh4V"
#convert to dict
feat_to_text = dict(zip(loan_stats['LoanStatNew'], loan_stats['Description']))
# + [markdown] colab_type="text" id="IsBxcFx2xh4Y"
# # 1.0 Future Leak: Drop Cheat Features
# + [markdown] colab_type="text" id="m746dFNbxh4Z"
# In this part we wish to drop features that are recorded in the future. These features are not available to us at the time of the loan application and should not be taken into consideration.
# + colab={} colab_type="code" id="p9VK2z4vxh4e"
leak_cols = ['acc_now_delinq', 'acc_open_past_24mths', 'avg_cur_bal',
'bc_open_to_buy', 'bc_util', 'chargeoff_within_12_mths',
'collection_recovery_fee', 'collections_12_mths_ex_med',
'debt_settlement_flag', 'delinq_2yrs', 'delinq_amnt',
'disbursement_method', 'funded_amnt', 'funded_amnt_inv',
'hardship_flag', 'inq_last_6mths', 'last_credit_pull_d',
'initial_list_status', 'mo_sin_old_rev_tl_op', 'mo_sin_old_il_acct',
'last_pymnt_amnt',
'last_pymnt_d', 'mo_sin_rcnt_rev_tl_op', 'mo_sin_rcnt_tl',
'mths_since_recent_bc', 'mths_since_recent_inq', 'num_accts_ever_120_pd',
'num_actv_bc_tl', 'num_actv_rev_tl', 'num_bc_sats',
'num_bc_tl', 'num_il_tl', 'num_op_rev_tl', 'num_rev_accts',
'num_rev_tl_bal_gt_0', 'num_sats', 'num_tl_120dpd_2m',
'num_tl_30dpd', 'num_tl_90g_dpd_24m', 'num_tl_op_past_12m',
'out_prncp', 'out_prncp_inv', 'pct_tl_nvr_dlq',
'percent_bc_gt_75', 'pymnt_plan', 'recoveries',
'tax_liens', 'tot_coll_amt', 'tot_cur_bal',
'tot_hi_cred_lim','total_bal_ex_mort', 'total_bc_limit',
'total_il_high_credit_limit', 'total_pymnt', 'total_pymnt_inv',
'total_rec_int', 'total_rec_late_fee', 'total_rec_prncp',
'total_rev_hi_lim']
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" id="0UmDUsTPxh4m" outputId="5c1e5de3-1ee5-487d-9547-2a53c7f16fd4"
#display columns
for leak in leak_cols:
print('{}:'.format(leak))
print(feat_to_text[leak])
print(20*'*')
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="I7db57fZxh4t" outputId="c2313fd9-b04c-4e5d-f833-a758238353fe"
#drop leaks
df_small = df.drop(columns=leak_cols, axis=1)
drop_col_count = len(leak_cols)
print("The new dataframe has {} columns and lost {} after dropping future leaks".format(df_small.shape[1],
drop_col_count))
# -
# ## 1.1 Columns with No Variance
# We need to check whether any feature takes only a single value and is therefore useless to us. This will take out unnecessary dimensions and remove noise. One might consider this a case of pre-pruning for tree-based models.
# remove columns that have only 1 unique value
#get unique values per feature
nunique_value = df_small.apply(pd.Series.nunique)
#note columns that have single value
single_value_cols = nunique_value[nunique_value == 1].index.tolist()
# see what we are dropping
for col in single_value_cols:
print('{}:\n{}'.format(col, feat_to_text[col]))
print(20*'*')
# drop the redundant columns
df_small = df_small.drop(columns=single_value_cols)
# + [markdown] colab_type="text" id="diH7W2OOxh4x"
# # 2.0 Identify the Target Variable
# + colab={"base_uri": "https://localhost:8080/", "height": 187} colab_type="code" id="VtAxgYoWxh4y" outputId="0d69f487-7ad7-43d8-ffd7-c286636fcd3e"
#see possibilities of target variable
df_small['loan_status'].value_counts()
# + colab={"base_uri": "https://localhost:8080/", "height": 68} colab_type="code" id="2qPJG0l0xh41" outputId="c9ce7266-9ec2-418b-ab37-bb3a281c3882"
#only care about paid and default
target_vals = ['Fully Paid', 'Charged Off']
# drop values if not in list
df_target = df_small[df_small['loan_status'].isin(target_vals)]
#check values
df_target['loan_status'].value_counts()
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="-bZFwSDqxh44" outputId="6e99930d-1ea8-450e-cfee-f8bfabaefd41"
df_target.shape
# + [markdown] colab_type="text" id="Bg1FcpVMxh48"
# # 3.0 Missing Values Limits
# + [markdown] colab_type="text" id="YE9SfXeaxh49"
# In this section we will assess, on a domain-expertise basis, whether the data should be dropped or kept.<br>
# Given that our target variable is unbalanced, some features with missing values might be key to better performance. However, when there are too many missing values to make a judgement call, we must drop the feature beyond a certain threshold.
# + colab={"base_uri": "https://localhost:8080/", "height": 204} colab_type="code" id="EVCQxNJfxh4-" outputId="79792445-c446-4df0-f3a7-e377044806f9"
#make dataframe to record mean missing
nan_df = pd.DataFrame(df_target.isna().mean(), columns=['mean_nan'])
#add dtypes
nan_df['dtypes'] = df_target.dtypes
#sort by most missing at top
nan_df = nan_df.sort_values(by=['mean_nan'], ascending=False)
nan_df = nan_df.reset_index()
nan_df.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 51} colab_type="code" id="qVLFQm7nxh5C" outputId="b2b76325-4bbc-428e-c884-570724cf8deb"
# tally of average missing values
nan_df.mean()
# + colab={"base_uri": "https://localhost:8080/", "height": 51} colab_type="code" id="xV_Jgv_wxh5F" outputId="52a4055d-c1ca-490b-9ee0-79dd218ca662"
nan_df.std()
# + colab={"base_uri": "https://localhost:8080/", "height": 295} colab_type="code" id="Z6V_EynVxh5I" outputId="49e82f49-affe-4f34-d556-9722789a22b5"
#draw histogram of missing values
nan_df.hist(column='mean_nan', bins=np.arange(0,1.05,0.05))
plt.xlabel("percent missing")
plt.ylabel("count")
plt.title("Missing Features Histogram")
plt.xticks(np.arange(0, 1.1, 0.1))
plt.show()
# + [markdown] colab_type="text" id="yzVAHK8Xxh5L"
# We decide to drop the features that have more than **50%** missing values.
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="0yGs5Pf6xh5M" outputId="7900321b-cc08-4fa4-b6be-9f0aa87af17f"
#set the minimum missing percentage
nan_pct_min = 0.5
#get columns with more missing values than minimum
missing_cols = nan_df[nan_df['mean_nan']>=nan_pct_min]['index']
#drop the columns
df_drop_nan = df_target.drop(columns=missing_cols)
#see new shape
df_drop_nan.shape
# + [markdown] colab_type="text" id="9gZUAAmmxh5R"
# # 4.0 Make Features Numerical & Fill NaNs
# + colab={"base_uri": "https://localhost:8080/", "height": 561} colab_type="code" id="iMNrpbaixh5S" outputId="620ec093-1f30-4898-faa5-f8637e942f60"
df_drop_nan.dtypes
# + colab={"base_uri": "https://localhost:8080/", "height": 359} colab_type="code" id="IZwyvdMDxh5W" outputId="7b7b8a0e-8c60-4635-e4cf-7f7991cd65da"
#inspect what is left from features with missing values
nan_df[(nan_df['mean_nan']<nan_pct_min) & (nan_df['mean_nan']!=0)]
# + [markdown] colab_type="text" id="KxZAmCN_xh5Y"
# ## 4.1 term
# + colab={"base_uri": "https://localhost:8080/", "height": 68} colab_type="code" id="fMBE4dqPxh5Z" outputId="e132f301-4cd2-49a5-c59b-8fcab66890bc"
df_drop_nan['term'].value_counts()
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="76MDN0JNxh5c" outputId="463ee2f7-d341-49e6-867a-ee0bcb3e1eee"
#Make a dict to convert term to integer
term_to_int = {' 36 months': 1, ' 60 months': 2}
term_to_int
# + [markdown] colab_type="text" id="RB80j9n4xh5g"
# ## 4.2 grade
# + colab={"base_uri": "https://localhost:8080/", "height": 153} colab_type="code" id="7TRdLW4zxh5h" outputId="c97c2f80-ed2c-4987-c6d5-4e7fc57e1a65"
df_drop_nan['grade'].value_counts()
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="RG5vmhVPxh5k" outputId="8c07220f-dcbb-4965-91bb-cc2b78b544eb"
feat_to_text['grade']
# -
df_drop_nan['grade'].value_counts()
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="PGTUx_4dxh5n" outputId="4198a1d8-27b0-426c-8b8e-4491cf97faf9"
grade_to_int = dict(zip(['A', 'B', 'C', 'D', 'E', 'F', 'G'], np.arange(7, 0, -1)))
grade_to_int
# + [markdown] colab_type="text" id="henVAxdmxh5t"
# ## 4.3 sub_grade
# Note that this feature is related to the main grade
# -
df_drop_nan['sub_grade'].value_counts()
plt.figure(figsize=(10,10))
sns.countplot(x='sub_grade',
data=df_drop_nan,
order=sorted(df_drop_nan['sub_grade'].unique()))
plt.savefig(os.path.join('plots', 'sub_grade_count.png'))
# + colab={"base_uri": "https://localhost:8080/", "height": 54} colab_type="code" id="QV1NA1dgxh5u" outputId="5c9d67b8-5f97-4714-ed1a-7fbae186ab77"
#get unique sub grades sorted in ascending order
sub_grade_sorted = sorted(df_drop_nan['sub_grade'].unique())[::-1]
sub_grade_to_int = dict(zip(sub_grade_sorted, np.arange(0, len(sub_grade_sorted))))
#print side to side
print(sub_grade_to_int)
# + [markdown] colab_type="text" id="c8duulLxxh5x"
# ## 4.4 emp_title
# + colab={"base_uri": "https://localhost:8080/", "height": 544} colab_type="code" id="W9r3FFXpxh5y" outputId="aeeba5ac-cd64-4de8-94ad-0592b18e9e94"
#what are top titles
df_drop_nan['emp_title'].value_counts()[:30]
# + [markdown] colab_type="text" id="iZRSRY7Xxh51"
# There are a lot of titles to consider, so it makes sense to turn this into a set of binary features. We could use the **pandas.get_dummies** method for this task; however, we will use a custom function that combines similar titles, like VP and President, into executives.
# + colab={} colab_type="code" id="rJp2n1n5xh52"
def emp_title_to_dict(e_title):
#force make string if not and make lower
title_lower = str(e_title).lower()
#list of employment types to consider
emp_list = ['e_manager', 'e_educ', 'e_self',
'e_health', 'e_exec', 'e_driver',
'e_law', 'e_admin', 'e_fin', 'e_other']
#instantiate title dict
title_dict = dict(zip(emp_list, len(emp_list)*[0]))
#check and fill out dict
if any(job in title_lower for job in ['manag', 'superv']):
title_dict['e_manager'] = 1
elif 'teacher' in title_lower:
title_dict['e_educ'] = 1
elif 'owner' in title_lower:
title_dict['e_self'] = 1
elif any(job in title_lower for job in ['rn', 'registered nurse', 'nurse',
'doctor', 'pharm', 'medic']):
title_dict['e_health'] = 1
elif any(job in title_lower for job in ['vice president', 'president', 'director',
'exec', 'chief']):
title_dict['e_exec'] = 1
elif any(job in title_lower for job in ['driver', 'trucker']):
title_dict['e_driver'] = 1
elif any(job in title_lower for job in ['lawyer', 'legal', 'judg']):
title_dict['e_law'] = 1
elif 'admin' in title_lower:
title_dict['e_admin'] = 1
elif any(job in title_lower for job in ['analyst', 'financ', 'sales']):
title_dict['e_fin'] = 1
else:
title_dict['e_other'] = 1
return title_dict
# + colab={"base_uri": "https://localhost:8080/", "height": 170} colab_type="code" id="dwPS2FNqxh56" outputId="b757842e-657e-4600-a1e2-b09f3e747e0e"
#example
emp_title_to_dict('regional executive manager')
# + [markdown] colab_type="text" id="fzyqB6kYxh59"
# ## 4.5 emp_length
# This feature contains very few entries with missing values. It also raises the question of people with little to no work experience.
# + colab={"base_uri": "https://localhost:8080/", "height": 221} colab_type="code" id="I68QU6fDxh5-" outputId="65dbf015-4fac-4ac4-a7ef-43144a15ed0c"
df_drop_nan['emp_length'].value_counts()
# + colab={} colab_type="code" id="ceAbmoebxh6E"
def emp_time_to_int(emp_length_str):
"""
extracts the digits from employment duration
"""
#provision for missing values
if pd.isna(emp_length_str):
return 0
emp_int = [char for char in (emp_length_str) if char.isdigit()]
emp_int = ''.join(emp_int)
if len(emp_int)>= 1:
return int(emp_int)
else:
return 0
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="LqK6w6Noxh6J" outputId="04649ec3-102d-4478-81ff-dfe0be72e466"
#example
emp_time_to_int('10+ years')
# + colab_type="text" id="DKGkvCRrxh6O" active=""
# ## 4.6 home_ownership
# + colab={"base_uri": "https://localhost:8080/", "height": 136} colab_type="code" id="3-GC8TnYxh6R" outputId="a522189c-06a0-412d-865a-815567d9ff67"
df_drop_nan['home_ownership'].value_counts()
# + colab={"base_uri": "https://localhost:8080/", "height": 54} colab_type="code" id="TzvdL30Qxh6U" outputId="ea43df64-0120-4464-f292-a81b6ccffec3"
feat_to_text['home_ownership']
# + colab={} colab_type="code" id="JFZFbV3Cxh6Y"
#make dict for homeownership rank ownership as lowest value
home_to_int = {'MORTGAGE': 4,
'RENT': 3,
'OWN': 5,
'ANY': 2,
'OTHER': 1,
'NONE':0 }
# + [markdown] colab_type="text" id="Eg6KayRHxh6b"
# ## 4.7 verification_status
# + colab={"base_uri": "https://localhost:8080/", "height": 85} colab_type="code" id="LPa-CfFXxh6c" outputId="7fcd6e5b-1756-4883-ad61-84361a9b0595"
df_drop_nan['verification_status'].value_counts()
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="mijMf1Clxh6h" outputId="d5e59496-f8c2-4a24-fd82-b44dd3f2842c"
feat_to_text['verification_status']
# + colab={} colab_type="code" id="XqXhzY5vxh6m"
#dict to rank income verification
ver_stat_to_int = {'Source Verified':2,
'Verified': 1,
'Not Verified': 0}
# + [markdown] colab_type="text" id="HhmFgVBNxh6q"
# ## 4.10 loan_status
# This is our binary target variable
# + colab={"base_uri": "https://localhost:8080/", "height": 68} colab_type="code" id="sQ4kOqa8xh6r" outputId="c7cb2469-fe7e-453a-ce8c-cb0582dc185c"
df_drop_nan['loan_status'].value_counts()
# + colab={} colab_type="code" id="Mpr-YsPexh6z"
loan_stat_to_int = {'Fully Paid': 1,
'Charged Off': 0}
# + [markdown] colab_type="text" id="Ke473Lnfxh66"
# ## 4.11 purpose
# -
feat_to_text['purpose']
# + colab={"base_uri": "https://localhost:8080/", "height": 272} colab_type="code" id="VafMnaz1xh68" outputId="efd8a726-f752-4f00-eb1b-14df457fcd44"
df_drop_nan['purpose'].value_counts()
# + colab={} colab_type="code" id="nAANXsNfxh7D"
def purpose_to_dummy(purpose_series, prefix='purp'):
"""
returns dummy dataframe
"""
return pd.get_dummies(purpose_series.str.lower(), prefix=prefix)
# -
purpose_to_dummy(df_drop_nan['purpose']).head()
# + [markdown] colab_type="text" id="k678KCTAxh7I"
# ## 4.12 title
#
# Note the subtle difference between title and purpose. Title is the loan title as given by the borrower; purpose is the category provided by the borrower for the loan request. It might make sense to keep purpose and drop title, as purpose is more concise.
# -
feat_to_text['title']
# + colab={"base_uri": "https://localhost:8080/", "height": 459} colab_type="code" id="GD1qH44Axh7J" outputId="ac16dac6-de5a-43e7-8c7e-77e5a560acc8"
df_drop_nan['title'].value_counts()[:25]
# + [markdown] colab_type="text" id="73gGZHLqxh7O"
# This is another instance where we should return a dataframe
# + colab={} colab_type="code" id="eENyVTTbxh7P"
def title_to_int(title):
#force make string if not and make lower
title_lower = str(title).lower()
#list of employment types to consider
title_list = ['debt_consol', 'refinancing', 'major',
'home', 'car', 'travel', 'health', 'business']
#instantiate title dict
title_dict = dict(zip(title_list,
len(title_list)*[0]))
#check and fill out dict
if 'consol' in title_lower:
title_dict['debt_consol'] = 1
if 'refinanc' in title_lower:
title_dict['refinancing'] = 1
if 'major' in title_lower:
title_dict['major'] = 1
if 'home imp' in title_lower:
title_dict['home'] = 1
if 'car' in title_lower:
title_dict['car'] = 1
if any(t in title_lower for t in ['travel', 'vacation']):
title_dict['travel'] = 1
if any(t in title_lower for t in ['health', 'medic']):
title_dict['health'] = 1
if any(t in title_lower for t in ['business', 'invest']):
title_dict['business'] = 1
return title_dict
# + colab={"base_uri": "https://localhost:8080/", "height": 153} colab_type="code" id="vCIaf9Umxh7U" outputId="6074d182-03f5-4acb-ac52-11b1bd802a39"
#example
title_to_int('HoMe ImprovemeNT')
# + [markdown] colab_type="text" id="nqu7UsNixh7Y"
# ## 4.13 zip_code
# We will extract the first 3 digits of zip code to keep track of stats
# + colab={"base_uri": "https://localhost:8080/", "height": 119} colab_type="code" id="6R5XqBjVxh7a" outputId="2b47a4c1-e4a8-4f8c-bd18-b4754339ff66"
#in future will wrangle data using zip code
df_drop_nan['zip_code'].str.replace('x', '').value_counts()[:5]
# + [markdown] colab_type="text" id="0wCsrDm1xh7e"
# ## 4.14 addr_state
# + colab={"base_uri": "https://localhost:8080/", "height": 119} colab_type="code" id="8hLz9HqQxh7f" outputId="718c04ec-8072-4a34-a19d-96767ca7feef"
df_drop_nan['addr_state'].value_counts()[:5]
# + [markdown] colab_type="text" id="-_dmGKOexh7h"
# Might drop this feature at a later stage or use to wrangle data on a geographic basis.
# + [markdown] colab_type="text" id="lz5Bibfcxh7i"
# ## 4.15 earliest_cr_line & issue_d
# + colab={} colab_type="code" id="Kocqq5Znxh7i"
def credit_time_delta(earliest_cr_line, issue_d):
#get time differential
time_delta = pd.to_datetime(issue_d) - pd.to_datetime(earliest_cr_line)
return time_delta.dt.days
# + [markdown] colab_type="text" id="aMMw5OeCxh7l"
# ## 4.16 initial_list_status
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="web1O0TSxh7l" outputId="ec5d1e35-51c5-4fba-c4f2-41e5a9b92403"
#note that this was a cheat feature
feat_to_text['initial_list_status']
# + colab={} colab_type="code" id="TozktqsUxh7q"
# #convert to binary target
# initial_list_status_dict = {'w':0, 'f':1}
# + [markdown] colab_type="text" id="CNJC5erMxh7s"
# ## 4.17 application_type
# + colab={"base_uri": "https://localhost:8080/", "height": 68} colab_type="code" id="CVdmgbaqxh7t" outputId="5f4f031e-d389-4d90-da72-4a08af6982a1"
df_drop_nan['application_type'].value_counts()
# + colab={} colab_type="code" id="qmHjKvvcxh7x"
application_type_dict = {'Individual': 0,
'Joint App': 1}
# + [markdown] colab_type="text" id="A371jwNcEKQu"
# ## 4.18 mo_sin_old_il_acct
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="oU-2j9t_EMxJ" outputId="b2bc3d7d-e6c7-4081-b873-9f2a062d14bd"
feat_to_text['mo_sin_old_il_acct']
# + colab={"base_uri": "https://localhost:8080/", "height": 315} colab_type="code" id="dq9Ppan0EMo0" outputId="48d23596-f0f8-43b3-dc50-1e5475d46ef7"
#df_drop_nan.hist(column='mo_sin_old_il_acct')
# + [markdown] colab_type="text" id="Lp54GdhrHkwz"
# For this feature we decide to fill missing values with the mode.
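# A minimal standalone sketch of the mode-fill strategy on toy data (not the actual column):

```python
import numpy as np
import pandas as pd

# hypothetical column with missing entries
s = pd.Series([3.0, np.nan, 3.0, 7.0, np.nan])
mode_val = s.mode()[0]          # most frequent value: 3.0
filled = s.fillna(mode_val)
print(filled.tolist())          # [3.0, 3.0, 3.0, 7.0, 3.0]
```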
# + [markdown] colab_type="text" id="pFwtKM8TPf4r"
# ## 4.19 mo_sin_old_rev_tl_op
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="Yp22ghRhPpWB" outputId="92bffb57-72ae-4f3a-b3bf-4743199859c2"
feat_to_text['mo_sin_old_rev_tl_op']
# + colab={"base_uri": "https://localhost:8080/", "height": 315} colab_type="code" id="ZtBVs_Q2qGzV" outputId="0cf6445a-4478-46a5-a252-4bea6452d218"
# df_drop_nan.hist(column='mo_sin_old_rev_tl_op')
# + [markdown] colab_type="text" id="X3q-IE8NPp5J"
# ## 4.20 mort_acc
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="8KeEr38drShJ" outputId="40db3dc0-7ff1-4588-aa1d-e1b8610a5180"
feat_to_text['mort_acc']
# + colab={"base_uri": "https://localhost:8080/", "height": 315} colab_type="code" id="rF5OxglVPqVe" outputId="16f77473-2235-4fc3-90fe-e05beeace785"
df_drop_nan.hist(column='mort_acc', bins=np.arange(0, 20, 2))
# + [markdown] colab_type="text" id="c7jI3LMgPq0Z"
# ## 4.21 revol_util
# + colab={"base_uri": "https://localhost:8080/", "height": 54} colab_type="code" id="09Sk7CXGrV-R" outputId="dc92e533-66d4-43b8-9f5e-2d358ea50cee"
feat_to_text['revol_util']
# + colab={"base_uri": "https://localhost:8080/", "height": 315} colab_type="code" id="oKFQm1uXPrPT" outputId="4989e940-5bc2-407a-8eeb-aa411e20de77"
df_drop_nan.hist(column='revol_util', bins=np.arange(0,150,10))
# + [markdown] colab_type="text" id="5-MSLHuyP0Km"
# ## 4.22 pub_rec_bankruptcies
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="dRKjxxbTP0hH" outputId="5ccc018c-6a6f-4c81-ef70-612152dd5ce8"
feat_to_text['pub_rec_bankruptcies']
# + colab={"base_uri": "https://localhost:8080/", "height": 315} colab_type="code" id="PujpprhHo7C5" outputId="8860fe2d-4b69-4953-ed1a-0d381aa4912d"
df_drop_nan.hist(column='pub_rec_bankruptcies', bins=np.arange(0, 10, 1))
# + [markdown] colab_type="text" id="dyObjG74o7T7"
# ## 4.23 dti
# + colab={} colab_type="code" id="442x_sado7yJ"
feat_to_text['dti']
# + colab={} colab_type="code" id="tBjaGri8rvpb"
df_drop_nan.hist(column='dti', bins=np.arange(0, 150, 10))
# + [markdown] colab_type="text" id="Ku0figTcxh70"
# ## 4.24 Combine steps above in function
#
# In this part we combine all the steps from the previous subsections into one custom function to clean the data. Based on our observations on the training set, the same transformation will later be applied to the testing set as well.
# + colab={} colab_type="code" id="t4D7fgvAs7ta"
#instantiate dict
mode_dict = {}
#loop over columns
for col in df_drop_nan.columns:
mode_dict[col] = df_drop_nan[col].mode()[0]
#make dataframe object
df_train_mode = pd.DataFrame.from_dict(mode_dict,
orient='index',
columns=['mode'])
df_train_mode = df_train_mode.reset_index(drop=False)
df_train_mode_path = os.path.join(goog_dir,'data', 'df_train_mode.csv')
df_train_mode.to_csv(df_train_mode_path)
df_train_mode.head(10)
# -
df_train_mode.tail()
df_train_mode[df_train_mode['index']=='mort_acc']['mode'].values[0]
df_train_mode[df_train_mode['index']=='pub_rec']
# + colab={} colab_type="code" id="B27wQRNkxh70"
def df_object_to_num(df_obj, df_train_mode=df_train_mode):
"""Custom function to transform categorical features into numerical ones
This function will also be applied on the testing data so that we apply the same exact kind of transformation
"""
#make deep copy
df_num = df_obj.copy()
#keep track of columns to drop at end
col_drop_list = ['title']
#loads from path defined outside of function
df_train_mode = pd.read_csv(goog_dir+'data/df_train_mode.csv')
#fill missing values with mode
for col in df_num.columns:
#lookup fill value
mode_val = df_train_mode[df_train_mode['index']==col]['mode'].values[0]
#fill nan
df_num[col] = df_num[col].fillna(mode_val)
#drop nans in case we missed any
df_num = df_num.dropna(axis=0)
#fix types
df_num = df_num.astype(
{'mort_acc': 'float64',
'dti': 'float64',
#'mo_sin_old_il_acct': 'float64',
#'mo_sin_old_rev_tl_op': 'float64',
'pub_rec_bankruptcies': 'float64'
})
### instantiate dicts ###
term_to_int = {' 36 months': 1,
' 60 months': 2}
grade_to_int = dict(zip(['A', 'B', 'C', 'D', 'E', 'F', 'G'], np.arange(7, 0, -1)))
#copied from above, we can write a function to generate this dict
sub_grade_sorted = {'G5': 0, 'G4': 1, 'G3': 2, 'G2': 3, 'G1': 4,
'F5': 5, 'F4': 6, 'F3': 7, 'F2': 8, 'F1': 9,
'E5': 10, 'E4': 11, 'E3': 12, 'E2': 13, 'E1': 14,
'D5': 15, 'D4': 16, 'D3': 17, 'D2': 18, 'D1': 19,
'C5': 20, 'C4': 21, 'C3': 22, 'C2': 23, 'C1': 24,
'B5': 25, 'B4': 26, 'B3': 27, 'B2': 28, 'B1': 29,
'A5': 30, 'A4': 31, 'A3': 32, 'A2': 33, 'A1': 34}
home_to_int = {'MORTGAGE': 4,
'RENT': 3,
'OWN': 5,
'ANY': 2,
'OTHER': 1,
'NONE':0 }
ver_stat_to_int = {'Source Verified':2,
'Verified': 1,
'Not Verified': 0}
loan_stat_to_int = {'Fully Paid': 1,
'Charged Off': 0}
initial_list_status_dict = {'w':0, 'f':1}
application_type_dict = {'Individual': 0,
'Joint App': 1}
    #replacements
    df_num.replace({'term': term_to_int,
                    'grade': grade_to_int,
                    'sub_grade': sub_grade_sorted,
                    'home_ownership': home_to_int,
                    'verification_status': ver_stat_to_int,
                    'loan_status': loan_stat_to_int,
                    'initial_list_status': initial_list_status_dict,
                    'application_type': application_type_dict}, inplace=True)
#fix zip codes and keep first 3
df_num['zip_3'] = df_num['zip_code'].str.replace('x','')
df_num['zip_2'] = df_num['zip_3'].str[:2]
col_drop_list.append('zip_code')
#employee length extract numbers
df_num['emp_length'] = df_num['emp_length'].apply(emp_time_to_int)
#store employees in list
emp_title_list = []
for emp in df_num['emp_title']:
emp_title_list.append(emp_title_to_dict(emp))
#make dataframe
df_emp_title = pd.DataFrame.from_dict(emp_title_list)
df_emp_title.index = df_num.index
#join
df_num = pd.merge(df_num, df_emp_title, left_index=True, right_index=True)
#df_num = pd.concat([df_num, df_emp_title], axis=1)
#take out col
col_drop_list.append('emp_title')
#get time differential
col_drop_list.extend(['issue_d', 'earliest_cr_line'])
time_delta = pd.to_datetime(df_num['issue_d']) - pd.to_datetime(df_num['earliest_cr_line'])
df_num['time_delta'] = time_delta.dt.days
#get purpose
purp_df = purpose_to_dummy(df_num['purpose'])
    purp_df = purp_df.astype('int64')
col_drop_list.append('purpose')
purp_df.index = df_num.index
#df_num = pd.concat([df_num, purp_df])
df_num = pd.merge(df_num, purp_df, left_index=True, right_index=True)
#drop more types
col_drop_list.append('addr_state')
df_num = df_num.drop(columns=col_drop_list)
return df_num
# + colab={} colab_type="code" id="5BDKXVjcxh73"
#make entries numerical
df_num = df_object_to_num(df_drop_nan)
df_num.head()
# -
# ## 4.25 Explore Features
# ### Interest Rate
plt.hist(df_num['int_rate'], bins=np.arange(0, 35, 0.5))
plt.xlabel("Interest Rate (%)")
plt.ylabel("Count")
plt.show()
# ### Loan Amount
plt.hist(df_num['loan_amnt'], bins=np.arange(0, 45_000, 1000))
plt.xlabel("Loan Amount (USD)")
plt.ylabel("Count")
plt.show()
# ### Number of Derogatory Public Records
plt.hist(df_num['pub_rec'], bins=np.arange(0,11,1))
plt.xlabel("pub_rec")
plt.ylabel("Count")
plt.show()
# ### Annual Income
plt.hist(df_num['annual_inc'], bins=np.arange(0, 200_000, 5000))
plt.xlabel("Annual Income (USD)")
plt.ylabel("Count")
plt.show()
# + [markdown] colab={} colab_type="code" id="OjO7F29axh76"
# # 5.0 Custom Scaling
#
# It makes more economic sense to compare observations within the same metropolitan statistical area (MSA) than against the whole nation. The first digit of a zip code indicates a broad geographic region, and the next two digits denote the central post office facility for that area. We will therefore keep track of statistics such as the mean and standard deviation of samples grouped by the first 3 digits of the zip code where there is sufficient data, and otherwise group by the first 2 digits.
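# The group-wise standardization we have in mind can be sketched on toy data (column names here are hypothetical):

```python
import pandas as pd

# toy frame: standardize a value within its 2-digit zip group
df = pd.DataFrame({'zip_2': ['90', '90', '10', '10'],
                   'income': [50.0, 150.0, 20.0, 40.0]})
grouped = df.groupby('zip_2')['income']
# subtract the group mean and divide by the group standard deviation
df['income_scaled'] = (df['income'] - grouped.transform('mean')) / grouped.transform('std')
print(df['income_scaled'].round(4).tolist())  # [-0.7071, 0.7071, -0.7071, 0.7071]
```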
# + [markdown] colab={} colab_type="code" id="D9gWXF9kxh78"
# ## 5.1 Macro Level
# -
#instantiate groupby object
df_macro_groupby = df_num.drop(columns=['zip_3']).groupby(by='zip_2')
#take note of columns to scale
scale_cols = ['loan_amnt', 'term', 'int_rate', 'installment',
'grade', 'sub_grade','emp_length', 'home_ownership',
'annual_inc', 'open_acc', 'revol_bal', 'dti',
'total_acc', 'mort_acc', 'time_delta']
# +
#get mean
df_macro_mean = df_macro_groupby.mean()
#eliminate binary features
df_macro_mean = df_macro_mean[scale_cols]
df_macro_mean.head()
# -
#save statistics for later use
df_macro_mean.to_csv(os.path.join(goog_dir,'data/df_macro_mean.csv'))
# +
#get std
df_macro_std = df_macro_groupby.std()
df_macro_std = df_macro_std[scale_cols]
df_macro_std.head()
# -
#save statistics for later use
df_macro_std.to_csv(os.path.join(goog_dir,'data/df_macro_std.csv'))
#get counts
df_macro_count = pd.DataFrame(df_num['zip_2'].value_counts())
df_macro_count.head()
df_macro_count.describe()
#sample measurement
df_macro_count.loc['85']
df_macro_count.hist(column='zip_2',
bins=np.arange(0, 55_000, 5000))
plt.xlabel("Loan Applications")
plt.ylabel("Count")
plt.title("Distribution of Loan Applications by 2 Digit Zip Code")
# ## 5.2 Micro Level
#instantiate groupby object
df_micro_groupby = df_num.drop(columns=['zip_2']).groupby(by='zip_3')
# +
#get mean
df_micro_mean = df_micro_groupby.mean()
df_micro_mean.head()
# +
#get std
df_micro_std = df_micro_groupby.std()
df_micro_std.head()
# -
df_num[df_num['zip_3']=='009']
#get counts
df_micro_count = pd.DataFrame(df_num['zip_3'].value_counts())
df_micro_count.head()
df_micro_count.describe()
df_micro_count.hist(column='zip_3',
bins=np.arange(0, 16000, 1000))
plt.xlabel("Loan Applications")
plt.ylabel("Count")
plt.title("Distribution of Loan Applications by 3 digit Zip Code")
# ## 5.3 Scaling Function
# In this section we will apply a standard scaler that relies on observations from the training data. The same scaler that was fitted on the training data will be used to transform both training and test data.
# +
def custom_scaler(df_unscaled, zip_means = df_macro_mean, zip_std = df_macro_std):
"""
Applies a custom scaling to a dataframe based on observations in same 2 digit zip code
returns new dataframe
"""
#nested for loops to scale by 2 digit zip code
df_scaled_list = []
for code in df_unscaled['zip_2'].unique():
#make deep copy of localized zip
df_local_zip = df_unscaled[df_unscaled['zip_2']==code].copy()
#for each feature we want to scale
for feat in zip_means.columns:
#subtract and divide by std
df_local_zip[feat] = (df_local_zip[feat] - zip_means.loc[code,feat]) / zip_std.loc[code,feat]
#after scaling is done for local zip area we append to list
df_scaled_list.append(df_local_zip)
#concat in one df
df_scaled = pd.concat(df_scaled_list)
return df_scaled
df_train_scaled = custom_scaler(df_unscaled=df_num, zip_means = df_macro_mean, zip_std = df_macro_std)
df_train_scaled.head()
# -
# # 6.0 Repeat On Test Data
#load test data
df_test = utils.chunk_loader(df_test_path, index_col=0)
# +
#drop leak columns
df_test = df_test.drop(columns=leak_cols, axis=1)
# drop the redundant columns
df_test = df_test.drop(columns=single_value_cols)
#drop values not in the target list
df_test = df_test[df_test['loan_status'].isin(target_vals)]
#drop the columns
df_test = df_test.drop(columns=missing_cols)
# +
#make numerical
df_test_num = df_object_to_num(df_test)
#scale
df_test_scaled = custom_scaler(df_test_num)
# -
#peek
df_test_scaled.head()
# # 6.1 Shape Check
#
# In some functions for the training data we used the `pd.get_dummies` method, which can produce different columns for the train and test data. We must ensure that both datasets have the same shape before fitting the same algorithms. For example, we see that the test set contains no loans for education or weddings. This could be a product that was discontinued; alternatively, our testing set might not be representative enough, since it covers only the first few months of the year, which are not typically wedding season.
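# The mismatch can be reproduced on toy data (the categories below are hypothetical, not the actual loan purposes):

```python
import pandas as pd

# get_dummies can yield different columns on train vs. test samples
train_dummies = pd.get_dummies(pd.Series(['car', 'wedding', 'car']))
test_dummies = pd.get_dummies(pd.Series(['car', 'car']))
common = sorted(set(train_dummies.columns) & set(test_dummies.columns))
print(common)  # ['car'] -- 'wedding' never appears in the test sample
```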
different_columns = set.difference(set(df_train_scaled.columns), set(df_test_scaled.columns))
different_columns
#get common columns with set intersection
common_cols_set = set.intersection(set(df_train_scaled.columns), set(df_test_scaled.columns))
common_cols_list = list(common_cols_set)
df_train_scaled = df_train_scaled[common_cols_list]
df_test_scaled = df_test_scaled[common_cols_list]
# # 7.0 Save
# +
#zip columns no longer needed
zip_cols = ['zip_2', 'zip_3']
#drop columns
df_train_save = df_train_scaled.drop(columns=zip_cols)
df_test_save = df_test_scaled.drop(columns=zip_cols)
#save as compressed zipped files to save space
df_train_save.to_csv(os.path.join(goog_dir,'data','df_train_scaled.csv'), compression='zip')
df_test_save.to_csv(os.path.join(goog_dir,'data','df_test_scaled.csv'),compression='zip')
# -
# # 8.0 Conclusion
# This concludes the feature engineering part of the project. In the next notebook we will evaluate machine learning models to select a best performer and iterate on it.
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from glob import glob
from astropy.table import Table
# -
import matplotlib
matplotlib.rcParams.update({'font.size':18})
matplotlib.rcParams.update({'font.family':'serif'})
# data from: https://www.ngdc.noaa.gov/stp/solar/solarflares.html
# but cleaned up a bit
FI_files = glob('FI_total/*.txt')
# Requested Citation:
#
# *The "Flare Index" dataset was prepared by the Kandilli Observatory and Earthquake Research Institute at the Bogazici University and made available through the NOAA National Geophysical Data Center (NGDC).*
years = pd.Series(FI_files).str[-8:-4].astype('float').values
years
# +
k=0
cnames=('Day', 'Jan', 'Feb', 'Mar', 'Apr','May','Jun','Jul','Aug','Sep','Oct','Nov','Dec')
df = pd.read_table(FI_files[k], skip_blank_lines=True, skiprows=7, delim_whitespace=True,
skipfooter=4, names=cnames, engine='python')
FI = df.iloc[:,1:].sum(axis=0).values
DD = years[k]+np.arange(0,1,1/12)
# +
for k in range(1,len(FI_files)):
df = pd.read_table(FI_files[k], skip_blank_lines=True, skiprows=7, delim_whitespace=True,
skipfooter=4, names=cnames, engine='python')
FI = np.append(FI, df.iloc[:,1:].sum(axis=0).values)
DD = np.append(DD, years[k]+np.arange(0,1,1/12))
# print(years[k],np.shape(df.iloc[:,1:].sum(axis=0).values), np.shape(years[k]+np.arange(0,1,1/12)))
ss = np.argsort(DD)
DD = np.array(DD[ss], dtype='float')
FI = np.array(FI[ss], dtype='float')
# -
FI0 = FI
DD0 = DD
plt.figure(figsize=(10,5))
plt.plot(DD, FI)
plt.ylabel('Flare Index')
# +
import celerite
from celerite import terms
from scipy.optimize import minimize
# A non-periodic component
# Q = 1.0 / np.sqrt(2.0)
# w0 = 44
# S0 = np.var(FI) / (w0 * Q)
# kernel = terms.SHOTerm(log_S0=np.log(S0), log_Q=np.log(Q), log_omega0=np.log(w0),
# bounds=bounds)
# kernel.freeze_parameter('log_Q')
# A periodic component
def neg_log_like(params, y, gp):
gp.set_parameter_vector(params)
return -gp.log_likelihood(y)
def grad_neg_log_like(params, y, gp,):
gp.set_parameter_vector(params)
return -gp.grad_log_likelihood(y)[1]
# -
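# The optimizer call below follows the standard SciPy pattern of an objective plus its gradient; a minimal toy illustration (hypothetical objective, not the GP likelihood):

```python
import numpy as np
from scipy.optimize import minimize

# toy objective with analytic gradient, minimized with L-BFGS-B under box bounds
objective = lambda p: (p[0] - 3.0) ** 2
gradient = lambda p: np.array([2.0 * (p[0] - 3.0)])
sol = minimize(objective, x0=np.array([0.0]), jac=gradient,
               method='L-BFGS-B', bounds=[(-10.0, 10.0)])
print(round(float(sol.x[0]), 4))  # 3.0
```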
DD0.shape
# +
DD = DD0[0:]
FI = FI0[0:]
bounds = dict(log_S0=(-0, 15), log_Q=(-15, 15), log_omega0=(-15, 5))
EPAD = 100.
Q = 1.0
w0 = 2*np.pi/11.
S0 = np.var(FI) / (w0 * Q)
kernel = terms.SHOTerm(log_S0=np.log(S0), log_Q=np.log(Q), log_omega0=np.log(w0),
bounds=bounds)
gp = celerite.GP(kernel, mean=np.mean(FI), fit_mean=True)
gp.compute(DD, yerr=(np.sqrt(FI + 0.75) + 1.0)/10 + EPAD)  # add extra y-error term to account for scatter
bounds = gp.get_parameter_bounds()
initial_params = gp.get_parameter_vector()
soln = minimize(neg_log_like, initial_params, jac=grad_neg_log_like,
method='L-BFGS-B', bounds=bounds, args=(FI, gp))
gp.set_parameter_vector(soln.x)
x = np.linspace(1975, 2024, 500)
mu, var = gp.predict(FI, x, return_var=True)
print(2*np.pi / np.exp(gp.get_parameter_dict()['kernel:log_omega0']))
# +
plt.figure(figsize=(10,5))
plt.plot(DD, FI, alpha=0.75)
# plt.plot(DD0, FI0, alpha=0.25)
plt.plot(x, mu)
plt.fill_between(x, mu+np.sqrt(var), mu-np.sqrt(var), color='C1', alpha=0.3,
edgecolor="none")
plt.ylabel(r'H$\alpha$ Flare Index')
plt.savefig('solar_flare_index_gp.pdf', dpi=150, bbox_inches='tight', pad_inches=0.25)
# -
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.6.1
# language: julia
# name: julia-1.6
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # Monads 2.0, aka Algebraic Effects: ExtensibleEffects.jl
#
# an introduction to Monads, their downsides, and their next generation
#
# by <NAME>, <EMAIL>
# + [markdown] cell_style="center" slideshow={"slide_type": "-"}
# ## Outline
# * Working with Monads
# * Limitations of Monads
# * Extensible Effects - How they work
# * Extensible Effects - How to define your own
# * Extensible Effects - more
# + [markdown] cell_style="split" slideshow={"slide_type": "subslide"}
# <img src="images/jakobsweg-fisterra-fokus.png"
# alt="<NAME>"
# style="float: right; margin-right: 3em"
# width="50%"/>
# + [markdown] cell_style="split" slideshow={"slide_type": "-"}
# ### <NAME>
#
# - freelancer
# - end-to-end Data & AI consultant
# - organizer of Julia User Group Munich
#
# ### key interests
# - professional best practices
# - probabilistic programming
# - functional programming
# + [markdown] cell_style="center" slideshow={"slide_type": "slide"}
# # Let's start simple
#
# The key idea behind Monads is to **encapsulate a computational context**
# <br>
# simplifying our daily programming
# + [markdown] cell_style="center" slideshow={"slide_type": "subslide"}
# Loving context managers, let's encapsulate **construction (aka enter)** and **destruction (aka exit)**
# + cell_style="center"
struct MyContextManager
implementation
end
(context::MyContextManager)(continuation) = context.implementation(continuation)
# + cell_style="center"
context = MyContextManager(continuation -> begin
println("construction")
x = 42
result = continuation(x)
println("destruction")
result
end)
# -
context(println)
context() do inner_value
println(inner_value)
inner_value
end
# + [markdown] slideshow={"slide_type": "subslide"}
# # How to work with such a computational context?
#
# Situation: given a function `f` we want to apply it to our `42` encapsulated in our `context`.
#
# 😬 *at first glance this looks impressively impossible*
#
# 😀 *but actually we can*
#
# and by convention we store our logic in the higher-order function named `map`
# + slideshow={"slide_type": "fragment"}
Base.map(f, context::MyContextManager) = MyContextManager(continuation -> begin
context() do inner_value
continuation(f(inner_value))
end
end)
# + slideshow={"slide_type": "fragment"}
context_doubled = map(x -> 2x, context)
context_doubled = map(context) do x
2x
end
context_doubled(println)
# + [markdown] slideshow={"slide_type": "fragment"}
# <div style="text-align: right"> 👏 it works 👏 </div>
# + [markdown] slideshow={"slide_type": "subslide"}
# # One step left: Combining multiple contexts
# +
f_context(x) = MyContextManager(continuation -> begin
println("before")
y = 1111
result = continuation(x+y)
println("after")
result
end)
context_in_context = map(f_context, context)
context_in_context(println)
# -
# we would like to **merge both contexts** together somehow... in the literature this merging is called *join* or *flatten*, hence...
# <br>
# such an enhanced `map` function is typically called `flatmap`
# + slideshow={"slide_type": "fragment"}
using TypeClasses
TypeClasses.flatmap(f_context, context::MyContextManager) = MyContextManager(continuation -> begin
context() do inner_value
inner_context = f_context(inner_value)
inner_context(continuation)
end
end)
# + cell_style="split" slideshow={"slide_type": "fragment"}
context_flattened = flatmap(f_context, context)
context_flattened(println)
# + [markdown] cell_style="split" slideshow={"slide_type": "-"}
# <div style="text-align: right"> 👏 it works 👏 </div>
# + [markdown] slideshow={"slide_type": "subslide"}
# # Making everything nice and convenient
#
# That is it! Having `map` and `flatmap` defined, we get nice syntax for free, which just compiles to a combination of the two.
# + cell_style="split"
context_flattened = TypeClasses.@syntax_flatmap begin
x = context
f_context(x)
end
context_flattened(println)
# + cell_style="split"
context_flattened = TypeClasses.flatmap(((x,)->begin
f_context(x)
end), context)
context_flattened(println)
# + [markdown] slideshow={"slide_type": "fragment"}
# within `@syntax_flatmap` every assignment line is interpreted as a computational context and translated into `map`/`flatmap` calls.
# <br>
# You can also mix this with plain computations; just mark them with the macro `@pure`.
# + cell_style="split"
context_complex = @syntax_flatmap begin
x = context
@pure s = "x = $x"
z = f_context(x)
# we can define another final return value by ending with @pure
@pure (x, s, z)
end
context_complex(println)
# + cell_style="split"
context_complex = TypeClasses.flatmap(((x,)->begin
s = "x = $(x)"
TypeClasses.map(((z,)->begin
(x, s, z)
end), f_context(x))
end), context)
context_complex(println)
# + [markdown] slideshow={"slide_type": "slide"}
# # Limitation of Monads: they don't compose well
#
# Here we have two monads (aka computational context): `Vector` and `ContextManager`
# + cell_style="split"
# Vector are supported by TypeClasses
# with for-loop interpretation
@syntax_flatmap begin
a = [3, 7]
b = [100, 200]
@pure a + b
end
# + cell_style="split"
# MyContextManager is fully implemented in TypeClasses
create_context(x) = @ContextManager continuation -> begin
println("before $x")
result = continuation(x)
println("after $x")
result
end
context = @syntax_flatmap begin
a = create_context(3)
b = create_context(10)
@pure a + b
end
context(println)
# -
# Let's compose them!
# + cell_style="split" slideshow={"slide_type": "fragment"}
@syntax_flatmap begin
a = [100, 200]
# contextmanager is converted to Vector by simply running it
b = create_context(a)
@pure a + b
end
# + cell_style="split" slideshow={"slide_type": "-"}
context_composition = @syntax_flatmap begin
a = create_context(7)
b = [100, 200]
@pure a + b
end
context_composition(println)
# + [markdown] slideshow={"slide_type": "slide"}
# # ExtensibleEffects.jl
#
# There is a lot of theory developed around this problem of composing computational contexts.
# - **Monad-Transformers:** difficult to port to Julia, and don't compose well
# - **Algebraic / Extensible Effects:** compose impressively well, but also have some limitations
#
# ExtensibleEffects.jl implements Extensible Effects following the paper [Freer Monads, More Extensible Effects](http://okmij.org/ftp/Haskell/extensible/more.pdf), which already has a [Haskell implementation](https://hackage.haskell.org/package/freer-effects) as well as a [Scala implementation](https://github.com/atnos-org/eff).
# + cell_style="center" slideshow={"slide_type": "fragment"}
using ExtensibleEffects
context_composition = @runcontextmanager @runhandlers (Vector,) @syntax_eff_noautorun begin
a = create_context(7)
b = [100, 200]
@pure a + b
end
context_composition(println)
# + slideshow={"slide_type": "-"}
context_composition2 = @runcontextmanager @runhandlers (Vector,) @syntax_eff_noautorun begin
a = [100, 200]
b = create_context(a)
@pure a + b
end
context_composition2(println)
# + [markdown] cell_style="center"
# <div style="text-align: right"> 👏 it works 👏 </div>
# + [markdown] slideshow={"slide_type": "subslide"}
# # How does it work?
#
# using `@macroexpand` we can see what is happening
# + cell_style="split"
eff = @syntax_eff_noautorun begin
a = create_context(7)
b = [100, 200]
@pure a + b
end
# + cell_style="split"
TypeClasses.flatmap(((a,)->begin
TypeClasses.map(((b,)->begin
a + b
end), ExtensibleEffects.effect([100, 200]))
end), ExtensibleEffects.effect(create_context(7)))
# + [markdown] cell_style="split"
# -------------
# + [markdown] cell_style="split"
# - [same as for `@syntax_flatmap`] <br> every `=` is parsed as `flatmap` or `map`
# - [**NEW**] every effect is wrapped with `ExtensibleEffects.effect`
# + cell_style="split"
# effect makes sure we have an Eff type
ExtensibleEffects.effect([100, 200])
# + [markdown] cell_style="split"
# # Everything is wrapped into the type `Eff` for later ("lazy") execution
#
# # Nothing is executed first
# + [markdown] slideshow={"slide_type": "subslide"}
# # Eff
#
# The `Eff` type is very simple: It just stores the **value** and the **continuation** which leads to the next `Eff`.
# -
# slight simplification of the real type
struct Eff′{Effectful, Continuation}
value::Effectful
cont::Continuation # just a function, which returns another Eff
end
# for performance improvements the `Continuation` is internally represented
# as a list of functions which get composed (instead of a single composed function)
# # Running Eff - Handlers
#
# The effects can be evaluated in almost any order (with a few exceptions)
# + cell_style="split"
@show eff.value; println(); @show eff.cont;
# + cell_style="split"
eff′ = @runhandlers (Vector,) eff
@show eff′.value; println(); @show eff′.cont;
# -
# ----------------
# - the value has not changed at all
# - the continuation changed, namely, **now every `Vector` will get executed in this continuation**
# + [markdown] slideshow={"slide_type": "subslide"}
# # Running Eff - Handlers 2
#
# Let's see what happens if the first effect is `Vector`
# + cell_style="split"
eff2 = @syntax_eff_noautorun begin
a = [100, 200]
b = create_context(a)
@pure a + b
end
@show eff2.value; println(); @show length(eff2.cont.functions);
# + cell_style="split"
eff2′ = @runhandlers (Vector,) eff2
@show eff2′.value
println()
@show length(eff2′.cont.functions);
# -
# ----------------
# - the value has changed
# - [**NEW**] the original `Vector` was handled. The next unhandled effect appears, here the `ContextManager`
# - the continuation changed
# - [same as before] every `Vector` will get executed in this continuation
# - [**NEW**] for each vector element there is a separate continuation, and all of these are combined into one overall continuation
# + [markdown] slideshow={"slide_type": "subslide"}
# # What happened when we handled the original Vector?
# + [markdown] cell_style="split" slideshow={"slide_type": "-"}
# 1. the Vector is `[100, 200]`
# + cell_style="split"
a = [100, 200]
# + [markdown] cell_style="split"
# 2. for each element we run the continuation, each continuation returns an `Eff`
# + cell_style="split"
continuation(a) = @syntax_eff_noautorun begin
b = create_context(a)
@pure a + b
end
a′ = map(continuation, a);
# + [markdown] cell_style="split"
# 3. so we have one `Eff` encapsulating the result for 100, and another `Eff` which encapsulates the result for 200
# + cell_style="split"
(a′100, a′200) = a′
# + [markdown] cell_style="split"
# 4. treating `Eff` as computational context (aka Monad) in its own right, we can work **within** `Eff` (just like we have worked within `ContextManager`) and combine both `Eff` to a combined effect
# + cell_style="split"
@syntax_flatmap begin
x = a′100
y = a′200
@pure [x, y]
end
# -
# (The real implementation is slightly different, because Vector can be of arbitrary length, but you get the point.)
# + [markdown] slideshow={"slide_type": "slide"}
# # The Interface - How to define your own Effects?
# -
# core function | description
# ------------- | ------------
# `ExtensibleEffects.eff_applies(handler::Type{<:Vector}, vec::Vector) = true` | specify on which values the handler applies (the handler Vector applies to Vector of course)
# `ExtensibleEffects.eff_pure(handler::Type{<:Vector}, value) = [value]` | wrap a plain value into the Monad of the handler, here Vector.
# `ExtensibleEffects.eff_flatmap(continuation, vec::Vector)` | apply a continuation to the current effect (here again Vector as an example). The key difference to plain `TypeClasses.flatmap` is that `continuation` does not return a plain `Vector`, but a `Eff{Vector}`. Applying this `continuation` with a plain `map` would lead `Vector{Eff{Vector}}`. However, `eff_flatmap` needs to return an `Eff{Vector}` instead.
# Thanks to `TypeClasses.jl` we can provide a generic interface which works out of the box for most types
#
# ```julia
# function ExtensibleEffects.eff_pure(T, a)
# TypeClasses.pure(T, a)
# end
#
# function ExtensibleEffects.eff_flatmap(continuation, a)
# a_of_eff_of_a = map(continuation, a)
# eff_of_a_of_a = TypeClasses.flip_types(a_of_eff_of_a)
# eff_of_a = map(TypeClasses.flatten, eff_of_a_of_a)
# eff_of_a
# end
# ```
# + [markdown] slideshow={"slide_type": "subslide"}
# `TypeClasses.flatten` just wraps `TypeClasses.flatmap`
#
# For implementing `TypeClasses.flip_types`: if your type supports `Base.iterate` and (only needed in the case of multiple elements) also `TypeClasses.combine`, you can simply fall back to `TypeClasses.default_flip_types_...`
# + [markdown] slideshow={"slide_type": "fragment"}
# Consequently, here the actual implementation of `Vector`
#
#
# core function | description
# ------------- | ------------
# `ExtensibleEffects.eff_applies(handler::Type{<:Vector}, vec::Vector) = true` | always needs to be provided explicitly
# `Base.map(f, vec::Vector) = [f(x) for x in vec]` | apply a function within the Monad.
# `Base.iterate(vec::Vector, state...) = ...` | standard iteration protocol
# `TypeClasses.pure(handler::Type{<:Vector}, value) = [value]` | wrap a plain value into the Monad of the handler, here Vector.
# `TypeClasses.flatmap(f, vec::Vector) = [x for vec2 in vec for x in vec2]` | apply a function returning a Monad and flatten everything immediately.
# `TypeClasses.combine(vec1::Vector, vec2::Vector) = [vec1; vec2]` | !!Only needed if your Monad contains multiple elements!! Combining two values of the same Monad
# `TypeClasses.flip_types(vec::Vector) = TypeClasses.default_flip_types_having_pure_combine_apEltype(vec)` | make a vector_of_something into something_of_vector
# -
# This is the recommended set of functions to implement for your type, if possible.
#
#
# #### If this is not possible, you can
# * fall back to providing a direct implementation of `eff_flatmap` for your type
# * implement a custom handler for your type containing additionally needed information
# + [markdown] slideshow={"slide_type": "slide"}
# # More Extensible Effects - autorun
#
# This package comes with the super awesome functionality that standard effects can be run automatically, without explicitly calling the handler.
#
# This is implemented by using the type of the effect itself as the handler (plus some smart evaluation logic for what to do in case no such handler exists).
#
# * `@syntax_eff` enables `autorun`
# * `@syntax_eff_noautorun` disables `autorun`
#
# -
@syntax_eff begin
x = [1,4]
[x, 3x]
end
# + [markdown] slideshow={"slide_type": "subslide"}
# # More Extensible Effects - contextmanager
#
# Turns out, `ContextManager` is actually one of the most difficult computational contexts to be represented within Extensible Effects.
# <br>
# This is because, within `Eff` the continuations always return another `Eff`, within which other effects still need to be handled.
#
# Consequently, the contextmanager handler `ContextManagerHandler` **needs to run last**.
#
# ----------------
#
# There is further a special handler `ContextManagerCombinedHandler` which improves the execution order for memory performance.
# + cell_style="split"
println_return(x) = (println(x); x)
eff = @syntax_eff begin
a = [100,200]
b = create_context(a)
c = [5,6]
d = create_context(a + c)
@pure a, b, c, d
end
@runhandlers (ContextManagerHandler(println_return),) eff
# + cell_style="split"
eff = @syntax_eff noautorun(Vector) begin
a = [100,200]
b = create_context(a)
c = [5,6]
d = create_context(a + c)
@pure a, b, c, d
end
handlers = (ContextManagerCombinedHandler(Vector, println_return),)
@runhandlers handlers eff
# + [markdown] slideshow={"slide_type": "subslide"}
# # More Extensible Effects - More Effects
#
# ExtensibleEffects supports many more types:
# + [markdown] cell_style="split"
# types defined in `DataTypesBasic.jl`
# * Option
# * Try
# * Either
# * Identity
# * Const
# * ContextManager
# + [markdown] cell_style="split"
# types defined in `TypeClasses.jl`
# * Callable
# * Iterable
# * State
# * Writer
#
# types defined in `Base`
# * Vector
# * Task
# * Future
# + [markdown] cell_style="split" slideshow={"slide_type": "slide"}
# # Summary
#
# * Monads = easy encapsulation of a computational context
# * `map` + `flatmap` and you get nice syntax support (powered by `TypeClasses.jl`)
# * Monads do not compose well with other Monads
# * Extensible Effects do compose well
# * They do so by defining a meta-Monad called `Eff` which just stores everything for lazy execution later on.
# * handlers define how certain Monads/Effects are actually run
# * thanks to `autorun` you don't need to bother about handlers for most cases
# * `ContextManagerHandler` or `ContextManagerCombinedHandler` always needs to be the last handler to be run
# * many Monads are already implemented and work out of the box - take a look at the documentation and tests
# + [markdown] cell_style="split" slideshow={"slide_type": "fragment"}
# # Thank you for your attention
#
# Find more help at the respective repositories and documentation
# * https://github.com/JuliaFunctional/ExtensibleEffects.jl
# * https://github.com/JuliaFunctional/TypeClasses.jl
# * https://github.com/JuliaFunctional/DataTypesBasic.jl
#
# The code and tests are all easy to understand; don't hesitate to take a look in case of questions!
# + [markdown] cell_style="split"
# <img src="images/jakobsweg-fisterra-fokus.png"
# alt="<NAME>"
# style="float:left"
# width="20%"/>
#
# <div style="margin-left: 7em !important">
# You are always welcome to reach out
#
# - github: schlichtanders
# - mail: <EMAIL>
# - linkedin: https://de.linkedin.com/in/stephan-sahm-918656b7
# </div>
#
# + slideshow={"slide_type": "skip"}
| docs/jupyter/Monad2.0, aka Algebraic Effects - ExtensibleEffects.jl.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Plotting Dyke Thickness vs Length And Histogram
import numpy as np
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
# +
import glob  # pandas cannot expand a glob pattern itself, so collect the files first
paths = sorted(glob.glob('SectionsExample/*.dat'))  # paths to the section data files
data = pd.concat((pd.read_csv(p, delimiter="\t") for p in paths), ignore_index=True)
data
# -
| .ipynb_checkpoints/dykethic-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: home_credit
# language: python
# name: home_credit
# ---
import statsmodels.api as sm
import numpy as np
spector_data = sm.datasets.spector.load_pandas()
spector_data.exog = sm.add_constant(spector_data.exog)
logit_mod = sm.Logit(spector_data.endog, spector_data.exog)
logit_res = logit_mod.fit()
def step_aic(model, exog, endog, params={}):
"""
This select the best exogenous variables with AIC
Both exog and endog values can be either str or list.
(Endog list is for the Binomial family.)
Note: This adopt only "forward" selection
Args:
model: model from statsmodels.discrete.discrete_model
exog (DataFrame): exogenous variables
endog (DataFrame): endogenous variables
params(dict): fit parameter
Returns:
selected: selected variables that seems to have the smallest AIC
"""
exog_names = exog.columns.values
remaining = set(exog_names[1:])
selected = ["const"] # 採用が確定された要因
# 定数項のみのAICを計算
cols = ["const"]
aic = model(endog, exog[cols]).fit(**params).aic
print('AIC: {}, variables: {}'.format(round(aic, 3), cols))
current_score, best_new_score = np.ones(2) * aic
    # stop when all factors are selected, or when adding any factor no longer lowers the AIC
while remaining and current_score == best_new_score:
scores_with_candidates = []
for candidate in remaining:
            # compute the AIC after adding each remaining factor in turn
cols = selected + [candidate]
aic = model(endog, exog[cols]).fit(**params).aic
print('AIC: {}, variables: {}'.format(round(aic, 3), cols))
scores_with_candidates.append((aic, candidate))
        # take the factor with the smallest AIC as best_candidate
scores_with_candidates.sort()
scores_with_candidates.reverse()
print(scores_with_candidates)
best_new_score, best_candidate = scores_with_candidates.pop()
        # if adding the candidate lowered the AIC, add it as a confirmed factor
if best_new_score < current_score:
remaining.remove(best_candidate)
selected.append(best_candidate)
current_score = best_new_score
print(f'The best variables: {selected}')
return selected
step_aic(sm.Logit, spector_data.exog, spector_data.endog)
# +
import numpy as np
from scipy.stats import norm
import matplotlib.pyplot as plt
#%%
# generate the data
μ = 50
σ = 10
data = [ np.random.normal(μ, σ) for i in range(10000) ]
#%%
# number of bins
num_bins = 20
# bin width
bin_width = (max(data) - min(data)) / num_bins
print(f"bin width ≈ {bin_width}")
# draw the plots
fig = plt.figure(figsize=(8, 24))
# (1) histogram with frequency on the y-axis
ax1 = fig.add_subplot(311)
ax1.title.set_text("(1) frequency")
ax1.grid(True)
ax1.hist(data, bins=num_bins)
# (2) histogram with relative frequency on the y-axis
ax2 = fig.add_subplot(312)
ax2.title.set_text("(2) relative frequency")
ax2.grid(True)
ax2.set_xlim(ax1.get_xlim())
weights = np.ones_like(data) / len(data)
ax2.hist(data, bins=num_bins, weights=weights)
# (3) histogram with relative frequency density on the y-axis (blue) & normal probability density function (red)
ax3 = fig.add_subplot(313)
ax3.title.set_text("(3) density")
ax3.grid(True)
ax3.set_xlim(ax1.get_xlim())
ax3.hist(data, bins=num_bins, density=True, color="blue", alpha=0.5)
# overlay the normal probability density function in red
x = np.linspace(min(data), max(data), 200)
ax3.plot(x, norm.pdf(x, μ, σ), color="red")
# -
| notebooks/stepwise.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:mayavi_env]
# language: python
# name: conda-env-mayavi_env-py
# ---
# +
'''Compare algorithms for find nearest'''
import math
import numpy as np
def find_nearest1(array,value):
    '''not the right function and not vectorized'''
idx,val = min(enumerate(array), key=lambda x: abs(x[1]-value))
return idx
def find_nearest2(array, values):
    '''not the right function'''
indices = np.abs(np.subtract.outer(array, values)).argmin(0)
return indices
def find_nearest3(array, values):
values = np.atleast_1d(values)
indices = np.abs(np.int64(np.subtract.outer(array, values))).argmin(0)
out = array[indices]
return indices
def find_nearest4(array,value):
    '''not the right function and not vectorized'''
idx = (np.abs(array-value)).argmin()
return idx
def find_nearest5(array, value):
    '''not the right function and not vectorized'''
idx_sorted = np.argsort(array)
sorted_array = np.array(array[idx_sorted])
idx = np.searchsorted(sorted_array, value, side="left")
if idx >= len(array):
idx_nearest = idx_sorted[len(array)-1]
elif idx == 0:
idx_nearest = idx_sorted[0]
else:
if abs(value - sorted_array[idx-1]) < abs(value - sorted_array[idx]):
idx_nearest = idx_sorted[idx-1]
else:
idx_nearest = idx_sorted[idx]
return idx_nearest
def find_nearest6(array,value):
xi = np.argmin(np.abs(np.ceil(array[None].T - value)),axis=0)
return xi
def bisection(array,value,lower = -np.inf, upper=np.inf):
'''Given an ``array`` , and given a ``value`` , returns an index j such that ``value`` is between array[j]
and array[j+1]. ``array`` must be monotonic increasing. j=-1 or j=len(array) is returned
to indicate that ``value`` is out of range below and above respectively.'''
    n = len(array)
    if (value < array[0]):
        return -1, lower
    elif (value > array[n-1]):
        return n, upper
#array = np.append(np.append(-np.inf,array),np.inf)
jl = 0# Initialize lower
ju = n-1# and upper limits.
while (ju-jl > 1):# If we are not yet done,
jm=(ju+jl) >> 1# compute a midpoint,
if (value >= array[jm]):
jl=jm# and replace either the lower limit
else:
ju=jm# or the upper limit, as appropriate.
# Repeat until the test condition is satisfied.
    if (value == array[0]):
        return 0, array[0]
    elif (value == array[n-1]):
        return n-1, array[n-1]
    else:
        return jl, array[jl]
if __name__=='__main__':
array = np.arange(100000)
val = array[50000]+0.55
print( bisection(array,val))
# %timeit bisection(array,val)
print( find_nearest1(array,val))
# %timeit find_nearest1(array,val)
print( find_nearest2(array,val))
# %timeit find_nearest2(array,val)
print( find_nearest3(array,val))
# %timeit find_nearest3(array,val)
print( find_nearest4(array,val))
# %timeit find_nearest4(array,val)
print( find_nearest5(array,val))
# %timeit find_nearest5(array,val)
print( find_nearest6(array,val))
# %timeit find_nearest6(array,val)
# -
# Example output (timings from one run):
#
# (2, 2)
# 100000 loops, best of 3: 4.36 µs per loop
# 3
# 10 loops, best of 3: 143 ms per loop
# 3
# 10000 loops, best of 3: 203 µs per loop
# [2]
# 1000 loops, best of 3: 380 µs per loop
# 3
# 1000 loops, best of 3: 197 µs per loop
# 3
# 1000 loops, best of 3: 876 µs per loop
# [2]
# 1000 loops, best of 3: 1.05 ms per loop
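# The hand-written `bisection` routine above mirrors Python's standard-library
# `bisect` module. A minimal sketch of the same nearest-index lookup using
# `bisect_left` (assuming the input is already sorted):

```python
import bisect

def nearest_index(sorted_vals, value):
    """Index of the element of sorted_vals closest to value."""
    i = bisect.bisect_left(sorted_vals, value)
    if i == 0:
        return 0
    if i == len(sorted_vals):
        return len(sorted_vals) - 1
    # Compare the two neighbours around the insertion point
    before, after = sorted_vals[i - 1], sorted_vals[i]
    return i - 1 if value - before <= after - value else i

vals = list(range(10))
```

# For sorted data this is O(log n) per query, which is why the binary-search
# approaches beat the linear-scan variants benchmarked above.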
| src/ionotomo/notebooks/FindNearest.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R 4.1.2
# language: R
# name: ir41
# ---
# # Heart Attacks Among Navajo Indians
#
# **Date:** 2021-12-01
#
# **Reference:** M249, Book 1, Part 2
suppressPackageStartupMessages(library(tidyverse))
library(R249)
library(DescTools)
# ## Summary
# ## Get the data
(dat <- as_tibble(read.csv(file = "..\\..\\data\\navajoindians.csv")))
# ## Prepare the data
#
# Cast the `cases`, `controls` columns to factors.
labexp <- c("no diabetes", "diabetes")
(sorteddat <- dat %>%
mutate(cases = factor(dat$cases, labexp)) %>%
mutate(controls = factor(dat$controls, labexp)) %>%
arrange(cases, controls))
# Pull the `count` column as a vector and initialise a matrix.
datmat <- sorteddat$count %>%
matrix(nrow = 2, ncol = 2, dimnames = list(labexp, labexp))
print(datmat)
# ## Mantel-Haenszel odds ratio
#
# Calculate the odds ratio.
oddsratio_matched(datmat)
# ## McNemar's test for no association
mcnemar.test(datmat)
| jupyter/1_medical_statistics/1_06_matched_casecon.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Homework 9: Getting Familiar with NASA Polynomials
# ## Due Date: Tuesday, November 7th at 11:59 PM
# Read the NASA Polynomial dataset in raw format and parse and store the data into an .xml file.
# ### Review of the NASA Polynomials
# You can find the NASA Polynomial file in `thermo.txt`.
#
# You can find some details on the NASA Polynomials [at this site](http://combustion.berkeley.edu/gri_mech/data/nasa_plnm.html) in addition to the Lecture 16 notes.
#
#
# The NASA polynomials for species $i$ have the form:
# $$
# \frac{C_{p,i}}{R}= a_{i1} + a_{i2} T + a_{i3} T^2 + a_{i4} T^3 + a_{i5} T^4
# $$
#
# $$
# \frac{H_{i}}{RT} = a_{i1} + a_{i2} \frac{T}{2} + a_{i3} \frac{T^2}{3} + a_{i4} \frac{T^3}{4} + a_{i5} \frac{T^4}{5} + \frac{a_{i6}}{T}
# $$
#
# $$
# \frac{S_{i}}{R} = a_{i1} \ln(T) + a_{i2} T + a_{i3} \frac{T^2}{2} + a_{i4} \frac{T^3}{3} + a_{i5} \frac{T^4}{4} + a_{i7}
# $$
#
# where $a_{i1}$, $a_{i2}$, $a_{i3}$, $a_{i4}$, $a_{i5}$, $a_{i6}$, and $a_{i7}$ are the numerical coefficients supplied in NASA thermodynamic files.
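# The three expressions above can be evaluated directly from a seven-coefficient
# list. A minimal sketch (the coefficient values below are illustrative
# placeholders, not taken from `thermo.txt`):

```python
import math

def cp_over_R(a, T):
    """Cp/R = a1 + a2*T + a3*T^2 + a4*T^3 + a5*T^4"""
    return a[0] + a[1]*T + a[2]*T**2 + a[3]*T**3 + a[4]*T**4

def h_over_RT(a, T):
    """H/(RT) = a1 + a2*T/2 + a3*T^2/3 + a4*T^3/4 + a5*T^4/5 + a6/T"""
    return (a[0] + a[1]*T/2 + a[2]*T**2/3 + a[3]*T**3/4
            + a[4]*T**4/5 + a[5]/T)

def s_over_R(a, T):
    """S/R = a1*ln(T) + a2*T + a3*T^2/2 + a4*T^3/3 + a5*T^4/4 + a7"""
    return (a[0]*math.log(T) + a[1]*T + a[2]*T**2/2
            + a[3]*T**3/3 + a[4]*T**4/4 + a[6])

# Illustrative coefficients (not real data): a species with constant Cp
a = [3.5, 0.0, 0.0, 0.0, 0.0, -1000.0, 4.0]
```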
with open('thermo.txt','r') as f:
lines=f.readlines()
# +
molecules={}
for line in lines[5:]:
word=line.split()
if word[0]=="END":
break
print(line)
if word[-1]=="1":
current_molecule=word[0]
molecules[current_molecule]={}
molecules[current_molecule]["T_Min"]=word[-4]
molecules[current_molecule]["T_Max"]=word[-3]
molecules[current_molecule]["T_Between"]=word[-2]
molecules[current_molecule]["coef_r"]=[]
molecules[current_molecule]["coef_p"]=[]
continue
if word[-1]=="2":
for i in range(5):
molecules[current_molecule]["coef_r"].append(line[15*i:15*(i+1)])
continue
if word[-1]=="3":
for i in range(2):
molecules[current_molecule]["coef_r"].append(line[15*i:15*(i+1)])
for i in range(2,5):
molecules[current_molecule]["coef_p"].append(line[15*i:15*(i+1)])
continue
if word[-1]=="4":
for i in range(5):
molecules[current_molecule]["coef_p"].append(line[15*i:15*(i+1)])
# +
import xml.etree.cElementTree as ET
p=ET.Element("ctml")
cp=ET.SubElement(p,"phase",id="gri30")
molecules_arr=ET.SubElement(cp,"speciesArray",datasrc="#species_data")
molecules_list=""
for m in molecules.keys():
molecules_list+=m+" "
molecules_arr.text=molecules_list
data=ET.SubElement(p,"speciesData",id="species_data")
for m in molecules.keys():
species = ET.SubElement(data, "species", name=m)
thermo = ET.SubElement(species, "thermo")
    # for each temperature range, use a NASA sub-field with the
    # minimum and maximum temperatures as attributes
    NASA = ET.SubElement(thermo, "NASA", Tmax=molecules[m]['T_Max'],
                         Tmin=molecules[m]['T_Between'])
    # floatArray field containing a comma-separated string of the coefficients
    floatArray = ET.SubElement(NASA, "floatArray", name="coeffs", size="7")
    # high-temperature range: the seven coefficients parsed into coef_r
    floatArray.text = ','.join(molecules[m]['coef_r'])
    # low-temperature range: the seven coefficients parsed into coef_p
    NASA = ET.SubElement(thermo, "NASA", Tmax=molecules[m]['T_Between'], Tmin=molecules[m]['T_Min'])
    floatArray = ET.SubElement(NASA, "floatArray", name="coeffs", size="7")
    floatArray.text = ','.join(molecules[m]['coef_p'])
# write into xml file
tree = ET.ElementTree(p)
tree.write("thermo.xml")
# -
# ### Some Notes on `thermo.txt`
# The first 7 numbers starting on the second line of each species entry (five of the second line and the first two of the third line) are the seven coefficients ($a_{i1}$ through $a_{i7}$, respectively) for the high-temperature range (above 1000 K, the upper boundary is specified on the first line of the species entry).
#
# The next seven numbers are the coefficients ($a_{i1}$ through $a_{i7}$, respectively) for the low-temperature range (below 1000 K, the lower boundary is specified on the first line of the species entry).
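# Because each coefficient occupies a fixed 15-character field, the values can
# be recovered with simple slicing, as the parsing code above does. A small
# self-contained sketch on a synthetic line (the numbers are placeholders, not
# real thermo data):

```python
# Build a synthetic NASA-format line from five 15-character fields
fields = [" 2.50000000E+00", "-1.00000000E-03", " 3.00000000E-07",
          " 0.00000000E+00", " 0.00000000E+00"]
line = "".join(fields)

# Slice the line back into its 15-character coefficient fields
coeffs = [line[15*i:15*(i+1)] for i in range(5)]
values = [float(c) for c in coeffs]
```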
# ### Additional Specifications
# Your final .xml file should contain the following specifications:
#
# 1. A `speciesArray` field that contains a space-separated list of all of the species present in the file.
# 2. Each species contains a `species` field with a `name` attribute as the species name.
#
# 1. For each temperature range, use a sub-field with the minimum and maximum temperature as attributes.
# 2. `floatArray` field that contains comma-separated string of each coefficient.
#
# You can reference the `example_thermo.xml` file for an example .xml output.
# **Hint**: First parse the file into a Python dictionary.
| homeworks/HW9/HW9_Final.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <div>
# <img src="attachment:qgssqml2021wordmark.png"/>
# </div>
# In this lab, you will use quantum process tomography to study the effect of noise on parameterized quantum circuits commonly used in machine learning.
# <div class="alert alert-danger" role="alert">
# For grading purposes, specify (<i>noise_model=noise_thermal, seed_simulator=3145, seed_transpiler=3145, shots=8192</i>) as the simulator arguments in every <b><i>execute</i></b> call.
#
# </div>
# +
# general tools
import numpy as np
import matplotlib.pyplot as plt
# Qiskit circuit functionality
from qiskit import execute,QuantumCircuit, QuantumRegister, ClassicalRegister, Aer, transpile
import qiskit.quantum_info as qi
# tomography functionality
from qiskit.ignis.verification.tomography import process_tomography_circuits, ProcessTomographyFitter
from qiskit.ignis.mitigation.measurement import complete_meas_cal, CompleteMeasFitter
import warnings
warnings.filterwarnings('ignore')
# -
# ### Question 1
# - Construct this quantum circuit.
#
# <div>
# <img src="attachment:lab5ex1.png"/>
# </div>
target = QuantumCircuit(2)
target = # insert your code here
target_unitary = qi.Operator(target)
# +
from qc_grader import grade_lab5_ex1
# note that the grading function expects a quantum circuit without measurements
grade_lab5_ex1(target)
# -
# # Quantum process tomography with shot noise only
#
# Here, we simulate the quantum process tomography circuits with the `qasm_simulator`.
#
# ### Question 2a
#
# - Use Qiskit's process tomography (QPT) circuit functionality to create the circuits that perform quantum process tomography and simulate them on the QASM simulator (shot noise only). Execute the QPT circuits with `seed_simulator=3145`, `seed_transpiler=3145`, and `shots=8192`.
#
# - _Hint: the required function, <a href="https://qiskit.org/documentation/stubs/qiskit.ignis.verification.process_tomography_circuits.html">process_tomography_circuits</a>, is imported above. When complete, a total of 144 circuits should be passed by the `execute` function to the `qasm_simulator`. You can check the number of circuits created with `len(qpt_circs)`._
#
#
simulator = Aer.get_backend('qasm_simulator')
qpt_circs = # insert your code here
qpt_job = execute(qpt_circs,simulator,seed_simulator=3145,seed_transpiler=3145,shots=8192)
qpt_result = qpt_job.result()
# ### Question 2b
#
# - Use the least-squares fitting method of the process tomography fitter to determine the fidelity to the target unitary matrix.
#
#
# - _Hint: first process the result of Question 2a with the <a href="https://qiskit.org/documentation/stubs/qiskit.ignis.verification.ProcessTomographyFitter.html">ProcessTomographyFitter</a> function imported above, then use ProcessTomographyFitter.fit(method='....') to extract the "Choi matrix" that effectively describes the measured unitary operation. From there, use the quantum information module's <a href="https://qiskit.org/documentation/stubs/qiskit.quantum_info.average_gate_fidelity.html#qiskit.quantum_info.average_gate_fidelity">average_gate_fidelity</a> function to extract the fidelity achieved by the result._
#
# +
# insert your code here
# +
from qc_grader import grade_lab5_ex2
# note that the grading function expects a float
grade_lab5_ex2(fidelity)
# -
# # Quantum process tomography with a T1/T2 noise model
#
# For consistency, let's set some values characterizing the gate durations and the T1/T2 times:
# +
# T1 and T2 values for qubits 0-3
T1s = [15000, 19000, 22000, 14000]
T2s = [30000, 25000, 18000, 28000]
# instruction times (in nanoseconds)
time_u1 = 0 # virtual gate
time_u2 = 50 # (single X90 pulse)
time_u3 = 100 # (two X90 pulses)
time_cx = 300
time_reset = 1000 # 1 microsecond
time_measure = 1000 # 1 microsecond
# -
from qiskit.providers.aer.noise import thermal_relaxation_error
from qiskit.providers.aer.noise import NoiseModel
# ### Question 3
#
# - Using Qiskit's thermal relaxation error model and the values defined above for qubits 0-3, define errors for `u1`, `u2`, `u3`, `cx`, `measurement`, and `reset`, and build a thermal noise model.
#
# - _Hint: the Qiskit tutorial on <a href="https://github.com/Qiskit/qiskit-tutorials/blob/master/tutorials/simulators/3_building_noise_models.ipynb">building noise models</a> is helpful, especially for the `u1`, `u2`, `u3`, `cx`, `reset`, and `measurement` errors (include all of them)._
# +
# QuantumError objects
errors_reset = [thermal_relaxation_error(t1, t2, time_reset)
for t1, t2 in zip(T1s, T2s)]
errors_measure = [thermal_relaxation_error(t1, t2, time_measure)
for t1, t2 in zip(T1s, T2s)]
errors_u1 = [thermal_relaxation_error(t1, t2, time_u1)
for t1, t2 in zip(T1s, T2s)]
errors_u2 = [thermal_relaxation_error(t1, t2, time_u2)
for t1, t2 in zip(T1s, T2s)]
errors_u3 = [thermal_relaxation_error(t1, t2, time_u3)
for t1, t2 in zip(T1s, T2s)]
errors_cx = [[thermal_relaxation_error(t1a, t2a, time_cx).expand(
thermal_relaxation_error(t1b, t2b, time_cx))
for t1a, t2a in zip(T1s, T2s)]
for t1b, t2b in zip(T1s, T2s)]
# add the errors to the noise model
noise_thermal = NoiseModel()
# insert your code here
# +
from qc_grader import grade_lab5_ex3
# note that the grading function expects a NoiseModel
grade_lab5_ex3(noise_thermal)
# -
# ### Question 4.
#
# - Using the noise model, obtain the QPT fidelity, without applying any error-mitigation techniques. As a reminder, use `seed_simulator=3145`, `seed_transpiler=3145`, and `shots=8192` in the `execute` function.
#
# - _Hint: the process here is very similar to Questions 2a/b, but you need to include the noise model from Question 3 in the `execute` function._
# +
np.random.seed(0)
# insert your code here
# +
from qc_grader import grade_lab5_ex4
# note that the grading function expects a float
grade_lab5_ex4(fidelity)
# -
# ### Question 5.
#
# - Use Qiskit's `complete_meas_cal` function and apply it to the QPT result from the previous question. Use the `execute` function with `seed_simulator=3145`, `seed_transpiler=3145`, and `shots=8192`, and likewise include the noise model from Question 3 in the `execute` function.
#
#
# - *Hint: the Qiskit textbook has a very good chapter on <a href="https://qiskit.org/textbook/ch-quantum-hardware/measurement-error-mitigation.html">`readout error mitigation`</a>. In particular, you will want to use the <a href="https://qiskit.org/documentation/stubs/qiskit.ignis.mitigation.complete_meas_cal.html">`complete_meas_cal`</a> function together with <a href="https://qiskit.org/documentation/stubs/qiskit.ignis.mitigation.CompleteMeasFitter.html">`CompleteMeasFitter`</a> to create the circuits needed to build the calibration matrix. This can then be used to generate the correction matrix's <a href="https://qiskit.org/documentation/stubs/qiskit.ignis.mitigation.CompleteMeasFitter.html#qiskit.ignis.mitigation.CompleteMeasFitter.filter">`meas_filter`</a>, which you apply to the result of Question 4.*
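# Conceptually, readout-error mitigation builds a calibration matrix whose
# columns are the measured distributions for each prepared basis state, then
# applies its inverse to the noisy counts. A minimal single-qubit sketch in
# plain Python, independent of Qiskit (the calibration probabilities are
# made-up illustrative numbers, not from any backend):

```python
def mitigate_1q(counts, p0_given_0, p0_given_1):
    """Invert a 2x2 readout-calibration matrix for one qubit.

    counts: [n_measured_0, n_measured_1]
    p0_given_0 / p0_given_1: probability of reading 0 when the true
    state was |0> / |1> (illustrative calibration values, not measured).
    """
    # Calibration matrix: columns are the measured distributions
    # for prepared |0> and |1>.
    m00, m01 = p0_given_0, p0_given_1
    m10, m11 = 1 - p0_given_0, 1 - p0_given_1
    det = m00 * m11 - m01 * m10
    # Apply the 2x2 matrix inverse to the measured count vector.
    c0, c1 = counts
    true0 = (m11 * c0 - m01 * c1) / det
    true1 = (-m10 * c0 + m00 * c1) / det
    return [true0, true1]

# A state that is truly |0>, measured through a noisy readout:
noisy = [900, 100]  # 90% read as 0, 10% flipped to 1
corrected = mitigate_1q(noisy, p0_given_0=0.9, p0_given_1=0.2)
```

# Qiskit's `meas_filter` performs the same kind of inversion (with
# least-squares refinements) over all measured basis states at once.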
# +
np.random.seed(0)
# ここにコードを記入します
# +
from qc_grader import grade_lab5_ex5
# grade関数は浮動小数点を期待していることに注意してください
grade_lab5_ex5(fidelity)
# -
# ### Exploratory Question 6.
#
# - Run the noise model with varying CX durations to test how the gate fidelity depends on the CX duration (keeping all other values fixed).
#
# (Note: ideally this would be done with the scaling techniques discussed in the previous lecture, but since backend usage is limited, we instead adjust the duration of the CX gate itself to demonstrate the effect. Since this is then not a perfect CX gate, it differs from how it would be implemented on hardware.)
| notebooks/summer-school/2021/resources/lab-notebooks/lab-5-ja.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: python3
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # WeatherPy
# ----
#
# #### Note
# * Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
# +
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import time
from scipy.stats import linregress
# Import API key
from api_keys import weather_api_key
# Incorporated citipy to determine city based on latitude and longitude
from citipy import citipy
# Output File (CSV)
output_data_file = "output_data/cities.csv"
# Range of latitudes and longitudes
lat_range = (-90, 90)
lng_range = (-180, 180)
# -
# ## Generate Cities List
# +
# List for holding lat_lngs and cities
lat_lngs = []
cities = []
countries = []
# Create a set of random lat and lng combinations
lats = np.random.uniform(low=-90.000, high=90.000, size=1500)
lngs = np.random.uniform(low=-180.000, high=180.000, size=1500)
lat_lngs = zip(lats, lngs)
# Identify nearest city for each lat, lng combination
for lat_lng in lat_lngs:
city = citipy.nearest_city(lat_lng[0], lat_lng[1]).city_name
country = citipy.nearest_city(lat_lng[0], lat_lng[1]).country_code
    # If the city is unique, then add it to our cities list
if city not in cities:
cities.append(city)
countries.append(country.upper())
# Print the city count to confirm sufficient count
print(len(cities))
#Print the country count to confirm it matches city count
print(len(countries))
# +
#test api on one city = "victoria". to see the keys and values in the data
# cities = "victoria"
# url = "http://api.openweathermap.org/data/2.5/weather?"
# query_url = f"{url}appid={weather_api_key}&q={city}"
# response = requests.get(query_url).json() # a dictionary with various information labels of the city
# print(response)
# +
#Create a dataframe to store the data
city_weather = pd.DataFrame({'City': cities, 'Cloudiness': "", 'Country': countries, 'Date': "",
'Humidity': "", 'Lat': "", 'Lng': "", 'Max Temp': "", 'Windspeed': ""})
#Preview the dataframe
city_weather.head()
# +
### API calls and data collection, filtering the data during the calls so that all columns have the same length, and adding it directly to the dataframe
# +
#Variables to keep track of response number
record_num = 1
record_set = 1
#Base url to make the api call
base_url = 'http://api.openweathermap.org/data/2.5/weather?units=imperial&q='
#Run the API call for each city
print('''
-----------------------------
Beginning Data Retrieval
-----------------------------''')
for index, row in city_weather.iterrows():
#Create the query url for the API call
query_url = base_url + row['City'] + ',' + row['Country'] + '&APPID=' + weather_api_key
#API call
response = requests.get(query_url).json()
#Exception script in case the city is not found by the API call
try:
        #Message to alert the user that the data is being retrieved
print(f'Processing Weather Data for Record {record_num} of Set {record_set} | {response["name"]}')
#Input data into the dataframe
city_weather.loc[index, 'Cloudiness'] = response['clouds']['all']
city_weather.loc[index, 'Date'] = response['dt']
city_weather.loc[index, 'Humidity'] = response['main']['humidity']
city_weather.loc[index, 'Lat'] = response['coord']['lat']
city_weather.loc[index, 'Lng'] = response['coord']['lon']
city_weather.loc[index, 'Max Temp'] = response['main']['temp_max']
city_weather.loc[index, 'Windspeed'] = response['wind']['speed']
#Increase the record count
record_num += 1
except (KeyError, IndexError):
print("City not found....Skipping............")
#If statement to handle the API call limits
if record_num == 51:
record_set += 1
record_num = 1
time.sleep(60)
print('''
-----------------------------
    Data Retrieval Complete
-----------------------------''')
# -
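# The 50-records-per-set pause above is a simple rate-limiting pattern. A
# generic sketch of the same idea (the batch size and pause length are
# illustrative defaults, not OpenWeatherMap's documented limits):

```python
import time

def paced(items, batch_size=50, pause_s=60):
    """Yield items one at a time, sleeping after every full batch."""
    for i, item in enumerate(items, start=1):
        yield item
        if i % batch_size == 0:
            time.sleep(pause_s)

# Usage: iterate rows with a pause between batches (pause_s=0 for the demo)
processed = list(paced(range(5), batch_size=2, pause_s=0))
```

# Structuring the pause as a generator keeps the rate-limit logic out of the
# per-row processing code.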
# ### Perform API Calls
# * Perform a weather check on each city using a series of successive API calls.
# * Include a print log of each city as it's being processed (with the city number and city name).
#
# +
#Done Above
# -
# ### Convert Raw Data to DataFrame
# * Export the city data into a .csv.
# * Display the DataFrame
# +
# Dataframe created above before data extraction and was appended using iterrows()
#Clean dataframe to remove cities that were not found
city_weather['Max Temp'] = city_weather['Max Temp'].replace('', np.nan)
city_weather = city_weather.dropna()
#Save dataframe
city_weather.to_csv("C:/Users/iezik/Desktop/USC/usc-la-data-pt-12-2020-u-c/unit_06_python_apis/homework/starter_code/CityWeatherData.csv", index=False, header=True)
#Preview the dataframe
city_weather.head(10)
# -
# ## Inspect the data and remove the cities where the humidity > 100%.
# ----
# Skip this step if there are no cities that have humidity > 100%.
# +
## inspection of humidity
city_weather['Humidity'].max()
# max humidity is 100; there is no city with humidity greater than 100, so skipping to plotting
# -
# ## Plotting the Data
# * Use proper labeling of the plots using plot titles (including date of analysis) and axes labels.
# * Save the plotted figures as .pngs.
# ## Latitude vs. Temperature Plot
# +
# Build a scatter plot for each data type
plt.scatter(city_weather["Lat"], city_weather["Max Temp"], marker="o")
# Incorporate the other graph properties
plt.title(f' Latitude vs. Max Temperature {time.strftime("%m/%d/%Y")}')
plt.ylabel("Temperature (F)")
plt.xlabel("Latitude")
plt.grid(True)
# Show plot
plt.show()
# -
# ## Latitude vs. Humidity Plot
# +
# Build a scatter plot for each data type
plt.scatter(city_weather["Lat"], city_weather["Humidity"], marker="o")
# Incorporate the other graph properties
plt.title(f' Latitude vs. Humidity {time.strftime("%m/%d/%Y")}')
plt.ylabel("Humidity")
plt.xlabel("Latitude")
plt.grid(True)
# Show plot
plt.show()
# -
# ## Latitude vs. Cloudiness Plot
# +
plt.scatter(city_weather["Lat"], city_weather["Cloudiness"], marker="o")
# Incorporate the other graph properties
plt.title(f'Latitude vs. Max Cloudiness {time.strftime("%m/%d/%Y")}')
plt.ylabel("Cloudiness")
plt.xlabel("Latitude")
plt.grid(True)
# Show plot
plt.show()
# -
# ## Latitude vs. Wind Speed Plot
# +
# Build a scatter plot for each data type
plt.scatter(city_weather["Lat"], city_weather["Windspeed"], marker="o")
# Incorporate the other graph properties
plt.title(f' Latitude vs. Wind Speed {time.strftime("%m/%d/%Y")}')
plt.ylabel("Wind Speed")
plt.xlabel("Latitude")
plt.grid(True)
# Show plot
plt.show()
# -
# ## Linear Regression
# OPTIONAL: Create a function to create Linear Regression plots
def plot_linear_regression(x_values, y_values, title, text_coordinates):
(slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values)
regress_values = x_values * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.scatter(x_values,y_values)
plt.plot(x_values,regress_values,"r-")
plt.annotate(line_eq,text_coordinates,fontsize=14,color="red")
plt.xlabel("Latitude")
plt.ylabel(title)
print(f"The r-squared is: {rvalue}")
plt.show()
# +
# Create Northern and Southern Hemisphere DataFrames
northlat_df = city_weather.loc[city_weather["Lat"] >= 0,:]
southlat_df = city_weather.loc[city_weather["Lat"] < 0,:]
northlat_df.head()
# -
# #### Northern Hemisphere - Max Temp vs. Latitude Linear Regression
# +
x_values = northlat_df["Lat"].astype('int')
y_values = northlat_df["Max Temp"].astype('int')
plot_linear_regression(x_values,y_values,'Max Temp',(4,-20))
# -
# #### Southern Hemisphere - Max Temp vs. Latitude Linear Regression
# +
x_values = southlat_df["Lat"].astype('int')
y_values = southlat_df["Max Temp"].astype('int')
plot_linear_regression(x_values,y_values,'Max Temp',(-25,55))
print("The high r value indicates a strong positive correlation between latitude and max temperature.")
# -
# #### Northern Hemisphere - Humidity (%) vs. Latitude Linear Regression
x_values = northlat_df["Lat"].astype('int')
y_values = northlat_df["Humidity"].astype('int')
plot_linear_regression(x_values,y_values,'Humidity',(45,30))
# #### Southern Hemisphere - Humidity (%) vs. Latitude Linear Regression
# +
x_values = southlat_df["Lat"].astype('int')
y_values = southlat_df["Humidity"].astype('int')
plot_linear_regression(x_values,y_values,'Humidity',(-25,30))
print("The low r values indicate a weak relationship between humidity and latitude.")
# -
# #### Northern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression
x_values = northlat_df["Lat"].astype('int')
y_values = northlat_df["Cloudiness"].astype('int')
plot_linear_regression(x_values,y_values,'Cloudiness',(45,30))
# #### Southern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression
# +
x_values = southlat_df["Lat"].astype('int')
y_values = southlat_df["Cloudiness"].astype('int')
plot_linear_regression(x_values,y_values,'Cloudiness',(-45,25))
print("The low r values indicate a weak positive relationship between latitude and cloudiness.")
# -
# #### Northern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression
x_values = northlat_df["Lat"].astype('int')
y_values = northlat_df["Windspeed"].astype('int')
plot_linear_regression(x_values,y_values,'Wind Speed',(5,35))
# #### Southern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression
# +
x_values = southlat_df["Lat"].astype('int')
y_values = southlat_df["Windspeed"].astype('int')
plot_linear_regression(x_values,y_values,'Wind Speed',(-48,20))
print("The low r values indicate that there is no real relationship between wind speed and latitude. The difference between the hemispheres doesn't seem to be significant enough to comment upon.")
# -
| WeatherPy.ipynb |
# ---
# jupyter:
# jupytext:
# formats: py:light
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:AKSDeploymentPytorch]
# language: python
# name: conda-env-AKSDeploymentPytorch-py
# ---
# # Tear it all down
# Once you are done with your cluster you can use the following two commands to destroy it all.
from dotenv import get_key, find_dotenv
# +
env_path = find_dotenv(raise_error_if_not_found=True)
rgname = 'cnn-aks-docker-test'
aksname = 'msaks-andy'
akslocation = 'eastus'
# -
# First, delete the application.
# !kubectl delete -f az-dl.json
# Next, you delete the AKS cluster. This step may take a few minutes.
# !az aks delete -n $aksname -g $rgname -y
# Finally, you should delete the resource group. This also deletes the AKS cluster and can be used instead of the above command if the resource group is only used for this purpose.
# !az group delete --name $rgname -y
| Pytorch/07_TearDown.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Import Libraries that will likely be used
# +
import pandas as pd
import numpy as np
import spotipy as sp
import os
import matplotlib.pyplot as plt
import seaborn as sb
# %matplotlib inline
# -
# Take a look at first file to see what cleaning needs to be done
pd_BI.head()
# +
pd_BI = pd_BI.sort_values('popularity', ascending=False)
unique_BI = pd_BI[['id', 'followers', 'genres', 'name', 'popularity']].copy()
unique_BI.drop_duplicates(subset= "name", keep= 'first', inplace= True)
unique_BI.head()
# +
unique_BI.size
for n in range(10, 100, 10):
    # .size counts cells (rows * columns); len() counts rows
    tmp = len(unique_BI[unique_BI['popularity']>=n])
    print(f"There are {tmp} artists with popularity of at least {n}")
# unique_BI[unique_BI['popularity']>=90]
ninety_BI = unique_BI[unique_BI['popularity']>=90].copy()
ninety_BI
# +
# start with a standard-scaled plot
binsize = 1
bins = np.arange(0, unique_BI['popularity'].max()+binsize, binsize)
plt.figure(figsize=[14.70, 8.27])
plt.hist(data = unique_BI, x = 'popularity', bins = bins)
plt.title('Popularity Distribution')
plt.xlabel('popularity value between 0 and 100, with 100 being the most popular.')
plt.ylabel('Count')
plt.show()
# +
path, dirs, files = next(os.walk("../spotify_artist_data/"))
file_count = len(files)
# empty list of dfs, will be copied later
artist_df_list_master = []
artist_df_list = []
# create a df from each csv
for f in range(file_count):
temp_df = pd.read_csv("../spotify_artist_data/"+files[f],
index_col = None, header = 0)
artist_df_list_master.append(temp_df)
artist_df_list.append(temp_df.copy())
for df in artist_df_list_master:
print(df.head())
# -
| .ipynb_checkpoints/Foreign Markets Data Cleaning-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# The purpose of this notebook is to demonstrate that you can sum individual forward-propagation losses together and autograd will still backprop into each of the forward propagations. This is a lead-in to Reinforcement Learning using Policy Gradients.
import syft
import syft.nn as nn
import imp
imp.reload(syft)
imp.reload(syft.nn)
from syft.controller import tensors, models
import numpy as np
from syft import FloatTensor
from functools import reduce
# +
model = nn.Sequential([
nn.Linear(3,4),
nn.Tanh(),
nn.Linear(4,1),
nn.Sigmoid()
])
inputs = list()
targets = list()
inputs.append(FloatTensor([[0,0,1]], autograd=True))
inputs.append(FloatTensor([[0,1,1]], autograd=True))
inputs.append(FloatTensor([[1,0,1]], autograd=True))
inputs.append(FloatTensor([[1,1,1]], autograd=True))
targets.append(FloatTensor([[0]], autograd=True))
targets.append(FloatTensor([[0]], autograd=True))
targets.append(FloatTensor([[1]], autograd=True))
targets.append(FloatTensor([[1]], autograd=True))
# -
for iter in range(10):
preds = list()
for i in range(len(inputs)):
preds.append(model(inputs[i]))
losses = list()
for i in range(len(preds)):
losses.append((preds[i] - targets[i]) ** 2)
loss = reduce(lambda x,y:x+y,losses)
loss.backward()
for p in model.parameters():
p -= p.grad()
print(loss.to_numpy())
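# The same principle — the gradient of a summed loss is the sum of the per-example gradients — can be checked with a hand-derived sketch in plain NumPy (an illustration of the idea, not the syft API used above):

```python
import numpy as np

# Tiny linear model y = w * x with a squared error per example.
# Differentiating the summed loss gives the sum of the per-example
# gradients: dL_i/dw = 2 * (w * x_i - t_i) * x_i.
w = 0.5
xs = np.array([1.0, 2.0, 3.0])
ts = np.array([2.0, 4.0, 6.0])          # generated by the true w = 2

grads = 2 * (w * xs - ts) * xs          # per-example gradients
total_grad = grads.sum()                # gradient of the summed loss

# Plain gradient descent on the summed loss converges to the true w.
for _ in range(200):
    w -= 0.01 * (2 * (w * xs - ts) * xs).sum()
print(total_grad, round(w, 3))          # -42.0 2.0
```

# Descending the single summed gradient updates the parameter using information from every example, which is exactly what `loss.backward()` does above after the `reduce` over the per-example losses.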
| notebooks/demos/Multi-MLP Using Layers.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Data Manipulation with Pandas
import pandas as pd
import numpy as np
pd.set_option('display.max_rows', 10)
# ## Categorical Types
# * Pandas provides a convenient `dtype` for representing categorical, or factor, data
c = pd.Categorical(['a', 'b', 'b', 'c', 'a', 'b', 'a', 'a', 'a', 'a'])
c
c.describe()
c.codes
c.categories
# * By default the Categorical type represents an **unordered categorical**
# * You can provide information about the order of categories
c.as_ordered()
# ### Support in DataFrames
# * When a Categorical is in a DataFrame, there is a special `cat` accessor
# * This gives access to all of the features of the Categorical type
dta = pd.DataFrame.from_dict({'factor': c,
'x': np.random.randn(10)})
dta.head()
dta.dtypes
dta.factor.cat
dta.factor.cat.categories
dta.factor.describe()
# ### Exercise
# * Load NFS data again. Convert `fditemno` to a Categorical Type. Use describe.
# +
# [Solution Here]
# -
# %load solutions/load_nfs_categorical.py
# ## Date and Time Types
# Pandas provides conveniences for working with dates
# ### Creating a Range of Dates
dates = pd.date_range("1/1/2015", periods=75, freq="D")
dates
y = pd.Series(np.random.randn(75), index=dates)
y.head()
y.reset_index().dtypes
# ### Support in DataFrames
# * When a `datetime` type is in a DataFrame, there is a special `dt` accessor
# * This gives access to all of the features of the datetime type
dta = (y.reset_index(name='t').
rename(columns={'index': 'y'}))
dta.head()
dta.dtypes
dta.y.dt.freq
dta.y.dt.day
# ### Indexing with Dates
# * You can use strings
# * **Note**: the ending index is *inclusive* here. This is different than most of the rest of Python
y.loc["2015-01-01":"2015-01-15"]
# DatetimeIndex supports partial string indexing
y["2015-01"]
# * You can **resample** to a lower frequency, specifying how to aggregate
# * Uses the `DatetimeIndexResampler` object
resample = y.resample("M")
resample.mean()
# Or go to a higher frequency, optionally specifying how to fill in the missing values
y.asfreq('H', method='ffill')
# There are convenience methods to lag and lead time series
y
y.shift(1)
y.shift(-1)
# ### Rolling and Window Functions
# * Pandas also provides a number of convenience functions for working on rolling or moving windows of time series through a common interface
# * This interface is the new **Rolling** object
ts = pd.Series(np.random.randn(1000), index=pd.date_range('1/1/2000',
periods=1000))
ts = ts.cumsum()
rolling = ts.rolling(window=60)
rolling
rolling.mean()
# ### Exercise
# * Create a datetime column named 'date' for NFS_1974.csv NFS diary data
# * styr: Survey year
# * stmth: Survey month
# * logday: Day in the log (assume logdays are actual days)
# * *Hint*: You could do this in two ways
# * Look at the `parse_dates` keyword of `read_csv`
# * Create the date after reading in the DataFrame
# +
# [Solution here]
# -
# %load solutions/load_nfs_datetime.py
# ## Merging and Joining DataFrames
# +
# this is a bit slow because of the date parsing
transit = pd.read_csv("../data/AIS/transit_segments.csv",
parse_dates=['st_time', 'end_time'],
infer_datetime_format=True)
vessels = pd.read_csv("../data/AIS/vessel_information.csv")
# -
# * A lot of the time data that comes from relational databases will be normalized
# * I.e., redundant information will be put in separate tables
# * Users are expected to *merge* or *join* tables to work with them
vessels.head()
transit.head()
# * Several ships in the vessels data have traveled multiple segments as we would expect
# * Matching the names in the transit data to the vessels data is thus a many-to-one match
# * *aside* pandas Indices (of which Columns are one) are set-like
vessels.columns.intersection(transit.columns)
# ### Merging
# * We can combine these two datasets for a many-to-one match
# * `merge` will use the common columns if we do not explicitly specify the columns
transit.merge(vessels).head()
# **Watch out**, when merging on columns, indices are discarded
A = pd.DataFrame(np.random.randn(25, 2),
index=pd.date_range('1/1/2015', periods=25))
A[2] = np.repeat(list('abcde'), 5)
A
B = pd.DataFrame(np.random.randn(5, 2))
B[2] = list('abcde')
B
A.merge(B, on=2)
# ### Joins
#
# * Join is like merge, but it works on the indices
# * The same could be achieved with merge and the `left_index` and `right_index` keywords
transit.set_index('mmsi', inplace=True)
vessels.set_index('mmsi', inplace=True)
transit.join(vessels).head()
# ### Exercise
# * Join the 1974 Household NFS data with the Diary data
# * The data is in `../data/NationalFoodSurvey/NFS_1974/`
# %load solutions/join_nfs.py
# ## Concatenation
# * Another common operation is appending data row-wise or column-wise to an existing dataset
# * We can use the `concat` function for this
# * Let's import two Ebola case-count datasets, each recorded on a different date in Guinea.
# * We will use the `Date` and `Description` columns of each dataset as the index.
df1 = pd.read_csv('../data/ebola/guinea_data/2014-08-04.csv',
index_col=['Date', 'Description'])
df2 = pd.read_csv('../data/ebola/guinea_data/2014-08-26.csv',
index_col=['Date', 'Description'])
df1.shape
df2.shape
df1.head()
df2.head()
df1.index.is_unique
df2.index.is_unique
# We can concatenate on the rows
df = pd.concat((df1, df2), axis=0)
df.shape
# ### Exercise
# * Join all of the diary data together in a single DataFrame
# * *Hint*: you might find `glob.glob` useful
# * You will need to add a unique field identifying the survey year to each DataFrame
# * *Hint*: you might find a regular expression using `re.search` useful
# +
# [Solution here]
# -
# %load solutions/concat_nfs.py
# ## Text Data Manipulation
# * Much like the `cat` and `dt` accessors we've already seen
# * String types have a `str` accessor that provides fast string operations on columns
vessels.type
# * Count the vessel separators
vessels.type.str.count('/').max()
# * Split on these accessors and expand to return a DataFrame with `nan`-padding
vessels.type.str.split('/', expand=True)
# ### Exercise
# * Load the file `"Ref_ food groups.txt"`
# * Get all of the food groups that contain the word milk
# +
# [Solution here]
# -
# %load solutions/nfs_dairy.py
| notebooks/1.3 Data Manipulation with Pandas.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # CHEM 154 review
# ### The following are topics from the standard CHEM 154 syllabus (2016W) that we will be reviewing in this Notebook. No, you don't need to remember everything from CHEM 154. Yet.
#
# ---
#
# ### Reaction stoichiometry (Ch 4)
#
# ### Bulk Properties of Matter and Intermolecular Interactions (Ch 10, 12)
#
# ### Gases; Liquids; Phase Diagrams. Phase transitions & vapour/liquid equilibrium calculation.
#
# ### Thermochemistry and Thermodynamics (Ch 7, Ch 19 )
# - First Law
# - Enthalpy
# - Standard State
# - Calorimetry
# - Hess's Law
# - Kirchoff's law
# - Standard Enthalpy of Formation
# - Entropy; Spontaneity in Chemical Reactions
# - Second Law
# - Third Law
# - Gibbs Free Energy
# - Spontaneity and Approach to Equilibrium.
# ## Reaction Stoichiometry
#
# Reaction stoichiometry allows us to determine the amount of substance that is consumed or produced by a reaction. Think of a molecule's molecular formula, such as
#
# $$SO_{2(g)}$$
#
# as the ratio of atoms in the molecule. In this case it would be 1 sulfur atom to 2 oxygen atoms in the sulfur dioxide molecule.
#
# Now stoichiometry is the theory of this proportion applied to chemical equations. Think of it as a mathematical equation where everything on the left has to equal everything on the right ($2 = 1 + 1$).
#
# Coefficients in chemistry act the same as coefficients in math, multiplying everything in the molecular equation by the coefficient to represent the total number of atoms at play.
#
# Building on the sulfur dioxide example, the production of sulfur dioxide is essential in the production of fertilizers, metal processing and cocaine.
#
# Many metal ores occur as sulfides and are roasted to form an oxide and sulfur dioxide, for example, in the manufacture of lead:
#
# $$2PbS_{(s)} + 3O_{2(g)} \longrightarrow 2PbO_{s} + 2SO_{2(g)} $$
#
# This equation indicates that for every 2 molecules (g-moles, lb-moles) of $PbS$ that react, 3 molecules (g-moles, lb-moles) of $O_2$ react to produce 2 molecules (g-moles, lb-moles) of $PbO$ and 2 molecules of $SO_2$.
#
# The numbers that precede the formulas for each species are the stoichiometric coefficients of the reaction components. Overall, it is akin to making the equation say
#
# $$2 \space Pb \space atoms + 2 \space S \space atoms + 6 \space O \space atoms = 2 \space Pb \space atoms + 2 \space O \space atoms + 2 \space S \space atoms + 4 \space O \space atoms$$
#
# (if we "multiply out" the molecular equations)
#
# Note that the total number of atoms on the left equal the number on the right. Knowing the total number of atoms in the reactant and product side of the equation allows us to then use their respective molar masses to find the total mass of the reactants or products.
# ### Steps in balancing reaction stoichiometry
#
# Let's use the previous example of $2PbS_{(s)} + 3O_{2(g)} $ to explore balancing a reaction. We usually don't get pre-balanced equations, so when we are asked what is the equation of the reaction of lead sulfide and oxygen, we start off with $PbS$ and $O_2$.
#
# 1. In the first step, write out the reaction.
# $$PbS_{(s)} + O_{2(g)} \longrightarrow PbO_{(s)} + SO_{2(g)} $$
#
# 2. Give each molecule an unknown coefficient.
#
# $$(a)PbS_{(s)} + (b)O_{2(g)} \longrightarrow (c)PbO_{(s)} + (d)SO_{2(g)} $$
#
# 3. Balance the coefficients based on the atoms found in the chemical equation
#
# $$(a)PbS_{(s)} + (b)O_{2(g)} \longrightarrow (c)PbO_{(s)} + (d)SO_{2(g)} $$
# $$Pb: a = c $$
# $$S: a = d $$
# $$O: 2b = c + 2d $$
#
# Notice that in this case, we cannot solve for all the unknowns, since we only have 3 equations but 4 unknowns (You will use this method to determine if material or energy balances can be solved later in the course).
#
# In this case, we solve for equality between 2 coefficients
#
# $$ 2b = a + 2a $$
# $$ 2b = 3a $$
# $$ b = {3/2}a $$
#
# We then choose a "basis", a random number for 'a' that will then give us all the subsequent coefficients of 'b', 'c' and 'd'.
#
# $$ a = 1 $$
# $$ b = {3/2} $$
# $$ c = 1 $$
# $$ d = 1 $$
#
# $$PbS_{(s)} + {3/2}O_{2(g)} \longrightarrow PbO_{(s)} + SO_{2(g)} $$
#
# Which is equivalent to
#
# $$2PbS_{(s)} + 3O_{2(g)} \longrightarrow 2PbO_{s} + 2SO_{2(g)} $$
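# The coefficient-balancing steps above can be sketched numerically. This is an illustrative sketch (not part of the original derivation): fixing the basis $a = 1$ turns the three element balances into a small linear system in $b$, $c$, $d$ that NumPy can solve directly.

```python
import numpy as np

# Balance a*PbS + b*O2 -> c*PbO + d*SO2 with the basis a = 1.
# Element balances for the unknowns [b, c, d]:
#   Pb: c = 1
#   S:  d = 1
#   O:  2b - c - 2d = 0
A = np.array([
    [0.0,  1.0,  0.0],   # Pb
    [0.0,  0.0,  1.0],   # S
    [2.0, -1.0, -2.0],   # O
])
rhs = np.array([1.0, 1.0, 0.0])
b, c, d = np.linalg.solve(A, rhs)
print(b, c, d)   # 1.5 1.0 1.0  ->  PbS + 3/2 O2 -> PbO + SO2
```

# Doubling every coefficient recovers the integer form $2PbS + 3O_2 \rightarrow 2PbO + 2SO_2$.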
#
# ## Phase diagrams, phase transitions and vapour/liquid equilibrium calculation
#
# ### Phase diagrams
#
# In material and energy balances, reactants could change state from liquid to gas and vice versa. Phase diagrams help us relate the pressure, temperature and physical state a certain substance will be in, and thus allow us to calculate the subsequent heat and energy change once a substance changes state.
#
# <img src = "../../figures/Module-0/WaterTriplePoint2.jpg">
# <div style = "text-align:center;"><h3>Figure 1. Phase diagram of water</h3></div>
# <p>https://www.birdvilleschools.net/cms/lib2/TX01000797/Centricity/Domain/912/ChemLessons/Lessons/Phases%20and%20Changes/image022.jpg</p>
# 1. The point where the solid–liquid, liquid–vapor, and solid–vapor equilibrium curves all meet is the **triple point** of the substance (A).
# 2. If ($T$ ,$P$) falls on the solid–liquid equilibrium curve, then $T$ is the **melting/freezing point** at pressure $P$ (C).
# 3. The boiling point at $P = 1 atm$ is the **normal boiling point** of the substance (D).
# 4. If ($T$ ,$P$) falls on the solid–vapor equilibrium curve, then $T$ is the **sublimation point** at pressure $P$.
# 5. If $T$ and $P$ correspond to a point on the vapor–liquid equilibrium curve for a substance, $P$ is the **vapor pressure** of the substance at the temperature $T$, and $T$ is the boiling point temperature of the substnace at $P$.
# 6. The vapor-liquid curve terminates at the **critical point**. Above and to the right of the critical point, the two phases never co-exist.
#
# Note: Add interactive phase graph to this portion, not just a static graph.
# ### Phase transitions
#
# Phase transitions occur when the substance changes state, crossing one or more of the state equilibrium curves. Large amounts of energy are usually exchanged at this state. This energy change is characterized by the change in specific enthalpy and is termed **latent heat** of the phase change.
#
# The two most common latent heats are for condensing/boiling and melting/freezing, which are termed as **heat of vaporization** and **heat of fusion** respectively.
#
# $$ \Delta H_{v, water} $$
# $$ \Delta H_{m, water} $$
#
# Note that latent heats are affected more by temperature than pressure.
#
# (Note: Make it under the chapter of thermodynamics, interactive table to explore the different heats of vaporization of different substances.)
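# As a quick worked example of a latent-heat calculation (a sketch; $\Delta H_v \approx 2257$ kJ/kg is the standard textbook value for water at 100 °C and 1 atm — consult a steam table for other conditions):

```python
# Energy absorbed when m = 2.0 kg of saturated liquid water vaporizes:
#   Q = m * dH_v
m = 2.0            # kg of water
dH_v = 2257.0      # kJ/kg, heat of vaporization of water at 100 C, 1 atm
Q = m * dH_v       # kJ absorbed during the phase change
print(Q)           # 4514.0
```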
# ### Vapor/liquid equilibrium calculations
#
# A substance may co-exist as a vapor-liquid combination when the temperature and pressure fall on the vapor-liquid equilibrium curve. At points above the VLE curve, water is a **subcooled liquid**. On points along the VLE curve, water can be a **saturated liquid** or **vapor**. Below the VLE curve, water is **superheated vapor**.
#
# These keywords are used in the determination of specific internal energy ($U$) and specific enthalpy ($H$) of water at specific temperatures and pressures using a **steam table**. (Add cropped version as an example.)
# ## Works cited
| Modules/Module-0-Introduction/CHEM-154/CHEM 154 Review notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 1. Cross Entropy and Maximum Likelihood Estimation
# If you have gone through any of my other walkthroughs on machine learning, particularly those on **Logistic Regression**, **Neural Networks**, **Decision Trees**, or **Bayesian machine learning**, you have definitely come across the concepts of **Cross Entropy** and **Maximum Likelihood Estimation**. Now, when discussed separately, these are relatively simple concepts to understand. However, during the creation of these notebooks, particularly the sections on logistic regression and neural networks (and the cost functions involved), I felt as though it was not clear why they were related in certain cases.
#
# This notebook is meant to do three things:
# 1. Describe **Cross Entropy** in detail
# 2. Describe **Maximum Likelihood Estimation** in detail
# 3. Describe how the **Cross Entropy** can be equivalent to the negative **log-likelihood**, such as in the cost function in a neural network.
#
# So, with that said, lets get started talking about Cross Entropy.
# # 1.1. Cross Entropy
# When we develop a model for probabilistic classification, we are trying to figure out how to map the models **inputs** to **probabilistic predictions**, with the goal that these are very close to the **ground-truth probabilities**. The **training** process is done by iteratively adjusting the model's parameters so that our predictions get closer and closer to the ground-truth.
#
# Say we are trying to build a model that can determine whether an image contains a dog, a cat, or a fish. If, for instance, we input an image that contains a fish, we are hoping that the output is:
#
# $$y = \begin{bmatrix}
# 0 \\
# 0 \\
# 1
# \end{bmatrix}$$
#
# Since that vector represents the **ground-truth class probabilities**- in this case 0 for dog, 0 for cat, and 1 for fish. If our model ended up predicting a different probability distribution, for instance:
#
# $$\hat{y}= \begin{bmatrix}
# 0.3 \\
# 0.2 \\
# 0.5
# \end{bmatrix}$$
#
# Then we would want to adjust our parameters so that $\hat{y}$ gets closer to $y$. The question is: What exactly do we mean when we say "get's closer to"? How should we measure the difference between $\hat{y}$ and $y$? One possible measure is **cross entropy**.
#
# ## 1.2 Entropy
# In the context of **information theory**, what is **entropy**? Let's look at an example first. Say you are standing along a road, and you want to communicate each car model that you see pass to a friend. The only means that you have to communicate with your friend is a binary channel where you can only send 0 or 1, and each particular bit costs 10 cents. To do this, you will need bit sequences, one to represent each car model.
#
# Lets assume you are trying to minimize the amount of money you have to spend. How would you decide to assign bit sequences to car models? Would you use the same number of bits for a Toyota Camry as you would for a corvette? No, you wouldn't! Clearly you know that the camry is far more common, and you will be communicating to your friend that you saw a camry far more often, so you want to assign it a smaller sequence of bits. In other words, you are exploiting your knowledge about the distribution over car models to reduce the number of bits that you need to send on average.
#
# Well, it turns out that if you have access to the underlying distribution of cars on the road, $y$, then if you want to use the smallest number of bits on average, you should assign $log(\frac{1}{y_i})$ bits to the $i$th symbol. (Remember, $y_i$ is the probability of the $i$th symbol).
#
# For example, if we assume that seeing a Camry is 128 times as likely as seeing a corvette, then we'd give the Camry 7 fewer bits than the corvette symbol:
#
# $$b_{camry} = log\frac{1}{128p_{corvette}} = log\frac{1}{p_{corvette}} + log\frac{1}{128} = b_{corvette}-7$$
#
# If we fully exploit the known distribution of cars, $y$, in this way, we can achieve an optimal number of bits per transmission. The optimal number of bits is known as **entropy**. Mathematically, it's just the expected number of bits under this optimal encoding.
#
# $$H(y) = \sum_iy_ilog\frac{1}{y_i} = -\sum_iy_ilog(y_i)$$
#
# Where again, $y_i$ is the probability of seeing the $i$-th symbol, i.e. a corvette, and $log\frac{1}{y_i}$ is the number of bits we have assigned to it. So this equation just means that we take the probability of seeing each car, and multiply that by the number of bits we have assigned to it, and that is the total number of bits we would be expecting to transmit.
#
# ## 1.3 Information
# Now, you may very well be wondering: why are we taking the logarithm of the above inverse probability $\frac{1}{y_i}$? I mentioned that it "turns out" that with the underlying distribution, then this is the number of bits you should assign to the $i$th symbol; but why?
#
# In order to fully grasp that, the concept of **information** must be introduced. A full post will be dedicated to it in the future, but for now we will focus on a specific type of information known as **Shannon Information** (named after the founder of information theory, Claude Shannon), and define it as follows:
#
# > **Shannon Information** is a measure of _surprise_.
#
# Now what exactly do I mean when referencing "surprise"? Well, take our car example again. When a Camry passes us we experience little surprise, since it is very common. On the other hand, when the rare Tesla passes by we experience a good deal of surprise; we were not expecting it! Well, as Claude Shannon was considering how to define information he realized that this concept of "surprise" was absolutely crucial.
#
# A simple way to mathematically define surprise would be:
#
# $$information = surprise = \frac{1}{\text{probability of event}}$$
#
# In the above definition, if the probability of our event was 0.01 (a rare event), then our $surpise$ would be $100$. However, for a very common event with a probability of 0.9, our $surprise$ would be $1.11$.
#
# This definition on first glance seems to quantify surprise very well! However, as Shannon was trying to create his official definition of information, he realized that in order for it to be useful it would have to be _additive_. This simply means that the information associated with a set of outcomes should be equal to the sum of the information associated with each individual outcome. Shannon showed that in order to satisfy this _additive_ condition, we are better off defining information as:
#
# $$information = surprise = log(\frac{1}{\text{probability of event}})$$
#
# The definition above is that of **Shannon Information**. Now, how Shannon proved that will be saved for another post, but the important thing to remember is that it was _chosen for a reason_, not simply plucked out of thin air.
#
# One final thing to keep in mind is that another way of viewing _Entropy_, is as the _average shannon information_. In other words, it is the _expected value of the shannon information_. Recall, the definition of expected value is:
#
# $$E \big[ X \big] = \sum_{i}^k x_ip_i = x_1p_1 + x_2p_2 + ... + x_kp_k$$
#
# Which, we can apply to shannon information:
#
# $$Entropy = E \big[\text{Shannon Information} \big] = \sum_{i}^k \text{Probability of event $i$}*log\Big(\frac{1}{\text{probability of event $i$}}\Big)$$
#
# And this is exactly what we found in section 1.2! A succinct definition of Entropy may sometimes take the form:
#
# $$Entropy = H(X) = E \Big[ log(\frac{1}{p(x)})\Big]$$
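# As a quick sanity check (an illustrative sketch, not from the original text), the entropy formula above is a few lines of NumPy:

```python
import numpy as np

# H(y) = -sum_i y_i * log2(y_i), measured in bits.
def entropy(y):
    y = np.asarray(y, dtype=float)
    y = y[y > 0]                 # convention: 0 * log(0) = 0
    return -np.sum(y * np.log2(y))

print(entropy([0.5, 0.5]))       # a fair coin: 1.0 bit per flip
print(entropy([0.25] * 4))       # four equally likely symbols: 2.0 bits
print(entropy([0.9, 0.1]))       # a skewed coin needs fewer bits: ~0.47
```

# Note how the skewed distribution has lower entropy: the more predictable the symbols, the fewer bits the optimal encoding needs on average.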
#
# ## 1.4 Cross Entropy
# Now, if we think of a distribution as the tool we use to encode symbols, then entropy measures the number of bits we'll need if we use the *correct* tool, $y$, the **ground-truth probability distribution**. This is optimal, in that we can't encode the symbols using fewer bits on average.
#
# **However**, **cross entropy** represents the number of bits we will need if we encode symbols from $y$ using the *wrong* tool $\hat{y}$. In other words, if the probabilty distribution that our model learns, $\hat{y}$, is not the same as $y$ (which it almost never will be), then **cross entropy** represents the number of bits used in the encoding. We would have encoded the $i$th symbol with $log\frac{1}{\hat{y_i}}$ bits, instead of $log\frac{1}{y_i}$ bits.
#
# We of course will still be utilizing the value of the true distribution $y$, since that is the distribution we will actually encounter (if this is unclear, all it means is that you create the encoding scheme *before* you actually see the cars, meaning you use a distribution you are hoping is close to the correct one. We call this distribution $\hat{y}$. However, when actually determining how many bits you use, that is based on the *actual* cars that pass you, which follow the true distribution, $y$.). Mathematically this looks like:
#
# $$H(y, \hat{y}) = \sum_iy_ilog\frac{1}{\hat{y_i}} = -\sum_iy_ilog(\hat{y_i})$$
#
# **Cross entropy** is *always* larger than **entropy**. Encoding symbols according to the wrong distribution $\hat{y}$ will always make us use more bits. The only exception is in the trivial case where $y$ and $\hat{y}$ are equal, and in this case entropy and cross entropy are equal.
#
# ### 1.4.1 Cross Entropy Applied
# In order to make the notion of cross entropy more concrete, I want to go through how it applies in a scenario such as logistic regression, and _why_ it is the perfect choice for a cost function. We are going to start with the above definition of cross entropy:
#
# $$H(y, \hat{y}) = \sum_iy_ilog\frac{1}{\hat{y}_i}$$
#
# Remember, the entire reason that we have been delving into entropy is that we wanted to find a way to determine how "different" our _predicted distribution_ is from the _target distribution_. For clarity, I am going to let our target distribution be represented as $t$, and the predicted distribution will remain $\hat{y}$.
#
# $$H(t, \hat{y}) = \sum_i t_i log\frac{1}{\hat{y}_i}$$
#
# In the case of machine learning (think logistic regression), we have a set of _labels_ which corresponds to our _target distribution_, $t$. Again, it will often have the form:
#
# $$t = \begin{bmatrix}
# 0 \\
# 0 \\
# 1
# \end{bmatrix}$$
#
# The entire idea is that we want to _learn_ this distribution, which is represented by $\hat{y}$. It may look like:
#
# $$\hat{y}= \begin{bmatrix}
# 0.3 \\
# 0.2 \\
# 0.5
# \end{bmatrix}$$
#
# For one specific training example, we want to learn the distribution, and hope that it is very close the target distribution. We can determine _how far away_ from the target distribution it is via the _cross entropy_. Now, let's say we are in a binary classification scenario, with classes $a$ and $b$. Our cross entropy would look like:
#
# $$H(t, \hat{y}) = t_a log(\frac{1}{\hat{y}_a}) + t_b log(\frac{1}{\hat{y}_b})$$
#
# And recall, that in binary classification:
#
# $$t_b = 1 - t_a$$
#
# $$\hat{y}_b = 1 - \hat{y}_a$$
#
# Hence, if we substitute those values in:
#
# $$H(t, \hat{y}) = t_a log \big(\frac{1}{\hat{y}_a}\big) + \big(1 - t_a\big) log \big(\frac{1}{1 - \hat{y}_a} \big)$$
#
# And we can utilize the properties of logarithms to reduce our equation to:
#
# $$H(t, \hat{y}) = -\Big[ t_a log \big(\hat{y}_a\big) + \big(1 - t_a\big) log \big(1 - \hat{y}_a \big) \Big]$$
#
# And, if we let $t_a$ just be represented as $t$, and $\hat{y}_a$ represented as $\hat{y}$:
#
# $$H(t, \hat{y}) = -\Big[ t log \big(\hat{y}\big) + \big(1 - t\big) log \big(1 - \hat{y} \big) \Big]$$
#
# And that is the exact definition of the cross entropy cost function that we have encountered numerous times, specifically with logistic regression. Now, keep in mind that the above Cross entropy will only yield an output for _one training example_. We generally will have _many training examples_, and want to learn distributions for _all of them_. In that case, if we have $n$ training examples, we can determine the _total cross entropy_ via:
#
# $$\text{Total Cross Entropy $\forall$ examples} = - \sum_n^N \Big[ t_n log \big(\hat{y}_n\big) + \big(1 - t_n\big) log \big(1 - \hat{y}_n \big) \Big]$$
#
# We can quickly see why this is such a good cost function, specifically in the case of binary classification, with the following examples:
#
# $$\text{Cross Entropy}(t=1, y=1) = 0$$
#
# $$\text{Cross Entropy}(t=0, y=0) = 0$$
#
# $$\text{Cross Entropy}(t=1, y=0.9) = 0.11$$
#
# $$\text{Cross Entropy}(t=1, y=0.5) = 0.69$$
#
# $$\text{Cross Entropy}(t=1, y=0.1) = 2.3$$
#
# We can see that the cross entropy is zero when there is no error, and that the more incorrect our prediction is, the larger the cross entropy becomes.
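# The values quoted above can be verified with a short helper (an illustrative sketch; the natural log is assumed, matching the $0.69 = \ln 2$ value):

```python
import numpy as np

# Binary cross entropy for a single example:
#   H(t, y_hat) = -[t*log(y_hat) + (1 - t)*log(1 - y_hat)]
def binary_cross_entropy(t, y_hat, eps=1e-12):
    y_hat = np.clip(y_hat, eps, 1 - eps)    # keep log() finite at 0 and 1
    return -(t * np.log(y_hat) + (1 - t) * np.log(1 - y_hat))

# Reproduces the table of values quoted above (0, 0, 0.11, 0.69, 2.3).
for t, y in [(1, 1.0), (0, 0.0), (1, 0.9), (1, 0.5), (1, 0.1)]:
    print(t, y, round(binary_cross_entropy(t, y), 2))
```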
#
# ## 1.5 KL Divergence
# The **KL Divergence** from $\hat{y}$ to $y$ is simply the **difference** between **cross entropy** and **entropy**:
#
# $$KL(y \;||\; \hat{y}) = \sum_iy_ilog\frac{1}{\hat{y_i}} - \sum_iy_ilog\frac{1}{y_i} = \sum_iy_ilog\frac{y_i}{\hat{y_i}}$$
#
# It measures the extra bits we'll need on average if we encode symbols from $y$ according to $\hat{y}$. It is never negative, and it is only 0 if $y$ and $\hat{y}$ are the same. Note that minimizing the **cross entropy** and minimizing the **KL divergence** from $y$ to $\hat{y}$ are the same thing.
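# A minimal numerical sketch (my own illustration, not from the original text) of the identity that KL divergence is cross entropy minus entropy, and that it vanishes only when the two distributions match:

```python
import numpy as np

# Both distributions are strictly positive so every log is defined.
def entropy(y):
    return -np.sum(y * np.log(y))

def cross_entropy(y, y_hat):
    return -np.sum(y * np.log(y_hat))

def kl_divergence(y, y_hat):
    return np.sum(y * np.log(y / y_hat))

y     = np.array([0.2, 0.3, 0.5])    # ground-truth distribution
y_hat = np.array([0.3, 0.2, 0.5])    # the model's (wrong) distribution

print(kl_divergence(y, y_hat))                            # small but positive
print(np.isclose(kl_divergence(y, y_hat),
                 cross_entropy(y, y_hat) - entropy(y)))   # True
print(kl_divergence(y, y))                                # 0.0 when y_hat == y
```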
# # 2. Maximum Likelihood
# ## 2.1 Introduction
# One thing that we try and do during machine learning is maximize the likelihood of our data, given our particular model. This really means that we are trying to say: "How likely is it that we received these outputs, given our model is true". As an example, say we have trained a model to predict whether an image is of a cat or a dog. Now lets say we run predictions on 3 images of dogs, but our model predicts they are all cats. In that case our data was 3 images of dogs, and our model got all predictions wrong. So we could say that, if our model was in fact correct, the likelihood of our input data really being 3 dogs is very low. Our goal is to find a model that **maximizes the likelihood of our data**. So we would want a model that predicts all 3 images are dogs.
#
# <br></br>
# ## 2.2 Coin Toss Example
# Let's look at an example where we try to calculate the likelihood for a biased coin. Say we have a coin whose probability of heads, $p(H)$, equals $p$:
# $$p(H) = p$$
# In this case, $p$ is a parameter. Since this is a Bernoulli trial, the probability of tails is:
# $$p(T) = 1 - p$$
# Now we are going to run an experiment to help us determine $p$. We flip the coin 10 times and get 7 heads and 3 tails. We want to know how to write the total likelihood, which is the probability of receiving the data (result) that we saw. The general form of the likelihood for a particular sequence of outcomes in a binomial experiment is:
#
# $$L(X\;|\;p) = p^k(1-p)^{N-k}$$
#
# Here $k$ is the total number of successes and $N$ is the total number of trials. In our example, we had 7 heads (7 successes) and 3 tails (3 failures), with 10 trials total. Our likelihood function then looks like:
#
# $$L(X\;|\;p) = p^7(1-p)^3$$
#
# Note that we are able to do this because each coin toss is independent, so we can multiply the individual probabilities. In other words, the above equation comes from multiplying the probability of each result together:
#
# $$p*p*p*p*p*p*p*(1-p)*(1-p)*(1-p)$$
#
# Also, the likelihood can look just like a conditional probability. This is because the likelihood is used when our data, in this case $X$, has already been observed.
#
# Now, we want to **maximize the likelihood**. In other words, we want to maximize $L$ with respect to $p$, our parameter; that is, we want to choose the $p$ that maximizes $L$. This can be done using basic calculus. Note that in most of these problems we take the log and maximize the log likelihood instead, which is acceptable because the log function is monotonically increasing.
#
# Before we maximize our likelihood, let's get a quick idea of what our likelihood function actually looks like:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
fig, ax = plt.subplots(figsize=(8,6))
x = np.linspace(0, 1, 100)
y = np.power(x, 7)*np.power(1-x,3)
plt.plot(x, y)
plt.show()
# Okay great, from a visual perspective we should expect a value of ~0.7 to be the value of $p$ that maximizes the likelihood of observing our data!
#
# Let's now go through the derivation. First we will take the log of the likelihood:
# $$log(L) = l = log\Big(p^7*(1-p)^3\Big)$$
# Then use the multiplication rule:
# $$log(p^7) + log\Big((1-p)^3\Big)$$
# Then the power rule:
# $$7log(p) + 3log(1-p)$$
# Set the derivative equal to zero:
# $$\frac{dl}{dp} = \frac{7}{p}+\Big(-1*\frac{3}{1-p}\Big)= 0$$
# And solve for p:
# $$\frac{7}{p} = \frac{3}{1-p}$$
# $$\frac{1-p}{p} = \frac{3}{7}$$
# $$\frac{1}{p} -1 = \frac{3}{7}$$
# $$\frac{1}{p} = \frac{10}{7}$$
# $$p = \frac{7}{10}$$
# This is what we would expect (visually it agrees)! If this is still slightly abstract, here is a visualization of exactly what is going on:
#
# <img src="https://drive.google.com/uc?id=1F6Vm2ucJCfZVy2Iy4sKz6NujXSV-b6FW">
#
# We have already observed our data $X$, and we slowly change our parameter (in the visual above the parameter is $\mu$, but it can be thought of as $p$). As we change the parameter, we can see that our probability distribution moves to the right. Once we find the parameter that maximizes the likelihood, the distribution on the left is also in the best place it can be.
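# As a sanity check on the calculus above, here is a small sketch that minimizes the negative log likelihood over a grid of candidate $p$ values (the grid search is just for illustration; any 1-D optimizer would do):

```python
import numpy as np

def nll(p):
    # negative log likelihood for 7 heads and 3 tails
    return -(7 * np.log(p) + 3 * np.log(1 - p))

ps = np.linspace(0.001, 0.999, 9999)
p_best = ps[np.argmin(nll(ps))]
print(round(p_best, 2))  # 0.7
```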
# <br></br>
# ## 2.3 Predictive Power
# At this point, you may be happy to just use the cross entropy to measure the difference between two distributions, $y$ and $\hat{y}$, and then use the total cross entropy over all training examples as our loss function. In particular, if we let $n$ index our training examples, the overall loss would be:
#
# $$H(\{y^{(n)}\}, \{\hat{y}^{(n)}\}) = \sum_nH(y^{(n)}, \hat{y}^{(n)})$$
#
# But let's look at another approach. What if we want our objective function to be a direct measure of our model's predictive power (at least with respect to our training data)? One common approach is to tune our parameters so that the **likelihood** of our data under the model is **maximized**. Since in **classification** we often use a *discriminative model*, our "data" often just consists of the labels we are trying to predict. We can reason that a model that often predicts the ground-truth labels given the inputs might be useful, while a model that fails to predict the ground-truth labels is not useful.
#
# Because we generally assume that our samples are **independent and identically distributed**, the likelihood over all of our examples decomposes into a product over the likelihoods of individual examples:
#
# $$L(\{y^{(n)}\}, \{\hat{y}^{(n)}\}) = \prod_nL(y^{(n)}, \hat{y}^{(n)})$$
#
# And what is the likelihood of the $n$th example? It is just the particular entry of $\hat{y}^{(n)}$ that corresponds to the ground truth label specified by $y^{(n)}$!
#
# If we go back to our original example, if the first training image is of a fish, then:
#
# $$y^{(1)} = \begin{bmatrix}
# 0 \\
# 0 \\
# 1
# \end{bmatrix}$$
#
# This tells us that the likelihood $L(y^{(1)}, \hat{y}^{(1)})$ is just the last entry of:
#
# $$\hat{y}= \begin{bmatrix}
# 0.3 \\
# 0.2 \\
# 0.5
# \end{bmatrix}$$
#
# Which gives $L(y^{(1)}, \hat{y}^{(1)}) = 0.5$. Now let's say we have 4 training images labeled: *fish, cat, dog, cat*. This gives us our ground truth distributions:
# $$y^{(1)} = \begin{bmatrix}
# 0 \\
# 0 \\
# 1
# \end{bmatrix}$$
# $$y^{(2)} = \begin{bmatrix}
# 0 \\
# 1 \\
# 0
# \end{bmatrix}$$
# $$y^{(3)} = \begin{bmatrix}
# 1 \\
# 0 \\
# 0
# \end{bmatrix}$$
# $$y^{(4)} = \begin{bmatrix}
# 0 \\
# 1 \\
# 0
# \end{bmatrix}$$
#
# Our model would predict 4 other distributions:
#
# $$\hat{y}^{(1)},\hat{y}^{(2)},\hat{y}^{(3)},\hat{y}^{(4)}$$
#
# And our overall likelihood would just be:
#
# $$L(\{ y^{(1)},y^{(2)},y^{(3)},y^{(4)}\}, \{\hat{y}^{(1)},\hat{y}^{(2)},\hat{y}^{(3)},\hat{y}^{(4)} \}) = L(y^{(1)}, \hat{y}^{(1)})\,L(y^{(2)}, \hat{y}^{(2)})\,L(y^{(3)}, \hat{y}^{(3)})\,L(y^{(4)}, \hat{y}^{(4)})$$
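# To make this concrete, here is a small sketch with made-up model outputs (the rows of `Y_hat` below are hypothetical, not from any trained model):

```python
import numpy as np

# ground-truth one-hot labels: fish, cat, dog, cat (one row per example)
Y = np.array([[0, 0, 1],
              [0, 1, 0],
              [1, 0, 0],
              [0, 1, 0]])

# hypothetical model predictions (each row sums to 1)
Y_hat = np.array([[0.3, 0.2, 0.5],
                  [0.1, 0.8, 0.1],
                  [0.6, 0.2, 0.2],
                  [0.2, 0.7, 0.1]])

# likelihood of each example = predicted probability of its true class
per_example = np.sum(Y * Y_hat, axis=1)
overall = np.prod(per_example)
print(per_example, overall)  # [0.5 0.8 0.6 0.7], product ≈ 0.168
```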
#
# Maybe we would have been previously happy with just minimizing the cross entropy during training, but after seeing this, are we still happy? Why shouldn't we instead maximize the likelihood of our data?
# # 3. Unified Loss Function
# Let's take a minute to play with the expression above. Because a logarithm is monotonic, we know that maximizing the likelihood is equivalent to maximizing the **log likelihood**, which is in turn equivalent to *minimizing* the **negative log likelihood**.
#
# $$-log\Big(L(\{y^{(n)}\}, \{\hat{y}^{(n)}\})\Big)= - \sum_nlog\Big(L(y^{(n)}, \hat{y}^{(n)})\Big)$$
#
# But, from the work we did earlier, we also know that the log likelihood of the $n$th example is just the log of a particular entry of $\hat{y}^{(n)}$: the entry $i$ that satisfies $y_i^{(n)} = 1$. We can therefore rewrite the log likelihood for the $n$th training example in the following way:
#
# $$ \log L(y^{(n)}, \hat{y}^{(n)}) = \sum_i y^{(n)}_i \log \hat{y}^{(n)}_i $$
#
# Which in turn gives us an overall negative log likelihood of:
#
# $$ - \log L(\{y^{(n)}\}, \{\hat{y}^{(n)}\}) = -\sum_n \sum_i y^{(n)}_i \log \hat{y}^{(n)}_i $$
#
# Does this look familiar? This is exactly the **cross entropy**, summed over all training examples:
#
# $$ -\log L(\{y^{(n)}\}, \{\hat{y}^{(n)}\}) = \sum_n \big[-\sum_i y^{(n)}_i \log \hat{y}^{(n)}_i\big] = \sum_n H(y^{(n)}, \hat{y}^{(n)})$$
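# We can confirm this equivalence numerically with a two-example sketch (the prediction values are chosen arbitrarily):

```python
import numpy as np

Y = np.array([[0, 0, 1],
              [0, 1, 0]])                # one-hot ground truth
Y_hat = np.array([[0.3, 0.2, 0.5],
                  [0.1, 0.8, 0.1]])      # predicted distributions

# negative log likelihood: -sum_n log(prob assigned to the true class)
nll = -np.sum(np.log(np.sum(Y * Y_hat, axis=1)))

# total cross entropy: -sum_n sum_i y_i * log(yhat_i)
ce = -np.sum(Y * np.log(Y_hat))

print(nll, ce)  # the two quantities agree
```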
# # 4. Conclusions
#
# When we develop a probabilistic model over mutually exclusive classes, we need a way to measure the difference between predicted probabilities $\hat{y}$ and ground-truth probabilities $y$, and during training we try to tune parameters so that this difference is minimized. In this post we saw that cross entropy is a reasonable choice.
#
# From one perspective, minimizing cross entropy lets us find a $\hat{y}$ that requires as few extra bits as possible when we try to encode symbols from $y$ using $\hat{y}$.
#
# From another perspective, minimizing cross entropy is equivalent to minimizing the negative log likelihood of our data, which is a direct measure of the predictive power of our model.
| Mathematics/05-Information_Theory-01-Cross-Entropy-and-MLE-walkthrough.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Sock Merchant
#
# <br>
#
# 
# +
# #!/bin/python3
import math
import os
import random
import re
import sys
# Complete the sockMerchant function below.
def sockMerchant(n, arr):
d = dict()
for i in arr:
d[i] = d.get(i, 0) + 1
res = 0
for i in d.values():
res += i // 2
return res
if __name__ == '__main__':
fptr = open(os.environ['OUTPUT_PATH'], 'w')
n = int(input())
ar = list(map(int, input().rstrip().split()))
result = sockMerchant(n, ar)
fptr.write(str(result) + '\n')
fptr.close()
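# A quick standalone sanity check (the function is restated here so the snippet runs on its own, and the input is a small hand-made example):

```python
def sockMerchant(n, arr):
    counts = {}
    for color in arr:
        counts[color] = counts.get(color, 0) + 1
    # each pair needs two socks of the same color
    return sum(c // 2 for c in counts.values())

# 10 appears 4 times (2 pairs) and 20 appears 3 times (1 pair): 3 pairs total
print(sockMerchant(9, [10, 20, 20, 10, 10, 30, 50, 10, 20]))  # 3
```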
| Interview Preparation Kit/1. Warm-up Challenges/Sock Merchant.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
# Setting up a model and a mesh for the MT forward problem
# -
import sys
import numpy as np
import vtk
import SimPEG as simpeg
from SimPEG import MT
sys.path.append('/home/gudni/gitCodes/python/telluricpy')
import telluricpy
# Define the area of interest
bw, be = 556500, 558000
bs, bn = 7133500, 7133500
bb, bt = 0,480
# Build the mesh
# Design the tensors
hSize,vSize = 50., 12.5
nrCcore = [10, 8, 6, 4, 2, 2, 2, 2, 2]
hPad = simpeg.Utils.meshTensor([(hSize,9,1.5)])
hx = np.concatenate((hPad[::-1],np.ones(((be-bw)/hSize,))*hSize,hPad))
hy = np.concatenate((hPad[::-1],np.ones(((bn-bs)/hSize,))*hSize,hPad))
airPad = simpeg.Utils.meshTensor([(vSize,13,1.5)])
vCore = np.concatenate([ np.ones(i)*s for i, s in zip(nrCcore,(simpeg.Utils.meshTensor([(vSize,1),(vSize,8,1.3)])))])[::-1]
botPad = simpeg.Utils.meshTensor([(vCore[0],8,-1.5)])
hz = np.concatenate((botPad,vCore,airPad))
# Calculate the x0 point
x0 = np.array([bw-np.sum(hPad),bs-np.sum(hPad),bt-np.sum(vCore)-np.sum(botPad)])
# Make the mesh
meshFor = simpeg.Mesh.TensorMesh([hx,hy,hz],x0)
print np.sum(vCore)
print meshFor.nC
print meshFor
# Save the mesh
meshFor.writeVTK('nsmesh.vtr',{'id':np.arange(meshFor.nC)})
nsvtr = telluricpy.vtkTools.io.readVTRFile('nsmesh.vtr')
topoSurf = telluricpy.vtkTools.polydata.normFilter(telluricpy.vtkTools.io.readVTPFile('../Geological_model/CDED_Lake_Coarse.vtp'))
activeMod = telluricpy.vtkTools.extraction.extractDataSetWithPolygon(nsvtr,topoSurf)
telluricpy.vtkTools.io.writeVTUFile('activeModel.vtu',activeMod)
# Get active indieces
activeInd = telluricpy.vtkTools.dataset.getDataArray(activeMod,'id')
# Make the conductivity dictionary
# Note: using the background value for the till, since the extraction gets the ind's below the till surface
geoStructFileDict = {'Till':1e-4,
'XVK':3e-2,
'PK1':5e-2,
'PK2':1e-2,
'PK3':1e-2,
'HK1':1e-3,
'VK':5e-3}
# Loop through
extP = '../Geological_model/'
geoStructIndDict = {}
for key, val in geoStructFileDict.iteritems():
geoPoly = telluricpy.vtkTools.polydata.normFilter(telluricpy.vtkTools.io.readVTPFile(extP+key+'.vtp'))
modStruct = telluricpy.vtkTools.extraction.extractDataSetWithPolygon(activeMod,geoPoly,extBoundaryCells=True,extInside=True,extractBounds=True)
geoStructIndDict[key] = telluricpy.vtkTools.dataset.getDataArray(modStruct,'id')
# Make the physical prop
sigma = np.ones(meshFor.nC)*1e-8
sigma[activeInd] = 1e-3 # 1e-4 is the background and 1e-3 is the till value
# Add the structure
for key in ['Till','XVK','PK1','PK2','PK3','HK1','VK']:
sigma[geoStructIndDict[key]] = geoStructFileDict[key]
# Save the model
meshFor.writeVTK('nsmesh_0.vtr',{'S/m':sigma})
# Set up the forward modeling
freq = np.logspace(5,0,26)
np.save('MTfrequencies',freq)
# Find the locations on the surface of the model.
# Get the outer shell of the model
actModVTP = telluricpy.vtkTools.polydata.normFilter(telluricpy.vtkTools.extraction.geometryFilt(activeMod))
polyBox = vtk.vtkCubeSource()
polyBox.SetBounds(bw,be,bs,bn,bb,bt)
polyBox.Update()
# Exract the topo of the model
modTopoVTP = telluricpy.vtkTools.extraction.extractDataSetWithPolygon(actModVTP,telluricpy.vtkTools.polydata.normFilter(polyBox.GetOutput()),extractBounds=True)
# Make the rxLocations file
x,y = np.meshgrid(np.arange(bw+25.,be,50),np.arange(bs+25.,bn,50))
xy = np.hstack((x.reshape(-1,1),y.reshape(-1,1)))
# Find the location array
locArr = telluricpy.modelTools.surfaceIntersect.findZofXYOnPolydata(xy,modTopoVTP)
np.save('MTlocations',locArr)
# +
# Running the forward modelling on the Cluster.
# Define the forward run in findDiam_MTforward.py
# -
# %matplotlib qt
sys.path.append('/home/gudni/Dropbox/code/python/MTview/')
import interactivePlotFunctions as iPf
# Load the data
mtData = np.load('MTdataStArr_nsmesh_0.npy')
mtData
iPf.MTinteractiveMap([mtData])
# +
# Looking at the data shows that data below 100Hz is affected by the boundary conditions,
# which makes sense for very conductive conditions as we have.
# Invert data in the 1e5-1e2 range.
# -
# Run the inversion on the cluster using the inv3d/run1/findDiam_inversion.py
drecAll = np.load('MTdataStArr_nsmesh_0.npy')
np.unique(drecAll['freq'])[10::]
| SciPy2016/MTwork/Setup MT forward modelling_HK1PK1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <!--NOTEBOOK_HEADER-->
# *This notebook contains material from [PyRosetta](https://RosettaCommons.github.io/PyRosetta.notebooks);
# content is available [on Github](https://github.com/RosettaCommons/PyRosetta.notebooks.git).*
# <!--NAVIGATION-->
# < [Pose Basics](http://nbviewer.jupyter.org/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/02.01-Pose-Basics.ipynb) | [Contents](toc.ipynb) | [Index](index.ipynb) | [Accessing PyRosetta Documentation](http://nbviewer.jupyter.org/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/02.03-Accessing-PyRosetta-Documentation.ipynb) ><p><a href="https://colab.research.google.com/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/02.02-Working-with-Pose-Residues.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open in Google Colaboratory"></a>
# # Working with Pose residues
# Keywords: total_residue(), chain(), number(), pdb2pose(), pose2pdb()
# Notebook setup
import sys
if 'google.colab' in sys.modules:
# !pip install pyrosettacolabsetup
import pyrosettacolabsetup
pyrosettacolabsetup.mount_pyrosetta_install()
print ("Notebook is set for PyRosetta use in Colab. Have fun!")
from pyrosetta import *
init()
# **From previous section:**
# Make sure you are in the directory with the pdb files:
#
# `cd google_drive/My\ Drive/student-notebooks/`
pose = pose_from_pdb("inputs/5tj3.pdb")
pose_clean = pose_from_pdb("inputs/5tj3.clean.pdb")
# We can use methods in `Pose` to count residues and pick out residues from the pose. Remember that `Pose` is a python class, and to access methods it implements, you need an instance of the class (here `pose` or `pose_clean`) and you then use a dot after the instance.
print(pose.total_residue())
print(pose_clean.total_residue())
# Did you catch all the missing residues before?
# Store the `Residue` information for residue 20 of the pose by using the `pose.residue(20)` function.
# + nbgrader={"grade": true, "grade_id": "cell-b4a65f9eb1e72d02", "locked": false, "points": 0, "schema_version": 3, "solution": true}
# residue20 = type here
### BEGIN SOLUTION
residue20 = pose.residue(20)
### END SOLUTION
print(residue20.name())
# -
# ## Exercise 2: Residue objects
#
# Use the `pose`'s `.residue()` object to get the 24th residue of the protein pose. What is the 24th residue in the PDB file (look in the PDB file)? Are they the same residue?
# + code_folding=[] nbgrader={"grade": true, "grade_id": "cell-0b423c45607a85e0", "locked": false, "points": 0, "schema_version": 3, "solution": true}
# store the 24th residue in the pose into a variable (see residue20 example above)
### BEGIN SOLUTION
residue24 = pose.residue(24)
### END SOLUTION
# + code_folding=[]
# what other methods are attached to that Residue object? (type "residue24." and hit Tab to see a list of commands)
# -
# We can immediately see that the numbering PyRosetta internally uses for pose residues is different from the PDB file. The information corresponding to the PDB file can be accessed through the `pose.pdb_info()` object.
print(pose.pdb_info().chain(24))
print(pose.pdb_info().number(24))
# By using the `pdb2pose` method in `pdb_info()`, we can turn PDB numbering (which requires a chain ID and a residue number) into Pose numbering
# + code_folding=[]
# PDB numbering to Pose numbering
print(pose.pdb_info().pdb2pose('A', 24))
# -
# Use the `pose2pdb` method in `pdb_info()` to see what is the corresponding PDB chain and residue ID for pose residue number 24
# + code_folding=[]
# Pose numbering to PDB numbering
# + nbgrader={"grade": true, "grade_id": "cell-eb3b845928c9313f", "locked": false, "points": 0, "schema_version": 3, "solution": true}
### BEGIN SOLUTION
print(pose.pdb_info().pose2pdb(1))
### END SOLUTION
# -
# Now we can see how to examine the identity of a residue by PDB chain and residue number.
#
# Once we get a residue, there are various methods in the `Residue` class that might be for running analysis. We can get instances of the `Residue` class from `Pose`. For instance, we can do the following:
res_24 = pose.residue(24)
print(res_24.name())
print(res_24.is_charged())
# <!--NAVIGATION-->
# < [Pose Basics](http://nbviewer.jupyter.org/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/02.01-Pose-Basics.ipynb) | [Contents](toc.ipynb) | [Index](index.ipynb) | [Accessing PyRosetta Documentation](http://nbviewer.jupyter.org/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/02.03-Accessing-PyRosetta-Documentation.ipynb) ><p><a href="https://colab.research.google.com/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/02.02-Working-with-Pose-Residues.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open in Google Colaboratory"></a>
| notebooks/02.02-Working-with-Pose-Residues.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#hide
# %load_ext autoreload
# %autoreload 2
# +
# default_exp dialog_system
# +
# export
from let_me_answer_for_you import settings
import deeppavlov
import logging
from unittest.mock import patch
from collections import defaultdict
import pandas as pd
logging.basicConfig(
#filename='example.log',
format='%(asctime)s %(levelname)s:%(message)s',
level=logging.ERROR,
datefmt='%I:%M:%S'
)
logging.debug(" Debug Log Active")
logging.info("Hello! Welcome to our automated dialog system!")
logging.warning(' Warning Log Active')
# -
# # Dialog System
# > Implements the `question_answer`, `new_question_answer` and `new_context` methods
#export
class DialogSystem:
''' The DialogSystem class implements the main methods
defined in the settings module. \n
INPUT: \n
- context_data_file: csv file of contexts (default: None)\n
- faq_data_file: csv file of FAQs (default: None)\n
- configs_faq: json config file (default: None)\n
- download_models: Indicates if download configuration files (default: True)\n
If the context or the faq files are not provided, a *data* directory with the missing files,
will be created (in the same path where the module is running). \n
When an instance is created, the 'run_shell_installs', 'load_and_prepare_data'
and 'load_qa_models' of the settings module are called. Also the *data* and *qa_models*
attributes are created, they store the dataframes and models information, respectively.\n
If the dataframes are provided they must have the following columns:
1. context dataframe columns: 'topic', 'context'
2. faq dataframe columns: 'Question, 'Answer'
'''
def __init__(
self,
context_data_file=None,
faq_data_file=None,
configs_faq=None,
download_models=True
):
settings.run_shell_installs()
self.data = {'context': defaultdict(str), 'faq': defaultdict(str)}
self.download = download_models
settings.load_and_prepare_data(
context_data_file=context_data_file,
faq_data_file=faq_data_file,
configs_faq=configs_faq,
data=self.data
)
self.qa_models = settings.load_qa_models(
config_tfidf=self.data['faq']['config'], download=self.download
)
def question_answer(self, question):
''' Gets answers to a question. \n
INPUT: \n
- *question* parameter \n
The method creates the following attributes:\n
- 'self.question' -> the input parameter \n
- 'self.responses' -> a dict of possible responses \n
- 'self.formatted_responses' -> a formatted string of the possible responses
This method calls the functions `settings.get_responses` and `settings.format_responses`
'''
self.question, self.responses = settings.get_responses(
self.data['context']['df'],
question,
self.qa_models,
nb_squad_results=1
)
self.flatten_responses, self.formatted_responses = settings.format_responses(
self.responses
)
def new_question_answer(self, question, answer):
'''Adds a new question-answer pair.\n
INPUT:\n
- question\n
- answer\n
The new question-answer pair is stored in the path *self.data['faq']['path']*
and the models in *qa_models['faq']* get re-trained by calling the function
`deeppavlaov.train_model`
'''
_faq = self.data['faq']
new_faq = pd.DataFrame({'Question': [question], 'Answer': [answer]})
_faq['df'] = _faq['df'].append(new_faq)
_faq['df'].to_csv(_faq['path'], index=False)
self.qa_models['faq']['tfidf'] = deeppavlov.train_model(
_faq['config'], download=False
)
self.question, self.answer = question, answer
logging.info('FAQ dataset and model updated..')
def new_context(self, topic, context):
''' Adds a new context. \n
INPUT:\n
- topic (The title of the context)
- context
The new context is stored in the path *self.data['context']['path']*
'''
_ctx = self.data['context']
new_context = pd.DataFrame({'topic': [topic], 'context': [context]})
_ctx['df'] = _ctx['df'].append(new_context)
_ctx['df'].to_csv(_ctx['path'], index=False)
self.topic, self.context = topic, context
logging.info('contexts dataset updated..')
# +
from nbdev.showdoc import *
method_list_f = lambda Foo: [func for func in dir(Foo) if callable(getattr(Foo, func)) and not func.startswith("__")]
show_doc(DialogSystem)
for method in method_list_f(DialogSystem):
show_doc( getattr(DialogSystem, method))
# -
# ### Test Example
#test
import tempfile
from os import path
with tempfile.TemporaryDirectory() as tmpdirname:
ds = DialogSystem(
faq_data_file=path.join(tmpdirname, 'faq_example.csv'),
context_data_file=path.join(tmpdirname, 'context_example.csv'),
download_models=False
)
ds.question_answer(question='What is Intekglobal?')
assert ds.question == 'What is Intekglobal?'
assert isinstance(ds.responses['squad'], dict)
assert isinstance(ds.responses['faq'], dict)
logging.info(f'{ds.question} \n\n {ds.formatted_responses}')
assert 'This should not be in the current set of responses' not in ds.formatted_responses
logging.info(f' dict of responses: {ds.responses}')
logging.info(f'{ds.question} \n\n {ds.formatted_responses}')
ds.new_question_answer(
question='What day is today?', answer='Today is the day!'
)
ds.question_answer(question='What day is today?')
assert 'Today is the day!' in ds.formatted_responses
context = '''Space Exploration Technologies Corp., trading as SpaceX,
is an American aerospace manufacturer and space transportation services company headquartered
in Hawthorne, California. It was founded in 2002 by <NAME> with the goal of reducing
space transportation costs to enable the colonization of Mars. SpaceX has developed several launch
vehicles,the Starlink satellite constellation, and the Dragon spacecraft.
'''
ds.new_context(topic='SpaceX', context=context)
ds.question_answer(question='What are SpaceX initials stand for?')
logging.info(f'{ds.question} \n\n {ds.formatted_responses}')
assert 'Space Exploration Technologies Corp.' in ds.formatted_responses
# +
#hide
| nbs/02_dialog_system.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Using the Poisson-Boltzmann solver code with real data.
#
# **Running the code in this notebook (under Mott-Schottky conditions with real data) takes approximately 2 minutes (iMac with 4 Ghz i7 processor).**
#
# This example uses real data generated for the 111 grain boundary orientation in gadolinium doped ceria ( $\mathrm{Gd-CeO_2}$ ). The data was generated using METADISE which is a computer program that performs atomic scale simulations of crystal structures. These simulations allow dislocations, surfaces and interfaces to be studied. METADISE has been used to study a number of grain boundaries in gadolinium doped ceria and individual defect energies and positions were calculated for gadolinium ions at cerium sites and vacancies at oxygen sites.
# +
from pyscses.defect_species import DefectSpecies
from pyscses.set_of_sites import SetOfSites
from pyscses.constants import boltzmann_eV
from pyscses.calculation import Calculation, calculate_activation_energies
from pyscses.set_up_calculation import calculate_grid_offsets
from pyscses.grid import Grid
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# %matplotlib inline
# -
boundary_conditions = 'periodic'
site_charges = False
systems = 'mott-schottky'
core_models = False
site_models = 'site_explicit'
# +
alpha = 0.0005
conv = 1e-8
grid_x_min = -6.094e-9
grid_x_max = 5.16e-9
bulk_x_min = -5.783e-9
bulk_x_max = -2.502e-9
dielectric = 55
index = 111
b = 7.65327e-10
c = 7.65327e-10
temp = [ 773.15 ]
# -
valence = [ +2.0, -1.0 ]
site_labels = [ 'O', 'Ce' ]
defect_labels = ['Vo', 'Gd']
mole_fractions = np.array([ [ 0.05, 0.2 ] ])
initial_guess = [ [ 0.05, 0.2 ] ]
mobilities = [ 1.0, 0.0 ]
data = '../input_data/Gd_CeO2_111_data.txt'
# +
limits, laplacian_limits = calculate_grid_offsets( data, grid_x_min, grid_x_max, 'single' )
for m in mole_fractions:
for t in temp:
defect_species = { l : DefectSpecies( l, v, m, mob ) for l, v, m, mob in zip( defect_labels, valence, m, mobilities ) }
all_sites = SetOfSites.set_of_sites_from_input_data( data, [grid_x_min, grid_x_max], defect_species, site_charges, core_models, t )
if site_models == 'continuum':
all_sites, limits = SetOfSites.form_continuum_sites( all_sites, grid_x_min, grid_x_max, 1000, b, c, defect_species, laplacian_limits, site_labels, defect_labels )
if systems == 'mott-schottky':
for site in all_sites.subset( 'Ce' ):
site.defect_with_label('Gd').fixed = True
if systems == 'gouy-chapman':
for site in all_sites.subset( 'Ce' ):
site.defect_with_label('Gd').fixed = False
grid = Grid.grid_from_set_of_sites( all_sites, limits, laplacian_limits, b, c )
c_o = Calculation( grid, bulk_x_min, bulk_x_max, alpha, conv, dielectric, t, boundary_conditions )
c_o.form_subgrids( site_labels )
if systems == 'gouy-chapman':
c_o.mole_fraction_correction( m, systems, initial_guess )
c_o.solve(systems)
c_o.mole_fractions()
c_o.calculate_resistivity_ratio( 'positive', 2e-2 )
c_o.solve_MS_approx_for_phi( valence[0] )
# +
plt.plot(grid.x, c_o.phi)
plt.xlabel( '$x$ $\mathrm{coordinate}$' )
plt.ylabel('$\Phi$ $\mathrm{( eV )}$')
plt.show()
plt.plot(grid.x, c_o.rho)
plt.xlabel( '$x$ $\mathrm{coordinate}$' )
plt.ylabel(' $\mathrm{charge density}$ $(\mathrm{C m}^{-1})$')
plt.show()
plt.plot(grid.x, c_o.mf[site_labels[0]], label = '$\mathrm{Vo}$')
plt.plot(grid.x, c_o.mf[site_labels[1]], label = '$\mathrm{Gd}$')
plt.xlabel( '$x$ $\mathrm{coordinate}$' )
plt.ylabel('$x_{i}$')
plt.legend()
plt.show()
print('perpendicular grain boundary resistivity = ', c_o.perpendicular_resistivity_ratio)
print('parallel grain boundary resistivity = ', c_o.parallel_resistivity_ratio)
print('space charge potential = ', max(c_o.phi))
print('Mott-Schottky approximated space charge potential = ', c_o.ms_phi)
| userguides/notebooks/Ex_5_real_data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Face Recognition Using mlrun with OpenCV And PyTorch
# A complete pipeline of data processing, model training and serving function deployment.
# ### Install mlrun and kubeflow pipelines
# !pip install git+https://github.com/mlrun/mlrun.git@development
# # !pip install kfp
# ### Restart jupyter kernel after initial installations
# ### Install dependencies for the code and set config
#
# It is possible that after installing dependencies locally, you will need to restart the Jupyter kernel to successfully import the packages.
# nuclio: ignore
import nuclio
# Change following magic command to %%nuclio cmd -c if the following packages are already installed locally.
# %%nuclio cmd
pip install scikit-build
pip install cmake==3.13.3
pip install face_recognition
pip install opencv-contrib-python
pip install imutils
pip install torch torchvision
pip install pandas
pip install v3io_frames
# %nuclio config spec.build.baseImage = "python:3.6-jessie"
# ### Declare global variables and perform necessary imports
DATA_PATH = '/User/demos/demos/realtime-face-recognition/dataset/'
ARTIFACTS_PATH = '/User/demos/demos/realtime-face-recognition/artifacts/'
MODELS_PATH = '/User/demos/demos/realtime-face-recognition/models.py'
import torch.nn as nn
import torch.nn.functional as F
import torch
import importlib.util
import os
import shutil
import zipfile
from urllib.request import urlopen
from io import BytesIO
import face_recognition
from imutils import paths
from pickle import load, dump
import cv2
from mlrun.artifacts import TableArtifact
import pandas as pd
import numpy as np
import datetime
import random
import string
import v3io_frames as v3f
# ### Import and define mlrun functions for the pipeline
# nuclio: ignore
from mlrun import new_function, code_to_function, NewTask, mount_v3io
import kfp
from kfp import dsl
def encode_images(context, cuda=True):
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
context.logger.info(f'Running on device: {device}')
client = v3f.Client("framesd:8081", container="users")
if not os.path.exists(DATA_PATH + 'processed'):
os.makedirs(DATA_PATH + 'processed')
if not os.path.exists(DATA_PATH + 'label_pending'):
os.makedirs(DATA_PATH + 'label_pending')
# If no train images exist in the predefined path we will train the model on a small dataset of movie actresses
if not os.path.exists(DATA_PATH + 'input'):
os.makedirs(DATA_PATH + 'input')
resp = urlopen('https://iguazio-public.s3.amazonaws.com/roy-actresses/Actresses.zip')
zip_ref = zipfile.ZipFile(BytesIO(resp.read()), 'r')
zip_ref.extractall(DATA_PATH + 'input')
zip_ref.close()
if os.path.exists(DATA_PATH + 'input/__MACOSX'):
shutil.rmtree(DATA_PATH + 'input/__MACOSX')
idx_file_path = ARTIFACTS_PATH+"idx2name.csv"
if os.path.exists(idx_file_path):
idx2name_df = pd.read_csv(idx_file_path)
else:
idx2name_df = pd.DataFrame(columns=['value', 'name'])
#creates a mapping of classes(person's names) to target value
new_classes_names = [f for f in os.listdir(DATA_PATH + 'input') if not '.ipynb' in f and f not in idx2name_df['name'].values]
initial_len = len(idx2name_df)
final_len = len(idx2name_df) + len(new_classes_names)
for i in range(initial_len, final_len):
idx2name_df.loc[i] = {'value': i, 'name': new_classes_names.pop()}
name2idx = idx2name_df.set_index('name')['value'].to_dict()
#log name to index mapping into mlrun context
context.log_artifact(TableArtifact('idx2name', df=idx2name_df), target_path='idx2name.csv')
#generates a list of paths to labeled images
imagePaths = [f for f in paths.list_images(DATA_PATH + 'input') if not '.ipynb' in f]
knownEncodings = []
knownLabels = []
fileNames = []
urls = []
for (i, imagePath) in enumerate(imagePaths):
print("[INFO] processing image {}/{}".format(i + 1, len(imagePaths)))
#extracts label (person's name) of the image
name = imagePath.split(os.path.sep)[-2]
#prepares to relocate image after extracting features
file_name = imagePath.split(os.path.sep)[-1]
new_path = DATA_PATH + 'processed/' + file_name
#converts image format to RGB for compatibility with the face_recognition library
image = cv2.imread(imagePath)
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
#detects coordinates of faces bounding boxes
boxes = face_recognition.face_locations(rgb, model='hog')
#computes embeddings for detected faces
encodings = face_recognition.face_encodings(rgb, boxes)
#this code assumes that a person's folder in the dataset does not contain an image with a face other than their own
for enc in encodings:
file_name = name + '_' + ''.join(random.choices(string.ascii_uppercase + string.digits, k=5))
knownEncodings.append(enc)
knownLabels.append([name2idx[name]])
fileNames.append(file_name)
urls.append(new_path)
#move image to processed images directory
shutil.move(imagePath, new_path)
#saves computed encodings to avoid repeating computations
df_x = pd.DataFrame(knownEncodings, columns=['c' + str(i).zfill(3) for i in range(128)]).reset_index(drop=True)
df_y = pd.DataFrame(knownLabels, columns=['label']).reset_index(drop=True)
df_details = pd.DataFrame([['initial training']*3]*len(df_x), columns=['imgUrl', 'camera', 'time'])
df_details['time'] = [datetime.datetime.utcnow()]*len(df_x)
df_details['imgUrl'] = urls
data_df = pd.concat([df_x, df_y, df_details], axis=1)
data_df['fileName'] = fileNames
client.write(backend='kv', table='iguazio/demos/demos/realtime-face-recognition/artifacts/encodings', dfs=data_df, index_cols=['fileName'])
with open('encodings_path.txt', 'w+') as f:
f.write('iguazio/demos/demos/realtime-face-recognition/artifacts/encodings')
context.log_artifact('encodings_path', src_path=f.name, target_path=f.name)
os.remove('encodings_path.txt')
def train(context, processed_data, model_name='model.bst', cuda=True):
if cuda:
if torch.cuda.is_available():
device = torch.device("cuda")
context.logger.info(f"Running on cuda device: {device}")
else:
device = torch.device("cpu")
context.logger.info("Requested running on cuda but no cuda device available.\nRunning on cpu")
else:
device = torch.device("cpu")
# prepare data from training
context.logger.info('Client')
client = v3f.Client('framesd:8081', container="users")
with open(processed_data.url, 'r') as f:
t = f.read()
data_df = client.read(backend="kv", table=t, reset_index=False, filter='label != -1')
X = data_df[['c'+str(i).zfill(3) for i in range(128)]].values
y = data_df['label'].values
n_classes = len(set(y))
X = torch.as_tensor(X, device=device)
y = torch.tensor(y, device=device).reshape(-1, 1)
input_dim = 128
hidden_dim = 64
output_dim = n_classes
spec = importlib.util.spec_from_file_location('models', MODELS_PATH)
models = importlib.util.module_from_spec(spec)
spec.loader.exec_module(models)
model = models.FeedForwardNeuralNetModel(input_dim, hidden_dim, output_dim)
model.to(device)
model = model.double()
# define loss and optimizer for the task
criterion = nn.CrossEntropyLoss()
learning_rate = 0.05
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
# train the network
n_iters = X.size(0) * 5
for i in range(n_iters):
r = random.randint(0, X.size(0) - 1)
optimizer.zero_grad()
out = model(X[r]).reshape(1, -1)
loss = criterion(out, y[r])
loss.backward()
optimizer.step()
context.logger.info('Save model')
#saves and logs model into mlrun context
dump(model._modules, open(model_name, 'wb'))
context.log_artifact('model', src_path=model_name, target_path=model_name, labels={'framework': 'Pytorch-FeedForwardNN'})
os.remove(model_name)
# +
# nuclio: end-code
# +
model_serving_function = code_to_function(name='recognize-faces',
filename='./nuclio-face-prediction.ipynb',
kind='nuclio')
model_serving_function.with_http(workers=2).apply(mount_v3io())
# +
api_serving_function = code_to_function(name='video-api-server',
filename='./nuclio-api-serving.ipynb',
kind='nuclio')
api_serving_function.with_http(workers=2).apply(mount_v3io())
# -
# ### Test pipeline functions locally
task = NewTask(handler=encode_images, out_path=ARTIFACTS_PATH)
run = new_function().run(task)
task2 = NewTask(handler=train, inputs={'processed_data': run.outputs['encodings_path']}, out_path=ARTIFACTS_PATH)
train = new_function().run(task2)
# ### Create a function from notebook and build image
# This is expected to take a few minutes
fn = code_to_function('face-recognition', kind='job')
#fn.deploy()
fn.with_code()
from mlrun import mlconf
mlconf.dbpath = 'http://mlrun-api:8080'
fn.apply(mount_v3io())
# Uncomment the lines below depending on the available GPUs: uncomment the first line to use a GPU during training, and the second to use a GPU for prediction.
# +
#fn.gpus(1)
#serving_function.gpus(1)
# -
# ### Create pipeline
@dsl.pipeline(
name='face recognition pipeline',
description='Creates and deploys a face recognition model'
)
def face_recognition_pipeline(with_cuda=True):
encode = fn.as_step(name='encode-images', handler='encode_images', out_path=ARTIFACTS_PATH, outputs=['idx2name', 'encodings_path'],
inputs={'cuda': with_cuda})
train = fn.as_step(name='train', handler='train', out_path=ARTIFACTS_PATH, outputs=['model'],
inputs={'processed_data': encode.outputs['encodings_path'], 'cuda': with_cuda})
deploy_model = model_serving_function.deploy_step(project='default', models={'face_rec_v1': train.outputs['model']})
deploy_api = api_serving_function.deploy_step(project='default').after(deploy_model)
client = kfp.Client(namespace='default-tenant')
#For debug purposes compile pipeline code
kfp.compiler.Compiler().compile(face_recognition_pipeline, 'face_rec.yaml')
# ### Run pipeline
arguments = {}
run_result = client.create_run_from_pipeline_func(face_recognition_pipeline, arguments=arguments, run_name='face_rec_1', experiment_name='face_rec')
| realtime-face-recognition/notebooks/face-recognition.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # <center> DIGITAL PROCESSING OF AUDIO SIGNALS</center>
# ## <center> Reverberator with digital filters</center>
# +
# %matplotlib inline
import math
import numpy as np
import matplotlib.pyplot as plt
from scipy import signal
from scipy.io import wavfile
import IPython.display as ipd
# -
# **NOTE:** *The following two cells are only needed to download the example file. Ignore them if you are working with your own audio files.*
# !pip install wget
import wget
# ### Description
#
# This exercise studies a digital-filter structure for building a **reverberator** of the type proposed by [Moorer], as described in the book by [Steiglitz].
#
# The idea of the structure is shown in the following diagram. It consists of six comb filters in parallel, each with its own parameters, which model the reflections in the room. The direct path with gain K represents the direct wave. The all-pass filter is included to provide *diffusion* of the reflections. The comb filters are refined by including a low-pass filter (first-order IIR) in the loop, which models the absorption of sound: higher frequencies are absorbed more quickly.
#
# 
#
#
# .. [Moorer] <NAME>. (1979). *About this reverberation business*. Computer Music Journal, 3(2):13–28.
#
# .. [Steiglitz] <NAME>. (1996). *Digital Signal Processing Primer: With Applications to Digital Audio and Computer Music.* Prentice Hall.
#
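# The structure described above can be summarized compactly as a transfer function. The following is a hedged sketch using the notation of this notebook (K the direct-path gain, L_i and R_i the delay length and per-sample decay of the i-th comb filter, A(z) the diffusion all-pass):
#
```latex
% Sketch (assumed notation): parallel comb bank, diffusion all-pass, direct path
H(z) = K + A(z)\sum_{i=1}^{6} C_i(z),
\qquad
C_i(z) = \frac{1}{1 - R_i^{L_i}\, z^{-L_i}}
```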
# ### How to run the notebook
# You can download the notebook and run it locally on your computer.
#
# It can also be run in Google Colab using the following link.
#
# <table align="center">
# <td>
# <a target="_blank" href="https://colab.research.google.com/github/mrocamora/audio-dsp/blob/main/notebooks/audioDSP-moorer_reverb_example.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
# </td>
# </table>
# ### Comb filter
#
# The following function implements a comb filter, following the diagram shown below. Study the code and answer the following questions.
#
# 
#
# 1. How would you describe this comb filter? What kind of feedback does it have?
# 2. What is its impulse response? And its frequency response?
# 3. How would you modify this filter to add a low-pass filter in each iteration?
def comb_filter(x, L, R):
"""
comb filter
Parameters
----------
x (numpy array) : input audio waveform
L (int) : delay length in samples
    R (float) : damping factor
Returns
-------
y (numpy array) : filtered audio waveform
"""
# signal length
N = x.size
# output signal
y = np.copy(x)
# force L to be integer
L = int(L)
    # compute output from filter equation
for n in range(L,N):
y[n] = x[n] + R**L * y[n-L]
return y
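# To explore questions 1 and 2 above, one can inspect the impulse response and magnitude response of the comb filter directly. A minimal sketch, assuming the difference equation y[n] = x[n] + R**L · y[n-L] implemented above:

```python
import numpy as np
from scipy import signal

L, R = 8, 0.9
# y[n] - R**L * y[n-L] = x[n]  ->  denominator coefficients [1, 0, ..., 0, -R**L]
a = np.zeros(L + 1)
a[0], a[L] = 1.0, -R**L

# impulse response: an impulse repeated every L samples, decaying by R**L each pass
imp = np.zeros(64)
imp[0] = 1.0
h_imp = signal.lfilter([1.0], a, imp)
assert np.isclose(h_imp[L], R**L) and np.isclose(h_imp[2 * L], R**(2 * L))

# frequency response: periodic peaks (the "teeth" of the comb) every 2*pi/L rad
w, h = signal.freqz([1.0], a, worN=4096)
assert np.isclose(np.abs(h).max(), 1.0 / (1.0 - R**L), rtol=1e-3)
```

# The peaks sit at multiples of 2π/L, which is why long delays give densely spaced resonances.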
# ### Reverberator
#
# Complete the code of the following function, which implements the reverberator, using the filters defined above (do not implement the all-pass filter for now).
def moorer_reverb(x, fs, t0=0.05, K=1.2, delays=[0.050, 0.056, 0.061, 0.068, 0.072, 0.078], rt60=1.5):
"""
moorer reverb
Parameters
----------
x (numpy array) : input audio waveform
fs (int) : sampling frequency in Hz
    t0 (float)          : pre-delay in seconds
    K (float)           : direct-path gain
    delays (list float) : delay line lengths (of comb filters) in seconds
    rt60 (float)        : reverberation time at zero frequency in seconds (e.g., 1.5)
Returns
-------
y (numpy array) : filtered audio waveform
"""
# delays as numpy array
ds = np.array(delays)
# pre-delay in samples
L0 = round(t0 * fs)
# comb filter delays in samples
Ls = np.round(ds * fs)
    # comb filter per-sample decay factor: Rs**L gives a 60 dB drop after rt60 seconds
    Rs = 10**(-3.0 / (rt60 * fs))
# pre-delay
w0 = np.append(np.zeros(L0), x)
x_out = np.append(x, np.zeros(L0))
ws = np.zeros(w0.shape)
# comb filters
for ind in range(len(delays)):
# apply comb filter
# w =
# sum comb filter output
# ws =
# all-pass filter delay
L_ap = np.round(0.005 * fs)
# all-pass filter gain
g_ap5 = 0.7
# allpass filter
# ap = allpass(...)
# produce output
# y =
return y
# ### Testing the reverberator
#
# Test the reverberator on an example audio signal. Run the code and analyze the following.
#
# 1. Is the reverberation effect successfully simulated?
# 2. What is the effect of changing the gain K?
# 3. How does the result vary when changing the reverberation time (rt60)?
# 4. Consider using fewer comb filters. What is the minimum reasonable number of filters?
# download audio file to use
wget.download('https://github.com/mrocamora/audio-dsp/blob/main/audio/ohwhere.wav?raw=true')
# +
# load audio file from local path
fs, x = wavfile.read('./ohwhere.wav')
# play audio
ipd.Audio(x, rate=fs)
# -
y = moorer_reverb(x, fs)
ipd.Audio(y,rate=fs)
# ### Exercise: All-pass filter
#
# Complete the following function, which implements an all-pass filter, following the diagram shown below. Modify the reverberator implementation to include the all-pass filter.
#
# 
def all_pass(x, L, a):
"""
all-pass filter
Parameters
----------
x (numpy array) : input audio waveform
L (int) : delay length in samples
    a (float) : all-pass coefficient
Returns
-------
y (numpy array) : filtered audio waveform
"""
# signal length
N = x.size
# output signal
y = np.copy(x)
# force L to be integer
L = int(L)
    # compute output from filter equation
# ...
# ...
return y
# ### Exercise: Low-pass filter
#
# To refine the model, a low-pass filter (first-order IIR) is used inside the comb filter loop. The low-pass filter models the fact that higher frequencies are absorbed more quickly. Modify the comb filter implementation to include the low-pass filter and analyze its effect on the reverberator output.
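# One possible way to sketch the modification (an assumption, not the official solution): place a one-pole low-pass y_lp[n] = (1 - g)·v[n] + g·y_lp[n-1] in the feedback path of the comb filter, so each recirculation darkens the sound:

```python
import numpy as np

def comb_filter_lowpass(x, L, R, g=0.3):
    """Comb filter with a first-order IIR low-pass in the feedback loop.

    g = 0 reduces to the plain comb filter; larger g absorbs highs faster.
    """
    N = x.size
    y = np.array(x, dtype=float)
    L = int(L)
    lp = 0.0  # low-pass filter state
    for n in range(L, N):
        # low-pass the fed-back sample before recirculating it
        lp = (1.0 - g) * y[n - L] + g * lp
        y[n] = x[n] + R**L * lp
    return y

# sanity check: with g = 0 the output matches the plain comb recursion
imp = np.zeros(32)
imp[0] = 1.0
y0 = comb_filter_lowpass(imp, L=4, R=0.9, g=0.0)
assert np.isclose(y0[4], 0.9**4) and np.isclose(y0[8], 0.9**8)
```

# With g > 0, each echo passes through the low-pass one more time, so high frequencies decay faster than the rt60 of the low frequencies, as the model intends.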
| notebooks/audioDSP-moorer_reverb_example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Building a Trie in Python
#
# Before we start let us reiterate the key components of a Trie or Prefix Tree. A trie is a tree-like data structure that stores a dynamic set of strings. Tries are commonly used to facilitate operations like predictive text or autocomplete features on mobile phones or web search.
#
# Before we move into the autocomplete function we need to create a working trie for storing strings. We will create two classes:
# * A `TrieNode` class that represents a single node in the Trie (a character and its children)
# * A `Trie` class that contains the root node (empty string) and exposes the general functionality of the Trie, like inserting a word or finding the node which represents a prefix.
#
# Give it a try by implementing the `TrieNode` and `Trie` classes below!
# +
## Represents a single node in the Trie
class TrieNode:
    def __init__(self):
        ## Initialize this node in the Trie
        pass

    def insert(self, char):
        ## Add a child node in this Trie
        pass

## The Trie itself containing the root node and insert/find functions
class Trie:
    def __init__(self):
        ## Initialize this Trie (add a root node)
        pass

    def insert(self, word):
        ## Add a word to the Trie
        pass

    def find(self, prefix):
        ## Find the Trie node that represents this prefix
        pass
# -
# # Finding Suffixes
#
# Now that we have a functioning Trie, we need to add the ability to list suffixes to implement our autocomplete feature. To do that, we need to implement a new function on the `TrieNode` object that will return all complete word suffixes that exist below it in the trie. For example, if our Trie contains the words `["fun", "function", "factory"]` and we ask for suffixes from the `f` node, we would expect to receive `["un", "unction", "actory"]` back from `node.suffixes()`.
#
# Using the code you wrote for the `TrieNode` above, try to add the suffixes function below. (Hint: recurse down the trie, collecting suffixes as you go.)
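# The expected behavior can be pinned down with a tiny standalone sketch (a hypothetical `MiniNode` helper, independent of the `TrieNode` you implement in this notebook), storing children in a dict and recursing to collect suffixes:

```python
class MiniNode:
    """Illustrative only: each node maps a character to a child node."""
    def __init__(self):
        self.children = {}
        self.is_word = False

def insert_word(root, word):
    node = root
    for char in word:
        node = node.children.setdefault(char, MiniNode())
    node.is_word = True

def suffixes(node, suffix=""):
    # collect every complete-word suffix below this node
    found = [suffix] if node.is_word and suffix else []
    for char, child in node.children.items():
        found.extend(suffixes(child, suffix + char))
    return found

root = MiniNode()
for w in ["fun", "function", "factory"]:
    insert_word(root, w)
assert sorted(suffixes(root.children["f"])) == sorted(["un", "unction", "actory"])
```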
class TrieNode:
def __init__(self):
## Initialize this node in the Trie
pass
def insert(self, char):
## Add a child node in this Trie
pass
    def suffixes(self, suffix = ''):
        ## Recursive function that collects the suffix for
        ## all complete words below this point
        pass
# # Testing it all out
#
# Run the following code to add some words to your trie and then use the interactive search box to see what your code returns.
MyTrie = Trie()
wordList = [
"ant", "anthology", "antagonist", "antonym",
"fun", "function", "factory",
"trie", "trigger", "trigonometry", "tripod"
]
for word in wordList:
MyTrie.insert(word)
from ipywidgets import widgets
from IPython.display import display
from ipywidgets import interact
def f(prefix):
if prefix != '':
prefixNode = MyTrie.find(prefix)
if prefixNode:
print('\n'.join(prefixNode.suffixes()))
else:
print(prefix + " not found")
else:
print('')
interact(f,prefix='');
| Problem_5/task_description.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:pytorch_p36]
# language: python
# name: conda-env-pytorch_p36-py
# ---
# +
import os
os.chdir("/home/ec2-user/scVI/")
os.getcwd()
import matplotlib
# %matplotlib inline
matplotlib.rcParams['pdf.fonttype'] = 42
matplotlib.rcParams['ps.fonttype'] = 42
import matplotlib.pyplot as plt
import seaborn as sns
from umap import UMAP
use_cuda = True
import torch
# -
from sklearn.neighbors import NearestNeighbors
import scipy
def entropy_batch_mixing(latent_space, batches, n_neighbors=50, n_pools=50, n_samples_per_pool=100):
def entropy(hist_data):
n_batches = len(np.unique(hist_data))
if n_batches > 2:
raise ValueError("Should be only two clusters for this metric")
frequency = np.mean(hist_data == 1)
if frequency == 0 or frequency == 1:
return 0
return -frequency * np.log(frequency) - (1 - frequency) * np.log(1 - frequency)
nne = NearestNeighbors(n_neighbors=1 + n_neighbors, n_jobs=8)
nne.fit(latent_space)
kmatrix = nne.kneighbors_graph(latent_space) - scipy.sparse.identity(latent_space.shape[0])
score = 0
for t in range(n_pools):
indices = np.random.choice(np.arange(latent_space.shape[0]), size=n_samples_per_pool)
score += np.mean([entropy(batches[kmatrix[indices].nonzero()[1]\
[kmatrix[indices].nonzero()[0] == i]]) for i in range(n_samples_per_pool)])
return score / float(n_pools)
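# The inner `entropy` above is the binary entropy of the batch frequency within a neighborhood; a minimal standalone check of that quantity (self-contained, does not reuse the function above):

```python
import numpy as np

def binary_entropy(hist_data):
    # entropy of the proportion of batch-1 cells among the neighbors
    f = np.mean(hist_data == 1)
    if f == 0 or f == 1:
        return 0.0
    return -f * np.log(f) - (1 - f) * np.log(1 - f)

# a perfectly mixed neighborhood maximizes the entropy at log(2)
assert np.isclose(binary_entropy(np.array([0, 1, 0, 1])), np.log(2))
# a neighborhood drawn entirely from one batch scores zero
assert binary_entropy(np.array([1, 1, 1, 1])) == 0.0
assert binary_entropy(np.array([0, 0, 0, 0])) == 0.0
```

# Averaging this quantity over many sampled neighborhoods is what makes the score high for well-mixed batches and near zero for fully separated ones.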
# +
from scvi.harmonization.utils_chenling import get_matrix_from_dir,assign_label
from scvi.harmonization.utils_chenling import select_indices_evenly
from scvi.dataset.pbmc import PbmcDataset
import numpy as np
from scvi.dataset.dataset import GeneExpressionDataset
dataset1 = PbmcDataset(filter_out_de_genes=False)
dataset1.update_cells(dataset1.batch_indices.ravel()==0)
dataset1.subsample_genes(dataset1.nb_genes)
count, geneid, cellid = get_matrix_from_dir('cite')
count = count.T.tocsr()
seurat = np.genfromtxt('../cite/cite.seurat.labels', dtype='str', delimiter=',')
cellid = np.asarray([x.split('-')[0] for x in cellid])
labels_map = [0, 0, 1, 2, 3, 4, 5, 6]
labels = seurat[1:, 4]
cell_type = ['CD4 T cells', 'NK cells', 'CD14+ Monocytes', 'B cells','CD8 T cells', 'FCGR3A+ Monocytes', 'Other']
dataset2 = assign_label(cellid, geneid, labels_map, count, cell_type, seurat)
set(dataset1.cell_types).intersection(set(dataset2.cell_types))
dataset1.subsample_genes(dataset1.nb_genes)
dataset2.subsample_genes(dataset2.nb_genes)
gene_dataset = GeneExpressionDataset.concat_datasets(dataset1, dataset2)
# -
from scvi.inference import UnsupervisedTrainer, SemiSupervisedTrainer,AlternateSemiSupervisedTrainer
from scvi.models.scanvi import SCANVI
from scvi.models.vae import VAE
gene_dataset.subsample_genes(1000)
# # SCANVI1 works
vae = VAE(gene_dataset.nb_genes, n_batch=gene_dataset.n_batches,
n_hidden=128, n_latent=10, n_layers=2, dispersion='gene')
trainer = UnsupervisedTrainer(vae, gene_dataset, train_size=1.0)
trainer.train(n_epochs=250)
full = trainer.create_posterior(trainer.model, gene_dataset, indices=np.arange(len(gene_dataset)))
scanvi = SCANVI(gene_dataset.nb_genes, gene_dataset.n_batches, gene_dataset.n_labels, n_layers=2)
scanvi.load_state_dict(full.model.state_dict(), strict=False)
trainer_scanvi = SemiSupervisedTrainer(scanvi, gene_dataset, classification_ratio=50,
n_epochs_classifier=1, lr_classification=5 * 1e-3)
trainer_scanvi.labelled_set = trainer_scanvi.create_posterior(indices=(gene_dataset.batch_indices == 0))
trainer_scanvi.unlabelled_set = trainer_scanvi.create_posterior(indices=(gene_dataset.batch_indices == 1))
trainer_scanvi.train(n_epochs=10)
full_scanvi = trainer_scanvi.create_posterior(trainer_scanvi.model, gene_dataset, indices=np.arange(len(gene_dataset)))
latent_scanvi, batch_indices, labels = full_scanvi.sequential().get_latent()
import matplotlib
# %matplotlib inline
sample = select_indices_evenly(2000, batch_indices)
colors = sns.color_palette('bright') +\
sns.color_palette('muted') + \
sns.color_palette('dark') + \
sns.color_palette('pastel') + \
sns.color_palette('colorblind')
latent_s = latent_scanvi[sample, :]
label_s = labels[sample]
batch_s = batch_indices[sample]
if latent_s.shape[1] != 2:
latent_s = UMAP(spread=2).fit_transform(latent_s)
keys= gene_dataset.cell_types
fig, ax = plt.subplots(figsize=(18, 12))
key_order = np.argsort(keys)
for i,k in enumerate(key_order):
ax.scatter(latent_s[label_s == k, 0], latent_s[label_s == k, 1], c=colors[i%30], label=keys[k],
edgecolors='none')
ax.legend(bbox_to_anchor=(1.1, 0.5), borderaxespad=0, fontsize='x-large')
ax.axis('off')
fig.tight_layout()
plt.show()
batch_s = batch_s.ravel()
batch = ['Cite', 'PBMC8k']
fig, ax = plt.subplots(figsize=(18, 12))
for i, x in enumerate(batch):
ax.scatter(latent_s[batch_s == i, 0], latent_s[batch_s == i, 1], c=colors[i], label=x,
edgecolors='none')
ax.legend(bbox_to_anchor=(1.1, 0.5), borderaxespad=0, fontsize='x-large')
ax.axis('off')
plt.show()
entropy_batch_mixing(latent_s, batch_s)
# # SCANVI2 does not
scanvi = SCANVI(gene_dataset.nb_genes, gene_dataset.n_batches, gene_dataset.n_labels, n_layers=2)
scanvi.load_state_dict(full.model.state_dict(), strict=False)
trainer_scanvi = SemiSupervisedTrainer(scanvi, gene_dataset, classification_ratio=50,
n_epochs_classifier=1, lr_classification=5 * 1e-3)
trainer_scanvi.labelled_set = trainer_scanvi.create_posterior(indices=(gene_dataset.batch_indices == 1))
trainer_scanvi.unlabelled_set = trainer_scanvi.create_posterior(indices=(gene_dataset.batch_indices == 0))
trainer_scanvi.train(n_epochs=10)
full_scanvi = trainer_scanvi.create_posterior(trainer_scanvi.model, gene_dataset, indices=np.arange(len(gene_dataset)))
latent_scanvi, batch_indices, labels = full_scanvi.sequential().get_latent()
sample = select_indices_evenly(2000, batch_indices)
colors = sns.color_palette('bright') +\
sns.color_palette('muted') + \
sns.color_palette('dark') + \
sns.color_palette('pastel') + \
sns.color_palette('colorblind')
latent_s = latent_scanvi[sample, :]
label_s = labels[sample]
batch_s = batch_indices[sample]
if latent_s.shape[1] != 2:
latent_s = UMAP(spread=2).fit_transform(latent_s)
keys= gene_dataset.cell_types
fig, ax = plt.subplots(figsize=(18, 12))
key_order = np.argsort(keys)
for i,k in enumerate(key_order):
ax.scatter(latent_s[label_s == k, 0], latent_s[label_s == k, 1], c=colors[i%30], label=keys[k],
edgecolors='none')
ax.legend(bbox_to_anchor=(1.1, 0.5), borderaxespad=0, fontsize='x-large')
ax.axis('off')
fig.tight_layout()
plt.show()
batch_s = batch_s.ravel()
batch = ['Cite', 'PBMC8k']
fig, ax = plt.subplots(figsize=(18, 12))
for i, x in enumerate(batch):
ax.scatter(latent_s[batch_s == i, 0], latent_s[batch_s == i, 1], c=colors[i], label=x,
edgecolors='none')
ax.legend(bbox_to_anchor=(1.1, 0.5), borderaxespad=0, fontsize='x-large')
ax.axis('off')
plt.show()
entropy_batch_mixing(latent_s, batch_s)
# # SCANVI2 Parameter search
# +
for R in np.arange(0,11,1):
scanvi = SCANVI(gene_dataset.nb_genes, gene_dataset.n_batches, gene_dataset.n_labels, n_layers=2)
scanvi.load_state_dict(full.model.state_dict(), strict=False)
trainer_scanvi = SemiSupervisedTrainer(scanvi, gene_dataset, classification_ratio=int(R),
n_epochs_classifier=1, lr_classification=5 * 1e-3)
trainer_scanvi.labelled_set = trainer_scanvi.create_posterior(indices=(gene_dataset.batch_indices == 1))
trainer_scanvi.unlabelled_set = trainer_scanvi.create_posterior(indices=(gene_dataset.batch_indices == 0))
trainer_scanvi.train(n_epochs=10)
full_scanvi = trainer_scanvi.create_posterior(trainer_scanvi.model, gene_dataset, indices=np.arange(len(gene_dataset)))
latent_scanvi, batch_indices, labels = full_scanvi.sequential().get_latent()
print("R=%i:%.4f"%(R,entropy_batch_mixing(latent_scanvi, batch_indices.ravel())))
# +
from sklearn.neighbors import KNeighborsClassifier
def PartialPrediction(latent,labelled_idx,unlabelled_idx,labels):
latent_labelled = latent[labelled_idx, :]
latent_unlabelled = latent[unlabelled_idx, :]
labels_labelled = labels[labelled_idx]
labels_unlabelled = labels[unlabelled_idx]
neigh = KNeighborsClassifier(n_neighbors=50)
neigh = neigh.fit(latent_labelled, labels_labelled)
labels_pred = neigh.predict(latent_unlabelled)
labels_prob = neigh.predict_proba(latent_unlabelled)
return labels_pred,labels_prob
# +
latent, batch_indices,labels = full.sequential().get_latent()
labelled_idx = batch_indices.ravel()==0
unlabelled_idx = batch_indices.ravel()==1
pred,prob=(PartialPrediction(latent, labelled_idx,unlabelled_idx,gene_dataset.labels.ravel()))
# -
from copy import deepcopy
dataset3 = deepcopy(dataset2)
dataset3.labels = pred.reshape(len(pred),1)
dataset3.cell_types = dataset1.cell_types
gene_dataset = GeneExpressionDataset.concat_datasets(dataset1, dataset3)
gene_dataset.subsample_genes(1000)
allcelltype = np.unique(labels[labelled_idx])
celltypedict = dict(zip(allcelltype,np.arange(len(allcelltype))))
# +
scanvi = SCANVI(gene_dataset.nb_genes, gene_dataset.n_batches, gene_dataset.n_labels, n_layers=2)
scanvi.load_state_dict(full.model.state_dict(), strict=False)
trainer_scanvi = SemiSupervisedTrainer(scanvi, gene_dataset, classification_ratio=int(R),
n_epochs_classifier=1, lr_classification=5 * 1e-3)
labelled = np.where(gene_dataset.batch_indices.ravel() == 1)[0][np.asarray([prob[i,celltypedict[x]] for i,x in enumerate(pred)])==1]
temp1 = np.where(gene_dataset.batch_indices.ravel() == 1)[0][np.asarray([prob[i,celltypedict[x]] for i,x in enumerate(pred)])<1]
temp2 = np.where(gene_dataset.batch_indices.ravel() == 0)[0]
unlabelled = np.concatenate([temp1,temp2 ])
trainer_scanvi.labelled_set = trainer_scanvi.create_posterior(indices=(labelled))
trainer_scanvi.unlabelled_set = trainer_scanvi.create_posterior(indices=(unlabelled))
trainer_scanvi.full_dataset = trainer_scanvi.create_posterior(shuffle=True)
trainer_scanvi.train(n_epochs=10)
full_scanvi = trainer_scanvi.create_posterior(trainer_scanvi.model, gene_dataset, indices=np.arange(len(gene_dataset)))
latent_scanvi, batch_indices, labels = full_scanvi.sequential().get_latent()
print("R=%i:%.4f"%(R,entropy_batch_mixing(latent_scanvi, batch_indices.ravel())))
# -
trainer_scanvi.unlabelled_set.accuracy()
# +
scanvi = SCANVI(gene_dataset.nb_genes, gene_dataset.n_batches, gene_dataset.n_labels, n_layers=2)
scanvi.load_state_dict(full.model.state_dict(), strict=False)
trainer_scanvi = AlternateSemiSupervisedTrainer(scanvi, gene_dataset, classification_ratio=int(R),
n_epochs_classifier=1, lr_classification=5 * 1e-3)
labelled = np.where(gene_dataset.batch_indices.ravel() == 1)[0][np.asarray([prob[i,celltypedict[x]] for i,x in enumerate(pred)])==1]
temp1 = np.where(gene_dataset.batch_indices.ravel() == 1)[0][np.asarray([prob[i,celltypedict[x]] for i,x in enumerate(pred)])<1]
temp2 = np.where(gene_dataset.batch_indices.ravel() == 0)[0]
unlabelled = np.concatenate([temp1,temp2 ])
trainer_scanvi.labelled_set = trainer_scanvi.create_posterior(indices=(labelled))
trainer_scanvi.unlabelled_set = trainer_scanvi.create_posterior(indices=(unlabelled))
trainer_scanvi.train(n_epochs=10)
full_scanvi = trainer_scanvi.create_posterior(trainer_scanvi.model, gene_dataset, indices=np.arange(len(gene_dataset)))
latent_scanvi, batch_indices, labels = full_scanvi.sequential().get_latent()
print("R=%i:%.4f"%(R,entropy_batch_mixing(latent_scanvi, batch_indices.ravel())))
# -
trainer_scanvi.unlabelled_set.accuracy()
| notebooks/.ipynb_checkpoints/SCANVI_paramsearch-Copy1-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # AoC Day 9
#
# <NAME>
#
# 9 December 2021
#
# ## Prompt
#
# --- Day 9: Smoke Basin ---
#
# These caves seem to be lava tubes. Parts are even still volcanically active; small hydrothermal vents release smoke into the caves that slowly settles like rain.
#
# If you can model how the smoke flows through the caves, you might be able to avoid it and be that much safer. The submarine generates a heightmap of the floor of the nearby caves for you (your puzzle input).
#
# Smoke flows to the lowest point of the area it's in. For example, consider the following heightmap:
#
# ```
# 2199943210
# 3987894921
# 9856789892
# 8767896789
# 9899965678
# ```
#
# Each number corresponds to the height of a particular location, where 9 is the highest and 0 is the lowest a location can be.
#
# Your first goal is to find the low points - the locations that are lower than any of its adjacent locations. Most locations have four adjacent locations (up, down, left, and right); locations on the edge or corner of the map have three or two adjacent locations, respectively. (Diagonal locations do not count as adjacent.)
#
# In the above example, there are four low points, all highlighted: two are in the first row (a 1 and a 0), one is in the third row (a 5), and one is in the bottom row (also a 5). All other locations on the heightmap have some lower adjacent location, and so are not low points.
#
# The risk level of a low point is 1 plus its height. In the above example, the risk levels of the low points are 2, 1, 6, and 6. The sum of the risk levels of all low points in the heightmap is therefore 15.
#
# Find all of the low points on your heightmap. What is the sum of the risk levels of all low points on your heightmap?
# +
# get input
with open("inputs/day9.txt") as file:
inputs = file.read().splitlines()
test_inputs = ["2199943210","3987894921","9856789892","8767896789","9899965678"]
# -
def gridify(inputs):
grid = []
for line in inputs:
grid.append([int(n) for n in list(line)])
return grid
test_grid = gridify(test_inputs)
test_grid
def test_for_lower_adjacency(grid, position):
"Will return FALSE if position is a low point"
# position = (row, column)
has_lower_adjacency = False
row_max = len(grid) - 1
column_max = len(grid[0]) - 1
pos_value = grid[position[0]][position[1]]
if position[0] > 0:
up_value = grid[position[0]-1][position[1]]
if up_value <= pos_value:
has_lower_adjacency = True
if position[0] < row_max:
down_value = grid[position[0]+1][position[1]]
if down_value <= pos_value:
has_lower_adjacency = True
if position[1] > 0:
left_value = grid[position[0]][position[1]-1]
if left_value <= pos_value:
has_lower_adjacency = True
if position[1] < column_max:
right_value = grid[position[0]][position[1]+1]
if right_value <= pos_value:
has_lower_adjacency = True
return has_lower_adjacency
test_for_lower_adjacency(test_grid, (4,0))
def find_low_points(grid):
low_points = []
positions = []
for row in range(len(grid)):
for column in range(len(grid[0])):
pos = (row, column)
positions.append(pos)
for pos in positions:
has_lower_adjacent = test_for_lower_adjacency(grid, pos)
if not has_lower_adjacent:
low_points.append(pos)
return low_points
find_low_points(test_grid)
def calculate_total_risk(grid, low_points):
total_risk = 0
for pos in low_points:
val = grid[pos[0]][pos[1]]
total_risk += val + 1
return total_risk
test_low_points = find_low_points(test_grid)
calculate_total_risk(test_grid, test_low_points)
grid = gridify(inputs)
low_points = find_low_points(grid)
calculate_total_risk(grid, low_points)
# ### Part 1 Solution
#
# 600 - CORRECT
#
# At first I submitted 1640, which was too high. The error was that it was counting a point as the lowest even if it was equal to an adjacent point. This was difficult to resolve b/c there was no equivalent case in the test example.
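# That plateau edge case is easy to pin down with a self-contained regression check (a hypothetical compact restatement of the adjacency test, not the function above): a point must be *strictly* lower than every neighbor, so two equal adjacent cells disqualify each other:

```python
def is_low_point(grid, r, c):
    # strict inequality: a tie with any neighbor disqualifies the point
    rows, cols = len(grid), len(grid[0])
    neighbors = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    return all(grid[r][c] < grid[nr][nc]
               for nr, nc in neighbors if 0 <= nr < rows and 0 <= nc < cols)

plateau = [[1, 1],
           [9, 9]]
# neither 1 is a low point: each ties with the other
assert not is_low_point(plateau, 0, 0)
assert not is_low_point(plateau, 0, 1)
# a strictly lower cell still qualifies
assert is_low_point([[0, 9], [9, 9]], 0, 0)
```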
# --- Part Two ---
#
# Next, you need to find the largest basins so you know what areas are most important to avoid.
#
# A basin is all locations that eventually flow downward to a single low point. Therefore, every low point has a basin, although some basins are very small. Locations of height 9 do not count as being in any basin, and all other locations will always be part of exactly one basin.
#
# The size of a basin is the number of locations within the basin, including the low point. The example above has four basins.
#
# The top-left basin, size 3:
#
# ```
# 2199943210
# 3987894921
# 9856789892
# 8767896789
# 9899965678
# ```
#
# The top-right basin, size 9:
#
# ```
# 2199943210
# 3987894921
# 9856789892
# 8767896789
# 9899965678
# ```
#
# The middle basin, size 14:
#
# ```
# 2199943210
# 3987894921
# 9856789892
# 8767896789
# 9899965678
# ```
#
# The bottom-right basin, size 9:
#
# ```
# 2199943210
# 3987894921
# 9856789892
# 8767896789
# 9899965678
# ```
#
# Find the three largest basins and multiply their sizes together. In the above example, this is 9 * 14 * 9 = 1134.
#
# What do you get if you multiply together the sizes of the three largest basins?
# ### Notes on recursion
#
# Define a function. Inside that function, call the same function.
# Must have a stop condition.
#
# What should the recursive function do?
# Look up, down, left, right, and determine if each should be added to the basin or if there is a 9.
#
# Function input?
# single position?
#
# Function output?
# list of positions?
def find_next_basin_position(grid, position, basin_positions = None):
# position = (row, column)
if basin_positions is None:
basin_positions = []
basin_positions.append(position)
row_max = len(grid) - 1
column_max = len(grid[0]) - 1
if position[0] > 0:
up_position = (position[0]-1, position[1])
up_value = grid[position[0]-1][position[1]]
if (up_value != 9) and (up_position not in basin_positions):
basin_positions.append(up_position)
basin_positions_up = find_next_basin_position(grid, up_position, basin_positions)
if position[0] < row_max:
down_position = (position[0]+1, position[1])
down_value = grid[position[0]+1][position[1]]
if (down_value != 9) and (down_position not in basin_positions):
basin_positions.append(down_position)
basin_positions_down = find_next_basin_position(grid, down_position, basin_positions)
if position[1] > 0:
left_position = (position[0], position[1]-1)
left_value = grid[position[0]][position[1]-1]
if (left_value != 9) and (left_position not in basin_positions):
basin_positions.append(left_position)
basin_positions_left = find_next_basin_position(grid, left_position, basin_positions)
if position[1] < column_max:
right_position = (position[0], position[1]+1)
right_value = grid[position[0]][position[1]+1]
if (right_value != 9) and (right_position not in basin_positions):
basin_positions.append(right_position)
basin_positions_right = find_next_basin_position(grid, right_position, basin_positions)
return basin_positions
def largest_basins_product(grid):
low_points = find_low_points(grid)
basin_lengths = []
for lp in low_points:
basin = find_next_basin_position(grid, lp)
basin_lengths.append(len(basin))
sorted_basin_lengths = sorted(basin_lengths, reverse=True)
return sorted_basin_lengths[0] * sorted_basin_lengths[1] * sorted_basin_lengths[2]
largest_basins_product(test_grid)
largest_basins_product(grid)
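The recursive neighbor search above can equivalently be written as an iterative flood fill with a queue, which sidesteps Python's recursion limit on very large grids. A sketch (names are illustrative; this is not the code used for the solution):

```python
from collections import deque

def basin_size(grid, low_point):
    """Iteratively flood-fill outward from a low point, stopping at 9s."""
    rows, cols = len(grid), len(grid[0])
    seen = {low_point}
    queue = deque([low_point])
    while queue:
        r, c = queue.popleft()
        # Visit the four orthogonal neighbors
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != 9 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return len(seen)
```

On the puzzle's example grid, the basin around the low point at row 0, column 1 has size 3, matching the problem statement.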
# ### Part 2 Solution
#
# 987840 - CORRECT!
| 2021/Day_09.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Greedy change-making: repeatedly subtract the largest note that still fits
note = [1, 2, 5, 10, 20, 50, 100, 200, 500, 2000]
deno = 1470  # amount to break into notes
ans = []     # notes used
count = 0    # number of notes used
note.sort(reverse=True)  # largest denominations first
print(note)
for i in range(len(note)):
    while(deno >= note[i]):
        deno -= note[i]
        print(deno, note[i])  # remaining amount, note just used
        count += 1
        ans.append(note[i])
print(ans)
print(count)
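The loop above can be condensed with `divmod`, which yields the count of each note directly (a sketch; the function name is illustrative). Note that the greedy approach is only guaranteed optimal for canonical denomination systems like this one, not for arbitrary sets of notes:

```python
def min_notes(amount, denominations):
    """Greedy change-making: take as many of each note as possible,
    largest first. Returns the list of notes used."""
    notes_used = []
    for note in sorted(denominations, reverse=True):
        count, amount = divmod(amount, note)
        notes_used.extend([note] * count)
    return notes_used

min_notes(1470, [1, 2, 5, 10, 20, 50, 100, 200, 500, 2000])
# -> [500, 500, 200, 200, 50, 20]
```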
| Greedy/Denomination Problem(Greedy Method).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## ORBIT
#
#
# ### Getting started:
#
#
# 0. Prereqs: Install cairo library
# On linux:
# `sudo apt-get install libcairo2-dev`
#
# 1. Create a virtualenv:
# `virtualenv --python=python3 ~/venv/py36`
#
# 2. Activate virtualenv:
# `source ~/venv/py36/bin/activate`
#
# 3. Install packages
# `pip install imageio pycairo`
#
#
# +
import imageio
import cairo
import math
import tempfile
import os
FRAME_COUNT = 150
WIDTH = 800
HEIGHT = 800
# +
class CelestialBody:
def __init__(self, radius, orbit_radius, orbit_center, fill, parent=None, pc=0, col=(1, 1, 1)):
self.radius = radius
self.orbit_radius = orbit_radius
self.orbit_center = orbit_center
self.parent_body = parent
self.phase = 0.0
self.phase_const = pc
self.fill = fill
self.col = col
self.update()
def update(self):
if self.parent_body is not None:
self.orbit_center = self.parent_body.center
self.center = (
self.orbit_center[0] + self.orbit_radius * math.cos(self.phase),
self.orbit_center[1] + self.orbit_radius * math.sin(self.phase))
def set_phase(self, i):
self.phase = i * self.phase_const * math.pi / FRAME_COUNT
self.update()
def draw(self, context):
context.set_source_rgb(*self.col)
context.arc(*self.center, self.radius, 0, 2 * math.pi)
if self.fill:
context.fill()
else:
context.stroke()
def clear_screen(context):
context.set_source_rgb(0, 0, 0)
context.rectangle(0, 0, WIDTH, HEIGHT)
context.fill()
# +
sun = CelestialBody(70, 0, (WIDTH//2, HEIGHT//2), True, None, 0, (1,1,1))
planets = {
'earth' : CelestialBody(15, 200, (750, 500), True, sun, 3, (0.5,0.5,0.9)),
'jupiter' : CelestialBody(30, 300, (300, 300), True, sun, 1, (0.9,0.3,0.3))
}
moons = {
'moon': CelestialBody(3, 25, (750, 500), False, planets['earth'], 7, (1,1,1))
}
bodies = {
'sun' : sun,
**planets,
**moons
}
# +
images = []
surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, WIDTH, HEIGHT)
context = cairo.Context(surface)
for i in range(0, FRAME_COUNT):
clear_screen(context)
for _, body in bodies.items():
body.set_phase(i)
body.draw(context)
context.stroke()
temp_png = f"/tmp/{i}.png"
surface.write_to_png(temp_png)
images.append(imageio.imread(temp_png))
os.remove(temp_png)
imageio.mimsave('planets.gif', images)
# +
# jupyter notebook only...
from IPython.core.display import HTML, Image
Image(url="planets.gif", width=800, height=800)
# -
| orbit.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # pathlib and glob basic
# 1. [11.1. pathlib — Object-oriented filesystem paths — Python 3.6.4 documentation](https://docs.python.org/3/library/pathlib.html)
# 2. [11.7. glob — Unix style pathname pattern expansion — Python 3.6.4 documentation](https://docs.python.org/3/library/glob.html)
# Display all expression results in a cell, not just the last one
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
from pathlib import Path, PurePath, PureWindowsPath
# List the subdirectories of a directory
p = Path('..')
[x for x in p.iterdir() if x.is_dir()]
# All notebook files, searched recursively
list(p.glob('**/*.ipynb'))
# Navigate into a subdirectory (path join with the / operator)
dir1 = p / 'python'
dir1
# Get the absolute path; on Windows, '/' is replaced with '\'
dir1.resolve()
# Does the path exist?
dir1.exists()
# Is it a directory?
dir1.is_dir()
file1 = dir1 / 'pathlib_glob.ipynb'
file1.is_dir()
# Open the file
with file1.open() as f:
f.readline()
# +
# PurePath objects provide purely computational path operations, without accessing the operating system
# -
dir2 = PurePath('../python')
# or
dir2 = PurePath('..', 'python')
# or
dir2 = PurePath(Path('..'), Path('python'))
# PurePath() with no arguments defaults to the current directory ('.')
dir2
# When several absolute paths are given, the last one is used as the anchor.
# In Windows paths, however, changing the local root does not discard a previous drive setting.
PurePath('/etc', '/usr', 'lib64')
PureWindowsPath('c:/Windows', '/Program Files')
PurePath('foo//bar')
PurePath('foo/./bar')
PurePath('foo/../bar')
# * Paths can also be compared with one another:
# * >
# * <
# * ==
# * in
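For instance (illustrative, using the POSIX flavour; `in` works via membership in collections such as `.parents`, since paths are hashable):

```python
from pathlib import PurePosixPath

# Lexicographic ordering within the same flavour
PurePosixPath('abc') < PurePosixPath('abd')            # True
# Redundant separators are normalized before comparison
PurePosixPath('foo//bar') == PurePosixPath('foo/bar')  # True
# Membership test against the chain of parent directories
PurePosixPath('/a') in PurePosixPath('/a/b/c').parents # True
```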
dir3 = dir2 / 'pathlib_glob.ipynb'
dir3
# Get the individual components of the path
# (returns a tuple)
dir3.parts
# Get the drive letter, if any
dir3.drive
dir3.root
dir2.anchor
# Get the parent directories
dir3.parents[0]
dir3.parents[1]
dir3.parent
# Get the name of the final path component
dir3.name
# Get the suffix(es) of the final component
dir3.suffix
dir3.suffixes
# The final component's name without its suffix
dir3.stem
# Pattern matching
PurePath('a/b.py').match('*.py')
PurePath('/a/b/c.py').match('b/*.py')
# Return a new path with the name changed. If the original path doesn’t have a name,
# ValueError is raised:
p1 = PurePath('../python')
p1.with_name('pathlib_glob.ipynb')
# +
# with_suffix returns a new path with the suffix changed
# -
# Get the current working directory
Path.cwd()
# Get file metadata (size, timestamps, ...)
file1.stat()
# **.glob can be used to match files**
# * Path.mkdir() can be used to create directories
# Get the owner of the file
file1.owner()
# ### The glob module matches using the same shell-style wildcards as the Unix shell
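A brief illustration (results depend on the current working directory):

```python
import glob

# '*' matches any run of characters within one path component;
# '**' with recursive=True also descends into subdirectories.
glob.glob('*.ipynb')
glob.glob('**/*.ipynb', recursive=True)
```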
| nbs/pathlib_glob.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
import os
import fsspec
import json
import pandas as pd
import numpy as np
# ## Load the data
#
# We exclude two projects from all further analysis due to insufficient underlying FIA data
#
# +
with fsspec.open(
"https://carbonplan.blob.core.windows.net/carbonplan-forests/offsets/archive/results/reclassification-crediting-error.json",
"r",
) as f:
data = json.load(f)
with fsspec.open(
"https://carbonplan.blob.core.windows.net/carbonplan-forests/offsets/database/forest-offsets-database-v1.0.json",
"r",
) as f:
db = json.load(f)
db = list(filter(lambda x: x["id"] not in ("ACR360", "CAR1102"), db))
# -
print(f"Number of projects: {len(db)}")
print(
f"Total ARBOCs: {np.sum([[x for x in db if x['id'] == key][0]['arbocs']['issuance'] for key in data.keys()])}"
)
# ## Calculate over-crediting
#
# We write a simple function to help with formatting
#
def format_si(num, precision=1, suffixes=["", "K", "M", "G", "T", "P"], hide_suffix=False):
m = sum([abs(num / 1000.0 ** x) >= 1 for x in range(1, len(suffixes))])
if hide_suffix:
return f"{num/1000.0**m:.{precision}f}"
else:
return f"{num/1000.0**m:.{precision}f}{suffixes[m]}"
def format_pt(num, precision=1):
return f"{num * 100:.{precision}f}%"
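As a standalone check of the suffix logic, here is the helper restated minimally so the snippet runs on its own (values are illustrative):

```python
def format_si(num, precision=1, suffixes=("", "K", "M", "G", "T", "P")):
    # m is the index of the largest power of 1000 not exceeding |num|
    m = sum(abs(num / 1000.0 ** x) >= 1 for x in range(1, len(suffixes)))
    return f"{num / 1000.0 ** m:.{precision}f}{suffixes[m]}"

format_si(1234567)           # -> '1.2M'
format_si(600, precision=0)  # -> '600'
```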
# We write a function to calculate over-crediting (as a percentage or in ARBOCs), optionally using a
# condition by which to filter projects
#
def get_overcrediting(condition=None, percentage=True, display=False):
if condition is not None:
keys = list(map(lambda x: x["id"], filter(condition, db)))
else:
keys = list(map(lambda x: x["id"], db))
keys = list(filter(lambda x: x in data.keys(), keys))
total = []
for i in range(1000):
total.append(np.nansum([data[key]["delta_arbocs"][i] for key in keys]))
total_percentage = np.percentile(total, [5, 50, 95]) / np.sum(
[[x for x in db if x["id"] == key][0]["arbocs"]["issuance"] for key in keys]
)
total_arbocs = np.percentile(total, [5, 50, 95])
if display:
print(
f"Over-crediting in ARBOCs: \
{format_si(total_arbocs[1])}, ({format_si(total_arbocs[0])}, {format_si(total_arbocs[2])})"
)
print(
f"Over-crediting as %: \
{format_pt(total_percentage[1])} ({format_pt(total_percentage[0])}, {format_pt(total_percentage[2])})"
)
print(
f"Projects included: \
{len(keys)}"
)
else:
return {"count": len(keys), "percent": total_percentage, "arbocs": total_arbocs}
# ## Examples of over-crediting
#
# Now we compute over-crediting for some example conditions
#
condition = None
get_overcrediting(condition=condition, percentage=True, display=True)
condition = lambda x: "New Forests" in x["developers"] or "New Forests" in x["owners"]
get_overcrediting(condition=condition, percentage=True, display=True)
condition = lambda x: "Finite Carbon" in x["developers"] or "Finite Carbon" in x["owners"]
get_overcrediting(condition=condition, percentage=True, display=True)
condition = lambda x: x["id"] == "ACR189"
get_overcrediting(condition=condition, percentage=True, display=True)
# ## Table of over-crediting by developer
#
# First we get unique developers
#
developers = [x["developers"] for x in db if x["id"] in data.keys()]
developers = [item for sublist in developers for item in sublist]
developers = list(set(developers))
df = pd.DataFrame()
df["Developer"] = developers
results = [
get_overcrediting(condition=lambda x: d in x["developers"] or d in x["owners"], percentage=True)
for d in developers
]
df["Count"] = [d["count"] for d in results]
df["Over-crediting as %"] = [f'{format_pt(d["percent"][1])}' for d in results]
df["Range for %"] = [
f'({format_pt(d["percent"][0])}, {format_pt(d["percent"][2])})' for d in results
]
df["Over-crediting as ARBOCs"] = [f'{format_si(d["arbocs"][1])}' for d in results]
df["Range for ARBOCs"] = [
f'({format_si(d["arbocs"][0])}, {format_si(d["arbocs"][2])})' for d in results
]
df[df["Count"] > 5]
| notebooks/Statistics.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="j0a4mTk9o1Qg" colab_type="code" colab={}
# Copyright 2019 Google Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] id="dCpvgG0vwXAZ" colab_type="text"
# # Predicting Movie Review Sentiment with BERT on TF Hub
# + [markdown] id="xiYrZKaHwV81" colab_type="text"
# If you’ve been following Natural Language Processing over the past year, you’ve probably heard of BERT: Bidirectional Encoder Representations from Transformers. It’s a neural network architecture designed by Google researchers that’s totally transformed what’s state-of-the-art for NLP tasks, like text classification, translation, summarization, and question answering.
#
# Now that BERT's been added to [TF Hub](https://www.tensorflow.org/hub) as a loadable module, it's easy(ish) to add into existing Tensorflow text pipelines. In an existing pipeline, BERT can replace text embedding layers like ELMo and GloVe. Alternatively, [finetuning](http://wiki.fast.ai/index.php/Fine_tuning) BERT can provide both an accuracy boost and faster training time in many cases.
#
# Here, we'll train a model to predict whether an IMDB movie review is positive or negative using BERT in Tensorflow with tf hub. Some code was adapted from [this colab notebook](https://colab.sandbox.google.com/github/tensorflow/tpu/blob/master/tools/colab/bert_finetuning_with_cloud_tpus.ipynb). Let's get started!
# + id="hsZvic2YxnTz" colab_type="code" colab={}
from sklearn.model_selection import train_test_split
import pandas as pd
import tensorflow as tf
import tensorflow_hub as hub
from datetime import datetime
# + [markdown] id="cp5wfXDx5SPH" colab_type="text"
# In addition to the standard libraries we imported above, we'll need to install BERT's python package.
# + id="jviywGyWyKsA" colab_type="code" outputId="166f3005-d219-404f-b201-2a0b75480360" colab={"base_uri": "https://localhost:8080/", "height": 51.0}
# !pip install bert-tensorflow
# + id="hhbGEfwgdEtw" colab_type="code" colab={}
import bert
from bert import run_classifier
from bert import optimization
from bert import tokenization
# + [markdown] id="KVB3eOcjxxm1" colab_type="text"
# Below, we'll set an output directory location to store our model output and checkpoints. This can be a local directory, in which case you'd set OUTPUT_DIR to the name of the directory you'd like to create. If you're running this code in Google's hosted Colab, the directory won't persist after the Colab session ends.
#
# Alternatively, if you're a GCP user, you can store output in a GCP bucket. To do that, set a directory name in OUTPUT_DIR and the name of the GCP bucket in the BUCKET field.
#
# Set DO_DELETE to rewrite the OUTPUT_DIR if it exists. Otherwise, Tensorflow will load existing model checkpoints from that directory (if they exist).
# + id="US_EAnICvP7f" colab_type="code" outputId="7780a032-31d4-4794-e6aa-664a5d2ae7dd" cellView="form" colab={"base_uri": "https://localhost:8080/", "height": 34.0}
# Set the output directory for saving model file
# Optionally, set a GCP bucket location
OUTPUT_DIR = 'OUTPUT_DIR_NAME'#@param {type:"string"}
#@markdown Whether or not to clear/delete the directory and create a new one
DO_DELETE = False #@param {type:"boolean"}
#@markdown Set USE_BUCKET and BUCKET if you want to (optionally) store model output on GCP bucket.
USE_BUCKET = True #@param {type:"boolean"}
BUCKET = 'BUCKET_NAME' #@param {type:"string"}
if USE_BUCKET:
OUTPUT_DIR = 'gs://{}/{}'.format(BUCKET, OUTPUT_DIR)
from google.colab import auth
auth.authenticate_user()
if DO_DELETE:
try:
tf.gfile.DeleteRecursively(OUTPUT_DIR)
except:
# Doesn't matter if the directory didn't exist
pass
tf.gfile.MakeDirs(OUTPUT_DIR)
print('***** Model output directory: {} *****'.format(OUTPUT_DIR))
# + [markdown] id="pmFYvkylMwXn" colab_type="text"
# #Data
# + [markdown] id="MC_w8SRqN0fr" colab_type="text"
# First, let's download the dataset, hosted by Stanford. The code below, which downloads, extracts, and imports the IMDB Large Movie Review Dataset, is borrowed from [this Tensorflow tutorial](https://www.tensorflow.org/hub/tutorials/text_classification_with_tf_hub).
# + id="fom_ff20gyy6" colab_type="code" colab={}
from tensorflow import keras
import os
import re
# Load all files from a directory in a DataFrame.
def load_directory_data(directory):
data = {}
data["sentence"] = []
data["sentiment"] = []
for file_path in os.listdir(directory):
with tf.gfile.GFile(os.path.join(directory, file_path), "r") as f:
data["sentence"].append(f.read())
data["sentiment"].append(re.match(r"\d+_(\d+)\.txt", file_path).group(1))
return pd.DataFrame.from_dict(data)
# Merge positive and negative examples, add a polarity column and shuffle.
def load_dataset(directory):
pos_df = load_directory_data(os.path.join(directory, "pos"))
neg_df = load_directory_data(os.path.join(directory, "neg"))
pos_df["polarity"] = 1
neg_df["polarity"] = 0
return pd.concat([pos_df, neg_df]).sample(frac=1).reset_index(drop=True)
# Download and process the dataset files.
def download_and_load_datasets(force_download=False):
dataset = tf.keras.utils.get_file(
fname="aclImdb.tar.gz",
origin="http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz",
extract=True)
train_df = load_dataset(os.path.join(os.path.dirname(dataset),
"aclImdb", "train"))
test_df = load_dataset(os.path.join(os.path.dirname(dataset),
"aclImdb", "test"))
return train_df, test_df
# + id="2abfwdn-g135" colab_type="code" colab={}
train, test = download_and_load_datasets()
# + [markdown] id="XA8WHJgzhIZf" colab_type="text"
# To keep training fast, we'll take a sample of 5000 train and test examples, respectively.
# + id="lw_F488eixTV" colab_type="code" colab={}
train = train.sample(5000)
test = test.sample(5000)
# + id="prRQM8pDi8xI" colab_type="code" outputId="34445cb8-2be0-4379-fdbc-7794091f6049" colab={"base_uri": "https://localhost:8080/", "height": 34.0}
train.columns
# + [markdown] id="sfRnHSz3iSXz" colab_type="text"
# For us, our input data is the 'sentence' column and our label is the 'polarity' column (0 and 1 for negative and positive, respectively).
# + id="IuMOGwFui4it" colab_type="code" colab={}
DATA_COLUMN = 'sentence'
LABEL_COLUMN = 'polarity'
# label_list is the list of labels, i.e. True, False or 0, 1 or 'dog', 'cat'
label_list = [0, 1]
# + [markdown] id="V399W0rqNJ-Z" colab_type="text"
# #Data Preprocessing
# We'll need to transform our data into a format BERT understands. This involves two steps. First, we create `InputExample`'s using the constructor provided in the BERT library.
#
# - `text_a` is the text we want to classify, which in this case, is the `Request` field in our Dataframe.
# - `text_b` is used if we're training a model to understand the relationship between sentences (i.e. is `text_b` a translation of `text_a`? Is `text_b` an answer to the question asked by `text_a`?). This doesn't apply to our task, so we can leave `text_b` blank.
# - `label` is the label for our example, i.e. True, False
# + id="p9gEt5SmM6i6" colab_type="code" colab={}
# Use the InputExample class from BERT's run_classifier code to create examples from the data
train_InputExamples = train.apply(lambda x: bert.run_classifier.InputExample(guid=None, # Globally unique ID for bookkeeping, unused in this example
text_a = x[DATA_COLUMN],
text_b = None,
label = x[LABEL_COLUMN]), axis = 1)
test_InputExamples = test.apply(lambda x: bert.run_classifier.InputExample(guid=None,
text_a = x[DATA_COLUMN],
text_b = None,
label = x[LABEL_COLUMN]), axis = 1)
# + [markdown] id="SCZWZtKxObjh" colab_type="text"
# Next, we need to preprocess our data so that it matches the data BERT was trained on. For this, we'll need to do a couple of things (but don't worry--this is also included in the Python library):
#
#
# 1. Lowercase our text (if we're using a BERT lowercase model)
# 2. Tokenize it (i.e. "sally says hi" -> ["sally", "says", "hi"])
# 3. Break words into WordPieces (i.e. "calling" -> ["call", "##ing"])
# 4. Map our words to indexes using a vocab file that BERT provides
# 5. Add special "CLS" and "SEP" tokens (see the [readme](https://github.com/google-research/bert))
# 6. Append "index" and "segment" tokens to each input (see the [BERT paper](https://arxiv.org/pdf/1810.04805.pdf))
#
# Happily, we don't have to worry about most of these details.
#
#
#
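The first few steps can be pictured with plain Python (a toy illustration only — real BERT uses WordPiece subword tokenization, not whitespace splitting, and the vocab here is made up):

```python
def toy_preprocess(text, vocab):
    """Lowercase, whitespace-tokenize, map to vocab indexes, and add
    the special [CLS]/[SEP] tokens. Unknown words map to [UNK]."""
    tokens = ['[CLS]'] + text.lower().split() + ['[SEP]']
    return [vocab.get(tok, vocab['[UNK]']) for tok in tokens]

vocab = {'[CLS]': 101, '[SEP]': 102, '[UNK]': 100,
         'sally': 7, 'says': 8, 'hi': 9}
toy_preprocess('Sally says hi', vocab)  # -> [101, 7, 8, 9, 102]
```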
# + [markdown] id="qMWiDtpyQSoU" colab_type="text"
# To start, we'll need to load a vocabulary file and lowercasing information directly from the BERT tf hub module:
# + id="IhJSe0QHNG7U" colab_type="code" outputId="20b28cc7-3cb3-4ce6-bfff-a7847ce3bbaa" colab={"base_uri": "https://localhost:8080/", "height": 34.0}
# This is a path to an uncased (all lowercase) version of BERT
BERT_MODEL_HUB = "https://tfhub.dev/google/bert_uncased_L-12_H-768_A-12/1"
def create_tokenizer_from_hub_module():
"""Get the vocab file and casing info from the Hub module."""
with tf.Graph().as_default():
bert_module = hub.Module(BERT_MODEL_HUB)
tokenization_info = bert_module(signature="tokenization_info", as_dict=True)
with tf.Session() as sess:
vocab_file, do_lower_case = sess.run([tokenization_info["vocab_file"],
tokenization_info["do_lower_case"]])
return bert.tokenization.FullTokenizer(
vocab_file=vocab_file, do_lower_case=do_lower_case)
tokenizer = create_tokenizer_from_hub_module()
# + [markdown] id="z4oFkhpZBDKm" colab_type="text"
# Great--we just learned that the BERT model we're using expects lowercase data (that's what's stored in tokenization_info["do_lower_case"]) and we also loaded BERT's vocab file. We also created a tokenizer, which breaks words into word pieces:
# + id="dsBo6RCtQmwx" colab_type="code" outputId="9af8c917-90ec-4fe9-897b-79dc89ca88e1" colab={"base_uri": "https://localhost:8080/", "height": 221.0}
tokenizer.tokenize("This here's an example of using the BERT tokenizer")
# + [markdown] id="0OEzfFIt6GIc" colab_type="text"
# Using our tokenizer, we'll call `run_classifier.convert_examples_to_features` on our InputExamples to convert them into features BERT understands.
# + id="LL5W8gEGRTAf" colab_type="code" outputId="65001dda-155b-48fc-b5fc-1e4cabc8dfbf" colab={"base_uri": "https://localhost:8080/", "height": 1261.0}
# We'll set sequences to be at most 128 tokens long.
MAX_SEQ_LENGTH = 128
# Convert our train and test features to InputFeatures that BERT understands.
train_features = bert.run_classifier.convert_examples_to_features(train_InputExamples, label_list, MAX_SEQ_LENGTH, tokenizer)
test_features = bert.run_classifier.convert_examples_to_features(test_InputExamples, label_list, MAX_SEQ_LENGTH, tokenizer)
# + [markdown] id="ccp5trMwRtmr" colab_type="text"
# #Creating a model
#
# Now that we've prepared our data, let's focus on building a model. `create_model` does just this below. First, it loads the BERT tf hub module again (this time to extract the computation graph). Next, it creates a single new layer that will be trained to adapt BERT to our sentiment task (i.e. classifying whether a movie review is positive or negative). This strategy of using a mostly trained model is called [fine-tuning](http://wiki.fast.ai/index.php/Fine_tuning).
# + id="6o2a5ZIvRcJq" colab_type="code" colab={}
def create_model(is_predicting, input_ids, input_mask, segment_ids, labels,
num_labels):
"""Creates a classification model."""
bert_module = hub.Module(
BERT_MODEL_HUB,
trainable=True)
bert_inputs = dict(
input_ids=input_ids,
input_mask=input_mask,
segment_ids=segment_ids)
bert_outputs = bert_module(
inputs=bert_inputs,
signature="tokens",
as_dict=True)
# Use "pooled_output" for classification tasks on an entire sentence.
# Use "sequence_output" for token-level output.
output_layer = bert_outputs["pooled_output"]
hidden_size = output_layer.shape[-1].value
# Create our own layer to tune for politeness data.
output_weights = tf.get_variable(
"output_weights", [num_labels, hidden_size],
initializer=tf.truncated_normal_initializer(stddev=0.02))
output_bias = tf.get_variable(
"output_bias", [num_labels], initializer=tf.zeros_initializer())
with tf.variable_scope("loss"):
# Dropout helps prevent overfitting
output_layer = tf.nn.dropout(output_layer, keep_prob=0.9)
logits = tf.matmul(output_layer, output_weights, transpose_b=True)
logits = tf.nn.bias_add(logits, output_bias)
log_probs = tf.nn.log_softmax(logits, axis=-1)
# Convert labels into one-hot encoding
one_hot_labels = tf.one_hot(labels, depth=num_labels, dtype=tf.float32)
predicted_labels = tf.squeeze(tf.argmax(log_probs, axis=-1, output_type=tf.int32))
# If we're predicting, we want predicted labels and the probabilities.
if is_predicting:
return (predicted_labels, log_probs)
# If we're train/eval, compute loss between predicted and actual label
per_example_loss = -tf.reduce_sum(one_hot_labels * log_probs, axis=-1)
loss = tf.reduce_mean(per_example_loss)
return (loss, predicted_labels, log_probs)
# + [markdown] id="qpE0ZIDOCQzE" colab_type="text"
# Next we'll wrap our model function in a `model_fn_builder` function that adapts our model to work for training, evaluation, and prediction.
# + id="FnH-AnOQ9KKW" colab_type="code" colab={}
# model_fn_builder actually creates our model function
# using the passed parameters for num_labels, learning_rate, etc.
def model_fn_builder(num_labels, learning_rate, num_train_steps,
num_warmup_steps):
"""Returns `model_fn` closure for TPUEstimator."""
def model_fn(features, labels, mode, params): # pylint: disable=unused-argument
"""The `model_fn` for TPUEstimator."""
input_ids = features["input_ids"]
input_mask = features["input_mask"]
segment_ids = features["segment_ids"]
label_ids = features["label_ids"]
is_predicting = (mode == tf.estimator.ModeKeys.PREDICT)
# TRAIN and EVAL
if not is_predicting:
(loss, predicted_labels, log_probs) = create_model(
is_predicting, input_ids, input_mask, segment_ids, label_ids, num_labels)
train_op = bert.optimization.create_optimizer(
loss, learning_rate, num_train_steps, num_warmup_steps, use_tpu=False)
# Calculate evaluation metrics.
def metric_fn(label_ids, predicted_labels):
accuracy = tf.metrics.accuracy(label_ids, predicted_labels)
f1_score = tf.contrib.metrics.f1_score(
label_ids,
predicted_labels)
auc = tf.metrics.auc(
label_ids,
predicted_labels)
recall = tf.metrics.recall(
label_ids,
predicted_labels)
precision = tf.metrics.precision(
label_ids,
predicted_labels)
true_pos = tf.metrics.true_positives(
label_ids,
predicted_labels)
true_neg = tf.metrics.true_negatives(
label_ids,
predicted_labels)
false_pos = tf.metrics.false_positives(
label_ids,
predicted_labels)
false_neg = tf.metrics.false_negatives(
label_ids,
predicted_labels)
return {
"eval_accuracy": accuracy,
"f1_score": f1_score,
"auc": auc,
"precision": precision,
"recall": recall,
"true_positives": true_pos,
"true_negatives": true_neg,
"false_positives": false_pos,
"false_negatives": false_neg
}
eval_metrics = metric_fn(label_ids, predicted_labels)
if mode == tf.estimator.ModeKeys.TRAIN:
return tf.estimator.EstimatorSpec(mode=mode,
loss=loss,
train_op=train_op)
else:
return tf.estimator.EstimatorSpec(mode=mode,
loss=loss,
eval_metric_ops=eval_metrics)
else:
(predicted_labels, log_probs) = create_model(
is_predicting, input_ids, input_mask, segment_ids, label_ids, num_labels)
predictions = {
'probabilities': log_probs,
'labels': predicted_labels
}
return tf.estimator.EstimatorSpec(mode, predictions=predictions)
# Return the actual model function in the closure
return model_fn
# + id="OjwJ4bTeWXD8" colab_type="code" colab={}
# Compute train and warmup steps from batch size
# These hyperparameters are copied from this colab notebook (https://colab.sandbox.google.com/github/tensorflow/tpu/blob/master/tools/colab/bert_finetuning_with_cloud_tpus.ipynb)
BATCH_SIZE = 32
LEARNING_RATE = 2e-5
NUM_TRAIN_EPOCHS = 3.0
# Warmup is a period of time where the learning rate
# is small and gradually increases--usually helps training.
WARMUP_PROPORTION = 0.1
# Model configs
SAVE_CHECKPOINTS_STEPS = 500
SAVE_SUMMARY_STEPS = 100
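Warmup linearly ramps the learning rate from zero up to the base rate over the first steps. A minimal sketch of the idea (not the actual BERT optimizer, which also applies decay after warmup; the numbers are illustrative):

```python
def warmup_schedule(step, base_lr=2e-5, warmup_steps=100):
    """Linear warmup, then constant learning rate."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr

warmup_schedule(50)   # halfway through warmup -> 1e-05
warmup_schedule(200)  # past warmup -> 2e-05
```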
# + id="emHf9GhfWBZ_" colab_type="code" colab={}
# Compute the number of train and warmup steps from the batch size
num_train_steps = int(len(train_features) / BATCH_SIZE * NUM_TRAIN_EPOCHS)
num_warmup_steps = int(num_train_steps * WARMUP_PROPORTION)
# + id="oEJldMr3WYZa" colab_type="code" colab={}
# Specify output directory and number of checkpoint steps to save
run_config = tf.estimator.RunConfig(
model_dir=OUTPUT_DIR,
save_summary_steps=SAVE_SUMMARY_STEPS,
save_checkpoints_steps=SAVE_CHECKPOINTS_STEPS)
# + id="q_WebpS1X97v" colab_type="code" outputId="1648932a-7391-49d3-8af7-52d514e226e8" colab={"base_uri": "https://localhost:8080/", "height": 156.0}
model_fn = model_fn_builder(
num_labels=len(label_list),
learning_rate=LEARNING_RATE,
num_train_steps=num_train_steps,
num_warmup_steps=num_warmup_steps)
estimator = tf.estimator.Estimator(
model_fn=model_fn,
config=run_config,
params={"batch_size": BATCH_SIZE})
# + [markdown] id="NOO3RfG1DYLo" colab_type="text"
# Next we create an input builder function that takes our training feature set (`train_features`) and produces a generator. This is a pretty standard design pattern for working with Tensorflow [Estimators](https://www.tensorflow.org/guide/estimators).
# + id="1Pv2bAlOX_-K" colab_type="code" colab={}
# Create an input function for training. drop_remainder = True for using TPUs.
train_input_fn = bert.run_classifier.input_fn_builder(
features=train_features,
seq_length=MAX_SEQ_LENGTH,
is_training=True,
drop_remainder=False)
# + [markdown] id="t6Nukby2EB6-" colab_type="text"
# Now we train our model! For me, using a Colab notebook running on Google's GPUs, my training time was about 14 minutes.
# + id="nucD4gluYJmK" colab_type="code" outputId="5d728e72-4631-42bf-c48d-3f51d4b968ce" colab={"base_uri": "https://localhost:8080/", "height": 68.0}
print('Beginning Training!')
current_time = datetime.now()
estimator.train(input_fn=train_input_fn, max_steps=num_train_steps)
print("Training took time ", datetime.now() - current_time)
# + [markdown] id="CmbLTVniARy3" colab_type="text"
# Now let's use our test data to see how well our model did:
# + id="JIhejfpyJ8Bx" colab_type="code" colab={}
test_input_fn = run_classifier.input_fn_builder(
features=test_features,
seq_length=MAX_SEQ_LENGTH,
is_training=False,
drop_remainder=False)
# + id="PPVEXhNjYXC-" colab_type="code" outputId="dd5482cd-c558-465f-c854-ec11a0175316" colab={"base_uri": "https://localhost:8080/", "height": 445.0}
estimator.evaluate(input_fn=test_input_fn, steps=None)
# + [markdown] id="ueKsULteiz1B" colab_type="text"
# Now let's write code to make predictions on new sentences:
# + id="OsrbTD2EJTVl" colab_type="code" colab={}
def getPrediction(in_sentences):
labels = ["Negative", "Positive"]
input_examples = [run_classifier.InputExample(guid="", text_a = x, text_b = None, label = 0) for x in in_sentences] # here, "" is just a dummy label
input_features = run_classifier.convert_examples_to_features(input_examples, label_list, MAX_SEQ_LENGTH, tokenizer)
predict_input_fn = run_classifier.input_fn_builder(features=input_features, seq_length=MAX_SEQ_LENGTH, is_training=False, drop_remainder=False)
predictions = estimator.predict(predict_input_fn)
return [(sentence, prediction['probabilities'], labels[prediction['labels']]) for sentence, prediction in zip(in_sentences, predictions)]
# + id="-thbodgih_VJ" colab_type="code" colab={}
pred_sentences = [
"That movie was absolutely awful",
"The acting was a bit lacking",
"The film was creative and surprising",
"Absolutely fantastic!"
]
# + id="QrZmvZySKQTm" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 649.0} outputId="3891fafb-a460-4eb8-fa6c-335a5bbc10e5"
predictions = getPrediction(pred_sentences)
# + [markdown] id="MXkRiEBUqN3n" colab_type="text"
# Voila! We have a sentiment classifier!
# + id="ERkTE8-7oQLZ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 221.0} outputId="26c33224-dc2c-4b3d-f7b4-ac3ef0a58b27"
predictions
| predicting_movie_reviews_with_bert_on_tf_hub.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="copyright"
# Copyright 2021 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] id="title:generic,gcp"
# # E2E ML on GCP: MLOps stage 3 : formalization: get started with Dataflow pipeline components
#
# <table align="left">
# <td>
# <a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/ml_ops_stage3/get_started_with_dataflow_pipeline_components.ipynb">
# <img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
# View on GitHub
# </a>
# </td>
# <td>
# <a href="https://console.cloud.google.com/ai/platform/notebooks/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/ml_ops_stage3/get_started_with_dataflow_pipeline_components.ipynb">
# Open in Google Cloud Notebooks
# </a>
# </td>
# </table>
# <br/><br/><br/>
# + [markdown] id="overview:mlops"
# ## Overview
#
#
# This tutorial demonstrates how to use Vertex AI for E2E MLOps on Google Cloud in production. It covers stage 3 (formalization): getting started with Dataflow pipeline components.
# + [markdown] id="dataset:gsod,lrg"
# ### Dataset
#
# The dataset used for this tutorial is the GSOD dataset from [BigQuery public datasets](https://cloud.google.com/bigquery/public-data). In this version of the dataset, you use only the fields year, month, and day to predict the value of the mean daily temperature (mean_temp).
# + [markdown] id="objective:mlops,stage3,get_started_dataflow_pipeline_components"
# ### Objective
#
# In this tutorial, you learn how to use prebuilt `Google Cloud Pipeline Components` for `Dataflow`.
#
# This tutorial uses the following Google Cloud ML services:
#
# - `Vertex AI Pipelines`
# - `Google Cloud Pipeline Components`
# - `Dataflow`
#
# The steps performed include:
#
# - Build an Apache Beam data pipeline.
# - Encapsulate the Apache Beam data pipeline with a Dataflow component in a Vertex AI pipeline.
# - Execute a Vertex AI pipeline.
# + [markdown] id="install_mlops"
# ## Installations
#
# Install the packages required for executing the MLOps notebooks. You only need to run this once.
# + id="install_mlops"
ONCE_ONLY = False
if ONCE_ONLY:
# ! pip3 install -U tensorflow==2.5 $USER_FLAG
# ! pip3 install -U tensorflow-data-validation==1.2 $USER_FLAG
# ! pip3 install -U tensorflow-transform==1.2 $USER_FLAG
# ! pip3 install -U tensorflow-io==0.18 $USER_FLAG
# ! pip3 install --upgrade google-cloud-aiplatform[tensorboard] $USER_FLAG
# ! pip3 install --upgrade google-cloud-bigquery $USER_FLAG
# ! pip3 install --upgrade google-cloud-logging $USER_FLAG
# ! pip3 install --upgrade apache-beam[gcp] $USER_FLAG
# ! pip3 install --upgrade pyarrow $USER_FLAG
# ! pip3 install --upgrade cloudml-hypertune $USER_FLAG
# ! pip3 install --upgrade kfp $USER_FLAG
# + [markdown] id="restart"
# ### Restart the kernel
#
# Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
# + id="restart"
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
# + [markdown] id="project_id"
# #### Set your project ID
#
# **If you don't know your project ID**, you may be able to get your project ID using `gcloud`.
# + id="set_project_id"
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
# + id="autoset_project_id"
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
# + id="set_gcloud_project_id"
# ! gcloud config set project $PROJECT_ID
# + [markdown] id="region"
# #### Region
#
# You can also change the `REGION` variable, which is used for operations
# throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
#
# - Americas: `us-central1`
# - Europe: `europe-west4`
# - Asia Pacific: `asia-east1`
#
# You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.
#
# Learn more about [Vertex AI regions](https://cloud.google.com/vertex-ai/docs/general/locations)
# + id="region"
REGION = "us-central1" # @param {type: "string"}
# + [markdown] id="timestamp"
# #### Timestamp
#
# If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
# + id="timestamp"
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
# + [markdown] id="bucket:mbsdk"
# ### Create a Cloud Storage bucket
#
# **The following steps are required, regardless of your notebook environment.**
#
# When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
#
# Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
# + id="bucket"
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
# + id="autoset_bucket"
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
    BUCKET_NAME = "gs://" + PROJECT_ID + "-aip-" + TIMESTAMP
# + [markdown] id="create_bucket"
# **Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket.
# + id="create_bucket"
# ! gsutil mb -l $REGION $BUCKET_NAME
# + [markdown] id="validate_bucket"
# Finally, validate access to your Cloud Storage bucket by examining its contents:
# + id="validate_bucket"
# ! gsutil ls -al $BUCKET_NAME
# + [markdown] id="set_service_account"
# #### Service Account
#
# **If you don't know your service account**, try to get it using the `gcloud` command by executing the second cell below.
# + id="set_service_account"
SERVICE_ACCOUNT = "[your-service-account]" # @param {type:"string"}
# + id="autoset_service_account"
if (
SERVICE_ACCOUNT == ""
or SERVICE_ACCOUNT is None
or SERVICE_ACCOUNT == "[your-service-account]"
):
# Get your GCP project id from gcloud
    shell_output = ! gcloud auth list 2>/dev/null
SERVICE_ACCOUNT = shell_output[2].strip()
print("Service Account:", SERVICE_ACCOUNT)
# + [markdown] id="set_service_account:pipelines"
# #### Set service account access for Vertex AI Pipelines
#
# Run the following commands to grant your service account access to read and write pipeline artifacts in the bucket that you created in the previous step -- you only need to run these once per service account.
# + id="set_service_account:pipelines"
# ! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectCreator $BUCKET_NAME
# ! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectViewer $BUCKET_NAME
# + [markdown] id="setup_vars"
# ### Set up variables
#
# Next, set up some variables used throughout the tutorial.
# ### Import libraries and define constants
# + id="import_aip:mbsdk"
import google.cloud.aiplatform as aip
# + id="import_kfp"
import json
from kfp import dsl
from kfp.v2 import compiler
from kfp.v2.dsl import component
# + id="import_gcpc:dataflow"
from google_cloud_pipeline_components.experimental.dataflow import \
DataflowPythonJobOp
from google_cloud_pipeline_components.experimental.wait_gcp_resources import \
WaitGcpResourcesOp
# + [markdown] id="init_aip:mbsdk"
# ### Initialize Vertex AI SDK for Python
#
# Initialize the Vertex AI SDK for Python for your project and corresponding bucket.
# + id="init_aip:mbsdk"
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
# + [markdown] id="writefile:wc.py"
# ### Write the Apache Beam pipeline module
#
# First, you write the Python module for the Dataflow pipeline. Since it is a module, you additionally add the `if __name__ == '__main__':` entry point and use `argparse` to pass command-line arguments to the module.
#
# This module implements the Apache Beam word count example.
# + id="writefile:wc.py"
# %%writefile wc.py
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""A minimalist word-counting workflow that counts words in Shakespeare.
This is the first in a series of successively more detailed 'word count'
examples.
Next, see the wordcount pipeline, then the wordcount_debugging pipeline, for
more detailed examples that introduce additional concepts.
Concepts:
1. Reading data from text files
2. Specifying 'inline' transforms
3. Counting a PCollection
4. Writing data to Cloud Storage as text files
To execute this pipeline locally, first edit the code to specify the output
location. Output location could be a local file path or an output prefix
on GCS. (Only update the output location marked with the first CHANGE comment.)
To execute this pipeline remotely, first edit the code to set your project ID,
runner type, the staging location, the temp location, and the output location.
The specified GCS bucket(s) must already exist. (Update all the places marked
with a CHANGE comment.)
Then, run the pipeline as described in the README. It will be deployed and run
using the Google Cloud Dataflow Service. No args are required to run the
pipeline. You can see the results in your output bucket in the GCS browser.
"""
from __future__ import absolute_import
import argparse
import logging
import re
from past.builtins import unicode
import apache_beam as beam
from apache_beam.io import ReadFromText
from apache_beam.io import WriteToText
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
def run(argv=None):
"""Main entry point; defines and runs the wordcount pipeline."""
parser = argparse.ArgumentParser()
parser.add_argument('--input',
dest='input',
default='gs://dataflow-samples/shakespeare/kinglear.txt',
help='Input file to process.')
parser.add_argument('--output',
dest='output',
# CHANGE 1/5: The Google Cloud Storage path is required
# for outputting the results.
default='gs://YOUR_OUTPUT_BUCKET/AND_OUTPUT_PREFIX',
help='Output file to write results to.')
known_args, pipeline_args = parser.parse_known_args(argv)
# pipeline_args.extend([
# # CHANGE 2/5: (OPTIONAL) Change this to DataflowRunner to
# # run your pipeline on the Google Cloud Dataflow Service.
# '--runner=DirectRunner',
# # CHANGE 3/5: Your project ID is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--project=SET_YOUR_PROJECT_ID_HERE',
# # CHANGE 4/5: Your Google Cloud Storage path is required for staging local
# # files.
# '--staging_location=gs://YOUR_BUCKET_NAME/AND_STAGING_DIRECTORY',
# # CHANGE 5/5: Your Google Cloud Storage path is required for temporary
# # files.
# '--temp_location=gs://YOUR_BUCKET_NAME/AND_TEMP_DIRECTORY',
# '--job_name=your-wordcount-job',
# ])
# We use the save_main_session option because one or more DoFn's in this
# workflow rely on global context (e.g., a module imported at module level).
pipeline_options = PipelineOptions(pipeline_args)
pipeline_options.view_as(SetupOptions).save_main_session = True
with beam.Pipeline(options=pipeline_options) as p:
# Read the text file[pattern] into a PCollection.
lines = p | ReadFromText(known_args.input)
# Count the occurrences of each word.
counts = (
lines
| 'Split' >> (beam.FlatMap(lambda x: re.findall(r'[A-Za-z\']+', x))
.with_output_types(unicode))
| 'PairWithOne' >> beam.Map(lambda x: (x, 1))
| 'GroupAndSum' >> beam.CombinePerKey(sum))
# Format the counts into a PCollection of strings.
def format_result(word_count):
(word, count) = word_count
return '%s: %s' % (word, count)
output = counts | 'Format' >> beam.Map(format_result)
# Write the output using a "Write" transform that has side effects.
# pylint: disable=expression-not-assigned
output | WriteToText(known_args.output)
if __name__ == '__main__':
logging.getLogger().setLevel(logging.INFO)
run()
# + [markdown] id="writefile:requirements,wc"
# ### Write the requirements (installs) for the Apache Beam pipeline module
#
# Next, create the `requirements.txt` file to specify Python modules that are required to be installed for executing the Apache Beam pipeline module -- in this case, `apache-beam` is required.
# + id="writefile:requirements,wc"
# %%writefile requirements.txt
apache-beam
# + [markdown] id="copy_to_gcs:wc"
# ### Copy python module and requirements file to Cloud Storage
#
# Next, you copy the Python module and requirements file to your Cloud Storage bucket.
#
# Additionally, you set the Cloud Storage location for the output of the Apache Beam word count pipeline.
# + id="copy_to_gcs:wc"
GCS_WC_PY = BUCKET_NAME + "/wc.py"
# ! gsutil cp wc.py $GCS_WC_PY
GCS_REQUIREMENTS_TXT = BUCKET_NAME + "/requirements.txt"
# ! gsutil cp requirements.txt $GCS_REQUIREMENTS_TXT
GCS_WC_OUT = BUCKET_NAME + "/wc_out.txt"
# + [markdown] id="create_dataflow_pipeline:wc"
# ### Create and execute the pipeline job
#
# In this example, the `DataflowPythonJobOp` component takes the following parameters:
#
# - `project_id`: The project ID.
# - `location`: The region.
# - `python_module_path`: The Cloud Storage location of the Apache Beam pipeline.
# - `temp_location`: The Cloud Storage temporary file workspace for the Apache Beam pipeline.
# - `requirements_file_path`: The required Python modules to install.
# - `args`: The arguments to pass to the Apache Beam pipeline.
#
# Learn more about [Google Cloud Pipeline Component for Dataflow](https://google-cloud-pipeline-components.readthedocs.io/en/google-cloud-pipeline-components-0.2.0/google_cloud_pipeline_components.experimental.dataflow.html)
# + id="create_dataflow_pipeline:wc"
import json
PIPELINE_ROOT = "{}/pipeline_root/dataflow_wc".format(BUCKET_NAME)
@dsl.pipeline(name="dataflow-wc", description="Dataflow word count component pipeline")
def pipeline(
python_file_path: str = GCS_WC_PY,
project_id: str = PROJECT_ID,
location: str = REGION,
staging_dir: str = PIPELINE_ROOT,
args: list = ["--output", GCS_WC_OUT, "--runner", "DataflowRunner"],
requirements_file_path: str = GCS_REQUIREMENTS_TXT,
):
dataflow_python_op = DataflowPythonJobOp(
project=project_id,
location=location,
python_module_path=python_file_path,
temp_location=staging_dir,
requirements_file_path=requirements_file_path,
args=args,
)
dataflow_wait_op = WaitGcpResourcesOp(
gcp_resources=dataflow_python_op.outputs["gcp_resources"]
)
compiler.Compiler().compile(pipeline_func=pipeline, package_path="dataflow_wc.json")
pipeline = aip.PipelineJob(
display_name="dataflow_wc",
template_path="dataflow_wc.json",
pipeline_root=PIPELINE_ROOT,
enable_caching=False,
)
pipeline.run()
# ! gsutil cat {GCS_WC_OUT}* | head -n10
# ! rm -f dataflow_wc.json wc.py requirements.txt
# + [markdown] id="delete_pipeline"
# ### Delete a pipeline job
#
# After a pipeline job is completed, you can delete the pipeline job with the method `delete()`. Prior to completion, a pipeline job can be canceled with the method `cancel()`.
# + id="delete_pipeline"
pipeline.delete()
# + [markdown] id="writefile:split.py"
# ### Write the Apache Beam pipeline module
#
# Next, you write the Python module for the Apache Beam pipeline. This module splits a dataset into training and test data, and writes the split dataset as CSV files to a Cloud Storage bucket. In this example, the Python module receives some pipeline arguments from the command line, which are passed by the Dataflow pipeline component.
#
# *Note:* The Dataflow prebuilt component implicitly adds Dataflow-specific command-line arguments, such as `project`, `location`, `runner`, and `temp_location`.
# + id="writefile:split.py"
# %%writefile split.py
import argparse
import logging
import tensorflow_transform.beam as tft_beam
from past.builtins import unicode
import apache_beam as beam
from apache_beam.io import ReadFromText
from apache_beam.io import WriteToText
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
def run(argv=None):
"""Main entry point; defines and runs the wordcount pipeline."""
parser = argparse.ArgumentParser()
parser.add_argument('--bq_table',
dest='bq_table')
parser.add_argument('--bucket',
dest='bucket')
args, pipeline_args = parser.parse_known_args(argv)
logging.info("ARGS")
logging.info(args)
logging.info("PIPELINE ARGS")
logging.info(pipeline_args)
for i in range(0, len(pipeline_args), 2):
if "--temp_location" == pipeline_args[i]:
temp_location = pipeline_args[i+1]
elif "--project" == pipeline_args[i]:
project = pipeline_args[i+1]
exported_train = args.bucket + '/exported_data/train'
exported_eval = args.bucket + '/exported_data/eval'
pipeline_options = PipelineOptions(pipeline_args)
pipeline_options.view_as(SetupOptions).save_main_session = True
with beam.Pipeline(options=pipeline_options) as pipeline:
with tft_beam.Context(temp_location):
raw_data_query = "SELECT {0},{1} FROM {2} LIMIT 500".format("CAST(station_number as STRING) AS station_number,year,month,day","mean_temp", args.bq_table)
def parse_bq_record(bq_record):
"""Parses a bq_record to a dictionary."""
output = {}
for key in bq_record:
output[key] = [bq_record[key]]
return output
def split_dataset(bq_row, num_partitions, ratio):
"""Returns a partition number for a given bq_row."""
import json
assert num_partitions == len(ratio)
bucket = sum(map(ord, json.dumps(bq_row))) % sum(ratio)
total = 0
for i, part in enumerate(ratio):
total += part
if bucket < total:
return i
return len(ratio) - 1
# Read raw BigQuery data.
raw_train_data, raw_eval_data = (
pipeline
| "Read Raw Data"
>> beam.io.ReadFromBigQuery(
query=raw_data_query,
project=project,
use_standard_sql=True,
)
| "Parse Data" >> beam.Map(parse_bq_record)
| "Split" >> beam.Partition(split_dataset, 2, ratio=[8, 2])
)
# Write raw train data to GCS .
_ = raw_train_data | "Write Raw Train Data" >> beam.io.WriteToText(
file_path_prefix=exported_train, file_name_suffix=".csv"
)
# Write raw eval data to GCS .
_ = raw_eval_data | "Write Raw Eval Data" >> beam.io.WriteToText(
file_path_prefix=exported_eval, file_name_suffix=".csv"
)
if __name__ == '__main__':
logging.getLogger().setLevel(logging.INFO)
run()
# + [markdown] id="writefile:requirements,split"
# ### Write the requirements (installs) for the Apache Beam pipeline module
#
# Next, create the `requirements.txt` file to specify Python modules that are required to be installed for executing the Apache Beam pipeline module -- in this case, `apache-beam` and `tensorflow-transform` are required.
# + id="writefile:requirements,split"
# %%writefile requirements.txt
apache-beam
tensorflow-transform==1.2.0
# + [markdown] id="writefile:setup,split"
# ### Write the setup.py (installs) for the Dataflow workers
#
# Next, create the `setup.py` file to specify Python modules that are required to be installed for executing the Dataflow workers -- in this case, `tensorflow-transform` is required.
# + id="writefile:setup,split"
# %%writefile setup.py
import setuptools
REQUIRED_PACKAGES = [
'tensorflow-transform==1.2.0',
]
PACKAGE_NAME = 'my_package'
PACKAGE_VERSION = '0.0.1'
setuptools.setup(
name=PACKAGE_NAME,
version=PACKAGE_VERSION,
description='Demo for split transformation',
install_requires=REQUIRED_PACKAGES,
author="<EMAIL>",
packages=setuptools.find_packages()
)
# + [markdown] id="copy_to_gcs:split"
# ### Copy python module and requirements file to Cloud Storage
#
# Next, you copy the Python module, requirements, and setup files to your Cloud Storage bucket.
#
# Additionally, you set the Cloud Storage location for the output of the Apache Beam dataset split pipeline.
# + id="copy_to_gcs:split"
GCS_SPLIT_PY = BUCKET_NAME + "/split.py"
# ! gsutil cp split.py $GCS_SPLIT_PY
GCS_REQUIREMENTS_TXT = BUCKET_NAME + "/requirements.txt"
# ! gsutil cp requirements.txt $GCS_REQUIREMENTS_TXT
GCS_SETUP_PY = BUCKET_NAME + "/setup.py"
# ! gsutil cp setup.py $GCS_SETUP_PY
# + [markdown] id="import_file:u_dataset,bq"
# #### Location of BigQuery training data.
#
# Now set the variable `IMPORT_FILE` to the location of the data table in BigQuery.
# + id="import_file:gsod,bq,lrg"
IMPORT_FILE = "bq://bigquery-public-data.samples.gsod"
BQ_TABLE = "bigquery-public-data.samples.gsod"
# + [markdown] id="create_dataflow_pipeline:split"
# ### Create and execute the pipeline job
#
# In this example, the `DataflowPythonJobOp` component takes the following parameters:
#
# - `project_id`: The project ID.
# - `location`: The region.
# - `python_module_path`: The Cloud Storage location of the Apache Beam pipeline.
# - `temp_location`: The Cloud Storage temporary file workspace for the Apache Beam pipeline.
# - `requirements_file_path`: The required Python modules to install.
# - `args`: The arguments to pass to the Apache Beam pipeline.
#
# Learn more about [Google Cloud Pipeline Component for Dataflow](https://google-cloud-pipeline-components.readthedocs.io/en/google-cloud-pipeline-components-0.2.0/google_cloud_pipeline_components.experimental.dataflow.html)
#
# Additionally, you add `--runner=DataflowRunner` to the input args, to tell the component to use Dataflow instead of DirectRunner for the Apache Beam job.
# + id="create_dataflow_pipeline:split"
PIPELINE_ROOT = "{}/pipeline_root/dataflow_split".format(BUCKET_NAME)
@dsl.pipeline(name="dataflow-split", description="Dataflow split dataset")
def pipeline(
python_file_path: str = GCS_SPLIT_PY,
project_id: str = PROJECT_ID,
location: str = REGION,
staging_dir: str = PIPELINE_ROOT,
args: list = [
"--bucket",
BUCKET_NAME,
"--bq_table",
BQ_TABLE,
"--runner",
"DataflowRunner",
"--setup_file",
GCS_SETUP_PY,
],
requirements_file_path: str = GCS_REQUIREMENTS_TXT,
):
DataflowPythonJobOp.component_spec.implementation.container.image = (
"gcr.io/ml-pipeline/google-cloud-pipeline-components:v0.2.0_dataflow_logs_fix"
)
dataflow_python_op = DataflowPythonJobOp(
project=project_id,
location=location,
python_module_path=python_file_path,
temp_location=staging_dir,
requirements_file_path=requirements_file_path,
args=args,
)
dataflow_wait_op = WaitGcpResourcesOp(
gcp_resources=dataflow_python_op.outputs["gcp_resources"]
)
compiler.Compiler().compile(pipeline_func=pipeline, package_path="dataflow_split.json")
pipeline = aip.PipelineJob(
display_name="dataflow_split",
template_path="dataflow_split.json",
pipeline_root=PIPELINE_ROOT,
enable_caching=False,
)
pipeline.run()
# ! gsutil ls {BUCKET_NAME}/exported_data
# ! rm -f dataflow_split.json split.py requirements.txt
# + [markdown] id="delete_pipeline"
# ### Delete a pipeline job
#
# After a pipeline job is completed, you can delete the pipeline job with the method `delete()`. Prior to completion, a pipeline job can be canceled with the method `cancel()`.
# + id="delete_pipeline"
pipeline.delete()
# + [markdown] id="cleanup:mbsdk"
# # Cleaning up
#
# To clean up all Google Cloud resources used in this project, you can [delete the Google Cloud
# project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.
#
# Otherwise, you can delete the individual resources you created in this tutorial:
#
# - Dataset
# - Pipeline
# - Model
# - Endpoint
# - AutoML Training Job
# - Batch Job
# - Custom Job
# - Hyperparameter Tuning Job
# - Cloud Storage Bucket
# + id="cleanup:mbsdk"
delete_all = True
if delete_all:
# Delete the dataset using the Vertex dataset object
try:
if "dataset" in globals():
dataset.delete()
except Exception as e:
print(e)
# Delete the model using the Vertex model object
try:
if "model" in globals():
model.delete()
except Exception as e:
print(e)
# Delete the endpoint using the Vertex endpoint object
try:
if "endpoint" in globals():
endpoint.delete()
except Exception as e:
print(e)
# Delete the AutoML or Pipeline training job
try:
if "dag" in globals():
dag.delete()
except Exception as e:
print(e)
# Delete the custom training job
try:
if "job" in globals():
job.delete()
except Exception as e:
print(e)
# Delete the batch prediction job using the Vertex batch prediction object
try:
if "batch_predict_job" in globals():
batch_predict_job.delete()
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex hyperparameter tuning object
try:
if "hpt_job" in globals():
hpt_job.delete()
except Exception as e:
print(e)
if "BUCKET_NAME" in globals():
# ! gsutil rm -r $BUCKET_NAME
| notebooks/community/ml_ops/stage3/get_started_with_dataflow_pipeline_components.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Exercise 02
# ### Programming Languages and Paradigms
#
# <NAME>
# <br /> <b>RA</b> 816114781
# ### Lists
# 1- Write a program that receives a list of numbers and
# - returns the largest element
# - returns the sum of the elements
# - returns the number of occurrences of the first element of the list
# - returns the mean of the elements
# - returns the value closest to the mean of the elements
# - returns the sum of the negative elements
# - returns the number of equal neighbors
# +
nums = [int(x) for x in input("Enter a list of numbers separated by spaces: ").split()]
media = sum(nums) / len(nums)
print(f"nums: {nums}\n")
print(f"The largest element is: {max(nums)}")
print(f"The sum of the elements is: {sum(nums)}")
print(f"The element {nums[0]} appears {nums.count(nums[0])}x in the list")
print(f"The mean of the elements is: {media}")
print(f"The value closest to the mean is: {min(nums, key=lambda x: abs(x - media))}")
print(f"The sum of the negative values is: {sum(i for i in nums if i < 0)}")
count = 0
for i in range(len(nums) - 1):
    if nums[i] == nums[i + 1]:
        count += 1
print(f"The number of equal neighbors is: {count}")
# -
# 2- Write a program that receives two lists and returns True if they
# are equal or False otherwise.
# Two lists are equal if they have the same values in the same
# order.
# +
list_a = input("Enter the elements of list A, separated by commas: ").split(',')
list_b = input("Enter the elements of list B, separated by commas: ").split(',')
# Equal lists have the same values in the same order, so compare them directly.
if list_a == list_b:
    print("The two lists are equal.")
else:
    print("The two lists are not equal.")
# -
# 3- Write a program that receives two lists and returns True if they
# have the same elements or False otherwise.
# Two lists have the same elements when they are composed of the
# same values, but not necessarily in the same order.
# +
list_a = input("Enter the elements of list A, separated by commas: ").split(',')
list_b = input("Enter the elements of list B, separated by commas: ").split(',')
if set(list_a) == set(list_b):
    print("The two lists have the same elements.")
else:
    print("The two lists do not have the same elements.")
# -
# 4- Write a program that traverses a list with the following
# format: [['Brasil', 'Italia', [10, 9]], ['Brasil', 'Espanha', [5, 7]],
# ['Italia', 'Espanha', [7, 8]]]. This list gives the number of fouls
# each team committed in each match. In the list above, in the match
# between Brasil and Italia, Brasil committed 10 fouls and Italia 9.
# The program must print:
# - the total number of fouls in the championship
# - the team that committed the most fouls
# - the team that committed the fewest fouls
# +
import operator

jogos = [['Brasil', 'Italia', [10, 9]], ['Brasil', 'Espanha', [5, 7]], ['Italia', 'Espanha', [7, 8]]]
total_faltas = 0
dicFaltas = {'Brasil': 0, 'Italia': 0, 'Espanha': 0}
for jogo in jogos:
    total_faltas += sum(jogo[2])
    dicFaltas[jogo[0]] += jogo[2][0]
    dicFaltas[jogo[1]] += jogo[2][1]
print(dicFaltas)
time_mais_falta = max(dicFaltas.items(), key=operator.itemgetter(1))[0]
time_menos_falta = min(dicFaltas.items(), key=operator.itemgetter(1))[0]
print(f"Total fouls in the championship: {total_faltas}")
print(f"Team with the most fouls: {time_mais_falta}")
print(f"Team with the fewest fouls: {time_menos_falta}")
# -
# ### Dictionaries
# 5- Write a program that counts the number of vowels in a
# string and stores that count in a dictionary whose keys are the
# vowels themselves.
# +
sentence = input("Enter a sentence: ")
vogais = {i: 0 for i in 'aeiouáéêãõàÁÊÉÃÕÀAEIOU'}
for char in sentence:
    if char in vogais:
        vogais[char] += 1
print(vogais)
# -
# 6- Write a program that reads two grades for several students and
# stores those grades in a dictionary whose key is the student's
# name. Data entry must stop when an empty string is read as the
# name. Write a function that returns a student's average, given
# their name.
# +
values = input("Enter the grades as Name, grade1, grade2, separating students with semicolons: ").split(";")
notas = {}
for n in values:
    nota = n.split(',')
    notas[nota[0]] = {"nota1": nota[1], "nota2": nota[2]}
for n in notas:
    media = (int(notas[n]['nota1']) + int(notas[n]['nota2'])) / 2
    print(f"{n}'s average is: {media}")
# -
# 7- A go-kart track allows 10 laps for each of 6 racers. Write a
# program that reads all lap times in seconds and stores them in a
# dictionary whose key is the racer's name. At the end, report who
# set the best lap of the race and on which lap, and also the final
# ranking in order (1st is the champion). The champion is the racer
# with the lowest average lap time.
# +
import statistics

tempos = {}
for _ in range(6):
    linha = input("Enter the racer's name and the 10 lap times, separated by spaces: ").split()
    tempos[linha[0]] = [float(t) for t in linha[1:]]
tempo, volta, nome = min((t, i + 1, n) for n, voltas in tempos.items() for i, t in enumerate(voltas))
print(f"Best lap: {nome}, lap {volta} ({tempo}s)")
for pos, n in enumerate(sorted(tempos, key=lambda n: statistics.mean(tempos[n])), 1):
    print(f"{pos}. {n}")
# -
# 8- Write a program that stores a phone book in a dictionary. Each
# person may have one or more phone numbers, and the dictionary key
# is the person's name. Your program must provide the following
# functions:
# incluirNovoNome – adds a new name to the phone book, with one or
# more phone numbers. It receives the name and the numbers as
# arguments.
# incluirTelefone – adds a phone number to a name that already
# exists in the phone book. If the name is not in the phone book,
# ask whether the user wants to add it; if the answer is yes, use
# the previous function to add the new name.
# excluirTelefone – removes a phone number from a person already in
# the phone book. If the person has only one number, remove the
# person from the phone book.
# excluirNome – removes a person from the phone book.
# consultarTelefone – returns the phone numbers of a person in the
# phone book.
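# Exercise 8 ships no starter code. A minimal sketch of the five functions
# (names kept from the statement, written in snake_case), with the
# interactive confirmation step replaced by a comment:

```python
agenda = {}

def incluir_novo_nome(nome, telefones):
    agenda[nome] = list(telefones)

def incluir_telefone(nome, telefone):
    if nome not in agenda:
        # the interactive version would ask the user for confirmation here
        incluir_novo_nome(nome, [telefone])
    else:
        agenda[nome].append(telefone)

def excluir_telefone(nome, telefone):
    agenda[nome].remove(telefone)
    if not agenda[nome]:  # last number removed: drop the person too
        del agenda[nome]

def excluir_nome(nome):
    del agenda[nome]

def consultar_telefone(nome):
    return agenda.get(nome, [])

incluir_novo_nome("Ana", ["1111-1111"])
incluir_telefone("Ana", "2222-2222")
print(consultar_telefone("Ana"))  # ['1111-1111', '2222-2222']
```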
# ### Files
# 9- Write a program that reads a text file containing a list of IP
# addresses and generates another file with a report of the valid
# and invalid IP addresses.
# The input file has the following format:
# - 172.16.17.32
# - 192.168.1.1
# - 8.35.67.74
# - 257.32.4.5
# - 85.345.1.2
# - 1.2.3.4
# - 192.168.127.12
# - 192.168.0.256
#
# The output file has the following format:
# - [Endereços válidos:]
# - 172.16.17.32
# - 192.168.1.1
# - 8.35.67.74
# - 1.2.3.4
# - [Endereços inválidos:]
# - 257.32.4.5
# - 85.345.1.2
# - 192.168.127.12
# - 192.168.0.256
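# A minimal sketch for exercise 9. The validity rule assumed here is four
# dot-separated integers in the range 0–255; the file names passed to
# `gera_relatorio` are placeholders:

```python
def ip_valido(ip):
    """True when `ip` is four dot-separated integers, each in 0..255."""
    partes = ip.split(".")
    if len(partes) != 4:
        return False
    return all(p.isdigit() and 0 <= int(p) <= 255 for p in partes)

def gera_relatorio(entrada="entrada.txt", saida="relatorio.txt"):
    with open(entrada) as f:
        ips = [linha.strip() for linha in f if linha.strip()]
    with open(saida, "w") as f:
        f.write("[Endereços válidos:]\n")
        f.writelines(ip + "\n" for ip in ips if ip_valido(ip))
        f.write("[Endereços inválidos:]\n")
        f.writelines(ip + "\n" for ip in ips if not ip_valido(ip))
```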
# 10- ACME Inc., a company with 500 employees, is running out of
# disk space on its file server. To help solve this problem, the
# network administrator needs to know how much space each user
# occupies and to identify the users occupying the most space.
# Using a program downloaded from the Internet, he produced the
# following file, named "usuarios.txt":
# alexandre 456123789
# anderson 1245698456
# antonio 123456456
# carlos 91257581
# cesar 987458
# rosemary 789456125
# In this file, the user-name field is 15 characters wide. From this
# file, write a program that generates a report named
# "relatório.txt" in the following format:
# ACME Inc. Uso do espaço em disco pelos usuários
# ------------------------------------------------------------------------
# Nr. Usuário Espaço utilizado % do uso
# 1 alexandre 434,99 MB 16,85%
# 2 anderson 1187,99 MB 46,02%
# 3 antonio 117,73 MB 4,56%
# 4 carlos 87,03 MB 3,37%
# 5 cesar 0,94 MB 0,04%
# 6 rosemary 752,88 MB 29,16%
# Espaço total ocupado: 2581,57 MB
# Espaço médio ocupado: 430,26 MB
# The input file must be read only once, with the data kept in
# memory if needed, so that the program runs quickly. The
# conversion of the occupied disk space from bytes to megabytes
# must be done in a separate function called by the main program.
# The percentage-of-use calculation must also be done in a
# separate function called by the main program.
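# A minimal sketch for exercise 10, keeping the bytes-to-MB conversion and
# the percentage in separate functions as the statement requires. Assuming
# 1 MB = 1024 * 1024 bytes, which reproduces the sample figures (e.g.
# 456123789 bytes → 434.99 MB):

```python
def bytes_para_mb(b):
    """Convert a size in bytes to megabytes (1 MB = 1024 * 1024 bytes)."""
    return b / (1024 * 1024)

def percentual(parte, total):
    """Percentage that `parte` represents of `total`."""
    return 100 * parte / total

def gera_relatorio(entrada="usuarios.txt", saida="relatório.txt"):
    usuarios = []
    with open(entrada) as f:
        for linha in f:
            # the statement fixes the user-name field at 15 characters
            nome, espaco = linha[:15].strip(), int(linha[15:])
            usuarios.append((nome, bytes_para_mb(espaco)))
    total = sum(mb for _, mb in usuarios)
    with open(saida, "w") as f:
        f.write("ACME Inc. Uso do espaço em disco pelos usuários\n")
        f.write("-" * 62 + "\n")
        f.write("Nr.  Usuário          Espaço utilizado   % do uso\n")
        for i, (nome, mb) in enumerate(usuarios, start=1):
            f.write(f"{i:<4} {nome:<15} {mb:>10.2f} MB {percentual(mb, total):>8.2f}%\n")
        f.write(f"Espaço total ocupado: {total:.2f} MB\n")
        f.write(f"Espaço médio ocupado: {total / len(usuarios):.2f} MB\n")
```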
# (source notebook: Exercicio02.ipynb)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
#
# # Maxwell filter data with movement compensation
#
#
# Demonstrate movement compensation on simulated data. The simulated data
# contains bilateral activation of auditory cortices, repeated over 14
# different head rotations (head center held fixed). See the following for
# details:
#
# https://github.com/mne-tools/mne-misc-data/blob/master/movement/simulate.py
#
#
#
# +
# Authors: <NAME> <<EMAIL>>
#
# License: BSD (3-clause)
from os import path as op
import mne
from mne.preprocessing import maxwell_filter
print(__doc__)
data_path = op.join(mne.datasets.misc.data_path(verbose=True), 'movement')
head_pos = mne.chpi.read_head_pos(op.join(data_path, 'simulated_quats.pos'))
raw = mne.io.read_raw_fif(op.join(data_path, 'simulated_movement_raw.fif'))
raw_stat = mne.io.read_raw_fif(op.join(data_path,
'simulated_stationary_raw.fif'))
# -
# Visualize the "subject" head movements. By providing the measurement
# information, the distance to the nearest sensor in each direction
# (e.g., left/right for the X direction, forward/backward for Y) can
# be shown in blue, and the destination (if given) shown in red.
#
#
mne.viz.plot_head_positions(
head_pos, mode='traces', destination=raw.info['dev_head_t'], info=raw.info)
# This can also be visualized using a quiver.
#
#
mne.viz.plot_head_positions(
head_pos, mode='field', destination=raw.info['dev_head_t'], info=raw.info)
# Process our simulated raw data (taking into account head movements).
#
#
# +
# extract our resulting events
events = mne.find_events(raw, stim_channel='STI 014')
events[:, 2] = 1
raw.plot(events=events)
topo_kwargs = dict(times=[0, 0.1, 0.2], ch_type='mag', vmin=-500, vmax=500,
time_unit='s')
# -
# First, take the average of stationary data (bilateral auditory patterns).
#
#
evoked_stat = mne.Epochs(raw_stat, events, 1, -0.2, 0.8).average()
evoked_stat.plot_topomap(title='Stationary', **topo_kwargs)
# Second, take a naive average, which averages across epochs that have been
# simulated to have different head positions and orientations, thereby
# spatially smearing the activity.
#
#
evoked = mne.Epochs(raw, events, 1, -0.2, 0.8).average()
evoked.plot_topomap(title='Moving: naive average', **topo_kwargs)
# Third, use raw movement compensation (restores pattern).
#
#
raw_sss = maxwell_filter(raw, head_pos=head_pos)
evoked_raw_mc = mne.Epochs(raw_sss, events, 1, -0.2, 0.8).average()
evoked_raw_mc.plot_topomap(title='Moving: movement compensated', **topo_kwargs)
# (source notebook: 0.16/_downloads/plot_movement_compensation.ipynb)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <img src="images/usm.jpg" width="480" height="240" align="left"/>
# # MAT281 - Laboratorio N°02
#
# ## Class objectives
#
# * Review the basic concepts of numpy.
# ## Contents
#
# * [Problem 01](#p1)
# * [Problem 02](#p2)
# * [Problem 03](#p3)
# <a id='p1'></a>
#
# ## Problem 01
#
# A **simple moving average** (SMA) is the average of the last $k$ data points; that is, if $a_1$,$a_2$,...,$a_n$ is an $n$-dimensional array, the SMA is defined by:
#
# $$sma(k) =\dfrac{1}{k}(a_{n}+a_{n-1}+...+a_{n-(k-1)}) = \dfrac{1}{k}\sum_{i=0}^{k-1}a_{n-i} $$
#
#
# We can also compute the SMA with a moving window of size $n$, sliding the window forward one position at a time:
#
# * For $a = [1,2,3,4,5]$, the SMA with a window of $n=2$ is:
#
#
# * sma(2): [mean(1,2), mean(2,3), mean(3,4), mean(4,5)] = [1.5, 2.5, 3.5, 4.5]
# * sma(3): [mean(1,2,3), mean(2,3,4), mean(3,4,5)] = [2., 3., 4.]
#
#
# Implement a function called `sma` whose input is a one-dimensional array $a$ and an integer $n$, and whose output is the simple moving average over the array:
#
# * **Example**: *sma([5,3,8,10,2,1,5,1,0,2], 2)* = $[4. , 5.5, 9. , 6. , 1.5, 3. , 3. , 0.5, 1. ]$
#
# Here the SMA of the array is computed with a window of $n=2$.
#
# **Hint**: use the function `numpy.cumsum`
import numpy as np
def sma(a, n):
    # cumulative sums; the difference of cumsums n apart is each window's sum
    ret = np.cumsum(a, dtype=float)
    ret[n:] = ret[n:] - ret[:-n]
    return ret[n - 1:] / n
a=[5,3,8,10,2,1,5,1,0,2]
n=2
sma(a, n)
# <a id='p2'></a>
#
# ## Problem 02
#
# The function **strides($a,n,p$)** transforms a one-dimensional array $a$ into a matrix with $n$ columns, where each successive row is built by shifting the starting position in the array $p$ steps forward.
#
# * For the one-dimensional array $a$ = [ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10], strides($a,4,2$) builds a matrix with $4$ columns, shifting forward two positions at a time.
#
# The result should look like this:$$\begin{pmatrix}
# 1& 2 &3 &4 \\
# 3& 4&5&6 \\
# 5& 6 &7 &8 \\
# 7& 8 &9 &10 \\
# \end{pmatrix}$$
#
#
# Implement a function called `strides(a, n, p)` whose input is a one-dimensional array and which returns the matrix with $n$ columns, whose forward shifts are made $p$ steps at a time.
#
# * **Example**: *strides($a$,4,2)* =$\begin{pmatrix}
# 1& 2 &3 &4 \\
# 3& 4&5&6 \\
# 5& 6 &7 &8 \\
# 7& 8 &9 &10 \\
# \end{pmatrix}$
#
import numpy as np
a = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
print(a)
def strides(a, n, p):
    # number of windows of length n that fit when stepping by p
    filas = (len(a) - n) // p + 1
    b = np.zeros((filas, n))
    contador = 0
    for i in range(filas):
        b[i, :] = a[contador:contador + n]
        contador += p
    return b
strides(a,4,2)
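# NumPy (1.20 and later) also ships `numpy.lib.stride_tricks.sliding_window_view`,
# which solves the same problem without an explicit loop — take every window
# of length $n$, then keep every $p$-th one:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def strides_np(a, n, p):
    """All windows of length n, keeping every p-th one."""
    return sliding_window_view(np.asarray(a), n)[::p]

a = np.arange(1, 11)
print(strides_np(a, 4, 2))
# [[ 1  2  3  4]
#  [ 3  4  5  6]
#  [ 5  6  7  8]
#  [ 7  8  9 10]]
```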
# <a id='p3'></a>
#
# ## Problem 03
#
#
# A **magic square** is an $n \times n$ matrix of positive integers such that
# the sums of the numbers along every row, every column and both main diagonals are all equal. Usually the numbers used to fill the cells are consecutive, from 1 to $n^2$, where $n$ is the number of rows and columns of the magic square.
#
# When the numbers are consecutive from 1 to $n^2$, the common sum along rows, columns and main diagonals
# equals: $$M_{n} = \dfrac{n(n^2+1)}{2}$$
# For example,
#
# * $A= \begin{pmatrix}
# 4& 9 &2 \\
# 3& 5&7 \\
# 8& 1 &6
# \end{pmatrix}$,
# is a magic square.
#
# * $B= \begin{pmatrix}
# 4& 2 &9 \\
# 3& 5&7 \\
# 8& 1 &6
# \end{pmatrix}$, is not a magic square.
#
# Implement a function called `es_cuadrado_magico` whose input is a square matrix of size $n$ with consecutive numbers from $1$ to $n^2$ and whose output is *True* if it is a magic square, or *False* otherwise.
#
# * **Example**: *es_cuadrado_magico($A$)* = True, *es_cuadrado_magico($B$)* = False
#
# **Hint**: write a helper function that validates that the matrix is square and that its numbers are consecutive from 1 to $n^2$.
matriz=[[4,9,2],[3,5,7],[8,1,6]]
def es_cuadrado_magico(matriz):
    N = len(matriz[0])
    sumas = []
    # row sums
    for i in range(N):
        sumas.append(sum(matriz[i][j] for j in range(N)))
    # column sums
    for j in range(N):
        sumas.append(sum(matriz[i][j] for i in range(N)))
    # main diagonal
    sumas.append(sum(matriz[k][k] for k in range(N)))
    # anti-diagonal
    sumas.append(sum(matriz[N - 1 - j][j] for j in range(N)))
    # magic square: every sum equals the first one
    return all(s == sumas[0] for s in sumas)
es_cuadrado_magico(matriz)
matriz2=[[4,2,9],[3,5,7],[8,1,6]]
es_cuadrado_magico(matriz2)
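# The hint asks for a separate check that the matrix is square and contains
# exactly the consecutive numbers 1 to $n^2$. A minimal sketch of such a
# validator, which could be called before the sum checks:

```python
def es_valida(matriz):
    """Square matrix whose entries are exactly the numbers 1..n**2?"""
    n = len(matriz)
    if any(len(fila) != n for fila in matriz):
        return False
    valores = sorted(v for fila in matriz for v in fila)
    return valores == list(range(1, n * n + 1))

print(es_valida([[4, 9, 2], [3, 5, 7], [8, 1, 6]]))  # True
print(es_valida([[1, 2], [3, 5]]))                   # False
```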
# (source notebook: homeworks/laboratorio_02.ipynb)