# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="NrHfTDyt6TMG"
# # Gaussian processes and Bayesian optimization
# + [markdown] colab_type="text" id="oxDRoppa6TMK"
# In this assignment you will learn how to use the <a href="http://sheffieldml.github.io/GPy/">GPy</a> and <a href="http://sheffieldml.github.io/GPyOpt/">GPyOpt</a> libraries to work with Gaussian processes. These libraries provide simple and intuitive interfaces for training and inference, and we will get familiar with them through a few tasks.
# + [markdown] colab_type="text" id="Cr9w3wbF6TMM"
# ### Setup
# Load auxiliary files and then install and import the necessary libraries.
# + colab={} colab_type="code" id="etfETFjO6TMk"
import numpy as np
import GPy
import GPyOpt
import matplotlib.pyplot as plt
from sklearn.svm import SVR
import sklearn.datasets
from xgboost import XGBRegressor
from sklearn.model_selection import cross_val_score
import time
# %matplotlib inline
# + [markdown] colab_type="text" id="SkxQnNSl6TM3"
# ## Gaussian processes: GPy (<a href="http://pythonhosted.org/GPy/">documentation</a>)
# + [markdown] colab_type="text" id="9m363FFu6TM5"
# We will start with a simple regression problem, for which we will try to fit a Gaussian Process with RBF kernel.
# + colab={} colab_type="code" id="RdCm9E3c6TM7"
def generate_points(n=25, noise_variance=0.0036):
np.random.seed(777)
X = np.random.uniform(-3., 3., (n, 1))
y = np.sin(X) + np.random.randn(n, 1) * noise_variance**0.5
return X, y
def generate_noise(n=25, noise_variance=0.0036):
np.random.seed(777)
X = np.random.uniform(-3., 3., (n, 1))
y = np.random.randn(n, 1) * noise_variance**0.5
return X, y
# + colab={"base_uri": "https://localhost:8080/", "height": 265} colab_type="code" id="AIucv9NP6TND" outputId="c19b2081-a8a8-435e-db0f-5e5a6a4906e3"
# Create data points
X, y = generate_points()
plt.plot(X, y, '.')
plt.show()
# + [markdown] colab_type="text" id="ZJgdjy2K6TNH"
# To fit a Gaussian process, you will need to define a kernel. For the Gaussian (RBF) kernel you can use the `GPy.kern.RBF` class.
# + [markdown] colab_type="text" id="r9VqhPy96TNJ"
# <b> Task 1.1: </b> Create an RBF kernel with variance 1.5 and length-scale 2 for 1-D samples, and compute the value of the kernel between the points `X[5]` and `X[9]`. Submit a single number.
# <br><b>Hint:</b> use the `.K` method of the kernel object.
# + colab={"base_uri": "https://localhost:8080/", "height": 35} colab_type="code" id="srU1pqXo6TNK" outputId="c2b9ab8c-9d99-4ce7-f5ec-37a38d250bc3"
kernel = GPy.kern.RBF(input_dim=1, variance=1.5, lengthscale=2.0) ### YOUR CODE HERE
kernel_59 = kernel.K(X[5].reshape((1,1)), X[9].reshape((1,1)))[0,0]
kernel_59
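As a sanity check, the kernel value can be reproduced directly from the RBF definition $k(x, x') = \sigma^2 \exp(-(x - x')^2 / (2\ell^2))$; the points below are made-up stand-ins for `X[5]` and `X[9]`:

```python
import numpy as np

# RBF kernel evaluated by hand; x1, x2 are hypothetical 1-D inputs
variance, lengthscale = 1.5, 2.0
x1, x2 = 0.5, -1.0
k_val = variance * np.exp(-(x1 - x2) ** 2 / (2 * lengthscale ** 2))
print(k_val)
```

The value is always at most `variance` and shrinks as the points move apart relative to the length-scale.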
# + [markdown] colab_type="text" id="wxWXD0vG6TNR"
# <b> Task 1.2: </b> Fit a GP to the generated data using the kernel from the previous task. Submit the predicted mean and variance at position $x=1$.
# <br><b>Hint:</b> use `GPy.models.GPRegression` class.
# + colab={"base_uri": "https://localhost:8080/", "height": 52} colab_type="code" id="GcI-b2LW6TNS" outputId="d39c1efd-57a3-459d-d228-a97fa28be0f1"
model = GPy.models.GPRegression(X, y, kernel=kernel)  ### YOUR CODE HERE
inference = model.predict(np.array([[1.]]))
mean = inference[0].item()  ### YOUR CODE HERE  (np.asscalar was removed in NumPy 1.23; .item() is the replacement)
variance = inference[1].item()  ### YOUR CODE HERE
mean, variance
# + colab={"base_uri": "https://localhost:8080/", "height": 297} colab_type="code" id="8HgBVuqT6TNX" outputId="0832984d-883c-4a2c-96bd-db929204155b"
model.plot()
plt.show()
# + [markdown] colab_type="text" id="W3P5p-ZL6TNg"
# We see that the model didn't fit the data very well. Let's try to fit the kernel and noise parameters automatically, as discussed in the lecture! You can see the current parameters below:
# + colab={"base_uri": "https://localhost:8080/", "height": 222} colab_type="code" id="qJ-k-QKx6TNi" outputId="3e195fe6-848c-49f9-bb7b-2c5ab409491c"
model
# + [markdown] colab_type="text" id="FzKkmVzh6TNq"
# <b> Task 1.3: </b> Optimize the length-scale, variance and noise component of the model, and submit the optimal length-scale value of the kernel.
# <br><b>Hint:</b> use the `.optimize()` method of the model and the `.lengthscale` property of the kernel.
# + colab={"base_uri": "https://localhost:8080/", "height": 35} colab_type="code" id="ZffCbme66TNr" outputId="75bf6769-b94d-4b81-fdb0-c7bb896f8e93"
model.optimize(max_iters=500)
kernel.lengthscale
# + colab={"base_uri": "https://localhost:8080/", "height": 297} colab_type="code" id="J_zla0NL6TN0" outputId="e5df4866-6a5a-43af-afb9-f41caf1f3601"
model.plot()
plt.show()
# + [markdown] colab_type="text" id="Sv5_QYYs6TN5"
# As you can see, the optimized process now fits the data nicely. Let's see whether a GP can tell on its own when it is being fit to noise rather than signal.
# + [markdown] colab_type="text" id="cnZ3vKlw6TN7"
# <b> Task 1.4: </b> Generate two datasets: a sinusoid without noise and samples of pure Gaussian noise. Optimize the kernel parameters and submit the optimal values of the noise component.
# <br><b>Note:</b> generate the data only with the ```generate_points(n, noise_variance)``` and ```generate_noise(n, noise_variance)``` functions!
# + colab={} colab_type="code" id="MPenDNIr6TN9"
X, y = generate_noise(noise_variance=10)
# Fresh kernel: reusing a single kernel object across several models ties their parameters together
gp_noisy = GPy.models.GPRegression(X, y, kernel=GPy.kern.RBF(input_dim=1))
gp_noisy.optimize()
noise = gp_noisy.Gaussian_noise[0]
# + colab={} colab_type="code" id="ZU1t4V6R6TOF"
X, y = generate_points(noise_variance=0)
gp_pure = GPy.models.GPRegression(X, y, kernel=GPy.kern.RBF(input_dim=1))
gp_pure.optimize()
just_signal = gp_pure.Gaussian_noise[0]
# + colab={"base_uri": "https://localhost:8080/", "height": 52} colab_type="code" id="jJ74U4r16TOL" outputId="8d6221bb-a203-42af-cfe8-76113857b15d"
noise, just_signal
# + [markdown] colab_type="text" id="ZByrgGsS6TOP"
# ## Sparse GP
# Now let's consider the speed of GP prediction. We will generate a dataset of 1000 points and measure the time needed to predict the mean and variance at a point. We will then use inducing inputs and look at the quality-time tradeoff.
#
# For the sparse model with inducing points, you should use the ```GPy.models.SparseGPRegression``` class. You can set the number of inducing inputs with the ```num_inducing``` parameter and optimize their positions and values with an ```.optimize()``` call.
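Under the hood, sparse GPs avoid working with the full $n \times n$ kernel matrix by routing computation through a small set of inducing inputs. A pure-NumPy Nyström sketch illustrates the idea (this is an illustration of the approximation, not GPy's exact variational method):

```python
import numpy as np

def rbf(A, B, var=1.0, ls=1.0):
    # Squared-exponential kernel between column vectors A (n,1) and B (m,1)
    d2 = (A - B.T) ** 2
    return var * np.exp(-d2 / (2 * ls ** 2))

rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, (200, 1))
Z = np.linspace(-3.0, 3.0, 10)[:, None]      # 10 inducing inputs

Knn = rbf(X, X)                               # full 200x200 kernel matrix
Knm = rbf(X, Z)                               # 200x10 cross-covariance
Kmm = rbf(Z, Z) + 1e-8 * np.eye(len(Z))       # 10x10, jitter for stability
K_approx = Knm @ np.linalg.solve(Kmm, Knm.T)  # Nystrom approximation of Knn
print(np.abs(Knn - K_approx).max())
```

With 10 well-spread inducing points the approximation error is small, which is why a handful of inducing inputs can stand in for hundreds of data points.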
# + [markdown] colab_type="text" id="FrXom83l6TOR"
# <b>Task 1.5</b>: Create a dataset of 1000 points and fit a `GPRegression` model. Measure the time needed to predict the mean and variance at position $x=1$. Then fit a `SparseGPRegression` with 10 inducing inputs and repeat the experiment. Report the speedup as the ratio of the time consumed without inducing inputs to the time with them.
# + colab={} colab_type="code" id="k66ojuf-6TOU"
X, y = generate_points(1000)
# + colab={"base_uri": "https://localhost:8080/", "height": 35} colab_type="code" id="z-hPAAqD6TOe" outputId="79d34c1c-095f-4fe9-bf93-2b10a7eeefb2"
# Fit first, then time only the prediction step, as the task asks
gp_time = GPy.models.GPRegression(X, y, kernel=GPy.kern.RBF(input_dim=1))
gp_time.optimize()
start = time.time()
gp_pred = gp_time.predict(np.array([[1.]]))
print(gp_pred)
time_gp = time.time() - start
# + colab={"base_uri": "https://localhost:8080/", "height": 35} colab_type="code" id="mv8Wy6Gy6TOm" outputId="6dc3c64f-b603-40d5-d794-47006acfc68c"
sparse_gp = GPy.models.SparseGPRegression(X, y, kernel=GPy.kern.RBF(input_dim=1), num_inducing=10)
sparse_gp.optimize()
start = time.time()
sparse_gp_pred = sparse_gp.predict(np.array([[1.]]))
print(sparse_gp_pred)
time_sgp = time.time() - start
# + colab={"base_uri": "https://localhost:8080/", "height": 577} colab_type="code" id="EQTkERYR6TOv" outputId="70dd1ad1-009f-413b-afee-a927f5d0dc89"
gp_time.plot()
sparse_gp.plot()
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 35} colab_type="code" id="5xNCDtGJ6TOz" outputId="d873aec6-0d52-41ad-cd3e-85c74f5c6363"
time_gp / time_sgp
# + [markdown] colab_type="text" id="fKpvWQWp6TO2"
# ## Bayesian optimization: GPyOpt (<a href="http://pythonhosted.org/GPyOpt/">documentation</a>, <a href="http://nbviewer.jupyter.org/github/SheffieldML/GPyOpt/blob/master/manual/index.ipynb">tutorials</a>)
# + [markdown] colab_type="text" id="xRO_xj7T6TO3"
# In this part of the assignment, we will try to find optimal hyperparameters for an XGBoost model! We will use a small dataset to speed things up, but keep in mind that the approach also works for large datasets.
#
# We will use the diabetes dataset provided in the sklearn package.
# + colab={} colab_type="code" id="WkIfU2ba6TO7"
dataset = sklearn.datasets.load_diabetes()
X = dataset['data']
y = dataset['target']
# + [markdown] colab_type="text" id="pSXJaPOe6TPB"
# We will use the cross-validation score to estimate accuracy, and our goal will be to tune the ```max_depth```, ```learning_rate```, ```n_estimators```, ```gamma``` and ```min_child_weight``` parameters. We compute the baseline MSE with default XGBoost parameters below; let's see if tuning can do better. First, we have to define the optimization function and the domains.
# + colab={} colab_type="code" id="NSVA8gKP6TPC"
# Score. Optimizer will try to find minimum, so we will add a "-" sign.
def f(parameters):
    parameters = parameters[0]
    score = -cross_val_score(
        XGBRegressor(learning_rate=parameters[0],
                     gamma=parameters[1],  # gamma is continuous; do not truncate it with int()
                     max_depth=int(parameters[2]),
                     n_estimators=int(parameters[3]),
                     min_child_weight=parameters[4]),
        X, y, scoring='neg_mean_squared_error'
    ).mean()
    return np.array(score)
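GPyOpt calls the objective with a 2-D array of shape `(1, n_params)`, which is why `f` starts with `parameters = parameters[0]`. A toy quadratic (hypothetical, just to show the calling convention) behaves the same way:

```python
import numpy as np

def toy_f(parameters):
    p = parameters[0]  # unwrap the (1, n_params) row that GPyOpt passes
    return np.array((p[0] - 1.0) ** 2 + p[1] ** 2)

val = toy_f(np.array([[3.0, 2.0]]))  # (3-1)^2 + 2^2 = 8
print(val)
```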
# + colab={"base_uri": "https://localhost:8080/", "height": 123} colab_type="code" id="nLpdAfHH6TPJ" outputId="fafa68d9-d297-40b0-e05c-267ee53b9b86"
baseline = -cross_val_score(
XGBRegressor(), X, y, scoring='neg_mean_squared_error'
).mean()
baseline
# + colab={} colab_type="code" id="TtqXl0Jc6TPQ"
# Bounds (NOTE: define continuous variables first, then discrete!)
bounds = [
    {'name': 'learning_rate',
     'type': 'continuous',
     'domain': (0, 1)},
    {'name': 'gamma',
     'type': 'continuous',
     'domain': (0, 5)},
    {'name': 'max_depth',
     'type': 'discrete',
     'domain': tuple(range(1, 51))},
    {'name': 'n_estimators',
     'type': 'discrete',
     'domain': tuple(range(1, 301))},
    {'name': 'min_child_weight',
     'type': 'discrete',
     'domain': tuple(range(1, 11))}
]
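One GPyOpt gotcha worth flagging: for `'type': 'discrete'`, `'domain'` is interpreted as the explicit set of allowed values, not an interval, so a tuple like `(1, 50)` would restrict the search to just those two numbers. Integer ranges have to be spelled out:

```python
# Discrete GPyOpt domains enumerate every candidate value explicitly
max_depth_domain = tuple(range(1, 51))        # 1, 2, ..., 50
n_estimators_domain = tuple(range(1, 301))    # 1, 2, ..., 300
min_child_weight_domain = tuple(range(1, 11)) # 1, 2, ..., 10
```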
# + colab={"base_uri": "https://localhost:8080/", "height": 458} colab_type="code" id="OL2XN6Eg6TPW" outputId="805094b6-af0a-4d0b-d254-d7f87cdcbcc1"
np.random.seed(777)
optimizer = GPyOpt.methods.BayesianOptimization(
f=f, domain=bounds,
acquisition_type ='MPI',
acquisition_par = 0.1,
exact_eval=True
)
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" id="7aWnWgk76TPZ" outputId="d1bad337-91ec-4ea2-837e-e0a6a31ac4f0"
max_iter = 50
max_time = 60
optimizer.run_optimization(max_iter, max_time)
# + colab={"base_uri": "https://localhost:8080/", "height": 350} colab_type="code" id="rJZvvOON6TPx" outputId="e57e08b6-0d8e-45b1-a853-375f1c2e72c9"
optimizer.plot_convergence()
# + [markdown] colab_type="text" id="Y4rtECdX6TP3"
# Best values of parameters:
# + colab={"base_uri": "https://localhost:8080/", "height": 52} colab_type="code" id="ON31H0ZW6TP4" outputId="55320c6a-7c80-41fc-8ce0-05c1779f30ff"
optimizer.X[np.argmin(optimizer.Y)]
# + colab={"base_uri": "https://localhost:8080/", "height": 35} colab_type="code" id="4CoYomX96TQC" outputId="218e3700-150e-4155-a8a9-e7a4e458e3fe"
print('MSE:', np.min(optimizer.Y),
'Gain:', baseline/np.min(optimizer.Y)*100)
# + [markdown] colab_type="text" id="8zuUiSeW6TQF"
# We were able to get a 9% boost without tuning the parameters by hand! Let's see if you can do the same.
# + [markdown] colab_type="text" id="ou0g0ACB6TQK"
# <b>Task 2.1:</b> Tune an SVR model. Find optimal values for three parameters: `C`, `epsilon` and `gamma`. Use the range (1e-5, 1000) for `C`, and (1e-5, 10) for `epsilon` and `gamma`. Use MPI as the acquisition function with weight 0.1. Submit the optimal value of `epsilon` found by the model.
# + colab={"base_uri": "https://localhost:8080/", "height": 35} colab_type="code" id="KKDkT2TFZ5u3" outputId="348d5944-cc6e-4a71-c7a1-5e0b8a1475bf"
baseline = -cross_val_score(SVR(gamma = 'auto'), X, y, scoring='neg_mean_squared_error').mean()
baseline
# + colab={} colab_type="code" id="BWzm3h57ZNx2"
def f_model(parameters):
parameters = parameters[0]
score = -cross_val_score(
SVR(gamma=float(parameters[0]),
C=float(parameters[1]),
epsilon=float(parameters[2])),
X, y, scoring='neg_mean_squared_error'
).mean()
score = np.array(score)
return score
# + colab={} colab_type="code" id="oVUFDinwZXaG"
svr_bounds = [
{'name': 'gamma',
'type': 'continuous',
'domain': (1e-5, 10)},
{'name': 'C',
'type': 'continuous',
'domain': (1e-5, 1000)},
{'name': 'epsilon',
'type': 'continuous',
'domain': (1e-5, 10)},
]
# + colab={} colab_type="code" id="bf6dYIAjZO3c"
np.random.seed(777)
optimizer = GPyOpt.methods.BayesianOptimization(
f=f_model, domain=svr_bounds,
acquisition_type ='MPI',
acquisition_par = 0.1,
exact_eval=True
)
# + colab={"base_uri": "https://localhost:8080/", "height": 350} colab_type="code" id="ML_2kSy1Z0eL" outputId="9f4f4332-1936-4da0-f41b-7d6ccf2284ca"
max_iter = 50
max_time = 60
optimizer.run_optimization(max_iter, max_time)
optimizer.plot_convergence()
# + colab={"base_uri": "https://localhost:8080/", "height": 35} colab_type="code" id="-YePyinXbjtx" outputId="1bee39a7-f97b-4751-d689-b4db7810ceaf"
optimizer.X[np.argmin(optimizer.Y)]
# + colab={"base_uri": "https://localhost:8080/", "height": 70} colab_type="code" id="HSS_Di9qaceS" outputId="bf638849-010b-4d61-cece-369697f926a6"
res = dict(best_eps=optimizer.X[np.argmin(optimizer.Y)][2],
MSE=np.min(optimizer.Y),
Gain=baseline/np.min(optimizer.Y)*100)
res
# + colab={"base_uri": "https://localhost:8080/", "height": 35} colab_type="code" id="zp0R5rHG6TQK" outputId="75cd3488-0919-405c-b904-70c87ccd386f"
best_epsilon = res['best_eps']### YOUR CODE HERE
best_epsilon
# + [markdown] colab_type="text" id="z65YEJ5k6TQQ"
# <b>Task 2.2:</b> For the model above, submit the boost in performance that you got after tuning the hyperparameters (as a percentage) [e.g. if the baseline MSE was 40 and you got 20, output 200].
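To make the requested number concrete, the gain is just the ratio of baseline MSE to tuned MSE expressed in percent, matching the bracketed example:

```python
# Worked example from the task statement: baseline 40, tuned 20 -> 200 (%)
baseline_mse, tuned_mse = 40.0, 20.0
gain_percent = baseline_mse / tuned_mse * 100
print(gain_percent)  # 200.0
```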
# + colab={"base_uri": "https://localhost:8080/", "height": 35} colab_type="code" id="1O6SSlQd6TQS" outputId="df455b21-1693-4c15-b9d0-2ed747450810"
performance_boost = res['Gain']### YOUR CODE HERE
performance_boost
# + colab={} colab_type="code" id="WKGvEjhyhYFX"
# Source notebook: Bayesian_Methods_for_Machine_Learning/week_6/notebooks/gp_assignment.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="0vzENvOtHD2H" executionInfo={"status": "ok", "timestamp": 1619608665885, "user_tz": -330, "elapsed": 11651, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "03541567842422634281"}}
import matplotlib.pyplot as plt
import numpy as np
import os
import tensorflow as tf
from tensorflow.keras.preprocessing import image_dataset_from_directory
from tensorflow import keras
# + id="i4CUP9F_WAaY" executionInfo={"status": "ok", "timestamp": 1619609222626, "user_tz": -330, "elapsed": 1569, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "03541567842422634281"}}
PATH = "/content/drive/MyDrive/Projects/Yoga-82/A_Notebooks/"
NUM_CLASSES = 5
IMAGE_RESIZE = 224
BATCH_SIZE = 32
IMG_SIZE = (224, 224)
DENSE_LAYER_ACTIVATION = 'softmax'
OBJECTIVE_FUNCTION = 'categorical_crossentropy'
LOSS_METRICS = ['accuracy']
# EARLY_STOP_PATIENCE must be < NUM_EPOCHS
NUM_EPOCHS = 10
EARLY_STOP_PATIENCE = 3
# These step counts should evenly divide the number of images in the train & valid folders respectively
# Images processed in each step = number-of-train-images / STEPS_PER_EPOCH_TRAINING
STEPS_PER_EPOCH_TRAINING = 10
STEPS_PER_EPOCH_VALIDATION = 10
# NOTE that these BATCH_* sizes are for Keras ImageDataGenerator batching to fill each epoch step
BATCH_SIZE_TRAINING = 10
BATCH_SIZE_VALIDATION = 10
# Using 1 to easily manage mapping between test_generator & prediction for submission preparation
BATCH_SIZE_TESTING = 1
train_dir = os.path.join(PATH, 'train')
validation_dir = os.path.join(PATH, 'test')
class_name = sorted(['Bow_Pose_or_Dhanurasana_',
                     'Bridge_Pose_or_Setu_Bandha_Sarvangasana_',
                     'Cobra_Pose_or_Bhujangasana_',
                     'Extended_Revolved_Triangle_Pose_or_Utthita_Trikonasana_',
                     'Tree_Pose_or_Vrksasana_'])
def get_score_label(score):
if score > 80:
return "Pro"
elif score > 65:
return "Good"
elif score > 50:
return "Average"
elif score > 30:
return "Rookie"
else:
return "Try Again"
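A quick check of the score-to-label thresholds (the helper is copied here so the snippet runs standalone; the scores are made up):

```python
def get_score_label(score):
    # Same thresholds as the notebook's helper above
    if score > 80:
        return "Pro"
    elif score > 65:
        return "Good"
    elif score > 50:
        return "Average"
    elif score > 30:
        return "Rookie"
    else:
        return "Try Again"

for s in (95, 70, 55, 40, 10):
    print(s, get_score_label(s))
```

Note the boundaries are exclusive: a score of exactly 80 maps to "Good", not "Pro".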
# + colab={"base_uri": "https://localhost:8080/"} id="ipE5_3jwHf4b" executionInfo={"status": "ok", "timestamp": 1619611087194, "user_tz": -330, "elapsed": 1515, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "03541567842422634281"}} outputId="934c2a71-bb89-4f15-ae57-0f9257d34ca3"
from tensorflow.keras.preprocessing.image import ImageDataGenerator
image_size = IMAGE_RESIZE
data_generator = ImageDataGenerator()
# flow_from_directory generates batches of augmented data (where augmentation can be color conversion, etc.)
# Both train & valid folders must have NUM_CLASSES sub-folders
train_generator = data_generator.flow_from_directory(
train_dir,
target_size=(image_size, image_size),
batch_size=BATCH_SIZE_TRAINING,
class_mode='categorical')
validation_generator = data_generator.flow_from_directory(
validation_dir,
target_size=(image_size, image_size),
batch_size=BATCH_SIZE_VALIDATION,
class_mode='categorical')
test_generator = data_generator.flow_from_directory(
    validation_dir,
    target_size=(image_size, image_size),
    batch_size=BATCH_SIZE_TESTING,
    shuffle=False,  # keep order so predictions map back to files
    class_mode='categorical')
# + colab={"base_uri": "https://localhost:8080/", "height": 399} id="6nz5YCjqm-vh" executionInfo={"status": "ok", "timestamp": 1619608698931, "user_tz": -330, "elapsed": 7612, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "03541567842422634281"}} outputId="87a829c1-fa1d-438e-a3b0-100321e90087"
ds_dir_demo = "/content/drive/MyDrive/Projects/Yoga-82/demo/"
img_paths = os.listdir(ds_dir_demo)
f, ax = plt.subplots(2, 5, figsize = (15, 7))
for i, img_path in enumerate(img_paths):
# image_path = "/content/drive/MyDrive/Projects/Yoga-82/demo/yoga_demo1.png"
# image_path = "/content/drive/MyDrive/Projects/Yoga-82/demo/Capture4.JPG"
image_path = ds_dir_demo+img_path
image = tf.keras.preprocessing.image.load_img(image_path, target_size= IMG_SIZE)
# input_arr = tf.keras.preprocessing.image.img_to_array(image)
# input_arr = np.array([input_arr]) # Convert single image to a batch.
# predictions = model.predict(input_arr)
# idx = np.argmax(predictions)
# # img=mpimg.imread(image_path)
# # # plt.imshow(img)
# i = i%10
ax[i//5, i%5].imshow(image)
ax[i//5, i%5].axis('off')
ax[i//5, i%5].set_title(f"{img_path[:-5]}")
# + id="EoiHUZHSjQC1" executionInfo={"status": "ok", "timestamp": 1619612153465, "user_tz": -330, "elapsed": 1556, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "03541567842422634281"}}
def get_model(IMAGE_RESIZE=224):
preprocess_input = tf.keras.applications.vgg16.preprocess_input
# rescale = tf.keras.layers.experimental.preprocessing.Rescaling(1./127.5, offset= -1)
    # Create the base model from the pre-trained VGG16 network
IMG_SHAPE = IMG_SIZE + (3,)
base_model = tf.keras.applications.VGG16(input_shape=IMG_SHAPE,
include_top=False,
weights='imagenet')
base_model.trainable = False
global_average_layer = tf.keras.layers.GlobalAveragePooling2D()
dense32 = tf.keras.layers.Dense(32, activation = 'relu')
prediction_layer = tf.keras.layers.Dense(NUM_CLASSES, activation = DENSE_LAYER_ACTIVATION)
inputs = tf.keras.Input(shape=(IMAGE_RESIZE, IMAGE_RESIZE, 3))
# x = data_augmentation(inputs)
x = preprocess_input(inputs)
x = base_model(x, training=False)
x = global_average_layer(x)
x = tf.keras.layers.Dropout(0.2)(x)
outputs = prediction_layer(x)
model = tf.keras.Model(inputs, outputs)
base_learning_rate = 0.005
    optimizer = tf.keras.optimizers.Adam(learning_rate=base_learning_rate)
# model.compile(optimizer=optimizer,
# loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
# metrics=['accuracy'])
# optimizer = tf.keras.optimizers.SGD(lr = 0.005, decay = 1e-6, momentum = 0.9, nesterov = True)
model.compile(optimizer = optimizer, loss = OBJECTIVE_FUNCTION, metrics = LOSS_METRICS)
return model
# + id="Op8GBtzvPIGX" executionInfo={"status": "ok", "timestamp": 1619612155968, "user_tz": -330, "elapsed": 1382, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "03541567842422634281"}}
# preprocess_input = tf.keras.applications.mobilenet_v2.preprocess_input
# rescale = tf.keras.layers.experimental.preprocessing.Rescaling(1./127.5, offset= -1)
# # Create the base model from the pre-trained model MobileNet V2
# IMG_SHAPE = IMG_SIZE + (3,)
# base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
# include_top=False,
# weights='imagenet')
# image_batch, label_batch = next(iter(train_dataset))
# feature_batch = base_model(image_batch)
# print(feature_batch.shape)
# base_model.trainable = False
# global_average_layer = tf.keras.layers.GlobalAveragePooling2D()
# feature_batch_average = global_average_layer(feature_batch)
# print(feature_batch_average.shape)
# prediction_layer = tf.keras.layers.Dense(NUM_CLASSES, activation = DENSE_LAYER_ACTIVATION)
# prediction_batch = prediction_layer(feature_batch_average)
# print(prediction_batch.shape)
# inputs = tf.keras.Input(shape=(IMAGE_RESIZE, IMAGE_RESIZE, 3))
# # x = data_augmentation(inputs)
# x = preprocess_input(inputs)
# x = base_model(x, training=False)
# x = global_average_layer(x)
# # x = tf.keras.layers.Dropout(0.2)(x)
# outputs = prediction_layer(x)
# model = tf.keras.Model(inputs, outputs)
# # base_learning_rate = 0.0001
# # optimizer=tf.keras.optimizers.Adam(lr=base_learning_rate)
# # model.compile(optimizer=optimizer,
# # loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
# # metrics=['accuracy'])
# sgd = tf.keras.optimizers.SGD(lr = 0.01, decay = 1e-6, momentum = 0.9, nesterov = True)
# model.compile(optimizer = sgd, loss = OBJECTIVE_FUNCTION, metrics = LOSS_METRICS)
# + colab={"base_uri": "https://localhost:8080/"} id="FKb07OeDXZS_" executionInfo={"status": "ok", "timestamp": 1619612273871, "user_tz": -330, "elapsed": 1211, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "03541567842422634281"}} outputId="d5fcdcee-83a2-4189-afff-86d1f078f76e"
model = get_model()
model.summary()
# + colab={"base_uri": "https://localhost:8080/"} id="RGOc8GuHYoZW" executionInfo={"status": "ok", "timestamp": 1619612312827, "user_tz": -330, "elapsed": 38179, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "03541567842422634281"}} outputId="f4128cab-965e-407a-a180-29d614827260"
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
# cb_early_stopper = EarlyStopping(monitor = 'val_loss', patience = EARLY_STOP_PATIENCE)
cb_checkpointer = ModelCheckpoint(filepath = '/content/drive/MyDrive/Projects/Models/vgg16_best.h5', monitor = 'val_loss', save_best_only = True, mode = 'auto')
history = model.fit(
train_generator,
steps_per_epoch=STEPS_PER_EPOCH_TRAINING,
    epochs=NUM_EPOCHS,
validation_data=validation_generator,
validation_steps=STEPS_PER_EPOCH_VALIDATION,
callbacks=[cb_checkpointer]
)
# + colab={"base_uri": "https://localhost:8080/", "height": 513} id="pYtcSK32jFnI" executionInfo={"status": "ok", "timestamp": 1619612344947, "user_tz": -330, "elapsed": 1883, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "03541567842422634281"}} outputId="9863e0a5-7ca2-409b-c1db-12762ecc0456"
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
plt.figure(figsize=(8, 8))
plt.subplot(2, 1, 1)
plt.plot(acc, label='Training Accuracy')
plt.plot(val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.ylabel('Accuracy')
plt.ylim([min(plt.ylim()),1])
plt.title('Training and Validation Accuracy')
plt.subplot(2, 1, 2)
plt.plot(loss, label='Training Loss')
plt.plot(val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.ylabel('Cross Entropy')
# plt.ylim([0,1.0])
plt.title('Training and Validation Loss')
plt.xlabel('epoch')
plt.show()
# + id="8_U6u1VnGFyt" executionInfo={"status": "ok", "timestamp": 1619612352122, "user_tz": -330, "elapsed": 2960, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "03541567842422634281"}}
model_path = "/content/drive/MyDrive/Projects/Models/vgg16_best.h5"
model = keras.models.load_model(model_path)
# + colab={"base_uri": "https://localhost:8080/"} id="cmVS3xLRxzfY" executionInfo={"status": "ok", "timestamp": 1619612365986, "user_tz": -330, "elapsed": 16085, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "03541567842422634281"}} outputId="fb5be7d9-0be9-473f-aeae-65484ba8e089"
def_img_list50 = []
def_img_list70 = []
def_img_list80 = []
for cls in test_generator.class_indices:
temp_dir = "/content/drive/MyDrive/Projects/Yoga-82/A_Notebooks/test/"
temp_dir = temp_dir + cls+'/'
t_path = os.listdir(temp_dir)
pred_list = []
pred_array_2d = np.zeros((len(t_path), 5))
curr_index = test_generator.class_indices[cls]
for i, tt_path in enumerate(t_path):
image_path = temp_dir + tt_path
image = tf.keras.preprocessing.image.load_img(image_path, target_size=(image_size, image_size))
input_arr = tf.keras.preprocessing.image.img_to_array(image)
# input_arr = preprocess_input(input_arr)
input_arr = np.array([input_arr]) # Convert single image to a batch.
predictions = model.predict(input_arr)
if predictions[0][np.argmax(predictions)] < 0.5:
def_img_list50.append(image_path)
elif predictions[0][np.argmax(predictions)] < 0.7:
def_img_list70.append(image_path)
elif predictions[0][np.argmax(predictions)] < 0.8:
def_img_list80.append(image_path)
pred_list.append(predictions)
pred_array_2d[i] = np.squeeze(predictions)
predicted_class_indices = []
for pred in pred_list:
predicted_class_indices.append( np.argmax(np.array(pred), axis = 1)[0])
pred_array = np.array(predicted_class_indices)
print(cls, np.sum(pred_array==curr_index)/len(predicted_class_indices))
# flag = False
# if curr_index == 0:
tsum70 = 0
tsum50 = 0
tsum80 = 0
for j in pred_array_2d:
if j[curr_index]*100>80:
tsum80 += 1
if j[curr_index]*100>70:
tsum70 += 1
if j[curr_index]*100>50:
tsum50 += 1
flag = True
print(len(pred_array))
print(tsum50, tsum50/len(pred_array))
print(tsum70, tsum70/len(pred_array))
print(tsum80, tsum80/len(pred_array))
# if flag:
# break
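The per-threshold counting loop above can also be vectorized with NumPy; here with made-up correct-class probabilities for one class:

```python
import numpy as np

probs = np.array([0.95, 0.85, 0.72, 0.55, 0.40])  # hypothetical correct-class probabilities
for thr in (0.5, 0.7, 0.8):
    count = int((probs > thr).sum())
    print(thr, count, count / len(probs))
```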
# + colab={"base_uri": "https://localhost:8080/", "height": 399} id="dPptVpTeEEFT" executionInfo={"status": "ok", "timestamp": 1619612383531, "user_tz": -330, "elapsed": 3817, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "03541567842422634281"}} outputId="35b549ba-2e63-4526-925f-d92973b8ec1c"
ds_dir_demo = "/content/drive/MyDrive/Projects/Yoga-82/A_Notebooks/demo_test_result/"
img_paths = os.listdir(ds_dir_demo)
f, ax = plt.subplots(2, 5, figsize = (15, 7))
for i, img_path in enumerate(img_paths):
# image_path = "/content/drive/MyDrive/Projects/Yoga-82/demo/yoga_demo1.png"
# image_path = "/content/drive/MyDrive/Projects/Yoga-82/demo/Capture4.JPG"
image_path = ds_dir_demo+img_path
image = tf.keras.preprocessing.image.load_img(image_path, target_size= IMG_SIZE)
input_arr = tf.keras.preprocessing.image.img_to_array(image)
input_arr = np.array([input_arr]) # Convert single image to a batch.
predictions = model.predict(input_arr)
idx = np.argmax(predictions)
# img=mpimg.imread(image_path)
# # plt.imshow(img)
i = i%10
ax[i//5, i%5].imshow(image)
ax[i//5, i%5].axis('off')
cname = class_name[idx].split('_')
scr_label = get_score_label(int(predictions[0][idx]*100))
ax[i//5, i%5].set_title(f"{cname[0]} {cname[1]}: {scr_label}")
# + id="Y1lwnPn-WdpQ" executionInfo={"status": "aborted", "timestamp": 1619611310862, "user_tz": -330, "elapsed": 97145, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "03541567842422634281"}}
# Source notebook: notebooks/main_vgg16.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import spacy
from spacy.lang.en import English
from spacy.matcher import Matcher
nlp = spacy.load("en_core_web_sm")
matcher = Matcher(nlp.vocab)
patterns = [
[{"LOWER" : "age>="},{"LIKE_NUM" : True},{"LEMMA" : "year"}],
[{"LOWER" : "age<="},{"LIKE_NUM" : True},{"LEMMA" : "year"}],
[{"LOWER" : "age"},{"ORTH" : "<"},{"LIKE_NUM" : True},{"LEMMA": "year"}],
[{"LOWER" : "age"},{"ORTH" : ">"},{"LIKE_NUM" : True},{"LEMMA": "year"}],
[{"LOWER" : "age"},{"ORTH" : "<"},{"ORTH" : "="},{"LIKE_NUM" : True},{"LEMMA": "year"}],
[{"LOWER" : "age"},{"ORTH" : ">"},{"ORTH" : "="},{"LIKE_NUM" : True},{"LEMMA": "year"}],
[{"LOWER" : "aged"},{"LIKE_NUM" : True},{"LOWER" : "or"},{"LEMMA" : "old"}],
[{"LOWER" : "aged"},{"LIKE_NUM" : True},{"LOWER" : "and"},{"LEMMA" : "old"}],
[{"LOWER" : "aged"},{"LIKE_NUM" : True},{"LOWER" : "and"},{"LOWER" : "under"}],
[{"LOWER" : "aged"},{"LIKE_NUM" : True},{"LOWER" : "or"},{"LOWER" : "under"}],
[{"LOWER" : "aged"},{"LIKE_NUM" : True},{"LOWER" : "and"},{"LOWER" : "over"}],
[{"LOWER" : "aged"},{"LIKE_NUM" : True},{"LOWER" : "or"},{"LOWER" : "over"}],
[{"LOWER" : "aged"},{"LIKE_NUM" : True},{"LEMMA": "year"},{"LOWER" : "or"},{"LEMMA" : "old"}],
[{"LOWER" : "aged"},{"LIKE_NUM" : True},{"LOWER" : "and"},{"LEMMA": "year"},{"LEMMA" : "old"}],
[{"LOWER" : "aged"},{"LIKE_NUM" : True},{"LOWER" : "and"},{"LEMMA": "year"},{"LOWER" : "under"}],
[{"LOWER" : "aged"},{"LIKE_NUM" : True},{"LOWER" : "or"},{"LEMMA": "year"},{"LOWER" : "under"}],
[{"LOWER" : "aged"},{"LIKE_NUM" : True},{"LOWER" : "and"},{"LEMMA": "year"},{"LOWER" : "over"}],
[{"LOWER" : "aged"},{"LIKE_NUM" : True},{"LOWER" : "or"},{"LEMMA": "year"},{"LOWER" : "over"}],
[{"LOWER" : "age"},{"LIKE_NUM" : True},{"LEMMA": "year"},{"LOWER" : "or"},{"LEMMA" : "old"}],
[{"LOWER" : "age"},{"LIKE_NUM" : True},{"LOWER" : "and"},{"LEMMA": "year"},{"LEMMA" : "old"}],
[{"LOWER" : "age"},{"LIKE_NUM" : True},{"LOWER" : "and"},{"LEMMA": "year"},{"LOWER" : "under"}],
[{"LOWER" : "age"},{"LIKE_NUM" : True},{"LOWER" : "or"},{"LEMMA": "year"},{"LOWER" : "under"}],
[{"LOWER" : "age"},{"LIKE_NUM" : True},{"LOWER" : "and"},{"LEMMA": "year"},{"LOWER" : "over"}],
[{"LOWER" : "age"},{"LIKE_NUM" : True},{"LOWER" : "or"},{"LEMMA": "year"},{"LOWER" : "over"}],
[{"LOWER" : "aged"},{"TEXT" : "between"},{"LIKE_NUM" : True},{"ORTH": "-"},{"LIKE_NUM" : True},{"LEMMA": "year"}],
[{"LOWER" : "age"},{"ORTH" : "<"},{"LIKE_NUM" : True}],
[{"LOWER" : "age"},{"ORTH" : ">"},{"LIKE_NUM" : True}],
    # A single token can never contain whitespace, so "age <=" must be split into two tokens
    [{"LOWER" : "age"},{"ORTH" : "<="},{"LIKE_NUM" : True},{"LEMMA": "year"}],
    [{"LOWER" : "age"},{"ORTH" : ">="},{"LIKE_NUM" : True},{"LEMMA": "year"}],
[{"LOWER" : "age"},{"LOWER" : "greater"},{"LOWER" : "than"},{"LIKE_NUM" : True},{"LEMMA": "year"}],
[{"LOWER" : "age"},{"LOWER" : "less"},{"LOWER" : "than"},{"LIKE_NUM" : True},{"LEMMA": "year"}],
[{"LOWER" : "age"},{"LOWER" : "over"},{"LIKE_NUM" : True},{"LEMMA": "year"}],
[{"LOWER" : "age"},{"LOWER" : "under"},{"LIKE_NUM" : True},{"LEMMA": "year"}],
[{"LOWER" : "age"},{"LOWER" : "above"},{"LIKE_NUM" : True},{"LEMMA": "year"}],
[{"LOWER" : "age"},{"LOWER" : "below"},{"LIKE_NUM" : True},{"LEMMA": "year"}],
[{"LOWER" : "aged"},{"LOWER" : "over"},{"LIKE_NUM" : True},{"LEMMA": "year"}],
[{"LOWER" : "aged"},{"LOWER" : "under"},{"LIKE_NUM" : True},{"LEMMA": "year"}],
[{"LOWER" : "aged"},{"LOWER" : "above"},{"LIKE_NUM" : True},{"LEMMA": "year"}],
[{"LOWER" : "aged"},{"LOWER" : "below"},{"LIKE_NUM" : True},{"LEMMA": "year"}],
[{"LOWER" : "aged"},{"LOWER": "between"},{"LIKE_NUM" : True},{"TEXT" : "and"},{"LIKE_NUM" : True}],
[{"LOWER" : "age"},{"LOWER": "between"},{"LIKE_NUM" : True},{"TEXT" : "and"},{"LIKE_NUM" : True}],
[{"LIKE_NUM": True}, {"LEMMA": "year"}, {"LEMMA": "old"}],
[{"LEMMA" : "age"}, {"LIKE_NUM" : True}, {"LEMMA": "year"}],
[{"LOWER" : "age"}, {"LIKE_NUM" : True}, {"LEMMA": "year"}],
[{"LOWER": "between"},{"LIKE_NUM" : True},{"TEXT" : "to"},{"LIKE_NUM" : True},{"LEMMA": "year"},{"TEXT" : "of"},{"LOWER" : "age"},],
[{"LOWER" : "aged"},{"LOWER": "between"},{"LIKE_NUM" : True},{"TEXT" : "to"},{"LIKE_NUM" : True},{"LEMMA": "year"}],
# [{"LOWER" : "age"}, {"LIKE_NUM" : True}, {"LOWER" : "or"}, {"LOWER" : "older"}]
# [{"LOWER" : "aged"}, {"LIKE_NUM" : True}, {"LOWER" : "or"}, {"LOWER" : "older"}]
# [{"LOWER" : {"IN": ["age", "ages", "aged"]}}, {"LIKE_NUM" : True}]
]
matcher.add("age_rule", patterns)
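The last commented-out rule above hints at spaCy's `IN` operator, which collapses several `LOWER` variants ("age", "ages", "aged") into a single pattern. A hedged sketch of the corrected form as plain pattern data (the rule name `age_in_rule` is illustrative, not part of the notebook above):

```python
# Corrected form of the commented-out pattern: one rule matching
# "age", "ages", or "aged" followed by a number, via spaCy's IN operator.
age_in_pattern = [
    {"LOWER": {"IN": ["age", "ages", "aged"]}},
    {"LIKE_NUM": True},
]

# It would be registered like the other rules, e.g.:
# matcher.add("age_in_rule", [age_in_pattern])
```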
# +
sentence_1 = "Mary, age>= 10 years lady"
doc = nlp(sentence_1)
matches = matcher(doc)
for match_id, start, end in matches:
string_id = nlp.vocab.strings[match_id] # Get string representation
span = doc[start:end] # The matched span
print(match_id, string_id, start, end, span.text)
# +
sentence_2 = "Mary, age<= 10 years lady"
doc = nlp(sentence_2)
matches = matcher(doc)
for match_id, start, end in matches:
string_id = nlp.vocab.strings[match_id] # Get string representation
span = doc[start:end] # The matched span
print(match_id, string_id, start, end, span.text)
# +
# Worked after adding ORTH
sentence_3 = "Mary, age < 10 years lady"
doc = nlp(sentence_3)
matches = matcher(doc)
for match_id, start, end in matches:
string_id = nlp.vocab.strings[match_id] # Get string representation
span = doc[start:end] # The matched span
print(match_id, string_id, start, end, span.text)
# +
sentence_4 = "Mary, age > 10 lady"
doc = nlp(sentence_4)
matches = matcher(doc)
for match_id, start, end in matches:
string_id = nlp.vocab.strings[match_id] # Get string representation
span = doc[start:end] # The matched span
print(match_id, string_id, start, end, span.text)
# +
sentence_5 = "Mary, age <= 10 years lady"
doc = nlp(sentence_5)
matches = matcher(doc)
for match_id, start, end in matches:
string_id = nlp.vocab.strings[match_id] # Get string representation
span = doc[start:end] # The matched span
print(match_id, string_id, start, end, span.text)
# +
sentence_6 = "Mary, Age >= 10 years lady"
doc = nlp(sentence_6)
matches = matcher(doc)
for match_id, start, end in matches:
string_id = nlp.vocab.strings[match_id] # Get string representation
span = doc[start:end] # The matched span
print(match_id, string_id, start, end, span.text)
# +
sentence_7 = "Aged 20 or older"
doc = nlp(sentence_7)
matches = matcher(doc)
for match_id, start, end in matches:
string_id = nlp.vocab.strings[match_id] # Get string representation
span = doc[start:end] # The matched span
print(match_id, string_id, start, end, span.text)
# +
sentence_8 = "Aged 20 and older"
doc = nlp(sentence_8)
matches = matcher(doc)
for match_id, start, end in matches:
string_id = nlp.vocab.strings[match_id] # Get string representation
span = doc[start:end] # The matched span
print(match_id, string_id, start, end, span.text)
# +
sentence_9 = "Aged 20 and under"
doc = nlp(sentence_9)
matches = matcher(doc)
for match_id, start, end in matches:
string_id = nlp.vocab.strings[match_id] # Get string representation
span = doc[start:end] # The matched span
print(match_id, string_id, start, end, span.text)
# +
sentence_10="Aged between 5 - 14 years"
doc = nlp(sentence_10)
matches = matcher(doc)
for match_id, start, end in matches:
string_id = nlp.vocab.strings[match_id] # Get string representation
span = doc[start:end] # The matched span
print(match_id, string_id, start, end, span.text)
# +
sentence_11="Age < 18 Diagnosed with Relapsed or Refractory Multiple Myeloma"
doc = nlp(sentence_11)
matches = matcher(doc)
for match_id, start, end in matches:
string_id = nlp.vocab.strings[match_id] # Get string representation
span = doc[start:end] # The matched span
print(match_id, string_id, start, end, span.text)
# +
sentence_12="Age > 18"
doc = nlp(sentence_12)
matches = matcher(doc)
for match_id, start, end in matches:
string_id = nlp.vocab.strings[match_id] # Get string representation
span = doc[start:end] # The matched span
print(match_id, string_id, start, end, span.text)
# +
sentence_13="Age greater than 18 years"
doc = nlp(sentence_13)
matches = matcher(doc)
for match_id, start, end in matches:
string_id = nlp.vocab.strings[match_id] # Get string representation
span = doc[start:end] # The matched span
print(match_id, string_id, start, end, span.text)
# -
# +
sentence_14="Age > 10 "
doc = nlp(sentence_14)
matches = matcher(doc)
for match_id, start, end in matches:
string_id = nlp.vocab.strings[match_id] # Get string representation
span = doc[start:end] # The matched span
print(match_id, string_id, start, end, span.text)
# +
#over under above below
sentence_15="Age over 18 years"
doc = nlp(sentence_15)
matches = matcher(doc)
for match_id, start, end in matches:
string_id = nlp.vocab.strings[match_id] # Get string representation
span = doc[start:end] # The matched span
print(match_id, string_id, start, end, span.text)
# +
text = '''Aged 22 and older, undergoing 1 or 2 level spinal decompression.Age over 18 years.Age less than 65 years, have diagnosed ulcerative colitis
Aged 20 or older, myocardial ischemia,
Aged over 18 years Confirmed diagnosis of bronchiectasis within 5 years
Aged between 18-70 years old Diagnosis of Non-alcoholic Fatty Liver Disease
Age >= 10 years Diagnosed with Relapsed or Refractory Multiple Myeloma
Aged between 40 and 85 years Diagnosed with COPD
Women between 40 to 70 years of age.
Aged 18 years or older Diagnosis of Ankylosing Spondylitis or Axial '''
doc = nlp(text)
matches = matcher(doc)
for match_id, start, end in matches:
string_id = nlp.vocab.strings[match_id] # Get string representation
span = doc[start:end] # The matched span
    # print(f'Matching ID, Age Criteria, Text Matched:')
print(f'{match_id}, {string_id}, {start}, {end}, {span.text}')
# data.frame(match_id,string_id,start, end, span.text)
# -
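Once spans are matched, a natural follow-up is pulling the numeric bounds out of the matched text. A minimal, spaCy-free sketch using only the standard library (the helper name and regex below are illustrative, not part of the notebook above):

```python
import re

def extract_ages(span_text):
    """Return all integers found in a matched age span."""
    return [int(n) for n in re.findall(r"\d+", span_text)]

print(extract_ages("Aged between 18-70 years"))  # [18, 70]
print(extract_ages("Age >= 10 years"))           # [10]
```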
| Extracting age.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
# %matplotlib inline
data = pd.read_csv('data.csv')
data.head()
data.shape
data.isnull().sum()
data.corr()
import seaborn as sns
plt.figure(figsize=(10,10))
sns.heatmap(data.corr(), annot = True)
X = data.iloc[:,:-1]
y = data['Outcome']
X.shape
y.shape
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 10)
print("Train Set: ", X_train.shape, y_train.shape)
print("Test Set: ", X_test.shape, y_test.shape)
from sklearn.ensemble import RandomForestClassifier
model = RandomForestClassifier(n_estimators=20)
model.fit(X_train, y_train)
# +
import matplotlib.pyplot as plt
fig = plt.figure()
ax = fig.add_axes([0,0,1,1])
langs = ['Malaria', 'Pneumonia', 'Breast-cancer', 'Diabetes']
students = [96,95,96,98]
ax.bar(langs,students)
plt.show()
# -
from sklearn.metrics import accuracy_score
# +
import numpy as np
import matplotlib.pyplot as plt
# creating the dataset
data = {'Malaria':95, 'Pneumonia':96, 'Breast Cancer':95.47,
'Diabetes':98.25}
courses = list(data.keys())
values = list(data.values())
fig = plt.figure(figsize = (10, 5))
# creating the bar plot
plt.bar(courses, values, color ='maroon',
width = 0.4)
plt.ylim(90, 100)
plt.xlabel("Disease Detection Models")
plt.ylabel("Accuracies")
plt.title("Accuracies of Models")
plt.show()
# -
print(accuracy_score(y_test, model.predict(X_test))*100)
import pickle
pickle.dump(model, open("diabetes.pkl",'wb'))
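The pickled model can later be reloaded for inference. A self-contained sketch of the save/load round-trip on dummy data (the file name `model_demo.pkl` and the toy data are illustrative, not the diabetes model above):

```python
import pickle
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Train a tiny model on dummy data, then round-trip it through pickle.
X_demo = np.random.RandomState(0).rand(20, 4)
y_demo = (X_demo[:, 0] > 0.5).astype(int)
clf = RandomForestClassifier(n_estimators=5, random_state=0).fit(X_demo, y_demo)

with open("model_demo.pkl", "wb") as f:
    pickle.dump(clf, f)
with open("model_demo.pkl", "rb") as f:
    restored = pickle.load(f)

# The restored model predicts exactly like the original.
print((restored.predict(X_demo) == clf.predict(X_demo)).all())
```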
| python notebooks/Diabetes_Prediction.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Import Dependencies and setup
from urllib.request import urlopen
from bs4 import BeautifulSoup
path='https://www.syracuse.com/coronavirus-ny/'
f = urlopen(path)
html = str(f.read())
soup = BeautifulSoup(html, 'html.parser')
txt = soup.find_all('iframe')
for element in txt:
    with open('image_url', 'a+') as out_file:
        out_file.write(element.attrs["src"] + '\n')
        print(element.attrs["src"])
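The same `iframe` extraction can be sanity-checked against a literal HTML snippet, without hitting the network (the snippet and URLs below are made up for illustration):

```python
from bs4 import BeautifulSoup

html_snippet = """
<html><body>
  <iframe src="https://example.com/map1"></iframe>
  <iframe src="https://example.com/map2"></iframe>
</body></html>
"""
soup_demo = BeautifulSoup(html_snippet, 'html.parser')
# Collect the src attribute of every iframe in the page.
srcs = [frame.attrs["src"] for frame in soup_demo.find_all('iframe')]
print(srcs)  # ['https://example.com/map1', 'https://example.com/map2']
```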
| Extract_Transform/Scrape Syracuse Image.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Component Analysis
# ### Introduction to Python
# ### GitHub repository: https://github.com/jorgemauricio/analisis_componentes
# ### Instructor: <NAME>
# +
# libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# modeling library
from sklearn.ensemble import RandomForestRegressor
# the error metric; in this case we will use c-stat (aka ROC/AUC)
from sklearn.metrics import roc_auc_score
# %matplotlib inline
# -
# read the csv
df = pd.read_csv("data/PAPAYA_FISICOQUIMICOS.csv")
# structure of the csv
df.head()
df.MUESTRA.unique()
# EDA (Exploratory Data Analysis): correlation between variables
sns.pairplot(df,hue='MUESTRA',palette='Set1')
# split the data into training and test sets so the model does not overfit
# Train Test Split
from sklearn.model_selection import train_test_split
X = df.drop('MUESTRA',axis=1)
y = df['MUESTRA']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30)
# We use a decision tree to determine how the compounds are classified
# according to their qualities
# Decision Trees
from sklearn.tree import DecisionTreeClassifier
dtree = DecisionTreeClassifier()
dtree.fit(X_train,y_train)
# prediction and evaluation
predictions = dtree.predict(X_test)
from sklearn.metrics import classification_report,confusion_matrix
print(classification_report(y_test,predictions))
print(confusion_matrix(y_test,predictions))
# +
# Tree Visualization
from IPython.display import Image
from io import StringIO  # sklearn.externals.six was removed in modern scikit-learn
from sklearn.tree import export_graphviz
import pydot
features = list(df.columns[1:])
features
# +
dot_data = StringIO()
export_graphviz(dtree, out_file=dot_data,feature_names=features,filled=True,rounded=True)
graph = pydot.graph_from_dot_data(dot_data.getvalue())
Image(graph[0].create_png())
# -
# random forests
from sklearn.ensemble import RandomForestClassifier
rfc = RandomForestClassifier(n_estimators=100)
rfc.fit(X_train, y_train)
rfc_pred = rfc.predict(X_test)
print(confusion_matrix(y_test,rfc_pred))
print(classification_report(y_test,rfc_pred))
# PCA
from sklearn.preprocessing import StandardScaler
X = df.drop("MUESTRA", axis=1)
scaler = StandardScaler()
scaler.fit(X)
scaled_data = scaler.transform(X)
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
pca.fit(scaled_data)
x_pca = pca.transform(scaled_data)
scaled_data.shape
x_pca.shape
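Before settling on two components, it is worth checking how much variance they actually retain via `explained_variance_ratio_`. A small sketch on synthetic data (the real notebook would inspect the `pca` object fitted on `scaled_data` instead):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
demo = rng.rand(50, 5)
pca_demo = PCA(n_components=2).fit(demo)

# Fraction of the total variance captured by each retained component.
print(pca_demo.explained_variance_ratio_)
# Total variance retained by the 2-component projection.
print(pca_demo.explained_variance_ratio_.sum())
```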
X.head()
def generar_indice(elemento):
if elemento == "Pa-AC-1":
return 1
if elemento == "Pa-AC-2":
return 2
if elemento == "Pa-LIO-1":
return 3
if elemento == "Pa-LIO-2":
return 4
y_dummies = list(map(generar_indice,y))
plt.figure(figsize=(8,6))
plt.scatter(x_pca[:,0],x_pca[:,1],c=y_dummies,cmap='plasma')
plt.xlabel('First principal component')
plt.ylabel('Second Principal Component')
# interpreting the components
pca.components_
df_comp = pd.DataFrame(pca.components_,columns=X.columns)
plt.figure(figsize=(12,6))
sns.heatmap(df_comp,cmap='plasma',)
# K Nearest Neighbors
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(scaled_data,y_dummies,
test_size=0.30)
# +
# using KNN
# -
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X_train,y_train)
pred = knn.predict(X_test)
from sklearn.metrics import classification_report,confusion_matrix
print(confusion_matrix(y_test,pred))
print(classification_report(y_test,pred))
# +
# choosing a K Value
error_rate = []
# Will take some time
for i in range(1,25):
knn = KNeighborsClassifier(n_neighbors=i)
knn.fit(X_train,y_train)
pred_i = knn.predict(X_test)
error_rate.append(np.mean(pred_i != y_test))
# -
plt.figure(figsize=(10,6))
plt.plot(range(1,25),error_rate,color='blue', linestyle='dashed', marker='o',
markerfacecolor='red', markersize=10)
plt.title('Error Rate vs. K Value')
plt.xlabel('K')
plt.ylabel('Error Rate')
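The elbow plot can also be read off programmatically: pick the k with the lowest error. A sketch using a made-up error list standing in for the `error_rate` computed above:

```python
import numpy as np

# Hypothetical error rates for k = 1..7 (stand-ins for the loop above).
error_rate_demo = [0.30, 0.25, 0.28, 0.18, 0.22, 0.21, 0.24]

# k values start at 1, so shift the argmin index by 1.
best_k = int(np.argmin(error_rate_demo)) + 1
print(best_k)  # 4
```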
# +
# FIRST A QUICK COMPARISON TO OUR ORIGINAL K=1
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X_train,y_train)
pred = knn.predict(X_test)
print('WITH K=1')
print('\n')
print(confusion_matrix(y_test,pred))
print('\n')
print(classification_report(y_test,pred))
# +
# NOW WITH K=23
knn = KNeighborsClassifier(n_neighbors=23)
knn.fit(X_train,y_train)
pred = knn.predict(X_test)
print('WITH K=23')
print('\n')
print(confusion_matrix(y_test,pred))
print('\n')
print(classification_report(y_test,pred))
# -
# # EDA
sns.set_style('whitegrid')
sns.lmplot('L','CRA',data=df, hue='MUESTRA',
palette='coolwarm',size=6,aspect=1,fit_reg=False)
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=6)
kmeans.fit(X)
kmeans.cluster_centers_
kmeans.labels_
df.head()
def generar_indice(elemento):
if elemento == "PiLIO50":
return 1
if elemento == "PiLIO60":
return 2
if elemento == "PiLIO80":
return 3
if elemento == "PiSAC50":
return 4
if elemento == "PiSAC60":
return 5
if elemento == "PiSAC80":
return 6
df['Cluster'] = df['MUESTRA'].apply(generar_indice)
df.head(10)
from sklearn.metrics import confusion_matrix,classification_report
print(confusion_matrix(df['Cluster'],kmeans.labels_))
print(classification_report(df['Cluster'],kmeans.labels_))
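Note that `kmeans.labels_` are arbitrary cluster ids, so comparing them directly against `df['Cluster']` in a classification report is misleading unless the labels are first aligned. One standard alignment runs the Hungarian algorithm on the confusion matrix; a sketch on toy labels (the arrays below are illustrative):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import confusion_matrix

# Toy ground-truth classes vs arbitrary cluster ids.
true_labels = np.array([0, 0, 1, 1, 2, 2])
cluster_ids = np.array([2, 2, 0, 0, 1, 1])

cm = confusion_matrix(true_labels, cluster_ids)
# Maximize matched counts: Hungarian algorithm on the negated confusion matrix.
rows, cols = linear_sum_assignment(-cm)
mapping = dict(zip(cols, rows))          # cluster id -> class label
aligned = np.array([mapping[c] for c in cluster_ids])
print((aligned == true_labels).mean())   # 1.0
```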
| papaya_fisicoquimicos.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Multivariate Classification
# +
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.metrics import classification_report
from sktime.classification.compose import ColumnEnsembleClassifier
from sktime.classification.dictionary_based import BOSSEnsemble
from sktime.classification.interval_based import TimeSeriesForestClassifier
from sktime.classification.shapelet_based import MrSEQLClassifier
from sktime.datasets import load_basic_motions
from sktime.transformations.panel.compose import ColumnConcatenator
import sktime
from sktime.utils.data_io import load_from_tsfile_to_dataframe
import matplotlib.pyplot as plt
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier
from sktime.classification.compose import ComposableTimeSeriesForestClassifier
from sktime.datasets import load_arrow_head
from sktime.utils.slope_and_trend import _slope
from sklearn.metrics import plot_confusion_matrix
# -
X, y = load_from_tsfile_to_dataframe("haptic_data_1.ts", replace_missing_vals_with='NaN')
X_f, y_f = load_from_tsfile_to_dataframe("haptic_data_2.ts", replace_missing_vals_with='NaN')
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.2, random_state=42)
X_train_f, X_test_f, y_train_f, y_test_f = train_test_split(X_f, y_f, test_size=.2, random_state=42)
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
print(X_train_f.shape, y_train_f.shape, X_test_f.shape, y_test_f.shape)
X_train.head()
X_train_f.head()
print(np.unique(y_train))
print(np.unique(y_train_f))
y_test_f
# # TimeSeriesForestClassifier
# ## Without features
steps = [("concatenate", ColumnConcatenator()),("classify", TimeSeriesForestClassifier(n_estimators=100)),]
clf = Pipeline(steps)
clf.fit(X_train, y_train)
clf.score(X_test, y_test)
print(classification_report(y_test, clf.predict(X_test)))
plot_confusion_matrix(clf, X_test, y_test)
# ## with features
steps = [("concatenate", ColumnConcatenator()),("classify", TimeSeriesForestClassifier(n_estimators=100)),]
clf = Pipeline(steps)
clf.fit(X_train_f, y_train_f)
clf.score(X_test_f, y_test_f)
print(classification_report(y_test_f, clf.predict(X_test_f)))
plot_confusion_matrix(clf, X_test_f, y_test_f)
# # BOSSEnsemble & ColumnEnsembleClassifier
clf = ColumnEnsembleClassifier(estimators=[("TSF0", TimeSeriesForestClassifier(n_estimators=100), [0]),("BOSSEnsemble3", BOSSEnsemble(max_ensemble_size=5), [3]),])
clf.fit(X_train, y_train)
clf.score(X_test, y_test)
print(classification_report(y_test, clf.predict(X_test)))
# # MrSEQLClassifier - univariate time series classification, not suitable here
clf = MrSEQLClassifier()
clf.fit(X_train, y_train)
clf.score(X_test, y_test)
print(classification_report(y_test, clf.predict(X_test)))
# +
# Cannot plot confusion matrix
# plot_confusion_matrix(clf, X_test, y_test)
# -
# # Plot
# binary target variable
labels, counts = np.unique(y_train, return_counts=True)
print(labels, counts)
fig, ax = plt.subplots(1, figsize=plt.figaspect(0.25))
for label in labels:
X_train.loc[y_train == label, "dim_3"].iloc[0].plot(ax=ax, label=f"class {label}")
plt.legend()
ax.set(title="Example time series", xlabel="Time");
# ## Feature extraction with sklearn
# +
from sktime.transformations.panel.tsfresh import TSFreshFeatureExtractor
from sklearn.pipeline import make_pipeline
# with sktime, we can write this as a pipeline
from sktime.transformations.panel.reduce import Tabularizer
from sklearn.ensemble import RandomForestClassifier
from sktime.datatypes._panel._convert import from_nested_to_2d_array
from sklearn.pipeline import Pipeline
from sklearn.tree import DecisionTreeClassifier
# +
# for univariate data this works
#time_series_tree.fit(X_train, y_train)
#time_series_tree.score(X_test, y_test)
# +
t = TSFreshFeatureExtractor(default_fc_parameters="efficient", show_warnings=False)
Xtrain = t.fit_transform(X_train)
Xtest = t.transform(X_test)  # transform only; do not refit the extractor on test data
Xtrain.head()
# -
X_train_fs, X_test_fs, y_train_fs, y_test_fs = train_test_split(Xtrain, y_train, test_size=.2)
X_train_fs
# +
from sklearn.metrics import classification_report
classifier_full = DecisionTreeClassifier()
classifier_full.fit(X_train_fs, y_train_fs)
print(classifier_full.score(X_train_fs, y_train_fs))
print(classification_report(y_test_fs, classifier_full.predict(X_test_fs)))
# +
from sklearn.metrics import plot_confusion_matrix
plot_confusion_matrix(classifier_full, X_test_fs, y_test_fs)
# -
# ### Time series forest
from sktime.classification.compose import ComposableTimeSeriesForestClassifier
from sklearn.tree import DecisionTreeClassifier

# `time_series_tree` must be defined before building the forest;
# a plain decision tree is a simple choice for the base estimator.
time_series_tree = DecisionTreeClassifier()
tsf = ComposableTimeSeriesForestClassifier(
    estimator=time_series_tree,
    n_estimators=100,
    bootstrap=True,
    oob_score=True,
    random_state=1,
    n_jobs=-1,
)
# +
tsf.fit(X_train, y_train)
if tsf.oob_score:
print(tsf.oob_score_)
# +
# remove NaN
#from tsfresh import select_features
#from tsfresh.utilities.dataframe_functions import impute
#impute(Xt) #impute(extracted_features)
#features_filtered = select_features(Xt, y_train) #select_features(extracted_features, y)
# +
# this is for the .ts file format without features extracted
#classifier = make_pipeline(TSFreshFeatureExtractor(show_warnings=False), RandomForestClassifier())
#classifier.fit(Xtrain, y_train)
#classifier.score(Xtest, y_test)
# -
# # Trying out other Classification
# +
from sktime.datatypes._panel._convert import from_nested_to_2d_array
X_train_tab = from_nested_to_2d_array(X_train)
X_test_tab = from_nested_to_2d_array(X_test)
X_train_tab.head()
# -
# features_filtered  # undefined unless the commented select_features cell above is run
# +
from sklearn.dummy import DummyClassifier
dummy_clf = DummyClassifier(strategy="uniform")
dummy_clf.fit(X_train_tab, y_train)
dummy_clf.score(X_test_tab, y_test)
# +
from sklearn.ensemble import RandomForestClassifier
from sklearn import metrics
rand_clf = RandomForestClassifier(n_estimators=100)
rand_clf.fit(X_train_tab, y_train)
y_predict = rand_clf.predict(X_test_tab)
print("Accuracy:",metrics.accuracy_score(y_test,y_predict))
# -
| examples/Haptic_data_example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Generate a Noise Model using Calibration Data
#
# We will use pairs of noisy calibration observations $x_i$ and clean signal $s_i$ (created by averaging these noisy, calibration images) to estimate the conditional distribution $p(x_i|s_i)$. Histogram-based and Gaussian Mixture Model-based noise models are generated and saved.
#
# __Note:__ Noise model can also be generated if calibration data is not available. In such a case, we use an approach called ```Bootstrapping```. Take a look at the notebook ```0b-CreateNoiseModel (With Bootstrapping)``` on how to do so. To understand more about the ```Bootstrapping``` procedure, take a look at the readme [here](https://github.com/juglab/PPN2V).
# +
import warnings
warnings.filterwarnings('ignore')
import torch
import os
import urllib
import zipfile
from torch.distributions import normal
import matplotlib.pyplot as plt, numpy as np, pickle
from scipy.stats import norm
from tifffile import imread
import sys
sys.path.append('../../')
from divnoising.gaussianMixtureNoiseModel import GaussianMixtureNoiseModel
from divnoising import histNoiseModel
from divnoising.utils import plotProbabilityDistribution
dtype = torch.float
device = torch.device("cuda:0")
# -
# ### Download data
#
# Download the data from https://zenodo.org/record/5156960/files/Mouse%20skull%20nuclei.zip?download=1. Here we show the pipeline for Mouse nuclei dataset. Save the dataset in an appropriate path. For us, the path is the data folder which exists at `./data`.
# +
# Download data
if not os.path.isdir('./data'):
os.mkdir('./data')
zipPath="./data/Mouse_skull_nuclei.zip"
if not os.path.exists(zipPath):
data = urllib.request.urlretrieve('https://zenodo.org/record/5156960/files/Mouse%20skull%20nuclei.zip?download=1', zipPath)
with zipfile.ZipFile(zipPath, 'r') as zip_ref:
zip_ref.extractall("./data")
# -
# The noise model is a characteristic of your camera and not of the sample. The downloaded data folder contains a set of calibration images (for the Mouse nuclei dataset, it is ```edgeoftheslide_300offset.tif```, showing the edge of a slide, and the data to be denoised is named ```example2_digital_offset300.tif```). The calibration images can be anything that is static and imaged multiple times in succession; thus, the edge of a slide works as well. We can either bin the noisy-GT pairs (obtained from noisy calibration images) as a 2-D histogram or fit a GMM distribution to obtain a smooth, parametric description of the noise model.
# Specify the ```path``` the noisy calibration data will be loaded from (it is also where the noise model will be stored once created), ```dataName``` as the name you wish to give the noise model, ```n_gaussian``` to indicate how many Gaussians will be used for learning a GMM-based noise model, and ```n_coeff``` to indicate how many polynomial coefficients will be used to parametrize the mean, standard deviation and weight of the GMM noise model. The default settings for ```n_gaussian``` and ```n_coeff``` generally work well for most datasets.
# +
path="./data/Mouse_skull_nuclei/"
observation= imread(path+'edgeoftheslide_300offset.tif') # Load the appropriate calibration data
dataName = 'nuclei' # Name of the noise model
n_gaussian = 3 # Number of gaussians to use for Gaussian Mixture Model
n_coeff = 2 # No. of polynomial coefficients for parameterizing the mean, standard deviation and weight of Gaussian components.
# -
nameHistNoiseModel ='HistNoiseModel_'+dataName+'_'+'calibration'
nameGMMNoiseModel = 'GMMNoiseModel_'+dataName+'_'+str(n_gaussian)+'_'+str(n_coeff)+'_'+'calibration'
# +
# The data contains 100 images of a static sample (edge of a slide).
# We estimate the clean signal by averaging all images.
signal=np.mean(observation[:, ...],axis=0)[np.newaxis,...]
# Let's look at the raw data and our pseudo ground truth signal
print(signal.shape)
plt.figure(figsize=(12, 12))
plt.subplot(1, 2, 2)
plt.title(label='average (ground truth)')
plt.imshow(signal[0],cmap='gray')
plt.subplot(1, 2, 1)
plt.title(label='single raw image')
plt.imshow(observation[0],cmap='gray')
plt.show()
# -
# ### Creating the Histogram Noise Model
# Using the raw pixels $x_i$, and our averaged GT $s_i$, we are now learning a histogram based noise model. It describes the distribution $p(x_i|s_i)$ for each $s_i$.
# +
# We set the range of values we want to cover with our model.
# The pixel intensities in the images you want to denoise have to lie within this range.
minVal, maxVal = 2000, 22000
bins = 400
# We are creating the histogram.
# This can take a minute.
histogram = histNoiseModel.createHistogram(bins, minVal, maxVal, observation,signal)
# Saving histogram to disc.
np.save(path+nameHistNoiseModel+'.npy', histogram)
histogramFD=histogram[0]
# -
# Let's look at the histogram-based noise model.
plt.xlabel('Observation Bin')
plt.ylabel('Signal Bin')
plt.imshow(histogramFD**0.25, cmap='gray')
plt.show()
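The histogram noise model is conceptually a 2-D histogram of (signal, observation) pairs, normalized per signal bin so that each row is a distribution p(x|s). A rough, self-contained sketch of that idea on synthetic data (an illustration of the concept only, not the actual `histNoiseModel.createHistogram` implementation):

```python
import numpy as np

rng = np.random.RandomState(0)
signal_demo = rng.uniform(0, 10, 10000)
# Observations = signal plus additive Gaussian noise.
observation_demo = signal_demo + rng.normal(0, 1, signal_demo.shape)

counts, s_edges, x_edges = np.histogram2d(signal_demo, observation_demo, bins=20)
# Normalize each signal row into a conditional distribution p(x | s).
row_sums = counts.sum(axis=1, keepdims=True)
p_x_given_s = counts / np.maximum(row_sums, 1)

print(p_x_given_s.shape)  # (20, 20)
```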
# ### Creating the GMM noise model
# Using the raw pixels $x_i$, and our averaged GT $s_i$, we are now learning a GMM based noise model. It describes the distribution $p(x_i|s_i)$ for each $s_i$.
min_signal=np.min(signal)
max_signal=np.max(signal)
print("Minimum Signal Intensity is", min_signal)
print("Maximum Signal Intensity is", max_signal)
# Iterating the noise model training for `n_epochs=4000` and `batchSize=25000` works best for the `Mouse nuclei` dataset.
gaussianMixtureNoiseModel = GaussianMixtureNoiseModel(min_signal = min_signal, max_signal =max_signal,
path=path, weight = None, n_gaussian = n_gaussian,
n_coeff = n_coeff, min_sigma = 50, device = device)
gaussianMixtureNoiseModel.train(signal, observation, batchSize = 25000, n_epochs = 4000,
learning_rate=0.1, name = nameGMMNoiseModel)
# ### Visualizing the Histogram-based and GMM-based noise models
plotProbabilityDistribution(signalBinIndex=170, histogram=histogramFD,
gaussianMixtureNoiseModel=gaussianMixtureNoiseModel, min_signal=minVal,
max_signal=maxVal, n_bin= bins, device=device)
| examples/Mouse_nuclei/0a-CreateNoiseModel (With Calibration Data).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <img src="images/logo.jpg" style="display: block; margin-left: auto; margin-right: auto;" alt="Logo of the 'Learning Python' project: a cartoon snake in yellow and blue weaving between the letters of the course name. The slogan above the course name reads: a free project for learning programming in Hebrew.">
# # <span style="text-align: right; direction: rtl; float: right;">Modules</span>
# ## <span style="text-align: right; direction: rtl; float: right; clear: both;">Introduction</span>
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# Programmers who practice their craft regularly will frequently run into challenges in a variety of areas.<br>
# </p>
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# A good example of such a common challenge is generating random data:
# </p>
# <ul style="text-align: right; direction: rtl; float: right; clear: both;">
# <li>A programmer building a card game will need to write code that draws a random card from the deck.</li>
# <li>A programmer who cannot decide whether to be in Paris, Rome, or the North Pole will write a program that picks the destination at random.</li>
# <li>A programmer who wants to build simulations of real-life situations will often need random data.</li>
# </ul>
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# Another good example is working with dates:
# </p>
# <ul style="text-align: right; direction: rtl; float: right; clear: both;">
# <li>A programmer who wants to know how much time remains until a certain date (a birthday, for example).</li>
# <li>A programmer who wants to check what the date will be in 100,000,000 seconds.</li>
# <li>A programmer who received a document with the birth dates of the entire population of Israel and wants to know which month sees the most babies born in Israel.</li>
# </ul>
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# Solving these challenges can be a complex task that hides many problems and edge cases.<br>
# Imagine how much time would have been saved had someone solved these common problems for all programmers!<br>
# </p>
# ## <span style="text-align: right; direction: rtl; float: right; clear: both;">Definition</span>
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# As a concept, a <dfn>module</dfn> is a self-contained piece of software that serves a well-defined purpose.<br>
# That purpose can be, for example, handling dates, generating random data, or communicating with websites.<br>
# In Python, a module is a file that bundles definitions and statements which together form a set of tools for a particular domain.<br>
# </p>
# ## <span style="text-align: right; direction: rtl; float: right; clear: both;">Usage</span>
# ### <span style="text-align: right; direction: rtl; float: right; clear: both;">Import</span>
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# Let's take as an example the module <code>random</code>, whose purpose is to help us generate random data.<br>
# Before we can use the module's capabilities, we need to ask Python to load it with the keyword <code>import</code>:
# </p>
# +
import random
# random.choice?
# -
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# This statement loads the module and lets us use it later in the code. We say that we have <dfn>imported</dfn> the module <code>random</code>.<br>
# Now that the module has been imported, we can use it in our program's code wherever random things need to be generated.<br>
# To understand how to use the module and what it is capable of, we can read more about it in several ways:
# </p>
# <ol style="text-align: right; direction: rtl; float: right; clear: both;">
# <li>Through the documentation on the <a href="https://docs.python.org/3/library/random.html">Python website</a>, which you can reach by typing <q>python documentation random</q> into a search engine.</li>
# <li>Using the function <var>dir</var> – we can write <code dir="ltr">dir(random)</code>.</li>
# <li>In Jupyter, by typing <code dir="ltr">random.</code> in a code cell and pressing <kbd dir="ltr">Tab ↹</kbd>.</li>
# </ol>
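For example, the second discovery method listed above can be tried directly (a quick sketch):

```python
import random

# dir() lists every name the module exposes; check a few familiar ones.
names = dir(random)
print('choice' in names, 'randint' in names)  # True True
```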
# ### <span style="text-align: right; direction: rtl; float: right; clear: both;">Usage examples</span>
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# In the game "Pokémon Red" you can pick Bulbasaur, Squirtle, or Charmander as your starter Pokémon.<br>
# Which starter is the best choice has been <a href="https://www.google.com/search?q=starter+pokemon+red%2Fblue">hotly debated</a> ever since the game's release in 1996.<br>
# To stay out of the eye of the storm, we will build a program that picks the Pokémon for us at random.
# </p>
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# The <a href="https://docs.python.org/3/library/random.html">documentation</a> of the module <code>random</code> includes a function named <var>choice</var>, which lets us pick a random item out of an iterable.<br>
# Using the documentation is especially recommended where modules are concerned, since the explanations there are clear, and examples of using the module's functions are often provided.<br>
# We will import <code>random</code> and then use <var>choice</var>:
# </p>
# +
import random
pokemons = ['Bulbasaur', 'Squirtle', 'Charmander']
starter_pokemon = random.choice(pokemons)
print(starter_pokemon)
# -
# <div class="align-center" style="display: flex; text-align: right; direction: rtl;">
# <div style="display: flex; width: 10%; float: right; ">
# <img src="images/warning.png" style="height: 50px !important;" alt="Warning!">
# </div>
# <div style="width: 90%">
# <p style="text-align: right; direction: rtl;">
# We could not have used <code>random.choice</code> had we not imported <code>random</code>.<br>
# From the moment we import the module, we can use it throughout the code.
# </p>
# </div>
# </div>
# <div class="align-center" style="display: flex; text-align: right; direction: rtl; clear: both;">
# <div style="display: flex; width: 10%; float: right; clear: both;">
# <img src="images/exercise.svg" style="height: 50px !important;" alt="Exercise">
# </div>
# <div style="width: 70%">
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# Use a function from the <code>random</code> module to simulate rolling a 20-sided die (<a href="https://en.wikipedia.org/wiki/D20_System">D20</a>).<br>
# Draw a number between 1 and 20, this time using a function other than <code>choice</code>.<br>
# Use the information sources mentioned above to find the function that fits the task.
# </p>
# </div>
# <div style="display: flex; width: 20%; border-right: 0.1rem solid #A5A5A5; padding: 1rem 2rem;">
# <p style="text-align: center; direction: rtl; justify-content: center; align-items: center; clear: both;">
# <strong>Important!</strong><br>
# Solve this before you continue!
# </p>
# </div>
# </div>
# ### <span style="text-align: right; direction: rtl; float: right; clear: both;">Advantages</span>
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# These are, among others, the advantages of using the modules that come with Python:
# </p>
# <ol style="text-align: right; direction: rtl; float: right; clear: both;">
# <li>
# <strong>Simplicity</strong>: a module will usually solve one small, well-defined problem for us.
# </li>
# <li>
# <strong>Separation</strong>: the module is kept apart from our code, which keeps our code clean and helps us focus on the problem we actually want to solve.
# </li>
# <li>
# <strong>Reuse</strong>: a module lets us use code we wrote across many projects, without rewriting it in every project.
# </li>
# <li>
# <strong>Polish</strong>: Python's official modules are maintained to a high standard, have few bugs, and cover many edge cases.
# </li>
# </ol>
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# Most of the software you use relies on quite a few modules.<br>
# For now we will focus on modules that ship with Python, but in the future we will use modules created by other programmers, and even create modules of our own.
# </p>
# ### <span style="text-align: right; direction: rtl; float: right; clear: both;">Intermission Exercise: Generating a Password</span>
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# Swordfish Man defines a "strong password generator" as follows:
# </p>
# <ol style="text-align: right; direction: rtl; float: right; clear: both;">
# <li>When the generator function is called, the generator returns a strong password.</li>
# <li>A strong password is between 12 and 20 characters long.</li>
# <li>At least some of the passwords the generator produces contain uppercase letters, lowercase letters, and digits.</li>
# </ol>
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# Write a strong password generator.
# </p>
# <div class="align-center" style="display: flex; text-align: right; direction: rtl;">
# <div style="display: flex; width: 10%; float: right; ">
# <img src="images/warning.png" style="height: 50px !important;" alt="Warning!">
# </div>
# <div style="width: 90%">
# <p style="text-align: right; direction: rtl;">
# Even though this is a nice exercise, never use <code>random</code> for information-security purposes.<br>
# For further reading, see the advantages of the <code><a href="https://docs.python.org/3/library/secrets.html#module-secrets">secrets</a></code> module.
# </p>
# </div>
# </div>
# +
def password():
    # range(10) covers the digits 0-9; codes 65-90 are 'A'-'Z', 97-122 are 'a'-'z'
    characters = list(map(str, range(10))) + list(map(chr, range(65, 91))) + list(map(chr, range(97, 123)))
    size = random.choice(range(12, 21))
    return ''.join(random.sample(characters, size))

print(password())
# -
# ### <span style="text-align: right; direction: rtl; float: right; clear: both;">Importing From a Module</span>
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# The factorial of a number $n$ (written $n!$) is the product of all positive integers up to and including $n$.<br>
# Let's compute the factorial of 6 using the <code>math</code> module:
# </p>
import math
print(math.factorial(6)) # 1 * 2 * 3 * 4 * 5 * 6
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# In the case shown above, we don't really need all the functions in the <code>math</code> module, only the function <var>factorial</var>.<br>
# A nice trick we can use is to import only factorial, using the <code>from</code> keyword:<br>
# </p>
from math import factorial
print(factorial(6))
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# Note that after importing with <code>from</code>, we use factorial directly, without mentioning that it belongs to the <code>math</code> module.<br>
# We can also import more than one name from the same module in a single statement:
# </p>
from math import cos, pi
print(cos(pi))
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# As a rule, prefer importing the whole module rather than parts of it.<br>
# This will help you avoid confusing situations like this one:<br>
# </p>
# +
from math import e
# (imagine a lot of code here)
print(e)  # Where did this e come from? It is often a variable holding an exception; is that the case here too?
# -
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# Continuing in this direction, Python lets us do the following horrifying thing:
# </p>
from math import *
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# Importing with a star makes everything defined in the module "spill" into our program.<br>
# It lets us use all the contents of <code>math</code> without mentioning the module name:
# </p>
print(f"floor(5.5) --> {floor(5.5)}")
print(f"ceil(5.5) --> {ceil(5.5)}")
print(f"pow(9, 2) --> {pow(9, 2)}")
print(f"e --> {e}")
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# This practice is considered very impolite, and you are asked not to use it, unless the module's documentation explicitly instructs you to.<br>
# There are quite a few sensible reasons behind this rule:
# </p>
# <ul style="text-align: right; direction: rtl; float: right; clear: both;">
# <li>It makes it harder for a programmer reading the code to understand where <var>ceil</var>, <var>floor</var>, <var>e</var> and <var>pow</var> were defined.</li>
# <li>We "pollute" our environment with many definitions we will never use.</li>
# <li>Some of the tools you will use in the future won't be able to recognize these names, since you did not import them explicitly.</li>
# </ul>
# ### <span style="text-align: right; direction: rtl; float: right; clear: both;">Renaming on Import</span>
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# The <code>turtle</code> module is a popular way to teach children programming through graphics.<br>
# Programming with turtle is a sort of game: there is a turtle that walks around the plane and leaves color everywhere it goes.<br>
# You can tell the turtle how many degrees to turn in each direction, and how far to walk in the direction it is facing.<br>
# This way, at the end of the turtle's route, a charming drawing emerges.<br>
# The idea was created as part of the <em>Logo</em> programming language in 1967.
# </p>
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# Let's see an example of the output of such a program, in which we draw 100 squares, each offset by one degree from the previous one.<br>
# We recommend playing around with the module a bit (it's fun!) and seeing whether you manage to draw a star, for example :)
# </p>
# +
import turtle
turtle.speed(10)
for i in range(400):
    turtle.forward(50 + i)
    turtle.right(91)
turtle.done()
# -
# <div class="align-center" style="display: flex; text-align: right; direction: rtl;">
# <div style="display: flex; width: 10%; float: right; ">
# <img src="images/warning.png" style="height: 50px !important;" alt="Warning!">
# </div>
# <div style="width: 90%">
# <p style="text-align: right; direction: rtl;">
# Running <code>turtle</code> code will open a new window in which the program's output is drawn.<br>
# To keep running the cells in the notebook, close that window.
# </p>
# </div>
# </div>
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# To change the name by which we refer to a module, we use the <code>as</code> keyword.<br>
# For example:
# </p>
# +
import turtle as kipik
kipik.speed(100)  # Kipik is much faster than a regular turtle
for i in range(400):
    kipik.forward(50 + i)
    kipik.right(91)
kipik.done()
# -
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# This way we can shorten the name of the module we import, and avoid needless verbosity.<br>
# Even so, importing a module under a different name is considered impolite practice, which may confuse readers who are familiar with the names of the existing modules.
# </p>
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# There are two exceptional cases in which using <code>as</code> is considered desirable:
# </p>
#
# <ol style="text-align: right; direction: rtl; float: right; clear: both;">
# <li>When the module's documentation asks us to use <code>as</code>.</li>
# <li>When we want to experiment and save time, and so we shorten the names of the modules, functions, or constants we import.</li>
# </ol>
# ### <span style="text-align: right; direction: rtl; float: right; clear: both;">Etiquette</span>
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# Using modules comes with style rules that most programmers agree on.<br>
# They appear in a document called <a href="https://www.python.org/dev/peps/pep-0008/#imports">PEP8</a>, whose purpose is to define what properly styled Python code looks like.<br>
# Here are a few rules worth following:
# </p>
# <ul style="text-align: right; direction: rtl; float: right; clear: both;">
# <li>Module imports always go at the top of the code, before anything else.</li>
# <li>When importing more than one module, the imports should be in lexicographic order, by the name of the imported module.</li>
# <li>Technically, you can import more than one module per line by separating the module names with commas. In practice, it is impolite.</li>
# <li>When using <code>from</code> to import more than one name, the names should be sorted in lexicographic order.</li>
# <li>Avoid importing with a star.</li>
# </ul>
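# Put together, an import header that follows these rules might look like this (the module choice here is just for illustration):

```python
# Standard-library imports: one module per line, at the top of the file,
# sorted by module name; a `from` import lists its names alphabetically.
import calendar
import random
from datetime import date, datetime

print(calendar.isleap(2020))  # -> True
```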
# ### <span style="text-align: right; direction: rtl; float: right; clear: both;">More Examples</span>
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# What is the date today?
# </p>
import datetime
print("What is the time now?")
print(datetime.datetime.now())
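# `now()` returns a full timestamp; if we only want parts of it, `strftime` formats a datetime using directive codes. A small sketch:

```python
import datetime

now = datetime.datetime.now()
# %Y = 4-digit year, %m = month, %d = day, %H:%M = hour and minute.
print(now.strftime('%Y-%m-%d'))
print(now.strftime('%H:%M'))
```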
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# We will use the <code>calendar</code> module to print the calendar of the current month:
# </p>
# +
from calendar import prmonth as print_calendar  # impolite, but a good example
import datetime
current_date = datetime.datetime.now()
print_calendar(current_date.year, current_date.month)
# -
# ## <span style="align: right; direction: rtl; float: right; clear: both;">Summary</span>
# <ul style="text-align: right; direction: rtl; float: right; clear: both;">
# <li>Modules are pieces of software whose purpose is to solve a problem from a particular domain.</li>
# <li>We can import modules and use what they offer to reach our goals quickly and easily.</li>
# <li>The modules that ship with Python are well maintained, and will save us bugs and fiddling with edge cases.</li>
# <li>Before setting out to solve a complex problem, we will try to find a solution in Python's documentation, in the form of an existing module.</li>
# </ul>
# ## <span style="align: right; direction: rtl; float: right; clear: both;">Terms</span>
# <dl style="text-align: right; direction: rtl; float: right; clear: both;">
# <dt>Module</dt><dd>A self-contained unit of code designed to handle a specific purpose, which external code can use.<br>Known in other programming languages as a <dfn>library</dfn>.</dd>
# <dt>Import</dt><dd>A declaration, using <code>import</code>, that we are going to use a particular module in our code.</dd>
# <dt>PEP8</dt><dd>A standards document describing in detail the proper way to style Python code.</dd>
# </dl>
# ## <span style="text-align: right; direction: rtl; float: right; clear: both;">Exercises</span>
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# In the following exercises, use the internet to find modules and functions that will help you solve the exercise.<br>
# Try to avoid searching for and reading solutions to the specific exercise that appears in the notebook (such as searches that include "deck of cards").
# </p>
# ### <span style="text-align: right; direction: rtl; float: right; clear: both;">This Is the Way</span>
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# Write a function that receives a path to a directory, and returns the list of all the files in that directory whose names start with the sequence of letters "<i>deep</i>".<br>
# Verify that calling the function on the <i>images</i> directory returns two files.
# </p>
# +
import os
def files(prefix, path):
    return list(filter(lambda file: file.startswith(prefix), os.listdir(path)))

files("deep", "images")
# -
# ### <span style="text-align: right; direction: rtl; float: right; clear: both;">A Strange Card Game</span>
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# In a standard deck of 52 cards, every card has two attributes:
# </p>
# <ol style="text-align: right; direction: rtl; float: right; clear: both;">
# <li>
# <strong>Value</strong>: a number between 1 and 13.
# </li>
# <li>
# <strong>Suit</strong>: Club, Diamond, Heart, or Spade.
# </li>
# </ol>
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# Every combination of value and suit appears in the deck exactly once.
# </p>
# <ol style="text-align: right; direction: rtl; float: right; clear: both;">
# <li>Create a full deck of cards.</li>
# <li>Shuffle the cards.</li>
# <li>Deal them among 4 players.</li>
# <li>Print which player has the highest card total.</li>
# </ol>
# +
import random
shapes = ['diamond', 'spade', 'heart', 'club']
cards = [(shape, value) for value in range(1, 14) for shape in shapes]
random.shuffle(cards)
# Deal round-robin: player i gets every 4th card, starting from position i.
card_values = [card[1] for card in cards]
totals = [sum(card_values[i::4]) for i in range(4)]
# Print the player (1-4) holding the highest total, not just the total itself.
print(totals.index(max(totals)) + 1)
# -
# ### <span style="text-align: right; direction: ltr; float: right; clear: both;">It's the final?</span>
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# Write a function that receives a future date in the form YYYY-MM-DD, and returns the number of days left until we reach the longed-for date.<br>
# For example, if today's date is 2020-05-04 and we received 2020-05-25 as input, the function will return <samp>21</samp>.
# </p>
# +
from datetime import datetime
def final(final_date):
    current_date = datetime.now().date()
    final_date = datetime.fromisoformat(final_date).date()
    delta = final_date - current_date
    return delta.days

final('2022-03-12')
# -
# ### <span style="text-align: right; direction: rtl; float: right; clear: both;">I Have No Vinaigrette</span>
# <p style="text-align: right; direction: rtl; float: right; clear: both;">
# Write a program that receives as input from the user two dates in the form: YYYY-MM-DD.<br>
# The program will draw a new random date that lies between the two dates the user entered.<br>
# For example, for the inputs 1912-06-23 and 1954-06-07, one possible output is <samp>1939-09-03</samp>.<br>
# Since I only go to the grocery store on Mondays and I am a heavy consumer of vinaigrette dressing, if the date falls on a Monday, print: "I have no vinaigrette!"<br>
# Hint: <span style="background: black;">read about EPOCH</span>.
# </p>
# +
import datetime
import random
def vinigret(date_start, date_end):
    date_start = datetime.datetime.fromisoformat(date_start).date()
    date_end = datetime.datetime.fromisoformat(date_end).date()
    delta = date_end - date_start
    rand = random.choice(range(1, delta.days))
    new_date = date_start + datetime.timedelta(days=rand)
    if new_date.weekday() == 0:  # in Python, Monday is weekday 0
        return "I have no vinaigrette!"
    return new_date

vinigret('2020-03-12', '2022-03-12')
| week5/1_Modules.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (TensorFlow 2.3 Python 3.7 CPU Optimized)
# language: python
# name: python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-east-1:081325390199:image/tensorflow-2.3-cpu-py37-ubuntu18.04-v1
# ---
# ## Download the Fashion-MNIST dataset
# +
import os
import numpy as np
from tensorflow.keras.datasets import fashion_mnist
(x_train, y_train), (x_val, y_val) = fashion_mnist.load_data()
os.makedirs("./data", exist_ok=True)
np.savez('./data/training', image=x_train, label=y_train)
np.savez('./data/validation', image=x_val, label=y_val)
# -
# !pygmentize fmnist-2.py
# ## Upload Fashion-MNIST data to S3
# +
import sagemaker
print(sagemaker.__version__)
sess = sagemaker.Session()
role = sagemaker.get_execution_role()
bucket = sess.default_bucket()
prefix = 'keras2-fashion-mnist'
training_input_path = sess.upload_data('data/training.npz', key_prefix=prefix+'/training')
validation_input_path = sess.upload_data('data/validation.npz', key_prefix=prefix+'/validation')
output_path = 's3://{}/{}/output/'.format(bucket, prefix)
chk_path = 's3://{}/{}/checkpoints/'.format(bucket, prefix)
print(training_input_path)
print(validation_input_path)
print(output_path)
print(chk_path)
# -
# ## Train with Tensorflow
# +
from sagemaker.tensorflow import TensorFlow
tf_estimator = TensorFlow(entry_point='fmnist-2.py',
                          role=role,
                          instance_count=1,
                          instance_type='ml.g4dn.xlarge',
                          framework_version='2.1.0',
                          py_version='py3',
                          hyperparameters={'epochs': 20},
                          output_path=output_path,
                          use_spot_instances=True,
                          max_run=3600,
                          max_wait=7200)
# -
objective_metric_name = 'val_acc'
objective_type = 'Maximize'
metric_definitions = [
{'Name': 'val_acc', 'Regex': 'val_accuracy: ([0-9\\.]+)'}
]
# +
from sagemaker.tuner import ContinuousParameter, IntegerParameter
hyperparameter_ranges = {
'learning_rate': ContinuousParameter(0.001, 0.2, scaling_type='Logarithmic'),
'batch-size': IntegerParameter(32,512)
}
# -
from sagemaker.tuner import HyperparameterTuner
tuner = HyperparameterTuner(tf_estimator,
                            objective_metric_name,
                            hyperparameter_ranges,
                            metric_definitions=metric_definitions,
                            objective_type=objective_type,
                            max_jobs=30,
                            max_parallel_jobs=2,
                            early_stopping_type='Auto')
tuner.fit({'training': training_input_path, 'validation': validation_input_path}, wait=False)
# +
# Wait for a couple of jobs to start
from sagemaker.analytics import HyperparameterTuningJobAnalytics
exp = HyperparameterTuningJobAnalytics(
hyperparameter_tuning_job_name=tuner.latest_tuning_job.name)
jobs = exp.dataframe()
jobs.sort_values('FinalObjectiveValue', ascending=False)
# -
# ## Deploy
# +
import time
tf_endpoint_name = 'keras-tf-fmnist-'+time.strftime("%Y-%m-%d-%H-%M-%S", time.gmtime())
tf_predictor = tuner.deploy(
initial_instance_count=1,
instance_type='ml.m5.large',
endpoint_name=tf_endpoint_name)
# -
# ## Predict
# +
# %matplotlib inline
import random
import matplotlib.pyplot as plt
num_samples = 5
indices = random.sample(range(x_val.shape[0] - 1), num_samples)
images = x_val[indices]/255
labels = y_val[indices]
for i in range(num_samples):
    plt.subplot(1, num_samples, i + 1)
    plt.imshow(images[i].reshape(28, 28), cmap='gray')
    plt.title(labels[i])
    plt.axis('off')
payload = images.reshape(num_samples, 28, 28, 1)
# -
response = tf_predictor.predict(payload)
prediction = np.array(response['predictions'])
predicted_label = prediction.argmax(axis=1)
print('Predicted labels are: {}'.format(predicted_label))
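# A quick sanity check is to compare the predictions against the true labels of the sampled batch. The arrays below are toy stand-ins for the notebook's `labels` and `predicted_label` variables:

```python
import numpy as np

# Toy stand-ins for the notebook's `labels` and `predicted_label` arrays.
labels = np.array([9, 2, 1, 1, 6])
predicted_label = np.array([9, 2, 1, 3, 6])

# Element-wise comparison gives a boolean array; its mean is the accuracy.
accuracy = (predicted_label == labels).mean()
print('Batch accuracy: {:.0%}'.format(accuracy))  # -> Batch accuracy: 80%
```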
# ## Clean up
tf_predictor.delete_endpoint()
| Chapter 10/model_tuning/Keras on Fashion-MNIST - Automatic Model Tuning.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: LHL_Bootcamp
# language: python
# name: lhl_bootcamp
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/robynmundle/predicting_flight_delays/blob/main/robyn_workspace/Feature%20Engineering%20COLAB.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="royal-following"
# Import Packages
# + id="grateful-hebrew"
import pandas as pd
pd.set_option('display.max_columns', None)
import numpy as np
from sklearn import preprocessing
import time
from datetime import datetime, date, time  # note: this `time` shadows the `time` module imported above
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
import warnings
warnings.filterwarnings('ignore')
import copy
# + [markdown] id="perceived-connecticut"
# Completed Functions
# + id="greek-eagle"
# CRS_ELAPSED_TIME --> HAUL_LENGTH
def haul(df, col):
    '''Determine if flight length is SHORT, MEDIUM or LONG based on expected elapsed flight time.
    Input:
        (0) df containing flight information,
        (1) column containing the elapsed flight time in minutes
    Output: 'haul_length' column determining haul length category per row in df'''
    length = []
    for i in df[col]:
        if i < (3 * 60):  # up to 3 hours
            length.append(0)  # 0 = SHORT HAUL
        elif (i >= (3 * 60)) and (i < (6 * 60)):  # 3-6 hours
            length.append(1)  # 1 = MEDIUM HAUL
        elif i >= (6 * 60):  # 6+ hours
            length.append(2)  # 2 = LONG HAUL
    df['haul_length'] = length
# example of implementation: haul(flight10k, 'crs_elapsed_time')


# CRS_DEP_TIME (hhmm) --> CRS_DEP_TIME (hh) -- to be used within time_day function
def gethour(df, col):
    '''Convert hhmm to hh (24-hr) hour-only output
    Input:
        (0) df containing flight information,
        (1) column containing the hhmm time
    Output: rewrite on input column in rounded hh format'''
    values = []
    for i in df[col]:
        mins = (i % 100) / 60
        hour = i // 100
        hh = round(hour + mins)
        values.append(hh)
    df[col] = values
# example of implementation: gethour(flight10k, 'crs_dep_time')


# CRS_DEP/ARR_TIME (hhmm) --> hot encoded categorical time of day 'morning, aft...'
def time_day(df, col):
    ''' Input:
        (0) df containing flight information
        (1) corresponding column of time of flight (i.e. departure or arrival) (format hhmm)
    Output: rewrite of time column into categorical MORNING, AFTERNOON, EVENING, or OVERNIGHT'''
    gethour(df, col)
    timeday = []
    for i in df[col]:
        if (i >= 23) or (i < 5):
            timeday.append(0)  # 0 = OVERNIGHT
        elif (i >= 5) and (i < 12):
            timeday.append(1)  # 1 = MORNING
        elif (i >= 12) and (i < 18):
            timeday.append(2)  # 2 = AFTERNOON
        elif (i >= 18) and (i < 23):
            timeday.append(3)  # 3 = EVENING
    return timeday
# example of implementation: time_day(flight10k, 'crs_dep_time')
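# The same SHORT/MEDIUM/LONG bucketing that `haul` builds with a loop can also be expressed with `pd.cut`; this is an equivalent vectorized sketch on a toy frame, not the notebook's implementation:

```python
import pandas as pd

# Toy frame standing in for the notebook's flight data.
df = pd.DataFrame({'crs_elapsed_time': [95, 240, 450]})

# right=False makes the bins [0, 180), [180, 360), [360, inf),
# matching haul()'s <3h, 3-6h, 6h+ categories (labels 0/1/2).
buckets = pd.cut(df['crs_elapsed_time'],
                 bins=[0, 3 * 60, 6 * 60, float('inf')],
                 labels=[0, 1, 2], right=False)
print(buckets.tolist())  # -> [0, 1, 2]
```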
# + [markdown] id="pursuant-stamp"
# Open CSVs of Pre-Evaluated Features
# + id="fallen-interaction"
airline_rating = pd.read_csv('data/airline_delay_rating.csv', index_col=0)
origin_traffic = pd.read_csv('data/origin_traffic_rating.csv', index_col=0)
origin_delay = pd.read_csv('data/origin_delay_rating.csv', index_col=0)
dest_traffic = pd.read_csv('data/dest_traffic_rating.csv', index_col=0)
delay_dep_h = pd.read_csv('data/crs_dep_time_delay_rating.csv', index_col=0)
delay_arr_h = pd.read_csv('data/crs_arr_time_delay_rating.csv', index_col=0)
# + [markdown] id="massive-longitude"
# Open CSV of Flight Information to Model
# + id="interpreted-passion" outputId="7e374601-aa66-4546-d925-5aa41b736a06" colab={"base_uri": "https://localhost:8080/", "height": 134}
# This is for the dataset you want to investigate
flights = pd.read_csv('data/flights250K.csv', index_col=0)
flights.head(1)
flights.shape
# + [markdown] id="reflected-genius"
# Build df based on columns we will use in transformation - Data Cleaning and Feature Implementation
#
# **See option A or B in first rows to build df based on training or test dataset**
# + id="guilty-being" outputId="4a7d6b1f-b16e-472a-a315-4e46a6efb9d4" colab={"base_uri": "https://localhost:8080/", "height": 221}
# A - if this is a training dataset, we need arr_delay as our target variable so use this first block of code
model_df = flights[flights['cancelled'] == 0][['arr_delay','fl_date','op_unique_carrier','origin','dest','crs_dep_time','crs_arr_time','crs_elapsed_time','distance']]
# B - if this is a testing dataset, we will not have arr_delay and cannot include it
#model_df = flights[flights['cancelled'] == 0][['tail_num','op_carrier_fl_num','fl_date','op_unique_carrier','origin','dest','crs_dep_time','crs_arr_time','crs_elapsed_time','distance']]
model_df['crs_elapsed_time'] = model_df['crs_elapsed_time'].fillna(0)
# first regression will be simple-- is the flight going to be delayed or not?
if 'arr_delay' in model_df:
    model_df['delay_flag'] = model_df['arr_delay'].map(lambda x: 0 if x <= 0 else 1)
    arr_delay = model_df['arr_delay']
    model_df.drop(columns='arr_delay', inplace=True)
# label encode tail_num for identification of the flight later
#le = preprocessing.LabelEncoder()
#tail_num = model_df['tail_num'].values
#model_df['tail_num'] = le.fit_transform(tail_num)
# convert date to datetime in order to grab the month
model_df['fl_date'] = pd.to_datetime(model_df['fl_date'])
#model_df['year'] = model_df['fl_date'].dt.year
model_df['month'] = model_df['fl_date'].dt.month
model_df['day'] = model_df['fl_date'].dt.day
model_df['weekday'] = model_df['fl_date'].dt.dayofweek
model_df.drop(columns='fl_date', inplace=True) # this won't be needed after we got month
# set delay rating based on expected performance of the airline
model_df = model_df.merge(airline_rating, left_on='op_unique_carrier', right_on='airline', how='left')
model_df.drop(columns=['op_unique_carrier','airline'],inplace=True)
# obtain haul length of the flight using haul function defined above
haul(model_df, 'crs_elapsed_time')
#model_df.drop(columns=['crs_elapsed_time'],inplace=True)
# new column of categorical time of day information using time_day function defined above as well as expected delays relating to the time of day departure
model_df['dep_timeday'] = time_day(model_df, 'crs_dep_time')
model_df['arr_timeday'] = time_day(model_df, 'crs_arr_time')
model_df = model_df.merge(delay_dep_h, left_on='crs_dep_time', right_on='crs_dep_time', how='left')
model_df = model_df.merge(delay_arr_h, left_on='crs_arr_time', right_on='crs_arr_time', how='left')
model_df.drop(columns=['crs_dep_time','crs_arr_time'],inplace=True)
# classify the expected traffic of the origin and departure airports
model_df = model_df.merge(origin_traffic, left_on='origin', right_on='origin', how='left')
model_df = model_df.merge(dest_traffic, left_on='dest', right_on='dest', how='left')
model_df = model_df.fillna(model_df['busy_origin'].mean())
model_df = model_df.merge(origin_delay, left_on='origin', right_on='origin', how='left')
model_df.drop(columns=['origin','dest'],inplace=True)
#if 'arr_delay' in model_df:
# training_full = model_df.copy(deep=True)
# model_df.drop(columns='arr_delay', inplace=True)
# have a look at the dataset
model_df.head()
model_df.shape
# + id="worst-helicopter"
# NOTE: training_full is only defined if the commented-out copy above is enabled
#training_full.head(1)
#training_full.shape
# + id="magnetic-criterion"
from sklearn.preprocessing import MinMaxScaler, RobustScaler, StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
from sklearn import metrics
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import r2_score
import seaborn as sns; sns.set(style='darkgrid', context='talk')
import matplotlib.pyplot as plt
# + [markdown] id="accredited-clause"
# Data Scaling
# + id="legendary-mouth" outputId="e4e02983-0aab-4148-cdd3-478c587dc53e" colab={"base_uri": "https://localhost:8080/"}
if 'delay_flag' in model_df:  # training dataset
    X = model_df.drop(columns=['delay_flag'])
else:  # test set
    X = model_df
y = model_df['delay_flag']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
scaler = RobustScaler()
scaler.fit(X_train)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
# + [markdown] id="drawn-helen"
# Logistic Regression -- is it a delay or not?
# + id="plain-township" outputId="4a522d45-6782-43bf-b382-fb8b89c477a1" colab={"base_uri": "https://localhost:8080/"}
# %%time
# Logistic Regression
log_params = {'penalty': ['l1', 'l2'], 'C': np.logspace(-4, 4, 20)}
grid_log = GridSearchCV(LogisticRegression(), log_params, cv=5, verbose=True, n_jobs=-1)
grid_log.fit(X_train, y_train)
logreg = grid_log.best_estimator_
print(logreg.get_params)
logreg_score = cross_val_score(logreg, X_train, y_train, cv=5)
print('\nLogistic Regression Cross Validation Score: ', round(logreg_score.mean() * 100, 2).astype(str) + '%')
print("Training R2 / Variance: ", round(grid_log.best_score_,2))
print(f"Residual Sum of Squares (RSS): {round(np.mean((grid_log.predict(X_test) - y_test) ** 2),2)}")
y_logreg = logreg.predict(X_test)
print('\nLogistic Regression - y_test')
#print('Test R2 Score \t{:.2f}'.format(metrics.r2_score(y_test, y_logreg)))
print('Recall \t\t{:.2f}'.format(metrics.recall_score(y_test, y_logreg)))
print('Precision \t{:.2f}'.format(metrics.precision_score(y_test, y_logreg)))
print('F1 Score \t{:.2f}'.format(metrics.f1_score(y_test, y_logreg)))
print('Accuracy \t{:.2f} <--'.format(metrics.accuracy_score(y_test, y_logreg)))
print('AUC Score \t{:.2f}\n'.format(metrics.roc_auc_score(y_test, y_logreg)))
# + id="minus-services" outputId="2d80a3c8-82ba-4ee7-c7cc-46841ce5d5b4" colab={"base_uri": "https://localhost:8080/", "height": 338}
y_score = logreg.decision_function(X_test)
from sklearn.metrics import average_precision_score
average_precision = average_precision_score(y_test, y_score)
from sklearn.metrics import plot_precision_recall_curve
disp = plot_precision_recall_curve(logreg, X_test, y_test)
disp.ax_.set_title('2-class Precision-Recall curve: '
'AP={0:0.2f}'.format(average_precision))
# + [markdown] id="flush-thong"
# Wow, ok, those are some results! Let's now run the prediction on the entire X, so we can add the result as a column in model_df and move on to a second model that estimates how long the delay will be.
# + id="distant-extreme"
y_logregX = logreg.predict(X)
if 'delay_pred' not in model_df:
    model_df['delay_pred'] = y_logregX
#model_df['tail_num'] = le.inverse_transform(model_df['tail_num'])
#model_df.head()
#model_df.shape
# + id="equipped-uniform"
model_df = model_df.join(arr_delay)
model_df.dropna(inplace=True)
#model_df.head()
#model_df.shape
# + id="fleet-reservoir"
model_df.drop(columns='delay_flag', inplace=True)
delayed = model_df['delay_pred'] == 1
model_df2 = model_df[delayed]
#model_df2.head()
#model_df2.shape
# + id="vital-break" outputId="1b677491-be9b-4695-cfab-52b4180f60ec" colab={"base_uri": "https://localhost:8080/", "height": 97}
delay_bin = []
for i in model_df2['arr_delay']:
    if i <= 5:
        delay_bin.append(0)  # no delay (within 5 minutes)
    elif (i > 5) and (i <= 10):
        delay_bin.append(1)  # expect a 5 to 10 minute delay
    elif (i > 10) and (i <= 20):
        delay_bin.append(2)  # expect a 10 to 20 minute delay
    elif (i > 20) and (i <= 45):
        delay_bin.append(3)  # expect a 20 to 45 minute delay
    elif i > 45:
        delay_bin.append(4)  # expect a 45+ minute delay
model_df2['delay_range'] = delay_bin
if 'arr_delay' in model_df2:
    model_df2.drop(columns='arr_delay', inplace=True)
model_df2.head(1)
model_df2.shape
#model_df2['delay_range'].value_counts()
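# The binning loop above can be reproduced with `np.digitize`; an equivalent sketch on toy delays (with `right=True`, the bin edges 5/10/20/45 are inclusive on the right, matching the loop):

```python
import numpy as np

arr_delay = np.array([3, 8, 15, 30, 90])
# Edges 5/10/20/45 with right=True: <=5 -> 0, (5,10] -> 1, (10,20] -> 2,
# (20,45] -> 3, >45 -> 4 -- the same buckets as the loop above.
delay_range = np.digitize(arr_delay, bins=[5, 10, 20, 45], right=True)
print(delay_range.tolist())  # -> [0, 1, 2, 3, 4]
```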
# + [markdown] id="artistic-haiti"
# Second Model: Arrival Delay Range Prediction
#
# Data Scaling
# + id="about-assets" outputId="18e73e09-ecec-4db3-8d85-e5195cd6f400" colab={"base_uri": "https://localhost:8080/"}
if 'delay_range' in model_df2:  # training dataset
    X = model_df2.drop(columns=['delay_range'])
else:  # test set
    X = model_df2
y = model_df2['delay_range']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
scaler = RobustScaler()
scaler.fit(X_train)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
# + [markdown] id="mighty-spoke"
# Random Forest Classifier -- can we predict the predicted delay's time allotment?
# + id="least-robin" outputId="669b7037-52a1-42b8-b9be-b0fe781304cd" colab={"base_uri": "https://localhost:8080/"}
# %%time
tree_params = {"n_estimators": [100, 250, 500, 750, 1000],
'max_depth': [int(x) for x in np.linspace(1, 32, num = 5)]}
grid_tree = GridSearchCV(RandomForestClassifier(), tree_params, cv=3, verbose=True, n_jobs=-1)
grid_tree.fit(X_train, y_train)
forest = grid_tree.best_estimator_
print(forest.get_params)
forest_score = cross_val_score(forest, X_train, y_train, cv=3)
print('Random Forest Classifier Cross Validation Score: ', round(forest_score.mean() * 100, 2).astype(str) + '%')
print("Training R2 / Variance: ", round(grid_tree.best_score_,2))
print(f"Residual Sum of Squares (RSS): {round(np.mean((grid_tree.predict(X_test) - y_test) ** 2),2)}")
y_forest = forest.predict(X_test)
print('\nRandom Forest Classifier - y_test')
#print('AUC Score \t{:.2f}\n'.format(metrics.roc_auc_score(y_test, y_forest)))
# + id="trying-nashville" outputId="031480ff-8d54-484b-bc2c-ee05b2bcca5a" colab={"base_uri": "https://localhost:8080/"}
metrics.confusion_matrix(y_test, y_forest)
# + id="sagWPY2dRa9R" outputId="45e3848f-73de-4b17-f4fe-1e9cde2166b7" colab={"base_uri": "https://localhost:8080/"}
y_forest_proba = forest.predict_proba(X_test)
print('AUC Score \t{:.2f}\n'.format(metrics.roc_auc_score(y_test, y_forest_proba, multi_class='ovr', average="weighted")))
# + id="6s2nIyw_ReKG" outputId="e27011bd-f8d5-4490-d045-9df866bf90eb" colab={"base_uri": "https://localhost:8080/"}
forest.score(X_test, y_test)
# + id="te2KYl6FS_oL"
| robyn_workspace/Modeling Attempt Notebooks/Feature Engineering COLAB.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="Bk93_iQp5xzE" colab_type="code" colab={}
import numpy as np
# + id="sAhv-W0y51fp" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="7c952126-5c5a-4971-98e3-ebeffd2e9fe5"
from keras.datasets import mnist
# + id="emvb3fMS583H" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 54} outputId="c53ab040-7714-4877-b424-f1bf4d6b030b"
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# + id="DT5ocxwQ5_pJ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="85c04b50-0fd5-4c3b-bc8f-aa976a835927"
train_images.shape
# + id="ZqcHHdIT6aOu" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="ee6520f4-3f45-4a62-cc0c-1b463e0f2aef"
len(train_images)
# + id="y4WnaJs-6fFX" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="7bd0f5fb-6382-4287-8bd1-ad1b776afd91"
train_labels
# + id="FhQR2bE36n54" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="2a087f31-6dcc-4cfb-8dc0-3242d1fd9523"
train_labels.shape
# + id="wU2-CUi36sA7" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="fa7c162e-84e9-435e-a51b-f7060f017057"
len(train_labels)
# + id="-QaqL64k6wUJ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="ac9120a7-7835-4de6-eb30-4019aae3b622"
test_labels.shape
# + id="Ym9LOcrz61Ie" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="a30df96f-a4b9-489e-b9c2-3d71afdfbc5a"
len(test_labels)
# + id="XH_JBxha645u" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="83a2c1f7-2831-455a-cde7-3d6d40afb80c"
test_labels
# + id="c4MooVXs67CF" colab_type="code" colab={}
from keras import models
from keras import layers
# + id="npALgmyF7IVz" colab_type="code" colab={}
network = models.Sequential()
# + id="Ki95fBcH7NOY" colab_type="code" colab={}
network.add(layers.Dense(512, activation="relu", input_shape = (28 * 28,)))
# + id="SxU3mevE73FF" colab_type="code" colab={}
network.add(layers.Dense(10, activation='softmax'))
# + id="iyC1Da4Z8C19" colab_type="code" colab={}
network.compile(optimizer = 'rmsprop',
loss = 'categorical_crossentropy',
metrics = ['accuracy'])
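`categorical_crossentropy` averages, over samples, the negative log-probability the network assigns to the true class. A small numpy sketch of that formula (assuming one-hot targets and rows of valid probabilities; not Keras's own implementation):

```python
import numpy as np

def categorical_crossentropy(y_true, y_pred):
    # mean over samples of -sum(y_true * log(y_pred))
    return float(np.mean(-np.sum(y_true * np.log(y_pred), axis=1)))

y_true = np.array([[0.0, 1.0], [1.0, 0.0]])   # one-hot targets
y_pred = np.array([[0.2, 0.8], [0.9, 0.1]])   # predicted class probabilities
loss = categorical_crossentropy(y_true, y_pred)  # -(log 0.8 + log 0.9) / 2
```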
# + id="mG0ON9X-8e_f" colab_type="code" colab={}
train_images = train_images.reshape((60000, 28 * 28))
train_images = train_images.astype('float32') / 255
# + id="ChXF8GLx86fc" colab_type="code" colab={}
test_images = test_images.reshape((10000, 28 * 28))
test_images = test_images.astype('float32') / 255
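The reshape-and-rescale step above can be sanity-checked on a tiny random batch: each 28×28 `uint8` image becomes a flat `float32` vector with values in [0, 1] (a standalone sketch, not part of the MNIST pipeline):

```python
import numpy as np

# two fake 28x28 grayscale images with pixel values 0..255
imgs = np.random.randint(0, 256, size=(2, 28, 28), dtype=np.uint8)
flat = imgs.reshape((2, 28 * 28)).astype('float32') / 255
```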
# + id="vvZnEetb9Iph" colab_type="code" colab={}
from keras.utils import to_categorical
# + id="uOewyBtS9QET" colab_type="code" colab={}
train_labels = to_categorical(train_labels)
# + id="y_7I8bk69YZ2" colab_type="code" colab={}
test_labels = to_categorical(test_labels)
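`to_categorical` turns integer class labels into one-hot rows; an equivalent numpy sketch for three classes (illustrative, not the Keras implementation):

```python
import numpy as np

labels = np.array([0, 2, 1])
# row i is all zeros except a 1 at position labels[i]
one_hot = np.eye(3, dtype='float32')[labels]
```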
# + id="mn6eOtGZ9drT" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 403} outputId="a6679313-f0fd-49f4-e82f-9da3314f3921"
network.fit(train_images, train_labels, epochs=10, batch_size = 128)
# + id="kPTpt2CR9qWl" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="3f26cf46-cd5b-43d9-c6aa-f6f8ca1be622"
test_loss , test_accuracy = network.evaluate(test_images, test_labels)
# + id="yUIZVScG-U8D" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="fa89470c-6337-4927-a610-a20127fc3a78"
print("test loss",test_loss)
# + id="x_2EQtfL-gJZ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="dcae34d6-6fd2-4a8a-fb4a-bbf34a33beb7"
print("Test acc",test_accuracy)
# + id="SGF72PV6-rRD" colab_type="code" colab={}
| Mnist_Practice.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
# + tags=["remove-cell"]
library(repr) ; options(repr.plot.width=6, repr.plot.height= 6) # Change plot sizes (in inches)
# -
# # Biological Computing in R
# ## Introduction
#
# R is freely available statistical software with strong programming capabilities, widely used by professional scientists around the world. It was based on the commercial statistical software `S` by <NAME> and <NAME>. The first stable version appeared in 2000. It was essentially designed for *programming* statistical analysis and data-mining. It became the standard tool for data analysis and visualization in biology in a matter of just 10 years or so. It is also frequently used for mathematical modelling in biology.
#
# This chapter aims to lay down the foundations for you to use R for scientific computing in biology by exploiting its full potential as a fully featured object-oriented programming language. Specifically, this chapter aims at teaching you:
#
# * Basic R syntax and programming conventions, assuming you have never set your eyes on R
# * Principles of data processing and exploration (including visualization) using R
# * Principles of clean and efficient programming using R
# * To generate publication quality graphics in R
# * To develop reproducible data analysis "work flows" so you (or anybody else) can run and re-run your analyses, graphics outputs and all, in R
# * To make R simulations more efficient using vectorization
# * To find and fix errors in R code using debugging
# * To make data wrangling and analyses more efficient and convenient using special packages
# * Some additional, advanced topics (accessing databases, building your own packages, etc.).
#
# ### Why R?
#
# There are many commercial statistical software packages (Minitab, SPSS, etc.) in the world that are mouse-driven, warm, and friendly, and have lots of statistical tests and plotting/graphing capabilities. Why not just use them? Here are some very good reasons:
#
# * R has numerous tried and tested packages for data-handling and processing
# * R provides basically every statistical test you'll ever need and is constantly being improved. You can tailor your analyses rather than trying to use the more limited options each statistical software package can offer.
# * R has excellent graphing and visualization capabilities and produces publication-quality graphics that can be reproduced with scripts – you won't get [RSI](https://en.wikipedia.org/wiki/Repetitive_strain_injury) mouse-clicking your way through graphing and re-graphing your data every time you change your analysis!
# * It also has good capabilities for mathematical calculations, including matrix algebra
# * R is scriptable, so you can build a perfectly repeatable record of your analysis. This in itself has several advantages:
# * You can never replicate *exactly* the same analysis with all the same steps using a point-and-click approach/software. With R you can reproduce your full analysis for yourself (in the future!), your colleagues, your supervisor/employer, and any journal you might want to submit your work to.
# * You may need to rerun your analysis every time you get new data. Once you have it all in a R script, you can just rerun your analysis and go home!
# * You may need to tweak your analysis many times (new data, supervisor changes mind, you change mind, paper reviewers want you do something differently). Having the analysis recorded as script then allows you to do so by revising the relevant parts of your analysis with relatively little pain.
# * R is freely available for all common computer operating systems – if you want a copy on your laptop, help yourself at the [CRAN website](https://cran.r-project.org).
# * Being able to program in R means you can develop and automate your own data handling, statistical analysis, and graphing/plotting, a set of skills you are likely to need in many, if not most, career paths.
#
# #### Would you ever need anything other than R?
#
# Being able to program in R means you can develop and automate your statistical analyses and the generation of figures into a reproducible work flow. For many of you, using R as your only programming language will do the job. However, if your work also includes extensive numerical simulations, manipulation of very large matrices, bioinformatics, relational database access and manipulation, or web development, you will be better-off *also* knowing another programming language that is more versatile and computationally efficient (like Python or Julia).
#
# ### Installing R
#
# If you are using a college computer, R will likely already be available.
#
# Otherwise you can follow [these instructions](https://imperial-fons-computing.github.io/rstudio.html) to install R on your own computer.
#
# In particular, on Ubuntu Linux, it is as simple as typing the following in terminal:
#
# ```bash
# sudo apt install r-base r-base-dev
# ```
#
# ## Getting started
#
# You should be using an IDE for R. Please re-visit the ["To IDE or not to IDE" section of the introduction](../intro.md) if you are not familiar with IDEs.
#
# Let's briefly look at the bare-bones R interface and command line interface (CLI), and then switch to an IDE like Visual Studio Code or RStudio.
#
# Launch R (from the Applications menu on Windows or Mac, from the terminal in Linux/Ubuntu) — it should look something like this (on Linux/Ubuntu or Mac terminal):
#
# ---
#
# :::{figure-md} R-Linux-console
#
# <img src="./graphics/R_Linux.png" alt="R in UNIX/Linux" width="500px">
#
# **The R console in Linux/Unix.**
#
# :::
#
# Or like this (Windows "console", similar in Mac):
#
# ---
#
# :::{figure-md} R-Windows-console
#
# <img src="./graphics/R_terminal.jpg" alt="R in Windows" width="500px">
#
# **The R console in Windows/Mac OS.**
#
# :::
#
# ---
#
#
# ## R basics
#
# Let's get started with some R basics. You will be working by entering R commands interactively at the R user prompt (`>`). The up and down arrow keys scroll through your command history.
#
# ### Useful R commands
#
# |Command| What it does|
# |:-|:-|
# | `ls()`| list all the variables in the work space |
# | `rm('a', 'b')`| remove variable(s) `a` and `b`|
# | `rm(list=ls())`| remove all variable(s)|
# | `getwd()`| get current working directory |
# | `setwd('Path')`| set working directory to `Path`|
# | `q()`| quit R |
# | `?Command`| show the documentation of `Command`|
# | `??Keyword`| search all packages/functions for 'Keyword' ("fuzzy search")|
#
# ### Baby steps
#
# Like in any programming language, you will need to use "variables" to store information in an R session's workspace. Each variable has a reserved location in your [RAM](https://en.wikipedia.org/wiki/Random-access_memory), and takes up "real estate" in it — that is, when you create a variable you reserve some space in your computer's memory.
#
# $\star$ Now, let's try assigning a few variables in R and doing things to them:
a <- 4 # store 4 as variable a
a # display it
a * a # product
# Store a variable:
a_squared <- a * a
# ```{note}
# Unlike Python or most other programming languages, R uses the `<-` [operator to assign variables](https://stat.ethz.ch/R-manual/R-devel/library/base/html/assignOps.html). You can use `=` as well, but it does not work everywhere, so better to stick with `<-`.
# ```
sqrt(a_squared) # square root
# Build a vector with `c` (stands for "`c`oncatenate"):
v <- c(0, 1, 2, 3, 4)
v # Display the vector-valued variable you created
# Note that any text after a "#" is ignored by R, like in many other languages — handy for commenting.
#
# In general, please comment your code and scripts, for *everybody's* sake. You will be amazed by how difficult it is to read and understand what a certain R script does (or any other script, for that matter) without judicious comments — even scripts you yourself wrote not so long ago!
#
# ```{tip}
#
# **The Concatenate function:** `c()` (concatenate) is one of the most commonly used R functions because it is the default method for combining multiple arguments into a vector. To learn more about it, type `?c` at the R prompt and hit enter.
# ```
is.vector(v) # check if v's a vector
mean(v) # mean
# Thus, a *vector* is like a single column or row in a *spreadsheet* (just like it is in [Python](05-Python_I.ipynb)). Multiple vectors can be combined to make a matrix (the full spreadsheet).
#
# This is one of many ways R stores and processes data. More on R data types and objects below.
#
# A single value (any kind) is a vector object of length 1 by default. That's why in the R console you see `[1]` before any single-value output (e.g., type `8`, and you will see `[1] 8`).
# Some examples of operations on vectors:
var(v) # variance
median(v) # median
sum(v) # sum all elements
prod(v + 1) # multiply
length(v) # how many elements in the vector
# ### Variable names and Tabbing
#
# In R, you can name variables in the following way to keep track of related variables:
wing.width.cm <- 1.2 #Using dot notation
wing.length.cm <- c(4.7, 5.2, 4.8)
# This can be handy; type:
#
# ```r
# wing.
# ```
#
# And then hit the `tab` key to reveal all variables in that category. This is nice — variable names should be as obvious as possible. However, they should not be over-long either! Good style and readability is more important than just convenient variable names.
#
# In fact, R allows dots to be used in the names of all objects (not just objects that are variables). For example, function names can have dots in them as well, as you will see below with the `is.*` family (e.g., `is.infinite()`, `is.nan()`, etc.).
# ### Operators
# The usual operators are available in R:
#
# |Operator||
# |:-|:-|
# | `+`| Addition |
# | `-`| Subtraction|
# | `*`| Multiplication|
# | `/`| Division|
# | `^`| Power|
# | `%%`| Modulo|
# | `%/%`| Integer division|
# | `==`| Equals|
# | `!=`| Differs|
# | `>`| Greater|
# | `>=`| Greater or equal|
# | `&`| Logical and|
# | `\|` | Logical or|
# | `!`| Logical not|
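A quick check of the two less familiar arithmetic operators, and of combining comparisons with logical operators (a short illustrative snippet):

```r
17 %% 5    # modulo: remainder of 17 / 5, i.e. 2
17 %/% 5   # integer division: 3
(17 %% 5 == 2) & !(17 %/% 5 > 3)  # combine comparisons with & and !: TRUE
```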
# ### When things go wrong
#
# Here are some common [syntax errors](https://en.wikipedia.org/wiki/Syntax_error) that you might run into in R, especially when you just beginning to learn this language:
#
#
# * Missing a closing bracket leads to continuation line, which looks something like this, with a `+` at the end:
#
# ```r
# x <- (1 + (2 * 3)
# +
# ```
#
# Hit `Ctrl-C` (UNIX terminal or base R command line) or `ESC` (in RStudio), or keep typing!
#
# * Too many parentheses; for example, `2 + (2*3))`
#
# * Wrong or mismatched brackets (see next subsection)
#
# * Mixing double and single quotes will also give you an error
#
#
# When things are taking too long and the R console seems frozen, try `Ctrl + C` (UNIX terminal or base R command line) or ESC (in RStudio) to force an exit from whatever is going on.
# ### Types of parentheses
#
# R has specific uses for the different types of parentheses and brackets that you need to get used to:
#
# | Parentheses type| What it does|
# |:-|:-|
# |`f(3,4)`| call the function (or command) f, with the arguments 3 & 4. |
# |`a + (b*c)`| to enforce order over which statements or calculations are executed. Here `(b*c)`is executed before adding to `a`. Here is an alternative order: `(a + b)*c`|
# |`{expr1; expr2;...exprn}` | group a set of expressions or commands into one compound expression. Value returned is value of last expression; used in building function, loops, and conditionals (more on these soon!).|
# |`x[4]`| get the 4th element of the vector `x`.|
# | `li[[3]]`| get the 3rd element of some list `li`, and return it.(compare with `li[3]`, which returns a list with just the 3rd element inside). More on lists in next section.
li <- list(c(1,2,3))
class(li)
# ## Variable types
#
# There are different kinds of data variable types in R, but you will basically need to know four for most of your work: integer, float (or "numeric", including real numbers), string (or "character", e.g.,
# text), and Boolean ("logical"; `TRUE` or `FALSE`).
#
# Try this:
v <- TRUE
v
class(v)
# The `class()` function tells you what type of variable any object in the workspace is.
v <- 3.2
class(v)
v <- 2L
class(v)
v <- "A string"
class(v)
b <- NA
class(b)
# Also, the `is.*` family of functions allows you to check whether a variable is of a specific type in R:
is.na(b)
b <- 0/0
b
is.nan(b)
b <- 5/0
b
is.nan(b)
# ```{tip}
# Beware of the difference between `NA` ("Not Available") and `NaN` ("Not a Number"). R will use `NA` to represent/identify missing values in data or outputs, while `NaN` represents nonsense values (e.g., 0/0) that cannot be represented as a number or some other data type.
#
# See what R has to say about this: try `?is.nan`, `?is.na`, `?NA`, `?NaN` in the R command line (one at a time!).
#
# There are also `Inf` (Infinity, e.g., 1/0) and `NULL` (variable not set) value types. Look these up as well using `?`.
# ```
is.infinite(b)
is.finite(b)
is.finite(0/0)
# ### Type Conversions and Special Values
#
# The `as.*` commands all convert a variable from one type to another. Try out the following examples:
as.integer(3.1)
as.numeric(4)
as.roman(155)
as.character(155) # same as converting to string
as.logical(5)
# *What just happened?!* R maps all values other than `0` to logical `TRUE`, and `0` to `FALSE`. This can be useful in some cases, for example, when you want to convert all your data to Presence-Absence only.
as.logical(0)
# Also, *keep an eye out for [E notation](https://en.wikipedia.org/wiki/Scientific_notation) in outputs of R functions for statistical analyses, and learn to interpret numbers formatted this way.* R uses E notation in outputs of statistical tests to display very large or small numbers. If you are not used to different representations of long numbers, the E notation might be confusing.
#
# Try this:
1E4
1e4
5e-2
1E4^2
1 / 3 / 1e8
# ```{tip}
# **Boolean arguments in R**: In `R`, you can use `F` and `T` for boolean `FALSE` and `TRUE` respectively. To see this, type
#
# `a <- T`
#
# in the R commandline, and then see what R returns when you type `a`. Using `F` and `T` for boolean `FALSE` and `TRUE` respectively is not necessarily good practice, but be aware that this option exists.
# ```
# ## R Data Structures
#
# R comes with different built-in structures (objects) for data storage and manipulation. Mastering these, and knowing which one to use when will help you write better, more efficient programs and also handle diverse datasets (numbers, counts, names, dates, etc).
#
# ```{note}
# **Data "structures" vs. "objects" in R**: You will often see the terms "object" and "data structure" used in this and other chapters. These two have a very distinct meaning in object-oriented programming (OOP) languages like R and Python. A data structure is just a "dumb" container for data (e.g., a vector). An object, on the other hand can be a data structure, but also any other variable or a function in your R environment. R, being an OOP language, converts everything in the current environment to an object so that it knows what to do with each such entity — each object type has its own set of rules for operations and manipulations that R uses when interpreting your commands.
# ```
#
# ### Vectors
# The Vector, which you have already seen above, is a fundamental data object / structure in R. Scalars (single data values) are treated as vector of length 1. *A vector is like a single column or row in a spreadsheet.*
#
# $\star$ Now get back into R (if you somehow quit R using `q()` or something else), and try this:
a <- 5
is.vector(a)
v1 <- c(0.02, 0.5, 1)
v2 <- c("a", "bc", "def", "ghij")
v3 <- c(TRUE, TRUE, FALSE)
v1;v2;v3
# R vectors can only store data of a single type (e.g., all numeric or all character). If you try to combine different types, R will homogenize everything to the same data type. To see this, try the following:
v1 <- c(0.02, TRUE, 1)
v1
# TRUE gets converted to 1.00!
v1 <- c(0.02, "Mary", 1)
v1
# Everything gets converted to text!
#
# Basically, the function `c` "coerces" arguments that are of mixed types (strings/text, real numbers, logical arguments, etc) to a common type.
#
# (R-matrices)=
# ### Matrices and arrays
#
# A R "matrix" is a 2 dimensional vector (has both rows and columns) and a "array" can store data in more than two dimensions (e.g., a stack of 2-D matrices).
#
# R has many functions to build and manipulate matrices and arrays.
#
# Try this:
mat1 <- matrix(1:25, 5, 5)
mat1
# Note that the output in your terminal or console will be more like:
print(mat1)
# (even if you don't use the `print()` command)
#
# The output looks different above because the R code is running in a jupyter notebook.
#
# Now try this:
mat1 <- matrix(1:25, 5, 5, byrow=TRUE)
mat1
# That is, you can order the elements of a matrix by row instead of column (default).
dim(mat1) #get the size of the matrix
# To make an array consisting of two 5$\times$5 matrices containing the integers 1-50:
arr1 <- array(1:50, c(5, 5, 2))
arr1[,,1]
print(arr1)
arr1[,,2]
# Just like R vectors, R matrices and arrays have to be of a homogeneous type, and R will do the same sort of type homogenization you saw for R vectors above.
#
# Try inserting a text value in `mat1`and see what happens:
mat1[1,1] <- "one"
mat1
# `mat1` went from being `A matrix: 5 × 5 of type int` to `A matrix: 5 × 5 of type chr`. That is, inserting a string in one location converted all the elements of the matrix to the `chr` (string) data type.
# Thus R's matrix and array are similar to Python's `numpy` array and matrix data structures.
# ### Data frames
#
# This is a very important data structure in R. Unlike matrices and vectors, R data frames can store data in which each column contains a different data type (e.g., numbers, strings, boolean) or even a combination of data types, just like a standard spreadsheet. Indeed, the dataframe data type was built to emulate some of the convenient properties of spreadsheets. Many statistical and plotting functions and packages in R naturally use data frames.
#
# Let's build and manipulate a dataframe. First create three vectors:
# ```{note}
# Under the hood, an R data frame is in fact a `list` of equal-length `vector`s.
# ```
Col1 <- 1:10
Col1
Col2 <- LETTERS[1:10]
Col2
Col3 <- runif(10) # 10 random numbers from a uniform distribution
Col3
# Now combine them into a dataframe:
MyDF <- data.frame(Col1, Col2, Col3)
MyDF
# Again, this output looks different from your terminal/console output because these commands are running in a Jupyter notebook.
#
# Your output will look like this:
print(MyDF)
# You can easily assign names to the columns of dataframes:
names(MyDF) <- c("MyFirstColumn", "My Second Column", "My.Third.Column")
MyDF
# And unlike matrices, you can access the contents of data frames by naming the columns directly using a $ sign:
MyDF$MyFirstColumn
# But don't use spaces in column names! Try this:
MyDF$My Second Column
# That gives an error, because R cannot handle spaces in column names easily.
#
# So replace that column name using the `colnames` function:
colnames(MyDF)
colnames(MyDF)[2] <- "MySecondColumn"
MyDF
# But using dots in column names is OK (just as it is for variable names):
MyDF$My.Third.Column
# You can also access elements by using numerical indexing:
MyDF[,1]
# That is, you asked R to return values of `MyDF` in all Rows (therefore, nothing before the comma), and the first column (`1` after the comma).
MyDF[1,1]
MyDF[c("MyFirstColumn","My.Third.Column")] # show two specific columns only
# You can check whether a particular object is a dataframe data structure with:
class(MyDF)
# You can check the structure of a dataframe with `str()`:
str(MyDF)
# You can print the column names and top few rows with `head()`:
head(MyDF)
# And the bottom few rows with `tail()`:
tail(MyDF)
# ```{note}
# **The Factor data type**: R has a special data type called "factor". Different values in this data type are called "levels". This data type is used to identify "grouping variables" such as columns in a dataframe that contain experimental treatments. This is convenient for statistical analyses using one of the many plotting and statistical commands or routines available in R, which have been written to interpret the `factor` data type as such and use it to automatically compare subgroups in the data. More on this later, when we delve into analyses (including visualization) in R.
# ```
# ### Lists
#
# A list is used to collect a group of data objects of different sizes and types (e.g., one whole data frame and one vector can both be in a single list). It is simply an ordered collection of objects (that can be variables).
#
# ```{note}
# The outputs of many statistical functions in R are lists (e.g. linear model fitting using `lm()`), to return all relevant information in one output object. So you need to know how to unpack and manipulate lists.
# ```
#
# As a budding multilingual quantitative biologist, you should not be perturbed by the fact that a `list` is a very different data structure in python vs R. It will take some practice — sometimes the same word means different things in different human languages too!
#
# Try this:
MyList <- list(species=c("<NAME>","Fraxinus excelsior"), age=c(123, 84))
MyList
# You can access contents of a list item using number of the item instead of name using nested square brackets:
MyList[[1]]
MyList[[1]][1]
# or using the name in either this way:
MyList$species
# or this way:
MyList[["species"]]
# And to access a specific element inside a list's item
MyList$species[1]
# A more complex list:
pop1<-list(species='Cancer magister',
latitude=48.3,longitude=-123.1,
startyr=1980,endyr=1985,
pop=c(303,402,101,607,802,35))
pop1
# You can build lists of lists too:
pop1<-list(lat=19,long=57,
pop=c(100,101,99))
pop2<-list(lat=56,long=-120,
pop=c(1,4,7,7,2,1,2))
pop3<-list(lat=32,long=-10,
pop=c(12,11,2,1,14))
pops<-list(sp1=pop1,sp2=pop2,sp3=pop3)
pops
pops$sp1 # check out species 1
pops$sp1["pop"] # sp1's population sizes
pops[[2]]$lat #latitude of second species
pops[[3]]$pop[3]<-102 #change population of third species at third time step
pops
# Maybe you have guessed by now that R dataframes are actually a kind of list.
#
# ### Matrix vs Dataframe
#
# If dataframes are so nice, why use R matrices at all? The problem is that dataframes can be too slow when large numbers of mathematical calculations or operations (e.g., matrix - vector multiplications or other linear algebra operations) need to be performed. In such cases, you will need to convert a dataframe to a matrix. But for statistical analyses, plotting, and writing output of standard R analyses to a file, data frames are more convenient. Dataframes also allow you to refer to columns by name (using `$`), which is often convenient.
#
# To see the difference in memory usage of matrices vs dataframes, try this:
MyMat = matrix(1:8, 4, 4)
MyMat
MyDF = as.data.frame(MyMat)
MyDF
object.size(MyMat) # returns size of an R object (variable) in bytes
object.size(MyDF)
# Quite a big difference!
# ## Creating and manipulating data
#
# ### Creating sequences
#
# The `:` operator creates vectors of sequential integers:
years <- 1990:2009
years
years <- 2009:1990 # or in reverse order
years
# For sequences of float numbers, you have to use `seq()`:
seq(1, 10, 0.5)
# ```{tip}
# Don't forget, you can get help on a particular R command by prefixing it with `?`. For example, try:
#
# `?seq`
# ```
# You can also use `seq(from=1, to=10, by=0.5)` OR `seq(from=1, by=0.5, to=10)` with the same effect (try it). This explicit, "argument matching" approach is partly what makes R so popular and accessible to a wider range of users.
# ### Accessing parts of data structures: Indices and Indexing
#
# Every element (entry) of a vector in R has an order (an "index" value): the first value, second, third, etc. To illustrate this, let's create a simple vector:
MyVar <- c( 'a' , 'b' , 'c' , 'd' , 'e' )
# Then, square brackets extract values based on their numerical order in the vector:
MyVar[1] # Show element in first position
MyVar[4] # Show element in fourth position
# The values in square brackets are called "indices" — they give the index (position) of the required value. We can also select sets of values in different orders, or repeat values:
MyVar[c(3,2,1)] # reverse order
MyVar[c(1,1,5,5)] # repeat indices
# You can also manipulate data structures/objects by indexing:
v <- c(0, 1, 2, 3, 4) # Create a vector named v
v[3] # access one element
v[1:3] # access sequential elements
v[-3] # remove elements
v[c(1, 4)] # access non-sequential indices
# For matrices, you need to use both row and column indices:
mat1 <- matrix(1:25, 5, 5, byrow=TRUE) #create a matrix
mat1
mat1[1,2]
mat1[1,2:4]
mat1[1:2,2:4]
# And to get all elements in a particular row or column, you need to leave the value blank:
mat1[1,] # First row, all columns
mat1[,1] # First column, all rows
# (R-recycling)=
# ### Recycling
#
# When vectors are of different lengths, R will recycle the shorter one to make a vector of the same length:
a <- c(1,5) + 2
a
x <- c(1,2); y <- c(5,3,9,2)
x;y
x + y
# Strange! R just recycled `x` (repeated `1,2` twice) so that the two vectors could be summed! Here's another example:
x + c(y,1)
# *Think about what happened here*. R is clearly not comfortable doing this, so it warns you! Recycling could be convenient at times, but is dangerous!
# ### Basic vector-matrix operations
#
# You can perform the usual vector-matrix operations on R `vector`s:
v <- c(0, 1, 2, 3, 4)
v2 <- v*2 # multiply whole vector by 2
v2
v * v2 # element-wise product
t(v) # transpose the vector
v %*% t(v) # matrix/vector product
v3 <- 1:7 # assign using sequence
v3
v4 <- c(v2, v3) # concatenate vectors
v4
# ### Strings and Pasting
#
# It is important to know how to handle strings in R for two main reasons:
#
# * To deal with text data, such as names of experimental treatments
# * To generate appropriate text labels and titles for figures
#
# Let's try creating and manipulating strings:
species.name <- "<NAME>" # You can also use single quotes
species.name
# To combine two strings:
paste("Quercus", "robur")
paste("Quercus", "robur",sep = "") #Get rid of space
paste("Quercus", "robur",sep = ", ") #insert comma to separate
# As you can see above, both double and single quotes work, but using double quotes is better because it will allow you to define strings that contain single quotes, which is often necessary.
#
# And as is the case with so many R functions, pasting works on vectors:
paste('Year is:', 1990:2000)
# Note that this last example creates a vector of 11 strings as it is 1990:2000 *inclusive*.
# ## Useful R functions
#
# There are a number of very useful functions available by default (in the "base packages"). Some particularly useful ones are listed below.
#
# ### For manipulating strings
#
# |Function||
# |:-|:-|
# |`strsplit(x,';')`| Split the string `x` at ';' |
# |`nchar(x)`| Number of characters in string `x`|
# |`toupper(x)`| Set string `x` to upper case|
# |`tolower(x)`| Set string `x` to lower case|
# |`paste(x1,x2,sep=';')`| Join the two strings using ';'|
#
# ### Mathematical
#
# |Function||
# |:-|:-|
# |`log(x)`| Natural logarithm of the number (or every number in the vector or matrix) `x`|
# |`log10(x)`| Logarithm in base 10 of the number (or every number in the vector or matrix) `x`|
# |`exp(x)`| Exponential of the number (or every number in the vector or matrix) `x` ($e^x$)|
# |`abs(x)`| Absolute value of the number (or every number in the vector or matrix) `x`|
# |`floor(x)`| Largest integer smaller than the number (or every number in the vector or matrix) `x`|
# |`ceiling(x)`| Smallest integer greater than the number (or every number in the vector or matrix) `x`|
# |`sqrt(x)`| Square root of the number (or every number in the vector or matrix) `x` ($\sqrt{x}$)|
# |`sin(x)`| Sine function of the number (or every number in a vector or matrix) `x`|
# |`pi`| Value of the constant $\pi$|
#
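# For example, a few of these in action:
# +
log(exp(1)) # natural log of e is 1
sqrt(16) # 4
floor(4.7) # 4
ceiling(4.2) # 5
abs(-3.5) # 3.5
sin(pi / 2) # 1
# -
#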
# ### Statistical
#
# |Function|Description|
# |:-|:-|
# |`mean(x)`| Compute mean of (a vector or matrix) `x`|
# |`sd(x)`| Standard deviation of (a vector or matrix) `x`|
# |`var(x)`| Variance of (a vector or matrix) `x`|
# |`median(x)`| Median of (a vector or matrix) `x`|
# |`quantile(x,0.05)`| Compute the 0.05 quantile of (a vector or matrix) `x`|
# |`range(x)`| Range of the data in (a vector or matrix) `x`|
# |`min(x)`| Minimum of (a vector or matrix) `x`|
# |`max(x)`| Maximum of (a vector or matrix) `x`|
# |`sum(x)`| Sum all elements of (a vector or matrix) `x`|
# |`summary(x)`| Summary statistics for (a vector or matrix) `x`|
#
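# For example (a sketch; your exact numbers will differ because the data are random):
# +
vals <- rnorm(100, mean = 10, sd = 2) # 100 normal random numbers
mean(vals)
sd(vals)
range(vals)
summary(vals)
# -
#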
# ### Sets
#
# |Function|Description|
# |:-|:-|
# |`union(x, y)` | Union of all elements of two vectors x & y|
# |`intersect(x, y)`|Elements common to two vectors x & y|
# |`setdiff(x, y)`|Elements unique to two vectors x & y|
# |`setequal(x, y)`|Check if two vectors x & y are the same set (have same unique elements)|
# |`is.element(x, y)`|Check if an element x is in vector y (same as `x %in% y`)|
#
# $\star$ *Try out the above commands in your R console by generating the appropriate data*. For example,
strsplit("String; to; Split", ';') # Split the string at ';'
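# Similarly, for the set functions:
# +
x <- c(1, 2, 3, 4)
y <- c(3, 4, 5)
union(x, y) # 1 2 3 4 5
intersect(x, y) # 3 4
setdiff(x, y) # 1 2 (elements of x not in y)
2 %in% y # FALSE
# -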
# (R-random-numbers)=
# ### Generating Random Numbers
#
# You will probably need to generate random numbers at some point as a quantitative biologist.
#
# R has many routines for generating random samples from various probability distributions. There are a number of random number distributions that you can sample or generate random numbers from:
#
# |Function|Description|
# |:-|:-|
# |`rnorm(10, mean=0, sd=1)`| Draw 10 normal random numbers with mean = 0 and standard deviation = 1|
# |`dnorm(x, mean=0, sd=1)` | Probability density function|
# |`pnorm(q, mean=0, sd=1)` | Cumulative distribution function|
# |`qnorm(p, mean=0, sd=1)` | Quantile (inverse CDF) function|
# |`runif(20, min=0, max=2)` | Twenty random numbers from uniform `[0,2]`|
# |` rpois(20, lambda=10)` | Twenty random numbers from Poisson (with mean $\lambda$)|
#
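# For example, try drawing some random samples (your numbers will differ, since no seed has been set yet):
# +
rnorm(5) # five samples from a standard normal
runif(3, min = 0, max = 2) # three uniform samples on [0, 2]
rpois(4, lambda = 10) # four Poisson samples with mean 10
# -
#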
# #### "Seeding" random number generators
#
# Computers *can't* generate *true* mathematically random numbers. This may seem surprising, but basically a computer cannot be programmed to do things purely by chance; it can only follow given instructions blindly and is therefore completely predictable. Instead, computers have algorithms called "pseudo-random number generators" that generate *practically random* sequences of numbers. These are typically based on an iterative formula that generates a sequence of random numbers, starting with a first number called the "**seed**". This sequence is completely "deterministic", that is, starting with a particular seed yields exactly the same sequence of pseudo-random numbers every time you re-run the generator.
#
# Try this:
set.seed(1234567)
rnorm(1)
# Everybody in the class will get the same answer!
#
# Now try and compare the results with your neighbor:
rnorm(10)
# And then the whole sequence of 11 numbers you generated:
set.seed(1234567)
rnorm(11)
# Thus, setting the seed allows you to reliably generate the identical sequence of "random" numbers. These numbers are not truly random, but have the properties of random numbers. Note also that pseudo-random number generators are periodic, which means that the sequence will eventually repeat itself. However, this period is so long that it can be ignored for most practical purposes. So effectively, `rnorm` has an enormous list that it cycles through. The random seed starts the process, i.e., indicates where in the list to start. This is usually taken from the clock when you start R.
#
# But why bother with random number seeds? Setting a particular seed can be useful when debugging programs (coming up below). Bugs in code can be hard to find — harder still if you are generating random numbers, so repeat runs of your code may or may not all trigger the same behavior. You can set the seed once at the beginning of the code — ensuring repeatability, retaining (pseudo) randomness. Once debugged, if you want, you can remove the set seed line.
# ## Your analysis workflow
#
# In using R for an analysis, you will likely use and create several files. As in the case of bash and Python based projects, in R projects as well, you should keep your workflow well organized. For example, it is sensible to create a folder (directory) to keep all code files together. You can then set R to work from this directory, so that files are easy to find and run — this will be your "working directory" (more on this below). Also, you don't want to mix code files with data and results files. So you should create separate directories for these as well.
#
# Thus, your typical R analysis workflow will be:
#
# ---
#
# :::{figure-md} R-project-org
#
# <img src="./graphics/RWorkflow.png" alt="R Project Organization" width="600px">
#
# **Your R project.** Keeping it neat and organized is the key to becoming a good R programmer.
#
# :::
#
# ---
#
#
# Some details on each kind of file:
#
#
# * *R script files*: These are plain text files containing all the R code needed for an analysis. They should be created with a plain text editor, typically within a code editor such as VS Code or RStudio, and saved with the extension `*.R`. You should *never* use Word to save or edit these files, as R can only read code from plain text files.
#
# * *Text data files* These are files of data in plain text format containing one or more columns of data (numbers, strings, or both). Although there are several format options, we will typically be using `csv` files, where the data entries are separated by commas. These are easy to create and export from Excel (if that's what you use...).
#
# * *Results output files* These are plain text files containing your results, such as the summary output of a regression or ANOVA analysis. Typically, you will output your results in a table format where the columns are separated by commas (csv) or tabs (tab-delimited).
#
# * *Graphics files* R can export graphics in a wide range of formats. This can be done automatically from R code and we will look at this later but you can also select a graphics window (e.g., in RStudio) and click `File` $\triangleright$ `Save as...`.
#
# * *Rdata files* You can save any data loaded or created in R, including outputs of statistical analyses and other things, into a single `Rdata` file. These are not plain text and can only be read by R, but can hold all the data from an analysis in a single handy location. We will not use these much in this course.
#
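# As a quick illustrative sketch (assuming a dataframe called `MyData` exists in your workspace), saving and reloading an `Rdata` file looks like this:
#
# ```R
# save(MyData, file = "../results/MyData.Rdata") # save one object to a binary file
# load("../results/MyData.Rdata") # restores MyData into the workspace
# ```
#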
# So let's build your R analysis project structure.
#
# Do the following:
#
# $\star$ Create a sensibly-named directory (e.g., `MyRCoursework`, `week3`, etc., depending on which course you are on) in an appropriate location on your computer. If you are using a college Windows computer, you may need to create it in your `H:` drive. Avoid including spaces in your file or directory names, as this will often create problems when you share them with somebody else; many software programs do not handle spaces in file/directory names well. Use underscores instead of spaces. For example, instead of `My R Coursework`, use `My_R_Coursework` or `MyRCoursework`.
#
# $\star$ Create subdirectories *within this directory* called `code`, `data`, and `results`. Remember, commands in all programming languages are case-sensitive when it comes to reading directory path names, so `code` is not the same as `Code`!
#
# You can create directories using `dir.create()` within R (or, if on Mac/Linux, with the usual `mkdir` from the bash terminal):
#
# ```R
# dir.create("MyRCoursework")
# dir.create("MyRCoursework/code")
# dir.create("MyRCoursework/data")
# dir.create("MyRCoursework/results")
# ```
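# Note that `dir.create()` cannot create nested directories in one call unless you also set its `recursive` argument; for example:
#
# ```R
# dir.create("MyRCoursework/code", recursive = TRUE) # creates both levels if needed
# ```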
# ### The R Workspace and Working Directory
#
# R has a "workspace" – a current working environment that includes any user-defined objects (vectors, matrices, data frames, lists) as well as other objects (e.g., functions). At the end of an R session, you can save an image of the current workspace that is automatically reloaded the next time R is started. Your workspace is saved in your "Working Directory", which has to be set manually.
#
# So before we go any further, let's sort out where your R "Working Directory" should be and how you should set it. R has a default location where it assumes your working directory is.
#
# * In UNIX/Linux, it is whichever directory you are in when you launch R.
#
# * In Mac, it is `/User/YourUserName` or similar.
#
# * In Windows, it is `C:/Windows/system32`or similar.
#
# To see where your current working directory is, at the R command prompt, type:
#
# `getwd()`
#
# This tells you what the current `w`orking `d`irectory is.
#
# Now, set the working directory to be `MyRCoursework/code`. For example, if you created `MyRCoursework` directly in your `H:\` drive, then you would use:
#
# `setwd("H:/MyRCoursework/code")`
#
#
# `dir()` #check what's in the current working directory
#
# On your own computer, you can also change R's default to a particular working directory where you would like to start (easily done in RStudio):
#
# * In Linux, you can do this by editing the `Rprofile.site` file with `sudo gedit /etc/R/Rprofile.site`. In that file, you would add your start-up parameters between the lines
#
# `.First <- function() cat("\n Welcome to R!\n\n")`
#
# and
#
# `.Last <- function() cat("\n Goodbye! \n\n")`.
#
# Between these two lines, insert:
# `setwd("/home/YourName/YourDirectoryPath")`
#
# * In Windows and Macs, you can find the `Rprofile.site` file by searching for it. On Windows, it should be in the `C:\Program Files\R\R-x.x.x\etc` directory, where `x.x.x` is your R version.
#
# * If you are using RStudio, you can change the default working directory through the RStudio "Options" dialog.
# ## Importing and Exporting Data
#
# We are now ready to see how to import and export data in R, typically the first step of your analysis. The best option is to have your data in a `c`omma `s`eparated `v`alue (`csv`) text file or in a tab-separated text file. Then, you can use the function `read.csv` (or `read.table`) to import your data. Now, let's get some data into your `data` directory.
#
# $\star$ Go to the [TheMulQuaBio git repository](https://github.com/mhasoba/TheMulQuaBio) and navigate to the [`data` directory](https://github.com/mhasoba/TheMulQuaBio/tree/master/content/data).
#
# $\star$ Download and copy the file [`trees.csv`](https://raw.githubusercontent.com/mhasoba/TheMulQuaBio/master/content/data/trees.csv) into your own `data` directory.
#
# Alternatively, you may download the whole repository to your computer and then grab the file from wherever you downloaded it.
#
# Now, import the data:
# + tags=["remove-cell"]
setwd("../code/")
# -
MyData <- read.csv("../data/trees.csv")
ls(pattern = "My*") # Check that MyData has appeared
# Your output may be somewhat different, depending on what variables and other objects have been created in
# your R Workspace during the current R session. But the main thing is, you should be able to see a `MyData` in the list of objects printed.
#
# ```{tip}
# You can list only objects with a particular name pattern by using the `pattern` option of `ls()`. It works using regular expressions, which you were introduced to through the `grep` command in the [UNIX chapter](Using-grep). We will delve deeper into regular expressions in the [Python II Chapter](Python_II:python-regex).
# ```
# Note that the resulting `MyData` object in your workspace is an R dataframe:
class(MyData)
# ### Relative paths
#
# Note the UNIX-like path to the file we used in the `read.csv()` command above (using forward slashes; Windows uses back slashes).
#
# The `../` in `read.csv("../data/trees.csv")` above signifies a "relative" path. That is, you are asking R to load data that lies in a different directory (folder) *relative to* your current location (in this case, your `code` directory). In other, more technical words, `../data/trees.csv` points to a file named `trees.csv` located in the "parent" of the current directory.
#
# *What is an absolute path?* It is one that specifies the whole path on your computer: from `C:\` "upwards" on Windows, from `/Users/` upwards on Mac, and from `/home/` upwards on Linux. Absolute paths are specific to each computer, and so should be avoided. To import data and export results, then, your script should *not* use absolute paths. Also, *AVOID putting a `setwd()` command at the start of your R script*, because setting the working directory requires an absolute directory path, which will differ across computers, platforms, and users. Let end users set the working directory on their own machines themselves.
#
# Using relative paths in your R scripts and code will make your code computer-independent and make it easier for others to use. Loading data via relative paths should be your default in analysis scripts: it guarantees that the analysis works on every computer, not just your own or a college machine.
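#
# A related base-R convenience is `file.path()`, which joins path components using the separator appropriate for the platform; for example, the import above could be written as:
#
# ```R
# MyData <- read.csv(file.path("..", "data", "trees.csv"))
# ```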
#
#
# ```{tip}
# If you are using a computer set up for continental Europe, Excel may use a decimal comma (e.g., $\pi=3,1416$) instead of a decimal point ($\pi=3.1416$). In this case, `csv` files may use a semi-colon to separate columns, and you can use the alternative function `read.csv2()` to read them into the R workspace.
#
# ```
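# Equivalently, you can tell `read.csv()` explicitly which separator and decimal character to expect (a sketch; the file name here is hypothetical):
#
# ```R
# MyData <- read.csv("../data/some_EU_data.csv", sep = ";", dec = ",")
# ```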
head(MyData) # Have a quick look at the data frame
# You can also have a more detailed look at the data you imported:
str(MyData) # Note the data types of the three columns
MyData <- read.csv("../data/trees.csv", header = F) # Import ignoring headers
head(MyData)
# Or you can load data using the more general `read.table` function:
MyData <- read.table("../data/trees.csv", sep = ',', header = TRUE) #another way
# With `read.table` you need to specify whether there is a header row that needs to be imported as such.
head(MyData)
MyData <- read.csv("../data/trees.csv", skip = 5) # skip first 5 lines
# ### Writing to and saving files
#
# You can also save your data frames using `write.table` or `write.csv`:
write.csv(MyData, "../results/MyData.csv")
dir("../results/") # Check if it worked
write.table(MyData[1,], file = "../results/MyData.csv",append=TRUE) # append
# You get a warning here because R thinks it is strange that you are appending headers to a file that already has headers!
write.csv(MyData, "../results/MyData.csv", row.names=TRUE) # write row names
write.table(MyData, "../results/MyData.csv", col.names=FALSE) # ignore col names
# ## Writing R code
#
# Typing in commands interactively in the R console is good for starters, but you will want to switch to putting your sequence of commands into a script file, and then ask R to run those commands.
#
#
# * Open a new text file, call it `basic_io.R`, and save it to your `code` directory.
# * Write the above input-output commands in it:
#
# ```R
# # A simple script to illustrate R input-output.
# # Run line by line and check inputs outputs to understand what is happening
#
# MyData <- read.csv("../data/trees.csv", header = TRUE) # import with headers
#
# write.csv(MyData, "../results/MyData.csv") #write it out as a new file
#
# write.table(MyData[1,], file = "../results/MyData.csv",append=TRUE) # Append to it
#
# write.csv(MyData, "../results/MyData.csv", row.names=TRUE) # write row names
#
# write.table(MyData, "../results/MyData.csv", col.names=FALSE) # ignore column names
#
# ```
#
# * Now place the cursor on the first line of code in the script file and run it by pressing the appropriate keyboard shortcut (e.g., PC: ctrl+R, Mac: command+enter, Linux: ctrl+enter are the usual shortcuts for doing this).
# * Check after every line that you are getting the expected result.
# ```{note}
# **Why no shebang in your first R script?** Because we are learning R here mainly for writing scripts for data analysis & visualization, and relatively simple numerical calculations & modelling/simulation tasks. Read [this](https://www.r-bloggers.com/2019/11/r-scripts-as-command-line-tools/) and [this](https://blog.sellorm.com/2017/12/18/learn-to-write-command-line-utilities-in-r/) for some more technical advice about R scripts.
# ```
# ## Running R code
#
# But even writing to a script file and running the code line-by-line or block-by-block is not your ultimate goal. What you would really like to do is to just run your full analysis and output all the results. There are two main approaches for running R script/code.
#
# ### Using `source`
#
# You can run all the contents of a `*.R`script file from the R command line by using `source()`.
#
# $\star$ Try sourcing `basic_io.R` (you will need to make sure you have `setwd` to your code directory):
source("basic_io.R")
# That has run OK, warning and all.
#
# * If you get errors, read and then try to fix them. The most common problem is likely to be that you have not `setwd()` to the `code` directory.
#
# Alternatively, if the script file is not in your working directory and you don't want to change your working directory, you can run the script from wherever you are (e.g., the `data` directory) by adding the directory path to the script file name. For example, you would need `source("../code/basic_io.R")` (a relative path) if your working directory were `data` and not `code`.
#
# *Do not* put a `source()` command inside the script file you are sourcing, as it will then try to run itself again and again, and that's just cruel!
#
# ```{tip}
# The command `source()`has a `chdir`argument whose default value is FALSE. When set to TRUE, it will change the working directory to the directory of the file being sourced.
#
# ```
#
# Also, if you have `source`d a script successfully, you will see no output in the R console/terminal unless there was an error or warning, or you explicitly asked for something to be printed. So it can be useful to add a line at the end of the script saying something like `print("Script complete!")`.
#
#
# ### Using `Rscript`
#
# You can also run an R script from the UNIX/Linux terminal by calling `Rscript`. That is, while you have to be inside an R session to use the `source` command, you can run an R script directly from the UNIX/Linux terminal with `Rscript`.
#
# This allows you to easily automate execution of your R scripts (e.g., by writing a bash script) and integrate R into a bigger computing pipeline/workflow by calling it through other tools or languages (e.g., see the [Python Chapter II](06-Python_II.ipynb)).
#
# If you are on Linux, try using `Rscript` to run `basic_io.R`:
#
# * Exit from the R console using `ctrl+D`, or open a new bash terminal
#
# * `cd` to the location of `basic_io.R` (e.g., `week3/code`)
#
# * Then run the script using `Rscript basic_io.R`
#
# Also, please have a look at `man Rscript` in a bash terminal.
#
# ### Running R in batch mode
#
# In addition to `Rscript`, there is another way to run your R script without opening the R console. In Mac or Linux, you can do so by typing:
#
# `R CMD BATCH MyCode.R MyResults.Rout`
#
# This will create a `MyResults.Rout` file containing all the output. On Microsoft Windows, it's more complicated; change the path to `R.exe` and the output file as needed:
#
# `"C:\Program Files\R\R-4.x.x\bin\R.exe" CMD BATCH --vanilla --slave "C:\PathToMyResults\Results\MyCode.R"`
#
# Here, replace 4.x.x with the R version you have.
# ## Control flow tools
#
# In R, you can write "if-then", and "else" statements, and "for" and "while" loops like any programming language to give you finer control over your program's "control flow". Such statements are useful to include in functions and scripts because you may only want to do certain calculations or other tasks, under certain conditions (e.g., `if` the dataset is from a particular year, do something different).
#
# Let's look at some examples of these control flow tools in R.
#
# $\star$ Type each of the following blocks of code in a script file called `control_flow.R` and save it in your `code` directory. Then run each block *separately* by sending or pasting it into the R console.
# ### `if` statements
a <- TRUE
if (a == TRUE){
print ("a is TRUE")
} else {
print ("a is FALSE")
}
# You can also write an `if` statement on a single line:
z <- runif(1) ## Generate a uniformly distributed random number
if (z <= 0.5) {print ("Less than a half")}
# But code readability is important, so avoid squeezing control flow blocks like this into a single line.
#
# ````{tip}
# Please indent your code for readability, even if it's not strictly necessary (unlike [Python](./05-Python_I.ipynb)). Indentation helps you see the flow of the logic rather than a flattened version, which is hard for you and anybody else to read. For example, the following code block is much more readable than the one above:
#
# ```r
# z <- runif(1)
# if (z <= 0.5) {
#     print ("Less than a half")
# }
# ```
# ````
# ### `for` loops
# [Loops](https://en.wikipedia.org/wiki/Control_flow#Loops) are really useful to repeat a task over some range of input values.
#
# The following code "loops" over a range of numbers, squaring each, and then printing the result:
for (i in 1:10){
j <- i * i
print(paste(i, " squared is", j ))
}
# What exactly is going on in the piece of code above? What are `i` and `j`? Let's break it down:
#
# Firstly, the `1:10` part simply generates a sequence, as you [learned above](#Creating-sequences):
1:10
# This is the same as using `seq(10)` (try substituting this instead of `1:10` in the above block of code).
#
# Then, in each iteration of the loop, the loop variable `i` takes the next value in the sequence, and `j` is a temporary variable that stores its square.
#
# ```{note}
# Using the `:` operator or the `seq()` function pre-generates the sequence and stores it in memory, so it is less efficient than Python's [`range` function](Python-loops), which generates numbers in a sequence on a "need-to" basis.
# ```
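# Relatedly, `seq_len()` (and `seq_along()` for vectors) is a safer way to build a loop sequence than `1:n`, because `1:n` counts *down* when `n` is zero:
# +
1:0 # gives 1 0 -- a loop over this runs twice, probably not what you wanted!
seq_len(0) # an empty sequence, so the loop body is skipped entirely
for (i in seq_len(3)) {
    print(i * i)
}
# -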
#
# You can also loop over a vector of strings:
for(species in c('Heliodoxa rubinoides',
'Boissonneaua jardini',
'Sula nebouxii')){
print(paste('The species is', species))
}
# This is a random assortment of birds!
#
# And here's a for loop using a pre-existing vector:
v1 <- c("a","bc","def")
for (i in v1){
print(i)
}
# ### `while` loops
#
# If you want to perform an operation till some condition is met, use a `while` loop:
i <- 0
while (i < 10){
i <- i+1
print(i^2)
}
# $\star$ Now test `control_flow.R`. That is, run the script file using `source` (and `Rscript` if on Linux).
#
# If you get errors, read them carefully and fix them (this is going to be your mantra henceforth!).
# ### Some more control flow tools
#
# Let's look at some more control tools that are less commonly used, but can be useful in certain scenarios.
#
# #### `break`ing out of loops
# Often it is useful (or necessary) to `break` out of a loop when some condition is met. Use `break` (like in pretty much any other programming language, such as Python) in situations where you cannot set a target number of iterations and would like to stop the loop execution once some condition is met (as you would with a `while` loop).
#
# Try this (type into `break.R` and save in `code`):
i <- 0 # Initialize i
while (i < Inf) {
    if (i == 10) {
        break # Break out of the while loop!
    } else {
        cat("i equals", i, "\n")
        i <- i + 1 # Update i
    }
}
# #### Using `next`
#
# You can also skip to next iteration of a loop. Both `next` and `break` can be used within other loops (` while`, `for`). Try this (type into `next.R` and save in `code`):
for (i in 1:10) {
    if ((i %% 2) == 0) # check if the number is even
        next # if so, skip to the next iteration of the loop
    print(i)
}
# This code uses the "modulo" operation to check whether each number is even, and prints it only if it is odd.
# ## Writing R Functions
#
# A function is a block of re-useable code that takes an input, does something with it (or to it!), and returns the result. Like any other programming language, R lets you write your own functions. All the "commands" that you have been using, such as `ls()`, `mean()`, `c()`, etc are basically functions. You will want to write your own function for every scenario where a particular task or set of analysis steps need to be performed again and again.
# The syntax for R functions is quite simple, with each function accepting "arguments" and "returning" a value.
#
# ### Your first R function
#
# $\star$ Type the following into a script file called `boilerplate.R` and save it in your `code` directory:
#
# ```R
#
# # A boilerplate R script
#
# MyFunction <- function(Arg1, Arg2){
#
# # Statements involving Arg1, Arg2:
# print(paste("Argument", as.character(Arg1), "is a", class(Arg1))) # print Arg1's type
# print(paste("Argument", as.character(Arg2), "is a", class(Arg2))) # print Arg2's type
#
# return (c(Arg1, Arg2)) #this is optional, but very useful
# }
#
# MyFunction(1,2) #test the function
# MyFunction("Riki","Tiki") #A different test
# ```
# Note the curly brackets – these are necessary for R to know where the specification of the function starts and ends. Also, note the indentation. Not necessary (unlike Python), but recommended to make the code more readable.
#
# Now enter the R console and source the script:
source("boilerplate.R")
# This will run the script, and also save your function `MyFunction` as an object into your workspace (try `ls()`, and you will see `MyFunction` appear in the list of objects):
ls(pattern = "MyFun*")
# Again, your output may be a bit different, as you will have done different things in your R workspace/session than I have. What matters is that you see `MyFunction` in the list of objects above.
#
# Now try:
class(MyFunction)
# So, yes, `MyFunction` is a `function` object, just as it would be in Python.
# ### Functions with conditionals
#
#
# Here are some examples of functions with conditionals:
# +
# Checks if an integer is even
is.even <- function(n = 2){
if (n %% 2 == 0)
{
return(paste(n,'is even!'))
}
return(paste(n,'is odd!'))
}
is.even(6)
# +
# Checks if a number is a power of 2
is.power2 <- function(n = 2){
if (log2(n) %% 1==0)
{
return(paste(n, 'is a power of 2!'))
}
return(paste(n,'is not a power of 2!'))
}
is.power2(4)
# +
# Checks if a number is prime
is.prime <- function(n){
    if (n == 0){
        return(paste(n,'is a zero!'))
    }
    if (n == 1){
        return(paste(n,'is just a unit!'))
    }
    if (n == 2){
        return(paste(n,'is a prime!')) # handled separately: 2:(n-1) would misbehave here
    }
    ints <- 2:(n-1)
    if (all(n %% ints != 0)){
        return(paste(n,'is a prime!'))
    }
    return(paste(n,'is a composite!'))
}
is.prime(3)
# -
# $\star$ Run the three blocks of code in R one at a time and make sure you understand what each function is doing and how. Save all three blocks to a single script file called `R_conditionals.R` in your `code` directory, and make sure it runs using `source` and `Rscript` (in the Linux/Ubuntu terminal).
# ### An example utility function
#
#
# Now let's write a script containing a more useful function:
#
# $\star$ In your text editor type the following in a file called `TreeHeight.R`, and save it in your `code` directory:
#
# ```R
# # This function calculates heights of trees given distance of each tree
# # from its base and angle to its top, using the trigonometric formula
# #
# # height = distance * tan(radians)
# #
# # ARGUMENTS
# # degrees: The angle of elevation of tree
# # distance: The distance from base of tree (e.g., meters)
# #
# # OUTPUT
# # The heights of the tree, same units as "distance"
#
# TreeHeight <- function(degrees, distance){
# radians <- degrees * pi / 180
# height <- distance * tan(radians)
# print(paste("Tree height is:", height))
#
# return (height)
# }
#
# TreeHeight(37, 40)
# ```
#
# * Run `TreeHeight.R`'s two blocks (the `TreeHeight` function, and the call to the function, `TreeHeight(37, 40)`) by pasting them sequentially into the R console. Try and understand what each line is doing.
# * Now test the whole `TreeHeight.R` script at one go using `source` and/or `Rscript` (in Linux/Ubuntu).
# ## Practicals
#
# ### Tree heights
#
# Modify the script `TreeHeight.R` so that it does the following:
# * Loads `trees.csv` and calculates tree heights for all trees in the data. Note that the distances have been measured in meters. (Hint: use relative paths.)
# * Creates a csv output file called `TreeHts.csv` in `results`that contains the calculated tree heights along with the original data in the following format (only first two rows and headers shown):
# ```bash
# "Species","Distance.m","Angle.degrees","Tree.Height.m"
# "Populus tremula",31.6658337740228,41.2826361937914,27.8021161438536
# "<NAME>",45.984992608428,44.5359166583512,45.2460250644405
# ```
# This script should work using either `source` or `Rscript` in Linux / UNIX.
#
#
# ### Groupwork Practical on Tree Heights
#
# The goal of this practical is to make the `TreeHeight.R` script more general, so that it could be used for other datasets, not just `trees.csv`.
#
# Guidelines:
#
# * Write another R script called `get_TreeHeight.R` that takes a csv file name from the command line (e.g., `get_TreeHeight.R Trees.csv`) and outputs the result to a file just like `TreeHeight.R`above, but this time includes the input file name in the output file name as `InputFileName_treeheights.csv`. Note that you will have to strip the `.csv`or whatever the extension is from the filename, and also `../` etc., if you are using relative paths. (Hint: Command-line parameters are accessible within the R running environment via `commandArgs()` — so `help(commandArgs)` might be your starting point.)
# * Write a Unix shell script called `run_get_TreeHeight.sh` that tests `get_TreeHeight.R`. Include `trees.csv` as your example file. Note that `source` will not work in this case, as it does not allow scripts with arguments to be run; you will have to use `Rscript` instead.
#
# ### Groupwork Practical on Tree Heights 2
#
# Assuming you have already worked through [Python Chapter I](./05-Python_I.ipynb), write a Python version of `get_TreeHeight.R` (call it `get_TreeHeight.py`). Include a test of this script into `run_get_TreeHeight.sh`.
#
# (R-Vectorization)=
#
# ## Vectorization
#
# R is relatively slow at cycling through a data structure such as a dataframe or matrix (e.g., by using a `for` loop) because it is a *high-level, interpreted computer language*.
#
# That is, when you execute a command in R, it needs to "read" and interpret the necessary code from scratch every single time the command is called. On the other hand, compiled languages like C know exactly what the flow of the program is because the code is pre-interpreted and ready to go before execution (i.e., the code is "compiled").
#
# For example, when you assign a new variable in R:
a <- 1.0
class(a)
# R automatically figures out that `1.0` is a floating point number, and finds a place in the system memory for it, with `a` registered as a "pointer" to it.
#
# In contrast, in C, for example, you would have to do so manually by giving `a` an address and space in memory:
#
# ```C
# float a;
# a = 1.0;
# ```
#
# *Vectorization is an approach where you directly apply compiled, optimized code to run an operation on a vector, matrix, or a higher-dimensional data structure (like an R array), instead of performing the operation element-wise (each row or column element one at a time) on the data structure*.
#
# Apart from computational efficiency, vectorization makes code more concise, easier to read, and less error prone.
#
# Let's try an example that illustrates this point.
#
# $\star$ Type (save in `Code`) as `Vectorize1.R` the following script, and run it (it sums all elements of a matrix):
# +
M <- matrix(runif(1000000),1000,1000)
SumAllElements <- function(M){
Dimensions <- dim(M)
Tot <- 0
for (i in 1:Dimensions[1]){
for (j in 1:Dimensions[2]){
Tot <- Tot + M[i,j]
}
}
return (Tot)
}
print("Using loops, the time taken is:")
print(system.time(SumAllElements(M)))
print("Using the in-built vectorized function, the time taken is:")
print(system.time(sum(M)))
# -
# Note the `system.time` R function: it measures how much time your code takes. The time will vary with every run, and with every computer that this code is run on (so your times will be a bit different from what I got).
#
# Both the `SumAllElements()` and `sum()` approaches are correct, and will give you the right answer. However, the inbuilt function `sum()` is about 100 times faster than the other, because it uses vectorization, avoiding the explicit looping that `SumAllElements()` uses.
#
# In effect, of course, the computer still has to run loops. However, the looping is encapsulated in a pre-compiled program that R calls. These programs are written in more primitive (and therefore faster) languages like Fortran and C. For example, `sum()` is actually written in C if you look under the hood.
#
# In R, even though you should try to avoid loops, in practice it is often much easier to throw in a `for` loop first, and *then* "optimize" the code to avoid the loop if the running time is not satisfactory. Therefore, it is still essential that you become familiar with loops and looping as you learned in the sections above.
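#
# For instance, arithmetic on whole vectors is already vectorized in R. A minimal sketch of the same loop-vs-vectorized contrast (the numbers here are arbitrary):
#
# ```R
# x <- 1:100000
# ## Loop version: double each element, one at a time
# y_loop <- rep(NA, length(x))
# for (i in 1:length(x)){
#     y_loop[i] <- x[i] * 2
# }
# ## Vectorized version: one operation on the whole vector
# y_vec <- x * 2
# identical(y_loop, y_vec) # the two results should be identical
# ```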
#
# ### Pre-allocation
#
# And if you are using loops, one operation that is slow in R (and somewhat slow in all languages) is memory allocation for a variable that changes size during looping (e.g., a growing vector or dataframe). Writing a `for` loop that resizes a vector repeatedly forces R to re-allocate memory repeatedly, which makes it slow. Try this:
# +
NoPreallocFun <- function(x){
a <- vector() # empty vector
for (i in 1:x) {
a <- c(a, i) # concatenate
print(a)
print(object.size(a))
}
}
system.time(NoPreallocFun(10))
# -
# Here, in each repetition of the for loop, you can see that R has to re-size the vector and re-allocate (more) memory. It has to find the vector in memory, create a new vector that will fit more data, copy the old data over, insert the new data, and erase the old vector. This can get very slow as vectors get big.
#
# On the other hand, if you "pre-allocate" a vector that fits all the values, R doesn't have to re-allocate memory each iteration. Here's how you'd do that for the above case:
# +
PreallocFun <- function(x){
a <- rep(NA, x) # pre-allocated vector
for (i in 1:x) {
a[i] <- i # assign
print(a)
print(object.size(a))
}
}
system.time(PreallocFun(10))
# -
# $\star$ Write the above two blocks of code into a script called `preallocate.R`. You can't really see the difference in timing here using `system.time()` because the vector is really small (just 10 elements), and the `print()` commands take up most of the time. To really see the difference in efficiency between the two functions, increase the iterations in your script from 10 to 1000, and suppress the print commands. Make this modification to your script. The modified script should print just the outputs of the two `system.time()` calls.
#
# Fortunately, R has several functions that can operate on entire vectors and matrices without requiring looping (vectorization). That is, vectorizing a program means writing it such that as many operations as possible are applied to whole data structures (vectors, matrices, dataframes, lists, etc.) in one go, instead of to their individual elements.
#
# You will learn about some important R functions that allow vectorization in the following sections.
#
# ### The `*apply` family of functions
#
# There is a family of functions called `*apply` in R that vectorize your code for you. These functions are described in the help files (e.g., `?apply`).
#
# For example, `apply` can be used when you want to apply a function to the rows or columns of a matrix (and higher-dimensional analogues – remember arrays!). This is not generally advisable for data frames as it will need to coerce the data frame to a matrix first.
#
# Let us try applying the same function to the rows/columns of a matrix using `apply`.
#
#
# $\star$ Type the following in a script file called `apply1.R`, save it to your `Code` directory, and run it:
# +
## Build a random matrix
M <- matrix(rnorm(100), 10, 10)
## Take the mean of each row
RowMeans <- apply(M, 1, mean)
print (RowMeans)
# -
## Now the variance
RowVars <- apply(M, 1, var)
print (RowVars)
## By column
ColMeans <- apply(M, 2, mean)
print (ColMeans)
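# Note that `apply` forwards any arguments given after the function name to that function. For example (the `probs` values here are arbitrary):
#
# ```R
# ## Lower and upper quartiles of each column of M;
# ## probs is passed on to quantile()
# ColQuartiles <- apply(M, 2, quantile, probs = c(0.25, 0.75))
# print(ColQuartiles)
# ```
#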
# That was using `apply` on some of R's inbuilt functions. You can use apply to define your own functions. Let's try it.
#
# $\star$ Type the following in a script file called `apply2.R`, save it to your `Code` directory, and run it:
# +
SomeOperation <- function(v){ # (What does this function do?)
if (sum(v) > 0){ #note that sum(v) is a single (scalar) value
return (v * 100)
}
return (v)
}
M <- matrix(rnorm(100), 10, 10)
print (apply(M, 1, SomeOperation))
# -
# Thus, the function `SomeOperation` takes a vector `v` as input. If the sum of `v` (a single, scalar value) is greater than zero, it multiplies every element of `v` by 100 and returns the result; otherwise it returns `v` unchanged. So if `v` has positive and negative numbers, and their sum comes out to be positive, only then are all the values in `v` multiplied by 100 before being returned.
#
# There are many other members of the family: `lapply`, `sapply`, `eapply`, etc. Each is best suited to a particular data structure. For example, `lapply` and `sapply` are designed for R lists. Have a look at [this Stackoverflow thread](https://stackoverflow.com/questions/3505701/grouping-functions-tapply-by-aggregate-and-the-apply-family)
# for some guidelines.
#
# #### A vectorization example
#
# Let's try an example of vectorization involving `lapply` and `sapply`. Both of these *apply* a function to *each element of a list*, but the former returns a list, while the latter returns a vector.
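#
# A minimal illustration of that difference (toy list, arbitrary values):
#
# ```R
# L <- list(a = 1:3, b = 10:20)
# lapply(L, length) # returns a list of the two lengths
# sapply(L, length) # returns a named vector of the two lengths
# ```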
#
# We will also see how [sampling random numbers](R-random-numbers) works in the process of trying out this example.
#
# $\star$ Type the following blocks of code into a single script called `sample.R` and save in `code`.
#
# First some functions:
# +
######### Functions ##########
## A function to take a sample of size n from a population "popn" and return its mean
myexperiment <- function(popn,n){
pop_sample <- sample(popn, n, replace = FALSE)
return(mean(pop_sample))
}
## Calculate means using a FOR loop on a vector without preallocation:
loopy_sample1 <- function(popn, n, num){
result1 <- vector() #Initialize empty vector of size 1
for(i in 1:num){
result1 <- c(result1, myexperiment(popn, n))
}
return(result1)
}
## To run "num" iterations of the experiment using a FOR loop on a vector with preallocation:
loopy_sample2 <- function(popn, n, num){
    result2 <- vector(length = num) #Preallocate expected size
for(i in 1:num){
result2[i] <- myexperiment(popn, n)
}
return(result2)
}
## To run "num" iterations of the experiment using a FOR loop on a list with preallocation:
loopy_sample3 <- function(popn, n, num){
result3 <- vector("list", num) #Preallocate expected size
for(i in 1:num){
result3[[i]] <- myexperiment(popn, n)
}
return(result3)
}
## To run "num" iterations of the experiment using vectorization with lapply:
lapply_sample <- function(popn, n, num){
result4 <- lapply(1:num, function(i) myexperiment(popn, n))
return(result4)
}
## To run "num" iterations of the experiment using vectorization with sapply:
sapply_sample <- function(popn, n, num){
result5 <- sapply(1:num, function(i) myexperiment(popn, n))
return(result5)
}
# -
# *Think carefully about what each of these functions does.*
#
# Now let's generate a population. To get the same result every time, let's set the random number seed (you might want to review [this section](R-random-numbers)).
set.seed(12345)
popn <- rnorm(10000) # Generate the population
hist(popn)
# And run and time the different functions:
# +
n <- 100 # sample size for each experiment
num <- 10000 # Number of times to rerun the experiment
print("Using loops without preallocation on a vector took:" )
print(system.time(loopy_sample1(popn, n, num)))
print("Using loops with preallocation on a vector took:" )
print(system.time(loopy_sample2(popn, n, num)))
print("Using loops with preallocation on a list took:" )
print(system.time(loopy_sample3(popn, n, num)))
print("Using the vectorized sapply function (on a list) took:" )
print(system.time(sapply_sample(popn, n, num)))
print("Using the vectorized lapply function (on a list) took:" )
print(system.time(lapply_sample(popn, n, num)))
# -
# *Note that these times will be different on your computer because they depend on the computer's hardware and software (especially, the operating system), as well as what other processes are running.*
#
# $\star$ Run `sample.R` using `source` and/or `Rscript` and make sure it works.
#
# Rerun the script a few times, first keeping the `n` and `num` parameters fixed, and then also varying them. Compare the times you get from these repeated runs and think about which approach is the most efficient and why. Clearly, the loopy approach without pre-allocation is usually going to be bad, while the others are pretty comparable.
#
# #### The `tapply` function
#
# Let's look at `tapply`, which is particularly useful because it allows you to apply a function to subsets of a vector in a dataframe, with the subsets defined by some other vector in the same dataframe, usually a factor (this could be useful for the Pound Hill data analysis that's coming up, for example).
#
# This makes it a bit of a different member of the `*apply` family. Try this:
x <- 1:20 # a vector
x
# Now create a `factor` type variable (of the same length) defining groups:
y <- factor(rep(letters[1:5], each = 4))
y
# + [markdown] tags=[]
# Now add up the values in x within each subgroup defined by y:
# -
tapply(x, y, sum)
# ### Using `by`
#
# You can also do something similar to `tapply` with the `by` function, i.e., apply a function to a dataframe using some factor to define the subsets. Try this:
#
# First import some data:
attach(iris)
iris
# Now run the `colMeans` function (it is better suited to dataframes than plain `mean`) on multiple columns:
by(iris[,1:2], iris$Species, colMeans)
by(iris[,1:2], iris$Petal.Width, colMeans)
# ### Using `replicate`
#
# The `replicate` function is useful for avoiding a loop around a function that typically involves random number generation (more on this below). For example:
replicate(10, runif(5))
# That is, you just generated 10 sets (columns) of 5 uniformly-distributed random numbers (a 5 $\times$ 10 matrix).
#
# ```{note}
# The actual numbers you get will be different from the ones you see here unless you set the random number seed [as we learned above](R-random-numbers).
# ```
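# `replicate` is essentially a convenience wrapper; a similar result can be obtained with `sapply` at the cost of a throwaway loop index. A sketch (resetting the seed so both draws match):
#
# ```R
# set.seed(1)
# A <- replicate(10, runif(5))
# set.seed(1)
# B <- sapply(1:10, function(i) runif(5))
# identical(A, B) # should be TRUE
# ```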
# ### Using `plyr` and `ddply`
#
# The `plyr` package combines the functionality of the `*apply` family, into a few handy functions. Look up the [web page](http://plyr.had.co.nz/).
#
# In particular, `ddply` is very useful, because for each subset of a data frame, it applies a function and then combines results into another data frame. In other words, "ddply" means: take a data frame, split it up, do something to it, and return a data frame. Look up [this](http://seananderson.ca/2013/12/01/plyr.html) and
# [this](https://www.r-bloggers.com/transforming-subsets-of-data-in-r-with-by-ddply-and-data-table/)
# for examples. The latter page also compares the speed of `ddply` vs `by`; `ddply` is actually slower than other vectorized methods, as it trades off some of the speed of vectorization for compactness of use! Indeed, functions in `plyr` can generally be slow if you are working with very large datasets that involve a lot of subsetting (analyses by many groups or grouping variables).
#
# The base `*apply` functions remain useful and worth knowing even if you do get into `plyr` or better still, `dplyr`, which we will see in the [Data chapter](08-Data_R.ipynb).
#
#
# ## Practicals
#
# ### A vectorization challenge
#
# The Ricker model is a classic discrete-time population model, introduced by Ricker in 1954 to model the recruitment of stock in fisheries. It gives the expected number (or density) $N_{t+1}$ of individuals in generation $t + 1$ as a function of the number of individuals in the previous generation $t$:
#
# $$ N_{t+1}= N_t e^{r\left(1-\frac{N_t}{k}\right)} $$
#
# Here $r$ is the intrinsic growth rate and $k$ is the carrying capacity of the environment. Try this script (call it `Ricker.R` and save it to `code`), which runs the model:
# +
Ricker <- function(N0=1, r=1, K=10, generations=50)
{
# Runs a simulation of the Ricker model
# Returns a vector of length generations
N <- rep(NA, generations) # Creates a vector of NA
N[1] <- N0
for (t in 2:generations)
{
N[t] <- N[t-1] * exp(r*(1.0-(N[t-1]/K)))
}
return (N)
}
plot(Ricker(generations=10), type="l")
# -
# Now open and run the script `Vectorize2.R` (available on the TheMulQuaBio repository). This is the stochastic Ricker model (compare with the above script to see where the stochasticity (random error) enters). Now modify the script to complete the exercise given.
# *Your goal is to come up with a solution better than mine!*
# ## Errors and Debugging
#
# As we learned in [Python Chapter I](Python:errors), dealing with errors is a fundamental programming task that never really goes away!
#
# ### Unit testing
#
# We will not cover unit testing here, which we [did in Python Chapter I](Python:errors). Unit testing in R is equally recommended if you are going to develop complex code and packages.
#
# ```{tip}
# A very convenient tool for R unit testing is [`testthat`](https://testthat.r-lib.org).
# ```
#
# In addition, you can use other "[defensive programming](https://en.wikipedia.org/wiki/Defensive_programming)" methods in R to keep an eye on errors that might arise under special circumstances in a complex program. A good option for this is to check that key conditions hold using [`stopifnot()`](https://stat.ethz.ch/R-manual/R-devel/library/base/html/stopifnot.html), which halts execution with an error if any condition you give it is not met. This is a bit like `try` (which we will cover below, and which we also covered in [Python Chapter I](Python:errors)).
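#
# For example, `stopifnot()` halts with an informative error unless every condition passed to it is `TRUE` (the function below is made up for illustration):
#
# ```R
# GrowthRate <- function(N1, N0){
#     stopifnot(N0 > 0, N1 >= 0) # abort early on invalid inputs
#     return(N1 / N0)
# }
# GrowthRate(20, 10) # fine
# # GrowthRate(20, 0) # would stop with "N0 > 0 is not TRUE"
# ```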
# ### Debugging
#
# Once you have found an error, you need to debug it. Assuming you have an idea about where (or the general area) in your code the error is, a simple way to debug is to use the `browser()` function.
#
# The `browser()` function allows you to insert a breakpoint in your script (where the running of the script is stopped) and then step through your code. Place it within your script at the point you want to examine local variables (e.g. inside a `for` loop).
#
# Let's look at an example usage of `browser()`.
#
# $\star$ Type the following in a script file called `browse.R` and save in `code`:
#
# ```R
# Exponential <- function(N0 = 1, r = 1, generations = 10){
# # Runs a simulation of exponential growth
# # Returns a vector of length generations
#
# N <- rep(NA, generations) # Creates a vector of NA
#
# N[1] <- N0
# for (t in 2:generations){
# N[t] <- N[t-1] * exp(r)
# browser()
# }
# return (N)
# }
#
# plot(Exponential(), type="l", main="Exponential growth")
# ```
# The script will run up to the first iteration of the `for` loop, and the console will then enter browser mode, which looks like this:
#
# ```R
# Browse[1]>
# ```
#
# Now, you can examine the variables present at that point. Also, at the browser console, you can enter expressions as normal, or use a few particularly useful debug commands (similar to the Python debugger):
#
# * `n`: single-step
# * `c`: exit browser and continue
# * `Q`: exit browser and abort, return to top-level.
#
#
# ```{tip}
# We will not cover advanced debugging here as we did {ref}`in Python Chapter I <Python:errors>`, but look up `traceback()` (to find where the errors(s) are when a program crashes), and `debug()` (to debug a whole function).
# ```
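# For instance, `debug()` could be used on the `Exponential()` function from `browse.R` above (a sketch to try at the R console):
#
# ```R
# debug(Exponential)   # every subsequent call to Exponential() enters the browser
# Exponential()        # step through with n, c, and Q as above
# undebug(Exponential) # switch interactive debugging off again
# ```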
# ### "Catching" errors
#
# Often, you don't know whether a simulation or an R function will work on particular data or a variable, or on a particular value of a variable (this can happen in many statistical functions).
#
# Indeed, as most of you will have experienced by now, there can be frustrating, puzzling bugs in programs that lead to mysterious errors. Often, the error and warning messages you get are hard to understand, especially in R!
#
# Rather than having R throw you out of the code, you would rather catch the error and keep going. This can be done by using the `try` keyword.
#
# Let's try `try`!
#
# First, let's write a function:
doit <- function(x){
temp_x <- sample(x, replace = TRUE)
if(length(unique(temp_x)) > 30) {#only take mean if sample was sufficient
print(paste("Mean of this sample was:", as.character(mean(temp_x))))
}
else {
stop("Couldn't calculate mean: too few unique values!")
}
}
# This function runs a simulation that involves sampling from a synthetic population with replacement and takes the sample's mean, but *only* if more than 30 unique values are obtained in the sample. Note that we have also used a new function here: `stop()`. Pay close attention to `sample()` and `stop()` in the above script (check out what they do by using R help or searching online).
#
#
# Now, generate your population:
# +
set.seed(1345) # again, to get the same result for illustration
popn <- rnorm(50)
hist(popn)
# -
# Now try running it using `lapply` as you did above, repeating the sampling exercise 15 times:
lapply(1:15, function(i) doit(popn))
# Your result will be different from this (you could get anywhere between 0 and 15 successful samples; we got 4 here). But in most cases, the script will fail because of the `stop` command (on line 7 of the above function) at some iteration, returning fewer than the requested 15 mean values, followed by an error stating that the mean could not be calculated because too few unique values were sampled.
# Now try doing the same using `try`:
result <- lapply(1:15, function(i) try(doit(popn), FALSE))
# In this run, you again asked for the means of 15 samples, and again you (most likely) got fewer than that (see the above example output), but without any error! The `FALSE` modifier for the `try` command suppresses any error messages, but `result` will still contain them so that you can inspect them later (see below). *Please don't forget to check the help on inbuilt R commands like `try`.*
#
# The errors are stored in the object `result`:
class(result)
# This is a list that stores the result of each of the 15 runs, including the ones that ran into an error. Have a look at it:
result
# That's a lot of output! But basically it tells you which runs ran into an error and why (no surprises here, of course!). You can also store the results "manually" by using a loop to do the same:
result <- vector("list", 15) #Preallocate/Initialize
for(i in 1:15) {
result[[i]] <- try(doit(popn), FALSE)
}
# Now have a look at the new `result`; it will have similar content as the one you got from using `lapply`.
#
# ```{tip}
# Also check out `tryCatch()` as an alternative to `try()`.
# ```
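# As a sketch of what `tryCatch()` adds over `try()`: you can supply a handler that decides what to return when an error occurs (the wrapper name below is made up):
#
# ```R
# CatchDoit <- function(x){
#     tryCatch(doit(x),
#              error = function(e) paste("Run failed:", conditionMessage(e)))
# }
# result <- lapply(1:15, function(i) CatchDoit(popn)) # tidy messages instead of try-error objects
# ```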
# $\star$ Type the above blocks of code illustrating `try` into a script file called `try.R` and save in `code`. The script, when `source`d, should run without errors (that is, don't run the sampling function without `try`!).
# ## Packages
#
# The big strength of R is that users can easily build and share packages through [cran.r-project.org](https://cran.r-project.org).
#
# There are R packages for practically all statistical and mathematical analysis you might conceive, so check them out before reinventing the wheel. Visit [cran.r-project.org](https://cran.r-project.org) and go to packages to see a list and a brief description.
#
# On Windows, Macs, and Linux, you can install a package from within R by using the `install.packages()` command.
#
# For example:
#
# ```r
# install.packages(c("tidyverse"))
# ```
# You can also install multiple packages by concatenating their names: `install.packages(c("pack1","pack2","pack3"))`
# (three packages in this hypothetical example)
#
# In Ubuntu, you will have to launch a `sudo R` session to get `install.packages()` to work properly. Otherwise, you will be forced to install the package in a non-standard location on your drive that does not require `sudo` privileges (not recommended).
#
# You can also use the RStudio GUI to install packages using your mouse and menu.
#
# In Ubuntu, you can also use the bash terminal:
#
# ```bash
# sudo apt install r-cran-tidyverse
# ```
#
# $\star$ Go ahead and install `tidyverse` if you don't have it yet. We will be using it [soon](./08-Data_R.ipynb)!
#
#
# ### Building your own
#
# You can combine your code, data sets and documentation to make a *bona fide* R package. You may wish to do this for particularly large projects that you think will be useful for others. Read the [*Writing R Extensions*](https://cran.r-project.org/doc/manuals/r-release/R-exts.html) manual and see `package.skeleton()` to get started.
#
# The R tool set [EcoDataTools](https://github.com/DomBennett/EcoDataTools) and the package [cheddar](https://github.com/quicklizard99/cheddar) were written by Silwood Grad Students!
# ## Practicals
#
# ### Is Florida getting warmer?
#
# *This Practical assumes you have at least a basic understanding of [correlation coefficients](regress:correlations) and [p-values](13-t_F_tests.ipynb).*
#
# Your goal is to write an R script that will help answer the question: *Is Florida getting warmer*? Call this script `Florida.R`.
#
# To answer the question, you need to calculate the [correlation coefficients](regress:correlations) between temperature and time. However, you can't use the standard p-value calculated for a correlation coefficient, because measurements of climatic variables in successive time-points in a time series (successive seconds, minutes, hours, months, years, etc.) are *not independent*. Therefore you will use a permutation analysis instead, by generating a distribution of random correlation coefficients and compare your observed coefficient with this random distribution.
#
# Some guidelines:
#
# * Load and examine the annual temperature dataset from Key West in Florida, USA for the 20th century:
# +
rm(list=ls())
load("../data/KeyWestAnnualMeanTemperature.RData")
ls()
# -
class(ats)
head(ats)
plot(ats)
# Then,
# * Compute the appropriate correlation coefficient between years and temperature and store it (look at the help file for `cor()`)
# * Repeat this calculation a *sufficient* number of times, each time randomly reshuffling the temperatures (i.e., randomly re-assigning temperatures to years), and recalculating the correlation coefficient (and storing it)
#
# ```{tip}
# You can use the `sample` function that we learned about above to do the shuffling. Read the help file for this function and experiment with it.
# ```
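# For instance (made-up temperatures; your shuffled order will differ between runs unless you set a seed):
#
# ```R
# temps <- c(24.1, 24.9, 25.3, 24.7, 25.0)
# sample(temps) # with no other arguments, returns a random permutation of temps
# ```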
# * Calculate what fraction of the random correlation coefficients were greater than the observed one (this is your approximate, asymptotic p-value).
#
# * *Interpret and present the results*: Present your results and their interpretation in a pdf document written in $\LaTeX$ (include the document's source code in the submission). *Keep the writeup, including any figures, to one A4 page.*
# ### Groupwork Practical: Autocorrelation in Florida weather
#
#
# *This Practical assumes you have at least a basic understanding of [correlation coefficients](regress:correlations) and [p-values](13-t_F_tests.ipynb).*
#
# Your goal is to write an R script (name it `TAutoCorr.R`) that will help answer the question: *Are temperatures of one year significantly correlated with the next year (successive years), across years in a given location*?
#
# To answer this question, you need to calculate the correlation between $n-1$ pairs of successive years, where $n$ is the total number of years. However, here again, you can't use the standard p-value calculated for a correlation coefficient, because measurements of climatic variables at successive time-points in a time series (successive seconds, minutes, hours, months, years, etc.) are *not independent*.
#
# The general guidelines are:
#
# * Compute the appropriate correlation coefficient between successive years and store it (look at the help file for `cor()`)
# * Repeat this calculation a *sufficient* number of times by randomly permuting the time series, then recalculating the correlation coefficient for each randomly permuted sequence of annual temperatures and storing it.
# * Then calculate what fraction of the correlation coefficients from the previous step were greater than that from step 1 (this is your approximate p-value).
# * *Interpret and present the results*: Present your results and their interpretation in a pdf document written in $\LaTeX$ (submit the document's source code as well).
#
#
# <!-- ### Practicals wrap-up
#
# * Review and make sure you can run all the commands, code fragments, and named scripts we have built till now and get the expected outputs.
#
# * Annotate/comment your code lines as much and as often as necessary using #.
#
# * Keep all files organized in `code`, `data`and `results` directories
#
# * (If you are required to as part of your course) `git add`, `commit` and `push` all your code and data from this chapter to your git repository by the given deadline. -->
# ## Readings and Resources
#
# Check the readings under the R directory in the TheMulQuaBio repository. Also, search online for "R tutorial", and plenty will pop up. Choose ones that seem the most intuitive to you. *Remember, all R packages come with pdf guides/documentation!*
#
# ### R as a Programming language
#
# * There are excellent websites besides cran. In particular, check out
# [statmethods.net](https://www.statmethods.net/) and the [R Wiki](https://en.wikibooks.org/wiki/R_Programming).
# * https://blog.sellorm.com/2017/12/18/learn-to-write-command-line-utilities-in-r/
# * https://www.r-bloggers.com/2019/11/r-scripts-as-command-line-tools/
#
# ### Mathematical modelling and Stats in R
#
# * The Use R! series (all yellow books) by Springer are really good. In particular, consider: "A Beginner's Guide to R", "R by Example", "Numerical Ecology With R", "ggplot2" (coming up in the [Visualization Chapter](08-Data_R.ipynb)), "A Primer of Ecology with R", "Nonlinear Regression with R", and "Analysis of Phylogenetics and Evolution with R".
# * For more focus on dynamical models: Soetaert & Herman. 2009 "A practical guide to ecological modelling: using R as a simulation platform".
#
# ### Debugging
#
# * [`testthat`](https://testthat.r-lib.org/): Unit Testing for R by <NAME> (also developer of the `tidyverse` package, including `ggplot`).
# * [Notes on debugging in R by Wickham](https://adv-r.hadley.nz/debugging.html)
# * A good [overview of debugging methods in R with examples](https://data-flair.training/blogs/debugging-in-r-programming)
# * [An introduction to the Interactive Debugging Tools in R](http://www.biostat.jhsph.edu/~rpeng/docs/R-debug-tools.pdf) by <NAME>.
# * If you are using RStudio, [see this](https://support.rstudio.com/hc/en-us/articles/205612627-Debugging-with-RStudio).
#
# ### Sweave and knitr
#
# Sweave and knitr are tools that allows you to write your Dissertation Report or some other document such that it can be updated automatically if data or R analysis change. Instead of inserting a prefabricated graph or table into the report, the master document contains the R code necessary to obtain it. When run through R, all data analysis output (tables, graphs, etc.) is created on the fly and inserted into a final document, which can be written using $\LaTeX$, LyX, HTML, or Markdown. The report can be automatically updated if data or analysis change, which allows for truly reproducible research.
#
# * For a practical intro to Sweave and knitr, see [this](https://support.rstudio.com/hc/en-us/articles/200552056-Using-Sweave-and-knitr) and [this](http://yihui.name/knitr/).
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Installing Python and GraphLab Create
# Please follow the installation instructions here before getting started:
#
#
# ## We have done
# * Installed Python
# * Started Ipython Notebook
# # Getting started with Python
print 'Hello World!'
# ## Create some variables in Python
i = 4 #int
type(i)
f = 4.1 #float
type(f)
b = True #boolean variable
s = "This is a string!"
print s
# ## Advanced python types
l = [3,1,2] #list
print l
d = {'foo':1, 'bar':2.3, 's':'my first dictionary'} #dictionary
print d
print d['foo'] #element of a dictionary
n = None #Python's null type
type(n)
# ## Advanced printing
print "Our float value is %s. Our int value is %s." % (f,i) #Python is pretty good with strings
# ## Conditional statements in python
if i == 1 and f > 4:
print "The value of i is 1 and f is greater than 4."
elif i > 4 or f > 4:
    print "either i or f is greater than 4."
else:
print "both i and f are less than or equal to 4"
# ## Conditional loops
print l
for e in l:
print e
# Note that in Python, we don't use {} or other markers to indicate the part of the loop that gets iterated. Instead, we just indent and align each of the iterated statements with spaces or tabs. (You can use as many as you want, as long as the lines are aligned.)
counter = 6
while counter < 10:
print counter
counter += 1
# # Creating functions in Python
#
# Again, we don't use {}, but just indent the lines that are part of the function.
def add2(x):
y = x + 2
return y
i = 5
add2(i)
# We can also define simple functions with lambdas:
square = lambda x: x*x
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="ojm_6E9f9Kcf"
# # MLP GenCode
# MLP_GenCode_107 with one change.
# Work on the entire dataset with almost no length restriction: 200bp to 99Kbp.
# accuracy: 99.56%, AUC: 99.96%
# + colab={"base_uri": "https://localhost:8080/"} id="RmPF4h_YI_sT" outputId="294be12a-360d-47d6-a923-93312c0fe98f"
import time
def show_time():
t = time.time()
print(time.strftime('%Y-%m-%d %H:%M:%S %Z', time.localtime(t)))
show_time()
# + id="NEbXt5N4I_sc"
PC_TRAINS=8000
NC_TRAINS=8000
PC_TESTS=8000
NC_TESTS=8000 # Wen et al 2019 used 8000 and 2000 of each class
PC_LENS=(200,99000)
NC_LENS=(200,99000) # Wen et al 2019 used 250-3500 for lncRNA only
MAX_K = 3
INPUT_SHAPE=(None,84) # 4^3 + 4^2 + 4^1
NEURONS=128
DROP_RATE=0.25
EPOCHS=100 # 25
SPLITS=5
FOLDS=5 # make this 5 for serious testing
# + id="VQY7aTj29Kch"
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.utils import shuffle
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from keras.models import Sequential
from keras.layers import Dense,Embedding,Dropout
from keras.layers import Flatten,TimeDistributed
from keras.losses import BinaryCrossentropy
from keras.callbacks import ModelCheckpoint
# + colab={"base_uri": "https://localhost:8080/"} id="xUxEB53HI_sk" outputId="244f8aaf-0aea-4e4c-f87a-59a739178d10"
import sys
IN_COLAB = False
try:
from google.colab import drive
IN_COLAB = True
except:
pass
if IN_COLAB:
print("On Google CoLab, mount cloud-local file, get our code from GitHub.")
PATH='/content/drive/'
#drive.mount(PATH,force_remount=True) # hardly ever need this
drive.mount(PATH) # Google will require login credentials
DATAPATH=PATH+'My Drive/data/' # must end in "/"
import requests
r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/GenCodeTools.py')
with open('GenCodeTools.py', 'w') as f:
f.write(r.text)
from GenCodeTools import GenCodeLoader
r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/KmerTools.py')
with open('KmerTools.py', 'w') as f:
f.write(r.text)
from KmerTools import KmerTools
else:
print("CoLab not working. On my PC, use relative paths.")
DATAPATH='data/' # must end in "/"
sys.path.append("..") # append parent dir in order to use sibling dirs
from SimTools.GenCodeTools import GenCodeLoader
from SimTools.KmerTools import KmerTools
MODELPATH="BestModel" # saved on cloud instance and lost after logout
#MODELPATH=DATAPATH+MODELPATH # saved on Google Drive but requires login
# + [markdown] id="8buAhZRfI_sp"
# ## Data Load
# Restrict mRNA to those transcripts with a recognized ORF.
# + id="ypn0lhiqI_sq"
PC_FILENAME='gencode.v26.pc_transcripts.fa.gz'
NC_FILENAME='gencode.v26.lncRNA_transcripts.fa.gz'
PC_FULLPATH=DATAPATH+PC_FILENAME
NC_FULLPATH=DATAPATH+NC_FILENAME
# + colab={"base_uri": "https://localhost:8080/"} id="xTz0pkFlI_sr" outputId="3ce2c753-d884-4678-f6b7-9419c8f021b2"
loader=GenCodeLoader()
loader.set_label(1)
loader.set_check_utr(False)
pcdf=loader.load_file(PC_FULLPATH)
print("PC seqs loaded:",len(pcdf))
loader.set_label(0)
loader.set_check_utr(False)
ncdf=loader.load_file(NC_FULLPATH)
print("NC seqs loaded:",len(ncdf))
show_time()
# + [markdown] id="CCNh_FZaI_sv"
# ## Data Prep
# + colab={"base_uri": "https://localhost:8080/"} id="mkBXTJ__I_sx" outputId="59c27c35-fc43-4d78-f4c1-eec84cb0526f"
def dataframe_length_filter(df,low_high):
(low,high)=low_high
# The pandas query language is strange,
# but this is MUCH faster than loop & drop.
return df[ (df['seqlen']>=low) & (df['seqlen']<=high) ]
def dataframe_shuffle(df):
    # The ignore_index option is new in Pandas 1.3.
    # The default (False) replicates the old behavior: shuffle the index too.
    # The new option seems more logical: after shuffling, df.iloc[0] has index == 0.
    # return df.sample(frac=1,ignore_index=True)
    return df.sample(frac=1) # Use this until CoLab upgrades Pandas
def dataframe_extract_sequence(df):
return df['sequence'].tolist()
pc_all = dataframe_extract_sequence(
#dataframe_shuffle(
dataframe_length_filter(pcdf,PC_LENS))#)
nc_all = dataframe_extract_sequence(
#dataframe_shuffle(
dataframe_length_filter(ncdf,NC_LENS))#)
#pc_all=['CAAAA','CCCCC','AAAAA','AAACC','CCCAA','CAAAA','CCCCC','AACAA','AAACC','CCCAA']
#nc_all=['GGGGG','TTTTT','GGGTT','GGGTG','TTGTG','GGGGG','TTTTT','GGTTT','GGGTG','TTGTG']
show_time()
print("PC seqs pass filter:",len(pc_all))
print("NC seqs pass filter:",len(nc_all))
# Garbage collection to reduce RAM footprint
pcdf=None
ncdf=None
# + colab={"base_uri": "https://localhost:8080/"} id="V91rP2osI_s1" outputId="f71e9b2b-8700-4511-d791-3af987df37c8"
# Any portion of a shuffled list is a random selection
pc_train=pc_all[:PC_TRAINS]
nc_train=nc_all[:NC_TRAINS]
pc_test=pc_all[PC_TRAINS:PC_TRAINS+PC_TESTS]
nc_test=nc_all[NC_TRAINS:NC_TRAINS+NC_TESTS]
print("PC train, NC train:",len(pc_train),len(nc_train))
print("PC test, NC test:",len(pc_test),len(nc_test))
# Garbage collection
pc_all=None
nc_all=None
# + colab={"base_uri": "https://localhost:8080/"} id="FfyPeInGI_s4" outputId="916fd44f-6e43-45e1-8fca-62006ac58e81"
def prepare_x_and_y(seqs1,seqs0):
    len1=len(seqs1)
    len0=len(seqs0)
    L1=np.ones(len1,dtype=np.int8)
    L0=np.zeros(len0,dtype=np.int8)
    S1 = np.asarray(seqs1)
    S0 = np.asarray(seqs0)
    all_labels = np.concatenate((L1,L0))
    all_seqs = np.concatenate((S1,S0))
    # Shuffle inputs and labels together so each sequence keeps its label;
    # shuffling the two arrays in separate calls would scramble the pairing.
    X,y = shuffle(all_seqs,all_labels) # sklearn.utils.shuffle
    return X,y
Xseq,y=prepare_x_and_y(pc_train,nc_train)
print(Xseq[:3])
print(y[:3])
# Tests:
show_time()
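`prepare_x_and_y` relies on a single `sklearn.utils.shuffle(X, y)` call keeping sequences and labels aligned. A minimal stdlib sketch of why that works: one permutation, applied to both lists, preserves the pairing (whereas two independent shuffles would not).

```python
import random

X = list(range(10))
y = [x * 10 for x in X]          # each label is paired with its input

perm = list(range(len(X)))
random.Random(3).shuffle(perm)   # one permutation...
X_s = [X[i] for i in perm]       # ...applied to the inputs
y_s = [y[i] for i in perm]       # ...and to the labels
assert all(lbl == inp * 10 for inp, lbl in zip(X_s, y_s))  # pairing intact
```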
# + colab={"base_uri": "https://localhost:8080/"} id="LWLixZOfI_s7" outputId="1ee35ecd-0228-4f55-ff2d-71435d786332"
def seqs_to_kmer_freqs(seqs,max_K):
    tool = KmerTools() # from SimTools
    collection = []
    for seq in seqs:
        # Build a fresh counts dict per sequence; reusing one shared dict
        # would risk accumulating counts across sequences.
        counts = tool.make_dict_upto_K(max_K)
        # Last param should be True when using Harvester.
        counts = tool.update_count_one_K(counts,max_K,seq,True)
        # Given counts for K=3, Harvester fills in counts for K=1,2.
        counts = tool.harvest_counts_from_K(counts,max_K)
        fdict = tool.count_to_frequency(counts,max_K)
        freqs = list(fdict.values())
        collection.append(freqs)
    return np.asarray(collection)
Xfrq=seqs_to_kmer_freqs(Xseq,MAX_K)
show_time()
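For intuition, here is an independent sketch of overlapping k-mer frequency counting, not the SimTools `KmerTools` API: for each k up to `max_k`, slide a window over the sequence, count each k-mer, and normalize by the number of windows.

```python
from itertools import product

def kmer_freqs_upto_k(seq, max_k=3, alphabet="ACGT"):
    """Frequencies of overlapping k-mers for k = 1..max_k, in a fixed key order."""
    freqs = []
    for k in range(1, max_k + 1):
        keys = ["".join(p) for p in product(alphabet, repeat=k)]
        counts = dict.fromkeys(keys, 0)
        n_windows = max(len(seq) - k + 1, 1)
        for i in range(len(seq) - k + 1):
            kmer = seq[i:i + k]
            if kmer in counts:          # skip windows with letters outside the alphabet
                counts[kmer] += 1
        freqs.extend(counts[key] / n_windows for key in keys)
    return freqs

vec = kmer_freqs_upto_k("ACGTACGT")
print(len(vec))  # 4 + 16 + 64 = 84 features, as in INPUT_SHAPE
```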
# + [markdown] id="dJ4XhrzGI_s-"
# ## Neural network
# + colab={"base_uri": "https://localhost:8080/"} id="o5NPW7zKI_tC" outputId="c51821ab-4c84-4337-8f5d-20ffc2dbfd88"
def make_DNN():
dt=np.float32
print("make_DNN")
print("input shape:",INPUT_SHAPE)
dnn = Sequential()
dnn.add(Dense(NEURONS,activation="sigmoid",dtype=dt)) # relu doesn't work as well
dnn.add(Dropout(DROP_RATE))
dnn.add(Dense(NEURONS,activation="sigmoid",dtype=dt))
dnn.add(Dropout(DROP_RATE))
dnn.add(Dense(1,activation="sigmoid",dtype=dt))
dnn.compile(optimizer='adam', # adadelta doesn't work as well
loss=BinaryCrossentropy(from_logits=False),
metrics=['accuracy']) # add to default metrics=loss
dnn.build(input_shape=INPUT_SHAPE)
return dnn
model = make_DNN()
print(model.summary())
# + id="E2Dr6L9lI_tE"
def do_cross_validation(X,y):
cv_scores = []
fold=0
mycallbacks = [ModelCheckpoint(
filepath=MODELPATH, save_best_only=True,
monitor='val_accuracy', mode='max')]
splitter = KFold(n_splits=SPLITS,shuffle=True)
for train_index,valid_index in splitter.split(X):
if fold < FOLDS:
fold += 1
X_train=X[train_index] # inputs for training
y_train=y[train_index] # labels for training
X_valid=X[valid_index] # inputs for validation
y_valid=y[valid_index] # labels for validation
print("MODEL")
# Call constructor on each CV. Else, continually improves the same model.
            model = make_DNN()
print("FIT") # model.fit() implements learning
start_time=time.time()
history=model.fit(X_train, y_train,
epochs=EPOCHS,
verbose=1, # ascii art while learning
callbacks=mycallbacks, # called at end of each epoch
validation_data=(X_valid,y_valid))
end_time=time.time()
elapsed_time=(end_time-start_time)
print("Fold %d, %d epochs, %d sec"%(fold,EPOCHS,elapsed_time))
# print(history.history.keys()) # all these keys will be shown in figure
pd.DataFrame(history.history).plot(figsize=(8,5))
plt.grid(True)
plt.gca().set_ylim(0,1) # any losses > 1 will be off the scale
plt.show()
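`KFold` simply partitions the row indices so each fold serves once as the validation set. A stdlib sketch of the idea (without the reordering that `KFold(shuffle=True)` adds):

```python
def kfold_indices(n, splits):
    # Partition indices 0..n-1 into `splits` folds; yield (train, valid) per fold.
    sizes = [n // splits + (1 if i < n % splits else 0) for i in range(splits)]
    idx, start = list(range(n)), 0
    for size in sizes:
        valid = idx[start:start + size]
        train = idx[:start] + idx[start + size:]
        yield train, valid
        start += size

for train, valid in kfold_indices(10, 5):
    assert len(valid) == 2 and len(train) == 8
```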
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="Ey7Q-YWcI_tH" outputId="23979635-5ce1-41f5-9266-694aae6b3fee"
do_cross_validation(Xfrq,y)
# + colab={"base_uri": "https://localhost:8080/"} id="GVImN4_0I_tJ" outputId="e55472c9-f0d4-445c-d249-65e548d1a510"
from keras.models import load_model
print(pc_train[0])
Xseq,y=prepare_x_and_y(pc_train,nc_train)
print(Xseq[0])
Xfrq=seqs_to_kmer_freqs(Xseq,MAX_K)
print(Xfrq[0])
X=Xfrq
best_model=load_model(MODELPATH)
scores = best_model.evaluate(X, y, verbose=0)
print("The best model parameters were saved during cross-validation.")
print("Best was defined as maximum validation accuracy at end of any epoch.")
print("Now re-load the best model and test it on previously unseen data.")
print("Test on",len(pc_test),"PC seqs")
print("Test on",len(nc_test),"NC seqs")
print("%s: %.2f%%" % (best_model.metrics_names[1], scores[1]*100))
# + colab={"base_uri": "https://localhost:8080/", "height": 347} id="sxC-6dXcC8jG" outputId="b7e9fd9a-bc09-460f-89ae-c77a08d24d14"
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
ns_probs = [0 for _ in range(len(y))]
bm_probs = best_model.predict(X)
print("predictions.shape",bm_probs.shape)
print("first prediction",bm_probs[0])
ns_auc = roc_auc_score(y, ns_probs)
bm_auc = roc_auc_score(y, bm_probs)
ns_fpr, ns_tpr, _ = roc_curve(y, ns_probs)
bm_fpr, bm_tpr, _ = roc_curve(y, bm_probs)
plt.plot(ns_fpr, ns_tpr, linestyle='--', label='Guess, auc=%.4f'%ns_auc)
plt.plot(bm_fpr, bm_tpr, marker='.', label='Model, auc=%.4f'%bm_auc)
plt.title('ROC')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend()
plt.show()
print("%s: %.2f%%" %('AUC',bm_auc*100.0))
# + id="tGf2PcxRC8jT"
# Source: Notebooks/MLP_GenCode_108.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.5.2
# language: julia
# name: julia-1.5
# ---
# +
using PyPlot
x = collect(0.0:1.0:100)
y = 2.0.*x.^2 .- 100.0 + x .+0.0003.*x.^5
plot(x,y)
ese = y./100
NN = Int32(length(y))
wx = 1. ./(ese.^2) # relative variance of observations
wy = zeros(1).+1. # systematic errors... not used so put them to 1
VAL = ese.^2
M = 2
N = Int32(length(x))
K = Int32(1) # number of y columns
MD = Int32(2) #spline mode
NC = Int32(length(y))
c = ones(NN,NC)
WK = ones(6*(N*M+1)+N,1)
IER=Int32[1]
# -
typeof(NC)
#ccall( (:gcvspl_, "./libgcvspl.so"), Void, (Ptr{Float64},Ptr{Float64},Ptr{Clonglong},Ptr{Float64},Ptr{Float64},Ptr{Clonglong},Ptr{Clonglong},Ptr{Clonglong},Ptr{Clonglong},Ptr{Float64},Ptr{Float64},Ptr{Clonglong},Ptr{Float64},Ptr{Clonglong}),x,y,&NN,wx,wy,&M,&N,&K,&MD,VAL,c,&NC,WK,IER)
ccall( (:gcvspl_, "./libgcvspl.so"), Cvoid,
(Ptr{Float64},Ptr{Float64},Ref{Cint},Ptr{Float64},Ptr{Float64},Ref{Cint},Ref{Cint},Ref{Cint},Ref{Cint},Ptr{Float64},Ref{Float64},Ref{Cint},Ref{Float64},Ref{Cint}),
x,y,N,wx,wy,M,N,K,MD,VAL,c,NC,WK,IER)
y_calc = zeros(size(x,1),1)
IDER = Int32(0)
l= Int32(1)
q = zeros(2*M,1)
typeof(l)
for i =1:size(y_calc,1)
y_calc[i] = ccall( (:splder_, "./libgcvspl.so"),
Float64, (Ref{Cint},Ref{Cint},Ref{Cint},Ref{Float64},Ptr{Float64},Ptr{Float64},Ref{Cint},Ptr{Float64}),
IDER, M, N, x[i], x, c, l, q)
end
figure()
scatter(x,y)
plot(x,y_calc)
replace("Elementary, my dear Watson!", r"e" => "")
# Source: deps/src/gcvspline/test.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# High Density Areas of Urban Development cit.014 http://data.jrc.ec.europa.eu/dataset/jrc-ghsl-ghs_smod_pop_globe_r2016a
# +
import numpy as np
import pandas as pd
import rasterio
import boto3
import requests as req
from matplotlib import pyplot as plt
# %matplotlib inline
import os
import sys
import threading
# -
# Establish s3 location
# +
# Investigate what the data in these rasters means, and whether we can
# display the high and low density clusters separately as-is.
s3_bucket = "wri-public-data"
s3_folder = "resourcewatch/cit_014_areas_of_urban_development/"
s3_files = ["cit_014_areas_of_urban_development_1975.tif",
"cit_014_areas_of_urban_development_1990.tif",
"cit_014_areas_of_urban_development_2000.tif",
"cit_014_areas_of_urban_development_2015.tif",
"cit_014_areas_of_urban_development_2015_HDC.tif",
"cit_014_areas_of_urban_development_2015_LDC.tif"]
s3_file_merge = "cit_014_areas_of_urban_development_merge.tif"
s3_key_origs = []
s3_key_edits = []
for file in s3_files:
orig = s3_folder + file
s3_key_origs.append(orig)
s3_key_edits.append(orig[0:-4] + "_edit.tif")
s3_key_merge = s3_folder + s3_file_merge
# -
s3_key_edits
# Create local staging folder for holding data
# !mkdir staging
os.chdir("staging")
staging_folder = os.getcwd()
os.environ["Z_STAGING_FOLDER"] = staging_folder
# Local files
# +
local_folder = "/Users/nathansuberi/Desktop/WRI_Programming/RW_Data"
rw_data_type = "/Cities/"
# Topics include: [Society, Food, Forests, Water, Energy, Climate, Cities, Biodiversity, Commerce, Disasters]
local_files = [
"GHS_SMOD_POP1975_GLOBE_R2016A_54009_1k_v1_0/GHS_SMOD_POP1975_GLOBE_R2016A_54009_1k_v1_0.tif",
"GHS_SMOD_POP1990_GLOBE_R2016A_54009_1k_v1_0/GHS_SMOD_POP1990_GLOBE_R2016A_54009_1k_v1_0.tif",
"GHS_SMOD_POP2000_GLOBE_R2016A_54009_1k_v1_0/GHS_SMOD_POP2000_GLOBE_R2016A_54009_1k_v1_0.tif",
"GHS_SMOD_POP2015_GLOBE_R2016A_54009_1k_v1_0/GHS_SMOD_POP2015_GLOBE_R2016A_54009_1k_v1_0.tif",
"GHS_SMOD_POP2015HDC_GLOBE_R2016A_54009_1k_v1_0/GHS_SMOD_POP2015HDC_GLOBE_R2016A_54009_1k_v1_0.tif",
"GHS_SMOD_POP2015LDC_GLOBE_R2016A_54009_1k_v1_0/GHS_SMOD_POP2015LDC_GLOBE_R2016A_54009_1k_v1_0.tif"
]
local_orig_keys = []
local_edit_keys = []
for file in local_files:
local_orig_keys.append(local_folder + rw_data_type + file)
local_edit_keys.append(local_folder + rw_data_type + file[0:-4] + "_edit.tif")
# -
local_orig_keys
# <b>Regardless of any needed edits, upload original file</b>
#
# <i>Upload tif to S3 folder</i>
#
# http://boto3.readthedocs.io/en/latest/guide/s3-example-creating-buckets.html
#
# <i>Monitor Progress of Upload</i>
#
# http://boto3.readthedocs.io/en/latest/_modules/boto3/s3/transfer.html
# https://boto3.readthedocs.io/en/latest/guide/s3.html#using-the-transfer-manager
# +
s3 = boto3.client("s3")
class ProgressPercentage(object):
def __init__(self, filename):
self._filename = filename
self._size = float(os.path.getsize(filename))
self._seen_so_far = 0
self._lock = threading.Lock()
def __call__(self, bytes_amount):
# To simplify we'll assume this is hooked up
# to a single filename.
with self._lock:
self._seen_so_far += bytes_amount
percentage = (self._seen_so_far / self._size) * 100
sys.stdout.write(
"\r%s %s / %s (%.2f%%)" % (
self._filename, self._seen_so_far, self._size,
percentage))
sys.stdout.flush()
# -
# Defined above:
# s3_bucket
# s3_key_orig
# s3_key_edit
# staging_key_orig
# staging_key_edit
for i in range(0,6):
print(i)
s3.upload_file(local_orig_keys[i], s3_bucket, s3_key_origs[i],
Callback=ProgressPercentage(local_orig_keys[i]))
# Check for compression, projection
#
# Create edit file if necessary
# +
# Check Compression, Projection
with rasterio.open(local_orig_keys[0]) as src:
pro0 = src.profile
data0 = src.read(1)
with rasterio.open(local_orig_keys[1]) as src:
pro1 = src.profile
data1 = src.read(1)
with rasterio.open(local_orig_keys[2]) as src:
pro2 = src.profile
data2 = src.read(1)
with rasterio.open(local_orig_keys[3]) as src:
pro3 = src.profile
data3 = src.read(1)
with rasterio.open(local_orig_keys[4]) as src:
pro4 = src.profile
data4 = src.read(1)
with rasterio.open(local_orig_keys[5]) as src:
pro5 = src.profile
data5 = src.read(1)
# uniq0 = np.unique(data0, return_counts=True)
# uniq1 = np.unique(data1, return_counts=True)
# uniq2 = np.unique(data2, return_counts=True)
# uniq3 = np.unique(data3, return_counts=True)
uniq4 = np.unique(data4, return_counts=True)
# uniq5 = np.unique(data5, return_counts=True)
# -
uniq4
# Examine each of the profiles - are they all the same data type?
print(pro0)
print(pro1)
print(pro2)
print(pro3)
print(pro4)
print(pro5)
profiles = [pro0, pro1, pro2, pro3, pro4, pro5]
# Upload edited files to S3
# +
# Defined above:
# s3_bucket
# s3_key_orig
# s3_key_edit
# staging_key_orig
# staging_key_edit
for i in range(0,6):
orig_key = local_orig_keys[i]
edit_key = local_edit_keys[i]
# Use rasterio to reproject and store locally, then upload
with rasterio.open(orig_key) as src:
kwargs = profiles[i]
print(kwargs)
kwargs.update(
driver='GTiff',
dtype=rasterio.int32, #rasterio.int16, rasterio.int32, rasterio.uint8,rasterio.uint16, rasterio.uint32, rasterio.float32, rasterio.float64
count=1,
compress='lzw',
nodata=0,
bigtiff='NO',
crs = 'EPSG:4326',
)
windows = src.block_windows()
with rasterio.open(edit_key, 'w', **kwargs) as dst:
for idx, window in windows:
src_data = src.read(1, window=window)
formatted_data = src_data.astype("int32")
dst.write_band(1, formatted_data, window=window)
s3.upload_file(edit_key, s3_bucket, s3_key_edits[i],
Callback=ProgressPercentage(edit_key))
# -
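The loop above streams each raster through block windows so the whole GeoTIFF never has to sit in memory at once. The same read/convert/write pattern on a plain NumPy array (an illustration of the windowing idea, not the rasterio API):

```python
import numpy as np

src = np.arange(16, dtype=np.float64).reshape(4, 4) * 1.5
dst = np.empty(src.shape, dtype=np.int32)
block = 2
for r in range(0, src.shape[0], block):        # iterate over block windows
    for c in range(0, src.shape[1], block):
        window = src[r:r + block, c:c + block]
        # astype truncates toward zero, as in the rasterio loop above
        dst[r:r + block, c:c + block] = window.astype(np.int32)
print(dst.dtype)  # int32
```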
s3_file_merge
# Merge files and upload to s3
# +
merge_key = './'+s3_file_merge
kwargs = profiles[-1].copy() # last profile from the loop above; copy so updates don't mutate it
print(kwargs)
kwargs.update(
driver='GTiff',
dtype=rasterio.int32, #rasterio.int16, rasterio.int32, rasterio.uint8,rasterio.uint16, rasterio.uint32, rasterio.float32, rasterio.float64
count=len(profiles),
compress='lzw',
nodata=0,
bigtiff='NO',
crs = 'EPSG:4326',
)
with rasterio.open(merge_key, 'w', **kwargs) as dst:
for idx, file in enumerate(local_edit_keys):
print(idx)
with rasterio.open(file) as src:
band = idx+1
windows = src.block_windows()
for win_id, window in windows:
src_data = src.read(1, window=window)
dst.write_band(band, src_data, window=window)
s3.upload_file(merge_key, s3_bucket, s3_key_merge,
Callback=ProgressPercentage(merge_key))
# -
# Inspect the final product
tmp = "./temp"
s3 = boto3.resource("s3")
s3.meta.client.download_file(s3_bucket, s3_key_merge, tmp)
with rasterio.open(tmp) as src:
print(src.profile)
data = src.read(4)
os.getcwd()
np.unique(data, return_counts=True)
plt.imshow(data)
# Source: cit_014_merge_done.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
# -
# Load the data and labels
breast_cancer_data = load_breast_cancer()
# Inspect the data
df_data = pd.DataFrame(breast_cancer_data.data)
df_labels = pd.DataFrame(breast_cancer_data.target)
df_labels.head()
print(breast_cancer_data.target_names)
df_data.head()
df_data.describe()
# The column scales differ widely (e.g. in their maxima), so normalization is needed.
# Apply min-max normalization.
def min_max_normalize(lst):
    # Compute min/max once instead of per element.
    lo, hi = min(lst), max(lst)
    return [(value - lo) / (hi - lo) for value in lst]
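A quick standalone sanity check of min-max scaling (restated here so this cell runs on its own; the input list is illustrative):

```python
def min_max_scale(lst):
    # Map the smallest value to 0.0 and the largest to 1.0.
    lo, hi = min(lst), max(lst)
    return [(v - lo) / (hi - lo) for v in lst]

print(min_max_scale([2.0, 4.0, 6.0, 10.0]))  # [0.0, 0.25, 0.5, 1.0]
```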
# +
# Apply to every column of df_data
for x in range(len(df_data.columns)):
df_data[x] = min_max_normalize(df_data[x])
df_data.describe()
# +
# Split the dataset into train / test sets.
# Note: train_test_split returns (X_train, X_test, y_train, y_test) in that order.
x_train, x_test, y_train, y_test = train_test_split(df_data, df_labels, test_size=0.2, random_state=100)
print(len(x_train), len(x_test), len(y_train), len(y_test))
# -
# Create the model
classifier = KNeighborsClassifier(n_neighbors=3)
classifier.fit(x_train, y_train)
# Evaluate the model's accuracy
print('Accuracy: ', classifier.score(x_test, y_test))
# Increase k from 1 to 100, recording each model's accuracy, then visualize
import matplotlib.pyplot as plt
k_list = range(1, 101)
accuracies = []
acc_dict = {}
for k in k_list:
    classifier = KNeighborsClassifier(n_neighbors=k)
    classifier.fit(x_train, y_train)
    acc = classifier.score(x_test, y_test)
    accuracies.append(acc)
    acc_dict[k] = acc
plt.plot(k_list, accuracies)
plt.xlabel('k')
plt.ylabel('Validation Accuracy')
plt.title('Breast Cancer Classifier Accuracy')
plt.show()
sorted(acc_dict.items(), reverse=True, key=lambda item: item[1])[:10]
# Source: 3. ML/code/.ipynb_checkpoints/learn_KNN-checkpoint.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Project II: Economic Growth
#
# This notebook will help you getting started with analyzing the growth dataset, `growth.csv`.
import pandas as pd
import numpy as np
import statsmodels.formula.api as smf
import matplotlib.pyplot as plt
# ## Read data
dat = pd.read_csv('growth.csv')
lbldf = pd.read_csv('labels.csv', index_col='variable')
lbl_all = lbldf.label.to_dict() # as a dictionary
print(f'The data contains {dat.shape[0]} rows (countries) and {dat.shape[1]} columns (variables).')
# # Descriptive plots
dat.plot.scatter(x='lgdp_initial', y='gdp_growth');
import seaborn as sns
sns.scatterplot(x='lgdp_initial', y='gdp_growth', data=dat, hue='malfal');
# # Collections of variables
#
# In order to make the analysis simpler, it may be convenient to collect variables in sets that belong together naturally.
# +
# all available variables
vv_institutions = ['marketref', 'dem', 'demCGV', 'demBMR', 'demreg']
vv_geography = [
'tropicar','distr', 'distcr', 'distc','suitavg','temp', 'suitgini', 'elevavg', 'elevstd',
'kgatr', 'precip', 'area', 'abslat', 'cenlong', 'area_ar', 'rough','landlock',
'africa', 'asia', 'oceania', 'americas' # 'europe' is the reference
]
vv_geneticdiversity = ['pdiv', 'pdiv_aa', 'pdivhmi', 'pdivhmi_aa']
vv_historical = ['pd1000', 'pd1500', 'pop1000', 'pop1500', 'ln_yst'] # these are often missing: ['pd1', 'pop1']
vv_religion = ['pprotest', 'pcatholic', 'pmuslim']
vv_danger = ['yellow', 'malfal', 'uvdamage']
vv_resources = ['oilres', 'goldm', 'iron', 'silv', 'zinc']
vv_educ = ['ls_bl', 'lh_bl'] # secondary, tertiary: we exclude 'lp_bl' (primary) to avoid rank failure
vv_all = {'institutions': vv_institutions,
'geography': vv_geography,
'geneticdiversity': vv_geneticdiversity,
'historical': vv_historical,
'religion': vv_religion,
'danger':vv_danger,
'resources':vv_resources
}
list_of_lists = vv_all.values()
vv_all['all'] = [v for sublist in list_of_lists for v in sublist]
# -
# convenient to keep a column of ones in the dataset
dat['constant'] = np.ones((dat.shape[0],))
# # Simple OLS
# +
# 1. avoiding missings
I = dat[['gdp_growth', 'lgdp_initial']].notnull().all(axis=1)
# 2. extract dataset
y = dat.loc[I, 'gdp_growth'].values.reshape((-1,1)) * 100.0
X = dat.loc[I, ['constant','lgdp_initial']].values
# 3. run OLS
betahat = np.linalg.inv(X.T @ X) @ X.T @ y
print(betahat)
# -
# # Adding more controls
# +
vs = vv_all['geography'] + vv_all['religion']
xs = ['lgdp_initial', 'pop_growth', 'investment_rate'] + vs
# avoiding missings
all_vars = ['gdp_growth'] + xs
I = dat[all_vars].notnull().all(1)
# extract data
X = dat.loc[I, xs].values
y = dat.loc[I,'gdp_growth'].values.reshape((-1,1)) * 100. #easier to read output when growth is in 100%
# add const. (unless this breaks the rank condition)
oo = np.ones((I.sum(),1))
X = np.hstack([X, oo])
xs.append('constant') # we put it in as the last element
# check the rank condition
K = X.shape[1]
assert np.linalg.matrix_rank(X) == K, 'X does not have full rank'
# compute the OLS estimator
betas = np.linalg.inv(X.T @ X) @ X.T @ y
# -
# format nicely
print(f'Mean y = {y.mean(): 5.2f}% growth per year')
pd.DataFrame({'β': betas[:,0]}, index=xs).round(3)
# Source: Growth/explore.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/tcbic/DS-Unit-1-Sprint-2-Data-Wrangling-and-Storytelling/blob/master/module4-sequence-your-narrative/LS_DS_124_Sequence_your_narrative.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] colab_type="text" id="JbDHnhet8CWy"
# _Lambda School Data Science_
#
# # Sequence your narrative
#
# Today we will create a sequence of visualizations inspired by [<NAME>'s 200 Countries, 200 Years, 4 Minutes](https://www.youtube.com/watch?v=jbkSRLYSojo).
#
# Using this [data from Gapminder](https://github.com/open-numbers/ddf--gapminder--systema_globalis/):
# - [Income Per Person (GDP Per Capital, Inflation Adjusted) by Geo & Time](https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints--income_per_person_gdppercapita_ppp_inflation_adjusted--by--geo--time.csv)
# - [Life Expectancy (in Years) by Geo & Time](https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints--life_expectancy_years--by--geo--time.csv)
# - [Population Totals, by Geo & Time](https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints--population_total--by--geo--time.csv)
# - [Entities](https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--entities--geo--country.csv)
# - [Concepts](https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--concepts.csv)
# + [markdown] colab_type="text" id="zyPYtsY6HtIK"
# Objectives
# - sequence multiple visualizations
# - combine qualitative anecdotes with quantitative aggregates
#
# Links
# - [<NAME>’s TED talks](https://www.ted.com/speakers/hans_rosling)
# - [Spiralling global temperatures from 1850-2016](https://twitter.com/ed_hawkins/status/729753441459945474)
# - "[The Pudding](https://pudding.cool/) explains ideas debated in culture with visual essays."
# - [A Data Point Walks Into a Bar](https://lisacharlotterost.github.io/2016/12/27/datapoint-in-bar/): a thoughtful blog post about emotion and empathy in data storytelling
# + [markdown] colab_type="text" id="SxTJBgRAW3jD"
# ## Make a plan
#
# #### How to present the data?
#
# Variables --> Visual Encodings
# - Income --> x
# - Lifespan --> y
# - Region --> color
# - Population --> size
# - Year --> animation frame (alternative: small multiple)
# - Country --> annotation
#
# Qualitative --> Verbal
# - Editorial / contextual explanation --> audio narration (alternative: text)
#
#
# #### How to structure the data?
#
# | Year | Country | Region | Income | Lifespan | Population |
# |------|---------|----------|--------|----------|------------|
# | 1818 | USA | Americas | ### | ## | # |
# | 1918 | USA | Americas | #### | ### | ## |
# | 2018 | USA | Americas | ##### | ### | ### |
# | 1818 | China | Asia | # | # | # |
# | 1918 | China | Asia | ## | ## | ### |
# | 2018 | China | Asia | ### | ### | ##### |
#
# + [markdown] colab_type="text" id="3ebEjShbWsIy"
# ## Upgrade Seaborn
#
# Make sure you have at least version 0.9.0.
#
# In Colab, go to **Restart runtime** after you run the `pip` command.
# + colab_type="code" id="4RSxbu7rWr1p" colab={"base_uri": "https://localhost:8080/", "height": 248} outputId="6abac932-7325-4c50-df1c-8f900006b5ac"
# !pip install --upgrade seaborn
# + colab_type="code" id="5sQ0-7JUWyN4" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="d9d4d795-cc5c-491c-86f9-aeb1143249b9"
import seaborn as sns
sns.__version__
# + [markdown] colab_type="text" id="S2dXWRTFTsgd"
# ## More imports
# + colab_type="code" id="y-TgL_mA8OkF" colab={}
# %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
# + [markdown] colab_type="text" id="CZGG5prcTxrQ"
# ## Load & look at data
# + id="EhZeFh1rHHt-" colab_type="code" colab={}
#Let's create DataFrames for our data.
# + colab_type="code" id="-uE25LHD8CW0" colab={}
income_df = pd.read_csv('https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints--income_per_person_gdppercapita_ppp_inflation_adjusted--by--geo--time.csv')
# + colab_type="code" id="gg_pJslMY2bq" colab={}
lifespan_df = pd.read_csv('https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints--life_expectancy_years--by--geo--time.csv')
# + colab_type="code" id="F6knDUevY-xR" colab={}
population_df = pd.read_csv('https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints--population_total--by--geo--time.csv')
# + colab_type="code" id="hX6abI-iZGLl" colab={}
entities_df = pd.read_csv('https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--entities--geo--country.csv')
# + colab_type="code" id="AI-zcaDkZHXm" colab={}
concepts_df = pd.read_csv('https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--concepts.csv')
# + colab_type="code" id="EgFw-g0nZLJy" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="117b67dd-40d8-4d73-fb91-4337910d7464"
#What does our data look like?
income_df.shape, lifespan_df.shape, population_df.shape, entities_df.shape, concepts_df.shape
# + colab_type="code" id="I-T62v7FZQu5" colab={"base_uri": "https://localhost:8080/", "height": 199} outputId="882037fd-5381-497a-9daa-875d4bb5af20"
income_df.head()
# + colab_type="code" id="2zIdtDESZYG5" colab={"base_uri": "https://localhost:8080/", "height": 199} outputId="56445ac2-7e0d-412b-e34f-1ddaf0b866c6"
lifespan_df.head()
# + colab_type="code" id="58AXNVMKZj3T" colab={"base_uri": "https://localhost:8080/", "height": 199} outputId="7732697b-09e4-4253-989d-2cf182c83054"
population_df.head()
# + colab_type="code" id="0ywWDL2MZqlF" colab={"base_uri": "https://localhost:8080/", "height": 270} outputId="a40fb7c1-b14b-4916-b1da-581ad391b3e9"
#Add this code so that all columns are visible.
pd.options.display.max_columns = 500
entities_df.head()
# + colab_type="code" id="mk_R0eFZZ0G5" colab={"base_uri": "https://localhost:8080/", "height": 559} outputId="b17d66de-1598-4205-db71-2a6fa6508b25"
concepts_df.head()
# + [markdown] colab_type="text" id="6HYUytvLT8Kf"
# ## Merge data
# + [markdown] colab_type="text" id="dhALZDsh9n9L"
# https://github.com/pandas-dev/pandas/blob/master/doc/cheatsheet/Pandas_Cheat_Sheet.pdf
# + colab_type="code" id="A-tnI-hK6yDG" colab={}
#We want to join the income, lifespan and population DataFrames.
#How can we do this?
three_df = pd.merge(pd.merge(income_df, lifespan_df), population_df)
#This could also be written as...
#three_df = pd.merge(income_df, lifespan_df)
#three_df = pd.merge(three_df, population_df)
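By default, chained `pd.merge` calls perform inner joins on the columns the frames share (here `geo` and `time`). A toy version of the same pattern, with hypothetical values:

```python
import pandas as pd

income = pd.DataFrame({'geo': ['usa', 'chn'], 'income': [100, 50]})
lifespan = pd.DataFrame({'geo': ['usa', 'chn'], 'lifespan': [79, 76]})
population = pd.DataFrame({'geo': ['usa', 'chn'], 'population': [3, 14]})

merged = pd.merge(pd.merge(income, lifespan), population)  # joins on shared 'geo'
print(list(merged.columns))  # ['geo', 'income', 'lifespan', 'population']
```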
# + id="8szno1bgLtW2" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="1132fced-a9d3-49b9-eacd-e9c8f9936cda"
#Let's look at our merged DataFrame.
three_df.shape
# + id="NXaoXr2-L1bI" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 199} outputId="cd071036-ea24-491e-e48b-f625bca6040c"
three_df.head()
# + id="roMXRfkzNN-h" colab_type="code" colab={}
#We still need the features: regions and nice, full name of the country.
#Both of these features are in the entities_df.
entities_df = entities_df[['country', 'name', 'world_6region']]
# + id="Ul-kjj5UcVI3" colab_type="code" colab={}
#Notice however that in order to merge this DataFrame with three_df, we'll need to rename the 'country'
#column in this DataFrame to 'geo'.
#Let's do that.
entities_df = entities_df.rename(columns={'country':'geo'})
# + id="D7WMJ63mVmWn" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 199} outputId="f62a0f10-dc95-48b2-fa6e-41d7a9f5305e"
#Yay, now we are set to merge!
data_df = pd.merge(three_df, entities_df)
data_df.head()
# + id="FGDjFa-7X_u2" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="a61b79cb-67be-427e-a376-86f223fbe262"
data_df.shape
# + id="NxHGBguTmF9t" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 199} outputId="a27a2e2f-919d-4b6b-f603-a721ed6a68d6"
#The geo column has served its purpose. We were able to merge our two DataFrames, however, this
#column does very little for us moving forward. We're going to go ahead and drop it.
data_df = data_df.drop(columns=['geo'])
data_df.head()
# + id="0sFSNWqFm6PM" colab_type="code" colab={}
#At this time, we'll also rename the columns of this DataFrame for ease of use moving forward.
data_df = data_df.rename(columns={'time':'year', 'income_per_person_gdppercapita_ppp_inflation_adjusted':'income',
'life_expectancy_years':'lifespan', 'population_total':'population',
'name':'country', 'world_6region':'region'})
# + id="WraMPqyMp8wm" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 199} outputId="2e3bb1ce-57ca-45c2-d689-38800145b0d0"
data_df.head()
# + id="Nto44h1LqANA" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="482328e9-5cf2-4917-a0d0-b22c7811ad3f"
#Given that we have a lot of data in this DataFrame, it's hard to think that we'd be able to summarize it into just one visualization.
data_df.shape
# + id="rOeNIU3zwD59" colab_type="code" colab={}
#If we wanted to replace _ in the region column:
data_df.region = data_df.region.str.replace('_', ' ')
# + id="Tmmt-JNaxE5S" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 199} outputId="a698ef36-b9f9-4b71-aae5-e031044a3965"
data_df.head()
# + [markdown] colab_type="text" id="4OdEr5IFVdF5"
# ## Explore data
# + colab_type="code" id="4IzXea0T64x4" colab={"base_uri": "https://localhost:8080/", "height": 290} outputId="9d9ac766-18d1-4aa9-963c-69e0910aeebe"
data_df.describe()
# + id="XSXg2iSKRp_H" colab_type="code" colab={}
#Take a look at the mean and the median (50%), specifically for the income column. What do you notice?
#There is quite a significant difference between the two: the mean is ~4619 while the median is 1442.
#What does this imply? Skewness. Because the mean (which is disproportionately affected by outliers)
#is larger than the median, the distribution is skewed to the right.
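As a quick sanity check (a hypothetical illustration using synthetic data, not the Gapminder income column): a right-skewed sample drawn from a lognormal distribution reproduces the mean > median pattern described above.

```python
import numpy as np

# Synthetic right-skewed sample (lognormal); not the Gapminder data.
rng = np.random.default_rng(42)
sample = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)

# For a right-skewed distribution the mean sits above the median.
print(np.mean(sample) > np.median(sample))  # True
```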
# + id="_CiM5V-LX9sk" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 105} outputId="f8af00f1-19ae-43fe-96d1-89de170a77e9"
#Let's get measures of the skewness for our columns.
#We notice population also has significant skew. This is because country populations differ
#enormously (e.g. large countries like China and India compared to much smaller ones).
data_df.skew()
# + id="yKIjpIpziHI4" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1925} outputId="90de2dec-fc98-49c3-a1af-7cc21b168513"
#If we just wanted to look at just the year 2018 of all countries...
data_df[(data_df['year'] == 2018)]
# + id="wfqCJvdfjEVN" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 199} outputId="e3d5582c-d8dd-4883-b008-9f72333ab5a5"
#If we wanted to filter this DataFrame to only three years (1818, 1918 and 2018) and for the United States...
#First, let's filter the DataFrame for just the United States.
usa_df = data_df[data_df['country'] == 'United States']
#Check that we accomplished that. Looks good!
usa_df.head()
# + id="mPCbKigDlIGm" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 139} outputId="e2dca5cd-24c6-41bd-cf1a-311f1d056ac2"
#Now, if we just want the years 1818, 1918 and 2018...
usa_df = usa_df[usa_df['year'].isin([1818, 1918, 2018])]
#Again, let's check. We did it!
usa_df.head()
# + [markdown] colab_type="text" id="hecscpimY6Oz"
# ## Plot visualization
# + [markdown] colab_type="text" id="8OFxenCdhocj"
# ## Analyze outliers
# + id="FpTme7L8nLMd" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 294} outputId="ad4be79b-4698-4480-8f5a-4833bf638919"
#Let's make a visualization for just 2018.
current_df = data_df[data_df['year'] == 2018].copy()  #.copy() avoids a SettingWithCopyWarning when we add a column later
current_df.hist();
#Notice that we can observe skewness in these plots, specifically for income, lifespan and population.
# + id="ldC_H8u1ow2K" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 795} outputId="0e29192e-ec93-45f9-df44-2c9157939cec"
#Creating a visualization using the scatter matrix (similar to seaborn pairplot.)
pd.plotting.scatter_matrix(current_df);
# + [markdown] id="7xCIEFDGpfg8" colab_type="text"
# We observe that income in relation to lifespan does not have a linear relationship.
# + id="MKKahTMKb-HC" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 369} outputId="21a85f28-e58b-4acd-9e0e-28a43f8e32aa"
#Using the relplot in seaborn...
#Very useful plot, particularly when considering multiple dimensions!
import seaborn as sns
#We're assigning values to the parameters based on the example visualization from Gapminder.
sns.relplot(x='income', y='lifespan', hue='region', size='population', data=current_df);
# + id="cbo2vaJasMB3" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 122} outputId="3d2b6add-1319-4d00-a59e-b9249aea6403"
#Taking the log of income will even things out a little for our distribution.
#Creating a column for the log of income in our DataFrame.
current_df['log of income'] = np.log(current_df['income'])
# + id="t3msodL_tglD" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 269} outputId="37ea85cb-dc90-4d62-8519-445748e14e6e"
#To illustrate the difference in taking the log...
current_df.income.hist();
# + id="DiJY-v53tq7_" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 269} outputId="bcb56753-b609-4a77-dec4-d46dfd7cc53e"
current_df['log of income'].hist();
#We see that taking the log of income normalizes the distribution.
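To quantify that effect, here is a small synthetic check (assumed lognormal data, not the Gapminder income column): the sample skewness, computed as the third standardized moment, is large before the log transform and near zero after.

```python
import numpy as np

def sample_skew(x):
    # Third standardized moment: mean(((x - mean) / std)^3)
    x = np.asarray(x, dtype=float)
    return np.mean(((x - x.mean()) / x.std()) ** 3)

rng = np.random.default_rng(0)
income = rng.lognormal(mean=9.0, sigma=1.2, size=5_000)  # synthetic, right-skewed

print(sample_skew(income))          # strongly positive
print(sample_skew(np.log(income)))  # near zero: the log undoes the exponentiation
```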
# + id="cy7XJC5-0LI7" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 369} outputId="6e555688-6533-43b6-98a2-6d9b72863bfc"
#If we were to change our seaborn plot using the log of income instead, this is what our visualization would look like...
sns.relplot(x='log of income', y='lifespan', hue='region', size='population', data=current_df);
#This visualization displays a much more linear relationship. Income tends to follow an exponential distribution,
#so undoing that exponentiation with a log gives it a much more linear shape.
# + id="LxN-vDb_17H-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 290} outputId="177a9479-3ca6-4c9d-9464-a08a6ce0f04b"
#How to point out a specific country and annotate it...
#For example, which countries' log of income is greater than 11?
current_df[current_df['log of income'] > 11].sort_values(by='log of income')
# + id="NzTZ4xA73fXc" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 78} outputId="bf8b3feb-7d9b-4d3e-fb8b-b8d78046a4c3"
#Qatar appears to be the wealthiest. Let's annotate it.
#To do that, we need to figure out where it's located.
#Grab Qatar from the DataFrame.
qatar = current_df[current_df['country'] == 'Qatar']
qatar
# + id="yfHDnY3T3ulj" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="646e0a31-0e9b-4460-b4e8-72af35a4e245"
#Finding the coordinates to put our annotation for Qatar...
#Reminder that our x-axis is log of income and our y-axis is lifespan.
#So, we need Qatar's location for log of income and lifespan.
qatar_log_income = qatar['log of income'].iloc[0]
qatar_lifespan = qatar['lifespan'].iloc[0]
print(qatar_log_income, qatar_lifespan)
# + id="Y5SDw1uG7xWT" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 382} outputId="0f065a02-12d7-4a41-e4cd-dc4b25570768"
sns.relplot(x='log of income', y='lifespan', hue='region', size='population', data=current_df)
plt.text(x=qatar_log_income + .1, y=qatar_lifespan, s='Qatar')
plt.ylabel('Lifespan')
plt.xlabel('Income(log)')
plt.title('Qatar Has the Highest Income per Person in 2018');
# + [markdown] colab_type="text" id="DNTMMBkVhrGk"
# ## Plot multiple years
# + colab_type="code" id="JkTUmYGF7BQt" colab={}
#If we wanted to generate plots for our initial DataFrame for the three years we pulled data for above...
#Reminder that those years were 1818, 1918 and 2018.
#Let's grab our main DataFrame again!
data_df = data_df[data_df['year'].isin([1818, 1918, 2018])].copy()  #.copy() avoids a SettingWithCopyWarning when we add a column later
# + id="oVBLwKZr-S9g" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1925} outputId="d1e6105d-7f16-4de8-c2a3-97b42b8b4317"
#And if we want to use the log of income as our x-axis, we need to create a new column for it in this DataFrame.
data_df['log of income'] = np.log(data_df['income'])
#Let's check that we created a new column in our DataFrame. Yep, there it is!
data_df
# + id="Fl3cV6n--SF3" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 389} outputId="fe1eedad-cef0-48fd-d4bf-9c33935108be"
#Now for the visualization...
#Notice that we added the parameter 'col', so that we would generate plots for each separate year.
sns.relplot(x='log of income', y='lifespan', hue='region', size='population', col='year', data=data_df);
# + [markdown] colab_type="text" id="BB1Ki0v6hxCA"
# ## Point out a story
# + colab_type="code" id="eSgZhD3v7HIe" colab={}
# + [markdown] id="_C5e0NUfYyeC" colab_type="text"
# # ASSIGNMENT
# Replicate the lesson code.
#
# # STRETCH OPTIONS
#
# ## 1. Animate!
# - [Making animations work in Google Colaboratory](https://medium.com/lambda-school-machine-learning/making-animations-work-in-google-colaboratory-new-home-for-ml-prototyping-c6147186ae75)
# - [How to Create Animated Graphs in Python](https://towardsdatascience.com/how-to-create-animated-graphs-in-python-bb619cc2dec1)
# - [The Ultimate Day of Chicago Bikeshare](https://chrisluedtke.github.io/divvy-data.html) (Lambda School Data Science student)
#
# ## 2. Work on anything related to your portfolio site/project!
| module4-sequence-your-narrative/LS_DS_124_Sequence_your_narrative.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
def foo():
return 'bar'
print(foo)
print(foo())
# +
import time
def timer(func):
def wrap(sleep_time):
t_start = time.time()
func(sleep_time)
t_end = time.time()
t_count = t_end - t_start
        print('[elapsed time]', t_count)
return wrap
# +
def dosomething(sleep_time):
print('do some thing')
time.sleep(sleep_time)
foo = timer(dosomething)
foo(3)
# +
@timer
def dosomething(sleep_time):
print('do some thing')
time.sleep(sleep_time)
dosomething(3)
# -
@timer
def dosomething(sleep_time):
print('do some thing')
print(dosomething.__name__)
# +
from functools import wraps
def timer(func):
@wraps(func)
def wrap():
t_start = time.time()
func()
t_end = time.time()
t_count = t_end - t_start
        print('[elapsed time]', t_count)
return wrap
@timer
def dosomething():
print('do some thing')
print(dosomething.__name__)
# -
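The `wrap()` in the cell above takes no parameters, so the decorated function can neither accept arguments nor return a value. A more general sketch (an extension of the lesson code, not part of it) forwards arbitrary arguments with `*args`/`**kwargs` and preserves the return value:

```python
import time
from functools import wraps

def timer(func):
    @wraps(func)  # keep func's __name__ and docstring
    def wrap(*args, **kwargs):
        # *args/**kwargs forward any call signature; the result is
        # returned rather than swallowed, unlike the earlier wrap().
        t_start = time.time()
        result = func(*args, **kwargs)
        print('[elapsed time]', time.time() - t_start)
        return result
    return wrap

@timer
def add(a, b):
    return a + b

result = add(1, 2)   # prints the '[elapsed time]' line
print(result)        # 3
print(add.__name__)  # 'add', thanks to @wraps
```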
import os
import xlwings as xw
os.chdir(r'O:\結構型商品\ELN每週報價\報價-新版 backup')
app = xw.App(visible=True)
app.calculate()
wb = xw.books.open('標的波動率.xlsm')
wb.macro('CM全部更新').run()
wb.save()
wb.close()
app = xw.App(visible=True)
app.calculate()
wb = xw.books.open('2.ELN報價_請複製這個檔案-最新版20140121.xls')
ws=wb.sheets('本季可發行標的清單')
ws.index
wb.api.RefreshAll()
import requests
from bs4 import BeautifulSoup
search="美女"
url='https://www.google.com/search?q='+search+'&tbm=isch&ved=2ahUKEwj3yfDXnaHpAhUBS5QKHRJHCeIQ2-cCegQIABAA&oq='+search+'&gs_lcp=CgNpbWcQAzICCAAyAggAMgIIADICCAAyAggAMgIIADICCAAyAggAMgIIADICCAA6BwgjEOoCECc6BAgAEBg6BggAEAUQHlD7B1j_N2DrPGgCcAB4AIABM4gBnQKSAQE3mAEAoAEBqgELZ3dzLXdpei1pbWewAQo&sclient=img&ei=FrmzXvfmMIGW0QSSjqWQDg&bih=937&biw=1920&hl=zh-TW'
url
# ? requests.get
| Untitled.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# %load_ext autoreload
# %autoreload 2
# +
import logging
logging.basicConfig(format="%(asctime)s [%(process)d] %(levelname)-8s "
"%(name)s,%(lineno)s\t%(message)s")
logging.getLogger().setLevel('INFO')
# +
# %matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from tqdm import tqdm_notebook as tqdm
# -
# Read information to connect to the database and put it in environment variables
import os
with open('ENVVARS.txt') as f:
for line in f:
parts = line.split('=')
if len(parts) == 2:
os.environ[parts[0]] = parts[1].strip()
db_name = 'ticclat'
# db_name = 'ticclat_test'
os.environ['dbname'] = db_name
# +
from ticclat.ticclat_schema import Lexicon, Wordform, Anahash, Document, Corpus, WordformLink, WordformLinkSource, lexical_source_wordform
from ticclat.dbutils import get_session, session_scope
Session = get_session(os.environ['user'], os.environ['password'], os.environ['dbname'])
# +
# %%time
# select wordforms that occur in at least 2 lexica
from sqlalchemy import select
from sqlalchemy import text
from sqlalchemy.sql import func
with session_scope(Session) as session:
subq = select([Wordform, func.count('lexicon_id').label('num_lexicons')]).select_from(lexical_source_wordform.join(Wordform)) \
.group_by(Wordform.wordform_id)
q = select(['*']).select_from(subq.alias()).where(text('num_lexicons >= 2'))
print(q)
r = session.execute(q) #.filter(subq.c.num_lexica > 1)
for row in r.fetchall():
print(row)
print(row['wordform'], row['num_lexicons'])
print()
# +
# select wordforms that occur in at least 2 lexica that are vocabularies (so only contain correct wordforms)
from sqlalchemy import select
from sqlalchemy import text
from sqlalchemy.sql import func
with session_scope(Session) as session:
subq = select([Wordform, func.count('lexicon_id').label('num_lexicons')]).select_from(lexical_source_wordform.join(Wordform).join(Lexicon)) \
.where(Lexicon.vocabulary == True).group_by(Wordform.wordform_id)
    q = select(['*']).select_from(subq.alias()).where(text('num_lexicons > 1'))
print(q)
r = session.execute(q) #.filter(subq.c.num_lexica > 1)
for row in r.fetchall():
print(row)
print(row['wordform'], row['num_lexicons'])
print()
# +
# %%time
# Get all the wordforms in a corpus
from sqlalchemy import select
from ticclat.ticclat_schema import Lexicon, Wordform, Anahash, Document, Corpus, \
WordformLink, WordformLinkSource, lexical_source_wordform, corpusId_x_documentId, \
TextAttestation
with session_scope(Session) as session:
q = select([Wordform.wordform_id,Wordform.wordform, Corpus.name]).select_from(
Corpus.__table__.join(corpusId_x_documentId).join(Document).join(TextAttestation).join(Wordform)
).distinct()
r = session.execute(q)
for wf in r:
print(wf)
# +
# %%time
# count the unique wordforms
from sqlalchemy import select
from sqlalchemy.sql import func, distinct
from ticclat.ticclat_schema import Lexicon, Wordform, Anahash, Document, Corpus, \
WordformLink, WordformLinkSource, lexical_source_wordform, corpusId_x_documentId, \
TextAttestation
with session_scope(Session) as session:
q = select([func.count(distinct(Wordform.wordform_id)), Corpus.name, func.sum(TextAttestation.frequency).label('total_wordforms')]).select_from(
Corpus.__table__.join(corpusId_x_documentId).join(Document).join(TextAttestation).join(Wordform)
).where(Corpus.name == 'corpus1')
print(q)
r = session.execute(q)
for wf in r:
print(wf)
# +
# %%time
# count the unique wordforms in each corpus
from sqlalchemy import select
from sqlalchemy.sql import func, distinct
from ticclat.ticclat_schema import Lexicon, Wordform, Anahash, Document, Corpus, \
WordformLink, WordformLinkSource, lexical_source_wordform, corpusId_x_documentId, \
TextAttestation
with session_scope(Session) as session:
    q = select([Corpus.name, func.count(distinct(Wordform.wordform_id))]).select_from(
        Corpus.__table__.join(corpusId_x_documentId).join(Document).join(TextAttestation).join(Wordform)
        ).group_by(Corpus.name)
print(q)
r = session.execute(q)
for wf in r:
print(wf)
# -
# +
# get all the wordforms in a lexicon
name = 'l2'
with session_scope(Session) as session:
q = select([Wordform.wordform]).select_from(
Lexicon.__table__.join(lexical_source_wordform).join(Wordform)
).where(Lexicon.lexicon_name == name)
r = session.execute(q)
for wf in r:
print(wf)
# +
# select wordforms that occur in a lexicon and corpus
from sqlalchemy.sql import intersect, and_
# MySQL does not support INTERSECT, so we emulate it with joins
with session_scope(Session) as session:
x = Wordform.__table__.alias('x')
name1 = 'l2'
name2 = 'corpus1'
q1 = select([Wordform]).select_from(
Wordform.__table__.join(lexical_source_wordform).join(Lexicon).join(TextAttestation, TextAttestation.wordform_id==Wordform.wordform_id).join(Document).join(corpusId_x_documentId).join(Corpus)
).where(and_(Lexicon.lexicon_name == name1, Corpus.name == name2)).distinct()
print(q1)
#y = Wordform.__table__.alias('y')
#name = 'corpus1'
#q2 = select([y]).select_from(
# Corpus.__table__.join(corpusId_x_documentId).join(Document).join(TextAttestation).join(Wordform)
# ).where(Corpus.name == name).distinct()
#print(q1.join(TextAttestation, x.c.wordform_id == TextAttestation.wordform_id).join(Document).join(corpusId_x_documentId).join(Corpus))
r = session.execute(q1).fetchall()
print(r)
#r = session.execute(q2).fetchall()
#print(r)
#r = session.execute(intersect(q1, q2)).fetchall()
#print(r)
# +
# %%time
# (aantal) wordforms per document in bepaald corpus
from sqlalchemy import select
from sqlalchemy.sql import func, distinct
from ticclat.ticclat_schema import Lexicon, Wordform, Anahash, Document, Corpus, \
WordformLink, WordformLinkSource, lexical_source_wordform, corpusId_x_documentId, \
TextAttestation
corpus_name = 'corpus1'
with session_scope(Session) as session:
q = select([Wordform, Document.title]) \
.select_from(
Corpus.__table__.join(corpusId_x_documentId).join(Document)
.join(TextAttestation).join(Wordform)
).where(Corpus.name == corpus_name).group_by(Document.title, Wordform.wordform_id)
r = session.execute(q).fetchall()
print(r)
# +
# %%time
# (aantal) wordforms per document in bepaald corpus
from sqlalchemy import select
from sqlalchemy.sql import func, distinct
from ticclat.ticclat_schema import Lexicon, Wordform, Anahash, Document, Corpus, \
WordformLink, WordformLinkSource, lexical_source_wordform, corpusId_x_documentId, \
TextAttestation
corpus_name = 'SoNaR-500'
with session_scope(Session) as session:
q = select([Document.title, func.count(distinct(Wordform.wordform_id)).label('tot_freq')]) \
.select_from(
Corpus.__table__.join(corpusId_x_documentId).join(Document)
.join(TextAttestation).join(Wordform)
).where(Corpus.name == corpus_name).group_by(Document.title)
print(q)
wf_doc = pd.read_sql(q, session.bind)
#r = session.execute(q).fetchall()
#print(r)
print(wf_doc)
# -
wf_doc = wf_doc.set_index('title')
wf_doc
# +
# %%time
# (aantal) wordforms per document in bepaald corpus en bepaald lexicon
from sqlalchemy import select
from sqlalchemy.sql import func, distinct, and_
from ticclat.ticclat_schema import Lexicon, Wordform, Anahash, Document, Corpus, \
WordformLink, WordformLinkSource, lexical_source_wordform, corpusId_x_documentId, \
TextAttestation
corpus_name = 'SoNaR-500'
lexicon_name = 'GB95-05_002.csv.alltokens.utf8.nopunct'
with session_scope(Session) as session:
q = select([Document.title, func.count(distinct(Wordform.wordform_id)).label('lexicon_freq')]) \
.select_from(
Corpus.__table__.join(corpusId_x_documentId).join(Document)
.join(TextAttestation).join(Wordform).join(lexical_source_wordform).join(Lexicon)
).where(and_(Corpus.name == corpus_name, Lexicon.lexicon_name == lexicon_name)).group_by(Document.title)
print(q)
wf_l_doc = pd.read_sql(q, session.bind)
#r = session.execute(q).fetchall()
#print(r)
print(wf_l_doc)
# -
wf_l_doc = wf_l_doc.set_index('title')
wf_l_doc
data = pd.concat([wf_doc, wf_l_doc], axis=1)
data
data['%_lexicon_wordforms'] = data['lexicon_freq']/data['tot_freq']*100
data
# +
from tabulate import tabulate
print(tabulate(data, headers=['text_type', '#wordforms', '#wordforms in GB1995/2005', '%overlap'], tablefmt="github"))
# +
# %%time
# lijst met lexicons en aantal woordvormen per lexicon
from sqlalchemy import select
from sqlalchemy.sql import func, distinct, and_
from ticclat.ticclat_schema import Lexicon, Wordform, Anahash, Document, Corpus, \
WordformLink, WordformLinkSource, lexical_source_wordform, corpusId_x_documentId, \
TextAttestation
with session_scope(Session) as session:
q = select([Lexicon.lexicon_name, func.count(distinct(Wordform.wordform_id)).label('num_wordforms')]) \
.select_from(
Wordform.__table__.join(lexical_source_wordform).join(Lexicon)
).group_by(Lexicon.lexicon_name)
print(q)
r = session.execute(q).fetchall()
print(r)
# +
# %%time
# anahashes with number of wordforms
from sqlalchemy import select
from sqlalchemy import text
from sqlalchemy.sql import func, desc
with session_scope(Session) as session:
subq = select([Anahash, func.count('wordform_id').label('num_wf')]).select_from(Anahash.__table__.join(Wordform)) \
.group_by(Anahash.anahash_id)
q = select(['*']).select_from(subq.alias()).where(text('num_wf > 1')).order_by(desc('num_wf'))
print(q)
r = session.execute(q) #.filter(subq.c.num_lexica > 1)
for row in r.fetchall():
print(row)
break
# +
# %%time
cf_file = '/Users/jvdzwaan/data/ticclat/ticcl/nld.aspell.dict.c20.d2.confusion'
cfs = []
with open(cf_file) as f:
for line in f:
cf, _ = line.split('#')
cfs.append(int(cf))
print(len(cfs))
# +
# %%time
# gegeven een woord, geef alle woorden in de db die 'dichtbij' zijn (1 character confusion verschil)
from sqlalchemy import select
from sqlalchemy import text
from sqlalchemy.sql import func, desc
word = 'koelkast'
with session_scope(Session) as session:
q = select([Wordform, Anahash]).select_from(Wordform.__table__.join(Anahash)).where(Wordform.wordform == word)
r = session.execute(q)
wf = r.fetchone()
anahash = wf['anahash']
print(wf, anahash)
# +
# %%time
from ticclat.ticclat_schema import Lexicon, Wordform, Anahash, Document, Corpus, \
WordformLink, WordformLinkSource, lexical_source_wordform, corpusId_x_documentId, \
TextAttestation
results = []
with session_scope(Session) as session:
for v in cfs:
av = anahash + v
q = select([Wordform, Anahash]).select_from(Wordform.__table__.join(Anahash)).where(Anahash.anahash == av)
for row in session.execute(q).fetchall():
results.append(row)
# -
print(len(results))
# +
# %%time
from ticclat.ticclat_schema import Lexicon, Wordform, Anahash, Document, Corpus, \
WordformLink, WordformLinkSource, lexical_source_wordform, corpusId_x_documentId, \
TextAttestation
results = []
with session_scope(Session) as session:
for v in cfs:
av = anahash + v
q = select([Wordform, Anahash]).select_from(Wordform.__table__.join(Anahash)).where(Anahash.anahash == av)
for row in session.execute(q).fetchall():
results.append(row)
# +
# # %%time
# do the same as above, but now with just the maximum confusion value and a range query
from ticclat.ticclat_schema import Lexicon, Wordform, Anahash, Document, Corpus, \
WordformLink, WordformLinkSource, lexical_source_wordform, corpusId_x_documentId, \
TextAttestation
from sqlalchemy import and_
results = []
with session_scope(Session) as session:
av_max = anahash + max(cfs)
av_min = anahash - max(cfs)
q = select([Wordform, Anahash]).select_from(Wordform.__table__.join(Anahash))\
.where(and_(Anahash.anahash >= av_min, Anahash.anahash <= av_max))
for row in session.execute(q).fetchall():
results.append(row)
# -
print(len(results))
print(results[343])
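Note that the single range query is an over-approximation: BETWEEN returns every anahash in `[anahash - max(cfs), anahash + max(cfs)]`, not only those reachable by an actual confusion value. A Python post-filter restores exactness; the sketch below uses made-up anahash values rather than database rows:

```python
# Hypothetical confusion values and query-word anahash (illustrative only).
cfs = [3, 7, 12]
anahash = 100

# Pretend these came back from the range query: (anahash, wordform) pairs.
rows = [(101, 'a'), (103, 'b'), (107, 'c'), (110, 'd'), (112, 'e')]

# Keep only rows whose anahash equals the query anahash plus a confusion value.
allowed = {anahash + v for v in cfs}
exact = [wf for ah, wf in rows if ah in allowed]
print(exact)  # ['b', 'c', 'e']
```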
# +
# %%time
# Given a wordform, give word frequencies per year (term frequency and document frequency)
# Seems useful to optionally select a corpus or corpora
from ticclat.ticclat_schema import TextAttestation, corpusId_x_documentId
from sqlalchemy import select
from sqlalchemy import text
from sqlalchemy.sql import func, desc, and_
word = 'wf2'
corpus_name = 'corpus1'
with session_scope(Session) as session:
q = select([Wordform.wordform_id, Wordform.wordform, Document.pub_year, func.count(Document.document_id).label('document_frequency'), func.sum(TextAttestation.frequency).label('term_frequency')]).select_from(
Corpus.__table__.join(corpusId_x_documentId, Corpus.corpus_id == corpusId_x_documentId.c.corpus_id).join(Document, Document.document_id == corpusId_x_documentId.c.document_id).join(TextAttestation).join(Wordform)
).where(and_(Wordform.wordform == word, Corpus.name == corpus_name)).group_by(Document.pub_year, Wordform.wordform, Wordform.wordform_id)
#q = select(['wordform', 'name', func.sum('frequency').label('freq')]).select_from(subq.alias()).group_by('name')
print(q)
r = session.execute(q)
for row in r.fetchall():
#print(row['name'], row['corpus_frequency'])
print(row)
# +
# %%time
# Given a wordform, in what corpora does it occur, with what frequencies (term frequency and document frequency)
from sqlalchemy import select
from sqlalchemy import text
from sqlalchemy.sql import func, desc
word = 'wf2'
with session_scope(Session) as session:
q = select([Wordform.wordform_id,Wordform.wordform, Corpus.name, func.count(Document.document_id).label('document_frequency'), func.sum(TextAttestation.frequency).label('term_frequency')]).select_from(
Corpus.__table__.join(corpusId_x_documentId, Corpus.corpus_id == corpusId_x_documentId.c.corpus_id).join(Document, Document.document_id == corpusId_x_documentId.c.document_id).join(TextAttestation).join(Wordform)
).where(Wordform.wordform == word).group_by(Corpus.name, Wordform.wordform, Wordform.wordform_id)
print(q)
r = session.execute(q)
for row in r.fetchall():
print(row)
# +
# get first and last year (overall) from database
from sqlalchemy import select
from sqlalchemy import text
from sqlalchemy.sql import func, desc
with session_scope(Session) as session:
q = select([func.min(Document.pub_year).label('min_year'), func.max(Document.pub_year).label('max_year')]).select_from(Document)
r = session.execute(q)
print(r.fetchone())
# +
# %time
from sqlalchemy import select
from sqlalchemy import text
from sqlalchemy.sql import func, desc, and_
from ticclat.ticclat_schema import Lexicon, Wordform, Anahash, Document, Corpus, \
WordformLink, WordformLinkSource, lexical_source_wordform, corpusId_x_documentId, \
TextAttestation
wf = 'nieuws'
start_year = 1850
end_year = 1900
with session_scope(Session) as session:
q = (
select(
[
Corpus.name,
Document.pub_year,
func.count(Document.document_id).label("document_frequency"),
func.sum(TextAttestation.frequency).label("term_frequency"),
func.sum(Document.word_count).label("num_words"),
]
)
.select_from(
Corpus.__table__.join(
corpusId_x_documentId,
Corpus.corpus_id == corpusId_x_documentId.c.corpus_id,
)
.join(Document, Document.document_id == corpusId_x_documentId.c.document_id)
.join(TextAttestation)
.join(Wordform)
)
.where(and_(Wordform.wordform == wf, Document.pub_year >= start_year, Document.pub_year <= end_year))
.group_by(
Corpus.name, Document.pub_year, Wordform.wordform, Wordform.wordform_id
)
.order_by(Document.pub_year)
)
df = pd.read_sql(q, session.connection())
df
# +
# %%time
# select wordforms + total_frequency that occur in corpora
from sqlalchemy import select
from sqlalchemy import text
from sqlalchemy.sql import func, desc, and_
from ticclat.ticclat_schema import Lexicon, Wordform, Anahash, Document, Corpus, \
WordformLink, WordformLinkSource, lexical_source_wordform, corpusId_x_documentId, \
TextAttestation
wf = 'nieuws'
with session_scope(Session) as session:
q = select([Wordform, func.sum(TextAttestation.frequency).label('freq')]).select_from(Wordform.__table__.join(TextAttestation)).group_by(Wordform.wordform_id)
df = pd.read_sql(q, session.connection())
df.head()
# +
# %%time
# select all linked wordforms given a wordform and lexicon_id
# the results includes whether the linked wordforms are correct
from sqlalchemy import select
from sqlalchemy import text
from sqlalchemy.sql import func, desc, and_, alias
from ticclat.ticclat_schema import Lexicon, Wordform, Anahash, Document, Corpus, \
WordformLink, WordformLinkSource, lexical_source_wordform, corpusId_x_documentId, \
TextAttestation
wf = 'Amsterdam'
lexicon_id = 20
with session_scope(Session) as session:
wf_to = alias(Wordform)
q = select([Wordform.wordform.label('wordform_from'),
WordformLinkSource.wordform_from_correct,
wf_to.c.wordform.label('wordform_to'),
WordformLinkSource.wordform_to_correct,
WordformLinkSource.ld]).select_from(
WordformLink.__table__.join(WordformLinkSource)
.join(Wordform, onclause=WordformLink.wordform_from==Wordform.wordform_id) \
.join(wf_to, onclause=WordformLink.wordform_to==wf_to.c.wordform_id)) \
.where(and_(Wordform.wordform==wf, WordformLinkSource.lexicon_id == lexicon_id)) \
.order_by(WordformLinkSource.ld, 'wordform_to')
df = pd.read_sql(q, session.get_bind())
print(df.head())
# +
# %%time
# select all linked wordforms given a wordform, lexicon_id, and corpus_id
# the results includes whether the linked wordforms are correct
# and the corpus frequencies of the wordforms
from sqlalchemy import select
from sqlalchemy import text
from sqlalchemy.sql import func, desc, and_, alias
from ticclat.ticclat_schema import Lexicon, Wordform, Anahash, Document, Corpus, \
WordformLink, WordformLinkSource, lexical_source_wordform, corpusId_x_documentId, \
TextAttestation
wf = 'Amsterdam'
lexicon_id = 20
corpus_id = 2
with session_scope(Session) as session:
wf_to = alias(Wordform)
q = select([Wordform.wordform.label('wordform_from'),
WordformLinkSource.wordform_from_correct,
wf_to.c.wordform.label('wordform_to'),
WordformLinkSource.wordform_to_correct,
WordformLinkSource.ld,
func.sum(TextAttestation.frequency).label('freq_in_corpus')]) \
.select_from(WordformLink.__table__.join(WordformLinkSource)
.join(Wordform, onclause=WordformLink.wordform_from==Wordform.wordform_id) \
.join(wf_to, onclause=WordformLink.wordform_to==wf_to.c.wordform_id) \
.join(TextAttestation, onclause=wf_to.c.wordform_id == TextAttestation.wordform_id) \
.join(Document) \
.join(corpusId_x_documentId) \
.join(Corpus)) \
.where(and_(Wordform.wordform==wf, WordformLinkSource.lexicon_id == lexicon_id, Corpus.corpus_id == corpus_id)) \
.group_by('wordform_to', WordformLinkSource.wordform_from_correct, WordformLinkSource.wordform_to_correct, WordformLinkSource.ld) \
.order_by(WordformLinkSource.ld, 'wordform_to')
df = pd.read_sql(q, session.get_bind())
print(df.head())
| notebooks/queries.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # World Wide Products Inc.
# __Problem Statement__: Build the forecasting models to determine the demand of a particular product
# __Dataset__: Data set contains the product demands for encoded products
# <br>
# Source: https://www.kaggle.com/felixzhao/productdemandforecasting
# __Reference__: https://pythondata.com/forecasting-time-series-data-with-prophet-part-1/
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
import warnings
from fbprophet import Prophet
import numpy as np
warnings.filterwarnings('ignore') # specify to ignore warning messages
# ## Data Extraction and Feature Engineering
demand = pd.read_csv('../data/external/Historical Product Demand.csv', low_memory=False)
list(demand)
demand.head()
# Check if there are any columns with null values
demand.isna().any()
# Drop NAs
demand = demand.dropna(how='any',axis=0)
demand.info()
demand.head()
demand.Product_Code.unique()
# +
## Which product has maximum demand?
# -
demand.groupby("Product_Code").sum().sort_values("Order_Demand", ascending=False).head(1)
# +
## Which product has least demand?
# -
demand.groupby("Product_Code").sum().sort_values("Order_Demand", ascending=False).tail(1)
# +
## Which warehouse has maximum demand?
# -
demand.groupby("Warehouse").sum().sort_values("Order_Demand", ascending=False).head(5)
demand.groupby("Product_Category").sum().sort_values("Order_Demand", ascending=False).head(5)
# +
## Picking the product with the most order demand
# -
product = demand.loc[demand['Product_Code'] == 'Product_1359']
product['Date'].min(), product['Date'].max()
product.head(5)
product.info()
product = product.drop(columns=['Warehouse','Product_Category','Product_Code'])
product.head(5)
product.isnull().sum()
productnew = product.copy()
productnew.info()
productnew.head()
productnew = productnew.rename(columns={'Date': 'ds', 'Order_Demand': 'y'})
productnew.head(5)
# ## Time Series using FBProphet
model = Prophet() #instantiate Prophet
model.fit(productnew); #fit the model with your dataframe
future_data = model.make_future_dataframe(periods=365)
forecast_data = model.predict(future_data)
forecast_data[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail()
model.plot(forecast_data)
model.plot_components(forecast_data)
# __Conclusion__: From the plots, it can be observed that the demand trend over the next year keeps rising for the product that currently has the highest demand. It can also be observed that although demand is fairly consistent through the week, it takes a large dip on Saturday and then a leap on Sunday. Across months, demand shows no particular consistency, with many ups and downs, though there is a pronounced dip in February.
| notebooks/WorldWide Products FBProphet.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import networkx as nx
from textblob import TextBlob
from networkx.algorithms import community
import spacy
nlp = spacy.load('en_core_web_sm')
# We would like to understand the relationships between different topics and concepts in the text below. We will combine text analytics and network analytics.
#
# - Using sentence segmentation, break the text to sentences.
# - Using named entity extraction, identify named entities in each sentence.
# - Create a network in which the nodes are named entities and the links indicate whether two named entities co-occurred in the same sentence. Add the frequency of co-occurrence as an edge property.
# - What is the most frequent relation in the data?
# - What are the top five influential nodes?
# - Plot the network, any insights?
text = open('data/const.txt', 'r').read()
print(text[:500])
blob = TextBlob(text)
sentences = blob.sentences
sentences[:3]
def get_ne(text):
doc = nlp(text)
return [e.text for e in doc.ents]
named_entities = [get_ne(sent.raw) for sent in sentences]
edges1 = [(word1, word2) for le in named_entities
for word1 in le for word2 in le
if word1 != word2
]
from collections import Counter
edges = Counter(edges1)
net = nx.Graph()
for edge in edges:
node1 = edge[0]
node2 = edge[1]
freq = edges[edge]
net.add_edge(node1, node2, freq=freq)
# +
#net.edges(data=True)
# -
nedges = net.edges(data=True)
data = pd.DataFrame(nedges)
data.columns = ['n1', 'n2', 'f']
data.head()
data['freq'] = data['f'].apply(lambda x: x['freq'])
data = data.drop('f', axis=1)
data.sort_values('freq', ascending=False).head(25)
pd.Series(nx.degree_centrality(net)).sort_values(ascending=False).head(25)
pd.Series(nx.closeness_centrality(net)).sort_values(ascending=False).head(25)
nx.draw(net, with_labels=True)
net2 = nx.Graph()
for edge in edges:
node1 = edge[0]
node2 = edge[1]
freq = edges[edge]
if freq > 6:
net2.add_edge(node1, node2, freq=freq)
nx.draw(net2, with_labels=True)
communities = community.greedy_modularity_communities(net, weight='freq')
len(communities)
communities
| 40-workout-solution_network_content_and_structure.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import os
import seaborn as sns
import pandas as pd
import os
dir_path = os.getcwd()
filename_M= os.listdir('C:/Users/lab343/Desktop/LAB/WORK/智慧電表/User_Value/M')
filename_T= os.listdir('C:/Users/lab343/Desktop/LAB/WORK/智慧電表/User_Value/T')
# %matplotlib inline
per_date = pd.date_range('2016-01-01 00:15:00','2017-01-01 00:00:00',freq='15Min')
per_month = ['2016-01','2016-02','2016-03','2016-04','2016-05','2016-06','2016-07','2016-08','2016-09','2016-10','2016-11',
'2016-12']
per_hour = pd.date_range('2016-01-01 01:00:00','2017-01-01 00:00:00',freq='1H')
per_week = pd.date_range('2016-01-01 00:15:00','2017-01-01 00:15:00',freq='7D')
per_day = pd.date_range('2016-01-01','2016-12-31',freq='1D')
week = ['Monday','Tuesday','Wednesday','Thursday','Friday','Saturday','Sunday']
contract = {'1':'低壓表燈',          # low-voltage metered lighting
            '4':'低壓綜合用電',      # low-voltage combined service
            '7':'普通低壓電力',      # ordinary low-voltage power
            'C':'低壓需量綜合',      # low-voltage demand, combined
            'D':'低壓需量電力',      # low-voltage demand, power
            'F':'低壓表燈時間電價'}  # low-voltage metered lighting, time-of-use pricing
user = {'0':'公用路燈或自來水',  # public street lighting or water supply
        '1':'軍眷',              # military dependents
        '5':'非營業用',          # non-commercial use
        '6':'營業用'}            # commercial use
# -
full_file = pd.read_csv('full_user.csv')
full_file = full_file.filename.values.tolist()
def pre_ratio(userfile):
try:
test = pd.read_csv('C:/Users/lab343/Desktop/LAB/WORK/智慧電表/hour/M/'+userfile,index_col=0)
except:
test = pd.read_csv('C:/Users/lab343/Desktop/LAB/WORK/智慧電表/hour/T/'+userfile,index_col=0)
test.index = pd.to_datetime(test.index)
print(userfile, 'has missing values:', test.isnull().values.any())
day_sum = []
j = 0
for i in range(366):
day_sum.append(test.iloc[j:j+24,1].sum())
j +=24
for i in per_day.astype(str):
test.loc[i,'day_sum'] = day_sum[per_day.astype(str).tolist().index(i)]
test.day_sum = test.day_sum.shift(1)
test.iloc[0,4] = test.iloc[1,4]
test['ratio_value'] = test.Value/test.day_sum
test.drop(['CustomerID','Value','day_sum','Week','holiday'], axis=1, inplace=True)
return test
for i in full_file:
data = pre_ratio(i)
data.to_csv('C:/Users/lab343/Desktop/LAB/WORK/智慧電表/ratiodata/'+i)
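# The normalization performed by `pre_ratio` — each hour's value divided by the *previous* day's total — can be illustrated on synthetic data (this is a sketch, not the original pipeline; the column names simply mirror the code above):

```python
import numpy as np
import pandas as pd

# Synthetic hourly consumption for two days (values 1..48).
idx = pd.date_range('2016-01-01 00:00:00', periods=48, freq='H')
toy = pd.DataFrame({'Value': np.arange(1.0, 49.0)}, index=idx)

# Daily totals, broadcast back to the hourly rows.
day_sum = toy['Value'].resample('D').sum()
toy['day_sum'] = day_sum.reindex(toy.index.normalize()).values

# Shift by 24 rows so each hour is normalized by the previous day's total.
toy['day_sum'] = toy['day_sum'].shift(24)
toy['ratio_value'] = toy['Value'] / toy['day_sum']
```

Day 1 sums to 300, so the first hour of day 2 (value 25) gets ratio 25/300; day 1 itself has no previous day and stays NaN (the original code backfills that first gap instead).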
| 2_preprocess_to_ratio.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from pandas import read_csv, DataFrame
series = read_csv('daily-minimum-temperatures.csv',
header=0, index_col=0, parse_dates=True, squeeze=True)
print(series.head())
dataframe = DataFrame()
dataframe['month'] = [series.index[i].month for i in range(series.size)]
dataframe['day'] = [series.index[i].day for i in range(series.size)]
dataframe['temperature'] = [series[i] for i in range(series.size)]
print(dataframe.head(5))
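# The per-element loops above work, but the same date-part features can be built vectorized (a sketch on synthetic data, since the CSV may not be available):

```python
import numpy as np
import pandas as pd

# Synthetic daily series standing in for the temperatures CSV.
s = pd.Series(np.arange(5.0),
              index=pd.date_range('1981-01-01', periods=5, freq='D'))

# DatetimeIndex exposes .month and .day directly, no Python loop needed.
features = pd.DataFrame({'month': s.index.month,
                         'day': s.index.day,
                         'temperature': s.values})
```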
| UnivariateFeatures/univariate_features.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
def dtw(T1, T2):
"""
This is code with running time O(|T1|*|T2|) and space O(|T2|)
It returns only the value for dtw(T1,T2)
Below is code with running time O(|T1|*|T2|) and space O(|T1|*|T2|), that can also return the monotone assignment
memo = [0] * len(T2)
memo[0] = dist(T1[0], T2[0])
for col in range(1, len(T2)):
memo[col] = memo[col - 1] + dist(T1[0], T2[col])
for row in range(1, len(T1)):
temp = memo[0]
memo[0] = memo[0] + dist(T1[row], T2[0])
for col in range(1, len(T2)):
store = memo[col]
memo[col] = dist(T1[row], T2[col]) + min(temp, memo[col-1], memo[col])
temp = store
return memo[-1]
"""
#We store in matrix[row][col][0] the minimal dtw when we assign only the first "row" points in T1
#and the first "col" points in T2.
#matrix[row][col] = dist(T1[row], T2[col]) + min(matrix[row-1][col-1], matrix[row-1][col], matrix[row][col-1])
#Note that an assignment of the first "row" points in T1 and the first "col" points in T2,
#always assigns the last point of T1 to the last point of T2.
#Note as well that the assignment of points is monotone and thus it is nondecreasing in both indices
#i.e. assignment[i][0] <= assignment[i + 1][0], and assignment[i][1] <= assignment[i + 1][1]
#That allows us to efficiently store only the differences between assignment[i + 1] - assignment[i]
#Thus we store in matrix[row][col][1] the previous tuple of points in the assignment
#E.g. if matrix[row-1][col-1] == min(matrix[row-1][col-1], matrix[row-1][col], matrix[row][col-1]),
#then we assign (row-1, col-1), and store in matrix[row][col][1] the vector (-1,-1) since (row-1, col-1) = (row, col) + (-1,-1)
#if matrix[row-1][col] == min(matrix[row-1][col-1], matrix[row-1][col], matrix[row][col-1]),
#then we assign (row-1, col), and store in matrix[row][col][1] the vector (-1,0) since (row-1, col) = (row, col) + (-1,0)
matrix = [[[] for i in range(len(T2))] for j in range(len(T1))]
matrix[0][0].append(dist(T1[0], T2[0]))
for col in range(1, len(T2)):
matrix[0][col].append(matrix[0][col - 1][0] + dist(T1[0], T2[col]))
matrix[0][col].append((0,-1))
for row in range(1, len(T1)):
matrix[row][0].append(matrix[row - 1][0][0] + dist(T1[row], T2[0]))
matrix[row][0].append((-1,0))
for row in range(1, len(T1)):
for col in range(1, len(T2)):
recursive = min(matrix[row-1][col-1][0], matrix[row-1][col][0], matrix[row][col-1][0])
matrix[row][col].append(dist(T1[row], T2[col]) + recursive)
if matrix[row-1][col-1][0] == recursive:
matrix[row][col].append((-1,-1))
elif matrix[row-1][col][0] == recursive:
matrix[row][col].append((-1,0))
else:
matrix[row][col].append((0,-1))
#The moves that we store in matrix[i][j][1] are essentially pointers to the next tuple of points in @assignment
#We traverse through those pointers from the bottom right corner (len(T1)- 1, len(T2) -1) until we reach (0,0).
row , col = len(T1) -1, len(T2) -1
assignment = [(row,col)]
while (row,col) != (0,0):
move = matrix[row][col][1]
row = row + move[0]
col = col + move[1]
assignment.append((row,col))
return matrix[-1][-1][0], assignment
def dfd(T1, T2):
matrix = [[[] for i in range(len(T2))] for j in range(len(T1))]
matrix[0][0].append(dist(T1[0], T2[0]))
for col in range(1, len(T2)):
matrix[0][col].append(max(matrix[0][col - 1][0], dist(T1[0], T2[col])))
matrix[0][col].append((0,-1))
for row in range(1, len(T1)):
matrix[row][0].append(max(matrix[row - 1][0][0], dist(T1[row], T2[0])))
matrix[row][0].append((-1,0))
for row in range(1, len(T1)):
for col in range(1, len(T2)):
recursive = min(matrix[row-1][col-1][0], matrix[row-1][col][0], matrix[row][col-1][0])
matrix[row][col].append(max(dist(T1[row], T2[col]), recursive))
if matrix[row-1][col-1][0] == recursive:
matrix[row][col].append((-1,-1))
elif matrix[row-1][col][0] == recursive:
matrix[row][col].append((-1,0))
else:
matrix[row][col].append((0,-1))
row , col = len(T1) -1, len(T2) -1
assignment = [(row,col)]
while (row,col) != (0,0):
move = matrix[row][col][1]
row = row + move[0]
col = col + move[1]
assignment.append((row,col))
return matrix[-1][-1][0], assignment
def dist(p,q):
return (p[0] - q[0]) * (p[0] - q[0]) + (p[1] - q[1]) * (p[1] - q[1])
T1 = [[0,0], [10,10]]
T2 = [[1,2], [11,13]]
dtw(T1,T2)
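# As a sanity check on the recurrences, here is a standalone value-only DP (no assignment recovery) that specializes to DTW when costs are summed and to the discrete Fréchet distance when they are combined with `max`:

```python
def dist(p, q):
    # squared Euclidean distance, matching the dist() used above
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def dp_value(T1, T2, combine):
    # m[row][col] = optimal cost over the first row+1 / col+1 points
    m = [[0.0] * len(T2) for _ in range(len(T1))]
    m[0][0] = dist(T1[0], T2[0])
    for col in range(1, len(T2)):
        m[0][col] = combine(m[0][col - 1], dist(T1[0], T2[col]))
    for row in range(1, len(T1)):
        m[row][0] = combine(m[row - 1][0], dist(T1[row], T2[0]))
    for row in range(1, len(T1)):
        for col in range(1, len(T2)):
            best = min(m[row - 1][col - 1], m[row - 1][col], m[row][col - 1])
            m[row][col] = combine(best, dist(T1[row], T2[col]))
    return m[-1][-1]

T1 = [[0, 0], [10, 10]]
T2 = [[1, 2], [11, 13]]
dtw_val = dp_value(T1, T2, lambda a, b: a + b)  # 5 + 10 = 15
dfd_val = dp_value(T1, T2, max)                 # max(5, 10) = 10
```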
| Project1 dtw and dfd.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="9aHTrpANhSoC"
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] id="WAnqNAzfcVu0"
# # Emotion prediction with GoEmotions and PRADO
#
#
# + [markdown] id="OcNoWgG7hvIs"
# <table class="tfo-notebook-buttons" align="left">
# <td>
# <a target="_blank" href="https://colab.research.google.com/github/tensorflow/models/blob/master/research/seq_flow_lite/demo/colab/emotion_colab.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
# </td>
# <td>
# <a target="_blank" href="https://github.com/tensorflow/models/blob/master/research/seq_flow_lite/demo/colab/emotion_colab.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
# </td>
# </table>
# + [markdown] id="dhekoIrWiSsv"
# In this tutorial, we will work through training a neural emotion prediction model using the tensorflow-models pip package and Bazel.
#
# This tutorial is using GoEmotions, an emotion prediction dataset, available on [TensorFlow TFDS](https://www.tensorflow.org/datasets/catalog/goemotions). We will be training a sequence projection model architecture named PRADO, available on [TensorFlow Model Garden](https://github.com/tensorflow/models/blob/master/research/seq_flow_lite/models/prado.py). Finally, we will examine an application of emotion prediction to emoji suggestions from text.
# + [markdown] id="grmac7ZYj02a"
# ## Setup
# + [markdown] id="D_mi4NZeeB1l"
# ### Install the TensorFlow Model Garden pip package
# + [markdown] id="tCk46-HdmIyD"
# `tfds-nightly` is the nightly TensorFlow Datasets package, built and released daily. We install it with pip.
# + id="mvO0_HcKx0_V"
# !pip install tfds-nightly
# + [markdown] id="p2wqyg-7mbfV"
# ### Install the Sequence Projection Models package
# + [markdown] id="1JRZS_aSeINK"
# Install Bazel: This will allow us to build custom TensorFlow ops used by the PRADO architecture.
# + id="N00X4P229Ppm"
# !sudo apt install curl gnupg
# !curl https://bazel.build/bazel-release.pub.gpg | sudo apt-key add -
# !echo "deb [arch=amd64] https://storage.googleapis.com/bazel-apt stable jdk1.8" | sudo tee /etc/apt/sources.list.d/bazel.list
# !sudo apt update
# !sudo apt install bazel
# + [markdown] id="9JeSDpZFelL5"
# Install the library:
# * `seq_flow_lite` includes the PRADO architecture and custom ops.
# * We download the code from GitHub, and then build and install the TF and TFLite ops used by the model.
#
# + id="mktlCYcd9iLG"
# !git clone https://www.github.com/tensorflow/models
# !models/research/seq_flow_lite/demo/colab/setup_workspace.sh
# !pip install models/research/seq_flow_lite
# !rm -rf models/research/seq_flow_lite/tf_ops
# !rm -rf models/research/seq_flow_lite/tflite_ops
# + [markdown] id="rP8iKa4Il4mL"
# ## Training an Emotion Prediction Model
#
# * First, we load the GoEmotions data from TFDS.
# * Next, we prepare the PRADO model for training. We set up the model configuration, including hyperparameters and labels. We also prepare the dataset, which involves projecting the inputs from the dataset and passing the projections to the model. This is needed because a model training on a TPU cannot handle string inputs.
# * Finally, we train and evaluate the model and produce model-level and per-label metrics.
# + [markdown] id="YtvR40K8K0Bn"
# ***Start here on Runtime reset***, once the packages above are properly installed:
# * Go to the `seq_flow_lite` directory.
# + id="ImEejssVKvxR"
# %cd models/research/seq_flow_lite
# + [markdown] id="uwSPqHXAeQ6H"
# * Import the Tensorflow and Tensorflow Dataset libraries.
# + id="kc4y4n80eL_b"
import tensorflow as tf
import tensorflow_datasets as tfds
# + [markdown] id="j-CtG3cagPgl"
# ### The data: GoEmotions
# In this tutorial, we use the [GoEmotions dataset from TFDS](https://www.tensorflow.org/datasets/catalog/goemotions).
#
# GoEmotions is a corpus of comments extracted from Reddit, with human annotations to 27 emotion categories or Neutral.
#
# * Number of labels: 27.
# * Size of training dataset: 43,410.
# * Size of evaluation dataset: 5,427.
# * Maximum sequence length in training and evaluation datasets: 30.
#
# The emotion categories are admiration, amusement, anger, annoyance, approval, caring, confusion, curiosity, desire, disappointment, disapproval, disgust, embarrassment, excitement, fear, gratitude, grief, joy, love, nervousness, optimism, pride, realization, relief, remorse, sadness, surprise.
#
# + [markdown] id="Bvsn_s3S0SAt"
# Load the data from TFDS:
# + id="KtTLwtEqwcR2"
ds = tfds.load('goemotions', split='train')
# + [markdown] id="gJuu4jKet9zq"
# Print 5 sample data elements from the dataset:
# + id="y0O18rSLuDx5"
for element in ds.take(5):
print(element)
# + [markdown] id="UAz-tdQfuVBn"
# ### The model: PRADO
#
# We train an Emotion Prediction model, based on the [PRADO architecture](https://github.com/tensorflow/models/blob/master/research/seq_flow_lite/models/prado.py) from the [Sequence Projection Models package](https://github.com/tensorflow/models/tree/master/research/seq_flow_lite).
#
# PRADO projects input sequences to fixed sized features. The idea behind this approach is to build embedding-free models that minimize the model size. Instead of using an embedding table to lookup embeddings, sequence projection models compute them on the fly, resulting in space-efficient models.
#
# In this section, we prepare the PRADO model for training.
#
# The GoEmotions dataset cannot be fed directly into the PRADO model, so below we also handle the necessary preprocessing by providing a dataset builder.
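# The embedding-free idea can be sketched in plain NumPy: hash each token into deterministic pseudo-random features instead of looking it up in a stored embedding table. This is a toy illustration only — PRADO's real projection is more involved — and the function name and feature size are our own:

```python
import zlib
import numpy as np

def hashed_projection(tokens, feature_size=8):
    """Toy on-the-fly token features: a deterministic hash (CRC32) seeds a
    ternary feature vector per token, so no embedding table is stored."""
    feats = np.zeros((len(tokens), feature_size))
    for i, tok in enumerate(tokens):
        rng = np.random.default_rng(zlib.crc32(tok.encode('utf-8')))
        feats[i] = rng.choice([-1.0, 0.0, 1.0], size=feature_size)
    return feats

f1 = hashed_projection('the cat sat'.split())
f2 = hashed_projection('sat the'.split())
```

The same token always maps to the same feature vector, yet the "table" costs no memory — that is the space saving the paragraph above describes.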
# + [markdown] id="e9uPSZYpgBqP"
# Prepare the model configuration:
# * Enumerate the labels expected to be found in the GoEmotions dataset.
# * Prepare the `MODEL_CONFIG` dictionary which includes training parameters for the model. See sample configs for the PRADO model [here](https://github.com/tensorflow/models/tree/master/research/seq_flow_lite/configs).
# + id="DkQMnTcLyFeR"
LABELS = [
'admiration',
'amusement',
'anger',
'annoyance',
'approval',
'caring',
'confusion',
'curiosity',
'desire',
'disappointment',
'disapproval',
'disgust',
'embarrassment',
'excitement',
'fear',
'gratitude',
'grief',
'joy',
'love',
'nervousness',
'optimism',
'pride',
'realization',
'relief',
'remorse',
'sadness',
'surprise',
'neutral',
]
# Model training parameters.
CONFIG = {
'name': 'models.prado',
'batch_size': 1024,
'train_steps': 10000,
'learning_rate': 0.0006,
'learning_rate_decay_steps': 340,
'learning_rate_decay_rate': 0.7,
}
# Limits the amount of logging output produced by the training run, in order to
# avoid browser slowdowns.
CONFIG['save_checkpoints_steps'] = int(CONFIG['train_steps'] / 10)
MODEL_CONFIG = {
'labels': LABELS,
'multilabel': True,
'quantize': False,
'max_seq_len': 128,
'max_seq_len_inference': 128,
'exclude_nonalphaspace_unicodes': False,
'split_on_space': True,
'embedding_regularizer_scale': 0.035,
'embedding_size': 64,
'bigram_channels': 64,
'trigram_channels': 64,
'feature_size': 512,
'network_regularizer_scale': 0.0001,
'keep_prob': 0.5,
'word_novelty_bits': 0,
'doc_size_levels': 0,
'add_bos_tag': False,
'add_eos_tag': False,
'pre_logits_fc_layers': [],
'text_distortion_probability': 0.0,
}
CONFIG['model_config'] = MODEL_CONFIG
# + [markdown] id="R-pUW649gfzA"
# Write a function that builds the datasets for the model. It will load the data, handle batching, and generate projections for the input text.
# + id="unYlUYXq119f"
from layers import base_layers
from layers import projection_layers
def build_dataset(mode, inspect=False):
if mode == base_layers.TRAIN:
split = 'train'
count = None
elif mode == base_layers.EVAL:
split = 'test'
count = 1
else:
raise ValueError('mode={}, must be TRAIN or EVAL'.format(mode))
batch_size = CONFIG['batch_size']
if inspect:
batch_size = 1
# Convert examples from their dataset format into the model format.
def process_input(features):
# Generate the projection for each comment_text input. The final tensor
# will have the shape [batch_size, number of tokens, feature size].
# Additionally, we generate a tensor containing the number of tokens for
# each comment_text (seq_length). This is needed because the projection
# tensor is a full tensor, and we are not using EOS tokens.
text = features['comment_text']
text = tf.reshape(text, [batch_size])
projection_layer = projection_layers.ProjectionLayer(MODEL_CONFIG, mode)
projection, seq_length = projection_layer(text)
# Convert the labels into an indicator tensor, using the LABELS indices.
label = tf.stack([features[label] for label in LABELS], axis=-1)
label = tf.cast(label, tf.float32)
label = tf.reshape(label, [batch_size, len(LABELS)])
model_features = ({'projection': projection, 'sequence_length': seq_length}, label)
if inspect:
model_features = (model_features[0], model_features[1], features)
return model_features
ds = tfds.load('goemotions', split=split)
ds = ds.repeat(count=count)
ds = ds.shuffle(buffer_size=batch_size * 2)
ds = ds.batch(batch_size, drop_remainder=True)
ds = ds.map(process_input,
num_parallel_calls=tf.data.experimental.AUTOTUNE,
deterministic=False)
ds = ds.prefetch(buffer_size=tf.data.experimental.AUTOTUNE)
return ds
train_dataset = build_dataset(base_layers.TRAIN)
test_dataset = build_dataset(base_layers.EVAL)
inspect_dataset = build_dataset(base_layers.TRAIN, inspect=True)
# + [markdown] id="DQmYWg6ivCHS"
# Print a batch of examples in model format. This will consist of:
# * the projection tensors (projection and seq_length)
# * the label tensor (second tuple value)
#
# The projection tensor is a **[batch size, max_seq_length, feature_size]** floating point tensor. The **[b, i]** vector is a feature vector of the **i**th token of the **b**th comment_text. The rest of the tensor is zero-padded, and the
# seq_length tensor indicates the number of features vectors for each comment_text.
#
# The label tensor is an indicator tensor of the set of true labels for the example.
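# In NumPy terms, the indicator (multi-hot) label construction looks like this (a toy three-label set, not the real 28):

```python
import numpy as np

# Hypothetical example: which of three labels apply to one comment.
LABELS_TOY = ['joy', 'love', 'anger']
example = {'joy': 1, 'love': 0, 'anger': 1}

# Stack per-label indicators into one multi-hot vector, analogous to
# the tf.stack over LABELS in process_input above.
label = np.array([float(example[l]) for l in LABELS_TOY])
```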
# + id="1OyK7rjTvBjF"
example = next(iter(train_dataset))
print("inputs = {}".format(example[0]))
print("labels = {}".format(example[1]))
# + [markdown] id="ytMQHT5Kd7A_"
# In this version of the dataset, the original example has been added as the third element of the tuple.
# + id="29EzRoCfI91r"
example = next(iter(inspect_dataset))
print("inputs = {}".format(example[0]))
print("labels = {}".format(example[1]))
print("original example = {}".format(example[2]))
# + [markdown] id="CLDbHTIvvX11"
# ### Train and Evaluate
# + [markdown] id="QqUTa7wXsHoO"
# First we define a function to build the model. We vary the model inputs depending on task. For training and evaluation, we'll take the projection and sequence length as inputs. Otherwise, we'll take strings as inputs.
# + id="erEiNX3ToLZ1"
from models import prado
def build_model(mode):
# First we define our inputs.
inputs = []
if mode == base_layers.TRAIN or mode == base_layers.EVAL:
# For TRAIN and EVAL, we'll be getting dataset examples,
# so we'll get projections and sequence_lengths.
projection = tf.keras.Input(
shape=(MODEL_CONFIG['max_seq_len'], MODEL_CONFIG['feature_size']),
name='projection',
dtype='float32')
sequence_length = tf.keras.Input(
shape=(), name='sequence_length', dtype='float32')
inputs = [projection, sequence_length]
else:
# Otherwise, we get string inputs which we need to project.
input = tf.keras.Input(shape=(), name='input', dtype='string')
projection_layer = projection_layers.ProjectionLayer(MODEL_CONFIG, mode)
projection, sequence_length = projection_layer(input)
inputs = [input]
# Next we add the model layer.
model_layer = prado.Encoder(MODEL_CONFIG, mode)
logits = model_layer(projection, sequence_length)
# Finally we add an activation layer.
if MODEL_CONFIG['multilabel']:
activation = tf.keras.layers.Activation('sigmoid', name='predictions')
else:
activation = tf.keras.layers.Activation('softmax', name='predictions')
predictions = activation(logits)
model = tf.keras.Model(
inputs=inputs,
outputs=[predictions])
return model
# + [markdown] id="caHpK9Htv40g"
# Train the model:
# + id="2xM-2R38kogo"
# Remove any previous training data.
# !rm -rf model
model = build_model(base_layers.TRAIN)
# Create the optimizer.
learning_rate = tf.keras.optimizers.schedules.ExponentialDecay(
initial_learning_rate=CONFIG['learning_rate'],
decay_rate=CONFIG['learning_rate_decay_rate'],
decay_steps=CONFIG['learning_rate_decay_steps'],
staircase=True)
optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)
# Define the loss function.
loss = tf.keras.losses.BinaryCrossentropy(from_logits=False)
model.compile(optimizer=optimizer, loss=loss)
epochs = int(CONFIG['train_steps'] / CONFIG['save_checkpoints_steps'])
model.fit(
x=train_dataset,
epochs=epochs,
validation_data=test_dataset,
steps_per_epoch=CONFIG['save_checkpoints_steps'])
model.save_weights('model/model_checkpoint')
# + [markdown] id="0hdbXBs0g3oX"
# Load a training checkpoint and evaluate:
# + id="A1qc9GNtF3s5"
model = build_model(base_layers.EVAL)
# Define metrics over each category.
metrics = []
for i, label in enumerate(LABELS):
metric = tf.keras.metrics.Precision(
thresholds=[0.5],
class_id=i,
name='precision@0.5/{}'.format(label))
metrics.append(metric)
metric = tf.keras.metrics.Recall(
thresholds=[0.5],
class_id=i,
name='recall@0.5/{}'.format(label))
metrics.append(metric)
# Define metrics over the entire task.
metric = tf.keras.metrics.Precision(thresholds=[0.5], name='precision@0.5/all')
metrics.append(metric)
metric = tf.keras.metrics.Recall(thresholds=[0.5], name='recall@0.5/all')
metrics.append(metric)
model.compile(metrics=metrics)
model.load_weights('model/model_checkpoint')
result = model.evaluate(x=test_dataset, return_dict=True)
# + [markdown] id="Namwa3enwQBc"
# Print evaluation metrics for the model, as well as per emotion label:
# + id="l420PosisfXN"
for label in LABELS:
precision_key = 'precision@0.5/{}'.format(label)
recall_key = 'recall@0.5/{}'.format(label)
if precision_key in result and recall_key in result:
print('{}: (precision@0.5: {}, recall@0.5: {})'.format(
label, result[precision_key], result[recall_key]))
precision_key = 'precision@0.5/all'
recall_key = 'recall@0.5/all'
if precision_key in result and recall_key in result:
print('all: (precision@0.5: {}, recall@0.5: {})'.format(
result[precision_key], result[recall_key]))
# + [markdown] id="AZSWnwTMqZ5f"
# ## Suggest Emojis using an Emotion Prediction model
#
# In this section, we apply the Emotion Prediction model trained above to suggest emojis relevant to input text.
#
# Refer to our [GoEmotions Model Card](https://github.com/google-research/google-research/blob/master/goemotions/goemotions_model_card.pdf) for additional uses of the model and considerations and limitations for using the GoEmotions data.
# + [markdown] id="aybpGQV1qr8I"
# Map each emotion label to a relevant emoji:
# * Emotions are subtle and multi-faceted. In many cases, no single emoji can truly capture the full complexity of the human experience behind each emotion.
# * For the purpose of this exercise, we will select an emoji that captures at least one facet that is conveyed by an emotion label.
# + id="lgs12b90qmSQ"
EMOJI_MAP = {
'admiration': '👏',
'amusement': '😂',
'anger': '😡',
'annoyance': '😒',
'approval': '👍',
'caring': '🤗',
'confusion': '😕',
'curiosity': '🤔',
'desire': '😍',
'disappointment': '😞',
'disapproval': '👎',
'disgust': '🤮',
'embarrassment': '😳',
'excitement': '🤩',
'fear': '😨',
'gratitude': '🙏',
'grief': '😢',
'joy': '😃',
'love': '❤️',
'nervousness': '😬',
'optimism': '🤞',
'pride': '😌',
'realization': '💡',
'relief': '😅',
'remorse': '',
'sadness': '😞',
'surprise': '😲',
'neutral': '',
}
# + [markdown] id="rh_3y7OL7JG_"
# Select sample inputs:
# + id="rdD6xPpn7Mjm"
PREDICT_TEXT = [
b'Good for you!',
b'Happy birthday!',
b'I love you.',
]
# + [markdown] id="vavivya6hGw0"
# Run inference for the selected examples:
# + id="tJ6iyLlLo5-3"
import numpy as np
model = build_model(base_layers.PREDICT)
model.load_weights('model/model_checkpoint')
for text in PREDICT_TEXT:
results = model.predict(x=[text])
print('')
print('{}:'.format(text))
labels = np.flip(np.argsort(results[0]))
for x in range(3):
label = LABELS[labels[x]]
label = EMOJI_MAP[label] if EMOJI_MAP[label] else label
print('{}: {}'.format(label, results[0][labels[x]]))
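# The top-3 selection used in the loop above — a descending `argsort` over the score vector — in isolation, with made-up labels and scores:

```python
import numpy as np

labels = ['joy', 'love', 'anger', 'neutral']   # toy label set
scores = np.array([0.1, 0.7, 0.05, 0.4])       # made-up prediction scores

# Flip the ascending argsort to get indices in descending score order,
# then keep the three best, as in the inference loop.
top3 = np.flip(np.argsort(scores))[:3]
ranked = [(labels[i], float(scores[i])) for i in top3]
```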
| research/seq_flow_lite/demo/colab/emotion_colab.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from pickle import load
with open('result_summary.pkl', 'rb') as f:
near_eq_result_summary = load(f)
def get_near_eq_curves_with_unc(scheme, marginal):
r = near_eq_result_summary[scheme][marginal]
return r['dts'], r['near_eq_estimates'], r['near_eq_uncertainty']
dts, mean, unc = get_near_eq_curves_with_unc('VRORV', 'configuration')
# -
from tqdm import tqdm
import numpy as np
from benchmark.integrators import LangevinSplittingIntegrator
from benchmark.testsystems import water_cluster_rigid, NonequilibriumSimulator
from benchmark import simulation_parameters
from simtk import unit
temperature = simulation_parameters['temperature']
# +
# let's see how this depends also on the collision_rate?
# -
dts
# +
#dts = np.linspace(0.1, 8.0, 10) * unit.femtosecond
def estimate_acceptance_rate(scheme, dt, n_samples=10000, gamma=1.0/unit.picosecond):
integrator = LangevinSplittingIntegrator(splitting=' '.join(scheme),
temperature=temperature,
timestep=dt * unit.femtosecond,
collision_rate=gamma)
noneq_sim = NonequilibriumSimulator(water_cluster_rigid, integrator)
acceptance_ratios = np.zeros(n_samples)
for i in range(n_samples):
x0 = water_cluster_rigid.sample_x_from_equilibrium()
v0 = water_cluster_rigid.sample_v_given_x(x0)
acceptance_ratios[i] = min(1, np.exp(-noneq_sim.accumulate_shadow_work(x0, v0, 1)['W_shad'])) # already in units of kT
return acceptance_ratios
# -
# +
def get_acceptance_rate_curve(scheme):
"""Get acceptance rate +/- stderr as a function of timestep."""
acceptance_rates = np.zeros(len(dts))
acceptance_rates_unc = np.zeros(len(dts))
for i in tqdm(range(len(dts))):
dt = dts[i]
acceptance_ratios = estimate_acceptance_rate(scheme, dt)
acceptance_rates[i] = np.mean(acceptance_ratios)
acceptance_rates_unc[i] = 1.96 * np.std(acceptance_ratios) / np.sqrt(len(acceptance_ratios))
return acceptance_rates, acceptance_rates_unc
schemes = ['OVRVO', 'ORVRO', 'VRORV', 'RVOVR']
acceptance_rate_curves = {}
acceptance_rate_unc_curves = {}
for scheme in schemes:
mean, stderr = get_acceptance_rate_curve(scheme)
acceptance_rate_curves[scheme] = mean
acceptance_rate_unc_curves[scheme] = stderr
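# The estimator pattern above — mean Metropolis acceptance ratio min(1, e^{-W}) with a 95% half-width of 1.96·std/√n — can be sketched standalone on hypothetical shadow-work samples (the distribution parameters here are made up, not simulation output):

```python
import numpy as np

rng = np.random.default_rng(42)
W_shad = rng.normal(0.5, 0.3, size=10000)    # hypothetical shadow work, in kT
ratios = np.minimum(1.0, np.exp(-W_shad))    # Metropolis acceptance ratios
mean = ratios.mean()
half_width = 1.96 * ratios.std() / np.sqrt(len(ratios))
```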
# +
import seaborn as sns  # seaborn.apionly was removed in seaborn 0.9; the plain import is now side-effect free
schemes = sorted(['RVOVR', 'VRORV', 'OVRVO', 'ORVRO'])
colors = dict(zip(schemes, ['Blues', 'Greens', 'Oranges', 'Purples']))
colormaps = dict()
for scheme in schemes:
colormap = sns.color_palette(colors[scheme], n_colors=len(dts))
colormaps[scheme] = dict(zip(dts, colormap))
dt_ = sorted(dts)[int(len(dts) / 2)]
half_depth_colors = dict()
for scheme in schemes:
half_depth_colors[scheme] = colormaps[scheme][dt_]
# -
import matplotlib.pyplot as plt
# %matplotlib inline
plt.figure()
ax = plt.subplot(1,1,1)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
for scheme in schemes:
mean, stderr = (1 - acceptance_rate_curves[scheme]), acceptance_rate_unc_curves[scheme]
plt.plot(dts, mean, color=half_depth_colors[scheme], label=scheme)
plt.fill_between(dts, mean - stderr, mean + stderr, color=half_depth_colors[scheme], alpha=0.25)
plt.legend(loc='best', title='GHMC based on:')
#plt.ylim(0.5,1)
plt.xlabel('$\Delta t$ (fs)')
plt.ylabel("GHMC reject rate\n(1 - accept rate)")
plt.title('GHMC reject rate vs. $\Delta t$' + '\n(water_cluster_rigid)')
plt.savefig('water_cluster_reject_rate.jpg', dpi=300, bbox_inches='tight')
dts
# +
scale_factor = 3.2
n_cols = 3
n_rows = 1
plt.figure(figsize=(n_cols * scale_factor,n_rows * scale_factor))
ax = plt.subplot(n_rows,n_cols,1)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
for scheme in schemes:
mean, stderr = (1 - acceptance_rate_curves[scheme]), acceptance_rate_unc_curves[scheme]
plt.plot(dts, mean, color=half_depth_colors[scheme], label=scheme)
plt.fill_between(dts, mean - stderr, mean + stderr, color=half_depth_colors[scheme], alpha=0.25)
#plt.legend(loc='best', title='GHMC based on:')
#plt.ylim(0.5,1)
plt.xlabel('$\Delta t$ (fs)')
plt.ylabel("GHMC reject rate\n(1 - accept rate)")
plt.title('(a) GHMC reject rate vs. $\Delta t$' + '\n(water cluster)')
ax = plt.subplot(n_rows,n_cols,2)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
for scheme in schemes:
dts_, D_KL_phase_space = get_near_eq_curves_with_unc(scheme, 'full')[:2]
mask = D_KL_phase_space > 1e-4
acceptance_rate = np.array([acceptance_rate_curves[scheme][i] for i in range(len(dts)) if dts[i] in dts_])
#D_KL_phase_space = np.array([entropy(rho.flatten(), exact_hist_xv.flatten()) for rho in phase_space_rhos[scheme]])
#acceptance_rate = acceptance_rate_curves[scheme]
plt.plot(1 - acceptance_rate[mask], D_KL_phase_space[mask], color=half_depth_colors[scheme], label=scheme)
plt.xlabel('GHMC reject rate')
plt.ylabel(r'$\mathcal{D}_{KL}(\rho \| \pi)$')
plt.title('(b) GHMC reject rate vs\n phase space ' + r'$\mathcal{D}_{KL}$')
plt.legend(loc='best', title='GHMC based on:')
ax = plt.subplot(n_rows,n_cols,3, sharey=ax, sharex=ax)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
for scheme in schemes:
dts_, D_KL_conf_space = get_near_eq_curves_with_unc(scheme, 'configuration')[:2]
#D_KL_conf_space = np.array([entropy(rho_x, pi_x) for rho_x in conf_space_rhos[scheme]])
acceptance_rate = np.array([acceptance_rate_curves[scheme][i] for i in range(len(dts)) if dts[i] in dts_])
mask = D_KL_conf_space > 1e-4
plt.plot(1 - acceptance_rate[mask], D_KL_conf_space[mask], color=half_depth_colors[scheme], label=scheme)
plt.xlabel('GHMC reject rate')
plt.ylabel(r'$\mathcal{D}_{KL}(\rho_{\mathbf{x}} \| \pi_{\mathbf{x}})$')
plt.title('(c) GHMC reject rate vs\n configuration space ' + r'$\mathcal{D}_{KL}$')
#plt.legend(loc='best', title='GHMC based on:')
plt.yscale('log')
#plt.xscale('log')
plt.tight_layout()
plt.savefig('water_cluster_acceptance_rate_vs_D_KL.jpg', dpi=300, bbox_inches='tight')
# -
| notebooks/Water Cluster GHMC accept rates.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# **1**. (20 points)
#
# Consider the following system of equations:
#
# $$\begin{align*}
# 2x_1& - x_2& + x_3 &=& 6\\
# -x_1& +2x_2& - x_3 &=& 2\\
# x_1 & -x_2& + x_3 &=& 1
# \end{align*}$$
#
# 1. Consider the system in matrix form $Ax=b$ and define $A$, $b$ in numpy. (5 points)
# 2. Show that $A$ is positive-definite (5 points)
# 3. Use the appropriate matrix decomposition function in numpy and back-substitution to solve the system (10 points)
# +
import numpy as np
import scipy.linalg as la
A = np.array([
[2, -1, 1],
[-1, 2, -1],
[1, -1, 1]
])
b = np.array([6,2,1]).reshape(-1,1)
la.eigvalsh(A)
# -
# Since all eigenvalues are positive, $A$ is positive definite.
# Thus, for a positive-definite matrix $A$, a Cholesky decomposition is the most appropriate choice.
#
# **Please note that the Cholesky decomposition only works on positive-definite matrices, not on all symmetric matrices.**
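# As a quick sanity check of this note, `np.linalg.cholesky` raises a `LinAlgError` on a symmetric matrix that is *not* positive definite. The matrix below is a made-up indefinite example, not part of the assignment:

```python
import numpy as np

# Symmetric but indefinite: the eigenvalues are 3 and -1.
M = np.array([[1.0, 2.0],
              [2.0, 1.0]])

try:
    np.linalg.cholesky(M)
    is_positive_definite = True
except np.linalg.LinAlgError:
    is_positive_definite = False

print("positive definite?", is_positive_definite)  # → positive definite? False
```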
C = np.linalg.cholesky(A)
# Back substitution:
#
# $Ax = b \longrightarrow MNx = b \longrightarrow M(Nx) = b \longrightarrow My = b \longrightarrow Nx = y$
# +
y = la.solve_triangular(C, b, lower=True)
x = la.solve_triangular(C.T, y, lower=False)
x
# -
A @ x
# **Alternative**
la.cho_solve(la.cho_factor(A), b)
# **2**. (20 points)
#
# Exact geometric solutions with $n = m$
#
# - Find the equation of the line that passes through the points (2,1) and (3,7)
# - Find the equation of the circle that passes through the points (1,7), (6,2) and (4,6)
#
# Hint: The equation of a circle can be written as
#
# $$
# (x - a)^2 + (y - b)^2 = r^2
# $$
# - Find the equation of the line that passes through the points (2,1) and (3,7)
#
# We write the following equation using matrix notation
#
# $a_0 + a_1 x = y$
x = np.array([2,3])
y = np.array([1,7])
A = np.c_[np.ones(2), x]
A
# +
la.solve(A, y)
# -
# Find the equation of the circle that passes through the points (1,7), (6,2) and (4,6)
#
# We expand the circle equation to get
#
# $$
# x^2 - 2ax + a^2 + y^2 - 2by + b^2 = r^2
# $$
#
# and rearrange terms
#
# $$
# 2ax + 2by + (r^2 - a^2 -b^2) = x^2 + y^2
# $$
#
# which we can solve as a matrix equation.
x = np.array([1, 6, 4])
y = np.array([7, 2, 6])
A = np.c_[2*x, 2*y, np.ones(3)]
A
la.solve(A, x**2 + y**2)
# **3**. (20 points)
#
# - Load the matrix in `Q2.npy` - this consists of two columns representing the x and y coordinates of 10 points
# - Find the equation of the circle that best fits these points
# - Plot the points and fitted circle
#
# Hint: You need to estimate the center of the circle and its radius.
X = np.load('Q2.npy')
X
# %matplotlib inline
import matplotlib.pyplot as plt
plt.scatter(X[:,0], X[:,1])
plt.axis('square')
pass
# - Find the equation of the circle that best fits these points (15 points)
# $$
# x^2 - 2ax + a^2 + y^2 - 2by + b^2 = r^2
# $$
#
# and rearrange terms
#
# $$
# 2ax + 2by + (r^2 - a^2 - b^2) = x^2 + y^2
# $$
A = np.c_[2*X, np.ones(X.shape[0])]
A
# +
sol = la.lstsq(A, np.sum(X**2, axis=1))[0]
a, b, z = sol
r = np.sqrt(z - a**2 - b**2)
r, a, b
# -
# - Plot the points and fitted circle (5 points)
plt.scatter(X[:,0], X[:,1])
c = plt.Circle([a,b], r, fill=False)
plt.gca().add_artist(c)
plt.axis('square')
pass
# **4**. (20 points)
#
# The figure below shows the current population of Durham, Chapel Hill and Raleigh. Arrows show fractions that move between cities each year.
#
# - What are the population sizes of the 3 cities after 3 years have passed?
# - Find the steady state population of the 3 cities by solving a linear system.
#
# Assume no births, deaths or any other fluxes other than those shown.
#
# 
import numpy as np
import scipy.linalg as la
# +
M = np.array([
[0.9, 0.05, 0.05],
[0.2, 0.5, 0.3],
[0, 0.2, 0.8]
]).T
x = np.array([300000, 80000, 500000])[:, None]
# -
x.sum()
M
# - What are the population sizes of the 3 cities after 3 years have passed? (5 points)
each = (np.linalg.matrix_power(M, 3) @ x).astype('int')
each
each.sum()
# - Find the steady state population of the 3 cities by solving a linear system. (15 points)
#
# Note
#
# - You are asked for the steady state *population*
# - A check for both cases is that total population does not change
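# The conservation check can be made concrete: every column of the transition matrix sums to 1, so multiplying by $M$ preserves the total population. A minimal sketch restating the matrix defined earlier in this solution:

```python
import numpy as np

# Transition matrix as defined above (columns: from-city, rows: to-city).
M = np.array([
    [0.9, 0.05, 0.05],
    [0.2, 0.5, 0.3],
    [0.0, 0.2, 0.8]
]).T

x = np.array([300000.0, 80000.0, 500000.0])

col_sums = M.sum(axis=0)     # each column sums to 1
total_after = (M @ x).sum()  # so total population is unchanged

print(col_sums, x.sum(), total_after)
```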
T = M - np.eye(3)
T[-1,:] = [1,1,1]
T
p = la.solve(T, np.array([0,0,1]))
p
p * x.sum()
M @ p
M
u,v = la.eig(M)
u
a = v[:,0]
a
res = a/a.sum()
res
M@res
M@M@res
res * 880000
# **5**. (20 points)
#
# The file `Q5.npy` contains the x and y coordinates in cols 1 and 2 respectively.
#
# - Find a cubic polynomial model to fit the data using the normal equations
# - Provide a geometric interpretation of the solution in terms of projection of a vector onto a space. What is the vector, what is the basis of the space, and what does the numerical solution you obtained represent?
# +
x, y = np.load('Q5.npy').T
y = y[:, None]
# -
# - Find a cubic polynomial model to fit the data using the normal equations (5 points)
# +
X = np.c_[x**3, x**2, x, np.ones_like(x)]
np.linalg.solve(X.T@X, X.T@y)
# -
la.lstsq(X, y)[0]
# The description should indicate some version of the following points
#
# - The vector being projected is $y$
# - It is being projected onto the column space of $X$
# - The columns of $X$ are the basis functions $x^3, x^2, x, 1$ evaluated at the data points; they span the space of cubic polynomial fits
# - The numerical solution is the vector of coefficients for a cubic polynomial that is closest to $y$
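# This projection property can be verified numerically: the least-squares residual $y - X\hat{\beta}$ is orthogonal to every column of $X$. A minimal sketch on synthetic data (randomly generated for illustration, not loaded from `Q5.npy`):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 30)
y = x**3 - 2 * x + 0.1 * rng.standard_normal(30)

# Design matrix for a cubic fit, as in the solution above.
X = np.c_[x**3, x**2, x, np.ones_like(x)]
beta = np.linalg.solve(X.T @ X, X.T @ y)

# The residual lies in the orthogonal complement of the column space of X.
residual = y - X @ beta
print(np.abs(X.T @ residual).max())
```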
# Source: notebooks/copies/homework/solutions/Homework06_Sample_Solution.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: myenv
# language: python
# name: myenv
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="pgygHDQ0Cov2" executionInfo={"status": "ok", "timestamp": 1644178799174, "user_tz": -60, "elapsed": 17983, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjunRQpsmF0tn6Af3tw3lfsxRDb9Aw-IOMvTst6wQ=s64", "userId": "05931522359698354671"}} outputId="25278b55-eec6-41e7-81d2-d3cd099e3dcb"
from google.colab import drive
drive.mount('/content/drive')
# + id="AGQMlYkEsYd3"
# !pip install rasterio
# + id="T-P8tEYWsazH" executionInfo={"status": "ok", "timestamp": 1644178979862, "user_tz": -60, "elapsed": 228, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjunRQpsmF0tn6Af3tw3lfsxRDb9Aw-IOMvTst6wQ=s64", "userId": "05931522359698354671"}}
import rasterio as rio
from rasterio.plot import show
# + id="fU3l0X5msoAT" executionInfo={"status": "ok", "timestamp": 1644178984550, "user_tz": -60, "elapsed": 735, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjunRQpsmF0tn6Af3tw3lfsxRDb9Aw-IOMvTst6wQ=s64", "userId": "05931522359698354671"}}
dem = rio.open("/content/drive/MyDrive/Image Segmentation/Source DEMs/test_dem.tif")
# + colab={"base_uri": "https://localhost:8080/"} id="3BF1wPwttAyV" executionInfo={"status": "ok", "timestamp": 1644179003317, "user_tz": -60, "elapsed": 8, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjunRQpsmF0tn6Af3tw3lfsxRDb9Aw-IOMvTst6wQ=s64", "userId": "05931522359698354671"}} outputId="1ff374ab-76c1-49be-f981-7b4c1a41db00"
dem.shape
# + colab={"base_uri": "https://localhost:8080/"} id="8ykP9sAqtJMd" executionInfo={"status": "ok", "timestamp": 1644179008395, "user_tz": -60, "elapsed": 229, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjunRQpsmF0tn6Af3tw3lfsxRDb9Aw-IOMvTst6wQ=s64", "userId": "05931522359698354671"}} outputId="3ed1e655-eaaa-4b5a-8c31-c8150a7293db"
dem.nodata
# + colab={"base_uri": "https://localhost:8080/"} id="kQjIaiWxtKeU" executionInfo={"status": "ok", "timestamp": 1644179021946, "user_tz": -60, "elapsed": 232, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjunRQpsmF0tn6Af3tw3lfsxRDb9Aw-IOMvTst6wQ=s64", "userId": "05931522359698354671"}} outputId="093a0c0a-81bd-4155-e9d5-3fa4b2717057"
dem.dtypes
# + id="1EDzlEgwtNz2" executionInfo={"status": "ok", "timestamp": 1644179045302, "user_tz": -60, "elapsed": 4011, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjunRQpsmF0tn6Af3tw3lfsxRDb9Aw-IOMvTst6wQ=s64", "userId": "05931522359698354671"}}
mask = dem.read_masks(1)
# + colab={"base_uri": "https://localhost:8080/", "height": 286} id="nUMWgKQJtSkh" executionInfo={"status": "ok", "timestamp": 1644179058101, "user_tz": -60, "elapsed": 4429, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjunRQpsmF0tn6Af3tw3lfsxRDb9Aw-IOMvTst6wQ=s64", "userId": "05931522359698354671"}} outputId="de141cb4-5ec3-49a4-b1ae-4656d43eeb79"
show(mask)
# + colab={"base_uri": "https://localhost:8080/"} id="dNbCOTYUtVZd" executionInfo={"status": "ok", "timestamp": 1644179094456, "user_tz": -60, "elapsed": 238, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjunRQpsmF0tn6Af3tw3lfsxRDb9Aw-IOMvTst6wQ=s64", "userId": "05931522359698354671"}} outputId="49dfdbe7-33c7-4c32-b362-7003e1c08966"
mask[200:205,200:205]
# + id="OZx9zlyvtdxt"
# Source: Code/Visualization/Tryouts.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import sys
sys.path.append('../../')
from torchuq.dataset.classification import *
for k in classification_load_funs:
X, y = classification_load_funs[k]()
print(k, X.shape, np.isnan(X).sum(), np.isnan(y).sum())
print(y.dtype, y.max(), y.min(), dataset_nclasses[k])
assert y.min() == 0
assert y.max() == dataset_nclasses[k] - 1
# +
# Test the splitting function
val_fractions = [0.0, 0.1, 0.2]
test_fractions = [0.0, 0.1, 0.2]
for val_size in val_fractions:
for test_size in test_fractions:
print(val_size, test_size)
train, val, test = get_classification_datasets('adult', val_fraction=val_size, test_fraction=test_size)
if val is None:
print("No val")
elif test is None:
print("No test")
else:
print(len(train), len(val), len(test))
# This should trigger an assert
try:
get_classification_datasets('adult', val_fraction=0.6, test_fraction=0.6)
except AssertionError:
print("Assert successful")
# -
# Source: tests/dataset/classification_unit.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="s_qNSzzyaCbD"
# ##### Copyright 2019 The TensorFlow Authors.
# + cellView="form" colab={} colab_type="code" id="jmjh290raIky"
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] colab_type="text" id="J0Qjg6vuaHNt"
# # Transformer model for language understanding
# + [markdown] colab_type="text" id="AOpGoE2T-YXS"
# <table class="tfo-notebook-buttons" align="left">
#   <td>
#     <a target="_blank" href="https://tensorflow.google.cn/tutorials/text/transformer">
#     <img src="https://tensorflow.google.cn/images/tf_logo_32px.png" />
#     View on tensorflow.google.cn</a>
#   </td>
#   <td>
#     <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/text/transformer.ipynb">
#     <img src="https://tensorflow.google.cn/images/colab_logo_32px.png" />
#     Run in Google Colab</a>
#   </td>
#   <td>
#     <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/text/transformer.ipynb">
#     <img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png" />
#     View source on GitHub</a>
#   </td>
#   <td>
#     <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/tutorials/text/transformer.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png" />Download notebook</a>
#   </td>
# </table>
# + [markdown] colab_type="text" id="7Saq5g1mnE5Y"
# Note: Our TensorFlow community has translated these documents. Because community translations are best-effort, there is no guarantee that they are an accurate and up-to-date reflection of the
# [official English documentation](https://www.tensorflow.org/?hl=en). If you have suggestions to improve this translation, please send a pull request to the
# [tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository. To volunteer to write or review community translations, join the
# [<EMAIL> Google Group](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-zh-cn)
# + [markdown] colab_type="text" id="M-f8TnGpE_ex"
# This tutorial trains a <a href="https://arxiv.org/abs/1706.03762" class="external">Transformer model</a> to translate Portuguese to English. This is an advanced example that assumes knowledge of [text generation](text_generation.ipynb) and [attention](nmt_with_attention.ipynb).
#
# The core idea behind the Transformer model is *self-attention*: the ability to attend to different positions of the input sequence to compute a representation of that sequence. The Transformer creates stacks of self-attention layers, which are explained below in the *Scaled dot product attention* and *Multi-head attention* sections.
#
# A transformer model handles variable-sized input using stacks of self-attention layers instead of [RNNs](text_classification_rnn.ipynb) or [CNNs](../images/intro_to_cnns.ipynb). This general architecture has a number of advantages:
#
# * It makes no assumptions about the temporal/spatial relationships across the data. This is ideal for processing a set of objects (for example, [StarCraft units](https://deepmind.com/blog/alphastar-mastering-real-time-strategy-game-starcraft-ii/#block-8)).
# * Layer outputs can be calculated in parallel, instead of a series like an RNN.
# * Distant items can affect each other's output without passing through many RNN steps or convolution layers (see [Scene Memory Transformer](https://arxiv.org/pdf/1903.03878.pdf), for example).
# * It can learn long-range dependencies. This is a challenge in many sequence tasks.
#
# The downsides of this architecture are:
#
# * For a time series, the output for a time step is calculated from the *entire history* instead of only the inputs and current hidden state. This *may* be less efficient.
# * If the input *does* have a temporal/spatial relationship, like text, some positional encoding must be added or the model will effectively see a bag of words.
#
# After training the model in this notebook, you will be able to input a Portuguese sentence and return the English translation.
#
# <img src="https://tensorflow.google.cn/images/tutorials/transformer/attention_map_portuguese.png" width="800" alt="Attention heatmap">
# + colab={} colab_type="code" id="JjJJyJTZYebt"
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# !pip install tf-nightly
except Exception:
pass
import tensorflow_datasets as tfds
import tensorflow as tf
import time
import numpy as np
import matplotlib.pyplot as plt
# + [markdown] colab_type="text" id="fd1NWMxjfsDd"
# ## Set up the input pipeline
# + [markdown] colab_type="text" id="t4_Qt8W1hJE_"
# Use [TFDS](https://tensorflow.google.cn/datasets) to load the [Portuguese-English translation dataset](https://github.com/neulab/word-embeddings-for-nmt) from the [TED Talks Open Translation Project](https://www.ted.com/participate/translate).
#
# This dataset contains approximately 50000 training examples, 1100 validation examples, and 2000 test examples.
# + colab={} colab_type="code" id="8q9t4FmN96eN"
examples, metadata = tfds.load('ted_hrlr_translate/pt_to_en', with_info=True,
as_supervised=True)
train_examples, val_examples = examples['train'], examples['validation']
# + [markdown] colab_type="text" id="RCEKotqosGfq"
# Create a custom subwords tokenizer from the training dataset.
# + colab={} colab_type="code" id="KVBg5Q8tBk5z"
tokenizer_en = tfds.features.text.SubwordTextEncoder.build_from_corpus(
(en.numpy() for pt, en in train_examples), target_vocab_size=2**13)
tokenizer_pt = tfds.features.text.SubwordTextEncoder.build_from_corpus(
(pt.numpy() for pt, en in train_examples), target_vocab_size=2**13)
# + colab={} colab_type="code" id="4DYWukNFkGQN"
sample_string = 'Transformer is awesome.'
tokenized_string = tokenizer_en.encode(sample_string)
print ('Tokenized string is {}'.format(tokenized_string))
original_string = tokenizer_en.decode(tokenized_string)
print ('The original string: {}'.format(original_string))
assert original_string == sample_string
# + [markdown] colab_type="text" id="o9KJWJjrsZ4Y"
# If a word is not in its dictionary, the tokenizer encodes the string by breaking the word into subwords.
# + colab={} colab_type="code" id="bf2ntBxjkqK6"
for ts in tokenized_string:
print ('{} ----> {}'.format(ts, tokenizer_en.decode([ts])))
# + colab={} colab_type="code" id="bcRp7VcQ5m6g"
BUFFER_SIZE = 20000
BATCH_SIZE = 64
# + [markdown] colab_type="text" id="kGi4PoVakxdc"
# Add a start and end token to the input and target.
# + colab={} colab_type="code" id="UZwnPr4R055s"
def encode(lang1, lang2):
lang1 = [tokenizer_pt.vocab_size] + tokenizer_pt.encode(
lang1.numpy()) + [tokenizer_pt.vocab_size+1]
lang2 = [tokenizer_en.vocab_size] + tokenizer_en.encode(
lang2.numpy()) + [tokenizer_en.vocab_size+1]
return lang1, lang2
# + [markdown] colab_type="text" id="6JrGp5Gek6Ql"
# Note: To keep this example small and relatively fast, drop the examples with a length of over 40 tokens.
# + colab={} colab_type="code" id="2QEgbjntk6Yf"
MAX_LENGTH = 40
# + colab={} colab_type="code" id="c081xPGv1CPI"
def filter_max_length(x, y, max_length=MAX_LENGTH):
return tf.logical_and(tf.size(x) <= max_length,
tf.size(y) <= max_length)
# + [markdown] colab_type="text" id="Tx1sFbR-9fRs"
# Operations inside `.map()` run in graph mode and receive a graph tensor that does not have a numpy attribute. The `tokenizer` expects a string or a Unicode symbol to encode it into integers. Hence, you need to run the encoding inside a `tf.py_function`, which receives an eager tensor having a numpy attribute that contains the string value.
# + colab={} colab_type="code" id="Mah1cS-P70Iz"
def tf_encode(pt, en):
result_pt, result_en = tf.py_function(encode, [pt, en], [tf.int64, tf.int64])
result_pt.set_shape([None])
result_en.set_shape([None])
return result_pt, result_en
# + colab={} colab_type="code" id="9mk9AZdZ5bcS"
train_dataset = train_examples.map(tf_encode)
train_dataset = train_dataset.filter(filter_max_length)
# Cache the dataset to memory to get a speedup while reading from it.
train_dataset = train_dataset.cache()
train_dataset = train_dataset.shuffle(BUFFER_SIZE).padded_batch(BATCH_SIZE)
train_dataset = train_dataset.prefetch(tf.data.experimental.AUTOTUNE)
val_dataset = val_examples.map(tf_encode)
val_dataset = val_dataset.filter(filter_max_length).padded_batch(BATCH_SIZE)
# + colab={} colab_type="code" id="_fXvfYVfQr2n"
pt_batch, en_batch = next(iter(val_dataset))
pt_batch, en_batch
# + [markdown] colab_type="text" id="nBQuibYA4n0n"
# ## Positional encoding
#
# Since this model doesn't contain any recurrence or convolution, positional encoding is added to give the model some information about the relative position of the words in the sentence.
#
# The positional encoding vector is added to the embedding vector. Embeddings represent a token in a d-dimensional space where tokens with similar meaning are closer to each other. But the embeddings do not encode the relative position of words in a sentence. So after adding the positional encoding, words will be closer to each other based on *the similarity of their meaning and their position in the sentence*, in the d-dimensional space.
#
# See the notebook on [positional encoding](https://github.com/tensorflow/examples/blob/master/community/en/position_encoding.ipynb) to learn more about it. The formula for calculating the positional encoding is as follows:
#
# $$\Large{PE_{(pos, 2i)} = sin(pos / 10000^{2i / d_{model}})} $$
# $$\Large{PE_{(pos, 2i+1)} = cos(pos / 10000^{2i / d_{model}})} $$
# + colab={} colab_type="code" id="WhIOZjMNKujn"
def get_angles(pos, i, d_model):
angle_rates = 1 / np.power(10000, (2 * (i//2)) / np.float32(d_model))
return pos * angle_rates
# + colab={} colab_type="code" id="1Rz82wEs5biZ"
def positional_encoding(position, d_model):
angle_rads = get_angles(np.arange(position)[:, np.newaxis],
np.arange(d_model)[np.newaxis, :],
d_model)
  # apply sin to even indices in the array; 2i
angle_rads[:, 0::2] = np.sin(angle_rads[:, 0::2])
  # apply cos to odd indices in the array; 2i+1
angle_rads[:, 1::2] = np.cos(angle_rads[:, 1::2])
pos_encoding = angle_rads[np.newaxis, ...]
return tf.cast(pos_encoding, dtype=tf.float32)
# + colab={} colab_type="code" id="1kLCla68EloE"
pos_encoding = positional_encoding(50, 512)
print (pos_encoding.shape)
plt.pcolormesh(pos_encoding[0], cmap='RdBu')
plt.xlabel('Depth')
plt.xlim((0, 512))
plt.ylabel('Position')
plt.colorbar()
plt.show()
# + [markdown] colab_type="text" id="a_b4ou4TYqUN"
# ## Masking
# + [markdown] colab_type="text" id="s42Uydjkv0hF"
# Mask all the pad tokens in the batch of sequences. This ensures that the model does not treat padding as input. The mask indicates where the pad value `0` is present: it outputs a `1` at those locations, and a `0` otherwise.
# + colab={} colab_type="code" id="U2i8-e1s8ti9"
def create_padding_mask(seq):
seq = tf.cast(tf.math.equal(seq, 0), tf.float32)
  # add extra dimensions so that the padding can be added
  # to the attention logits.
return seq[:, tf.newaxis, tf.newaxis, :] # (batch_size, 1, 1, seq_len)
# + colab={} colab_type="code" id="A7BYeBCNvi7n"
x = tf.constant([[7, 6, 0, 0, 1], [1, 2, 3, 0, 0], [0, 0, 0, 4, 5]])
create_padding_mask(x)
# + [markdown] colab_type="text" id="Z0hzukDBgVom"
# The look-ahead mask is used to mask the future tokens in a sequence. In other words, the mask indicates which entries should not be used.
#
# This means that to predict the third word, only the first and second words will be used. Similarly, to predict the fourth word, only the first, second and third words will be used, and so on.
# + colab={} colab_type="code" id="dVxS8OPI9uI0"
def create_look_ahead_mask(size):
mask = 1 - tf.linalg.band_part(tf.ones((size, size)), -1, 0)
return mask # (seq_len, seq_len)
# + colab={} colab_type="code" id="yxKGuXxaBeeE"
x = tf.random.uniform((1, 3))
temp = create_look_ahead_mask(x.shape[1])
temp
# + [markdown] colab_type="text" id="xluDl5cXYy4y"
# ## Scaled dot product attention
# + [markdown] colab_type="text" id="vsxEE_-Wa1gF"
# <img src="https://tensorflow.google.cn/images/tutorials/transformer/scaled_attention.png" width="500" alt="scaled_dot_product_attention">
#
# The attention function used by the transformer takes three inputs: Q (query), K (key), V (value). The equation used to calculate the attention weights is:
#
# $$\Large{Attention(Q, K, V) = softmax_k(\frac{QK^T}{\sqrt{d_k}}) V} $$
#
# The dot-product attention is scaled by a factor of the square root of the depth. This is done because for large values of depth, the dot product grows large in magnitude, pushing the softmax function into regions where it has very small gradients, resulting in a very hard softmax.
#
# For example, consider that `Q` and `K` have a mean of 0 and a variance of 1. Their matrix multiplication will have a mean of 0 and a variance of `dk`. Hence, the *square root of `dk`* is used for scaling (and not any other number), because the matmul of `Q` and `K` should then have a mean of 0 and a variance of 1, so that you get a gentler softmax.
#
# The mask is multiplied with -1e9 (close to negative infinity). This is done because the mask is summed with the scaled matrix multiplication of Q and K and is applied immediately before a softmax. The goal is to zero out these cells, and large negative inputs to softmax are near zero in the output.
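# The variance argument above can be checked empirically with NumPy: for standard-normal rows of depth `dk`, the raw dot products have variance close to `dk`, and dividing by `sqrt(dk)` brings it back to about 1. A minimal sketch, independent of the TensorFlow code below (`dk = 256` is an arbitrary choice for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
dk = 256
q = rng.standard_normal((2000, dk))
k = rng.standard_normal((2000, dk))

logits = q @ k.T               # variance of the raw dot products grows like dk
scaled = logits / np.sqrt(dk)  # scaling brings the variance back near 1

print(logits.var(), scaled.var())
```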
# + colab={} colab_type="code" id="LazzUq3bJ5SH"
def scaled_dot_product_attention(q, k, v, mask):
  """Calculate the attention weights.
  q, k, v must have matching leading dimensions.
  k, v must have matching penultimate dimensions, i.e.: seq_len_k = seq_len_v.
  The mask has different shapes depending on its type (padding or look ahead)
  but it must be broadcastable for addition.
  Args:
    q: query shape == (..., seq_len_q, depth)
    k: key shape == (..., seq_len_k, depth)
    v: value shape == (..., seq_len_v, depth_v)
    mask: Float tensor with shape broadcastable
          to (..., seq_len_q, seq_len_k). Defaults to None.
  Returns:
    output, attention_weights
  """
matmul_qk = tf.matmul(q, k, transpose_b=True) # (..., seq_len_q, seq_len_k)
  # scale matmul_qk
dk = tf.cast(tf.shape(k)[-1], tf.float32)
scaled_attention_logits = matmul_qk / tf.math.sqrt(dk)
  # add the mask to the scaled tensor.
if mask is not None:
scaled_attention_logits += (mask * -1e9)
  # softmax is normalized on the last axis (seq_len_k) so that the scores
  # add up to 1.
attention_weights = tf.nn.softmax(scaled_attention_logits, axis=-1) # (..., seq_len_q, seq_len_k)
output = tf.matmul(attention_weights, v) # (..., seq_len_q, depth_v)
return output, attention_weights
# + [markdown] colab_type="text" id="FiqETnhCkoXh"
# As the softmax normalization is done on K, its values decide the amount of importance given to Q.
#
# The output represents the multiplication of the attention weights and the V (value) vector. This ensures that the words you want to focus on are kept as-is and the irrelevant words are flushed out.
# + colab={} colab_type="code" id="n90YjClyInFy"
def print_out(q, k, v):
temp_out, temp_attn = scaled_dot_product_attention(
q, k, v, None)
print ('Attention weights are:')
print (temp_attn)
print ('Output is:')
print (temp_out)
# + colab={} colab_type="code" id="yAzUAf2DPlNt"
np.set_printoptions(suppress=True)
temp_k = tf.constant([[10,0,0],
[0,10,0],
[0,0,10],
[0,0,10]], dtype=tf.float32) # (4, 3)
temp_v = tf.constant([[ 1,0],
[ 10,0],
[ 100,5],
[1000,6]], dtype=tf.float32) # (4, 2)
# This `query` matches the second `key`,
# so the second `value` is returned.
temp_q = tf.constant([[0, 10, 0]], dtype=tf.float32) # (1, 3)
print_out(temp_q, temp_k, temp_v)
# + colab={} colab_type="code" id="zg6k-fGhgXra"
# This query matches a repeated key (the third and fourth),
# so all associated values get averaged.
temp_q = tf.constant([[0, 0, 10]], dtype=tf.float32) # (1, 3)
print_out(temp_q, temp_k, temp_v)
# + colab={} colab_type="code" id="UAq3YOzUgXhb"
# This query matches the first and second keys equally,
# so their values get averaged.
temp_q = tf.constant([[10, 10, 0]], dtype=tf.float32) # (1, 3)
print_out(temp_q, temp_k, temp_v)
# + [markdown] colab_type="text" id="aOz-4_XIhaTP"
# *Pass* all the queries together.
# + colab={} colab_type="code" id="6dlU8Tm-hYrF"
temp_q = tf.constant([[0, 0, 10], [0, 10, 0], [10, 10, 0]], dtype=tf.float32) # (3, 3)
print_out(temp_q, temp_k, temp_v)
# + [markdown] colab_type="text" id="kmzGPEy64qmA"
# ## Multi-head attention
# + [markdown] colab_type="text" id="fz5BMC8Kaoqo"
# <img src="https://tensorflow.google.cn/images/tutorials/transformer/multi_head_attention.png" width="500" alt="multi-head attention">
#
#
# Multi-head attention consists of four parts:
# *    Linear layers and split into heads.
# *    Scaled dot-product attention.
# *    Concatenation of heads.
# *    Final linear layer.
# + [markdown] colab_type="text" id="JPmbr6F1C-v_"
# Each multi-head attention block gets three inputs: Q (query), K (key), V (value). These are put through linear (Dense) layers and split up into multiple heads.
#
# The `scaled_dot_product_attention` defined above is applied to each head (broadcasted for efficiency). An appropriate mask must be used in the attention step. The attention output for each head is then concatenated (using `tf.transpose` and `tf.reshape`) and put through a final `Dense` layer.
#
# Instead of one single attention head, Q, K, and V are split into multiple heads because it allows the model to jointly attend to information from different representation subspaces at different positions. After the split, each head has a reduced dimensionality, so the total computation cost is the same as a single head attention with full dimensionality.
# + colab={} colab_type="code" id="BSV3PPKsYecw"
class MultiHeadAttention(tf.keras.layers.Layer):
def __init__(self, d_model, num_heads):
super(MultiHeadAttention, self).__init__()
self.num_heads = num_heads
self.d_model = d_model
assert d_model % self.num_heads == 0
self.depth = d_model // self.num_heads
self.wq = tf.keras.layers.Dense(d_model)
self.wk = tf.keras.layers.Dense(d_model)
self.wv = tf.keras.layers.Dense(d_model)
self.dense = tf.keras.layers.Dense(d_model)
def split_heads(self, x, batch_size):
    """Split the last dimension into (num_heads, depth).
    Transpose the result such that the shape is (batch_size, num_heads, seq_len, depth)
"""
x = tf.reshape(x, (batch_size, -1, self.num_heads, self.depth))
return tf.transpose(x, perm=[0, 2, 1, 3])
def call(self, v, k, q, mask):
batch_size = tf.shape(q)[0]
q = self.wq(q) # (batch_size, seq_len, d_model)
k = self.wk(k) # (batch_size, seq_len, d_model)
v = self.wv(v) # (batch_size, seq_len, d_model)
q = self.split_heads(q, batch_size) # (batch_size, num_heads, seq_len_q, depth)
k = self.split_heads(k, batch_size) # (batch_size, num_heads, seq_len_k, depth)
v = self.split_heads(v, batch_size) # (batch_size, num_heads, seq_len_v, depth)
# scaled_attention.shape == (batch_size, num_heads, seq_len_q, depth)
# attention_weights.shape == (batch_size, num_heads, seq_len_q, seq_len_k)
scaled_attention, attention_weights = scaled_dot_product_attention(
q, k, v, mask)
scaled_attention = tf.transpose(scaled_attention, perm=[0, 2, 1, 3]) # (batch_size, seq_len_q, num_heads, depth)
concat_attention = tf.reshape(scaled_attention,
(batch_size, -1, self.d_model)) # (batch_size, seq_len_q, d_model)
output = self.dense(concat_attention) # (batch_size, seq_len_q, d_model)
return output, attention_weights
# + [markdown] colab_type="text" id="0D8FJue5lDyZ"
# Create a `MultiHeadAttention` layer to try out. At each location in the sequence, `y`, the `MultiHeadAttention` runs all 8 attention heads across all other locations in the sequence, returning a new vector of the same length at each location.
# + colab={} colab_type="code" id="Hu94p-_-2_BX"
temp_mha = MultiHeadAttention(d_model=512, num_heads=8)
y = tf.random.uniform((1, 60, 512)) # (batch_size, encoder_sequence, d_model)
out, attn = temp_mha(y, k=y, q=y, mask=None)
out.shape, attn.shape
# + [markdown] colab_type="text" id="RdDqGayx67vv"
# ## Point wise feed forward network
# + [markdown] colab_type="text" id="gBqzJXGfHK3X"
# A point wise feed forward network consists of two fully-connected layers with a ReLU activation in between.
# + colab={} colab_type="code" id="ET7xLt0yCT6Z"
def point_wise_feed_forward_network(d_model, dff):
return tf.keras.Sequential([
tf.keras.layers.Dense(dff, activation='relu'), # (batch_size, seq_len, dff)
tf.keras.layers.Dense(d_model) # (batch_size, seq_len, d_model)
])
# + colab={} colab_type="code" id="mytb1lPyOHLB"
sample_ffn = point_wise_feed_forward_network(512, 2048)
sample_ffn(tf.random.uniform((64, 50, 512))).shape
# + [markdown] colab_type="text" id="7e7hKcxn6-zd"
# ## Encoder and decoder
# + [markdown] colab_type="text" id="yScbC0MUH8dS"
# <img src="https://tensorflow.google.cn/images/tutorials/transformer/transformer.png" width="600" alt="transformer">
# + [markdown] colab_type="text" id="MfYJG-Kvgwy2"
# The transformer model follows the same general pattern as a standard [sequence to sequence with attention model](nmt_with_attention.ipynb).
#
# * The input sentence is passed through `N` encoder layers that generate an output for each word/token in the sequence.
# * The decoder attends on the encoder's output and its own input (self-attention) to predict the next word.
# + [markdown] colab_type="text" id="QFv-FNYUmvpn"
# ### Encoder layer
#
# Each encoder layer consists of the following sublayers:
#
# 1. Multi-head attention (with padding mask)
# 2. Point wise feed forward networks.
#
# Each of these sublayers has a residual connection around it, followed by a layer normalization. Residual connections help in avoiding the vanishing gradient problem in deep networks.
#
# The output of each sublayer is `LayerNorm(x + Sublayer(x))`. The normalization is done on the `d_model` (last) axis. There are N encoder layers in the transformer.
# + colab={} colab_type="code" id="ncyS-Ms3i2x_"
class EncoderLayer(tf.keras.layers.Layer):
def __init__(self, d_model, num_heads, dff, rate=0.1):
super(EncoderLayer, self).__init__()
self.mha = MultiHeadAttention(d_model, num_heads)
self.ffn = point_wise_feed_forward_network(d_model, dff)
self.layernorm1 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
self.layernorm2 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
self.dropout1 = tf.keras.layers.Dropout(rate)
self.dropout2 = tf.keras.layers.Dropout(rate)
def call(self, x, training, mask):
attn_output, _ = self.mha(x, x, x, mask) # (batch_size, input_seq_len, d_model)
attn_output = self.dropout1(attn_output, training=training)
out1 = self.layernorm1(x + attn_output) # (batch_size, input_seq_len, d_model)
ffn_output = self.ffn(out1) # (batch_size, input_seq_len, d_model)
ffn_output = self.dropout2(ffn_output, training=training)
out2 = self.layernorm2(out1 + ffn_output) # (batch_size, input_seq_len, d_model)
return out2
# + colab={} colab_type="code" id="AzZRXdO0mI48"
sample_encoder_layer = EncoderLayer(512, 8, 2048)
sample_encoder_layer_output = sample_encoder_layer(
tf.random.uniform((64, 43, 512)), False, None)
sample_encoder_layer_output.shape # (batch_size, input_seq_len, d_model)
# + [markdown] colab_type="text" id="6LO_48Owmx_o"
# ### Decoder layer
#
# Each decoder layer consists of the following sublayers:
#
# 1. Masked multi-head attention (with look-ahead mask and padding mask)
# 2. Multi-head attention (with padding mask). V (value) and K (key) receive the *encoder output* as inputs. Q (query) receives the *output from the masked multi-head attention sublayer*.
# 3. Point wise feed forward networks
#
# Each of these sublayers has a residual connection around it, followed by a layer normalization. The output of each sublayer is `LayerNorm(x + Sublayer(x))`. The normalization is done on the `d_model` (last) axis.
#
# There are N decoder layers in the transformer.
#
# As Q receives the output from the decoder's first attention block, and K receives the encoder output, the attention weights represent the importance given to the decoder's input based on the encoder's output. In other words, the decoder predicts the next word by looking at the encoder output and self-attending to its own output. See the demonstration above in the scaled dot product attention section.
# + colab={} colab_type="code" id="9SoX0-vd1hue"
class DecoderLayer(tf.keras.layers.Layer):
def __init__(self, d_model, num_heads, dff, rate=0.1):
super(DecoderLayer, self).__init__()
self.mha1 = MultiHeadAttention(d_model, num_heads)
self.mha2 = MultiHeadAttention(d_model, num_heads)
self.ffn = point_wise_feed_forward_network(d_model, dff)
self.layernorm1 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
self.layernorm2 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
self.layernorm3 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
self.dropout1 = tf.keras.layers.Dropout(rate)
self.dropout2 = tf.keras.layers.Dropout(rate)
self.dropout3 = tf.keras.layers.Dropout(rate)
def call(self, x, enc_output, training,
look_ahead_mask, padding_mask):
# enc_output.shape == (batch_size, input_seq_len, d_model)
attn1, attn_weights_block1 = self.mha1(x, x, x, look_ahead_mask) # (batch_size, target_seq_len, d_model)
attn1 = self.dropout1(attn1, training=training)
out1 = self.layernorm1(attn1 + x)
attn2, attn_weights_block2 = self.mha2(
enc_output, enc_output, out1, padding_mask) # (batch_size, target_seq_len, d_model)
attn2 = self.dropout2(attn2, training=training)
out2 = self.layernorm2(attn2 + out1) # (batch_size, target_seq_len, d_model)
ffn_output = self.ffn(out2) # (batch_size, target_seq_len, d_model)
ffn_output = self.dropout3(ffn_output, training=training)
out3 = self.layernorm3(ffn_output + out2) # (batch_size, target_seq_len, d_model)
return out3, attn_weights_block1, attn_weights_block2
# + colab={} colab_type="code" id="Ne2Bqx8k71l0"
sample_decoder_layer = DecoderLayer(512, 8, 2048)
sample_decoder_layer_output, _, _ = sample_decoder_layer(
tf.random.uniform((64, 50, 512)), sample_encoder_layer_output,
False, None, None)
sample_decoder_layer_output.shape # (batch_size, target_seq_len, d_model)
# + [markdown] colab_type="text" id="SE1H51Ajm0q1"
# ### Encoder
#
# The `Encoder` consists of:
# 1. Input Embedding
# 2. Positional Encoding
# 3. N encoder layers
#
# The input is put through an embedding which is summed with the positional encoding. The output of this summation is the input to the encoder layers. The output of the encoder is the input to the decoder.
# + colab={} colab_type="code" id="jpEox7gJ8FCI"
class Encoder(tf.keras.layers.Layer):
def __init__(self, num_layers, d_model, num_heads, dff, input_vocab_size,
maximum_position_encoding, rate=0.1):
super(Encoder, self).__init__()
self.d_model = d_model
self.num_layers = num_layers
self.embedding = tf.keras.layers.Embedding(input_vocab_size, d_model)
self.pos_encoding = positional_encoding(maximum_position_encoding,
self.d_model)
self.enc_layers = [EncoderLayer(d_model, num_heads, dff, rate)
for _ in range(num_layers)]
self.dropout = tf.keras.layers.Dropout(rate)
def call(self, x, training, mask):
seq_len = tf.shape(x)[1]
# Add the embedding and the positional encoding.
x = self.embedding(x) # (batch_size, input_seq_len, d_model)
x *= tf.math.sqrt(tf.cast(self.d_model, tf.float32))
x += self.pos_encoding[:, :seq_len, :]
x = self.dropout(x, training=training)
for i in range(self.num_layers):
x = self.enc_layers[i](x, training, mask)
return x # (batch_size, input_seq_len, d_model)
# + colab={} colab_type="code" id="8QG9nueFQKXx"
sample_encoder = Encoder(num_layers=2, d_model=512, num_heads=8,
dff=2048, input_vocab_size=8500,
maximum_position_encoding=10000)
sample_encoder_output = sample_encoder(tf.random.uniform((64, 62)),
training=False, mask=None)
print (sample_encoder_output.shape) # (batch_size, input_seq_len, d_model)
# + [markdown] colab_type="text" id="p-uO6ls8m2O5"
# ### Decoder
# + [markdown] colab_type="text" id="ZtT7PKzrXkNr"
# The `Decoder` consists of:
# 1. Output embedding
# 2. Positional encoding
# 3. N decoder layers
#
# The target is put through an embedding, which is summed with the positional encoding. The output of this summation is the input to the decoder layers. The output of the decoder is the input to the final linear layer.
# + colab={} colab_type="code" id="d5_d5-PLQXwY"
class Decoder(tf.keras.layers.Layer):
def __init__(self, num_layers, d_model, num_heads, dff, target_vocab_size,
maximum_position_encoding, rate=0.1):
super(Decoder, self).__init__()
self.d_model = d_model
self.num_layers = num_layers
self.embedding = tf.keras.layers.Embedding(target_vocab_size, d_model)
self.pos_encoding = positional_encoding(maximum_position_encoding, d_model)
self.dec_layers = [DecoderLayer(d_model, num_heads, dff, rate)
for _ in range(num_layers)]
self.dropout = tf.keras.layers.Dropout(rate)
def call(self, x, enc_output, training,
look_ahead_mask, padding_mask):
seq_len = tf.shape(x)[1]
attention_weights = {}
x = self.embedding(x) # (batch_size, target_seq_len, d_model)
x *= tf.math.sqrt(tf.cast(self.d_model, tf.float32))
x += self.pos_encoding[:, :seq_len, :]
x = self.dropout(x, training=training)
for i in range(self.num_layers):
x, block1, block2 = self.dec_layers[i](x, enc_output, training,
look_ahead_mask, padding_mask)
attention_weights['decoder_layer{}_block1'.format(i+1)] = block1
attention_weights['decoder_layer{}_block2'.format(i+1)] = block2
# x.shape == (batch_size, target_seq_len, d_model)
return x, attention_weights
# + colab={} colab_type="code" id="a1jXoAMRZyvu"
sample_decoder = Decoder(num_layers=2, d_model=512, num_heads=8,
dff=2048, target_vocab_size=8000,
maximum_position_encoding=5000)
output, attn = sample_decoder(tf.random.uniform((64, 26)),
enc_output=sample_encoder_output,
training=False, look_ahead_mask=None,
padding_mask=None)
output.shape, attn['decoder_layer2_block2'].shape
# + [markdown] colab_type="text" id="y54xnJnuYgJ7"
# ## Create the Transformer
# + [markdown] colab_type="text" id="uERO1y54cOKq"
# The Transformer consists of the encoder, the decoder, and a final linear layer. The output of the decoder is the input to the linear layer, whose output is returned.
# + colab={} colab_type="code" id="PED3bIpOYkBu"
class Transformer(tf.keras.Model):
def __init__(self, num_layers, d_model, num_heads, dff, input_vocab_size,
target_vocab_size, pe_input, pe_target, rate=0.1):
super(Transformer, self).__init__()
self.encoder = Encoder(num_layers, d_model, num_heads, dff,
input_vocab_size, pe_input, rate)
self.decoder = Decoder(num_layers, d_model, num_heads, dff,
target_vocab_size, pe_target, rate)
self.final_layer = tf.keras.layers.Dense(target_vocab_size)
def call(self, inp, tar, training, enc_padding_mask,
look_ahead_mask, dec_padding_mask):
enc_output = self.encoder(inp, training, enc_padding_mask) # (batch_size, inp_seq_len, d_model)
# dec_output.shape == (batch_size, tar_seq_len, d_model)
dec_output, attention_weights = self.decoder(
tar, enc_output, training, look_ahead_mask, dec_padding_mask)
final_output = self.final_layer(dec_output) # (batch_size, tar_seq_len, target_vocab_size)
return final_output, attention_weights
# + colab={} colab_type="code" id="tJ4fbQcIkHW1"
sample_transformer = Transformer(
num_layers=2, d_model=512, num_heads=8, dff=2048,
input_vocab_size=8500, target_vocab_size=8000,
pe_input=10000, pe_target=6000)
temp_input = tf.random.uniform((64, 62))
temp_target = tf.random.uniform((64, 26))
fn_out, _ = sample_transformer(temp_input, temp_target, training=False,
enc_padding_mask=None,
look_ahead_mask=None,
dec_padding_mask=None)
fn_out.shape # (batch_size, tar_seq_len, target_vocab_size)
# + [markdown] colab_type="text" id="wsINyf1VEQLC"
# ## Set hyperparameters
# + [markdown] colab_type="text" id="zVjWCxFNcgbt"
# To keep this example small and relatively fast, the values for *num_layers, d_model, and dff* have been reduced.
#
# The base model described in the paper used: *num_layers=6*, *d_model=512*, *dff=2048*. See the [paper](https://arxiv.org/abs/1706.03762) for all the other versions of the Transformer.
#
# Note: By changing the values below, you can get a model that achieves state of the art on many tasks.
# + colab={} colab_type="code" id="lnJn5SLA2ahP"
num_layers = 4
d_model = 128
dff = 512
num_heads = 8
input_vocab_size = tokenizer_pt.vocab_size + 2
target_vocab_size = tokenizer_en.vocab_size + 2
dropout_rate = 0.1
# + [markdown] colab_type="text" id="xYEGhEOtzn5W"
# ## Optimizer
# + [markdown] colab_type="text" id="GOmWW--yP3zx"
# Use the Adam optimizer with a custom learning rate scheduler, according to the formula in the [paper](https://arxiv.org/abs/1706.03762).
#
# $$\Large{lrate = d_{model}^{-0.5} * min(step{\_}num^{-0.5}, step{\_}num * warmup{\_}steps^{-1.5})}$$
#
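Before building the Keras schedule below, the formula can be sanity-checked in plain Python — a hedged sketch with illustrative values, not part of the original tutorial:

```python
# Illustrative check of the learning-rate formula (the d_model and
# warmup_steps values here are examples only).
def lrate(step, d_model=128.0, warmup_steps=4000.0):
    return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)

# The rate grows during warmup, peaks around step == warmup_steps,
# then decays proportionally to 1 / sqrt(step).
print(lrate(1000) < lrate(4000) > lrate(16000))  # True
```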
# + colab={} colab_type="code" id="iYQdOO1axwEI"
class CustomSchedule(tf.keras.optimizers.schedules.LearningRateSchedule):
def __init__(self, d_model, warmup_steps=4000):
super(CustomSchedule, self).__init__()
self.d_model = d_model
self.d_model = tf.cast(self.d_model, tf.float32)
self.warmup_steps = warmup_steps
def __call__(self, step):
arg1 = tf.math.rsqrt(step)
arg2 = step * (self.warmup_steps ** -1.5)
return tf.math.rsqrt(self.d_model) * tf.math.minimum(arg1, arg2)
# + colab={} colab_type="code" id="7r4scdulztRx"
learning_rate = CustomSchedule(d_model)
optimizer = tf.keras.optimizers.Adam(learning_rate, beta_1=0.9, beta_2=0.98,
epsilon=1e-9)
# + colab={} colab_type="code" id="f33ZCgvHpPdG"
temp_learning_rate_schedule = CustomSchedule(d_model)
plt.plot(temp_learning_rate_schedule(tf.range(40000, dtype=tf.float32)))
plt.ylabel("Learning Rate")
plt.xlabel("Train Step")
# + [markdown] colab_type="text" id="YgkDE7hzo8r5"
# ## Loss and metrics
# + [markdown] colab_type="text" id="oxGJtoDuYIHL"
# Since the target sequences are padded, it is important to apply a padding mask when calculating the loss.
# + colab={} colab_type="code" id="MlhsJMm0TW_B"
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=True, reduction='none')
# + colab={} colab_type="code" id="67oqVHiT0Eiu"
def loss_function(real, pred):
mask = tf.math.logical_not(tf.math.equal(real, 0))
loss_ = loss_object(real, pred)
mask = tf.cast(mask, dtype=loss_.dtype)
loss_ *= mask
return tf.reduce_mean(loss_)
# + colab={} colab_type="code" id="phlyxMnm-Tpx"
train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
name='train_accuracy')
# + [markdown] colab_type="text" id="aeHumfr7zmMa"
# ## Training and checkpointing
# + colab={} colab_type="code" id="UiysUa--4tOU"
transformer = Transformer(num_layers, d_model, num_heads, dff,
input_vocab_size, target_vocab_size,
pe_input=input_vocab_size,
pe_target=target_vocab_size,
rate=dropout_rate)
# + colab={} colab_type="code" id="ZOJUSB1T8GjM"
def create_masks(inp, tar):
# Encoder padding mask
enc_padding_mask = create_padding_mask(inp)
# Used in the second attention block of the decoder.
# This padding mask is used to mask the encoder outputs.
dec_padding_mask = create_padding_mask(inp)
# Used in the first attention block of the decoder.
# Used to pad and mask future tokens in the input received by the decoder.
look_ahead_mask = create_look_ahead_mask(tf.shape(tar)[1])
dec_target_padding_mask = create_padding_mask(tar)
combined_mask = tf.maximum(dec_target_padding_mask, look_ahead_mask)
return enc_padding_mask, combined_mask, dec_padding_mask
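For intuition, the look-ahead part of `combined_mask` is the complement of a lower-triangular matrix: a 1 marks a future position a query may not attend to. A small NumPy sketch, assuming the 0/1 convention used by `create_look_ahead_mask`:

```python
import numpy as np

# Look-ahead mask for a length-3 target sequence: row i may only
# attend to positions <= i, so the strict upper triangle is 1 (masked).
seq_len = 3
look_ahead = 1 - np.tril(np.ones((seq_len, seq_len)))
print(look_ahead)
# [[0. 1. 1.]
#  [0. 0. 1.]
#  [0. 0. 0.]]
```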
# + [markdown] colab_type="text" id="Fzuf06YZp66w"
# Create the checkpoint path and the checkpoint manager. This will be used to save checkpoints every `n` epochs.
# + colab={} colab_type="code" id="hNhuYfllndLZ"
checkpoint_path = "./checkpoints/train"
ckpt = tf.train.Checkpoint(transformer=transformer,
optimizer=optimizer)
ckpt_manager = tf.train.CheckpointManager(ckpt, checkpoint_path, max_to_keep=5)
# If a checkpoint exists, restore the latest one.
if ckpt_manager.latest_checkpoint:
ckpt.restore(ckpt_manager.latest_checkpoint)
print ('Latest checkpoint restored!!')
# + [markdown] colab_type="text" id="0Di_Yaa1gf9r"
# The target is divided into tar_inp and tar_real. `tar_inp` is passed as input to the decoder. `tar_real` is the same input shifted by 1: at each position in `tar_inp`, `tar_real` contains the next token that should be predicted.
#
# For example, `sentence` = "SOS A lion in the jungle is sleeping EOS"
#
# `tar_inp` = "SOS A lion in the jungle is sleeping"
#
# `tar_real` = "A lion in the jungle is sleeping EOS"
#
# The Transformer is an auto-regressive model: it makes predictions one part at a time, and uses its output so far to decide what to do next.
#
# During training, this example uses teacher forcing (as in the [text generation tutorial](./text_generation.ipynb)). Teacher forcing passes the true output to the next time step regardless of what the model predicts at the current time step.
#
# As the transformer predicts each word, *self-attention* allows it to look at the previous words in the input sequence to better predict the next word.
#
# To prevent the model from peeking at the expected output, the model uses a look-ahead mask.
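The shift described above can be sketched with a plain Python list of hypothetical token ids (8000 and 8001 stand in for the start and end tokens):

```python
# Hypothetical token ids for "SOS A lion in the jungle is sleeping EOS"
tar = [8000, 17, 523, 4, 1, 988, 9, 301, 8001]

tar_inp = tar[:-1]   # everything except the final EOS token
tar_real = tar[1:]   # the same sequence shifted left by one

# At position i, tar_real[i] is the token that should be predicted
# after the decoder has seen tar_inp[: i + 1].
print(tar_real[0] == tar_inp[1])  # True
```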
# + colab={} colab_type="code" id="LKpoA6q1sJFj"
EPOCHS = 20
# + colab={} colab_type="code" id="iJwmp9OE29oj"
# The @tf.function trace-compiles train_step into a TF graph for faster
# execution. The function specializes to the precise shape of the argument
# tensors. To avoid re-tracing due to variable sequence lengths or variable
# batch sizes (the last batch is smaller), use input_signature to specify
# more generic shapes.
train_step_signature = [
tf.TensorSpec(shape=(None, None), dtype=tf.int64),
tf.TensorSpec(shape=(None, None), dtype=tf.int64),
]
@tf.function(input_signature=train_step_signature)
def train_step(inp, tar):
tar_inp = tar[:, :-1]
tar_real = tar[:, 1:]
enc_padding_mask, combined_mask, dec_padding_mask = create_masks(inp, tar_inp)
with tf.GradientTape() as tape:
predictions, _ = transformer(inp, tar_inp,
True,
enc_padding_mask,
combined_mask,
dec_padding_mask)
loss = loss_function(tar_real, predictions)
gradients = tape.gradient(loss, transformer.trainable_variables)
optimizer.apply_gradients(zip(gradients, transformer.trainable_variables))
train_loss(loss)
train_accuracy(tar_real, predictions)
# + [markdown] colab_type="text" id="qM2PDWGDJ_8V"
# Portuguese is used as the input language and English is the target language.
# + colab={} colab_type="code" id="bbvmaKNiznHZ"
for epoch in range(EPOCHS):
start = time.time()
train_loss.reset_states()
train_accuracy.reset_states()
# inp -> portuguese, tar -> english
for (batch, (inp, tar)) in enumerate(train_dataset):
train_step(inp, tar)
if batch % 50 == 0:
print ('Epoch {} Batch {} Loss {:.4f} Accuracy {:.4f}'.format(
epoch + 1, batch, train_loss.result(), train_accuracy.result()))
if (epoch + 1) % 5 == 0:
ckpt_save_path = ckpt_manager.save()
print ('Saving checkpoint for epoch {} at {}'.format(epoch+1,
ckpt_save_path))
print ('Epoch {} Loss {:.4f} Accuracy {:.4f}'.format(epoch + 1,
train_loss.result(),
train_accuracy.result()))
print ('Time taken for 1 epoch: {} secs\n'.format(time.time() - start))
# + [markdown] colab_type="text" id="QfcsSWswSdGV"
# ## Evaluate
# + [markdown] colab_type="text" id="y6APsFrgImLW"
# The following steps are used for evaluation:
#
# * Encode the input sentence using the Portuguese tokenizer (`tokenizer_pt`). Moreover, add the start and end tokens so the input is equivalent to what the model was trained with. This is the encoder input.
# * The decoder input is the `start token == tokenizer_en.vocab_size`.
# * Calculate the padding masks and the look-ahead mask.
# * The `decoder` then outputs the predictions by looking at the `encoder output` and its own output (self-attention).
# * Select the last word and calculate its argmax.
# * Concatenate the predicted word to the decoder input and pass it to the decoder.
# * In this approach, the decoder predicts the next word based on the previous words it predicted.
#
# Note: The model used here has less capacity to keep it relatively fast, so the predictions may be less accurate. To reproduce the results in the paper, use the entire dataset and a base transformer model or transformer XL, by changing the hyperparameters above.
# + colab={} colab_type="code" id="5buvMlnvyrFm"
def evaluate(inp_sentence):
start_token = [tokenizer_pt.vocab_size]
end_token = [tokenizer_pt.vocab_size + 1]
# The input sentence is Portuguese; add the start and end tokens
inp_sentence = start_token + tokenizer_pt.encode(inp_sentence) + end_token
encoder_input = tf.expand_dims(inp_sentence, 0)
# As the target is English, the first word given to the transformer
# should be the English start token.
decoder_input = [tokenizer_en.vocab_size]
output = tf.expand_dims(decoder_input, 0)
for i in range(MAX_LENGTH):
enc_padding_mask, combined_mask, dec_padding_mask = create_masks(
encoder_input, output)
# predictions.shape == (batch_size, seq_len, vocab_size)
predictions, attention_weights = transformer(encoder_input,
output,
False,
enc_padding_mask,
combined_mask,
dec_padding_mask)
# Select the last word from the seq_len dimension
predictions = predictions[:, -1:, :]  # (batch_size, 1, vocab_size)
predicted_id = tf.cast(tf.argmax(predictions, axis=-1), tf.int32)
# Return the result if predicted_id is equal to the end token
if predicted_id == tokenizer_en.vocab_size+1:
return tf.squeeze(output, axis=0), attention_weights
# Concatenate predicted_id to the output, which is given to the decoder as its input.
output = tf.concat([output, predicted_id], axis=-1)
return tf.squeeze(output, axis=0), attention_weights
# + colab={} colab_type="code" id="CN-BV43FMBej"
def plot_attention_weights(attention, sentence, result, layer):
fig = plt.figure(figsize=(16, 8))
sentence = tokenizer_pt.encode(sentence)
attention = tf.squeeze(attention[layer], axis=0)
for head in range(attention.shape[0]):
ax = fig.add_subplot(2, 4, head+1)
# Plot the attention weights
ax.matshow(attention[head][:-1, :], cmap='viridis')
fontdict = {'fontsize': 10}
ax.set_xticks(range(len(sentence)+2))
ax.set_yticks(range(len(result)))
ax.set_ylim(len(result)-1.5, -0.5)
ax.set_xticklabels(
['<start>']+[tokenizer_pt.decode([i]) for i in sentence]+['<end>'],
fontdict=fontdict, rotation=90)
ax.set_yticklabels([tokenizer_en.decode([i]) for i in result
if i < tokenizer_en.vocab_size],
fontdict=fontdict)
ax.set_xlabel('Head {}'.format(head+1))
plt.tight_layout()
plt.show()
# + colab={} colab_type="code" id="lU2_yG_vBGza"
def translate(sentence, plot=''):
result, attention_weights = evaluate(sentence)
predicted_sentence = tokenizer_en.decode([i for i in result
if i < tokenizer_en.vocab_size])
print('Input: {}'.format(sentence))
print('Predicted translation: {}'.format(predicted_sentence))
if plot:
plot_attention_weights(attention_weights, sentence, result, plot)
# + colab={} colab_type="code" id="YsxrAlvFG8SZ"
translate("este é um problema que temos que resolver.")
print ("Real translation: this is a problem we have to solve .")
# + colab={} colab_type="code" id="7EH5y_aqI4t1"
translate("os meus vizinhos ouviram sobre esta ideia.")
print ("Real translation: and my neighboring homes heard about this idea .")
# + colab={} colab_type="code" id="J-hVCTSUMlkb"
translate("vou então muito rapidamente partilhar convosco algumas histórias de algumas coisas mágicas que aconteceram.")
print ("Real translation: so i 'll just share with you some stories very quickly of some magical things that have happened .")
# + [markdown] colab_type="text" id="_1MxkSZvz0jX"
# You can pass different layers and attention blocks of the decoder to the `plot` parameter.
# + colab={} colab_type="code" id="t-kFyiOLH0xg"
translate("este é o primeiro livro que eu fiz.", plot='decoder_layer4_block2')
print ("Real translation: this is the first book i've ever done.")
# + [markdown] colab_type="text" id="RqQ1fIsLwkGE"
# ## Summary
#
# In this tutorial, you learned about positional encoding, multi-head attention, the importance of masking, and how to create a transformer.
#
# Try using a different dataset to train the transformer. You can also create the base transformer or transformer XL by changing the hyperparameters above. You can also use the layers defined here to create [BERT](https://arxiv.org/abs/1810.04805) and train state-of-the-art models. Furthermore, you can implement beam search to get better predictions.
| site/zh-cn/tutorials/text/transformer.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.8 64-bit (''myenv'': conda)'
# name: python3
# ---
# # IF Algorithm Analysis
# This notebook shows the problems of the _standard_ algorithm, using the __full__ training set.
# +
import scipy.io as sio
import numpy as np
import matplotlib.pylab as plt
import matplotlib.cm as cm
import pandas as pd
from scipy import stats
from sklearn import metrics
from os.path import dirname, join as pjoin
from sklearn.ensemble import IsolationForest
from sklearn.ensemble._iforest import _average_path_length
import seaborn as sns
def get_data(name):
print('\n')
# fix the data directory before starting
filename = pjoin('..','Datasets','data',name)
print(name)
# load data stored in .mat files
mat_contents = sio.loadmat(filename)
X,y = mat_contents['X'],mat_contents['y']
# dataset statistics
n_data = X.shape[0]
n_features = X.shape[1]
n_anomalies = sum(y.flatten())
contamination = n_anomalies/n_data * 100
return X,y
def measure(y_true, y_pred, plot = False):
# apply metrics
fpr, tpr, thresholds = metrics.roc_curve(y_true, y_pred)
auc = metrics.auc(fpr, tpr)
precision, recall, thresholds = metrics.precision_recall_curve(y_true, y_pred)
average_precision_score = metrics.average_precision_score(y_true, y_pred)
if plot == True:
plot_prc(fpr, tpr,auc,recall,precision,average_precision_score)
else:
return average_precision_score
def plot_prc(fpr, tpr,auc,recall,precision,average_precision_score):
fig,(ax1,ax2) = plt.subplots(1,2,figsize=[5*2,5])
def ax_plot(ax,x,y,xlabel,ylabel,title=''):
ax.plot(x,y);ax.set_xlabel(xlabel);ax.set_ylabel(ylabel)
ax.set_title(title);ax.grid()
ax_plot(ax1,fpr, tpr,'fpr', 'tpr',title="auc: {:.3f}".format(auc))
ax_plot(ax2,recall,precision, 'recall','precision', title="average precision: {:.3f}".format(average_precision_score))
# -
# ## Toy datasets
# - single cluster
# - double cluster
# - toroid
# +
def single_cluster(seed):
np.random.seed(seed)
std = 0.1
central_cluster = np.random.randn(1000,2)*std
anomaly = (np.random.rand(50,2)-0.5)*2
data = np.vstack([central_cluster,anomaly])
labels = np.linalg.norm(data,axis=1)>3*std
plt.figure(figsize=[5,5])
plt.scatter(data[:,0],data[:,1],c=1-labels,cmap='Set1')
plt.xlim([-1,1])
plt.ylim([-1,1])
#plt.grid(True)
plt.xticks([]);plt.yticks([])
return data,labels
def double_cluster(seed):
np.random.seed(seed)
std = 0.1
step = 0.4
sx_cluster = np.random.randn(500,2)*std+step
dx_cluster = np.random.randn(500,2)*std-step
anomaly = (np.random.rand(25,2)-0.5)*2
data = np.vstack([sx_cluster,dx_cluster,anomaly])
labels = (np.linalg.norm(data+step,axis=1)>std*3)&(np.linalg.norm(data-step,axis=1)>std*3)
plt.figure(figsize=[5,5])
plt.scatter(data[:,0],data[:,1],c=1-labels,cmap='Set1')
plt.xlim([-1,1])
plt.ylim([-1,1])
#plt.grid(True)
plt.xticks([]);plt.yticks([])
return data,labels
def square_toroid(seed):
np.random.seed(seed)
std = 0.1
central_cluster = np.random.uniform(-0.8,0.8,[1000,2])
central_cluster = central_cluster[np.any(np.abs(central_cluster)>0.6,axis=1)]
anomaly = np.random.uniform(-0.55,0.55,[100,2])
data = np.vstack([central_cluster,anomaly])
labels = np.hstack([np.zeros(central_cluster.shape[0]),np.ones(anomaly.shape[0])])
plt.figure(figsize=[5,5])
plt.scatter(data[:,0],data[:,1],c=1-labels,cmap='Set1')
#plt.scatter(anomaly[:,0],anomaly[:,1])
plt.xlim([-1,1])
plt.ylim([-1,1])
#plt.grid(True)
plt.xticks([]);plt.yticks([])
return data,labels
def get_grid(n):
x_grid,y_grid = np.meshgrid(np.linspace(-1,1,n),np.linspace(-1,1,n))
data_grid = np.vstack([np.ravel(x_grid),np.ravel(y_grid)]).T
return data_grid,x_grid,y_grid
# -
# Dataset load.
# +
# dataset load
#data_train,labels_train = single_cluster()
data_train,labels_train = double_cluster(0)
#data_train,labels_train = square_toroid()
n = 25 ; data_grid,x_grid,y_grid = get_grid(n)
#data_train,labels_train = get_data('ionosphere')
#data_train,labels_train = get_data('speech')
#data_train,labels_train = get_data('mammography')
# -
# _Standard_ IF training.
# +
# unsupervised training
sk_IF = IsolationForest(random_state=0).fit(data_train)
y_pred = sk_IF.score_samples(data_train)
y_grid_pred = sk_IF.score_samples(data_grid)
# plot of the anomaly score
plt.figure(figsize=[5,5])
plt.scatter(data_train[:,0],data_train[:,1],c=y_pred)
plt.xlim([-1,1]);plt.ylim([-1,1]);plt.grid(True)
plt.contour(x_grid,y_grid,y_grid_pred.reshape(n,n),levels=25)
measure(labels_train, -y_pred, plot=True) # MINUS SIGN
# -
# ## Analysis
#
# To get the tree depths for each sample point, we used a modified version of the original _sklearn_ function, that can be found here:
# - https://github.com/scikit-learn/scikit-learn/blob/844b4be24/sklearn/ensemble/_iforest.py#L26
def compute_tree_anomaly_scores(forest,X):
"""
Compute the score of each samples in X going through the extra trees.
Parameters
----------
X : array-like or sparse matrix
Data matrix.
subsample_features : bool
Whether features should be subsampled.
"""
n_samples = X.shape[0]
depths = np.zeros(n_samples, order="f")
collection_tree_anomaly_scores = []
for tree in forest.estimators_:
leaves_index = tree.apply(X)
node_indicator = tree.decision_path(X)
n_samples_leaf = tree.tree_.n_node_samples[leaves_index]
tree_anomaly_scores = (
np.ravel(node_indicator.sum(axis=1))
+ _average_path_length(n_samples_leaf)
- 1.0)
depths += tree_anomaly_scores
collection_tree_anomaly_scores.append(tree_anomaly_scores)
denominator = len(forest.estimators_) * _average_path_length([forest.max_samples_])
scores = 2 ** (
# For a single training sample, denominator and depth are 0.
# Therefore, we set the score manually to 1.
-np.divide(
depths, denominator, out=np.ones_like(depths), where=denominator != 0
)
)
return scores,np.array(collection_tree_anomaly_scores)
# Compute the anomaly scores.
# +
# compute the anomaly scores for each sample
sklearn_scores,tree_train = compute_tree_anomaly_scores(sk_IF,data_train)
# check 1
plt.plot(sklearn_scores == y_pred)
plt.title('the two functions correspond');plt.grid()
# check 2
print(measure(labels_train,-y_pred) == measure(labels_train,-tree_train.mean(axis=0))) # MINUS SIGN
print("Original forest average precision: {:.3}".format(measure(labels_train,-tree_train.mean(axis=0)))) # MINUS SIGN
# -
# Compute the average precision for each tree.
# average precision for each tree
ap_tree_train = np.array([measure(labels_train, - __tree_train__) for __tree_train__ in tree_train]) ## MINUS SIGN
# histogram of average precisions of the trees
_ = plt.hist(ap_tree_train)
plt.title('histogram of the tree average precision');plt.grid(True)
# ### Best strategy
# Compute the strategy named _best_
# +
# learns the best tree order, according to the average precision score previously computed
learned_ordering = np.argsort(ap_tree_train)[::-1]
# sorts the average precisions, using the learned ordering
# this step is not used in the algorithm, it is just a check
sorted_ap_tree_train = ap_tree_train[learned_ordering]
plt.plot(sorted_ap_tree_train)
plt.title('sorted tree average precision');plt.grid(True)
print("best tree average precision: {:.3f}".format(sorted_ap_tree_train[0]))
print("worst tree average precision: {:.3f}".format(sorted_ap_tree_train[-1]))
# -
# Tree sorting and computation of forests anomaly scores
# +
# orders the trees according to the learned ordering
sorted_tree_train = tree_train[learned_ordering]
# computes the anomaly scores for each forest
forest_train = (sorted_tree_train.cumsum(axis=0).T/np.arange(1,sorted_tree_train.shape[0]+1)).T
# check
plt.plot(forest_train[-1],tree_train.mean(axis=0));plt.grid(True)
plt.title('the anomaly scores of the last forest\n is equal to \n the anomaly scores of the original forest')
_=plt.axis('equal')
# -
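The cumulative-sum trick used above to build `forest_train` can be verified on a tiny array of made-up scores — a minimal sketch:

```python
import numpy as np

# 3 trees x 2 samples of made-up per-tree scores.
scores = np.array([[1.0, 2.0],
                   [3.0, 4.0],
                   [5.0, 6.0]])
running = (scores.cumsum(axis=0).T / np.arange(1, scores.shape[0] + 1)).T

# Row n holds the mean score over the first n + 1 trees.
print(np.allclose(running[1], scores[:2].mean(axis=0)))  # True
```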
# Computes the average precision for each forest.
# +
# average precision for each forest
ap_forest_train = np.array([measure(labels_train, - __forest__) for __forest__ in forest_train]) ## MINUS SIGN
plt.plot(ap_forest_train);plt.grid(True);plt.xlabel("forest composed of n trees");plt.ylabel("average precision score")
print("first forest average precision (= best tree average precision): \t\t {:10.3f}".format(ap_forest_train[0]))
print("last forest average precision (= original standard forest average precision): \t {:10.3f}".format(ap_forest_train[-1]))
plt.hlines(ap_forest_train[0],0,100,color='k',linestyle='--',linewidth=1)
plt.hlines(ap_forest_train[-1],0,100,color='k',linestyle='--',linewidth=1)
plt.hlines(ap_forest_train.max(),0,100,color='k',linestyle='--',linewidth=1)
# -
# # Functions
# +
def study(data_train,labels_train):
n_repetitions = 100
sk_IF = train_test_measure(data_train,labels_train)
ap_tree_train,tree_train = get_tree_collections(sk_IF,data_train,labels_train)
plt.figure()
_ = plt.hist(ap_tree_train)
plt.title('histogram of the tree average precision');plt.grid(True)
best = get_forests('best', labels_train,ap_tree_train,tree_train)
worst = get_forests('worst',labels_train,ap_tree_train,tree_train)
mean_random,std_random = get_random_forests(labels_train,n_repetitions,ap_tree_train,tree_train)
plt.figure()
plt.plot(best, label='best')
plt.plot(worst,label='worst')
plt.xlabel("forest composed of $n$ trees");plt.ylabel("average precision score")
x = np.hstack([np.arange(100),np.arange(100)[::-1]])
y = np.hstack([mean_random+std_random,(mean_random-std_random)[::-1]])
plt.plot(mean_random,color='green',linestyle='--',label='random')
plt.fill(x,y,color='green',alpha=0.1)
plt.grid(True);plt.legend()
plt.hlines(best[0], 0,100,color='k',linestyle='--',linewidth=1)
plt.hlines(best[-1], 0,100,color='k',linestyle='--',linewidth=1)
plt.hlines(best.max(), 0,100,color='k',linestyle='--',linewidth=1)
plt.hlines(worst[0], 0,100,color='k',linestyle='--',linewidth=1)
plt.hlines(worst.min(), 0,100,color='k',linestyle='--',linewidth=1)
def train_test_measure(data,labels):
sk_IF = IsolationForest(random_state=0).fit(data)
y_pred = sk_IF.score_samples(data)
measure(labels, -y_pred, plot=True)
return sk_IF
def get_tree_collections(sk_IF,data_train,labels_train):
sklearn_scores,tree_train = compute_tree_anomaly_scores(sk_IF,data_train)
ap_tree_train = np.array([measure(labels_train, - __tree_train__) for __tree_train__ in tree_train]) ## MINUS SIGN
return ap_tree_train,tree_train
def get_forests(strategy,labels_train,ap_tree_train,tree_train):
if strategy == 'best':
order = -1
elif strategy == 'worst':
order = 1
learned_ordering = np.argsort(ap_tree_train)[::order]
sorted_tree_train = tree_train[learned_ordering]
forest_train = (sorted_tree_train.cumsum(axis=0).T/np.arange(1,sorted_tree_train.shape[0]+1)).T
ap_forest_train = np.array([measure(labels_train, - __forest__) for __forest__ in forest_train]) ## MINUS SIGN
return ap_forest_train
def get_random_forests(labels_train,n_repetitions,ap_tree_train,tree_train):
repetitions_ap_forest_train = []
for r in range(n_repetitions):
print("\r random repetition {:.0f}".format(r),end='')
# random ordering
learned_ordering = np.random.choice(np.arange(tree_train.shape[0]),tree_train.shape[0],replace=False)
sorted_tree_train = tree_train[learned_ordering]
forest_train = (sorted_tree_train.cumsum(axis=0).T/np.arange(1,sorted_tree_train.shape[0]+1)).T
ap_forest_train = np.array([measure(labels_train, - __forest__) for __forest__ in forest_train]) ## MINUS SIGN
repetitions_ap_forest_train.append(ap_forest_train)
repetitions_ap_forest_train = np.array(repetitions_ap_forest_train)
mean_random = repetitions_ap_forest_train.mean(axis=0)
std_random = repetitions_ap_forest_train.std(axis=0)
return mean_random,std_random
# -
data,labels = double_cluster(None)
study(data,labels)
data,labels = square_toroid(None)
study(data,labels)
| analysis_toy_datasets.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="NUD-hySm_uPc"
# # **Open Source SW and Python Programming Project: Implementation of Subway Navigation**
#
# In this project, you will implement a navigation system for Seoul subway stations, **step by step**.
#
#
#
# + [markdown] id="uUsnepoeAhVl"
# # 0. Initialization
#
# Load subway station information from the file **simplified_subway_info_english.xlsx** or **simplified_subway_info_korean.xlsx**
# * The excel file contains subway station information for Seoul subway lines 1 ~ 4
# * When you execute this source code, you MUST upload these files to your Colab runtime environment
# + id="qnbOlA6o_mIl"
import xlrd
# Read data file
data = xlrd.open_workbook("simplified_subway_info_english.xlsx")
data = data.sheet_by_name('Sheet1')
# Store the loaded book object as a string list in subwayStation variable
subwayStation = []
for line in range(4) :
cur = [x for x in data.col_values(line) if x]
subwayStation.append(cur[1:])
# + [markdown] id="DBXU61Tn_vKS"
# # 1. Your implementation
# Let's start the implementation of a subway navigation system by using the loaded subway station information :)
#
# * You can access the subway station information by referring to **subwayStation** variable (list type)
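As a hedged illustration of that structure (the station names below are placeholders, not the real spreadsheet data), `subwayStation` is a list of four per-line station lists, so membership tests and indexing work like this:

```python
# Placeholder data mimicking the loaded structure: index 0 holds
# Line 1's stations, index 1 Line 2's, and so on.
subway_example = [
    ['A', 'B', 'C', 'D'],  # Line 1
    ['C', 'E', 'F'],       # Line 2 ('C' is a transfer station)
    ['F', 'G'],            # Line 3
    ['G', 'H'],            # Line 4
]

# Find every line serving station 'C':
lines = [i + 1 for i, stations in enumerate(subway_example) if 'C' in stations]
print(lines)  # [1, 2]
```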
# + id="7h-scy6AI8Re" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="3d7e8014-84a6-4078-8fcc-fc3c04f84dc8"
def findLinesForStation(subwayStation, stationName):
lines = []
for line, stations in subwayStation.items():
if stationName in stations:
lines.append(line)
return lines
def requestValidStation(subwayStation, msg, errorMsg):
while True:
station = input(msg)
lines = findLinesForStation(subwayStation, station)
if len(lines) > 0:
return station, lines
print(errorMsg)
def findPathInLine(stations, start, end):
startIdx = stations.index(start)
endIdx = stations.index(end)
path = list(stations[min(startIdx, endIdx):max(startIdx, endIdx) + 1])
if startIdx >= endIdx:
path.reverse()
return path
if not isinstance(subwayStation, dict):
tmp = subwayStation
subwayStation = {}
for key, value in zip(('Line1', 'Line2', 'Line3', 'Line4'), tmp):
subwayStation[key] = tuple(value)
while True:
print('*****' * 30)
print('1. Display subway line information (Line 1 ~ 4)')
print('2. Display subway station information')
print('3. Find a path between two subway stations')
print('4. Exit')
print('*****' * 30)
select = int(input('Please choose one of the options (1 - 4):'))
if select == 1:
print('*****' * 30)
print('Subway line information service')
print('*****' * 30)
while True:
line = 'Line' + input('Please enter a subway line number (1 - 4):')
if line in subwayStation.keys():
print(subwayStation[line])
break
print('[ERROR] Please enter a valid number (1 - 4)')
elif select == 2:
print('*****' * 30)
print('Subway station information service')
print('*****' * 30)
station, lines = requestValidStation(subwayStation,
'Please enter a subway station name:',
'[ERROR] Please enter a valid station name')
print(station, 'station is in', lines)
elif select == 3:
dptStation, dptLines = requestValidStation(subwayStation,
'Please enter a departure station name:',
'[ERROR] Please enter a valid departure station name')
dstStation, dstLines = requestValidStation(subwayStation,
'Please enter a destination station name:',
'[ERROR] Please enter a valid destination station name')
commonLine = list(set(dptLines).intersection(set(dstLines)))
if len(commonLine) > 0:
curLine = commonLine[0]
curStations = subwayStation[curLine]
path = findPathInLine(curStations, dptStation, dstStation)
print('In', curLine)
print(path)
else:
isDetected = False
for dptLine in dptLines:
for dstLine in dstLines:
commonStations = list(set(subwayStation[dptLine]).intersection(set(subwayStation[dstLine])))
if len(commonStations) > 0:
curComm = commonStations[0]
curStations = subwayStation[dptLine]
path = findPathInLine(curStations, dptStation, curComm)
print('In', dptLine)
print(path)
curStations = subwayStation[dstLine]
path = findPathInLine(curStations, curComm, dstStation)
print('Transfer to', dstLine)
print(path)
isDetected = True
break
if isDetected:
break
elif select == 4:
print('Bye bye~~')
break
else:
print('[ERROR] Please enter a valid option (1 - 4)')
| ossppl_project_v2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# #### New to Plotly?
# Plotly's Python library is free and open source! [Get started](https://plot.ly/python/getting-started/) by downloading the client and [reading the primer](https://plot.ly/python/getting-started/).
# <br>You can set up Plotly to work in [online](https://plot.ly/python/getting-started/#initialization-for-online-plotting) or [offline](https://plot.ly/python/getting-started/#initialization-for-offline-plotting) mode, or in [jupyter notebooks](https://plot.ly/python/getting-started/#start-plotting-online).
# <br>We also have a quick-reference [cheatsheet](https://images.plot.ly/plotly-documentation/images/python_cheat_sheet.pdf) (new!) to help you get started!
# #### Version Check
# Note: Python Buttons are available in version <b>1.12.12+</b><br>
# Run `pip install plotly --upgrade` to update your Plotly version
import plotly
plotly.__version__
# #### Methods
# The [updatemenu method](https://plot.ly/python/reference/#layout-updatemenus-buttons-method) determines which [plotly.js function](https://plot.ly/javascript/plotlyjs-function-reference/) will be used to modify the chart. There are 4 possible methods:
# - `"restyle"`: modify data or data attributes
# - `"relayout"`: modify layout attributes
# - `"update"`: modify data **and** layout attributes
# - `"animate"`: start or pause an [animation](https://plot.ly/python/#animations)
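# The four methods map one-to-one onto button definitions. A minimal sketch of one button per method (plain dicts only; the `args` values are illustrative and assume a figure with matching traces/axes):

```python
# One button per updatemenu method. These are plain dicts; Plotly only
# validates them when the figure is actually built.
buttons = [
    dict(label='Surface',  method='restyle',  args=['type', 'surface']),
    dict(label='Log axis', method='relayout', args=['yaxis.type', 'log']),
    dict(label='High',     method='update',
         args=[{'visible': [True, False]}, {'title': 'High'}]),
    dict(label='Play',     method='animate',  args=[None]),
]

updatemenus = [dict(buttons=buttons, direction='down', showactive=True)]
print([b['method'] for b in buttons])
```

# Attaching `updatemenus` to a layout then renders all four buttons in one dropdown.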
# #### Restyle Dropdown
# The `"restyle"` method should be used when modifying the data and data attributes of the graph.<br>
# **Update One Data Attribute**<br>
# This example demonstrates how to update a single data attribute: chart `type` with the `"restyle"` method.
# +
import plotly.plotly as py
import plotly.graph_objs as go
from plotly.tools import FigureFactory as FF
import json
import numpy as np
import pandas as pd
df = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/volcano.csv')
data = [go.Surface(z=df.values.tolist(), colorscale='Viridis')]
layout = go.Layout(
width=800,
height=900,
autosize=False,
margin=dict(t=0, b=0, l=0, r=0),
scene=dict(
xaxis=dict(
gridcolor='rgb(255, 255, 255)',
zerolinecolor='rgb(255, 255, 255)',
showbackground=True,
backgroundcolor='rgb(230, 230,230)'
),
yaxis=dict(
gridcolor='rgb(255, 255, 255)',
zerolinecolor='rgb(255, 255, 255)',
showbackground=True,
backgroundcolor='rgb(230, 230, 230)'
),
zaxis=dict(
gridcolor='rgb(255, 255, 255)',
zerolinecolor='rgb(255, 255, 255)',
showbackground=True,
backgroundcolor='rgb(230, 230,230)'
),
aspectratio = dict(x=1, y=1, z=0.7),
aspectmode = 'manual'
)
)
updatemenus=list([
dict(
buttons=list([
dict(
args=['type', 'surface'],
label='3D Surface',
method='restyle'
),
dict(
args=['type', 'heatmap'],
label='Heatmap',
method='restyle'
)
]),
direction = 'down',
pad = {'r': 10, 't': 10},
showactive = True,
x = 0.1,
xanchor = 'left',
y = 1.1,
yanchor = 'top'
),
])
annotations = list([
dict(text='Trace type:', x=0, y=1.085, yref='paper', align='left', showarrow=False)
])
layout['updatemenus'] = updatemenus
layout['annotations'] = annotations
fig = dict(data=data, layout=layout)
py.iplot(fig, filename='cmocean-picker-one-dropdown')
# -
# **Update Several Data Attributes**<br>
# This example demonstrates how to update several data attributes: colorscale, chart type, and line display with the "restyle" method.
# This example uses the cmocean python package. You can install this package with `pip install cmocean`.
# +
import plotly.plotly as py
import plotly.graph_objs as go
from plotly.tools import FigureFactory as FF
import cmocean
import json
import numpy as np
import pandas as pd
def cmocean_to_plotly(cmap, pl_entries=100):
h = 1.0/(pl_entries-1)
pl_colorscale = []
for k in range(pl_entries):
C = list(map(np.uint8, np.array(cmap(k*h)[:3])*255))  # list(): map() is lazy on Python 3
pl_colorscale.append([k*h, 'rgb'+str((C[0], C[1], C[2]))])
return pl_colorscale
df = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/volcano.csv')
data = [go.Surface(z=df.values.tolist(), colorscale='Viridis')]
button_layer_1_height = 1.12
button_layer_2_height = 1.065
layout = go.Layout(
width=800,
height=900,
autosize=False,
margin=dict(t=0, b=0, l=0, r=0),
scene=dict(
xaxis=dict(
gridcolor='rgb(255, 255, 255)',
zerolinecolor='rgb(255, 255, 255)',
showbackground=True,
backgroundcolor='rgb(230, 230,230)'
),
yaxis=dict(
gridcolor='rgb(255, 255, 255)',
zerolinecolor='rgb(255, 255, 255)',
showbackground=True,
backgroundcolor='rgb(230, 230,230)'
),
zaxis=dict(
gridcolor='rgb(255, 255, 255)',
zerolinecolor='rgb(255, 255, 255)',
showbackground=True,
backgroundcolor='rgb(230, 230,230)'
),
aspectratio = dict(x=1, y=1, z=0.7 ),
aspectmode = 'manual'
)
)
updatemenus=list([
dict(
buttons=list([
dict(
args=['colorscale', json.dumps(cmocean_to_plotly(cmocean.cm.haline)) ],
label='Haline',
method='restyle'
),
dict(
args=['colorscale', json.dumps(cmocean_to_plotly(cmocean.cm.turbid))],
label='Turbid',
method='restyle'
),
dict(
args=['colorscale', json.dumps(cmocean_to_plotly(cmocean.cm.speed))],
label='Speed',
method='restyle'
),
dict(
args=['colorscale', json.dumps(cmocean_to_plotly(cmocean.cm.tempo))],
label='Tempo',
method='restyle'
),
dict(
args=['colorscale', json.dumps(cmocean_to_plotly(cmocean.cm.gray))],
label='Gray',
method='restyle'
),
dict(
args=['colorscale', json.dumps(cmocean_to_plotly(cmocean.cm.phase))],
label='Phase',
method='restyle'
),
dict(
args=['colorscale', json.dumps(cmocean_to_plotly(cmocean.cm.balance)) ],
label='Balance',
method='restyle'
),
dict(
args=['colorscale', json.dumps(cmocean_to_plotly(cmocean.cm.delta))],
label='Delta',
method='restyle'
),
dict(
args=['colorscale', json.dumps(cmocean_to_plotly(cmocean.cm.curl))],
label='Curl',
method='restyle'
),
]),
direction = 'down',
pad = {'r': 10, 't': 10},
showactive = True,
x = 0.1,
xanchor = 'left',
y = button_layer_1_height,
yanchor = 'top'
),
dict(
buttons=list([
dict(
args=['reversescale', True],
label='Reverse',
method='restyle'
),
dict(
args=['reversescale', False],
label='Undo',
method='restyle'
)
]),
direction = 'down',
pad = {'r': 10, 't': 10},
showactive = True,
x = 0.55,
xanchor = 'left',
y = button_layer_1_height,
yanchor = 'top'
),
dict(
buttons=list([
dict(
args=[{'contours.showlines':False, 'type':'contour'}],
label='Hide lines',
method='restyle'
),
dict(
args=[{'contours.showlines':True, 'type':'contour'}],
label='Show lines',
method='restyle'
),
]),
direction = 'down',
pad = {'r': 10, 't': 10},
showactive = True,
x = 0.775,
xanchor = 'left',
y = button_layer_1_height,
yanchor = 'top'
),
dict(
buttons=list([
dict(
args=['type', 'surface'],
label='3d Surface',
method='restyle'
),
dict(
args=['type', 'heatmap'],
label='Heatmap',
method='restyle'
),
dict(
args=['type', 'contour'],
label='Contour',
method='restyle'
)
]),
direction = 'down',
pad = {'r': 10, 't': 10},
showactive = True,
x = 0.3,
xanchor = 'left',
y = button_layer_1_height,
yanchor = 'top'
),
])
annotations = list([
dict(text='cmocean<br>scale', x=0, y=1.11, yref='paper', align='left', showarrow=False ),
dict(text='Trace<br>type', x=0.25, y=1.11, yref='paper', showarrow=False ),
dict(text="Colorscale", x=0.5, y=1.10, yref='paper', showarrow=False),
dict(text="Lines", x=0.75, y=1.10, yref='paper', showarrow=False)
])
layout['updatemenus'] = updatemenus
layout['annotations'] = annotations
fig = dict(data=data, layout=layout)
py.iplot(fig, filename='cmocean-picker-dropdown')
# -
# #### Relayout Dropdown
# The `"relayout"` method should be used when modifying the layout attributes of the graph.<br>
# **Update One Layout Attribute**<br>
# This example demonstrates how to update a layout attribute: chart `type` with the `"relayout"` method.
# +
import plotly.plotly as py
import plotly.graph_objs as go
import numpy as np
x0 = np.random.normal(2, 0.4, 400)
y0 = np.random.normal(2, 0.4, 400)
x1 = np.random.normal(3, 0.6, 600)
y1 = np.random.normal(6, 0.4, 600)
x2 = np.random.normal(4, 0.2, 200)
y2 = np.random.normal(4, 0.4, 200)
trace0 = go.Scatter(
x=x0,
y=y0,
mode='markers',
marker=dict(color='#835AF1')
)
trace1 = go.Scatter(
x=x1,
y=y1,
mode='markers',
marker=dict(color='#7FA6EE')
)
trace2 = go.Scatter(
x=x2,
y=y2,
mode='markers',
marker=dict(color='#B8F7D4')
)
data = [trace0, trace1, trace2]
cluster0 = [dict(type='circle',
xref='x', yref='y',
x0=min(x0), y0=min(y0),
x1=max(x0), y1=max(y0),
opacity=.25,
line=dict(color='#835AF1'),
fillcolor='#835AF1')]
cluster1 = [dict(type='circle',
xref='x', yref='y',
x0=min(x1), y0=min(y1),
x1=max(x1), y1=max(y1),
opacity=.25,
line=dict(color='#7FA6EE'),
fillcolor='#7FA6EE')]
cluster2 = [dict(type='circle',
xref='x', yref='y',
x0=min(x2), y0=min(y2),
x1=max(x2), y1=max(y2),
opacity=.25,
line=dict(color='#B8F7D4'),
fillcolor='#B8F7D4')]
updatemenus = list([
dict(buttons=list([
dict(label = 'None',
method = 'relayout',
args = ['shapes', []]),
dict(label = 'Cluster 0',
method = 'relayout',
args = ['shapes', cluster0]),
dict(label = 'Cluster 1',
method = 'relayout',
args = ['shapes', cluster1]),
dict(label = 'Cluster 2',
method = 'relayout',
args = ['shapes', cluster2]),
dict(label = 'All',
method = 'relayout',
args = ['shapes', cluster0+cluster1+cluster2])
]),
)
])
layout = dict(title='Highlight Clusters', showlegend=False,
updatemenus=updatemenus)
fig = dict(data=data, layout=layout)
py.iplot(fig, filename='relayout_option_dropdown')
# -
# #### Update Dropdown
# The `"update"` method should be used when modifying the data and layout sections of the graph.<br>
# This example demonstrates how to update which traces are displayed while simultaneously updating layout attributes such as the chart title and annotations.
# +
import plotly.plotly as py
import plotly.graph_objs as go
from datetime import datetime
import pandas as pd
df = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/finance-charts-apple.csv')
df.columns = [col.replace('AAPL.', '') for col in df.columns]
trace_high = go.Scatter(x=list(df.index),
y=list(df.High),
name='High',
line=dict(color='#33CFA5'))
trace_high_avg = go.Scatter(x=list(df.index),
y=[df.High.mean()]*len(df.index),
name='High Average',
visible=False,
line=dict(color='#33CFA5', dash='dash'))
trace_low = go.Scatter(x=list(df.index),
y=list(df.Low),
name='Low',
line=dict(color='#F06A6A'))
trace_low_avg = go.Scatter(x=list(df.index),
y=[df.Low.mean()]*len(df.index),
name='Low Average',
visible=False,
line=dict(color='#F06A6A', dash='dash'))
data = [trace_high, trace_high_avg, trace_low, trace_low_avg]
high_annotations=[dict(x='2016-03-01',
y=df.High.mean(),
xref='x', yref='y',
text='High Average:<br>'+str(df.High.mean()),
ax=0, ay=-40),
dict(x=df.High.idxmax(),
y=df.High.max(),
xref='x', yref='y',
text='High Max:<br>'+str(df.High.max()),
ax=0, ay=-40)]
low_annotations=[dict(x='2015-05-01',
y=df.Low.mean(),
xref='x', yref='y',
text='Low Average:<br>'+str(df.Low.mean()),
ax=0, ay=40),
dict(x=df.Low.idxmin(),
y=df.Low.min(),
xref='x', yref='y',
text='Low Min:<br>'+str(df.Low.min()),
ax=0, ay=40)]
updatemenus = list([
dict(active=-1,
buttons=list([
dict(label = 'High',
method = 'update',
args = [{'visible': [True, True, False, False]},
{'title': 'Yahoo High',
'annotations': high_annotations}]),
dict(label = 'Low',
method = 'update',
args = [{'visible': [False, False, True, True]},
{'title': 'Yahoo Low',
'annotations': low_annotations}]),
dict(label = 'Both',
method = 'update',
args = [{'visible': [True, True, True, True]},
{'title': 'Yahoo',
'annotations': high_annotations+low_annotations}]),
dict(label = 'Reset',
method = 'update',
args = [{'visible': [True, False, True, False]},
{'title': 'Yahoo',
'annotations': []}])
]),
)
])
layout = dict(title='Yahoo', showlegend=False,
updatemenus=updatemenus)
fig = dict(data=data, layout=layout)
py.iplot(fig, filename='update_dropdown')
# -
# #### Style Dropdown
# When adding dropdowns to Plotly charts, users have the option of styling the color, font, padding, and position of the dropdown menus. The example below demonstrates how to apply different styling options. See all updatemenu styling attributes here: https://plot.ly/python/reference/#layout-updatemenus.
# +
df_wind = pd.read_csv('https://plot.ly/~datasets/2805.csv')
df_known_capacity = df_wind[ df_wind['total_cpcy'] != -99999.000 ]
df_sum = df_known_capacity.groupby('manufac')['total_cpcy'].sum().sort_values(ascending=False).to_frame()
df_farms = pd.read_csv('https://plot.ly/~jackp/17256.csv')
df_farms.set_index('Wind Farm', inplace=True)
wind_farms=list([
dict(
args=[ {
'mapbox.center.lat':38,
'mapbox.center.lon':-94,
'mapbox.zoom':3,
'annotations[0].text':'All US wind turbines (scroll to zoom)'
} ],
label='USA',
method='relayout'
)
])
for farm, row in df_farms.iterrows():
desc = []
for col in df_farms.columns:
if col not in ['DegMinSec','Latitude','Longitude']:
if str(row[col]) not in ['None','nan','']:
desc.append( col + ': ' + str(row[col]).strip("'") )
desc.insert(0, farm)
wind_farms.append(
dict(
args=[ {
'mapbox.center.lat':row['Latitude'],
'mapbox.center.lon':float(str(row['Longitude']).strip("'")),
'mapbox.zoom':9,
'annotations[0].text': '<br>'.join(desc)
} ],
label=' '.join(farm.split(' ')[0:2]),
method='relayout'
)
)
data = []
for mfr in list(df_sum.index):
if mfr != 'unknown':
trace = dict(
lat = df_wind[ df_wind['manufac'] == mfr ]['lat_DD'],
lon = df_wind[ df_wind['manufac'] == mfr ]['long_DD'],
name = mfr,
marker = dict(size = 4),
type = 'scattermapbox'
)
data.append(trace)
mapbox_access_token = 'insert mapbox token here'  # replace with your own Mapbox token
layout = dict(
height = 800,
margin = dict( t=0, b=0, l=0, r=0 ),
font = dict( color='#FFFFFF', size=11 ),
paper_bgcolor = '#000000',
mapbox=dict(
accesstoken=mapbox_access_token,
bearing=0,
center=dict(
lat=38,
lon=-94
),
pitch=0,
zoom=3,
style='dark'
),
)
updatemenus=list([
dict(
buttons = wind_farms[0:10],
pad = {'r': 0, 't': 10},
x = 0.1,
xanchor = 'left',
y = 1.0,
yanchor = 'top',
bgcolor = '#AAAAAA',
active = 99,
bordercolor = '#FFFFFF',
font = dict(size=11, color='#000000')
),
dict(
buttons=list([
dict(
args=['mapbox.style', 'dark'],
label='Dark',
method='relayout'
),
dict(
args=['mapbox.style', 'light'],
label='Light',
method='relayout'
),
dict(
args=['mapbox.style', 'satellite'],
label='Satellite',
method='relayout'
),
dict(
args=['mapbox.style', 'satellite-streets'],
label='Satellite with Streets',
method='relayout'
)
]),
direction = 'up',
x = 0.75,
xanchor = 'left',
y = 0.05,
yanchor = 'bottom',
bgcolor = '#000000',
bordercolor = '#FFFFFF',
font = dict(size=11)
),
])
annotations = list([
dict(text='All US wind turbines (scroll to zoom)', font=dict(color='magenta',size=14), borderpad=10,
x=0.05, y=0.05, xref='paper', yref='paper', align='left', showarrow=False, bgcolor='black'),
dict(text='Wind<br>Farms', x=0.01, y=0.99, yref='paper', align='left', showarrow=False,font=dict(size=14))
])
layout['updatemenus'] = updatemenus
layout['annotations'] = annotations
figure = dict(data=data, layout=layout)
py.iplot(figure, filename='wind-turbine-territory-dropdown')
# -
# #### Reference
# See https://plot.ly/python/reference/#layout-updatemenus for more information about `updatemenu` dropdowns.
# +
from IPython.display import display, HTML
display(HTML('<link href="//fonts.googleapis.com/css?family=Open+Sans:600,400,300,200|Inconsolata|Ubuntu+Mono:400,700" rel="stylesheet" type="text/css" />'))
display(HTML('<link rel="stylesheet" type="text/css" href="http://help.plot.ly/documentation/all_static/css/ipython-notebook-custom.css">'))
# !pip install git+https://github.com/plotly/publisher.git --upgrade
import publisher
publisher.publish(
'dropdown.ipynb', 'python/dropdowns/', 'Dropdown Menus | plotly',
'How to add dropdowns to update Plotly chart attributes in Python.',
title='Dropdown Menus | plotly',
name='Dropdown Menus',
has_thumbnail='true', thumbnail='thumbnail/dropdown.jpg',
language='python', page_type='example_index',
display_as='controls', order=2, ipynb= '~notebook_demo/85')
# -
| _posts/python-v3/controls/dropdowns/dropdown.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Graduated Rotational Internship Program: The Sparks Foundation
# ## Data Science & Analytics Intern
# ### Author: Daksh.R.Solanki
#
# ## TASK 1: Prediction using Supervised ML
# ### Problem Statement
# In this regression task, we will predict the percentage of marks a student is expected to score based on the number of hours they studied. This is a simple linear regression task, as it involves just two variables.
#
# ### To predict:
# What will be predicted score if a student studies for 9.25 hrs/ day?
# ## Importing required Libraries
# +
#importing required libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
# -
# importing the data from the URL
url = "http://bit.ly/w-data"
data = pd.read_csv(url)
print("Data Imported Successfully")
data
data.head()
data.tail()
data.info()
data.describe()
# Plotting the data to see the relationship between hours studied and score
data.plot(x='Hours', y='Scores', style='*')
plt.title('Hours vs Percentage')
plt.xlabel('Hours Studied')
plt.ylabel('Percentage Score')
plt.show()
# The above graph shows a positive relationship between the number of hours studied and the percentage score
# # Linear Regression
# ### Linear regression is a supervised learning algorithm used when the target (dependent) variable is a continuous real number.
#
# ### It establishes a relationship between the dependent variable y and one or more independent variables x using a best-fit line.
#
#
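# Before reaching for scikit-learn, the fit line can be sketched directly from the least-squares closed form (a minimal illustration on synthetic points, not this dataset):

```python
import numpy as np

# Closed-form simple linear regression:
#   b1 = cov(x, y) / var(x),  b0 = mean(y) - b1 * mean(x)
def fit_line(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    b0 = y.mean() - b1 * x.mean()
    return b0, b1

b0, b1 = fit_line([1, 2, 3, 4], [3, 5, 7, 9])  # exactly y = 1 + 2x
print(b0, b1)  # → 1.0 2.0
```

# scikit-learn's `LinearRegression` computes the same coefficients, exposed as `intercept_` and `coef_`.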
# ## Data Visualization
# +
# visualizing with a line plot
plt.style.use('ggplot')
data.plot(kind="line" )
plt.title('Hours vs Percentage')
plt.xlabel('Hours studied')
plt.ylabel('Percentage score')
plt.show()
# +
x=np.asanyarray(data[['Hours']])
y=np.asanyarray(data['Scores'])
#using train test split to split the data in train and test data
train_x,test_x,train_y,test_y=train_test_split(x,y,test_size=0.2,random_state=2)
regressor=LinearRegression()
regressor.fit(train_x,train_y)
print("Training completed")
print("Coefficients:",regressor.coef_)
print("Intercept",regressor.intercept_)
# +
from sklearn.metrics import mean_squared_error
from sklearn import metrics
from sklearn.metrics import r2_score
y_pred=regressor.predict(test_x)
print("Mean_Absolute_Error:{}".format(metrics.mean_absolute_error(test_y, y_pred)))
print("R2 score: %.2f" % r2_score(test_y, y_pred))  # r2_score expects (y_true, y_pred)
# -
# Making Predictions
data=pd.DataFrame({'Actual':test_y,'Predicted':y_pred})
data
# ## test
# +
hours=9.25
predicted_score=regressor.predict([[hours]])
print(f'No of hours= {hours}')
print(f'Predicted Score= {predicted_score[0]}')
# -
| GRIP _TASK_1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Generate a top-n list of books on consumer preference - Amazon books data
# **About Book Crossing Dataset**<br>
#
# This dataset was compiled by <NAME> in 2004, and it comprises three tables: users, books, and ratings. Explicit ratings are expressed on a scale from 1-10 (higher values denoting higher appreciation); an implicit rating is expressed by 0.
# Reference: http://www2.informatik.uni-freiburg.de/~cziegler/BX/
#
# **Objective**
#
# This project entails building a Book Recommender System for users based on user-based and item-based collaborative filtering approaches.
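# Both user-based and item-based collaborative filtering ultimately score candidates by comparing rating vectors. A minimal cosine-similarity sketch (synthetic vectors, not this dataset):

```python
import numpy as np

# Cosine similarity between two rating vectors -- the building block of
# both user-based and item-based collaborative filtering.
def cosine(u, v):
    u, v = np.asarray(u, float), np.asarray(v, float)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine([8, 6, 0], [8, 6, 0]))  # identical taste → 1.0
print(cosine([8, 0, 6], [0, 8, 0]))  # no overlap → 0.0
```

# In the item-based variant, the same function is applied to columns of the user-item rating matrix instead of rows.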
# #### Execute the below cell to load the datasets
# + active=""
# !pip install --upgrade google-api-python-client
# -
# !pip install nltk --upgrade
# !pip install dill
# !python -m pip install --upgrade pip
# !pip install google-cloud
# !pip install google-cloud-vision
import os
# +
#Import required libraries
import numpy as np
import pandas as pd
import math
import json
import time
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors
import scipy.sparse
from scipy.sparse import csr_matrix
import warnings; warnings.simplefilter('ignore')
# %matplotlib inline
# +
#Loading data
books = pd.read_csv("BX-Books.csv", sep=";", error_bad_lines=False, encoding="latin-1")
books.columns = ['ISBN', 'bookTitle', 'bookAuthor', 'yearOfPublication', 'publisher', 'imageUrlS', 'imageUrlM', 'imageUrlL']
users = pd.read_csv("BX-Users.csv", sep=';', error_bad_lines=False, encoding="latin-1")
users.columns = ['userID', 'Location', 'Age']
ratings = pd.read_csv("BX-Book-Ratings.csv", sep=';', error_bad_lines=False, encoding="latin-1")
ratings.columns = ['userID', 'ISBN', 'bookRating']
# -
# #### Check no.of records and features given in each dataset
books.shape
users.shape
ratings.shape
# ## Exploring books dataset
books.head()
# ## Data Pre-processing : yearOfPublication####
# ### Check unique values of yearOfPublication
books.dtypes
bookspub = books['yearOfPublication']
bookspub.unique()
# As can be seen above, there are some incorrect entries in this field. It looks like the publisher names 'DK Publishing Inc' and 'Gallimard' have been incorrectly loaded as yearOfPublication due to delimiter errors in the CSV file.
#
#
# Also, some entries are strings while the same years appear as numbers elsewhere. We will try to fix these issues in the coming questions.
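# A generic way to surface such mixed-type entries in one pass, rather than checking specific strings, is to coerce the column to numeric (a sketch on a tiny stand-in frame; the column name matches this dataset):

```python
import pandas as pd

# Coerce to numeric: anything non-numeric becomes NaN, so the offending
# rows can be listed directly.
books = pd.DataFrame({'yearOfPublication': ['2002', 1999,
                                            'DK Publishing Inc', 'Gallimard']})
years = pd.to_numeric(books['yearOfPublication'], errors='coerce')
bad = books.loc[years.isna(), 'yearOfPublication']
print(list(bad))  # → ['DK Publishing Inc', 'Gallimard']
```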
# ### Check the rows having 'DK Publishing Inc' as yearOfPublication
books.loc[books.yearOfPublication == 'DK Publishing Inc',:]
# ### Drop the rows having `'DK Publishing Inc'` and `'Gallimard'` as `yearOfPublication`
books = books[(books.yearOfPublication != 'DK Publishing Inc') & (books.yearOfPublication != 'Gallimard')]
# ### Change the datatype of yearOfPublication to 'int'
books['yearOfPublication'] = books['yearOfPublication'].astype('int32')
books.dtypes
# ### Drop NaNs in `'publisher'` column
#drop NaNs in 'publisher' column
books = books.dropna(subset=['publisher'])
books.publisher.isnull().sum()
# ## Exploring Users dataset
print(users.shape)
users.head()
# ### Get all unique values in ascending order for column `Age`
print(sorted(users.Age.unique()))
# Age column has some invalid entries like nan, 0 and very high values like 100 and above
# ### Values below 5 and above 90 do not make much sense for our book rating case...hence replace these by NaNs
import numpy as np
users.loc[(users.Age > 90) | (users.Age < 5), 'Age'] = np.nan
# ### Replace null values in column `Age` with mean
users['Age'] = users['Age'].fillna(users['Age'].mean())
# ### Change the datatype of `Age` to `int`
users['Age'] = users['Age'].astype(int)
print(sorted(users.Age.unique()))
# ## Exploring the Ratings Dataset
# ### check the shape
ratings.shape
n_users = users.shape[0]
n_books = books.shape[0]
ratings.head(5)
# ### The ratings dataset should only contain books which exist in our books dataset. Drop the remaining rows
ratings_new = ratings[ratings.ISBN.isin(books.ISBN)]
ratings_new.shape
# ### The ratings dataset should only contain ratings from users who exist in the users dataset. Drop the remaining rows
ratings_new = ratings_new[ratings_new.userID.isin(users.userID)]
ratings_new.shape
# ### Consider only ratings from 1-10 and leave 0s in column `bookRating`
ratings_new['bookRating'].unique()
# Hence segregating implicit and explicit ratings datasets
ratings_explicit = ratings_new[ratings_new.bookRating != 0]
ratings_implicit = ratings_new[ratings_new.bookRating == 0]
print(ratings_new.shape)
print(ratings_explicit.shape)
print(ratings_implicit.shape)
# ### Find out which rating has been given highest number of times
#plotting count of bookRating
import seaborn as sns
import matplotlib.pyplot as plt
sns.countplot(data=ratings_explicit , x='bookRating')
plt.show()
# ratings_explicit['bookRating'].plot(kind = 'bar')
# ### **Collaborative Filtering Based Recommendation Systems**
# ### For more accurate results, only consider users who have rated at least 500 books
counts1 = pd.value_counts(ratings_explicit['userID'])
ratings_explicit = ratings_explicit[ratings_explicit['userID'].isin(counts1[counts1 >= 500].index)]
ratings_explicit.head(2)
ratings_explicit
# ### Transform data to surprise format
# !pip install surprise
# +
from surprise import Dataset,Reader
from surprise.model_selection import cross_validate
from surprise import NormalPredictor
reader = Reader(rating_scale=(1, 10))
# -
ratings_explicit.head(2)
ratings_explicit.shape
data = Dataset.load_from_df(ratings_explicit[['userID', 'ISBN', 'bookRating']], reader)
data.df.head(2)
# ### Points to Note:
# 1) Trainset is no longer a pandas dataframe. Rather, it's a specific datatype defined by the Surprise library
#
#
# 2) UserID and ISBN in the pandas dataframe can contain any value (either string/integer etc). However, the Trainset converts these raw ids into numeric indexes called "inner ids"
#
#
# 3) Methods are provided to convert a raw id to an inner id and vice versa
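# The raw-id/inner-id bookkeeping can be sketched with plain dicts (illustrative only -- the real Trainset builds these mappings when the data is loaded; the function names mirror Surprise's `to_inner_uid` / `to_raw_uid` API):

```python
# Map raw user ids to dense inner indexes and back.
raw_uids = [11676, 98391, 153662]
raw_to_inner = {raw: inner for inner, raw in enumerate(raw_uids)}
inner_to_raw = {inner: raw for raw, inner in raw_to_inner.items()}

def to_inner_uid(raw):
    return raw_to_inner[raw]

def to_raw_uid(inner):
    return inner_to_raw[inner]

print(to_inner_uid(98391), to_raw_uid(0))  # → 1 11676
```

# Dense inner indexes let the library store ratings in arrays and matrices instead of hash lookups on arbitrary ids.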
# ### SVD Based Recommendation System
# +
from surprise import Dataset,Reader
reader = Reader(rating_scale=(1, 10))
data = Dataset.load_from_df(ratings_explicit[['userID', 'ISBN', 'bookRating']], reader)
# +
# Split data to train and test
from surprise.model_selection import train_test_split
trainset, testset = train_test_split(data, test_size=.25,random_state=123)
# to build on full data
#trainset = data.build_full_trainset()
# -
trainset.all_ratings()
# +
# However the ids are the inner ids and not the raw ids
# raw ids can be obtained as follows
print(trainset.to_raw_uid(0))
#print(trainset.to_raw_iid(1066))
# -
from surprise import SVD, KNNWithMeans
from surprise import accuracy
svd_model = SVD(n_factors=5,biased=False)
svd_model.fit(trainset)
testset[0]
test_pred = svd_model.test(testset)
# compute RMSE
accuracy.rmse(test_pred)
# ## KNNWithMeans
# +
from surprise import KNNWithMeans
from surprise import accuracy
algo_i = KNNWithMeans(k=10, sim_options={ 'user_based': False})
algo_i.fit(trainset)
# +
#from surprise.model_selection import cross_validate
#cross_validate(algo_i, data, measures=['RMSE', 'MAE'], cv=5, verbose=True)
# -
test_pred=algo_i.test(testset)
print(accuracy.rmse(test_pred))
# ## KNNWithMeans gave better results.
#
# We can try cross-validation to improve accuracy further
# +
uid = 11676  # raw user id as it appears in the ratings dataframe (an int here, since Dataset.load_from_df keeps the dataframe's dtypes)
iid = "074323748X"  # raw item id (a string, as in the dataframe)
# get a prediction for specific users and items.
pred = algo_i.predict(uid, iid, r_ui=0.0, verbose=True)
# -
# ### Generating top n recommendations
pred = pd.DataFrame(test_pred)
pred[pred['uid'] == 11676][['iid', 'r_ui','est']].sort_values(by = 'r_ui',ascending = False).head(10)
# ### Insights
#
# The model predicts the average rating wherever estimation is not possible.
#
# Model-based collaborative filtering is a personalised recommender system: the recommendations are based on the past behaviour of the user and do not depend on any additional information.
#
# The popularity-based recommender system is non-personalised and its recommendations are based on frequency counts, which may not be suitable for every user.
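# The fallback behaviour noted above can be sketched without Surprise: when a (user, item) pair was never seen, a sensible default is the global mean rating, which mirrors what Surprise's `predict` does when it sets `was_impossible` (synthetic ratings only):

```python
# Synthetic ratings: (userID, ISBN) -> rating.
ratings = {(1, 'a'): 8, (1, 'b'): 6, (2, 'a'): 10}
global_mean = sum(ratings.values()) / len(ratings)

def predict(user, item):
    # Unseen (user, item) pair -> fall back to the global mean.
    return ratings.get((user, item), global_mean)

print(predict(1, 'a'), predict(99, 'zzz'))  # → 8 8.0
```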
| Collaborative Filtering.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # JetBot - Data collection without a game controller
#
# In this notebook we'll collect training data for a CNN VAE. The training data are saved to the dataset directory.
#
# ## Import module
#
#
import os
import traitlets
import ipywidgets.widgets as widgets
from IPython.display import display
from jetbot import Robot, Camera, bgr8_to_jpeg
# ## Show log_button
#
# Enabling ```log_button``` starts recording images.
#
log_button = widgets.ToggleButton(value=False, description='enable logging')
display(log_button)
# ## Initialize Camera
#
# Next we initialize the camera module. The image size is 320 x 240 and the frame rate is about 27 Hz. We'll save images in the camera observer method, which receives one image per frame; the frame rate therefore determines the image-save interval.
camera = Camera.instance(width=320, height=240)
image = widgets.Image(format='jpeg', width=320, height=240)
camera_link = traitlets.dlink((camera,'value'), (image,'value'), transform=bgr8_to_jpeg)
# ## UI Widget
#
# +
DATASET_DIR = 'dataset'
try:
os.makedirs(DATASET_DIR)
except FileExistsError:
print('Directories not created because they already exist')
dataset=DATASET_DIR
layout = widgets.Layout(width='100px', height='64px')
count_box = widgets.IntText(layout=layout, value=len(os.listdir(dataset)))
count_label = widgets.Label(layout=layout, value='Number image:')
count_panel = widgets.HBox([count_label,count_box])
panel = widgets.VBox([count_panel])
display(widgets.HBox([panel,image]))
# -
# ## Set callback for collect the training data.
#
# ```save_record``` is the callback that collects training data. It is registered as a camera observer and saves each incoming image to DATASET_DIR. While the ```enable logging``` button is on, this method records training data. You can check the number of training images in the ```Number image``` text box.
# +
import os
from uuid import uuid1
def save_record(change):
if log_button.value:
image_name = '{}.jpg'.format(uuid1())
image_path = os.path.join(DATASET_DIR, image_name)
save_image=bgr8_to_jpeg(change['new'])
with open(image_path, 'wb') as f:
f.write(save_image)
count_box.value = len(os.listdir(dataset))
save_record({'new': camera.value})
camera.observe(save_record, names='value')
# -
# ## Cleanup
#
# After collecting enough data, clean up the camera observer and unlink the image widget.
camera.unobserve(save_record, names='value')
camera_link.unlink()
# ## Create a dataset.zip file
# +
import datetime
def timestr():
return str(datetime.datetime.now().strftime('%Y-%m-%d_%H-%M-%S'))
# !zip -r -q jetbot_{DATASET_DIR}_{timestr()}.zip {DATASET_DIR}
| notebooks/utility/jetbot/data_collection_withoutgamepad.ipynb |
# ##### Copyright 2020 Google LLC.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# # magic_sequence_sat
# <table align="left">
# <td>
# <a href="https://colab.research.google.com/github/google/or-tools/blob/master/examples/notebook/contrib/magic_sequence_sat.ipynb"><img src="https://raw.githubusercontent.com/google/or-tools/master/tools/colab_32px.png"/>Run in Google Colab</a>
# </td>
# <td>
# <a href="https://github.com/google/or-tools/blob/master/examples/contrib/magic_sequence_sat.py"><img src="https://raw.githubusercontent.com/google/or-tools/master/tools/github_32px.png"/>View source on GitHub</a>
# </td>
# </table>
# First, you must install [ortools](https://pypi.org/project/ortools/) package in this colab.
# !pip install ortools
# +
# Copyright 2018 <NAME>
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http:#www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Solve the magic sequence problem with the CP-SAT solver."""
from ortools.sat.python import cp_model
"""Magic sequence problem."""
n = 100
values = range(n)
model = cp_model.CpModel()
x = [model.NewIntVar(0, n, 'x%i' % i) for i in values]
for k in values:
tmp_array = []
for i in values:
tmp_var = model.NewBoolVar('')
model.Add(x[i] == k).OnlyEnforceIf(tmp_var)
model.Add(x[i] != k).OnlyEnforceIf(tmp_var.Not())
tmp_array.append(tmp_var)
model.Add(sum(tmp_array) == x[k])
# Redundant constraint.
model.Add(sum(x) == n)
solver = cp_model.CpSolver()
# No solution printer, this problem has only 1 solution.
solver.parameters.log_search_progress = True
solver.Solve(model)
print(solver.ResponseStats())
for k in values:
print('x[%i] = %i ' % (k, solver.Value(x[k])), end='')
print()
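# The defining property above — position `k` holds the number of occurrences of `k` — can be verified in plain Python. A small sanity check (using a known solution for n = 4):

```python
def is_magic_sequence(seq):
    # x[k] must equal the number of occurrences of k in the sequence
    return all(seq.count(k) == x_k for k, x_k in enumerate(seq))

print(is_magic_sequence([1, 2, 1, 0]))  # True: one 0, two 1s, one 2, no 3s
print(is_magic_sequence([1, 1, 1, 1]))  # False
```

This is also a handy check to run against the CP-SAT solver's output.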
| examples/notebook/contrib/magic_sequence_sat.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Futures Trading Considerations
# by <NAME> and <NAME>
#
# Part of the Quantopian Lecture Series:
#
# * [www.quantopian.com/lectures](https://www.quantopian.com/lectures)
# * [github.com/quantopian/research_public](https://github.com/quantopian/research_public)
#
# Notebook released under the Creative Commons Attribution 4.0 License.
#
#
# In this lecture we will consider some practical implications for trading futures contracts. We will discuss the futures calendar and how it impacts trading as well as how to maintain futures positions across expiries.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from quantopian.research.experimental import continuous_future, history
# ## Futures Calendar
# An important feature of futures markets is the calendar used to trade them. Futures markets are open long after the equity markets close, though the effective periods within which you can trade with large amounts of liquidity tend to overlap. The specific high points in volume for futures contracts vary greatly from underlying to underlying. Despite this, the majority of the volume for many contracts typically falls within normal EST market hours.
#
# Let's have a look at a day in the life of the S&P 500 Index E-Mini futures contract that was deliverable in March 2017.
contract = symbols('ESH17')
one_day_volume = get_pricing(contract, start_date='2017-02-01', end_date='2017-02-01', frequency='minute', fields='volume')
one_day_volume.tz_convert('EST').plot()
plt.title('Trading Volume for 2/01/2017 by Minute')
plt.xlabel('Minute')
plt.ylabel('Volume');
# This is one of the most liquid futures contracts and we see this significant increase in volume traded during normal equity trading hours. These hours can be even more tight for less liquid commodities. For example, let's look at how Feeder Cattle trades during the same time period on the same day.
contract = symbols('FCH17')
one_day_volume = get_pricing(contract, start_date='2017-02-01', end_date='2017-02-01', frequency='minute', fields='volume')
one_day_volume.tz_convert('EST').plot()
plt.title('Trading Volume for 2/01/2017 by Minute')
plt.xlabel('Minute')
plt.ylabel('Volume');
# If we are trying to trade multiple different underlyings with futures contracts in the same algorithm, we need to be conscious of their volume relative to each other. All trading algorithms depend on orders being executed as determined by their calculations. Some contracts are so illiquid that entering into even the smallest position will amount to becoming a large part of the volume for a given day. This could heavily impact slippage.
#
# Unsurprisingly, volume will also vary for different expiries on the same underlying. The front month contract, the contract closest to delivery, has the largest amount of volume. As we draw closer to delivery the front month's volume is eclipsed by the next expiry date as participants in the market close out their positions and roll them forward.
contracts = symbols(['ESH16', 'ESM16', 'ESU16'])
rolling_volume = get_pricing(contracts, start_date='2015-12-15', end_date='2016-09-15', fields='volume')
rolling_volume.plot()
plt.title('Volume for Different Expiries of same Underlying')
plt.xlabel('Date')
plt.ylabel('Volume');
# ## Futures Positions Have Inherent Leverage
# In entering a futures position, you place down a certain amount of capital in a margin account. This margin account is exposed to the fluctuating futures price of the underlying that you have chosen. This creates a levered position off the bat as the value that you are exposed to (before delivery) in the account is different from the overall value that is on the hook at delivery.
#
# This internal leverage is determined on a contract to contract basis due to the different multipliers involved for different underlyings.
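# As a rough sketch of that inherent leverage: the notional value you are exposed to is the futures price times the contract multiplier, while the capital posted is only the margin. The $50-per-point E-mini S&P 500 multiplier is real; the price and margin figures below are hypothetical.

```python
multiplier = 50          # E-mini S&P 500: $50 per index point
futures_price = 2300.0   # hypothetical futures price
initial_margin = 5000.0  # hypothetical margin requirement

notional = futures_price * multiplier  # value on the hook at delivery
leverage = notional / initial_margin
print(notional, leverage)  # 115000.0 23.0
```

A 1% move in the futures price therefore moves the margin account by 23% in this example.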
# ## Roll-over
# If we want to maintain a futures position across expiries, we need to "roll over" our contracts. This is the practice of switching to the next month's contract after closing your previous holding. The majority of futures positions are either closed or rolled over before ever reaching delivery.
#
# The futures contract with expiry closest to the current date is known as the "front month" contract. It usually enjoys the smallest spread between futures and spot prices as well as the most liquidity. In contrast, the futures contract that has the furthest expiration date in a set of contracts is known as the "back month" contract. Contracts that are further out have significantly less liquidity, though they still may contain vague information about future prices anticipated by the market.
#
# By rolling forward our positions, we can maintain a hedge on a particular underlying or simply maintain a position across time. Without rolling contracts over we would be required to develop trading strategies that work only on a short timescale.
#
# This graph illustrates the volume that results from rolling over contracts on the first date where the front month contract's volume is eclipsed by the following month on the same underlying.
maximum_any_day_volume = rolling_volume.max(axis=1)
maximum_any_day_volume.name = 'Volume Roll-over'
rolling_volume.plot()
maximum_any_day_volume.plot(color='black', linestyle='--')
plt.title('Volume for Front Contract with Volume-based Rollover')
plt.xlabel('Date')
plt.ylabel('Volume')
plt.legend();
# In this particular instance, our goal is to ride the wave of liquidity provided by the front contract.
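# The volume-based roll rule illustrated above — switch contracts on the first day the next expiry's volume exceeds the front month's — can be sketched with hypothetical daily volumes:

```python
# Hypothetical daily volumes for a front contract and the next expiry
dates = ['2016-03-01', '2016-03-02', '2016-03-03', '2016-03-04', '2016-03-05']
front = [900, 800, 700, 400, 200]
back  = [100, 200, 300, 500, 800]

# Roll on the first day the back contract's volume exceeds the front's
roll_date = next(d for d, f, b in zip(dates, front, back) if b > f)
print(roll_date)  # 2016-03-04
```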
# ## Continuous Futures
# With futures, it is difficult to get a continuous series of historical prices. Each time that you roll forward to a new contract, the price series incurs a jump. This jump negatively impacts our analysis of prices as the discontinuity introduces shocks in our return and volatility measures that may not be representative of the actual changes in the underlying.
#
# We use the continuous futures objects as part of the platform to get a continuous chain of historical data for futures contracts, taking these concerns into account. There are several ways to adjust for the cost of carry when looking at historical data, though people differ on what they prefer. The general consensus is that an adjustment should be done.
#
# We can have a continuous future "roll" forward either based on calendar dates or based on the shift in volume from the front month contract to the next. The `ContinuousFuture` object is not a tradable asset, however. It is an API construct that abstracts the chain of consecutive contracts for the same underlying. It maintains an ongoing reference to the active contract in the chain and makes it easier to maintain a dynamic reference to contracts that you want to order, as well as to get historical series of data, all based on your chosen method of adjustment and your desired roll method.
continuous_corn = continuous_future('CN', offset=0, roll='calendar', adjustment='mul')
# The above defined continuous future has an `offset` of $0$, indicating that we want it to reference the front month contract at each roll. Incrementing the offset causes the continuous future to instead monitor the contract that is displaced from the front month by that number.
# ### Adjustments
# We can define a continuous future to use multiplicative adjustments, additive adjustments, or no adjustments (`'mul'`, `'add'`, `None`). The cost of carry that is realized as we shift from one contract to the next can be seen as the shock from a dividend payment. Adjustments are important to frame past prices relative to today's prices by including the cost of carry. Additive adjustments close the gaps between contracts by simply taking the differences and aggregating those back, while multiplicative adjustments scale previous prices using a ratio to close the gap.
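# The two back-adjustment schemes can be sketched on a hypothetical roll gap: the expiring contract ends at 102 while the next contract starts at 105.

```python
old = [100.0, 101.0, 102.0]   # hypothetical prices from the expiring contract
new_start = 105.0             # first price of the next contract at the roll

# Additive: shift old prices by the gap
gap = new_start - old[-1]
adj_add = [p + gap for p in old]   # [103.0, 104.0, 105.0]

# Multiplicative: scale old prices by the ratio
ratio = new_start / old[-1]
adj_mul = [p * ratio for p in old]

print(adj_add[-1], adj_mul[-1])  # both series now meet the new contract at 105.0
```

Multiplicative adjustment preserves past percentage returns; additive adjustment preserves past point differences.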
continuous_corn_price = history(continuous_corn, start_date='2009-01-01', end_date='2016-01-01', fields='price')
continuous_corn_price.plot();
# Once we have a continuous time series of prices we can conduct meaningful statistical analysis to form a foundation for our research.
# ## Fewer Assets
# There are around 8000 equities in the US market, but there are far fewer futures contracts, especially those with enough liquidity to trade. We can make up for this by trading different expiries on the same underlying, though we need to ensure that we are conducting rigorous testing to ensure that our resulting signals are viable in the market given the potential liquidity constraints that come with not trading the front month.
# *This presentation is for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation for any security; nor does it constitute an offer to provide investment advisory or other services by Quantopian, Inc. ("Quantopian"). Nothing contained herein constitutes investment advice or offers any opinion with respect to the suitability of any security, and any views expressed herein should not be taken as advice to buy, sell, or hold any security or as an endorsement of any security or company. In preparing the information contained herein, Quantopian, Inc. has not taken into account the investment needs, objectives, and financial circumstances of any particular investor. Any views expressed and data illustrated herein were prepared based upon information, believed to be reliable, available to Quantopian, Inc. at the time of publication. Quantopian makes no guarantees as to their accuracy or completeness. All information is subject to change and may quickly become unreliable for various reasons, including changes in market conditions or economic circumstances.*
| docs/memo/notebooks/lectures/Futures_Trading_Considerations/notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Clustering and Classification of MEPs' Tweets - Exercise Answers
#
# In this set of exercises, we will be tackling preprocessing, clustering and classification problems.
#
# > <NAME> <br />
# > Student ID (Α.Μ.) 3160245
# ## Data Preparation
#
# * Before anything else, we'll import all the libraries used in the notebook
# * In case the file "tweets.csv" is not present on the same folder as the .ipynb, the Twitter API requires keys for the tweets' download
# +
import tweepy
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from os import path
import seaborn as sn
import json
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_extraction.text import CountVectorizer
from nltk.stem import PorterStemmer
from sklearn.cluster import KMeans
from yellowbrick.cluster import InterclusterDistance
from yellowbrick.cluster import KElbowVisualizer
from yellowbrick.cluster import SilhouetteVisualizer
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.model_selection import GridSearchCV
from sklearn.dummy import DummyClassifier
import lightgbm as lgb
import xgboost as xgb
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import VotingClassifier, AdaBoostClassifier
# Authenticate to Twitter
auth = tweepy.OAuthHandler("KEY", "SECRET_KEY")
auth.set_access_token("TOKEN","SECRET_TOKEN")
api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True)
# test authentication
try:
api.verify_credentials()
print("Authentication OK.")
except:
print("Error during authentication.")
# -
# * After importing, we read our dataset and drop everything not in English
# * We create a list of the tweet IDs we'll be downloading
df = pd.read_csv("retweets.csv")
df = df[df.lang == 'en'].reset_index(drop = True)
tweet_ids = df.origTweetId.to_list()
# * If "tweets.csv" is present, no download will be needed.
# * Else, the function lookup_tweets will download the specified tweets on batches.
# * Apart from the text, a lot of additional info is downloaded along with each tweet. <br>
# To keep the file's size small enough, we drop all columns except the id and the text
if (path.exists("tweets.csv")):
tweetDf = pd.read_csv("tweets.csv")
print("File opened.")
else:
print("Tweets.csv file not found. Downloading tweets...")
def lookup_tweets(tweet_IDs, api):
full_tweets = []
tweet_count = len(tweet_IDs)
try:
for i in range((tweet_count // 100) + 1):
# Catch the last group if it is less than 100 tweets
end_loc = min((i + 1) * 100, tweet_count)
full_tweets.extend(
api.statuses_lookup(id_ = tweet_IDs[i * 100:end_loc])
)
return full_tweets
except tweepy.TweepError:
print('Something went wrong, quitting...')
results = lookup_tweets(tweet_ids, api)
temp = json.dumps([status._json for status in results])
tweetDf = pd.read_json(temp, orient='records')
tweetDf = tweetDf[['id', 'text']]
tweetDf.to_csv("tweets.csv", index = False)
print("Tweets downloaded!")
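# The batching logic in `lookup_tweets` (the Twitter lookup endpoint accepts at most 100 IDs per call) can be sketched on a hypothetical ID list:

```python
ids = list(range(250))  # hypothetical tweet IDs
batch_count = (len(ids) // 100) + 1
batches = [ids[i * 100:min((i + 1) * 100, len(ids))] for i in range(batch_count)]
print([len(b) for b in batches])  # [100, 100, 50]
```

The `min(...)` bound catches the final partial batch, just as in the function above.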
# * Now that we have our tweets, we'll join the two dataframes and find the outlier groups
df = df.set_index('origTweetId').join(tweetDf.set_index('id').text).dropna(subset = ['text']).reset_index().rename(columns = {'index' : 'origTweetId'})
df.groupby('origMepGroupId').size()
# * We can see groups 7 and 8 have insufficient amounts of tweets, so we'll be dropping them
# * Groups 0, 2 and 5 are also smaller than the rest. This will be significant during classification
df = df[(df.origMepGroupId != 7) & (df.origMepGroupId != 8)].reset_index(drop = True)
# * We'll be dropping the urls in tweets as well
df['text'] = df['text'].replace(r'http\S+', '', regex = True).replace(r'www\S+', '', regex = True).replace(r'&', '', regex = True)
# * Before we begin clustering, we'll convert the tweets in TF-IDF features
# * We drop english stopwords, convert capital letters and strip accents from the text
vec = TfidfVectorizer(lowercase = True, strip_accents = 'unicode', stop_words = 'english', min_df = 10, max_df = 0.50)
tfidf = vec.fit_transform(df['text'])
tfidf
# ## Clustering
# * We'll begin clustering using Yellowbrick's elbow visualizer
# * Due to the seemingly insufficient preprocess of the tweets, KMeans has difficulties discerning optimal clusters
# * Depending on the run, the visualizer might not find an elbow at all
kmeans = KMeans(n_init = 20, n_jobs = -1)
visualizer = KElbowVisualizer(kmeans, size = (1200,1000), k = (2, 16))
visualizer.fit(tfidf)
visualizer.show()
# * If we use the Silhouette visualizer, we'll see that the "best" score is always that of the run with the most clusters
# +
scores = {}
plt.figure(figsize=(2 * 5, 10 * 4))
for n_clusters in range(2, 16):
plt.subplot(10, 2, n_clusters - 1)
kmeans = KMeans(n_clusters, n_init = 20, n_jobs = -1)
visualizer = SilhouetteVisualizer(kmeans, colors='yellowbrick')
visualizer.fit(tfidf)
scores[n_clusters] = visualizer.silhouette_score_
plt.title(f'clusters: {n_clusters} score: {visualizer.silhouette_score_}')
# -
# * In order to pick an appropriate amount of clusters, we'll use the scores of the silhouettes and plot them
# * We'll use the following plot and the elbow visualizer to pick a number of clusters ourselves
score_list = sorted(scores.items(), key=lambda kv: kv[0], reverse=False)
x, y = zip(*score_list)
plt.figure(figsize = (10,6))
plt.plot(x, y)
plt.show()
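# Given a score dictionary like the one built above, picking the best cluster count reduces to an argmax over the values (the scores below are hypothetical):

```python
# Hypothetical silhouette scores keyed by number of clusters
demo_scores = {2: 0.012, 3: 0.015, 4: 0.019, 5: 0.018}
best_k = max(demo_scores, key=demo_scores.get)
print(best_k)  # 4
```

In practice we temper this argmax with the elbow plot, since silhouette scores on sparse TF-IDF data tend to keep creeping up with k.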
# * To check that the amount is appropriate, we'll run an InterclusterDistance visualizer and see if any groups fall on top of each other
kmeans = KMeans(n_jobs = -1, n_clusters = 7, n_init = 20).fit(tfidf)
visualizer = InterclusterDistance(kmeans, size = (1200,1000))
visualizer.fit(tfidf)
visualizer.show()
# * We can discern if the clusters make sense by printing the most important features of the clusters
# * In almost every run, most clusters concern topics of the UK and Greece
# +
clusters = KMeans(n_clusters = 7, n_init = 20, n_jobs = -1).fit_predict(tfidf)
text_clust = pd.DataFrame(tfidf.todense()).groupby(clusters).mean()
for i,r in text_clust.iterrows():
print('\nCluster {}'.format(i))
print(', '.join([vec.get_feature_names()[j] for j in np.argsort(r)[-20:]]))
# -
# ## Classification
#
# * We'll be testing a couple of classification models to find those with greater accuracy
# * For some of them, we'll run a grid search for hyperparameter tuning
# ### Preprocess
#
# * Slightly different from the first time: the token pattern drops words that contain numericals or are shorter than three characters
# * We'll be splitting the dataset to a train and a test set (70:30)
# +
vec = TfidfVectorizer(lowercase = True, strip_accents = 'unicode', stop_words = 'english', min_df = 5, max_df = 0.7, token_pattern=r'(?u)\b[A-Za-z]\w\w+\b')
tfidf = vec.fit_transform(df['text']).toarray()
X_train, X_test, y_train, y_test = train_test_split(tfidf, df['origMepGroupId'], test_size = 0.3)
# -
# ### Dummy Classifier
#
# * Our baseline. Any model with a score close to the dummy's will not be used further
# +
dummy_classifier = DummyClassifier(strategy = 'most_frequent')
dc_fitted = dummy_classifier.fit(X_train, y_train)
y_pred_dummy = dummy_classifier.predict(X_test)
print('Dummy Classifier accuracy score: ' + str(round(accuracy_score(y_test, y_pred_dummy), 4)))
# -
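# The `most_frequent` strategy above simply predicts the majority class for every sample, so its accuracy is the majority class's share of the test set. A hand computation with hypothetical labels:

```python
from collections import Counter

labels = [0, 1, 1, 2, 1]  # hypothetical test labels
top_label, count = Counter(labels).most_common(1)[0]
baseline_acc = count / len(labels)
print(top_label, baseline_acc)  # 1 0.6
```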
# ### Multinomial Naive Bayes
# +
mnb = MultinomialNB().fit(X_train, y_train)
y_pred_mnb = mnb.predict(X_test)
print('Multinomial Naive Bayes accuracy score: ' + str(round(accuracy_score(y_test, y_pred_mnb), 4)))
# -
# ### Logistic Regression
# +
lr = LogisticRegression(solver = 'lbfgs', multi_class = 'multinomial', class_weight = 'balanced', max_iter = 200).fit(X_train, y_train)
y_pred_lr = lr.predict(X_test)
print('Logistic regression accuracy score: ' + str(round(accuracy_score(y_test, y_pred_lr), 4)))
# -
# ### LightGBM
#
# * To find the best hyperparameters, we'll run a grid search
# * <b> WARNING :</b> Do not run unless you have the spare time!
# +
model = lgb.LGBMClassifier(n_jobs = -1)
param_grid = {
'objective' : ['multiclass', 'multiclassova'],
'n_estimators' : [220, 550, 560],
'learning_rate' : [1, 0.1, 0.2],
'num_classes': [7],
    'reg_alpha': [0, 0.5],
'reg_lambda': [0, 0.5],
'colsample_bytree': [1, 0.7]
}
gridgbm = GridSearchCV(model, param_grid, cv=5, scoring = 'accuracy', n_jobs = -1, verbose = 10)
gridgbm.fit(X_train, y_train)
print('Best parameters found by grid search are:', gridgbm.best_params_)
# +
lgbm = lgb.LGBMClassifier(objective = 'multiclassova', learning_rate = 0.2, n_estimators = 220, num_classes = 7, n_jobs = -1)
lgbm = lgbm.fit(X_train, y_train)
y_pred_lgb = lgbm.predict(X_test)
print('LightGBM accuracy score: ' + str(round(accuracy_score(y_test, y_pred_lgb), 4)))
# -
# ### XGBoost
#
# * XGBoost generally shares parameters with LightGBM
# * Slower than LightGBM
# +
xg_clas = xgb.XGBClassifier(objective = 'multi:softmax', num_class = 7, n_estimators = 560, learning_rate = 0.1, n_jobs = -1, verbosity = 1)
xg_clas = xg_clas.fit(X_train, y_train)
y_pred_xgb = xg_clas.predict(X_test)
print('XGBoost accuracy score: ' + str(round(accuracy_score(y_test, y_pred_xgb), 4)))
# -
# ### AdaBoost
#
# * Extremely slow and not accurate enough
# +
ada = AdaBoostClassifier(n_estimators = 500)
ada = ada.fit(X_train, y_train)
y_pred_ada = ada.predict(X_test)
print('AdaBoost accuracy score: ' + str(round(accuracy_score(y_test, y_pred_ada), 4)))
# -
# ### K Nearest Neighbors
#
# * During testing, KNN did not seem accurate or trustworthy enough
# +
knn = KNeighborsClassifier(n_neighbors = 7, algorithm = 'auto', weights = 'distance', n_jobs = -1)
knn = knn.fit(X_train, y_train)
y_pred_knn = knn.predict(X_test)
print('KNN accuracy score: ' + str(round(accuracy_score(y_test, y_pred_knn), 4)))
# -
# ### SGDClassifier
#
# * We'll be running another grid search to find the best hyperparameters
# +
sgd = SGDClassifier(early_stopping = True, n_jobs = -1)
param_grid = {
'loss' : ['modified_huber', 'log', 'squared_hinge'],
'eta0' : [0.1, 0.2],
'learning_rate' : ['adaptive', 'optimal'],
'n_iter_no_change' : [4, 5],
'max_iter': [500]
}
gridgbm = GridSearchCV(sgd, param_grid, cv=5, scoring = 'accuracy', n_jobs = -1, verbose = 5)
gridgbm.fit(X_train, y_train)
print('Best parameters found by grid search are:', gridgbm.best_params_)
# +
sgd = SGDClassifier(loss = 'modified_huber', eta0 = 0.1, learning_rate = 'adaptive', n_iter_no_change = 4, early_stopping = True, n_jobs = -1)
sgd = sgd.fit(X_train, y_train)
y_pred_sgd = sgd.predict(X_test)
print('SGDClassifier accuracy score: ' + str(round(accuracy_score(y_test, y_pred_sgd), 4)))
# -
# ### Multi Layer Perceptron
#
# * Sklearn's adaptation of neural networks
# +
mlp = MLPClassifier(hidden_layer_sizes=(128, 32,), solver='adam', learning_rate = 'adaptive', early_stopping = True).fit(X_train, y_train)
y_pred_mlp = mlp.predict(X_test)
print('MLPClassifier accuracy score: ' + str(round(accuracy_score(y_test, y_pred_mlp), 4)))
# -
# ### Voting Classifier
#
# * Now that we've run through our classifier models, we'll combine the results through sklearn's Voting Classifier
# +
estimators=[('multinomial_nb', mnb), ('logistic_regression', lr), ('lightgbm', lgbm), ('xgboost', xg_clas), ('sgdclassifier', sgd), ('multilayer_perceptron', mlp)]
vclf = VotingClassifier(estimators, voting = 'soft', n_jobs = -1).fit(X_train, y_train)
vclf.fit(X_train, y_train)
y_pred_vote = vclf.predict(X_test)
print('Voting classifier accuracy score: ' + str(round(vclf.score(X_test, y_test), 4)))
# -
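# Soft voting, as used above, averages the class probabilities of the member models and predicts the argmax. A minimal sketch with hypothetical probabilities for one two-class sample:

```python
# Per-class probabilities from three hypothetical models for one sample
probs = [
    [0.60, 0.40],
    [0.30, 0.70],
    [0.55, 0.45],
]
avg = [sum(col) / len(probs) for col in zip(*probs)]
pred = avg.index(max(avg))
print(pred)  # class 1 wins: avg ≈ [0.483, 0.517]
```

Note that two of the three models individually favor class 0; soft voting can overrule a majority when the minority model is confident.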
# * In most of the runs of the classification models, the voting classifier generally provides 70%+ accuracy
# * We'll create a plot to check which predictions are mostly wrong, if any. <br>
# We'll count the correct predictions along with wrong ones and create a barplot
# +
comp = pd.DataFrame()
temp = pd.DataFrame()
temp['True'] = y_test
temp['Predicted'] = y_pred_vote
temp = temp.reset_index().groupby(['True', 'Predicted']).count().reset_index()
comp['True'] = temp[temp['True'] == temp['Predicted']]['index']
comp = comp.reset_index(drop = True)
for i in range(7):
comp.loc[i, 'Predicted'] = temp[(temp['True'] != temp['Predicted']) & (i == temp['True']) & (i != temp['Predicted'])]['index'].sum()
ax = comp.plot(kind = 'bar', figsize = (15, 8), rot = 0)
ax.set_xlabel("Group ID of MEP tweets")
ax.set_ylabel("Number of tweets")
plt.show()
# -
# * As expected, the largest amounts of wrong predictions in relation to the total number of predictions are those of groups 0, 2 and 5
# * We can safely assume that it might not be possible to achieve higher accuracies on those groups without enriching our dataset with more tweets
# * Note: these observations were made with the tweets in "tweets.csv". Although unlikely, a fresh download of tweets might produce different results
| Project_3/3160245_DIMITRIOS_STEFANOU_twitter.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %matplotlib inline
import math
import matplotlib
import requests
import numpy as np
import pandas as pd
import seaborn as sns
import time
import misc_function as mf
from datetime import date, datetime
from matplotlib import pyplot as plt
from numpy.random import seed
from pylab import rcParams
from sklearn.metrics import mean_squared_error
from tqdm import tqdm_notebook
from sklearn.preprocessing import StandardScaler
from tensorflow import set_random_seed
from keras.models import Sequential
from keras.layers import Dense, Dropout, LSTM
from keras.utils import plot_model
# +
#### Input params ##################
test_size = 0.2 # proportion of dataset to be used as test set
cv_size = 0.2 # proportion of dataset to be used as cross-validation set
N = 9 # for feature at day t, we use lags from t-1, t-2, ..., t-N as features.
# initial value before tuning
lstm_units=50 # lstm param. initial value before tuning.
dropout_prob=0.5 # lstm param. initial value before tuning.
optimizer='adam' # lstm param. initial value before tuning.
epochs=1 # lstm param. initial value before tuning.
batch_size=1 # lstm param. initial value before tuning.
model_seed = 100
fontsize = 14
ticklabelsize = 14
# +
####################################
# Set seeds to ensure same output results
seed(101)
set_random_seed(model_seed)
# -
pd.options.mode.chained_assignment = None # turn off warning message
# ### Functions
# +
def get_mape(y_true, y_pred):
"""
Compute mean absolute percentage error (MAPE)
"""
y_true, y_pred = np.array(y_true), np.array(y_pred)
return np.mean(np.abs((y_true - y_pred) / y_true)) * 100
def get_x_y(data, N, offset):
"""
Split data into x (features) and y (target)
"""
x, y = [], []
for i in range(offset, len(data)):
x.append(data[i-N:i])
y.append(data[i])
x = np.array(x)
y = np.array(y)
return x, y
def get_x_scaled_y(data, N, offset):
"""
Split data into x (features) and y (target)
We scale x to have mean 0 and std dev 1, and return this.
We do not scale y here.
Inputs
data : pandas series to extract x and y
N
offset
Outputs
x_scaled : features used to predict y. Scaled such that each element has mean 0 and std dev 1
y : target values. Not scaled
mu_list : list of the means. Same length as x_scaled and y
std_list : list of the std devs. Same length as x_scaled and y
"""
x_scaled, y, mu_list, std_list = [], [], [], []
for i in range(offset, len(data)):
mu_list.append(np.mean(data[i-N:i]))
std_list.append(np.std(data[i-N:i]))
x_scaled.append((data[i-N:i]-mu_list[i-offset])/std_list[i-offset])
y.append(data[i])
x_scaled = np.array(x_scaled)
y = np.array(y)
return x_scaled, y, mu_list, std_list
def train_pred_eval_model(x_train_scaled, \
y_train_scaled, \
x_cv_scaled, \
y_cv, \
mu_cv_list, \
std_cv_list, \
lstm_units=50, \
dropout_prob=0.5, \
optimizer='adam', \
epochs=1, \
batch_size=1):
'''
Train model, do prediction, scale back to original range and do evaluation
Use LSTM here.
Returns rmse, mape and predicted values
Inputs
x_train_scaled : e.g. x_train_scaled.shape=(451, 9, 1). Here we are using the past 9 values to predict the next value
y_train_scaled : e.g. y_train_scaled.shape=(451, 1)
x_cv_scaled : use this to do predictions
y_cv : actual value of the predictions
mu_cv_list : list of the means. Same length as x_scaled and y
std_cv_list : list of the std devs. Same length as x_scaled and y
lstm_units : lstm param
dropout_prob : lstm param
optimizer : lstm param
epochs : lstm param
batch_size : lstm param
Outputs
rmse : root mean square error
mape : mean absolute percentage error
est : predictions
'''
# Create the LSTM network
model = Sequential()
model.add(LSTM(units=lstm_units, return_sequences=True, input_shape=(x_train_scaled.shape[1],1)))
model.add(Dropout(dropout_prob)) # Add dropout with a probability of 0.5
model.add(LSTM(units=lstm_units))
model.add(Dropout(dropout_prob)) # Add dropout with a probability of 0.5
model.add(Dense(1))
# Compile and fit the LSTM network
model.compile(loss='mean_squared_error', optimizer=optimizer)
model.fit(x_train_scaled, y_train_scaled, epochs=epochs, batch_size=batch_size, verbose=0)
# Do prediction
est_scaled = model.predict(x_cv_scaled)
est = (est_scaled * np.array(std_cv_list).reshape(-1,1)) + np.array(mu_cv_list).reshape(-1,1)
# Calculate RMSE and MAPE
# print("x_cv_scaled = " + str(x_cv_scaled))
# print("est_scaled = " + str(est_scaled))
# print("est = " + str(est))
rmse = math.sqrt(mean_squared_error(y_cv, est))
mape = get_mape(y_cv, est)
return rmse, mape, est
# -
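# Two quick hand-checks of the helpers above, on hypothetical values: the MAPE formula, and the per-window standardization that `get_x_scaled_y` applies (together with the inverse transform later used to map predictions back to price space).

```python
import statistics

# MAPE by hand: mean(|y - est| / y) * 100
y_true, y_pred = [100.0, 200.0], [110.0, 180.0]
mape = sum(abs(t - p) / t for t, p in zip(y_true, y_pred)) / len(y_true) * 100
print(mape)  # ≈ 10.0

# Per-window standardization and its inverse (est = est_scaled * std + mu)
window = [10.0, 12.0, 14.0]
mu = statistics.mean(window)    # 12.0
sd = statistics.pstdev(window)  # population std, matching np.std
scaled = [(v - mu) / sd for v in window]
restored = [s * sd + mu for s in scaled]
print(restored)  # recovers [10.0, 12.0, 14.0] up to float error
```

Scaling each window by its own mean and std (rather than one global scaler) keeps the features stationary even when the price level drifts over the series.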
# ### Get Data from API
# Request the API for daily price history of the selected symbol
funct = "history"
sym = "AAPL"
fromdt = "2015-01-01"
req_url = mf.stock_url(funct, sym, fromdt)
print(req_url)
#convert response to python json list
data = requests.get(req_url).json()[funct]
df = pd.DataFrame(data)
df = df.T
# ### Data Check
#convert list to dataframe
df = pd.DataFrame(columns=['date','open','high','low','close'])
for k,v in data.items():
date = datetime.strptime(k, '%Y-%m-%d')
data_row = [date.date(),float(v['open']),float(v['high']),
float(v['low']),float(v['close'])]
df.loc[-1,:] = data_row
df.index = df.index + 1
df.head()
# ### Visualize Data
# +
# Plot adjusted close over time
rcParams['figure.figsize'] = 10, 8 # width 10, height 8
ax = df.plot(x='date', y='close', style='b-', grid=True)
ax.set_xlabel("date")
ax.set_ylabel("USD")
# -
# ### Split Data into train, validation, test
# +
df = df.sort_index()
# Get sizes of each of the datasets
num_cv = int(cv_size*len(df))
num_test = int(test_size*len(df))
num_train = len(df) - num_cv - num_test
print("num_train = " + str(num_train))
print("num_cv = " + str(num_cv))
print("num_test = " + str(num_test))
# Split into train, cv, and test
train = df[:num_train]
cv = df[num_train:num_train+num_cv]
train_cv = df[:num_train+num_cv]
test = df[num_train+num_cv:]
print("train.shape = " + str(train.shape))
print("cv.shape = " + str(cv.shape))
print("train_cv.shape = " + str(train_cv.shape))
print("test.shape = " + str(test.shape))
# +
# Converting dataset into x_train and y_train
# Only scale the train dataset, not the entire dataset to prevent information leak
scaler = StandardScaler()
train_scaled = scaler.fit_transform(np.array(train['close']).reshape(-1,1))
print("scaler.mean_ = " + str(scaler.mean_))
print("scaler.var_ = " + str(scaler.var_))
# Split into x and y
x_train_scaled, y_train_scaled = get_x_y(train_scaled, N, N)
print("x_train_scaled.shape = " + str(x_train_scaled.shape)) # (num_train - N, N, 1)
print("y_train_scaled.shape = " + str(y_train_scaled.shape)) # (num_train - N, 1)
# -
# Scale the cv dataset
# Split into x and y
x_cv_scaled, y_cv, mu_cv_list, std_cv_list = get_x_scaled_y(np.array(train_cv['close']).reshape(-1,1), N, num_train)
print("x_cv_scaled.shape = " + str(x_cv_scaled.shape))
print("y_cv.shape = " + str(y_cv.shape))
print("len(mu_cv_list) = " + str(len(mu_cv_list)))
print("len(std_cv_list) = " + str(len(std_cv_list)))
# Scale the train_cv set, for the final model
scaler_final = StandardScaler()
train_cv_scaled_final = scaler_final.fit_transform(np.array(train_cv['close']).reshape(-1,1))
print("scaler_final.mean_ = " + str(scaler_final.mean_))
print("scaler_final.var_ = " + str(scaler_final.var_))
# ### LSTM Initialization
# +
# Create the LSTM network init
model = Sequential()
model.add(LSTM(units=lstm_units, return_sequences=True, input_shape=(x_train_scaled.shape[1],1)))
model.add(Dropout(dropout_prob)) # Add dropout with a probability of 0.5
model.add(LSTM(units=lstm_units))
model.add(Dropout(dropout_prob)) # Add dropout with a probability of 0.5
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer=optimizer)
model.fit(x_train_scaled, y_train_scaled, epochs=epochs, batch_size=batch_size, verbose=2)
# -
# Print model summary
model.summary()
# +
# Plot model and save to file
from IPython.display import SVG
from keras.utils import plot_model
from keras.utils.vis_utils import model_to_dot
plot_model(model, to_file='lstm_model.png')
SVG(model_to_dot(model).create(prog='dot', format='svg'))
# -
# ### Prediction
# +
# Do prediction
est_scaled = model.predict(x_cv_scaled)
est = (est_scaled * np.array(std_cv_list).reshape(-1,1)) + np.array(mu_cv_list).reshape(-1,1)
print("est.shape = " + str(est.shape))
# Calculate RMSE
rmse_bef_tuning = math.sqrt(mean_squared_error(y_cv, est))
print("RMSE = %0.3f" % rmse_bef_tuning)
# Calculate MAPE
mape_pct_bef_tuning = get_mape(y_cv, est)
print("MAPE = %0.3f%%" % mape_pct_bef_tuning)
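`get_mape` is a helper defined elsewhere in the notebook. A plausible minimal implementation (an assumption, not necessarily the author's exact code) is:

```python
import numpy as np

def get_mape(y_true, y_pred):
    """Mean absolute percentage error, returned in percent (assumed helper)."""
    y_true, y_pred = np.asarray(y_true, dtype=float), np.asarray(y_pred, dtype=float)
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100
```

For example, `get_mape([100, 200], [110, 180])` returns 10.0.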
# +
# Plot adjusted close over time
rcParams['figure.figsize'] = 18, 8 # width 18, height 8
est_df = pd.DataFrame({'est': est.reshape(-1),
'y_cv': y_cv.reshape(-1),
'date': cv['date']})
ax = train.plot(x='date', y='close', style='b-', grid=True)
ax = cv.plot(x='date', y='close', style='y-', grid=True, ax=ax)
ax = test.plot(x='date', y='close', style='g-', grid=True, ax=ax)
ax = est_df.plot(x='date', y='est', style='r-', grid=True, ax=ax)
ax.legend(['train', 'dev', 'test', 'est'])
ax.set_xlabel("date")
ax.set_ylabel("USD")
# +
### Find best 'N' days to use
# +
param_label = 'N'
param_list = range(3, 60)
error_rate = {param_label: [], 'rmse': [], 'mape_pct': []}
tic = time.time()
for param in tqdm_notebook(param_list):
# Split train into x and y
x_train_scaled, y_train_scaled = get_x_y(train_scaled, param, param)
# Split cv into x and y
x_cv_scaled, y_cv, mu_cv_list, std_cv_list = get_x_scaled_y(np.array(train_cv['close']).reshape(-1,1), param, num_train)
# Train, predict and eval model
rmse, mape, _ = train_pred_eval_model(x_train_scaled, \
y_train_scaled, \
x_cv_scaled, \
y_cv, \
mu_cv_list, \
std_cv_list, \
lstm_units=lstm_units, \
dropout_prob=dropout_prob, \
optimizer=optimizer, \
epochs=epochs, \
batch_size=batch_size)
# Collect results
error_rate[param_label].append(param)
error_rate['rmse'].append(rmse)
error_rate['mape_pct'].append(mape)
error_rate = pd.DataFrame(error_rate)
toc = time.time()
print("Minutes taken = " + str((toc-tic)/60.0))
error_rate
# +
# Plot RMSE
rcParams['figure.figsize'] = 18, 8 # width 18, height 8
ax = error_rate.plot(x='N', y='rmse', style='bx-', grid=True)
ax = error_rate.plot(x='N', y='mape_pct', style='rx-', grid=True, ax=ax)
ax.set_xlabel("N")
ax.set_ylabel("RMSE/MAPE(%)")
# -
# Get optimum value for param
temp = error_rate[error_rate['rmse'] == error_rate['rmse'].min()]
N_opt = temp['N'].values[0]
print("min RMSE = %0.3f" % error_rate['rmse'].min())
print("min MAPE = %0.3f%%" % error_rate['mape_pct'].min())
print("optimum " + param_label + " = " + str(N_opt))
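Filtering on the minimum RMSE works; `idxmin` is an equivalent, more concise way to select the best row. A small illustration with made-up numbers:

```python
import pandas as pd

# Toy tuning results (not the actual notebook values)
error_rate = pd.DataFrame({'N': [3, 4, 5], 'rmse': [2.1, 1.8, 2.5]})
# Row label of the minimum RMSE, then the full winning row
best = error_rate.loc[error_rate['rmse'].idxmin()]
```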
# ### Tuning epoch and batch size
# +
param_label = 'epochs'
param_list = [1, 10, 20, 30, 40, 50]
param2_label = 'batch_size'
param2_list = [8, 16, 32, 64, 128]
# Split train into x and y
x_train_scaled, y_train_scaled = get_x_y(train_scaled, N_opt, N_opt)
# Split cv into x and y
x_cv_scaled, y_cv, mu_cv_list, std_cv_list = get_x_scaled_y(np.array(train_cv['close']).reshape(-1,1), N_opt, num_train)
error_rate = {param_label: [], param2_label: [], 'rmse': [], 'mape_pct': []}
tic = time.time()
for param in tqdm_notebook(param_list):
for param2 in tqdm_notebook(param2_list):
# Train, predict and eval model
rmse, mape, _ = train_pred_eval_model(x_train_scaled, \
y_train_scaled, \
x_cv_scaled, \
y_cv, \
mu_cv_list, \
std_cv_list, \
lstm_units=lstm_units, \
dropout_prob=dropout_prob, \
optimizer=optimizer, \
epochs=param, \
batch_size=param2)
# Collect results
error_rate[param_label].append(param)
error_rate[param2_label].append(param2)
error_rate['rmse'].append(rmse)
error_rate['mape_pct'].append(mape)
error_rate = pd.DataFrame(error_rate)
toc = time.time()
print("Minutes taken = " + str((toc-tic)/60.0))
error_rate
# +
# Plot performance versus params
rcParams['figure.figsize'] = 10, 8 # width 10, height 8
temp = error_rate[error_rate[param2_label]==param2_list[0]]
ax = temp.plot(x=param_label, y='rmse', style='bs-', grid=True)
legend_list = [param2_label + '_' + str(param2_list[0])]
color_list = ['r', 'g', 'k', 'y', 'm', 'c', '0.75']
for i in range(1,len(param2_list)):
temp = error_rate[error_rate[param2_label]==param2_list[i]]
ax = temp.plot(x=param_label, y='rmse', color=color_list[i%len(color_list)], marker='s', grid=True, ax=ax)
legend_list.append(param2_label + '_' + str(param2_list[i]))
ax.set_xlabel(param_label)
ax.set_ylabel("RMSE")
matplotlib.rcParams.update({'font.size': 14})
plt.legend(legend_list, loc='center left', bbox_to_anchor=(1.0, 0.5)) # positions legend outside figure
# ax.set_xlim([10, 50])
# ax.set_ylim([0, 5])
# +
# Get optimum value for param and param2
temp = error_rate[error_rate['rmse'] == error_rate['rmse'].min()]
epochs_opt = temp[param_label].values[0]
batch_size_opt = temp[param2_label].values[0]
print("min RMSE = %0.3f" % error_rate['rmse'].min())
print("min MAPE = %0.3f%%" % error_rate['mape_pct'].min())
print("optimum " + param_label + " = " + str(epochs_opt))
print("optimum " + param2_label + " = " + str(batch_size_opt))
# -
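The nested loops above enumerate every (epochs, batch_size) pair; the same grid can be written more compactly with `itertools.product`:

```python
from itertools import product

param_list = [1, 10, 20, 30, 40, 50]
param2_list = [8, 16, 32, 64, 128]
# Every (epochs, batch_size) combination: 6 x 5 = 30 pairs
combos = list(product(param_list, param2_list))
```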
# ### Tuning LSTM units and dropout probability
# +
param_label = 'lstm_units'
param_list = [10, 50, 64, 128]
param2_label = 'dropout_prob'
param2_list = [0.5, 0.6, 0.7, 0.8, 0.9, 1]
error_rate = {param_label: [], param2_label: [], 'rmse': [], 'mape_pct': []}
tic = time.time()
for param in tqdm_notebook(param_list):
for param2 in tqdm_notebook(param2_list):
# Train, predict and eval model
rmse, mape, _ = train_pred_eval_model(x_train_scaled, \
y_train_scaled, \
x_cv_scaled, \
y_cv, \
mu_cv_list, \
std_cv_list, \
lstm_units=param, \
dropout_prob=param2, \
optimizer=optimizer, \
epochs=epochs_opt, \
batch_size=batch_size_opt)
# Collect results
error_rate[param_label].append(param)
error_rate[param2_label].append(param2)
error_rate['rmse'].append(rmse)
error_rate['mape_pct'].append(mape)
error_rate = pd.DataFrame(error_rate)
toc = time.time()
print("Minutes taken = " + str((toc-tic)/60.0))
error_rate
# +
# Plot performance versus params
rcParams['figure.figsize'] = 10, 8 # width 10, height 8
temp = error_rate[error_rate[param2_label]==param2_list[0]]
ax = temp.plot(x=param_label, y='rmse', style='bs-', grid=True)
legend_list = [param2_label + '_' + str(param2_list[0])]
color_list = ['r', 'g', 'k', 'y', 'm', 'c', '0.75']
for i in range(1,len(param2_list)):
temp = error_rate[error_rate[param2_label]==param2_list[i]]
ax = temp.plot(x=param_label, y='rmse', color=color_list[i%len(color_list)], marker='s', grid=True, ax=ax)
legend_list.append(param2_label + '_' + str(param2_list[i]))
ax.set_xlabel(param_label)
ax.set_ylabel("RMSE")
matplotlib.rcParams.update({'font.size': 14})
plt.legend(legend_list, loc='center left', bbox_to_anchor=(1.0, 0.5)) # positions legend outside figure
# -
# Get optimum value for param and param2
temp = error_rate[error_rate['rmse'] == error_rate['rmse'].min()]
lstm_units_opt = temp[param_label].values[0]
dropout_prob_opt = temp[param2_label].values[0]
print("min RMSE = %0.3f" % error_rate['rmse'].min())
print("min MAPE = %0.3f%%" % error_rate['mape_pct'].min())
print("optimum " + param_label + " = " + str(lstm_units_opt))
print("optimum " + param2_label + " = " + str(dropout_prob_opt))
# ### Tune optimizer
# +
param_label = 'optimizer'
param_list = ['adam', 'sgd', 'rmsprop', 'adagrad', 'adadelta', 'adamax', 'nadam']
error_rate = {param_label: [], 'rmse': [], 'mape_pct': []}
tic = time.time()
for param in tqdm_notebook(param_list):
# Train, predict and eval model
rmse, mape, _ = train_pred_eval_model(x_train_scaled, \
y_train_scaled, \
x_cv_scaled, \
y_cv, \
mu_cv_list, \
std_cv_list, \
lstm_units=lstm_units_opt, \
dropout_prob=dropout_prob_opt, \
optimizer=param, \
epochs=epochs_opt, \
batch_size=batch_size_opt)
# Collect results
error_rate[param_label].append(param)
error_rate['rmse'].append(rmse)
error_rate['mape_pct'].append(mape)
error_rate = pd.DataFrame(error_rate)
toc = time.time()
print("Minutes taken = " + str((toc-tic)/60.0))
error_rate
# +
# Plot RMSE and MAPE%
rcParams['figure.figsize'] = 10, 8 # width 10, height 8
ax = error_rate.plot(x='optimizer', y='rmse', style='bx-', grid=True)
ax = error_rate.plot(x='optimizer', y='mape_pct', style='rx-', grid=True, ax=ax)
ax.set_xticklabels(param_list)
ax.set_xlabel("Optimizer")
ax.set_ylabel("RMSE/MAPE(%)")
# -
# Get optimum value for param and param2
temp = error_rate[error_rate['rmse'] == error_rate['rmse'].min()]
optimizer_opt = temp[param_label].values[0]
print("min RMSE = %0.3f" % error_rate['rmse'].min())
print("min MAPE = %0.3f%%" % error_rate['mape_pct'].min())
print("optimum " + param_label + " = " + str(optimizer_opt))
# ### Param original vs tuned
d = {'param': ['N', 'lstm_units', 'dropout_prob', 'optimizer', 'epochs', 'batch_size', 'rmse', 'mape_pct'],
'original': [N, lstm_units, dropout_prob, optimizer, epochs, batch_size, rmse_bef_tuning, mape_pct_bef_tuning],
'after_tuning': [N_opt, lstm_units_opt, dropout_prob_opt, optimizer_opt, epochs_opt, batch_size_opt, error_rate['rmse'].min(), error_rate['mape_pct'].min()]}
tuned_params = pd.DataFrame(d)
tuned_params
# ### Train model
# +
# Split train_cv into x and y
x_train_cv_scaled, y_train_cv_scaled = get_x_y(train_cv_scaled_final, N_opt, N_opt)
# Split test into x and y
x_test_scaled, y_test, mu_test_list, std_test_list = get_x_scaled_y(np.array(df['close']).reshape(-1,1), N_opt, num_train+num_cv)
# Train, predict and eval model
rmse, mape, est = train_pred_eval_model(x_train_cv_scaled, \
y_train_cv_scaled, \
x_test_scaled, \
y_test, \
mu_test_list, \
std_test_list, \
lstm_units=lstm_units_opt, \
dropout_prob=dropout_prob_opt, \
optimizer=optimizer_opt, \
epochs=epochs_opt, \
batch_size=batch_size_opt)
# Calculate RMSE
print("RMSE on test set = %0.3f" % rmse)
# Calculate MAPE
print("MAPE on test set = %0.3f%%" % mape)
# +
# Plot adjusted close over time
rcParams['figure.figsize'] = 18, 8 # width 18, height 8
est_df = pd.DataFrame({'est': est.reshape(-1),
'date': df[num_train+num_cv:]['date']})
ax = train.plot(x='date', y='close', style='b-', grid=True)
ax = cv.plot(x='date', y='close', style='y-', grid=True, ax=ax)
ax = test.plot(x='date', y='close', style='g-', grid=True, ax=ax)
ax = est_df.plot(x='date', y='est', style='r-', grid=True, ax=ax)
ax.legend(['train', 'dev', 'test', 'predictions'])
ax.set_xlabel("date")
ax.set_ylabel("USD")
# -
from datetime import date
# Plot adjusted close over time, for test set only
rcParams['figure.figsize'] = 15, 8 # width 15, height 8
ax = train.plot(x='date', y='close', style='b-', grid=True)
ax = cv.plot(x='date', y='close', style='y-', grid=True, ax=ax)
ax = test.plot(x='date', y='close', style='g-', grid=True, ax=ax)
ax = est_df.plot(x='date', y='est', style='r-', grid=True, ax=ax)
ax.legend(['train', 'dev', 'test', 'predictions'])
ax.set_xlabel("date")
ax.set_ylabel("USD")
ax.set_xlim([date(2018, 7, 1), date(2019, 7, 20)])
ax.set_ylim([140, 250])
ax.set_title("Zoom in to test set")
# +
# Plot adjusted close over time, only for test set
rcParams['figure.figsize'] = 15, 8 # width 15, height 8
matplotlib.rcParams.update({'font.size': 14})
ax = test.plot(x='date', y='close', style='gx-', grid=True)
ax = est_df.plot(x='date', y='est', style='rx-', grid=True, ax=ax)
ax.legend(['test', 'predictions using lstm'], loc='upper left')
ax.set_xlabel("date")
ax.set_ylabel("USD")
ax.set_xlim([date(2018, 8, 1), date(2019, 7, 20)])
ax.set_ylim([140, 250])
# -
# ### Optimal result
# RMSE on test set = 3.760
# MAPE on test set = 1.452%
# Using 3 days as N
| Stock LSTM (main).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Elevation profiles extracted from SRTM over Mt Baker compared with ICESat-2
import numpy as np
import geopandas as gpd
import rasterio
import matplotlib.pyplot as plt
import glob
from topolib import gda_lib
import topolib
import gdal
# ### Read reference DEM
# +
# dem_fn = '/home/jovyan/data/srtm_elevation/SRTM3/cache/srtm_wa_subset.vrt'
# dem_fn = '/home/jovyan/data/baker_1_m/baker_2015_utm_m.vrt'
# dem = rasterio.open(dem_fn)
# -
# ### Read ATL06 data
data_dir = '/home/jovyan/data/nsidc/**/'
ATL06_list = sorted(glob.glob(data_dir + "*.h5"))
# ### Extract points from NED
# +
ATL06_fn = ATL06_list[0]
dataset_dict={'land_ice_segments':['h_li',
'delta_time',
'longitude',
'latitude'],
'land_ice_segments/ground_track':['x_atc']}
ATL06_gdf = gda_lib.ATL06_2_gdf(ATL06_fn,dataset_dict)
# -
ATL06_gdf.head()
from pygeotools.lib import geolib
# +
ned_elevs = []
lons = list(ATL06_gdf['longitude'].values)
lats = list(ATL06_gdf['latitude'].values)
# -
len(lons)
len(ned_elevs)
# %%capture
ned_elevs = geolib.get_HAE(lons[0],lats[0])
ned_elevs
# +
# ATL06_gdf['NED_elev'] = ned_elevs
# -
ATL06_gdf['NED_ATL06_diff'] = ATL06_gdf['NED_elev'] - glas_gdf_aea_rgi['h_li']
ATL06_gdf['longitude'], ATL06_gdf['latitude']
ATL06_gdf['longitude']
glas_df['geometry'] = glas_df['geometry'].apply(geolib.get_NED)
# +
# geolib.get_NED?
# -
# ### Sample DEM at ICESat-2 ATL06
# +
ATL06_fn = ATL06_list[0]
dataset_dict={'land_ice_segments':['h_li',
'delta_time',
'longitude',
'latitude'],
'land_ice_segments/ground_track':['x_atc']}
ATL06_gdf = gda_lib.ATL06_2_gdf(ATL06_fn,dataset_dict)
ATL06_gdf = ATL06_gdf.to_crs(dem.crs)
# -
poly = gda_lib.dem2polygon(dem_fn)
glas_gdf_aea_rgi = gpd.sjoin(ATL06_gdf, poly, op='intersects', how='inner')
# +
points_xy = list(zip(glas_gdf_aea_rgi.geometry.x, glas_gdf_aea_rgi.geometry.y))
refdem_sample = []
for val in dem.sample(points_xy):
refdem_sample.append(val[0])
# -
# ### Plot data
glas_gdf_aea_rgi['srtm_height'] = refdem_sample
glas_gdf_aea_rgi['diff'] = glas_gdf_aea_rgi['srtm_height'] - glas_gdf_aea_rgi['h_li']
fig, ax = plt.subplots(1, 2)
ax[0].scatter(glas_gdf_aea_rgi.index, glas_gdf_aea_rgi['h_li'])
ax[1].scatter(glas_gdf_aea_rgi.index, glas_gdf_aea_rgi['srtm_height']);
glas_gdf_aea_rgi['diff'].plot(marker='.', linestyle='none');
glas_gdf_aea_rgi['diff'].mean()
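Beyond the mean, elevation-difference comparisons often also report robust statistics, since the median and NMAD are far less sensitive to outliers than the mean and standard deviation. A small sketch with made-up differences (not the actual data):

```python
import numpy as np

# Hypothetical elevation differences (DEM minus altimetry), in metres
diff = np.array([0.5, 0.3, -0.2, 0.4, 25.0])  # one gross outlier
med = np.median(diff)
# Normalized median absolute deviation: a robust analogue of the std dev
nmad = 1.4826 * np.median(np.abs(diff - med))
```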
# #### geoid offset
glas_gdf_aea_rgi.total_bounds
fig,ax = plt.subplots()
im = ax.imshow(dem.read(1),cmap='inferno')
glas_gdf_aea_rgi.plot(ax=ax)
plt.colorbar(im,label='HAE (m WGS84)')
glas_df['geometry'] = glas_df['geometry'].apply(geolib.get_NED)
# pygeotools source: https://github.com/dshean/pygeotools.git
ds = gdal.Open('/home/jovyan/data/baker_1_m/baker_2015_dsm_7_wgs84-adj.tif')
ds = gdal.Translate('/home/jovyan/data/baker_1_m/baker_2015_dsm_7_wgs84-adj-sub.tif', ds, projWin = [-121.855968, 48.774327, -121.796142, 48.712009])
ds = None
# + language="bash"
# gdalwarp -r cubic -t_srs EPSG:4326 /home/jovyan/data/baker_1_m/baker_2015_dsm_7_m_utm-adj.tif /home/jovyan/data/baker_1_m/baker_2015_dsm_7_wgs84-adj.tif
| contributors/friedrich/NED_vs_ATL06.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Embracing web standards
# One of the main reasons why we developed the current notebook web application
# was to embrace the web technology.
#
# By being a pure web application using HTML, JavaScript, and CSS, the Notebook gets
# all web technology improvements for free. Thus, as browser support for different
# media extends, the notebook web app should remain compatible without modification.
#
# This is also true with performance of the User Interface as the speed of JavaScript VM increases.
# The other advantage of using only web technology is that the code of the interface is fully accessible to the end user and is modifiable live.
# Even if this task is not always easy, we strive to keep our code as accessible and reusable as possible.
# This should allow us, with minimal effort, to develop small extensions that customize the behavior of the web interface.
# ## Tampering with the Notebook application
# The first tools that are available to you, and that you should be aware of, are the browser "developer tools". The exact naming varies across browsers and might require the installation of extensions. But basically they allow you to inspect/modify the DOM, and interact with the JavaScript code that runs the frontend.
#
# - In Chrome and Safari, Developer tools are in the menu `View > Developer > JavaScript Console`
# - In Firefox you might need to install [Firebug](http://getfirebug.com/)
#
# These will be your best friends for debugging and trying different approaches for your extensions.
# ### Injecting JS
# #### Using magics
# The above tools can be tedious for editing long JavaScript files. Therefore we provide the `%%javascript` magic. This allows you to quickly inject JavaScript into the notebook. Still, JavaScript injected this way will not survive reloading. Hence, it is a good tool for testing and refining a script.
#
# You might see here and there people modifying css and injecting js into the notebook by reading file(s) and publishing them into the notebook.
# Not only does this often break the flow of the notebook and break its re-execution, but it also means that you need to re-execute those cells every time you need to update the code.
#
# This can still be useful in some cases, like the `%autosave` magic that allows you to control the time between each save. But this can be replaced by a JavaScript dropdown menu to select the save interval.
## you can inspect the autosave code to see what it does.
# %autosave??
# #### custom.js
# To inject JavaScript we provide an entry point: `custom.js` that allows the user to execute and load other resources into the notebook.
# JavaScript code in `custom.js` will be executed when the notebook app starts and can then be used to customize almost anything in the UI and in the behavior of the notebook.
#
# `custom.js` can be found in the `~/.jupyter/custom/custom.js`. You can share your custom.js with others.
# ##### Back to theory
from jupyter_core.paths import jupyter_config_dir
jupyter_dir = jupyter_config_dir()
jupyter_dir
# and custom js is in
import os.path
custom_js_path = os.path.join(jupyter_dir, 'custom', 'custom.js')
# my custom js
if os.path.isfile(custom_js_path):
with open(custom_js_path) as f:
print(f.read())
else:
print("You don't have a custom.js file")
# Note that `custom.js` is meant to be modified by user. When writing a script, you can define it in a separate file and add a line of configuration into `custom.js` that will fetch and execute the file.
# **Warning**: even if modification of `custom.js` takes effect immediately after a browser refresh (unless the browser cache is aggressive), *creating* a file in the `static/` directory needs a **server restart**.
# ## Exercise :
# - Create a `custom.js` in the right location with the following content:
# ```javascript
# alert("hello world from custom.js")
# ```
#
# - Restart your server and open any notebook.
# - Be greeted by custom.js
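A small Python helper for the exercise above (hypothetical; adjust the directory to your setup) that writes the `custom.js` file in the expected location:

```python
import os

def write_custom_js(jupyter_dir, content='alert("hello world from custom.js")\n'):
    """Create <jupyter_dir>/custom/custom.js with the given content
    and return its path (sketch for the exercise above)."""
    custom_dir = os.path.join(jupyter_dir, 'custom')
    os.makedirs(custom_dir, exist_ok=True)
    path = os.path.join(custom_dir, 'custom.js')
    with open(path, 'w') as f:
        f.write(content)
    return path
```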
# Have a look at [default custom.js](https://github.com/jupyter/notebook/blob/4.0.x/notebook/static/custom/custom.js) to see its content and for more explanation.
# ### For the quick ones :
# We've seen above that you can change the autosave rate by using a magic. This is typically something I don't want to type every time, and that I don't like to embed into my workflow and documents. (readers don't care what my autosave time is). Let's build an extension that allows us to do it.
# + [markdown] foo=true
# Create a dropdown element in the toolbar (DOM `Jupyter.toolbar.element`), you will need
#
# - `Jupyter.notebook.set_autosave_interval(milliseconds)`
# - know that 1 min = 60 sec, and 1 sec = 1000 ms
# -
# ```javascript
#
# var label = jQuery('<label/>').text('AutoScroll Limit:');
# var select = jQuery('<select/>')
# //.append(jQuery('<option/>').attr('value', '2').text('2min (default)'))
# .append(jQuery('<option/>').attr('value', undefined).text('disabled'))
#
# // TODO:
# //the_toolbar_element.append(label)
# //the_toolbar_element.append(select);
#
# select.change(function() {
# var val = jQuery(this).val() // val will be the value in [2]
# // TODO
# // this will be called when dropdown changes
#
# });
#
# var time_m = [1,5,10,15,30];
# for (var i=0; i < time_m.length; i++) {
# var ts = time_m[i];
# //[2] ____ this will be `val` on [1]
# // |
# // v
# select.append($('<option/>').attr('value', ts).text(ts+'min'));
# // this will fill up the dropdown `select` with
# // 1 min
# // 5 min
# // 10 min
# // 15 min
# // ...
# }
# ```
# #### A non-interactive example first
# I like my cython to be nicely highlighted
#
# ```javascript
# Jupyter.config.cell_magic_highlight['magic_text/x-cython'] = {}
# Jupyter.config.cell_magic_highlight['magic_text/x-cython'].reg = [/^%%cython/]
# ```
#
# `text/x-cython` is the name of the CodeMirror mode; the `magic_` prefix just patches the mode so that a first line containing a magic does not screw up the highlighting. `reg` is a list of regular expressions that will trigger the change of mode.
# #### Get more documentation
# Sadly, you will have to read the js source file (but there are lots of comments) and/or build the JavaScript documentation using yuidoc.
# If you have `node` and `yui-doc` installed:
# ```bash
# $ cd ~/jupyter/notebook/notebook/static/notebook/js/
# $ yuidoc . --server
# warn: (yuidoc): Failed to extract port, setting to the default :3000
# info: (yuidoc): Starting YUIDoc@0.3.45 using YUI@3.9.1 with NodeJS@0.10.15
# info: (yuidoc): Scanning for yuidoc.json file.
# info: (yuidoc): Starting YUIDoc with the following options:
# info: (yuidoc):
# { port: 3000,
# nocode: false,
# paths: [ '.' ],
# server: true,
# outdir: './out' }
# info: (yuidoc): Scanning for yuidoc.json file.
# info: (server): Starting server: http://127.0.0.1:3000
# ```
#
# and browse http://127.0.0.1:3000 to get documentation
# + [markdown] foo=true
# #### Some convenience methods
# -
# By browsing the documentation you will see that we have some convenience methods that allow us to avoid re-inventing the UI every time:
# ```javascript
# Jupyter.toolbar.add_buttons_group([
# {
# 'label' : 'run qtconsole',
# 'icon' : 'fa-terminal', // select your icon from
# // http://fontawesome.io/icons/
# 'callback': function(){Jupyter.notebook.kernel.execute('%qtconsole')}
# }
# // add more button here if needed.
# ]);
# ```
# with a [lot of icons] you can select from.
#
# [lot of icons]: http://fontawesome.io/icons/
# + [markdown] foo=true
# ## Cell Metadata
# + [markdown] foo=true
# The most requested feature is generally to be able to distinguish an individual cell in the notebook, or to run specific actions on cells.
# To do so, you can either use `Jupyter.notebook.get_selected_cell()`, or rely on `CellToolbar`. This allows you to register a set of actions and graphical elements that will be attached to individual cells.
# -
# ### Cell Toolbar
# You can see some example of what can be done by toggling the `Cell Toolbar` selector in the toolbar on top of the notebook. It provides two default `presets` that are `Default` and `slideshow`. Default allows the user to edit the metadata attached to each cell manually.
# First we define a function that takes as its first parameter a DOM element into which to inject UI elements. The second parameter is the cell this element is registered with. Then we will need to register that function and give it a name.
#
# #### Register a callback
# + language="javascript"
# var CellToolbar = Jupyter.CellToolbar
# var toggle = function(div, cell) {
# var button_container = $(div)
#
# // let's create a button that shows the current value of the metadata
# var button = $('<button/>').addClass('btn btn-mini').text(String(cell.metadata.foo));
#
# // On click, change the metadata value and update the button label
# button.click(function(){
# var v = cell.metadata.foo;
# cell.metadata.foo = !v;
# button.text(String(!v));
# })
#
# // add the button to the DOM div.
# button_container.append(button);
# }
#
# // now we register the callback under the name foo to give the
# // user the ability to use it later
# CellToolbar.register_callback('tuto.foo', toggle);
# -
# #### Registering a preset
# This function can now be part of many `presets` of the CellToolbar.
# + foo=true slideshow={"slide_type": "subslide"} language="javascript"
# Jupyter.CellToolbar.register_preset('Tutorial 1',['tuto.foo','default.rawedit'])
# Jupyter.CellToolbar.register_preset('Tutorial 2',['slideshow.select','tuto.foo'])
# -
# You should now have access to two presets:
#
# - Tutorial 1
# - Tutorial 2
#
# And check that the buttons you defined share state when you toggle preset.
# Also check that the metadata of the cell is modified when you click the button, and that after saving and reloading the metadata is still available.
# #### Exercise:
# Try to wrap all the code in a file, put this file in `{jupyter_dir}/custom/<a-name>.js`, and add
#
# ```
# require(['custom/<a-name>']);
# ```
#
# in `custom.js` to have this script automatically loaded in all your notebooks.
#
#
# `require` is provided by a [JavaScript library](http://requirejs.org/) that allows you to express dependencies. For a simple extension like the previous one we directly mutate the global namespace, but for more complex extensions you could pass a callback to the `require([...], <callback>)` call, to allow the user to pass configuration information to your plugin.
#
# In Python terms,
#
# ```javascript
# require(['a/b', 'c/d'], function( e, f){
# e.something()
# f.something()
# })
# ```
#
# could be read as
# ```python
# import a.b as e
# import c.d as f
# e.something()
# f.something()
# ```
#
#
# See for example @damianavila ["ZenMode" plugin](https://github.com/ipython-contrib/jupyter_contrib_nbextensions/blob/b29c698394239a6931fa4911440550df214812cb/src/jupyter_contrib_nbextensions/nbextensions/zenmode/main.js#L32) :
#
# ```javascript
#
# // read that as
# // import custom.zenmode.main as zenmode
# require(['custom/zenmode/main'],function(zenmode){
# zenmode.background('images/back12.jpg');
# })
# ```
#
# #### For the quickest
# Try to use [the following](https://github.com/ipython/ipython/blob/1.x/IPython/html/static/notebook/js/celltoolbar.js#L367) to bind a dropdown list to `cell.metadata.difficulty.select`.
#
# It should be able to take the 4 following values :
#
# - `<None>`
# - `Easy`
# - `Medium`
# - `Hard`
#
# We will use it to customize the output of the converted notebook depending on the tag on each cell
# +
# # %load soln/celldiff.js
# -
| 4-assets/BOOKS/Jupyter-Notebooks/Overflow/JavaScript_Notebook_Extensions.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] deletable=true editable=true
# ## How data scientists use BigQuery
#
# This notebook accompanies the presentation
# ["Machine Learning and Bayesian Statistics in minutes: How data scientists use BigQuery"](http://bit.ly/bigquery-datascience-talk)
# -
# ### Bayesian Statistics in minutes
#
# Let's say that we want to find the probability of a flight arriving on time ($\theta$) given a specific departure delay $\textbf{D}$.
# Bayes' Law tells us that this probability can be obtained for any specific departure delay using the formula:
#
# <center><font size="+5">
# $P(\theta|\textbf{D}) = P(\theta ) \frac{P(\textbf{D} |\theta)}{P(\textbf{D})} $
# </font></center>
#
# Once you have large datasets, the probabilities above are just exercises in counting and so, applying Bayesian statistics is super-easy in BigQuery.
#
# For example, let's find the probability that a flight will arrive less than 15 minutes late:
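The same counting argument can be checked numerically in plain Python. With toy counts (made-up numbers, not the real flights data), the Bayes formula recovers the directly-counted conditional probability:

```python
# Made-up counts: 1000 flights at a given departure delay, 800 of them on time;
# 100000 flights in total, 70000 on time overall.
num_flights_d, num_ontime_d = 1000, 800
tot_flights, tot_ontime = 100000, 70000

prob_D = num_flights_d / tot_flights      # P(D)
prob_theta = tot_ontime / tot_flights     # P(theta)
prob_D_theta = num_ontime_d / tot_ontime  # P(D | theta)

prob_ontime = prob_theta * prob_D_theta / prob_D
# Identical to counting directly within the group: 800 / 1000
```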
# +
# %%bigquery df
WITH rawnumbers AS (
SELECT
departure_delay,
COUNT(1) AS num_flights,
COUNTIF(arrival_delay < 15) AS num_ontime
FROM
`bigquery-samples.airline_ontime_data.flights`
GROUP BY
departure_delay
HAVING
num_flights > 100
),
totals AS (
SELECT
SUM(num_flights) AS tot_flights,
SUM(num_ontime) AS tot_ontime
FROM rawnumbers
),
bayes AS (
SELECT
departure_delay,
num_flights / tot_flights AS prob_D,
num_ontime / tot_ontime AS prob_D_theta,
tot_ontime / tot_flights AS prob_theta
FROM
rawnumbers, totals
WHERE
num_ontime > 0
)
SELECT
*, (prob_theta * prob_D_theta / prob_D) AS prob_ontime
FROM
bayes
ORDER BY
departure_delay ASC
# -
df.plot(x='departure_delay', y='prob_ontime');
# But is it right, though? What's with the weird hump for early departures (departure_delay less than zero)?
#
# First, we should verify that we can apply Bayes' Law. Grouping by the departure delay is incorrect if the departure delay is a chaotic input variable. We have to do exploratory analysis to validate that:
#
# * If a flight departs late, will it arrive late?
# * Is the relationship between the two variables non-chaotic?
# * Does the linearity hold even for extreme values of departure delays?
#
# This, too, is straightforward in BigQuery
# + deletable=true editable=true jupyter={"outputs_hidden": false}
# %%bigquery df
SELECT
departure_delay,
COUNT(1) AS num_flights,
APPROX_QUANTILES(arrival_delay, 10) AS arrival_delay_deciles
FROM
`bigquery-samples.airline_ontime_data.flights`
GROUP BY
departure_delay
HAVING
num_flights > 100
ORDER BY
departure_delay ASC
# + deletable=true editable=true jupyter={"outputs_hidden": false}
import pandas as pd
percentiles = df['arrival_delay_deciles'].apply(pd.Series)
percentiles = percentiles.rename(columns = lambda x : str(x*10) + "%")
df = pd.concat([df['departure_delay'], percentiles], axis=1)
df.head()
# + deletable=true editable=true jupyter={"outputs_hidden": false}
without_extremes = df.drop(['0%', '100%'], axis=1)
without_extremes.plot(x='departure_delay', xlim=(-30,50), ylim=(-50,50));
# -
# Note the crazy non-linearity for the top half of the flights that leave more than 20 minutes early. Most likely, these are planes that try to beat some weather situation. About half of such flights succeed (the linear bottom) and the other half don't (the non-linear top). The average is what we saw as the weird hump in the probability plot. So yes, the hump is real. The rest of the distribution is clear-cut and the Bayes probabilities are quite valid.
# Solving the flights problem using GCP tools end-to-end (from ingest to machine learning) is covered in this book:
# [<img src="https://aisoftwarellc.weebly.com/uploads/5/1/0/0/51003227/published/data-science-on-gcp_2.jpg?1563508887"></img>](https://www.amazon.com/Data-Science-Google-Cloud-Platform/dp/1491974567)
# ## Machine Learning in BigQuery
#
# Here, we will use BigQuery ML to create a deep neural network that predicts the duration of bicycle rentals in London.
# %%bigquery
CREATE OR REPLACE MODEL ch09eu.bicycle_model_dnn
TRANSFORM(
duration
, start_station_name
, CAST(EXTRACT(dayofweek from start_date) AS STRING)
as dayofweek
, CAST(EXTRACT(hour from start_date) AS STRING)
as hourofday
)
OPTIONS(input_label_cols=['duration'],
model_type='dnn_regressor', hidden_units=[32, 4])
AS
SELECT
duration, start_station_name, start_date
FROM
`bigquery-public-data`.london_bicycles.cycle_hire
# %%bigquery
SELECT * FROM ML.EVALUATE(MODEL ch09eu.bicycle_model_dnn)
# %%bigquery
SELECT * FROM ML.PREDICT(MODEL ch09eu.bicycle_model_dnn,(
SELECT
'Park Street, Bankside' AS start_station_name
,CURRENT_TIMESTAMP() AS start_date
))
# ## BigQuery and TensorFlow
#
# Batch predictions of a TensorFlow model from BigQuery!
# %%bigquery
CREATE OR REPLACE MODEL advdata.txtclass_tf
OPTIONS (model_type='tensorflow',
model_path='gs://cloud-training-demos/txtclass/export/exporter/1549825580/*')
# %%bigquery
SELECT
input,
(SELECT AS STRUCT(p, ['github', 'nytimes', 'techcrunch'][ORDINAL(s)]) prediction FROM
(SELECT p, ROW_NUMBER() OVER() AS s FROM
(SELECT * FROM UNNEST(dense_1) AS p))
ORDER BY p DESC LIMIT 1).*
FROM ML.PREDICT(MODEL advdata.txtclass_tf,
(
SELECT 'Unlikely Partnership in House Gives Lawmakers Hope for Border Deal' AS input
UNION ALL SELECT "Fitbit\'s newest fitness tracker is just for employees and health insurance members"
UNION ALL SELECT "Show HN: Hello, a CLI tool for managing social media"
))
# We use the bicycle rentals problem as a way to illustrate lots of BigQuery features in
# [<img src="https://aisoftwarellc.weebly.com/uploads/5/1/0/0/51003227/published/bigquery-the-definitive-guide.jpg?1563508864"></img>](https://www.amazon.com/Google-BigQuery-Definitive-Warehousing-Analytics/dp/1492044466)
# Copyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
| blogs/bigquery_datascience/bigquery_datascience.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
from tqdm.notebook import tqdm
import sqlalchemy as sa
from dotenv import load_dotenv
import os
# +
# Plan: load countries.json from its URL into a dataframe with pandas,
# iterate over that dataframe to collect the legislative CSV URLs,
# then move the combined data to a database with the to_sql method.
# -
df = pd.read_json("https://cdn.rawgit.com/everypolitician/everypolitician-data/080cb46/countries.json")
df.head()
def get_csv_urls(df):
csv_dict = {}
for idx, row in df.iterrows():
country_name = row["country"]
country_code = row["code"]
for leg in row["legislatures"]:
leg_name = leg["name"]
for leg_period in leg["legislative_periods"]:
st_dt = leg_period["start_date"]
if "end_date" in leg_period.keys():
ed_dt = leg_period["end_date"]
else:
ed_dt = ""
csv_url = leg_period["csv_url"]
csv_dict[(country_name, country_code, leg_name, st_dt, ed_dt)] = csv_url
return csv_dict
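# To see the key/value shape that get_csv_urls produces, here is a toy run on a
# hypothetical one-country dataframe (the country name, legislature, and URL
# below are made up; the inner function is a compact restatement of the logic
# above, not the real data):

```python
import pandas as pd

# Hypothetical one-row frame mirroring the countries.json structure.
toy = pd.DataFrame([{
    "country": "Testland",
    "code": "TL",
    "legislatures": [{
        "name": "Assembly",
        "legislative_periods": [
            {"start_date": "2020-01-01", "csv_url": "http://example.com/term.csv"},
        ],
    }],
}])

def toy_csv_urls(df):
    # Same nesting as get_csv_urls: country -> legislature -> period.
    out = {}
    for _, row in df.iterrows():
        for leg in row["legislatures"]:
            for p in leg["legislative_periods"]:
                key = (row["country"], row["code"], leg["name"],
                       p["start_date"], p.get("end_date", ""))
                out[key] = p["csv_url"]
    return out

print(toy_csv_urls(toy))
# {('Testland', 'TL', 'Assembly', '2020-01-01', ''): 'http://example.com/term.csv'}
```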
urls = get_csv_urls(df)
# +
def get_politician_data(pol_urls):
all_politician_df = []
for keys, value in tqdm(pol_urls.items()):
try:
politician_df = pd.read_csv(value)
politician_df["country_name"] = keys[0]
politician_df["country_code"] = keys[1]
politician_df["leg_name"] = keys[2]
politician_df["start_date_legislature"] = keys[3]
politician_df["end_date_legislature"] = keys[4]
all_politician_df.append(politician_df)
except Exception as e:
print(f"For the url {value} got error {e}")
return pd.concat(all_politician_df)
# -
pol_df = get_politician_data(urls)
usa = pol_df[pol_df.country_code=='US']
pol_df.to_csv("politician_data.csv", index=False)
pol_df = pd.read_csv("politician_data.csv", encoding='utf-8')
# +
class Connection(object):
def __init__(self):
'''
Load the environment variables by calling the load_dotenv method.
The database connection details are set as environment variables.
'''
load_dotenv()
def connect_to_database(self, use_postgres=True, return_as_string=False):
'''
Connect to the PostgreSQL or MySQL database and create the
engine object using sqlalchemy.
'''
if use_postgres:
conn_string = f'postgresql://{os.getenv("user")}:{os.getenv("password")}@{os.getenv("host")}:{os.getenv("port")}/{os.getenv("database")}'
else:
conn_string = f'mysql://{os.getenv("user")}:{os.getenv("password")}@{os.getenv("host")}:{os.getenv("port")}/{os.getenv("database")}?charset=utf8'
if return_as_string:
return conn_string
engine = sa.create_engine(conn_string)
return engine
# -
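# A quick sketch of the two URL shapes the class above builds (placeholder
# values, no real credentials; note that `?charset=utf8` is a MySQL driver
# option, not a PostgreSQL one):

```python
def conn_string(dialect, user, password, host, port, database):
    # SQLAlchemy-style database URL: dialect://user:password@host:port/database
    return f"{dialect}://{user}:{password}@{host}:{port}/{database}"

print(conn_string("mysql", "u", "p", "localhost", 3306, "politicians"))
# mysql://u:p@localhost:3306/politicians
```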
conn = Connection()
engine = conn.connect_to_database(use_postgres=False, return_as_string=False)  # to_sql needs an engine object, not a string
usa.to_sql("every_politician", engine, index=False, if_exists = 'append')
# +
# Figure out the Unicode handling needed to insert the non-ASCII characters into the database.
# -
| every_politician/everypolitician.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import requests
#apikey='<KEY>'
# ### 1) Make a request from the Forecast.io API for where you were born (or lived, or want to visit!).
# SEOUL = '37.532600,127.024612'
response = requests.get("https://api.forecast.io/forecast/4230d91e7452245b2479aec2fc16870d/37.532600,127.024612")
data = response.json()
data.keys()
data['timezone']
# ### 2) What's the current wind speed? How much warmer does it feel than it actually is?
# Wind speed
data['currently']['windSpeed']
# How much warmer does it feel than it actually is
data['currently']['apparentTemperature']-data['currently']['temperature']
# ### 3) The first daily forecast is the forecast for today. For the place you decided on up above, how much of the moon is currently visible?
daily = data['daily']['data']
print (daily[0]['moonPhase'])
# ### 4) What's the difference between the high and low temperatures for today?
daily[0]['temperatureMax']-daily[0]['temperatureMin']
# ### 5) Loop through the daily forecast, printing out the next week's worth of predictions. I'd like to know the high temperature for each day, and whether it's hot, warm, or cold, based on what temperatures you think are hot, warm or cold.
for temmax in daily:
print(temmax['temperatureMax'])
if temmax['temperatureMax'] < 60:
print("Oh, it's cold")
elif temmax['temperatureMax'] < 80:
print("Oh, it's warm")
else:
print("Oh, it's hot")
# ### 6) What's the weather looking like for the rest of today in Miami, Florida? I'd like to know the temperature for every hour, and if it's going to have cloud cover of more than 0.5 say "{temperature} and cloudy" instead of just the temperature.
# Miami, Florida = 25.787676,-80.224145
response = requests.get("https://api.forecast.io/forecast/4230d91e7452245b2479aec2fc16870d/25.787676,-80.224145")
data = response.json()
data['timezone']
# +
hourly = data['hourly']['data']
for weather in hourly:
if weather['cloudCover'] > 0.5:
print (weather['temperature'],"and cloudy")
else:
print(weather['temperature'])
# -
# ### 7) What was the temperature in Central Park on Christmas Day, 1980? How about 1990? 2000?
# 1980-12-25 = '346550400'
# Central Park = '40.785091,-73.968285'
response = requests.get("https://api.forecast.io/forecast/4230d91e7452245b2479aec2fc16870d/40.785091,-73.968285,346550400")
data = response.json()
data['currently']['temperature']
print("It was", data['currently']['temperature'], "degrees in Central Park on Christmas Day in 1980")
# 1990-12-25 = '662083200'
response = requests.get("https://api.forecast.io/forecast/4230d91e7452245b2479aec2fc16870d/40.785091,-73.968285,662083200")
data = response.json()
print("It was", data['currently']['temperature'], "degrees in Central Park on Christmas Day in 1990")
# 2000-12-25 = '977702400'
response = requests.get("https://api.forecast.io/forecast/4230d91e7452245b2479aec2fc16870d/40.785091,-73.968285,977702400")
data = response.json()
print("It was", data['currently']['temperature'], "degrees in Central Park on Christmas Day in 2000")
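# The Unix timestamps hard-coded in the comments above can be derived with the
# standard library instead of looked up by hand; for example, midnight UTC on
# Christmas Day 1980:

```python
from datetime import datetime, timezone

# Midnight UTC, 25 Dec 1980 -> the 346550400 used in the request above.
ts_1980 = int(datetime(1980, 12, 25, tzinfo=timezone.utc).timestamp())
print(ts_1980)  # 346550400
```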
| 06/forecast_api.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
fruits = ['orange', 'apple', 'pear', 'banana', 'kiwi', 'apple', 'banana']
fruits.count('apple')
# -
fruits.count('tangerine')
fruits.index('banana')
fruits.index('banana', 4) # Find next banana starting a position 4
fruits.reverse()
fruits
fruits.append('grape')
fruits
fruits.sort()
fruits
# +
# Using Lists as Stacks
stack = [3, 4, 5]
stack
# -
stack.append(6)
stack.append(7)
stack
stack.pop()
stack
stack.pop()
stack.pop()
stack
from collections import deque
queue = deque(["Eric", "John", "Michael"])
queue
queue.append("Terry") # Terry arrives
queue.append("Graham") # Graham arrives
queue.popleft() # The first to arrive now leaves
queue
# +
queue.pop() # The second to arrive now leaves
# -
queue # Remaining queue in order of arrival
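# Why a deque rather than a plain list for queues: popping from the front of a
# list shifts every remaining element (O(n)), while deque.popleft() is O(1).
# A small sketch of both:

```python
from collections import deque

d = deque([1, 2, 3])
print(d.popleft())  # 1, constant time

lst = [1, 2, 3]
print(lst.pop(0))   # 1 as well, but linear time on long lists
```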
# +
# List comprehension or building a list
squares = []
for x in range(10):
squares.append(x**2)
squares
# -
squares = [x**2 for x in range(10)]
squares
matrix = [
[1, 2, 3, 4],
[5, 6, 7, 8],
[9, 10, 11, 12],
]
list(zip(*matrix))
# +
# The del statement
# -
a = [-1, 1, 66.25, 333, 333, 1234.5]
del a[0]
a
del a[2:4]
a
del a[:]
a
# +
# Tuples
t = 12345, 54321, 'hello!'
t[0]
t
# -
# Tuples may be nested:
u = t, (1, 2, 3, 4, 5)
u
# +
# Tuples are immutable, so this assignment raises a TypeError:
t[0] = 88888
# +
# Sets
basket = {'apple', 'orange', 'apple', 'pear', 'orange', 'banana'}
print(basket) # show that duplicates have been removed
# +
'orange' in basket # fast membership testing
# -
'crabgrass' in basket
# +
# Demonstrate set operations on unique letters from two words
a = set('abracadabra')
b = set('alacazam')
a # unique letters in a
# -
b
a - b # letters in a but not in b
a | b # letters in a or b or both
a & b # letters in both a and b
# +
a ^ b # letters in a or b but not both
# -
# Dictionaries
dict([('sape', 4139), ('guido', 4127), ('jack', 4098)])
knights = {'gallahad': 'the pure', 'robin': 'the brave', 'today is': 25}
for k, v in knights.items():
print(k, v)
for i, v in enumerate(['tic', 'tac', 'toe']):
print(i, v)
| Section 8/Data Structures.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# RMinimum : Full - Test
import math
import random
import queue
# Test case: $X = [0, \cdots, n-1]$, $k$
# +
# User input
n = 2**10
k = 2**5
# Automatic
X = [i for i in range(n)]
# Show Testcase
print(' Testcase: ')
print('=============================')
print('X = [0, ..., ' + str(n - 1) + ']')
print('k =', k)
# -
# Algorithm: Full
# +
def rminimum(X, k, cnt = [], rec = 0):
# Generate an empty cnt list if it's not a recursive call
if cnt == []:
cnt = [0 for _ in range(max(X) + 1)]
# Convert parameters if needed
k = int(k)
n = len(X)
# Base case |X| = 3
if len(X) == 3:
if X[0] < X[1]:
cnt[X[0]] += 2
cnt[X[1]] += 1
cnt[X[2]] += 1
if X[0] < X[2]:
mini = X[0]
else:
mini = X[2]
else:
cnt[X[0]] += 1
cnt[X[1]] += 2
cnt[X[2]] += 1
if X[1] < X[2]:
mini = X[1]
else:
mini = X[2]
return mini, cnt, rec
# Run phases
W, L, cnt = phase1(X, cnt)
M, cnt = phase2(L, k, cnt)
Wnew, cnt = phase3(W, k, M, cnt)
mini, cnt, rec = phase4(Wnew, k, n, cnt, rec)
return mini, cnt, rec
# --------------------------------------------------
def phase1(X, cnt):
# Init W, L
W = [0 for _ in range(len(X) // 2)]
L = [0 for _ in range(len(X) // 2)]
# Random pairs
random.shuffle(X)
for i in range(len(X) // 2):
if X[2 * i] > X[2 * i + 1]:
W[i] = X[2 * i + 1]
L[i] = X[2 * i]
else:
W[i] = X[2 * i]
L[i] = X[2 * i + 1]
cnt[X[2 * i + 1]] += 1
cnt[X[2 * i]] += 1
return W, L, cnt
# --------------------------------------------------
def phase2(L, k, cnt):
# Generate subsets
random.shuffle(L)
subsets = [L[i * k:(i + 1) * k] for i in range((len(L) + k - 1) // k)]
# Init M
M = [0 for _ in range(len(subsets))]
# Perfectly balanced tournament tree using a Queue
for i in range(len(subsets)):
q = queue.Queue()
for ele in subsets[i]:
q.put(ele)
while q.qsize() > 1:
a = q.get()
b = q.get()
if a < b:
q.put(a)
else:
q.put(b)
cnt[a] += 1
cnt[b] += 1
M[i] = q.get()
return M, cnt
# --------------------------------------------------
def phase3(W, k, M, cnt):
# Generate subsets
random.shuffle(W)
subsets = [W[i * k:(i + 1) * k] for i in range((len(W) + k - 1) // k)]
subsets_filtered = [0 for _ in range(len(subsets))]
# Filter subsets
for i in range(len(subsets_filtered)):
subsets_filtered[i] = [elem for elem in subsets[i] if elem < M[i]]
cnt[M[i]] += len(subsets[i])
for elem in subsets[i]:
cnt[elem] += 1
# Merge subsets
Wnew = [item for sublist in subsets_filtered for item in sublist]
return Wnew, cnt
# --------------------------------------------------
def phase4(Wnew, k, n0, cnt, rec):
# Recursive call check
if len(Wnew) <= math.log(n0, 2) ** 2:
q = queue.Queue()
for ele in Wnew:
q.put(ele)
while q.qsize() > 1:
a = q.get()
b = q.get()
if a < b:
q.put(a)
else:
q.put(b)
cnt[a] += 1
cnt[b] += 1
mini = q.get()
return mini, cnt, rec
else:
rec += 1
return rminimum(Wnew, k, cnt, rec)
# ==================================================
# Testcase
mini, cnt, rec = rminimum(X, k)
# -
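# The queue-based tournament that phase2 and phase4 both use can be isolated as
# a short sketch: pairs of elements are compared and the smaller one re-enters
# the queue, so each element loses at most once and the eventual winner is the
# minimum after O(log n) rounds along its path through the tree.

```python
import queue

def tournament_min(values):
    # Perfectly balanced tournament tree via a FIFO queue, as in phase2/phase4.
    q = queue.Queue()
    for v in values:
        q.put(v)
    while q.qsize() > 1:
        a, b = q.get(), q.get()
        q.put(a if a < b else b)  # the smaller element advances
    return q.get()

print(tournament_min([5, 2, 9, 1, 7]))  # 1
```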
# Result:
# +
def test(X, k, mini, cnt, rec):
print('')
print('Test case n / k:', len(X), '/', k)
print('====================================')
print('Fragile Complexity:')
print('-------------------')
print('f_min :', cnt[0])
print('f_rem :', max(cnt[1:]))
print('f_n :', max(cnt))
print('Work :', int(sum(cnt)/2))
print('====================================')
print('Process:')
print('--------')
print('Minimum :', mini)
print('n :', len(X))
print('log(n) :', round(math.log(len(X), 2), 2))
print('log(k) :', round(math.log(k, 2), 2))
print('lg / lglg :', round(math.log(len(X), 2) / math.log(math.log(len(X), 2), 2)))
print('n / log(n) :', round(len(X) / math.log(len(X), 2)))
print('====================================')
return
# Test case
test(X, k, mini, cnt, rec)
# -
| jupyter/jupyter_algo.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## In Deep Learning
#
# * Many layers: compositionality
# * Convolutions: locality + stationarity of images
# * Pooling: Invariance of object class to translations
# +
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
import matplotlib.pyplot as plt
import numpy
def get_n_params(model):
np = 0
for p in list(model.parameters()):
np += p.nelement()
return np
# -
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# ## Load MNIST
# +
input_size = 28*28
output_size = 10
train_loader = torch.utils.data.DataLoader(
datasets.MNIST('../data', train=True, download=True, transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])), batch_size=64, shuffle=True)
test_loader = torch.utils.data.DataLoader(
datasets.MNIST('../data', train=False, transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])), batch_size=1000, shuffle=True)
# -
class CNN(nn.Module):
def __init__(self, input_size, n_feature, output_size):
super(CNN, self).__init__()
self.n_feature = n_feature
self.conv1 = nn.Conv2d(in_channels=1, out_channels=n_feature, kernel_size=5)
self.conv2 = nn.Conv2d(n_feature, n_feature, kernel_size=5)
self.fc1 = nn.Linear(n_feature*4*4, 50)
self.fc2 = nn.Linear(50, 10)
def forward(self, x, verbose=False):
x = self.conv1(x)
x = F.relu(x)
x = F.max_pool2d(x, kernel_size=2)
x = self.conv2(x)
x = F.relu(x)
x = F.max_pool2d(x, kernel_size=2)
x = x.view(-1, self.n_feature*4*4)
x = self.fc1(x)
x = F.relu(x)
x = self.fc2(x)
x = F.log_softmax(x, dim=1)
return x
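# A quick shape check explaining the `n_feature*4*4` in `fc1`: each 5x5
# convolution with no padding trims 4 pixels from each spatial dimension, and
# each 2x2 max-pool halves it, so 28 -> 24 -> 12 -> 8 -> 4. A torch-free sketch:

```python
def conv_pool_out(size, kernel=5, pool=2):
    # Valid convolution followed by a non-overlapping max-pool.
    return (size - kernel + 1) // pool

print(conv_pool_out(28))                 # 12 after conv1 + pool
print(conv_pool_out(conv_pool_out(28)))  # 4 after conv2 + pool
```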
# +
accuracy_list = []
def train(epoch, model, perm=torch.arange(0, 784).long()):
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# send to device
data, target = data.to(device), target.to(device)
optimizer.zero_grad()
output = model(data)
loss = F.nll_loss(output, target)
loss.backward()
optimizer.step()
if batch_idx % 100 == 0:
print("Train epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}".format(epoch, batch_idx*len(data),
len(train_loader.dataset),
100.*batch_idx/len(train_loader), loss.item()))
def test(model):
model.eval()
test_loss = 0
correct = 0
for data, target in test_loader:
data, target = data.to(device), target.to(device)
output = model(data)
test_loss += F.nll_loss(output, target, reduction='sum').item()
pred = output.data.max(1, keepdim=True)[1]
correct += pred.eq(target.data.view_as(pred)).cpu().sum().item()
test_loss /= len(test_loader.dataset)
accuracy = 100. * correct / len(test_loader.dataset)
accuracy_list.append(accuracy)
print("\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n".format(test_loss, correct, len(test_loader.dataset), accuracy))
# +
n_features = 6
model_cnn = CNN(input_size, n_features, output_size)
model_cnn.to(device)
optimizer = optim.SGD(model_cnn.parameters(), lr=0.01, momentum=0.5)
print('Number of parameters: {}'.format(get_n_params(model_cnn)))
for epoch in range(0, 1):
train(epoch, model_cnn)
test(model_cnn)
# -
| NYU Deep Learning/Introduction/Parameters sharing/ConvNet.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Amazon S3
# > Introduction to AWS S3 & Glacier
#
# - toc: true
# - comments: true
# - author: <NAME>
# - categories: [aws,S3,Glacier]
# ### Introduction
#
# Amazon S3 serves as the durable target storage for Amazon Kinesis and Amazon Elastic MapReduce (Amazon EMR),
# it is used as the storage for Amazon Elastic Block Store (Amazon EBS) and Amazon Relational Database
# Service (Amazon RDS) snapshots, and it is used as a data staging or loading storage mechanism for
# Amazon Redshift and Amazon DynamoDB.
#
# Amazon S3 objects are automatically replicated on multiple devices in multiple facilities within a region.
# You can create and use multiple buckets; you can have up to 100 per account by default.
# Objects can range in size from 0 bytes up to 5 TB, and a single bucket can store an unlimited number of objects.
#
# The native interface for Amazon S3 is a REST (Representational State Transfer) API.
# With the REST interface, you use standard HTTP or HTTPS requests to create and delete buckets,
# list keys, and read and write objects.
#
# Amazon S3 achieves high durability by automatically storing data redundantly on multiple devices in
# multiple facilities within a region. It is designed to sustain the concurrent loss of data in two
# facilities without loss of user data.
# ### Access Control
# Amazon S3 is secure by default; when you create a bucket or object in Amazon S3, only you have access.
# To allow you to give controlled access to others, Amazon S3 provides both coarse-grained access
# controls (Amazon S3 Access Control Lists [ACLs]), and fine-grained access controls (Amazon S3
# bucket policies, AWS Identity and Access Management [IAM] policies, and query-string authentication).
#
# Using an Amazon S3 bucket policy, you can specify who can access the bucket, from where (by Classless
# Inter-Domain Routing [CIDR] block or IP address), and during what time of day.
#
# Finally, IAM policies may be associated directly with IAM principals that grant access to an Amazon
# S3 bucket, just as it can grant access to any AWS service and resource.
#
# Lifecycle configurations are attached to the bucket and can apply to all objects in the bucket or
# only to objects specified by a prefix.
# ### Encryption
#
# To encrypt your Amazon S3 data in flight, you can use the Amazon S3 Secure Sockets Layer (SSL) API
# endpoints. This ensures that all data sent to and from Amazon S3 is encrypted while in transit
# using the HTTPS protocol.
#
# To encrypt your Amazon S3 data at rest, you can use several variations of Server-Side Encryption (SSE).
# Amazon S3 encrypts your data at the object level as it writes it to disks in its data centers and
# decrypts it for you when you access it. All SSE performed by Amazon S3 and AWS Key Management Service
# (Amazon KMS) uses the 256-bit Advanced Encryption Standard (AES). You can also encrypt your Amazon
# S3 data at rest using Client-Side Encryption, encrypting your data on the client before sending it
# to Amazon S3.
#
# SSE-S3 (AWS-Managed Keys)
# This is a fully integrated “check-box-style” encryption solution where AWS handles the key management
# and key protection for Amazon S3. Every object is encrypted with a unique key. The actual object key
# itself is then further encrypted by a separate master key. A new master key is issued at least monthly,
# with AWS rotating the keys. Encrypted data, encryption keys, and master keys are all stored separately
# on secure hosts, further enhancing protection.
#
# SSE-KMS (AWS KMS Keys)
# This is a fully integrated solution where Amazon handles your key management and protection for
# Amazon S3, but where you manage the keys. SSE-KMS offers several additional benefits compared to SSE-S3.
# Using SSE-KMS, there are separate permissions for using the master key, which provide protection against
# unauthorized access to your objects stored in Amazon S3 and an additional layer of control.
#
# SSE-C (Customer-Provided Keys)
# This is used when you want to maintain your own encryption keys but don’t want to manage or implement
# your own client-side encryption library. With SSE-C, AWS will do the encryption/decryption of your
# objects while you maintain full control of the keys used to encrypt/decrypt the objects in Amazon S3.
#
# Client-Side Encryption
# Client-side encryption refers to encrypting data on the client side of your application before sending
# it to Amazon S3.
# ### Pointers
#
# Pre-Signed URLs
# All Amazon S3 objects by default are private, meaning that only the owner has access. However, the
# object owner can optionally share objects with others by creating a pre-signed URL, using their own
# security credentials to grant time-limited permission to download the objects. When you create a
# pre-signed URL for your object, you must provide your security credentials and specify a bucket name,
# an object key, the HTTP method (GET to download the object), and an expiration date and time. The
# pre-signed URLs are valid only for the specified duration. This is particularly useful to protect
# against “content scraping” of web content such as media files stored in Amazon S3.
#
# Multipart Upload
# To better support uploading or copying of large objects, Amazon S3 provides the Multipart Upload API.
# This allows you to upload large objects as a set of parts, which generally gives better network
# utilization (through parallel transfers), the ability to pause and resume, and the ability to
# upload objects where the size is initially unknown.
#
# Range GETs
# It is possible to download (GET) only a portion of an object in both Amazon S3 and Amazon Glacier by
# using something called a Range GET. Using the Range HTTP header in the GET request or equivalent
# parameters in one of the SDK wrapper libraries, you specify a range of bytes of the object. This can
# be useful in dealing with large objects when you have poor connectivity or to download only a known
# portion of a large Amazon Glacier backup.
#
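# A Range GET is just an ordinary GET with an HTTP `Range` header; this sketch
# only builds the header dict (no network call is made, and any bucket or key
# you would use it with is hypothetical):

```python
def range_header(first_byte, last_byte):
    # HTTP Range header covering bytes first_byte..last_byte inclusive,
    # e.g. the first kilobyte of an object.
    return {"Range": f"bytes={first_byte}-{last_byte}"}

print(range_header(0, 1023))  # {'Range': 'bytes=0-1023'}
```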
# Cross-Region Replication
# Cross-region replication is a feature of Amazon S3 that allows you to asynchronously replicate all
# new objects in the source bucket in one AWS region to a target bucket in another region. Any metadata
# and ACLs associated with the object are also part of the replication. After you set up cross-region
# replication on your source bucket, any changes to the data, metadata, or ACLs on an object trigger a
# new replication to the destination bucket. To enable cross-region replication, versioning must be
# turned on for both source and destination buckets, and you must use an IAM policy to give Amazon
# S3 permission to replicate objects on your behalf.
#
# Logging
# In order to track requests to your Amazon S3 bucket, you can enable Amazon S3 server access logs.
# Logging is off by default, but it can easily be enabled.
#
# Event Notifications
# Amazon S3 event notifications can be sent in response to actions taken on objects uploaded or stored
# in Amazon S3. Event notifications enable you to run workflows, send alerts, or perform other actions
# in response to changes in your objects stored in Amazon S3. You can use Amazon S3 event notifications
# to set up triggers to perform actions, such as transcoding media files when they are uploaded, processing
# data files when they become available, and synchronizing Amazon S3 objects with other data stores.
#
# Another common pattern is to use Amazon S3 as bulk “blob” storage for data, while keeping an index to
# that data in another service, such as Amazon DynamoDB or Amazon RDS. This allows quick searches and
# complex queries on key names without listing keys continually.
# ### Amazon Glacier
#
# Archives
# In Amazon Glacier, data is stored in archives. An archive can contain up to 40TB of data, and you
# can have an unlimited number of archives.
#
# Vaults
# Vaults are containers for archives. Each AWS account can have up to 1,000 vaults. You can control
# access to your vaults and the actions allowed using IAM policies or vault access policies.
#
# Vaults Locks
# You can easily deploy and enforce compliance controls for individual Amazon Glacier vaults with a
# vault lock policy. You can specify controls such as Write Once Read Many (WORM) in a vault lock
# policy and lock the policy from future edits. Once locked, the policy can no longer be changed.
#
#
| _notebooks/2020-06-07-Amazon Simple Storage Service and Amazon Glacier Storage.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.7 64-bit
# language: python
# name: python3
# ---
# import the necessary libraries
#
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from collections import Counter
import xlsxwriter
# import the required data
df = pd.read_excel("C:/Users/User/Desktop/Analyzed_Data/Analysis-of-Georgian-Data/Vehicles-Statistic-Georgia/Data/Alienability Data/Alienation 2019.xlsx")
# write down how many columns and how many rows we want them to appear
pd.set_option('display.max_columns', None) # number of columns
pd.set_option('display.max_rows', 18) # number of rows
# Create functions to simplify the case in the future
# Visualization
# +
""" drawing with a percentage """
def Build_Barh_sizes( key_Phrases , Quantity , Filtered_data ,style , x_axis_visible, sizes, title ):
fig, ax = plt.subplots(figsize=(sizes))
""" COLOR--COLOR--COLOR--COLOR--COLOR--COLOR--COLOR--COLOR """
ax.set_facecolor('xkcd:black')
fig.patch.set_facecolor('xkcd:black')
ax.spines['bottom'].set_color('white')
ax.spines['top'].set_color('white')
ax.spines['left'].set_color('white')
ax.spines['right'].set_color('white')
ax.xaxis.label.set_color('white')
ax.yaxis.label.set_color('white')
ax.grid(alpha=0.2)
ax.title.set_color('white')
ax.tick_params(axis='x', colors='white')
ax.tick_params(axis='y', colors='white')
""" COLOR--COLOR--COLOR--COLOR--COLOR--COLOR--COLOR--COLOR """
langs = key_Phrases # add key_Phrases into new variable
langs_users_num = np.array(Quantity)
total = Filtered_data
percent = langs_users_num/total*100
new_labels = [i+' {:.2f}%'.format(j) for i, j in zip( langs, percent )] # percentage
plt.barh(langs, langs_users_num) # Chart
plt.yticks( range(len(langs)), new_labels) # Ylabel
for spine in ax.spines.values(): # vertical lines exist - clear
spine.set_visible(False)
ax.axes.get_xaxis().set_visible(x_axis_visible) # X-label, numbers show-hide
ax.tick_params(axis="y", left=False)
plt.style.use(style) # chart style
plt.title(title) # add title
# change the fontsize of the xtick and ytick labels
plt.rc('xtick', labelsize=13)
plt.rc('ytick', labelsize=13)
# change the fontsize of axes title
plt.rc('axes', titlesize=18)
# Add padding between axes and labels
ax.xaxis.set_tick_params(pad = 10)
ax.yaxis.set_tick_params(pad = 10)
plt.show()
# -
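# The percentage labels inside Build_Barh_sizes come from a simple
# share-of-total computation; isolated here with made-up counts:

```python
import numpy as np

langs = ["car", "truck"]                  # hypothetical categories
counts = np.array([30, 10])               # hypothetical quantities
total = counts.sum()

# Same formatting trick as in Build_Barh_sizes: append each bar's share.
labels = [name + ' {:.2f}%'.format(share) for name, share in zip(langs, counts / total * 100)]
print(labels)  # ['car 75.00%', 'truck 25.00%']
```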
# The most commonly used functions
def sort_Dictionary(Dictionary, reverse = False): # Sorts by increase or decrease
return dict(sorted(Dictionary.items(), key = lambda x: x[1], reverse = reverse))
# +
def Quantity_key_Phrases_all(Dictionary, Increase_decrease):
global key_Phrases # We declare it a global variable so that other functions can see and use it
sorted_dict = sort_Dictionary(Dictionary, Increase_decrease) # Sort by increase or decrease, as specified
key_Phrases = [] # We create a list to store phrases
for i in sorted_dict:
key_Phrases.append(i) # Add phrases to the list
# -
def Show_first(Dictionary, show_first_item, Other_Show_Hide):
global key_Phrases # We declare it a global variable so that other functions can see and use it
global Quantity # We declare it a global variable so that other functions can see and use it
global Total # We declare it a global variable so that other functions can see and use it
Dictionary = sort_Dictionary(Dictionary, False)
Total = sum(Dictionary.values()) # sum the individual quantities to get the total quantity
first_items = list(Dictionary.items())[(len(Dictionary) - show_first_item):len(Dictionary)] # keep only the entries we want to draw
sorted_dict = dict(first_items) # turn the kept items back into a dictionary
key_Phrases = [] # We create a list to store phrases
Quantity = [] # We create a list to store the quantity of each
for i in sorted_dict:
key_Phrases.append(i) # Add phrases to the list
Quantity.append(sorted_dict[i]) # add the quantity of each to the list
print("Max is " + str(len(Dictionary))) # print the total number of phrases
if Other_Show_Hide == True:
key_Phrases.insert(0, "Others") # add a label for the remaining information
Quantity.insert(0, (Total - sum(Quantity))) # add the combined quantity of the others
# Create an Excel file and add information for Power BI
# +
#WorkBook = xlsxwriter.Workbook("Alienation 2019 (processed).xlsx") # Create a new Excel file in the same directory
#WorkBook = xlsxwriter.Workbook('C:/Users/User/Desktop/Analyzed_Data/Analysis-of-Georgian-Data/Vehicles-Statistic-Georgia/Analyzed Data/Alienability/Alienation 2019 (processed).xlsx') # Create a new Excel file in another directory
# To display information in an Excel file, write WorkBook.close() at the bottom of this file
# +
# Creates new sheets in Excel and adds information
def Add_excel(key_Phrases, Quantity, Sheet_Name):
key_Phrases.reverse() # Rotates the list
Quantity.reverse() # Rotates the list
if Sheet_Name in WorkBook.sheetnames: # Reuse the sheet if a similar page already exists
outSheet = WorkBook.get_worksheet_by_name(Sheet_Name)
else:
outSheet = WorkBook.add_worksheet(Sheet_Name) # Otherwise create it
outSheet.write("A1", "ფრაზები") # Writes the title ("Phrases")
outSheet.write("B1", "რაოდენობა") # Writes the title ("Quantity")
for items in range(len(key_Phrases)):
outSheet.write(items + 1, 0, key_Phrases[items]) # Writes the rest of the information
outSheet.write(items + 1, 1, Quantity[items]) # Writes the rest of the information
# +
# We try to draw a drawing here through one function and add information in Excel as well
def Build_Barh_sizes_excel(key_Phrases , Quantity , Filtered_data ,style , x_axis_visible, sizes, title):
Build_Barh_sizes(key_Phrases , Quantity , Filtered_data ,style , x_axis_visible, sizes, title)
#Add_excel(key_Phrases , Quantity, title) # This function adds information to Excel
# -
# *
# *
# *
# Let's start processing the data directly
# +
Type_of_vehicle = df["ტრანსპორტის ტიპი"] # We get information on types of transport
Type_of_vehicle_Counter = Counter(Type_of_vehicle) # count the number of times the variables are repeated
Quantity_key_Phrases_all(Type_of_vehicle_Counter, False) # We use the function to collect the phrases to which we need to build a drawing
key_Phrases # Phrases whose quantity we need to find
union = {} # create a dictionary to save the final answers
for item in key_Phrases:
Quantity_of_Transport = df[(Type_of_vehicle == item) & (df["რაოდენობა"].notna())] # get the rows matching the phrase; notna() drops actual NaNs, unlike comparing to the string "NaN"
Quantity = Quantity_of_Transport["რაოდენობა"] # store quantitative information in the new variable
Quantity = sum(Quantity) # We count the actual number of specific phrases
union[item] = Quantity # add the actual number along with its corresponding phrase in the answers dictionary
Show_first(union , 7, False) # With this function we calculate the total quantity, and choose how much information to display on the drawing
Build_Barh_sizes_excel( key_Phrases , Quantity, Total ,"fivethirtyeight", True , (18,8), "ტრანსპორტის ტიპი") # draw the drawing
# +
Type_of_vehicle = df["ძარის ტიპი"] # get information about certain characteristics
Type_of_vehicle_Counter = Counter(Type_of_vehicle) # count the number of times the variables are repeated
Quantity_key_Phrases_all(Type_of_vehicle_Counter, False) # We use the function to collect the phrases to which we need to build a drawing
key_Phrases # Phrases whose quantity we need to find
union = {} # create a dictionary to save the final answers
for item in key_Phrases:
Quantity_of_Transport = df[ (Type_of_vehicle == item) & ( df["რაოდენობა"] != "NaN")] # get the information according to the required phrases
Quantity = Quantity_of_Transport["რაოდენობა"] # store quantitative information in the new variable
Quantity = sum(Quantity) # We count the actual number of specific phrases
union[item] = Quantity # add the actual number along with its corresponding phrase in the answers dictionary
Show_first(union , 15, True) # With this function we calculate the total quantity, and choose how much information to display on the drawing
Build_Barh_sizes_excel( key_Phrases , Quantity, Total ,"fivethirtyeight", True , (18,8), "ძარის ტიპი") # draw the drawing
# +
Type_of_vehicle = df["მარკა"] # get information about certain characteristics
Type_of_vehicle_Counter = Counter(Type_of_vehicle) # count the number of times the variables are repeated
Quantity_key_Phrases_all(Type_of_vehicle_Counter, False) # We use the function to collect the phrases to which we need to build a drawing
key_Phrases # Phrases whose quantity we need to find
union = {} # create a dictionary to save the final answers
for item in key_Phrases:
Quantity_of_Transport = df[ (Type_of_vehicle == item) & ( df["რაოდენობა"] != "NaN")] # get the information according to the required phrases
Quantity = Quantity_of_Transport["რაოდენობა"] # store quantitative information in the new variable
Quantity = sum(Quantity) # We count the actual number of specific phrases
union[item] = Quantity # add the actual number along with its corresponding phrase in the answers dictionary
Show_first(union , 15, True) # With this function we calculate the total quantity, and choose how much information to display on the drawing
Build_Barh_sizes_excel( key_Phrases , Quantity, Total ,"fivethirtyeight", True , (18,10), "მარკა") # draw the drawing
# +
Type_of_vehicle = df["მოდელი"] # get information about certain characteristics
Type_of_vehicle_Counter = Counter(Type_of_vehicle) # count the number of times the variables are repeated
Quantity_key_Phrases_all(Type_of_vehicle_Counter, False) # We use the function to collect the phrases to which we need to build a drawing
key_Phrases # Phrases whose quantity we need to find
union = {} # create a dictionary to save the final answers
for item in key_Phrases:
Quantity_of_Transport = df[ (Type_of_vehicle == item) & ( df["რაოდენობა"] != "NaN")] # get the information according to the required phrases
Quantity = Quantity_of_Transport["რაოდენობა"] # store quantitative information in the new variable
Quantity = sum(Quantity) # We count the actual number of specific phrases
union[item] = Quantity # add the actual number along with its corresponding phrase in the answers dictionary
Show_first(union , 15, True) # With this function we calculate the total quantity, and choose how much information to display on the drawing
Build_Barh_sizes_excel( key_Phrases , Quantity, Total ,"fivethirtyeight", True , (18,10), "მოდელი") # draw the drawing
# +
Type_of_vehicle = df["გამოშვების წელი"] # get information about certain characteristics
Type_of_vehicle_Counter = Counter(Type_of_vehicle) # count the number of times the variables are repeated
Quantity_key_Phrases_all(Type_of_vehicle_Counter, False) # We use the function to collect the phrases to which we need to build a drawing
key_Phrases # Phrases whose quantity we need to find
union = {} # create a dictionary to save the final answers
for item in key_Phrases:
Quantity_of_Transport = df[ (Type_of_vehicle == item) & ( df["რაოდენობა"] != "NaN")] # get the information according to the required phrases
Quantity = Quantity_of_Transport["რაოდენობა"] # store quantitative information in the new variable
Quantity = sum(Quantity) # We count the actual number of specific phrases
union[item] = Quantity # add the actual number along with its corresponding phrase in the answers dictionary
Show_first(union , 15, True) # With this function we calculate the total quantity, and choose how much information to display on the drawing
key_Phraseso = [] # New list holding strings instead of ints so the plotting code does not error
for i in key_Phrases:
key_Phraseso.append(str(i)) # We store it in strings and put it in a list
Build_Barh_sizes_excel( key_Phraseso , Quantity, Total ,"fivethirtyeight", True , (18,10), "გამოშვების წელი") # draw the drawing
# +
Type_of_vehicle = df["ფერი"] # get information about certain characteristics
Type_of_vehicle_Counter = Counter(Type_of_vehicle) # count the number of times the variables are repeated
Quantity_key_Phrases_all(Type_of_vehicle_Counter, False) # We use the function to collect the phrases to which we need to build a drawing
key_Phrases # Phrases whose quantity we need to find
union = {} # create a dictionary to save the final answers
for item in key_Phrases:
Quantity_of_Transport = df[ (Type_of_vehicle == item) & ( df["რაოდენობა"] != "NaN")] # get the information according to the required phrases
Quantity = Quantity_of_Transport["რაოდენობა"] # store quantitative information in the new variable
Quantity = sum(Quantity) # We count the actual number of specific phrases
union[item] = Quantity # add the actual number along with its corresponding phrase in the answers dictionary
Show_first(union , 9, True) # With this function we calculate the total quantity, and choose how much information to display on the drawing
Build_Barh_sizes_excel( key_Phrases , Quantity, Total ,"fivethirtyeight", True , (18,10), "ფერი") # draw the drawing
# +
Type_of_vehicle = df["საწვავის ტიპი"] # get information about certain characteristics
Type_of_vehicle_Counter = Counter(Type_of_vehicle) # count the number of times the variables are repeated
Quantity_key_Phrases_all(Type_of_vehicle_Counter, False) # We use the function to collect the phrases to which we need to build a drawing
key_Phrases # Phrases whose quantity we need to find
union = {} # create a dictionary to save the final answers
for item in key_Phrases:
Quantity_of_Transport = df[ (Type_of_vehicle == item) & ( df["რაოდენობა"] != "NaN")] # get the information according to the required phrases
Quantity = Quantity_of_Transport["რაოდენობა"] # store quantitative information in the new variable
Quantity = sum(Quantity) # We count the actual number of specific phrases
union[item] = Quantity # add the actual number along with its corresponding phrase in the answers dictionary
Show_first(union , 4, True) # With this function we calculate the total quantity, and choose how much information to display on the drawing
Build_Barh_sizes_excel( key_Phrases , Quantity, Total ,"fivethirtyeight", True , (18,10), "საწვავის ტიპი") # draw the drawing
# +
Type_of_vehicle = df["ძრავის მოცულობა"] # get information about certain characteristics
Type_of_vehicle_Counter = Counter(Type_of_vehicle) # count the number of times the variables are repeated
Quantity_key_Phrases_all(Type_of_vehicle_Counter, False) # We use the function to collect the phrases to which we need to build a drawing
key_Phrases # Phrases whose quantity we need to find
union = {} # create a dictionary to save the final answers
for item in key_Phrases:
Quantity_of_Transport = df[ (Type_of_vehicle == item) & ( df["რაოდენობა"] != "NaN")] # get the information according to the required phrases
Quantity = Quantity_of_Transport["რაოდენობა"] # store quantitative information in the new variable
Quantity = sum(Quantity) # We count the actual number of specific phrases
union[item] = Quantity # add the actual number along with its corresponding phrase in the answers dictionary
Show_first(union , 15, True) # With this function we calculate the total quantity, and choose how much information to display on the drawing
key_Phraseso = [] # New list holding strings instead of ints so the plotting code does not error
for i in key_Phrases:
key_Phraseso.append(str(i)) # We store it in strings and put it in a list
Build_Barh_sizes_excel( key_Phraseso , Quantity, Total ,"fivethirtyeight", True , (18,10), "ძრავის მოცულობა") # draw the drawing
# +
Type_of_vehicle = df["მფლობელის ტიპი"] # get information about certain characteristics
Type_of_vehicle_Counter = Counter(Type_of_vehicle) # count the number of times the variables are repeated
Quantity_key_Phrases_all(Type_of_vehicle_Counter, False) # We use the function to collect the phrases to which we need to build a drawing
key_Phrases # Phrases whose quantity we need to find
union = {} # create a dictionary to save the final answers
for item in key_Phrases:
Quantity_of_Transport = df[ (Type_of_vehicle == item) & ( df["რაოდენობა"] != "NaN")] # get the information according to the required phrases
Quantity = Quantity_of_Transport["რაოდენობა"] # store quantitative information in the new variable
Quantity = sum(Quantity) # We count the actual number of specific phrases
union[item] = Quantity # add the actual number along with its corresponding phrase in the answers dictionary
Show_first(union , 2, False) # With this function we calculate the total quantity, and choose how much information to display on the drawing
Build_Barh_sizes_excel( key_Phrases , Quantity, Total ,"fivethirtyeight", True , (18,10), "მფლობელის ტიპი") # draw the drawing
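# Each cell above repeats the same aggregate-and-collect loop with only the column name changing. As a hedged sketch (the English column and function names here are hypothetical placeholders, and `notna()` stands in for the `!= "NaN"` string comparison), the pattern could be factored into a single helper:

```python
from collections import Counter

import pandas as pd

def sum_quantity_by_category(df, category_col, quantity_col="quantity"):
    # For each distinct value of `category_col`, sum `quantity_col`,
    # mirroring the per-column loop repeated in the cells above.
    union = {}
    for item in Counter(df[category_col]):
        subset = df[(df[category_col] == item) & df[quantity_col].notna()]
        union[item] = subset[quantity_col].sum()
    return union
```

# Each repeated cell would then reduce to one call such as `sum_quantity_by_category(df, "ტრანსპორტის ტიპი", "რაოდენობა")`.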
| Vehicles-Statistic/Alienability (Processing)/2019_Data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:carnd-term1]
# language: python
# name: conda-env-carnd-term1-py
# ---
# +
import csv
import cv2
import numpy as np
from keras.models import Sequential
from keras.layers import Lambda
from keras.layers.core import Dense, Activation, Flatten, Dropout
from keras.layers.convolutional import Convolution2D, Cropping2D
from keras.layers.pooling import MaxPooling2D
lines=[]
with open('driving_log.csv') as csvfile:
reader=csv.reader(csvfile)
for line in reader:
lines.append(line)
images=[]
measurements=[]
i=0;
for line in lines:
if(i!=0):
center_image_path=line[0]
left_image_path=line[1]
right_image_path=line[2]
steering_center = float(line[3])
correction = 0.2 # this is a parameter to tune
steering_left = steering_center + correction
steering_right = steering_center - correction
#filename=source_path.split('/')[-1]
#current_path='IMG/'+filename
# current_path = source_path
        center_image=cv2.imread(center_image_path)
        left_image=cv2.imread(left_image_path)
        right_image=cv2.imread(right_image_path)
        images.extend([center_image, left_image, right_image])
        measurements.extend([steering_center, steering_left, steering_right])
else:
i=1;
augmentated_images,augmented_measurements=[],[]
for image,measurement in zip(images,measurements):
    if(measurement!=0):
augmentated_images.append(image)
augmented_measurements.append(measurement)
augmentated_images.append(cv2.flip(image,1))
augmented_measurements.append(measurement*-1.0)
X_augment=np.array(augmentated_images)
y_augment=np.array(augmented_measurements)
X_train=X_augment  # the augmented arrays already include the original samples
y_train=y_augment
print(X_train.shape)
model=Sequential()
model.add(Cropping2D(cropping=((50,20), (0,0)), input_shape=(160,320,3)))  # channels-last image shape
model.add(Lambda(lambda x: (x / 255.0) - 0.5))
model.add(Convolution2D(6, 5, 5))
model.add(MaxPooling2D())
model.add(Activation('relu'))
model.add(Convolution2D(6, 5, 5))
model.add(Activation('relu'))
model.add(MaxPooling2D())
model.add(Flatten())
model.add(Dense(120))
model.add(Dense(84))
model.add(Dense(1))
model.compile(loss='mse',optimizer='adam')
model.fit(X_train,y_train,validation_split=0.2,shuffle=True,epochs=1)
model.save('model.h5')
# -
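# The horizontal-flip augmentation above can be checked in isolation; this sketch uses `np.fliplr` in place of `cv2.flip(image, 1)` (equivalent for a horizontal flip), with a hypothetical helper name:

```python
import numpy as np

def flip_augment(images, measurements):
    # Keep each original sample and add a mirrored copy whose
    # steering angle is negated, as in the loop above.
    aug_images, aug_measurements = [], []
    for image, measurement in zip(images, measurements):
        aug_images.append(image)
        aug_measurements.append(measurement)
        aug_images.append(np.fliplr(image))    # horizontal flip
        aug_measurements.append(-measurement)  # mirrored steering angle
    return np.array(aug_images), np.array(aug_measurements)
```

# Flipping both the image and the sign of the steering angle balances the left/right turn distribution of the training data.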
| .ipynb_checkpoints/Untitled1-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# This notebook is to work with K-nearest neighbors
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
iris_attributes, iris_classes = load_iris(return_X_y=True)
attributes_train, attributes_test, classes_train, classes_test = train_test_split(iris_attributes, iris_classes, test_size=0.2, random_state=50)
# Define Model, and choose the number of neighbors to inspect
Five_Neighbors = KNeighborsClassifier(n_neighbors=5)
Five_Neighbors.fit(attributes_train, classes_train)
print(Five_Neighbors.predict(attributes_test))
print(classes_test)
print("Errors:", sum(Five_Neighbors.predict(attributes_test) != classes_test))
Seven_Neighbors = KNeighborsClassifier(n_neighbors=7)
Seven_Neighbors.fit(attributes_train, classes_train)
print(Seven_Neighbors.predict(attributes_test))
print(classes_test)
print("Errors:", sum(Seven_Neighbors.predict(attributes_test) != classes_test))
Three_Neighbors = KNeighborsClassifier(n_neighbors=3)
Three_Neighbors.fit(attributes_train, classes_train)
print(Three_Neighbors.predict(attributes_test))
print(classes_test)
print("Errors:", sum(Three_Neighbors.predict(attributes_test) != classes_test))
Too_Many_Neighbors = KNeighborsClassifier(n_neighbors=17)
Too_Many_Neighbors.fit(attributes_train, classes_train)
print(Too_Many_Neighbors.predict(attributes_test))
print(classes_test)
print("Errors:", sum(Too_Many_Neighbors.predict(attributes_test) != classes_test))
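# The four near-identical cells above can be collapsed into one sweep over k. Counting mismatches with `!=` also avoids the sign cancellation that summing raw differences allows; a sketch:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=50)

for k in (3, 5, 7, 17):
    model = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    errors = int((model.predict(X_test) != y_test).sum())  # count of misclassified samples
    print(f"k={k}: {errors} misclassified out of {len(y_test)}")
```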
import matplotlib.pyplot as plt
import numpy as np
# +
iris0 = (iris_attributes[0:50,0], iris_attributes[0:50,1])
iris1 = (iris_attributes[50:100,0], iris_attributes[50:100,1])
iris2 = (iris_attributes[100:150,0], iris_attributes[100:150,1])
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
data = (iris0, iris1, iris2)
colors = ('purple', 'pink', 'blue')
groups = ('0','1','2')
for data, color, group in zip(data, colors, groups):
x,y = data
ax.scatter(x, y, alpha=0.8, c=color, label=group)
plt.title("1. sepal length in cm vs classification")
plt.xlabel("Sepal_length")
plt.ylabel("Sepal_width")
plt.show()
# +
iris0 = (iris_attributes[0:50,2], iris_attributes[0:50,3])
iris1 = (iris_attributes[50:100,2], iris_attributes[50:100,3])
iris2 = (iris_attributes[100:150,2], iris_attributes[100:150,3])
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
data = (iris0, iris1, iris2)
colors = ('purple', 'pink', 'blue')
groups = ('0','1','2')
for data, color, group in zip(data, colors, groups):
x,y = data
ax.scatter(x, y, alpha=0.8, c=color, label=group)
plt.title("2. petal length in cm vs classification")
plt.xlabel("Petal_length")
plt.ylabel("Petal_width")
plt.show()
# -
| 05_K-Nearest_Neighbor.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Spark3
# language: python
# name: spk
# ---
# + [markdown] toc=true
# <h1>Table of Contents<span class="tocSkip"></span></h1>
# <div class="toc"><ul class="toc-item"><li><span><a href="#Load-the-libraries" data-toc-modified-id="Load-the-libraries-1"><span class="toc-item-num">1 </span>Load the libraries</a></span></li><li><span><a href="#Creating-DataFrames" data-toc-modified-id="Creating-DataFrames-2"><span class="toc-item-num">2 </span>Creating DataFrames</a></span><ul class="toc-item"><li><span><a href="#From-RDD" data-toc-modified-id="From-RDD-2.1"><span class="toc-item-num">2.1 </span>From RDD</a></span></li><li><span><a href="#From-Spark-Data-Sources" data-toc-modified-id="From-Spark-Data-Sources-2.2"><span class="toc-item-num">2.2 </span>From Spark Data Sources</a></span></li></ul></li><li><span><a href="#Inspect-Data" data-toc-modified-id="Inspect-Data-3"><span class="toc-item-num">3 </span>Inspect Data</a></span><ul class="toc-item"><li><span><a href="#Queries" data-toc-modified-id="Queries-3.1"><span class="toc-item-num">3.1 </span>Queries</a></span></li></ul></li></ul></div>
# -
# # Load the libraries
# +
# pyspark
import pyspark
spark = pyspark.sql.SparkSession.builder.appName('app').getOrCreate()
# sql
from pyspark.sql.functions import col as _col
from pyspark.sql.functions import udf
# @udf("integer") def myfunc(x,y): return x - y
# stddev format_number date_format, dayofyear, when
from pyspark.sql import functions as F
from pyspark.sql.window import Window
from pyspark.sql.functions import (mean as _mean, min as _min,
max as _max, avg as _avg,
when as _when
)
from pyspark.sql.types import (StructField,StringType,
IntegerType, FloatType,
DoubleType,StructType)
from pyspark import SparkConf, SparkContext, SQLContext
sc = spark.sparkContext
sqlContext = SQLContext(sc)
sqc = sqlContext
# spark_df = sqlContext.createDataFrame(pandas_df)
# +
import numpy as np
import pandas as pd
import seaborn as sns
pd.set_option('display.max_columns',100)
import time,os,json
time_start_notebook = time.time()
home = os.path.expanduser('~')
SEED=100
import matplotlib.pyplot as plt
plt.style.use('ggplot')
# %matplotlib inline
# %config InlineBackend.figure_format = 'retina'
[(x.__name__,x.__version__) for x in [np,pd,sns]]
# -
# # Creating DataFrames
# ## From RDD
# + language="bash"
# cat people.txt
# +
# infer schema
from pyspark.sql import Row

lines = sc.textFile('people.txt')
parts = lines.map(lambda l: l.split(','))
people = parts.map(lambda p: Row(name=p[0],
age=int(p[1])))
# peopledf = spark.createDataFrame(people)
# peopledf.show()
# Py4JJavaError
# -
# ## From Spark Data Sources
sdf = spark.read.text("people.txt")
sdf.show()
sdf = spark.read.json('people.json')
sdf = spark.read.load('people.json',format='json')
sdf.show()
# %%writefile customer.json
{"address": ["New York,10021,N"],"age":25,"firstName":"John","lastName":"Smith","phoneNumber": [["212 555-1234"],["213 555-1234"]]}
{"address":["New York,10021,N"],"age":21,"firstName":"Jane","lastName":"Doe","phoneNumber": [["322 888-1234"],["323 888-1234"]]}
sdf = spark.read.json("customer.json")
sdf.show(truncate=False)
sdf = spark.read.json("customer.json")
sdf.toPandas()
pdf = pd.read_json('customer.json',lines=True)
pdf
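# For comparison, line-delimited JSON like customer.json can also be parsed with the standard library alone (a self-contained sketch with inline data, since the file itself may not exist here):

```python
import json

lines = [
    '{"age": 25, "firstName": "John", "lastName": "Smith"}',
    '{"age": 21, "firstName": "Jane", "lastName": "Doe"}',
]
records = [json.loads(line) for line in lines]  # one JSON object per line
print([r["firstName"] for r in records])  # → ['John', 'Jane']
```

# This is exactly what `pd.read_json(..., lines=True)` and `spark.read.json` do at scale: each line is an independent JSON record.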
# # Inspect Data
sdf.dtypes
sdf.head()
sdf.first()
sdf.take(2)
sdf.schema
sdf.printSchema()
## Duplicate Values
sdf = sdf.dropDuplicates()
# ## Queries
# +
## Select
# -
sdf.select("firstName")
sdf.select("firstName","lastName")
(sdf.select("firstName",
"age",
F.explode("phoneNumber").alias("contactInfo")
)
.select("contactInfo","firstName")
.show(truncate=False)
)
| a01_PySpark/a03_PySpark_Cheatsheet/spark_cheatsheet_notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/KSY1526/myblog/blob/master/_notebooks/2022-02-16-torch1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="B0yo3ND2KBjS"
# # "[PyTorch] Practicing PyTorch Basics"
# - author: <NAME>
# - categories: [SSUDA, jupyter, Deep Learning, Pytorch]
# - image: images/220216.png
# + [markdown] id="ets2c7uAKBop"
# # Tensors
# + colab={"base_uri": "https://localhost:8080/"} id="mujigGq_J2KS" outputId="00368496-4e98-4afd-f98c-9634bab049fd"
import torch
import numpy as np
data = [[1, 2], [3, 4]]
x_data = torch.tensor(data)
np_array = np.array(data)
x_np = torch.from_numpy(np_array)
print(x_data)
print(x_np)
# + [markdown] id="H5qY4htsKZoX"
# Both plain Python lists and NumPy arrays can be converted into tensors.
# + id="VwwHh6WgKXFH" colab={"base_uri": "https://localhost:8080/"} outputId="e68d8cac-b114-4987-df2f-d01e17bfb28a"
tensor = torch.rand(3, 4)
print(f"Shape of tensor: {tensor.shape}")
print(f"Datatype of tensor: {tensor.dtype}")
print(f"Device tensor is stored on: {tensor.device}")
if torch.cuda.is_available():
tensor = tensor.to('cuda')
print(f"Device tensor is stored on: {tensor.device}")
# + [markdown] id="X2vbEBx7OseZ"
# A tensor's attributes include its shape, its data type, and the device it is stored on.
#
# If a GPU is available, the tensor is moved onto it.
# + colab={"base_uri": "https://localhost:8080/"} id="RYwuazrjOn3M" outputId="ef90d19a-2e23-44b4-a1c3-fcdd26258e40"
tensor = torch.ones(4, 4)
tensor[:,1] = 0
print(tensor)
print(f"tensor.matmul(tensor.T) \n {tensor.matmul(tensor.T)} \n")
# alternative syntax:
print(f"tensor @ tensor.T \n {tensor @ tensor.T}")
# + [markdown] id="vz51RwGuPwOc"
# Tensors support NumPy-style element assignment, and the @ operator between tensors performs matrix multiplication.
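# The `@` operator on tensors follows the same matrix-multiplication semantics as NumPy's; the same example in plain NumPy:

```python
import numpy as np

a = np.ones((4, 4))
a[:, 1] = 0      # same column assignment as the tensor example above
print(a @ a.T)   # every entry is 3.0, since each row is [1, 0, 1, 1]
```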
# + [markdown] id="3dVsfXMNQJu_"
# # Training a Classifier
# + [markdown] id="nbo9hsTXFIrF"
# # Loading and Normalizing the Data
# + id="ARO2Kkz6PvDr"
import torchvision
import torchvision.transforms as transforms
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]
)
# + [markdown] id="v_LHp4dRe7C3"
# Using the transforms module, we set up preprocessing that is applied as soon as the data is loaded.
# + colab={"base_uri": "https://localhost:8080/", "height": 103, "referenced_widgets": ["24d5ed103be843ae92a347540b904ae0", "06255f17ca4e4da48c31df3af2bd72b7", "fd1a73d8ab974180a1c5c9dcc2667f62", "<KEY>", "<KEY>", "9cef0f34b25245e09e414cee6bf59622", "83efbc0273da4ad1a0c683731858c43e", "4be63ee276bd4d9cb641fa40cec869c1", "e59d86b6b07f4b10ae7802566c787d85", "ca1f2f670e594fddadf7d1d1f9868578", "8bdb331de05f4ff78c71a341837715ca"]} id="s2FyCGYocnoV" outputId="a50c46c1-ba76-4567-d39b-c3c916d2f940"
trainset = torchvision.datasets.CIFAR10(root = './data', train = True,
download = True, transform = transform)
testset = torchvision.datasets.CIFAR10(root = './data', train = False,
download = True, transform = transform)
# + [markdown] id="yJzWTXJajRLx"
# We load the CIFAR10 dataset, converting each image to a tensor normalized to the range -1 to 1 with the transform defined above.
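# The Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)) step maps ToTensor's [0, 1] range to [-1, 1] via (x - mean) / std; a quick NumPy check:

```python
import numpy as np

x = np.array([0.0, 0.5, 1.0])  # endpoints and midpoint of the ToTensor() output range
print((x - 0.5) / 0.5)         # → [-1.  0.  1.]
```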
# + id="PdlQuKjvh7oD"
batch_size = 4
trainloader = torch.utils.data.DataLoader(trainset, batch_size = batch_size,
shuffle = True, num_workers = 2)
testloader = torch.utils.data.DataLoader(testset, batch_size = batch_size,
shuffle = True, num_workers = 2)
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
# + [markdown] id="T5PeY1WCkDOn"
# The data is grouped into batches and wrapped in data loaders.
# + colab={"base_uri": "https://localhost:8080/", "height": 156} id="qJHlyFwPkCjk" outputId="49415346-9959-4294-d3d3-2e22fbe4efa7"
import matplotlib.pyplot as plt
import numpy as np
def imshow(img):
    img = img / 2 + 0.5     # map values from [-1, 1] back to [0, 1]
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
plt.show()
dataiter = iter(trainloader)
images, labels = next(dataiter)
imshow(torchvision.utils.make_grid(images))
print(' '.join(f'{classes[labels[j]]:5s}' for j in range(batch_size)))
# + [markdown] id="B9hJNUUSGIyt"
# This code lets us visually inspect what the training images actually look like.
#
# First, note the iter and next functions: iter converts an iterable object into an iterator.
#
# An iterator yields its elements one at a time, in order, discarding each element once consumed, which saves memory.
#
# The next function then returns the iterator's current value and advances it to the following one.
#
# Finally, torchvision's utils.make_grid takes a batch of images (a 4-D tensor) and renders it as a single image.
# + [markdown] id="hIcymHXIVfxc"
# # Defining a Convolutional Neural Network
# + id="xc7kCosmVf_0"
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super().__init__()
        self.conv1 = nn.Conv2d(3, 6, 5) # input channels, output channels, kernel size
        self.pool = nn.MaxPool2d(2, 2) # kernel and stride; halves the feature-map size
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x))) # 3 * 32 * 32 => 6 * 28 * 28 => 6 * 14 * 14
x = self.pool(F.relu(self.conv2(x))) # 6 * 14 * 14 => 16 * 10 * 10 => 16 * 5 * 5
        x = torch.flatten(x, 1) # flatten all dimensions except the batch
x = F.relu(self.fc1(x)) # 16 * 5 * 5 => 120
x = F.relu(self.fc2(x)) # 120 => 84
        x = self.fc3(x) # 84 => 10 (one score for each of the 10 classes)
return x
net = Net()
# + [markdown] id="9AEpxSxmTwpK"
# nn.Module 클래스를 상속해 합성곱 신경망을 정의했습니다.
#
# init 함수 부분에는 신경망 함수를 선언하고, forward 함수 부분에서 선언한 신경망 함수를 실행했습니다.
#
# torch.nn.functional 에서는 relu 등 여러가지 활성화 함수들이 있습니다.
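# The feature-map sizes in the comments follow (n - k) // stride + 1 for a valid (no-padding) convolution, with the 2x2 max pooling halving each dimension; checked in plain Python:

```python
def conv_out(n, kernel, stride=1):
    # output size of a valid (no-padding) convolution or pooling window
    return (n - kernel) // stride + 1

n = conv_out(32, 5)     # 32 -> 28 after conv1
n = conv_out(n, 2, 2)   # 28 -> 14 after 2x2 max pooling
n = conv_out(n, 5)      # 14 -> 10 after conv2
n = conv_out(n, 2, 2)   # 10 -> 5 after pooling
print(n * n * 16)       # → 400, matching nn.Linear(16 * 5 * 5, 120)
```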
# + id="5FxUe79CUP0O"
import torch.optim as optim
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr = 0.001, momentum = 0.9)
# + [markdown] id="RThBdnmWUhmF"
# 손실 함수로 크로스 엔트로피 함수를, 옵티마이저로 SGD를 사용했습니다.
#
# 손실 함수는 nn 클래스 내 존재하고 옵티마이저는 torch.optim 클래스 내 존재합니다.
# + [markdown] id="0rEqAdQXUyoK"
# # Training the Network
# + colab={"base_uri": "https://localhost:8080/"} id="K9f4RnNEYYYt" outputId="ee56efe0-5c36-49a0-cf53-1a92e8b399f5"
for epoch in range(2):
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
inputs, labels = data
        optimizer.zero_grad() # zero the parameter gradients
        outputs = net(inputs) # forward pass to get the outputs
        loss = criterion(outputs, labels) # compare outputs with the true labels
        loss.backward() # backpropagate the loss
        optimizer.step() # let the optimizer update the parameters
running_loss += loss.item()
if i % 2000 == 1999:
print(f'[{epoch + 1}, {i + 1:5d}] loss: {running_loss / 2000:.3f}')
running_loss = 0.0
# + [markdown] id="4aroOUDuW3pP"
# 2 에포크로 트레인 데이터 로더를 사용하고, 앞서 정의한 모델, 옵티마이저, 손실함수를 사용합니다.
#
# 신경망 학습 과정은 옵티마이저 초기화하기 => 순전파 진행으로 output 값 배출 => 손실함수 사용해서 loss값 배출 => 손실함수 역전파 수행 => 옵티마이저 사용 매개변수 최적화 과정으로 진행됩니다.
# + [markdown] id="e_jethfPZXLv"
# # Evaluating the Model on Test Data
# + colab={"base_uri": "https://localhost:8080/", "height": 156} id="mbhhl-AKWwbp" outputId="8138f59f-bdf2-45fd-87c5-458d99958b01"
dataiter = iter(testloader)
images, labels = next(dataiter)
imshow(torchvision.utils.make_grid(images))
print('GroundTruth: ', ' '.join(f'{classes[labels[j]]:5s}' for j in range(4)))
# + [markdown] id="ZqNQFPk0ZsgD"
# For a visual check we use the first batch from the test loader; these are the ground-truth labels.
# + colab={"base_uri": "https://localhost:8080/"} id="uAskWVyOZoFX" outputId="a2c62d4e-46b3-48c9-a62f-57129bd4f857"
outputs = net(images)
# feeding inputs to the model yields one score per class
_, predicted = torch.max(outputs, 1)
# torch.max finds, for each sample in the batch, the maximum along dim 1.
# Its first output is the max value itself; the second is the index of the winning label.
# We do not need the first output, so it goes to '_'.
print('Predicted: ', ' '.join(f'{classes[predicted[j]]:5s}'
for j in range(4)))
# + [markdown] id="8MgHBQcMbMB7"
# We predicted labels for the first test batch and got everything right except the second image (plane)!
# + colab={"base_uri": "https://localhost:8080/"} id="NtP5x887ae_W" outputId="9afb2c10-000e-4120-8351-8abf3b31e565"
correct = 0
total = 0
with torch.no_grad():
    # tensors created inside this block have requires_grad=False, so gradient computation is disabled.
    # We are not updating weights here, so this saves memory.
for data in testloader:
images, labels = data
outputs = net(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print(f'Accuracy of the network on the 10000 test images: {100 * correct // total} %')
# + [markdown] id="9IDpBbl1eLuQ"
# The network got about 53% of the whole test set right. Given that there are 10 class labels, that is not a terrible number.
# + colab={"base_uri": "https://localhost:8080/"} id="7Ys1p-Hxdzy1" outputId="afd52ed1-4b83-4516-a973-010ed124f397"
# prepare to count predictions for each class
correct_pred = {classname: 0 for classname in classes}
total_pred = {classname: 0 for classname in classes}
# gradients are still not needed
with torch.no_grad():
for data in testloader:
images, labels = data
outputs = net(images)
_, predictions = torch.max(outputs, 1)
        # collect the number of correct predictions for each class
for label, prediction in zip(labels, predictions):
if label == prediction:
correct_pred[classes[label]] += 1
total_pred[classes[label]] += 1
# print the accuracy for each class
for classname, correct_count in correct_pred.items():
accuracy = 100 * float(correct_count) / total_pred[classname]
print(f'Accuracy for class: {classname:5s} is {accuracy:.1f} %')
# + [markdown] id="5A49X8yDeedt"
# We checked which classes the model classifies well and which it does not.
#
# Accuracy has its limits, though: a class can score well simply because the model predicts it often, so these numbers should not be trusted unconditionally.
# + [markdown] id="bQ00Ep3zevG4"
# # Training on a GPU
# + colab={"base_uri": "https://localhost:8080/"} id="UZig7Sj3eb2d" outputId="c4e536f1-1c86-4950-e529-5e0e16e9eb42"
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
print(device)
# + [markdown] id="iMFkhEJCfEKD"
# We are indeed using a GPU.
# + id="0rBvqPiyfAUN"
net.to(device)
inputs, labels = data[0].to(device), data[1].to(device)
# + [markdown] id="Oi4Zm2M7fT5U"
# Once the model, inputs, and labels are sent to the GPU, computation runs on the GPU as expected.
# + [markdown] id="h2JW24pWfeED"
# # Reflections
# + [markdown] id="Zg78suptffj6"
# I started from an easy example to learn how to use the PyTorch deep learning toolkit.
#
# I already understood the example itself, so I focused on which PyTorch classes the various functions come from.
#
# It is harder than I expected, and some parts are still unfamiliar, but learning them one by one feels good.
#
# Having worked through a simple example, next time I will review a relatively more complex piece of code.
#
# Reference: https://tutorials.pytorch.kr/beginner/blitz/cifar10_tutorial.html#sphx-glr-beginner-blitz-cifar10-tutorial-py
| _notebooks/2022-02-16-torch1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import scipy
import numpy as np
pd.DataFrame.from_dict({'a': [2], 'b':[3]}).T
from scipy.stats import ks_2samp

statistics, p_value = ks_2samp(real.trans_amount, fake.trans_amount)
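# `ks_2samp` compares the empirical distributions of two samples; since the `real` and `fake` frames are not defined in this fragment, here is a self-contained sketch with synthetic data:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
sample_a = rng.normal(size=500)
sample_b = rng.normal(size=500)          # drawn from the same distribution
stat, p = ks_2samp(sample_a, sample_b)   # small statistic, large p expected
print(f"KS statistic: {stat:.3f}, p-value: {p:.3f}")
```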
from IPython.display import display, Markdown
class EvaluationResult(object):
def __init__(self, name, content, prefix=None, appendix=None, notebook=False):
self.name = name
self.prefix = prefix
self.content = content
self.appendix = appendix
self.notebook = notebook
def show(self):
if self.notebook:
output = widgets.Output()
with output:
display(Markdown(f'## {self.name}'))
if self.prefix: display(Markdown(self.prefix))
display(self.content)
if self.appendix: display(Markdown(self.appendix))
display(output)
import ipywidgets as widgets
er = EvaluationResult('Jensen-Shannon distance', js_df, notebook=True, appendix=f' Mean: {js_df.js_distance.mean(): .3f}')
er.show()
print(str(js_df))
| notebooks/Jupyter-UI.ipynb |
# -*- coding: utf-8 -*-
# <!-- dom:TITLE: Data Analysis and Machine Learning: Linear Regression -->
# # Data Analysis and Machine Learning: Linear Regression
# <!-- dom:AUTHOR: <NAME> at Department of Physics and Center for Computing in Science Education, University of Oslo, Norway & Department of Physics and Astronomy and Facility for Rare Ion Beams and National Superconducting Cyclotron Laboratory, Michigan State University, USA -->
# <!-- Author: -->
# **<NAME>**, Department of Physics and Center for Computing in Science Education, University of Oslo, Norway and Department of Physics and Astronomy and Facility for Rare Ion Beams and National Superconducting Cyclotron Laboratory, Michigan State University, USA
#
# Date: **Jun 21, 2020**
#
# Copyright 1999-2020, <NAME>. Released under CC Attribution-NonCommercial 4.0 license
#
#
#
# ## Linear Regression, basic overview
#
# The aim of this set of lectures is to introduce basic aspects of linear regression, a widely applied set of methods used to fit continuous functions. We will in particular focus on
#
# * Ordinary linear regression
#
# * Ridge regression
#
# * Lasso regression
#
#
# We will also use these widely popular methods to introduce resampling techniques like bootstrapping and cross-validation.
#
#
# ## Why Linear Regression (aka Ordinary Least Squares and family)?
#
# Fitting a continuous function with linear parameterization in terms of the parameters $\boldsymbol{\beta}$.
# * Method of choice for fitting a continuous function!
#
# * Gives an excellent introduction to central Machine Learning features with **understandable pedagogical** links to other methods like **Neural Networks**, **Support Vector Machines** etc
#
# * Analytical expression for the fitting parameters $\boldsymbol{\beta}$
#
# * Analytical expressions for statistical properties like mean values, variances, confidence intervals and more
#
# * Analytical relation with probabilistic interpretations
#
# * Easy to introduce basic concepts like bias-variance tradeoff, cross-validation, resampling and regularization techniques and many other ML topics
#
# * Easy to code! And links well with classification problems and logistic regression and neural networks
#
# * Allows for **easy** hands-on understanding of gradient descent methods. These methods are at the heart of essentially all Machine Learning methods.
#
# * and many more features
#
#
# For more discussions of Ridge and Lasso regression, [<NAME>'s](https://arxiv.org/abs/1509.09169) article is highly recommended.
# Similarly, [Mehta et al's article](https://arxiv.org/abs/1803.08823) is also recommended. The textbook [The Elements of Statistical Learning: Data Mining, Inference, and Prediction](https://link.springer.com/book/10.1007/978-0-387-84858-7) by <NAME> is highly recommended.
#
#
# ## Regression Analysis, Definitions and Aims
#
#
# ## Regression analysis, overarching aims
#
# Regression modeling deals with the description of the sampling distribution of a given random variable $y$ and how it varies as function of another variable or a set of such variables $\boldsymbol{x} =[x_0, x_1,\dots, x_{n-1}]^T$.
# The first variable is called the **dependent**, the **outcome** or the **response** variable, while the variables $\boldsymbol{x}$ are called the independent, the predictor or the explanatory variables.
#
# A regression model aims at finding a likelihood function $p(\boldsymbol{y}\vert \boldsymbol{x})$, that is the conditional distribution for $\boldsymbol{y}$ with a given $\boldsymbol{x}$. The estimation of $p(\boldsymbol{y}\vert \boldsymbol{x})$ is made using a data set with
# * $n$ cases $i = 0, 1, 2, \dots, n-1$
#
# * Response (target, dependent or outcome) variable $y_i$ with $i = 0, 1, 2, \dots, n-1$
#
# * $p$ so-called explanatory (independent or predictor) variables $\boldsymbol{x}_i=[x_{i0}, x_{i1}, \dots, x_{ip-1}]$ with $i = 0, 1, 2, \dots, n-1$ and explanatory variables running from $0$ to $p-1$. See below for more explicit examples.
#
# The goal of the regression analysis is to extract/exploit the relationship between $\boldsymbol{y}$ and $\boldsymbol{x}$ in order to infer causal dependencies, approximate the likelihood function, establish functional relationships and make predictions and fits, among many other things.
#
#
#
# ## Regression analysis, overarching aims II
#
#
# Consider an experiment in which $p$ characteristics of $n$ samples are
# measured. The data from this experiment, for various explanatory variables $p$ are normally represented by a matrix
# $\mathbf{X}$.
#
# The matrix $\mathbf{X}$ is called the *design
# matrix*. Additional information of the samples is available in the
# form of $\boldsymbol{y}$ (also as above). The variable $\boldsymbol{y}$ is
# generally referred to as the *response variable*. The aim of
# regression analysis is to explain $\boldsymbol{y}$ in terms of
# $\boldsymbol{X}$ through a functional relationship like $y_i =
# f(\mathbf{X}_{i,\ast})$. When no prior knowledge on the form of
# $f(\cdot)$ is available, it is common to assume a linear relationship
# between $\boldsymbol{X}$ and $\boldsymbol{y}$. This assumption gives rise to
# the *linear regression model* where $\boldsymbol{\beta} = [\beta_0, \ldots,
# \beta_{p-1}]^{T}$ are the *regression parameters*.
#
# Linear regression gives us a set of analytical equations for the parameters $\beta_j$.
#
#
#
#
#
# ## Examples
# In order to understand the relation among the number of predictors $p$, the number of data points $n$ and the target (outcome, output etc) $\boldsymbol{y}$,
# consider the model we discussed for describing nuclear binding energies.
#
# There we assumed that we could parametrize the data using a polynomial approximation based on the liquid drop model.
# Assuming
# $$
# BE(A) = a_0+a_1A+a_2A^{2/3}+a_3A^{-1/3}+a_4A^{-1},
# $$
# we have five predictors, that is the intercept, the $A$ dependent term, the $A^{2/3}$ term and the $A^{-1/3}$ and $A^{-1}$ terms.
# This gives $p=5$ predictors, labeled $0,1,2,3,4$. Furthermore we have $n$ entries for each predictor, meaning that our design matrix here is a
# $p\times n$ matrix $\boldsymbol{X}$.
#
# Here the predictors are based on a model we have made. A popular data set which is widely encountered in ML applications is the
# so-called [credit card default data from Taiwan](https://www.sciencedirect.com/science/article/pii/S0957417407006719?via%3Dihub). The data set contains data on $n=30000$ credit card holders with predictors like gender, marital status, age, profession, education, etc. In total there are $24$ such predictors or attributes leading to a design matrix of dimensionality $24 \times 30000$. This is however a classification problem and we will come back to it when we discuss Logistic Regression.
#
#
#
#
#
#
#
# ## General linear models
# Before we proceed let us study a case from linear algebra where we aim at fitting a set of data $\boldsymbol{y}=[y_0,y_1,\dots,y_{n-1}]$. We could think of these data as a result of an experiment or a complicated numerical experiment. These data are functions of a series of variables $\boldsymbol{x}=[x_0,x_1,\dots,x_{n-1}]$, that is $y_i = y(x_i)$ with $i=0,1,2,\dots,n-1$. The variables $x_i$ could represent physical quantities like time, temperature, position etc. We assume that $y(x)$ is a smooth function.
#
# Since obtaining these data points may not be trivial, we want to use these data to fit a function which can allow us to make predictions for values of $y$ which are not in the present set. The perhaps simplest approach is to assume we can parametrize our function in terms of a polynomial of degree $n-1$ with $n$ points, that is
# $$
# y=y(x) \rightarrow y(x_i)=\tilde{y}_i+\epsilon_i=\sum_{j=0}^{n-1} \beta_j x_i^j+\epsilon_i,
# $$
# where $\epsilon_i$ is the error in our approximation.
#
#
#
#
# ## Rewriting the fitting procedure as a linear algebra problem
# For every set of values $y_i,x_i$ we have thus the corresponding set of equations
# $$
# \begin{align*}
# y_0&=\beta_0+\beta_1x_0^1+\beta_2x_0^2+\dots+\beta_{n-1}x_0^{n-1}+\epsilon_0\\
# y_1&=\beta_0+\beta_1x_1^1+\beta_2x_1^2+\dots+\beta_{n-1}x_1^{n-1}+\epsilon_1\\
# y_2&=\beta_0+\beta_1x_2^1+\beta_2x_2^2+\dots+\beta_{n-1}x_2^{n-1}+\epsilon_2\\
# \dots & \dots \\
# y_{n-1}&=\beta_0+\beta_1x_{n-1}^1+\beta_2x_{n-1}^2+\dots+\beta_{n-1}x_{n-1}^{n-1}+\epsilon_{n-1}.\\
# \end{align*}
# $$
# ## Rewriting the fitting procedure as a linear algebra problem, more details
# Defining the vectors
# $$
# \boldsymbol{y} = [y_0,y_1, y_2,\dots, y_{n-1}]^T,
# $$
# and
# $$
# \boldsymbol{\beta} = [\beta_0,\beta_1, \beta_2,\dots, \beta_{n-1}]^T,
# $$
# and
# $$
# \boldsymbol{\epsilon} = [\epsilon_0,\epsilon_1, \epsilon_2,\dots, \epsilon_{n-1}]^T,
# $$
# and the design matrix
# $$
# \boldsymbol{X}=
# \begin{bmatrix}
# 1& x_{0}^1 &x_{0}^2& \dots & \dots &x_{0}^{n-1}\\
# 1& x_{1}^1 &x_{1}^2& \dots & \dots &x_{1}^{n-1}\\
# 1& x_{2}^1 &x_{2}^2& \dots & \dots &x_{2}^{n-1}\\
# \dots& \dots &\dots& \dots & \dots &\dots\\
# 1& x_{n-1}^1 &x_{n-1}^2& \dots & \dots &x_{n-1}^{n-1}\\
# \end{bmatrix}
# $$
# we can rewrite our equations as
# $$
# \boldsymbol{y} = \boldsymbol{X}\boldsymbol{\beta}+\boldsymbol{\epsilon}.
# $$
# The above design matrix is a [Vandermonde matrix](https://en.wikipedia.org/wiki/Vandermonde_matrix).
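# A minimal sketch: NumPy's `np.vander` builds exactly this polynomial design matrix (the sample points below are arbitrary illustration values).

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
# increasing=True orders the columns as 1, x, x^2, x^3, matching the matrix above
X = np.vander(x, N=4, increasing=True)
print(X)
```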
#
#
#
#
# ## Generalizing the fitting procedure as a linear algebra problem
#
# We are obviously not limited to the above polynomial expansions. We
# could replace the various powers of $x$ with elements of Fourier
# series or instead of $x_i^j$ we could have $\cos{(j x_i)}$ or $\sin{(j
# x_i)}$, or time series or other orthogonal functions. For every set
# of values $y_i,x_i$ we can then generalize the equations to
# $$
# \begin{align*}
# y_0&=\beta_0x_{00}+\beta_1x_{01}+\beta_2x_{02}+\dots+\beta_{n-1}x_{0n-1}+\epsilon_0\\
# y_1&=\beta_0x_{10}+\beta_1x_{11}+\beta_2x_{12}+\dots+\beta_{n-1}x_{1n-1}+\epsilon_1\\
# y_2&=\beta_0x_{20}+\beta_1x_{21}+\beta_2x_{22}+\dots+\beta_{n-1}x_{2n-1}+\epsilon_2\\
# \dots & \dots \\
# y_{i}&=\beta_0x_{i0}+\beta_1x_{i1}+\beta_2x_{i2}+\dots+\beta_{n-1}x_{in-1}+\epsilon_i\\
# \dots & \dots \\
# y_{n-1}&=\beta_0x_{n-1,0}+\beta_1x_{n-1,1}+\beta_2x_{n-1,2}+\dots+\beta_{n-1}x_{n-1,n-1}+\epsilon_{n-1}.\\
# \end{align*}
# $$
# **Note that we have $p=n$ here, that is the matrix $\boldsymbol{X}$ is square. This is generally not the case!**
#
#
#
#
# ## Generalizing the fitting procedure as a linear algebra problem
# We redefine in turn the matrix $\boldsymbol{X}$ as
# $$
# \boldsymbol{X}=
# \begin{bmatrix}
# x_{00}& x_{01} &x_{02}& \dots & \dots &x_{0,n-1}\\
# x_{10}& x_{11} &x_{12}& \dots & \dots &x_{1,n-1}\\
# x_{20}& x_{21} &x_{22}& \dots & \dots &x_{2,n-1}\\
# \dots& \dots &\dots& \dots & \dots &\dots\\
# x_{n-1,0}& x_{n-1,1} &x_{n-1,2}& \dots & \dots &x_{n-1,n-1}\\
# \end{bmatrix}
# $$
# and without loss of generality we rewrite again our equations as
# $$
# \boldsymbol{y} = \boldsymbol{X}\boldsymbol{\beta}+\boldsymbol{\epsilon}.
# $$
# The left-hand side of this equation is known. Our error vector $\boldsymbol{\epsilon}$ and the parameter vector $\boldsymbol{\beta}$ are our unknown quantities. How can we obtain the optimal set of $\beta_i$ values?
#
#
#
#
# ## Optimizing our parameters
# We have defined the matrix $\boldsymbol{X}$ via the equations
# $$
# \begin{align*}
# y_0&=\beta_0x_{00}+\beta_1x_{01}+\beta_2x_{02}+\dots+\beta_{n-1}x_{0n-1}+\epsilon_0\\
# y_1&=\beta_0x_{10}+\beta_1x_{11}+\beta_2x_{12}+\dots+\beta_{n-1}x_{1n-1}+\epsilon_1\\
# y_2&=\beta_0x_{20}+\beta_1x_{21}+\beta_2x_{22}+\dots+\beta_{n-1}x_{2n-1}+\epsilon_2\\
# \dots & \dots \\
# y_{i}&=\beta_0x_{i0}+\beta_1x_{i1}+\beta_2x_{i2}+\dots+\beta_{n-1}x_{in-1}+\epsilon_i\\
# \dots & \dots \\
# y_{n-1}&=\beta_0x_{n-1,0}+\beta_1x_{n-1,1}+\beta_2x_{n-1,2}+\dots+\beta_{n-1}x_{n-1,n-1}+\epsilon_{n-1}.\\
# \end{align*}
# $$
# As we noted above, we stayed with a system with the design matrix
# $\boldsymbol{X}\in {\mathbb{R}}^{n\times n}$, that is we have $p=n$. For reasons to come later (algorithmic arguments) we will hereafter define
# our matrix as $\boldsymbol{X}\in {\mathbb{R}}^{n\times p}$, with the columns referring to the $p$ predictors and the rows to the $n$ entries.
#
#
#
#
# ## Our model for the nuclear binding energies
#
# In our [introductory notes](https://compphysics.github.io/MachineLearning/doc/pub/How2ReadData/html/How2ReadData.html) we looked at the so-called [liquid drop model](https://en.wikipedia.org/wiki/Semi-empirical_mass_formula). Let us remind ourselves about what we did by looking at the code.
#
# We restate the parts of the code we are most interested in.
# +
# %matplotlib inline
# Common imports
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from IPython.display import display
import os
# Where to save the figures and data files
PROJECT_ROOT_DIR = "Results"
FIGURE_ID = "Results/FigureFiles"
DATA_ID = "DataFiles/"
if not os.path.exists(PROJECT_ROOT_DIR):
os.mkdir(PROJECT_ROOT_DIR)
if not os.path.exists(FIGURE_ID):
os.makedirs(FIGURE_ID)
if not os.path.exists(DATA_ID):
os.makedirs(DATA_ID)
def image_path(fig_id):
return os.path.join(FIGURE_ID, fig_id)
def data_path(dat_id):
return os.path.join(DATA_ID, dat_id)
def save_fig(fig_id):
plt.savefig(image_path(fig_id) + ".png", format='png')
infile = open(data_path("MassEval2016.dat"),'r')
# Read the experimental data with Pandas
Masses = pd.read_fwf(infile, usecols=(2,3,4,6,11),
names=('N', 'Z', 'A', 'Element', 'Ebinding'),
widths=(1,3,5,5,5,1,3,4,1,13,11,11,9,1,2,11,9,1,3,1,12,11,1),
header=39,
index_col=False)
# Extrapolated values are indicated by '#' in place of the decimal place, so
# the Ebinding column won't be numeric. Coerce to float and drop these entries.
Masses['Ebinding'] = pd.to_numeric(Masses['Ebinding'], errors='coerce')
Masses = Masses.dropna()
# Convert from keV to MeV.
Masses['Ebinding'] /= 1000
# Group the DataFrame by nucleon number, A.
Masses = Masses.groupby('A')
# Find the rows of the grouped DataFrame with the maximum binding energy.
Masses = Masses.apply(lambda t: t[t.Ebinding==t.Ebinding.max()])
A = Masses['A']
Z = Masses['Z']
N = Masses['N']
Element = Masses['Element']
Energies = Masses['Ebinding']
# Now we set up the design matrix X
X = np.zeros((len(A),5))
X[:,0] = 1
X[:,1] = A
X[:,2] = A**(2.0/3.0)
X[:,3] = A**(-1.0/3.0)
X[:,4] = A**(-1.0)
# Then nice printout using pandas
DesignMatrix = pd.DataFrame(X)
DesignMatrix.index = A
DesignMatrix.columns = ['1', 'A', 'A^(2/3)', 'A^(-1/3)', '1/A']
display(DesignMatrix)
# -
# With $\boldsymbol{\beta}\in {\mathbb{R}}^{p\times 1}$, it means that we will hereafter write our equations for the approximation as
# $$
# \boldsymbol{\tilde{y}}= \boldsymbol{X}\boldsymbol{\beta},
# $$
# throughout these lectures.
#
#
# ## Optimizing our parameters, more details
# With the above we use the design matrix to define the approximation $\boldsymbol{\tilde{y}}$ via the unknown quantity $\boldsymbol{\beta}$ as
# $$
# \boldsymbol{\tilde{y}}= \boldsymbol{X}\boldsymbol{\beta},
# $$
# and in order to find the optimal parameters $\beta_i$ instead of solving the above linear algebra problem, we define a function which gives a measure of the spread between the values $y_i$ (which represent hopefully the exact values) and the parameterized values $\tilde{y}_i$, namely
# $$
# C(\boldsymbol{\beta})=\frac{1}{n}\sum_{i=0}^{n-1}\left(y_i-\tilde{y}_i\right)^2=\frac{1}{n}\left\{\left(\boldsymbol{y}-\boldsymbol{\tilde{y}}\right)^T\left(\boldsymbol{y}-\boldsymbol{\tilde{y}}\right)\right\},
# $$
# or using the matrix $\boldsymbol{X}$ and in a more compact matrix-vector notation as
# $$
# C(\boldsymbol{\beta})=\frac{1}{n}\left\{\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)^T\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)\right\}.
# $$
# This function is one possible way to define the so-called cost function.
#
#
#
# It is also common to define
# the cost function as
# $$
# C(\boldsymbol{\beta})=\frac{1}{2n}\sum_{i=0}^{n-1}\left(y_i-\tilde{y}_i\right)^2,
# $$
# since when taking the first derivative with respect to the unknown parameters $\beta$, the factor of $2$ cancels out.
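# The first definition of the cost function above can be sketched directly in code (the toy data here are chosen only so that an exact linear fit exists):

```python
import numpy as np

def cost(X, y, beta):
    """C(beta) = (1/n) * ||y - X @ beta||^2."""
    residual = y - X @ beta
    return residual @ residual / len(y)

# intercept column plus a slope column; y lies exactly on y = 1 + x
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([1.0, 2.0, 3.0])
print(cost(X, y, np.array([1.0, 1.0])))  # a perfect fit gives 0.0
```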
#
#
#
#
# ## Interpretations and optimizing our parameters
#
# The function
# $$
# C(\boldsymbol{\beta})=\frac{1}{n}\left\{\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)^T\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)\right\},
# $$
# can be linked to the variance of the quantity $y_i$ if we interpret the latter as the mean value.
# When linking with the maximum likelihood approach (see the discussion below), we will indeed interpret $y_i$ as a mean value
# $$
# y_{i}=\langle y_i \rangle = \beta_0x_{i,0}+\beta_1x_{i,1}+\beta_2x_{i,2}+\dots+\beta_{n-1}x_{i,n-1}+\epsilon_i,
# $$
# where $\langle y_i \rangle$ is the mean value. Keep in mind also that
# till now we have treated $y_i$ as the exact value. Normally, the
# response (dependent or outcome) variable $y_i$ is the outcome of a
# numerical experiment or another type of experiment and is thus only an
# approximation to the true value. It is then always accompanied by an
# error estimate, often limited to a statistical error estimate given by
# the standard deviation discussed earlier. In the discussion here we
# will treat $y_i$ as our exact value for the response variable.
#
# In order to find the parameters $\beta_i$ we will then minimize the spread of $C(\boldsymbol{\beta})$, that is we are going to solve the problem
# $$
# {\displaystyle \min_{\boldsymbol{\beta}\in
# {\mathbb{R}}^{p}}}\frac{1}{n}\left\{\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)^T\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)\right\}.
# $$
# In practical terms it means we will require
# $$
# \frac{\partial C(\boldsymbol{\beta})}{\partial \beta_j} = \frac{\partial }{\partial \beta_j}\left[ \frac{1}{n}\sum_{i=0}^{n-1}\left(y_i-\beta_0x_{i,0}-\beta_1x_{i,1}-\beta_2x_{i,2}-\dots-\beta_{n-1}x_{i,n-1}\right)^2\right]=0,
# $$
# which results in
# $$
# \frac{\partial C(\boldsymbol{\beta})}{\partial \beta_j} = -\frac{2}{n}\left[ \sum_{i=0}^{n-1}x_{ij}\left(y_i-\beta_0x_{i,0}-\beta_1x_{i,1}-\beta_2x_{i,2}-\dots-\beta_{n-1}x_{i,n-1}\right)\right]=0,
# $$
# or in a matrix-vector form as
# $$
# \frac{\partial C(\boldsymbol{\beta})}{\partial \boldsymbol{\beta}} = 0 = \boldsymbol{X}^T\left( \boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right).
# $$
# ## Interpretations and optimizing our parameters
# We can rewrite
# $$
# \frac{\partial C(\boldsymbol{\beta})}{\partial \boldsymbol{\beta}} = 0 = \boldsymbol{X}^T\left( \boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right),
# $$
# as
# $$
# \boldsymbol{X}^T\boldsymbol{y} = \boldsymbol{X}^T\boldsymbol{X}\boldsymbol{\beta},
# $$
# and if the matrix $\boldsymbol{X}^T\boldsymbol{X}$ is invertible we have the solution
# $$
# \boldsymbol{\beta} =\left(\boldsymbol{X}^T\boldsymbol{X}\right)^{-1}\boldsymbol{X}^T\boldsymbol{y}.
# $$
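# As a sanity check on the normal equations above, here is a minimal sketch on synthetic data (all numbers are made up for illustration), comparing the explicit matrix inverse with NumPy's least-squares solver:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 3
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + 0.01 * rng.normal(size=n)

# Normal equations: beta = (X^T X)^{-1} X^T y
beta_ne = np.linalg.inv(X.T @ X) @ X.T @ y
# Numerically more stable alternative
beta_ls = np.linalg.lstsq(X, y, rcond=None)[0]
print(np.allclose(beta_ne, beta_ls))
```

In practice one prefers `lstsq` (or an SVD-based pseudoinverse) over forming $(\boldsymbol{X}^T\boldsymbol{X})^{-1}$ explicitly.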
# We note also that since our design matrix is defined as $\boldsymbol{X}\in
# {\mathbb{R}}^{n\times p}$, the product $\boldsymbol{X}^T\boldsymbol{X} \in
# {\mathbb{R}}^{p\times p}$. In the above case we have that $p \ll n$,
# in our case $p=5$ meaning that we end up with inverting a small
# $5\times 5$ matrix. This is a rather common situation, in many cases we end up with low-dimensional
# matrices to invert. The methods discussed here and for many other
# supervised learning algorithms like classification with logistic
# regression or support vector machines, exhibit dimensionalities which
# allow for the usage of direct linear algebra methods such as **LU** decomposition or **Singular Value Decomposition** (SVD) for finding the inverse of the matrix
# $\boldsymbol{X}^T\boldsymbol{X}$.
#
#
#
# **Small question**: Do you think the example we have at hand here (the nuclear binding energies) can lead to problems in inverting the matrix $\boldsymbol{X}^T\boldsymbol{X}$? What kind of problems can we expect?
#
#
#
# ## Some useful matrix and vector expressions
#
# The following matrix and vector relations will be useful here and for the rest of the course. Vectors are always written as boldfaced lower case letters and
# matrices as upper case boldfaced letters.
# For a constant vector $\boldsymbol{b}$ we have
# $$
# \frac{\partial (\boldsymbol{b}^T\boldsymbol{a})}{\partial \boldsymbol{a}} = \boldsymbol{b},
# $$
# for a matrix $\boldsymbol{A}$ we have
# $$
# \frac{\partial (\boldsymbol{a}^T\boldsymbol{A}\boldsymbol{a})}{\partial \boldsymbol{a}} = \left(\boldsymbol{A}+\boldsymbol{A}^T\right)\boldsymbol{a},
# $$
# and for the quadratic form that defines our cost function
# $$
# \frac{\partial \left(\boldsymbol{x}-\boldsymbol{A}\boldsymbol{s}\right)^T\left(\boldsymbol{x}-\boldsymbol{A}\boldsymbol{s}\right)}{\partial \boldsymbol{s}} = -2\boldsymbol{A}^T\left(\boldsymbol{x}-\boldsymbol{A}\boldsymbol{s}\right),
# $$
# together with the derivative of the log-determinant
# $$
# \frac{\partial \log{\vert\boldsymbol{A}\vert}}{\partial \boldsymbol{A}} = (\boldsymbol{A}^{-1})^T.
# $$
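# The log-determinant identity can be verified numerically with central finite differences (a sketch with an arbitrarily chosen, well-conditioned matrix):

```python
import numpy as np

A = np.array([[2.0, 0.3], [0.1, 1.5]])
grad = np.linalg.inv(A).T  # claimed gradient of log|A| with respect to A

# Central finite differences, element by element
eps = 1e-6
num = np.zeros_like(A)
for i in range(2):
    for j in range(2):
        E = np.zeros_like(A)
        E[i, j] = eps
        num[i, j] = (np.log(np.linalg.det(A + E))
                     - np.log(np.linalg.det(A - E))) / (2 * eps)
print(np.allclose(num, grad, atol=1e-5))
```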
# ## Interpretations and optimizing our parameters
# The residuals $\boldsymbol{\epsilon}$ are in turn given by
# $$
# \boldsymbol{\epsilon} = \boldsymbol{y}-\boldsymbol{\tilde{y}} = \boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta},
# $$
# and with
# $$
# \boldsymbol{X}^T\left( \boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)= 0,
# $$
# we have
# $$
# \boldsymbol{X}^T\boldsymbol{\epsilon}=\boldsymbol{X}^T\left( \boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)= 0,
# $$
# meaning that the solution for $\boldsymbol{\beta}$ is the one which minimizes the residuals. Later we will link this with the maximum likelihood approach.
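# The orthogonality condition $\boldsymbol{X}^T\boldsymbol{\epsilon}=0$ is easy to verify numerically on synthetic data (the random numbers below are illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 3))
y = rng.normal(size=40)

# Least-squares solution and its residuals
beta = np.linalg.lstsq(X, y, rcond=None)[0]
residual = y - X @ beta

# The residuals are orthogonal to the column space of X
print(np.allclose(X.T @ residual, 0.0, atol=1e-8))
```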
#
#
#
#
# Let us now return to our nuclear binding energies and simply code the above equations.
#
# ## Own code for Ordinary Least Squares
#
# It is rather straightforward to implement the matrix inversion and obtain the parameters $\boldsymbol{\beta}$. After having defined the matrix $\boldsymbol{X}$ we simply need to
# write
# matrix inversion to find beta
beta = np.linalg.inv(X.T.dot(X)).dot(X.T).dot(Energies)
# and then make the prediction
ytilde = X @ beta
# Alternatively, you can use the least squares functionality in **Numpy** as
fit = np.linalg.lstsq(X, Energies, rcond=None)[0]
ytildenp = np.dot(fit,X.T)
# And finally we plot our fit and compare with the data
Masses['Eapprox'] = ytilde
# Generate a plot comparing the experimental values with the fitted values.
fig, ax = plt.subplots()
ax.set_xlabel(r'$A = N + Z$')
ax.set_ylabel(r'$E_\mathrm{bind}\,/\mathrm{MeV}$')
ax.plot(Masses['A'], Masses['Ebinding'], alpha=0.7, lw=2,
label='Ame2016')
ax.plot(Masses['A'], Masses['Eapprox'], alpha=0.7, lw=2, c='m',
label='Fit')
ax.legend()
save_fig("Masses2016OLS")
plt.show()
# ## Adding error analysis and training set up
#
# We can easily test our fit by computing the $R^2$ score that we discussed in connection with the functionality of **Scikit-Learn** in the introductory slides.
# Since we are not using **Scikit-Learn** here, we can define our own $R^2$ function as
def R2(y_data, y_model):
return 1 - np.sum((y_data - y_model) ** 2) / np.sum((y_data - np.mean(y_data)) ** 2)
# and we would be using it as
print(R2(Energies,ytilde))
# We can easily add our **MSE** score as
# +
def MSE(y_data,y_model):
n = np.size(y_model)
return np.sum((y_data-y_model)**2)/n
print(MSE(Energies,ytilde))
# -
# and finally the relative error as
def RelativeError(y_data,y_model):
return abs((y_data-y_model)/y_data)
print(RelativeError(Energies, ytilde))
# ## The $\chi^2$ function
#
# Normally, the response (dependent or outcome) variable $y_i$ is the
# outcome of a numerical experiment or another type of experiment and is
# thus only an approximation to the true value. It is then always
# accompanied by an error estimate, often limited to a statistical error
# estimate given by the standard deviation discussed earlier. In the
# discussion here we will treat $y_i$ as our exact value for the
# response variable.
#
# Introducing the standard deviation $\sigma_i$ for each measurement
# $y_i$, we define now the $\chi^2$ function
# as
# $$
# \chi^2(\boldsymbol{\beta})=\frac{1}{n}\sum_{i=0}^{n-1}\frac{\left(y_i-\tilde{y}_i\right)^2}{\sigma_i^2}=\frac{1}{n}\left\{\left(\boldsymbol{y}-\boldsymbol{\tilde{y}}\right)^T\frac{1}{\boldsymbol{\Sigma^2}}\left(\boldsymbol{y}-\boldsymbol{\tilde{y}}\right)\right\},
# $$
# where the matrix $\boldsymbol{\Sigma}$ is a diagonal matrix with $\sigma_i$ as matrix elements.
#
#
#
# ## The $\chi^2$ function
#
# In order to find the parameters $\beta_i$ we will then minimize the spread of $\chi^2(\boldsymbol{\beta})$ by requiring
# $$
# \frac{\partial \chi^2(\boldsymbol{\beta})}{\partial \beta_j} = \frac{\partial }{\partial \beta_j}\left[ \frac{1}{n}\sum_{i=0}^{n-1}\left(\frac{y_i-\beta_0x_{i,0}-\beta_1x_{i,1}-\beta_2x_{i,2}-\dots-\beta_{n-1}x_{i,n-1}}{\sigma_i}\right)^2\right]=0,
# $$
# which results in
# $$
# \frac{\partial \chi^2(\boldsymbol{\beta})}{\partial \beta_j} = -\frac{2}{n}\left[ \sum_{i=0}^{n-1}\frac{x_{ij}}{\sigma_i}\left(\frac{y_i-\beta_0x_{i,0}-\beta_1x_{i,1}-\beta_2x_{i,2}-\dots-\beta_{n-1}x_{i,n-1}}{\sigma_i}\right)\right]=0,
# $$
# or in a matrix-vector form as
# $$
# \frac{\partial \chi^2(\boldsymbol{\beta})}{\partial \boldsymbol{\beta}} = 0 = \boldsymbol{A}^T\left( \boldsymbol{b}-\boldsymbol{A}\boldsymbol{\beta}\right).
# $$
# where we have defined the matrix $\boldsymbol{A} =\boldsymbol{X}/\boldsymbol{\Sigma}$ with matrix elements $a_{ij} = x_{ij}/\sigma_i$ and the vector $\boldsymbol{b}$ with elements $b_i = y_i/\sigma_i$.
#
#
#
# ## The $\chi^2$ function
#
# We can rewrite
# $$
# \frac{\partial \chi^2(\boldsymbol{\beta})}{\partial \boldsymbol{\beta}} = 0 = \boldsymbol{A}^T\left( \boldsymbol{b}-\boldsymbol{A}\boldsymbol{\beta}\right),
# $$
# as
# $$
# \boldsymbol{A}^T\boldsymbol{b} = \boldsymbol{A}^T\boldsymbol{A}\boldsymbol{\beta},
# $$
# and if the matrix $\boldsymbol{A}^T\boldsymbol{A}$ is invertible we have the solution
# $$
# \boldsymbol{\beta} =\left(\boldsymbol{A}^T\boldsymbol{A}\right)^{-1}\boldsymbol{A}^T\boldsymbol{b}.
# $$
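# A minimal sketch of this rescaling trick on synthetic heteroscedastic data (the true parameters, noise levels and sample size below are chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
x = np.linspace(0.0, 1.0, n)
X = np.column_stack([np.ones(n), x])
sigma = 0.05 + 0.1 * rng.random(n)            # per-point standard deviations
y = 2.0 + 3.0 * x + sigma * rng.normal(size=n)

# Rescale rows: A = X / sigma, b = y / sigma, then solve A^T A beta = A^T b
A = X / sigma[:, None]
b = y / sigma
beta = np.linalg.solve(A.T @ A, A.T @ b)
print(beta)  # should land close to the true values [2, 3]
```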
# ## The $\chi^2$ function
#
# If we then introduce the matrix
# $$
# \boldsymbol{H} = \left(\boldsymbol{A}^T\boldsymbol{A}\right)^{-1},
# $$
# we have then the following expression for the parameters $\beta_j$ (the matrix elements of $\boldsymbol{H}$ are $h_{ij}$)
# $$
# \beta_j = \sum_{k=0}^{p-1}h_{jk}\sum_{i=0}^{n-1}\frac{y_i}{\sigma_i}\frac{x_{ik}}{\sigma_i} = \sum_{k=0}^{p-1}h_{jk}\sum_{i=0}^{n-1}b_ia_{ik}
# $$
# We state without proof the expression for the uncertainty in the parameters $\beta_j$ as (we leave this as an exercise)
# $$
# \sigma^2(\beta_j) = \sum_{i=0}^{n-1}\sigma_i^2\left( \frac{\partial \beta_j}{\partial y_i}\right)^2,
# $$
# resulting in
# $$
# \sigma^2(\beta_j) = \sum_{k=0}^{p-1}\sum_{l=0}^{p-1}h_{jk}h_{jl}\sum_{i=0}^{n-1}a_{ik}a_{il} = \sum_{k=0}^{p-1}\sum_{l=0}^{p-1}h_{jk}\left(\boldsymbol{H}^{-1}\right)_{kl}h_{lj} = h_{jj}.
# $$
# ## The $\chi^2$ function
# The first step here is to approximate the function $y$ with a first-order polynomial, that is we write
# $$
# y=y(x) \rightarrow y(x_i) \approx \beta_0+\beta_1 x_i.
# $$
# By computing the derivatives of $\chi^2$ with respect to $\beta_0$ and $\beta_1$ show that these are given by
# $$
# \frac{\partial \chi^2(\boldsymbol{\beta})}{\partial \beta_0} = -2\left[ \frac{1}{n}\sum_{i=0}^{n-1}\left(\frac{y_i-\beta_0-\beta_1x_{i}}{\sigma_i^2}\right)\right]=0,
# $$
# and
# $$
# \frac{\partial \chi^2(\boldsymbol{\beta})}{\partial \beta_1} = -\frac{2}{n}\left[ \sum_{i=0}^{n-1}x_i\left(\frac{y_i-\beta_0-\beta_1x_{i}}{\sigma_i^2}\right)\right]=0.
# $$
# ## The $\chi^2$ function
#
# For a linear fit (a first-order polynomial) we don't need to invert a matrix!!
# Defining
# $$
# \gamma = \sum_{i=0}^{n-1}\frac{1}{\sigma_i^2},
# $$
# $$
# \gamma_x = \sum_{i=0}^{n-1}\frac{x_{i}}{\sigma_i^2},
# $$
# $$
# \gamma_y = \sum_{i=0}^{n-1}\left(\frac{y_i}{\sigma_i^2}\right),
# $$
# $$
# \gamma_{xx} = \sum_{i=0}^{n-1}\frac{x_ix_{i}}{\sigma_i^2},
# $$
# $$
# \gamma_{xy} = \sum_{i=0}^{n-1}\frac{y_ix_{i}}{\sigma_i^2},
# $$
# we obtain
# $$
# \beta_0 = \frac{\gamma_{xx}\gamma_y-\gamma_x\gamma_{xy}}{\gamma\gamma_{xx}-\gamma_x^2},
# $$
# $$
# \beta_1 = \frac{\gamma_{xy}\gamma-\gamma_x\gamma_y}{\gamma\gamma_{xx}-\gamma_x^2}.
# $$
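# The closed-form expressions can be checked against a generic weighted least-squares solve (synthetic straight-line data, values chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50
x = np.linspace(0.0, 2.0, n)
sigma = np.full(n, 0.1)
y = 1.0 + 0.5 * x + sigma * rng.normal(size=n)

# The gamma sums defined above
w = 1.0 / sigma**2
g, gx, gy = w.sum(), (w * x).sum(), (w * y).sum()
gxx, gxy = (w * x * x).sum(), (w * x * y).sum()
den = g * gxx - gx**2
beta0 = (gxx * gy - gx * gxy) / den
beta1 = (g * gxy - gx * gy) / den

# Reference: least squares on the sigma-rescaled system
A = np.column_stack([np.ones(n), x]) / sigma[:, None]
ref = np.linalg.lstsq(A, y / sigma, rcond=None)[0]
print(np.allclose([beta0, beta1], ref))
```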
# This direct approach to linear (and non-linear) regression often
# suffers from the system of equations being under- or overdetermined
# in the unknown coefficients $\beta_i$. A better approach is to use
# the Singular Value Decomposition (SVD) method discussed below, or
# Lasso and Ridge regression. See below.
#
#
#
# ## Regression Examples
#
# ## Fitting an Equation of State for Dense Nuclear Matter
#
# Before we continue, let us introduce yet another example. We are going to fit the
# nuclear equation of state using results from many-body calculations.
# The equation of state we have made available here, as function of
# density, has been derived using modern nucleon-nucleon potentials with
# [the addition of three-body
# forces](https://www.sciencedirect.com/science/article/pii/S0370157399001106). This
# time the file is presented as a standard **csv** file.
#
# The beginning of the Python code here is similar to what you have seen
# before, with the same initializations and declarations. We use also
# **pandas** again, rather extensively in order to organize our data.
#
# The difference now is that we use **Scikit-Learn's** regression tools
# instead of our own matrix inversion implementation. Furthermore, we
# sneak in **Ridge** regression (to be discussed below) which includes a
# hyperparameter $\lambda$, also to be explained below.
#
# ## The code
# +
# Common imports
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import sklearn.linear_model as skl
from sklearn.metrics import mean_squared_error, r2_score, mean_absolute_error
# Where to save the figures and data files
PROJECT_ROOT_DIR = "Results"
FIGURE_ID = "Results/FigureFiles"
DATA_ID = "DataFiles/"
if not os.path.exists(PROJECT_ROOT_DIR):
os.mkdir(PROJECT_ROOT_DIR)
if not os.path.exists(FIGURE_ID):
os.makedirs(FIGURE_ID)
if not os.path.exists(DATA_ID):
os.makedirs(DATA_ID)
def image_path(fig_id):
return os.path.join(FIGURE_ID, fig_id)
def data_path(dat_id):
return os.path.join(DATA_ID, dat_id)
def save_fig(fig_id):
plt.savefig(image_path(fig_id) + ".png", format='png')
infile = open(data_path("EoS.csv"),'r')
# Read the EoS data as csv file and organize the data into two arrays with density and energies
EoS = pd.read_csv(infile, names=('Density', 'Energy'))
EoS['Energy'] = pd.to_numeric(EoS['Energy'], errors='coerce')
EoS = EoS.dropna()
Energies = EoS['Energy']
Density = EoS['Density']
# The design matrix now as function of various polytropes
X = np.zeros((len(Density),4))
X[:,3] = Density**(4.0/3.0)
X[:,2] = Density
X[:,1] = Density**(2.0/3.0)
X[:,0] = 1
# We use now Scikit-Learn's linear regressor and ridge regressor
# OLS part
clf = skl.LinearRegression().fit(X, Energies)
ytilde = clf.predict(X)
EoS['Eols'] = ytilde
# The mean squared error
print("Mean squared error: %.2f" % mean_squared_error(Energies, ytilde))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % r2_score(Energies, ytilde))
# Mean absolute error
print('Mean absolute error: %.2f' % mean_absolute_error(Energies, ytilde))
print(clf.coef_, clf.intercept_)
# The Ridge regression with a hyperparameter lambda = 0.1
_lambda = 0.1
clf_ridge = skl.Ridge(alpha=_lambda).fit(X, Energies)
yridge = clf_ridge.predict(X)
EoS['Eridge'] = yridge
# The mean squared error
print("Mean squared error: %.2f" % mean_squared_error(Energies, yridge))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % r2_score(Energies, yridge))
# Mean absolute error
print('Mean absolute error: %.2f' % mean_absolute_error(Energies, yridge))
print(clf_ridge.coef_, clf_ridge.intercept_)
fig, ax = plt.subplots()
ax.set_xlabel(r'$\rho[\mathrm{fm}^{-3}]$')
ax.set_ylabel(r'Energy per particle')
ax.plot(EoS['Density'], EoS['Energy'], alpha=0.7, lw=2,
label='Theoretical data')
ax.plot(EoS['Density'], EoS['Eols'], alpha=0.7, lw=2, c='m',
label='OLS')
ax.plot(EoS['Density'], EoS['Eridge'], alpha=0.7, lw=2, c='g',
label=r'Ridge $\lambda = 0.1$')
ax.legend()
save_fig("EoSfitting")
plt.show()
# -
# The above simple polynomial in density $\rho$ gives an excellent fit
# to the data.
#
# We note also that there is a small deviation between the
# standard OLS and the Ridge regression at higher densities. We discuss this in more detail
# below.
#
#
# ## Splitting our Data in Training and Test data
#
# It is normal in essentially all Machine Learning studies to split the
# data in a training set and a test set (sometimes also an additional
# validation set). **Scikit-Learn** has an own function for this. There
# is no explicit recipe for how much data should be included as training
# data and say test data. An accepted rule of thumb is to use
# approximately $2/3$ to $4/5$ of the data as training data. We will
# postpone a discussion of this splitting to the end of these notes and
# our discussion of the so-called **bias-variance** tradeoff. Here we
# limit ourselves to repeat the above equation of state fitting example
# but now splitting the data into a training set and a test set.
# +
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
# Where to save the figures and data files
PROJECT_ROOT_DIR = "Results"
FIGURE_ID = "Results/FigureFiles"
DATA_ID = "DataFiles/"
if not os.path.exists(PROJECT_ROOT_DIR):
os.mkdir(PROJECT_ROOT_DIR)
if not os.path.exists(FIGURE_ID):
os.makedirs(FIGURE_ID)
if not os.path.exists(DATA_ID):
os.makedirs(DATA_ID)
def image_path(fig_id):
return os.path.join(FIGURE_ID, fig_id)
def data_path(dat_id):
return os.path.join(DATA_ID, dat_id)
def save_fig(fig_id):
plt.savefig(image_path(fig_id) + ".png", format='png')
def R2(y_data, y_model):
return 1 - np.sum((y_data - y_model) ** 2) / np.sum((y_data - np.mean(y_data)) ** 2)
def MSE(y_data,y_model):
n = np.size(y_model)
return np.sum((y_data-y_model)**2)/n
infile = open(data_path("EoS.csv"),'r')
# Read the EoS data as csv file and organize it into two arrays with density and energies
EoS = pd.read_csv(infile, names=('Density', 'Energy'))
EoS['Energy'] = pd.to_numeric(EoS['Energy'], errors='coerce')
EoS = EoS.dropna()
Energies = EoS['Energy']
Density = EoS['Density']
# The design matrix now as function of various polytropes
X = np.zeros((len(Density),5))
X[:,0] = 1
X[:,1] = Density**(2.0/3.0)
X[:,2] = Density
X[:,3] = Density**(4.0/3.0)
X[:,4] = Density**(5.0/3.0)
# We split the data in test and training data
X_train, X_test, y_train, y_test = train_test_split(X, Energies, test_size=0.2)
# matrix inversion to find beta
beta = np.linalg.inv(X_train.T.dot(X_train)).dot(X_train.T).dot(y_train)
# and then make the prediction
ytilde = X_train @ beta
print("Training R2")
print(R2(y_train,ytilde))
print("Training MSE")
print(MSE(y_train,ytilde))
ypredict = X_test @ beta
print("Test R2")
print(R2(y_test,ypredict))
print("Test MSE")
print(MSE(y_test,ypredict))
# -
# <!-- !split -->
# ## The Boston housing data example
#
# The Boston housing
# data set was originally a part of the UCI Machine Learning Repository
# and has since been removed from it. The data set was then included in
# **Scikit-Learn**'s library (note that `load_boston` was deprecated in
# scikit-learn 1.0 and removed in version 1.2, so an older version is
# needed to run the code below). There are 506 samples and 13 feature
# (predictor) variables in this data set. The objective is to predict
# the median home value (**MEDV**) using the features (predictors) listed here.
#
# The features/predictors are
# 1. CRIM: Per capita crime rate by town
#
# 2. ZN: Proportion of residential land zoned for lots over 25000 square feet
#
# 3. INDUS: Proportion of non-retail business acres per town
#
# 4. CHAS: Charles River dummy variable (= 1 if tract bounds river; 0 otherwise)
#
# 5. NOX: Nitric oxide concentration (parts per 10 million)
#
# 6. RM: Average number of rooms per dwelling
#
# 7. AGE: Proportion of owner-occupied units built prior to 1940
#
# 8. DIS: Weighted distances to five Boston employment centers
#
# 9. RAD: Index of accessibility to radial highways
#
# 10. TAX: Full-value property tax rate per USD10000
#
# 11. B: $1000(Bk - 0.63)^2$, where $Bk$ is the proportion of people of African American descent by town
#
# 12. LSTAT: Percentage of lower status of the population
#
# 13. MEDV: Median value of owner-occupied homes in USD 1000s (this is the target variable)
#
# ## Housing data, the code
# We start by importing the libraries
# +
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
# -
# and load the Boston Housing DataSet from **Scikit-Learn**
# +
from sklearn.datasets import load_boston
boston_dataset = load_boston()
# boston_dataset is a dictionary
# let's check what it contains
boston_dataset.keys()
# -
# Then we invoke Pandas
boston = pd.DataFrame(boston_dataset.data, columns=boston_dataset.feature_names)
boston.head()
boston['MEDV'] = boston_dataset.target
# and preprocess the data
# check for missing values in all the columns
boston.isnull().sum()
# We can then visualize the data
# +
# set the size of the figure
sns.set(rc={'figure.figsize':(11.7,8.27)})
# plot a histogram showing the distribution of the target values
# (distplot is deprecated in recent seaborn versions; use histplot there)
sns.distplot(boston['MEDV'], bins=30)
plt.show()
# -
# It is now useful to look at the correlation matrix
# compute the pair wise correlation for all columns
correlation_matrix = boston.corr().round(2)
# use the heatmap function from seaborn to plot the correlation matrix
# annot = True to print the values inside the square
sns.heatmap(data=correlation_matrix, annot=True)
# From the above correlation plot we can see that **MEDV** is strongly correlated with **LSTAT** and **RM**. We see also that **RAD** and **TAX** are strongly correlated, so we do not include both of them in our feature set, in order to avoid multicollinearity
# +
plt.figure(figsize=(20, 5))
features = ['LSTAT', 'RM']
target = boston['MEDV']
for i, col in enumerate(features):
plt.subplot(1, len(features) , i+1)
x = boston[col]
y = target
plt.scatter(x, y, marker='o')
plt.title(col)
plt.xlabel(col)
plt.ylabel('MEDV')
# -
# Now we start training our model
X = pd.DataFrame(np.c_[boston['LSTAT'], boston['RM']], columns = ['LSTAT','RM'])
Y = boston['MEDV']
# We split the data into training and test sets
# +
from sklearn.model_selection import train_test_split
# splits the training and test data set in 80% : 20%
# assign random_state to any fixed value; this ensures reproducibility
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.2, random_state=5)
print(X_train.shape)
print(X_test.shape)
print(Y_train.shape)
print(Y_test.shape)
# -
# Then we use the linear regression functionality from **Scikit-Learn**
# +
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
lin_model = LinearRegression()
lin_model.fit(X_train, Y_train)
# model evaluation for training set
y_train_predict = lin_model.predict(X_train)
rmse = (np.sqrt(mean_squared_error(Y_train, y_train_predict)))
r2 = r2_score(Y_train, y_train_predict)
print("The model performance for training set")
print("--------------------------------------")
print('RMSE is {}'.format(rmse))
print('R2 score is {}'.format(r2))
print("\n")
# model evaluation for testing set
y_test_predict = lin_model.predict(X_test)
# root mean square error of the model
rmse = (np.sqrt(mean_squared_error(Y_test, y_test_predict)))
# r-squared score of the model
r2 = r2_score(Y_test, y_test_predict)
print("The model performance for testing set")
print("--------------------------------------")
print('RMSE is {}'.format(rmse))
print('R2 score is {}'.format(r2))
# -
# plotting the y_test vs y_pred
# ideally should have been a straight line
plt.scatter(Y_test, y_test_predict)
plt.show()
# ## Singular Value Decomposition Algorithm
#
#
#
# ## The singular value decomposition
#
#
# The examples we have looked at so far are cases where we normally can
# invert the matrix $\boldsymbol{X}^T\boldsymbol{X}$. Using a polynomial expansion as we
# did both for the masses and the fitting of the equation of state,
# leads to row vectors of the design matrix which are essentially
# orthogonal due to the polynomial character of our model. Obtaining the inverse of the design matrix is then often done via a so-called LU, QR or Cholesky decomposition.
#
#
#
# This may
# however not be the case in general, and a standard matrix inversion
# algorithm based on say LU, QR or Cholesky decomposition may run into singularities. We will see examples of this below.
#
# There is however a way to partially circumvent this problem and also gain some insight about the ordinary least squares approach.
#
# This is given by the **Singular Value Decomposition** algorithm, perhaps
# the most powerful linear algebra algorithm. Let us look at a
# different example where we may have problems with the standard matrix
# inversion algorithm. Thereafter we dive into the math of the SVD.
#
#
#
#
#
# ## Linear Regression Problems
#
# One of the typical problems we encounter with linear regression, in particular
# when the matrix $\boldsymbol{X}$ (our so-called design matrix) is high-dimensional,
# are problems with near singular or singular matrices. The column vectors of $\boldsymbol{X}$
# may be linearly dependent, normally referred to as super-collinearity.
# This means that the matrix may be rank deficient and it is basically impossible to
# to model the data using linear regression. As an example, consider the matrix
# $$
# \begin{align*}
# \mathbf{X} & = \left[
# \begin{array}{rrr}
# 1 & -1 & 2
# \\
# 1 & 0 & 1
# \\
# 1 & 2 & -1
# \\
# 1 & 1 & 0
# \end{array} \right]
# \end{align*}
# $$
# The columns of $\boldsymbol{X}$ are linearly dependent. We see this easily since
# the first column is the row-wise sum of the other two columns. The rank (more precisely,
# the column rank) of a matrix is the dimension of the space spanned by the
# column vectors. Hence, the rank of $\mathbf{X}$ is equal to the number
# of linearly independent columns. In this particular case the matrix has rank 2.
#
# Super-collinearity of an $(n \times p)$-dimensional design matrix $\mathbf{X}$ implies
# that the matrix $\boldsymbol{X}^T\boldsymbol{X}$ (the matrix we need to invert to solve the linear regression equations) is non-invertible. If a square matrix does not have an inverse, we say that the matrix is singular. The example here demonstrates this
# $$
# \begin{align*}
# \boldsymbol{X} & = \left[
# \begin{array}{rr}
# 1 & -1
# \\
# 1 & -1
# \end{array} \right].
# \end{align*}
# $$
# We see easily that $\mbox{det}(\boldsymbol{X}) = x_{11} x_{22} - x_{12} x_{21} = 1 \times (-1) - 1 \times (-1) = 0$. Hence, $\mathbf{X}$ is singular and its inverse is undefined.
# This is equivalent to saying that the matrix $\boldsymbol{X}$ has at least one eigenvalue which is zero.
#
#
# ## Fixing the singularity
#
# If our design matrix $\boldsymbol{X}$ which enters the linear regression problem
# <!-- Equation labels as ordinary links -->
# <div id="_auto1"></div>
#
# $$
# \begin{equation}
# \boldsymbol{\beta} = (\boldsymbol{X}^{T} \boldsymbol{X})^{-1} \boldsymbol{X}^{T} \boldsymbol{y},
# \label{_auto1} \tag{1}
# \end{equation}
# $$
# has linearly dependent column vectors, we will not be able to compute the inverse
# of $\boldsymbol{X}^T\boldsymbol{X}$ and we cannot find the parameters (estimators) $\beta_i$.
# The estimators are only well-defined if $(\boldsymbol{X}^{T}\boldsymbol{X})^{-1}$ exists.
# This is more likely to happen when the matrix $\boldsymbol{X}$ is high-dimensional. In this case it is likely to encounter a situation where
# the regression parameters $\beta_i$ cannot be estimated.
#
# A cheap *ad hoc* approach is simply to add a small diagonal component to the matrix to invert, that is we change
# $$
# \boldsymbol{X}^{T} \boldsymbol{X} \rightarrow \boldsymbol{X}^{T} \boldsymbol{X}+\lambda \boldsymbol{I},
# $$
# where $\boldsymbol{I}$ is the identity matrix. When we discuss **Ridge** regression this is actually what we end up evaluating. The parameter $\lambda$ is called a hyperparameter. More about this later.
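# As a quick numerical illustration (a sketch added here, not part of the original
# example), we can check that adding a small diagonal term $\lambda\boldsymbol{I}$ turns
# the singular matrix $\boldsymbol{X}^T\boldsymbol{X}$ from the example above into an invertible one:

```python
import numpy as np

# the rank-deficient design matrix from the example above
X = np.array([[1., -1., 2.],
              [1., 0., 1.],
              [1., 2., -1.],
              [1., 1., 0.]])
A = X.T @ X                    # singular, since the columns of X are linearly dependent
lam = 1e-3
A_reg = A + lam * np.eye(3)    # the ad hoc fix: add a small diagonal component
assert np.linalg.matrix_rank(A) == 2
assert np.linalg.matrix_rank(A_reg) == 3   # the regularized matrix has full rank
```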
#
#
#
# ## Basic math of the SVD
#
#
# From standard linear algebra we know that a square matrix $\boldsymbol{X}$ can be diagonalized if and only if it is
# a so-called [normal matrix](https://en.wikipedia.org/wiki/Normal_matrix), that is if $\boldsymbol{X}\in {\mathbb{R}}^{n\times n}$
# we have $\boldsymbol{X}\boldsymbol{X}^T=\boldsymbol{X}^T\boldsymbol{X}$ or if $\boldsymbol{X}\in {\mathbb{C}}^{n\times n}$ we have $\boldsymbol{X}\boldsymbol{X}^{\dagger}=\boldsymbol{X}^{\dagger}\boldsymbol{X}$.
# The matrix has then a set of eigenpairs
# $$
# (\lambda_1,\boldsymbol{u}_1),\dots, (\lambda_n,\boldsymbol{u}_n),
# $$
# and the eigenvalues are given by the diagonal matrix
# $$
# \boldsymbol{\Sigma}=\mathrm{Diag}(\lambda_1, \dots,\lambda_n).
# $$
# The matrix $\boldsymbol{X}$ can then be diagonalized by an orthogonal/unitary transformation $\boldsymbol{U}$
# $$
# \boldsymbol{X} = \boldsymbol{U}\boldsymbol{\Sigma}\boldsymbol{U}^T,
# $$
# with $\boldsymbol{U}\boldsymbol{U}^T=\boldsymbol{I}$ or $\boldsymbol{U}\boldsymbol{U}^{\dagger}=\boldsymbol{I}$.
#
# Not all square matrices are diagonalizable. A matrix like the one discussed above
# $$
# \boldsymbol{X} = \begin{bmatrix}
# 1& -1 \\
# 1& -1\\
# \end{bmatrix}
# $$
# is not diagonalizable, it is a so-called [defective matrix](https://en.wikipedia.org/wiki/Defective_matrix). It is easy to see that the condition
# $\boldsymbol{X}\boldsymbol{X}^T=\boldsymbol{X}^T\boldsymbol{X}$ is not fulfilled.
#
#
# ## The SVD, a Fantastic Algorithm
#
#
# However, and this is the strength of the SVD algorithm, any general
# matrix $\boldsymbol{X}$ can be decomposed in terms of a diagonal matrix and
# two orthogonal/unitary matrices. The [Singular Value Decompostion
# (SVD) theorem](https://en.wikipedia.org/wiki/Singular_value_decomposition)
# states that a general $m\times n$ matrix $\boldsymbol{X}$ can be written in
# terms of a diagonal matrix $\boldsymbol{\Sigma}$ of dimensionality $m\times n$
# and two orthogonal matrices $\boldsymbol{U}$ and $\boldsymbol{V}$, where the first has
# dimensionality $m \times m$ and the last dimensionality $n\times n$.
# We have then
# $$
# \boldsymbol{X} = \boldsymbol{U}\boldsymbol{\Sigma}\boldsymbol{V}^T
# $$
# As an example, the above defective matrix can be decomposed as
# $$
# \boldsymbol{X} = \frac{1}{\sqrt{2}}\begin{bmatrix} 1& 1 \\ 1& -1\\ \end{bmatrix} \begin{bmatrix} 2& 0 \\ 0& 0\\ \end{bmatrix} \frac{1}{\sqrt{2}}\begin{bmatrix} 1& -1 \\ 1& 1\\ \end{bmatrix}=\boldsymbol{U}\boldsymbol{\Sigma}\boldsymbol{V}^T,
# $$
# with singular values $\sigma_1=2$ and $\sigma_2=0$.
# The SVD always exists!
#
#
# ## Another Example
#
# Consider the following matrix which can be SVD decomposed as
# $$
# \boldsymbol{X} = \frac{1}{15}\begin{bmatrix} 14 & 2\\ 4 & 22\\ 16 & 13\end{bmatrix}=\frac{1}{3}\begin{bmatrix} 1& 2 & 2 \\ 2& -2 & 1\\ 2 & 1& -2\end{bmatrix} \begin{bmatrix} 2& 0 \\ 0& 1\\ 0 & 0\end{bmatrix}\frac{1}{5}\begin{bmatrix} 3& 4 \\ 4& -3\end{bmatrix}=\boldsymbol{U}\boldsymbol{\Sigma}\boldsymbol{V}^T.
# $$
# This is a $3\times 2$ matrix which is decomposed in terms of a
# $3\times 3$ matrix $\boldsymbol{U}$, and a $2\times 2$ matrix $\boldsymbol{V}$. It is easy to see
# that $\boldsymbol{U}$ and $\boldsymbol{V}$ are orthogonal (how?).
#
# The SVD
# decomposition gives singular values ordered such that
# $\sigma_i\geq\sigma_{i+1}$ for all $i$; for dimensions larger than $i=2$, the
# singular values are zero.
#
# In the general case, where our design matrix $\boldsymbol{X}$ has dimension
# $n\times p$, the matrix is thus decomposed into an $n\times n$
# orthogonal matrix $\boldsymbol{U}$, a $p\times p$ orthogonal matrix $\boldsymbol{V}$
# and a diagonal matrix $\boldsymbol{\Sigma}$ with $r=\mathrm{min}(n,p)$
# singular values $\sigma_i\geq 0$ on the main diagonal and zeros filling
# the rest of the matrix. There are at most $p$ singular values
# assuming that $n > p$. In our regression examples for the nuclear
# masses and the equation of state this is indeed the case, while for
# the Ising model we have $p > n$. These are often cases that lead to
# near singular or singular matrices.
#
# The columns of $\boldsymbol{U}$ are called the left singular vectors while the columns of $\boldsymbol{V}$ are the right singular vectors.
#
# ## Economy-size SVD
#
# If we assume that $n > p$, then our matrix $\boldsymbol{U}$ has dimension $n
# \times n$. The last $n-p$ columns of $\boldsymbol{U}$ become however
# irrelevant in our calculations since they are multiplied with the
# zeros in $\boldsymbol{\Sigma}$.
#
# The economy-size decomposition removes extra rows or columns of zeros
# from the diagonal matrix of singular values, $\boldsymbol{\Sigma}$, along with the columns
# in either $\boldsymbol{U}$ or $\boldsymbol{V}$ that multiply those zeros in the expression.
# Removing these zeros and columns can improve execution time
# and reduce storage requirements without compromising the accuracy of
# the decomposition.
#
# If $n > p$, we keep only the first $p$ columns of $\boldsymbol{U}$ and $\boldsymbol{\Sigma}$ has dimension $p\times p$.
# If $p > n$, then only the first $n$ columns of $\boldsymbol{V}$ are computed and $\boldsymbol{\Sigma}$ has dimension $n\times n$.
# The $n=p$ case is obvious, we retain the full SVD.
# In general the economy-size SVD requires fewer floating-point operations while retaining the desired accuracy.
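# A small numerical check of the economy-size SVD (a sketch assuming NumPy, not part
# of the original notes): `numpy.linalg.svd` with `full_matrices=False` returns the
# reduced matrices, and both the full and the economy forms reconstruct $\boldsymbol{X}$.

```python
import numpy as np

X = np.array([[14., 2.], [4., 22.], [16., 13.]]) / 15    # the 3x2 example matrix
U, s, VT = np.linalg.svd(X, full_matrices=True)      # full SVD: U is 3x3
Ue, se, VTe = np.linalg.svd(X, full_matrices=False)  # economy SVD: U is 3x2
assert U.shape == (3, 3) and Ue.shape == (3, 2)
assert np.allclose(s, [2.0, 1.0])                    # singular values of the example
# both decompositions reconstruct X
Sigma = np.zeros((3, 2)); Sigma[:2, :2] = np.diag(s)
assert np.allclose(U @ Sigma @ VT, X)
assert np.allclose(Ue @ np.diag(se) @ VTe, X)
```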
#
# ## Codes for the SVD
# +
import numpy as np

# SVD-based inversion
def SVDinv(A):
    ''' Takes as input a numpy matrix A and returns inv(A) based on the
    singular value decomposition (SVD). SVD is numerically more stable than
    the inversion algorithms provided by numpy and scipy.linalg, at the cost
    of being slower. Note: for a (near-)singular matrix some singular values
    are (numerically) zero and their reciprocals blow up; a proper
    pseudoinverse (numpy.linalg.pinv) sets those reciprocals to zero instead.
    '''
    U, s, VT = np.linalg.svd(A)
    print(U)
    print(s)
    print(VT)
    D = np.zeros((len(U), len(VT)))
    for i in range(0, len(VT)):
        D[i, i] = s[i]
    UT = np.transpose(U)
    V = np.transpose(VT)
    invD = np.linalg.inv(D)
    return np.matmul(V, np.matmul(invD, UT))

X = np.array([[1.0, -1.0, 2.0], [1.0, 0.0, 1.0], [1.0, 2.0, -1.0], [1.0, 1.0, 0.0]])
print(X)
A = np.transpose(X) @ X
print(A)
C = SVDinv(A)
print(C)
# Brute force inversion of the super-collinear matrix fails
try:
    B = np.linalg.inv(A)
    print(B)
except np.linalg.LinAlgError as err:
    print("Standard inversion failed:", err)
# -
# The matrix $\boldsymbol{X}$ has columns that are linearly dependent. The first
# column is the row-wise sum of the other two columns. The rank of a
# matrix (the column rank) is the dimension of the space spanned by the
# column vectors. The rank of the matrix is the number of linearly
# independent columns, in this case just $2$. We see this from the
# singular values when running the above code. Running the standard
# inversion algorithm for matrix inversion with $\boldsymbol{X}^T\boldsymbol{X}$ raises
# an error (`LinAlgError`) because the matrix is singular.
#
#
#
# ## Mathematical Properties
#
# There are several interesting mathematical properties which will be
# relevant when we are going to discuss the differences between say
# ordinary least squares (OLS) and **Ridge** regression.
#
# We have from OLS that the parameters of the linear approximation are given by
# $$
# \boldsymbol{\tilde{y}} = \boldsymbol{X}\boldsymbol{\beta} = \boldsymbol{X}\left(\boldsymbol{X}^T\boldsymbol{X}\right)^{-1}\boldsymbol{X}^T\boldsymbol{y}.
# $$
# The matrix to invert can be rewritten in terms of our SVD decomposition as
# $$
# \boldsymbol{X}^T\boldsymbol{X} = \boldsymbol{V}\boldsymbol{\Sigma}^T\boldsymbol{U}^T\boldsymbol{U}\boldsymbol{\Sigma}\boldsymbol{V}^T.
# $$
# Using the orthogonality properties of $\boldsymbol{U}$ we have
# $$
# \boldsymbol{X}^T\boldsymbol{X} = \boldsymbol{V}\boldsymbol{\Sigma}^T\boldsymbol{\Sigma}\boldsymbol{V}^T = \boldsymbol{V}\boldsymbol{D}\boldsymbol{V}^T,
# $$
# with $\boldsymbol{D}$ being a diagonal matrix with values along the diagonal given by the singular values squared.
#
# This means that
# $$
# (\boldsymbol{X}^T\boldsymbol{X})\boldsymbol{V} = \boldsymbol{V}\boldsymbol{D},
# $$
# that is the eigenvectors of $(\boldsymbol{X}^T\boldsymbol{X})$ are given by the columns of the right singular matrix of $\boldsymbol{X}$ and the eigenvalues are the squared singular values. It is easy to show (show this) that
# $$
# (\boldsymbol{X}\boldsymbol{X}^T)\boldsymbol{U} = \boldsymbol{U}\boldsymbol{D},
# $$
# that is, the eigenvectors of $\boldsymbol{X}\boldsymbol{X}^T$ are the columns of the left singular matrix and the nonzero eigenvalues are the same.
#
# Going back to our OLS equation we have
# $$
# \boldsymbol{X}\boldsymbol{\beta} = \boldsymbol{X}\left(\boldsymbol{V}\boldsymbol{D}\boldsymbol{V}^T \right)^{-1}\boldsymbol{X}^T\boldsymbol{y}=\boldsymbol{U\Sigma V^T}\left(\boldsymbol{V}\boldsymbol{D}\boldsymbol{V}^T \right)^{-1}(\boldsymbol{U\Sigma V^T})^T\boldsymbol{y}=\boldsymbol{U}\boldsymbol{U}^T\boldsymbol{y}.
# $$
# We will come back to this expression when we discuss Ridge regression.
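# These relations are easy to verify numerically (a small sketch on random data, not
# part of the original notes): the eigenvalues of $\boldsymbol{X}^T\boldsymbol{X}$ are the squared singular
# values, and the OLS prediction reduces to $\boldsymbol{U}\boldsymbol{U}^T\boldsymbol{y}$ with the economy-size $\boldsymbol{U}$.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 3))
y = rng.normal(size=6)
U, s, VT = np.linalg.svd(X, full_matrices=False)
# eigenvalues of X^T X equal the squared singular values of X
eigvals = np.linalg.eigvalsh(X.T @ X)
assert np.allclose(np.sort(eigvals), np.sort(s**2))
# the OLS prediction equals U U^T y (economy-size U)
beta = np.linalg.solve(X.T @ X, X.T @ y)
assert np.allclose(X @ beta, U @ U.T @ y)
```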
#
#
#
# ## Beyond Ordinary Least Squares
#
#
# ## Ridge and LASSO Regression
#
# Let us remind ourselves about the expression for the standard Mean Squared Error (MSE) which we used to define our cost function and the equations for the ordinary least squares (OLS) method, that is
# our optimization problem is
# $$
# {\displaystyle \min_{\boldsymbol{\beta}\in {\mathbb{R}}^{p}}}\frac{1}{n}\left\{\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)^T\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)\right\}.
# $$
# or we can state it as
# $$
# {\displaystyle \min_{\boldsymbol{\beta}\in
# {\mathbb{R}}^{p}}}\frac{1}{n}\sum_{i=0}^{n-1}\left(y_i-\tilde{y}_i\right)^2=\frac{1}{n}\vert\vert \boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\vert\vert_2^2,
# $$
# where we have used the definition of a norm-2 vector, that is
# $$
# \vert\vert \boldsymbol{x}\vert\vert_2 = \sqrt{\sum_i x_i^2}.
# $$
# By minimizing the above equation with respect to the parameters
# $\boldsymbol{\beta}$ we could then obtain an analytical expression for the
# parameters $\boldsymbol{\beta}$. We can add a regularization parameter $\lambda$ by
# defining a new cost function to be optimized, that is
# $$
# {\displaystyle \min_{\boldsymbol{\beta}\in
# {\mathbb{R}}^{p}}}\frac{1}{n}\vert\vert \boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\vert\vert_2^2+\lambda\vert\vert \boldsymbol{\beta}\vert\vert_2^2
# $$
# which leads to the Ridge regression minimization problem where we
# require that $\vert\vert \boldsymbol{\beta}\vert\vert_2^2\le t$, where $t$ is
# a finite number larger than zero. By defining
# $$
# C(\boldsymbol{X},\boldsymbol{\beta})=\frac{1}{n}\vert\vert \boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\vert\vert_2^2+\lambda\vert\vert \boldsymbol{\beta}\vert\vert_1,
# $$
# we have a new optimization equation
# $$
# {\displaystyle \min_{\boldsymbol{\beta}\in
# {\mathbb{R}}^{p}}}\frac{1}{n}\vert\vert \boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\vert\vert_2^2+\lambda\vert\vert \boldsymbol{\beta}\vert\vert_1
# $$
# which leads to Lasso regression. Lasso stands for least absolute shrinkage and selection operator.
#
# Here we have defined the norm-1 as
# $$
# \vert\vert \boldsymbol{x}\vert\vert_1 = \sum_i \vert x_i\vert.
# $$
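# The two penalties behave quite differently in practice. As a minimal sketch (using
# **Scikit-Learn**'s `Ridge` and `Lasso` classes on synthetic data; the parameter
# values are illustrative only): the Ridge penalty shrinks the norm of the coefficient
# vector, while the Lasso penalty can set coefficients exactly to zero.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge, Lasso

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 5))
beta_true = np.array([3.0, 0.0, 0.0, 1.5, 0.0])
y = X @ beta_true + 0.1 * rng.normal(size=50)
ols = LinearRegression(fit_intercept=False).fit(X, y)
ridge = Ridge(alpha=1.0, fit_intercept=False).fit(X, y)
lasso = Lasso(alpha=0.1, fit_intercept=False).fit(X, y)
# Ridge shrinks the norm of the coefficient vector relative to OLS
assert np.linalg.norm(ridge.coef_) <= np.linalg.norm(ols.coef_)
# Lasso drives (some of) the irrelevant coefficients exactly to zero
assert np.sum(lasso.coef_ == 0.0) >= 1
```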
# ## More on Ridge Regression
#
# Using the matrix-vector expression for Ridge regression,
# $$
# C(\boldsymbol{X},\boldsymbol{\beta})=\frac{1}{n}\left\{(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta})^T(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta})\right\}+\lambda\boldsymbol{\beta}^T\boldsymbol{\beta},
# $$
# by taking the derivatives with respect to $\boldsymbol{\beta}$ we obtain then
# a slightly modified matrix inversion problem which for finite values
# of $\lambda$ does not suffer from singularity problems. We obtain
# $$
# \boldsymbol{\beta}^{\mathrm{Ridge}} = \left(\boldsymbol{X}^T\boldsymbol{X}+\lambda\boldsymbol{I}\right)^{-1}\boldsymbol{X}^T\boldsymbol{y},
# $$
# with $\boldsymbol{I}$ being a $p\times p$ identity matrix with the constraint that
# $$
# \sum_{i=0}^{p-1} \beta_i^2 \leq t,
# $$
# with $t$ a finite positive number.
#
# We see that Ridge regression is nothing but the standard
# OLS with a modified diagonal term added to $\boldsymbol{X}^T\boldsymbol{X}$. The
# consequences, in particular for our discussion of the bias-variance tradeoff
# are rather interesting.
#
# Furthermore, if we use the result above in terms of the SVD decomposition (our analysis was done for the OLS method), we had
# $$
# (\boldsymbol{X}\boldsymbol{X}^T)\boldsymbol{U} = \boldsymbol{U}\boldsymbol{D}.
# $$
# We can analyse the OLS solutions in terms of the eigenvectors (the columns) of the left singular matrix $\boldsymbol{U}$ as
# $$
# \boldsymbol{X}\boldsymbol{\beta} = \boldsymbol{X}\left(\boldsymbol{V}\boldsymbol{D}\boldsymbol{V}^T \right)^{-1}\boldsymbol{X}^T\boldsymbol{y}=\boldsymbol{U\Sigma V^T}\left(\boldsymbol{V}\boldsymbol{D}\boldsymbol{V}^T \right)^{-1}(\boldsymbol{U\Sigma V^T})^T\boldsymbol{y}=\boldsymbol{U}\boldsymbol{U}^T\boldsymbol{y}
# $$
# For Ridge regression this becomes
# $$
# \boldsymbol{X}\boldsymbol{\beta}^{\mathrm{Ridge}} = \boldsymbol{U\Sigma V^T}\left(\boldsymbol{V}\boldsymbol{D}\boldsymbol{V}^T+\lambda\boldsymbol{I} \right)^{-1}(\boldsymbol{U\Sigma V^T})^T\boldsymbol{y}=\sum_{j=0}^{p-1}\boldsymbol{u}_j\boldsymbol{u}_j^T\frac{\sigma_j^2}{\sigma_j^2+\lambda}\boldsymbol{y},
# $$
# with the vectors $\boldsymbol{u}_j$ being the columns of $\boldsymbol{U}$.
#
# ## Interpreting the Ridge results
#
# Since $\lambda \geq 0$, it means that compared to OLS, we have
# $$
# \frac{\sigma_j^2}{\sigma_j^2+\lambda} \leq 1.
# $$
# Ridge regression finds the coordinates of $\boldsymbol{y}$ with respect to the
# orthonormal basis $\boldsymbol{U}$, it then shrinks the coordinates by
# $\frac{\sigma_j^2}{\sigma_j^2+\lambda}$. Recall that the SVD has
# eigenvalues ordered in a descending way, that is $\sigma_i \geq
# \sigma_{i+1}$.
#
# For small eigenvalues $\sigma_i$ it means that their contributions become less important, a fact which can be used to reduce the number of degrees of freedom.
# Actually, calculating the variance of $\boldsymbol{X}\boldsymbol{v}_j$ shows that this quantity is equal to $\sigma_j^2/n$.
# With a parameter $\lambda$ we can thus shrink the role of specific parameters.
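# We can verify this shrinkage formula numerically (a sketch on random data, not part
# of the original notes): the Ridge prediction computed by direct matrix inversion
# coincides with the SVD expression with factors $\sigma_j^2/(\sigma_j^2+\lambda)$.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(10, 4))
y = rng.normal(size=10)
lam = 0.5
U, s, VT = np.linalg.svd(X, full_matrices=False)
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(4), X.T @ y)
# fitted values via the SVD shrinkage factors sigma_j^2 / (sigma_j^2 + lambda)
yhat_svd = U @ np.diag(s**2 / (s**2 + lam)) @ U.T @ y
assert np.allclose(X @ beta_ridge, yhat_svd)
```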
#
#
# ## More interpretations
#
# For the sake of simplicity, let us assume that the design matrix is orthonormal, that is
# $$
# \boldsymbol{X}^T\boldsymbol{X}=(\boldsymbol{X}^T\boldsymbol{X})^{-1} =\boldsymbol{I}.
# $$
# In this case the standard OLS results in
# $$
# \boldsymbol{\beta}^{\mathrm{OLS}} = \boldsymbol{X}^T\boldsymbol{y}=\sum_{j=0}^{p-1}\boldsymbol{v}_j\boldsymbol{u}_j^T\boldsymbol{y},
# $$
# and
# $$
# \boldsymbol{\beta}^{\mathrm{Ridge}} = \left(\boldsymbol{I}+\lambda\boldsymbol{I}\right)^{-1}\boldsymbol{X}^T\boldsymbol{y}=\left(1+\lambda\right)^{-1}\boldsymbol{\beta}^{\mathrm{OLS}},
# $$
# that is the Ridge estimator scales the OLS estimator by the inverse of a factor $1+\lambda$, and
# the Ridge estimator converges to zero when the hyperparameter goes to
# infinity.
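# This scaling is easy to confirm with a small numerical experiment (a sketch; the
# orthonormal design is generated here with a QR decomposition, which is not part of
# the original notes):

```python
import numpy as np

rng = np.random.default_rng(5)
Q, _ = np.linalg.qr(rng.normal(size=(8, 3)))  # columns of Q are orthonormal
X = Q                                          # so X^T X = I
y = rng.normal(size=8)
lam = 0.7
beta_ols = X.T @ y
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)
# the Ridge estimator is the OLS estimator scaled by 1/(1 + lambda)
assert np.allclose(beta_ridge, beta_ols / (1.0 + lam))
```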
#
# For more discussions of Ridge and Lasso regression, [<NAME>'s](https://arxiv.org/abs/1509.09169) article is highly recommended.
# Similarly, [Mehta et al's article](https://arxiv.org/abs/1803.08823) is also recommended.
#
#
# ## Statistics and Resampling Techniques
#
#
# ## Where are we going?
#
# Before we proceed, we need to rethink what we have been doing. In our
# eagerness to fit the data, we have omitted several important elements in
# our regression analysis. In what follows we will
# 1. remind ourselves about some statistical properties, including a discussion of mean values, variance and the so-called bias-variance tradeoff
#
# 2. introduce resampling techniques like cross-validation, bootstrapping and jackknife and more
#
# This will allow us to link the standard linear algebra methods we have discussed above to a statistical interpretation of the methods.
#
#
#
#
#
# ## Resampling methods
# Resampling methods are an indispensable tool in modern
# statistics. They involve repeatedly drawing samples from a training
# set and refitting a model of interest on each sample in order to
# obtain additional information about the fitted model. For example, in
# order to estimate the variability of a linear regression fit, we can
# repeatedly draw different samples from the training data, fit a linear
# regression to each new sample, and then examine the extent to which
# the resulting fits differ. Such an approach may allow us to obtain
# information that would not be available from fitting the model only
# once using the original training sample.
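# As a minimal sketch of this idea (synthetic data and a plain straight-line fit; the
# details are illustrative, not part of the original notes), we can estimate the
# variability of a fitted slope by refitting on resampled data:

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.uniform(0.0, 1.0, 50)
y = 2.0 * x + 0.1 * rng.normal(size=50)
slopes = []
for _ in range(500):
    idx = rng.integers(0, 50, 50)          # draw a bootstrap sample with replacement
    slope, intercept = np.polyfit(x[idx], y[idx], 1)
    slopes.append(slope)
# the spread of the refitted slopes estimates the variability of the fit
print(np.mean(slopes), np.std(slopes))
```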
#
# Two resampling methods are often used in Machine Learning analyses,
# 1. The **bootstrap method**
#
# 2. and **Cross-Validation**
#
# In addition there are several other methods such as the Jackknife and the Blocking methods. We will discuss in particular
# cross-validation and the bootstrap method.
#
#
#
#
# ## Resampling approaches can be computationally expensive
#
# Resampling approaches can be computationally expensive, because they
# involve fitting the same statistical method multiple times using
# different subsets of the training data. However, due to recent
# advances in computing power, the computational requirements of
# resampling methods generally are not prohibitive. In this chapter, we
# discuss two of the most commonly used resampling methods,
# cross-validation and the bootstrap. Both methods are important tools
# in the practical application of many statistical learning
# procedures. For example, cross-validation can be used to estimate the
# test error associated with a given statistical learning method in
# order to evaluate its performance, or to select the appropriate level
# of flexibility. The process of evaluating a model’s performance is
# known as model assessment, whereas the process of selecting the proper
# level of flexibility for a model is known as model selection. The
# bootstrap is widely used.
#
#
#
# ## Why resampling methods ?
# **Statistical analysis.**
#
#
# * Our simulations can be treated as *computer experiments*. This is particularly the case for Monte Carlo methods
#
# * The results can be analysed with the same statistical tools as we would use analysing experimental data.
#
# * As in all experiments, we are looking for expectation values and an estimate of how accurate they are, i.e., possible sources for errors.
#
#
#
# ## Statistical analysis
#
# * As in other experiments, many numerical experiments have two classes of errors:
#
# * Statistical errors
#
# * Systematical errors
#
#
# * Statistical errors can be estimated using standard tools from statistics
#
# * Systematical errors are method specific and must be treated differently from case to case.
#
#
#
#
#
# ## Statistics
# The *probability distribution function (PDF)* is a function
# $p(x)$ on the domain of the stochastic variable $X$ which, in the discrete case, gives us the
# probability or relative frequency with which the value $x$ of $X$ occurs:
# $$
# p(x) = \mathrm{prob}(X=x)
# $$
# In the continuous case, the PDF does not directly depict the
# actual probability. Instead we define the probability for the
# stochastic variable to assume any value on an infinitesimal interval
# around $x$ to be $p(x)dx$. The continuous function $p(x)$ then gives us
# the *density* of the probability rather than the probability
# itself. The probability for a stochastic variable to assume any value
# on a non-infinitesimal interval $[a,\,b]$ is then just the integral:
# $$
# \mathrm{prob}(a\leq X\leq b) = \int_a^b p(x)dx
# $$
# Qualitatively speaking, a stochastic variable represents the values of
# numbers chosen as if by chance from some specified PDF so that the
# selection of a large set of these numbers reproduces this PDF.
#
#
#
#
# ## Statistics, moments
# A particularly useful class of special expectation values are the
# *moments*. The $n$-th moment of the PDF $p$ is defined as
# follows:
# $$
# \langle x^n\rangle \equiv \int\! x^n p(x)\,dx
# $$
# The zero-th moment $\langle 1\rangle$ is just the normalization condition of
# $p$. The first moment, $\langle x\rangle$, is called the *mean* of $p$
# and often denoted by the letter $\mu$:
# $$
# \langle x\rangle = \mu \equiv \int\! x p(x)\,dx
# $$
# ## Statistics, central moments
# A special version of the moments is the set of *central moments*,
# the n-th central moment defined as:
# $$
# \langle (x-\langle x \rangle )^n\rangle \equiv \int\! (x-\langle x\rangle)^n p(x)\,dx
# $$
# The zero-th and first central moments are both trivial, equal $1$ and
# $0$, respectively. But the second central moment, known as the
# *variance* of $p$, is of particular interest. For the stochastic
# variable $X$, the variance is denoted as $\sigma^2_X$ or $\mathrm{var}(X)$:
# <!-- Equation labels as ordinary links -->
# <div id="_auto2"></div>
#
# $$
# \begin{equation}
# \sigma^2_X\ \ =\ \ \mathrm{var}(X) = \langle (x-\langle x\rangle)^2\rangle =
# \int\! (x-\langle x\rangle)^2 p(x)\,dx
# \label{_auto2} \tag{2}
# \end{equation}
# $$
# <!-- Equation labels as ordinary links -->
# <div id="_auto3"></div>
#
# $$
# \begin{equation}
# = \int\! \left(x^2 - 2 x \langle x\rangle +
# \langle x\rangle^2\right)p(x)\,dx
# \label{_auto3} \tag{3}
# \end{equation}
# $$
# <!-- Equation labels as ordinary links -->
# <div id="_auto4"></div>
#
# $$
# \begin{equation}
# = \langle x^2\rangle - 2 \langle x\rangle\langle x\rangle + \langle x\rangle^2
# \label{_auto4} \tag{4}
# \end{equation}
# $$
# <!-- Equation labels as ordinary links -->
# <div id="_auto5"></div>
#
# $$
# \begin{equation}
# = \langle x^2\rangle - \langle x\rangle^2
# \label{_auto5} \tag{5}
# \end{equation}
# $$
# The square root of the variance, $\sigma =\sqrt{\langle (x-\langle x\rangle)^2\rangle}$ is called the *standard deviation* of $p$. It is clearly just the RMS (root-mean-square)
# value of the deviation of the PDF from its mean value, interpreted
# qualitatively as the *spread* of $p$ around its mean.
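# The relation $\sigma^2 = \langle x^2\rangle - \langle x\rangle^2$ is easy to check
# numerically (a small sketch, not part of the original notes):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(loc=2.0, scale=1.5, size=100_000)
# var(X) = <x^2> - <x>^2
assert np.allclose(np.var(x), np.mean(x**2) - np.mean(x)**2)
# the sample standard deviation should be close to the true value 1.5
assert abs(np.std(x) - 1.5) < 0.02
```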
#
#
#
# ## Statistics, covariance
# Another important quantity is the so called covariance, a variant of
# the above defined variance. Consider again the set $\{X_i\}$ of $n$
# stochastic variables (not necessarily uncorrelated) with the
# multivariate PDF $P(x_1,\dots,x_n)$. The *covariance* of two
# of the stochastic variables, $X_i$ and $X_j$, is defined as follows:
# $$
# \mathrm{cov}(X_i,\,X_j) \equiv \langle (x_i-\langle x_i\rangle)(x_j-\langle x_j\rangle)\rangle
# \nonumber
# $$
# <!-- Equation labels as ordinary links -->
# <div id="eq:def_covariance"></div>
#
# $$
# \begin{equation}
# =
# \int\!\cdots\!\int\!(x_i-\langle x_i \rangle)(x_j-\langle x_j \rangle)\,
# P(x_1,\dots,x_n)\,dx_1\dots dx_n
# \label{eq:def_covariance} \tag{6}
# \end{equation}
# $$
# with
# $$
# \langle x_i\rangle =
# \int\!\cdots\!\int\!x_i\,P(x_1,\dots,x_n)\,dx_1\dots dx_n
# $$
# ## Statistics, more covariance
# If we consider the above covariance as a matrix $C_{ij}=\mathrm{cov}(X_i,\,X_j)$, then the diagonal elements are just the familiar
# variances, $C_{ii} = \mathrm{cov}(X_i,\,X_i) = \mathrm{var}(X_i)$. It turns out that
# all the off-diagonal elements are zero if the stochastic variables are
# uncorrelated. This is easy to show, keeping in mind the linearity of
# the expectation value. Consider the stochastic variables $X_i$ and
# $X_j$, ($i\neq j$):
# <!-- Equation labels as ordinary links -->
# <div id="_auto6"></div>
#
# $$
# \begin{equation}
# \mathrm{cov}(X_i,\,X_j) = \langle(x_i-\langle x_i\rangle)(x_j-\langle x_j\rangle)\rangle
# \label{_auto6} \tag{7}
# \end{equation}
# $$
# <!-- Equation labels as ordinary links -->
# <div id="_auto7"></div>
#
# $$
# \begin{equation}
# =\langle x_i x_j - x_i\langle x_j\rangle - \langle x_i\rangle x_j + \langle x_i\rangle\langle x_j\rangle\rangle
# \label{_auto7} \tag{8}
# \end{equation}
# $$
# <!-- Equation labels as ordinary links -->
# <div id="_auto8"></div>
#
# $$
# \begin{equation}
# =\langle x_i x_j\rangle - \langle x_i\langle x_j\rangle\rangle - \langle \langle x_i\rangle x_j\rangle +
# \langle \langle x_i\rangle\langle x_j\rangle\rangle
# \label{_auto8} \tag{9}
# \end{equation}
# $$
# <!-- Equation labels as ordinary links -->
# <div id="_auto9"></div>
#
# $$
# \begin{equation}
# =\langle x_i x_j\rangle - \langle x_i\rangle\langle x_j\rangle - \langle x_i\rangle\langle x_j\rangle +
# \langle x_i\rangle\langle x_j\rangle
# \label{_auto9} \tag{10}
# \end{equation}
# $$
# <!-- Equation labels as ordinary links -->
# <div id="_auto10"></div>
#
# $$
# \begin{equation}
# =\langle x_i x_j\rangle - \langle x_i\rangle\langle x_j\rangle
# \label{_auto10} \tag{11}
# \end{equation}
# $$
# ## Covariance example
#
# Suppose we have defined three vectors $\boldsymbol{x}, \boldsymbol{y}, \boldsymbol{z}$ with
# $n$ elements each. The covariance matrix is defined as
# $$
# \boldsymbol{\Sigma} = \begin{bmatrix} \sigma_{xx} & \sigma_{xy} & \sigma_{xz} \\
# \sigma_{yx} & \sigma_{yy} & \sigma_{yz} \\
# \sigma_{zx} & \sigma_{zy} & \sigma_{zz}
# \end{bmatrix},
# $$
# where for example
# $$
# \sigma_{xy} =\frac{1}{n} \sum_{i=0}^{n-1}(x_i- \overline{x})(y_i- \overline{y}).
# $$
# The Numpy function **np.cov** calculates the covariance elements using
# the factor $1/(n-1)$ instead of $1/n$ since it assumes we do not have
# the exact mean values.
#
# The following simple function uses the **np.vstack** function which
# takes each vector of dimension $1\times n$ and produces a $3\times n$
# matrix $\boldsymbol{W}$
# $$
# \boldsymbol{W} = \begin{bmatrix} x_0 & y_0 & z_0 \\
# x_1 & y_1 & z_1 \\
# x_2 & y_2 & z_2 \\
# \dots & \dots & \dots \\
# x_{n-2} & y_{n-2} & z_{n-2} \\
# x_{n-1} & y_{n-1} & z_{n-1}
# \end{bmatrix},
# $$
# which in turn is converted into the $3\times 3$ covariance matrix
# $\boldsymbol{\Sigma}$ via the Numpy function **np.cov()**. We note that we can
# also calculate the mean value of each set of samples $\boldsymbol{x}$ etc
# using the Numpy function **np.mean(x)**. We can also extract the
# eigenvalues of the covariance matrix through the **np.linalg.eig()**
# function.
#
#
# ## Covariance in numpy
# +
# Importing various packages
import numpy as np
n = 100
x = np.random.normal(size=n)
print(np.mean(x))
y = 4+3*x+np.random.normal(size=n)
print(np.mean(y))
z = x**3+np.random.normal(size=n)
print(np.mean(z))
W = np.vstack((x, y, z))
Sigma = np.cov(W)
print(Sigma)
# -
# ## Statistics, independent variables
# If $X_i$ and $X_j$ are independent, we get
# $\langle x_i x_j\rangle =\langle x_i\rangle\langle x_j\rangle$, resulting in $\mathrm{cov}(X_i, X_j) = 0\ \ (i\neq j)$.
#
# Also useful for us is the covariance of linear combinations of
# stochastic variables. Let $\{X_i\}$ and $\{Y_i\}$ be two sets of
# stochastic variables. Let also $\{a_i\}$ and $\{b_i\}$ be two sets of
# scalars. Consider the linear combination:
# $$
# U = \sum_i a_i X_i \qquad V = \sum_j b_j Y_j
# $$
# By the linearity of the expectation value
# $$
# \mathrm{cov}(U, V) = \sum_{i,j}a_i b_j \mathrm{cov}(X_i, Y_j)
# $$
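# Since the sample covariance is also bilinear, the identity above can be
# checked exactly on generated data. In the sketch below (the particular
# variables, coefficients and seed are illustrative assumptions) we compare
# the sample covariance of $U$ and $V$ with the double sum over
# $\mathrm{cov}(X_i, Y_j)$:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

# two correlated stochastic variables X_1, X_2
X = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.5], [0.5, 2.0]], size=n)
# Y_i correlated with X_i through added independent noise
Y = X + rng.normal(size=(n, 2))
a = np.array([1.0, -2.0])
b = np.array([0.5, 3.0])

U = X @ a
V = Y @ b

# left-hand side: sample covariance of the linear combinations
lhs = np.cov(U, V)[0, 1]
# right-hand side: sum_{ij} a_i b_j cov(X_i, Y_j)
C = np.array([[np.cov(X[:, i], Y[:, j])[0, 1] for j in range(2)]
              for i in range(2)])
rhs = a @ C @ b

print(lhs, rhs)
```

# Because the sample covariance is bilinear, the two sides agree to
# floating-point precision, not just statistically.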
# ## Statistics, more variance
# Now, since the variance is just $\mathrm{var}(X_i) = \mathrm{cov}(X_i, X_i)$, we get
# the variance of the linear combination $U = \sum_i a_i X_i$:
# <!-- Equation labels as ordinary links -->
# <div id="eq:variance_linear_combination"></div>
#
# $$
# \begin{equation}
# \mathrm{var}(U) = \sum_{i,j}a_i a_j \mathrm{cov}(X_i, X_j)
# \label{eq:variance_linear_combination} \tag{12}
# \end{equation}
# $$
# And in the special case when the stochastic variables are
# uncorrelated, the off-diagonal elements of the covariance are as we
# know zero, resulting in:
# $$
# \mathrm{var}(\sum_i a_i X_i) = \sum_i a_i^2 \mathrm{var}(X_i)
# $$
# which will become very useful in our study of the error in the mean
# value of a set of measurements.
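# The uncorrelated case is easily illustrated numerically: for independent
# $X_i$ the variance of the weighted sum reduces to the weighted sum of the
# variances. A small sketch (the variances and coefficients are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500_000
sigmas = np.array([1.0, 2.0, 0.5])   # standard deviations of X_1, X_2, X_3
a = np.array([2.0, -1.0, 3.0])       # coefficients a_i

# three independent stochastic variables with different variances
X = rng.normal(scale=sigmas, size=(n, 3))
U = X @ a

var_U = np.var(U)                    # empirical variance of the sum
var_sum = np.sum(a**2 * sigmas**2)   # sum_i a_i^2 var(X_i)
print(var_U, var_sum)
```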
#
#
#
# ## Statistics and stochastic processes
# A *stochastic process* is a process that produces sequentially a
# chain of values:
# $$
# \{x_1, x_2,\dots\,x_k,\dots\}.
# $$
# We will call these
# values our *measurements* and the entire set our measured
# *sample*. The action of measuring all the elements of a sample
# we will call a stochastic *experiment*, since, operationally,
# such measurements are often associated with the empirical observation of
# some physical or mathematical phenomenon; in other words, an experiment. We
# assume that these values are distributed according to some
# PDF $p_X^{\phantom X}(x)$, where $X$ is just the formal symbol for the
# stochastic variable whose PDF is $p_X^{\phantom X}(x)$. Instead of
# trying to determine the full distribution $p$ we are often only
# interested in finding the few lowest moments, like the mean
# $\mu_X^{\phantom X}$ and the variance $\sigma_X^{\phantom X}$.
#
#
#
#
# <!-- !split -->
# ## Statistics and sample variables
# In practical situations a sample is always of finite size. Let that
# size be $n$. The expectation value of a sample, the *sample mean*, is then defined as follows:
# $$
# \bar{x}_n \equiv \frac{1}{n}\sum_{k=1}^n x_k
# $$
# The *sample variance* is:
# $$
# \mathrm{var}(x) \equiv \frac{1}{n}\sum_{k=1}^n (x_k - \bar{x}_n)^2
# $$
# its square root being the *standard deviation of the sample*. The
# *sample covariance* is:
# $$
# \mathrm{cov}(x)\equiv\frac{1}{n}\sum_{kl}(x_k - \bar{x}_n)(x_l - \bar{x}_n)
# $$
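# The sample mean and the sample variance with the $1/n$ convention above map
# directly onto NumPy (a small sketch; the data are illustrative). Note that
# **np.var** uses the $1/n$ factor by default, while its `ddof=1` option gives
# the $1/(n-1)$ estimator:

```python
import numpy as np

rng = np.random.default_rng(123)
x = rng.normal(loc=1.0, scale=2.0, size=1000)

xbar = np.mean(x)                  # sample mean
svar = np.mean((x - xbar)**2)      # sample variance, 1/n convention
print(xbar, svar)
# np.var uses ddof=0 (1/n) by default; ddof=1 gives the 1/(n-1) estimator
print(np.var(x), np.var(x, ddof=1))
```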
# ## Statistics, sample variance and covariance
# Note that the sample variance is the sample covariance without the
# cross terms. In a similar manner as the covariance in Eq. ([6](#eq:def_covariance)) is a measure of the correlation between
# two stochastic variables, the above defined sample covariance is a
# measure of the sequential correlation between succeeding measurements
# of a sample.
#
# These quantities, being known experimental values, differ
# significantly from and must not be confused with the similarly named
# quantities for stochastic variables, mean $\mu_X$, variance $\mathrm{var}(X)$
# and covariance $\mathrm{cov}(X,Y)$.
#
#
#
# ## Statistics, law of large numbers
# The law of large numbers
# states that as the size of our sample grows to infinity, the sample
# mean approaches the true mean $\mu_X^{\phantom X}$ of the chosen PDF:
# $$
# \lim_{n\to\infty}\bar{x}_n = \mu_X^{\phantom X}
# $$
# The sample mean $\bar{x}_n$ works therefore as an estimate of the true
# mean $\mu_X^{\phantom X}$.
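# A short numerical sketch of the law of large numbers (the exponential PDF
# chosen here is just an illustrative example): the absolute deviation of the
# sample mean from the true mean shrinks as the sample size grows.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 3.0   # true mean of the chosen exponential PDF

errors = []
for n in [10, 1000, 100_000]:
    x = rng.exponential(scale=mu, size=n)
    errors.append(abs(np.mean(x) - mu))
print(errors)
```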
#
# What we need to find out is how good an approximation $\bar{x}_n$ is to
# $\mu_X^{\phantom X}$. In any stochastic measurement, an estimated
# mean is of no use to us without a measure of its error. A quantity
# that tells us how well we can reproduce it in another experiment. We
# are therefore interested in the PDF of the sample mean itself. Its
# standard deviation will be a measure of the spread of sample means,
# and we will simply call it the *error* of the sample mean, or
# just sample error, and denote it by $\mathrm{err}_X^{\phantom X}$. In
# practice, we will only be able to produce an *estimate* of the
# sample error since the exact value would require the knowledge of the
# true PDFs behind, which we usually do not have.
#
#
#
#
# ## Statistics, more on sample error
# Let us first take a look at what happens to the sample error as the
# size of the sample grows. In a sample, each of the measurements $x_i$
# can be associated with its own stochastic variable $X_i$. The
# stochastic variable $\overline X_n$ for the sample mean $\bar{x}_n$ is
# then just a linear combination, already familiar to us:
# $$
# \overline X_n = \frac{1}{n}\sum_{i=1}^n X_i
# $$
# All the coefficients are just equal to $1/n$. The PDF of $\overline X_n$,
# denoted by $p_{\overline X_n}(x)$ is the desired PDF of the sample
# means.
#
#
#
# ## Statistics
# The probability density of obtaining a sample mean $\bar x_n$
# is the product of probabilities of obtaining arbitrary values $x_1,
# x_2,\dots,x_n$ with the constraint that the mean of the set $\{x_i\}$
# is $\bar x_n$:
# $$
# p_{\overline X_n}(x) = \int p_X^{\phantom X}(x_1)\cdots
# \int p_X^{\phantom X}(x_n)\
# \delta\!\left(x - \frac{x_1+x_2+\dots+x_n}{n}\right)dx_n \cdots dx_1
# $$
# And in particular we are interested in its variance $\mathrm{var}(\overline X_n)$.
#
#
#
#
#
# ## Statistics, central limit theorem
# It is generally not possible to express $p_{\overline X_n}(x)$ in a
# closed form given an arbitrary PDF $p_X^{\phantom X}$ and a number
# $n$. But for the limit $n\to\infty$ it is possible to make an
# approximation. The very important result is called *the central limit theorem*. It tells us that as $n$ goes to infinity,
# $p_{\overline X_n}(x)$ approaches a Gaussian distribution whose mean
# and variance equal the true mean and variance, $\mu_{X}^{\phantom X}$
# and $\sigma_{X}^{2}$, respectively:
# <!-- Equation labels as ordinary links -->
# <div id="eq:central_limit_gaussian"></div>
#
# $$
# \begin{equation}
# \lim_{n\to\infty} p_{\overline X_n}(x) =
# \left(\frac{n}{2\pi\mathrm{var}(X)}\right)^{1/2}
# e^{-\frac{n(x-\bar x_n)^2}{2\mathrm{var}(X)}}
# \label{eq:central_limit_gaussian} \tag{13}
# \end{equation}
# $$
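# The central limit theorem is easy to demonstrate numerically. The sketch
# below (sample size and number of repetitions are illustrative) draws many
# samples from a decidedly non-Gaussian PDF, the uniform distribution on
# $[0,1)$ with mean $1/2$ and variance $1/12$, and checks that the spread of
# the sample means follows $\mathrm{var}(X)/n$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100        # size of each sample
R = 20_000     # number of independent samples

samples = rng.uniform(size=(R, n))
means = samples.mean(axis=1)

# mean of the sample means vs true mean, and their variance vs var(X)/n
print(means.mean(), means.var(), 1.0 / (12 * n))
```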
# <!-- !split -->
# ## Linking the regression analysis with a statistical interpretation
#
# Finally, we are going to discuss several statistical properties which can be obtained in terms of analytical expressions.
# The
# advantage of doing linear regression is that we actually end up with
# analytical expressions for several statistical quantities.
# Standard least squares and Ridge regression allow us to
# derive quantities like the variance and other expectation values in a
# rather straightforward way.
#
#
# It is assumed that $\varepsilon_i
# \sim \mathcal{N}(0, \sigma^2)$ and the $\varepsilon_{i}$ are
# independent, i.e.:
# $$
# \begin{align*}
# \mbox{Cov}(\varepsilon_{i_1},
# \varepsilon_{i_2}) & = \left\{ \begin{array}{lcc} \sigma^2 & \mbox{if}
# & i_1 = i_2, \\ 0 & \mbox{if} & i_1 \not= i_2. \end{array} \right.
# \end{align*}
# $$
# The randomness of $\varepsilon_i$ implies that
# $\mathbf{y}_i$ is also a random variable. In particular,
# $\mathbf{y}_i$ is normally distributed, because $\varepsilon_i \sim
# \mathcal{N}(0, \sigma^2)$ and $\mathbf{X}_{i,\ast} \, \boldsymbol{\beta}$ is a
# non-random scalar. To specify the parameters of the distribution of
# $\mathbf{y}_i$ we need to calculate its first two moments.
#
# Recall that $\boldsymbol{X}$ is a matrix of dimensionality $n\times p$. The
# notation above $\mathbf{X}_{i,\ast}$ means that we are looking at
# row number $i$, the product $\mathbf{X}_{i,\ast}\,\boldsymbol{\beta}$ summing over all $p$ columns.
#
#
# ## Assumptions made
#
# The assumption we have made here can be summarized as (and this is going to be useful when we discuss the bias-variance trade off)
# that there exists a function $f(\boldsymbol{x})$ and a normally distributed error $\boldsymbol{\varepsilon}\sim \mathcal{N}(0, \sigma^2)$
# which describe our data
# $$
# \boldsymbol{y} = f(\boldsymbol{x})+\boldsymbol{\varepsilon}
# $$
# We approximate this function with our model from the solution of the linear regression equations, that is our
# function $f$ is approximated by $\boldsymbol{\tilde{y}}$ where we want to minimize $(\boldsymbol{y}-\boldsymbol{\tilde{y}})^2$, our MSE, with
# $$
# \boldsymbol{\tilde{y}} = \boldsymbol{X}\boldsymbol{\beta}.
# $$
# ## Expectation value and variance
#
# We can calculate the expectation value of $\boldsymbol{y}$ for a given element $i$
# $$
# \begin{align*}
# \mathbb{E}(y_i) & =
# \mathbb{E}(\mathbf{X}_{i, \ast} \, \boldsymbol{\beta}) + \mathbb{E}(\varepsilon_i)
# \, \, \, = \, \, \, \mathbf{X}_{i, \ast} \, \boldsymbol{\beta},
# \end{align*}
# $$
# while
# its variance is
# $$
# \begin{align*} \mbox{Var}(y_i) & = \mathbb{E} \{ [y_i
# - \mathbb{E}(y_i)]^2 \} \, \, \, = \, \, \, \mathbb{E} ( y_i^2 ) -
# [\mathbb{E}(y_i)]^2 \\ & = \mathbb{E} [ ( \mathbf{X}_{i, \ast} \,
# \beta + \varepsilon_i )^2] - ( \mathbf{X}_{i, \ast} \, \boldsymbol{\beta})^2 \\ &
# = \mathbb{E} [ ( \mathbf{X}_{i, \ast} \, \boldsymbol{\beta})^2 + 2 \varepsilon_i
# \mathbf{X}_{i, \ast} \, \boldsymbol{\beta} + \varepsilon_i^2 ] - ( \mathbf{X}_{i,
# \ast} \, \beta)^2 \\ & = ( \mathbf{X}_{i, \ast} \, \boldsymbol{\beta})^2 + 2
# \mathbb{E}(\varepsilon_i) \mathbf{X}_{i, \ast} \, \boldsymbol{\beta} +
# \mathbb{E}(\varepsilon_i^2 ) - ( \mathbf{X}_{i, \ast} \, \boldsymbol{\beta})^2
# \\ & = \mathbb{E}(\varepsilon_i^2 ) \, \, \, = \, \, \,
# \mbox{Var}(\varepsilon_i) \, \, \, = \, \, \, \sigma^2.
# \end{align*}
# $$
# Hence, $y_i \sim \mathcal{N}( \mathbf{X}_{i, \ast} \, \boldsymbol{\beta}, \sigma^2)$, that is $\boldsymbol{y}$ follows a normal distribution with
# mean value $\boldsymbol{X}\boldsymbol{\beta}$ and variance $\sigma^2$ (not to be confused with the singular values of the SVD).
#
# ## Expectation value and variance for $\boldsymbol{\beta}$
#
# With the OLS expressions for the parameters $\boldsymbol{\beta}$ we can evaluate the expectation value
# $$
# \mathbb{E}(\boldsymbol{\beta}) = \mathbb{E}[ (\mathbf{X}^{T} \mathbf{X})^{-1}\mathbf{X}^{T} \mathbf{Y}]=(\mathbf{X}^{T} \mathbf{X})^{-1}\mathbf{X}^{T} \mathbb{E}[ \mathbf{Y}]=(\mathbf{X}^{T} \mathbf{X})^{-1} \mathbf{X}^{T}\mathbf{X}\boldsymbol{\beta}=\boldsymbol{\beta}.
# $$
# This means that the estimator of the regression parameters is unbiased.
#
# We can also calculate the variance
#
# The variance of $\boldsymbol{\beta}$ is
# $$
# \begin{eqnarray*}
# \mbox{Var}(\boldsymbol{\beta}) & = & \mathbb{E} \{ [\boldsymbol{\beta} - \mathbb{E}(\boldsymbol{\beta})] [\boldsymbol{\beta} - \mathbb{E}(\boldsymbol{\beta})]^{T} \}
# \\
# & = & \mathbb{E} \{ [(\mathbf{X}^{T} \mathbf{X})^{-1} \, \mathbf{X}^{T} \mathbf{Y} - \boldsymbol{\beta}] \, [(\mathbf{X}^{T} \mathbf{X})^{-1} \, \mathbf{X}^{T} \mathbf{Y} - \boldsymbol{\beta}]^{T} \}
# \\
# % & = & \mathbb{E} \{ [(\mathbf{X}^{T} \mathbf{X})^{-1} \, \mathbf{X}^{T} \mathbf{Y}] \, [(\mathbf{X}^{T} \mathbf{X})^{-1} \, \mathbf{X}^{T} \mathbf{Y}]^{T} \} - \boldsymbol{\beta} \, \boldsymbol{\beta}^{T}
# % \\
# % & = & \mathbb{E} \{ (\mathbf{X}^{T} \mathbf{X})^{-1} \, \mathbf{X}^{T} \mathbf{Y} \, \mathbf{Y}^{T} \, \mathbf{X} \, (\mathbf{X}^{T} \mathbf{X})^{-1} \} - \boldsymbol{\beta} \, \boldsymbol{\beta}^{T}
# % \\
# & = & (\mathbf{X}^{T} \mathbf{X})^{-1} \, \mathbf{X}^{T} \, \mathbb{E} \{ \mathbf{Y} \, \mathbf{Y}^{T} \} \, \mathbf{X} \, (\mathbf{X}^{T} \mathbf{X})^{-1} - \boldsymbol{\beta} \, \boldsymbol{\beta}^{T}
# \\
# & = & (\mathbf{X}^{T} \mathbf{X})^{-1} \, \mathbf{X}^{T} \, \{ \mathbf{X} \, \boldsymbol{\beta} \, \boldsymbol{\beta}^{T} \, \mathbf{X}^{T} + \sigma^2 \mathbf{I}_{nn} \} \, \mathbf{X} \, (\mathbf{X}^{T} \mathbf{X})^{-1} - \boldsymbol{\beta} \, \boldsymbol{\beta}^{T}
# % \\
# % & = & (\mathbf{X}^T \mathbf{X})^{-1} \, \mathbf{X}^T \, \mathbf{X} \, \boldsymbol{\beta} \, \boldsymbol{\beta}^T \, \mathbf{X}^T \, \mathbf{X} \, (\mathbf{X}^T % \mathbf{X})^{-1}
# % \\
# % & & + \, \, \sigma^2 \, (\mathbf{X}^T \mathbf{X})^{-1} \, \mathbf{X}^T \, \mathbf{X} \, (\mathbf{X}^T \mathbf{X})^{-1} - \boldsymbol{\beta} \boldsymbol{\beta}^T
# \\
# & = & \boldsymbol{\beta} \, \boldsymbol{\beta}^{T} + \sigma^2 \, (\mathbf{X}^{T} \mathbf{X})^{-1} - \boldsymbol{\beta} \, \boldsymbol{\beta}^{T}
# \, \, \, = \, \, \, \sigma^2 \, (\mathbf{X}^{T} \mathbf{X})^{-1},
# \end{eqnarray*}
# $$
# where we have used that $\mathbb{E} (\mathbf{Y} \mathbf{Y}^{T}) =
# \mathbf{X} \, \boldsymbol{\beta} \, \boldsymbol{\beta}^{T} \, \mathbf{X}^{T} +
# \sigma^2 \, \mathbf{I}_{nn}$. From $\mbox{Var}(\boldsymbol{\beta}) = \sigma^2
# \, (\mathbf{X}^{T} \mathbf{X})^{-1}$, one obtains an estimate of the
# variance of the estimate of the $j$-th regression coefficient:
# $\sigma^2(\hat{\beta}_j) = \sigma^2 [(\mathbf{X}^{T} \mathbf{X})^{-1}]_{jj}$, that is, a standard
# deviation $\sigma \sqrt{[(\mathbf{X}^{T} \mathbf{X})^{-1}]_{jj}}$. This may be used to
# construct a confidence interval for the estimates.
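# The expression $\mbox{Var}(\boldsymbol{\beta}) = \sigma^2 (\mathbf{X}^{T}\mathbf{X})^{-1}$
# can be verified with a small Monte Carlo sketch (the design matrix, true
# parameters and noise level below are illustrative assumptions): refit the
# OLS estimator over many noise realizations and compare the empirical
# covariance of the estimates with the analytical one.

```python
import numpy as np

rng = np.random.default_rng(10)
n, p = 100, 3
sigma = 0.5
X = rng.normal(size=(n, p))              # fixed design matrix
beta_true = np.array([1.0, -2.0, 0.5])

XtXinv = np.linalg.inv(X.T @ X)
cov_analytic = sigma**2 * XtXinv         # analytical covariance of the OLS estimator

# Monte Carlo: refit over many independent noise realizations
R = 5000
betas = np.empty((R, p))
for r in range(R):
    y = X @ beta_true + sigma * rng.normal(size=n)
    betas[r] = XtXinv @ X.T @ y

cov_mc = np.cov(betas.T)
print(np.max(np.abs(cov_mc - cov_analytic)))
```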
#
#
# In a similar way, we can obtain analytical expressions for say the
# expectation values of the parameters $\boldsymbol{\beta}$ and their variance
# when we employ Ridge regression, allowing us again to define a confidence interval.
#
# It is rather straightforward to show that
# $$
# \mathbb{E} \big[ \boldsymbol{\beta}^{\mathrm{Ridge}} \big]=(\mathbf{X}^{T} \mathbf{X} + \lambda \mathbf{I}_{pp})^{-1} (\mathbf{X}^{T} \mathbf{X})\boldsymbol{\beta}^{\mathrm{OLS}}.
# $$
# We see clearly that
# $\mathbb{E} \big[ \boldsymbol{\beta}^{\mathrm{Ridge}} \big] \not= \boldsymbol{\beta}^{\mathrm{OLS}}$ for any $\lambda > 0$. We say then that the ridge estimator is biased.
#
# We can also compute the variance as
# $$
# \mbox{Var}[\boldsymbol{\beta}^{\mathrm{Ridge}}]=\sigma^2[ \mathbf{X}^{T} \mathbf{X} + \lambda \mathbf{I} ]^{-1} \mathbf{X}^{T} \mathbf{X} \{ [ \mathbf{X}^{T} \mathbf{X} + \lambda \mathbf{I} ]^{-1}\}^{T},
# $$
# and it is easy to see that if the parameter $\lambda$ goes to infinity then the variance of Ridge parameters $\boldsymbol{\beta}$ goes to zero.
#
# With this, we can compute the difference
# $$
# \mbox{Var}[\boldsymbol{\beta}^{\mathrm{OLS}}]-\mbox{Var}(\boldsymbol{\beta}^{\mathrm{Ridge}})=\sigma^2 [ \mathbf{X}^{T} \mathbf{X} + \lambda \mathbf{I} ]^{-1}[ 2\lambda\mathbf{I} + \lambda^2 (\mathbf{X}^{T} \mathbf{X})^{-1} ] \{ [ \mathbf{X}^{T} \mathbf{X} + \lambda \mathbf{I} ]^{-1}\}^{T}.
# $$
# The difference is non-negative definite since each component of the
# matrix product is non-negative definite.
# This means the variance we obtain with the standard OLS will always for $\lambda > 0$ be larger than the variance of $\boldsymbol{\beta}$ obtained with the Ridge estimator. This has interesting consequences when we discuss the so-called bias-variance trade-off below.
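# That the variance difference is non-negative definite can be checked
# numerically by computing the eigenvalues of
# $\mbox{Var}[\boldsymbol{\beta}^{\mathrm{OLS}}]-\mbox{Var}[\boldsymbol{\beta}^{\mathrm{Ridge}}]$
# for a random design (a sketch; the design, noise level and $\lambda$ are
# illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(11)
n, p = 100, 3
sigma = 0.5
lam = 10.0
X = rng.normal(size=(n, p))

XtX = X.T @ X
var_ols = sigma**2 * np.linalg.inv(XtX)
A = np.linalg.inv(XtX + lam * np.eye(p))
var_ridge = sigma**2 * A @ XtX @ A.T

# the eigenvalues of the difference should all be non-negative
eigvals = np.linalg.eigvalsh(var_ols - var_ridge)
print(eigvals)
```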
#
#
# ## Resampling methods
#
# ## Resampling methods, basic overview
#
# With all these analytical equations for both the OLS and Ridge
# regression, we will now outline how to assess a given model. This will
# lead us to a discussion of the so-called bias-variance tradeoff (see
# below) and so-called resampling methods.
#
# One of the quantities we have discussed as a way to measure errors is
# the mean-squared error (MSE), mainly used for fitting of continuous
# functions. Another choice is the absolute error.
#
# In the discussions below we will focus on the MSE and in particular since we will split the data into test and training data,
# we discuss the
# 1. prediction error or simply the **test error**, where we have a fixed training set and the test error is the MSE arising from the data reserved for testing. We discuss also the
#
# 2. training error $\mathrm{Err_{Train}}$, which is the average loss over the training data.
#
# As our model becomes more and more complex, more of the training data
# tends to be used. The model may then adapt to more complicated
# structures in the data. This may lead to a decrease in the bias (see
# below for a code example) and an increase in the variance of the
# test error. For a certain level of complexity the test error will
# reach a minimum, before starting to increase again. The training error,
# on the other hand, reaches a saturation.
#
#
#
#
# ## Resampling methods: Jackknife and Bootstrap
#
# Two famous
# resampling methods are the **independent bootstrap** and **the jackknife**.
#
# The jackknife is a special case of the independent bootstrap. Still, the jackknife was made
# popular prior to the independent bootstrap, and as its popularity
# soared, new variants emerged, such as **the dependent bootstrap**.
#
# The Jackknife and independent bootstrap work for
# independent, identically distributed random variables.
# If these conditions are not
# satisfied, the methods will fail. Yet, it should be said that if the data are
# independent, identically distributed, and we only want to estimate the
# variance of $\overline{X}$ (which often is the case), then there is no
# need for bootstrapping.
#
# ## Resampling methods: Jackknife
#
# The Jackknife works by making many replicas of the estimator $\widehat{\theta}$.
# The jackknife is a resampling method where we systematically leave out one observation from the vector of observed values $\boldsymbol{x} = (x_1,x_2,\cdots,x_n)$.
# Let $\boldsymbol{x}_i$ denote the vector
# $$
# \boldsymbol{x}_i = (x_1,x_2,\cdots,x_{i-1},x_{i+1},\cdots,x_n),
# $$
# which equals the vector $\boldsymbol{x}$ with the exception that observation
# number $i$ is left out. Using this notation, define
# $\widehat{\theta}_i$ to be the estimator
# $\widehat{\theta}$ computed using $\boldsymbol{x}_i$.
#
#
# ## Jackknife code example
# +
from numpy import *
from numpy.random import randint, randn
from time import time
def jackknife(data, stat):
    n = len(data); t = zeros(n); t0 = time()
    ## 'jackknifing' by leaving out an observation for each i
    for i in range(n):
        t[i] = stat(delete(data, i))
    # analysis: the jackknife bias estimate is (n-1)*(mean(t) - stat(data)),
    # and the jackknife variance estimate is (n-1)*var(t)
    print("Runtime: %g sec" % (time()-t0)); print("Jackknife Statistics :")
    print("original           bias      std. error")
    print("%8g %14g %15g" % (stat(data), (n-1)*(mean(t)-stat(data)), ((n-1)*var(t))**.5))
    return t

# Returns mean of data samples
def stat(data):
    return mean(data)
mu, sigma = 100, 15
datapoints = 10000
x = mu + sigma*random.randn(datapoints)
# jackknife returns the leave-one-out replicas of the statistic
t = jackknife(x, stat)
# -
# ## Resampling methods: Bootstrap
# Bootstrapping is a nonparametric approach to statistical inference
# that substitutes computation for more traditional distributional
# assumptions and asymptotic results. Bootstrapping offers a number of
# advantages:
# 1. The bootstrap is quite general, although there are some cases in which it fails.
#
# 2. Because it does not require distributional assumptions (such as normally distributed errors), the bootstrap can provide more accurate inferences when the data are not well behaved or when the sample size is small.
#
# 3. It is possible to apply the bootstrap to statistics with sampling distributions that are difficult to derive, even asymptotically.
#
# 4. It is relatively simple to apply the bootstrap to complex data-collection plans (such as stratified and clustered samples).
#
#
#
#
# ## Resampling methods: Bootstrap background
#
# Since $\widehat{\theta} = \widehat{\theta}(\boldsymbol{X})$ is a function of random variables,
# $\widehat{\theta}$ itself must be a random variable. Thus it has
# a pdf, call this function $p(\boldsymbol{t})$. The aim of the bootstrap is to
# estimate $p(\boldsymbol{t})$ by the relative frequency of
# $\widehat{\theta}$. You can think of this as using a histogram
# in the place of $p(\boldsymbol{t})$. If the relative frequency closely
# resembles $p(\boldsymbol{t})$, then using numerics, it is straightforward to
# estimate all the interesting parameters of $p(\boldsymbol{t})$ using point
# estimators.
#
#
# ## Resampling methods: More Bootstrap background
#
# In the case that $\widehat{\theta}$ has
# more than one component, and the components are independent, we use the
# same estimator on each component separately. If the probability
# density function of $X_i$, $p(x)$, had been known, then it would have
# been straightforward to do this by:
# 1. Drawing lots of numbers from $p(x)$, suppose we call one such set of numbers $(X_1^*, X_2^*, \cdots, X_n^*)$.
#
# 2. Then using these numbers, we could compute a replica of $\widehat{\theta}$ called $\widehat{\theta}^*$.
#
# By repeated use of (1) and (2), many
# estimates of $\widehat{\theta}$ could have been obtained. The
# idea is to use the relative frequency of $\widehat{\theta}^*$
# (think of a histogram) as an estimate of $p(\boldsymbol{t})$.
#
# ## Resampling methods: Bootstrap approach
#
# But
# unless there is enough information available about the process that
# generated $X_1,X_2,\cdots,X_n$, $p(x)$ is in general
# unknown. Therefore, [Efron in 1979](https://projecteuclid.org/euclid.aos/1176344552) asked the
# question: What if we replace $p(x)$ by the relative frequency
# of the observation $X_i$; if we draw observations in accordance with
# the relative frequency of the observations, will we obtain the same
# result in some asymptotic sense? The answer is yes.
#
#
# Instead of generating the histogram for the relative
# frequency of the observation $X_i$, just draw the values
# $(X_1^*,X_2^*,\cdots,X_n^*)$ with replacement from the vector
# $\boldsymbol{X}$.
#
# ## Resampling methods: Bootstrap steps
#
# The independent bootstrap works like this:
#
# 1. Draw with replacement $n$ numbers for the observed variables $\boldsymbol{x} = (x_1,x_2,\cdots,x_n)$.
#
# 2. Define a vector $\boldsymbol{x}^*$ containing the values which were drawn from $\boldsymbol{x}$.
#
# 3. Using the vector $\boldsymbol{x}^*$ compute $\widehat{\theta}^*$ by evaluating $\widehat \theta$ under the observations $\boldsymbol{x}^*$.
#
# 4. Repeat this process $k$ times.
#
# When you are done, you can draw a histogram of the relative frequency
# of $\widehat \theta^*$. This is your estimate of the probability
# distribution $p(t)$. Using this probability distribution you can
# estimate any statistics thereof. In principle you never draw the
# histogram of the relative frequency of $\widehat{\theta}^*$. Instead
# you use the estimators corresponding to the statistic of interest. For
# example, if you are interested in estimating the variance of $\widehat
# \theta$, apply the estimator $\widehat \sigma^2$ to the values
# $\widehat \theta ^*$.
#
#
# ## Code example for the Bootstrap method
#
# The following code starts with a Gaussian distribution with mean value
# $\mu =100$ and standard deviation $\sigma=15$. We use this to generate the data
# used in the bootstrap analysis. The bootstrap analysis returns a data
# set after a given number of bootstrap operations (as many as we have
# data points). This data set consists of estimated mean values for each
# bootstrap operation. The histogram generated by the bootstrap method
# shows that the distribution for these mean values is also a Gaussian,
# centered around the mean value $\mu=100$ but with standard deviation
# $\sigma/\sqrt{n}$, where $n$ is the number of bootstrap samples (in
# this case the same as the number of original data points). The value
# of the standard deviation is what we expect from the central limit
# theorem.
# +
from numpy import *
from numpy.random import randint, randn
from time import time
# scipy.stats.norm replaces the removed matplotlib.mlab.normpdf
from scipy.stats import norm
import matplotlib.pyplot as plt

# Returns mean of bootstrap samples
def stat(data):
    return mean(data)

# Bootstrap algorithm
def bootstrap(data, statistic, R):
    t = zeros(R); n = len(data); t0 = time()
    # non-parametric bootstrap
    for i in range(R):
        t[i] = statistic(data[randint(0,n,n)])
    # analysis
    print("Runtime: %g sec" % (time()-t0)); print("Bootstrap Statistics :")
    print("original           bias      std. error")
    print("%8g %14g %15g" % (statistic(data), mean(t) - statistic(data), std(t)))
    return t

mu, sigma = 100, 15
datapoints = 10000
x = mu + sigma*random.randn(datapoints)
# bootstrap returns the bootstrap replicas of the statistic
t = bootstrap(x, stat, datapoints)

# the histogram of the bootstrapped data ('density' replaces the removed 'normed')
n, binsboot, patches = plt.hist(t, 50, density=True, facecolor='red', alpha=0.75)

# add a 'best fit' line
y = norm.pdf(binsboot, mean(t), std(t))
lt = plt.plot(binsboot, y, 'r--', linewidth=1)
plt.xlabel('Smarts')
plt.ylabel('Probability')
plt.axis([99.5, 100.6, 0, 3.0])
plt.grid(True)
plt.show()
# -
# <!-- !split -->
# ## Various steps in cross-validation
#
# When the repetitive splitting of the data set is done randomly,
# samples may accidentally end up in a vast majority of the splits in
# either training or test set. Such samples may have an unbalanced
# influence on either model building or prediction evaluation. To avoid
# this $k$-fold cross-validation structures the data splitting. The
# samples are divided into $k$ more or less equally sized exhaustive and
# mutually exclusive subsets. In turn (at each split) one of these
# subsets plays the role of the test set while the union of the
# remaining subsets constitutes the training set. Such a splitting
# warrants a balanced representation of each sample in both training and
# test set over the splits. Still the division into the $k$ subsets
# involves a degree of randomness. This may be fully excluded when
# choosing $k=n$. This particular case is referred to as leave-one-out
# cross-validation (LOOCV).
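# A minimal LOOCV sketch using scikit-learn's **LeaveOneOut** splitter (the
# data and the small ridge penalty are illustrative assumptions); with $k=n$
# every sample plays the role of the test set exactly once, so the splitting
# involves no randomness:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.preprocessing import PolynomialFeatures

np.random.seed(3155)
n = 30
x = np.random.randn(n)
y = 3 * x**2 + np.random.randn(n)
X = PolynomialFeatures(degree=2).fit_transform(x[:, np.newaxis])

# LOOCV: one fold per sample, n fits in total
scores = cross_val_score(Ridge(alpha=1e-3), X, y,
                         scoring='neg_mean_squared_error', cv=LeaveOneOut())
print(len(scores), np.mean(-scores))
```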
#
# <!-- !split -->
# ## How to set up the cross-validation for Ridge and/or Lasso
#
# * Define a range of interest for the penalty parameter.
#
# * Divide the data set into training and test set comprising samples $\{1, \ldots, n\} \setminus i$ and $\{ i \}$, respectively.
#
# * Fit the linear regression model by means of ridge estimation for each $\lambda$ in the grid using the training set, and the corresponding estimate of the error variance $\boldsymbol{\sigma}_{-i}^2(\lambda)$, as
# $$
# \begin{align*}
# \boldsymbol{\beta}_{-i}(\lambda) & = ( \boldsymbol{X}_{-i, \ast}^{T}
# \boldsymbol{X}_{-i, \ast} + \lambda \boldsymbol{I}_{pp})^{-1}
# \boldsymbol{X}_{-i, \ast}^{T} \boldsymbol{y}_{-i}
# \end{align*}
# $$
# * Evaluate the prediction performance of these models on the test set by $\log\{L[y_i, \boldsymbol{X}_{i, \ast}; \boldsymbol{\beta}_{-i}(\lambda), \boldsymbol{\sigma}_{-i}^2(\lambda)]\}$. Or, by the prediction error $|y_i - \boldsymbol{X}_{i, \ast} \boldsymbol{\beta}_{-i}(\lambda)|$, the relative error, the error squared or the R2 score function.
#
# * Repeat the first three steps such that each sample plays the role of the test set once.
#
# * Average the prediction performances of the test sets at each grid point of the penalty parameter. This is an estimate of the prediction performance of the model corresponding to this value of the penalty parameter on novel data. It is defined as
# $$
# \begin{align*}
# \frac{1}{n} \sum_{i = 1}^n \log\{L[y_i, \mathbf{X}_{i, \ast}; \boldsymbol{\beta}_{-i}(\lambda), \boldsymbol{\sigma}_{-i}^2(\lambda)]\}.
# \end{align*}
# $$
# ## Cross-validation in brief
#
# For the various values of $k$
#
# 1. Shuffle the dataset randomly.
#
# 2. Split the dataset into $k$ groups.
#
# 3. For each unique group:
#
# a. Decide which group to use as set for test data
#
# b. Take the remaining groups as a training data set
#
# c. Fit a model on the training set and evaluate it on the test set
#
# d. Retain the evaluation score and discard the model
#
#
# 4. Summarize the model using the sample of model evaluation scores
#
# ## Code Example for Cross-validation and $k$-fold Cross-validation
#
# The code here uses Ridge regression with cross-validation (CV) resampling and $k$-fold CV in order to fit a specific polynomial.
# +
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import KFold
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import PolynomialFeatures
# A seed just to ensure that the random numbers are the same for every run.
# Useful for eventual debugging.
np.random.seed(3155)
# Generate the data.
nsamples = 100
x = np.random.randn(nsamples)
y = 3*x**2 + np.random.randn(nsamples)
## Cross-validation on Ridge regression using KFold only
# Decide degree on polynomial to fit
poly = PolynomialFeatures(degree = 6)
# Decide which values of lambda to use
nlambdas = 500
lambdas = np.logspace(-3, 5, nlambdas)
# Initialize a KFold instance
k = 5
kfold = KFold(n_splits = k)
# Perform the cross-validation to estimate MSE
scores_KFold = np.zeros((nlambdas, k))
i = 0
for lmb in lambdas:
    ridge = Ridge(alpha = lmb)
    j = 0
    for train_inds, test_inds in kfold.split(x):
        xtrain = x[train_inds]
        ytrain = y[train_inds]
        xtest = x[test_inds]
        ytest = y[test_inds]
        Xtrain = poly.fit_transform(xtrain[:, np.newaxis])
        ridge.fit(Xtrain, ytrain[:, np.newaxis])
        Xtest = poly.fit_transform(xtest[:, np.newaxis])
        ypred = ridge.predict(Xtest)
        scores_KFold[i,j] = np.sum((ypred - ytest[:, np.newaxis])**2)/np.size(ypred)
        j += 1
    i += 1
estimated_mse_KFold = np.mean(scores_KFold, axis = 1)
## Cross-validation using cross_val_score from sklearn along with KFold
# kfold is an instance initialized above as:
# kfold = KFold(n_splits = k)
estimated_mse_sklearn = np.zeros(nlambdas)
i = 0
for lmb in lambdas:
    ridge = Ridge(alpha = lmb)
    X = poly.fit_transform(x[:, np.newaxis])
    estimated_mse_folds = cross_val_score(ridge, X, y[:, np.newaxis], scoring='neg_mean_squared_error', cv=kfold)
    # cross_val_score returns an array containing the estimated negative MSE for every fold.
    # We take the mean of this array in order to get an estimate of the MSE of the model.
    estimated_mse_sklearn[i] = np.mean(-estimated_mse_folds)
    i += 1
## Plot and compare the slightly different ways to perform cross-validation
plt.figure()
plt.plot(np.log10(lambdas), estimated_mse_sklearn, label = 'cross_val_score')
plt.plot(np.log10(lambdas), estimated_mse_KFold, 'r--', label = 'KFold')
plt.xlabel('log10(lambda)')
plt.ylabel('mse')
plt.legend()
plt.show()
# -
# ## How to decide upon the best model?
#
# ## The bias-variance tradeoff
#
#
# We will discuss the bias-variance tradeoff in the context of
# continuous predictions such as regression. However, many of the
# intuitions and ideas discussed here also carry over to classification
# tasks. Consider a dataset $\mathcal{L}$ consisting of the data
# $\mathbf{X}_\mathcal{L}=\{(y_j, \boldsymbol{x}_j), j=0\ldots n-1\}$.
#
# Let us assume that the true data is generated from a noisy model
# $$
# \boldsymbol{y}=f(\boldsymbol{x}) + \boldsymbol{\epsilon}
# $$
# where $\epsilon$ is normally distributed with mean zero and variance $\sigma^2$.
#
# In our derivation of the ordinary least squares method we defined then
# an approximation to the function $f$ in terms of the parameters
# $\boldsymbol{\beta}$ and the design matrix $\boldsymbol{X}$ which embody our model,
# that is $\boldsymbol{\tilde{y}}=\boldsymbol{X}\boldsymbol{\beta}$.
#
# Thereafter we found the parameters $\boldsymbol{\beta}$ by optimizing the mean-squared error via the so-called cost function
# $$
# C(\boldsymbol{X},\boldsymbol{\beta}) =\frac{1}{n}\sum_{i=0}^{n-1}(y_i-\tilde{y}_i)^2=\mathbb{E}\left[(\boldsymbol{y}-\boldsymbol{\tilde{y}})^2\right].
# $$
# We can rewrite this as
# $$
# \mathbb{E}\left[(\boldsymbol{y}-\boldsymbol{\tilde{y}})^2\right]=\frac{1}{n}\sum_i(f_i-\mathbb{E}\left[\boldsymbol{\tilde{y}}\right])^2+\frac{1}{n}\sum_i(\tilde{y}_i-\mathbb{E}\left[\boldsymbol{\tilde{y}}\right])^2+\sigma^2.
# $$
# The first term represents the square of the bias of the learning
# method, which can be thought of as the error caused by the simplifying
# assumptions built into the method. The second term represents the
# variance of the chosen model, and finally the last term is the variance of
# the error $\boldsymbol{\epsilon}$.
#
# To derive this equation, we need to recall that the variances of $\boldsymbol{y}$ and $\boldsymbol{\epsilon}$ are both equal to $\sigma^2$. The mean value of $\boldsymbol{\epsilon}$ is by definition equal to zero. Furthermore, the function $f$ is not a stochastic variable; the same applies to $\boldsymbol{\tilde{y}}$.
# We use a more compact notation in terms of the expectation value
# $$
# \mathbb{E}\left[(\boldsymbol{y}-\boldsymbol{\tilde{y}})^2\right]=\mathbb{E}\left[(\boldsymbol{f}+\boldsymbol{\epsilon}-\boldsymbol{\tilde{y}})^2\right],
# $$
# and adding and subtracting $\mathbb{E}\left[\boldsymbol{\tilde{y}}\right]$ we get
# $$
# \mathbb{E}\left[(\boldsymbol{y}-\boldsymbol{\tilde{y}})^2\right]=\mathbb{E}\left[(\boldsymbol{f}+\boldsymbol{\epsilon}-\boldsymbol{\tilde{y}}+\mathbb{E}\left[\boldsymbol{\tilde{y}}\right]-\mathbb{E}\left[\boldsymbol{\tilde{y}}\right])^2\right],
# $$
# which, using the abovementioned expectation values, can be rewritten as
# $$
# \mathbb{E}\left[(\boldsymbol{y}-\boldsymbol{\tilde{y}})^2\right]=\mathbb{E}\left[(\boldsymbol{f}-\mathbb{E}\left[\boldsymbol{\tilde{y}}\right])^2\right]+\mathrm{Var}\left[\boldsymbol{\tilde{y}}\right]+\sigma^2,
# $$
# that is the rewriting in terms of the so-called bias, the variance of the model $\boldsymbol{\tilde{y}}$ and the variance of $\boldsymbol{\epsilon}$.
#
#
#
#
#
# ## Example code for Bias-Variance tradeoff
# +
import matplotlib.pyplot as plt
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge, Lasso
from sklearn.preprocessing import PolynomialFeatures
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.utils import resample
np.random.seed(2018)
n = 500
n_boostraps = 100
degree = 18 # A quite high value, just to show.
noise = 0.1
# Make data set.
x = np.linspace(-1, 3, n).reshape(-1, 1)
y = np.exp(-x**2) + 1.5 * np.exp(-(x-2)**2) + np.random.normal(0, 0.1, x.shape)
# Hold out some test data that is never used in training.
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2)
# Combine x transformation and model into one operation.
# Not necessary, but convenient.
model = make_pipeline(PolynomialFeatures(degree=degree), LinearRegression(fit_intercept=False))
# The following (m x n_bootstraps) matrix holds the column vectors y_pred
# for each bootstrap iteration.
y_pred = np.empty((y_test.shape[0], n_boostraps))
for i in range(n_boostraps):
    x_, y_ = resample(x_train, y_train)
    # Evaluate the new model on the same test data each time.
    y_pred[:, i] = model.fit(x_, y_).predict(x_test).ravel()
# Note: Expectations and variances taken w.r.t. different training
# data sets, hence the axis=1. Subsequent means are taken across the test data
# set in order to obtain a total value, but before this we have error/bias/variance
# calculated per data point in the test set.
# Note 2: The use of keepdims=True is important in the calculation of bias as this
# maintains the column vector form. Dropping this yields very unexpected results.
error = np.mean( np.mean((y_test - y_pred)**2, axis=1, keepdims=True) )
bias = np.mean( (y_test - np.mean(y_pred, axis=1, keepdims=True))**2 )
variance = np.mean( np.var(y_pred, axis=1, keepdims=True) )
print('Error:', error)
print('Bias^2:', bias)
print('Var:', variance)
print('{} >= {} + {} = {}'.format(error, bias, variance, bias+variance))
plt.plot(x[::5, :], y[::5, :], label='f(x)')
plt.scatter(x_test, y_test, label='Data points')
plt.scatter(x_test, np.mean(y_pred, axis=1), label='Pred')
plt.legend()
plt.show()
# -
# ## Understanding what happens
# +
import matplotlib.pyplot as plt
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge, Lasso
from sklearn.preprocessing import PolynomialFeatures
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.utils import resample
np.random.seed(2018)
n = 40
n_boostraps = 100
maxdegree = 14
# Make data set.
x = np.linspace(-3, 3, n).reshape(-1, 1)
y = np.exp(-x**2) + 1.5 * np.exp(-(x-2)**2)+ np.random.normal(0, 0.1, x.shape)
error = np.zeros(maxdegree)
bias = np.zeros(maxdegree)
variance = np.zeros(maxdegree)
polydegree = np.zeros(maxdegree)
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2)
for degree in range(maxdegree):
    model = make_pipeline(PolynomialFeatures(degree=degree), LinearRegression(fit_intercept=False))
    y_pred = np.empty((y_test.shape[0], n_boostraps))
    for i in range(n_boostraps):
        x_, y_ = resample(x_train, y_train)
        y_pred[:, i] = model.fit(x_, y_).predict(x_test).ravel()
    polydegree[degree] = degree
    error[degree] = np.mean( np.mean((y_test - y_pred)**2, axis=1, keepdims=True) )
    bias[degree] = np.mean( (y_test - np.mean(y_pred, axis=1, keepdims=True))**2 )
    variance[degree] = np.mean( np.var(y_pred, axis=1, keepdims=True) )
    print('Polynomial degree:', degree)
    print('Error:', error[degree])
    print('Bias^2:', bias[degree])
    print('Var:', variance[degree])
    print('{} >= {} + {} = {}'.format(error[degree], bias[degree], variance[degree], bias[degree]+variance[degree]))
plt.plot(polydegree, error, label='Error')
plt.plot(polydegree, bias, label='bias')
plt.plot(polydegree, variance, label='Variance')
plt.legend()
plt.show()
# -
# <!-- !split -->
# ## Summing up
#
#
#
#
# The bias-variance tradeoff summarizes the fundamental tension in
# machine learning, particularly supervised learning, between the
# complexity of a model and the amount of training data needed to train
# it. Since data is often limited, in practice it is often useful to
# use a less-complex model with higher bias, that is a model whose asymptotic
# performance is worse than another model because it is easier to
# train and less sensitive to sampling noise arising from having a
# finite-sized training dataset (smaller variance).
#
#
#
# The above equations tell us that in
# order to minimize the expected test error, we need to select a
# statistical learning method that simultaneously achieves low variance
# and low bias. Note that variance is inherently a nonnegative quantity,
# and squared bias is also nonnegative. Hence, we see that the expected
# test MSE can never lie below $Var(\epsilon)$, the irreducible error.
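# As a quick numerical illustration (our own toy example): even when we fit the correct model family, the test MSE settles at roughly $Var(\epsilon)$ and cannot, on average, drop below it.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

np.random.seed(3155)
n, sigma = 10000, 0.5
x = np.random.randn(n, 1)
y = 2.0 * x[:, 0] + sigma * np.random.randn(n)   # the true model is exactly linear
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.5)
model = LinearRegression().fit(x_train, y_train)
mse = np.mean((y_test - model.predict(x_test)) ** 2)
print(f"test MSE = {mse:.3f}, Var(eps) = {sigma**2:.3f}")
```

# The fitted model is essentially the true one, so bias and variance are negligible and the test MSE is the irreducible error $\sigma^2 = 0.25$.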
#
#
# What do we mean by the variance and bias of a statistical learning
# method? The variance refers to the amount by which our model would change if we
# estimated it using a different training data set. Since the training
# data are used to fit the statistical learning method, different
# training data sets will result in a different estimate. But ideally the
# estimate for our model should not vary too much between training
# sets. However, if a method has high variance then small changes in
# the training data can result in large changes in the model. In general, more
# flexible statistical methods have higher variance.
#
#
# You may also find this recent [article](https://www.pnas.org/content/116/32/15849) of interest.
#
#
# ## More examples on bootstrap and cross-validation and errors
# +
# Common imports
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression, Ridge, Lasso
from sklearn.model_selection import train_test_split
from sklearn.utils import resample
from sklearn.metrics import mean_squared_error
# Where to save the figures and data files
PROJECT_ROOT_DIR = "Results"
FIGURE_ID = "Results/FigureFiles"
DATA_ID = "DataFiles/"
if not os.path.exists(PROJECT_ROOT_DIR):
    os.mkdir(PROJECT_ROOT_DIR)
if not os.path.exists(FIGURE_ID):
    os.makedirs(FIGURE_ID)
if not os.path.exists(DATA_ID):
    os.makedirs(DATA_ID)
def image_path(fig_id):
    return os.path.join(FIGURE_ID, fig_id)
def data_path(dat_id):
    return os.path.join(DATA_ID, dat_id)
def save_fig(fig_id):
    plt.savefig(image_path(fig_id) + ".png", format='png')
infile = open(data_path("EoS.csv"),'r')
# Read the EoS data as csv file and organize the data into two arrays with density and energies
EoS = pd.read_csv(infile, names=('Density', 'Energy'))
EoS['Energy'] = pd.to_numeric(EoS['Energy'], errors='coerce')
EoS = EoS.dropna()
Energies = EoS['Energy']
Density = EoS['Density']
# The design matrix now as function of various polytrops
Maxpolydegree = 30
X = np.zeros((len(Density),Maxpolydegree))
X[:,0] = 1.0
testerror = np.zeros(Maxpolydegree)
trainingerror = np.zeros(Maxpolydegree)
polynomial = np.zeros(Maxpolydegree)
trials = 100
for polydegree in range(1, Maxpolydegree):
    polynomial[polydegree] = polydegree
    for degree in range(polydegree):
        X[:,degree] = Density**(degree/3.0)
    # loop over trials in order to estimate the expectation value of the MSE
    testerror[polydegree] = 0.0
    trainingerror[polydegree] = 0.0
    for samples in range(trials):
        x_train, x_test, y_train, y_test = train_test_split(X, Energies, test_size=0.2)
        model = LinearRegression(fit_intercept=True).fit(x_train, y_train)
        ypred = model.predict(x_train)
        ytilde = model.predict(x_test)
        testerror[polydegree] += mean_squared_error(y_test, ytilde)
        trainingerror[polydegree] += mean_squared_error(y_train, ypred)
    testerror[polydegree] /= trials
    trainingerror[polydegree] /= trials
    print("Degree of polynomial: %3d"% polynomial[polydegree])
    print("Mean squared error on training data: %.8f" % trainingerror[polydegree])
    print("Mean squared error on test data: %.8f" % testerror[polydegree])
plt.plot(polynomial, np.log10(trainingerror), label='Training Error')
plt.plot(polynomial, np.log10(testerror), label='Test Error')
plt.xlabel('Polynomial degree')
plt.ylabel('log10[MSE]')
plt.legend()
plt.show()
# -
# <!-- !split -->
# ## The same example but now with cross-validation
# +
# Common imports
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression, Ridge, Lasso
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
# Where to save the figures and data files
PROJECT_ROOT_DIR = "Results"
FIGURE_ID = "Results/FigureFiles"
DATA_ID = "DataFiles/"
if not os.path.exists(PROJECT_ROOT_DIR):
    os.mkdir(PROJECT_ROOT_DIR)
if not os.path.exists(FIGURE_ID):
    os.makedirs(FIGURE_ID)
if not os.path.exists(DATA_ID):
    os.makedirs(DATA_ID)
def image_path(fig_id):
    return os.path.join(FIGURE_ID, fig_id)
def data_path(dat_id):
    return os.path.join(DATA_ID, dat_id)
def save_fig(fig_id):
    plt.savefig(image_path(fig_id) + ".png", format='png')
infile = open(data_path("EoS.csv"),'r')
# Read the EoS data as csv file and organize the data into two arrays with density and energies
EoS = pd.read_csv(infile, names=('Density', 'Energy'))
EoS['Energy'] = pd.to_numeric(EoS['Energy'], errors='coerce')
EoS = EoS.dropna()
Energies = EoS['Energy']
Density = EoS['Density']
# The design matrix now as function of various polytrops
Maxpolydegree = 30
X = np.zeros((len(Density),Maxpolydegree))
X[:,0] = 1.0
estimated_mse_sklearn = np.zeros(Maxpolydegree)
polynomial = np.zeros(Maxpolydegree)
k = 5
kfold = KFold(n_splits = k)
for polydegree in range(1, Maxpolydegree):
    polynomial[polydegree] = polydegree
    for degree in range(polydegree):
        X[:,degree] = Density**(degree/3.0)
    OLS = LinearRegression()
    # loop over trials in order to estimate the expectation value of the MSE
    estimated_mse_folds = cross_val_score(OLS, X, Energies, scoring='neg_mean_squared_error', cv=kfold)
    estimated_mse_sklearn[polydegree] = np.mean(-estimated_mse_folds)
plt.plot(polynomial, np.log10(estimated_mse_sklearn), label='Test Error')
plt.xlabel('Polynomial degree')
plt.ylabel('log10[MSE]')
plt.legend()
plt.show()
# -
# ## Cross-validation with Ridge
# +
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import KFold
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import PolynomialFeatures
# A seed just to ensure that the random numbers are the same for every run.
np.random.seed(3155)
# Generate the data.
n = 100
x = np.linspace(-3, 3, n).reshape(-1, 1)
y = np.exp(-x**2) + 1.5 * np.exp(-(x-2)**2)+ np.random.normal(0, 0.1, x.shape)
# Decide degree on polynomial to fit
poly = PolynomialFeatures(degree = 10)
# Decide which values of lambda to use
nlambdas = 500
lambdas = np.logspace(-3, 5, nlambdas)
# Initialize a KFold instance
k = 5
kfold = KFold(n_splits = k)
estimated_mse_sklearn = np.zeros(nlambdas)
i = 0
for lmb in lambdas:
    ridge = Ridge(alpha = lmb)
    estimated_mse_folds = cross_val_score(ridge, x, y, scoring='neg_mean_squared_error', cv=kfold)
    estimated_mse_sklearn[i] = np.mean(-estimated_mse_folds)
    i += 1
plt.figure()
plt.plot(np.log10(lambdas), estimated_mse_sklearn, label = 'cross_val_score')
plt.xlabel('log10(lambda)')
plt.ylabel('MSE')
plt.legend()
plt.show()
# -
# ## Applying Regression Analysis to the Ising Model
#
#
#
# ## The Ising model
#
# The one-dimensional Ising model with nearest neighbor interaction, no
# external field and a constant coupling constant $J$ is given by
# <!-- Equation labels as ordinary links -->
# <div id="_auto11"></div>
#
# $$
# \begin{equation}
# H = -J \sum_{k}^L s_k s_{k + 1},
# \label{_auto11} \tag{14}
# \end{equation}
# $$
# where $s_i \in \{-1, 1\}$ and $s_{L + 1} = s_1$. The number of spins
# in the system is determined by $L$. For the one-dimensional system
# there is no phase transition.
#
# We will look at a system of $L = 40$ spins with a coupling constant of
# $J = 1$. To get enough training data we will generate 10000 states
# with their respective energies.
# +
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
import seaborn as sns
import scipy.linalg as scl
from sklearn.model_selection import train_test_split
import tqdm
sns.set(color_codes=True)
cmap_args=dict(vmin=-1., vmax=1., cmap='seismic')
L = 40
n = int(1e4)
spins = np.random.choice([-1, 1], size=(n, L))
J = 1.0
energies = np.zeros(n)
for i in range(n):
    energies[i] = - J * np.dot(spins[i], np.roll(spins[i], 1))
# -
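# As a sanity check (our own, not part of the original setup), the `np.roll` construction above reproduces the explicit nearest-neighbour sum with the periodic boundary $s_{L+1} = s_1$:

```python
import numpy as np

def energy_roll(s, J=1.0):
    # pair each spin with its neighbour, wrapping around the ring
    return -J * np.dot(s, np.roll(s, 1))

def energy_loop(s, J=1.0):
    # explicit sum over nearest neighbours with periodic boundary conditions
    L = len(s)
    return -J * sum(s[k] * s[(k + 1) % L] for k in range(L))

rng = np.random.default_rng(0)
s = rng.choice([-1, 1], size=40)
print(energy_roll(s), energy_loop(s))
```

# On a ring the sum $\sum_k s_k s_{k-1}$ equals $\sum_k s_k s_{k+1}$, which is why the two computations coincide.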
# Here we use ordinary least squares
# regression to predict the energy for the nearest neighbor
# one-dimensional Ising model on a ring, i.e., the endpoints wrap
# around. We will use linear regression to fit a value for
# the coupling constant to achieve this.
#
# ## Reformulating the problem to suit regression
#
# A more general form for the one-dimensional Ising model is
# <!-- Equation labels as ordinary links -->
# <div id="_auto12"></div>
#
# $$
# \begin{equation}
# H = - \sum_j^L \sum_k^L s_j s_k J_{jk}.
# \label{_auto12} \tag{15}
# \end{equation}
# $$
# Here we allow for interactions beyond the nearest neighbors and a state dependent
# coupling constant. This latter expression can be formulated as
# a matrix-product
# <!-- Equation labels as ordinary links -->
# <div id="_auto13"></div>
#
# $$
# \begin{equation}
# \boldsymbol{H} = \boldsymbol{X} J,
# \label{_auto13} \tag{16}
# \end{equation}
# $$
# where $X_{jk} = s_j s_k$ and $J$ is a matrix which consists of the
# elements $-J_{jk}$. This form of writing the energy fits perfectly
# with the form utilized in linear regression, that is
# <!-- Equation labels as ordinary links -->
# <div id="_auto14"></div>
#
# $$
# \begin{equation}
# \boldsymbol{y} = \boldsymbol{X}\boldsymbol{\beta} + \boldsymbol{\epsilon},
# \label{_auto14} \tag{17}
# \end{equation}
# $$
# We split the data in training and test data as discussed in the previous example
X = np.zeros((n, L ** 2))
for i in range(n):
    X[i] = np.outer(spins[i], spins[i]).ravel()
y = energies
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# ## Linear regression
#
# In the ordinary least squares method we choose the cost function
# <!-- Equation labels as ordinary links -->
# <div id="_auto15"></div>
#
# $$
# \begin{equation}
# C(\boldsymbol{X}, \boldsymbol{\beta})= \frac{1}{n}\left\{(\boldsymbol{X}\boldsymbol{\beta} - \boldsymbol{y})^T(\boldsymbol{X}\boldsymbol{\beta} - \boldsymbol{y})\right\}.
# \label{_auto15} \tag{18}
# \end{equation}
# $$
# We then find the extremal point of $C$ by taking the derivative with respect to $\boldsymbol{\beta}$ as discussed above.
# This yields the expression for $\boldsymbol{\beta}$ to be
# $$
# \boldsymbol{\beta} = (\boldsymbol{X}^T \boldsymbol{X})^{-1}\boldsymbol{X}^T \boldsymbol{y},
# $$
# which immediately imposes some requirements on $\boldsymbol{X}$ as there must exist
# an inverse of $\boldsymbol{X}^T \boldsymbol{X}$. If the expression we are modeling contains an
# intercept, i.e., a constant term, we must make sure that the
# first column of $\boldsymbol{X}$ consists of $1$. We do this here
X_train_own = np.concatenate(
    (np.ones(len(X_train))[:, np.newaxis], X_train),
    axis=1
)
X_test_own = np.concatenate(
    (np.ones(len(X_test))[:, np.newaxis], X_test),
    axis=1
)
def ols_inv(x: np.ndarray, y: np.ndarray) -> np.ndarray:
    return scl.inv(x.T @ x) @ (x.T @ y)
beta = ols_inv(X_train_own, y_train)
# ## Singular Value decomposition
#
# Doing the inversion directly turns out to be a bad idea since the matrix
# $\boldsymbol{X}^T\boldsymbol{X}$ is singular. An alternative approach is to use the **singular
# value decomposition**. Using the definition of the Moore-Penrose
# pseudoinverse we can write the equation for $\boldsymbol{\beta}$ as
# $$
# \boldsymbol{\beta} = \boldsymbol{X}^{+}\boldsymbol{y},
# $$
# where the pseudoinverse of $\boldsymbol{X}$ is given by
# $$
# \boldsymbol{X}^{+} = (\boldsymbol{X}^T\boldsymbol{X})^{-1}\boldsymbol{X}^T.
# $$
# Using singular value decomposition we can decompose the matrix $\boldsymbol{X} = \boldsymbol{U}\boldsymbol{\Sigma} \boldsymbol{V}^T$,
# where $\boldsymbol{U}$ and $\boldsymbol{V}$ are orthogonal (unitary) matrices and $\boldsymbol{\Sigma}$ contains the singular values (more details below).
# The pseudoinverse then takes the form $\boldsymbol{X}^{+} = \boldsymbol{V}\boldsymbol{\Sigma}^{+} \boldsymbol{U}^T$, which reduces the equation for
# $\boldsymbol{\beta}$ to
# <!-- Equation labels as ordinary links -->
# <div id="_auto16"></div>
#
# $$
# \begin{equation}
# \boldsymbol{\beta} = \boldsymbol{V}\boldsymbol{\Sigma}^{+} \boldsymbol{U}^T \boldsymbol{y}.
# \label{_auto16} \tag{19}
# \end{equation}
# $$
# Note that solving this equation by actually doing the pseudoinverse
# (which is what we will do) is not a good idea as this operation scales
# as $\mathcal{O}(n^3)$, where $n$ is the number of elements in a
# general matrix. Instead, doing $QR$-factorization and solving the
# linear system as an equation would reduce this down to
# $\mathcal{O}(n^2)$ operations.
def ols_svd(x: np.ndarray, y: np.ndarray) -> np.ndarray:
    u, s, v = scl.svd(x)
    return v.T @ scl.pinv(scl.diagsvd(s, u.shape[0], v.shape[0])) @ u.T @ y
beta = ols_svd(X_train_own,y_train)
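# The $QR$ route mentioned above can be sketched as follows, assuming a design matrix with full column rank (which the Ising design matrix does not have, so this is a general-purpose illustration with our own helper name):

```python
import numpy as np
import scipy.linalg as scl

def ols_qr(x: np.ndarray, y: np.ndarray) -> np.ndarray:
    # Reduced QR: x = q r with r upper triangular, then back-substitute r beta = q^T y
    q, r = scl.qr(x, mode='economic')
    return scl.solve_triangular(r, q.T @ y)

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 4))   # full column rank almost surely
b = rng.normal(size=50)
print(np.allclose(ols_qr(A, b), np.linalg.lstsq(A, b, rcond=None)[0]))
```

# The triangular back-substitution avoids forming $\boldsymbol{X}^T\boldsymbol{X}$ or a pseudoinverse explicitly, which is what makes this route cheaper and better conditioned.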
# When extracting the $J$-matrix we need to make sure that we remove the intercept, as is done here
J = beta[1:].reshape(L, L)
# A way of looking at the coefficients in $J$ is to plot the matrices as images.
fig = plt.figure(figsize=(20, 14))
im = plt.imshow(J, **cmap_args)
plt.title("OLS", fontsize=18)
plt.xticks(fontsize=18)
plt.yticks(fontsize=18)
cb = fig.colorbar(im)
cb.ax.set_yticklabels(cb.ax.get_yticklabels(), fontsize=18)
plt.show()
# It is interesting to note that OLS
# considers both $J_{j, j + 1} = -0.5$ and $J_{j, j - 1} = -0.5$ as
# valid matrix elements for $J$.
# In our discussion below on hyperparameters and Ridge and Lasso regression we will see that
# this double counting can be removed, although only partly, with Lasso regression.
#
# In this case our matrix inversion was actually possible. The obvious question now is what is the mathematics behind the SVD?
#
#
#
#
#
# ## The one-dimensional Ising model
#
# Let us bring back the Ising model again, but now with an additional
# focus on Ridge and Lasso regression as well. We repeat some of the
# basic parts of the Ising model and the setup of the training and test
# data. The one-dimensional Ising model with nearest neighbor
# interaction, no external field and a constant coupling constant $J$ is
# given by
# <!-- Equation labels as ordinary links -->
# <div id="_auto17"></div>
#
# $$
# \begin{equation}
# H = -J \sum_{k}^L s_k s_{k + 1},
# \label{_auto17} \tag{20}
# \end{equation}
# $$
# where $s_i \in \{-1, 1\}$ and $s_{L + 1} = s_1$. The number of spins in the system is determined by $L$. For the one-dimensional system there is no phase transition.
#
# We will look at a system of $L = 40$ spins with a coupling constant of $J = 1$. To get enough training data we will generate 10000 states with their respective energies.
# +
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
import seaborn as sns
import scipy.linalg as scl
from sklearn.model_selection import train_test_split
import sklearn.linear_model as skl
import tqdm
sns.set(color_codes=True)
cmap_args=dict(vmin=-1., vmax=1., cmap='seismic')
L = 40
n = int(1e4)
spins = np.random.choice([-1, 1], size=(n, L))
J = 1.0
energies = np.zeros(n)
for i in range(n):
    energies[i] = - J * np.dot(spins[i], np.roll(spins[i], 1))
# -
# A more general form for the one-dimensional Ising model is
# <!-- Equation labels as ordinary links -->
# <div id="_auto18"></div>
#
# $$
# \begin{equation}
# H = - \sum_j^L \sum_k^L s_j s_k J_{jk}.
# \label{_auto18} \tag{21}
# \end{equation}
# $$
# Here we allow for interactions beyond the nearest neighbors and a more
# adaptive coupling matrix. This latter expression can be formulated as
# a matrix-product on the form
# <!-- Equation labels as ordinary links -->
# <div id="_auto19"></div>
#
# $$
# \begin{equation}
# H = X J,
# \label{_auto19} \tag{22}
# \end{equation}
# $$
# where $X_{jk} = s_j s_k$ and $J$ is the matrix consisting of the
# elements $-J_{jk}$. This form of writing the energy fits perfectly
# with the form utilized in linear regression, viz.
# <!-- Equation labels as ordinary links -->
# <div id="_auto20"></div>
#
# $$
# \begin{equation}
# \boldsymbol{y} = \boldsymbol{X}\boldsymbol{\beta} + \boldsymbol{\epsilon}.
# \label{_auto20} \tag{23}
# \end{equation}
# $$
# We organize the data as we did above
# +
X = np.zeros((n, L ** 2))
for i in range(n):
    X[i] = np.outer(spins[i], spins[i]).ravel()
y = energies
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.96)
X_train_own = np.concatenate(
    (np.ones(len(X_train))[:, np.newaxis], X_train),
    axis=1
)
X_test_own = np.concatenate(
    (np.ones(len(X_test))[:, np.newaxis], X_test),
    axis=1
)
# -
# We will do all fitting with **Scikit-Learn**,
clf = skl.LinearRegression().fit(X_train, y_train)
# When extracting the $J$-matrix we make sure to remove the intercept
J_sk = clf.coef_.reshape(L, L)
# And then we plot the results
fig = plt.figure(figsize=(20, 14))
im = plt.imshow(J_sk, **cmap_args)
plt.title("LinearRegression from Scikit-learn", fontsize=18)
plt.xticks(fontsize=18)
plt.yticks(fontsize=18)
cb = fig.colorbar(im)
cb.ax.set_yticklabels(cb.ax.get_yticklabels(), fontsize=18)
plt.show()
# The results agree perfectly with our previous discussion where we used our own code.
#
# ## Ridge regression
#
# Having explored the ordinary least squares we move on to ridge
# regression. In ridge regression we include a **regularizer**. This
# involves a new cost function which leads to a new estimate for the
# weights $\boldsymbol{\beta}$. This results in a penalized regression problem. The
# cost function is given by
# <!-- Equation labels as ordinary links -->
# <div id="_auto21"></div>
#
# $$
# \begin{equation}
# C(\boldsymbol{X}, \boldsymbol{\beta}; \lambda) = (\boldsymbol{X}\boldsymbol{\beta} - \boldsymbol{y})^T(\boldsymbol{X}\boldsymbol{\beta} - \boldsymbol{y}) + \lambda \boldsymbol{\beta}^T\boldsymbol{\beta}.
# \label{_auto21} \tag{24}
# \end{equation}
# $$
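# As a quick check (our own sketch, with no intercept): the minimizer of the ridge cost function is $\boldsymbol{\beta}(\lambda) = (\boldsymbol{X}^T\boldsymbol{X} + \lambda \boldsymbol{I})^{-1}\boldsymbol{X}^T\boldsymbol{y}$, and on toy data it coincides with ``Ridge`` from **Scikit-Learn**:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(42)
X = rng.normal(size=(60, 5))
y = X @ np.array([1.0, 0.0, -1.0, 2.0, 0.5]) + 0.1 * rng.normal(size=60)
lam = 0.1
# Closed-form ridge estimate: solve (X^T X + lambda I) beta = X^T y
beta_closed = np.linalg.solve(X.T @ X + lam * np.eye(5), X.T @ y)
# Scikit-Learn with fit_intercept=False minimizes the same cost with alpha = lambda
beta_sk = Ridge(alpha=lam, fit_intercept=False).fit(X, y).coef_
print(np.allclose(beta_closed, beta_sk))
```

# Note that ``Ridge`` applies its ``alpha`` penalty exactly as $\lambda$ in the cost above; only the intercept handling has to be switched off to make the two computations identical.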
# +
_lambda = 0.1
clf_ridge = skl.Ridge(alpha=_lambda).fit(X_train, y_train)
J_ridge_sk = clf_ridge.coef_.reshape(L, L)
fig = plt.figure(figsize=(20, 14))
im = plt.imshow(J_ridge_sk, **cmap_args)
plt.title("Ridge from Scikit-learn", fontsize=18)
plt.xticks(fontsize=18)
plt.yticks(fontsize=18)
cb = fig.colorbar(im)
cb.ax.set_yticklabels(cb.ax.get_yticklabels(), fontsize=18)
plt.show()
# -
# ## LASSO regression
#
# In the **Least Absolute Shrinkage and Selection Operator** (LASSO)-method we get a third cost function.
# <!-- Equation labels as ordinary links -->
# <div id="_auto22"></div>
#
# $$
# \begin{equation}
# C(\boldsymbol{X}, \boldsymbol{\beta}; \lambda) = (\boldsymbol{X}\boldsymbol{\beta} - \boldsymbol{y})^T(\boldsymbol{X}\boldsymbol{\beta} - \boldsymbol{y}) + \lambda ||\boldsymbol{\beta}||_1.
# \label{_auto22} \tag{25}
# \end{equation}
# $$
# Finding the extremal point of this cost function is not as straightforward as in least squares and ridge regression, since the penalty term is not differentiable at zero. We will therefore rely solely on the function ``Lasso`` from **Scikit-Learn**.
# +
clf_lasso = skl.Lasso(alpha=_lambda).fit(X_train, y_train)
J_lasso_sk = clf_lasso.coef_.reshape(L, L)
fig = plt.figure(figsize=(20, 14))
im = plt.imshow(J_lasso_sk, **cmap_args)
plt.title("Lasso from Scikit-learn", fontsize=18)
plt.xticks(fontsize=18)
plt.yticks(fontsize=18)
cb = fig.colorbar(im)
cb.ax.set_yticklabels(cb.ax.get_yticklabels(), fontsize=18)
plt.show()
# -
# It is quite striking how LASSO breaks the symmetry of the coupling
# constant as opposed to ridge and OLS. We get a sparse solution with
# $J_{j, j + 1} = -1$.
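# The sparsity can be understood in the simplest setting of an orthonormal design, where the LASSO solution reduces to soft-thresholding of the OLS coefficients. The sketch below is standalone (separate from the Ising fit) and accounts for the fact that ``Lasso`` in **Scikit-Learn** scales the squared-error term by $1/(2n)$:

```python
import numpy as np
from sklearn.linear_model import Lasso

# With X = I the per-coordinate problem is (1/(2n))(y_j - b_j)^2 + alpha |b_j|,
# whose minimizer is the soft-thresholding of y_j at n * alpha.
y = np.array([3.0, -0.5, 2.0])
X = np.eye(3)
alpha = 0.5
threshold = len(y) * alpha                                   # = 1.5
soft = np.sign(y) * np.maximum(np.abs(y) - threshold, 0.0)   # analytic solution
coef = Lasso(alpha=alpha, fit_intercept=False).fit(X, y).coef_
print(soft, coef)
```

# Coefficients whose OLS values fall below the threshold are set exactly to zero, which is the mechanism behind the sparse $J$-matrix seen above.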
#
#
#
# ## Performance as function of the regularization parameter
#
# We see how the different models perform for a different set of values for $\lambda$.
# +
lambdas = np.logspace(-4, 5, 10)
train_errors = {
    "ols_sk": np.zeros(lambdas.size),
    "ridge_sk": np.zeros(lambdas.size),
    "lasso_sk": np.zeros(lambdas.size)
}
test_errors = {
    "ols_sk": np.zeros(lambdas.size),
    "ridge_sk": np.zeros(lambdas.size),
    "lasso_sk": np.zeros(lambdas.size)
}
plot_counter = 1
fig = plt.figure(figsize=(32, 54))
for i, _lambda in enumerate(tqdm.tqdm(lambdas)):
    for key, method in zip(
        ["ols_sk", "ridge_sk", "lasso_sk"],
        [skl.LinearRegression(), skl.Ridge(alpha=_lambda), skl.Lasso(alpha=_lambda)]
    ):
        method = method.fit(X_train, y_train)
        train_errors[key][i] = method.score(X_train, y_train)
        test_errors[key][i] = method.score(X_test, y_test)
        omega = method.coef_.reshape(L, L)
        plt.subplot(10, 5, plot_counter)
        plt.imshow(omega, **cmap_args)
        plt.title(r"%s, $\lambda = %.4f$" % (key, _lambda))
        plot_counter += 1
plt.show()
# -
# We see that LASSO reaches a good solution for low
# values of $\lambda$, but will "wither" when we increase $\lambda$ too
# much. Ridge is more stable over a larger range of values for
# $\lambda$, but eventually also fades away.
#
# ## Finding the optimal value of $\lambda$
#
# To determine which value of $\lambda$ is best we plot the accuracy of
# the models when predicting the training and the testing set. We expect
# the accuracy of the training set to be quite good, but if the accuracy
# of the testing set is much lower this tells us that we might be
# subject to an overfit model. The ideal scenario is an accuracy on the
# testing set that is close to the accuracy of the training set.
# +
fig = plt.figure(figsize=(20, 14))
colors = {
    "ols_sk": "r",
    "ridge_sk": "y",
    "lasso_sk": "c"
}
for key in train_errors:
    plt.semilogx(
        lambdas,
        train_errors[key],
        colors[key],
        label="Train {0}".format(key),
        linewidth=4.0
    )
for key in test_errors:
    plt.semilogx(
        lambdas,
        test_errors[key],
        colors[key] + "--",
        label="Test {0}".format(key),
        linewidth=4.0
    )
plt.legend(loc="best", fontsize=18)
plt.xlabel(r"$\lambda$", fontsize=18)
plt.ylabel(r"$R^2$", fontsize=18)
plt.tick_params(labelsize=18)
plt.show()
# -
# From the above figure we can see that LASSO with $\lambda = 10^{-2}$
# achieves a very good accuracy on the test set. This by far surpasses the
# other models for all values of $\lambda$.
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="3GofkbEjwQCI"
# ## Setup
# + colab={"base_uri": "https://localhost:8080/"} id="BaXpEKLdXNPa" executionInfo={"elapsed": 663, "status": "ok", "timestamp": 1636271220486, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "13037694610922482904"}, "user_tz": -330} outputId="2aacf9ae-cc79-43f0-f3f9-41ae04c75e16"
import os
project_name = "ieee21cup-recsys"; branch = "main"; account = "sparsh-ai"
project_path = os.path.join('/content', branch)
if not os.path.exists(project_path):
# !cp -r /content/drive/MyDrive/git_credentials/. ~
# !mkdir "{project_path}"
# %cd "{project_path}"
# !git init
# !git remote add origin https://github.com/"{account}"/"{project_name}".git
# !git pull origin "{branch}"
# !git checkout -b "{branch}"
else:
# %cd "{project_path}"
# + colab={"base_uri": "https://localhost:8080/"} id="MZvPHRyMXdlS" executionInfo={"elapsed": 967, "status": "ok", "timestamp": 1636271225175, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "13037694610922482904"}, "user_tz": -330} outputId="9ace871e-5585-4062-e21a-70e4d38d537e"
# %cd /content
# + id="2eRcpGL6XfDs"
# !cd /content/main && git add . && git commit -m 'commit' && git push origin main
# + id="DctyNOSdx-7h"
# !pip install -q wget
# + id="vrEmNkAAsQlM"
import io
import copy
import sys
import wget
import os
import logging
import pandas as pd
from os import path as osp
import numpy as np
from tqdm.notebook import tqdm
from pathlib import Path
import multiprocessing as mp
import functools
from sklearn.preprocessing import MinMaxScaler
import bz2
import pickle
import _pickle as cPickle
import matplotlib.pyplot as plt
# %matplotlib inline
# + id="M4swQxyAsQnj"
class Args:
# Paths
datapath_bronze = '/content/main/data/bronze'
datapath_silver = '/content/main/data/silver/T445041'
datapath_gold = '/content/main/data/gold/T445041'
filename_trainset = 'train.csv'
filename_iteminfo = 'item_info.csv'
filename_track1_testset = 'track1_testset.csv'
data_sep = ' '
N_ITEMS = 380
N_USER_PORTRAITS = 10
N_THREADS = mp.cpu_count() - 1
args = Args()
# + id="wIDRSKqOtEdb"
logging.basicConfig(stream=sys.stdout,
level = logging.INFO,
format='%(asctime)s [%(levelname)s] : %(message)s',
datefmt='%d-%b-%y %H:%M:%S')
logger = logging.getLogger('IEEE21 Logger')
# + [markdown] id="N1bmqnvQv27E"
# ## Utilities
# + id="tH7lmOJbAOIf"
def save_pickle(data, title):
with bz2.BZ2File(title, 'w') as f:
cPickle.dump(data, f)
def load_pickle(path):
data = bz2.BZ2File(path, 'rb')
data = cPickle.load(data)
return data
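# A quick round-trip check of the two pickle helpers above. The sketch restates
# them with context managers (the original `load_pickle` never closes its file
# handle) and uses the stdlib `pickle` in place of `_pickle`, which is
# equivalent in Python 3.

```python
import bz2
import os
import pickle
import tempfile

def save_pickle(data, title):
    # compress with bz2 and serialize with pickle, as in the helpers above
    with bz2.BZ2File(title, 'w') as f:
        pickle.dump(data, f)

def load_pickle(path):
    with bz2.BZ2File(path, 'rb') as f:
        return pickle.load(f)

path = os.path.join(tempfile.mkdtemp(), 'demo.pbz2')
save_pickle({'a': [1, 2, 3]}, path)
restored = load_pickle(path)
```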
# + id="AGTVUdmtwWgZ"
def download_dataset():
# create bronze folder if not exist
Path(args.datapath_bronze).mkdir(parents=True, exist_ok=True)
# also creating silver and gold folder for later use
Path(args.datapath_silver).mkdir(parents=True, exist_ok=True)
Path(args.datapath_gold).mkdir(parents=True, exist_ok=True)
# for each of the file, download if not exist
datasets = ['train.parquet.snappy', 'item_info.parquet.snappy',
'track1_testset.parquet.snappy', 'track2_testset.parquet.snappy']
for filename in datasets:
file_savepath = osp.join(args.datapath_bronze,filename)
if not osp.exists(file_savepath):
logger.info('Downloading {}'.format(filename))
wget.download(url='https://github.com/sparsh-ai/ieee21cup-recsys/raw/main/data/bronze/{}'.format(filename),
out=file_savepath)
else:
logger.info('{} file already exists, skipping!'.format(filename))
# + id="vk93jRMwtEWP"
def parquet_to_csv(path):
savepath = osp.join(str(Path(path).parent),str(Path(path).name).split('.')[0]+'.csv')
pd.read_parquet(path).to_csv(savepath, index=False, sep=args.data_sep)
# + id="_F4vRpFCzYsf"
def convert_dataset():
# for each of the file, convert into csv, if csv not exist
datasets = ['train.parquet.snappy', 'item_info.parquet.snappy',
'track1_testset.parquet.snappy', 'track2_testset.parquet.snappy']
datasets = {x:str(Path(x).name).split('.')[0]+'.csv' for x in datasets}
for sfilename, tfilename in datasets.items():
file_loadpath = osp.join(args.datapath_bronze,sfilename)
file_savepath = osp.join(args.datapath_bronze,tfilename)
if not osp.exists(file_savepath):
logger.info('Converting {} to {}'.format(sfilename, tfilename))
parquet_to_csv(file_loadpath)
else:
logger.info('{} file already exists, skipping!'.format(tfilename))
# + [markdown] id="3usPrS7pKdo5"
# ```
# Script to prepare data objects for training and testing
# Usage: from DataPrep import getUserFeaturesTrainSet, getPurchasedItemsTrainSet, getUserFeaturesTestSet
# 1. getUserFeaturesTrainSet():
# return: DataFrame with N_ITEMS+N_USER_PORTRAITS columns
# first N_ITEMS cols: one hot encoding of clicked items
# last N_USER_PORTRAITS cols: normalized user portraits to [0,1] range
# DataFrame shape: (260087, 380+10)
# 2. getPurchasedItemsTrainSet():
# return: a list, each element is a list of purchased itemIDs by a user
# each element i of the list corresponds to a user in row i of getUserFeaturesTrainSet()
# list length: 260087
# 3. getUserFeaturesTestSet():
# return: DataFrame with N_ITEMS+N_USER_PORTRAITS columns
# first N_ITEMS cols: one hot encoding of clicked items
# last N_USER_PORTRAITS cols: normalized user portraits to [0,1] range
# 4. getClusterLabels():
# return: (model, labels)
# model : model for testset prediction
# labels: numpy array of labels of clusters from the trainset
# ```
# + id="I5iL4t4TM5C9"
def parseUserFeaturesOneLine(inputArray):
"""
Kernel function
Return: list of length args.N_ITEMS + args.N_USER_PORTRAITS
Input:
inputArray: an array as a row of trainset or testset raw data
ASSUMPTIONS:
user_click_history is on column index 1 of inputArray
user_portrait is on column index 2 of inputArray
"""
CLICKHIST_INDEX = 1
PORTRAIT_INDEX = 2
output = [0]*(args.N_ITEMS + args.N_USER_PORTRAITS)
# parse click history, assuming
clickSeries = inputArray[CLICKHIST_INDEX].split(',')
clickedItems = [item.partition(':')[0] for item in clickSeries]
# add clicked items to output
for itemID in clickedItems:
        if int(itemID) <= 0 or int(itemID) > args.N_ITEMS:  # ignore invalid itemIDs (valid range: 1..N_ITEMS)
continue
colIndex = int(itemID) - 1 # index of clicked item on an element of outputSharedList
output[colIndex] = 1
# parse user portraits
portraits = inputArray[PORTRAIT_INDEX].split(',')
    if len(portraits) != args.N_USER_PORTRAITS:
        raise ValueError("a row of the data set does not have the expected number of portrait features")
# add portrait features to output
for i in range(args.N_USER_PORTRAITS):
colIndex = args.N_ITEMS + i # index of feature on an element of outputSharedList
output[colIndex] = int(portraits[i])
return output
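# A tiny worked example of the parsing logic above, re-implemented with
# hypothetical sizes (5 items, 3 portraits) instead of the real 380/10 so the
# output is easy to check by eye.

```python
def parse_row(row, n_items=5, n_portraits=3):
    # row layout mirrors the assumption above: click history at index 1, portraits at index 2
    out = [0] * (n_items + n_portraits)
    for pair in row[1].split(','):
        item_id = int(pair.partition(':')[0])
        if 1 <= item_id <= n_items:  # ignore invalid item IDs
            out[item_id - 1] = 1
    for i, p in enumerate(row[2].split(',')):
        out[n_items + i] = int(p)
    return out

# item 9 is out of range and is skipped; items 1 and 3 are one-hot encoded
features = parse_row(['user_42', '1:100,3:50,9:10', '7,8,9'])
```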
# + id="cTygb9y5Kdmc"
def prepareUserFeaturesTrainSet():
"""
save to UserFeaturesTrainSet.pkl
data frame with N_ITEMS+N_USER_PORTRAITS columns
first N_ITEMS cols: one hot encoding of clicked items
last N_USER_PORTRAITS cols: normalized user portraits
Data source: trainset.csv
"""
readfilepath = osp.join(args.datapath_bronze,args.filename_trainset)
outfilepath = osp.join(args.datapath_silver,'UserFeaturesTrainSet.pkl')
if not osp.exists(outfilepath):
# read data to pd dataframe
logger.info('reading raw data file ...')
rawTrainSet = pd.read_csv(readfilepath, sep=args.data_sep)
# create output frame
colNames = ['clickedItem'+str(i+1) for i in range(args.N_ITEMS)] + ['userPortrait'+str(i+1) for i in range(args.N_USER_PORTRAITS)]
output = pd.DataFrame(data = np.zeros(shape = (rawTrainSet.shape[0], args.N_ITEMS+args.N_USER_PORTRAITS)), columns = colNames)
# parse each line in parallel
# first objects in shared memory for input and output
logger.info('creating shared memory objects ...')
inputList = rawTrainSet.values.tolist() # for memory efficiency
p = mp.Pool(args.N_THREADS)
logger.info('multiprocessing ... ')
outputList = p.map(parseUserFeaturesOneLine, inputList)
# convert outputSharedList back to DataFrame
logger.info('convert to DataFrame ...')
output = pd.DataFrame(data = outputList, columns = colNames)
import gc; gc.collect()
# normalize the portraits columns
for i in range(args.N_USER_PORTRAITS):
colName = 'userPortrait' + str(i+1)
scaler = MinMaxScaler()
output[colName] = scaler.fit_transform(output[colName].values.reshape(-1,1))
# save to pickle file
output.to_pickle(outfilepath)
logger.info('Saved processed file at {}'.format(outfilepath))
else:
logger.info('{} Processed data already exists, skipping!'.format(outfilepath))
# + colab={"base_uri": "https://localhost:8080/"} id="NAu5MxaOU8Ze" executionInfo={"elapsed": 86158, "status": "ok", "timestamp": 1636271401213, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "13037694610922482904"}, "user_tz": -330} outputId="5472f3fe-fbdc-4bec-f5db-22a88c94631e"
prepareUserFeaturesTrainSet()
# + id="56YPgpz4KdkL"
def preparePurchasedItemsTrainSet():
"""
save to PurchasedItemsTrainSet.pkl
Data source: trainset.csv
"""
readfilepath = osp.join(args.datapath_bronze,args.filename_trainset)
outfilepath = osp.join(args.datapath_silver,'PurchasedItemsTrainSet.pkl')
if not osp.exists(outfilepath):
# read data to pd dataframe
logger.info('reading raw data file ...')
rawTrainSet = pd.read_csv(readfilepath, sep=args.data_sep)
output = []
logger.info('processing ...')
for i in tqdm(range(rawTrainSet.shape[0])):
# parse each line
exposedItems = rawTrainSet.exposed_items[i]
labels = rawTrainSet.labels[i]
exposedItems = exposedItems.split(',')
labels = labels.split(',')
purchasedItems = []
for j in range(len(labels)):
if int(labels[j])==1:
# item is purchased, append it to the purchasedItems list
purchasedItems.append(int(exposedItems[j]))
import gc; gc.collect()
# append the list of this row to output
output.append(purchasedItems)
# save to pickle file
save_pickle(output, outfilepath)
logger.info('Saved processed file at {}'.format(outfilepath))
else:
logger.info('{} Processed data already exists, skipping!'.format(outfilepath))
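# The core of the loop above is pairing each exposed item with its purchase
# label and keeping the purchased ones. A minimal standalone version of that
# step:

```python
def purchased_items(exposed_str, labels_str):
    # keep only the exposed items whose purchase label is 1
    exposed = [int(x) for x in exposed_str.split(',')]
    labels = [int(x) for x in labels_str.split(',')]
    return [item for item, lab in zip(exposed, labels) if lab == 1]

items = purchased_items('12,7,9', '0,1,1')
```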
# + colab={"base_uri": "https://localhost:8080/", "height": 83, "referenced_widgets": ["c869695ad6a145b28d6940503dbba252", "9cf661a2b1264855aca52f042fae12d3", "b56438dab4e442a9bb21fac14ee32e4e", "4810d1c20ec54a15aaacdeb8f56e2478", "ee3ff1c5c1e848f39657742b0f5e0a18", "04e53ec292c94fe0b7e35ce5a8838b01", "5bd3f62468854d29b2a6c0c01c6e9edb", "32b52cab9c0149bf927e9f54794164cc", "45a71bf5bda641aeaca69f252401e758", "ccbed971f11a45ed8020060b2412b5ed", "5fb96f9cd5ba4fdd9a187743460e50d7"]} id="VcuA07bJWzap" outputId="1bb4ca57-16e2-46c3-dc01-3f0569643141"
preparePurchasedItemsTrainSet()
# + id="nQ5kzkryOonD"
def prepareUserFeaturesTestSet():
    """
    save to UserFeaturesTestSet.pkl
    write content: (userIDs, UserFeaturesTestSet)
    userIDs: array of user ids
    UserFeaturesTestSet: data frame with N_ITEMS+N_USER_PORTRAITS columns
    Data source: track1_testset.csv
    """
    readfilepath = osp.join(args.datapath_bronze,args.filename_track1_testset)
    outfilepath = osp.join(args.datapath_silver,'UserFeaturesTestSet.pkl')
    if not osp.exists(outfilepath):
        # read data to pd dataframe
        logger.info('reading raw data file ...')
        rawTestSet = pd.read_csv(readfilepath, sep=args.data_sep)
# create output frame
colNames = ['clickedItem'+str(i+1) for i in range(args.N_ITEMS)] + ['userPortrait'+str(i+1) for i in range(args.N_USER_PORTRAITS)]
output = pd.DataFrame(data = np.zeros(shape = (rawTestSet.shape[0], args.N_ITEMS+args.N_USER_PORTRAITS)), columns = colNames)
# parse each line in parallel
# first objects in shared memory for input and output
        logger.info('creating shared memory objects ...')
inputList = rawTestSet.values.tolist() # for memory efficiency
p = mp.Pool(args.N_THREADS)
        logger.info('multiprocessing ...')
outputList = p.map(parseUserFeaturesOneLine, inputList)
# convert outputSharedList back to DataFrame
        logger.info('convert to DataFrame ...')
output = pd.DataFrame(data = outputList, columns = colNames)
# normalize the portraits columns
for i in range(args.N_USER_PORTRAITS):
colName = 'userPortrait' + str(i+1)
scaler = MinMaxScaler()
output[colName] = scaler.fit_transform(output[colName].values.reshape(-1,1))
# create userIDs array
userIDs = rawTestSet['user_id'].tolist()
# save to pickle file
save_pickle((userIDs,output), outfilepath)
logger.info('Saved processed file at {}'.format(outfilepath))
else:
logger.info('{} Processed data already exists, skipping!'.format(outfilepath))
# + id="0desQ4yISXDX"
def getUserFeaturesTrainSet():
savefilepath = osp.join(args.datapath_silver,'UserFeaturesTrainSet.pkl')
return load_pickle(savefilepath)
def getPurchasedItemsTrainSet():
savefilepath = osp.join(args.datapath_silver,'PurchasedItemsTrainSet.pkl')
return load_pickle(savefilepath)
def getUserFeaturesTestSet():
savefilepath = osp.join(args.datapath_silver,'UserFeaturesTestSet.pkl')
return load_pickle(savefilepath)
# + id="DQMyFaoQUSiv"
def getExposedItemsTrainSet():
"""return list of exposed items in trainset and whether they are purchased
(exposedItems, purchaseLabels)
both are list of list
"""
readfilepath = osp.join(args.datapath_bronze,args.filename_trainset)
    rawTrainSet = pd.read_csv(readfilepath, sep=args.data_sep)
exposedItems = rawTrainSet.exposed_items
purchaseLabels = rawTrainSet.labels
exposedItems_out = []
purchaseLabels_out = []
for i in range(len(exposedItems)):
items = exposedItems[i]
labels = purchaseLabels[i]
items = [int(x) for x in items.split(',')]
labels = [int(x) for x in labels.split(',')]
exposedItems_out.append(items)
purchaseLabels_out.append(labels)
return (exposedItems_out, purchaseLabels_out)
# + id="ewFHPoEmUsGG"
def getItemPrice():
"""return: array of item prices"""
readfilepath = osp.join(args.datapath_bronze,args.filename_iteminfo)
itemInfo = pd.read_csv(readfilepath)
itemInfo = itemInfo.sort_values(by = 'item_id')
itemPrice = itemInfo.price
return itemPrice
# + id="ePwzVpKuSXBC"
def splitTrainSet(percentageTrain = 0.8):
readfilepath = osp.join(args.datapath_bronze,args.filename_trainset)
outfilepath = osp.join(args.datapath_silver,'splitTrainSet.pkl')
if not osp.exists(outfilepath):
# read raw data
userFeatures = getUserFeaturesTrainSet()
        rawTrainSet = pd.read_csv(readfilepath, sep=args.data_sep)
purchaseLabels1 = rawTrainSet.labels
recItems1 = rawTrainSet.exposed_items
N = len(purchaseLabels1)
# create permutation index
permutedIndex = np.random.permutation(N)
trainIndex = permutedIndex[:int(N*percentageTrain)]
testIndex = permutedIndex[int(N*percentageTrain):]
# split user features
userFeaturesTrain = userFeatures.iloc[trainIndex]
userFeaturesTest = userFeatures.iloc[testIndex]
# convert recItems to integer
recItems = []
for i, s in enumerate(recItems1):
# loop thru samples
recItems.append([int(x) for x in s.split(',')])
recItems = np.array(recItems)
# convert purchaseLabels to integer
purchaseLabels = []
for i, s in enumerate(purchaseLabels1):
# loop thru samples
purchaseLabels.append([int(x) for x in s.split(',')])
purchaseLabels = np.array(purchaseLabels)
# split recItems
recItemsTrain = recItems[trainIndex]
recItemsTest = recItems[testIndex]
# split purchaseLabels
purchaseLabelTrain = purchaseLabels[trainIndex]
purchaseLabelTest = purchaseLabels[testIndex]
# saving pickle
save_pickle((userFeaturesTrain, recItemsTrain, purchaseLabelTrain, userFeaturesTest, recItemsTest, purchaseLabelTest), outfilepath)
logger.info('Saved processed file at {}'.format(outfilepath))
else:
logger.info('{} Processed data already exists, skipping!'.format(outfilepath))
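# The index bookkeeping in `splitTrainSet` reduces to one idea: permute the
# row indices once and cut at the train fraction, then reuse the same two
# index sets for every array. A stdlib-only sketch of that step:

```python
import random

def split_indices(n, frac_train=0.8, seed=0):
    # shuffle 0..n-1 and cut at the train fraction, mirroring the permutation split above
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    cut = int(n * frac_train)
    return idx[:cut], idx[cut:]

train_idx, test_idx = split_indices(10)
```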
# + id="CaYdx9KRSW-y"
class Metrics:
def __init__(self, recommended_testset, purchaseLabels_testset, itemPrice):
""" recommended_testset: list
purchaseLabels_testset: list
itemPrice: list
"""
self.rec = recommended_testset
self.labels = purchaseLabels_testset
self.price = itemPrice
def calculate_metrics1(self, recommendedItems):
"""
recommendedItems: list of length equal to recommended_testset
metrics calculated by summing total rewards of purchased items, no punishment
"""
score = 0
for i in range(len(recommendedItems)):
# loop each sample in data
predItems = recommendedItems[i]
givenItems = self.rec[i]
labels = self.labels[i]
            purchaseAND = [givenItems[j] for j in range(len(labels)) if labels[j] == 1]
for item in predItems:
# loop each items in the sample
if item in purchaseAND:
score = score + self.price[item-1]
return score
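# A worked example of the scoring rule in `calculate_metrics1` (re-implemented
# standalone with made-up prices): only recommended items that were actually
# purchased contribute their price, with no penalty term.

```python
def reward(recommended, exposed, labels, price):
    # sum the prices of recommended items that were actually purchased
    purchased = {exposed[i] for i in range(len(exposed)) if labels[i] == 1}
    return sum(price[item - 1] for item in recommended if item in purchased)

# items 2 and 3 were purchased; item 1 was exposed but not bought
score = reward(recommended=[2, 3, 1], exposed=[1, 2, 3], labels=[0, 1, 1], price=[10, 20, 30])
```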
# + id="BM2s0auSacpX"
################################################################
# Exploring Collaborative Filtering based on KNN
################################################################
# 1. Use User data with clicked items and user_portraits
# 2. train KNN algorithm
# 3. for a test observation, find the K nearest neighbors
# 4. find the most common items from the neighbors to recommend
# 5. Use cross-validation to calibrate K
from sklearn.neighbors import NearestNeighbors
class KNNModel:
def __init__(self, TrainData, purchaseData, K_neighbors):
"""
train KNN model on TrainData
purchaseData: list of length len(TrainData), each element is a list of purchased itemID
K_neighbors: KNN parameter
"""
self.model = NearestNeighbors(n_neighbors = K_neighbors)
self.model.fit(TrainData)
self.purchaseData = purchaseData
self.K_neighbors = K_neighbors
def predict(self, newPoint):
"""
newPoint should have the same columns as TrainData, any number of row
first find the nearest neighbors
then count the frequency of their purchased items
return: list with length = nrow of newPoint
each element of list is a list of length 9
"""
neighborDist, neighborIDs = self.model.kneighbors(newPoint)
output = []
        for rowID in range(len(neighborIDs)):
            # calculate the score of purchased items with a dictionary (reset for each row,
            # so scores do not leak across different query points)
            itemScore = {}
            for i in range(self.K_neighbors):
uID = neighborIDs[rowID][i]
dist = neighborDist[rowID][i]
if dist==0:
dist = 1e-7
itemList = self.purchaseData[uID]
for itemID in itemList:
if itemID not in itemScore.keys():
itemScore[itemID] = 1/dist
else:
itemScore[itemID] = itemScore[itemID] + 1/dist
# find 9 items with highest frequency
# first sort the dict by decreasing value
sortedDict = {k: v for k, v in sorted(itemScore.items(), key=lambda item: item[1], reverse = True)}
finalItems = list(sortedDict.keys())[:9]
output.append(finalItems)
return output
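# The heart of `predict` is inverse-distance-weighted voting: each neighbor
# contributes 1/distance to every item it purchased, and the highest-scoring
# items are recommended. A standalone sketch of that accumulation:

```python
def inverse_distance_scores(neighbor_dists, neighbor_purchases, eps=1e-7):
    # accumulate 1/distance votes for every item a neighbor purchased
    scores = {}
    for dist, items in zip(neighbor_dists, neighbor_purchases):
        w = 1.0 / max(dist, eps)  # guard against zero distance
        for item in items:
            scores[item] = scores.get(item, 0.0) + w
    return scores

# neighbor at distance 1.0 bought items 1 and 2; neighbor at distance 2.0 bought item 2
scores = inverse_distance_scores([1.0, 2.0], [[1, 2], [2]])
top = sorted(scores, key=scores.get, reverse=True)[:9]
```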
# + id="NL4KP4XrasQn"
def knn_training_and_prediction():
# load processed datasets
TrainSet = getUserFeaturesTrainSet()
PurchasedItems = getPurchasedItemsTrainSet()
# initiate knn model object
model = KNNModel(TrainSet, PurchasedItems, 50)
# get test set
userIDs, TestSet = getUserFeaturesTestSet()
# make prediction
recommendedItems = model.predict(TestSet)
# format data according to submission format and write to file
outFile = '/tf/shared/track2_output.csv'
    with open(outFile, "w") as f:
        f.write('id,itemids')
        for i in range(len(userIDs)):
            f.write('\n')
            itemList = recommendedItems[i]
            itemString = ' '.join([str(j) for j in itemList])
            outString = str(userIDs[i]) + ',' + itemString
            f.write(outString)
# + [markdown] id="igLLZV6gGu-v"
# ---
# + [markdown] id="koFQxtgos6gE"
# ## Jobs
# + colab={"base_uri": "https://localhost:8080/"} id="2y8mdDjds6dr" executionInfo={"elapsed": 5, "status": "ok", "timestamp": 1636267384268, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "13037694610922482904"}, "user_tz": -330} outputId="c4db2615-7617-4509-ae6a-3bbb88430e98"
logger.info('JOB START: DOWNLOAD_RAW_DATASET')
download_dataset()
logger.info('JOB END: DOWNLOAD_RAW_DATASET')
# + colab={"base_uri": "https://localhost:8080/"} id="3ig3tPpB2Fx-" executionInfo={"elapsed": 14250, "status": "ok", "timestamp": 1636267418097, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "13037694610922482904"}, "user_tz": -330} outputId="bbc27a15-6f9c-4e5a-f227-e6cfd9363c6f"
logger.info('JOB START: DATASET_CONVERSION_PARQUET_TO_CSV')
convert_dataset()
logger.info('JOB END: DATASET_CONVERSION_PARQUET_TO_CSV')
# + id="toXUL9pVItp_"
| _docs/nbs/T445041-IEEE-BigData-2021-RecSys-ComFOR-RL-Model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Deep Learning Models -- A collection of various deep learning architectures, models, and tips for TensorFlow and PyTorch in Jupyter Notebooks.
# - Author: <NAME>
# - GitHub Repository: https://github.com/rasbt/deeplearning-models
# %load_ext watermark
# %watermark -a '<NAME>' -v -p torch
# - Runs on CPU or GPU (if available)
# # Model Zoo -- Convolutional Neural Network
# ## Imports
# +
import time
import numpy as np
import torch
import torch.nn.functional as F
from torchvision import datasets
from torchvision import transforms
from torch.utils.data import DataLoader
if torch.cuda.is_available():
torch.backends.cudnn.deterministic = True
# -
# ## Settings and Dataset
# +
##########################
### SETTINGS
##########################
# Device
device = torch.device("cuda:3" if torch.cuda.is_available() else "cpu")
# Hyperparameters
random_seed = 1
learning_rate = 0.05
num_epochs = 10
batch_size = 128
# Architecture
num_classes = 10
##########################
### MNIST DATASET
##########################
# Note transforms.ToTensor() scales input images
# to 0-1 range
train_dataset = datasets.MNIST(root='data',
train=True,
transform=transforms.ToTensor(),
download=True)
test_dataset = datasets.MNIST(root='data',
train=False,
transform=transforms.ToTensor())
train_loader = DataLoader(dataset=train_dataset,
batch_size=batch_size,
shuffle=True)
test_loader = DataLoader(dataset=test_dataset,
batch_size=batch_size,
shuffle=False)
# Checking the dataset
for images, labels in train_loader:
print('Image batch dimensions:', images.shape)
print('Image label dimensions:', labels.shape)
break
# -
# ## Model
# +
##########################
### MODEL
##########################
class ConvNet(torch.nn.Module):
def __init__(self, num_classes):
super(ConvNet, self).__init__()
# calculate same padding:
# (w - k + 2*p)/s + 1 = o
# => p = (s(o-1) - w + k)/2
# 28x28x1 => 28x28x4
self.conv_1 = torch.nn.Conv2d(in_channels=1,
out_channels=4,
kernel_size=(3, 3),
stride=(1, 1),
padding=1) # (1(28-1) - 28 + 3) / 2 = 1
# 28x28x4 => 14x14x4
self.pool_1 = torch.nn.MaxPool2d(kernel_size=(2, 2),
stride=(2, 2),
padding=0) # (2(14-1) - 28 + 2) = 0
# 14x14x4 => 14x14x8
self.conv_2 = torch.nn.Conv2d(in_channels=4,
out_channels=8,
kernel_size=(3, 3),
stride=(1, 1),
padding=1) # (1(14-1) - 14 + 3) / 2 = 1
# 14x14x8 => 7x7x8
self.pool_2 = torch.nn.MaxPool2d(kernel_size=(2, 2),
stride=(2, 2),
padding=0) # (2(7-1) - 14 + 2) = 0
self.linear_1 = torch.nn.Linear(7*7*8, num_classes)
def forward(self, x):
out = self.conv_1(x)
out = F.relu(out)
out = self.pool_1(out)
out = self.conv_2(out)
out = F.relu(out)
out = self.pool_2(out)
logits = self.linear_1(out.view(-1, 7*7*8))
probas = F.softmax(logits, dim=1)
return logits, probas
torch.manual_seed(random_seed)
model = ConvNet(num_classes=num_classes)
model = model.to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
# -
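# The padding comments in `ConvNet.__init__` all come from the output-size
# formula $o = (w - k + 2p)/s + 1$. A small helper makes the 28→28→14→14→7
# shape trace above easy to verify:

```python
def conv_out(w, k, s, p):
    # output width of a conv/pool layer: (w - k + 2p)/s + 1
    return (w - k + 2 * p) // s + 1

# conv_1, pool_1, conv_2, pool_2 from the model above
sizes = [conv_out(28, 3, 1, 1), conv_out(28, 2, 2, 0),
         conv_out(14, 3, 1, 1), conv_out(14, 2, 2, 0)]
```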
# ## Training
# +
def compute_accuracy(model, data_loader):
correct_pred, num_examples = 0, 0
for features, targets in data_loader:
features = features.to(device)
targets = targets.to(device)
logits, probas = model(features)
_, predicted_labels = torch.max(probas, 1)
num_examples += targets.size(0)
correct_pred += (predicted_labels == targets).sum()
return correct_pred.float()/num_examples * 100
start_time = time.time()
for epoch in range(num_epochs):
model = model.train()
for batch_idx, (features, targets) in enumerate(train_loader):
features = features.to(device)
targets = targets.to(device)
### FORWARD AND BACK PROP
logits, probas = model(features)
cost = F.cross_entropy(logits, targets)
optimizer.zero_grad()
cost.backward()
### UPDATE MODEL PARAMETERS
optimizer.step()
### LOGGING
if not batch_idx % 50:
print ('Epoch: %03d/%03d | Batch %03d/%03d | Cost: %.4f'
%(epoch+1, num_epochs, batch_idx,
len(train_loader), cost))
model = model.eval()
print('Epoch: %03d/%03d training accuracy: %.2f%%' % (
epoch+1, num_epochs,
compute_accuracy(model, train_loader)))
print('Time elapsed: %.2f min' % ((time.time() - start_time)/60))
print('Total Training Time: %.2f min' % ((time.time() - start_time)/60))
# -
# ## Evaluation
with torch.set_grad_enabled(False): # save memory during inference
print('Test accuracy: %.2f%%' % (compute_accuracy(model, test_loader)))
# %watermark -iv
| pytorch_ipynb/cnn/cnn-basic.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
from bs4 import BeautifulSoup
import requests
from splinter import Browser
url="https://mars.nasa.gov/news/"
from webdriver_manager.chrome import ChromeDriverManager
def init_browser():
executable_path = {'executable_path': ChromeDriverManager().install()}
return Browser('chrome', **executable_path, headless=False)
browser= init_browser()
url = 'https://mars.nasa.gov/news/'
browser.visit(url)
# create HTML object
html = browser.html
#parse HTML with beautiful soup
soup = BeautifulSoup(html, 'html.parser')
news_title = soup.find_all('div',class_='content_title')[1].text
news_paragraph=soup.find('div',class_='article_teaser_body').text
print(news_title)
print(news_paragraph)
# # IMAGE
url = "https://data-class-jpl-space.s3.amazonaws.com/JPL_Space/index.html"
from webdriver_manager.chrome import ChromeDriverManager
def init_browser():
executable_path = {'executable_path': ChromeDriverManager().install()}
return Browser('chrome', **executable_path, headless=False)
browser= init_browser()
url = 'https://data-class-jpl-space.s3.amazonaws.com/JPL_Space/index.html'
browser.visit(url)
# create HTML object
html = browser.html
# +
soup = BeautifulSoup(html, "html.parser")
# -
print(soup.prettify())
body = soup.find_all("body")
div = body[0].find("div", class_="floating_text_area")
image = div.find("a")
image
pic_source = []
pic = image['href']
pic_source.append(pic)
featured_image_url = 'https://data-class-jpl-space.s3.amazonaws.com/JPL_Space/' + pic
featured_image_url
# # MARS FACTS
url='https://space-facts.com/mars/'
response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')
table = pd.read_html(url)
facts_df = table[0]
facts_df = facts_df.rename(columns={0: "facts",
                                    1: "values"})
facts_html = facts_df.to_html()
facts_html = facts_html.replace("\n","")
facts_html
# # IMAGES OF THE HEMISPHERES
url = "https://astrogeology.usgs.gov/search/map/Mars/Viking/cerberus_enhanced"
response = requests.get(url)
soup = BeautifulSoup(response.text, "html.parser")
pic = soup.find_all( "div", class_ = "wide-image-wrapper")
image = pic[0].find("li")
image_url = image.find("a")['href']
hemisphere_title = soup.find("h2", class_="title").text
print(hemisphere_title)
print(image_url)
url_2 = "https://astrogeology.usgs.gov/search/map/Mars/Viking/schiaparelli_enhanced"
response_2 = requests.get(url_2)
soup_2 = BeautifulSoup(response_2.text, "html.parser")
pic_2 = soup_2.find_all( "div", class_ = "wide-image-wrapper")
image_2 = pic_2[0].find("li")
image_url_2 = image_2.find("a")['href']
hemisphere_title_2 = soup_2.find("h2", class_="title").text
print(hemisphere_title_2)
print(image_url_2)
url_3="https://astrogeology.usgs.gov/search/map/Mars/Viking/syrtis_major_enhanced"
response_3 = requests.get(url_3)
soup_3 = BeautifulSoup(response_3.text, "html.parser")
pic_3 = soup_3.find_all( "div", class_ = "wide-image-wrapper")
image_3 = pic_3[0].find("li")
image_url_3 = image_3.find("a")['href']
hemisphere_title_3 = soup_3.find("h2", class_="title").text
print(hemisphere_title_3)
print(image_url_3)
url_4="https://astrogeology.usgs.gov/search/map/Mars/Viking/valles_marineris_enhanced"
response_4 = requests.get(url_4)
soup_4 = BeautifulSoup(response_4.text, "html.parser")
pic_4 = soup_4.find_all( "div", class_ = "wide-image-wrapper")
image_4 = pic_4[0].find("li")
image_url_4 = image_4.find("a")['href']
hemisphere_title_4 = soup_4.find("h2", class_="title").text
print(hemisphere_title_4)
print(image_url_4)
hemisphere_dict= [
{"title":hemisphere_title, "image_url":image_url},
{"title":hemisphere_title_2, "image_url":image_url_2},
{"title":hemisphere_title_3, "image_url":image_url_3},
{"title":hemisphere_title_4, "image_url":image_url_4},
]
hemisphere_dict
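# The four per-hemisphere blocks above repeat the same scrape with only the
# URL slug changing, so they collapse naturally into one loop. In the sketch
# below the scraper is injected as a parameter (`fake_scrape` is a stand-in
# used purely for demonstration); a real run would pass a function doing the
# `requests` + `BeautifulSoup` work shown above.

```python
def build_hemisphere_list(slugs, scrape_hemisphere):
    # scrape_hemisphere(url) -> (title, image_url); injected so the loop itself needs no network
    base = "https://astrogeology.usgs.gov/search/map/Mars/Viking/"
    return [{"title": title, "image_url": image_url}
            for title, image_url in (scrape_hemisphere(base + s) for s in slugs)]

slugs = ["cerberus_enhanced", "schiaparelli_enhanced",
         "syrtis_major_enhanced", "valles_marineris_enhanced"]
# stand-in scraper for demonstration only
fake_scrape = lambda url: (url.rsplit("/", 1)[-1], url + "/full.jpg")
hemisphere_dicts = build_hemisphere_list(slugs, fake_scrape)
```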
| mission_to_mars.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # ShakeMe: Key Generation Based on Accelerometer Signals Obtained From Synchronous Sensor Motion
# +
# %matplotlib inline
from scipy.io import loadmat
from scipy import signal
from scipy.stats import skew, kurtosis
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib as mpl
from IPython.display import Latex
from sklearn.metrics import confusion_matrix, accuracy_score, precision_score, recall_score, f1_score
mpl.rcParams['font.size'] = 24
mpl.rcParams['lines.linewidth'] = 2
mpl.rcParams['figure.figsize'] = (10, 6)
mpl.rcParams['lines.linewidth'] = 3
mpl.rcParams['lines.markersize'] = 10
mpl.rcParams['font.size'] = 16
mpl.rcParams['figure.figsize'] = (10*1.2, 6*1.2)
# -
# ## Load accelerometer data set
#
# In this work, two Samsung Galaxy Nexus smart-phones are used to acquire accelerometer sensor data. The data are acquired from linear_acceleration sensor, which is a software-based sensor, of Android API. The sampling rate $F_s$ of the sensor is $100$Hz.
#
# The positive class test data $D_1$ consists of $150$ shaking experiments recorded from $10$ individuals ($15$ experiments per individual). Five of the test subjects are male and five of them are female. All test subjects are asked to shake two devices ($1$ and $2$) together in one hand for $5$ seconds which results in approximately $500$ time samples in an acceleration signal. Except this, no other instructions are given to the individuals. For negative class test data $D_2$, in turn, $300$ test samples are randomly generated from $D_1$ such that first two random individuals are selected out of $10$ individuals and then two acceleration signals of those two individuals are randomly selected. This pair of signals constitutes one negative test sample of $D_2$.
datasets = loadmat('ShakeMe.mat')['ShakeMe']
datasets
# +
fig, ax = plt.subplots(2,1)
ax[0].plot(datasets[0,0][:,1:])
ax[0].set_title('device 1 raw signal')
ax[0].set_xlabel('sample index')
ax[0].set_ylabel('acceleration (m/s^2)')
ax[1].plot(datasets[0,1][:,1:])
ax[1].set_title('device 2 raw signal')
ax[1].set_xlabel('sample index')
ax[1].set_ylabel('acceleration (m/s^2)')
fig.tight_layout()
# -
labels = np.concatenate(datasets[:,-1])
labels = labels.flatten()
labels
# convert data to dataframe
df_datasets = pd.DataFrame(datasets[:,:-1] , columns=['acceleration1', 'acceleration2'])
df_datasets['acceleration_pair'] = df_datasets.apply(lambda row:
(row['acceleration1'], row['acceleration2']), axis=1)
df_datasets.drop(columns=['acceleration1', 'acceleration2'], inplace=True)
df_datasets['label'] = labels
df_datasets.head()
nsignals = len(df_datasets)
nsignals
npositive_class_signals = np.sum(df_datasets['label'] == 1)
npositive_class_signals
nnegative_class_signals = np.sum(df_datasets['label'] == 0)
nnegative_class_signals
# +
is_confirmed = np.zeros((nsignals,2))
nbits = np.arange(3, 8)
nfeatures = 10
peak_threshold_raw = 0
peak_threshold_filtered = 0.01
kernel_size = np.arange(5, 51, 5)
df_metric = pd.DataFrame(columns=['preprocessing', 'criteria', 'nbit', 'kernel_size',
'conf_mat', 'accuracy', 'precision', 'recall', 'f1'])
normalizer_vector = np.array([10, 10, 10, 10, 1, 1, 1, 10, 1e5, 100]) # pre-defined feature normalizer vector
# -
# ## Pre-processing of accelerometer signals
# +
euclidean_norm = lambda accel: np.sqrt(accel[:,0]**2 + accel[:,1]**2 + accel[:,2]**2)
df_datasets['acceleration_norm'] = df_datasets['acceleration_pair'].apply(
lambda row: tuple(euclidean_norm(elem[:,1:]) for elem in row) ) # elem[:,1:] skips the time stamp
# +
fig, ax = plt.subplots(2,1)
ax[0].plot(df_datasets['acceleration_norm'].iloc[0][0])
ax[0].set_title('device 1 magnitude signal')
ax[0].set_xlabel('sample index')
ax[0].set_ylabel('acceleration (m/s^2)')
ax[1].plot(df_datasets['acceleration_norm'].iloc[0][1])
ax[1].set_title('device 2 magnitude signal')
ax[1].set_xlabel('sample index')
ax[1].set_ylabel('acceleration (m/s^2)')
fig.tight_layout()
# -
# ## Feature Extraction (FeX) From Accelerometer Time Series
#
# In this work, $10$ different features were used: number of peaks, root-mean-square (RMS), mean, variance, skewness, kurtosis, crest factor, peak-to-peak, autocorrelation and average power. These features are extracted from the whole acceleration signal without any windowing. Since the ranges of the feature values differ considerably, the feature values are normalized before the feature signal is passed to the quantizer.
def FeX(acc_signal, min_peak_distance, min_peak_height, peak_threshold):
"""Data descriptive statistics -- summary statistics"""
crest_factor = lambda sig: 0.5 * (max(sig) - min(sig))/(np.sqrt(np.mean(sig**2)))
pks_acc_signal,_ = signal.find_peaks(acc_signal,
height=min_peak_height,
threshold=peak_threshold,
distance=min_peak_distance+0.0000001)
rms_acc_signal = np.sqrt(np.mean(acc_signal**2))
mean_acc_signal = np.mean(acc_signal)
var_acc_signal = np.var(acc_signal, ddof=1)
skewness_acc_signal = skew(acc_signal)
kurtosis_acc_signal = kurtosis(acc_signal) + 3 # python kurtosis fct subtracts 3, therefore 3 is added
cf_acc_signal = crest_factor(acc_signal)
p2p_acc_signal = max(acc_signal) - min(acc_signal)
autocorr_acc_signal = np.correlate(acc_signal, acc_signal)[0]
pband_acc_signal = np.linalg.norm(acc_signal)**2/len(acc_signal)
feature_acc_signal = [len(pks_acc_signal), rms_acc_signal, mean_acc_signal, var_acc_signal, skewness_acc_signal,
kurtosis_acc_signal, cf_acc_signal, p2p_acc_signal, autocorr_acc_signal, pband_acc_signal]
feature_acc_signal = np.asarray(feature_acc_signal)
return feature_acc_signal
df_datasets['acceleration_feature'] = df_datasets['acceleration_norm'].apply(
lambda row: tuple(FeX(elem, 3, 0, peak_threshold_raw) / normalizer_vector for elem in row) )
# +
feature_filtered_acc_signal1 = np.full((nsignals, nfeatures, len(kernel_size)), np.inf)
feature_filtered_acc_signal2 = np.full((nsignals, nfeatures, len(kernel_size)), np.inf)
for idx_ks, ks in enumerate(kernel_size):
lp_filter = 1/ks * np.ones(ks)
df_filtered_acc = df_datasets['acceleration_norm'].apply(
lambda row: tuple(signal.lfilter(lp_filter, 1, elem) for elem in row) )
feature_filtered_acc_signal1[:, :, idx_ks] = np.stack(df_filtered_acc.apply(
lambda row: FeX(row[0], 3, 0, peak_threshold_filtered) / normalizer_vector).values, axis=0)
feature_filtered_acc_signal2[:, :, idx_ks] = np.stack(df_filtered_acc.apply(
lambda row: FeX(row[1], 3, 0, peak_threshold_filtered) / normalizer_vector).values, axis=0)
print("ks: %d/%d " %(idx_ks+1, len(kernel_size)) )
#df_datasets['acceleration1_filtered_feature'] = feature_filtered_acc_signal1
#df_datasets['acceleration2_filtered_feature'] = feature_filtered_acc_signal2
# -
# ## Key Generation
#
# * The ultimate objective is that both devices independently generate exactly the same key from the shared shaking process, without exchanging any acceleration signal content. Moreover, we want our algorithm to generate different keys when the devices are not shaken together.
#
# * Although both signals are similar, they are not identical. As a consequence, similar raw signals result in similar feature signals. However, we want our key generation algorithm to map similar feature signals to exactly the same key, which requires a hashing step. This can be realized with a quantizer, which can also be interpreted as a classifier.
#
# * Before the normalized feature signal is passed to the quantizer, it is rescaled according to the number of bits used in the binary representation of the key. The canonical decimal-to-binary conversion is adopted for the mapping. After quantization and binary conversion, a bit stream of fixed length is generated, determined by the number of features and the number of bits per feature.
# * It is worth noting that this quantization method is very simple to implement and compute.
def generate_key(feature_signal, nbits):
"""generates information signal (key) from a given feature signal using Q(.) with nbits"""
scaled_feature_signal = feature_signal/max(abs(feature_signal))
scaled_feature_signal = np.round(2**(nbits-1) * (scaled_feature_signal + 1))
bitstream = [format(feat, 'b').zfill(nbits+1) for feat in scaled_feature_signal.astype(int)]
information_signal = ''.join(bitstream)
return information_signal
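# To make the quantizer mapping concrete, here is a quick sanity check with a toy feature vector (the values are invented purely for illustration; `generate_key` is restated so the snippet runs on its own):

```python
import numpy as np

def generate_key(feature_signal, nbits):
    # scale to [-1, 1], shift to [0, 2], quantize to integers in [0, 2**nbits]
    scaled = feature_signal / max(abs(feature_signal))
    scaled = np.round(2**(nbits - 1) * (scaled + 1))
    # 2**nbits needs nbits + 1 binary digits, hence the zfill(nbits + 1)
    return ''.join(format(f, 'b').zfill(nbits + 1) for f in scaled.astype(int))

features = np.array([0.5, -1.0, 1.0])  # toy feature vector
key = generate_key(features, 3)
print(key)  # '011000001000' -> the quantized values 6, 0 and 8 as 4-bit blocks
```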
def compute_metrics(gt, pred):
conf_mat = confusion_matrix(y_true=gt, y_pred=pred, labels=[True, False])
accuracy = accuracy_score(gt, pred)
precision = precision_score(gt, pred)
recall = recall_score(gt, pred, labels=[True, False])
f1 = f1_score(gt, pred)
metric = {'conf_mat': conf_mat,
'accuracy': accuracy,
'precision': precision,
'recall': recall,
'f1': f1}
return metric
for idx_nb, nbit in enumerate(nbits):
df_datasets['key_raw_descstat'] = df_datasets['acceleration_feature'].apply(
lambda row: tuple(generate_key(elem, nbit) for elem in row))
is_confirmed[:, 0] = df_datasets['key_raw_descstat'].apply(
lambda row: sum([b1 == b2 for (b1,b2) in zip(row[0], row[1])])).values
for idx_ks, ks in enumerate(kernel_size):
key1_filtered_descstat = []
key2_filtered_descstat = []
for idx_signal in range(nsignals):
key1_filtered_descstat.append(generate_key(feature_filtered_acc_signal1[idx_signal, :, idx_ks], nbit))
key2_filtered_descstat.append(generate_key(feature_filtered_acc_signal2[idx_signal, :, idx_ks], nbit))
is_confirmed[idx_signal, 1] = sum([b1 == b2 for (b1,b2) in zip(key1_filtered_descstat[idx_signal],
key2_filtered_descstat[idx_signal])])
# Performance Assessment (Confusion Matrix (filtered))
y_pred_strict = is_confirmed[:, 1] == (nbit+1) * nfeatures
metrics_filtered_strict = compute_metrics(labels.astype(bool), y_pred_strict)
df_metric = df_metric.append({'preprocessing': 'filtered',
'criteria': 'strict',
'nbit': nbit+1,
'kernel_size': ks,
'conf_mat': metrics_filtered_strict['conf_mat'],
'accuracy': metrics_filtered_strict['accuracy'],
'precision': metrics_filtered_strict['precision'],
'recall': metrics_filtered_strict['recall'],
'f1': metrics_filtered_strict['f1']}, ignore_index=True)
y_pred_relaxed = is_confirmed[:, 1] >= ((nbit+1)*nfeatures - (0.1 * (nbit+1)*nfeatures))
metrics_filtered_relaxed = compute_metrics(labels.astype(bool), y_pred_relaxed)
df_metric = df_metric.append({'preprocessing': 'filtered',
'criteria': 'relaxed',
'nbit': nbit+1,
'kernel_size': ks,
'conf_mat': metrics_filtered_relaxed['conf_mat'],
'accuracy': metrics_filtered_relaxed['accuracy'],
'precision': metrics_filtered_relaxed['precision'],
'recall': metrics_filtered_relaxed['recall'],
'f1': metrics_filtered_relaxed['f1']}, ignore_index=True)
print("nb:%d/%d ks:%d/%d" %(idx_nb+1, len(nbits), idx_ks+1, len(kernel_size)))
# Performance Assessment (Confusion Matrix (raw))
y_pred_strict = is_confirmed[:, 0] == (nbit+1)*nfeatures
metrics_raw_strict = compute_metrics(labels.astype(bool), y_pred_strict)
df_metric = df_metric.append({'preprocessing': 'raw',
'criteria': 'strict',
'nbit': nbit+1,
'conf_mat': metrics_raw_strict['conf_mat'],
'accuracy': metrics_raw_strict['accuracy'],
'precision': metrics_raw_strict['precision'],
'recall': metrics_raw_strict['recall'],
'f1': metrics_raw_strict['f1']}, ignore_index=True)
y_pred_relaxed = is_confirmed[:, 0] >= (nbit+1)*nfeatures - 0.1 * (nbit+1)*nfeatures
metrics_raw_relaxed = compute_metrics(labels.astype(bool), y_pred_relaxed)
df_metric = df_metric.append({'preprocessing': 'raw',
'criteria': 'relaxed',
'nbit': nbit+1,
'conf_mat': metrics_raw_relaxed['conf_mat'],
'accuracy': metrics_raw_relaxed['accuracy'],
'precision': metrics_raw_relaxed['precision'],
'recall': metrics_raw_relaxed['recall'],
'f1': metrics_raw_relaxed['f1']}, ignore_index=True)
# ## Performance Assessment Summary
#
# - hard constraint (100% matches are required)
# - relaxed conditions (90% matches are enough)
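# The two criteria reduce to simple threshold checks on the number of matching bit positions; a minimal illustration (the helper names are ours, not part of the pipeline above):

```python
def confirm_strict(n_matches, total_bits):
    # hard constraint: every single bit must agree
    return n_matches == total_bits

def confirm_relaxed(n_matches, total_bits, tol=0.1):
    # relaxed condition: at least 90% of the bits must agree
    return n_matches >= (1 - tol) * total_bits

total_bits = 4 * 10  # e.g. (nbit + 1) = 4 bits per feature, 10 features
print(confirm_strict(39, total_bits), confirm_relaxed(39, total_bits))  # False True
```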
idx_best_accuracy = df_metric.groupby(['preprocessing', 'criteria'])['accuracy'].transform(max) == df_metric['accuracy']
df_metric[idx_best_accuracy]
idx_best_f1 = df_metric.groupby(['preprocessing', 'criteria'])['f1'].transform(max) == df_metric['f1']
df_metric[idx_best_f1]
df_filt_strict = df_metric[(df_metric['preprocessing'] == 'filtered') & (df_metric['criteria'] == 'strict')]
accuracy_filt_strict = pd.pivot_table(df_filt_strict, values='accuracy', index='nbit', columns='kernel_size').values
f1_filt_strict = pd.pivot_table(df_filt_strict, values='f1', index='nbit', columns='kernel_size').values
df_filt_relaxed = df_metric[(df_metric['preprocessing'] == 'filtered') & (df_metric['criteria'] == 'relaxed')]
accuracy_filt_relaxed = pd.pivot_table(df_filt_relaxed, values='accuracy', index='nbit', columns='kernel_size').values
f1_filt_relaxed = pd.pivot_table(df_filt_relaxed, values='f1', index='nbit', columns='kernel_size').values
# +
metrics = {}
metrics['Accuracy (Strict)'] = accuracy_filt_strict
metrics['F1 (Strict)'] = f1_filt_strict
metrics['Accuracy (Relaxed)'] = accuracy_filt_relaxed
metrics['F1 (Relaxed)'] = f1_filt_relaxed
fig, axes = plt.subplots(2,2)
for ax, key in zip(axes.flatten(), metrics):
im = ax.imshow(metrics[key], aspect= 'auto', cmap="hot")
ax.set_xlabel('ks')
ax.set_ylabel('nb')
ax.set_title(key)
ax.set_xticks(range(0, len(kernel_size)))
ax.set_yticks(range(0, len(nbits)))
ax.set_xticklabels(kernel_size)
ax.set_yticklabels(nbits + 1)
fig.colorbar(im, ax=ax)
fig.tight_layout()
# -
# ## Entropy Analysis
#
# - Generate keys with the best parameters, i.e. nb = 4 bits and kernel_size = 5.
# - For the above four cases (strict/relaxed, raw/filtered) we also estimated the entropies of the information signals. The maximal possible entropy is of course $40$ bits when each of the $10$ feature signals is quantized to $nb=4$ bits and the results are concatenated to one bit stream of length $40$. The needed probabilities were obtained by estimating a multivariate Bernoulli mixture with the expectation-maximization algorithm from our keys. The Bayesian information criterion was used to determine the size of the mixture. The entropies calculated this way varied between $14$ and $16$ bits for the four cases, which provides sufficient security for typical device pairing applications.
df_datasets['key_raw_descstat'] = df_datasets['acceleration_feature'].apply(
lambda row: tuple(generate_key(elem, 3) for elem in row))
# +
idx_best_ks = np.where(kernel_size==5)[0][0]
df_datasets['filtered_acceleration_feature'] = [(f1,f2) for f1,f2 in zip(feature_filtered_acc_signal1[:,:,idx_best_ks],
feature_filtered_acc_signal2[:,:,idx_best_ks])]
df_datasets['key_filtered_descstat'] = df_datasets['filtered_acceleration_feature'].apply(
lambda row: tuple(generate_key(elem, 3) for elem in row))
# +
A = []
for idx_signal, row in df_datasets.iterrows():
A.append(list(map(int, row['key_raw_descstat'][0])))
A = np.asarray(A[:150])
print(A)
# -
vA = np.var(A, axis=0, ddof=1)
vA
A = A[:, vA>0]
A
N, D = np.shape(A)
N,D
# ## EM estimation of the density
# +
C_range = np.arange(1, 31) # hyperparameter: number of mixture components
MAX_ITER = 20
BICmtx = np.zeros((len(C_range), MAX_ITER))
Hmtx = BICmtx.copy()
# +
for idx, C in enumerate(C_range):
for it in range(MAX_ITER):
p = 1/C * np.ones((C,1)) + np.random.randn(C,1)*(0.2*1/C) # initialization of mixing parameters
q = 1/2 * np.ones((C,D)) + np.random.randn(C,D)*(0.2*1/2) # initialization of Bernoulli parameters
k = A.copy() # data
f = lambda k, q_c: np.prod(np.power(np.tile(q_c, (N, 1)), k) * np.power(1-np.tile(q_c, (N, 1)), 1-k), dtype='float64', axis=1)
p0 = p.copy()
pn = 1
q0 = q.copy()
qn = 1
while (pn > 1e-8 or qn > 1e-8):
sumC = 0
for i in range(C):
sumC += p[i] * f(k, q[i,:])
for i in range(C):
gnk = sum((p[i] * f(k, q[i, :])) / sumC)
gnkx = sum(np.tile(((p[i] * f(k, q[i, :]) / sumC))[..., np.newaxis], (1, D)) * k)
p[i] = 1/N * gnk
q[i, :] = gnkx/gnk
pn = np.median(abs(p-p0) / np.maximum(abs(p), 1e-10))
qn = np.median(abs(q.flatten()-q0.flatten()) / np.maximum(abs(q.flatten()), 1e-10))
p0 = p
q0 = q
# Entropy calculation:
sumC = 0
for i in range(C):
sumC += p[i] * f(k, q[i,:])
Hmtx[idx, it] = -sum(np.log2(sumC))
Hmtx[idx, it] /= N
# BIC calculation:
loglik = sum(np.log(sumC))
BICmtx[idx, it] = -2 * loglik + (C+C*D) * np.log(N)
BIC = np.median(BICmtx, axis=1)
H = np.median(Hmtx, axis=1)
indexi = np.argmin(BIC)
Entropy = H[indexi]
print("Entropy: ", Entropy)
# +
fig, ax = plt.subplots(2, 1)
ax[0].plot(C_range,BIC, C_range[indexi], BIC[indexi], 'ro')
ax[0].set_title("BIC")
ax[1].plot(C_range, H, C_range[indexi], H[indexi], 'ro')
ax[1].set_title("Entropy")
plt.tight_layout()
plt.show()
# -
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Approximation of 2D bubble shapes
#
# ## Outline
#
# 1. [Starting point](#starting_point)
# 2. [Volume-of-fluid data](#vof_data)
# 3. [Parameterization](#parameterization)
# 4. [Simple function approximation](#function_approximation)
# 5. [Direct approximation of the radius](#direct_approximation)
# 6. [Using prior/domain knowledge](#prior_knowledge)
# 1. [Re-scaling the data](#rescaling)
# 2. [Adding artificial data](#artificial_data)
# 3. [Creating ensemble models](#ensemble_models)
# 7. [Final notes](#final_notes)
#
# ## Starting point<a id="starting_point"></a>
#
# - parameterize geometries (non-linear interpolation)
# - create mappings from parameters to shape -> enables shape optimization
# - the concepts apply to all sorts of function approximation problems
# -
# +
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from matplotlib import collections as mc
import torch
from sklearn.preprocessing import MinMaxScaler
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
from google.colab import drive
drive.mount('/content/gdrive')
matplotlib.rcParams['figure.dpi'] = 80
print("Pandas version: {}".format(pd.__version__))
print("Numpy version: {}".format(np.__version__))
print("PyTorch version: {}".format(torch.__version__))
print("Running notebook {}".format("in colab." if IN_COLAB else "locally."))
# -
# ## Volume-of-fluid data<a id="vof_data"></a>
#
# PLIC elements: the volume fraction indicates the phase; the interface is reconstructed locally as a plane (in 3D) or a line (in 2D); the piecewise elements form the gas-liquid interface, but the elements are not connected.
#
# The volume-of-fluid data was created with the Basilisk flow solver.
#
# The training data contains the intersection points px, py of the line segments with the octree background mesh.
# +
if not IN_COLAB:
data_file_cap = "../data/bhaga_03_l16.csv"
data_file_eli = "../data/water_05_l16.csv"
else:
data_file_cap = "https://raw.githubusercontent.com/AndreWeiner/machine-learning-applied-to-cfd/master/data/bhaga_03_l16.csv"
data_file_eli = "https://raw.githubusercontent.com/AndreWeiner/machine-learning-applied-to-cfd/master/data/water_05_l16.csv"
data_cap = pd.read_csv(data_file_cap, header=0)
data_eli = pd.read_csv(data_file_eli, header=0)
print("The spherical cap data set contains {} points.".format(data_cap.shape[0]))
print("The ellipse data set contains {} points.".format(data_eli.shape[0]))
data_eli.head()
# +
if IN_COLAB:
    # %matplotlib inline
    pass
else:
    # %matplotlib notebook
    pass
fontsize = 14
fig, ax = plt.subplots(1, figsize=(12, 10))
line_segments_cap = [[(data_cap.py[i], data_cap.px[i]),(data_cap.py[i+1], data_cap.px[i+1])]
for i in range(0, data_cap.shape[0] - 1, 2) ]
lc_cap = mc.LineCollection(line_segments_cap, linewidths=1, colors='C0', label=r"spherical cap PLIC elements")
ax.add_collection(lc_cap)
line_segments_eli = [[(data_eli.py[i], data_eli.px[i]),(data_eli.py[i+1], data_eli.px[i+1])]
for i in range(0, data_eli.shape[0] - 1, 2) ]
lc_eli = mc.LineCollection(line_segments_eli, linewidths=1, colors='C1', label=r"ellipse PLIC elements")
ax.add_collection(lc_eli)
ax.autoscale()
x = [i[0] for j in line_segments_cap for i in j]
y = [i[1] for j in line_segments_cap for i in j]
ax.scatter(x, y, marker='x', color='C0', s=30, linewidth=0.5)
x = [i[0] for j in line_segments_eli for i in j]
y = [i[1] for j in line_segments_eli for i in j]
ax.scatter(x, y, marker='x', color='C1', s=30, linewidth=0.5)
ax.set_aspect('equal')
ax.set_xlabel(r"$x$", fontsize=fontsize)
ax.set_ylabel(r"$y$", fontsize=fontsize)
ax.set_xlim(0.0, 0.9)
plt.legend()
plt.show()
# -
# ## Parameterization<a id="parameterization"></a>
# We transform to polar coordinates because of the fixed argument range; we want to learn r(phi).
# x and y are swapped such that the bubble rises in y.
def polar_coordinates(px, py):
'''Converts radius from Cartesian coordinates r(x,y) to polar coordinates r(phi).
Parameters
----------
px, py - array-like: x and y coordinates of PLIC points
Returns
-------
radius - array-like: radii of PLIC points
phi - array-like: polar angle
'''
radius = np.sqrt(np.square(px) + np.square(py))
phi = np.arccos(py / radius)
return radius, phi
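# A quick sanity check of the conversion (the function is restated so the snippet is self-contained): a point on the positive py-axis maps to phi = 0 (the "north" pole), a point on the px-axis to phi = pi/2.

```python
import numpy as np

def polar_coordinates(px, py):
    radius = np.sqrt(np.square(px) + np.square(py))
    phi = np.arccos(py / radius)  # angle measured from the py-axis ("north")
    return radius, phi

r, phi = polar_coordinates(np.array([0.0, 1.0]), np.array([1.0, 0.0]))
print(r)    # [1. 1.]
print(phi)  # [0.         1.57079633]
```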
# +
# %matplotlib inline
fig = plt.figure(figsize=(14,10))
ax1 = plt.subplot(121, projection='polar')
ax2 = plt.subplot(122)
radius_cap, phi_cap = polar_coordinates(data_cap.py.values, data_cap.px.values)
radius_eli, phi_eli = polar_coordinates(data_eli.py.values, data_eli.px.values)
ax1.set_theta_zero_location("N")
ax1.scatter(phi_cap, radius_cap, marker='x', color='C0', s=30, linewidth=0.5)
ax1.scatter(phi_eli, radius_eli, marker='x', color='C1', s=30, linewidth=0.5)
ax1.set_xlim(0.0, np.pi)
ax1.set_title("Polar plot", loc='left', fontsize=fontsize)
ax1.set_xlabel(r"$\varphi$", fontsize=fontsize)
ax1.set_ylabel(r"$r$", fontsize=fontsize)
ax2.scatter(phi_cap, radius_cap, marker='x', color='C0', s=30, linewidth=0.5, label=r"spherical cap")
ax2.scatter(phi_eli, radius_eli, marker='x', color='C1', s=30, linewidth=0.5, label=r"ellipse")
ax2.set_xlabel(r"$\varphi$", fontsize=fontsize)
ax2.set_ylabel(r"$r$", fontsize=fontsize)
asp = np.diff(ax1.get_xlim())[0] / np.diff(ax1.get_ylim())[0]
ax2.set_aspect(asp)
ax2.legend()
plt.show()
# -
# ## Simple function approximation<a id="function_approximation"></a>
#
# A simple neural network with a few hyperparameters: number of layers, neurons per layer, and the activation function.
# +
torch.set_default_tensor_type(torch.DoubleTensor)
class SimpleMLP(torch.nn.Module):
def __init__(self, n_inputs=1, n_outputs=1, n_layers=1, n_neurons=10, activation=torch.sigmoid):
super().__init__()
self.n_inputs = n_inputs
self.n_outputs = n_outputs
self.n_layers = n_layers
self.n_neurons = n_neurons
self.activation = activation
self.layers = torch.nn.ModuleList()
# input layer to first hidden layer
self.layers.append(torch.nn.Linear(self.n_inputs, self.n_neurons))
# add more hidden layers if specified
if self.n_layers > 1:
for hidden in range(self.n_layers-1):
self.layers.append(torch.nn.Linear(self.n_neurons, self.n_neurons))
# last hidden layer to output layer
self.layers.append(torch.nn.Linear(self.n_neurons, self.n_outputs))
def forward(self, x):
for i_layer in range(self.n_layers):
x = self.activation(self.layers[i_layer](x))
return self.layers[-1](x)
# -
# The function uses the MSE as loss function; whenever the loss decreases, we save the model weights.
# The same function approximator also works for many inputs and many outputs (with minor modifications).
def approximate_function(x, y, model, l_rate=0.001, max_iter=1000, path=None, verbose=100):
'''Train MLP to approximate a function y(x).
The training stops when the maximum number of training epochs is reached.
Parameters
----------
x - array-like : argument of the function
y - array-like : function value at x
model - SimpleMLP : PyTorch model which is adjusted to approximate the function
l_rate - Float : learning rate for weight optimization
max_iter - Integer: maximum number of allowed training epochs
path - String : location to save model weights
verbose - Integer : defines frequency for loss information output
Returns
-------
model - SimpleMLP: trained version of the given model
'''
# convert coordinates to torch tensors
x_tensor = torch.from_numpy(x).unsqueeze_(-1)
y_tensor = torch.from_numpy(y)
# define loss function
criterion = torch.nn.MSELoss()
# define optimizer
optimizer = torch.optim.Adam(params=model.parameters(), lr=l_rate)
# training loop
best_loss = 1.0E5
count = 0
for e in range(1, max_iter+1):
# backpropagation
optimizer.zero_grad()
output = model.forward(x_tensor)
loss = criterion(output.squeeze(dim=1), y_tensor)
loss.backward()
optimizer.step()
# check error
diff = output.squeeze(dim=1) - y_tensor
max_diff = np.amax(np.absolute(diff.detach().numpy()))
if loss.item() < best_loss:
count += 1
best_loss = loss.item()
if count % verbose == 0:
print("Loss/max. dev. decreased in epoch {}: {}/{}".format(e, loss.item(), max_diff))
if path is not None:
if count % verbose == 0: print("Saving model as {}".format(path))
torch.save(model.state_dict(), path)
return model.eval()
def set_path(name=None):
if IN_COLAB:
return F"/content/gdrive/My Drive/" + name
else:
return "models/" + name
# ## Direct approximation of the radius<a id="direct_approximation"></a>
#
# The straightforward approach: learn the radius directly as a function of the polar angle.
radius_model_cap_direct = SimpleMLP(n_layers=6, n_neurons=40)
radius_model_cap_direct = approximate_function(phi_cap, radius_cap, radius_model_cap_direct, max_iter=1500,
l_rate=0.01, path=set_path("radius_model_cap_direct.pt"))
radius_model_eli_direct = SimpleMLP(n_layers=6, n_neurons=40)
radius_model_eli_direct = approximate_function(phi_eli, radius_eli, radius_model_eli_direct, max_iter=1500,
l_rate=0.01, path=set_path("radius_model_eli_direct.pt"))
# +
fig, ax = plt.subplots(figsize=(14, 8))
eval_phi = np.linspace(-0.1, np.pi+0.1, 200)
phi_tensor = torch.from_numpy(eval_phi).unsqueeze_(-1)
# load best weights and compute forward pass
radius_model_cap_direct.load_state_dict(torch.load(set_path("radius_model_cap_direct.pt")))
model_radius_cap = radius_model_cap_direct.forward(phi_tensor).detach().squeeze().numpy()
radius_model_eli_direct.load_state_dict(torch.load(set_path("radius_model_eli_direct.pt")))
model_radius_eli = radius_model_eli_direct.forward(phi_tensor).detach().squeeze().numpy()
# evaluate maximum relative deviation
phi_cap_tensor = torch.from_numpy(phi_cap).unsqueeze_(-1)
phi_eli_tensor = torch.from_numpy(phi_eli).unsqueeze_(-1)
model_radius_cap_data = radius_model_cap_direct.forward(phi_cap_tensor).detach().squeeze().numpy()
model_radius_eli_data = radius_model_eli_direct.forward(phi_eli_tensor).detach().squeeze().numpy()
diff_cap = np.absolute(model_radius_cap_data - radius_cap)
diff_eli = np.absolute(model_radius_eli_data - radius_eli)
max_pos_cap = np.argmax(diff_cap)
max_pos_eli = np.argmax(diff_eli)
print(r"Maximum relative deviation for spherical cap: {:2.2f}% at angle {:2.2f}.".format(
np.amax(diff_cap)/radius_cap[max_pos_cap] * 100, phi_cap[max_pos_cap]))
print(r"Maximum relative deviation for ellipse: {:2.2f}% at angle {:2.2f}.".format(
    np.amax(diff_eli)/radius_eli[max_pos_eli] * 100, phi_eli[max_pos_eli]))
ax.plot(eval_phi, model_radius_cap, linewidth=2, linestyle="--", c='C3', label=r"model radii")
ax.plot(eval_phi, model_radius_eli, linewidth=2, linestyle="--", c='C3')
ax.scatter(phi_cap, radius_cap, marker='x', color='C0', s=30, linewidth=0.5, label=r"spherical cap data")
ax.scatter(phi_eli, radius_eli, marker='x', color='C1', s=30, linewidth=0.5, label=r"ellipse data")
ax.set_xlabel(r"$\varphi$", fontsize=fontsize)
ax.set_ylabel(r"$r$", fontsize=fontsize)
plt.legend()
plt.show()
# -
# ## Using prior/domain knowledge<a id="prior_knowledge"></a>
#
# ### Re-scaling the data<a id="rescaling"></a>
#
# We know roughly how the bubble is going to look -> we can create simple analytical approximations of the shape and use them to simplify the approximation problem.
# Vanishing gradient problem, especially with sigmoid activations: the network output would have to be close to one, but the gradient of the sigmoid vanishes in that range, so the gradients used to fit r become very small -> rescale the data to the range 0...1.
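# The saturation is easy to see from the derivative of the sigmoid, sigma'(x) = sigma(x) * (1 - sigma(x)), which peaks at 0.25 for x = 0 and almost vanishes where the output approaches one:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(x):
    # derivative of the sigmoid: s * (1 - s)
    s = sigmoid(x)
    return s * (1.0 - s)

print(sigmoid_grad(0.0))  # 0.25 (largest possible gradient)
print(sigmoid_grad(5.0))  # ~0.0066 (output ~0.993, gradient nearly gone)
```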
# +
from scipy.special import expit
def ellipse_radius(phi, a, b):
'''Compute the radius of an ellipse.
Parameters
----------
phi - array-like : polar angle
a - Float : long half axis length
b - Float : short half axis length
Returns
-------
radius - array-like : ellipse radius
'''
return a * b / np.sqrt(np.square(a * np.cos(phi)) + np.square(b * np.sin(phi)))
def spherical_cap_radius(phi, a, b, h, phi_max, R_max):
'''Compute the radius of a spherical cap w.r.t. the cap center.
Parameters
----------
phi - array-like : polar angle w.r.t. to cap center
a - Float : half axis length of the cap
b - Float : distance between cap center and bottom
h - Float : cap height
phi_max - Float : polar angle of R_max
R_max - Float : maximum radius of the cap w.r.t. its center
Returns
-------
radius - array-like : spherical cap radius
'''
R_cap = (a**2 + h**2) / (2 * h)
h_1 = h - b
term_1 = np.cos(phi) * (h_1 - R_cap)
term_2 = np.square(np.cos(phi) * (R_cap - h_1)) - h_1 * (h_1 - 2.0 * R_cap)
R_1 = term_1 + np.sqrt(term_2)
R_2 = np.minimum(b / np.cos(np.pi - phi), np.ones(len(phi)) * R_max)
R_2 = np.where(R_2 > 0, R_2, R_max)
return np.minimum(R_1, R_2)
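# A quick plausibility check for the ellipse helper (restated here so the check runs standalone): at phi = 0 the formula returns the half axis b, at phi = pi/2 the half axis a.

```python
import numpy as np

def ellipse_radius(phi, a, b):
    # polar radius of an ellipse with half axes a and b
    return a * b / np.sqrt(np.square(a * np.cos(phi)) + np.square(b * np.sin(phi)))

print(ellipse_radius(0.0, 2.0, 1.0))        # 1.0
print(ellipse_radius(np.pi / 2, 2.0, 1.0))  # ~2.0
```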
# +
# simple approximation of spherical cap
long_axis_cap = abs(np.amax(data_cap.py.values) - np.amin(data_cap.py.values))
height_cap = abs(np.amax(data_cap.px.values) - np.amin(data_cap.px.values))
offset = abs(np.amin(data_cap.px.values))
phi_max = phi_cap[np.argmax(radius_cap)]
R_max = np.amax(radius_cap)
radius_cap_simple = spherical_cap_radius(phi_cap, long_axis_cap, offset, height_cap, phi_max, R_max)
# simple approximation of ellipse
long_axis_eli = abs(np.amax(data_eli.py.values) - np.amin(data_eli.py.values))
short_axis_eli = abs(np.amax(data_eli.px.values) - np.amin(data_eli.px.values)) * 0.5
radius_eli_simple = ellipse_radius(phi_eli, long_axis_eli, short_axis_eli)
# rescaling of the original radii
radius_cap_scaled = radius_cap / radius_cap_simple
radius_eli_scaled = radius_eli / radius_eli_simple
scaler_cap = MinMaxScaler()
scaler_eli = MinMaxScaler()
radius_cap_scaled_01 = np.squeeze(scaler_cap.fit_transform(radius_cap_scaled.reshape(-1,1)))
radius_eli_scaled_01 = np.squeeze(scaler_eli.fit_transform(radius_eli_scaled.reshape(-1,1)))
# compute the relative variance (index of dispersion) of original and scaled data
print("Spherical cap data:")
print("-" * len("Spherical cap data:"))
print("The relative variance of the original radius is {:1.4f}.".format(np.var(radius_cap) / np.mean(radius_cap)))
print("The relative variance of the scaled radius is {:1.4f}.".format(np.var(radius_cap_scaled) / np.mean(radius_cap_scaled)))
print("\nEllipse data:")
print("-" * len("Ellipse data:"))
print("The relative variance of the original radius is {:1.4f}.".format(np.var(radius_eli) / np.mean(radius_eli)))
print("The relative variance of the scaled radius is {:1.4f}.".format(np.var(radius_eli_scaled) / np.mean(radius_eli_scaled)))
fig, (ax1, ax2) = plt.subplots(2, figsize=(14, 10), sharex=True)
ax1.scatter(phi_cap, radius_cap, marker='x', color='C0', s=30, linewidth=0.5, label=r"original radius")
ax1.scatter(phi_cap, radius_cap_simple, marker='+', color='C1', s=30, linewidth=0.5, label=r"simple approximation")
ax1.scatter(phi_cap, radius_cap_scaled, marker='<', color='C2', s=30, linewidth=0.5, label=r"scaled radius")
ax1.scatter(phi_cap, radius_cap_scaled_01, marker='>', color='C4', s=30, linewidth=0.5, label=r"scaled radius [0,1]")
ax1.set_ylabel(r"$r$", fontsize=fontsize)
ax1.legend(fontsize=fontsize)
ax2.scatter(phi_eli, radius_eli, marker='x', color='C0', s=30, linewidth=0.5, label=r"original radius")
ax2.scatter(phi_eli, radius_eli_simple, marker='+', color='C1', s=30, linewidth=0.5, label=r"simple approximation")
ax2.scatter(phi_eli, radius_eli_scaled, marker='<', color='C2', s=30, linewidth=0.5, label=r"scaled radius")
ax2.scatter(phi_eli, radius_eli_scaled_01, marker='>', color='C4', s=30, linewidth=0.5, label=r"scaled radius [0,1]")
ax2.set_xlabel(r"$\varphi$", fontsize=fontsize)
ax2.set_ylabel(r"$r$", fontsize=fontsize)
ax2.legend(fontsize=fontsize)
plt.show()
# -
radius_model_cap_scaled = SimpleMLP(n_layers=6, n_neurons=40)
radius_model_cap_scaled = approximate_function(phi_cap, radius_cap_scaled_01, radius_model_cap_scaled, max_iter=1500,
l_rate=0.01, path=set_path("radius_model_cap_scaled.pt"))
radius_model_eli_scaled = SimpleMLP(n_layers=6, n_neurons=40)
radius_model_eli_scaled = approximate_function(phi_eli, radius_eli_scaled_01, radius_model_eli_scaled, max_iter=1500,
l_rate=0.01, path=set_path("radius_model_eli_scaled.pt"))
# +
# load best weights and compute forward pass
radius_model_cap_scaled.load_state_dict(torch.load(set_path("radius_model_cap_scaled.pt")))
model_radius_cap = radius_model_cap_scaled.forward(phi_tensor).detach().squeeze().numpy()
radius_model_eli_scaled.load_state_dict(torch.load(set_path("radius_model_eli_scaled.pt")))
model_radius_eli = radius_model_eli_scaled.forward(phi_tensor).detach().squeeze().numpy()
# evaluate maximum relative deviation
model_radius_cap_data = radius_model_cap_scaled.forward(phi_cap_tensor).detach().squeeze().numpy()
model_radius_eli_data = radius_model_eli_scaled.forward(phi_eli_tensor).detach().squeeze().numpy()
diff_cap = np.absolute(model_radius_cap_data - radius_cap_scaled_01)
diff_eli = np.absolute(model_radius_eli_data - radius_eli_scaled_01)
max_pos_cap = np.argmax(diff_cap)
max_pos_eli = np.argmax(diff_eli)
print(r"Maximum relative deviation spherical cap: {:2.2f}% at angle {:2.2f}.".format(
np.amax(diff_cap)/radius_cap_scaled[max_pos_cap] * 100, phi_cap[max_pos_cap]))
print(r"Maximum relative deviation ellipse: {:2.2f}% at angle {:2.2f}.".format(
np.amax(diff_eli)/radius_eli_scaled[max_pos_eli] * 100, phi_eli[max_pos_eli]))
fig, ax = plt.subplots(figsize=(14, 8))
ax.plot(eval_phi, model_radius_cap, linewidth=2, linestyle="--", c='C3', label=r"model radii")
ax.plot(eval_phi, model_radius_eli, linewidth=2, linestyle="--", c='C3')
ax.scatter(phi_cap, radius_cap_scaled_01, marker='+', color='C0', s=30, linewidth=0.5, label=r"spherical cap")
ax.scatter(phi_eli, radius_eli_scaled_01, marker='+', color='C1', s=30, linewidth=0.5, label="ellipse")
ax.set_xlabel(r"$\varphi$", fontsize=fontsize)
ax.set_ylabel(r"$r$", fontsize=fontsize)
ax.legend(fontsize=fontsize)
plt.show()
# +
# transform back to compare to original data
cap_radius_scaled = np.squeeze(scaler_cap.inverse_transform(model_radius_cap.reshape(-1, 1)))
eli_radius_scaled = np.squeeze(scaler_eli.inverse_transform(model_radius_eli.reshape(-1, 1)))
cap_radius_final = cap_radius_scaled * spherical_cap_radius(eval_phi, long_axis_cap, offset, height_cap, phi_max, R_max)
eli_radius_final = eli_radius_scaled * ellipse_radius(eval_phi, long_axis_eli, short_axis_eli)
# pointwise comparison
cap_radius_data_scaled = np.squeeze(scaler_cap.inverse_transform(model_radius_cap_data.reshape(-1, 1)))
eli_radius_data_scaled = np.squeeze(scaler_eli.inverse_transform(model_radius_eli_data.reshape(-1, 1)))
final_cap_data_model = cap_radius_data_scaled * radius_cap_simple
final_eli_data_model = eli_radius_data_scaled * radius_eli_simple
diff_cap = np.absolute(radius_cap - final_cap_data_model)
diff_eli = np.absolute(radius_eli - final_eli_data_model)
max_pos_cap = np.argmax(diff_cap)
max_pos_eli = np.argmax(diff_eli)
print(r"Maximum relative deviation spherical cap: {:2.2f}% at angle {:2.2f}.".format(
np.amax(diff_cap)/radius_cap[max_pos_cap] * 100, phi_cap[max_pos_cap]))
print(r"Maximum relative deviation ellipse: {:2.2f}% at angle {:2.2f}.".format(
np.amax(diff_eli)/radius_eli[max_pos_eli] * 100, phi_eli[max_pos_eli]))
fig, ax = plt.subplots(figsize=(14, 8))
ax.plot(eval_phi, cap_radius_final, linewidth=2, linestyle="--", c='C3', label=r"model radii")
ax.plot(eval_phi, eli_radius_final, linewidth=2, linestyle="--", c='C3')
ax.scatter(phi_cap, radius_cap, marker='+', color='C0', s=30, linewidth=0.5, label=r"spherical cap")
ax.scatter(phi_eli, radius_eli, marker='+', color='C1', s=30, linewidth=0.5, label="ellipse")
ax.set_xlabel(r"$\varphi$", fontsize=fontsize)
ax.set_ylabel(r"$r$", fontsize=fontsize)
ax.legend(fontsize=fontsize)
plt.show()
# -
# ### Adding artificial data<a id="artificial_data"></a>
#
# We can enforce certain mathematical properties of the approximated function by adding artificial data; alternatively, we could modify the loss function and include gradient information.
# +
phi_threshold = 0.5
phi_add = []
radius_add = []
for p, r in zip(phi_cap, radius_cap):
if p < phi_threshold:
phi_add.append(-p)
radius_add.append(r)
if p > np.pi - phi_threshold:
phi_add.append(2 * np.pi - p)
radius_add.append(r)
phi_cap_extended = np.concatenate((phi_cap, np.asarray(phi_add)))
radius_cap_extended = np.concatenate((radius_cap, np.asarray(radius_add)))
print("Added {} points to the training data.".format(radius_cap_extended.shape[0] - radius_cap.shape[0]))
# -
# ### Creating ensemble models<a id="ensemble_models"></a>
#
# We could create the same model architecture in one shot; the two-stage training differs because each stage optimizes its own loss function.
class EnsembleModel(torch.nn.Module):
def __init__(self, model_1, model_2, diff_train):
super(EnsembleModel, self).__init__()
self.model_1 = model_1
self.model_2 = model_2
self.diff = diff_train
self.diff_min = torch.min(self.diff)
self.diff_range = torch.max(self.diff) - self.diff_min
def forward(self, x):
x_1 = self.model_1(x)
x_2 = self.model_2(x)
x_2 = x_2 * self.diff_range + self.diff_min
return x_1 + x_2
# +
phi_ex_tensor = torch.from_numpy(phi_cap_extended).unsqueeze_(-1)
radius_ex_tensor = torch.from_numpy(radius_cap_extended).unsqueeze_(-1)
def train_ensemble_model(layers_m1, layers_m2, neurons_m1, neurons_m2):
print("Configuration - model 1: {} layers, {} neurons; model 2: {} layers, {} neurons".format(
layers_m1, neurons_m1, layers_m2, neurons_m2))
# train model 1
model_1 = SimpleMLP(n_layers=layers_m1, n_neurons=neurons_m1)
model_1 = approximate_function(phi_cap_extended, radius_cap_extended, model_1, max_iter=1500,
l_rate=0.01, path=set_path("model_1.pt"), verbose=2000)
model_1.load_state_dict(torch.load(set_path("model_1.pt")))
# compute deviation from training data, rescale to [0,1]
diff = radius_ex_tensor - model_1(phi_ex_tensor)
diff_min = torch.min(diff)
diff_range = torch.max(diff) - diff_min
diff_norm = (diff - diff_min) / diff_range
# train model 2
model_2 = SimpleMLP(n_layers=layers_m2, n_neurons=neurons_m2)
model_2 = approximate_function(phi_cap_extended, diff_norm.detach().squeeze().numpy(), model_2, max_iter=1500,
l_rate=0.01, path=set_path("model_2.pt"), verbose=2000)
model_2.load_state_dict(torch.load(set_path("model_2.pt")))
# create and evaluate ensemble model
ensemble = EnsembleModel(model_1, model_2, diff)
ensemble_radius_data = ensemble(phi_ex_tensor).detach().squeeze().numpy()
final_diff = np.absolute(radius_cap_extended - ensemble_radius_data)
max_pos = np.argmax(final_diff)
    return np.amax(final_diff)/radius_cap_extended[max_pos], model_1, model_2
n_layers = range(1, 11)
n_neurons = range(10, 60, 10)
min_error = 100
for i in range(5):
print("Iteration {}\n------------".format(i))
layers = np.random.choice(n_layers, 2)
neurons = np.random.choice(n_neurons, 2)
error, model_1, model_2 = train_ensemble_model(layers[0], layers[1], neurons[0], neurons[1])
if error < min_error:
print("\033[1mError decreased to {:2.2f}%\033[0m. Saving model.".format(error * 100))
min_error = error
torch.save(model_1.state_dict(), set_path("model_1_final.pt"))
torch.save(model_2.state_dict(), set_path("model_2_final.pt"))
best_layers = layers
best_neurons = neurons
print("")
# -
# recreate best ensemble model and compute final output
model_1 = SimpleMLP(n_layers=best_layers[0], n_neurons=best_neurons[0])
model_2 = SimpleMLP(n_layers=best_layers[1], n_neurons=best_neurons[1])
model_1.load_state_dict(torch.load(set_path("model_1_final.pt")))
model_2.load_state_dict(torch.load(set_path("model_2_final.pt")))
diff = radius_ex_tensor - model_1(phi_ex_tensor)
ensemble = EnsembleModel(model_1, model_2, diff)
ensemble_radius_data = ensemble(phi_ex_tensor).detach().squeeze().numpy()
final_diff = np.absolute(radius_cap_extended - ensemble_radius_data)
max_pos = np.argmax(final_diff)
print(r"Maximum relative deviation spherical cap: {:2.2f}% at angle {:2.2f}.".format(
    np.amax(final_diff)/radius_cap_extended[max_pos] * 100, phi_cap_extended[max_pos]))
# +
fig, ax = plt.subplots(figsize=(12, 8))
# load best weights and compute forward pass
eval_phi = np.linspace(-0.5, np.pi+0.5, 200)
phi_tensor = torch.from_numpy(eval_phi).unsqueeze_(-1)
ensemble_radius = ensemble(phi_tensor).detach().squeeze().numpy()
model_1_radius = model_1(phi_tensor).detach().squeeze().numpy()
ax.plot(eval_phi, ensemble_radius, linewidth=2, linestyle="--", c='C3', label=r"ensemble model")
ax.plot(eval_phi, model_1_radius, linewidth=2, linestyle=":", c='C4', label=r"single model")
ax.scatter(phi_cap_extended, radius_cap_extended, marker='x', color='C0', s=30, linewidth=0.5, label=r"spherical cap data")
ax.set_xlabel(r"$\varphi$", fontsize=fontsize)
ax.set_ylabel(r"$r$", fontsize=fontsize)
ax.axvline(0.0, 0.0, 1.0, color='k', linestyle='--')
ax.axvline(np.pi, 0.0, 1.0, color='k', linestyle='--')
plt.legend(fontsize=fontsize)
plt.show()
# -
# ## Final notes<a id="final_notes"></a>
#
# - Mapping other fields to the shape allows us to compute gradients, which is useful for optimization.
# - Neural networks are particularly strong when the feature space is high-dimensional.
# - Model training is never deterministic; an iterative search is sometimes necessary.
# - With many layers, the sigmoid suffers from vanishing gradients; change the activation function, e.g., to *torch.relu*.
# - Create a custom loss function that weights each individual sample inversely to the distribution of the training data over the polar angle.
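The last bullet point can be sketched concretely. This is a minimal NumPy sketch under stated assumptions: the helper names `inverse_density_weights` and `weighted_mse` are illustrative and not part of the notebook, and in practice the weights would be computed once from `phi_cap_extended` and the weighted loss used inside the training loop in place of the unweighted MSE.

```python
import numpy as np

def inverse_density_weights(phi, n_bins=20):
    """Weight each sample inversely to how often its polar angle occurs."""
    hist, edges = np.histogram(phi, bins=n_bins)
    # map each angle to its histogram bin; clip keeps boundary values in range
    idx = np.clip(np.digitize(phi, edges[1:-1]), 0, n_bins - 1)
    weights = 1.0 / np.maximum(hist[idx], 1)
    # normalize to mean 1 so the overall loss scale stays comparable
    return weights / weights.mean()

def weighted_mse(prediction, target, weights):
    """Mean squared error with per-sample weights."""
    return np.mean(weights * (prediction - target) ** 2)
```

Angles that are over-represented in the training data then contribute less per sample, so the fit is no longer dominated by densely sampled regions.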
| notebooks/2D_shape_approximation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="EMz1vjZandwk" outputId="217a1d8b-f382-485d-cc73-c16996e469e3"
# !git clone https://github.com/RiskModellingResearch/DeepLearning_Winter22.git
# + colab={"base_uri": "https://localhost:8080/"} id="npXlo_ftoH4I" outputId="ef22b3b3-a4c2-4d57-dc1b-a15498a78bfc"
# !pip install torchmetrics
# + colab={"base_uri": "https://localhost:8080/"} id="wZ8tqez8nKcQ" outputId="48b4752a-cebf-43cf-d7f5-2679ddd26a34"
import numpy as np
import pandas as pd
import pickle
import torch
print(torch.__version__)
import torch.nn as nn
import torch.optim as optim
from torch.utils.tensorboard import SummaryWriter
from torchmetrics import Accuracy
from torch.utils.data import Dataset, DataLoader
# + id="BTnqKiLPo_Uz"
class CustomDataset(Dataset):
def __init__(self, dataset_path):
with open(dataset_path, 'rb') as f:
data, self.nrof_emb_categories, self.unique_categories = pickle.load(f)
self.embedding_columns = ['workclass_cat', 'education_cat', 'marital-status_cat', 'occupation_cat',
'relationship_cat', 'race_cat', 'sex_cat', 'native-country_cat']
self.nrof_emb_categories = {key + '_cat': val for key, val in self.nrof_emb_categories.items()}
self.numeric_columns = ['age', 'fnlwgt', 'education-num', 'capital-gain', 'capital-loss', 'hours-per-week']
self.columns = self.embedding_columns + self.numeric_columns
self.X = data[self.columns].reset_index(drop=True)
self.y = np.asarray([0 if el == '<50k' else 1 for el in data['salary'].values], dtype=np.int32)
return
def __len__(self):
return len(self.X)
def __getitem__(self, idx):
row = self.X.take([idx], axis=0)
        row = {col: torch.tensor(row[col].values, dtype=torch.float32) for col in self.columns}
return row, np.float32(self.y[idx])
# + id="tZ2wMJ-7pX6i"
class DenseFeatureLayer(nn.Module):
def __init__(self, nrof_cat, emb_dim, emb_columns, numeric_columns):
super(DenseFeatureLayer, self).__init__()
self.emb_columns = emb_columns
self.numeric_columns = numeric_columns
self.numeric_feature_bn = torch.nn.BatchNorm1d(len(numeric_columns))
input_size = len(emb_columns) + len(numeric_columns)
self.first_feature_bn = torch.nn.BatchNorm1d(input_size)
self.second_feature_bn = torch.nn.BatchNorm1d(input_size)
# first order feature interactions
self.first_order_embd = nn.ModuleDict()
for i, col in enumerate(self.emb_columns):
self.first_order_embd[col] = torch.nn.Embedding(nrof_cat[col], 1)
self.first_order_scalar = nn.ParameterDict({})
for i, col in enumerate(numeric_columns):
self.first_order_scalar[col] = nn.Parameter(torch.nn.init.xavier_uniform_(torch.empty(1,1)))
# second order feature interactions
self.second_order_embd = nn.ModuleDict({})
for i, col in enumerate(self.emb_columns):
self.second_order_embd[col] = torch.nn.Embedding(nrof_cat[col], emb_dim)
self.second_order_scalar = nn.ParameterDict({})
for i, col in enumerate(numeric_columns):
self.second_order_scalar[col] = nn.Parameter(torch.nn.init.xavier_uniform_(torch.empty(emb_dim, 1)))
return
def forward(self, input_data):
numeric_features = torch.stack([input_data[col] for col in self.numeric_columns], dim=1)
numeric_features = self.numeric_feature_bn(numeric_features)
# first order feature interactions
# categorical_columns
        first_order_embd_output = None
        for col in self.emb_columns:
            emb = self.first_order_embd[col](input_data[col].to(torch.int64))
            if first_order_embd_output is None:
                first_order_embd_output = emb
            else:
                first_order_embd_output = torch.cat([first_order_embd_output, emb], dim=1)
# numeric_columns
first_order_embd_output = torch.squeeze(first_order_embd_output, dim=2)
for i, col in enumerate(self.numeric_columns):
if first_order_embd_output is None:
                first_order_embd_output = torch.mul(numeric_features[:, i], self.first_order_scalar[col])
else:
first_order_embd_output = torch.cat(
[first_order_embd_output, torch.mul(numeric_features[:, i], self.first_order_scalar[col])], dim=1)
# second order feature interactions
# categorical_columns
        second_order_embd_output = None
        for col in self.emb_columns:
            emb = self.second_order_embd[col](input_data[col].to(torch.int64))
            if second_order_embd_output is None:
                second_order_embd_output = emb
            else:
                second_order_embd_output = torch.cat([second_order_embd_output, emb], dim=1)
# numeric_columns
for i, col in enumerate(self.numeric_columns):
if second_order_embd_output is None:
                second_order_embd_output = torch.mul(numeric_features[:, i], self.second_order_scalar[col])
else:
second_order_embd_output = torch.cat(
[second_order_embd_output, torch.unsqueeze(torch.mul(
numeric_features[:, i], torch.squeeze(
torch.stack([self.second_order_scalar[col]] * len(numeric_features)), 2)), 1)], dim=1)
first_order_embd_output = self.first_feature_bn(first_order_embd_output)
second_order_embd_output = self.second_feature_bn(second_order_embd_output)
return first_order_embd_output, second_order_embd_output
# + id="mt8z5wOXrGIk"
class FMLayer(nn.Module):
def __init__(self, ):
super(FMLayer, self).__init__()
return
def forward(self, first_order_embd, second_order_embd):
# sum_square part
summed_features_embd = torch.sum(second_order_embd, dim=1)
summed_features_embd_square = torch.square(summed_features_embd)
# square_sum part
squared_features_embd = torch.square(second_order_embd)
squared_sum_features_embd = torch.sum(squared_features_embd, dim=1)
# second order
second_order = 0.5 * torch.sub(summed_features_embd_square, squared_sum_features_embd)
return first_order_embd, second_order
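The `sum_square`/`square_sum` computation above is the standard factorization-machine identity: for embeddings $v_i$, $\sum_{i<j}\langle v_i, v_j\rangle = \tfrac{1}{2}\big[(\sum_i v_i)^2 - \sum_i v_i^2\big]$ summed over the embedding dimension (the layer keeps the per-dimension vector rather than summing it out). A quick NumPy check, independent of the torch code above, confirms the two sides agree:

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.normal(size=(7, 4))  # 7 feature embeddings of dimension 4

# naive O(fields^2) pairwise-interaction sum
pairwise = sum(V[i] @ V[j] for i in range(7) for j in range(i + 1, 7))

# FM O(fields) reformulation: 0.5 * (square-of-sum minus sum-of-squares)
fm = 0.5 * (np.square(V.sum(axis=0)) - np.square(V).sum(axis=0)).sum()
```

This is why FM layers scale linearly in the number of feature fields instead of quadratically.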
# + id="mdzW1jMjrK2X"
class MLPLayer(nn.Module):
def __init__(self, input_size, nrof_layers, nrof_neurons, output_size):
super(MLPLayer, self).__init__()
        layer_sizes = [input_size] + [nrof_neurons] * (nrof_layers - 1)
        list_layers = []
        for i in range(nrof_layers - 1):
            list_layers.extend([torch.nn.Linear(layer_sizes[i], nrof_neurons),
                                torch.nn.BatchNorm1d(nrof_neurons),
                                torch.nn.ReLU()])
self.deep_block = torch.nn.Sequential(*list_layers)
self.output_layer = torch.nn.Linear(nrof_neurons, output_size)
def init_weights(self, m):
if type(m) == nn.Linear:
            torch.nn.init.xavier_uniform_(m.weight)
# m.bias.data.fill_(0.001)
def forward(self, input_data):
output = self.deep_block(input_data)
output = self.output_layer(output)
return output
# + id="1XMyO31trNve"
class DeepFMNet(nn.Module):
def __init__(self, nrof_cat, emb_dim, emb_columns, numeric_columns,
nrof_layers, nrof_neurons, output_size, nrof_out_classes):
super(DeepFMNet, self).__init__()
self.emb_dim = emb_dim
self.emb_columns = emb_columns
self.numeric_columns = numeric_columns
self.features_embd = DenseFeatureLayer(nrof_cat, emb_dim, emb_columns, numeric_columns)
self.FM = FMLayer()
input_size = (len(emb_columns) + len(numeric_columns)) * emb_dim
self.MLP = MLPLayer(input_size, nrof_layers, nrof_neurons, output_size)
input_size = len(emb_columns) + len(numeric_columns) + emb_dim + output_size
self.dense_layer = nn.Linear(input_size, nrof_out_classes)
def forward(self, input_data):
first_order_embd, second_order_embd = self.features_embd(input_data)
FM_first_order, FM_second_order = self.FM(first_order_embd, second_order_embd)
second_order_embd = torch.reshape(second_order_embd,
[-1, (len(self.emb_columns) + len(self.numeric_columns)) * self.emb_dim])
Deep = self.MLP(second_order_embd)
concat_output = torch.cat([FM_first_order, FM_second_order, Deep], dim=1)
output = self.dense_layer(concat_output)
output = torch.squeeze(output, 1)
return output
# + id="3PjHrIWBnKcS"
EPOCHS = 500
EMBEDDING_SIZE = 5
BATCH_SIZE = 512
NROF_LAYERS = 3
NROF_NEURONS = 50
DEEP_OUTPUT_SIZE = 50
NROF_OUT_CLASSES = 1
LEARNING_RATE = 3e-4
TRAIN_PATH = 'DeepLearning_Winter22/week_05/data/train_adult.pickle'
VALID_PATH = 'DeepLearning_Winter22/week_05/data/valid_adult.pickle'
# + id="BaJHVBdBnKcT"
class DeepFM:
def __init__(self):
self.train_dataset = CustomDataset(TRAIN_PATH)
self.train_loader = DataLoader(dataset=self.train_dataset, batch_size=BATCH_SIZE, shuffle=True)
self.build_model()
self.log_params()
self.train_writer = SummaryWriter('./logs/train')
self.valid_writer = SummaryWriter('./logs/valid')
return
def build_model(self):
self.network = DeepFMNet(nrof_cat=self.train_dataset.nrof_emb_categories,
emb_dim=EMBEDDING_SIZE,
emb_columns=self.train_dataset.embedding_columns,
numeric_columns=self.train_dataset.numeric_columns,
nrof_layers=NROF_LAYERS, nrof_neurons=NROF_NEURONS,
output_size=DEEP_OUTPUT_SIZE,
nrof_out_classes=NROF_OUT_CLASSES)
self.loss = torch.nn.BCEWithLogitsLoss()
self.accuracy = Accuracy()
self.optimizer = optim.Adam(self.network.parameters(), lr=LEARNING_RATE)
return
def log_params(self):
return
def load_model(self, restore_path=''):
if restore_path == '':
self.step = 0
else:
pass
return
def run_train(self):
print('Run train ...')
self.load_model()
for epoch in range(EPOCHS):
self.network.train()
for features, label in self.train_loader:
# Reset gradients
self.optimizer.zero_grad()
output = self.network(features)
# Calculate error and backpropagate
loss = self.loss(output, label)
output = torch.sigmoid(output)
loss.backward()
                acc = self.accuracy(output, label.to(torch.int64)).item()
# Update weights with gradients
self.optimizer.step()
                self.train_writer.add_scalar('BCEWithLogitsLoss', loss, self.step)
self.train_writer.add_scalar('Accuracy', acc, self.step)
self.step += 1
if self.step % 50 == 0:
print('EPOCH %d STEP %d : train_loss: %f train_acc: %f' %(epoch, self.step, loss.item(), acc))
# self.train_writer.add_histogram('hidden_layer', self.network.linear1.weight.data, self.step)
# Run validation
#TODO
return
# + colab={"base_uri": "https://localhost:8080/"} id="m0yixHcZnKcV" outputId="5b9da71d-1c11-461a-c7d8-f10ba9f9c228"
deep_fm = DeepFM()
deep_fm.run_train()
# + id="E06x3iFAj1Ga"
| week_05/DeepFM.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
# %config InlineBackend.figure_format='retina'
# +
import numpy as np
import matplotlib.pyplot as plt
import os
import sys
import time
import h5py
import seaborn as sns
sns.set(style="white", palette="muted", color_codes=True)
# +
with h5py.File('../data/waveforms/waveforms_3s_0100_1200.h5', 'r') as file:
waveforms = np.array(file['waveforms'])
failed = np.array(file['failed'])
waveforms = np.array([waveforms[i] for i in range(len(waveforms)) if i not in failed])
print(len(waveforms))
# -
waveform_maxima = [np.max(np.abs(waveform)) for waveform in waveforms]
# +
percentile_50 = np.percentile(waveform_maxima, 50)
percentile_02 = np.percentile(waveform_maxima, 2)
print(percentile_50)
print(percentile_02)
plt.gcf().set_size_inches(16, 4, forward=True)
# sns.distplot(waveform_maxima, hist=True, bins=100, kde_kws={"shade": False})
plt.hist(waveform_maxima, bins=200)
plt.xlim(0, 0.5*10**-20)
for x in np.geomspace(percentile_02, percentile_50, num=20):
plt.axvline(x=x, color='Orange', lw=1)
plt.axvline(x=percentile_50, color='Orange', lw=2)
plt.axvline(x=percentile_02, color='Orange', lw=2)
plt.show()
# +
thresholds = iter(np.geomspace(percentile_02, percentile_50, num=20)[::-1])
threshold = None
for epoch in range(100):
if epoch % 5 == 0:
threshold = next(thresholds)
print(epoch, threshold)
# -
| quantum gravity/convwave/src/jupyter/get_waveform_maxima_percentiles.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Central Limit Theorem (CLT) and Normality Testing
# My main inspiration for this notebook is [MIT's edX Lecture](https://courses.edx.org/courses/MITx/6.041x_1/1T2015/courseware/Unit_8_Limit_theorems_and_classical_statistics/Lec__19_The_Central_Limit_Theorem__CLT_/) (you might need an edX account to view it). In particular, I wanted to demonstrate the CLT by continuously summing samples of random variables from a non-normal distribution, _i.e._ an exponential or uniform distribution.
#
# The question arises: how many times do you need to sum samples from a non-normal distribution to get "close enough" to a normal distribution? I explore a number of [normality tests](#stat_testing).
#
# In math terms, let $X$ be a random variable:
#
# $$
# \begin{align} S_n &= X_1 + X_2 + \dots + X_n \\[1.5ex]
# Z_n &= \frac{S_n - n \mu}{\sigma \sqrt{n}} \end{align}
# $$
#
# where $X_i$s are i.i.d.
#
# The CLT states:
#
# Let $Z$ be a standard normal RV: $Z \sim \mathcal{N}(\mu = 0, \sigma = 1)$.
#
# $$ \forall z: \lim_{n \to \infty} P(Z_n \le z) = P(Z \le z) $$
#
# At what value of $n$ does $P(Z_n \le z) \approx P(Z \le z)$?
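Before diving in, a quick numerical sanity check of the statement (a self-contained NumPy sketch, separate from the machinery built below): for sums of $\mathrm{Expon}(1)$ variables, the empirical $P(Z_n \le 0)$ already sits close to $P(Z \le 0) = 0.5$ at $n = 30$.

```python
import numpy as np

rng = np.random.default_rng(42)
n, samples = 30, 20000

# S_n = X_1 + ... + X_n for X_i ~ Expon(1), where mu = 1 and sigma^2 = 1
Sn = rng.exponential(scale=1.0, size=(samples, n)).sum(axis=1)
Zn = (Sn - n) / np.sqrt(n)

# empirical CDF at z = 0; a standard normal gives exactly 0.5
p_hat = np.mean(Zn <= 0.0)
```

The small residual gap above 0.5 reflects the skewness of the exponential distribution, which the rest of this notebook quantifies with formal normality tests.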
#
# # Imports and Definitions
# +
# %matplotlib inline
import warnings
warnings.simplefilter('once', UserWarning)
import numpy as np
import matplotlib.pyplot as plt
from cycler import cycler # for plot colors
from scipy import stats
import seaborn as sns  # used for the sample histograms in the CLT plots below
np.random.seed(56) # ensure reproducibility
plt.close('all')
# Globals
samples = int(5e3) # n > 5000 gives warnings about p-values
res = 1e-4
# Define standard normal distribution
N = stats.norm(loc=0, scale=1)
# -
# ## Define the Test Distribution
# Feel free to experiment with any of these [scipy.stats.rv_continuous](https://docs.scipy.org/doc/scipy/reference/stats.html) distributions! Distributions that are asymmetrical or have fat tails will give a more pronounced effect.
# +
# Uniform
# a = 1
# b = 9 # s.t. pdf = 1/8 = 0.125
# dist = stats.uniform(loc=a, scale=b-a) # ~ dist[loc, loc+scale]
# Exponential
lam = 1
dist = stats.expon(scale=1/lam)
# scale = 1/lambda for f(x) = lambda * exp(-lambda*x)
# -
# # Convolve Samples From A Distribution
#
# My major open question in this simple function arises in the normalization factors. Do $\mu$ = `dist.mean()` and $\sigma^2$ = `dist.var()` need to be known from the underlying distribution of our sample (which is not normally known)?
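On that open question: for a large number of drawn samples, standardizing with the empirical mean and standard deviation of the `Sn` values gives nearly the same Z-scores as using the true `dist.mean()` and `dist.var()`. A minimal NumPy sketch, using an $\mathrm{Expon}(1)$ distribution where $\mu = \sigma^2 = 1$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, samples = 30, 5000

# S_n = X_1 + ... + X_n for X_i ~ Expon(1)
Sn = rng.exponential(scale=1.0, size=(samples, n)).sum(axis=1)

# standardize with the true moments (mu = 1, sigma^2 = 1) ...
Z_true = (Sn - n * 1.0) / np.sqrt(n * 1.0)
# ... and with moments estimated from the drawn sample itself
Z_est = (Sn - Sn.mean()) / Sn.std()
```

So in practice the normalization can be done with sample statistics when the underlying moments are unknown; the function below uses the true moments only for convenience.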
def convolve_dist(dist, n, samples=1000, norm_out=True):
"""Convolve a distribution n times.
For a random variable X,
Sn = X1 + X2 + ... + Xn
Parameters
----------
dist : rv_continuous
        continuous distribution object, i.e. scipy.stats.norm
n : int
number of convolutions to make
samples : int, optional, default=1000
number of samples to draw for each convolution
norm_out : boolean, optional, default=True
normalize output to Z-score: (S - n*dist.mean()) / np.sqrt(n*dist.var())
Returns
-------
out : ndarray, shape (samples,)
if norm_out, out = Zn values, otherwise out = Sn values
"""
Sn = np.zeros(samples)
for i in range(n):
# Draw from distribution and add to sum
Sn += dist.rvs(size=samples)
if norm_out:
Zn = (Sn - n*dist.mean()) / np.sqrt(n*dist.var()) # normalize Sn
return Zn
else:
return Sn
# ## Plot the pdf of the test distribution
# +
# Draw samples on the range where pdf has support
x = np.linspace(dist.ppf(res), dist.ppf(1-res), 100)
fig = plt.figure(1)
fig.clf()
ax = plt.gca()
ax.set_title('Test Distribution')
ax.plot(x, dist.pdf(x), 'r-', label='test pdf')
# Draw from the distribution and display the histogram
r = dist.rvs(size=1000)
ax.hist(r, density=True, bins=25, histtype='stepfilled', alpha=0.2, label='samples')
ax.legend(loc='lower right')
plt.show()
# -
# ## Demonstrate CLT
# The following plot shows our test distribution vs. a standard normal for values of $n \in \{1, 2, 10, 30\}$. The convolution gets astoundingly close to normal at $n = 30$, even for the heavily skewed exponential distribution.
# +
#------------------------------------------------------------------------------
# Plots vs. n
#------------------------------------------------------------------------------
# Plot histogram of samples vs normal distribution
fig = plt.figure(2, figsize=(11,9))
xN = np.linspace(N.ppf(res), N.ppf(1-res), 1000)
n_arr = [1, 2, 10, 30]
for i in range(len(n_arr)):
# Convolve the pdfs
Zn = convolve_dist(dist, n=n_arr[i], samples=samples)
Nn = stats.gaussian_kde(Zn) # compare to actual normal
# Plot vs standard normal distribution
ax = fig.add_subplot(2, 2, i+1)
sns.distplot(Zn, kde=False, norm_hist=True, ax=ax)
ax.plot(xN, Nn.pdf(xN), 'C0', label='$Z_n$ KDE')
ax.plot(xN, N.pdf(xN), 'C3', label='$\mathcal{N}(0,1)$')
ax.set_xlim([-4, 4])
# ax.set_ylim([0, 1.25*max(Nn.pdf(xN))])
ax.set_title("n = {}".format(n_arr[i]))
fig.suptitle(("Central Limit Theorem, $N_{{samples}}$ = {}\n" + \
"$S_n = X_1 + \dots + X_n$").format(samples))
ax.legend(loc='lower right')
plt.show()
# -
# <a id="stat_testing"></a>
# ## Determine $n$ Using Normality Tests
# Tests employed are:
# * [Kolmogorov-Smirnov (K-S) Test `kstest`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.kstest.html#scipy.stats.kstest)
# * [Shapiro-Wilk Test `shapiro`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.shapiro.html#scipy.stats.shapiro)
# * [D'Agostino-Pearson Test `normaltest`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.normaltest.html#scipy.stats.normaltest)
# * [Anderson-Darling Test `anderson`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.anderson.html#scipy.stats.anderson)
#
# In all but the Anderson test, the null hypothesis is that the sample is drawn from the reference distribution (standard normal in this case). The Shapiro and D'Agostino tests are specific to normality testing. The others may be used to compare with _any_ reference distribution.
#
# The $D$ statistic for the K-S test should approach 0 for a true normal distribution. The $W$ statistic for the Shapiro-Wilk test should approach 1, so I have plotted $1-W$ to compare with the other statistics. The $K$ statistic of the D'Agostino-Pearson test (`normaltest`) is not bound, so I have scaled it by its maximum value for comparison with the other statistics.
#
# **Each test is plotted vs. $n$ convolutions of the test distribution, so $n = 1$ is just an exponential distribution (or whatever you chose for the test).**
#
# I got some odd results with a [chi-squared test `chisquare`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.chisquare.html#scipy.stats.chisquare) (lines commented out below).
# +
#------------------------------------------------------------------------------
# Determine n to match N(0,1)
#------------------------------------------------------------------------------
# Draw from the distribution until we approach a standard normal
MAX_N = 100
#thresh = 1 - 1e-3
score = np.inf
n = 1
D = np.empty(MAX_N)
W = np.empty(MAX_N)
A = np.empty(MAX_N)
K = np.empty(MAX_N)
p = np.empty(MAX_N)
# X2 = np.empty(MAX_N)
D.fill(np.nan)
A.fill(np.nan)
W.fill(np.nan)
K.fill(np.nan)
p.fill(np.nan)
# X2.fill(np.nan)
Zn = []
while n < MAX_N:
# Compute convolution
Zn.append(convolve_dist(dist, n=n, samples=samples))
# Test if convolution is equivalent to normal distribution
D[n], p[n] = stats.kstest(Zn[-1], 'norm')
W[n], _ = stats.shapiro(Zn[-1])
A[n], cv, sig = stats.anderson(Zn[-1], dist='norm')
K[n], _ = stats.normaltest(Zn[-1])
# # Chi-squared test requires bins of data
# Zn_hist, _ = np.histogram(Zn[-1], bins=100, density=True)
# N_hist, _ = np.histogram(N.rvs(size=100000), bins=100, density=True)
# X2[n], _ = stats.chisquare(f_obs=Zn_hist, f_exp=N_hist)
# # Possible test if we've reached an acceptable threshold value:
#if W[n] > thresh:
# break
n += 1
# Plot test statistics vs. n
plt.figure(9, figsize=(11,9))
plt.clf()
ax = plt.gca()
ax.plot(np.arange(MAX_N), D, c='C3', label='$D$ statistic')
ax.plot(np.arange(MAX_N), 1-W, c='C2', label='$W$ statistic')
# ax.plot(np.arange(MAX_N), X2/np.nanmax(X2), c='C4', label='$\chi^2$ statistic')
ax.plot(np.arange(MAX_N), K/np.nanmax(K), c='C1', label='$K^2$ statistic')
ax.plot(np.arange(MAX_N), p, c='C0', label='$p$-value', zorder=0, alpha=0.5)
# ax.set_yscale('log')
ax.set_title('Test Statistics vs. $n$ convolutions, {} samples per convolution'.format(samples))
ax.set_xlabel('Number of convolved distributions')
ax.set_ylabel('Statistic')
# ax.set_ylim([0, 2])
ax.legend(loc='upper right')
plt.show()
# -
# ### Results
# We note each of the statistics starts at a large value, then decays rapidly towards 0 as we approach a normal distribution. The $p$-value gets quite noisy for large $n$ values. I am unsure why that is the case, and of how to interpret the $p$-value in conjunction with the test statistics.
#
# ### Anderson-Darling Test
# This test statistic has a slightly different interpretation. If $A^2$ is larger than a given threshold, the null hypothesis that the data come from the chosen (normal) distribution can be rejected at corresponding significance level. The Anderson test is ([according to Wikipedia](https://en.wikipedia.org/wiki/Anderson–Darling_test)) more sensitive in the tails of the distribution.
#
# For an exponential test distribution, the test statistic is _larger_ than all of the critical values until about $n = 95$, so we reject the null hypothesis that the data come from a normal distribution. It seems the Anderson test is **most stringent** when performing normality testing.
# Plot A^2 statistic (Anderson-Darling test)
# If A^2 is larger than critical value for corresponding significance level,
# the null hypothesis that the data come from the chosen distribution can be
# rejected
plt.figure(10, figsize=(11,9))
ax = plt.gca()
ax.plot(np.arange(MAX_N), A, c='C1', label='$A^2$ statistic')
# Use greys for threshold values
ax.set_prop_cycle(cycler('color',
[plt.cm.bone(i) for i in np.linspace(0, 0.75, 5)]))
for i in range(5):
ax.plot(np.array([0, n]), cv[i]*np.array([1, 1]),
label='Threshold {}%'.format(sig[i]))
ax.set_yscale('log')
ax.set_title('Test Statistics vs. $n$')
ax.set_xlabel('Number of convolved distributions')
ax.set_ylabel('Statistic')
ax.legend(loc='upper right')
plt.show()
# ## Q-Q Plot
# Lastly, we can plot the [quantile-quantile plot](https://en.wikipedia.org/wiki/Q–Q_plot) of the samples from our test distribution vs. the expected values from a normal distribution. The darker colors are higher $n$ values, which show the approach to the straight red line (true standard normal).
plt.figure(11, figsize=(11,9))
ax = plt.gca()
colors = [plt.cm.bone(i) for i in np.linspace(0, 0.9, len(Zn))][::-1]
for i in range(len(Zn)):
result = stats.probplot(Zn[i], dist='norm', plot=ax) # Q-Q plot
ax.get_lines()[2*i].set_markeredgecolor('none')
ax.get_lines()[2*i].set_markerfacecolor(colors[i])
# Turn off all but last fit line
if i < len(Zn)-1:
ax.get_lines()[2*i+1].set_linestyle('none')
plt.show()
| notebooks/CLT and Statistical Testing.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/FelixIG15/tensorflow-1-public/blob/felix's_branch/C2/W2/ungraded_labs/C2_W2_Lab_2_horses_v_humans_augmentation.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="XCDoWOgH6geg"
# <a href="https://colab.research.google.com/github/https-deeplearning-ai/tensorflow-1-public/blob/master/C2/W2/ungraded_labs/C2_W2_Lab_2_horses_v_humans_augmentation.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="37v_yExZppEp"
# # Ungraded Lab: Data Augmentation on the Horses or Humans Dataset
#
# In the previous lab, you saw how data augmentation helped improve the model's performance on unseen data. By tweaking the cat and dog training images, the model was able to learn features that are also representative of the validation data. However, applying data augmentation requires good understanding of your dataset. Simply transforming it randomly will not always yield good results.
#
# In the next cells, you will apply the same techniques to the `Horses or Humans` dataset and analyze the results.
# + id="Lslf0vB3rQlU" colab={"base_uri": "https://localhost:8080/"} outputId="ebc540b5-950f-47a0-93a1-10de0512c11e"
# Download the training set
# !wget https://storage.googleapis.com/tensorflow-1-public/course2/week3/horse-or-human.zip
# + id="6hHj28Jl6gel" colab={"base_uri": "https://localhost:8080/"} outputId="8975872b-1c8b-4a96-cb90-8ee81e96a357"
# Download the validation set
# !wget https://storage.googleapis.com/tensorflow-1-public/course2/week3/validation-horse-or-human.zip
# + id="RXZT2UsyIVe_"
import os
import zipfile
# Extract the archive
zip_ref = zipfile.ZipFile('./horse-or-human.zip', 'r')
zip_ref.extractall('tmp/horse-or-human')
zip_ref = zipfile.ZipFile('./validation-horse-or-human.zip', 'r')
zip_ref.extractall('tmp/validation-horse-or-human')
zip_ref.close()
# Directory with training horse pictures
train_horse_dir = os.path.join('tmp/horse-or-human/horses')
# Directory with training human pictures
train_human_dir = os.path.join('tmp/horse-or-human/humans')
# Directory with validation horse pictures
validation_horse_dir = os.path.join('tmp/validation-horse-or-human/horses')
# Directory with validation human pictures
validation_human_dir = os.path.join('tmp/validation-horse-or-human/humans')
# + id="PixZ2s5QbYQ3"
import tensorflow as tf
# Build the model
model = tf.keras.models.Sequential([
# Note the input shape is the desired size of the image 300x300 with 3 bytes color
# This is the first convolution
tf.keras.layers.Conv2D(16, (3,3), activation='relu', input_shape=(300, 300, 3)),
tf.keras.layers.MaxPooling2D(2, 2),
# The second convolution
tf.keras.layers.Conv2D(32, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# The third convolution
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# The fourth convolution
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# The fifth convolution
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# Flatten the results to feed into a DNN
tf.keras.layers.Flatten(),
# 512 neuron hidden layer
tf.keras.layers.Dense(512, activation='relu'),
# Only 1 output neuron. It will contain a value from 0-1, where 0 is for one class ('horses') and 1 for the other ('humans')
tf.keras.layers.Dense(1, activation='sigmoid')
])
# + id="8DHWhFP_uhq3"
from tensorflow.keras.optimizers import RMSprop
# Set training parameters
model.compile(loss='binary_crossentropy',
optimizer=RMSprop(learning_rate=1e-4),
metrics=['accuracy'])
# + colab={"base_uri": "https://localhost:8080/"} id="297JurUO9nDP" outputId="a5cb6fc1-373f-4683-8b82-2dc74d43a27b"
model.summary()
# + id="ClebU9NJg99G" colab={"base_uri": "https://localhost:8080/"} outputId="77f7e196-47cf-4213-d22a-dac904db07c4"
from tensorflow.keras.preprocessing.image import ImageDataGenerator
# Apply data augmentation
train_datagen = ImageDataGenerator(
rescale=1./255,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
validation_datagen = ImageDataGenerator(rescale=1/255)
# Flow training images in batches of 128 using train_datagen generator
train_generator = train_datagen.flow_from_directory(
'tmp/horse-or-human/', # This is the source directory for training images
target_size=(300, 300),  # All images will be resized to 300x300
batch_size=128,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
# Flow validation images in batches of 32 using validation_datagen generator
validation_generator = validation_datagen.flow_from_directory(
'tmp/validation-horse-or-human/',  # This is the source directory for validation images
target_size=(300, 300),  # All images will be resized to 300x300
batch_size=32,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
# + id="Fb1_lgobv81m" colab={"base_uri": "https://localhost:8080/"} outputId="c81774cc-9459-4d60-fe09-9a9d984aeec2"
# Constant for epochs
EPOCHS = 20
# Train the model
history = model.fit(
train_generator,
steps_per_epoch=8,
epochs=EPOCHS,
verbose=1,
validation_data = validation_generator,
validation_steps=8)
# + id="7zNPRWOVJdOH" colab={"base_uri": "https://localhost:8080/", "height": 545} outputId="fae4f47d-dfd6-40ca-c461-b07ba2b70ce4"
import matplotlib.pyplot as plt
# Plot the model results
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'r', label='Training accuracy')
plt.plot(epochs, val_acc, 'b', label='Validation accuracy')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'r', label='Training Loss')
plt.plot(epochs, val_loss, 'b', label='Validation Loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
# + [markdown] id="hwyabYvCsvtn"
# As you can see in the results, the preprocessing techniques used to augment the data did not help much. The validation accuracy fluctuates instead of trending up like the training accuracy. This might be because the additional training data generated still does not represent the features in the validation data. For example, some human or horse poses in the validation set cannot be mimicked by the image processing techniques that `ImageDataGenerator` provides. It might also be that the backgrounds of the training images are being learned, so the white background of the validation set throws the model off even with cropping. Try looking at the validation images in the `tmp/validation-horse-or-human` directory (note: if you are using Colab, you can use the file explorer on the left to explore the images) and see if you can augment the training images to match their characteristics. If this is not possible, then at this point you can consider other techniques, which you will see in next week's lessons.
# + id="9oK_vAVHBeDE"
| C2/W2/ungraded_labs/C2_W2_Lab_2_horses_v_humans_augmentation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Marriott Hotel Category Change 2020 Analysis
# import pandas
import pandas as pd
# read data from csv
df = pd.read_csv('marriott-category-changes-2020.csv')
df.head()
# ## Question 1
# Describe the data type for each feature/column, e.g., xxx feature's data type is String, yyy feature's data type is float, etc.
# Print the data type of each column.
for colname, coltype in df.dtypes.items():
    print(colname, 'data type is', coltype)
# My answer to the question 1:
# * Hotel data type is object
# * Brand data type is object
# * Destination data type is object
# * Current Category data type is int64
# * Current Standard Price data type is int64
# * New Category data type is int64
# * New Standard Price data type is int64
# ## Question 2
# - How many hotels are in this dataset?
# - The hotels are from how many unique brands?
# - Which destination/country has the most hotels listed in this dataset? List the total number of hotels in that country
# - How many brands in China have hotel category changes?
# getting data shape, listing total number of rows and columns.
df.shape
# +
# list all unique values in brand columns
print(df['Brand'].value_counts())
# prints out total number of brands.
print('The hotels are from',df['Brand'].nunique(),'unique brands.')
# -
# List the total number of hotels by destination, sorted in descending order.
df2 = df.groupby('Destination')['Hotel'].nunique().sort_values(ascending=False).reset_index(name='count')
df2
# Create a subset that only contains China as the destination, then show its totals.
cdf = df2.loc[df2['Destination'] == "China"]
cdf.head()
# This part is only for testing the results:
# count the total number of hotels which meet the condition: current category not equal to new category.
counter = 0
for i,r in df.iterrows():
if (r['Current Category'] != r['New Category']) & (r['Destination'] == 'China'):
counter+=1
# There are total 68 hotels have category changed in China.
counter
# My answer to the question 2:
# * There are 2185 hotels in this dataset.
# * The hotels are from 30 unique brands.
# * USA has the most hotels in this dataset, with 1545 hotels.
# * There are 68 hotels in China that had a category change.
# ## Question 3
# - What's the percentage of hotels worldwide with category upgrade in 2020?
# +
# first count how many hotels have category increased worldwide.
# create a new column shows total category increased.
df['total_upcatchanged']=df['New Category'] - df['Current Category']
# sliced the data which contains total category at least 1, and then count the total number of hotels.
updf = df.loc[df['total_upcatchanged'] >= 1]
# divide by the total number of hotels
uppercen = round((updf.shape[0]/df.shape[0])*100,2)
print('There are {}% of hotels worldwide with a category upgrade in 2020.'.format(uppercen))
# -
# My answer to the question 3:
# * 77.16% of hotels worldwide had a category upgrade in 2020.
# ## Question 4
# - List hotels with category changes greater than 1 if any, such as changing from category 3 to 5 or from category 7 to 4
# - List all JW Marriott hotels in China that have a category upgrade
# +
# create a new column shows total category change.
df['total_changed'] = abs(df['New Category'] - df['Current Category'])
print(df['total_changed'].value_counts())
# List hotels with category changes greater than 1 if any and subset it into another dataframe.
morechangedf = df.loc[df['total_changed']>1]
morechangedf.head()
# -
a = df.loc[(df['Brand']=='JW Marriott') & (df['Destination']=='China') & (df['total_upcatchanged'] >= 1)]
a
# My answer to the question 4:
# * There is only 1 hotel with a category change greater than 1: Four Points by Sheraton Bali, Ungasan.
# * All JW Marriott hotels in China that have a category upgrade are listed above.
# ## Question 5
# Assume you are in February 2020 and the category changes will take effect on March 4, 2020. You are planning your trip to Florence, Italy and Hong Kong, China in April. You only stay in category 8 hotels (existing category 8 or future category 8) and want to optimize your point spending. Based on the data, which hotel should you book? When should you book your hotels for Florence and Hong Kong? Why?
# get all the hotels which located in Italy and have or will have category level 8.
b = df.loc[((df['Destination']=='Italy') & (df['Current Category']==8)) | ((df['Destination']=='Italy') & (df['New Category']==8))]
b
# get all the hotels which located in Hong Kong and have or will have category level 8.
c = df.loc[((df['Destination']=='China') & (df['Current Category']==8)) | ((df['Destination']=='China') & (df['New Category']==8))]
c
# My answer to the question 5:
# * I would book Cervo Hotel, Costa Smeralda Resort and The <NAME> before March 4, 2020, because these two hotels will be upgraded to category 8 in March. Right now they still cost only 60000 points each, which will increase to 85000 once they become category 8, so booking before March 4 saves points. On the other hand, I would not book any hotel in Hong Kong, since there is no category 8 hotel there.
| marriott-category-analysis-finished-xigang huang.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Example: Transit mask
# If transits have already been discovered, it is best practice to mask them while detrending. This way, the in-transit data points can not influence the detrending. The current version only supports the ``cosine`` and ``lowess`` methods. Additional methods are planned for future releases.
# We begin our example by downloading a TESS dataset:
# +
import numpy as np
from astropy.io import fits
def load_file(filename):
"""Loads a TESS *spoc* FITS file and returns TIME, PDCSAP_FLUX"""
hdu = fits.open(filename)
time = hdu[1].data['TIME']
flux = hdu[1].data['PDCSAP_FLUX']
flux[flux == 0] = np.nan
return time, flux
path = 'https://archive.stsci.edu/hlsps/tess-data-alerts/'
filename = 'hlsp_tess-data-alerts_tess_phot_00207081058-s01_tess_v1_lc.fits'
time, flux = load_file(path + filename)
# -
# We detrend these data blindly (without masking) to prepare for a transit search:
from wotan import flatten
flatten_lc1, trend_lc1 = flatten(
time,
flux,
method='cosine',
window_length=0.4,
return_trend=True,
robust=True
)
# We plot the result:
import matplotlib.pyplot as plt
from matplotlib import rcParams; rcParams["figure.dpi"] = 150
plt.scatter(time, flux, s=1, color='black')
plt.plot(time, trend_lc1, color='blue', linewidth=2)
plt.xlabel('Time (days)')
plt.ylabel('Raw flux')
plt.show();
plt.close()
plt.scatter(time, flatten_lc1, s=1, color='black')
plt.xlabel('Time (days)')
plt.ylabel('Detrended flux');
# There appear to be 2 transits in the data. We use the ``transitleastsquares`` package to get the ephemeris:
from transitleastsquares import transitleastsquares
model = transitleastsquares(time, flatten_lc1)
results = model.power(n_transits_min=1)
print('Period (days)', format(results.period, '.5f'))
print('Duration (days)', format(results.duration, '.5f'))
print('T0 (days)', results.T0)
# These parameters look sensible. We can visually verify that the in-transit points have been determined correctly. For this, we use the feature ``transit_mask`` in ``wotan``:
# +
from wotan import transit_mask
mask = transit_mask(
time=time,
period=results.period,
duration=results.duration,
T0=results.T0)
plt.scatter(time[~mask], flux[~mask], s=1, color='black')
plt.scatter(time[mask], flux[mask], s=1, color='orange')
plt.show()
plt.close()
# -
# That looks good. We can now use this mask to refine our detrending with the knowledge of the in-transit points. More precisely, we instruct the detrender to ignore these points, by setting their weight to zero.
flatten_lc2, trend_lc2 = flatten(
time,
flux,
method='cosine',
window_length=0.4,
return_trend=True,
robust=True,
mask=mask
)
# Let's plot both trends to compare the difference of the blind detrending (blue) and the masked one (red):
plt.scatter(time, flux, s=1, color='black')
plt.plot(time, trend_lc1, color='blue', linewidth=2)
plt.plot(time, trend_lc2, color='red', linewidth=2, linestyle='dashed')
plt.xlabel('Time (days)')
plt.ylabel('Raw flux')
plt.show()
plt.close();
# When zooming into the transit, we can see the effect more clearly:
plt.scatter(time, flux, s=1, color='black')
plt.plot(time, trend_lc1, color='blue', linewidth=2)
plt.plot(time, trend_lc2, color='red', linewidth=2, linestyle='dashed')
plt.xlabel('Time (days)')
plt.ylabel('Raw flux')
plt.xlim(results.T0 - 1, results.T0 + 1)
plt.show()
plt.close();
# Excellent! The red trend (using the transit mask) gives a perfect estimate for further analysis.
| examples/transit mask.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Data Mining and Machine Learning
# ### <NAME>
# ### September 2021
# ### Visualization of Missing Values
#
# In this notebook we will be using the missingno library created by <NAME> (https://github.com/ResidentMario). The missingno module allows us to visualize missing values.
#
# Dataset: Census
import numpy as np
import missingno as msno
#from matplotlib import gridspec
#import matplotlib.pyplot as plt
import pandas as pd
# %matplotlib inline
# Reading the census dataset from the internet
df = pd.read_csv('http://academic.uprm.edu/eacuna/census.csv', sep=',',na_values=[' ?'])
df.columns
msno.bar(df,color="Green")
# ### Displaying an image of the dataframe showing where the missing values are located
msno.matrix(df,color=(.2,.4,.6),sparkline=False)
| notebooks/missingvisualiz2021.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
from pandas import DataFrame
# + pycharm={"name": "#%%\n"}
df: DataFrame = pd.read_csv('../../data/raw/uber.csv')
df.head()
# + pycharm={"name": "#%%\n"}
df = df.replace(0, np.nan).dropna()
df = df.drop(['key', 'Unnamed: 0'], axis=1, errors='ignore')
df = df.drop_duplicates()
df['pickup_datetime'] = pd.to_datetime(df['pickup_datetime'],
format='%Y-%m-%d %H:%M:%S %Z')
# + pycharm={"name": "#%%\n"}
df.dtypes
# + pycharm={"name": "#%%\n"}
df.head()
# + pycharm={"name": "#%%\n"}
from datetime import datetime
import pytz
pst = pytz.timezone("US/Eastern")
pst.localize(datetime(2021, 3, 14, 3)).dst()
# + pycharm={"name": "#%%\n"}
df['pickup_datetime'] = df['pickup_datetime'].dt.tz_convert('US/Eastern')
df
# + pycharm={"name": "#%%\n"}
df.loc[0]['pickup_datetime'].hour
# + pycharm={"name": "#%%\n"}
| notebooks/0.1_data_processing_tests/0.1-mmykhaylov-timezone-conversion.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
#
# <hr style="height:3px;border:none;color:#333;background-color:#333;" />
# <img style=" float:right; display:inline" src="http://opencloud.utsa.edu/wp-content/themes/utsa-oci/images/logo.png"/>
#
# ### **University of Texas at San Antonio**
# <br/>
# <br/>
# <span style="color:#000; font-family: 'Bebas Neue'; font-size: 2.5em;"> **Open Cloud Institute** </span>
#
# <hr style="height:3px;border:none;color:#333;background-color:#333;" />
# ### Machine Learning/BigData EE-6973-001-Fall-2016
#
# <br/>
# <span style="color:#000; font-family: 'Bebas Neue'; font-size: 1.5em;"> **<NAME>, Ph.D.** </span>
#
#
# <span style="color:#000; font-family: 'Bebas Neue'; font-size: 1.5em;"> **<NAME>, Research Fellow** </span>
#
#
# <hr style="height:1.5px;border:none;color:#333;background-color:#333;" />
# <hr style="height:1.5px;border:none;color:#333;background-color:#333;" />
# <span style="color:#000; font-family: 'Bebas Neue'; font-size: 2em;"> **Pedestrian Motion Detection** </span>
# <br/>
# <br/>
# <span style="color:#000; font-family: 'Bebas Neue'; font-size: 1.6em;"> <NAME>, <NAME> </span>
# <span style="color:#000; font-family: 'Bebas Neue'; font-size: 1.4em;"> *Autonomous Controls Lab, University of Texas at San Antonio, San Antonio, Texas, USA* </span>
# <span style="color:#000; font-family: 'Bebas Neue'; font-size: 1.4em;"> {<EMAIL> </span>
# <br/>
# <br/>
# <span style="color:#000; font-family: 'Bebas Neue'; font-size: 1.5em;"> **Project Definition:** </span> <span style="color:#000; font-family: 'Bebas Neue'; font-size: 1.3em;"> This project presents an implementation of an architecture for the detection of a human body using techniques of neural networking and deep learning, jointly called as a system of Artificial Neural Networking(ANN). The process at a high level is to use the concepts of development of the architecture, rule of activity and rule of learning. </span>
#
# <span style="color:#000; font-family: 'Bebas Neue'; font-size: 1.3em;"> The Caltech Pedestrian Dataset consists of approximately 10 hours of 640x480 30Hz video taken from a vehicle driving through regular traffic in an urban environment. About 250,000 frames (in 137 approximately minute long segments) with a total of 350,000 bounding boxes and 2300 unique pedestrians were annotated. [1].</span>
#
#
# <span style="color:#000; font-family: 'Bebas Neue'; font-size: 1.3em;"> The input is through a camera, which captures video and ANN in turn consumes this data and using the training data classifies and predicts the human posture and outputs the results by applying a rectangle on the human figure in the video. </span>
#
# <span style="color:#000; font-family: 'Bebas Neue'; font-size: 1.3em;"> Pedestrian detection has been an important problem for decades, given its relevance to a number of applications in autonomous systems including driver assistance automobiles, road scene understanding, surveillance systems and search and rescue systems. </span>
#
#
# <span style="color:#000; font-family: 'Bebas Neue'; font-size: 1.5em;"> **Outcome:** </span> <span style="color:#000; font-family: 'Bebas Neue'; font-size: 1.3em;"> Applying deep learning to identify people in a camera's image </span>
# [1]: <NAME>, <NAME>, <NAME> and <NAME>
# Pedestrian Detection: An Evaluation of the State of the Art
# PAMI, 2012.
#
# <span style="color:#000; font-family: 'Bebas Neue'; font-size: 1.5em;"> **Dataset:** </span> <span style="color:#000; font-family: 'Bebas Neue'; font-size: 1.3em;"> The image data can be found in [http://www.vision.caltech.edu/Image_Datasets/CaltechPedestrians/] [1]. </span>
#
#
# <div style="width:830; background-color:white; height:220px; overflow:scroll; overflow-x: scroll;overflow-y: hidden;">
#
# <img align="center" src="http://www.gavrila.net/Datasets/Daimler_Pedestrian_Benchmark_D/dc_ped_class_benchmark.gif"/>
#
#
#
# </div>
| project/Pedestrain Motion Detection.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Analysis of Housing Prices using Gradient Boosting
#
#
# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn import ensemble
from sklearn.metrics import mean_absolute_error
# -
df = pd.read_csv('C:/Datasets/Melbourne_Housing/Melbourne_Housing.csv')
# Preview data
df.head(10)
print("Total number of entries: ", len(df))
# +
new_bath = df['Bathroom']
new_bed = df['Bedroom2']
new_rooms = df['Rooms']
print(len(new_bath))
print(len(new_bed))
print(len(new_rooms))
print(new_bed[0:40])
# Count the NaN values in each column of interest
for name, col in [('Bathroom', new_bath), ('Bedroom2', new_bed), ('Rooms', new_rooms)]:
    NaN_Count = int(np.isnan(col).sum())
    print("Number of NaN in df['{}']: {}".format(name, NaN_Count))
# more on column manipulation here:
# http://pytolearn.csd.auth.gr/b4-pandas/40/moddfcols.html
# more on plotting data here:
# https://towardsdatascience.com/linear-regression-using-python-ce21aa90ade6
# -
plt.scatter(new_rooms, new_bath)
plt.show()
# More available [here](https://towardsdatascience.com/linear-regression-using-python-ce21aa90ade6)
sns.pairplot(df, x_vars='Rooms', y_vars='Bedroom2', height=5, aspect=1.0, kind='reg')
df.iloc[26]
df.columns
# Cleaning Data
#
# Spelling mistakes
# +
del df['Address']
del df['Method']
del df['SellerG']
del df['Date']
del df['Postcode']
del df['Lattitude']
del df['Longtitude']
del df['Regionname']
del df['Propertycount']
df.columns
# -
# Remove entries that are missing data.
# Could use median values as substitutes, but we have sufficient data without.
# Important to drop columns before rows to preserve data.
# Later we can try to update bathrooms based on bedrooms.
print(len(df))
df.dropna(axis=0, how='any', thresh=None, subset=None, inplace=True)
print(len(df))
# Replace data with [one-hot encoding](https://medium.com/@contactsunny/label-encoder-vs-one-hot-encoder-in-machine-learning-3fc273365621)
features_df = pd.get_dummies(df, columns = ['Suburb', 'CouncilArea', 'Type'])
features_df.iloc[0]
# Remove the dependent variable from the feature set
del features_df['Price']
# Create the final arrays we will use to train the algorithm.
X = features_df.values
Y = df['Price'].values
print(Y[25:50])
# Split the dataset into the training and test set.
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.3, shuffle=True)
print("Training set count: ",len(Y_train))
print("Test set count: ",len(Y_test))
# Setup algorithm and hyperparameters
# review params page 141-143 and Scikit-learn website
model = ensemble.GradientBoostingRegressor(n_estimators = 150, learning_rate = 0.1, max_depth = 30,
min_samples_split = 4, min_samples_leaf = 6,
max_features=0.6, loss='huber')
# Finally, run the model
model.fit(X_train, Y_train)
# Evaluate results
# +
mae_train = mean_absolute_error(Y_train, model.predict(X_train))
mae_test = mean_absolute_error(Y_test, model.predict(X_test))
result1 = "Training set error: ${0:,.2f}"
result2 = "Test set error: ${0:,.2f}"
print(result1.format(mae_train))
print(result2.format(mae_test))
# -
# Evaluate causes of overfitting, including max_depth = 30 in Gradient Boosting Regressor
| Housing-Gradient-Boosting.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import subprocess
import pandas as pd
import os
import sys
import pprint
import local_models.local_models
import logging
import ml_battery.log
from Todd_eeg_utils import *
import rpy2
import numpy as np
import rpy2.robjects.numpy2ri
from rpy2.robjects.packages import importr
import matplotlib.pyplot as plt
logger = logging.getLogger(__name__)
# -
data_dir = "/home/brown/disk2/eeg/Phasespace/Phasespace/data/eeg-text"
transformed_data_dir = "/home/brown/disk2/eeg/transformed_data"
data_info = pd.read_csv(os.path.join(data_dir, "fileinformation.csv"), skiprows=1).iloc[:,2:]
data_info
data_info.shape
how_many_epis = len([which for which in range(data_info.shape[0]) if data_info.iloc[which,4]>0])
how_many_epis
positive_samples = []
negative_samples = []
for i in range(data_info.shape[0]):
data_file = data_info.iloc[i,0]
data_epipoint = data_info.iloc[i,4]
data_len = data_info.iloc[i,1]
if data_len > data_epipoint > 0:
transformed_data_file_dir = os.path.join(transformed_data_dir, data_file)
transformed_data_files = os.listdir(transformed_data_file_dir)
negative_data_files = sorted([f for f in transformed_data_files if "negative" in f])
positive_data_files = sorted([f for f in transformed_data_files if "negative" not in f])
positive_sample_all_channels = []
negative_sample_all_channels = []
for ndf, pdf in zip(negative_data_files, positive_data_files):
positive_sample_all_channels.append(np.loadtxt(os.path.join(transformed_data_file_dir, pdf))[:,0])
negative_sample_all_channels.append(np.loadtxt(os.path.join(transformed_data_file_dir, ndf))[:,0])
positive_samples.append(np.stack(positive_sample_all_channels,axis=1))
negative_samples.append(np.stack(negative_sample_all_channels,axis=1))
positive_samples[0].shape
positive_samples = np.stack(positive_samples)
negative_samples = np.stack(negative_samples)
positive_samples.shape, negative_samples.shape
np.random.seed(0)
indices = list(range(39))
np.random.shuffle(indices)
train_set = indices[:20]
test_set = indices[20:]
positive_train = positive_samples[train_set]
negative_train = negative_samples[train_set]
positive_test = positive_samples[test_set]
negative_test = negative_samples[test_set]
train = np.concatenate((positive_train, negative_train))
test = np.concatenate((positive_test, negative_test))
train_labels = np.concatenate((np.ones(positive_train.shape[0]), np.zeros(negative_train.shape[0])))
test_labels = np.concatenate((np.ones(positive_test.shape[0]), np.zeros(negative_test.shape[0])))
positive_samples.shape
# +
rpy2.robjects.numpy2ri.activate()
# Set up our R namespaces
R = rpy2.robjects.r
DTW = importr('dtw')
# -
cdists = np.empty((test.shape[0], train.shape[0]))
cdists.shape
# +
timelog = local_models.local_models.loggin.TimeLogger(
logger=logger,
how_often=1, total=len(train_set)*len(test_set)*4,
tag="dtw_matrix")
import gc
# Calculate the alignment vector and corresponding distance
for test_i in range(cdists.shape[0]):
for train_i in range(cdists.shape[1]):
with timelog:
alignment = R.dtw(test[test_i], train[train_i], keep_internals=False, distance_only=True)
dist = alignment.rx('distance')[0][0]
print(dist)
cdists[test_i, train_i] = dist
gc.collect()
R('gc()')
gc.collect()
# -
import sklearn.metrics
cdists.shape
cdists_file = os.path.join(transformed_data_dir, "dtw_cdists.dat")
if "cdists" in globals() and not os.path.exists(cdists_file):
np.savetxt(cdists_file, cdists)
else:
cdists = np.loadtxt(cdists_file)
test_labels.shape
np.argmin(cdists, axis=1).shape
cm = sklearn.metrics.confusion_matrix(test_labels, train_labels[np.argmin(cdists, axis=1)])
print(cm)
pd.DataFrame(np.round(cdists/10**3,0))
np.argmin(cdists, axis=1)
pd.DataFrame(cm, index=[["true"]*2,["-","+"]], columns=[["pred"]*2, ["-", "+"]])
acc = np.sum(np.diag(cm))/np.sum(cm)
prec = cm[1,1]/np.sum(cm[:,1])
rec = cm[1,1]/np.sum(cm[1])
acc,prec,rec
| examples/Todd_eeg_local_gpr_dtw_gprdata.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: IPython (Python 3)
# language: python
# name: python3
# ---
# # Abstract
#
# Agglomerative hierarchical clustering is a widely used clustering algorithm in which data points are sequentially merged according to a distance metric from individual clusters into a single cluster. While many different metrics exist to evaluate inter-cluster distance, single-linkage is among the most popular, in part due to its simplicity. According to this method, pairwise distances are computed between all points, and the clusters containing the two points with the shortest Euclidean distance are joined in each iteration. However, single-linkage has some drawbacks, most notably that it scales quadratically as data grows, since each additional point must be measured against all of the existing ones. As such, Koga, Ishibashi, and Watanabe propose using locality-sensitive hashing (hereafter LSH) as an approximation to the single-linkage method. Under LSH, points are hashed into buckets such that the distance from a given point $p$ need only be computed to the subset of points with which it shares a bucket.
#
# In this paper, we implement LSH as a linkage method for agglomerative hierarchical clustering, optimize the code with an emphasis on reducing the time complexity of the creation of the hash tables, and apply the algorithm to a variety of synthetic and real-world datasets.
#
# **Keywords**: agglomerative hierarchical clustering, locality-sensitive hashing, single-linkage, nearest neighbor search, dendrogram similarity, cophenetic correlation coefficient, unsupervised learning
#
# **Github link**: https://github.com/lisalebovici/LSHLinkClustering
# # Background
#
# As data in the modern age gets easier and easier to quickly accumulate, techniques that can accurately segment or organize it are becoming increasingly important. In particular, unsupervised learning — that is, data which has no "ground truth" representation — finds itself at the center of many fields, ranging from consumer marketing to computer vision to genetics. At its heart, the goal of unsupervised learning is pattern recognition. To this end, algorithms such as k-means and agglomerative hierarchical clustering have been sufficiently effective until now; but as the size of data continues to grow, scalability issues are becoming a bigger barrier to analysis and understanding.
#
# Koga et al.'s *Fast Agglomerative Hierarchical Clustering Algorithm Using Locality-Sensitive Hashing* provides a faster approximation to single-linkage hierarchical agglomerative clustering for large data, which has a run time complexity of $O(n^2)$. The single-linkage method requires that the distances from every point to all other points be calculated, which becomes prohibitively expensive. In contrast, the primary advantage of LSH is that it reduces the number of distances that need to be computed as well as the number of iterations that need to run, resulting in linear run time complexity $O(nB)$ (where $B$ is the maximum number of points in a single hash table).
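#
# As a rough back-of-the-envelope illustration (not the paper's exact accounting), the snippet below counts pairwise distance computations for the brute-force all-pairs case versus an idealized setting where the $n$ points fall evenly into buckets of size $B$:

```python
# Hypothetical count of distance computations: all-pairs vs. bucketed.
# Assumes n points split evenly into m buckets of B = n / m points each.
n, m = 10_000, 100
B = n // m

all_pairs = n * (n - 1) // 2       # single-linkage: every pair measured once
bucketed = m * (B * (B - 1) // 2)  # LSH-style: only pairs within each bucket

print(all_pairs, bucketed)  # 49995000 495000
```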
#
# However, one consequence of the gain in run time is that LSH results in a coarser approximation of the data than single-linkage. More precisely, estimating which points are close with a hash function instead of explicitly examining each candidate comes at the cost of potentially overlooking nearby points. Plus, since all clusters within a threshold distance $r$ are merged, LSH by definition has many fewer iterations than single-linkage. This granularity can be controlled by a parameter within the algorithm, but in general LSH is not expected to reproduce the single-linkage method exactly. Nonetheless, when granularity is not a primary concern, the improvements in terms of efficiency make it a worthwhile alternative.
#
# We will proceed by describing the implementation of LSH for hierarchical clustering.
# # Description of Algorithm
#
# Some important notation will first be defined:
#
# - $n$: number of rows in data
# - $d$: dimension of data
# - $C$: least integer greater than the maximal coordinate value in the data
# - $\ell$: number of hash functions
# - $k$: number of sampled bits from a hashed value
# - $r$: minimal distance between points required to merge clusters
# - $A$: increase ratio of r on each iteration
#
# LSH works in two primary phases: first, by creating a series of hash tables in which the data points are placed, and second, by computing the distances between a point and its "similar" points, as defined by sharing a hash bucket, to determine which clusters should be merged.
#
# **Phase 1: Generation of hash tables**. Suppose we have a $d$-dimensional point $x$. A unary function is applied to each coordinate value of $x$ such that $x_i$ 1s are followed by $C-x_i$ 0s. The sequences of 1s and 0s for all coordinate values of $x$ are then concatenated to form the hashed point of $x$. Then $k$ bits are randomly sampled without replacement from the hashed point. For example, if $x$ is the point $(2, 1, 3)$ and $C = 4$, the hashed point would be $\underbrace{1100}_{2}\underbrace{1000}_{1}\underbrace{1110}_{3}$. If $k = 5$ and the indices $I = \{5, 9, 2, 11, 8\}$ are randomly sampled, then the resulting value would be $11110$. $x$ would thus be placed into the hash table for $11110$.
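The encoding above can be sketched in a few lines; the helper names here are illustrative, not the package's actual functions (indices are 1-based to match the example):

```python
def unary_hash(x, C):
    """Concatenate, for each coordinate x_i, a run of x_i ones followed by C - x_i zeros."""
    return ''.join('1' * xi + '0' * (C - xi) for xi in x)

def sample_bits(bits, indices):
    """Pick the bits at the given 1-indexed positions to form the bucket key."""
    return ''.join(bits[i - 1] for i in indices)

h = unary_hash((2, 1, 3), C=4)            # '110010001110'
key = sample_bits(h, [5, 9, 2, 11, 8])    # '11110'
```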
#
# This hash function is applied to all points, and a point is added to the corresponding hash table if no other point in its cluster is already present. Another set of $k$ indices $I$ is then randomly sampled and the resulting hash function is applied to all of the points. This procedure is repeated $\ell$ times.
#
# The intuition behind this step is as follows: a point $s$ that is very similar to $x$ is likely to have a similar hash sequence to $x$ and therefore appear in many of the same hash tables as $x$. However, for any given hash, there is no guarantee; for example, for $s = (1, 1, 3)$, the hashed point would be $\underbrace{1000}_{1}\underbrace{1000}_{1}\underbrace{1110}_{3}$ and the sampled value would be $11010$. In this case, $s$ would not be in the same bucket as $x$ above. However, by applying $\ell$ hash functions to the data, we have some certainty that if $s$ really is close to $x$, it will share at least one hash table.
#
# **Phase 2: Nearest neighbor search**. For each point $x$, we find all of the points that share at least one hash table with $x$ and are not currently in the same cluster as $x$. These are the similar points which are candidates to have their clusters merged with $x$'s cluster. The distances between $x$ and the similar points are computed, and for any point $p$ for which the Euclidean distance between $x$ and $p$ is less than $r$, $p$'s cluster is merged with $x$'s cluster.
#
# If there is more than one cluster remaining after the merges, the values for $r$ and $k$ are updated and then phases 1 and 2 are repeated. $r$ is increased (we now consider points that are slightly further away than previously to merge more distant clusters) and $k$ is decreased (so that the hashed values become shorter and therefore more common). This continues until all points are in the same cluster, at which point the algorithm terminates.
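The merge step of phase 2 can be sketched with a union-find (disjoint-set) structure over cluster labels. This is a toy illustration with hypothetical helper names, not the implementation in our package; the `candidates` dict stands in for the pairs proposed by the hash tables:

```python
import numpy as np

class DisjointSet:
    """Minimal union-find to track cluster membership during merges."""
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, i):
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]  # path halving
            i = self.parent[i]
        return i
    def union(self, i, j):
        self.parent[self.find(i)] = self.find(j)

def merge_phase(X, candidates, r, ds):
    """For each point p, merge the clusters of candidate neighbors within distance r."""
    for p, neighbors in candidates.items():
        for q in neighbors:
            if ds.find(p) != ds.find(q) and np.linalg.norm(X[p] - X[q]) < r:
                ds.union(p, q)

# toy data: two tight pairs; pretend the hash tables proposed these pairs
X = np.array([[0., 0.], [0.1, 0.], [5., 5.], [5.1, 5.]])
ds = DisjointSet(4)
merge_phase(X, {0: [1], 2: [3]}, r=0.5, ds=ds)
# points 0 and 1 now share a cluster, as do 2 and 3
```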
# #### LSH-Link Algorithm
# > **Input:** Starting values for $\ell$, $k$, and $A$
#
# > **Initialize**:
#
# > > **if** $n < 500$
#
# > > > sample a set $M$ of $\sqrt{n}$ points from the data
#
# > > > $r = \min \text{dist}(p, q)$, where $p, q \in M$
#
# > > **else**
#
# > > > $r = \frac{d * C}{2 * (k + d)}\sqrt{d}$
#
# > **while** num_clusters > 1:
#
# > > **for** $i = 1,..,\ell$:
#
# > > > $unary_C(x) = \underbrace{11...11}_{x}\underbrace{00...00}_{C-x}$
#
# > > > sample $k$ bits from $unary_C(x)$
#
# > > > **if** $x$'s cluster is not in hash table:
#
# > > > > add $x$ to hash table
#
# > > > repeat for $n$ points
#
# > > **for** $p = 1,..,n$:
#
# > > > S = {set of points that share at least one hash table with $p$}
#
# > > > Q = {$q \in S$ s.t. $\text{dist}(p, q) < r$}
#
# > > > merge $Q$'s clusters with $p$'s cluster
#
# > > **if** num_clusters > 1:
#
# > > > $r = A*r$
#
# > > > $k = \frac{d * C}{2 * r}\sqrt{d}$
# # Performance Optimizations
# Run time profiles for `build_hash_tables()` and `LSHLink()` are shown below, along with a comparison to `scipy`'s single-linkage hierarchical clustering and our own implementation of agglomerative hierarchical clustering.
#
# The three different functions were applied to an extended version of the *iris* dataset; we built a set of 1,500 points based on the original 150 points with random noise added. As seen below, `scipy`'s implementation is clearly the superior method, running in an impressive 16.7 ms. Note that this is to be expected since a great deal of it is written in C.
#
# In comparison, our implementation of single-linkage hierarchical clustering, written purely in Python, ran in 2 minutes 46 seconds, while the LSH version ran in 2 minutes 13 seconds. A plot in the *Applications to Simulated Datasets* section below will show that the speed advantage for LSH vs. single-linkage does not necessarily hold for smaller datasets.
#
# Note from the profiling below that the vast majority of the run time in LSH is spent in `build_hash_tables()`; the code for nearest neighbor search and merging clusters is relatively trivial. The results show that of the 157 seconds spent running `LSHLink()`, 151 seconds were in `build_hash_tables()`. As such, nearly all of the optimization efforts were spent in speeding up the creation of the hash tables.
from imports import *
import v1
import funcs
iris = datasets.load_iris().data * 10
iris_big = datasets.load_iris().data
iris_big = v1.data_extend(iris_big, 10) * 10
iris_big += np.abs(np.min(iris_big))
# %timeit -r1 linkage(iris_big, method = 'single')
# %timeit -r1 lsh.singleLink(1, iris_big)
# %timeit -r1 v1.LSHLinkv1(iris_big, A = 1.4, l = 10, k = 100)
# %prun -q -D LSHLink.prof v1.LSHLinkv1(iris_big, A = 1.4, l = 10, k = 100)
p = pstats.Stats('LSHLink.prof')
p.sort_stats('time', 'cumulative').print_stats(
'build_hash_tables')
p.sort_stats('time', 'cumulative').print_stats(
'LSHLink')
pass
# The primary optimization in the second version of the code was caching the hash points using `lru_cache()` from `functools`. By storing the hashed unary values and point representations, `build_hash_tables()` only had to compute each point's hash value on the initial run, rather than on each iteration; this was possible since a point's unary representation remains the same for a given $C$ value, and $C$ does not change throughout the course of the algorithm.
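A minimal illustration of the caching idea, assuming points are passed as hashable tuples (as `lru_cache()` requires); the function name is hypothetical:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def unary_repr(point, C):
    """Unary encoding of a point; computed once per (point, C) pair, then cached
    across iterations since C never changes during the algorithm."""
    return ''.join('1' * xi + '0' * (C - xi) for xi in point)

unary_repr((2, 1, 3), 4)  # computed on the first call
unary_repr((2, 1, 3), 4)  # served from the cache on repeat calls
print(unary_repr.cache_info())  # hits=1, misses=1
```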
#
# The other optimization was to introduce more control logic into the nearest neighbor search. Specifically, additional "if" statements were added, such that if no points were found within distance $r$ of point $p$, that iteration of the `for` loop terminates rather than continuing on to attempt to find clusters that can be merged.
#
# These changes reduced the run time of LSH to around 20 seconds in our tests on the *iris* version with 1500 points. Run times are shown below.
lsh.clear_caches()
# %timeit -r1 lsh.LSHLink(iris_big, A = 1.4, l = 10, k = 100, seed1 = 12, seed2 = 6)
lsh.clear_caches()
# %prun -q -D LSHLink2.prof lsh.LSHLink(iris_big, A = 1.4, l = 10, k = 100, seed1 = 12, seed2 = 6)
p = pstats.Stats('LSHLink2.prof')
p.sort_stats('time', 'cumulative').print_stats(
'build_hash_tables')
p.sort_stats('time', 'cumulative').print_stats(
'LSHLink')
pass
# Since our `build_hash_tables()` function includes a loop to create the various hash tables ($\ell$ hash tables, to be precise), it's a good candidate for multiprocessing. More specifically, if we create each of the hash tables in parallel, the total time to run should be reduced.
#
# Of course, for these tasks to be truly parallel there can't be any interdependencies, which is not the case in our original `build_hash_tables()` function. Each hash function updates a shared dictionary, which would slow down or completely derail an attempt at parallel execution since each competing hash table would lock the dictionary from the others when updating. So to utilize multiprocessing, we had to redefine the workhorse function as `build_hash_table()` so that it created and returned its own individual dictionary, which can then be zipped with the others into a single hash table.
#
# There is overhead associated with multiprocessing, so minimal or no efficiency gains are seen for small datasets with limited numbers of hash tables (small $\ell$, in other words). But as $\ell$ or $n$ increases, multiprocessing becomes more and more useful, allowing the parallel version of the function to eventually outperform the original. We illustrate this below, and the raw data can be found in the Appendix.
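The design change described above can be sketched as follows. Each worker builds and returns its own private dict, so no shared structure is locked; the argument layout and names are hypothetical simplifications of our actual functions:

```python
from multiprocessing import Pool

def build_hash_table(args):
    """Build ONE hash table as a private dict keyed by sampled bit string."""
    C, indices, points = args
    table = {}
    for pid, x in enumerate(points):
        bits = ''.join('1' * xi + '0' * (C - xi) for xi in x)
        key = ''.join(bits[i] for i in indices)  # indices are 0-based here
        table.setdefault(key, []).append(pid)
    return table

def build_hash_tables_parallel(C, index_sets, points, processes=2):
    """Build the l tables in parallel; workers return independent dicts
    that are simply collected afterwards, avoiding any lock contention."""
    with Pool(processes) as pool:
        return pool.map(build_hash_table, [(C, idx, points) for idx in index_sets])

if __name__ == '__main__':
    pts = [(2, 1, 3), (1, 1, 3)]
    tables = build_hash_tables_parallel(4, [[4, 8, 1, 10, 7], [0, 1, 2, 3, 4]], pts)
```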
# +
mp1_x = [10, 20, 30, 40, 50, 100, 200, 500]
mp1_y = [.162, .291, .426, .586, .703, 1.42, 2.83, 7.23]
bht1_x = [10, 20, 30, 40, 50, 100, 200, 500]
bht1_y = [.256, .423, .641, .993, 1.02, 2.01, 4.14, 10.9]
funcs.mp_run_times(mp1_x, mp1_y, bht1_x, bht1_y,
title = 'Run Time Comparisons for build_hash_tables(): Varying l',
xlabel = r'$\ell$',
ylabel = 'time (seconds)')
# +
mp2_x = [150, 300, 450, 600, 750, 1050, 1500, 2100, 4950, 10050, 15000]
mp2_y = [.181, .391, .440, .577, .925, 1.1, 1.83, 2.05, 6.81, 13, 20]
bht2_x = [150, 300, 450, 600, 750, 1050, 1500, 2100, 4950, 10050, 15000]
bht2_y = [.257, .530, .624, .821, 1.06, 1.45, 2.23, 2.96, 8.93, 18, 26.3]
funcs.mp_run_times(mp2_x, mp2_y, bht2_x, bht2_y,
title = 'Run Time Comparisons for build_hash_tables(): Varying n',
xlabel = r'$n$',
ylabel = 'time (seconds)')
# -
# That being said, multiprocessing ultimately didn't improve the timing nearly as much as simply caching the unary representation of points (i.e. the bitstream pre-sampling). That fact, combined with technical difficulties associated with marrying the two approaches, led us to only include the cached version in the final package without any parallelization.
# # Applications to Simulated Datasets
# ### Iris Dataset
# The algorithm was first applied to the *iris* dataset, as in the paper by Koga et al., in an effort to reproduce their results. Below, we compare the dendrograms produced from running single-linkage hierarchical clustering with our implementation of LSH under two slightly different parameterizations. In both instances, we set initial values for $k$ and $\ell$ to be 100 and 10, respectively. However, the first run of LSH has the increase ratio $A = 1.4$ while the second run has $A = 2.0$.
z = linkage(iris, method="single")
dendrogram(z)
plt.gcf().set_size_inches(12, 6)
plt.title('Single-Linkage Hierarchical Clustering')
plt.show();
clusters, Z = lsh.LSHLink(iris, A = 1.4, l = 10, k = 100,
dendrogram = True, seed1 = 12, seed2 = 6)
dendrogram(Z, color_threshold = 18)
plt.gcf().set_size_inches(12, 6)
plt.title('LSH-Link Hierarchical Clustering, A = 1.4')
plt.show();
clusters2, Z2 = lsh.LSHLink(iris, A = 2.0, l = 10, k = 100,
dendrogram = True, seed1 = 12, seed2 = 6)
dendrogram(Z2, color_threshold = 18)
plt.gcf().set_size_inches(12, 6)
plt.title('LSH-Link Hierarchical Clustering, A = 2.0')
plt.show();
# As in the paper, we have confirmed that the members of the top two clusters match perfectly across all three implementations (note that the green cluster in the first figure above is equivalent to the red clusters in the second and third figures). However, as expected, the increase in $A$ corresponds to a sparser dendrogram; the algorithm is now making bigger leaps in $r$ with each iteration, so it will merge a greater number of clusters at a given time. The height of the dendrogram shows us the point in time at which two clusters merged. For single-linkage, we see that there are $n-1$ unique merge times, which is indeed the behavior of the single-linkage method. Under LSH with $A = 1.4$, there are six iterations of cluster merges, while under LSH with $A = 2.0$ there are only four iterations until the data is fully merged. If a high level of precision is not needed, however, the graphs above show that LSH does give us a rough approximation of single-linkage.
# One way to measure the similarity between two dendrograms is called cophenetic correlation. For two dendrograms $X$ and $Y$ it is given by
#
# $$
# r_{X,Y} = \frac{\sum_{i<j}(X_{ij}-\bar{x})(Y_{ij}-\bar{y})}
# {\sqrt{\sum_{i<j}(X_{ij}-\bar{x})^2\sum_{i<j}(Y_{ij}-\bar{y})^2}}
# $$
#
# where $X_{ij}$ is the height of the node in the dendrogram where points $i$ and $j$ are first joined and $\bar{x}$ is the mean of all those heights. `scipy` also provides a built-in function that applies cophenetic correlation to dendrograms.
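Before using `scipy`'s version below, the formula can be checked directly; here `X` and `Y` are full symmetric matrices of merge heights, and the function name is our own for illustration:

```python
import numpy as np

def cophenetic_corr(X, Y):
    """Pearson correlation between the upper triangles of two cophenetic
    (merge-height) matrices, per the formula above."""
    iu = np.triu_indices_from(X, k=1)      # the i < j pairs
    x, y = X[iu], Y[iu]
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))

# a dendrogram compared with itself gives correlation 1
X = np.array([[0., 1., 4.],
              [1., 0., 4.],
              [4., 4., 0.]])
cophenetic_corr(X, X)  # 1.0
```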
#
# We don't expect perfect correlation between the dendrograms created by single-linkage and LSH for two reasons. First of all, the very point of LSH is that the minimum distances are approximate so it's always possible that close points don't get bucketed by the same hashes, in which case they may meet at a node unrealistically near the root. More importantly though, the height of each node isn't controlled by distance between points but by the minimum threshold distance $r$.
#
# That being said, the correlation is _near_ perfect. Moreover, the correlation has an inverse relationship with the increase ratio: as $A$ decreases, the correlation inches closer to 1. Since $A$ can be loosely interpreted as a learning rate parameter, this relationship is intuitive: the slower we increase $r$, the more closely the algorithm approximates single linkage (or any algorithm that considers more inter-point distances). Here we see the cophenetic correlation as calculated for three values of $A$:
# +
Z1 = linkage(iris, method="single")
clusters2, Z2 = lsh.LSHLink(iris, A = 2.0, l = 10, k = 100,
dendrogram = True, seed1 = 12, seed2 = 6)
clusters3, Z3 = lsh.LSHLink(iris, A = 1.4, l = 10, k = 100,
dendrogram = True, seed1 = 12, seed2 = 6)
clusters4, Z4 = lsh.LSHLink(iris, A = 1.2, l = 10, k = 100,
dendrogram = True, seed1 = 12, seed2 = 6)
C1 = cophenet(Z1)
C2 = cophenet(Z2)
C3 = cophenet(Z3)
C4 = cophenet(Z4)
print(np.corrcoef(C1, C2)[0,1])
print(np.corrcoef(C1, C3)[0,1])
print(np.corrcoef(C1, C4)[0,1])
# -
# In addition to analyzing accuracy between single-linkage and LSH, we must look at performance. To some extent, it is possible to control the run time for LSH by changing the initial parameters, particularly $A$. Setting a larger starting value for $A$ will result in a coarser but quicker run, while a smaller value will provide a slower and finer approximation. The size of the data also plays a significant role. For small datasets, single-linkage can be equivalent to or even outperform LSH. This is due to the fact that LSH has non-trivial overhead in computing the hash tables for each iteration; when there are not many data points, it can be slower to calculate the hash values on each iteration than to just do $n - 1$ merges as in single-linkage. As the dataset grows however, LSH scales linearly ($O(n)$) while single-linkage scales quadratically ($O(n^2)$). This is when LSH becomes particularly useful, as long as a high degree of precision is not needed.
#
# Below, we show the run time in seconds on an enlarged *iris* dataset (with noise added as mentioned above) on four different hierarchical clustering implementations: scipy's single-linkage, our implementation of single-linkage, LSH with $A = 1.4$, and LSH with $A = 2.0$. Due to prohibitive run times on our single-linkage implementation for large datasets, we have only shown up to 3,000 data points.
funcs.iris_run_times()
# `scipy` is clearly orders of magnitude faster and is only shown above as an example of the theoretical possibility for LSH. For discussion purposes, we look at our custom implementation of single-linkage compared to LSH under the two different parameterizations $A = 1.4$ and $A = 2.0$. The single-linkage method clearly grows quadratically and performs relatively worse as the data size increases; on the other hand, LSH performs nearly linearly (we would expect that fully optimized code would have precisely linear run time). We observe that, although for small datasets LSH does not necessarily represent an improvement, the gains in efficiency for larger data can be substantial.
# ### Synthetic Data of Abstract Shapes
# We now illustrate our implementation of LSH for agglomerative hierarchical clustering on several generated datasets containing different shapes.
#
# We start with a dataset of 200 data points, replicating synthetic data from the Koga et al. paper:
tencircles = np.genfromtxt('../data/tencircles.csv', delimiter=",")
lsh.plot_square(tencircles)
# Running our implementation of LSH clustering on this data, with a cutoff of 10 clusters, yields the following:
lsh.plot_clusters(tencircles, cutoff=10, scale=10, linkage='LSH', A=1.4, k=10, l=100)
# As clearly shown, the algorithm successfully identifies the ten clusters that are visible to the human eye. Intuitively, a researcher might also identify a way to break the data into two clusters: the five groups on the left and the five groups on the right. The algorithm, however, fails to capture this:
lsh.plot_clusters(tencircles, cutoff=2, scale=10, linkage='LSH', A=1.2, k=10, l=100)
# This could potentially be improved by changing some combination of the parameters set by the researcher, $A$, $k$, or $\ell$. It is worth noting that the two center groups do appear closer to each other than does the center left with the two top left groups, so it is not unreasonable to expect that the algorithm would fail to recognize the pattern.
# Now we turn to two other fabricated datasets in different shapes: the classic spiral that is often the canonical example for agglomerative hierarchical clustering (300 points) and a dataset with clusterings of different kinds of shapes (500 points). In both instances, the algorithm successfully identifies what we would perceive to be distinct clusters.
spirals = np.genfromtxt('../data/spirals.csv', delimiter=",")[1:, 1:]
lsh.plot_square(spirals)
lsh.plot_clusters(spirals, cutoff=2, scale=100, linkage='LSH', A=1.4, k=10, l=100)
smiley = np.genfromtxt('../data/smiley.csv', delimiter=",")[1:, 1:]
lsh.plot_square(smiley)
lsh.plot_clusters(smiley, cutoff=4, scale=100, linkage='LSH', A=1.4, k=10, l=100)
# # Applications to Real Datasets
#
# We apply LSH to the same real dataset that the paper uses, which is a sample of gene expression data collected from diabetic mice featuring 28 dimensions of cDNA. Again, we see that the dendrogram created by our clustering algorithm is not only similar to the one that single linkage yields, but that by decreasing the $A$ parameter to more slowly grow the threshold distance $r$, the results start to converge. In particular, an $A$ value of 2.0 resulted in about 87% cophenetic correlation with the single linkage dendrogram, while an $A$ value of 1.2 resulted in nearly 91% cophenetic correlation.
# +
gds10 = np.genfromtxt('../data/GDS10.csv', delimiter=",")[1:, 3:]
gds = gds10[~np.isnan(gds10).any(axis=1)][1:501]
Z1 = linkage(gds, method="single")
_, Z2 = lsh.LSHLink(gds, A = 2.0, l = 40, k = 100,
dendrogram = True, seed1 = 12, seed2 = 6)
_, Z3 = lsh.LSHLink(gds, A = 1.6, l = 40, k = 100,
dendrogram = True, seed1 = 12, seed2 = 6)
_, Z4 = lsh.LSHLink(gds, A = 1.2, l = 40, k = 100,
dendrogram = True, seed1 = 12, seed2 = 6)
C1 = cophenet(Z1)
C2 = cophenet(Z2)
C3 = cophenet(Z3)
C4 = cophenet(Z4)
# -
print(np.corrcoef(C1, C2)[0,1])
print(np.corrcoef(C1, C3)[0,1])
print(np.corrcoef(C1, C4)[0,1])
# We turned to Major League Baseball (MLB) for a second real dataset that was not in the paper. The league has cameras installed in all of its stadiums that capture the speed and trajectory of pitched balls. There are libraries available that let users download this so-called PITCHF/x data. For our purposes, we gathered 1,000 of Kershaw's pitches, filtering for the pitch types of either fastball or curveball. While the data has several dozen columns, for the sake of visualization we select three that we know to be important: the speed of the pitch (in miles per hour), the horizontal break (in inches), and the vertical break (in inches). We expect this data to cluster well because fastballs and curveballs are very distinct pitches, especially for a good player like Kershaw, with the former being faster (naturally) and having less break in either direction. Indeed, not only does the LSH algorithm break the clusters apart easily, but when comparing the classifications back to the original identities of the pitch types, we see 100% accuracy.
# +
kershaw = pd.read_csv('../data/Kershaw.csv').iloc[:, 1:5]
kershaw_data = np.array(kershaw.iloc[:, 0:3])
kershaw_labels = np.array(kershaw.iloc[:, 3])
a = np.array([0])
b = np.min(kershaw_data, axis = 0)[1:3]
shift = np.r_[a, b]
kershaw_data_shifted = kershaw_data - shift
kersh_clusters = lsh.LSHLink(kershaw_data_shifted,
A = 1.4, l = 10, k = 100,
seed1 = 12, seed2 = 6, cutoff = 2)
# -
kersh_full = np.c_[kershaw_data, kersh_clusters]
# +
g1 = np.where(kersh_clusters == np.unique(kersh_clusters)[0])[0]
g2 = np.where(kersh_clusters == np.unique(kersh_clusters)[1])[0]
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(kersh_full[g1, 0], kersh_full[g1, 1], kersh_full[g1, 2],
alpha = 0.4, c = 'red')
ax.scatter(kersh_full[g2, 0], kersh_full[g2, 1], kersh_full[g2, 2],
alpha = 0.4, c = 'blue')
ax.view_init(azim=200)
ax.set_xlabel('Speed')
ax.set_ylabel('Horizontal Break')
ax.set_zlabel('Vertical Break')
plt.show()
# -
class_LSH = kersh_clusters == 32
class_TRUE = kershaw_labels == 'FF'
sum(np.equal(class_LSH, class_TRUE))/kershaw.shape[0]
# # Comparative Analysis with Competing Algorithms
#
# Another popular clustering algorithm is k-means, which, given a number of clusters, randomly initializes that many cluster centers and then iteratively adjusts their positions while classifying points based on the closest center until no points are being reclassified. We wrote a k-means clustering algorithm from scratch that performs equally well to our LSH algorithm on the fastball/curveball data from above:
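The procedure just described (Lloyd's algorithm) can be sketched as below. This is a simplified illustration, not the `funcs.kmeans` implementation used in the next cell, and it assumes no cluster ever goes empty:

```python
import numpy as np

def kmeans(k, X, n_iter=100, seed=0):
    """Minimal Lloyd's algorithm: assign each point to its nearest center,
    recompute centers as cluster means, stop when assignments stabilize."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.full(len(X), -1)
    for _ in range(n_iter):
        # n-by-k matrix of point-to-center distances
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        new_labels = dists.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers
```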
np.random.seed(1)
clust, cen = funcs.kmeans(2, kershaw_data)
# +
kersh_full = np.c_[kershaw_data, clust]
g1 = np.where(kersh_clusters == np.unique(kersh_clusters)[0])[0]
g2 = np.where(kersh_clusters == np.unique(kersh_clusters)[1])[0]
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(kersh_full[g1, 0], kersh_full[g1, 1], kersh_full[g1, 2],
alpha = 0.4, c = 'red')
ax.scatter(kersh_full[g2, 0], kersh_full[g2, 1], kersh_full[g2, 2],
alpha = 0.4, c = 'blue')
ax.view_init(azim=200)
ax.set_xlabel('Speed')
ax.set_ylabel('Horizontal Break')
ax.set_zlabel('Vertical Break')
plt.show()
# -
# However, k-means does have a comparative advantage over both single linkage and LSH clustering when we introduce a third pitch, a change-up, that from a speed and break perspective, is somewhere between a fastball and curveball. The two aforementioned algorithms struggle to find three unique clusters in this data since they work from point to point instead of from inside out via cluster centers, so overlapping boundary regions between pitches prove confusing. However, k-means easily separates the three:
kershawFCC = pd.read_csv('../data/KershawFCC.csv').iloc[:, 1:5]
kershawFCC_data = np.array(kershawFCC.iloc[:, 0:3])
kershawFCC_labels = np.array(kershawFCC.iloc[:, 3])
np.random.seed(1)
FCCclust, FCCcen = funcs.kmeans(3, kershawFCC_data)
kersh_full = np.c_[kershawFCC_data, FCCclust]
g1 = np.where(FCCclust == np.unique(FCCclust)[0])[0]
g2 = np.where(FCCclust == np.unique(FCCclust)[1])[0]
g3 = np.where(FCCclust == np.unique(FCCclust)[2])[0]
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(kersh_full[g1, 0], kersh_full[g1, 1], kersh_full[g1, 2], alpha = 0.5)
ax.scatter(kersh_full[g2, 0], kersh_full[g2, 1], kersh_full[g2, 2], alpha = 0.5)
ax.scatter(kersh_full[g3, 0], kersh_full[g3, 1], kersh_full[g3, 2], alpha = 0.5)
ax.view_init(azim=160)
ax.set_xlabel('start_speed')
ax.set_ylabel('pfx_x')
ax.set_zlabel('pfx_z')
plt.show()
# That being said, k-means struggles with non-spherical clusters where classes don't necessarily have similar distances from a central point like the synthetic spiral data shown above.
# An interesting hybrid between single linkage hierarchical clustering and k-means clustering is centroid linkage hierarchical clustering. The algorithm starts by making each individual point the center of its own class, and then iteratively combines and recalculates centroids based on shortest distance. Like single linkage, it successively merges points, but like k-means the classifications are defined by a centroid that averages over the members of its class.
#
# This method, not surprisingly, produces similar results to our LSH algorithm, as demonstrated by the 97.3% cophenetic correlation between the dendrograms. One interesting note about the centroid algorithm is that, due to its nature, the dendrogram nodes do not necessarily increase monotonically in height. If you look closely at the dendrogram you can see several examples of a merge between nodes occurring below the heights of the two children.
#
# This is because when classes merge, the centroid of the newly created group is recalculated as an average of the coordinates of the points that make it up. Therefore, it's possible that the centroid could be closer to another centroid than were either of its children. As a result, you see in the dendrogram that some merges grow down (slightly) instead of up. This means that the usual feature of being able to cut a dendrogram at any height to reveal the clusters at that juncture does not apply to centroid linkage.
Z1 = linkage(iris, method="centroid")
dendrogram(Z1)
plt.gcf().set_size_inches(12, 6)
plt.title('Centroid Hierarchical Clustering')
plt.show();
clusters2, Z2 = lsh.LSHLink(iris, A = 1.4, l = 10, k = 100,
dendrogram = True, seed1 = 12, seed2 = 6)
C1 = cophenet(Z1)
C2 = cophenet(Z2)
print(np.corrcoef(C1, C2)[0,1])
# # Discussion / Conclusion
#
# Unsupervised learning, and clustering in particular, are currently hot topics in the world of machine learning and data science because they have so many applications, from bioinformatics to customer segmentation. With the propagation of high-dimensional data, efficient versions of clustering algorithms become of utmost importance, since the difference between linear and quadratic runtime for large datasets can separate theoretical use from practical implementations. For those reasons, locality-sensitive hashing for agglomerative hierarchical clustering certainly fulfills a need in the realm of statistical computing.
#
# An extension of the algorithm that wasn’t mentioned in the original paper might consider how to apply it to a wider variety of datasets. In other words, even though we specified various implementation details to deal with cases of negative numbers or rounding, how to run the algorithm on data with categorical or factor columns was never addressed. Would one-hot encoding solve the problem, or are more rigorous transformations necessary? A more dynamic version of the algorithm that could parse data of less homogeneous types could be applied to more datasets, and would most likely open doors to new domains.
#
# And finally, one of the limitations (that is by no means uncommon in machine learning) of the algorithm was how much manual and ad-hoc tuning of parameters was required to find optimal solutions. While some guidance was given in the original paper, applying the algorithm to new datasets requires lots of fiddling around and various initializations of the parameters; and given how many there are, this can be quite time-consuming. Further research might seek to establish rules of thumb in terms of tuning.
# # References / Bibliography
#
# 1. <NAME>, <NAME>, <NAME>, Lyons PA et al. Combining mouse congenic strains and microarray gene expression analyses to study a complex trait: the NOD model of type 1 diabetes. Genome Res 2002 Feb;12(2):232-43. PMID: 11827943
# 2. <NAME>, <NAME>, <NAME> (2006) Fast agglomerative hierarchical clustering algorithm using Locality-Sensitive Hashing. In: Knowledge and Information Systems 12(1): 25-53
# 3. <NAME> (2015). pitchRx: Tools for Harnessing 'MLBAM' 'Gameday' Data and Visualizing 'pitchfx'. R package version 1.8.2.
# # Appendix
n, d = iris.shape
clusters = np.arange(n)
C = int(np.ceil(np.max(iris))) + 1
k = 10
# +
### increase l
# -
l = 10
args = []
for i in range(l):
args.append((C, k, iris))
# l = 10
# %timeit -r1 funcs.mp(v1.build_hash_table, args)
# %timeit -r1 v1.build_hash_tables(C, d, l, k, iris, clusters)
l = 20
args = []
for i in range(l):
args.append((C, k, iris))
# l = 20
# %timeit -r1 funcs.mp(v1.build_hash_table, args)
# %timeit -r1 v1.build_hash_tables(C, d, l, k, iris, clusters)
l = 30
args = []
for i in range(l):
args.append((C, k, iris))
# l = 30
# %timeit -r1 funcs.mp(v1.build_hash_table, args)
# %timeit -r1 v1.build_hash_tables(C, d, l, k, iris, clusters)
l = 40
args = []
for i in range(l):
args.append((C, k, iris))
# l = 40
# %timeit -r1 funcs.mp(v1.build_hash_table, args)
# %timeit -r1 v1.build_hash_tables(C, d, l, k, iris, clusters)
l = 50
args = []
for i in range(l):
args.append((C, k, iris))
# l = 50
# %timeit -r1 funcs.mp(v1.build_hash_table, args)
# %timeit -r1 v1.build_hash_tables(C, d, l, k, iris, clusters)
l = 100
args = []
for i in range(l):
args.append((C, k, iris))
# l = 100
# %timeit -r1 funcs.mp(v1.build_hash_table, args)
# %timeit -r1 v1.build_hash_tables(C, d, l, k, iris, clusters)
l = 200
args = []
for i in range(l):
args.append((C, k, iris))
# l = 200
# %timeit -r1 funcs.mp(v1.build_hash_table, args)
# %timeit -r1 v1.build_hash_tables(C, d, l, k, iris, clusters)
l = 500
args = []
for i in range(l):
args.append((C, k, iris))
# l = 500
# %timeit -r1 funcs.mp(v1.build_hash_table, args)
# %timeit -r1 v1.build_hash_tables(C, d, l, k, iris, clusters)
# +
### increase n
# -
k = 10
l = 10
iris = datasets.load_iris().data
iris = v1.data_extend(iris, 1) * 10
iris += np.abs(np.min(iris))
n, d = iris.shape
clusters = np.arange(n)
C = int(np.ceil(np.max(iris))) + 1
args = []
for i in range(l):
args.append((C, k, iris))
# n = 150
# %timeit -r1 funcs.mp(v1.build_hash_table, args)
# %timeit -r1 v1.build_hash_tables(C, d, l, k, iris, clusters)
iris = datasets.load_iris().data
iris = v1.data_extend(iris, 2) * 10
iris += np.abs(np.min(iris))
n, d = iris.shape
clusters = np.arange(n)
C = int(np.ceil(np.max(iris))) + 1
args = []
for i in range(l):
args.append((C, k, iris))
# n = 300
# %timeit -r1 funcs.mp(v1.build_hash_table, args)
# %timeit -r1 v1.build_hash_tables(C, d, l, k, iris, clusters)
iris = datasets.load_iris().data
iris = v1.data_extend(iris, 3) * 10
iris += np.abs(np.min(iris))
n, d = iris.shape
clusters = np.arange(n)
C = int(np.ceil(np.max(iris))) + 1
args = []
for i in range(l):
args.append((C, k, iris))
# n = 450
# %timeit -r1 funcs.mp(v1.build_hash_table, args)
# %timeit -r1 v1.build_hash_tables(C, d, l, k, iris, clusters)
iris = datasets.load_iris().data
iris = v1.data_extend(iris, 4) * 10
iris += np.abs(np.min(iris))
n, d = iris.shape
clusters = np.arange(n)
C = int(np.ceil(np.max(iris))) + 1
args = []
for i in range(l):
args.append((C, k, iris))
# n = 600
# %timeit -r1 funcs.mp(v1.build_hash_table, args)
# %timeit -r1 v1.build_hash_tables(C, d, l, k, iris, clusters)
iris = datasets.load_iris().data
iris = v1.data_extend(iris, 5) * 10
iris += np.abs(np.min(iris))
n, d = iris.shape
clusters = np.arange(n)
C = int(np.ceil(np.max(iris))) + 1
args = []
for i in range(l):
args.append((C, k, iris))
# n = 750
# %timeit -r1 funcs.mp(v1.build_hash_table, args)
# %timeit -r1 v1.build_hash_tables(C, d, l, k, iris, clusters)
iris = datasets.load_iris().data
iris = v1.data_extend(iris, 7) * 10
iris += np.abs(np.min(iris))
n, d = iris.shape
clusters = np.arange(n)
C = int(np.ceil(np.max(iris))) + 1
args = []
for i in range(l):
args.append((C, k, iris))
# n = 1050
# %timeit -r1 funcs.mp(v1.build_hash_table, args)
# %timeit -r1 v1.build_hash_tables(C, d, l, k, iris, clusters)
iris = datasets.load_iris().data
iris = v1.data_extend(iris, 10) * 10
iris += np.abs(np.min(iris))
n, d = iris.shape
clusters = np.arange(n)
C = int(np.ceil(np.max(iris))) + 1
args = []
for i in range(l):
args.append((C, k, iris))
# n = 1500
# %timeit -r1 funcs.mp(v1.build_hash_table, args)
# %timeit -r1 v1.build_hash_tables(C, d, l, k, iris, clusters)
iris = datasets.load_iris().data
iris = v1.data_extend(iris, 14) * 10
iris += np.abs(np.min(iris))
n, d = iris.shape
clusters = np.arange(n)
C = int(np.ceil(np.max(iris))) + 1
args = []
for i in range(l):
args.append((C, k, iris))
# n = 2100
# %timeit -r1 funcs.mp(v1.build_hash_table, args)
# %timeit -r1 v1.build_hash_tables(C, d, l, k, iris, clusters)
iris = datasets.load_iris().data
iris = v1.data_extend(iris, 33) * 10
iris += np.abs(np.min(iris))
n, d = iris.shape
clusters = np.arange(n)
C = int(np.ceil(np.max(iris))) + 1
args = []
for i in range(l):
args.append((C, k, iris))
# n = 4950
# %timeit -r1 funcs.mp(v1.build_hash_table, args)
# %timeit -r1 v1.build_hash_tables(C, d, l, k, iris, clusters)
iris = datasets.load_iris().data
iris = v1.data_extend(iris, 67) * 10
iris += np.abs(np.min(iris))
n, d = iris.shape
clusters = np.arange(n)
C = int(np.ceil(np.max(iris))) + 1
args = []
for i in range(l):
args.append((C, k, iris))
# n = 10050
# %timeit -r1 funcs.mp(v1.build_hash_table, args)
# %timeit -r1 v1.build_hash_tables(C, d, l, k, iris, clusters)
iris = datasets.load_iris().data
iris = v1.data_extend(iris, 100) * 10
iris += np.abs(np.min(iris))
n, d = iris.shape
clusters = np.arange(n)
C = int(np.ceil(np.max(iris))) + 1
args = []
for i in range(l):
args.append((C, k, iris))
# n = 15000
# %timeit -r1 funcs.mp(v1.build_hash_table, args)
# %timeit -r1 v1.build_hash_tables(C, d, l, k, iris, clusters)
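# The nine nearly identical cells above differ only in the row-extension factor passed to `v1.data_extend`, so they can be collapsed into one parameterized loop. A sketch under stated assumptions: `data_extend` is approximated here by row-tiling (the real helper lives in the author's `v1` module), and the values of `l` and `k` (defined in earlier cells) are guesses.

```python
import numpy as np
from sklearn import datasets

def data_extend(X, factor):
    # stand-in for v1.data_extend: repeat the rows `factor` times (assumption)
    return np.tile(X, (factor, 1))

l, k = 4, 2  # assumed hash-table count and key length; defined in earlier cells
ns = []
for factor in [3, 4, 5, 7, 10, 14, 33, 67, 100]:
    iris = datasets.load_iris().data
    iris = data_extend(iris, factor) * 10
    iris += np.abs(np.min(iris))
    n, d = iris.shape
    clusters = np.arange(n)
    C = int(np.ceil(np.max(iris))) + 1
    args = [(C, k, iris) for _ in range(l)]
    ns.append(n)
    # %timeit -r1 funcs.mp(v1.build_hash_table, args)
    # %timeit -r1 v1.build_hash_tables(C, d, l, k, iris, clusters)
```

Each iteration reproduces one of the `n = 450 ... 15000` settings benchmarked cell by cell above.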
| report/An Implementation of Fast Agglomerative Hierarchical Clustering Algorithm Using Locality-Sensitive Hashing by Walker Harrison and Lisa Lebovici.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="uco9yMnAmbIm" colab={"base_uri": "https://localhost:8080/"} outputId="fa0739b7-ce60-4f9a-debe-681f75df2470"
import pickle
import zipfile
from google.colab import drive
drive.mount('/content/drive')
# update path to import from Drive
import sys
sys.path.append('/content/drive/MyDrive')
import numpy as np
import torch
import matplotlib.pyplot as plt
from torch import nn
from torch.utils.data import Dataset, TensorDataset, DataLoader
import torch.optim as optim
import time
from time import sleep
from tqdm import tqdm
import os
seed = 42
torch.manual_seed(seed)
device = 'cuda:0' # "cpu"
# + colab={"base_uri": "https://localhost:8080/"} id="qUdNaX-9LRQq" outputId="b9430bf6-d139-47fa-9052-3a856a74e3f2"
folder_path = './'
data_path = folder_path + 'FFPN-Ellipse-TrainingData-0.015IndividualNoise.pkl'
if os.path.isfile(data_path):
print("FFPN data .pkl file already exists.")
else:
print("Extracting data from .pkl file.")
with zipfile.ZipFile('/content/drive/MyDrive/FixedPointNetworks/FFPN-Ellipse-TrainingData-IndividualNoise.zip', 'r') as zip_ref:
zip_ref.extractall('./')
print("Extraction complete.")
sys.path.append('/content/drive/MyDrive/FixedPointNetworks')
sys.path.insert(0,'/content/drive/My Drive/FixedPointNetworks')
state = torch.load(data_path)
A = state['A'].to(device)
u_train = state['u_true_train']
u_test = state['u_true_test']
data_obs_train = state['data_obs_train']
data_obs_test = state['data_obs_test']
# + id="7KSnNLuuUyNk"
plt.figure()
plt.subplot(1,2,1)
plt.imshow(u_train[0,0,:,:])
plt.subplot(1,2,2)
plt.imshow(data_obs_train[0,0,:,:])
plt.colorbar()
# + id="qa00-1hAKi_7"
print("A size = ", A.size())
S = torch.diag(torch.count_nonzero(A, dim=0) ** -1.0).float()
S = S.to(device)
print(S)
print('S size = ', S.size())
# + [markdown] id="7Lp8aVZbd7oO"
# Create training datasets
# + id="bzUyStSFmu0p"
batch_size = 15
data_train = TensorDataset(u_train[0:10000,:,:,:], data_obs_train[0:10000,:,:,:])
data_loader = DataLoader(dataset=data_train, batch_size=batch_size, shuffle=True)
n_batches = int(len(data_loader.dataset)/batch_size)
print()
print(f'u_train.min(): {u_train.min()}')
print(f'u_train.max(): {u_train.max()}')
print("data_obs_train.shape = ", data_obs_train.shape)
print('n_batches = ', n_batches)
# + [markdown] id="iL4cX5Dedz9k"
# Define Network
# + id="dMpjvT8vbI5K"
class Regularizer_Net(nn.Module):
def __init__(self, D, M, res_net_contraction=0.99, res_layers=4,
num_channels = 42):
super().__init__()
self.relu = nn.ReLU()
self.leaky_relu = nn.LeakyReLU(0.05)
self.gamma = res_net_contraction
self.D = D
self.M = M
self.Mt = M.t()
in_channels = lambda i: 1 if i == 0 else num_channels
out_channels = lambda i: 1 if i == res_layers-1 else num_channels
self.convs = nn.ModuleList([nn.Sequential(nn.Conv2d(
in_channels=in_channels(i),
out_channels=num_channels,
kernel_size=3, stride=1,
padding=(1,1)),
self.leaky_relu,
nn.Conv2d(in_channels=num_channels,
out_channels=out_channels(i),
kernel_size=3, stride=1,
padding=(1,1)),
self.leaky_relu)
for i in range(res_layers)])
def name(self) -> str:
return "Regularizer_Net"
def device(self):
return next(self.parameters()).data.device
def _T(self, u, d):
batch_size = u.shape[0]
# Learned Regularization Operator
for idx, conv in enumerate(self.convs):
u_ref = u if idx + 1 < len(self.convs) \
else u[:,0,:,:].view(batch_size,1,128,128)
Du = torch.roll(u, 1, dims=-1) - u if idx%2 == 0 \
else torch.roll(u, 1, dims=-2) - u
u = u_ref + conv(Du)
u = torch.clamp(u, min=-1.0e1, max=1.0e1)
# Constraints Projection
u_vec = u.view(batch_size, -1).to(self.device())
u_vec = u_vec.permute(1,0).to(self.device())
d = d.view(batch_size,-1).to(self.device())
d = d.permute(1,0)
res = torch.matmul(self.Mt, self.M.matmul(u_vec) - d)
res = 1.99 * torch.matmul(self.D.to(self.device()), res)
res = res.permute(1,0)
res = res.view(batch_size, 1, 128, 128).to(self.device())
return u - res
def normalize_lip_const(self, u, d):
''' Scale convolutions in R to make it gamma Lipschitz
It should hold that |R(u,v) - R(w,v)| <= gamma * |u-w| for all u
and w. If this doesn't hold, then we must rescale the convolution.
Consider R = I + Conv. To rescale, ideally we multiply R by
norm_fact = gamma * |u-w| / |R(u,v) - R(w,v)|,
averaged over a batch of samples, i.e. R <-- norm_fact * R. The
issue is that ResNets include an identity operation, which we don't
wish to rescale. So, instead we use
R <-- I + norm_fact * Conv,
which is accurate up to an identity term scaled by (norm_fact - 1).
If we do this often enough, then norm_fact ~ 1.0 and the identity
term is negligible.
        Note: ReLUs are always nonexpansive (1-Lipschitz); BatchNorm in eval
        mode is nonexpansive provided |gamma| / sqrt(running_var + eps) <= 1.
'''
noise_u = 0.05 * torch.randn(u.size(), device=self.device())
w = u.clone() + noise_u
w = w.to(self.device())
Twd = self._T(w, d)
Tud = self._T(u, d)
T_diff_norm = torch.mean(torch.norm(Twd - Tud, dim=1))
u_diff_norm = torch.mean(torch.norm(w - u, dim=1))
R_is_gamma_lip = T_diff_norm <= self.gamma * u_diff_norm
if not R_is_gamma_lip:
normalize_factor = (self.gamma * u_diff_norm / T_diff_norm) ** (1.0 / len(self.convs))
print("normalizing!")
for i in range(len(self.convs)):
self.convs[i][0].weight.data *= normalize_factor
self.convs[i][0].bias.data *= normalize_factor
self.convs[i][2].weight.data *= normalize_factor
self.convs[i][2].bias.data *= normalize_factor
def forward(self, d, eps=1.0e-3, max_depth=100,
depth_warning=False):
''' FPN forward prop
With gradients detached, find fixed point. During forward iteration,
u is updated via R(u,Q(d)) and Lipschitz constant estimates are
        refined. Gradients are attached while performing one final step.
'''
with torch.no_grad():
self.depth = 0.0
u = torch.zeros((d.size()[0], 1, 128, 128),
device=self.device())
            u_prev = np.inf * torch.ones(u.shape, device=self.device())
all_samp_conv = False
while not all_samp_conv and self.depth < max_depth:
u_prev = u.clone()
u = self._T(u, d)
res_norm = torch.max(torch.norm(u - u_prev, dim=1))
self.depth += 1.0
all_samp_conv = res_norm <= eps
if self.training:
self.normalize_lip_const(u, d)
if self.depth >= max_depth and depth_warning:
print("\nWarning: Max Depth Reached - Break Forward Loop\n")
return self._T(u, d)
# + [markdown] id="-oGQCMmFeBR0"
# Set up training parameters
# + id="mMcRhjnqm-Uy" colab={"base_uri": "https://localhost:8080/"} outputId="43769bef-46f8-433e-ab3f-786b14e92d62"
Phi = Regularizer_Net(S.to(device), A.to(device))
Phi = Phi.to(device)
pytorch_total_params = sum(p.numel() for p in Phi.parameters() if p.requires_grad)
print(f'Number of trainable parameters: {pytorch_total_params}')
# + id="UDoVIWSQ3uR5" colab={"base_uri": "https://localhost:8080/"} outputId="3d0b2575-e058-4db6-8614-a3dc565599f2"
max_epochs = 25
max_depth = 100
eps = 5.0e-3
criterion = torch.nn.MSELoss()
learning_rate = 2.5e-5
optimizer = optim.Adam(Phi.parameters(), lr=learning_rate)
lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)
fmt = '[{:2d}/{:2d}]: train_loss = {:7.3e} | '
fmt += 'depth = {:5.1f} | lr = {:5.1e} | time = {:4.1f} sec'
load_weights = True
if load_weights:
# FOR RELOADING
state = torch.load('./drive/MyDrive/FixedPointNetworks/Feasible_FPN_Ellipses_weights.pth', map_location=torch.device(device))
Phi.load_state_dict(state['Phi_state_dict'])
print('Loaded Phi from file.')
# + [markdown] id="vW3jgaTCecRc"
# Execute Training
# + id="LEr2wb91nCoa"
best_loss = 1.0e10
for epoch in range(max_epochs):
sleep(0.5) # slows progress bar so it won't print on multiple lines
tot = len(data_loader)
loss_ave = 0.0
start_time_epoch = time.time()
with tqdm(total=tot, unit=" batch", leave=False, ascii=True) as tepoch:
for idx, (u_batch, d) in enumerate(data_loader):
u_batch = u_batch.to(device)
batch_size = u_batch.shape[0]
train_batch_size = d.shape[0] # re-define if batch size changes
Phi.train()
optimizer.zero_grad()
u = Phi(d.to(device), max_depth=max_depth, eps=eps) # add snippet for hiding
output = criterion(u, u_batch)
train_loss = output.detach().cpu().numpy()
loss_ave += train_loss * train_batch_size
output.backward()
optimizer.step()
tepoch.update(1)
tepoch.set_postfix(train_loss="{:5.2e}".format(train_loss),
depth="{:5.1f}".format(Phi.depth))
if epoch%1 == 0:
# compute test image
Phi.eval()
u_test_approx = Phi(data_obs_test[0,:,:,:], max_depth=max_depth, eps=eps)
plt.figure()
plt.subplot(2,2,1)
plt.imshow(u_batch[0,0,:,:].cpu(), vmin=0, vmax=1)
plt.title('u true train')
plt.subplot(2,2,2)
plt.imshow(u[0,0,:,:].detach().cpu(), vmin=0, vmax=1)
plt.title('u approx train')
plt.subplot(2,2,3)
plt.imshow(u_test[0,0,:,:].cpu(), vmin=0, vmax=1)
plt.title('u true test')
plt.subplot(2,2,4)
plt.imshow(u_test_approx[0,0,:,:].detach().cpu(), vmin=0, vmax=1)
plt.title('u approx test')
plt.show()
Phi.train()
# ---------------------------------------------------------------------
# Save weights
# ---------------------------------------------------------------------
if loss_ave < best_loss:
best_loss = loss_ave
state = {
'Phi_state_dict': Phi.state_dict(),
'optimizer_state_dict': optimizer.state_dict(),
'lr_scheduler': lr_scheduler
}
file_name = './drive/MyDrive/FixedPointNetworks/Feasible_FPN_Ellipses_weights.pth'
torch.save(state, file_name)
print('\nModel weights saved to ' + file_name)
loss_ave = loss_ave/len(data_loader.dataset)
end_time_epoch = time.time()
time_epoch = end_time_epoch - start_time_epoch
lr_scheduler.step()
print(fmt.format(epoch+1, max_epochs, loss_ave, Phi.depth,
optimizer.param_groups[0]['lr'],
time_epoch))
# + id="iyxCl1Id_Fwb"
from skimage.metrics import structural_similarity as ssim
from skimage.metrics import peak_signal_noise_ratio as psnr
n_samples = u_test.shape[0]
data_test = TensorDataset(u_test, data_obs_test)
test_data_loader = DataLoader(dataset=data_test, batch_size=batch_size, shuffle=True)
def compute_avg_SSIM_PSNR(u_true, u_gen, n_mesh, data_range):
# assumes images are size n_samples x n_features**2 and are detached
n_samples = u_true.shape[0]
u_true = u_true.reshape(n_samples, n_mesh, n_mesh).cpu().numpy()
u_gen = u_gen.reshape(n_samples, n_mesh, n_mesh).cpu().numpy()
ssim_val = 0
psnr_val = 0
for j in range(n_samples):
ssim_val = ssim_val + ssim(u_true[j,:,:], u_gen[j,:,:], data_range=data_range)
psnr_val = psnr_val + psnr(u_true[j,:,:], u_gen[j,:,:], data_range=data_range)
return ssim_val/n_samples, psnr_val/n_samples
# + id="xrt0leh7qg59" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="77f8dc53-14ee-4cb2-883c-054826272e83"
test_loss_ave = 0
test_PSNR_ave = 0
test_SSIM_ave = 0
with torch.no_grad():
for idx, (u_batch, d) in enumerate(test_data_loader):
u_batch = u_batch.to(device)
batch_size = u_batch.shape[0]
temp = u_batch.view(batch_size, -1)
temp = temp.permute(1,0)
test_batch_size = d.shape[0]
Phi.eval()
u = Phi(d, max_depth=max_depth, eps=eps)
output = criterion(u, u_batch)
test_loss = output.detach().cpu().numpy()
test_SSIM, test_PSNR = compute_avg_SSIM_PSNR(u_batch, u, 128, 1)
test_PSNR_ave += test_PSNR * test_batch_size
test_loss_ave += test_loss * test_batch_size
test_SSIM_ave += test_SSIM * test_batch_size
print('test_PSNR = {:7.3e}'.format(test_PSNR))
print('test_SSIM = {:7.3e}'.format(test_SSIM))
print('test_loss = {:7.3e}'.format(test_loss))
if idx%1 == 0:
# compute test image
plt.figure()
plt.subplot(1,2,1)
plt.imshow(u_batch[0,0,:,:].cpu(), vmin=0, vmax=1)
plt.title('u true')
plt.subplot(1,2,2)
plt.imshow(u[0,0,:,:].detach().cpu(), vmin=0, vmax=1)
plt.title('u approx')
plt.show()
print('\n\nSUMMARY')
print('test_loss_ave = {:7.3e}'.format(test_loss_ave / 1000))
print('test_PSNR_ave = {:7.3e}'.format(test_PSNR_ave / 1000))
print('test_SSIM_ave = {:7.3e}'.format(test_SSIM_ave / 1000))
# + id="qDWZLuTs_Dpl"
ind_val = 0
u = Phi(data_obs_test[ind_val,:,:,:]).view(128,128)
u_true = u_test[ind_val,0,:,:]
def string_ind(index):
    # zero-pad the index to four digits (e.g. 7 -> '0007')
    return str(index).zfill(4)
cmap = 'gray'
fig = plt.figure()
plt.imshow(np.rot90(u.detach().cpu().numpy()),cmap=cmap, vmin=0, vmax=1)
plt.axis('off')
save_loc = './drive/MyDrive/FixedPointNetworks/Learned_Feasibility_Ellipse_FFPN_ind_' + string_ind(ind_val) + '.pdf'
plt.savefig(save_loc,bbox_inches='tight')
plt.show()
print("SSIM: ", compute_avg_SSIM_PSNR(u_true.view(1,128,128), u.view(1,128,128).detach(), 128, 1))
#------------------------------------------------------------
# TRUE
#------------------------------------------------------------
cmap = 'gray'
fig = plt.figure()
plt.imshow(np.rot90(u_true.detach().cpu().numpy()),cmap=cmap, vmin=0, vmax=1)
plt.axis('off')
save_loc = './drive/MyDrive/FixedPointNetworks/Learned_Feasibility_Ellipse_GT_ind_' + string_ind(ind_val) + '.pdf'
plt.savefig(save_loc,bbox_inches='tight')
plt.show()
| F_FPN_Ellipses.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [Root]
# language: python
# name: Python [Root]
# ---
# +
# %matplotlib inline
import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings('ignore')
DATA_DIR = '../data/raw/'
# -
# load files
train = pd.read_csv(os.path.join(DATA_DIR, 'train.csv'))
test = pd.read_csv(os.path.join(DATA_DIR, 'test.csv'))
users = pd.read_csv(os.path.join(DATA_DIR, 'users.csv'))
words = pd.read_csv(os.path.join(DATA_DIR, 'words.csv'), encoding='ISO-8859-1')
# ### Description about train/test files
#
#
# These files contain data on how people rate EMI artists during market research interviews, right after hearing a sample of an artist’s song.
#
# The columns are:
#
# * __Artist__: An anonymised identifier for the EMI artist.
# * __Track__: An anonymised identifier for the artist’s track.
# * __User__: An anonymised identifier for the market research respondent, who will have just heard a sample from the track.
# * __Rating__: A number between X-100 which answers the question: How much do you like or dislike the music? (Train only, you're predicting this for the test set)
# * __Time__: The time the market research was completed: it is the anonymised research date indicating which month the research was conducted in. It can help you understand which other artists/tracks were researched in the same wave. Note that it is not in chronological order.
train.head()
test.head()
# The users.csv file gives data about the respondents themselves, including their attitudes toward music. The columns include:
#
# * __User__: The anonymised user identifier
# * __Gender__: Male/female
# * __Age__: The respondent’s age, in years.
# * __Working status__: Whether they are working full-time/retired/etc.
# * __Region__: The region of the United Kingdom where they live.
# * __MUSIC__: The respondent’s view on the importance of music in his/her life.
# * __LIST_OWN__: An estimate for the number of daily hours spent listening to music they own or have chosen.
# * __LIST_BACK__: An estimate for the number of daily hours the respondent spends listening to background music/music they have not chosen.
# * __Music habit questions__: Each of these asks the respondent to rate, on a scale of X-100, whether they agree with the following:
# <ol>
# <li>I enjoy actively searching for and discovering music that I have never heard before</li>
# <li>I find it easy to find new music</li>
# <li>I am constantly interested in and looking for more music</li>
# <li>I would like to buy new music but I don’t know what to buy</li>
# <li>I used to know where to find music</li>
# <li>I am not willing to pay for music</li>
# <li>I enjoy music primarily from going out to dance</li>
# <li>Music for me is all about nightlife and going out</li>
# <li>I am out of touch with new music</li>
# <li>My music collection is a source of pride</li>
# <li>Pop music is fun</li>
# <li>Pop music helps me to escape</li>
# <li>I want a multi media experience at my fingertips wherever I go</li>
# <li>I love technology</li>
# <li>People often ask my advice on music - what to listen to</li>
# <li>I would be willing to pay for the opportunity to buy new music pre-release</li>
# <li>I find seeing a new artist / band on TV a useful way of discovering new music</li>
# <li>I like to be at the cutting edge of new music</li>
# <li>I like to know about music before other people</li>
# </ol>
users.head()
# The words.csv file contains data showing how people describe the EMI artists whose music they have just heard.
#
# * __Artist__: An anonymised identifier for the EMI artist.
# * __User__: An anonymised identifier for the market research respondent, who will have just heard one or more samples from the artist.
# * __HEARD_OF__: An entry which answers the question: Have you heard of and/or heard music by this artist?
# * __OWN_ARTIST_MUSIC__: An entry which answers the question: Do you have this artist in your music collection?
# * __LIKE_ARTIST__: A numerical entry which answers the question: To what extent do you like or dislike listening to this artist?
#
# <p>Finally, a list of words. There are 82 different words, ranging from “Soulful” to “Cheesy” and “Aggressive.” After listening to tracks from a particular artist, each respondent will have selected the words they think best describe the artist from a given set. The values in each column are therefore 1, if the respondent thinks that word describes the artist, 0 if the respondent does not think the word describes the artist, and empty if the word was not part of the current interview set.</p>
words.head()
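# Because each word column holds 1, 0, or empty (the word was not part of that interview set), pandas' NaN-skipping mean gives the fraction of respondents who selected a word among those who were actually shown it. A minimal sketch (the helper name and the toy data are illustrative):

```python
import numpy as np
import pandas as pd

def word_selection_rates(words_df):
    # the documented non-word columns of words.csv; every other column is a word
    meta_cols = ['Artist', 'User', 'HEARD_OF', 'OWN_ARTIST_MUSIC', 'LIKE_ARTIST']
    word_cols = [c for c in words_df.columns if c not in meta_cols]
    # mean() skips NaN by default, so empty entries drop out of the denominator
    return words_df[word_cols].mean().sort_values(ascending=False)

# toy example: 'Cheesy' was not shown to the first respondent
toy = pd.DataFrame({'Artist': [1, 1, 2],
                    'User': [10, 11, 12],
                    'Soulful': [1, 0, 1],
                    'Cheesy': [np.nan, 1, 0]})
rates = word_selection_rates(toy)
```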
# ### Distribution of Ratings
sns.kdeplot(train.Rating);
# ## Mean Ratings By Artist
train.groupby('Artist')['Rating'].mean().plot();
# ## Mean Rating By Track
train.groupby('Track')['Rating'].mean().plot();
# ## Mean Ratings By Time
train.groupby('Time')['Rating'].mean().plot();
| notebooks/EMI_MUSIC_DATA_SCIENCE_HACKATHON_EDA.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Poisson Kriging - Area to Point Kriging
#
# ## Table of Contents:
#
# 1. Load areal and point data,
# 2. Load semivariogram (regularized),
# 3. Build a point-based map with better spatial resolution.
#
# ## Level: Advanced
#
# ## Changelog
#
# | Date | Change description | Author |
# |------|--------------------|--------|
# | 2021-05-28 | Updated paths for input/output data | @szymon-datalions |
# | 2021-05-11 | Refactored TheoreticalSemivariogram class | @szymon-datalions |
# | 2021-03-31 | Update related to the change of semivariogram weighting. Updated cancer rates data. | @szymon-datalions |
#
# ## Introduction
#
# This tutorial assumes you understand the concepts in the **Ordinary and Simple Kriging** and **Semivariogram Regularization** tutorials. It is a good idea to finish the **Poisson Kriging - Centroid based** and **Poisson Kriging - Area to Area** tutorials before this one.
#
# The Poisson Kriging technique is used to model spatial count data. We analyze the special case where data are counted over areas. Those areas may have irregular shapes and sizes because they represent administrative regions.
#
# In this tutorial we try to predict breast cancer rates in Pennsylvania counties. Along with the areal data we use U.S. Census 2010 data for population blocks.
#
# > Breast cancer rates data is stored in the shapefile in folder `sample_data/areal_data/cancer_data.shp`.
#
# > Population blocks data is stored in the shapefile in folder `sample_data/population_data/cancer_population_base.shp`
#
# This tutorial covers the following steps:
#
# 1. Read and explore data,
# 2. Load semivariogram model,
# 3. Perform Area to Point smoothing of areal data.
# 4. Visualize points.
# ## 1) Read and explore data
# +
import numpy as np
import pandas as pd
import geopandas as gpd
from pyinterpolate.io_ops import prepare_areal_shapefile, get_points_within_area
from pyinterpolate.semivariance import TheoreticalSemivariogram
from pyinterpolate.kriging import ArealKriging
# +
areal_data_file = '../sample_data/areal_data/cancer_data.shp'
point_data_file = '../sample_data/population_data/cancer_population_base.shp'
areal_id_column_name = 'FIPS'
areal_val_column_name = 'rate'
points_val_column_name = 'POP10'
areal_data = prepare_areal_shapefile(areal_data_file,
id_column_name=areal_id_column_name,
value_column_name=areal_val_column_name)
point_data = get_points_within_area(areal_data_file, point_data_file, areal_id_col_name=areal_id_column_name,
points_val_col_name=points_val_column_name)
# +
# Lets take a look into a map of areal counts
gdf = gpd.read_file(areal_data_file)
gdf.plot(column='rate', cmap='Spectral_r', legend=True)
# -
# #### Clarification:
#
# It is a good idea to look at the spatial patterns in the dataset and to visually check that the data do not contain any NaN values. We use the geopandas GeoDataFrame plot function with a diverging color map, which separates the value classes well.
# ## 2) Load semivariogram model
# In this step we load the regularized semivariogram from the tutorial **Semivariogram Regularization (Intermediate)**. You can always perform semivariogram regularization along with the Poisson Kriging, but it is a very long process and it is more convenient to separate those two steps.
semivariogram = TheoreticalSemivariogram() # Create TheoreticalSemivariogram object
semivariogram.import_model('output/regularized_model.csv') # Load regularized semivariogram
# ## 3) Perform Area to Point smoothing of areal data.
#
# The process of map smoothing is straightforward. We need to initialize the Kriging model and then invoke the **regularize_data** method. This method takes three parameters:
#
# => **number of observations** (the most important parameter - how many neighbors affect your area of analysis),
#
# => minimum **search radius** (the minimum search radius for analysis from your area of interest; if there are fewer areas than the number of observations, the next closest neighbors are included in the analysis),
#
# => **data_crs** with default **EPSG 4326**.
#
# The method returns a GeoDataFrame with points and predicted values. It iteratively re-calculates each area's risk and returns predictions per point. In Area to Area Kriging those predictions were aggregated; here we leave them as they are and use them as a smoothed map of areal risk.
# +
number_of_obs = 8
radius = 60000 # 60 km
kriging_model = ArealKriging(semivariogram_model=semivariogram,
known_areas=areal_data,
known_areas_points=point_data,
kriging_type='atp')
# -
smoothed_area = kriging_model.regularize_data(number_of_neighbours=number_of_obs,
max_search_radius=radius,
data_crs=gdf.crs)
# ## 4) Visualize data
#
# The last step is data visualization. We use a choropleth map from the GeoPandas package, but you can store the smoothed map as a shapefile of points and process it elsewhere or with dedicated software (in our view **QGIS** is best for this).
smoothed_area.head()
base = gdf.plot(figsize=(14, 14), color='white')
smoothed_area.plot(ax=base, column='reg.est', cmap='Spectral_r', legend=True, markersize=20, vmax=500)
# ---
| tutorials/Poisson Kriging - Area to Point (Advanced).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Hello LTR!
# Fire up an Elastic server with the LTR plugin installed and run through the cells below to get started with learning to rank. The notebooks we'll use in this training include something of an ltr client library and a starting point for demonstrating several important learning-to-rank capabilities.
#
# This notebook will document many of the important pieces so you can reuse them in future training sessions
# ### The library: ltr
#
# This is a Python library, located at the top level of the repository in `hello-ltr/ltr/`. It contains helper functions used through out the notebooks.
#
# If you want to edit the source code, make sure you are running the Jupyter Notebook server locally and not from a Docker container.
import ltr
import ltr.client as client
import ltr.index as index
import ltr.helpers.movies as helpers
# ### Download some requirements
#
# Several requirements/datasets are stored online; these include various training data sets and tools. You'll only need to do this once, but if you lose the data, you can repeat this command.
# +
corpus = 'http://es-learn-to-rank.labs.o19s.com/tmdb.json'
ltr.download([corpus], dest='data/')
# -
# ### Use the Elastic client
#
# Two LTR clients exist in this code, an ElasticClient and a SolrClient. The workflow for doing Learning to Rank is the same in both search engines
client = client.ElasticClient()
# ### Index Movies
#
# In these demos, we'll use [TheMovieDB](http://themoviedb.org) alongside some supporting assets from places like movielens.
#
# When we reindex, we'll use `ltr.index.rebuild` which deletes and recreates the index, with a few hooks to help us enrich the underlying data or modify the search engine configuration for feature engineering.
# +
movies = helpers.indexable_movies(movies='data/tmdb.json')
index.rebuild(client, index='tmdb', doc_src=movies)
# -
# ### Configure Learning to Rank
#
# We'll discuss the feature sets a bit more. You can think of them as a series of queries that will be stored and executed before we need to train a model.
#
# `setup` is our function for preparing learning to rank to optimize search using a set of features. In this stock demo, we just have one feature, the year of the movie's release.
# wipes out any existing LTR models/feature sets in the tmdb index
client.reset_ltr(index='tmdb')
# +
# A feature set as a Python dict, which looks a lot like JSON
feature_set = {
"featureset": {
"features": [
{
"name": "release_year",
"params": [],
"template": {
"function_score": {
"field_value_factor": {
"field": "release_year",
"missing": 2000
},
"query": { "match_all": {} }
}
}
}
]
}
}
feature_set
# -
# pushes the feature set to the tmdb index's LTR store (a hidden index)
client.create_featureset(index='tmdb', name='release', ftr_config=feature_set)
# ## Is this thing on?
#
# Before we dive into all the pieces with a real training set, we'll try out two example models: one that always prefers newer movies, and another that always prefers older movies. If you're curious you can open `classic-training.txt` and `latest-training.txt` after running this to see what the training set looks like.
# ### Generate some judgment data
#
# This will write out judgment data to a file path.
#
# Look at the source code in `ltr/years_as_ratings.py` to see what assumptions are being made in these synthetic judgments. What assumptions do you make in your judgment process?
# +
from ltr.years_as_ratings import synthesize
synthesize(
client,
featureSet='release', # must match the name set in client.create_featureset(...)
classicTrainingSetOut='data/classic-training.txt',
latestTrainingSetOut='data/latest-training.txt'
)
# -
# ### Format the training data as two arrays of Judgment objects
#
# This step is in preparation for passing the training data into Ranklib.
# +
import ltr.judgments as judge
classic_training_set = [j for j in judge.judgments_from_file(open('data/classic-training.txt'))]
latest_training_set = [j for j in judge.judgments_from_file(open('data/latest-training.txt'))]
# -
# ### Train and Submit
#
# We'll train a lot of models in this class! Our ltr library has a `train` method that wraps a tool called `Ranklib` (more on Ranklib later), lets you pass the most common commands to Ranklib, stores a model in the search engine, and then returns diagnostic output that's worth inspecting.
#
# For now we'll just train using the generated training set, and store two models `latest` and `classic`.
#
# +
from ltr.ranklib import train
train(client, training_set=latest_training_set,
index='tmdb', featureSet='release', modelName='latest')
# -
# Now train another model based on the 'classic' movie judgments.
train(client, training_set=classic_training_set,
index='tmdb', featureSet='release', modelName='classic')
# ### <NAME> vs <NAME>
# If we search for `batman`, how do the results compare? Since the `classic` model preferred old movies, it has old movies in the top positions, and the opposite is true for the `latest` model. To continue learning LTR, brainstorm more features and generate some real judgments for real queries.
import ltr.release_date_plot as rdp
rdp.plot(client, 'batman')
# ### See top 12 results for both models
#
# Looking at the `classic` model first.
import pandas as pd
classic_results = rdp.search(client, 'batman', 'classic')
pd.json_normalize(classic_results)[['id', 'title', 'release_year', 'score']].head(12)
# And then the `latest` model.
latest_results = rdp.search(client, 'batman', 'latest')
pd.json_normalize(latest_results)[['id', 'title', 'release_year', 'score']].head(12)
| notebooks/elasticsearch/tmdb/hello-ltr (ES).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="d3jO2DcK01bz"
# # HW5: Pre-tokenization
#
#
# * In this exercise, we try out the various modules needed for preprocessing natural language data.
#
#
# > Reference: https://wikidocs.net/21703
#
#
#
# + [markdown] id="Lf5OcIO0RHxh"
# ### Regular Expression
# + id="k6s8155xRHHQ"
import re
# + [markdown] id="ehgcE7jaSu5z"
# The . symbol
# + id="aoEZ2tFGRrtC"
# . matches any single character
r = re.compile("a.c")
r.search("ab")
# + id="s2G0d2haR0aH" outputId="968be68e-5af9-40c9-c59f-a1bbd57f6e3e" colab={"base_uri": "https://localhost:8080/"}
r.search("abc")
# + [markdown] id="LfqKEAr6SxjY"
# The ? symbol
# + id="EHbrO-jjSDnT" outputId="ca9d5e51-d6f0-4784-d93f-ee70a4bdb250" colab={"base_uri": "https://localhost:8080/"}
# ? means the preceding character may or may not be present
r = re.compile("a?c")
r.search("bc")
# + id="i7itXRsnSPbt" outputId="8d436d08-ac66-4d64-e5ee-3c2ceda61611" colab={"base_uri": "https://localhost:8080/"}
# match when the character is present
r.search("ac")
# + id="x7QbSiGGSUAG" outputId="5cdcad03-d469-4128-e16a-bec9ebe3d6fa" colab={"base_uri": "https://localhost:8080/"}
# match when the character is absent
r.search("abc")
# + [markdown] id="E-TvGvF0Sys8"
# The \* symbol
# + id="v1xtvrwlSbkN"
# * matches zero or more occurrences of the preceding character.
r = re.compile("ab*c") # matches with no b at all, or with many b's
r.search("a")
# + id="XE-3Iu9WTIlx" outputId="0ac786eb-834a-4b8d-baf1-8262d450a704" colab={"base_uri": "https://localhost:8080/"}
r.search("ac")
# + id="F0SXuafEV2MK" outputId="2783931f-b260-4698-eb6c-eb4cc5fdab83" colab={"base_uri": "https://localhost:8080/"}
r.search("abc")
# + id="IfqYapkETEkB" outputId="82250378-56c0-41f1-c953-e7632af3b016" colab={"base_uri": "https://localhost:8080/"}
r.search("abbbbc")
# + [markdown] id="hHcfFwnzTSGX"
# The \+ symbol
# + id="kVN6Fg81TUIX"
# # + requires at least one occurrence of the preceding character.
r = re.compile("ab+c")
r.search("ac")
# + id="J6QvtyK3WHRv" outputId="6e736790-1832-4214-d07d-3e9e781daf39" colab={"base_uri": "https://localhost:8080/"}
r.search("abc")
# + id="Ir5dEOoLWHM1" outputId="65ba5c22-f419-4f30-efb1-95218d1ed9de" colab={"base_uri": "https://localhost:8080/"}
r.search("abbbc")
# + [markdown] id="u0550AuMWLYU"
# The ^ symbol
# + id="hJpo7oGtWHFz"
# ^ anchors the match to the start of the string.
r = re.compile("^a")
r.search("bbc")
# + id="H5LEsCPrWHBW" outputId="eec951ab-0e18-461d-ce58-039fa8d90515" colab={"base_uri": "https://localhost:8080/"}
r.search("ab")
# + [markdown] id="VCHfZhfXWv6w"
# The {n} symbol
# + id="V6V1jNMaWvRQ"
# The preceding character must repeat exactly n times.
r = re.compile("ab{2}c")
r.search("ac")
# + id="m1NnGVX_XDNq"
r.search("abc")
# + id="fCdJVMlUXDJo" outputId="68083536-460f-41e0-e1e5-01d2d36448df" colab={"base_uri": "https://localhost:8080/"}
r.search("abbc")
# + [markdown] id="gAV5hiNpXIiR"
# The {n,m} symbol
# + id="RAztm80IXDGG"
# The preceding character must repeat between n and m times (inclusive).
r = re.compile("ab{2,8}c")
r.search("ac")
# + id="NFlnwaqfXDBI"
r.search("abc")
# + id="7zMN8nGoXbww" outputId="6f382afb-6d7a-4fff-85a9-ff9b578d8b6e" colab={"base_uri": "https://localhost:8080/"}
r.search("abbc")
# + id="fDSTC0LqXdq1" outputId="90a8e199-0bc1-4938-aeab-1149b8694f35" colab={"base_uri": "https://localhost:8080/"}
r.search("abbbbbbbbc")
# + id="XcQSM0dBXf8G"
r.search("abbbbbbbbbc")
# + [markdown] id="1ar2VQCcXkXw"
# The {n,} symbol
# + id="zuzOi9OFXjpV"
# The preceding character must repeat at least n times.
r = re.compile("a{2,}bc")
r.search("abc")
# + id="Zar_YxUeaCWQ" outputId="9da9f154-6f90-43ef-a2ff-1e0c33e29f9e" colab={"base_uri": "https://localhost:8080/"}
r.search("aabc")
# + [markdown] id="W-FZGAemaGI0"
# The [ ] symbol
# + id="2gujcbK2aItx"
# Matches any one of the characters inside the brackets.
# Ranges can also be given, e.g. a-z, A-Z, 0-9
r = re.compile('[abc]')
r.search("dd")
# + id="_D0ouzK8af8b" outputId="ee3111fb-5f28-4336-a5d5-4c63982ab984" colab={"base_uri": "https://localhost:8080/"}
r.search("ad")
# + id="KhJFKGcjaiUe" outputId="33d717fc-43de-47c7-d8c4-d09c176916f9" colab={"base_uri": "https://localhost:8080/"}
r.search("adb")
# + [markdown] id="gOl5g4NYatBB"
# The [^...] symbol
# + id="TVztqwGaapM-"
# Matches any character except the ones listed after ^.
r = re.compile("[^abc]") # any character other than a, b, or c
r.search("ab")
# + id="u_8vC9IybGMI" outputId="f01056b8-735f-4e30-c537-4d93c1f55508" colab={"base_uri": "https://localhost:8080/"}
r.search("abcd")
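# The rules above can be collected into one self-contained checklist; every assertion below passes, and the patterns mirror the examples in this section:

```python
import re

# . : any single character
assert re.search("a.c", "abc") is not None
assert re.search("a.c", "ab") is None
# ? : zero or one of the preceding character
assert re.search("a?c", "ac") is not None
# * : zero or more of the preceding character
assert re.search("ab*c", "abbbbc") is not None
# + : one or more of the preceding character
assert re.search("ab+c", "ac") is None
# {n} : exactly n repetitions; {n,} : at least n repetitions
assert re.search("ab{2}c", "abbc") is not None
assert re.search("a{2,}bc", "abc") is None
# [...] : one character from a set; [^...] : one character NOT in the set
assert re.search("[abc]", "ad") is not None
assert re.search("[^abc]", "ab") is None

print("all regex checks passed")
```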
# + [markdown] id="2t9pTu-81BQk"
# ### KoNLPy
# + id="KX0tqj-r1xm8" outputId="dc0123f6-d4f6-468c-f2d1-cb01a9e4c52c" colab={"base_uri": "https://localhost:8080/"}
# !pip install konlpy
# + [markdown] id="ngUBX6iA1KCk"
# Hannanum
# + id="TCubXxCT0d5T" outputId="0e0ea4e1-b855-4a71-f294-8a29eeebd8d7" colab={"base_uri": "https://localhost:8080/"}
from konlpy.tag import Hannanum
hannanum = Hannanum()
text = '환영합니다! 자연어 처리 수업은 재미있게 듣고 계신가요?'
print(hannanum.morphs(text)) # Parse phrase to morphemes
print(hannanum.nouns(text)) # Noun extractors
print(hannanum.pos(text)) # POS tagger
# + id="S0eD1CVr1QuS" outputId="0f60920d-90ac-4468-c55c-6d981585a967" colab={"base_uri": "https://localhost:8080/"}
from konlpy.tag import Kkma
kkma = Kkma()
text = '환영합니다! 자연어 처리 수업은 재미있게 듣고 계신가요?'
print(kkma.morphs(text)) # Parse phrase to morphemes
print(kkma.nouns(text)) # Noun extractors
print(kkma.pos(text)) # POS tagger
# + [markdown] id="LtqXc97dOm2c"
# ### Khaiii
# + id="O2mQ7tf6O02Y" outputId="19e0a4be-6084-4965-98bf-386f324b75d5" colab={"base_uri": "https://localhost:8080/"}
# !git clone https://github.com/kakao/khaiii.git
# !pip install cmake
# !mkdir build
# !cd build && cmake /content/khaiii
# !cd /content/build/ && make all
# !cd /content/build/ && make resource
# !cd /content/build && make install
# !cd /content/build && make package_python
# !pip install /content/build/package_python
# + id="oQmkykw_OpSU"
from khaiii import KhaiiiApi
khaiiApi = KhaiiiApi()
# + id="cl0W3gQVOxpH" outputId="ab7ae877-9c5d-4a5d-d2a5-6bca2a531dc8" colab={"base_uri": "https://localhost:8080/"}
tokenized = khaiiApi.analyze('구름 자연어 처리 전문가 양성 과정 1기에 오신 여러분을 환영합니다!')
tokens = []
for word in tokenized:
print(f"word: {word}")
tokens.extend([str(m).split('/')[0] for m in word.morphs])
print(tokens)
# + [markdown] id="4fY328Gu6qy5"
# ### PyKoSpacing
# + id="qLwdPupS5ifF" outputId="7ac542dc-c4e5-4b58-d554-793da9244f74" colab={"base_uri": "https://localhost:8080/", "height": 1000}
# !pip install git+https://github.com/haven-jeon/PyKoSpacing.git
# + id="U-XKh-nW6puT"
sent = '구름 자연어 처리 전문가 양성 과정 1기에 오신 여러분을 환영합니다!'
# + id="59Rjq1Gv7Kjv" outputId="c3c927f9-e775-4b82-9ca8-3947e06347a3" colab={"base_uri": "https://localhost:8080/"}
new_sent = sent.replace(" ", '') # make a sentence with no spaces on purpose
print(new_sent)
# + id="iT-7fIvW7Lm8" outputId="d34fb7c3-1236-4ad7-9eea-c87386ca5dd3" colab={"base_uri": "https://localhost:8080/"}
from pykospacing import Spacing
spacing = Spacing()
kospacing_sent = spacing(new_sent)
print('Sentence without spaces:\n', new_sent)
print('Ground-truth sentence:\n', sent)
print('After spacing correction:\n', kospacing_sent)
# + [markdown] id="U4Isiy1r98OT"
# ### Py-Hanspell
# + id="a6grkKkX7NiI" outputId="a8c7682f-c61a-4980-e5b4-1b475cf5009c" colab={"base_uri": "https://localhost:8080/"}
# !pip install git+https://github.com/ssut/py-hanspell.git
# + id="VFZk-TYx-AgN" outputId="861b74bb-6997-4bc6-990c-285b5e6ba5ae" colab={"base_uri": "https://localhost:8080/"}
from hanspell import spell_checker
sent = "맞춤법 틀리면 외 않되? 쓰고싶은대로쓰면돼지 "
spelled_sent = spell_checker.check(sent)
hanspell_sent = spelled_sent.checked
print(hanspell_sent)
# + [markdown] id="4napObNevG1U"
#
# ### Crawling
#
# Let's crawl and clean up Naver movie reviews.
# Work through this part while following the Crawling lecture.
# + id="JmRVS1U-Uw1d"
from urllib.request import urlopen # module for accessing a web server
from bs4 import BeautifulSoup # module for parsing the structure of a web page
# + id="ud5KafK7vLaY"
url='https://movie.naver.com/movie/bi/mi/pointWriteFormList.naver?code=187348&type=after&isActualPointWriteExecute=false&isMileageSubscriptionAlready=false&isMileageSubscriptionReject=false&page=1'
html=urlopen(url)
html_source = BeautifulSoup(html,'html.parser',from_encoding='utf-8') # fetch the review page's HTML source as utf-8
# + id="Efo0xpKuvaCo" outputId="b31e77ae-3f78-4ca0-fc3c-5516e97e4966" colab={"base_uri": "https://localhost:8080/"}
print(html_source)
# + [markdown] id="XmW-E1MCzuiH"
# Inspect the element that holds the reviews and extract its content.
# + id="9TkLAcWOvnSc" outputId="ce026c96-3939-4e90-d37b-9643cdafa551" colab={"base_uri": "https://localhost:8080/"}
# First review
html_reviews = html_source.find('span',{'id': '_filtered_ment_0'})
print(html_reviews)
# + id="v6xKmgUC1H7_" outputId="dfed2d7c-2ee5-47b5-bac7-e14d3d2368d5" colab={"base_uri": "https://localhost:8080/"}
# Check reviews from 10 users
for i in range(10):
html_reviews = html_source.find('span',{'id': '_filtered_ment_'+str(i)})
print(html_reviews)
# + id="mtLop6M32zuF" outputId="4e901c15-c1f9-4726-ed98-c2031f94a9e0" colab={"base_uri": "https://localhost:8080/"}
# Check reviews from 10 users, stripping unnecessary HTML tags
for i in range(10):
html_reviews = html_source.find('span',{'id': '_filtered_ment_'+str(i)})
print(html_reviews.text.strip())
# + id="TncEqpQd3c0I"
# Collect reviews from 10 pages
from urllib.request import urlopen # module for accessing a web server
from bs4 import BeautifulSoup # module for parsing the structure of a web page
reviews_list = []
for j in range(1, 11):
url='https://movie.naver.com/movie/bi/mi/pointWriteFormList.naver?code=187348&type=after&isActualPointWriteExecute=false&isMileageSubscriptionAlready=false&isMileageSubscriptionReject=false&page='+str(j)
html=urlopen(url)
html_source = BeautifulSoup(html,'html.parser',from_encoding='utf-8') # fetch the review page's HTML source as utf-8
for i in range(10):
html_reviews = html_source.find('span',{'id': '_filtered_ment_'+str(i)})
reviews_list.append(html_reviews.text.strip())
file = open('reviews.txt', 'w', encoding='utf-8')
for review in reviews_list: # write each review on its own line
    file.write(review + '\n') # append a newline character (line break)
file.close()
# + [markdown] id="M86-WQzQ5A8D"
# Preprocessing the crawled data
# + id="58wEPxiH4uAA"
with open('reviews.txt', 'r', encoding='utf-8') as f:
    review_data = f.readlines()
# + id="OSlC2cAL5Hz5" outputId="d637c871-5328-401c-a408-9f89fa1029fa" colab={"base_uri": "https://localhost:8080/"}
len(review_data)
# + id="vw1rqSQUVLbb" outputId="2c69981e-9333-4389-8a61-85b0ff6c463d" colab={"base_uri": "https://localhost:8080/"}
review_data
# + [markdown] id="OYCOwtG75XiC"
# Keep only Hangul characters and remove everything else
# + id="bXqQbEU15IuH" outputId="0deffbf5-446f-48c8-ac9e-cbb27f011ce1" colab={"base_uri": "https://localhost:8080/"}
import re
tmp = re.sub('[^ 가-힣]', '', review_data[8])
print(tmp)
# + id="mPji4L4e6Pqz" outputId="55d72d40-990b-4c85-ab21-bde4c3bf83c3" colab={"base_uri": "https://localhost:8080/"}
tmp = re.sub(' +', ' ', tmp)
print(tmp)
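# The two `re.sub` steps above can be wrapped into one small helper (the function name `keep_hangul` is mine, for illustration):

```python
import re

def keep_hangul(text):
    # remove everything except spaces and Hangul syllables
    text = re.sub('[^ 가-힣]', '', text)
    # collapse the runs of spaces left behind by the removals
    return re.sub(' +', ' ', text).strip()

print(keep_hangul('안녕!! hello 세상~'))  # → 안녕 세상
```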
# + id="9sV76A9O67SG" outputId="ba52a266-a994-4f1e-d54a-e54f50140cdf" colab={"base_uri": "https://localhost:8080/"}
from pykospacing import Spacing
spacing = Spacing()
kospacing_sent = spacing(tmp)
print(kospacing_sent)
# + id="UAjowSa67I2-" outputId="73328246-a034-4ba3-f74d-ecf2148e6c65" colab={"base_uri": "https://localhost:8080/"}
from hanspell import spell_checker
spelled_sent = spell_checker.check(tmp)
hanspell_sent = spelled_sent.checked
print(hanspell_sent)
# + id="ouLxPt1IaY2O"
| 06_Natural_Language_Processing/sol/[HW29]_Pre_processing.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # ALL-IDB Classification using Autoencoder
#
# ### Dataset used:- [ALL-IDB:Acute Lymphoblastic Leukemia Image Database for Image Processing](https://homes.di.unimi.it/scotti/all/)
# Follow the instructions provided on the linked website to download the dataset. After downloading, extract the files to the current directory (the same folder as your code). Note that ALL_IDB2 is used in this tutorial.
# %matplotlib inline
import os
import struct
import torch
from PIL import Image
import matplotlib.pyplot as plt
import numpy as np
import torchvision
from torch.autograd import Variable
from torch.utils.data import TensorDataset,DataLoader
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import copy
import time
# ### Load Data:
Datapath = 'ALL_IDB2/img/'
listing = os.listdir(Datapath)
# +
# ALL_IDB2 dataset has 260 images in total
TrainImages = torch.FloatTensor(200,3072)
TrainLabels = torch.LongTensor(200)
TestImages = torch.FloatTensor(60,3072)
TestLabels = torch.LongTensor(60)
# First 200 images are used for training and the remaining 60 for testing
img_no = 0
for file in listing:
im=Image.open(Datapath + file)
im = im.resize((32,32))
im = np.array(im)
im = np.reshape(im, 32*32*3)
if img_no < 200:
TrainImages[img_no] = torch.from_numpy(im)
TrainLabels[img_no] = int(file[6:7]) # the class label is encoded in the filename
else:
TestImages[img_no - 200] = torch.from_numpy(im)
TestLabels[img_no - 200] = int(file[6:7]) # the class label is encoded in the filename
img_no = img_no + 1
# -
print(TrainImages.size())
print(TrainLabels.size())
print(TestImages.size())
print(TestLabels.size())
# Creating pytorch dataset from the feature matices
trainDataset = TensorDataset(TrainImages, TrainLabels)
testDataset = TensorDataset(TestImages, TestLabels)
# Creating dataloader
BatchSize = 64
trainLoader = DataLoader(trainDataset, batch_size=BatchSize, shuffle=True,num_workers=4, pin_memory=True)
testLoader = DataLoader(testDataset, batch_size=BatchSize, shuffle=True,num_workers=4, pin_memory=True)
# Check availability of GPU
use_gpu = torch.cuda.is_available()
if use_gpu:
print('GPU is available!')
# ### Define the Autoencoder class
class autoencoder(nn.Module):
def __init__(self):
super(autoencoder, self).__init__()
self.encoder = nn.Sequential(
nn.Linear(32*32*3, 1000),
nn.ReLU(),
nn.Linear(1000, 500),
nn.ReLU(),
nn.Linear(500, 100),
nn.ReLU())
self.decoder = nn.Sequential(
nn.Linear(100, 500),
nn.ReLU(),
nn.Linear(500, 1000),
nn.ReLU(),
nn.Linear(1000, 32*32*3),
nn.ReLU())
def forward(self, x):
x = self.encoder(x)
x = self.decoder(x)
return x
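# As a sanity check on the layer sizes: the encoder maps a flattened 32x32x3 image (3072 values) down to a 100-dimensional code, and the decoder maps it back. A NumPy sketch of one forward pass (random weights; the real decoder learns its own weights — here the encoder weights are simply reused transposed to illustrate the shapes):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0)

rng = np.random.default_rng(0)
dims = [3072, 1000, 500, 100]  # encoder layer sizes, as in the class above
Ws = [rng.normal(scale=0.01, size=(a, b)) for a, b in zip(dims[:-1], dims[1:])]

x = rng.random(3072)           # one flattened 32x32x3 image
code = x
for W in Ws:                   # encoder forward pass
    code = relu(code @ W)
assert code.shape == (100,)    # the 100-dimensional representation

recon = code
for W in reversed(Ws):         # decoder mirrors the encoder (tied weights here)
    recon = relu(recon @ W.T)
assert recon.shape == (3072,)  # back to image size
```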
# ### Initialize the network
# +
net = autoencoder()
print(net)
if use_gpu:
net = net.cuda()
# -
# ### Define optimization technique:
criterion = nn.MSELoss() # Mean Squared Error
optimizer = optim.SGD(net.parameters(), lr=0.5, momentum=0.9) # Stochastic Gradient Descent
# ### Training the autoencoder for representation learning
iterations = 20
for epoch in range(iterations):
net.train(True) # For training
runningLoss = 0
for data in trainLoader:
inputs,_ = data # Labels are not required
inputs = inputs/255
if use_gpu:
inputs = Variable(inputs).cuda()
else:
inputs = Variable(inputs)
# Initialize the gradients to zero
optimizer.zero_grad()
# Feed forward the input data through the network
outputs = net(inputs)
# Compute the error/loss
loss = criterion(outputs, inputs)
# Backpropagate the loss to compute gradients
loss.backward()
# Update model parameters
optimizer.step()
# Accumulate loss per batch
runningLoss += loss.data[0] # on PyTorch >= 0.4, use loss.item() instead
# Printing average loss per epoch
print('At Iteration : %d / %d ; Mean-Squared Error : %f'%(epoch + 1,iterations,runningLoss/
(TrainImages.size()[0]/BatchSize)))
# ### Modifying the autoencoder for classification
# Removing the decoder module from the autoencoder
new_classifier = nn.Sequential(*list(net.children())[:-1])
net = new_classifier
# Adding linear layer for 2-class classification problem
net.add_module('classifier', nn.Sequential(nn.Linear(100, 2),nn.LogSoftmax()))
print(net)
if use_gpu:
net = net.cuda()
# Copying initial weights for visualization
cll_weights = copy.deepcopy(net[0][0].weight.data)
init_classifier_weights = copy.deepcopy(net.classifier[0].weight.data)
# ### Define loss function and optimizer:
criterion = nn.NLLLoss() # Negative Log-Likelihood
optimizer = optim.SGD(net.parameters(), lr=1e-4, momentum=0.9) # Stochastic gradient descent
# ## Train the network
iterations = 20
trainLoss = []
testAcc = []
start = time.time()
for epoch in range(iterations):
epochStart = time.time()
runningLoss = 0
for data in trainLoader:
inputs,labels = data
inputs = inputs/255
if use_gpu:
inputs, labels = Variable(inputs.cuda()), Variable(labels.cuda())
else:
inputs, labels = Variable(inputs), Variable(labels)
# Initialize gradients to zero
optimizer.zero_grad()
# Feed-forward input data through the network
outputs = net(inputs)
# Compute loss/error
loss = criterion(outputs, labels)
# Backpropagate loss and compute gradients
loss.backward()
# Update the network parameters
optimizer.step()
# Accumulate loss per batch
runningLoss += loss.data[0] # on PyTorch >= 0.4, use loss.item() instead
avgTrainLoss = runningLoss/200
trainLoss.append(avgTrainLoss)
# Evaluating performance on test set for each epoch
net.train(False) # For testing
inputs = TestImages/255
if use_gpu:
inputs = Variable(inputs.cuda())
outputs = net(inputs)
_, predicted = torch.max(outputs.data, 1)
predicted = predicted.cpu()
else:
inputs = Variable(inputs)
outputs = net(inputs)
_, predicted = torch.max(outputs.data, 1)
correct = 0
total = 0
total += TestLabels.size(0)
correct += (predicted == TestLabels).sum()
avgTestAcc = correct/60.0
testAcc.append(avgTestAcc)
# Plotting training loss vs Epochs
fig1 = plt.figure(1)
plt.plot(range(epoch+1),trainLoss,'r-',label='train')
if epoch==0:
plt.legend(loc='upper left')
plt.xlabel('Epochs')
plt.ylabel('Training loss')
# Plotting testing accuracy vs Epochs
fig2 = plt.figure(2)
plt.plot(range(epoch+1),testAcc,'g-',label='test')
if epoch==0:
plt.legend(loc='upper left')
plt.xlabel('Epochs')
plt.ylabel('Testing accuracy')
epochEnd = time.time()-epochStart
print('At Iteration: {:.0f} /{:.0f} ; Training Loss: {:.6f} ; Testing Acc: {:.3f} ; Time consumed: {:.0f}m {:.0f}s '\
.format(epoch + 1,iterations,avgTrainLoss,avgTestAcc*100,epochEnd//60,epochEnd%60))
end = time.time()-start
print('Training completed in {:.0f}m {:.0f}s'.format(end//60,end%60))
| 1_ALL-IDB classification using Autoencoder.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.10 64-bit (''flax_p38'': conda)'
# language: python
# name: python3
# ---
# ## 1305. All Elements in Two Binary Search Trees
# [Link](https://leetcode.com/problems/all-elements-in-two-binary-search-trees/)
#
# Given two binary search trees root1 and root2, return a list containing all the integers from both trees sorted in ascending order.
# +
from typing import List
# Definition for a binary tree node.
class TreeNode:
def __init__(self, val=0, left=None, right=None):
self.val = val
self.left = left
self.right = right
# +
def get_nodes(root, nodes):
# implements inorder traversal
if root is None:
    return
if root.left:
get_nodes(root.left, nodes)
nodes.append(root.val)
if root.right:
get_nodes(root.right, nodes)
class Solution:
def getAllElements(self, root1: TreeNode, root2: TreeNode) -> List[int]:
nodes1, nodes2 = [], []
get_nodes(root1, nodes1)
get_nodes(root2, nodes2)
n1, n2 = len(nodes1), len(nodes2)
i, j = 0, 0
res = []
while i<n1 and j<n2:
if nodes1[i]<nodes2[j]:
res.append(nodes1[i])
i += 1
else:
res.append(nodes2[j])
j += 1
res.extend(nodes1[i:])
res.extend(nodes2[j:])
# print(i, j)
return res
# -
# TODO Write a helper function to create a tree from a list of nodes.
# def create_tree(nodes: List[int]):
# for n in nodes:
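# One way to fill in the TODO above: build the BST by repeated insertion (`create_tree` and `insert` are my own helpers, not part of the problem statement):

```python
class TreeNode:
    # mirrors the definition used earlier in this notebook
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def insert(root, val):
    """Insert val into the BST rooted at root, returning the (new) root."""
    if root is None:
        return TreeNode(val)
    if val < root.val:
        root.left = insert(root.left, val)
    else:
        root.right = insert(root.right, val)
    return root

def create_tree(nodes):
    """Build a BST from a list of values by repeated insertion."""
    root = None
    for v in nodes:
        root = insert(root, v)
    return root

t = create_tree([2, 1, 4])
print(t.val, t.left.val, t.right.val)  # → 2 1 4
```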
# Test code: build the example BSTs [2,1,4] and [1,0,3] as actual trees
root1 = TreeNode(2, TreeNode(1), TreeNode(4))
root2 = TreeNode(1, TreeNode(0), TreeNode(3))
Solution().getAllElements(root1, root2)
| leetcode/1305_inorder_bst.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from splinter import Browser
from bs4 import BeautifulSoup as bs
import time
# +
# Scrape the NASA Mars News site (https://mars.nasa.gov/news/)
# and collect the latest News Title and Paragraph Text.
# Assign the text to variables that you can reference later.
def init_browser():
executable_path = {"executable_path": "mission_to_mars/chromedriver"}
return Browser("chrome", **executable_path, headless=False)
def scrape_info():
browser = init_browser()
# Visit https://mars.nasa.gov/news/
url = "https://mars.nasa.gov/news/"
browser.visit(url)
time.sleep(1)
# Scrape page into Soup
html = browser.html
soup = bs(html, "html.parser")
# Find the article title
article_title = soup.find('div', id='content_title').text
# Get the article paragraph
article_teaser = soup.find('div', id='article_teaser_body').text
# Store data in a dictionary
article_data = {
"article_title": article_title,
"article_teaser": article_teaser
}
# Close the browser after scraping
browser.quit()
# Return results
return article_data
# +
# Visit the url for JPL Featured Space Image
# https://www.jpl.nasa.gov/spaceimages/?search=&category=Mars
# Use splinter to navigate the site.
# and find the image url for the current Featured Mars Image
# and assign the url string to a variable called `featured_image_url`.
# Make sure to find the image url to the full size `.jpg` image.
# Make sure to save a complete url string for this image.
from splinter import Browser
from bs4 import BeautifulSoup as bs
import time
def init_browser():
executable_path = {"executable_path": "mission_to_mars/chromedriver"}
return Browser("chrome", **executable_path, headless=False)
def scrape_info():
browser = init_browser()
# Visit https://www.jpl.nasa.gov/spaceimages/?search=&category=Mars
url = "https://www.jpl.nasa.gov/spaceimages/?search=&category=Mars"
browser.visit(url)
time.sleep(1)
# Scrape page into Soup
html = browser.html
soup = bs(html, "html.parser")
# Find the featured image and build the complete url
relative_image_path = soup.find_all('img')[2]["src"]
featured_image_url = "https://www.jpl.nasa.gov" + relative_image_path
# Close the browser after scraping
browser.quit()
# +
# Visit the Mars Weather twitter account and scrape the
# latest Mars weather tweet from the page.
# Save the tweet text for the weather report as a variable called `mars_weather`.
def init_browser():
executable_path = {"executable_path": "mission_to_mars/chromedriver"}
return Browser("chrome", **executable_path, headless=False)
def scrape_info():
browser = init_browser()
# Visit https://twitter.com/marswxreport?lang=en
url = "https://twitter.com/marswxreport?lang=en"
browser.visit(url)
time.sleep(1)
# Scrape page into Soup
html = browser.html
soup = bs(html, "html.parser")
# Find the Tweet
mars_weather = soup.find('div', id='js-tweet-text-container').text
# Close the browser after scraping
browser.quit()
# +
### Mars Facts
# Visit the Mars Facts webpage [here](http://space-facts.com/mars/)
# and use Pandas to scrape the table containing facts about the
# planet including Diameter, Mass, etc.
# Use Pandas to convert the data to a HTML table string.
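# A sketch of the Pandas step: `pd.read_html('http://space-facts.com/mars/')` returns a list of DataFrames, and `DataFrame.to_html()` converts one back into an HTML table string. Illustrated here with placeholder rows instead of a live request:

```python
import pandas as pd

# placeholder facts standing in for the scraped table
facts_df = pd.DataFrame({
    "description": ["Equatorial Diameter:", "Polar Diameter:"],
    "value": ["6,792 km", "6,752 km"],
}).set_index("description")

html_table = facts_df.to_html()
assert "<table" in html_table  # ready to embed in a web page
```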
| mission_to_mars.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] Collapsed="false" colab_type="text" id="OjUNeFsxyguu"
# <img style="float: left;padding: 1.3em" src="https://indico.in2p3.fr/event/18313/logo-786578160.png">
#
# # Gravitational Wave Open Data Workshop #3
#
#
# #### Tutorial 2.4: Parameter estimation on GW150914 using open data.
#
# This example estimates the non-spinning parameters of the binary black hole system using
# commonly used prior distributions. This will take about 40 minutes to run.
#
# More examples at https://lscsoft.docs.ligo.org/bilby/examples.html
# + [markdown] Collapsed="false" colab_type="text" id="VwsIdKJ3yguv"
# ## Installation (execute only if running on a cloud platform!)
# + Collapsed="false" colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" id="eJeo4XrHyguw" outputId="fc1a59d8-8ad9-494f-e7a1-9ac9b0775a9f"
# -- Use the following line in Google Colab
# #! pip install -q 'lalsuite==6.66' 'bilby==1.0.1' 'gwpy==1.0.1'
# + [markdown] Collapsed="false" colab_type="text" id="f0q3Y_9gygu0"
# **Important:** With Google Colab, you may need to restart the runtime after running the cell above.
# + [markdown] Collapsed="false" colab_type="text" id="XK8fHu13ygu1"
# ## Initialization
#
# We begin by importing some commonly used functions
# + Collapsed="false" colab={} colab_type="code" id="HyRSGt6cygu2"
# %matplotlib inline
from __future__ import division, print_function
import numpy as np
import matplotlib.pyplot as plt
import bilby
from bilby.core.prior import Uniform
from bilby.gw.conversion import convert_to_lal_binary_black_hole_parameters, generate_all_bbh_parameters
from gwpy.timeseries import TimeSeries
# -
# ## Bilby version
print(bilby.__version__)
# + [markdown] Collapsed="false" colab_type="text" id="_Hd4d4KVygu6"
# ## Getting the data: GW150914
#
# In this notebook, we'll analyse GW150914. Our first task is to obtain some data!
#
# We need to know the trigger time. This can be found on the [GWOSC page](https://www.gw-openscience.org/events/GW150914/), here we define it as a variable
# + Collapsed="false" colab={} colab_type="code" id="1cUhLaFIygu6"
time_of_event = 1126259462.4
# -
# ### Set up empty interferometers
#
# We need to get some data to analyse. We'll be using data from the Hanford (H1) and Livingston (L1) ground-based gravitational wave detectors. To organise ourselves, we'll create two "empty" interferometers. These are empty in the sense that they don't have any strain data. But, they know about the orientation and location of their respective namesakes. It may also be interesting to note that they are initialised with the planned design sensitivity power spectral density of advanced LIGO - we'll overwrite this later on, but it is often useful for simulations.
H1 = bilby.gw.detector.get_empty_interferometer("H1")
L1 = bilby.gw.detector.get_empty_interferometer("L1")
# ### Download the data
#
# To analyse the signal, we need to download the analysis data. Here, we will use [gwpy](https://gwpy.github.io/) to download the open strain data. For a general introduction to reading/writing data with gwpy, see [the documentation](https://gwpy.github.io/docs/stable/timeseries/).
#
# To analyse GW150914, we will use a 4 s analysis segment centered on the event itself. It is standard to choose the data so that it always includes a fixed "post-trigger duration"; that is, there are always 2 s of data after the trigger time. We therefore define all times relative to the trigger time, the duration, and this post-trigger duration.
# +
# Define times in relation to the trigger time (time_of_event), duration and post_trigger_duration
post_trigger_duration = 2
duration = 4
analysis_start = time_of_event + post_trigger_duration - duration
# Use gwpy to fetch the open data
H1_analysis_data = TimeSeries.fetch_open_data(
"H1", analysis_start, analysis_start + duration, sample_rate=4096, cache=True)
L1_analysis_data = TimeSeries.fetch_open_data(
"L1", analysis_start, analysis_start + duration, sample_rate=4096, cache=True)
# -
# Here, `H1_analysis_data` and its L1 counterpart are gwpy TimeSeries objects. As such, we can plot the data itself out:
H1_analysis_data.plot()
plt.show()
# This doesn't tell us much of course! It is dominated by the low frequency noise.
#
# ### Initialise the bilby interferometers with the strain data
#
# Now, we pass the downloaded strain data to our `H1` and `L1` bilby interferometer objects. For other methods to set the strain data, see the various `set_strain_data*` methods.
H1.set_strain_data_from_gwpy_timeseries(H1_analysis_data)
L1.set_strain_data_from_gwpy_timeseries(L1_analysis_data)
# ### Download the power spectral data
#
# Parameter estimation relies on having a power spectral density (PSD) - an estimate of the coloured noise properties of the data. Here, we will create a PSD using off-source data (for discussion on methods to estimate PSDs, see, e.g. [Chatziioannou et al. (2019)](https://ui.adsabs.harvard.edu/abs/2019PhRvD.100j4004C/abstract))
#
# Again, we need to download this from the open strain data. We start by figuring out the amount of data needed - in this case 32 times the analysis duration.
# +
psd_duration = duration * 32
psd_start_time = analysis_start - psd_duration
H1_psd_data = TimeSeries.fetch_open_data(
"H1", psd_start_time, psd_start_time + psd_duration, sample_rate=4096, cache=True)
L1_psd_data = TimeSeries.fetch_open_data(
"L1", psd_start_time, psd_start_time + psd_duration, sample_rate=4096, cache=True)
# -
# Having obtained the data to generate the PSD, we now use the standard [gwpy psd](https://gwpy.github.io/docs/stable/api/gwpy.timeseries.TimeSeries.html#gwpy.timeseries.TimeSeries.psd) method to calculate the PSD. Here, the `psd_alpha` variable is converting the `roll_off` applied to the strain data into the fractional value used by `gwpy`.
psd_alpha = 2 * H1.strain_data.roll_off / duration
H1_psd = H1_psd_data.psd(fftlength=duration, overlap=0, window=("tukey", psd_alpha), method="median")
L1_psd = L1_psd_data.psd(fftlength=duration, overlap=0, window=("tukey", psd_alpha), method="median")
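# To build some intuition for what a PSD estimate does, here is a bare-bones NumPy periodogram of a known sinusoid. This is a toy illustration only, not the median-Welch method gwpy uses above:

```python
import numpy as np

fs = 1024                        # sample rate in Hz
t = np.arange(0, 4, 1 / fs)      # 4 s of data, like the analysis segment
x = np.sin(2 * np.pi * 50 * t)   # a pure 50 Hz tone

freqs = np.fft.rfftfreq(len(x), d=1 / fs)
psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * len(x))  # periodogram (normalisation unimportant here)

# the estimated spectrum peaks at the tone's frequency
assert freqs[np.argmax(psd)] == 50.0
```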
# ### Initialise the PSD
# Now that we have PSDs for H1 and L1, we can overwrite the `power_spectral_density` attribute of our interferometers with a new PSD.
H1.power_spectral_density = bilby.gw.detector.PowerSpectralDensity(
frequency_array=H1_psd.frequencies.value, psd_array=H1_psd.value)
L1.power_spectral_density = bilby.gw.detector.PowerSpectralDensity(
    frequency_array=L1_psd.frequencies.value, psd_array=L1_psd.value)
# ### Looking at the data
# Okay, we have spent a bit of time now downloading and initializing things. Let's check that everything makes sense. To do this, we'll plot our analysis data alongside the amplitude spectral density (ASD); this is just the square root of the PSD and has the right units to be comparable to the frequency-domain strain data.
fig, ax = plt.subplots()
idxs = H1.strain_data.frequency_mask # This is a boolean mask of the frequencies which we'll use in the analysis
ax.loglog(H1.strain_data.frequency_array[idxs],
np.abs(H1.strain_data.frequency_domain_strain[idxs]))
ax.loglog(H1.power_spectral_density.frequency_array[idxs],
H1.power_spectral_density.asd_array[idxs])
ax.set_xlabel("Frequency [Hz]")
ax.set_ylabel(r"Strain [strain/$\sqrt{Hz}$]")
plt.show()
# What is happening at high frequencies? This is an artifact of the downsampling applied to the data - note that we downloaded the 4096 Hz data, which is downsampled from the original 16384 Hz data. We aren't really interested in the data at these high frequencies so let's adjust the maximum frequency used in the analysis to 1024 Hz and plot things again.
H1.maximum_frequency = 1024
L1.maximum_frequency = 1024
fig, ax = plt.subplots()
idxs = H1.strain_data.frequency_mask
ax.loglog(H1.strain_data.frequency_array[idxs],
np.abs(H1.strain_data.frequency_domain_strain[idxs]))
ax.loglog(H1.power_spectral_density.frequency_array[idxs],
H1.power_spectral_density.asd_array[idxs])
ax.set_xlabel("Frequency [Hz]")
ax.set_ylabel(r"Strain [strain/$\sqrt{Hz}$]")
plt.show()
# Okay, that is better - we now won't analyse any data near to the artifact produced by downsampling. Now we have some sensible data to analyse so let's get right on with down the analysis!
# ## Low dimensional analysis
#
# In general a compact binary coalescence signal is described by 15 parameters describing the masses, spins, orientation, and position of the two compact objects along with a time at which the signal merges. The goal of parameter estimation is to figure out what the data (and any cogent prior information) can tell us about the likely values of these parameters - this is called the "posterior distribution of the parameters".
#
# To start with, we'll analyse the data fixing all but a few of the parameters to known values (in Bayesian lingo - we use delta function priors), this will enable us to run things in a few minutes rather than the many hours needed to do full parameter estimation.
#
# We'll start by thinking about the mass of the system. We call the heavier black hole the primary and label its mass $m_1$ and that of the secondary (lighter) black hole $m_2$. In this way, we always define $m_1 \ge m_2$. It turns out that inferences about $m_1$ and $m_2$ are highly correlated, we'll see exactly what this means later on.
#
# Bayesian inference methods are powerful at figuring out highly correlated posteriors. But, we can help it along by sampling in parameters which are not highly correlated. In particular, we define a new parameter called the [chirp mass](https://en.wikipedia.org/wiki/Chirp_mass) to be
#
# $$ \mathcal{M} = \frac{(m_1 m_2)^{3/5}}{(m_1 + m_2)^{1/5}} $$
#
# and the mass ratio
#
# $$ q = \frac{m_{2}}{m_1} $$
#
# If we sample (make inferences about) $\mathcal{M}$ and $q$, our code is much faster than if we use $m_1$ and $m_2$ directly! Note that, so long as an equivalent prior is given, one can also sample in the component masses themselves and get the same answer; it is just much slower!
#
# Once we have inferred $\mathcal{M}$ and $q$, we can then derive $m_1$ and $m_2$ from the resulting samples (we'll do that in just a moment).
#
# Okay, let's run a short (~1min on a single 2.8GHz core), low-dimensional parameter estimation analysis. This is done by defining a prior dictionary where all parameters are fixed, except those that we want to vary.
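# For concreteness, plugging rough GW150914-like source-frame component masses (about 36 and 29 solar masses; illustrative round numbers, not the analysis result) into the two definitions above:

```python
m1, m2 = 36.0, 29.0  # solar masses, rough published values for GW150914

chirp_mass = (m1 * m2) ** (3 / 5) / (m1 + m2) ** (1 / 5)
mass_ratio = m2 / m1

print(round(chirp_mass, 1), round(mass_ratio, 2))  # → 28.1 0.81
```

# The detector-frame chirp mass sampled below is slightly larger than this source-frame value because of the cosmological redshift.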
# ### Create a prior
#
# Here, we create a prior fixing everything except the chirp mass, mass ratio, phase and geocent_time parameters to fixed values. The first two we described above. The second two give the phase of the system and the time at which it merges.
prior = bilby.core.prior.PriorDict()
prior['chirp_mass'] = Uniform(name='chirp_mass', minimum=30.0, maximum=32.5)
prior['mass_ratio'] = Uniform(name='mass_ratio', minimum=0.5, maximum=1)
prior['phase'] = Uniform(name="phase", minimum=0, maximum=2*np.pi)
prior['geocent_time'] = Uniform(name="geocent_time", minimum=time_of_event-0.1, maximum=time_of_event+0.1)
prior['a_1'] = 0.0
prior['a_2'] = 0.0
prior['tilt_1'] = 0.0
prior['tilt_2'] = 0.0
prior['phi_12'] = 0.0
prior['phi_jl'] = 0.0
prior['dec'] = -1.2232
prior['ra'] = 2.19432
prior['theta_jn'] = 1.89694
prior['psi'] = 0.532268
prior['luminosity_distance'] = 412.066
# ## Create a likelihood
#
# For Bayesian inference, we need to evaluate the likelihood. In Bilby, we create a likelihood object. This is the communication interface between the sampling part of Bilby and the data. Explicitly, when Bilby is sampling it only uses the `parameters` and `log_likelihood()` of the likelihood object. This means the likelihood can be arbitrarily complicated and the sampling part of Bilby won't mind a bit!
#
# Let's create a `GravitationalWaveTransient`, a special inbuilt method carefully designed to wrap up evaluating the likelihood of a waveform model in some data.
# +
# First, put our "data" created above into a list of interferometers (the order is arbitrary)
interferometers = [H1, L1]
# Next create a dictionary of arguments which we pass into the LALSimulation waveform - we specify the waveform approximant here
waveform_arguments = dict(
waveform_approximant='IMRPhenomPv2', reference_frequency=100., catch_waveform_errors=True)
# Next, create a waveform_generator object. This wraps up some of the jobs of converting between parameters etc
waveform_generator = bilby.gw.WaveformGenerator(
frequency_domain_source_model=bilby.gw.source.lal_binary_black_hole,
waveform_arguments=waveform_arguments,
parameter_conversion=convert_to_lal_binary_black_hole_parameters)
# Finally, create our likelihood, passing in what is needed to get going
likelihood = bilby.gw.likelihood.GravitationalWaveTransient(
interferometers, waveform_generator, priors=prior,
time_marginalization=True, phase_marginalization=True, distance_marginalization=False)
# -
# This will print a warning about the `start_time`, it is safe to ignore this.
#
# Note that we also specify `time_marginalization=True` and `phase_marginalization=True`. This is a trick often used in Bayesian inference. We analytically marginalize (integrate) over the time/phase of the system while sampling, effectively reducing the parameter space and making it easier to sample. Bilby will then figure out (after the sampling) posteriors for these marginalized parameters. For an introduction to this topic, see [<NAME> (2019)](https://arxiv.org/abs/1809.02293).
# ### Run the analysis
# + [markdown] Collapsed="false" colab_type="text" id="LCfygeVyygvM"
# Now that the prior and the likelihood are set up (with the data and the signal model), we can run the sampler to get the posterior result. This function takes the likelihood and prior along with some options for how to do the sampling and how to save the data.
# + Collapsed="false" colab={"base_uri": "https://localhost:8080/", "height": 627} colab_type="code" id="HHS9JSX3ygvN" outputId="69e07ce1-f118-4378-c750-5ec65d43e0db"
result_short = bilby.run_sampler(
likelihood, prior, sampler='dynesty', outdir='short', label="GW150914",
conversion_function=bilby.gw.conversion.generate_all_bbh_parameters,
sample="unif", nlive=500, dlogz=3 # <- Arguments are used to make things fast - not recommended for general use
)
# -
# ### Looking at the outputs
# + [markdown] Collapsed="false" colab_type="text" id="wKR045TIygvT"
# The `run_sampler` returned `result_short` - this is a Bilby result object. The posterior samples are stored in a [pandas data frame](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html) (think of this like a spreadsheet), let's take a look at it
# -
result_short.posterior
# We can pull out specific parameters that we are interested in
result_short.posterior["chirp_mass"]
# This returned another `pandas` object. If you just want to get the numbers as a numpy array run
Mc = result_short.posterior["chirp_mass"].values
# We can then get some useful quantities such as the 90\% credible interval
lower_bound = np.quantile(Mc, 0.05)
upper_bound = np.quantile(Mc, 0.95)
median = np.quantile(Mc, 0.5)
print("Mc = {} with a 90% C.I = {} -> {}".format(median, lower_bound, upper_bound))
# We can then plot the chirp mass in a histogram adding a region to indicate the 90\% C.I.
fig, ax = plt.subplots()
ax.hist(result_short.posterior["chirp_mass"], bins=20)
ax.axvspan(lower_bound, upper_bound, color='C1', alpha=0.4)
ax.axvline(median, color='C1')
ax.set_xlabel("chirp mass")
plt.show()
# The result object also has in-built methods to make nice plots such as corner plots. You can add the priors if you are only plotting parameters which you sampled in, e.g.
result_short.plot_corner(parameters=["chirp_mass", "mass_ratio", "geocent_time", "phase"], prior=True)
# You can also plot lines indicating specific points. Here, we add the values recorded on [GWOSC](https://www.gw-openscience.org/events/GW150914/). Notably, these fall outside the bulk of the posterior uncertainty here. This is because we limited our prior - if instead we ran the full analysis, these would agree nicely.
# + Collapsed="false" colab={"base_uri": "https://localhost:8080/", "height": 259} colab_type="code" id="SB4AqmTaygvU" outputId="3b4d648b-378e-480f-93d8-7929c81f9559"
parameters = dict(mass_1=36.2, mass_2=29.1)
result_short.plot_corner(parameters)
# -
# Earlier we discussed the "correlation" - in this plot we start to see the correlation between $m_1$ and $m_2$.
# ### Meta data
# The result object also stores meta data, like the priors
result_short.priors
# and details of the analysis itself:
result_short.sampler_kwargs["nlive"]
# Finally, we can also get out the Bayes factor for the signal vs. Gaussian noise:
print("ln Bayes factor = {} +/- {}".format(
result_short.log_bayes_factor, result_short.log_evidence_err))
# ## Challenge questions
# First, let's take a closer look at the result obtained with the run above. What are the means of the chirp mass and mass ratio distributions? What are the medians of the distributions for the component masses? You can use `np.mean` and `np.median` to calculate these.
#
# Now let's expand on this example a bit. Rerun the analysis above but change the prior on the distance from a delta function to `bilby.core.prior.PowerLaw(alpha=2., minimum=50., maximum=800., name='luminosity_distance')`. You should also replace `sample='unif'` with `sample="rwalk", nact=1, walks=1` in your call to `bilby.run_sampler` above. This will take a bit longer than the original run, around ~20 minutes. You also need to change the `label` in the call to `run_sampler` to avoid over-writing your results.
#
# What is the median reported value of the distance posterior? What is the new log Bayes factor for signal vs. Gaussian noise? Don't be alarmed if your results do not match the official LVC results, as these are not rigorous settings.
| Day_2/Tuto_2.4_Parameter_estimation_for_compact_object_mergers.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# #!jupyter nbextension enable --py gmaps
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import gmaps
import os
# Import API key
from api_keys import g_key
# -
# ### Store Part I results into DataFrame
# * Load the csv exported in Part I to a DataFrame
# Store csv created in part one into a DataFrame
# ### Humidity Heatmap
# * Configure gmaps.
# * Use the Lat and Lng as locations and Humidity as the weight.
# * Add Heatmap layer to map.
# Configure gmaps
# Heatmap of humidity
# ### Create new DataFrame fitting weather criteria
# * Narrow down the cities to fit weather conditions.
# * Drop any rows with null values.
# Narrow down cities that fit criteria and drop any results with null values
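# As a sketch of this step, pandas boolean masks plus `dropna` do the narrowing. The column names ("Max Temp", "Wind Speed", "Cloudiness") and the thresholds below are assumptions about the Part I CSV - adjust them to your actual data:
#
```python
import pandas as pd

# Hypothetical stand-in for the DataFrame loaded from the Part I CSV
city_data_df = pd.DataFrame({
    "City": ["Avarua", "Hilo", "Lagos", "Kapaa"],
    "Max Temp": [75.0, 82.0, 88.0, 71.0],
    "Wind Speed": [5.0, 12.0, 4.0, 6.0],
    "Cloudiness": [0, 40, 0, 0],
})

# Example criteria: 70-80 F, wind under 10 mph, clear skies
ideal_df = city_data_df.loc[
    (city_data_df["Max Temp"].between(70, 80))
    & (city_data_df["Wind Speed"] < 10)
    & (city_data_df["Cloudiness"] == 0)
].dropna()
print(ideal_df["City"].tolist())  # ['Avarua', 'Kapaa']
```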
# ### Hotel Map
# * Store into variable named `hotel_df`.
# * Add a "Hotel Name" column to the DataFrame.
# * Set parameters to search for hotels within 5000 meters.
# * Hit the Google Places API for each city's coordinates.
# * Store the first Hotel result into the DataFrame.
# * Plot markers on top of the heatmap.
# Create DataFrame called hotel_df to store hotel names along with city, country and coordinates
# +
# Set parameters to search for a hotel
# Iterate through
# get lat, lng from df
# Use the search term: "Hotel" and our lat/lng
# make request and print url
# convert to json
# Grab the first hotel from the results and store the name
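# A sketch of how the request for each row can be assembled. The Nearby Search endpoint and parameter names follow the Google Places web service; here the URL is only built, not sent, and the coordinates and key are placeholders:
#
```python
from urllib.parse import urlencode

base_url = "https://maps.googleapis.com/maps/api/place/nearbysearch/json"
params = {
    "location": "-21.21,-159.78",  # "lat,lng" pulled from the current DataFrame row
    "radius": 5000,                # metres
    "keyword": "Hotel",
    "key": "YOUR_G_KEY",           # placeholder - the real key comes from api_keys.py
}
url = f"{base_url}?{urlencode(params)}"
print(url)
# A live run would then call requests.get(base_url, params=params).json()
# and read results[0]["name"], guarding against cities with no hotel results.
```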
# +
# NOTE: Do not change any of the code in this cell
# Using the template, add the hotel markers to the heatmap
info_box_template = """
<dl>
<dt>Name</dt><dd>{Hotel Name}</dd>
<dt>City</dt><dd>{City}</dd>
<dt>Country</dt><dd>{Country}</dd>
</dl>
"""
# Store the DataFrame Row
hotel_info = [info_box_template.format(**row) for index, row in hotel_df.iterrows()]
locations = hotel_df[["Lat", "Lng"]]
# +
# Add marker layer on top of heat map
# Display figure
# -
| VacationPy-with-outputs.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # TensorFlow Tutorial #01
# # Simple Linear Model
#
# by [<NAME>](http://www.hvass-labs.org/)
# / [GitHub](https://github.com/Hvass-Labs/TensorFlow-Tutorials) / [Videos on YouTube](https://www.youtube.com/playlist?list=PL9Hr9sNUjfsmEu1ZniY0XpHSzl5uihcXZ)
# ## Introduction
#
# This tutorial demonstrates the basic workflow of using TensorFlow with a simple linear model. After loading the so-called MNIST data-set with images of hand-written digits, we define and optimize a simple mathematical model in TensorFlow. The results are then plotted and discussed.
#
# You should be familiar with basic linear algebra, Python and the Jupyter Notebook editor. It also helps if you have a basic understanding of Machine Learning and classification.
# ## Imports
# %matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
from sklearn.metrics import confusion_matrix
# This was developed using Python 3.6.1 (Anaconda) and TensorFlow version:
tf.__version__
# ## Load Data
# The MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given path.
from tensorflow.examples.tutorials.mnist import input_data
data = input_data.read_data_sets("data/MNIST/", one_hot=True)
# The MNIST data-set has now been loaded and consists of 70,000 images and associated labels (i.e. classifications of the images). The data-set is split into 3 mutually exclusive sub-sets. We will only use the training and test-sets in this tutorial.
print("Size of:")
print("- Training-set:\t\t{}".format(len(data.train.labels)))
print("- Test-set:\t\t{}".format(len(data.test.labels)))
print("- Validation-set:\t{}".format(len(data.validation.labels)))
# ### One-Hot Encoding
# The data-set has been loaded as so-called One-Hot encoding. This means the labels have been converted from a single number to a vector whose length equals the number of possible classes. All elements of the vector are zero except for the $i$'th element which is one and means the class is $i$. For example, the One-Hot encoded labels for the first 5 images in the test-set are:
data.test.labels[0:5, :]
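# The same encoding is easy to reproduce with plain NumPy, using the first five test classes (7, 2, 1, 0, 4) as an example:
#
```python
import numpy as np

classes = np.array([7, 2, 1, 0, 4])   # integer class labels
one_hot = np.eye(10)[classes]         # row i is zero except at index classes[i]
print(one_hot[0])                     # [0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]
```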
# We also need the classes as single numbers for various comparisons and performance measures, so we convert the One-Hot encoded vectors to a single number by taking the index of the highest element. Note that the word 'class' is a keyword used in Python so we need to use the name 'cls' instead.
data.test.cls = np.array([label.argmax() for label in data.test.labels])
# We can now see the class for the first five images in the test-set. Compare these to the One-Hot encoded vectors above. For example, the class for the first image is 7, which corresponds to a One-Hot encoded vector where all elements are zero except for the element with index 7.
data.test.cls[0:5]
# ### Data dimensions
# The data dimensions are used in several places in the source-code below. In computer programming it is generally best to use variables and constants rather than having to hard-code specific numbers every time that number is used. This means the numbers only have to be changed in one single place. Ideally these would be inferred from the data that has been read, but here we just write the numbers.
# +
# We know that MNIST images are 28 pixels in each dimension.
img_size = 28
# Images are stored in one-dimensional arrays of this length.
img_size_flat = img_size * img_size
# Tuple with height and width of images used to reshape arrays.
img_shape = (img_size, img_size)
# Number of classes, one class for each of 10 digits.
num_classes = 10
# -
# ### Helper-function for plotting images
# Function used to plot 9 images in a 3x3 grid, and writing the true and predicted classes below each image.
def plot_images(images, cls_true, cls_pred=None):
assert len(images) == len(cls_true) == 9
# Create figure with 3x3 sub-plots.
fig, axes = plt.subplots(3, 3)
fig.subplots_adjust(hspace=0.3, wspace=0.3)
for i, ax in enumerate(axes.flat):
# Plot image.
ax.imshow(images[i].reshape(img_shape), cmap='binary')
# Show true and predicted classes.
if cls_pred is None:
xlabel = "True: {0}".format(cls_true[i])
else:
xlabel = "True: {0}, Pred: {1}".format(cls_true[i], cls_pred[i])
ax.set_xlabel(xlabel)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
# ### Plot a few images to see if data is correct
# +
# Get the first images from the test-set.
images = data.test.images[0:9]
# Get the true classes for those images.
cls_true = data.test.cls[0:9]
# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true)
# -
# ## TensorFlow Graph
#
# The entire purpose of TensorFlow is to have a so-called computational graph that can be executed much more efficiently than if the same calculations were to be performed directly in Python. TensorFlow can be more efficient than NumPy because TensorFlow knows the entire computation graph that must be executed, while NumPy only knows the computation of a single mathematical operation at a time.
#
# TensorFlow can also automatically calculate the gradients that are needed to optimize the variables of the graph so as to make the model perform better. This is because the graph is a combination of simple mathematical expressions so the gradient of the entire graph can be calculated using the chain-rule for derivatives.
#
# TensorFlow can also take advantage of multi-core CPUs as well as GPUs - and Google has even built special chips just for TensorFlow which are called TPUs (Tensor Processing Units) and are even faster than GPUs.
#
# A TensorFlow graph consists of the following parts which will be detailed below:
#
# * Placeholder variables used to change the input to the graph.
# * Model variables that are going to be optimized so as to make the model perform better.
# * The model which is essentially just a mathematical function that calculates some output given the input in the placeholder variables and the model variables.
# * A cost measure that can be used to guide the optimization of the variables.
# * An optimization method which updates the variables of the model.
#
# In addition, the TensorFlow graph may also contain various debugging statements e.g. for logging data to be displayed using TensorBoard, which is not covered in this tutorial.
# ### Placeholder variables
# Placeholder variables serve as the input to the graph that we may change each time we execute the graph. We call this feeding the placeholder variables and it is demonstrated further below.
#
# First we define the placeholder variable for the input images. This allows us to change the images that are input to the TensorFlow graph. This is a so-called tensor, which just means that it is a multi-dimensional vector or matrix. The data-type is set to `float32` and the shape is set to `[None, img_size_flat]`, where `None` means that the tensor may hold an arbitrary number of images with each image being a vector of length `img_size_flat`.
x = tf.placeholder(tf.float32, [None, img_size_flat])
# Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable `x`. The shape of this placeholder variable is `[None, num_classes]` which means it may hold an arbitrary number of labels and each label is a vector of length `num_classes` which is 10 in this case.
y_true = tf.placeholder(tf.float32, [None, num_classes])
# Finally we have the placeholder variable for the true class of each image in the placeholder variable `x`. These are integers and the dimensionality of this placeholder variable is set to `[None]` which means the placeholder variable is a one-dimensional vector of arbitrary length.
y_true_cls = tf.placeholder(tf.int64, [None])
# ### Variables to be optimized
# Apart from the placeholder variables that were defined above and which serve as feeding input data into the model, there are also some model variables that must be changed by TensorFlow so as to make the model perform better on the training data.
#
# The first variable that must be optimized is called `weights` and is defined here as a TensorFlow variable that must be initialized with zeros and whose shape is `[img_size_flat, num_classes]`, so it is a 2-dimensional tensor (or matrix) with `img_size_flat` rows and `num_classes` columns.
weights = tf.Variable(tf.zeros([img_size_flat, num_classes]))
# The second variable that must be optimized is called `biases` and is defined as a 1-dimensional tensor (or vector) of length `num_classes`.
biases = tf.Variable(tf.zeros([num_classes]))
# ### Model
# This simple mathematical model multiplies the images in the placeholder variable `x` with the `weights` and then adds the `biases`.
#
# The result is a matrix of shape `[num_images, num_classes]` because `x` has shape `[num_images, img_size_flat]` and `weights` has shape `[img_size_flat, num_classes]`, so the multiplication of those two matrices is a matrix with shape `[num_images, num_classes]` and then the `biases` vector is added to each row of that matrix.
#
# Note that the name `logits` is typical TensorFlow terminology, but other people may call the variable something else.
logits = tf.matmul(x, weights) + biases
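# The shape bookkeeping above can be verified with NumPy stand-ins for the tensors (an arbitrary batch of 64 images is assumed here):
#
```python
import numpy as np

x_np = np.zeros((64, 784))       # [num_images, img_size_flat]
w_np = np.zeros((784, 10))       # [img_size_flat, num_classes]
b_np = np.zeros(10)              # [num_classes], broadcast to every row
logits_np = x_np @ w_np + b_np
print(logits_np.shape)           # (64, 10)
```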
# Now `logits` is a matrix with `num_images` rows and `num_classes` columns, where the element of the $i$'th row and $j$'th column is an estimate of how likely the $i$'th input image is to be of the $j$'th class.
#
# However, these estimates are a bit rough and difficult to interpret because the numbers may be very small or large, so we want to normalize them so that each row of the `logits` matrix sums to one, and each element is limited between zero and one. This is calculated using the so-called softmax function and the result is stored in `y_pred`.
y_pred = tf.nn.softmax(logits)
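# What `tf.nn.softmax` computes per row can be sketched in NumPy (with the usual max-subtraction trick for numerical stability):
#
```python
import numpy as np

def softmax_np(logits):
    z = logits - logits.max(axis=1, keepdims=True)  # stabilise the exponentials
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

scores = np.array([[2.0, 1.0, 0.1],
                   [0.0, 0.0, 0.0]])
probs = softmax_np(scores)
print(probs.sum(axis=1))  # each row now sums to one
```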
# The predicted class can be calculated from the `y_pred` matrix by taking the index of the largest element in each row.
y_pred_cls = tf.argmax(y_pred, axis=1)
# ### Cost-function to be optimized
# To make the model better at classifying the input images, we must somehow change the variables for `weights` and `biases`. To do this we first need to know how well the model currently performs by comparing the predicted output of the model `y_pred` to the desired output `y_true`.
#
# The cross-entropy is a performance measure used in classification. The cross-entropy is a continuous function that is always positive and if the predicted output of the model exactly matches the desired output then the cross-entropy equals zero. The goal of optimization is therefore to minimize the cross-entropy so it gets as close to zero as possible by changing the `weights` and `biases` of the model.
#
# TensorFlow has a built-in function for calculating the cross-entropy. Note that it uses the values of the `logits` because it also calculates the softmax internally.
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits,
labels=y_true)
# We have now calculated the cross-entropy for each of the image classifications so we have a measure of how well the model performs on each image individually. But in order to use the cross-entropy to guide the optimization of the model's variables we need a single scalar value, so we simply take the average of the cross-entropy for all the image classifications.
cost = tf.reduce_mean(cross_entropy)
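# For a one-hot target, the cross-entropy of one example is just minus the log of the probability the model assigns to the true class. A self-contained NumPy sketch:
#
```python
import numpy as np

logits_np = np.array([[2.0, 1.0, 0.1]])
y_true_np = np.array([[1.0, 0.0, 0.0]])   # one-hot: the true class is 0

e = np.exp(logits_np - logits_np.max(axis=1, keepdims=True))
y_pred_np = e / e.sum(axis=1, keepdims=True)              # softmax
cross_entropy_np = -(y_true_np * np.log(y_pred_np)).sum(axis=1)
cost_np = cross_entropy_np.mean()
print(cost_np)  # about 0.417, i.e. -log(0.659)
```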
# ### Optimization method
# Now that we have a cost measure that must be minimized, we can then create an optimizer. In this case it is the basic form of Gradient Descent where the step-size is set to 0.5.
#
# Note that optimization is not performed at this point. In fact, nothing is calculated at all, we just add the optimizer-object to the TensorFlow graph for later execution.
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.5).minimize(cost)
# ### Performance measures
# We need a few more performance measures to display the progress to the user.
#
# This is a vector of booleans whether the predicted class equals the true class of each image.
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
# This calculates the classification accuracy by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then calculating the average of these numbers.
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
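# The cast-then-average trick is the same as this NumPy one-liner:
#
```python
import numpy as np

cls_pred = np.array([7, 2, 1, 0, 4])
cls_true = np.array([7, 2, 1, 0, 9])
correct = cls_pred == cls_true                   # [ True  True  True  True False]
accuracy_np = correct.astype(np.float32).mean()  # True -> 1.0, False -> 0.0
print(accuracy_np)  # 0.8
```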
# ## TensorFlow Run
# ### Create TensorFlow session
#
# Once the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.
session = tf.Session()
# ### Initialize variables
#
# The variables for `weights` and `biases` must be initialized before we start optimizing them.
session.run(tf.global_variables_initializer())
# ### Helper-function to perform optimization iterations
# There are 50,000 images in the training-set. It takes a long time to calculate the gradient of the model using all these images. We therefore use Stochastic Gradient Descent which only uses a small batch of images in each iteration of the optimizer.
batch_size = 100
# Function for performing a number of optimization iterations so as to gradually improve the `weights` and `biases` of the model. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples.
def optimize(num_iterations):
for i in range(num_iterations):
# Get a batch of training examples.
# x_batch now holds a batch of images and
# y_true_batch are the true labels for those images.
x_batch, y_true_batch = data.train.next_batch(batch_size)
# Put the batch into a dict with the proper names
# for placeholder variables in the TensorFlow graph.
# Note that the placeholder for y_true_cls is not set
# because it is not used during training.
feed_dict_train = {x: x_batch,
y_true: y_true_batch}
# Run the optimizer using this batch of training data.
# TensorFlow assigns the variables in feed_dict_train
# to the placeholder variables and then runs the optimizer.
session.run(optimizer, feed_dict=feed_dict_train)
# ### Helper-functions to show performance
# Dict with the test-set data to be used as input to the TensorFlow graph. Note that we must use the correct names for the placeholder variables in the TensorFlow graph.
feed_dict_test = {x: data.test.images,
y_true: data.test.labels,
y_true_cls: data.test.cls}
# Function for printing the classification accuracy on the test-set.
def print_accuracy():
# Use TensorFlow to compute the accuracy.
acc = session.run(accuracy, feed_dict=feed_dict_test)
# Print the accuracy.
print("Accuracy on test-set: {0:.1%}".format(acc))
# Function for printing and plotting the confusion matrix using scikit-learn.
def print_confusion_matrix():
# Get the true classifications for the test-set.
cls_true = data.test.cls
# Get the predicted classifications for the test-set.
cls_pred = session.run(y_pred_cls, feed_dict=feed_dict_test)
# Get the confusion matrix using sklearn.
cm = confusion_matrix(y_true=cls_true,
y_pred=cls_pred)
# Print the confusion matrix as text.
print(cm)
# Plot the confusion matrix as an image.
plt.imshow(cm, interpolation='nearest', cmap=plt.cm.Blues)
# Make various adjustments to the plot.
plt.tight_layout()
plt.colorbar()
tick_marks = np.arange(num_classes)
plt.xticks(tick_marks, range(num_classes))
plt.yticks(tick_marks, range(num_classes))
plt.xlabel('Predicted')
plt.ylabel('True')
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
# Function for plotting examples of images from the test-set that have been mis-classified.
def plot_example_errors():
# Use TensorFlow to get a list of boolean values
# whether each test-image has been correctly classified,
# and a list for the predicted class of each image.
correct, cls_pred = session.run([correct_prediction, y_pred_cls],
feed_dict=feed_dict_test)
# Negate the boolean array.
incorrect = (correct == False)
# Get the images from the test-set that have been
# incorrectly classified.
images = data.test.images[incorrect]
# Get the predicted classes for those images.
cls_pred = cls_pred[incorrect]
# Get the true classes for those images.
cls_true = data.test.cls[incorrect]
# Plot the first 9 images.
plot_images(images=images[0:9],
cls_true=cls_true[0:9],
cls_pred=cls_pred[0:9])
# ### Helper-function to plot the model weights
# Function for plotting the `weights` of the model. 10 images are plotted, one for each digit that the model is trained to recognize.
def plot_weights():
# Get the values for the weights from the TensorFlow variable.
w = session.run(weights)
# Get the lowest and highest values for the weights.
# This is used to correct the colour intensity across
# the images so they can be compared with each other.
w_min = np.min(w)
w_max = np.max(w)
# Create figure with 3x4 sub-plots,
# where the last 2 sub-plots are unused.
fig, axes = plt.subplots(3, 4)
fig.subplots_adjust(hspace=0.3, wspace=0.3)
for i, ax in enumerate(axes.flat):
# Only use the weights for the first 10 sub-plots.
if i<10:
# Get the weights for the i'th digit and reshape it.
# Note that w.shape == (img_size_flat, 10)
image = w[:, i].reshape(img_shape)
# Set the label for the sub-plot.
ax.set_xlabel("Weights: {0}".format(i))
# Plot the image.
ax.imshow(image, vmin=w_min, vmax=w_max, cmap='seismic')
# Remove ticks from each sub-plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
# ## Performance before any optimization
#
# The accuracy on the test-set is 9.8%. This is because the model has only been initialized and not optimized at all, so it always predicts that the image shows a zero digit, as demonstrated in the plot below, and it turns out that 9.8% of the images in the test-set happen to be zero digits.
print_accuracy()
plot_example_errors()
# ## Performance after 1 optimization iteration
#
# Already after a single optimization iteration, the model has increased its accuracy on the test-set to 40.7% up from 9.8%. This means that it mis-classifies the images about 6 out of 10 times, as demonstrated on a few examples below.
optimize(num_iterations=1)
print_accuracy()
plot_example_errors()
# The weights can also be plotted as shown below. Positive weights are red and negative weights are blue. These weights can be intuitively understood as image-filters.
#
# For example, the weights used to determine if an image shows a zero-digit have a positive reaction (red) to an image of a circle, and have a negative reaction (blue) to images with content in the centre of the circle.
#
# Similarly, the weights used to determine if an image shows a one-digit react positively (red) to a vertical line in the centre of the image, and react negatively (blue) to images with content surrounding that line.
#
# Note that the weights mostly look like the digits they're supposed to recognize. This is because only one optimization iteration has been performed so the weights are only trained on 100 images. After training on several thousand images, the weights become more difficult to interpret because they have to recognize many variations of how digits can be written.
plot_weights()
# ## Performance after 10 optimization iterations
# We have already performed 1 iteration.
optimize(num_iterations=9)
print_accuracy()
plot_example_errors()
plot_weights()
# ## Performance after 1000 optimization iterations
#
# After 1000 optimization iterations, the model only mis-classifies about one in ten images. As demonstrated below, some of the mis-classifications are justified because the images are very hard to determine with certainty even for humans, while others are quite obvious and should have been classified correctly by a good model. But this simple model cannot reach much better performance and more complex models are therefore needed.
# We have already performed 10 iterations.
optimize(num_iterations=990)
print_accuracy()
plot_example_errors()
# The model has now been trained for 1000 optimization iterations, with each iteration using 100 images from the training-set. Because of the great variety of the images, the weights have now become difficult to interpret and we may doubt whether the model truly understands how digits are composed from lines, or whether the model has just memorized many different variations of pixels.
plot_weights()
# We can also print and plot the so-called confusion matrix which lets us see more details about the mis-classifications. For example, it shows that images actually depicting a 5 have sometimes been mis-classified as all other possible digits, but mostly either 3, 6 or 8.
print_confusion_matrix()
# We are now done using TensorFlow, so we close the session to release its resources.
# +
# This has been commented out in case you want to modify and experiment
# with the Notebook without having to restart it.
# session.close()
# -
# ## Exercises
#
# These are a few suggestions for exercises that may help improve your skills with TensorFlow. It is important to get hands-on experience with TensorFlow in order to learn how to use it properly.
#
# You may want to backup this Notebook before making any changes.
#
# * Change the learning-rate for the optimizer.
# * Change the optimizer to e.g. `AdagradOptimizer` or `AdamOptimizer`.
# * Change the batch-size to e.g. 1 or 1000.
# * How do these changes affect the performance?
# * Do you think these changes will have the same effect (if any) on other classification problems and mathematical models?
# * Do you get the exact same results if you run the Notebook multiple times without changing any parameters? Why or why not?
# * Change the function `plot_example_errors()` so it also prints the `logits` and `y_pred` values for the mis-classified examples.
# * Use `sparse_softmax_cross_entropy_with_logits` instead of `softmax_cross_entropy_with_logits`. This may require several changes to multiple places in the source-code. Discuss the advantages and disadvantages of using the two methods.
# * Remake the program yourself without looking too much at this source-code.
# * Explain to a friend how the program works.
# ## License (MIT)
#
# Copyright (c) 2016 by [<NAME>](http://www.hvass-labs.org/)
#
# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
| 01_Simple_Linear_Model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# 
#
# # Introducción a Optimización con Cplex & Python
#
# <div class="alert alert-info"> <h4> Material prepared for the Industrial Civil Engineering program | Universidad Católica del Norte | Escuela de Ingeniería - Campus Coquimbo.
# © <NAME>, Industrial Engineer UCN, Master of International Business UQ, Master of Engineering in Supply Chain and Logistics, MIT ZLC-Global Scale. Contact: <EMAIL> or <EMAIL>
# </h4> </div>
#
# > The goal of this material is to introduce optimization using Python and Cplex.
# For more information you can visit the following link: [Cplex IBM](https://ibmdecisionoptimization.github.io/tutorials/html/Linear_Programming.html).
# For plotting, you can review the following link: [Matplot](https://matplotlib.org)
#
# (Example based on Winston)
#
# ### Gepeto Toys Inc. manufactures 2 types of toys: **soldiers** and **trains**.
#
# A soldier sells for 27 dollars and uses 10 dollars of raw materials. Each soldier manufactured increases Giapetto's variable labor and overhead costs by 14 dollars. A train sells for 21 dollars and uses 9 dollars of raw materials. Each train built increases Giapetto's variable labor and overhead costs by 10 dollars. Manufacturing wooden soldiers and trains requires two types of skilled labor: **carpentry** and **finishing**.
#
# A soldier requires 2 hours of finishing work and 1 hour of carpentry work. A train requires 1 hour of finishing work and 1 hour of carpentry work. Each week, Giapetto can obtain all the raw material needed, but only 100 finishing hours and 80 carpentry hours. Demand for trains is unlimited, but at most 40 soldiers are bought each week. Giapetto wants to maximize weekly profit (revenues minus costs).
#
# **Formulate a mathematical model of Giapetto's situation that can be used to maximize Giapetto's weekly profit.**
#
# ### <font color=blue> Linear Programming Model </font>
# \begin{equation}
# Max\;Z=27x_1+21x_2-(10x_1+9x_2)-(14x_1+10x_2)
# \end{equation}
# \begin{equation}
# x_1+x_2\leq 80
# \end{equation}
# \begin{equation}
# 2x_1+x_2 \leq 100
# \end{equation}
# \begin{equation}
# x_1\leq 40
# \end{equation}
# \begin{equation}
# x_1,x_2\geq0
# \end{equation}
#
# Where $x_1$ is the number of soldiers to produce and $x_2$ is the number of trains to produce.
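# Before calling a solver, this tiny model can be cross-checked by brute force: the objective simplifies to $Z = 3x_1 + 2x_2$, and enumerating the intersections of the constraint lines (a sketch of the graphical method, independent of Cplex) recovers the optimum.

```python
from itertools import combinations

# Constraints written as a*x1 + b*x2 <= c (non-negativity included)
cons = [(1, 1, 80),    # carpentry hours
        (2, 1, 100),   # finishing hours
        (1, 0, 40),    # soldier demand
        (-1, 0, 0),    # x1 >= 0
        (0, -1, 0)]    # x2 >= 0

def feasible(x1, x2, tol=1e-9):
    return all(a * x1 + b * x2 <= c + tol for a, b, c in cons)

best = None
for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
    det = a1 * b2 - a2 * b1
    if det == 0:
        continue                     # parallel constraint lines never intersect
    x1 = (c1 * b2 - c2 * b1) / det   # Cramer's rule for the intersection point
    x2 = (a1 * c2 - a2 * c1) / det
    if feasible(x1, x2):
        z = 3 * x1 + 2 * x2          # objective simplified to 3*x1 + 2*x2
        best = max(best or (z, x1, x2), (z, x1, x2))
```

# This reports the optimum $x_1 = 20$, $x_2 = 60$ with $Z = 180$, which the Cplex solution below should match.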
from IPython.display import YouTubeVideo
YouTubeVideo('lTcJlw0iaJ4')
# Importing Cplex.
from docplex.mp.model import Model
# Initializing the model
mdl = Model('modelo')
# +
# Creating the decision variables - continuous
x1 = mdl.continuous_var(name='x1')
x2 = mdl.continuous_var(name='x2')
# -
# ### Creating the constraints
#
# \begin{equation}
# x_1+x_2\leq 80
# \end{equation}
# \begin{equation}
# 2x_1+x_2 \leq 100
# \end{equation}
# \begin{equation}
# x_1\leq 40
# \end{equation}
# \begin{equation}
# x_1,x_2\geq0
# \end{equation}
# +
# Writing the constraints
mdl.add_constraint(x1+x2 <= 80)
mdl.add_constraint(2*x1+x2 <= 100)
mdl.add_constraint(x1 <= 40)
# -
# ### Creating the Objective Function
# \begin{equation}
# Max\;Z=27x_1+21x_2-(10x_1+9x_2)-(14x_1+10x_2)
# \end{equation}
# Creating the objective function
mdl.maximize(27*x1+21*x2-10*x1-9*x2-14*x1-10*x2)
print(mdl.export_to_string())
solucion = mdl.solve(log_output=True)
solucion.display()
# +
# Learning to plot
import numpy as np
from matplotlib import pyplot as plt
from matplotlib.path import Path
from matplotlib.patches import PathPatch
import seaborn as sns
sns.palplot(sns.color_palette("hls"))
sns.set()
sns.set(style='white')
sns.despine()
sns.set_context('notebook')
# Create a figure object.
fig, ax = plt.subplots(figsize=(8, 8))
x1 = np.linspace(0, 100)
# adding the carpentry-hours constraint
plt.plot(x1, 80 - x1, lw=3, label='Carpentry')
plt.fill_between(x1, 0, 80 - x1, alpha=0.1)
# adding the finishing-hours constraint
plt.plot(x1, 100 - 2 * x1, lw=3, label='Finishing')
plt.fill_between(x1, 0, 100 - 2 * x1, alpha=0.1)
# adding the demand constraint
plt.plot(40 * np.ones_like(x1), x1, lw=3, label='Demand')
plt.fill_betweenx(x1, 0, 40, alpha=0.1)
# add non-negativity constraints
plt.plot(np.zeros_like(x1), x1, lw=3, label='$x_2$ non-negativity')
plt.plot(x1, np.zeros_like(x1), lw=3, label='$x_1$ non-negativity')
plt.xlabel('Soldiers', fontsize=16)
plt.ylabel('Trains', fontsize=16)
plt.xlim(-0.5, 100)
plt.ylim(-0.5, 100)
plt.legend(fontsize=12)
plt.show()
# +
# Learning to plot
import numpy as np
from matplotlib import pyplot as plt
from matplotlib.path import Path
from matplotlib.patches import PathPatch
import seaborn as sns
sns.palplot(sns.color_palette("hls"))
sns.set()
sns.set(style='white')
sns.despine()
sns.set_context('notebook')
# Create a figure object.
fig, ax = plt.subplots(figsize=(8, 8))
x1 = np.linspace(0, 100)
# adding the carpentry-hours constraint
plt.plot(x1, 80 - x1, lw=3, label='Carpentry')
plt.fill_between(x1, 0, 80 - x1, alpha=0.1)
# adding the finishing-hours constraint
plt.plot(x1, 100 - 2 * x1, lw=3, label='Finishing')
plt.fill_between(x1, 0, 100 - 2 * x1, alpha=0.1)
# adding the demand constraint
plt.plot(40 * np.ones_like(x1), x1, lw=3, label='Demand')
plt.fill_betweenx(x1, 0, 40, alpha=0.1)
# add non-negativity constraints
plt.plot(np.zeros_like(x1), x1, lw=3, label='$x_2$ non-negativity')
plt.plot(x1, np.zeros_like(x1), lw=3, label='$x_1$ non-negativity')
# adding level curves of the objective function (Z = 3*x1 + 2*x2)
plt.plot(x1, 50 - (3/2) * x1, color='magenta', linestyle='dashed', lw=4, label='$Z=100$')
plt.plot(x1, 75 - (3/2) * x1, color='magenta', linestyle='dashed', lw=4, label='$Z=150$')
plt.plot(x1, 90 - (3/2) * x1, color='magenta', linestyle='dashed', lw=4, label='$Z=180$')
plt.plot(x1, 100 - (3/2) * x1, color='magenta', linestyle='dashed', lw=4, label='$Z=200$')
plt.xlabel('Soldiers', fontsize=16)
plt.ylabel('Trains', fontsize=16)
plt.xlim(-0.5, 100)
plt.ylim(-0.5, 100)
plt.legend(fontsize=12)
plt.show()
# -
# <div class="alert alert-info"> <h4> Material prepared for the Industrial Civil Engineering program | Universidad Católica del Norte | Escuela de Ingeniería - Campus Coquimbo.
# © <NAME>, Industrial Engineer UCN, Master of International Business UQ, Master of Engineering in Supply Chain and Logistics, MIT ZLC-Global Scale. Contact: <EMAIL> or <EMAIL>
# </h4> </div>
| Intro/Introducción a Cplex & Python.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# **Dependencies:**
#
# - To direct c++ std stream to Jupyter notebook:
# `pip install wurlitzer` (python 2.7 only?)
#
# - Deformed frame visualization
# Install `meshcat-python`: https://github.com/rdeits/meshcat-python
# %load_ext autoreload
# %autoreload 2
# +
import numpy as np
import pyconmech as cm
# from wurlitzer import sys_pipes
import meshcat
import meshcat.geometry as g
from deformed_frame_viz import meshcat_visualize_deformed
# -
vis = meshcat.Visualizer()
vis.jupyter_cell()
# # check deformation for a completed structure
# +
import os
import time
from deformed_frame_viz import meshcat_visualize_deformed
cwd = os.getcwd()
json_path = os.path.join(cwd, '..', "assembly_instances/extrusion","robarch_tree_S.json")
# json_path = os.path.join(cwd, '..', "assembly_instances/extrusion","compas_fea_beam_tree_S.json")
# json_path = os.path.join(cwd, '..', "assembly_instances/extrusion","four-frame.json")
# json_path = os.path.join(cwd, '..', "assembly_instances/extrusion","topopt-100_S1_03-14-2019_w_layer.json")
# json_path = os.path.join(cwd, '..', "test_data","tower_3D_broken_lines.json")
# json_path = os.path.join(cwd, "assembly_instances/extrusion","rotated_dented_cube.json")
# load_case_path = os.path.join(cwd, '..', "test_data","tower_3D_broken_lines_load_case.json")
vis.delete()
disc = 10
exagg_ratio = 1.0
time_step = 1.0
scale = 1.0
# with sys_pipes():
sc = cm.stiffness_checker(json_file_path = json_path, verbose = False)
# sc.set_output_json(True)
# sc.set_output_json_path(file_path = cwd, file_name = "sf-test_result.json")
# manually assigned point loads
# ext_load = np.zeros([1,7])
# ext_load[0,0] = 3
# ext_load[0,4] = -1
# include_sw = True
# import load from a json file
# ext_load, include_sw = cm.parse_load_case_from_json(load_case_path)
# print('external load:')
# print(ext_load)
# print('include self weight : {0}'.format(include_sw))
# sc.set_load(nodal_forces = ext_load)
# sc.set_self_weight_load(include_sw) # now this is true by default
# sc.set_nodal_displacement_tol(transl_tol=1e-3)
existing_ids = [125, 126, 115, 122, 111, 108, 23, 22, 98, 75, 64, 34, 61, 65, 59, 60, 39, 36, 44, 67]
# existing_ids = [64] # unsupported
# print("Pass criteria? {0}\n".format(sc.solve(list(existing_ids))))
print("Pass criteria? {0}\n\n".format(sc.solve()))
# Collecting results
success, nD, fR, eR = sc.get_solved_results()
print("pass criteria?\n {0}".format(success))
trans_tol, rot_tol = sc.get_nodal_deformation_tol()
max_trans, max_rot, max_trans_vid, max_rot_vid = sc.get_max_nodal_deformation()
compliance = sc.get_compliance()
print('max deformation: translation: {0} / tol {1}, at node #{2}'.format(max_trans, trans_tol, max_trans_vid))
print('max deformation: rotation: {0} / tol {1}, at node #{2}'.format(max_rot, rot_tol, max_rot_vid))
# e_stiffness_mats = sc.get_element_stiffness_matrices(exist_ids=[])
e_stiffness_mats = sc.get_element_stiffness_matrices()
# print(e_stiffness_mats[0].shape)
e_rot_mats = sc.get_element_local2global_rot_matrices()
# print(e_rot_mats[0].shape)
# nodal_load = sc.get_nodal_load(existing_ids)
nodal_load = sc.get_nodal_load([])
# print(nodal_load.size)
e2dof_map = sc.get_element2dof_id_map()
# print(e2dof_map.shape)
v2dof_map = sc.get_node2dof_id_map()
# print(v2dof_map.shape)
# print(v2dof_map)
num_node, num_e = sc.get_frame_stat()
print('# of nodes: {}. # of elements: {}'.format(num_node, num_e))
# visualize deformed structure
orig_beam_shape = sc.get_original_shape(disc=disc, draw_full_shape=False)
cp_orig_beam_shape = orig_beam_shape
# print(cp_orig_beam_shape)
beam_disp = sc.get_deformed_shape(exagg_ratio=exagg_ratio, disc=disc)
# print(beam_disp)
meshcat_visualize_deformed(vis, beam_disp, cp_orig_beam_shape, disc=disc, scale=scale)
# +
import os
cwd = os.getcwd()
json_path = os.path.join(cwd, '..', "assembly_instances/extrusion","four-frame.json")
sc = cm.stiffness_checker(json_file_path = json_path, verbose = False)
e_rot_mats = sc.get_element_local2global_rot_matrices()
for mat in e_rot_mats:
print('---')
for sub in mat[0:3]:
print(sub[0:3])
# -
# # animate deformation for a construction sequence
# +
import os
import time
cwd = os.getcwd()
json_path = os.path.join(cwd, '..', 'test_data', "tower_3D.json")
# json_path = os.path.join(cwd,"sf-test_3-frame.json")
load_case_path = os.path.join(cwd, '..', 'test_data', "tower_3D_load_case.json")
vis.delete()
disc = 10
exagg_ratio=1.0
time_step = 1.0
sc = cm.stiffness_checker(json_file_path = json_path, verbose = False)
# sc.set_output_json(True)
# sc.set_output_json_path(file_path = cwd, file_name = "sf-test_result.json")
# ext_load = np.zeros([1,7])
# ext_load[0,0] = 3
# include_sw = False
# ext_load, include_sw = cm.parse_load_case_from_json(load_case_path)
# print('external load:')
# print(ext_load)
# print('include self weight : {0}'.format(include_sw))
# sc.set_load(nodal_forces = ext_load)
# sc.set_self_weight_load(include_sw)
sc.set_self_weight_load(True)
for i in range(0,24):
vis.delete()
existing_ids = list(range(0,i+1))
sc.solve(existing_ids)
orig_beam_shape = sc.get_original_shape(disc=disc, draw_full_shape=False)
beam_disp = sc.get_deformed_shape(exagg_ratio=exagg_ratio, disc=disc)
meshcat_visualize_deformed(vis, beam_disp, orig_beam_shape, disc=disc, scale=0.5)
time.sleep(time_step)
# -
# # repetitive run test
# +
import os
import time
cwd = os.getcwd()
json_path = os.path.join(cwd, '..', 'assembly_instances','extrusion', 'topopt-100_S1_03-14-2019_w_layer.json')
N = 10000
solve_partial = True
existing_ids = [125, 126, 115, 122, 111, 108, 23, 22, 98, 75, 64, 34, 61, 65, 59, 60, 39, 36, 44, 67]
check_result = False
# -
# ## without reinit on every `solve` call
# +
sc = cm.stiffness_checker(json_file_path = json_path, verbose = False)
st_time = time.time()
for i in range(0,N):
if solve_partial:
sc.solve(existing_ids)
else:
sc.solve()
if check_result:
trans_tol, rot_tol = sc.get_nodal_deformation_tol()
max_trans, max_rot, max_trans_vid, max_rot_vid = sc.get_max_nodal_deformation()
compliance = sc.get_compliance()
print('iter - {0}'.format(i))
print('max deformation: translation : {0} / tol {1}, at node #{2}'.format(max_trans, trans_tol, max_trans_vid))
print('max deformation: rotation : {0} / tol {1}, at node #{2}'.format(max_rot, rot_tol, max_rot_vid))
print('avg time: {0} s'.format((time.time() - st_time) / N))
# -
# ## reinit on every `solve` call
# +
st_time = time.time()
for i in range(0,N):
sc = cm.stiffness_checker(json_file_path = json_path, verbose = False)
existing_ids = list(range(0,10))
if solve_partial:
sc.solve(existing_ids)
else:
sc.solve()
if check_result:
trans_tol, rot_tol = sc.get_nodal_deformation_tol()
max_trans, max_rot, max_trans_vid, max_rot_vid = sc.get_max_nodal_deformation()
compliance = sc.get_compliance()
print('iter - {0}'.format(i))
print('max deformation: translation : {0} / tol {1}, at node #{2}'.format(max_trans, trans_tol, max_trans_vid))
print('max deformation: rotation : {0} / tol {1}, at node #{2}'.format(max_rot, rot_tol, max_rot_vid))
print('avg time: {0} s'.format((time.time() - st_time) / N))
# -
# ## What exactly is the `stiffness_checker` trying to solve?
#
# Elastic structures deform under load, and by deforming they develop resistance (reaction) forces that balance the external load.
#
# So in a nutshell, the stiffness checker calculates the elastic deformation and the corresponding reaction forces of a frame structure under a given load. We mainly consider the gravity load induced by the elements' self-weight in construction sequencing.
#
# Conceptually, the solver tries to piece many elements' unit behavior together to reach equilibrium with the external force. Each element obeys both Hooke's law and the [beam equations](https://en.wikipedia.org/wiki/Euler%E2%80%93Bernoulli_beam_theory), which tell us how an element **develops internal force** to balance external load via **deformation**.
#
# So locally in **each beam's own local coordinate system**, we have the local elastic equation:
# $$\begin{pmatrix} F^{e-n1}_{L} \\ --- \\ F^{e-n2}_{L} \end{pmatrix} := \begin{pmatrix}F^{n1}_{Lx}\\ F^{n1}_{Ly} \\ F^{n1}_{Lz} \\ M^{n1}_{Lx} \\ M^{n1}_{Ly} \\ M^{n1}_{Lz} \\ --- \\ F^{n2}_{Lx}\\ F^{n2}_{Ly} \\ F^{n2}_{Lz} \\ M^{n2}_{Lx} \\ M^{n2}_{Ly} \\ M^{n2}_{Lz}\end{pmatrix} = \mathbf{K_e} \begin{pmatrix} d^{n1}_{Lx}\\ d^{n1}_{Ly} \\ d^{n1}_{Lz} \\ \theta^{n1}_{Lx} \\ \theta^{n1}_{Ly} \\ \theta^{n1}_{Lz} \\ --- \\ d^{n2}_{Lx}\\ d^{n2}_{Ly} \\ d^{n2}_{Lz} \\ \theta^{n2}_{Lx} \\ \theta^{n2}_{Ly} \\ \theta^{n2}_{Lz} \end{pmatrix} = \mathbf{K_e} \begin{pmatrix} u^{e-n1}_{L} \\ --- \\ u^{e-n2}_{L} \end{pmatrix}$$
#
# Here you can conceptually think of this $12 \times 12$ element stiffness matrix $\mathbf{K_e}$ as the stiffness factor $k$ in Hooke's law $\Delta{F} = k \Delta{x}$ for a spring system. The only difference is that it also captures the shear, bending, and torsion effects, not only the axial elongation (see picture below).
#
# 
# image source: [MIT 1.571 lecture note 11](../../../../docs/references/MIT_1.571_L11_Displacement_Method.pdf), page 11 (<NAME>)
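# The same assemble-and-solve pattern can be sketched in 1D with two springs in series (hypothetical stiffness values, axial DOFs only — not the conmech API): each element contributes its small stiffness block into a global matrix, the grounded DOF is removed, and the reduced system is solved.

```python
# Two springs in series: node 0 -- k1 -- node 1 -- k2 -- node 2
k1, k2 = 2.0, 1.0          # hypothetical spring stiffnesses
P = 4.0                    # axial load applied at node 2

# Assemble the 3x3 global stiffness from each element's 2x2 block
K = [[0.0] * 3 for _ in range(3)]
for (i, j, k) in [(0, 1, k1), (1, 2, k2)]:
    K[i][i] += k; K[i][j] -= k
    K[j][i] -= k; K[j][j] += k

# Node 0 is fixed: keep rows/columns 1 and 2 and solve the 2x2 reduced
# system K_red u = F by Cramer's rule
a, b, c, d = K[1][1], K[1][2], K[2][1], K[2][2]
f1, f2 = 0.0, P
det = a * d - b * c
u1 = (f1 * d - b * f2) / det
u2 = (a * f2 - f1 * c) / det

# Reaction at the fixed node balances the applied load
R0 = K[0][1] * u1 + K[0][2] * u2
```

# With these numbers the displacements are $u_1 = P/k_1 = 2$ and $u_2 = P/k_1 + P/k_2 = 6$, and the support reaction $R_0 = -P$ — the 1D version of the nodal equilibrium conditions discussed below.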
#
# But since some elements share a node, their reactions must relate to each other so that:
# 1. the deformation at the shared node ($u^{e-v}_{L}$) is the same, and
# 2. the reaction forces of these elements ($F^{e-v}_{L}$) reach equilibrium at the shared node.
#
# Thus, at each node $v$, we first have the equilibrium equation:
#
# $$\sum_{e \in \{e | e \sim v\}} \begin{pmatrix}R_{e, GL} & 0 \\ 0 & R_{e, GL}\\\end{pmatrix}\begin{pmatrix}F^{v}_{e, Lx} \\ F^{v}_{e, Ly} \\ F^{v}_{e, Lz} \\ M^{v}_{e, Lx} \\ M^{v}_{e, Ly} \\ M^{v}_{e, Lz} \end{pmatrix} = \sum_{e \in \{e | e \sim v\}} (F_{v, \textrm{e self-w load}}) + F_{v, \textrm{pt load}} $$
#
# The RHS of the equation above represents all the loads (gravity, external point loads) at node $v$. I will come back to loads in the section "The relationship between `fixities_reaction` and `element_reaction`" below. Notice that we have to apply the local-to-global rotation matrix to transform all the element internal forces into the global coordinate system.
#
# Then, we also have to make sure the shared nodal deformation is the same for all the connected elements, so by plugging into the local elastic equation above, the equilibrium equation becomes:
#
# $$(\sum_{e \in \{e | e \sim v\}} \begin{pmatrix}R_{e, GL} & 0 \\ 0 & R_{e, GL}\\\end{pmatrix} \mathbf{K_e} \begin{pmatrix}R_{e, GL} & 0 \\ 0 & R_{e, GL}\\\end{pmatrix}^T) \begin{pmatrix} u^{e-n1}_{G} \\ u^{e-n2}_{G} \end{pmatrix} = \sum_{e \in \{e | e \sim v\}} (F_{v, \textrm{e self-w load}}) + F_{v, \textrm{pt load}} $$
#
# Notice that we are enforcing the nodal deformation compatibility by having all the connected elements share the same nodal deformation $\begin{pmatrix} u^{e-n1}_{G}, u^{e-n2}_{G} \end{pmatrix}$ **in global coordinate**.
#
# The equation above must be satisfied at every node in the structure, and we have to solve all of these equations together by *assembling* the global stiffness matrix.
#
# PS: we can really see the essence of FEM here: first we have the physics model for one single element, then we enforce (1) internal reaction equilibrium at the nodes (the shared element boundaries) and (2) compatibility of the deformation at the shared element boundaries. Finally, we assemble these nodal equations into a giant linear system and solve it.
# # `stiffness_checker` outputs explained
# We have three outputs from the `stiffness_checker`:
#
# ## 1.`nodal_deformation`
#
# - (num_of_nodes x 7) matrix
#
# Each row is $(\textrm{node id}, d_{Gx}, d_{Gy}, d_{Gz}, \theta_{Gx}, \theta_{Gy}, \theta_{Gz})$. Here the subscript $G\,\cdot$ means the displacements are described in the **global coordinate system**.
# ## 2. `element_reaction`
#
# - (num_of_elements x 13) matrix
#
# Each row is
# $(\textrm{element_id}, F^{n1}_{Lx}, F^{n1}_{Ly}, F^{n1}_{Lz}, M^{n1}_{Lx}, M^{n1}_{Ly}, M^{n1}_{Lz}, F^{n2}_{Lx}, F^{n2}_{Ly}, F^{n2}_{Lz}, M^{n2}_{Lx}, M^{n2}_{Ly}, M^{n2}_{Lz})$. Each row describes the element's internal reaction force and moment at the two end points.
#
# Here $n1, n2$ refer to the end nodes of this element. $F_{L\,\cdot}$ and $M_{L\,\cdot}$ refer to the element's internal force and moment **under the element's local coordinate system**.
#
# By convention, the local coordinate system is constructed by setting the origin at node $n1$ and the local x axis along the direction from $n1$ to $n2$ (see picture below). The local y axis is constructed by first checking whether the element's local x axis is aligned with the global z axis, and otherwise taking the cross product of the local x axis and the global z axis. See the [src code](https://github.com/yijiangh/conmech/blob/224b24b07688af61033994a23f24fb9bb0e7c2d0/src/stiffness_checker/Util.cpp#L72-L142) for more details.
#
# PS: This extra degree of freedom in choosing the local y axis makes it very hard to compare our internal element reaction results with those of an existing FEM solver...
#
# 
# image source: https://www.sciencedirect.com/topics/engineering/moment-distribution
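# One plausible reading of that local-axis convention can be sketched as follows (the exact tie-breaking lives in the linked `Util.cpp`; treat this as illustrative, not the conmech implementation):

```python
import math

def local_axes(n1, n2, tol=1e-8):
    # Local x: unit vector from n1 to n2
    ex = [b - a for a, b in zip(n1, n2)]
    L = math.sqrt(sum(c * c for c in ex))
    ex = [c / L for c in ex]
    if abs(ex[2]) > 1.0 - tol:
        # Element (anti)parallel to global z: fall back to global y as local y
        ey = [0.0, 1.0, 0.0]
    else:
        # cross(global_z, local_x), normalized
        gz = (0.0, 0.0, 1.0)
        ey = [gz[1] * ex[2] - gz[2] * ex[1],
              gz[2] * ex[0] - gz[0] * ex[2],
              gz[0] * ex[1] - gz[1] * ex[0]]
        n = math.sqrt(sum(c * c for c in ey))
        ey = [c / n for c in ey]
    # Local z completes the right-handed frame: cross(ex, ey)
    ez = [ex[1] * ey[2] - ex[2] * ey[1],
          ex[2] * ey[0] - ex[0] * ey[2],
          ex[0] * ey[1] - ex[1] * ey[0]]
    return ex, ey, ez
```

# For a horizontal element along global x this yields local axes aligned with the global frame; for a vertical element the fallback branch kicks in.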
# ## 3.`fixities_reaction`
# - (num_of_fixities x 7) matrix
#
# Each row is $(\textrm{fix_node_id}, F_{Gx}, F_{Gy}, F_{Gz}, M_{Gx}, M_{Gy}, M_{Gz})$. Here the forces and moments are described in **global coordinate system**.
# ## The relationship between `fixities_reaction` and `element_reaction`
#
# Since we are dealing with fully constrained elastic frame structures (all the frame joints and fixities are 6-dof fixed), a partially assembled structure will never collapse due to a mechanism, as long as it has no unconnected, floating elements. In other words, the reaction forces will be in equilibrium at every node (whether grounded or not).
#
# ### For nodes that are not fixities
#
# For nodes that are not fixities, the forces are from connected elements' internal reaction force and loads. So the force equilibrium at this node $v$ is:
# $$\sum_{e \in \{e | e \sim v\}} \begin{pmatrix}R_{e, GL} & 0 \\ 0 & R_{e, GL}\\\end{pmatrix}\begin{pmatrix}F^{v}_{e, Lx} \\ F^{v}_{e, Ly} \\ F^{v}_{e, Lz} \\ M^{v}_{e, Lx} \\ M^{v}_{e, Ly} \\ M^{v}_{e, Lz} \end{pmatrix} = \sum_{e \in \{e | e \sim v\}} (F_{v, \textrm{e self-w load}}) + F_{v, \textrm{pt load}} $$
#
# Where $R_{e, GL}$ is the $3 \times 3$ rotation matrix that maps element $e$'s local coordinates to the global frame. The element internal reaction $\begin{pmatrix}F^{v}_{e, Lx}, F^{v}_{e, Ly}, F^{v}_{e, Lz}, M^{v}_{e, Lx}, M^{v}_{e, Ly}, M^{v}_{e, Lz} \end{pmatrix}$ consists of the entries that we get from element $e$'s corresponding row in `element_reaction`. This vector is described in the local coordinates of element $e$, which is why we apply the rotation matrix $R_{e, GL}$ to transform it back to global coordinates, so that all the elements' reaction forces are described in the same system.
#
# Notice that each row of `element_reaction` contains two such 6-dimensional vectors, corresponding to the two end points of element $e$.
# (We do not have a convenient API yet to tell which node index these two 6-dim vectors refer to, but I can add one if we need it.)
#
#
# The force $F_{v, \textrm{e self-w load}}$ is the lumped self-weight of element $e$, described in global coordinates. We will talk about the formulation of this self-weight load in more detail in the next section.
#
# The $F_{v, \textrm{pt load}}$ refers to extra loads specified directly on the node. We don't have these in our `pb-construction` sequencing context (but "classic structural analysis" usually works with such point-load scenarios!)
#
# The summation over all connected elements $e \in \{e | e \sim v\}$ means that both the set of internal forces and the set of self-weight loads change as we consider different partially assembled structures. Indeed, within the implementation of `stiffness_checker`, we pre-calculate all the [element-wise stiffness matrices](https://github.com/yijiangh/conmech/blob/224b24b07688af61033994a23f24fb9bb0e7c2d0/src/stiffness_checker/Stiffness.cpp#L246) and [self-weight loads](https://github.com/yijiangh/conmech/blob/224b24b07688af61033994a23f24fb9bb0e7c2d0/src/stiffness_checker/Stiffness.cpp#L363), both in global coordinates. Whenever we feed the checker a set of existing element ids, we simply assemble the corresponding elements' stiffness matrices and load vectors into a structure-wise linear system and solve it. This way, no computation is repeated.
#
# ### For nodes that are fixities
#
# For nodes that are fixities, the forces are from connected elements' internal reaction force and the reaction force from the ground (fixities). So the force equilibrium at this node $v$ is:
#
# $$\sum_{e \in \{e | e \sim v\}} \begin{pmatrix}R_{e, GL} & 0 \\ 0 & R_{e, GL}\\\end{pmatrix}\begin{pmatrix}F^{v}_{e, Lx} \\ F^{v}_{e, Ly} \\ F^{v}_{e, Lz} \\ M^{v}_{e, Lx} \\ M^{v}_{e, Ly} \\ M^{v}_{e, Lz} \end{pmatrix} = \begin{pmatrix}RF^{v}_{Gx} \\ RF^{v}_{Gy} \\ RF^{v}_{Gz} \\ RM^{v}_{Gx} \\ RM^{v}_{Gy} \\ RM^{v}_{Gz} \end{pmatrix}$$
#
# Where $R_{e, GL}$ is the $3 \times 3$ rotation matrix that maps element $e$'s local coordinates to the global frame. Here $RF^{v}_{G\cdot}$ and $RM^{v}_{G\cdot}$ refer to the fixity reaction force and moment in global coordinates, **which are given by the corresponding row of the output matrix `fixities_reaction`**.
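# A cheap sanity check that follows from this (a sketch, not a conmech API): since every node is in equilibrium, summing the force components of all `fixities_reaction` rows against the total applied load should give a zero translational residual.

```python
def translational_residual(fixity_reactions, applied_loads):
    # Global force balance: support reactions plus applied loads sum to zero.
    # Each entry is an (Fx, Fy, Fz) triple in the global frame; moments are
    # omitted here for brevity.
    return [sum(r[k] for r in fixity_reactions) + sum(p[k] for p in applied_loads)
            for k in range(3)]

# Two supports each carrying 10 units upward against a 20-unit gravity load
res = translational_residual([(0, 0, 10.0), (0, 0, 10.0)], [(0, 0, -20.0)])
```

# Any nonzero residual would indicate a bookkeeping error in the reactions or the load vector.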
# ## Calculation of self-weight load at a node
#
# The main load case in construction sequencing is self-weight, i.e. the gravity force that acts *along the span of the frame element*, which is a distributed load. One common idea in the finite element method (FEM) is to "lump" such distributed loads onto the element's boundary. In our context, the boundary of an element is its two end points, so we calculate the equivalent point loads acting at the element end points.
#
# From the beam equation, we can calculate the equivalent point loads (force, moment) for a uniform load of density $w$ using the formula outlined in the picture below:
#
# 
# image source: [MIT 1.571 lecture note 11](../../../../docs/references/MIT_1.571_L11_Displacement_Method.pdf), page 9 (<NAME>)
#
# And for skewed elements, where the local x axis is not aligned with the global z axis (the negative gravity direction), we can decompose the gravity force into a component perpendicular to the local x axis, $P_{\perp x}$, and one parallel to it, $P_{// x}$. The $P_{// x}$ component can simply be treated as a load along the local x axis, and $P_{\perp x}$ can be lumped into end-point loads using the fixed-end uniformly loaded beam formula above.
#
# This is still a bit handwavy for now, but I will find time to fill in more details later... See the [source code](https://github.com/yijiangh/conmech/blob/224b24b07688af61033994a23f24fb9bb0e7c2d0/src/stiffness_checker/Stiffness.cpp#L363) for more details.
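# As a small concrete instance of that lumping formula (the standard textbook fixed-end results for a fully fixed beam under a uniform transverse load $w$):

```python
def fixed_end_actions(w, L):
    # Fixed-fixed beam under uniform transverse load w (force per unit length):
    # each support carries shear w*L/2 and a fixing moment of magnitude w*L**2/12
    shear = w * L / 2.0
    moment = w * L ** 2 / 12.0
    return shear, moment

# e.g. w = 2 force/length over a span L = 3
shear, moment = fixed_end_actions(2.0, 3.0)
```

# These are the equivalent end-point loads that replace the distributed self-weight of one element in the global load vector.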
| examples/notebook_demo/demo.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Computational graphs
#
# PyTorch records the operations applied to tensors, which lets us keep track of values and compute gradients automatically. By creating a tensor with `requires_grad=True`, we can later access its `grad` property.
import torch
import numpy as np
x = torch.ones(2,2, requires_grad=True)
def describe(x):
print("Type: {}".format(x.type()))
print("Shape: {}".format(x.shape))
print("value: \n{}".format(x))
describe(x)
x.grad is None
y = (x+1)*(x+3) +2
describe(y)
y.grad is None
x.grad is None
z= y.mean()
describe(z)
x.grad is None
z.backward()
# In this set of operations we can obtain the gradient by calling the `backward()` function on the end node of the computation graph and then accessing the `grad` attribute, which holds the gradient of the variable.
#
# To understand this further, let's walk through the operations: `y` is calculated from `x`. Since `x` is a `2x2` matrix, `y` is computed with element-wise arithmetic operations, and `z` is calculated by calling `mean()`. The `backward()` function computes the gradient of the current tensor w.r.t. the graph leaves (here `x` is the leaf of the computation graph).
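# Working this out by hand: $z = \frac{1}{4}\sum_i \big((x_i+1)(x_i+3)+2\big)$, so $\partial z/\partial x_i = (2x_i+4)/4 = 1.5$ at $x_i = 1$ — every entry of `x.grad` should be 1.5. A plain-Python finite-difference sketch (no PyTorch needed) double-checks the analytic value:

```python
# z for a flat list of tensor entries, mirroring z = ((x+1)*(x+3)+2).mean()
def z_value(entries):
    return sum((v + 1.0) * (v + 3.0) + 2.0 for v in entries) / len(entries)

# Central finite difference of z with respect to entry i
def grad_fd(entries, i, h=1e-6):
    up = list(entries); up[i] += h
    down = list(entries); down[i] -= h
    return (z_value(up) - z_value(down)) / (2 * h)

# For x = ones(2, 2): dz/dx_i = (2 * 1 + 4) / 4 = 1.5
g = grad_fd([1.0, 1.0, 1.0, 1.0], 0)
```
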
x.grad
| libraries/pytorch/Gradients.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# -*- coding: utf-8 -*-
import pandas as pd
import numpy as np
import tensorflow as tf
class svdTensorflow:
# Step 1: download the data
# Step 2: data preparation
ratings_df = pd.read_csv(r'C:\Users\tong\Desktop\movies\ml-latest-small\ratings.csv')
movies_df = pd.read_csv(r'C:\Users\tong\Desktop\movies\ml-latest-small\movies.csv')
movies_df['movieRow'] = movies_df.index
## keep only the needed columns of movies_df
movies_df = movies_df[['movieRow', 'movieId', 'title']]
movies_df.to_csv('movieProcessed.csv', index=False, header=True, encoding='utf-8')
# replace movieId in ratings_df with the row number
ratings_df = pd.merge(ratings_df, movies_df, on='movieId')
ratings_df = ratings_df[['userId', 'movieRow', 'rating']]
ratings_df.to_csv('ratingsProcessed.csv', index=False, header=True,encoding='utf-8')
# build the movie-rating matrix `rating` and the indicator matrix `record`: 1 if the user rated the movie, 0 otherwise
userNo = ratings_df['userId'].max() + 1
movieNo = ratings_df['movieRow'].max() + 1
rating = np.zeros((movieNo,userNo))
flag = 0
ratings_df_length = np.shape(ratings_df)[0] #ratings_df的样本个数
# fill in rating
for index, row in ratings_df.iterrows():
# write this row's score into the (movie, user) entry of rating
rating[int(row['movieRow']), int(row['userId'])] = row['rating']
flag += 1
# in the rating table, >0 means already rated and 0 means not rated
record = rating > 0
# convert booleans to 0 and 1
record = np.array(record, dtype=int)
## Step 3: build the model
# a helper that mean-centers the ratings; rescaling them this way helps the rating system perform a bit better
def normalizeRatings(rating, record):
m, n = rating.shape
rating_mean = np.zeros((m, 1)) #初始化对于每部电影每个用户的平均评分
rating_norm = np.zeros((m, n)) #保存处理后的数据
# subtract the per-movie average from the raw ratings, then return the result together with the averages
for i in range(m):
idx = record[i, :] != 0 #获取已经评分的电影的下标
rating_mean[i] = np.mean(rating[i, idx]) #计算平均值,右边式子代表第i行已经评分的电影的平均值
rating_norm[i, idx] = rating[i, idx] - rating_mean[i]
return rating_norm, rating_mean
rating_norm, rating_mean = normalizeRatings(rating, record)
rating_norm = np.nan_to_num(rating_norm)
rating_mean = np.nan_to_num(rating_mean)
# assume there are 10 latent movie features ("genres")
num_features = 10
# initialize the movie matrix X and the user-preference matrix Theta with normally distributed random values
X_parameters = tf.Variable(tf.random_normal([movieNo, num_features], stddev=0.35))
Theta_paramters = tf.Variable(tf.random_normal([userNo, num_features], stddev=0.35))
# tf.matmul(X_parameters, Theta_paramters, transpose_b=True) multiplies X_parameters by the transpose of Theta_paramters
loss = 1/2 * tf.reduce_sum(((tf.matmul(X_parameters, Theta_paramters, transpose_b=True)
- rating_norm) * record) ** 2) \
+ 1/2 * (tf.reduce_sum(X_parameters**2)+tf.reduce_sum(Theta_paramters**2)) #正则化项,其中λ=1,可以调整来观察模型性能变化。
# create the optimizer and the optimization target
optimizer = tf.train.AdamOptimizer(1e-4)
train = optimizer.minimize(loss)
# Step 4: train the model
# use TensorFlow's tf.summary module to export data for visualization; since loss is a scalar, use the scalar function
tf.summary.scalar('loss', loss)
# merge all summary information
summaryMerged = tf.summary.merge_all()
# define the path where the summaries are saved
filename = ('C:/Users/tong/Desktop/movies/ml-latest-small/movie_tensorboard')
# save the information to a file
writer = tf.summary.FileWriter(filename)
# create a TensorFlow session
sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)
# start training the model
for i in range(5000):
# unimportant values can go to an underscore: each train result is discarded into "_", each summaryMerged result is kept in "movie_summary"
_, movie_summary = sess.run([train, summaryMerged])
writer.add_summary(movie_summary, i)
# track the iteration count (optional, just to see how much is left)
#print('i=', i, 'loss=', loss)
# Step 5: evaluate the model
Current_X_paramters, Current_Theta_parameters = sess.run([X_parameters, Theta_paramters])
# multiply the movie-content matrix by the user-preference matrix, then add each row's mean to get the full predicted rating table
predicts = np.dot(Current_X_paramters, Current_Theta_parameters.T) + rating_mean
# use the square root of the sum of squared residuals between predictions and true ratings as the error; it decreases as iterations increase
errors = np.sqrt(np.sum((predicts - rating)**2))
# Step 6: build the full movie recommender
user_id = input('请输入用户编号:')
# predicts[:, int(user_id)] holds the system's predicted ratings of every movie for this user
# argsort() sorts ascending; argsort()[::-1] sorts descending
sortedResult = predicts[:, int(user_id)].argsort()[::-1]
# idx keeps track of how many movies have been recommended
idx = 0
print('为该用户推荐的评分最高的10部电影是:'.center(30, '='))
for i in sortedResult:
print('评分: %.2f, 电影id: %d' % (predicts[i, int(user_id)], movies_df.iloc[i]['movieId']))
idx += 1
if idx == 10:
break
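# The prediction step used above — the factor matrices multiplied together, plus each movie's mean rating — can be sketched in plain Python (tiny hypothetical factors, not the trained TensorFlow values):

```python
def predict_ratings(X, Theta, movie_means):
    # predicted[i][j] = dot(X[i], Theta[j]) + mean rating of movie i
    return [[sum(xf * th for xf, th in zip(X[i], Theta[j])) + movie_means[i]
             for j in range(len(Theta))]
            for i in range(len(X))]

# Two movies and two users with 2 latent features each
X = [[1.0, 0.0], [0.0, 2.0]]          # movie-feature matrix
Theta = [[3.0, 1.0], [0.0, 1.0]]      # user-preference matrix
preds = predict_ratings(X, Theta, [0.5, 1.0])
```

# This mirrors `np.dot(Current_X_paramters, Current_Theta_parameters.T) + rating_mean` from the script above.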
| algorithm/svdTensorflow.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # LEARNING APPLICATIONS
#
# In this notebook we will take a look at some indicative applications of machine learning techniques. We will cover content from [`learning.py`](https://github.com/aimacode/aima-python/blob/master/learning.py), for chapter 18 from <NAME>'s and <NAME>'s book [*Artificial Intelligence: A Modern Approach*](http://aima.cs.berkeley.edu/). Execute the cell below to get started:
from learning import *
from notebook import *
# ## CONTENTS
#
# * MNIST Handwritten Digits
# * Loading and Visualising
# * Testing
# ## MNIST HANDWRITTEN DIGITS CLASSIFICATION
#
# The MNIST database, available from [this page](http://yann.lecun.com/exdb/mnist/), is a large database of handwritten digits that is commonly used for training and testing/validating in Machine learning.
#
# The dataset has **60,000 training images** each of size 28x28 pixels with labels and **10,000 testing images** of size 28x28 pixels with labels.
#
# In this section, we will use this database to compare performances of different learning algorithms.
#
# It is estimated that humans have an error rate of about **0.2%** on this problem. Let's see how our algorithms perform!
#
# NOTE: We will be using external libraries to load and visualize the dataset smoothly ([numpy](http://www.numpy.org/) for loading and [matplotlib](http://matplotlib.org/) for visualization). You do not need previous experience of the libraries to follow along.
# ### Loading MNIST Digits Data
#
# Let's start by loading MNIST data into numpy arrays.
#
# The function `load_MNIST()` loads MNIST data from files saved in `aima-data/MNIST`. It returns four numpy arrays that we are going to use to train and classify hand-written digits in various learning approaches.
train_img, train_lbl, test_img, test_lbl = load_MNIST()
# Check the shape of these NumPy arrays to make sure we have loaded the database correctly.
#
# Each 28x28 pixel image is flattened to a 784x1 array and we should have 60,000 of them in training data. Similarly, we should have 10,000 of those 784x1 arrays in testing data.
print("Training images size:", train_img.shape)
print("Training labels size:", train_lbl.shape)
print("Testing images size:", test_img.shape)
print("Testing labels size:", test_lbl.shape)
# ### Visualizing Data
#
# To get a better understanding of the dataset, let's visualize some random images for each class from training and testing datasets.
# takes 5-10 seconds to execute this
show_MNIST(train_lbl, train_img)
# takes 5-10 seconds to execute this
show_MNIST(test_lbl, test_img)
# Let's have a look at the average of all the images of training and testing data.
# +
print("Average of all images in training dataset.")
show_ave_MNIST(train_lbl, train_img)
print("Average of all images in testing dataset.")
show_ave_MNIST(test_lbl, test_img)
# -
# ## Testing
#
# Now, let us convert this raw data into `DataSet.examples` to run the algorithms defined in `learning.py`. Every image is represented by 784 numbers (28x28 pixels), and we append each one with its label or class to make it work with our implementations in the learning module.
print(train_img.shape, train_lbl.shape)
temp_train_lbl = train_lbl.reshape((60000,1))
training_examples = np.hstack((train_img, temp_train_lbl))
print(training_examples.shape)
# Now, we will initialize a DataSet with our training examples, so we can use it in our algorithms.
# takes ~10 seconds to execute this
MNIST_DataSet = DataSet(examples=training_examples, distance=manhattan_distance)
# Moving forward we can use `MNIST_DataSet` to test our algorithms.
# ### Plurality Learner
#
# The Plurality Learner always returns the class with the most training samples. In this case, `1`.
pL = PluralityLearner(MNIST_DataSet)
print(pL(177))
# +
# %matplotlib inline
print("Actual class of test image:", test_lbl[177])
plt.imshow(test_img[177].reshape((28,28)))
# -
# This learner is obviously not very accurate. In fact, it guesses correctly on only 1135 of the 10,000 test samples, roughly 10%. It is very fast, though, so it might have its use as a quick first guess.
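# The plurality baseline is easy to reproduce in plain NumPy; the sketch below uses synthetic labels in place of the MNIST arrays loaded above:

```python
import numpy as np

# Majority-class ("plurality") baseline on synthetic 10-class labels.
rng = np.random.default_rng(0)
train_lbl = rng.integers(0, 10, size=1000)   # stand-in for MNIST training labels
test_lbl = rng.integers(0, 10, size=200)     # stand-in for MNIST test labels

majority_class = int(np.bincount(train_lbl).argmax())      # most frequent training label
baseline_acc = float(np.mean(test_lbl == majority_class))  # expect roughly 10% for 10 classes
```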
# ### Naive-Bayes
#
# The Naive-Bayes classifier is an improvement over the Plurality Learner. It is much more accurate, but a lot slower.
# +
# takes ~45 Secs. to execute this
nBD = NaiveBayesLearner(MNIST_DataSet, continuous=False)
print(nBD(test_img[0]))
# -
# ### k-Nearest Neighbors
#
# We will now try to classify a random image from the dataset using the kNN classifier.
# takes ~20 Secs. to execute this
kNN = NearestNeighborLearner(MNIST_DataSet, k=3)
print(kNN(test_img[211]))
# To make sure that the output we got is correct, let's plot that image along with its label.
# +
# %matplotlib inline
print("Actual class of test image:", test_lbl[211])
plt.imshow(test_img[211].reshape((28,28)))
# -
# Hurray! We've got it correct. Don't worry if the algorithm predicted a wrong class; with this technique we get only ~97% accuracy on this dataset.
| learning_apps.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="ohSVX97z_IK8"
# # Numpy Discussion
#
# + colab={"base_uri": "https://localhost:8080/"} id="ZI6k8XGt_OEh" executionInfo={"status": "ok", "timestamp": 1615266101771, "user_tz": 300, "elapsed": 486, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "09789286685912502299"}} outputId="9561bd6e-8918-49be-b490-3386282c1f4b"
"""
Numpy
A numpy array looks similar to a regular Python list at first glance,
but concatenation/joining behaves differently.
For example:
"""
import numpy as np
x = np.array([[1,2],[3,4]])
y = np.array([[5,6],[7,8]])
print("Numpy array:",x + y)
a=[[1,2],[3,4]]
b=[[5,6],[7,8]]
print("Regular array:",a+b)
print("\n")
"""
In addition, we noted that numpy arrays are not modified in place by
functions like np.append, which return a new array instead.
For example:
"""
foo=np.array([1,2])
np.append(foo,10)
print("No change:",foo)
foo=np.append(foo,50)
print("change:",foo)
"""
but a regular list's append does modify it in place:
"""
bar=[1,2]
bar.append(1000)
print("regular array:",bar)
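# One more distinction worth noting: + on lists concatenates, while + on numpy arrays adds element-wise. To actually join numpy arrays, use np.concatenate:

```python
import numpy as np

x = np.array([[1, 2], [3, 4]])
y = np.array([[5, 6], [7, 8]])
joined = np.concatenate([x, y])  # stacks rows -> shape (4, 2), like list +
summed = x + y                   # element-wise addition -> shape (2, 2)
```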
# + [markdown] id="9NVWcBwnya4T"
# # Part 1
#
#
# + id="FEEH9-XXs3kp" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1615266128774, "user_tz": 300, "elapsed": 27480, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "09789286685912502299"}} outputId="b01d6947-c4fa-4caa-d17b-95e9c7a115b2"
from google.colab import drive
print("Part: 1\n")
drive.mount('/content/drive')
# + [markdown] id="p3qPfxLpyjeg"
# # Part 2
#
#
# + id="5YFwFBrIy68b" executionInfo={"status": "ok", "timestamp": 1615266128775, "user_tz": 300, "elapsed": 27478, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "09789286685912502299"}}
import pandas as pd
# + id="L7xAfNqezJv-" executionInfo={"status": "ok", "timestamp": 1615266128777, "user_tz": 300, "elapsed": 27478, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "09789286685912502299"}}
# helper constants
MAX_ROW=7
MAX_COL=5
fill_values = {'horsepower': 10}
drop_col = ["name"]
# + id="QYRfrZ6izTrO" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1615266129783, "user_tz": 300, "elapsed": 28482, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "09789286685912502299"}} outputId="c1f210b5-1a00-4dfd-c10e-b6afbf70078e"
"""Section A"""
# getting data
car_data = pd.read_csv("/content/drive/Shareddrives/Team 8- Neural Network/Assignment-2/car_data.csv")
print("Part:2 - Section A \n")
print(car_data)
# + id="8NvnZa7P2Y8i" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1615266129784, "user_tz": 300, "elapsed": 28474, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "09789286685912502299"}} outputId="656313ca-28e5-4961-a469-82583e4b17c3"
"""Section B"""
# setting display limits
pd.set_option("display.max_columns",MAX_COL)
pd.set_option("display.max_rows",MAX_ROW)
print("Part:2 - Section B \n")
print(car_data)
# + id="mQegbtFmzgLO" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1615266129913, "user_tz": 300, "elapsed": 28597, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "09789286685912502299"}} outputId="0f2f853f-1ab4-4be0-ce69-8ee35e54f437"
"""Section C"""
car_data.fillna(fill_values,inplace=True)
print("Part:2 - Section C \n")
print(car_data)
# + id="a-Qq1bCC0h5U" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1615266129915, "user_tz": 300, "elapsed": 28594, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "09789286685912502299"}} outputId="175857c0-d463-46a1-a9de-234e78bbb06a"
"""Section D"""
car_data.drop(columns=drop_col,inplace=True)
print("Part:2 - Section D \n")
print(car_data.columns)
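# Since the CSV above lives on a shared Drive, here is a self-contained miniature of Sections C and D on an inline frame (the column values are made up for illustration):

```python
import numpy as np
import pandas as pd

# Tiny stand-in for car_data with one missing horsepower value.
df = pd.DataFrame({
    "name": ["car a", "car b"],
    "horsepower": [130.0, np.nan],
})
df.fillna({"horsepower": 10}, inplace=True)  # Section C: fill missing horsepower
df.drop(columns=["name"], inplace=True)      # Section D: drop the name column
```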
# + [markdown] id="J8ECxpYR3VA1"
# # Part 3
#
# + id="HpB37EU33YFA" executionInfo={"status": "ok", "timestamp": 1615266129916, "user_tz": 300, "elapsed": 28592, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "09789286685912502299"}}
import numpy as np
import matplotlib.pyplot as plt
# + id="9Rl2Z_un3lfy" executionInfo={"status": "ok", "timestamp": 1615266129916, "user_tz": 300, "elapsed": 28590, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "09789286685912502299"}}
def x(i):
    if i == 11:
        return 1.0
    return 0.0
def x_out(t):
    x_list = np.array([])
    for i in t:
        x_list = np.append(x_list, x(i))
    return x_list
# + id="8TUKeSVv3xDF" executionInfo={"status": "ok", "timestamp": 1615266129917, "user_tz": 300, "elapsed": 28588, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "09789286685912502299"}}
def get_y(arr, i, w, v):
    if len(arr) == 0:
        return v * x(i - 1)
    return v * x(i - 1) + w * arr[len(arr) - 1]
def y_out(t, w, v):
    y_list = np.array([])
    for i in t:
        y_list = np.append(y_list, get_y(y_list, i, w, v))
    return y_list
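# The recurrence above, y[i] = v*x(i-1) + w*y[i-1], is a first-order linear filter, so (assuming SciPy is available) the Python loop can be replaced with scipy.signal.lfilter:

```python
import numpy as np
from scipy.signal import lfilter

def y_out_filtered(t, w, v):
    """Same output as y_out(t, w, v), computed without a Python loop."""
    x = (t == 11).astype(float)  # the unit-impulse input from x_out
    # y[n] = v*x[n-1] + w*y[n-1]  <=>  numerator b = [0, v], denominator a = [1, -w]
    return lfilter([0.0, v], [1.0, -w], x)
```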
# + id="eMgRm6HK33t8" executionInfo={"status": "ok", "timestamp": 1615266129917, "user_tz": 300, "elapsed": 28587, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "09789286685912502299"}}
t = np.arange(100)
# + id="CmWVAvq730Tq" colab={"base_uri": "https://localhost:8080/", "height": 298} executionInfo={"status": "ok", "timestamp": 1615266130479, "user_tz": 300, "elapsed": 29143, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "09789286685912502299"}} outputId="51f136a7-879f-4859-9429-803f443daa76"
"""
experiment 1
"""
v = 1.0
w = 0.95
# Set up a subplot grid that has height 2 and width 1,
# and set the first such subplot as active.
plt.subplot(2, 1, 1)
# Make the first plot
plt.plot(t,x_out(t))
plt.title("Input")
# Set the second subplot as active, and make the second plot.
plt.subplot(2, 1, 2)
plt.plot(t, y_out(t,w,v))
plt.title("Output")
# Show the figure.
print("Part:3 - Experiment 1")
plt.show()
# + id="MCFKrU5939qN" colab={"base_uri": "https://localhost:8080/", "height": 298} executionInfo={"status": "ok", "timestamp": 1615266131306, "user_tz": 300, "elapsed": 29959, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "09789286685912502299"}} outputId="7c51d236-71f3-4cb5-c16b-c4fd7c6dbbd2"
"""
Experiment 2
"""
v = 1.0
w = 1.05
# Set up a subplot grid that has height 2 and width 1,
# and set the first such subplot as active.
plt.subplot(2, 1, 1)
# Make the first plot
plt.plot(t, x_out(t))
plt.title("Input")
# Set the second subplot as active, and make the second plot.
plt.subplot(2, 1, 2)
plt.plot(t, y_out(t,w,v))
plt.title("Output")
# Show the figure.
print("Part:3 - Experiment 2")
plt.show()
| Assignment-2/HW2_Final.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_pytorch_latest_p36
# language: python
# name: conda_pytorch_latest_p36
# ---
# # Finetuning a custom question answering model
# The [BERT](https://arxiv.org/abs/1810.04805) family of models are a powerful set of natural language understanding models based on the transformer architecture originating from the paper Attention Is All You Need, which you can find [here](https://arxiv.org/abs/1706.03762).
#
# These models work by running unsupervised pre-training on massive sets of text data. This process requires an enormous amount of time and compute. Luckily for us, BERT models are built for transfer learning. BERT models are able to be finetuned to perform many different NLU tasks like question answering, sentiment analysis, document summarization, and more.
#
# For this tutorial, we are going to download the [Stanford Question Answering Dataset](https://rajpurkar.github.io/SQuAD-explorer/) and walk through the steps necessary to augment it with our own questions using [SageMaker Ground Truth](https://aws.amazon.com/sagemaker/groundtruth/), use [Huggingface](https://huggingface.co/) to finetune a BERT variant for question answering, deploy our finetuned model to a Sagemaker endpoint, and then visualize some results.
# !pip install transformers
# !pip install datasets
# +
import collections
import math
import random
import torch
import os, tarfile, json
import time, datetime
from io import StringIO
import numpy as np
import boto3
import sagemaker
from tqdm import tqdm
from IPython.display import Markdown as md
from sagemaker.pytorch import estimator, PyTorchModel, PyTorchPredictor, PyTorch
from sagemaker.utils import name_from_base
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline, TrainingArguments, Trainer, default_data_collator
from sagemaker.huggingface import HuggingFace, HuggingFaceModel
import datasets
from datasets import load_dataset, load_metric, Dataset, Features
# Configuration file with IAM role ARN, bucket name, and pre/post processing lambda ARN,
# generated by notebook lifecycle script.
CONFIG_FILE_PATH = "/home/ec2-user/SageMaker/hf-gt-custom-qa.json"
sagemaker_cl = boto3.client("sagemaker")
sagemaker_session = sagemaker.Session()
s3 = boto3.resource("s3")
region = sagemaker_session.boto_region_name
role = sagemaker.get_execution_role()
prefix = 'hf_squad'
runtime_client = boto3.client('runtime.sagemaker')
with open(CONFIG_FILE_PATH) as f:
    config = json.load(f)
role_arn = config["SageMakerRoleArn"]
bucket = config['LabelingJobInputBucket']
pre_annot_lambda_arn = config["PreLabelTaskLambdaArn"]
post_annot_lambda_arn = config["PostLabelTaskLambdaArn"]
# -
# ## Prerequisites
#
# You will create some of the resources you need to launch a Ground Truth labeling job in this notebook. You must create the following resources before executing this notebook:
#
# * A work team. A work team is a group of workers that complete labeling tasks. If you want to preview the worker UI and execute the labeling task you will need to create a private work team, add yourself as a worker to this team, and provide the work team ARN below. This [GIF](images/create-workteam-loop.gif) demonstrates how to quickly create a private work team on the Amazon SageMaker console. If you do not want to use a private or vendor work team ARN, set `private_work_team` to `False` to use the Amazon Mechanical Turk workforce. To learn more about private, vendor, and Amazon Mechanical Turk workforces, see [Create and Manage Workforces
# ](https://docs.aws.amazon.com/sagemaker/latest/dg/sms-workforce-management.html).
# +
WORKTEAM_ARN = '<<ADD WORK TEAM ARN HERE>>'
print(f'This notebook will use the work team ARN: {WORKTEAM_ARN}')
# Make sure workteam arn is populated if private work team is chosen
assert (WORKTEAM_ARN != '<<ADD WORK TEAM ARN HERE>>')
# -
# ## Download and inspect the data
#
# Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.
#
# To get an idea of what SQuAD contains, let's download it locally and take a look.
#
# SQuAD was created by <NAME>, <NAME>, <NAME>, and <NAME>. You can find the original paper [here](https://arxiv.org/abs/1606.05250) and the dataset [here](https://rajpurkar.github.io/SQuAD-explorer/). SQuAD has been licensed by the authors under the [Creative Commons Attribution-ShareAlike 4.0 International Public License](https://creativecommons.org/licenses/by-sa/4.0/legalcode)
#
# + language="sh"
#
# mkdir data
#
# v2="data/v2.0"
# mkdir $v2
# wget https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v2.0.json -O $v2/train-v2.0.json
# wget https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v2.0.json -O $v2/dev-v2.0.json
# +
# load the v2.0 dev set
with open('data/v2.0/dev-v2.0.json', 'r') as f:
    squad_dev = json.load(f)
# -
# Now that we've loaded some of the data, you can use the following block to look at a random context, question, and answer:
# view a random Q&A pair
ind = random.randint(0,34)
sq = squad_dev['data'][ind]
print('Paragraph title: ',sq['title'], '\n')
print(sq['paragraphs'][0]['context'],'\n')
print('Question:', sq['paragraphs'][0]['qas'][0]['question'])
print('Answer:', sq['paragraphs'][0]['qas'][0]['answers'][0]['text'])
# ## Load Model
#
# Now that we've viewed some of the different question and answer pairs in SQuAD, let's download a model that we can finetune for question answering. Huggingface allows us to easily download a base model that has undergone large scale pre-training and re-initialize it for a different downstream task. In this case we are taking the distilbert-base-uncased model and repurposing it for question answering using the AutoModelForQuestionAnswering class from Huggingface. We also utilize the AutoTokenizer class to retrieve the model's pre-trained tokenizer. Since the models are PyTorch based, we can look at the different modules in the model, the following block will print out the question answering output layer after downloading the model and tokenizer.
# +
# load the base model
model_name = "distilbert-base-uncased-distilled-squad"
# Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
# set model to evaluation mode
evl = model.eval()
print(' \nOutput layer:',list(model.named_modules())[-2])
# -
# view model layers
model
# ## View BERT Input
#
# BERT needs us to transform our text data into a numeric representation known as tokens. There are a variety of tokenizers available; we are going to use the tokenizer that was pretrained alongside our BERT model, loaded above via AutoTokenizer. Let's take a look at the transformed question and context we will supply to BERT for inference.
QA_input = {
'question': sq['paragraphs'][0]['qas'][0]['question'],
'context': sq['paragraphs'][0]['context']
}
inputs = tokenizer(QA_input['question'], QA_input['context'], return_tensors='pt')
input_ids = inputs["input_ids"].tolist()[0]
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
inputs
# ## Model Inference
#
# Let's try getting some predictions from our question answering model.
# +
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
answer_start_scores = outputs.start_logits
answer_end_scores = outputs.end_logits
answer_start = torch.argmax(
answer_start_scores
) # Get the most likely beginning of answer with the argmax of the score
answer_end = torch.argmax(answer_end_scores) + 1 # Get the most likely end of answer with the argmax of the score
# convert answers back to english
answer = tokenizer.convert_tokens_to_string(tokenizer.convert_ids_to_tokens(input_ids[answer_start:answer_end]))
print(f"Question: {QA_input['question']}")
print(f"Answer: {answer}")
# -
# ## Augment SQuAD
#
# It looks like we didn't get any output from the model for our previous prediction. This is because the weights for the question answering head have not been tuned. In order to tune them we need to train the model on our question answering dataset, but what if we want the model to perform particularly well on our domain specific questions? While it might be able to generalize by simply training on SQuAD, we can augment our training or validation set with our own questions.
#
# We can use the following code to augment SQuAD by launching a labeling job using a custom question answering template through SageMaker Ground Truth.
# +
print(f"Uploading annotation UI to {bucket}")
annotation_ui_s3_uri = f"s3://{bucket}/qa_annotator.liquid.html"
# !aws s3 cp qa_annotator.liquid.html {annotation_ui_s3_uri}
squad_s3_uri = f"s3://{bucket}/data/sample_squad.json"
# !aws s3 cp sample_squad.json {squad_s3_uri}
# create our manifest
sample_manifest_lines = [{"source": squad_s3_uri}]
sample_manifest = "\n".join([json.dumps(data_object) for data_object in sample_manifest_lines])
# send manifest to s3
manifest_name = "input.manifest"
s3.Object(bucket, manifest_name).put(Body=sample_manifest)
input_manifest_uri = f"s3://{bucket}/{manifest_name}"
# launch our labeling job
time_seconds = int(time.time())
labeling_job_name = f"squad-{time_seconds}"
labeling_job_request = {
"LabelingJobName": labeling_job_name,
"HumanTaskConfig": {
"AnnotationConsolidationConfig": {
"AnnotationConsolidationLambdaArn": post_annot_lambda_arn,
},
"MaxConcurrentTaskCount": 1000,
"NumberOfHumanWorkersPerDataObject": 1,
"PreHumanTaskLambdaArn": pre_annot_lambda_arn,
"TaskAvailabilityLifetimeInSeconds": 864000,
"TaskDescription": labeling_job_name,
"TaskTimeLimitInSeconds": 28800,
"TaskTitle": labeling_job_name,
"UiConfig": {
"UiTemplateS3Uri": annotation_ui_s3_uri,
},
"WorkteamArn": WORKTEAM_ARN,
},
"InputConfig": {
"DataAttributes": {
"ContentClassifiers": [
"FreeOfPersonallyIdentifiableInformation",
"FreeOfAdultContent",
]
},
"DataSource": {"S3DataSource": {"ManifestS3Uri": input_manifest_uri}},
},
"LabelAttributeName": labeling_job_name,
"OutputConfig": {
"S3OutputPath": f"s3://{bucket}/smgt-output",
},
"StoppingConditions": {"MaxPercentageOfInputDatasetLabeled": 100},
"RoleArn": role_arn,
}
response = sagemaker_cl.create_labeling_job(
**labeling_job_request
)
response
# -
# ## Access Labeling Portal
#
# To access the labeling portal run the following cell and click on the link:
domain = sagemaker_cl.list_workteams()['Workteams'][0]['SubDomain']
md(f"[labeling worker portal](https://{domain})")
# ## Check Labeling Job Status
#
# Once our job has been completed, it may take a few minutes to process and for Ground Truth to put the output in S3. This block will check for the job completion status.
labeling_job_description = sagemaker_cl.describe_labeling_job(LabelingJobName=labeling_job_name)
status = labeling_job_description["LabelingJobStatus"]
print(f"Job Status: {status}")
assert status == "Completed"
# ## Load Labeled Data
#
# Once our job is completed that means that our output data is now in S3, we can download it and load into our notebook.
# +
output_manifest = labeling_job_description["LabelingJobOutput"]["OutputDatasetS3Uri"]
output_manifest_lines = s3.Object(*output_manifest[len("s3://"):].split("/", 1)).get()["Body"].read().decode("utf-8").splitlines()
output_manifest_lines = [json.loads(line) for line in output_manifest_lines]
custom_squad_labels_s3_uri = output_manifest_lines[0][labeling_job_name]["s3Uri"]
sample_squad_labels = json.loads(sagemaker.s3.S3Downloader.read_file(custom_squad_labels_s3_uri))
# -
sample_squad_labels['data'][0]
# ## Load SQuAD Train Set
#
# Let's load SQuAD and add our own labeled examples. SQuAD formats the data as a list of topics. Each topic has its own set of context statements, each of which has a variety of question and answer pairs. Our sample labels are formatted as their own topic, so we can simply append them to the list of topics already in SQuAD.
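# As a reference, the nesting just described looks like this (a hand-written minimal example, not real SQuAD data):

```python
# Minimal illustration of the SQuAD v2.0 JSON layout.
minimal_squad = {
    "version": "v2.0",
    "data": [  # list of topics
        {
            "title": "Example_Topic",
            "paragraphs": [  # each topic has its own context statements
                {
                    "context": "Paris is the capital of France.",
                    "qas": [  # each context has question/answer pairs
                        {
                            "id": "q1",
                            "question": "What is the capital of France?",
                            "is_impossible": False,
                            "answers": [{"text": "Paris", "answer_start": 0}],
                        }
                    ],
                }
            ],
        }
    ],
}
```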
# +
with open('data/v2.0/train-v2.0.json', 'r') as f:
    actual_squad = json.load(f)
print('Original length', len(actual_squad['data']))
# add our dataset to squad
actual_squad['data'].extend(sample_squad_labels['data'])
actual_squad['data'][-1]
print('New length', len(actual_squad['data']))
# -
# Now we can save our augmented version of SQuAD
with open('data/augmented_squad.json', 'w') as f:
    json.dump(actual_squad, f)
# ## Create a HF Dataset Object
#
# Now that we've combined our data, we can transform it into a Huggingface dataset object. There are several ways to do this. We can use the [load_dataset](https://huggingface.co/docs/datasets/loading_datasets.html) function, supplying a CSV, JSON, or text file that is then loaded as a dataset object, optionally with a processing script to convert the file into the desired format. In this case, for demonstration purposes, we instead use the Dataset.from_dict() method, which lets us supply an in-memory dictionary to create a dataset object.
#
# Run the following block to create our dataset dictionaries.
# +
def create_squad_dict(actual_squad):
    titles = []
    contexts = []
    ids = []
    questions = []
    answers = []
    for example in tqdm(actual_squad["data"]):
        title = example.get("title", "").strip()
        for paragraph in example["paragraphs"]:
            context = paragraph["context"].strip()
            for qa in paragraph["qas"]:
                question = qa["question"].strip()
                id_ = qa["id"]
                answer_starts = [answer["answer_start"] for answer in qa["answers"]]
                answer_list = [answer["text"].strip() for answer in qa["answers"]]
                titles.append(title)
                contexts.append(context)
                questions.append(question)
                ids.append(id_)
                answers.append({
                    "answer_start": answer_starts,
                    "text": answer_list,
                })
    dataset_dict = {
        "answers": answers,
        "context": contexts,
        "id": ids,
        "question": questions,
        "title": titles,
    }
    return dataset_dict

dataset_dict = create_squad_dict(actual_squad)
test_dataset_dict = create_squad_dict(squad_dev)
# -
# We also need to define our dataset features. In our case the features are:
#
# * id - the ID of the text
# * title - the associated title for the topic
# * context - the context statement the model must search to find an answer
# * question - the question the model is being asked
# * answers - the accepted answer text and its location in the context statement
#
# HF datasets let us define this schema using the Features argument.
squad_dataset = Dataset.from_dict(dataset_dict,
features=datasets.Features(
{
"id": datasets.Value("string"),
"title": datasets.Value("string"),
"context": datasets.Value("string"),
"question": datasets.Value("string"),
"answers": datasets.features.Sequence(
{
"text": datasets.Value("string"),
"answer_start": datasets.Value("int32"),
}
),
# These are the features of your dataset like images, labels ...
}
))
squad_test = Dataset.from_dict(test_dataset_dict,
features=datasets.Features(
{
"id": datasets.Value("string"),
"title": datasets.Value("string"),
"context": datasets.Value("string"),
"question": datasets.Value("string"),
"answers": datasets.features.Sequence(
{
"text": datasets.Value("string"),
"answer_start": datasets.Value("int32"),
}
),
}
))
# Once we have our Dataset object created, we then have to tokenize the text. Since models can’t accept raw text as an input, we need to convert our text into a numeric input that it can understand.
def prepare_train_features(examples, tokenizer, max_length, doc_stride):
    # Tokenize our examples with truncation and padding, but keep the overflows using a stride. This results
    # in one example possibly giving several features when a context is long, each of those features having a
    # context that overlaps a bit with the context of the previous feature.
    pad_on_right = tokenizer.padding_side == "right"
    tokenized_examples = tokenizer(
        examples["question" if pad_on_right else "context"],
        examples["context" if pad_on_right else "question"],
        truncation="only_second" if pad_on_right else "only_first",
        max_length=max_length,
        stride=doc_stride,
        return_overflowing_tokens=True,
        return_offsets_mapping=True,
        padding="max_length",
    )
    # Since one example might give us several features if it has a long context, we need a map from a feature to
    # its corresponding example. This key gives us just that.
    sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping")
    # The offset mappings will give us a map from token to character position in the original context. This will
    # help us compute the start_positions and end_positions.
    offset_mapping = tokenized_examples.pop("offset_mapping")
    # Let's label those examples!
    tokenized_examples["start_positions"] = []
    tokenized_examples["end_positions"] = []
    for i, offsets in enumerate(offset_mapping):
        # We will label impossible answers with the index of the CLS token.
        input_ids = tokenized_examples["input_ids"][i]
        cls_index = input_ids.index(tokenizer.cls_token_id)
        # Grab the sequence corresponding to that example (to know what is the context and what is the question).
        sequence_ids = tokenized_examples.sequence_ids(i)
        # One example can give several spans, this is the index of the example containing this span of text.
        sample_index = sample_mapping[i]
        answers = examples["answers"][sample_index]
        # If no answers are given, set the cls_index as answer.
        if len(answers["answer_start"]) == 0:
            tokenized_examples["start_positions"].append(cls_index)
            tokenized_examples["end_positions"].append(cls_index)
        else:
            # Start/end character index of the answer in the text.
            start_char = answers["answer_start"][0]
            end_char = start_char + len(answers["text"][0])
            # Start token index of the current span in the text.
            token_start_index = 0
            while sequence_ids[token_start_index] != (1 if pad_on_right else 0):
                token_start_index += 1
            # End token index of the current span in the text.
            token_end_index = len(input_ids) - 1
            while sequence_ids[token_end_index] != (1 if pad_on_right else 0):
                token_end_index -= 1
            # Detect if the answer is out of the span (in which case this feature is labeled with the CLS index).
            if not (offsets[token_start_index][0] <= start_char and offsets[token_end_index][1] >= end_char):
                tokenized_examples["start_positions"].append(cls_index)
                tokenized_examples["end_positions"].append(cls_index)
            else:
                # Otherwise move the token_start_index and token_end_index to the two ends of the answer.
                # Note: we could go after the last offset if the answer is the last word (edge case).
                while token_start_index < len(offsets) and offsets[token_start_index][0] <= start_char:
                    token_start_index += 1
                tokenized_examples["start_positions"].append(token_start_index - 1)
                while offsets[token_end_index][1] >= end_char:
                    token_end_index -= 1
                tokenized_examples["end_positions"].append(token_end_index + 1)
    return tokenized_examples
# ## Local Training
#
# Let’s start by understanding how training a model in Huggingface works locally, then go over the adjustments we make to run it in SageMaker.
# Huggingface makes training easy through the use of their trainer class. The trainer class allows us to pass in our model, our train and validation datasets, our hyperparameters, and even our tokenizer. Since we already have our model as well as our training and validation sets, we only need to define our hyperparameters. This can be done through the TrainingArguments class, which lets us specify things like the learning rate, batch size, and number of epochs, as well as more in-depth parameters like weight decay or a learning rate scheduling strategy. Once we've defined our TrainingArguments, we can pass in our model, our training set, our validation set, and the arguments to instantiate our trainer class. Once this is all ready, we can simply call trainer.train() to start training our model.
#
# To run training locally, you can uncomment the block below, but it will take a very long time if you are not running on a GPU instance.
# +
# for running local training, this will likely take a very long time if not running on a GPU instance.
# doc_stride=128
# max_length=512
# tokenized_train = squad_dataset.map(prepare_train_features, batched=True, remove_columns=squad_dataset.column_names, fn_kwargs = {'tokenizer':tokenizer, 'max_length':max_length, 'doc_stride':doc_stride})
# tokenized_test = squad_test.map(prepare_train_features, batched=True, remove_columns=squad_test.column_names, fn_kwargs = {'tokenizer':tokenizer, 'max_length':max_length, 'doc_stride':doc_stride})
# hf_args = TrainingArguments(
# 'test_local',
# evaluation_strategy = "epoch",
# learning_rate=5e-5,
# per_device_train_batch_size=16,
# per_device_eval_batch_size=16,
# num_train_epochs=1,
# weight_decay=0.0001,
# )
# trainer = Trainer(
# model,
# hf_args,
# train_dataset=tokenized_train,
# eval_dataset=tokenized_test,
# data_collator=default_data_collator,
# tokenizer=tokenizer,
# )
# trainer.train()
# -
# ## Send Data to S3
#
# To run the same training job in SageMaker is straightforward. The first step is putting our data in S3 so that our model can access it. SageMaker training allows you to specify a data source, you can use sources like S3, EFS, or FSx for Lustre for high performance data ingestion. In our case, our augmented SQuAD dataset isn’t particularly large, so S3 is a good choice. We upload our training data to a folder in S3 and when SageMaker spins up our training instance, it will download the data from our specified S3 location.
# !aws s3 cp --recursive data s3://{bucket}/{prefix}
# +
s3train = f's3://{bucket}/{prefix}/'
# create a pointer to our data in S3 (in newer SageMaker SDK versions this is sagemaker.inputs.TrainingInput)
train = sagemaker.session.s3_input(s3train, distribution='FullyReplicated',
                        content_type=None, s3_data_type='S3Prefix')
data_channels = {'train': train}
# -
# ## Instantiate the model
#
# Now we are going to instantiate our model, here we are going to specify our hyperparameters for training as well as the number of GPUs we are going to use. For a longer running training job, ml.p4d.24xlarge instances contain 8 A100 GPUs, making them ideal for heavy duty deep learning training. We don't need quite as much horsepower since we are only training our model for a couple epochs, so we can instead use a smaller GPU instance. For our specific training job we are going to use a p3.8xlarge instance consisting of 4 V100 GPUs.
#
# Once we have set our hyperparameters, we will instantiate a Sagemaker Estimator that we will use to run our training job. We specify the Docker image we just pushed to ECR as well as an entrypoint giving instructions for what operations our container should perform when it starts up. Our Docker container has two commands, train and serve. When we instantiate a training job, behind the scenes Sagemaker is running our Docker container and telling it to run the train command.
# +
# account=!aws sts get-caller-identity --query Account --output text
# Get the region defined in the current configuration (default to us-west-2 if none defined)
# region=!aws configure get region
# metric definitions used to extract the results from the training logs
# (raw strings avoid invalid-escape warnings for \D)
metric_definitions=[
    {"Name": "train_runtime", "Regex": r"train_runtime.*=\D*(.*?)$"},
    {'Name': 'train_samples_per_second', 'Regex': r"train_samples_per_second.*=\D*(.*?)$"},
    {'Name': 'epoch', 'Regex': r"epoch.*=\D*(.*?)$"},
    {'Name': 'f1', 'Regex': r"f1.*=\D*(.*?)$"},
    {'Name': 'exact_match', 'Regex': r"exact_match.*=\D*(.*?)$"}]
# +
# hyperparameters, which are passed into the training job
hyperparameters={
'model_name': model_name,
'dataset_name':'squad',
'do_train': True,
'do_eval': True,
'fp16': True,
'train_batch_size': 32,
'eval_batch_size': 32,
'weight_decay':0.01,
'warmup_steps':500,
'learning_rate':5e-5,
'epochs': 2,
'max_length': 384,
'max_steps': 100,
'pad_to_max_length': True,
'doc_stride': 128,
'output_dir': '/opt/ml/model'
}
# estimator
huggingface_estimator = HuggingFace(entry_point='run_qa.py',
source_dir='container_training',
metric_definitions=metric_definitions,
instance_type='ml.p3.8xlarge',
instance_count=1,
volume_size=100,
role=role,
transformers_version='4.6.1',
pytorch_version='1.7.1',
py_version='py36',
hyperparameters = hyperparameters)
# -
# ## Fine-tune the Model
#
# If you use an instance with 4 GPUs and a batch size of 32 this process will take ~30 minutes to complete for this particular finetuning task with 2 epochs. Each additional epoch will add another 8 or so minutes. It's recommended to at minimum use a training instance with 4 GPUs, although you will likely get better performance with one of the ml.p3.16xlarge, ml.p3dn.24xlarge, or ml.p4d.24xlarge instances.
#
# We'll be running this training in a SageMaker Training job, which is triggered by the below cell.
huggingface_estimator.fit(data_channels, wait=False, job_name=f'hf-distilbert-squad-{int(time.time())}')
# ## Download Model Locally
#
# Once our training job is complete, we can download our trained model. SageMaker stores our model output in S3 as a tarball, so we will need to retrieve it from S3 and then extract the contents locally.
# +
# Once training has completed, copy the model weights to our instance.
# ! aws s3 cp {huggingface_estimator.output_path}{huggingface_estimator.jobs[0].job_name}/output/model.tar.gz model/model.tar.gz
import tarfile
with tarfile.open('model/model.tar.gz', 'r:gz') as f:
f.extractall('model')
# -
# ## Load Model with Trained Weights
#
# Now that we've extracted the weights, we can reinitialize our model with them by pointing to the folder containing our weights.
model = AutoModelForQuestionAnswering.from_pretrained('model')
tokenizer = AutoTokenizer.from_pretrained('model') # model_name
_ = model.eval()  # switch to evaluation mode (disables dropout); the assignment suppresses notebook output
# Time to test out our model! Let's try a few different questions from our dev set and see how our model performs:
# +
i = np.random.randint(0,25)
QA_input = {
'question': sq['paragraphs'][i]['qas'][0]['question'],
'context': sq['paragraphs'][i]['context']
}
inputs = tokenizer(QA_input['question'], QA_input['context'], return_tensors='pt')
input_ids = inputs["input_ids"].tolist()[0]
outputs = model(**inputs)  # inference only: no labels are needed to obtain the start/end logits
answer_start_scores = outputs.start_logits
answer_end_scores = outputs.end_logits
answer_start = torch.argmax(
answer_start_scores
) # Get the most likely beginning of answer with the argmax of the score
answer_end = torch.argmax(answer_end_scores) + 1 # Get the most likely end of answer with the argmax of the score
answer = tokenizer.convert_tokens_to_string(tokenizer.convert_ids_to_tokens(input_ids[answer_start:answer_end]))
print(f"Context: {QA_input['context']} \n")
print(f"Question: {QA_input['question']} \n")
print(f"Answer: {answer}")
# -
# ## Deploy Trained Model
#
# Now that we've finetuned our model, what now? Let's deploy our trained model to an endpoint and ask it some questions!
# +
from sagemaker.pytorch import PyTorch, PyTorchModel
from sagemaker.predictor import RealTimePredictor
# this class defines the content type used for our endpoint, in this case plain text
class StringPredictor(RealTimePredictor):
def __init__(self, endpoint_name, sagemaker_session):
super(StringPredictor, self).__init__(endpoint_name, sagemaker_session, content_type='text/plain')
# +
endpoint_name = 'hf-distilbert-QA-string-endpoint'
# if deploying from a model you trained in the same session
# bert_end = torch_model.deploy(instance_type='ml.g4dn.4xlarge', initial_instance_count=1,
# endpoint_name=endpoint_name)
model_data = f"{huggingface_estimator.output_path}{huggingface_estimator.jobs[0].job_name}/output/model.tar.gz"
# We are going to use a SageMaker serving container
torch_model = PyTorchModel(model_data=model_data,
source_dir = 'container_serving',
role=role,
entry_point='transform_script.py',
framework_version='1.8.1',
py_version='py3',
predictor_cls = StringPredictor)
bert_end = torch_model.deploy(instance_type='ml.m5.2xlarge', initial_instance_count=1, #'ml.g4dn.xlarge'
endpoint_name=endpoint_name)
# -
# Here are a few questions we can ask our model about SageMaker:
# +
amzn_context = """
Amazon SageMaker is a fully managed service that provides every developer and data scientist with the ability to build, train, and deploy machine learning (ML) models quickly. SageMaker removes the heavy lifting from each step of the machine learning process to make it easier to develop high quality models.
Traditional ML development is a complex, expensive, iterative process made even harder because there are no integrated
tools for the entire machine learning workflow. You need to stitch together tools and workflows, which is time-consuming
and error-prone. SageMaker solves this challenge by providing all of the components used for machine learning in a single
toolset so models get to production faster with much less effort and at lower cost.
"""
amzn_questions = [
"How does SageMaker solve the challenge of traditional ML Development?",
"What is Traditional ML development?",
]
# -
# We can create a predictor object by using the RealTimePredictor class and specifying our endpoint name
# test hf classifier
predictor = RealTimePredictor(endpoint_name=endpoint_name)
# ## Plain Text Inference Results
#
# +
# %%time
print('Context:',amzn_context)
ind = 0
predictions = []
for amzn_question in amzn_questions:
print('-------------------------------------------------------------------------------------')
print('Question:',amzn_question)
test_text = "|".join((amzn_question, amzn_context))
pred = predictor.predict(test_text).decode().strip('"')
pred = "".join(eval(pred))  # parse the endpoint's stringified token list back into a string
print('-------------------------------------------------------------------------------------')
print('Prediction:',pred)
print('-------------------------------------------------------------------------------------')
predictions.append(pred)
# -
# ## Visualize model results
#
# Now that we've deployed our model and run some inference through it, we can actually visualize the results for multiple questions at once
# in the original annotation UI by manually creating some SQuAD format and performing the liquid injection ourselves.
# +
# This is the SquAD input format required by the annotation UI.
squad_txt = json.dumps(
{
"version": "v2.0",
"data": [
{
"title": "Ground Truth Marketing",
"paragraphs": [
{
"context": amzn_context.replace("\n", ""),
"qas": [
{
"question": question,
"id": i,
"answers": [
{
"answer_id": i,
"text": prediction,
"answer_start": amzn_context.replace("\n", "").find(prediction),
}
],
}
for i, (question, prediction) in enumerate(
zip(amzn_questions, predictions)
)
],
},
],
}
],
}
)
with open("qa_annotator.liquid.html") as f:
# Manually inject our generated squad into the liquid tag.
tmp_annotator_txt = f.read().replace("{{ task.input.source }}", squad_txt)
# We'll remove crowd html since we aren't loading within a Ground Truth context.
tmp_annotator_txt = tmp_annotator_txt.replace('<script src="https://assets.crowd.aws/crowd-html-elements.js"></script>', "")
tmp_annotator_path = "tmp_annotator.html"
with open(tmp_annotator_path, "w") as f:
f.write(tmp_annotator_txt)
from IPython.display import IFrame
IFrame(tmp_annotator_path, 1280, 800)
# -
# ## Conclusion
#
# In this notebook, you learned how to create your own question answering dataset using SageMaker Ground Truth and combine it with SQuAD to train your own question answering model using SageMaker training.
#
# Try augmenting SQuAD or even creating an entire dataset with your own questions using SageMaker Ground Truth (https://aws.amazon.com/sagemaker/groundtruth/). Also try out finetuning different BERT variants using Huggingface (https://huggingface.co/) for question answering. Happy building!
#
# ## Cleanup
# !rm bert_base.pt
# !rm s3_bucket.txt
bert_end.delete_endpoint()
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Notebook to explore distance metric with simplified DEM
import os
import h5py
import numpy as np
from scipy.stats import wasserstein_distance as EarthMover
# -- galpopfm
from galpopfm import dustfm as dustFM
from galpopfm import dust_infer as dustInfer
from galpopfm import measure_obs as measureObs
# -- plotting --
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rcParams['text.usetex'] = True
mpl.rcParams['font.family'] = 'serif'
mpl.rcParams['axes.linewidth'] = 1.5
mpl.rcParams['axes.xmargin'] = 1
mpl.rcParams['xtick.labelsize'] = 'x-large'
mpl.rcParams['xtick.major.size'] = 5
mpl.rcParams['xtick.major.width'] = 1.5
mpl.rcParams['ytick.labelsize'] = 'x-large'
mpl.rcParams['ytick.major.size'] = 5
mpl.rcParams['ytick.major.width'] = 1.5
mpl.rcParams['legend.frameon'] = False
dat_dir = os.environ['GALPOPFM_DIR']
# Read in SDSS observables
f_obs = os.path.join(dat_dir, 'obs', 'tinker_SDSS_centrals_M9.7.Mr_complete.Mr_GR_FUVNUV.npy')
r_edges, gr_edges, fn_edges, _, _ = np.load(f_obs, allow_pickle=True)
x_obs_2d = dustInfer.sumstat_obs(name='sdss', statistic='2d')
x_obs_1d = dustInfer.sumstat_obs(name='sdss', statistic='1d')
dr = r_edges[1] - r_edges[0]
dgr = gr_edges[1] - gr_edges[0]
dfn = fn_edges[1] - fn_edges[0]
nbins = [len(r_edges)-1, len(gr_edges)-1, len(fn_edges)-1]
ranges = [(r_edges[0], r_edges[-1]), (gr_edges[0], gr_edges[-1]), (fn_edges[0], fn_edges[-1])]
# +
fig = plt.figure(figsize=(12,5))
sub = fig.add_subplot(121)
h = sub.pcolormesh(r_edges, gr_edges, x_obs_2d[1].T, cmap='gist_gray_r')
plt.colorbar(h, ax=sub)
sub = fig.add_subplot(122)
h = sub.pcolormesh(r_edges, fn_edges, x_obs_2d[2].T, cmap='gist_gray_r')
plt.colorbar(h, ax=sub)
# +
fig = plt.figure(figsize=(12,5))
sub = fig.add_subplot(121)
sub.plot(0.5 * (gr_edges[1:] + gr_edges[:-1]), x_obs_1d[1])
sub.set_xlim(ranges[1])
sub.set_yscale('log')
sub = fig.add_subplot(122)
sub.plot(0.5 * (fn_edges[1:] + fn_edges[:-1]), x_obs_1d[2])
sub.set_xlim(ranges[2])
sub.set_yscale('log')
# -
# Read in simulation data
# +
sim_sed = dustInfer._read_sed('simba')
# pass through the minimal amount of memory
wlim = (sim_sed['wave'] > 1e3) & (sim_sed['wave'] < 8e3)
# only keep centrals and impose mass limit as well.
# the lower limit log M* > 9.4 is padded by >0.25 dex to conservatively account
# for log M* and R magnitude scatter
downsample = np.zeros(len(sim_sed['logmstar'])).astype(bool)
downsample[::10] = True
f_downsample = 0.1
cens = sim_sed['censat'].astype(bool) & (sim_sed['logmstar'] > 9.4) & downsample
# global variable that can be accessed by multiprocess (~2GB)
sed = {}
sed['sim'] = 'simba'
sed['logmstar'] = sim_sed['logmstar'][cens].copy()
sed['logsfr.100'] = sim_sed['logsfr.100'][cens].copy()
sed['wave'] = sim_sed['wave'][wlim].copy()
sed['sed_noneb'] = sim_sed['sed_noneb'][cens,:][:,wlim].copy()
sed['sed_onlyneb'] = sim_sed['sed_onlyneb'][cens,:][:,wlim].copy()
# -
def model_observable_1d(theta, dem='slab_noll_simple'):
return dustInfer.sumstat_model(theta, sed=sed, dem=dem,
f_downsample=f_downsample,
statistic='1d')
def model_observable_2d(theta, dem='slab_noll_simple'):
return dustInfer.sumstat_model(theta, sed=sed, dem=dem,
f_downsample=f_downsample,
statistic='2d')
thetas = [np.array([3., 0.5]), np.array([3., -2]), np.array([5., -1.])]#, np.array([0., 0.]), np.array([5., 5.])]
for _theta in thetas:
x_model = model_observable_1d(_theta, dem='slab_noll_simple')
print('nbar_obs', x_obs_1d[0])
print('nbar_mod', x_model[0])
print('L2')
dustInfer.distance_metric(x_obs_1d, x_model, method='L2')
print('L1')
dustInfer.distance_metric(x_obs_1d, x_model, method='L1')
fig = plt.figure(figsize=(10,5))
sub = fig.add_subplot(121)
sub.plot(0.5 * (gr_edges[1:] + gr_edges[:-1]), x_obs_1d[1], c='k')
sub.plot(0.5 * (gr_edges[1:] + gr_edges[:-1]), x_model[1], c='C1')
sub.set_xlim(-1, 5)
#sub.set_yscale('log')
sub = fig.add_subplot(122)
sub.plot(0.5 * (fn_edges[1:] + fn_edges[:-1]), x_obs_1d[2], c='k')
sub.plot(0.5 * (fn_edges[1:] + fn_edges[:-1]), x_model[2], c='C1')
sub.set_xlim(-1, 10)
#sub.set_yscale('log')
thetas = [np.array([3., 0.5]), np.array([0., 0.]), np.array([5., 5.])]
for _theta in thetas:
x_model = model_observable_1d(_theta, dem='slab_noll_simple')
print('nbar_obs', x_obs_1d[0])
print('nbar_mod', x_model[0])
print('L2')
dustInfer.distance_metric(x_obs_1d, x_model, method='L2')
print('L1')
dustInfer.distance_metric(x_obs_1d, x_model, method='L1')
fig = plt.figure(figsize=(10,5))
sub = fig.add_subplot(121)
sub.plot(0.5 * (gr_edges[1:] + gr_edges[:-1]), x_obs_1d[1], c='k')
sub.plot(0.5 * (gr_edges[1:] + gr_edges[:-1]), x_model[1], c='C1')
sub.set_xlim(-5, 5)
#sub.set_yscale('log')
sub = fig.add_subplot(122)
sub.plot(0.5 * (fn_edges[1:] + fn_edges[:-1]), x_obs_1d[2], c='k')
sub.plot(0.5 * (fn_edges[1:] + fn_edges[:-1]), x_model[2], c='C1')
sub.set_xlim(-5, 10)
#sub.set_yscale('log')
# +
x_model = model_observable_2d(np.array([3., 0.5]), dem='slab_noll_simple')
print('nbar_obs', x_obs_2d[0])
print('nbar_mod', x_model[0])
print('L2')
dustInfer.distance_metric(x_obs_1d, x_model, method='L2')
print('L1')
dustInfer.distance_metric(x_obs_1d, x_model, method='L1')
fig = plt.figure(figsize=(10,10))
sub = fig.add_subplot(221)
sub.pcolormesh(r_edges, gr_edges, x_obs_2d[1].T, vmax=1e-2, cmap='Greys')
sub.set_xlim(20., 23)
sub.set_xticks([20., 21., 22., 23])
sub.set_xticklabels([])
sub.set_ylabel(r'$G-R$', fontsize=20)
sub.set_ylim(-1, 4)
sub = fig.add_subplot(222)
sub.pcolormesh(r_edges, gr_edges, x_model[1].T, vmax=1e-2, cmap='Oranges')
sub.set_xlim(20., 23)
sub.set_xticks([20., 21., 22., 23])
sub.set_xticklabels([])
sub.set_ylim(-1, 4)
sub.set_yticklabels([])
sub = fig.add_subplot(223)
h = sub.pcolormesh(r_edges, fn_edges, x_obs_2d[2].T, vmax=1e-2, cmap='Greys')
sub.set_xlabel(r'$M_r$', fontsize=20)
sub.set_xlim(20., 23)
sub.set_xticks([20., 21., 22., 23])
sub.set_xticklabels([-20, -21, -22, -23])
sub.set_ylabel(r'$FUV - NUV$', fontsize=20)
sub.set_ylim(-1, 10)
sub = fig.add_subplot(224)
sub.pcolormesh(r_edges, fn_edges, x_model[2].T, vmax=1e-2, cmap='Oranges')
sub.set_xlabel(r'$M_r$', fontsize=20)
sub.set_xlim(20., 23)
sub.set_xticks([20., 21., 22., 23])
sub.set_xticklabels([-20, -21, -22, -23])
sub.set_ylim(-1, 10)
sub.set_yticklabels([])
fig.subplots_adjust(wspace=0.1, hspace=0.1, right=0.85)
cbar_ax = fig.add_axes([0.875, 0.15, 0.02, 0.7])
fig.colorbar(h, cax=cbar_ax)
# +
x_model = model_observable_2d(np.array([5., -1.]), dem='slab_noll_simple')
print('nbar_obs', x_obs_2d[0])
print('nbar_mod', x_model[0])
print('L2')
dustInfer.distance_metric(x_obs_1d, x_model, method='L2')
print('L1')
dustInfer.distance_metric(x_obs_1d, x_model, method='L1')
fig = plt.figure(figsize=(10,10))
sub = fig.add_subplot(221)
sub.pcolormesh(r_edges, gr_edges, x_obs_2d[1].T, vmax=1e-2, cmap='Greys')
sub.set_xlim(20., 23)
sub.set_xticks([20., 21., 22., 23])
sub.set_xticklabels([])
sub.set_ylabel(r'$G-R$', fontsize=20)
sub.set_ylim(-1, 4)
sub = fig.add_subplot(222)
sub.pcolormesh(r_edges, gr_edges, x_model[1].T, vmax=1e-2, cmap='Oranges')
sub.set_xlim(20., 23)
sub.set_xticks([20., 21., 22., 23])
sub.set_xticklabels([])
sub.set_ylim(-1, 4)
sub.set_yticklabels([])
sub = fig.add_subplot(223)
h = sub.pcolormesh(r_edges, fn_edges, x_obs_2d[2].T, vmax=1e-2, cmap='Greys')
sub.set_xlabel(r'$M_r$', fontsize=20)
sub.set_xlim(20., 23)
sub.set_xticks([20., 21., 22., 23])
sub.set_xticklabels([-20, -21, -22, -23])
sub.set_ylabel(r'$FUV - NUV$', fontsize=20)
sub.set_ylim(-1, 10)
sub = fig.add_subplot(224)
sub.pcolormesh(r_edges, fn_edges, x_model[2].T, vmax=1e-2, cmap='Oranges')
sub.set_xlabel(r'$M_r$', fontsize=20)
sub.set_xlim(20., 23)
sub.set_xticks([20., 21., 22., 23])
sub.set_xticklabels([-20, -21, -22, -23])
sub.set_ylim(-1, 10)
sub.set_yticklabels([])
fig.subplots_adjust(wspace=0.1, hspace=0.1, right=0.85)
cbar_ax = fig.add_axes([0.875, 0.15, 0.02, 0.7])
fig.colorbar(h, cax=cbar_ax)
# -
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Question 93 - Rental Car Locations
#
# Suppose you're working for a car rental company, looking to model potential location distribution of their cars at major airports. The company operates in LA, SF, and San Jose. Customers regularly pickup a car in one of these 3 cities and drop it off in another. The company is looking to compute how likely it is that a given car will end up in a given city. You can model this as a Markov chain (where each time step corresponds to a new customer taking the car). The transition probabilities of the company's car allocation by city is as follows:
#
# ```
# SF | LA | San Jose
#
# 0.6 0.1 0.3 | SF
#
# 0.2 0.8 0.3 | LA
#
# 0.2 0.1 0.4 | San Jose
# ```
#
# As shown, the probability a car stays in SF is 0.6, the probability it moves from SF to LA is 0.2, SF to San Jose is 0.2, etc.
#
# Using the information above, determine the probability a car will start in SF but move to LA right after.
# +
# the answer is given in the question: 0.2
import numpy as np
transitions = np.array([
[.6,.1,.3],
[.2,.8,.3],
[.2,.1,.4]
])
start = np.array([1,0,0])
p = transitions.dot(start)[1]
print(f'probability of starting in SF and going to LA right after is {p}')
# -
# **Alternative question: what is the long-run state?**
#
# We have 4 equations with 3 unknowns.
# ```
# a = .6a + .1b + .3c
# b = .2a + .8b + .3c
# c = .2a + .1b + .4c
# a + b + c = 1
# ```
#
# Solving this:
# 1. equation 4: `a = 1 - b - c`
#
# 2. replace `a` into equation 1:
#
# ```
# 1 - b - c = .6 - .6b - .6c + .1b + .3c,
# .4 = .5b + .7c,
# b = 4/5 - 7/5 c
#
# a = 1 - 4/5 + 7/5 c - c = 1/5 + 2/5 c
# ```
#
# 3. obtain `c` by replacing `a` and `b` into equation 2:
#
# ```
# 4/5 - 7/5 c = 1/5*1/5 + 1/5*2/5c + 4/5*4/5 - 4/5*7/5c + .3c
# 4/5 - 1/25 - 16/25 = (7/5 + 2/25 - 28/25 + 3/10)c
# 3/25 = (90/250 + 75/250)c = 33/50 c
# c = 3*2/33 = 2/11
#
# b = 4/5 - 7/5*2/11 = 44/55 - 14/55 = 30/55 = 6/11
# a = 1 - 6/11 - 2/11 = 3/11
# ```
#
# 4. Verification: transitions x stable_state = stable_state
#
# ```
# [.6 .1 .3]   [3/11]   [(1.8 + 0.6 + 0.6)/11]   [3/11]
# [.2 .8 .3] x [6/11] = [(0.6 + 4.8 + 0.6)/11] = [6/11]
# [.2 .1 .4]   [2/11]   [(0.6 + 0.6 + 0.8)/11]   [2/11]
# ```
#
# 5. Conclusion: In the long run, 3/11 ≈ 27% of cars end up in SF, 6/11 ≈ 55% in LA, and 2/11 ≈ 18% in SJ.
# Recommendation: If demand is equal in LA, SF, and SJ, we will need to drive some cars back from LA to SF and/or SJ.
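The hand calculation can be cross-checked numerically: the stationary state is the eigenvector of the transition matrix for eigenvalue 1, rescaled to sum to 1 (a sketch using the same matrix as above):

```python
import numpy as np

transitions = np.array([
    [.6, .1, .3],
    [.2, .8, .3],
    [.2, .1, .4]
])

# A column-stochastic matrix always has eigenvalue 1; the matching eigenvector,
# rescaled so its entries sum to 1, is the stationary distribution.
eigvals, eigvecs = np.linalg.eig(transitions)
idx = np.argmin(np.abs(eigvals - 1.0))
stationary = np.real(eigvecs[:, idx])
stationary = stationary / stationary.sum()

print(stationary)  # ≈ [0.2727 0.5455 0.1818], i.e. [3/11, 6/11, 2/11]
```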
# +
# some links
# https://math.stackexchange.com/q/2487893
# http://www.math.harvard.edu/~knill/teaching/math19b_2011/handouts/lecture33.pdf
# -
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Descriptive statistics
import numpy as np
import seaborn as sns
import scipy.stats as st
import matplotlib.pyplot as plt
import matplotlib.mlab as mlab
import pandas as pd
import statsmodels.api as sm
import statistics
import os
from scipy.stats import norm
# ## Probability data, binomial distribution
# We already got to know data that follow a binomial distribution, but we actually had not looked at the distribution. We will do this now. 10% of the 100 cells we count have deformed nuclei. To illustrate the distribution we will count repeatedly....
# +
n = 100 # number of trials
p = 0.1 # probability of each trial
s = np.random.binomial(n, p, 1000) #simulation repeating the experiment 1000 times
print(s)
# -
# As you can see, the result of the distribution is in absolute counts, not proportions - they can easily be converted by dividing by n, but they don't have to be...
props = s/n
print(props)
# Now we plot the distribution. The easiest first look is a histogram.
plt.hist(props, bins = 50)
plt.xlabel("proportion")
plt.ylabel("frequency")
plt.show()
# The resolution is a bit inappropriate, given that we deal with integer counts; increasing the number of bins would be a good idea. Maybe we should also plot a confidence interval.
# +
CI= sm.stats.proportion_confint(n*p, n, alpha=0.05)
print(CI)
plt.axvspan(CI[0],CI[1], alpha=0.2, color='yellow')
plt.hist(props, bins = 50)
plt.xlabel("proportion")
plt.ylabel("frequency")
plt.axvline(p, color="black")
# -
# In a binomial distribution, the distribution is given by the proportion and the sample size. Therefore we could calculate a confidence interval from one measurement.
# #### How can we now describe the distribution?
# Summary statistics:
print("the minimum is:", min(props))
print("the maximum is:", max(props))
print(statistics.mean(props))
# Is the mean a good way to look at our distribution?
# +
n = 50 # number of trials
p = 0.02 # probability of each trial
s = np.random.binomial(n, p, 1000) #simulation repeating the experiment 1000 times
props = s/n
CI= sm.stats.proportion_confint(n*p, n, alpha=0.05)
print(CI)
plt.axvspan(CI[0],CI[1], alpha=0.2, color='yellow')
plt.hist(props, bins = 20)
plt.xlabel("proportion")
plt.ylabel("frequency")
plt.axvline(p, color="black")
plt.axvline(statistics.mean(props), color="red")
print(statistics.mean(props))
# +
n = 500 # number of trials
p = 0.02 # probability of each trial
s = np.random.binomial(n, p, 1000) #simulation repeating the experiment 1000 times
props = s/n
CI= sm.stats.proportion_confint(n*p, n, alpha=0.05)
print(CI)
plt.axvspan(CI[0],CI[1], alpha=0.2, color='yellow')
plt.hist(props, bins = 50)
plt.xlabel("proportion")
plt.ylabel("frequency")
plt.axvline(p, color="black")
plt.axvline(statistics.mean(props), color="red")
print(statistics.mean(props))
# -
# ## Count data/ the Poisson distribution
# The Poisson distribution is built on count data, e.g. the numbers of raisins in a Dresdner Christstollen, the number of geese at any given day between Blaues Wunder and Waldschlösschenbrücke, or radioactive decay. So lets use a Geiger counter and count the numbers of decay per min.
# +
freq =1.6
s = np.random.poisson(freq, 1000)
plt.hist(s, bins = 20)
plt.xlabel("counts per minute")
plt.ylabel("frequency")
plt.axvline(freq, color="black")
# -
# ### Confidence intervals for a Poisson distribution
# Similar to the binomial distribution, the distribution is defined by sample size and the mean.
# Also for Poisson, one can calculate a (possibly asymmetric) confidence interval:
# +
freq =1.6
s = np.random.poisson(freq, 1000)
CI = st.poisson.interval(0.95,freq)
plt.axvspan(CI[0],CI[1], alpha=0.2, color='yellow')
plt.hist(s, bins = 20)
plt.xlabel("counts per minute")
plt.ylabel("frequency")
plt.axvline(freq, color="black")
# -
# For a Poisson distribution, the Poisson error can be reduced by increasing the counted population; in our case, let's count for 10 min instead of 1 min and see what happens.
# +
CI = np.true_divide(st.poisson.interval(0.95,freq*10),10)
print(CI)
s = np.true_divide(np.random.poisson(freq*10, 1000),10)
plt.axvspan(CI[0],CI[1], alpha=0.2, color='yellow')
plt.hist(s, bins = 70)
plt.xlabel("counts per minute")
plt.ylabel("frequency")
plt.axvline(freq, color="black")
# -
# What is the difference between Poisson and Binomial? Aren't they both kind of looking at count data?
# Yes, BUT:
# Binomial counts events versus another event, e.g. for the cells there are two options, normal versus deformed. A binomial distribution is about comparing the two options.
# Poisson counts with an open end, e.g. number of mutations.
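A quick numerical check of the relationship between the two: for many trials and a rare event, the binomial distribution is well approximated by a Poisson distribution with the same mean (the n and p below are made up for illustration):

```python
from scipy import stats

n, p = 1000, 0.002  # many trials, rare event
lam = n * p         # matching Poisson mean

# Compare the probability mass functions for the first few counts.
for k in range(6):
    b = stats.binom.pmf(k, n, p)
    po = stats.poisson.pmf(k, lam)
    print(f"P(X={k}): binomial {b:.5f} vs Poisson {po:.5f}")
```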
# ## Continuous data
# Let's import the count data you have generated with Robert. When you download it from Google sheets (https://docs.google.com/spreadsheets/d/1Ek-23Soro5XZ3y1kJHpvaTaa1f4n2C7G3WX0qddD-78/edit#gid=0), it comes with spaces. Try to avoid spaces and special characters in file names, they tend to make trouble.
# I renamed it to 'BBBC001.csv'.
# +
dat = pd.read_csv('https://raw.githubusercontent.com/BiAPoL/Bio-image_Analysis_with_Python/main/biostatistics/data/BBBC001.csv', header=1, sep=';')
print(dat)
# -
# For now we will focus on the manual counts, visualise it and perform summary statistics.
# +
man_count = dat["BBBC001 manual count"].values
auto_count = dat.iloc[:,[2,3,4]].values
plt.hist(man_count,bins=100)
# -
print(man_count)
plt.hist(auto_count,bins=100)
sns.kdeplot(data=dat)
# There are different alternatives for displaying such data, some of which are independent of the distribution. You will find documentation in the graph gallery: https://www.python-graph-gallery.com/
sns.kdeplot(man_count)
# A density plot is sometimes helpful to see the distribution, but be aware of the smoothing and that you lose the information on sample size.
sns.stripplot(data=auto_count)
sns.swarmplot(y=man_count)
sns.violinplot(y=man_count)
# this plot is useful, but the density function can sometimes be misleading and lead to artefacts depending on the sample size. Unless explicitly stated, sample sizes are usually normalised and therefore hidden!
sns.boxplot(y=man_count)
# Be aware that boxplots hide the underlying distribution and the sample size.
# So the "safest" plot, when in doubt, is to combine boxplot and jitter:
ax = sns.swarmplot(y=man_count, color="black")
ax = sns.boxplot(y=man_count,color="lightgrey")
ax.grid()
# The boxplot is very useful, because it directly provides non-parametric summary statistics:
# Min, max, median, quartiles and therefore the inter-quartile range (IQR). The whiskers usually extend to the most extreme data point that still lies within 1.5x the IQR beyond the quartiles. Everything beyond that is considered an outlier. Whiskers are however not always used in this way!
# The mean and standard deviation are not visible in a boxplot, because they are only meaningful for distributions that center around the mean. They are however part of the summary statistics:
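The numbers behind a boxplot can also be computed by hand following the usual Tukey convention (sketched here on a synthetic sample standing in for the manual counts):

```python
import numpy as np

rng = np.random.default_rng(0)
counts = rng.poisson(300, size=50).astype(float)  # synthetic stand-in for the count data
counts[0] = 900.0                                 # plant one gross outlier

q1, median, q3 = np.percentile(counts, [25, 50, 75])
iqr = q3 - q1

# Whisker fences: 1.5 * IQR beyond the quartiles; points outside are flagged as outliers.
lower_fence = q1 - 1.5 * iqr
upper_fence = q3 + 1.5 * iqr
outliers = counts[(counts < lower_fence) | (counts > upper_fence)]

print(f"Q1={q1}, median={median}, Q3={q3}, IQR={iqr}")
print(f"fences: [{lower_fence}, {upper_fence}], outliers: {outliers}")
```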
dat["BBBC001 manual count"].describe()
# ## Normal distribution
# We assume that our distribution is "normal".
# First we fit a normal distribution to our data.
# +
(mu, sigma) = norm.fit(man_count)
n, bins, patches = plt.hist(man_count, 100,density=1)
# add a 'best fit' line
y = norm.pdf(bins, mu, sigma)
l = plt.plot(bins, y, 'r--', linewidth=2)
#plot
plt.xlabel('manual counts')
plt.ylabel('binned counts')
plt.title(r'$\mathrm{Histogram\ of\ manual\ counts:}\ \mu=%.3f,\ \sigma=%.3f$' %(mu, sigma))
plt.show()
# -
# Is it really normally distributed? What we see here is already one of the most problematic properties of fitting a normal distribution: the susceptibility of the mean and standard deviation to outliers.
# In normal distributions the confidence interval is determined by the standard deviation. A 95% confidence level corresponds to the mean ± 1.96 x sigma.
# +
#plot
(mu, sigma) = norm.fit(man_count)
n, bins, patches = plt.hist(man_count, 100,density=1)
# add a 'best fit' line
y = norm.pdf(bins, mu, sigma)
l = plt.plot(bins, y, 'r--', linewidth=2)
plt.xlabel('manual counts')
plt.ylabel('binned counts')
plt.title(r'$\mathrm{Histogram\ of\ manual\ counts:}\ \mu=%.3f,\ \sigma=%.3f$' %(mu, sigma))
plt.axvspan((mu-1.96*sigma),(mu+1.96*sigma), alpha=0.2, color='yellow')
plt.axvline(mu, color="black")
plt.show()
# -
# This shows even nicer that our outlier messes up the distribution :-)
# How can we solve this in practise?
# 1. Ignore the problem and continue with the knowledge that we are overestimating the width of the distribution and underestimating the mean.
# 2. Censor the outlier.
# 3. Decide that we cannot assume normality and move to either a different distribution or non-parametric statistics.
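Option 3 often amounts to relying on the median instead of the mean: a single extreme value barely moves the median, while it shifts the mean and inflates the standard deviation (illustrated on a small synthetic sample):

```python
import numpy as np

rng = np.random.default_rng(42)
clean = rng.normal(loc=350.0, scale=20.0, size=30)  # synthetic "counts" without outliers
with_outlier = np.append(clean, 900.0)              # add one gross outlier

print(f"mean:   {clean.mean():.1f} -> {with_outlier.mean():.1f}")
print(f"std:    {clean.std():.1f} -> {with_outlier.std():.1f}")
print(f"median: {np.median(clean):.1f} -> {np.median(with_outlier):.1f}")
```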
# ## Other distributions
# Of course there are many more distributions, e.g.
# Lognormal is a distribution that becomes normal when log-transformed. It is important for the "geometric mean".
# Bimodal distributions may arise from imaging data with background signal, or DNA methylation data.
# Negative binomial distributions are very important in genomics, especially RNA-Seq analysis.
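The link between the lognormal distribution and the geometric mean can be made concrete: the geometric mean is the exponential of the arithmetic mean of the log-values (a quick sketch):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.lognormal(mean=1.0, sigma=0.5, size=1000)  # lognormal sample

log_x = np.log(x)                # log-transform: approximately normally distributed
geo_mean = np.exp(log_x.mean())  # geometric mean = exp(mean of logs)

print(geo_mean, stats.gmean(x))  # the two computations agree
```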
# ## Exercise
# 1. We had imported the total table with also the automated counts. Visualise the distribution of the data next to each other
# 2. Generate the summary statistics and compare the different distributions
# 3. Two weeks ago you learned how to analyze a folder of images and measured the average size of beads:
# https://nbviewer.jupyter.org/github/BiAPoL/Bio-image_Analysis_with_Python/blob/main/image_processing/12_process_folders.ipynb
#
# Go back to the bead-analysis two weeks ago and measure the intensity of the individual beads (do not average over the image). Plot the beads' intensities with different plot types. Which one do you find most appropriate for these data?
#
#
| biostatistics/stats2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
df=pd.read_csv('tamil_train.csv')
df.head()
df.Not_offensive.unique()
df.shape
# read_csv consumed the first data row as the header, so restore it as a data row
df1=pd.DataFrame({"movie vara level la Erika poguthu":["movie vara level la Erika poguthu"],
                  "Not_offensive":["Not_offensive"]})
df1.head()
data=pd.concat([df1, df], ignore_index=True)  # DataFrame.append was removed in pandas 2.0
data.head()
data1=data.rename(columns={"movie vara level la Erika poguthu": "text", "Not_offensive": "label"})
data1.shape
data1.head()
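# The concat step above is only needed because `read_csv` consumed the first data row as a header. A cleaner alternative, sketched here on a simulated headerless CSV (the contents are illustrative), is to declare `header=None` and the column names up front:

```python
import io
import pandas as pd

# Simulated headerless two-column CSV in the same shape as tamil_train.csv
raw = "movie vara level la Erika poguthu,Not_offensive\nsome other comment,Not_offensive\n"
data1 = pd.read_csv(io.StringIO(raw), header=None, names=['text', 'label'])
print(data1.shape)  # both rows survive as data: (2, 2)
```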
| .ipynb_checkpoints/offensive language identification implementation-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:TFG]
# language: python
# name: conda-env-TFG-py
# ---
# **FCN - Inspect Weights of a Trained Model**
#
# This notebook includes code and visualizations to test, debug, and evaluate the FCN model.
# ## Build FCN Model and display summary
# +
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:95% !important; }</style>"))
# %matplotlib inline
# %load_ext autoreload
# %autoreload 2
import sys, os, random, pprint, gc
sys.path.append('../')
import tensorflow as tf
import keras.backend as KB
import numpy as np
import skimage.io
import matplotlib.pyplot as plt
import mrcnn.visualize as visualize
import mrcnn.utils as utils
from mrcnn.datagen import data_generator, load_image_gt, data_gen_simulate
from mrcnn.callbacks import get_layer_output_1,get_layer_output_2
from mrcnn.utils import mask_string, parse_image_meta, apply_box_deltas_tf
from mrcnn.prep_notebook import mrcnn_coco_test, mrcnn_coco_train, prep_coco_dataset
from mrcnn.coco import CocoDataset, CocoConfig, CocoInferenceConfig, evaluate_coco, build_coco_results
import mrcnn.model_fcn as fcn_modellib
from mrcnn.utils import log
pp = pprint.PrettyPrinter(indent=2, width=100)
np.set_printoptions(linewidth=100,precision=4,threshold=1000, suppress = True)
## Notebook Preferences
# Device to load the neural network on.
# Useful if you're training a model on the same
# machine, in which case use CPU and leave the
# GPU for training.
DEVICE = "/gpu:0" # /cpu:0 or /gpu:0
# def get_ax(rows=1, cols=1, size=16):
# """Return a Matplotlib Axes array to be used in
# all visualizations in the notebook. Provide a
# central point to control graph sizes.
# Adjust the size attribute to control how big to render images
# """
# _, ax = plt.subplots(rows, cols, figsize=(size*cols, size*rows))
# return ax
## Configurations
DIR_TRAINING = os.path.expanduser('/home/kbardool/models/')
FCN_TRAINING_PATH = os.path.join(DIR_TRAINING , "train_fcn_coco")
print(FCN_TRAINING_PATH)
##------------------------------------------------------------------------------------
## Build configuration for FCN model
##------------------------------------------------------------------------------------
fcn_config = CocoConfig()
# fcn_config.IMAGE_MAX_DIM = 600
# fcn_config.IMAGE_MIN_DIM = 480
fcn_config.NAME = 'fcn'
fcn_config.BATCH_SIZE = 1 # Batch size is 1 (# GPUs * images/GPU).
fcn_config.IMAGES_PER_GPU = 1 # Must match BATCH_SIZE
# fcn_config.HEATMAP_SCALE_FACTOR = 4
fcn_config.FCN_INPUT_SHAPE = fcn_config.FCN_INPUT_SHAPE[0:2] // fcn_config.HEATMAP_SCALE_FACTOR
# fcn_config.FCN_VGG16_MODEL_PATH = mrcnn_config.FCN_VGG16_MODEL_PATH
fcn_config.TRAINING_PATH = FCN_TRAINING_PATH
fcn_config.BATCH_MOMENTUM = 0.9
fcn_config.WEIGHT_DECAY = 2.0e-4
fcn_config.STEPS_PER_EPOCH = 4
fcn_config.EPOCHS_TO_RUN = 2
fcn_config.LEARNING_RATE = 0.01
fcn_config.LAST_EPOCH_RAN = 0
fcn_config.VALIDATION_STEPS = 5
fcn_config.REDUCE_LR_FACTOR = 0.5
fcn_config.REDUCE_LR_COOLDOWN = 50
fcn_config.REDUCE_LR_PATIENCE = 33
fcn_config.EARLY_STOP_PATIENCE = 50
fcn_config.EARLY_STOP_MIN_DELTA = 1.0e-4
fcn_config.MIN_LR = 1.0e-10
fcn_config.NEW_LOG_FOLDER = True
fcn_config.OPTIMIZER = 'ADAGRAD'
fcn_config.SYSOUT = 'screen'
fcn_config.display()
## Build FCN Model
with tf.device(DEVICE):
##------------------------------------------------------------------------------------
## Build FCN Model in Training Mode
##------------------------------------------------------------------------------------
try :
del fcn_model
gc.collect()
except:
pass
# fcn_model = fcn_modellib.FCN(mode="training", config=fcn_config, model_dir=fcn_config.TRAINING_PATH)
fcn_model = fcn_modellib.FCN(mode="inference", arch='FCN8', config=fcn_config)
fcn_model.keras_model.summary()
# -
# ## Set weight files
# +
# weights_path= '/home/kbardool/models/train_fcn_coco/fcn20181020T1506/fcn_0124.h5'
# weights_path= 'F:/models/train_fcn_coco/fcn20181020T1200/fcn_0056.h5'
# weights_path= 'F:/models/train_fcn_coco/fcn20181021T1602/fcn_0188.h5'
# weights_path= '/home/kbardool/models/train_fcn_coco/fcn20181022T1622/fcn_0001.h5'
# DIR_WEIGHTS = '/home/kbardool/models/train_fcn_coco/fcn20181023T0825'
# DIR_WEIGHTS = '/home/kbardool/models/train_fcn8_coco/fcn20181026T1432'
# filepath = os.path.join(DIR_WEIGHTS, 'fcn_init_weights')
# fcn_model.keras_model.save_weights(filepath, overwrite=True)
# fcn_model.save_model(DIR_WEIGHTS, 'fcn_init_weights')
# fcn_model.keras_model.summary()
##'fcn_init_weights.h5',
# DIR_WEIGHTS = 'F:/models/train_fcn8_coco/fcn20181031T0000' ### overridden below; Training with LR=0.00001, MSE Loss NO L2 Regularization
# DIR_WEIGHTS = '/home/kbardool/models/train_fcn8_coco/fcn20181028T1324' ### Training with LR=0.0001, MSE Loss
# DIR_WEIGHTS = '/home/kbardool/models/train_fcn8_coco/fcn20181030T0000' ### Training with LR=0.0001, MSE Loss NO L2 Regularization
DIR_WEIGHTS = '/home/kbardool/models/train_fcn8_coco/fcn20181031T0000' ### Training with LR=0.00001, MSE Loss NO L2 Regularization
# files = ['fcn_0001.h5','fcn_0027.h5','fcn_0036.h5','fcn_0051.h5','fcn_0076.h5','fcn_0106.h5','fcn_0156.h5']
files = ['fcn_0001.h5','fcn_0106.h5','fcn_0170.h5','fcn_0256.h5','fcn_0383.h5','fcn_0500.h5','fcn_2623.h5']
# -
# ## Load Weights - 1
weights_path = os.path.join(DIR_WEIGHTS , files[0])
print("Loading weights ", weights_path)
fcn_model.load_model_weights(weights_path)
# ### Review Weight Stats - 1st weight file
# Show stats of all trainable weights
a = visualize.display_weight_stats(fcn_model)
weights_stats = os.path.join(DIR_WEIGHTS , 'stats_'+files[0]+'.pdf')
# utils.convertHtmlToPdf(a, weights_stats)
from mrcnn.utils import convertHtmlToPdf
# ### Histograms of Weights - 1st weight file
# Pick layer types to display
a = visualize.display_weight_histograms(fcn_model,width=15,height=4, filename = files[0])
weights_histogram = os.path.join(DIR_WEIGHTS , 'histogram_'+files[0]+'.png')
a.savefig(weights_histogram)
# ## Load Weights - 2
weights_path = os.path.join(DIR_WEIGHTS , files[1])
print("Loading weights ", weights_path)
fcn_model.load_model_weights(weights_path)
# ### Review Weight Stats - 2nd weights file
# Show stats of all trainable weights
visualize.display_weight_stats(fcn_model)
# ### Histograms of Weights - 2nd weights file
a = visualize.display_weight_histograms(fcn_model, filename = files[1])
weights_histogram = os.path.join(DIR_WEIGHTS , 'histogram_'+files[1]+'.png')
a.savefig(weights_histogram)
# ## Load Weights - 3rd weight file
# ### load
weights_path = os.path.join(DIR_WEIGHTS , files[2])
print("Loading weights ", weights_path)
fcn_model.load_model_weights(weights_path)
# ### Review Weight Stats - 3rd weight file
# Show stats of all trainable weights
visualize.display_weight_stats(fcn_model)
# ### Histograms of Weights - 3rd weight file
a = visualize.display_weight_histograms(fcn_model, filename = files[2])
weights_histogram = os.path.join(DIR_WEIGHTS , 'histogram_'+files[2]+'.png')
a.savefig(weights_histogram)
# ## Load Weights - 4th weight file
weights_path = os.path.join(DIR_WEIGHTS , files[3])
print("Loading weights ", weights_path)
fcn_model.load_model_weights(weights_path)
# ### Review Weight Stats - 4th weight file
# Show stats of all trainable weights
visualize.display_weight_stats(fcn_model)
# ### Histograms of Weights
a = visualize.display_weight_histograms(fcn_model, filename = files[3])
weights_histogram = os.path.join(DIR_WEIGHTS , 'histogram_'+files[3]+'.png')
a.savefig(weights_histogram)
# ## Load Weights - 5th weight file
weights_path = os.path.join(DIR_WEIGHTS , files[4])
fcn_model.load_model_weights(weights_path)
# ### Review Weight Stats - 5th weight file
# Show stats of all trainable weights
visualize.display_weight_stats(fcn_model)
# ### Histograms of Weights - 5th weight file
a = visualize.display_weight_histograms(fcn_model, filename = files[4])
weights_histogram = os.path.join(DIR_WEIGHTS , 'histogram_'+files[4]+'.png')
a.savefig(weights_histogram)
# ## Load Weights - 6th weight file
weights_path = os.path.join(DIR_WEIGHTS , files[5])
fcn_model.load_model_weights(weights_path)
# ### Review Weight Stats - 6th weight file
# Show stats of all trainable weights
visualize.display_weight_stats(fcn_model)
# ### Histograms of Weights - 6th weight file
a = visualize.display_weight_histograms(fcn_model, filename = files[5])
weights_histogram = os.path.join(DIR_WEIGHTS , 'histogram_'+files[5]+'.png')
a.savefig(weights_histogram)
# ## Load Weights - 7th weight file
weights_path = os.path.join(DIR_WEIGHTS , files[6])
fcn_model.load_model_weights(weights_path)
# ### Review Weight Stats - 7th weight file
# Show stats of all trainable weights
visualize.display_weight_stats(fcn_model)
# ### Histograms of Weights - 7th weight file
a = visualize.display_weight_histograms(fcn_model, filename = files[6])
weights_histogram = os.path.join(DIR_WEIGHTS , 'histogram_'+files[6]+'.png')
a.savefig(weights_histogram)
| notebooks/Utils/Utils - Inspect_weights - FCN Model (Template).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
df = pd.read_csv("aapl.csv",parse_dates=["Date"], index_col="Date")
df.head(2)
df.index
df.loc['2017-06-30']  # partial-string indexing on a DatetimeIndex needs .loc in pandas >= 2.0
df.loc['2017-01']     # all rows from January 2017
df.loc['2017-06'].head()
df.loc['2017-06'].Close.mean()
df.loc['2017'].head(2)
df.loc['2017-01-08':'2017-01-03']  # slice endpoints follow the index order
df.loc['2017-01']
df['Close'].resample('M').mean().head()  # month-end average closing price
df.loc['2016-07']
# %matplotlib inline
df['Close'].plot()
df['Close'].resample('M').mean().plot(kind='bar')
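# `resample('M')` groups a DatetimeIndex by calendar month and reduces each group to one month-end row; a minimal self-contained sketch on synthetic prices (not the AAPL data):

```python
import numpy as np
import pandas as pd

# Two months of synthetic daily "prices": 0, 1, 2, ...
idx = pd.date_range('2017-06-01', '2017-07-31', freq='D')
prices = pd.Series(np.arange(len(idx), dtype=float), index=idx)

monthly = prices.resample('M').mean()  # one row per month-end
print(monthly)  # 2017-06-30 -> 14.5 (mean of 0..29), 2017-07-31 -> 45.0 (mean of 30..60)
```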
| path_of_ML/Pandas/Time_series.ipynb |