# Lightweight python components
Lightweight Python components do not require you to build a new container image for every code change. They are intended for fast iteration in a notebook environment.
**Building a lightweight python component**
To build a component, define a stand-alone Python function and then call `kfp.components.func_to_container_op(func)` to convert it to a component that can be used in a pipeline.
There are several requirements for the function:
- The function should be stand-alone. It should not use any code declared outside of the function definition. Any imports should be added inside the main function. Any helper functions should also be defined inside the main function.
- The function can only import packages that are available in the base image. If you need to import a package that's not available, you can try to find a container image that already includes the required packages. (As a workaround, you can use the `subprocess` module to run `pip install` for the required package; there is an example below in the `my_divmod` function.)
- If the function operates on numbers, the parameters need to have type hints. Supported types are [int, float, bool]. Everything else is passed as string.
- To build a component with multiple output values, use the typing.NamedTuple type hint syntax: NamedTuple('MyFunctionOutputs', [('output_name_1', type), ('output_name_2', float)])
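For instance, a minimal multi-output function using this hint syntax might look like the following (a hypothetical stand-alone example, not part of the pipeline below):

```python
from typing import NamedTuple

# Hypothetical component function: returns two named outputs via the NamedTuple hint syntax
def split_name(full_name: str) -> NamedTuple('SplitOutput', [('first', str), ('last', str)]):
    from collections import namedtuple
    parts = full_name.split(' ', 1)
    output = namedtuple('SplitOutput', ['first', 'last'])
    return output(parts[0], parts[1] if len(parts) > 1 else '')

print(split_name('Ada Lovelace'))  # SplitOutput(first='Ada', last='Lovelace')
```

Each named field becomes a separate output that downstream tasks can reference by name.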
```
# Install the dependency packages
!pip install --upgrade pip
!pip install numpy tensorflow kfp-tekton
```
**Important**: If you are running this notebook using the Kubeflow Jupyter Server, you need to restart the Python **kernel**, because the packages above overwrote some default packages inside the Kubeflow Jupyter image.
```
import kfp
import kfp.components as comp
```
A simple function that just adds two numbers:
```
# Define a Python function
def add(a: float, b: float) -> float:
    '''Calculates the sum of two arguments'''
    return a + b
```
Convert the function to a pipeline operation
```
add_op = comp.func_to_container_op(add)
```
A slightly more advanced function that demonstrates how to use imports and helper functions, and how to produce multiple outputs.
```
# Advanced function
# Demonstrates imports, helper functions and multiple outputs
from typing import NamedTuple

def my_divmod(dividend: float, divisor: float) -> NamedTuple(
        'MyDivmodOutput',
        [('quotient', float), ('remainder', float),
         ('mlpipeline_ui_metadata', 'UI_metadata'), ('mlpipeline_metrics', 'Metrics')]):
    '''Divides two numbers and calculates the quotient and remainder'''
    # Pip installs inside a component function.
    # NOTE: installs should be placed right at the beginning to avoid upgrading a package
    # after it has already been imported and cached by python
    import sys, subprocess
    subprocess.run([sys.executable, '-m', 'pip', 'install', 'tensorflow==1.8.0'])

    # Imports inside a component function:
    import numpy as np

    # This function demonstrates how to use nested functions inside a component function:
    def divmod_helper(dividend, divisor):
        return np.divmod(dividend, divisor)

    (quotient, remainder) = divmod_helper(dividend, divisor)

    from tensorflow.python.lib.io import file_io
    import json

    # Exports a sample tensorboard:
    metadata = {
        'outputs': [{
            'type': 'tensorboard',
            'source': 'gs://ml-pipeline-dataset/tensorboard-train',
        }]
    }

    # Exports two sample metrics:
    metrics = {
        'metrics': [{
            'name': 'quotient',
            'numberValue': float(quotient),
        }, {
            'name': 'remainder',
            'numberValue': float(remainder),
        }]
    }

    from collections import namedtuple
    divmod_output = namedtuple('MyDivmodOutput',
                               ['quotient', 'remainder', 'mlpipeline_ui_metadata', 'mlpipeline_metrics'])
    return divmod_output(quotient, remainder, json.dumps(metadata), json.dumps(metrics))
```
Test by running the Python function directly:
```
my_divmod(100, 7)
```
#### Convert the function to a pipeline operation
You can specify an alternative base container image (the image needs to have Python 3.5+ installed).
```
divmod_op = comp.func_to_container_op(my_divmod, base_image='tensorflow/tensorflow:1.11.0-py3')
```
#### Define the pipeline
The pipeline function has to be decorated with the `@dsl.pipeline` decorator.
```
import kfp.dsl as dsl

@dsl.pipeline(
    name='Calculation pipeline',
    description='A toy pipeline that performs arithmetic calculations.'
)
# Currently kfp-tekton doesn't support passing parameters to the PipelineRun yet, so we hard-code the numbers here
def calc_pipeline(
    a='7',
    b='8',
    c='17',
):
    # Passing a pipeline parameter and a constant value as operation arguments
    add_task = add_op(a, 4)  # Returns a dsl.ContainerOp class instance.
    # Passing a task output reference as an operation argument
    # For an operation with a single return value, the output reference can be accessed using `task.output` or `task.outputs['output_name']` syntax
    divmod_task = divmod_op(add_task.output, b)
    # For an operation with multiple return values, the output references can be accessed using the `task.outputs['output_name']` syntax
    result_task = add_op(divmod_task.outputs['quotient'], c)
```
Compile and run the pipeline as Tekton YAML using the kfp-tekton SDK:
```
# Specify pipeline argument values
arguments = {'a': '7', 'b': '8'}
# Specify Kubeflow Pipeline Host
host=None
# Submit a pipeline run using the KFP Tekton client.
from kfp_tekton import TektonClient
TektonClient(host=host).create_run_from_pipeline_func(calc_pipeline, arguments=arguments)
# For Argo users, submit the pipeline run using the below client.
# kfp.Client(host=host).create_run_from_pipeline_func(calc_pipeline, arguments=arguments)
```
# Predicting Student Admissions with Neural Networks
In this notebook, we predict student admissions to graduate school at UCLA based on three pieces of data:
- GRE Scores (Test)
- GPA Scores (Grades)
- Class rank (1-4)
The dataset originally came from here: http://www.ats.ucla.edu/
## Loading the data
To load the data and format it nicely, we will use two very useful packages called Pandas and Numpy. You can read the documentation here:
- https://pandas.pydata.org/pandas-docs/stable/
- https://docs.scipy.org/
```
# Importing pandas and numpy
import pandas as pd
import numpy as np
# Reading the csv file into a pandas DataFrame
data = pd.read_csv('student_data.csv')
# Printing out the first 10 rows of our data
data[:10]
```
## Plotting the data
First, let's make a plot of our data to see how it looks. In order to have a 2D plot, let's ignore the rank.
```
# Importing matplotlib
import matplotlib.pyplot as plt
# Function to help us plot
def plot_points(data):
    X = np.array(data[["gre", "gpa"]])
    y = np.array(data["admit"])
    admitted = X[np.argwhere(y == 1)]
    rejected = X[np.argwhere(y == 0)]
    plt.scatter([s[0][0] for s in rejected], [s[0][1] for s in rejected], s=25, color='red', edgecolor='k')
    plt.scatter([s[0][0] for s in admitted], [s[0][1] for s in admitted], s=25, color='cyan', edgecolor='k')
    plt.xlabel('Test (GRE)')
    plt.ylabel('Grades (GPA)')
# Plotting the points
plot_points(data)
plt.show()
```
Roughly, it looks like the students with high scores in the grades and test passed, while the ones with low scores didn't, but the data is not as nicely separable as we hoped it would be. Maybe it would help to take the rank into account? Let's make 4 plots, one for each rank.
```
# Separating the ranks
data_rank1 = data[data["rank"]==1]
data_rank2 = data[data["rank"]==2]
data_rank3 = data[data["rank"]==3]
data_rank4 = data[data["rank"]==4]
# Plotting the graphs
plot_points(data_rank1)
plt.title("Rank 1")
plt.show()
plot_points(data_rank2)
plt.title("Rank 2")
plt.show()
plot_points(data_rank3)
plt.title("Rank 3")
plt.show()
plot_points(data_rank4)
plt.title("Rank 4")
plt.show()
```
This looks more promising, as it seems that the lower the rank, the higher the acceptance rate. Let's use the rank as one of our inputs. In order to do this, we should one-hot encode it.
## TODO: One-hot encoding the rank
Use the `get_dummies` function in Pandas in order to one-hot encode the data.
```
# TODO: Make dummy variables for rank
one_hot_data = pd.concat([data, pd.get_dummies(data["rank"],prefix="rank")],axis=1)
# TODO: Drop the previous rank column
one_hot_data = one_hot_data.drop("rank", axis=1)
# Print the first 10 rows of our data
one_hot_data[:10]
```
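As a quick illustration of what `get_dummies` does, here it is applied to a toy frame (hypothetical data, not the admissions set):

```python
import pandas as pd

# One-hot encode a small integer column; each distinct value becomes its own 0/1 column
df = pd.DataFrame({"rank": [1, 2, 2, 4]})
dummies = pd.get_dummies(df["rank"], prefix="rank")
print(list(dummies.columns))  # ['rank_1', 'rank_2', 'rank_4']
```

Only the values that actually occur get a column, which is why concatenating the dummies and dropping the original `rank` column works above.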
## TODO: Scaling the data
The next step is to scale the data. We notice that the range for grades is 1.0-4.0, whereas the range for test scores is roughly 200-800, which is much larger. This means our data is skewed, and that makes it hard for a neural network to handle. Let's fit our two features into a range of 0-1, by dividing the grades by 4.0, and the test score by 800.
```
# Making a copy of our data
processed_data = one_hot_data[:]
# TODO: Scale the columns
processed_data["gre"] = processed_data["gre"] / 800
processed_data["gpa"] = processed_data["gpa"] / 4.0
# Printing the first 10 rows of our processed data
processed_data[:10]
```
## Splitting the data into Training and Testing
In order to test our algorithm, we'll split the data into a Training and a Testing set. The size of the testing set will be 10% of the total data.
```
sample = np.random.choice(processed_data.index, size=int(len(processed_data)*0.9), replace=False)
train_data, test_data = processed_data.iloc[sample], processed_data.drop(sample)
print("Number of training samples is", len(train_data))
print("Number of testing samples is", len(test_data))
print(train_data[:10])
print(test_data[:10])
```
## Splitting the data into features and targets (labels)
Now, as a final step before the training, we'll split the data into features (X) and targets (y).
```
features = train_data.drop('admit', axis=1)
targets = train_data['admit']
features_test = test_data.drop('admit', axis=1)
targets_test = test_data['admit']
print(features[:10])
print(targets[:10])
```
## Training the 2-layer Neural Network
The following function trains the 2-layer neural network. First, we'll write some helper functions.
```
# Activation (sigmoid) function
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_prime(x):
    return sigmoid(x) * (1 - sigmoid(x))

def error_formula(y, output):
    return - y * np.log(output) - (1 - y) * np.log(1 - output)
```
## TODO: Backpropagate the error
Now it's your turn to shine. Write the error term. Remember that this is given by the equation $$ -(y-\hat{y}) \sigma'(x) $$
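As a standalone sanity check of the error-term formula, note that `output * (output - 1)` equals `-sigmoid_prime(h)`, so the expression used below can be verified numerically (a small sketch, independent of the training code):

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def error_term(y, output):
    # -(y - output) * output * (output - 1) == (y - output) * output * (1 - output)
    return -(y - output) * output * (output - 1)

# At h = 0: output = sigmoid(0) = 0.5 and sigmoid'(0) = 0.25,
# so with y = 1 the error term is 0.5 * 0.25 = 0.125
print(error_term(1.0, sigmoid(0.0)))
```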
```
# TODO: Write the error term formula
def error_term_formula(y, output):
    # Equivalent to (y - output) * sigmoid_prime(h), since output * (output - 1) == -sigmoid_prime(h)
    return -(y - output) * output * (output - 1)

# Neural Network hyperparameters
epochs = 1000
learnrate = 0.5

# Training function
def train_nn(features, targets, epochs, learnrate):
    # Use the same seed to make debugging easier
    np.random.seed(42)

    n_records, n_features = features.shape
    last_loss = None

    # Initialize weights
    weights = np.random.normal(scale=1 / n_features**.5, size=n_features)

    for e in range(epochs):
        del_w = np.zeros(weights.shape)
        for x, y in zip(features.values, targets):
            # Loop through all records; x is the input, y is the target

            # Activation of the output unit
            # Notice we multiply the inputs and the weights here
            # rather than storing h as a separate variable
            output = sigmoid(np.dot(x, weights))

            # The error, the target minus the network output
            error = error_formula(y, output)

            # The error term
            # Notice we calculate f'(h) here instead of calling the separate
            # sigmoid_prime function. This just makes it faster because we
            # can re-use the result of the sigmoid function stored in
            # the output variable
            error_term = error_term_formula(y, output)

            # The gradient descent step, the error times the gradient times the inputs
            del_w += error_term * x

        # Update the weights. The learning rate times the
        # change in weights, divided by the number of records to average
        weights += learnrate * del_w / n_records

        # Printing out the mean square error on the training set
        if e % (epochs / 10) == 0:
            out = sigmoid(np.dot(features, weights))
            loss = np.mean((out - targets) ** 2)
            print("Epoch:", e)
            if last_loss and last_loss < loss:
                print("Train loss: ", loss, "  WARNING - Loss Increasing")
            else:
                print("Train loss: ", loss)
            last_loss = loss
            print("=========")

    print("Finished training!")
    return weights

weights = train_nn(features, targets, epochs, learnrate)
```
## Calculating the Accuracy on the Test Data
```
# Calculate accuracy on test data
test_out = sigmoid(np.dot(features_test, weights))
predictions = test_out > 0.5
accuracy = np.mean(predictions == targets_test)
print("Prediction accuracy: {:.3f}".format(accuracy))
```
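The thresholding step above can be checked on toy values (hypothetical numbers, not the model's actual outputs):

```python
import numpy as np

# Threshold sigmoid outputs at 0.5 and compare against the labels
test_out = np.array([0.2, 0.7, 0.6, 0.4])
targets = np.array([0, 1, 0, 0])
predictions = test_out > 0.5
accuracy = np.mean(predictions == targets)
print(accuracy)  # 0.75
```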
```
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.layers import AveragePooling2D
from tensorflow.keras.layers import Dropout
from tensorflow.keras.layers import Flatten
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model, load_model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input
from tensorflow.keras.preprocessing.image import img_to_array
from tensorflow.keras.preprocessing.image import load_img
from tensorflow.keras.preprocessing import image
from tensorflow.keras.utils import to_categorical
from sklearn.preprocessing import LabelBinarizer
from sklearn.metrics import classification_report
from imutils import paths
import matplotlib.pyplot as plt
plt.style.use("seaborn")
import numpy as np
from numpy import expand_dims
import pandas as pd
import random
from pathlib import Path
from IPython.display import display
from PIL import Image
import pickle
import glob
import os
import cv2
from google.colab import drive
drive.mount('/drive')
os.listdir('/drive/My Drive/FaceMask/data')
def loadData(path, dataFrame):
    data = []
    for i in range(len(dataFrame)):
        data.append(path + dataFrame['filename'][i])
    return data

def loadImages(listPath, img_size):
    images = []
    for img in listPath:
        z = image.load_img(img, target_size=img_size)
        r = image.img_to_array(z)
        r = preprocess_input(r)
        images.append(r)
    return np.array(images)

def loadLabels(dataFrame):
    labels = []
    for row in range(len(dataFrame)):
        if dataFrame["class"][row] == 'with_mask':
            y = [1.0, 0.0]
        else:
            y = [0.0, 1.0]
        labels.append(y)
    return np.array(labels, dtype="float32")
```
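The label layout produced by `loadLabels` (`with_mask` → `[1, 0]`, anything else → `[0, 1]`) can be verified on a toy frame (a standalone sketch of the same logic):

```python
import numpy as np
import pandas as pd

# Same mapping as loadLabels above: 'with_mask' -> [1, 0], anything else -> [0, 1]
def load_labels(df):
    labels = [[1.0, 0.0] if c == "with_mask" else [0.0, 1.0] for c in df["class"]]
    return np.array(labels, dtype="float32")

toy = pd.DataFrame({"class": ["with_mask", "without_mask"]})
print(load_labels(toy).tolist())  # [[1.0, 0.0], [0.0, 1.0]]
```

This means class index 0 corresponds to a masked face, which matters later when interpreting `np.argmax` on the model's softmax output.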
##### Load training data paths
```
path = "/drive/My Drive/FaceMask/data/train/"
train_csv_df = pd.DataFrame(pd.read_csv("/drive/My Drive/FaceMask/data/train.csv"))
train_csv_df.head()
imgPath = loadData(path,train_csv_df)
```
##### Load test data paths
```
testPath = "/drive/My Drive/FaceMask/data/test/"
test_csv_df = pd.DataFrame(pd.read_csv("/drive/My Drive/FaceMask/data/test.csv"))
test_csv_df.head()
imgTest = loadData(testPath,test_csv_df)
```
### Load the training and test images
```
train_images_array = loadImages(imgPath, (224,224))  # must match the (224, 224, 3) input of MobileNetV2 below
test_images_array = loadImages(imgTest, (224,224))
```
### Get the labels
```
train_labels_array = loadLabels(train_csv_df)
test_labels_array = loadLabels(test_csv_df)
```
### Augmenting the training data
```
aug = ImageDataGenerator(
rotation_range=20,
zoom_range=0.15,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.15,
horizontal_flip=True,
fill_mode="nearest")
# Loading the MobileNetV2 network, with the topmost layer removed
base_model = MobileNetV2(weights="imagenet", include_top=False,
input_tensor=Input(shape=(224, 224, 3)))
# Freeze the layer of the base model to make them untrainable.
# This ensures that their weights are not updated when we train the model.
for layer in base_model.layers:
    layer.trainable = False
# Construct head of the model that will be attached on top of the base model:
head_model = base_model.output
head_model = AveragePooling2D(pool_size=(7, 7))(head_model)
head_model = Flatten(name="flatten")(head_model)
head_model = Dense(128, activation="relu")(head_model)
head_model = Dropout(0.5)(head_model)
head_model = Dense(2, activation="softmax")(head_model)
# Combine the head and base of the models together:
my_model = Model(inputs=base_model.input, outputs=head_model)
my_model.summary()
INIT_LR = 1e-4
EPOCHS = 20
BATCH_SIZE = 32
opt = Adam(lr=INIT_LR, decay=INIT_LR / EPOCHS)
my_model.compile(loss="binary_crossentropy", optimizer=opt,
metrics=["accuracy"])
history = my_model.fit(
aug.flow(train_images_array, train_labels_array, batch_size=BATCH_SIZE),
steps_per_epoch=len(train_images_array) // BATCH_SIZE,
validation_data=(test_images_array, test_labels_array),
validation_steps=len(test_images_array)//BATCH_SIZE,
epochs=EPOCHS)
my_model.save("/drive/My Drive/FaceMask/model.h5")
```
## EVALUATE MODEL
```
results = my_model.evaluate(test_images_array, test_labels_array, batch_size=128)
```
## PLOT RESULT
```
train_acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
train_loss = history.history['loss']
val_loss = history.history['val_loss']
plt.rcParams['figure.figsize'] = [10, 5]
# Plot training & validation accuracy values
plt.plot(train_acc)
plt.plot(val_acc)
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'])
# Save after adding the legend so it appears in the exported image
plt.savefig('/drive/My Drive/FaceMask/accuracy.png')
plt.show()

# Plot training & validation loss values on a fresh figure
plt.figure()
plt.plot(train_loss)
plt.plot(val_loss)
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'])
plt.savefig('/drive/My Drive/FaceMask/lost.png')
```
## PREDICT
```
my_model = load_model('D:\PROGRAMING\Python\Face Mask/model.h5')
img = image.load_img('D:\PROGRAMING\Python\Face Mask/1.jpg', target_size=(224, 224))
plt.imshow(img)
plt.axis('off')
plt.show()
img = np.array(img, dtype='float')
img = img.reshape(1, 224, 224, 3)
prediksi = my_model.predict(img)
idx = np.argmax(prediksi)
percentage = "%.2f" % (prediksi[0][idx] * 100)
print(str(percentage) + " %")
# loadLabels maps 'with_mask' to index 0, so a truthy idx means the model predicted no mask
if idx:
    print("Not Wearing Mask")
else:
    print("Wearing Mask")
```
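Given the label layout from `loadLabels` (index 0 = `with_mask`), the argmax can be mapped to a readable label with a small helper (a sketch independent of the trained model; the probabilities below are made up):

```python
import numpy as np

# Index 0 = with_mask, index 1 = without_mask, matching loadLabels above
CLASS_NAMES = ["Wearing Mask", "Not Wearing Mask"]

def interpret_prediction(probs):
    idx = int(np.argmax(probs))
    return CLASS_NAMES[idx], float(probs[idx]) * 100

label, pct = interpret_prediction(np.array([0.9, 0.1]))
print("%s (%.2f%%)" % (label, pct))  # Wearing Mask (90.00%)
```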
```
import matplotlib.pyplot as plt
from matplotlib import cm, colors, rcParams
import numpy as np
import bayesmark.constants as cc
from bayesmark.path_util import abspath
from bayesmark.serialize import XRSerializer
from bayesmark.constants import ITER, METHOD, TEST_CASE, OBJECTIVE, VISIBLE_TO_OPT
# User settings, must specify location of the data to make plots here for this to run
DB_ROOT = abspath(".")
DBID = "bo_example_folder"
metric_for_scoring = VISIBLE_TO_OPT
# Matplotlib setup
# Note this will put type-3 font BS in the pdfs, if it matters
rcParams["mathtext.fontset"] = "stix"
rcParams["font.family"] = "STIXGeneral"
def build_color_dict(names):
    """Make a color dictionary to give each name a mpl color."""
    norm = colors.Normalize(vmin=0, vmax=1)
    m = cm.ScalarMappable(norm, cm.tab20)
    color_dict = m.to_rgba(np.linspace(0, 1, len(names)))
    color_dict = dict(zip(names, color_dict))
    return color_dict
# Load the data
agg_results_ds, meta = XRSerializer.load_derived(DB_ROOT, db=DBID, key=cc.PERF_RESULTS)
# Setup for plotting
method_list = agg_results_ds.coords[METHOD].values
method_to_rgba = build_color_dict(method_list.tolist())
# Make the plots for individual test functions
for func_name in agg_results_ds.coords[TEST_CASE].values:
    plt.figure(figsize=(5, 5), dpi=300)
    for method_name in method_list:
        curr_ds = agg_results_ds.sel({TEST_CASE: func_name, METHOD: method_name, OBJECTIVE: metric_for_scoring})
        plt.fill_between(
            curr_ds.coords[ITER].values,
            curr_ds[cc.LB_MED].values,
            curr_ds[cc.UB_MED].values,
            color=method_to_rgba[method_name],
            alpha=0.5,
        )
        plt.plot(
            curr_ds.coords[ITER].values,
            curr_ds[cc.PERF_MED].values,
            color=method_to_rgba[method_name],
            label=method_name,
            marker=".",
        )
    plt.xlabel("evaluation", fontsize=10)
    plt.ylabel("median score", fontsize=10)
    plt.title(func_name)
    plt.legend(fontsize=8, bbox_to_anchor=(1.05, 1), loc="upper left", borderaxespad=0.0)
    plt.grid()

    plt.figure(figsize=(5, 5), dpi=300)
    for method_name in method_list:
        curr_ds = agg_results_ds.sel({TEST_CASE: func_name, METHOD: method_name, OBJECTIVE: metric_for_scoring})
        plt.fill_between(
            curr_ds.coords[ITER].values,
            curr_ds[cc.LB_MEAN].values,
            curr_ds[cc.UB_MEAN].values,
            color=method_to_rgba[method_name],
            alpha=0.5,
        )
        plt.plot(
            curr_ds.coords[ITER].values,
            curr_ds[cc.PERF_MEAN].values,
            color=method_to_rgba[method_name],
            label=method_name,
            marker=".",
        )
    plt.xlabel("evaluation", fontsize=10)
    plt.ylabel("mean score", fontsize=10)
    plt.title(func_name)
    plt.legend(fontsize=8, bbox_to_anchor=(1.05, 1), loc="upper left", borderaxespad=0.0)
    plt.grid()
```
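The `build_color_dict` helper above simply spreads the method names evenly over a colormap and returns a name-to-RGBA mapping; a standalone check with made-up method names (assuming matplotlib is available):

```python
import numpy as np
from matplotlib import cm, colors

# Map each name to an RGBA color drawn evenly from the tab20 colormap
def build_color_dict(names):
    norm = colors.Normalize(vmin=0, vmax=1)
    m = cm.ScalarMappable(norm, cm.tab20)
    rgba = m.to_rgba(np.linspace(0, 1, len(names)))
    return dict(zip(names, rgba))

d = build_color_dict(["random-search", "tpe", "gp-bandit"])
print(len(d), len(next(iter(d.values()))))  # 3 4  (three methods, RGBA 4-tuples)
```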
```
import sys
# sys.path.append('GVGAI_GYM')
import gym
import gym_gvgai
import numpy as np
import random
from IPython.display import clear_output
from collections import deque
import progressbar
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import Model, Sequential
from tensorflow.keras.layers import Dense, Embedding, Reshape, Input, Flatten
from tensorflow.keras.optimizers import Adam
from datetime import datetime
from packaging import version
# Constants
IMG_H = 9
IMG_W = 13
STATE_SIZE = (IMG_H, IMG_W)
# Tensorboard extension
# %load_ext tensorboard
# logdir = "logs/scalars/zelda-04"
# tensorboard_callback = keras.callbacks.TensorBoard(log_dir=logdir)
# print("TensorFlow version: ", tf.__version__)
# assert version.parse(tf.__version__).release[0] >= 2, \
# "This notebook requires TensorFlow 2.0 or above."
class Agent:
    def __init__(self, enviroment):
        # Initialize attributes
        self._state_size = STATE_SIZE
        # self._state_size = enviroment.observation_space.n
        self._action_size = enviroment.action_space.n
        self.expirience_replay = deque(maxlen=2000)

        # Initialize discount and exploration rate
        self.gamma = 0.6
        self.epsilon = 1.0
        self.dec_epsilon_rate = 0.0001
        self.min_epsilon = 0.1

        # Build networks
        self.q_network = self._build_compile_model()
        self.target_network = self._build_compile_model()
        self.align_target_model()

    def store(self, state, action, reward, next_state, terminated):
        self.expirience_replay.append((state, action, reward, next_state, terminated))

    def _build_compile_model(self):
        model = Sequential()
        model.add(Flatten(input_shape=STATE_SIZE))
        model.add(Dense(30, activation='relu'))
        model.add(Dense(20, activation='relu'))
        model.add(Dense(self._action_size, activation='linear'))
        model.compile(loss='mse', optimizer=Adam(learning_rate=0.01))
        return model

    def align_target_model(self):
        self.target_network.set_weights(self.q_network.get_weights())

    def act(self, state, use_epsilon_strategy=True):
        if np.random.rand() <= self.epsilon and use_epsilon_strategy:
            self.epsilon = self.epsilon - self.dec_epsilon_rate if self.epsilon > self.min_epsilon else self.min_epsilon
            return enviroment.action_space.sample()
        if not use_epsilon_strategy and np.random.rand() <= self.min_epsilon:
            return enviroment.action_space.sample()
        tensor_state = tf.convert_to_tensor([state])
        q_values = self.q_network.predict(tensor_state)
        return np.argmax(q_values[0])

    def retrain(self, batch_size):
        minibatch = random.sample(self.expirience_replay, batch_size)
        for state, action, reward, next_state, terminated in minibatch:
            tensor_state = tf.convert_to_tensor([state])
            tensor_next_state = tf.convert_to_tensor([next_state])
            q_values = self.q_network.predict(tensor_state)
            if terminated:
                q_values[0][action] = reward
            else:
                t = self.target_network.predict(tensor_next_state)
                q_values[0][action] = reward + self.gamma * np.amax(t)
            self.q_network.fit(tensor_state, q_values, epochs=1, verbose=0)

    def save_model(self, model_name="models/zelda_03_q_network"):
        self.q_network.save(model_name)

    def load_model(self, model_path):
        self.q_network = keras.models.load_model(model_path)
        self.align_target_model()
# Utils
import matplotlib.pyplot as plt
from IPython import display
def show_state(env, step=0, name="", info=""):
    plt.figure(3)
    plt.clf()
    plt.imshow(env.render(mode='rgb_array'))
    plt.title("{} | Step: {} {}".format(name, step, info))
    plt.axis("off")
    display.clear_output(wait=True)
    display.display(plt.gcf())

def grayToArray(array):
    result = np.zeros((IMG_H, IMG_W))
    for i in range(int(array.shape[0] / 10)):
        for j in range(int(array.shape[1] / 10)):
            result[i][j] = int(array[10 * i + 5, 10 * j + 5])
    return result

def grayConversion(image):
    b = image[..., 0]
    g = image[..., 1]
    r = image[..., 2]
    return 0.21 * r + 0.72 * g + 0.07 * b

def reshape_state(state):
    return grayToArray(grayConversion(state))
# Train
batch_size = 500
num_of_episodes = 10
timesteps_per_episode = 1000
enviroment = gym.make("gvgai-zelda-lvl0-v0")
enviroment.reset()
agent = Agent(enviroment)
# agent.load_model("models/zelda_04_q_network")
agent.q_network.summary()
PLAYER_TILE_VALUE = 201
SECONDARY_PLAYER_TILE_VALUE = 38
KEY_TILE_VALUE = 151
DOOR_TILE_VALUE = 57
ACTIONS = ['ACTION_NIL', 'ACTION_USE', 'ACTION_LEFT',
'ACTION_RIGHT', 'ACTION_DOWN', 'ACTION_UP']
def find_key(state: list):
    for row in state:
        for column in row:
            if column == KEY_TILE_VALUE:
                return True
    return False

def find_position(state: list, previous_position):
    player_row_index = None
    player_col_index = None
    if previous_position is None or previous_position[0] is None:
        for row_index, row in enumerate(state):
            for col_index, value in enumerate(row):
                if value == PLAYER_TILE_VALUE or value == SECONDARY_PLAYER_TILE_VALUE:
                    player_row_index = row_index
                    player_col_index = col_index
                    break
            if not (player_row_index is None):
                break
        return player_row_index, player_col_index
    for row_offset in range(-2, 2, 1):
        for col_offset in range(-2, 2, 1):
            row_index = previous_position[0] + row_offset
            col_index = previous_position[1] + col_offset
            value = int(state[row_index][col_index])
            # print('row_index, col_index, value', row_index, col_index, value)
            if value == PLAYER_TILE_VALUE or value == SECONDARY_PLAYER_TILE_VALUE:
                player_row_index = row_index
                player_col_index = col_index
                break
        if not (player_row_index is None):
            break
    return player_row_index, player_col_index
def process_reward(state, next_state, action_id, raw_reward, is_over, info, position, next_position):
    action = ACTIONS[action_id]
    was_key_present = find_key(state)
    is_key_present = find_key(next_state)
    grabbed_the_key = was_key_present and not is_key_present
    is_winner = info["winner"] == "PLAYER_WINS"
    if is_over and is_winner:
        return 1000
    if is_over and not is_winner:
        return -1000
    if grabbed_the_key:
        return 500
    if raw_reward > 0:
        return raw_reward * 10
    if action in ['ACTION_NIL']:
        return -500
    if action in ['ACTION_USE']:
        return -100
    has_moved = next_position[0] != position[0] or next_position[1] != position[1]
    if not has_moved:
        return -20
    return -1
for e in range(0, num_of_episodes):
    # Reset the environment
    state = enviroment.reset()
    state = reshape_state(state)
    position = find_position(state, None)

    # Initialize variables
    reward = 0
    terminated = False

    bar = progressbar.ProgressBar(maxval=timesteps_per_episode / 10,
                                  widgets=[progressbar.Bar('=', '[', ']'), ' ', progressbar.Percentage()])
    bar.start()

    for timestep in range(timesteps_per_episode):
        # Run Action
        action = agent.act(state)

        # Take action
        next_state, reward, terminated, info = enviroment.step(action)
        next_state = reshape_state(next_state)
        next_position = find_position(next_state, position)
        reward = process_reward(state, next_state, action, reward, terminated, info, position, next_position)
        agent.store(state, action, reward, next_state, terminated)
        state = next_state
        position = next_position

        if terminated:
            agent.align_target_model()
            break

        if len(agent.expirience_replay) > batch_size:
            agent.retrain(batch_size)

        if timestep % 10 == 0:
            agent.align_target_model()
            bar.update(timestep / 10 + 1)

    bar.finish()
    if (e + 1) % 10 == 0:
        print("**********************************")
        print("Episode: {}".format(e + 1))
        print("**********************************")
    agent.save_model("models/zelda_04_q_network_{}".format(e))
# Play with agent
enviroment = gym.make("gvgai-zelda-lvl0-v0")
actions_list = ['ACTION_NIL', 'ACTION_USE', 'ACTION_LEFT',
'ACTION_RIGHT', 'ACTION_DOWN', 'ACTION_UP']
agent = Agent(enviroment)
model_path = "models/zelda_04_q_network_7"
timesteps = 2000
agent.load_model(model_path)
state = reshape_state(enviroment.reset())
for timestep in range(timesteps):
    action = agent.act(state, False)
    next_state, reward, terminated, info = enviroment.step(action)
    state = reshape_state(next_state)
    print(action, reward, terminated, info)
    show_state(enviroment, timestep, "Zelda", "Action: {} Player Status: {} Terminated: {}".format(actions_list[action], info['winner'], terminated))
    if terminated:
        print("terminated")
        break
# !ls
# !tar -czvf models.tar.gz models
# from google.colab import files
# files.download("models.tar.gz")
```
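The exploration schedule in `Agent.act` decays epsilon linearly from 1.0 and floors it at `min_epsilon`; a standalone sketch of just that schedule:

```python
# Linear epsilon decay with a floor, as used in Agent.act above
epsilon = 1.0
dec_epsilon_rate = 0.0001
min_epsilon = 0.1

def decay(eps):
    return eps - dec_epsilon_rate if eps > min_epsilon else min_epsilon

# After ~9000 exploratory steps the floor is reached and epsilon stays there
for _ in range(20000):
    epsilon = decay(epsilon)
print(epsilon)  # 0.1
```

With these constants, greedy (network-driven) actions still get replaced by random ones 10% of the time during training, which keeps some exploration going indefinitely.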
```
from sklearn.neighbors import NearestNeighbors
import numpy as np
from matplotlib import pyplot as plt
#import corner
import urllib
import os
import sys
#import GCRCatalogsF
from astropy.io import fits
#from demo_funcs_local import *
from sklearn.model_selection import train_test_split
import pandas as pd
from astropy.cosmology import Planck15 as P15
from astropy import units as u
import seaborn as sns
import pandas as pd
sns.set_context("talk")
sns.set(font_scale=2.5)
sns.set_style("white")
sns.set_style("ticks", {"xtick.major.size": 20, "ytick.major.size": 40})
sns.set_style("ticks", {"xtick.minor.size": 8, "ytick.minor.size": 8})
plt.rcParams.update({
"text.usetex": True,
"font.family": "sans-serif",
"font.sans-serif": ["Helvetica"]})
## for Palatino and other serif fonts use:
plt.rcParams.update({
"text.usetex": True,
"font.family": "serif",
"font.serif": ["Palatino"],
})
sns.set_style('white', {'axes.linewidth': 1.0})
plt.rcParams['xtick.major.size'] = 15
plt.rcParams['ytick.major.size'] = 15
plt.rcParams['xtick.minor.size'] = 10
plt.rcParams['ytick.minor.size'] = 10
plt.rcParams['xtick.minor.width'] = 2
plt.rcParams['ytick.minor.width'] = 2
plt.rcParams['xtick.major.width'] = 2
plt.rcParams['ytick.major.width'] = 2
plt.rcParams['xtick.bottom'] = True
plt.rcParams['xtick.top'] = True
plt.rcParams['ytick.left'] = True
plt.rcParams['ytick.right'] = True
plt.rcParams['xtick.minor.visible'] = True
plt.rcParams['ytick.minor.visible'] = True
plt.rcParams['xtick.direction'] = 'in'
plt.rcParams['ytick.direction'] = 'in'
fnIa = "./matchedGals_IaGhostlib.tar.gz"
fnII = "./matchedGals_IIGhostlib.tar.gz"
dc2_Ia = pd.read_csv(fnIa)
dc2_II = pd.read_csv(fnII)
dc2_Ia['GHOST_transientclass'] = 'SN Ia'
dc2_II['GHOST_transientclass'] = 'SN II'
dc2_full = pd.concat([dc2_Ia, dc2_II],ignore_index=True)
dc2_full['size_minor_true_x'].dropna()
#dc2_full['size_minor_true'] = dc2_full['size_minor_true_x']
#del dc2_full['size_minor_true_x']
#del dc2_full['size_minor_true_y']
dc2_full['position_angle_true'] = dc2_full['position_angle_true_x']
del dc2_full['position_angle_true_x']
del dc2_full['position_angle_true_y']
dc2_full.columns.values
SN_list = []
for SN in np.unique(dc2_full['GHOST_transientclass'].values):
    SN_list.append(dc2_full[dc2_full['GHOST_transientclass'] == SN])
#g_obs, r_obs
#Size_true, size_minor_true → a0_Sersic, b0_Sersic
#Sersic index → n0_Sersic
#Position_angle_true → a_rot Martine: figure out if this is the same
SNCount = {}
for dc2 in SN_list:
SN = dc2['GHOST_transientclass'].values[0]
SN = SN.replace("/", "")
DF = pd.DataFrame({'VARNAMES:':['GAL:']*len(dc2['stellar_mass'].values),
'GALID':dc2['galaxy_id'].values,
'galaxy_id':dc2['galaxy_id'].values,
'RA_GAL':dc2['ra'].values,
'DEC_GAL':dc2['dec'].values,
'ZTRUE':dc2['PZflowredshift'].values,
'PZflowredshift':dc2['PZflowredshift'].values,
'DC2redshift':dc2['DC2redshift'].values,
'g_obs':dc2['Mag_true_g_lsst'].values,
'r_obs':dc2['Mag_true_r_lsst'].values,
'i_obs':dc2['Mag_true_i_lsst'].values,
'z_obs':dc2['Mag_true_z_lsst'].values,
'a0_Sersic':dc2['size_true'].values,
'size_true':dc2['size_true'].values,
'b0_Sersic':dc2['size_minor_true'].values,
'size_minor_true':dc2['size_minor_true'].values,
'n0_Sersic':dc2['totalSersicIndex'].values,
'totalSersicIndex':dc2['totalSersicIndex'].values,
'a_rot':dc2['position_angle_true'].values,
'position_angle_true':dc2['position_angle_true'].values,
'TOTAL_ELLIPTICITY':dc2['totalEllipticity'].values,
'LOGMASS_TRUE':np.log10(dc2['stellar_mass'].values),
'LOGMASS':np.log10(dc2['stellar_mass'].values),
'LOGMASS_OBS':np.log10(dc2['stellar_mass'].values),
'stellar_mass':dc2['stellar_mass'].values,
'STAR_FORMATION_RATE':dc2['PZflowSFRtot'].values,
'PZflowSFRtot':dc2['PZflowSFRtot'].values,
'DC2SFRtot':dc2['DC2SFRtot'].values,
'Mag_true_g_sdss_z0':dc2['Mag_true_g_sdss_z0'].values,
'Mag_true_r_sdss_z0':dc2['Mag_true_r_sdss_z0'].values,
'Mag_true_i_sdss_z0':dc2['Mag_true_i_sdss_z0'].values,
'Mag_true_z_sdss_z0':dc2['Mag_true_z_sdss_z0'].values,
'GHOST_objID':dc2['GHOST_objID'].values,
'GHOST_ra':dc2['GHOST_ra'].values,
'GHOST_dec':dc2['GHOST_dec'].values,
'GHOST_transientclass':dc2['GHOST_transientclass'].values,
'sersic_disk':dc2['sersic_disk'].values,
'sersic_bulge':dc2['sersic_bulge'].values,
'nn_distance':dc2['nn_distance'].values,
'size_bulge_true':dc2['size_bulge_true'].values,
'size_minor_bulge_true':dc2['size_minor_bulge_true'].values,
'size_disk_true':dc2['size_disk_true'].values,
'size_minor_disk_true':dc2['size_minor_disk_true'].values,
'mag_true_g_lsst':dc2['mag_true_g_lsst'].values,
'mag_true_r_lsst':dc2['mag_true_r_lsst'].values,
'mag_true_i_lsst':dc2['mag_true_i_lsst'].values,
'mag_true_z_lsst':dc2['mag_true_z_lsst'].values,
'Mag_true_g_lsst':dc2['Mag_true_g_lsst'].values,
'Mag_true_r_lsst':dc2['Mag_true_r_lsst'].values,
'Mag_true_i_lsst':dc2['Mag_true_i_lsst'].values,
'Mag_true_z_lsst':dc2['Mag_true_z_lsst'].values})
SNCount[SN] = len(DF)
#combine with original to get same names out
DF_merged = pd.merge(DF, dc2_full,on='galaxy_id')
DF_merged.drop_duplicates(subset=['GALID'], inplace=True)
DF_merged.to_csv("%s_G.HOSTLIB"%SN.replace(" ", ""),index=False, sep=' ')
DF_merged['g_obs']
sns.set_context("talk")
plt.figure(figsize=(15,10))
i = 0
cols = sns.color_palette()
for SN in np.unique(dc2_full['GHOST_transientclass']):
dc2_temp = dc2_full[dc2_full['GHOST_transientclass'] == SN]
sns.kdeplot(data=dc2_temp, x="PZflowredshift",lw=3, label=SN, color=cols[i])
i += 1
#plt.yscale("log")
plt.legend()
#plt.savefig("7Class_HostlibHist_FracHist_NewMatching_10pct.png",bbox_inches='tight', dpi=300)
```
```
import pandas as pd
root = "C:\\Users\\user\\SadafPythonCode\\MLHackathon\\ML-for-Good-Hackathon\\Data\\"
df = pd.read_csv(root + 'CrisisLogger\\crisislogger.csv')
a = df.iloc[28,1]
a
import spacy
nlp = spacy.load('en_core_web_sm',disable=['ner','textcat'])
# function for rule 2
def rule2(text):
doc = nlp(text)
pat = []
# iterate over tokens
for token in doc:
phrase = ''
# if the word is a subject noun or an object noun
if (token.pos_ == 'NOUN')\
and (token.dep_ in ['dobj','pobj','nsubj','nsubjpass']):
# iterate over the children nodes
for subtoken in token.children:
# if word is an adjective or has a compound dependency
if (subtoken.pos_ == 'ADJ') or (subtoken.dep_ == 'compound'):
phrase += subtoken.text + ' '
if len(phrase)!=0:
phrase += token.text
if len(phrase)!=0:
pat.append(phrase)
return pat
```
### Noun Phrases
```
rule2(a) # noun phrases
```
### I/We phrases
```
def rule2_mod(text,index):
doc = nlp(text)
phrase = ''
for token in doc:
if token.i == index:
for subtoken in token.children:
if (subtoken.pos_ == 'ADJ'):
phrase += ' '+subtoken.text
break
return phrase
# rule 1 modified function
def rule1_mod(text):
doc = nlp(text)
sent = []
for token in doc:
# root word
if (token.pos_=='VERB'):
phrase =''
# only extract noun or pronoun subjects
for sub_tok in token.lefts:
if (sub_tok.dep_ in ['nsubj','nsubjpass']) and (sub_tok.pos_ in ['NOUN','PROPN','PRON']):
# look for subject modifier
adj = rule2_mod(text,sub_tok.i)
phrase += adj + ' ' + sub_tok.text
# append the root form (lemma) of the verb
phrase += ' '+token.lemma_
# check for noun or pronoun direct objects
for sub_tok in token.rights:
if (sub_tok.dep_ in ['dobj']) and (sub_tok.pos_ in ['NOUN','PROPN']):
# look for object modifier
adj = rule2_mod(text,sub_tok.i)
phrase += adj+' '+sub_tok.text
sent.append(phrase)
return sent
rule1_mod(a) # I/we phrases
```
### Phrases with noun followed by prepositions
```
# rule 3 function
def rule3(text):
doc = nlp(text)
sent = []
for token in doc:
# look for prepositions
if token.pos_=='ADP':
phrase = ''
# if its head word is a noun
if token.head.pos_=='NOUN':
# append noun and preposition to phrase
phrase += token.head.text
phrase += ' '+token.text
# check the nodes to the right of the preposition
for right_tok in token.rights:
# append if it is a noun or proper noun
if (right_tok.pos_ in ['NOUN','PROPN']):
phrase += ' '+right_tok.text
if len(phrase)>2:
sent.append(phrase)
return sent
rule3(a) # prepositions
```
# Elementary Data Types
*Reminder:* When declaring a variable, you must state its data type, or the type must be clearly inferable.
The following two statements both create a variable of type `int`:
var a int
b := 42
So far we have used only one data type: `int`. This type represents whole numbers, i.e. positive or negative numbers without decimal places. Go provides a whole range of data types, for different kinds of data and also for different value ranges.
## Integer Data Types
```
var i1 int // signed integer
var i2 int32 // 32-bit signed integer
var i3 int64 // 64-bit signed integer
var i4 uint // unsigned integer
var i5 uint32 // 32-bit unsigned integer
var i6 uint64 // 64-bit unsigned integer
var i7 byte // special case: 8 bits, unsigned, usually used for characters
i8 := 42 // automatic type inference; this becomes `int`
```
### Maximum Value Ranges of the `int` Data Types
Most data types have a fixed, limited value range, i.e. there is a largest and a smallest number that can be stored in them.
This limitation exists because a fixed number of digits is used. If those digits no longer suffice, information is lost.
```
^uint(0) // Largest value for a uint
int(^uint(0) >> 1) // Largest value for an int
-int(^uint(0) >> 1)-1 // Smallest value for an int
^uint32(0) >> 1 // Largest value for an int32
^uint64(0) >> 1 // Largest value for an int64
```
### Overflows
If you exceed the maximum value of a data type, an *overflow* occurs:
```
^uint(0)+1
int32(^uint32(0) >> 1)+1
```
## Floating-Point Data Types
Besides the integer data types, there are also two *floating-point* data types, intended for representing decimal numbers with a variable number of decimal places:
```
var f1 float32 = 42
var f2 float64 = 23.5
```
Floating-point numbers are needed, for example, to perform divisions, square-root calculations, etc.
Internally they are represented in the form $m \cdot b^e$; for example, $234.567 = 2.34567 \cdot 10^2$.
One problem is that only a limited number of bits is available for representing the *mantissa* ($m$) and the *exponent* ($e$).
As a result, the precision of representing and computing with floating-point numbers is limited. The following examples demonstrate this:
```
a, b := 5.67, 8.97
a - b
var x float64 = 1.01
var i float64 = 0.01
for x < 1.4 {
println(x)
x += i
}
```
## Boolean Values
Another important data type is the boolean values `true` and `false`.
As the name suggests, they represent whether an evaluation is *true* or *false*.
For example, the comparison `42 == 6 * 7` is true, whereas the statement `42 < 15` is false.
```
var b1 bool
b1
```
Boolean values are used, for example, in conditional branches and loops to evaluate the conditions.
Often you write functions that check more complex conditions and return a value of type `bool`.
As a small example, the following function checks whether its parameter is an odd number:
```
func is_odd(n int) bool {
return n % 2 != 0
}
is_odd(3)
```
# CPO Datascience
This program is intended for use by the Portland State University Campus Planning Office (CPO).
```
#Import required packages
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
import numpy as np
import datetime
import seaborn as sns
import matplotlib.pyplot as plt
def format_date(df_date):
"""
Splits Meeting Times and Dates into datetime objects where applicable using regex.
"""
df_date['Days'] = df_date['Meeting_Times'].str.extract('([^\s]+)', expand=True)
df_date['Start_Date'] = df_date['Meeting_Dates'].str.extract('([^\s]+)', expand=True)
df_date['Year'] = df_date['Term'].astype(str).str.slice(0,4)
df_date['Quarter'] = df_date['Term'].astype(str).str.slice(5,6)
df_date['Building'] = df_date['ROOM'].str.extract('([^\s]+)', expand=True)
#df_date['Start_Month'] = pd.to_datetime(df_date['Year'] + df_date['Start_Date'], format='%Y%b')
df_date['End_Date'] = df_date['Meeting_Dates'].str.extract('(?<=-)(.*)(?= )', expand=True)
#df_date['End_Month'] = pd.to_datetime(df_date['End_Date'], format='%b')
df_date['Start_Time'] = df_date['Meeting_Times'].str.extract('(?<= )(.*)(?=-)', expand=True)
df_date['Start_Time'] = pd.to_datetime(df_date['Start_Time'], format='%H%M')
df_date['End_Time'] = df_date['Meeting_Times'].str.extract('((?<=-).*$)', expand=True)
df_date['End_Time'] = pd.to_datetime(df_date['End_Time'], format='%H%M')
df_date['Duration_Hr'] = ((df_date['End_Time'] - df_date['Start_Time']).dt.seconds)/3600
#df_date = df_date.set_index(pd.DatetimeIndex(df_date['Start_Month']))
return df_date
def format_xlist(df_xl):
"""
revises % capacity calculations by using Max Enrollment instead of room capacity.
"""
df_xl['Cap_Diff'] = np.where(df_xl['Xlst'] != '',
df_xl['Max_Enrl'].astype(int) - df_xl['Actual_Enrl'].astype(int),
df_xl['Room_Capacity'].astype(int) - df_xl['Actual_Enrl'].astype(int))
df_xl = df_xl.loc[df_xl['Room_Capacity'].astype(int) < 999]
return df_xl
def grouped_plot_graph(df_in):
fig, ax = plt.subplots()
grouped = df_in.groupby(['Year', 'Quarter'])
for key, group in grouped:
group.plot(ax=ax, kind='scatter', x='Start_Month', y='Cap_Diff', label=key)
plt.show()
def plot_graph(x_vals, y_vals):
"""
Plots the dataframe information.
"""
x = range(len(x_vals))
plt.figure(figsize=(20,10))
sns.stripplot(x_vals, y_vals)
plt.xticks(rotation=90)
plt.scatter(x, y_vals, alpha=0.5, )
plt.show()
def OLS_operations(y, X):
#mod = smf.OLS('Cap_Diff ~ C(Dept)', data=df_data)
mod = sm.OLS(np.asarray(y), X)
res = mod.fit()
print(res.summary())
fig = plt.figure(figsize=(20,10))
fig = sm.graphics.plot_partregress_grid(res, fig=fig)
plt.show()
#pd.set_option('display.max_rows', None)
pd.set_option('display.max_columns', None)
df = pd.read_csv('data/PSU_master_classroom_91-17.csv')
df = df.fillna('')
df = format_date(df)
# Avoid classes that only occur on a single day
df = df.loc[df['Start_Date'] != df['End_Date']]
df = df.loc[df['Online Instruct Method'] != 'Fully Online']
#df = df.loc[df['Term'] == 201604]
# Calculate number of days per week and treat Sunday condition
df['Days_Per_Week'] = df['Days'].str.len()
df['Room_Capacity'] = df['Room_Capacity'].apply(lambda x: x if (x != 'No Data Available') else 0)
df_cl = format_xlist(df)
df_cl = df_cl.loc[df_cl['Term'] < 201701]
dep_var = df_cl['Cap_Diff'] # Our dependent variable
category = 'Dept'
test_var = df_cl['{0}'.format(category)]
plot_graph(test_var, dep_var)
def main():
pd.set_option('display.max_columns', None)
df = pd.read_csv('data/PSU_master_classroom_91-17.csv')
df = df.fillna('')
df = format_date(df)
# Avoid classes that only occur on a single day
df = df.loc[df['Start_Date'] != df['End_Date']]
df = df.loc[df['Online Instruct Method'] != 'Fully Online']
df['Days_Per_Week'] = df['Days'].str.len()
df['Room_Capacity'] = df['Room_Capacity'].apply(lambda x: x if (x != 'No Data Available') else 0)
df_cl = format_xlist(df)
dep_var = df_cl['Cap_Diff'] # Our dependent variable
category = 'Building'
test_var = df_cl['{0}'.format(category)]
plot_graph(test_var, dep_var)
main()
def main():
pd.set_option('display.max_columns', None)
df = pd.read_csv('data/PSU_master_classroom_91-17.csv')
df = df.fillna('')
df = format_date(df)
# Avoid classes that only occur on a single day
df = df.loc[df['Start_Date'] != df['End_Date']]
df = df.loc[df['Online Instruct Method'] != 'Fully Online']
df['Days_Per_Week'] = df['Days'].str.len()
df['Room_Capacity'] = df['Room_Capacity'].apply(lambda x: x if (x != 'No Data Available') else 0)
df_cl = format_xlist(df)
dep_var = df_cl['Cap_Diff'] # Our dependent variable
category = 'Class'
test_var = df_cl['{0}'.format(category)]
plot_graph(test_var, dep_var)
main()
```
# Working with Jupyter
A tool that allows code creation, result visualization, and documentation in the same document (.ipynb)
**Command mode:** `esc` to activate; the cursor is inactive
**Edit mode:** `enter` to activate; insertion mode
### Keyboard shortcuts (VERY useful)
To use the shortcuts described below, the cell must be selected but must not be in edit mode.
* To enter command mode: `esc`
* Create a new cell below: `b` (below)
* Create a new cell above: `a` (above)
* Cut a cell: `x`
* Copy a cell: `c`
* Paste a cell: `v`
* Run a cell and stay on it: `ctrl + enter`
* Run a cell and move to the next one: `shift + enter`
* **To see all the shortcuts, press `h`**
### Cell types
**Code:** For Python code
**Markdown:** For documentation
There are also **Raw NBConvert** and **Heading**
# Pandas (http://pandas.pydata.org/)
* Python library for data analysis
* Provides high-performance, easy-to-use tools for data analysis
### How to install
* Anaconda (http://pandas.pydata.org/pandas-docs/stable/install.html#installing-pandas-with-anaconda)
* Download Anaconda: https://www.continuum.io/downloads
* Install Anaconda: https://docs.continuum.io/anaconda/install
* Available for `osx-64`, `linux-64`, `linux-32`, `win-64`, `win-32` and `Python 2.7`, `Python 3.4`, and `Python 3.5`
* `conda install pandas`
* Pip
* `pip install pandas`
# Matplotlib (http://matplotlib.org/)
* Python library for plotting 2D graphs
### How to install
* Anaconda (http://pandas.pydata.org/pandas-docs/stable/install.html#installing-pandas-with-anaconda)
* Download Anaconda: https://www.continuum.io/downloads
* Install Anaconda: https://docs.continuum.io/anaconda/install
* Available for `osx-64`, `linux-64`, `linux-32`, `win-64`, `win-32` and `Python 2.7`, `Python 3.4`, and `Python 3.5`
* `conda install matplotlib`
* Pip
* `pip install matplotlib`
```
import pandas as pd
import matplotlib
%matplotlib inline
```
### Loading a csv file into a Pandas DataFrame
* `pd.read_csv(file_name)` (the older `pd.DataFrame.from_csv` is deprecated in recent pandas versions)
If, when using this command, you run into a UnicodeDecodeError, add the parameter `encoding='utf-8'`
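As a minimal, self-contained sketch of this call (it reads from an in-memory buffer rather than a real file, so the column names here are purely illustrative):

```python
import pandas as pd
from io import StringIO

# In the notebook you would pass a file name, e.g. pd.read_csv('cast.csv', encoding='utf-8');
# here an in-memory buffer stands in for the file so the example is self-contained.
csv_text = "title,year,name\nInception,2010,Leonardo DiCaprio\nHamlet,1948,Laurence Olivier\n"
cast = pd.read_csv(StringIO(csv_text), encoding='utf-8')
print(cast.shape)  # (2, 3)
```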
## cast.csv
## release_dates.csv
## titles
**`df.head(n)`:**
* View the first *n* rows.
* Default: *n = 5*.
**`df.tail(n)`:**
* View the last *n* rows.
* Default: *n = 5*.
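For example, with a small illustrative DataFrame (the real notebook uses the csv files loaded above):

```python
import pandas as pd

# Toy stand-in for the titles DataFrame
titles = pd.DataFrame({'title': list('ABCDEFG'),
                       'year': [2000, 2001, 2002, 2003, 2004, 2005, 2006]})

print(titles.head(3))  # first 3 rows
print(titles.tail())   # last 5 rows (default n = 5)
```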
### How many records are there in the dataset?
**`len(df)`:**
* Length of the df
### What are the possible values of the `type` column?
**`df[col]`:**
* View one column of the df
or
**`df.col`:**
* Works if the column name has no spaces or special characters and is not a variable name
**Note:** When a column is selected and manipulated outside of a DataFrame, it is treated as a **Series**.
**`df[col].unique()`:**
* Show the possible values of a column
### How many actors and actresses are there in the dataset?
**`df[col].value_counts()`:**
* Count of how many records there are for each possible value of the column col (only if col is categorical)
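A small sketch of both calls on toy data (a stand-in for the cast DataFrame, not the real file):

```python
import pandas as pd

# Toy stand-in for the cast DataFrame
cast = pd.DataFrame({'type': ['actor', 'actress', 'actor', 'actor', 'actress']})

print(cast['type'].unique())               # the distinct values: ['actor' 'actress']
counts = cast['type'].value_counts()       # records per value
print(counts['actor'], counts['actress'])  # 3 2
```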
### Column operations
#### Arithmetic operations
#### Comparisons
#### Filtering
* By a specific value of a column
* By columns
* By null or non-null values
* By a boolean vector
* Filling null values
By DataFrame
By column
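A compact sketch of filtering and null handling on toy data (the values are illustrative, not from the movie files):

```python
import pandas as pd

df = pd.DataFrame({'title': ['a', 'b', 'c', 'd'],
                   'year': [1906.0, 1972.0, None, 1985.0]})

recent = df[df['year'] > 1950]        # filter by a boolean vector
missing = df[df['year'].isnull()]     # filter by null values
df['year'] = df['year'].fillna(0)     # fill nulls in a single column
print(len(recent), len(missing), df['year'].min())  # 2 1 0.0
```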
### How many actors acted in each year?
### What was the difference between the number of actors and actresses who acted in each decade?
### Dates
### What % of the movies were released on a Friday?
### Merge
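A hedged sketch of an inner join with toy DataFrames (the titles and names are illustrative):

```python
import pandas as pd

titles = pd.DataFrame({'title': ['Sleuth', 'Hamlet'], 'year': [1972, 1948]})
cast = pd.DataFrame({'title': ['Sleuth', 'Sleuth', 'Macbeth'],
                     'name': ['Michael Caine', 'Laurence Olivier', 'Jon Finch']})

# Inner join on the shared 'title' column; rows without a match are dropped
merged = pd.merge(cast, titles, on='title')
print(len(merged))  # 2 -- 'Macbeth' has no match in titles
```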
### What are the name and year of the oldest movie?
### How many movies are from 1960?
### How many movies are there from each year of the 1970s?
### How many movies have been released from the year you were born until today?
### What are the names of the movies from 1906?
### What are the 15 most common movie names?
### How many movies did Judi Dench act in?
### List the movies in which Judi Dench acted as actor number 1, ordered by year.
### List the actors in the 1972 version of Sleuth in order of rank n.
### Which actors acted the most in 1985?
# SciKit Learn (http://scikit-learn.org)
* Python library for data mining and data analysis
### How to install
* Anaconda (http://pandas.pydata.org/pandas-docs/stable/install.html#installing-pandas-with-anaconda)
* Download Anaconda: https://www.continuum.io/downloads
* Install Anaconda: https://docs.continuum.io/anaconda/install
* Available for `osx-64`, `linux-64`, `linux-32`, `win-64`, `win-32` and `Python 2.7`, `Python 3.4`, and `Python 3.5`
* `conda install scikit-learn`
* Pip
* `pip install -U scikit-learn`
```
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
import pickle
import time
time1=time.strftime('%Y-%m-%d_%H-%M-%S')
```
### iris.csv
### Train a Decision Tree model
### Save the model
```
```
### Load the model
### Prediction for test cases
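A minimal end-to-end sketch of the four steps above (train, save, load, predict). It uses scikit-learn's built-in iris data and an in-memory buffer instead of the iris.csv file and a pickle file on disk, so those names are stand-ins:

```python
import io
import pickle

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train
clf = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# Save / load (a file opened with open(..., 'wb') / open(..., 'rb') works the same way)
buf = io.BytesIO()
pickle.dump(clf, buf)
buf.seek(0)
loaded = pickle.load(buf)

# Predict on the test cases
print(confusion_matrix(y_test, loaded.predict(X_test)))
```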
# Uniview module for LIGO event GW170817
*Aaron Geller, 2018*
### Imports and function definitions
```
#This directory contains all the data needed for the module. It should be in the same directory as the notebook
dataFolder = "data"
import sys, os, shutil, errno, string, urllib
sys.path.append(( os.path.abspath( os.path.join(os.path.realpath("__file__"), os.pardir, os.pardir) )))
import uvmodlib.v1 as uvmod
```
### USES Conf Template
```
Template = """mesh
{
data center ./modules/$folderName/center.raw #I get errors if I don't pass something to geometry??
data GRB ./modules/$folderName/GRB.raw
data grid ./modules/$folderName/grid512.obj
data quad ./modules/$folderName/quad.3ds
cullRadius $cr
glslVersion 330
propertyCollection
{
__objectName__
{
vec2f pfit 1.38504943e-05 3.73532968e-01
vec1f m1 1.4
vec1f m2 1.4
vec1f NSrad 10 | public | desc "NS radius " | widget slider | range 0 100
vec1f kilonovaRad 20000 | public | desc "kilonova max radius " | widget slider | range 0 1e5
vec1f kilonovaMaxT 0.3 | public | desc "time at max kilonova radius " | widget slider | range 0 1
vec1f GRBrad 3 | public | desc "GRB point radius " | widget slider | range 0 10
vec1f GRBspeed 1000 | public | desc "GRB speed " | widget slider | range 0 1000
vec1f GRBMaxT 1 | public | desc "GRB fade time" | widget slider | range 0 10
vec1f coronaFac 3 | public | desc "corona radius multiplier" | widget slider | range 0 10
#for GW membrane
vec4f membraneColor .3 .3 .6 0.2 | public | desc "GW mesh color"
#This changes both the display size and the domain in which the Ricci Scalar is evaluated (e.g. if the value is 100, the curvature is evaluated in the domain where -100 < x < 100, and similarly for y)
vec1f gridScale 5000 | public| desc "GW grid domain" | widget slider | range 0 1e5
# This factor controls the contrast between the peaks and valleys of the membrane. Larger values increase contrast.
vec1f shadingFactor 5000 | public | desc "contrast for GW membrane" | widget slider | range 0 1e5
#This is an overall amplitude factor
vec1f amplitudeFactor 430000 | public | desc "GW amplitude factor" | widget slider | range 0 1e6
#The Ricci Scalar Curvature is divergent at the origin. In order to draw attention to the propagation of gravitational waves from the binary system, we have added an artificial function of the form exp[-r0^2/r^2] which dramatically attenuates results close to the origin. This parameter is the value of r0 in our attenuation function.
vec1f centerAttenuationDistance 35 | public | desc "GW center attenuation distance" | widget slider | range 0 1000
vec1f GWAmpClamp 300 | public | desc "GW max amplitude " | widget slider | range 1 1000
vec1f eventTime -0.1 | public | desc "event time " #| widget slider | range -30 30
vec1f transitionLength 30 | public | desc "transition length in seconds"
bool jump true | public | desc "jump to time without transition"
}
}
############# to hold the time information
renderTexture
{
name stateTexture
width 1
height 1
numTextures 1
isPingPong true
isPersistent true
isFramePersistent true
internalTextureFormat GL_RGB32F
magnify GL_NEAREST
minify GL_NEAREST
}
############# set Transition State
pass
{
useDataObject quad
renderTarget
{
name stateTexture
enableColorClear false
}
shader
{
type defaultMeshShader
{
vertexShader ./modules/$folderName/pass0.vs
fragmentShader ./modules/$folderName/state.fs
textureFBO stateTexture stateTexture
stateManagerVar __objectName__.transitionLength transitionLength
stateManagerVar __objectName__.jump jump
stateManagerVar __objectName__.eventTime eventTime
parameter2f timeRange -30 30
}
}
}
############# gravitation waves
pass
{
useDataObject grid
shader
{
type defaultMeshShader
{
vertexShader ./modules/$folderName/binaryGW.vs
fragmentShader ./modules/$folderName/binaryGW.fs
textureFBO stateTexture stateTexture
#stateManagerVar __objectName__.eventTime eventTime
stateManagerVar __objectName__.pfit pfit
stateManagerVar __objectName__.membraneColor fillColor
stateManagerVar __objectName__.gridScale gridScale
stateManagerVar __objectName__.shadingFactor shadingFactor
stateManagerVar __objectName__.amplitudeFactor A
stateManagerVar __objectName__.centerAttenuationDistance killFunctionDecay
stateManagerVar __objectName__.GWAmpClamp GWAmpClamp
glState
{
UV_BLEND_ENABLE true
UV_DEPTH_ENABLE true
UV_CULL_FACE_ENABLE false
}
}
}
}
############# NS 1 "corona"
pass
{
useDataObject center
shader
{
type defaultMeshShader
{
geometryShader ./modules/$folderName/corona.gs
vertexShader ./modules/$folderName/NS.vs
fragmentShader ./modules/$folderName/corona.fs
textureFBO stateTexture stateTexture
parameter1f starNum 1.
#stateManagerVar __objectName__.eventTime eventTime
stateManagerVar __objectName__.pfit pfit
stateManagerVar __objectName__.NSrad NSrad
stateManagerVar __objectName__.m1 m1
stateManagerVar __objectName__.m2 m2
stateManagerVar __objectName__.coronaFac coronaFac
glState
{
UV_CULL_FACE_ENABLE false
UV_BLEND_ENABLE true
UV_DEPTH_ENABLE false
UV_WRITE_MASK_DEPTH true
UV_BLEND_FUNC GL_SRC_ALPHA GL_ONE_MINUS_SRC_ALPHA
}
}
}
}
############# NS 2 "corona"
pass
{
useDataObject center
shader
{
type defaultMeshShader
{
geometryShader ./modules/$folderName/corona.gs
vertexShader ./modules/$folderName/NS.vs
fragmentShader ./modules/$folderName/corona.fs
textureFBO stateTexture stateTexture
parameter1f starNum 2.
#stateManagerVar __objectName__.eventTime eventTime
stateManagerVar __objectName__.pfit pfit
stateManagerVar __objectName__.NSrad NSrad
stateManagerVar __objectName__.m1 m1
stateManagerVar __objectName__.m2 m2
stateManagerVar __objectName__.coronaFac coronaFac
glState
{
UV_CULL_FACE_ENABLE false
UV_BLEND_ENABLE true
UV_DEPTH_ENABLE false
UV_WRITE_MASK_DEPTH true
UV_BLEND_FUNC GL_SRC_ALPHA GL_ONE_MINUS_SRC_ALPHA
}
}
}
}
############# NS 1
pass
{
useDataObject center
shader
{
type defaultMeshShader
{
geometryShader ./modules/$folderName/NS.gs
vertexShader ./modules/$folderName/NS.vs
fragmentShader ./modules/$folderName/NS.fs
textureFBO stateTexture stateTexture
parameter1f starNum 1.
#stateManagerVar __objectName__.eventTime eventTime
stateManagerVar __objectName__.pfit pfit
stateManagerVar __objectName__.NSrad NSrad
stateManagerVar __objectName__.m1 m1
stateManagerVar __objectName__.m2 m2
glState
{
UV_CULL_FACE_ENABLE false
UV_BLEND_ENABLE true
UV_DEPTH_ENABLE false
UV_WRITE_MASK_DEPTH true
UV_BLEND_FUNC GL_SRC_ALPHA GL_ONE_MINUS_SRC_ALPHA
}
}
}
}
############# NS 2
pass
{
useDataObject center
shader
{
type defaultMeshShader
{
geometryShader ./modules/$folderName/NS.gs
vertexShader ./modules/$folderName/NS.vs
fragmentShader ./modules/$folderName/NS.fs
textureFBO stateTexture stateTexture
parameter1f starNum 2.
#stateManagerVar __objectName__.eventTime eventTime
stateManagerVar __objectName__.pfit pfit
stateManagerVar __objectName__.NSrad NSrad
stateManagerVar __objectName__.m1 m1
stateManagerVar __objectName__.m2 m2
glState
{
UV_CULL_FACE_ENABLE false
UV_BLEND_ENABLE true
UV_DEPTH_ENABLE false
UV_WRITE_MASK_DEPTH true
UV_BLEND_FUNC GL_SRC_ALPHA GL_ONE_MINUS_SRC_ALPHA
}
}
}
}
############# GRB
pass
{
useDataObject GRB
shader
{
type defaultMeshShader
{
geometryShader ./modules/$folderName/GRB.gs
vertexShader ./modules/$folderName/kilonova.vs
fragmentShader ./modules/$folderName/GRB.fs
textureFBO stateTexture stateTexture
#stateManagerVar __objectName__.eventTime eventTime
stateManagerVar __objectName__.GRBrad GRBrad
stateManagerVar __objectName__.GRBspeed GRBspeed
stateManagerVar __objectName__.GRBMaxT GRBMaxT
glState
{
UV_CULL_FACE_ENABLE false
UV_BLEND_ENABLE true
UV_DEPTH_ENABLE false
UV_WRITE_MASK_DEPTH true
UV_BLEND_FUNC GL_SRC_ALPHA GL_ONE
}
}
}
}
############# kilonova
pass
{
useDataObject center
shader
{
type defaultMeshShader
{
geometryShader ./modules/$folderName/kilonova.gs
vertexShader ./modules/$folderName/kilonova.vs
fragmentShader ./modules/$folderName/kilonova.fs
textureFBO stateTexture stateTexture
texture cmap ./modules/$folderName/cmap.png
{
wrapModeS GL_CLAMP_TO_EDGE
wrapModeR GL_CLAMP_TO_EDGE
colorspace linear
}
#stateManagerVar __objectName__.eventTime eventTime
stateManagerVar __objectName__.kilonovaRad kilonovaRad
stateManagerVar __objectName__.kilonovaMaxT kilonovaMaxT
glState
{
UV_CULL_FACE_ENABLE false
UV_BLEND_ENABLE true
UV_DEPTH_ENABLE false
UV_WRITE_MASK_DEPTH true
UV_BLEND_FUNC GL_SRC_ALPHA GL_ONE_MINUS_SRC_ALPHA
}
}
}
}
}"""
```
### Kilonova class
```
class Kilonova():
def __init__(self, object):
self.object = object
uvmod.Utility.ensurerelativepathexsists("kilonova.gs",dataFolder)
uvmod.Utility.ensurerelativepathexsists("kilonova.vs",dataFolder)
uvmod.Utility.ensurerelativepathexsists("kilonova.fs",dataFolder)
uvmod.Utility.ensurerelativepathexsists("NS.gs",dataFolder)
uvmod.Utility.ensurerelativepathexsists("NS.vs",dataFolder)
uvmod.Utility.ensurerelativepathexsists("NS.fs",dataFolder)
uvmod.Utility.ensurerelativepathexsists("corona.gs",dataFolder)
uvmod.Utility.ensurerelativepathexsists("corona.fs",dataFolder)
uvmod.Utility.ensurerelativepathexsists("binaryGW.vs",dataFolder)
uvmod.Utility.ensurerelativepathexsists("binaryGW.fs",dataFolder)
uvmod.Utility.ensurerelativepathexsists("GRB.gs",dataFolder)
uvmod.Utility.ensurerelativepathexsists("GRB.fs",dataFolder)
self.cr = 1000
self.Scale = 1
def generatemod(self):
self.object.setgeometry(self.object.name+"Mesh.usesconf")
return self.object.generatemod()
def generatefiles(self, absOutDir, relOutDir):
fileName = self.object.name+"Mesh.usesconf"
s = string.Template(Template)
f = open(absOutDir+"\\"+fileName, 'w')
if f:
f.write(s.substitute(folderName = relOutDir,
cr = self.cr,
Scale = self.Scale
))
f.close()
uvmod.Utility.copyfoldercontents(os.getcwd()+"\\"+dataFolder, absOutDir)
```
### Object Instantiation
```
model = Kilonova(uvmod.OrbitalObject())
scene = uvmod.Scene()
parentScene = uvmod.Scene()
modinfo = uvmod.ModuleInformation()
generator = uvmod.Generator()
```
### Specify Settings and generate the module
```
scene.setname("Kilonova")
scene.setparent("MilkyWay")
scene.setunit(1000.0)
scene.setentrydist(10000.)
scene.setstaticposition(-35025580.45131495, -11010152.02509566, -15874043.79585574)
model.object.setcameraradius(1.)
model.object.setcoord(scene.name)
model.object.setname("Kilonova")
model.object.setguiname("/KavliLecture/Larson/Kilonova")
model.object.settargetradius(20)
model.object.showatstartup(False)
model.cr = 10000
modinfo.setname("Kilonova")
modinfo.setauthor("Aaron Geller<sup>1</sup>, Jeffrey SubbaRao, and Shane Larson<sup>2</sup><br />(1)Adler Planetarium,<br />(2)Northwestern University")
modinfo.cleardependencies()
modinfo.setdesc("Uniview module for LIGO event GW170817")
#modinfo.setthumbnail("data/R0010133.JPG")
modinfo.setversion("1.0")
generator.generate("Kilonova",[scene],[model],modinfo)
uvmod.Utility.senduvcommand(model.object.name+".reload")
```
## Helper Functions for modifing code
*Reload Module and Shaders in Uniview*
```
uvmod.Utility.senduvcommand(model.object.name+".reload; system.reloadallshaders")
```
*Copy modified Shader files and reload*
```
from config import Settings
uvmod.Utility.copyfoldercontents(os.getcwd()+"\\"+dataFolder, Settings.uvcustommodulelocation+'\\'+model.object.name)
uvmod.Utility.senduvcommand(model.object.name+".reload")
```
### Create colormap texture
```
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
gradient = np.linspace(0, 1, 256)
gradient = np.vstack((gradient, gradient))
def plot_cmap(colormap):
fig=plt.imshow(gradient, aspect=1, cmap=colormap)
plt.axis('off')
fig.axes.get_xaxis().set_visible(False)
fig.axes.get_yaxis().set_visible(False)
plt.savefig("data/cmap.png", bbox_inches='tight',pad_inches=0)
plot_cmap('hot_r')
plt.show()
```
*Testing some fit numbers*
```
pfit = [1.38504943e-05, 3.73532968e-01]
t = -0.1
per = (-1.*pfit[0]*t)**pfit[1]; #seconds
print(per, 6./(per*86400))
```
## Kilonova position
*From the [wikipedia page](https://en.wikipedia.org/wiki/GW170817)*
```
from astropy.coordinates import SkyCoord
from astropy import units, constants
RA = "13h 09m 48.08s" #right ascension
Dec= "−23d 22m 53.3s" #declination
dist = (40 *units.Mpc).to(units.pc) #distance
coord = SkyCoord(RA, Dec, dist)
print(coord.cartesian)
```
*Check the semi-major axis at -0.1s*
```
import numpy as np
a = 1.38504943e-05
b = 3.73532968e-01
t = -0.1
p = (-a*t)**b * units.s
sma = (p**2.*constants.G*(2.*1.4*units.solMass)/(4.*np.pi**2.))**(1./3.)
print(p, sma.to(units.km))
```
# Experiment Analysis
In this notebook we will evaluate the results from the experiments that were executed. In each experiment, one parameter was changed while all others were kept constant, so as to isolate the effect of a single variable.
**The goals of this analysis are:**
1. Determine the relationship between the number of parameters in the neural network and the number of timesteps in the dataset
2. Determine what effect increasing the number of patterns has on this relationship
3. Determine what effect sparsity has on the capacity of the neural networks
4. Investigate which activation function leads to the highest retention of information
5. Determine what type of network is able to retain the most information
To determine whether a relationship exists between the variable being investigated and the number of required parameters in each respective neural network, the Pearson correlation coefficient is used. The domain of this metric lies between -1 and +1, or in mathematical notation $P \in [-1, 1]$. If there exists a strong positive relationship between variables, the Pearson coefficient will approach +1, and for a strong negative relationship it will approach -1.
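As an illustration with toy numbers (not the experiment data), the coefficient can be computed with `scipy.stats.pearsonr`:

```python
import numpy as np
from scipy import stats

# Toy data with a strong positive, near-linear relationship
timesteps = np.array([10, 20, 30, 40, 50])
num_params = np.array([105, 198, 310, 395, 510])

r, p_value = stats.pearsonr(timesteps, num_params)
print(round(r, 3))  # close to +1 for a near-linear upward trend
```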
```
import pandas as pd
import numpy as np
import scipy
import sklearn
import pandas as pd
from sqlalchemy import Column, Integer, String
from sqlalchemy import create_engine, Column
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
import seaborn as sns
import matplotlib.pyplot as plt
# Before using a cluster
# Base = declarative_base()
# engine = create_engine('postgresql://masters_user:password@localhost:5432/masters_experiments')
```
# Timesteps Analysis
```
df = pd.read_csv("timesteps.csv", delimiter=",")
df.head(5)
```
## Number of parameters ∝ time steps
### Overall
```
from matplotlib import pyplot
def plot_by_filter(x_col,
                   y_col,
                   x_label='Sparsity length',
                   y_label='Number of network parameters',
                   title="Effect of sparsity on the number of parameters \n in a neural network with activation ",
                   hue="network_type",
                   filter_col="activation_function",
                   filter_val="tanh",
                   legend_title="NN TYPE",
                   df=None):
    sns.set_style("whitegrid")
    a4_dims = (6, 2.25)
    fig, ax = pyplot.subplots(figsize=a4_dims)
    ax.set(xlabel=x_label,
           ylabel=y_label)
    if filter_val is not None:
        ax = sns.pointplot(ax=ax, x=x_col, y=y_col, hue=hue,
                           marker='o', markersize=5, ci=None,
                           data=df[df[filter_col] == filter_val])
        ax.axes.set_title(title + filter_val,
                          fontsize=12, y=1.05)
        ax.legend(title=filter_val.upper(), loc='center right', bbox_to_anchor=(1.37, 0.5), ncol=1)
    else:
        ax = sns.pointplot(ax=ax, x=x_col, y=y_col, hue=hue,
                           marker='o', markersize=5, ci=None,
                           data=df)
        ax.axes.set_title(title, fontsize=12, y=1.05)
        ax.legend(title=legend_title, loc='center right', bbox_to_anchor=(1.37, 0.5), ncol=1)
# plt.legend()
filter_col = "network_type"
plot_by_filter(x_col="timesteps",
y_col="num_network_parameters",
x_label='Timesteps',
y_label='Number of network parameters',
title="Effect of timesteps on the number of parameters " +
"\n in a neural network over all activation functions",
hue="network_type",
filter_col=filter_col, filter_val=None, df=df)
filter_col = "network_type"
for filter_val in df[filter_col].unique():
    df_temp = df[df[filter_col] == filter_val]
    df_temp = df_temp.groupby(["timesteps", "network_type"]).agg({"num_network_parameters": "mean"}).to_records()
    df_temp = pd.DataFrame.from_records(df_temp)
    df_temp["timesteps"] = df_temp["timesteps"].astype(float)
    df_temp["num_network_parameters"] = df_temp["num_network_parameters"].astype(float)
    # Series.corr defaults to Pearson; print() takes no method keyword
    print("Pearson Correlation Between Timesteps and Number of Network Parameters for", filter_val,
          df_temp["timesteps"].corr(df_temp["num_network_parameters"]))
```
### Ratio of required parameters for increase in time steps
```
df_temp = df.groupby(["timesteps", "network_type"]).agg({"num_network_parameters": "mean"}).to_records()
df_temp = pd.DataFrame.from_records(df_temp)
df_temp.pivot(index="timesteps", columns="network_type", values="num_network_parameters").head(11)
```
### Discussion of results
From the Pearson coefficient, it seems as if increasing the number of timesteps increases the number of required parameters for the Elman and GRU RNNs, while decreasing this requirement for the LSTM. However, upon inspecting the graph and the values in the table, it becomes apparent that this small correlation is due to variability in the experiment during training. It is therefore safe to assume that, in the **average case**, there is no correlation between the number of required network parameters and the number of time steps when sparsity, the number of patterns and the number of output nodes are fixed.
### Effect of timesteps on networks with specific activation functions
```
filter_col = "activation_function"
for filter_val in df[filter_col].unique():
    df_temp = df[(df[filter_col] == filter_val)]
    df_temp = df_temp.groupby(["timesteps"]).agg({"num_network_parameters": "mean"}).to_records()
    df_temp = pd.DataFrame.from_records(df_temp)
    df_temp["timesteps"] = df_temp["timesteps"].astype(float)
    df_temp["num_network_parameters"] = df_temp["num_network_parameters"].astype(float)
    print("Pearson Correlation Between Timesteps and Number of Network Parameters for", filter_val,
          df_temp["timesteps"].corr(df_temp["num_network_parameters"]))
df_temp = df.groupby(["timesteps", "activation_function"]).agg({"num_network_parameters": "mean"}).to_records()
df_temp = pd.DataFrame.from_records(df_temp)
df_temp.pivot(index="timesteps", columns="activation_function", values="num_network_parameters").head(11)
```
### Discussion of activation functions ∝ time steps
The correlation coefficients between the required number of network parameters and the number of time steps for the respective activation functions indicate that for **most activation functions**, increasing the time steps has no effect on the required parameters of the network.
Interestingly, this is not the case for **selu** and **softplus**. For networks using these activation functions, the amount of memory loss is affected by the increase in timesteps.
The **softmax** and **linear** activation functions seem to cope best with the increase in timesteps, and the **relu** activation function has the highest variance. The high variance of the **relu** function makes it useful for avoiding local optima.
```
filter_col = "activation_function"
for filter_val in df[filter_col].unique():
    for filter_val_1 in df["network_type"].unique():
        # Apply both filters cumulatively; the second filter previously
        # overwrote the first by starting again from the full df
        df_temp = df[df["network_type"] == filter_val_1]
        df_temp = df_temp[df_temp[filter_col] == filter_val]
        df_temp = df_temp.groupby(["timesteps"]).agg({"num_network_parameters": "mean"}).to_records()
        df_temp = pd.DataFrame.from_records(df_temp)
        df_temp["timesteps"] = df_temp["timesteps"].astype(float)
        df_temp["num_network_parameters"] = df_temp["num_network_parameters"].astype(float)
        print("Pearson Correlation Between Timesteps and Number of Network Parameters for",
              filter_val_1 + " " + filter_val,
              df_temp["timesteps"].corr(df_temp["num_network_parameters"]))
filter_col = "activation_function"
for filter_val in df[filter_col].unique():
    plot_by_filter(x_col="timesteps",
                   y_col="num_network_parameters",
                   x_label='Timesteps',
                   y_label='Number of network parameters',
                   title="Effect of timesteps on the number of parameters " +
                         "\n in a neural network with activation " + str(filter_val),
                   hue="network_type",
                   filter_col=filter_col, filter_val=filter_val, df=df)
```
Comparing the correlations across neural network types and activation functions, it is clear that the observations made about the activation functions hold for all recurrent neural network types.
### Effect of time steps on training time
```
filter_col = "network_type"
plot_by_filter(x_col="timesteps",
y_col="epocs",
x_label='Timesteps',
y_label='Number of epochs required to train network',
title="Effect of timesteps on training time ",
hue="network_type",
filter_col=filter_col, filter_val=None, df=df)
```
### Effect of time steps on training time per activation function
```
filter_col = "activation_function"
for filter_val in df[filter_col].unique():
    plot_by_filter(x_col="timesteps",
                   y_col="epocs",
                   x_label='Timesteps',
                   y_label='Number of epochs required to train network',
                   title="Effect of timesteps on training time " +
                         "\n for a neural network with activation " + str(filter_val),
                   hue="network_type",
                   filter_col=filter_col, filter_val=filter_val, df=df)
```
### Conclusion about capacity
Increasing the number of time steps does not have a direct effect on the performance of RNNs when all other parameters are kept constant. It is important to note, however, that increasing the number of time steps can dramatically affect the size of the search space: if all possible patterns in the search space are explored, the search space grows exponentially with the number of timesteps. During the execution of these experiments, all 46 GB of available memory would be utilised. For an input space of $2$ binary inputs and $15$ time steps, the total number of possible patterns becomes $(2^2)^{15} = 1073741824$.
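As a sanity check on that figure, the pattern count follows directly from the formula, states per time step raised to the number of time steps:

```python
# 2 binary inputs give 2**2 = 4 possible input states per time step;
# a sequence of 15 steps can therefore take (2**2)**15 distinct patterns
inputs, timesteps = 2, 15
patterns = (2 ** inputs) ** timesteps
print(patterns)  # 1073741824 == 2**30
```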
```
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.metrics import roc_auc_score
from sklearn import metrics
import warnings
warnings.filterwarnings('ignore')
```
### Load the dataset
- Load the train data and, using all your knowledge of pandas, explore the different statistical properties of the dataset.
```
# Code starts here
train = pd.read_csv('E:/GreyAtom/glab proj/Predict the Insurance Claim using Logistic Regression/train.csv')
train.head()
train.shape
train.info()
train.describe()
#Drop ID
train.drop('Id', axis=1, inplace=True)
train.head()
#Checking for distribution of target
train['insuranceclaim'].value_counts().plot(kind='bar')
#Checking for skewness for features
train.skew()
```
### EDA & Data Preprocessing
- Check for the categorical & continuous features.
- Choose suitable plots for visualising the categorical target against the continuous features and try to draw some inferences from these plots.
```
# Code starts here
plt.boxplot(train['bmi'])
train['insuranceclaim'].value_counts(normalize=True)
train.corr()
sns.pairplot(train)
#Check count_plot for different features vs target variable insuranceclaim
# This will tell us which features are highly correlated with the target variable insuranceclaim and help us predict it better.
cols = ['children', 'sex', 'region', 'smoker']
fig,axes = plt.subplots(nrows=2, ncols=2, figsize=(20,20))
for i in range(0, 2):
    for j in range(0, 2):
        col = cols[i*2 + j]
        sns.countplot(x=train[col], hue=train['insuranceclaim'], ax=axes[i, j])
```
### Model building
- Separate the features and target and then split the train data into train and validation set.
- Now let's come to the actual task: using logistic regression, predict `insuranceclaim`. Select the best model by cross-validation using Grid Search.
- Try improving upon the `roc_auc_score` using different parameters for Grid Search that give the best score.
```
# Code starts here
#Store independent and dependent variable
X = train.drop('insuranceclaim', axis=1)
y=train['insuranceclaim']
#Split dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=6)
#Instantiate Logistic regression model
lr = LogisticRegression(random_state=9)
#Fit the logistic regression model
lr.fit(X_train, y_train)
#make predictions
y_pred = lr.predict(X_test)
accuracy = accuracy_score(y_test, y_pred)
print('Accuracy is : ',accuracy)
score = roc_auc_score(y_test, y_pred)
print('Score is : ', score)
#visualize performance of a binary classifier.
fpr, tpr, _ = metrics.roc_curve(y_test, y_pred)
roc_auc = roc_auc_score(y_test, y_pred)
print(roc_auc)
plt.plot(fpr,tpr,label="Logistic model, auc="+str(roc_auc))
plt.legend(loc=4)
plt.show()
```
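The cell above fits a plain `LogisticRegression` even though the task asks for a grid search. A cross-validated search might look like the sketch below; the parameter grid and the synthetic stand-in data are illustrative, so substitute the real `X_train`/`y_train` from above:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in data so the sketch runs on its own
X_demo, y_demo = make_classification(n_samples=200, n_features=8, random_state=6)

# Illustrative grid over the regularisation strength C
param_grid = {"C": [0.01, 0.1, 1, 10]}
grid = GridSearchCV(LogisticRegression(random_state=9, max_iter=1000),
                    param_grid, scoring="roc_auc", cv=5)
grid.fit(X_demo, y_demo)
print(grid.best_params_, round(grid.best_score_, 3))
```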
### Prediction on the test data and creating the sample submission file.
- Load the test data and store the `Id` column in a separate variable.
- Perform the same operations on the test data that you have performed on the train data.
- Create the submission file as a `csv` file consisting of the `Id` column from the test data and your prediction as the second column.
```
# Code starts here
test = pd.read_csv('E:/GreyAtom/glab proj/Predict the Insurance Claim using Logistic Regression/test.csv')
test.head()
id_ = test['Id']
# Applying same transformation on test
test.drop('Id',axis=1,inplace=True)
# make predictions
y_pred_test =lr.predict(test)
# Create a sample submission file
sample_submission = pd.DataFrame({'Id':id_,'insuranceclaim':y_pred_test})
sample_submission.head()
# Convert the sample submission file into a csv file
sample_submission.to_csv('Sample_submission_1.csv',index=False)
```
# Hyper-parameter tuning
**Learning Objectives**
1. Learn how to use `cloudml-hypertune` to report the results for Cloud hyperparameter tuning trial runs
2. Learn how to configure the `.yaml` file for submitting a Cloud hyperparameter tuning job
3. Submit a hyperparameter tuning job to Cloud AI Platform
## Introduction
Let's see if we can improve upon that by tuning our hyperparameters.
Hyperparameters are parameters that are set *prior* to training a model, as opposed to parameters which are learned *during* training.
These include learning rate and batch size, but also model design parameters such as type of activation function and number of hidden units.
Here are the four most common ways of finding the ideal hyperparameters:
1. Manual
2. Grid Search
3. Random Search
4. Bayesian Optimization
**1. Manual**
Traditionally, hyperparameter tuning is a manual trial-and-error process. A data scientist starts from some intuition about suitable hyperparameters, observes the result, and uses that information to try a new set of hyperparameters in an attempt to beat the existing performance.
Pros
- Educational, builds up your intuition as a data scientist
- Inexpensive because only one trial is conducted at a time
Cons
- Requires a lot of time and patience
**2. Grid Search**
At the other extreme we can use grid search: define a discrete set of values to try for each hyperparameter, then try every possible combination.
Pros
- Can run hundreds of trials in parallel using the cloud
- Guaranteed to find the best solution within the search space
Cons
- Expensive
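A toy sketch of the grid-search combinatorics (the hyperparameter values here are illustrative):

```python
from itertools import product

# Discrete candidate values for each hyperparameter
learning_rates = [0.001, 0.01, 0.1]
batch_sizes = [32, 64]

# Grid search tries every combination
grid = list(product(learning_rates, batch_sizes))
print(len(grid))  # 3 * 2 = 6 trials
```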
**3. Random Search**
Alternatively, define a range for each hyperparameter (e.g. 0-256) and sample uniformly at random from that range.
Pros
- Can run hundreds of trials in parallel using the cloud
- Requires less trials than Grid Search to find a good solution
Cons
- Expensive (but less so than Grid Search)
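A minimal sketch of random sampling over such ranges (the ranges and trial count are illustrative):

```python
import random

random.seed(0)  # reproducible sketch
# Each trial samples independently: an integer batch size and a
# log-uniform learning rate between 1e-4 and 1e-1
trials = [
    {"batch_size": random.randint(8, 256),
     "lr": 10 ** random.uniform(-4, -1)}
    for _ in range(5)
]
print(trials[0])
```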
**4. Bayesian Optimization**
Unlike Grid Search and Random Search, Bayesian Optimization takes into account information from past trials to select parameters for future trials. The details of how this is done are beyond the scope of this notebook, but if you're interested you can read how it works [here](https://cloud.google.com/blog/products/gcp/hyperparameter-tuning-cloud-machine-learning-engine-using-bayesian-optimization).
Pros
- Picks values intelligently based on results from past trials
- Less expensive because it requires fewer trials to get a good result
Cons
- Requires sequential trials for best results, takes longer
**AI Platform HyperTune**
AI Platform HyperTune, powered by [Google Vizier](https://ai.google/research/pubs/pub46180), uses Bayesian Optimization by default, but [also supports](https://cloud.google.com/ml-engine/docs/tensorflow/hyperparameter-tuning-overview#search_algorithms) Grid Search and Random Search.
When tuning just a few hyperparameters (say, fewer than four), Grid Search and Random Search work well, but when tuning several hyperparameters over a large search space, Bayesian Optimization is best.
```
PROJECT = "<YOUR PROJECT>"
BUCKET = "<YOUR BUCKET>"
REGION = "<YOUR REGION>"
TFVERSION = "2.1" # TF version for AI Platform to use
import os
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["TFVERSION"] = TFVERSION
```
## Make code compatible with AI Platform Training Service
In order to make our code compatible with AI Platform Training Service we need to make the following changes:
1. Upload data to Google Cloud Storage
2. Move code into a trainer Python package
3. Submit training job with `gcloud` to train on AI Platform
### Upload data to Google Cloud Storage (GCS)
Cloud services don't have access to our local files, so we need to upload them to a location the Cloud servers can read from. In this case we'll use GCS.
To do this run the notebook [0_export_data_from_bq_to_gcs.ipynb](./0_export_data_from_bq_to_gcs.ipynb), which will export the taxifare data from BigQuery directly into a GCS bucket. If all ran smoothly, you should be able to list the data bucket by running the following command:
```
!gsutil ls gs://$BUCKET/taxifare/data
```
## Move code into python package
In the [previous lab](https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/courses/machine_learning/deepdive2/building_production_ml_systems/labs/1_training_at_scale.ipynb), we moved our code into a python package for training on Cloud AI Platform. Let's just check that the files are there. You should see the following files in the `taxifare/trainer` directory:
- `__init__.py`
- `model.py`
- `task.py`
```
!ls -la taxifare/trainer
```
To use hyperparameter tuning in your training job you must perform the following steps:
1. Specify the hyperparameter tuning configuration for your training job by including a HyperparameterSpec in your TrainingInput object.
2. Include the following code in your training application:
- Parse the command-line arguments representing the hyperparameters you want to tune, and use the values to set the hyperparameters for your training trial.
- Add your hyperparameter metric to the summary for your graph.
- To submit a hyperparameter tuning job, we must modify `model.py` and `task.py` to expose any variables we want to tune as command line arguments.
### Modify model.py
## Exercise.
Complete the TODOs in the `train_and_evaluate` function below.
- Define the hyperparameter tuning metric `hp_metric`
- Set up cloudml-hypertune to report the results of each trial by calling its helper function, `report_hyperparameter_tuning_metric`
```
%%writefile ./taxifare/trainer/model.py
import datetime
import hypertune
import logging
import os
import shutil
import numpy as np
import tensorflow as tf
from tensorflow.keras import activations
from tensorflow.keras import callbacks
from tensorflow.keras import layers
from tensorflow.keras import models
from tensorflow import feature_column as fc
logging.info(tf.version.VERSION)
CSV_COLUMNS = [
'fare_amount',
'pickup_datetime',
'pickup_longitude',
'pickup_latitude',
'dropoff_longitude',
'dropoff_latitude',
'passenger_count',
'key',
]
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0], ['na'], [0.0], [0.0], [0.0], [0.0], [0.0], ['na']]
DAYS = ['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat']
def features_and_labels(row_data):
for unwanted_col in ['key']:
row_data.pop(unwanted_col)
label = row_data.pop(LABEL_COLUMN)
return row_data, label
def load_dataset(pattern, batch_size, num_repeat):
dataset = tf.data.experimental.make_csv_dataset(
file_pattern=pattern,
batch_size=batch_size,
column_names=CSV_COLUMNS,
column_defaults=DEFAULTS,
num_epochs=num_repeat,
)
return dataset.map(features_and_labels)
def create_train_dataset(pattern, batch_size):
dataset = load_dataset(pattern, batch_size, num_repeat=None)
return dataset.prefetch(1)
def create_eval_dataset(pattern, batch_size):
dataset = load_dataset(pattern, batch_size, num_repeat=1)
return dataset.prefetch(1)
def parse_datetime(s):
if type(s) is not str:
s = s.numpy().decode('utf-8')
return datetime.datetime.strptime(s, "%Y-%m-%d %H:%M:%S %Z")
def euclidean(params):
lon1, lat1, lon2, lat2 = params
londiff = lon2 - lon1
latdiff = lat2 - lat1
return tf.sqrt(londiff*londiff + latdiff*latdiff)
def get_dayofweek(s):
ts = parse_datetime(s)
return DAYS[ts.weekday()]
@tf.function
def dayofweek(ts_in):
return tf.map_fn(
lambda s: tf.py_function(get_dayofweek, inp=[s], Tout=tf.string),
ts_in
)
@tf.function
def fare_thresh(x):
return 60 * activations.relu(x)
def transform(inputs, NUMERIC_COLS, STRING_COLS, nbuckets):
# Pass-through columns
transformed = inputs.copy()
del transformed['pickup_datetime']
feature_columns = {
colname: fc.numeric_column(colname)
for colname in NUMERIC_COLS
}
# Scaling longitude from range [-70, -78] to [0, 1]
for lon_col in ['pickup_longitude', 'dropoff_longitude']:
transformed[lon_col] = layers.Lambda(
lambda x: (x + 78)/8.0,
name='scale_{}'.format(lon_col)
)(inputs[lon_col])
# Scaling latitude from range [37, 45] to [0, 1]
for lat_col in ['pickup_latitude', 'dropoff_latitude']:
transformed[lat_col] = layers.Lambda(
lambda x: (x - 37)/8.0,
name='scale_{}'.format(lat_col)
)(inputs[lat_col])
# Adding Euclidean dist (no need to be accurate: NN will calibrate it)
transformed['euclidean'] = layers.Lambda(euclidean, name='euclidean')([
inputs['pickup_longitude'],
inputs['pickup_latitude'],
inputs['dropoff_longitude'],
inputs['dropoff_latitude']
])
feature_columns['euclidean'] = fc.numeric_column('euclidean')
# hour of day from timestamp of form '2010-02-08 09:17:00+00:00'
transformed['hourofday'] = layers.Lambda(
lambda x: tf.strings.to_number(
tf.strings.substr(x, 11, 2), out_type=tf.dtypes.int32),
name='hourofday'
)(inputs['pickup_datetime'])
feature_columns['hourofday'] = fc.indicator_column(
fc.categorical_column_with_identity(
'hourofday', num_buckets=24))
latbuckets = np.linspace(0, 1, nbuckets).tolist()
lonbuckets = np.linspace(0, 1, nbuckets).tolist()
b_plat = fc.bucketized_column(
feature_columns['pickup_latitude'], latbuckets)
b_dlat = fc.bucketized_column(
feature_columns['dropoff_latitude'], latbuckets)
b_plon = fc.bucketized_column(
feature_columns['pickup_longitude'], lonbuckets)
b_dlon = fc.bucketized_column(
feature_columns['dropoff_longitude'], lonbuckets)
ploc = fc.crossed_column(
[b_plat, b_plon], nbuckets * nbuckets)
dloc = fc.crossed_column(
[b_dlat, b_dlon], nbuckets * nbuckets)
pd_pair = fc.crossed_column([ploc, dloc], nbuckets ** 4)
feature_columns['pickup_and_dropoff'] = fc.embedding_column(
pd_pair, 100)
return transformed, feature_columns
def rmse(y_true, y_pred):
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
def build_dnn_model(nbuckets, nnsize, lr):
# input layer is all float except for pickup_datetime which is a string
STRING_COLS = ['pickup_datetime']
NUMERIC_COLS = (
set(CSV_COLUMNS) - set([LABEL_COLUMN, 'key']) - set(STRING_COLS)
)
inputs = {
colname: layers.Input(name=colname, shape=(), dtype='float32')
for colname in NUMERIC_COLS
}
inputs.update({
colname: layers.Input(name=colname, shape=(), dtype='string')
for colname in STRING_COLS
})
# transforms
transformed, feature_columns = transform(
inputs, NUMERIC_COLS, STRING_COLS, nbuckets=nbuckets)
dnn_inputs = layers.DenseFeatures(feature_columns.values())(transformed)
x = dnn_inputs
for layer, nodes in enumerate(nnsize):
x = layers.Dense(nodes, activation='relu', name='h{}'.format(layer))(x)
output = layers.Dense(1, name='fare')(x)
model = models.Model(inputs, output)
lr_optimizer = tf.keras.optimizers.Adam(learning_rate=lr)
model.compile(optimizer=lr_optimizer, loss='mse', metrics=[rmse, 'mse'])
return model
def train_and_evaluate(hparams):
batch_size = hparams['batch_size']
eval_data_path = hparams['eval_data_path']
nnsize = hparams['nnsize']
nbuckets = hparams['nbuckets']
lr = hparams['lr']
num_evals = hparams['num_evals']
num_examples_to_train_on = hparams['num_examples_to_train_on']
output_dir = hparams['output_dir']
train_data_path = hparams['train_data_path']
if tf.io.gfile.exists(output_dir):
tf.io.gfile.rmtree(output_dir)
timestamp = datetime.datetime.now().strftime('%Y%m%d%H%M%S')
savedmodel_dir = os.path.join(output_dir, 'savedmodel')
model_export_path = os.path.join(savedmodel_dir, timestamp)
checkpoint_path = os.path.join(output_dir, 'checkpoints')
tensorboard_path = os.path.join(output_dir, 'tensorboard')
dnn_model = build_dnn_model(nbuckets, nnsize, lr)
logging.info(dnn_model.summary())
trainds = create_train_dataset(train_data_path, batch_size)
evalds = create_eval_dataset(eval_data_path, batch_size)
steps_per_epoch = num_examples_to_train_on // (batch_size * num_evals)
checkpoint_cb = callbacks.ModelCheckpoint(checkpoint_path,
save_weights_only=True,
verbose=1)
tensorboard_cb = callbacks.TensorBoard(tensorboard_path,
histogram_freq=1)
history = dnn_model.fit(
trainds,
validation_data=evalds,
epochs=num_evals,
steps_per_epoch=max(1, steps_per_epoch),
verbose=2, # 0=silent, 1=progress bar, 2=one line per epoch
callbacks=[checkpoint_cb, tensorboard_cb]
)
# Exporting the model with default serving function.
tf.saved_model.save(dnn_model, model_export_path)
# TODO 1
hp_metric = # TODO: Your code goes here
# TODO 1
hpt = # TODO: Your code goes here
# TODO: Your code goes here
return history
```
### Modify task.py
```
%%writefile taxifare/trainer/task.py
import argparse
import json
import os
from trainer import model
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument(
"--batch_size",
help = "Batch size for training steps",
type = int,
default = 32
)
parser.add_argument(
"--eval_data_path",
help = "GCS location pattern of eval files",
required = True
)
parser.add_argument(
"--nnsize",
help = "Hidden layer sizes (provide space-separated sizes)",
nargs = "+",
type = int,
default=[32, 8]
)
parser.add_argument(
"--nbuckets",
help = "Number of buckets to divide lat and lon with",
type = int,
default = 10
)
parser.add_argument(
"--lr",
help = "learning rate for optimizer",
type = float,
default = 0.001
)
parser.add_argument(
"--num_evals",
help = "Number of times to evaluate model on eval data training.",
type = int,
default = 5
)
parser.add_argument(
"--num_examples_to_train_on",
help = "Number of examples to train on.",
type = int,
default = 100
)
parser.add_argument(
"--output_dir",
help = "GCS location to write checkpoints and export models",
required = True
)
parser.add_argument(
"--train_data_path",
help = "GCS location pattern of train files containing eval URLs",
required = True
)
parser.add_argument(
"--job-dir",
help = "this model ignores this field, but it is required by gcloud",
default = "junk"
)
args, _ = parser.parse_known_args()
hparams = args.__dict__
hparams["output_dir"] = os.path.join(
hparams["output_dir"],
json.loads(
os.environ.get("TF_CONFIG", "{}")
).get("task", {}).get("trial", "")
)
print("output_dir", hparams["output_dir"])
model.train_and_evaluate(hparams)
```
### Create config.yaml file
Specify the hyperparameter tuning configuration for your training job
Create a HyperparameterSpec object to hold the hyperparameter tuning configuration for your training job, and add the HyperparameterSpec as the hyperparameters object in your TrainingInput object.
In your HyperparameterSpec, set the hyperparameterMetricTag to a value representing your chosen metric. If you don't specify a hyperparameterMetricTag, AI Platform Training looks for a metric with the name training/hptuning/metric. The following example shows how to create a configuration for a metric named metric1:
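A fragment of that shape might look like the following (the trial counts and the `hidden1` parameter are illustrative placeholders, not values for this notebook's exercise):

```yaml
trainingInput:
  hyperparameters:
    goal: MAXIMIZE
    hyperparameterMetricTag: metric1
    maxTrials: 30
    maxParallelTrials: 1
    enableTrialEarlyStopping: True
    params:
      - parameterName: hidden1
        type: INTEGER
        minValue: 40
        maxValue: 400
        scaleType: UNIT_LINEAR_SCALE
```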
## Exercise.
Complete the TODOs below.
- Specify the hypertuning configuration for the learning rate, the batch size and the number of buckets using one of the available [hyperparameter types](https://cloud.google.com/ai-platform/training/docs/hyperparameter-tuning-overview#hyperparameter_types).
- Specify the hyperparameter tuning metric tag
- Set the maximum number of parallel trials and the maximum number of trials
```
%%writefile hptuning_config.yaml
trainingInput:
scaleTier: BASIC
hyperparameters:
goal: MINIMIZE
maxTrials: # TODO: Your code goes here
maxParallelTrials: # TODO: Your code goes here
hyperparameterMetricTag: # TODO: Your code goes here
enableTrialEarlyStopping: True
params:
- parameterName: lr
# TODO: Your code goes here
- parameterName: nbuckets
# TODO: Your code goes here
- parameterName: batch_size
# TODO: Your code goes here
```
#### Report your hyperparameter metric to AI Platform Training
The way to report your hyperparameter metric to the AI Platform Training service depends on whether you are using TensorFlow for training or not. It also depends on whether you are using a runtime version or a custom container for training.
We recommend that your training code reports your hyperparameter metric to AI Platform Training frequently in order to take advantage of early stopping.
**TensorFlow with a runtime version**
If you use an AI Platform Training runtime version and train with TensorFlow, you can report your hyperparameter metric to AI Platform Training by writing the metric to a TensorFlow summary.
```
%%bash
# Output directory and jobID
OUTDIR=gs://${BUCKET}/taxifare/trained_model_$(date -u +%y%m%d_%H%M%S)
JOBID=taxifare_$(date -u +%y%m%d_%H%M%S)
echo ${OUTDIR} ${REGION} ${JOBID}
gsutil -m rm -rf ${OUTDIR}
# Model and training hyperparameters
BATCH_SIZE=15
NUM_EXAMPLES_TO_TRAIN_ON=100
NUM_EVALS=10
NBUCKETS=10
LR=0.001
NNSIZE="32 8"
# GCS paths
GCS_PROJECT_PATH=gs://$BUCKET/taxifare
DATA_PATH=$GCS_PROJECT_PATH/data
TRAIN_DATA_PATH=$DATA_PATH/taxi-train*
EVAL_DATA_PATH=$DATA_PATH/taxi-valid*
# TODO
gcloud ai-platform jobs submit training $JOBID \
# TODO: Your code goes here
-- \
--eval_data_path $EVAL_DATA_PATH \
--output_dir $OUTDIR \
--train_data_path $TRAIN_DATA_PATH \
--batch_size $BATCH_SIZE \
--num_examples_to_train_on $NUM_EXAMPLES_TO_TRAIN_ON \
--num_evals $NUM_EVALS \
--nbuckets $NBUCKETS \
--lr $LR \
--nnsize $NNSIZE
```
Copyright 2020 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
```
import pandas as pd
import os
import geopy as geo
import numpy as np
from folium.plugins import FastMarkerCluster
import folium
from geopy.geocoders import Nominatim
import matplotlib.pyplot as plt
%matplotlib inline
```
## Importing the database and generating a sample
```
os.getcwd()
df = pd.read_csv('/home/rsa/Documentos/database/2017-01-01.csv')
df = df.rename(columns={df.columns[0]: "DATAHORA", df.columns[1]:"ORDEM", df.columns[2]: "LINHA",df.columns[3]:"LATITUDE",df.columns[4]:"LONGITUDE",df.columns[5]: "VELOCIDADE"})
df = df.copy().drop_duplicates()
df['lat_long'] = list(zip(round(df.LATITUDE,5), round(df.LONGITUDE,5)))
print(df.shape)
df.head()
print('Linha:', df.LINHA.nunique())
print('Ordem:', df.ORDEM.nunique())
print('LATITUDE:', df.LATITUDE.nunique())
print('LONGITUDE:', df.LONGITUDE.nunique())
print('DATAHORA:', df.DATAHORA.nunique())
import seaborn as sns
# sns.boxplot(x=df311g.lng)
sns.boxplot(x=df.LATITUDE)
print(df.describe(include='all'))
import datetime
#df1 = df[df.LINHA.isin(['476.0'])].copy()
#df1 = df.sample(1000000, random_state=1)
df1 = df.copy()
df1 = df1.rename(columns={'LATITUDE':'lat', 'LONGITUDE':'lng'})
#df1["ID"] = df1["ORDEM"].map(str) +"-"+ df1["LINHA"].map(str)
df1['DATAHORA'] = df1.apply(lambda x: datetime.datetime.strptime(x.DATAHORA,"%m-%d-%Y %H:%M:%S"),1)
df1 = df1[['DATAHORA', 'ORDEM', 'LINHA', 'lat_long', 'lat', 'lng']]
df1 = df1.drop_duplicates()
df1 = df1.sort_values(['LINHA', 'ORDEM', 'DATAHORA']).reset_index(drop=True)
print(df1.shape)
df1.info()
df1.head()
df1.groupby(df1.DATAHORA.dt.date).count()
import seaborn as sns
print(df1['lat'].describe())
plt.figure(figsize=(9, 8))
sns.distplot(df1['lat'], color='g', bins=100, hist_kws={'alpha': 0.4});
#sns.distplot(df1['lng'], color='g', bins=100, hist_kws={'alpha': 0.4});
# Visualizing the data on the map (Folium)
import folium
from folium.plugins import MousePosition, MiniMap
from folium.plugins import Draw
locs = [list(i) for i in df1.drop_duplicates(['lat_long'])['lat_long']]
brasil = folium.Map(
#width=500,height=500,
#tiles='OpenStreetMap',
#tiles='cartodbpositron',
tiles='CartoDB dark_matter',
location=locs[0],
zoom_start=11
)
for i in locs:
folium.CircleMarker(
location=i,
color=['blue'],
radius=1,
weight=3
).add_to(brasil)
path = '/home/rsa/Documentos/thesis/plots_images/'
mouse = MousePosition(position='topright')
draw = Draw(export=True)
minimap = MiniMap(toggle_display=True, tile_layer='CartoDB dark_matter')
mouse.add_to(brasil)
draw.add_to(brasil)
minimap.add_to(brasil)
brasil.save(path + 'plot_data.html')
# IMPORTING AND CONVERTING THE SELECTED POLYGON, THEN SELECTING THE POINTS IT CONTAINS
import geojson
from shapely.geometry.polygon import Polygon
with open('/home/rsa/Downloads/data.geojson') as f:
gj = geojson.load(f)
polygon = gj['features'][0].geometry.coordinates[0]
pol = Polygon(polygon) # create polygon
print(polygon)
pol
# THIS CELL SHOULD ONLY BE RUN IF A POLYGON WAS SELECTED
# SELECTING THE POINTS CONTAINED IN THE POLYGON
from shapely.geometry import Point
df1['isin_draw'] = df1.apply(lambda x: pol.contains(Point(x.lng,x.lat)), 1)*1 # create column indicating if point is in Polygon
df1 = df1[df1.isin_draw==True].copy()
print(df1.shape)
df1.head()
```
## Applying Uber H3 (hexagons)
```
print(df1.shape)
df1.head()
import seaborn as sns
# sns.boxplot(x=df311g.lng)
sns.boxplot(x=df1.lat)
# REMOVING OUTLIERS USING Z-SCORE (Optional: run only for a cleaner view in the matplotlib map)
def remove_outlier(data_1, threshold=3):
    outliers = []
    mean_1 = np.mean(data_1)
    std_1 = np.std(data_1)
    for y in data_1:
        z_score = (y - mean_1) / std_1
        if np.abs(z_score) > threshold:
            outliers.append(y)
    output = [i for i in data_1 if i not in outliers]
    return output
threshold = 2.5
df1 = df1[df1.lng.isin(remove_outlier(df1.lng, threshold))]
df1 = df1[df1.lat.isin(remove_outlier(df1.lat, threshold))]
# CREATING HEXAGONS WITH CUSTOMIZED SIZE
from h3 import h3
APERTURE_SIZE = 12
hex_col = 'hex' + str(APERTURE_SIZE)
# Functions
def plot_scatter(df, metric_col, x='lng_hex', y='lat_hex', marker='o', alpha=1, figsize=(17, 15), colormap='viridis'):
#https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.plot.scatter.html
df.plot.scatter(x=x, y=y, c=metric_col, edgecolors='none', colormap=colormap, marker=marker, alpha=alpha, figsize=figsize);
plt.xticks([], []); plt.yticks([], [])
# find hexs containing the points
df1[hex_col] = df1.apply(lambda x: h3.geo_to_h3(x.lat_long[0],x.lat_long[1],APERTURE_SIZE), 1)
# aggregate the points
df1_ag = df1.groupby(hex_col).size().to_frame('count').reset_index()
# find center of hex for visualization
df1_ag['lat_hex'] = df1_ag[hex_col].apply(lambda x: h3.h3_to_geo(x)[0])
df1_ag['lng_hex'] = df1_ag[hex_col].apply(lambda x: h3.h3_to_geo(x)[1])
# plot the hexs
plot_scatter(df1_ag, metric_col='count', figsize=(10,5))
#print(df1.head())
#print(df1_ag.head())
# VISUALIZING THE HEXAGONS
from h3 import h3
def visualize_hexagons(hexagons, color="red", folium_map=None, fill=None):
"""
hexagons is a list of hexcluster. Each hexcluster is a list of hexagons.
eg. [[hex1, hex2], [hex3, hex4]]
"""
polylines = []
lat = []
lng = []
for hex in hexagons:
polygons = h3.h3_set_to_multi_polygon([hex], geo_json=False)
# flatten polygons into loops.
outlines = [loop for polygon in polygons for loop in polygon]
polyline = [outline + [outline[0]] for outline in outlines][0]
lat.extend(map(lambda v:v[0],polyline))
lng.extend(map(lambda v:v[1],polyline))
polylines.append(polyline)
if folium_map is None:
m = folium.Map(location=[sum(lat)/len(lat), sum(lng)/len(lng)], zoom_start=13, tiles='cartodbpositron')
else:
m = folium_map
for polyline in polylines:
my_PolyLine=folium.PolyLine(locations=polyline,weight=8,color=color,fill_color=fill)
m.add_child(my_PolyLine)
return m
hexagons = df1_ag[hex_col].tolist()
m = visualize_hexagons(hexagons, color='red', folium_map=brasil)
m.save(path + 'hexagons_plot_data.html')
# DROPPING BUS LINES WITH ONLY ONE EMITTED GPS SIGNAL
aux = pd.DataFrame(df1.groupby('LINHA').size()).reset_index()
aux = aux.rename(columns={0:'count'})
include = aux[aux['count']>1]['LINHA']
df1 = df1[df1.LINHA.isin(include)]
df1 = df1.sort_values('DATAHORA')
print(df1.shape)
df1.head()
# FUNCTION TO COMPUTE THE DISTANCES BETWEEN EMITTED GPS SIGNALS (GROUPED BY BUS LINE)
# VECTORIZED HAVERSINE FUNCTION
def haversine2(lat1, lon1, lat2, lon2, to_radians=True, earth_radius=6371):
"""
slightly modified version of http://stackoverflow.com/a/29546836/2901002
Calculate the great circle distance between two points
on the earth (specified in decimal degrees or in radians)
All (lat, lon) coordinates must have numeric dtypes and be of equal length.
"""
if to_radians:
lat1, lon1, lat2, lon2 = np.radians([lat1, lon1, lat2, lon2])
a = np.sin((lat2-lat1)/2.0)**2 + \
np.cos(lat1) * np.cos(lat2) * np.sin((lon2-lon1)/2.0)**2
distance_in_meters = earth_radius * (10**3) * 2 * np.arcsin(np.sqrt(a))
return distance_in_meters
# Alternatively, the haversine package's function can be used
#import haversine as hv
#hv.haversine(df1.lat_long[1989351], df1.lat_long[3667807], unit = 'm')
import geopy as geo
from geopy import distance
start = [41.49008, -71.312796]
end = [41.499498, -81.695391]
print(geo.distance.geodesic(start, end).m)
print(haversine2(start[0], start[1], end[0], end[1]))
# COMPUTING THE DISTANCES (DELTA S) WITH GEOPY'S GEODESIC
df1 = df1.sort_values(['LINHA','DATAHORA'])
grp = df1.groupby('LINHA')
df1['shift_lat_long'] = grp.shift()['lat_long']
lista = []
for i in df1.index:
if 'n' in str(df1.shift_lat_long[i]).lower():
lista.append(np.nan)
else:
lista.append(geo.distance.geodesic(df1.lat_long[i], df1.shift_lat_long[i]).m)
df1['delta_S'] = lista
df1.head()
# COMPUTING THE DISTANCES (DELTA s) AND THE TIMES (DELTA T)
df1 = df1.sort_values(['LINHA','DATAHORA'])
grp = df1.groupby('LINHA')
# COMPUTING THE DISTANCE TRAVELED
df1['delta_s'] = np.concatenate(grp.apply(lambda x: haversine2(x['lat'], x['lng'], x['lat'].shift(), x['lng'].shift()
)
).values
)
# COMPUTING THE TIME SPENT
df1['delta_t'] = (grp.apply(lambda x: x['DATAHORA'] - x['DATAHORA'].shift()).values)
df1['delta_t'] = df1.apply(lambda x: x['delta_t'].total_seconds(),1).values
#df1['delta_t'] = df1['delta_t'].fillna(pd.Timedelta(seconds=0))
df1['delta_t'] = df1['delta_t'].fillna(0)
df1['delta_s'] = df1['delta_s'].fillna(0)
df1['delta_S'] = df1['delta_S'].fillna(0)
#df1 = df1[df1.delta_s!=0]
df1.head()
# COMPUTING THE HEXAGON AREAS AND THE SIZE OF THE TIME WINDOWS TO BE CONSIDERED
from shapely.geometry.polygon import Polygon
df1['area_hexagon'] = df1.apply(lambda x: Polygon(h3.h3_to_geo_boundary(x[hex_col])).area,1) # hex_col (the hexagon size) was already defined above
grou = df1.groupby([pd.Grouper(freq='10min', key='DATAHORA'), hex_col]) # GROUP BY TIME WINDOW AND HEXAGON
graf = grou.agg({'delta_S': 'sum', 'delta_t': 'sum', 'area_hexagon':'first', 'LINHA':'nunique'})
graf = graf[graf.delta_S!=0]
graf = graf.reset_index()
from sklearn import preprocessing
"""
graf = graf[graf.speed != np.inf]
dens = np.array([graf.LINHA/graf.area_hexagon])
vel = np.array([graf.speed])
graf['density'] = preprocessing.normalize([dens][0])[0]
graf['speed'] = preprocessing.normalize([vel][0])[0]
graf = graf[['density', 'speed']]
graf.head()
"""
graf.head(5)
graf['Q_i'] = graf.delta_S/(5*graf.area_hexagon*1e8) # FLOW (distance-based, per Edie's definitions)
graf['K_i'] = graf.delta_t/(5*graf.area_hexagon*1e8) # DENSITY (time-based, per Edie's definitions)
# A method to estimate the macroscopic fundamental diagram using limited mobile probe data
#graf['Q_i'] = graf.delta_s/(5*100)
#graf['K_i'] = graf.delta_t/(5*100)
graf.head()
grou2 = graf.groupby([hex_col, 'DATAHORA'])
graf2 = grou2.agg({'Q_i': 'sum', 'delta_t': 'sum', 'area_hexagon':'sum'})
def f_mi(x):
"""
Generates the flow (Q), density (K), and speed (V) variables"""
d = []
d.append( ( (x['area_hexagon'] * x['Q_i']).sum() ) / ( x['area_hexagon'].sum() ) )
d.append( ( (x['area_hexagon'] * x['K_i']).sum() ) / ( x['area_hexagon'].sum() ) )
d.append( (( (x['area_hexagon'] * x['Q_i']).sum() ) / ( x['area_hexagon'].sum() )) / (( (x['area_hexagon'] * x['K_i']).sum() ) / ( x['area_hexagon'].sum() )) )
return pd.Series(d, index=['Q', 'K', 'V'])
graf2 = graf.groupby(['DATAHORA']).apply(f_mi)
graf2.head()
# CREATING A COLUMN WITH THE PERIOD OF THE DAY (SHIFT)
conditions = [
(graf2.index.hour > 7) & (graf2.index.hour < 12),
(graf2.index.hour > 14) & (graf2.index.hour < 19)]
choices = ['manha','tarde']
graf2['turno'] = np.select(conditions, choices, default='outro')
print(graf2.shape)
graf2.head()
gr = graf2.reset_index()
gr['time'] = gr.apply(lambda x: x['DATAHORA'].time(),1)
gr = gr.reset_index(drop=1)
xlim = [0, 20]
ylim = [0, 700]
fig, ax = plt.subplots()
bp = graf2.plot(ax=ax, x='K', y='V', style='.', figsize=(20,8), alpha=1, xlim=xlim, ylim=None)
#bp = gr.plot(ax=ax, x='V', y='K', style='.', figsize=(20,8), alpha=1)
#bp = graf2.groupby('turno').plot(ax=ax, x="V", y="K", style='.', figsize=(20,8), alpha=1)
# 15-MINUTE WINDOW, HEX12
xlim = [0, .019e11]
ylim = [0, 1200]
fig, ax = plt.subplots()
#bp = graf2.plot(ax=ax, x='K', y='Q', style='.', figsize=(20,8), alpha=1, xlim=xlim, ylim=ylim)
bp = graf2.plot(ax=ax, x='Q', y='K', style='.', figsize=(20,8), alpha=1)
#bp = graf.groupby('turno').plot(ax=ax, x='density', y='VELOCIDADE', style='.', figsize=(20,8), alpha=1)
# 15-MINUTE WINDOW, HEX12
xlim = [0, .019e11]
ylim = [0, 1000]
fig, ax = plt.subplots()
bp = graf2.plot(ax=ax, x='K', y='V', style='.', figsize=(20,8), alpha=1, xlim=xlim, ylim=ylim)
#bp = graf2.plot(ax=ax, x='K', y='V', style='.', figsize=(20,8), alpha=1)
#bp = graf.groupby('turno').plot(ax=ax, x='density', y='VELOCIDADE', style='.', figsize=(20,8), alpha=1)
# 5-MINUTE WINDOW, HEX11
xlim = [0, .019e10]
ylim = [0, 1000]
fig, ax = plt.subplots()
bp = graf2.plot(ax=ax, x='K', y='V', style='.', figsize=(20,8), alpha=1, xlim=xlim, ylim=ylim)
#bp = graf2.plot(ax=ax, x='K', y='Q', style='.', figsize=(20,8), alpha=1)
#bp = graf.groupby('turno').plot(ax=ax, x='density', y='VELOCIDADE', style='.', figsize=(20,8), alpha=1)
```
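The window-level `Q_i`/`K_i` quantities computed above follow the spirit of Edie's generalized definitions, where flow is total distance traveled divided by area × time and density is total time spent divided by area × time. A minimal, self-contained sketch with purely hypothetical numbers (the area, window length, and probe values below are illustrative, not taken from the data):

```python
import numpy as np

# Edie's generalized definitions over one region and one time window:
#   flow    q = total distance traveled / (area * T)
#   density k = total time spent        / (area * T)
#   speed   v = q / k  (space-mean speed)
area = 1.0e4   # m^2 -- hypothetical hexagon area
T = 600.0      # s   -- a 10-minute window
delta_s = np.array([120.0, 80.0, 200.0])  # m traveled by each probe vehicle
delta_t = np.array([60.0, 40.0, 100.0])   # s spent in the region by each probe

q = delta_s.sum() / (area * T)  # flow
k = delta_t.sum() / (area * T)  # density
v = q / k                       # space-mean speed
print(v)  # 2.0 (m/s)
```

Weighting each region's q and k by its area, as `f_mi` does above, then aggregates these per-region values into network-level Q, K, and V.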
# Layers and Blocks
:label:`sec_model_construction`
When we first introduced neural networks,
we focused on linear models with a single output.
Here, the entire model consists of just a single neuron.
Note that a single neuron
(i) takes some set of inputs;
(ii) generates a corresponding scalar output;
and (iii) has a set of associated parameters that can be updated
to optimize some objective function of interest.
Then, once we started thinking about networks with multiple outputs,
we leveraged vectorized arithmetic
to characterize an entire layer of neurons.
Just like individual neurons,
layers (i) take a set of inputs,
(ii) generate corresponding outputs,
and (iii) are described by a set of tunable parameters.
When we worked through softmax regression,
a single layer was itself the model.
However, even when we subsequently
introduced MLPs,
we could still think of the model as
retaining this same basic structure.
Interestingly, for MLPs,
both the entire model and its constituent layers
share this structure.
The entire model takes in raw inputs (the features),
generates outputs (the predictions),
and possesses parameters
(the combined parameters from all constituent layers).
Likewise, each individual layer ingests inputs
(supplied by the previous layer),
generates outputs (the inputs to the subsequent layer),
and possesses a set of tunable parameters that are updated
according to the signal that flows backwards
from the subsequent layer.
While you might think that neurons, layers, and models
give us enough abstractions to go about our business,
it turns out that we often find it convenient
to speak about components that are
larger than an individual layer
but smaller than the entire model.
For example, the ResNet-152 architecture,
which is wildly popular in computer vision,
possesses hundreds of layers.
These layers consist of repeating patterns of *groups of layers*. Implementing such a network one layer at a time can grow tedious.
This concern is not just hypothetical---such
design patterns are common in practice.
The ResNet architecture mentioned above
won the 2015 ImageNet and COCO computer vision competitions
for both recognition and detection :cite:`He.Zhang.Ren.ea.2016`
and remains a go-to architecture for many vision tasks.
Similar architectures in which layers are arranged
in various repeating patterns
are now ubiquitous in other domains,
including natural language processing and speech.
To implement these complex networks,
we introduce the concept of a neural network *block*.
A block could describe a single layer,
a component consisting of multiple layers,
or the entire model itself!
One benefit of working with the block abstraction
is that they can be combined into larger artifacts,
often recursively. This is illustrated in :numref:`fig_blocks`. By defining code to generate blocks
of arbitrary complexity on demand,
we can write surprisingly compact code
and still implement complex neural networks.

:label:`fig_blocks`
From a programming standpoint, a block is represented by a *class*.
Any subclass of it must define a forward propagation function
that transforms its input into output
and must store any necessary parameters.
Note that some blocks do not require any parameters at all.
Finally, a block must possess a backpropagation function,
for purposes of calculating gradients.
Fortunately, due to some behind-the-scenes magic
supplied by automatic differentiation
(introduced in :numref:`sec_autograd`),
when defining our own block,
we only need to worry about parameters
and the forward propagation function.
[**To begin, we revisit the code
that we used to implement MLPs**]
(:numref:`sec_mlp_concise`).
The following code generates a network
with one fully-connected hidden layer
with 256 units and ReLU activation,
followed by a fully-connected output layer
with 10 units (no activation function).
```
from mxnet import np, npx
from mxnet.gluon import nn
npx.set_np()
net = nn.Sequential()
net.add(nn.Dense(256, activation='relu'))
net.add(nn.Dense(10))
net.initialize()
X = np.random.uniform(size=(2, 20))
net(X)
```
In this example, we constructed
our model by instantiating an `nn.Sequential`,
assigning the returned object to the `net` variable.
Next, we repeatedly call its `add` function,
appending layers in the order
that they should be executed.
In short, `nn.Sequential` defines a special kind of `Block`,
the class that represents a block in Gluon.
It maintains an ordered list of constituent `Block`s.
The `add` function simply facilitates
the addition of each successive `Block` to the list.
Note that each layer is an instance of the `Dense` class
which is itself a subclass of `Block`.
The forward propagation (`forward`) function is also remarkably simple:
it chains each `Block` in the list together,
passing the output of each as the input to the next.
Note that until now, we have been invoking our models
via the construction `net(X)` to obtain their outputs.
This is actually just shorthand for `net.forward(X)`,
a slick Python trick achieved via
the `Block` class's `__call__` function.
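The mechanism is plain Python: defining `__call__` on a class makes its instances callable. A minimal sketch of the idea (this is not Gluon's actual `Block` implementation, which also handles parameter and child-block bookkeeping):

```python
class TinyBlock:
    def __call__(self, X):
        # Calling the instance delegates to the forward propagation function,
        # so `net(X)` and `net.forward(X)` are interchangeable
        return self.forward(X)

class Doubler(TinyBlock):
    def forward(self, X):
        return 2 * X

net = Doubler()
print(net(3), net.forward(3))  # 6 6
```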
## [**A Custom Block**]
Perhaps the easiest way to develop intuition
about how a block works
is to implement one ourselves.
Before we implement our own custom block,
we briefly summarize the basic functionality
that each block must provide:
1. Ingest input data as arguments to its forward propagation function.
1. Generate an output by having the forward propagation function return a value. Note that the output may have a different shape from the input. For example, the first fully-connected layer in our model above ingests an input of arbitrary dimension but returns an output of dimension 256.
1. Calculate the gradient of its output with respect to its input, which can be accessed via its backpropagation function. Typically this happens automatically.
1. Store and provide access to those parameters necessary
to execute the forward propagation computation.
1. Initialize model parameters as needed.
In the following snippet,
we code up a block from scratch
corresponding to an MLP
with one hidden layer with 256 hidden units,
and a 10-dimensional output layer.
Note that the `MLP` class below inherits the class that represents a block.
We will heavily rely on the parent class's functions,
supplying only our own constructor (the `__init__` function in Python) and the forward propagation function.
```
class MLP(nn.Block):
# Declare a layer with model parameters. Here, we declare two
# fully-connected layers
def __init__(self, **kwargs):
# Call the constructor of the `MLP` parent class `Block` to perform
# the necessary initialization. In this way, other function arguments
# can also be specified during class instantiation, such as the model
# parameters, `params` (to be described later)
super().__init__(**kwargs)
self.hidden = nn.Dense(256, activation='relu') # Hidden layer
self.out = nn.Dense(10) # Output layer
# Define the forward propagation of the model, that is, how to return the
# required model output based on the input `X`
def forward(self, X):
return self.out(self.hidden(X))
```
Let us first focus on the forward propagation function.
Note that it takes `X` as the input,
calculates the hidden representation
with the activation function applied,
and outputs its logits.
In this `MLP` implementation,
both layers are instance variables.
To see why this is reasonable, imagine
instantiating two MLPs, `net1` and `net2`,
and training them on different data.
Naturally, we would expect them
to represent two different learned models.
We [**instantiate the MLP's layers**]
in the constructor
(**and subsequently invoke these layers**)
on each call to the forward propagation function.
Note a few key details.
First, our customized `__init__` function
invokes the parent class's `__init__` function
via `super().__init__()`,
sparing us the pain of restating
boilerplate code applicable to most blocks.
We then instantiate our two fully-connected layers,
assigning them to `self.hidden` and `self.out`.
Note that unless we implement a new operator,
we need not worry about the backpropagation function
or parameter initialization.
The system will generate these functions automatically.
Let us try this out.
```
net = MLP()
net.initialize()
net(X)
```
A key virtue of the block abstraction is its versatility.
We can subclass a block to create layers
(such as the fully-connected layer class),
entire models (such as the `MLP` class above),
or various components of intermediate complexity.
We exploit this versatility
throughout the following chapters,
such as when addressing
convolutional neural networks.
## [**The Sequential Block**]
We can now take a closer look
at how the `Sequential` class works.
Recall that `Sequential` was designed
to daisy-chain other blocks together.
To build our own simplified `MySequential`,
we just need to define two key functions:
1. A function to append blocks one by one to a list.
2. A forward propagation function to pass an input through the chain of blocks, in the same order as they were appended.
The following `MySequential` class delivers the same
functionality of the default `Sequential` class.
```
class MySequential(nn.Block):
def add(self, block):
# Here, `block` is an instance of a `Block` subclass, and we assume
# that it has a unique name. We save it in the member variable
# `_children` of the `Block` class, and its type is OrderedDict. When
# the `MySequential` instance calls the `initialize` function, the
# system automatically initializes all members of `_children`
self._children[block.name] = block
def forward(self, X):
# OrderedDict guarantees that members will be traversed in the order
# they were added
for block in self._children.values():
X = block(X)
return X
```
The `add` function adds a single block
to the ordered dictionary `_children`.
You might wonder why every Gluon `Block`
possesses a `_children` attribute
and why we used it rather than just
define a Python list ourselves.
In short, the chief advantage of `_children`
is that during our block's parameter initialization,
Gluon knows to look inside the `_children`
dictionary to find sub-blocks whose
parameters also need to be initialized.
When our `MySequential`'s forward propagation function is invoked,
each added block is executed
in the order in which they were added.
We can now reimplement an MLP
using our `MySequential` class.
```
net = MySequential()
net.add(nn.Dense(256, activation='relu'))
net.add(nn.Dense(10))
net.initialize()
net(X)
```
Note that this use of `MySequential`
is identical to the code we previously wrote
for the `Sequential` class
(as described in :numref:`sec_mlp_concise`).
## [**Executing Code in the Forward Propagation Function**]
The `Sequential` class makes model construction easy,
allowing us to assemble new architectures
without having to define our own class.
However, not all architectures are simple daisy chains.
When greater flexibility is required,
we will want to define our own blocks.
For example, we might want to execute
Python's control flow within the forward propagation function.
Moreover, we might want to perform
arbitrary mathematical operations,
not simply relying on predefined neural network layers.
You might have noticed that until now,
all of the operations in our networks
have acted upon our network's activations
and its parameters.
Sometimes, however, we might want to
incorporate terms
that are neither the result of previous layers
nor updatable parameters.
We call these *constant parameters*.
Say for example that we want a layer
that calculates the function
$f(\mathbf{x},\mathbf{w}) = c \cdot \mathbf{w}^\top \mathbf{x}$,
where $\mathbf{x}$ is the input, $\mathbf{w}$ is our parameter,
and $c$ is some specified constant
that is not updated during optimization.
So we implement a `FixedHiddenMLP` class as follows.
```
class FixedHiddenMLP(nn.Block):
def __init__(self, **kwargs):
super().__init__(**kwargs)
# Random weight parameters created with the `get_constant` function
# are not updated during training (i.e., constant parameters)
self.rand_weight = self.params.get_constant(
'rand_weight', np.random.uniform(size=(20, 20)))
self.dense = nn.Dense(20, activation='relu')
def forward(self, X):
X = self.dense(X)
# Use the created constant parameters, as well as the `relu` and `dot`
# functions
X = npx.relu(np.dot(X, self.rand_weight.data()) + 1)
# Reuse the fully-connected layer. This is equivalent to sharing
# parameters with two fully-connected layers
X = self.dense(X)
# Control flow
while np.abs(X).sum() > 1:
X /= 2
return X.sum()
```
In this `FixedHiddenMLP` model,
we implement a hidden layer whose weights
(`self.rand_weight`) are initialized randomly
at instantiation and are thereafter constant.
This weight is not a model parameter
and thus it is never updated by backpropagation.
The network then passes the output of this "fixed" layer
through a fully-connected layer.
Note that before returning the output,
our model did something unusual.
We ran a while-loop, testing
on the condition that its $L_1$ norm is larger than $1$,
and dividing our output vector by $2$
until it satisfied the condition.
Finally, we returned the sum of the entries in `X`.
To our knowledge, no standard neural network
performs this operation.
Note that this particular operation may not be useful
in any real-world task.
Our point is only to show you how to integrate
arbitrary code into the flow of your
neural network computations.
```
net = FixedHiddenMLP()
net.initialize()
net(X)
```
We can [**mix and match various
ways of assembling blocks together.**]
In the following example, we nest blocks
in some creative ways.
```
class NestMLP(nn.Block):
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.net = nn.Sequential()
self.net.add(nn.Dense(64, activation='relu'),
nn.Dense(32, activation='relu'))
self.dense = nn.Dense(16, activation='relu')
def forward(self, X):
return self.dense(self.net(X))
chimera = nn.Sequential()
chimera.add(NestMLP(), nn.Dense(20), FixedHiddenMLP())
chimera.initialize()
chimera(X)
```
## Efficiency
The avid reader might start to worry
about the efficiency of some of these operations.
After all, we have lots of dictionary lookups,
code execution, and lots of other Pythonic things
taking place in what is supposed to be
a high-performance deep learning library.
The problems of Python's [global interpreter lock](https://wiki.python.org/moin/GlobalInterpreterLock) are well known.
In the context of deep learning,
we may worry that our extremely fast GPU(s)
might have to wait until a puny CPU
runs Python code before it gets another job to run.
The best way to speed up Python is by avoiding it altogether.
One way that Gluon does this is by allowing for
*hybridization*, which will be described later.
Here, the Python interpreter executes a block
the first time it is invoked.
The Gluon runtime records what is happening
and the next time around it short-circuits calls to Python.
This can accelerate things considerably in some cases
but care needs to be taken when control flow (as above)
leads down different branches on different passes through the net.
We recommend that the interested reader checks out
the hybridization section (:numref:`sec_hybridize`)
to learn about compilation after finishing the current chapter.
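A toy, pure-Python sketch of this record-and-replay idea, and of why data-dependent control flow needs care when tracing (this is only an illustration; Gluon's real mechanism is `net.hybridize()` on hybridizable blocks):

```python
# Toy, pure-Python illustration of record-and-replay tracing (not Gluon's code).
class TracedNet:
    def __init__(self):
        self.trace = None  # op sequence recorded on the first call

    def forward(self, x):
        ops = []
        if x > 0:  # data-dependent branch: may differ between inputs!
            ops.append(('add', 1))
            x = x + 1
        ops.append(('mul', 2))
        return x * 2, ops

    def __call__(self, x):
        if self.trace is None:
            y, self.trace = self.forward(x)  # first call: run Python, record ops
            return y
        for op, arg in self.trace:  # later calls: replay without re-deciding branches
            x = x + arg if op == 'add' else x * arg
        return x

net = TracedNet()
print(net(3))   # 8   (traced with the x > 0 branch taken)
print(net(-3))  # -4  (replay wrongly applies 'add'; an untraced run would give -6)
```

The second call silently reuses the branch decision from the first pass, which is exactly the hazard described above.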
## Summary
* Layers are blocks.
* Many layers can comprise a block.
* Many blocks can comprise a block.
* A block can contain code.
* Blocks take care of lots of housekeeping, including parameter initialization and backpropagation.
* Sequential concatenations of layers and blocks are handled by the `Sequential` block.
## Exercises
1. What kinds of problems will occur if you change `MySequential` to store blocks in a Python list?
1. Implement a block that takes two blocks as an argument, say `net1` and `net2` and returns the concatenated output of both networks in the forward propagation. This is also called a parallel block.
1. Assume that you want to concatenate multiple instances of the same network. Implement a factory function that generates multiple instances of the same block and build a larger network from it.
[Discussions](https://discuss.d2l.ai/t/54)
**Note.** *The following notebook contains code in addition to text and figures. By default, the code has been hidden. You can click the icon that looks like an eye in the toolbar above to show the code. To run the code, click the cell menu, then "run all".*
```
# Import packages, set preferences, etc.
%matplotlib inline
from brian2 import *
import ipywidgets as ipw
from numpy.random import poisson
from scipy.integrate import quad
from scipy.special import erf
import warnings
warnings.filterwarnings("ignore")
prefs.codegen.target = 'cython'
defaultclock.dt = 0.05*ms
%%html
<!-- hack to improve styling of ipywidgets sliders -->
<style type="text/css">
.widget-label {
min-width: 35ex;
max-width: 35ex;
}
.widget-hslider {
width: 100%;
}
</style>
```
This notebook demonstrates the analytical solution to the diffusion approximation equations in [the basic model](basic_model.ipynb). These equations are from Brunel 2000 "Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons", appendix A, which cites Tuckwell 1988 "Introduction to Theoretical Neurobiology".
Without refractoriness, the mean interspike interval is
$$m=\tau\sqrt{\pi}\int_{-\mu/\sigma}^{(1-\mu)/\sigma}e^{x^2}(1+\mathrm{erf}(x))\,\mathrm{d}x$$
so the firing rate is $1/m$. The CV is
$$CV^2 = 2\pi\tau^2/m^2\int_{-\mu/\sigma}^{(1-\mu)/\sigma}e^{x^2}\int_{-\infty}^x e^{y^2}(1+\mathrm{erf}(y))^2\,\mathrm{d}y\,\mathrm{d}x$$
With refractoriness, the mean interspike interval is
$$\hat{m} = m+t_\mathrm{ref}$$
and the CV is
$$\hat{CV}=CV\;m\,/\,\hat{m}$$
The accuracy of this analytical formulation is demonstrated in the interactive figure below.
```
def analytical_fr_cv(mu, sigma, tau, refrac):
ytheta = (1-mu)/sigma
yr = -mu/sigma
r0 = 1/(tau*sqrt(pi)*quad(lambda x: exp(x*x)*(1+erf(x)), yr, ytheta)[0])
c = quad(lambda x: exp(x*x)*quad(lambda y: exp(y*y)*(1+erf(y))**2, -20, x)[0], yr, ytheta)[0]
cv2 = 2*pi*tau**2*r0**2*c
cv = sqrt(cv2)
rate_ref = 1/(1/r0+refrac)
cv_ref = cv*rate_ref/r0
return rate_ref, cv_ref
def reduced_model(mu=1.5, sigma=0.5, tau_ms=10, t_ref_ms=0.1):
# Set parameters
repeats = 1000
duration = 1000*ms
tau = tau_ms*ms
t_ref = t_ref_ms*ms
# Define and run the model
eqs = '''
dv/dt = (mu-v)/tau+sigma*xi*tau**-0.5 : 1 (unless refractory)
'''
G = NeuronGroup(repeats, eqs, threshold='v>1', reset='v=0',
refractory=t_ref, method='euler')
spikemon = SpikeMonitor(G)
statemon = StateMonitor(G, 'v', record=[0])
run(duration)
# Compute ISI histograms
isi = []
for train in spikemon.spike_trains().values():
train.sort()
isi.append(diff(train))
isi = hstack(isi)
cv = std(isi)/mean(isi)
# Plot results
figure(figsize=(10, 2.5))
subplot(131)
plot(spikemon.t/ms, spikemon.i, ',k')
xlabel('Time (ms)')
ylabel('Repeat number')
title('Spike raster plot')
xlim(0, duration/ms)
ylim(0, repeats)
subplot(132)
plot(statemon.t[:1000]/ms, statemon.v.T[:1000], '-k')
xlabel('Time (ms)')
ylabel('v')
title('Membrane potential trace')
#xlim(0, duration/ms)
ylim(-0.2, 1.2)
axhline(0, ls=':', c='r')
axhline(1, ls=':', c='g')
subplot(133)
hist(isi/ms, fc='k', bins=arange(60)*0.5)
yticks([])
ylabel('Frequency')
xlabel('ISI (ms)')
title('Interspike interval histogram')
#title('CV = %.2f' % cv)
text(0.95, 0.9, 'CV = %.2f' % cv, ha='right', va='top',
bbox=dict(facecolor='white'),
transform=gca().transAxes)
tight_layout()
sim_fr = spikemon.num_spikes/(duration*repeats)
sim_cv = cv
an_fr, an_cv = analytical_fr_cv(mu, sigma, tau, t_ref)
print('Firing rate: simulated=%d sp/s, analytical=%d sp/s' % (sim_fr, an_fr))
print('CV: simulated=%.2f, analytical=%.2f' % (sim_cv, an_cv))
display(ipw.interact(reduced_model,
tau_ms=ipw.FloatSlider(
min=0.1, max=20.0, step=0.1, value=10.0,
continuous_update=False,
description=r"Membrane time constant $\tau$ (ms)"),
t_ref_ms=ipw.FloatSlider(
min=0, max=5, step=0.05, value=0.1,
continuous_update=False,
description=r"Refractory period $t_\mathrm{ref}$ (ms)"),
mu=ipw.FloatSlider(
min=0, max=5, step=0.05, value=1.5,
continuous_update=False,
description=r"Mean current $\mu$"),
sigma=ipw.FloatSlider(
min=0, max=5, step=0.05, value=0.5,
continuous_update=False,
description=r"Standard deviation of current $\sigma$"),
));
```
# Project 2: Digit Recognition
## Statistical Machine Learning (COMP90051), Semester 2, 2017
*Copyright the University of Melbourne, 2017*
### Submitted by: Yitong Chen
### Student number: 879326
### Kaggle-in-class username: *YitongChen*
In this project, you will be applying machine learning for recognising digits from real world images. The project worksheet is a combination of text, pre-implemented code and placeholders where we expect you to add your code and answers. Your code should produce the desired result within a reasonable amount of time. Please follow the instructions carefully, **write your code and give answers only where specifically asked**. In addition to worksheet completion, you are also expected to participate in a **live competition with other students in the class**. The competition will be run using an on-line platform called Kaggle.
**Marking:** You can get up to 33 marks for Project 2. The sum of marks for Project 1 and Project 2 is then capped at 50 marks.
**Due date:** Wednesday 11/Oct/17, 11:59pm AEST (LMS components); and Kaggle competition closes Monday 09/Oct/17, 11:59pm AEST.
**Late submissions** will incur a 10% penalty per calendar day
**Submission materials**
- **Worksheet**: Fill in your code and answers within this IPython Notebook worksheet.
- **Competition**: Follow the instructions provided in the corresponding section of this worksheet. Your competition submissions should be made via Kaggle website.
- **Report**: The report about your competition entry should be submitted to the LMS as a PDF file (see format requirements in `2.2`).
- **Code**: The source code behind your competition entry.
The **Worksheet**, **Report** and **Code** should be bundled into a `.zip` file (not 7z, rar, tar, etc) and submitted in the LMS. Marks will be deducted for submitting files in other formats, or we may elect not to mark them at all.
**Academic Misconduct:** Your submission should contain only your own work and ideas. Where asked to write code, you cannot re-use someone else's code, and should write your own implementation. We will be checking submissions for originality and will invoke the University’s <a href="http://academichonesty.unimelb.edu.au/policy.html">Academic Misconduct policy</a> where inappropriate levels of collusion or plagiarism are deemed to have taken place.
**Table of Contents**
1. Handwritten Digit Recognition **(16 marks)**
1. Linear Approach
2. Basis Expansion
3. Kernel Perceptron
4. Dimensionality Reduction
2. Kaggle Competition **(17 marks)**
1. Making Submissions
2. Method Description
## 1. Handwritten Digit Recognition
Handwritten digit recognition can be framed as a classification task: given a bitmap image as input, predict the digit type (0, 1, ..., 9). The pixel values in each position of the image form our features, and the digit type is the class. We are going to use a dataset where the digits are represented as *28 x 28* bitmap images. Each pixel value ranges between 0 and 1, and represents the monochrome ink intensity at that position. Each image matrix has been flattened into one long feature vector, by concatenating each row of pixels.
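For instance, the row-wise flattening and its inverse can be sketched with NumPy (the array below is synthetic, not taken from the dataset):

```python
import numpy as np

img = np.arange(784).reshape(28, 28)  # a synthetic 28 x 28 "bitmap"
vec = img.reshape(-1)                 # concatenate the rows into one 784-long feature vector
print(vec.shape)                      # (784,)
print(np.array_equal(vec.reshape(28, 28), img))  # True: the bitmap is recoverable
```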
In this part of the project, we will only use images of two digits, namely "7" and "9". As such, we will be working on a binary classification problem. *Throughout this first section, our solution is going to be based on the perceptron classifier.*
Start by setting up the working environment and loading the dataset. *Do not override the variable `digits`, as it will be used throughout this section.*
```
%pylab inline
digits = np.loadtxt('digits_7_vs_9.csv', delimiter=' ')
```
Take some time to explore the dataset. Note that each image of a "7" is labeled -1, and each image of a "9" is labeled +1.
```
# extract a stack of 28x28 bitmaps
X = digits[:, 0:784]
# extract labels for each bitmap
y = digits[:, 784:785]
# display a single bitmap and print its label
bitmap_index = 0
plt.imshow(X[bitmap_index,:].reshape(28, 28), interpolation=None)
print(y[bitmap_index])
```
You can also display several bitmaps at once using the following code.
```
def gallery(array, ncols):
nindex, height, width = array.shape
nrows = nindex//ncols
result = (array.reshape((nrows, ncols, height, width))
.swapaxes(1,2)
.reshape((height*nrows, width*ncols)))
return result
ncols = 10
result = gallery(X.reshape((300, 28, 28))[:ncols**2], ncols)
plt.figure(figsize=(10,10))
plt.imshow(result, interpolation=None)
```
### 1.1 Linear Approach
We are going to use the perceptron for our binary classification task. Recall that the perceptron is a linear method. For this first step, we will not apply any non-linear transformation to the data.
Implement and fit a perceptron to the data above. You may use the implementation from *sklearn*, or the implementation from one of our workshops. Report the error of the fit as the proportion of misclassified examples.
<br />
<font color='red'>**Write your code in the cell below ...**</font>
```
## your code here
# To do this, simply concatenate a column of 1s to the data matrix.
Phi = np.column_stack([np.ones(X.shape[0]), X])
# Prediction function
def perc_pred(phi, w):
return np.sign(np.sign(np.dot(phi, w)) + 0.5)
# Training algorithm
def train(data, target, epochs, w, eta=1.0):
for e in range(epochs):
for i in range(data.shape[0]):
yhat = perc_pred(data[i,:], w)
if yhat != target[i]:
w += eta * target[i] * data[i]
return w
# Run the training algorithm for 100 epochs to learn the weights
w = np.zeros(Phi.shape[1])
w = train(Phi, y, 100, w)
print(w.shape)
Error = np.sum(perc_pred(Phi, w).reshape(y.shape[0],1) != y) / float(y.shape[0])
print(Error)
```
One of the advantages of a linear approach is the ability to interpret results. To this end, plot the parameters learned above. Exclude the bias term if you were using it, set $w$ to be the learned perceptron weights, and run the following command.
```
#print(w.reshape(1,785))
w = np.delete(w.reshape(1,785),0,1)
plt.imshow(w.reshape(28,28), interpolation=None)
```
In a few sentences, describe what you see, referencing which features are most important for classification. Report any evidence of overfitting.
<font color='red'>**Write your answer here ...**</font> (as a *markdown* cell)
The importance of a feature (pixel) is reflected in the magnitude and sign of its learned weight: pixels with strongly positive weights vote for "9", pixels with strongly negative weights vote for "7", and pixels near the image border have weights close to zero, contributing little to the decision.
The resulting weight image is blurry and noisy, and the proportion of misclassified training examples is 0, i.e., the perceptron fits the training data perfectly, which is possible evidence of overfitting.
Split the data into training and heldout validation partitions by holding out a random 25% sample of the data. Evaluate the error over the course of a training run, and plot the training and validation error rates as a function of the number of passes over the training dataset.
<br />
<font color='red'>**Write your code in the cell below ...**</font>
```
## your code here
from sklearn.model_selection import train_test_split
phi_train, phi_test, y_train, y_test = train_test_split(Phi, y,
test_size=0.25,
random_state=0)
w_hat = np.zeros(Phi.shape[1])
T = 60
train_error = np.zeros(T)
heldout_error = np.zeros(T)
for ep in range(T):
# here we use a learning rate, which decays with each epoch
lr = 1./(1+ep)
w_hat = train(Phi, y, 1, w_hat, eta = lr )
#print(w_hat)
    train_error[ep] = np.sum(perc_pred(phi_train, w_hat).reshape(y_train.shape[0],1) != y_train) / float(y_train.shape[0])
    heldout_error[ep] = np.sum(perc_pred(phi_test, w_hat).reshape(y_test.shape[0],1) != y_test) / float(y_test.shape[0])
plot(train_error, label = 'Train Error')
plot(heldout_error, label = 'Held-out Error')
plt.legend()
xlabel('Epochs')
ylabel('Error')
```
In a few sentences, describe the shape of the curves, and compare the two. Now consider if we were to stop training early, can you choose a point such that you get the best classification performance? Justify your choice.
<font color='red'>**Write your answer here ...**</font> (as a *markdown* cell)
Both error curves start very high, drop sharply over the first few epochs, rise slightly at around epoch 7, and finally settle at an even lower level from around epoch 9, so stopping training very early would hurt performance. A sensible early-stopping point is around epoch 10, where both the training and held-out error rates reach a low plateau.
Now that we have tried a simple approach, we are going to implement several non-linear approaches to our task. Note that we are still going to use a linear method (the perceptron), but combine this with a non-linear data transformation. We start with basis expansion.
### 1.2 Basis Expansion
Apply Radial Basis Function (RBF)-based transformation to the data, and fit a perceptron model. Recall that the RBF basis is defined as
$$\varphi_l(\mathbf{x}) = \exp\left(-\frac{||\mathbf{x} - \mathbf{z}_l||^2}{\sigma^2}\right)$$
where $\mathbf{z}_l$ is centre of the $l^{th}$ RBF. We'll use $L$ RBFs, such that $\varphi(\mathbf{x})$ is a vector with $L$ elements. The spread parameter $\sigma$ will be the same for each RBF.
*Hint: You will need to choose the values for $\mathbf{z}_l$ and $\sigma$. If the input data were 1D, the centres $\mathbf{z}_l$ could be uniformly spaced on a line. However, here we have 784-dimensional input. For this reason you might want to use some of the training points as centres, e.g., $L$ randomly chosen "7"s and "9"s.*
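For reference, the RBF expansion defined above can also be computed without explicit loops. The following sketch (the function name `rbf_features` and the toy data are illustrative, not part of the required solution) uses numpy broadcasting to evaluate $\varphi_l(\mathbf{x}) = \exp(-||\mathbf{x} - \mathbf{z}_l||^2 / \sigma^2)$ for every data point and centre at once:

```python
import numpy as np

def rbf_features(X, Z, sigma):
    """Map each row of X to L RBF features, one per centre in Z.

    X: (N, d) data matrix; Z: (L, d) centres; sigma: shared spread.
    Returns an (N, L) matrix with entries exp(-||x_i - z_l||^2 / sigma^2).
    """
    # Squared Euclidean distances between every x_i and every z_l,
    # via broadcasting: (N, 1, d) - (1, L, d) -> (N, L, d) -> sum -> (N, L).
    sq_dist = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-sq_dist / sigma ** 2)

# Toy usage: 5 points in 2-D, with 3 centres drawn from the data itself,
# so the first three rows have a feature equal to exp(0) = 1 on the diagonal.
rng = np.random.default_rng(0)
X_toy = rng.random((5, 2))
Z_toy = X_toy[:3]
Phi = rbf_features(X_toy, Z_toy, sigma=0.5)
print(Phi.shape)  # → (5, 3)
```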
<br />
<font color='red'>**Write your code in the cell below ...**</font>
```
## your code here
#Each RBF basis function should take x (a 784 dimensional vector), and return a scalar.
#Each RBF will be parameterised by a centre (784 dimensional vector) and a length scale (scalar).
#The return scalar is computed based on the distance between x and the centre, as shown in the mathematical formulation.
#Consequently phi(x) should return a vector of size L, and accordingly Phi(X) for the whole dataset will be a matrix N x L.
# Input:
# x - is a column vector of input values
# z - is a scalar that controls location
# s - is a scalar that controls spread
#
# Output:
# v - contains the values of RBF evaluated for each element x
# v has the same dimensionality as x
def radial_basis_function(x, z, s):
# ensure that t is a column vector
'''
x = np.array(x)
if x.size == 1:
x.shape = (1,1)
else:
x_length = x.shape[0]
x.shape = (x_length, 1)
'''
# compute RBF value
r = np.linalg.norm(x - z)
v = np.exp(-r**2/(s**2))
return v
# Input:
# x - is an Nx784 column vector
# z - is an Lx784 column vector with locations for each of M RBFs
# s - is a scalar that controls spread, shared between all RBFs
#
# Output:
# Phi - is an NxL matrix, such that Phi(i,j) is the
# RBF transformation of x(i) using location z(j) and scale s
def expand_to_RBF(x, z, s):
#... your code here ...
#... in your code use "radial_basis_function" from above ...
L = z.shape[0]
N = x.shape[0]
Phi = np.zeros((N, L))
for i in range(N):
for j in range(L):
            v = radial_basis_function(x[i], z[j], s)
Phi[i,j] = v
return Phi
# set L to 60 and sigma to 0.01
l = 60
z = X[np.random.choice(X.shape[0], l, replace=False), :]
sigma = 0.01 # same scale for each RBF
# use "expand_to_RBF" function from above
x = expand_to_RBF(X, z, sigma)
print(x.shape)
x_dummy = np.ones(X.shape[0])
X_expand = np.column_stack((x_dummy, x))
print(X_expand.shape)
# Run the training algorithm for 100 epochs to learn the weights
w = np.zeros(X_expand.shape[1])
w = train(X_expand, y, 100, w)
print(w.shape)
Error = np.sum(perc_pred(X_expand, w).reshape(y.shape[0],1) != y) / float(y.shape[0])
print(Error)
```
Now compute the validation error for your RBF-perceptron and use this to choose good values of $L$ and $\sigma$. Show a plot of the effect of changing each of these parameters, and justify your parameter choice.
<br />
<font color='red'>**Write your code in the cell below ...**</font>
```
## your code here
## when sigma is always 0.01
sigma = 0.01 # same scale for each RBF
train_error = np.zeros(300)
heldout_error = np.zeros(300)
for L in range(300):
z = X[np.random.choice(X.shape[0], L, replace=False), :]
x = expand_to_RBF(X, z, sigma)
#print(x.shape)
x_dummy = np.ones(X.shape[0])
X_expand = np.column_stack((x_dummy, x))
w_hat = np.zeros(X_expand.shape[1])
w_hat = train(X_expand, y, 100, w_hat)
phi_train, phi_test, y_train, y_test = train_test_split(X_expand, y,
test_size=0.25,
random_state=0)
#print(w_hat)
    train_error[L] = np.sum(perc_pred(phi_train, w_hat).reshape(y_train.shape[0],1) != y_train) / float(y_train.shape[0])
    heldout_error[L] = np.sum(perc_pred(phi_test, w_hat).reshape(y_test.shape[0],1) != y_test) / float(y_test.shape[0])
plt.title('Errors for sigma = 0.01')
plot(train_error, label = 'Train Error')
plot(heldout_error, label = 'Held-out Error')
plt.legend()
xlabel('number of L')
ylabel('Error')
# when L is always 300
l = 300
train_error_list = []
heldout_error_list = []
sigmas = [1e-10, 1e-6, 1e-4, 1e-2, 1, 100]
for n, s in enumerate(sigmas):
z = X[np.random.choice(X.shape[0], l, replace=False), :]
x = expand_to_RBF(X, z, s)
#print(x.shape)
x_dummy = np.ones(X.shape[0])
X_expand = np.column_stack((x_dummy, x))
w_hat = np.zeros(X_expand.shape[1])
w_hat = train(X_expand, y, 100, w_hat)
phi_train, phi_test, y_train, y_test = train_test_split(X_expand, y,
test_size=0.25,
random_state=0)
#print(w_hat)
    train_error_list.append(np.sum(perc_pred(phi_train, w_hat).reshape(y_train.shape[0],1) != y_train) / float(y_train.shape[0]))
    heldout_error_list.append(np.sum(perc_pred(phi_test, w_hat).reshape(y_test.shape[0],1) != y_test) / float(y_test.shape[0]))
plt.title('Errors for L = 300')
plt.plot(sigmas[:6], np.asarray(train_error_list))
plt.plot(sigmas[:6], np.asarray(heldout_error_list))
plt.xlim((min(sigmas), max(sigmas)))
plt.xscale('log')
plt.ylim(0, 1)
#plot(heldout_error, label = 'Held-out Error')
xlabel('sigma')
ylabel('Error')
```
<font color='red'>**Write your justification here ...**</font> (as a *markdown* cell)
In the first plot we fixed sigma at 0.01 and varied L; the error rate decreases as L approaches 300, so we fixed L at 300 when testing sigma in the second plot.
In the second plot, sigma appears to have little effect on performance over the range tested.
As a result, we choose L = 300 with sigma = 0.01.
### 1.3 Kernel Perceptron
Next, instead of directly computing a feature space transformation, we are going to use the kernel trick. Specifically, we are going to use the kernelised version of perceptron in combination with a few different kernels.
*In this section, you cannot use any libraries other than `numpy` and `matplotlib`.*
First, implement linear, polynomial and RBF kernels. The linear kernel is simply a dot product of its inputs, i.e., there is no feature space transformation. Polynomial and RBF kernels should be implemented as defined in the lecture slides.
<br />
<font color='red'>**Write your code in the cell below ...**</font>
```
# Input:
# u,v - column vectors of the same dimensionality
#
# Output:
# v - a scalar
def linear_kernel(u, v):
## your code here
z = np.dot(u.T, v)
return z
# Input:
# u,v - column vectors of the same dimensionality
# c,d - scalar parameters of the kernel as defined in lecture slides
#
# Output:
# v - a scalar
def polynomial_kernel(u, v, c=0, d=3):
## your code here
z = (np.dot(u.T, v)+c)**d
return z
# Input:
# u,v - column vectors of the same dimensionality
# gamma - scalar parameter of the kernel as defined in lecture slides
#
# Output:
# v - a scalar
def rbf_kernel(u, v, gamma=1):
## your code here
r = np.linalg.norm(u - v)
v = np.exp(-gamma*(r**2))
return v
```
Kernel perceptron was a "green slides" topic, and you will not be asked about this method in the exam. Here, you are only asked to implement a simple prediction function following the provided equation. In kernel perceptron, the prediction for instance $\mathbf{x}$ is made based on the sign of
$$w_0 + \sum_{i=1}^{n}\alpha_i y_i K(\mathbf{x}_i, \mathbf{x})$$
Here $w_0$ is the bias term, $n$ is the number of training examples, $\alpha_i$ are the learned weights, $(\mathbf{x}_i, y_i)$ are the training instances and their labels, and $K$ is the kernel.
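To make the prediction rule concrete, here is a tiny hand-checkable sketch (the toy data, the $\alpha$ values, and the names `linear_k`/`predict` are illustrative only), using the linear kernel:

```python
import numpy as np

# Toy 1-D training set: two points with labels -1 and +1, and
# made-up alpha values as if each point triggered one update.
X_toy = np.array([[0.0], [2.0]])
y_toy = np.array([-1.0, 1.0])
alpha = np.array([1.0, 1.0])
bias = 0.0

def linear_k(u, v):
    # linear kernel: plain dot product, no feature transformation
    return float(u @ v)

def predict(x):
    # sign of w0 + sum_i alpha_i * y_i * K(x_i, x)
    s = bias + sum(a * yi * linear_k(xi, x)
                   for a, yi, xi in zip(alpha, y_toy, X_toy))
    return np.sign(s)

print(predict(np.array([3.0])), predict(np.array([-1.0])))  # → 1.0 -1.0
```

For x = 3 the sum is 1·(−1)·0 + 1·1·6 = 6 (positive), and for x = −1 it is −2 (negative), matching the printed signs.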
<br />
<font color='red'>**Write your code in the cell below ...**</font>
```
# Input:
# x_test - (r x m) matrix with instances for which to predict labels
# X - (n x m) matrix with training instances in rows
# y - (n x 1) vector with labels
# alpha - (n x 1) vector with learned weights
# bias - scalar bias term
# kernel - a kernel function that follows the same prototype as each of the three kernels defined above
#
# Output:
# y_pred - (r x 1) vector of predicted labels
def kernel_ptron_predict(x_test, X, y, alpha, bias, kernel, c=0 ,d=3 ,gamma=1):
## your code here
    # determine whether x_test is a single instance or a matrix of instances
if x_test.size == 784:
R = 1
else:
R = x_test.shape[0]
#R = int(x_test.shape[0]/784)
#print(R)
x_test = x_test.reshape(R,784)
N = X.shape[0]
y_pred = np.zeros((R,1))
for i in range(R):
for j in range(N):
if kernel == linear_kernel:
y_pred[i] += alpha[j]*y[j]*kernel(x_test[i],X[j])
elif kernel == polynomial_kernel:
y_pred[i] += alpha[j]*y[j]*kernel(x_test[i],X[j],c,d)
else:
y_pred[i] += alpha[j]*y[j]*kernel(x_test[i],X[j],gamma)
y_pred[i] += bias
y_pred[i]= np.sign(y_pred[i])
return y_pred
```
The code for kernel perceptron training is provided below. You can treat this function as a black box, but we encourage you to understand the implementation.
```
# Input:
# X - (n x m) matrix with training instances in rows
# y - (n x 1) vector with labels
# kernel - a kernel function that follows the same prototype as each of the three kernels defined above
# epochs - scalar, number of epochs
#
# Output:
# alpha - (n x 1) vector with learned weights
# bias - scalar bias term
def kernel_ptron_train(X, y, kernel, epochs=100):
n, m = X.shape
alpha = np.zeros(n)
bias = 0
updates = None
for epoch in range(epochs):
print('epoch =', epoch, ', updates =', updates)
updates = 0
schedule = list(range(n))
np.random.shuffle(schedule)
for i in schedule:
y_pred = kernel_ptron_predict(X[i], X, y, alpha, bias, kernel)
if y_pred != y[i]:
alpha[i] += 1
bias += y[i]
updates += 1
if updates == 0:
break
return alpha, bias
```
Now use the above functions to train the perceptron. Use heldout validation, and compute the validation error for this method using each of the three kernels. Write a paragraph or two analysing how the accuracy differs between the different kernels and choices of kernel parameters. Discuss the merits of the kernel approach versus the direct basis expansion approach used in the previous section.
<br />
<font color='red'>**Write your code in the cell below ...**</font>
```
phi_train, phi_test, y_train, y_test = train_test_split(X, y,
test_size=0.25,
random_state=0)
# Linear kernel
print('LINEAR')
alpha, bias = kernel_ptron_train(phi_train, y_train, linear_kernel)
# exclude bias term
#bias = 0
Error = np.sum(kernel_ptron_predict(phi_test, phi_train, y_train, alpha, bias, linear_kernel) != y_test) / float(y_test.shape[0])
print('Error =',Error)
#print(kernel_ptron_predict(phi_test, phi_train, y_train, alpha, bias, linear_kernel))
# Polynomial kernel
print('POLYNOMIAL')
alpha, bias = kernel_ptron_train(phi_train, y_train, polynomial_kernel)
# exclude bias term
#bias = 0
d_list = [0, 1, 10, 100]
for n, s in enumerate(d_list):
Error = np.sum(kernel_ptron_predict(phi_test, phi_train, y_train, alpha, bias, polynomial_kernel, d = s) != y_test) / float(y_test.shape[0])
print('d:',s,'Error =',Error)
c_list = [0, 1e-2, 1, 10, 100]
for n, s in enumerate(c_list):
Error = np.sum(kernel_ptron_predict(phi_test, phi_train, y_train, alpha, bias, polynomial_kernel, c = s) != y_test) / float(y_test.shape[0])
print('c:',s,'Error =',Error)
#print(kernel_ptron_predict(phi_test, phi_train, y_train, alpha, bias, polynomial_kernel))
# RBF kernel
print('RBF')
alpha, bias = kernel_ptron_train(phi_train, y_train, rbf_kernel)
# exclude bias term
#bias = 0
g_list = [ 1e-10, 1e-6, 1e-4, 1e-2, 1, 10, 100]
for n, s in enumerate(g_list):
Error = np.sum(kernel_ptron_predict(phi_test, phi_train, y_train, alpha, bias, rbf_kernel, gamma = s) != y_test) / float(y_test.shape[0])
print('g:',s,'Error =',Error)
#print(kernel_ptron_predict(phi_test, phi_train, y_train, alpha, bias, rbf_kernel))
```
<font color='red'>**Provide your analysis here ...**</font> (as a *markdown* cell)
The polynomial and RBF kernels achieve slightly better accuracy than the linear kernel, but they require a good choice of parameters.
For the polynomial kernel, the best accuracy is obtained with c around 10 and d between 0 and 1.
For the RBF kernel, the best performance is obtained with gamma = 1.
The direct basis expansion relies on a large number of centres L to perform well, whereas the kernel approach reaches high accuracy much more easily, since it implicitly works in a richer feature space without ever computing the transformation explicitly.
### 1.4 Dimensionality Reduction
Yet another approach to working with complex data is to use a non-linear dimensionality reduction. To see how this might work, first apply a couple of dimensionality reduction methods and inspect the results.
```
from sklearn import manifold
# extract a stack of 28x28 bitmaps
#X = digits[:, 0:784]
# extract labels for each bitmap
y = digits[:, 784:785]
#print(y)
X = digits[:, 0:784]
y = np.squeeze(digits[:, 784:785])
print(y)
# n_components refers to the number of dimensions after mapping
# n_neighbors is used for graph construction
X_iso = manifold.Isomap(n_neighbors=30, n_components=2).fit_transform(X)
# n_components refers to the number of dimensions after mapping
embedder = manifold.SpectralEmbedding(n_components=2, random_state=0)
X_se = embedder.fit_transform(X)
f, (ax1, ax2) = plt.subplots(1, 2)
ax1.plot(X_iso[y==-1,0], X_iso[y==-1,1], "bo")
ax1.plot(X_iso[y==1,0], X_iso[y==1,1], "ro")
ax1.set_title('Isomap')
ax2.plot(X_se[y==-1,0], X_se[y==-1,1], "bo")
ax2.plot(X_se[y==1,0], X_se[y==1,1], "ro")
ax2.set_title('spectral')
```
In a few sentences, explain how a dimensionality reduction algorithm can be used for your binary classification task.
<font color='red'>**Write your answer here ...**</font> (as a *markdown* cell)
Dimensionality reduction techniques fall into two families: linear techniques, which use a linear mapping to project the data into a lower-dimensional space, and non-linear techniques, which assume the data lies on a manifold (a surface embedded in the high-dimensional space). For our binary classification task, we can first map the 784-dimensional images onto a low-dimensional manifold, as in the plots above, where the two digit classes become largely separable, and then fit a simple linear classifier such as the perceptron in the reduced space.
Implement such an approach and assess the result. For simplicity, we will assume that both training and test data are available ahead of time, and thus the datasets should be used together for dimensionality reduction, after which you can split off a test set for measuring generalisation error. *Hint: you do not have to reduce number of dimensions to two. You are welcome to use the sklearn library for this question.*
<br />
<font color='red'>**Write your code in the cell below ...**</font>
```
from sklearn.linear_model import Perceptron
# n_components refers to the number of dimensions after mapping
# n_neighbors is used for graph construction
X_iso = manifold.Isomap(n_neighbors=30, n_components=2).fit_transform(X)
# split off the test data
phi_train, phi_test, y_train, y_test = train_test_split(X_iso, y,
                                                        test_size=0.25,
                                                        random_state=0)
# Create and fit the perceptron model on the reduced features
clf = Perceptron()
clf.fit(phi_train, y_train)
Error = np.sum(clf.predict(phi_test) != y_test) / float(y_test.shape[0])
print(Error)
# plot predicted vs actual labels of the test points in the Isomap space
y_pred = clf.predict(phi_test)
f, (ax1, ax2) = plt.subplots(1, 2)
ax1.plot(phi_test[y_pred==-1, 0], phi_test[y_pred==-1, 1], "bo")
ax1.plot(phi_test[y_pred==1, 0], phi_test[y_pred==1, 1], "ro")
ax1.set_title('Predicted test labels (Isomap)')
ax2.plot(phi_test[y_test==-1, 0], phi_test[y_test==-1, 1], "bo")
ax2.plot(phi_test[y_test==1, 0], phi_test[y_test==1, 1], "ro")
ax2.set_title('Actual test labels (Isomap)')
# repeat for the spectral embedding
phi_train, phi_test, y_train, y_test = train_test_split(X_se, y,
                                                        test_size=0.25,
                                                        random_state=0)
clf = Perceptron()
clf.fit(phi_train, y_train)
Error = np.sum(clf.predict(phi_test) != y_test) / float(y_test.shape[0])
print(Error)
y_pred = clf.predict(phi_test)
f, (ax1, ax2) = plt.subplots(1, 2)
ax1.plot(phi_test[y_pred==-1, 0], phi_test[y_pred==-1, 1], "bo")
ax1.plot(phi_test[y_pred==1, 0], phi_test[y_pred==1, 1], "ro")
ax1.set_title('Predicted test labels (spectral)')
ax2.plot(phi_test[y_test==-1, 0], phi_test[y_test==-1, 1], "bo")
ax2.plot(phi_test[y_test==1, 0], phi_test[y_test==1, 1], "ro")
ax2.set_title('Actual test labels (spectral)')
```
In a few sentences, comment on the merits of the dimensionality reduction based approach compared to linear classification from Section 1.1 and basis expansion from Section 1.2.
<font color='red'>**Write your answer here ...**</font> (as a *markdown* cell)
From the error rate and the plots, we can find the performance of dimensionality reduction approach is pretty good, especially from the plots, which make the results clear and intuitive.
The merits over the linear classification is the classification becoming obviously faster, as well as getting high accuracy easily, which is because:
1. It reduces the time and storage space required.
2. Removal of multi-collinearity improves the performance of the machine learning model.
Besides, it becomes easier to visualize the data when reduced to very low dimensions such as 2D or 3D, which could be more intuitive to look at the performance.
## 2. Kaggle Competition
The final part of the project is a competition, on more challenging digit data sourced from natural scenes. This data is coloured, pixelated or otherwise blurry, and the digits are not perfectly centered. It is often difficult for humans to classify! The dataset is also considerably larger.
Please sign up to the [COMP90051 Kaggle competition](https://inclass.kaggle.com/c/comp90051-2017) using your `student.unimelb.edu.au` email address. Then download the file `data.npz` from Kaggle. This is a compressed `numpy` data file containing three ndarray objects:
- `train_X` training set, with 4096 input features (greyscale pixel values);
- `train_Y` training labels (0-9)
- `test_X` test set, with 4096 input features, as per above
Each image is 64x64 pixels in size, flattened into a vector of 4096 values. You should load the file using `np.load`, from which you can extract the three arrays. You may need to transpose the images for display, as they were flattened in a different order. Each pixel has an intensity value between 0 and 255. If you are using a language other than Python, you may need to export these objects in another format, e.g., as a MATLAB matrix.
Your job is to develop a *multiclass* classifier on this dataset. You can use whatever techniques you like, such as the perceptron code from above, or other methods such as *k*NN, logistic regression, neural networks, etc. You may want to compare several methods, or try an ensemble combination of systems. You are free to use any python libraries for this question. Note that some fancy machine learning algorithms can take several hours or days to train (we impose no time limits), so please start early to allow sufficient time. *Note that you may want to sample smaller training sets, if runtime is an issue, however this will degrade your accuracy. Sub-sampling is a sensible strategy when developing your code.*
You may also want to do some basic image processing, however, as this is not part of the subject, we would suggest that you focus most of your efforts on the machine learning. For inspiration, please see [Yan Lecun's MNIST page](http://yann.lecun.com/exdb/mnist/), specifically the table of results and the listed papers. Note that your dataset is harder than MNIST, so your mileage may vary.
```
%pylab inline
# load the files
train_X = np.load('data/train_X.npy')
train_y = np.load('data/train_y.npy')
test_X = np.load('data/test_X.npy')
print(train_X.shape)
print(train_X[0].shape)
#plt.imshow(train_X[0,:].reshape(64, 64), interpolation=None)
plt.subplot(221)
plt.imshow(train_X[0].reshape(64,64), cmap=plt.get_cmap('gray'))
plt.subplot(222)
plt.imshow(train_X[1].reshape(64,64), cmap=plt.get_cmap('gray'))
plt.subplot(223)
plt.imshow(train_X[2].reshape(64,64), cmap=plt.get_cmap('gray'))
plt.subplot(224)
plt.imshow(train_X[3].reshape(64,64), cmap=plt.get_cmap('gray'))
%pylab inline
# Import `train_test_split`
from sklearn.model_selection import train_test_split
# Import GridSearchCV
from sklearn.model_selection import GridSearchCV
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.utils import np_utils
# load the files
train_X = np.load('data/train_X.npy')
train_y = np.load('data/train_y.npy')
test_X = np.load('data/test_X.npy')
# We can also reduce our memory requirements
# by forcing the precision of the pixel values to be 32 bit
train_X = train_X.astype('float32')
test_X = test_X.astype('float32')
# normalize inputs from 0-255 to 0-1
train_X = train_X / 255
test_X = test_X / 255
# one hot encode outputs
#train_y = np_utils.to_categorical(train_y)
# Split the data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(train_X, train_y, test_size=0.25, random_state=42)
#----------------------------SVC--------------------------------
# Import the `svm` model
from sklearn import svm
# Create the SVC model
svc_model = svm.SVC(gamma=0.001, C=100., kernel='linear')
# Fit the data to the SVC model
svc_model.fit(X_train, y_train)
print(svc_model.score(X_test, y_test))
#----------------------SVM candidates----------------------------
# Set the parameter candidates
parameter_candidates = [
{'C': [1, 10, 100, 1000], 'kernel': ['linear']},
{'C': [1, 10, 100, 1000], 'gamma': [0.001, 0.0001], 'kernel': ['rbf']},
]
# Create a classifier with the parameter candidates
clf = GridSearchCV(estimator=svm.SVC(), param_grid=parameter_candidates, n_jobs=1)
# Train the classifier on training data
clf.fit(X_train, y_train)
# Print out the results
print('Best score for training data:', clf.best_score_)
print('Best `C`:',clf.best_estimator_.C)
print('Best kernel:',clf.best_estimator_.kernel)
print('Best `gamma`:',clf.best_estimator_.gamma)
# Apply the classifier to the test data, and view the accuracy score
print(clf.score(X_test, y_test))
#--------------------------Baseline Model with Multi-Layer Perceptrons------------------------
%pylab inline
# Import `train_test_split`
from sklearn.model_selection import train_test_split
# Import GridSearchCV
from sklearn.model_selection import GridSearchCV
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.utils import np_utils
# load the files
train_X = np.load('data/train_X.npy')
train_y = np.load('data/train_y.npy')
test_X = np.load('data/test_X.npy')
print(train_X.shape)
# We can also reduce our memory requirements
# by forcing the precision of the pixel values to be 32 bit
train_X = train_X.astype('float32')
test_X = test_X.astype('float32')
# normalize inputs from 0-255 to 0-1
train_X = train_X / 255
test_X = test_X / 255
# one hot encode outputs
train_y = np_utils.to_categorical(train_y)
# Split the data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(train_X, train_y, test_size=0.25, random_state=42)
num_pixels = X_train.shape[1]
num_classes = y_train.shape[1]
# define baseline model
def baseline_model():
# create model
model = Sequential()
model.add(Dense(num_pixels, input_dim=num_pixels, kernel_initializer='normal', activation='relu'))
model.add(Dense(num_classes, kernel_initializer='normal', activation='softmax'))
# Compile model
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
return model
# build the model
model = baseline_model()
# Fit the model
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=20, batch_size=200, verbose=2)
# Final evaluation of the model
scores = model.evaluate(X_test, y_test, verbose=0)
print("Baseline Error: %.2f%%" % (100-scores[1]*100))
print('end')
#--------------------------Simple Convolutional Neural Network--------------------------
%pylab inline
# Import `train_test_split`
from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.layers import Flatten
from keras.layers.convolutional import Conv2D
from keras.layers.convolutional import MaxPooling2D
from keras.utils import np_utils
from keras import backend as K
K.set_image_dim_ordering('th')
# fix random seed for reproducibility
seed = 7
np.random.seed(seed)
# load the files
train_X = np.load('data/train_X.npy')
train_y = np.load('data/train_y.npy')
test_X = np.load('data/test_X.npy')
# reshape to be [samples][pixels][width][height]
train_X = train_X.reshape(train_X.shape[0], 1, 64, 64).astype('float32')
test_X = test_X.reshape(test_X.shape[0], 1, 64, 64).astype('float32')
# normalize inputs from 0-255 to 0-1
train_X = train_X / 255
test_X = test_X / 255
# one hot encode outputs
train_y = np_utils.to_categorical(train_y)
num_classes = train_y.shape[1]
# Split the data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(train_X, train_y, test_size=0.25, random_state=42)
def baseline_model():
# create model
model = Sequential()
model.add(Conv2D(32, (5, 5), input_shape=(1, 64, 64), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))
# Compile model
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
return model
# build the model
model = baseline_model()
# Fit the model
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10, batch_size=400, verbose=2)
# Final evaluation of the model
scores = model.evaluate(X_test, y_test, verbose=0)
print("Baseline Error: %.2f%%" % (100-scores[1]*100))
np.savetxt("simple.csv",model.predict(test_X).argmax(1),delimiter=",")
#-------------------Larger Convolutional Neural Network-------------------
%pylab inline
# Import `train_test_split`
from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.layers import Flatten
from keras.layers.convolutional import Conv2D
from keras.layers.convolutional import MaxPooling2D
from keras.utils import np_utils
from keras import backend as K
K.set_image_dim_ordering('th')
from keras.callbacks import ModelCheckpoint
# fix random seed for reproducibility
seed = 7
np.random.seed(seed)
# load the files
train_X = np.load('data/train_X.npy')
train_y = np.load('data/train_y.npy')
test_X = np.load('data/test_X.npy')
# reshape to be [samples][pixels][width][height]
train_X = train_X.reshape(train_X.shape[0], 1, 64, 64).astype('float32')
test_X = test_X.reshape(test_X.shape[0], 1, 64, 64).astype('float32')
# normalize inputs from 0-255 to 0-1
train_X = train_X / 255
test_X = test_X / 255
# one hot encode outputs
train_y = np_utils.to_categorical(train_y)
num_classes = train_y.shape[1]
# Split the data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(train_X, train_y, test_size=0.25, random_state=42)
# define the larger model
def larger_model():
    # create model
    model = Sequential()
    model.add(Conv2D(30, (5, 5), input_shape=(1, 64, 64), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Conv2D(15, (3, 3), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.2))
    model.add(Flatten())
    model.add(Dense(128, activation='relu'))
    model.add(Dense(50, activation='relu'))
    model.add(Dense(num_classes, activation='softmax'))
    # Compile model
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model
# build the model
model = larger_model()
# Fit the model
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=20, batch_size=200)
# Final evaluation of the model
scores = model.evaluate(X_test, y_test, verbose=0)
print("Large CNN Error: %.2f%%" % (100-scores[1]*100))
np.savetxt("large_2.csv",model.predict(test_X).argmax(1),delimiter=",")
#-------------------------------Visualization of results---------------------------------
from sklearn import manifold
# Create an isomap and fit the `digits` data to it
X_test_reshape = X_test.reshape(X_test.shape[0], 4096)
X_iso = manifold.Isomap(n_neighbors=10).fit_transform(X_test_reshape[0:1000])
print('prediction begins')
# Compute cluster centers and predict cluster index for each sample
predicted = model.predict(X_test[0:1000]).argmax(1)
print('plot')
# Create a plot with subplots in a grid of 1X2
fig, ax = plt.subplots(1, 2, figsize=(8, 4))
# Adjust the layout
fig.subplots_adjust(top=0.85)
y_test_color = y_test[0:1000].argmax(1)
# Add scatterplots to the subplots
ax[0].scatter(X_iso[:, 0], X_iso[:, 1], c=predicted)
ax[0].set_title('Predicted labels')
ax[1].scatter(X_iso[:, 0], X_iso[:, 1], c=y_test_color)
ax[1].set_title('Actual Labels')
# Add title
fig.suptitle('Predicted versus actual labels', fontsize=14, fontweight='bold')
# Show the plot
plt.show()
#--------------------------Increase batch size-----------------------------------
# build the model
model = larger_model()
# Fit the model
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=20, batch_size=2000)
# Final evaluation of the model
scores = model.evaluate(X_test, y_test, verbose=0)
print("Large CNN Error: %.2f%%" % (100-scores[1]*100))
np.savetxt("large_3.csv",model.predict(test_X).argmax(1),delimiter=",")
#----------------------------Increased layer larger CNN-------------------------------
%pylab inline
# Import `train_test_split`
from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.layers import Flatten
from keras.layers.convolutional import Conv2D
from keras.layers.convolutional import MaxPooling2D
from keras.utils import np_utils
from keras import backend as K
K.set_image_dim_ordering('th')
from keras.callbacks import ModelCheckpoint
# fix random seed for reproducibility
seed = 7
np.random.seed(seed)
# load the files
train_X = np.load('data/train_X.npy')
train_y = np.load('data/train_y.npy')
test_X = np.load('data/test_X.npy')
# reshape to be [samples][pixels][width][height]
train_X = train_X.reshape(train_X.shape[0], 1, 64, 64).astype('float32')
test_X = test_X.reshape(test_X.shape[0], 1, 64, 64).astype('float32')
# normalize inputs from 0-255 to 0-1
train_X = train_X / 255
test_X = test_X / 255
# one hot encode outputs
train_y = np_utils.to_categorical(train_y)
num_classes = train_y.shape[1]
# Split the data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(train_X, train_y, test_size=0.25, random_state=42)
# define the larger model
def larger_model():
    # create model
    model = Sequential()
    model.add(Conv2D(64, (10, 10), input_shape=(1, 64, 64), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Conv2D(32, (5, 5), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Conv2D(16, (3, 3), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.2))
    model.add(Flatten())
    model.add(Dense(256, activation='relu'))
    model.add(Dense(128, activation='relu'))
    model.add(Dense(50, activation='relu'))
    model.add(Dense(num_classes, activation='softmax'))
    # Compile model
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model
# build the model
model = larger_model()
# Fit the model
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=20, batch_size=200)
# Final evaluation of the model
scores = model.evaluate(X_test, y_test, verbose=0)
print("Large CNN Error: %.2f%%" % (100-scores[1]*100))
np.savetxt("large_4.csv",model.predict(test_X).argmax(1),delimiter=",")
```
### 2.1 Making Submissions
This will be set up as a *Kaggle in class* competition, in which you can upload your system predictions on the test set. You should format your predictions as a csv file, with the same number of lines as the test set, and each line comprising two numbers `id, class`, where *id* is the instance number (increasing integers starting from 1) and *class* is an integer between 0-9 corresponding to your system prediction. E.g.,
```
Id,Label
1,9
2,9
3,4
4,5
5,1
...
```
based on the first five predictions of the system being classes `9 9 4 5 1`. See the `sample_submission.csv` for an example file.
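A submission file in this format can be written with pandas; `preds` below is a hypothetical stand-in for your model's predicted classes (e.g. `model.predict(test_X).argmax(1)`):

```python
# Hypothetical sketch: 'preds' stands in for the model's predicted classes.
import numpy as np
import pandas as pd

preds = np.array([9, 9, 4, 5, 1])  # first five predictions, as in the example above

submission = pd.DataFrame({
    "Id": np.arange(1, len(preds) + 1),  # instance numbers start from 1
    "Label": preds,
})
submission.to_csv("submission.csv", index=False)
```

Note that `np.savetxt` alone would omit the `Id` column and the header line, so building the frame explicitly is the safer route.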
Kaggle will report your accuracy on a public portion of the test set, and maintain a leaderboard showing the performance of you and your classmates. You will be allowed to upload up to four submissions each day. At the end of the competition, you should nominate your best submission, which will be scored on the private portion of the test set. The accuracy of your system (i.e., proportion of correctly classified examples) on the private test set will be used for grading your approach.
**Marks will be assigned as follows**:
- position in the class, where all students are ranked and then the ranks are linearly scaled to <br>0 marks (worst in class) - 4 marks (best in class)
- absolute performance (4 marks), banded as follows (rounded to nearest integer):
<br>below 80% = 0 marks; 80-89% = 1; 90-92% = 2; 93-94% = 3; above 95% = 4 marks
Note that you are required to submit your code with this notebook, submitted to the LMS. Failure to provide your implementation may result in zero marks for the competition part, irrespective of your competition standing. Your implementation should be able to exactly reproduce your submitted final Kaggle entry and match your description below.
### 2.2. Method Description
Describe your approach, and justify each of the choices made within your approach. You should write a document with no more than 400 words, as a **PDF** file (not *docx* etc) with up to 2 pages of A4 (2 sides). Text must only appear on the first page, while the second page is for *figures and tables only*. Please use a font size of 11pt or higher. Please consider using `pdflatex` for the report, as it's considerably better for this purpose than wysiwyg document editors. You are encouraged to include empirical results, e.g., a table of results, graphs, or other figures to support your argument. *(this will contribute 9 marks; note that we are looking for clear presentation, sound reasoning, good evaluation and error analysis, as well as general ambition of approach.)*
# Machine Learning Engineer Nanodegree
## Supervised Learning
## Project 2: Building a Student Intervention System
Welcome to the second project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and it will be your job to implement the additional functionality necessary to successfully complete this project. Sections that begin with **'Implementation'** in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a `'TODO'` statement. Please be sure to read the instructions carefully!
In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a **'Question X'** header. Carefully read each question and provide thorough answers in the following text boxes that begin with **'Answer:'**. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.
>**Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. In addition, Markdown cells can be edited by double-clicking the cell to enter edit mode.
### Question 1 - Classification vs. Regression
*Your goal for this project is to identify students who might need early intervention before they fail to graduate. Which type of supervised learning problem is this, classification or regression? Why?*
**Answer:** Classification. This is a binary classification problem: the goal is to identify whether or not a student needs early intervention. It is not a regression problem, since we are predicting a discrete label rather than estimating a continuous quantity.
## Exploring the Data
Run the code cell below to load necessary Python libraries and load the student data. Note that the last column from this dataset, `'passed'`, will be our target label (whether the student graduated or didn't graduate). All other columns are features about each student.
```
# Import libraries
import numpy as np
import pandas as pd
from time import time
from sklearn.metrics import f1_score
# Read student data
student_data = pd.read_csv("student-data.csv")
print("Student data read successfully!")
```
### Implementation: Data Exploration
Let's begin by investigating the dataset to determine how many students we have information on, and learn about the graduation rate among these students. In the code cell below, you will need to compute the following:
- The total number of students, `n_students`.
- The total number of features for each student, `n_features`.
- The number of those students who passed, `n_passed`.
- The number of those students who failed, `n_failed`.
- The graduation rate of the class, `grad_rate`, in percent (%).
```
# Preview the data
student_data.head()
# TODO: Calculate number of students
n_students = len(student_data)
# TODO: Calculate number of features
n_features = len(student_data.columns) - 1
# TODO: Calculate passing students
n_passed = len(student_data[student_data['passed'] == 'yes'])
# TODO: Calculate failing students
n_failed = len(student_data[student_data['passed'] == 'no'])
# TODO: Calculate graduation rate
grad_rate = float(n_passed)/n_students * 100
# Print the results
print("Total number of students: {}".format(n_students))
print("Number of features: {}".format(n_features))
print("Number of students who passed: {}".format(n_passed))
print("Number of students who failed: {}".format(n_failed))
print("Graduation rate of the class: {:.2f}%".format(grad_rate))
```
## Preparing the Data
In this section, we will prepare the data for modeling, training and testing.
### Identify feature and target columns
It is often the case that the data you obtain contains non-numeric features. This can be a problem, as most machine learning algorithms expect numeric data to perform computations with.
Run the code cell below to separate the student data into feature and target columns to see if any features are non-numeric.
```
# Extract feature columns
feature_cols = list(student_data.columns[:-1])
# Extract target column 'passed'
target_col = student_data.columns[-1]
# Show the list of columns
print("Feature columns:\n{}".format(feature_cols))
print("\nTarget column: {}".format(target_col))
# Separate the data into feature data and target data (X_all and y_all, respectively)
X_all = student_data[feature_cols]
y_all = student_data[target_col]
# Show the feature information by printing the first five rows
print("\nFeature values:")
X_all.head()
```
### Preprocess Feature Columns
As you can see, there are several non-numeric columns that need to be converted! Many of them are simply `yes`/`no`, e.g. `internet`. These can be reasonably converted into `1`/`0` (binary) values.
Other columns, like `Mjob` and `Fjob`, have more than two values, and are known as _categorical variables_. The recommended way to handle such a column is to create as many columns as possible values (e.g. `Fjob_teacher`, `Fjob_other`, `Fjob_services`, etc.), and assign a `1` to one of them and `0` to all others.
These generated columns are sometimes called _dummy variables_, and we will use the [`pandas.get_dummies()`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.get_dummies.html?highlight=get_dummies#pandas.get_dummies) function to perform this transformation. Run the code cell below to perform the preprocessing routine discussed in this section.
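As a tiny illustration of the transformation (the column name and values below are made up), `pd.get_dummies` expands one categorical column into one indicator column per value:

```python
import pandas as pd

# A toy categorical column, e.g. which school a student attends
s = pd.Series(['GP', 'MS', 'GP'], name='school')

# One indicator column per distinct value, prefixed with the original name
print(pd.get_dummies(s, prefix='school'))
```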
```
def preprocess_features(X):
    ''' Preprocesses the student data and converts non-numeric binary variables into
        binary (0/1) variables. Converts categorical variables into dummy variables. '''
    # Initialize new output DataFrame
    output = pd.DataFrame(index = X.index)
    # Investigate each feature column for the data
    for col, col_data in X.iteritems():
        # If data type is non-numeric, replace all yes/no values with 1/0
        if col_data.dtype == object:
            col_data = col_data.replace(['yes', 'no'], [1, 0])
        # If data type is categorical, convert to dummy variables
        if col_data.dtype == object:
            # Example: 'school' => 'school_GP' and 'school_MS'
            col_data = pd.get_dummies(col_data, prefix = col)
        # Collect the revised columns
        output = output.join(col_data)
    return output
X_all = preprocess_features(X_all)
print("Processed feature columns ({} total features):\n{}"\
.format(len(X_all.columns), list(X_all.columns)))
# Preview the preprocessed data
X_all.head()
```
### Implementation: Training and Testing Data Split
So far, we have converted all _categorical_ features into numeric values. For the next step, we split the data (both features and corresponding labels) into training and test sets. In the following code cell below, you will need to implement the following:
- Randomly shuffle and split the data (`X_all`, `y_all`) into training and testing subsets.
- Use 300 training points (approximately 75%) and 95 testing points (approximately 25%).
- Set a `random_state` for the function(s) you use, if provided.
- Store the results in `X_train`, `X_test`, `y_train`, and `y_test`.
```
# TODO: Import any additional functionality you may need here
# TODO: Set the number of training points
num_train = 300
# Set the number of testing points
num_test = X_all.shape[0] - num_train
# TODO: Shuffle and split the dataset into the number of training and testing points above
X_train = X_all.sample(n=num_train, random_state=1)
X_test = X_all[~X_all.index.isin(X_train.index)]
y_train = y_all[X_train.index]
y_test = y_all[~y_all.index.isin(y_train.index)]
# Show the results of the split
print("Training set has {} samples.".format(X_train.shape[0]))
print("Testing set has {} samples.".format(X_test.shape[0]))
```
## Training and Evaluating Models
In this section, you will choose 3 supervised learning models that are appropriate for this problem and available in `scikit-learn`. You will first discuss the reasoning behind choosing these three models by considering what you know about the data and each model's strengths and weaknesses. You will then fit the model to varying sizes of training data (100 data points, 200 data points, and 300 data points) and measure the F<sub>1</sub> score. You will need to produce three tables (one for each model) that shows the training set size, training time, prediction time, F<sub>1</sub> score on the training set, and F<sub>1</sub> score on the testing set.
**The following supervised learning models are currently available in** [`scikit-learn`](http://scikit-learn.org/stable/supervised_learning.html) **that you may choose from:**
- Gaussian Naive Bayes (GaussianNB)
- Decision Trees
- Ensemble Methods (Bagging, AdaBoost, Random Forest, Gradient Boosting)
- K-Nearest Neighbors (KNeighbors)
- Stochastic Gradient Descent (SGDC)
- Support Vector Machines (SVM)
- Logistic Regression
### Question 2 - Model Application
*List three supervised learning models that are appropriate for this problem. For each model chosen*
- Describe one real-world application in industry where the model can be applied. *(You may need to do a small bit of research for this — give references!)*
- What are the strengths of the model; when does it perform well?
- What are the weaknesses of the model; when does it perform poorly?
- What makes this model a good candidate for the problem, given what you know about the data?
**Answer:**
Decision Trees
* It was applied in pharmacology for [drug analysis](https://www.ncbi.nlm.nih.gov/pubmed/8170530).
* The strength of the Decision Trees model is that it is easy to interpret and understand. It handles both numerical and categorical features well, so when the data mixes numerical features with many categorical ones, a Decision Trees model usually performs well. It is also able to classify linearly inseparable data.
* The weakness of the Decision Trees model is that it overfits very easily. One should be careful when training the model and apply pruning conditions if necessary. Cross-validation is also useful to prevent overfitting.
* Our dataset contains both numerical and categorical data, so a Decision Trees model may work very well here.
Support Vector Machines (SVM)
* It was applied in [facial expression classification](http://cbcl.mit.edu/publications/ps/iccv2001.pdf).
* The strength of the Support Vector Machine model is that, by using different kernels, it is able to separate linearly inseparable data. It maximizes the margin between the two classes, which makes overfitting easy to control.
* The weakness of the Support Vector Machine model is that training time and hardware cost grow very quickly as the training set gets larger.
* Our dataset is actually small, which keeps the training and prediction time of the Support Vector Machine model manageable.
Gaussian Naive Bayes (GaussianNB)
* It was applied in [automatic medical diagnosis](http://www.research.ibm.com/people/r/rish/papers/RC22230.pdf).
* An advantage of Naive Bayes is that it only requires a small amount of training data to estimate the parameters necessary for classification.
* The weakness of the model is that it assumes all features are independent of each other, an assumption that is too stringent for most real data.
* Our dataset is actually small, which makes it very well suited to Gaussian Naive Bayes.
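As a minimal sketch of this behaviour, Gaussian Naive Bayes can fit a tiny toy dataset with `yes`/`no` labels (the values below are made up for illustration):

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Two well-separated clusters labelled 'no' and 'yes'
X = np.array([[0.0, 0.1], [0.2, 0.0], [1.0, 1.1], [0.9, 1.0]])
y = np.array(['no', 'no', 'yes', 'yes'])

clf = GaussianNB()
clf.fit(X, y)  # only four training points are needed to estimate the class Gaussians
print(clf.predict([[0.1, 0.0], [1.0, 1.0]]))
```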
### Setup
Run the code cell below to initialize three helper functions which you can use for training and testing the three supervised learning models you've chosen above. The functions are as follows:
- `train_classifier` - takes as input a classifier and training data and fits the classifier to the data.
- `predict_labels` - takes as input a fit classifier, features, and a target labeling, and makes predictions, reporting the F<sub>1</sub> score.
- `train_predict` - takes as input a classifier and the training and testing data, and performs `train_classifier` and `predict_labels`.
- This function will report the F<sub>1</sub> score for both the training and testing data separately.
```
def train_classifier(clf, X_train, y_train):
    ''' Fits a classifier to the training data. '''
    # Start the clock, train the classifier, then stop the clock
    start = time()
    clf.fit(X_train, y_train)
    end = time()
    # Print the results
    print("Trained model in {:.4f} seconds".format(end - start))
    return float(end - start)

def predict_labels(clf, features, target):
    ''' Makes predictions using a fit classifier based on F1 score. '''
    # Start the clock, make predictions, then stop the clock
    start = time()
    y_pred = clf.predict(features)
    end = time()
    # Print and return results
    print("Made predictions in {:.4f} seconds.".format(end - start))
    return (float(end - start), f1_score(target.values, y_pred, pos_label='yes'))

def train_predict(clf, X_train, y_train, X_test, y_test):
    ''' Train and predict using a classifier based on F1 score. '''
    # Indicate the classifier and the training set size
    print("Training a {} using a training set size of {}. . .".format(clf.__class__.__name__, len(X_train)))
    # Train the classifier
    time_training = train_classifier(clf, X_train, y_train)
    # Print the results of prediction for both training and testing
    (prediction_time_training, f1_score_training) = predict_labels(clf, X_train, y_train)
    (prediction_time_test, f1_score_test) = predict_labels(clf, X_test, y_test)
    print("F1 score for training set: {:.4f}.".format(f1_score_training))
    print("F1 score for test set: {:.4f}.".format(f1_score_test))
    return (time_training, prediction_time_test, f1_score_training, f1_score_test)
```
### Implementation: Model Performance Metrics
With the predefined functions above, you will now import the three supervised learning models of your choice and run the `train_predict` function for each one. Remember that you will need to train and predict on each classifier for three different training set sizes: 100, 200, and 300. Hence, you should expect to have 9 different outputs below — 3 for each model using the varying training set sizes. In the following code cell, you will need to implement the following:
- Import the three supervised learning models you've discussed in the previous section.
- Initialize the three models and store them in `clf_A`, `clf_B`, and `clf_C`.
- Use a `random_state` for each model you use, if provided.
- **Note:** Use the default settings for each model — you will tune one specific model in a later section.
- Create the different training set sizes to be used to train each model.
- *Do not reshuffle and resplit the data! The new training points should be drawn from `X_train` and `y_train`.*
- Fit each model with each training set size and make predictions on the test set (9 in total).
**Note:** Three tables are provided after the following code cell which can be used to store your results.
```
# TODO: Import the three supervised learning models from sklearn
from sklearn.tree import DecisionTreeClassifier # Decision Tree
from sklearn.svm import SVC # SVM
from sklearn.naive_bayes import GaussianNB # Naive Bayesian Gaussian
# from sklearn.neighbors import KNeighborsClassifier # k-NN
# TODO: Initialize the three models
clf_A = DecisionTreeClassifier()
clf_B = SVC()
clf_C = GaussianNB()
# clf_C = KNeighborsClassifier()
# TODO: Set up the training set sizes
X_train_100 = X_train[0:100]
y_train_100 = y_train[0:100]
X_train_200 = X_train[0:200]
y_train_200 = y_train[0:200]
X_train_300 = X_train[0:300]
y_train_300 = y_train[0:300]
# TODO: Execute the 'train_predict' function for each classifier and each training set size
# train_predict(clf, X_train, y_train, X_test, y_test)
result_columns = ['Training Set Size', 'Training Time', 'Prediction Time (test)', 'F1 Score (train)', 'F1 Score (test)']
training_sets = [(100, X_train_100, y_train_100), (200, X_train_200, y_train_200), (300, X_train_300, y_train_300)]

def evaluate_classifier(clf):
    ''' Runs train_predict for each training set size and tabulates the results. '''
    results = pd.DataFrame(columns=result_columns)
    for (size, X_training_set, y_training_set) in training_sets:
        print(size)
        (time_training, prediction_time_test, f1_score_training, f1_score_test) = \
            train_predict(clf, X_training_set, y_training_set, X_test, y_test)
        results = results.append({'Training Set Size': size,
                                  'Training Time': round(time_training, 4),
                                  'Prediction Time (test)': round(prediction_time_test, 4),
                                  'F1 Score (train)': round(f1_score_training, 4),
                                  'F1 Score (test)': round(f1_score_test, 4)}, ignore_index=True)
    return results

Classifier_1 = evaluate_classifier(clf_A)  # Decision Tree
Classifier_2 = evaluate_classifier(clf_B)  # Support Vector Machine
Classifier_3 = evaluate_classifier(clf_C)  # Gaussian Naive Bayes
```
### Tabular Results
Edit the cell below to see how a table can be designed in [Markdown](https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet#tables). You can record your results from above in the tables provided.
Classifier_1
```
# Decision Tree
Classifier_1
```
Classifier_2
```
# Support Vector Machine
Classifier_2
```
Classifier_3
```
# Naive Bayesian Gaussian
Classifier_3
```
## Choosing the Best Model
In this final section, you will choose from the three supervised learning models the *best* model to use on the student data. You will then perform a grid search optimization for the model over the entire training set (`X_train` and `y_train`) by tuning at least one parameter to improve upon the untuned model's F<sub>1</sub> score.
### Question 3 - Choosing the Best Model
*Based on the experiments you performed earlier, in one to two paragraphs, explain to the board of supervisors what single model you chose as the best model. Which model is generally the most appropriate based on the available data, limited resources, cost, and performance?*
**Answer:**
I think the Support Vector Machine model is the best and most appropriate model given our dataset, computational cost, and model performance. Regarding training time, the differences between the three models are negligible. Note that the training time of a Support Vector Machine usually grows as the dataset gets larger; however, because our dataset is quite small, training time is not a drawback here. Similarly, the prediction time differences between the three models are negligible. Regarding the F1 scores (training), the Decision Tree model scored 1.0 in all three trainings, which is certainly a sign of overfitting. The F1 score (training) of the Naive Bayes model is poor with a training set size of 100, because the training set is too small relative to the number of features. Overall, the Support Vector Machine model showed the better F1 score (training), and it also showed the best F1 score (test), suggesting better generalization than the other two models.
### Question 4 - Model in Layman's Terms
*In one to two paragraphs, explain to the board of directors in layman's terms how the final model chosen is supposed to work. Be sure that you are describing the major qualities of the model, such as how the model is trained and how the model makes a prediction. Avoid using advanced mathematical or technical jargon, such as describing equations or discussing the algorithm implementation.*
**Answer:** With the labeled training data, a Support Vector Machine creates a hyperplane that separates the data, where each side of the hyperplane represents one class. The hyperplane is chosen to maximize the distance to its nearest training points of either class, which tends to give a lower generalization error. To predict the label of a new data point, we simply calculate which side of the hyperplane it falls on.
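This "side of the hyperplane" idea can be sketched on a toy 2-D dataset (the points below are made up for illustration):

```python
import numpy as np
from sklearn.svm import SVC

# Two linearly separable classes in the plane
X = np.array([[-2, -1], [-1, -2], [1, 2], [2, 1]])
y = np.array([0, 0, 1, 1])

clf = SVC(kernel='linear')
clf.fit(X, y)

# decision_function gives the signed distance to the separating hyperplane;
# its sign tells us which side of the hyperplane a new point falls on
print(clf.decision_function([[-1.5, -1.5], [1.5, 1.5]]))
print(clf.predict([[-1.5, -1.5], [1.5, 1.5]]))
```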
### Implementation: Model Tuning
Fine tune the chosen model. Use grid search (`GridSearchCV`) with at least one important parameter tuned with at least 3 different values. You will need to use the entire training set for this. In the code cell below, you will need to implement the following:
- Import [`sklearn.model_selection.GridSearchCV`](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) and [`sklearn.metrics.make_scorer`](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.make_scorer.html).
- Create a dictionary of parameters you wish to tune for the chosen model.
- Example: `parameters = {'parameter' : [list of values]}`.
- Initialize the classifier you've chosen and store it in `clf`.
- Create the F<sub>1</sub> scoring function using `make_scorer` and store it in `f1_scorer`.
- Set the `pos_label` parameter to the correct value!
- Perform grid search on the classifier `clf` using `f1_scorer` as the scoring method, and store it in `grid_obj`.
- Fit the grid search object to the training data (`X_train`, `y_train`), and store it in `grid_obj`.
```
# TODO: Import 'GridSearchCV' and 'make_scorer'
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import make_scorer
from sklearn.metrics import f1_score
# TODO: Create the parameters list you wish to tune
parameters = {'C': [0.1, 1, 10], 'kernel': ['linear', 'poly', 'rbf']}
# TODO: Initialize the classifier
clf = SVC()
# TODO: Make an f1 scoring function using 'make_scorer'
f1_scorer = make_scorer(f1_score, pos_label='yes')
# TODO: Perform grid search on the classifier using the f1_scorer as the scoring method
grid_obj = GridSearchCV(estimator = clf, param_grid = parameters, scoring = f1_scorer)
# TODO: Fit the grid search object to the training data and find the optimal parameters
grid_obj.fit(X_train, y_train)
# Get the estimator
clf = grid_obj.best_estimator_
# Report the final F1 score for training and testing after parameter tuning
print("Tuned model has a training F1 score of {:.4f}.".format(predict_labels(clf, X_train, y_train)[1]))
print("Tuned model has a testing F1 score of {:.4f}.".format(predict_labels(clf, X_test, y_test)[1]))
```
### Question 5 - Final F<sub>1</sub> Score
*What is the final model's F<sub>1</sub> score for training and testing? How does that score compare to the untuned model?*
**Answer:** The F1 scores for training and testing of the optimized Support Vector Machine model are 0.8540 and 0.8085, respectively. The F1 score for training is lower than the untuned model's, while the F1 score for testing is higher. This is probably because the optimized Support Vector Machine uses a different kernel than the untuned one, which lowers the training F1 score; and because the `GridSearchCV` function internally uses cross-validation, the testing F1 score ends up higher than the untuned model's.
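As a self-contained sketch of inspecting a fitted grid-search object (toy data; the parameter values are chosen only for illustration), `best_params_` and `best_score_` expose the winning setting and its mean cross-validated score:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# A small, linearly separable toy dataset
X = np.array([[-2, -1], [-1, -2], [1, 2], [2, 1]] * 5)
y = np.array([0, 0, 1, 1] * 5)

grid = GridSearchCV(SVC(), {'C': [0.1, 1, 10]}, cv=2)
grid.fit(X, y)

print(grid.best_params_)  # the parameter setting with the best mean CV score
print(grid.best_score_)   # mean cross-validated score of best_estimator_
```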
> **Note**: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to
**File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission.
<a href="https://colab.research.google.com/github/JoanesMiranda/Machine-learning/blob/master/Autoenconder.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
### Importing the required libraries
```
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras.datasets import mnist
```
### Loading the dataset
```
(x_train, y_train),(x_test, y_test) = mnist.load_data()
```
### Plotting a sample image
```
plt.imshow(x_train[10], cmap="gray")
```
### Normalizing the training and test data
```
x_train = x_train / 255.0
x_test = x_test / 255.0
print(x_train.shape)
print(x_test.shape)
```
### Adding noise to the training set
```
noise = 0.3
noise_x_train = []
for img in x_train:
    noisy_image = img + noise * np.random.randn(*img.shape)
    noisy_image = np.clip(noisy_image, 0., 1.)
    noise_x_train.append(noisy_image)
noise_x_train = np.array(noise_x_train)
print(noise_x_train.shape)
```
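The loop above can also be written as a single vectorized operation. A self-contained sketch (the small random array stands in for the real `x_train`):

```python
import numpy as np

# Stand-in for the normalized MNIST training images loaded above
x_train = np.random.rand(5, 28, 28)

noise = 0.3
# Add Gaussian noise to every image at once, then clip back into [0, 1]
noise_x_train = np.clip(x_train + noise * np.random.randn(*x_train.shape), 0., 1.)
print(noise_x_train.shape)
```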
### Plotting a sample image with the noise applied
```
plt.imshow(noise_x_train[10], cmap="gray")
```
### Adding noise to the test set
```
noise = 0.3
noise_x_test = []
for img in x_test:  # add noise to each test image
    noisy_image = img + noise * np.random.randn(*img.shape)
    noisy_image = np.clip(noisy_image, 0., 1.)
    noise_x_test.append(noisy_image)
noise_x_test = np.array(noise_x_test)
print(noise_x_test.shape)
```
### Plotting a sample image with the noise applied
```
plt.imshow(noise_x_test[10], cmap="gray")
noise_x_train = np.reshape(noise_x_train,(-1, 28, 28, 1))
noise_x_test = np.reshape(noise_x_test,(-1, 28, 28, 1))
print(noise_x_train.shape)
print(noise_x_test.shape)
```
### Autoencoder
```
x_input = tf.keras.layers.Input((28,28,1))
# encoder
x = tf.keras.layers.Conv2D(filters=16, kernel_size=3, strides=2, padding='same')(x_input)
x = tf.keras.layers.Conv2D(filters=8, kernel_size=3, strides=2, padding='same')(x)
# decoder
x = tf.keras.layers.Conv2DTranspose(filters=16, kernel_size=3, strides=2, padding='same')(x)
x = tf.keras.layers.Conv2DTranspose(filters=1, kernel_size=3, strides=2, activation='sigmoid', padding='same')(x)
model = tf.keras.models.Model(inputs=x_input, outputs=x)
model.compile(loss='binary_crossentropy', optimizer=tf.keras.optimizers.Adam(learning_rate=0.001))
model.summary()
```
### Training the model
```
model.fit(noise_x_train, x_train, batch_size=100, validation_split=0.1, epochs=10)
```
### Predicting images from the noisy test data
```
predicted = model.predict(noise_x_test)
predicted
```
### Plotting the noisy images and the autoencoder output
```
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
for images, row in zip([noise_x_test[:10], predicted], axes):
    for img, ax in zip(images, row):
        ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
        ax.get_xaxis().set_visible(False)
        ax.get_yaxis().set_visible(False)
```
## Peer-graded Assignment: Segmenting and Clustering Neighborhoods in Toronto
# Part 3
```
import pandas as pd
import numpy as np
import json
!conda install -c conda-forge geopy --yes
from geopy.geocoders import Nominatim
import requests
from pandas.io.json import json_normalize
import matplotlib.cm as cm
import matplotlib.colors as colors
from sklearn.cluster import KMeans
!conda install -c conda-forge folium=0.5.0 --yes
import folium
```
### Get the dataframe from first task
```
# read data from html
url = 'https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M'
read_table = pd.read_html(url,header=[0])
df1 = read_table[0]
# rename columns' name
df1 = df1.rename(index=str, columns={'Postcode':'PostalCode','Neighbourhood':'Neighborhood'})
# Ignore cells with a borough that is Not assigned
df1 = df1[df1.Borough !='Not assigned']
df1.reset_index(drop=True,inplace=True)
# groupby
df1 = df1.groupby('PostalCode',as_index=False).agg(lambda x: ','.join(set(x.dropna())))
# If a cell has a borough but a Not assigned neighborhood,
#then the neighborhood will be the same as the borough
df1.loc[df1['Neighborhood'] == 'Not assigned','Neighborhood'] = df1['Borough']
df1.head()
```
### Get the dataframe from Second task
```
df2 = pd.read_csv('http://cocl.us/Geospatial_data')
# Change the columns' name
df2.columns = ['PostalCode','Latitude','Longitude']
# Merge two dataframes
df = pd.merge(left=df1,right=df2,on='PostalCode')
df.head()
```
# Part 3
### Get Toronto data
```
Toronto = df[df['Borough'].str.contains('Toronto')].reset_index(drop=True)
Toronto.head()
```
### Get the latitude and longitude values of Toronto
```
address = 'Toronto'
geolocator = Nominatim(user_agent="toronto_explorer")  # newer geopy versions require a user_agent
location = geolocator.geocode(address)
latitude = location.latitude
longitude = location.longitude
print('The geographical coordinates of Toronto are {}, {}.'.format(latitude, longitude))
```
### Create map of Toronto
```
# create map of Toronto using latitude and longitude values
map_Toronto = folium.Map(location=[latitude, longitude], zoom_start=11)
# add markers to map
for lat, lng, label in zip(Toronto['Latitude'], Toronto['Longitude'], Toronto['Neighborhood']):
    label = folium.Popup(label, parse_html=True)
    folium.CircleMarker(
        [lat, lng],
        radius=5,
        popup=label,
        color='blue',
        fill=True,
        fill_color='#3186cc',
        fill_opacity=0.7,
        parse_html=False).add_to(map_Toronto)
map_Toronto
```
### Define Foursquare Credentials and Version
```
CLIENT_ID = 'HCIWEYMLE0SJAI3ESV4AFX5PNQVBSLP5HQ1YU4GISAHHRIFV' # your Foursquare ID
CLIENT_SECRET = 'P4KVBEVJDIVREULUPIZHUL124JX353PUIP5KWJOGX1PLDB5B' # your Foursquare Secret
VERSION = '20200202' # Foursquare API version
```
### Try to explore other neighborhood
#### pick up Etobicoke to be explored
```
Etobicoke = df[df['Borough']=='Etobicoke'].reset_index(drop=True)
# get the Etobicoke latitude and longitude values
Etobicoke_latitude = Etobicoke.loc[0,'Latitude']
Etobicoke_longitude = Etobicoke.loc[0,'Longitude']
print('Latitude and longitude values of Etobicoke are {},{}.'.format(
Etobicoke_latitude, Etobicoke_longitude))
map_Etobicoke = folium.Map(location=[latitude, longitude], zoom_start=11)
# add markers to map
for lat, lng, label in zip(Etobicoke['Latitude'], Etobicoke['Longitude'], Etobicoke['Neighborhood']):
    label = folium.Popup(label, parse_html=True)
    folium.CircleMarker(
        [lat, lng],
        radius=5,
        popup=label,
        color='blue',
        fill=True,
        fill_color='#3186cc',
        fill_opacity=0.7,
        parse_html=False).add_to(map_Etobicoke)
map_Etobicoke
```
### Explore Neighborhoods in Toronto
#### Getting the top 100 venues that are in Toronto within a radius of 500 meters
```
LIMIT = 100
radius = 500
def getNearbyVenues(names, latitudes, longitudes, radius=500):
    venues_list = []
    for name, lat, lng in zip(names, latitudes, longitudes):
        print(name)
        # create the API request URL
        url = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}'.format(
            CLIENT_ID,
            CLIENT_SECRET,
            VERSION,
            lat,
            lng,
            radius,
            LIMIT)
        # make the GET request
        results = requests.get(url).json()["response"]['groups'][0]['items']
        # return only relevant information for each nearby venue
        venues_list.append([(
            name,
            lat,
            lng,
            v['venue']['name'],
            v['venue']['location']['lat'],
            v['venue']['location']['lng'],
            v['venue']['categories'][0]['name']) for v in results])
    nearby_venues = pd.DataFrame([item for venue_list in venues_list for item in venue_list])
    nearby_venues.columns = ['Neighborhood',
                             'Neighborhood Latitude',
                             'Neighborhood Longitude',
                             'Venue',
                             'Venue Latitude',
                             'Venue Longitude',
                             'Venue Category']
    return nearby_venues
# write the code to run the above function
Toronto_venues = getNearbyVenues(names=Toronto['Neighborhood'],
latitudes=Toronto['Latitude'],
longitudes=Toronto['Longitude'])
```
### Check the size of the resulting dataframe
```
print(Toronto_venues.shape)
Toronto_venues.head()
```
### Check how many venues were returned
```
Toronto_venues.groupby('Neighborhood').count()
```
### Find out how many unique categories can be curated from all the returned venues
```
print('There are {} unique categories.'.format(len(Toronto_venues['Venue Category'].unique())))
```
### Analyze Neighborhood
```
# one hot encoding
Toronto_onehot = pd.get_dummies(Toronto_venues[['Venue Category']], prefix="", prefix_sep="")
# add neighborhood column back to dataframe
Toronto_onehot['Neighborhood'] = Toronto_venues['Neighborhood']
# move neighborhood column to the first column
fixed_columns = [Toronto_onehot.columns[-1]] + list(Toronto_onehot.columns[:-1])
Toronto_onehot = Toronto_onehot[fixed_columns]
Toronto_onehot.head()
```
### New dataframe shape
```
Toronto_onehot.shape
```
### Group rows by neighborhood, taking the mean of the frequency of occurrence of each category
```
Toronto_grouped = Toronto_onehot.groupby('Neighborhood').mean().reset_index()
Toronto_grouped
```
### New size after grouping
```
# new size
Toronto_grouped.shape
```
### Pick up top 10 venues
```
num_top_venues = 10
# write a function to sort the venues in descending order
def return_most_common_venues(row, num_top_venues):
    row_categories = row.iloc[1:]
    row_categories_sorted = row_categories.sort_values(ascending=False)
    return row_categories_sorted.index.values[0:num_top_venues]
indicators = ['st', 'nd', 'rd']
# create columns according to number of top venues
columns = ['Neighborhood']
for ind in np.arange(num_top_venues):
    try:
        columns.append('{}{} Most Common Venue'.format(ind+1, indicators[ind]))
    except:
        columns.append('{}th Most Common Venue'.format(ind+1))
# create a new dataframe
Toronto_venues_sorted = pd.DataFrame(columns=columns)
Toronto_venues_sorted['Neighborhood'] = Toronto_grouped['Neighborhood']
for ind in np.arange(Toronto_grouped.shape[0]):
    Toronto_venues_sorted.iloc[ind, 1:] = return_most_common_venues(Toronto_grouped.iloc[ind, :], num_top_venues)
Toronto_venues_sorted
```
### Cluster Neighborhoods
Run k-means to cluster the neighborhood into 5 clusters.
```
# set number of clusters
kclusters = 5
Toronto_grouped_clustering = Toronto_grouped.drop('Neighborhood', axis=1)
# run k-means clustering
kmeans = KMeans(n_clusters=kclusters, random_state=0).fit(Toronto_grouped_clustering)
# check cluster labels generated for each row in the dataframe
kmeans.labels_[0:10]
Toronto_merged = Toronto
# add clustering labels
Toronto_merged['Cluster Labels'] = kmeans.labels_
# merge toronto_grouped with toronto_data to add latitude/longitude for each neighborhood
Toronto_merged = Toronto_merged.join(Toronto_venues_sorted.set_index('Neighborhood'), on='Neighborhood')
Toronto_merged.head() # check the last columns!
```
### Visualize the resulting clusters
```
# create map
map_clusters = folium.Map(location=[latitude, longitude], zoom_start=11)
# set color scheme for the clusters
x = np.arange(kclusters)
ys = [i+x+(i*x)**2 for i in range(kclusters)]
colors_array = cm.rainbow(np.linspace(0, 1, len(ys)))
rainbow = [colors.rgb2hex(i) for i in colors_array]
# add markers to the map
markers_colors = []
for lat, lon, poi, cluster in zip(Toronto_merged['Latitude'], Toronto_merged['Longitude'],
                                  Toronto_merged['Neighborhood'],
                                  Toronto_merged['Cluster Labels']):
    label = folium.Popup(str(poi) + ' Cluster ' + str(cluster), parse_html=True)
    folium.CircleMarker(
        [lat, lon],
        radius=5,
        popup=label,
        color=rainbow[cluster-1],
        fill=True,
        fill_color=rainbow[cluster-1],
        fill_opacity=0.7).add_to(map_clusters)
map_clusters
```
<a href="https://colab.research.google.com/github/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_05_3_keras_l1_l2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# T81-558: Applications of Deep Neural Networks
**Module 5: Regularization and Dropout**
* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)
* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).
# Module 5 Material
* Part 5.1: Introduction to Regularization: Ridge and Lasso [[Video]](https://www.youtube.com/watch?v=jfgRtCYjoBs&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_05_1_reg_ridge_lasso.ipynb)
* Part 5.2: Using K-Fold Cross Validation with Keras [[Video]](https://www.youtube.com/watch?v=maiQf8ray_s&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_05_2_kfold.ipynb)
* **Part 5.3: Using L1 and L2 Regularization with Keras to Decrease Overfitting** [[Video]](https://www.youtube.com/watch?v=JEWzWv1fBFQ&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_05_3_keras_l1_l2.ipynb)
* Part 5.4: Drop Out for Keras to Decrease Overfitting [[Video]](https://www.youtube.com/watch?v=bRyOi0L6Rs8&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_05_4_dropout.ipynb)
* Part 5.5: Benchmarking Keras Deep Learning Regularization Techniques [[Video]](https://www.youtube.com/watch?v=1NLBwPumUAs&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_05_5_bootstrap.ipynb)
# Google CoLab Instructions
The following code ensures that Google CoLab is running the correct version of TensorFlow.
```
try:
    %tensorflow_version 2.x
    COLAB = True
    print("Note: using Google CoLab")
except:
    print("Note: not using Google CoLab")
    COLAB = False
```
# Part 5.3: Using L1 and L2 Regularization with Keras to Decrease Overfitting
L1 and L2 regularization are two common regularization techniques that can reduce the effects of overfitting (Ng, 2004). Both of these algorithms can either work with an objective function or as a part of the backpropagation algorithm. In both cases the regularization algorithm is attached to the training algorithm by adding an additional objective.
Both of these algorithms work by adding a weight penalty to the neural network training. This penalty encourages the neural network to keep the weights to small values. Both L1 and L2 calculate this penalty differently. For gradient-descent-based algorithms, such as backpropagation, you can add this penalty calculation to the calculated gradients. For objective-function-based training, such as simulated annealing, the penalty is negatively combined with the objective score.
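Written out, for weights $w_i$ and a regularization strength $\lambda$, the penalties added to the error are commonly expressed as (conventions vary; some texts scale the L2 term by one half):

$$E_{L1} = \lambda \sum_i |w_i|, \qquad E_{L2} = \lambda \sum_i w_i^2$$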
Both L1 and L2 differ in the way that they penalize the size of a weight. L2 will force the weights into a pattern similar to a Gaussian distribution; L1 will force the weights into a pattern similar to a Laplace distribution, as demonstrated in the following:

As you can see, the L1 algorithm is more tolerant of weights far from 0, whereas the L2 algorithm is less tolerant. We will highlight other important differences between L1 and L2 in the following sections. You should also note that both L1 and L2 count their penalties based only on weights; they do not count penalties on bias values.
TensorFlow allows [L1/L2 regularization to be added directly to your network](http://tensorlayer.readthedocs.io/en/stable/modules/cost.html).
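A minimal numeric sketch of the two penalties in plain NumPy (illustration only; the Keras `regularizers` API computes this for you during training):

```python
import numpy as np

weights = np.array([-0.5, 0.1, 0.0, 2.0])
lam = 1e-4  # regularization strength, matching the 1e-4 used in the example

l1_penalty = lam * np.sum(np.abs(weights))  # linear cost: tolerates a large weight
l2_penalty = lam * np.sum(weights ** 2)     # quadratic cost: punishes large weights hard

print(l1_penalty, l2_penalty)
```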
```
import pandas as pd
from scipy.stats import zscore
# Read the data set
df = pd.read_csv(
"https://data.heatonresearch.com/data/t81-558/jh-simple-dataset.csv",
na_values=['NA','?'])
# Generate dummies for job
df = pd.concat([df,pd.get_dummies(df['job'],prefix="job")],axis=1)
df.drop('job', axis=1, inplace=True)
# Generate dummies for area
df = pd.concat([df,pd.get_dummies(df['area'],prefix="area")],axis=1)
df.drop('area', axis=1, inplace=True)
# Missing values for income
med = df['income'].median()
df['income'] = df['income'].fillna(med)
# Standardize ranges
df['income'] = zscore(df['income'])
df['aspect'] = zscore(df['aspect'])
df['save_rate'] = zscore(df['save_rate'])
df['age'] = zscore(df['age'])
df['subscriptions'] = zscore(df['subscriptions'])
# Convert to numpy - Classification
x_columns = df.columns.drop('product').drop('id')
x = df[x_columns].values
dummies = pd.get_dummies(df['product']) # Classification
products = dummies.columns
y = dummies.values
########################################
# Keras with L1/L2 for Regression
########################################
import pandas as pd
import os
import numpy as np
from sklearn import metrics
from sklearn.model_selection import KFold
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation
from tensorflow.keras import regularizers
# Cross-validate
kf = KFold(5, shuffle=True, random_state=42)
oos_y = []
oos_pred = []
fold = 0
for train, test in kf.split(x):
    fold += 1
    print(f"Fold #{fold}")

    x_train = x[train]
    y_train = y[train]
    x_test = x[test]
    y_test = y[test]

    #kernel_regularizer=regularizers.l2(0.01),
    model = Sequential()
    model.add(Dense(50, input_dim=x.shape[1],
                    activation='relu',
                    activity_regularizer=regularizers.l1(1e-4))) # Hidden 1
    model.add(Dense(25, activation='relu',
                    activity_regularizer=regularizers.l1(1e-4))) # Hidden 2
    model.add(Dense(y.shape[1], activation='softmax')) # Output
    model.compile(loss='categorical_crossentropy', optimizer='adam')
    model.fit(x_train, y_train, validation_data=(x_test, y_test), verbose=0, epochs=500)

    pred = model.predict(x_test)
    oos_y.append(y_test)
    pred = np.argmax(pred, axis=1) # raw probabilities to chosen class (highest probability)
    oos_pred.append(pred)

    # Measure this fold's accuracy
    y_compare = np.argmax(y_test, axis=1) # For accuracy calculation
    score = metrics.accuracy_score(y_compare, pred)
    print(f"Fold score (accuracy): {score}")
# Build the oos prediction list and calculate the error.
oos_y = np.concatenate(oos_y)
oos_pred = np.concatenate(oos_pred)
oos_y_compare = np.argmax(oos_y,axis=1) # For accuracy calculation
score = metrics.accuracy_score(oos_y_compare, oos_pred)
print(f"Final score (accuracy): {score}")
# Write the cross-validated prediction
oos_y = pd.DataFrame(oos_y)
oos_pred = pd.DataFrame(oos_pred)
oosDF = pd.concat( [df, oos_y, oos_pred],axis=1 )
#oosDF.to_csv(filename_write,index=False)
```
# Saving a web page to scrape later
For many scraping jobs, it makes sense to first save a copy of the web page (or pages) that you want to scrape and then operate on the local files you've saved. This is a good practice for a couple of reasons: You won't be bombarding your target server with requests every time you fiddle with your script and rerun it, and you've got a copy saved in case the page (or pages) disappear.
Here's one way to accomplish that. (If you haven't run through [the notebook on using `requests` to fetch web pages](02.%20Fetching%20HTML%20with%20requests.ipynb), do that first.)
We'll need the `requests` and `bs4` libraries, so let's start by importing them:
```
import requests
import bs4
```
## Fetch the page and write to file
Let's grab the Texas death row page: `'https://www.tdcj.texas.gov/death_row/dr_offenders_on_dr.html'`
```
dr_page = requests.get('https://www.tdcj.texas.gov/death_row/dr_offenders_on_dr.html')
# take a peek at the HTML
dr_page.text
```
Now, instead of continuing on with our scraping journey, we'll use some built-in Python tools to write this to file:
```
# define a name for the file we're saving to
HTML_FILE_NAME = 'death-row-page.html'
# open that file in write mode and write the page's HTML into it
with open(HTML_FILE_NAME, 'w') as outfile:
    outfile.write(dr_page.text)
```
The `with` block is just a handy way to deal with opening and closing files -- note that everything under the `with` line is indented.
The `open()` function is used to open files for reading or writing. The first _argument_ that you hand this function is the name of the file you're going to be working on -- we defined it above and attached it to the `HTML_FILE_NAME` variable, which is totally arbitrary. (We could have called it `HTML_BANANAGRAM` if we wanted to.)
The `'w'` means that we're opening the file in "write" mode. We're also tagging the opened file with a variable name using the `as` operator -- `outfile` is an arbitrary variable name that I came up with.
But then we'll use that variable name to do things to the file we've opened. In this case, we want to use the file object's `write()` method to write some content to the file.
What content? The HTML of the page we grabbed, which is accessible through the `.text` attribute.
In human words, this block of code is saying: "Open a file called `death-row-page.html` and write the HTML of that death row page you grabbed earlier into it."
## Reading the HTML from a saved web page
At some point after you've saved your page to file, eventually you'll want to scrape it. To read the HTML into a variable, we'll use a `with` block again, but this time we'll specify "read mode" (`'r'`) and use the `read()` method instead of the `write()` method:
```
with open(HTML_FILE_NAME, 'r') as infile:
    html = infile.read()
html
```
Now it's just a matter of turning that HTML into soup -- [see this notebook for more details](03.%20Parsing%20HTML%20with%20BeautifulSoup.ipynb) -- and parsing the results.
```
soup = bs4.BeautifulSoup(html, 'html.parser')
```
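From here you would pull data out of the soup. The snippet below is a hedged sketch using stand-in HTML, since the death-row page's exact markup may change over time; swap in the tags you actually find on the page:

```python
import bs4

# Stand-in HTML so the sketch is self-contained
html = '<table><tr><td>Smith</td><td>John</td></tr></table>'
soup = bs4.BeautifulSoup(html, 'html.parser')

# Collect the text of every cell, row by row
rows = [[td.text for td in tr.find_all('td')] for tr in soup.find_all('tr')]
print(rows)
```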
```
import pandas as pd
nhl_games= pd.read_csv("/Users/joejohns/data_bootcamp/GitHub/final_project_nhl_prediction/Data/Kaggle_Data_Ellis/game.csv")
nhl_games.columns
nhl_20162017 = nhl_games.loc[(nhl_games['season'] == 20162017)&(nhl_games['type'] == 'R') , ['game_id', 'season', 'type', 'date_time_GMT', 'away_team_id',
'home_team_id', 'away_goals', 'home_goals', 'outcome']]
nhl_20162017['date_time_GMT'] = pd.to_datetime(nhl_20162017['date_time_GMT'])
nhl_20162017.loc[0,'date_time_GMT']
nhl_20162017.sort_values(by='date_time_GMT', inplace = True)
#nhl_team.loc[(nhl_mp["settled_in"] == 'tbc'), :]
nhl_20162017.head(20)
team_info = pd.read_csv("/Users/joejohns/data_bootcamp/GitHub/final_project_nhl_prediction/Data/Kaggle_Data_Ellis/team_info.csv" )
team_info.dtypes
```
```
team_info.head()
team_info.loc[team_info['team_id'] == 1, 'abbreviation'][0]
def map_names(team_id):
    '''Return the team abbreviation for a given team_id'''
    index = team_info.loc[team_info['team_id'] == team_id, 'abbreviation'].index[0]
    return team_info.loc[team_info['team_id'] == team_id, 'abbreviation'][index]
map_names(10)
team_info.loc[:,['team_id', 'abbreviation']]
nhl_20162017['away_team_id'] = nhl_20162017['away_team_id'].map(map_names)
nhl_20162017['home_team_id'] = nhl_20162017['home_team_id'].map(map_names)
nhl_20162017
nhl_20162017 = nhl_20162017.reset_index(drop = True)
df = nhl_20162017.copy()
df
import pandas as pd
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.linear_model import Ridge
df['goal_difference'] = df['home_goals'] - df['away_goals']
# create new variables to show home team win or loss result
df['home_win'] = np.where(df['goal_difference'] > 0, 1, 0)
df['home_loss'] = np.where(df['goal_difference'] < 0, 1, 0)
df.head(6)
df_visitor = pd.get_dummies(df['away_team_id'], dtype=np.int64)
df_home = pd.get_dummies(df['home_team_id'], dtype=np.int64)
df_visitor.head(3)
df_model = df_home.sub(df_visitor)
df_model['goal_difference'] = df['goal_difference']
df_model
df_train = df_model.iloc[:600, :].copy()
df_test = df_model.iloc[600:, :].copy()
lr = Ridge(alpha=0.001)
X = df_train.drop(['goal_difference'], axis=1)
y = df_train['goal_difference']
lr.fit(X, y)
df_ratings = pd.DataFrame(data={'team': X.columns, 'rating': lr.coef_})
df_ratings
##test this on rest of this season WITHOUT updating the rankings as we go!
X_test = df_test.drop(['goal_difference'], axis=1)
y_test = df_test['goal_difference']
y_pred = lr.predict(X_test)
df_test['goal_difference']
def make_win(y):
    '''Map a goal difference to 1 for a home win, 0 otherwise'''
    if y > 0:
        return 1
    return 0
y_test_win = pd.Series(y_test).map(make_win)
y_pred_win = pd.Series(y_pred).map(make_win)
y_pred_win.value_counts()
y_test_win.value_counts()
from sklearn.metrics import accuracy_score, precision_score, recall_score, roc_auc_score, confusion_matrix
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score, f1_score
import matplotlib.pyplot as plt
import seaborn as sns
def evaluate_binary_classification(model_name, y_test, y_pred, y_proba=None):
    accuracy = accuracy_score(y_test, y_pred)
    precision = precision_score(y_test, y_pred)
    recall = recall_score(y_test, y_pred)
    f1 = f1_score(y_test, y_pred)
    if y_proba is not None:  # comparing with "!=" is ambiguous for array inputs
        rocauc_score = roc_auc_score(y_test, y_proba)
    else:
        rocauc_score = "no roc"
    cm = confusion_matrix(y_test, y_pred)
    sns.heatmap(cm, annot=True)
    plt.tight_layout()
    plt.title(f'{model_name}', y=1.1)
    plt.ylabel('Actual label')
    plt.xlabel('Predicted label')
    plt.show()
    print("accuracy: ", accuracy)
    print("precision: ", precision)
    print("recall: ", recall)
    print("f1 score: ", f1)
    print("rocauc: ", rocauc_score)
    print(cm)
    #return accuracy, precision, recall, f1, rocauc_score
evaluate_binary_classification('ridge_regression', y_test_win, y_pred_win)
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score, f1_score
def evaluate_regression(y_test, y_pred):
    mae = mean_absolute_error(y_test, y_pred)
    mse = mean_squared_error(y_test, y_pred)
    r2 = r2_score(y_test, y_pred)
    print("mae", mae)
    print("mse", mse)
    print('r2', r2)
evaluate_regression(y_test, y_pred)
##off by 2 goals on avg?? that's really bad
len(y_test_win)
pred_res= pd.DataFrame({ 'pred_win': list(y_pred_win), 'actual_win': list(y_test_win), 'pred_GD': y_pred, 'actual_GD': y_test })
pred_res.iloc[:20,:]
```
# Handle shapefiles using Geopandas
```
###############################################################################################
###############################################################################################
# Part 1: work with shapefiles
# I am using a "shapefile" which consists of at least four actual files (.shp, .shx, .dbf, .prj). This is a commonly used format.
# The new ".rds" format shapefiles seem to be designed only for use in R programming (For more about shapefile formats: https://gadm.org/formats.html).
# An example shapefiles source: https://gadm.org/download_country_v3.html
###############################################################################################
# Method 1 (Matplotlib + Cartopy)
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
from cartopy.io.shapereader import Reader
from cartopy.feature import ShapelyFeature
# set up working directory
import os
os.chdir("move-to-your-working-directory")
# import the shapefile
UK_shape_file = r'gadm36_GBR_3.shp'
# get the map (geometries)
UK_map = ShapelyFeature(Reader(UK_shape_file).geometries(),ccrs.PlateCarree(), edgecolor='black',facecolor='none')
# initialize a plot
test= plt.axes(projection=ccrs.PlateCarree())
# add the shapefile for the whole UK
test.add_feature(UK_map)
# zoom in to London
test.set_extent([-2,2,51,52], crs=ccrs.PlateCarree())
###############################################################################################
# Method 2 (Matplotlib + Cartopy + Geopandas)
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
import geopandas as gpd
# set up working directory
import os
os.chdir("move-to-your-working-directory")
# read the UK shapefile as a "geopandas.geodataframe.GeoDataFrame"
UK_shapefile = gpd.read_file("gadm36_GBR_3.shp")
# check what UK shapefile contains
UK_shapefile
# get shapes for London, Birmingham and Edinburgh from the UK shapefile
# you can also go to a coarser/finer layer to select a bigger/smaller domain
London = UK_shapefile[UK_shapefile['NAME_2'] == "Greater London"]
Birmingham = UK_shapefile[UK_shapefile['NAME_2'] == "Birmingham"]
Edinburgh = UK_shapefile[UK_shapefile['NAME_2'] == "Edinburgh"]
# check the geometry for each city
print(London.geometry)
print(Birmingham.geometry)
print(Edinburgh.geometry)
# create a list of your study cities and merge the shapes (geopandas.geodataframe.GeoDataFrame)
import pandas as pd
study_cities = [London,Birmingham,Edinburgh]
study_cities_shapes = gpd.GeoDataFrame(pd.concat(study_cities, ignore_index=True))
# initialize a plot
test= plt.axes(projection=ccrs.PlateCarree())
# add shapefiles to your chosen cities only
# you can change "edgecolor", "facecolor" and "linewidth" to highlight certain areas
# you can change the "zorder" to decide the layer
test.add_geometries(study_cities_shapes.geometry, crs=ccrs.PlateCarree(),edgecolor='black',facecolor='none',linewidth=2,zorder=0)
# zoom in to your study domain
test.set_extent([-5,2,51,57], crs=ccrs.PlateCarree())
# Part 1 Remarks:
# 1> I prefer Method 2 as "geopandas" allows more control of the shapefile
# 2> But it is almost impossible to install geopandas following the instructions on its homepage: https://geopandas.org/install.html
# I managed to install it on my windows PC following this video: https://www.youtube.com/watch?v=LNPETGKAe0c
# 3> Method 1 is easy to use on all platforms, although there is less control of the shapefile
###############################################################################################
# Part 2: some techniques with "polygons"
# sometimes, we may want to know which data sites are within or outside a certain area
# or we may want to know if two areas have any overlaps
# you can use some tricks with "polygons" to achieve these
###############################################################################################
# task 1: create a polygon using shapefiles (using Chinese cities as an example)
# read shapefiles for cities in mainland China
os.chdir("/rds/projects/2018/maraisea-glu-01/Study/Research_Data/BTH/domain/gadm36_CHN_shp")
China_cities = gpd.read_file("gadm36_CHN_2.shp")
# check the city list
print(China_cities['NAME_2'])
# get the list of your target cities
os.chdir("/rds/projects/2018/maraisea-glu-01/Study/Research_Data/BTH/domain/")
BTH_cities = pd.read_csv("2+26_cities.csv")
BTH_cities = list(BTH_cities['City'])
# extract the shape (multi-polygon) for each city
BTH_shapes = [China_cities[China_cities['NAME_2'] == city_name] for city_name in BTH_cities]
print("Number of city shapefiles:",len(BTH_shapes))
# combine shapefiles from all cities into a single shape
BTH_shapes = gpd.GeoDataFrame(pd.concat(BTH_shapes, ignore_index=True))
# check the shape for a certain city
BTH_shapes['geometry'][0]
# plot the combined shape for the target cities
from shapely.ops import cascaded_union
BTH_polygons = BTH_shapes['geometry']
BTH_boundary = gpd.GeoSeries(cascaded_union(BTH_polygons))
BTH_boundary.plot(color = 'red')
plt.show()
###############################################################################################
# task 2: derive the polygon for a grid centre with given resolutions (use GEOS-Chem model grids as the example)
def create_polygon(lon, lat, lon_res, lat_res):
    '''Input lon, lat and the lon/lat resolutions, in order. Then create the polygon for the target grid'''
    from shapely import geometry
    p1 = geometry.Point(lon-lon_res/2, lat-lat_res/2)
    p2 = geometry.Point(lon+lon_res/2, lat-lat_res/2)
    p3 = geometry.Point(lon+lon_res/2, lat+lat_res/2)
    p4 = geometry.Point(lon-lon_res/2, lat+lat_res/2)
    pointList = [p1, p2, p3, p4, p1]
    output_polygon = geometry.Polygon([[p.x, p.y] for p in pointList])
    # wrap the polygon in a GeoSeries so it plots like the shapes above
    output_polygon = gpd.GeoSeries(output_polygon)
    return output_polygon
# based on this, you can also create your own function to return a polygon using the coordinates of data points.
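# for example, a hypothetical helper (not part of the original workflow) that
# builds a polygon straight from a list of (lon, lat) coordinate pairs:
def polygon_from_points(coords):
    '''Build a shapely Polygon from a list of (lon, lat) tuples'''
    from shapely import geometry
    return geometry.Polygon(coords)
# usage: a unit square from its four corners
square = polygon_from_points([(0, 0), (1, 0), (1, 1), (0, 1)])
print(square.area)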
###############################################################################################
# task 3: test if a polygon contains a certain point
from shapely.geometry import Point
from shapely.geometry.polygon import Polygon
point = Point(0.5, 0.5)
polygon = Polygon([(0, 0), (0, 1), (1, 1), (1, 0)])
print(polygon.contains(point))
###############################################################################################
# task 4: test if two polygons have any overlaps
polygon_A = Polygon([(0, 0), (0, 2), (2, 2), (2, 0)])
polygon_B = Polygon([(1, 1), (1, 3), (3, 3), (3, 1)])
print(polygon_A.intersects(polygon_B))
# Part 2 Remarks:
# these can be useful, as sometimes we want to know which grid contains our target data points
# or we may want to know which grids are within the target domain
# or we may want to know some details of the data in a certain domain
###############################################################################################
```
# Task 1: Getting started with Numpy
Let's spend a few minutes just learning some of the fundamentals of Numpy. (pronounced as num-pie **not num-pee**)
### what is numpy
Numpy is a Python library that supports large, multi-dimensional arrays and matrices.
Let's look at an example. Suppose we start with a little table:
| a | b | c | d | e |
| :---: | :---: | :---: | :---: | :---: |
| 0 | 1 | 2 | 3 | 4 |
|10| 11| 12 | 13 | 14|
|20| 21 | 22 | 23 | 24 |
|30 | 31 | 32 | 33 | 34 |
|40 |41 | 42 | 43 | 44 |
and I simply want to add 10 to each cell:
| a | b | c | d | e |
| :---: | :---: | :---: | :---: | :---: |
| 10 | 11 | 12 | 13 | 14 |
|20| 21| 22 | 23 | 24|
|30| 31 | 32 | 33 | 34 |
|40 | 41 | 42 | 43 | 44 |
|50 |51 | 52 | 53 | 54 |
To make things interesting, instead of a 5x5 array, let's make it 1,000x1,000 -- so 1 million cells!
First, let's construct it in generic Python
```
a = [[x + y * 1000 for x in range(1000)] for y in range(1000)]
```
Instead of glossing over the first code example in the course, take your time, go back, and parse it out so you understand it. Test it out and see what it looks like. For example, how would you change the example to make a 10x10 array called `a2`? Execute the code here:
```
# TO DO
```
Now let's take a look at the value of a2:
```
a2
```
Now that we understand that line of code let's go on and write a function that will add 10 to each cell in our original 1000x1000 matrix.
```
def addToArr(sizeof):
for i in range(sizeof):
for j in range(sizeof):
a[i][j] = a[i][j] + 10
```
As you can see, we iterate over the array with nested for loops.
Let's take a look at how much time it takes to run that function:
```
%time addToArr(1000)
```
My results were:
CPU times: user 145 ms, sys: 0 ns, total: 145 ms
Wall time: 143 ms
So about 1/7 of a second.
### Doing it using Numpy
Now let's do the same using Numpy.
We can construct the array using
arr = np.arange(1000000).reshape((1000,1000))
Not sure what that line does? Numpy has great online documentation. [Documentation for np.arange](https://numpy.org/doc/stable/reference/generated/numpy.arange.html) says it "Return evenly spaced values within a given interval." Let's try it out:
```
import numpy as np
np.arange(16)
```
So `np.arange(16)` creates an array of 16 sequential integers. [The documentation for reshape](https://numpy.org/doc/1.18/reference/generated/numpy.reshape.html) says, as the name suggests, "Gives a new shape to an array without changing its data." Suppose we want to reshape our one-dimensional array of 16 integers to a 4x4 one. We can do:
```
np.arange(16).reshape((4,4))
```
As you can see it is pretty easy to find documentation on Numpy.
Back to our example of creating a 1000x1000 matrix, we now can time how long it takes to add 10 to each cell.
%time arr = arr + 10
Let's put this all together:
```
import numpy as np
arr = np.arange(1000000).reshape((1000,1000))
%time arr = arr + 10
```
My results were
CPU times: user 1.26 ms, sys: 408 µs, total: 1.67 ms
Wall time: 1.68 ms
So, depending on your computer, somewhere around 25 to 100 times faster. **That is phenomenally faster!**
And Numpy is even fast in creating arrays:
#### the generic Python way
```
%time a = [[x + y * 1000 for x in range(1000)] for y in range(1000)]
```
My results were
CPU times: user 92.1 ms, sys: 11.5 ms, total: 104 ms
Wall time: 102 ms
#### the Numpy way
```
%time arr = np.arange(1000000).reshape((1000,1000))
```
What are your results?
<h3 style="color:red">Q1. Speed</h3>
<span style="color:red">Suppose I want to create an array with 10,000 by 10,000 cells. Then I want to add 1 to each cell. How much time does this take using generic Python arrays and using Numpy arrays?</span>
#### in Python
(be patient -- this may take a number of seconds)
#### in Numpy
### built in functions
In addition to being faster, numpy has a wide range of built-in functions. So, for example, instead of writing code to calculate the mean, sum, or standard deviation of a multidimensional array, you can just use numpy:
```
arr.mean()
arr.sum()
arr.std()
```
So not only is it faster, but it minimizes the code you have to write. A win-win.
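These reductions can also be applied row-wise or column-wise via the `axis` argument. A small sketch:

```python
import numpy as np

arr = np.array([[10, 20, 30], [40, 50, 60]])
print(arr.sum())          # 210 -- sum over every cell
print(arr.sum(axis=0))    # [50 70 90] -- sum down each column
print(arr.sum(axis=1))    # [ 60 150] -- sum across each row
print(arr.mean(axis=0))   # [25. 35. 45.] -- mean of each column
```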
Let's continue with some basics.
## numpy examined
So Numpy is a library containing a super-fast n-dimensional array object and a load of functions that can operate on those arrays. To use numpy, we must first load the library into our code and we do that with the statement:
```
import numpy as np
```
Perhaps most of you are saying "fine, fine, I know this already", but let me catch others up to speed. This is just one of several ways we can load a library into Python. We could just say:
```
import numpy
```
and every time we need to use one of the functions built in
to numpy we would need to preface that function with `numpy`. So for example, we could create an array with
```
arr = numpy.array([1, 2, 3, 4, 5])
```
If we got tired of writing `numpy` in front of every function, instead of typing
```
import numpy
```
we could write:
```
from numpy import *
```
(where that * means 'everything' and the whole expression means import everything from the numpy library). Now we can use any numpy function without putting numpy in front of it:
```
arr = array([1, 2, 3, 4, 5])
```
This may at first seem like a good idea, but it is considered bad form by Python developers, since it floods your namespace and makes it hard to tell where a function came from.
The solution is to use what we initially introduced:
```
import numpy as np
```
This makes `np` an alias for numpy, so now we put `np` in front of numpy functions.
```
arr = np.array([1, 2, 3, 4, 5])
```
Of course we could use anything as an alias for numpy:
```
import numpy as myCoolSneakers
arr = myCoolSneakers.array([1, 2, 3, 4, 5])
```
But it is convention among data scientists, machine learning experts, and the cool kids to use np. One big benefit of this convention is that it makes the code you write more understandable to others and vice versa (I don't need to be scouring your code to find out what `myCoolSneakers.array` does)
## creating arrays
An Array in Numpy is called an `ndarray` for n-dimensional array. As we will see, they share some similarities with Python lists. We have already seen how to create one:
```
arr = np.array([1, 2, 3, 4, 5])
```
and to display what `arr` equals
```
arr
```
This is a one dimensional array. The position of an element in the array is called the index. The first element of the array is at index 0, the next at index 1 and so on. We can get the item at a particular index by using the syntax:
```
arr[0]
arr[3]
```
We can create a 2 dimensional array that looks like
10 20 30
40 50 60
by:
```
arr = np.array([[10, 20, 30], [40, 50, 60]])
```
and we can show the contents of that array just by using the name of the array, `arr`
```
arr
```
We don't need to name arrays `arr`, we can name them anything we want.
```
ratings = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
ratings
```
So far, we've been creating numpy arrays by using Python lists. We can make that more explicit by first creating the Python list and then using it to create the ndarray:
```
pythonArray = [[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]]
sweet = np.array(pythonArray)
sweet
```
We can also create an array of all zeros or all ones directly:
```
np.zeros(10)
np.ones((5, 2))
```
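Beyond zeros and ones, numpy has a few other creation helpers you may find handy. A quick sketch:

```python
import numpy as np

filled = np.full((2, 3), 7)       # a 2x3 array where every cell is 7
ident = np.eye(3)                 # a 3x3 identity matrix
spaced = np.linspace(0, 1, 5)     # 5 evenly spaced values from 0 to 1
print(filled)
print(ident)
print(spaced)   # [0.   0.25 0.5  0.75 1.  ]
```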
### indexing
Indexing elements in ndarrays works pretty much the same as it does in Python. We have already seen one example, here is another example with a one dimensional array:
```
temperatures = np.array([48, 44, 37, 35, 32, 29, 33, 36, 42])
temperatures[0]
temperatures[3]
```
and a two dimensional one:
```
sample = np.array([[10, 20, 30], [40, 50, 60]])
sample[0][1]
```
For numpy ndarrays we can also use a comma to separate the indices of multi-dimensional arrays:
```
sample[1,2]
```
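The comma syntax also combines with slices to pull out whole rows or columns, which comes up constantly in practice. A small sketch:

```python
import numpy as np

sample = np.array([[10, 20, 30], [40, 50, 60]])
print(sample[0, :])   # the whole first row: [10 20 30]
print(sample[:, 1])   # the whole second column: [20 50]
```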
And, like Python you can also get a slice of an array. First, here is the basic Python example:
```
a = [10, 20, 30, 40, 50, 60]
b = a[1:4]
b
```
and the similar numpy example:
```
aarr = np.array(a)
barr = aarr[1:4]
barr
```
### Something wacky to remember
But there is a difference between Python arrays and numpy ndarrays. If I alter the array `b` in Python the original `a` array is not altered:
```
b[1] = b[1] + 5
b
a
```
but if we do the same in numpy:
```
barr[1] = barr[1] + 5
barr
aarr
```
we see that the original array is altered since we modified the slice. That's because a numpy slice is a *view* into the original array, not a copy. This may seem wacky to you, or maybe it doesn't. In any case, it is something you will get used to. For now, just be aware of this.
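If you want a slice that is independent of the original, you can ask for an explicit copy. A quick sketch:

```python
import numpy as np

aarr = np.array([10, 20, 30, 40, 50, 60])
barr = aarr[1:4].copy()     # .copy() gives an independent array, not a view
barr[1] = barr[1] + 5
print(barr)   # [20 35 40]
print(aarr)   # [10 20 30 40 50 60] -- unchanged this time
```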
## functions on arrays
Numpy has a wide range of array functions. Here is just a sample.
### Unary functions
#### absolute value
```
arr = np.array([-2, 12, -25, 0])
arr2 = np.abs(arr)
arr2
arr = np.array([[-2, 12], [-25, 0]])
arr2 = np.abs(arr)
arr2
```
#### square
```
arr = np.array([-1, 2, -3, 4])
arr2 = np.square(arr)
arr2
```
#### squareroot
```
arr = np.array([[4, 9], [16, 25]])
arr2 = np.sqrt(arr)
arr2
```
### Binary functions
#### add /subtract / multiply / divide
```
arr1 = np.array([[10, 20], [30, 40]])
arr2 = np.array([[1, 2], [3, 4]])
np.add(arr1, arr2)
np.subtract(arr1, arr2)
np.multiply(arr1, arr2)
np.divide(arr1, arr2)
```
#### maximum / minimum
```
arr1 = np.array([[10, 2], [3, 40]])
arr2 = np.array([[1, 20], [30, 4]])
np.maximum(arr1, arr2)
```
#### these are just examples. There are more unary and binary functions
## Numpy Uber
Let's say I have Uber drivers at various intersections around Austin. I will represent that as a set of x,y coordinates.
| Driver |xPos | yPos |
| :---: | :---: | :---: |
| Ann | 4 | 5 |
| Clara | 6 | 6 |
| Dora | 3 | 1 |
| Erica | 9 | 5 |
Now I would like to find the closest driver to a customer who is at 6, 3.
And to further define *closest* I am going to use what is called **Manhattan Distance**. Roughly put, Manhattan distance is the distance you would travel if you followed the streets. Ann, for example, is two blocks west of our customer and two blocks north. So the Manhattan distance from Ann to our customer is `2+2` or `4`.
First, to make things easy (and because the data in a numpy array must be of the same type), I will represent the x and y positions in one numpy array and the driver names in another:
```
locations = np.array([[4, 5], [6, 6], [3, 1], [9,5]])
locations
drivers = np.array(["Ann", "Clara", "Dora", "Erica"])
```
Our customer is at
```
cust = np.array([6, 3])
```
now we are going to figure out the distance between each of our drivers and the customer
```
xydiff = locations - cust
xydiff
```
NOTE: displaying the results with `xydiff` isn't a necessary step. I just like seeing intermediate results.
Ok. Now I am going to sum the absolute values:
```
distances = np.abs(xydiff).sum(axis=1)
distances
```
So the output is the array `[4, 3, 5, 5]` which shows that Ann is 4 away from our customer; Clara is 3 away and so on.
Now I am going to sort these using `argsort`:
```
sorted = np.argsort(distances)
sorted
```
`argsort` returns an array of sorted indices. So the element at position 1 is the smallest followed by the element at position 0 and so on.
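A tiny standalone example may make `argsort` clearer:

```python
import numpy as np

scores = np.array([30, 10, 20])
order = np.argsort(scores)
print(order)           # [1 2 0] -- index 1 holds the smallest value
print(scores[order])   # [10 20 30] -- indexing with the result sorts the array
```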
Next, I am going to get the first element of that array (in this case 1) and find the name of the driver at that position in the `drivers` array
```
drivers[sorted[0]]
```
<h3 style="color:red">Q2. You Try</h3>
<span style="color:red">Can you put all the above in a function that takes 3 arguments: the location array, the array containing the names of the drivers, and the array containing the location of the customer? It should return the name of the closest driver.</span>
```
def findDriver(locationsArr, driversArr, customerArr):
result = ''
### put your code here
return result
print(findDriver(locations, drivers, cust)) # this should return Clara
```
### CONGRATULATIONS
Even though this is just an intro to Numpy, I am going to throw some math at you. So far we have been looking at a two dimensional example, x and y (or North-South and East-West), and the formula for the distance, Dist, between Ann, A, and Customer C is
$$ DIST_{AC} = |A_x - C_x | + |A_y - C_y | $$
Now I am going to warp this a bit. In this example, each driver is represented by an array (as is the customer). So, Ann is represented by `[4,5]` and the customer by `[6,3]`. So Ann's 0th element is 4 and the customer's 0th element is 6. And, sorry, computer science people start counting at 0 but math people (and all other normal people) start at 1, so we can rewrite the above formula as:
$$ DIST_{AC} = |A_1 - C_1 | + |A_2 - C_2 | $$
That's the distance formula for Ann and the Customer. We can generalize the formula by saying the distance between any two people, let's call them *x* and *y*, is
$$ DIST_{xy} = |x_1 - y_1 | + |x_2 - y_2 | $$
That is the formula for 2 dimensional Manhattan Distance. We can imagine a three dimensional case.
$$ DIST_{xy} = |x_1 - y_1 | + |x_2 - y_2 | + |x_3 - y_3 | $$
and we can generalize the formula to the n-dimensional case.
$$ DIST_{xy}=\sum_{i=1}^n |x_i - y_i| $$
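That summation is exactly what the earlier `np.abs(xydiff).sum(axis=1)` line computed row by row. As a standalone sketch, for any two equal-length arrays:

```python
import numpy as np

def manhattan(x, y):
    """n-dimensional Manhattan distance: the sum of |x_i - y_i|."""
    return np.abs(x - y).sum()

# Ann at (4, 5) and the customer at (6, 3), from the Uber example:
print(manhattan(np.array([4, 5]), np.array([6, 3])))  # 4
```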
Just in time for a five dimensional example:
# The Amazing 5D Music example
Guests went into a listening booth and rated the following tunes:
* [Janelle Monae Tightrope](https://www.youtube.com/watch?v=pwnefUaKCbc)
* [Major Lazer - Cold Water](https://www.youtube.com/watch?v=nBtDsQ4fhXY)
* [Tim McGraw - Humble & Kind](https://www.youtube.com/watch?v=awzNHuGqoMc)
* [Maren Morris - My Church](https://www.youtube.com/watch?v=ouWQ25O-Mcg)
* [Hailee Steinfeld - Starving](https://www.youtube.com/watch?v=xwjwCFZpdns)
Here are the results:
| Guest | Janelle Monae | Major Lazer | Tim McGraw | Maren Morris | Hailee Steinfeld|
|---|---|---|---|---|---|
| Ann | 4 | 5 | 2 | 1 | 3 |
| Ben | 3 | 1 | 5 | 4 | 2|
| Jordyn | 5 | 5 | 2 | 2 | 3|
| Sam | 4 | 1 | 4 | 4 | 1|
| Hyunseo | 1 | 1 | 5 | 4 | 1 |
| Ahmed | 4 | 5 | 3 | 3 | 1 |
So Ann, for example, really liked Major Lazer and Janelle Monae but didn't care much for Maren Morris.
Let's set up a few numpy arrays.
```
customers = np.array([[4, 5, 2, 1, 3],
[3, 1, 5, 4, 2],
[5, 5, 2, 2, 3],
[4, 1, 4, 4, 1],
[1, 1, 5, 4, 1],
[4, 5, 3, 3, 1]])
customerNames = np.array(["Ann", "Ben", 'Jordyn', "Sam", "Hyunseo", "Ahmed"])
```
Now let's set up a few new customers:
```
mikaela = np.array([3, 2, 4, 5, 4])
brandon = np.array([4, 5, 1, 2, 3])
```
Now we would like to determine which of our current customers is closest to Mikaela and which to Brandon.
<h3 style="color:red">Q3. You Try</h3>
<span style="color:red">Can you write a function findClosest that takes 3 arguments: customers, customerNames, and an array representing one customer's ratings and returns the name of the closest customer?</span>
Let's break this down a bit.
1. Which line in the Numpy Uber section above will create a new array which is the result of subtracting the Mikaela array from each row of the customers array resulting in
```
array([[ 1, 3, -2, -4, -1],
[ 0, -1, 1, -1, -2],
[ 2, 3, -2, -3, -1],
[ 1, -1, 0, -1, -3],
[-2, -1, 1, -1, -3],
[ 1, 3, -1, -2, -3]])
```
```
# TODO
```
2. Which line above will take the array you created and generate a single integer distance for each row representing how far away that row is from Mikaela? The results will look like:
```
array([11, 5, 11, 6, 8, 10])
```
```
# TO DO
```
Finally, we want a sorted array of indices, the zeroth element of that array will be the closest row to Mikaela, the next element will be the next closest and so on. The result should be
```
array([1, 3, 4, 5, 0, 2])
```
```
# TO DO
```
Finally we need the name of the person that is the closest.
```
# TO DO
```
Okay, time to put it all together. Can you combine all the code you wrote above to finish the following function? So x is the new person and we want to find the closest customer to x.
```
def findClosest(customers, customerNames, x):
# TO DO
return ''
print(findClosest(customers, customerNames, mikaela)) # Should print Ben
print(findClosest(customers, customerNames, brandon)) # Should print Ann
```
## Numpy Amazon
We are going to start with the same array we did way up above:
| Drone |xPos | yPos |
| :---: | :---: | :---: |
| wing_1a | 4 | 5 |
| wing_2a | 6 | 6 |
| wing_3a | 3 | 1 |
| wing_4a | 9 | 5 |
But this time, instead of Uber drivers, think of these as positions of [Alphabet's Wing delivery drones](https://wing.com/).
Now we would like to find the closest drone to a customer who is at 7, 1.
With the previous example we used Manhattan Distance. With drones, we can compute the distance as the crow flies -- or Euclidean Distance. We probably learned how to do this way back in 7th grade when we learned the Pythagorean Theorem which states:
$$c^2 = a^2 + b^2$$
Where *c* is the hypotenuse and *a* and *b* are the two other sides. So, if we want to find *c*:
$$c = \sqrt{a^2 + b^2}$$
If we want to find the distance between the drone and a customer, *x* and *y* in the formula becomes
$$Dist_{xy} = \sqrt{(x_1-y_1)^2 + (x_2-y_2)^2}$$
and for `wing_1a` who is at `[4,5]` and our customer who is at `[7,1]` then the formula becomes:
$$Dist_{xy} = \sqrt{(x_1-y_1)^2 + (x_2-y_2)^2} = \sqrt{(4-7)^2 + (5-1)^2} =\sqrt{(-3)^2 + 4^2} = \sqrt{9 + 16} = \sqrt{25} = 5$$
Sweet! And to generalize this distance formula:
$$Dist_{xy} = \sqrt{(x_1-y_1)^2 + (x_2-y_2)^2}$$
to n-dimensions:
$$Dist_{xy} = \sqrt{\sum_{i=1}^n{(x_i-y_i)^2}}$$
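As a sketch, the n-dimensional Euclidean distance (the square root of the summed squared differences) is a one-liner in numpy:

```python
import numpy as np

def euclidean_dist(x, y):
    """n-dimensional Euclidean distance: sqrt of the sum of squared differences."""
    return np.sqrt(np.square(x - y).sum())

# wing_1a at (4, 5) and the customer at (7, 1):
print(euclidean_dist(np.array([4, 5]), np.array([7, 1])))  # 5.0
```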
<h3 style="color:red">Q4. You Try</h3>
<span style="color:red">Can you write a function euclidean that takes 3 arguments: droneLocation, droneNames, and an array representing one customer's position and returns the name of the closest drone?</span>
First, a helpful hint:
```
arr = np.array([-1, 2, -3, 4])
arr2 = np.square(arr)
arr2
locations = np.array([[4, 5], [6, 6], [3, 1], [9,5]])
drivers = np.array(["wing_1a", "wing_2a", "wing_3a", "wing_4a"])
cust = np.array([7, 1])
def euclidean(droneLocation, droneNames, x):
result = ''
### your code here
return result
euclidean(locations, drivers, cust)
```
<h3 style="color:red">Q5. You Try</h3>
<span style="color:red">Try your code on the "Amazing 5D Music" example. Does it return the same person or a different one?</span>
```
#TBD
```
# Intake / Pangeo Catalog: Making It Easier To Consume Earth’s Climate and Weather Data
Anderson Banihirwe (abanihi@ucar.edu), Charles Blackmon-Luca (blackmon@ldeo.columbia.edu), Ryan Abernathey (rpa@ldeo.columbia.edu), Joseph Hamman (jhamman@ucar.edu)
- NCAR, Boulder, CO, USA
- Columbia University, Palisades, NY, USA
[2020 EarthCube Annual Meeting](https://www.earthcube.org/EC2020) ID: 133
## Introduction
Computer simulations of the Earth’s climate and weather generate huge amounts of data. These data are often persisted on high-performance computing (HPC) systems or in the cloud across multiple data assets in a variety of formats (netCDF, Zarr, etc.).
Finding, investigating, and loading these data assets into compute-ready data containers costs time and effort.
The user should know what data are available and their associated metadata, preferably before loading a specific data asset and analyzing it.
In this notebook, we demonstrate [intake-esm](https://github.com/NCAR/intake-esm), a Python package and an [intake](https://github.com/intake/intake) plugin with aims of facilitating:
- the discovery of earth's climate and weather datasets.
- the ingestion of these datasets into [xarray](https://github.com/pydata/xarray) dataset containers.
The common/popular starting point for finding and investigating large datasets is with a data catalog.
A *data catalog* is a collection of metadata, combined with search tools, that helps data analysts and other users to find the data they need.
For a user to take full advantage of intake-esm, they must point it to an *Earth System Model (ESM) data catalog*.
This is a JSON-formatted file that conforms to the ESM collection specification.
## ESM Collection Specification
The [ESM collection specification](https://github.com/NCAR/esm-collection-spec) provides a machine-readable format for describing a wide range of climate and weather datasets, with a goal of making it easier to index and discover climate and weather data assets.
An asset is any netCDF/HDF file or Zarr store that contains relevant data.
An ESM data catalog serves as an inventory of available data, and provides information to explore the existing data assets.
Additionally, an ESM catalog can contain information on how to aggregate compatible groups of data assets into singular xarray datasets.
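As a rough illustration (the field names below follow the specification linked above, but the catalog name and values are invented), a minimal ESM data catalog might look like:

```json
{
  "esmcat_version": "0.1.0",
  "id": "example-catalog",
  "description": "A hypothetical ESM data catalog",
  "catalog_file": "example-catalog.csv",
  "attributes": [
    {"column_name": "source_id"},
    {"column_name": "experiment_id"},
    {"column_name": "variable_id"}
  ],
  "assets": {
    "column_name": "zstore",
    "format": "zarr"
  }
}
```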
## Use Case: CMIP6 hosted on Google Cloud
The Coupled Model Intercomparison Project (CMIP) is an international collaborative effort to improve the knowledge about climate change and its impacts on the Earth System and on our society.
[CMIP began in 1995](https://www.wcrp-climate.org/wgcm-cmip), and today we are in its sixth phase (CMIP6).
The CMIP6 data archive consists of model output created across approximately 30 working groups and 1,000 researchers investigating the urgent environmental problem of climate change, and will provide a wealth of information for the next Assessment Report (AR6) of the [Intergovernmental Panel on Climate Change](https://www.ipcc.ch/) (IPCC).
Last year, Pangeo partnered with Google Cloud to bring CMIP6 climate data to Google Cloud’s Public Datasets program.
You can read more about this process [here](https://cloud.google.com/blog/products/data-analytics/new-climate-model-data-now-google-public-datasets).
For the remainder of this section, we will demonstrate intake-esm's features using the ESM data catalog for the CMIP6 data stored on Google Cloud Storage.
This catalog resides [in a dedicated CMIP6 bucket](https://storage.googleapis.com/cmip6/pangeo-cmip6.json).
### Loading an ESM data catalog
To load an ESM data catalog with intake-esm, the user must provide a valid ESM data catalog as input:
```
import warnings
warnings.filterwarnings("ignore")
import intake
col = intake.open_esm_datastore('https://storage.googleapis.com/cmip6/pangeo-cmip6.json')
col
```
The summary above tells us that this catalog contains over 268,000 data assets.
We can get more information on the individual data assets contained in the catalog by calling the underlying dataframe created when it is initialized:
```
col.df.head()
```
The first data asset listed in the catalog contains:
- the ambient aerosol optical thickness at 550nm (`variable_id='od550aer'`), as a function of latitude, longitude, time,
- in an individual climate model experiment with the Taiwan Earth System Model 1.0 model (`source_id='TaiESM1'`),
- forced by the *Historical transient with SSTs prescribed from historical* experiment (`experiment_id='histSST'`),
- developed by the Taiwan Research Center for Environmental Changes (`institution_id='AS-RCEC'`),
- run as part of the Aerosols and Chemistry Model Intercomparison Project (`activity_id='AerChemMIP'`)
And is located in Google Cloud Storage at `gs://cmip6/AerChemMIP/AS-RCEC/TaiESM1/histSST/r1i1p1f1/AERmon/od550aer/gn/`.
Note: the amount of detail provided in the catalog is determined by the data provider who builds the catalog.
### Searching for datasets
After exploring the [CMIP6 controlled vocabulary](https://github.com/WCRP-CMIP/CMIP6_CVs), it’s straightforward to get the data assets you want using intake-esm's `search()` method. In the example below, we are going to search for the following:
- variables: `tas` which stands for near-surface air temperature
- experiments: `['historical', 'ssp245', 'ssp585']`:
- `historical`: all forcing of the recent past.
- `ssp245`: update of [RCP4.5](https://en.wikipedia.org/wiki/Representative_Concentration_Pathway) based on SSP2.
- `ssp585`: emission-driven [RCP8.5](https://en.wikipedia.org/wiki/Representative_Concentration_Pathway) based on SSP5.
- table_id: `Amon` which stands for Monthly atmospheric data.
- grid_label: `gr` which stands for regridded data reported on the data provider's preferred target grid.
For more details on the CMIP6 vocabulary, please check this [website](http://clipc-services.ceda.ac.uk/dreq/index.html).
```
# form query dictionary
query = dict(experiment_id=['historical', 'ssp245', 'ssp585'],
table_id='Amon',
variable_id=['tas'],
member_id = 'r1i1p1f1',
grid_label='gr')
# subset catalog and get some metrics grouped by 'source_id'
col_subset = col.search(require_all_on=['source_id'], **query)
col_subset.df.groupby('source_id')[['experiment_id', 'variable_id', 'table_id']].nunique()
```
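Under the hood the catalog is just a dataframe, so conceptually `search()` reduces to a pandas filter. A rough sketch (not intake-esm's actual implementation, with invented values):

```python
import pandas as pd

# A toy stand-in for the catalog's underlying dataframe
df = pd.DataFrame({
    'source_id':     ['TaiESM1', 'CIESM', 'CIESM'],
    'experiment_id': ['histSST', 'ssp585', 'historical'],
    'variable_id':   ['od550aer', 'tas', 'tas'],
})

# search(variable_id=['tas'], experiment_id=['ssp585']) roughly reduces to:
subset = df[df['variable_id'].isin(['tas']) & df['experiment_id'].isin(['ssp585'])]
print(subset['source_id'].tolist())  # ['CIESM']
```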
### Loading datasets
Once you've identified data assets of interest, you can load them into xarray dataset containers using the `to_dataset_dict()` method. Invoking this method yields a Python dictionary of high-level aggregated xarray datasets.
The logic for merging/concatenating the query results into higher level xarray datasets is provided in the input JSON file, under `aggregation_control`:
```json
"aggregation_control": {
"variable_column_name": "variable_id",
"groupby_attrs": [
"activity_id",
"institution_id",
"source_id",
"experiment_id",
"table_id",
"grid_label"
],
"aggregations": [{
"type": "union",
"attribute_name": "variable_id"
},
{
"type": "join_new",
"attribute_name": "member_id",
"options": {
"coords": "minimal",
"compat": "override"
}
},
{
"type": "join_new",
"attribute_name": "dcpp_init_year",
"options": {
"coords": "minimal",
"compat": "override"
}
}
]
}
```
Though these aggregation specifications are sufficient to merge individual data assets into xarray datasets, sometimes additional arguments must be provided depending on the format of the data assets.
For example, Zarr-based assets can be loaded with the option `consolidated=True`, which relies on a consolidated metadata file to describe the assets with minimal data egress.
```
dsets = col_subset.to_dataset_dict(zarr_kwargs={'consolidated': True}, storage_options={'token': 'anon'})
# list all merged datasets
[key for key in dsets.keys()]
```
When the datasets have finished loading, we can extract any of them like we would a value in a Python dictionary:
```
ds = dsets['ScenarioMIP.THU.CIESM.ssp585.Amon.gr']
ds
# Let’s create a quick plot for a slice of the data:
ds.tas.isel(time=range(1, 1000, 90))\
.plot(col="time", col_wrap=4, robust=True)
```
## Pangeo Catalog
Pangeo Catalog is an open-source project to enumerate and organize cloud-optimized climate data stored across a variety of providers.
In addition to offering various useful climate datasets in a consolidated location, the project also serves as a means of accessing public ESM data catalogs.
### Accessing catalogs using Python
At the core of the project is a [GitHub repository](https://github.com/pangeo-data/pangeo-datastore) containing several static intake catalogs in the form of YAML files.
Thanks to plugins like intake-esm and [intake-xarray](https://github.com/intake/intake-xarray), these catalogs can contain links to ESM data catalogs or data assets that can be loaded into xarray datasets, along with the arguments required to load them.
By editing these files using Git-based version control, anyone is free to contribute a dataset supported by the available [intake plugins](https://intake.readthedocs.io/en/latest/plugin-directory.html).
Users can then browse these catalogs by providing their associated URL as input into intake's `open_catalog()`; their tree-like structure allows a user to explore their entirety by simply opening the [root catalog](https://github.com/pangeo-data/pangeo-datastore/blob/master/intake-catalogs/master.yaml) and recursively walking through it:
```
cat = intake.open_catalog('https://raw.githubusercontent.com/pangeo-data/pangeo-datastore/master/intake-catalogs/master.yaml')
entries = cat.walk(depth=5)
[key for key in entries.keys()]
```
The catalogs can also be explored using intake's own `search()` method:
```
cat_subset = cat.search('cmip6')
list(cat_subset)
```
Once we have found a dataset or collection we want to explore, we can do so without the need of any user inputted argument:
```
cat.climate.tracmip()
```
### Accessing catalogs using catalog.pangeo.io
For those who don't want to initialize a Python environment to explore the catalogs, [catalog.pangeo.io](https://catalog.pangeo.io/) offers a means of viewing them from a standalone web application.
The website directly mirrors the catalogs in the GitHub repository, with previews of each dataset or collection loaded on the fly:
<img src="images/pangeo-catalog.png" alt="Example of an intake-esm collection on catalog.pangeo.io" width="1000">
From here, users can view the JSON input associated with an ESM collection and sort/subset its contents:
<img src="images/esm-demo.gif" alt="Example of an intake-esm collection on catalog.pangeo.io" width="800">
## Conclusion
With intake-esm, much of the toil associated with discovering, loading, and consolidating data assets can be eliminated.
In addition to making computations on huge datasets more accessible to the scientific community, the package also promotes reproducibility by providing simple methodology to create consistent datasets.
Coupled with Pangeo Catalog (which in itself is powered by intake), intake-esm gives climate scientists the means to create and distribute large data collections with instructions on how to use them essentially written into their ESM specifications.
There is still much work to be done with respect to intake-esm and Pangeo Catalog; in particular, goals include:
- Merging ESM collection specifications into [SpatioTemporal Asset Catalog (STAC) specification](https://stacspec.org/) to offer a more universal specification standard
- Development of tools to verify and describe catalogued data on a regular basis
- Restructuring of catalogs to allow subsetting by cloud provider region
[Please reach out](https://discourse.pangeo.io/) if you are interested in participating in any way.
## References
- [intake-esm documentation](https://intake-esm.readthedocs.io/en/latest/)
- [intake documentation](https://intake.readthedocs.io/en/latest/)
- [Pangeo Catalog on GitHub](https://github.com/pangeo-data/pangeo-datastore)
- [Pangeo documentation](http://pangeo.io/)
- [A list of existing, "known" catalogs](https://intake-esm.readthedocs.io/en/latest/faq.html#is-there-a-list-of-existing-catalogs)
```
import matplotlib.pyplot as plt
import numpy as np
import pyart
import scipy
radar = pyart.io.read('/home/zsherman/cmac_test_radar.nc')
radar.fields.keys()
max_lat = 37
min_lat = 36
min_lon = -98.3
max_lon = -97
lal = np.arange(min_lat, max_lat, .2)
lol = np.arange(min_lon, max_lon, .2)
display = pyart.graph.RadarMapDisplay(radar)
fig = plt.figure(figsize=[10, 8])
display.plot_ppi_map('reflectivity', sweep=0, resolution='c',
vmin=-8, vmax=64, mask_outside=False,
cmap=pyart.graph.cm.NWSRef,
min_lat=min_lat, min_lon=min_lon,
max_lat=max_lat, max_lon=max_lon,
lat_lines=lal, lon_lines=lol)
# plt.savefig('')
print(radar.fields['gate_id']['notes'])
cat_dict = {}
for pair_str in radar.fields['gate_id']['notes'].split(','):
print(pair_str)
cat_dict.update(
{pair_str.split(':')[1]:int(pair_str.split(':')[0])})
happy_gates = pyart.correct.GateFilter(radar)
happy_gates.exclude_all()
happy_gates.include_equal('gate_id', cat_dict['rain'])
happy_gates.include_equal('gate_id', cat_dict['melting'])
happy_gates.include_equal('gate_id', cat_dict['snow'])
max_lat = 37
min_lat = 36
min_lon = -98.3
max_lon = -97
lal = np.arange(min_lat, max_lat, .2)
lol = np.arange(min_lon, max_lon, .2)
display = pyart.graph.RadarMapDisplay(radar)
fig = plt.figure(figsize=[10, 8])
display.plot_ppi_map('reflectivity', sweep=1, resolution='c',
vmin=-8, vmax=64, mask_outside=False,
cmap=pyart.graph.cm.NWSRef,
min_lat=min_lat, min_lon=min_lon,
max_lat=max_lat, max_lon=max_lon,
lat_lines=lal, lon_lines=lol,
gatefilter=happy_gates)
# plt.savefig('')
grids1 = pyart.map.grid_from_radars(
(radar, ), grid_shape=(46, 251, 251),
grid_limits=((0, 15000.0), (-50000, 50000), (-50000, 50000)),
fields=list(radar.fields.keys()), gridding_algo="map_gates_to_grid",
weighting_function='BARNES', gatefilters=(happy_gates, ),
map_roi=True, toa=17000.0, copy_field_data=True, algorithm='kd_tree',
leafsize=10., roi_func='dist_beam', constant_roi=500.,
z_factor=0.05, xy_factor=0.02, min_radius=500.0,
h_factor=1.0, nb=1.5, bsp=1.0,)
display = pyart.graph.GridMapDisplay(grids1)
fig = plt.figure(figsize=[15, 7])
# Panel sizes.
map_panel_axes = [0.05, 0.05, .4, .80]
x_cut_panel_axes = [0.55, 0.10, .4, .25]
y_cut_panel_axes = [0.55, 0.50, .4, .25]
# Parameters.
level = 3
vmin = -8
vmax = 64
lat = 36.5
lon = -97.7
# Panel 1, basemap, radar reflectivity and NARR overlay.
ax1 = fig.add_axes(map_panel_axes)
display.plot_basemap(lon_lines = np.arange(-104, -93, 2))
display.plot_grid('reflectivity', level=level, vmin=vmin, vmax=vmax,
cmap=pyart.graph.cm.NWSRef)
display.plot_crosshairs(lon=lon, lat=lat)
# Panel 2, longitude slice.
ax2 = fig.add_axes(x_cut_panel_axes)
display.plot_longitude_slice('reflectivity', lon=lon, lat=lat, vmin=vmin, vmax=vmax,
cmap=pyart.graph.cm.NWSRef)
ax2.set_ylim([0, 15])
ax2.set_xlim([-50, 50])
ax2.set_xlabel('Distance from SGP CF (km)')
# Panel 3, latitude slice.
ax3 = fig.add_axes(y_cut_panel_axes)
ax3.set_ylim([0, 15])
ax3.set_xlim([-50, 50])
display.plot_latitude_slice('reflectivity', lon=lon, lat=lat, vmin=vmin, vmax=vmax,
cmap=pyart.graph.cm.NWSRef)
# plt.savefig('')
grids2 = pyart.map.grid_from_radars(
(radar, ), grid_shape=(46, 251, 251),
grid_limits=((0, 15000.0), (-50000, 50000), (-50000, 50000)),
fields=list(radar.fields.keys()), gridding_algo="map_gates_to_grid",
weighting_function='BARNES', gatefilters=(happy_gates, ),
map_roi=True, toa=17000.0, copy_field_data=True, algorithm='kd_tree',
leafsize=10., roi_func='dist_beam', constant_roi=500.,
z_factor=0.05, xy_factor=0.02, min_radius=500.0,
h_factor=1.0, nb=1.5, bsp=1.0,)
display = pyart.graph.GridMapDisplay(grids2)
fig = plt.figure(figsize=[15, 7])
# Panel sizes.
map_panel_axes = [0.05, 0.05, .4, .80]
x_cut_panel_axes = [0.55, 0.10, .4, .25]
y_cut_panel_axes = [0.55, 0.50, .4, .25]
# Parameters.
level = 3
vmin = -8
vmax = 64
lat = 36.5
lon = -97.7
# Panel 1, basemap, radar reflectivity and NARR overlay.
ax1 = fig.add_axes(map_panel_axes)
display.plot_basemap(lon_lines = np.arange(-104, -93, 2))
display.plot_grid('reflectivity', level=level, vmin=vmin, vmax=vmax,
cmap=pyart.graph.cm.NWSRef)
display.plot_crosshairs(lon=lon, lat=lat)
# Panel 2, longitude slice.
ax2 = fig.add_axes(x_cut_panel_axes)
display.plot_longitude_slice('reflectivity', lon=lon, lat=lat, vmin=vmin, vmax=vmax,
cmap=pyart.graph.cm.NWSRef)
ax2.set_ylim([0, 15])
ax2.set_xlim([-50, 50])
ax2.set_xlabel('Distance from SGP CF (km)')
# Panel 3, latitude slice.
ax3 = fig.add_axes(y_cut_panel_axes)
ax3.set_ylim([0, 15])
ax3.set_xlim([-50, 50])
display.plot_latitude_slice('reflectivity', lon=lon, lat=lat, vmin=vmin, vmax=vmax,
cmap=pyart.graph.cm.NWSRef)
# plt.savefig('')
grids3 = pyart.map.grid_from_radars(
(radar, ), grid_shape=(46, 251, 251),
grid_limits=((0, 15000.0), (-50000, 50000), (-50000, 50000)),
fields=list(radar.fields.keys()), gridding_algo="map_gates_to_grid",
weighting_function='BARNES', gatefilters=(happy_gates, ),
map_roi=True, toa=17000.0, copy_field_data=True, algorithm='kd_tree',
leafsize=10., roi_func='dist_beam', constant_roi=500.,
z_factor=0.05, xy_factor=0.02, min_radius=500.0,
h_factor=1.0, nb=1.5, bsp=1.0,)
display = pyart.graph.GridMapDisplay(grids3)
fig = plt.figure(figsize=[15, 7])
# Panel sizes.
map_panel_axes = [0.05, 0.05, .4, .80]
x_cut_panel_axes = [0.55, 0.10, .4, .25]
y_cut_panel_axes = [0.55, 0.50, .4, .25]
# Parameters.
level = 3
vmin = -8
vmax = 64
lat = 36.5
lon = -97.7
# Panel 1, basemap, radar reflectivity and NARR overlay.
ax1 = fig.add_axes(map_panel_axes)
display.plot_basemap(lon_lines = np.arange(-104, -93, 2))
display.plot_grid('reflectivity', level=level, vmin=vmin, vmax=vmax,
cmap=pyart.graph.cm.NWSRef)
display.plot_crosshairs(lon=lon, lat=lat)
# Panel 2, longitude slice.
ax2 = fig.add_axes(x_cut_panel_axes)
display.plot_longitude_slice('reflectivity', lon=lon, lat=lat, vmin=vmin, vmax=vmax,
cmap=pyart.graph.cm.NWSRef)
ax2.set_ylim([0, 15])
ax2.set_xlim([-50, 50])
ax2.set_xlabel('Distance from SGP CF (km)')
# Panel 3, latitude slice.
ax3 = fig.add_axes(y_cut_panel_axes)
ax3.set_ylim([0, 15])
ax3.set_xlim([-50, 50])
display.plot_latitude_slice('reflectivity', lon=lon, lat=lat, vmin=vmin, vmax=vmax,
cmap=pyart.graph.cm.NWSRef)
# plt.savefig('')
grids4 = pyart.map.grid_from_radars(
(radar, ), grid_shape=(46, 251, 251),
grid_limits=((0, 15000.0), (-50000, 50000), (-50000, 50000)),
fields=list(radar.fields.keys()), gridding_algo="map_gates_to_grid",
weighting_function='BARNES', gatefilters=(happy_gates, ),
map_roi=True, toa=17000.0, copy_field_data=True, algorithm='kd_tree',
leafsize=10., roi_func='dist_beam', constant_roi=500.,
z_factor=0.05, xy_factor=0.02, min_radius=500.0,
h_factor=1.0, nb=1.5, bsp=1.0,)
display = pyart.graph.GridMapDisplay(grids4)
fig = plt.figure(figsize=[15, 7])
# Panel sizes.
map_panel_axes = [0.05, 0.05, .4, .80]
x_cut_panel_axes = [0.55, 0.10, .4, .25]
y_cut_panel_axes = [0.55, 0.50, .4, .25]
# Parameters.
level = 3
vmin = -8
vmax = 64
lat = 36.5
lon = -97.7
# Panel 1, basemap, radar reflectivity and NARR overlay.
ax1 = fig.add_axes(map_panel_axes)
display.plot_basemap(lon_lines = np.arange(-104, -93, 2))
display.plot_grid('reflectivity', level=level, vmin=vmin, vmax=vmax,
cmap=pyart.graph.cm.NWSRef)
display.plot_crosshairs(lon=lon, lat=lat)
# Panel 2, longitude slice.
ax2 = fig.add_axes(x_cut_panel_axes)
display.plot_longitude_slice('reflectivity', lon=lon, lat=lat, vmin=vmin, vmax=vmax,
cmap=pyart.graph.cm.NWSRef)
ax2.set_ylim([0, 15])
ax2.set_xlim([-50, 50])
ax2.set_xlabel('Distance from SGP CF (km)')
# Panel 3, latitude slice.
ax3 = fig.add_axes(y_cut_panel_axes)
ax3.set_ylim([0, 15])
ax3.set_xlim([-50, 50])
display.plot_latitude_slice('reflectivity', lon=lon, lat=lat, vmin=vmin, vmax=vmax,
cmap=pyart.graph.cm.NWSRef)
# plt.savefig('')
```
# Session 2: The Vector Space Model
This exercise page explains information retrieval models based on the vector space model. Specifically, it covers how to convert documents into feature vectors, how to compute TF-IDF, and how to rank documents by cosine similarity, with implementation examples for each. The final goal of this second session is to be able to implement document ranking with TF-IDF-weighted feature vectors over a given document corpus.
## Libraries
This session uses the following libraries.
- [numpy, scipy](http://www.numpy.org/)
    + Fundamental libraries for scientific computing in Python.
- [gensim](https://radimrehurek.com/gensim/index.html)
    + A Python library that makes topic modeling (LDA), word2vec, and similar methods easy to use.
- [nltk (natural language toolkit)](http://www.nltk.org/)
    + A Python library for natural language processing. In this exercise it is used for stopwords. It also provides many other NLP methods, including word stemming and tokenization, POS tagging, and dependency parsing.
- [pandas](http://pandas.pydata.org/)
    + A data analysis framework for Python. In this exercise it is used for plotting data.
## Contents of this session
A file named `sample.corpus` is placed at ``h29iro/data/``. It contains three short documents separated by newlines. In this exercise, we build TF-IDF-weighted feature vectors for this file and rank the documents by cosine similarity.
## 1. Loading and tokenizing the documents
First, we load `sample.corpus` and extract the bag-of-words (BoW) representation of each document.
```
import numpy as np
import gensim
from nltk.corpus import stopwords
import pandas as pd
np.set_printoptions(precision=4)
# display up to 3 decimal places
%precision 3
with open("../data/sample.corpus", "r") as f: # load sample.corpus
    text = f.read().strip().split("\n") # read the text of sample.corpus and split it on newlines
text
```
We can see there are three documents. Next, let's split each sentence into tokens (words). For simplicity, we simply split on whitespace.
```
raw_corpus = [d.lower().split() for d in text] # lowercase each sentence and split it into words
print("d1=" , raw_corpus[0])
print("d2=" , raw_corpus[1])
print("d3=" , raw_corpus[2])
```
Each sentence has been converted into a set of words. However, these word sets still contain stopwords such as "i" and "of". Let's remove the stopwords.
Stopword lists of many kinds can be found online. Here, we use nltk's stopwords module.
```
# keep only the words not contained in stopwords.words("english")
corpus = [list(filter(lambda word: word not in stopwords.words("english"), x)) for x in raw_corpus]
print("d1=" , corpus[0])
print("d2=" , corpus[1])
print("d3=" , corpus[2])
```
## 2. Building feature vectors
Next, we build feature vectors for the documents. The procedure from here on is as follows:
1. Build a word -> word-ID dictionary (dictionary) from the document collection (corpus).
2. Using that dictionary, represent each document as a set of (word ID, occurrence count) pairs (id_corpus).
3. From id_corpus, build TF-IDF-weighted feature vectors using TfidfModel.
First, we build the word -> word-ID dictionary (dictionary) from the document collection (corpus).
```
dictionary = gensim.corpora.Dictionary(corpus) # build a word -> ID dictionary from the corpus
dictionary.token2id # contents of the resulting dictionary
```
Using this dictionary, we convert the words in each document to IDs.
```
id_corpus = [dictionary.doc2bow(document) for document in corpus]
id_corpus
```
In the resulting id_corpus, for example, the first document is
```
id_corpus[0]
```
For example, the entry (0, 2) means
```
the word with ID 0 occurs twice
```
That is, we have vectorized the documents using only term frequency. If you want to extract this as a numpy vector, use the corpus2dense method.
```
tf_vectors = gensim.matutils.corpus2dense(id_corpus, len(dictionary)).T
print("d1=", tf_vectors[0])
print("d2=", tf_vectors[1])
print("d3=", tf_vectors[2])
```
The corpus we prepared has a vocabulary of only 8 words, but in practical settings it is easy to imagine these feature vectors becoming extremely sparse.
Now, to obtain TF-IDF-weighted feature vectors from id_corpus, use the models.TfidfModel method.
```
tfidf_model = gensim.models.TfidfModel(id_corpus, normalize=False) # with normalize=True, tf is normalized by document length
tfidf_corpus = tfidf_model[id_corpus] # convert id_corpus into its TF-IDF-weighted form
```
We now have the TF-IDF-weighted feature vectors. For example, let's look at the contents of the feature vector ${\mathbf d}_1$ for the first document $d_1$.
```
tfidf_corpus[0]
```
The TF-IDF values are given as (word ID, weight) pairs. To convert word IDs back to actual words, go through the dictionary.
```
[(dictionary[x[0]], x[1]) for x in tfidf_corpus[0]] # accessing dictionary[token_id] returns the actual word
```
Let's look at the second document $d_2$ in the same way.
```
doc2 = [(dictionary[x[0]], x[1]) for x in tfidf_corpus[1]]
doc2
```
For example, let's verify that the TF-IDF value of `japan` in document $d_{2}$ is actually correct.
$tfidf_{d_2, japan} = tf_{d_2, japan} \log_2 \frac{N}{df_{japan}}$ ,
Here, $tf_{d_2, japan} = 2$, $N = 3$, and $df_{japan} = 1$, so
$tfidf_{d_2, japan} = 2 \log_2 3 \approx 3.170$
which matches the result obtained with gensim (note that gensim's TfidfModel uses the base-2 logarithm by default).
```
import math
2*math.log2(3) # how to compute 2 log2(3)
```
# 3. Cosine similarity
Now, let's rank documents by cosine similarity.
Before measuring the similarity between a query and the documents, let's first compute the cosine similarity between documents. We could compute cosine similarity with gensim, but here we first extract numpy vectors and compute the cosine similarity on those vectors directly.
```
# get the tfidf vector of each document
tfidf_vectors = gensim.matutils.corpus2dense(tfidf_corpus, len(dictionary)).T
print ("d1=", tfidf_vectors[0])
print ("d2=", tfidf_vectors[1])
print ("d3=", tfidf_vectors[2])
# define a function that computes cosine similarity
from scipy.spatial.distance import cosine
def cosine_sim(v1, v2):
    # scipy's cosine is a distance function, not a similarity, so convert it to cosine similarity via 1 - cosine distance
return 1.0 - cosine(v1, v2)
# compute the cosine similarity between each pair of documents
print ("sim(d1, d2)=", cosine_sim(tfidf_vectors[0], tfidf_vectors[1]))
print ("sim(d2, d3)=", cosine_sim(tfidf_vectors[1], tfidf_vectors[2]))
print ("sim(d1, d3)=", cosine_sim(tfidf_vectors[0], tfidf_vectors[2]))
```
Now, let's convert a query into a feature vector and compute the cosine similarity between the query and each document.
```
q = {"kansai", "japan"}
tfidf_q = tfidf_model[dictionary.doc2bow(q)] # convert the query into a tfidf vector
query_vector = gensim.matutils.corpus2dense([tfidf_q], len(dictionary)).T[0] # convert to a numpy vector
print ("q=", query_vector)
print([(dictionary[x[0]], x[1]) for x in tfidf_q])
print ("sim(q, d1) = ", cosine_sim(query_vector, tfidf_vectors[0]))
print ("sim(q, d2) = ", cosine_sim(query_vector, tfidf_vectors[1]))
print ("sim(q, d3) = ", cosine_sim(query_vector, tfidf_vectors[2]))
```
This result shows that for the query q={"kansai", "japan"}, the documents are ranked in the order $d_2, d_3, d_1$.
## 4. Visualizing the vector space
Finally, let's visualize the feature vectors we obtained. The feature vectors themselves are high-dimensional (8-dimensional in this case), so we project them onto a 2-dimensional space using a dimensionality reduction technique. Here we use `LSI` (Latent Semantic Indexing) to map the feature vectors into a 2-dimensional space. LSI may be covered in the lecture, depending on how it progresses.
```
import matplotlib.pylab as plt
%matplotlib inline
# reduce the feature vectors to 2 dimensions with LSI
lsi = gensim.models.LsiModel(tfidf_corpus, id2word=dictionary, num_topics=2)
lsi_corpus = lsi[tfidf_corpus]
lsi_vectors = gensim.matutils.corpus2dense(lsi_corpus, 2).T
print("d1=", lsi_vectors[0])
print("d2=", lsi_vectors[1])
print("d3=", lsi_vectors[2])
query_lsi_corpus = lsi[[tfidf_q]]
query_lsi_vector = gensim.matutils.corpus2dense(query_lsi_corpus, 2).T[0]
print ("q=", query_lsi_vector)
# convert to a DataFrame to plot as a scatter plot
axis_names = ["z1", "z2"]
doc_names = ["d1", "d2", "d3", "q"]
df = pd.DataFrame(np.r_[lsi_vectors, [query_lsi_vector]],
                  columns=axis_names, index=doc_names) # np.r_ concatenates matrices row-wise
df
# plot the scatter plot
fig, ax = plt.subplots()
df.plot.scatter(x="z1", y="z2", ax=ax)
ax.axvline(x=0, lw=2, color='red') # draw lines at the x and y axes
ax.axhline(y=0, lw=2, color='red')
ax.grid(True)
for k, v in df.iterrows():
    ax.annotate(k, xy=(v[0]+0.05,v[1]+0.05),size=15) # attach a label to each data point
```
Looking at this figure, the query $q$ and document $d_2$ indeed point in almost the same direction (i.e., their cosine similarity is close to 1), while the angle between $q$ and $d_1$ is nearly a right angle (i.e., their cosine similarity is close to 0).
----
# Exercise 1: The Vector Space Model
## Required task (1): Search over a given corpus
Choose one or more of the corpora below and implement search based on the vector space model. Show search results for at least three different queries.
1. 83 documents about sightseeing in Kyoto (h29iro/data/kyoto_results_100.json)
2. A corpus you prepare yourself. It must contain at least 100 documents; more is fine.
3. Wikipedia ([reference: gensim Tutorial](https://radimrehurek.com/gensim/wiki.html)). Be warned that building the model reportedly takes an extremely long time.
- Showing about 5-10 search results per query is sufficient.
```
# Corpus 1 is stored in JSON format.
import json
with open("../data/kyoto_results_100.json", "r") as f:
docs = json.load(f)
print("Num of docs = ", len(docs))
docs[0]
# `bow` holds the token sequence produced by morphological analysis, separated by spaces.
# You can use it to build the feature vectors.
docs[0]["bow"]
```
## Optional task (a): Okapi BM25
Rank the documents in (1) above using Okapi BM25 and compare the results with those of (1).
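As a reference point, a minimal Okapi BM25 scorer can be written in a few lines. This is a sketch with the conventional defaults k1=1.2 and b=0.75 (not values prescribed by the assignment); `corpus` is assumed to be a list of token lists like the one built in Section 1.

```python
import math

def bm25_score(query_terms, doc_index, corpus, k1=1.2, b=0.75):
    """Score one document of a tokenized corpus against a query with Okapi BM25."""
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N  # average document length
    doc = corpus[doc_index]
    score = 0.0
    for term in query_terms:
        df = sum(1 for d in corpus if term in d)  # document frequency
        if df == 0:
            continue
        idf = math.log((N - df + 0.5) / (df + 0.5) + 1)  # non-negative IDF variant
        tf = doc.count(term)
        score += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(doc) / avgdl))
    return score
```

Ranking then amounts to sorting document indices by `bm25_score` in descending order.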
## Optional task (b): Relevance feedback
Modify queries via relevance feedback and analyze how the search results change. Also, by visualizing the corpus and the queries, analyze geometrically how the modified query is influenced by the feature vectors of the relevant and non-relevant documents.
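One standard way to implement this query modification is the Rocchio algorithm, which moves the query vector toward the centroid of the relevant documents and away from the centroid of the non-relevant ones. A minimal sketch follows; the alpha, beta, gamma defaults are conventional choices, not values specified by the assignment.

```python
import numpy as np

def rocchio(query_vec, rel_vecs, nonrel_vecs, alpha=1.0, beta=0.75, gamma=0.15):
    """Rocchio feedback: q' = alpha*q + beta*mean(relevant) - gamma*mean(non-relevant)."""
    q = alpha * np.asarray(query_vec, dtype=float)
    if len(rel_vecs):
        q = q + beta * np.mean(rel_vecs, axis=0)
    if len(nonrel_vecs):
        q = q - gamma * np.mean(nonrel_vecs, axis=0)
    return q
```

The modified query can then be ranked against the documents with the same `cosine_sim` function used above.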
# How to submit
Submit the ipython notebook page (.ipynb file) and its HTML version in one of the following ways:
1. Send them to Yamamoto as email attachments.
    - Address: tyamamot at dl.kuis.kyoto-u.ac.jp
2. Upload them to your own GitHub or GitHub Gist and send the URL to Yamamoto. In this case there is no need to prepare an HTML version.
3. Any other way that lets Yamamoto actually inspect the .ipynb file.
# Deadline
- November 30 (Thu), 2017, 23:59
- Individual requests concerning the deadline ``will be accepted``.
```
!pip install roboschool==1.0.48 gym==0.15.4
import tensorflow as tf
import numpy as np
import gym
import roboschool
class TD3PG:
def __init__(self,env,memory):
self.env=env
self.state_dimension=env.observation_space.shape
self.action_dimension=env.action_space.shape[0]
self.min_action=env.action_space.low[0]
self.max_action=env.action_space.high[0]
self.Train_actor=None
self.Target_actor=None
self.Train_critic_1=None
self.Target_critic_1=None
self.Train_critic_2=None
self.Target_critic_2=None
self.memory=memory
self.batch_size=128
self.collect_initial_=10000
self.cr_1_opt=tf.keras.optimizers.Adam(0.001)
self.cr_2_opt=tf.keras.optimizers.Adam(0.001)
self.ac_opt=tf.keras.optimizers.Adam(0.001)
self.steps_to_stop_exp=2000
self.steps_to_train=1000000
self.update_actor_step=2
self.tau=0.005
def get_critic(self):
input_state=tf.keras.layers.Input(self.state_dimension)
input_action=tf.keras.layers.Input(self.action_dimension)
layer_1=tf.keras.layers.concatenate([input_state,input_action],axis=-1)
layer_2=tf.keras.layers.Dense(400,activation="relu")(layer_1)
layer_3=tf.keras.layers.Dense(300,activation="relu")(layer_2)
out_Q=tf.keras.layers.Dense(1,activation=None)(layer_3)
model=tf.keras.Model(inputs=[input_state,input_action],outputs=[out_Q])
return model
def get_actor(self):
input=tf.keras.layers.Input(self.state_dimension)
layer_1=tf.keras.layers.Dense(400,activation="relu")(input)
layer_2=tf.keras.layers.Dense(300,activation="relu")(layer_1)
out=tf.keras.layers.Dense(self.action_dimension,activation="tanh")(layer_2)
model=tf.keras.Model(inputs=[input],outputs=[out])
return model
def get_action(self,actor,s,sigma=0,noise=False):
mu=actor(s)
Noise_sigma=sigma
if noise:
action=mu+tf.random.normal(shape=[self.action_dimension],mean=0,stddev=Noise_sigma)
else:
action=mu
action=self.max_action*(tf.clip_by_value(action,self.min_action,self.max_action)) ## AS tanh is used in activation
return action
def get_Q_value(self,critic,s,a):
q=critic([s,a])
return q
def initialize_buffer(self):
curr_state=self.env.reset()
for _ in range(self.collect_initial_):
action=self.env.action_space.sample()
next_state,reward,done,_=self.env.step(action)
self.memory.push(curr_state,action,reward,next_state,not done)
if done:
curr_state=self.env.reset()
else:
curr_state=next_state
def update_networks(self,target_net,train_net,tau):
weights_tar, weights_tra = target_net.get_weights(), train_net.get_weights()
for i in range(len(weights_tar)):
weights_tar[i] = tau*weights_tra[i] + (1-tau)*weights_tar[i]
target_net.set_weights(weights_tar)
def critic_pred(self,critic,states):
c=0.5
mu=self.Target_actor(states)
noise_action=mu+tf.clip_by_value(tf.random.normal(shape=[self.action_dimension],mean=0,stddev=0.2),-c,c)
predicted_actions=self.max_action*tf.clip_by_value(noise_action,self.min_action,self.max_action)
next_state_value=self.get_Q_value(critic,states,predicted_actions)
return next_state_value
def loss_critics(self,states, actions, rewards, next_states, not_dones, gamma=0.99):
next_value_1=tf.squeeze(self.critic_pred(self.Target_critic_1,next_states))
next_value_2=tf.squeeze(self.critic_pred(self.Target_critic_2,next_states))
pred_value_1=tf.squeeze(self.get_Q_value(self.Train_critic_1,np.array(states,dtype="float32"),np.array(actions,dtype="float32")))
pred_value_2=tf.squeeze(self.get_Q_value(self.Train_critic_2,np.array(states,dtype="float32"),np.array(actions,dtype="float32")))
next_value=tf.math.minimum(next_value_1,next_value_2)
target_value= rewards + gamma*next_value*not_dones
critic_loss_1=tf.reduce_mean(tf.math.squared_difference(target_value,pred_value_1))
critic_loss_2=tf.reduce_mean(tf.math.squared_difference(target_value,pred_value_2))
return critic_loss_1,critic_loss_2
def train(self):
self.Train_actor=self.get_actor()
self.Target_actor=self.get_actor()
self.Target_actor.set_weights(self.Train_actor.get_weights())
self.Train_critic_1=self.get_critic()
self.Target_critic_1=self.get_critic()
self.Target_critic_1.set_weights(self.Train_critic_1.get_weights())
self.Train_critic_2=self.get_critic()
self.Target_critic_2=self.get_critic()
self.Target_critic_2.set_weights(self.Train_critic_2.get_weights())
self.initialize_buffer()
curr_state=self.env.reset()
overall_Reward=0
episode_reward=0
no_of_comp=0
for i in range(self.steps_to_train):
if i<self.steps_to_stop_exp:
action=self.get_action(self.Train_actor,curr_state.reshape(1,-1),sigma=0.1,noise=True)
else:
action=self.get_action(self.Train_actor,curr_state.reshape(1,-1))
next_state,reward,done,_=self.env.step(action.numpy()[0])
episode_reward+=reward
self.memory.push(curr_state,action,reward,next_state,not done)
if done:
curr_state=self.env.reset()
overall_Reward+=episode_reward
if no_of_comp%20==0:
print('On step {}, no. of complete episodes {} average episode reward {}'.format(i,no_of_comp,overall_Reward/20))
overall_Reward=0
episode_reward=0 ### Updating the reward to 0
no_of_comp+=1
else:
curr_state=next_state
states, actions, rewards, next_states, not_dones = self.memory.sample(self.batch_size)
with tf.GradientTape() as t1, tf.GradientTape() as t2:
critic_loss_1,critic_loss_2=self.loss_critics(states, actions, rewards, next_states, not_dones)
grad_crit_1=t1.gradient(critic_loss_1,self.Train_critic_1.trainable_variables)
grad_crit_2=t2.gradient(critic_loss_2,self.Train_critic_2.trainable_variables)
self.cr_1_opt.apply_gradients(zip(grad_crit_1,self.Train_critic_1.trainable_variables))
self.cr_2_opt.apply_gradients(zip(grad_crit_2,self.Train_critic_2.trainable_variables))
if i % self.update_actor_step==0:
with tf.GradientTape() as t:
new_actions=self.Train_actor(states)
act_loss=-1*tf.reduce_mean(self.Train_critic_1([states,new_actions]))
grad_act=t.gradient(act_loss,self.Train_actor.trainable_variables)
self.ac_opt.apply_gradients(zip(grad_act,self.Train_actor.trainable_variables))
self.update_networks(self.Target_actor,self.Train_actor,self.tau)
self.update_networks(self.Target_critic_1,self.Train_critic_1,self.tau)
self.update_networks(self.Target_critic_2,self.Train_critic_2,self.tau)
env = gym.make('RoboschoolInvertedPendulum-v1')
from memory_module import replayBuffer
memory=replayBuffer(100000)
agent=TD3PG(env,memory)
agent.train()
```
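The final cell imports `replayBuffer` from a local `memory_module` that is not included in the notebook. A minimal sketch compatible with the `push(...)` and `sample(batch_size)` calls used above (a hypothetical stand-in, not the original module) could look like:

```python
import random
from collections import deque
import numpy as np

class replayBuffer:
    """FIFO experience replay: stores transitions, samples random minibatches."""
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)  # oldest transitions are evicted first

    def push(self, state, action, reward, next_state, not_done):
        self.buffer.append((state, action, reward, next_state, not_done))

    def sample(self, batch_size):
        batch = random.sample(self.buffer, batch_size)
        states, actions, rewards, next_states, not_dones = zip(*batch)
        return (np.array(states, dtype=np.float32),
                np.array(actions, dtype=np.float32),
                np.array(rewards, dtype=np.float32),
                np.array(next_states, dtype=np.float32),
                np.array(not_dones, dtype=np.float32))
```

The `deque` with `maxlen` gives the usual fixed-capacity ring-buffer behavior without manual index bookkeeping.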
# Exploring Text Data (2)
## PyConUK talk abstract
Data set of abstracts for the PyConUK 2016 talks (retrieved 14th Sept 2016 from https://github.com/PyconUK/2016.pyconuk.org)
The data can be found in `../data/pyconuk2016/{keynotes,workshops,talks}/*`
There are 101 abstracts
## Load the data
Firstly, we load all the data into the `documents` dictionary
We also merge the documents into one big string, `corpus_all_in_one`, for convenience
```
import os
data_dir = os.path.join('..', 'data', 'pyconuk2016')
talk_types = ['keynotes', 'workshops', 'talks']
all_talk_files = [os.path.join(data_dir, talk_type, fname)
for talk_type in talk_types
for fname in os.listdir(os.path.join(data_dir, talk_type))]
documents = {}
for talk_fname in all_talk_files:
bname = os.path.basename(talk_fname)
talk_title = os.path.splitext(bname)[0]
with open(talk_fname, 'r') as f:
content = f.read()
documents[talk_title] = content
corpus_all_in_one = ' '.join([doc for doc in documents.values()])
print("Number of talks: {}".format(len(all_talk_files)))
print("Corpus size (char): {}".format(len(corpus_all_in_one)))
%matplotlib inline
import matplotlib.pyplot as plt
from wordcloud import WordCloud
cloud = WordCloud(max_words=100)
cloud.generate_from_text(corpus_all_in_one)
plt.figure(figsize=(12,8))
plt.imshow(cloud)
plt.axis('off')
plt.show()
all_talk_files[0]
%cat {all_talk_files[0]}
# For a list of magics type:
# %lsmagic
documents = {}
for talk_fname in all_talk_files:
bname = os.path.basename(talk_fname)
talk_title = os.path.splitext(bname)[0]
with open(talk_fname, 'r') as f:
content = ""
for line in f:
if line.startswith('title:'):
line = line[6:]
if line.startswith('subtitle:') \
or line.startswith('speaker:') \
or line.startswith('---'):
continue
content += line
documents[talk_title] = content
corpus_all_in_one = ' '.join([doc for doc in documents.values()])
%matplotlib inline
import matplotlib.pyplot as plt
from wordcloud import WordCloud
cloud = WordCloud(max_words=100)
cloud.generate_from_text(corpus_all_in_one)
plt.figure(figsize=(12,8))
plt.imshow(cloud)
plt.axis('off')
plt.show()
from collections import Counter
import string
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
stop_list = stopwords.words('english') + list(string.punctuation)
document_frequency = Counter()
for talk_id, content in documents.items():
try: # py3
tokens = word_tokenize(content)
except UnicodeDecodeError: # py27
tokens = word_tokenize(content.decode('utf-8'))
unique_tokens = [token.lower() for token in set(tokens)
if token.lower() not in stop_list]
document_frequency.update(unique_tokens)
for word, freq in document_frequency.most_common(20):
print("{}\t{}".format(word, freq))
# print(stop_list)
for item in ['will', "'ll", 'll']:
print("{} in stop_list == {}".format(item, item in stop_list))
from nltk import ngrams
try:
all_tokens = [t for t in word_tokenize(corpus_all_in_one)]
except UnicodeDecodeError:
all_tokens = [t for t in word_tokenize(corpus_all_in_one.decode('utf-8'))]
bigrams = ngrams(all_tokens, 2)
trigrams = ngrams(all_tokens, 3)
bi_count = Counter(bigrams)
tri_count = Counter(trigrams)
for phrase, freq in bi_count.most_common(20):
print("{}\t{}".format(phrase, freq))
for phrase, freq in tri_count.most_common(20):
print("{}\t{}".format(phrase, freq))
```
## Term Frequency (TF)
TF provides a weight for a term within a document, based on how often the term occurs in it. Two common variants are the raw count and the length-normalized count:
TF(term, doc) = count(term in doc)
TF(term, doc) = count(term in doc) / len(doc)
## Inverse Document Frequency (IDF)
IDF provides a weight for a term across the collection, based on the document frequency of that term. Two common variants are the plain ratio and a smoothed version:
IDF(term) = log( N / DF(term) )
IDF(term) = log( 1 + N / DF(term) )
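Putting the two together, a minimal TF-IDF weight can be computed in plain Python. This sketch uses the length-normalized TF and the smoothed IDF variants from the formulas above, over a toy tokenized corpus:

```python
import math

def tf_idf(term, doc, corpus):
    """TF-IDF weight of a term in one document of a tokenized corpus."""
    tf = doc.count(term) / len(doc)                            # normalized TF
    df = sum(1 for d in corpus if term in d)                   # document frequency
    idf = math.log(1 + len(corpus) / df) if df else 0.0        # smoothed IDF
    return tf * idf

corpus = [["python", "text", "mining"],
          ["python", "web"],
          ["text", "classification", "text"]]
print(tf_idf("python", corpus[0], corpus))
```

Terms that occur in fewer documents get a larger IDF, so rarer terms dominate the weighting.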
## Introducing sklearn
So far, we have used some homemade implementation to count words
What if we need something more involved?
sklearn (http://scikit-learn.org/) is one of the main libraries for Machine Learning in Python
With an easy-to-use interface, it provides support for a variety of Machine Learning models
We're going to use it to tackle a Text Classification problem
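As a taste of the sklearn interface, TF-IDF vectorization takes only a few lines. This is a sketch on toy sentences, not the PyConUK data:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["the quick brown fox", "the lazy dog", "the quick dog"]
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(docs)   # sparse (n_documents, n_terms) matrix
print(X.shape)
print(sorted(vectorizer.vocabulary_))
```

The same `fit_transform` pattern carries over to the classifiers used later for text classification.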
```
from random import randint
winner = randint(1, 36)
print("And the winner is ... {}".format(winner))
from nltk import pos_tag
from nltk.tokenize import word_tokenize
s = "The quick brown fox jumped over the dog"
tokens = word_tokenize(s)
tokens
pos_tag(tokens)
```
# Explicit Feedback Neural Recommender Systems
Goals:
- Understand recommender data
- Build different models architectures using Keras
- Retrieve Embeddings and visualize them
- Add metadata information as input to the model
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import os.path as op
from zipfile import ZipFile
try:
from urllib.request import urlretrieve
except ImportError: # Python 2 compat
from urllib import urlretrieve
ML_100K_URL = "http://files.grouplens.org/datasets/movielens/ml-100k.zip"
ML_100K_FILENAME = ML_100K_URL.rsplit('/', 1)[1]
ML_100K_FOLDER = 'ml-100k'
if not op.exists(ML_100K_FILENAME):
print('Downloading %s to %s...' % (ML_100K_URL, ML_100K_FILENAME))
urlretrieve(ML_100K_URL, ML_100K_FILENAME)
if not op.exists(ML_100K_FOLDER):
print('Extracting %s to %s...' % (ML_100K_FILENAME, ML_100K_FOLDER))
ZipFile(ML_100K_FILENAME).extractall('.')
```
### Ratings file
Each line contains a rated movie:
- a user
- an item
- a rating from 1 to 5 stars
```
import pandas as pd
raw_ratings = pd.read_csv(op.join(ML_100K_FOLDER, 'u.data'), sep='\t',
names=["user_id", "item_id", "rating", "timestamp"])
raw_ratings.head()
```
### Item metadata file
The item metadata file contains metadata like the name of the movie or the date it was released. The movies file contains columns indicating the movie's genres. Let's only load the first five columns of the file with `usecols`.
```
m_cols = ['item_id', 'title', 'release_date', 'video_release_date', 'imdb_url']
items = pd.read_csv(op.join(ML_100K_FOLDER, 'u.item'), sep='|',
names=m_cols, usecols=range(5), encoding='latin-1')
items.head()
```
Let's write a bit of Python preprocessing code to extract the release year as an integer value:
```
def extract_year(release_date):
if hasattr(release_date, 'split'):
components = release_date.split('-')
if len(components) == 3:
return int(components[2])
# Missing value marker
return 1920
items['release_year'] = items['release_date'].map(extract_year)
items.hist('release_year', bins=50);
```
Enrich the raw ratings data with the collected items metadata:
```
all_ratings = pd.merge(items, raw_ratings)
all_ratings.head()
```
### Data preprocessing
To understand the distribution of the data well, we compute the following statistics:
- the number of users
- the number of items
- the rating distribution
- the popularity of each movie
```
max_user_id = all_ratings['user_id'].max()
max_user_id
max_item_id = all_ratings['item_id'].max()
max_item_id
all_ratings['rating'].describe()
```
Let's do a bit more pandas magic to compute the popularity of each movie (its number of ratings):
```
popularity = all_ratings.groupby('item_id').size().reset_index(name='popularity')
items = pd.merge(popularity, items)
items.nlargest(10, 'popularity')
```
Enrich the ratings data with the popularity as an additional metadata.
```
all_ratings = pd.merge(popularity, all_ratings)
all_ratings.head()
```
Later in the analysis we will assume that this popularity does not come from the ratings themselves but from external metadata, e.g. box office numbers in the month after the release in movie theaters.
Let's split the enriched data in a train / test split to make it possible to do predictive modeling:
```
from sklearn.model_selection import train_test_split
ratings_train, ratings_test = train_test_split(
all_ratings, test_size=0.2, random_state=0)
user_id_train = np.array(ratings_train['user_id'])
item_id_train = np.array(ratings_train['item_id'])
rating_train = np.array(ratings_train['rating'])
user_id_test = np.array(ratings_test['user_id'])
item_id_test = np.array(ratings_test['item_id'])
rating_test = np.array(ratings_test['rating'])
```
# Explicit feedback: supervised ratings prediction
For each (user, item) pair, we try to predict the rating the user would give to the item.
This is the classical setup for building recommender systems from offline data with explicit supervision signal.
## Predictive ratings as a regression problem
The following code implements the following architecture:
<img src="images/rec_archi_1.svg" style="width: 600px;" />
```
from tensorflow.keras.layers import Embedding, Flatten, Dense, Dropout
from tensorflow.keras.layers import Dot
from tensorflow.keras.models import Model
# For each sample we input the integer identifiers
# of a single user and a single item
class RegressionModel(Model):
def __init__(self, embedding_size, max_user_id, max_item_id):
super().__init__()
self.user_embedding = Embedding(output_dim=embedding_size, input_dim=max_user_id + 1,
input_length=1, name='user_embedding')
self.item_embedding = Embedding(output_dim=embedding_size, input_dim=max_item_id + 1,
input_length=1, name='item_embedding')
# The following two layers don't have parameters.
self.flatten = Flatten()
self.dot = Dot(axes=1)
def call(self, inputs):
user_inputs = inputs[0]
item_inputs = inputs[1]
user_vecs = self.flatten(self.user_embedding(user_inputs))
item_vecs = self.flatten(self.item_embedding(item_inputs))
y = self.dot([user_vecs, item_vecs])
return y
model = RegressionModel(30, max_user_id, max_item_id)
model.compile(optimizer='adam', loss='mae')
# Useful for debugging the output shape of the model
initial_train_preds = model.predict([user_id_train, item_id_train])
initial_train_preds.shape
```
### Model error
Using `initial_train_preds`, compute the model errors:
- mean absolute error
- mean squared error
Converting a pandas Series to numpy array is usually implicit, but you may use `rating_train.values` to do so explicitly. Be sure to monitor the shapes of each object you deal with by using `object.shape`.
```
# %load solutions/compute_errors.py
squared_differences = np.square(initial_train_preds[:,0] - rating_train)
absolute_differences = np.abs(initial_train_preds[:,0] - rating_train)
print("Random init MSE: %0.3f" % np.mean(squared_differences))
print("Random init MAE: %0.3f" % np.mean(absolute_differences))
# You may also use sklearn metrics to do so using scikit-learn:
from sklearn.metrics import mean_squared_error, mean_absolute_error
print("Random init MSE: %0.3f" % mean_squared_error(initial_train_preds, rating_train))
print("Random init MAE: %0.3f" % mean_absolute_error(initial_train_preds, rating_train))
```
### Monitoring runs
Keras makes it possible to monitor various variables during training.
The `history.history` dictionary returned by `model.fit` contains the training
loss `'loss'` and the validation loss `'val_loss'` after each epoch.
```
%%time
# Training the model
history = model.fit([user_id_train, item_id_train], rating_train,
batch_size=64, epochs=6, validation_split=0.1,
shuffle=True)
plt.plot(history.history['loss'], label='train')
plt.plot(history.history['val_loss'], label='validation')
plt.ylim(0, 2)
plt.legend(loc='best')
plt.title('Loss');
```
**Questions**:
- Why is the train loss higher than the first loss in the first few epochs?
- Why is Keras not computing the train loss on the full training set at the end of each epoch as it does on the validation set?
Now that the model is trained, the model MSE and MAE look nicer:
```
from sklearn.metrics import mean_squared_error
from sklearn.metrics import mean_absolute_error
test_preds = model.predict([user_id_test, item_id_test])
print("Final test MSE: %0.3f" % mean_squared_error(test_preds, rating_test))
print("Final test MAE: %0.3f" % mean_absolute_error(test_preds, rating_test))
train_preds = model.predict([user_id_train, item_id_train])
print("Final train MSE: %0.3f" % mean_squared_error(train_preds, rating_train))
print("Final train MAE: %0.3f" % mean_absolute_error(train_preds, rating_train))
```
## A Deep recommender model
Using a framework similar to the previous one, we build the deep model described in the course (with only two fully connected layers):
<img src="images/rec_archi_2.svg" style="width: 600px;" />
To build this model we will need a new kind of layer:
```
from tensorflow.keras.layers import Concatenate
```
### Exercise
- The following code has **4 errors** that prevent it from working correctly. **Correct them and explain** why they are critical.
```
# For each sample we input the integer identifiers
# of a single user and a single item
class DeepRegressionModel(Model):
def __init__(self, embedding_size, max_user_id, max_item_id):
super().__init__()
self.user_embedding = Embedding(output_dim=embedding_size, input_dim=max_user_id + 1,
input_length=1, name='user_embedding')
self.item_embedding = Embedding(output_dim=embedding_size, input_dim=max_item_id + 1,
input_length=1, name='item_embedding')
# The following two layers don't have parameters.
self.flatten = Flatten()
self.concat = Concatenate()
self.dropout = Dropout(0.99)
self.dense1 = Dense(64, activation="relu")
self.dense2 = Dense(2, activation="tanh")
def call(self, inputs):
user_inputs = inputs[0]
item_inputs = inputs[1]
user_vecs = self.flatten(self.user_embedding(user_inputs))
item_vecs = self.flatten(self.item_embedding(item_inputs))
input_vecs = self.concat([user_vecs, item_vecs])
y = self.dropout(input_vecs)
y = self.dense1(y)
y = self.dense2(y)
return y
model = DeepRegressionModel(30, max_user_id, max_item_id)
model.compile(optimizer='adam', loss='binary_crossentropy')
initial_train_preds = model.predict([user_id_train, item_id_train])
# %load solutions/deep_explicit_feedback_recsys.py
# For each sample we input the integer identifiers
# of a single user and a single item
class DeepRegressionModel(Model):
def __init__(self, embedding_size, max_user_id, max_item_id):
super().__init__()
self.user_embedding = Embedding(output_dim=embedding_size, input_dim=max_user_id + 1,
input_length=1, name='user_embedding')
self.item_embedding = Embedding(output_dim=embedding_size, input_dim=max_item_id + 1,
input_length=1, name='item_embedding')
# The following two layers don't have parameters.
self.flatten = Flatten()
self.concat = Concatenate()
## Error 1: Dropout was too high, preventing any training
self.dropout = Dropout(0.5)
self.dense1 = Dense(64, activation="relu")
## Error 2: output dimension was 2 where we predict only 1-d rating
## Error 3: tanh activation squashes the outputs between -1 and 1
## when we want to predict values between 1 and 5
self.dense2 = Dense(1)
def call(self, inputs):
user_inputs = inputs[0]
item_inputs = inputs[1]
user_vecs = self.flatten(self.user_embedding(user_inputs))
item_vecs = self.flatten(self.item_embedding(item_inputs))
input_vecs = self.concat([user_vecs, item_vecs])
y = self.dropout(input_vecs)
y = self.dense1(y)
y = self.dense2(y)
return y
model = DeepRegressionModel(30, max_user_id, max_item_id)
## Error 4: A binary crossentropy loss is only useful for binary
## classification, while we are in regression (use mse or mae)
model.compile(optimizer='adam', loss='mae')
initial_train_preds = model.predict([user_id_train, item_id_train])
%%time
history = model.fit([user_id_train, item_id_train], rating_train,
batch_size=64, epochs=5, validation_split=0.1,
shuffle=True)
plt.plot(history.history['loss'], label='train')
plt.plot(history.history['val_loss'], label='validation')
plt.ylim(0, 2)
plt.legend(loc='best')
plt.title('Loss');
train_preds = model.predict([user_id_train, item_id_train])
print("Final train MSE: %0.3f" % mean_squared_error(train_preds, rating_train))
print("Final train MAE: %0.3f" % mean_absolute_error(train_preds, rating_train))
test_preds = model.predict([user_id_test, item_id_test])
print("Final test MSE: %0.3f" % mean_squared_error(test_preds, rating_test))
print("Final test MAE: %0.3f" % mean_absolute_error(test_preds, rating_test))
```
### Home assignment:
- Add another layer, compare train/test error
- What do you notice?
- Try adding more dropout and modifying layer sizes: should you increase or decrease the number of parameters?
### Model Embeddings
- It is possible to retrieve the embeddings by simply using the Keras function `model.get_weights`, which returns all the model's learnable parameters.
- The weights are returned in the same order as they were built in the model.
- What is the total number of parameters?
```
# weights and shape
weights = model.get_weights()
[w.shape for w in weights]
# Solution:
model.summary()
user_embeddings = weights[0]
item_embeddings = weights[1]
print("First item name from metadata:", items["title"][1])
print("Embedding vector for the first item:")
print(item_embeddings[1])
print("shape:", item_embeddings[1].shape)
```
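For the parameter-count question, only the two embedding tables and the two dense layers contribute (Flatten, Concatenate and Dropout have no parameters). A back-of-the-envelope check, assuming the MovieLens-100k sizes (943 users, 1682 items) and the deep model above; substitute your own `max_user_id` and `max_item_id`:

```python
embedding_size = 30
max_user_id, max_item_id = 943, 1682  # hypothetical MovieLens-100k sizes

user_emb = (max_user_id + 1) * embedding_size
item_emb = (max_item_id + 1) * embedding_size
# Dense(64) applied to the concatenated 60-d vector, then Dense(1)
dense1 = (2 * embedding_size) * 64 + 64   # weights + biases
dense2 = 64 * 1 + 1

total = user_emb + item_emb + dense1 + dense2
print(total)
```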
### Finding most similar items
Finding the k most similar items to a point in embedding space:
- Write a numpy function that computes the cosine similarity between two points in embedding space
- Write a function that computes the euclidean distance between a point in embedding space and all other points
- Write a `most_similar` function, which returns the k item names with the lowest euclidean distance
- Try with a movie index, such as 181 (Return of the Jedi). What do you observe? Don't expect miracles on such a small training set.
Notes:
- you may use `np.linalg.norm` to compute the norm of a vector, and you may specify the `axis=` argument
- the numpy function `np.argsort(...)` computes the sorted indices of a vector
- `items["title"][idxs]` returns the titles of the items indexed by the array `idxs`
```
# %load solutions/similarity.py
EPSILON = 1e-07
def cosine(x, y):
dot_pdt = np.dot(x, y.T)
norms = np.linalg.norm(x) * np.linalg.norm(y)
return dot_pdt / (norms + EPSILON)
# Computes cosine similarities between x and all item embeddings
def cosine_similarities(x):
dot_pdts = np.dot(item_embeddings, x)
norms = np.linalg.norm(x) * np.linalg.norm(item_embeddings, axis=1)
return dot_pdts / (norms + EPSILON)
# Computes euclidean distances between x and all item embeddings
def euclidean_distances(x):
return np.linalg.norm(item_embeddings - x, axis=1)
# Computes top_n most similar items to an idx,
def most_similar(idx, top_n=10, mode='euclidean'):
sorted_indexes=0
if mode == 'euclidean':
dists = euclidean_distances(item_embeddings[idx])
sorted_indexes = np.argsort(dists)
idxs = sorted_indexes[0:top_n]
return list(zip(items["title"][idxs], dists[idxs]))
else:
sims = cosine_similarities(item_embeddings[idx])
# [::-1] makes it possible to reverse the order of a numpy
# array, this is required because most similar items have
# a larger cosine similarity value
sorted_indexes = np.argsort(sims)[::-1]
idxs = sorted_indexes[0:top_n]
return list(zip(items["title"][idxs], sims[idxs]))
# sanity checks:
print("cosine of item 1 and item 1: %0.3f"
% cosine(item_embeddings[1], item_embeddings[1]))
euc_dists = euclidean_distances(item_embeddings[1])
print(euc_dists.shape)
print(euc_dists[1:5])
print()
# Test on movie 181: Return of the Jedi
print("Items closest to 'Return of the Jedi':")
for title, dist in most_similar(181, mode="euclidean"):
print(title, dist)
# We observe that the embedding is poor at representing similarities
# between movies, as most distance/similarities are very small/big
# One may notice a few clusters though
# it's interesting to plot the following distributions
# plt.hist(euc_dists)
# The reason for that is that the number of ratings is low and the embedding
# does not automatically capture semantic relationships in that context.
# Better representations arise with higher number of ratings, and less overfitting
# in models or maybe better loss function, such as those based on implicit
# feedback.
```
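The argsort logic can be sanity-checked on a tiny synthetic embedding matrix, where the nearest neighbour of any point should be the point itself (distance 0):

```python
import numpy as np

rng = np.random.RandomState(0)
toy_embeddings = rng.rand(5, 3)  # 5 items, 3-d embeddings

def toy_euclidean_distances(x):
    # distance from x to every row of the toy embedding matrix
    return np.linalg.norm(toy_embeddings - x, axis=1)

dists = toy_euclidean_distances(toy_embeddings[2])
ranking = np.argsort(dists)
print(ranking[0])  # item 2 is closest to itself
```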
### Visualizing embeddings using TSNE
- we use scikit-learn to visualize item embeddings
- Try different perplexities, and visualize user embeddings as well
- What can you conclude?
```
from sklearn.manifold import TSNE
item_tsne = TSNE(perplexity=30).fit_transform(item_embeddings)
import matplotlib.pyplot as plt
plt.figure(figsize=(10, 10))
plt.scatter(item_tsne[:, 0], item_tsne[:, 1]);
plt.xticks(()); plt.yticks(());
plt.show()
```
Alternatively with [Uniform Manifold Approximation and Projection](https://github.com/lmcinnes/umap):
```
!pip install umap-learn
import umap
item_umap = umap.UMAP().fit_transform(item_embeddings)
plt.figure(figsize=(10, 10))
plt.scatter(item_umap[:, 0], item_umap[:, 1]);
plt.xticks(()); plt.yticks(());
plt.show()
```
## Using item metadata in the model
Using the same framework as previously, we will build another deep model that can also leverage additional metadata. The resulting system is therefore a **Hybrid Recommender System** that performs both **Collaborative Filtering** and **Content-based recommendations**.
<img src="images/rec_archi_3.svg" style="width: 600px;" />
```
from sklearn.preprocessing import QuantileTransformer
meta_columns = ['popularity', 'release_year']
scaler = QuantileTransformer()
item_meta_train = scaler.fit_transform(ratings_train[meta_columns])
item_meta_test = scaler.transform(ratings_test[meta_columns])
class HybridModel(Model):
def __init__(self, embedding_size, max_user_id, max_item_id):
super().__init__()
self.user_embedding = Embedding(output_dim=embedding_size, input_dim=max_user_id + 1,
input_length=1, name='user_embedding')
self.item_embedding = Embedding(output_dim=embedding_size, input_dim=max_item_id + 1,
input_length=1, name='item_embedding')
# The following two layers don't have parameters.
self.flatten = Flatten()
self.concat = Concatenate()
self.dense1 = Dense(64, activation="relu")
self.dropout = Dropout(0.5)
self.dense2 = Dense(32, activation='relu')
        self.dense3 = Dense(1)
def call(self, inputs):
user_inputs = inputs[0]
item_inputs = inputs[1]
meta_inputs = inputs[2]
user_vecs = self.flatten(self.user_embedding(user_inputs))
item_vecs = self.flatten(self.item_embedding(item_inputs))
input_vecs = self.concat([user_vecs, item_vecs, meta_inputs])
y = self.dense1(input_vecs)
y = self.dropout(y)
y = self.dense2(y)
y = self.dense3(y)
return y
model = HybridModel(30, max_user_id, max_item_id)
model.compile(optimizer='adam', loss='mae')
initial_train_preds = model.predict([user_id_train, item_id_train, item_meta_train])
%%time
history = model.fit([user_id_train, item_id_train, item_meta_train], rating_train,
batch_size=64, epochs=15, validation_split=0.1,
shuffle=True)
test_preds = model.predict([user_id_test, item_id_test, item_meta_test])
print("Final test MSE: %0.3f" % mean_squared_error(test_preds, rating_test))
print("Final test MAE: %0.3f" % mean_absolute_error(test_preds, rating_test))
```
The additional metadata seems to improve the predictive power of the model a bit, at least in terms of MAE.
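One detail worth keeping in mind is the scaler pattern used above: `fit_transform` learns the quantiles on the training rows only, and `transform` reuses them on the test rows, which avoids leaking test statistics into training. A minimal sketch on toy metadata:

```python
import numpy as np
from sklearn.preprocessing import QuantileTransformer

rng = np.random.RandomState(0)
meta_train = rng.rand(100, 2)   # toy stand-ins for popularity, release_year
meta_test = rng.rand(20, 2)

scaler = QuantileTransformer(n_quantiles=50)
meta_train_scaled = scaler.fit_transform(meta_train)  # learns quantiles on train
meta_test_scaled = scaler.transform(meta_test)        # reuses them on test

print(meta_train_scaled.min(), meta_train_scaled.max())  # values mapped into [0, 1]
```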
### A recommendation function for a given user
Once the model is trained, the system can be used to recommend a few items that a user hasn't already seen:
- we use `model.predict` to compute the ratings a user would give to all items
- we build a recommendation function that sorts these items and excludes those the user has already seen
```
indexed_items = items.set_index('item_id')
def recommend(user_id, top_n=10):
    item_ids = range(1, max_item_id + 1)
seen_mask = all_ratings["user_id"] == user_id
seen_movies = set(all_ratings[seen_mask]["item_id"])
item_ids = list(filter(lambda x: x not in seen_movies, item_ids))
print("User %d has seen %d movies, including:" % (user_id, len(seen_movies)))
for title in all_ratings[seen_mask].nlargest(20, 'popularity')['title']:
print(" ", title)
print("Computing ratings for %d other movies:" % len(item_ids))
item_ids = np.array(item_ids)
user_ids = np.zeros_like(item_ids)
user_ids[:] = user_id
items_meta = scaler.transform(indexed_items[meta_columns].loc[item_ids])
rating_preds = model.predict([user_ids, item_ids, items_meta])
    best_positions = np.argsort(rating_preds[:, 0])[::-1][:top_n]
    return [(indexed_items["title"].loc[item_ids[pos]], rating_preds[pos, 0])
            for pos in best_positions]
for title, pred_rating in recommend(5):
print(" %0.1f: %s" % (pred_rating, title))
```
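The core of `recommend` — ranking predicted ratings and mapping the ranks back to item ids — can be exercised on toy arrays (the ids and predictions below are made up):

```python
import numpy as np

candidate_ids = np.array([10, 11, 12, 13])   # unseen item ids
preds = np.array([3.1, 4.8, 2.0, 4.2])       # predicted ratings, aligned with ids

# Positions of the best predictions, best first, then map back to ids
best_positions = np.argsort(preds)[::-1][:2]
top_ids = candidate_ids[best_positions]
print(top_ids)  # [11 13]
```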
### Home assignment: Predicting ratings as a classification problem
In this dataset, the ratings all belong to a finite set of possible values:
```
import numpy as np
np.unique(rating_train)
```
Maybe we can help the model by forcing it to predict those values, treating the problem as a multiclass classification problem. The only required changes are:
- setting the final layer to output class membership probabilities using a softmax activation with 5 outputs;
- optimizing the categorical cross-entropy classification loss instead of a regression loss such as MSE or MAE.
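The only bookkeeping trap is the label shift: ratings 1–5 become classes 0–4 for the loss, and predicted ratings come back via `argmax(...) + 1`. A quick round-trip check with made-up softmax-like outputs:

```python
import numpy as np

ratings = np.array([1, 5, 3, 4])
classes = ratings - 1  # what the loss sees: classes 0..4

# Hypothetical softmax-like outputs with the true class most probable
probas = np.eye(5)[classes] * 0.8 + 0.05
pred_ratings = probas.argmax(axis=1) + 1
print(pred_ratings)  # recovers the original ratings
```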
```
# %load solutions/classification.py
class ClassificationModel(Model):
def __init__(self, embedding_size, max_user_id, max_item_id):
super().__init__()
self.user_embedding = Embedding(output_dim=embedding_size, input_dim=max_user_id + 1,
input_length=1, name='user_embedding')
self.item_embedding = Embedding(output_dim=embedding_size, input_dim=max_item_id + 1,
input_length=1, name='item_embedding')
# The following two layers don't have parameters.
self.flatten = Flatten()
self.concat = Concatenate()
self.dropout1 = Dropout(0.5)
self.dense1 = Dense(128, activation="relu")
self.dropout2 = Dropout(0.2)
self.dense2 = Dense(128, activation='relu')
self.dense3 = Dense(5, activation="softmax")
def call(self, inputs):
user_inputs = inputs[0]
item_inputs = inputs[1]
user_vecs = self.flatten(self.user_embedding(user_inputs))
item_vecs = self.flatten(self.item_embedding(item_inputs))
input_vecs = self.concat([user_vecs, item_vecs])
y = self.dropout1(input_vecs)
y = self.dense1(y)
y = self.dropout2(y)
y = self.dense2(y)
y = self.dense3(y)
return y
model = ClassificationModel(16, max_user_id, max_item_id)
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
initial_train_preds = model.predict([user_id_train, item_id_train]).argmax(axis=1) + 1
print("Random init MSE: %0.3f" % mean_squared_error(initial_train_preds, rating_train))
print("Random init MAE: %0.3f" % mean_absolute_error(initial_train_preds, rating_train))
history = model.fit([user_id_train, item_id_train], rating_train - 1,
batch_size=64, epochs=15, validation_split=0.1,
shuffle=True)
plt.plot(history.history['loss'], label='train')
plt.plot(history.history['val_loss'], label='validation')
plt.ylim(0, 2)
plt.legend(loc='best')
plt.title('loss');
test_preds = model.predict([user_id_test, item_id_test]).argmax(axis=1) + 1
print("Final test MSE: %0.3f" % mean_squared_error(test_preds, rating_test))
print("Final test MAE: %0.3f" % mean_absolute_error(test_preds, rating_test))
```
```
import sys
sys.path.insert(0, '/cndd/fangming/CEMBA/snmcseq_dev')
from __init__ import *
from snmcseq_utils import cd
from snmcseq_utils import create_logger
# from CEMBA_update_mysql import connect_sql
log = create_logger()
log.info("Hello")
#
input_f = '/cndd2/fangming/projects/scf_enhancers/data/bulk/round2/mc/dmr/cgdmr_round2_rms_results_collapsed.tsv'
df = pd.read_table(input_f, index_col=['#chr', 'start', 'end'], dtype={'#chr': object})
print(df.shape)
df.head()
print(df.shape)
df_hypo = df.loc[(df['number_of_dms']>=3) & (~df['hypomethylated_samples'].isnull()), 'hypomethylated_samples'].apply(lambda x: x.split(','))
df_hyper = df.loc[(df['number_of_dms']>=3) & (~df['hypermethylated_samples'].isnull()), 'hypermethylated_samples'].apply(lambda x: x.split(','))
print(df_hypo.shape)
print(df_hyper.shape)
df_hypo.head()
# df_hypo_cluster = df_hypo.loc[
# df_hypo.apply(lambda x: ('merged_mCG_{}_{}_{}'.format(cluster_type, i+1, ens) in x))]
outdir = '/cndd2/fangming/projects/scf_enhancers/enhancer_round2_191211/dmr'
if not os.path.isdir(outdir):
os.makedirs(outdir)
logging.info('Created directory {}'.format(outdir))
clsts = [i[len('methylation_level_'):] for i in df.filter(regex='^methylation_level_*').columns]
for clst in clsts:
df_hypo_cluster = df_hypo[df_hypo.apply(lambda x: (clst in x))]
output = os.path.join(outdir, 'dmr_{}.bed'.format(clst))
df_hypo_cluster.to_csv(output, sep='\t', header=False, index=True, na_rep='NA')
logging.info("Saved to {}".format(output))
logging.info(df_hypo_cluster.shape)
outdir = '/cndd2/fangming/projects/scf_enhancers/enhancer_round2_hyper_200226/dmr'
if not os.path.isdir(outdir):
os.makedirs(outdir)
logging.info('Created directory {}'.format(outdir))
clsts = [i[len('methylation_level_'):] for i in df.filter(regex='^methylation_level_*').columns]
for clst in clsts:
df_hyper_cluster = df_hyper[df_hyper.apply(lambda x: (clst in x))]
output = os.path.join(outdir, 'dmr_{}.bed'.format(clst))
df_hyper_cluster.to_csv(output, sep='\t', header=False, index=True, na_rep='NA')
logging.info("Saved to {}".format(output))
logging.info(df_hyper_cluster.shape)
```
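The filtering idiom used above — keep rows with enough DMS, split the comma-separated sample list, then test cluster membership — can be sketched on a toy frame (the column names mirror the DMR table, the values are made up):

```python
import pandas as pd

df_toy = pd.DataFrame({
    'number_of_dms': [5, 2, 4],
    'hypomethylated_samples': ['a,b', None, 'b,c'],
})

# Keep rows with >= 3 DMS and a non-null sample list, split into a list
hypo = df_toy.loc[
    (df_toy['number_of_dms'] >= 3) & (~df_toy['hypomethylated_samples'].isnull()),
    'hypomethylated_samples'
].apply(lambda x: x.split(','))

# Rows whose sample list contains cluster 'b'
in_b = hypo[hypo.apply(lambda x: 'b' in x)]
print(list(in_b.index))  # [0, 2]
```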
# Not organized
```
with cd(os.path.join(PATH_ENSEMBLES, ens, 'dmr')):
f = 'dmr_allc_merged_mCG_cluster_mCHmCG_lv_npc50_k30_rms_results_collapsed.tsv.DMR.bed'
df_bed = pd.read_table(f, header=None, names=['chr', 'start', 'end', 'num_dms'])
print(df_bed.shape)
df_bed.head()
df_out = df_bed[df_bed['num_dms']>=20]
with cd(os.path.join(PATH_ENSEMBLES, ens, 'dmr')):
df_out.to_csv('dmr_allc_merged_mCG_cluster_mCHmCG_lv_npc50_k30_rms_results_collapsed_20dms.tsv.DMR.bed',
sep='\t', na_rep='NA', header=False, index=False)
```
# Loading Libraries
```
# Importing the core libraries
import numpy as np
import pandas as pd
from IPython.display import Markdown
from datetime import timedelta
from datetime import datetime
import plotly.express as px
import plotly.graph_objs as go
!pip install pycountry
import pycountry
from plotly.offline import init_notebook_mode, iplot
import plotly.offline as py
import plotly.express as ex
from plotly.offline import download_plotlyjs,init_notebook_mode,plot,iplot
import matplotlib.pyplot as plt
py.init_notebook_mode(connected=True)
plt.style.use("seaborn-talk")
plt.rcParams['figure.figsize'] = 8, 5
plt.rcParams['image.cmap'] = 'viridis'
import folium
#!pip install ipympl
%matplotlib notebook
from fbprophet import Prophet
from fbprophet.plot import plot_plotly
pd.set_option('display.max_rows', None)
from math import sin, cos, sqrt, atan2, radians
from warnings import filterwarnings
filterwarnings('ignore')
from sklearn import preprocessing
from xgboost import XGBRegressor
import warnings
warnings.filterwarnings("ignore")
# from google.colab import drive
# drive.mount('/content/drive')
# Importing the dataset into python
#df = pd.read_csv('./covid19-global-forecasting-week-5/covid_19_data.csv',parse_dates=['ObservationDate'])
df = pd.read_csv('/content/drive/My Drive/Master Online Exams/Dataset/covid_19_data.csv',parse_dates=['ObservationDate'])
df.drop(['SNo','Last Update'],axis =1, inplace = True)
df['Active'] = df['Confirmed'] - (df['Recovered'] + df['Deaths'])
#week5_train = pd.read_csv('./covid19-global-forecasting-week/train.csv')
#week5_test = pd.read_csv('./covid19-global-forecasting-week-5/test.csv')
#dataset=pd.read_csv('./covid19-global-forecasting-week-5/covid_19_complete.csv',parse_dates=['Date'])
week5_train = pd.read_csv('/content/drive/My Drive/Master Online Exams/Dataset/train.csv')
week5_test = pd.read_csv('/content/drive/My Drive/Master Online Exams/Dataset/test.csv')
dataset=pd.read_csv('/content/drive/My Drive/Master Online Exams/Dataset/covid_19_complete.csv',parse_dates=['Date'])
```
# Data Cleaning
```
# Dataset Cleaning
# Creating a colunm for active cases
dataset['Confirmed'] = dataset['Confirmed'].astype(int)
dataset['Active'] = dataset['Confirmed'] - dataset['Deaths'] - dataset['Recovered']
# Replacing the word Mainland China with china
dataset['Country/Region'] = dataset['Country/Region'].replace('Mainland China','China')
#Filling Missing values
dataset[['Province/State']] = dataset[['Province/State']].fillna('')
dataset[['Confirmed', 'Deaths', 'Recovered', 'Active']] = dataset[['Confirmed', 'Deaths', 'Recovered', 'Active']].fillna(0)
# Datatypes
dataset['Recovered'] = dataset['Recovered'].astype(int)
# Creating Grouped dataset
dataset_grouped = dataset.groupby(['Date','Country/Region'])['Confirmed','Deaths','Recovered','Active'].sum().reset_index()
# New cases
temp = dataset_grouped.groupby(['Country/Region', 'Date', ])['Confirmed', 'Deaths', 'Recovered']
temp = temp.sum().diff().reset_index()
mask = temp['Country/Region'] != temp['Country/Region'].shift(1)
temp.loc[mask, 'Confirmed'] = np.nan
temp.loc[mask, 'Deaths'] = np.nan
temp.loc[mask, 'Recovered'] = np.nan
#Renaming columns
temp.columns = ['Country/Region', 'Date', 'New cases', 'New deaths', 'New recovered']
# Merging new values
# Dataset_grouped = pd.merge(dataset_grouped, temp, on=['Country/Region', 'Date'])
dataset_grouped = pd.merge(dataset_grouped, temp, on=['Country/Region', 'Date'])
# Filling na with 0
dataset_grouped = dataset_grouped.fillna(0)
# Fixing data types
cols = ['New cases', 'New deaths', 'New recovered']
dataset_grouped[cols] = dataset_grouped[cols].astype('int')
dataset_grouped['New cases'] = dataset_grouped['New cases'].apply(lambda x: 0 if x<0 else x)
# Country data grouping
country_wise = dataset_grouped[dataset_grouped['Date']==max(dataset_grouped['Date'])].reset_index(drop=True).drop('Date',axis=1)
#Grouped by Country
country_wise = country_wise.groupby('Country/Region')['Confirmed','Deaths','Recovered','Active','New cases'].sum().reset_index()
# Grouped per 100 cases
country_wise['Deaths / 100 Cases'] = round((country_wise['Deaths']/country_wise['Confirmed'])*100,2)
country_wise['Recovered / 100 cases'] = round((country_wise['Recovered']/country_wise['Confirmed'])*100,2)
country_wise['Deaths / 100 Recovered'] = round((country_wise['Deaths']/country_wise['Recovered'])*100,2)
cols= ['Deaths / 100 Cases','Recovered / 100 cases','Deaths / 100 Recovered']
country_wise[cols] = country_wise[cols].fillna(0)
# Grouping by Time Values
today = dataset_grouped[dataset_grouped['Date']==max(dataset_grouped['Date'])].reset_index(drop=True).drop('Date', axis=1)[['Country/Region', 'Confirmed']]
last_week = dataset_grouped[dataset_grouped['Date']==max(dataset_grouped['Date'])-timedelta(days=7)].reset_index(drop=True).drop('Date', axis=1)[['Country/Region', 'Confirmed']]
temp = pd.merge(today, last_week, on='Country/Region', suffixes=(' today', ' last week'))
#temp = temp[['Country/Region', 'Confirmed last week']]
temp['1 week change'] = temp['Confirmed today'] - temp['Confirmed last week']
temp = temp[['Country/Region', 'Confirmed last week', '1 week change']]
country_wise = pd.merge(country_wise, temp, on='Country/Region')
country_wise['1 week % increase'] = round(country_wise['1 week change']/country_wise['Confirmed last week']*100, 2)
day_wise = dataset_grouped.groupby('Date')['Confirmed', 'Deaths', 'Recovered', 'Active', 'New cases'].sum().reset_index()
# number cases per 100 cases
day_wise['Deaths / 100 Cases'] = round((day_wise['Deaths']/day_wise['Confirmed'])*100, 2)
day_wise['Recovered / 100 Cases'] = round((day_wise['Recovered']/day_wise['Confirmed'])*100, 2)
day_wise['Deaths / 100 Recovered'] = round((day_wise['Deaths']/day_wise['Recovered'])*100, 2)
# no. of countries
day_wise['No. of countries'] = dataset_grouped[dataset_grouped['Confirmed']!=0].groupby('Date')['Country/Region'].unique().apply(len).values
# fillna by 0
cols = ['Deaths / 100 Cases', 'Recovered / 100 Cases', 'Deaths / 100 Recovered']
day_wise[cols] = day_wise[cols].fillna(0)
week5_train = week5_train.drop(columns = ['County' , 'Province_State'])
week5_test = week5_test.drop(columns = ['County' , 'Province_State'])
week5_train['Date']= pd.to_datetime(week5_train['Date']).dt.strftime("%Y%m%d").astype(int)
week5_test['Date'] = pd.to_datetime(week5_test['Date']).dt.strftime("%Y%m%d").astype(int)
date_wise_data = df[['Country/Region',"ObservationDate","Confirmed","Deaths","Recovered",'Active']]
date_wise_data['Date'] = date_wise_data['ObservationDate'].apply(pd.to_datetime, dayfirst=True)
date_wise_data = date_wise_data.groupby(["ObservationDate"]).sum().reset_index()
date_wise_data.rename({"ObservationDate": 'Date','Recovered':'Cured'}, axis=1,inplace= True)
def formatted_text(string):
display(Markdown(string))
# Converting columns into numeric for train
from sklearn.preprocessing import LabelEncoder
labelencoder = LabelEncoder()
w5_X = week5_train.iloc[:,1].values
week5_train.iloc[:,1] = labelencoder.fit_transform(w5_X.astype(str))
w5_X = week5_train.iloc[:,5].values
week5_train.iloc[:,5] = labelencoder.fit_transform(w5_X)
# Converting columns into numeric for test
from sklearn.preprocessing import LabelEncoder
labelencoder = LabelEncoder()
w5te_X = week5_test.iloc[:,1].values
week5_test.iloc[:,1] = labelencoder.fit_transform(w5te_X)
w5te_X = week5_test.iloc[:,5].values
week5_test.iloc[:,5] = labelencoder.fit_transform(w5te_X)
#Train & Test ======================================================
x = week5_train.iloc[:,1:6]
y = week5_train.iloc[:,6]
from sklearn.model_selection import train_test_split
x_train,x_test,y_train,y_test =train_test_split(x,y, test_size = 0.2, random_state = 0 )
#Adding Population Data
pop = pd.read_csv("/content/drive/My Drive/Master Online Exams/Dataset/population_by_country_2020.csv")
# select only population
pop = pop.iloc[:, :2]
# rename column names
pop.columns = ['Country/Region', 'Population']
# merged data
country_wise = pd.merge(country_wise, pop, on='Country/Region', how='left')
# update population
cols = ['Burma', 'Congo (Brazzaville)', 'Congo (Kinshasa)', "Cote d'Ivoire", 'Czechia',
'Kosovo', 'Saint Kitts and Nevis', 'Saint Vincent and the Grenadines',
'Taiwan*', 'US', 'West Bank and Gaza']
pops = [54409800, 89561403, 5518087, 26378274, 10708981, 1793000,
53109, 110854, 23806638, 330541757, 4543126]
for c, p in zip(cols, pops):
country_wise.loc[country_wise['Country/Region']== c, 'Population'] = p
country_wise['Cases / Million People'] = round((country_wise['Confirmed'] / country_wise['Population']) * 1000000)
temp = country_wise.copy()
temp = temp.iloc[:,:6]
temp = temp.sort_values('Confirmed',ascending=False).reset_index()
temp.style.background_gradient(cmap='Blues',subset=["Confirmed"])\
.background_gradient(cmap='Reds',subset=["Deaths"])\
.background_gradient(cmap='Greens',subset=["Recovered"])\
.background_gradient(cmap='Purples',subset=["Active"])\
.background_gradient(cmap='PuBu',subset=["New cases"])
temp = dataset.groupby('Date')['Confirmed', 'Deaths', 'Recovered', 'Active'].sum().reset_index()
temp = temp[temp['Date']==max(temp['Date'])].reset_index(drop=True)
temp1 = temp.melt(id_vars="Date", value_vars=['Active', 'Deaths', 'Recovered'])
fig = px.pie(temp1,
             values='value',
             names="variable",
             title="Current Situation of COVID-19 in the world",
             template="seaborn")
fig.update_traces(hoverinfo='label+percent',textinfo='value', textfont_size=14,
marker=dict(colors=['#263fa3','#cc3c2f','#2fcc41'], line=dict(color='#FFFFFF', width=2)))
fig.update_traces(textposition='inside')
fig.update_traces(rotation=90, pull=0.05, textinfo="percent+label")
fig.show()
perday2 = date_wise_data.groupby(['Date'])['Confirmed','Cured','Deaths','Active'].sum().reset_index().sort_values('Date',ascending = True)
perday2['New Daily Confirmed Cases'] = perday2['Confirmed'].sub(perday2['Confirmed'].shift())
perday2['New Daily Confirmed Cases'].iloc[0] = perday2['Confirmed'].iloc[0]
perday2['New Daily Confirmed Cases'] = perday2['New Daily Confirmed Cases'].astype(int)
perday2['New Daily Cured Cases'] = perday2['Cured'].sub(perday2['Cured'].shift())
perday2['New Daily Cured Cases'].iloc[0] = perday2['Cured'].iloc[0]
perday2['New Daily Cured Cases'] = perday2['New Daily Cured Cases'].astype(int)
perday2['New Daily Deaths Cases'] = perday2['Deaths'].sub(perday2['Deaths'].shift())
perday2['New Daily Deaths Cases'].iloc[0] = perday2['Deaths'].iloc[0]
perday2['New Daily Deaths Cases'] = perday2['New Daily Deaths Cases'].astype(int)
perday2.to_csv('/content/drive/My Drive/Master Online Exams/Dataset/perday_daily_cases.csv')
import plotly.express as px
fig = px.bar(perday2, x="Date", y="New Daily Confirmed Cases", barmode='group',height=500)
fig.update_layout(title_text='New COVID-19 cases reported daily all over the World',plot_bgcolor='rgb(275, 270, 273)')
fig.show()
import plotly.express as px
fig = px.bar(perday2, x="Date", y="New Daily Cured Cases", barmode='group',height=500,
color_discrete_sequence = ['#319146'])
fig.update_layout(title_text='New COVID-19 Recovered cases reported daily all over the world',plot_bgcolor='rgb(275, 270, 273)')
fig.show()
# Exploratory Data Analysis
# 1. Displaying a sample of the train and test dataset
print('Training Dataset: \n\n', week5_train.sample(5))
print('\n\n\n Test Dataset: \n\n', week5_test.sample(5))
# Visualising the descriptive analysis of the dataset
# 2. Visualizing the data types for each column
print('Data Types for each column (Training) \n\n', week5_train.dtypes)
print('\n\n\n Data Types for each column (Testing) \n\n', week5_test.dtypes)
# 3. Visualizing the summary statistics for numerical columns
print('Descriptive analytics for the dataset (Training): \n\n', week5_train.describe())
print('\n\n\n Descriptive analytics for the dataset (Testing): \n\n', week5_test.describe())
# Displaying some information regarding the dataset
print('Dataset information (Training): \n\n', week5_train.info())
print('\n\n\n Dataset information (Testing): \n\n', week5_test.info())
# Displaying the shape of the dataset
print('Training dataset shape: ', week5_train.shape)
print('\n Testing dataset shape: ', week5_test.shape)
# Displaying graph of the top 15 Countries
l = week5_train[['Country_Region','TargetValue']].groupby(['Country_Region'], as_index = False).sum().sort_values(by = 'TargetValue',ascending=False)
w = pd.DataFrame(l)
data1 = l.head(15)
fig = px.bar(data1,x = 'Country_Region',y = 'TargetValue')
fig.show()
# Pie chart visualizing confirmed cases and fatalities
fig = px.pie(week5_train, values='TargetValue', names='Target')
fig.update_traces(textposition='inside')
fig.update_layout(uniformtext_minsize=12, uniformtext_mode='hide')
fig.show()
# Displaying the current world share of COVID-19 cases
fig = px.pie(week5_train, values='TargetValue', names='Country_Region')
fig.update_traces(textposition='inside')
fig.update_layout(uniformtext_minsize=12, uniformtext_mode='hide')
fig.show()
# Tree map of COVID-19 cases globally
fig = px.treemap(week5_train, path=['Country_Region'], values='TargetValue',
color='Population', hover_data=['Country_Region'],
color_continuous_scale='matter', title='Current share of Worldwide COVID19 Confirmed Cases')
fig.show()
# COVID 19 Total cases growth for top 15 countries
last_date = week5_train.Date.max()
df_countries = week5_train[week5_train['Date']==last_date]
df_countries = df_countries.groupby('Country_Region', as_index=False)['TargetValue'].sum()
df_countries = df_countries.nlargest(15,'TargetValue')
df_trend = week5_train.groupby(['Date','Country_Region'], as_index=False)['TargetValue'].sum()
df_trend = df_trend.merge(df_countries, on='Country_Region')
df_trend.rename(columns={'Country_Region':'Country', 'TargetValue_x':'Cases'}, inplace=True)
px.line(df_trend, x='Date', y='Cases', color='Country', title='COVID19 Total Cases growth for top 15 worst affected countries')
```
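The new-cases computation in the cleaning cell relies on a subtle pattern: a plain `.diff()` after a groupby-sum crosses country boundaries, so the first row of each country must be masked out. A self-contained sketch of that pattern on toy data:

```python
import pandas as pd
import numpy as np

toy = pd.DataFrame({
    'Country/Region': ['A', 'A', 'B', 'B'],
    'Date': [1, 2, 1, 2],
    'Confirmed': [10, 15, 100, 130],
})

diffed = toy.groupby(['Country/Region', 'Date'])['Confirmed'].sum().diff().reset_index()
# Mask rows where the country changed: their diff mixes two countries
mask = diffed['Country/Region'] != diffed['Country/Region'].shift(1)
diffed.loc[mask, 'Confirmed'] = np.nan
print(diffed['Confirmed'].tolist())  # [nan, 5.0, nan, 30.0]
```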
Author: Maxime Marin
@: mff.marin@gmail.com
# Accessing IMOS data case studies: Walk-through and interactive session - Analysis
In this notebook, we will provide a recipe for further analysis of the same dataset we selected earlier. In the future, a similar notebook can be tailored to a particular dataset, performing analysis that is easily repeatable.
Alternatively, curious users can use the code to "tweak it" to their needs, and perform slightly different analysis and visualisation. This is why we have deliberately left some code in the cells, rather than hiding it.
As always, we start by importing our data and some main libraries
```
import sys
import os
sys.path.append('/home/jovyan/intake-aodn')
import intake_aodn
import matplotlib.pyplot as plt
from intake_aodn.plot import Clim_plot
from intake_aodn.analysis import lin_trend, make_clim
import xarray as xr
data = xr.open_dataset('Example_Data.nc')
```
***
## 1) Climatology
Calculating the climatology of a particular variable is a very common operation in climate science. It quantifies the "mean" state of a variable, which can later be subtracted from that variable to obtain anomalies.
The most common climatology is computed on a yearly timescale. It is equivalent to a yearly average and is useful for calculating linear trends:
```
# We will work with the box-average timeseries:
data_bavg = data.stack(space=['longitude','latitude']).mean(dim='space')
# Perform and plot annual climatology
clim,ax = Clim_plot(da = data_bavg['sea_surface_temperature'],time_res = 'year')
ylab = ax.get_ylabel() # stores the y-axis label to re-use later.
# Calculate the linear trend and confidence interval
coef,fit,hci,lci = lin_trend(clim,'year')
# Plot the linear model
fit['linear_fit'].plot(ax=ax,color='red',label = 'trend')
plt.fill_between(lci['year'].values,lci['linear_fit'].values,hci['linear_fit'].values,alpha=0.2,color='red')
# add label, title and legend
ax.set_ylabel(ylab)
ax.set_title('Annual Mean')
plt.legend();
```
We have plotted the annual averages of the box-averaged SST, along with the corresponding linear trend. However, something seems to be off...
The first and last yearly values appear to be underestimated and overestimated, respectively. Why is that? (Hint: try executing `clim.year[0]` and `data.time[0]`. Hint 2: to access the last element of a list, we use `[-1]`.)
```
# type code here
```
The cell below outputs the same plot as before, but its 7th line is different. The function `Clim_plot()` takes `time_main` as an argument, which defines the time period we are interested in. Can we change line 7 to get a better plot?
```
# We will work with the box-average timeseries:
data_bavg = data.stack(space=['longitude','latitude']).mean(dim='space')
# Perform and plot annual climatology
clim,ax = Clim_plot(da = data_bavg['sea_surface_temperature'],time_res = 'year',time_main = ['1992-01-01','2021-12-31'])
ylab = ax.get_ylabel() # stores the y-axis label to re-use later.
# Calculate the linear trend and confidence interval
coef,fit,hci,lci = lin_trend(clim,'year')
# Plot the linear model
fit['linear_fit'].plot(ax=ax,color='red',label = 'trend')
plt.fill_between(lci['year'].values,lci['linear_fit'].values,hci['linear_fit'].values,alpha=0.2,color='red')
# add label, title and legend
ax.set_ylabel(ylab)
ax.set_title('Annual Mean')
plt.legend();
```
***
## 2) Monthly climatology
Using the same `Clim_plot` function as before, we can also calculate a monthly climatology, which gives us the mean state of the variable for each month of the year.
To do so, we just change the `time_res` argument to `time_res = 'month'`
```
clim,ax = Clim_plot(data_bavg['sea_surface_temperature'],time_res = 'month',time_main = ['1992-01-01','2021-12-31'],time_recent = ['2011-01-01', None],ind_yr = [2011,2018 ,2019])
ax.set_title('Monthly Climatology');
```
We notice that the function takes several arguments it did not take before, including `time_recent` and `ind_yr`.
`time_recent` tells the function to also plot the monthly climatology for a "more recent" time period, reflecting the recent mean state.
`ind_yr` lets the user choose individual years to plot. This is useful to compare one particular year with the average. For example, we clearly see that the 2011 summer was much warmer than usual!
Note: you can still pass these arguments when `time_res = 'year'`, but they will have no effect, as they serve no purpose for an annual climatology.
***
## 3) Linear trends
It can also be useful to visualise the spatial distribution of long-term changes or trends. Rather than plotting a linear trend from one timeseries, let's calculate linear trend coefficients at all pixels and map it.
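The per-pixel fit can be sketched in plain NumPy: `np.polyfit` accepts a 2-D right-hand side, so one call fits a line at every pixel simultaneously (made-up data; the notebook's `lin_trend` presumably adds the confidence intervals on top):

```python
import numpy as np

# Made-up yearly SST field: 10 years on a 4x5 grid warming at 0.02 C/year
years = np.arange(2000, 2010)
field = 15.0 + 0.02 * (years - 2000)[:, None, None] * np.ones((10, 4, 5))

# Flatten space, then fit a degree-1 polynomial per pixel in one call
flat = field.reshape(len(years), -1)            # shape (time, pixels)
slope, intercept = np.polyfit(years, flat, deg=1)
trend_map = slope.reshape(4, 5)                 # degrees C per year per pixel
```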
```
from intake_aodn.plot import map_var, create_cb
from intake_aodn.analysis import make_clim
import cmocean
import numpy as np
#First, we calculate yearly averages
clim = make_clim(data['sea_surface_temperature'],time_res = 'year')
#Then we can compute our linear models
coef,fit,hci,lci= lin_trend(clim[0],'year',deg=1)
#We rename a variable so that our plot makes more sense
coef = coef.rename({'polyfit_coefficients':'SST trend [C/decade]'})
#let's plot
fig = plt.figure(figsize=(30,8))
ax,gl,axproj = map_var((coef.isel(degree=0)['SST trend [C/decade]']*10),[coef.longitude.min(),coef.longitude.max()],[coef.latitude.min(),coef.latitude.max()],
title = 'Linear Trends',
cmap = cmocean.cm.balance,
add_colorbar = False,
vmin = -0.4,vmax = 0.4,
levels=np.arange(-0.4,0.45,0.05));
cb = create_cb(fig,ax,axproj,'SST trend [C/decade]',size = "4%", pad = 0.5,labelpad = 20,fontsize=20)
```
***
## 4) Anomalies
Finally, let's use the climatology to compute anomalies. This is particularly useful for understanding inter-annual variability of the system (e.g. the ENSO influence), which requires removing the seasonal cycle.
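The anomaly computation itself is a one-liner once the monthly climatology exists. A plain-NumPy sketch on made-up data:

```python
import numpy as np

# Made-up monthly series: a strong seasonal cycle plus a slow warming signal
months = np.tile(np.arange(12), 4)            # four years of monthly data
series = 10 * np.cos(2 * np.pi * months / 12) + 0.05 * np.arange(48)

# Monthly climatology: mean over all samples of the same calendar month
clim = np.array([series[months == m].mean() for m in range(12)])

# Anomalies: subtract each month's climatological mean from the series
anomalies = series - clim[months]
```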
```
from intake_aodn.analysis import time_average, make_clim
# Make monthly anomalies
mn_ano = time_average(data,'M',var='sea_surface_temperature',ignore_inc = True).groupby('time.month') - make_clim(data['sea_surface_temperature'],time_res = 'month')[0]
# Compute box-average and plot
bavg = mn_ano.stack(space=['longitude','latitude']).mean(dim='space')
fig = plt.figure(figsize=(12,8))
bavg.plot(label='monthly',color='black')
# Make yearly running anomalies on monthly timescale
yr_rol = bavg.rolling(time = 12, center=True).mean()
# plot smoothed timeseries
yr_rol.plot(label='annual',color='red',lw = 2)
plt.legend();
# fix xlim
xl = mn_ano.coords['time'].values
plt.xlim(xl.min(),xl.max())
```
The plot shows that monthly variability can be quite large compared to inter-annual variability, which is why smoothing can reveal important inter-annual patterns.
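The centered 12-month running mean can be reproduced in plain NumPy with a convolution (made-up data; `mode='valid'` simply trims the edge points that a centered window cannot fully cover, whereas xarray's `rolling` keeps them as NaN by default):

```python
import numpy as np

# Made-up noisy monthly anomalies: a weak trend buried in month-to-month noise
rng = np.random.default_rng(0)
ano = 0.02 * np.arange(120) + rng.normal(0, 1, 120)

# Centered 12-month running mean via convolution with a flat window
window = np.ones(12) / 12
smoothed = np.convolve(ano, window, mode='valid')
```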
***
## 5) Multi maps
Finally, we give the user the ability to make a publication-ready subplot map of a statistic of their data.
The first step is to define the data to plot. Let's imagine we want to plot the monthly SST anomalies that we calculated in part 4, over a specific year.
Once the data is ready, we simply call the `multimap()` function, giving it the dimension to facet over and the number of columns we want:
```
from intake_aodn.plot import multimap
import cmocean
from intake_aodn.analysis import time_average, make_clim
# Make monthly anomalies
da = time_average(data,'M',var='sea_surface_temperature',ignore_inc = True).groupby('time.month') - make_clim(data['sea_surface_temperature'],time_res = 'month')[0]
# data to plot
da = da.sel(time = slice('2011-01-01','2011-12-31'))
fig = multimap(da,col = 'time',col_wrap=4,freq = 'month')
```
***
## 6) Saving Figures
*But where is the save button!*
I know, I did not place a save button. But remember, you can always save any image by right-clicking on it. We are working in a browser!
Of course, we might want to save figures at higher quality, especially for publications. In reality, saving a plot is very easy: just insert the one-liner below at the end of any cell, choosing your file name and format:
```
plt.gcf().savefig("filename.changeformathere");#plt.gcf() indicates you want to save the latest figure.
```
```
"""The purpose of this tutorial is to introduce you to:
(1) how gradient-based optimization of neural networks
operates in concrete practice, and
(2) how different forms of learning rules lead to more or less
efficient learning as a function of the shape of the optimization
landscape
This tutorial should be used in conjunction with the lecture:
http://cs375.stanford.edu/lectures/lecture6_optimization.pdf
""";
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
#the above imports the plotting library matplotlib
#standard imports
import time
import numpy as np
import h5py
#We're not using the GPU here, so we set the
#"CUDA_VISIBLE_DEVICES" environment variable to -1
#which tells tensorflow to only use the CPU
import os
os.environ["CUDA_VISIBLE_DEVICES"]="-1"
import tensorflow as tf
```
## Gradient Descent
```
#let's define a model which "believes" that the output data
#is scalar power of a scalar input, e.g. :
# y ~ x^p
#defining the scalar input data variable
batch_size = 200
#the "placeholder" mechanism is similar in effect to
# x = tf.get_variable('x', shape=(batch_size,), dtype=tf.float32)
#except we don't have to define a fixed name "x"
x = tf.placeholder(shape=(batch_size,), dtype=tf.float32)
#define the scalar power variable
initial_power = tf.zeros(shape=())
power = tf.get_variable('pow', initializer=initial_power, dtype=tf.float32)
#define the model
model = x**power
#the output data needs a variable too
y = tf.placeholder(shape=(batch_size,), dtype=tf.float32)
#the error rate of the model is mean L2 distance across
#the batch of data
power_loss = tf.reduce_mean((model - y)**2)
#now, our goal is to use gradient descent to
#figure out the parameter of our model -- namely, the power variable
grad = tf.gradients(power_loss, power)[0]
#Let's fit (optimize) the model.
#to do that we'll have to first of course define a tensorflow session
sess = tf.Session()
#... and initialize the power variable
initializer = tf.global_variables_initializer()
sess.run(initializer)
#ok ... so let's test the case where the true input-output relationship
#is x --> x^2
xval = np.arange(0, 2, .01)
yval = np.arange(0, 2, .01)**2
#OK
initial_guess = 0
assign_op = tf.assign(power, initial_guess)
sess.run(assign_op)
gradval = sess.run(grad, feed_dict={x: xval, y: yval})
gradval
#ok so this is telling us to do:
new_guess = initial_guess + -1 * (gradval)
print(new_guess)
#ok so let's assign the new guess to the power variable
assign_op = tf.assign(power, new_guess)
sess.run(assign_op)
#... and get the gradient again
gradval = sess.run(grad, feed_dict={x: xval, y: yval})
gradval
new_guess = new_guess + -1 * (gradval)
print(new_guess)
#... and one more time ...
assign_op = tf.assign(power, new_guess)
sess.run(assign_op)
#... get the gradient again
gradval = sess.run(grad, feed_dict={x: xval, y: yval})
print('gradient: %.3f' % gradval)
#... do the update
new_guess = new_guess + -1 * (gradval)
print('power: %.3f' % new_guess)
#ok so we're hovering back and forth around guess of 2.... which is right!
#OK let's do this in a real loop and keep track of useful stuff along the way
xval = np.arange(0, 2, .01)
yval = np.arange(0, 2, .01)**2
#start the guess off at 0 again
assign_op = tf.assign(power, 0)
sess.run(assign_op)
#let's keep track of the guess along the way
powers = []
#and the loss, which should go down
losses = []
#and the grads just for luck
grads = []
#let's iterate the gradient descent process 20 timesteps
num_iterations = 20
#for each timestep ...
for i in range(num_iterations):
#... get the current derivative (grad), the current guess of "power"
#and the loss, given the input and output training data (xval & yval)
cur_power, cur_loss, gradval = sess.run([power, power_loss, grad],
feed_dict={x: xval, y: yval})
#... keep track of interesting stuff along the way
powers.append(cur_power)
losses.append(cur_loss)
grads.append(gradval)
#... now do the gradient descent step
new_power = cur_power - gradval
#... and actually update the value of the power variable
assign_op = tf.assign(power, new_power)
sess.run(assign_op)
#and then, the loop runs again
plt.plot(powers, label='estimated power')
plt.plot(losses, label='loss')
plt.plot(grads, label='gradients')
plt.xlabel('iterations')
plt.legend(loc='lower right')
plt.title('Estimating a quadratic')
##ok now let's try that again except where y ~ x^3
#all we need to do is change the data
xval = np.arange(0, 2, .01)
yval = np.arange(0, 2, .01)**3
#The rest of the code remains the same
assign_op = tf.assign(power, 0)
sess.run(assign_op)
powers = []
losses = []
grads = []
num_iterations = 20
for i in range(num_iterations):
cur_power, cur_loss, gradval = sess.run([power, power_loss, grad],
feed_dict={x: xval, y: yval})
powers.append(cur_power)
losses.append(cur_loss)
grads.append(gradval)
new_power = cur_power - gradval
assign_op = tf.assign(power, new_power)
sess.run(assign_op)
plt.plot(powers, label='estimated power')
plt.plot(losses, label='loss')
plt.xlabel('iterations')
plt.legend(loc='center right')
plt.title('Failing to estimate a cubic')
#wait ... this did *not* work. why?
#whoa ... the loss must have diverged to infinity (or close) really early
losses
#why?
#let's look at the gradients
grads
#hm. the gradient was getting big at the end.
#after all, the taylor series only works in the close-to-the-value limit.
#we must have been taking steps that were too big.
#how do we fix this?
```
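The failing loop above can be reproduced, and fixed, without TensorFlow. For the model y ≈ x**p the gradient of the MSE loss has a closed form, and scaling each step by a factor smaller than 1 stabilizes the iteration (a plain-NumPy sketch on made-up data; x starts at 0.01 so log(x) stays finite):

```python
import numpy as np

x = np.arange(0.01, 2, 0.01)
y = x ** 3                                   # true power is 3

def grad(p):
    # d/dp mean((x**p - y)**2) = mean(2 * (x**p - y) * x**p * log(x))
    return np.mean(2 * (x ** p - y) * x ** p * np.log(x))

p = 0.0
for _ in range(200):
    p -= 0.25 * grad(p)                      # damped step; raw steps diverge
```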
### With Learning Rate
```
def gradient_descent(loss,
target,
initial_guess,
learning_rate,
training_data,
num_iterations):
#assign initial value to the target
initial_op = tf.assign(target, initial_guess)
#get the gradient
grad = tf.gradients(loss, target)[0]
#actually do the gradient descent step directly in tensorflow
newval = tf.add(target, tf.multiply(-grad, learning_rate))
#the optimizer step actually performs the parameter update
optimizer_op = tf.assign(target, newval)
#NB: none of the four steps above are actually running anything yet
#They are just formal graph computations.
#to actually do anything, you have to run stuff in a session.
#set up containers for stuff we want to keep track of
targetvals = []
losses = []
gradvals = []
#first actually run the initialization operation
sess.run(initial_op)
#now take gradient steps in a loop
for i in range(num_iterations):
#just by virtue of calling "run" on the "optimizer" op,
#the optimization occurs ...
output = sess.run({'opt': optimizer_op,
'grad': grad,
'target': target,
'loss': loss
},
feed_dict=training_data)
targetvals.append(output['target'])
losses.append(output['loss'])
gradvals.append(output['grad'])
return losses, targetvals, gradvals
xval = np.arange(0, 2, .01)
yval = np.arange(0, 2, .01)**3
data_dict = {x: xval, y:yval}
losses, powers, grads = gradient_descent(loss=power_loss,
target=power,
initial_guess=0,
learning_rate=.25, #chose learning rate < 1
training_data=data_dict,
num_iterations=20)
plt.plot(powers, label='estimated power')
plt.plot(losses, label='loss')
plt.legend(loc='upper right')
plt.title('Estimating a cubic')
#ok -- now the result stably converges!
#and also for a higher power ....
xval = np.arange(0, 2, .01)
yval = np.arange(0, 2, .01)**4
data_dict = {x: xval, y:yval}
losses, powers, grads = gradient_descent(loss=power_loss,
target=power,
initial_guess=0,
learning_rate=0.1,
training_data=data_dict,
num_iterations=100)
plt.plot(powers, label='estimated power')
plt.plot(losses, label='loss')
plt.legend(loc='upper right')
plt.title('Estimating a quartic')
#what about when the data is actually not of the right form?
xval = np.arange(0, 2, .01)
yval = np.sin(xval)
data_dict = {x: xval, y:yval}
losses, powers, grads = gradient_descent(loss=power_loss,
target=power,
initial_guess=0,
learning_rate=0.1,
training_data=data_dict,
num_iterations=20)
plt.plot(powers, label='estimated power')
plt.plot(losses, label='loss')
plt.legend(loc='center right')
plt.title('Estimating sine with a power, not converged yet')
#doesn't look like it's converged yet -- maybe we need to run it longer?
#sine(x) now with more iterations
xval = np.arange(0, 2, .01)
yval = np.sin(xval)
data_dict = {x: xval, y:yval}
losses, powers, grads = gradient_descent(loss=power_loss,
target=power,
initial_guess=0,
learning_rate=0.1,
training_data=data_dict,
num_iterations=100) #<-- more iterations
plt.plot(powers, label='estimated power')
plt.plot(losses, label='loss')
plt.legend(loc='center right')
plt.title('Estimating sine with a power (badly)')
#ok it's converged but not to a great loss. This is unsurprising
#since x^p is a bad model for sine(x)
#how should we improve?
#THE MACHINE LEARNING ANSWER: well, let's have more parameters in our model!
#actually, let's write a model using the Taylor series idea more explicitly:
# y ~ sum_i a_i x^i
#for some coefficients a_i that we have to learn
#let's go out to x^5, so approximation_order = 6 (remember, we're 0-indexing in python)
approximation_order = 6
#ok so now let's define the variable we'll be using
#instead of "power" this will be coefficients of the powers
#with one coefficient for each power from 0 to approximation_order-1
coefficients = tf.get_variable('coefficients',
initializer = tf.zeros(shape=(approximation_order,)),
dtype=tf.float32)
#gotta run the initializer again b/c we just defined a new trainable variable
initializer = tf.global_variables_initializer()
sess.run(initializer)
sess.run(coefficients)
#Ok let's define the model
#here's the vector of exponents
powervec = tf.range(0, approximation_order, dtype=tf.float32)
#we want to do essentially:
# sum_i coefficient_i * x^powervec[i]
#but to do x^powervec, we need to create an additional dimension on x
x_expanded = tf.expand_dims(x, axis=1)
#ok, now we can actually do x^powervec
x_exponentiated = x_expanded**powervec
#now multiply by the coefficient variable
x_multiplied_by_coefficients = coefficients * x_exponentiated
#and sum over the 1st dimension, i.e. doing the sum_i
polynomial_model = tf.reduce_sum(x_multiplied_by_coefficients, axis=1)
#the loss is again l2 difference between prediction and desired output
polynomial_loss = tf.reduce_mean((polynomial_model - y)**2)
xval = np.arange(-2, 2, .02)
yval = np.sin(xval)
data_dict = {x: xval, y:yval}
#starting out at 0 since the coefficients were all initialized to 0
sess.run(polynomial_model, feed_dict=data_dict)
#ok let's try it
losses, coefvals, grads = gradient_descent(loss=polynomial_loss,
target=coefficients,
initial_guess=np.zeros(approximation_order),
learning_rate=0.1,
training_data=data_dict,
num_iterations=100)
#ok, so for each timestep we have 6 values -- the coefficients
print(len(coefvals))
coefvals[-1].shape
#here's the last set of coefficients learned
coefvals[-1]
#whoa -- what's going on?
#let's lower the learning rate
losses, coefvals, grads = gradient_descent(loss=polynomial_loss,
target=coefficients,
initial_guess=np.zeros(approximation_order),
learning_rate=0.005, #<-- lowered learning rate
training_data=data_dict,
num_iterations=100)
#ok not quite as bad
coefvals[-1]
#let's visualize what we learned
x0 = coefvals[-1]
assign_op = tf.assign(coefficients, x0)
sess.run(assign_op)
plt.plot(xval, yval)
plt.plot(xval, sess.run(polynomial_model, feed_dict={x:xval}))
#ok, fine, but not great
#what if we let it run longer?
losses, coefvals, grads = gradient_descent(loss=polynomial_loss,
target=coefficients,
initial_guess=np.zeros(approximation_order),
learning_rate=0.005,
training_data=data_dict,
num_iterations=5000) #<-- more iterations
x0 = coefvals[-1]
assign_op = tf.assign(coefficients, x0)
sess.run(assign_op)
plt.figure(figsize=(10, 3))
plt.subplot(1, 2, 1)
plt.plot(xval, sess.run(polynomial_model, feed_dict={x:xval}))
plt.plot(xval, yval)
plt.subplot(1, 2, 2)
plt.plot(losses)
plt.xlabel('iterations')
plt.ylabel('loss')
plt.title('Loss with Gradient Descent')
#ok much better
coefvals[-1]
tf.Variable(np.zeros(6))
```
### With momentum
```
def gradient_descent_with_momentum(loss,
target,
initial_guess,
learning_rate,
momentum,
training_data,
num_iterations):
#set target to initial guess
initial_op = tf.assign(target, initial_guess)
#get gradient
grad = tf.gradients(loss, target)[0]
#set up the variable for the gradient accumulation
grad_shp = grad.shape.as_list()
#needs to be specified as float32 to interact properly with other things (but numpy defaults to float64)
grad_accum = tf.Variable(np.zeros(grad_shp).astype(np.float32))
#gradplus = grad + momentum * grad_accum
gradplus = tf.add(grad, tf.multiply(grad_accum, momentum))
#newval = oldval - learning_rate * gradplus
newval = tf.add(target, tf.multiply(-gradplus, learning_rate))
#the optimizer step actually performs the parameter update
optimizer_op = tf.assign(target, newval)
#this step updates grad_accum
update_accum = tf.assign(grad_accum, gradplus)
#run initialization
sess.run(initial_op)
#necessary b/c we've defined a new variable ("grad_accum") above
init_op = tf.global_variables_initializer()
sess.run(init_op)
#run the loop
targetvals = []
losses = []
gradvals = []
times = []
for i in range(num_iterations):
t0 = time.time()
output = sess.run({'opt': optimizer_op, #have to have this for optimization to occur
'accum': update_accum, #have to have this for grad_accum to update
'grad': grad, #the rest of these are just for keeping track
'target': target,
'loss': loss
},
feed_dict=training_data)
times.append(time.time() - t0)
targetvals.append(output['target'])
losses.append(output['loss'])
gradvals.append(output['grad'])
print('Average time per iteration --> %.5f' % np.mean(times))
return losses, targetvals, gradvals
losses, coefvals, grads = gradient_descent_with_momentum(loss=polynomial_loss,
target=coefficients,
initial_guess=np.zeros(approximation_order),
learning_rate=0.01, #<-- can use higher learning rate!
momentum=0.9,
training_data=data_dict,
num_iterations=250) #<-- can get away with fewer iterations!
x0 = coefvals[-1]
assign_op = tf.assign(coefficients, x0)
sess.run(assign_op)
plt.figure(figsize=(10, 3))
plt.subplot(1, 2, 1)
plt.plot(xval, sess.run(polynomial_model, feed_dict={x:xval}))
plt.plot(xval, yval)
plt.subplot(1, 2, 2)
plt.plot(losses)
plt.xlabel('iterations')
plt.ylabel('loss')
plt.title('Loss with Gradient Descent')
#so momentum is really useful
```
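The two-line momentum rule itself is easy to see outside TensorFlow. A plain-NumPy sketch on a made-up linear least-squares problem (not the notebook's polynomial model):

```python
import numpy as np

# Momentum: v <- grad + momentum * v ;  w <- w - lr * v
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(100), rng.uniform(-1, 1, 100)])
y = X @ np.array([1.0, 2.0])                  # true intercept 1, slope 2

w = np.zeros(2)
v = np.zeros(2)
for _ in range(200):
    grad = 2 * X.T @ (X @ w - y) / len(y)     # gradient of the MSE loss
    v = grad + 0.9 * v                        # accumulate past gradients
    w -= 0.1 * v
```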
### Tensorflow's Built-In Optimizers
```
def tf_builtin_optimization(loss,
optimizer_class,
target,
training_data,
num_iterations,
optimizer_args=(),
optimizer_kwargs={},
):
#construct the optimizer
optimizer = optimizer_class(*optimizer_args,
**optimizer_kwargs)
#formal tensorflow optimizers will always have a "minimize" method
#this is how you actually get the optimizer op
optimizer_op = optimizer.minimize(loss)
init_op = tf.global_variables_initializer()
sess.run(init_op)
targetvals = []
losses = []
times = []
for i in range(num_iterations):
t0 = time.time()
output = sess.run({'opt': optimizer_op,
'target': target,
'loss': loss},
feed_dict=training_data)
times.append(time.time() - t0)
targetvals.append(output['target'])
losses.append(output['loss'])
print('Average time per iteration --> %.5f' % np.mean(times))
return np.array(losses), targetvals
xval = np.arange(-2, 2, .02)
yval = np.sin(xval)
data_dict = {x: xval, y:yval}
losses, coefvals = tf_builtin_optimization(loss=polynomial_loss,
optimizer_class=tf.train.GradientDescentOptimizer,
target=coefficients,
training_data=data_dict,
num_iterations=5000,
optimizer_args=(0.005,),
) #<-- more iterations
x0 = coefvals[-1]
assign_op = tf.assign(coefficients, x0)
sess.run(assign_op)
plt.figure(figsize=(10, 3))
plt.subplot(1, 2, 1)
plt.plot(xval, sess.run(polynomial_model, feed_dict={x:xval}))
plt.plot(xval, yval)
plt.subplot(1, 2, 2)
plt.plot(losses)
plt.xlabel('iterations')
plt.ylabel('loss')
plt.title('Loss with Gradient Descent')
#right ok, we recovered what we did before by hand, now using
#the standard tensorflow tools
#Let's use the Momentum Optimizer. standard parameters for learning
#are learning_rate = 0.01 and momentum = 0.9
xval = np.arange(-2, 2, .02)
yval = np.sin(xval )
data_dict = {x: xval, y:yval}
losses, coefvals = tf_builtin_optimization(loss=polynomial_loss,
optimizer_class=tf.train.MomentumOptimizer,
target=coefficients,
training_data=data_dict,
num_iterations=250,
optimizer_kwargs={'learning_rate': 0.01,
'momentum': 0.9})
x0 = coefvals[-1]
assign_op = tf.assign(coefficients, x0)
sess.run(assign_op)
plt.figure(figsize=(10, 3))
plt.subplot(1, 2, 1)
plt.plot(xval, sess.run(polynomial_model, feed_dict={x:xval}))
plt.plot(xval, yval)
plt.subplot(1, 2, 2)
plt.plot(losses)
plt.xlabel('iterations')
plt.ylabel('loss')
plt.title('Loss with Momentum Optimizer')
#again reproducing what we see before by hand
#and we can try some other stuff, such as the Adam Optimizer
losses, coefvals = tf_builtin_optimization(loss=polynomial_loss,
optimizer_class=tf.train.AdamOptimizer,
target=coefficients,
training_data=data_dict,
num_iterations=500,
optimizer_kwargs={'learning_rate': 0.01})
x0 = coefvals[-1]
assign_op = tf.assign(coefficients, x0)
sess.run(assign_op)
plt.figure(figsize=(10, 3))
plt.subplot(1, 2, 1)
plt.plot(xval, sess.run(polynomial_model, feed_dict={x:xval}))
plt.plot(xval, yval)
plt.subplot(1, 2, 2)
plt.plot(losses)
plt.xlabel('iterations')
plt.ylabel('loss')
plt.title('Loss with Adam optimizer')
#Adam usually requires a few more steps than Momentum -- but the advantage of Adam
#is that Momentum sometimes blows up, while Adam is usually more stable
#(compare the loss traces! even though Momentum didn't blow up above, its
#loss is much more jagged -- a sign of potential blowup)
#so hm ... maybe because Adam is more stable we can jack up the
#initial learning rate and thus converge even faster than with Momentum
losses, coefvals = tf_builtin_optimization(loss=polynomial_loss,
optimizer_class=tf.train.AdamOptimizer,
target=coefficients,
training_data=data_dict,
num_iterations=150,
optimizer_kwargs={'learning_rate': 0.5})
x0 = coefvals[-1]
assign_op = tf.assign(coefficients, x0)
sess.run(assign_op)
plt.figure(figsize=(10, 3))
plt.subplot(1, 2, 1)
plt.plot(xval, sess.run(polynomial_model, feed_dict={x:xval}))
plt.plot(xval, yval)
plt.subplot(1, 2, 2)
plt.plot(losses)
plt.xlabel('iterations')
plt.ylabel('loss')
plt.title('Loss with Adam optimizer\nhigh initial learning rate')
#indeed we can!
```
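For reference, the update Adam performs can be written out by hand. A plain-NumPy sketch on a made-up linear-fit problem, using the standard Kingma & Ba bias-corrected moment estimates (b1/b2/eps are the usual defaults):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(100), rng.uniform(-1, 1, 100)])
y = X @ np.array([1.0, 2.0])

w = np.zeros(2)
m = np.zeros(2)
v = np.zeros(2)
lr, b1, b2, eps = 0.05, 0.9, 0.999, 1e-8
for t in range(1, 501):
    g = 2 * X.T @ (X @ w - y) / len(y)
    m = b1 * m + (1 - b1) * g                 # running mean of gradients
    v = b2 * v + (1 - b2) * g ** 2            # running mean of squared gradients
    w -= lr * (m / (1 - b1 ** t)) / (np.sqrt(v / (1 - b2 ** t)) + eps)
```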
### Newton's Method (Second Order)
```
def newtons_method(loss,
target,
initial_guess,
training_data,
num_iterations,
grad2clip=1.):
#create initialization operation
initial_op = tf.assign(target, initial_guess)
grad = tf.gradients(loss, target)[0]
#to actually compute the second order correction
#we split the one-variable and multi-variable cases up -- for ease of working
if len(target.shape) == 0: #one-variable case
#actually get the second derivative
grad2 = tf.gradients(grad, target)[0]
#now morally we want to compute:
# newval = target - grad / grad2
#BUT there is often numerical instability caused by dividing
#by grad2 if grad2 is small... so we have to clip grad2 by a clip value
clippedgrad2 = tf.maximum(grad2, grad2clip)
#and now we can do the newton's formula update
newval = tf.add(target, -tf.divide(grad, clippedgrad2))
else:
#in the multi-variable case, we first compute the hessian matrix
#thank gosh tensorflow has this built in finally!
hess = tf.hessians(loss, target)[0]
#now we take its inverse
hess_inv = tf.matrix_inverse(hess)
#now we get H^{-1} grad, i.e. multiply the matrix by the vector
hess_inv_grad = tf.tensordot(hess_inv, grad, 1)
#again we have to clip for numerical stability
hess_inv_grad = tf.clip_by_value(hess_inv_grad, -grad2clip, grad2clip)
#and get the new value for the parameters
newval = tf.add(target, -hess_inv_grad)
#the rest of the code is just as in the gradient descent case
optimizer_op = tf.assign(target, newval)
targetvals = []
losses = []
gradvals = []
sess.run(initial_op)
for i in range(num_iterations):
output = sess.run({'opt': optimizer_op,
'grad': grad,
'target': target,
'loss': loss},
feed_dict=training_data)
targetvals.append(output['target'])
losses.append(output['loss'])
gradvals.append(output['grad'])
return losses, targetvals, gradvals
xval = np.arange(0, 2, .01)
yval = np.arange(0, 2, .01)**2
data_dict = {x: xval, y:yval}
losses, powers, grads = newtons_method(loss=power_loss,
target=power,
initial_guess=0,
training_data=data_dict,
num_iterations=20,
grad2clip=1)
plt.plot(powers, label='estimated power')
plt.plot(losses, label='loss')
plt.legend(loc='upper right')
plt.title("Newton's Method on a Quadratic")
#whoa -- much faster than before
xval = np.arange(0, 2, .01)
yval = np.arange(0, 2, .01)**3
data_dict = {x: xval, y:yval}
losses, powers, grads = newtons_method(loss=power_loss,
target=power,
initial_guess=0,
training_data=data_dict,
num_iterations=20,
grad2clip=1)
plt.plot(powers, label='estimated power')
plt.plot(losses, label='loss')
plt.legend(loc='upper right')
plt.title("Newton's Method on a Cubic")
xval = np.arange(-2, 2, .02)
yval = np.sin(xval)
data_dict = {x: xval, y:yval}
losses, coefvals, grads = newtons_method(loss=polynomial_loss,
target=coefficients,
initial_guess=np.zeros(approximation_order),
training_data=data_dict,
num_iterations=2)
x0 = coefvals[-1]
assign_op = tf.assign(coefficients, x0)
sess.run(assign_op)
plt.figure(figsize=(10, 3))
plt.subplot(1, 2, 1)
plt.plot(xval, yval)
plt.plot(xval, sess.run(polynomial_model, feed_dict={x:xval}))
plt.subplot(1, 2, 2)
plt.plot(losses)
plt.xlabel('iterations')
plt.ylabel('loss')
#no joke -- the error goes to 0 after 1 update step
#let's try something a little more complicated
xval = np.arange(-2, 2, .02)
yval = np.cos(2 * xval) + np.sin(xval + 1)
data_dict = {x: xval, y:yval}
losses, coefvals, grads = newtons_method(loss=polynomial_loss,
target=coefficients,
initial_guess=np.zeros(approximation_order),
training_data=data_dict,
num_iterations=5)
x0 = coefvals[-1]
assign_op = tf.assign(coefficients, x0)
sess.run(assign_op)
plt.figure(figsize=(10, 3))
plt.subplot(1, 2, 1)
plt.plot(xval, yval)
plt.plot(xval, sess.run(polynomial_model, feed_dict={x:xval}))
plt.subplot(1, 2, 2)
plt.plot(losses)
plt.xlabel('iterations')
plt.ylabel('loss')
#really fast -- in fact Newton's method always converges this fast when
#the model is linear in its parameters, since the loss is then exactly quadratic
#just to put the above in context, let's compare to momentum
xval = np.arange(-2, 2, .02)
yval = np.cos(2 * xval) + np.sin(xval + 1)
data_dict = {x: xval, y:yval}
losses, coefvals = tf_builtin_optimization(loss=polynomial_loss,
optimizer_class=tf.train.MomentumOptimizer,
target=coefficients,
training_data=data_dict,
num_iterations=200,
optimizer_kwargs={'learning_rate': 0.01,
'momentum': 0.9},
)
x0 = coefvals[-1]
assign_op = tf.assign(coefficients, x0)
sess.run(assign_op)
plt.figure(figsize=(10, 3))
plt.subplot(1, 2, 1)
plt.plot(xval, yval)
plt.plot(xval, sess.run(polynomial_model, feed_dict={x:xval}))
plt.subplot(1, 2, 2)
plt.plot(losses)
plt.xlabel('iterations')
plt.ylabel('loss')
```
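The one-step convergence seen above is no accident: when the model is linear in its parameters, the loss is exactly quadratic, so a single Newton step lands on the least-squares solution. A plain-NumPy sketch on made-up data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.uniform(-1, 1, 50)])
y = X @ np.array([1.0, 2.0])                  # true parameters

w = np.zeros(2)
g = 2 * X.T @ (X @ w - y) / len(y)            # gradient of the MSE loss
H = 2 * X.T @ X / len(y)                      # Hessian: constant for this loss
w = w - np.linalg.solve(H, g)                 # a single Newton step
```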
### Using External Optimizers
```
#actually, let's use an *external* optimizer -- not do
#the optimization itself in tensorflow
from scipy.optimize import minimize
#you can see all the methods for optimization here:
# https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html#scipy.optimize.minimize
#Ok here's the model we want to learn
xval = np.arange(-2, 2, .02)
yval = np.cosh(2 * xval) + np.sin(xval + 1)
plt.plot(xval, yval)
plt.title("Target to Learn")
polynomial_loss
#we need to make a python function from our tensorflow model
#(actually we could simply write the model directly in numpy,
#but since we already have it in Tensorflow we might as well use it)
def func_loss(vals):
data_dict = {x: xval,
y: yval,
coefficients: vals}
lossval = sess.run(polynomial_loss, feed_dict=data_dict)
losses.append(lossval)
return lossval
#Ok, so let's use a method that doesn't care about the derivative
#specifically "Nelder-Mead" -- this is a simplex-based method
losses = []
result = minimize(func_loss,
x0=np.zeros(6),
method='Nelder-Mead')
x0 = result.x
assign_op = tf.assign(coefficients, x0)
sess.run(assign_op)
plt.figure(figsize=(10, 3))
plt.subplot(1, 2, 1)
plt.plot(xval, yval, label='True')
plt.plot(xval, sess.run(polynomial_model, feed_dict={x:xval}), label='Approx.')
plt.legend(loc='upper center')
plt.subplot(1, 2, 2)
plt.plot(losses)
plt.xlabel('iterations')
plt.ylabel('loss')
plt.title('Loss with Nelder-Mead')
#OK now let's try a method that *does* care about the derivative
#specifically, a method called L-BFGS -- this is basically
#an approximate version of the newton's method.
#It's called a "quasi-second-order" method because it uses only
#first derivatives to get an approximation to the second derivative
#to use it, we *do* need to calculate the derivative
#... and here's why tensorflow STILL matters even if we're using
#an external optimizer
polynomial_grad = tf.gradients(polynomial_loss, coefficients)[0]
#we need to create a function that returns loss and loss derivative
def func_loss_with_grad(vals):
data_dict = {x: xval,
y:yval,
coefficients: vals}
lossval, g = sess.run([polynomial_loss, polynomial_grad],
feed_dict=data_dict)
losses.append(lossval)
return lossval, g.astype(np.float64)
#Ok, so let's see what happens with L-BFGS
losses = []
result = minimize(func_loss_with_grad,
x0=np.zeros(6),
method='L-BFGS-B', #approximation of newton's method
jac=True #<-- meaning, we're telling minimizer
#to use the derivative info -- the so-called
#"jacobian"
)
x0 = result.x
assign_op = tf.assign(coefficients, x0)
sess.run(assign_op)
plt.figure(figsize=(10, 3))
plt.subplot(1, 2, 1)
plt.plot(xval, yval, label='True')
plt.plot(xval, sess.run(polynomial_model, feed_dict={x:xval}), label='Approx.')
plt.legend(loc='upper center')
plt.subplot(1, 2, 2)
plt.plot(losses)
plt.xlabel('iterations')
plt.ylabel('loss')
plt.title('Loss with L-BFGS')
#substantially better than the non-derivative-based method
#-- fewer iterations are needed, the loss curve is more stable, and the final
#results are better
```
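The same comparison — derivative-free Nelder-Mead versus gradient-assisted L-BFGS — can be reproduced with SciPy alone, independent of TensorFlow. A minimal sketch, with a made-up least-squares objective standing in for the polynomial loss above:

```python
import numpy as np
from scipy.optimize import minimize

# A 6-parameter least-squares objective standing in for the polynomial loss
rng = np.random.RandomState(0)
A = rng.randn(50, 6)
b = rng.randn(50)

def loss(v):
    r = A.dot(v) - b
    return 0.5 * np.sum(r ** 2)

def loss_and_grad(v):
    # returns (loss, gradient), like the func_loss_with_grad helper above
    r = A.dot(v) - b
    return 0.5 * np.sum(r ** 2), A.T.dot(r)

res_nm = minimize(loss, x0=np.zeros(6), method='Nelder-Mead')
res_bfgs = minimize(loss_and_grad, x0=np.zeros(6), method='L-BFGS-B', jac=True)

# the gradient-based method reaches the optimum in far fewer evaluations
print(res_nm.nfev, res_bfgs.nfev)
```

`jac=True` tells the minimizer that the objective returns the gradient alongside the loss value, exactly the role `tf.gradients` plays in the TensorFlow-backed version above.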
## Deploying it in a real case
```
#ok let's load the neural data
DATA_PATH = "/home/chengxuz/Class/psych253_2018/data/ventral_neural_data.hdf5"
Ventral_Dataset = h5py.File(DATA_PATH, 'r')  #open read-only (recent h5py requires an explicit mode)
categories = Ventral_Dataset['image_meta']['category'][:] #array of category labels for all images --> shape == (5760,)
unique_categories = np.unique(categories) #array of unique category labels --> shape == (8,)
var_levels = Ventral_Dataset['image_meta']['variation_level'][:]
Neural_Data = Ventral_Dataset['time_averaged_trial_averaged'][:]
num_neurons = Neural_Data.shape[1]
num_categories = 8
categories[:10]
#we'll construct 8 one-vs-all vectors with {-1, 1} values
category_matrix = np.array([2 * (categories == c) - 1 for
c in unique_categories]).T.astype(int)
category_matrix[0]
sess = tf.Session()
#first, get initializers for W and b
initial_weights = tf.random_uniform(shape=(num_neurons, num_categories),
minval=-1,
maxval=1,
seed=0)
initial_bias = tf.zeros(shape=(num_categories,))
#now construct the TF variables
weights = tf.get_variable('weights',
dtype=tf.float32,
initializer=initial_weights)
bias = tf.get_variable('bias',
dtype=tf.float32,
initializer=initial_bias)#initialize variables
init_op = tf.global_variables_initializer()
sess.run(init_op)
#input slots for data and labels
#note the batch size is "None" -- effectively meaning batches of
#varying sizes can be used
neural_data = tf.placeholder(shape=(None, num_neurons),
dtype=tf.float32)
category_labels = tf.placeholder(shape=(None, num_categories),
dtype=tf.float32)
#now construct margins
margins = tf.matmul(neural_data, weights) + bias
#the hinge loss
hinge_loss = tf.maximum(0., 1. - category_labels * margins)
#and take the mean of the loss over the batch
hinge_loss_mean = tf.reduce_mean(hinge_loss)
#simple interface for using tensorflow built-in optimizer
#as seen in the previous class
def tf_optimize(loss,
optimizer_class,
target,
training_data,
num_iterations,
optimizer_args=(),
optimizer_kwargs=None,
sess=None,
initial_guesses=None):
if sess is None:
sess = tf.Session()
if optimizer_kwargs is None:
optimizer_kwargs = {}
#construct the optimizer
optimizer = optimizer_class(*optimizer_args,
**optimizer_kwargs)
optimizer_op = optimizer.minimize(loss)
#initialize variables
init_op = tf.global_variables_initializer()
sess.run(init_op)
if initial_guesses is not None:
for k, v in initial_guesses.items():
op = tf.assign(k, v)
sess.run(op)
targetvals = []
losses = []
times = []
for i in range(num_iterations):
t0 = time.time()
output = sess.run({'opt': optimizer_op,
'target': target,
'loss': loss},
feed_dict=training_data)
times.append(time.time() - t0)
targetvals.append(output['target'])
losses.append(output['loss'])
print('Average time per iteration --> %.5f' % np.mean(times))
return np.array(losses), targetvals
#let's just focus on one batch of data for the moment
batch_size = 640
data_batch = Neural_Data[0: batch_size]
label_batch = category_matrix[0: batch_size]
data_dict = {neural_data: data_batch,
category_labels: label_batch}
#let's look at the weights and biases before training
weight_vals, bias_vals = sess.run([weights, bias])
#right, it's num_neurons x num_categories
print('weights shape:', weight_vals.shape)
#let's look at some of the weights
plt.hist(weight_vals[:, 0])
plt.xlabel('Weight Value')
plt.ylabel('Neuron Count')
plt.title('Weights for Animals vs All')
print('biases:', bias_vals)
#ok so we'll use the Momentum optimizer to find weights and bias
#for this classification problem
losses, targs = tf_optimize(loss=hinge_loss_mean,
optimizer_class=tf.train.MomentumOptimizer,
target=[],
training_data=data_dict,
num_iterations=100,
optimizer_kwargs={'learning_rate': 1, 'momentum': 0.9},
sess=sess)
#losses decrease almost to 0
plt.plot(losses)
weight_vals, bias_vals = sess.run([weights, bias])
#right, it's num_neurons x num_categories
weight_vals.shape
#let's look at some of the weights
plt.hist(weight_vals[:, 2])
plt.xlabel('Weight Value')
plt.ylabel('Neuron Count')
plt.title('Weights for Faces vs All')
print('biases:', bias_vals)
#ok so things have been learned!
#how good are the results on training?
#actually get the predictions by first getting the margins
margin_vals = sess.run(margins, feed_dict = data_dict)
#now taking the argmax across categories
pred_inds = margin_vals.argmax(axis=1)
#compare prediction to actual
correct = pred_inds == label_batch.argmax(axis=1)
pct = correct.sum() / float(len(correct)) * 100
print('Training accuracy: %.2f%%' % pct)
#Right, very accurate on training
```
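The one-vs-all label matrix and the hinge-loss/accuracy computations above can be sanity-checked in pure NumPy. A sketch on made-up toy labels:

```python
import numpy as np

categories = np.array(['animal', 'face', 'animal', 'car'])
unique_categories = np.unique(categories)

# {-1, +1} one-vs-all label matrix, exactly as in the notebook
category_matrix = np.array([2 * (categories == c) - 1
                            for c in unique_categories]).T.astype(int)

rng = np.random.RandomState(0)
data = rng.randn(4, 5)                         # 4 "stimuli", 5 "neurons"
weights = rng.randn(5, len(unique_categories))
bias = np.zeros(len(unique_categories))

margins = data.dot(weights) + bias
hinge_mean = np.maximum(0., 1. - category_matrix * margins).mean()

# prediction: argmax over per-category margins, compared against true labels
pred_inds = margins.argmax(axis=1)
accuracy = (pred_inds == category_matrix.argmax(axis=1)).mean() * 100
```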
### Stochastic Gradient Descent
```
class BatchReader(object):
def __init__(self, data_dict, batch_size, shuffle=True, shuffle_seed=0, pad=True):
self.data_dict = data_dict
self.batch_size = batch_size
        _k = next(iter(data_dict))  #keys() is a view in Python 3 and cannot be indexed
self.data_length = data_dict[_k].shape[0]
self.total_batches = (self.data_length - 1) // self.batch_size + 1
self.curr_batch_num = 0
self.curr_epoch = 1
self.pad = pad
self.shuffle = shuffle
self.shuffle_seed = shuffle_seed
if self.shuffle:
self.rng = np.random.RandomState(seed=self.shuffle_seed)
self.perm = self.rng.permutation(self.data_length)
def __iter__(self):
return self
    def __next__(self):
        return self.get_next_batch()
    next = __next__  #keep Python 2-style .next() calls working
def get_next_batch(self):
data = self.get_batch(self.curr_batch_num)
self.increment_batch_num()
return data
def increment_batch_num(self):
m = self.total_batches
if (self.curr_batch_num >= m - 1):
self.curr_epoch += 1
if self.shuffle:
self.perm = self.rng.permutation(self.data_length)
self.curr_batch_num = (self.curr_batch_num + 1) % m
def get_batch(self, cbn):
data = {}
startv = cbn * self.batch_size
endv = (cbn + 1) * self.batch_size
if self.pad and endv > self.data_length:
startv = self.data_length - self.batch_size
endv = startv + self.batch_size
for k in self.data_dict:
if self.shuffle:
data[k] = self.data_dict[k][self.perm[startv: endv]]
else:
data[k] = self.data_dict[k][startv: endv]
return data
class TF_Optimizer(object):
"""Make the tensorflow SGD-style optimizer into a scikit-learn compatible class
Uses BatchReader for stochastically getting data batches.
model_func: function which returns tensorflow nodes for
predictions, data_input
loss_func: function which takes model_func prediction output node and
returns tensorflow nodes for
loss, label_input
    optimizer_class: which tensorflow optimizer class to use when learning the model parameters
batch_size: which batch size to use in training
train_iterations: how many iterations to run the optimizer for
        --> this should really be picked automatically, e.g. by stopping
            when the training error plateaus
model_kwargs: dictionary of additional arguments for the model_func
loss_kwargs: dictionary of additional arguments for the loss_func
optimizer_args, optimizer_kwargs: additional position and keyword args for the
optimizer class
sess: tf session to use (will be constructed if not passed)
train_shuffle: whether to shuffle example order during training
"""
def __init__(self,
model_func,
loss_func,
optimizer_class,
batch_size,
train_iterations,
model_kwargs=None,
loss_kwargs=None,
optimizer_args=(),
optimizer_kwargs=None,
sess=None,
train_shuffle=False
):
self.model_func = model_func
if model_kwargs is None:
model_kwargs = {}
self.model_kwargs = model_kwargs
self.loss_func = loss_func
if loss_kwargs is None:
loss_kwargs = {}
self.loss_kwargs = loss_kwargs
self.train_shuffle=train_shuffle
self.train_iterations = train_iterations
self.batch_size = batch_size
if sess is None:
sess = tf.Session()
self.sess = sess
if optimizer_kwargs is None:
optimizer_kwargs = {}
self.optimizer = optimizer_class(*optimizer_args,
**optimizer_kwargs)
def fit(self, train_data, train_labels):
self.model, self.data_holder = self.model_func(**self.model_kwargs)
self.loss, self.labels_holder = self.loss_func(self.model, **self.loss_kwargs)
self.optimizer_op = self.optimizer.minimize(self.loss)
data_dict = {self.data_holder: train_data,
self.labels_holder: train_labels}
train_data = BatchReader(data_dict=data_dict,
batch_size=self.batch_size,
shuffle=self.train_shuffle,
shuffle_seed=0,
pad=True)
init_op = tf.global_variables_initializer()
        self.sess.run(init_op)  #use the session stored on the instance
self.losses = []
for i in range(self.train_iterations):
data_batch = train_data.next()
output = self.sess.run({'opt': self.optimizer_op,
'loss': self.loss},
feed_dict=data_batch)
self.losses.append(output['loss'])
def predict(self, test_data):
data_dict = {self.data_holder: test_data}
test_data = BatchReader(data_dict=data_dict,
batch_size=self.batch_size,
shuffle=False,
pad=False)
preds = []
for i in range(test_data.total_batches):
data_batch = test_data.get_batch(i)
pred_batch = self.sess.run(self.model, feed_dict=data_batch)
preds.append(pred_batch)
return np.row_stack(preds)
def binarize_labels(labels):
"""takes discrete-valued labels and binarizes them into {-1, 1}-value format
returns:
binarized_labels: of shape (num_stimuli, num_categories)
unique_labels: actual labels indicating order of first axis in binarized_labels
"""
unique_labels = np.unique(labels)
num_classes = len(unique_labels)
binarized_labels = np.array([2 * (labels == c) - 1 for
c in unique_labels]).T.astype(int)
return binarized_labels, unique_labels
class TF_OVA_Classifier(TF_Optimizer):
"""
    Subclass of TF_Optimizer for use with categorizers. Basically, this class
handles data binarization (in the fit method) and un-binarization
(in the predict method), so that we can use the class with the function:
train_and_test_scikit_classifier
that we've previously defined.
The predict method here implements a one-vs-all approach for multi-class problems.
"""
def fit(self, train_data, train_labels):
#binarize labels
num_features = train_data.shape[1]
binarized_labels, classes_ = binarize_labels(train_labels)
#set .classes_ attribute, since this is needed by train_and_test_scikit_classifier
self.classes_ = classes_
num_classes = len(classes_)
#pass number of features and classes to the model construction
#function that will be called when the fit method is called
self.model_kwargs['num_features'] = num_features
self.model_kwargs['num_classes'] = num_classes
#now actually call the optimizer fit method
TF_Optimizer.fit(self, train_data=train_data,
train_labels=binarized_labels)
def decision_function(self, test_data):
#returns what are effectively the margins (for a linear classifier)
return TF_Optimizer.predict(self, test_data)
def predict(self, test_data):
#use the one-vs-all rule for multiclass prediction.
preds = self.decision_function(test_data)
preds = np.argmax(preds, axis=1)
classes_ = self.classes_
return classes_[preds]
def linear_classifier(num_features, num_classes):
"""generic form of a linear classifier, e.g. the model
margins = np.dot(data, weight) + bias
"""
initial_weights = tf.zeros(shape=(num_features,
num_classes),
dtype=tf.float32)
weights = tf.Variable(initial_weights,
dtype=tf.float32,
name='weights')
initial_bias = tf.zeros(shape=(num_classes,))
bias = tf.Variable(initial_bias,
dtype=tf.float32,
name='bias')
data = tf.placeholder(shape=(None, num_features), dtype=tf.float32, name='data')
margins = tf.add(tf.matmul(data, weights), bias, name='margins')
return margins, data
def hinge_loss(margins):
"""standard SVM hinge loss
"""
num_classes = margins.shape.as_list()[1]
category_labels = tf.placeholder(shape=(None, num_classes),
dtype=tf.float32,
name='labels')
h = tf.maximum(0., 1. - category_labels * margins, name='hinge_loss')
hinge_loss_mean = tf.reduce_mean(h, name='hinge_loss_mean')
return hinge_loss_mean, category_labels
#construct the classifier instance ... just like with scikit-learn
cls = TF_OVA_Classifier(model_func=linear_classifier,
loss_func=hinge_loss,
batch_size=2500,
train_iterations=1000,
train_shuffle=True,
optimizer_class=tf.train.MomentumOptimizer,
optimizer_kwargs = {'learning_rate':10.,
'momentum': 0.99
},
sess=sess
)
#ok let's try out our classifier on medium-variation data
data_subset = Neural_Data[var_levels=='V3']
categories_subset = categories[var_levels=='V3']
cls.fit(data_subset, categories_subset)
plt.plot(cls.losses)
plt.xlabel('number of iterations')
plt.ylabel('Hinge loss')
#ok how good was the actual training accuracy?
preds = cls.predict(data_subset)
acc = (preds == categories_subset).sum()
pct = acc / float(len(preds)) * 100
print('Training accuracy was %.2f%%' % pct)
```
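The trickiest part of `BatchReader` is the padding rule: the final, short batch is slid back so that every batch has exactly `batch_size` items. That slicing rule can be checked in isolation — a minimal sketch:

```python
def batch_slices(data_length, batch_size, pad=True):
    """Return (start, end) index pairs covering the data in fixed-size batches."""
    total_batches = (data_length - 1) // batch_size + 1
    slices = []
    for cbn in range(total_batches):
        startv = cbn * batch_size
        endv = (cbn + 1) * batch_size
        if pad and endv > data_length:
            # slide the final batch back so it still holds batch_size items
            startv = data_length - batch_size
            endv = startv + batch_size
        slices.append((startv, endv))
    return slices

slices = batch_slices(data_length=10, batch_size=4)
# -> [(0, 4), (4, 8), (6, 10)]: the last batch overlaps the previous one
```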
#### Side note on getting relevant tensors
```
#here's the linear model constructed above:
lin_model = cls.model
print(lin_model)
#suppose we want to access the weights / bias used in this model?
#these can be accessed by the "op.inputs" attribute in TF
#first, we see that this is the stage of the calculation
#where the linear model (the margins) is put together by adding
#the result of the matrix multiplication ("MatMul_[somenumber]")
#to the bias
list(lin_model.op.inputs)
#so the bias is the second of these inputs (index 1)
bias_tensor = lin_model.op.inputs[1]
bias_tensor
#if we follow up the calculation graph by taking apart
#whatever was the inputs to the matmul stage, we see
#the data and the weights
matmul_tensor = lin_model.op.inputs[0]
list(matmul_tensor.op.inputs)
#so the weights tensor is the second of *these* inputs (index 1)
weights_tensor = matmul_tensor.op.inputs[1]
weights_tensor
#putting this together, we could have done:
weights_tensor = lin_model.op.inputs[0].op.inputs[1]
weights_tensor
```
#### Regularization
```
#we can define other loss functions -- such as L2 regularization
def hinge_loss_l2reg(margins, C, square=False):
#starts off the same as regular hinge loss
num_classes = margins.shape.as_list()[1]
category_labels = tf.placeholder(shape=(None, num_classes),
dtype=tf.float32,
name='labels')
h = tf.maximum(0., 1 - category_labels * margins)
#allows for squaring the hinge_loss optionally, as done in sklearn
if square:
h = h**2
hinge_loss = tf.reduce_mean(h)
#now let's get the weights from the margins,
#using the method just explored above
weights = margins.op.inputs[0].op.inputs[1]
#and get sum-square of the weights -- the 0.5 is for historical reasons
reg_loss = 0.5*tf.reduce_mean(weights**2)
#total up the loss from the two terms with constant C for weighting
total_loss = C * hinge_loss + reg_loss
return total_loss, category_labels
cls = TF_OVA_Classifier(model_func=linear_classifier,
loss_func=hinge_loss_l2reg,
loss_kwargs={'C':1},
batch_size=2500,
train_iterations=1000,
train_shuffle=True,
optimizer_class=tf.train.MomentumOptimizer,
optimizer_kwargs = {'learning_rate':10.,
'momentum': 0.99
},
sess=sess,
)
data_subset = Neural_Data[var_levels=='V3']
categories_subset = categories[var_levels=='V3']
cls.fit(data_subset, categories_subset)
plt.plot(cls.losses)
plt.xlabel('number of iterations')
plt.ylabel('Regularized Hinge loss')
preds = cls.predict(data_subset)
acc = (preds == categories_subset).sum()
pct = acc / float(len(preds)) * 100
print('Regularized training accuracy was %.2f%%' % pct)
#unsurprisingly, training accuracy goes down a bit with regularization
#compared to the unregularized run above
```
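The effect of the weighting constant `C` is easy to inspect numerically with a NumPy version of the same objective (random stand-in values; note the same 0.5 factor on the weight penalty):

```python
import numpy as np

def hinge_loss_l2reg_np(margins, labels, weights, C, square=False):
    h = np.maximum(0., 1. - labels * margins)
    if square:
        h = h ** 2
    # data term weighted by C, plus the sum-square weight penalty
    return C * h.mean() + 0.5 * np.mean(weights ** 2)

rng = np.random.RandomState(0)
weights = rng.randn(5, 3)
margins = rng.randn(8, 3)
labels = np.sign(rng.randn(8, 3))

# larger C weights the data term more heavily relative to the penalty
small_C = hinge_loss_l2reg_np(margins, labels, weights, C=0.01)
large_C = hinge_loss_l2reg_np(margins, labels, weights, C=10.)
```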
### Integrating with cross validation tools
```
import cross_validation as cv
meta_array = np.core.records.fromarrays(list(Ventral_Dataset['image_meta'].values()),
                                        names=list(Ventral_Dataset['image_meta'].keys()))
#the whole point of creating the TF_OVA_Classifier above
#was that we could simply stick it into the cross-validation regime
#that we'd previously set up for scikit-learn style classifiers
#so now let's test it out
#create some train/test splits
splits = cv.get_splits(meta_array,
lambda x: x['object_name'], #we're balancing splits by object
5,
5,
35,
train_filter=lambda x: (x['variation_level'] == 'V3'),
test_filter=lambda x: (x['variation_level'] == 'V3'),)
#here are the arguments to the classifier
model_args = {'model_func': linear_classifier,
'loss_func': hinge_loss_l2reg,
'loss_kwargs': {'C':5e-2, #<-- a good regularization value
},
'batch_size': 2500,
'train_iterations': 1000, #<-- about the right number of steps
'train_shuffle': True,
'optimizer_class':tf.train.MomentumOptimizer,
'optimizer_kwargs': {'learning_rate':.1,
'momentum': 0.9},
'sess': sess}
#so now it should work just like before
res = cv.train_and_test_scikit_classifier(features=Neural_Data,
labels=categories,
splits=splits,
model_class=TF_OVA_Classifier,
model_args=model_args)
#yep!
res[0]['test']['mean_accuracy']
# Logistic Regression with Softmax loss
def softmax_loss_l2reg(margins, C):
"""this shows how to write softmax logistic regression
using tensorflow
"""
num_classes = margins.shape.as_list()[1]
category_labels = tf.placeholder(shape=(None, num_classes),
dtype=tf.float32,
name='labels')
#get the softmax from the margins
probs = tf.nn.softmax(margins)
#extract just the prob value for the correct category
#(we have the (cats + 1)/2 thing because the category_labels
#come in as {-1, +1} values but we need {0,1} for this purpose)
probs_cat_vec = probs * ((category_labels + 1.) / 2.)
#sum up over categories (actually only one term, that for
#the correct category, contributes on each row)
    probs_cat = tf.reduce_sum(probs_cat_vec, axis=1)
#-log
neglogprob = -tf.log(probs_cat)
#average over the batch
log_loss = tf.reduce_mean(neglogprob)
    weights = margins.op.inputs[0].op.inputs[1]  #use the margins passed in, not the global cls
reg_loss = 0.5*tf.reduce_mean(tf.square(weights))
total_loss = C * log_loss + reg_loss
return total_loss, category_labels
model_args={'model_func': linear_classifier,
'model_kwargs': {},
'loss_func': softmax_loss_l2reg,
'loss_kwargs': {'C': 5e-3},
'batch_size': 2500,
'train_iterations': 1000,
'train_shuffle': True,
'optimizer_class':tf.train.MomentumOptimizer,
'optimizer_kwargs': {'learning_rate': 1.,
'momentum': 0.9
},
'sess': sess}
res = cv.train_and_test_scikit_classifier(features=Neural_Data,
labels=categories,
splits=splits,
model_class=TF_OVA_Classifier,
model_args=model_args)
res[0]['test']['mean_accuracy']
#ok works reasonably well
```
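The softmax loss can be checked against a plain NumPy reference. A sketch that masks the probabilities with the {-1, +1} labels and sums over categories, so only the correct category's term survives per row:

```python
import numpy as np

def softmax_nll(margins, binary_labels):
    """Mean negative log-likelihood of the correct categories under a softmax."""
    z = margins - margins.max(axis=1, keepdims=True)   # stabilise the exponentials
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    mask = (binary_labels + 1.) / 2.                   # {-1, +1} -> {0, 1}
    probs_cat = (probs * mask).sum(axis=1)             # prob of the correct class
    return -np.log(probs_cat).mean()

labels = np.array([[1, -1, -1], [-1, 1, -1]])
good_margins = np.array([[5., 0., 0.], [0., 5., 0.]])  # confident and correct
bad_margins = -good_margins                            # confident and wrong
```

With all-zero margins the softmax is uniform, so the loss should equal log(num_classes) — a handy sanity check for any implementation.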
## Problem Statement
An experimental drug was tested on 2,100 individuals in a clinical trial. The ages of the participants ranged from thirteen to a hundred. Half of the participants were under 65 years old; the other half were 65 or older.
Ninety-five percent of the patients aged 65 or older experienced side effects, while ninety-five percent of the patients under 65 experienced none.
You have to build a program that takes the age of a participant as input and predicts whether that patient suffered a side effect.
Steps:
• Generate a random dataset that adheres to these statements
• Divide the dataset into Training (90%) and Validation (10%) set
• Build a Simple Sequential Model
• Train and Validate the Model on the dataset
• Randomly choose 20% data from dataset as Test set
• Plot predictions made by the Model on Test set
## Generating Dataset
```
import numpy as np
from random import randint
from sklearn.utils import shuffle
from sklearn.preprocessing import MinMaxScaler
train_labels = [] # one means side effect experienced, zero means no side effect experienced
train_samples = []
for i in range(50):
# The 5% of younger individuals who did experience side effects
random_younger = randint(13, 64)
train_samples.append(random_younger)
train_labels.append(1)
# The 5% of older individuals who did not experience side effects
random_older = randint(65, 100)
train_samples.append(random_older)
train_labels.append(0)
for i in range(1000):
# The 95% of younger individuals who did not experience side effects
random_younger = randint(13, 64)
train_samples.append(random_younger)
train_labels.append(0)
# The 95% of older individuals who did experience side effects
random_older = randint(65, 100)
train_samples.append(random_older)
train_labels.append(1)
train_labels = np.array(train_labels)
train_samples = np.array(train_samples)
train_labels, train_samples = shuffle(train_labels, train_samples) # randomly shuffles each individual array, removing any order imposed on the data set during the creation process
scaler = MinMaxScaler(feature_range = (0, 1)) # specifying scale (range: 0 to 1)
scaled_train_samples = scaler.fit_transform(train_samples.reshape(-1, 1)) # rescale ages from [13, 100] into [0, 1]; reshape because fit_transform does not accept 1-D data
```
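`MinMaxScaler` with `feature_range=(0, 1)` implements the linear map x' = (x - min) / (max - min), where min and max are remembered from the data it was fitted on. A sketch of the same transform by hand, assuming ages that span the full 13-100 range:

```python
import numpy as np

ages = np.array([13., 40., 65., 100.])

# "fit": remember the training minimum and maximum
lo, hi = ages.min(), ages.max()

# "transform": map linearly onto [0, 1]
scaled = (ages - lo) / (hi - lo)
# -> [0.  0.31...  0.59...  1.]
```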
## Building a Sequential Model
```
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Activation, Dense
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.metrics import categorical_crossentropy
model = Sequential([
Dense(units = 16, input_shape = (1,), activation = 'relu'),
Dense(units = 32, activation = 'relu'),
Dense(units = 2, activation = 'softmax')
])
model.summary()
```
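The network above is small enough to trace by hand. A NumPy sketch of the same forward pass, with random weights standing in for the trained ones; it also makes the `model.summary()` parameter counts explicit (1*16+16 = 32, 16*32+32 = 544, 32*2+2 = 66, total 642):

```python
import numpy as np

rng = np.random.RandomState(0)
W1, b1 = rng.randn(1, 16), np.zeros(16)    # Dense(16), input_shape=(1,)
W2, b2 = rng.randn(16, 32), np.zeros(32)   # Dense(32)
W3, b3 = rng.randn(32, 2), np.zeros(2)     # Dense(2), softmax output

def relu(x):
    return np.maximum(0., x)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    return np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)

def forward(x):
    h = relu(x.dot(W1) + b1)
    h = relu(h.dot(W2) + b2)
    return softmax(h.dot(W3) + b3)

probs = forward(np.array([[0.1], [0.9]]))  # two scaled ages
n_params = sum(w.size + v.size for w, v in [(W1, b1), (W2, b2), (W3, b3)])
```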
## Training the Model
```
model.compile(optimizer = Adam(learning_rate = 0.0001), loss = 'sparse_categorical_crossentropy', metrics = ['accuracy'])
model.fit(x = scaled_train_samples, y = train_labels, validation_split = 0.1, batch_size = 10, epochs = 30, shuffle = True, verbose = 2)
```
## Preprocessing Test Data
```
test_labels = []
test_samples = []
for i in range(10):
# The 5% of younger individuals who did experience side effects
random_younger = randint(13, 64)
test_samples.append(random_younger)
test_labels.append(1)
# The 5% of older individuals who did not experience side effects
random_older = randint(65, 100)
test_samples.append(random_older)
test_labels.append(0)
for i in range(200):
# The 95% of younger individuals who did not experience side effects
random_younger = randint(13, 64)
test_samples.append(random_younger)
test_labels.append(0)
# The 95% of older individuals who did experience side effects
random_older = randint(65, 100)
test_samples.append(random_older)
test_labels.append(1)
test_labels = np.array(test_labels)
test_samples = np.array(test_samples)
test_labels, test_samples = shuffle(test_labels, test_samples)
scaled_test_samples = scaler.transform(test_samples.reshape(-1, 1)) # reuse the scaler fitted on the training data; refitting on test data would leak test statistics
```
## Testing the Model using Predictions
```
predictions = model.predict(x = scaled_test_samples, batch_size = 10, verbose = 0)
rounded_predictions = np.argmax(predictions, axis = -1)
```
## Preparing Confusion Matrix
```
from sklearn.metrics import confusion_matrix
import itertools
import matplotlib.pyplot as plt
cm = confusion_matrix(y_true = test_labels, y_pred = rounded_predictions)
# This function has been taken from the website of scikit Learn. link: https://scikit-learn.org/0.18/auto_examples/model_selection/plot_confusion_matrix.html
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, cm[i, j],
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
```
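`confusion_matrix` is just a count table: entry (i, j) counts samples whose true label is i and whose predicted label is j. A hand-rolled sketch for cross-checking:

```python
import numpy as np

def confusion_matrix_np(y_true, y_pred, num_classes=2):
    """cm[i, j] counts samples with true label i predicted as label j."""
    cm = np.zeros((num_classes, num_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

y_true = np.array([0, 0, 1, 1, 1, 0])
y_pred = np.array([0, 1, 1, 1, 0, 0])
cm = confusion_matrix_np(y_true, y_pred)
# -> [[2, 1],
#     [1, 2]]
```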
## Plotting Predictions using Confusion Matrix
```
cm_plot_labels = ['no_side_effects', 'had_side_effects']
plot_confusion_matrix(cm = cm, classes = cm_plot_labels, title = 'Confusion Matrix')
```
# Riemannian Optimisation with Pymanopt for Inference in MoG models
The Mixture of Gaussians (MoG) model assumes that datapoints $\mathbf{x}_i\in\mathbb{R}^d$ follow a distribution described by the following probability density function:
$p(\mathbf{x}) = \sum_{m=1}^M \pi_m p_\mathcal{N}(\mathbf{x};\mathbf{\mu}_m,\mathbf{\Sigma}_m)$ where $\pi_m$ is the probability that the data point belongs to the $m^\text{th}$ mixture component and $p_\mathcal{N}(\mathbf{x};\mathbf{\mu}_m,\mathbf{\Sigma}_m)$ is the probability density function of a multivariate Gaussian distribution with mean $\mathbf{\mu}_m \in \mathbb{R}^d$ and psd covariance matrix $\mathbf{\Sigma}_m \in \{\mathbf{M}\in\mathbb{R}^{d\times d}: \mathbf{M}\succeq 0\}$.
As an example consider the mixture of three Gaussians with means
$\mathbf{\mu}_1 = \begin{bmatrix} -4 \\ 1 \end{bmatrix}$,
$\mathbf{\mu}_2 = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$ and
$\mathbf{\mu}_3 = \begin{bmatrix} 2 \\ -1 \end{bmatrix}$, covariances
$\mathbf{\Sigma}_1 = \begin{bmatrix} 3 & 0 \\ 0 & 1 \end{bmatrix}$,
$\mathbf{\Sigma}_2 = \begin{bmatrix} 1 & 1 \\ 1 & 3 \end{bmatrix}$ and
$\mathbf{\Sigma}_3 = \begin{bmatrix} 0.5 & 0 \\ 0 & 0.5 \end{bmatrix}$
and mixture probability vector $\boldsymbol{\pi}=\left[0.1, 0.6, 0.3\right]^\top$.
Let's generate $N=1000$ samples of that MoG model and scatter plot the samples:
```
import autograd.numpy as np
np.set_printoptions(precision=2)
import matplotlib.pyplot as plt
%matplotlib inline
# Number of data points
N = 1000
# Dimension of each data point
D = 2
# Number of clusters
K = 3
pi = [0.1, 0.6, 0.3]
mu = [np.array([-4, 1]), np.array([0, 0]), np.array([2, -1])]
Sigma = [np.array([[3, 0],[0, 1]]), np.array([[1, 1.], [1, 3]]), 0.5 * np.eye(2)]
components = np.random.choice(K, size=N, p=pi)
samples = np.zeros((N, D))
# For each component, generate all needed samples
for k in range(K):
# indices of current component in X
indices = k == components
# number of those occurrences
n_k = indices.sum()
if n_k > 0:
samples[indices, :] = np.random.multivariate_normal(mu[k], Sigma[k], n_k)
colors = ['r', 'g', 'b', 'c', 'm']
for k in range(K):
indices = k == components
plt.scatter(samples[indices, 0], samples[indices, 1], alpha=0.4, color=colors[k%K])
plt.axis('equal')
plt.show()
```
Given a data sample, the de facto standard method to infer the parameters is the [expectation maximisation](https://en.wikipedia.org/wiki/Expectation-maximization_algorithm) (EM) algorithm, which maximises the log-likelihood of the data in alternating so-called E and M steps.
In [arXiv:1506.07677](http://arxiv.org/pdf/1506.07677v1.pdf) Hosseini and Sra propose Riemannian optimisation as a powerful counterpart to EM. Importantly, they introduce a reparameterisation that leaves local optima of the log-likelihood unchanged while resulting in a geodesically convex optimisation problem over a product manifold $\prod_{m=1}^M\mathcal{PD}^{(d+1)\times(d+1)}$ of manifolds of $(d+1)\times(d+1)$ symmetric positive definite matrices.
The proposed method is on par with EM and shows less variability in running times.
The reparameterised optimisation problem for augmented data points $\mathbf{y}_i=[\mathbf{x}_i^\top, 1]^\top$ can be stated as follows:
$$\min_{(\mathbf{S}_1, ..., \mathbf{S}_m, \boldsymbol{\nu}) \in \mathcal{D}}
-\sum_{n=1}^N\log\left(
\sum_{m=1}^M \frac{\exp(\nu_m)}{\sum_{k=1}^M\exp(\nu_k)}
q_\mathcal{N}(\mathbf{y}_n;\mathbf{S}_m)
\right)$$
where
* $\mathcal{D} := \left(\prod_{m=1}^M \mathcal{PD}^{(d+1)\times(d+1)}\right)\times\mathbb{R}^{M-1}$ is the search space
* $\mathcal{PD}^{(d+1)\times(d+1)}$ is the manifold of symmetric positive definite
$(d+1)\times(d+1)$ matrices
* $\mathcal{\nu}_m = \log\left(\frac{\alpha_m}{\alpha_M}\right), \ m=1, ..., M-1$ and $\nu_M=0$
* $q_\mathcal{N}(\mathbf{y}_n;\mathbf{S}_m) =
2\pi\exp\left(\frac{1}{2}\right)
|\operatorname{det}(\mathbf{S}_m)|^{-\frac{1}{2}}(2\pi)^{-\frac{d+1}{2}}
\exp\left(-\frac{1}{2}\mathbf{y}_i^\top\mathbf{S}_m^{-1}\mathbf{y}_i\right)$
**Optimisation problems like this can easily be solved using Pymanopt – even without the need to differentiate the cost function manually!**
So let's infer the parameters of our toy example by Riemannian optimisation using Pymanopt:
```
import sys
sys.path.insert(0,"../..")
import autograd.numpy as np
from autograd.scipy.special import logsumexp
import pymanopt
from pymanopt.manifolds import Product, Euclidean, SymmetricPositiveDefinite
from pymanopt import Problem
from pymanopt.solvers import SteepestDescent
# (1) Instantiate the manifold
manifold = Product([SymmetricPositiveDefinite(D+1, k=K), Euclidean(K-1)])
# (2) Define cost function
# The parameters must be contained in a list theta.
@pymanopt.function.Autograd
def cost(S, v):
# Unpack parameters
nu = np.append(v, 0)
logdetS = np.expand_dims(np.linalg.slogdet(S)[1], 1)
y = np.concatenate([samples.T, np.ones((1, N))], axis=0)
# Calculate log_q
y = np.expand_dims(y, 0)
# 'Probability' of y belonging to each cluster
log_q = -0.5 * (np.sum(y * np.linalg.solve(S, y), axis=1) + logdetS)
alpha = np.exp(nu)
alpha = alpha / np.sum(alpha)
alpha = np.expand_dims(alpha, 1)
loglikvec = logsumexp(np.log(alpha) + log_q, axis=0)
return -np.sum(loglikvec)
problem = Problem(manifold=manifold, cost=cost, verbosity=1)
# (3) Instantiate a Pymanopt solver
solver = SteepestDescent()
# let Pymanopt do the rest
Xopt = solver.solve(problem)
```
Once Pymanopt has finished the optimisation we can obtain the inferred parameters as follows:
```
mu1hat = Xopt[0][0][0:2,2:3]
Sigma1hat = Xopt[0][0][:2, :2] - mu1hat.dot(mu1hat.T)
mu2hat = Xopt[0][1][0:2,2:3]
Sigma2hat = Xopt[0][1][:2, :2] - mu2hat.dot(mu2hat.T)
mu3hat = Xopt[0][2][0:2,2:3]
Sigma3hat = Xopt[0][2][:2, :2] - mu3hat.dot(mu3hat.T)
pihat = np.exp(np.concatenate([Xopt[1], [0]], axis=0))
pihat = pihat / np.sum(pihat)
```
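The extraction above inverts the reparameterisation: for each component the optimiser works with an augmented matrix of the form $\mathbf{S} = \begin{bmatrix}\mathbf{\Sigma} + \mathbf{\mu}\mathbf{\mu}^\top & \mathbf{\mu}\\ \mathbf{\mu}^\top & 1\end{bmatrix}$, so the mean sits in the last column and $\mathbf{\Sigma} = \mathbf{S}_{1:d,1:d} - \mathbf{\mu}\mathbf{\mu}^\top$. A NumPy check of this round trip, using the ground-truth values of the third component:

```python
import numpy as np

mu = np.array([[2.], [-1.]])              # column vector, d = 2
Sigma = np.array([[0.5, 0.], [0., 0.5]])

# build the augmented (d+1) x (d+1) matrix the optimiser parameterises
S = np.block([[Sigma + mu.dot(mu.T), mu],
              [mu.T, np.ones((1, 1))]])

# recover the original parameters, exactly as in the cell above
mu_hat = S[0:2, 2:3]
Sigma_hat = S[:2, :2] - mu_hat.dot(mu_hat.T)
```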
And convince ourselves that the inferred parameters are close to the ground truth parameters.
The ground truth parameters $\mathbf{\mu}_1, \mathbf{\Sigma}_1, \mathbf{\mu}_2, \mathbf{\Sigma}_2, \mathbf{\mu}_3, \mathbf{\Sigma}_3, \pi_1, \pi_2, \pi_3$:
```
print(mu[0])
print(Sigma[0])
print(mu[1])
print(Sigma[1])
print(mu[2])
print(Sigma[2])
print(pi[0])
print(pi[1])
print(pi[2])
```
And the inferred parameters $\hat{\mathbf{\mu}}_1, \hat{\mathbf{\Sigma}}_1, \hat{\mathbf{\mu}}_2, \hat{\mathbf{\Sigma}}_2, \hat{\mathbf{\mu}}_3, \hat{\mathbf{\Sigma}}_3, \hat{\pi}_1, \hat{\pi}_2, \hat{\pi}_3$:
```
print(mu1hat)
print(Sigma1hat)
print(mu2hat)
print(Sigma2hat)
print(mu3hat)
print(Sigma3hat)
print(pihat[0])
print(pihat[1])
print(pihat[2])
```
Et voilà – this was a brief demonstration of how to do inference for MoG models by performing Manifold optimisation using Pymanopt.
## When Things Go Astray
A well-known problem when fitting parameters of a MoG model is that one Gaussian may collapse onto a single data point, resulting in singular covariance matrices (cf. e.g. p. 434 in Bishop, C. M. "Pattern Recognition and Machine Learning." 2006). This problem can be avoided by the following heuristic: if a component's covariance matrix is close to singular, we reset its mean and covariance matrix. Using Pymanopt this can be accomplished with an appropriate line-search rule (based on [LineSearchBackTracking](https://github.com/pymanopt/pymanopt/blob/master/pymanopt/solvers/linesearch.py)) -- here we demonstrate this approach:
```
class LineSearchMoG:
"""
Back-tracking line-search that checks for close to singular matrices.
"""
def __init__(self, contraction_factor=.5, optimism=2,
suff_decr=1e-4, maxiter=25, initial_stepsize=1):
self.contraction_factor = contraction_factor
self.optimism = optimism
self.suff_decr = suff_decr
self.maxiter = maxiter
self.initial_stepsize = initial_stepsize
self._oldf0 = None
def search(self, objective, manifold, x, d, f0, df0):
"""
Function to perform backtracking line-search.
Arguments:
- objective
objective function to optimise
- manifold
manifold to optimise over
- x
starting point on the manifold
- d
tangent vector at x (descent direction)
        - f0
            cost of the objective at x
        - df0
            directional derivative at x along d
Returns:
- stepsize
norm of the vector retracted to reach newx from x
- newx
next iterate suggested by the line-search
"""
# Compute the norm of the search direction
norm_d = manifold.norm(x, d)
if self._oldf0 is not None:
# Pick initial step size based on where we were last time.
alpha = 2 * (f0 - self._oldf0) / df0
# Look a little further
alpha *= self.optimism
else:
alpha = self.initial_stepsize / norm_d
alpha = float(alpha)
# Make the chosen step and compute the cost there.
newx, newf, reset = self._newxnewf(x, alpha * d, objective, manifold)
step_count = 1
# Backtrack while the Armijo criterion is not satisfied
while (newf > f0 + self.suff_decr * alpha * df0 and
step_count <= self.maxiter and
not reset):
# Reduce the step size
alpha = self.contraction_factor * alpha
# and look closer down the line
newx, newf, reset = self._newxnewf(x, alpha * d, objective, manifold)
step_count = step_count + 1
# If we got here without obtaining a decrease, we reject the step.
if newf > f0 and not reset:
alpha = 0
newx = x
stepsize = alpha * norm_d
self._oldf0 = f0
return stepsize, newx
def _newxnewf(self, x, d, objective, manifold):
newx = manifold.retr(x, d)
try:
newf = objective(newx)
except np.linalg.LinAlgError:
replace = np.asarray([np.linalg.matrix_rank(newx[0][k, :, :]) != newx[0][0, :, :].shape[0]
for k in range(newx[0].shape[0])])
x[0][replace, :, :] = manifold.rand()[0][replace, :, :]
return x, objective(x), True
return newx, newf, False
```
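Stripped of the manifold-specific parts, the core of this rule is a standard Armijo back-tracking search. The following standalone sketch (plain NumPy in Euclidean space; the function names are illustrative, not part of the Pymanopt API) shows the same sufficient-decrease loop on a simple quadratic cost:

```python
import numpy as np

def backtracking_search(cost, x, d, grad, contraction_factor=0.5,
                        suff_decr=1e-4, maxiter=25, initial_stepsize=1.0):
    """Armijo back-tracking line search in plain Euclidean space."""
    f0 = cost(x)
    df0 = float(grad(x) @ d)              # directional derivative along d
    alpha = initial_stepsize / np.linalg.norm(d)
    steps = 0
    # Shrink the step until the sufficient-decrease (Armijo) condition holds.
    while cost(x + alpha * d) > f0 + suff_decr * alpha * df0 and steps < maxiter:
        alpha *= contraction_factor
        steps += 1
    return x + alpha * d

cost = lambda x: float(x @ x)             # f(x) = ||x||^2
grad = lambda x: 2 * x
x0 = np.array([3.0, 4.0])
x1 = backtracking_search(cost, x0, -grad(x0), grad)
print(cost(x1) < cost(x0))                # True: the accepted step decreases the cost
```

In `LineSearchMoG` above, the retraction `manifold.retr` plays the role of `x + alpha * d`, and the singularity check in `_newxnewf` is layered on top of this loop.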
# Style Transfer
If you are a photography enthusiast, you may be familiar with filters. They can change the color styles of photos so that landscape photos become sharper or portraits have whitened skin. However, a filter usually only changes one aspect of a photo. Applying an ideal style to a photo may require trying many different filter combinations, a process as complex as tuning model hyperparameters.
In this section, we will describe how to use convolutional neural networks to automatically apply the style of one image to another image, a task known as *style transfer* :cite:`Gatys.Ecker.Bethge.2016`.
Here we need two input images: one is the *content image* and the other is the *style image*.
We will use a neural network to modify the content image to make it close to the style image in style.
For example, the content image in :numref:`fig_style_transfer` is a landscape photo taken by an author of this book in Mount Rainier National Park in the suburbs of Seattle, while the style image is an oil painting with the theme of autumn oak trees.
In the output synthesized image, the oil-painting brush strokes of the style image are applied to produce more vivid colors overall, while the main shapes of the objects in the content image are preserved.
![Given content and style images, style transfer outputs a synthesized image.](../img/style-transfer.svg)
:label:`fig_style_transfer`
## Method
:numref:`fig_style_transfer_model` illustrates the CNN-based style transfer method with a simplified example.
First, we initialize the synthesized image, for example as the content image.
This synthesized image is the only variable that needs to be updated during style transfer, i.e., the model parameters to be iterated on during training.
Then, we choose a pretrained convolutional neural network to extract image features; its model parameters do not need to be updated during training.
This deep network extracts image features level by level through multiple layers, and we can select the outputs of certain layers as content features or style features.
Taking :numref:`fig_style_transfer_model` as an example, the pretrained neural network here has three convolutional layers, where the second layer outputs the content features and the first and third layers output the style features.
![CNN-based style transfer process. Solid lines show the direction of forward propagation and dashed lines show backpropagation.](../img/neural-style.svg)
:label:`fig_style_transfer_model`
Next, we calculate the loss function of style transfer through forward propagation (in the direction of the solid arrows) and update the model parameters, i.e., the synthesized image, through backpropagation (in the direction of the dashed arrows).
The loss function commonly used in style transfer consists of three parts:
(i) the *content loss* makes the synthesized image and the content image close in content features;
(ii) the *style loss* makes the synthesized image and the style image close in style features;
(iii) the *total variation loss* helps to reduce the noise in the synthesized image.
Finally, when training ends, we output the model parameters of style transfer to obtain the final synthesized image.
In the following, we will explain the technical details of style transfer through code.
## [**Reading the Content and Style Images**]
First, we read the content and style images.
From their printed coordinate axes, we can tell that these images have different sizes.
```
%matplotlib inline
import torch
import torchvision
from torch import nn
from d2l import torch as d2l
d2l.set_figsize()
content_img = d2l.Image.open('../img/rainier.jpg')
d2l.plt.imshow(content_img);
style_img = d2l.Image.open('../img/autumn-oak.jpg')
d2l.plt.imshow(style_img);
```
## [**Preprocessing and Postprocessing**]
Below, we define the functions for preprocessing and postprocessing images.
The preprocessing function `preprocess` standardizes each of the three RGB channels of the input image and transforms the result into the input format accepted by the CNN.
The postprocessing function `postprocess` restores the pixel values of the output image to their values before standardization.
Since the image printing function requires that each pixel has a floating-point value from 0 to 1, we replace any value smaller than 0 or greater than 1 with 0 or 1, respectively.
```
rgb_mean = torch.tensor([0.485, 0.456, 0.406])
rgb_std = torch.tensor([0.229, 0.224, 0.225])
def preprocess(img, image_shape):
transforms = torchvision.transforms.Compose([
torchvision.transforms.Resize(image_shape),
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(mean=rgb_mean, std=rgb_std)])
return transforms(img).unsqueeze(0)
def postprocess(img):
img = img[0].to(rgb_std.device)
img = torch.clamp(img.permute(1, 2, 0) * rgb_std + rgb_mean, 0, 1)
return torchvision.transforms.ToPILImage()(img.permute(2, 0, 1))
```
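As a sanity check of this pair of functions, the per-channel arithmetic can be verified in isolation. The sketch below uses NumPy and a made-up pixel value; the clamp only matters for values that fall outside $[0, 1]$:

```python
import numpy as np

# The same channel statistics as used by `preprocess`/`postprocess` above.
rgb_mean = np.array([0.485, 0.456, 0.406])
rgb_std = np.array([0.229, 0.224, 0.225])

pixel = np.array([0.8, 0.5, 0.2])                 # a made-up RGB pixel in [0, 1]
normalized = (pixel - rgb_mean) / rgb_std         # what preprocessing does per channel
restored = np.clip(normalized * rgb_std + rgb_mean, 0, 1)  # what postprocessing undoes
print(np.allclose(restored, pixel))               # True: the round trip is lossless
```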
## [**Extracting Image Features**]
We use the VGG-19 model pretrained on the ImageNet dataset to extract image features :cite:`Gatys.Ecker.Bethge.2016`.
```
pretrained_net = torchvision.models.vgg19(pretrained=True)
```
To extract the content features and style features of an image, we can select the outputs of certain layers in the VGG network.
Generally speaking, the closer a layer is to the input layer, the easier it is to extract detail information of the image; conversely, the closer it is to the output, the easier it is to extract global information.
To avoid excessively retaining the details of the content image in the synthesized image, we choose a VGG layer that is closer to the output as the *content layer* to output the content features of the image.
We also select the outputs of different layers from VGG for matching local and global styles; these layers are also called *style layers*.
As introduced in :numref:`sec_vgg`, the VGG network uses five convolutional blocks.
In the experiment, we choose the last convolutional layer of the fourth convolutional block as the content layer and the first convolutional layer of each convolutional block as a style layer.
The indices of these layers can be obtained by printing the `pretrained_net` instance.
```
style_layers, content_layers = [0, 5, 10, 19, 28], [25]
```
When extracting features with VGG layers, we only need to use all the layers from the input layer to the content layer or style layer that is closest to the output layer.
Below we construct a new network `net`, which only retains the VGG layers that we need.
```
net = nn.Sequential(*[pretrained_net.features[i] for i in
range(max(content_layers + style_layers) + 1)])
```
Given the input `X`, if we simply call the forward computation `net(X)`, we can only get the output of the last layer.
Since we also need the outputs of intermediate layers, we perform layer-by-layer computation here and keep the content-layer and style-layer outputs.
```
def extract_features(X, content_layers, style_layers):
contents = []
styles = []
for i in range(len(net)):
X = net[i](X)
if i in style_layers:
styles.append(X)
if i in content_layers:
contents.append(X)
return contents, styles
```
The following defines two functions: the `get_contents` function extracts content features from the content image, and the `get_styles` function extracts style features from the style image.
Since there is no need to change the model parameters of the pretrained VGG during training, we can extract the content features and style features even before training starts.
Since the synthesized image is the set of model parameters to be iterated on for style transfer, we can only extract its content and style features during training by calling the `extract_features` function.
```
def get_contents(image_shape, device):
content_X = preprocess(content_img, image_shape).to(device)
contents_Y, _ = extract_features(content_X, content_layers, style_layers)
return content_X, contents_Y
def get_styles(image_shape, device):
style_X = preprocess(style_img, image_shape).to(device)
_, styles_Y = extract_features(style_X, content_layers, style_layers)
return style_X, styles_Y
```
## [**Defining the Loss Function**]
Next, we describe the loss function for style transfer.
It consists of three parts: content loss, style loss, and total variation loss.
### Content Loss
Similar to the loss function in linear regression, the content loss measures the difference in content features between the synthesized image and the content image via the squared error function.
The two inputs of the squared error function are both content-layer outputs computed by the `extract_features` function.
```
def content_loss(Y_hat, Y):
    # We detach the target content from the tree used to dynamically compute
    # the gradient: it is a stated value, not a variable.
return torch.square(Y_hat - Y.detach()).mean()
```
### Style Loss
The style loss, similar to the content loss, also uses the squared error function to measure the difference in style between the synthesized image and the style image.
To express the style output of any style layer, we first use the `extract_features` function to compute the style-layer output.
Suppose that the output has 1 example, $c$ channels, height $h$, and width $w$; we can transform this output into matrix $\mathbf{X}$ with $c$ rows and $hw$ columns.
This matrix can be thought of as the concatenation of $c$ vectors $\mathbf{x}_1, \ldots, \mathbf{x}_c$, each of which has length $hw$; vector $\mathbf{x}_i$ represents the style feature of channel $i$.
In the *Gram matrix* of these vectors, $\mathbf{X}\mathbf{X}^\top \in \mathbb{R}^{c \times c}$, the element $x_{ij}$ in row $i$ and column $j$ is the inner product of vectors $\mathbf{x}_i$ and $\mathbf{x}_j$. It represents the correlation of the style features of channels $i$ and $j$. We use such a Gram matrix to express the style output of any style layer.
Note that when the value of $hw$ is large, entries in the Gram matrix tend to have large values.
Besides, the height and width of the Gram matrix are both the number of channels $c$.
To allow the style loss not to be affected by these magnitudes, the `gram` function below divides the Gram matrix by the number of its elements, i.e., $chw$.
```
def gram(X):
num_channels, n = X.shape[1], X.numel() // X.shape[1]
X = X.reshape((num_channels, n))
return torch.matmul(X, X.T) / (num_channels * n)
```
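As a quick standalone check of this normalization (using NumPy rather than PyTorch; the shapes are made up for illustration), the Gram matrix of a flattened feature map is $c \times c$ and symmetric:

```python
import numpy as np

# A hypothetical style-layer output: 1 example, c=3 channels, h=4, w=5.
X = np.random.randn(1, 3, 4, 5)
c, hw = X.shape[1], X.shape[2] * X.shape[3]
M = X.reshape(c, hw)          # one row of hw style features per channel
G = (M @ M.T) / (c * hw)      # normalized Gram matrix, as in `gram` above
print(G.shape)                # (3, 3): one correlation per pair of channels
```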
Naturally, the two Gram-matrix inputs of the squared error function for the style loss are based on the style-layer outputs of the synthesized image and the style image, respectively. Here we assume that the Gram matrix `gram_Y` based on the style image has been precomputed.
```
def style_loss(Y_hat, gram_Y):
return torch.square(gram(Y_hat) - gram_Y.detach()).mean()
```
### Total Variation Loss
Sometimes, the learned synthesized image has a lot of high-frequency noise, i.e., particularly bright or dark individual pixels.
One common noise reduction method is *total variation denoising*:
assuming that $x_{i, j}$ denotes the pixel value at coordinate $(i, j)$, reducing the total variation loss
$$\sum_{i, j} \left|x_{i, j} - x_{i+1, j}\right| + \left|x_{i, j} - x_{i, j+1}\right|$$
makes the values of neighboring pixels as similar as possible.
```
def tv_loss(Y_hat):
return 0.5 * (torch.abs(Y_hat[:, :, 1:, :] - Y_hat[:, :, :-1, :]).mean() +
torch.abs(Y_hat[:, :, :, 1:] - Y_hat[:, :, :, :-1]).mean())
```
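The effect of this penalty can be checked on a toy example. The sketch below is plain NumPy; the `tv` helper is illustrative and uses sums rather than the means used in `tv_loss` above. A constant image has zero total variation, while a single bright pixel adds to it:

```python
import numpy as np

def tv(img):
    # Sum of absolute differences between vertically and horizontally adjacent pixels
    return np.abs(np.diff(img, axis=0)).sum() + np.abs(np.diff(img, axis=1)).sum()

flat = np.full((4, 4), 0.5)       # constant image: no variation at all
noisy = flat.copy()
noisy[1, 2] = 1.0                 # a single bright "noise" pixel
print(tv(flat), tv(noisy))        # 0.0 2.0
```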
### Loss Function
[**The loss function of style transfer is the weighted sum of the content loss, style loss, and total variation loss**].
By adjusting these weight hyperparameters, we can balance among content retention, style transfer, and noise reduction in the synthesized image.
```
content_weight, style_weight, tv_weight = 1, 1e3, 10
def compute_loss(X, contents_Y_hat, styles_Y_hat, contents_Y, styles_Y_gram):
    # Calculate the content, style, and total variation losses, respectively
contents_l = [content_loss(Y_hat, Y) * content_weight for Y_hat, Y in zip(
contents_Y_hat, contents_Y)]
styles_l = [style_loss(Y_hat, Y) * style_weight for Y_hat, Y in zip(
styles_Y_hat, styles_Y_gram)]
tv_l = tv_loss(X) * tv_weight
    # Add up all the losses
l = sum(10 * styles_l + contents_l + [tv_l])
return contents_l, styles_l, tv_l, l
```
## [**Initializing the Synthesized Image**]
In style transfer, the synthesized image is the only variable that needs to be updated during training. Thus, we can define a simple model, `SynthesizedImage`, and treat the synthesized image as its model parameters. In this model, forward computation just returns the model parameters.
```
class SynthesizedImage(nn.Module):
def __init__(self, img_shape, **kwargs):
super(SynthesizedImage, self).__init__(**kwargs)
self.weight = nn.Parameter(torch.rand(*img_shape))
def forward(self):
return self.weight
```
Next, we define the `get_inits` function. This function creates a synthesized-image model instance and initializes it to the image `X`. The Gram matrices of the style image at the various style layers, `styles_Y_gram`, are precomputed prior to training.
```
def get_inits(X, device, lr, styles_Y):
gen_img = SynthesizedImage(X.shape).to(device)
gen_img.weight.data.copy_(X.data)
trainer = torch.optim.Adam(gen_img.parameters(), lr=lr)
styles_Y_gram = [gram(Y) for Y in styles_Y]
return gen_img(), styles_Y_gram, trainer
```
## [**Training the Model**]
When training the model for style transfer, we continuously extract the content and style features of the synthesized image and then calculate the loss function. The training loop is defined below.
```
def train(X, contents_Y, styles_Y, device, lr, num_epochs, lr_decay_epoch):
X, styles_Y_gram, trainer = get_inits(X, device, lr, styles_Y)
scheduler = torch.optim.lr_scheduler.StepLR(trainer, lr_decay_epoch, 0.8)
animator = d2l.Animator(xlabel='epoch', ylabel='loss',
xlim=[10, num_epochs],
legend=['content', 'style', 'TV'],
ncols=2, figsize=(7, 2.5))
for epoch in range(num_epochs):
trainer.zero_grad()
contents_Y_hat, styles_Y_hat = extract_features(
X, content_layers, style_layers)
contents_l, styles_l, tv_l, l = compute_loss(
X, contents_Y_hat, styles_Y_hat, contents_Y, styles_Y_gram)
l.backward()
trainer.step()
scheduler.step()
if (epoch + 1) % 10 == 0:
animator.axes[1].imshow(postprocess(X))
animator.add(epoch + 1, [float(sum(contents_l)),
float(sum(styles_l)), float(tv_l)])
return X
```
Now we [**train the model**]:
first we rescale the height and width of the content and style images to 300 and 450 pixels, respectively, and use the content image to initialize the synthesized image.
```
device, image_shape = d2l.try_gpu(), (300, 450)
net = net.to(device)
content_X, contents_Y = get_contents(image_shape, device)
_, styles_Y = get_styles(image_shape, device)
output = train(content_X, contents_Y, styles_Y, device, 0.3, 500, 50)
```
As we can see, the synthesized image retains the scenery and objects of the content image while transferring the colors of the style image. For example, the synthesized image has blocks of color like those in the style image, some of which even have the subtle texture of brush strokes.
## Summary
* The loss function commonly used in style transfer consists of three parts: (i) the content loss makes the synthesized image and the content image close in content features; (ii) the style loss makes the synthesized image and the style image close in style features; (iii) the total variation loss helps to reduce the noise in the synthesized image.
* We can use a pretrained CNN to extract image features and, by minimizing the loss function, continuously update the synthesized image as the model parameters.
* We use Gram matrices to express the style outputs of the style layers.
## Exercises
1. How does the output change when you select different content and style layers?
1. Adjust the weight hyperparameters in the loss function. Does the output retain more content or have less noise?
1. Replace the content and style images in the experiment. Can you create more interesting synthesized images?
1. Can we apply style transfer to text? Hint: you may refer to the survey :cite:`Hu.Lee.Aggarwal.2020`.
[Discussions](https://discuss.d2l.ai/t/3300)
```
!pip install -U -q pyDrive
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
import os
os.listdir()
os.chdir("drive/My Drive/Colab Notebooks/Text-Generation/PyTorch/")
import argparse
import torch
import data
# parser = argparse.ArgumentParser(description='PyTorch Wikitext-2 Language Model')
# # Model parameters.
# parser.add_argument('--data', type=str, default='./data/wikitext-2',
# help='location of the data corpus')
# parser.add_argument('--checkpoint', type=str, default='./model.pt',
# help='model checkpoint to use')
# parser.add_argument('--outf', type=str, default='generated.txt',
# help='output file for generated text')
# parser.add_argument('--words', type=int, default='1000',
# help='number of words to generate')
# parser.add_argument('--seed', type=int, default=1111,
# help='random seed')
# parser.add_argument('--cuda', action='store_true',
# help='use CUDA')
# parser.add_argument('--temperature', type=float, default=1.0,
# help='temperature - higher will increase diversity')
# parser.add_argument('--log-interval', type=int, default=100,
# help='reporting interval')
# args = parser.parse_args()
args = {}
args["data"] = "./data/paul_graham/"
args["checkpoint"] = "./model3.pt"
args["outf"] = "generated3.txt"
args["words"] = 1000
args["seed"] = 1111
args["cuda"] = True
args["temperature"] = 1.0
args["log_interval"] = 100
# Set the random seed manually for reproducibility.
torch.manual_seed(args["seed"])
if torch.cuda.is_available():
if not args["cuda"]:
print("WARNING: You have a CUDA device, so you should probably run with --cuda")
device = torch.device("cuda" if args["cuda"] else "cpu")
if args["temperature"] < 1e-3:
    # The argparse parser is commented out above, so raise directly instead of parser.error
    raise ValueError("temperature has to be greater or equal 1e-3")
with open(args["checkpoint"], 'rb') as f:
model = torch.load(f).to(device)
model.eval()
corpus = data.Corpus(args["data"])
ntokens = len(corpus.dictionary)
is_transformer_model = hasattr(model, 'model_type') and model.model_type == 'Transformer'
if not is_transformer_model:
hidden = model.init_hidden(1)
input = torch.randint(ntokens, (1, 1), dtype=torch.long).to(device)
with open(args["outf"], 'w') as outf:
with torch.no_grad(): # no tracking history
for i in range(args["words"]):
if is_transformer_model:
output = model(input, False)
word_weights = output[-1].squeeze().div(args["temperature"]).exp().cpu()
word_idx = torch.multinomial(word_weights, 1)[0]
word_tensor = torch.Tensor([[word_idx]]).long().to(device)
input = torch.cat([input, word_tensor], 0)
else:
output, hidden = model(input, hidden)
word_weights = output.squeeze().div(args["temperature"]).exp().cpu()
word_idx = torch.multinomial(word_weights, 1)[0]
input.fill_(word_idx)
word = corpus.dictionary.idx2word[word_idx]
outf.write(word + ('\n' if i % 20 == 19 else ' '))
if i % args["log_interval"] == 0:
print('| Generated {}/{} words'.format(i, args["words"]))
```
```
import os
import csv
import platform
import pandas as pd
import networkx as nx
from graph_partitioning import GraphPartitioning, utils
run_metrics = True
cols = ["WASTE", "CUT RATIO", "EDGES CUT", "TOTAL COMM VOLUME", "Qds", "CONDUCTANCE", "MAXPERM", "NMI", "FSCORE", "FSCORE RELABEL IMPROVEMENT", "LONELINESS"]
pwd = %pwd
config = {
"DATA_FILENAME": os.path.join(pwd, "data", "predition_model_tests", "network", "rand_edge_weights", "network_1.txt"),
#"DATA_FILENAME": os.path.join(pwd, "data", "predition_model_tests", "network", "network_1.txt"),
"OUTPUT_DIRECTORY": os.path.join(pwd, "output"),
# Set which algorithm is run for the PREDICTION MODEL.
# Either: 'FENNEL' or 'SCOTCH'
"PREDICTION_MODEL_ALGORITHM": "SCOTCH",
    # Alternatively, read input file for prediction model.
# Set to empty to generate prediction model using algorithm value above.
"PREDICTION_MODEL": "",
"PARTITIONER_ALGORITHM": "SCOTCH",
# File containing simulated arrivals. This is used in simulating nodes
# arriving at the shelter. Nodes represented by line number; value of
# 1 represents a node as arrived; value of 0 represents the node as not
# arrived or needing a shelter.
"SIMULATED_ARRIVAL_FILE": os.path.join(pwd,
"data",
"predition_model_tests",
"dataset_1_shift_rotate",
"simulated_arrival_list",
"percentage_of_prediction_correct_100",
"arrival_100_1.txt"
),
# File containing the prediction of a node arriving. This is different to the
# simulated arrivals, the values in this file are known before the disaster.
"PREDICTION_LIST_FILE": os.path.join(pwd,
"data",
"predition_model_tests",
"dataset_1_shift_rotate",
"prediction_list",
"prediction_1.txt"
),
# File containing the geographic location of each node, in "x,y" format.
"POPULATION_LOCATION_FILE": os.path.join(pwd,
"data",
"predition_model_tests",
"coordinates",
"coordinates_1.txt"
),
# Number of shelters
"num_partitions": 4,
# The number of iterations when making prediction model
"num_iterations": 1,
# Percentage of prediction model to use before discarding
# When set to 0, prediction model is discarded, useful for one-shot
"prediction_model_cut_off": 1.0,
# Alpha value used in one-shot (when restream_batches set to 1)
"one_shot_alpha": 0.5,
# Number of arrivals to batch before recalculating alpha and restreaming.
# When set to 1, one-shot is used with alpha value from above
"restream_batches": 1000,
# When the batch size is reached: if set to True, each node is assigned
# individually as first in first out. If set to False, the entire batch
# is processed and empty before working on the next batch.
"sliding_window": False,
# Create virtual nodes based on prediction model
"use_virtual_nodes": False,
# Virtual nodes: edge weight
"virtual_edge_weight": 1.0,
# Loneliness score parameter. Used when scoring a partition by how many
# lonely nodes exist.
"loneliness_score_param": 1.2,
####
# GRAPH MODIFICATION FUNCTIONS
# Also enables the edge calculation function.
"graph_modification_functions": True,
# If set, the node weight is set to 100 if the node arrives at the shelter,
# otherwise the node is removed from the graph.
"alter_arrived_node_weight_to_100": False,
# Uses generalized additive models from R to generate prediction of nodes not
    # arrived. This sets the node weight on unarrived nodes to the prediction
# given by a GAM.
# Needs POPULATION_LOCATION_FILE to be set.
"alter_node_weight_to_gam_prediction": False,
# Enables edge expansion when graph_modification_functions is set to true
"edge_expansion_enabled": True,
# The value of 'k' used in the GAM will be the number of nodes arrived until
# it reaches this max value.
"gam_k_value": 100,
# Alter the edge weight for nodes that haven't arrived. This is a way to
# de-emphasise the prediction model for the unknown nodes.
"prediction_model_emphasis": 1.0,
# This applies the prediction_list_file node weights onto the nodes in the graph
# when the prediction model is being computed and then removes the weights
# for the cutoff and batch arrival modes
"apply_prediction_model_weights": True,
"SCOTCH_LIB_PATH": os.path.join(pwd, "libs/scotch/macOS/libscotch.dylib")
if 'Darwin' in platform.system()
else "/usr/local/lib/libscotch.so",
# Path to the PaToH shared library
"PATOH_LIB_PATH": os.path.join(pwd, "libs/patoh/lib/macOS/libpatoh.dylib")
if 'Darwin' in platform.system()
else os.path.join(pwd, "libs/patoh/lib/linux/libpatoh.so"),
"PATOH_ITERATIONS": 10,
# Expansion modes: 'no_expansion', 'avg_node_weight', 'total_node_weight', 'smallest_node_weight'
# 'largest_node_weight', 'product_node_weight'
# add '_squared' or '_sqrt' at the end of any of the above for ^2 or sqrt(weight)
# add '_complete' for applying the complete algorithm
# for hyperedge with weights: A, B, C, D
# new weights are computed
# (A*B)^2 = H0
# (A*C)^2 = H1, ... Hn-1
# then normal hyperedge expansion computed on H0...Hn-1
# i.e. 'avg_node_weight_squared
"PATOH_HYPEREDGE_EXPANSION_MODE": 'total_node_weight_sqrt_complete',
# Alters how much information to print. Keep it at 1 for this notebook.
# 0 - will print nothing, useful for batch operations.
# 1 - prints basic information on assignments and operations.
# 2 - prints more information as it batches arrivals.
"verbose": 1
}
#gp = GraphPartitioning(config)
# Optional: shuffle the order of nodes arriving
# Arrival order should not be shuffled if using GAM to alter node weights
#random.shuffle(gp.arrival_order)
%pylab inline
import scipy
iterations = 1000
#modes = ['product_node_weight_complete_sqrt']
modes = ['no_expansion', 'avg_node_weight_complete', 'total_node_weight_complete', 'smallest_node_weight_complete','largest_node_weight_complete']
#modes = ['no_expansion']
for mode in modes:
metricsDataPrediction = []
metricsDataAssign = []
dataQdsOv = []
dataCondOv = []
config['PATOH_HYPEREDGE_EXPANSION_MODE'] = mode
print('Mode', mode)
for i in range(0, iterations):
if (i % 50) == 0:
print('Mode', mode, 'Iteration', str(i))
config["DATA_FILENAME"] = os.path.join(pwd, "data", "predition_model_tests", "network", "network_" + str(i + 1) + ".txt")
gp = GraphPartitioning(config)
gp.verbose = 0
gp.load_network()
gp.init_partitioner()
m = gp.prediction_model()
metricsDataPrediction.append(m[0])
'''
#write_graph_files
#
gp.metrics_timestamp = datetime.datetime.now().strftime('%H%M%S')
f,_ = os.path.splitext(os.path.basename(gp.DATA_FILENAME))
gp.metrics_filename = f + "-" + gp.metrics_timestamp
if not os.path.exists(gp.OUTPUT_DIRECTORY):
os.makedirs(gp.OUTPUT_DIRECTORY)
if not os.path.exists(os.path.join(gp.OUTPUT_DIRECTORY, 'oslom')):
os.makedirs(os.path.join(gp.OUTPUT_DIRECTORY, 'oslom'))
file_oslom = os.path.join(gp.OUTPUT_DIRECTORY, 'oslom', "{}-all".format(gp.metrics_filename) + '-edges-oslom.txt')
with open(file_oslom, "w") as outf:
for e in gp.G.edges_iter(data=True):
outf.write("{}\t{}\t{}\n".format(e[0], e[1], e[2]["weight"]))
#file_oslom = utils.write_graph_files(gp.OUTPUT_DIRECTORY,
# "{}-all".format(gp.metrics_filename),
# gp.G,
# quiet=True)
community_metrics = utils.run_community_metrics(gp.OUTPUT_DIRECTORY,
"{}-all".format(gp.metrics_filename),
file_oslom)
dataQdsOv.append(float(community_metrics['Qds']))
dataCondOv.append(float(community_metrics['conductance']))
'''
ec = ''
tcv = ''
qds = ''
conductance = ''
maxperm = ''
nmi = ''
lonliness = ''
qdsOv = ''
condOv = ''
dataEC = []
dataTCV = []
dataQDS = []
dataCOND = []
dataMAXPERM = []
dataNMI = []
dataLonliness = []
for i in range(0, iterations):
dataEC.append(metricsDataPrediction[i][2])
dataTCV.append(metricsDataPrediction[i][3])
dataQDS.append(metricsDataPrediction[i][4])
dataCOND.append(metricsDataPrediction[i][5])
dataMAXPERM.append(metricsDataPrediction[i][6])
dataNMI.append(metricsDataPrediction[i][7])
dataLonliness.append(metricsDataPrediction[i][10])
# UNCOMMENT FOR BATCH ARRIVAL
#dataECB.append(metricsDataAssign[i][2])
#dataTCVB.append(metricsDataAssign[i][3])
if(len(ec)):
ec = ec + ','
ec = ec + str(metricsDataPrediction[i][2])
if(len(tcv)):
tcv = tcv + ','
tcv = tcv + str(metricsDataPrediction[i][3])
if(len(qds)):
qds = qds + ','
qds = qds + str(metricsDataPrediction[i][4])
if(len(conductance)):
conductance = conductance + ','
conductance = conductance + str(metricsDataPrediction[i][5])
if(len(maxperm)):
maxperm = maxperm + ','
maxperm = maxperm + str(metricsDataPrediction[i][6])
if(len(nmi)):
nmi = nmi + ','
nmi = nmi + str(metricsDataPrediction[i][7])
if(len(lonliness)):
lonliness = lonliness + ','
lonliness = lonliness + str(dataLonliness[i])
'''
if(len(qdsOv)):
qdsOv = qdsOv + ','
qdsOv = qdsOv + str(dataQdsOv[i])
if(len(condOv)):
condOv = condOv + ','
condOv = condOv + str(dataCondOv[i])
'''
    # Use numpy's mean/std; scipy.mean and scipy.std were removed from SciPy
    ec = 'EC_PM,' + config['PATOH_HYPEREDGE_EXPANSION_MODE'] + ',' + str(np.mean(dataEC)) + ',' + str(np.std(dataEC)) + ',' + ec
    tcv = 'TCV_PM,' + config['PATOH_HYPEREDGE_EXPANSION_MODE'] + ',' + str(np.mean(dataTCV)) + ',' + str(np.std(dataTCV)) + ',' + tcv
    lonliness = "LONELINESS," + config['PATOH_HYPEREDGE_EXPANSION_MODE'] + ',' + str(np.mean(dataLonliness)) + ',' + str(np.std(dataLonliness)) + ',' + lonliness
    qds = 'QDS_PM,' + config['PATOH_HYPEREDGE_EXPANSION_MODE'] + ',' + str(np.mean(dataQDS)) + ',' + str(np.std(dataQDS)) + ',' + qds
    conductance = 'CONDUCTANCE_PM,' + config['PATOH_HYPEREDGE_EXPANSION_MODE'] + ',' + str(np.mean(dataCOND)) + ',' + str(np.std(dataCOND)) + ',' + conductance
    maxperm = 'MAXPERM_PM,' + config['PATOH_HYPEREDGE_EXPANSION_MODE'] + ',' + str(np.mean(dataMAXPERM)) + ',' + str(np.std(dataMAXPERM)) + ',' + maxperm
    nmi = 'NMI_PM,' + config['PATOH_HYPEREDGE_EXPANSION_MODE'] + ',' + str(np.mean(dataNMI)) + ',' + str(np.std(dataNMI)) + ',' + nmi
    #qdsOv = 'QDS_OV,' + config['PATOH_HYPEREDGE_EXPANSION_MODE'] + ',' + str(np.mean(dataQdsOv)) + ',' + str(np.std(dataQdsOv)) + qdsOv
    #condOv = 'CONDUCTANCE_OV,' + config['PATOH_HYPEREDGE_EXPANSION_MODE'] + ',' + str(np.mean(dataCondOv)) + ',' + str(np.std(dataCondOv)) + condOv
print(ec)
print(tcv)
print(lonliness)
print(qds)
print(conductance)
print(maxperm)
#print(qdsOv)
#print(condOv)
```
```
%%capture
!pip install wikidataintegrator
from rdflib import Graph, URIRef
from wikidataintegrator import wdi_core, wdi_login
from datetime import datetime
import copy
import pandas as pd
import getpass
print("username:")
username = input()
print("password:")
password = getpass.getpass()
login = wdi_login.WDLogin(user=username, pwd=password)
# functions
def createOBOReference(oboid):
    # Build the reference block (stated in, retrieved date, external ID) for a statement
    statedin = wdi_core.WDItemID(obowditem, prop_nr="P248", is_reference=True)
    retrieved = datetime.now()
    timeStringNow = retrieved.strftime("+%Y-%m-%dT00:00:00Z")
    refRetrieved = wdi_core.WDTime(timeStringNow, prop_nr="P813", is_reference=True)
    id = wdi_core.WDExternalID(oboid, prop_nr=oboidwdprop, is_reference=True)
    return [statedin, refRetrieved, id]
query = """
SELECT * WHERE {
?ontology rdfs:label ?ontologyLabel ;
wdt:P361 wd:Q4117183 ;
wdt:P1687 ?wdprop .
OPTIONAL {?ontology wdt:P1813 ?shortname .}
OPTIONAL {?wdprop wdt:P1630 ?formatterURL .}
FILTER (lang(?ontologyLabel) = "en")
}
"""
wdmappings = wdi_core.WDFunctionsEngine.execute_sparql_query(query, as_dataframe=True)
oboid = "SO:0000110"
obouri = "http://purl.obolibrary.org/obo/"+oboid.replace(":", "_")
oboontology = oboid.split(":")[0]
## Fetch the OBO ontology
obog = Graph()
obog.parse(f"http://www.ontobee.org/ontology/rdf/{oboontology}?iri="+obouri, format="xml")
oboqid = wdmappings[wdmappings["shortname"]==oboid.split(":")[0]]["ontology"].iloc[0].replace("http://www.wikidata.org/entity/", "")
wdmappings
# wikidata
obowditem = wdmappings[wdmappings["shortname"]==oboid.split(":")[0]]["ontology"].iloc[0].replace("http://www.wikidata.org/entity/", "")
oboidwdprop =wdmappings[wdmappings["shortname"]==oboid.split(":")[0]]["wdprop"].iloc[0].replace("http://www.wikidata.org/entity/", "") #gene ontology id
## Fetch Wikidata part of the OBO ontology
query = f"""
SELECT * WHERE {{?item wdt:{oboidwdprop} '{oboid}'}}
"""
qid = wdi_core.WDFunctionsEngine.execute_sparql_query(query, as_dataframe=True)
if len(qid) >0:
qid = qid.iloc[0]["item"].replace("http://www.wikidata.org/entity/", "")
else:
qid = None
# Bot
## ShEx precheck
if qid:
item = wdi_core.WDItemEngine(wd_item_id=qid)
# precheck = item.check_entity_schema(eid="E323", output="result")
#if not precheck["result"]:
# print(qid + " needs fixing to conform to E323")
# quit()
print("continue")
obo_reference = createOBOReference(oboid)
# Statements build up
## OBO ontology generic
statements = []
# OBO ID
statements.append(wdi_core.WDString(value=oboid, prop_nr=oboidwdprop, references=[copy.deepcopy(obo_reference)]))
# exact match (P2888)
statements.append(wdi_core.WDUrl(value=obouri, prop_nr="P2888", references=[copy.deepcopy(obo_reference)]))
## OBO resource specific
### Gene Ontology
gotypes = {"biological_process": "Q2996394",
"molecular_function": "Q14860489",
"cellular_component": "Q5058355",
}
for gotype in obog.objects(predicate=URIRef("http://www.geneontology.org/formats/oboInOwl#hasOBONamespace")):
statements.append(wdi_core.WDItemID(gotypes[str(gotype)], prop_nr="P31", references=[copy.deepcopy(obo_reference)]))
#external identifiers based on skos:exactMatch
for extID in obog.objects(predicate=URIRef("http://www.w3.org/2004/02/skos/core#exactMatch")):
    extID = str(extID)
    # if "MESH:" in extID:
    #     statements.append(wdi_core.WDExternalID(extID.replace("MESH:", ""), prop_nr="P486", references=[copy.deepcopy(obo_reference)]))
    if "NCI:" in extID:
        statements.append(wdi_core.WDExternalID(extID.replace("NCI:", ""), prop_nr="P1748", references=[copy.deepcopy(obo_reference)]))
    if "ICD10CM:" in extID:
        statements.append(wdi_core.WDExternalID(extID.replace("ICD10CM:", ""), prop_nr="P4229", references=[copy.deepcopy(obo_reference)]))
    if "UMLS_CUI:" in extID:
        statements.append(wdi_core.WDExternalID(extID.replace("UMLS_CUI:", ""), prop_nr="P2892", references=[copy.deepcopy(obo_reference)]))
item = wdi_core.WDItemEngine(data=statements, keep_good_ref_statements=True)
print(item.write(login))
bloeb = Graph()
uri = bloeb.parse("http://www.ontobee.org/ontology/rdf/SO?iri=http://purl.obolibrary.org/obo/SO_0001565", format="xml")
print(bloeb.serialize(format="turtle"))
```
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm
import pandas as pd
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn import metrics
from sklearn import neighbors, datasets
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_blobs
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.ensemble import GradientBoostingClassifier
from scipy.spatial import ConvexHull
from tqdm import tqdm
import random
plt.style.use('ggplot')
import pickle
from sklearn import tree
from sklearn.tree import export_graphviz
from joblib import dump, load
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
from sklearn.metrics import classification_report
%matplotlib inline
from sklearn.impute import SimpleImputer
def getAuc(X,y,test_size=0.25,max_depth=None,n_estimators=100,
minsplit=4,FPR=[],TPR=[],VERBOSE=False, USE_ONLY=None):
'''
get AUC given training data X, with target labels y
'''
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=test_size)
CLASSIFIERS=[DecisionTreeClassifier(max_depth=max_depth, min_samples_split=minsplit,class_weight='balanced'),
RandomForestClassifier(n_estimators=n_estimators,
max_depth=max_depth,min_samples_split=minsplit,class_weight='balanced'),
ExtraTreesClassifier(n_estimators=n_estimators,
max_depth=max_depth,min_samples_split=minsplit,class_weight='balanced'),
AdaBoostClassifier(n_estimators=n_estimators),
GradientBoostingClassifier(n_estimators=n_estimators,max_depth=max_depth),
svm.SVC(kernel='rbf',gamma='scale',class_weight='balanced',probability=True)]
if USE_ONLY is not None:
if isinstance(USE_ONLY, (list,)):
CLASSIFIERS=[CLASSIFIERS[i] for i in USE_ONLY]
if isinstance(USE_ONLY, (int,)):
CLASSIFIERS=CLASSIFIERS[USE_ONLY]
for clf in CLASSIFIERS:
clf.fit(X_train,y_train)
y_pred=clf.predict_proba(X_test)
#print(X_test,y_pred)
fpr, tpr, thresholds = metrics.roc_curve(y_test,y_pred[:,1], pos_label=1)
auc=metrics.auc(fpr, tpr)
if VERBOSE:
print(auc)
FPR=np.append(FPR,fpr)
TPR=np.append(TPR,tpr)
points=np.array([[a[0],a[1]] for a in zip(FPR,TPR)])
hull = ConvexHull(points)
x=np.argsort(points[hull.vertices,:][:,0])
auc=metrics.auc(points[hull.vertices,:][x,0],points[hull.vertices,:][x,1])
return auc,CLASSIFIERS
def saveFIG(filename='tmp.pdf',AXIS=False):
'''
save fig for publication
'''
import pylab as plt
plt.subplots_adjust(top = 1, bottom = 0, right = 1, left = 0,
hspace = 0, wspace = 0)
plt.margins(0,0)
if not AXIS:
plt.gca().xaxis.set_major_locator(plt.NullLocator())
plt.gca().yaxis.set_major_locator(plt.NullLocator())
plt.savefig(filename,dpi=300, bbox_inches = 'tight',
pad_inches = 0,transparent=False)
return
def getCoverage(model,verbose=True):
'''
return how many distinct items (questions)
are used in the model set.
This includes the set of questions being
covered by all forms that may be
generated by the model set
'''
FS=[]
for m in model:
for count in range(len(m.estimators_)):
clf=m.estimators_[count]
            fs = clf.tree_.feature[clf.tree_.feature >= 0]  # internal nodes; leaves are marked -2
FS=np.array(list(set(np.append(FS,fs))))
if verbose:
print("Number of items used: ", FS.size)
return FS
def getConfusion(X,y,test_size=0.25,max_depth=None,n_estimators=100,
minsplit=4,CONFUSION={},VERBOSE=False, USE_ONLY=None,target_names = None):
'''
get AUC given training data X, with target labels y
'''
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=test_size)
CLASSIFIERS=[DecisionTreeClassifier(max_depth=max_depth, min_samples_split=minsplit),
RandomForestClassifier(n_estimators=n_estimators,class_weight='balanced',
max_depth=max_depth,min_samples_split=minsplit),
ExtraTreesClassifier(n_estimators=n_estimators,class_weight='balanced',
max_depth=max_depth,min_samples_split=minsplit),
AdaBoostClassifier(n_estimators=n_estimators),
GradientBoostingClassifier(n_estimators=n_estimators,max_depth=max_depth),
svm.SVC(kernel='rbf',gamma='scale',class_weight='balanced',probability=True)]
if USE_ONLY is not None:
if isinstance(USE_ONLY, (list,)):
CLASSIFIERS=[CLASSIFIERS[i] for i in USE_ONLY]
if isinstance(USE_ONLY, (int,)):
CLASSIFIERS=CLASSIFIERS[USE_ONLY]
for clf in CLASSIFIERS:
clf.fit(X_train,y_train)
y_pred=clf.predict(X_test)
print(y_test,y_pred)
cmat=confusion_matrix(y_test, y_pred)
acc=accuracy_score(y_test, y_pred)
CONFUSION[clf]=cmat
if VERBOSE:
print(classification_report(y_test, y_pred, target_names=target_names))
            print('Confusion Matrix:\n', cmat)
print(' ')
print('Accuracy:', acc)
return CONFUSION,acc
df=pd.read_csv('combined_bsnip.csv',index_col=0).drop('DSM',axis=1)
df.head()
df.Biotype.value_counts()
# 3 is HC
#df=df[df['Biotype']==3]
df=df.dropna()
df0=df
#df=df0[df0.Biotype.isin([1,5])]
df=df0
X=df.iloc[:,2:].values
y=df.Biotype.values#.astype(str)
y=[(int(x)==2)+0 for x in y ]
ACC=[]
CLFh={}
for run in tqdm(np.arange(500)):
    auc, CLFS = getAuc(X, y, test_size=0.2, max_depth=10, n_estimators=2,
                       minsplit=2, VERBOSE=False, USE_ONLY=[2])
    ACC = np.append(ACC, auc)
    if auc > 0.5:
        CLFh[auc] = CLFS
sns.distplot(ACC)
np.median(ACC)
CLFstar=CLFh[np.array([k for k in CLFh.keys()]).max()][0]
CLFstar
from scipy import interpolate
from scipy.interpolate import interp1d
auc_=[]
ROC={}
fpr_ = np.linspace(0, 1, num=20, endpoint=True)
for run in np.arange(1000):
    clf = CLFstar
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5)
    y_pred = clf.predict_proba(X_test)
    fpr, tpr, thresholds = metrics.roc_curve(y_test, y_pred[:, 1], pos_label=1)
    f = interp1d(fpr, tpr)
    auc_ = np.append(auc_, metrics.auc(fpr_, f(fpr_)))
    ROC[metrics.auc(fpr, tpr)] = {'fpr': fpr_, 'tpr': f(fpr_)}
sns.distplot(auc_)
auc_.mean()
TPR=[]
for a in ROC.keys():
    plt.plot(ROC[a]['fpr'], ROC[a]['tpr'], '-k', alpha=.05)
    TPR = np.append(TPR, ROC[a]['tpr'])
TPR=TPR.reshape(int(len(TPR)/len(fpr_)),len(fpr_))
plt.plot(fpr_,np.median(TPR,axis=0),'-r')
metrics.auc(fpr_,np.median(TPR,axis=0))
plt.gca().set_title('B2 vs others',fontsize=20)
plt.text(.6,.65,'AUC: '+str(metrics.auc(fpr_,np.median(TPR,axis=0)))[:5],color='r',fontsize=20)
#plt.text(.6,.31,'AUC: '+str(metrics.auc(fpr_,np.median(tprA,axis=0)))[:5],color='b')
#plt.text(.6,.19,'AUC: '+str(metrics.auc(fpr_,np.median(tprB,axis=0)))[:5],color='g')
plt.gca().set_xlabel('1-specificity',fontsize=20)
plt.gca().set_ylabel('sensitivity',fontsize=20)
plt.gca().xaxis.set_tick_params(labelsize=20)
plt.gca().yaxis.set_tick_params(labelsize=20)
saveFIG('bsnip001_updated_L2.pdf',AXIS=True)
```
# feature importance analysis
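The cells below sum `feature_importances_` across the ensemble's estimators and rescale by the maximum before plotting. A minimal sketch of that aggregation, using made-up importance vectors in place of the fitted estimators:

```python
import numpy as np
import pandas as pd

# Hypothetical per-estimator importance vectors, standing in for
# e.feature_importances_ from the cells below
imp = [np.array([0.2, 0.5, 0.3]),
       np.array([0.1, 0.7, 0.2])]

# Sum across estimators, then rescale so the top feature scores 1.0
total = pd.DataFrame(imp).sum()
normalized = (total / total.max()).sort_values(ascending=False)
print(normalized.tolist())
```

The normalization makes importances comparable across runs with different numbers of estimators; only the ranking and relative magnitudes are meaningful.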
```
fig=plt.figure(figsize=[8,6])
IMP=[]
for e in CLFstar.estimators_:
    IMP.append(e.feature_importances_)
len(IMP)
N=15
I15=pd.DataFrame(IMP).sum()
I15=I15/I15.max()
I15=I15.sort_values(ascending=False).head(N)
ax=I15.plot(kind='bar',ax=plt.gca())
ax.set_ylim(0.3,None)
plt.gca().set_xlabel('Item ID',fontsize=20)
plt.gca().set_ylabel('Normalized Feature Importance',fontsize=20)
plt.gca().xaxis.set_tick_params(labelsize=20)
plt.gca().yaxis.set_tick_params(labelsize=20)
plt.gca().set_title('B2 vs others',fontsize=20)
saveFIG('updated_bsnip001_L2_importances.pdf',AXIS=True)
a = sorted(CLFh.keys())
top_keys=a[-3:]
CLFh[top_keys[0]]
fig=plt.figure(figsize=[8,6])
IMP=[]
for k in top_keys:
    for e in CLFh[k][0].estimators_:
        IMP.append(e.feature_importances_)
len(IMP)
N=15
I15=pd.DataFrame(IMP).sum()
I15=I15/I15.max()
I15=I15.sort_values(ascending=False).head(N)
ax=I15.plot(kind='bar',ax=plt.gca())
ax.set_ylim(0.3,None)
plt.gca().set_xlabel('Item ID',fontsize=20)
plt.gca().set_ylabel('Normalized Feature Importance',fontsize=20)
plt.gca().xaxis.set_tick_params(labelsize=20)
plt.gca().yaxis.set_tick_params(labelsize=20)
plt.gca().set_title('B2 vs others',fontsize=20)
saveFIG('updated_bsnip001_L2_importances.pdf',AXIS=True)
```
```
import sys
sys.executable
import argparse
import math
import h5py
import numpy as np
import tensorflow as tf
import socket
import glob
import os
import provider
import tf_util
from model import *
from plyfile import PlyData, PlyElement
print("success")
BATCH_SIZE = 1
BATCH_SIZE_EVAL = 1
NUM_POINT = 4096
MAX_EPOCH = 50
BASE_LEARNING_RATE = 0.001
GPU_INDEX = 0
MOMENTUM = 0.9
OPTIMIZER = 'adam'
DECAY_STEP = 300000
DECAY_RATE = 0.5
LOG_DIR = 'log'
if not os.path.exists(LOG_DIR): os.mkdir(LOG_DIR)
os.system('cp model.py %s' % (LOG_DIR)) # bkp of model def
#os.system('cp train.py %s' % (LOG_DIR)) # bkp of train procedure
LOG_FOUT = open(os.path.join(LOG_DIR, 'log_train.txt'), 'w')
# LOG_FOUT.write(str(FLAGS)+'\n')
MAX_NUM_POINT = 4096
NUM_CLASSES = 2
BN_INIT_DECAY = 0.5
BN_DECAY_DECAY_RATE = 0.5
#BN_DECAY_DECAY_STEP = float(DECAY_STEP * 2)
BN_DECAY_DECAY_STEP = float(DECAY_STEP)
BN_DECAY_CLIP = 0.99
HOSTNAME = socket.gethostname()
# Load ALL data
f = h5py.File('data/test.h5', 'r')
print(f)
#Choose a frame to test, (0,60)
frame_to_test = 15
test_data = np.zeros((4096, 6))
test_label = np.ones((1,4096))
xmax = 3.0
xmin = -3.0
data = f['data']
label = f['label']
test_data[:,0:3] = (data[frame_to_test][:, 0:3]- xmin) / (xmax - xmin )
test_data[:,3:6] = data[frame_to_test][:, 3:6]
test_label[:,:] = label[frame_to_test][:]
print(test_data.shape)
print(test_label.shape)
features = ["x","y","z","r","g","b"]
for i in range(6):
    print(features[i] + "_range :", np.min(test_data[:, i]), np.max(test_data[:, i]))
test_data_min = []
test_data_max = []
for i in range(6):
    test_data_min.append(np.min(test_data[:, i]))
    test_data_max.append(np.max(test_data[:, i]))
print(test_data_min)
print(test_data_max)
def log_string(out_str):
    LOG_FOUT.write(out_str + '\n')
    LOG_FOUT.flush()
    print(out_str)

def get_learning_rate(batch):
    learning_rate = tf.train.exponential_decay(
        BASE_LEARNING_RATE,  # Base learning rate.
        batch * BATCH_SIZE,  # Current index into the dataset.
        DECAY_STEP,          # Decay step.
        DECAY_RATE,          # Decay rate.
        staircase=True)
    learning_rate = tf.maximum(learning_rate, 0.00001)  # Clip the learning rate from below.
    return learning_rate

def get_bn_decay(batch):
    bn_momentum = tf.train.exponential_decay(
        BN_INIT_DECAY,
        batch * BATCH_SIZE,
        BN_DECAY_DECAY_STEP,
        BN_DECAY_DECAY_RATE,
        staircase=True)
    bn_decay = tf.minimum(BN_DECAY_CLIP, 1 - bn_momentum)
    return bn_decay
def evaluate():
    with tf.Graph().as_default():
        with tf.device('/gpu:' + str(GPU_INDEX)):
            pointclouds_pl, labels_pl = placeholder_inputs(BATCH_SIZE, NUM_POINT)
            is_training_pl = tf.placeholder(tf.bool, shape=())

            # Note the global_step=batch parameter to minimize.
            # That tells the optimizer to helpfully increment the 'batch' parameter for you every time it trains.
            batch = tf.Variable(0)
            bn_decay = get_bn_decay(batch)
            tf.summary.scalar('bn_decay', bn_decay)

            # Get model and loss
            pred = get_model(pointclouds_pl, is_training_pl, bn_decay=bn_decay)
            loss = get_loss(pred, labels_pl)
            tf.summary.scalar('loss', loss)

            learning_rate = get_learning_rate(batch)
            if OPTIMIZER == 'momentum':
                optimizer = tf.train.MomentumOptimizer(learning_rate, momentum=MOMENTUM)
            elif OPTIMIZER == 'adam':
                optimizer = tf.train.AdamOptimizer(learning_rate)
            train_op = optimizer.minimize(loss, global_step=batch)

            # Add ops to save and restore all the variables.
            saver = tf.train.Saver()

        # Create a session
        config = tf.ConfigProto()
        config.gpu_options.allow_growth = True
        config.allow_soft_placement = True
        config.log_device_placement = True
        sess = tf.Session(config=config)

        merged = tf.summary.merge_all()
        ops = {'pointclouds_pl': pointclouds_pl,
               'labels_pl': labels_pl,
               'is_training_pl': is_training_pl,
               'pred': pred,
               'loss': loss,
               'train_op': train_op,
               'merged': merged,
               'step': batch}

        MODEL_PATH = "log/model.ckpt"
        # Restore variables from disk.
        saver.restore(sess, MODEL_PATH)
        log_string("Model restored.")

        test_writer = tf.summary.FileWriter(os.path.join(LOG_DIR, 'test'))
        eval_one_epoch(sess, ops, test_writer)
def eval_one_epoch(sess, ops, test_writer):
    """ ops: dict mapping from string to tf ops """
    is_training = False
    total_correct = 0
    total_seen = 0
    loss_sum = 0
    total_seen_class = [0 for _ in range(NUM_CLASSES)]
    total_correct_class = [0 for _ in range(NUM_CLASSES)]
    log_string('----')

    current_data = test_data[0:NUM_POINT, :]
    current_label = test_label
    current_data = current_data.reshape(1, 4096, 6)

    file_size = current_data.shape[0]
    num_batches = file_size // BATCH_SIZE_EVAL

    fout = open('log/' + str(frame_to_test) + '_pred.obj', 'w')
    fout_gt = open('log/' + str(frame_to_test) + '_gt.obj', 'w')

    for batch_idx in range(num_batches):
        start_idx = batch_idx * BATCH_SIZE_EVAL
        end_idx = (batch_idx + 1) * BATCH_SIZE_EVAL

        feed_dict = {ops['pointclouds_pl']: current_data[:, :],
                     ops['labels_pl']: current_label[:],
                     ops['is_training_pl']: is_training}
        summary, step, loss_val, pred_val = sess.run(
            [ops['merged'], ops['step'], ops['loss'], ops['pred']],
            feed_dict=feed_dict)
        pred_label = np.argmax(pred_val, 2)  # BxN
        test_writer.add_summary(summary, step)
        pred_val = np.argmax(pred_val, 2)
        correct = np.sum(pred_val == current_label[start_idx:end_idx])
        total_correct += correct
        total_seen += (BATCH_SIZE_EVAL * NUM_POINT)
        loss_sum += (loss_val * BATCH_SIZE_EVAL)

        class_color = [[0, 255, 0], [0, 0, 255]]
        for i in range(start_idx, end_idx):
            pred = pred_label[i - start_idx, :]
            pts = current_data[i - start_idx, :, :]
            for j in range(NUM_POINT):
                l = int(current_label[i, j])
                total_seen_class[l] += 1
                total_correct_class[l] += (pred_val[i - start_idx, j] == l)
                color = class_color[pred_val[i - start_idx, j]]
                color_gt = class_color[l]
                fout.write('v %f %f %f %d %d %d\n' % (pts[j, 0], pts[j, 1], pts[j, 2], color[0], color[1], color[2]))
                fout_gt.write('v %f %f %f %d %d %d\n' % (pts[j, 0], pts[j, 1], pts[j, 2], color_gt[0], color_gt[1], color_gt[2]))

    log_string('eval mean loss: %f' % (loss_sum / float(total_seen / NUM_POINT)))
    log_string('eval accuracy: %f' % (total_correct / float(total_seen)))
    log_string('eval avg class acc: %f' % (np.mean(np.array(total_correct_class) / np.array(total_seen_class, dtype=np.float64))))
if __name__ == "__main__":
    evaluate()
    LOG_FOUT.close()
# Output:
# eval mean loss: 0.297671
# eval accuracy: 0.840576
# eval avg class acc: 0.707515
```
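`get_learning_rate` above wraps `tf.train.exponential_decay` with `staircase=True` and clips the result from below at `1e-5`. The same schedule can be sketched in plain Python (constants copied from the cells above; `staircase_lr` is an illustrative name, not part of the training code):

```python
import math

BASE_LEARNING_RATE = 0.001
BATCH_SIZE = 1
DECAY_STEP = 300000
DECAY_RATE = 0.5
MIN_LR = 0.00001

def staircase_lr(batch):
    """Staircase exponential decay with a lower clip, mirroring get_learning_rate."""
    step = batch * BATCH_SIZE
    lr = BASE_LEARNING_RATE * DECAY_RATE ** math.floor(step / DECAY_STEP)
    return max(lr, MIN_LR)

print(staircase_lr(0))       # base rate at the start
print(staircase_lr(300000))  # halved after one full decay step
```

With `staircase=True` the exponent is floored, so the rate stays constant within each `DECAY_STEP` window and halves at the boundary, rather than decaying smoothly.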
# Training the model
```
# import
import random
import math
import time
import pandas as pd
import numpy as np
import torch
import torch.utils.data as data
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
# Set the initial random seeds
torch.manual_seed(1234)
np.random.seed(1234)
random.seed(1234)
```
# Creating the DataLoader
```
from utils.dataloader import make_datapath_list, DataTransform, COCOkeypointsDataset
# Create lists from MS COCO
train_img_list, train_mask_list, val_img_list, val_mask_list, train_meta_list, val_meta_list = make_datapath_list(
    rootpath="./data/")
print(len(train_img_list))
print(len(train_meta_list))
# ★Take roughly 1000 samples for training
data_num = 1024  # a multiple of the batch size
train_img_list = train_img_list[:data_num]
train_mask_list = train_mask_list[:data_num]
val_img_list = val_img_list[:data_num]
val_mask_list = val_mask_list[:data_num]
train_meta_list = train_meta_list[:data_num]
val_meta_list = val_meta_list[:data_num]
# Create the dataset (note: the val lists are reused here for the training split)
train_dataset = COCOkeypointsDataset(
    val_img_list, val_mask_list, val_meta_list, phase="train", transform=DataTransform())

# To keep this example simple, no validation dataset is created
# val_dataset = COCOkeypointsDataset(val_img_list, val_mask_list, val_meta_list, phase="val", transform=DataTransform())

# Create the DataLoader
batch_size = 32
train_dataloader = data.DataLoader(
    train_dataset, batch_size=batch_size, shuffle=True)
dataloaders_dict = {"train": train_dataloader, "val": None}
```
# Creating the model
```
from utils.openpose_net import OpenPoseNet
net = OpenPoseNet()
```
# Defining the loss function
```
class OpenPoseLoss(nn.Module):
    def __init__(self):
        super(OpenPoseLoss, self).__init__()

    def forward(self, saved_for_loss, heatmap_target, heat_mask, paf_target, paf_mask):
        """
        Compute the loss.

        Parameters
        ----------
        saved_for_loss : list
            Output of OpenPoseNet.
        heatmap_target : [num_batch, 19, 46, 46]
            Heatmap annotation.
        heat_mask : [num_batch, 19, 46, 46]
            Heatmap mask.
        paf_target : [num_batch, 38, 46, 46]
            PAF annotation.
        paf_mask : [num_batch, 38, 46, 46]
            PAF mask.

        Returns
        -------
        loss : torch.Tensor
        """
        total_loss = 0
        # Accumulate the loss over all six stages
        for j in range(6):
            # Exclude masked positions from the loss
            # PAFs
            pred1 = saved_for_loss[2 * j] * paf_mask
            gt1 = paf_target.float() * paf_mask
            # heatmaps
            pred2 = saved_for_loss[2 * j + 1] * heat_mask
            gt2 = heatmap_target.float() * heat_mask

            total_loss += F.mse_loss(pred1, gt1, reduction='mean') + \
                F.mse_loss(pred2, gt2, reduction='mean')

        return total_loss

criterion = OpenPoseLoss()
```
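`OpenPoseLoss` accumulates a masked MSE over six stages, once for the PAFs and once for the heatmaps. A NumPy sketch of the masked-MSE building block, on toy arrays rather than real network outputs:

```python
import numpy as np

def masked_mse(pred, target, mask):
    """MSE computed after zeroing out masked positions, as in OpenPoseLoss.

    Note: like the F.mse_loss(reduction='mean') call above, the mean is
    taken over ALL elements, including the masked-out ones.
    """
    diff = pred * mask - target * mask
    return np.mean(diff ** 2)

pred = np.array([1.0, 2.0, 3.0, 4.0])
target = np.array([1.0, 0.0, 3.0, 0.0])
mask = np.array([1.0, 0.0, 1.0, 0.0])  # positions 1 and 3 are excluded

print(masked_mse(pred, target, mask))  # 0.0 — all unmasked positions match
```

Multiplying both prediction and target by the mask means annotation errors at masked positions contribute nothing to the gradient, while still averaging over the full tensor shape.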
# Configuring the optimizer
```
optimizer = optim.SGD(net.parameters(), lr=1e-2,
                      momentum=0.9,
                      weight_decay=0.0001)
```
# Running the training
```
def train_model(net, dataloaders_dict, criterion, optimizer, num_epochs):

    # Check whether the machine has a GPU available
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    print("Use:", device)

    # Move the model to the device
    net.to(device)
    torch.backends.cudnn.benchmark = True

    num_train_imgs = len(dataloaders_dict["train"].dataset)
    batch_size = dataloaders_dict["train"].batch_size
    iteration = 1

    # training loop
    for epoch in range(num_epochs):

        # record the start time of the epoch
        t_epoch_start = time.time()
        t_iter_start = time.time()
        epoch_train_loss = 0.0
        epoch_val_loss = 0.0

        print('-------------')
        print('Epoch {}/{}'.format(epoch + 1, num_epochs))
        print('-------------')

        # alternate between the training and validation phases
        for phase in ['train', 'val']:
            if phase == 'train':
                net.train()
                optimizer.zero_grad()
                print('(train)')
            # validation is skipped in this run
            else:
                continue
                # net.eval()
                # print('-------------')
                # print('(val)')

            # Fetch minibatches from the data loader
            for imges, heatmap_target, heat_mask, paf_target, paf_mask in dataloaders_dict[phase]:
                # skip minibatches of size 1
                if imges.size()[0] == 1:
                    continue

                # Send the data to the GPU if one is available
                imges = imges.to(device)
                heatmap_target = heatmap_target.to(device)
                heat_mask = heat_mask.to(device)
                paf_target = paf_target.to(device)
                paf_mask = paf_mask.to(device)

                # reset the optimizer's gradients
                optimizer.zero_grad()

                # forward pass
                with torch.set_grad_enabled(phase == 'train'):
                    _, saved_for_loss = net(imges)

                    loss = criterion(saved_for_loss, heatmap_target,
                                     heat_mask, paf_target, paf_mask)
                    del saved_for_loss

                    # backpropagate the loss during training
                    if phase == 'train':
                        loss.backward()
                        optimizer.step()

                        if (iteration % 10 == 0):
                            t_iter_finish = time.time()
                            duration = t_iter_finish - t_iter_start
                            print('Iteration {} || Loss: {:.4f} || 10iter: {:.4f} sec.'.format(
                                iteration, loss.item() / batch_size, duration))
                            t_iter_start = time.time()

                        epoch_train_loss += loss.item()
                        iteration += 1

                    # Validation (skipped)
                    # else:
                    #     epoch_val_loss += loss.item()

        t_epoch_finish = time.time()
        print('-------------')
        print('epoch {} || Epoch_TRAIN_Loss:{:.4f} ||Epoch_VAL_Loss:{:.4f}'.format(
            epoch + 1, epoch_train_loss / num_train_imgs, 0))
        print('timer: {:.4f} sec.'.format(t_epoch_finish - t_epoch_start))
        t_epoch_start = time.time()

    # Save the learned weights
    torch.save(net.state_dict(), 'weights/openpose_net_' +
               str(epoch + 1) + '.pth')


# Training (run a single epoch)
num_epochs = 1
train_model(net, dataloaders_dict, criterion, optimizer, num_epochs=num_epochs)
```
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Saving and loading a model using a distribution strategy
<table class="tfo-notebook-buttons" align="left">
  <td>
    <a target="_blank" href="https://www.tensorflow.org/tutorials/distribute/save_and_load"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
  </td>
  <td>
    <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/tutorials/distribute/save_and_load.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
  </td>
  <td>
    <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/tutorials/distribute/save_and_load.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
  </td>
  <td>
    <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/tutorials/distribute/save_and_load.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
  </td>
</table>
Note: This document was translated by the TensorFlow community. Because community translations are best-effort, there is no guarantee that this translation exactly and fully reflects the [official English documentation](https://www.tensorflow.org/?hl=en). If you have suggestions to improve this translation, please send a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs-l10n) GitHub repository. To volunteer to write or review translations, please contact [docs-ko@tensorflow.org](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-ko).
## Overview

It is common to save and load a model during training. There are two sets of APIs for saving and loading a Keras model: a high-level API and a low-level API. This tutorial shows how you can use the SavedModel APIs when using `tf.distribute.Strategy`. To learn about SavedModel and serialization in general, please read the [saved model guide](../../guide/saved_model.ipynb) and the [Keras model serialization guide](../../guide/keras/save_and_serialize.ipynb). Let's start with a simple example:

Import the required packages:
```
from __future__ import absolute_import, division, print_function, unicode_literals
try:
    # The %tensorflow_version magic only exists in Colab.
    %tensorflow_version 2.x
except Exception:
    pass
import tensorflow_datasets as tfds
import tensorflow as tf
tfds.disable_progress_bar()
```
Prepare the model and data using `tf.distribute.Strategy`:

Train the model:
```
model = get_model()
train_dataset, eval_dataset = get_data()
model.fit(train_dataset, epochs=2)
```
## Save and load the model

Now that you have a model to work with, let's look at how to save and load it using these APIs.

Two kinds of APIs are available:

* High-level Keras `model.save` and `tf.keras.models.load_model`
* Low-level `tf.saved_model.save` and `tf.saved_model.load`

### The Keras APIs

Here is an example of saving and loading a model with the Keras APIs.
```
keras_model_path = "/tmp/keras_save"
model.save(keras_model_path)  # save() must be called outside the strategy scope.
```
Restore the model without `tf.distribute.Strategy`:
```
restored_keras_model = tf.keras.models.load_model(keras_model_path)
restored_keras_model.fit(train_dataset, epochs=2)
```
After restoring the model, you can continue training it without having to call `compile()` again, since it was already compiled before saving. The model is saved in TensorFlow's standard `SavedModel` proto format. For more information, please refer to [the guide to the `SavedModel` format](../../guide/saved_model.ipynb).

It is important to call the `model.save()` method outside the scope of `tf.distribute.Strategy`. Calling it within the scope is not supported.

Now load the model and train it using `tf.distribute.Strategy`:
```
another_strategy = tf.distribute.OneDeviceStrategy("/cpu:0")
with another_strategy.scope():
    restored_keras_model_ds = tf.keras.models.load_model(keras_model_path)
    restored_keras_model_ds.fit(train_dataset, epochs=2)
```
As you can see, loading works as expected with `tf.distribute.Strategy`. The strategy used here does not have to be the same strategy used before saving.

### The `tf.saved_model` APIs

Now let's take a look at the lower-level APIs. Saving the model is similar to the Keras API:
```
model = get_model()  # get a fresh model
saved_model_path = "/tmp/tf_save"
tf.saved_model.save(model, saved_model_path)
```
Loading can be done with `tf.saved_model.load()`. However, since it is a lower-level API (and therefore has a wider range of use cases), it does not return a Keras model. Instead, it returns an object that contains functions that can be used for inference. For example:
```
DEFAULT_FUNCTION_KEY = "serving_default"
loaded = tf.saved_model.load(saved_model_path)
inference_func = loaded.signatures[DEFAULT_FUNCTION_KEY]
```
The loaded object may contain multiple functions, each associated with a key. `"serving_default"` is the default key for the inference function of a saved Keras model. To run inference with this function:
```
predict_dataset = eval_dataset.map(lambda image, label: image)
for batch in predict_dataset.take(1):
print(inference_func(batch))
```
You can also load and run inference in a distributed manner:
```
another_strategy = tf.distribute.MirroredStrategy()
with another_strategy.scope():
    loaded = tf.saved_model.load(saved_model_path)
    inference_func = loaded.signatures[DEFAULT_FUNCTION_KEY]

    dist_predict_dataset = another_strategy.experimental_distribute_dataset(
        predict_dataset)

    # Call the function in a distributed manner
    for batch in dist_predict_dataset:
        another_strategy.experimental_run_v2(inference_func,
                                             args=(batch,))
```
Calling the restored function is just a forward pass on the saved model (i.e. prediction). What if you want to continue training the loaded function, or embed it into a bigger model? A common practice is to wrap the loaded object in a Keras layer. Luckily, [TF Hub](https://www.tensorflow.org/hub) has [hub.KerasLayer](https://github.com/tensorflow/hub/blob/master/tensorflow_hub/keras_layer.py) for exactly this purpose, shown here:
```
import tensorflow_hub as hub
def build_model(loaded):
    x = tf.keras.layers.Input(shape=(28, 28, 1), name='input_x')
    # Wrap what was loaded in a Keras layer
    keras_layer = hub.KerasLayer(loaded, trainable=True)(x)
    model = tf.keras.Model(x, keras_layer)
    return model
another_strategy = tf.distribute.MirroredStrategy()
with another_strategy.scope():
    loaded = tf.saved_model.load(saved_model_path)
    model = build_model(loaded)

    model.compile(loss='sparse_categorical_crossentropy',
                  optimizer=tf.keras.optimizers.Adam(),
                  metrics=['accuracy'])
    model.fit(train_dataset, epochs=2)
```
As you can see, `hub.KerasLayer` wraps the result loaded with `tf.saved_model.load()` into a Keras layer that can be used to build another model. This is very useful for transfer learning.

### Which API should I use?

For saving, if you are working with a Keras model, it is recommended to use Keras's `model.save()` API. If what you are saving is not a Keras model, then the lower-level API is your only choice.

For loading, which API you use depends on what you want to get back. If you cannot (or do not want to) get a Keras model, then use `tf.saved_model.load()`. Otherwise, use `tf.keras.models.load_model()`. Note that you can get a Keras model back only if you saved a Keras model.

It is possible to mix and match the APIs. You can save a Keras model with `model.save`, and load a non-Keras model with the low-level API, `tf.saved_model.load`.
```
model = get_model()

# Saving the model using Keras's save() API
model.save(keras_model_path)

another_strategy = tf.distribute.MirroredStrategy()
# Loading the model using the lower-level API
with another_strategy.scope():
    loaded = tf.saved_model.load(keras_model_path)
```
### Caveats

A special case is when you have a Keras model that does not have well-defined inputs. For example, a Sequential model can be created without any input shapes (`Sequential([Dense(3), ...]`). Subclassed models also do not have well-defined inputs after initialization. In this case, you should stick with the lower-level APIs for both saving and loading; otherwise you will get an error.

To check whether your model has well-defined inputs, just check whether `model.inputs` is `None`. If it is not `None`, you are all good. Input shapes are defined automatically when the model is used in `.fit`, `.evaluate`, `.predict`, or when the model is called directly (`model(inputs)`).

Here is an example:
```
class SubclassedModel(tf.keras.Model):

    output_name = 'output_layer'

    def __init__(self):
        super(SubclassedModel, self).__init__()
        self._dense_layer = tf.keras.layers.Dense(
            5, dtype=tf.dtypes.float32, name=self.output_name)

    def call(self, inputs):
        return self._dense_layer(inputs)

my_model = SubclassedModel()
# my_model.save(keras_model_path)  # ERROR!
tf.saved_model.save(my_model, saved_model_path)
```

<hr style="margin-bottom: 40px;">
<img src="https://user-images.githubusercontent.com/7065401/58563302-42466a80-8201-11e9-9948-b3e9f88a5662.jpg"
style="width:400px; float: right; margin: 0 40px 40px 40px;"></img>
# Exercises
## Bike store sales

## Hands on!
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
sales = pd.read_csv(
'data/sales_data.csv',
parse_dates=['Date'])
sales.head()
```

### What's the mean of `Customer_Age`?
```
# your code goes here
```
Why don't you try with `.mean()`
```
sales['Customer_Age'].mean()
```
Go ahead and show a <b>density (KDE)</b> and a <b>box plot</b> with the `Customer_Age` data:
```
# your code goes here
sales['Customer_Age'].plot(kind='kde', figsize=(14,6))
sales['Customer_Age'].plot(kind='box', vert=False, figsize=(14,6))
```

### What's the mean of `Order_Quantity`?
```
# your code goes here
sales['Order_Quantity'].mean()
```
Go ahead and show a <b>histogram</b> and a <b>box plot</b> with the `Order_Quantity` data:
```
# your code goes here
sales['Order_Quantity'].plot(kind='hist', bins=30, figsize=(14,6))
sales['Order_Quantity'].plot(kind='box', vert=False, figsize=(14,6))
```

### How many sales per year do we have?
```
# your code goes here
sales['Year'].value_counts()
```
Go ahead and show a <b>pie plot</b> with the previous data:
```
# your code goes here
sales['Year'].value_counts().plot(kind='pie', figsize=(6,6))
```

### How many sales per month do we have?
```
# your code goes here
sales['Month'].value_counts()
```
Go ahead and show a <b>bar plot</b> with the previous data:
```
# your code goes here
sales['Month'].value_counts().plot(kind='bar', figsize=(14,6))
```

### Which country has the most sales (by quantity of sales)?
```
# your code goes here
sales['Country'].value_counts().head(1)
sales['Country'].value_counts()
```
Go ahead and show a <b>bar plot</b> of the sales per country:
```
# your code goes here
sales['Country'].value_counts().plot(kind='bar', figsize=(14,6))
```

### Create a list of every product sold
```
# your code goes here
#sales.loc[:, 'Product'].unique()
sales['Product'].unique()
```
Create a **bar plot** showing the 10 most sold products (best sellers):
```
# your code goes here
sales['Product'].value_counts().head(10).plot(kind='bar', figsize=(14,6))
```

### Can you see any relationship between `Unit_Cost` and `Unit_Price`?
Show a <b>scatter plot</b> between both columns.
```
# your code goes here
sales.plot(kind='scatter', x='Unit_Cost', y='Unit_Price', figsize=(6,6))
```

### Can you see any relationship between `Order_Quantity` and `Profit`?
Show a <b>scatter plot</b> between both columns.
```
# your code goes here
sales.plot(kind='scatter', x='Order_Quantity', y='Profit', figsize=(6,6))
```

### Can you see any relationship between `Profit` and `Country`?
Show a grouped <b>box plot</b> per country with the profit values.
```
# your code goes here
sales[['Profit', 'Country']].boxplot(by='Country', figsize=(10,6))
```

### Can you see any relationship between `Customer_Age` and `Country`?
Show a grouped <b>box plot</b> per country with the customer age values.
```
# your code goes here
sales[['Customer_Age', 'Country']].boxplot(by='Country', figsize=(10,6))
```

### Add and calculate a new `Calculated_Date` column
Use `Day`, `Month`, `Year` to create a `Date` column (`YYYY-MM-DD`).
```
# your code goes here
sales['Calculated_Date'] = sales[['Year', 'Month', 'Day']].apply(lambda x: '{}-{}-{}'.format(x[0], x[1], x[2]), axis=1)
sales['Calculated_Date'].head()
```
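The `apply`-based string building above works row by row. An equivalent vectorized alternative, sketched on a toy frame (the real `Month` column stores month names, which an explicit `%B` format handles):

```python
import pandas as pd

# Toy stand-in for the Year/Month/Day columns of `sales`
toy = pd.DataFrame({'Year': [2016, 2015],
                    'Month': ['May', 'December'],
                    'Day': [5, 31]})

# Vectorized string concatenation instead of a row-wise apply
dates = pd.to_datetime(toy['Year'].astype(str) + '-' + toy['Month'] + '-' + toy['Day'].astype(str),
                       format='%Y-%B-%d')
print(dates.dt.month.tolist())  # [5, 12]
```

Vectorized string operations avoid the per-row Python overhead of `apply`, which matters on larger frames.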

### Parse your `Calculated_Date` column into a datetime object
```
# your code goes here
sales['Calculated_Date'] = pd.to_datetime(sales['Calculated_Date'])
sales['Calculated_Date'].head()
```

### How did sales evolve through the years?
Show a <b>line plot</b> using `Calculated_Date` column as the x-axis and the count of sales as the y-axis.
```
# your code goes here
sales['Calculated_Date'].value_counts().plot(kind='line', figsize=(14,6))
```
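`value_counts()` orders the dates by frequency, not chronologically. A group-by-period alternative that yields a monthly, chronologically ordered series, ready for a line plot (toy dates standing in for `Calculated_Date`):

```python
import pandas as pd

# Toy stand-in for sales['Calculated_Date']
dates = pd.to_datetime(['2015-01-03', '2015-01-20', '2015-02-11', '2016-01-05'])
s = pd.Series(1, index=dates)

# Monthly sale counts in chronological order
monthly = s.groupby(s.index.to_period('M')).sum()
print(monthly.tolist())  # [2, 1, 1]
```

Aggregating to periods smooths the daily noise that a per-date count produces, which usually makes the trend line easier to read.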

### Increase the revenue of every sale by 50 U$S
```
# your code goes here
#sales['Revenue'] = sales['Revenue'] + 50
sales['Revenue'] += 50
```

### How many orders were made in `Canada` or `France`?
```
# your code goes here
sales.loc[(sales['Country'] == 'Canada') | (sales['Country'] == 'France')].shape[0]
```
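The chained `|` comparison above grows unwieldy as more countries are added; `.isin()` expresses the same filter more compactly (toy frame standing in for `sales`):

```python
import pandas as pd

# Toy stand-in for the sales frame
toy = pd.DataFrame({'Country': ['Canada', 'France', 'Germany', 'Canada']})

# Equivalent to (Country == 'Canada') | (Country == 'France')
n_orders = toy.loc[toy['Country'].isin(['Canada', 'France'])].shape[0]
print(n_orders)  # 3
```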

### How many `Bike Racks` orders were made from Canada?
```
# your code goes here
sales.loc[(sales['Country'] == 'Canada') & (sales['Sub_Category'] == 'Bike Racks')].shape[0]
```

### How many orders were made in each region (state) of France?
```
# your code goes here
france_states = sales.loc[sales['Country'] == 'France', 'State'].value_counts()
france_states
```
Go ahead and show a <b>bar plot</b> with the results:
```
# your code goes here
france_states.plot(kind='bar', figsize=(14,6))
```

### How many sales were made per category?
```
# your code goes here
sales['Product_Category'].value_counts()
```
Go ahead and show a <b>pie plot</b> with the results:
```
# your code goes here
sales['Product_Category'].value_counts().plot(kind='pie', figsize=(6,6))
```

### How many orders were made per accessory sub-categories?
```
# your code goes here
accessories = sales.loc[sales['Product_Category'] == 'Accessories', 'Sub_Category'].value_counts()
accessories
```
Go ahead and show a <b>bar plot</b> with the results:
```
# your code goes here
accessories.plot(kind='bar', figsize=(14,6))
```

### How many orders were made per bike sub-categories?
```
# your code goes here
bikes = sales.loc[sales['Product_Category'] == 'Bikes', 'Sub_Category'].value_counts()
bikes
```
Go ahead and show a <b>pie plot</b> with the results:
```
# your code goes here
bikes.plot(kind='pie', figsize=(6,6))
```

### Which gender has the most amount of sales?
```
# your code goes here
sales['Customer_Gender'].value_counts()
sales['Customer_Gender'].value_counts().plot(kind='bar')
```

### How many sales with more than 500 in `Revenue` were made by men?
```
# your code goes here
sales.loc[(sales['Customer_Gender'] == 'M') & (sales['Revenue'] > 500)].shape[0]
```

### Get the top-5 sales with the highest revenue
```
# your code goes here
sales.sort_values(['Revenue'], ascending=False).head(5)
```

### Get the sale with the highest revenue
```
# your code goes here
#sales.sort_values(['Revenue'], ascending=False).head(1)
cond = sales['Revenue'] == sales['Revenue'].max()
sales.loc[cond]
```

### What is the mean `Order_Quantity` of orders with more than 10K in revenue?
```
# your code goes here
cond = sales['Revenue'] > 10_000
sales.loc[cond, 'Order_Quantity'].mean()
```

### What is the mean `Order_Quantity` of orders with less than 10K in revenue?
```
# your code goes here
cond = sales['Revenue'] < 10_000
sales.loc[cond, 'Order_Quantity'].mean()
```

### How many orders were made in May of 2016?
```
# your code goes here
cond = (sales['Year'] == 2016) & (sales['Month'] == 'May')
sales.loc[cond].shape[0]
```

### How many orders were made between May and July of 2016?
```
# your code goes here
cond = (sales['Year'] == 2016) & (sales['Month'].isin(['May', 'June', 'July']))
sales.loc[cond].shape[0]
```
Show a grouped <b>box plot</b> per month with the profit values.
```
# your code goes here
profit_2016 = sales.loc[sales['Year'] == 2016, ['Profit', 'Month']]
profit_2016.boxplot(by='Month', figsize=(14,6))
```

### Add 7.2% TAX on every sale `Unit_Price` within United States
```
# your code goes here
#sales.loc[sales['Country'] == 'United States', 'Unit_Price'] = sales.loc[sales['Country'] == 'United States', 'Unit_Price'] * 1.072
sales.loc[sales['Country'] == 'United States', 'Unit_Price'] *= 1.072
```

```
# Dataset from here
# https://archive.ics.uci.edu/ml/datasets/Adult
import great_expectations as ge
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
%matplotlib inline
"""
age: continuous.
workclass: Private, Self-emp-not-inc, Self-emp-inc, Federal-gov, Local-gov, State-gov, Without-pay, Never-worked.
fnlwgt: continuous.
education: Bachelors, Some-college, 11th, HS-grad, Prof-school, Assoc-acdm, Assoc-voc, 9th, 7th-8th, 12th, Masters, 1st-4th, 10th, Doctorate, 5th-6th, Preschool.
education-num: continuous.
marital-status: Married-civ-spouse, Divorced, Never-married, Separated, Widowed, Married-spouse-absent, Married-AF-spouse.
occupation: Tech-support, Craft-repair, Other-service, Sales, Exec-managerial, Prof-specialty, Handlers-cleaners, Machine-op-inspct, Adm-clerical, Farming-fishing, Transport-moving, Priv-house-serv, Protective-serv, Armed-Forces.
relationship: Wife, Own-child, Husband, Not-in-family, Other-relative, Unmarried.
race: White, Asian-Pac-Islander, Amer-Indian-Eskimo, Other, Black.
sex: Female, Male.
capital-gain: continuous.
capital-loss: continuous.
hours-per-week: continuous.
native-country: United-States, Cambodia, England, Puerto-Rico, Canada, Germany, Outlying-US(Guam-USVI-etc), India, Japan, Greece, South, China, Cuba, Iran, Honduras, Philippines, Italy, Poland, Jamaica, Vietnam, Mexico, Portugal, Ireland, France, Dominican-Republic, Laos, Ecuador, Taiwan, Haiti, Columbia, Hungary, Guatemala, Nicaragua, Scotland, Thailand, Yugoslavia, El-Salvador, Trinadad&Tobago, Peru, Hong, Holand-Netherlands.
"""
categorical_columns = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country']
continuous_columns = ['age', 'fnlwgt', 'education-num', 'capital-gain', 'capital-loss', 'hours-per-week']
df = ge.read_csv('../data/adult.data.b_2_train.csv')
df_test = ge.read_csv('../data/adult.data.b_2_test.csv')
df.head()
df.shape
df.expect_column_values_to_be_in_set('sex', ['Female', 'Male'])
def strip_spaces(df):
    # Strip leading/trailing whitespace from every string column
    for column in df.columns:
        if isinstance(df[column].iloc[0], str):
            df[column] = df[column].apply(str.strip)

strip_spaces(df)
strip_spaces(df_test)
df.expect_column_values_to_be_in_set('sex', ['Female', 'Male'])
df['y'] = df['<=50k'].apply(lambda x: 0 if (x == '<=50K') else 1)
df_test['y'] = df_test['<=50k'].apply(lambda x: 0 if (x == '<=50K') else 1)
df['sex'].value_counts().plot(kind='bar')
sex_partition = ge.dataset.util.categorical_partition_data(df['sex'])
df.expect_column_chisquare_test_p_value_to_be_greater_than('sex', sex_partition)
df_test.expect_column_chisquare_test_p_value_to_be_greater_than('sex', sex_partition, output_format='SUMMARY')
plt.hist(df['age'])
age_partition = ge.dataset.util.continuous_partition_data(df['age'])
df.expect_column_bootstrapped_ks_test_p_value_to_be_greater_than('age', age_partition)
out = df_test.expect_column_bootstrapped_ks_test_p_value_to_be_greater_than('age', age_partition, output_format='SUMMARY')
print(out)
plt.plot(out['summary_obj']['expected_cdf']['x'], out['summary_obj']['expected_cdf']['cdf_values'])
plt.plot(out['summary_obj']['observed_cdf']['x'], out['summary_obj']['observed_cdf']['cdf_values'])
plt.plot(out['summary_obj']['expected_partition']['bins'][1:], out['summary_obj']['expected_partition']['weights'])
plt.plot(out['summary_obj']['observed_partition']['bins'][1:], out['summary_obj']['observed_partition']['weights'])
df['<=50k'].value_counts().plot(kind='bar')
df['education'].value_counts().plot(kind='bar')
education_partition = ge.dataset.util.categorical_partition_data(df['education'])
df.expect_column_chisquare_test_p_value_to_be_greater_than('education', education_partition)
df_test['education'].value_counts().plot(kind='bar')
df_test.expect_column_chisquare_test_p_value_to_be_greater_than('education', education_partition)
df_test.expect_column_kl_divergence_to_be_less_than('education', education_partition, threshold=0.1)
plt.hist(df['education-num'])
education_num_partition_auto = ge.dataset.util.continuous_partition_data(df['education-num'])
df.expect_column_bootstrapped_ks_test_p_value_to_be_greater_than('education-num', education_num_partition_auto)
education_num_partition_auto
education_num_partition_cat = ge.dataset.util.categorical_partition_data(df['education-num'])
df.expect_column_chisquare_test_p_value_to_be_greater_than('education-num', education_num_partition_cat)
df_test.expect_column_chisquare_test_p_value_to_be_greater_than('education-num', education_num_partition_cat)
education_num_partition = ge.dataset.util.continuous_partition_data(df['education-num'], bins='uniform', n_bins=10)
df.expect_column_bootstrapped_ks_test_p_value_to_be_greater_than('education-num', education_num_partition)
s1 = df['education'][df['y'] == 1].value_counts()
s1.name = 'education_y_1'
s2 = df['education'][df['y'] == 0].value_counts()
s2.name = 'education_y_0'
plotter = pd.concat([s1, s2], axis=1)
p1 = plt.bar(range(len(plotter)), plotter['education_y_0'])
p2 = plt.bar(range(len(plotter)), plotter['education_y_1'], bottom=plotter['education_y_0'])
plt.xticks(range(len(plotter)), plotter.index, rotation='vertical')
plt.show()
df.get_expectation_suite()
from sklearn.preprocessing import OneHotEncoder, LabelEncoder
from sklearn.ensemble import RandomForestClassifier
def build_transformer(df_train):
le = {}
ohe = OneHotEncoder()
X_cat = pd.DataFrame()
for cat_column in categorical_columns:
le[cat_column] = LabelEncoder()
X_cat[cat_column + '_le'] = le[cat_column].fit_transform(df_train[cat_column])
X_cat = ohe.fit_transform(X_cat)
X_train = np.append(X_cat.toarray(), df_train[continuous_columns], axis=1)
return le, ohe, X_train
def apply_transformer(le, ohe, df_test):
X_cat = pd.DataFrame()
for cat_column in categorical_columns:
X_cat[cat_column + '_le'] = le[cat_column].transform(df_test[cat_column])
X_cat = ohe.transform(X_cat)
X_test = np.append(X_cat.toarray(), df_test[continuous_columns], axis=1)
return X_test
clf = RandomForestClassifier()
le, ohe, X_train = build_transformer(df)
clf.fit(X_train, df['y'])
clf.score(X_train, df['y'])
my_expectations = df.get_expectation_suite()
my_expectations
results = df_test.validate(expectation_suite=my_expectations)
results
failures = df_test.validate(expectation_suite=my_expectations, only_return_failures=True)
failures
X_test = apply_transformer(le, ohe, df_test)
clf.score(X_test, df_test['y'])
df_test_2 = ge.read_csv('../data/adult.data.b_1_train.csv')
strip_spaces(df_test_2)
#df_test_2 = df_test_2[df_test_2['native-country'] != 'Holand-Netherlands']
df_test_2['y'] = df_test_2['<=50k'].apply(lambda x: 0 if (x == '<=50K') else 1)
X_test_2 = apply_transformer(le, ohe, df_test_2)
clf.score(X_test_2, df_test_2['y'])
# Health Screening: Preventative Checkup!
failures = df_test_2.validate(my_expectations, only_return_failures=True, output_format='SUMMARY')
failures
df_test_2['sex'].value_counts().plot(kind='bar')
df_test_3 = ge.read_csv('../data/adult.data.b_1_test.csv')
strip_spaces(df_test_3)
#df_test_3 = df_test_3[df_test_3['native-country'] != 'Holand-Netherlands']
df_test_3['y'] = df_test_3['<=50k'].apply(lambda x: 0 if (x == '<=50K') else 1)
X_test_3 = apply_transformer(le, ohe, df_test_3)
clf.score(X_test_3, df_test_3['y'])
#What could have gone wrong?
#
# a. The world changed.
# b. New sensor means different data.
# c. Bueller? Bueller?
# d. Biased sample of the data
result = df_test_2.validate(my_expectations, only_return_failures=True, output_format='SUMMARY')
result
```
| github_jupyter |
# Nothing But NumPy: A 3-layer Binary Classification Neural Network on Iris Flowers
Part of the blog ["Nothing but NumPy: Understanding & Creating Binary Classification Neural Networks with Computational Graphs from Scratch"](https://medium.com/@rafayak/nothing-but-numpy-understanding-creating-binary-classification-neural-networks-with-e746423c8d5c)- by [Rafay Khan](https://twitter.com/RafayAK)
In this notebook we'll create a 3-layer neural network (i.e. one input layer, two hidden layers and one output layer) and train it on the Iris dataset using _only_ **sepals** as input features to classify **Iris-virginica vs. others**
First, let's import NumPy, our neural net layers, the Binary Cross-Entropy(bce) Cost function and helper functions.
_Feel free to look into the helper functions in the utils directory._
```
import numpy as np
from Layers.LinearLayer import LinearLayer
from Layers.ActivationLayer import SigmoidLayer
from util.utilities import *
from util.cost_functions import compute_stable_bce_cost
import matplotlib.pyplot as plt
# to show all the generated plots inline in the notebook
%matplotlib inline
```

For convenience we'll load the data through [scikit-learn](https://scikit-learn.org/stable/index.html#).
If you don't have it installed please refer to this [link](https://scikit-learn.org/stable/install.html)
```
# load data from scikit-learn's datasets module
from sklearn.datasets import load_iris
iris = load_iris() # returns a python dictionary with the dataset
```
Let's see what the dataset contains:
```
list(iris.keys())
```
- **data**: contains the 4 features of each example in a row, has 150 rows
- **target**: contains the label for each example _(0->setosa, 1->versicolor, 2->virginica)_
- **target_names**: contains the names of each target label
- **DESCR**: contains the description of the dataset
- **feature_names**: contains the names of the 4 features (sepal length, sepal width, petal length, petal width)
- **filename** : where the file is located on the computer
Let's explore the data:
```
iris.data.shape # rows(examples), cols(features)
iris.target.shape # labels for 150 flowers
iris.target_names # print the name of the 3 labels(species) an example could belong to
iris.feature_names # name of each feature in data's columns
iris.data[:5, :] # print first 5 examples from the Iris dataset
iris.target[:5] # print labels for the first 5 examples in the Iris dataset
```
So, the data of the **first** 5 examples looks as follows:
| example# | sepal length (cm) | sepal width (cm) | petal length (cm) | petal width (cm) | target | target name|
| --- | --- | --- | --- | --- | --- | --- |
| 0 | 5.1 | 3.5 | 1.4 | 0.2| 0| setosa
| 1 |4.9| 3. | 1.4| 0.2|0| setosa
| 2 |4.7| 3.2| 1.3| 0.2|0| setosa
| 3 |4.6| 3.1| 1.5| 0.2|0| setosa
| 4 |5. | 3.6| 1.4| 0.2|0| setosa
For our model we will only use **sepal length and sepal width** to classify whether the Iris flower is _virginica_ or _other_
```
# take only sepal length(0th col) and sepal width(1st col)
X = iris.data[:, :2]
# fix the labels shape so that instead of (150,) it's (150,1),
# helps avoiding weird broadcasting errors
Y = (iris.target).reshape((150, 1))
X.shape
Y.shape
```
**Notice** in the table above that the first 5 examples belong to __'setosa'__ species, this pattern continues in the dataset(the pattern is all _setosa_ examples followed by _versicolor_ examples and finally _virginica_ examples). ___A good practice is to randomize the data before training a neural network, so that the neural network does not, by accident, learn a trivial ordering pattern in the data.___
So let's randomize the data
```
np.random.seed(48) # for reproducible randomization
random_indices = np.random.permutation(len(X)) # generate random indices
X_train = X[random_indices]
Y_train = Y[random_indices]
```
Now let's again print the first 5 examples and see the results (note this time there are only two features - _sepal length_, _sepal width_)
```
X_train[:5, :]
Y_train[:5]
```
Now, the data of the **first** 5 examples looks as follows:
| example# | sepal length (cm) | sepal width (cm) | target | target name|
| --- | --- | --- | --- | --- |
| 0 | 5.7| 2.9| 1| versicolor
| 1 | 6.1| 2.8| 1| versicolor
| 2 | 6.1| 2.6| 2| virginica
| 3 | 4.5| 2.3| 0| setosa
| 4 | 5.9| 3.2| 1| versicolor
Finally, let's put the training set (`X_train`) and labels (`Y_train`) in the correct shape `(feat, examples)` and `(examples,1)`, respectively. Also we'll make the target label ___virginica=1___ and the rest ___0___.
```
# Transpose the data so that it's in the correct shape
# for passing through neural network
# also binarize the classes virginica=1 and the rest 0
X_train = X_train.T
Y_train = Y_train.T
Y_train = (Y_train==2).astype('int') # uses bool logic to binarize labels, wherever label=2 output True(1) rest False(0)
print("Shape of training data, X_train: {}".format(X_train.shape))
print("Shape of labels, Y_train: {}".format(Y_train.shape))
Y_train[:, :5] # print first five examples
```
Before training the neural net let's visualize the data:
```
import matplotlib  # needed for matplotlib.colors; `import matplotlib.pyplot as plt` alone does not bind the name
cmap = matplotlib.colors.ListedColormap(["red", "green"], name='from_list', N=None)
# scatter plot
scatter = plt.scatter(X_train.T[:, 0], X_train.T[:, 1],
s=200, c=np.squeeze(Y_train.T),
marker='x', cmap=cmap) # s-> size of marker
plt.xlabel('sepal length', size=20)
plt.ylabel('sepal width', size=20)
plt.legend(scatter.legend_elements()[0], ['others', 'virginica'])
plt.show()
```
### Notice that this data is very tough to classify perfectly, as many of the data points are intertwined (i.e. some green and red points are too close to each other)
***
***
#### Now we are ready to setup and train the Neural Network
This is the neural net architecture we'll use

```
# define training constants
learning_rate = 1
number_of_epochs = 10000
np.random.seed(48) # set seed value so that the results are reproducible
# (weights will now be initialized to the same pseudo-random numbers, each time)
# Our network architecture has the shape:
# (input)--> [Linear->Sigmoid] -> [Linear->Sigmoid] -> [Linear->Sigmoid] -->(output)
#------ LAYER-1 ----- define 1st hidden layer that takes in training data
Z1 = LinearLayer(input_shape=X_train.shape, n_out=5, ini_type='xavier')
A1 = SigmoidLayer(Z1.Z.shape)
#------ LAYER-2 ----- define 2nd hidden layer that takes in values from 1st-hidden layer
Z2= LinearLayer(input_shape=A1.A.shape, n_out= 3, ini_type='xavier')
A2= SigmoidLayer(Z2.Z.shape)
#------ LAYER-3 ----- define output layer that takes in values from 2nd-hidden layer
Z3= LinearLayer(input_shape=A2.A.shape, n_out=1, ini_type='xavier')
A3= SigmoidLayer(Z3.Z.shape)
```
Now we can start the training loop:
```
costs = [] # initially empty list, this will store all the costs after a certain number of epochs
# Start training
for epoch in range(number_of_epochs):
# ------------------------- forward-prop -------------------------
Z1.forward(X_train)
A1.forward(Z1.Z)
Z2.forward(A1.A)
A2.forward(Z2.Z)
Z3.forward(A2.A)
A3.forward(Z3.Z)
# ---------------------- Compute Cost ----------------------------
cost, dZ3 = compute_stable_bce_cost(Y=Y_train, Z=Z3.Z)
# print and store Costs every 100 iterations and of the last iteration.
if (epoch % 100) == 0 or epoch == number_of_epochs - 1:
print("Cost at epoch#{}: {}".format(epoch, cost))
costs.append(cost)
# ------------------------- back-prop ----------------------------
Z3.backward(dZ3)
A2.backward(Z3.dA_prev)
Z2.backward(A2.dZ)
A1.backward(Z2.dA_prev)
Z1.backward(A1.dZ)
# ----------------------- Update weights and bias ----------------
Z3.update_params(learning_rate=learning_rate)
Z2.update_params(learning_rate=learning_rate)
Z1.update_params(learning_rate=learning_rate)
```
Now let's see how well the neural net performs on the training data after the training has finished
The `predict` helper function in the cell below returns three things:
* `p`: predicted labels (output 1 if predicted output is greater than classification threshold `thresh`)
* `probas`: raw probabilities (how sure the neural net thinks the output is 1, this is just `P_hat`)
* `accuracy`: the number of correct predictions from total predictions
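The `predict` helper itself lives in the `util` directory and isn't shown in this notebook. As a hedged sketch — assuming the same layer interface used above, where each layer's `forward` populates `.Z`/`.A` — it plausibly looks like this (`predict_sketch` is a hypothetical name, not the library's):

```python
import numpy as np

def predict_sketch(X, Y, Zs, As, thresh=0.5):
    """Sketch of the predict helper: forward pass, threshold, accuracy."""
    A = X
    for Z_layer, A_layer in zip(Zs, As):
        Z_layer.forward(A)          # linear step: populates Z_layer.Z
        A_layer.forward(Z_layer.Z)  # sigmoid step: populates A_layer.A
        A = A_layer.A
    probas = A                           # P_hat: raw sigmoid outputs
    p = (probas > thresh).astype(int)    # hard labels at the threshold
    accuracy = np.mean(p == Y) * 100     # percent of correct predictions
    return p, probas, accuracy
```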
```
classifcation_thresh = 0.5
predicted_outputs, p_hat, accuracy = predict(X=X_train, Y=Y_train,
Zs=[Z1, Z2, Z3], As=[A1, A2, A3], thresh=classifcation_thresh)
print("The predicted outputs of first 5 examples: \n{}".format(predicted_outputs[:,:5]))
print("The predicted probabilities of first 5 examples:\n {}".format(np.round(p_hat[:, :5], decimals=3)))
print("\nThe accuracy of the model is: {}%".format(accuracy))
```
#### The Learning Curve
```
plot_learning_curve(costs, learning_rate, total_epochs=number_of_epochs)
```
#### The Decision Boundary
```
plot_decision_boundary(lambda x: predict_dec(Zs=[Z1, Z2, Z3], As=[A1, A2, A3], X=x.T, thresh=classifcation_thresh),
X=X_train.T, Y=Y_train)
```
#### The Shaded Decision Boundary
```
plot_decision_boundary_shaded(lambda x: predict_dec(Zs=[Z1, Z2, Z3], As=[A1, A2, A3], X=x.T, thresh=classifcation_thresh),
X=X_train.T, Y=Y_train)
```
## Bonus
Train this dataset using only a 1-layer or 2-layer neural network
_(Hint: works slightly better)_
| github_jupyter |
```
import san
from src_end2end import statistical_features
import lsa_features
import pickle
import numpy as np
from tqdm import tqdm
import pandas as pd
import os
import skopt
from skopt import gp_minimize
from sklearn import preprocessing
from skopt.space import Real, Integer, Categorical
from skopt.utils import use_named_args
from sklearn.metrics import f1_score
st_models = ["roberta-large-nli-stsb-mean-tokens", "xlm-r-large-en-ko-nli-ststb", "distilbert-base-nli-mean-tokens"]
from sentence_transformers import SentenceTransformer
st_models = ["roberta-large-nli-stsb-mean-tokens", "xlm-r-large-en-ko-nli-ststb", "distilbert-base-nli-mean-tokens"]
def embedd_bert(text, st_model = 'paraphrase-distilroberta-base-v1', split = 'train'):
paths = "temp_berts/"+st_model+"_"+split+'.pkl'
if os.path.isfile(paths):
sentence_embeddings = pickle.load(open(paths,'rb'))
return sentence_embeddings
model = SentenceTransformer(st_model)
sentence_embeddings = model.encode(text)
with open(paths, 'wb') as f:
pickle.dump(sentence_embeddings, f)
return sentence_embeddings
from sentence_transformers import SentenceTransformer
st_models = ["roberta-large-nli-stsb-mean-tokens", "xlm-r-large-en-ko-nli-ststb", "distilbert-base-nli-mean-tokens"]
def embedd_bert2(text, st_model = 'paraphrase-distilroberta-base-v1'):
text = [t[:512] for t in text]
model = SentenceTransformer(st_model)
sentence_embeddings = model.encode(text)
return sentence_embeddings
def export_kgs(dataset):
path = "representations/"+dataset+"/"
for split in ["train", "dev", "test"]:
for kg in ["complex", "transe", "quate", "simple", "rotate", "distmult"]:
path_tmp = path + split + "/" + kg + ".csv"
tmp_kg = prep_kgs(kg, split)
tmp_kg = np.array((tmp_kg))
np.savetxt(path_tmp, tmp_kg, delimiter=",")
def prep_kgs(kg_emb, split='train'):
embs = []
global dataset
path_in = "kg_emb_dump/"+dataset+"/"+split+"_"+kg_emb+'_n.pkl'
with open(path_in, "rb") as f:
kgs_p = pickle.load(f)
for x,y in kgs_p:
embs.append(y)
return embs
def export_kgs_spec(dataset):
path = "representations/"+dataset+"/"
for split in ["train", "dev", "test"]:
for kg in ["complex", "transe", "quate", "simple", "rotate", "distmult"]:
path_tmp = path + split + "/" + kg + "_entity.csv"
tmp_kg = prep_kgs2(kg, split)
tmp_kg = np.array((tmp_kg))
np.savetxt(path_tmp, tmp_kg, delimiter=",")
def export_LM(dataset):
texts = {}
ys = {}
path = "representations/"+dataset+"/"
for thing in ["train", "dev", "test"]:
path_in = "data/final/"+dataset+"/"+thing+'.csv'
df = pd.read_csv(path_in, encoding='utf-8')
texts[thing] = df.text_a.to_list()
#ys[thing] = df.label.to_list()
        statistical = statistical_features.fit_space(texts[thing])
        kg = 'stat'
        path_tmp = path + thing + "/" + kg + ".csv"
        np.savetxt(path_tmp, statistical, delimiter=",")
bertz = embedd_bert2(texts[thing], st_models[0])
kg = st_models[0]
path_tmp = path + thing + "/" + kg + ".csv"
np.savetxt(path_tmp, bertz, delimiter=",")
bertz2 = embedd_bert2(texts[thing], st_models[1])
kg = st_models[1]
path_tmp = path + thing + "/" + kg + ".csv"
np.savetxt(path_tmp, bertz2, delimiter=",")
bertz3 = embedd_bert2(texts[thing], st_models[2])
kg = st_models[2]
path_tmp = path + thing + "/" + kg + ".csv"
np.savetxt(path_tmp, bertz3, delimiter=",")
for dataset in tqdm(["pan2020", "AAAI2021_COVID19_fake_news", "LIAR_PANTS", "ISOT", "FakeNewsNet"]):
path = "representations/"+dataset+"/"
for thing in ["train", "dev", "test"]:
path_in = "data/final/"+dataset+"/"+thing+'.csv'
df = pd.read_csv(path_in, encoding='utf-8')
if dataset == "pan2020":
ys = df.labels.to_list()
else:
ys = df.label.to_list()
path_tmp = path + thing + "/" + "_ys.csv"
np.savetxt(path_tmp, ys, delimiter=",")
def prep_kgs2(kg_emb, split='train'):
embs = []
global dataset
path_in = "kg_emb_dump/"+dataset+"/"+split+"_"+kg_emb+'_speakers.pkl'
with open(path_in, "rb") as f:
kgs_p = pickle.load(f)
for x,y in kgs_p:
embs.append(y)
return embs
from tqdm import tqdm
for dataset in tqdm(["LIAR_PANTS", "FakeNewsNet"]):
export_kgs_spec(dataset)
for dataset in tqdm(["pan2020", "AAAI2021_COVID19_fake_news", "LIAR_PANTS", "ISOT", "FakeNewsNet"]):
export_kgs(dataset)
from tqdm import tqdm
for dataset in tqdm(["ISOT", "pan2020"]):  # full list: "LIAR_PANTS", "pan2020", "ISOT", "AAAI2021_COVID19_fake_news", "FakeNewsNet"
export_LM(dataset)
```
| github_jupyter |
```
# default_exp qlearning.dqn_target
#export
import torch.nn.utils as nn_utils
from fastai.torch_basics import *
from fastai.data.all import *
from fastai.basics import *
from dataclasses import field,asdict
from typing import List,Any,Dict,Callable
from collections import deque
import gym
import torch.multiprocessing as mp
from torch.optim import *
from fastrl.data import *
from fastrl.async_data import *
from fastrl.basic_agents import *
from fastrl.learner import *
from fastrl.metrics import *
from fastrl.ptan_extension import *
from fastrl.qlearning.dqn import *
if IN_NOTEBOOK:
from IPython import display
import PIL.Image
```
# Target DQN
```
# export
class TargetDQNTrainer(Callback):
def __init__(self,n_batch=0): store_attr()
def after_pred(self):
exps=[ExperienceFirstLast(*o) for o in self.learn.sample_yb]
s=torch.stack([e.state for e in exps]).float().to(default_device())
a=torch.stack([e.action for e in exps]).to(default_device())
sp=torch.stack([e.last_state for e in exps]).float().to(default_device())
r=torch.stack([e.reward for e in exps]).float().to(default_device())
d=torch.stack([e.done for e in exps]).to(default_device())
state_action_values = self.learn.model(s.float()).gather(1, a.unsqueeze(-1)).squeeze(-1)
# next_state_values = self.learn.target_model(sp.float()).max(1)[0]
next_state_values=self.get_next_state_values(sp)
next_state_values[d] = 0.0
expected_state_action_values=next_state_values.detach()*(self.learn.discount**self.learn.n_steps)+r
# print(*self.learn.yb,self.learn.pred)
# print(self.learn.pred,self.learn.yb)
# print(self.learn._yb,self.learn.yb[0])
self.learn._yb=self.learn.yb
self.learn.yb=(expected_state_action_values.float(),)
# print(self.learn.yb[0].mean(),self.learn.yb[0].size())
self.learn.pred=state_action_values
# print(self.learn.pred.mean(),self.learn.pred.size())
# print(self.learn.agent.a_selector.epsilon,self.n_batch)
def get_next_state_values(self,sp):
return self.learn.target_model(sp.float()).max(1)[0]
# def after_epoch(self): print(len(self.learn.cbs[4].queue))
def after_loss(self):
self.learn.yb=self.learn._yb
def after_batch(self):
if self.n_batch%self.learn.target_sync==0:
# print('copy over',self.n_batch)
self.learn.target_model.load_state_dict(self.learn.model.state_dict())
self.n_batch+=1
# export
class TargetDQNLearner(AgentLearner):
def __init__(self,dls,discount=0.99,n_steps=3,target_sync=300,**kwargs):
store_attr()
self.target_q_v=[]
super().__init__(dls,loss_func=nn.MSELoss(),**kwargs)
self.target_model=deepcopy(self.model)
env='CartPole-v1'
model=LinearDQN((4,),2)
agent=DiscreteAgent(model=model.to(default_device()),device=default_device(),
a_selector=EpsilonGreedyActionSelector())
block=FirstLastExperienceBlock(agent=agent,seed=0,n_steps=1,dls_kwargs={'bs':1,'num_workers':0,'verbose':False,'indexed':True,'shuffle_train':False})
blk=IterableDataBlock(blocks=(block),
splitter=FuncSplitter(lambda x:False),
)
dls=blk.dataloaders([env]*1,n=1*1000,device=default_device())
learner=TargetDQNLearner(dls,agent=agent,n_steps=1,cbs=[EpsilonTracker(e_steps=100),
ExperienceReplay(sz=100000,bs=32,starting_els=32,max_steps=gym.make(env)._max_episode_steps),
TargetDQNTrainer],metrics=[AvgEpisodeRewardMetric(experience_cls=ExperienceFirstLast,always_extend=True)])
learner.fit(47,lr=0.0001,wd=0)
# hide
from nbdev.export import *
from nbdev.export2html import *
notebook2script()
notebook2html()
```
| github_jupyter |
# 1. Import Libraries
```
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.compose import ColumnTransformer
from jcopml.pipeline import num_pipe, cat_pipe
from jcopml.utils import save_model, load_model
from jcopml.plot import plot_missing_value
from jcopml.feature_importance import mean_score_decrease
from luwiji.text_proc import illustration
from jcopml.plot import plot_confusion_matrix
from jcopml.plot import plot_roc_curve
from jcopml.plot import plot_classification_report
from jcopml.plot import plot_pr_curve
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from string import punctuation
sw_eng = stopwords.words('english')
import string
import seaborn as sns
```
# 2. Import Dataset
```
df = [line.rstrip() for line in open('SMSSpamCollection')]
df
df = pd.read_csv('SMSSpamCollection', names=['label','message'], sep='\t')
df.head()
df.label = df.label.replace({'ham': 0 ,'spam': 1})
df
```
# 3. Exploratory Data Analysis (EDA)
### Data Information
```
df.info()
df.shape
plot_missing_value(df)
```
### Data Description
```
df.head()
df.describe()
df.groupby('label').describe()
```
### Count a Character in Text Message
```
df['Length_Char'] = df['message'].apply(len)
df.head()
# Visualize a length of Character in text message
df['Length_Char'].plot.hist(bins = 150, edgecolor='black')
df.Length_Char.describe()
#Grab the maximum character in text message
df[df['Length_Char'] == 910]['message'].iloc[0]
```
### Visualize Label Distribution
```
df.hist(column='Length_Char', by='label', bins =60, figsize=(12,4), edgecolor='black', color = 'red')
```
## Cleaning Dataset
```
df.message
import re
def clean_data(text):
text = text.lower()
clean_word = word_tokenize(text)
clean_word = [word for word in clean_word if word not in punctuation]
clean_word = [word for word in clean_word if len(word) > 1 and word.isalpha()]
clean_word = [word for word in clean_word if word not in sw_eng]
emoji_removal = re.compile("["
u"\U0001F600-\U0001F64F" # emoticons
u"\U0001F300-\U0001F5FF" # symbols & pictographs
u"\U0001F680-\U0001F6FF" # transport & map symbols
u"\U0001F1E0-\U0001F1FF" # flags (iOS)
"]+", flags=re.UNICODE)
clean_word = ' '.join(clean_word)
return emoji_removal.sub(r'', clean_word)
df.message = df.message.apply(clean_data)
df.message
```
## Check Imbalanced Dataset
```
sns.countplot(df.label)
df.label.value_counts()
```
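The counts above show the classes are imbalanced (far more ham than spam). This notebook compensates by stratifying the split below; another common option, not used here, is inverse-frequency class weights, which sklearn-style classifiers (including `LGBMClassifier`) accept via their `class_weight` argument. A minimal sketch of the "balanced" heuristic:

```python
import numpy as np

def balanced_class_weights(y):
    """Inverse-frequency weights, matching sklearn's 'balanced' heuristic:
    n_samples / (n_classes * count(class))."""
    classes, counts = np.unique(y, return_counts=True)
    weights = len(y) / (len(classes) * counts)
    return dict(zip(classes.tolist(), weights.tolist()))

# e.g. LGBMClassifier(class_weight=balanced_class_weights(y_train), ...)
```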
# 4. Dataset Splitting
```
X = df.message
y = df.label
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y, random_state=42)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
```
# 5. Modeling
```
from sklearn.feature_extraction.text import TfidfVectorizer
from lightgbm import LGBMClassifier
from sklearn.model_selection import RandomizedSearchCV
from jcopml.tuning import random_search_params as rsp
from jcopml.tuning.space import Integer, Real
pipeline = Pipeline([
('prep', TfidfVectorizer()),
('algo', LGBMClassifier(n_jobs=-1, random_state=42))
])
parameter = {'algo__num_leaves': Integer(low=60, high=300),
'algo__max_depth': Integer(low=1, high=10),
'algo__learning_rate': Real(low=-2, high=0, prior='log-uniform'),
'algo__n_estimators': Integer(low=150, high=300),
'algo__min_child_samples': Integer(low=10, high=40),
'algo__min_child_weight': Real(low=-3, high=-2, prior='log-uniform'),
'algo__subsample': Real(low=0.3, high=0.8, prior='uniform'),
'algo__colsample_bytree': Real(low=0.1, high=1, prior='uniform'),
'algo__reg_alpha': Real(low=-1, high=1, prior='log-uniform'),
'algo__reg_lambda': Real(low=-3, high=1, prior='log-uniform'),
}
model = RandomizedSearchCV(pipeline, parameter, cv=3, n_iter= 100, n_jobs=-2, verbose=1, random_state=42)
model.fit(X_train, y_train)
print(model.best_params_)
print(model.score(X_train, y_train), model.best_score_, model.score(X_test, y_test))
```
# 6. Hyperparameters Tuning
```
pipeline = Pipeline([
('prep', TfidfVectorizer()),
('algo', LGBMClassifier(n_jobs=-1, random_state=42))
])
parameter = {'algo__num_leaves': Integer(low=60, high=300),
'algo__max_depth': Integer(low=1, high=10),
'algo__learning_rate': Real(low=-2, high=0, prior='log-uniform'),
'algo__n_estimators': Integer(low=150, high=300),
'algo__min_child_samples': Integer(low=10, high=40),
'algo__min_child_weight': Real(low=-3, high=-2, prior='log-uniform'),
'algo__subsample': Real(low=0.3, high=0.8, prior='uniform'),
'algo__colsample_bytree': Real(low=0.1, high=1, prior='uniform'),
'algo__reg_alpha': Real(low=-1, high=1, prior='log-uniform'),
'algo__reg_lambda': Real(low=-2, high=1, prior='log-uniform'),
}
model = RandomizedSearchCV(pipeline, parameter, cv=3, n_iter= 100, n_jobs=-2, verbose=1, random_state=42)
model.fit(X_train, y_train)
print(model.best_params_)
print(model.score(X_train, y_train), model.best_score_, model.score(X_test, y_test))
```
# 7. Evaluation
## 7.1. Classification Report
```
plot_classification_report(X_train, y_train, X_test, y_test, model)
plot_classification_report(X_train, y_train, X_test, y_test, model, report=True)
```
## 7.2. Confusion Matrix
```
plot_confusion_matrix(X_train, y_train, X_test, y_test, model)
```
## 7.3. ROC AUC Curve
```
plot_roc_curve(X_train, y_train, X_test, y_test, model)
```
## 7.4. Precision-Recall Curve
```
plot_pr_curve(X_train, y_train, X_test, y_test, model)
```
### Result Analysis
```
df_analysis = pd.DataFrame(X_test)
df_analysis['Prediction'] = model.predict(X_test)
df_analysis['Actual'] = y_test
df_analysis
df_analysis[(df_analysis['Prediction'] == 0) & (df_analysis['Actual'] == 1)]
df_analysis[(df_analysis['Prediction'] == 1) & (df_analysis['Actual'] == 0)]
```
# Save Model
```
save_model(model.best_estimator_, 'SMS_Spam_Classifier_LGBM_Classifier.pkl')
```
| github_jupyter |
# Programming_Assignment17
### Question1.
Create a function that takes three arguments a, b, c and returns the sum of the
numbers that are evenly divided by c from the range a, b inclusive.
Examples
evenly_divisible(1, 10, 20) ➞ 0
# No number between 1 and 10 can be evenly divided by 20.
evenly_divisible(1, 10, 2) ➞ 30
# 2 + 4 + 6 + 8 + 10 = 30
evenly_divisible(1, 10, 3) ➞ 18
# 3 + 6 + 9 = 18
```
def sumDivisibles(a, b, c):
sum = 0
for i in range(a, b + 1):
if (i % c == 0):
sum += i
return sum
a = int(input('Enter a : '))
b = int(input('Enter b : '))
c = int(input('Enter c : '))
print(sumDivisibles(a, b, c))
```
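The loop above is O(b − a). Because the multiples of c inside [a, b] form an arithmetic series, the same sum also has a closed form; a sketch of an O(1) alternative (assuming positive a, b and c):

```python
def sum_divisibles_fast(a, b, c):
    """Sum of multiples of c in [a, b] via the arithmetic-series formula."""
    first = ((a + c - 1) // c) * c   # smallest multiple of c >= a
    last = (b // c) * c              # largest multiple of c <= b
    if first > last:                 # no multiple of c falls in the range
        return 0
    n = (last - first) // c + 1      # number of multiples
    return n * (first + last) // 2   # series sum: n * (first + last) / 2
```

For the examples above it returns 30, 18 and 0, matching the loop version.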
### Question2.
Create a function that returns True if a given inequality expression is correct and
False otherwise.
Examples
correct_signs("3 > 7 < 11") ➞ False
correct_signs("13 > 44 > 33 > 1") ➞ False
correct_signs("1 < 2 < 6 < 9 > 3") ➞ True
```
def correct_signs(txt):
    return eval(txt)
print(correct_signs("3 > 7 < 11"))
print(correct_signs("13 > 44 > 33 > 1"))
print(correct_signs("1 < 2 < 6 < 9 > 3"))
```
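`eval` keeps the solution short but executes arbitrary Python, so it should never see untrusted input. A hedged eval-free sketch that tokenizes the expression and checks every adjacent pair — which mirrors Python's chained-comparison semantics, so it agrees with the version above:

```python
import operator

def correct_signs_safe(txt):
    """Evaluate a chained inequality like '1 < 2 < 6 > 3' without eval()."""
    ops = {'<': operator.lt, '>': operator.gt,
           '<=': operator.le, '>=': operator.ge}
    tokens = txt.split()
    numbers = [int(t) for t in tokens[0::2]]   # even positions: operands
    comparisons = tokens[1::2]                 # odd positions: operators
    # a chained comparison holds iff every adjacent pair holds
    return all(ops[op](x, y)
               for x, op, y in zip(numbers, comparisons, numbers[1:]))
```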
### Question3.
Create a function that replaces all the vowels in a string with a specified character.
Examples
replace_vowels('the aardvark', '#') ➞ 'th# ##rdv#rk'
replace_vowels('minnie mouse', '?') ➞ 'm?nn?? m??s?'
replace_vowels('shakespeare', '*') ➞ 'sh*k*sp**r*'
```
def replace_vowels(str, s):
vowels = 'AEIOUaeiou'
for ele in vowels:
str = str.replace(ele, s)
return str
input_str = input("enter a string : ")
s = input("enter a vowel replacing string : ")
print("\nGiven Sting:", input_str)
print("Given Specified Character:", s)
print("After replacing vowels with the specified character:", replace_vowels(input_str, s))
```
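The loop above calls `str.replace` once per vowel (ten passes over the string); `str.translate` performs all the substitutions in a single pass. A sketch:

```python
def replace_vowels_translate(text, s):
    """Replace every vowel (either case) with s in one pass via str.translate."""
    table = str.maketrans({v: s for v in 'AEIOUaeiou'})
    return text.translate(table)
```

For example, `replace_vowels_translate('minnie mouse', '?')` gives `'m?nn?? m??s?'`.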
### Question4.
Write a function that calculates the factorial of a number recursively.
Examples
factorial(5) ➞ 120
factorial(3) ➞ 6
factorial(1) ➞ 1
factorial(0) ➞ 1
```
def factorial(n):
if n == 0:
return 1
return n * factorial(n-1)
num = int(input('enter a number :'))
print("Factorial of", num, "is", factorial(num))
```
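One caveat with the recursive version: Python's default recursion limit (around 1000 frames) means very large n raise `RecursionError`. An iterative sketch avoids that, and can be sanity-checked against the standard library's `math.factorial`:

```python
import math

def factorial_iterative(n):
    """Same result as the recursive version, without recursion-depth limits."""
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

# sanity check against the standard library
assert factorial_iterative(5) == math.factorial(5) == 120
```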
### Question 5
Hamming distance is the number of characters that differ between two strings.
To illustrate:
String1: 'abcbba'
String2: 'abcbda'
Hamming Distance: 1 - 'b' vs. 'd' is the only difference.
Create a function that computes the hamming distance between two strings.
Examples
hamming_distance('abcde', 'bcdef') ➞ 5
hamming_distance('abcde', 'abcde') ➞ 0
hamming_distance('strong', 'strung') ➞ 1
```
def hamming_distance(str1, str2):
i = 0
count = 0
while(i < len(str1)):
if(str1[i] != str2[i]):
count += 1
i += 1
return count
# Driver code
str1 = "abcde"
str2 = "bcdef"
# function call
print(hamming_distance(str1, str2))
print(hamming_distance('strong', 'strung'))
hamming_distance('abcde', 'abcde')
```
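For equal-length strings, the pairwise loop above can be condensed into a one-liner with `zip`; a sketch that also guards against unequal lengths (where the loop version would raise `IndexError`):

```python
def hamming_distance_zip(s1, s2):
    """Count the positions at which two equal-length strings differ."""
    if len(s1) != len(s2):
        raise ValueError("hamming distance is only defined for equal-length strings")
    return sum(c1 != c2 for c1, c2 in zip(s1, s2))
```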
| github_jupyter |
## Importing necessary packages
```
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# preprocessing/decomposition
from sklearn.preprocessing import LabelEncoder, StandardScaler, OneHotEncoder
from sklearn.decomposition import PCA, FastICA, FactorAnalysis, KernelPCA
# keras
from keras.models import Sequential, load_model
from keras.layers import Dense, Dropout, BatchNormalization, Activation
from keras.wrappers.scikit_learn import KerasRegressor
from keras.callbacks import EarlyStopping, ModelCheckpoint
# model evaluation
from sklearn.model_selection import cross_val_score, KFold, train_test_split
from sklearn.metrics import r2_score, mean_squared_error
# supportive models
from sklearn.ensemble import GradientBoostingRegressor
# feature selection (from supportive model)
from sklearn.feature_selection import SelectFromModel
# to make results reproducible
seed = 42 # was 42
# Read datasets
train = pd.read_csv('../input/train.csv')
test = pd.read_csv('../input/test.csv')
# save IDs for submission
id_test = test['ID'].copy()
# glue datasets together
total = pd.concat([train, test], axis=0)
print('initial shape: {}'.format(total.shape))
# binary indexes for train/test set split
is_train = ~total.y.isnull()
# find all categorical features
cf = total.select_dtypes(include=['object']).columns
print(total[cf].head())
# make one-hot-encoding convenient way - pandas.get_dummies(df) function
dummies = pd.get_dummies(
total[cf],
    drop_first=False  # set to True to drop one level per feature and avoid multicollinearity (crucial for linear models)
)
print('oh-encoded shape: {}'.format(dummies.shape))
print(dummies.head())
# get rid of old columns and append them encoded
total = pd.concat(
[
total.drop(cf, axis=1), # drop old
dummies # append them one-hot-encoded
],
axis=1 # column-wise
)
print('appended-encoded shape: {}'.format(total.shape))
# recreate train/test again, now with dropped ID column
train, test = total[is_train].drop(['ID'], axis=1), total[~is_train].drop(['ID', 'y'], axis=1)
# drop redundant objects
del total
# check shape
print('\nTrain shape: {}\nTest shape: {}'.format(train.shape, test.shape))
# Calculate additional features: dimensionality reduction components
n_comp=10 # was 10
# uncomment to scale data before applying decompositions
# however, all features are binary (in [0,1] interval), i don't know if it's worth something
train_scaled = train.drop('y', axis=1).copy()
test_scaled = test.copy()
'''
ss = StandardScaler()
ss.fit(train.drop('y', axis=1))
train_scaled = ss.transform(train.drop('y', axis=1).copy())
test_scaled = ss.transform(test.copy())
'''
# PCA
pca = PCA(n_components=n_comp, random_state=seed)
pca2_results_train = pca.fit_transform(train_scaled)
pca2_results_test = pca.transform(test_scaled)
# ICA
ica = FastICA(n_components=n_comp, random_state=42)
ica2_results_train = ica.fit_transform(train_scaled)
ica2_results_test = ica.transform(test_scaled)
# Factor Analysis
fca = FactorAnalysis(n_components=n_comp, random_state=seed)
fca2_results_train = fca.fit_transform(train_scaled)
fca2_results_test = fca.transform(test_scaled)
# Append it to dataframes
for i in range(1, n_comp+1):
train['pca_' + str(i)] = pca2_results_train[:,i-1]
test['pca_' + str(i)] = pca2_results_test[:, i-1]
train['ica_' + str(i)] = ica2_results_train[:,i-1]
test['ica_' + str(i)] = ica2_results_test[:, i-1]
#train['fca_' + str(i)] = fca2_results_train[:,i-1]
#test['fca_' + str(i)] = fca2_results_test[:, i-1]
# create augmentation by feature importances as additional features
t = train['y']
tr = train.drop(['y'], axis=1)
# Tree-based estimators can be used to compute feature importances
clf = GradientBoostingRegressor(
max_depth=4,
learning_rate=0.005,
random_state=seed,
subsample=0.95,
n_estimators=200
)
# fit regressor
clf.fit(tr, t)
# df to hold feature importances
features = pd.DataFrame()
features['feature'] = tr.columns
features['importance'] = clf.feature_importances_
features.sort_values(by=['importance'], ascending=True, inplace=True)
features.set_index('feature', inplace=True)
print(features.tail())
# select best features
model = SelectFromModel(clf, prefit=True)
train_reduced = model.transform(tr)
test_reduced = model.transform(test.copy())
# dataset augmentation
train = pd.concat([train, pd.DataFrame(train_reduced)], axis=1)
test = pd.concat([test, pd.DataFrame(test_reduced)], axis=1)
# check new shape
print('\nTrain shape: {}\nTest shape: {}'.format(train.shape, test.shape))
# define custom R2 metrics for Keras backend
from keras import backend as K
def r2_keras(y_true, y_pred):
SS_res = K.sum(K.square( y_true - y_pred ))
SS_tot = K.sum(K.square( y_true - K.mean(y_true) ) )
return ( 1 - SS_res/(SS_tot + K.epsilon()) )
from keras.optimizers import SGD, Adagrad, Adadelta
# base model architecture definition
def model():
model = Sequential()
#input layer
model.add(Dense(input_dims, input_dim=input_dims))
model.add(BatchNormalization())
model.add(Activation('tanh'))
model.add(Dropout(0.3))
# hidden layers
model.add(Dense(input_dims))
model.add(BatchNormalization())
model.add(Activation(act_func))
model.add(Dropout(0.3))
model.add(Dense(input_dims//2))
model.add(BatchNormalization())
model.add(Activation(act_func))
model.add(Dropout(0.3))
model.add(Dense(input_dims//4, activation=act_func))
# output layer (y_pred)
model.add(Dense(1, activation='linear'))
# compile this model
model.compile(loss='mean_squared_error', # one may use 'mean_absolute_error' as alternative
#optimizer='adam',
optimizer=Adadelta(),#SGD(lr=0.0001, momentum=0.9),
#optimizer=Adagrad(),
metrics=[r2_keras] # you can add several if needed
)
    # Visualize NN architecture
    model.summary()  # summary() prints the architecture itself and returns None
return model
# initialize input dimension
input_dims = train.shape[1]-1
#activation functions for hidden layers
act_func = 'tanh' # could be 'relu', 'sigmoid', ...
# make np.seed fixed
np.random.seed(seed)
# initialize estimator, wrap model in KerasRegressor
estimator = KerasRegressor(
build_fn=model,
    epochs=100,  # 'nb_epoch' in old Keras 1.x; 'epochs' in Keras 2+
batch_size=10,
verbose=1
)
# X, y preparation
X, y = train.drop('y', axis=1).values, train.y.values
print(X.shape)
# X_test preparation
X_test = test
print(X_test.shape)
# train/validation split
X_tr, X_val, y_tr, y_val = train_test_split(
X,
y,
test_size=0.20,
random_state=seed
)
# define path to save model
import os
model_path = 'keras_model_adadelta.h5'
# prepare callbacks
callbacks = [
EarlyStopping(
monitor='val_loss',
patience=10, # was 10
verbose=1),
ModelCheckpoint(
model_path,
monitor='val_loss',
save_best_only=True,
verbose=0)
]
# fit estimator
estimator.fit(
X_tr,
y_tr,
#nb_epoch=10, # increase it to 20-100 to get better results
validation_data=(X_val, y_val),
verbose=2,
callbacks=callbacks,
shuffle=True
)
# if best iteration's model was saved then load and use it
if os.path.isfile(model_path):
estimator = load_model(model_path, custom_objects={'r2_keras': r2_keras})
# check performance on train set
print('RMSE train: {}'.format(mean_squared_error(y_tr, estimator.predict(X_tr))**0.5)) # sqrt of MSE = RMSE
print('R^2 train: {}'.format(r2_score(y_tr, estimator.predict(X_tr)))) # R^2 train
# check performance on validation set
print('RMSE val: {}'.format(mean_squared_error(y_val, estimator.predict(X_val))**0.5)) # RMSE val
print('R^2 val: {}'.format(r2_score(y_val, estimator.predict(X_val)))) # R^2 val
pass
```
### Temporary check for results
```
# if best iteration's model was saved then load and use it
if os.path.isfile(model_path):
estimator = load_model(model_path, custom_objects={'r2_keras': r2_keras})
# check performance on train set
print('RMSE train: {}'.format(mean_squared_error(y_tr, estimator.predict(X_tr))**0.5)) # sqrt of MSE = RMSE
print('R^2 train: {}'.format(r2_score(y_tr, estimator.predict(X_tr)))) # R^2 train
# check performance on validation set
print('RMSE val: {}'.format(mean_squared_error(y_val, estimator.predict(X_val))**0.5)) # RMSE val
print('R^2 val: {}'.format(r2_score(y_val, estimator.predict(X_val)))) # R^2 val
pass
# predict results
res = estimator.predict(X_test.values).ravel()
# create df and convert it to csv
output = pd.DataFrame({'id': id_test, 'y': res})
output.to_csv('../results/adadelta.csv', index=False)
estimator.predict(X_tr).ravel()
```
# Trying another method
```
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
from sklearn.preprocessing import LabelEncoder
# read datasets
train = pd.read_csv('../input/train.csv')
test = pd.read_csv('../input/test.csv')
# process columns, apply LabelEncoder to categorical features
for c in train.columns:
if train[c].dtype == 'object':
lbl = LabelEncoder()
lbl.fit(list(train[c].values) + list(test[c].values))
train[c] = lbl.transform(list(train[c].values))
test[c] = lbl.transform(list(test[c].values))
# shape
print('Shape train: {}\nShape test: {}'.format(train.shape, test.shape))
from sklearn.decomposition import PCA, FastICA
n_comp = 10
# PCA
pca = PCA(n_components=n_comp, random_state=42)
pca2_results_train = pca.fit_transform(train.drop(["y"], axis=1))
pca2_results_test = pca.transform(test)
# ICA
ica = FastICA(n_components=n_comp, random_state=42)
ica2_results_train = ica.fit_transform(train.drop(["y"], axis=1))
ica2_results_test = ica.transform(test)
# Append decomposition components to datasets
for i in range(1, n_comp+1):
train['pca_' + str(i)] = pca2_results_train[:,i-1]
test['pca_' + str(i)] = pca2_results_test[:, i-1]
train['ica_' + str(i)] = ica2_results_train[:,i-1]
test['ica_' + str(i)] = ica2_results_test[:, i-1]
y_train = train["y"]
y_mean = np.mean(y_train)
# mmm, xgboost, loved by everyone ^-^
import xgboost as xgb
# prepare dict of params for xgboost to run with
xgb_params = {
'n_trees': 500,
'eta': 0.005,
'max_depth': 4,
'subsample': 0.95,
'objective': 'reg:linear',
'eval_metric': 'rmse',
'base_score': y_mean, # base prediction = mean(target)
'silent': 1
}
# form DMatrices for Xgboost training
dtrain = xgb.DMatrix(train.drop('y', axis=1), y_train)
dtest = xgb.DMatrix(test)
# xgboost, cross-validation
cv_result = xgb.cv(xgb_params,
dtrain,
num_boost_round=650, # increase to have better results (~700)
early_stopping_rounds=50,
verbose_eval=50,
show_stdv=True
)
num_boost_rounds = len(cv_result)
print(num_boost_rounds)
# train model
xgb_model = xgb.train(dict(xgb_params, silent=0), dtrain, num_boost_round=num_boost_rounds)
# check R^2 score (to get a higher score, increase num_boost_round in the previous cell)
from sklearn.metrics import r2_score
# now fixed, correct calculation
print(r2_score(dtrain.get_label(), xgb_model.predict(dtrain)))
xgb_model.save_model('xgb_{}.model'.format(num_boost_rounds))
# make predictions and save results
y_pred = xgb_model.predict(dtest)
output = pd.DataFrame({'id': test['ID'].astype(np.int32), 'y': y_pred})
output.to_csv('xgboost-boost_rounds{}-pca-ica.csv'.format(num_boost_rounds), index=False)
```
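Above, the number of boosting rounds is taken as `len(cv_result)`, i.e. the round at which early stopping halted. A sketch of picking the best round explicitly from a cv-style DataFrame (the column name follows the usual `xgb.cv` output; the values here are illustrative):

```python
import pandas as pd

# Hypothetical cv output: mean test RMSE per boosting round
cv_result = pd.DataFrame({'test-rmse-mean': [9.1, 8.4, 8.0, 8.2, 8.3]})

best_round = int(cv_result['test-rmse-mean'].idxmin()) + 1  # rounds are 1-based
print(best_round)  # 3
```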
# Trying to ensemble the results of the two
```
xgb_train_preds = xgb_model.predict(dtrain)
keras_train_preds = estimator.predict(X)
xgb_test_preds = xgb_model.predict(dtest)
keras_test_preds = estimator.predict(X_test.values).ravel()
cum_train_preds = np.column_stack((keras_train_preds, xgb_train_preds))
cum_test_preds = np.column_stack((keras_test_preds, xgb_test_preds))
# prepare dict of params for xgboost to run with
xgb_params_new = {
'n_trees': 500,
'eta': 0.005,
'max_depth': 4,
'subsample': 0.95,
'objective': 'reg:linear',
'eval_metric': 'rmse',
'base_score': y_mean, # base prediction = mean(target)
'silent': 1
}
# form DMatrices for Xgboost training
dtrain_new = xgb.DMatrix(cum_train_preds, y_train)
dtest_new = xgb.DMatrix(cum_test_preds)
# xgboost, cross-validation
cv_result_new = xgb.cv(xgb_params_new,
                   dtrain_new,
num_boost_round=500, # increase to have better results (~700)
early_stopping_rounds=50,
verbose_eval=50,
show_stdv=True
)
num_boost_rounds_new = len(cv_result_new)
print(num_boost_rounds_new)
# train model
xgb_model_new = xgb.train(dict(xgb_params_new, silent=0), dtrain_new, num_boost_round=num_boost_rounds_new)
# now fixed, correct calculation
print(r2_score(dtrain_new.get_label(), xgb_model_new.predict(dtrain_new)))
xgb_model_new.save_model('xgb_{}_ensemble.model'.format(num_boost_rounds_new))
# make predictions and save results
y_pred_new = xgb_model_new.predict(dtest_new)
output_new = pd.DataFrame({'id': test['ID'].astype(np.int32), 'y': y_pred_new})
output_new.to_csv('xgboost-boost_rounds{}-pca-ica_ensemble.csv'.format(num_boost_rounds_new), index=False)
```
# Simulating a Predator and Prey Relationship
Without a predator, rabbits will reproduce until they reach the carrying capacity of the land. When coyotes show up, they will eat the rabbits and reproduce until they can't find enough rabbits. We will explore the fluctuations in the two populations over time.
# Using Lotka-Volterra Model
## Part 1: Rabbits without predators
According to [Mother Earth News](https://www.motherearthnews.com/homesteading-and-livestock/rabbits-on-pasture-intensive-grazing-with-bunnies-zbcz1504), a rabbit eats six square feet of pasture per day. Let's assume that our rabbits live in a five acre clearing in a forest: 217,800 square feet/6 square feet = 36,300 rabbit-days worth of food. For simplicity, let's assume the grass grows back in two months. Thus, the carrying capacity of five acres is 36,300/60 = 605 rabbits.
Female rabbits reproduce about six to seven times per year. They have six to ten children in a litter. According to [Wikipedia](https://en.wikipedia.org/wiki/Rabbit), a wild rabbit reaches sexual maturity when it is about six months old and typically lives one to two years. For simplicity, let's assume that in the presence of unlimited food, a rabbit lives forever, is immediately sexually mature, and has 1.5 children every month.
For our purposes, then, let $R_t$ be the number of rabbits in our five acre clearing on month $t$.
$$
\begin{equation*}
R_t = R_{t-1} + 1.5\frac{605 - R_{t-1}}{605} R_{t-1}
\end{equation*}
$$
The formula could be put into general form
$$
\begin{equation*}
R_t = R_{t-1} + growth_{R} \times \big( \frac{capacity_{R} - R_{t-1}}{capacity_{R}} \big) R_{t-1}
\end{equation*}
$$
By doing this, we allow users to adjust the growth rate and the capacity value and visualize the different interactions.
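Stripped of the widget plumbing, the update rule is a one-liner. A minimal sketch with the values above (capacity 605, growth rate 1.5) shows the population settling at the carrying capacity:

```python
# Discrete logistic growth: R_t = R_{t-1} + g * (K - R_{t-1}) / K * R_{t-1}
capacity, growth, R = 605.0, 1.5, 1.0
for _ in range(100):
    R = R + growth * (capacity - R) / capacity * R
print(round(R))  # 605
```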
```
from __future__ import print_function
from ipywidgets import interact, interactive, fixed, interact_manual
from IPython.display import display, clear_output
import ipywidgets as widgets
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
style = {'description_width': 'initial'}
capacity_R = widgets.FloatText(description="Capacity", value=605)
growth_rate_R = widgets.FloatText(description="Growth rate", value=1.5)
initial_R = widgets.FloatText(description="Initial population",style=style, value=1)
button_R = widgets.Button(description="Plot Graph")
display(initial_R, capacity_R, growth_rate_R, button_R)
def plot_graph_r(b):
print("helo")
clear_output()
display(initial_R, capacity_R, growth_rate_R, button_R)
fig = plt.figure()
ax = fig.add_subplot(111)
t = np.arange(0, 20, 1)
s = np.zeros(t.shape)
R = initial_R.value
for i in range(t.shape[0]):
s[i] = R
R = R + growth_rate_R.value * (capacity_R.value - R)/(capacity_R.value) * R
if R < 0.0:
R = 0.0
ax.plot(t, s)
ax.set(xlabel='time (months)', ylabel='number of rabbits',
title='Rabbits Without Predators')
ax.grid()
button_R.on_click(plot_graph_r)
```
**Exercise 1** (1 point). Complete the following code to find the number of rabbits at time 5, given $R_0 = 10$, population capacity $= 100$, and growth rate $= 0.8$.
```
R_i = 10
for i in range(5):
R_i = int(R_i + 0.8 * (100 - R_i)/(100) * R_i)
print(f'There are {R_i} rabbits in the system at time 5')
```
## Tweaking the Growth Function
The growth is regulated by this part of the formula:
$$
\begin{equation*}
\frac{capacity_{R} - R_{t-1}}{capacity_{R}}
\end{equation*}
$$
That is, this fraction (and thus growth) goes to zero when the land is at capacity. As the number of rabbits goes to zero, this fraction goes to 1.0, so growth is at its highest speed. We could substitute in another function that has the same values at zero and at capacity, but has a different shape. For example,
$$
\begin{equation*}
\left( \frac{capacity_{R} - R_{t-1}}{capacity_{R}} \right)^{\beta}
\end{equation*}
$$
where $\beta$ is a positive number. For example, if $\beta$ is 1.3, it indicates that the rabbits can sense that food supplies are dwindling and pre-emptively slow their reproduction.
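To see the effect numerically, compare the shaped reserve term against the unshaped one at half capacity: with $\beta > 1$ the growth term shrinks, so reproduction slows earlier. (The value 1.3 here just matches the default used below.)

```python
reserve = 0.5   # (capacity - R) / capacity, evaluated at half capacity
beta = 1.3
# The shaped term reserve**beta (~0.406) is smaller than the plain term (0.5),
# so growth is suppressed sooner as the population approaches capacity.
print(reserve, reserve ** beta)
```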
```
#### %matplotlib inline
import math
style = {'description_width': 'initial'}
capacity_R_2 = widgets.FloatText(description="Capacity", value=605)
growth_rate_R_2 = widgets.FloatText(description="Growth rate", value=1.5)
initial_R_2 = widgets.FloatText(description="Initial population",style=style, value=1)
shaping_R_2 = widgets.FloatText(description="Shaping", value=1.3)
button_R_2 = widgets.Button(description="Plot Graph")
display(initial_R_2, capacity_R_2, growth_rate_R_2, shaping_R_2, button_R_2)
def plot_graph_r(b):
clear_output()
display(initial_R_2, capacity_R_2, growth_rate_R_2, shaping_R_2, button_R_2)
fig = plt.figure()
ax = fig.add_subplot(111)
t = np.arange(0, 20, 1)
s = np.zeros(t.shape)
R = initial_R_2.value
beta = float(shaping_R_2.value)
for i in range(t.shape[0]):
s[i] = R
reserve_ratio = (capacity_R_2.value - R)/capacity_R_2.value
if reserve_ratio > 0.0:
R = R + R * growth_rate_R_2.value * reserve_ratio**beta
else:
R = R - R * growth_rate_R_2.value * (-1.0 * reserve_ratio)**beta
if R < 0.0:
R = 0
ax.plot(t, s)
ax.set(xlabel='time (months)', ylabel='number of rabbits',
title='Rabbits Without Predators (Shaped)')
ax.grid()
button_R_2.on_click(plot_graph_r)
```
**Exercise 2** (1 point). Repeat Exercise 1 with $\beta = 1.5$: complete the following code to find the number of rabbits at time 5. Should we expect to see more rabbits or fewer?
```
R_i = 10
b=1.5
for i in range(5):
R_i = int(R_i + 0.8 * ((100 - R_i)/(100))**b * R_i)
print(f'There are {R_i} rabbits in the system at time 5 - fewer rabbits compared to Exercise 1, where beta = 1')
```
## Part 2: Coyotes without Prey
According to [Huntwise](https://www.besthuntingtimes.com/blog/2020/2/3/why-you-should-coyote-hunt-how-to-get-started), coyotes need to consume about 2-3 pounds of food per day, and their diet is 90 percent mammalian. An adult cottontail rabbit weighs 2.6 pounds on average, so we assume each coyote eats one rabbit per day.
For coyotes, the breeding season is in February and March. According to [Wikipedia](https://en.wikipedia.org/wiki/Coyote#Social_and_reproductive_behaviors), females have a gestation period of 63 days, with an average litter size of 6, though the number fluctuates depending on coyote population density and the abundance of food. By fall, the pups are old enough to hunt for themselves.
In the absence of rabbits, the number of coyotes will drop, as their food supply is scarce.
The formula could be put into general form:
$$
\begin{align*}
C_t & \sim (1 - death_{C}) \times C_{t-1}\\
&= C_{t-1} - death_{C} \times C_{t-1}
\end{align*}
$$
```
%matplotlib inline
style = {'description_width': 'initial'}
initial_C=widgets.FloatText(description="Initial Population",style=style,value=200.0)
declining_rate_C=widgets.FloatText(description="Death rate",value=0.5)
button_C=widgets.Button(description="Plot Graph")
display(initial_C, declining_rate_C, button_C)
def plot_graph_c(b):
clear_output()
display(initial_C, declining_rate_C, button_C)
fig = plt.figure()
ax = fig.add_subplot(111)
t1 = np.arange(0, 20, 1)
s1 = np.zeros(t1.shape)
C = initial_C.value
for i in range(t1.shape[0]):
s1[i] = C
C = (1 - declining_rate_C.value)*C
ax.plot(t1, s1)
ax.set(xlabel='time (months)', ylabel='number of coyotes',
title='Coyotes Without Prey')
ax.grid()
button_C.on_click(plot_graph_c)
```
**Exercise 3** (1 point). Assume the system has 100 coyotes at time 0 and a death rate of 0.5 when there is no prey. At what point in time do the coyotes become effectively extinct (population at or below 10)?
```
ti = 0
coyotes_init = 100
c_i = coyotes_init
d_r = 0.5
while c_i > 10:
c_i= int((1 - d_r)*c_i)
ti =ti + 1
print(f'At time t={ti}, the coyotes become extinct')
```
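The loop above can be cross-checked against the closed form $C_t = (1 - death_C)^t C_0$: the population first reaches the threshold at the smallest integer $t$ with $C_0 (1-d)^t \le 10$, i.e. $t = \lceil \log(10/C_0)/\log(1-d) \rceil$. Ignoring the integer truncation in the loop (which does not change the answer here):

```python
import math

c0, d, threshold = 100, 0.5, 10
# Smallest integer t with c0 * (1 - d)**t <= threshold
t = math.ceil(math.log(threshold / c0) / math.log(1 - d))
print(t)  # 4
```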
## Part 3: Interaction Between Coyotes and Rabbit
With the simple dynamics from the first two parts in hand, we can now combine them into a single model of the interaction.
$$
\begin{align*}
R_t &= R_{t-1} + growth_{R} \times \big( \frac{capacity_{R} - R_{t-1}}{capacity_{R}} \big) R_{t-1} - death_{R}(C_{t-1})\times R_{t-1}\\\\
C_t &= C_{t-1} - death_{C} \times C_{t-1} + growth_{C}(R_{t-1}) \times C_{t-1}
\end{align*}
$$
In the equations above, the death rate of rabbits is a function parameterized by the number of coyotes. Similarly, the growth rate of coyotes is a function parameterized by the number of rabbits.
The death rate of the rabbits should be $0$ if there are no coyotes and should approach $1$ as the number of coyotes grows. One formula fulfilling these characteristics is a hyperbolic function:
$$
\begin{equation}
death_R(C) = 1 - \frac{1}{xC + 1}
\end{equation}
$$
where $x$ determines how quickly $death_R$ increases as the number of coyotes ($C$) increases. Similarly, the growth rate of the coyotes should be $0$ if there are no rabbits and should grow without bound as the number of rabbits grows. One formula fulfilling these characteristics is a linear function:
$$
\begin{equation}
growth_C(R) = yR
\end{equation}
$$
where $y$ determines how quickly $growth_C$ increases as the number of rabbits ($R$) increases.
Putting it all together, the final equations are
$$
\begin{align*}
R_t &= R_{t-1} + growth_{R} \times \big( \frac{capacity_{R} - R_{t-1}}{capacity_{R}} \big) R_{t-1} - \big( 1 - \frac{1}{xC_{t-1} + 1} \big)\times R_{t-1}\\\\
C_t &= C_{t-1} - death_{C} \times C_{t-1} + yR_{t-1}C_{t-1}
\end{align*}
$$
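Stripped of the widget code, one time step of the coupled system is just two assignments. A minimal sketch with the default parameter values used below (all rates 1, rabbit capacity 5); note that the coyote update must use $R_{t-1}$, read before it is overwritten:

```python
growth_R, cap, death_C, x, y = 1.0, 5.0, 1.0, 1.0, 1.0
R, C = 1.0, 1.0
history = [(R, C)]
for _ in range(10):
    R_prev = R  # the C update must use R from the *previous* step
    R = R + growth_R * (cap - R) / cap * R - (1 - 1 / (x * C + 1)) * R
    C = C - death_C * C + y * R_prev * C
    history.append((R, C))
print(history[1])  # state after one step
```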
**Exercise 4** (3 points). The model we have created above is a variation of the Lotka-Volterra model, which describes various forms of predator-prey interactions. Complete the following code, which should plot the state variables over time (blue = prey, orange = predators).
```
%matplotlib inline
initial_rabbit = widgets.FloatText(description="Initial Rabbit",style=style, value=1)
initial_coyote = widgets.FloatText(description="Initial Coyote",style=style, value=1)
capacity = widgets.FloatText(description="Capacity rabbits", style=style,value=5)
growth_rate = widgets.FloatText(description="Growth rate rabbits", style=style,value=1)
death_rate = widgets.FloatText(description="Death rate coyotes", style=style,value=1)
x = widgets.FloatText(description="Death rate ratio due to coyote",style=style, value=1)
y = widgets.FloatText(description="Growth rate ratio due to rabbit",style=style, value=1)
button = widgets.Button(description="Plot Graph")
display(initial_rabbit, initial_coyote, capacity, growth_rate, death_rate, x, y, button)
def plot_graph(b):
clear_output()
display(initial_rabbit, initial_coyote, capacity, growth_rate, death_rate, x, y, button)
fig = plt.figure()
ax = fig.add_subplot(111)
t = np.arange(0, 20, 0.5)
s = np.zeros(t.shape)
p = np.zeros(t.shape)
R = initial_rabbit.value
C = initial_coyote.value
for i in range(t.shape[0]):
s[i] = R
p[i] = C
R = R + growth_rate.value * (capacity.value - R)/(capacity.value) * R - (1 - 1/(x.value*C + 1))*R
C = C - death_rate.value * C + y.value*s[i]*C
ax.plot(t, s, label="rabit")
ax.plot(t, p, label="coyote")
ax.set(xlabel='time (months)', ylabel='population size',
title='Coyotes-Rabbit (Predator-Prey) Relationship')
ax.grid()
ax.legend()
button.on_click(plot_graph)
```
The system shows oscillatory behavior. Let's verify that the oscillation is nonlinear using a phase-space visualization.
## Part 4: Trajectories and Direction Fields for a system of equations
To further demonstrate that predator numbers rise and fall cyclically with those of their preferred prey, we will use the Lotka-Volterra equations, a pair of differential equations: one describes the change in the number of prey, the other the change in the number of predators. The dynamics of the interaction between a rabbit population $R_t$ and a coyote population $C_t$ are described by the following differential equations:
$$
\begin{align*}
\frac{dR}{dt} = aR_t - bR_tC_t
\end{align*}
$$
$$
\begin{align*}
\frac{dC}{dt} = bdR_tC_t - cC_t
\end{align*}
$$
with the following notations:
$R_t$: number of prey (rabbits)
$C_t$: number of predators (coyotes)
$a$: natural growth rate of rabbits when there are no coyotes
$b$: rate at which coyotes kill rabbits per unit of time
$c$: natural death rate of coyotes when there are no rabbits
$d$: rate at which consumed prey is converted into new predators
We start by defining the system of ordinary differential equations and then find its equilibrium points. Equilibrium occurs when the growth rate is 0, and our system has two equilibrium points: the first, where there are no prey and no predators, represents the extinction of both species; the second occurs at $R_t=\frac{c}{b d}$, $C_t=\frac{a}{b}$. Next, we use SciPy to integrate the differential equations and generate the plot of the evolution of both species:
**Exercise 5** (3 points). As the simulation results of the predator-prey model show, the system oscillates. Find the equilibrium points of the system and generate the phase-space visualization to demonstrate that the oscillation seen previously is nonlinear, with distorted orbits.
```
from scipy import integrate
#using the same input number from the previous example
input_a = widgets.FloatText(description="a",style=style, value=1)
input_b = widgets.FloatText(description="b",style=style, value=1)
input_c = widgets.FloatText(description="c",style=style, value=1)
input_d = widgets.FloatText(description="d",style=style, value=1)
# Define the system of ODEs
# P[0] is prey, P[1] is predator
def dP_dt(P,t=0):
return np.array([a*P[0]-b*P[0]*P[1], d*b*P[0]*P[1]-c*P[1]])
button_draw_trajectories = widgets.Button(description="Plot Graph")
display(input_a, input_b, input_c, input_d, button_draw_trajectories)
def plot_trajectories(graph):
global a, b, c, d, eq1, eq2
clear_output()
display(input_a, input_b, input_c, input_d, button_draw_trajectories)
a = input_a.value
b = input_b.value
c = input_c.value
d = input_d.value
# Define the Equilibrium points
eq1 = np.array([0. , 0.])
eq2 = np.array([c/(d*b),a/b])
values = np.linspace(0.1, 3, 10)
# Colors for each trajectory
vcolors = plt.cm.autumn_r(np.linspace(0.1, 1., len(values)))
f = plt.figure(figsize=(10,6))
t = np.linspace(0, 150, 1000)
for v, col in zip(values, vcolors):
# Starting point
P0 = v*eq2
P = integrate.odeint(dP_dt, P0, t)
plt.plot(P[:,0], P[:,1],
lw= 1.5*v, # Different line width for different trajectories
color=col, label='P0=(%.f, %.f)' % ( P0[0], P0[1]) )
ymax = plt.ylim(bottom=0)[1]
xmax = plt.xlim(left=0)[1]
nb_points = 20
x = np.linspace(0, xmax, nb_points)
y = np.linspace(0, ymax, nb_points)
X1,Y1 = np.meshgrid(x, y)
DX1, DY1 = dP_dt([X1, Y1])
M = (np.hypot(DX1, DY1))
M[M == 0] = 1.
DX1 /= M
DY1 /= M
plt.title('Trajectories and direction fields')
Q = plt.quiver(X1, Y1, DX1, DY1, M, pivot='mid', cmap=plt.cm.plasma)
plt.xlabel('Number of rabbits')
plt.ylabel('Number of coyotes')
plt.legend()
plt.grid()
plt.xlim(0, xmax)
plt.ylim(0, ymax)
print(f"\n\nThe equilibrium pointsof the system are:", list(eq1), list(eq2))
plt.show()
button_draw_trajectories.on_click(plot_trajectories)
```
Because the model here is described by continuous differential equations, there are no jumps or intersections between the trajectories.
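The nontrivial equilibrium can also be verified directly: substituting $R = c/(bd)$, $C = a/b$ into the right-hand sides makes both derivatives vanish. A quick numerical check, restating the same right-hand side with the default parameter values $a=b=c=d=1$:

```python
import numpy as np

a = b = c = d = 1.0

def dP_dt(P, t=0):
    # Lotka-Volterra right-hand side: [prey, predator]
    return np.array([a * P[0] - b * P[0] * P[1],
                     d * b * P[0] * P[1] - c * P[1]])

eq2 = np.array([c / (d * b), a / b])  # (1, 1) for these parameters
print(dP_dt(eq2))  # [0. 0.]
```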
## Part 5: Multiple Predators and Preys Relationship
The previous relationship can be extended to multiple predators and prey.
**Exercise 6** (3 points). Develop a discrete-time mathematical model of four species in which pairs of species compete for the same resources, simulate its behavior, and plot the simulation results.
```
%matplotlib inline
initial_rabbit2 = widgets.FloatText(description="Initial Rabbit", style=style,value=2)
initial_coyote2 = widgets.FloatText(description="Initial Coyote",style=style, value=2)
initial_deer2 = widgets.FloatText(description="Initial Deer", style=style,value=1)
initial_wolf2 = widgets.FloatText(description="Initial Wolf", style=style,value=1)
population_capacity = widgets.FloatText(description="capacity",style=style, value=10)
population_capacity_rabbit = widgets.FloatText(description="capacity rabbit",style=style, value=3)
growth_rate_rabbit = widgets.FloatText(description="growth rate rabbit",style=style, value=1)
death_rate_coyote = widgets.FloatText(description="death rate coyote",style=style, value=1)
growth_rate_deer = widgets.FloatText(description="growth rate deer",style=style, value=1)
death_rate_wolf = widgets.FloatText(description="death rate wolf",style=style, value=1)
x1 = widgets.FloatText(description="death rate ratio due to coyote",style=style, value=1)
y1 = widgets.FloatText(description="growth rate ratio due to rabbit", style=style,value=1)
x2 = widgets.FloatText(description="death rate ratio due to wolf",style=style, value=1)
y2 = widgets.FloatText(description="growth rate ratio due to deer", style=style,value=1)
plot2 = widgets.Button(description="Plot Graph")
display(initial_rabbit2, initial_coyote2,initial_deer2, initial_wolf2, population_capacity,
population_capacity_rabbit, growth_rate_rabbit, growth_rate_deer, death_rate_coyote,death_rate_wolf,
x1, y1,x2, y2, plot2)
def plot_graph(b):
clear_output()
display(initial_rabbit2, initial_coyote2,initial_deer2, initial_wolf2, population_capacity,
population_capacity_rabbit, growth_rate_rabbit, growth_rate_deer, death_rate_coyote,death_rate_wolf,
x1, y1,x2, y2, plot2)
fig = plt.figure()
ax = fig.add_subplot(111)
t_m = np.arange(0, 20, 0.5)
r_m = np.zeros(t_m.shape)
c_m = np.zeros(t_m.shape)
d_m = np.zeros(t_m.shape)
w_m = np.zeros(t_m.shape)
R_m = initial_rabbit2.value
C_m = initial_coyote2.value
D_m = initial_deer2.value
W_m = initial_wolf2.value
population_capacity_deer = population_capacity.value - population_capacity_rabbit.value
for i in range(t_m.shape[0]):
r_m[i] = R_m
c_m[i] = C_m
d_m[i] = D_m
w_m[i] = W_m
R_m = R_m + growth_rate_rabbit.value * (population_capacity_rabbit.value - R_m)\
/(population_capacity_rabbit.value) * R_m - (1 - 1/(x1.value*C_m + 1))*R_m - (1 - 1/(x2.value*W_m + 1))*R_m
D_m = D_m + growth_rate_deer.value * (population_capacity_deer - D_m) \
/(population_capacity_deer) * D_m - (1 - 1/(x1.value*C_m + 1))*D_m - (1 - 1/(x2.value*W_m + 1))*D_m
C_m = C_m - death_rate_coyote.value * C_m + y1.value*r_m[i]*C_m + y2.value*d_m[i]*C_m
W_m = W_m - death_rate_wolf.value * W_m + y1.value*r_m[i]*W_m + y2.value*d_m[i]*W_m
ax.plot(t_m, r_m, label="rabit")
ax.plot(t_m, c_m, label="coyote")
ax.plot(t_m, d_m, label="deer")
ax.plot(t_m, w_m, label="wolf")
ax.set(xlabel='time (months)', ylabel='population',
title='Multiple Predator Prey Relationship')
ax.grid()
ax.legend()
plot2.on_click(plot_graph)
```
```
import csv
import pickle
import pandas as pd
import numpy as np
import requests
import json
import seaborn as sns
import matplotlib.pyplot as plt
from scipy import stats
import sklearn.preprocessing
from sklearn import preprocessing
data=pd.read_csv("temp1-costamesa.csv")
data
data.columns
data=data.drop_duplicates(subset=['ADDRESS'], keep='first')
data
data=data.rename(columns={'URL (SEE http://www.redfin.com/buy-a-home/comparative-market-analysis FOR INFO ON PRICING)':'url',
'PROPERTY TYPE':'type',
'CITY':'city',
'ZIP':'zip',
'PRICE':'price',
'BEDS':'beds',
'BATHS':'baths',
'SQUARE FEET':'sqrft',
'LOT SIZE':'lot',
'YEAR BUILT':'built',
'DAYS ON MARKET':'dom',
'$/SQUARE FEET':'$/sqrft',
'HOA/MONTH':'hoa',
'LATITUDE':'lat',
'LONGITUDE':'lon'})
data['full_address'] = data['ADDRESS'] + ", " + data['city'] + ", " + data['STATE']
data.head()
# api_key='AIzaSyAOjSf4Tk_StWcxTANG_2Sih0IN19W9cSI'
# url="https://maps.googleapis.com/maps/api/geocode/json?address={}&key={}"
# url
# lat_list=[]
# lon_list=[]
# for i in data.full_address:
# response=requests.get(url.format(i,api_key)).json()
# print(json.dumps(response, indent=4, sort_keys=True))
# lat=response["results"][0]["geometry"]["location"]["lat"]
# lat_list.append(lat)
# lon=response["results"][0]["geometry"]["location"]["lng"]
# lon_list.append(lon)
# len(lat_list)
# data['lat_backup']=pd.Series(lat_list)
# data['lon_backup']=pd.Series(lon_list)
# data
data=data[['type','city','zip','price','beds','baths','sqrft','lot','built',
'dom','$/sqrft','hoa','lat','lon']]
data.describe()
data.info()
data['type']=data['type'].replace('Single Family Residential','sfr')
data['type']=data['type'].replace('Condo/Co-op','condo')
data['type']=data['type'].replace('Townhouse','thr')
data['type']=data['type'].replace('Multi-Family (2-4 Unit)','mfr')
data['type']=data['type'].replace('Multi-Family (5+ Unit)','mfr')
data.isnull().sum()
data=data[data['built'].notnull()]
data.head()
print(data.isnull().sum())
from numpy import nan
data[data['hoa'].isnull()]
# set hoa to 0 for homes with missing HOA built before the year 2000
mask=(data['hoa'].isnull()) & (data['built']<2000)
data['hoa']=data['hoa'].mask(mask,0)
data.info()
data=data.set_index('zip')
data['lot medians']=data.groupby('zip')['lot'].median()
data.head()
mask1=(data['lot'].isnull())
data['lot']=data['lot'].mask(mask1,data['lot medians'])
data.head()
del data['lot medians']
data=data.reset_index()
data.info()
print(data.isnull().sum())
data[data['beds'].isnull()]
data = data.dropna(axis=0, subset=['beds'])
print(data.isnull().sum())
data = data.dropna(axis=0, subset=['sqrft'])
print(data.isnull().sum())
data.shape
```
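The zip-level median imputation above (via `set_index`, `groupby`, and `mask`) can also be written with `groupby().transform`, which avoids the index round-trip. A minimal sketch on toy data — the column names mirror the ones above, but this is an illustration, not the notebook's actual frame:

```python
import pandas as pd
import numpy as np

# Toy frame: two zips, one missing lot size in each
df = pd.DataFrame({
    'zip': [92626, 92626, 92626, 92627, 92627],
    'lot': [6000.0, np.nan, 8000.0, 4000.0, np.nan],
})

# Fill each missing lot with the median lot of its own zip code
df['lot'] = df['lot'].fillna(df.groupby('zip')['lot'].transform('median'))
print(df['lot'].tolist())  # [6000.0, 7000.0, 8000.0, 4000.0, 4000.0]
```

`transform` returns a series aligned to the original index, so no `set_index`/`reset_index` dance (or temporary `lot medians` column) is needed.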
# Multicollinearity check
```
correlations=data.corr()
plt.subplots(figsize=(10,8))
sns.heatmap(correlations,annot=True)
fig=plt.figure()
plt.show()
# beds
# baths
# sqrft
# lot
# per_sqrft
# zipcode
# types
#yr built
#hoa
#multi-collinearity: beds and sqrft/baths and sqrft/beds and baths
plt.subplots(figsize=(20,8))
sns.distplot(data['price'],fit=stats.norm)
(mu,sigma)=stats.norm.fit(data['price'])
plt.legend(['Normal Distribution Params mu={} and sigma={}'.format(mu,sigma)],loc='best')
plt.ylabel('frequency')
fig=plt.figure()
plt.show()
mini=data['built'].min()
maxi=data['built'].max()
print(mini,maxi)
decades_no=[]
for i in data.built:
decades=(i-mini)/10
# print(decades)
decades_no.append(decades)
data['train_built']=pd.Series(decades_no)
data['train_built']=data['train_built'].round(0)
data.head()
data.tail()
data = data.dropna(axis=0, subset=['train_built'])
decades_no
len(decades_no)
len(data)
```
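The decade-binning loop above can be done vectorized on the series directly; a sketch with a toy `built` column shaped like the notebook's:

```python
import pandas as pd

built = pd.Series([1950.0, 1963.0, 1987.0, 2004.0])
mini = built.min()

# Decades since the oldest home, rounded to the nearest whole decade
train_built = ((built - mini) / 10).round(0)
print(train_built.tolist())  # [0.0, 1.0, 4.0, 5.0]
```

This also sidesteps the index-misalignment pitfall in the loop version: assigning `pd.Series(decades_no)` (default 0..n index) against a frame whose index has gaps produces NaNs, which is why the `dropna` on `train_built` was needed afterwards.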
# Pickled Cleaned Irvine DF Pre-Inference
```
# data.to_pickle('costamesa_data.pkl')
infile=open('costamesa_data.pkl','rb')
train=pickle.load(infile)
train.head()
train[train['train_built'].isnull()]
train
len(train)
train.info()
train['train_built'].unique()
```
# Flask Functions for Front End:
## Sizer Assist for Pred DF + Min Built Return
```
def train_flask():
    infile=open('costamesa_data.pkl','rb')
train=pickle.load(infile)
cols=['zip','type','train_built','beds','baths','sqrft','lot','$/sqrft']
x=train[cols]
train['$/sqrft']=np.log1p(train['$/sqrft'])
train['sqrft']=np.log1p(train['sqrft'])
train['lot']=np.log1p(train['lot'])
x=pd.get_dummies(x,columns=['zip','type','train_built'])
return x
def min_built():
    infile=open('costamesa_data.pkl','rb')
train=pickle.load(infile)
#for integrating: load all pickle files
#output is a list of minimums
costamesa_mini=int(train['built'].min())
return costamesa_mini
# print(train['built'].min())
# type(int(train['built'].min()))
# def min_built():
# infile=open('irvine_data.pk1','rb')
# train=pickle.load(infile)
# f=open('whatever')
# train_tust=pickle.load(f)
# #for integrating: load all pickle files
# #output is a list of minimums
# irvine_mini=train['built'].min()
# tustin_mini=train_tustin['built'].min()
# return [irvine_mini,tustin_mini]
min_b=min_built()
min_b
```
# Inference Tests
```
# mini=train['built'].min()
# maxi=train['built'].max()
# print(mini,maxi)
# decades_no=[]
# for i in train.built:
# decades=(i-mini)/10
# # print(decades)
# decades_no.append(decades)
# train['train_built']=pd.Series(decades_no)
# train['train_built']=train['train_built'].round(0)
# train.head()
anova_data=train[['price','train_built']]
# anova_data['train_built']=anova_data['train_built'].round(0)
# bin_series=anova_data['train_built'].value_counts()
##bin without series:
bins=pd.unique(anova_data.train_built.values)
f_test_data={grp:anova_data['price'][anova_data.train_built==grp] for grp in bins}
print(bins)
from scipy import stats
F, p=stats.f_oneway(f_test_data[5.],f_test_data[10.],f_test_data[9.],f_test_data[11.],f_test_data[7.],
f_test_data[6.],f_test_data[8.],f_test_data[3.],f_test_data[4.],f_test_data[0.])
# array([ 5., 10., 9., 11., 7., 6., 8., 3., 4., 0.])
print(F,p)
k=len(pd.unique(anova_data.train_built.values))
N=len(anova_data.values)
n=anova_data['train_built'].value_counts()
# F-statistic: between-group / within-group variability
DFbetween = k - 1
DFwithin = N - k
DFtotal = N - 1
print(f"degrees of freedom between: {DFbetween}")
print(f"degrees of freedom within: {DFwithin}")
print(f"degrees of freedom total: {DFtotal}")
#reject null, not all group means are equal, variance exists, include year built in ML
import statsmodels.api as sm
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score, mean_squared_error, accuracy_score
##non-zero HOA data df prep for reg. analysis, p-value, 95% CI
hoa_f_prep=train[train['hoa'].notnull()]
# hoa_f_prep.info()
dep_var=hoa_f_prep['price']
indep_var=hoa_f_prep['hoa']
indep_var=indep_var.values.reshape(-1,1)
# define the model
model = LinearRegression()
# fit the model to training data
model.fit(indep_var, dep_var)
#run p-test
params = np.append(model.intercept_,model.coef_)
predictions = model.predict(indep_var)
new_indep_var = pd.DataFrame({"Constant":np.ones(len(indep_var))}).join(pd.DataFrame(indep_var))
MSE = (sum((dep_var-predictions)**2))/(len(new_indep_var)-len(new_indep_var.columns))
var_b = MSE*(np.linalg.inv(np.dot(new_indep_var.T,new_indep_var)).diagonal())
sd_b = np.sqrt(var_b)
ts_b = params/ sd_b
p_values =[2*(1-stats.t.cdf(np.abs(i),(len(new_indep_var)-1))) for i in ts_b]
sd_b = np.round(sd_b,3)
ts_b = np.round(ts_b,3)
p_values = np.round(p_values,3)
params = np.round(params,4)
p_test_df = pd.DataFrame()
p_test_df["Coefficients"],p_test_df["Standard Errors"],p_test_df["t values"],p_test_df["Probabilites"] = [params,sd_b,ts_b,p_values]
print(p_test_df)
# predict
dep_var_pred = model.predict(indep_var)
print(r2_score(dep_var, dep_var_pred))
# Low R^2 despite a low p-value: HOA is statistically significant but a weak predictor.
# Since the upcoming ML section needs precise predictions, it is statistically safe to disregard hoa inferentially.
train['price']=np.log1p(train['price'])
plt.subplots(figsize=(20,8))
sns.distplot(train['price'],fit=stats.norm)
(mu,sigma)=stats.norm.fit(train['price'])
plt.legend(['Normal Distribution Params mu={} and sigma={}'.format(mu,sigma)],loc='best')
plt.ylabel('frequency')
fig=plt.figure()
stats.probplot(train['price'],plot=plt)
plt.show()
# cols=['zip','type','beds','baths','sqrft','lot','$/sqrft','train_built']
cols=['zip','train_built','type','beds','baths','sqrft','lot']
x=train[cols]
y=train['price']
# y=np.log1p(y)
# plt.subplots(figsize=(20,8))
# sns.distplot(y,fit=stats.norm)
# (mu,sigma)=stats.norm.fit(y)
# plt.legend(['Normal Distribution Params mu={} and sigma={}'.format(mu,sigma)],loc='best')
# plt.ylabel('frequency')
# fig=plt.figure()
# stats.probplot(y,plot=plt)
# plt.show()
train['$/sqrft']=np.log1p(train['$/sqrft'])
plt.subplots(figsize=(5,5))
sns.distplot(train['$/sqrft'],fit=stats.norm)
(mu,sigma)=stats.norm.fit(train['$/sqrft'])
plt.legend(['Normal Distribution Params mu={} and sigma={}'.format(mu,sigma)],loc='best')
plt.ylabel('frequency')
fig=plt.figure()
stats.probplot(train['$/sqrft'],plot=plt)
plt.show()
train['sqrft']=np.log1p(train['sqrft'])
plt.subplots(figsize=(5,5))
sns.distplot(train['sqrft'],fit=stats.norm)
(mu,sigma)=stats.norm.fit(train['sqrft'])
plt.legend(['Normal Distribution Params mu={} and sigma={}'.format(mu,sigma)],loc='best')
plt.ylabel('frequency')
fig=plt.figure()
stats.probplot(train['sqrft'],plot=plt)
plt.show()
train['lot']=np.log1p(train['lot'])
plt.subplots(figsize=(5,5))
sns.distplot(train['lot'],fit=stats.norm)
(mu,sigma)=stats.norm.fit(train['lot'])
plt.legend(['Normal Distribution Params mu={} and sigma={}'.format(mu,sigma)],loc='best')
plt.ylabel('frequency')
fig=plt.figure()
stats.probplot(train['lot'],plot=plt)
plt.show()
train[cols].head()
x.head()
x=pd.get_dummies(x,columns=['zip','type','train_built'])
x.head()
x.columns
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test=train_test_split(x,y,test_size=0.20,random_state=42)
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score, mean_squared_error, accuracy_score
# define the model
model = LinearRegression()
# fit the model to training data
model.fit(x_train, y_train)
# predict
y_train_pred = model.predict(x_train)
y_test_pred = model.predict(x_test)
print("The R^2 score for training data is", r2_score(y_train, y_train_pred))
print("The R^2 score for testing data is", r2_score(y_test, y_test_pred))
print("The train RMSE is ", mean_squared_error(y_train, y_train_pred)**0.5)
print("The test RMSE is ", mean_squared_error(y_test, y_test_pred)**0.5)
dff=pd.DataFrame({"true_values": y_train, "predicted": y_train_pred, "residuals": y_train - y_train_pred})
dff
```
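The target above is trained on `np.log1p(price)`, so predictions must be mapped back with `np.expm1`, its exact inverse (as done later with `np.expm1(ridge.predict(...))`). A quick sanity sketch of the round trip:

```python
import numpy as np

price = np.array([500000.0, 750000.0, 1200000.0])

logged = np.log1p(price)      # log(1 + x): safe near zero, compresses the right tail
recovered = np.expm1(logged)  # exp(x) - 1: exact inverse of log1p

print(np.allclose(recovered, price))  # True
```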
# Check normality of residuals for IV
```
plt.subplots(figsize=(5,5))
# plt.subplots(1,2,sharex='none')
# sns.distplot(dff['residuals'],fit=stats.norm)
# plt.subplots(1,2,sharex='none')
# stats.probplot(dff['residuals'],plot=plt)
# fig, (ax1, ax2) = plt.subplots(ncols=2, sharex=False,sharey=False)
sns.distplot(dff['residuals'],fit=stats.norm)
(mu,sigma)=stats.norm.fit(dff['residuals'])
plt.legend(['Normal Distribution Params mu={} and sigma={}'.format(mu,sigma)],loc='best')
plt.ylabel('frequency')
fig=plt.figure()
stats.probplot(dff['residuals'],plot=plt)
plt.show()
dff['true_values'].max()
from sklearn.linear_model import Lasso, Ridge, ElasticNet
# define the model
lasso = Lasso(random_state=42)
# fit the model to the data
lasso.fit(x_train, y_train)
# predictions
y_pred_lasso = lasso.predict(x_test)
RMSE_lasso = mean_squared_error(y_test, y_pred_lasso)**0.5
r2_lasso = r2_score(y_test, y_pred_lasso)
print(RMSE_lasso)
print(r2_lasso)
# define the model
ridge = Ridge(random_state=42)
# fit the model to the data
ridge.fit(x_train, y_train)
y_train_pred=ridge.predict(x_train) ##this one
# predictions
y_pred_ridge = ridge.predict(x_test)
RMSE_ridge = mean_squared_error(y_test, y_pred_ridge)**0.5
r2_ridge = r2_score(y_test, y_pred_ridge)
RMSE_ridge_train = mean_squared_error(y_train, y_train_pred)**0.5 #this
r2_train=r2_score(y_train, y_train_pred) #this
print(RMSE_ridge)
print(r2_ridge)
print(RMSE_ridge_train)
print(r2_train)
ridge
np.expm1(model.predict(x_test.iloc[0].values.reshape(1,-1)))
x_test.iloc[0]
np.expm1(y_test.iloc[0])
# import pickle
# ridge_pickle_t = open("costamesa_model.pkl","wb")
# pickle.dump(ridge, ridge_pickle_t)
ridge_model = open("costamesa_model.pkl","rb")
ridge = pickle.load(ridge_model)
beds = []
baths = []
sqrft = []
lot = []
# per_sqrft = []
zipcode = ""
types = ""
year_built=""
beds.append(input("Bedrooms: "))
baths.append(input("Bathrooms: "))
sqrft.append(input("Squarefeet: "))
lot.append(input("Lot Size: "))
# per_sqrft.append(input("$'s per Square Feet': "))
city=input("City: ")
zipcode = input("Zipcode: ")
types = input("House Type: ")
year_built=input("Built: ")
int_year_built=int(year_built)
# def min_built():
# infile=open('irvine_data.pk1','rb')
# train=pickle.load(infile)
# f=open('whatever')
# train_tust=pickle.load(f)
# #for integrating: load all pickle files
# #output is a list of minimums
# irvine_mini=train['built'].min()
# tustin_mini=train_tustin['built'].min()
# return [irvine_mini,tustin_mini]
temp=min_built()
def temp_bin(num):
temp_yr_bin=round((num-temp)/10,0)
return temp_yr_bin
# def binned_year(num):
# minimums=min_built()
# if city=="Irvine" or city=="irvine":
# city_min=minimum[0]
# elif city=="tustin" or city=="Tustin":
# city_min=minimum[1]
# #etc
# binned_yr=round((num-city_min)/10,0)
# return binned_yr
print(temp_bin(int_year_built))
print(type(temp_bin(int_year_built)))
user_dictionary={'zip':zipcode,'type':types,'train_built':str(temp_bin(int_year_built)),'beds':beds,'baths':baths,'sqrft':sqrft,'lot':lot}
user_df=pd.DataFrame(user_dictionary)
user_df_fit=pd.get_dummies(user_df,columns=['zip','type','train_built'])
type(user_dictionary['train_built'])
user_df_fit
x.columns
for i in x.columns:
if i in user_df_fit.columns:
pass
else:
user_df_fit[i]=0
user_df_fit
user_df_fit.columns
x.columns
# np.expm1(ridge.predict(user_df_fit))
np.expm1(ridge.predict(user_df_fit))
# np.expm1(model.predict(x_test.iloc[0].values.reshape(1,-1)))
```
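The column-padding loop above (adding each missing dummy column as 0) can be expressed in one call with `DataFrame.reindex`. A sketch with hypothetical toy columns standing in for the real dummy columns:

```python
import pandas as pd

# Columns the model was trained on vs. columns produced for one user row
train_cols = ['beds', 'zip_92626', 'zip_92627', 'type_sfr', 'type_condo']
user_df_fit = pd.DataFrame({'beds': [3], 'zip_92626': [1], 'type_sfr': [1]})

# Add every missing training column as 0 and put columns in training order
aligned = user_df_fit.reindex(columns=train_cols, fill_value=0)
print(aligned.iloc[0].tolist())  # [3, 1, 0, 1, 0]
```

Besides being shorter, `reindex` also guarantees the column *order* matches training, which scikit-learn's `predict` relies on.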
# Late contributions Received and Made
## Setup
```
%load_ext sql
from django.conf import settings
connection_string = 'postgresql+psycopg2://{USER}:{PASSWORD}@{HOST}:{PORT}/{NAME}'.format(
**settings.DATABASES['default']
)
%sql $connection_string
```
## Unique Composite Key
The documentation says that the records are unique on the following fields:
* `FILING_ID`
* `AMEND_ID`
* `LINE_ITEM`
* `REC_TYPE`
* `FORM_TYPE`
`REC_TYPE` is always the same value: `S497`, so we can ignore this column.
`FORM_TYPE` is either `F497P1` or `F497P2`, indicating whether the itemized transaction is listed under Part 1 (Contributions Received) or Part 2 (Contributions Made). I'll split these up into separate tables.
## Are the `S497_CD` records actually unique on `FILING_ID`, `AMEND_ID` and `LINE_ITEM`?
Yes. And this is even true across Parts 1 and 2 (Contributions Received and Contributions Made).
```
%%sql
SELECT "FILING_ID", "AMEND_ID", "LINE_ITEM", COUNT(*)
FROM "S497_CD"
GROUP BY 1, 2, 3
HAVING COUNT(*) > 1
ORDER BY COUNT(*) DESC;
```
## `TRAN_ID`
The `S497_CD` table includes a `TRAN_ID` field, which the [documentation](http://calaccess.californiacivicdata.org/documentation/calaccess-files/s497-cd/#fields) describes as a "Permanent value unique to this item".
### Is `TRAN_ID` ever `NULL` or blank?
No.
```
%%sql
SELECT COUNT(*)
FROM "S497_CD"
WHERE "TRAN_ID" IS NULL OR "TRAN_ID" = '' OR "TRAN_ID" = '0';
```
### Is `TRAN_ID` unique across filings?
Decidedly no.
```
%%sql
SELECT "TRAN_ID", COUNT(DISTINCT "FILING_ID")
FROM "S497_CD"
GROUP BY 1
HAVING COUNT(DISTINCT "FILING_ID") > 1
ORDER BY COUNT(DISTINCT "FILING_ID") DESC
LIMIT 100;
```
But `TRAN_ID` does appear to be unique within each filing amendment, and appears to be reused for each filing.
```
%%sql
SELECT "FILING_ID", "TRAN_ID", COUNT(DISTINCT "AMEND_ID") AS amend_count, COUNT(*) AS row_count
FROM "S497_CD"
GROUP BY 1, 2
ORDER BY COUNT(*) DESC
LIMIT 100;
```
There's one exception:
```
%%sql
SELECT "FILING_ID", "TRAN_ID", "AMEND_ID", COUNT(*)
FROM "S497_CD"
GROUP BY 1, 2, 3
HAVING COUNT(*) > 1;
```
Looks like this `TRAN_ID` is duplicated across the two parts of the filing. So it was a contribution both made and received?
```
%%sql
SELECT *
FROM "S497_CD"
WHERE "FILING_ID" = 2072379
AND "TRAN_ID" = 'EXP9671';
```
Looking at the [PDF for the filing](http://cal-access.ss.ca.gov/PDFGen/pdfgen.prg?filingid=2072379&amendid=1), it appears to be a check from the California Psychological Association PAC to the McCarty for Assembly 2016 committee, which was given and returned on 8/25/2016.
Regardless, because the combinations of `FILING_ID`, `AMEND_ID` and `TRAN_ID` are unique within each part of the Schedule 497, we could substitute `TRAN_ID` for `LINE_ITEM` in the composite key when splitting up the contributions received from the contributions made.
The advantage is that the `TRAN_ID` purportedly points to the same contribution from one amendment to the next, whereas the same `LINE_ITEM` might not because the filers don't necessarily list transactions on the same line from one filing amendment to the next.
Here's an example: On the [original Schedule 497 filing](http://cal-access.ss.ca.gov/PDFGen/pdfgen.prg?filingid=2083478&amendid=0) for Steven Bradford for Senate 2016, an $8,500.00 contribution from an AFL-CIO sub-committee is listed on line 1. But on the [first](http://cal-access.ss.ca.gov/PDFGen/pdfgen.prg?filingid=2083478&amendid=1) and [second](http://cal-access.ss.ca.gov/PDFGen/pdfgen.prg?filingid=2083478&amendid=2) amendments to the filing, it is listed on line 4.
# Basics
Let's first take a look at what's inside the ``ib_insync`` package:
```
import ib_insync
print(ib_insync.__all__)
```
### Importing
The following two lines are used at the top of all notebooks. The first line imports everything and the second
starts an event loop to keep the notebook updating live:
```
from ib_insync import *
util.startLoop()
```
*Note that startLoop() only works in notebooks, not in regular Python programs.*
### Connecting
The main player of the whole package is the "IB" class. Let's create an IB instance and connect to a running TWS/IBG application:
```
ib = IB()
ib.connect('127.0.0.1', 4002, clientId=12)
```
If the connection failed, then verify that the application has the API port enabled and double-check the hostname and port. For IB Gateway the default port is 4002. Make sure the clientId is not already in use.
If the connection succeeded, then ib will be synchronized with TWS/IBG. The "current state" is now available via methods such as ib.positions(), ib.trades(), ib.openTrades(), ib.accountValues() or ib.tickers(). Let's list the current positions:
```
ib.positions()
```
Or filter the account values to get the liquidation value:
```
[v for v in ib.accountValues() if v.tag == 'NetLiquidationByCurrency' and v.currency == 'BASE']
```
The "current state" will automatically be kept in sync with TWS/IBG. So an order fill will be added as soon as it is reported, or account values will be updated as soon as they change in TWS.
### Contracts
Contracts can be specified in different ways:
* The ibapi way, by creating an empty Contract object and setting its attributes one by one;
* By using Contract and giving the attributes as keyword argument;
* By using the specialized Stock, Option, Future, Forex, Index, CFD, Commodity,
Bond, FuturesOption, MutualFund or Warrant contracts.
Some examples:
```
Contract(conId=270639)
Stock('AMD', 'SMART', 'USD')
Stock('INTC', 'SMART', 'USD', primaryExchange='NASDAQ')
Forex('EURUSD')
CFD('IBUS30')
Future('ES', '20180921', 'GLOBEX')
Option('SPY', '20170721', 240, 'C', 'SMART')
Bond(secIdType='ISIN', secId='US03076KAA60');
```
### Sending a request
The IB class has nearly all request methods that the IB API offers. The methods that return a result will block until finished and then return the result. Take for example reqContractDetails:
```
contract = Stock('TSLA', 'SMART', 'USD')
ib.reqContractDetails(contract)
```
### Current state vs request
Doing a request involves network traffic going up and down and can take considerable time. The current state on the other hand is always immediately available. So it is preferable to use the current state methods over requests. For example, use ``ib.openOrders()`` in preference over ``ib.reqOpenOrders()``, or ``ib.positions()`` over ``ib.reqPositions()``, etc:
```
%time l = ib.positions()
%time l = ib.reqPositions()
```
### Logging
The following will put log messages of INFO and higher level under the current active cell:
```
util.logToConsole()
```
To see all debug messages (including network traffic):
```
import logging
util.logToConsole(logging.DEBUG)
```
### Disconnecting
The following will disconnect ``ib`` and clear all its state:
```
ib.disconnect()
```
# Practical 2 - Loops and conditional statements
In today's practical we are going to continue practicing working with loops whilst also moving on to the use of conditional statements.
<div class="alert alert-block alert-success">
<b>Objectives:</b> The objectives of today's practical are:
- 1) [Loops: FOR loops continued](#Part1)
* [Exercise 1: Cycling through arrays and modifying values](#Exercise1)
- 2) [Conditional statements: IF, ELSE and ELIF](#Part2)
    * [Exercise 2: Modify a loop to implement one of three equations according to a condition being met](#Exercise2)
- 3) [Nested loops: Working with more than 1 dimension](#Part3)
* [Exercise 3: Print out the result from a nested loop according to a condition being met](#Exercise3)
* [Exercise 4: Print out which variables match a condition](#Exercise4)
* [Exercise 5: Repeat Bob Newby's code breaking nested loops to crack the code in the Hawkins lab](#Exercise5)
Please note that you should not feel pressured to complete every exercise in class. These practicals are designed for you to take outside of class and continue working on them. Proposed solutions to all exercises can be found in the 'Solutions' folder.
</div>
<div class="alert alert-block alert-warning">
<b>Please note:</b> After reading the instructions and aims of any exercise, search the code snippets for a note that reads ------'INSERT CODE HERE'------ to identify where you need to write your code
</div>
## 1) Loops: FOR loops continued <a name="Part1">
Let us jump straight into our first exercise, following on from the previous practical.
<div class="alert alert-block alert-success">
<b> Exercise 1: Cycling through arrays and modifying values. <a name="Exercise1"> </b> Create a loop that implements the function:
\begin{eqnarray}
Y = X^{2.8/X}
\end{eqnarray}
Where X is an array of 75 values from 11 to 85. Print the final 4 values of the X and Y arrays one-by-one. Your output should look something like:
```python
The 72nd element of x is 82
The 72nd element of y is 1.1623843156991178
The 73rd element of x is 83
The 73rd element of y is 1.1607534518329277
The 74th element of x is 84
The 74th element of y is 1.1591580160090038
The 75th element of x is 85
The 75th element of y is 1.157596831308393
```
</div>
```
# Initialise an empty list for both 'x' and 'y'
x = []
y = []
# Now loop through 75 values and append each list accordingly.
# One list contained values for 'x', the other 'y'.
# Please note the operator ** is needed to raise one number to another [e.g 2**3]
#------'INSERT CODE HERE'------
for step in range(75):
# Append a value to our x array
x.append(10+(step+1))
# Append a value to our y array
y.append(x[step]**(2.8/x[step]))
#------------------------------
# Print the last four values from both our x and y arrays
print("The 72nd element of x is ",x[71])
print("The 72nd element of y is ",y[71])
print("The 73rd element of x is ",x[72])
print("The 73rd element of y is ",y[72])
print("The 74th element of x is ",x[73])
print("The 74th element of y is ",y[73])
print("The 75th element of x is ",x[74])
print("The 75th element of y is ",y[74])
```
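The repetitive print calls above can be collapsed into a short loop over the last four indices; a sketch of the same computation:

```python
# Build x (11..85) and y with list comprehensions
x = [10 + (step + 1) for step in range(75)]
y = [xi ** (2.8 / xi) for xi in x]

# Print the final four x/y pairs one-by-one
for i in range(71, 75):
    print(f"Element {i + 1} of x is {x[i]}")
    print(f"Element {i + 1} of y is {y[i]}")
```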
## 2) Conditional statements: The IF, ELIF and ELSE statements <a name="Part2">
Once we have information stored in an array, or wish to generate information iteratively, we start to use a combination of **loops** and **conditional statements**. Conditional statements allow us to develop software that can be responsive to certain conditions. For example, in the following control flow diagram we define a set of instructions that initialise a variable and start a loop that adds 3 to this variable at each iteration. However, at each iteration we also check the value of said variable and if it becomes equal to or greater than 30, we stop the program.

The following table lists the Python equivalent of common mathematical symbols to check numerical values.
| Meaning | Math Symbol | Python Symbols |
| --- | --- | --- |
| Less than | < | < |
| Greater than | > | > |
| Less than or equal | ≤ | <= |
| Greater than or equal | ≥ | >= |
| Equals | = | == |
| Not equal | ≠ | != |
<div class="alert alert-block alert-danger">
<b> Warning </b> The obvious choice for equals, a single equal sign, is not used to check for equality: a single equal sign performs assignment. It is a common error to write = when you mean to test for equality with ==!
</div>
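A quick sketch of the difference: = binds a value to a name, while == (and the other operators in the table) evaluates to True or False:

```python
x = 7           # assignment: x now refers to 7
check = x == 7  # comparison: evaluates to a boolean

print(check)    # True
print(x != 3)   # True
print(x >= 10)  # False
```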
How do we implement checks using these symbols? This is where we use the IF, ELIF and ELSE statements. Let us start with an example.
```python
# Initialise two variables with integer values
x=3
y=5
# Now use an IF statement to check the relative values of x and y, then act accordingly
if x > y:
    print("X is greater than Y")

if x < y:
    print("X is less than Y")

# Output: X is less than Y
```
Once again, notice how we have introduced a statement that ends with a colon : and thus requires the next line to be indented. We also use specific symbols to check whether one value is greater than [>] or less than [<] another. Within each condition check, depending on which is true, we print a message to the screen.
Rather than use two IF statements, we could combine these checks using an ELSE statement as follows:
```python
# Initialise two variables with integer values
x=3
y=5
# Now use an IF statement to check the relative values of x and y, then act accordingly
if x > y:
    print("X is greater than Y")
else:
    print("X is less than Y")

# Output: X is less than Y
```
There are a huge number of examples we could work on here, but to begin let's build on the first exercise. In the following code we again have two variables 'x' and 'y'. Each has 50 elements. Let's assume that we want to implement two functions: one that is used if our x value is *less than or equal* to 20, the other if x is *greater than* 20. We can use a combination of the IF and ELSE statements.
- If $X$ is *less than or equal* to 20, $ Y = \frac{X}{12.5} $
- Otherwise [else], $Y = X^{12.5} $
Lets see this implemented as code below. Read through the syntax and if you do not understand, please ask.
<div class="alert alert-block alert-danger">
<b>Indentation</b> Once again, notice how we have introduced a statement that ends with a colon : and thus requires the next line to be indented.
</div>
```
# Initialise an empty list for both 'x' and 'y'
x = []
y = []
for step in range(50):
# Append a value to our x array
x.append(step+1)
# Now add a conditional statement to check the value of x
# Notice our additional indentation
if x[step] <= 20:
# Append a value to our y array
y.append(x[step]/12.5)
else:
# Append a value to our y array
y.append(x[step]**12.5)
# Print the first and last four values from both our x and y arrays
# First four
print("The 1st element of x is ",x[0])
print("The 1st element of y is ",y[0])
print("The 2nd element of x is ",x[1])
print("The 2nd element of y is ",y[1])
print("The 3rd element of x is ",x[2])
print("The 3rd element of y is ",y[2])
print("The 4th element of x is ",x[3])
print("The 4th element of y is ",y[3])
# Last four
print("The 47th element of x is ",x[46])
print("The 47th element of y is ",y[46])
print("The 48th element of x is ",x[47])
print("The 48th element of y is ",y[47])
print("The 49th element of x is ",x[48])
print("The 49th element of y is ",y[48])
print("The 50th element of x is ",x[49])
print("The 50th element of y is ",y[49])
```
### The AND statement
Once we move beyond two mutually exclusive conditions, we can also use the ELIF statement. However, we need to be careful to assign correct boundaries for our conditions. For example, let us assume we have been tasked with creating an array X that contains values from 1 to 200 and we want to implement 3 equations according to the following rules:
- If X is less than 20, use: $ Y = \frac{X}{1.56} $
- If X is greater than or equal to 20, but less than 60 use: $ Y = X^{0.35} $
- If X is greater than or equal to 60 use: $ Y = 4.5*X $
Look at the following two different versions of a loop using the conditional statements introduced earlier.
```python
# Version 1
if x[step] < 20:
<<action>>
elif x[step] >= 20:
<<action>>
elif x[step] >= 60:
<<action>>
```
```python
# Version 2
if x[step] < 20:
<<action>>
elif x[step] >= 20 and x[step] < 60:
<<action>>
elif x[step] >= 60:
<<action>>
```
The first version will work, but produce incorrect results. Why is that? If you follow the code instructions, as the Python interpreter would, once x[step] is greater than or equal to 20 the second conditional statement will always be true. As a result, execution never reaches the third branch. In the second version however, the second conditional is no longer true once x[step] is greater than or equal to 60.
Let us run both versions and plot the results so you can see the difference. In the following code I will create two Y arrays, one for each loop variant. A line plot will be produced where you should see a step change in values according to these rules. Do not worry about the syntax or module used to create the plot, we will visit this throughout the course.
```
# Initialise an empty list for both 'x' and 'y'
x = []
y_version1 = []
y_version2 = []
for step in range(200):
# Append a value to our x array
x.append(step+1)
# Version 1
if x[step] < 20:
# Append a value to our y array
y_version1.append(x[step]/1.56)
elif x[step] >= 20:
# Append a value to our y array
y_version1.append(x[step]**0.35)
elif x[step] >= 60:
y_version1.append(4.5*x[step])
# Version 2
if x[step] < 20:
# Append a value to our y array
y_version2.append(x[step]/1.56)
elif x[step] >= 20 and x[step] < 60:
# Append a value to our y array
y_version2.append(x[step]**0.35)
elif x[step] >= 60:
y_version2.append(4.5*x[step])
# Plot results
import matplotlib.pyplot as plt # Import Matplotlib so we can plot results
import numpy as np # The Numpy package - more soon!!
fig = plt.figure(figsize=(8,8))
ax = plt.axes()
ax.plot(np.array(x),np.log(y_version1),label='Version 1')
ax.plot(np.array(x),np.log(y_version2),label='Version 2')
ax.set_title('Y as a function of X')
ax.legend(loc='upper left')
ax.set_ylabel('Y')
ax.set_xlabel('X')
```
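The difference between the two versions can also be checked without a plot. A minimal sketch wrapping each ordering in a function (the helper names are made up for illustration):

```python
def y_version1(x):
    # Buggy ordering: the x >= 20 branch swallows everything from 20 upwards
    if x < 20:
        return x / 1.56
    elif x >= 20:
        return x ** 0.35
    elif x >= 60:
        return 4.5 * x

def y_version2(x):
    # Correct ordering: the middle branch is capped below 60
    if x < 20:
        return x / 1.56
    elif x >= 20 and x < 60:
        return x ** 0.35
    elif x >= 60:
        return 4.5 * x

print(y_version1(100))  # takes the x ** 0.35 branch — wrong
print(y_version2(100))  # 450.0 — correct branch
```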
<div class="alert alert-block alert-success">
<b> Exercise 2: Modify a loop to implement one of three equations according to a condition being met <a name="Exercise2"> </b> In this case, let us assume an array X contains values from 1 to 1000 and we want to implement 3 equations according to the following rules:
- If X is less than or equal to 250, $ Y = X^{1.03} $
- If X is greater than 250, but less than 690 $ Y = X^{1.25} $
- If X is greater than or equal to 690, $ Y = X^{1.3} $
This is the first graph we have created. Don't worry about the syntax for now, we will produce graphs in every practical following this.
Your output should look like the following:

</div>
```
# Initialise an empty list for both 'x' and 'y'
x = []
y = []
for step in range(1000):
# Append a value to our x array
x.append(step+1)
#------'INSERT CODE HERE'------
# Now add a conditional statement to check the value of x
# Notice our additional indentation
if x[step] <= 250:
# Append a value to our y array
y.append(x[step]**1.03)
    elif x[step] > 250 and x[step] < 690:
        y.append(x[step]**1.25)
    elif x[step] >= 690:
        # Append a value to our y array
        y.append(x[step]**1.3)
#------------------------------
# Plot the results
#Import plotting package
import matplotlib.pyplot as plt # Import Matplotlib so we can plot results
import numpy as np # The Numpy package - more soon!!
fig = plt.figure(figsize=(8,8))
ax = plt.axes()
ax.plot(np.array(x),np.log(y))
ax.set_title('Y as a function of X')
ax.set_ylabel('Y')
ax.set_xlabel('X')
```
## 3) Nested loops: Working with more than 1 dimension <a name="Part3">
In many cases we want to work with more than one variable at a time, often in a two [or more] dimensional setting. We can combine 'FOR' loops on any number of levels. For example, take the following hypothetical example:
```python
for [first iterating variable] in [outer loop]: # Outer loop
[do something] # Optional
for [second iterating variable] in [nested loop]: # Nested loop
[do something]
```
Notice how we have what we might call our first, or 'outer', loop cycling through our first iterating variable. As we cycle through this variable, we then 'do something' as a direct consequence. However, directly following this action, we cycle through a second iterating variable as part of our 'nested loop'. In other words, we have a loop that is nested within our first, or outer, loop.
<div class="alert alert-block alert-danger">
<b>Indentation</b> Once again, notice how we have introduced a statement that ends with a colon : and thus requires the next line to be indented.
</div>
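Before the word-list example below, here is a minimal numeric sketch of a nested loop collecting every (i, j) pair — note the inner loop runs to completion on every pass of the outer loop:

```python
pairs = []
for i in range(2):        # outer loop
    for j in range(3):    # nested loop runs fully for each i
        pairs.append((i, j))

print(pairs)       # [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2)]
print(len(pairs))  # 2 * 3 = 6
```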
Let us run an example of cycling through a list of words. In this case we are not using the
```python
range()
```
function as we are not dealing with numeric examples or cycling through integers.
```
# Create two lists of words
list1 = ['Hello','Goodbye']
list2 = ['George','Frank','Susan','Sarah']
for word1 in list1:
for word2 in list2:
print(word1)
print(word2)
```
It turns out we can make the output easier to read by adding each word together, with a space ' ' in between:
```
# Create two lists of words
list1 = ['Hello','Goodbye']
list2 = ['George','Frank','Susan','Sarah']
for word1 in list1:
    for word2 in list2:
        print(word1+' '+word2)
```
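As an aside, if you are using Python 3.6 or later, an f-string offers a more readable way to build the same output; a small sketch of the same loop:

```python
# Create two lists of words
list1 = ['Hello','Goodbye']
list2 = ['George','Frank','Susan','Sarah']
for word1 in list1:
    for word2 in list2:
        # An f-string substitutes the variables directly into the text
        print(f"{word1} {word2}")
```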
We will not be dealing with the rich text-processing power of Python in this course, but if you are interested there are some great examples to follow on the [internet](https://towardsdatascience.com/gentle-start-to-natural-language-processing-using-python-6e46c07addf3). The important lesson here is noticing how we deal with a nested loop. Please also note that we can use a 'FOR' loop to iterate over the members of any list, whether they hold numeric or string values.
Again we can use conditional statements to modify our output. What if we only wanted to output entries involving Susan? We can add a conditional statement as follows:
```
# Create two lists of words
list1 = ['Hello','Goodbye']
list2 = ['George','Frank','Susan','Sarah']
for word1 in list1:
    for word2 in list2:
        if word2 == "Susan":
            print(word1+' '+word2)
```
<div class="alert alert-block alert-success">
<b> Exercise 3: Print out the result from a nested loop according to a condition being met <a name="Exercise3"> </b>
In this exercise we have three lists with the following entries:
list1 = ['Maths','Physics','Programming','Chemistry']
list2 = ['is','can be','is not']
list3 = ['enjoyable','awful!','ok, I guess','....']
Your task is to create a triple nested loop and only print out when the word from list1 is 'Physics' and the word from list2 is 'can be'.
There are multiple ways to achieve this.
Your results should look like the following:
```python
Physics can be enjoyable
Physics can be awful!
Physics can be ok, I guess
Physics can be ....
```
</div>
```
# Create three lists of words
#------'INSERT CODE HERE'------
list1 = ['Maths','Physics','Programming','Chemistry']
list2 = ['is','can be','is not']
list3 = ['enjoyable','awful!','ok, I guess','....']
for word1 in list1:
    for word2 in list2:
        for word3 in list3:
            if word1 == "Physics" and word2 == "can be":
                print(word1+' '+word2+' '+word3)
#------------------------------
```
<div class="alert alert-block alert-success">
<b> Exercise 4: Print out which variables match a condition <a name="Exercise4"> </b>
In this exercise we have two variables, 'x' and 'y' taking on a value from two loops that cycle through 80 values.
```python
for x in range(80):
    for y in range(80):
        [do something]
```
Your task is to identify which combinations of x and y, when run through the function:
\begin{eqnarray}
Z = Y+X^{2}
\end{eqnarray}
produce a value of Z = 80
</div>
```
#------'INSERT CODE HERE'------
for x in range(80):
    for y in range(80):
        z = y + x**2.0
        if z == 80:
            print('x = ', x)
            print('y = ', y)
#------------------------------
```
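For larger ranges, the same search can be written without explicit loops using NumPy; a sketch of a vectorised version that should find the same (x, y) pairs:

```python
import numpy as np

x = np.arange(80)
y = np.arange(80)
# Build 80x80 grids holding every (x, y) combination
X, Y = np.meshgrid(x, y, indexing='ij')
Z = Y + X**2
# argwhere returns the (x, y) index pairs where the condition holds
matches = np.argwhere(Z == 80)
for xv, yv in matches:
    print('x = ', xv)
    print('y = ', yv)
```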
<div class="alert alert-block alert-success">
<b> Exercise 5: Repeat Bob Newby's code breaking nested loops to crack the code in the Hawkins lab <a name="Exercise5"> </b>
In this exercise we imagine that we are tasked with finding the value of a password that is different every time the program is executed. The password is generated by an internal Python function and used to create a string containing 5 digits. We then have to create a 5-level nested loop to combine 5 different numbers into one word, and when this matches the string generated by the internal Python function, the attempted, and thus correct, password is printed to the screen.
The code box below provides you with indented lines in which to enter the rest of the code required. The first loop is provided. As part of the 5th loop you will need to combine all of the individual numbers, as strings, into one word and then check if this is the same as the internally generated password. You can use the following commands for this, assuming that you call each letter letter1, letter2 etc.
```python
password_attempt = letter1+letter2+letter3+letter4+letter5
if password_attempt == password_string:
    print("Passwords match!, attempted password = ",password_attempt)
```
Once you have finished, why not see how many steps have been taken to arrive at the correct password?
</div>
```
# The following imports a module [see Practical 3] and then creates a string of a random number of 5 digits
from random import randint
n=5
password_string = ''.join(["{}".format(randint(0, 9)) for num in range(0, 5)])
print("password = ", password_string)
# Now create a 5 level nested loop which prints when the successful password has been met
#------'INSERT CODE HERE'------
# First loop
for step1 in range(10):
    letter1 = str(step1) # Convert number to a string
    # Second loop
    for step2 in range(10):
        letter2 = str(step2)
        # Third loop
        for step3 in range(10):
            letter3 = str(step3)
            # Fourth loop
            for step4 in range(10):
                letter4 = str(step4)
                # Fifth loop
                for step5 in range(10):
                    letter5 = str(step5)
                    password_attempt = letter1+letter2+letter3+letter4+letter5
                    if password_attempt == password_string:
                        print("Passwords match!, attempted password = ",password_attempt)
#------------------------------
```
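Once your 5-level loop works, it is worth knowing that the standard library's itertools.product can collapse the nesting into a single loop; a sketch against the same kind of password_string as above:

```python
from itertools import product
from random import randint

# Same random 5-digit password as in the exercise
password_string = ''.join(["{}".format(randint(0, 9)) for num in range(0, 5)])

# product('0123456789', repeat=5) yields every 5-digit combination in order
for attempt, digits in enumerate(product('0123456789', repeat=5), start=1):
    password_attempt = ''.join(digits)
    if password_attempt == password_string:
        print("Passwords match!, attempted password = ", password_attempt)
        print("Steps taken: ", attempt)
        break
```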
## Imports
```
from __future__ import print_function, division
import pandas as pd
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf
import patsy
import seaborn as sns
import matplotlib.pyplot as plt
import scipy.stats as stats
%matplotlib inline
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import RidgeCV
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold
from sklearn.linear_model import Ridge
from sklearn.linear_model import ElasticNet
from sklearn.linear_model import Lasso
from sklearn.linear_model import ElasticNetCV
from sklearn.linear_model import LassoCV
from sklearn.metrics import mean_squared_error as MSE
```
## Reading and preparing the df
```
horsey = pd.read_csv('finalmerged_clean').drop('Unnamed: 0', axis=1)
```
#### Smaller data set (maiden females)
```
MaidenFems = horsey.iloc[42:49]
MaidenFems
```
#### Larger data set (without maiden females)
```
horse_fast = horsey.drop(horsey.index[42:49]).reset_index(drop=True)
horse_fast
horse_fast = horse_fast.drop('Final_Time',1).drop('Horse Name',1)
horse_fast
```
## Splitting into Master Test-Train
```
ttest = horse_fast.iloc[[1,5,10,15,20,25,30,35,40,45,50]].reset_index(drop=True)
ttrain = horse_fast.drop(axis = 0, index = [1,5,10,15,20,25,30,35,40,45,50]).sample(frac=1).reset_index(drop=True)
ttrain
y_ttrain = ttrain['Final_Time_Hund']
y_ttest = ttest['Final_Time_Hund'] #extract dependent variable
X_ttrain = ttrain.drop('Final_Time_Hund',1)
X_ttest = ttest.drop('Final_Time_Hund',1) # Get rid of ind. variables
```
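The index-based split above keeps a fixed hold-out set; for a random split, sklearn's train_test_split does the bookkeeping in one call. A minimal sketch on synthetic stand-in data (the column names here are placeholders, not the real ones):

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the horse data frame
rng = np.random.RandomState(0)
df = pd.DataFrame(rng.rand(50, 3), columns=['Gender', 'Firsts', 'Final_Time_Hund'])

X = df.drop('Final_Time_Hund', axis=1)
y = df['Final_Time_Hund']
# 20% random hold-out; random_state makes the split reproducible
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
print(X_tr.shape, X_te.shape)
```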
## Testing Assumptions
Didn't complete for sake of time
#### Assumption 1
```
XAssum = X_ttrain
yAssum = y_ttrain
XAssum_train, XAssum_test, yAssum_train, yAssum_test = train_test_split(XAssum, yAssum, test_size=0.2)
def diagnostic_plot(x, y):
    # Note: uses the global train/test split created above
    plt.figure(figsize=(20,5))
    rgr = LinearRegression()
    rgr.fit(XAssum_train, yAssum_train)
    pred = rgr.predict(XAssum_test) # predict() takes only the feature matrix
    #Predicted-vs-actual plot (X has several columns, so a single regression line can't be drawn)
    plt.subplot(1, 3, 1)
    plt.scatter(yAssum_test, pred)
    plt.title("Predicted vs actual")
    plt.xlabel("actual")
    plt.ylabel("predicted")
    #Residual plot (true minus predicted)
    plt.subplot(1, 3, 2)
    res = yAssum_test - pred
    plt.scatter(pred, res)
    plt.title("Residual plot")
    plt.xlabel("prediction")
    plt.ylabel("residuals")
    #A Q-Q plot (for the scope of today) is a percentile-percentile plot. When the predicted and
    #actual distributions are the same, the Q-Q plot is a diagonal 45-degree line. When they
    #diverge (e.g. the kurtosis of predicted and actual differ), the line gets wonky.
    plt.subplot(1, 3, 3)
    #Generates a probability plot of sample data against the quantiles of a
    #specified theoretical distribution
    stats.probplot(res, dist="norm", plot=plt)
    plt.title("Normal Q-Q plot")
diagnostic_plot(XAssum_train, yAssum_train)
modelA = ElasticNet(1, l1_ratio=.5)
fit = modelA.fit(XAssum_train, yAssum_train)
rsq = fit.score(XAssum_train, yAssum_train)
adj_rsq = 1 - (1-rsq)*(len(yAssum_train)-1)/(len(yAssum_train)-XAssum_train.shape[1]-1)
print(rsq)
print(adj_rsq)
```
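The adjusted-R² formula in the cell above is retyped many times in this notebook; a small helper function (the name adjusted_r2 is just a suggestion) would keep it in one place:

```python
def adjusted_r2(r2, n, p):
    """Adjusted R^2 for n observations and p predictors."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# e.g. adjusted_r2(rsq, len(yAssum_train), XAssum_train.shape[1])
print(adjusted_r2(0.5, 100, 10))
```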
#### Assumption 2
```
# develop OLS with Sklearn
X = ttrain.drop('Final_Time_Hund', axis=1) # features
y = ttrain['Final_Time_Hund'] # target
lr = LinearRegression()
fit = lr.fit(X,y)
predict = fit.predict(X)
resid = y - predict
with sns.axes_style('white'):
    plt.figure(figsize=(10,6))
    plt.scatter(predict, resid, alpha=0.2)
    plt.xlabel('predict')
    plt.ylabel('resid')
```
## Model 0 - Linear Regression
Working with the training data that doesn't include the maiden-filly race.
```
horsey = ttrain
Xlin = X_ttrain
ylin = y_ttrain
```
#### Regplots
```
sns.regplot('Gender','Final_Time_Hund', data=horsey);
#Makes sense! Male horses tend to be a little faster.
sns.regplot('Firsts','Final_Time_Hund', data=horsey);
#Makes sense! Horses that have won more races tend to be faster.
sns.regplot('Seconds','Final_Time_Hund', data=horsey);
#Similar to the result for "firsts", but slightly less apparent.
sns.regplot('Thirds','Final_Time_Hund', data=horsey);
#Similar to the results above.
sns.regplot('PercentWin','Final_Time_Hund', data=horsey);
#Not a great correlation...
sns.regplot('Starts','Final_Time_Hund', data=horsey);
#This seems pretty uncorrelated...
sns.regplot('Date','Final_Time_Hund', data=horsey);
#Horses with more practice have faster times. But pretty uncorrelated...
sns.regplot('ThreeF','Final_Time_Hund', data=horsey);
#Really no correlation!
sns.regplot('FourF','Final_Time_Hund', data=horsey);
#Huh, not great either.
sns.regplot('FiveF','Final_Time_Hund', data=horsey);
#Slower practice time means slower finaltime. But yeah... pretty uncorrelated...
```
#### Correlations
```
horsey.corr()
%matplotlib inline
import matplotlib
matplotlib.rcParams["figure.figsize"] = (12, 10)
sns.heatmap(horsey.corr(), vmin=-1,vmax=1,annot=True, cmap='seismic');
```
Pretty terrible... but it seems like FiveF, Date, Gender and Percent win are the best... (in that order).
```
sns.pairplot(horsey, size = 1.2, aspect=1.5);
plt.hist(horsey.Final_Time_Hund);
```
#### Linear Regression (All inputs)
```
#Gotta add the constant... without it my r^2 was 1.0!
Xlin = sm.add_constant(Xlin)
#Creating the model
lin_model = sm.OLS(ylin,Xlin)
# Fitting the model to the training set
fit_lin = lin_model.fit()
# Print summary statistics of the model's performance
fit_lin.summary()
```
- r2 could be worse...
- adj r2 also could be worse...
- Inputs that seem significant based on p-value: Gender... that's about it! The next lowest are Firsts, Seconds and Date (though they're quite crappy). But I guess if 70% of the data lies within the confidence level... that's better than none...
** TESTING! **
```
Xlin = X_ttrain
ylin = y_ttrain
lr_train = LinearRegression()
lr_fit = lr_train.fit(Xlin, ylin)
r2_training = lr_train.score(Xlin, ylin)
r2adj_training = 1 - (1-r2_training)*(len(ylin)-1)/(len(ylin)-Xlin.shape[1]-1)
preds = lr_fit.predict(X_ttest)
rmse = np.sqrt(MSE(y_ttest, preds))
print('R2:', r2_training)
print('R2 Adjusted:', r2adj_training)
print('Output Predictions', preds)
print('RMSE:', rmse)
```
#### Linear Regression (Updated Inputs)
Below is the best combination of features to drop: Thirds, ThreeF & PercentWin
```
Xlin2 = Xlin.drop(labels ='Thirds', axis = 1).drop(labels ='ThreeF', axis = 1).drop(labels ='PercentWin', axis = 1)
ylin2 = y_ttrain
#Gotta add the constant... without it my r^2 was 1.0!
Xlin2 = sm.add_constant(Xlin2)
#Creating the model
lin_model = sm.OLS(ylin,Xlin2)
# Fitting the model to the training set
fit_lin = lin_model.fit()
# Print summary statistics of the model's performance
fit_lin.summary()
```
Slightly better...
## Model A - Elastic Net (no frills)
```
## Establishing x and y
XA = X_ttrain
yA = y_ttrain
#Checking the predictability of the model with this alpha = 1
modelA = ElasticNet(1, l1_ratio=.5)
fit = modelA.fit(XA, yA)
rsq = fit.score(XA, yA)
adj_rsq = 1 - (1-rsq)*(len(yA)-1)/(len(yA)-XA.shape[1]-1)
print(rsq)
print(adj_rsq)
```
** 0.3073 ** not great... but not terrible. 30% of the variance is explained by the model.
```
#Let's see if I play around with the ratios of L1 and L2
modelA = ElasticNet(1, l1_ratio=.2)
fit = modelA.fit(XA, yA)
rsq = fit.score(XA, yA)
adj_rsq = 1 - (1-rsq)*(len(yA)-1)/(len(yA)-XA.shape[1]-1)
print(rsq)
print(adj_rsq)
```
** Looks slightly worse. I guess there wasn't much need to compress complexity, or fix colinearity. **
```
#Let's check it in the other direction, with L1 getting more weight.
modelA = ElasticNet(1, l1_ratio=.98)
fit = modelA.fit(XA, yA)
rsq = fit.score(XA, yA)
adj_rsq = 1 - (1-rsq)*(len(yA)-1)/(len(yA)-XA.shape[1]-1)
print(rsq)
print(adj_rsq)
```
** Seems like l1 of 0.98 really takes the cake! Let's check out alpha... Might be worth it to switch to a
Lasso model... something to keep in mind**
```
#Let's see if we can find a better alpha...
kf = KFold(n_splits=5, shuffle = True, random_state = 40 )
alphas = [1e-9,1e-8,1e-7,1e-6,1e-5,1e-4,1e-3,1e-2,1e-1,1,10,100,1000,10000, 100000, 1000000]
#alphas = [0,.001,.01,.1,.2,.5,.9,1,5,10,50,100,1000,10000]
errors = []
for i in alphas:
    err_list = []
    for train_index, test_index in kf.split(XA):
        #print("TRAIN:", train_index, "TEST:", test_index) #This gives the index of the rows you're training and testing.
        XA_train, XA_test = XA.loc[train_index], XA.loc[test_index]
        yA_train, yA_test = yA[train_index], yA[test_index]
        ef = ElasticNet(i, l1_ratio = 0.5)
        ef.fit(XA_train,yA_train)
        #print(ef.coef_) #This prints the coefficients of each of the input variables.
        preds = ef.predict(XA_test) #Predictions for the y value.
        error = np.sqrt(MSE(preds,yA_test))
        err_list.append(error)
    error = np.mean(err_list)
    errors.append(error)
    print("The RMSE for alpha = {0} is {1}".format(i,error))
```
** Looks like the best alpha is around 1000! Let's see if we can get even more granular. **
```
kf = KFold(n_splits=5, shuffle = True, random_state = 40)
alphas = [500, 600, 800, 900, 1000, 1500, 2000, 3000]
#alphas = [0,.001,.01,.1,.2,.5,.9,1,5,10,50,100,1000,10000]
errors = []
for i in alphas:
    err_list = []
    for train_index, test_index in kf.split(XA):
        #print("TRAIN:", train_index, "TEST:", test_index) #This gives the index of the rows you're training and testing.
        XA_train, XA_test = XA.loc[train_index], XA.loc[test_index]
        yA_train, yA_test = yA[train_index], yA[test_index]
        ef = ElasticNet(i)
        ef.fit(XA_train,yA_train)
        #print(ef.coef_) #This prints the coefficients of each of the input variables.
        preds = ef.predict(XA_test) #Predictions for the y value.
        error = np.sqrt(MSE(preds,yA_test))
        err_list.append(error)
    error = np.mean(err_list)
    errors.append(error)
    print("The RMSE for alpha = {0} is {1}".format(i,error))
```
** I'm going to settle on an alpha of 800 **
```
#Checking the predictability of the model again with the new alpha of 800.
modelA = ElasticNet(alpha = 800)
fit = modelA.fit(XA, yA)
fit.score(XA, yA)
```
Hm. Not really sure what that did, but definitely didn't work...
** TESTING **
Doing ElasticNetCV (without any modifications)
```
## Letting it do its thing on its own.
encvA = ElasticNetCV()
fitA = encvA.fit(XA, yA)
r2_training = encvA.score(XA, yA)
y= np.trim_zeros(encvA.fit(XA,yA).coef_)
#r2adj_training = 1 - (1-r2_training)*(XA.shape[1]-1)/(XA.shape[1]-len(y)-1)
adj_rsq = 1 - (1-r2_training)*(len(XA)-1)/(len(XA)-XA.shape[1]-len(y)-1)
preds = fitA.predict(X_ttest)
rmse = np.sqrt(MSE(preds, y_ttest))
print('R2:', r2_training)
print('R2 Adjusted:', adj_rsq)
print('Output Predictions', preds)
print('RMSE:', rmse)
print('Alpha:',encvA.alpha_)
print('L1:',encvA.l1_ratio_)
print('Coefficients:',fitA.coef_)
elastic_coef = encvA.fit(XA, yA).coef_
_ = plt.bar(range(len(XA.columns)), elastic_coef)
_ = plt.xticks(range(len(XA.columns)), XA.columns, rotation=45)
_ = plt.ylabel('Coefficients')
plt.show()
```
Doing ElasticNet CV - changing the l1 ratio
```
encvA2 = ElasticNetCV(l1_ratio = .99)
fitA2 = encvA2.fit(XA, yA)
r2_training = encvA2.score(XA, yA)
y= np.trim_zeros(encvA2.fit(XA,yA).coef_)
adj_rsq = 1 - (1-r2_training)*(len(XA)-1)/(len(XA)-XA.shape[1]-len(y)-1)
preds = fitA2.predict(X_ttest)
rmse = np.sqrt(MSE(y_ttest, preds))
print('R2:', r2_training)
print('R2 Adjusted:', adj_rsq)
print('Output Predictions', preds)
print('RMSE:', rmse)
print('Alpha:',encvA2.alpha_)
print('L1:',encvA2.l1_ratio_)
print('Coefficients:',fitA.coef_)
elastic_coef = encvA2.fit(XA, yA).coef_
_ = plt.bar(range(len(XA.columns)), elastic_coef)
_ = plt.xticks(range(len(XA.columns)), XA.columns, rotation=45)
_ = plt.ylabel('Coefficients')
plt.show()
```
### Extras
```
## L1 is 0.98
encvA2 = ElasticNetCV(l1_ratio = 0.98)
fitA2 = encvA2.fit(XA_train, yA_train)
rsq = fitA2.score(XA_test, yA_test)
adj_rsq = 1 - (1-rsq)*(len(yA)-1)/(len(yA)-XA.shape[1]-1)
preds = fitA2.predict(XA_test)
mserror = np.sqrt(MSE(preds,yA_test))
print(rsq)
print(adj_rsq)
print(preds)
print(mserror)
print(encvA2.alpha_)
print(encvA2.l1_ratio_)
```
Still weird...
```
## Trying some alphas...
encvA3 = ElasticNetCV(alphas = [80,800,1000])
fitA3 = encvA3.fit(XA_train, yA_train)
rsq = fitA3.score(XA_test, yA_test)
adj_rsq = 1 - (1-rsq)*(len(yA)-1)/(len(yA)-XA.shape[1]-1)
preds = fitA3.predict(XA_test)
mserror = np.sqrt(MSE(preds,yA_test))
print(rsq)
print(adj_rsq)
print(preds)
print(mserror)
print(encvA3.alpha_)
print(encvA3.l1_ratio_)
```
Still confused...
## Model B - Elastic Net (polynomial transformation)
```
## Establishing x and y
XB = X_ttrain
yB = y_ttrain
ModelB = make_pipeline(PolynomialFeatures(2), LinearRegression())
fit = ModelB.fit(XB, yB)
rsq = fit.score(XB, yB)
adj_rsq = 1 - (1-rsq)*(len(yB)-1)/(len(yB)-XB.shape[1]-1)
print(rsq)
print(adj_rsq)
ModelB = make_pipeline(PolynomialFeatures(3), ElasticNetCV(l1_ratio = .5))
fit = ModelB.fit(XB, yB)
rsq = fit.score(XB, yB)
adj_rsq = 1 - (1-rsq)*(len(yB)-1)/(len(yB)-XB.shape[1]-1)
print(rsq)
print(adj_rsq)
```
... Hm ... Not great. But we'll test it anyway.
** TESTING **
```
encvB = make_pipeline(PolynomialFeatures(2), LinearRegression())
fitB = encvB.fit(XB, yB)
r2_test = encvB.score(X_ttest, y_ttest)
#y= np.trim_zeros(encvB.fit(XB,yB).coef_)
#r2adj_test = 1 - (1-r2_test)*(XB.shape[1]-1)/(XB.shape[1]-len(y)-1)
preds = fitB.predict(X_ttest)
rmse = np.sqrt(MSE(y_ttest, preds))
print('R2 (test):', r2_test)
print('Output Predictions', preds)
print('RMSE:', rmse)
#No alpha or l1 ratio to report here: this pipeline ends in LinearRegression, not ElasticNetCV
#Testing the predictability of the model with this alpha = 0.5
XB_train, XB_test, yB_train, yB_test = train_test_split(XB, yB, test_size=0.2)
modelB = make_pipeline(PolynomialFeatures(2), ElasticNetCV(l1_ratio = .5))
modelB.fit(XB_train, yB_train)
rsq = modelB.score(XB_train,yB_train)
adj_rsq = 1 - (1-rsq)*(len(yB_train)-1)/(len(yB_train)-XB_train.shape[1]-1)
preds = modelB.predict(XB_test)
mserror = np.sqrt(MSE(preds,yB_test))
print(rsq)
print(adj_rsq)
print(preds)
print(mserror)
print(modelB.named_steps.elasticnetcv.alpha_)
print(modelB.named_steps.elasticnetcv.l1_ratio_)
```
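A note on the named_steps lookups attempted above: when ElasticNetCV sits inside make_pipeline, the fitted hyper-parameters live on the named step, and make_pipeline names each step after its lowercased class. A minimal sketch on synthetic data:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import ElasticNetCV

rng = np.random.RandomState(0)
X = rng.rand(60, 2)
y = X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.01 * rng.randn(60)

pipe = make_pipeline(PolynomialFeatures(2), ElasticNetCV(l1_ratio=0.5, cv=5))
pipe.fit(X, y)
# make_pipeline names each step after its class, lowercased
enet = pipe.named_steps['elasticnetcv']
print('Alpha:', enet.alpha_)
print('L1 ratio:', enet.l1_ratio_)
```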
## Model C - Elastic Net CV with transformations
On second review, none of the inputs would benefit from transformations
```
C_train = ttrain
C_train['new_firsts_log']=np.log(C_train.Firsts)
C_train
#C_train.new_firsts_log.str.replace('-inf', '0')
```
## Predicting Today's Race!
```
todays_race = pd.read_csv('big_race_day').drop('Unnamed: 0', axis = 1).drop('Horse Name', axis =1)
## today_race acting as testing x
todays_race
```
### Maiden Fems Prediction
```
ym_train = MaidenFems['Final_Time_Hund']
xm_train = MaidenFems.drop('Final_Time_Hund',1).drop('Horse Name',1).drop('Final_Time',1)
enMaid = ElasticNetCV(l1_ratio=0.90)
fitMaid = enMaid.fit(xm_train, ym_train)
preds = fitMaid.predict(todays_race)
r2_training = enMaid.score(xm_train, ym_train)
y= np.trim_zeros(enMaid.fit(xm_train,ym_train).coef_)
adj_rsq = 1 - (1-r2_training)*(len(xm_train)-1)/(len(xm_train)-xm_train.shape[1]-len(y)-1)
print('Output Predictions', preds)
print('R2:', r2_training)
print('R2 Adjusted:', adj_rsq)
print('Alpha:',enMaid.alpha_)
print('L1:',enMaid.l1_ratio_)
print('Coefficients:',fitMaid.coef_)
elastic_coef = enMaid.fit(xm_train, ym_train).coef_
_ = plt.bar(range(len(xm_train.columns)), elastic_coef)
_ = plt.xticks(range(len(xm_train.columns)), xm_train.columns, rotation=45)
_ = plt.ylabel('Coefficients')
plt.show()
finalguesses_Maiden = [{'Horse Name': 'Lady Lemon Drop' ,'Maiden Horse Guess': 10116.53721999},
{'Horse Name': 'Curlins Prize' ,'Maiden Horse Guess': 10097.09521978},
{'Horse Name': 'Luminoso' ,'Maiden Horse Guess':10063.11500294},
{'Horse Name': 'Party Dancer' ,'Maiden Horse Guess': 10069.32339855},
{'Horse Name': 'Bring on the Band' ,'Maiden Horse Guess': 10054.64900894},
{'Horse Name': 'Rockin Ready' ,'Maiden Horse Guess': 10063.67940254},
{'Horse Name': 'Rattle' ,'Maiden Horse Guess': 10073.93665433},
{'Horse Name': 'Curlins Journey' ,'Maiden Horse Guess': 10072.45966259},
{'Horse Name': 'Heaven Escape' ,'Maiden Horse Guess':10092.43120946}]
```
### EN-CV prediction
```
encvL = ElasticNetCV(l1_ratio = 0.99)
fiten = encvL.fit(X_ttrain, y_ttrain)
preds = fiten.predict(todays_race)
r2_training = encvL.score(X_ttrain, y_ttrain)
y = np.trim_zeros(encvL.fit(X_ttrain,y_ttrain).coef_)
adj_rsq = 1 - (1-r2_training)*(len(X_ttrain)-1)/(len(X_ttrain)-X_ttrain.shape[1]-len(y)-1)
print('Output Predictions', preds)
print('R2:', r2_training)
print('R2 Adjusted:', adj_rsq)
print('Alpha:',encvL.alpha_)
print('L1:',encvL.l1_ratio_)
print('Coefficients:',fiten.coef_)
elastic_coef = encvL.fit(X_ttrain, y_ttrain).coef_
_ = plt.bar(range(len(X_ttrain.columns)), elastic_coef)
_ = plt.xticks(range(len(X_ttrain.columns)), X_ttrain.columns, rotation=45)
_ = plt.ylabel('Coefficients')
plt.show()
finalguesses_EN = [{'Horse Name': 'Lady Lemon Drop' ,'Guess': 9609.70585871},
{'Horse Name': 'Curlins Prize' ,'Guess': 9645.82659915},
{'Horse Name': 'Luminoso' ,'Guess':9558.93257549},
{'Horse Name': 'Party Dancer' ,'Guess': 9564.01963654},
{'Horse Name': 'Bring on the Band' ,'Guess': 9577.9212198},
{'Horse Name': 'Rockin Ready' ,'Guess': 9556.46879067},
{'Horse Name': 'Rattle' ,'Guess': 9549.09508205},
{'Horse Name': 'Curlins Journey' ,'Guess': 9546.58621572},
{'Horse Name': 'Heaven Escape' ,'Guess':9586.917829}]
```
### Linear Regression prediction
```
Xlin = X_ttrain
ylin = y_ttrain
lr = LinearRegression()
lrfit = lr.fit(Xlin, ylin)
preds = lrfit.predict(todays_race)
r2_training = lr.score(Xlin, ylin)
r2adj_training = 1 - (1-r2_training)*(len(ylin)-1)/(len(ylin)-Xlin.shape[1]-1)
print('Output Predictions', preds)
print('R2:', r2_training)
print('R2 Adjusted:', r2adj_training)
lin_coef = lrfit.coef_ # the model is already fitted above
_ = plt.bar(range(len(Xlin.columns)), lin_coef)
_ = plt.xticks(range(len(Xlin.columns)), Xlin.columns, rotation=45)
_ = plt.ylabel('Coefficients')
plt.show()
finalguesses_Lin = [{'Horse Name': 'Lady Lemon Drop' ,'Guess': 9720.65585682},
{'Horse Name': 'Curlins Prize' ,'Guess': 9746.17852003},
{'Horse Name': 'Luminoso' ,'Guess':9608.10444379},
{'Horse Name': 'Party Dancer' ,'Guess': 9633.58532183},
{'Horse Name': 'Bring on the Band' ,'Guess': 9621.04698335},
{'Horse Name': 'Rockin Ready' ,'Guess': 9561.82026773},
{'Horse Name': 'Rattle' ,'Guess': 9644.13062968},
{'Horse Name': 'Curlins Journey' ,'Guess': 9666.24092249},
{'Horse Name': 'Heaven Escape' ,'Guess':9700.56665335}]
```
### Setting the data frames
```
GuessLin = pd.DataFrame(finalguesses_Lin)
GuessMaid = pd.DataFrame(finalguesses_Maiden)
GuessEN = pd.DataFrame(finalguesses_EN)
GuessLin.sort_values('Guess')
GuessMaid.sort_values('Maiden Horse Guess')
GuessEN.sort_values('Guess')
```
```
%tensorflow_version 1.x
#Suppress warnings which keep poping up
import warnings
warnings.filterwarnings("ignore")
from keras import backend as K
import time
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
np.random.seed(2017)
from keras.models import Sequential
from keras.layers.convolutional import Convolution2D, MaxPooling2D, SeparableConvolution2D
from keras.layers import Activation, Flatten, Dense, Dropout, AveragePooling2D
from keras.layers.normalization import BatchNormalization
from keras.utils import np_utils
from keras.datasets import cifar10
(train_features, train_labels), (test_features, test_labels) = cifar10.load_data()
num_train, img_rows, img_cols, img_channels = train_features.shape # CIFAR-10 arrays are channels-last
num_test, _, _, _ = test_features.shape
num_classes = len(np.unique(train_labels))
print(train_features.shape)
class_names = ['airplane','automobile','bird','cat','deer',
'dog','frog','horse','ship','truck']
fig = plt.figure(figsize=(8,3))
for i in range(num_classes):
    ax = fig.add_subplot(2, 5, 1 + i, xticks=[], yticks=[])
    idx = np.where(train_labels[:]==i)[0]
    features_idx = train_features[idx,::]
    img_num = np.random.randint(features_idx.shape[0])
    im = features_idx[img_num]
    ax.set_title(class_names[i])
    plt.imshow(im)
plt.show()
def plot_model_history(model_history):
    fig, axs = plt.subplots(1,2,figsize=(15,5))
    # summarize history for accuracy
    axs[0].plot(range(1,len(model_history.history['acc'])+1),model_history.history['acc'])
    axs[0].plot(range(1,len(model_history.history['val_acc'])+1),model_history.history['val_acc'])
    axs[0].set_title('Model Accuracy')
    axs[0].set_ylabel('Accuracy')
    axs[0].set_xlabel('Epoch')
    axs[0].set_xticks(np.arange(1,len(model_history.history['acc'])+1),len(model_history.history['acc'])/10)
    axs[0].legend(['train', 'val'], loc='best')
    # summarize history for loss
    axs[1].plot(range(1,len(model_history.history['loss'])+1),model_history.history['loss'])
    axs[1].plot(range(1,len(model_history.history['val_loss'])+1),model_history.history['val_loss'])
    axs[1].set_title('Model Loss')
    axs[1].set_ylabel('Loss')
    axs[1].set_xlabel('Epoch')
    axs[1].set_xticks(np.arange(1,len(model_history.history['loss'])+1),len(model_history.history['loss'])/10)
    axs[1].legend(['train', 'val'], loc='best')
    plt.show()
def accuracy(test_x, test_y, model):
    result = model.predict(test_x)
    predicted_class = np.argmax(result, axis=1)
    true_class = np.argmax(test_y, axis=1)
    num_correct = np.sum(predicted_class == true_class)
    accuracy = float(num_correct)/result.shape[0]
    return (accuracy * 100)
train_features = train_features.astype('float32')/255
test_features = test_features.astype('float32')/255
# convert class labels to binary class labels
train_labels = np_utils.to_categorical(train_labels, num_classes)
test_labels = np_utils.to_categorical(test_labels, num_classes)
# Define the model
model = Sequential()
model.add(Convolution2D(48, 3, 3, border_mode='same', input_shape=(32, 32, 3)))
model.add(Activation('relu'))
model.add(Convolution2D(48, 3, 3))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Convolution2D(96, 3, 3, border_mode='same'))
model.add(Activation('relu'))
model.add(Convolution2D(96, 3, 3))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Convolution2D(192, 3, 3, border_mode='same'))
model.add(Activation('relu'))
model.add(Convolution2D(192, 3, 3))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(256))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
# Compile the model
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()
from keras.preprocessing.image import ImageDataGenerator
datagen = ImageDataGenerator(zoom_range=0.0,
rotation_range=15,
width_shift_range=0.1,
height_shift_range=0.1,
horizontal_flip=True)
# train the model
start = time.time()
# Train the model
model_info = model.fit_generator(datagen.flow(train_features, train_labels, batch_size = 128),
samples_per_epoch = train_features.shape[0], nb_epoch = 50,
validation_data = (test_features, test_labels), verbose=1)
end = time.time()
print ("Model took %0.2f seconds to train"%(end - start))
# plot model history
plot_model_history(model_info)
# compute test accuracy
print ("Accuracy on test data is: %0.2f"%accuracy(test_features, test_labels, model))
from keras.callbacks import LearningRateScheduler
def scheduler(epoch, lr):
    lrate = 0.001
    # if epoch > 15:
    #     lrate = 0.0003
    if epoch > 35:
        lrate = 0.0005
    if epoch > 60:
        lrate = 0.0003
    if epoch > 100:
        lrate = 0.0001
    return lrate
    # return round(0.003 * 1/(1 + 0.319 * epoch), 10)
# Define the model
my_model = Sequential()
my_model.add(SeparableConvolution2D(96, 3, 3, border_mode='same', input_shape=(32, 32, 3), activation='relu')) # 32*32*96
my_model.add(BatchNormalization())
my_model.add(Dropout(0.1))
my_model.add(SeparableConvolution2D(96, 3, 3, border_mode='valid', activation='relu')) # 30*30*96
my_model.add(BatchNormalization())
my_model.add(Dropout(0.1))
my_model.add(MaxPooling2D(pool_size=(2, 2))) # 15*15*96
my_model.add(Dropout(0.1))
my_model.add(SeparableConvolution2D(192, 3, 3, border_mode='same', activation='relu')) # 15*15*192
my_model.add(BatchNormalization())
my_model.add(Dropout(0.1))
my_model.add(SeparableConvolution2D(192, 3, 3, border_mode='valid', activation='relu')) # 13*13*192
my_model.add(BatchNormalization())
my_model.add(Dropout(0.1))
my_model.add(MaxPooling2D(pool_size=(2, 2))) # 6*6*192
my_model.add(Dropout(0.1))
my_model.add(SeparableConvolution2D(96, 3, 3, border_mode='same', activation='relu')) # 6*6*96
my_model.add(BatchNormalization())
my_model.add(Dropout(0.1))
my_model.add(SeparableConvolution2D(48, 3, 3, border_mode='valid', activation='relu')) # 4*4*48
my_model.add(BatchNormalization())
my_model.add(Dropout(0.1))
my_model.add(AveragePooling2D())
my_model.add(Flatten())
my_model.add(Dense(num_classes, activation='softmax'))
# Compile the model
my_model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
my_model.summary()
# train the model
my_start = time.time()
# Train the model
my_model_info = my_model.fit_generator(datagen.flow(train_features, train_labels, batch_size = 128),
samples_per_epoch = train_features.shape[0], nb_epoch = 50,
callbacks=[LearningRateScheduler(scheduler,verbose=1)],
validation_data = (test_features, test_labels), verbose=1)
my_end = time.time()
print ("Model took %0.2f seconds to train"%(my_end - my_start))
# plot model history
plot_model_history(my_model_info)
# compute test accuracy
print ("Accuracy on test data is: %0.2f"%accuracy(test_features, test_labels, my_model))
```
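A quick aside on why the second model swaps Convolution2D for SeparableConvolution2D: a depthwise-separable 3x3 convolution needs k·k·C_in (depthwise) plus C_in·C_out (pointwise) weights instead of k·k·C_in·C_out. A back-of-the-envelope comparison for the channel sizes used above (bias terms ignored):

```python
def conv_params(k, c_in, c_out):
    # Standard convolution: one k x k x c_in kernel per output channel
    return k * k * c_in * c_out

def sep_conv_params(k, c_in, c_out):
    # Depthwise (one k x k kernel per input channel) + pointwise 1x1 mixing
    return k * k * c_in + c_in * c_out

for c_in, c_out in [(96, 96), (96, 192), (192, 192)]:
    std = conv_params(3, c_in, c_out)
    sep = sep_conv_params(3, c_in, c_out)
    print(f"{c_in}->{c_out}: standard {std}, separable {sep} ({std/sep:.1f}x fewer)")
```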
```
from sympy import pi, cos, sin, symbols
from sympy.utilities.lambdify import implemented_function
import pytest
from sympde.calculus import grad, dot
from sympde.calculus import laplace
from sympde.topology import ScalarFunctionSpace
from sympde.topology import element_of
from sympde.topology import NormalVector
from sympde.topology import Square
from sympde.topology import Union
from sympde.expr import BilinearForm, LinearForm, integral
from sympde.expr import Norm
from sympde.expr import find, EssentialBC
from sympde.expr.expr import linearize
from psydac.fem.basic import FemField
from psydac.api.discretization import discretize
x,y,z = symbols('x1, x2, x3')
```
# Non-Linear Poisson in 2D
In this section, we consider the non-linear Poisson problem:
$$
-\nabla \cdot \left( (1+u^2) \nabla u \right) = f \quad \mbox{in } \Omega
\\
u = 0 \quad \mbox{on } \partial \Omega
$$
where $\Omega$ denotes the unit square.
For testing, we shall take a function $u$ that fulfills the boundary condition, then compute $f$ as
$$
f = -\nabla \cdot \left( (1+u^2) \nabla u \right)
$$
The weak formulation is
$$
\int_{\Omega} (1+u^2) \nabla u \cdot \nabla v ~ d\Omega = \int_{\Omega} f v ~d\Omega, \quad \forall v \in \mathcal{V}
$$
For the sake of generality, we shall consider the linear form
$$
G(v;u,w) := \int_{\Omega} (1+w^2) \nabla u \cdot \nabla v ~ d\Omega, \quad \forall u,v,w \in \mathcal{V}
$$
Our problem is then
$$
\mbox{Find } u \in \mathcal{V}, \mbox{such that}\\
G(v;u,u) = l(v), \quad \forall v \in \mathcal{V}
$$
where
$$
l(v) := \int_{\Omega} f v ~d\Omega, \quad \forall v \in \mathcal{V}
$$
#### Topological domain
```
domain = Square()
B_dirichlet_0 = domain.boundary
```
#### Function Space
```
V = ScalarFunctionSpace('V', domain)
```
#### Defining the Linear form $G$
```
u = element_of(V, name='u')
v = element_of(V, name='v')
w = element_of(V, name='w')
# Linear form g: V --> R
g = LinearForm(v, integral(domain, (1+w**2)*dot(grad(u), grad(v))))
```
#### Defining the Linear form L
```
solution = sin(pi*x)*sin(pi*y)
f = 2*pi**2*(sin(pi*x)**2*sin(pi*y)**2 + 1)*sin(pi*x)*sin(pi*y) - 2*pi**2*sin(pi*x)**3*sin(pi*y)*cos(pi*y)**2 - 2*pi**2*sin(pi*x)*sin(pi*y)**3*cos(pi*x)**2
# Linear form l: V --> R
l = LinearForm(v, integral(domain, f * v))
```
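As a sanity check, independent of SymPDE (a plain-Python sketch using central finite differences), we can verify that the explicit expression for $f$ above equals $-\nabla \cdot \left( (1+u^2) \nabla u \right)$ for the manufactured solution:

```python
import math

def u(xv, yv):
    # manufactured solution u = sin(pi x) sin(pi y)
    return math.sin(math.pi * xv) * math.sin(math.pi * yv)

def f_manufactured(xv, yv, h=1e-4):
    # -div((1 + u^2) grad u), approximated with central finite differences
    def flux_x(a, b):
        du = (u(a + h, b) - u(a - h, b)) / (2 * h)
        return (1 + u(a, b) ** 2) * du
    def flux_y(a, b):
        du = (u(a, b + h) - u(a, b - h)) / (2 * h)
        return (1 + u(a, b) ** 2) * du
    dfx = (flux_x(xv + h, yv) - flux_x(xv - h, yv)) / (2 * h)
    dfy = (flux_y(xv, yv + h) - flux_y(xv, yv - h)) / (2 * h)
    return -(dfx + dfy)

def f_given(xv, yv):
    # the closed-form right-hand side used in the LinearForm above
    pi, s, c = math.pi, math.sin, math.cos
    return (2 * pi**2 * (s(pi * xv)**2 * s(pi * yv)**2 + 1) * s(pi * xv) * s(pi * yv)
            - 2 * pi**2 * s(pi * xv)**3 * s(pi * yv) * c(pi * yv)**2
            - 2 * pi**2 * s(pi * xv) * s(pi * yv)**3 * c(pi * xv)**2)

for point in [(0.3, 0.7), (0.5, 0.25), (0.1, 0.9)]:
    assert abs(f_manufactured(*point) - f_given(*point)) < 1e-3
```

The agreement at a few interior points confirms the right-hand side is consistent with the strong form of the PDE.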
### Picard Method
$$
\mbox{Find } u_{n+1} \in \mathcal{V}_h, \mbox{such that}\\
G(v;u_{n+1},u_n) = l(v), \quad \forall v \in \mathcal{V}_h
$$
### Newton Method
Let's define
$$
F(v;u) := G(v;u,u) -l(v), \quad \forall v \in \mathcal{V}
$$
Newton method writes
$$
\mbox{Find } u_{n+1} \in \mathcal{V}_h, \mbox{such that}\\
F^{\prime}(\delta u,v; u_n) = - F(v;u_n), \quad \forall v \in \mathcal{V} \\
u_{n+1} := u_{n} + \delta u, \quad \delta u \in \mathcal{V}
$$
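Before setting this up symbolically, a scalar analogue may help fix ideas (a sketch with a made-up equation $(1+u^2)\,u = 1$, i.e. $u^3 + u - 1 = 0$, not part of the SymPDE code): Picard freezes the nonlinear coefficient at the previous iterate, while Newton solves the linearized problem for an update $\delta u$.

```python
# Toy analogue of (1 + u^2) u = f with f = 1, i.e. u^3 + u - 1 = 0
def residual(u):
    return u**3 + u - 1

def picard(niter=50):
    # freeze the coefficient: (1 + u_n^2) u_{n+1} = 1
    u = 0.0
    for _ in range(niter):
        u = 1.0 / (1.0 + u**2)
    return u

def newton(niter=10):
    # F'(u_n) du = -F(u_n), then u_{n+1} = u_n + du
    u = 1.0
    for _ in range(niter):
        du = -residual(u) / (3 * u**2 + 1)
        u += du
    return u

print(picard(), newton())  # both approach the root near 0.6823
```

Picard converges linearly, Newton quadratically; the same qualitative behavior carries over to the discretized PDE below.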
#### Computing $F^{\prime}$ the derivative of $F$
**SymPDE** allows you to linearize a linear form and get a bilinear form, using the function **linearize**
```
F = LinearForm(v, g(v,w=u)-l(v))
du = element_of(V, name='du')
Fprime = linearize(F, u, trials=du)
```
## Picard Method
#### Abstract Model
```
un = element_of(V, name='un')
# Bilinear form a: V x V --> R
a = BilinearForm((u, v), g(v, u=u,w=un))
# Dirichlet boundary conditions
bc = [EssentialBC(u, 0, B_dirichlet_0)]
# Variational problem
equation = find(u, forall=v, lhs=a(u, v), rhs=l(v), bc=bc)
# Error norms
error = u - solution
l2norm = Norm(error, domain, kind='l2')
```
#### Discretization
```
# Create computational domain from topological domain
domain_h = discretize(domain, ncells=[16,16], comm=None)
# Discrete spaces
Vh = discretize(V, domain_h, degree=[2,2])
# Discretize equation using Dirichlet bc
equation_h = discretize(equation, domain_h, [Vh, Vh])
# Discretize error norms
l2norm_h = discretize(l2norm, domain_h, Vh)
```
#### Picard solver
```
def picard(niter=10):
Un = FemField( Vh, Vh.vector_space.zeros() )
for i in range(niter):
Un = equation_h.solve(un=Un)
# Compute error norms
l2_error = l2norm_h.assemble(u=Un)
print('l2_error = ', l2_error)
return Un
Un = picard(niter=5)
from matplotlib import pyplot as plt
from utilities.plot import plot_field_2d
nbasis = [w.nbasis for w in Vh.spaces]
p1,p2 = Vh.degree
x = Un.coeffs._data[p1:-p1,p2:-p2]
u = x.reshape(nbasis)
plot_field_2d(Vh.knots, Vh.degree, u) ; plt.colorbar()
```
## Newton Method
#### Abstract Model
```
# Dirichlet boundary conditions
bc = [EssentialBC(du, 0, B_dirichlet_0)]
# Variational problem
equation = find(du, forall=v, lhs=Fprime(du, v,u=un), rhs=-F(v,u=un), bc=bc)
```
#### Discretization
```
# Create computational domain from topological domain
domain_h = discretize(domain, ncells=[16,16], comm=None)
# Discrete spaces
Vh = discretize(V, domain_h, degree=[2,2])
# Discretize equation using Dirichlet bc
equation_h = discretize(equation, domain_h, [Vh, Vh])
# Discretize error norms
l2norm_h = discretize(l2norm, domain_h, Vh)
```
#### Newton Solver
```
def newton(niter=10):
Un = FemField( Vh, Vh.vector_space.zeros() )
for i in range(niter):
delta_x = equation_h.solve(un=Un)
Un = FemField( Vh, delta_x.coeffs + Un.coeffs )
# Compute error norms
l2_error = l2norm_h.assemble(u=Un)
print('l2_error = ', l2_error)
return Un
un = newton(niter=5)
nbasis = [w.nbasis for w in Vh.spaces]
p1,p2 = Vh.degree
x = un.coeffs._data[p1:-p1,p2:-p2]
u = x.reshape(nbasis)
plot_field_2d(Vh.knots, Vh.degree, u) ; plt.colorbar()
```
| github_jupyter |
```
import re
```
The re module uses a backtracking regular expression engine.
Regular expressions match text patterns.
Use case examples:
- Check if an email or phone number was written correctly.
- Split text by some mark (comma, dot, newline), which may be useful to parse data.
- Get content from HTML tags.
- Improve your Linux command-line skills.
However ...
>Some people, when confronted with a problem, think "I know, I'll use regular expressions". Now they have two problems - Jamie Zawinski, 1997
## **Python String**
\begin{array}{ccc}
\hline Type & Prefixed & Description \\\hline
\text{String} & - & \text{They are string literals. They're Unicode. The backslash is
necessary to escape meaningful characters.} \\
\text{Raw String} & \text{r or R} & \text{They're equal to literal strings with the exception of the
backslashes, which are treated as normal characters.} \\
\text{Byte String} & \text{b or B} & \text{Strings represented as bytes. They can only contain ASCII
characters; if the byte is greater than 128, it must be escaped.} \\
\end{array}
```
#Normal String
print("feijão com\t limão")
#Raw String
print(r"feijão com\t limão")
#Byte String
print(b"feij\xc3\xa3o com\t lim\xc3\xa3o")
str(b"feij\xc3\xa3o com\t lim\xc3\xa3o", "utf-8")
```
## **General**
Our building blocks are composed of:
- Literals
- Metacharacter
- Backslash: \\
- Caret: \^
- Dollar Sign: \$
- Dot: \.
- Pipe Symbol: \|
- Question Mark: \?
- Asterisk: \*
- Plus sign: \+
- Opening parenthesis: \(
- Closing parenthesis: \)
- Opening square bracket: \[
- The opening curly brace: \{
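Each metacharacter can be matched literally by escaping it with a backslash; a quick illustration (the strings here are made up):

```python
import re

price = "Total: $4.99 (incl. tax)"

# Unescaped, '.' matches any character except newline ...
assert re.search(r"4.99", "4x99") is not None
# ... while the escaped '\.' only matches a literal dot
assert re.search(r"4\.99", "4x99") is None
# '$' and '(' must be escaped too, or they act as anchor / group
assert re.search(r"\$4\.99", price).group() == "$4.99"
```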
### **Literals**
```
"""
version 1: with compile
"""
def areYouHungry_v1(pattern, text):
match = pattern.search(text)
if match: print("HERE !!!\n")
else: print("Sorry pal, you'll starve to death.\n")
helloWorldRegex = r"rodrigo"
pattern = re.compile(helloWorldRegex)
text1 = r"Where can I find food here? - rodrigo"
text2 = r"Where can I find food here? - Rodrigo"
areYouHungry_v1(pattern,text1)
areYouHungry_v1(pattern,text2)
"""
version 2: without compile
"""
def areYouHungry_v2(regex, text):
match = re.search(regex, text)
if match: print("HERE !!!\n")
else: print("Sorry pal, you'll starve to death.\n")
helloWorldRegex = r"rodrigo"
text1 = r"Where can I find food here? - rodrigo"
text2 = r"Where can I find food here? - Rodrigo"
areYouHungry_v2(helloWorldRegex, text1)
areYouHungry_v2(helloWorldRegex, text2)
```
### **Character classes**
```
"""
version 3: classes
"""
def areYouHungry_v3(pattern, text):
match = pattern.search(text)
if match: print("Beer is also food !!\n")
else: print("Sorry pal, you'll starve to death.\n")
helloWorldRegex = r"[rR]odrigo"
pattern = re.compile(helloWorldRegex)
text1 = r"Where can I find food here? - rodrigo"
text2 = r"Where can I find food here? - Rodrigo"
areYouHungry_v3(pattern,text1)
areYouHungry_v3(pattern,text2)
```
Usual Classes:
- [0-9]: Matches anything between 0 and 9.
- [a-z]: Matches anything between a and z.
- [A-Z]: Matches anything between A and Z.
Predefined Classes:
- **.** : Matches any character except newline (use **\.** for a literal dot).
- Lower Case classes:
- \d : Same as [0-9].
- \s : Same as [ \t\n\r\f\v] the first character of the class is the whitespace character.
- \w : Same as [a-zA-Z0-9_] the last character of the class is the underscore character.
- Upper Case classes (the negation):
- \D : Matches non decimal digit, same as [^0-9].
- \S : Matches any non whitespace character [^ \t\n\r\f\v].
- \W : Matches any non alphanumeric character [^a-zA-Z0-9_] .
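A quick demonstration of the predefined classes on a made-up string:

```python
import re

sample = "Order #42 shipped on 2021-07-15\tto: rodrigo_77"

print(re.findall(r"\d+", sample))   # runs of digits (\d)
print(re.findall(r"\w+", sample))   # runs of word characters (letters, digits, _)
print(re.findall(r"\S+", sample))   # runs of non-whitespace characters
```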
Both versions do the same thing ...
The re module keeps a cache of recently compiled regexes, so you do not need to compile the regex every time you call the function (a technique called memoization).
The first version just gives you finer control ...
```pattern``` is a **re.Pattern** object, which has a lot of methods. Let's find out which methods are there, using a regular expression !!
```
helloWorldRegex = r"[rR]odrigo"
pattern = re.compile(helloWorldRegex)
patternText = "\n".join(dir(pattern))
patternText
#Regex for names that do not start with “__”
pattern_list_methods = set(re.findall(r"^(?!__).*$", patternText, re.M))
to_delete = ["fullmatch", "groupindex", "pattern", "scanner"]
pattern_list_methods.difference_update(to_delete)
print(pattern_list_methods)
```
- RegexObject: It is also known as Pattern Object. It represents a compiled regular expression
- MatchObject: It represents the matched pattern
### Regex Behavior
```
def isGotcha(match):
if match: print("Found it")
else: print("None here")
data = "aaabbbccc"
match1 = re.match("\w+", data)
isGotcha(match1)
match2 = re.match("bbb",data)
isGotcha(match2)
match3 = re.search("bbb",data)
isGotcha(match3)
```
\begin{array}{rrr}
\hline \text{match1} & \text{match2} & \text{match3}\\\hline
\text{aaabbbccc} & \text{aaabbbccc} & \text{aaabbbccc}\\
\text{aabbbccc} & \text{returns None} & \text{aabbbccc}\\
\text{abbbccc} & - & \text{abbbccc}\\
\text{bbbccc} & - & \text{bbbccc}\\
\text{bbccc} & - & \text{bbccc}\\
\text{bccc} & - & \text{bccc}\\
\text{ccc} & - & \text{returns Match}\\
\text{cc} & - & - \\
\text{c} & - & - \\
\text{returns None} & - & -
\end{array}
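The table above can be condensed into a few assertions; ```fullmatch``` (available since Python 3.4) completes the picture by requiring the pattern to consume the entire string:

```python
import re

data = "aaabbbccc"

assert re.match(r"\w+", data) is not None      # match anchors at position 0
assert re.match(r"bbb", data) is None          # ... so 'bbb' fails at the start
assert re.search(r"bbb", data) is not None     # search scans the whole string
assert re.fullmatch(r"\w+", data) is not None  # fullmatch must consume everything
assert re.fullmatch(r"bbb", data) is None
```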
### Greedy Behavior
```
text = "<b>foo</b> and <i>so on</i>"
match = re.match("<.*>",text)
print(match)
print(match.group())
text = "<b>foo</b> and <i>so on</i>"
match = re.match("<.*?>",text)
print(match)
print(match.group())
```
The non-greedy behavior can be requested by adding an extra question mark to the quantifier; for example, ??, *? or +?. A quantifier marked as reluctant will behave like the exact opposite of the greedy ones. They will try to have the smallest match possible.
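The same contrast shows up with ```findall```: the greedy pattern produces a single over-long match, while the reluctant one recovers each tag:

```python
import re

text = "<b>foo</b> and <i>so on</i>"

greedy = re.findall(r"<.*>", text)   # one match from the first '<' to the last '>'
lazy = re.findall(r"<.*?>", text)    # the four individual tags

print(greedy)
print(lazy)
```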
## **Problem 1** - Phone Number
### **Search**
```
def isThere_v1(regexObject, text):
if regexObject: return f"Your number is: {regexObject.group()}!"
else: return "Hey! I did not find it."
text = """ 9-96379889
96379889
996379889
9-9637-9889
42246889
4224-6889
99637 9889
9 96379889
"""
#The first character is not a number, but a whitespace.
regex1 = re.search(r"\d?", text)
#Removing the whitespace character we find the number! The ? operator means optional.
regex2 = re.search(r"\d?", text.strip())
#Then, an optional whitespace or - may appear. We also get two decimal characters with \d\d.
regex3 = re.search(r"\d?-?\d\d", text.strip())
#However we want more than one decimal character. This can be achieved by using the + operator.
regex4 = re.search(r"\d?-?\d+", text.strip())
#Anchoring at the end of the string with $.
regex5 = re.search(r"\d?-?\d+$", text.strip())
#Using a class to match - or whitespace.
regex6 = re.search(r"\d?[-\s]?\d+$", text.strip())
regex_lst = [regex1, regex2, regex3, regex4, regex5, regex6]
for index, regex in enumerate(regex_lst):
print(f"Regex Number {index+1}")
print(isThere_v1(regex,text) + "\n")
```
### **Findall**
```
def isThere_v2(regexObject, text):
if regexObject: return f"Wow, phone numbers:\n{regexObject} !"
else: return "Hey! I did not find it."
text = """ 996349889
96359889
9-96349889
9-9634-9889
42256889
4225-6889
99634 9889
9 96349889
"""
#findall looks for every possible match.
regex7 = re.findall(r"\d?[-\s]?\d+", text)
"""
Why is [... ' 9', '-96349889' ...] split?
Step1: \d? is not consumed.
Step2: [-\s]? the whitespace is consumed.
Step3: \d+ consumes 9 and stops due to the - character.
Therefore ' 9' is recognized.
"""
regex8 = re.findall(r"\d?[-\s]?\d+[-\s]?\d+", text.strip())
"""
Why is [... ' 9-9634', '-9889' ...] split?
Step1: \d? is consumed.
Step2: [-\s]? is consumed.
Step3: \d+ consumes until the - character.
Step4: [-\s]? is not consumed.
Step5: \d+ is skipped because the first digits were consumed in Step3.
Therefore ' 9-9634' is recognized.
"""
#Adds a restriction of 4 digits in the first part.
regex9 = re.findall(r"\d?[-\s]?\d{4}[-\s]?\d+", text.strip())
#Adds a restriction of 4 digits in the second part, forcing a number after the whitespace.
regex10 = re.findall(r"\d?[-\s]?\d{4}[-\s]?\d{4}", text.strip())
regex_lst = [regex7, regex8, regex9, regex10]
for index, regex in enumerate(regex_lst):
print(f"Regex Number {index+7}")
print(isThere_v2(regex,text) + "\n")
text_dirty = r"""996379889
96379889
9-96379889
9-9637-9889
42246889
4224-6889
99637 9889
9 96379889
777 777 777
90 329921 0
9999999999 9
8588588899436
"""
#Regex 10
regex_dirty1 = re.findall(r"\d?[-\s]?\d{4}[-\s]?\d{4}", text_dirty.strip())
#Adding Negative look behind and negative look ahead
regex_dirty2 = re.findall(r"(?<!\d)\d?[-\s]?\d{4}[-\s]?\d{4}(?!\d)", text_dirty.strip())
#\b is a word boundary, which depends on the surrounding context.
regex_dirty3 = re.findall(r"\b\d?[-\s]?\d{4}[-\s]?\d{4}\b", text_dirty.strip())
regex_dirty_lst = [regex_dirty1, regex_dirty2, regex_dirty3]
for index, result in enumerate(map(lambda x: isThere_v2(x,text_dirty), regex_dirty_lst)):
print(f"Regex Dirty Number {index+1}")
print(result + "\n")
```
### **Finditer**
```finditer``` is lazy: it returns an iterator that yields one ```Match``` object at a time instead of building the whole list up front !!
```
real_text_example = """
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Duis viverra consectetur sodales. Vestibulum consequat,
risus in sollicitudin imperdiet, velit 996379889 elit congue sem, vitae aliquet ligula mi eget justo. Nulla facilisi.
Maecenas a egestas nisi. Morbi purus dolor, ornare ac dui a, eleifend dignissim nunc. Proin pellentesque dolor non lectus pellentesque
tincidunt. Ut et 345.323.343-9 tempus orci. Duis molestie 9 96379889 cursus tortor vitae pretium. 4224-6889 Donec non sapien neque. Pellentesque urna ligula, finibus a lectus sit amet
, ultricies cursus metus. Quisque eget orci et turpis faucibus 4224-6889 pharetra.
"""
match_generator = re.finditer(r'\b((?:9[ -]?)?\d{4}[ \-]?\d{4})\b', real_text_example.strip())
for match in match_generator:
print(f"Phone Number: {match.group()}\nText Position: {match.span()}\n")
```
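Laziness is easy to observe on a tiny made-up string: only the first match is computed when you ask for it, and consuming the iterator yields the remainder:

```python
import re

it = re.finditer(r"\d+", "a1 b22 c333")
first = next(it)                 # only the first match has been computed so far
rest = [m.group() for m in it]   # consuming the iterator yields the rest

print(first.group(), rest)
```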
## **Problem 2** - Format Text
### **Groups**
```
email_text = "hey my email is: localPart@domain"
```
Using the parenthesis it is possible to capture a group:
```
match = re.search("(\w+)@(\w+)", email_text)
print(match.group(1))
print(match.group(2))
```
Using the following syntax it is possible to give a name to the group:
```?P<name>pattern```
```
match = re.search("(?P<localPart>\w+)@(?P<domain>\w+)", email_text)
print(match.group("localPart"))
print(match.group("domain"))
```
### **Sub**
Suppose a text with the following structure:
```
time - day | usage | id, description \n
```
Ideally, a single unique separator would be used ... however, life is tough.
```
my_txt = r"""
20:18:14 - 21/01 | 0.65 | 3947kedj, No |dia| em que eu saí de casa
25:32:26 - 11/07 | 0.80 | 5679lqui, Minha mãe me disse: |filho|, vem cá
12:13:00 - 12/06 | 0.65 | 5249dqok, Passou a mão em meu cabelos
23:12:35 - 13/03 | 0.77 | 3434afdf, Olhou em meus |olhos|, começou falar
20:22:00 - 12/02 | 0.98 | 1111absd, We are the champions, my friends
22:12:00 - 07/03 | 0.65 | 4092bvds, And we'll keep on |fighting| till the end
22:52:59 - 30/02 | 0.41 | 9021poij, We are the |champions|, we are the champions
21:47:00 - 28/03 | 0.15 | 6342fdpo, No time for |losers|, 'cause we are the champions
19:19:00 - 31/08 | 0.30 | 2314qwen, of the |world|
00:22:21 - 99/99 | 0.00 | 0000aaaa,
"""
print(my_txt)
#\g<name> to reference a group.
pattern = re.compile(r"(?P<time>\d{2}:\d{2}:\d{2}) - (?P<day>\d{2}/\d{2})")
text = pattern.sub(r"\g<day> - \g<time>", my_txt)
print(text)
#pattern new_text texto
"""
Replaces an optional whitespace followed by the - character with a comma (,).
"""
text = re.sub(r"\s?-", ',', my_txt)
print(text)
#pattern new_text texto
"""
Do not forget to escape meaningful characters :)
Here the dot character is escaped; however, the pipe character is not :(
"""
pattern = r"(?P<time>\d{2}:\d{2}:\d{2}) - (?P<day>\d{2}/\d{2}) | (?P<usage>\d\.\d{2}) | (?P<id>\d{4}\w{4})"
new_text = r"\g<time>, \g<day>, \g<usage>, \g<id>"
text = re.sub(pattern, new_text, my_txt)
print(text)
#pattern new_text texto
pattern = r"(?P<time>\d{2}:\d{2}:\d{2}) - (?P<day>\d{2}/\d{2}) \| (?P<usage>\d\.\d{2}) \| (?P<id>\d{4}\w{4})"
new_text = r"\g<time>, \g<day>, \g<usage>, \g<id>"
text = re.sub(pattern, new_text, my_txt)
print(text)
```
### **Subn**
Similar to ```sub```. It returns a tuple with the new string and the number of substitutions made.
```
#pattern new_text texto
pattern = r"\|"
new_text = ""
clean_txt, mistakes_count = re.subn(pattern, new_text, text)
print(f"Clean Text Result:\n{clean_txt}")
print(f"How many mistakes did I make it?\n{mistakes_count}")
```
### **Groupdict**
```
#pattern new_text texto
"""
Do not forget to escape meaningful characters :)
Here both the dot character and the pipe character are escaped :)
"""
pattern = r"(?P<time>\d{2}:\d{2}:\d{2}) - (?P<day>\d{2}/\d{2}) \| (?P<usage>\d\.\d{2}) \| (?P<id>\d{4}\w{4})"
matchs = re.finditer(pattern, my_txt)
for match in matchs:
print(match.groupdict())
```
## **Performance**
>Programmers waste enormous amounts of time thinking about, or worrying
about, the speed of noncritical parts of their programs, and these attempts at
efficiency actually have a strong negative impact when debugging and maintenance
are considered. We should forget about small efficiencies, say about 97% of the
time: premature optimization is the root of all evil. Yet we should not pass up our
opportunities in that critical 3%. - Donald Knuth
General:
- Don't be greedy.
- Reuse compiled patterns.
- Be specific.
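The 'reuse compiled patterns' tip can be sanity-checked with ```timeit``` (absolute numbers vary by machine; the point is the relative per-call cost of the module-level cache lookup):

```python
import re
import timeit

PATTERN = re.compile(r"\d{4}-\d{4}")
TEXT = "call 4224-6889 today"

t_compiled = timeit.timeit(lambda: PATTERN.search(TEXT), number=10_000)
t_module = timeit.timeit(lambda: re.search(r"\d{4}-\d{4}", TEXT), number=10_000)

print(f"compiled: {t_compiled:.4f}s  module-level: {t_module:.4f}s")
```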
## References
Book:
- Mastering Python Regular Expressions (PACKT) - by: Félix López Víctor Romero
Links:
- https://developers.google.com/edu/python/regular-expressions
| github_jupyter |
<table> <tr>
<td style="background-color:#ffffff;">
<a href="http://qworld.lu.lv" target="_blank"><img src="..\images\qworld.jpg" width="25%" align="left"> </a></td>
<td style="background-color:#ffffff;vertical-align:bottom;text-align:right;">
prepared by <a href="http://abu.lu.lv" target="_blank">Abuzer Yakaryilmaz</a> (<a href="http://qworld.lu.lv/index.php/qlatvia/" target="_blank">QLatvia</a>)
</td>
</tr></table>
<table width="100%"><tr><td style="color:#bbbbbb;background-color:#ffffff;font-size:11px;font-style:italic;text-align:right;">This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros. </td></tr></table>
$ \newcommand{\bra}[1]{\langle #1|} $
$ \newcommand{\ket}[1]{|#1\rangle} $
$ \newcommand{\braket}[2]{\langle #1|#2\rangle} $
$ \newcommand{\dot}[2]{ #1 \cdot #2} $
$ \newcommand{\biginner}[2]{\left\langle #1,#2\right\rangle} $
$ \newcommand{\mymatrix}[2]{\left( \begin{array}{#1} #2\end{array} \right)} $
$ \newcommand{\myvector}[1]{\mymatrix{c}{#1}} $
$ \newcommand{\myrvector}[1]{\mymatrix{r}{#1}} $
$ \newcommand{\mypar}[1]{\left( #1 \right)} $
$ \newcommand{\mybigpar}[1]{ \Big( #1 \Big)} $
$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $
$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $
$ \newcommand{\onehalf}{\frac{1}{2}} $
$ \newcommand{\donehalf}{\dfrac{1}{2}} $
$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $
$ \newcommand{\vzero}{\myvector{1\\0}} $
$ \newcommand{\vone}{\myvector{0\\1}} $
$ \newcommand{\stateplus}{\myvector{ \sqrttwo \\ \sqrttwo } } $
$ \newcommand{\stateminus}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $
$ \newcommand{\myarray}[2]{ \begin{array}{#1}#2\end{array}} $
$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $
$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $
$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $
$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $
$ \newcommand{\norm}[1]{ \left\lVert #1 \right\rVert } $
$ \newcommand{\pstate}[1]{ \lceil \mspace{-1mu} #1 \mspace{-1.5mu} \rfloor } $
<h2>Quantum Tomography</h2>
We start by initializing a qubit with an arbitrary state by using a rotation.
<h3> Initialize a qubit with an arbitrary state </h3>
We can specify a (real-valued) quantum state by its angle, ranging from $0$ to $ 2\pi $ radians.
If $ \theta $ is our angle, then our quantum state is $ \ket{v} = \myvector{\cos \theta \\ \sin \theta} $.
<b> How can we set a qubit to an arbitrary quantum state when started in state $ \ket{0} $?</b>
We can use a rotation operator. Rotations preserve the lengths of vectors, and so they are quantum operators.
In qiskit, ry-gate can be used for rotation in 2-dimensional real-valued plane.
<a id="remark"></a>
<h3> Technical remark</h3>
Even though we focus only on real-valued quantum systems in this tutorial, the quantum state of a qubit is in general represented by a 2-dimensional complex-valued vector. To visually represent a complex number, we use two dimensions. So, to visually represent the state of a qubit, we use four dimensions.
On the other hand, we can still visualize any state of a qubit by using a certain mapping from four dimensions to three dimensions. This representation is called the <i>Bloch sphere</i>.
The rotation operators over a single (complex-valued) qubit are defined on the Bloch sphere. The names of the gates "x", "y", and "z" refer to the axes of the Bloch sphere. When we focus on a real-valued qubit, we should be careful about the parameter(s) that a gate takes.
<i>In qiskit, ry-gate makes a rotation around $y$-axis with the given angle, say $\theta$, on Bloch sphere. This refers to a rotation in our real-valued $\ket{0}$-$\ket{1}$ plane with angle $ \frac{\theta}{2} $. Therefore, <b>we should provide the twice of the desired angle in this tutorial.</b></i>
<h3> Rotations with ry-gate </h3>
The ry-gate is used for rotation in 2-dimensional real-valued plane.
If our angle is $ \theta $ radians, then we pass $ 2 \theta $ radians as the parameter to ry-gate.
Then ry-gate implements the rotation with angle $\theta$.
The default direction of a rotation by ry-gate is counterclockwise.
mycircuit.ry(2*angle_of_rotation,quantum_register)
```
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
from math import pi
# we define a quantum circuit with one qubit and one bit
qreg1 = QuantumRegister(1) # quantum register with a single qubit
creg1 = ClassicalRegister(1) # classical register with a single bit
mycircuit1 = QuantumCircuit(qreg1,creg1) # quantum circuit with quantum and classical registers
# angle of rotation in radian
rotation_angle = 2*pi/3
# rotate the qubit with rotation_angle
mycircuit1.ry(2*rotation_angle,qreg1[0])
# measure the qubit
mycircuit1.measure(qreg1,creg1)
# draw the circuit
mycircuit1.draw(output='mpl')
# execute the program 1000 times
job = execute(mycircuit1,Aer.get_backend('qasm_simulator'),shots=1000)
# print the results
counts = job.result().get_counts(mycircuit1)
print(counts) # counts is a dictionary
from math import sin,cos
# the quantum state
quantum_state = [ cos(rotation_angle) , sin (rotation_angle) ]
the_expected_number_of_zeros = 1000*cos(rotation_angle)**2
the_expected_number_of_ones = 1000*sin(rotation_angle)**2
# expected results
print("The expected value of observing '0' is",round(the_expected_number_of_zeros,4))
print("The expected value of observing '1' is",round(the_expected_number_of_ones,4))
# draw the quantum state
%run qlatvia.py
draw_qubit()
draw_quantum_state(quantum_state[0],quantum_state[1],"|v>")
```
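Qiskit aside, the rotation itself is elementary; a plain-Python sketch of the state $ \myvector{\cos \theta \\ \sin \theta} $ and its outcome probabilities for the same angle:

```python
import math

def ry_state(theta):
    # rotating |0> by theta in the real |0>-|1> plane
    return (math.cos(theta), math.sin(theta))

theta = 2 * math.pi / 3
a0, a1 = ry_state(theta)
p0, p1 = a0 ** 2, a1 ** 2   # measurement probabilities are squared amplitudes

assert abs(p0 + p1 - 1.0) < 1e-12
print(p0, p1)  # roughly 0.25 and 0.75, matching the simulated counts above
```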
<h3> Task 1 </h3>
You are given 1000 copies of an arbitrary quantum state which lies in the first or second quadrant of the unit circle.
This quantum state can be represented by an angle $ \theta \in [0,180) $.
<i>Please execute the following cell, but do not check the value of $\theta$.</i>
```
from random import randrange
from math import pi
theta = randrange(18000)/18000 * pi
```
Your task is to guess this quantum state by writing quantum programs.
We assume that the quantum state is given to us with the following code.
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
# we define a quantum circuit with one qubit and one bit
qreg2 = QuantumRegister(1) # quantum register with a single qubit
creg2 = ClassicalRegister(1) # classical register with a single bit
circuit2 = QuantumCircuit(qreg2,creg2) # quantum circuit with quantum and classical registers
# rotate the qubit with rotation_angle
circuit2.ry(2*theta,qreg2[0])
You should write further code without using the variable $theta$ again.
You may use measurements or further $ry$-gates.
You can use 1000 shots in total when executing your quantum programs (you can have more than one program starting with the above code).
After your guess, please check the actual value and calculate your error in percentage.
```
# program 1
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
from math import pi
# we define a quantum circuit with one qubit and one bit
qreg1 = QuantumRegister(1) # quantum register with a single qubit
creg1 = ClassicalRegister(1) # classical register with a single bit
circuit1 = QuantumCircuit(qreg1,creg1) # quantum circuit with quantum and classical registers
# rotate the qubit with rotation_angle
circuit1.ry(2*theta,qreg1[0])
#
# your code is here
#
# program 2
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
from math import pi
# we define a quantum circuit with one qubit and one bit
qreg2 = QuantumRegister(1) # quantum register with a single qubit
creg2 = ClassicalRegister(1) # classical register with a single bit
circuit2 = QuantumCircuit(qreg2,creg2) # quantum circuit with quantum and classical registers
# rotate the qubit with rotation_angle
circuit2.ry(2*theta,qreg2[0])
# program 3
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
from math import pi
# we define a quantum circuit with one qubit and one bit
qreg3 = QuantumRegister(1) # quantum register with a single qubit
creg3 = ClassicalRegister(1) # classical register with a single bit
circuit3 = QuantumCircuit(qreg3,creg3) # quantum circuit with quantum and classical registers
# rotate the qubit with rotation_angle
circuit3.ry(2*theta,qreg3[0])
```
<a href="B36_Quantum_Tomography_Solution.ipynb#task1">click for our solution</a>
<h3> Task 2 [extra] </h3>
In Task 1, assume that you are given two qubits that are in states $ \myvector{\cos \theta_1 \\ \sin \theta_1} $ and $ \myvector{\cos \theta_2 \\ \sin \theta_2} $, where $ \theta_1,\theta_2 \in [0,\pi) $.
By following the same assumptions in Task 1, can you approximate $ \theta_1 $ and $ \theta_2 $ by using qiskit?
Your circuit should have a quantum register with these two qubits, and so your measurement outcomes will be '00', '01', '10', and '11'.
<h3> Task 3 (Discussion) </h3>
If the angle in Task 1 is picked in range $ [0,360) $, then can we determine its quadrant correctly?
<h3> Global phase </h3>
Suppose that we have a qubit and its state is either $ \ket{0} $ or $ -\ket{0} $.
Is there any sequence of one-qubit gates such that we can measure different results after applying them?
All one-qubit gates are $ 2 \times 2 $ matrices, and their application is represented by a single matrix: $ A_n \cdot \cdots \cdot A_2 \cdot A_1 = A $.
By linearity, if $ A \ket{0} = \ket{u} $, then $ A \mypar{-\ket{0}} = -\ket{u} $. Thus, after measurement, the probabilities of observing state $ \ket{0} $ and state $ \ket{1} $ are the same. Therefore, we cannot distinguish them.
Even though the states $ \ket{0} $ and $ -\ket{0} $ are different mathematically, they are considered the same from the physical point of view.
The minus sign in front of $ -\ket{0} $ is also called a global phase.
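This can also be checked numerically (a small sketch with the real-valued Hadamard matrix): the global sign disappears as soon as amplitudes are squared.

```python
import math

state_plus = [1.0, 0.0]    # |0>
state_minus = [-1.0, 0.0]  # -|0>

def apply(M, s):
    # multiply a 2x2 matrix by a 2-vector
    return [M[0][0]*s[0] + M[0][1]*s[1], M[1][0]*s[0] + M[1][1]*s[1]]

def probs(s):
    # measurement probabilities are squared amplitudes
    return [a**2 for a in s]

h = 1 / math.sqrt(2)
H = [[h, h], [h, -h]]  # Hadamard

assert probs(state_plus) == probs(state_minus)
assert probs(apply(H, state_plus)) == probs(apply(H, state_minus))
```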
| github_jupyter |
# Document classifier
Useful when a pile of documents needs to be sorted.
## Data
- First we need data to train our model
```
#!pip3 install -U textblob
from textblob.classifiers import NaiveBayesClassifier
train = [
('I love this sandwich.', 'pos'),
('this is an amazing place!', 'pos'),
('I feel very good about these beers.', 'pos'),
('this is my best work.', 'pos'),
("what an awesome view", 'pos'),
('I do not like this restaurant', 'neg'),
('I am tired of this stuff.', 'neg'),
("I can't deal with this", 'neg'),
('He is my sworn enemy!', 'neg'),
('My boss is horrible.', 'neg')
]
test = [
('The beer was good.', 'pos'),
('I do not enjoy my job', 'neg'),
("I ain't feeling dandy today.", 'neg'),
("I feel amazing!", 'pos'),
('Gary is a friend of mine.', 'pos'),
("I can't believe I'm doing this.", 'neg')
]
```
## Training
```
a = NaiveBayesClassifier(train)
```
## Test
- How well does our model perform on data it has never seen before?
```
a.accuracy(test)
```
- 80% correct, good enough for me :)
## Features
- Which words contribute most to a text being classified as positive or negative?
```
a.show_informative_features(5)
```
The classifier thinks that if "this" occurs the text is more likely positive, which is of course nonsense, but that is what it learned; this is why you need good training data.
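To see why the classifier associates "this" with the positive class, one can simply count per-class word frequencies by hand (a minimal sketch over a subset of the training sentences above):

```python
from collections import Counter

train = [
    ('I love this sandwich.', 'pos'),
    ('this is an amazing place!', 'pos'),
    ('this is my best work.', 'pos'),
    ('I do not like this restaurant', 'neg'),
    ('I am tired of this stuff.', 'neg'),
]

counts = {'pos': Counter(), 'neg': Counter()}
for sentence, label in train:
    # crude tokenization: lowercase and strip trailing punctuation
    counts[label].update(w.strip('.!').lower() for w in sentence.split())

print(counts['pos']['this'], counts['neg']['this'])
```

"this" simply occurs more often in the positive examples, so Naive Bayes tilts toward "pos" whenever it appears.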
## Classification
```
a.classify("their burgers are amazing") # "pos"
a.classify("I don't like their pizza.") # "neg"
a.classify("I love my job.")
```
### Classification by sentence
```
from textblob import TextBlob
blob = TextBlob("The beer was amazing. "
"But the hangover was horrible. My boss was not happy.",
classifier=a)
for sentence in blob.sentences:
print(("%s (%s)") % (sentence,sentence.classify()))
```
## Classifying comments with Swiss song lyrics
- http://www.falleri.ch
```
import os,glob
from nltk.tokenize import sent_tokenize
from nltk.tokenize import word_tokenize
from io import open
train = []
countries = ["schweiz", "deutschland"]
for country in countries:
out = []
folder_path = 'songtexte/%s' % country
for filename in glob.glob(os.path.join(folder_path, '*.txt')):
with open(filename, 'r') as f:
text = f.read()
words = word_tokenize(text)
words=[word.lower() for word in words if word.isalpha()]
for word in words:
out.append(word)
out = set(out)
for word in out:
train.append((word,country))
#print (filename)
#print (len(text))
train
#len(train)
from textblob.classifiers import NaiveBayesClassifier
c2 = NaiveBayesClassifier(train)
c2.classify("Ich gehe durch den Wald") # expected: "deutschland"
c2.classify("Häsch es guet") # expected: "schweiz"
c2.classify("Ich fahre mit meinem Porsche auf der Autobahn.")
c2.show_informative_features(5)
```
### Oracle
You can now build your own oracle, like this one: - http://home.datacomm.ch/cgi-heeb/dialect/chochi.pl?Hand=Hand&nicht=net&heute=hit&Fenster=Feischter&gestern=gescht&Abend=Abend&gehorchen=folge&Mond=Manat&jeweils=abe&Holzsplitter=Schepfa&Senden=Jetzt+analysieren%21
## Hardcore example with movie-review data using NLTK
- https://www.nltk.org/book/ch06.html
- We use only the 2,000 most frequent words in the texts and check whether they appear in positive or negative reviews
```
import random
import nltk
nltk.download("movie_reviews")
from nltk.corpus import movie_reviews
documents = [(list(movie_reviews.words(fileid)), category)
for category in movie_reviews.categories()
for fileid in movie_reviews.fileids(category)]
random.shuffle(documents)
len(documents)
(" ").join(documents[1][0])
all_words = nltk.FreqDist(w.lower() for w in movie_reviews.words())
word_features = list(all_words)[:2000]
word_features[0:-10]
all_words
all_words = nltk.FreqDist(w.lower() for w in movie_reviews.words())
word_features = list(all_words)[:2000]
def document_features(document):
document_words = set(document)
features = {}
for word in word_features:
features['contains({})'.format(word)] = (word in document_words)
return features
print(document_features(movie_reviews.words('pos/cv957_8737.txt')))
featuresets = [(document_features(d), c) for (d,c) in documents]
train_set, test_set = featuresets[100:], featuresets[:100]
b = nltk.NaiveBayesClassifier.train(train_set)
#document_features("a movie with bad actors".split(" "))
b.classify(document_features("a movie with bad actors".split(" ")))
b.classify(document_features("an uplifting movie with russel crowe".split(" ")))
b.show_most_informative_features(10)
nltk.classify.accuracy(b, test_set)
```
| github_jupyter |
# A simple DNN model built in Keras.
Let's start off with the Python imports that we need.
```
import os, json, math
import numpy as np
import shutil
import tensorflow as tf
print(tf.__version__)
```
## Locating the CSV files
We will start with the CSV files that we wrote out in the [first notebook](../01_explore/taxifare.ipynb) of this sequence. Just so you don't have to run the notebook, we saved a copy in ../data
```
!ls -l ../data/*.csv
```
## Use tf.data to read the CSV files
We wrote these cells in the [third notebook](../03_tfdata/input_pipeline.ipynb) of this sequence.
```
CSV_COLUMNS = ['fare_amount', 'pickup_datetime',
'pickup_longitude', 'pickup_latitude',
'dropoff_longitude', 'dropoff_latitude',
'passenger_count', 'key']
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0],['na'],[0.0],[0.0],[0.0],[0.0],[0.0],['na']]
def features_and_labels(row_data):
for unwanted_col in ['pickup_datetime', 'key']:
row_data.pop(unwanted_col)
label = row_data.pop(LABEL_COLUMN)
return row_data, label # features, label
# load the training data
def load_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):
dataset = (tf.data.experimental.make_csv_dataset(pattern, batch_size, CSV_COLUMNS, DEFAULTS)
.map(features_and_labels) # features, label
.cache())
if mode == tf.estimator.ModeKeys.TRAIN:
dataset = dataset.shuffle(1000).repeat()
dataset = dataset.prefetch(1) # take advantage of multi-threading; 1=AUTOTUNE
return dataset
## Build a simple Keras DNN using its Functional API
def rmse(y_true, y_pred):
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
def build_dnn_model():
INPUT_COLS = ['pickup_longitude', 'pickup_latitude',
'dropoff_longitude', 'dropoff_latitude',
'passenger_count']
# input layer
inputs = {
colname : tf.keras.layers.Input(name=colname, shape=(), dtype='float32')
for colname in INPUT_COLS
}
feature_columns = {
colname : tf.feature_column.numeric_column(colname)
for colname in INPUT_COLS
}
# the constructor for DenseFeatures takes a list of numeric columns
# The Functional API in Keras requires that you specify: LayerConstructor()(inputs)
dnn_inputs = tf.keras.layers.DenseFeatures(feature_columns.values())(inputs)
    # two hidden layers of [32, 8], just like the BQML DNN
h1 = tf.keras.layers.Dense(32, activation='relu', name='h1')(dnn_inputs)
h2 = tf.keras.layers.Dense(8, activation='relu', name='h2')(h1)
# final output is a linear activation because this is regression
output = tf.keras.layers.Dense(1, activation='linear', name='fare')(h2)
model = tf.keras.models.Model(inputs, output)
model.compile(optimizer='adam', loss='mse', metrics=[rmse, 'mse'])
return model
model = build_dnn_model()
print(model.summary())
tf.keras.utils.plot_model(model, 'dnn_model.png', show_shapes=False, rankdir='LR')
```
## Train model
To train the model, call model.fit()
```
TRAIN_BATCH_SIZE = 32
NUM_TRAIN_EXAMPLES = 10000 * 5 # training dataset repeats, so it will wrap around
NUM_EVALS = 5 # how many times to evaluate
NUM_EVAL_EXAMPLES = 10000 # enough to get a reasonable sample, but not so much that it slows down
trainds = load_dataset('../data/taxi-train*', TRAIN_BATCH_SIZE, tf.estimator.ModeKeys.TRAIN)
evalds = load_dataset('../data/taxi-valid*', 1000, tf.estimator.ModeKeys.EVAL).take(NUM_EVAL_EXAMPLES//1000)
steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)
history = model.fit(trainds,
validation_data=evalds,
epochs=NUM_EVALS,
steps_per_epoch=steps_per_epoch)
# plot
import matplotlib.pyplot as plt
nrows = 1
ncols = 2
fig = plt.figure(figsize=(10, 5))
for idx, key in enumerate(['loss', 'rmse']):
ax = fig.add_subplot(nrows, ncols, idx+1)
plt.plot(history.history[key])
plt.plot(history.history['val_{}'.format(key)])
plt.title('model {}'.format(key))
plt.ylabel(key)
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left');
```
## Predict with model
This is how you'd predict with this model.
```
model.predict({
'pickup_longitude': tf.convert_to_tensor([-73.982683]),
'pickup_latitude': tf.convert_to_tensor([40.742104]),
'dropoff_longitude': tf.convert_to_tensor([-73.983766]),
'dropoff_latitude': tf.convert_to_tensor([40.755174]),
'passenger_count': tf.convert_to_tensor([3.0]),
})
```
Of course, this is not realistic, because we can't expect client code to have a model object in memory. We'll have to export our model to a file, and expect client code to instantiate the model from that exported file.
## Export model
Let's export the model to a TensorFlow SavedModel format. Once we have a model in this format, we have lots of ways to "serve" the model, from a web application, from JavaScript, from mobile applications, etc.
```
# This doesn't work yet.
shutil.rmtree('./export/savedmodel', ignore_errors=True)
tf.keras.experimental.export_saved_model(model, './export/savedmodel')
# Recreate the exact same model
new_model = tf.keras.experimental.load_from_saved_model('./export/savedmodel')
# try predicting with this model
new_model.predict({
'pickup_longitude': tf.convert_to_tensor([-73.982683]),
'pickup_latitude': tf.convert_to_tensor([40.742104]),
'dropoff_longitude': tf.convert_to_tensor([-73.983766]),
'dropoff_latitude': tf.convert_to_tensor([40.755174]),
'passenger_count': tf.convert_to_tensor([3.0]),
})
```
In the next notebook, we will improve this model through feature engineering.
Copyright 2019 Google Inc.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
| github_jupyter |
```
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
sns.set_theme(style="whitegrid")
sns.set_context("paper")
my_data = pd.read_csv("success_rate_RFC_bm5_normal_docking_2.csv")
my_data
my_data.set_index("Scoring",inplace=True)
# sns.barplot(data=my_data[my_data["Method"]=="Pydock"])
# sns.barplot(data=my_data[my_data["Method"]=="Zdock"])
# sns.barplot(data=my_data[my_data["Method"]=="Swardock"],hue=my_data.index)
my_data[my_data["Method"]=="Pydock"].plot(kind="bar",color=['#C0C0C0','#A9A9A9','#808080'])
my_data[my_data["Method"]=="Zdock"].plot(kind="bar",color=['#C0C0C0','#A9A9A9','#808080'])
my_data[my_data["Method"]=="Swardock"].plot(xlabel=" ",kind="bar",color=['#C0C0C0','#A9A9A9','#808080'],rot=0)
fig, ((ax1, ax2, ax3) ) = plt.subplots(3,1 , sharex=False
,sharey=True
,figsize=( 5 , 8 )
#
)
my_data[my_data["Method"]=="Pydock"].plot(ax=ax1,title="A) Pydock",
rot=0 ,
kind="bar",
color=['#C0C0C0','#A9A9A9','#808080'],
xlabel="")
my_data[my_data["Method"]=="Zdock"].plot(ax=ax2,title="B) Zdock",
rot=0 ,
kind="bar",
color=['#C0C0C0','#A9A9A9','#808080'],
legend=False,
xlabel="")
my_data[my_data["Method"]=="Swardock"].plot(ax=ax3, title="C) Swardock",
rot=0,
kind="bar",
color=['#C0C0C0','#A9A9A9','#808080'],
legend=False)
plt.ylabel("Percentage")
plt.tight_layout()
plt.savefig("../success_rate_CODES_vs_IRAPPA.eps")
my_data = pd.read_csv("success_rate_RFC_bm5_normal_docking.csv")
my_data.sort_values("Top10", inplace=True)
# my_data.set_index("Scoring", inplace=True)
my_data["Scoring"] = ['Pydock',
'Zdock',
'IRAPPA\nZdock',
'CODES',
'Swarmdock',
'IRAPPA\nPydock',
'IRAPPA\nSwardock']
my_data.set_index("Scoring", inplace=True)
my_data["Top10"].plot(
rot=0,
kind="bar",
color=['#A9A9A9']
# legend=False
)
plt.ylabel("Percentage")
plt.tight_layout()
plt.savefig("../success_rate_CODES_combines.eps")
# sns.barplot(x=my_data.index,y="Top10",data=my_data)
```
| github_jupyter |
```
import numpy as np
import torch
import pandas as pd
import json
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.preprocessing import LabelEncoder as LE
import bisect
from datetime import datetime
from sklearn.model_selection import train_test_split
!cp -r drive/My\ Drive/T11 ./T11
np.random.seed(22)
torch.manual_seed(22)
with open('T11/batsmen.json', 'r') as f:
batsmen = json.load(f)
with open('T11/bowlers.json', 'r') as f:
bowlers = json.load(f)
batsmen = {k: [x for x in v if x[1][1]>=0] for k,v in batsmen.items()}
batsmen = {k: sorted(v, key=lambda x : x[0]) for k,v in batsmen.items() if v}
bowlers = {k: sorted(v, key=lambda x : x[0]) for k,v in bowlers.items() if v}
def getBatScores(scores):
#runs, balls, boundaries, contribs, out
array = []
for score in scores:
date = score[0]
_, runs, balls, fours, sixes, _, contrib = score[1]
boundaries = fours + sixes * 1.5
array.append((date, np.array([runs, balls, boundaries, contrib])))
return array
def getBowlScores(scores):
#overs, maidens, runs, wickets, contribs
array = []
for score in scores:
date = score[0]
overs, maidens, runs, wickets, _, contrib = score[1]
overs = int(overs) + (overs-int(overs))*10/6
array.append((date, np.array([overs, maidens, runs, wickets, contrib])))
return array
batsmen_scores = {k:getBatScores(v) for k,v in batsmen.items()}
bowlers_scores = {k:getBowlScores(v) for k,v in bowlers.items()}
_batsmen_scores = {k:{_v[0]: _v[1] for _v in v} for k,v in batsmen_scores.items()}
_bowlers_scores = {k:{_v[0]: _v[1] for _v in v} for k,v in bowlers_scores.items()}
att = pd.read_csv('T11/attributes.csv')
att['BatHand']=0+(att['Bats'].str.find('eft')>0)
att['BowlHand']=0+(att['Bowls'].str.find('eft')>0)
att['BowlType']=0+((att['Bowls'].str.find('ast')>0) | (att['Bowls'].str.find('edium')>0))
def getBatStats(scores):
dates, scorelist = [score[0] for score in scores], [score[1] for score in scores]
scorelist = np.array(scorelist)
cumscores = np.cumsum(scorelist, axis=0)
innings = np.arange(1, cumscores.shape[0]+1)
average = cumscores[:, 0]/innings
sr = cumscores[:, 0]/(cumscores[:, 1]+1)
contrib = cumscores[:, 3]/innings
stats = np.array([innings, average, sr, contrib]).T
return [datetime.strptime(date, "%Y-%m-%d") for date in dates], stats
def getBowlStats(scores):
dates, scorelist = [score[0] for score in scores], [score[1] for score in scores]
scorelist = np.array(scorelist)
cumscores = np.cumsum(scorelist, axis=0)
overs = cumscores[:, 0]
overs = overs.astype('int32')+10/6*(overs - overs.astype('int32'))
runs = cumscores[:, 2]
economy = runs/overs
wickets = cumscores[:, 3]
average = wickets/(runs+1)
sr = wickets/overs
contrib = cumscores[:, 4]/np.arange(1, cumscores.shape[0]+1)
stats = np.array([overs, average, economy, sr, contrib]).T
return [datetime.strptime(date, "%Y-%m-%d") for date in dates], stats
batsmen_stats = {key:getBatStats(getBatScores(v)) for key,v in batsmen.items()}
bowlers_stats = {key:getBowlStats(getBowlScores(v)) for key,v in bowlers.items()}
with open('T11/scorecard.json', 'r') as f:
scorecards = json.load(f)
position = dict()
for code, match in scorecards.items():
for pos, batsmen in enumerate(match['BATTING1']):
if batsmen[0] in position:
position[batsmen[0]].append(pos+1)
else:
position[batsmen[0]]=[pos+1]
for pos, batsmen in enumerate(match['BATTING2']):
if batsmen[0] in position:
position[batsmen[0]].append(pos+1)
else:
position[batsmen[0]]=[pos+1]
position = {int(k):max(set(v), key = v.count) for k,v in position.items()}
for missing in set(att['Code']) - set(position.keys()):
position[missing]=0
with open('T11/region.json','r') as f:
region = json.load(f)
with open('T11/tmap.json','r') as f:
tmap = json.load(f)
matches = pd.read_csv('T11/matches.csv')
att['BatPos']=att['Code'].apply(lambda x : position[x])
matches['GroundCode']=matches['GroundCode'].apply(lambda x : region[str(x)])
matches=matches[pd.to_datetime(matches['Date'], format='%Y-%m-%d')>"1990-01-01"]
df_cards = pd.DataFrame(scorecards).transpose()
df_cards = df_cards[df_cards.index.astype(int).isin(matches['MatchCode'])]
matches = matches[matches['MatchCode'].isin(df_cards.index.astype(int))]
att=pd.get_dummies(att, columns=['BatPos'])
le = {
'GC' : LE(),
'Team' : LE(),
'Venue' : LE(),
}
le['Team'].fit((matches['Team_1'].tolist())+(matches['Team_2'].tolist()))
matches['Team_1']=le['Team'].transform(matches['Team_1'])
matches['Team_2']=le['Team'].transform(matches['Team_2'])
matches['Venue']=le['Venue'].fit_transform(matches['Venue'])
matches['GroundCode']=le['GC'].fit_transform(matches['GroundCode'])
matches
patts = att[['BatHand', 'BowlHand', 'BowlType', 'BatPos_0', 'BatPos_1', 'BatPos_2', 'BatPos_3', 'BatPos_4', 'BatPos_5', 'BatPos_6', 'BatPos_7', 'BatPos_8', 'BatPos_9', 'BatPos_10']].values
pcodes = att['Code'].tolist()
attdict = dict()
for i,pc in enumerate(pcodes):
attdict[pc]=patts[i]
df_cards['MatchCode']=df_cards.index.astype(int)
matches=matches.sort_values(by='MatchCode')
df_cards=df_cards.sort_values(by='MatchCode')
df_cards.reset_index(drop=True, inplace=True)
matches.reset_index(drop=True, inplace=True)
df_cards['BAT2']=le['Team'].transform(df_cards['ORDER'].apply(lambda x : tmap[x[1]]))
df_cards['BAT1']=le['Team'].transform(df_cards['ORDER'].apply(lambda x : tmap[x[0]]))
df_cards['RUN1']=df_cards['SCORES'].apply(lambda x : x[0])
df_cards['RUN2']=df_cards['SCORES'].apply(lambda x : x[1])
df_cards['TOSS']=le['Team'].transform(df_cards['TOSS'].apply(lambda x : tmap[x]))
df = pd.merge(matches, df_cards)
df['PLAYERS1']=df['BATTING1'].apply(lambda x : [y[0] for y in x])
df['PLAYERS2']=df['BATTING2'].apply(lambda x : [y[0] for y in x])
_BAT1, _BAT2, _BOW1, _BOW2 = df['PLAYERS1'].tolist(), df['PLAYERS2'].tolist(), [[_x[0] for _x in x] for x in df['BOWLING1'].tolist()], [[_x[0] for _x in x] for x in df['BOWLING2'].tolist()]
for i in range(len(_BAT1)):
try:
_BAT1[i].append(list(set(_BOW2[i])-set(_BAT1[i]))[0])
_BAT2[i].append(list(set(_BOW1[i])-set(_BAT2[i]))[0])
except:
pass
df['PLAYERS1'], df['PLAYERS2'] = _BAT1, _BAT2
df=df[['Date', 'Team_1', 'Team_2', 'Venue', 'GroundCode', 'TOSS', 'BAT1', 'BAT2', 'RUN1', 'RUN2', 'PLAYERS1', 'PLAYERS2']]
df=df[df['PLAYERS1'].apply(lambda x : len(x)==11) & df['PLAYERS2'].apply(lambda x : len(x)==11)]
df.reset_index(drop=True, inplace=True)
Team_1, Team_2, BAT1, BAT2, BOWL1, BOWL2= [], [], [], [], [], []
for t1,t2,b1,b2 in zip(df['Team_1'].tolist(), df['Team_2'].tolist(), df['BAT1'].tolist(), df['BAT2'].tolist()):
if b1==t1:
Team_1.append(t1)
Team_2.append(t2)
else:
Team_1.append(t2)
Team_2.append(t1)
df['Team_1']=Team_1
df['Team_2']=Team_2
df.drop(['BAT1', 'BAT2', 'Venue'],axis=1, inplace=True)
def getStats(code, date):
_date = datetime.strptime(date, "%Y-%m-%d")
if code in batsmen_stats:
i = bisect.bisect_left(batsmen_stats[code][0], _date)-1
if i == -1:
bat = np.zeros(4)
else:
bat = batsmen_stats[code][1][i]
else:
bat = np.zeros(4)
if code in bowlers_stats:
i = bisect.bisect_left(bowlers_stats[code][0], _date)-1
if i == -1:
bowl = np.zeros(5)
else:
bowl = bowlers_stats[code][1][i]
else:
bowl = np.zeros(5)
if int(code) in attdict:
patt = attdict[int(code)]
else:
patt = np.zeros(14)
stats = np.concatenate([bat, bowl, patt])
return stats
def getScores(code, date):
if code in _batsmen_scores and date in _batsmen_scores[code]:
bat = _batsmen_scores[code][date]
else:
bat = np.zeros(4)
if code in _bowlers_scores and date in _bowlers_scores[code]:
bowl = _bowlers_scores[code][date]
else:
bowl = np.zeros(5)
return np.concatenate([bat, bowl])
P1, P2, Dates = df['PLAYERS1'].tolist(), df['PLAYERS2'].tolist(), df['Date'].tolist()
PStats1, PStats2 = [[getStats(p, date) for p in team] for team,date in zip(P1,Dates)], [[getStats(p, date) for p in team] for team,date in zip(P2,Dates)]
PScores1, PScores2 = [[getScores(p, date) for p in team] for team,date in zip(P1,Dates)], [[getScores(p, date) for p in team] for team,date in zip(P2,Dates)]
def getNRR(matchcode):
card = scorecards[matchcode]
run1, run2 = card['SCORES']
overs = sum([int(b[1]) + 10/6*(b[1]-int(b[1])) for b in card['BOWLING2']])
allout = not (len(card['BATTING2'][-1][1])<2 or ('not' in card['BATTING2'][-1][1]))
if allout:
overs=50
return abs((run1/50) - (run2/overs))
df['NRR']=matches['MatchCode'].apply(lambda x : getNRR(str(x)))
df['TEAM1WIN']=0
df.loc[df['RUN1']>df['RUN2'], 'TEAM1WIN']=1
df_0=df[df['TEAM1WIN']==0].copy()
df_1=df[df['TEAM1WIN']==1]
df_0['NRR']=-df_0['NRR']
df=pd.concat([df_0, df_1]).sort_index()
nPStats1, nPStats2, nPScores1, nPScores2 = np.array(PStats1), np.array(PStats2), np.array(PScores1), np.array(PScores2)
StatMaxes = np.max(np.concatenate([nPStats1, nPStats2]), axis=(0,1))
dfStats_N1 = nPStats1/StatMaxes
dfStats_N2 = nPStats2/StatMaxes
ScoreMaxes = np.max(np.concatenate([nPScores1, nPScores2]), axis=(0,1))
dfScores_N1 = nPScores1/ScoreMaxes
dfScores_N2 = nPScores2/ScoreMaxes
NRRMax = np.max(df['NRR'])
df['NRR']=df['NRR']/NRRMax
nnPStats1 = np.concatenate([dfStats_N1, dfStats_N2],axis=0)
nnPStats2 = np.concatenate([dfStats_N2, dfStats_N1],axis=0)
nnPScores1 = np.concatenate([dfScores_N1, dfScores_N2],axis=0)
nnPScores2 = np.concatenate([dfScores_N2, dfScores_N1],axis=0)
_NRR = np.concatenate([df['NRR'].values, -df['NRR'].values])
train_idx, test_idx = train_test_split(np.arange(2*len(df)), test_size=0.1)
import torch.nn as nn
import torch
from torch import optim
class AE(nn.Module):
def __init__(self, input_shape=12, output_shape=1, hidden=16, dropout=0.2):
super(AE, self).__init__()
self.hidden = hidden
self.input_shape = input_shape
self.output_shape = output_shape
# Gaussian noise is injected manually in forward() below; no separate noise layer is defined
self.player_encoder = nn.Sequential(
nn.Linear(input_shape, hidden),
nn.Tanh(),
nn.Dropout(dropout),
nn.Linear(hidden, hidden),
nn.Tanh(),
nn.Dropout(dropout),
)
self.score_regressor = nn.Sequential(
nn.Linear(hidden, 9),
nn.Tanh(),
)
self.decoder = nn.Sequential(
nn.Linear(hidden, input_shape)
)
self.team_encoder = nn.Sequential(
nn.Linear(11*hidden, hidden*4),
nn.Tanh(),
nn.Dropout(dropout),
)
self.nrr_regressor = nn.Sequential(
nn.Linear(hidden*8, hidden*2),
nn.Tanh(),
nn.Dropout(dropout),
nn.Linear(hidden*2, output_shape),
nn.Tanh(),
)
def forward(self, x1, x2):
encoded1, decoded1, scores1 = [], [], []
encoded2, decoded2, scores2 = [], [], []
for i in range(11):
e1 = self.player_encoder(x1[:,i,:])
d1 = self.decoder(e1)
e2 = self.player_encoder(x2[:,i,:])
d2 = self.decoder(e2)
noise = (0.1**0.5)*torch.randn(e1.size())
e1, e2 = e1 + noise, e2 + noise
scores1.append(self.score_regressor(e1))
scores2.append(self.score_regressor(e2))
encoded1.append(e1)
decoded1.append(d1)
encoded2.append(e2)
decoded2.append(d2)
team1, team2 = self.team_encoder(torch.cat(tuple(encoded1), axis=1)), self.team_encoder(torch.cat(tuple(encoded2), axis=1))
out = self.nrr_regressor(torch.cat((team1, team2), axis=1))
decoded=torch.cat(tuple(decoded1 + decoded2), axis=1)
scores1=torch.cat(tuple(scores1),axis=1)
scores2=torch.cat(tuple(scores2),axis=1)
return decoded, out, scores1, scores2
model = AE(dropout=0.3)
criterion = nn.MSELoss()
ED_Loss_train, NRR_Loss_train, Player_Loss_train = [], [], []
ED_Loss_test, NRR_Loss_test, Player_Loss_test = [], [], []
optimizer = optim.RMSprop(model.parameters(), lr=3e-4, )
epochs = 10000
for epoch in range(1,epochs+1):
model.train()
inputs1 = torch.FloatTensor(nnPStats1[:,:,:12][train_idx])
inputs2 = torch.FloatTensor(nnPStats2[:,:,:12][train_idx])
outputs = torch.FloatTensor(_NRR[train_idx].reshape(-1,1))
optimizer.zero_grad()
decoded, out, scores1, scores2 = model(inputs1, inputs2)
inp = (inputs1).view(train_idx.shape[0], -1), (inputs2).view(train_idx.shape[0], -1)
loss1 = criterion(decoded, torch.cat(inp, axis=1))
loss2 = criterion(out, outputs)
loss3 = criterion(scores1, torch.FloatTensor(nnPScores1[train_idx]).view(train_idx.shape[0], -1))
loss4 = criterion(scores2, torch.FloatTensor(nnPScores2[train_idx]).view(train_idx.shape[0], -1))
loss = 1e-5*loss1 + 1*loss2 + 1e-3*(loss3 + loss4)
loss.backward()
ED_Loss_train.append(loss1.item())
NRR_Loss_train.append(loss2.item())
Player_Loss_train.append((loss3.item()+loss4.item())/2)
optimizer.step()
if epoch%100==0:
print(f"Epoch {epoch}/{epochs}")
print("Train Losses Decoder: %0.3f NRR: %0.3f Player Performance %0.3f" % (loss1.item(), loss2.item(), (loss3.item()+loss4.item())/2))
model.eval()
inputs1 = torch.FloatTensor(nnPStats1[:,:,:12][test_idx])
inputs2 = torch.FloatTensor(nnPStats2[:,:,:12][test_idx])
outputs = torch.FloatTensor(_NRR[test_idx].reshape(-1,1))
decoded, out, scores1, scores2 = model(inputs1, inputs2)
inp = (inputs1).view(test_idx.shape[0], -1), (inputs2).view(test_idx.shape[0], -1)
loss1 = criterion(decoded, torch.cat(inp, axis=1))
loss2 = criterion(out, outputs)
loss3 = criterion(scores1, torch.FloatTensor(nnPScores1[test_idx]).view(test_idx.shape[0], -1))
loss4 = criterion(scores2, torch.FloatTensor(nnPScores2[test_idx]).view(test_idx.shape[0], -1))
ED_Loss_test.append(loss1.item())
print("Validation Losses Decoder: %0.3f NRR: %0.3f Player Performance: %0.3f" % (loss1.item(), loss2.item(), (loss3.item()+loss4.item())/2))
NRR_Loss_test.append(loss2.item())
out, outputs = out.detach().numpy(), outputs.detach().numpy()
Player_Loss_test.append((loss3.item()+loss4.item())/2)
acc=100*np.sum((out*outputs)>0)/out.shape[0]
print("Val Accuracy: %0.3f" % acc)
sns.lineplot(x=np.arange(1,10001), y=ED_Loss_train)
sns.lineplot(x=np.arange(100,10001,100), y=ED_Loss_test)
sns.lineplot(x=np.arange(1,10001), y=NRR_Loss_train)
sns.lineplot(x=np.arange(100,10001,100), y=NRR_Loss_test)
sns.lineplot(x=np.arange(1,10001), y=Player_Loss_train)
sns.lineplot(x=np.arange(100,10001,100), y=Player_Loss_test)
```
| github_jupyter |
# CaptureFile - Transactional record logging library
## Overview
Capture files are compressed transactional record logs and by convention use the
extension ".capture". Records can be appended but not modified and are
explicitly committed to the file.
Any records that are added but not committed will not be visible to other
processes and will be lost if the process that added them stops or otherwise
closes the capture file before committing. All records that were added between
commits either become available together or, if the commit fails, are discarded
together. This is true even if the file buffers were flushed to disk as the
number of records added between commits grew.
Records in a capture file are each of arbitrary length and can contain up to 4GB
(2³² bytes) of binary data.
Capture files can quickly retrieve any record by its sequential record number.
This is true even with trillions of records.
Metadata can be attached to and read from a capture file. The current metadata
reference is replaced by any subsequent metadata writes. A metadata update is
also transactional and will be committed together with any records that were
added between commits.
Concurrent read access is supported for multiple threads and OS processes.
Only one writer is permitted at a time.
This is a pure Python implementation with no dependencies beyond the Python
standard library. Development build dependencies are listed in requirements.txt.
Click here for the implementation language independent [internal details of the
design and the data structures
used](https://github.com/MIOsoft/CaptureFile-Python/blob/master/docs/DESIGN.md).
Click here for a detailed description of the [Python CaptureFile
API](https://github.com/MIOsoft/CaptureFile-Python/blob/master/docs/CaptureFile.CaptureFile.md).
The detailed description covers several useful APIs and parameters that are not
covered in the Quickstart below.
To work with capture files visually, you can use the free [MIObdt](https://miosoft.com/miobdt/) application.
## Install
```
pip install CaptureFile
```
## Quickstart
### Example 1. Creating a new capture file and then adding and committing some records to it.
```
from CaptureFile import CaptureFile
# in the **existing** sibling "TempTestFiles" folder create a new empty capture file
cf = CaptureFile("../TempTestFiles/Test.capture", to_write=True, force_new_empty_file=True)
# add five records to the capture file
cf.add_record("Hey this is my record 1")
cf.add_record("Hey this is my record 2")
cf.add_record("Hey this is my record 3")
cf.add_record("Hey this is my record 4")
cf.add_record("Hey this is my record 5")
# commit records to capture file
cf.commit()
print(f"There are {cf.record_count()} records in this capture file.")
# close the capture file
cf.close()
```
### Example 2. Reading a record from the capture file created above.
```
from CaptureFile import CaptureFile
# open existing capture file for reading
cf = CaptureFile("../TempTestFiles/Test.capture")
# retrieve the second record from the capture file
record = cf.record_at(2)
print(record)
# close the capture file
cf.close()
```
### Example 3. Opening an existing capture file and then reading a range of records from it.
```
from CaptureFile import CaptureFile
# open existing capture file for reading
cf = CaptureFile("../TempTestFiles/Test.capture")
# retrieve and print records 2 to 3
print(cf[2:4])
# close the capture file
cf.close()
```
### Example 4. Opening an existing capture file using a context manager and then iterating over all records from it.
```
from CaptureFile import CaptureFile
# open existing capture file for reading using a context manager
# so no need to close the capture file
with CaptureFile("../TempTestFiles/Test.capture") as cf:
#iterate over and print all records
for record in iter(cf):
print(record)
```
| github_jupyter |
### Code to implement Graphs
```
class DiGraphAsAdjacencyMatrix:
def __init__(self):
#would be better a set, but I need an index
self.__nodes = list()
self.__matrix = list()
def __len__(self):
"""gets the number of nodes"""
return len(self.__nodes)
def nodes(self):
return self.__nodes
def matrix(self):
return self.__matrix
def __str__(self):
header = "\t".join([n for n in self.__nodes])
data = ""
for i in range(0,len(self.__matrix)):
data += str(self.__nodes[i]) + "\t"
data += "\t".join([str(x) for x in self.__matrix[i]]) + "\n"
return "\t"+ header +"\n" + data
def insertNode(self, node):
#add the node if not there.
if node not in self.__nodes:
self.__nodes.append(node)
#add a row and a column of zeros in the matrix
if len(self.__matrix) == 0:
#first node
self.__matrix = [[0]]
else:
N = len(self.__nodes)
for row in self.__matrix:
row.append(0)
self.__matrix.append([0 for x in range(N)])
def insertEdge(self, node1, node2, weight):
i = -1
j = -1
if node1 in self.__nodes:
i = self.__nodes.index(node1)
if node2 in self.__nodes:
j = self.__nodes.index(node2)
if i != -1 and j != -1:
self.__matrix[i][j] = weight
def deleteEdge(self, node1,node2):
"""removing an edge means to set its
corresponding place in the matrix to 0"""
i = -1
j = -1
if node1 in self.__nodes:
i = self.__nodes.index(node1)
if node2 in self.__nodes:
j = self.__nodes.index(node2)
if i != -1 and j != -1:
self.__matrix[i][j] = 0
def deleteNode(self, node):
"""removing a node means removing
its corresponding row and column in the matrix"""
i = -1
if node in self.__nodes:
i = self.__nodes.index(node)
#print("Removing {} at index {}".format(node, i))
if node != -1:
self.__matrix.pop(i)
for row in self.__matrix:
row.pop(i)
self.__nodes.pop(i)
def adjacent(self, node, incoming = True):
"""Your treat! (see exercise 1)"""
def edges(self):
"""Your treat! (see exercise1). Returns all the edges"""
if __name__ == "__main__":
G = DiGraphAsAdjacencyMatrix()
for i in range(6):
n = "Node_{}".format(i+1)
G.insertNode(n)
for i in range(0,4):
n = "Node_" + str(i+1)
six = "Node_6"
n_plus = "Node_" + str((i+2) % 6)
G.insertEdge(n, n_plus,0.5)
G.insertEdge(n, six,1)
G.insertEdge("Node_5", "Node_1", 0.5)
G.insertEdge("Node_5", "Node_6", 1)
G.insertEdge("Node_6", "Node_6", 1)
print(G)
print("Nodes:")
print(G.nodes())
print("Matrix:")
print(G.matrix())
G.insertNode("Node_7")
G.insertEdge("Node_1", "Node_7", -1)
G.insertEdge("Node_2", "Node_7", -2)
G.insertEdge("Node_5", "Node_7", -5)
G.insertEdge("Node_7", "Node_2", -2)
G.insertEdge("Node_7", "Node_3", -3)
print("Size is: {}".format(len(G)))
print("Nodes: {}".format(G.nodes()))
print("\nMatrix:")
print(G)
G.deleteNode("Node_7")
G.deleteEdge("Node_6", "Node_2")
#no effect, nodes do not exist!
G.insertEdge("72", "25",3)
print(G)
class DiGraphAsAdjacencyMatrix:
def __init__(self):
#would be better a set, but I need an index
self.__nodes = list()
self.__matrix = list()
def __len__(self):
"""gets the number of nodes"""
return len(self.__nodes)
def nodes(self):
return self.__nodes
def matrix(self):
return self.__matrix
def __str__(self):
#TODO
pass
def insertNode(self, node):
#TODO
pass
def insertEdge(self, node1, node2, weight):
#TODO
pass
def deleteEdge(self, node1,node2):
"""removing an edge means to set its
corresponding place in the matrix to 0"""
#TODO
pass
def deleteNode(self, node):
"""removing a node means removing
its corresponding row and column in the matrix"""
#TODO
pass
def adjacent(self, node, incoming = True):
#TODO
pass
def edges(self):
#TODO
pass
```
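One possible shape for the missing `adjacent` method from exercise 1 (a sketch, not the official solution): for outgoing edges scan the node's row of the matrix, for incoming edges scan its column. The standalone helper below takes the node list and matrix explicitly so it can be tried in isolation:

```python
def adjacent(nodes, matrix, node, incoming=True):
    """Return the neighbours of `node` in an adjacency matrix.

    `nodes` is the list of node labels and matrix[i][j] is the weight of
    the edge nodes[i] -> nodes[j] (0 means no edge). With incoming=True
    we scan the node's column (edges pointing at it), otherwise its row.
    """
    if node not in nodes:
        return []
    i = nodes.index(node)
    if incoming:
        return [nodes[k] for k in range(len(nodes)) if matrix[k][i] != 0]
    return [nodes[k] for k in range(len(nodes)) if matrix[i][k] != 0]

nodes = ["a", "b", "c"]
matrix = [[0, 1, 0],
          [0, 0, 2],
          [3, 0, 0]]  # a->b, b->c, c->a
print(adjacent(nodes, matrix, "a", incoming=False))  # ['b']
print(adjacent(nodes, matrix, "a", incoming=True))   # ['c']
```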
In this implementation of a directed weighted graph, we use a dictionary to store the data.
```
class Graph:
# initializer, nodes are private!
def __init__(self):
self.__nodes = dict()
#returns the size of the Graph
#accessible through len(Graph)
def __len__(self):
return len(self.__nodes)
#returns the nodes
def V(self):
return self.__nodes.keys()
#a generator of nodes to access all of them
#once (not a very useful example!)
def node_iterator(self):
for n in self.__nodes.keys():
yield n
#a generator of edges (as triplets (u,v,w)) to access all of them
def edge_iterator(self):
for u in self.__nodes:
for v in self.__nodes[u]:
yield (u,v,self.__nodes[u][v])
#returns all the adjacent nodes of node
#as a dictionary with key as the other node
#and value the weight
def adj(self,node):
if node in self.__nodes.keys():
return self.__nodes[node]
#adds the node to the graph
def insert_node(self, node):
if node not in self.__nodes:
self.__nodes[node] = dict()
#adds the edge startN --> endN with weight w
#that has 0 as default
def insert_edge(self, startN, endN, w = 0):
#does nothing if already in
self.insert_node(startN)
self.insert_node(endN)
self.__nodes[startN][endN] = w
#converts the graph into a string
def __str__(self):
out_str = "Nodes:\n" + ",".join(self.__nodes)
out_str +="\nEdges:\n"
for u in self.__nodes:
for v in self.__nodes[u]:
out_str +="{} --{}--> {}\n".format(u,self.__nodes[u][v],v )
if len(self.__nodes[u]) == 0:
out_str +="{}\n".format(u)
return out_str
if __name__ == "__main__":
G = Graph()
for u,v in [ ('a', 'b'), ('a', 'd'), ('b', 'c'),
('d', 'a'), ('d', 'c'), ('d', 'e'), ('e', 'c') ]:
G.insert_edge(u,v)
for edge in G.edge_iterator():
print("{} --{}--> {}".format(edge[0],
edge[1],
edge[2]))
G.insert_node('f')
print("\nG has {} nodes:".format(len(G)))
for node in G.node_iterator():
print("{}".format(node), end= " ")
print("")
print(G)
print("Nodes adjacent to 'd': {}".format(G.adj('d')))
print("\nNodes adjacent to 'c': {}".format(G.adj('c')))
for node in G.V():
    pass #do something with the node
for u in G.V():
    #for all starting nodes u
    for v in G.adj(u):
        pass #for all ending nodes v, do something with (u,v)
for node in G.node_iterator():
    pass #do something with the node
for edge in G.edge_iterator():
    pass #do something with the edge
#######
## WARNING WRONG CODE!!!!
#######
from collections import deque
def BFS(node):
Q = deque()
if node != None:
Q.append(node)
while len(Q) > 0:
curNode = Q.popleft()
if curNode != None:
print("{}".format(curNode))
for v in G.adj(curNode):
Q.append(v)
```
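The BFS above never marks nodes as visited, so any cycle in the graph makes it loop forever, and nodes reachable along several paths get enqueued and printed repeatedly. A corrected sketch (self-contained, over a plain adjacency dict rather than the `Graph` class, so it can be run on its own):

```python
from collections import deque

def bfs(adj, start):
    """Breadth-first search over an adjacency dict {node: [neighbours]}.
    The `visited` set guards against cycles and repeated paths."""
    order = []
    visited = {start}
    Q = deque([start])
    while Q:
        cur = Q.popleft()
        order.append(cur)
        for v in adj.get(cur, []):
            if v not in visited:
                visited.add(v)   # mark when enqueued, not when dequeued
                Q.append(v)
    return order

# a graph with a cycle a -> b -> c -> a; the naive version never terminates
adj = {"a": ["b"], "b": ["c"], "c": ["a", "d"], "d": []}
print(bfs(adj, "a"))  # ['a', 'b', 'c', 'd']
```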
## BFS search
```
"CODE NOT SHOWN"
#Drawing a graph in pygraphviz
import pygraphviz as pgv
G=pgv.AGraph(directed=True)
#for u in 'abcdefghkj':
# G.add_node(u, color = 'black')
#for u,v,c in [('a', 'c', 'black'), ('a', 'f','red'), ('a', 'e','black'), ('c', 'b','black'), ('c', 'd','black'),
# ('b', 'f','black'), ('d','f','black'),('d','g','black'), ('f','g','red'),('g','j', 'red'),
# ('e','h','black'), ('h','j','black'), ('k','l','black'), ('d', 'b','black'), ('j','a','blue'),
# ('g','b','black'), ('j','d','black')]:
#for u, v,c in [('a', 'b','black'), ('b', 'a','black'), ('b', 'c','black'), ('c', 'b','black'), ('c', 'd','black'),
# ('d', 'c','black'), ('d','b','black'),('b','d','black'), ('d','a','black'),('a','d','black'), ('e','g','black'),
# ('g','e','black'), ('e','f','black'), ('f', 'e','black'), ('f','h','black'), ('h','f','black'), ('h','g','black'),
# ('g','h','black'),('h','i','black'),('i','h','black'),('f','i','black'),('i','f','black'), ('j','k','black'),('k','j','black')]:
for u,v,c in [('a','b', 'black'), ('a','c','black'), ('a','e', 'black'),
('c','e','black'), ('b','d','black')]:#, ('e','b', 'black')]:
G.add_edge(u, v, color=c)
print(u, v, c)
# write to a dot file
#G.write('test.dot')
#create a png file
G.layout(prog='fdp') # use the fdp layout engine
G.draw('test_top_sort.png')
```
### DFS iterative in post-order
```
%reset -s -f
from collections import deque
import math
class Graph:
# initializer, nodes are private!
def __init__(self):
self.__nodes = dict()
#returns the size of the Graph
#accessible through len(Graph)
def __len__(self):
return len(self.__nodes)
#returns the nodes
def V(self):
return self.__nodes.keys()
#a generator of nodes to access all of them
#once (not a very useful example!)
def node_iterator(self):
for n in self.__nodes.keys():
yield n
#a generator of edges (as triplets (u,v,w)) to access all of them
def edge_iterator(self):
for u in self.__nodes:
for v in self.__nodes[u]:
yield (u,v,self.__nodes[u][v])
#returns all the adjacent nodes of node
#as a dictionary with key as the other node
#and value the weight
def adj(self,node):
if node in self.__nodes.keys():
return self.__nodes[node]
#adds the node to the graph
def insert_node(self, node):
if node not in self.__nodes:
self.__nodes[node] = dict()
#adds the edge startN --> endN with weight w
#that has 0 as default
def insert_edge(self, startN, endN, w = 0):
#does nothing if already in
self.insert_node(startN)
self.insert_node(endN)
self.__nodes[startN][endN] = w
#converts the graph into a string
def __str__(self):
out_str = "Nodes:\n" + ",".join(self.__nodes)
out_str +="\nEdges:\n"
for u in self.__nodes:
for v in self.__nodes[u]:
out_str +="{} --{}--> {}\n".format(u,self.__nodes[u][v],v )
if len(self.__nodes[u]) == 0:
out_str +="{}\n".format(u)
return out_str
def DFS_rec(self, node, visited):
visited.add(node)
## visit node (preorder)
print("visiting: {}".format(node))
for u in self.adj(node):
if u not in visited:
self.DFS_rec(u, visited)
##visit node (post-order)
def DFS(self, root):
#stack implemented as deque
S = deque()
S.append(root)
visited = set()
while len(S) > 0:
node = S.pop()
if not node in visited:
#visit node in preorder
print("visiting {}".format(node))
visited.add(node)
for n in self.adj(node):
#visit edge (node,n)
S.append(n)
# Idea:
# when we find a node we add it to the stack with tag "discovery"
# if extracted with tag discovery, it is pushed back with tag "finish" and all its neighbors
# are added
# When it is extracted with tag finish the post visit is done
def DFS_postorder(self, root):
#stack implemented as deque
S = deque()
S.append((root, "discovery"))
visited = set()
discovered = set()
discovered.add(root)
cnt = 0
while len(S) > 0:
node,tag = S.pop()
#print("{} {}".format(node, tag))
if not node in visited:
if tag == "discovery":
S.append((node, "finished"))
for n in self.adj(node):
if n not in discovered:
S.append((n, "discovery"))
discovered.add(n)
else:
#visit the node in postorder:
visited.add(node)
print("visiting {}".format(node))
if __name__ == "__main__":
G2 = Graph()
for u, v in [('a', 'c'), ('a', 'f'), ('a', 'e'), ('c', 'b'), ('c', 'd'),
('b', 'f'), ('d','f'),('d','g'), ('f','g'),('g','j'), ('e','h'),
('h','j'), ('k','l'), ('d', 'b'), ('j','a'), ('g','b'), ('j','d')]:
G2.insert_edge(u,v)
G2.DFS('a')
print("\nRecursive DFS from {}:".format('a'))
visited = set()
G2.DFS_rec('a', visited)
print("\nPostorder:")
G2.DFS_postorder('a')
```




```
%reset -s -f
from collections import deque
import math
class Graph:
# initializer, nodes are private!
def __init__(self):
self.__nodes = dict()
#returns the size of the Graph
#accessible through len(Graph)
def __len__(self):
return len(self.__nodes)
#returns the nodes
def V(self):
return self.__nodes.keys()
#a generator of nodes to access all of them
#once (not a very useful example!)
def node_iterator(self):
for n in self.__nodes.keys():
yield n
#a generator of edges (as triplets (u,v,w)) to access all of them
def edge_iterator(self):
for u in self.__nodes:
for v in self.__nodes[u]:
yield (u,v,self.__nodes[u][v])
#returns all the adjacent nodes of node
#as a dictionary with key as the other node
#and value the weight
def adj(self,node):
if node in self.__nodes.keys():
return self.__nodes[node]
#adds the node to the graph
def insert_node(self, node):
if node not in self.__nodes:
self.__nodes[node] = dict()
#adds the edge startN --> endN with weight w
#that has 0 as default
def insert_edge(self, startN, endN, w = 0):
#does nothing if already in
self.insert_node(startN)
self.insert_node(endN)
self.__nodes[startN][endN] = w
#converts the graph into a string
def __str__(self):
out_str = "Nodes:\n" + ",".join(self.__nodes)
out_str +="\nEdges:\n"
for u in self.__nodes:
for v in self.__nodes[u]:
out_str +="{} --{}--> {}\n".format(u,self.__nodes[u][v],v )
if len(self.__nodes[u]) == 0:
out_str +="{}\n".format(u)
return out_str
def BFS(self, node):
Q = deque()
Q.append(node)
visited = set()
visited.add(node)
print("visiting: {}".format(node))
while len(Q) > 0:
curNode = Q.popleft()
for n in self.adj(curNode):
if n not in visited:
Q.append(n)
visited.add(n)
print("visiting: {}".format(n))
#print("visited: {}".format(visited))
#print("Q: {}".format(list(Q)))
#computes the distance from root of all nodes
def get_distance(self, root):
distances = dict()
parents = dict()
for node in self.node_iterator():
distances[node] = math.inf
parents[node] = -1
Q = deque()
Q.append(root)
distances[root] = 0
parents[root] = root
while len(Q) > 0:
curNode = Q.popleft()
for n in self.adj(curNode):
if distances[n] == math.inf:
distances[n] = distances[curNode] + 1
parents[n] = curNode
Q.append(n)
return (distances,parents)
def get_shortest_path(self, start, end):
#your courtesy
#returns [start, node,.., end]
#if shortest path is start --> node --> ... --> end
D_s_e,P_s_e = self.get_distance(start)
D_e_s, P_e_s = self.get_distance(end)
P = []
s = None
e = None
if D_s_e[end] > D_e_s[start]:
P = P_e_s
s = end
e = start
else:
P = P_s_e
s = start
e = end
outPath = str(e)
#this assumes all the nodes are in the
#parents structure
curN = e
while curN != s and curN != -1:
curN = P[curN]
outPath = str(curN) + " --> " + outPath
if str(curN) != s:
return "Not available"
return outPath
def DFS_rec(self, node, visited):
visited.add(node)
## visit node (preorder)
print("visiting: {}".format(node))
for u in self.adj(node):
if u not in visited:
self.DFS_rec(u, visited)
##visit node (post-order)
def DFS(self, root):
#stack implemented as deque
S = deque()
S.append(root)
visited = set()
while len(S) > 0:
node = S.pop()
if not node in visited:
#visit node in preorder
print("visiting {}".format(node))
visited.add(node)
for n in self.adj(node):
#visit edge (node,n)
S.append(n)
def printPath(startN, endN, parents):
outPath = str(endN)
#this assumes all the nodes are in the
#parents structure
curN = endN
while curN != startN and curN != -1:
curN = parents[curN]
outPath = str(curN) + " --> " + outPath
if str(curN) != startN:
return "Not available"
return outPath
def cc(G):
ids = dict()
for node in G.node_iterator():
ids[node] = 0
counter = 0
for u in G.node_iterator():
if ids[u] == 0:
counter += 1
ccdfs(G, counter, u, ids)
return (counter, ids)
def ccdfs(G, counter, u, ids):
ids[u] = counter
for v in G.adj(u):
if ids[v] == 0:
ccdfs(G, counter, v, ids)
if __name__ == "__main__":
G = Graph()
for u,v in [ ('a', 'b'), ('a', 'd'), ('b', 'c'),
('d', 'a'), ('d', 'c'), ('d', 'e'), ('e', 'c') ]:
G.insert_edge(u,v)
for edge in G.edge_iterator():
print("{} --{}--> {}".format(edge[0],
edge[1],
edge[2]))
G.insert_node('f')
print("\nG has {} nodes:".format(len(G)))
for node in G.node_iterator():
print("{}".format(node), end= " ")
print()
print(G)
G1 = Graph()
for u, v in [('a', 'c'), ('a', 'f'), ('a', 'e'), ('c', 'b'), ('c', 'd'),
('b', 'f'), ('d','f'),('d','g'), ('f','g'),('g','j'), ('e','h'),
('h','j'), ('k','l')]:
G1.insert_edge(u,v)
print("BFS from {}".format('a'))
G1.BFS('a')
G2 = Graph()
for u, v in [('a', 'c'), ('a', 'f'), ('a', 'e'), ('c', 'b'), ('c', 'd'),
('b', 'f'), ('d','f'),('d','g'), ('f','g'),('g','j'), ('e','h'),
('h','j'), ('k','l'), ('d', 'b'), ('j','a'), ('g','b'), ('j','d')]:
G2.insert_edge(u,v)
D, P = G2.get_distance('b')
print("Distances from 'b': {}".format(D))
print("All parents: {}".format(P))
print("Path from 'b' to 'c': {}".format(printPath('b','c', P)))
D, P = G2.get_distance('a')
print("Distances from 'a': {}".format(D))
print("All parents: {}".format(P))
print("Path from 'a' to 'j': {}".format(printPath('a','j', P)))
print("Path from 'a' to 'k': {}".format(printPath('a','k', P)))
print("Path from 'a' to 'f': {}".format(printPath('a','f', P)))
print("Path from 'a' to 'h': {}".format(printPath('a','h', P)))
sp = G2.get_shortest_path('a','j')
print("Shortest path from 'a' to 'j': {}".format(sp))
print("DFS from a:")
G2.DFS('a')
print("DFS from b:")
G2.DFS('b')
myG = Graph()
for u, v in [('a', 'b'), ('b', 'a'), ('b', 'c'), ('c', 'b'), ('c', 'd'),
('d', 'c'), ('d','b'),('b','d'), ('d','a'),('a','d'), ('e','g'),
('g','e'), ('e','f'), ('f', 'e'), ('f','h'), ('h','f'), ('h','g'),
('g','h'),('h','i'),('i','h'),('f','i'),('i','f'), ('j','k'),('k','j')]:
myG.insert_edge(u,v)
N, con_comp = cc(myG)
print("{} connected components:\n{}".format(N,con_comp))
from collections import deque
import math
class Graph:
"""..."""
#computes the distance from root of all nodes
def get_distance(self, root):
distances = dict()
parents = dict()
for node in self.node_iterator():
distances[node] = math.inf
parents[node] = -1
Q = deque()
Q.append(root)
distances[root] = 0
parents[root] = root
while len(Q) > 0:
curNode = Q.popleft()
for n in self.adj(curNode):
if distances[n] == math.inf:
distances[n] = distances[curNode] + 1
parents[n] = curNode
Q.append(n)
return (distances,parents)
```
### Cycle detection (undirected graphs)
The recursive visit performs a DFS and checks, for each node, whether it connects back to an already visited node, which would form a cycle. At each call we remember the node we came from, so that the edge back to the parent is not mistaken for a trivial cycle.
```
def has_cycleRec(G, u, from_node, visited):
visited.add(u)
for v in G.adj(u):
if v != from_node: #to avoid trivial cycles
if v in visited:
return True
else:
#continue with the visit to check
#if there are cycles
if has_cycleRec(G,v, u, visited):
return True
return False
def has_cycle(G):
visited = set()
#I am starting the visit from all nodes
for node in G.node_iterator():
if node not in visited:
if has_cycleRec(G, node, None, visited):
return True
return False
myG = Graph()
for u, v in [('a', 'b'), ('b', 'a'), ('b', 'c'), ('c', 'b'), ('c', 'd'),
('d', 'c'), ('c','e'),('e','c'), ('d','a'),('a','d'), ('e','d'),
('d','e')]:
myG.insert_edge(u,v)
print(has_cycle(myG))
myG = Graph()
for u, v in [('a', 'b'), ('b', 'a'), ('b', 'c'), ('c', 'b'), ('c', 'd'),
('d', 'c'), ('c','e'),('e','c'), ('e','d'),
('d','e')]:
myG.insert_edge(u,v)
print(has_cycle(myG))
myG = Graph()
for u, v in [('a', 'b'), ('b', 'a'), ('b', 'c'), ('c', 'b'),
('c','e'),('e','c'), ('e','d'),
('d','e')]:
myG.insert_edge(u,v)
print(has_cycle(myG))
```
## DFS schema
```
clock = 0
def dfs_schema(G, node, dt, ft):
#clock: visit time (global variable)
#dt: discovery time
#ft: finish time
global clock
clock += 1
dt[node] = clock
print("Start time {}: {}".format(node, clock))
for v in G.adj(node):
if dt[v] == 0:
#DFS VISIT edge
#visit the edge (node,v)
print("\tDFS edge: {} --> {}".format(node, v))
dfs_schema(G,v, dt, ft)
elif dt[node] > dt[v] and ft[v] == 0:
#BACK EDGE
#visit the back edge (node,v)
print("\tBack edge: {}--> {}".format(node,v))
elif dt[node] < dt[v] and ft[v] != 0:
#FORWARD EDGE
#visit the forward edge (node,v)
print("\tForward edge: {}--> {}".format(node,v))
else:
#CROSS EDGE
print("\tCross edge: {} --> {}".format(node,v))
clock += 1
ft[node] = clock
print("Finish time {}: {}".format(node,clock))
return dt,ft
G = Graph()
for u,v,c in [('a','b', 'black'), ('a','c','black'), ('a','d', 'black'),
('d','a','black'), ('d','b','black'), ('b','c', 'black'),
('e','c','black')]:
print(u,v,c)
G.insert_edge(u,v)
dt = dict()
df = dict()
for node in G.node_iterator():
dt[node] = 0
df[node] = 0
#print(G)
#clock = 0
s,e = dfs_schema(G,'a', dt, df)
s,e = dfs_schema(G,'e', dt, df)
print("Discovery times:{}".format(s))
print("Finish times: {}".format(e))
```
## Cycle check in directed graphs
```
def detect_cycle(G):
dt = dict()
ft = dict()
global clock
def has_cycle(G, node, dt, ft):
#clock: visit time (global variable)
#dt: discovery time
#ft: finish time
global clock
clock += 1
dt[node] = clock
for v in G.adj(node):
if dt[v] == 0:
#DFS VISIT edge
if has_cycle(G,v, dt, ft):
return True
elif dt[node] > dt[v] and ft[v] == 0:
#BACK EDGE
#CYCLE FOUND!!!!
print("Back edge: {} --> {}".format(node,v))
return True
## Note we are not interested
## in forward and cross edges
clock += 1
ft[node] = clock
return False
for node in G.node_iterator():
dt[node] = 0
ft[node] = 0
clock = 1
for u in G.node_iterator():
if ft[u] == 0:
if has_cycle(G,u, dt, ft):
return True
return False
G = Graph()
for u,v,c in [('a','b', 'black'), ('a','c','black'), ('a','d', 'black'),
('d','a','black'), ('d','b','black'), ('b','c', 'black'),
('e','c','black')]:
print(u,v,c)
G.insert_edge(u,v)
print(G)
print("Does G have a cycle? {}".format(detect_cycle(G)))
G = Graph()
for u,v,c in [('a','b', 'black'), ('b','c','black'), ('a','c', 'black')]:
print(u,v,c)
G.insert_edge(u,v)
print(G)
print("Does G have a cycle? {}".format(detect_cycle(G)))
G = Graph()
for u,v,c in [('a','b', 'black'), ('b','c','black'), ('c','a', 'black')]:
print(u,v,c)
G.insert_edge(u,v)
print(G)
print("Does G have a cycle? {}".format(detect_cycle(G)))
```
## Topological sort of a DAG
Idea: perform a DFS visit and when the visit of a node is finished (post-order) add the node to a stack. The stack at the end contains the nodes in one of the possible topological orders.
```
class Stack:
# initializer, the inner structure is a list
# data is added at the end of the list
# for speed
def __init__(self):
self.__data = []
# returns the length of the stack (size)
def __len__(self):
return len(self.__data)
# returns True if stack is empty
def isEmpty(self):
return len(self.__data) == 0
# returns the last inserted item of the stack
# and shrinks the stack
def pop(self):
if len(self.__data) > 0:
return self.__data.pop()
# returns the last inserted element without
# removing it (None if empty)
def peek(self):
if len(self.__data) > 0:
return self.__data[-1]
else:
return None
# adds an element to the stack
def push(self, item):
self.__data.append(item)
# transforms the Stack into a string
def __str__(self):
if len(self.__data) == 0:
return "Stack([])"
else:
out = "Stack(" + str(self.__data[-1])
for i in range(len(self.__data) -2,-1, -1):
out += " | " + str(self.__data[i])
out += ")"
return out
def top_sort(G):
S = Stack()
visited = set()
for u in G.node_iterator():
if u not in visited:
top_sortRec(G, u, visited, S)
return S
def top_sortRec(G, u, visited, S):
visited.add(u)
for v in G.adj(u):
if v not in visited:
top_sortRec(G,v,visited,S)
S.push(u)
G = Graph()
for u,v,c in [('a','c','black'), ('a','b', 'black'), ('c','e','black'), ('a','e', 'black'),
('b','d','black')]:
G.insert_edge(u,v)
print(top_sort(G))
G = Graph()
for u,v,c in [('a','b', 'black'), ('a','c','black'), ('a','e', 'black'),
('c','e','black'), ('b','d','black'), ('e','b', 'black')]:
G.insert_edge(u,v)
print(top_sort(G))
```
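As a cross-check of the idea above, Python's standard library (3.9+) ships `graphlib.TopologicalSorter`. Note that it expects a mapping from each node to its *predecessors*, so the edges must be inverted first:

```python
from graphlib import TopologicalSorter

edges = [('a', 'c'), ('a', 'b'), ('c', 'e'), ('a', 'e'), ('b', 'd')]

# build node -> set of predecessors (graphlib's expected input)
preds = {}
for u, v in edges:
    preds.setdefault(u, set())
    preds.setdefault(v, set()).add(u)

order = list(TopologicalSorter(preds).static_order())
pos = {node: i for i, node in enumerate(order)}

# every edge u --> v must place u before v
assert all(pos[u] < pos[v] for u, v in edges)
print(order)
```

`TopologicalSorter` raises `graphlib.CycleError` when the input is not a DAG, so it can double as a cycle check for directed graphs.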
## Strongly connected components (SCC)
```
class Stack:
# initializer, the inner structure is a list
# data is added at the end of the list
# for speed
def __init__(self):
self.__data = []
# returns the length of the stack (size)
def __len__(self):
return len(self.__data)
# returns True if stack is empty
def isEmpty(self):
return len(self.__data) == 0
# returns the last inserted item of the stack
# and shrinks the stack
def pop(self):
if len(self.__data) > 0:
return self.__data.pop()
# returns the last inserted element without
# removing it (None if empty)
def peek(self):
if len(self.__data) > 0:
return self.__data[-1]
else:
return None
# adds an element to the stack
def push(self, item):
self.__data.append(item)
# transforms the Stack into a string
def __str__(self):
if len(self.__data) == 0:
return "Stack([])"
else:
out = "Stack(" + str(self.__data[-1])
for i in range(len(self.__data) -2,-1, -1):
out += " | " + str(self.__data[i])
out += ")"
return out
def top_sort(G):
S = Stack()
visited = set()
for u in G.node_iterator():
if u not in visited:
top_sortRec(G, u, visited, S)
return S
def top_sortRec(G, u, visited, S):
visited.add(u)
for v in G.adj(u):
if v not in visited:
top_sortRec(G,v,visited,S)
S.push(u)
def scc(G):
#performs a topological sort of G
S = top_sort(G)
print(S)
#Transposes G
GT = transpose(G)
#modified version of CC algo that
#gets starting nodes off the stack S
counter, ids = cc(GT,S)
return (counter,ids)
def transpose(G):
tmpG = Graph()
for u in G.node_iterator():
for v in G.adj(u):
tmpG.insert_edge(v,u)
return tmpG
def cc(G, S):
ids = dict()
for node in G.node_iterator():
ids[node] = 0
counter = 0
while len(S) > 0:
u = S.pop()
if ids[u] == 0:
counter += 1
ccdfs(G, counter, u, ids)
return (counter, ids)
def ccdfs(G, counter, u, ids):
ids[u] = counter
for v in G.adj(u):
if ids[v] == 0:
ccdfs(G, counter, v, ids)
G = Graph()
for u,v,c in [('a','b', 'black'), ('b','c','black'), ('a','d', 'black'),
('c','e','black'), ('d','c','black'), ('e','d', 'black'),
('e','f','black'), ('f','c','black')]:
G.insert_edge(u,v)
print(G)
c,i = scc(G)
print("Components: {}\nIds:{}".format(c,i))
G1 = Graph()
for u,v,c in [('a','b', 'black'), ('b','c','black'), ('a','d', 'black'),
('c','e','black'), ('d','c','black'), ('e','d', 'black'),
('e','f','black'), ('f','c','black'), ('f','g','black')]:
G1.insert_edge(u,v)
print(G1)
c,i = scc(G1)
print("Components: {}\nIds:{}".format(c,i))
```
## Complexity of visits
Complexity: $O(n+m)$
* every node is inserted in the queue at most once;
* whenever a node is extracted all its edges are analyzed once and only once;
* number of edges analyzed: $$m = \sum_{u \in V} out\_degree(u)$$
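A quick illustration of the last point on a small adjacency dict (illustrative only, independent of the `Graph` class above):

```python
# Each edge is stored exactly once, in the adjacency set of its source node,
# so summing the out-degrees counts every edge exactly once: m = sum of out-degrees.
adj = {'a': {'b', 'd'}, 'b': {'c'}, 'd': {'a', 'c', 'e'}, 'e': {'c'},
       'c': set(), 'f': set()}

m = sum(len(neighbors) for neighbors in adj.values())
edge_list = [(u, v) for u in adj for v in adj[u]]
print(m, len(edge_list))  # both 7: the visit touches each edge once
```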
# NMF Analysis
Performs a simple tf-idf vectorization of the question pairs and an NMF dimensionality reduction to calculate the cosine similarity of each question pair. The goal of the analysis is to see whether pairs labeled as duplicates have a distinctly different cosine similarity distribution than pairs marked as not duplicates.
```
# data manipulation
from utils import save, load
import pandas as pd
import numpy as np
# modeling
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics.pairwise import cosine_similarity
# visualization
import matplotlib.pyplot as plt
import seaborn as sns
# You can configure the format of the images: ‘png’, ‘retina’, ‘jpeg’, ‘svg’, ‘pdf’.
%config InlineBackend.figure_format = 'svg'
# this statement allows the visuals to render within your Jupyter Notebook
%matplotlib inline
X_train = load('X_train')
y_train = load('y_train')
train_df = pd.DataFrame(np.concatenate([X_train, y_train.reshape(-1, 1)], axis=1))
train_df = train_df.rename(columns={0:'id', 1:'question1', 2:'question2', 3:'is_duplicate'})
train_df.head()
```
Let's make a stack of questions maintaining the `id` of the question pair.
```
question_df = train_df.loc[:, ['id', 'question1']]
question_df = pd.concat([question_df, train_df.loc[:, ['id', 'question2']]], sort=False)
question_df.loc[question_df['question1'].isna(), 'question1'] = question_df.loc[question_df['question1'].isna(), 'question2']
question_df = question_df.drop(columns='question2')
question_df = question_df.sort_values('id')
question_df.head(6)
```
Let's now calculate the tf-idf term matrix.
```
tf = TfidfVectorizer(stop_words='english', token_pattern='\\b[a-zA-Z0-9][a-zA-Z0-9]+\\b')
question_tf = tf.fit_transform(question_df['question1'])
# first 10 terms
tf.get_feature_names()[:10]
# last 10 terms
tf.get_feature_names()[-10:]
# total terms
len(tf.get_feature_names())
```
Lots of terms, and some cleanup will probably be needed given the numbers mixed in.
Let's now reduce the 74,795-term matrix using NMF.
```
def calc_NMF_sim(n_components, col_name, tf_df, df):
nmf = NMF(n_components=n_components)
nmf_topics = nmf.fit_transform(tf_df)
odd_idx = [i for i in range(nmf_topics.shape[0]) if i % 2 == 1]
even_idx = [i for i in range(nmf_topics.shape[0]) if i % 2 == 0]
sim_list = [cosine_similarity(
nmf_topics[odd_idx[i]].reshape(1,-1),
nmf_topics[even_idx[i]].reshape(1,-1)
)[0,0]
for i in range(len(odd_idx))]
df = pd.concat([df.sort_values('id'), pd.Series(sim_list)], axis=1)
df = df.rename(columns={0:col_name})
return df
train_df_cosine = calc_NMF_sim(5, 'cos_sim_5', question_tf, train_df.reset_index())
train_df_cosine = calc_NMF_sim(10, 'cos_sim_10', question_tf, train_df_cosine)
train_df_cosine = calc_NMF_sim(50, 'cos_sim_50', question_tf, train_df_cosine)
train_df_cosine = calc_NMF_sim(100, 'cos_sim_100', question_tf, train_df_cosine)
train_df_cosine.head()
```
We calculated the cosine similarity for the 5-, 10-, 50-, and 100-dimensional NMF reductions. Let's now plot the distribution for the duplicate pairs and the non-duplicate pairs. The goal is to see if there is a natural division based purely on the cosine similarity between the pair of questions.
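For reference, the cosine similarity used here is just the normalized dot product of the two topic vectors; a minimal pure-Python version of what `sklearn.metrics.pairwise.cosine_similarity` computes for a single pair:

```python
import math

def cosine_sim(u, v):
    """Cosine similarity of two equal-length vectors (0.0 if either is all zeros)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

print(cosine_sim([1, 0, 1], [1, 0, 1]))  # 1.0: identical topic mixtures
print(cosine_sim([1, 0], [0, 1]))        # 0.0: no shared topics
```

A pair of questions whose NMF topic vectors share no non-zero components therefore gets a similarity of exactly 0.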
## This seems off!!!
Need to figure out why there is so much overlap now. Let's ignore for now and see if the MVP model suffers the same.
```
cols = ['cos_sim_5', 'cos_sim_10', 'cos_sim_50', 'cos_sim_100']
plt.figure(figsize=(10,10))
for i in range(4):
plt.subplot(2, 2, i+1)
sns.kdeplot(train_df_cosine.loc[train_df_cosine['is_duplicate'] == 0, cols[i]],
shade=True,
label = 'No Intent',
color = 'red')
sns.kdeplot(train_df_cosine.loc[train_df_cosine['is_duplicate'] == 1, cols[i]],
shade=True,
label = 'Intent',
color = 'green')
plt.title(cols[i])
plt.ylim(top=25)
# plt.xlabel('cosine similarity')
# plt.ylabel('density')
plt.suptitle('KDE comparing pairs with intent and no intent');
```
More of the duplicate pairs have a higher cosine similarity compared to the non-duplicate pairs. However, there is also significant overlap, which means finding the decision boundary will be difficult.
Let's take a look at the set of pairs marked as duplicates with a 0 cosine similarity with the NMF 100 transformation.
```
train_df_cosine[(train_df_cosine['is_duplicate'] == 1) & (train_df_cosine['cos_sim_100'] == 0)]
```
The first example is very confusing. This may mean that the tf-idf calculation with default parameters is incorrect, or that cosine similarity is not the best metric. The next step would be to build a classification model using NMF or LDA topics for the pair of questions to predict whether or not the pair has the same intent.
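A hedged sketch of that next step on toy stand-in data (in the notebook, the features would come from the `cos_sim_*` columns of `train_df_cosine` and the label from `is_duplicate`; the numbers below are made up for illustration only):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# toy stand-in: one cosine-similarity feature per pair, label 1 = duplicate
X = np.array([[0.9], [0.8], [0.85], [0.95], [0.1], [0.2], [0.05], [0.15]])
y = np.array([1, 1, 1, 1, 0, 0, 0, 0])

clf = LogisticRegression().fit(X, y)
print(clf.predict([[0.9], [0.1]]))  # high similarity -> duplicate, low -> not
```

With real data the feature matrix would hold all four similarity columns (and later, NMF/LDA topic vectors themselves) rather than a single column.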
```
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from marshmallow import Schema, fields, post_load
# pip install marshmallow
# SQLite
import sqlite3
conn = sqlite3.connect("db/nba.sqlite")
c = conn.cursor()
# preview data
c.execute('SELECT * FROM PLAYERS')
all_rows = c.fetchall()
print(all_rows)
# to df
df = pd.read_sql('SELECT * FROM PLAYERS', conn)
df = df[['salary_2019to2020','age', 'pts','reb','ast', 'plusminus']]
df.head()
df = df.rename(columns={"salary_2019to2020":"y","age":"X1", "pts":"X2","reb":"X3","ast":"X4", "plusminus":"X5"})
df.head()
X = df.drop(columns=['y'])
y = df['y']
print(X.shape, y.shape)
X.shape
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
model = LinearRegression(normalize=False)
model.fit(X_train, y_train)
training_score = model.score(X_train, y_train)
testing_score = model.score(X_test, y_test)
print(f"Training Score: {training_score}")
print(f"Testing Score: {testing_score}")
model.coef_
score = model.score(X, y)
print(f"R2 Score: {score}")
plt.scatter(model.predict(X_train), model.predict(X_train) - y_train, c="blue", label="Training Data")
plt.scatter(model.predict(X_test), model.predict(X_test) - y_test, c="orange", label="Testing Data")
plt.legend()
plt.hlines(y=0, xmin=y.min(), xmax=y.max())
plt.title("Residual Plot")
#initial prediction- to test
model.predict(np.array([34,10.8,6.5,1.2,-1.2]).reshape(1, -1))
class Player:
    def __init__(self, age, pts, reb, ast, plm):
        self.age = age
        self.pts = pts
        self.reb = reb
        self.ast = ast
        self.plm = plm
    def __repr__(self):
        return f'Your player is {self.age} years old and averages {self.pts} points, {self.reb} rebounds, {self.ast} assists, and has a plus-minus of {self.plm}.'

class PlayerSchema(Schema):
    age = fields.Integer()
    pts = fields.Integer()
    reb = fields.Integer()
    ast = fields.Integer()
    plm = fields.Integer()
    @post_load
    def create_player(self, data, **kwargs):
        return Player(**data)
input_data = {}
input_data['age'] = int(input("Enter your age: "))
input_data['pts'] = int(input("Enter your average points per game: "))
input_data['reb'] = int(input("Enter your average rebounds per game: "))
input_data['ast'] = int(input("Enter your average assists per game: "))
input_data['plm'] = int(input("Enter your plus-minus: "))
schema = PlayerSchema()
player = schema.load(input_data)
print(player)
# result = schema.dump(player)
# print(result)
prediction = model.predict(np.array([input_data['age'], input_data['pts'], input_data['reb'], input_data['ast'], input_data['plm']]).reshape(1,-1))
salary = prediction.astype('int32')
print(f"The player's salary prediction is ${salary[0]:,}")
```
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Save and load a model using a distribution strategy
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://tensorflow.google.cn/tutorials/distribute/save_and_load" class=""><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/distribute/save_and_load.ipynb" class=""><img src="https://tensorflow.google.cn/images/colab_logo_32px.png" class="">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/distribute/save_and_load.ipynb" class=""><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png" class="">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/tutorials/distribute/save_and_load.ipynb" class=""><img src="https://tensorflow.google.cn/images/download_logo_32px.png" class="">Download notebook</a></td>
</table>
## Overview
It's common to save and load a model during training. There are two sets of APIs for saving and loading a Keras model: a high-level API and a low-level API. This tutorial demonstrates how you can use the SavedModel APIs when using `tf.distribute.Strategy`. To learn about SavedModel and serialization in general, please read the [saved model guide](../../guide/saved_model.ipynb) and the [Keras model serialization guide](../../guide/keras/save_and_serialize.ipynb). Let's start with a simple example:
Import dependencies:
```
import tensorflow_datasets as tfds
import tensorflow as tf
tfds.disable_progress_bar()
```
Prepare the data and model using `tf.distribute.Strategy`:
```
mirrored_strategy = tf.distribute.MirroredStrategy()
def get_data():
datasets, ds_info = tfds.load(name='mnist', with_info=True, as_supervised=True)
mnist_train, mnist_test = datasets['train'], datasets['test']
BUFFER_SIZE = 10000
BATCH_SIZE_PER_REPLICA = 64
BATCH_SIZE = BATCH_SIZE_PER_REPLICA * mirrored_strategy.num_replicas_in_sync
def scale(image, label):
image = tf.cast(image, tf.float32)
image /= 255
return image, label
train_dataset = mnist_train.map(scale).cache().shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
eval_dataset = mnist_test.map(scale).batch(BATCH_SIZE)
return train_dataset, eval_dataset
def get_model():
with mirrored_strategy.scope():
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(10)
])
model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.Adam(),
metrics=['accuracy'])
return model
```
Train the model:
```
model = get_model()
train_dataset, eval_dataset = get_data()
model.fit(train_dataset, epochs=2)
```
## Save and load the model
Now that you have a simple model to work with, let's take a look at the saving/loading APIs. There are two sets of APIs available:
- High-level Keras `model.save` and `tf.keras.models.load_model`
- Low-level `tf.saved_model.save` and `tf.saved_model.load`
### The Keras APIs
Here is an example of saving and loading a model with the Keras APIs:
```
keras_model_path = "/tmp/keras_save"
model.save(keras_model_path) # save() should be called out of strategy scope
```
Restore the model without `tf.distribute.Strategy`:
```
restored_keras_model = tf.keras.models.load_model(keras_model_path)
restored_keras_model.fit(train_dataset, epochs=2)
```
After restoring the model, you can continue training on it, even without needing to call `compile()` again, since it was already compiled before saving. The model is saved in TensorFlow's standard `SavedModel` proto format. For more information, please refer to the [`saved_model` format guide](../../guide/saved_model.ipynb).
Now load the model and train it using a `tf.distribute.Strategy`:
```
another_strategy = tf.distribute.OneDeviceStrategy("/cpu:0")
with another_strategy.scope():
restored_keras_model_ds = tf.keras.models.load_model(keras_model_path)
restored_keras_model_ds.fit(train_dataset, epochs=2)
```
As you can see, loading works as expected with `tf.distribute.Strategy`. The strategy used here does not have to be the same as the one used before saving.
### The `tf.saved_model` APIs
Now let's take a look at the lower-level APIs. Saving the model is similar to the Keras APIs:
```
model = get_model() # get a fresh model
saved_model_path = "/tmp/tf_save"
tf.saved_model.save(model, saved_model_path)
```
It can be loaded with `tf.saved_model.load()`. However, since this is a lower-level API (and hence has a wider range of use cases), it does not return a Keras model. Instead, it returns an object that contains functions that can be used to do inference. For example:
```
DEFAULT_FUNCTION_KEY = "serving_default"
loaded = tf.saved_model.load(saved_model_path)
inference_func = loaded.signatures[DEFAULT_FUNCTION_KEY]
```
The loaded object may contain multiple functions, each associated with a key. `"serving_default"` is the default key for the inference function of a saved Keras model. To do inference with this function, run:
```
predict_dataset = eval_dataset.map(lambda image, label: image)
for batch in predict_dataset.take(1):
print(inference_func(batch))
```
You can also load and do inference in a distributed manner:
```
another_strategy = tf.distribute.MirroredStrategy()
with another_strategy.scope():
loaded = tf.saved_model.load(saved_model_path)
inference_func = loaded.signatures[DEFAULT_FUNCTION_KEY]
dist_predict_dataset = another_strategy.experimental_distribute_dataset(
predict_dataset)
# Calling the function in a distributed manner
for batch in dist_predict_dataset:
another_strategy.run(inference_func,args=(batch,))
```
Calling the restored function is just a forward pass (prediction) on the saved model. What if you want to continue training the loaded function, or embed it into a bigger model? A common practice is to wrap this loaded object into a Keras layer. Luckily, [TF Hub](https://tensorflow.google.cn/hub) provides [hub.KerasLayer](https://github.com/tensorflow/hub/blob/master/tensorflow_hub/keras_layer.py) for exactly this purpose, shown here:
```
import tensorflow_hub as hub
def build_model(loaded):
x = tf.keras.layers.Input(shape=(28, 28, 1), name='input_x')
# Wrap what's loaded to a KerasLayer
keras_layer = hub.KerasLayer(loaded, trainable=True)(x)
model = tf.keras.Model(x, keras_layer)
return model
another_strategy = tf.distribute.MirroredStrategy()
with another_strategy.scope():
loaded = tf.saved_model.load(saved_model_path)
model = build_model(loaded)
model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.Adam(),
metrics=['accuracy'])
model.fit(train_dataset, epochs=2)
```
As you can see, `hub.KerasLayer` wraps the result loaded back from `tf.saved_model.load()` into a Keras layer that can be used to build another model. This is very useful for transfer learning.
### Which API should I use?
For saving, if you are working with a Keras model, it is almost always recommended to use the Keras `model.save()` API. If what you are saving is not a Keras model, then the lower-level API is your only choice.
For loading, which API you use depends on what you want to get back. If you cannot (or do not want to) get a Keras model, use `tf.saved_model.load()`. Otherwise, use `tf.keras.models.load_model()`. Note that you can only get a Keras model back if you saved a Keras model.
It is possible to mix and match the APIs. You can save a Keras model with `model.save`, and load it as a non-Keras model with the low-level API `tf.saved_model.load`.
```
model = get_model()
# Saving the model using Keras's save() API
model.save(keras_model_path)
another_strategy = tf.distribute.MirroredStrategy()
# Loading the model using lower level API
with another_strategy.scope():
loaded = tf.saved_model.load(keras_model_path)
```
### Caveats
A special case is when your Keras model does not have well-defined inputs. For example, a Sequential model can be created without any input shapes (`Sequential([Dense(3), ...])`). Subclassed models also do not have well-defined inputs after initialization. In this case, you should stick with the lower-level APIs for both saving and loading; otherwise you will get an error.
To check whether your model has well-defined inputs, just check whether `model.inputs` is `None`. If it is not `None`, you are all set. Input shapes are defined automatically when the model is used in `.fit`, `.evaluate`, or `.predict`, or when the model is called directly (`model(inputs)`).
Here is an example:
```
class SubclassedModel(tf.keras.Model):
output_name = 'output_layer'
def __init__(self):
super(SubclassedModel, self).__init__()
self._dense_layer = tf.keras.layers.Dense(
5, dtype=tf.dtypes.float32, name=self.output_name)
def call(self, inputs):
return self._dense_layer(inputs)
my_model = SubclassedModel()
# my_model.save(keras_model_path) # ERROR!
tf.saved_model.save(my_model, saved_model_path)
```
# Multivariate Dependencies Beyond Shannon Information
This is a companion Jupyter notebook to the work *Multivariate Dependencies Beyond Shannon Information* by Ryan G. James and James P. Crutchfield. This worksheet was written by Ryan G. James. It primarily makes use of the ``dit`` package for information theory calculations.
## Basic Imports
We first import basic functionality. Further functionality will be imported as needed.
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from dit import ditParams, Distribution
from dit.distconst import uniform
ditParams['repr.print'] = ditParams['print.exact'] = True
```
## Distributions
Here we define the two distributions to be compared.
```
from dit.example_dists.mdbsi import dyadic, triadic
dists = [('dyadic', dyadic), ('triadic', triadic)]
```
## I-Diagrams and X-Diagrams
Here we construct the I- and X-Diagrams of both distributions. The I-Diagram is constructed by considering how the entropies of each variable interact. The X-Diagram is similar, but considers how the extropies of each variable interact.
```
from dit.profiles import ExtropyPartition, ShannonPartition
def print_partition(dists, partition):
ps = [str(partition(dist)).split('\n') for _, dist in dists ]
print('\t' + '\t\t\t\t'.join(name for name, _ in dists))
for lines in zip(*ps):
print('\t\t'.join(lines))
print_partition(dists, ShannonPartition)
```
Both I-Diagrams are the same. This implies that *no* Shannon measure (entropy, mutual information, conditional mutual information [including the transfer entropy], co-information, etc) can differentiate these patterns of dependency.
```
print_partition(dists, ExtropyPartition)
```
Similarly, the X-Diagrams are identical and so no extropy-based measure can differentiate the distributions.
## Measures of Mutual and Common Information
We now compute several measures of mutual and common information:
```
from prettytable import PrettyTable
from dit.multivariate import (entropy,
coinformation,
total_correlation,
dual_total_correlation,
independent_information,
caekl_mutual_information,
interaction_information,
intrinsic_total_correlation,
gk_common_information,
wyner_common_information,
exact_common_information,
functional_common_information,
mss_common_information,
tse_complexity,
)
from dit.other import (extropy,
disequilibrium,
perplexity,
LMPR_complexity,
renyi_entropy,
tsallis_entropy,
)
def print_table(title, table, dists):
pt = PrettyTable(field_names = [''] + [name for name, _ in table])
for name, _ in table:
pt.float_format[name] = ' 5.{0}'.format(3)
for name, dist in dists:
pt.add_row([name] + [measure(dist) for _, measure in table])
print("\n{}".format(title))
print(pt.get_string())
```
### Entropies
Entropies generally capture the uncertainty contained in a distribution. Here, we compute the Shannon entropy, the Renyi entropy of order 2 (also known as the collision entropy), and the Tsallis entropy of order 2. Though we only compute the order 2 values, any order will produce values identical for both distributions.
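As a pure-Python cross-check of what these measures compute (using base-2 logarithms throughout; note that `dit`'s Tsallis normalization may use a different log-base convention, so this is only an illustrative sketch):

```python
import math

def shannon(pmf):
    """Shannon entropy in bits of a pmf given as a list of probabilities."""
    return -sum(p * math.log2(p) for p in pmf if p > 0)

def renyi(pmf, alpha):
    """Renyi entropy of order alpha (alpha != 1); alpha=2 is the collision entropy."""
    return math.log2(sum(p ** alpha for p in pmf)) / (1 - alpha)

def tsallis(pmf, q):
    """Tsallis entropy of order q (q != 1), here in a base-2 convention."""
    return (1 - sum(p ** q for p in pmf)) / (q - 1)

pmf = [0.5, 0.25, 0.25]
print(shannon(pmf))     # 1.5
print(tsallis(pmf, 2))  # (1 - 0.375) / 1 = 0.625
```

Since both distributions share the same probability mass function, any measure that is a function of the pmf alone, like these three, must necessarily agree on them.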
```
entropies = [('H', entropy),
('Renyi (α=2)', lambda d: renyi_entropy(d, 2)),
('Tsallis (q=2)', lambda d: tsallis_entropy(d, 2)),
]
print_table('Entropies', entropies, dists)
```
The entropies for both distributions are identical. This is not surprising: they have the same probability mass function.
### Mutual Informations
Mutual informations are multivariate generalizations of the standard Shannon mutual information. By far, the most widely used (and often simply assumed to be the only) generalization is the total correlation, sometimes called the multi-information. It is defined as:
$$
T[\mathbf{X}] = \sum H[X_i] - H[\mathbf{X}] = \sum p(\mathbf{x}) \log_2 \frac{p(\mathbf{x})}{p(x_1)p(x_2)\ldots p(x_n)}
$$
Other generalizations exist, though, including the co-information, the dual total correlation, and the CAEKL mutual information.
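Before running the `dit` versions, the defining identity $T[\mathbf{X}] = \sum H[X_i] - H[\mathbf{X}]$ can be sanity-checked with a minimal pure-Python sketch on a uniform XOR distribution (a toy example, not one of the distributions above):

```python
import math
from collections import Counter
from itertools import product

def entropy_bits(pmf):
    """Shannon entropy (bits) of a pmf given as {outcome: probability}."""
    return -sum(p * math.log2(p) for p in pmf.values() if p > 0)

def total_correlation(outcomes):
    """T = sum_i H[X_i] - H[X] for a uniform distribution over distinct outcomes."""
    n = len(outcomes)
    joint = {o: 1 / n for o in outcomes}
    marginal_entropy = 0.0
    for i in range(len(outcomes[0])):
        counts = Counter(o[i] for o in outcomes)
        marginal_entropy += entropy_bits({k: c / n for k, c in counts.items()})
    return marginal_entropy - entropy_bits(joint)

# X3 = X1 XOR X2, uniform over the four consistent outcomes: T = 3 - 2 = 1 bit
xor = [(a, b, a ^ b) for a, b in product([0, 1], repeat=2)]
print(total_correlation(xor))  # 1.0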
```
mutual_informations = [('I', coinformation),
('T', total_correlation),
('B', dual_total_correlation),
('J', caekl_mutual_information),
('II', interaction_information),
]
print_table('Mutual Informations', mutual_informations, dists)
```
The equivalence of all these generalizations is not surprising: Each of them can be defined as a function of the I-diagram, and so must be identical here.
### Common Informations
Common informations are generally defined using an auxiliary random variable which captures some amount of information shared by the variables of interest. For all but the Gács-Körner common information, that shared information is the dual total correlation.
```
common_informations = [('K', gk_common_information),
('C', lambda d: wyner_common_information(d, niter=1, polish=False)),
('G', lambda d: exact_common_information(d, niter=1, polish=False)),
('F', functional_common_information),
('M', mss_common_information),
]
print_table('Common Informations', common_informations, dists)
```
As it turns out, only the Gács-Körner common information, `K`, distinguishes the two.
### Other Measures
Here we list a variety of other information measures.
```
other_measures = [('IMI', lambda d: intrinsic_total_correlation(d, d.rvs[:-1], d.rvs[-1])),
('X', extropy),
('R', independent_information),
('P', perplexity),
('D', disequilibrium),
('LMPR', LMPR_complexity),
('TSE', tse_complexity),
]
print_table('Other Measures', other_measures, dists)
```
Several other measures fail to differentiate our two distributions. For many of these (`X`, `P`, `D`, `LMPR`) this is because they are defined relative to the probability mass function. For the others, it is due to the equality of the I-diagrams. Only the intrinsic mutual information, `IMI`, can distinguish the two.
## Information Profiles
Lastly, we consider several "profiles" of the information.
```
from dit.profiles import *
def plot_profile(dists, profile):
n = len(dists)
plt.figure(figsize=(8*n, 6))
ent = max(entropy(dist) for _, dist in dists)
for i, (name, dist) in enumerate(dists):
ax = plt.subplot(1, n, i+1)
profile(dist).draw(ax=ax)
if profile not in [EntropyTriangle, EntropyTriangle2]:
ax.set_ylim((-0.1, ent + 0.1))
ax.set_title(name)
```
### Complexity Profile
```
plot_profile(dists, ComplexityProfile)
```
Once again, these two profiles are identical due to the I-Diagrams being identical. The complexity profile incorrectly suggests that there is no information at the scale of 3 variables.
### Marginal Utility of Information
```
plot_profile(dists, MUIProfile)
```
The marginal utility of information is based on a linear programming problem with constraints related to values from the I-Diagram, and so here again the two distributions are undifferentiated.
### Connected Informations
```
plot_profile(dists, SchneidmanProfile)
```
The connected informations are based on differences between maximum entropy distributions with differing $k$-way marginal distributions fixed. Here, the two distributions are differentiated.
### Multivariate Entropy Triangle
```
plot_profile(dists, EntropyTriangle)
```
Both distributions are at an identical location in the multivariate entropy triangle.
## Partial Information
We next consider a variety of partial information decompositions.
```
from dit.pid.helpers import compare_measures
for name, dist in dists:
compare_measures(dist, name=name)
```
Here we see that the PID determines that, in the dyadic distribution, two random variables uniquely contribute a bit of information to the third, whereas in the triadic distribution two random variables redundantly influence the third with one bit, and synergistically with another.
## Multivariate Extensions
```
from itertools import product
outcomes_a = [
(0,0,0,0),
(0,2,3,2),
(1,0,2,1),
(1,2,1,3),
(2,1,3,3),
(2,3,0,1),
(3,1,1,2),
(3,3,2,0),
]
outcomes_b = [
(0,0,0,0),
(0,0,1,1),
(0,1,0,1),
(0,1,1,0),
(1,0,0,1),
(1,0,1,0),
(1,1,0,0),
(1,1,1,1),
]
outcomes = [ tuple([2*a+b for a, b in zip(a_, b_)]) for a_, b_ in product(outcomes_a, outcomes_b) ]
quadradic = uniform(outcomes)
dyadic2 = uniform([(4*a+2*c+e, 4*a+2*d+f, 4*b+2*c+f, 4*b+2*d+e) for a, b, c, d, e, f in product([0,1], repeat=6)])
dists2 = [('dyadic2', dyadic2), ('quadradic', quadradic)]
print_partition(dists2, ShannonPartition)
print_partition(dists2, ExtropyPartition)
print_table('Entropies', entropies, dists2)
print_table('Mutual Informations', mutual_informations, dists2)
print_table('Common Informations', common_informations, dists2)
print_table('Other Measures', other_measures, dists2)
plot_profile(dists2, ComplexityProfile)
plot_profile(dists2, MUIProfile)
plot_profile(dists2, SchneidmanProfile)
plot_profile(dists2, EntropyTriangle)
```
```
%run technical_trading.py
#%%
data = pd.read_csv('../../data/hs300.csv', index_col='date', parse_dates=['date'])
data.vol = data.vol.astype(float)
#start = pd.Timestamp('2005-09-01')
#end = pd.Timestamp('2012-03-15')
#data = data[start:end]
#%%
chaikin = CHAIKINAD(data, m = 14, n = 16)
kdj = KDJ(data)
adx = ADX(data)
emv = EMV(data, n = 20, m = 23)
cci = CCI(data, n=20, m = 8)
bbands = BBANDS(data, n =20, m=2)
aroon = AROON(data)
cmo = CMO(data)
#%%
signal = pd.DataFrame(index=data.index)
#signal['kdj'] = kdj['2']
signal['chaikin'] = chaikin['3']
signal['emv'] = emv['2']
signal['adx'] = adx['1']
signal['cci'] = cci['2']
signal['aroon'] = aroon['2']
signal['cmo'] = cmo['2']
signal['bbands'] = bbands['1']
signal = signal.fillna(0)
returns_c = Backtest(data, signal.mode(axis=1).iloc[:, 0])
(1+returns_c).cumprod().plot()
#%%
oos_date = pd.Timestamp('2012-03-15')
#pf.create_returns_tear_sheet(returns, live_start_date=oos_date)
pf.create_full_tear_sheet(returns_c)
#%%
%matplotlib inline
(1+returns_c).cumprod().plot()
returns = pd.DataFrame(index=data.index)
#signal['kdj'] = kdj['2']
returns['chaikin'] = np.array(Backtest(data, chaikin['3']))
returns['emv'] = np.array(Backtest(data, emv['2']))
returns['adx'] = np.array(Backtest(data, adx['1']))
returns['cci'] = np.array(Backtest(data, cci['2']))
returns['aroon'] = np.array(Backtest(data, aroon['2']))
returns['cmo'] = np.array(Backtest(data, cmo['2']))
returns['bbands'] = np.array(Backtest(data, bbands['1']))
returns = returns.fillna(0)
(1+returns['chaikin']).cumprod().plot()
nav = pd.DataFrame()
nav['combined'] = (1+returns_c).cumprod()
ema5 = talib.EMA(np.array(nav['combined']), 5)
ema20 = talib.EMA(np.array(nav['combined']), 20)
signal5 = (nav['combined'] > ema5) * 1 + (nav['combined']<ema5) *0
signal20 = (nav['combined'] > ema20) * 1 + (nav['combined']<ema20) * 0
signal5_20 = (ema5 > ema20) * 1 + (ema5 < ema20) * 0
return_ema5 = returns_c * signal5.shift(1)
return_ema20 = returns_c * signal20.shift(1)
nav['ema5'] = (1+return_ema5).cumprod()
nav['ema20'] = (1+return_ema20).cumprod()
#nav['ema5_20'] = (1+retrun_ema5_20).cumprod()
nav.plot()
(1+returns.sum(1)/4).cumprod().plot()
ret_target = returns.sum(1) / 4
ret_target.index = data.index.tz_localize('UTC')
pf.create_full_tear_sheet(ret_target)
%run ../Strategy_Evalution_Tools/turtle_evalution.py
# This file is part of DEAP.
#
# DEAP is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as
# published by the Free Software Foundation, either version 3 of
# the License, or (at your option) any later version.
#
# DEAP is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with DEAP. If not, see <http://www.gnu.org/licenses/>.
import array
import random
import numpy
from deap import algorithms
from deap import base
from deap import creator
from deap import tools
### insample vs. oos
returns_is = returns.iloc[:, :]
returns_oos = returns.iloc[1001:, :]
creator.create("FitnessMax", base.Fitness, weights=(1.0,))
creator.create("Individual", array.array, typecode='b', fitness=creator.FitnessMax)
toolbox = base.Toolbox()
# Attribute generator
toolbox.register("attr_bool", random.randint, 0, 1)
# Structure initializers
toolbox.register("individual", tools.initRepeat, creator.Individual, toolbox.attr_bool, 7)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)
def evalOneMax(individual):
print(individual)
for i in range(7) :
if i == 0:
rets = returns_is.iloc[:, i] * individual[i]
else :
rets = rets + returns_is.iloc[:, i] * individual[i]
rets = rets.fillna(0)
sharpe, rsharpe = Sharpe(rets)
rrr = RRR(rets)
if np.isnan(rsharpe) :
rsharpe = 0
print(rsharpe)
return rsharpe,
toolbox.register("evaluate", evalOneMax)
toolbox.register("mate", tools.cxTwoPoint)
toolbox.register("mutate", tools.mutFlipBit, indpb=0.05)
toolbox.register("select", tools.selTournament, tournsize=3)
def main():
random.seed(64)
pop = toolbox.population(n=128)
hof = tools.HallOfFame(2)
stats = tools.Statistics(lambda ind: ind.fitness.values)
stats.register("avg", numpy.mean)
stats.register("std", numpy.std)
stats.register("min", numpy.min)
stats.register("max", numpy.max)
pop, log = algorithms.eaSimple(pop, toolbox, cxpb=0.5, mutpb=0.2, ngen=10,
stats=stats, halloffame=hof, verbose=True)
#print(log)
return pop, log, hof
if __name__ == "__main__":
pop, log, hof = main()
pop
hof.items
import operator
from deap import base
from deap import creator
from deap import gp
from deap import tools
pset = gp.PrimitiveSet("MAIN", arity=1)
pset.addPrimitive(operator.add, 2)
pset.addPrimitive(operator.sub, 2)
pset.addPrimitive(operator.mul, 2)
creator.create("FitnessMin", base.Fitness, weights=(-1.0,))
creator.create("Individual", gp.PrimitiveTree, fitness=creator.FitnessMin,
pset=pset)
toolbox = base.Toolbox()
toolbox.register("expr", gp.genHalfAndHalf, pset=pset, min_=1, max_=2)
toolbox.register("individual", tools.initIterate, creator.Individual,
toolbox.expr)
### insample and out-of-sample test
data = pd.read_csv('../../data/hs300.csv', index_col='date', parse_dates=['date'])
data.vol = data.vol.astype(float)
```
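The combined strategy above takes the row-wise mode of the individual indicator signals (`signal.mode(axis=1)` followed by the first column). A dependency-free sketch of that majority vote, assuming pandas' tie-breaking (ties resolve to the smallest value):

```python
from collections import Counter

def majority_vote(votes):
    """Most common signal among indicator votes; on ties, return the smallest
    value, mirroring DataFrame.mode(axis=1) followed by taking column 0."""
    counts = Counter(votes)
    top = max(counts.values())
    return min(v for v, c in counts.items() if c == top)

print(majority_vote([1, 1, -1, 0, 1]))  # 1
print(majority_vote([1, -1, 0]))        # -1 (three-way tie -> smallest)
```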
<img src=https://a-static.projektn.sk/2020/11/Startup.jpg>
# Startup Profit Prediction
# 1. Reading and Understanding the Data
```
#basic libraries and visualization
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
#statsmodels
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
#sklearn-dataprocessing
from sklearn.model_selection import train_test_split, cross_val_score, GridSearchCV
from sklearn.feature_selection import f_regression
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import PolynomialFeatures, StandardScaler, OneHotEncoder
from sklearn.metrics import r2_score, mean_squared_error
#sklearn-models
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
import warnings
warnings.filterwarnings('ignore')
sns.set_theme(style='darkgrid', palette='Accent')
pd.options.display.float_format = '{:,.2f}'.format
```
**About the dataset:**
This dataset contains data about 50 business startups collected from New York, California, and Florida.
The variables in the dataset are Profit, R&D Spend (research and development), Administration Spend, and Marketing Spend.
```
import os
for dirname, _, filenames in os.walk('/kaggle/input'):
for filename in filenames:
print(os.path.join(dirname, filename))
startups = pd.read_csv("/kaggle/input/d/farhanmd29/50-startups/50_Startups.csv")
startups.head()
startups.shape
startups.info()
startups.isnull().sum()
startups.duplicated().any()
startups.describe()
```
# 2. EDA
## Checking for outliers
```
startups.columns.values
sns.boxplot(data=startups)
# We can see that in the Profit column we have an outlier. Since we have a small dataset, this could be a problem
# in predicting the profit, hence, we are going to remove this outlier.
Q3, Q1 = np.percentile(startups["Profit"], [75 ,25])
IQR = Q3 - Q1
startups = startups[~(startups.Profit< (Q1 - 1.5*IQR))]
```
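The outlier rule above is Tukey's fence: anything below $Q_1 - 1.5\,\mathrm{IQR}$ is dropped. A small standard-library sketch of the same rule (using `statistics.quantiles` with the `'inclusive'` method, which matches NumPy's default linear interpolation):

```python
from statistics import quantiles

def tukey_fences(values, k=1.5):
    """Return the (lower, upper) Tukey fences Q1 - k*IQR and Q3 + k*IQR."""
    q1, _, q3 = quantiles(values, n=4, method='inclusive')
    iqr = q3 - q1
    return q1 - k * iqr, q3 + k * iqr

data = [10, 12, 13, 14, 15, 16, 18, 95]  # 95 is an obvious outlier
low, high = tukey_fences(data)
print([x for x in data if x < low or x > high])  # [95]
```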
## Visualising Numerical Variables
```
sns.pairplot(startups[['R&D Spend', 'Administration', 'Marketing Spend', 'Profit']], kind="reg", diag_kind="kde")
plt.show()
```
**Insights:**
1. The numerical variables are approximately normally distributed.
2. R&D Spend and Marketing Spend are in a linear relationship with the target variable.
### Profit Distribution
```
sns.distplot(startups["Profit"], bins=30)
plt.show()
```
### R&D Spend vs. Profit Correlation
```
sns.jointplot(x=startups["Profit"], y=startups["R&D Spend"], kind="reg")
plt.show()
```
## Visualising Categorical Variables
```
g=sns.FacetGrid(data=startups, col="State", height=5, aspect=0.7)
g.map_dataframe(sns.barplot, palette="Accent")
g.set_xticklabels(rotation=45)
plt.show()
```
**Insights:**
1. Profit and Marketing Spend are higher in Florida than in the other states.
2. R&D Spend and Administration are about the same across all of the states.
# 3. Data Preparation
```
startups_prepared = startups.copy()
```
## Checking for multicollinearity
```
numerical = startups_prepared.drop(columns=["State", "Profit"])
vif = pd.DataFrame()
vif["Features"] = numerical.columns
vif["VIF"] = [variance_inflation_factor(numerical.values, i) for i in range(numerical.shape[1])]
vif["VIF"] = round(vif["VIF"], 2)
vif = vif.sort_values(by = "VIF", ascending = False)
vif
```
**Insights:**
1. VIF scores are high for R&D and Marketing Spend.
2. Since Administration is not as correlated with Profit as the other variables, we will consider dropping it, which will drive the VIF down.
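For intuition, with exactly two predictors the VIF reduces to $1/(1-r^2)$, where $r$ is their Pearson correlation; a quick dependency-free sketch (the data here is made up purely for illustration):

```python
def pearson_r(x, y):
    """Sample Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def vif_two_predictors(x, y):
    """VIF for either of exactly two predictors: 1 / (1 - r^2)."""
    return 1 / (1 - pearson_r(x, y) ** 2)

x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 8.1, 9.8]  # nearly collinear with x -> very large VIF
print(vif_two_predictors(x, y))
```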
## Creating dummy variables
```
startups_prepared = pd.get_dummies(startups_prepared, drop_first=True)
startups_prepared.rename(columns={"R&D Spend":"R&D", "Marketing Spend":"Marketing",
"State_Florida":"Florida", "State_New York":"New York"}, inplace=True)
startups_prepared.head()
```
## Defining input and target variables
```
X = startups_prepared.drop(columns="Profit")
y = startups_prepared.Profit
```
## Feature selection
```
data = f_regression(X[["R&D", "Administration", "Marketing"]], y)
f_df = pd.DataFrame(data, index=[["F_statistic", "p_value"]], columns=X[["R&D", "Administration", "Marketing"]].columns).T
f_df
```
**Insights:**
1. R&D and Marketing have p-values near 0, which implies statistical significance.
2. On the other hand, Administration seems to have no effect in predicting Profit, as we previously saw from the correlations as well.
3. We are going to drop the Administration column, as it has no statistical significance in our model.
```
X = X.drop(columns="Administration")
```
## Splitting the Data into Training and Testing Sets
```
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=6)
#Checking if the split has approximately equal distributions of values
print(f"Train Florida: {X_train['Florida'].mean()}")
print(f"Test Florida: {X_test['Florida'].mean()}")
print(f"Train Marketing: {X_train['Marketing'].mean()}")
print(f"Test Marketing: {X_test['Marketing'].mean()}")
```
## Scaling the Features
```
#scaling inputs
sc_x = StandardScaler()
X_train = sc_x.fit_transform(X_train)
X_test = sc_x.transform(X_test)
#scaling target variable
sc_y = StandardScaler()
y_train = sc_y.fit_transform(y_train.values.reshape(-1, 1))
y_test = sc_y.transform(y_test.values.reshape(-1, 1))
y_train = y_train.reshape(-1)
y_test = y_test.reshape(-1)
```
# 4. Model Selection and Evaluation
## Multiple Linear regression
```
Rsqr_test = []
order = range(1,4)
for n in order:
pr = PolynomialFeatures(degree=n)
X_train_poly = pr.fit_transform(X_train)
X_test_poly = pr.transform(X_test)
lr = LinearRegression()
lr.fit(X_train_poly, y_train)
Rsqr_test.append(lr.score(X_test_poly, y_test))
Rsqr_test
plt.plot(Rsqr_test)
plt.show()
```
## Support Vector regression
```
svr = SVR()
svr.fit(X_train, y_train)
svr.score(X_test, y_test)
```
## Decision Tree regression
```
dt = DecisionTreeRegressor()
dt.fit(X_train, y_train)
dt.score(X_test, y_test)
```
## Random Forest regression
```
rf = RandomForestRegressor()
rf.fit(X_train, y_train)
rf.score(X_test, y_test)
```
#### The best-performing model is Multiple Linear Regression.
```
lr = LinearRegression()
lr.fit(X_train, y_train)
r2_score = lr.score(X_test, y_test)
# Adjusted R-square of the model
n = X_test.shape[0]
p = X_test.shape[1]
adjusted_r2 = 1-(1-r2_score)*(n-1)/(n-p-1)
adjusted_r2
```
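The adjusted $R^2$ computed above penalizes $R^2$ for the number of predictors; as a standalone sketch (the numbers below are illustrative, with $n=17$ matching the test-set size and $p$ the predictor count):

```python
def adjusted_r2(r2, n, p):
    """Adjusted R^2 = 1 - (1 - R^2) * (n - 1) / (n - p - 1)."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# Extra predictors erode the score quickly when n is small
print(adjusted_r2(0.90, n=17, p=4))  # ~0.867
```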
# 5. Residuals analysis
```
y_test_hat = lr.predict(X_test)
plt.scatter(x=y_test, y=y_test_hat, alpha=0.8)
plt.plot(y_test, y_test, color='darkgreen')
plt.show()
residuals = y_test - y_test_hat
```
The errors should not follow any pattern and should be evenly distributed around zero.
```
plt.scatter(y=residuals, x=y_test_hat, alpha=0.8)
plt.show()
lr.coef_
lr.intercept_
```
## Our preferred model now has an equation that looks like this:
## $$ \text{Profit} = -3.77 + 0.96 \times \text{R\&D Spend} + 0.01 \times \text{Marketing Spend} - 0.01 \times \text{Florida} - 0.06 \times \text{New York} $$
Note: Since we scaled the data for modeling, if we want to predict profits, we have to perform inverse_transform() on the predicted values.
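Since the target was standard-scaled, a prediction $\hat{y}$ in scaled units maps back to dollars via $\hat{y}\,\sigma + \mu$, which is all `inverse_transform()` does under the hood; a sketch with made-up scaler statistics:

```python
def unscale(y_scaled, mean, std):
    """Undo standard scaling: y = y_scaled * std + mean."""
    return y_scaled * std + mean

# Hypothetical fitted statistics for Profit (not the actual values from sc_y)
print(unscale(0.5, mean=110_000.0, std=40_000.0))  # 130000.0
```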
```
import os
import requests
from bs4 import BeautifulSoup
import pandas as pd
from splinter import Browser
import splinter
import numpy as np
# create variables to store scraped info
movie_titles = []
opening_amts = []
total_gross = []
per_of_total = []
num_of_theaters = []
open_date = []
# loop through pages to get data from all 1000 movies
pages = np.arange(0, 801, 200)
for page in pages:
page = requests.get('https://www.boxofficemojo.com/chart/top_opening_weekend/?offset=' + str(page))
# Create BeautifulSoup object; parse with 'lxml'
soup = BeautifulSoup(page.text, 'lxml')
titles = soup.find_all('td', class_ = 'a-text-left mojo-field-type-release mojo-cell-wide')
for movie in titles:
movie_titles.append(movie.select('a')[0].string)
opening = soup.find_all('td', class_ = 'a-text-right mojo-field-type-money')
for amt in opening:
opening_amts.append(amt.string)
total = soup.find_all('td', class_ = 'a-text-right mojo-field-type-money mojo-estimatable')
for money in total:
total_gross.append(money.string)
percent_of_total = soup.find_all('td', class_ = 'a-text-right mojo-field-type-percent')
for percent in percent_of_total:
per_of_total.append(percent.string)
theaters = soup.find_all('td', class_ = 'a-text-right mojo-field-type-positive_integer mojo-estimatable')
for thr in theaters:
num_of_theaters.append(thr.string)
date = soup.find_all('td', class_ = 'a-text-left mojo-field-type-date a-nowrap')
for day in date:
open_date.append(day.string)
gross_df = pd.DataFrame({"Gross & Average": total_gross})
gross_df
# strip punctuation and turn into integer
gross_df['Gross & Average'] = gross_df['Gross & Average'].map(lambda x: x.lstrip('$')).str.replace(',', '').astype(int)
# separate total gross from avg. per theater
avg_thr = gross_df[gross_df['Gross & Average'] < 100000].dropna().reset_index()['Gross & Average']
total_grss = gross_df[gross_df['Gross & Average'] > 100000].dropna().reset_index()['Gross & Average']
gross_df['Total Gross'] = total_grss
gross_df['Average per Theater'] = avg_thr
gross_df = gross_df.drop(columns=['Gross & Average']).dropna()
gross_df['Movie Title'] = movie_titles
gross_df
# create DataFrame
opening_df = pd.DataFrame({"Movie Title": movie_titles, "Opening": opening_amts, "Total Gross": total_grss, "% of Total": per_of_total, "Theaters": num_of_theaters, "Average per Theater": avg_thr, "Date": open_date})
#pd.options.display.float_format = '{:,}'.format
#df['new_column_name'] = df['column_name'].map('{:,.2f}'.format)
#opening_df['Total Gross'] = '$' + (opening_df['Total Gross'].astype(float)).map('{:,.2f}'.format).astype(str)
#opening_df['Average per Theater'] = '$' + (opening_df['Average per Theater'].astype(float)).map('{:,.2f}'.format).astype(str)
opening_df
# strip punctuation and turn into integer
opening_df['Opening'] = opening_df['Opening'].map(lambda x: x.lstrip('$')).str.replace(',', '').astype(int)
opening_df['Theaters'] = opening_df['Theaters'].str.replace(',', '').astype(int)
opening_df
import datetime as dt
opening_df['Date'] = pd.to_datetime(opening_df['Date'])
opening_df
#month = opening_df['Date'].dt.month
month = opening_df['Date']
opening_df['Season Number'] = (month.dt.month%12 + 3)//3
opening_df
year = opening_df['Date'].dt.year
opening_df['Release Year'] = year
opening_df
opening_df["Movie & Year"] = opening_df["Movie Title"] + " " + "(" + opening_df["Release Year"].astype(str) + ")"
opening_df.head()
seasons = opening_df['Season Number'].astype(int)
season_name = []
for s in seasons:
if (s == 1):
season_name.append('Winter')
elif (s == 2):
season_name.append('Spring')
elif (s == 3):
season_name.append('Summer')
elif (s == 4):
season_name.append('Fall')
season_name
opening_df['Season Name'] = season_name
opening_df
# export to csv
opening_df.to_csv('data/opening.csv')
```
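The season arithmetic used above, `(month % 12 + 3) // 3`, groups December with January and February; a quick standalone check:

```python
def season_number(month):
    """Map a calendar month (1-12) to 1=Winter, 2=Spring, 3=Summer, 4=Fall."""
    return (month % 12 + 3) // 3

print([season_number(m) for m in range(1, 13)])
# [1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 1]
```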
# DAT257x: Reinforcement Learning Explained
## Lab 2: Bandits
### Exercise 2.3: UCB
```
import numpy as np
import sys
if "../" not in sys.path:
sys.path.append("../")
from lib.envs.bandit import BanditEnv
from lib.simulation import Experiment
#Policy interface
class Policy:
#num_actions: (int) Number of arms [indexed by 0 ... num_actions-1]
def __init__(self, num_actions):
self.num_actions = num_actions
def act(self):
pass
def feedback(self, action, reward):
pass
#Greedy policy
class Greedy(Policy):
def __init__(self, num_actions):
Policy.__init__(self, num_actions)
self.name = "Greedy"
self.total_rewards = np.zeros(num_actions, dtype = np.longdouble)
self.total_counts = np.zeros(num_actions, dtype = np.longdouble)
def act(self):
current_averages = np.divide(self.total_rewards, self.total_counts, where = self.total_counts > 0)
current_averages[self.total_counts <= 0] = 0.5 #Correctly handles Bernoulli rewards; over-estimates otherwise
current_action = np.argmax(current_averages)
return current_action
def feedback(self, action, reward):
self.total_rewards[action] += reward
self.total_counts[action] += 1
#Epsilon Greedy policy
class EpsilonGreedy(Greedy):
def __init__(self, num_actions, epsilon):
Greedy.__init__(self, num_actions)
if (epsilon is None or epsilon < 0 or epsilon > 1):
print("EpsilonGreedy: Invalid value of epsilon", flush = True)
sys.exit(0)
self.epsilon = epsilon
self.name = "Epsilon Greedy"
def act(self):
choice = None
if self.epsilon == 0:
choice = 0
elif self.epsilon == 1:
choice = 1
else:
choice = np.random.binomial(1, self.epsilon)
if choice == 1:
return np.random.choice(self.num_actions)
else:
current_averages = np.divide(self.total_rewards, self.total_counts, where = self.total_counts > 0)
current_averages[self.total_counts <= 0] = 0.5 #Correctly handles Bernoulli rewards; over-estimates otherwise
current_action = np.argmax(current_averages)
return current_action
```
Now let's implement a UCB algorithm.
```
import math
#UCB policy
class UCB(Greedy):
def __init__(self, num_actions):
Greedy.__init__(self, num_actions)
self.name = "UCB"
self.round = 0
def act(self):
current_action = None
self.round += 1
if self.round <= self.num_actions:
"""The first k rounds, where k is the number of arms/actions, play each arm/action once"""
current_action = self.round % self.num_actions
else:
"""At round t, play the arms with maximum average and exploration bonus"""
current_averages = np.divide(self.total_rewards, self.total_counts)
f = lambda idx, avg: avg + math.sqrt(2 * math.log(self.round) / self.total_counts[idx])
scores = np.fromiter((f(i, v) for i, v in enumerate(current_averages)), dtype = np.longdouble)
current_action = np.argmax(scores)
return current_action
```
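The score inside `act()` above is the UCB1 rule: the empirical mean plus an exploration bonus $\sqrt{2\ln t / n_a}$ that shrinks as an arm accumulates pulls. In isolation:

```python
import math

def ucb_score(avg_reward, pulls, t):
    """UCB1 score: empirical mean plus sqrt(2 ln t / pulls) exploration bonus."""
    return avg_reward + math.sqrt(2 * math.log(t) / pulls)

# Two arms with the same empirical mean: the rarely pulled one scores higher
print(ucb_score(0.5, pulls=2, t=100) > ucb_score(0.5, pulls=50, t=100))  # True
```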
Now let's prepare the simulation.
```
evaluation_seed = 1239
num_actions = 10
trials = 10000
# distribution = "bernoulli"
distribution = "normal"
```
What do you think the regret graph would look like?
```
env = BanditEnv(num_actions, distribution, evaluation_seed)
agent = UCB(num_actions)
experiment = Experiment(env, agent)
experiment.run_bandit(trials)
```
<p><img src="https://oceanprotocol.com/static/media/banner-ocean-03@2x.b7272597.png" alt="drawing" width="800" align="center"/>
<h1><center>Ocean Protocol - Manta Ray project</center></h1>
<h3><center>Decentralized Data Science and Engineering, powered by Ocean Protocol</center></h3>
<p>Version 0.5.3 - beta</p>
<p>Package compatibility: squid-py v0.6.13, keeper-contracts 0.10.3, utilities 0.2.2,
<p>Component compatibility (Nile): Brizo v0.3.12, Aquarius v0.3.4, Nile testnet smart contracts 0.10.3</p>
<p><a href="https://github.com/oceanprotocol/mantaray">mantaray on Github</a></p>
<p>
Getting Underway - Downloading Datasets (Assets)
To complete the basic data science workflow, this notebook will demonstrate how a user
can download an asset. Downloading an asset is a simple example of a Service Execution Agreement,
similar to a contract with a series of clauses. Each clause is secured on the blockchain, allowing for trustworthy
execution of a contract.
In this notebook, an asset will first be published as before, and then ordered and downloaded.
### Section 0: Import modules, and setup logging
```
import logging
import os
from squid_py import Metadata, Ocean
import squid_py
import mantaray_utilities as manta_utils
# Setup logging
from mantaray_utilities.user import get_account_from_config
from mantaray_utilities.events import subscribe_event
manta_utils.logging.logger.setLevel('INFO')
import mantaray_utilities as manta_utils
from squid_py import Config
from squid_py.keeper import Keeper
from pathlib import Path
import datetime
import web3
import asyncio
# path_log_file = Path.home() / '{}.log'.format(datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"))
# fh = logging.FileHandler(path_log_file)
# fh.setLevel(logging.DEBUG)
# manta_utils.logging.logger.addHandler(fh)
```
## Section 1: Get the configuration from the INI file
```
# Get the configuration file path for this environment
OCEAN_CONFIG_PATH = Path(os.environ['OCEAN_CONFIG_PATH'])
assert OCEAN_CONFIG_PATH.exists(), "{} - path does not exist".format(OCEAN_CONFIG_PATH)
# The Market Place will be delegated to provide access to your assets, so we need the address
MARKET_PLACE_PROVIDER_ADDRESS = os.environ['MARKET_PLACE_PROVIDER_ADDRESS']
logging.critical("Configuration file selected: {}".format(OCEAN_CONFIG_PATH))
logging.critical("Deployment type: {}".format(manta_utils.config.get_deployment_type()))
logging.critical("Squid API version: {}".format(squid_py.__version__))
logging.info("MARKET_PLACE_PROVIDER_ADDRESS:{}".format(MARKET_PLACE_PROVIDER_ADDRESS))
# Instantiate Ocean with the default configuration file.
configuration = Config(OCEAN_CONFIG_PATH)
squid_py.ConfigProvider.set_config(configuration)
ocn = Ocean(configuration)
```
## Section 2: Delegate access of your asset to the marketplace
When we publish and register a DDO to a marketplace, we assign several services and conditions to those services.
By default, the permission to grant access lies with you, the publisher. As a publisher, you would need to
run the services component (Brizo) in order to manage access to your assets.
However, in the case of a marketplace, we delegate the permission to grant access to these services to the
marketplace on our behalf. Therefore, we need the public address of the marketplace component. Of course, the
conditions are ultimately defined by you, the publisher.
```
MARKET_PLACE_PROVIDER_ADDRESS = web3.Web3.toChecksumAddress(MARKET_PLACE_PROVIDER_ADDRESS)
```
## Section 3: Instantiate Ocean
```
keeper = Keeper.get_instance()
```
## Section 4: Get Publisher and register an asset for testing the download
Of course, you can download your own asset, one that you have created, or
one that you have found via the search api. All you need is the DID of the asset.
```
publisher_account = manta_utils.user.get_account_by_index(ocn,0)
# publisher_account = get_account_from_config(config_from_ini, 'parity.address', 'parity.password')
print("Publisher address: {}".format(publisher_account.address))
print("Publisher ETH: {:0.1f}".format(ocn.accounts.balance(publisher_account).eth/10**18))
print("Publisher OCEAN: {:0.1f}".format(ocn.accounts.balance(publisher_account).ocn/10**18))
# Register an asset
ddo = ocn.assets.create(Metadata.get_example(), publisher_account, providers=[MARKET_PLACE_PROVIDER_ADDRESS])
logging.info(f'registered ddo: {ddo.did}')
asset_price = int(ddo.metadata['base']['price']) / 10**18
asset_name = ddo.metadata['base']['name']
print("Registered {} for {} OCN".format(asset_name, asset_price))
```
## Section 5: Get Consumer account, ensure token balance
```
# consumer_account = get_account_from_config(config_from_ini, 'parity.address1', 'parity.password1')
consumer_account = manta_utils.user.get_account_by_index(ocn,1)
print("Consumer address: {}".format(consumer_account.address))
print("Consumer ETH: {:0.1f}".format(ocn.accounts.balance(consumer_account).eth/10**18))
print("Consumer OCEAN: {:0.1f}".format(ocn.accounts.balance(consumer_account).ocn/10**18))
assert ocn.accounts.balance(consumer_account).eth/10**18 > 1, "Insufficient ETH in account {}".format(consumer_account.address)
# Ensure the consumer always has enough Ocean Token (with a margin)
if ocn.accounts.balance(consumer_account).ocn/10**18 < asset_price + 1:
logging.info("Insufficient Ocean Token balance for this asset!".format())
refill_amount = int(15 - ocn.accounts.balance(consumer_account).ocn/10**18)
logging.info("Requesting {} tokens".format(refill_amount))
ocn.accounts.request_tokens(consumer_account, refill_amount)
```
## Section 6: Initiate the agreement for accessing (downloading) the asset, wait for condition events
```
agreement_id = ocn.assets.order(ddo.did, 'Access', consumer_account)
logging.info("Consumer has placed an order for asset {}".format(ddo.did))
logging.info("The service agreement ID is {}".format(agreement_id))
```
In Ocean Protocol, downloading an asset is enforced by a contract.
The contract conditions and clauses are set by the publisher. Conditions trigger events, which are monitored
to ensure the contract is successfully executed.
```
subscribe_event("created agreement", keeper, agreement_id)
subscribe_event("lock reward", keeper, agreement_id)
subscribe_event("access secret store", keeper, agreement_id)
subscribe_event("escrow reward", keeper, agreement_id)
```
Now wait for all events to complete!
Now that the agreement is signed, the consumer can download the asset.
```
assert ocn.agreements.is_access_granted(agreement_id, ddo.did, consumer_account.address)
# ocn.agreements.status(agreement_id)
ocn.assets.consume(agreement_id, ddo.did, 'Access', consumer_account, 'downloads_nile')
logging.info('Success buying asset.')
```
# Exercise 1
Add the specified code for each code cell, running the cells _in order_.
Write a **`while`** loop that prints out every 5th number (multiples of 5) from 0 to 100 (inclusive).
- _Tip:_ use an **`end=','`** keyword argument to the `print()` function to print all the numbers on the same line.
```
nums = 0
while nums <= 100:
print(nums, end = ',')
nums = nums + 5
```
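For comparison only — the exercise specifically asks for a `while` loop — the same sequence can come from `range` with a step argument:

```python
# range(start, stop, step): stop is exclusive, so 101 includes 100.
line = ','.join(str(num) for num in range(0, 101, 5))
print(line)
```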
Use a **`while`** loop to print out the first 15 [Triangular numbers](https://en.wikipedia.org/wiki/Triangular_number). This is a sequence of numbers for which the _nth_ value is the sum of the numbers from 0 to _n_. **Do this only using addition!**
- _Hint:_ use an additional variable to keep track of the `total` value, and have that value increase by the number of times you've been through the loop each iteration!
```
total = 0
nums = 1
while nums <= 15:
total = total + nums
print(total, end = ',')
nums = nums + 1
```
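As a cross-check on the addition-only loop above, the nth triangular number also has the closed form T_n = n(n+1)/2:

```python
# Closed form for triangular numbers: T_n = n * (n + 1) // 2
first_15 = [n * (n + 1) // 2 for n in range(1, 16)]
print(first_15)  # 1, 3, 6, 10, ..., 120
```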
_Challenge_ Use a **`while`** loop to print out 20 numbers, each of which is larger than the previous by the _sum_ of the **two** previous numbers (the [Fibonacci sequence](https://en.wikipedia.org/wiki/Fibonacci_number)).
- _Hint_: you'll need to keep track of the two previous values (start them at 0 and 1), and then "update" them each time through the loop, storing the "new total" in the first previous variable, and the first previous variable in the second (be careful about the ordering of this!)
```
start = 0 #Not the best names
next = 1
count = 0
while count < 20:
print(next, end =',') #print initial 1 value. Print before calcs
sum = start + next
start = next
next = sum
count = count + 1
```
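An equivalent, more idiomatic version of the loop above uses tuple unpacking, which removes the temporary variable and avoids shadowing the built-in names `sum` and `next`:

```python
# Tuple unpacking updates both values in one step, so no temporary is needed.
prev, curr = 0, 1
fib = []
for _ in range(20):
    fib.append(curr)
    prev, curr = curr, prev + curr
print(fib)
```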
Use a **`while`** loop to print out a sequence of random numbers from 0 to 10, stopping after the number `4` is printed for the first time. You will need to import the `random` module.
```
import random
test = True
while test: #The syntax "while test == True" isn't necessary here; works w/o it
    num = random.randint(0, 10)  # randint is inclusive on both ends
print(num, end = ', ')
if num == 4:
test = False
#An "else" statement isn't necessary here
# else:
# test = True
```
Modify the below "coin flipping" example from the course text so that it keeps flipping coins until you get two "heads" in a row.
```
# # flip a coin until it shows up heads
# still_flipping = True
# while still_flipping:
# flip = randint(0,1)
# if flip == 0:
# flip = "Heads"
# else:
# flip = "Tails"
# print(flip, end=", ")
# if flip == "Heads":
# still_flipping = False
# flip a coin until it shows up heads twice in a row
import random
still_flipping = True
previous_flip = None
while still_flipping:
flip = random.randint(0,1)
if flip == 0:
flip = "Heads"
else:
flip = "Tails"
print(flip, end=", ")
if previous_flip == "Heads" and flip =="Heads":
still_flipping = False
previous_flip = flip
```
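A possible generalization of the two-heads logic above: track the current run length, so any streak length works. The function and its name are my own, not from the course text:

```python
import random

def flip_until_streak(streak=2, rng=random):
    """Flip a fair coin until `streak` heads occur in a row;
    return the full list of flips."""
    flips, run = [], 0
    while run < streak:
        flip = "Heads" if rng.randint(0, 1) == 0 else "Tails"
        flips.append(flip)
        run = run + 1 if flip == "Heads" else 0
    return flips

random.seed(0)
flips = flip_until_streak(2)
print(", ".join(flips))
```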
Define a function **`input_number()`** that takes a minimum and maximum value as arguments. This function should prompt the user to input a number within the range, repeating the prompt if the provided value is not acceptable. Once an acceptable value has been provided, the function should return that number. You can assume that the user-entered input will always be numeric.
Be sure and call your function and print its results to test it!
```
def input_number(min_val, max_val):
    valid = False
    while not valid:
        number = int(input("Pick a # between " + str(min_val) + " and " + str(max_val) + ": "))
        if min_val <= number <= max_val:
            valid = True
            print("Good Choice")
        else:
            print("Invalid number!")
    return number

print(input_number(2, 10))
```
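The prompt says you may assume numeric input; for real use, a hedged extension would validate input with `try`/`except`. Pulling the check into its own function keeps it testable — the names below are my own:

```python
def parse_in_range(text, lo, hi):
    """Return int(text) if it parses and lies in [lo, hi], else None."""
    try:
        number = int(text)
    except ValueError:
        return None
    return number if lo <= number <= hi else None

def input_number_safe(lo, hi):
    # Keep prompting until parse_in_range accepts the input.
    while True:
        number = parse_in_range(input(f"Pick a # between {lo} and {hi}: "), lo, hi)
        if number is not None:
            print("Good Choice")
            return number
        print("Invalid number!")
```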
```
import numpy as np
import torch
import torchvision
import torch.nn as nn
import torch.optim as optim
from nn_interpretability.interpretation.lrp.lrp_0 import LRP0
from nn_interpretability.interpretation.lrp.lrp_eps import LRPEpsilon
from nn_interpretability.interpretation.lrp.lrp_gamma import LRPGamma
from nn_interpretability.interpretation.lrp.lrp_ab import LRPAlphaBeta
from nn_interpretability.interpretation.lrp.lrp_composite import LRPMix
from nn_interpretability.model.model_trainer import ModelTrainer
from nn_interpretability.model.model_repository import ModelRepository
from nn_interpretability.visualization.mnist_visualizer import MnistVisualizer
from nn_interpretability.dataset.mnist_data_loader import MnistDataLoader
model_name = 'model_cnn.pt'
train = False
mnist_data_loader = MnistDataLoader()
MnistVisualizer.show_dataset_examples(mnist_data_loader.trainloader)
model = ModelRepository.get_general_mnist_cnn(model_name)
if train:
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.0005)
model.train()
ModelTrainer.train(model, criterion, optimizer, mnist_data_loader.trainloader)
ModelRepository.save(model, model_name)
```
# I. LRP-0
```
images = []
for i in range(10):
img = mnist_data_loader.get_image_for_class(i)
# LRP0(model, target_class, transforms, visualize_layer)
interpretor = LRP0(model, i, None, 0)
endpoint = interpretor.interpret(img)
images.append(endpoint[0].detach().cpu().numpy().sum(axis=0))
MnistVisualizer.display_heatmap_for_each_class(images)
```
## Comparison between LRP gradient and LRP convolution transpose implementation
For **convolution layers** there is no difference; we obtain the same numerical results with either approach. However, for **pooling layers** the result from the convolution-transpose approach is **4^n** times as large as that from the gradient approach, where n is the number of pooling layers. The reason is that in every average-unpooling operation, s is unpooled directly without being multiplied by any scaling factor. In the gradient approach, every input activation influences the output equally, so the gradient for every activation entry is 0.25. The operation is analogous to first unpooling and then multiplying s by a scale of 0.25.
The gradient approach is more consistent with the equations described in Montavon's paper. As we treat pooling layers like convolutional layers, the scaling factor of 0.25 from pooling should be accounted for in the steps where we multiply by the weights of the convolutional layers (steps 1 and 3).
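The 4^n factor can be checked with a minimal NumPy sketch of a single 2×2 average pool — no autograd is needed, since the pool's gradient with respect to each input is a constant 0.25:

```python
import numpy as np

s = np.array([[1.0]])  # relevance arriving at a single pooled output

# Gradient approach: each of the 4 inputs of a 2x2 average pool receives
# d(mean)/d(x_i) * s = 0.25 * s.
grad_unpool = np.full((2, 2), 0.25) * s

# Convolution-transpose approach: s is copied to every input unscaled.
naive_unpool = np.full((2, 2), 1.0) * s

ratio = naive_unpool.sum() / grad_unpool.sum()
print(ratio)  # 4.0 -> one pooling layer contributes a factor of 4^1
```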
# II. LRP-ε
```
images = []
for i in range(10):
img = mnist_data_loader.get_image_for_class(i)
# LRPEpsilon(model, target_class, transforms, visualize_layer)
interpretor = LRPEpsilon(model, i, None, 0)
endpoint = interpretor.interpret(img)
images.append(endpoint[0].detach().cpu().numpy().sum(axis=0))
MnistVisualizer.display_heatmap_for_each_class(images)
```
# III. LRP- γ
```
images = []
for i in range(10):
img = mnist_data_loader.get_image_for_class(i)
# LRPGamma(model, target_class, transforms, visualize_layer)
interpretor = LRPGamma(model, i, None, 0)
endpoint = interpretor.interpret(img)
images.append(endpoint[0].detach().cpu().numpy().sum(axis=0))
MnistVisualizer.display_heatmap_for_each_class(images)
```
# IV. LRP-αβ
## 1. LRP-α1β0
```
images = []
for i in range(10):
img = mnist_data_loader.get_image_for_class(i)
# LRPAlphaBeta(model, target_class, transforms, alpha, beta, visualize_layer)
interpretor = LRPAlphaBeta(model, i, None, 1, 0, 0)
endpoint = interpretor.interpret(img)
images.append(endpoint[0].detach().cpu().numpy().sum(axis=0))
MnistVisualizer.display_heatmap_for_each_class(images)
```
## 2. LRP-α2β1
```
images = []
img_shape = (28, 28)
for i in range(10):
img = mnist_data_loader.get_image_for_class(i)
# LRPAlphaBeta(model, target_class, transforms, alpha, beta, visualize_layer)
interpretor = LRPAlphaBeta(model, i, None, 2, 1, 0)
endpoint = interpretor.interpret(img)
images.append(endpoint[0].detach().cpu().numpy().sum(axis=0))
MnistVisualizer.display_heatmap_for_each_class(images)
```
# V. Composite LRP
```
images = []
img_shape = (28, 28)
for i in range(10):
img = mnist_data_loader.get_image_for_class(i)
# LRPMix(model, target_class, transforms, alpha, beta, visualize_layer)
interpretor = LRPMix(model, i, None, 1, 0, 0)
endpoint = interpretor.interpret(img)
images.append(endpoint[0].detach().cpu().numpy().sum(axis=0))
MnistVisualizer.display_heatmap_for_each_class(images)
```
<img src="images/bannerugentdwengo.png" alt="Banner" width="400"/>
<div>
<font color=#690027 markdown="1">
<h1>DECISION TREE</h1>
</font>
</div>
<div class="alert alert-box alert-success">
In this notebook you let Python generate a decision tree based on a table of labeled examples.<br>A decision tree provides a solution for a classification problem, here in a medical context.
</div>
<div>
<font color=#690027 markdown="1">
<h2>1. The medical problem</h2>
</font>
</div>
Several parameters can be taken into account to try to predict whether a patient is at risk of a heart attack. For a known patient, these parameters can be found in the patient record.<br>
The following table shows such parameters for six (known) patients, with an indication of whether or not they had a heart attack (1 = yes, 0 = no):

| Chest pain | Male | Smokes | Exercise | Heart attack |
|:---:|:---:|:---:|:---:|:---:|
| 1 | 1 | 0 | 1 | 1 |
| 1 | 1 | 1 | 0 | 1 |
| 0 | 0 | 1 | 0 | 1 |
| 0 | 1 | 0 | 1 | 0 |
| 1 | 0 | 1 | 1 | 1 |
| 0 | 1 | 1 | 1 | 0 |
<div>
<font color=#690027 markdown="1">
<h2>2. The decision tree</h2>
</font>
</div>
```
# import the required modules
import numpy as np                # to enter the table as a matrix
import matplotlib.pyplot as plt   # to display an image of the decision tree
from sklearn import tree          # to generate the decision tree

# data
data = np.array(
    [[1, 1, 0, 1, 1],
     [1, 1, 1, 0, 1],
     [0, 0, 1, 0, 1],
     [0, 1, 0, 1, 0],
     [1, 0, 1, 1, 1],
     [0, 1, 1, 1, 0]])

# separate parameters and class
gezondheidsparameters = data[:, :4]   # first 4 columns of the matrix are the considered parameters
klasse = data[:, 4]                   # last column is the class

# generate decision tree from the data
beslissingsboom = tree.DecisionTreeClassifier(criterion="gini")   # tree is built using the Gini index
beslissingsboom.fit(gezondheidsparameters, klasse)                # fit a tree that matches the data

# display the decision tree
plt.figure(figsize=(10,10))           # create the drawing canvas
tree.plot_tree(beslissingsboom,       # specify what to display
               class_names=["no risk", "risk"],
               feature_names=["Chest pain", "Male", "Smokes", "Exercise"],
               filled=True, rounded=True)
plt.show()                            # show the figure
```
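Once fitted, the tree can classify a new, unseen patient. A self-contained sketch that refits the same tree — the example patient below is hypothetical:

```python
import numpy as np
from sklearn import tree

# Same data matrix as above: chest pain, male, smokes, exercise, class.
data = np.array(
    [[1, 1, 0, 1, 1],
     [1, 1, 1, 0, 1],
     [0, 0, 1, 0, 1],
     [0, 1, 0, 1, 0],
     [1, 0, 1, 1, 1],
     [0, 1, 1, 1, 0]])
boom = tree.DecisionTreeClassifier(criterion="gini")
boom.fit(data[:, :4], data[:, 4])

# Hypothetical new patient: chest pain, male, non-smoker, no exercise.
prediction = boom.predict([[1, 1, 0, 0]])[0]
print(prediction)  # 1 = risk, 0 = no risk
```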
<img src="images/cclic.png" alt="Banner" align="left" width="100"/><br><br>
Notebook AI in de Zorg (AI in Healthcare), see <a href="http://www.aiopschool.be">AI Op School</a>, by F. wyffels & N. Gesquière, is licensed under a <a href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license</a>.
# Project 1: Linear Regression Model
This is the first project of our data science fundamentals. This project is designed to solidify your understanding of the concepts we have learned in Regression and to test your knowledge on regression modelling. There are four main objectives of this project.
1\. Build Linear Regression Models
* Use closed form solution to estimate parameters
* Use packages of choice to estimate parameters<br>
2\. Model Performance Assessment
* Provide an analytical rationale with choice of model
* Visualize the Model performance
* MSE, R-Squared, Train and Test Error <br>
3\. Model Interpretation
* Interpret the results of your model
* Interpret the model assessment <br>
4\. Model Diagnostics
* Does the model meet the regression assumptions?
#### About this Notebook
1\. This notebook should guide you through this project and provide starter code
2\. The dataset used is the housing dataset from Seattle homes
3\. Feel free to consult online resources when stuck or discuss with data science team members
Let's get started.
### Packages
Importing the necessary packages for the analysis
```
# Necessary Packages
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# Model and data preprocessing
from sklearn import linear_model
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR
from sklearn.feature_selection import RFE
from sklearn import preprocessing
%matplotlib inline
```
Now that you have imported your packages, let's read the data that we are going to be using. The dataset provided is titled *housing_data.csv* and contains housing prices and information about the features of the houses. Below, read the data into a variable and visualize the top 8 rows of the data.
```
# Initializing seed
np.random.seed(42)
data1 = pd.read_csv('housing_data.csv')
data = pd.read_csv('housing_data_2.csv')
data.head(8)
```
### Split data into train and test
In the code below, we need to split the data into the train and test for modeling and validation of our models. We will cover the Train/Validation/Test as we go along in the project. Fill the following code.
1\. Subset the features to the variable: features <br>
2\. Subset the target variable: target <br>
3\. Set the test size in proportion in to a variable: test_size <br>
```
features = data[['lot_area', 'firstfloor_sqft', 'living_area', 'bath', 'garage_area']]
target = data['price']
test_size = .33
x_train, x_test, y_train, y_test = train_test_split(features, target, test_size=test_size, random_state=42)
```
### Data Visualization
The best way to explore our data is to build some plots that can help us determine relationships in the data. We can use a scatter matrix to explore all our variables. Below is some starter code to build the scatter matrix.
```
features = pd.plotting.scatter_matrix(x_train, figsize=(14,8), alpha=1, diagonal='kde')
#columns = pd.plotting.scatter_matrix(columns, figsize=(14,8), alpha=1, diagonal='kde')
```
Based on the scatter matrix above, write a brief description of what you observe. In thinking about the description, think about the relationship and whether linear regression is an appropriate choice for modelling this data.
#### a. lot_area
My initial intuition tells me that lot_area would be the best indicator of price; that being said, there is a weak correlation between lot_area and the other features, which is a good sign! However, the distribution is dramatically right-skewed, indicating that the mean lot_area is greater than the median. This tells me that lot_area stays around the same size while price increases. In turn, that tells me some other feature is helping determine the price, because if lot_area were determining the increase in price, we'd see a linear relationship. In determining the best feature for my linear regression model, I think lot_area may be one of the least fitting to use.
#### b. firstfloor_sqft
There is a stronger correlation between firstfloor_sqft and the other features. The distribution is still right-skewed, making the median a better measure of center. firstfloor_sqft would be a good candidate for the linear regression model because of the stronger correlation and wider distribution; however, there appears to be an overly strong, linear correlation between firstfloor_sqft and living_area. Given that this linear correlation goes against the regression assumption that "all inputs are linearly independent," I would not consider using both in my model. I could, however, use one or the other.
#### c. living_area
There is a similarly strong correlation between living_area (as compared to firstfloor_sqft) and the other features, but these plots are better distributed than firstfloor_sqft. A right skew still exists, but less so than the firstfloor_sqft. However, the observation of a strong, linear correlation between firstfloor_sqft and living_area (or living_area and firstfloor_sqft) is reinforced here. Thus, I would not use both of these in my final model and having to choose between the two, I will likely choose living_area since it appears to be more well-distributed.
#### d. bath
Bath counts are discrete values, so the plots are much less spread out; however, the shape and clustering of bath vs. living_area and bath vs. garage_area may indicate a correlation. Since I cannot use both living_area and firstfloor_sqft, and I think living_area has the better distribution, I would consider using bath in conjunction with living_area.
#### e. garage_area
garage_area appears to be well-distributed, with the lowest correlation to the other features. This could make it a great fit for the final regression model. It also has the least right-skewed distribution.
#### Correlation Matrix
In the code below, compute the correlation matrix and write a few thoughts about the observations. In doing so, consider the interplay in the features and how their correlation may affect your modeling.
The correlation matrix below is in line with my thought process. lot_area has the lowest correlation with the other features, but it's not well distributed. firstfloor_sqft has a strong correlation with living_area. Given that the correlation is just over 0.5, both features might be usable in the model, since the correlation isn't overly strong; however, to be most accurate, I plan to leave one of them out (likely firstfloor_sqft). living_area also reflects this strong correlation with firstfloor_sqft. Surprisingly, there is a strong correlation between living_area and bath. Looking solely at the scatter matrix, I did not see this strong correlation. This changes my approach slightly, which I will outline below. garage_area, again, has the lowest correlations while being the most well-distributed.
#### Approach
Given this new correlation information, I will approach the regression model in one of the following ways:
1. Leave out bath as a feature and use living_area + garage_area.
2. Swap firstfloor_sqft for living_area and include bath + garage area.
#### Conclusion
I'm not 100% sure whether more features are better than fewer in this situation; however, I am sure that I want linearly independent features.
```
# Use pandas correlation function
x_train.corr(method='pearson').style.format("{:.2}").background_gradient(cmap=plt.get_cmap('coolwarm'), axis=1)
```
## 1. Build Your Model
Now that we have explored the data at a high level, let's build our model. From our sessions, we have discussed both closed form solution, gradient descent and using packages. In this section you will create your own estimators. Starter code is provided to makes this easier.
#### 1.1. Closed Form Solution
Recall: <br>
$$\beta_0 = \bar {y} - \beta_1 \bar{x}$$ <br>
$$\beta_1 = \frac {cov(x, y)} {var(x)}$$ <br>
Below, let's define functions that will compute these parameters
```
# Pass the necessary arguments in the function to calculate the coefficients
def compute_estimators(feature, target):
n1 = np.sum(feature*target) - np.mean(target)*np.sum(feature)
d1 = np.sum(feature*feature) - np.mean(feature)*np.sum(feature)
# Compute the Intercept and Slope
beta1 = n1/d1
beta0 = np.mean(target) - beta1*np.mean(feature)
return beta0, beta1 # Return the Intercept and Slope
```
Run the compute estimators function above and display the estimated coefficients for any of the predictors/input variables.
```
# Remember to pass the correct arguments
x_array = np.array(data1['living_area'])
normalized_X = preprocessing.normalize([x_array])
beta0, beta1 = compute_estimators(normalized_X, data1['price'])
print(beta0, beta1)
#### Computing coefficients for our model by hand using the actual mathematical equations
#y = beta1x + beta0
#print(y)
```
#### 1.2. sklearn solution
Now that we know how to compute the estimators, let's leverage the sklearn module to compute the metrics for us. We have already imported the linear model, let's initialize the model and compute the coefficients for the model with the input above.
```
# Initilize the linear Regression model here
model = linear_model.LinearRegression()
# Pass in the correct inputs
model.fit(data1[['living_area']], data1['price'])
# Print the coefficients
print("This is beta0:", model.intercept_)
print("This is beta1:", model.coef_)
#### Computing coefficients for our model using the sklearn package
```
Do the results from the cell above and your implementation match? They should be very close to each other.
#### Yes!! They match!
### 2. Model Evaluation
Now that we have estimated our single model. We are going to compute the coefficients for all the inputs. We can use a for loop for multiple model estimation. However, we need to create a few functions:
1\. Prediction function: Functions to compute the predictions <br>
2\. MSE: Function to compute Mean Square Error <br>
```
#Function that computes predictions of our model using the betas above + the feature data we've been using
def model_predictions(intercept, slope, feature):
""" Compute Model Predictions """
y_hat = intercept+(slope*feature)
return y_hat
y_hat = model_predictions(beta0, beta1, data1['living_area'])
#Function to compute MSE which determines the total loss for each predicted data point in our model
def mean_square_error(y_outcome, predictions):
""" Compute the mean square error """
mse = (np.sum((y_outcome - predictions) ** 2))/np.size(predictions)
return mse
mse = mean_square_error(target, y_hat)
print(mse)
```
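The objectives also list R-squared; a companion function in the same style as `mean_square_error` could look like this — the name and placement are my suggestion, not part of the starter code:

```python
import numpy as np

def r_squared(y_outcome, predictions):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_outcome = np.asarray(y_outcome, dtype=float)
    predictions = np.asarray(predictions, dtype=float)
    ss_res = np.sum((y_outcome - predictions) ** 2)
    ss_tot = np.sum((y_outcome - np.mean(y_outcome)) ** 2)
    return 1 - ss_res / ss_tot

# A perfect fit scores 1.0; predicting the mean everywhere scores 0.0.
y = np.array([1.0, 2.0, 3.0, 4.0])
print(r_squared(y, y))                     # 1.0
print(r_squared(y, np.full(4, y.mean())))  # 0.0
```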
The last function we need is a plotting function to visualize our predictions relative to our data.
```
#Function used to plot the data
def plotting_model(feature, target, predictions, name):
""" Create a scatter and predictions """
fig = plt.figure(figsize=(10,8))
plot_model = model.fit(feature, target)
plt.scatter(x=feature, y=target, color='blue')
plt.plot(feature, predictions, color='red')
plt.xlabel(name)
plt.ylabel('Price')
return model
model = plotting_model(data1[['living_area']], data1['price'], y_hat, data1['living_area'].name)
```
## Considerations/Reasoning
#### Data Integrity
After my initial linear model based on the feature "living area," I've eliminated 8 data points. If you look at the graph above, there are 4 clear outliers, and at least 4 others that follow a similar trend based on the x, y relationship. I used ~3500 sqft of living area as my cutoff for being not predictive of the model, and any price above 600000. Given the way these data points skew the above model, they intuitively appear to be outliers with high leverage. I determined this by comparing these high-leverage points with points similar to them in some way and determining whether each was an outlier (i.e., if point A's price was abnormally high, I found a point B with living area at or close to point A's living area and compared the price; vice versa if living area was abnormally high).
#### Inital Feature Analysis - "Best" Feature (a priori)
Living area is the best metric to use to train the linear model because it incorporates several of the other features: first floor square footage & bath. Living area has a high correlation with both firstfloor_sqft (0.53) and bath (0.63). These are the two highest correlations, so those features should immediately be eliminated. Additionally, intuition suggests that an increase in firstfloor_sqft will lead to an increase in living_area; if both increase, the bath count will likely also increase to accommodate the additional living area in a home. Thus, I will not need to use them in my model, because they can be accurately represented by the feature "living area."
### Single Feature Assessment
```
#Running each feature through to determine which has best linear fit
features = data[['living_area', 'garage_area', 'lot_area', 'firstfloor_sqft', 'bath']]
count = 0
for feature in features:
feature = features.iloc[:, count]
# Compute the Coefficients
beta0, beta1 = compute_estimators(feature, target)
count+=1
# Print the Intercept and Slope
print(feature.name)
print('beta0:', beta0)
print('beta1:', beta1)
# Compute the Train and Test Predictions
y_hat = model_predictions(beta0, beta1, feature)
# Plot the Model Scatter
name = feature.name
model = plotting_model(feature.values.reshape(-1, 1), target, y_hat, name)
# Compute the MSE
mse = mean_square_error(target, y_hat)
print('mean squared error:', mse)
print()
```
#### Analysis of Feature Linear Models
After eliminating these 8 data points, MSE for Living Area drop significantly from 8957196059.803959 to 2815789647.7664313. In fact, Living Area has the lowest MSE 2815789647.7664313 of all the individual models, and the best linear fit.
Garage Area is the next lowest MSE 3466639234.8407283, and the model is mostly linear; however, the bottom left of the model is concerning. You'll notice that a large number of data points go vertically upward indicating an increase in price with 0 garage area. That says to me that garage area isn't predicting the price of these homes, which indicates that it may be a good feature to use in conjunction with another feature (i.e. Living Area) or since those data points do not fit in with the rest of the population, they may need to be removed.
#### Run Model Assessment
Now that we have our functions ready, we can build individual models, compute predictions, plot our model results, and determine our MSE. Notice that we compute our MSE on the test set and not the train set.
### Element-wise Product (multiple feature) Assessment
```
#Models Living Area alone and compares it to the element-wise product of Living Area with each other feature
##Determining if an MLR would be a better way to visualize the data
features = data[['living_area', 'garage_area', 'lot_area', 'firstfloor_sqft', 'bath']]
count = 0
for feature in features:
feature = features.iloc[:, count]
#print(feature.head(0))
if feature.name == 'living_area':
x = data['living_area']
else:
x = feature * data['living_area']
# Compute the Coefficients
beta0, beta1 = compute_estimators(x, target)
# Print the Intercept and Slope
if feature.name == 'living_area':
print('living_area')
print('beta0:', beta0)
print('beta1:', beta1)
else:
print(feature.name, "* living_area")
print('beta0:', beta0)
print('beta1:', beta1)
# Compute the Train and Test Predictions
y_hat = model_predictions(beta0, beta1, x)
# Plot the Model Scatter
if feature.name == 'living_area':
name = 'living_area'
else:
name = feature.name + " " + "* living_area"
model = plotting_model(x.values.reshape(-1, 1), target, y_hat, name)
# Compute the MSE
mse = mean_square_error(target, y_hat)
print('mean squared error:', mse)
print()
count+=1
```
## Analysis
Based on the models, it appears that two of the element-wise products provide a more accurate model:
1. Living Area * First Floor SqFt
2. Living Area * Garage Area
These two dot products provide a lower MSE and thus lowers the loss per prediction point.
#1.
My intuition says that Living Area, as a feature, already includes First Floor SqFt in its data; FirstFloor SqFt is captured by Living Area, so it can be left out. Additionally, since one is included within the other, we cannot say anything in particular about Living Area or FirstFloor SqFt individually. Also, the correlation (Ln 24 & Out 24) between Living Area and FirstFloor SqFt is 0.53, which is the highest apart from Bath. This correlation is low in comparison to the "standard"; however, that standard is arbitrary. I've lowered it to be in context with the data sets I'm working with in this notebook.
#2.
The dot product of Living Area & Garage Area doesn't allow us to make a statement about each feature individually unless we model each one separately, which I do below. Even so, this dot product is a better model. Garage Area is advertised as 'bonus' space and CANNOT be included in the overall square footage of the home (i.e., living area). Thus, the garage area vector is not implicitly contained within the living area vector, making the two linearly independent.
Garage Area can be a sought-after feature depending on a buyer's desired lifestyle; more garage space appeals to buyers with more cars, which lets us draw a couple of possible inferences about those buyers:
1. enough net worth/monthly to make payments on multiple vehicles plus make payments on a house/garage
2. enough disposable income to outright buy multiple vehicles plus make payments on a house/garage
Additionally, it stands to reason that garage area would scale with living area for pragmatic reasons (more living area implies more people and potentially more vehicles) and for aesthetic reasons (more living area makes home look larger and would need larger garage).
Homes with more living area and garage area may be sought after by buyers with the ability to spend more on a home, and thus the market would bear a higher price for those homes, which helps explain why living area * garage area is a better indicator of home price.
#### Conclusion
Combining living area with other features lowered the MSE in each case. The lowest MSE comes from living area * garage area, which supports my hypothesis: Living Area is the best single feature for predicting price, and Garage Area adds value when used in conjunction.
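The pattern above — fitting a simple regression on a single feature versus on the element-wise product of two features and comparing MSEs — can be illustrated on toy data. This is a minimal sketch with hypothetical names and synthetic data, not the notebook's own helpers:

```python
import numpy as np

def fit_simple(x, y):
    """Least-squares estimates for y = beta0 + beta1 * x."""
    beta1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    beta0 = y.mean() - beta1 * x.mean()
    return beta0, beta1

def mean_sq_err(y, y_hat):
    return np.mean((y - y_hat) ** 2)

rng = np.random.default_rng(42)
living = rng.uniform(1000, 3000, 200)
garage = rng.uniform(200, 800, 200)
# Hypothetical price generated with an interaction between the two features
price = 50 * living + 0.1 * living * garage + rng.normal(0, 5000, 200)

results = {}
for name, x in [("living_area", living), ("living_area * garage_area", living * garage)]:
    beta0, beta1 = fit_simple(x, price)
    results[name] = mean_sq_err(price, beta0 + beta1 * x)
    print(name, "MSE:", round(results[name], 1))
```

Because the synthetic price depends on the interaction, the single-feature model leaves that variation in the residuals and the interaction feature achieves the lower MSE, mirroring the comparison made above.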
```
#Modeling Living Area & Garage Area separately.
features = data[['living_area', 'garage_area']]
for count, _ in enumerate(features.columns):
    feature = features.iloc[:, count]
    x = data[feature.name]
    # Compute the coefficients
    beta0, beta1 = compute_estimators(x, target)
    # Print the intercept and slope
    print(feature.name)
    print('beta0:', beta0)
    print('beta1:', beta1)
    # Compute the predictions
    y_hat = model_predictions(beta0, beta1, x)
    # Plot the model scatter
    model = plotting_model(x.values.reshape(-1, 1), target, y_hat, feature.name)
    # Compute the MSE
    mse = mean_square_error(target, y_hat)
    print('mean squared error:', mse)
    print()
#Modeling dot product of Living Area * Garage Area
features = data[['living_area']]
x = features.iloc[:, 0]
x2 = x * data['garage_area']
#x3 = x2 * data['bath']
# Compute the Coefficients
beta0, beta1 = compute_estimators(x2, target)
# Print the Intercept and Slope
print('Name: garage_area * living_area')
print('beta0:', beta0)
print('beta1:', beta1)
# Compute the Train and Test Predictions
y_hat_1 = model_predictions(beta0, beta1, x2)
# Plot the Model Scatter
name = 'garage_area * living_area'
model = plotting_model(x2.values.reshape(-1, 1), target, y_hat_1, name)
# Compute the MSE
mse = mean_square_error(target, y_hat_1)
print('mean squared error:', mse)
print()
```
## Reasoning
Above, I modeled living area and garage area each on their own, and then the dot product of Living Area * Garage Area, to compare the MSE of each individual model against the MSE of the dot product. Garage Area, far more than Living Area, has a high MSE, indicating that on its own Garage Area is not a good predictor of a home's price; we must take the data in context with reality, and intuitively one wouldn't assume that garage area alone would be indicative of price.
This, combined with the assumption that garage area may scale with living area, implies some correlation between the features, which would go against the linear-regression assumption of feature independence. In fact, there is a correlation between them (Ln 24 & Out 24) of 0.44; however, this isn't problematic for two reasons:
1. 0.44 is quite low in regard to typical correlation standards.
2. Data must be seen in context.
#1.
Although I eliminated First Floor SqFt due, in part, to its higher correlation, that correlation is only 0.09 points higher than Garage Area's. The main reason First Floor SqFt was eliminated is its inclusion within the living area vector; conversely, the main reason I'm including garage area is that it is not included within the living area vector.
#2.
Similar to my #1 explanation, knowing that garage area is 'bonus space' and, as such, is NOT included in a home's advertised square feet indicates that it isn't contained within the Living Area data in the way First Floor SqFt or Baths would be. It will most likely scale with the living area while remaining independent of it, making it a good fit for an MLR.
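The correlations cited above (0.53 and 0.44) are Pearson correlation coefficients. As a minimal sketch of how such a coefficient is computed — on hypothetical toy vectors, not the notebook's data:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two feature vectors."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc = x - x.mean()
    yc = y - y.mean()
    return float(np.sum(xc * yc) / np.sqrt(np.sum(xc**2) * np.sum(yc**2)))

# Toy vectors: one pair nearly proportional, one only loosely related
living = np.array([1200., 1500., 1800., 2100., 2400.])
first_floor = np.array([1100., 1400., 1600., 2000., 2300.])  # nearly proportional
garage = np.array([300., 250., 500., 400., 650.])            # loosely related

print(round(pearson_r(living, first_floor), 2))
print(round(pearson_r(living, garage), 2))
```

In a pandas workflow the same numbers come from `data.corr()`, which is presumably what Ln 24 in this notebook computed.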
### 3. Model Interpretation
Now that you have calculated all the individual models in the dataset, provide an analytics rationale for which model has performed best. To provide some additional assessment metrics, let's create a function to compute the R-Squared.
#### Mathematically:
$$R^2 = \frac {SS_{Regression}}{SS_{Total}} = 1 - \frac {SS_{Error}}{SS_{Total}}$$<br>
where:<br>
$SS_{Regression} = \sum_i (\widehat{y_i} - \bar{y})^2$<br>
$SS_{Total} = \sum_i ({y_i} - \bar{y})^2$<br>
$SS_{Error} = \sum_i ({y_i} - \widehat{y_i})^2$
```
# ssr = sum of squares of regression --> variance of the predictions from the mean
# sst = sum of squares total --> variance of the actuals from the mean
# sse = sum of squares error --> variance of the actuals from the predictions
def r_squared(y_outcome, predictions):
    """ Compute the R Squared """
    ssr = np.sum((predictions - np.mean(y_outcome))**2)
    sst = np.sum((y_outcome - np.mean(y_outcome))**2)
    sse = np.sum((y_outcome - predictions)**2)
    print("1 - SSE/SST =", round((1 - (sse/sst))*100), "%")
    rss = (ssr/sst) * 100
    return rss
```
Now that we have R Squared calculated, evaluate the R Squared across all models and determine which model explains the data best.
```
rss = r_squared(target, y_hat_1)
print("R-Squared =", round(rss), "%")
```
### R-Squared Adjusted
$R^2-adjusted = 1 - \frac {(1-R^2)(n-1)}{n-k-1}$
```
def r_squared_adjusted(rss, sample_size, regressors):
    n = np.size(sample_size)
    k = regressors
    r2 = rss / 100  # r_squared returns a percentage; convert to a fraction
    numerator = (1 - r2) * (n - 1)
    denominator = n - k - 1
    rssAdj = 1 - (numerator / denominator)
    return rssAdj
rssAdj = r_squared_adjusted(rss, y_hat_1, 2)
print(round(rssAdj * 100), "%")
```
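As a quick numeric check of the two formulas above, here is a self-contained sketch on toy data (hypothetical names, independent of the notebook's variables):

```python
import numpy as np

def r2_and_adjusted(y, y_hat, k):
    """R-squared and adjusted R-squared for a model with k regressors."""
    sse = np.sum((y - y_hat) ** 2)
    sst = np.sum((y - np.mean(y)) ** 2)
    r2 = 1 - sse / sst
    n = y.size
    r2_adj = 1 - (1 - r2) * (n - 1) / (n - k - 1)
    return r2, r2_adj

y = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
noisy = y + np.array([0.5, -0.5, 0.5, -0.5, 0.0])
r2_perfect, _ = r2_and_adjusted(y, y.copy(), k=1)  # a perfect fit gives R^2 = 1
r2, r2_adj = r2_and_adjusted(y, noisy, k=1)
print(r2_perfect)
print(round(r2, 4), round(r2_adj, 4))
```

Note that the adjusted value is always at most the raw R-squared, since the (n-1)/(n-k-1) factor penalizes each additional regressor.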
### 4. Model Diagnostics
Linear regression depends on meeting the assumptions of the model. While we have not yet talked about those assumptions, your goal is to research them and develop an intuitive understanding of why they make sense. We will walk through this portion in the Multiple Linear Regression Project.
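As a preview of that diagnostic work, one common check is to inspect the residuals: under the standard linear-regression assumptions they should have roughly zero mean and constant variance across fitted values. A minimal sketch of such a check on synthetic data (all names hypothetical):

```python
import numpy as np

def residual_diagnostics(y, y_hat):
    """Basic residual checks: the mean is ~0 for an OLS fit with an intercept,
    and the spread should not grow systematically with the fitted values."""
    resid = y - y_hat
    # Split residuals by low/high fitted values to compare their spread
    order = np.argsort(y_hat)
    half = len(order) // 2
    low_spread = np.std(resid[order[:half]])
    high_spread = np.std(resid[order[half:]])
    return resid.mean(), low_spread, high_spread

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)
y = 3.0 + 2.0 * x + rng.normal(0, 1.0, x.size)  # homoscedastic noise
# OLS fit via least squares on a design matrix with an intercept column
X = np.column_stack([np.ones_like(x), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
y_hat = X @ beta
mean_resid, low_s, high_s = residual_diagnostics(y, y_hat)
print(round(mean_resid, 6), round(low_s, 2), round(high_s, 2))
```

With homoscedastic noise the two spread estimates come out similar; a large gap between them would hint at heteroscedasticity, one of the assumption violations to look for.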
[SCEC BP3-QD](https://strike.scec.org/cvws/seas/download/SEAS_BP3.pdf) document is here.
# [DRAFT] Quasidynamic thrust fault earthquake cycles (plane strain)
## Summary
* Most of the code here follows almost exactly from [the previous section on strike-slip/antiplane earthquake cycles](c1qbx/part6_qd).
* Since the fault motion is in the same plane as the fault normal vectors, we are no longer operating in an antiplane approximation. Instead, we use plane strain elasticity, a different 2D reduction of full 3D elasticity.
* One key difference is the vector nature of the displacement and the tensor nature of the stress. We must always make sure we are dealing with tractions on the correct surface.
* We construct a mesh, build our discrete boundary integral operators, step through time and then compare against other benchmark participants' results.
Does this section need detailed explanation or is it best left as lonely code? Most of the explanation would be redundant with the antiplane QD document.
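The traction bookkeeping mentioned above reduces to $t = \sigma \cdot n$: given the plane-strain stress components at a fault point and the fault's unit normal, the traction vector and its normal/shear projections follow directly. A minimal sketch of that projection (a generic illustration, not part of the tectosaur2 API):

```python
import numpy as np

def traction_components(sxx, syy, sxy, normal):
    """Project a plane-strain stress tensor onto a surface with unit normal
    `normal`, returning (normal traction, in-plane shear traction)."""
    nx, ny = normal
    stress = np.array([[sxx, sxy],
                       [sxy, syy]])
    t = stress @ np.array([nx, ny])        # traction vector t = sigma . n
    sigma_n = t @ np.array([nx, ny])       # normal component
    tau = t @ np.array([-ny, nx])          # in-plane shear component
    return sigma_n, tau

# Uniaxial compression sxx = -1 acting on a vertical surface (normal along x):
print(traction_components(-1.0, 0.0, 0.0, (1.0, 0.0)))  # → (-1.0, 0.0)
```

The same projection appears below in the `normal_mult` construction that converts the stress influence matrix into a traction influence matrix.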
```
from tectosaur2.nb_config import setup
setup()
import sympy as sp
import numpy as np
import matplotlib.pyplot as plt
from tectosaur2 import gauss_rule, refine_surfaces, integrate_term, panelize_symbolic_surface
from tectosaur2.elastic2d import elastic_t, elastic_h
from tectosaur2.rate_state import MaterialProps, qd_equation, solve_friction, aging_law
surf_half_L = 1000000
fault_length = 40000
max_panel_length = 400
n_fault = 400
mu = shear_modulus = 3.2e10
nu = 0.25
quad_rule = gauss_rule(6)
sp_t = sp.var("t")
angle_rad = sp.pi / 6
sp_x = (sp_t + 1) / 2 * sp.cos(angle_rad) * fault_length
sp_y = -(sp_t + 1) / 2 * sp.sin(angle_rad) * fault_length
fault = panelize_symbolic_surface(
sp_t, sp_x, sp_y,
quad_rule,
n_panels=n_fault
)
free = refine_surfaces(
[
(sp_t, -sp_t * surf_half_L, 0 * sp_t) # free surface
],
quad_rule,
control_points = [
# nearfield surface panels and fault panels will be limited to 200m
# at 200m per panel, we have ~40m per solution node because the panels
# have 5 nodes each
(0, 0, 1.5 * fault_length, max_panel_length),
(0, 0, 0.2 * fault_length, 1.5 * fault_length / (n_fault)),
# farfield panels will be limited to 200000 m per panel at most
(0, 0, surf_half_L, 50000),
]
)
print(
f"The free surface mesh has {free.n_panels} panels with a total of {free.n_pts} points."
)
print(
f"The fault mesh has {fault.n_panels} panels with a total of {fault.n_pts} points."
)
plt.plot(free.pts[:,0]/1000, free.pts[:,1]/1000, 'k-o')
plt.plot(fault.pts[:,0]/1000, fault.pts[:,1]/1000, 'r-o')
plt.xlabel(r'$x ~ \mathrm{(km)}$')
plt.ylabel(r'$y ~ \mathrm{(km)}$')
plt.axis('scaled')
plt.xlim([-100, 100])
plt.ylim([-80, 20])
plt.show()
```
And, to start off the integration, we'll construct the operators necessary for solving for free surface displacement from fault slip.
```
singularities = np.array(
[
[-surf_half_L, 0],
[surf_half_L, 0],
[0, 0],
[float(sp_x.subs(sp_t,1)), float(sp_y.subs(sp_t,1))],
]
)
(free_disp_to_free_disp, fault_slip_to_free_disp), report = integrate_term(
elastic_t(nu), free.pts, free, fault, singularities=singularities, safety_mode=True, return_report=True
)
fault_slip_to_free_disp = fault_slip_to_free_disp.reshape((-1, 2 * fault.n_pts))
free_disp_to_free_disp = free_disp_to_free_disp.reshape((-1, 2 * free.n_pts))
free_disp_solve_mat = (
np.eye(free_disp_to_free_disp.shape[0]) + free_disp_to_free_disp
)
from tectosaur2.elastic2d import ElasticH
(free_disp_to_fault_stress, fault_slip_to_fault_stress), report = integrate_term(
ElasticH(nu, d_cutoff=8.0),
# elastic_h(nu),
fault.pts,
free,
fault,
tol=1e-12,
safety_mode=True,
singularities=singularities,
return_report=True,
)
fault_slip_to_fault_stress *= shear_modulus
free_disp_to_fault_stress *= shear_modulus
```
**We're not achieving the tolerance we asked for!!**
Hypersingular integrals can be tricky but I think this is solvable.
```
report['integration_error'].max()
A = -fault_slip_to_fault_stress.reshape((-1, 2 * fault.n_pts))
B = -free_disp_to_fault_stress.reshape((-1, 2 * free.n_pts))
C = fault_slip_to_free_disp
Dinv = np.linalg.inv(free_disp_solve_mat)
total_fault_slip_to_fault_stress = A - B.dot(Dinv.dot(C))
nx = fault.normals[:, 0]
ny = fault.normals[:, 1]
normal_mult = np.transpose(np.array([[nx, 0 * nx, ny], [0 * nx, ny, nx]]), (2, 0, 1))
total_fault_slip_to_fault_traction = np.sum(
total_fault_slip_to_fault_stress.reshape((-1, 3, fault.n_pts, 2))[:, None, :, :, :]
* normal_mult[:, :, :, None, None],
axis=2,
).reshape((-1, 2 * fault.n_pts))
```
## Rate and state friction
```
siay = 31556952 # seconds in a year
density = 2670 # rock density (kg/m^3)
cs = np.sqrt(shear_modulus / density) # Shear wave speed (m/s)
Vp = 1e-9 # Rate of plate motion
sigma_n0 = 50e6 # Normal stress (Pa)
# parameters describing "a", the coefficient of the direct velocity strengthening effect
a0 = 0.01
amax = 0.025
H = 15000
h = 3000
fx = fault.pts[:, 0]
fy = fault.pts[:, 1]
fd = -np.sqrt(fx ** 2 + fy ** 2)
a = np.where(
fd > -H, a0, np.where(fd > -(H + h), a0 + (amax - a0) * (fd + H) / -h, amax)
)
mp = MaterialProps(a=a, b=0.015, Dc=0.008, f0=0.6, V0=1e-6, eta=shear_modulus / (2 * cs))
plt.figure(figsize=(3, 5))
plt.plot(mp.a, fd/1000, label='a')
plt.plot(np.full(fy.shape[0], mp.b), fd/1000, label='b')
plt.xlim([0, 0.03])
plt.ylabel('depth')
plt.legend()
plt.show()
mesh_L = np.max(np.abs(np.diff(fd)))
Lb = shear_modulus * mp.Dc / (sigma_n0 * mp.b)
hstar = (np.pi * shear_modulus * mp.Dc) / (sigma_n0 * (mp.b - mp.a))
mesh_L, Lb, np.min(hstar[hstar > 0])
```
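For reference, the classic rate-and-state friction law and Dieterich aging law behind helpers like `aging_law` and `solve_friction` can be sketched as follows. This is a generic textbook formulation with hypothetical names; tectosaur2's implementation uses a regularized form and a log-transformed state variable, so it is not identical:

```python
import numpy as np

def aging_law_theta(V, theta, Dc):
    """Dieterich aging law for the state variable theta: d(theta)/dt = 1 - V*theta/Dc."""
    return 1.0 - V * theta / Dc

def rs_friction(V, theta, a=0.01, b=0.015, Dc=0.008, V0=1e-6, f0=0.6):
    """Classic rate-and-state friction coefficient."""
    return f0 + a * np.log(V / V0) + b * np.log(V0 * theta / Dc)

# At steady state (d theta/dt = 0), theta = Dc / V, and the friction coefficient
# reduces to f0 + (a - b) * log(V / V0): velocity weakening when b > a.
V, Dc = 1e-9, 0.008
theta_ss = Dc / V
print(round(rs_friction(V, theta_ss), 4))
```

With the parameters above (b > a in the seismogenic zone), steady-state friction decreases with slip rate, which is what allows ruptures to nucleate where `hstar` is positive.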
## Quasidynamic earthquake cycle derivatives
```
from scipy.optimize import fsolve
import copy
init_state_scalar = fsolve(lambda S: aging_law(mp, Vp, S), 0.7)[0]
mp_amax = copy.copy(mp)
mp_amax.a=amax
tau_amax = -qd_equation(mp_amax, sigma_n0, 0, Vp, init_state_scalar)
init_state = np.log((2*mp.V0/Vp)*np.sinh((tau_amax - mp.eta*Vp) / (mp.a*sigma_n0))) * mp.a
init_tau = np.full(fault.n_pts, tau_amax)
init_sigma = np.full(fault.n_pts, sigma_n0)
init_slip_deficit = np.zeros(fault.n_pts)
init_conditions = np.concatenate((init_slip_deficit, init_state))
class SystemState:
V_old = np.full(fault.n_pts, Vp)
state = None
def calc(self, t, y, verbose=False):
# Separate the slip_deficit and state sub components of the
# time integration state.
slip_deficit = y[: init_slip_deficit.shape[0]]
state = y[init_slip_deficit.shape[0] :]
# If the state values are bad, then the adaptive integrator probably
# took a bad step.
if np.any((state < 0) | (state > 2.0)):
print("bad state")
return False
# The big three lines solving for quasistatic shear stress, slip rate
# and state evolution
sd_vector = np.stack((slip_deficit * -ny, slip_deficit * nx), axis=1).ravel()
traction = total_fault_slip_to_fault_traction.dot(sd_vector).reshape((-1, 2))
delta_sigma_qs = np.sum(traction * np.stack((nx, ny), axis=1), axis=1)
delta_tau_qs = -np.sum(traction * np.stack((-ny, nx), axis=1), axis=1)
tau_qs = init_tau + delta_tau_qs
sigma_qs = init_sigma + delta_sigma_qs
V = solve_friction(mp, sigma_qs, tau_qs, self.V_old, state)
if not V[2]:
print("convergence failed")
return False
V=V[0]
if not np.all(np.isfinite(V)):
print("infinite V")
return False
dstatedt = aging_law(mp, V, state)
self.V_old = V
slip_deficit_rate = Vp - V
out = (
slip_deficit,
state,
delta_sigma_qs,
sigma_qs,
delta_tau_qs,
tau_qs,
V,
slip_deficit_rate,
dstatedt,
)
self.data = out
return self.data
def plot_system_state(t, SS, xlim=None):
"""This is just a helper function that creates some rough plots of the
current state to help with debugging"""
(
slip_deficit,
state,
delta_sigma_qs,
sigma_qs,
delta_tau_qs,
tau_qs,
V,
slip_deficit_rate,
dstatedt,
) = SS
slip = Vp * t - slip_deficit
fd = -np.linalg.norm(fault.pts, axis=1)
plt.figure(figsize=(15, 9))
plt.suptitle(f"t={t/siay}")
plt.subplot(3, 3, 1)
plt.title("slip")
plt.plot(fd, slip)
plt.xlim(xlim)
plt.subplot(3, 3, 2)
plt.title("slip deficit")
plt.plot(fd, slip_deficit)
plt.xlim(xlim)
# plt.subplot(3, 3, 2)
# plt.title("slip deficit rate")
# plt.plot(fd, slip_deficit_rate)
# plt.xlim(xlim)
# plt.subplot(3, 3, 2)
# plt.title("strength")
# plt.plot(fd, tau_qs/sigma_qs)
# plt.xlim(xlim)
plt.subplot(3, 3, 3)
# plt.title("log V")
# plt.plot(fd, np.log10(V))
plt.title("V")
plt.plot(fd, V)
plt.xlim(xlim)
plt.subplot(3, 3, 4)
plt.title(r"$\sigma_{qs}$")
plt.plot(fd, sigma_qs)
plt.xlim(xlim)
plt.subplot(3, 3, 5)
plt.title(r"$\tau_{qs}$")
plt.plot(fd, tau_qs, 'k-o')
plt.xlim(xlim)
plt.subplot(3, 3, 6)
plt.title("state")
plt.plot(fd, state)
plt.xlim(xlim)
plt.subplot(3, 3, 7)
plt.title(r"$\Delta\sigma_{qs}$")
plt.plot(fd, delta_sigma_qs)
plt.hlines([0], [fd[-1]], [fd[0]])
plt.xlim(xlim)
plt.subplot(3, 3, 8)
plt.title(r"$\Delta\tau_{qs}$")
plt.plot(fd, delta_tau_qs)
plt.hlines([0], [fd[-1]], [fd[0]])
plt.xlim(xlim)
plt.subplot(3, 3, 9)
plt.title("dstatedt")
plt.plot(fd, dstatedt)
plt.xlim(xlim)
plt.tight_layout()
plt.show()
def calc_derivatives(state, t, y):
"""
This helper function calculates the system state and then extracts the
relevant derivatives that the integrator needs. It also intentionally
returns infinite derivatives when the `y` vector provided by the integrator
is invalid.
"""
if not np.all(np.isfinite(y)):
return np.inf * y
state_vecs = state.calc(t, y)
if not state_vecs:
return np.inf * y
derivatives = np.concatenate((state_vecs[-2], state_vecs[-1]))
return derivatives
```
## Integrating through time
```
%%time
from scipy.integrate import RK23, RK45
# We use a 5th-order adaptive Runge-Kutta method (RK45) and pass the derivative function to it.
# The relative tolerance is set to 1e-11 so that even the rapid rupture phases are resolved accurately.
state = SystemState()
derivs = lambda t, y: calc_derivatives(state, t, y)
integrator = RK45
atol = Vp * 1e-6
rtol = 1e-11
rk = integrator(derivs, 0, init_conditions, 1e50, atol=atol, rtol=rtol)
# Set the initial time step to one day.
rk.h_abs = 60 * 60 * 24
# Integrate for 1000 years.
max_T = 1000 * siay
n_steps = 500000
t_history = [0]
y_history = [init_conditions.copy()]
for i in range(n_steps):
# Take a time step and store the result
    if rk.step() is not None:
raise Exception("TIME STEPPING FAILED")
t_history.append(rk.t)
y_history.append(rk.y.copy())
# Print the time every 5000 steps
if i % 5000 == 0:
print(f"step={i}, time={rk.t / siay} yrs, step={(rk.t - t_history[-2]) / siay}")
if rk.t > max_T:
break
y_history = np.array(y_history)
t_history = np.array(t_history)
```
## Plotting the results
Now that we've solved for 1000 years of fault slip evolution, let's plot some of the results. I'll start with a super simple plot of the maximum log slip rate over time.
```
derivs_history = np.diff(y_history, axis=0) / np.diff(t_history)[:, None]
max_vel = np.max(np.abs(derivs_history), axis=1)
plt.plot(t_history[1:] / siay, np.log10(max_vel))
plt.xlabel(r'$t ~~ \mathrm{(yrs)}$')
plt.ylabel(r'$\log_{10}(V)$')
plt.show()
```
And next, we'll make the classic plot showing the spatial distribution of slip over time:
- the blue lines show interseismic slip evolution and are plotted every fifteen years
- the red lines show evolution during rupture every three seconds.
```
plt.figure(figsize=(10, 4))
last_plt_t = -1000
last_plt_slip = init_slip_deficit
event_times = []
for i in range(len(y_history) - 1):
y = y_history[i]
t = t_history[i]
slip_deficit = y[: init_slip_deficit.shape[0]]
should_plot = False
    # Plot a red line every three seconds if the slip rate is over 0.1 mm/s.
if (
max_vel[i] >= 0.0001 and t - last_plt_t > 3
):
if len(event_times) == 0 or t - event_times[-1] > siay:
event_times.append(t)
should_plot = True
color = "r"
# Plot a blue line every fifteen years during the interseismic period
if t - last_plt_t > 15 * siay:
should_plot = True
color = "b"
if should_plot:
# Convert from slip deficit to slip:
slip = -slip_deficit + Vp * t
plt.plot(slip, fd / 1000.0, color + "-", linewidth=0.5)
last_plt_t = t
last_plt_slip = slip
plt.xlim([0, np.max(last_plt_slip)])
plt.ylim([-40, 0])
plt.ylabel(r"$\textrm{z (km)}$")
plt.xlabel(r"$\textrm{slip (m)}$")
plt.tight_layout()
plt.savefig("halfspace.png", dpi=300)
plt.show()
```
And a plot of recurrence interval:
```
plt.title("Recurrence interval")
plt.plot(np.diff(event_times) / siay, "k-*")
plt.xticks(np.arange(0, 10, 1))
plt.yticks(np.arange(75, 80, 0.5))
plt.xlabel("Event number")
plt.ylabel("Time between events (yr)")
plt.show()
```
## Comparison against SCEC SEAS results
```
ozawa_data = np.loadtxt("ozawa7500.txt")
ozawa_slip_rate = 10 ** ozawa_data[:, 2]
ozawa_stress = ozawa_data[:, 3]
t_start_idx = np.argmax(max_vel > 1e-4)
t_end_idx = np.argmax(max_vel[t_start_idx:] < 1e-6)
n_steps = t_end_idx - t_start_idx
t_chunk = t_history[t_start_idx : t_end_idx]
shear_chunk = []
slip_rate_chunk = []
for i in range(n_steps):
system_state = SystemState().calc(t_history[t_start_idx + i], y_history[t_start_idx + i])
slip_deficit, state, delta_sigma_qs, sigma_qs, delta_tau_qs, tau_qs, V, slip_deficit_rate, dstatedt = system_state
shear_chunk.append((tau_qs - mp.eta * V))
slip_rate_chunk.append(V)
shear_chunk = np.array(shear_chunk)
slip_rate_chunk = np.array(slip_rate_chunk)
fault_idx = np.argmax((-7450 > fd) & (fd > -7550))
VAvg = np.mean(slip_rate_chunk[:, fault_idx:(fault_idx+2)], axis=1)
SAvg = np.mean(shear_chunk[:, fault_idx:(fault_idx+2)], axis=1)
fault_idx
t_align = t_chunk[np.argmax(VAvg > 0.2)]
ozawa_t_align = np.argmax(ozawa_slip_rate > 0.2)
for lims in [(-1, 1), (-15, 30)]:
plt.figure(figsize=(12, 8))
plt.subplot(2, 1, 1)
plt.plot(t_chunk - t_align, SAvg / 1e6, "k-o", markersize=0.5, linewidth=0.5, label='here')
plt.plot(
ozawa_data[:, 0] - ozawa_data[ozawa_t_align, 0],
ozawa_stress,
"b-*",
markersize=0.5,
linewidth=0.5,
label='ozawa'
)
plt.legend()
plt.xlim(lims)
plt.xlabel("Time (s)")
plt.ylabel("Shear Stress (MPa)")
# plt.show()
plt.subplot(2, 1, 2)
plt.plot(t_chunk - t_align, VAvg, "k-o", markersize=0.5, linewidth=0.5, label='here')
plt.plot(
ozawa_data[:, 0] - ozawa_data[ozawa_t_align, 0],
ozawa_slip_rate[:],
"b-*",
markersize=0.5,
linewidth=0.5,
label='ozawa'
)
plt.legend()
plt.xlim(lims)
plt.xlabel("Time (s)")
plt.ylabel("Slip rate (m/s)")
plt.tight_layout()
plt.show()
```
# Python Machine Learning (3rd Edition)
# Chapter 9 - Embedding a Machine Learning Model into a Web Application
**You can view this notebook with the Jupyter notebook viewer (nbviewer.jupyter.org) or run it in Google Colab (colab.research.google.com) via the links below.**
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://nbviewer.org/github/rickiepark/python-machine-learning-book-3rd-edition/blob/master/ch09/ch09.ipynb"><img src="https://jupyter.org/assets/share.png" width="60" />View in Jupyter notebook viewer</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/rickiepark/python-machine-learning-book-3rd-edition/blob/master/ch09/ch09.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
</table>
### Table of Contents
- Chapter 8 recap - Training a model for movie review classification
- Serializing fitted scikit-learn estimators
- Setting up a SQLite database for data storage
- Developing a web application with Flask
- Our first Flask application
- Form validation and rendering
- Turning the movie review classifier into a web application
- Deploying the web application to a public server
- Updating the movie classifier
- Summary
```
# Install the latest version of scikit-learn when running on Colab.
!pip install --upgrade scikit-learn
from IPython.display import Image
```
The Flask web application code can be found in the following directories:
- `1st_flask_app_1/`: a simple Flask web application
- `1st_flask_app_2/`: an extended version of `1st_flask_app_1` with form validation and rendering
- `movieclassifier/`: the movie review classifier embedded in a web application
- `movieclassifier_with_update/`: same as `movieclassifier`, but uses a SQLite database for initialization
To run a web application locally, `cd` into the respective directory (as listed above) and run the main application script:
cd ./1st_flask_app_1
python app.py
You should see output like the following in the terminal:
* Running on http://127.0.0.1:5000/
* Restarting with reloader
Open a web browser and enter the address shown in the terminal (typically http://127.0.0.1:5000/) to access the web application.
**A demo of the example application built in this tutorial is available at: http://haesun.pythonanywhere.com/**.
<br>
<br>
# Chapter 8 recap - Training a model for movie review classification
This section reuses the logistic regression model trained in the last section of Chapter 8. Execute the following code blocks to train the model that we will use in the next section.
**Note**
The following code uses the `movie_data.csv` dataset created in Chapter 8.
**Run the following cell if you are using Colab.**
```
!wget https://github.com/rickiepark/python-machine-learning-book-3rd-edition/raw/master/ch09/movie_data.csv.gz
import gzip
with gzip.open('movie_data.csv.gz') as f_in, open('movie_data.csv', 'wb') as f_out:
f_out.writelines(f_in)
import nltk
nltk.download('stopwords')
import numpy as np
import re
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
stop = stopwords.words('english')
porter = PorterStemmer()
def tokenizer(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text.lower())
text = re.sub('[\W]+', ' ', text.lower()) + ' '.join(emoticons).replace('-', '')
tokenized = [w for w in text.split() if w not in stop]
return tokenized
def stream_docs(path):
with open(path, 'r', encoding='utf-8') as csv:
next(csv) # skip header
for line in csv:
text, label = line[:-3], int(line[-2])
yield text, label
next(stream_docs(path='movie_data.csv'))
def get_minibatch(doc_stream, size):
docs, y = [], []
try:
for _ in range(size):
text, label = next(doc_stream)
docs.append(text)
y.append(label)
except StopIteration:
return None, None
return docs, y
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier
vect = HashingVectorizer(decode_error='ignore',
n_features=2**21,
preprocessor=None,
tokenizer=tokenizer)
clf = SGDClassifier(loss='log', random_state=1, max_iter=1)
doc_stream = stream_docs(path='movie_data.csv')
```
`pyprind` is a utility for displaying progress bars in Jupyter notebooks. To install the `pyprind` package, run the following cell.
```
!pip install pyprind
import pyprind
pbar = pyprind.ProgBar(45)
classes = np.array([0, 1])
for _ in range(45):
X_train, y_train = get_minibatch(doc_stream, size=1000)
if not X_train:
break
X_train = vect.transform(X_train)
clf.partial_fit(X_train, y_train, classes=classes)
pbar.update()
X_test, y_test = get_minibatch(doc_stream, size=5000)
X_test = vect.transform(X_test)
print('Accuracy: %.3f' % clf.score(X_test, y_test))
clf = clf.partial_fit(X_test, y_test)
```
### Note
Since creating the pickle files can be a bit tricky, I've added a simple test script to the `pickle-test-scripts/` directory that checks whether your environment is set up correctly. It essentially contains a subset of `movie_data` and a condensed version of the relevant code from `ch08`.
Running
python pickle-dump-test.py
trains a small classification model on `movie_data_small.csv` and creates two pickle files:
stopwords.pkl
classifier.pkl
Then, running
python pickle-load-test.py
should print the following two lines:
Prediction: positive
Probability: 85.71%
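The dump/load pattern those scripts exercise is the standard `pickle` round trip. A minimal, self-contained sketch using plain Python objects standing in for the stop-word list and classifier:

```python
import os
import pickle
import tempfile

# Stand-ins for the stop-word list and classifier parameters
stop_words = ['a', 'the', 'and']
model_params = {'loss': 'log', 'random_state': 1}

with tempfile.TemporaryDirectory() as dest:
    # Serialize with protocol 4, as in the notebook
    for name, obj in [('stopwords.pkl', stop_words), ('classifier.pkl', model_params)]:
        with open(os.path.join(dest, name), 'wb') as f:
            pickle.dump(obj, f, protocol=4)
    # Deserialize and verify the round trip
    with open(os.path.join(dest, 'stopwords.pkl'), 'rb') as f:
        loaded = pickle.load(f)
print(loaded == stop_words)  # → True
```

The same pattern works for fitted scikit-learn estimators, with the caveat that unpickling requires a compatible scikit-learn version.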
<br>
<br>
# Serializing fitted scikit-learn estimators
After training the logistic regression model above, we save the classifier, the stop-words, the Porter stemmer, and the `HashingVectorizer` as serialized objects on the local disk so that we can use the fitted classifier in the web application later.
```
import pickle
import os
dest = os.path.join('movieclassifier', 'pkl_objects')
if not os.path.exists(dest):
os.makedirs(dest)
pickle.dump(stop, open(os.path.join(dest, 'stopwords.pkl'), 'wb'), protocol=4)
pickle.dump(clf, open(os.path.join(dest, 'classifier.pkl'), 'wb'), protocol=4)
```
Next, we save the `HashingVectorizer` in a separate file so that we can import it later.
```
%%writefile movieclassifier/vectorizer.py
from sklearn.feature_extraction.text import HashingVectorizer
import re
import os
import pickle
cur_dir = os.path.dirname(__file__)
stop = pickle.load(open(
os.path.join(cur_dir,
'pkl_objects',
'stopwords.pkl'), 'rb'))
def tokenizer(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)',
text.lower())
text = re.sub('[\W]+', ' ', text.lower()) \
+ ' '.join(emoticons).replace('-', '')
tokenized = [w for w in text.split() if w not in stop]
return tokenized
vect = HashingVectorizer(decode_error='ignore',
n_features=2**21,
preprocessor=None,
tokenizer=tokenizer)
```
After executing the preceding code cells, you can restart the IPython notebook kernel to check that the objects were serialized correctly.
First, change the current Python directory to `movieclassifier`:
```
import os
os.chdir('movieclassifier')
import pickle
import re
import os
from vectorizer import vect
clf = pickle.load(open(os.path.join('pkl_objects', 'classifier.pkl'), 'rb'))
import numpy as np
label = {0:'negative', 1:'positive'}
example = ["I love this movie. It's amazing."]
X = vect.transform(example)
print('Prediction: %s\nProbability: %.2f%%' %\
(label[clf.predict(X)[0]],
np.max(clf.predict_proba(X))*100))
```
<br>
<br>
# Setting up a SQLite database for data storage
Before running this code, make sure the current working directory is `movieclassifier`.
```
os.getcwd()
import sqlite3
import os
conn = sqlite3.connect('reviews.sqlite')
c = conn.cursor()
c.execute('DROP TABLE IF EXISTS review_db')
c.execute('CREATE TABLE review_db (review TEXT, sentiment INTEGER, date TEXT)')
example1 = 'I love this movie'
c.execute("INSERT INTO review_db (review, sentiment, date) VALUES (?, ?, DATETIME('now'))", (example1, 1))
example2 = 'I disliked this movie'
c.execute("INSERT INTO review_db (review, sentiment, date) VALUES (?, ?, DATETIME('now'))", (example2, 0))
conn.commit()
conn.close()
conn = sqlite3.connect('reviews.sqlite')
c = conn.cursor()
c.execute("SELECT * FROM review_db WHERE date BETWEEN '2017-01-01 10:10:10' AND DATETIME('now')")
results = c.fetchall()
conn.close()
print(results)
Image(url='https://git.io/Jts3V', width=700)
```
<br>
# Developing a web application with Flask
...
## Our first Flask application
...
```
Image(url='https://git.io/Jts3o', width=700)
```
## Form validation and rendering
```
Image(url='https://git.io/Jts3K', width=400)
Image(url='https://git.io/Jts36', width=400)
```
<br>
<br>
## Screen summary
```
Image(url='https://git.io/Jts3P', width=800)
Image(url='https://git.io/Jts3X', width=800)
Image(url='https://git.io/Jts31', width=400)
```
# Turning the movie review classifier into a web application
```
Image(url='https://git.io/Jts3M', width=400)
Image(url='https://git.io/Jts3D', width=400)
Image(url='https://git.io/Jts3y', width=400)
Image(url='https://git.io/Jts3S', width=200)
Image(url='https://git.io/Jts32', width=400)
```
<br>
<br>
# Deploying the web application to a public server
```
Image(url='https://git.io/Jts39', width=600)
```
<br>
<br>
## Updating the movie classifier
Use the `movieclassifier_with_update` directory from the downloaded GitHub repository (otherwise, make a copy of the `movieclassifier` directory and use that).
**Run the following cell if you are using Colab.**
```
!cp -r ../movieclassifier ../movieclassifier_with_update
import shutil
os.chdir('..')
if not os.path.exists('movieclassifier_with_update'):
os.mkdir('movieclassifier_with_update')
os.chdir('movieclassifier_with_update')
if not os.path.exists('pkl_objects'):
os.mkdir('pkl_objects')
shutil.copyfile('../movieclassifier/pkl_objects/classifier.pkl',
'./pkl_objects/classifier.pkl')
shutil.copyfile('../movieclassifier/reviews.sqlite',
'./reviews.sqlite')
```
Define a function that updates the classifier with the data stored in the SQLite database:
```
import pickle
import sqlite3
import numpy as np
# Import the HashingVectorizer from the local directory
from vectorizer import vect
def update_model(db_path, model, batch_size=10000):
conn = sqlite3.connect(db_path)
c = conn.cursor()
c.execute('SELECT * from review_db')
results = c.fetchmany(batch_size)
while results:
data = np.array(results)
X = data[:, 0]
y = data[:, 1].astype(int)
classes = np.array([0, 1])
X_train = vect.transform(X)
        model.partial_fit(X_train, y, classes=classes)
results = c.fetchmany(batch_size)
conn.close()
return None
```
Update the model:
```
cur_dir = '.'
# Use the following path instead if you insert this code into the app.py file.
# import os
# cur_dir = os.path.dirname(__file__)
clf = pickle.load(open(os.path.join(cur_dir,
'pkl_objects',
'classifier.pkl'), 'rb'))
db = os.path.join(cur_dir, 'reviews.sqlite')
update_model(db_path=db, model=clf, batch_size=10000)
# Uncomment the following lines to update the classifier.pkl file.
# pickle.dump(clf, open(os.path.join(cur_dir,
# 'pkl_objects', 'classifier.pkl'), 'wb')
# , protocol=4)
```
<br>
<br>
```
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
df = pd.read_csv('Data/20200403-WHO.csv')
df
# Drop the international conveyance row (the Diamond Princess cruise ship)
df = df[df['Country/Territory'] != 'conveyance (Diamond']
death_rate = df['Total Deaths']/df['Total Confirmed']*100
df['Death Rate'] = death_rate
df
countries_infected = len(df)
print('The total number of countries infected is:',countries_infected)
df = df.sort_values(by=['Death Rate'],ascending=False)
df[0:30]
minimum_number_cases = 1000  # define the minimum number of cases here
dfMinNumCases = df[df['Total Confirmed'] > minimum_number_cases]
dfMinNumCases = dfMinNumCases.reset_index(drop=True)
dfMinNumCases.index = np.arange(1, (len(dfMinNumCases)+1))
dfMinNumCases[0:30]
#matplotlib defaults
sns.set(style="whitegrid")
top15_deathrate = dfMinNumCases[0:15]
death_rate = top15_deathrate.round({'Death Rate':2})
death_rate = death_rate['Death Rate']
plt.figure(figsize=(15,10))
plt.barh(top15_deathrate['Country/Territory'],top15_deathrate['Death Rate'],height=0.7, color='red')
plt.title('Death Rate per Country (03/04/2020)',fontsize=25)
plt.xlabel('Death Rate [%]',fontsize=18)
plt.ylabel('Country',fontsize=18)
plt.gca().invert_yaxis()
for i in range (0,15):
plt.text(x=death_rate.iloc[i]+0.4, y=i , s=death_rate.iloc[i],horizontalalignment='center',verticalalignment='center', fontsize=14)
plt.show()
#seaborn defaults
f, ax = plt.subplots(figsize=(15, 12))
sns.set(style="whitegrid")
sns.set_color_codes("muted")
sns.barplot(x='Death Rate',y='Country/Territory', data=top15_deathrate ,
label="Deaths", color="red")
plt.title('Death Rate per Country (03/04/2020)',fontsize=25)
plt.xlabel('Death Rate [%]',fontsize=18)
plt.ylabel('Country',fontsize=18)
for i in range (0,15):
plt.text(x=death_rate.iloc[i]+0.4, y=i , s=death_rate.iloc[i],horizontalalignment='center',verticalalignment='center', fontsize=16)
plt.savefig('Graphs/20200403_DeathRatePerCountry.png', bbox_inches='tight')
plt.show()
#matplotlib defaults
top15_confirmed = top15_deathrate.sort_values(by=['Total Confirmed'],ascending=False)
countries = np.array(top15_confirmed['Country/Territory'])
confirmed = np.array(top15_confirmed['Total Confirmed'])
deaths = np.array(top15_confirmed['Total Deaths'])
difference = confirmed - deaths
plt.figure(figsize=(15,10))
p1 = plt.barh(countries,deaths, color='red')
p2 = plt.barh(countries,difference,left=deaths, color='yellow')
plt.title('Total Number of Cases/Deaths (03/04/2020)',fontsize=25)
plt.xlabel('Cases/Deaths',fontsize=18)
plt.ylabel('Country',fontsize=18)
plt.legend((p1[0], p2[0]), ('Deaths', 'Confirmed'), loc='lower right')
plt.gca().invert_yaxis()
for i in range (0,15):
plt.text(x=deaths[i]+1900, y=i , s=deaths[i],horizontalalignment='center',verticalalignment='center', color='red',fontsize=14)
plt.text(x=confirmed[i]+4000, y=i , s=confirmed[i],horizontalalignment='center',verticalalignment='center', fontsize=14)
plt.show()
#seaborn defaults
sns.set(style="whitegrid")
f, ax = plt.subplots(figsize=(15, 6))
sns.set_color_codes("pastel")
sns.barplot(x='Total Confirmed',y='Country/Territory', data=top15_confirmed,
label="Confirmed", color="yellow")
sns.set_color_codes("muted")
sns.barplot(x='Total Deaths',y='Country/Territory', data=top15_confirmed ,
label="Deaths", color="red")
plt.title('Total Number of Cases/Deaths in the Top15 Death Rate Countries (03/04/2020)',fontsize=18)
ax.legend(ncol=2, loc="lower right", frameon=True)
ax.set(ylabel="Countries/Territory",
xlabel="Cases/Deaths")
for i in range (0,15):
plt.text(x=deaths[i]+1900, y=i , s=deaths[i],horizontalalignment='center',verticalalignment='center', color='red',fontsize=14)
plt.text(x=confirmed[i]+4000, y=i , s=confirmed[i],horizontalalignment='center',verticalalignment='center', fontsize=14)
sns.despine(left=True, bottom=True)
plt.savefig('Graphs/20200403_TotalNumberCasesDeaths.png', bbox_inches='tight')
dfDSLRC = df.sort_values(by=['Days since last reported case'],ascending=False)#dfDSLRC = dataframe Days since last reported case
dfDSLRC[0:30]
#seaborn defaults
top15DSLRC = dfDSLRC[0:15].sort_values(by=['Days since last reported case'])
DSLRC = top15DSLRC['Days since last reported case']
f, ax = plt.subplots(figsize=(15, 12))
sns.set(style="whitegrid")
sns.set_color_codes("muted")
sns.barplot(x='Country/Territory',y='Days since last reported case', data=top15DSLRC ,
label="Days since last reported case", color="blue")
plt.title('Days since Last Reported Case per Country (03/04/2020)',fontsize=24)
plt.ylabel('Days since last reported case',fontsize=18)
plt.xlabel('Countries/Territory',fontsize=18)
plt.xticks(rotation='vertical')
for i in range (0,15):
plt.text(x=i, y=DSLRC.iloc[i]+0.4 , s=DSLRC.iloc[i],horizontalalignment='center',verticalalignment='center', fontsize=16)
plt.savefig('Graphs/20200403_DaysSinceLast.png', bbox_inches='tight')
plt.show()
#seaborn defaults
confirmedDSLRC = np.array(top15DSLRC['Total Confirmed'])
deathsDSLRC = np.array(top15DSLRC['Total Deaths'])
sns.set(style="whitegrid")
f, ax = plt.subplots(figsize=(15, 6))
sns.set_color_codes("pastel")
sns.barplot(x='Total Confirmed',y='Country/Territory', data=top15DSLRC,
label="Confirmed", color="yellow")
sns.set_color_codes("muted")
sns.barplot(x='Total Deaths',y='Country/Territory', data=top15DSLRC ,
label="Deaths", color="red")
plt.title('Total Number of Cases/Deaths in the Top15 Days Since Last Reported Case Countries (03/04/2020)',fontsize=18)
ax.legend(ncol=2, loc="lower right", frameon=True)
ax.set(ylabel="Countries/Territory",
xlabel="Cases/Deaths")
for i in range (0,15):
plt.text(x=deathsDSLRC[i]+0.2, y=i , s=deathsDSLRC[i],horizontalalignment='center',verticalalignment='center', color='red',fontsize=14)
plt.text(x=confirmedDSLRC[i]+0.4, y=i , s=confirmedDSLRC[i],horizontalalignment='center',verticalalignment='center', fontsize=14)
sns.despine(left=True, bottom=True)
plt.savefig('Graphs/20200403_TotalNumberCasesDeathsDSLRC.png', bbox_inches='tight')
Transmission_type = pd.get_dummies(df, columns=['Transmission Classification'])
Transmission_type
print('The number of countries with only imported cases is:',Transmission_type['Transmission Classification_Imported cases only'].sum())
print('The number of countries with local transmissions cases is:',Transmission_type['Transmission Classification_Local transmission'].sum())
print('The number of countries under investigation to determine the type of transmission is:',Transmission_type['Transmission Classification_Under investigation'].sum())
WorldPopulation = pd.read_csv('Data/WorldPopulation.csv')
df['Population'] = 0
for i in range (0,len(df)):
pop = WorldPopulation.loc[WorldPopulation.loc[:,'Country/Territory']==df.loc[i,'Country/Territory']]
if pop.empty == True:
df.loc[i,'Population'] = 0
else:
df.loc[i,'Population'] = pop.iloc[0,1]
for i in range (0,len(df)):
if df.loc[i,'Population'] != 0:
df.loc[i,'Population Contaminated %'] = df.loc[i,'Total Confirmed']/df.loc[i,'Population']*100
else:
df.loc[i,'Population Contaminated %'] = 0
dfPopContaminated = df.sort_values(by=['Population Contaminated %'],ascending=False)
minimum_number_cases = 1 #define the minimum number of cases here/defina o número mínimo de casos aqui
dfPopMinNumCases = dfPopContaminated[dfPopContaminated['Total Confirmed'] > minimum_number_cases]
dfPopMinNumCases = dfPopMinNumCases.reset_index(drop=True)
dfPopMinNumCases.index = np.arange(1, (len(dfPopMinNumCases)+1))
top15_contaminated = dfPopMinNumCases[0:15]
contamination_rate = top15_contaminated.round({'Population Contaminated %':4})
contamination_rate = contamination_rate['Population Contaminated %']
#seaborn defaults
f, ax = plt.subplots(figsize=(15, 12))
sns.set(style="whitegrid")
sns.set_color_codes("muted")
sns.barplot(x='Population Contaminated %',y='Country/Territory', data=top15_contaminated ,
label="Deaths", color="navy")
plt.title('Cases Confirmed per Number of Habitants per Country (03/04/2020)',fontsize=25)
plt.xlabel('Cases Confirmed per Number of Habitants per Country [%]',fontsize=18)
plt.ylabel('Country',fontsize=18)
for i in range (0,15):
plt.text(x=contamination_rate.iloc[i]+0.03, y=i , s=contamination_rate.iloc[i],horizontalalignment='center',verticalalignment='center', fontsize=16)
plt.savefig('Graphs/20200403_ContaminationPerCountry.png', bbox_inches='tight')
plt.show()
minimum_number_cases = 1000 #define the minimum number of cases here/defina o número mínimo de casos aqui
dfPopMinNumCases = dfPopContaminated[dfPopContaminated['Total Confirmed'] > minimum_number_cases]
dfPopMinNumCases = dfPopMinNumCases.reset_index(drop=True)
dfPopMinNumCases.index = np.arange(1, (len(dfPopMinNumCases)+1))
top15_contaminated = dfPopMinNumCases[0:15]
contamination_rate = top15_contaminated.round({'Population Contaminated %':4})
contamination_rate = contamination_rate['Population Contaminated %']
#seaborn defaults
f, ax = plt.subplots(figsize=(15, 12))
sns.set(style="whitegrid")
sns.set_color_codes("muted")
sns.barplot(x='Population Contaminated %',y='Country/Territory', data=top15_contaminated ,
label="Deaths", color="navy")
plt.title('Cases Confirmed per Number of Habitants per Country (03/04/2020)',fontsize=25)
plt.xlabel('Cases Confirmed per Number of Habitants per Country [%]',fontsize=18)
plt.ylabel('Country',fontsize=18)
for i in range (0,15):
plt.text(x=contamination_rate.iloc[i]+0.03, y=i , s=contamination_rate.iloc[i],horizontalalignment='center',verticalalignment='center', fontsize=16)
plt.savefig('Graphs/20200403_ContaminationPerCountry1kCases.png', bbox_inches='tight')
plt.show()
for i in range (0,len(df)):
if df.loc[i,'Population'] != 0:
df.loc[i,'Population Death Rate %'] = df.loc[i,'Total Deaths']/df.loc[i,'Population']*100
else:
df.loc[i,'Population Death Rate %'] = 0
dfPopDeathRate = df.sort_values(by=['Population Death Rate %'],ascending=False)
minimum_number_cases = 1 #define the minimum number of cases here/defina o número mínimo de casos aqui
dfPopMinNumCases = dfPopDeathRate[dfPopDeathRate['Total Confirmed'] > minimum_number_cases]
dfPopMinNumCases = dfPopMinNumCases.reset_index(drop=True)
dfPopMinNumCases.index = np.arange(1, (len(dfPopMinNumCases)+1))
top15_PopDeathRate = dfPopMinNumCases[0:15]
popDeath_rate = top15_PopDeathRate.round({'Population Death Rate %':4})
popDeath_rate = popDeath_rate['Population Death Rate %']
#seaborn defaults
f, ax = plt.subplots(figsize=(15, 12))
sns.set(style="whitegrid")
sns.set_color_codes("muted")
sns.barplot(x='Population Death Rate %',y='Country/Territory', data=top15_PopDeathRate ,
label="Deaths", color="navy")
plt.title('Death rate per Number of Habitants per Country (03/04/2020)',fontsize=25)
plt.xlabel('Death rate per Number of Habitants per Country [%]',fontsize=18)
plt.ylabel('Country',fontsize=18)
for i in range (0,15):
plt.text(x=popDeath_rate.iloc[i]+0.003, y=i , s=popDeath_rate.iloc[i],horizontalalignment='center',verticalalignment='center', fontsize=16)
plt.savefig('Graphs/20200403_DeathRateinPopPerCountryCases.png', bbox_inches='tight')
plt.show()
minimum_number_cases = 1000 #define the minimum number of cases here/defina o número mínimo de casos aqui
dfPopMinNumCases = dfPopDeathRate[dfPopDeathRate['Total Confirmed'] > minimum_number_cases]
dfPopMinNumCases = dfPopMinNumCases.reset_index(drop=True)
dfPopMinNumCases.index = np.arange(1, (len(dfPopMinNumCases)+1))
top15_PopDeathRate = dfPopMinNumCases[0:15]
popDeath_rate = top15_PopDeathRate.round({'Population Death Rate %':4})
popDeath_rate = popDeath_rate['Population Death Rate %']
#seaborn defaults
f, ax = plt.subplots(figsize=(15, 12))
sns.set(style="whitegrid")
sns.set_color_codes("muted")
sns.barplot(x='Population Death Rate %',y='Country/Territory', data=top15_PopDeathRate ,
label="Deaths", color="navy")
plt.title('Death rate per Number of Habitants per Country (03/04/2020)',fontsize=25)
plt.xlabel('Death rate per Number of Habitants per Country [%]',fontsize=18)
plt.ylabel('Country',fontsize=18)
for i in range (0,15):
plt.text(x=popDeath_rate.iloc[i]+0.001, y=i , s=popDeath_rate.iloc[i],horizontalalignment='center',verticalalignment='center', fontsize=16)
plt.savefig('Graphs/20200403_DeathRateinPopPerCountry1kCases.png', bbox_inches='tight')
plt.show()
#seaborn defaults
confirmedPop = np.array(top15_PopDeathRate['Total Confirmed'])
deathsPop = np.array(top15_PopDeathRate['Total Deaths'])
sns.set(style="whitegrid")
f, ax = plt.subplots(figsize=(15, 6))
sns.set_color_codes("pastel")
sns.barplot(x='Total Confirmed',y='Country/Territory', data=top15_PopDeathRate,
label="Confirmed", color="yellow")
sns.set_color_codes("muted")
sns.barplot(x='Total Deaths',y='Country/Territory', data=top15_PopDeathRate ,
label="Deaths", color="red")
plt.title('Total Number of Cases/Deaths in the Top15 Death Rate per Number of Habitants Countries (03/04/2020)',fontsize=18)
ax.legend(ncol=2, loc="upper right", frameon=True)
ax.set(ylabel="Countries/Territory",
xlabel="Cases/Deaths")
for i in range (0,15):
plt.text(x=deathsPop[i]+2500, y=i , s=deathsPop[i],horizontalalignment='center',verticalalignment='center', color='blue',fontsize=14)
plt.text(x=confirmedPop[i]+10000, y=i , s=confirmedPop[i],horizontalalignment='center',verticalalignment='center', fontsize=14)
sns.despine(left=True, bottom=True)
plt.savefig('Graphs/20200403_TotalNumberCasesDeathsPop.png', bbox_inches='tight')
```
| github_jupyter |
# Gaussian Bayes classifier
In this assignment, we will use a Gaussian Bayes classifier to classify our data points.
# Import packages
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.stats import multivariate_normal
from sklearn.metrics import classification_report
from matplotlib import cm
```
# Load training data
Our data has 2D features $x1, x2$. Data from the two classes are in $\texttt{class1_train}$ and $\texttt{class2_train}$ respectively. Each file has two columns corresponding to the 2D feature.
```
class1_train = pd.read_csv('https://raw.githubusercontent.com/shala2020/shala2020.github.io/master/Lecture_Materials/Assignments/MachineLearning/L3/class1_train').to_numpy()
class2_train = pd.read_csv('https://raw.githubusercontent.com/shala2020/shala2020.github.io/master/Lecture_Materials/Assignments/MachineLearning/L3/class2_train').to_numpy()
```
# Visualize training data
Generate 2D scatter plot of the training data. Plot the points from class 1 in red and the points from class 2 in blue.
```
plt.figure(figsize=(10,10))
plt.scatter(class1_train[:,0], class1_train[:,1], color = 'red', label = 'Class 1')
plt.scatter(class2_train[:,0], class2_train[:,1], color = 'blue', label = 'Class 2')
plt.xlabel('x')
plt.ylabel('y')
plt.legend(loc = 'best')
plt.show()
```
# Maximum likelihood estimate of parameters
We will model the likelihoods $P(\mathbf{x}|C_1)$ and $P(\mathbf{x}|C_2)$ as $\mathcal{N}(\mathbf{\mu_1},\Sigma_1)$ and $\mathcal{N}(\mathbf{\mu_2},\Sigma_2)$ respectively. The prior probabilities of the classes are $P(C_1)=\pi_1$ and $P(C_2)=\pi_2$.
The maximum likelihood estimates of the parameters are as follows:
\begin{align*}
\pi_k &= \frac{\sum_{i=1}^N \mathbb{1}(t^i=k)}{N}\\
\mathbf{\mu_k} &= \frac{\sum_{i=1}^N \mathbb{1}(t^i=k)\mathbf{x}^i}{\sum_{i=1}^N \mathbb{1}(t^i=k)}\\
\Sigma_k &= \frac{\sum_{i=1}^N \mathbb{1}(t^i=k)(\mathbf{x}^i-\mathbf{\mu_k})(\mathbf{x}^i-\mathbf{\mu_k})^T}{\sum_{i=1}^N \mathbb{1}(t^i=k)}\\
\end{align*}
Here, $t^i$ is the target or class of $i^{th}$ sample. $\mathbb{1}(t^i=k)$ is 1 if $t^i=k$ and 0 otherwise.
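Concretely, the indicator sums above are just boolean masks in NumPy. A minimal sketch on toy labeled data (illustration only, not the assignment data):

```python
import numpy as np

# Toy labeled data: 2D features x, class labels t in {1, 2}.
x = np.array([[0.0, 0.0], [1.0, 1.0], [4.0, 4.0], [6.0, 4.0]])
t = np.array([1, 1, 2, 2])

mask = (t == 1)                  # the indicator 1(t^i = 1)
pi_1 = mask.mean()               # fraction of samples in class 1
mu_1 = x[mask].mean(axis=0)      # class-conditional mean
d = x[mask] - mu_1
sigma_1 = d.T @ d / mask.sum()   # biased (maximum likelihood) covariance
print(pi_1, mu_1, sigma_1)
```

The same pattern with `mask = (t == 2)` gives the class-2 parameters.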
Compute the maximum likelihood estimates of $\pi_1$, $\mu_1$, $\Sigma_1$ and $\pi_2$, $\mu_2$, $\Sigma_2$, and print these values.
```
n1, n2 = class1_train.shape[0], class2_train.shape[0]
pi1, pi2 = n1/(n1+n2), n2/(n1+n2)
mu1 = np.mean(class1_train, axis = 0)
mu2 = np.mean(class2_train, axis = 0)
# ------------------ sigma -------------------- #
XT = (class1_train-mu1).reshape(n1,1,2)
X = (class1_train-mu1).reshape(n1,2,1)
sigma1 = np.matmul(X,XT).mean(axis = 0)
XT = (class2_train-mu2).reshape(n2,1,2)
X = (class2_train-mu2).reshape(n2,2,1)
sigma2 = np.matmul(X,XT).mean(axis = 0)
print(' pi1 = {}\n mu1 = {}\n sigma1 = \n{}\n'.format(pi1, mu1, sigma1))
print(' pi2 = {}\n mu2 = {}\n sigma2 = \n{}\n'.format(pi2, mu2, sigma2))
```
# Alternate approach
```
sigma1 = np.cov((class1_train-mu1).T, bias=True)
sigma2 = np.cov((class2_train-mu2).T, bias=True)
print(sigma1)
print(sigma2)
```
# Another alternate
```
XT = (class1_train-mu1).T
X = (class1_train-mu1)
sigma1 = np.matmul(XT,X)/n1
XT = (class2_train-mu2).T
X = (class2_train-mu2)
sigma2 = np.matmul(XT,X)/n2
print(sigma1)
print(sigma2)
```
# Visualize the likelihood
Now that you have the parameters, let us visualize what the likelihood looks like.
1. Use $\texttt{np.mgrid}$ to generate points uniformly spaced from -5 to 5 along 2 axes
1. Use $\texttt{multivariate_normal.pdf}$ to compute the Gaussian likelihood for each class
1. Use $\texttt{plot_surface}$ to plot the likelihood of each class.
1. Use $\texttt{contourf}$ to plot the likelihood of each class.
For the plots, use $\texttt{cmap=cm.Reds}$ for class 1 and $\texttt{cmap=cm.Blues}$ for class 2. Use $\texttt{alpha=0.5}$ to overlay both plots together.
```
x, y = np.mgrid[-5:5:.01, -5:5:.01]
pos = np.empty(x.shape + (2,))
pos[:, :, 0] = x; pos[:, :, 1] = y
rv1 = multivariate_normal(mean = mu1, cov = sigma1)
rv2 = multivariate_normal(mean = mu2, cov = sigma2)
likelihood1 = rv1.pdf(pos)
likelihood2 = rv2.pdf(pos)
fig = plt.figure(figsize=(20,10))
ax = fig.add_subplot(121, projection='3d')
plt.title('Likelihood')
ax.plot_surface(x,y,likelihood1, cmap=cm.Reds, alpha = 0.5)
ax.plot_surface(x,y,likelihood2, cmap=cm.Blues, alpha = 0.5)
plt.xlabel('x1')
plt.ylabel('x2')
plt.subplot(122)
plt.title('Contour plot of likelihood')
plt.contourf(x, y, likelihood1, cmap=cm.Reds, alpha = 0.5)
plt.contourf(x, y, likelihood2, cmap=cm.Blues, alpha = 0.5)
plt.xlabel('x1')
plt.ylabel('x2')
```
# Visualize the posterior
Use the prior and the likelihood you've computed to obtain the posterior distribution for each class.
As in the case of the likelihood above, make similar surface and contour plots for the posterior.
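Concretely, by Bayes' rule the posterior for class $k$ is
\begin{align*}
P(C_k|\mathbf{x}) &= \frac{\pi_k P(\mathbf{x}|C_k)}{\pi_1 P(\mathbf{x}|C_1) + \pi_2 P(\mathbf{x}|C_2)}
\end{align*}
so the likelihood arrays computed above only need to be weighted by the priors and normalized.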
```
posterior1 = likelihood1*pi1/(likelihood1*pi1+likelihood2*pi2)
posterior2 = likelihood2*pi2/(likelihood1*pi1+likelihood2*pi2)
fig = plt.figure(figsize=(20,10))
ax = fig.add_subplot(121, projection='3d')
plt.title('Posterior')
ax.plot_surface(x,y,posterior1, cmap=cm.Reds, alpha = 0.5)
ax.plot_surface(x,y,posterior2, cmap=cm.Blues, alpha = 0.5)
plt.xlabel('x1')
plt.ylabel('x2')
plt.subplot(122)
plt.title('Contour plot of Posterior')
plt.contourf(x, y, posterior1, cmap=cm.Reds, alpha = 0.5)
plt.contourf(x, y, posterior2, cmap=cm.Blues, alpha = 0.5)
plt.xlabel('x1')
plt.ylabel('x2')
```
# Decision boundary
1. The decision boundary can be obtained from $P(C_2|\mathbf{x})>P(C_1|\mathbf{x})$ in Python. Use $\texttt{contourf}$ to plot the decision boundary. Use $\texttt{cmap=cm.Blues}$ and $\texttt{alpha=0.5}$
1. Also overlay the scatter plot of train data points from the 2 classes on the same plot. Use red color for class 1 and blue color for class 2
```
decision = posterior2>posterior1
plt.figure(figsize=(10,10))
plt.contourf(x, y, decision, cmap=cm.Blues, alpha = 0.5)
plt.scatter(class1_train[:,0], class1_train[:,1], color = 'red', label = 'Class 1')
plt.scatter(class2_train[:,0], class2_train[:,1], color = 'blue', label = 'Class 2')
plt.xlabel('x')
plt.ylabel('y')
plt.legend(loc = 'best')
plt.show()
```
# Test Data
Now let's use our trained model to classify test data points
1. $\texttt{test_data}$ contains the $x1,x2$ features of different data points
1. $\texttt{test_label}$ contains the true class of the data points. 0 means class 1. 1 means class 2.
1. Classify the test points based on whichever class has higher posterior probability for each data point
1. Use $\texttt{classification_report}$ to test the classification performance
```
test = pd.read_csv('https://raw.githubusercontent.com/shala2020/shala2020.github.io/master/Lecture_Materials/Assignments/MachineLearning/L3/test').to_numpy()
test_data, test_label = test[:,:2], test[:,2]
# classification
l1 = pi1*rv1.pdf(test_data)
l2 = pi2*rv2.pdf(test_data)
den = l1+l2
l1 /= den
l2 /= den
test_decision = l2>l1
print(classification_report(test_label, test_decision))
```
| github_jupyter |
# BLU02 - Learning Notebook - Data wrangling workflows - Part 2 of 3
```
import matplotlib.pyplot as plt
import pandas as pd
import os
```
# 2 Combining dataframes in Pandas
## 2.1 How many programs are there per season?
How many different programs does the NYP typically present per season?
Programs are under `/data/programs/` which contains a file per Season.
### Concatenate
To analyze how many programs there are per season, over time, we need a single dataframe containing *all* seasons.
Concatenation means, in short, to unite multiple dataframes (or series) in one.
The `pd.concat()` function performs concatenation operations along an axis (`axis=0` for index and `axis=1` for columns).
```
season_0 = pd.read_csv('./data/programs/1842-43.csv')
season_1 = pd.read_csv('./data/programs/1843-44.csv')
seasons = [season_0, season_1]
pd.concat(seasons, axis=1)
```
Concatenating like this makes no sense, as we no longer have a single observation per row.
What we want to do instead is to concatenate the dataframe along the index.
```
pd.concat(seasons, axis=0)
```
This dataframe looks better, but there's something weird with the index: it's not unique anymore.
Different observations share the same index. Not cool.
For dataframes that don't have a meaningful index, you may wish to ignore the indexes altogether.
```
pd.concat(seasons, axis=0, ignore_index=True)
```
Now, let's try something different.
Let's try to change the name of the columns, so that each dataframe has different ones, before concatenating.
```
season_0_ = season_0.copy()
season_0_.columns = [0, 1, 2, 'Season']
seasons_ = [season_0_, season_1]
pd.concat(seasons_, axis=0)
```
What a mess! What did we learn?
* When the dataframes have different columns, `pd.concat()` will take the union of all dataframes by default (no information loss)
* Concatenation will fill columns that are not present for specific dataframes with `np.NaN` (missing values).
The good news is that you can set how you want to glue the dataframes in regards to the other axis, the one not being concatenated.
Setting `join='inner'` will take the intersection, i.e., the columns that are present in all dataframes.
```
pd.concat(seasons_, axis=0, join='inner')
```
There you go. Concatenation complete.
### Append
The method `df.append()` is a shortcut for `pd.concat()`, that can be called on either a `pd.DataFrame` or a `pd.Series`.
```
season_0.append(season_1)
```
It can take multiple objects to concatenate as well. Please note the `ignore_index=True`.
```
season_2 = pd.read_csv('./data/programs/1844-45.csv')
more_seasons = [season_1, season_2]
season_0.append(more_seasons, ignore_index=True)
```
We are good to go. Let's use `pd.concat` to combine all seasons into a great dataframe.
```
def read_season(file):
path = os.path.join('.', 'data', 'programs', file)
return pd.read_csv(path)
files = os.listdir('./data/programs/')
files = [f for f in files if '.csv' in f]
```
A logical approach would be to iterate over all files and appending all of them to a single dataframe.
```
%%timeit
programs = pd.DataFrame()
for file in files:
season = read_season(file)
programs = programs.append(season, ignore_index=True)
```
It is worth noting that both `pd.concat()` and `df.append()` make a full copy of the data and continually reusing this function can create a significant performance hit.
Instead, use a list comprehension if you need to use the operation several times.
This way, you only call `pd.concat()` or `df.append()` once.
```
%%timeit
seasons = [read_season(f) for f in files if '.csv' in f]
programs = pd.concat(seasons, axis=0, ignore_index=True)
seasons = [read_season(f) for f in files if '.csv' in f]
programs = pd.concat(seasons, axis=0, ignore_index=True)
```
Now that we have the final `programs` dataframe, we can see how the number of distinct programs changes over time.
```
programs['Season'] = pd.to_datetime(programs['Season'].str[:4])
(programs.groupby('Season')
.size()
.plot(legend=False, use_index=True, figsize=(10, 7),
title='Number of programs per season (from 1842-43 to 2016-17)'));
```
The NYP appears to be investing in increasing the number of distinct programs per season since '95.
## 2.2 How many concerts are there per season?
What about the number of concerts? The first thing we need to do is to import the `concerts.csv` data.
```
concerts = pd.read_csv('./data/concerts.csv')
concerts.head()
```
We will use the Leon Levy Digital Archives ID (`GUID`) to identify each program.
Now, we have information regarding all the concerts that took place and the season for each program.
The problem? Information about the concert and the season are in different tables, and the program is the glue between the two. Familiar?
### Merge
Pandas provides high-performance join operations, very similar to SQL.
The `df.merge()` method provides an interface for all database-like join methods.
```
?pd.merge
```
We can call `pd.merge` to join both tables on the `GUID` (and the `ProgramID`, which provides similar info).
```
# Since GUID and ProgramID offer similar info, we will drop the latter.
programs = programs.drop(columns='ProgramID')
df = pd.merge(programs, concerts, on='GUID')
df.head()
```
Or, alternatively, we can call `merge()` directly on the dataframe.
```
df_ = programs.merge(concerts, on='GUID')
df_.head()
```
The critical parameter here is `how`. Since we are not setting it explicitly, the merge defaults to `inner` (an inner join).
But, in fact, you can use any join, just like you did in SQL: `left`, `right`, `outer` and `inner`.
Remember?

*Fig. 1 - Types of joins in SQL, note how left, right, outer and inner translate directly to Pandas.*
A refresher on different types of joins, all supported by Pandas:
| Pandas | SQL | What it does |
| ---------------------------------------------- | ---------------- | ----------------------------------------- |
| `pd.merge(left, right, on='key', how='left')`  | LEFT OUTER JOIN  | Use all keys from left frame only         |
| `pd.merge(left, right, on='key', how='right')` | RIGHT OUTER JOIN | Use all keys from right frame only        |
| `pd.merge(left, right, on='key', how='outer')` | FULL OUTER JOIN  | Use union of keys from both frames        |
| `pd.merge(left, right, on='key', how='inner')` | INNER JOIN       | Use intersection of keys from both frames |
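To see the difference concretely, here is a small sketch on toy frames (unrelated to the NYP data):

```python
import pandas as pd

left = pd.DataFrame({'key': ['a', 'b', 'c'], 'x': [1, 2, 3]})
right = pd.DataFrame({'key': ['b', 'c', 'd'], 'y': [4, 5, 6]})

inner = pd.merge(left, right, on='key', how='inner')      # keys b, c        -> 2 rows
left_join = pd.merge(left, right, on='key', how='left')   # keys a, b, c     -> 3 rows, y is NaN for a
outer = pd.merge(left, right, on='key', how='outer')      # keys a, b, c, d  -> 4 rows
```

Unmatched keys on either side are filled with `NaN`, just like in the concatenation case above.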
In this particular case, we have:
* A one-to-many relationship (i.e., one program to many concerts)
* Since every single show in `concerts` has a match in `programs`, the type of join we use doesn't matter.
We can use the `validate` argument to automatically check whether there are unexpected duplicates in the merge keys and check their uniqueness.
```
df__ = pd.merge(programs, concerts, on='GUID', how='outer', validate="one_to_many")
assert(concerts.shape[0] == df_.shape[0] == df__.shape[0])
```
Back to our question, how is the number of concerts per season evolving?
```
(programs.merge(concerts, on='GUID')
.groupby('Season')
.size()
.plot(legend=False, use_index=True, figsize=(10, 7),
title='Number of concerts per season (from 1842-43 to 2016-17)'));
```
Likewise, the number of concerts seems to be trending upwards since about 1995, which could be a sign of growing interest in the genre.
### Join
Now, we want the top-3 composers by total appearances.
Without surprise, we start by importing `works.csv`.
```
works = pd.read_csv('./data/works.csv',index_col='GUID')
```
Alternatively, we can use `df.join()` instead of `df.merge()`.
There are, however, differences in the default behavior: for example `df.join` uses `how='left'` by default.
Let's try to perform the merge.
```
(programs.merge(works, on="GUID")
.head(n=3))
programs.merge(works, on="GUID").shape
(programs.join(works, on='GUID')
.head(n=3))
# equivalent to
# pd.merge(programs, works, left_on='GUID', right_index=True,
# how='left').head(n=3)
programs.join(works, on="GUID").shape
```
We notice that the shapes of the results differ: the two approaches return a different number of rows. This is because `df.merge()` defaults to an inner join, while `df.join()` defaults to a left join.
Typically, you would use `df.join()` when you want to do a left join or when you want to join on the index of the dataframe on the right.
Now for our goal: what are the top-3 composers?
```
(programs.join(works, on='GUID')
.groupby('ComposerName')
.size()
.nlargest(n=3))
```
Wagner wins!
What about the top-3 works?
```
(programs.join(works, on='GUID')
.groupby(['ComposerName', 'WorkTitle'])
.size()
.nlargest(n=3))
```
Wagner wins three times!
| github_jupyter |
```
%matplotlib inline
```
# Faces dataset decompositions
This example applies different unsupervised matrix decomposition
(dimensionality reduction) methods from the module
:py:mod:`sklearn.decomposition` (see the documentation chapter
`decompositions`) to the `olivetti_faces` dataset.
```
print(__doc__)
# Authors: Vlad Niculae, Alexandre Gramfort
# License: BSD 3 clause
import logging
from time import time
from numpy.random import RandomState
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_olivetti_faces
from sklearn.cluster import MiniBatchKMeans
from sklearn import decomposition
# Display progress logs on stdout
logging.basicConfig(level=logging.INFO,
format='%(asctime)s %(levelname)s %(message)s')
n_row, n_col = 2, 3
n_components = n_row * n_col
image_shape = (64, 64)
rng = RandomState(0)
# #############################################################################
# Load faces data
dataset = fetch_olivetti_faces(shuffle=True, random_state=rng)
faces = dataset.data
n_samples, n_features = faces.shape
# global centering
faces_centered = faces - faces.mean(axis=0)
# local centering
faces_centered -= faces_centered.mean(axis=1).reshape(n_samples, -1)
print("Dataset consists of %d faces" % n_samples)
def plot_gallery(title, images, n_col=n_col, n_row=n_row):
plt.figure(figsize=(2. * n_col, 2.26 * n_row))
plt.suptitle(title, size=16)
for i, comp in enumerate(images):
plt.subplot(n_row, n_col, i + 1)
vmax = max(comp.max(), -comp.min())
plt.imshow(comp.reshape(image_shape), cmap=plt.cm.gray,
interpolation='nearest',
vmin=-vmax, vmax=vmax)
plt.xticks(())
plt.yticks(())
plt.subplots_adjust(0.01, 0.05, 0.99, 0.93, 0.04, 0.)
# #############################################################################
# List of the different estimators, whether to center and transpose the
# problem, and whether the transformer uses the clustering API.
estimators = [
('Eigenfaces - PCA using randomized SVD',
decomposition.PCA(n_components=n_components, svd_solver='randomized',
whiten=True),
True),
('Non-negative components - NMF',
decomposition.NMF(n_components=n_components, init='nndsvda', tol=5e-3),
False),
('Independent components - FastICA',
decomposition.FastICA(n_components=n_components, whiten=True),
True),
('Sparse comp. - MiniBatchSparsePCA',
decomposition.MiniBatchSparsePCA(n_components=n_components, alpha=0.8,
n_iter=100, batch_size=3,
random_state=rng),
True),
('MiniBatchDictionaryLearning',
decomposition.MiniBatchDictionaryLearning(n_components=15, alpha=0.1,
n_iter=50, batch_size=3,
random_state=rng),
True),
('Cluster centers - MiniBatchKMeans',
MiniBatchKMeans(n_clusters=n_components, tol=1e-3, batch_size=20,
max_iter=50, random_state=rng),
True),
('Factor Analysis components - FA',
decomposition.FactorAnalysis(n_components=n_components, max_iter=2),
True),
]
# #############################################################################
# Plot a sample of the input data
plot_gallery("First centered Olivetti faces", faces_centered[:n_components])
# #############################################################################
# Do the estimation and plot it
for name, estimator, center in estimators:
print("Extracting the top %d %s..." % (n_components, name))
t0 = time()
data = faces
if center:
data = faces_centered
estimator.fit(data)
train_time = (time() - t0)
print("done in %0.3fs" % train_time)
if hasattr(estimator, 'cluster_centers_'):
components_ = estimator.cluster_centers_
else:
components_ = estimator.components_
# Plot an image representing the pixelwise variance provided by the
# estimator, e.g. its noise_variance_ attribute. The Eigenfaces estimator,
# via the PCA decomposition, also provides a scalar noise_variance_
# (the mean of pixelwise variance) that cannot be displayed as an image
# so we skip it.
if (hasattr(estimator, 'noise_variance_') and
estimator.noise_variance_.ndim > 0): # Skip the Eigenfaces case
plot_gallery("Pixelwise variance",
estimator.noise_variance_.reshape(1, -1), n_col=1,
n_row=1)
plot_gallery('%s - Train time %.1fs' % (name, train_time),
components_[:n_components])
plt.show()
```
| github_jupyter |
# In this step, we'll process graduation data from the federal files
## In most cases, this is a straight "pull" from the data, but there are a few possible modifications:
- If the sample is too small from the most recent year, use 3 years of data
- For HBCUs, boost by 15%
- For a handful of schools, adjust down to reflect the true Noble rate of success
- Add in a handful of estimates
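A minimal sketch of what the first two adjustments might look like. All function names and the `min_n` threshold here are hypothetical illustrations, not the actual pipeline:

```python
import pandas as pd

def pooled_rate(latest_grads, latest_cohort,
                three_yr_grads, three_yr_cohort, min_n=30):
    """If the most recent cohort is too small, fall back to the
    pooled rate across three years of data (min_n is an assumed cutoff)."""
    if latest_cohort >= min_n:
        return 100 * latest_grads / latest_cohort
    return 100 * three_yr_grads / three_yr_cohort

def adjust_hbcu(rate, is_hbcu, boost=0.15):
    """Boost HBCU grad rates by 15%, capping the result at 100%."""
    if pd.isna(rate):
        return rate
    return min(rate * (1 + boost), 100.0) if is_hbcu else rate
```

The downward adjustments for specific schools and the hand-entered estimates would be handled separately, e.g. via lookup tables keyed by `UNITID`.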
```
import pandas as pd
import numpy as np
import os
# Edit these to reflect any changes
work_location = 'inputs'
directory_file = 'hd2018.csv'
base_dir = 'base_dir.csv'
noble_attending = '../../raw_inputs/noble_attending.csv'
gr_output = 'grad_rates.csv'
gr_files = {'latest':'gr2018.csv',
'one_removed':'gr2017.csv',
'two_removed':'gr2016.csv'}
output_files = {'latest':'grad2018.csv',
'one_removed':'grad2017.csv',
'two_removed':'grad2016.csv'}
os.chdir(work_location)
# We'll use a dict to keep track of each grad rate file, reading in each one
years=['latest','one_removed','two_removed']
gr_dfs = {}
for year in years:
gr_dfs[year] = pd.read_csv(gr_files[year], index_col=['UNITID'],
usecols=['UNITID', 'GRTYPE', 'GRTOTLT','GRBKAAT','GRHISPT'],
na_values='.',
dtype={'GRTOTLT':float,'GRBKAAT':float,'GRHISPT':float},
encoding='latin-1')
gr_dfs[year].rename(columns={'GRTOTLT':'Total','GRBKAAT':'Black','GRHISPT':'Hisp'}, inplace=True)
gr_dfs[year]['AA_H']=gr_dfs[year].Black+gr_dfs[year].Hisp
gr_dfs['latest'].head()
# We now have to sort through these GRTYPES:
# 8 is the adjusted cohort for bachelor's seeking students (completions: 12=6yr, 13=4yr, 14=5yr; transfers=16)
# 29 for associate's seeking (completions: 30=3yr 35=2yr; transfers=33)
# We'll build a list of unitids that have both starting cohorts and completions for either one
valid_unitids = {}
for year in years:
df = gr_dfs[year]
valid_unitids[year] = list( (set(df[df['GRTYPE']==8].index) & set(df[df['GRTYPE']==12].index)) |
(set(df[df['GRTYPE']==29].index) & set(df[df['GRTYPE']==30].index)) )
print('%d, %d' % (len(gr_dfs['latest']), len(valid_unitids['latest'])))
# We'll use the basic "hd" directory to form the base of the final year output
def create_year_df(df, source_df1, source_df2):
"""Apply function to pull the appropriate data into a single row per college"""
ix = df.name
if ix in source_df1.index:
return source_df1.loc[ix][['Total','Black','Hisp','AA_H']]
elif ix in source_df2.index:
return source_df2.loc[ix][['Total','Black','Hisp','AA_H']]
else:
return [np.nan,np.nan,np.nan,np.nan]
year_dfs = {}
for year in years:
dir_df = pd.read_csv(directory_file, index_col=['UNITID'],
usecols=['UNITID','INSTNM'],encoding='latin-1')
dir_df = dir_df[dir_df.index.isin(valid_unitids[year])]
# First do the completions
start1 = gr_dfs[year][gr_dfs[year].GRTYPE == 12]
start2 = gr_dfs[year][gr_dfs[year].GRTYPE == 30]
dir_df[['Cl_Total','Cl_Black','Cl_Hisp','Cl_AA_H']]=dir_df.apply(create_year_df,axis=1,result_type="expand",
args=(start1,start2))
# Then do the starting cohorts
start1 = gr_dfs[year][gr_dfs[year].GRTYPE == 8]
start2 = gr_dfs[year][gr_dfs[year].GRTYPE == 29]
dir_df[['St_Total','St_Black','St_Hisp','St_AA_H']]=dir_df.apply(create_year_df,axis=1,result_type="expand",
args=(start1,start2))
# Next the transfers
start1 = gr_dfs[year][gr_dfs[year].GRTYPE == 16]
start2 = gr_dfs[year][gr_dfs[year].GRTYPE == 33]
dir_df[['Xf_Total','Xf_Black','Xf_Hisp','Xf_AA_H']]=dir_df.apply(create_year_df,axis=1,result_type="expand",
args=(start1,start2))
# Finally, calculate within-year stats
for col in ['Total','Black','Hisp','AA_H']:
dir_df['GR_'+col]=dir_df['Cl_'+col]/dir_df['St_'+col]
dir_df['Xfr_'+col]=dir_df['Xf_'+col]/dir_df['St_'+col]
dir_df['CI_'+col]=np.sqrt(dir_df['GR_'+col]*(1-dir_df['GR_'+col])/dir_df['St_'+col])
dir_df = dir_df.replace(np.inf, np.nan)
year_dfs[year]=dir_df.copy()
year_dfs['latest'].head()
# Here, we're just saving the one year files locally for reference
for yr in ['latest', 'one_removed', 'two_removed']:
year_dfs[yr].to_csv(output_files[yr], na_rep="N/A")
```
## The above code created three DFs for the most recent three years
## Each DF has the in-year counting stats and rates for graduation
### Now we need to create a final set of statistics based on these:
- Adj6yrGrad (overall number after adjustments)
- Adj6yrAAH (African American/Hispanic number after adjustments)
- 6yrGrad (overall number, no adjustments)
- 6yrAAH (AA/H no adjustments)
- 6yrAA
- 6yrH
- Xfer
- XferAAH
- XferAA
- XferH
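The decision to pool extra years below hinges on a binomial standard error, sqrt(p·(1−p)/n), computed in the `CI_` columns. A small sketch with hypothetical counts:

```python
import math

def grad_rate_ci(grads: int, cohort: int) -> float:
    """Binomial standard error of a graduation rate: sqrt(p * (1 - p) / n)."""
    p = grads / cohort
    return math.sqrt(p * (1 - p) / cohort)

# 60 graduates out of a 100-student cohort (hypothetical numbers)
ci = grad_rate_ci(60, 100)  # ~0.049
```

Larger cohorts shrink the standard error, which is why small schools get extra years of data pooled in.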
```
# We'll start with reading some of the rows from the 'base_dir' created in the last step
dir_df = pd.read_csv(base_dir, index_col=['UNITID'],
usecols=['UNITID','INSTNM','Type','HBCU'],encoding='latin-1')
dir_df.head()
# NOTE THAT THERE ARE YEAR REFERENCES IN THIS CODE THAT NEED TO BE UPDATED TOO
def bump15(x):
"""Helper function to increase by 15% or half the distance to 100"""
if x > .7:
return x + (1-x)*.5
else:
return x + .15
def set_gradrates(df, year_dfs):
"""Apply function to decide how to set the specific values specified above"""
ix = df.name
# First we see if there is actual data for the latest year
if ix in year_dfs['latest'].index:
ty = year_dfs['latest'].loc[ix]
gr_source = '2018'
gr_6yr,gr_6yr_aah,gr_6yr_aa,gr_6yr_h,xf,xf_aah,xf_aa,xf_h = ty.reindex(
['GR_Total','GR_AA_H','GR_Black','GR_Hisp','Xfr_Total','Xfr_AA_H','Xfr_Black','Xfr_Hisp'])
# If there's data in the latest year, we'll check how robust it is and add in prior years if necessary
ci, ci_aah = ty.reindex(['CI_Total','CI_AA_H'])
# For HBCUs, we bump by the lesser of 15% or half the distance to 100%
if (df.HBCU == 'Yes') and (ci_aah <= 0.04):
adj_6yr = gr_6yr
adj_6yr_aah = bump15(gr_6yr_aah)
# Otherwise, add more years if the confidence intervals are too wide
elif (ci >0.015) or (ci_aah >0.05):
calc_fields = ['Cl_Total','Cl_Black','Cl_Hisp','Cl_AA_H',
'St_Total','St_Black','St_Hisp','St_AA_H',
'Xf_Total','Xf_Black','Xf_Hisp','Xf_AA_H']
calc_data = ty.reindex(calc_fields)
if ix in year_dfs['one_removed'].index:
gr_source = '2017-2018'
ty=year_dfs['one_removed'].loc[ix]
calc_data = calc_data+ty.reindex(calc_fields)
if ix in year_dfs['two_removed'].index:
gr_source = '2016-2018'
ty=year_dfs['two_removed'].loc[ix]
calc_data = calc_data+ty.reindex(calc_fields)
gr_6yr = calc_data['Cl_Total']/calc_data['St_Total'] if calc_data['St_Total']>0 else np.nan
gr_6yr_aah = calc_data['Cl_AA_H']/calc_data['St_AA_H'] if calc_data['St_AA_H']>0 else np.nan
gr_6yr_aa = calc_data['Cl_Black']/calc_data['St_Black'] if calc_data['St_Black']>0 else np.nan
gr_6yr_h = calc_data['Cl_Hisp']/calc_data['St_Hisp'] if calc_data['St_Hisp']>0 else np.nan
xf = calc_data['Xf_Total']/calc_data['St_Total'] if calc_data['St_Total']>0 else np.nan
xf_aah = calc_data['Xf_AA_H']/calc_data['St_AA_H'] if calc_data['St_AA_H']>0 else np.nan
xf_aa = calc_data['Xf_Black']/calc_data['St_Black'] if calc_data['St_Black']>0 else np.nan
xf_h = calc_data['Xf_Hisp']/calc_data['St_Hisp'] if calc_data['St_Hisp']>0 else np.nan
adj_6yr = gr_6yr
adj_6yr_aah = gr_6yr_aah
else:
adj_6yr = gr_6yr
adj_6yr_aah = gr_6yr_aah
# If there was no data in the most recent year, we go to the prior year (and stick--no need to add prior prior)
elif ix in year_dfs['one_removed'].index:
ty = year_dfs['one_removed'].loc[ix]
gr_source = '2017'
gr_6yr,gr_6yr_aah,gr_6yr_aa,gr_6yr_h,xf,xf_aah,xf_aa,xf_h = ty.reindex(
['GR_Total','GR_AA_H','GR_Black','GR_Hisp','Xfr_Total','Xfr_AA_H','Xfr_Black','Xfr_Hisp'])
adj_6yr = gr_6yr
adj_6yr_aah = gr_6yr_aah
# If no data in the last two years, we'll go to prior prior (and stick--no need to check CI)
elif ix in year_dfs['two_removed'].index:
ty = year_dfs['two_removed'].loc[ix]
gr_source = '2016'
gr_6yr,gr_6yr_aah,gr_6yr_aa,gr_6yr_h,xf,xf_aah,xf_aa,xf_h = ty.reindex(
['GR_Total','GR_AA_H','GR_Black','GR_Hisp','Xfr_Total','Xfr_AA_H','Xfr_Black','Xfr_Hisp'])
adj_6yr = gr_6yr
adj_6yr_aah = gr_6yr_aah
# No data in any of the last 3 years
else:
gr_source,adj_6yr,adj_6yr_aah,gr_6yr,gr_6yr_aah,gr_6yr_aa,gr_6yr_h,xf,xf_aah,xf_aa,xf_h=['N/A']+[np.nan]*10
# 2-year schools are given credit for half of their transfers
if df['Type'] == '2 year':
adj_6yr = adj_6yr+0.5*xf
adj_6yr_aah = adj_6yr_aah+0.5*xf_aah
return [gr_source,
np.round(adj_6yr,decimals=2),np.round(adj_6yr_aah,decimals=2),
np.round(gr_6yr,decimals=2),np.round(gr_6yr_aah,decimals=2),
np.round(gr_6yr_aa,decimals=2),np.round(gr_6yr_h,decimals=2),
np.round(xf,decimals=2),np.round(xf_aah,decimals=2),
np.round(xf_aa,decimals=2),np.round(xf_h,decimals=2)]
new_columns = ['GR_Source','Adj6yrGrad','Adj6yrAAH','6yrGrad',
'6yrAAH','6yrAA','6yrH','Xfer','XferAAH','XferAA','XferH']
dir_df[new_columns] = dir_df.apply(set_gradrates,axis=1,args=(year_dfs,),result_type="expand")
dir_df.head()
dir_df.to_csv(gr_output,na_rep='N/A')
```
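Note that when years are pooled above, raw counts are summed and the rate recomputed, which is not the same as averaging the yearly rates. A quick illustration with hypothetical counts:

```python
# Two hypothetical years: completions (cl) and starting cohorts (st)
cl = [6, 40]
st = [10, 100]

pooled_rate = sum(cl) / sum(st)                               # 46/110, ~0.418
mean_of_rates = sum(c / s for c, s in zip(cl, st)) / len(cl)  # (0.6 + 0.4) / 2 = 0.5
```

Pooling weights each year by its cohort size, so the small 60% year barely moves the result.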
# A few more manual steps
## These should eventually be moved to code, but they may need modification in a number of cases (discussed in more detail below):
1. Add a correction for schools where we have a lot of historic results. Historically, this has meant reducing grad rates for schools by 1/3 of the difference between Noble retention and university retention (typically at only 3-4 schools)
2. Increase grad rates for partner colleges (15%)
3. Double check schools known to report oddly: Robert Morris University-Illinois specifically
4. Look for major shifts in grad rate at schools many Noble students attend and consider shifting to a 3year average
In all of these cases, we will change the grad rates and the "GR_Source" to designate that a non-standard practice was followed
You can see all of this work in the "manual_grad_rates_corrections_2020.xlsx" file in the raw_inputs folder. This file was created by importing columns from the prior year directory and then applying a process against them. Specifically:
1. Start with "grad_rates.csv" (saved above) and insert 6 columns between columns B&C:
- count: # of students (you can grab from financial_aid_analysis_output.xlsx from the archive-analysis)
- Adj6yr2019: from "manual_grad_rates_corrections_2019.xlsx" (these will be in the first few columns.)
- Adj6yrAAH2019: same
- 2019note: same
- 2019src: same
- 2020-2019AAH: calculated from the above and what's in the file
2. Create a column after Type (will be Column K) with "2020 note". This is where you'll disposition each row.
3. Create columns X-AG as copies of columns M-V. This is where formula-modified values will go. First we'll fill in values for the modified entries. Second, we'll fill those columns in for the (vast majority) of rows with no corrections.
Then look at the notes below for specific steps. The main thing to keep in mind is that "Special" rows in prior years are likely "special" in current years, so be sure to check those. The vast majority will end up "stet", meaning no manual adjustment.
The sections below describe how to do each of the changes listed above.
_After work is completed in this file, the extra columns are removed and the result saved as "grad_rates.csv" in the raw_inputs folder._
## For Case #1 in the above, see the "Noble history bump analysis2020.xlsx" file in raw_inputs
This file was taken from the post-NSC process, looking at the "College and GPA" tab of the "snapshot" report. To create it, take the last version of that file and perform the following steps:
1. Save the "College and GPA" tab alone in a new workbook.
2. Remove the columns at the right for all but the two most recent years and the "All" columns for the # starting and # remaining one year later sections. (In this case, we keep 2017 and 2018.)
3. Filter on "All" GPAs
4. Calculate the columns shown if there were 50 Noble students in 2018 OR if there were 200+ in prior years and 50+ in 2017+2018. (These are arbitrary. Be sure to use the # of starting students for your filter.) Note only keep the "Adjustment" columns if the result has a magnitude greater than 1% and is not a two-year college.
For the columns that have an adjustment, edit those rows in the "manual_grad_rate_corrections_2020.xlsx" file:
1. Change the "GR_Source" to "2018-1/3 Noble gap" (or +) and "2020 note" to "reduce by x%" (or increase)
2. Change the values in X-AC based on the rounded adjustment. AD-AG should just equal the original values.
3. Finally, eyeball how the AA/H value changes compared to the prior year. If there is 5+% drop, change the note to "stet, big natural drop" and change the source to 2018.
## For Case #2 in the above:
1. Filter on the 2019 note for the word "partner".
2. Mirror that increase in columns X-AG unless the partnership has ended. (Also mirror the language in the note and source.)
## For Case #3 in the above:
1. Filter the 2019 note for anything not "stet" or "N/A".
2. For the ones with no 2020 note yet, look at the details and determine (with college counseling guidance) whether a change should be made. A few more notes:
3. "minimum value: .25" is added for 4-year schools with N/A for grad rate if Noble students attend. Apply this rule to all such schools (even if 2019 note was stet or N/A).
4. "floor CCC at 20%" means the rate for City Colleges should be at least 20%. Apply this rule to all City colleges (regardless of 2019 note).
5. Again for any of these, update the 2020 note, source, and then make the actual changes in X-AG.
6. Note that you might want to refer back to the manual grad rate corrections file from the prior year and look at X-AG there specifically to see the formulas used to apply the changes for any non-standard rows.
## For Case #4 in the above:
1. Filter for rows with "2020 note" still blank. (The 2019 note for all of these should be "N/A" or "stet".)
2. Filter for rows with "count" >= 5.
3. Filter for rows with declines bigger than 5%. If the school is using 2018 as the source, switch to the 3yr average. (You'll need to do this by manually grabbing the source files.)
4. Filter for rows with increases bigger than 10%. You probably won't change these, but discuss the increases with a college counselor to see if they pass the "smell test".
## Final disposition:
1. Change the remaining 2020 note values that were blank to "stet"
2. Populate X-AG for all blank rows with the values in M-V (just use an assignment formula in Excel on the filtered selection).
3. Save this file for reference.
4. Copy and paste values for X-AG.
5. Delete columns B-K. (The file will start with UNITID and GR_Source.)
6. Delete the remaining columns until your next column is the old column X.
7. Save in raw_inputs as grad_rates.csv
# Straighten an image using the Hough transform
We'll write our own implementation of the Hough transform and use it to straighten a wonky image.
## Package inclusion for Python
```
import copy
import math
import numpy as np
import cv2
```
## Read the image from a file on the disk and return a new matrix

```
image = cv2.imread("../img/wonky.png")
```
## Check for errors
```
# Check for failure
if image is None:
raise Exception("Could not open or find the image")
```
## Convert degrees to radians
```
def deg2rad(anAngleInDegrees: float) -> float:
return anAngleInDegrees * math.pi / 180.0
```
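A quick sanity check of the conversion (it is equivalent to `math.radians`):

```python
import math

def deg2rad(angle_deg: float) -> float:
    # degrees -> radians: multiply by pi / 180
    return angle_deg * math.pi / 180.0

half_turn = deg2rad(180.0)  # math.pi
```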
## Apply the Canny edge detector
```
def cannyEdgeDetector(anInputImage: np.array,
aCannyThreshold: int) -> np.array:
# Find edges using Canny
ratio = 3
kernel_size = 3
edge_image = cv2.Canny(anInputImage,
aCannyThreshold,
aCannyThreshold * ratio,
kernel_size)
return edge_image
blurred_image = cv2.blur( image, (3,3) )
edge = cannyEdgeDetector(blurred_image, 60)
cv2.namedWindow("edge", cv2.WINDOW_GUI_EXPANDED)
cv2.imshow("edge", edge)
cv2.namedWindow("image", cv2.WINDOW_GUI_EXPANDED)
cv2.imshow("image", image)
cv2.waitKey(0) # Wait for any keystroke in the window
cv2.destroyAllWindows() # Destroy all the created windows
```
| Original image | canny |
|----------------|--------|
| |  |
## Compute the accumulator
```
def houghTransform(anInputImage: np.array,
aCannyThreshold: int) -> np.array:
# Blur the input image
blurred_image = cv2.blur( anInputImage, (3,3) )
# Find edges using Canny
edge_image = cannyEdgeDetector(blurred_image, aCannyThreshold)
width = 180
diagonal = math.sqrt(edge_image.shape[1] * edge_image.shape[1] + edge_image.shape[0] * edge_image.shape[0])
height = math.floor(2.0 * diagonal)
half_height = height / 2.0
accumulator = np.zeros((height, width), np.single)
# Process all the pixels of the edge image
for j in range(edge_image.shape[0]):
for i in range(edge_image.shape[1]):
# The pixel is on an edge
if edge_image[j,i] > 0:
# Process all the angles
for theta in range(180):
angle = deg2rad(theta)
r = i * math.cos(angle) + j * math.sin(angle)
v = math.floor(r + half_height)
accumulator[v, theta] += 1
return accumulator
accumulator = houghTransform(image, 60)
```
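The accumulator uses the normal-form parameterisation r = x·cos(θ) + y·sin(θ), so collinear edge pixels vote for the same (r, θ) cell. A small standalone check (the `half_height` value here is an arbitrary example):

```python
import math

def vote(x: int, y: int, theta_deg: int, half_height: float) -> int:
    """Accumulator row that pixel (x, y) votes for at angle theta (degrees)."""
    angle = math.radians(theta_deg)
    r = x * math.cos(angle) + y * math.sin(angle)
    return math.floor(r + half_height)

# Pixels on the horizontal line y = 10 all agree at theta = 90 degrees
votes = {vote(x, 10, 90, 500.0) for x in (0, 5, 20)}  # a single shared cell
```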
## Visualise the accumulator
Look for dots. Every dot represents a line in the original image. There are four of them.
```
vis_accumulator = cv2.normalize(accumulator, None, 0, 1, cv2.NORM_MINMAX, cv2.CV_32FC1)
cv2.namedWindow("accumulator", cv2.WINDOW_GUI_EXPANDED)
cv2.imshow("accumulator", vis_accumulator)
cv2.waitKey(0) # Wait for any keystroke in the window
cv2.destroyAllWindows() # Destroy all the created windows
```

## Draw the lines
```
Scalar = [float, float, float]
def drawLines(anImage: np.array,
anAccumulator: np.array,
aHoughThreshold: float,
aLineWidth: int,
aLineColour: Scalar) -> np.array:
# Copy the input image into the output image
output = copy.deepcopy(anImage)
# Process all the pixels of the accumulator image
for j in range(anAccumulator.shape[0]):
for i in range(anAccumulator.shape[1]):
# The pixel value in the accumulator is greater than the threshold
# Display the corresponding line
if anAccumulator[j, i] >= aHoughThreshold:
# The pixel location
location = (i, j)
# The two corners of the image
pt1 = [ 0, 0]
pt2 = [anImage.shape[1] - 1, anImage.shape[0] - 1]
# Get theta in radian
theta = deg2rad(location[0])
# Get r
r = location[1]
r -= anAccumulator.shape[0] / 2.0
# How to retrieve the line from theta and r:
# x = (r - y * sin(theta)) / cos(theta);
# y = (r - x * cos(theta)) / sin(theta);
# sin(theta) != 0
if location[0] != 0 and location[0] != 180:
pt1[1] = math.floor((r - pt1[0] * math.cos(theta)) / math.sin(theta))
pt2[1] = math.floor((r - pt2[0] * math.cos(theta)) /math.sin(theta))
# math.sin(theta) == 0 and math.cos(theta) != 0
else:
pt1[0] = math.floor((r - pt1[1] * math.sin(theta)) / math.cos(theta))
pt2[0] = math.floor((r - pt2[1] * math.sin(theta)) / math.cos(theta))
# Draw the line
output = cv2.line(output, pt1, pt2, aLineColour, aLineWidth)
return output
# Get the min and max in the accumulator
min_value, max_value, min_loc, max_loc = cv2.minMaxLoc(accumulator)
hough_threshold = min_value + 0.6 * (max_value - min_value)
image_with_lines = drawLines(image, accumulator, hough_threshold, 4, (0, 0, 255))
cv2.namedWindow("image", cv2.WINDOW_GUI_EXPANDED)
cv2.imshow("image", image)
cv2.namedWindow("edge", cv2.WINDOW_GUI_EXPANDED)
cv2.imshow("edge", edge)
cv2.namedWindow("image_with_lines", cv2.WINDOW_GUI_EXPANDED)
cv2.imshow("image_with_lines", image_with_lines)
cv2.waitKey(0) # Wait for any keystroke in the window
cv2.destroyAllWindows() # Destroy all the created windows
```
| Original image | canny | lines |
|----------------|--------|--------|
| |  | 
## Extract the angle from the accumulator
```
min_value, max_value, min_loc, max_loc = cv2.minMaxLoc(accumulator)
print("Max value: ", max_value, " Location: ", (90 - max_loc[0], max_loc[1]))
```
We must convert the position along the horizontal axis into an angle. The location [9, 1632] tells us that the image is rotated by 9 degrees; to straighten it, we must rotate it by -9 degrees.
```
def rotate(anImage: np.array, angle: float) -> np.array:
# Point around which to rotate (centre of rotation), here the centre of the image
pt = (anImage.shape[1] / 2.0, anImage.shape[0] / 2.0)
# Create a rotation matrix
rotation_matrix = cv2.getRotationMatrix2D(pt, angle, 1.0)
# Apply the transformation to the image (dsize is (width, height))
rotated_image = cv2.warpAffine(anImage, rotation_matrix, (anImage.shape[1], anImage.shape[0]))
return rotated_image
print(90 - max_loc[0])
rotated = rotate(image, -(90 - max_loc[0]))
cv2.namedWindow("image", cv2.WINDOW_GUI_EXPANDED)
cv2.imshow("image", image)
cv2.namedWindow("rotated", cv2.WINDOW_GUI_EXPANDED)
cv2.imshow("rotated", rotated)
cv2.waitKey(0) # Wait for any keystroke in the window
cv2.destroyAllWindows() # Destroy all the created windows
```
| Original image | straighten |
|----------------|--------|
| | 
```
# Initial setup
from IPython.core.display import display, HTML
display(HTML("<style>.container { width: 100% !important; }</style>"))
import pprint
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import missingno as msno
import time
from datetime import datetime
pd.set_option("display.max_columns", 40)
%matplotlib inline
```
##### Run the models in the cleaned-data section further below #####
##### Adding variables
```
# BABIP (batting average on balls in play)
sdata['BABIP'] = (sdata['안타'] - sdata['홈런'])/(sdata['타수'] -sdata['삼진']-sdata['홈런'] -sdata['희비'])
# Foreign-player flag (외국인): heuristic on the first character of the name
import re
names = sdata.이름.unique()
sdata['외국인'] = 0
for name in names:
if re.findall('[에브워대피히버러칸루필파스아마가초모로발번테호]', name[0]):
sdata.loc[sdata['이름'] == name, '외국인'] = 1
# A few foreign players the heuristic misses
for name in ['나바로', '조쉬벨', '고메즈']:
sdata.loc[sdata['이름'] == name, '외국인'] = 1
# Bin age into categories (나이C)
sdata['나이C'] = sdata.나이.apply(lambda x : 0 if x <= 23 else x)
sdata['나이C'] = sdata.나이C.apply(lambda x : 1 if 26 >= x > 23 else x)
sdata['나이C'] = sdata.나이C.apply(lambda x : 2 if 33 >= x >26 else x)
sdata['나이C'] = sdata.나이C.apply(lambda x : 3 if 37 >= x >33 else x)
sdata['나이C']= sdata.나이C.apply(lambda x : 4 if 40 >= x > 37 else x)
sdata['나이C']= sdata.나이C.apply(lambda x : 5 if x >= 40 else x)
# Number of team transfers (이적)
sdata['이적'] = None
for x in sdata['이름'].unique():
sdata.loc[sdata['이름'] == x, '이적'] = len(sdata[sdata['이름'] == x]['팀'].unique())
sdata['이적C'] = sdata.이적.apply(lambda x: 0 if x == 1 else x)
sdata['이적C'] = sdata.이적C.apply(lambda x: 1 if x == 2 else x)
sdata['이적C'] = sdata.이적C.apply(lambda x: 2 if x == 3 else x)
sdata['이적C'] = sdata.이적C.apply(lambda x: 3 if x >= 3 else x)
sdata.columns
```
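The chained `.apply()` age binning above can be expressed more compactly with `pd.cut`; a sketch on a toy series (the bin edges mirror the thresholds used above):

```python
import numpy as np
import pandas as pd

ages = pd.Series([20, 24, 30, 35, 40, 45])
# Right-inclusive bins: <=23 -> 0, (23,26] -> 1, (26,33] -> 2,
# (33,37] -> 3, (37,40] -> 4, >40 -> 5
age_bins = pd.cut(ages,
                  bins=[-np.inf, 23, 26, 33, 37, 40, np.inf],
                  labels=[0, 1, 2, 3, 4, 5]).astype(int)
```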
#### Load the cleaned data and fit the models
```
test = pd.read_csv('test_datas0402.csv')
train = pd.read_csv('train_datas0402.csv')
#sdata
#train = sdata[(sdata['시즌'] <= 2018)]
#test = sdata[(sdata['시즌'] == 2019)]
train
import statsmodels.api as sm
model = sm.OLS.from_formula('연봉 ~ np.log(나이) + scale(np.log(G)) + scale(타석) + np.log(scale(안타)) + (np.log(볼넷)) + \
(np.log(사구)) + scale(고4) + scale(np.log(삼진)) + np.log(병살) +\
np.log(scale(희타)) + scale(희비) + np.log((twoBLUCK)) + np.log(scale(threeBLUCK)) + (scale(ISO)) +\
np.log(scale(ISOD)) ' , data=train)
# model = sm.OLS.from_formula('연봉 ~ ' + '+'.join(s_col) , data=train)
result = model.fit()
#print(result.summary())
train.columns
train_x = train[['시즌', '팀', '포지션', '나이', 'G', '타석', '타수', '득점', '안타', '타1',
'타2', '타3', '홈런', '루타', '타점', '도루', '도실', '볼넷', '사구', '고4', '삼진', '병살',
'희타', '희비', '타율', '출루', '장타', 'OPS', 'wOBA', 'wRC', 'twoBLUCK',
'threeBLUCK', 'ISO', 'BBK', 'ISOD', '횟수', '경험', 'BABIP',
'외국인', '나이C', '이적', '이적C']]
train_y = train[['연봉']]
train_y_log = train[['로그연봉']]
test_x = test[['시즌', '팀', '포지션', '나이', 'G', '타석', '타수', '득점', '안타', '타1',
'타2', '타3', '홈런', '루타', '타점', '도루', '도실', '볼넷', '사구', '고4', '삼진', '병살',
'희타', '희비', '타율', '출루', '장타', 'OPS', 'wOBA', 'wRC', 'twoBLUCK',
'threeBLUCK', 'ISO', 'BBK', 'ISOD', '횟수', '경험', 'BABIP',
'외국인', '나이C', '이적', '이적C']]
test_y = test[['연봉']]
test_y_log = test['로그연봉']
#col2 = ['시즌', '나이', 'G', '타석', '도실', '볼넷',\
# '사구', '고4', '삼진', '병살', '희타', '희비', '타율', 'WAR', 'twoBLUCK',\
# 'threeBLUCK', 'ISO', 'ISOD']
#s_col = [f'scale({c})' for c in col2]
model = sm.OLS.from_formula('연봉 ~ G' , data=train)
# model = sm.OLS.from_formula('연봉 ~ ' + '+'.join(s_col) , data=train)
result = model.fit()
print(result.summary())
```
##### 1. Full model
```
model = sm.OLS.from_formula('연봉 ~ 나이+ G+ 타석+ 타수+ 득점+ 안타+ 타1+ 타2+ 타3+\
홈런+ 루타+ 타점+ 도루+ 도실+ 볼넷+ 사구+ 고4+ 삼진+ 병살+ 희타+ 희비+ \
타율+ 출루+ 장타+ OPS+ wOBA+ wRC+ twoBLUCK+ threeBLUCK+ ISO+ BBK+ ISOD+ 횟수+\
경험 + BABIP+외국인+ 나이C+ 이적+ 이적C', data=train)
result = model.fit()
print(result.summary())
df = train
from sklearn.model_selection import KFold
scores = np.zeros(5)
cv = KFold(5, shuffle=True, random_state=0)
for i, (idx_train, idx_test) in enumerate(cv.split(df)):
df_train = df.iloc[idx_train]
df_test = df.iloc[idx_test]
model = sm.OLS.from_formula('연봉 ~ 나이+ G+ 타석+ 타수+ 득점+ 안타+ 타1+ 타2+ 타3+\
홈런+ 루타+ 타점+ 도루+ 도실+ 볼넷+ 사구+ 고4+ 삼진+ 병살+ 희타+ 희비+ \
타율+ 출루+ 장타+ OPS+ wOBA+ wRC+ twoBLUCK+ threeBLUCK+ ISO+ BBK+ ISOD+ 횟수+\
경험 + BABIP+외국인+ 나이C+ 이적+ 이적C' , data=df_train)
result = model.fit()
pred = result.predict(df_test)
rss = ((df_test.연봉 - pred) ** 2).sum()
tss = ((df_test.연봉 - df_test.연봉.mean())** 2).sum()
rsquared = 1 - rss / tss
scores[i] = rsquared
print("Train R2 = {:.8f}, Validation R2 = {:.8f}".format(result.rsquared, rsquared))
```
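The validation R² in each fold above is computed by hand as 1 − RSS/TSS; a tiny standalone check of that formula with made-up numbers:

```python
import numpy as np

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.2, 3.8])

rss = ((y_true - y_pred) ** 2).sum()         # residual sum of squares
tss = ((y_true - y_true.mean()) ** 2).sum()  # total sum of squares
r2 = 1 - rss / tss                           # 0.98 for these numbers
```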
#### 2. Model 2
```
model = sm.OLS.from_formula('로그연봉 ~ C(시즌) + 포지션 + scale(G) + C(팀) + scale(홈런) + scale(타3)\
+ scale(타석) + scale(득점) + scale(사구) + scale(고4) + scale(도루)\
+ scale(삼진) + scale(병살) + scale(희타) + scale(희비) + scale(타율) + 외국인\
+ scale(OPS) + C(나이C):횟수', data=train)
result = model.fit()
print(result.summary())
df = train
from sklearn.model_selection import KFold
scores = np.zeros(5)
cv = KFold(5, shuffle=True, random_state=0)
for i, (idx_train, idx_test) in enumerate(cv.split(df)):
df_train = df.iloc[idx_train]
df_test = df.iloc[idx_test]
model = sm.OLS.from_formula('로그연봉 ~ C(시즌) + 포지션 + scale(G) + C(팀) + scale(홈런) + scale(타3)\
+ scale(타석) + scale(득점) + scale(사구) + scale(고4) + scale(도루)\
+ scale(삼진) + scale(병살) + scale(희타) + scale(희비) + scale(타율) + 외국인\
+ scale(OPS) + C(나이C):횟수' , data=df_train)
result = model.fit()
pred = result.predict(df_test)
rss = ((df_test.로그연봉 - pred) ** 2).sum()
tss = ((df_test.로그연봉 - df_test.로그연봉.mean())** 2).sum()
rsquared = 1 - rss / tss
scores[i] = rsquared
print("Train R2 = {:.8f}, Validation R2 = {:.8f}".format(result.rsquared, rsquared))
```
#### 3. Model 3
```
model = sm.OLS.from_formula('로그연봉 ~ C(시즌) +C(팀) + C(나이C):scale(횟수) + C(포지션) + scale(G) + scale(홈런) + scale(루타) +\
scale(도루) +scale(고4) + scale(타3) + C(외국인) + scale(BABIP) +C(이적C) + scale(ISOD) + \
scale(BBK) + scale(타율) + scale(경험)', data=train)
result = model.fit()
print(result.summary())
df = train
from sklearn.model_selection import KFold
scores = np.zeros(5)
cv = KFold(5, shuffle=True, random_state=0)
for i, (idx_train, idx_test) in enumerate(cv.split(df)):
df_train = df.iloc[idx_train]
df_test = df.iloc[idx_test]
model = sm.OLS.from_formula('로그연봉 ~ C(시즌) +C(팀) + C(나이C):scale(횟수) + C(포지션) + scale(G) + scale(홈런) + scale(루타) +\
scale(도루) +scale(고4) + scale(타3) + C(외국인) + scale(BABIP) +C(이적C) + scale(ISOD) + \
scale(BBK) + scale(타율) + scale(경험)' , data=df_train)
result = model.fit()
pred = result.predict(df_test)
rss = ((df_test.로그연봉 - pred) ** 2).sum()
tss = ((df_test.로그연봉 - df_test.로그연봉.mean())** 2).sum()
rsquared = 1 - rss / tss
scores[i] = rsquared
print("Train R2 = {:.8f}, Validation R2 = {:.8f}".format(result.rsquared, rsquared))
```
#### 4. Model 4
```
model = sm.OLS.from_formula('로그연봉 ~ C(시즌) + C(팀) + C(포지션) + G +\
홈런 + 루타 + 타석 + 득점 +\
도루 + 고4 + 타3 + 희타 + 희비 + 타율 + OPS +\
C(외국인) + C(나이C):횟수', data=train)
result = model.fit()
print(result.summary())
df = train
from sklearn.model_selection import KFold
scores = np.zeros(5)
cv = KFold(5, shuffle=True, random_state=0)
for i, (idx_train, idx_test) in enumerate(cv.split(df)):
df_train = df.iloc[idx_train]
df_test = df.iloc[idx_test]
model = sm.OLS.from_formula('로그연봉 ~ C(시즌) + C(팀) + C(포지션) + G +\
홈런 + 루타 + 타석 + 득점 +\
도루 + 고4 + 타3 + 희타 + 희비 + 타율 + OPS +\
C(외국인) + C(나이C):횟수' , data=df_train)
result = model.fit()
pred = result.predict(df_test)
rss = ((df_test.로그연봉 - pred) ** 2).sum()
tss = ((df_test.로그연봉 - df_test.로그연봉.mean())** 2).sum()
rsquared = 1 - rss / tss
scores[i] = rsquared
print("Train R2 = {:.8f}, Validation R2 = {:.8f}".format(result.rsquared, rsquared))
```
# Demonstration notebook to search, list product and download a band
This notebook is organised in four sections:
1. Import requirements and definition of the parameters
2. Search for cached products
3. Listing the bands of one product
4. Downloading a band
## Import requirements and definition of the parameters
```
import os
import os.path as path
import requests
import json
```
Access to the Catalogue API.
This API lists the EO products that have been downloaded into the Max-ICS cache.
The cache is populated by user jobs, which can be created using the user-job API.
```
API_CATALOGUE_BASE_URL = "https://api.earthlab.lu/priv/sat-job-catalog"
API_PRODUCT_BANDS_BASE_URL = "https://api.earthlab.lu/priv/sat-product-job/v1/products/{product_id}/bands"
API_PRODUCT_BAND_BASE_URL = "https://api.earthlab.lu/priv/sat-product-job/v1/products/{product_id}/bands/{file_id}"
API_AUTH_BASE_URL = "https://max-ics.earthlab.lu/authServer/api/v1/connection"
PROVIDER = "sentinel-2"
GEO_AREA = {"type":"Polygon","coordinates":[[[-104.852226585911,39.600325260831596],[-97.9901366137382,39.600325260831596],[-97.9901366137382,43.20496589647098],[-104.852226585911,43.20496589647098],[-104.852226585911,39.600325260831596]]]}
USERNAME = "<user>" ## Please modify
PASSWORD = "<pass>" ## Please modify
```
Various functions that can be used to authenticate, search for products, list bands and download bands
```
def get_auth_token() -> str:
"""Function to get an authentication Bearer
:return: JWT token
:rtype: str"""
payload = {"user_name": USERNAME,"password": PASSWORD,"with_token": False}
req_auth = requests.post(API_AUTH_BASE_URL, json=payload, )
req_auth.raise_for_status()
bearer = req_auth.headers['Set-Cookie'].split(";")[0]
return "Bearer " + bearer.split("=")[1]
def search_for_cached_products(geo_area: dict, provider: str, limit: int) -> list:
"""Function to search for cached product within a certain area
:param geo_area: GeoJSON for the rectangled searched area
:type geo_area: dict (GeoJSON)
:param provider: name of the provider (one of: landsat-8, sentinel-2...)
:type provider: str
:param limit: Number of results (max 1000)
:type limit: int
:return: List of products
:rtype: list
"""
## Filter results within given flattened coordinates as a single array.
## It can contain 4 coordinates (bbox) or more than 8 (polygon)
within = []
for point in geo_area['coordinates'][0]:
within.append(point[0])
within.append(point[1])
url = API_CATALOGUE_BASE_URL + "/products?provider=" + provider
for coord in within:
url = url + "&within=" + str(coord)
url = url + "&limit=" + str(limit)
req = requests.get(url=url, headers={"authorization": get_auth_token()})
req.raise_for_status()
return req.json()
def download_product_bands(product_id: str) -> list:
"""Function to get the bands information for one particular product
:param product_id: ID of the product
:type product_id: str
:return: list of bands
:rtype: list
"""
url = API_PRODUCT_BANDS_BASE_URL.format(product_id=product_id)
req = requests.request(method="get", url=url, headers={"authorization": get_auth_token()})
req.raise_for_status()
return req.json()
def download_one_band(product_id: str, file_id: str, filename: str, out_path: str = "./") -> None:
"""Function to download a product from the API
:param product_id: ID of the product
:type product_id: str
:param file_id: ID of the band
:type file_id: str
:param filename: Name of the file to save
:type filename: str
:param out_path: folder to use for saving the file [Default to: './']
:type out_path: str
"""
url = API_PRODUCT_BAND_BASE_URL.format(product_id=product_id, file_id=file_id)
req = requests.get(url=url, headers={"authorization": get_auth_token()}, stream=True)
req.raise_for_status()
with open(os.path.join(out_path, filename), "wb") as handle:
for chunk in req.iter_content(chunk_size=8192):
if chunk:  # filter out keep-alive chunks
handle.write(chunk)
```
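Remote API calls like the ones above can fail transiently; a generic retry helper (a sketch, not part of the Max-ICS API) can wrap any of these functions:

```python
import time
from typing import Callable, TypeVar

T = TypeVar("T")

def with_retries(fn: Callable[[], T], attempts: int = 3, base_delay: float = 0.5) -> T:
    """Call fn, retrying with exponential backoff on any exception."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: propagate the last error
            time.sleep(base_delay * (2 ** attempt))
    raise RuntimeError("unreachable")

# e.g.: product_list = with_retries(
#     lambda: search_for_cached_products(GEO_AREA, PROVIDER, 1000))
```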
# First, search within the cached products
```
product_list = search_for_cached_products(geo_area=GEO_AREA, provider=PROVIDER, limit=1000)
print("Got ", len(product_list), " products")
```
# Second, list the bands
```
one_product_id = product_list[0]['id']
one_product_bands = download_product_bands(product_id=one_product_id)
print("The first product has ", len(one_product_bands), " pseudo-bands (one file per band and resolution):")
print("------------------------------------|---------------------------------------------------------------")
print("Filename | Band ID")
print("------------------------------------|---------------------------------------------------------------")
for band in one_product_bands:
print(band['filename'], " | ", band['id'])
```
Getting more information about the band
```
## Example of band information:
print("Example of information for ", band['filename'])
print("Name: ", band['details']['name'])
print("Start of wavelength: ", band['details']['start_wavelength_nm'], " nm")
print("End of wavelength: ", band['details']['end_wavelength_nm'], " nm")
print("Resolution: ", band['details']['pixel_size_m'], " m/px")
```
# Third, download the band as a file
```
## Example of code to download the mentioned band
download_one_band(product_id=one_product_id, file_id=band['id'], filename=band['filename'])
```
Wayne H Nixalo - 09 Aug 2017
This notebook is an attempt to do the neural artistic style transfer and super-resolution examples done in class, on a GPU, using PyTorch for speed.
Lesson NB: [neural-style-pytorch](https://github.com/fastai/courses/blob/master/deeplearning2/neural-style-pytorch.ipynb)
## Neural Style Transfer
Style Transfer / Super Resolution Implementation in PyTorch
```
%matplotlib inline
import importlib
import os, sys; sys.path.insert(1, os.path.join('../utils'))
from utils2 import *
import torch, torch.nn as nn, torch.nn.functional as F, torch.optim as optim
from torch.autograd import Variable
from torch.utils.serialization import load_lua
from torch.utils.data import DataLoader
from torchvision import transforms, models, datasets
```
### Setup
```
path = '../data/nst/'
fnames = pickle.load(open(path+'fnames.pkl','rb'))
img = Image.open(path + fnames[0]); img
rn_mean = np.array([123.68, 116.779, 103.939], dtype=np.float32).reshape((1,1,1,3))
preproc = lambda x: (x - rn_mean)[:,:,:,::-1]
img_arr = preproc(np.expand_dims(np.array(img),0))
shp = img_arr.shape
deproc = lambda x: x[:,:,:,::-1] + rn_mean
```
### Create Model
```
def download_convert_vgg16_model():
model_url = 'http://cs.stanford.edu/people/jcjohns/fast-neural-style/models/vgg16.t7'
file = get_file(model_url, cache_subdir='models')
vgglua = load_lua(file).parameters()
vgg = models.VGGFeature()
for (src, dst) in zip(vgglua[0], vgg.parameters()): dst[:] = src[:]
torch.save(vgg.state_dict(), path + 'vgg16_feature.pth')
url = 'https://s3-us-west-2.amazonaws.com/jcjohns-models/'
fname = 'vgg16-00b39a1b.pth'
file = get_file(fname, url+fname, cache_subdir='models')
vgg = models.vgg.vgg16()
vgg.load_state_dict(torch.load(file))
optimizer = optim.Adam(vgg.parameters())
vgg.cuda();
arr_lr = bcolz.open(path + 'trn_resized_72.bc')[:]
arr_hr = bcolz.open(path + 'trn_resized_288.bc')[:]
arr = bcolz.open(path + 'trn_resized.bc')[:]
x = Variable(arr[0])
y = model(x)
url = 'http://www.files.fast.ai/models/'
fname = 'imagenet_class_index.json'
fpath = get_file(fname, url + fname, cache_subdir='models')
class ResidualBlock(nn.Module):
    def __init__(self, num):
        super(ResidualBlock, self).__init__()
        self.c1 = nn.Conv2d(num, num, kernel_size=3, stride=1, padding=1)
        self.c2 = nn.Conv2d(num, num, kernel_size=3, stride=1, padding=1)
        self.b1 = nn.BatchNorm2d(num)
        self.b2 = nn.BatchNorm2d(num)
    def forward(self, x):
        h = F.relu(self.b1(self.c1(x)))
        h = self.b2(self.c2(h))
        return h + x

class FastStyleNet(nn.Module):
    def __init__(self):
        super(FastStyleNet, self).__init__()
        self.cs = [nn.Conv2d(3, 32, kernel_size=9, stride=1, padding=4),
                   nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1),
                   nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1)]
        self.b1s = [nn.BatchNorm2d(i) for i in [32, 64, 128]]
        self.rs = [ResidualBlock(128) for i in range(5)]
        self.ds = [nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),
                   nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1)]
        self.b2s = [nn.BatchNorm2d(i) for i in [64, 32]]
        self.d3 = nn.Conv2d(32, 3, kernel_size=9, stride=1, padding=4)
    def forward(self, h):
        for i in range(3): h = F.relu(self.b1s[i](self.cs[i](h)))
        for r in self.rs: h = r(h)
        for i in range(2): h = F.relu(self.b2s[i](self.ds[i](h)))
        return self.d3(h)
```
### Loss Functions and Processing
# In-Class Coding Lab: Conditionals
The goals of this lab are to help you to understand:
- Relational and Logical Operators
- Boolean Expressions
- The if statement
- Try / Except statement
- How to create a program from a complex idea.
# Understanding Conditionals
Conditional statements permit the non-linear execution of code. Take the following example, which detects whether the input integer is odd or even:
```
number = int(input("Enter an integer: "))
if number %2==0:
print("%d is even" % (number))
else:
print("%d is odd" % (number))
```
###### Make sure to run the cell more than once, entering both odd and even integers to try it out. After all, we don't know if the code really works until we test both options.
On line 2, you see `number % 2 == 0`; this is the Boolean expression at the center of the logic of this program. The expression says **number when divided by 2 has a remainder (%) equal to (==) zero**. The key to deciphering this is knowing how the `%` and `==` operators work. Understanding basics such as these is essential to problem solving with programming, for once you understand the basics, programming becomes an exercise in assembling them into a workable solution.
The `if` statement evaluates this Boolean expression and when the expression is `True`, Python executes all of the code indented underneath the `if`. In the event the Boolean expression is `False`, Python executes the code indented under the `else`.
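Before moving on, it may help to see these two operators in isolation. The snippet below is purely illustrative and not part of the lab's own cells:

```python
# The % (remainder) and == (equality) operators in isolation.
print(10 % 2)        # -> 0, remainder of 10 divided by 2
print(7 % 2)         # -> 1, remainder of 7 divided by 2
print(10 % 2 == 0)   # -> True: 10 is even
print(7 % 2 == 0)    # -> False: 7 is odd
```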
## Now Try It
Write a similar program to input an integer and print "Zero or Positive" when the number is greater than or equal to zero, and "Negative" otherwise.
To accomplish this you **must** write a Boolean expression for **number greater than or equal to zero**, which is left up to the reader.
```
# TODO write your program here:
number = int(input("Enter an integer: "))
if number >=0:
print("%d is greater than or equal to zero" % (number))
else:
print("%d is not greater than or equal to zero" % (number))
```
# Rock, Paper Scissors
In this part of the lab we'll build out a game of Rock, Paper, Scissors. If you're not familiar with the game, I suggest reading this: [https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissors](https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissors). Knowledge of the game will help you understand the lab much better.
The objective of the lab is to teach you how to use conditionals, but also to get you thinking about how to solve problems with programming. We've said before it's non-linear, with several attempts before you reach the final solution. You'll experience this first-hand in this lab as we figure things out one piece at a time and add them to our program.
```
## Here's our initial To-Do list, we've still got lots to figure out.
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
# 2. you input one of "rock", "paper" or "scissors"
# 3. play the game and determine a winner... (not sure how to do this yet.)
```
## Randomizing the Computer's Selection
Let's start by coding the TO-DO list. First we need to make the computer select from "rock", "paper" or "scissors" at random.
To accomplish this, we need to use python's `random` library, which is documented here: [https://docs.python.org/3/library/random.html](https://docs.python.org/3/library/random.html)
It would appear we need to use the `choice()` function, which takes a sequence of choices and returns one at random. Let's try it out.
```
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
computer
```
Run the cell a couple of times. It should make a random selection from `choices` each time you run it.
How did I figure this out? Well I started with a web search and then narrowed it down from the Python documentation. You're not there yet, but at some point in the course you will be. When you get there you will be able to teach yourself just about anything!
## Getting input and guarding against stupidity
With step one out of the way, it's time to move on to step 2: getting input from the user.
```
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
print("You chose %s and the computer chose %s" % (you,computer))
```
This is taking shape, but if you re-run the example and enter `pizza` you'll notice a problem.
We should guard against the situation when someone enters something other than 'rock', 'paper' or 'scissors'. This is where our first conditional comes into play.
### In operator
The `in` operator returns a Boolean based on whether a value is in a list of values. Let's try it:
```
# TODO Try these:
'rock' in choices, 'mike' in choices
```
### You Do It!
Now modify the code below to only print your and the computer's selections when your input is one of the valid choices. Replace `TODO` on line `8` with a correct Boolean expression to verify what you entered is one of the valid choices.
```
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
    # 3. play the game and determine a winner... (not sure how to do this yet.)
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
```
## Playing the game
With the input figured out, it's time to work our final step, playing the game. The game itself has some simple rules:
- rock beats scissors (rock smashes scissors)
- scissors beats paper (scissors cuts paper)
- paper beats rock (paper covers rock)
So for example:
- If you choose rock and the computer chooses paper, you lose because paper covers rock.
- Likewise if you select rock and the computer choose scissors, you win because rock smashes scissors.
- If you both choose rock, it's a tie.
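As an aside, the three rules above can also be captured in a small lookup table. This sketch is just an alternative encoding (the names `beats` and `outcome` are invented here); the lab below builds the same logic step by step with `if`/`elif` instead:

```python
# Each key beats its value.
beats = {'rock': 'scissors', 'scissors': 'paper', 'paper': 'rock'}

def outcome(you, computer):
    if you == computer:
        return 'tie'
    return 'win' if beats[you] == computer else 'lose'

print(outcome('rock', 'scissors'))  # -> win
print(outcome('rock', 'paper'))     # -> lose
print(outcome('rock', 'rock'))      # -> tie
```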
## It's too complicated!
It still might seem too complicated to program this game, so let's use a process called **problem simplification** where we solve an easier version of the problem, then as our understanding grows, we increase the complexity until we solve the entire problem.
One common way we simplify a problem is to constrain our input. If we force ourselves to always choose 'rock', the program becomes a little easier to write.
```
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = 'rock' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
    # 3. play the game and determine a winner (assuming rock only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
```
Run the code in the cell above enough times to verify it works. (You win, you lose and you tie.) That will ensure the code you have works as intended.
## Paper: Making the program a bit more complex.
With the rock logic out of the way, it's time to focus on paper. We will assume you always type `paper` and then add the conditional logic to our existing code to handle it.
At this point you might be wondering: should I make a separate `if` statement, or should I chain the conditions off the current `if` with `elif`? Since this is part of the same input, it should be an extension of the existing `if` statement. You should **only** introduce an additional conditional if you're making a separate decision, for example asking the user if they want to play again. Since this is part of the same decision (did you enter 'rock', 'paper' or 'scissors'?), it should be in the same `if...elif` ladder.
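To see why the choice matters, here is a tiny sketch unrelated to the game (the grading thresholds are invented for illustration): with separate `if` statements every condition is tested, so more than one branch can run, while an `if...elif` ladder makes a single decision.

```python
score = 95
separate = []
# Two separate ifs: both conditions are tested, so BOTH branches fire.
if score >= 90:
    separate.append('A')
if score >= 80:
    separate.append('B')
print(separate)  # -> ['A', 'B']

ladder = []
# One if...elif ladder: exactly one branch runs.
if score >= 90:
    ladder.append('A')
elif score >= 80:
    ladder.append('B')
print(ladder)    # -> ['A']
```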
### You Do It
In the code below, I've added the logic to address your input of 'paper'. You only have to replace the `TODO` in the `print()` statements with the appropriate message.
```
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = 'paper' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
    # 3. play the game and determine a winner (assuming paper only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
elif (you == 'paper' and computer =='rock'):
print("You win! Paper covers rock")
elif (you == 'paper' and computer == 'scissors'):
print("You lose! Scissors cut paper")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
```
## The final program
With the 'rock' and 'paper' cases out of the way, we only need to add the 'scissors' logic. We leave this part to you as your final exercise.
Similar to the 'paper' example, you will need to complete two `elif` statements to handle winning and losing, and you should also include the appropriate output messages.
```
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
    # 3. play the game and determine a winner
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
elif (you == 'paper' and computer == 'rock'):
print("TODO")
elif (you == 'paper' and computer == 'scissors'):
print("TODO")
elif (you == 'scissors' and computer == 'rock'):
print("You lose! Rock smashes scissors.")
elif (you == 'scissors' and computer == 'paper'):
print("You win! Scissors cut paper.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
```
<a href="https://colab.research.google.com/github/gabilodeau/INF6804/blob/master/FeatureVectorsComp.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
INF6804 Computer Vision
Polytechnique Montréal
Distances between histograms (L1, L2, MDPA, Bhattacharyya)
```
import numpy as np
import cv2
import matplotlib.pyplot as plt
from sklearn.metrics.pairwise import cosine_similarity
```
Function to compute the MDPA distance
```
def distMDPA(V1, V2):
    # Sum of the absolute prefix sums of the element-wise differences
    dist = 0
    for i in range(len(V1)):
        dint = 0
        for j in range(i):
            dint += V1[j] - V2[j]
        dist += abs(dint)
    return dist
```
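The double loop in `distMDPA` runs in O(n²) because the inner loop rebuilds the same prefix sum each time. The same value can be computed in a single pass with `itertools.accumulate`; the version below is a sketch added as an aside (`distMDPA_fast` is an invented name), equivalent to the function above:

```python
from itertools import accumulate

def distMDPA_fast(V1, V2):
    # MDPA: sum of absolute prefix sums of the element-wise difference.
    diffs = [a - b for a, b in zip(V1, V2)]
    # Drop the final prefix to mirror the inner loop, which stops at j < i.
    return sum(abs(p) for p in list(accumulate(diffs))[:-1])

print(distMDPA_fast([3.0, 4.0, 3.0, 1.0, 6.0], [2.0, 5.0, 3.0, 1.0, 6.0]))  # -> 1.0
```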
Creation of 5 vectors. We will compare against Vecteur1 as the baseline.
```
Vecteur1 = np.array([3.0, 4.0, 3.0, 1.0, 6.0])
Vecteur2 = np.array([2.0, 5.0, 3.0, 1.0, 6.0])
Vecteur3 = np.array([2.0, 4.0, 3.0, 1.0, 7.0])
Vecteur4 = np.array([1.0, 5.0, 4.0, 1.0, 6.0])
Vecteur5 = np.array([3.0, 5.0, 2.0, 2.0, 5.0])
```
L1 distance (norm). The results will be displayed on a plot.
```
dist1 = cv2.norm(Vecteur1, Vecteur2, cv2.NORM_L1)
dist2 = cv2.norm(Vecteur1, Vecteur3, cv2.NORM_L1)
dist3 = cv2.norm(Vecteur1, Vecteur4, cv2.NORM_L1)
dist4 = cv2.norm(Vecteur1, Vecteur5, cv2.NORM_L1)
#Pour affichage...
x = [0, 0.1, 0.2, 0.3]
color = ['r','g','b','k']
dist = [dist1, dist2, dist3, dist4]
```
L2 distance (norm).
```
dist1 = cv2.norm(Vecteur1, Vecteur2, cv2.NORM_L2)
dist2 = cv2.norm(Vecteur1, Vecteur3, cv2.NORM_L2)
dist3 = cv2.norm(Vecteur1, Vecteur4, cv2.NORM_L2)
dist4 = cv2.norm(Vecteur1, Vecteur5, cv2.NORM_L2)
x = x + [1, 1.1, 1.2, 1.3]
dist = dist + [dist1, dist2, dist3, dist4]
color = color + ['r','g','b','k']
```
MDPA (Minimum Difference of Pair Assignments) distance.
```
dist1 = distMDPA(Vecteur1, Vecteur2)
dist2 = distMDPA(Vecteur1, Vecteur3)
dist3 = distMDPA(Vecteur1, Vecteur4)
dist4 = distMDPA(Vecteur1, Vecteur5)
x = x + [2, 2.1, 2.2, 2.3]
dist = dist + [dist1, dist2, dist3, dist4]
color = color + ['r','g','b','k']
```
Bhattacharyya distance, with the values normalized between 0 and 1.
```
Vecteur1 = Vecteur1/np.sum(Vecteur1)
Vecteur2 = Vecteur2/np.sum(Vecteur2)
Vecteur3 = Vecteur3/np.sum(Vecteur3)
Vecteur4 = Vecteur4/np.sum(Vecteur4)
Vecteur5 = Vecteur5/np.sum(Vecteur5)
dist1 = cv2.compareHist(Vecteur1.transpose().astype('float32'), Vecteur2.transpose().astype('float32'), cv2.HISTCMP_BHATTACHARYYA)
dist2 = cv2.compareHist(Vecteur1.transpose().astype('float32'), Vecteur3.transpose().astype('float32'), cv2.HISTCMP_BHATTACHARYYA)
dist3 = cv2.compareHist(Vecteur1.transpose().astype('float32'), Vecteur4.transpose().astype('float32'), cv2.HISTCMP_BHATTACHARYYA)
dist4 = cv2.compareHist(Vecteur1.transpose().astype('float32'), Vecteur5.transpose().astype('float32'), cv2.HISTCMP_BHATTACHARYYA)
x = x + [3, 3.1, 3.2, 3.3]
dist = dist + [dist1, dist2, dist3, dist4]
color = color + ['r','g','b', 'k']
```
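For reference, when both histograms are normalized to sum to 1 (as above), OpenCV's `HISTCMP_BHATTACHARYYA` should reduce to the Hellinger-style formula d = sqrt(1 - sum_i sqrt(p_i * q_i)). A standalone pure-Python sketch (`bhattacharyya_distance` is an illustrative name, not part of the original notebook):

```python
import math

def bhattacharyya_distance(p, q):
    # Assumes p and q are non-negative and each sums to 1.
    bc = sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))  # Bhattacharyya coefficient
    return math.sqrt(max(0.0, 1.0 - bc))  # clamp guards against rounding error

print(bhattacharyya_distance([0.25, 0.25, 0.25, 0.25], [0.25, 0.25, 0.25, 0.25]))  # identical -> 0.0
print(bhattacharyya_distance([1.0, 0.0], [0.0, 1.0]))  # disjoint -> 1.0
```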
Cosine similarity.
```
dist1 = cosine_similarity(Vecteur1.reshape(1, -1), Vecteur2.reshape(1, -1))
dist2 = cosine_similarity(Vecteur1.reshape(1, -1), Vecteur3.reshape(1, -1))
dist3 = cosine_similarity(Vecteur1.reshape(1, -1), Vecteur4.reshape(1, -1))
dist4 = cosine_similarity(Vecteur1.reshape(1, -1), Vecteur5.reshape(1, -1))
x = x + [4, 4.1, 4.2, 4.3]
dist = dist + [dist1, dist2, dist3, dist4]
color = color + ['r','g','b', 'k']
```
Plotting the distances.
```
plt.scatter(x, dist, c = color)
plt.text(0,0, 'Distance L1')
plt.text(0.8,1, 'Distance L2')
plt.text(1.6,0, 'Distance MDPA')
plt.text(2.6,0.5, 'Bhattacharyya')
plt.text(3.8,0.3, 'Similarité\n cosinus')
plt.show()
```
# The IPython widgets, now in IHaskell !!
It is highly recommended that users new to jupyter/ipython take the *User Interface Tour* from the toolbar above (Help -> User Interface Tour).
> This notebook introduces the [IPython widgets](https://github.com/ipython/ipywidgets), as implemented in [IHaskell](https://github.com/gibiansky/IHaskell). The `Button` widget is also demonstrated as a live action example.
### The Widget Hierarchy
These are all the widgets available from IPython/Jupyter.
#### Uncategorized Widgets
+ Button
+ Image*Widget*
+ Output*Widget*
#### Box Widgets
+ Box
+ FlexBox
+ Accordion
+ Tab*Widget*
#### Boolean Widgets
+ CheckBox
+ ToggleButton
#### Integer Widgets
+ IntText
+ BoundedIntText
+ IntProgress
+ IntSlider
+ IntRangeSlider
#### Float Widgets
+ FloatText
+ BoundedFloatText
+ FloatProgress
+ FloatSlider
+ FloatRangeSlider
#### Selection Widgets
+ Selection
+ Dropdown
+ RadioButtons
+ Select
+ SelectMultiple
+ ToggleButtons
#### String Widgets
+ HTML*Widget*
+ Latex*Widget*
+ TextArea
+ Text*Widget*
### Using Widgets
#### Necessary Extensions and Imports
All the widgets and related functions are available from a single module, `IHaskell.Display.Widgets`. It is strongly recommended that users use the `OverloadedStrings` extension, as widgets make extensive use of `Text`.
```
{-# LANGUAGE OverloadedStrings #-}
import IHaskell.Display.Widgets
```
The module can be imported unqualified. Widgets with common names, such as `Text`, `Image` etc. have a `-Widget` suffix to prevent name collisions.
#### Widget interface
Each widget has different properties, but the surface level API is the same.
Every widget has:
1. A constructor:
An `IO <widget>` value/function of the form `mk<widget_name>`.
2. A set of properties, which can be manipulated using `setField` and `getField`.
The `setField` and `getField` functions have nasty type signatures, but they can be used by just intuitively understanding them.
```
:t setField
```
The `setField` function takes three arguments:
1. A widget
2. A `Field`
3. A value for the `Field`
```
:t getField
```
The `getField` function takes a `Widget` and a `Field` and returns the value of that `Field` for the `Widget`.
Another utility function is `properties`, which shows all properties of a widget.
```
:t properties
```
#### Displaying Widgets
IHaskell automatically displays anything *displayable* given to it directly.
```
-- Showables
1 + 2
"abc"
```
Widgets can either be displayed this way, or explicitly using the `display` function from `IHaskell.Display`.
```
import IHaskell.Display
:t display
```
#### Multiple displays
A widget can be displayed multiple times. All these *views* are representations of a single object, and thus are linked.
When a widget is created, a model representing it is created in the frontend. This model is used by all the views, and any modification to it propagates to all of them.
#### Closing widgets
Widgets can be closed using the `closeWidget` function.
```
:t closeWidget
```
### Our first widget: `Button`
Let's play with buttons as a starting example:
As noted before, all widgets have a constructor of the form `mk<Widget>`. Thus, to create a `Button`, we use `mkButton`.
```
button <- mkButton -- Construct a Button
:t button
```
Widgets can be displayed by just entering them into a cell.
```
button -- Display the button
```
To view a widget's properties, we use the `properties` function. It also shows the type represented by the `Field`, which generally are not visible in type signatures due to high levels of type-hackery.
```
-- The button widget has many properties.
properties button
```
Let's try making the button widget wider.
```
import qualified IHaskell.Display.Widgets.Layout as L
btnLayout <- getField button Layout
setField btnLayout L.Width $ Just "100%"
```
There is a lot that can be customized. For example:
```
setField button Description "Click Me (._.\")"
setField button ButtonStyle SuccessButton
setField btnLayout L.Border $ Just "ridge 2px"
setField btnLayout L.Padding $ Just "10"
setField btnLayout L.Height $ Just "7em"
```
The button widget also provides a click handler. We can make it do anything, except console input. Universally, no widget event can trigger console input.
```
setField button ClickHandler $ putStrLn "fO_o"
button -- Displaying again for convenience
```
Now try clicking the button, and see the output.
> Note: If you display to stdout using Jupyter Lab, it will be displayed in a log entry, not as the cell output.
We can't do console input, but we can always use another widget! See the other example notebooks for more information.
```
setField button ClickHandler $ getLine >>= putStrLn
```
# VARLiNGAM
## Import and settings
In this example, we need to import `numpy`, `pandas`, and `graphviz` in addition to `lingam`.
```
import os
os.environ["PATH"] += os.pathsep + '/Users/elena/opt/anaconda3/lib/python3.7/site-packages/graphviz'
from sklearn.preprocessing import StandardScaler
import numpy as np
import pandas as pd
import graphviz
import lingam
from lingam.utils import make_dot, print_causal_directions, print_dagc
print([np.__version__, pd.__version__, graphviz.__version__, lingam.__version__])
np.set_printoptions(precision=3, suppress=True)
np.random.seed(8)
```
## Test data
We load and prepare data consisting of 5 variables.
```
from sklearn.preprocessing import StandardScaler
df = pd.read_csv('/Users/elena/Documents/Диплом/Data/russia.csv')
df['lnGDP'] = np.log(df['gdp'])
df['lnCO2'] = np.log(df['x5'])
df['lnEn'] = np.log(df['x11'])
df['dlnGDP']=df['lnGDP'].diff()
df['dlnCO2'] = df['lnCO2'].diff()
df['dlnEn'] = df['lnEn'].diff()
df['dlnTr'] = df['lntr'].diff()
X_raw = df[['temp', 'dlnGDP', 'dlnCO2', 'dlnEn', 'dlnTr']]
X_raw
standard_scaler = StandardScaler(with_std=False)
X = standard_scaler.fit_transform(X_raw)
#X = np.array(df[['temp', 'lnGDP', 'lnCO2', 'lnEn']])
# B0 = [
# [0,-0.12,0,0,0],
# [0,0,0,0,0],
# [-0.41,0.01,0,-0.02,0],
# [0.04,-0.22,0,0,0],
# [0.15,0,-0.03,0,0],
# ]
# B1 = [
# [-0.32,0,0.12,0.32,0],
# [0,-0.35,-0.1,-0.46,0.4],
# [0,0,0.37,0,0.46],
# [-0.38,-0.1,-0.24,0,-0.13],
# [0,0,0,0,0],
# ]
causal_order = [3, 2, 1, 0, 0]
# data generated from B0 and B1
#X = pd.read_csv('data/sample_data_var_lingam.csv')
```
## Causal Discovery
To run causal discovery, we create a `VARLiNGAM` object and call the `fit` method.
```
model = lingam.VARLiNGAM()
model.fit(X[1:])
```
Using the `causal_order_` properties, we can see the causal ordering as a result of the causal discovery.
```
model.causal_order_
```
Also, using the `adjacency_matrices_` properties, we can see the adjacency matrix as a result of the causal discovery.
```
model.adjacency_matrices_
# B0
model.adjacency_matrices_[0]
# B1
model.adjacency_matrices_[1]
```
We can draw a causal graph using the `make_dot` utility function.
```
labels = ['temp(t)', 'lnGDP(t)', 'lnCO2(t)', 'lnEn(t)', 'lnTr(t)']
make_dot(np.hstack(model.adjacency_matrices_), ignore_shape=True, lower_limit=0.05, labels=labels)
```
## Bootstrap
### Bootstrapping
We call `bootstrap()` method instead of `fit()`. Here, the second argument specifies the number of bootstrap sampling.
```
model = lingam.VARLiNGAM()
result = model.bootstrap(X[1:], 100)
labels = ['temp(t)', 'dlnGDP(t)', 'dlnCO2(t)', 'dlnEn(t)', 'dlnTr(t)']
```
Since a `BootstrapResult` object is returned, we can get the ranking of the extracted causal directions with the `get_causal_direction_counts()` method. In the following sample code, the `n_directions` option limits the output to the top 20 causal directions, and the `min_causal_effect` option keeps only causal directions with a coefficient of 0.3 or more.
```
cdc = result.get_causal_direction_counts(n_directions=20, min_causal_effect=0.3, split_by_causal_effect_sign=True)
cdc
```
We can check the result with a utility function.
```
print_causal_directions(cdc, 100, labels=labels)
```
Also, using the `get_directed_acyclic_graph_counts()` method, we can get the ranking of the extracted DAGs. In the following sample code, the `n_dags` option limits the output to the top 3 DAGs, and the `min_causal_effect` option keeps only causal directions with a coefficient of 0.2 or more.
```
dagc = result.get_directed_acyclic_graph_counts(n_dags=3, min_causal_effect=0.2, split_by_causal_effect_sign=True)
```
We can check the result with a utility function.
```
print_dagc(dagc, 100, labels=labels)
```
Using the `get_probabilities()` method, we can get the probability of bootstrapping.
```
prob = result.get_probabilities(min_causal_effect=0.1)
print('Probability of B0:\n', prob[0])
print('Probability of B1:\n', prob[1])
```
# Contaminate DNS Data
```
"""
Make dataset pipeline
"""
import pandas as pd
import numpy as np
import os
from collections import Counter
import math
import torch
from torch.utils.data import DataLoader
from torch.nn.utils.rnn import pad_sequence
from dga.models.dga_classifier import DGAClassifier
from dga.datasets.domain_dataset import DomainDataset
!pip install tldextract
import tldextract
df = pd.read_csv("../data/raw/dns.csv")
a_aaaa_df = df.loc[(df.qtype_name == 'A') | (df.qtype_name == 'AAAA')]
# Take subset by nxdomain response
nxdomain_df = a_aaaa_df.loc[(df['rcode_name'] == 'NXDOMAIN')]
# Drop subset from full records
a_aaaa_df = a_aaaa_df[a_aaaa_df['rcode_name'] != 'NXDOMAIN']
# Load known DGAs
mal_df = pd.read_csv("../data/processed/validation.csv")
mal_df = mal_df.loc[mal_df['label'] == 1]
# Inject dga domains randomly
nxdomain_df['query'] = np.random.choice(list(mal_df['domain'].values), len(nxdomain_df))
# Put dataset back together
a_aaaa_df = pd.concat([a_aaaa_df, nxdomain_df])
# a_aaaa_df['domain_name'] = a_aaaa_df['query'].str.replace('www.', '')
a_aaaa_df.drop(['QR', 'AA', 'TC', 'RD', 'Z', 'answers'], axis=1, inplace=True)
a_aaaa_df.sort_values(by=['ts'])
# a_aaaa_df['domain_name'].unique()
a_aaaa_df = a_aaaa_df.reset_index(drop=True)
def extract_domain(url):
return tldextract.extract(url).domain
a_aaaa_df['domain'] = a_aaaa_df['query'].apply(extract_domain)
def extract_tld(url):
return tldextract.extract(url).suffix
a_aaaa_df['tld'] = a_aaaa_df['query'].apply(extract_tld)
a_aaaa_df['domain_name'] = a_aaaa_df['domain'] + '.' + a_aaaa_df['tld']
a_aaaa_df.head()
model_dir = '../models/'
model_info = {}
model_info_path = os.path.join(model_dir, '1595825381_dga_model_info.pth')
with open(model_info_path, 'rb') as f:
model_info = torch.load(f)
print("model_info: {}".format(model_info))
# Determine the device and construct the model.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = DGAClassifier(input_features=model_info['input_features'],
hidden_dim=model_info['hidden_dim'],
n_layers=model_info['n_layers'],
output_dim=model_info['output_dim'],
embedding_dim=model_info['embedding_dim'],
batch_size=model_info['batch_size'])
# Load the stored model parameters.
model_path = os.path.join(model_dir, '1595825381_dga_model.pth')
with open(model_path, 'rb') as f:
model.load_state_dict(torch.load(f))
# set to eval mode, could use no_grad
model.to(device).eval()
def entropy(s):
p, lns = Counter(s), float(len(s))
return -sum( count/lns * math.log(count/lns, 2) for count in p.values())
def pad_collate_pred(batch):
x_lens = [len(x) for x in batch]
xx_pad = pad_sequence(batch, batch_first=True, padding_value=0)
return xx_pad, x_lens
def get_predict_loader(batch_size, df):
print("Getting test and train data loaders.")
dataset = DomainDataset(df, train=False)
predict_dl = DataLoader(dataset, batch_size=batch_size, shuffle=False, collate_fn=pad_collate_pred)
return predict_dl
def get_prediction(df):
predict_dl = get_predict_loader(1000, df)
classes = {0: 'Benign', 1: 'DGA'}
model.eval()
predictions = []
with torch.no_grad():
for batch_num, (x_padded, x_lens) in enumerate(predict_dl):
output = model(x_padded, x_lens)
y_hat = torch.round(output.data)
predictions += [classes[int(key)] for key in y_hat.flatten().numpy()]
return predictions
a_aaaa_df = a_aaaa_df[~a_aaaa_df['domain_name'].str.contains('\(')].reset_index(drop=True)
a_aaaa_df = a_aaaa_df[~a_aaaa_df['domain_name'].str.contains(',')].reset_index(drop=True)
a_aaaa_df[['domain_name']]
a_aaaa_df['dga'] = get_prediction(a_aaaa_df[['domain_name']])
a_aaaa_df['entropy'] = a_aaaa_df['domain_name'].apply(entropy)
print(a_aaaa_df.shape)
a_aaaa_df.head(25)
a_aaaa_df.to_csv('../data/processed/demo_dns_logs.csv', index=False)
```
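The `entropy` helper defined in the pipeline above computes the Shannon entropy of a domain string's character distribution, a common DGA feature since algorithmically generated names tend to look more random than human-chosen ones. A standalone copy of that helper with a quick sanity check:

```python
import math
from collections import Counter

def entropy(s):
    # Shannon entropy (in bits) of the character distribution of s
    p, lns = Counter(s), float(len(s))
    return -sum(count / lns * math.log(count / lns, 2) for count in p.values())

print(entropy("aaaa"))  # a single repeated symbol carries zero entropy
print(entropy("ab"))    # two equally likely symbols -> 1 bit
```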
# Eliminating Outliers
Eliminating outliers is a big topic, and there are many different ways to do it. A data engineer's job isn't necessarily to decide what counts as an outlier and what does not; a data scientist would determine that. The data engineer would code the algorithms that eliminate outliers from a data set based on whatever criteria the data scientist has decided on.
In this exercise, you'll write code to eliminate outliers based on the Tukey rule.
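As a refresher, the Tukey rule computes the first and third quartiles, Q1 and Q3, and the interquartile range IQR = Q3 - Q1; any value outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR] is flagged as an outlier. Below is a minimal standard-library sketch of the idea (`tukey_filter` is an invented name, and quartile conventions vary slightly between libraries, so numpy/pandas may give marginally different fences); the exercise itself asks you to write a dataframe version.

```python
import statistics

def tukey_filter(values):
    # statistics.quantiles with n=4 returns the three quartile cut points.
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [v for v in values if lo <= v <= hi]

print(tukey_filter([1, 2, 3, 4, 5, 100]))  # -> [1, 2, 3, 4, 5]
```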
Run the code cell below to read in the data and visualize the data.
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# read in the projects data set and do basic wrangling
gdp = pd.read_csv('../data/gdp_data.csv', skiprows=4)
gdp.drop(['Unnamed: 62', 'Country Code', 'Indicator Name', 'Indicator Code'], inplace=True, axis=1)
population = pd.read_csv('../data/population_data.csv', skiprows=4)
population.drop(['Unnamed: 62', 'Country Code', 'Indicator Name', 'Indicator Code'], inplace=True, axis=1)
# Reshape the data sets so that they are in long format
gdp_melt = gdp.melt(id_vars=['Country Name'],
var_name='year',
value_name='gdp')
# Use back fill and forward fill to fill in missing gdp values
gdp_melt['gdp'] = gdp_melt.sort_values('year').groupby('Country Name')['gdp'].fillna(method='ffill').fillna(method='bfill')
population_melt = population.melt(id_vars=['Country Name'],
var_name='year',
value_name='population')
# Use back fill and forward fill to fill in missing population values
population_melt['population'] = population_melt.sort_values('year').groupby('Country Name')['population'].fillna(method='ffill').fillna(method='bfill')
# merge the population and gdp data together into one data frame
df_country = gdp_melt.merge(population_melt, on=('Country Name', 'year'))
# filter data for the year 2016
df_2016 = df_country[df_country['year'] == '2016']
# filter out values that are not countries
non_countries = ['World',
'High income',
'OECD members',
'Post-demographic dividend',
'IDA & IBRD total',
'Low & middle income',
'Middle income',
'IBRD only',
'East Asia & Pacific',
'Europe & Central Asia',
'North America',
'Upper middle income',
'Late-demographic dividend',
'European Union',
'East Asia & Pacific (excluding high income)',
'East Asia & Pacific (IDA & IBRD countries)',
'Euro area',
'Early-demographic dividend',
'Lower middle income',
'Latin America & Caribbean',
'Latin America & the Caribbean (IDA & IBRD countries)',
'Latin America & Caribbean (excluding high income)',
'Europe & Central Asia (IDA & IBRD countries)',
'Middle East & North Africa',
'Europe & Central Asia (excluding high income)',
'South Asia (IDA & IBRD)',
'South Asia',
'Arab World',
'IDA total',
'Sub-Saharan Africa',
'Sub-Saharan Africa (IDA & IBRD countries)',
'Sub-Saharan Africa (excluding high income)',
'Middle East & North Africa (excluding high income)',
'Middle East & North Africa (IDA & IBRD countries)',
'Central Europe and the Baltics',
'Pre-demographic dividend',
'IDA only',
'Least developed countries: UN classification',
'IDA blend',
'Fragile and conflict affected situations',
'Heavily indebted poor countries (HIPC)',
'Low income',
'Small states',
'Other small states',
'Not classified',
'Caribbean small states',
'Pacific island small states']
# remove non countries from the data
df_2016 = df_2016[~df_2016['Country Name'].isin(non_countries)]
# plot the data
x = list(df_2016['population'])
y = list(df_2016['gdp'])
text = df_2016['Country Name']
fig, ax = plt.subplots(figsize=(15,10))
ax.scatter(x, y)
plt.title('GDP vs Population')
plt.xlabel('Population')
plt.ylabel('GDP')
for i, txt in enumerate(text):
ax.annotate(txt, (x[i],y[i]))
```
# Exercise
Write a function that uses the Tukey rule to eliminate outliers from an array of data.
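As a refresher, the Tukey rule computes the first and third quartiles (Q1, Q3) and the interquartile range IQR = Q3 - Q1, then flags any value outside [Q1 - 1.5 * IQR, Q3 + 1.5 * IQR] as an outlier. A minimal sketch on a made-up toy series (values chosen purely for illustration):

```python
import pandas as pd

# toy series with one obvious outlier (made-up values)
data = pd.Series([1, 2, 3, 4, 5, 100])

Q1 = data.quantile(0.25)   # first quartile
Q3 = data.quantile(0.75)   # third quartile
IQR = Q3 - Q1              # interquartile range

lower_fence = Q1 - 1.5 * IQR
upper_fence = Q3 + 1.5 * IQR

# keep only values inside the Tukey fences
inliers = data[(data > lower_fence) & (data < upper_fence)]
print(inliers.tolist())  # → [1, 2, 3, 4, 5]
```

Here the fences work out to [-1.5, 8.5], so 100 is dropped while the rest of the values survive.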
```
# TODO: Write a function that uses the Tukey rule to detect outliers in a dataframe column
# and then removes that entire row from the data frame. For example, if the United States
# is detected to be a GDP outlier, then remove the entire row of United States data.
# The function inputs should be a data frame and a column name.
# The output is a data_frame with the outliers eliminated
# HINT: Re-use code from the previous exercise
def tukey_rule(data_frame, column_name):
data = data_frame[column_name]
Q1 = data.quantile(0.25)
Q3 = data.quantile(0.75)
IQR = Q3 - Q1
max_value = Q3 + 1.5 * IQR
min_value = Q1 - 1.5 * IQR
return data_frame[(data_frame[column_name] < max_value) & (data_frame[column_name] > min_value)]
```
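A quick sanity check of the function above on a hypothetical toy data frame (the country names and gdp values are made up; 'E' is the planted outlier):

```python
import pandas as pd

def tukey_rule(data_frame, column_name):
    # same logic as above: keep rows whose column value lies
    # strictly inside [Q1 - 1.5 * IQR, Q3 + 1.5 * IQR]
    data = data_frame[column_name]
    Q1 = data.quantile(0.25)
    Q3 = data.quantile(0.75)
    IQR = Q3 - Q1
    max_value = Q3 + 1.5 * IQR
    min_value = Q1 - 1.5 * IQR
    return data_frame[(data_frame[column_name] < max_value) & (data_frame[column_name] > min_value)]

toy = pd.DataFrame({'Country Name': ['A', 'B', 'C', 'D', 'E'],
                    'gdp': [10, 12, 11, 13, 1000]})
filtered = tukey_rule(toy, 'gdp')
print(filtered['Country Name'].tolist())  # → ['A', 'B', 'C', 'D']
```

Note that the entire row for 'E' is removed, not just the offending gdp value, which is exactly the behavior the exercise asks for.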
Now use the function to eliminate population outliers and then gdp outliers from the dataframe. Store results in the df_outlier_removed variable.
```
# TODO: Use the tukey_rule() function to make a new data frame with gdp and population outliers removed
# Put the results in the df_outlier_removed variable
df_outlier_removed = df_2016.copy()
for column in ['population', 'gdp']:
df_outlier_removed = tukey_rule(df_outlier_removed, column)
```
Run the code cell below to plot the results.
```
# plot the data
x = list(df_outlier_removed['population'])
y = list(df_outlier_removed['gdp'])
text = df_outlier_removed['Country Name']
fig, ax = plt.subplots(figsize=(15,10))
ax.scatter(x, y)
plt.title('GDP vs Population')
plt.xlabel('Population')
plt.ylabel('GDP')
for i, txt in enumerate(text):
ax.annotate(txt, (x[i],y[i]))
```