# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Problem 5
# In this exercise we want to see how the absolute magnitude in the r band behaves as a function of redshift for the galaxies in the sample.
#
# The absolute magnitude of each galaxy can be computed with the approximation:
#
# $$M = m - 25 - 5\log_{10}\left(\frac{c\,z}{H}\right) \qquad \text{Eq. (1)}$$
#
# where m is the apparent magnitude, c = 300000 km/s is the speed of light, and H = 75 (km/s)/Mpc is the Hubble constant. The exercise asks us to keep only values with $m_r < 17.5$, so here m = r and M = $M_r$.
# We start by importing the required columns from the data table: the apparent magnitude r and the redshift of each galaxy.
from math import log10
import numpy as np
import matplotlib.pyplot as plt
#import random
import seaborn as sns
sns.set()
# +
# petroMag_r (column 5)
r = np.genfromtxt('Tabla2_g3.csv', delimiter=',', usecols=5)
# redshift (column 6)
z = np.genfromtxt('Tabla2_g3.csv', delimiter=',', usecols=6)
# +
# compute the absolute magnitude of each galaxy
c = 300000  # km/s
H = 75      # (km/s)/Mpc
z2 = []
Mr = []
for i in range(len(z)):
    # skip the -9999 sentinel values and apply the cut m_r < 17.5
    if (r[i] != -9999) and (r[i] < 17.5):
        M = r[i] - 25 - 5 * log10((c * z[i]) / H)
        z2.append(z[i])
        Mr.append(M)
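The selection loop above can also be written in vectorized NumPy form. A minimal sketch (the helper name `absolute_magnitude` and its defaults mirror the notebook's constants but are my own, not part of the exercise):

```python
import numpy as np

def absolute_magnitude(r, z, c=300000.0, H=75.0, mag_limit=17.5, sentinel=-9999):
    """Vectorized version of the selection loop: mask out sentinel values
    and apply the apparent-magnitude cut, then compute M in one shot."""
    r = np.asarray(r, dtype=float)
    z = np.asarray(z, dtype=float)
    mask = (r != sentinel) & (r < mag_limit)
    M = r[mask] - 25.0 - 5.0 * np.log10(c * z[mask] / H)
    return z[mask], M
```

Boolean masks avoid the explicit Python loop and keep the accepted redshifts and magnitudes aligned by construction.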
# +
# plot to see the shape
plt.figure(figsize=(11,6))
plt.scatter(z2, Mr, s=20, color='orchid')
plt.ylim(-24,-16)
plt.title('Absolute magnitude vs. redshift')
plt.xlabel('z')
plt.ylabel('$M_r$')
plt.show()
# -
# We want to fit the edge points, the ones that envelope the rest of the sample. A short program identifies them.
#
# We start by splitting the redshift range into n bins and, for each bin, computing the maximum absolute magnitude in that interval. That maximum $M_r$ is then plotted against the left edge of the bin.
n = 50
bin_z = np.linspace(min(z2), max(z2), n)
Mr_max = []  # maximum Mr of each bin
z3 = []
for i in range(len(bin_z) - 1):  # stop at n-1 so that bin_z[i+1] stays in range
    lista_i = []  # values falling in bin i
    for j in range(len(z2)):
        if (bin_z[i] <= z2[j]) and (z2[j] < bin_z[i + 1]):  # point falls inside the bin
            lista_i.append(Mr[j])
    if len(lista_i) != 0:  # skip empty bins
        x = max(lista_i)  # maximum of the bin
        Mr_max.append(x)
        z3.append(bin_z[i])
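The per-bin maximum can also be computed with `np.digitize` instead of the double loop. A sketch with synthetic inputs (the helper `binned_max` is hypothetical, not part of the notebook):

```python
import numpy as np

def binned_max(z, Mr, n=50):
    """Per-bin maximum of Mr over n redshift bin edges, equivalent in
    spirit to the double loop above but using np.digitize."""
    z = np.asarray(z, dtype=float)
    Mr = np.asarray(Mr, dtype=float)
    edges = np.linspace(z.min(), z.max(), n)
    idx = np.digitize(z, edges) - 1          # bin index of each point
    z_left, mr_max = [], []
    for i in range(len(edges) - 1):
        in_bin = Mr[idx == i]
        if in_bin.size:                      # skip empty bins
            z_left.append(edges[i])
            mr_max.append(in_bin.max())
    return np.array(z_left), np.array(mr_max)
```

This replaces the inner scan over all points with a single vectorized bin assignment.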
# +
# plot the points to be fitted
plt.figure(figsize=(11,6))
plt.scatter(z2, Mr, s=20, color='orchid', label='Sample')
plt.scatter(z3, Mr_max, marker='x', color='black', label='Points to fit')
plt.ylim(-24,-16)
plt.xlabel('z')
plt.ylabel('$M_r$')
plt.legend(loc='best')
plt.show()
# -
#
#
# Note that the points to fit depend on the number of redshift bins n: with too few points there may not be enough to perform the fit, while with too many the points become noisy. We take n = 50.
# To fit the envelope of points we propose Eq. (1), with the apparent magnitude m as the parameter to fit.
# fit function
def func(m):
    lista = []
    for i in range(len(z3)):
        lista.append(m - 25 - 5 * log10((c * z3[i]) / H))
    return lista
# We try several values of m to see which curve fits best:
# +
plt.figure(figsize=(11,6))
plt.scatter(z2, Mr, s=20, color='orchid', label='Sample')
plt.scatter(z3, Mr_max, marker='x', color='black', label='Points to fit')
plt.plot(z3, func(m=17.2), label='m=17.2')
plt.plot(z3, func(m=17.3), label='m=17.3')
plt.plot(z3, func(m=17.4), label='m=17.4')
plt.plot(z3, func(m=17.5), label='m=17.5')
plt.plot(z3, func(m=17.6), label='m=17.6')
plt.ylim(-24,-16)
plt.xlabel('z')
plt.ylabel('$M_r$')
plt.legend(loc='best')
plt.show()
# -
# Varying m shifts the fit curve up and down along the $M_r$ axis. To find the best fit, for each m we compute a $\chi^2$-like statistic between func(m) and Mr_max over the points in z3.
# chi-squared-like statistic
def chi2(x):
    chi = 0
    for i in range(len(z3)):
        chi = chi + (x[i] - Mr_max[i]) ** 2 / Mr_max[i]
    return chi
# +
# chi2 values for the plotted curves
print('chi2 for m=17.2:', chi2(func(m=17.2)))
print('chi2 for m=17.3:', chi2(func(m=17.3)))
print('chi2 for m=17.4:', chi2(func(m=17.4)))
print('chi2 for m=17.5:', chi2(func(m=17.5)))
print('chi2 for m=17.6:', chi2(func(m=17.6)))
# -
# For m = 17.3 the absolute value of $\chi^2$ is the smallest, which suggests it is the m for which the fit curve deviates least from the points being fitted.
#
# The expected best value was m = 17.5. One reason this may not happen is that many of the points being fitted are clearly scattered, with several lying below the envelope.
#
# A larger sample could also be tried to fill in the envelope better, since empty gaps are visible.
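The by-hand comparison of chi2 values can be automated by scanning a grid of candidate m values and taking the minimum. A sketch using a plain sum of squared residuals (`best_m` is a hypothetical helper, not the notebook's statistic):

```python
import numpy as np

def best_m(z, Mr_max, m_grid, c=300000.0, H=75.0):
    """Scan candidate apparent magnitudes and return the one whose
    envelope curve minimizes the sum of squared residuals."""
    z = np.asarray(z, dtype=float)
    Mr_max = np.asarray(Mr_max, dtype=float)
    dist_term = 25.0 + 5.0 * np.log10(c * z / H)
    # residuals for every candidate m at once (broadcasting)
    resid = (np.asarray(m_grid)[:, None] - dist_term[None, :]) - Mr_max[None, :]
    scores = (resid ** 2).sum(axis=1)
    return m_grid[int(np.argmin(scores))]
```

With a fine grid this removes the need to eyeball a handful of curves.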
# ## Conclusion:
#
# This exercise studies the relation between the absolute magnitude $M_r$ and the redshift through the approximation expressed in Eq. (1).
# When the pairs of points are plotted, they all lie below a curve of the same form as Eq. (1), whose best-fit parameter found is m = 17.3.
# File: g3-p5-Ajuste-MrvsZ.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import pickle
def sample_data_3():
    count = 100000
    rand = np.random.RandomState(0)
    a = [[-1.5, 2.5]] + rand.randn(count // 3, 2) * 0.2
    b = [[1.5, 2.5]] + rand.randn(count // 3, 2) * 0.2
    c = np.c_[2 * np.cos(np.linspace(0, np.pi, count // 3)),
              -np.sin(np.linspace(0, np.pi, count // 3))]
    c += rand.randn(*c.shape) * 0.2
    data_x = np.concatenate([a, b, c], axis=0)
    data_y = np.array([0] * len(a) + [1] * len(b) + [2] * len(c))
    perm = rand.permutation(len(data_x))
    return data_x[perm].astype(np.float32), data_y[perm]
x, y = sample_data_3()
plt.figure()
plt.scatter(x[:,0], x[:,1], c=y)
plt.axis((-5,5,-5,5))
def get_warmup_temp(epoch, nrof_warmup_epochs):
    # Linear KL warm-up: ramp the temperature from 0 to 1 over the warm-up epochs
    if nrof_warmup_epochs > 0:
        temp = np.minimum(1.0, 1.0 / nrof_warmup_epochs * epoch)
    else:
        temp = 1.0
    return temp
# +
def softlimit(x, limit=0.1):
    return tf.math.log(tf.math.exp(x) + 1.0 + limit)

def dense(x, nrof_units, activation=None, training=True, use_batch_norm=False):
    x = tf.compat.v1.layers.Flatten()(x)
    x = tf.compat.v1.layers.Dense(units=nrof_units)(x)
    if use_batch_norm:
        x = tf.compat.v1.layers.BatchNormalization()(x, training=training)
    x = x if activation is None else activation(x)
    return x

def mlp(x, nrof_units, activation, nrof_layers=1, training=True):
    for _ in range(nrof_layers):
        x = dense(x, nrof_units=nrof_units, activation=activation, training=training)
    return x

def sample(mu, sigma, nrof_iw_samples=1):
    shp = tf.shape(mu)
    batch_size = shp[1]
    nrof_stochastic_units = shp[2]
    epsilon = tf.random.normal((nrof_iw_samples, batch_size, nrof_stochastic_units), mean=0.0, stddev=1.0)
    return mu + sigma * epsilon

def log_normal(x, mean, log_var, eps=1e-5):
    with tf.compat.v1.variable_scope('log_normal'):
        c = -0.5 * np.log(2 * np.pi)
        return c - log_var / 2 - (x - mean)**2 / (2 * tf.math.exp(log_var) + eps)
# -
def create_dataset(x, batch_size):
    dataset = tf.data.Dataset.from_tensor_slices(x)
    dataset = dataset.repeat()  # Repeat the dataset indefinitely
    dataset = dataset.shuffle(10000)  # Shuffle the data
    dataset = dataset.batch(batch_size)  # Create batches of data
    dataset = dataset.prefetch(batch_size)  # Prefetch data for faster consumption
    iterator = tf.compat.v1.data.make_initializable_iterator(dataset)  # Create an iterator over the dataset
    return iterator
def vae(x, temp, nrof_mlp_units, nrof_stochastic_units, nrof_layers, nrof_iw_samples, is_training):
    dbg = dict()
    batch_size = tf.shape(x)[0]
    dbg['x'] = x
    h = x
    with tf.compat.v1.variable_scope('encoder'):
        h = mlp(h, nrof_mlp_units, activation=tf.nn.leaky_relu, nrof_layers=nrof_layers, training=is_training)
        q_mu = dense(h, nrof_stochastic_units, training=is_training)
        q_sigma = dense(h, nrof_stochastic_units, activation=softlimit, training=is_training)
        q_mu = tf.expand_dims(q_mu, axis=0)
        q_sigma = tf.expand_dims(q_sigma, axis=0)
        dbg['q_mu'] = q_mu
        dbg['q_sigma'] = q_sigma
    with tf.compat.v1.variable_scope('decoder'):
        z = tf.reshape(sample(q_mu, q_sigma, nrof_iw_samples), (-1, nrof_stochastic_units))
        dbg['z'] = z
        h = mlp(z, nrof_mlp_units, activation=tf.nn.leaky_relu, nrof_layers=nrof_layers, training=is_training)
        x_rec_mu = tf.reshape(dense(h, nrof_stochastic_units, training=is_training), (nrof_iw_samples, batch_size, nrof_stochastic_units))
        x_rec_sigma = tf.reshape(dense(h, nrof_stochastic_units, activation=softlimit, training=is_training), (nrof_iw_samples, batch_size, nrof_stochastic_units))
        dbg['x_rec_mu'] = x_rec_mu
        dbg['x_rec_sigma'] = x_rec_sigma
        dbg['x_sample'] = sample(x_rec_mu, x_rec_sigma)
    with tf.compat.v1.variable_scope('rec_loss'):
        log_pxz = log_normal(x, x_rec_mu, tf.math.log(x_rec_sigma) * 2)
        dbg['log_pxz'] = tf.reduce_mean(log_pxz) * np.log2(np.e)
    with tf.compat.v1.variable_scope('reg_loss'):
        p_mu, p_sigma = tf.zeros_like(q_mu), tf.ones_like(q_sigma)
        zx = tf.reshape(z, (nrof_iw_samples, batch_size, nrof_stochastic_units))
        log_qz = log_normal(zx, q_mu, tf.math.log(q_sigma) * 2)
        log_pz = log_normal(zx, p_mu, tf.math.log(p_sigma) * 2)
        dbg['log_pzx'] = tf.reduce_mean(log_pz - log_qz) * np.log2(np.e)
    with tf.compat.v1.variable_scope('elbo'):
        # Importance-weighted ELBO with the max-shift (log-sum-exp) trick for stability
        a = tf.reduce_mean(log_pxz + temp * (log_pz - log_qz), axis=2)
        a_max = tf.reduce_max(a, axis=0, keepdims=True)
        elbo = (tf.reduce_mean(a_max) + tf.reduce_mean(tf.math.log(tf.reduce_mean(tf.exp(a - a_max), axis=0)))) * np.log2(np.e)
        dbg['elbo'] = elbo
    loss = -elbo
    return loss, dbg
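The `elbo` block above applies the max-shift (log-sum-exp) trick so that averaging importance weights does not underflow when the log-weights are very negative. The same idea in plain NumPy, as a standalone sketch:

```python
import numpy as np

def log_mean_exp(a, axis=0):
    """Numerically stable log(mean(exp(a))): subtract the max before
    exponentiating so large negative log-weights do not underflow."""
    a = np.asarray(a, dtype=float)
    a_max = a.max(axis=axis, keepdims=True)
    out = a_max + np.log(np.mean(np.exp(a - a_max), axis=axis, keepdims=True))
    return np.squeeze(out, axis=axis)
```

A naive `np.log(np.mean(np.exp(a)))` returns `-inf` for log-weights around -1000, while the shifted form stays finite.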
# +
nrof_epochs = 20
batch_size = 100
nrof_stochastic_units = 2
nrof_mlp_units = 128
nrof_layers = 3
nrof_warmup_epochs = 5
nrof_iw_samples_train = 1
x_train = x[:70000, :]
y_train = y[:70000]
x_val = x[70000:80000, :]
y_val = y[70000:80000]
x_test = x[80000:, :]
y_test = y[80000:]
tf.compat.v1.set_random_seed(42)
np.random.seed(42)
tf.compat.v1.reset_default_graph()
with tf.Graph().as_default():
    train_iterator = create_dataset(x_train, batch_size)
    eval_input_ph = tf.compat.v1.placeholder(tf.float32, shape=(None, 2))
    temp_ph = tf.compat.v1.placeholder(tf.float32, shape=())
    with tf.compat.v1.variable_scope('model', reuse=False):
        train_loss, train_dbg = vae(train_iterator.get_next(), temp_ph, nrof_mlp_units=nrof_mlp_units,
                                    nrof_stochastic_units=nrof_stochastic_units, nrof_layers=nrof_layers,
                                    nrof_iw_samples=nrof_iw_samples_train, is_training=True)
    with tf.compat.v1.variable_scope('model', reuse=True):
        eval1_loss, eval1_dbg = vae(eval_input_ph, temp_ph, nrof_mlp_units=nrof_mlp_units,
                                    nrof_stochastic_units=nrof_stochastic_units, nrof_layers=nrof_layers,
                                    nrof_iw_samples=1, is_training=False)
    with tf.compat.v1.variable_scope('model', reuse=True):
        eval100_loss, eval100_dbg = vae(eval_input_ph, temp_ph, nrof_mlp_units=nrof_mlp_units,
                                        nrof_stochastic_units=nrof_stochastic_units, nrof_layers=nrof_layers,
                                        nrof_iw_samples=100, is_training=False)
    optimizer = tf.compat.v1.train.AdamOptimizer(learning_rate=0.0001)
    train_op = optimizer.minimize(train_loss)
    sess = tf.compat.v1.InteractiveSession()
    sess.run(tf.compat.v1.global_variables_initializer())
    sess.run(train_iterator.initializer)
    nrof_batches = x_train.shape[0] // batch_size
    train_elbo_list, train_log_pxz_list, train_log_pzx_list = [], [], []
    val1_elbo_list, val1_log_pxz_list, val1_log_pzx_list = [], [], []
    val100_elbo_list, val100_log_pxz_list, val100_log_pzx_list = [], [], []
    for epoch in range(1, nrof_epochs + 1):
        temp = get_warmup_temp(epoch, nrof_warmup_epochs)
        for i in range(nrof_batches):
            _, loss_, dbg_ = sess.run([train_op, train_loss, train_dbg], feed_dict={temp_ph: temp})
            train_elbo_list += [dbg_['elbo']]
            train_log_pxz_list += [dbg_['log_pxz']]
            train_log_pzx_list += [dbg_['log_pzx']]
        print('train epoch: %4d temp: %7.3f loss: %7.3f p(x|z): %7.3f p(z|x): %7.3f' % (
            epoch, temp, loss_, dbg_['log_pxz'], dbg_['log_pzx']))
        loss_, dbg_ = sess.run([eval1_loss, eval1_dbg], feed_dict={eval_input_ph: x_val, temp_ph: 1.0})
        val1_elbo_list += [dbg_['elbo']]
        val1_log_pxz_list += [dbg_['log_pxz']]
        val1_log_pzx_list += [dbg_['log_pzx']]
        print('val epoch: %4d loss: %7.3f p(x|z): %7.3f p(z|x): %7.3f' % (
            epoch, loss_, dbg_['log_pxz'], dbg_['log_pzx']))
        loss_, dbg_ = sess.run([eval100_loss, eval100_dbg], feed_dict={eval_input_ph: x_val, temp_ph: 1.0})
        val100_elbo_list += [dbg_['elbo']]
        val100_log_pxz_list += [dbg_['log_pxz']]
        val100_log_pzx_list += [dbg_['log_pzx']]
        print('val epoch: %4d loss: %7.3f p(x|z): %7.3f p(z|x): %7.3f' % (
            epoch, loss_, dbg_['log_pxz'], dbg_['log_pzx']))
    loss_, dbg_ = sess.run([eval1_loss, eval1_dbg], feed_dict={eval_input_ph: x_test[:10000, :], temp_ph: 1.0})
    print('test epoch: %4d loss: %7.3f p(x|z): %7.3f p(z|x): %7.3f' % (
        epoch, loss_, dbg_['log_pxz'], dbg_['log_pzx']))
    loss_, dbg_ = sess.run([eval100_loss, eval100_dbg], feed_dict={eval_input_ph: x_test[:10000, :], temp_ph: 1.0})
    print('test epoch: %4d loss: %7.3f p(x|z): %7.3f p(z|x): %7.3f' % (
        epoch, loss_, dbg_['log_pxz'], dbg_['log_pzx']))
# -
plt.plot(train_elbo_list)
plt.plot(train_log_pxz_list)
plt.plot(train_log_pzx_list)
plt.ylim((-4.0, 2.0))
plt.title('Training loss [bits/dim]')
plt.xlabel('Training step')
plt.legend(['ELBO', 'p(x|z)', 'p(z|x)'])
_ = plt.ylabel('Negative Log Likelihood')
plt.plot(train_elbo_list)
val_x = np.arange(0,len(val1_elbo_list)*nrof_batches,nrof_batches)
plt.plot(val_x, val1_elbo_list)
plt.plot(val_x, val100_elbo_list)
plt.ylim((-4.0,2.0))
plt.title('ELBO')
plt.legend(['Train ELBO, iw_samp=1', 'Val ELBO, iw_samp=1', 'Val ELBO, iw_samp=100'])
plt.xlabel('Training step')
_ = plt.ylabel('Negative Log Likelihood [bits/dim]')
dbg_ = sess.run(eval1_dbg, feed_dict={eval_input_ph:x_train, temp_ph:1.0})
plt.figure()
plt.scatter(dbg_['z'][:,0], dbg_['z'][:,1], c=y_train)
plt.axis((-5,5,-5,5))
# File: HW3/HW3_2.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python second brain env
# language: python
# name: brain-tumor-e2xz5xls
# ---
# https://keras.io/examples/structured_data/structured_data_classification_from_scratch/
#
# TODO: rename things and edit as needed // so this stops being just the example and can serve future work.
import tensorflow as tf
import numpy as np
import pandas as pd
from tensorflow import keras
from tensorflow.keras import layers
import pydot
file_url = "http://storage.googleapis.com/download.tensorflow.org/data/heart.csv"
dataframe = pd.read_csv(file_url)
dataframe.head()
val_dataframe = dataframe.sample(frac=0.2, random_state=1337)
train_dataframe = dataframe.drop(val_dataframe.index)
# +
def dataframe_to_dataset(dataframe):
    dataframe = dataframe.copy()
    labels = dataframe.pop("target")
    ds = tf.data.Dataset.from_tensor_slices((dict(dataframe), labels))
    ds = ds.shuffle(buffer_size=len(dataframe))
    return ds
train_ds = dataframe_to_dataset(train_dataframe)
val_ds = dataframe_to_dataset(val_dataframe)
# -
# for x, y in train_ds.take(1):
# print("Input:", x)
# print("Target:", y)
#
# TODO: understand this better
train_ds = train_ds.batch(32)
val_ds = val_ds.batch(32)
# +
from tensorflow.keras.layers.experimental.preprocessing import Normalization
from tensorflow.keras.layers.experimental.preprocessing import CategoryEncoding
from tensorflow.keras.layers.experimental.preprocessing import StringLookup
def encode_numerical_feature(feature, name, dataset):
    # Create a Normalization layer for our feature
    normalizer = Normalization()
    # Prepare a Dataset that only yields our feature
    feature_ds = dataset.map(lambda x, y: x[name])
    feature_ds = feature_ds.map(lambda x: tf.expand_dims(x, -1))
    # Learn the statistics of the data
    normalizer.adapt(feature_ds)
    # Normalize the input feature
    encoded_feature = normalizer(feature)
    return encoded_feature

def encode_string_categorical_feature(feature, name, dataset):
    # Create a StringLookup layer which will turn strings into integer indices
    index = StringLookup()
    # Prepare a Dataset that only yields our feature
    feature_ds = dataset.map(lambda x, y: x[name])
    feature_ds = feature_ds.map(lambda x: tf.expand_dims(x, -1))
    # Learn the set of possible string values and assign them a fixed integer index
    index.adapt(feature_ds)
    # Turn the string input into integer indices
    encoded_feature = index(feature)
    # Create a CategoryEncoding for our integer indices
    encoder = CategoryEncoding(output_mode="binary")
    # Prepare a dataset of indices
    feature_ds = feature_ds.map(index)
    # Learn the space of possible indices
    encoder.adapt(feature_ds)
    # Apply one-hot encoding to our indices
    encoded_feature = encoder(encoded_feature)
    return encoded_feature

def encode_integer_categorical_feature(feature, name, dataset):
    # Create a CategoryEncoding for our integer indices
    encoder = CategoryEncoding(output_mode="binary")
    # Prepare a Dataset that only yields our feature
    feature_ds = dataset.map(lambda x, y: x[name])
    feature_ds = feature_ds.map(lambda x: tf.expand_dims(x, -1))
    # Learn the space of possible indices
    encoder.adapt(feature_ds)
    # Apply one-hot encoding to our indices
    encoded_feature = encoder(feature)
    return encoded_feature
# +
# Categorical features encoded as integers
sex = keras.Input(shape=(1,), name="sex", dtype="int64")
cp = keras.Input(shape=(1,), name="cp", dtype="int64")
fbs = keras.Input(shape=(1,), name="fbs", dtype="int64")
restecg = keras.Input(shape=(1,), name="restecg", dtype="int64")
exang = keras.Input(shape=(1,), name="exang", dtype="int64")
ca = keras.Input(shape=(1,), name="ca", dtype="int64")
# Categorical feature encoded as string
thal = keras.Input(shape=(1,), name="thal", dtype="string")
# Numerical features
age = keras.Input(shape=(1,), name="age")
trestbps = keras.Input(shape=(1,), name="trestbps")
chol = keras.Input(shape=(1,), name="chol")
thalach = keras.Input(shape=(1,), name="thalach")
oldpeak = keras.Input(shape=(1,), name="oldpeak")
slope = keras.Input(shape=(1,), name="slope")
all_inputs = [
sex,
cp,
fbs,
restecg,
exang,
ca,
thal,
age,
trestbps,
chol,
thalach,
oldpeak,
slope,
]
# Integer categorical features
sex_encoded = encode_integer_categorical_feature(sex, "sex", train_ds)
cp_encoded = encode_integer_categorical_feature(cp, "cp", train_ds)
fbs_encoded = encode_integer_categorical_feature(fbs, "fbs", train_ds)
restecg_encoded = encode_integer_categorical_feature(restecg, "restecg", train_ds)
exang_encoded = encode_integer_categorical_feature(exang, "exang", train_ds)
ca_encoded = encode_integer_categorical_feature(ca, "ca", train_ds)
# String categorical features
thal_encoded = encode_string_categorical_feature(thal, "thal", train_ds)
# Numerical features
age_encoded = encode_numerical_feature(age, "age", train_ds)
trestbps_encoded = encode_numerical_feature(trestbps, "trestbps", train_ds)
chol_encoded = encode_numerical_feature(chol, "chol", train_ds)
thalach_encoded = encode_numerical_feature(thalach, "thalach", train_ds)
oldpeak_encoded = encode_numerical_feature(oldpeak, "oldpeak", train_ds)
slope_encoded = encode_numerical_feature(slope, "slope", train_ds)
all_features = layers.concatenate(
[
sex_encoded,
cp_encoded,
fbs_encoded,
restecg_encoded,
exang_encoded,
slope_encoded,
ca_encoded,
thal_encoded,
age_encoded,
trestbps_encoded,
chol_encoded,
thalach_encoded,
oldpeak_encoded,
]
)
x = layers.Dense(32, activation="relu")(all_features)
x = layers.Dropout(0.5)(x)
output = layers.Dense(1, activation="sigmoid")(x)
model = keras.Model(all_inputs, output)
model.compile("adam", "binary_crossentropy", metrics=["accuracy"])
# -
model.fit(train_ds, epochs=50, validation_data=val_ds)
# +
sample = {
"age": 60,
"sex": 1,
"cp": 1,
"trestbps": 145,
"chol": 233,
"fbs": 1,
"restecg": 2,
"thalach": 150,
"exang": 0,
"oldpeak": 2.3,
"slope": 3,
"ca": 0,
"thal": "fixed",
}
input_dict = {name: tf.convert_to_tensor([value]) for name, value in sample.items()}
predictions = model.predict(input_dict)
print(
"This particular patient had a %.1f percent probability "
"of having a heart disease, as evaluated by our model." % (100 * predictions[0][0],)
)
# -
# File: main.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Codebook
# **Authors**: <NAME>
#
# Documenting existing data files of DaanMatch with information about location, owner, "version", source etc.
import pandas as pd
import numpy as np
import boto3
import seaborn as sns
import matplotlib.pyplot as plt
from collections import Counter
import io
# +
client = boto3.client('s3')
resource = boto3.resource('s3')
my_bucket = resource.Bucket('daanmatchdatafiles')
path = 's3://daanmatchdatafiles/from Shekhar/Final_Data_Globalgiving.org.xlsx'
obj = client.get_object(Bucket='daanmatchdatafiles', Key='from Shekhar/Final_Data_Globalgiving.org.xlsx')
df = pd.read_excel(io.BytesIO(obj['Body'].read()))
# -
# # Final_Data_Globalgiving.org.xlsx
# ## TOC:
# Lists out the column names in TOC format
def toc_maker(dataset):
    counter = 1
    for column in dataset.columns:
        print("* [" + column + "](4." + str(counter) + ")")
        counter += 1
# toc_maker(df)
# **About this dataset** <br>
# Data provided by: Unknown <br>
# Source: <br>
# Type: xlsx <br>
# Last Modified: <br>
# Size: <br>
# Preview the first rows
df.head()
# **What's in this dataset?**
print("Shape:", df.shape)
print("Rows:", df.shape[0])
print("Columns:", df.shape[1])
print("Each row is an NGO.")
# **Codebook**
# +
dataset_columns = [column for column in df.columns]
dataset_desc = ["URL of NGO",
"Name",
"NGO Name",
"Money collected",
"Target collection",
"Donations",
"Description of NGO",
"Challenges NGO is targeting",
"Solution NGO is providing",
"Long-term impact of NGO",
"Location of NGO",
"NGO Website Link",
"NGO Facebook Link",
"NGO Twitter Link"]
dataset_dtypes = [dtype for dtype in df.dtypes]
data = {"Column Name": dataset_columns, "Description": dataset_desc, "Type": dataset_dtypes}
codebook = pd.DataFrame(data)
codebook
# -
# **Missing Values**
df.isnull().sum()
# **Summary Statistics**
df.describe()
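The per-column sections below all repeat the same invalid/unique/duplicate pattern; it could be factored into one helper. A sketch (pandas only; `column_report` is a hypothetical name, not part of the codebook):

```python
import pandas as pd
from collections import Counter

def column_report(col, label="Duplicate values"):
    """Replicate the per-column cell pattern: count invalid entries,
    unique values, and tabulate duplicated values."""
    invalid = int((col == " ").sum() + col.isnull().sum())
    n_unique = col.nunique(dropna=False)
    counts = Counter(col.dropna())
    dupes = {k: v for k, v in counts.items() if v > 1}
    table = pd.DataFrame(sorted(dupes.items(), key=lambda kv: -kv[1]),
                         columns=[label, "Count"])
    return invalid, n_unique, table
```

Calling `column_report(df['Url'], "Duplicate URLs")` would then reproduce one of the cells below in a single line.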
# ## Columns
# ### Url
df['Url'].head()
# Many, if not all, URLs are not functional anymore.
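Since the URLs are reported dead, a cheap structural sanity check (scheme and host present) can still be done offline with the standard library. A sketch; it says nothing about whether a page is actually live:

```python
from urllib.parse import urlparse

def looks_like_url(s):
    """Cheap structural check: the string parses with an http(s) scheme
    and a non-empty host. Does not test reachability."""
    if not isinstance(s, str):
        return False
    parsed = urlparse(s.strip())
    return parsed.scheme in ("http", "https") and bool(parsed.netloc)
```

Applying this over the column would separate malformed entries from merely dead links.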
# +
url = df['Url']
# Number of empty strings/missing values
print("Invalid:", sum(url == " ") + sum(url.isnull()))
print("No. of unique values:", len(url.unique()))
# Check for duplicates
counter = dict(Counter(url))
duplicates = { key:[value] for key, value in counter.items() if value > 1}
print("No. of Duplicates:", len(duplicates))
table = pd.DataFrame.from_dict(duplicates)
table = table.melt(var_name="Duplicate URLs", value_name="Count").sort_values(by=["Count"], ascending=False).reset_index(drop=True)
table
# -
# ### Name
# +
col = df['Name']
# Number of empty strings/missing values
print("Invalid:", sum(col == " ") + sum(col.isnull()))
print("No. of unique values:", len(col.unique()))
# Check for duplicates
counter = dict(Counter(col))
duplicates = {key: [value] for key, value in counter.items() if value > 1}
print("No. of Duplicates:", len(duplicates))
table = pd.DataFrame.from_dict(duplicates)
table = table.melt(var_name="Duplicate Names", value_name="Count").sort_values(by=["Count"], ascending=False).reset_index(drop=True)
table
# -
# ### NGO Name
# +
col = df['NGO Name']
# Number of empty strings/missing values
print("Invalid:", sum(col == " ") + sum(col.isnull()))
print("No. of unique values:", len(col.unique()))
# Check for duplicates
counter = dict(Counter(col))
duplicates = {key: [value] for key, value in counter.items() if value > 1}
print("No. of Duplicates:", len(duplicates))
table = pd.DataFrame.from_dict(duplicates)
table = table.melt(var_name="Duplicate NGO Names", value_name="Count").sort_values(by=["Count"], ascending=False).reset_index(drop=True)
table
# -
df[df['NGO Name'] == 'Indian Association for the Blind']
# Duplicate NGO Names do not indicate duplicate rows
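The claim above can be checked directly with `DataFrame.duplicated`. A toy sketch (the frame below is invented for illustration):

```python
import pandas as pd

# Toy frame: two rows share a name but differ elsewhere, so they are
# name-duplicates without being full-row duplicates.
toy = pd.DataFrame({"NGO Name": ["A", "A", "B"],
                    "Location": ["Madurai", "Chennai", "Delhi"]})
name_dupes = toy.duplicated(subset=["NGO Name"]).sum()
row_dupes = toy.duplicated().sum()
print(name_dupes, row_dupes)
```

On the real data, `df.duplicated().sum()` would quantify how many rows are exact repeats.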
# ### Collected
# +
col = df['Collected']
# Number of empty strings/missing values
print("Invalid:", sum(col == " ") + sum(col.isnull()))
print("No. of unique values:", len(col.unique()))
# Check for duplicates
counter = dict(Counter(col))
duplicates = {key: [value] for key, value in counter.items() if value > 1}
print("No. of Duplicates:", len(duplicates))
table = pd.DataFrame.from_dict(duplicates)
table = table.melt(var_name="Duplicate Collection", value_name="Count").sort_values(by=["Count"], ascending=False).reset_index(drop=True)
table
# +
# graph ?
# -
# ### Target
# +
col = df['Target']
# Number of empty strings/missing values
print("Invalid:", sum(col == " ") + sum(col.isnull()))
print("No. of unique values:", len(col.unique()))
# Check for duplicates
counter = dict(Counter(col))
duplicates = {key: [value] for key, value in counter.items() if value > 1}
print("No. of Duplicates:", len(duplicates))
table = pd.DataFrame.from_dict(duplicates)
table = table.melt(var_name="Duplicate Targets", value_name="Count").sort_values(by=["Count"], ascending=False).reset_index(drop=True)
table
# -
# ### Donations
# +
col = df['Donations']
# Number of empty strings/missing values
print("Invalid:", sum(col.isnull()))
print("No. of unique values:", len(col.unique()))
# Check for duplicates
counter = dict(Counter(col))
duplicates = { key:[value] for key, value in counter.items() if value > 1}
print("No. of Duplicates:", len(duplicates))
table = pd.DataFrame.from_dict(duplicates)
table = table.melt(var_name="Duplicate Donations", value_name="Count").sort_values(by=["Count"], ascending=False).reset_index(drop=True)
table
# -
max(df['Donations'])
# ### Description
# +
col = df['Description']
# Number of empty strings/missing values
print("Invalid:", sum(col == " ") + sum(col.isnull()))
print("No. of unique values:", len(col.unique()))
# Check for duplicates
counter = dict(Counter(col))
duplicates = { key:[value] for key, value in counter.items() if value > 1}
print("No. of Duplicates:", len(duplicates))
table = pd.DataFrame.from_dict(duplicates)
table = table.melt(var_name="Duplicate Descriptions", value_name="Count").sort_values(by=["Count"], ascending=False).reset_index(drop=True)
table
# -
# ### Challenge
# +
col = df['Challenge']
# Number of empty strings/missing values
print("Invalid:", sum(col == " ") + sum(col.isnull()))
print("No. of unique values:", len(col.unique()))
# Check for duplicates
counter = dict(Counter(col))
duplicates = { key:[value] for key, value in counter.items() if value > 1}
print("No. of Duplicates:", len(duplicates))
table = pd.DataFrame.from_dict(duplicates)
table = table.melt(var_name="Duplicate Challenges", value_name="Count").sort_values(by=["Count"], ascending=False).reset_index(drop=True)
table
# -
# ### Solution
# +
col = df['Solution']
# Number of empty strings/missing values
print("Invalid:", sum(col == " ") + sum(col.isnull()))
print("No. of unique values:", len(col.unique()))
# Check for duplicates
counter = dict(Counter(col))
duplicates = { key:[value] for key, value in counter.items() if value > 1}
print("No. of Duplicates:", len(duplicates))
table = pd.DataFrame.from_dict(duplicates)
table = table.melt(var_name="Duplicate Solutions", value_name="Count").sort_values(by=["Count"], ascending=False).reset_index(drop=True)
table
# -
# ### Long-Term Impact
# +
col = df['Long-Term Impact']
# Number of empty strings/missing values
print("Invalid:", sum(col == " ") + sum(col.isnull()))
print("No. of unique values:", len(col.unique()))
# Check for duplicates
counter = dict(Counter(col))
duplicates = { key:[value] for key, value in counter.items() if value > 1}
print("No. of Duplicates:", len(duplicates))
table = pd.DataFrame.from_dict(duplicates)
table = table.melt(var_name="Duplicate Impacts", value_name="Count").sort_values(by=["Count"], ascending=False).reset_index(drop=True)
table
# -
# ### Location
# +
col = df['Location']
# Number of empty strings/missing values
print("Invalid:", sum(col == " ") + sum(col.isnull()))
print("No. of unique values:", len(col.unique()))
# Check for duplicates
counter = dict(Counter(col))
duplicates = { key:[value] for key, value in counter.items() if value > 1}
print("No. of Duplicates:", len(duplicates))
table = pd.DataFrame.from_dict(duplicates)
table = table.melt(var_name="Duplicate Locations", value_name="Count").sort_values(by=["Count"], ascending=False).reset_index(drop=True)
table
# -
# ### Website
# +
col = df['Website']
# Number of empty strings/missing values
print("Invalid:", sum(col == " ") + sum(col.isnull()))
print("No. of unique values:", len(col.unique()))
# Check for duplicates
counter = dict(Counter(col))
duplicates = { key:[value] for key, value in counter.items() if value > 1}
print("No. of Duplicates:", len(duplicates))
table = pd.DataFrame.from_dict(duplicates)
table = table.melt(var_name="Duplicate Websites", value_name="Count").sort_values(by=["Count"], ascending=False).reset_index(drop=True)
table
# -
# ### Facebook
# +
col = df['Facebook']
# Number of empty strings/missing values
print("Invalid:", sum(col == " ") + sum(col.isnull()))
print("No. of unique values:", len(col.unique()))
# Check for duplicates
counter = dict(Counter(col))
duplicates = { key:[value] for key, value in counter.items() if value > 1}
print("No. of Duplicates:", len(duplicates))
table = pd.DataFrame.from_dict(duplicates)
table = table.melt(var_name="Duplicate Facebook Links", value_name="Count").sort_values(by=["Count"], ascending=False).reset_index(drop=True)
table
# -
# ### Twitter
# +
col = df['Twitter']
# Number of empty strings/missing values
print("Invalid:", sum(col == " ") + sum(col.isnull()))
print("No. of unique values:", len(col.unique()))
# Check for duplicates
counter = dict(Counter(col))
duplicates = { key:[value] for key, value in counter.items() if value > 1}
print("No. of Duplicates:", len(duplicates))
table = pd.DataFrame.from_dict(duplicates)
table = table.melt(var_name="Duplicate Twitter Links", value_name="Count").sort_values(by=["Count"], ascending=False).reset_index(drop=True)
table
# -
# File: from Shekhar/Global Giving/Final_Data_Globalgiving.org.xlsx.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import random
import math
from mpl_toolkits.mplot3d import Axes3D
import open3d as o3d
# +
class RANSAC:
"""
RANSAC Class
"""
def __init__(self, point_cloud, max_iterations, distance_ratio_threshold):
self.point_cloud = point_cloud
self.max_iterations = max_iterations
self.distance_ratio_threshold = distance_ratio_threshold
def run(self):
"""
method for running the class directly
:return:
"""
inliers, outliers = self._ransac_algorithm(self.max_iterations, self.distance_ratio_threshold)
fig = plt.figure()
ax = Axes3D(fig)
ax.scatter(inliers.X , inliers.Y, inliers.Z, c="green")
ax.scatter(outliers.X, outliers.Y, outliers.Z, c="red")
plt.show()
def _visualize_point_cloud(self):
"""
Visualize the 3D data
:return: None
"""
# Visualize the point cloud data
fig = plt.figure()
ax = Axes3D(fig)
ax.scatter(self.point_cloud.X , self.point_cloud.Y, self.point_cloud.Z)
plt.show()
def _ransac_algorithm(self, max_iterations, distance_ratio_threshold):
"""
Implementation of the RANSAC logic
:return: inliers(Dataframe), outliers(Dataframe)
"""
inliers_result = set()
while max_iterations:
max_iterations -= 1
# Add 3 random indexes
random.seed()
inliers = []
while len(inliers) < 3:
random_index = random.randint(0, len(self.point_cloud.X)-1)
inliers.append(random_index)
# print(inliers)
try:
# In case of *.xyz data (extra columns beyond X, Y, Z)
x1, y1, z1, _, _, _ = self.point_cloud.loc[inliers[0]]
x2, y2, z2, _, _, _ = self.point_cloud.loc[inliers[1]]
x3, y3, z3, _, _, _ = self.point_cloud.loc[inliers[2]]
except ValueError:
# In case of *.pcd data (X, Y, Z only)
x1, y1, z1 = self.point_cloud.loc[inliers[0]]
x2, y2, z2 = self.point_cloud.loc[inliers[1]]
x3, y3, z3 = self.point_cloud.loc[inliers[2]]
# Plane Equation --> ax + by + cz + d = 0
# Value of Constants for inlier plane
a = (y2 - y1)*(z3 - z1) - (z2 - z1)*(y3 - y1)
b = (z2 - z1)*(x3 - x1) - (x2 - x1)*(z3 - z1)
c = (x2 - x1)*(y3 - y1) - (y2 - y1)*(x3 - x1)
d = -(a*x1 + b*y1 + c*z1)
plane_length = max(0.1, math.sqrt(a*a + b*b + c*c))  # guard against a degenerate plane
for point in self.point_cloud.iterrows():
index = point[0]
# Skip iteration if the point is one of the three sampled inlier points
if index in inliers:
continue
try:
# In case of *.xyz data
x, y, z, _, _, _ = point[1]
except ValueError:
# In case of *.pcd data
x, y, z = point[1]
# Calculate the distance of the point to the candidate plane
distance = math.fabs(a*x + b*y + c*z + d) / plane_length
# Add the point as an inlier if it is within the threshold distance ratio
if distance <= distance_ratio_threshold:
inliers.append(index)
# Update the set for retaining the maximum number of inlier points
if len(inliers) > len(inliers_result):
inliers_result.clear()
inliers_result = inliers
# Segregate inliers and outliers from the point cloud
inlier_rows, outlier_rows = [], []
for point in self.point_cloud.iterrows():
row = {"X": point[1]["X"], "Y": point[1]["Y"], "Z": point[1]["Z"]}
if point[0] in inliers_result:
inlier_rows.append(row)
else:
outlier_rows.append(row)
# Build the DataFrames once at the end; DataFrame.append is deprecated
inlier_points = pd.DataFrame(inlier_rows, columns=["X", "Y", "Z"])
outlier_points = pd.DataFrame(outlier_rows, columns=["X", "Y", "Z"])
return inlier_points, outlier_points
if __name__ == "__main__":
# Read the point cloud data
# point_cloud = pd.read_csv("point_cloud_data_sample.xyz", delimiter=" ", nrows=500)
pcd = o3d.io.read_point_cloud(r"1.pcd")
point_cloud = pd.DataFrame(pcd.points, columns=["X", "Y", "Z"])  # a list keeps the column order deterministic
APPLICATION = RANSAC(point_cloud, max_iterations=5, distance_ratio_threshold=0.5)
APPLICATION.run()
# -
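# The plane-fitting arithmetic inside `_ransac_algorithm` (cross product for the
# normal, then point-to-plane distance) can be checked in isolation. A minimal
# standalone sketch of the same math, using only the standard library:

```python
import math

def plane_from_points(p1, p2, p3):
    # Plane through 3 points: normal = (p2 - p1) x (p3 - p1), plane ax + by + cz + d = 0
    (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = p1, p2, p3
    a = (y2 - y1) * (z3 - z1) - (z2 - z1) * (y3 - y1)
    b = (z2 - z1) * (x3 - x1) - (x2 - x1) * (z3 - z1)
    c = (x2 - x1) * (y3 - y1) - (y2 - y1) * (x3 - x1)
    d = -(a * x1 + b * y1 + c * z1)
    return a, b, c, d

def point_plane_distance(point, plane):
    # Unsigned distance from a point to the plane ax + by + cz + d = 0
    a, b, c, d = plane
    x, y, z = point
    return abs(a * x + b * y + c * z + d) / math.sqrt(a * a + b * b + c * c)

# Three points spanning the z = 0 plane; a point at height 2 is at distance 2
plane = plane_from_points((0, 0, 0), (1, 0, 0), (0, 1, 0))
print(point_plane_distance((0.5, 0.5, 2.0), plane))  # 2.0
```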
| ransac.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from pyspark.sql import SparkSession
from pyspark import SparkContext
from pyspark.sql import SQLContext
from pyspark.sql.types import *
spark = SparkSession.builder \
.master("local") \
.appName("stream to csv") \
.config("spark.some.config.option", "some-value") \
.getOrCreate()
dataSchema = StructType([StructField('Arrival_Time',LongType(),True),
StructField('Creation_Time',LongType(),True),
StructField('Device',StringType(),True),
StructField('Index',LongType(),True),
StructField('Model',StringType(),True),
StructField('User',StringType(),True),
StructField('gt',StringType(),True),
StructField('x',DoubleType(),True),
StructField('y',DoubleType(),True),
StructField('z',DoubleType(),True)
])
streaming = spark.readStream.schema(dataSchema).option("maxFilesPerTrigger", 1)\
.json( '/home/henry/projects/Spark-The-Definitive-Guide/data/activity-data_small')
spark.conf.set("spark.sql.shuffle.partitions", 5)
streaming.registerTempTable("my_table")
spark.sql("select * from my_table").writeStream.format("csv")\
.option("checkpointLocation", "checkpoint_dir3").outputMode("append")\
.option("path", "stream_dir4").start()
spark.stop()
| big_data/streaming3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.6 64-bit (''pytorch'': conda)'
# name: python37664bitpytorchconda0cdad03962454fdfb22b6d3ea1ad8fae
# ---
# +
from d2l import torch as d2l
import torch
from torch import nn
def nin_block(in_channels, out_channels, kernel_size, strides, padding):
return nn.Sequential(
nn.Conv2d(in_channels, out_channels, kernel_size, strides, padding),
nn.ReLU(),
nn.Conv2d(out_channels, out_channels, kernel_size=1), nn.ReLU(),
nn.Conv2d(out_channels, out_channels, kernel_size=1), nn.ReLU())
# -
net = nn.Sequential(
nin_block(1, 96, kernel_size=11, strides=4, padding=0),
nn.MaxPool2d(3, stride=2),
nin_block(96, 256, kernel_size=5, strides=1, padding=2),
nn.MaxPool2d(3, stride=2),
nin_block(256, 384, kernel_size=3, strides=1, padding=1),
nn.MaxPool2d(3, stride=2),
nn.Dropout(0.5),
nin_block(384, 10, kernel_size=3, strides=1, padding=1),
nn.AdaptiveMaxPool2d((1,1)),
nn.Flatten())
# + tags=[]
X = torch.rand(size=(1, 1, 224, 224))
for layer in net:
X = layer(X)
print(layer.__class__.__name__,'output shape:\t', X.shape)
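# The shapes printed above can be cross-checked by hand with the standard
# convolution size formula, floor((n + 2p - k) / s) + 1; the 1x1 convolutions
# inside each NiN block leave the spatial size unchanged. A plain-Python sketch:

```python
def out_size(n, k, s=1, p=0):
    # Spatial output size of a conv/pool layer: floor((n + 2p - k) / s) + 1
    return (n + 2 * p - k) // s + 1

n = 224
n = out_size(n, k=11, s=4)   # first NiN block  -> 54
n = out_size(n, k=3, s=2)    # max pool         -> 26
n = out_size(n, k=5, p=2)    # second NiN block -> 26
n = out_size(n, k=3, s=2)    # max pool         -> 12
n = out_size(n, k=3, p=1)    # third NiN block  -> 12
n = out_size(n, k=3, s=2)    # max pool         -> 5
print(n)  # 5: the 5x5 map that AdaptiveMaxPool2d then reduces to 1x1
```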
# + tags=[]
lr, num_epochs, batch_size = 0.1, 10, 128
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size, resize=224)
d2l.train_ch6(net, train_iter, test_iter, num_epochs, lr)
| Ch07_Modern_CNN/7-3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#
# 
#
# ## RAMP Challenge: French deputies votes prediction
#
# Team members: *<NAME>, <NAME>, <NAME>, <NAME>, <NAME>*
#
# ## Introduction
#
# This group project is about predicting positions of deputies on votes in the French National Assembly.
#
# ### Context
# #### The National Assembly
#
# We will be studying votes in the French National Assembly.
#
# The National Assembly is the French lower house of parliament, the upper house being the Senate. The Assembly is composed of deputies elected every five years in legislative elections. Each deputy represents one electoral district of 63,000 to 150,000 inhabitants. In total, there are 577 seats in the Assembly.
#
# Most of the time, deputies campaign in their district under the banner of a **political party**. But once elected, inside the Assembly, deputies join or create **parliamentary groups** of at least 15 deputies, which are not exactly the same as political parties.
#
# These groups are used for various reasons.
#
# - Small parties can group their voting forces.
# - Like-minded deputies that are not in the same political party can work together.
# - Groups get various advantages: more speaking time in the Assembly and one representative at the Presidents' conference (a weekly meeting of all the parliamentary group presidents).
#
# Some groups correspond to exactly one party, for example the "En marche" group or the "Les Républicains" group. But this is far from true of every group.
#
# A deputy who does not belong to any parliamentary group is counted as **‘non inscrit’** (unattached), which is not a parliamentary group strictly speaking.
#
# A deputy can also be a member of a **parliamentary commission**, a specialized group that reviews laws on a specific subject (finance, security, health…).
#
# How these groups vote is the focus of this project.
#
#
# #### What is voted on in the Assembly?
#
# In the French legislative system, the government and deputies propose laws. Before a law is adopted, the text is called a:
#
# + **project of law**, when it is proposed by the government
# + **proposition of law**, when it is proposed by a parliament member
#
# Please note that those are literal translations from the French, as we are not legal experts.
#
# All members of the parliament, commissions and the government can propose **amendments**, which are modifications to parts or articles of the law under discussion. We call those who propose amendments (or any kind of vote) **demandeurs** (literally *askers*), and we will refer to them as **demandeur groups**. The groups that vote on these amendments will be referred to as **voting groups**.
#
# An amendment is basically a rewrite of one particular aspect of the text of law. There also exist **sub-amendments**, which are amendments to an amendment. Even a small amendment that adds a comma to a text is important: everything that is voted ultimately becomes law and is enforced.
#
# All texts of law and amendments are voted by absolute majority, where one deputy equals one vote. However, for most votes, not every deputy needs to be present to vote.
#
# Given a vote topic, we will try to predict the position of each group on this topic.
# ### Objective
# Citizens scrutinize the deputies they voted for with great interest. A concern is that deputies don't vote on the merits of the topic itself, but only along partisan lines: they vote for an allied political group, and against an enemy political group. These "friendship rules" undermine democracy and are disapproved of by citizens: the process looks more like political warfare than a democratic co-construction of French society.
#
# Our goal is to study to what extent political groups' votes in the Assembly are reducible to those "friendship rules", and are thus predictable. In other words, **our project is a [Multilabel classification task](https://scikit-learn.org/stable/modules/multiclass.html): predicting whether each parliamentary group votes for a given vote topic or not.**
#
# Note that to simplify the problem, we reduce the number of positions to two (**for** or **not for**). In reality, there are 4 possible positions: for, against, abstention, or non-presence. We also don't consider every parliamentary group for prediction, only the largest ones.
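# The reduction from four raw positions to a binary target can be sketched as
# follows. The position labels used here ("pour", "contre", ...) are illustrative
# assumptions, not necessarily the dataset's actual strings:

```python
def to_binary(position):
    # 1 only for an explicit 'for' vote; against / abstention / absence all map to 0
    return 1 if position == "pour" else 0

positions = ["pour", "contre", "abstention", "nonVotant"]
print([to_binary(p) for p in positions])  # [1, 0, 0, 0]
```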
#
# **The metric used for evaluation is a weighted F1-score:** several F1-scores are computed for each class (parliamentary group), and then averaged, using the logarithm of the share of deputies in Assembly for this parliamentary group as weights.
# ### Data
# Most of the data we use in this project comes from government's [open data, freely available on the National Assembly website](https://data.assemblee-nationale.fr/). Some additional features were created using aggregated data from [nosdeputes.fr](https://www.nosdeputes.fr/).
#
# Data from the government is very large, as everything ever said or proposed is recorded, and not in the format we want. We will describe below the processing we performed.
#
# #### Processing
#
# 1. We gather the various stacks of individual .csv files into aggregate ones with only the relevant information. We create aggregate .csv files:
#     - one .csv with all the **actors**, which is another name for deputies and senators.
#     - one .csv with all information about **votes**: topic, date...
#     - one .csv with the **results of votes**.
# 2. We filter these aggregate .csv files to keep only relevant information: we keep only data from the latest legislature. In other words, we select data in the timeframe between July 4th, 2017 and November 20th, 2020.
# 3. We format this data in a "prediction-ready" format, with a features DataFrame X and a target np.array y.
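# Step 2's timeframe filter can be sketched with the standard library (a minimal
# illustration; the real pipeline works on the aggregate .csv files):

```python
from datetime import date

START, END = date(2017, 7, 4), date(2020, 11, 20)

def in_legislature(d):
    # Keep only votes inside the current legislature's timeframe (bounds inclusive)
    return START <= d <= END

vote_dates = [date(2016, 1, 1), date(2018, 5, 9), date(2020, 11, 20), date(2021, 2, 2)]
kept = [d for d in vote_dates if in_legislature(d)]
print(len(kept))  # 2
```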
#
# #### Description
#
# Here are the **features** we made available for the problem:
# + **vote_uid (str)**: the id of the vote
# + **date (str)**: date of the vote
# + **libelle (str)**: short description of the topic of the vote
# + **code_type_vote (str)**: code of the type of the vote
# + **libelle_type_vote (str)**: same as code_type_vote but more explicit
# + **demandeur (str)**: the deputy or group that asked for this vote
# + **presence_per_party (dict)**: number of deputies physically present for the vote by group
#
# Our **target** corresponds to the results of the vote.
# - There are 10 columns, one for each parliamentary group considered.
# - Each line corresponds to a vote. The index is the vote_id.
# - The values are 1 if the group voted for, 0 if the group did something else (*against, abstention, non-presence*)
#
# We also provide another .csv file with information about political parties and their members: birth date, sex, Twitter account... This .csv can be used for further feature engineering.
#
# #### Train-test split
#
# The train-test split is made temporally: the first 80% of votes are in the train dataset, the last 20% in the test dataset. This is because votes are ordered in time and, ultimately, we want to predict future political decisions.
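# The order-preserving 80/20 split described above can be sketched as:

```python
def temporal_split(items, train_frac=0.8):
    # items must already be sorted by date; no shuffling, so the test set is "the future"
    cut = int(len(items) * train_frac)
    return items[:cut], items[cut:]

vote_ids = list(range(10))  # stand-in for chronologically sorted vote ids
train_ids, test_ids = temporal_split(vote_ids)
print(train_ids, test_ids)  # [0, 1, 2, 3, 4, 5, 6, 7] [8, 9]
```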
# ### Data Exploration
#
# To better understand the problem, let's drill down in the data.
# +
from problem import get_train_data, get_test_data, get_actor_party_data
import pandas as pd
import numpy as np
import re
import seaborn as sns
import matplotlib.pyplot as plt
X_train, y_train = get_train_data()
X_test, y_test = get_test_data()
X = X_train.append(X_test)
y = np.concatenate([y_train, y_test])
actors = get_actor_party_data()
# +
# X the features vectors with info about votes
X.head(5)
# -
X['libelle'].iloc[10] # Example of the libelle of a vote
# +
# y, the multilabel target vector
# To each of the 10 columns corresponds a political group, as defined in Context:
y_columns = ["SOC", "FI", "Dem", "LT", "GDR", "LaREM", "Agir ens", "UDI-I", "LR", "NI"]
# These correspond to:
# Socialistes et apparentés, France insoumise, Mouvement démocrate et indépendant,
# Libertés et Territoires, Gauche Démocratique et République, La République En Marche,
# Agir Ensemble, UDI et Indépendants, Les Républicains, Non inscrits
# It's a NumPy array, but we convert it into a DataFrame for easier manipulation.
y = pd.DataFrame(y, columns=y_columns)
y.index = X.index
y.head(5)
# +
# actors is data to be used for feature engineering
# It was downloaded from nosdeputes.fr
# It can give good insight about who's inside the parliament and who they are.
actors.head(8)
# -
# #### Exploring deputies data
# Let's get a sense of the scale of the different political parties (**not** groups) in the French National Assembly. In 2017, the French voted massively for Macron's "La République En Marche" and the "Mouvement démocrate". Notice how some parties have only 1 deputy: this is especially the case for overseas parties, which is why they tend to group with others.
#
# We study parties since they are more fine-grained than groups. Note also that a deputy can be in several groups.
effectif_table = (actors[["membre_acteurRef", "membre_parti"]]
.groupby("membre_parti").count()
.reset_index()
.rename({"membre_parti": "parti", "membre_acteurRef": "parti_nb_membres"}, axis=1)
.set_index("parti")
.sort_values("parti_nb_membres", ascending=False))
effectif_table
# Let's plot the average age and the share of women in each party.
# +
from datetime import datetime
from dateutil.relativedelta import relativedelta
def yearsago(years, from_date=None):
if from_date is None:
from_date = datetime.now()
return from_date - relativedelta(years=years)
def num_years(begin, end=None):
if end is None:
end = datetime.now()
num_years = int((end - begin).days / 365.25)
if begin > yearsago(num_years, end):
return num_years - 1
else:
return num_years
# Count the number of actors of each sex in each party
parite_table = pd.pivot_table(actors, values="membre_acteurRef", columns="membre_sex", index="membre_parti", aggfunc='count', fill_value=0).reset_index().rename({"membre_parti": "parti"}, axis=1).set_index('parti')
# Compute the share of women of each party
parite_table["parti_share_women"] = parite_table["F"] / (parite_table["F"] + parite_table["H"])
parite_table.drop(["F", "H"], axis=1, inplace=True)
# Compute age of actors
actors['membre_birthDate'] = pd.to_datetime(actors['membre_birthDate'])
actors["membre_age"] = actors["membre_birthDate"].apply(num_years)
# Compute the average age of actors of each party
age_table = actors[["membre_acteurRef", 'membre_parti', "membre_age"]].groupby("membre_parti").mean().reset_index().rename({"membre_parti": "parti", "membre_age": "parti_mean_age"}, axis=1).set_index('parti')
# Group these features together and display them
parti_features = effectif_table.join(parite_table).join(age_table)
# Plot
plt.figure(figsize=(8,10))
p1 = sns.scatterplot(x='parti_share_women', y='parti_mean_age', data=parti_features.reset_index(), size = "parti_nb_membres", hue="parti", legend=False, sizes=(20,100))
for line in range(0, parti_features.shape[0]):
# Jitter the text a bit to avoid text clusters
p1.text(parti_features['parti_share_women'][line]+0.01, parti_features['parti_mean_age'][line] + 1.2*(0.5 - np.random.random()),
parti_features.reset_index()['parti'][line][:25], horizontalalignment='left',
size='medium', color='black', weight='light')
plt.title('Partis positioning according to age and share of women')
plt.xlabel('Share of women')
plt.ylabel('Mean age')
# -
# For the share of women, "Les Républicains" is way behind the other big parties with only 25% of women, while "La République En Marche", "Parti socialiste" and "La France Insoumise" have roughly 50%. Parties with 100% women have only 1 member.
#
# Regarding age, among the big parties, "Mouvement démocrate", "Parti socialiste" and "Les Républicains" are the oldest in the Assembly. This is the traditional "left vs center vs right" paradigm. On the contrary, "La République En Marche" and "Front National", the two parties that faced off in the final round of the presidential election, are younger. This corresponds to the new "liberals vs nationalists" paradigm.
# #### Exploring the distribution of vote positions
#
# Let's compare the counts of "for" and "other" positions. The distribution looks roughly balanced.
# +
votes_for = y.sum()
votes_other = y.shape[0] - votes_for
plt.bar([1, 2], [votes_for.sum(), votes_other.sum()])
plt.xticks([1, 2], ('For', 'Other (not for)'))
plt.title("Number of votes per position")
plt.show()
# -
# But if we do a breakdown per group, the story is not the same. "LaREM" (La République En Marche) and "Dem" (Démocrates et apparentés) rarely vote *for*. And the left-leaning groups (SOC Socialistes, FI France Insoumise, GDR Gauche Démocrate et Républicaine) very often vote *for*, despite being a minority in the Assembly.
#
# The intuition we get is that left-leaning groups propose a lot of amendments that the majority groups (La République En Marche and Démocrates et apparentés) systematically block. These "useless amendments" (which the demandeur group *knows* will be blocked, but proposes nonetheless just to **delay** the proposition of law) are known in US politics as the [Filibuster](https://en.wikipedia.org/wiki/Filibuster) (*obstruction parlementaire* in French). The next graph will confirm this intuition.
#
# From a machine learning point of view, this means that classes are imbalanced. Beware!
# +
ind = np.arange(len(y.columns))
plt.figure(figsize=(10,6))
p1 = plt.bar(ind, votes_for, width=0.35)
p2 = plt.bar(ind, votes_other, width=0.35, bottom=votes_for)
plt.ylabel('Number of votes')
plt.title("Breakdown of 'For' votes")
plt.xticks(ind, y.columns)
plt.legend((p1[0], p2[0]), ("For", 'Other'))
plt.show()
# -
# #### Vote demandeur breakdown
#
# Let's now look at the most common demandeur. We recall that the **demandeur** is the group that asks for an amendment.
#
# To clean up demandeur's column, we use a custom Transformer available in `estimator.py`.
#
# We see that the group La France Insoumise, despite having only 17 deputies in the Assembly, is the leading demandeur. This means that most amendments are proposed by this minority group, which confirms our earlier analysis of filibustering. Depending on your point of view, you can say that:
# - France Insoumise defends democracy by preventing bad laws from being voted, and asking for radical changes.
# - France Insoumise asks for useless votes that will be, most of the time, discarded, just to slow down laws' adoptions.
# +
from estimator import FindGroupVoteDemandeurTransformer
from ast import literal_eval
demandeur = FindGroupVoteDemandeurTransformer()
X_b = demandeur.transform(X.copy())
# explode() below needs real Python lists, so parse the stringified lists first
X_b['demandeur_group'] = X_b['demandeur_group'].apply(literal_eval)
X_b = X_b[["demandeur_group", "vote_uid"]].explode('demandeur_group', ignore_index=True)
demandeur_table = X_b.groupby("demandeur_group").count()
# Plot
demandeur_table.plot(kind='bar', figsize=(10,6))
plt.title("Amount of propositions for each demandeur group")
plt.xticks(rotation=90)
plt.xlabel("Demandeur group")
plt.show()
# -
# Now, let's cross this information with the voting groups, to see which groups vote most often for which group's propositions. To do that, we compute a confusion-matrix-like table that indicates the share of times a voting group voted for each demandeur group.
# +
map_group_sigle = {'Agir Ensemble': 'Agir ens',
'Gauche démocrate et républicaine': 'GDR',
'La France insoumise': 'FI',
'La République en Marche': 'LaREM',
'Les Républicains': 'LR',
'Libertés et Territoires': 'LT',
'Socialistes et apparentés': 'SOC',
'UDI Agir et Indépendants': 'UDI Agir I',
'UDI et Indépendants': 'UDI-I',
'Écologie Démocratie Solidarité': 'Eco',
'Non inscrits': 'NI',
'Nouvelle Gauche': 'NG',
"Les Constructifs : républicains UDI indépendants": "LC",
"Mouvement Démocrate et apparentés": "Dem",
np.nan: 'UNK'}
X_b['demandeur_group_sigle'] = X_b['demandeur_group'].apply(lambda x: map_group_sigle[x])
partisanship_table = X_b.set_index('vote_uid').join(y).groupby('demandeur_group_sigle').sum()
opposition_table = X_b.set_index('vote_uid').join(1 - y).groupby('demandeur_group_sigle').sum()
# Normalize by the number of votes
partisanship_table_norm = partisanship_table / (partisanship_table + opposition_table)
partisanship_table_norm.sort_index(axis=1, inplace=True)
partisanship_table_norm
# -
# Since numbers between 0 and 1 are hard to read, we plot the results below.
# - Blue square means bottom group agrees with left group.
# - Red square means bottom group disagrees or ignores left group.
#
# Note here that we have some missing data, labeled UNK. The feature extraction can probably still be perfected. We guess that those might be propositions coming directly from the _Government_ or from _Ministers_.
# +
cmap = sns.diverging_palette(20, 230, as_cmap=True)
plt.figure(figsize=(10,10))
sns.heatmap(partisanship_table_norm.sort_index(axis=1), cmap=cmap, center=0.5, square=True, linewidths=.5,)
plt.ylabel('Demandeur group')
plt.xlabel('Voting group')
plt.title('Partisanship between groups\n1=support, 0=ignore or disagree')
# plt.xticks(rotation=60)
plt.show()
# -
# Now, the story is pretty clear. LaREM (La République En Marche) only agrees with proposals from their own members.
#
# In general, if the voting group and the demandeur group are the same, there is support. This is logical: groups vote for their own amendments. Let's inspect where intra-group support is low. This is the case for FI (France Insoumise) and UDI-I (UDI et Indépendants). It might be the sign of ideological tensions inside the group: France Insoumise because of an internal crisis, UDI by the nature of its centrist ideology.
#
# LaREM is often criticized for becoming more and more right-leaning on the political spectrum, since Macron appointed many politicians from LR as ministers. But on this chart, we see that LaREM (center right) and LR (traditional right) mostly agree only on voting against FI, GDR, and SOC proposals. The mutual support is less clear: LR supports LaREM on a bit over half of their proposals, and LaREM supports less than half of LR's proposals. What looks like an alliance from the outside is much less so inside the Assembly.
#
# This chart also reveals ideological contradictions between public relations and votes. Notably, LaREM rarely votes for Eco (Ecologistes), despite environment being one of the main campaign points of <NAME>.
#
# The proposals from UDI Agir-I (UDI Agir et Indépendants) are notably divisive. This is probably an outlier, with extreme values due to the small number of votes involving this group.
#
# To sum up, our statistical analysis and this chart **reveal lots of complex partisan and ideological tensions**. Will the machine learning model be able to pick up on them?
# ## Sample estimator
#
# Using NLP, we extract features from X, and create a basic fully-connected neural network to fit a classifier.
#
# ### Feature extraction
#
# The first step is to extract features from X. We created several custom Transformers that mostly use regex.
#
# 1. `FindGroupVoteDemandeurTransformer` on the column demandeur (ex: Le Président du groupe "La France Insoumise" <NAME>) to find demandeur's group (La France Insoumise).
# 2. `DecomposeVoteObjetTransformer` on the column libelle, to find:
# - libelle_type: the string with the type of vote (ex: l'amendement, le sous-amendement, l'article...)
# - libelle_desc: the string with a description of the object vote (ex: loi bioéthique)
# - libelle_auteur: the list of actors mentioned in the libelle (ex: <NAME>). These auteurs can be different from demandeurs.
# 3. `FindPartyActorTransformer` on the column libelle_auteur to find the corresponding party (ex: Rassemblement National).
#
# This way, we extract meaningful data from unstructured text. This should give our model a head start.
#
# ### Numerical encoding
#
# Now, we need to encode data numerically.
#
# - `demandeur_group` is a list of strings, each string being a category. We one-hot encode it: to each category corresponds a column, and if the category is present in the list we write 1, else 0.
# - `auteur_parti` is handled much like `demandeur_group`.
# - `libelle_type` is a category. We one-hot encode it.
# - `libelle_desc` is free text. We encode it using TF-IDF, [a very classic way to encode text](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfTransformer.html). But you could imagine something else.
# - `presence_per_party` is a dictionary. We create one column for each party with corresponding attendance.
#
# We then normalize each dimension between 0 and 1.
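# The encoding of list-valued columns (`demandeur_group`, `auteur_parti`) is
# really a multi-hot encoding; a minimal sketch without scikit-learn (the group
# names are illustrative):

```python
def multi_hot(rows, vocabulary):
    # One column per category; 1 if the category appears in the row's list, else 0
    return [[1 if cat in row else 0 for cat in vocabulary] for row in rows]

vocab = ["FI", "LR", "LaREM"]
rows = [["FI"], ["LR", "LaREM"], []]
print(multi_hot(rows, vocab))  # [[1, 0, 0], [0, 1, 1], [0, 0, 0]]
```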
#
# ### Neural Network model
#
# Using Keras, we create a fully connected network with 64 hidden relu-activated Dense units and a dropout of 20%. As an output, we use 10 sigmoid activated units, one for each party.
#
# We take care of class imbalance by assigning weights to classes according to their observed frequency.
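# The class weighting can be sketched as inverse-frequency weights, one common
# convention (the exact scheme used in `estimator.py` is not shown here, and the
# counts below are made up):

```python
def inverse_frequency_weights(positives_per_class, n_samples):
    # weight_c = n_samples / (2 * n_positives_c): rarer positive classes get larger weights
    return {c: n_samples / (2.0 * max(pos, 1))
            for c, pos in positives_per_class.items()}

weights = inverse_frequency_weights({"LaREM": 100, "FI": 400}, n_samples=500)
print(weights)  # {'LaREM': 2.5, 'FI': 0.625}
```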
# +
from estimator import get_estimator
model = get_estimator()
model.fit(X_train, y_train)
# -
y_pred = model.predict(X_test)
# We compare the model accuracies for each class, independently.
# +
# Naive prediction accuracy per class
ind = np.arange(len(y_columns))
accuracies = np.mean(y_pred == y_test, axis=0)
plt.figure(figsize=(10,5))
plt.bar(ind, accuracies)
plt.ylabel('Accuracy')
plt.xlabel('Voting group')
plt.title("Model accuracy per class")
plt.xticks(ind, y.columns)
plt.show()
# -
# However, we won't evaluate our model using accuracy, but with a custom metric defined below.
# ### Evaluation
#
# For evaluation, we use a custom metric, which is a weighted F1-score.
# - F1-score is generally the go-to metric for classification, since it nicely encompasses two important metrics: precision and recall.
# - Our classes are imbalanced: we don't have as many positive as negative examples inside each class. F1-score is a fairer metric than accuracy, which favours models that always predict the majority class.
#
# Let's first compare accuracy with F1-score.
# +
from problem import CustomF1Score
from sklearn.metrics import f1_score, accuracy_score, precision_score, recall_score
# Let's compute the individual F1 of each class
class_f1_scores = []
for i in range(10):
class_f1_scores.append(f1_score(y_test[:,i], y_pred[:,i]))
f1_df = pd.DataFrame(class_f1_scores, y.columns, columns=["F1 score"]).transpose()
f1_df
# -
# On the graph below, you see how the F1-score is a bit more pessimistic about LaREM, which has lots of negative examples. Our model probably outputs lots of zeros without much thinking. The F1-score helps us alleviate this bias and see the model's performance more realistically.
# +
plt.figure(figsize=(10,5))
plt.bar(ind-0.2, accuracies, width=0.4)
plt.bar(ind+0.2, class_f1_scores, width=0.4)
plt.ylabel('Score')
plt.xlabel('Voting group')
plt.title("Comparison of model accuracy and F1-score per class")
plt.xticks(ind, y.columns)
plt.legend(["Accuracy", "F1-score"])
plt.show()
# -
# Now, about our custom metric.
#
# For each of the 10 classes, we compute an F1-score, and then take a weighted average. The weights are the log-proportions of the share of deputies in each group.
#
# $$\forall g^* \in \text{Groups}, \quad \text{weight}_{g^*} = \frac{\log (\#\text{deputies in group } g^*)}{\sum_{g \in \text{Groups}} \log (\#\text{deputies in group } g)}$$
#
# $$F1_{\text{log-weighted}} = \sum_{g \in \text{Groups}} \text{weight}_g \times F1(\text{class } g)$$
#
# This way :
# - We focus more on big groups (in a democracy, majority groups are the ones that matter).
# - Thanks to the log-proportion, smaller groups are not discarded (we won't evaluate the model only on its performance with LaREM).
#
# A good score is close to 1, a bad score is close to 0.
#
# Below, we compare $F1_{\text{non-weighted}}$, $F1_{\text{linear-weighted}}$ (same formula as above but without the logarithms), and the $F1_{\text{log-weighted}}$ we just presented.
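# The log-weighted average can be sketched in plain Python (binary F1 per class,
# then log-share weights; an illustration, not the challenge's `CustomF1Score`):

```python
import math

def f1(y_true, y_pred):
    # Binary F1 = 2*TP / (2*TP + FP + FN)
    tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def log_weighted_f1(per_class_true, per_class_pred, group_sizes):
    # Weights proportional to the log of each group's deputy count
    logs = [math.log(s) for s in group_sizes]
    weights = [l / sum(logs) for l in logs]
    return sum(w * f1(t, p)
               for w, t, p in zip(weights, per_class_true, per_class_pred))

# Two toy classes: a big group predicted perfectly, a small one predicted badly
true = [[1, 0, 1], [1, 1, 0]]
pred = [[1, 0, 1], [0, 0, 0]]
print(round(log_weighted_f1(true, pred, group_sizes=[300, 20]), 3))  # 0.656
```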
# +
# Basic average
non_weighted_score = np.mean(class_f1_scores)
# Linear weights average
linear_score = CustomF1Score(weights_type="linear")(y_test, y_pred)
# Log weights average : the metric that will be used in the RAMP challenge.
log_score = CustomF1Score(weights_type="log")(y_test, y_pred)
print(f"Non-weighted average of F1 scores: {non_weighted_score:.3f}")
print(f"Linear-weighted average of F1 scores: {linear_score:.3f}")
print(f"Log-weighted average of F1 scores: {log_score:.3f}")
# -
# We see that the log-weighted F1 is more optimistic about our model's performance than the non-weighted F1, while still being more pessimistic than the linear-weighted F1. It is a compromise between the two.
#
# On the graph below, we compare the scoring weights.
# +
plt.figure(figsize=(10,5))
plt.bar(ind-0.2, CustomF1Score(weights_type="linear").weights_, width=0.4)
plt.bar(ind+0.2, CustomF1Score(weights_type="log").weights_, width=0.4)
plt.plot(ind, .1*np.ones(ind.shape[0]), "--")
plt.ylabel('Weight')
plt.xlabel('Voting group')
plt.title("Scoring weights for each political group")
plt.xticks(ind, y.columns)
plt.legend(['Uniform', 'Linear', 'Log'])
plt.show()
# -
# ## Conclusion
#
# We found that the voting positions of political groups are, to an extent, predictable.
#
# Rivalry between political groups is the main driver of their votes. When a political group votes, it doesn't so much try to choose the best option for the nation as reaffirm its position: its ideology, its support of one group, its opposition to another.
#
# Paradoxically, we can say that this is reassuring for us citizens. It means that:
# - Deputies don't vote randomly, which is always good to know.
# - Deputies defend the positions they have been elected for.
# - Rivalry in the National Assembly means that there is a political debate, and this is what democracy is all about.
#
# We conclude this study with a broader understanding of the forces driving votes in the National Assembly.
| starting_kit.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Training a BERT sentiment analysis model
import nuclio
# ## Environment
# +
# # !pip install transformers==3.0.1
# # !pip install torch==1.6.0
# -
# ## Function
# +
#nuclio: start-code
# + colab={"base_uri": "https://localhost:8080/", "height": 71} colab_type="code" id="XpEvMc9v-hla" outputId="551914c8-e8e7-412a-92d0-2688ef4f47ea"
import os
import pandas as pd
from transformers import BertTokenizer, AdamW, get_linear_schedule_with_warmup, BertModel
import torch
import torch.nn as nn
from torch.utils import data
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
import numpy as np
import seaborn as sns
from collections import defaultdict
from mlrun.artifacts import PlotArtifact, ChartArtifact, TableArtifact
from mlrun.datastore import DataItem
from mlrun import MLClientCtx
# -
class BertSentimentClassifier(nn.Module):
def __init__(self, pretrained_model, n_classes):
super(BertSentimentClassifier, self).__init__()
self.bert = BertModel.from_pretrained(pretrained_model)
self.dropout = nn.Dropout(p=0.2)
self.out_linear = nn.Linear(self.bert.config.hidden_size, n_classes)
self.softmax = nn.Softmax(dim=1)
def forward(self, input_ids, attention_mask):
_, pooled_out = self.bert(
input_ids=input_ids,
attention_mask=attention_mask
)
out = self.dropout(pooled_out)
out = self.out_linear(out)
return self.softmax(out)
# + colab={} colab_type="code" id="LA_e_S8b8Qjo"
class ReviewsDataset(data.Dataset):
def __init__(self, review, target, tokenizer, max_len):
self.review = review
self.target = target
self.tokenizer = tokenizer
self.max_len = max_len
def __len__(self):
return len(self.review)
def __getitem__(self, item):
review = str(self.review[item])
enc = self.tokenizer.encode_plus(
review,
max_length=self.max_len,
add_special_tokens=True,
pad_to_max_length=True,
return_attention_mask=True,
return_token_type_ids=False,
return_tensors='pt',
truncation=True)
return {'input_ids': enc['input_ids'].squeeze(0),
'attention_mask': enc['attention_mask'].squeeze(0),
'targets': torch.tensor(self.target[item], dtype=torch.long)}
# + colab={} colab_type="code" id="4NVlWc-o8Qjh"
def score_to_sents(score):
if score <= 2:
return 0
if score == 3:
return 1
return 2
# -
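For reference, the mapping above collapses the 1–5 review scores into three sentiment classes (0 = negative, 1 = neutral, 2 = positive). A quick standalone check (the function is repeated here so the snippet runs on its own):

```python
def score_to_sents(score):
    # mirrors the cell above: scores 1-2 -> negative (0), 3 -> neutral (1), 4-5 -> positive (2)
    if score <= 2:
        return 0
    if score == 3:
        return 1
    return 2

labels = [score_to_sents(s) for s in [1, 2, 3, 4, 5]]
# labels == [0, 0, 1, 2, 2]
```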
def create_data_loader(df, tokenizer, max_len, batch_size):
dataset = ReviewsDataset(
review=df.content.to_numpy(),
target=df.sentiment.to_numpy(),
tokenizer=tokenizer,
max_len=max_len)
return data.DataLoader(dataset, batch_size=batch_size, num_workers=4)
def train_epoch(
model,
data_loader,
criterion,
optimizer,
scheduler,
n_examples,
device
):
model.train()
losses = []
correct_preds = 0
for i, d in enumerate(data_loader):
if i % 50 == 0:
print(f'batch {i + 1}/ {len(data_loader)}')
input_ids = d['input_ids'].to(device)
attention_mask = d['attention_mask'].to(device)
targets = d['targets'].to(device)
outputs = model(
input_ids=input_ids,
attention_mask=attention_mask
)
_, pred = torch.max(outputs, dim=1)
loss = criterion(outputs, targets)
correct_preds += torch.sum(pred == targets)
losses.append(loss.item())
loss.backward()
nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()
scheduler.step()
optimizer.zero_grad()
return (correct_preds.double() / n_examples).detach().cpu().numpy(), np.mean(losses)
def eval_model(
model,
data_loader,
criterion,
n_examples,
device
):
print('evaluation')
model = model.eval()
correct_preds = 0
losses = []
with torch.no_grad():
for i, d in enumerate(data_loader):
if i % 50 == 0:
print(f'batch {i + 1}/ {len(data_loader)}')
input_ids = d['input_ids'].to(device)
attention_mask = d['attention_mask'].to(device)
targets = d['targets'].to(device)
outputs = model(
input_ids=input_ids,
attention_mask=attention_mask
)
_, pred = torch.max(outputs, dim=1)
loss = criterion(outputs, targets)
correct_preds += torch.sum(pred == targets)
losses.append(loss.item())
return (correct_preds.double() / n_examples).detach().cpu().numpy(), np.mean(losses)
def eval_on_test(model_path, data_loader, device, n_examples, pretrained_model, n_classes):
model = BertSentimentClassifier(pretrained_model, n_classes).to(device)
model.load_state_dict(torch.load(model_path))
model.eval()
correct_preds = 0
with torch.no_grad():
for i, d in enumerate(data_loader):
if i % 50 == 0:
print(f'batch {i + 1}/ {len(data_loader)}')
input_ids = d['input_ids'].to(device)
attention_mask = d['attention_mask'].to(device)
targets = d['targets'].to(device)
outputs = model(
input_ids=input_ids,
attention_mask=attention_mask
)
_, pred = torch.max(outputs, dim=1)
correct_preds += torch.sum(pred == targets)
return correct_preds.double() / n_examples
def train_sentiment_analysis_model(context: MLClientCtx,
reviews_dataset: DataItem,
pretrained_model: str = 'bert-base-cased',
models_dir: str = 'models',
model_filename: str = 'bert_sentiment_analysis_model.pt',
n_classes: int = 3,
MAX_LEN: int = 128,
BATCH_SIZE: int = 16,
EPOCHS: int = 50,
random_state: int = 42):
# Check for CPU or GPU
device = torch.device('cuda:0') if torch.cuda.is_available() else torch.device('cpu')
base_path = os.path.abspath(context.artifact_path)
plots_path = os.path.join(base_path, 'plots')
data_path = os.path.join(base_path, 'data')
context.logger.info(f'Using {device}')
models_basepath = os.path.join(context.artifact_path, models_dir)
os.makedirs(models_basepath, exist_ok=True)
model_filepath = os.path.join(models_basepath, model_filename)
# Get dataset
df = reviews_dataset.as_df()
# Save score plot
df = df[['content', 'score']]
sns.distplot(df.score)
reviews_scores_artifact = context.log_artifact(PlotArtifact(f"reviews-scores", body=plt.gcf()),
target_path=f"{plots_path}/reviews-scores.html")
# Turn scores to sentiment label
df['sentiment'] = df['score'].apply(score_to_sents)
# Load bert tokenizer
tokenizer = BertTokenizer.from_pretrained(pretrained_model)
# Tokenize reviews
lens = [len(tokenizer.encode(df.loc[review]['content'])) for review in df.index]
max_length = max(lens)
context.logger.info(f'longest review: {max_length}')
plt.clf()
sns.distplot(lens)
reviews_lengths_artifact = context.log_artifact(PlotArtifact(f"reviews-lengths", body=plt.gcf()),
target_path=f"{plots_path}/reviews-lengths.html")
# Create training and validation datasets
df_train, df_test = train_test_split(df, test_size=0.2, random_state=random_state)
df_dev, df_test = train_test_split(df_test, test_size = 0.5, random_state=random_state)
# Create dataloaders for all datasets
train_loader = create_data_loader(df_train, tokenizer, MAX_LEN, BATCH_SIZE)
dev_loader = create_data_loader(df_dev, tokenizer, MAX_LEN, BATCH_SIZE)
test_loader = create_data_loader(df_test, tokenizer, MAX_LEN, BATCH_SIZE)
# Load the bert sentiment classifier base
model = BertSentimentClassifier(pretrained_model, n_classes=n_classes).to(device)
# training
optimizer = AdamW(model.parameters(), lr=2e-5, correct_bias=False)
total_steps = len(train_loader) * EPOCHS
scheduler = get_linear_schedule_with_warmup(optimizer=optimizer, num_warmup_steps=0, num_training_steps=total_steps)
criterion = nn.CrossEntropyLoss().to(device)
history = defaultdict(list)
best_acc = train_acc = train_loss = dev_acc = dev_loss = 0
context.logger.info('Started training the model')
for epoch in range(EPOCHS):
train_acc, train_loss = train_epoch(
model,
train_loader,
criterion,
optimizer,
scheduler,
len(df_train),
device
)
dev_acc, dev_loss = eval_model(
model,
dev_loader,
criterion,
len(df_dev),
device
)
# Append results to history
history['train_acc'].append(train_acc)
history['train_loss'].append(train_loss)
history['dev_acc'].append(dev_acc)
history['dev_loss'].append(dev_loss)
context.logger.info(f'Epoch: {epoch + 1}/{EPOCHS}: Train loss: {train_loss}, accuracy: {train_acc} Val loss: {dev_loss}, accuracy: {dev_acc}')
if dev_acc > best_acc:
torch.save(model.state_dict(), model_filepath)
context.logger.info(f'Updating model, the current model is better than the previous best ({dev_acc} vs. {best_acc}).')
best_acc = dev_acc
context.logger.info('Finished training, testing and logging results')
chart = ChartArtifact('summary')
chart.header = ['epoch', 'accuracy', 'val_accuracy', 'loss', 'val_loss']
for i in range(len(history['train_acc'])):
chart.add_row([i + 1, history['train_acc'][i],
history['dev_acc'][i],
history['train_loss'][i],
history['dev_loss'][i]])
summary = context.log_artifact(chart, local_path=os.path.join('plots', 'summary.html'))
history_df = pd.DataFrame(history)
history_table = TableArtifact('history', df=history_df)
history_artifact = context.log_artifact(history_table, target_path=os.path.join(data_path, 'history.csv'))
test_acc = eval_on_test(model_filepath, test_loader, device, len(df_test), pretrained_model, n_classes)
context.logger.info(f'Test dataset accuracy: {test_acc}')
results = {'train_accuracy': train_acc,
'train_loss': train_loss,
'best_accuracy': best_acc,
'validation_accuracy': dev_acc,
'validation_loss': dev_loss}
context.log_results(results)
context.log_model(key='bert_sentiment_analysis_model',
model_file=model_filename,
model_dir=models_dir,
artifact_path=context.artifact_path,
upload=False,
labels={'framework': 'pytorch',
'category': 'nlp',
'action': 'sentiment_analysis'},
metrics=context.results,
parameters={'pretrained_model': pretrained_model,
'MAX_LEN': MAX_LEN,
'BATCH_SIZE': BATCH_SIZE,
'EPOCHS': EPOCHS,
'random_state': random_state},
extra_data={'reviews_scores': reviews_scores_artifact,
'reviews_length': reviews_lengths_artifact,
'training_history': history_artifact})
# +
#nuclio: end-code
# -
# ## Test locally
# +
from mlrun import NewTask, run_local
import os
reviews_datafile = os.path.join(os.path.abspath('..'), 'data', 'reviews.csv')
pretrained_model = 'bert-base-cased'
task = NewTask(name = "train-sentiment-analysis",
params={'pretrained_model': pretrained_model,
'EPOCHS': 1},
inputs={'reviews_dataset': reviews_datafile})
# -
lrun = run_local(task, handler=train_sentiment_analysis_model,
artifact_path = './artifacts')
# ## Deploy to cluster
# +
import mlrun
import os
fn = mlrun.code_to_function(name='train_sentiment_analysis',
project="stocks-" + os.getenv('V3IO_USERNAME'),
handler='train_sentiment_analysis_model', kind='job', image="mlrun/ml-models-gpu")
fn.apply(mlrun.platforms.v3io_cred())
fn.apply(mlrun.mount_v3io())
fn.spec.build.commands = ['pip install transformers==3.0.1', 'pip install torch==1.6.0']
# fn.gpus(1) # Make sure you have available GPU
# fn.export('bert_sentiment_classification.yaml')
# -
run = fn.with_code().run(task, artifact_path=os.path.dirname(os.getcwd()))
| stock-analysis/code/00-train-sentiment-analysis-model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %load_ext autoreload
# %autoreload 2
import pandas as pd
import numpy as np
# +
df = pd.read_csv('../data/raw/train.csv')
df.head()
# -
df.isnull().sum()
df[df.isna().any(axis=1)]
df.dropna(inplace=True)
df.describe(include=[object])  # np.object is deprecated in newer NumPy; plain object selects string columns
df['question1'].apply(lambda x: len(x)).max()
df['question2'].apply(lambda x: len(x)).max()
| notebooks/data-exploration.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Package load
# +
import os
import time
import numpy as np
import matplotlib.pyplot as plt
import PIL
import torch
print('pytorch version: {}'.format(torch.__version__))
print('GPU available: {}'.format(torch.cuda.is_available()))
device = "cuda" if torch.cuda.is_available() else "cpu"  # pick the device based on GPU availability
# -
# ## Hyperparameter settings
batch_size = 16 # Mini-batch size
num_epochs = 50
learning_rate = 0.0003
# ## Data preprocessing (VOC2007 dataset)
#
# - Download the VOC2007 dataset from the link in the references. The file should be named VOCtrainval_06-Nov-2007.tar and is roughly 460 MB. Extract the VOC2007 folder and adjust the path below.
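The expected folder layout after extraction can be sanity-checked with a small helper; a sketch, assuming the standard VOC2007 directory names used later in this notebook:

```python
import os

def check_voc2007_layout(data_root):
    """Return a dict mapping each path this notebook expects to whether it exists."""
    expected = [
        os.path.join(data_root, "JPEGImages"),
        os.path.join(data_root, "SegmentationClass"),
        os.path.join(data_root, "ImageSets", "Segmentation", "train.txt"),
        os.path.join(data_root, "ImageSets", "Segmentation", "val.txt"),
    ]
    return {p: os.path.exists(p) for p in expected}

# After extracting VOCtrainval_06-Nov-2007.tar, point data_root at the VOC2007 folder:
# status = check_voc2007_layout("../data/VOC2007")
```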
# +
# Dataset v3
num_classes = 22 # background + 20 object classes + border (matches len(voc_classes) below)
data_root = "../data/VOC2007" # Dataset location
# +
# for reference, not used in this notebook
voc_classes = ('background', # always index 0
'aeroplane', 'bicycle', 'bird', 'boat', # indices 1, 2, 3, 4
'bottle', 'bus', 'car', 'cat', 'chair', # 5, ...
'cow', 'diningtable', 'dog', 'horse',
'motorbike', 'person', 'pottedplant',
'sheep', 'sofa', 'train', 'tvmonitor', # ..., 21
'border') # but border has raw index 255 (!) (remove if not needed)
assert num_classes == len(voc_classes)
# +
# Class below reads segmentation dataset in VOC2007 compatible format.
class PascalVOCDataset(torch.utils.data.Dataset):
"""Pascal VOC2007 or compatible dataset"""
def __init__(self, num_classes, list_file, img_dir, mask_dir, transform=None):
self.num_classes = num_classes
self.images = open(list_file, "rt").read().split("\n")[:-1]
self.transform = transform
self.img_extension = ".jpg"
self.mask_extension = ".png"
self.image_root_dir = img_dir
self.mask_root_dir = mask_dir
def __len__(self):
return len(self.images)
def __getitem__(self, index):
name = self.images[index]
image_path = os.path.join(self.image_root_dir, name + self.img_extension)
mask_path = os.path.join(self.mask_root_dir, name + self.mask_extension)
image = self.load_image(path=image_path)
gt_mask = self.load_mask(path=mask_path)
return torch.FloatTensor(image), torch.LongTensor(gt_mask)
def load_image(self, path=None):
raw_image = PIL.Image.open(path)
raw_image = np.transpose(raw_image.resize((224, 224)), (2,1,0))
imx_t = np.array(raw_image, dtype=np.float32)/255.0
return imx_t
def load_mask(self, path=None):
raw_image = PIL.Image.open(path)
raw_image = raw_image.resize((224, 224))
imx_t = np.array(raw_image)
imx_t[imx_t==255] = self.num_classes-1 # convert VOC border into last class
return imx_t
# -
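The border remapping inside `load_mask` can be illustrated in isolation: with the border treated as the last of `n_classes` classes, raw VOC border pixels (value 255) become index `n_classes - 1`. A toy sketch (`n_classes = 21` here is purely illustrative; use whatever `num_classes` the notebook is configured with):

```python
import numpy as np

n_classes = 21  # illustrative value, not necessarily the notebook's setting
mask = np.array([[0, 5, 255],
                 [14, 255, 0]])   # toy mask containing VOC border pixels (255)
mask[mask == 255] = n_classes - 1  # border pixels become the last class index
# mask is now [[0, 5, 20], [14, 20, 0]]
```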
# ## Dataset definition and DataLoader setup
# +
train_path = os.path.join(data_root, 'ImageSets/Segmentation/train.txt')
val_path = os.path.join(data_root, 'ImageSets/Segmentation/val.txt')
img_dir = os.path.join(data_root, "JPEGImages")
mask_dir = os.path.join(data_root, "SegmentationClass")
# -
# - **Create train and validation datasets**
# +
train_dataset = PascalVOCDataset(num_classes = num_classes,
list_file = train_path,
img_dir = img_dir,
mask_dir=mask_dir)
val_dataset = PascalVOCDataset(num_classes = num_classes,
list_file=val_path,
img_dir=img_dir,
mask_dir=mask_dir)
# +
train_loader = torch.utils.data.DataLoader(train_dataset,
batch_size = batch_size,
shuffle=True)
val_loader = torch.utils.data.DataLoader(val_dataset,
batch_size = batch_size,
shuffle=False)
# -
print('Train Dataset:')
print(' length:', len(train_dataset))
print("------------------------")
print('Validation Dataset:')
print(' length:', len(val_dataset))
# ### Visualizing a data sample (show example image and mask)
# +
image, mask = train_dataset[10]
image.transpose_(0, 2)
print('image shape:', list(image.shape))
print('mask shape: ', list(mask.shape))
fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, figsize=(8,4))
ax1.imshow(image)
ax2.imshow(mask)
plt.show()
# +
'''
The unique values in the mask array correspond to the following labels:
0 : background
5 : bottle
20 : tvmonitor
21 : border
'''
set(mask.numpy().reshape(-1))
# -
# ## Network design (model trained from scratch, without pretrained weights)
# - SegNet
# 
# +
import torch
import torch.nn as nn
class SegNet(nn.Module):
def __init__(self, num_classes=12):
super(SegNet, self).__init__()
def CBR(in_channels, out_channels, kernel_size=3, stride=1, padding=1):
layers = []
layers += [nn.Conv2d(in_channels=in_channels, out_channels=out_channels,
kernel_size=kernel_size, stride=stride, padding=padding)]
layers += [nn.BatchNorm2d(num_features=out_channels)]
layers += [nn.ReLU()]
cbr = nn.Sequential(*layers)
return cbr
# conv1
self.cbr1_1 = CBR(3, 64, 3, 1, 1)
self.cbr1_2 = CBR(64, 64, 3, 1, 1)
self.pool1 = nn.MaxPool2d(2, stride=2, ceil_mode=True, return_indices=True)
# conv2
self.cbr2_1 = CBR(64, 128, 3, 1, 1)
self.cbr2_2 = CBR(128, 128, 3, 1, 1)
self.pool2 = nn.MaxPool2d(2, stride=2, ceil_mode=True, return_indices=True)
# conv3
self.cbr3_1 = CBR(128, 256, 3, 1, 1)
self.cbr3_2 = CBR(256, 256, 3, 1, 1)
self.cbr3_3 = CBR(256, 256, 3, 1, 1)
self.pool3 = nn.MaxPool2d(2, stride=2, ceil_mode=True, return_indices=True)
# conv4
self.cbr4_1 = CBR(256, 512, 3, 1, 1)
self.cbr4_2 = CBR(512, 512, 3, 1, 1)
self.cbr4_3 = CBR(512, 512, 3, 1, 1)
self.pool4 = nn.MaxPool2d(2, stride=2, ceil_mode=True, return_indices=True)
# conv5
self.cbr5_1 = CBR(512, 512, 3, 1, 1)
self.cbr5_2 = CBR(512, 512, 3, 1, 1)
self.cbr5_3 = CBR(512, 512, 3, 1, 1)
self.pool5 = nn.MaxPool2d(2, stride=2, ceil_mode=True, return_indices=True)
# deconv5
self.unpool5 = nn.MaxUnpool2d(2, stride=2)
self.dcbr5_3 = CBR(512, 512, 3, 1, 1)
self.dcbr5_2 = CBR(512, 512, 3, 1, 1)
self.dcbr5_1 = CBR(512, 512, 3, 1, 1)
# deconv4
self.unpool4 = nn.MaxUnpool2d(2, stride=2)
self.dcbr4_3 = CBR(512, 512, 3, 1, 1)
self.dcbr4_2 = CBR(512, 512, 3, 1, 1)
self.dcbr4_1 = CBR(512, 256, 3, 1, 1)
# deconv3
self.unpool3 = nn.MaxUnpool2d(2, stride=2)
self.dcbr3_3 = CBR(256, 256, 3, 1, 1)
self.dcbr3_2 = CBR(256, 256, 3, 1, 1)
self.dcbr3_1 = CBR(256, 128, 3, 1, 1)
# deconv2
self.unpool2 = nn.MaxUnpool2d(2, stride=2)
self.dcbr2_2 = CBR(128, 128, 3, 1, 1)
self.dcbr2_1 = CBR(128, 64, 3, 1, 1)
# deconv1
self.unpool1 = nn.MaxUnpool2d(2, stride=2)
self.dcbr1_2 = CBR(64, 64, 3, 1, 1)
self.dcbr1_1 = CBR(64, 64, 3, 1, 1)
self.score_fr = nn.Conv2d(64, num_classes, kernel_size = 1)
def forward(self, x):
h = self.cbr1_1(x)
h = self.cbr1_2(h)
dim1 = h.size()
h, pool1_indices = self.pool1(h)
h = self.cbr2_1(h)
h = self.cbr2_2(h)
dim2 = h.size()
h, pool2_indices = self.pool2(h)
h = self.cbr3_1(h)
h = self.cbr3_2(h)
h = self.cbr3_3(h)
dim3 = h.size()
h, pool3_indices = self.pool3(h)
h = self.cbr4_1(h)
h = self.cbr4_2(h)
h = self.cbr4_3(h)
dim4 = h.size()
h, pool4_indices = self.pool4(h)
h = self.cbr5_1(h)
h = self.cbr5_2(h)
h = self.cbr5_3(h)
dim5 = h.size()
h, pool5_indices = self.pool5(h)
h = self.unpool5(h, pool5_indices, output_size = dim5)
h = self.dcbr5_3(h)
h = self.dcbr5_2(h)
h = self.dcbr5_1(h)
h = self.unpool4(h, pool4_indices, output_size = dim4)
h = self.dcbr4_3(h)
h = self.dcbr4_2(h)
h = self.dcbr4_1(h)
h = self.unpool3(h, pool3_indices, output_size = dim3)
h = self.dcbr3_3(h)
h = self.dcbr3_2(h)
h = self.dcbr3_1(h)
h = self.unpool2(h, pool2_indices, output_size = dim2)
h = self.dcbr2_2(h)
h = self.dcbr2_1(h)
h = self.unpool1(h, pool1_indices, output_size = dim1)
h = self.dcbr1_2(h)
h = self.dcbr1_1(h)
h = self.score_fr(h)
return torch.sigmoid(h)
# -
from torchsummary import summary
# +
# Feed a random input through the implemented model to check that the output comes out correctly
model = SegNet(num_classes=21)
x = torch.randn([1, 3, 224, 224])
print("input shape : ", x.shape)
# -
out = model(x)
#print("output shape : ", out.size())
# +
from torch.autograd import Variable
model.eval()
for idx in range(10):
input = Variable(torch.rand([3, 224, 224]).unsqueeze(0), requires_grad=False)
start_time = time.time()
out = model(input)
if torch.cuda.is_available(): torch.cuda.synchronize()  # synchronizing is only meaningful (and valid) on GPU
time_taken = time.time() - start_time
print("Run-Time: %.4f s" % time_taken)
# -
from torchsummary import summary
summary(model.to(device), (3, 224, 224))
# ## Refactored into more readable code (DeconvNet)
import torch
import torch.nn as nn
class DeconvNet(nn.Module):
def __init__(self, num_classes=21):
super(DeconvNet, self).__init__()
def CBR(in_channels, out_channels, kernel_size=3, stride=1, padding=1):
layers = []
layers += [nn.Conv2d(in_channels=in_channels, out_channels=out_channels,
kernel_size=kernel_size, stride=stride, padding=padding)]
layers += [nn.BatchNorm2d(num_features=out_channels)]
layers += [nn.ReLU()]
cbr = nn.Sequential(*layers)
return cbr
def DCB(in_channels, out_channels, kernel_size=3, stride=1, padding=1):
layers = []
layers += [nn.ConvTranspose2d(in_channels=in_channels, out_channels=out_channels,
kernel_size=kernel_size, stride=stride, padding=padding)]
layers += [nn.BatchNorm2d(num_features=out_channels)]
cbr = nn.Sequential(*layers)
return cbr
# conv1
self.cbr1_1 = CBR(3, 64, 3, 1, 1)
self.cbr1_2 = CBR(64, 64, 3, 1, 1)
self.pool1 = nn.MaxPool2d(2, stride=2, ceil_mode=True, return_indices=True)
# conv2
self.cbr2_1 = CBR(64, 128, 3, 1, 1)
self.cbr2_2 = CBR(128, 128, 3, 1, 1)
self.pool2 = nn.MaxPool2d(2, stride=2, ceil_mode=True, return_indices=True)
# conv3
self.cbr3_1 = CBR(128, 256, 3, 1, 1)
self.cbr3_2 = CBR(256, 256, 3, 1, 1)
self.cbr3_3 = CBR(256, 256, 3, 1, 1)
self.pool3 = nn.MaxPool2d(2, stride=2, ceil_mode=True, return_indices=True)
# conv4
self.cbr4_1 = CBR(256, 512, 3, 1, 1)
self.cbr4_2 = CBR(512, 512, 3, 1, 1)
self.cbr4_3 = CBR(512, 512, 3, 1, 1)
self.pool4 = nn.MaxPool2d(2, stride=2, ceil_mode=True, return_indices=True)
# conv5
self.cbr5_1 = CBR(512, 512, 3, 1, 1)
self.cbr5_2 = CBR(512, 512, 3, 1, 1)
self.cbr5_3 = CBR(512, 512, 3, 1, 1)
self.pool5 = nn.MaxPool2d(2, stride=2, ceil_mode=True, return_indices=True)
# fc1
self.fc6 = CBR(512, 4096, 1, 1, 0)
self.drop6 = nn.Dropout2d()
# fc2
self.fc7 = CBR(4096, 4096, 1, 1, 0)
self.drop7 = nn.Dropout2d()
# Deconv
self.dcb6 = DCB(4096, 512, 7, 1, 3)
# Deconv5
self.unpool5 = nn.MaxUnpool2d(2, stride=2)
self.dcb5_3 = DCB(512, 512, 3, 1, 1)
self.dcb5_2 = DCB(512, 512, 3, 1, 1)
self.dcb5_1 = DCB(512, 512, 3, 1, 1)
# Deconv4
self.unpool4 = nn.MaxUnpool2d(2, stride=2)
self.dcb4_3 = DCB(512, 512, 3, 1, 1)
self.dcb4_2 = DCB(512, 512, 3, 1, 1)
self.dcb4_1 = DCB(512, 256, 3, 1, 1)
# Deconv3
self.unpool3 = nn.MaxUnpool2d(2, stride=2)
self.dcb3_3 = DCB(256, 256, 3, 1, 1)
self.dcb3_2 = DCB(256, 256, 3, 1, 1)
self.dcb3_1 = DCB(256, 128, 3, 1, 1)
# Deconv2
self.unpool2 = nn.MaxUnpool2d(2, stride=2)
self.dcb2_2 = DCB(128, 128, 3, 1, 1)
self.dcb2_1 = DCB(128, 64, 3, 1, 1)
# Deconv1
self.unpool1 = nn.MaxUnpool2d(2, stride=2)
self.dcb1_2 = DCB(64, 64, 3, 1, 1)
self.dcb1_1 = DCB(64, 64, 3, 1, 1)
self.score_fr = nn.Conv2d(64, num_classes, kernel_size = 1)
def _initialize_weights(self):
for m in self.modules():
if isinstance(m, nn.Conv2d):
torch.nn.init.xavier_uniform_(m.weight)
# xavier_uniform_ cannot be applied to the bias vector:
# ValueError: Fan in and fan out can not be computed for tensor with fewer than 2 dimensions
if m.bias is not None:
torch.nn.init.zeros_(m.bias)
def forward(self, x):
h = self.cbr1_1(x)
h = self.cbr1_2(h)
h, pool1_indices = self.pool1(h)
h = self.cbr2_1(h)
h = self.cbr2_2(h)
h, pool2_indices = self.pool2(h)
h = self.cbr3_1(h)
h = self.cbr3_2(h)
h = self.cbr3_3(h)
h, pool3_indices = self.pool3(h)
h = self.cbr4_1(h)
h = self.cbr4_2(h)
h = self.cbr4_3(h)
h, pool4_indices = self.pool4(h)
h = self.cbr5_1(h)
h = self.cbr5_2(h)
h = self.cbr5_3(h)
h, pool5_indices = self.pool5(h)
h = self.fc6(h)
h = self.drop6(h)
h = self.fc7(h)
h = self.drop7(h)
h = self.dcb6(h)
h = self.unpool5(h, pool5_indices)
h = self.dcb5_3(h)
h = self.dcb5_2(h)
h = self.dcb5_1(h)
h = self.unpool4(h, pool4_indices)
h = self.dcb4_3(h)
h = self.dcb4_2(h)
h = self.dcb4_1(h)
h = self.unpool3(h, pool3_indices)
h = self.dcb3_3(h)
h = self.dcb3_2(h)
h = self.dcb3_1(h)
h = self.unpool2(h, pool2_indices)
h = self.dcb2_2(h)
h = self.dcb2_1(h)
h = self.unpool1(h, pool1_indices)
h = self.dcb1_2(h)
h = self.dcb1_1(h)
h = self.score_fr(h)
return torch.sigmoid(h)
# +
# Feed a random input through the implemented model to check that the output comes out correctly
SegNet_model = SegNet(num_classes=21)
x = torch.randn([1, 3, 224, 224])
print("input shape : ", x.shape)
out = SegNet_model(x)
print("output shape : ", out.size())
# -
out = SegNet_model(x)
# +
from torch.autograd import Variable
SegNet_model.eval()
for idx in range(10):
input = Variable(torch.rand([3, 224, 224]).unsqueeze(0), requires_grad=False)
start_time = time.time()
out = SegNet_model(input)
if torch.cuda.is_available(): torch.cuda.synchronize()  # synchronizing is only meaningful (and valid) on GPU
time_taken = time.time() - start_time
print("Run-Time: %.4f s" % time_taken)
# -
# ## Training and validation functions
def train(num_epochs, model, data_loader, criterion, optimizer, saved_dir, val_every, device):
print('Start training..')
best_loss = 9999999
for epoch in range(num_epochs):
for step, (image, mask) in enumerate(data_loader):
image = image.type(torch.float32)
mask = mask.type(torch.long)
print(image[0].shape)
print('------------')
print(mask[0].shape)
image, mask = image.to(device), mask.to(device)
outputs = model(image)
print('------------')
print(outputs[0].shape)
loss = criterion(outputs, mask)
optimizer.zero_grad()
loss.backward()
optimizer.step()
if (step + 1) % 25 == 0:
print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'.format(
epoch+1, num_epochs, step+1, len(train_loader), loss.item()))
if (epoch + 1) % val_every == 0:
avrg_loss = validation(epoch + 1, model, val_loader, criterion, device)
if avrg_loss < best_loss:
print('Best performance at epoch: {}'.format(epoch + 1))
print('Save model in', saved_dir)
best_loss = avrg_loss
save_model(model, saved_dir)
def validation(epoch, model, data_loader, criterion, device):
print('Start validation #{}'.format(epoch))
model.eval()
with torch.no_grad():
total_loss = 0
cnt = 0
for step, (image, mask) in enumerate(data_loader):
image = image.type(torch.float32)
mask = mask.type(torch.long)
image, mask = image.to(device), mask.to(device)
outputs = model(image)
loss = criterion(outputs, mask)
total_loss += loss
cnt += 1
avrg_loss = total_loss / cnt
print('Validation #{} Average Loss: {:.4f}'.format(epoch, avrg_loss))
model.train()
return avrg_loss
# ## Model saving function
def save_model(model, saved_dir, file_name='model.pt'):
check_point = {
'net': model.state_dict()
}
output_path = os.path.join(saved_dir, file_name)
torch.save(check_point, output_path)  # save the state_dict checkpoint built above, not the full model object
# ## Model creation, loss function, and optimizer
#
# - [The standard loss function for multi-class classification: `torch.nn.CrossEntropyLoss`](http://www.gisdeveloper.co.kr/?p=8668)
# +
# How cross_entropy works:
output = torch.Tensor(
[
[0.8982, 0.805, 0.6393, 0.9983, 0.5731, 0.0469, 0.556, 0.1476, 0.8404, 0.5544],
[0.9457, 0.0195, 0.9846, 0.3231, 0.1605, 0.3143, 0.9508, 0.2762, 0.7276, 0.4332]
]
)
target = torch.LongTensor([1, 5])  # class-index targets must be integers
criterion = nn.CrossEntropyLoss()
loss = criterion(output, target)
print(loss) # tensor(2.3519)
# +
import torch.nn.functional as F
def criterion(input, target, weight=None, size_average=True):
'''
cross_entropy2d
'''
n, c, h, w = input.size()
nt, ht, wt = target.size()
# Handle inconsistent size between input and target
if h != ht or w != wt: # upsample labels whenever either spatial dimension differs
input = F.interpolate(input, size=(ht, wt), mode="bilinear", align_corners=True)
input = input.transpose(1, 2).transpose(2, 3).contiguous().view(-1, c)
target = target.view(-1).long() # explicitly cast the flattened target to Long
# print('input : {}'.format(input.shape))
# print('target : {}'.format(target.shape))
# print('first target index : {}'.format(target[0]))
loss = F.cross_entropy(
input, target, weight=weight, size_average=size_average, ignore_index=250
)
return loss
# -
# cross_entropy2d() test
model = SegNet(num_classes=num_classes) # fcn32 was never defined in this notebook; use the SegNet defined above
input = torch.as_tensor(np.transpose(image.numpy(), [2, 1, 0])).reshape([1, 3, 224, 224])
out = model(input)
criterion(out, mask.reshape([1, 224, 224]))
# +
torch.manual_seed(7777)
model = SegNet(num_classes=num_classes) # fcn32 was never defined; train the SegNet from above
model = model.to(device)
optimizer = torch.optim.SGD(params = model.parameters(), lr = learning_rate, weight_decay=1e-6)
val_every = 1
saved_dir = './saved/SegNet'
# -
# ## Training
train(num_epochs, model, train_loader, criterion, optimizer, saved_dir, val_every, device)
# ## Test
# ## 저장된 model 불러오기
# ## Reference
# - [dataloader using VOC2007 Dataset](https://marcinbogdanski.github.io/ai-sketchpad/PyTorchNN/1630_PT_SegNet_VOC2007.html)
| A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation (SegNet) Review/code/.ipynb_checkpoints/SegNet (VOC Format)-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %config Application.log_level="INFO"
from fitESPconstrained import *
import ase.io
import parmed as pmd
from parmed import gromacs
from insertHbyList import insertHbyList
import warnings
import pandas as pd
import logging
import sys
pmd.__file__
pd.set_option('precision', 3)
logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
logging.info("Started")
# +
implicitHbondingPartners = {'CD4':1,'CD3':1,'CA2':2,'CA3':2,'CB2':2,'CB3':2}
infile_pdb = 'sandbox/system100.pdb'
infile_top = 'sandbox/system100.lean.top'
# +
ua_ase_struct = ase.io.read(infile_pdb)
ua_pmd_struct = pmd.load_file(infile_pdb)
with warnings.catch_warnings():
warnings.simplefilter("ignore")
ua_pmd_top = gromacs.GromacsTopologyFile(infile_top,parametrize=False)
# throws some warnings on angle types, does not matter for bonding info
# if error thrown, just try to "reduce" .top as far as possible
# warnings suppressed as shown at
# https://docs.python.org/2/library/warnings.html
ua_pmd_top.strip(':SOL,CL') # strip water and electrolyte from system (if not yet done in .top)
ua_pmd_top.box = ua_pmd_struct.box # Needed because .pdb contains box info
ua_pmd_top.positions = ua_pmd_struct.positions
ua_names = [ a.name for a in ua_pmd_top.atoms ]
ua_residues = [ a.residue.name for a in ua_pmd_top.atoms ]
aa_ase_struct, aa_pmd_struct, aa_names, aa_residues = \
insertHbyList(ua_ase_struct,ua_pmd_top,
implicitHbondingPartners,1.0)
ua_count = len(ua_ase_struct) # united atoms structure
aa_count = len(aa_ase_struct) # all atoms structure
ua_ase_index = np.arange(ua_count)
aa_ase_index = np.arange(aa_count)
aa_atom_residue_list = list(zip(aa_names,aa_residues))
aa_ase_index = range(aa_count)
aa_ase2pmd = dict(zip(aa_ase_index,aa_atom_residue_list))
aa_pmd2ase = dict(zip(aa_atom_residue_list,aa_ase_index))
ua_atom_residue_list = list(zip(ua_names,ua_residues))
ua_ase_index = range(ua_count)
ua_ase2pmd = dict(zip(ua_ase_index,ua_atom_residue_list))
ua_pmd2ase = dict(zip(ua_atom_residue_list,ua_ase_index))
# -
ua_pmd_struct.atoms[0].cgnr = 2
ua_pmd_struct.atoms[0].cgnr
ua_pmd_top.atoms[0].cgnr = 2
ua_pmd_top.save('test.top',overwrite=True)
ua_pmd2ase
# ## United-Atom fit
A_horton, B_horton, C_horton, N_horton = read_horton_cost_function(
file_name = 'sandbox/system100.cost_ua.h5')
### Charge Groups:
# read in all charge groups and construct the corresponding constraints
cg2ase, cg2cgtype, ncgtypes = read_AtomName_ChargeGroup(
file_name = 'sandbox/atoms_in_charge_group.csv',ase2pmd=ua_ase2pmd)
ua_pmd2ase[('CA1','OXO0')]
# +
#ase2cg_list = []
ase2cg = dict([(idx, cgnr+1) for cgnr,cg in enumerate(cg2ase) for idx in cg])
# -
ase2cg
cg2cgtype
ncgtypes
cg_q = read_ChargeGroup_TotalCharge(
file_name = 'sandbox/charge_group_total_charge.csv')
cg_q
# loop over the set of charge groups (each charge group occurs only once)
charges = [ cg_q[cg] for cg in cg2cgtype ]
charges
D_matrix_cg, q_vector_cg = constructChargegroupConstraints(
chargeGroups = cg2ase, N = N_horton, q = charges, debug=True)
D_matrix_cg.shape
q_vector_cg.shape
### Same Charged Atoms
sym2ase = read_SameChargedAtoms(
file_name='sandbox/atoms_of_same_charge.csv',
ase2pmd=ua_ase2pmd)
sym2ase
D_matrix_sym, q_vector_sym = constructPairwiseSymmetryConstraints(
charges = sym2ase, N = N_horton, symmetry = 1.0, debug = True)
for i, r in enumerate(D_matrix_sym):
#r[r == -1]
print(i, ": =1 ", np.nonzero(r == 1))
print(i, ": =-1 ", np.nonzero(r == -1))
#print(np.nonzero(r == -1))
D_matrix_qtot, q_vector_qtot = constructTotalChargeConstraint(charge = 6.0,
N = N_horton)
D_matrix_qtot.shape
q_vector_qtot.shape
D_matrix_all, q_vector_all = concatenated_constraints(
D_matrices = [D_matrix_cg,D_matrix_sym,D_matrix_qtot],
q_vectors = [q_vector_cg,q_vector_sym,q_vector_qtot])
D_matrix_all.shape
np.linalg.matrix_rank(D_matrix_all)
D_matrix_all.shape
D_matrix_all[0].shape
#from numpy.linalg import matrix_rank
def construct_D_of_full_rank(D,q):
D_LI=[D[0]]
q_LI=[q[0]]
for i in range(D.shape[0]):
tmp=[]
for r in D_LI:
tmp.append(r)
tmp.append(D[i]) # set tmp = D_LI + [D[i]]
if np.linalg.matrix_rank(tmp)>len(D_LI): # test whether D[i] is linearly independent of all (row) vectors in D_LI
D_LI.append(D[i]) # note that matrix_rank does not require a square matrix
q_LI.append(q[i])
return np.array(D_LI), np.array(q_LI) # return the linearly independent (row) vectors and their matching q entries
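A quick sanity check of this row reduction on a toy constraint system with one redundant row (the helper is repeated here in compact form so the snippet runs standalone):

```python
import numpy as np

def construct_D_of_full_rank(D, q):
    # keep only rows of D (and the matching entries of q) that are linearly independent
    D_LI, q_LI = [D[0]], [q[0]]
    for i in range(1, D.shape[0]):
        if np.linalg.matrix_rank(D_LI + [D[i]]) > len(D_LI):
            D_LI.append(D[i])
            q_LI.append(q[i])
    return np.array(D_LI), np.array(q_LI)

D = np.array([[1., 0., 0.],
              [0., 1., 0.],
              [1., 1., 0.]])  # third row = sum of the first two, i.e. redundant
q = np.array([1., 2., 3.])
D_fr, q_fr = construct_D_of_full_rank(D, q)
# D_fr has shape (2, 3); the redundant constraint row was dropped
```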
### Unconstrained Minimization
X_unconstrained, A_unconstrained, B_unconstrained = \
unconstrainedMinimize(A_matrix = A_horton,
b_vector = B_horton,
C_scalar = C_horton,
debug = True)
A_unconstrained.shape
X_unconstrained.shape
A_horton.shape
### Constrained Minimization
X_qtot_constraint, A_qtot_constraint, B_qtot_constraint = \
constrainedMinimize(A_matrix = A_horton,
b_vector = B_horton,
C_scalar = C_horton,
D_matrix = D_matrix_qtot,
q_vector = q_vector_qtot,
debug = True)
D_matrix_all_fr, q_vector_all_fr = construct_D_of_full_rank(D_matrix_all,q_vector_all)
D_matrix_all_fr.shape
### Constrained Minimization
X, A, B = constrainedMinimize(A_matrix = A_horton,
b_vector = B_horton,
C_scalar = C_horton,
D_matrix = D_matrix_all_fr,
q_vector = q_vector_all_fr,
debug = True)
X
D_matrix_cg.shape
D_matrix_all.shape
D_matrix_all_fr.shape
A.shape
X.shape
# +
logging.info('Results:')
#prevent scientific notation and make the prints more readable
np.set_printoptions(precision=3)
np.set_printoptions(suppress=True)
logging.info('unconstrained charges {}:\n {}\ncharge sum = {}\n'.format( X_unconstrained[:N_horton].T.shape,
X_unconstrained[:N_horton].T,
X_unconstrained[:N_horton].T.sum() ))
logging.info('qtot constraint charges {}:\n {}\ncharge sum = {}\n'.format( X_qtot_constraint[:N_horton].T.shape,
X_qtot_constraint[:N_horton].T,
X_qtot_constraint[:N_horton].T.sum() ))
logging.info('constrained charges {}:\n {}\ncharge sum = {}\n'.format( X[:N_horton].T.shape,
X[:N_horton].T,
X[:N_horton].T.sum() ))
logging.info('Lagrange multipliers {}:\n {}'.format( X[N_horton:].T.shape,
X[N_horton:].T ) )
### test the results
logging.info("value of cost function, unconstrained: {}".format(
(np.dot(X_unconstrained.T, np.dot(A_unconstrained, X_unconstrained)) - 2*np.dot(B_unconstrained.T, X_unconstrained) - C_horton) ) )
logging.info("value of cost function, qtot constrained: {}".format(
(np.dot(X_qtot_constraint.T, np.dot(A_qtot_constraint, X_qtot_constraint)) - 2*np.dot(B_qtot_constraint.T, X_qtot_constraint) - C_horton) ) )
logging.info("value of cost function, fully constrained: {}".format(
(np.dot(X.T, np.dot(A, X)) - 2*np.dot(B.T, X) - C_horton) ) )
# -
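As a hedged aside on what any constrained minimizer of this quadratic form must solve (not necessarily how `constrainedMinimize` is implemented internally): the Lagrangian stationarity conditions for minimizing x^T A x - 2 b^T x subject to D x = q reduce to a single linear KKT system in the charges and the multipliers. A toy instance with made-up numbers:

```python
import numpy as np

# Toy stand-in problem (numbers are illustrative, not from the fit):
# minimize x^T A x - 2 b^T x  subject to  D x = q.
A = np.array([[2., 0.],
              [0., 2.]])
b = np.array([1., 1.])
D = np.array([[1., 1.]])   # a "total charge"-style constraint row
q = np.array([4.])

n, m = A.shape[0], D.shape[0]
# Stationarity of the Lagrangian gives one linear KKT system:
#   [[2A, D^T], [D, 0]] [x; lam] = [2b; q]
K = np.block([[2 * A, D.T],
              [D, np.zeros((m, m))]])
rhs = np.concatenate([2 * b, q])
sol = np.linalg.solve(K, rhs)
x, lam = sol[:n], sol[n:]
print(x, x.sum())  # x = [2. 2.], constraint satisfied: sum = 4
```

The trailing entries of the solution vector are the Lagrange multipliers, which is why the result vectors above are sliced as `X[:N_horton]` (charges) and `X[N_horton:]` (multipliers).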
ua_ase2pmd_df = pd.DataFrame(ua_ase2pmd).T
ua_ase2pmd_df[2] = X_unconstrained
ua_ase2pmd_df[3] = X_qtot_constraint[:N_horton]
ua_ase2pmd_df[4] = X[:N_horton]
ua_ase2pmd_df.columns = ['atom','residue','q_unconstrained','q_total_charge_constrained', 'q_fully_constrained']
ua_ase2pmd_df
ua_ase2pmd_df.iloc[sym2ase[0]]
# ## all-atom cost function & fit
A_horton_aa, B_horton_aa, C_horton_aa, N_horton_aa = read_horton_cost_function(
file_name = 'sandbox/system100.cost_aa.h5')
N_horton_aa
cg_aa, cgtypes_aa, ncgtypes_aa = read_AtomName_ChargeGroup(
file_name = 'sandbox/atoms_in_charge_group.csv',ase2pmd = aa_ase2pmd)
cg_aa
len(cgtypes_aa)
ncgtypes_aa
cg_q_aa = read_ChargeGroup_TotalCharge(
file_name = 'sandbox/charge_group_total_charge.csv')
cg_q_aa
charges_aa = [ cg_q_aa[cg] for cg in cgtypes_aa ]
charges_aa
len(charges_aa)
### Same Charged Atoms
sym2ase_aa = read_SameChargedAtoms(
file_name='sandbox/atoms_of_same_charge.csv',
ase2pmd=aa_ase2pmd)
sym2ase_aa
qtot_aa = 6.0
# +
D_matrix_cg_aa, q_vector_cg_aa = constructChargegroupConstraints(
chargeGroups = cg_aa, N = N_horton_aa, q = charges_aa, debug=True)
D_matrix_qtot_aa, q_vector_qtot_aa = constructTotalChargeConstraint(charge = qtot_aa,
N = N_horton_aa)
D_matrix_sym_aa, q_vector_sym_aa = constructPairwiseSymmetryConstraints(
charges = sym2ase_aa, N = N_horton_aa, symmetry = 1.0, debug = True)
# -
D_matrix_all_aa, q_vector_all_aa = concatenated_constraints(
D_matrices = [D_matrix_cg_aa,D_matrix_sym_aa,D_matrix_qtot_aa],
q_vectors = [q_vector_cg_aa,q_vector_sym_aa,q_vector_qtot_aa])
D_matrix_all_aa.shape
### Unconstrained Minimization
X_unconstrained_aa, A_unconstrained_aa, B_unconstrained_aa = \
unconstrainedMinimize(A_matrix = A_horton_aa,
b_vector = B_horton_aa,
C_scalar = C_horton_aa,
debug = True)
### Constrained Minimization
X_qtot_constraint_aa, A_qtot_constraint_aa, B_qtot_constraint_aa = \
constrainedMinimize(A_matrix = A_horton_aa,
b_vector = B_horton_aa,
C_scalar = C_horton_aa,
D_matrix = D_matrix_qtot_aa,
q_vector = q_vector_qtot_aa,
debug = True)
### Constrained Minimization
X_aa, A_aa, B_aa = constrainedMinimize(A_matrix = A_horton_aa,
b_vector = B_horton_aa,
C_scalar = C_horton_aa,
D_matrix = D_matrix_all_aa,
q_vector = q_vector_all_aa,
debug = True)
# +
logging.info('Results:')
#prevent scientific notation and make the prints more readable
np.set_printoptions(precision=3)
np.set_printoptions(suppress=True)
logging.info('unconstrained charges {}:\n {}\ncharge sum = {}\n'.format( X_unconstrained_aa[:N_horton_aa].T.shape,
X_unconstrained_aa[:N_horton_aa].T,
X_unconstrained_aa[:N_horton_aa].T.sum() ))
logging.info('qtot constraint charges {}:\n {}\ncharge sum = {}\n'.format( X_qtot_constraint_aa[:N_horton_aa].T.shape,
X_qtot_constraint_aa[:N_horton_aa].T,
X_qtot_constraint_aa[:N_horton_aa].T.sum() ))
logging.info('constrained charges {}:\n {}\ncharge sum = {}\n'.format( X_aa[:N_horton_aa].T.shape,
X_aa[:N_horton_aa].T,
X_aa[:N_horton_aa].T.sum() ))
logging.info('Lagrange multipliers {}:\n {}'.format( X_aa[N_horton_aa:].T.shape,
X_aa[N_horton_aa:].T ) )
### test the results
logging.info("value of cost function, unconstrained: {}".format(
    (np.dot(X_unconstrained_aa.T, np.dot(A_unconstrained_aa, X_unconstrained_aa)) - 2*np.dot(B_unconstrained_aa.T, X_unconstrained_aa) - C_horton_aa) ) )
logging.info("value of cost function, qtot constrained: {}".format(
    (np.dot(X_qtot_constraint_aa.T, np.dot(A_qtot_constraint_aa, X_qtot_constraint_aa)) - 2*np.dot(B_qtot_constraint_aa.T, X_qtot_constraint_aa) - C_horton_aa) ) )
logging.info("value of cost function, fully constrained: {}".format(
    (np.dot(X_aa.T, np.dot(A_aa, X_aa)) - 2*np.dot(B_aa.T, X_aa) - C_horton_aa) ) )
# -
aa_ase2pmd_df = pd.DataFrame(aa_ase2pmd).T
aa_ase2pmd_df[2] = X_unconstrained_aa
aa_ase2pmd_df[3] = X_qtot_constraint_aa[:N_horton_aa]
aa_ase2pmd_df[4] = X_aa[:N_horton_aa]
aa_ase2pmd_df.columns = ['atom','residue','q_unconstrained','q_total_charge_constrained', 'q_fully_constrained']
aa_ase2pmd_df
aa_ase2pmd_df.iloc[sym2ase_aa[0]]
ua_ase2pmd_df.iloc[sym2ase[0]]
# ## Summarize whole process
# +
#implicitHbondingPartners = {'CD4':1,'CD3':1,'CA2':2,'CA3':2,'CB2':2,'CB3':2}
#infile_pdb = 'sandbox/system100.pdb'
#infile_top = 'sandbox/system100.lean.top'
# -
def printResults(X,A,B,C,N):
    #prevent scientific notation and make the prints more readable
np.set_printoptions(precision=3)
np.set_printoptions(suppress=True)
logging.info('charges {}:\n {}\ncharge sum = {}\n'.format( X[:N].T.shape,
X[:N].T,
X[:N].T.sum() ))
logging.info('Lagrange multipliers {}:\n {}'.format( X[N:].T.shape,
X[N:].T ) )
### test the results
logging.info( 'value of cost function: {}'.format(
(np.dot(X.T, np.dot(A, X)) - 2*np.dot(B.T, X) - C) ) )
def fitESPconstrained(infile_pdb, infile_top, infile_cost_h5,
infile_atoms_in_cg_csv, infile_cg_charges_csv, infile_atoms_of_same_charge_csv,
qtot = 0.0, strip_string=':SOL,CL',
implicitHbondingPartners = {'CD4':1,'CD3':1,'CA2':2,'CA3':2,'CB2':2,'CB3':2},
debug=False):
# A: construct all-atom representation from united-atom structure and topology:
ua_ase_struct = ase.io.read(infile_pdb)
ua_pmd_struct = pmd.load_file(infile_pdb)
with warnings.catch_warnings():
warnings.simplefilter("ignore")
ua_pmd_top = gromacs.GromacsTopologyFile(infile_top,parametrize=False)
# throws some warnings on angle types, does not matter for bonding info
# if error thrown, just try to "reduce" .top as far as possible
        # warnings suppressed as shown on
# https://docs.python.org/2/library/warnings.html
ua_pmd_top.strip(strip_string) # strip water and electrolyte from system (if not yet done in .top)
ua_pmd_top.box = ua_pmd_struct.box # Needed because .pdb contains box info
ua_pmd_top.positions = ua_pmd_struct.positions
ua_names = [ a.name for a in ua_pmd_top.atoms ]
ua_residues = [ a.residue.name for a in ua_pmd_top.atoms ]
aa_ase_struct, aa_pmd_struct, aa_names, aa_residues = \
insertHbyList(ua_ase_struct,ua_pmd_top,
implicitHbondingPartners,1.0)
ua_count = len(ua_ase_struct) # united atoms structure
aa_count = len(aa_ase_struct) # all atoms structure
    ua_ase_index = np.arange(ua_count)
    aa_ase_index = np.arange(aa_count)
    aa_atom_residue_list = list(zip(aa_names,aa_residues))
aa_ase2pmd = dict(zip(aa_ase_index,aa_atom_residue_list))
aa_pmd2ase = dict(zip(aa_atom_residue_list,aa_ase_index))
ua_atom_residue_list = list(zip(ua_names,ua_residues))
ua_ase2pmd = dict(zip(ua_ase_index,ua_atom_residue_list))
ua_pmd2ase = dict(zip(ua_atom_residue_list,ua_ase_index))
# TODO: distinction for ua and aa fitting:
ase2pmd = ua_ase2pmd
# B: read cost function
A_horton, B_horton, C_horton, N_horton = \
read_horton_cost_function(file_name = infile_cost_h5)
# C: read constraints files
### Charge Groups:
# read in all charge groups and construct the corresponding constraints
cg2ase, cg2cgtype, ncgtypes = read_AtomName_ChargeGroup(
file_name = infile_atoms_in_cg_csv, ase2pmd = ase2pmd)
cg_q = read_ChargeGroup_TotalCharge(file_name = infile_cg_charges_csv)
cg2q = [ cg_q[cg] for cg in cg2cgtype ]
### Same Charged Atoms
sym2ase = read_SameChargedAtoms(
file_name = infile_atoms_of_same_charge_csv, ase2pmd = ase2pmd)
# D: construct constraints matrices
D_matrix_cg, q_vector_cg = constructChargegroupConstraints(
chargeGroups = cg2ase, N = N_horton, q = cg2q, debug = debug)
D_matrix_sym, q_vector_sym = constructPairwiseSymmetryConstraints(
charges = sym2ase, N = N_horton, symmetry = 1.0, debug = False)
D_matrix_qtot, q_vector_qtot = constructTotalChargeConstraint(
charge = qtot, N = N_horton)
D_matrix_all, q_vector_all = concatenated_constraints(
D_matrices = [D_matrix_cg,D_matrix_sym,D_matrix_qtot],
q_vectors = [q_vector_cg,q_vector_sym,q_vector_qtot])
# E: Minimization
### Constrained minimization
X, A, B = constrainedMinimize(A_matrix = A_horton,
b_vector = B_horton,
C_scalar = C_horton,
D_matrix = D_matrix_all,
q_vector = q_vector_all,
debug = debug)
ase2pmd_df = pd.DataFrame(ase2pmd).T
ase2pmd_df.columns = ['atom','residue']
ase2pmd_df['q'] = X[:N_horton]
# additional debug cases
if debug:
### Unconstrained minimization
X_unconstrained, A_unconstrained, B_unconstrained = \
unconstrainedMinimize(A_matrix = A_horton,
b_vector = B_horton,
C_scalar = C_horton,
debug = debug)
### Total charge constraint minimization
X_qtot_constraint, A_qtot_constraint, B_qtot_constraint = \
constrainedMinimize(A_matrix = A_horton,
b_vector = B_horton,
C_scalar = C_horton,
D_matrix = D_matrix_qtot,
q_vector = q_vector_qtot,
debug = debug)
### Charge group & total charge constraint minimization
D_matrix_cg_qtot, q_vector_cg_qtot = concatenated_constraints(
D_matrices = [D_matrix_cg,D_matrix_qtot],
q_vectors = [q_vector_cg,q_vector_qtot])
X_cg_qtot, A_cg_qtot, B_cg_qtot = \
constrainedMinimize(A_matrix = A_horton,
b_vector = B_horton,
C_scalar = C_horton,
D_matrix = D_matrix_cg_qtot,
q_vector = q_vector_cg_qtot,
debug = debug)
logging.info("")
logging.info("")
logging.info("")
logging.info("#################################")
logging.info("RESULTS FOR DIFFERENT CONSTRAINTS")
logging.info("#################################")
logging.info("")
logging.info("")
logging.info("### UNCONSTRAINED ###")
printResults(X_unconstrained,A_unconstrained,B_unconstrained,C_horton,N_horton)
logging.info("")
logging.info("")
logging.info("### QTOT CONSTRAINED ###")
printResults(X_qtot_constraint,A_qtot_constraint,B_qtot_constraint,C_horton,N_horton)
logging.info("")
logging.info("")
logging.info("### QTOT & CG CONSTRAINED ###")
printResults(X_cg_qtot,A_cg_qtot,B_cg_qtot,C_horton,N_horton)
logging.info("")
logging.info("")
logging.info("### FULLY CONSTRAINED ###")
printResults(X,A,B,C_horton,N_horton)
#ase2pmd_df.columns.append(['q_unconstrained', 'q_qtot_constrained', 'q_qtot_cg_constrained'])
ase2pmd_df['q_unconstrained'] = X_unconstrained
ase2pmd_df['q_qtot_constrained'] = X_qtot_constraint[:N_horton]
ase2pmd_df['q_cg_qtot_constrained'] = X_cg_qtot[:N_horton]
checkChargeGroups(ase2pmd_df,cg2ase,cg2cgtype,cg2q)
checkSymmetries(ase2pmd_df,sym2ase)
return X[:N_horton], X[N_horton:], ase2pmd_df, cg2ase, cg2cgtype, cg2q, sym2ase
# check charge group constraints:
def checkChargeGroups( df, cg2ase, cg2cgtype, cg2q,
q_cols = ['q','q_unconstrained','q_qtot_constrained','q_cg_qtot_constrained']):
logging.info("")
logging.info("")
logging.info("##############################")
logging.info("CHARGE GROUP CONSTRAINTS CHECK")
logging.info("##############################")
logging.info("")
logging.info("atoms grouped together by their ASE indices:")
logging.info("{}".format(cg2ase))
logging.info("")
logging.info("desired charge of each group:")
logging.info("{}".format(cg2q))
for cg_index, ase_indices_in_cg in enumerate(cg2ase):
logging.info("cg {:d}, type {:d}:".format(cg_index,cg2cgtype[cg_index]))
for q_col in q_cols:
q_cg = df.iloc[ase_indices_in_cg][q_col].sum() # select first charge group
logging.info(" {:>30}:{:8.4f} absolute error:{:12.4e}".format(q_col,q_cg,q_cg-cg2q[cg_index]))
# check symmetry constraints:
def checkSymmetries( df, sym2ase,
q_cols = ['q','q_unconstrained','q_qtot_constrained','q_cg_qtot_constrained']):
logging.info("")
logging.info("")
logging.info("##########################")
logging.info("SYMMETRY CONSTRAINTS CHECK")
logging.info("##########################")
logging.info("")
logging.info("groups of equally charged atoms by their ASE indices:")
logging.info("{}".format(sym2ase))
for sym_index, ase_indices_in_sym in enumerate(sym2ase):
#logging.info("cg {:d}, type {:d}:".format(cg_index,cg2cgtype[cg_index]))
msg = []
for ase_index in ase_indices_in_sym:
msg.append("({}, {})".format(
df.iloc[ase_index]['atom'],
df.iloc[ase_index]['residue']))
logging.info("sym {:d}: {}".format(sym_index,"; ".join(msg)))
for q_col in q_cols:
msg = []
for ase_index in ase_indices_in_sym:
msg.append("{:.3f}".format(df.iloc[ase_index][q_col]))
logging.info("{:>30}: {}".format(q_col,",".join(msg)))
logging.info("")
q, lagrange_multiplier, info_df, cg2ase, cg2cgtype, cg2q, sym2ase = \
fitESPconstrained(infile_pdb = 'sandbox/system100.pdb',
infile_top = 'sandbox/system100.lean.top',
infile_cost_h5 = 'sandbox/system100.cost_ua.h5',
infile_atoms_in_cg_csv = 'sandbox/atoms_in_charge_group.csv',
infile_cg_charges_csv = 'sandbox/charge_group_total_charge.csv',
infile_atoms_of_same_charge_csv = 'sandbox/atoms_of_same_charge.csv',
qtot = 6.0, strip_string=':SOL,CL',
implicitHbondingPartners = {'CD4':1,'CD3':1,'CA2':2,'CA3':2,'CB2':2,'CB3':2},
debug=True)
q
q_neg, *_ = fitESPconstrained(infile_pdb = 'sandbox/system100.pdb',
infile_top = 'sandbox/system100.lean.top',
infile_cost_h5 = 'sandbox/system100.cost_ua.h5',
infile_atoms_in_cg_csv = 'sandbox/atoms_in_charge_group.csv',
infile_cg_charges_csv = 'sandbox/charge_group_total_charge_negative.csv',
infile_atoms_of_same_charge_csv = 'sandbox/atoms_of_same_charge.csv',
qtot = -6.0, strip_string=':SOL,CL',
implicitHbondingPartners = {'CD4':1,'CD3':1,'CA2':2,'CA3':2,'CB2':2,'CB3':2},
debug=True)
q_neg # ok, but all charges inverted
#q, lagrange_multiplier, info_df, cg2ase, cg2cgtype, cg2q, sym2ase
q_cost_neg, lagrange_multiplier_cost_neg, info_df_cost_neg, cg2ase_cost_neg, \
cg2cgtype_cost_neg, cg2q_cost_neg, sym2ase_cost_neg = \
fitESPconstrained(infile_pdb = 'sandbox/system100.pdb',
infile_top = 'sandbox/system100.lean.top',
infile_cost_h5 = 'sandbox/system100.cost_ua_neg.h5',
infile_atoms_in_cg_csv = 'sandbox/atoms_in_charge_group.csv',
infile_cg_charges_csv = 'sandbox/charge_group_total_charge.csv',
infile_atoms_of_same_charge_csv = 'sandbox/atoms_of_same_charge.csv',
qtot = 6.0, strip_string=':SOL,CL',
implicitHbondingPartners = {'CD4':1,'CD3':1,'CA2':2,'CA3':2,'CB2':2,'CB3':2},
debug=True)
q_neg
q_cost_neg
lagrange_multiplier_cost_neg
info_df_cost_neg.iloc[cg2ase_cost_neg[0]] # select first charge group
info_df_cost_neg.iloc[cg2ase_cost_neg[0]]['q'].sum()
info_df_cost_neg.iloc[sym2ase_cost_neg[0]] # select first symmetry group
# ## double check
#
# ### total charge of system
from ase.io.cube import read_cube_data
from ase.units import Bohr
cube_data, cube_atoms = read_cube_data("sandbox/system100.rho.cube")
unit_cell = cube_atoms.cell.diagonal() / cube_data.shape
unit_volume = np.prod(unit_cell)
q_el = cube_data.sum()*unit_volume/Bohr**3
q_core_total = 0
for a in cube_atoms:
q_core_total += a.number
q_core_total
q_el
q_core_total - q_el
ua_pmd_top
| HortonFitEspConstrainedSample.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="VYNA79KmgvbY" colab_type="text"
# Copyright 2019 The Dopamine Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
# + [markdown] id="emUEZEvldNyX" colab_type="text"
# # Dopamine: How to train an agent on Cartpole
#
# This colab demonstrates how to train the DQN and C51 on Cartpole, based on the default configurations provided.
#
# The hyperparameters chosen are by no means optimal. The purpose of this colab is to illustrate how to train two agents on a non-Atari gym environment: cartpole.
#
# We also include default configurations for Acrobot in our repository: https://github.com/google/dopamine
#
# Run all the cells below in order.
# + id="Ckq6WG-seC7F" colab_type="code" cellView="form" colab={}
# @title Install necessary packages.
# !pip install --upgrade --no-cache-dir dopamine-rl
# !pip install gin-config
# + id="WzwZoRKxdFov" colab_type="code" cellView="form" colab={}
# @title Necessary imports and globals.
import numpy as np
import os
from dopamine.discrete_domains import run_experiment
from dopamine.colab import utils as colab_utils
from absl import flags
import gin.tf
BASE_PATH = '/tmp/colab_dopamine_run' # @param
# + [markdown] id="bidurBV0djGi" colab_type="text"
# ## Train DQN
# + id="PUBRSmX6dfa3" colab_type="code" colab={}
# @title Load the configuration for DQN.
DQN_PATH = os.path.join(BASE_PATH, 'dqn')
# Modified from dopamine/agents/dqn/config/dqn_cartpole.gin
dqn_config = """
# Hyperparameters for a simple DQN-style Cartpole agent. The hyperparameters
# chosen achieve reasonable performance.
import dopamine.discrete_domains.gym_lib
import dopamine.discrete_domains.run_experiment
import dopamine.agents.dqn.dqn_agent
import dopamine.replay_memory.circular_replay_buffer
import gin.tf.external_configurables
DQNAgent.observation_shape = %gym_lib.CARTPOLE_OBSERVATION_SHAPE
DQNAgent.observation_dtype = %gym_lib.CARTPOLE_OBSERVATION_DTYPE
DQNAgent.stack_size = %gym_lib.CARTPOLE_STACK_SIZE
DQNAgent.network = @gym_lib.CartpoleDQNNetwork
DQNAgent.gamma = 0.99
DQNAgent.update_horizon = 1
DQNAgent.min_replay_history = 500
DQNAgent.update_period = 4
DQNAgent.target_update_period = 100
DQNAgent.epsilon_fn = @dqn_agent.identity_epsilon
DQNAgent.tf_device = '/gpu:0' # use '/cpu:*' for non-GPU version
DQNAgent.optimizer = @tf.train.AdamOptimizer()
tf.train.AdamOptimizer.learning_rate = 0.001
tf.train.AdamOptimizer.epsilon = 0.0003125
create_gym_environment.environment_name = 'CartPole'
create_gym_environment.version = 'v0'
create_agent.agent_name = 'dqn'
TrainRunner.create_environment_fn = @gym_lib.create_gym_environment
Runner.num_iterations = 50
Runner.training_steps = 1000
Runner.evaluation_steps = 1000
Runner.max_steps_per_episode = 200 # Default max episode length.
WrappedReplayBuffer.replay_capacity = 50000
WrappedReplayBuffer.batch_size = 128
"""
gin.parse_config(dqn_config, skip_unknown=False)
# + id="WuWFGwGHfkFp" colab_type="code" colab={}
# @title Train DQN on Cartpole
dqn_runner = run_experiment.create_runner(DQN_PATH, schedule='continuous_train')
print('Will train DQN agent, please be patient, may be a while...')
dqn_runner.run_experiment()
print('Done training!')
# + [markdown] id="aRkvG1Nr6Etc" colab_type="text"
# # Train C51
# + id="s5o3a8HX6G2A" colab_type="code" colab={}
# @title Load the configuration for C51.
C51_PATH = os.path.join(BASE_PATH, 'c51')
# Modified from dopamine/agents/rainbow/config/c51_cartpole.gin
c51_config = """
# Hyperparameters for a simple C51-style Cartpole agent. The hyperparameters
# chosen achieve reasonable performance.
import dopamine.agents.dqn.dqn_agent
import dopamine.agents.rainbow.rainbow_agent
import dopamine.discrete_domains.gym_lib
import dopamine.discrete_domains.run_experiment
import dopamine.replay_memory.prioritized_replay_buffer
import gin.tf.external_configurables
RainbowAgent.observation_shape = %gym_lib.CARTPOLE_OBSERVATION_SHAPE
RainbowAgent.observation_dtype = %gym_lib.CARTPOLE_OBSERVATION_DTYPE
RainbowAgent.stack_size = %gym_lib.CARTPOLE_STACK_SIZE
RainbowAgent.network = @gym_lib.CartpoleRainbowNetwork
RainbowAgent.num_atoms = 51
RainbowAgent.vmax = 10.
RainbowAgent.gamma = 0.99
RainbowAgent.update_horizon = 1
RainbowAgent.min_replay_history = 500
RainbowAgent.update_period = 4
RainbowAgent.target_update_period = 100
RainbowAgent.epsilon_fn = @dqn_agent.identity_epsilon
RainbowAgent.replay_scheme = 'uniform'
RainbowAgent.tf_device = '/gpu:0' # use '/cpu:*' for non-GPU version
RainbowAgent.optimizer = @tf.train.AdamOptimizer()
tf.train.AdamOptimizer.learning_rate = 0.001
tf.train.AdamOptimizer.epsilon = 0.0003125
create_gym_environment.environment_name = 'CartPole'
create_gym_environment.version = 'v0'
create_agent.agent_name = 'rainbow'
Runner.create_environment_fn = @gym_lib.create_gym_environment
Runner.num_iterations = 50
Runner.training_steps = 1000
Runner.evaluation_steps = 1000
Runner.max_steps_per_episode = 200 # Default max episode length.
WrappedPrioritizedReplayBuffer.replay_capacity = 50000
WrappedPrioritizedReplayBuffer.batch_size = 128
"""
gin.parse_config(c51_config, skip_unknown=False)
# + id="VI_v9lm66jzq" colab_type="code" colab={}
# @title Train C51 on Cartpole
c51_runner = run_experiment.create_runner(C51_PATH, schedule='continuous_train')
print('Will train agent, please be patient, may be a while...')
c51_runner.run_experiment()
print('Done training!')
# + [markdown] id="hqBe5Yad63FT" colab_type="text"
# # Plot the results
# + id="IknanILXX4Zz" colab_type="code" outputId="e7e5b94c-2872-426b-fb69-a51365eb5fe4" colab={"base_uri": "https://localhost:8080/", "height": 53}
# @title Load the training logs.
data = colab_utils.read_experiment(DQN_PATH, verbose=True,
summary_keys=['train_episode_returns'])
data['agent'] = 'DQN'
data['run'] = 1
c51_data = colab_utils.read_experiment(C51_PATH, verbose=True,
summary_keys=['train_episode_returns'])
c51_data['agent'] = 'C51'
c51_data['run'] = 1
data = data.merge(c51_data, how='outer')
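A tiny illustration of what the `how='outer'` merge above does: rows from both frames are kept, aligned on the shared columns (toy frames below, not the real training logs).

```python
import pandas as pd

a = pd.DataFrame({"iteration": [0, 1], "agent": ["DQN", "DQN"]})
b = pd.DataFrame({"iteration": [0, 1], "agent": ["C51", "C51"]})
merged = a.merge(b, how="outer")
print(len(merged))  # 4 -- no rows are shared between the two frames, so all are kept
```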
# + id="mSOVFUKN-kea" colab_type="code" outputId="8ec8c20a-2409-420e-a0a0-1b14907bfbed" colab={"base_uri": "https://localhost:8080/", "height": 512}
# @title Plot training results.
import seaborn as sns
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(16,8))
sns.tsplot(data=data, time='iteration', unit='run',
condition='agent', value='train_episode_returns', ax=ax)
plt.title('Cartpole')
plt.show()
| dopamine/colab/cartpole.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.6
# language: python
# name: python3
# ---
# # Zero to Singularity: Create, Tune, Deploy and Scale a Deep Neural Network in 90 Minutes
#
# This notebook is part of a masterclass held at IBM Think on 13th of February 2019 in San Francisco.
# In this exercise you will train a Keras DeepLearning model running on top of TensorFlow.
#
# Note: For the sake of keeping the training runtime down, we've done two things:
#
# 1) Used a softmax regression model over a Convolutional Neural Network
#
# 2) Trained only for one epoch instead of 20
#
# This leads to approx. 5% less accuracy
#
#
# Authors
#
# <NAME> - Chief Data Scientist, IBM Watson IoT
#
# <NAME> - Architect, Watson Machine Learning Software Lab, Bangalore
#
#
# # Prerequisites
#
# Please make sure the currently installed versions of Keras and TensorFlow match the requirements; if not, please run the two pip commands below to re-install. Restart the kernel before proceeding, then re-check that the versions match.
import keras
print('Current:\t', keras.__version__)
print('Expected:\t 2.2.5 ')
import tensorflow as tf
print('Current:\t', tf.__version__)
print('Expected:\t 1.15.0')
# # IMPORTANT !!!
#
# If you ran the two lines below please restart your kernel (Kernel->Restart & Clear Output)
# !pip install keras==2.2.5
# !pip install tensorflow==1.15.0
# # 1.0 Train a MNIST digits recognition model
# We start with some global parameters and imports
# +
#some learners constantly reported 502 errors in Watson Studio.
#This is due to the limited resources in the free tier and the heavy resource consumption of Keras.
#This is a workaround to limit resource consumption
from keras import backend as K
K.set_session(K.tf.Session(config=K.tf.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)))
# +
import keras
from keras.models import Model
from keras.layers import Input, Dense
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras.datasets import mnist
from keras.models import Sequential, load_model
from keras.optimizers import RMSprop
from keras.layers import LeakyReLU
from keras import backend as K
import numpy as np
# +
batch_size = 128
num_classes = 10
epochs = 1
# the data, split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(60000, 784)
x_test = x_test.reshape(10000, 784)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
# -
# # Training a simple model
# First we'll train a simple softmax regressor and check what accuracy we get
# +
model = Sequential()
model.add(Dense(512, input_shape=(784,)))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss='categorical_crossentropy',
optimizer='rmsprop',
metrics=['accuracy'])
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_data=(x_test, y_test))
score = model.evaluate(x_test, y_test, verbose=0)
print('\n')
print('Accuracy:',score[1])
# -
#some cleanup from the previous run
# !rm -f ker_*
# !rm -f my_best_model.tgz
# You should see an accuracy of approximately 90%. Now let's define a hyperparameter grid including different activation functions and gradient descent optimizers. We optimize over the grid using grid search (nested for loops) and store each model variant in a file. We then choose the best one to deploy to IBM Watson Machine Learning.
# +
#define parameter grid
activation_functions_layer_1 = ['sigmoid','tanh','relu']
optimizers = ['rmsprop','adagrad','adadelta']

#optimize over parameter grid (grid search)
for activation_function_layer_1 in activation_functions_layer_1:
    for optimizer in optimizers:
        model = Sequential()
        model.add(Dense(512, activation = activation_function_layer_1, input_shape=(784,)))
        model.add(Dense(num_classes, activation='softmax'))
        model.compile(loss='categorical_crossentropy',
                      optimizer=optimizer,
                      metrics=['accuracy'])
        model.fit(x_train, y_train,
                  batch_size=batch_size,
                  epochs=epochs,
                  verbose=1,
                  validation_data=(x_test, y_test))
        score = model.evaluate(x_test, y_test, verbose=0)
        save_path = "ker_func_mnist_model_2.%s.%s.%s.h5" % (activation_function_layer_1, optimizer, score[1])
        model.save(save_path)
# -
# # Model evaluation
# Let's have a look at all the models and see which hyperparameter configuration was the best one. You should see that relu and rmsprop give you > 95% accuracy on the validation set.
# ls -ltr ker_*
# Now it's time to create a tarball out of your favorite model, please replace the name of your favorite model H5 file with “please-put-me-here”
# !tar -zcvf my_best_model.tgz please-put-me-here.h5
# ## 2.0 Save the trained model to WML Repository
# We will use `watson_machine_learning_client` python library to save the trained model to WML Repository, to deploy the saved model and to make predictions using the deployed model.</br>
#
#
# `watson_machine_learning_client` can be installed using the following `pip` command in case you are running outside Watson Studio:
#
# `!pip install watson-machine-learning-client --upgrade`
from watson_machine_learning_client import WatsonMachineLearningAPIClient
# Please go to https://cloud.ibm.com/, login, click on the “Create Resource” button. From the “AI” category, please choose “Machine Learning”. Wait for the “Create” button to activate and click on “Create”. Click on “Service Credentials”, then “New Credential”, then “Add”. From the new entry in the table, under “ACTIONS”, please click on “View Credentials”. Please copy the whole JSON object to your clipboard. Now just paste the JSON object below so that you are able to use your personal instance of Watson Machine Learning.
wml_credentials={
"apikey": "<KEY>",
"iam_apikey_description": "Auto generated apikey during resource-key operation for Instance - crn:v1:bluemix:public:pm-20:us-south:a/4b5f219cdaee498f9dac672a8966c254:708f4e4e-ffa6-4be2-8427-7a0a73ae6949::",
"iam_apikey_name": "auto-generated-apikey-ae8c30a4-8f83-44e2-98b5-9461e847b11f",
"iam_role_crn": "crn:v1:bluemix:public:iam::::serviceRole:Writer",
"iam_serviceid_crn": "crn:v1:bluemix:public:iam-identity::a/4b5f219cdaee498f9dac672a8966c254::serviceid:ServiceId-c6a23b0b-5e7d-47b0-a3e0-6a2b51aa1817",
"instance_id": "708f4e4e-ffa6-4be2-8427-7a0a73ae6949",
"password": "",
"url": "https://us-south.ml.cloud.ibm.com",
"username": "ae8c30a4-8f83-44e2-98b5-9461e847b11f"
}
client = WatsonMachineLearningAPIClient(wml_credentials)
model_props = {client.repository.ModelMetaNames.AUTHOR_NAME: "IBM",
client.repository.ModelMetaNames.AUTHOR_EMAIL: "<EMAIL>",
client.repository.ModelMetaNames.NAME: "KK3_clt_keras_mnist",
client.repository.ModelMetaNames.FRAMEWORK_NAME: "tensorflow",
client.repository.ModelMetaNames.FRAMEWORK_VERSION: "1.15" ,
client.repository.ModelMetaNames.FRAMEWORK_LIBRARIES: [{"name": "keras", "version": "2.2.5"}]
}
published_model = client.repository.store_model(model="my_best_model.tgz", meta_props=model_props)
published_model_uid = client.repository.get_model_uid(published_model)
model_details = client.repository.get_details(published_model_uid)
# ## 3.0 Deploy the Keras model
client.deployments.list()
# To keep your environment clean, just delete all deployments from previous runs
client.deployments.delete("PASTE_YOUR_GUID_HERE_IF_APPLICABLE")
created_deployment = client.deployments.create(published_model_uid, name="k1_keras_mnist_clt1")
# ## Test the model
#scoring_endpoint = client.deployments.get_scoring_url(created_deployment)
scoring_endpoint = created_deployment['entity']['scoring_url']
print(scoring_endpoint)
x_score_1 = x_test[23].tolist()
print('The answer should be: ',np.argmax(y_test[23]))
scoring_payload = {'values': [x_score_1]}
predictions = client.deployments.score(scoring_endpoint, scoring_payload)
print('And the answer is!... ',predictions['values'][0][1])
| applied-ai-with-deep-learning/Week 4/WML/Keras_train_deploy_score_wmlc_py_pub.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Scikit-Learn Practice Solutions
#
# This notebook offers a set of potential solutions to the Scikit-Learn excercise notebook.
#
# Exercises are based off (and directly taken from) the quick [introduction to Scikit-Learn notebook](https://github.com/mrdbourke/zero-to-mastery-ml/blob/master/section-2-data-science-and-ml-tools/introduction-to-scikit-learn.ipynb).
#
# Different tasks will be detailed by comments or text.
#
# For further reference and resources, it's advised to check out the [Scikit-Learn documentation](https://scikit-learn.org/stable/user_guide.html).
#
# And if you get stuck, try searching for a question in the following format: "how to do XYZ with Scikit-Learn", where XYZ is the function you want to leverage from Scikit-Learn.
#
# Since we'll be working with data, we'll import Scikit-Learn's counterparts, Matplotlib, NumPy and pandas.
#
# Let's get started.
# +
# Setup matplotlib to plot inline (within the notebook)
# %matplotlib inline
# Import the pyplot module of Matplotlib as plt
import matplotlib.pyplot as plt
# Import pandas under the abbreviation 'pd'
import pandas as pd
# Import NumPy under the abbreviation 'np'
import numpy as np
# -
# ## End-to-end Scikit-Learn classification workflow
#
# Let's start with an end to end Scikit-Learn workflow.
#
# More specifically, we'll:
# 1. Get a dataset ready
# 2. Prepare a machine learning model to make predictions
# 3. Fit the model to the data and make a prediction
# 4. Evaluate the model's predictions
#
# The data we'll be using is [stored on GitHub](https://github.com/mrdbourke/zero-to-mastery-ml/tree/master/data). We'll start with [`heart-disease.csv`](https://raw.githubusercontent.com/mrdbourke/zero-to-mastery-ml/master/data/heart-disease.csv), a dataset which contains anonymous patient data and whether or not they have heart disease.
#
# **Note:** When viewing a `.csv` on GitHub, make sure it's in the raw format. For example, the URL should look like: https://raw.githubusercontent.com/mrdbourke/zero-to-mastery-ml/master/data/heart-disease.csv
#
# ### 1. Getting a dataset ready
# +
# Import the heart disease dataset and save it to a variable
# using pandas and read_csv()
# Hint: You can directly pass the URL of a csv to read_csv()
heart_disease = pd.read_csv("https://raw.githubusercontent.com/mrdbourke/zero-to-mastery-ml/master/data/heart-disease.csv")
# Check the first 5 rows of the data
heart_disease.head()
# -
# Our goal here is to build a machine learning model on all of the columns except `target` to predict `target`.
#
# In essence, the `target` column is our **target variable** (also called `y` or `labels`) and the rest of the columns are our independent variables (also called `data` or `X`).
#
# And since our target variable is one thing or another (heart disease or not), we know our problem is a classification problem (classifying whether something is one thing or another).
#
# Knowing this, let's create `X` and `y` by splitting our dataframe up.
# +
# Create X (all columns except target)
X = heart_disease.drop("target", axis=1)
# Create y (only the target column)
y = heart_disease["target"]
# -
# Now we've split our data into `X` and `y`, we'll use Scikit-Learn to split it into training and test sets.
# +
# Import train_test_split from sklearn's model_selection module
from sklearn.model_selection import train_test_split
# Use train_test_split to split X & y into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y)
# -
# View the different shapes of the training and test datasets
X_train.shape, X_test.shape, y_train.shape, y_test.shape
# What do you notice about the different shapes of the data?
#
# Since our data is now in training and test sets, we'll build a machine learning model to fit patterns in the training data and then make predictions on the test data.
#
# To figure out which machine learning model we should use, you can refer to [Scikit-Learn's machine learning map](https://scikit-learn.org/stable/tutorial/machine_learning_map/index.html).
#
# After following the map, you decide to use the [`RandomForestClassifier`](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html).
#
# ### 2. Preparing a machine learning model
# +
# Import the RandomForestClassifier from sklearn's ensemble module
from sklearn.ensemble import RandomForestClassifier
# Instantiate an instance of RandomForestClassifier as clf
clf = RandomForestClassifier()
# -
# Now you've got a `RandomForestClassifier` instance, let's fit it to the training data.
#
# Once it's fit, we'll make predictions on the test data.
#
# ### 3. Fitting a model and making predictions
# Fit the RandomForestClassifier to the training data
clf.fit(X_train, y_train)
# Use the fitted model to make predictions on the test data and
# save the predictions to a variable called y_preds
y_preds = clf.predict(X_test)
# ### 4. Evaluating a model's predictions
#
# Evaluating predictions is as important as making them. Let's check how our model did by calling the `score()` method on it and passing it the training (`X_train, y_train`) and testing data.
# Evaluate the fitted model on the training set using the score() function
clf.score(X_train, y_train)
# Evaluate the fitted model on the test set using the score() function
clf.score(X_test, y_test)
# * How did your model go?
# * What metric does `score()` return for classifiers?
# * Did your model do better on the training dataset or test dataset?
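# To answer the second question above: for classifiers, `score()` returns the mean accuracy on the given data. A minimal sketch (using a synthetic dataset rather than the heart disease data, so it stands on its own) confirming that `score()` matches `accuracy_score()` on the model's predictions:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic classification data (stand-in for the heart disease dataset)
X_demo, y_demo = make_classification(n_samples=200, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X_demo, y_demo, random_state=42)

model = RandomForestClassifier(random_state=42).fit(X_tr, y_tr)

# For classifiers, score() is defined as accuracy on the given data
assert model.score(X_te, y_te) == accuracy_score(y_te, model.predict(X_te))
```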
# ## Experimenting with different classification models
#
# Now that we've quickly covered an end-to-end Scikit-Learn workflow, and since experimenting is a large part of machine learning, we'll try a series of different machine learning models and see which gets the best results on our dataset.
#
# Going through the [Scikit-Learn machine learning map](https://scikit-learn.org/stable/tutorial/machine_learning_map/index.html), we see there are a number of different classification models we can try (different models are in the green boxes).
#
# For this exercise, the models we're going to try and compare are:
# * [LinearSVC](https://scikit-learn.org/stable/modules/svm.html#classification)
# * [KNeighborsClassifier](https://scikit-learn.org/stable/modules/neighbors.html) (also known as K-Nearest Neighbors or KNN)
# * [SVC](https://scikit-learn.org/stable/modules/svm.html#classification) (also known as support vector classifier, a form of [support vector machine](https://en.wikipedia.org/wiki/Support-vector_machine))
# * [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) (despite the name, this is actually a classifier)
# * [RandomForestClassifier](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html) (an ensemble method and what we used above)
#
# We'll follow the same workflow we used above (except this time for multiple models):
# 1. Import a machine learning model
# 2. Get it ready
# 3. Fit it to the data and make predictions
# 4. Evaluate the fitted model
#
# **Note:** Since we've already got the data ready, we can reuse it in this section.
# +
# Import LinearSVC from sklearn's svm module
from sklearn.svm import LinearSVC
# Import KNeighborsClassifier from sklearn's neighbors module
from sklearn.neighbors import KNeighborsClassifier
# Import SVC from sklearn's svm module
from sklearn.svm import SVC
# Import LogisticRegression from sklearn's linear_model module
from sklearn.linear_model import LogisticRegression
# Note: we don't have to import RandomForestClassifier, since we already have
# -
# Thanks to the consistency of Scikit-Learn's API design, we can use virtually the same code to fit, score and make predictions with each of our models.
#
# To see which model performs best, we'll do the following:
# 1. Instantiate each model in a dictionary
# 2. Create an empty results dictionary
# 3. Fit each model on the training data
# 4. Score each model on the test data
# 5. Check the results
#
# If you're wondering what it means to instantiate each model in a dictionary, see the example below.
# +
# EXAMPLE: Instantiating a RandomForestClassifier() in a dictionary
example_dict = {"RandomForestClassifier": RandomForestClassifier()}
# Create a dictionary called models which contains all of the classification models we've imported
# Make sure the dictionary is in the same format as example_dict
# The models dictionary should contain 5 models
models = {"LinearSVC": LinearSVC(),
"KNN": KNeighborsClassifier(),
"SVC": SVC(),
"LogisticRegression": LogisticRegression(),
"RandomForestClassifier": RandomForestClassifier()}
# Create an empty dictionary called results
results = {}
# -
# Since each model we're using has the same `fit()` and `score()` functions, we can loop through our models dictionary, call `fit()` on the training data and then call `score()` with the test data.
# +
# EXAMPLE: Looping through example_dict fitting and scoring the model
example_results = {}
for model_name, model in example_dict.items():
model.fit(X_train, y_train)
example_results[model_name] = model.score(X_test, y_test)
example_results
# +
# Loop through the models dictionary items, fitting the model on the training data
# and appending the model name and model score on the test data to the results dictionary
for model_name, model in models.items():
model.fit(X_train, y_train)
results[model_name] = model.score(X_test, y_test)
results
# -
# * Which model performed the best?
# * Do the results change each time you run the cell?
# * Why do you think this is?
#
# Due to the randomness of how each model finds patterns in the data, you might notice different results each time.
#
# Without manually setting the random state using the `random_state` parameter of some models or using a NumPy random seed, every time you run the cell, you'll get slightly different results.
#
# Let's see this in effect by running the same code as the cell above, except this time setting a [NumPy random seed equal to 42](https://docs.scipy.org/doc/numpy-1.15.1/reference/generated/numpy.random.seed.html).
# +
# Run the same code as the cell above, except this time set a NumPy random seed
# equal to 42
np.random.seed(42)
for model_name, model in models.items():
model.fit(X_train, y_train)
results[model_name] = model.score(X_test, y_test)
results
# -
# * Run the cell above a few times, what do you notice about the results?
# * Which model performs the best this time?
# * What happens if you add a NumPy random seed to the cell where you called `train_test_split()` (towards the top of the notebook) and then rerun the cell above?
#
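# One of the questions above asks about adding a random seed to `train_test_split()`. A minimal sketch (with small synthetic arrays) showing that fixing the `random_state` parameter makes the split reproducible:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Tiny synthetic dataset: 10 samples, 2 features each
X_demo = np.arange(20).reshape(10, 2)
y_demo = np.arange(10)

# Two splits with the same random_state...
X_a, _, y_a, _ = train_test_split(X_demo, y_demo, random_state=42)
X_b, _, y_b, _ = train_test_split(X_demo, y_demo, random_state=42)

# ...produce identical training sets
assert (X_a == X_b).all() and (y_a == y_b).all()
```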
# Let's make our results a little more visual.
# +
# Create a pandas dataframe with the data as the values of the results dictionary,
# the index as the keys of the results dictionary and a single column called "Accuracy".
# Be sure to save the dataframe to a variable.
results_df = pd.DataFrame(results.values(),
results.keys(),
columns=["Accuracy"])
# Create a bar plot of the results dataframe using plot.bar()
results_df.plot.bar();
# -
# Using `np.random.seed(42)` results in the `LogisticRegression` model performing the best (at least on my computer).
#
# Let's tune its hyperparameters and see if we can improve it.
#
# ### Hyperparameter Tuning
#
# Remember, if you're ever trying to tune a machine learning model's hyperparameters and you're not sure where to start, you can always search something like "MODEL_NAME hyperparameter tuning".
#
# In the case of LogisticRegression, you might come across articles, such as [Hyperparameter Tuning Using Grid Search by <NAME>](https://chrisalbon.com/machine_learning/model_selection/hyperparameter_tuning_using_grid_search/).
#
# The article uses [`GridSearchCV`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) but we're going to be using [`RandomizedSearchCV`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RandomizedSearchCV.html).
#
# The different hyperparameters to search over have been setup for you in `log_reg_grid` but feel free to change them.
# Different LogisticRegression hyperparameters
log_reg_grid = {"C": np.logspace(-4, 4, 20),
"solver": ["liblinear"]}
# Since we've got a set of hyperparameters we can import `RandomizedSearchCV`, pass it our dictionary of hyperparameters and let it search for the best combination.
# +
# Setup np random seed of 42
np.random.seed(42)
# Import RandomizedSearchCV from sklearn's model_selection module
from sklearn.model_selection import RandomizedSearchCV
# Setup an instance of RandomizedSearchCV with a LogisticRegression() estimator,
# our log_reg_grid as the param_distributions, a cv of 5 and n_iter of 5.
rs_log_reg = RandomizedSearchCV(estimator=LogisticRegression(),
param_distributions=log_reg_grid,
cv=5,
n_iter=5,
verbose=True)
# Fit the instance of RandomizedSearchCV
rs_log_reg.fit(X_train, y_train);
# -
# Once `RandomizedSearchCV` has finished, we can find the best hyperparameters it found using the `best_params_` attribute.
# Find the best parameters of the RandomizedSearchCV instance using the best_params_ attribute
rs_log_reg.best_params_
# Score the instance of RandomizedSearchCV using the test data
rs_log_reg.score(X_test, y_test)
# After hyperparameter tuning, did the model's score improve? What else could you try to improve it? Are there any other methods of hyperparameter tuning you can find for `LogisticRegression`?
#
# ### Classifier Model Evaluation
#
# We've tried to find the best hyperparameters on our model using `RandomizedSearchCV` and so far we've only been evaluating our model using the `score()` function which returns accuracy.
#
# But when it comes to classification, you'll likely want to use a few more evaluation metrics, including:
# * [**Confusion matrix**](https://www.dataschool.io/simple-guide-to-confusion-matrix-terminology/) - Compares the predicted values with the true values in a tabular way; if the predictions are 100% correct, all values fall on the top-left to bottom-right diagonal.
# * [**Cross-validation**](https://scikit-learn.org/stable/modules/cross_validation.html) - Splits your dataset into multiple parts and train and tests your model on each part and evaluates performance as an average.
# * [**Precision**](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_score.html#sklearn.metrics.precision_score) - Proportion of true positives over the total number of predicted positives (true positives plus false positives). Higher precision leads to fewer false positives.
# * [**Recall**](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.recall_score.html#sklearn.metrics.recall_score) - Proportion of true positives over the total number of actual positives (true positives plus false negatives). Higher recall leads to fewer false negatives.
# * [**F1 score**](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html#sklearn.metrics.f1_score) - Combines precision and recall into one metric. 1 is best, 0 is worst.
# * [**Classification report**](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.classification_report.html) - Sklearn has a built-in function called `classification_report()` which returns some of the main classification metrics such as precision, recall and f1-score.
# * [**ROC Curve**](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_score.html) - [Receiver Operating Characteristic](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) is a plot of true positive rate versus false positive rate.
# * [**Area Under Curve (AUC)**](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html) - The area underneath the ROC curve. A perfect model achieves a score of 1.0.
#
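# These definitions can be checked by hand on a tiny set of toy labels (made up for illustration, not from our dataset), reading the counts straight out of the confusion matrix:

```python
from sklearn.metrics import confusion_matrix, precision_score, recall_score, f1_score

y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]  # one false negative, one false positive

# confusion_matrix returns [[TN, FP], [FN, TP]] for binary labels
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
assert (tn, fp, fn, tp) == (3, 1, 1, 3)

# Precision = TP / (TP + FP), Recall = TP / (TP + FN)
assert precision_score(y_true, y_pred) == tp / (tp + fp)  # 0.75
assert recall_score(y_true, y_pred) == tp / (tp + fn)     # 0.75
# F1 = 2 * TP / (2 * TP + FP + FN), the harmonic mean of precision and recall
assert f1_score(y_true, y_pred) == 0.75
```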
# Before we get to these, we'll instantiate a new instance of our model using the best hyperparameters found by `RandomizedSearchCV`.
# +
# Instantiate a LogisticRegression classifier using the best hyperparameters from RandomizedSearchCV
clf = LogisticRegression(solver="liblinear", C=0.23357214690901212)
# Fit the new instance of LogisticRegression with the best hyperparameters on the training data
clf.fit(X_train, y_train);
# -
# Now it's time to import the relevant Scikit-Learn methods for each of the classification evaluation metrics we're after.
# +
# Import confusion_matrix and classification_report from sklearn's metrics module
from sklearn.metrics import confusion_matrix, classification_report
# Import precision_score, recall_score and f1_score from sklearn's metrics module
from sklearn.metrics import precision_score, recall_score, f1_score
# Import plot_roc_curve from sklearn's metrics module
from sklearn.metrics import plot_roc_curve
# -
# Evaluation metrics are very often comparing a model's predictions to some ground truth labels.
#
# Let's make some predictions on the test data using our latest model and save them to `y_preds`.
# Make predictions on test data and save them
y_preds = clf.predict(X_test)
# Time to use the predictions our model has made to evaluate it beyond accuracy.
# Create a confusion matrix using the confusion_matrix function
confusion_matrix(y_test, y_preds)
# **Challenge:** The in-built `confusion_matrix` function in Scikit-Learn produces something not too visual, how could you make your confusion matrix more visual?
#
# You might want to search something like "how to plot a confusion matrix". Note: There may be more than one way to do this.
# +
# Import seaborn for improving visualisation of confusion matrix
import seaborn as sns
# Make confusion matrix more visual
def plot_conf_mat(y_test, y_preds):
"""
Plots a confusion matrix using Seaborn's heatmap().
"""
fig, ax = plt.subplots(figsize=(3, 3))
ax = sns.heatmap(confusion_matrix(y_test, y_preds),
annot=True, # Annotate the boxes
cbar=False)
plt.xlabel("Predicted label")
plt.ylabel("True label")
# Fix the broken annotations (this happened in Matplotlib 3.1.1)
bottom, top = ax.get_ylim()
ax.set_ylim(bottom + 0.5, top - 0.5);
plot_conf_mat(y_test, y_preds)
# -
# How about a classification report?
# classification report
print(classification_report(y_test, y_preds))
# **Challenge:** Write down what each of the columns in this classification report are.
#
# * **Precision** - Indicates the proportion of positive identifications (model predicted class 1) which were actually correct. A model which produces no false positives has a precision of 1.0.
# * **Recall** - Indicates the proportion of actual positives which were correctly classified. A model which produces no false negatives has a recall of 1.0.
# * **F1 score** - A combination of precision and recall. A perfect model achieves an F1 score of 1.0.
# * **Support** - The number of samples each metric was calculated on.
# * **Accuracy** - The accuracy of the model in decimal form. Perfect accuracy is equal to 1.0.
# * **Macro avg** - Short for macro average, the average precision, recall and F1 score between classes. Macro avg doesn't take class imbalance into account, so if you do have class imbalances, pay attention to this metric.
# * **Weighted avg** - Short for weighted average, the weighted average precision, recall and F1 score between classes. Weighted means each metric is calculated with respect to how many samples there are in each class. This metric will favour the majority class (e.g. will give a high value when one class outperforms another due to having more samples).
#
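# A toy example (hypothetical imbalanced labels, not the heart disease data) contrasting macro avg, which counts each class equally, with weighted avg, which weights each class by its support:

```python
from sklearn.metrics import recall_score

y_true = [0] * 90 + [1] * 10           # 90 negatives, 10 positives
y_pred = [0] * 90 + [1] * 5 + [0] * 5  # half the minority class is missed

# Per-class recall: class 0 -> 1.0, class 1 -> 0.5
macro = recall_score(y_true, y_pred, average="macro")        # (1.0 + 0.5) / 2
weighted = recall_score(y_true, y_pred, average="weighted")  # 0.9*1.0 + 0.1*0.5

# Macro avg exposes the weak minority class; weighted avg hides it
assert abs(macro - 0.75) < 1e-9
assert abs(weighted - 0.95) < 1e-9
```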
# The classification report gives us a range of values for precision, recall and F1 score, time to find these metrics using Scikit-Learn functions.
# Find the precision score of the model using precision_score()
precision_score(y_test, y_preds)
# Find the recall score
recall_score(y_test, y_preds)
# Find the F1 score
f1_score(y_test, y_preds)
# Confusion matrix: done.
# Classification report: done.
# ROC (receiver operating characteristic) curve & AUC (area under curve) score: not done.
#
# Let's fix this.
#
# If you're unfamiliar with what a ROC curve is, that's your first challenge: to read up on what one is.
#
# In a sentence, a [ROC curve](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) is a plot of the true positive rate versus the false positive rate.
#
# And the AUC score is the area under the ROC curve.
#
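# A toy sketch (hypothetical scores, not our model's outputs) of how the ROC curve and AUC are computed from predicted scores rather than hard class labels:

```python
from sklearn.metrics import roc_curve, roc_auc_score

y_true = [0, 0, 1, 1]
y_scores = [0.1, 0.4, 0.35, 0.8]  # predicted probability of the positive class

# roc_curve sweeps a threshold over the scores, giving one
# (false positive rate, true positive rate) point per threshold
fpr, tpr, thresholds = roc_curve(y_true, y_scores)

# AUC of 1.0 is a perfect ranking; 0.5 is no better than chance
assert roc_auc_score(y_true, y_scores) == 0.75
```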
# Scikit-Learn provides a handy function for creating both of these called [`plot_roc_curve()`](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.plot_roc_curve.html).
# Plot a ROC curve using our current machine learning model using plot_roc_curve
plot_roc_curve(clf, X_test, y_test);
# Beautiful! We've gone far beyond accuracy with a plethora of extra classification evaluation metrics.
#
# If you're not sure about any of these, don't worry, they can take a while to understand. That could be an optional extension, reading up on a classification metric you're not sure of.
#
# The thing to note here is all of these metrics have been calculated using a single training set and a single test set. Whilst this is okay, a more robust way is to calculate them using [cross-validation](https://scikit-learn.org/stable/modules/cross_validation.html).
#
# We can calculate various evaluation metrics using cross-validation using Scikit-Learn's [`cross_val_score()`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_val_score.html) function along with the `scoring` parameter.
# Import cross_val_score from sklearn's model_selection module
from sklearn.model_selection import cross_val_score
# EXAMPLE: By default cross_val_score returns 5 values (cv=5).
cross_val_score(clf,
X,
y,
scoring="accuracy",
cv=5)
# +
# EXAMPLE: Taking the mean of the returned values from cross_val_score
# gives a cross-validated version of the scoring metric.
cross_val_acc = np.mean(cross_val_score(clf,
X,
y,
scoring="accuracy",
cv=5))
cross_val_acc
# -
# In the examples, the cross-validated accuracy is found by taking the mean of the array returned by `cross_val_score()`.
#
# Now it's time to find the same for precision, recall and F1 score.
# +
# Find the cross-validated precision
cross_val_precision = np.mean(cross_val_score(clf,
X,
y,
scoring="precision",
cv=5))
cross_val_precision
# +
# Find the cross-validated recall
cross_val_recall = np.mean(cross_val_score(clf,
X,
y,
scoring="recall",
cv=5))
cross_val_recall
# +
# Find the cross-validated F1 score
cross_val_f1 = np.mean(cross_val_score(clf,
X,
y,
scoring="f1",
cv=5))
cross_val_f1
# -
# ### Exporting and importing a trained model
#
# Once you've trained a model, you may want to export it and save it to file so you can share it or use it elsewhere.
#
# One method of exporting and importing models is using the joblib library.
#
# In Scikit-Learn, exporting and importing a trained model is known as [model persistence](https://scikit-learn.org/stable/modules/model_persistence.html).
# Import the dump and load functions from the joblib library
from joblib import dump, load
# Use the dump function to export the trained model to file
dump(clf, "trained-classifier.joblib")
# +
# Use the load function to import the trained model you just exported
# Save it to a different variable name to the original trained model
loaded_clf = load("trained-classifier.joblib")
# Evaluate the loaded trained model on the test data
loaded_clf.score(X_test, y_test)
# -
# What do you notice about the loaded trained model results versus the original (pre-exported) model results?
#
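# The export/import round trip can also be sketched in a self-contained way (synthetic data and a temporary file, rather than our heart disease classifier), showing that a model reloaded with joblib makes identical predictions to the original:

```python
import os
import tempfile
import numpy as np
from joblib import dump, load
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for the trained classifier
X_demo, y_demo = make_classification(n_samples=100, random_state=0)
model = LogisticRegression(solver="liblinear").fit(X_demo, y_demo)

# Export to a temporary file, then import it back
with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "model.joblib")
    dump(model, path)
    reloaded = load(path)

# The reloaded model is an exact copy: same coefficients, same predictions
assert np.allclose(model.coef_, reloaded.coef_)
assert (model.predict(X_demo) == reloaded.predict(X_demo)).all()
```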
#
# ## Scikit-Learn Regression Practice
#
# For the next few exercises, we're going to be working on a regression problem, in other words, using some data to predict a number.
#
# Our dataset is a [table of car sales](https://docs.google.com/spreadsheets/d/1LPEIWJdSSJYrfn-P3UQDIXbEn5gg-o6I7ExLrWTTBWs/edit?usp=sharing), containing different car characteristics as well as a sale price.
#
# We'll use Scikit-Learn's built-in regression machine learning models to try and learn the patterns in the car characteristics and their prices on a certain group of the dataset before trying to predict the sale price of a group of cars the model has never seen before.
#
# To begin, we'll [import the data from GitHub](https://raw.githubusercontent.com/mrdbourke/zero-to-mastery-ml/master/data/car-sales-extended-missing-data.csv) into a pandas DataFrame, check out some details about it and try to build a model as soon as possible.
# +
# Read in the car sales data
car_sales = pd.read_csv("https://raw.githubusercontent.com/mrdbourke/zero-to-mastery-ml/master/data/car-sales-extended-missing-data.csv")
# View the first 5 rows of the car sales data
car_sales.head()
# -
# Get information about the car sales DataFrame
car_sales.info()
# Looking at the output of `info()`,
# * How many rows are there total?
# * What datatypes are in each column?
# * How many missing values are there in each column?
# Find number of missing values in each column
car_sales.isna().sum()
# Find the datatypes of each column of car_sales
car_sales.dtypes
# Knowing this information, what would happen if we tried to model our data as it is?
#
# Let's see.
# EXAMPLE: This doesn't work because our car_sales data isn't all numerical
from sklearn.ensemble import RandomForestRegressor
car_sales_X, car_sales_y = car_sales.drop("Price", axis=1), car_sales.Price
rf_regressor = RandomForestRegressor().fit(car_sales_X, car_sales_y)
# As we see, the cell above breaks because our data contains non-numerical values as well as missing data.
#
# To take care of some of the missing data, we'll remove the rows which have no labels (all the rows with missing values in the `Price` column).
# Remove rows with no labels (NaN's in the Price column)
car_sales.dropna(subset=["Price"], inplace=True)
# ### Building a pipeline
# Since our `car_sales` data has missing values and isn't all numerical, we'll have to fix both of these things before we can fit a machine learning model on it.
#
# There are ways we could do this with pandas but since we're practicing Scikit-Learn, we'll see how we might do it with the [`Pipeline`](https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html) class.
#
# Because we're modifying columns in our dataframe (filling missing values, converting non-numerical data to numbers) we'll need the [`ColumnTransformer`](https://scikit-learn.org/stable/modules/generated/sklearn.compose.ColumnTransformer.html), [`SimpleImputer`](https://scikit-learn.org/stable/modules/generated/sklearn.impute.SimpleImputer.html) and [`OneHotEncoder`](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html) classes as well.
#
# Finally, because we'll need to split our data into training and test sets, we'll import `train_test_split` as well.
# +
# Import Pipeline from sklearn's pipeline module
from sklearn.pipeline import Pipeline
# Import ColumnTransformer from sklearn's compose module
from sklearn.compose import ColumnTransformer
# Import SimpleImputer from sklearn's impute module
from sklearn.impute import SimpleImputer
# Import OneHotEncoder from sklearn's preprocessing module
from sklearn.preprocessing import OneHotEncoder
# Import train_test_split from sklearn's model_selection module
from sklearn.model_selection import train_test_split
# -
# Now we've got the necessary tools we need to create our preprocessing `Pipeline` which fills missing values along with turning all non-numerical data into numbers.
#
# Let's start with the categorical features.
# +
# Define different categorical features
categorical_features = ["Make", "Colour"]
# Create categorical transformer Pipeline
categorical_transformer = Pipeline(steps=[
# Set SimpleImputer strategy to "constant" and fill value to "missing"
("imputer", SimpleImputer(strategy="constant", fill_value="missing")),
# Set OneHotEncoder to ignore the unknowns
("onehot", OneHotEncoder(handle_unknown="ignore"))])
# -
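# As a side note, here's a tiny standalone sketch (with made-up car makes) of what `OneHotEncoder` does and why `handle_unknown="ignore"` matters: each category becomes its own binary column, and categories unseen during fitting are encoded as all zeros instead of raising an error.

```python
from sklearn.preprocessing import OneHotEncoder

encoder = OneHotEncoder(handle_unknown="ignore")

# Two categories -> two binary columns (alphabetical: Honda, Toyota)
encoded = encoder.fit_transform([["Toyota"], ["Honda"], ["Toyota"]]).toarray()
assert encoded.shape == (3, 2)
assert encoded.tolist() == [[0, 1], [1, 0], [0, 1]]

# A category never seen during fit becomes an all-zero row, not an error
unknown = encoder.transform([["BMW"]]).toarray()
assert (unknown == 0).all()
```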
# It would be safe to treat `Doors` as a categorical feature as well, however since we know the vast majority of cars have 4 doors, we'll impute the missing `Doors` values as 4.
# +
# Define Doors features
door_feature = ["Doors"]
# Create Doors transformer Pipeline
door_transformer = Pipeline(steps=[
# Set SimpleImputer strategy to "constant" and fill value to 4
("imputer", SimpleImputer(strategy="constant", fill_value=4))])
# -
# Now onto the numeric features. In this case, the only numeric feature is the `Odometer (KM)` column. Let's fill its missing values with the median.
# +
# Define numeric features (only the Odometer (KM) column)
numeric_features = ["Odometer (KM)"]
# Create numeric transformer Pipeline
numeric_transformer = Pipeline(steps=[
    # Set SimpleImputer strategy to fill missing values with the "median"
("imputer", SimpleImputer(strategy="median"))])
# -
# Time to put all of our individual transformer `Pipeline`s into a single `ColumnTransformer` instance.
# Setup preprocessing steps (fill missing values, then convert to numbers)
preprocessor = ColumnTransformer(
transformers=[
# Use the categorical_transformer to transform the categorical_features
("cat", categorical_transformer, categorical_features),
# Use the door_transformer to transform the door_feature
("door", door_transformer, door_feature),
# Use the numeric_transformer to transform the numeric_features
("num", numeric_transformer, numeric_features)])
# Boom! Now our `preprocessor` is ready, time to import some regression models to try out.
#
# Comparing our data to the [Scikit-Learn machine learning map](https://scikit-learn.org/stable/tutorial/machine_learning_map/index.html), we can see there's a handful of different regression models we can try.
#
# * [RidgeRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Ridge.html)
# * [SVR(kernel="linear")](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVR.html) - short for Support Vector Regressor, a form of support vector machine using a linear kernel.
# * [SVR(kernel="rbf")](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVR.html) - short for Support Vector Regressor, a form of support vector machine using an RBF (radial basis function) kernel.
# * [RandomForestRegressor](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html) - the regression version of RandomForestClassifier.
# +
# Import Ridge from sklearn's linear_model module
from sklearn.linear_model import Ridge
# Import SVR from sklearn's svm module
from sklearn.svm import SVR
# Import RandomForestRegressor from sklearn's ensemble module
from sklearn.ensemble import RandomForestRegressor
# -
# Again, thanks to the design of the Scikit-Learn library, we're able to use very similar code for each of these models.
#
# To test them all, we'll create a dictionary of regression models and an empty dictionary for regression model results.
# +
# Create dictionary of model instances, there should be 4 total key, value pairs
# in the form {"model_name": model_instance}.
# Don't forget there's two versions of SVR, one with a "linear" kernel and the
# other with kernel set to "rbf".
regression_models = {"Ridge": Ridge(),
"SVR_linear": SVR(kernel="linear"),
"SVR_rbf": SVR(kernel="rbf"),
"RandomForestRegressor": RandomForestRegressor()}
# Create an empty dictionary for the regression results
regression_results = {}
# -
# Our regression model dictionary is prepared, as well as an empty dictionary to append results to. Time to split the data into `X` (feature variables) and `y` (target variable) as well as training and test sets.
#
# In our car sales problem, we're trying to use the different characteristics of a car (`X`) to predict its sale price (`y`).
# +
# Create car sales X data (every column of car_sales except Price)
car_sales_X = car_sales.drop("Price", axis=1)
# Create car sales y data (the Price column of car_sales)
car_sales_y = car_sales["Price"]
# +
# Use train_test_split to split the car_sales_X and car_sales_y data into
# training and test sets.
# Give the test set 20% of the data using the test_size parameter.
# For reproducibility set the random_state parameter to 42.
car_X_train, car_X_test, car_y_train, car_y_test = train_test_split(car_sales_X,
car_sales_y,
test_size=0.2,
random_state=42)
# Check the shapes of the training and test datasets
car_X_train.shape, car_X_test.shape, car_y_train.shape, car_y_test.shape
# -
# * How many rows are in each set?
# * How many columns are in each set?
#
# Alright, our data is split into training and test sets, time to build a small loop which is going to:
# 1. Go through our `regression_models` dictionary
# 2. Create a `Pipeline` which contains our `preprocessor` as well as one of the models in the dictionary
# 3. Fits the `Pipeline` to the car sales training data
# 4. Evaluates the target model on the car sales test data and appends the results to our `regression_results` dictionary
# Loop through the items in the regression_models dictionary
for model_name, model in regression_models.items():
# Create a model pipeline with a preprocessor step and model step
model_pipeline = Pipeline(steps=[("preprocessor", preprocessor),
("model", model)])
# Fit the model pipeline to the car sales training data
print(f"Fitting {model_name}...")
model_pipeline.fit(car_X_train, car_y_train)
# Score the model pipeline on the test data appending the model_name to the
# results dictionary
print(f"Scoring {model_name}...")
regression_results[model_name] = model_pipeline.score(car_X_test,
car_y_test)
# Our regression models have been fit, let's see how they did!
# Check the results of each regression model by printing the regression_results
# dictionary
regression_results
# * Which model did the best?
# * How could you improve its results?
# * What metric does the `score()` method of a regression model return by default?
#
# Since we've fitted some models but only compared them via the default metric contained in the `score()` method (R^2 score or coefficient of determination), let's take the `RidgeRegression` model and evaluate it with a few other [regression metrics](https://scikit-learn.org/stable/modules/model_evaluation.html#regression-metrics).
#
# Specifically, let's find:
# 1. **R^2 (pronounced r-squared) or coefficient of determination** - Compares your model's predictions to the mean of the targets. Values can range from negative infinity (a very poor model) to 1. For example, if all your model does is predict the mean of the targets, its R^2 value would be 0. And if your model perfectly predicts a range of numbers, its R^2 value would be 1.
# 2. **Mean absolute error (MAE)** - The average of the absolute differences between predictions and actual values. It gives you an idea of how wrong your predictions were.
# 3. **Mean squared error (MSE)** - The average squared differences between predictions and actual values. Squaring the errors removes negative errors. It also amplifies outliers (samples which have larger errors).
#
# Scikit-Learn has a few classes built-in which are going to help us with these, namely, [`mean_absolute_error`](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_absolute_error.html), [`mean_squared_error`](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_squared_error.html) and [`r2_score`](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.r2_score.html).
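Before reaching for the built-in classes, the three metrics can be sanity-checked by hand with NumPy. The arrays below are made-up toy values, purely for illustration:

```python
import numpy as np

# Toy predictions (made-up values, just to illustrate the formulas)
y_true = np.array([3.0, 5.0, 2.0, 7.0])
y_pred = np.array([2.5, 5.0, 4.0, 8.0])

mae = np.mean(np.abs(y_true - y_pred))            # mean absolute error
mse = np.mean((y_true - y_pred) ** 2)             # mean squared error
ss_res = np.sum((y_true - y_pred) ** 2)           # residual sum of squares
ss_tot = np.sum((y_true - y_true.mean()) ** 2)    # total sum of squares
r2 = 1 - ss_res / ss_tot                          # coefficient of determination
```

Scikit-Learn's functions compute exactly these quantities, with extra conveniences such as input validation and sample weighting.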
# +
# Import mean_absolute_error from sklearn's metrics module
from sklearn.metrics import mean_absolute_error
# Import mean_squared_error from sklearn's metrics module
from sklearn.metrics import mean_squared_error
# Import r2_score from sklearn's metrics module
from sklearn.metrics import r2_score
# -
# All the evaluation metrics we're concerned with compare a model's predictions with the ground truth labels. Knowing this, we'll have to make some predictions.
#
# Let's create a `Pipeline` with the `preprocessor` and a `Ridge()` model, fit it on the car sales training data and then make predictions on the car sales test data.
# +
# Create RidgeRegression Pipeline with preprocessor as the "preprocessor" and
# Ridge() as the "model".
ridge_pipeline = Pipeline(steps=[("preprocessor", preprocessor),
("model", Ridge())])
# Fit the RidgeRegression Pipeline to the car sales training data
ridge_pipeline.fit(car_X_train, car_y_train)
# Make predictions on the car sales test data using the RidgeRegression Pipeline
car_y_preds = ridge_pipeline.predict(car_X_test)
# View the first 50 predictions
car_y_preds[:50]
# -
# Nice! Now we've got some predictions, time to evaluate them. We'll find the mean squared error (MSE), mean absolute error (MAE) and R^2 score (coefficient of determination) of our model.
# EXAMPLE: Find the MSE by comparing the car sales test labels to the car sales predictions
mse = mean_squared_error(car_y_test, car_y_preds)
# Return the MSE
mse
# Find the MAE by comparing the car sales test labels to the car sales predictions
mae = mean_absolute_error(car_y_test, car_y_preds)
# Return the MAE
mae
# Find the R^2 score by comparing the car sales test labels to the car sales predictions
r2 = r2_score(car_y_test, car_y_preds)
# Return the R^2 score
r2
# Boom! Our model could potentially do with some hyperparameter tuning (this would be a great extension). And we could probably do with finding some more data on our problem; 1000 rows doesn't seem to be sufficient.
#
# * How would you export the trained regression model?
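One common answer (not shown in this notebook) is serializing the fitted `Pipeline` with `joblib`. The snippet below is a minimal sketch that uses a toy `Ridge` model on made-up data in place of the car-sales pipeline:

```python
import numpy as np
from joblib import dump, load
from sklearn.linear_model import Ridge

# Toy stand-in for the fitted car-sales pipeline (made-up data)
X = np.arange(10, dtype=float).reshape(-1, 1)
y = 2 * X.ravel() + 1
model = Ridge().fit(X, y)

dump(model, "ridge_model.joblib")        # serialize the fitted model to disk
restored = load("ridge_model.joblib")    # load it back later (or elsewhere)
assert np.allclose(model.predict(X), restored.predict(X))
```

The same `dump`/`load` calls work on a whole `Pipeline`, so the preprocessor travels with the model.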
# ## Extensions
#
# You should be proud. Getting this far means you've worked through a classification problem and regression problem using pure (mostly) Scikit-Learn (no easy feat!).
#
# For more exercises, check out the [Scikit-Learn getting started documentation](https://scikit-learn.org/stable/getting_started.html). A good practice would be to read through it and for the parts you find interesting, add them into the end of this notebook.
#
# Finally, as always, remember, the best way to learn something new is to try it. And try it relentlessly. If you're unsure of how to do something, never be afraid to ask a question or search for something such as, "how to tune the hyperparameters of a scikit-learn ridge regression model".
| zero-to-mastery-ml-master/section-2-appendix-video-code/scikit-learn-exercises-solutions.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Map Comparison
# This notebook compares the three route models.
# ## Create an Instance of RouteEstimator
# +
# import the required modules and packages
# -
import sys
sys.path.append("route_estimator")
sys.path.append("route_estimator/models")
sys.path.append("route_estimator/weather")
sys.path.append("route_estimator/traffic")
sys.path.append("range_estimator")
from route_estimator import RouteEstimator
from range_estimator import RangeEstimator
from config import Config
route_estimator = RouteEstimator(Config())
# ## Activate Compare Mode
# This will add energy estimates to the graph for both the simple energy model and fastsim. This will take some time. On a rather fast CPU this took approximately 8 minutes.
# %%time
route_estimator.activate_both_models()
# ## Create a Map
# This will create an interactive map that should let you compare the three routes. The shortest distance is drawn in black. The simple energy model is drawn in blue. The fastsim model is drawn in red.
route_map = route_estimator.create_map()
route_map
| map_comparison.ipynb |
# ### Decision Tree Estimator on the all_vars_for_zeroinf_analysis Data
#
# - This Python notebook demonstrates creating an ML Pipeline to preprocess a dataset, train a Machine Learning model, and make predictions.
#
# - Data: the all_vars_for_zeroinf_analysis dataset
#
# - Goal: We want to learn to predict confirmed deaths from information such as confirmed cases, length of lockdown, population, and county health indicators
#
# - Approach: We will use Spark ML Pipelines, which help users piece together parts of a workflow such as feature processing and model training. We will also demonstrate model selection (a.k.a. hyperparameter tuning) using Cross Validation in order to fine-tune and improve our ML model.
# %sh
pip install mleap
from pyspark.sql.functions import log
# %sh ls /dbfs/FileStore/tables/
# +
# File location and type
file_location = "/FileStore/tables/all_vars_for_zeroinf_analysis.csv"
file_type = "csv"
# CSV options
infer_schema = "true"
first_row_is_header = "true"
delimiter = ","
## import all_vars_for_zeroinf_analysis file
# The applied options are for CSV files. For other file types, these will be ignored.
zeroinf_analysis = spark.read.format(file_type) \
.option("inferSchema", infer_schema) \
.option("header", first_row_is_header) \
.option("sep", delimiter) \
.load(file_location)
# +
#display(zeroinf_analysis)
# -
zeroinf_analysis.count()
zeroinf_analysis.dtypes
# +
new_column_name_list= list(map(lambda x: x.replace(".", "_"), zeroinf_analysis.columns))
zeroinf_analysis = zeroinf_analysis.toDF(*new_column_name_list)
# -
zeroinf_analysis.where( zeroinf_analysis['confirmed_deaths'].isNull() ).count()
zeroinf_analysis.where( zeroinf_analysis['pop_fraction'].isNull() ).count()
# +
def count_nulls(df):
null_counts = [] #make an empty list to hold our results
for col in df.dtypes: #iterate through the column data types we saw above, e.g. ('C0', 'bigint')
cname = col[0] #splits out the column name, e.g. 'C0'
ctype = col[1] #splits out the column type, e.g. 'bigint'
if ctype != 'string': #skip processing string columns for efficiency (can't have nulls)
nulls = df.where( df[cname].isNull() ).count()
result = tuple([cname, nulls]) #new tuple, (column name, null count)
null_counts.append(result) #put the new tuple in our result list
return null_counts
null_counts = count_nulls(zeroinf_analysis)
# -
null_counts
zeroinf_analysis_described = zeroinf_analysis.describe()
zeroinf_analysis_described.show()
from pyspark.sql.functions import skewness, kurtosis
from pyspark.sql.functions import var_pop, var_samp, stddev, stddev_pop, sumDistinct, ntile
zeroinf_analysis.select(skewness('confirmed_deaths')).show()
# +
from pyspark.sql import Row
columns = zeroinf_analysis_described.columns #list of column names
funcs = [skewness, kurtosis] #list of functions we want to include (imported earlier)
fnames = ['skew', 'kurtosis'] #a list of strings describing the functions in the same order
def new_item(func, column):
"""
This function takes in an aggregation function and a column name, then applies the aggregation to the
column, collects it and returns a value. The value is in string format despite being a number,
because that matches the output of describe.
"""
return str(zeroinf_analysis.select(func(column)).collect()[0][0])
new_data = []
for func, fname in zip(funcs, fnames):
row_dict = {'summary':fname} #each row object begins with an entry for "summary"
for column in columns[1:]:
row_dict[column] = new_item(func, column)
new_data.append(Row(**row_dict)) #using ** tells Python to unpack the entries of the dictionary
print(new_data)
# -
zeroinf_analysis_described.collect()
# +
new_describe = sc.parallelize(new_data).toDF() #turns the results from our loop into a dataframe
new_describe = new_describe.select(zeroinf_analysis_described.columns) #forces the columns into the same order
expanded_describe = zeroinf_analysis_described.unionAll(new_describe) #merges the new stats with the original describe
expanded_describe.show()
# +
import pandas as pd
#import matplotlib.pyplot as plt
# #%matplotlib inline #tells the Jupyter Notebook to display graphs inline (rather than in a separate window)
# -
confirmed_deaths = zeroinf_analysis[['confirmed_deaths']].collect()
confirmed_cases = zeroinf_analysis[['confirmed_cases']].collect()
print(confirmed_deaths[:5])
print(confirmed_cases[:5])
zeroinf_analysis.columns
# +
from pyspark.sql.functions import col
dataset = zeroinf_analysis.select( col('length_of_lockdown'), col('confirmed_cases'), col('confirmed_deaths'), col('POP_ESTIMATE_2018'), col('ICU_Beds'), col('Adult_obesity_percentage'),col('Quality_of_Life_rank'), col('Excessive_drinking_percentage'),col('Population_per_sq_mile'), col('Clinical_Care_rank'), col('Adult_smoking_percentage'), col('Total_Specialist_Physicians__2019_'), col('Physical_Environment_rank'), col('Number_of_Tests_with_Results_per_1_000_Population'))
# +
dataset.show(5)
# -
dataset.dtypes
# +
from pyspark.sql.functions import col
dataset = dataset.withColumnRenamed('confirmed_deaths', 'label')
# -
# +
from pyspark.ml.linalg import Vectors
from pyspark.ml import Pipeline
from pyspark.ml.regression import GBTRegressor, GeneralizedLinearRegression, AFTSurvivalRegression
from pyspark.ml.feature import VectorIndexer
from pyspark.ml.evaluation import RegressionEvaluator
from pyspark.sql.types import DoubleType
from pyspark.ml.feature import StringIndexer, VectorAssembler, OneHotEncoder
# -
valuableColumns = list(dataset.columns)
valuableColumns
# +
for col in valuableColumns[:-1]:
# Of course we can't change immutable values, but we can overwrite them
dataset = dataset.withColumn(col+"_d", dataset[col].cast("double"))
dataset = dataset.fillna(-1., subset=valuableColumns)
dataset
# -
from pyspark.ml import Pipeline
from pyspark.ml.regression import DecisionTreeRegressor
from pyspark.ml.feature import VectorIndexer
from pyspark.ml.evaluation import RegressionEvaluator
from pyspark.sql import Row
from pyspark.ml.linalg import Vectors
dataset=dataset.rdd.map(lambda x:(Vectors.dense(x[0:-1]), x[-1])).toDF(["features", "label"])
dataset.show()
# Automatically identify categorical features, and index them.
# We specify maxCategories so features with > 4 distinct values are treated as continuous.
featureIndexer = VectorIndexer(inputCol="features", outputCol="indexedFeatures", maxCategories=4).fit(dataset)
# Split the data into training and test sets (30% held out for testing)
(trainingData, testData) = dataset.randomSplit([0.7, 0.3])
# Train a DecisionTree model.
dt = DecisionTreeRegressor(featuresCol="indexedFeatures")
# Chain indexer and tree in a Pipeline
pipeline = Pipeline(stages=[featureIndexer, dt])
# Train model. This also runs the indexer.
model = pipeline.fit(trainingData)
# +
# Make predictions.
predictions = model.transform(testData)
# Select example rows to display.
predictions.select("prediction", "label", "features").show(5)
# -
# Select (prediction, true label) and compute test error
evaluator = RegressionEvaluator(
labelCol="label", predictionCol="prediction", metricName="rmse")
rmse = evaluator.evaluate(predictions)
print("Root Mean Squared Error (RMSE) on test data = %g" % rmse)
treeModel = model.stages[1]
# summary only
print(treeModel)
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder
grid = ParamGridBuilder() \
.addGrid(dt.maxDepth, [2, 3, 4, 5, 6, 7, 8]) \
.addGrid(dt.maxBins, [2, 4, 8]) \
.build()
cv = CrossValidator(estimator=pipeline, evaluator=evaluator, estimatorParamMaps=grid, numFolds=3)
# Explicitly create a new run.
# This allows this cell to be run multiple times.
# If you omit mlflow.start_run(), then this cell could run once,
# but a second run would hit conflicts when attempting to overwrite the first run.
import mlflow
import mlflow.mleap
import pyspark
#import pyspark.ml.mleap.SparkUtil
#import mlflow.mleap.SparkUtil
with mlflow.start_run():
cvModel = cv.fit(trainingData)
mlflow.set_tag('owner_team', 'UX Data Science') # Logs user-defined tags
test_metric = evaluator.evaluate(cvModel.transform(testData))
mlflow.log_metric('testData_' + evaluator.getMetricName(), test_metric) # Logs additional metrics
mlflow.mleap.log_model(spark_model=cvModel.bestModel, sample_input=testData, artifact_path='dbfs:/databricks/mlflow/2835302286394144') # Logs the best model via mleap
import mlflow
import mlflow.mleap
from mlflow import log_metric, log_param, log_artifacts
def fit_model():
import mlflow
import mlflow.mleap
from mlflow import log_metric, log_param, log_artifacts
# Start a new MLflow run
with mlflow.start_run() as run:
# Fit the model, performing cross validation to improve accuracy
grid = ParamGridBuilder().addGrid(dt.maxDepth, [2, 3, 4, 5, 6, 7, 8]).addGrid(dt.maxBins, [2, 4, 8]).build()
#paramGrid = ParamGridBuilder().addGrid(hashingTF.numFeatures, [1000, 2000]).build()
cv = CrossValidator(estimator=pipeline, evaluator=evaluator, estimatorParamMaps=grid, numFolds=3)
#cv = CrossValidator(estimator=pipeline, evaluator=MulticlassClassificationEvaluator(), estimatorParamMaps=paramGrid)
cvModel = cv.fit(trainingData)
#cvModel = cv.fit(df)
model = cvModel.bestModel
# Log the model within the MLflow run
mlflow.mleap.log_model(spark_model=model, sample_input=trainingData, artifact_path="dbfs:/databricks/mlflow/2835302286394144")
fit_model()
| Codes/Researching_the_model/Decision Tree Estimator.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] nbsphinx="hidden"
# # The Discrete Fourier Transform
#
# *This Jupyter notebook is part of a [collection of notebooks](../index.ipynb) in the bachelors module Signals and Systems, Comunications Engineering, Universität Rostock. Please direct questions and suggestions to [<EMAIL>](mailto:<EMAIL>).*
# -
# ## Fast Fourier Transform
#
# The discrete Fourier transform (DFT) can be implemented computationally very efficiently by the [fast Fourier transform (FFT)](https://en.wikipedia.org/wiki/Fast_Fourier_transform). Various algorithms have been developed for the FFT, resulting in various levels of computational efficiency for a wide range of DFT lengths. The concept of the so-called [radix-2 Cooley–Tukey algorithm](https://en.wikipedia.org/wiki/Cooley%E2%80%93Tukey_FFT_algorithm) is introduced in the following as a representative example.
# ### Radix-2 Decimation-in-Time Algorithm
#
# Let's first consider the straightforward implementation of the DFT $X[\mu] = \text{DFT}_N \{ x[k] \}$ by its definition
#
# \begin{equation}
# X[\mu] = \sum_{k=0}^{N-1} x[k] \, w_N^{\mu k}
# \end{equation}
#
# where $w_N = e^{-j \frac{2 \pi}{N}}$. Evaluation of the definition for $\mu = 0,1,\dots,N-1$ requires $N^2$ complex multiplications and $N \cdot (N-1)$ complex additions. The numerical complexity of the DFT scales consequently [on the order of](https://en.wikipedia.org/wiki/Big_O_notation) $\mathcal{O} (N^2)$.
#
# The basic idea of the radix-2 decimation-in-time (DIT) algorithm is to decompose the computation of the DFT into two summations: one over the even indexes $k$ of the signal $x[k]$ and one over the odd indexes. Splitting the definition of the DFT for even lengths $N$ and rearranging terms yields
#
# \begin{align}
# X[\mu] &= \sum_{\kappa = 0}^{\frac{N}{2} - 1} x[2 \kappa] \, w_N^{\mu 2 \kappa} +
# \sum_{\kappa = 0}^{\frac{N}{2} - 1} x[2 \kappa + 1] \, w_N^{\mu (2 \kappa + 1)} \\
# &= \sum_{\kappa = 0}^{\frac{N}{2} - 1} x[2 \kappa] \, w_{\frac{N}{2}}^{\mu \kappa} +
# w_N^{\mu} \sum_{\kappa = 0}^{\frac{N}{2} - 1} \, x[2 \kappa + 1] w_{\frac{N}{2}}^{\mu \kappa} \\
# &= X_1[\mu] + w_N^{\mu} \cdot X_2[\mu]
# \end{align}
#
# It follows from the last equality that the DFT can be decomposed into two DFTs $X_1[\mu]$ and $X_2[\mu]$ of length $\frac{N}{2}$ operating on the even and odd indexes of the signal $x[k]$. The decomposed DFT requires $2 \cdot (\frac{N}{2})^2 + N$ complex multiplications and $2 \cdot \frac{N}{2} \cdot (\frac{N}{2} -1) + N$ complex additions. For a length $N = 2^w$ with $w \in \mathbb{N}$ which is a power of two this principle can be applied recursively till a DFT of length $2$ is reached. This comprises then the radix-2 DIT algorithm. It requires $\frac{N}{2} \log_2 N$ complex multiplications and $N \log_2 N$ complex additions. The numerical complexity of the FFT algorithm scales consequently on the order of $\mathcal{O} (N \log_2 N)$. The notation DIT is due to the fact that the decomposition is performed with respect to the (time-domain) signal $x[k]$ and not its spectrum $X[\mu]$.
#
# The derivation of the FFT following above principles for the case $N=8$ is illustrated using signal flow diagrams. The first decomposition results in
#
# 
#
# where $\underset{a}{\oplus}$ denotes the weighted summation $g[k] + a \cdot h[k]$ of two signals whereby the weighted signal $h[k]$ is denoted by the arrow. The same decomposition is now applied to each of the DFTs of length $\frac{N}{2} = 4$ resulting in two DFTs of length $\frac{N}{4} = 2$
#
# 
#
# where $w_\frac{N}{2}^0 = w_N^0$, $w_\frac{N}{2}^1 = w_N^2$, $w_\frac{N}{2}^2 = w_N^4$ and $w_\frac{N}{2}^3 = w_N^6$. The resulting DFTs of length $2$ can be realized by
#
# 
#
# where $w_2^0 = 1$ and $w_2^1 = -1$. Combining the decompositions yields the overall flow diagram of the FFT for $N=8$
#
# 
#
# Further optimizations can be applied by noting that various common terms exist and that a sign reversal requires to swap only one bit in common representations of floating point numbers.
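The recursion derived above translates almost directly into code. The following is a minimal, unoptimized NumPy sketch of the radix-2 DIT algorithm (not part of the original notebook), checked against `numpy.fft.fft`:

```python
import numpy as np

def fft_radix2(x):
    """Recursive radix-2 decimation-in-time FFT; len(x) must be a power of two."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    if N == 1:
        return x
    X1 = fft_radix2(x[0::2])                          # DFT of even-indexed samples
    X2 = fft_radix2(x[1::2])                          # DFT of odd-indexed samples
    w = np.exp(-2j * np.pi * np.arange(N // 2) / N)   # twiddle factors w_N^mu
    # First half: X1[mu] + w^mu X2[mu]; second half uses periodicity of X1, X2
    # and w_N^{mu + N/2} = -w_N^{mu}
    return np.concatenate([X1 + w * X2, X1 - w * X2])

x = np.random.randn(8)
assert np.allclose(fft_radix2(x), np.fft.fft(x))
```

Production FFTs additionally avoid the recursion overhead and exploit the common-term optimizations mentioned above.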
# ### Benchmark
#
# The radix-2 DIT algorithm presented above can only be applied to lengths $N = 2^w$ which are powers of two. Similar and other principles can be applied to derive efficient algorithms for other cases. A wide variety of implemented FFTs is available for many hardware platforms. Their computational efficiency depends heavily on the particular algorithm and hardware used. In the following the performance of the [`numpy.fft`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.fft.fft.html#numpy.fft.fft) function is evaluated by comparing the execution times of a [DFT realized by matrix/vector multiplication](definition.ipynb#Matrix/Vector-Representation) with the FFT algorithm. Note that the execution of the following cell may take some time.
# +
import matplotlib.pyplot as plt
import numpy as np
import timeit
# %matplotlib inline
n = np.arange(14) # lengths = 2**n to evaluate
reps = 100 # number of repetitions per measurement
# measure execution times
gain = np.zeros(len(n))
for N in n:
length = 2**N
# setup environment for timeit
tsetup = 'import numpy as np; from scipy.linalg import dft; \
x=np.random.randn(%d)+1j*np.random.randn(%d); F = dft(%d)' % (length, length, length)
# DFT
tc = timeit.timeit('np.matmul(F, x)', setup=tsetup, number=reps)
# FFT
tf = timeit.timeit('np.fft.fft(x)', setup=tsetup, number=reps)
# gain by using the FFT
gain[N] = tc/tf
# show the results
plt.barh(n, gain, log=True)
plt.plot([1, 1], [-1, n[-1]+1], 'r-')
plt.yticks(n, 2**n)
plt.xlabel('Gain of FFT')
plt.ylabel('Length $N$')
plt.title('Ratio of execution times between DFT and FFT')
plt.grid()
# -
# **Exercise**
#
# * For which lengths $N$ is the FFT algorithm faster than the direct computation of the DFT?
# * Why is it slower below a given length $N$?
# * Does the trend of the gain follow the expected numerical complexity of the radix-2 FFT algorithm?
# + [markdown] nbsphinx="hidden"
# **Copyright**
#
# This notebook is provided as [Open Educational Resource](https://en.wikipedia.org/wiki/Open_educational_resources). Feel free to use the notebook for your own purposes. The text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT). Please attribute the work as follows: *<NAME>, Continuous- and Discrete-Time Signals and Systems - Theory and Computational Examples*.
| discrete_fourier_transform/fast_fourier_transform.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: torch
# language: python
# name: torch
# ---
# # Project: TMDB-Movies Dataset Analysis
# ## Table of Contents
# <ul>
# <li><a href="#intro">Introduction</a></li>
# <li><a href="#wrangling">Data Wrangling</a></li>
# <li><a href="#eda">Exploratory Data Analysis</a></li>
# <li><a href="#conclusions">Conclusions</a></li>
# </ul>
# <a id='intro'></a>
# ## Introduction
#
# [TMDB-Movies](https://www.themoviedb.org/) is The Movie Database (TMDb), a community-built movie and TV database. The database contains many movies and features, but I use the following properties:
#
# * popularity
# * budget_adj
# * revenue_adj
# * keywords
# * genres
# * release_date
#
# In this report, I explore the following questions:
#
# * How has the profitability of making films changed over time?
# * When do movies of each genre or keyword make the best profits?
#
# Throughout my analysis film profitability and popularity will be dependent variables, while release date, budget, genres, and keywords will be independent variables.
#
# +
# Use this cell to set up import statements for all of the packages that you
# plan to use.
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
# Remember to include a 'magic word' so that your visualizations are plotted
# inline with the notebook. See this page for more:
# http://ipython.readthedocs.io/en/stable/interactive/magics.html
# %matplotlib inline
#plt.style.use('seaborn')
# -
# <a id='wrangling'></a>
# ## Data Wrangling
# In the data wrangling step, I collect the meaningful columns and convert them to useful data types.
#
# * Remove unuseful columns
# * Change `release_date` format to `datetime`
# * Split `genres` and `keywords` string by "|"
# * Fill mean value in NaN or 0 data
#
# ### General Properties
# First of all I want to see what kind of data is in this dataset, by looking at some rows and the describe table.
df = pd.read_csv("tmdb-movies.csv")
df.head()
df.describe()
df.info()
# Remove unneeded columns
df2 = pd.DataFrame(df, columns = ['popularity', 'budget_adj', 'revenue_adj', 'keywords', 'genres', 'release_date', 'release_year', 'original_title'])
df2.keys()
df3 = pd.DataFrame(df2)
print(df3['release_date'])
df3['release_date'] = pd.to_datetime(df2['release_date'])
print(df3['release_date'])
df3['release_date'] = df3.apply(lambda x: x['release_date'].replace(year = x['release_year']), axis = 1)
print(df3['release_date'])
# +
# Split string data
df4 = pd.DataFrame(df3)
df4['keywords'] = df4['keywords'].str.split("|")
df4['genres'] = df4['genres'].str.split("|")
df4.head(3)
# -
# ### Data Cleaning
# The dataset has some null data, so I clean those columns.
df4.info()
# There are null data in `keywords` and `genres`. These columns cannot be filled with an estimated value (because they are string type), so I remove the rows that contain null data.
# remove null data
df5 = pd.DataFrame(df4)
df5.dropna(inplace = True)
df5.info()
# +
df6 = pd.DataFrame(df5)
df6.loc[df6['budget_adj'] == 0, 'budget_adj'] = df6['budget_adj'].mean()
df6.loc[df6['revenue_adj'] == 0, 'revenue_adj'] = df6['revenue_adj'].mean()
fig, ax = plt.subplots(2,1,figsize = (10, 8))
sns.histplot(data = df6['budget_adj'], bins = 50, fill = True, ax = ax[0], alpha = 0.5, label = 'after', color = 'red')
sns.histplot(data = df5['budget_adj'], bins = 50, fill = True, ax = ax[0], alpha = 0.5, label = 'before', color = 'blue')
sns.histplot(data = df6['revenue_adj'], bins = 50, fill = True, ax = ax[1], alpha = 0.5, color = 'red')
sns.histplot(data = df5['revenue_adj'], bins = 50, fill = True, ax = ax[1], alpha = 0.5, color = 'blue')
ax[0].legend()
fig.suptitle('Filling Zero Value to Mean')
plt.show()
# -
# <a id='eda'></a>
# ## Exploratory Data Analysis
#
# ### General (Compare all variable pairs)
# +
# ref: https://jehyunlee.github.io/2020/10/10/Python-DS-37-seaborn_matplotlib4/
g = sns.PairGrid(df6, diag_sharey = False, corner = True)
# diagonal
g.map_diag(sns.kdeplot, fill = True)
for i in range(3):
g.axes[i][i].spines['left'].set_visible(True)
g.axes[i][i].spines['top'].set_visible(True)
g.axes[i][i].spines['right'].set_visible(True)
# lower
g.map_lower(sns.scatterplot, s = 30, edgecolor = 'w')
g.map_lower(sns.regplot, scatter = False, truncate = False, ci = False)
g.map_lower(sns.kdeplot, alpha = 0.5)
g.fig.suptitle("TMDB-Movie", y = 1.01, weight = "bold", fontsize = 'x-large')
g.fig.tight_layout()
plt.show()
# -
# * Popularity has a positive correlation with budget and revenue!
# - popular movies make a lot of money
# - and spend a lot of money
# * If a movie company wants to make a lot of money, it must spend a lot of money!
# +
pick_genres = df6.explode('keywords')['keywords'].value_counts().head(3).keys()
g = sns.PairGrid(df6.explode('keywords').query('keywords in {}'.format(list(pick_genres))), hue = 'keywords', diag_sharey = False, corner = True)
# diagonal
g.map_diag(sns.kdeplot, fill = True)
for i in range(3):
g.axes[i][i].spines['left'].set_visible(True)
g.axes[i][i].spines['top'].set_visible(True)
g.axes[i][i].spines['right'].set_visible(True)
# lower
g.map_lower(sns.scatterplot, s = 30, edgecolor = 'w')
g.map_lower(sns.regplot, scatter = False, truncate = False, ci = False)
g.map_lower(sns.kdeplot, alpha = 0.5)
handles = g._legend_data.values()
labels = g._legend_data.keys()
# Insert legend
g.axes[1][0].legend(handles=handles, labels=labels,
bbox_to_anchor=(3.45, 1),
fontsize="large", frameon=False
)
g.fig.suptitle("TMDB-Movie Group by Top 3 Keywords, ", y = 1.01, weight = "bold", fontsize = 'x-large')
g.fig.tight_layout()
plt.show()
# -
# * Independent films don't make much money, but they also don't cost much to make
# * Movies based on novels are very efficient (they have the highest revenue per budget)
# * Films directed by women have the highest profits relative to their popularity.
# +
pick_genres = df6.explode('genres')['genres'].value_counts().head(3).keys()
g = sns.PairGrid(df6.explode('genres').query('genres in {}'.format(list(pick_genres))), hue = 'genres', diag_sharey = False, corner = True)
# diagonal
g.map_diag(sns.kdeplot, fill = True)
for i in range(3):
g.axes[i][i].spines['left'].set_visible(True)
g.axes[i][i].spines['top'].set_visible(True)
g.axes[i][i].spines['right'].set_visible(True)
# lower
g.map_lower(sns.scatterplot, s = 30, edgecolor = 'w')
g.map_lower(sns.regplot, scatter = False, truncate = False, ci = False)
g.map_lower(sns.kdeplot, alpha = 0.5)
handles = g._legend_data.values()
labels = g._legend_data.keys()
# Insert legend
g.axes[1][0].legend(handles=handles, labels=labels,
bbox_to_anchor=(3.45, 1),
fontsize="large", frameon=False
)
g.fig.suptitle("TMDB-Movie Group by Top 3 Genres, ", y = 1.01, weight = "bold", fontsize = 'x-large')
g.fig.tight_layout()
plt.show()
# -
# * The revenue compared to the budget for each genre is similar
# ### Research Question 1: How has the profitability of making films changed over time?
# +
df2['profit'] = df2['revenue_adj'] / df2['budget_adj']
fig, ax = plt.subplots(1,1,figsize = (10, 10))
df2.plot(x = 'release_year', y = 'budget_adj', ax = ax, kind = 'line', color = 'blue')
df2.plot(x = 'release_year', y = 'revenue_adj', ax = ax, kind = 'line', color = 'red')
# -
print(df2['release_date'])
# ### Research Question 2: When do movies of each genre or keyword make the best profits?
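No analysis is given above for this question. One possible sketch (with made-up toy data standing in for `df6`, and a hypothetical `month` column derived from `release_date`) is to explode the genre lists and group by genre and release month:

```python
import pandas as pd

# Toy data with the same column layout as df6 (values are made up)
toy = pd.DataFrame({
    "genres": [["Action", "Drama"], ["Drama"], ["Action"]],
    "revenue_adj": [300.0, 120.0, 90.0],
    "budget_adj": [100.0, 60.0, 90.0],
    "release_date": pd.to_datetime(["1995-06-01", "2001-11-15", "2010-06-20"]),
})
toy["profit"] = toy["revenue_adj"] / toy["budget_adj"]
toy["month"] = toy["release_date"].dt.month

# One row per (movie, genre), then average profit per genre and release month
best = (toy.explode("genres")
           .groupby(["genres", "month"])["profit"].mean()
           .sort_values(ascending=False))
print(best.head())
```

Applied to the real `df6`, the top of this series would indicate in which month each genre (or keyword, by swapping the column) has historically been most profitable.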
# <a id='conclusions'></a>
# ## Conclusion
#
from subprocess import call
call(['python', '-m', 'nbconvert', 'Investigate_a_Dataset.ipynb'])
| Knowledge/Python/210218 - [UDACITY]Introduction to Data Analysis/.ipynb_checkpoints/Investigate_a_Dataset-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Dispersion and Dissipation
#
# Copyright (C) 2010-2020 <NAME><br>
# Copyright (C) 2020 <NAME>
#
# <details>
# <summary>MIT License</summary>
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
# </details>
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
# Consider
# $$u_t+au_x=0$$
# with periodic boundary conditions.
#
# Set up parameters:
#
# - `a` for the advection speed
# - `lmbda` for the CFL number
# - `dx` for the grid spacing in $x$
# - `dt` for the time step
# - `ks` for the range of wave numbers to consider
a = 1
lmbda = 0.6/a
dx = .1
dt = dx*lmbda
ks = np.arange(1,16)
# Find $\omega(\kappa)$. Recall $\lambda = ah_t / h_x$.
#
# ETBS:
# $$ u_{k, \ell + 1} = \lambda u_{k - 1 , \ell} + (1 - \lambda) u_{k, \ell} $$
#
# Recall:
# * $r_k=\delta_{k,j}\Leftrightarrow\hat{\boldsymbol{r}} (\theta) = e^{- i \theta j}$.
# * Index sign flip between matrix and Toeplitz vector.
# * $e^{- i \omega (\kappa) h_t} = s (\kappa)$.
# +
#clear
kappa = ks*dx
p_ETBS = 1
q_ETBS = lmbda*np.exp(-1j*kappa) + (1-lmbda)
s_ETBS = q_ETBS/p_ETBS
omega_ETBS = 1j*np.log(s_ETBS)/dt
# -
# Again recall $\lambda = ah_t / h_x$.
#
# Lax-Wendroff:
# $$
# u_{k, \ell + 1} - u_{k, \ell}
# = -\frac{\lambda}2 (u_{k + 1, \ell} - u_{k - 1, \ell}) +
# \frac{\lambda^2}{2} ( u_{k + 1, \ell} - 2 u_{k, \ell} + u_{k - 1, \ell})
# $$
# +
#clear
p_LW = 1
q_LW = (
# u_{k,l}
1 - 2*lmbda**2/2
# u_{k+1,l}
+ np.exp(1j*kappa) * (-lmbda/2 + lmbda**2/2)
# u_{k-1,l}
+ np.exp(-1j*kappa) * (lmbda/2 + lmbda**2/2)
)
s_LW = q_LW/p_LW
omega_LW = 1j*np.log(s_LW)/dt
# + jupyter={"outputs_hidden": false}
plt.plot(ks, omega_ETBS.real, label="ETBS")
plt.plot(ks, omega_LW.real, label="Lax-Wendroff")
plt.plot(ks, a*ks, color='black', label='exact')
plt.legend(loc="best")
# + jupyter={"outputs_hidden": false}
plt.plot( ks, omega_ETBS.imag, label="ETBS")
plt.plot( ks, omega_LW.imag, label="Lax-Wendroff")
plt.legend(loc="best")
# -
| demos/fd-tdep/Dispersion and Dissipation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python (py39)
# language: python
# name: py39
# ---
# +
import numpy as np
import matplotlib.pyplot as plt
import os
import pandas as pd
import netCDF4 as nc
import datetime as dt
from salishsea_tools import evaltools as et, viz_tools
import gsw
import matplotlib.gridspec as gridspec
import matplotlib as mpl
import matplotlib.dates as mdates
import cmocean as cmo
import scipy.interpolate as sinterp
import pickle
import cmocean
import json
import f90nml
from collections import OrderedDict
fs=16
mpl.rc('xtick', labelsize=fs)
mpl.rc('ytick', labelsize=fs)
mpl.rc('legend', fontsize=fs)
mpl.rc('axes', titlesize=fs)
mpl.rc('axes', labelsize=fs)
mpl.rc('figure', titlesize=fs)
mpl.rc('font', size=fs)
mpl.rc('font', family='sans-serif', weight='normal', style='normal')
import warnings
#warnings.filterwarnings('ignore')
from IPython.display import Markdown, display
# %matplotlib inline
# + active=""
# from IPython.display import HTML
#
# HTML('''<script>
# code_show=true;
# function code_toggle() {
# if (code_show){
# $('div.input').hide();
# } else {
# $('div.input').show();
# }
# code_show = !code_show
# }
# $( document ).ready(code_toggle);
# </script>
#
# <form action="javascript:code_toggle()"><input type="submit" value="Click here to toggle on/off the raw code."></form>''')
#
# -
year=2009
PATH= '/results2/SalishSea/nowcast-green.201905/'
datadir='/ocean/eolson/MEOPAR/obs/WADE/ptools_data/ecology'
display(Markdown('''## Year: '''+ str(year)))
display(Markdown('''### Model output: '''+ PATH))
# ## Yearly model-data comparisons of nutrients, chlorophyll, temperature and salinity between 201905 runs and WADE observations
# ### load observations
dfSta=pickle.load(open(os.path.join(datadir,'sta_df.p'),'rb'))
dfSta.head()
dfCTD0=pickle.load(open(os.path.join(datadir,f'Casts_{str(year)}.p'),'rb'))
dfCTD0.head()
dfCTD=pd.merge(left=dfSta,right=dfCTD0,how='right',
left_on='Station',right_on='Station')
#right join means all rows in right table (dfCTD) are included in output
dfCTD.head()
# check that there are no stations without lat and lon:
dfCTD.loc[pd.isnull(dfCTD['Latitude'])]
# check one to one matches:
len(dfCTD),len(dfCTD0), len(dfSta)
# where no time is provided, set time to midday Pacific time = ~ 20:00 UTC for now
# (most sampling takes place during the day)
# accurate times will be provided at a later date
# the code below takes advantage of all elements in 'Date' having a time component
# set to midnight
dfCTD['dtUTC']=[iiD+dt.timedelta(hours=20) for iiD in dfCTD['Date']]
# We require the following columns:
# dtUTC datetime
# Lat Latitude
# Lon Longitude
# Z Depth, increasing downward (positive)
dfCTD.rename(columns={'Latitude':'Lat','Longitude':'Lon'},inplace=True)
dfCTD['Z']=-1*dfCTD['Z']
dfCTD.head()
# Calculate Absolute (Reference) Salinity (g/kg) and Conservative Temperature (deg C) from
# Salinity (psu) and Temperature (deg C):
press=gsw.p_from_z(-1*dfCTD['Z'],dfCTD['Lat'])
dfCTD['SA']=gsw.SA_from_SP(dfCTD['Salinity'],press,
dfCTD['Lon'],dfCTD['Lat'])
dfCTD['CT']=gsw.CT_from_t(dfCTD['SA'],dfCTD['Temperature'],press)
print(len(dfCTD),'data points')
print('Number of data points in each region:')
dfCTD.groupby('Basin')['SA'].count()
# ### set up variables for model-data matching
# start_date and end_date are the first and last dates that will
# be included in the matched data set
start_date = dt.datetime(year,1,1)
end_date = dt.datetime(year,12,31)
flen=1 # number of days per model output file. always 1 for 201905 and 201812 model runs
namfmt='nowcast' # for 201905 and 201812 model runs, this should always be 'nowcast'
# filemap is dictionary of the form variableName: fileType, where variableName is the name
# of the variable you want to extract and fileType designates the type of
# model output file it can be found in (usually ptrc_T for biology, grid_T for temperature and
# salinity)
filemap={'vosaline':'grid_T','votemper':'grid_T'}
# fdict is a dictionary mapping each file type to its time resolution. Here, 1 means hourly output
# (1h file) and 24 means daily output (1d file). In certain runs, multiple time resolutions
# are available
fdict={'ptrc_T':1,'grid_T':1}
# Note: to switch between 201812 and 201905 model results, change PATH
# to switch from hourly to daily model output, change fdict values from 1 to 24 (but daily
# files are not available for some runs and file types)
data=et.matchData(dfCTD,filemap,fdict,start_date,end_date,'nowcast',PATH,1,quiet=False);
cm1=cmocean.cm.thermal
with nc.Dataset('/data/eolson/results/MEOPAR/NEMO-forcing-new/grid/bathymetry_201702.nc') as bathy:
bathylon=np.copy(bathy.variables['nav_lon'][:,:])
bathylat=np.copy(bathy.variables['nav_lat'][:,:])
bathyZ=np.copy(bathy.variables['Bathymetry'][:,:])
# +
fig, ax = plt.subplots(1,1,figsize = (6,6))
with nc.Dataset('/data/vdo/MEOPAR/NEMO-forcing/grid/bathymetry_201702.nc') as grid:
viz_tools.plot_coastline(ax, grid, coords = 'map',isobath=.1)
colors=('blue','green','firebrick','darkorange','darkviolet','fuchsia',
'royalblue','darkgoldenrod','mediumspringgreen','deepskyblue')
datreg=dict()
for ind, iregion in enumerate(data.Basin.unique()):
datreg[iregion] = data.loc[data.Basin==iregion]
ax.plot(datreg[iregion]['Lon'], datreg[iregion]['Lat'],'.',
color = colors[ind], label=iregion)
ax.set_ylim(47, 49)
ax.legend(bbox_to_anchor=[1,.6,0,0])
ax.set_xlim(-124, -122);
ax.set_title('Observation Locations');
iz=(data.Z<15)
JFM=data.loc[iz&(data.dtUTC<=dt.datetime(year,4,1)),:]
Apr=data.loc[iz&(data.dtUTC<=dt.datetime(year,5,1))&(data.dtUTC>dt.datetime(year,4,1)),:]
MJJA=data.loc[iz&(data.dtUTC<=dt.datetime(year,9,1))&(data.dtUTC>dt.datetime(year,5,1)),:]
SOND=data.loc[iz&(data.dtUTC>dt.datetime(year,9,1)),:]
# +
def byDepth(ax,obsvar,modvar,lims):
ps=et.varvarPlot(ax,data,obsvar,modvar,'Z',(15,22),'z','m',('mediumseagreen','darkturquoise','navy'))
l=ax.legend(handles=ps)
ax.set_xlabel('Obs')
ax.set_ylabel('Model')
ax.plot(lims,lims,'k-',alpha=.5)
ax.set_xlim(lims)
ax.set_ylim(lims)
ax.set_aspect(1)
return ps,l
def byRegion(ax,obsvar,modvar,lims):
ps=[]
for ind, iregion in enumerate(data.Basin.unique()):
ps0=et.varvarPlot(ax,datreg[iregion],obsvar,modvar,
cols=(colors[ind],),lname=iregion)
ps.append(ps0)
l=ax.legend(handles=[ip[0][0] for ip in ps])
ax.set_xlabel('Obs')
ax.set_ylabel('Model')
ax.plot(lims,lims,'k-',alpha=.5)
ax.set_xlim(lims)
ax.set_ylim(lims)
ax.set_aspect(1)
return ps,l
def bySeason(ax,obsvar,modvar,lims):
for axi in ax:
axi.plot(lims,lims,'k-')
axi.set_xlim(lims)
axi.set_ylim(lims)
axi.set_aspect(1)
axi.set_xlabel('Obs')
axi.set_ylabel('Model')
ps=et.varvarPlot(ax[0],JFM,obsvar,modvar,cols=('crimson','darkturquoise','navy'))
ax[0].set_title('Jan-Mar')
ps=et.varvarPlot(ax[1],Apr,obsvar,modvar,cols=('crimson','darkturquoise','navy'))
ax[1].set_title('Apr')
ps=et.varvarPlot(ax[2],MJJA,obsvar,modvar,cols=('crimson','darkturquoise','navy'))
ax[2].set_title('May-Aug')
ps=et.varvarPlot(ax[3],SOND,obsvar,modvar,cols=('crimson','darkturquoise','navy'))
ax[3].set_title('Sep-Dec')
return
def ErrErr(fig,ax,obsvar1,modvar1,obsvar2,modvar2,lims1,lims2):
m=ax.scatter(data[modvar1]-data[obsvar1],data[modvar2]-data[obsvar2],c=data['Z'],s=1,cmap='gnuplot')
cb=fig.colorbar(m,ax=ax,label='Depth (m)')
ax.set_xlim(lims1)
ax.set_ylim(lims2)
ax.set_aspect((lims1[1]-lims1[0])/(lims2[1]-lims2[0]))
return m,cb
# -
### These groupings will be used to calculate statistics. The keys are labels and
### the values are corresponding dataframe views
statsubs=OrderedDict({'z < 15 m':data.loc[data.Z<15],
'15 m < z < 22 m':data.loc[(data.Z>=15)&(data.Z<22)],
'z >= 22 m':data.loc[data.Z>=22],
'z > 50 m':data.loc[data.Z>50],
'all':data,
'z < 15 m, JFM':JFM,
'z < 15 m, Apr':Apr,
'z < 15 m, MJJA':MJJA,
'z < 15 m, SOND': SOND,})
for iregion in data.Basin.unique():
statsubs[iregion]=datreg[iregion]
statsubs.keys()
# # Absolute Salinity (g/kg)
obsvar='SA'
modvar='mod_vosaline'
statsDict={year:dict()}
statsDict[year]['SA']=OrderedDict()
for isub in statsubs:
print(isub)
statsDict[year]['SA'][isub]=dict()
var=statsDict[year]['SA'][isub]
var['N'],mmean,omean,var['Bias'],var['RMSE'],var['WSS']=et.stats(statsubs[isub].loc[:,[obsvar]],
statsubs[isub].loc[:,[modvar]])
tbl,tdf=et.displayStats(statsDict[year]['SA'],level='Subset',suborder=list(statsubs.keys()))
tbl
# +
fig, ax = plt.subplots(1,2,figsize = (16,7))
ps,l=byDepth(ax[0],obsvar,modvar,(0,40))
ax[0].set_title('S$_A$ (g kg$^{-1}$) By Depth')
ps,l=byRegion(ax[1],obsvar,modvar,(0,40))
ax[1].set_title('S$_A$ (g kg$^{-1}$) By Region');
# -
fig, ax = plt.subplots(1,4,figsize = (16,3.3))
bySeason(ax,obsvar,modvar,(0,30))
fig,ax=plt.subplots(1,1,figsize=(20,.3))
ax.plot(data.dtUTC,np.ones(np.shape(data.dtUTC)),'k.')
ax.set_xlim((dt.datetime(year,1,1),dt.datetime(year,12,31)))
ax.set_title('Data Timing')
ax.yaxis.set_visible(False)
# # Conservative Temperature
obsvar='CT'
modvar='mod_votemper'
statsDict[year]['CT']=OrderedDict()
for isub in statsubs:
statsDict[year]['CT'][isub]=dict()
var=statsDict[year]['CT'][isub]
var['N'],mmean,omean,var['Bias'],var['RMSE'],var['WSS']=et.stats(statsubs[isub].loc[:,[obsvar]],
statsubs[isub].loc[:,[modvar]])
tbl,tdf=et.displayStats(statsDict[year]['CT'],level='Subset',suborder=list(statsubs.keys()))
tbl
# +
mv=(0,80)
fig, ax = plt.subplots(1,2,figsize = (16,7))
ps,l=byDepth(ax[0],obsvar,modvar,mv)
ax[0].set_title(r'$\Theta$ ($^{\circ}$C) By Depth')
ps,l=byRegion(ax[1],obsvar,modvar,mv)
ax[1].set_title(r'$\Theta$ ($^{\circ}$C) By Region');
# -
fig, ax = plt.subplots(1,4,figsize = (16,3.3))
bySeason(ax,obsvar,modvar,mv)
fig,ax=plt.subplots(1,1,figsize=(20,.3))
ax.plot(data.dtUTC,np.ones(np.shape(data.dtUTC)),'k.')
ax.set_xlim((dt.datetime(year,1,1),dt.datetime(year,12,31)))
ax.set_title('Data Timing')
ax.yaxis.set_visible(False)
# ### Temperature-Salinity by Region
def tsplot(ax,svar,tvar):
limsS=(0,36)
limsT=(5,20)
ss,tt=np.meshgrid(np.linspace(limsS[0],limsS[1],20),np.linspace(limsT[0],limsT[1],20))
rho=gsw.rho(ss,tt,np.zeros(np.shape(ss)))
r=ax.contour(ss,tt,rho,colors='k')
ps=list()
for ind, iregion in enumerate(data.Basin.unique()):
p=ax.plot(datreg[iregion][svar], datreg[iregion][tvar],'.',
color = colors[ind], label=iregion)
ps.append(p[0])
l=ax.legend(handles=ps,bbox_to_anchor=(1.01,1))
ax.set_ylim(limsT)
ax.set_xlim(limsS)
    ax.set_ylabel(r'$\Theta$ ($^{\circ}$C)')
ax.set_xlabel('S$_A$ (g kg$^{-1}$)')
ax.set_aspect((limsS[1]-limsS[0])/(limsT[1]-limsT[0]))
return
fig,ax=plt.subplots(1,2,figsize=(16,3.5))
tsplot(ax[0],'SA','CT')
ax[0].set_title('Observed')
tsplot(ax[1],'mod_vosaline','mod_votemper')
ax[1].set_title('Modelled')
tbl,tdf=et.displayStats(statsDict[year],level='Variable',suborder=list(statsubs.keys()))
tbl
| notebooks/Examples/CTD_Puget_WADE_Example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import nltk
import pandas as pd
from docx import Document
import os
'''files = list(os.listdir(os.getcwd()))
destination = os.path.join(os.getcwd(), 'pre_test')'''
'''for file in files:
file = file.lower()
if "pre" in file:
os.rename(os.path.join(path, file), os.path.join(destination, file))'''
pre_test_path = os.path.join(os.getcwd(), 'Unlabeled_transcripts\\pre_test')
pre_test_list = os.listdir(pre_test_path)
print(pre_test_list)
# Read the actress's and test subjects' responses into lists from the Word documents
def conversation(filename):
'''returns a list of conversation'''
text_A = []
text_T = []
    doc = Document(os.path.join(pre_test_path, filename))
for p in doc.paragraphs:
if "A:" in p.text:
text_A.append(p.text.strip(" A: "))
elif "T:" in p.text:
text_T.append(p.text.strip(" T: "))
return text_A, text_T
text_A_all = []
text_T_all = []
for file in pre_test_list:
text_A, text_T = conversation(file)
text_A_all.extend(text_A)
text_T_all.extend(text_T)
# Create a dataframe to collect each actress comment and test-subjects' responses
col_names = [p[:4] for p in pre_test_list]
df_pre = pd.DataFrame(index=text_A_all, columns = col_names)
df_pre.index.rename('Actor\'s_comments', inplace=True)
for file in pre_test_list:
text_A, text_T = conversation(file)
if len(text_A) > len(text_T):
text_T.append('')
elif len(text_A) < len(text_T):
text_A.append('')
new_column = pd.Series(text_T, name=file[:4], index=text_A)
df_pre.update(new_column)
df_pre.head()
# Compute the length of conversation for each participant
df_summary = pd.DataFrame(index = col_names, columns=['response_count'])
df_summary.index.rename(name = 'Participant_id', inplace = True)
new = {}
for participant in col_names:
new[participant] = df_pre[participant].value_counts(dropna=True).sum()
df_summary['response_count'] = list(new.values())
#new_column = pd.Series(list(new.values()), name = 'conv_length', index=sorted(list(new.keys())))
#df_summary.update(new_column)
total_response_count = {}
unique_words_count = {}
for participant in col_names:
temp_list = []
for w in list(df_pre[participant].dropna()):
temp_list.extend(w.split())
unique_words_count[participant] = len(set(temp_list))
total_response_count[participant] = len(temp_list)
df_summary['TotalWordCount'] = list(total_response_count.values())
df_summary['avgWordCount'] = df_summary['TotalWordCount'] / df_summary['response_count']
df_summary['avgWordCount'] = df_summary['avgWordCount'].round()
df_summary['Unique_words_count'] = list(unique_words_count.values())
df_summary['Lexical_diversity'] = df_summary['Unique_words_count'] / df_summary['TotalWordCount']
df_summary['Lexical_diversity'] = df_summary['Lexical_diversity'].round(2)
df_summary
| pre_test_summary_statistics.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Getting to know the Intersight APIs
#
# In this Python file, we use basic authentication with the Intersight APIs and perform a simple GET operation to display physical equipment claimed in Intersight. If you download this `*.ipynb` file and have Jupyter Notebook installed, you can replace the variables `secret_key_filename` and `api_key_id` below and this should work with your environment.
# +
"""
intersight_ucs_x_operations.py - shows how to use intersight REST API
author: <NAME> (<EMAIL>)
"""
import json
import requests
from intersight_auth import IntersightAuth
# -
# ## Let's first declare an AUTH object
#
# The private key is available in your Intersight account. Copy/paste the values and add them to a blank file named `SecretKey.txt`. Also, get a hold of your API key and copy/paste the value into the `api_key_id` variable below.
AUTH = IntersightAuth(
secret_key_filename='./key/SecretKey.txt',
api_key_id='paste_your_api_key_here'
)
# ## Establish a base URL
# We'll use the base URL of `https://www.intersight.com/api/v1/` for this exercise.
BURL = 'https://www.intersight.com/api/v1/'
# # Example GET operation
# ## Explore the GET operation by pulling a summary of physical equipment
#
# The GET operations provide insight into the hardware claimed by Intersight. In the example below, we'll get an inventory of compute nodes and see if they are managed by IMM (Intersight Managed Mode).
#
# First we'll set up a list containing a dictionary of the operation we would like to perform:
OPERATION = [
{
"resource_path":"compute/PhysicalSummaries",
"request_method":"GET"
}
]
response = requests.get(
BURL + OPERATION[0]['resource_path'],
auth=AUTH
)
response.json()['Results']
# ## Let's see how many items were returned by `Results`
# `Results` is a list, so we can use the built-in `len()` function to count the returned items. The count tells us how many physical items were returned by the call to `compute/PhysicalSummaries`.
len(response.json()['Results'])
# ## Results - bringing it all together
# Great! Now that we know the number of physical compute devices, we can pull more information from the returned JSON and organize it by Device, Chassis ID, Management Mode, Model, Memory, and CPU. The Management Mode shows whether the physical compute device is managed in Intersight Managed Mode.
#
# > Intersight Managed Mode (IMM) is a new architecture that manages the UCS Fabric Interconnected systems through a Redfish-based standard model. If you are familiar with the UCS blades, it means the Fabric Interconnect is fully managed by Intersight. Instead of having the familiar UCSM (UCS Manager) interface available directly from the Fabric Interconnect, the interface and all of the Fabric Interconnect operations are managed by Intersight.
#
# We do some CLI formatting to organize our data and see the type of compute hardware managed by Intersight along with its resources (memory and CPU). Then, we iterate over the JSON data and pull the data we're interested in seeing. In this instance, the Model shows the UCS-X series hardware.
#
# > Experiment! See if you can add more information to the list below by choosing other items from the returned JSON data above. For example, you could add a column with `NumCpuCores` and or `Ipv4Address`.
print ("{:<8} {:12} {:<18} {:<15} {:<10} {:<10}".format('Device','Chassis ID','Management Mode','Model','Memory','CPU'))
for num, items in enumerate(response.json()['Results'], start=1):
print ("{:<8} {:<12} {:<18} {:<15} {:<10} {:<10}".format(num, items['ChassisId'], items['ManagementMode'], items['Model'], items['AvailableMemory'], items['CpuCapacity']))
# # Example POST operation
| .ipynb_checkpoints/intersight_ucs_x_operations-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Multiple linear regression
# ## Import the relevant libraries
# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
from sklearn.linear_model import LinearRegression
# -
# ## Load the data
data = pd.read_csv('1.02. Multiple linear regression.csv')
data.head()
data.describe()
# ## Create the multiple linear regression
# ### Declare the dependent and independent variables
x = data[['SAT','Rand 1,2,3']]
y = data['GPA']
# ### Regression itself
reg = LinearRegression()
reg.fit(x,y)
reg.coef_
reg.intercept_
# ### Calculating the R-squared
reg.score(x,y)
# ### Formula for Adjusted R^2
#
# $R^2_{adj.} = 1 - (1-R^2)*\frac{n-1}{n-p-1}$
x.shape
# +
r2 = reg.score(x,y)
n = x.shape[0]
p = x.shape[1]
adjusted_r2 = 1-(1-r2)*(n-1)/(n-p-1)
adjusted_r2
# -
| Resources/Data-Science/Machine-Learning/Multiple-Linear-Regression/sklearn - Multiple Linear Regression and Adjusted R-squared.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Intro
#
# It's the last time we meet in class for exercises! And to celebrate this mile-stone, we will finally combine what we have learnt so far about our network and the text related to it to understand the communities of BotW characters!
#
# * We'll start with looking at communities and their words in two exercises
# - Part A: First we finish up the work on TF-IDF from last week but this time on communities
# - Part B: Second, we'll try something new and study the topics related to characters and communities
# * In the latter half of the exercises (Part C), we play around with sentiment analysis - trying to see if we can find differences between the communities.
#
# **Data to use:** Part A and B require to start with the ZeldaWiki pages of characters. You should use the output you saved from Week 7 Exercise 1!
# # Part A: Communities TF-IDF word-clouds
#
#
# We continue where we left off last time, so the aim of this part is to create community wordclouds based on TF-IDF.
#
# The aim is to understand which words are important for each community. And now that you're TF-IDF experts, we're going to use the code you wrote from last week.
#
# Let's start by creating $N_C$ documents, where $N_C$ is the number of communities you have found in Week 7 exercise 7.
#
# _Exercise 1_:
#
# > * Now that we have the communities, let's start by calculating the TF list for each community (use the same code you've written in ex. 7). Find the top 5 terms within each community.
# > * Next, calculate IDF for every word in every list (again by using the code from ex. 7).
# > * We're now ready to calculate TF-IDFs. Do that for each community.
# > * List the 5 top words for each community according to TF-IDF. Are these words more descriptive of the community than just the TF? Justify your answer.
# > * Create a wordcloud for each community. Do the wordclouds built from the TF lists (you can also use TF-IDF but need to re-normalize) enable you to understand the communities you have found (or are they just gibberish)? Justify your answer.
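# As a minimal sketch of the per-community TF-IDF computation above (the function name `tf_idf` and the toy community documents are made up for illustration), each community's token list is treated as one document:

```python
import math
from collections import Counter

def tf_idf(community_docs):
    """community_docs: community id -> list of tokens (one 'document' each)."""
    # raw term frequency per community
    tf = {c: Counter(tokens) for c, tokens in community_docs.items()}
    n_docs = len(community_docs)
    # document frequency: in how many communities does each term appear?
    df = Counter()
    for counts in tf.values():
        df.update(counts.keys())
    idf = {term: math.log(n_docs / df[term]) for term in df}
    return {c: {t: n * idf[t] for t, n in counts.items()}
            for c, counts in tf.items()}

docs = {"hyrule": ["link", "zelda", "sword", "link"],
        "gerudo": ["urbosa", "desert", "sword"]}
scores = tf_idf(docs)
# "link" occurs often in one community only, so it scores highest there;
# "sword" appears in every community, so its IDF (and TF-IDF) is zero
print(max(scores["hyrule"], key=scores["hyrule"].get))
```

# With real data, `community_docs` would map each community id to the concatenated, preprocessed tokens of its members' pages.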
# # Part B - Topic Modeling
# Now that we have characterized each community with the most important words (according to TF-IDF), let's explore whether we can distinguish them by the topics of their characters' page.
#
# To do this we need to learn about topic modeling, a technique to extract the hidden topics from large volumes of text and Latent Dirichlet Allocation(LDA), a popular algorithm for topic modeling.
#
# > **Video Lecture**: Introduction to Topic Modeling with LDA
from IPython.display import YouTubeVideo
YouTubeVideo("u-avnKI3oXU",width=800, height=450)
# > **Optional Reading**: For more information on how LDA works and is implemented, you can refer to [Latent Dirichlet Allocation](https://www.jmlr.org/papers/volume3/blei03a/blei03a.pdf)
# Now to the exercise! We want to extract the topics of the ZeldaWiki, to see whether we can further characterize the communities through the topics of their characters. Follow the steps below for success.
#
# _Setup:_ LDA and gensim
# > **Install and import**
# > * LDA has an excellent implementation in [Python’s Gensim package](https://radimrehurek.com/gensim/), which you can install via `pip install --upgrade gensim`
# > * Have a look at the [documentation of Gensim](https://radimrehurek.com/gensim/auto_examples/index.html#documentation) and check that you correctly installed gensim by typing `import gensim` in your notebook.
# > * We will also need `import gensim.corpora as corpora` and `from gensim.models import CoherenceModel` to prepare the input and tune our model respectively.
# >
# > **Input** The two main inputs to the LDA algorithm are a dictionary `id2word` mapping each word to an index and the corpus:
# > * First create a list of lists containing the preprocessed text of each character's page (you should have a list per character).
# > * *Optional:* I found better topics by re-running the data preprocessing (Week 7 Ex. 1) and adding a step to compute bigrams. You can find an example to follow in the [documentation](https://radimrehurek.com/gensim/auto_examples/tutorials/run_lda.html#sphx-glr-auto-examples-tutorials-run-lda-py)
# > * Second, build the dictionary `id2word` by using `corpora.Dictionary(YOUR_LIST_OF_LISTS)`
# > * Finally, build your corpus by mapping the words in `YOUR_LIST_OF_LISTS` as follows `id2word.doc2bow(PAGE)` (here page is one entry of `YOUR_LIST_OF_LISTS`)
# >
#
# _Exercise 2:_ LDA model and the ZeldaWiki.
#
# > **Number of Topics** Topic coherence is a measure to judge how good a given topic model is. Do as follows:
# >
# > * You can run the LDA model `gensim.models.LdaMulticore()` with input `id2word` and your `corpus`. As we do not expect many topics per document (i.e. in each character page) we will set a low value of alpha (e.g. `alpha=0.3`).
# > * Now build many LDA models with different values of number of topics $N_t$ and plot the coherence score.
# > * Pick the $N_t$ that gives the highest coherence value. How many topics did you find? What is the coherence score corresponding to the chosen model?
# >
# > **Results**
# >
# > * Print the top 10 keywords and their importance for each topic by using `show_topics()` and interpret the topics you have found. Describe in general what they are about and whether they are different from each other.
# > * We now want to find the association between characters of the same community and their topics:
# > * Get the association between your documents (each character's page) and the topics via `get_document_topics()`.
# > * Group the values you have obtained by averaging over each community and topic, i.e. for each character in a community $c$ find the average of their score over the same topic $t$. The result should be in the form of a matrix $M\in\mathbb{R}^{N_c\times N_t}$, with communities on rows and topics on columns.
# > * Create a `heatmap` of $M$ and describe your result. Are communities characterized by different topics? Which communities discuss different topics than the others? Did you expect this result?
# > * *Optional* You can rerun your code on single communities instead to find intra-community topics (similar to what we have done with TF-IDF). Try it out with the 3 biggest communities!
# >
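# The grouping step above can be sketched in pure Python (the hand-made scores below stand in for the output of gensim's `get_document_topics()`, and all names are illustrative):

```python
from collections import defaultdict

def community_topic_matrix(doc_topics, community_of, n_topics):
    """Average each character's topic scores within its community.

    doc_topics:   character -> list of (topic_id, score) pairs
    community_of: character -> community id
    Returns community id -> row of n_topics averaged scores.
    """
    sums = defaultdict(lambda: [0.0] * n_topics)
    counts = defaultdict(int)
    for char, topics in doc_topics.items():
        c = community_of[char]
        counts[c] += 1
        for t, score in topics:
            sums[c][t] += score
    return {c: [s / counts[c] for s in row] for c, row in sums.items()}

doc_topics = {"link": [(0, 0.9), (1, 0.1)],
              "zelda": [(0, 0.7), (1, 0.3)],
              "urbosa": [(1, 1.0)]}
community_of = {"link": 0, "zelda": 0, "urbosa": 1}
M = community_topic_matrix(doc_topics, community_of, n_topics=2)
print(M)  # community 0 leans to topic 0, community 1 to topic 1
```

# Plotting the rows of `M` with a heatmap then gives the community-by-topic view the exercise asks for.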
#
# _Exercise 3:_ LDA topics visualization.
#
# > **Visualization** We can examine the topics we previously found and their associated keywords with [pyLDAvis](https://pyldavis.readthedocs.io/en/latest/index.html), an interactive tool designed to work well with jupyter notebooks.
# >
# > * Use `pip install pyldavis` to install the package and type `pyLDAvis.enable_notebook()` in your notebook.
# > * Now visualize your results with `pyLDAvis.gensim_models.prepare`. Describe the meaning of the visualization. What do the "bubbles" on the left plot represent? And, what is the meaning of the bars on the right plot?
# > * Which one is the most prevalent topic? Report the 5 most relevant terms.
# > * Are there overlapping topics? What do they describe in general? Report their 5 most relevant terms.
# # Part C - Sentiment analysis
# Sentiment analysis is another highly useful technique which we'll use to make sense of the ZeldaWiki
# data. Further, experience shows that it might well be very useful when you get to the project stage of the class.
#
#
# > **Video Lecture**: Uncle Sune talks about sentiment and his own youthful adventures.
#
#
YouTubeVideo("JuYcaYYlfrI",width=800, height=450)
# There's also this one from 2010
YouTubeVideo("hY0UCD5UiiY",width=800, height=450)
# > Reading: [Temporal Patterns of Happiness and Information in a Global Social Network: Hedonometrics and Twitter](http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0026752)
#
# _Exercise_ 4: Dictionary-based sentiment within the communities data.
# >
# > * Download the LabMT wordlist. It's available as supplementary material from [Temporal Patterns of Happiness and Information in a Global Social Network: Hedonometrics and Twitter](http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0026752) (Data Set S1). Describe briefly how the list was generated.
# > * We need a list of tokens as an input. We can use the preprocessed characters pages, however, Wiki pages are written without a clear sentiment. Thus, I have put together for you a dataset of characters' [dialogue in BotW](https://github.com/SocialComplexityLab/socialgraphs2021/blob/main/files/CharactersDialogue.json). The file contains a dictionary, whose keys are characters names and whose values are lists containing sentences said by a character in the game.
# > * **Note** Not all characters in the network have data (especially enemies and bosses). Thus, we will consider just communities with nodes having some dialogue in the following.
# > * Based on the LabMT word list, write a function that calculates sentiment given a list of tokens. To get the list of tokens follow these 3 steps: lemmatize, set to lower case, and tokenize the sentences.
# > * Iterate over the nodes in your network, tokenize each sentence, and calculate the average sentiment of every character. Now you have sentiment as a new nodal property.
# > * Remember histograms? Create a histogram of all characters' associated dialogue sentiments.
# > * What are the 10 characters with the happiest and saddest dialogue?
# > * Now we average the average sentiment of the nodes in each community to find a *community level sentiment*.
# > - Create a bar plot showing the average sentiment of each community (Optional: add error-bars using the standard deviation).
# > - Name each community by its three most connected characters.
# > - What are the three happiest communities?
# > - What are the three saddest communities?
# > - Do these results confirm what you can learn about each community by skimming the wikipedia pages?
#
# Calculating sentiment takes a long time, so arm yourself with patience as your code runs (remember to check that it runs correctly before waiting patiently). Further, the tips below may speed things up. And save your results somewhere, so you don't have to start over.
#
# **Tips for speed**
# * If you use `freqDist` prior to finding the sentiment, you only have to find it for every unique word and hereafter you can do a weighted mean.
# * More tips for speeding up loops https://wiki.python.org/moin/PythonSpeed/PerformanceTips#Loops
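# The `freqDist` tip can be sketched as follows: score each unique word once and take a count-weighted mean (the function name `avg_sentiment` and the three labMT scores below are placeholders, not the real word list):

```python
from collections import Counter

def avg_sentiment(tokens, happiness):
    """Count-weighted mean happiness over the words that have a score."""
    counts = Counter(t for t in tokens if t in happiness)
    total = sum(counts.values())
    if total == 0:
        return None  # nothing in this text appears in the word list
    return sum(happiness[w] * n for w, n in counts.items()) / total

labmt = {"laughter": 8.50, "happy": 8.30, "war": 1.80}  # placeholder scores
print(avg_sentiment(["happy", "happy", "war", "hyrule"], labmt))
```

# Each scored word's happiness is looked up once, however often it occurs, which is much faster than scoring token by token.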
YouTubeVideo("Pgxbnfi93Jk",width=800, height=450)
# We will now explore another method for sentiment analysis, called VADER, which uses both dictionary-based methods and rule-based methods. If you are interested in finding out more, you can find more material in the [original article](http://comp.social.gatech.edu/papers/icwsm14.vader.hutto.pdf).
#
# _Exercise 5_: (dictionary & rule)-based sentiment within the communities data.
#
# > * Download the VADER lexicon dictionary from [here](https://raw.githubusercontent.com/cjhutto/vaderSentiment/master/vaderSentiment/vader_lexicon.txt). Read the description of the VADER lexicon in the README file of [VADER Github repo](https://github.com/cjhutto/vaderSentiment). How was the dictionary created?
# > * Explore the VADER lexicon data.
# > * What are the top 10 words by polarity, and the bottom 10? Does this surprise you?
# > * Plot the distribution of polarity according to the VADER Lexicon data. What are the differences compared to the labMT data? Is it to be expected?
# > * Install the VADER library using `pip install vaderSentiment`.
# > * Go through the example sentences in the [vaderSentiment documentation page](https://github.com/cjhutto/vaderSentiment#code-examples) (Section Code Examples). Compute the compound polarity for each sentence.
# > * Try VADER on your own sentences. Can you find a sentence where VADER gets it wrong (the polarity has the opposite sign of what one would expect)? You can have a look at VADER's set of rules in the paper linked above.
#
# _Exercise 6_: VADER and BotW Sentiment
#
# > * Now use the BotW sentences from Exercise 4. Apply VADER to each individual sentence (i.e. list of tokens). Then compute the average polarity for each character. (**hint** remember that now it is important to keep punctuation, ALL-CAPS words, etc.)
# > * What are the 10 characters with happiest and saddest pages according to VADER?
# > * Aggregate by community and compute the average community compound polarity. Create a bar plot showing the average compound polarity of each community and add error-bars using the standard deviation.
# > - Name each community by its three most connected characters.
# > - What are the three happiest communities according to VADER?
# > - What are the three saddest communities according to VADER?
# > * Do the bar plot and results from the previous step look different from the ones you obtained in Exercise 4? How do you explain it?
# > * What is the advantage of using a rule-based method over the simple dictionary-based approach?
# You've made it!! We are at the end of our lectures and now it's time for you to put together all you have learnt and use it for your own project!!
| lectures/Week8.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import matplotlib.pyplot as plt
xs=[1,2,3,4,5,6,7]
ys = [1, 1, -1, 1, 1, 1, 1, 1, 1, 1, 1, -1, 1, 1, 1]  # signal with two -1 "spikes"
plt.plot(ys)
w = [1, -1, 1]  # convolution kernel
out1 = np.convolve(ys, w, mode='same')   # output padded to len(ys)
out2 = np.convolve(ys, w, mode='valid')  # only full-overlap positions
out1
out2
len(ys), len(out1), len(out2), len(ys) - len(w) + 1  # 'valid' length = len(ys) - len(w) + 1
plt.plot(out1)
plt.plot(out2)
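# What `np.convolve` computes, spelled out (the kernel is flipped, per the usual definition of discrete convolution; here `w` happens to be symmetric, so flipping does not change it). Mode `'valid'` keeps only the `len(ys) - len(w) + 1` positions where the kernel fully overlaps the signal.

```python
import numpy as np

ys = [1, 1, -1, 1, 1, 1, 1, 1, 1, 1, 1, -1, 1, 1, 1]
w = [1, -1, 1]

# manual 'valid' convolution: slide the flipped kernel over the signal
w_flipped = w[::-1]
manual_valid = [sum(ys[i + j] * w_flipped[j] for j in range(len(w)))
                for i in range(len(ys) - len(w) + 1)]

print(np.array_equal(manual_valid, np.convolve(ys, w, mode='valid')))  # True
```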
| Deep Learning with Tensorflow/Untitled14.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Vanishing Gradients and Exploding Gradients in RNNs : Ungraded Lecture Notebook
# In this notebook, you'll take another look at vanishing and exploding gradients in RNNs, from an intuitive standpoint.
#
# ## Background
# Vanilla RNNs are prone to vanishing and exploding gradients when dealing with long sequences. Recall that the gradient with respect to $W_{hh}$ is proportional to a sum of products:
#
# $$\frac{\partial L}{\partial W_{hh}} \propto \sum_{1\le k\le t} \left(\prod_{t\ge i>k} \frac{\partial h_i}{\partial h_{i-1}}\right)\frac{\partial h_k}{\partial W_{hh}}$$
#
# where, for step $k$ far away from the place where the loss is computed ($t$), the product
#
# $$\prod_{t\ge i>k} \frac{\partial h_i}{\partial h_{i-1}}$$
#
# can either go to 0 or infinity depending on the values of the partial derivative of the hidden state $\frac{\partial h_i}{\partial h_{i-1}}$. In this ungraded lab, you will take a closer look at the partial derivative of the hidden state, and I'll show you how gradient problems arise when dealing with long sequences in vanilla RNNs.
#
# ## Imports
# + tags=[]
import numpy as np
import matplotlib.pyplot as plt
import ipywidgets as widgets
from ipywidgets import interact, interactive, fixed, interact_manual
# %matplotlib inline
# -
# ## Activations & Partial Derivative
#
# ### Partial Derivative
# Recall that the hidden state at step $i$ is defined as:
#
# $$h_i= \sigma(W_{hh} h_{i-1} + W_{hx} x_i + b_h)$$
#
# where $\sigma$ is an activation function (usually sigmoid). So, you can use the chain rule to get the partial derivative:
#
# $$\frac{\partial h_i}{\partial h_{i-1}} = W_{hh}^T \text{diag} (\sigma'(W_{hh} h_{i-1} + W_{hx} x_i + b_h))$$
#
# $W_{hh}^T$ is the transpose of the weight matrix, and $\sigma'$ is the gradient of the activation function. The gradient of the activation function is a vector of size equal to the hidden state size, and $\text{diag}$ converts that vector into a diagonal matrix. You <strong>don't have to worry about the calculus</strong> behind this derivative; you only need to be familiar with the form it takes.
#
# ### Vanishing and Exploding Gradient Conditions
#
# When the product
#
# $$\prod_{t\ge i > k} \frac{\partial h_i}{\partial h_{i-1}} = \prod_{t\ge i > k} W_{hh}^T \text{diag} (\sigma'(W_{hh} h_{i-1} + W_{hx} x_i + b_h))$$
#
# approaches 0, you face vanishing gradient problems where the contribution of item $k$ in the sequence is neglected. Conversely, when the product approaches infinity, you face exploding gradients and convergence problems. For the product to approach either of those values, two conditions need to be met:
#
# <ol>
# <li> The derivative of the activation function is bounded by some value $\alpha$ </li>
# <li> The absolute value of the largest eigenvalue of the weight matrix $W_{hh}$ is lower than $\frac{1}{\alpha}$ (sufficient condition for vanishing gradient), or greater than $\frac{1}{\alpha}$ (necessary condition for exploding gradient).</li>
# </ol>
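# As a quick numerical sanity check (a sketch, not part of the original notebook), you can build a matrix with prescribed eigenvalues, the same trick used later in this notebook, and confirm its spectral radius sits below $1/\alpha = 4$ for the sigmoid:

```python
import numpy as np

rng = np.random.default_rng(0)
eig = rng.random(5) * 4                      # eigenvalues in [0, 4)
Q = rng.standard_normal((5, 5))              # random eigenvectors
W_hh = Q @ np.diag(eig) @ np.linalg.inv(Q)   # matrix with those eigenvalues

# spectral radius = largest eigenvalue magnitude
spectral_radius = np.max(np.abs(np.linalg.eigvals(W_hh)))
print(spectral_radius < 4)  # True: below 1/alpha = 4 for sigmoid
```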
#
# ### Activation
#
# So let's check the first condition for the sigmoid function. Run the cell below to get an interactive plot of the sigmoid function and its derivative at different points. Feel free to change the argument values to check if the derivative is bounded or not.
# +
# Data
### START CODE HERE ###
x = np.linspace(-6, 6, 100) # try changing the range of values in the data. eg: (-100,100,1000)
### END CODE HERE ###
# Activation
# Interval [0, 1]
def sigmoid(x):
return 1 / (1 + np.exp(-x))
activations = sigmoid(x)
# Gradient
# Interval [0, 0.25]
def sigmoid_gradient(x):
return sigmoid(x) * (1 - sigmoid(x))
# Add the tangent line
def plot_func(x_tan = 0):
plt.plot(x, activations)
plt.title("Sigmoid Function and Gradient")
plt.xlabel("$x$")
plt.ylabel("sigmoid($x$)")
plt.text(x_tan, sigmoid(x_tan), f"Gradient: {sigmoid_gradient(x_tan):.4f}")
plt.xlim((-6,6))
plt.ylim((-0.5,1.5))
plt.rcParams['figure.figsize'] = [7, 5]
y_tan = sigmoid(x_tan) # y value
span = 4 # line span along x axis
data_tan = np.linspace(x_tan - span, x_tan + span) # x values to plot
gradient_tan = sigmoid_gradient(x_tan) # gradient of the tangent
tan = y_tan + gradient_tan * (data_tan - x_tan) # y values to plot
plt.plot(x_tan, y_tan, marker="o", color="orange", label=True) # marker
plt.plot(data_tan, tan, linestyle="--", color="orange") # line
plt.show()
interact(plot_func, x_tan = widgets.FloatSlider(value=0,
min=-6,
max=6,
step=0.5))
# -
# As you checked, the derivative of the sigmoid function is bounded by $\alpha=\frac{1}{4}$. So vanishing gradient problems will arise for long-term components if the largest eigenvalue of $W_{hh}$ is lower than 4, and exploding gradient problems will happen if the largest eigenvalue is larger than 4.
# ## Vanishing Gradient with Sigmoid Activation
# Let's generate a random checkpoint for an RNN model and assume that the sequences are of length $t=20$:
np.random.seed(12345)
t = 20
h = np.random.randn(5,t)
x = np.random.randn(5,t)
b_h = np.random.randn(5,1)
W_hx = np.random.randn(5,5)
# In the next cell, you will create a random matrix $W_{hh}$ with eigenvalues lower than four.
eig = np.random.rand(5)*4 #Random eigenvalues lower than 4
Q = np.random.randn(5,5) #Random eigenvectors stacked in matrix Q
W_hh = Q@np.diag(eig)@np.linalg.inv(Q) #W_hh
# Finally, let us define the product function for a given step $k$.
def prod(k):
    # accumulate the matrix product of the Jacobians from step t down to step k
    p = np.eye(5)
    for i in range(t-1, k-2, -1):
        p = p @ (W_hh.T @ np.diag(sigmoid_gradient(W_hh @ h[:, i] + W_hx @ x[:, i] + b_h[:, 0])))
    return p
# Now, you can plot the contribution to the gradient for different steps $k$.
# +
product = np.zeros(20)
for k in range(t):
product[k] = np.max(prod(k+1))
plt.plot(np.array(range(t))+1, product)
plt.title("Maximum contribution to the gradient at step $k$");
plt.xlabel("k");
plt.ylabel("Maximum contribution");
plt.xticks(np.array(range(t))+1);
# -
# With the largest eigenvalue of the weight matrix $W_{hh}$ lower than 4 (and a sigmoid activation function), the contribution of the early items in the sequence to the gradient goes to zero. In practice, this will make your RNN rely only upon the most recent items in the series.
# ## Exploding Gradient with Sigmoid Activation
# An essential difference with the vanishing gradient problem is that the condition for exploding gradients is necessary but not sufficient. Therefore, it is very likely that you will face vanishing gradients rather than exploding gradient problems. However, let's fabricate an example for exploding gradients.
np.random.seed(12345)
t = 20
h = np.zeros((5,t))
x = np.zeros((5,t))
b_h = np.zeros((5,1))
W_hx = np.random.randn(5,5)
# In the next cell, a random matrix $W_{hh}$ with eigenvalues greater than 4 is created
eig = 4 + np.random.rand(5)*10 #Random eigenvalues greater than 4
Q = np.random.randn(5,5) #Random eigenvectors stacked in matrix Q
W_hh = Q@np.diag(eig)@np.linalg.inv(Q) #W_hh
# Now, you can plot the contribution to the gradient for different steps $k$.
# +
product = np.zeros(20)
for k in range(t):
product[k] = np.max(prod(k+1))
plt.plot(np.array(range(t))+1, product)
plt.title("Maximum contribution to the gradient at step $k$");
plt.xlabel("k");
plt.ylabel("Maximum contribution");
plt.xticks(np.array(range(t))+1);
# -
# With the largest eigenvalue of the weight matrix $W_{hh}$ greater than 4 (and a sigmoid activation function), the contribution of the early items in the sequence to the gradient goes to infinity. In practice, this will make you face convergence problems during training.
# Now you are more familiar with the conditions for vanishing and exploding gradient problems. You should take away that for the vanishing gradient problem it is <strong>sufficient</strong> to satisfy an eigenvalue condition, while for the exploding gradient problem it is <strong>necessary</strong> but not enough. I used the weight matrix $W_{hh}$ in this discussion, but everything exposed here also applies to $W_{hx}$.
# ## Solution
# One solution is to use RNN architectures specially designed to avoid these problems (like GRUs and LSTMs). Other solutions involve skip-connections or gradient clipping. But those are both discussions for another time.
| Courses/Natural Language Processing Specialization/Natural Language Processing with Sequence Models/Week3/C3_W3_Lecture_Notebook_Vanishing_Gradients.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="npzJ574a6A94" colab_type="text"
# **Reinforcement Learning with TensorFlow & TRFL: V-Trace**
#
# Outline:
# 1. V-Trace
# * TRFL Usage: trfl.vtrace_from_logits() with CartPole
#
# + id="RyxlWytnVqJI" colab_type="code" outputId="709909ff-9fd4-47e1-f2a6-a2d7ed0fc951" colab={"base_uri": "https://localhost:8080/", "height": 328}
#TRFL works with TensorFlow 1.12
#installs TensorFlow version 1.12 then restarts the runtime
# !pip install tensorflow==1.12
import os
os.kill(os.getpid(), 9)
# + id="XRS56AQDVybG" colab_type="code" outputId="02461eee-885c-40ba-db32-ad69fd047d1d" executionInfo={"status": "ok", "timestamp": 1554003263374, "user_tz": 240, "elapsed": 13556, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "09166577195279766198"}} colab={"base_uri": "https://localhost:8080/", "height": 191}
#install tensorflow-probability 0.5.0 that works with TensorFlow 1.12
# !pip install tensorflow-probability==0.5.0
#install TRFL
# !pip install trfl==1.0
# + id="SGop2a_BZCBl" colab_type="code" colab={}
import gym
import tensorflow as tf
import trfl
import numpy as np
import matplotlib.pyplot as plt
import tensorflow_probability as tfp
# + [markdown] id="VkC2bii3Zi9_" colab_type="text"
# **V-trace**
#
# V-trace is used in IMPALA to correct for the off-policyness of the data. Since the actors are decoupled from the learner and gather experience asynchronously to the learner's updates, the trajectories of experience may lag and be off-policy, i.e. the actors may be gathering experience with a different policy than the learner's current one. Thus an off-policy correction algorithm like V-trace is needed. V-trace is similar to Retrace; unlike Retrace, in the on-policy case V-trace reduces to TD(λ).
#
# **CartPole with V-Trace**
# + colab_type="code" id="-cLhuWNY9-r7" colab={}
class PolicyVFNetwork:
def __init__(self, name, policy_lr=0.001, vf_lr=0.002, obs_size=2, action_size=3, policy_hidden=64, vf_hidden=64,
entropy_coefficient=0.0001, clip_rho_threshold=1.0, clip_pg_rho_threshold=1.0):
with tf.variable_scope(name):
self.name=name
# input tensors
self.input_ = tf.placeholder(tf.float32, [None, obs_size], name='input')
self.action_ = tf.placeholder(tf.int32, [None, 1], name='action')
self.discount_ = tf.placeholder(tf.float32, [None, 1], name='discount')
self.reward_ = tf.placeholder(tf.float32, [None, 1], name='reward')
self.bootstrap_ = tf.placeholder(tf.float32, [None], name='bootstrap')
self.behaviour_policy_ = tf.placeholder(tf.float32, [None, 1, action_size], name='behaviour_policy')
# set up policy network
self.policy_fc1_ = tf.contrib.layers.fully_connected(self.input_, policy_hidden, activation_fn=tf.nn.relu)
self.policy_fc2_ = tf.contrib.layers.fully_connected(self.policy_fc1_, policy_hidden, activation_fn=tf.nn.relu)
self.policy_fc3_ = tf.contrib.layers.fully_connected(self.policy_fc2_, action_size, activation_fn=None)
self.policy_out_ = tf.reshape(self.policy_fc3_, [-1, 1, action_size])
# generate action probabilities for taking actions
self.action_prob_ = tf.nn.softmax(self.policy_fc3_)
# set up value function network
self.vf_fc1_ = tf.contrib.layers.fully_connected(self.input_, vf_hidden, activation_fn=tf.nn.relu)
self.vf_fc2_ = tf.contrib.layers.fully_connected(self.vf_fc1_, vf_hidden, activation_fn=tf.nn.relu)
self.vf_out_ = tf.contrib.layers.fully_connected(self.vf_fc2_, 1, activation_fn=None)
# TRFL V-trace usage
self.vtrace_return_ = trfl.vtrace_from_logits(behaviour_policy_logits=self.behaviour_policy_,
target_policy_logits=self.policy_out_, actions=self.action_, discounts=self.discount_,
rewards=self.reward_, values=self.vf_out_, bootstrap_value=self.bootstrap_,
clip_rho_threshold=clip_rho_threshold, clip_pg_rho_threshold=clip_pg_rho_threshold)
# optimize the policy loss with V-trace policy gradient advantages
self.pg_loss_ = trfl.discrete_policy_gradient_loss(self.policy_out_, self.action_, self.vtrace_return_.pg_advantages)
self.entropy_loss_, _ = trfl.discrete_policy_entropy_loss(self.policy_fc3_, normalise=True)
self.combined_loss_ = self.pg_loss_ + entropy_coefficient * self.entropy_loss_
self.policy_loss_ = tf.reduce_mean(self.combined_loss_)
self.policy_optim_ = tf.train.AdamOptimizer(learning_rate=policy_lr).minimize(self.policy_loss_)
# optimize the value function with V-trace vs
self.vf_loss_ = tf.losses.mean_squared_error(self.vf_out_, self.vtrace_return_.vs)
self.vf_optim_ = tf.train.AdamOptimizer(learning_rate=vf_lr).minimize(self.vf_loss_)
def get_network_variables(self):
return [t for t in tf.trainable_variables() if t.name.startswith(self.name)]
# + [markdown] id="59VEH5kCd0r7" colab_type="text"
# **TRFL Usage**
#
# We set up an actor-critic-like network, with a policy net to generate the target probabilities and a value function network for the values. We feed the target policy logits and the value function output into `trfl.vtrace_from_logits()` along with the input tensors. Using the V-trace return, we optimize the policy net with the `pg_advantages` output and the value function with the `vs` output.
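# For intuition, here is a minimal NumPy sketch of the V-trace target recursion from the IMPALA paper (the function name, signature, and single-trajectory setting are illustrative; TRFL's batched implementation differs):

```python
import numpy as np

def vtrace_targets(rewards, values, bootstrap_value, rhos,
                   gamma=0.99, rho_bar=1.0, c_bar=1.0):
    """Sketch of the V-trace targets v_s for one trajectory, where
    rhos are the importance ratios pi(a|x) / mu(a|x)."""
    T = len(rewards)
    clipped_rhos = np.minimum(rho_bar, rhos)   # weights on the TD errors
    cs = np.minimum(c_bar, rhos)               # "trace cutting" coefficients
    next_values = np.append(values[1:], bootstrap_value)
    deltas = clipped_rhos * (rewards + gamma * next_values - values)
    # backward recursion: v_s - V(x_s) = delta_s + gamma * c_s * (v_{s+1} - V(x_{s+1}))
    acc = 0.0
    vs_minus_v = np.zeros(T)
    for s in reversed(range(T)):
        acc = deltas[s] + gamma * cs[s] * acc
        vs_minus_v[s] = acc
    return values + vs_minus_v

# on-policy (rho = 1), zero values, unit rewards: v_s is the discounted return
print(vtrace_targets(np.ones(3), np.zeros(3), 0.0, np.ones(3), gamma=1.0))  # → [3. 2. 1.]
```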
# + colab_type="code" id="idWJmcaU9-r_" colab={}
# hyperparameters
discount = 0.99
train_episodes = 5000
policy_hidden_size = 32
vf_hidden_size = 32
policy_learning_rate = 0.0001
vf_learning_rate = 0.001
entropy_coefficient = 0.00001
update_tau = 0.001
update_behaviour_every = 10
clip_rho_threshold = 1.0
clip_pg_rho_threshold = 1.0
stats_every = 10
evaluate_every = 20 # evaluate the learning net after this main episodes
evaluate_ep = 100 # evaluate the learning net for this many episodes
seed = 31
env = gym.make('CartPole-v0')
env.seed(seed)
np.random.seed(seed)
action_size = env.action_space.n
obs_size = env.observation_space.shape[0]
tf.reset_default_graph()
tf.set_random_seed(seed)
# Create networks
train_net = PolicyVFNetwork(name='train_net', policy_lr=policy_learning_rate, vf_lr=vf_learning_rate, obs_size=obs_size,
action_size=action_size, policy_hidden=policy_hidden_size, vf_hidden=vf_hidden_size,
entropy_coefficient=entropy_coefficient, clip_rho_threshold=1.0, clip_pg_rho_threshold=1.0)
behaviour_net = PolicyVFNetwork(name='behaviour_net', policy_lr=policy_learning_rate, vf_lr=vf_learning_rate, obs_size=obs_size,
action_size=action_size, policy_hidden=policy_hidden_size, vf_hidden=vf_hidden_size,
entropy_coefficient=entropy_coefficient, clip_rho_threshold=1.0, clip_pg_rho_threshold=1.0)
vf_target_net = PolicyVFNetwork(name='vf_target_net', policy_lr=policy_learning_rate, vf_lr=vf_learning_rate, obs_size=obs_size,
action_size=action_size, policy_hidden=policy_hidden_size, vf_hidden=vf_hidden_size,
entropy_coefficient=entropy_coefficient, clip_rho_threshold=1.0, clip_pg_rho_threshold=1.0)
# target network updating
target_network_update_op = trfl.update_target_variables(vf_target_net.get_network_variables(),
train_net.get_network_variables(), tau=update_tau)
behaviour_update_op = trfl.update_target_variables(behaviour_net.get_network_variables(),
train_net.get_network_variables(), tau=1.0)
# + colab_type="code" outputId="f7a0b238-90af-4a64-e36f-b92b3093aa8a" executionInfo={"status": "ok", "timestamp": 1554003992887, "user_tz": 240, "elapsed": 209992, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "09166577195279766198"}} id="7bM6u2cu9-sE" colab={"base_uri": "https://localhost:8080/", "height": 600}
stats_list = []
stats_eval_list = []
eval_episode = 0
stop_after_eval = False
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for ep in range(1, train_episodes):
if stop_after_eval:
break
total_reward = 0
ep_length = 0
state = env.reset()
done = 0
# store trajectories in lists, use trajectories to train agent
obs_list, rew_list, action_list, logits_list, policy_loss_list, vf_loss_list = [], [], [], [], [], []
while not done:
# generate action probabilities from behaviour net policy probs and sample from the action probs
action_probs, behaviour_logits = sess.run([behaviour_net.action_prob_, behaviour_net.policy_out_],
feed_dict={behaviour_net.input_:np.expand_dims(state,axis=0)})
action_probs = action_probs[0]
action = np.random.choice(np.arange(len(action_probs)), p=action_probs)
behaviour_logits = behaviour_logits[0]
obs_list.append(state)
action_list.append(action)
logits_list.append(behaviour_logits)
state, reward, done, info = env.step(action)
total_reward += reward
ep_length += 1
rew_list.append(reward)
# bootstrap value for next state using behaviour net value function
if done:
bootstrap_value = 0.
else:
bootstrap_value = sess.run(vf_target_net.vf_out_, feed_dict={
vf_target_net.input_:np.reshape(state,(1,-1)),
})
bootstrap_value = bootstrap_value[0]
feed_dict = {
train_net.input_:np.array(obs_list),
train_net.action_:np.array(action_list).reshape(-1,1),
train_net.discount_:np.array([discount]*len(rew_list)).reshape(-1,1),
train_net.reward_:np.array(rew_list).reshape(-1,1),
train_net.bootstrap_:np.array(bootstrap_value).reshape((1,)),
train_net.behaviour_policy_:np.array(logits_list)
}
# run the optimizers to get the V-trace return and update the networks
_, _, stats_policy_loss, stats_vf_loss = sess.run([train_net.policy_optim_, train_net.vf_optim_,
train_net.policy_loss_, train_net.vf_loss_],
feed_dict=feed_dict)
policy_loss_list.append(stats_policy_loss)
vf_loss_list.append(stats_vf_loss)
# update target network
sess.run(target_network_update_op)
# update behaviour network
if ep % update_behaviour_every == 0:
sess.run(behaviour_update_op)
if done:
if ep % stats_every == 0:
print('Episode: {}'.format(ep),
'Total reward: {:.1f}'.format(np.mean(stats_list[-stats_every:],axis=0)[1]),
'Ep length: {:.1f}'.format(np.mean(stats_list[-stats_every:],axis=0)[2]),
'Policy Loss: {:.4f}'.format(np.mean(policy_loss_list)),
'VF Loss: {:.4f}'.format(np.mean(vf_loss_list)))
stats_list.append((ep, total_reward, ep_length))
# evaluate performance every evaluate_every episodes
if ep % evaluate_every == 0:
for i in range(evaluate_ep):
state = env.reset()
total_reward = 0
total_len = 0
done = 0
eval_episode += 1
while not done:
action_probs = sess.run(train_net.action_prob_,
feed_dict={train_net.input_:np.expand_dims(state,axis=0)})
action_probs = action_probs[0]
action = np.random.choice(np.arange(len(action_probs)), p=action_probs)
state, rew, done, info = env.step(action)
total_reward += rew
total_len += 1
if done:
stats_eval_list.append((eval_episode, total_reward, total_len, ep))
print("Evaluation at Episode {}: Avg Rew, Len: {:.2f}, {:.2f} ".format(ep,
np.mean(stats_eval_list[-evaluate_ep:],axis=0)[1],
np.mean(stats_eval_list[-evaluate_ep:],axis=0)[2]))
# stop episodes when agent is able to solve game
if np.mean(stats_eval_list[-evaluate_ep:],axis=0)[1] > 190:
print("Stopping at episode {} with average rewards of {} in last {} episodes".
format(ep, np.mean(stats_eval_list[-evaluate_ep:],axis=0)[1], evaluate_ep))
stop_after_eval = True
# + colab_type="code" outputId="5fd1ab04-0226-44f3-ebd2-e8e705920482" executionInfo={"status": "ok", "timestamp": 1554004002049, "user_tz": 240, "elapsed": 963, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "09166577195279766198"}} id="KoQERr9V9-sK" colab={"base_uri": "https://localhost:8080/", "height": 296}
# %matplotlib inline
def running_mean(x, N):
cumsum = np.cumsum(np.insert(x, 0, 0))
return (cumsum[N:] - cumsum[:-N]) / N
eps, rews, lens, ep = np.array(stats_eval_list).T
smoothed_rews = running_mean(rews, 10)
plt.plot(eps[-len(smoothed_rews):], smoothed_rews)
plt.plot(eps, rews, color='grey', alpha=0.3)
plt.xlabel('Episode')
plt.ylabel('Total Reward')
| Section 5/Reinforcement Learning with TensorFlow & TRFL -- V-Trace.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Rtrey29/DS-Unit-2-Tree-Ensembles/blob/master/module2-random-forests/U2S7M2_random_forests.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="fgdSjG6R0UOY" colab_type="code" outputId="8eb2312a-c6f8-4d3a-d72b-02eef1a5f1e8" colab={"base_uri": "https://localhost:8080/", "height": 1000}
# !pip install -U pandas-profiling
# + id="eR9_POIi3Hkm" colab_type="code" outputId="e8bad3bb-a9e1-4dea-e03c-b4853c59fbe0" colab={"base_uri": "https://localhost:8080/", "height": 293}
# !pip install category_encoders
# + id="V41EyAvGiq4I" colab_type="code" outputId="8c6eff9f-8faf-4ce5-d27d-79b4c68c3e33" colab={"base_uri": "https://localhost:8080/", "height": 34}
# %matplotlib inline
import category_encoders as ce
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import RobustScaler, StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
from sklearn.ensemble import RandomForestClassifier
WEB = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Linear-Models/master/data/tanzania/'
train = pd.merge(pd.read_csv(WEB + 'train_features.csv'),
pd.read_csv(WEB + 'train_labels.csv'))
test = pd.read_csv(WEB + 'test_features.csv')
sample_submission = pd.read_csv(WEB + 'sample_submission.csv')
# Split train into train & val
train, val = train_test_split(train, train_size=0.80, test_size=0.20,
stratify=train['status_group'], random_state=42)
train.shape, val.shape, test.shape
# + id="KwKss3wJK16e" colab_type="code" outputId="2d2e1002-1e0c-4393-d5e4-82309a0eee82" colab={"base_uri": "https://localhost:8080/", "height": 337}
train.describe()
# + id="odREMUlYRShJ" colab_type="code" outputId="cfa082d5-28fe-4490-a0f2-cc8b951677cc" colab={"base_uri": "https://localhost:8080/", "height": 247}
train.describe(exclude='number')
# + id="ZrJzQQrb7bJq" colab_type="code" outputId="6fa9152f-866e-45eb-ae19-460f5ba51576" colab={"base_uri": "https://localhost:8080/", "height": 204}
train.columns
# + id="7XIouWJjPUXF" colab_type="code" colab={}
def wrangle(X):
X = X.copy()
# small coordinate values to 0
X['latitude'] = X['latitude'].replace(-2e-08, 0)
# replacing zeroes with mean values
cols_with_zeroes = ['construction_year', 'longitude', 'latitude', 'gps_height', 'population']
for col in cols_with_zeroes:
X[col] = X[col].replace(0, np.nan)
X[col] = X[col].fillna(X[col].mean())
# date recorded to date time format
X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True)
# Extract components from date_recorded, then drop the original column
X['year_recorded'] = X['date_recorded'].dt.year
X['month_recorded'] = X['date_recorded'].dt.month
X['day_recorded'] = X['date_recorded'].dt.day
X = X.drop(columns='date_recorded')
# engineer features how many years from construction_year to date_recorded
X['years'] = X['year_recorded'] - X['construction_year']
# dropping features
    X = X.drop(columns=['quantity_group', 'payment', 'num_private',
                        'extraction_type_group', 'wpt_name', 'ward',
                        'extraction_type_class', 'quality_group', 'source',
                        'management', 'management_group', 'waterpoint_type_group',
                        'recorded_by', 'district_code', 'region', 'lga', 'id',
                        'region_code'])
# Drop duplicate columns
# duplicate_columns = ['quantity_group']
# X = X.drop(columns=duplicate_columns)
# for categoricals with missing values fill with 'missing'
categoricals = X.select_dtypes(exclude='number').columns
for col in categoricals:
X[col] = X[col].fillna('MISSING')
return X
train = wrangle(train)
val = wrangle(val)
test= wrangle(test)
# + id="vWtIa_bXUjSC" colab_type="code" colab={}
# define the target
target = 'status_group'
# get a DF with all train cols except target
train_features = train.drop(columns=[target])
# get a list of the numeric features
numeric_features = train_features.select_dtypes(include='number').columns.tolist()
# get a series with the cardinality of the nonnumeric features
cardinality = train_features.select_dtypes(exclude='number').nunique()
# get a list of all categorical features with cardinality <= 50
categorical_features = cardinality[cardinality <=50].index.tolist()
# combine the lists
features = numeric_features + categorical_features
# + id="zAAM72XYWmol" colab_type="code" colab={}
# Arrange data into Xfeatures matrix and Y vector
X_train = train[features]
y_train = train[target]
X_val = val[features]
y_val = val[target]
X_test = test[features]
# + id="68FNwGH7uZiE" colab_type="code" colab={}
# making a pipeline
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='median'),
RandomForestClassifier(n_estimators=1000, min_samples_split=6,
criterion='gini', max_depth = 20, max_features='auto',
oob_score=True, random_state=42, n_jobs=-1)
)
# + id="H4Xp_Z0bvhmb" colab_type="code" outputId="d88f4bbb-2532-4985-858d-18123b6c5939" colab={"base_uri": "https://localhost:8080/", "height": 34}
# fit on train score on val
pipeline.fit(X_train, y_train)
print('Validation Accuracy', pipeline.score(X_val, y_val))
y_pred = pipeline.predict(X_test)
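# Once a pipeline like the one above is fitted, you can pull the forest back out of it and inspect impurity-based feature importances. A self-contained sketch (the toy data below stands in for the notebook's `X_train`/`y_train`):

```python
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
from sklearn.ensemble import RandomForestClassifier

# toy data: 'a' perfectly predicts y, 'b' is constant (uninformative)
X = pd.DataFrame({'a': [0, 1, 0, 1] * 10, 'b': [5, 5, 5, 5] * 10})
y = pd.Series([0, 1, 0, 1] * 10)

pipe = make_pipeline(SimpleImputer(strategy='median'),
                     RandomForestClassifier(n_estimators=50, random_state=42))
pipe.fit(X, y)

# make_pipeline names steps after the lowercased class name
rf = pipe.named_steps['randomforestclassifier']
importances = pd.Series(rf.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False))
```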
# + id="ysmmrDqahP2c" colab_type="code" colab={}
submission = sample_submission.copy()
submission['status_group'] = y_pred
submission.to_csv('submission-01.csv', index=False)
# + id="QVU0aDy2PKnd" colab_type="code" colab={}
from google.colab import files
files.download('submission-01.csv')
| module2-random-forests/U2S7M2_random_forests.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# import numpy
import numpy as np
# people weights
weights = [94.93428306, 82.23471398, 97.95377076, 115.46059713, 80.31693251, 80.31726086, 116.58425631,
100.34869458, 75.61051228, 95.85120087, 75.73164614, 75.68540493, 89.83924543, 46.73439511,
50.50164335, 73.75424942, 64.74337759, 91.28494665, 66.83951849, 56.75392597, 114.31297538,
80.48447399, 86.35056409, 56.50503628, 74.11234551, 66.1092259 , 53.49006423, 68.75698018,
58.9936131 , 62.0830625 , 58.98293388, 83.52278185, 64.86502775, 54.42289071, 73.22544912,
52.7915635 ,67.08863595, 45.40329876, 51.71813951, 66.96861236, 72.3846658 , 66.71368281,
63.84351718, 61.98896304, 50.2147801 , 57.80155792, 60.39361229, 75.57122226, 68.4361829 , 47.36959845]
# #### Set the significance level (alpha) to 0.05
alpha = 0.05
# #### Create function `evaluate_test` which returns a conclusion of the hypothesis test based on the p-value and alpha
#
# PARAMS:
# - p (float) - p-value from test
# - alpha (float) - significance level
#
def evaluate_test(p,alpha):
if p < alpha:
return 'H0 is rejected'
else:
        return 'H0 is not rejected'
# #### Import Shapiro-Wilk Test to test if weights are normally distributed
#
# - H0 = weights are normally distributed
# - HA = weights are not normally distributed
# - https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.shapiro.html
from scipy import stats
np.random.seed(12345678)
shapiro_test = stats.shapiro(weights)
shapiro_test
#ShapiroResult(statistic=0.9772805571556091, pvalue=0.08144091814756393)
#shapiro_test.statistic
#0.9772805571556091
#shapiro_test.pvalue
#0.08144091814756393
# #### Use function `evaluate_test` to make conclusion if weights are normally distributed
p = shapiro_test.pvalue
evaluate_test(p,alpha)
# #### Test the hypothesis that the mean of weights is equal to 72
#
# - use one sample t-test
# - H0: mean = 72
# - HA: mean != 72
# - note that we don't know the population standard deviation
# +
#pop_mean = np.sum(weights)/len(weights)
#pop_mean
ttest = stats.ttest_1samp(weights,72)
# -
# #### Use function `evaluate_test` to make conclusion if the mean of the weights is 72
evaluate_test(ttest.pvalue,alpha)
# +
# salaries in the first company
salaries_company_A = [ 62779.75930907, 67487.49834604, 78998.91885801, 92801.06354333,
94917.76195759, 85409.43843246, 65536.36510309, 97608.88920408,
79613.1791369 , 74035.25988438, 72698.71057961, 57170.2204782 ,
96496.56571672, 78123.01652012, 69617.56847376, 89109.14505065,
91809.98342107, 54010.91167324, 103259.7319888 , 113319.79557154,
81529.81385057, 83590.49251746, 115902.53443622, 63608.1666576 ,
72175.25765417, 88719.32305603, 97215.1090373 , 80570.98830349,
67796.25874935, 99321.80738101]
# salaries in the second company
salaries_company_B = [ 89845.96793876, 90027.93042629, 108596.08141043, 120113.67952031,
94794.04532001, 99565.51332692, 110927.06162603, 85471.82457925,
79030.8553638 , 82644.84718934, 71592.66608011, 68244.23637394,
134420.97566401, 72106.76757987, 95429.7573215 , 88285.90615416,
110973.4078626 , 92323.32822085, 117740.37152488, 87412.61048855,
94906.53993793, 105017.39597368, 93983.46012639, 100538.051311 ,
95673.65143504, 61727.33698247, 105311.27474286, 113551.6401474 ,
87408.82036567, 85895.00912077]
# -
# #### Test the hypothesis that the mean salaries in both companies are equal
# - use t-test
# - H0: salaries are the same
# - HA: salaries are not the same
#
A_B_ttest = stats.ttest_ind(salaries_company_A,salaries_company_B)
A_B_ttest.pvalue
if A_B_ttest.pvalue < 0.05:
print('HA: salaries are not the same')
else:
print('H0: salaries are the same')
# #### Use the function `evaluate_test` to draw a conclusion about whether the salaries are equal in both companies
evaluate_test(A_B_ttest.pvalue,alpha)
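Note that `stats.ttest_ind` assumes equal population variances by default; when that assumption is questionable, passing `equal_var=False` yields Welch's t-test instead. A sketch on synthetic salary-like data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
a = rng.normal(80_000, 15_000, 30)  # stand-in for salaries_company_A
b = rng.normal(95_000, 16_000, 30)  # stand-in for salaries_company_B

student = stats.ttest_ind(a, b)                 # pooled-variance (Student) t-test
welch = stats.ttest_ind(a, b, equal_var=False)  # Welch's t-test
print(student.pvalue, welch.pvalue)
```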
| Probability and Statistics/Hypothesis Testing I.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Graph Convolutional Neural Networks
# ## Graph LeNet5 with PyTorch
# ### <NAME>, Oct. 2017
# Implementation of spectral graph ConvNets<br>
# Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering<br>
# <NAME>, <NAME>, <NAME><br>
# Advances in Neural Information Processing Systems, 3844-3852, 2016<br>
# ArXiv preprint: [arXiv:1606.09375](https://arxiv.org/pdf/1606.09375.pdf) <br>
# +
import torch
from torch.autograd import Variable
import torch.nn.functional as F
import torch.nn as nn
import pdb #pdb.set_trace()
import collections
import time
import numpy as np
import sys
sys.path.insert(0, 'lib/')
# %load_ext autoreload
# %autoreload 2
import os
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"]="0"
if torch.cuda.is_available():
print('cuda available')
dtypeFloat = torch.cuda.FloatTensor
dtypeLong = torch.cuda.LongTensor
torch.cuda.manual_seed(1)
else:
print('cuda not available')
dtypeFloat = torch.FloatTensor
dtypeLong = torch.LongTensor
torch.manual_seed(1)
# -
# # MNIST
# +
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('datasets', one_hot=False) # load data in folder datasets/
train_data = mnist.train.images.astype(np.float32)
val_data = mnist.validation.images.astype(np.float32)
test_data = mnist.test.images.astype(np.float32)
train_labels = mnist.train.labels
val_labels = mnist.validation.labels
test_labels = mnist.test.labels
print(train_data.shape)
print(train_labels.shape)
print(val_data.shape)
print(val_labels.shape)
print(test_data.shape)
print(test_labels.shape)
# -
# # Graph
# +
from lib.grid_graph import grid_graph
from lib.coarsening import coarsen
from lib.coarsening import lmax_L
from lib.coarsening import perm_data
from lib.coarsening import rescale_L
# Construct graph
t_start = time.time()
grid_side = 28
number_edges = 8
metric = 'euclidean'
A = grid_graph(grid_side,number_edges,metric) # create graph of Euclidean grid
# Compute coarsened graphs
coarsening_levels = 4
L, perm = coarsen(A, coarsening_levels)
# Compute max eigenvalue of graph Laplacians
lmax = []
for i in range(coarsening_levels):
lmax.append(lmax_L(L[i]))
print('lmax: ' + str([lmax[i] for i in range(coarsening_levels)]))
# Reindex nodes to satisfy a binary tree structure
train_data = perm_data(train_data, perm)
val_data = perm_data(val_data, perm)
test_data = perm_data(test_data, perm)
print(train_data.shape)
print(val_data.shape)
print(test_data.shape)
print('Execution time: {:.2f}s'.format(time.time() - t_start))
del perm
# -
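`rescale_L` (imported from `lib.coarsening`) is used later to map the Laplacian spectrum into [-1, 1] before the Chebyshev recursion; a dense NumPy sketch of that rescaling, assuming the standard formula 2L/lmax − I:

```python
import numpy as np

def rescale_laplacian(L, lmax):
    # Map the eigenvalues of L from [0, lmax] to [-1, 1]
    return (2.0 / lmax) * L - np.eye(L.shape[0])

# Laplacian of the path graph on 3 nodes; its eigenvalues are 0, 1 and 3
L_demo = np.array([[1., -1., 0.], [-1., 2., -1.], [0., -1., 1.]])
lmax_demo = np.linalg.eigvalsh(L_demo).max()
evals = np.linalg.eigvalsh(rescale_laplacian(L_demo, lmax_demo))
print(evals)  # all eigenvalues now lie in [-1, 1]
```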
# # Graph ConvNet LeNet5
# ### Layers: CL32-MP4-CL64-MP4-FC512-FC10
# +
# class definitions
class my_sparse_mm(torch.autograd.Function):
"""
Implementation of a new autograd function for sparse variables,
called "my_sparse_mm", by subclassing torch.autograd.Function
and implementing the forward and backward passes.
"""
def forward(self, W, x): # W is SPARSE
self.save_for_backward(W, x)
y = torch.mm(W, x)
return y
def backward(self, grad_output):
W, x = self.saved_tensors
grad_input = grad_output.clone()
grad_input_dL_dW = torch.mm(grad_input, x.t())
grad_input_dL_dx = torch.mm(W.t(), grad_input )
return grad_input_dL_dW, grad_input_dL_dx
class Graph_ConvNet_LeNet5(nn.Module):
def __init__(self, net_parameters):
print('Graph ConvNet: LeNet5')
super(Graph_ConvNet_LeNet5, self).__init__()
# parameters
D, CL1_F, CL1_K, CL2_F, CL2_K, FC1_F, FC2_F = net_parameters
FC1Fin = CL2_F*(D//16)
# graph CL1
self.cl1 = nn.Linear(CL1_K, CL1_F)
Fin = CL1_K; Fout = CL1_F;
scale = np.sqrt( 2.0/ (Fin+Fout) )
self.cl1.weight.data.uniform_(-scale, scale)
self.cl1.bias.data.fill_(0.0)
self.CL1_K = CL1_K; self.CL1_F = CL1_F;
# graph CL2
self.cl2 = nn.Linear(CL2_K*CL1_F, CL2_F)
Fin = CL2_K*CL1_F; Fout = CL2_F;
scale = np.sqrt( 2.0/ (Fin+Fout) )
self.cl2.weight.data.uniform_(-scale, scale)
self.cl2.bias.data.fill_(0.0)
self.CL2_K = CL2_K; self.CL2_F = CL2_F;
# FC1
self.fc1 = nn.Linear(FC1Fin, FC1_F)
Fin = FC1Fin; Fout = FC1_F;
scale = np.sqrt( 2.0/ (Fin+Fout) )
self.fc1.weight.data.uniform_(-scale, scale)
self.fc1.bias.data.fill_(0.0)
self.FC1Fin = FC1Fin
# FC2
self.fc2 = nn.Linear(FC1_F, FC2_F)
Fin = FC1_F; Fout = FC2_F;
scale = np.sqrt( 2.0/ (Fin+Fout) )
self.fc2.weight.data.uniform_(-scale, scale)
self.fc2.bias.data.fill_(0.0)
# nb of parameters
nb_param = CL1_K* CL1_F + CL1_F # CL1
nb_param += CL2_K* CL1_F* CL2_F + CL2_F # CL2
nb_param += FC1Fin* FC1_F + FC1_F # FC1
nb_param += FC1_F* FC2_F + FC2_F # FC2
print('nb of parameters=',nb_param,'\n')
def init_weights(self, W, Fin, Fout):
scale = np.sqrt( 2.0/ (Fin+Fout) )
W.uniform_(-scale, scale)
return W
def graph_conv_cheby(self, x, cl, L, lmax, Fout, K):
# parameters
# B = batch size
# V = nb vertices
# Fin = nb input features
# Fout = nb output features
# K = Chebyshev order & support size
B, V, Fin = x.size(); B, V, Fin = int(B), int(V), int(Fin)
# rescale Laplacian
lmax = lmax_L(L)
L = rescale_L(L, lmax)
# convert the scipy sparse matrix L to a PyTorch sparse tensor
L = L.tocoo()
indices = np.column_stack((L.row, L.col)).T
indices = indices.astype(np.int64)
indices = torch.from_numpy(indices)
indices = indices.type(torch.LongTensor)
L_data = L.data.astype(np.float32)
L_data = torch.from_numpy(L_data)
L_data = L_data.type(torch.FloatTensor)
L = torch.sparse.FloatTensor(indices, L_data, torch.Size(L.shape))
L = Variable( L , requires_grad=False)
if torch.cuda.is_available():
L = L.cuda()
# transform to Chebyshev basis
x0 = x.permute(1,2,0).contiguous() # V x Fin x B
x0 = x0.view([V, Fin*B]) # V x Fin*B
x = x0.unsqueeze(0) # 1 x V x Fin*B
def concat(x, x_):
x_ = x_.unsqueeze(0) # 1 x V x Fin*B
return torch.cat((x, x_), 0) # K x V x Fin*B
if K > 1:
x1 = my_sparse_mm()(L,x0) # V x Fin*B
x = torch.cat((x, x1.unsqueeze(0)),0) # 2 x V x Fin*B
for k in range(2, K):
x2 = 2 * my_sparse_mm()(L,x1) - x0
x = torch.cat((x, x2.unsqueeze(0)),0) # k+1 x V x Fin*B
x0, x1 = x1, x2
x = x.view([K, V, Fin, B]) # K x V x Fin x B
x = x.permute(3,1,2,0).contiguous() # B x V x Fin x K
x = x.view([B*V, Fin*K]) # B*V x Fin*K
# Compose linearly Fin features to get Fout features
x = cl(x) # B*V x Fout
x = x.view([B, V, Fout]) # B x V x Fout
return x
# Max pooling of size p. Must be a power of 2.
def graph_max_pool(self, x, p):
if p > 1:
x = x.permute(0,2,1).contiguous() # x = B x F x V
x = nn.MaxPool1d(p)(x) # B x F x V/p
x = x.permute(0,2,1).contiguous() # x = B x V/p x F
return x
else:
return x
def forward(self, x, d, L, lmax):
# graph CL1
x = x.unsqueeze(2) # B x V x Fin=1
x = self.graph_conv_cheby(x, self.cl1, L[0], lmax[0], self.CL1_F, self.CL1_K)
x = F.relu(x)
x = self.graph_max_pool(x, 4)
# graph CL2
x = self.graph_conv_cheby(x, self.cl2, L[2], lmax[2], self.CL2_F, self.CL2_K)
x = F.relu(x)
x = self.graph_max_pool(x, 4)
# FC1
x = x.view(-1, self.FC1Fin)
x = self.fc1(x)
x = F.relu(x)
x = nn.Dropout(d)(x)
# FC2
x = self.fc2(x)
return x
def loss(self, y, y_target, l2_regularization):
loss = nn.CrossEntropyLoss()(y,y_target)
l2_loss = 0.0
for param in self.parameters():
data = param* param
l2_loss += data.sum()
loss += 0.5* l2_regularization* l2_loss
return loss
def update(self, lr):
update = torch.optim.SGD( self.parameters(), lr=lr, momentum=0.9 )
return update
def update_learning_rate(self, optimizer, lr):
for param_group in optimizer.param_groups:
param_group['lr'] = lr
return optimizer
def evaluation(self, y_predicted, test_l):
_, class_predicted = torch.max(y_predicted.data, 1)
return 100.0* (class_predicted == test_l).sum()/ y_predicted.size(0)
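`graph_conv_cheby` above builds its basis through the three-term Chebyshev recurrence T_k(x) = 2x·T_{k−1}(x) − T_{k−2}(x); the same recursion applied to a scalar reproduces the classical Chebyshev polynomials:

```python
import numpy as np

def cheby_basis(x, K):
    # [T_0(x), ..., T_{K-1}(x)] via the three-term recurrence
    ts = [np.ones_like(x), x]
    for _ in range(2, K):
        ts.append(2 * x * ts[-1] - ts[-2])
    return ts[:K]

out = cheby_basis(0.5, 4)
print(out)  # T_2(0.5) = -0.5, T_3(0.5) = -1.0
```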
# +
# Delete existing network if exists
try:
del net
print('Delete existing network\n')
except NameError:
print('No existing network to delete\n')
# network parameters
D = train_data.shape[1]
CL1_F = 32
CL1_K = 25
CL2_F = 64
CL2_K = 25
FC1_F = 512
FC2_F = 10
net_parameters = [D, CL1_F, CL1_K, CL2_F, CL2_K, FC1_F, FC2_F]
# instantiate the network
net = Graph_ConvNet_LeNet5(net_parameters)
if torch.cuda.is_available():
net.cuda()
print(net)
# Weights
L_net = list(net.parameters())
# learning parameters
learning_rate = 0.05
dropout_value = 0.5
l2_regularization = 5e-4
batch_size = 100
num_epochs = 20
train_size = train_data.shape[0]
nb_iter = int(num_epochs * train_size) // batch_size
print('num_epochs=',num_epochs,', train_size=',train_size,', nb_iter=',nb_iter)
# Optimizer
global_lr = learning_rate
global_step = 0
decay = 0.95
decay_steps = train_size
lr = learning_rate
optimizer = net.update(lr)
# loop over epochs
indices = collections.deque()
for epoch in range(num_epochs): # loop over the dataset multiple times
# reshuffle
indices.extend(np.random.permutation(train_size)) # rand permutation
# reset time
t_start = time.time()
# extract batches
running_loss = 0.0
running_accuray = 0
running_total = 0
while len(indices) >= batch_size:
# extract batches
batch_idx = [indices.popleft() for i in range(batch_size)]
train_x, train_y = train_data[batch_idx,:], train_labels[batch_idx]
train_x = Variable( torch.FloatTensor(train_x).type(dtypeFloat) , requires_grad=False)
train_y = train_y.astype(np.int64)
train_y = torch.LongTensor(train_y).type(dtypeLong)
train_y = Variable( train_y , requires_grad=False)
# Forward
y = net.forward(train_x, dropout_value, L, lmax)
loss = net.loss(y,train_y,l2_regularization)
loss_train = loss.data[0]
# Accuracy
acc_train = net.evaluation(y,train_y.data)
# backward
loss.backward()
# Update
global_step += batch_size # to update learning rate
optimizer.step()
optimizer.zero_grad()
# loss, accuracy
running_loss += loss_train
running_accuray += acc_train
running_total += 1
# print
if not running_total%100: # print every 100 mini-batches
print('epoch= %d, i= %4d, loss(batch)= %.4f, accuracy(batch)= %.2f' % (epoch+1, running_total, loss_train, acc_train))
# print
t_stop = time.time() - t_start
print('epoch= %d, loss(train)= %.3f, accuracy(train)= %.3f, time= %.3f, lr= %.5f' %
(epoch+1, running_loss/running_total, running_accuray/running_total, t_stop, lr))
# update learning rate
lr = global_lr * pow( decay , float(global_step// decay_steps) )
optimizer = net.update_learning_rate(optimizer, lr)
# Test set
running_accuray_test = 0
running_total_test = 0
indices_test = collections.deque()
indices_test.extend(range(test_data.shape[0]))
t_start_test = time.time()
while len(indices_test) >= batch_size:
batch_idx_test = [indices_test.popleft() for i in range(batch_size)]
test_x, test_y = test_data[batch_idx_test,:], test_labels[batch_idx_test]
test_x = Variable( torch.FloatTensor(test_x).type(dtypeFloat) , requires_grad=False)
y = net.forward(test_x, 0.0, L, lmax)
test_y = test_y.astype(np.int64)
test_y = torch.LongTensor(test_y).type(dtypeLong)
test_y = Variable( test_y , requires_grad=False)
acc_test = net.evaluation(y,test_y.data)
running_accuray_test += acc_test
running_total_test += 1
t_stop_test = time.time() - t_start_test
print(' accuracy(test) = %.3f %%, time= %.3f' % (running_accuray_test / running_total_test, t_stop_test))
# -
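The learning-rate update at the end of each epoch implements a staircase exponential decay; isolated, the schedule looks like this (decay constants as in the cell above; `decay_steps` equals `train_size`, assumed here to be 55,000 for the standard TF MNIST split):

```python
global_lr, decay, decay_steps = 0.05, 0.95, 55000  # assumed train_size

def staircase_lr(step):
    # lr is multiplied by `decay` once per full pass over the training set
    return global_lr * decay ** (step // decay_steps)

print([staircase_lr(s) for s in (0, 55000, 110000)])  # ~0.05, 0.0475, 0.045125
```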
#
| 02_graph_convnet_lenet5_mnist_pytorch.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="D_PfLJqAgMIq" outputId="1fd2de6f-fbc5-40c9-ac1f-363ddcde0007"
import tsflex
print(tsflex.__version__)
# + [markdown] id="KXqEDPikmHc6"
# ## Get the data
# + id="bB1tBMOLggLv"
from tsflex.utils.data import load_empatica_data
df_tmp, df_acc, df_gsr, df_ibi = load_empatica_data(["tmp", "acc", "gsr", "ibi"])
# + colab={"base_uri": "https://localhost:8080/"} id="Qo6btlF8kn8v" outputId="2d572d8a-b2cb-4e24-ff22-679d85a25a90"
import pandas as pd
from pandas.tseries.frequencies import to_offset
data = [df_tmp, df_acc, df_gsr, df_ibi]
for df in data:
print("Time-series:", df.columns.values)
print(df.shape)
try:
print("Sampling rate:", 1 / pd.to_timedelta(to_offset(pd.infer_freq(df.index))).total_seconds(), "Hz")
except:
print("Irregular sampling rate")
print()
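`pd.infer_freq` only succeeds on perfectly regular indices, which is why the loop above falls back to reporting an irregular rate; a small sketch on a regular (assumed 4 Hz) index shows the happy path:

```python
import pandas as pd
from pandas.tseries.frequencies import to_offset

idx = pd.date_range("2021-01-01", periods=8, freq="250ms")  # regular 4 Hz index
fs = 1 / pd.to_timedelta(to_offset(pd.infer_freq(idx))).total_seconds()
print(fs)  # 4.0
```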
# + [markdown] id="G3JN03iomGui"
# ## Look at the data
# + colab={"base_uri": "https://localhost:8080/", "height": 817} id="HYLMtx7tjTtR" outputId="5619b18f-5ef9-49df-9f4b-53a0de934931"
import plotly.graph_objects as go
from plotly.subplots import make_subplots
fig = make_subplots(
rows=len(data), cols=1, shared_xaxes=True,
subplot_titles=[df.columns.values[0].split('_')[0] for df in data],
vertical_spacing=0.1,
)
for plot_idx, df in enumerate(data, 1):
# Select first minute of data
sub_df = df.first('1min')
for col in df.columns:
fig.add_trace(
go.Scattergl(x=sub_df.index, y=sub_df[col].values, name=col, mode='markers'),
row=plot_idx, col=1
)
fig.update_layout(height=len(data)*200)
fig.show(renderer='iframe')
# + colab={"base_uri": "https://localhost:8080/", "height": 297} id="aJeTIxeupRu5" outputId="9c0b7085-623a-4d00-e73d-7f78006d753c"
import matplotlib.pyplot as plt
fig, axes = plt.subplots(nrows=1, ncols=4, figsize=(16,4))
for plot_idx, df in enumerate(data):
df.plot(kind='box', ax=axes[plot_idx])
plt.tight_layout()
# + [markdown] id="AKLw2bJxpCiE"
# These visualizations indicate that some preprocessing might be necessary for the signals (some sort of clipping)
# + [markdown] id="qlswfOOyr4zT"
# # tsflex processing
# -
# This is roughly identical to the processing of [this paper notebook](https://github.com/predict-idlab/tsflex/blob/main/examples/tsflex_paper.ipynb)
# + colab={"base_uri": "https://localhost:8080/"} id="XsAKFB3bkjQ8" outputId="c87cfc5b-fabe-47bf-d9d2-7bed9e731f8b"
import pandas as pd; import numpy as np; from scipy.signal import savgol_filter
from tsflex.processing import SeriesProcessor, SeriesPipeline
# Create the processing functions
def clip_data(sig: pd.Series, min_val=None, max_val=None) -> np.ndarray:
return np.clip(sig, a_min=min_val, a_max=max_val)
def smv(*sigs) -> pd.Series:
sig_prefixes = set(sig.name.split('_')[0] for sig in sigs)
result = np.sqrt(np.sum([np.square(sig) for sig in sigs], axis=0))
return pd.Series(result, index=sigs[0].index, name='|'.join(sig_prefixes)+'_'+'SMV')
# Create the series processors (with their keyword arguments)
tmp_clipper = SeriesProcessor(clip_data, series_names="TMP", max_val=35)
acc_savgol = SeriesProcessor(
savgol_filter, ["ACC_x", "ACC_y", "ACC_z"], window_length=33, polyorder=2
)
acc_smv = SeriesProcessor(smv, ("ACC_x", "ACC_y", "ACC_z"))
# Create the series pipeline & process the data
series_pipe = SeriesPipeline([tmp_clipper, acc_savgol, acc_smv])
series_pipe
# + id="TK64KF0h0HuT"
out_data = series_pipe.process(data, drop_keys=["ACC_x", "ACC_y", "ACC_z"])
# + colab={"base_uri": "https://localhost:8080/", "height": 297} id="jXUDDMbWxqkv" outputId="94c79711-9202-4296-c248-1b515cff1e4f"
import matplotlib.pyplot as plt
fig, axes = plt.subplots(nrows=1, ncols=4, figsize=(16,4))
for plot_idx, df in enumerate(out_data):
df.plot(kind='box', ax=axes[plot_idx])
plt.tight_layout()
# + [markdown] id="Fy0gYc961AAz"
# # tsflex feature extraction with [tsfel](https://github.com/fraunhoferportugal/tsfel) integration
# + tags=[]
# # !pip install tsfel
# -
# > Useful links;
# > [List of all tsfel features](https://tsfel.readthedocs.io/en/latest/descriptions/feature_list.html)
# > [More detailed documentation of the tsfel features](https://tsfel.readthedocs.io/en/latest/descriptions/modules/tsfel.feature_extraction.html#module-tsfel.feature_extraction.features)
# > [More detailed documentation of the tsfel feature dictionaries](https://tsfel.readthedocs.io/en/latest/descriptions/get_started.html#set-up-the-feature-extraction-config-file)
#
# As [tsfel feature-functions](https://github.com/dmbee/seglearn/blob/master/seglearn/feature_functions.py) often require `fs` (the sampling frequency), it is not advised to use these on irregularly sampled data!
# Of course, features that return more than 1 value or require some keyword arguments should be wrapped in a `FuncWrapper`!
#
# tsfel represents [a collection of features as a dictionary](https://github.com/fraunhoferportugal/tsfel/blob/master/tsfel/feature_extraction/features_settings.py).
# **=> requires wrapping this dictionary in `tsfel_feature_dict_wrapper` for interoperability with tsflex**
#
# For individual functions, we differentiate two sets of tsfel features;
# * *basic features*; functions that require no additional arguments (thus no `fs`) and return 1 value.
# **=> integrates natively with tsflex**
# * *advanced features*; features that require some keyword arguments (e.g., `fs`) and/or return multiple outputs.
# **=> requires wrapping the function with its arguments and/or output names in a `FuncWrapper`**
# This wrapper handles tsfel its feature extraction settings
from tsflex.features.integrations import tsfel_feature_dict_wrapper
# + tags=[]
from tsflex.features import FeatureCollection, MultipleFeatureDescriptors
# -
# ## Using tsfel feature extraction settings (i.e., dictionaries)
# Import some functions that return preset feature extraction setting from tsfel
from tsfel.feature_extraction import get_features_by_domain, get_features_by_tag
# Calculate the features for a tsfel feature extraction dictionary.
# Note that;
# * `tsfel_feature_dict_wrapper` transforms this feature extraction dictionary to a list of features that you can directly pass as the `function` argument of tsflex `MultipleFeatureDescriptors`.
# + tags=[]
simple_feats = MultipleFeatureDescriptors(
functions=tsfel_feature_dict_wrapper(get_features_by_domain("temporal")),
series_names=["ACC_SMV", "EDA", "TMP"],
windows=["5min", "2.5min"],
strides=["2.5min"],
)
feature_collection = FeatureCollection(simple_feats)
feature_collection
# + tags=[]
features_df = feature_collection.calculate(out_data, return_df=True, show_progress=True)
features_df
# -
# Extract some other tsfel features
# !! Make sure that you dont add tsfel feature dicts that contain the same feature (in that case tsflex will throw an error)
# + tags=[]
simple_feats = MultipleFeatureDescriptors(
functions=(
tsfel_feature_dict_wrapper(get_features_by_domain("statistical"))
+ tsfel_feature_dict_wrapper(get_features_by_tag("ecg"))
),
series_names=["ACC_SMV", "EDA", "TMP"],
windows=["5min", "2.5min"],
strides=["2.5min"],
)
feature_collection = FeatureCollection(simple_feats)
feature_collection
# + tags=[]
features_df = feature_collection.calculate(out_data, return_df=True, show_progress=True)
features_df
# + [markdown] id="c36Hw96oDPkV"
# ### Plot the EDA features
# + colab={"base_uri": "https://localhost:8080/", "height": 717} id="NxJKV1u0DVvg" outputId="853b0f4c-62b6-4978-e826-693cd411b9b8" tags=[]
import plotly.graph_objects as go
from plotly.subplots import make_subplots
fig = make_subplots(
rows=2, cols=1, shared_xaxes=True,
subplot_titles=['Raw EDA data', 'EDA features'],
vertical_spacing=0.1,
)
fig.add_trace(
go.Scattergl(x=df_gsr.index[::4*5], y=df_gsr['EDA'].values[::4*5], name='EDA', mode='markers'),
row=1, col=1
)
eda_feats = [c for c in features_df.columns if 'EDA_' in c and 'w=2m30s_' in c]
for col in eda_feats:
sub_df = features_df[[col]].dropna()
if not np.issubdtype(sub_df.values.dtype, np.number):
continue
fig.add_trace(
go.Scattergl(x=sub_df.index, y=sub_df[col].values, name=col, mode='markers'),
row=2, col=1
)
fig.update_layout(height=2*350)
fig.show(renderer='iframe')
# + [markdown] tags=[]
# ## Using basic tsfel features
# -
# Import some "basic" tsfel features
from tsfel.feature_extraction.features import (
# Some temporal features
autocorr, mean_abs_diff, mean_diff, distance, zero_cross,
slope, abs_energy, pk_pk_distance, entropy, neighbourhood_peaks,
# Some statistical features
interq_range, kurtosis, skewness, calc_max, calc_median,
median_abs_deviation, rms,
# Some spectral features
# -> Almost all are "advanced" features
wavelet_entropy
)
# + colab={"base_uri": "https://localhost:8080/"} id="zwnMitvayEhd" outputId="c6ea7f18-e007-4bb9-bdc4-386475a7c3d0"
from tsflex.features import MultipleFeatureDescriptors, FeatureCollection
basic_funcs = [
# Temporal
autocorr, mean_abs_diff, mean_diff, distance, zero_cross,
slope, abs_energy, pk_pk_distance, entropy, neighbourhood_peaks,
# Statistical
interq_range, kurtosis, skewness, calc_max, calc_median,
median_abs_deviation, rms,
# Spectral
wavelet_entropy
]
basic_feats = MultipleFeatureDescriptors(
functions=basic_funcs,
series_names=["ACC_SMV", "EDA", "TMP"],
windows=["5min", "2.5min"],
strides="2min",
)
feature_collection = FeatureCollection(basic_feats)
feature_collection
# + colab={"base_uri": "https://localhost:8080/", "height": 640} id="ahXC5VxR2w0W" outputId="f7f8b9e0-937f-4986-80f4-eb6eb36c3093"
features_df = feature_collection.calculate(out_data, return_df=True)
features_df
# + [markdown] id="c36Hw96oDPkV" tags=[]
# ### Plot the EDA features
# + colab={"base_uri": "https://localhost:8080/", "height": 717} id="NxJKV1u0DVvg" outputId="853b0f4c-62b6-4978-e826-693cd411b9b8"
import plotly.graph_objects as go
from plotly.subplots import make_subplots
fig = make_subplots(
rows=2, cols=1, shared_xaxes=True,
subplot_titles=['Raw EDA data', 'EDA features'],
vertical_spacing=0.1,
)
fig.add_trace(
go.Scattergl(x=df_gsr.index[::4*5], y=df_gsr['EDA'].values[::4*5], name='EDA', mode='markers'),
row=1, col=1
)
eda_feats = [c for c in features_df.columns if 'EDA_' in c and 'w=2m30s_' in c]
for col in eda_feats:
sub_df = features_df[[col]].dropna()
fig.add_trace(
go.Scattergl(x=sub_df.index, y=sub_df[col].values, name=col, mode='markers'),
row=2, col=1
)
fig.update_layout(height=2*350)
fig.show(renderer='iframe')
# -
# ## Using advanced features
# Import some "advanced" tsfel features
from tsfel.feature_extraction.features import (
# Some temporal features
calc_centroid, auc, entropy, neighbourhood_peaks,
# Some statistical features
hist, ecdf, ecdf_percentile_count,
# Some spectral features
spectral_distance, fundamental_frequency, max_power_spectrum,
spectral_centroid, spectral_decrease, spectral_kurtosis,
spectral_spread, human_range_energy, mfcc, fft_mean_coeff,
wavelet_abs_mean, wavelet_std, wavelet_energy
)
# + colab={"base_uri": "https://localhost:8080/"} id="zwnMitvayEhd" outputId="c6ea7f18-e007-4bb9-bdc4-386475a7c3d0"
# Import the feature-extraction constructs from tsflex
from tsflex.features import FeatureCollection, MultipleFeatureDescriptors, FuncWrapper
advanced_feats = MultipleFeatureDescriptors(
functions=[
# Temporal
FuncWrapper(calc_centroid, fs=4), FuncWrapper(auc, fs=4),
FuncWrapper(entropy, prob="kde", output_names="entropy_kde"),
FuncWrapper(entropy, prob="gauss", output_names="entropy_gauss"),
FuncWrapper(neighbourhood_peaks, n=5, output_names="neighbourhood_peaks_n=5"),
# Statistical
FuncWrapper(hist, nbins=4, output_names=[f"hist{i}" for i in range(1,5)]),
FuncWrapper(ecdf, output_names=[f"ecdf{i}" for i in range(1,11)]),
FuncWrapper(ecdf_percentile_count, output_names=["ecdf_0.2", "ecdf_0.8"]),
# Spectral
FuncWrapper(spectral_distance, fs=4), FuncWrapper(fundamental_frequency, fs=4),
FuncWrapper(max_power_spectrum, fs=4), FuncWrapper(spectral_centroid, fs=4),
FuncWrapper(spectral_decrease, fs=4), FuncWrapper(spectral_kurtosis, fs=4),
FuncWrapper(spectral_spread, fs=4), FuncWrapper(human_range_energy, fs=4),
FuncWrapper(mfcc, fs=4, num_ceps=6, output_names=[f"mfcc{i}" for i in range(1,7)]),
FuncWrapper(fft_mean_coeff, fs=4, nfreq=8, output_names=[f"fft_mean_coeff_{i}" for i in range(8)]),
FuncWrapper(wavelet_abs_mean, output_names=[f"wavelet_abs_mean_{i}" for i in range(1,10)]),
FuncWrapper(wavelet_std, output_names=[f"wavelet_std_{i}" for i in range(1,10)]),
FuncWrapper(wavelet_energy, widths=np.arange(1, 5), output_names=[f"wavelet_energy_{i}" for i in range(1,5)]),
],
series_names=["EDA", "TMP"],
windows=["5min", "2.5min"],
strides=["2.5min"],
)
feature_collection = FeatureCollection(advanced_feats)
feature_collection
# + colab={"base_uri": "https://localhost:8080/", "height": 640} id="ahXC5VxR2w0W" outputId="f7f8b9e0-937f-4986-80f4-eb6eb36c3093" tags=[]
features_df = feature_collection.calculate(out_data, return_df=True, logging_file_path="tsfel_advanced.log")
features_df
# + [markdown] id="c36Hw96oDPkV"
# ### Plot the EDA features
# + colab={"base_uri": "https://localhost:8080/", "height": 717} id="NxJKV1u0DVvg" outputId="853b0f4c-62b6-4978-e826-693cd411b9b8"
import plotly.graph_objects as go
from plotly.subplots import make_subplots
fig = make_subplots(
rows=2, cols=1, shared_xaxes=True,
subplot_titles=['Raw EDA data', 'EDA features'],
vertical_spacing=0.1,
)
fig.add_trace(
go.Scattergl(x=df_gsr.index[::4*5], y=df_gsr['EDA'].values[::4*5], name='EDA', mode='markers'),
row=1, col=1
)
eda_feats = [c for c in features_df.columns if 'EDA_' in c and 'w=2m30s_' in c]
for col in eda_feats:
sub_df = features_df[[col]].dropna()
fig.add_trace(
go.Scattergl(x=sub_df.index, y=sub_df[col].values, name=col, mode='markers'),
row=2, col=1
)
fig.update_layout(height=2*350)
fig.show(renderer='iframe')
# -
# ### Analyze the logging results
# +
from tsflex.features.logger import get_feature_logs
get_feature_logs(logging_file_path="tsfel_advanced.log")
# +
from tsflex.features.logger import get_function_stats
get_function_stats(logging_file_path="tsfel_advanced.log")
# -
# It is now obvious that the `entropy` function accounts for the bulk of the feature-extraction time and does not scale to larger window sizes.
# +
from tsflex.features.logger import get_series_names_stats
get_series_names_stats(logging_file_path="tsfel_advanced.log")
# -
# In general, feature calculation on the `5m` window takes more than twice as long as on the `2m30s` window.
| examples/tsfel_integration.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import math
import itertools
import numpy as np
from numpy import random
from numpy.random import Generator, PCG64
import scipy.stats
from fractal_ml import generate_direction_fractal, approx_box_counting
import matplotlib.pyplot as plt
# -
def convert_base(x, base=2, precision=1000):
x = x - int(x)
exponents = range(-1, (-precision - 1) * 2, -1)
for e in exponents:
d = int(x // (base ** e))
x -= d * (base ** e)
yield d
if x == 0: break
def cantor_sample(precision=100):
# Uses the bijection between [0, 1] and the Cantor set that takes x in [0, 1] in binary form, replaces the 1's with 2's
# and reinterprets it as a ternary number.
x = random.rand()
digits = convert_base(x, 2, precision)
#converts the binary digits to ternary (1's -> 2's) and evaluates the result as a real number.
val = sum(2 * d * (3 ** (-i - 1)) for i, d in enumerate(digits))
return val
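To make the digit generator concrete, here is the `convert_base` logic applied to 0.625, whose binary expansion is 0.101 (the function is restated so the snippet is self-contained):

```python
def convert_base(x, base=2, precision=1000):
    # yield the digits of the fractional part of x in the given base
    x = x - int(x)
    for e in range(-1, (-precision - 1) * 2, -1):
        d = int(x // (base ** e))
        x -= d * (base ** e)
        yield d
        if x == 0:
            break

digits = list(convert_base(0.625, base=2))
print(digits)  # [1, 0, 1]
```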
def sierpinski_sample(precision=100):
# Takes a random number in [0, 1] and uses it to navigate the Left/Top/Right tree.
x = random.rand()
s_x, s_y = 0, 0
path = convert_base(x, 3, precision)
exp = 1
for p in path:
exp -= 1
if p == 0:
pass
elif p == 1:
s_x += 0.25 * 2 ** exp
s_y += 0.5 * 2 ** exp
elif p == 2:
s_x += 0.5 * 2 ** exp
return s_x, s_y
data = np.array([(cantor_sample(), cantor_sample()) for _ in range(400)])
x, y = data.T
plt.scatter(x, y, s=2)
plt.show()
def fractal_dim(base, exp_low, exp_high, data):
counts = approx_box_counting(base, exp_low, exp_high, data)
data = np.array([[(exp_low + i), np.log(counts[i])/np.log(base)] for i in range(len(counts))])
return data
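`approx_box_counting` comes from the local `fractal_ml` module, which is not shown here; as an assumption about what it computes, a minimal stand-alone box counter tallies the distinct grid cells of a given side length that the points occupy:

```python
import numpy as np

def count_boxes(data, side):
    # Assign each point to a grid cell of the given side length, count distinct cells
    cells = np.floor(np.asarray(data) / side).astype(int)
    return len(np.unique(cells, axis=0))

pts = np.array([[0.1, 0.1], [0.2, 0.2], [0.6, 0.1], [0.1, 0.9]])
print(count_boxes(pts, 0.5))  # 3: the first two points share a cell
```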
# +
base = 3
data = np.array([(cantor_sample(), cantor_sample()) for _ in range(5000)])
counts = fractal_dim(base, -5, 0, data)
x, y = counts[:,0], counts[:,1]
m, b, r_value, p_value, std_err = scipy.stats.linregress(x, y)
plt.plot(x, m*x+b)
counts = fractal_dim(base, -15, 5, data)
x, y = counts[:,0], counts[:,1]
plt.plot(x, y, 'o')
print("True dim: ", np.log(4)/np.log(3))
print("y = ", m, "x + ", b)
print(r_value**2, p_value)
# +
base = 2
data = np.array([sierpinski_sample() for _ in range(5000)])
counts = fractal_dim(base, -5, 0, data)
x, y = counts[:,0], counts[:,1]
m, b, r_value, p_value, std_err = scipy.stats.linregress(x, y)
plt.plot(x, m*x+b)
counts = fractal_dim(base, -15, 5, data)
x, y = counts[:,0], counts[:,1]
plt.plot(x, y, 'o')
print("True dim: ", np.log(3)/np.log(2))
print("y = ", m, "x + ", b)
print(r_value**2, p_value)
# +
def random_direction_matrix(number_of_directions, extrinsic_dim, precision=1000):
mean = np.zeros([extrinsic_dim])
cov = np.identity(extrinsic_dim)
rg = np.random.Generator(PCG64())
# We can normalize this if we want to
base = rg.multivariate_normal(mean,cov,[precision,number_of_directions])
return base
directions = random_direction_matrix(3, 2, 100)
print(directions.shape)
data = generate_direction_fractal(10000, 0.3, directions)
x, y = np.array(data).T
plt.scatter(x, y, s=2, c='black')
plt.show()
# +
base = 2
counts = fractal_dim(base, -10, 0, data)
x, y = counts[:,0], counts[:,1]
m, b, r_value, p_value, std_err = scipy.stats.linregress(x, y)
plt.plot(x, m*x+b)
counts = fractal_dim(base, -15, 5, data)
x, y = counts[:,0], counts[:,1]
plt.plot(x, y, 'o')
print("y = ", m, "x + ", b)
print(r_value**2, p_value)
# +
base = 3
counts = fractal_dim(base, -7, -1, data)
x, y = counts[:,0], counts[:,1]
m, b, r_value, p_value, std_err = scipy.stats.linregress(x, y)
plt.plot(x, m*x+b)
counts = fractal_dim(base, -15, 5, data)
x, y = counts[:,0], counts[:,1]
plt.plot(x, y, 'o')
print("y = ", m, "x + ", b)
print(r_value**2, p_value)
# +
base = 4
counts = fractal_dim(base, -6, 0, data)
x, y = counts[:,0], counts[:,1]
m, b, r_value, p_value, std_err = scipy.stats.linregress(x, y)
plt.plot(x, m*x+b)
counts = fractal_dim(base, -15, 5, data)
x, y = counts[:,0], counts[:,1]
plt.plot(x, y, 'o')
print("y = ", m, "x + ", b)
print(r_value**2, p_value)
| notebooks/.ipynb_checkpoints/Dimensions-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Sparse Gaussian Process - Binary Classification (MNIST)
import matplotlib as mpl; mpl.use('pgf')
# %matplotlib inline
# +
import numpy as np
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
# import tensorflow as tf
import tensorflow_probability as tfp
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
from collections import defaultdict
from matplotlib import animation
from IPython.display import HTML
from scribbles.gaussian_processes import gp_sample_custom, dataframe_from_gp_samples
# -
GOLDEN_RATIO = 0.5 * (1 + np.sqrt(5))
golden_size = lambda width: (width, width / GOLDEN_RATIO)
# +
FIG_WIDTH = 10
rc = {
"figure.figsize": golden_size(FIG_WIDTH),
# "font.serif": ['Times New Roman'],
"text.usetex": True,
}
sns.set(context="talk", style="ticks", palette="colorblind",
font="serif", rc=rc)
# +
# shortcuts
tfd = tfp.distributions
kernels = tfp.math.psd_kernels
# constants
# num_train = 60000
# num_test = 10000
num_features = 784 # dimensionality
num_classes = 10
# n_index_points = 256 # nbr of index points
# n_samples = 20 # nbr of GP prior samples
jitter = 1e-6
kernel_cls = kernels.MaternFiveHalves
num_inducing_points = 64
num_epochs = 2000
batch_size = 32
quadrature_size = 5
seed = 8888 # set random seed for reproducibility
random_state = np.random.RandomState(seed)
# x_min, x_max = -1.0, 1.0
# y_min, y_max = -6.0, 4.0
# x_loc = -0.5
# # index points
# X_q = np.linspace(-5., 5., n_index_points).reshape(-1, n_features)
# -
def plot_image_grid(ax, images, shape, rows=20, cols=None, cmap=None):
if cols is None:
cols = rows
grid = images[:rows*cols].reshape(rows, cols, *shape).squeeze()
return ax.imshow(np.vstack(np.dstack(grid)), cmap=cmap)
def binarize(positive_label=3, negative_label=5):
def d(X, y, label, new_label=1):
X_val = X[y == label]
y_val = np.full(len(X_val), new_label)
return X_val, y_val
def binarize_decorator(load_data_fn):
def new_load_data_fn():
(X_train, Y_train), (X_test, Y_test) = load_data_fn()
X_train_pos, Y_train_pos = d(X_train, Y_train, positive_label, new_label=1)
X_train_neg, Y_train_neg = d(X_train, Y_train, negative_label, new_label=0)
X_train_new = np.vstack([X_train_pos, X_train_neg])
Y_train_new = np.hstack([Y_train_pos, Y_train_neg])
X_test_pos, Y_test_pos = d(X_test, Y_test, positive_label, new_label=1)
X_test_neg, Y_test_neg = d(X_test, Y_test, negative_label, new_label=0)
X_test_new = np.vstack([X_test_pos, X_test_neg])
Y_test_new = np.hstack([Y_test_pos, Y_test_neg])
return (X_train_new, Y_train_new), (X_test_new, Y_test_new)
return new_load_data_fn
return binarize_decorator
binary_mnist_load_data = binarize(positive_label=2, negative_label=7)(tf.keras.datasets.mnist.load_data)
# +
(X_train, Y_train), (X_test, Y_test) = binary_mnist_load_data()
num_train, img_rows, img_cols = X_train.shape
num_test, img_rows, img_cols = X_test.shape
X_train = X_train.reshape(num_train, num_features) / 255.
X_test = X_test.reshape(num_test, num_features) / 255.
# +
fig, (ax1, ax2) = plt.subplots(ncols=2)
plot_image_grid(ax1, X_train[Y_train == 0],
shape=(img_rows, img_cols), rows=10, cmap="cividis")
plot_image_grid(ax2, X_train[Y_train == 1],
shape=(img_rows, img_cols), rows=10, cmap="cividis")
plt.show()
# -
amplitude = tf.exp(tf.Variable(np.float64(0)), name='amplitude')
length_scale = tf.exp(tf.Variable(np.float64(-1)), name='length_scale')
observation_noise_variance = tf.exp(tf.Variable(np.float64(-5)), name='observation_noise_variance')
base_kernel = kernel_cls(amplitude=amplitude, length_scale=length_scale)
scale_diag = tf.exp(tf.Variable(np.zeros(num_features), name='scale_diag'))
kernel = kernels.FeatureScaled(base_kernel, scale_diag=scale_diag)
ind = random_state.randint(num_train, size=num_inducing_points)
inducing_index_points_initial = X_train[ind]
inducing_index_points_initial.shape
# +
fig, ax = plt.subplots()
plot_image_grid(ax, inducing_index_points_initial,
shape=(img_rows, img_cols), rows=8, cmap="cividis")
plt.show()
# -
inducing_index_points = tf.Variable(
inducing_index_points_initial,
name='inducing_index_points')
variational_inducing_observations_loc = tf.Variable(
np.zeros(num_inducing_points),
name='variational_inducing_observations_loc')
variational_inducing_observations_scale = tf.Variable(
np.eye(num_inducing_points),
name='variational_inducing_observations_scale')
vgp = tfd.VariationalGaussianProcess(
kernel=kernel,
index_points=X_test,
inducing_index_points=inducing_index_points,
variational_inducing_observations_loc=variational_inducing_observations_loc,
variational_inducing_observations_scale=variational_inducing_observations_scale,
observation_noise_variance=observation_noise_variance
)
vgp
# +
# bijector = tfp.bijectors.Transpose(rightmost_transposed_ndims=2)
# vgp = tfd.TransformedDistribution(tfd.Independent(base, reinterpreted_batch_ndims=1),
# bijector=bijector)
# vgp
# -
def make_likelihood(f):
return tfd.Independent(tfd.Bernoulli(logits=f),
reinterpreted_batch_ndims=1)
# return tfd.Categorical(logits=tf.transpose(f))
def log_likelihood(Y, f):
p = make_likelihood(f)
return p.log_prob(Y)
dataset = tf.data.Dataset.from_tensor_slices((X_train, Y_train.astype("float64"))) \
.shuffle(buffer_size=500, seed=seed) \
.batch(batch_size, drop_remainder=True)
iterator = tf.data.make_initializable_iterator(dataset)
X_batch, Y_batch = iterator.get_next()
X_batch, Y_batch
ell = vgp.surrogate_posterior_expected_log_likelihood(
observation_index_points=X_batch,
observations=Y_batch,
log_likelihood_fn=log_likelihood,
quadrature_size=quadrature_size
)
ell
kl = vgp.surrogate_posterior_kl_divergence_prior()
kl
nelbo = - ell + kl * batch_size / num_train
nelbo
optimizer = tf.train.AdamOptimizer()
optimize = optimizer.minimize(nelbo)
steps_per_epoch = num_train // batch_size
steps_per_epoch
# +
history = defaultdict(list)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for i in range(num_epochs):
print(f"Epoch {i}")
sess.run(iterator.initializer)
for j in range(steps_per_epoch):
(_, nelbo_value,
amplitude_value,
length_scale_value,
observation_noise_variance_value,
inducing_index_points_value,
variational_inducing_observations_loc_value,
variational_inducing_observations_scale_value) = sess.run([optimize,
nelbo,
amplitude,
length_scale,
observation_noise_variance,
inducing_index_points,
variational_inducing_observations_loc,
variational_inducing_observations_scale])
history["nelbo"].append(nelbo_value)
history["amplitude"].append(amplitude_value)
history["length_scale"].append(length_scale_value)
history["observation_noise_variance"].append(observation_noise_variance_value)
history["inducing_index_points"].append(inducing_index_points_value)
history["variational_inducing_observations_loc"].append(variational_inducing_observations_loc_value)
history["variational_inducing_observations_scale"].append(variational_inducing_observations_scale_value)
# -
history_df = pd.DataFrame(history)
# +
fig, ax = plt.subplots()
sns.lineplot(x='index', y='nelbo', data=history_df.reset_index(),
alpha=0.8, ax=ax)
ax.set_xlabel("epoch")
ax.set_yscale("log")
plt.show()
# -
history["inducing_index_points"][-1].shape
# +
fig, (ax1, ax2) = plt.subplots(ncols=2)
plot_image_grid(ax1, history["inducing_index_points"][0],
shape=(img_rows, img_cols), rows=8, cmap="cividis")
plot_image_grid(ax2, history["inducing_index_points"][-1],
shape=(img_rows, img_cols), rows=8, cmap="cividis")
plt.show()
# -
inducing_index_points_history = np.stack(history["inducing_index_points"])
inducing_index_points_history.shape
segments_min_history = np.dstack(np.broadcast_arrays(inducing_index_points_history, y_min))
segments_max_history = np.dstack([inducing_index_points_history,
history["variational_inducing_observations_loc"]])
segments_history = np.stack([segments_max_history, segments_min_history], axis=-2)
segments_history.shape
# +
kernel_history = kernel_cls(amplitude=history.get("amplitude"), length_scale=history.get("length_scale"))
vgp_history = tfd.VariationalGaussianProcess(
kernel=kernel_history,
index_points=X_q,
inducing_index_points=np.stack(history.get("inducing_index_points")),
variational_inducing_observations_loc=np.stack(history.get("variational_inducing_observations_loc")),
variational_inducing_observations_scale=np.stack(history.get("variational_inducing_observations_scale")),
observation_noise_variance=history.get("observation_noise_variance")
)
vgp_mean = vgp_history.mean()
vgp_stddev = vgp_history.stddev()
# -
with tf.Session() as sess:
vgp_mean_value, vgp_stddev_value = sess.run([vgp_mean, vgp_stddev])
# +
fig, ax = plt.subplots()
ax.plot(X_q, latent_true, 'k-', label=r"$\log p(x) - \log q(x)$")
ax.plot(X_q, vgp_mean_value[-1], label="posterior predictive mean")
ax.fill_between(np.squeeze(X_q),
vgp_mean_value[-1] - vgp_stddev_value[-1],
vgp_mean_value[-1] + vgp_stddev_value[-1],
alpha=0.1, label="posterior predictive std dev")
# ax.scatter(X, Y, marker='x', color='k', label="noisy observations")
ax.vlines(history["inducing_index_points"][-1], ymin=y_min,
ymax=history["variational_inducing_observations_loc"][-1],
color='k', linewidth=1.0, alpha=0.4, label="inducing inputs")
ax.set_xlabel('$x$')
ax.set_ylabel('$f(x)$')
ax.set_ylim(y_min, y_max)
ax.legend()
fig.savefig("f_posterior_predictive.pgf")
# plt.show()
# +
fig, (ax1, ax2) = plt.subplots(nrows=2, sharex=True, gridspec_kw=dict(hspace=0.1))
ax1.plot(X_q, latent_true, 'k-', label=r"$\log p(x) - \log q(x)$")
line_mean, = ax1.plot(X_q, vgp_mean_value[-1], color="tab:blue", label="posterior predictive mean")
line_stddev_lower, = ax1.plot(X_q, vgp_mean_value[-1] - 2*vgp_stddev_value[-1],
color="tab:blue", alpha=0.4)
line_stddev_upper, = ax1.plot(X_q, vgp_mean_value[-1] + 2*vgp_stddev_value[-1],
color="tab:blue", alpha=0.4)
vlines_inducing_index_points = ax1.vlines(inducing_index_points_history[-1].squeeze(),
ymax=history["variational_inducing_observations_loc"][-1],
ymin=y_min, linewidth=1.0, alpha=0.4)
ax1.set_ylabel('$f(x)$')
ax1.set_ylim(y_min, y_max)
lines_inducing_index_points = ax2.plot(inducing_index_points_history.squeeze(), range(num_epochs),
color='k', linewidth=1.0, alpha=0.4)
ax2.set_xlabel(r"$x$")
ax2.set_ylabel("epoch")
ax1.legend()
plt.show()
# -
# num_frames = 100
num_frames = num_epochs
def animate(i):
line_mean.set_data(X_q, vgp_mean_value[-num_frames+i])
line_stddev_lower.set_data(X_q, vgp_mean_value[-num_frames+i] - 2*vgp_stddev_value[-num_frames+i])
line_stddev_upper.set_data(X_q, vgp_mean_value[-num_frames+i] + 2*vgp_stddev_value[-num_frames+i])
vlines_inducing_index_points.set_segments(segments_history[-num_frames+i])
for j, line in enumerate(lines_inducing_index_points):
line.set_data(inducing_index_points_history[:-num_frames+i, j],
                      np.arange(num_epochs-num_frames+i))
ax2.relim()
ax2.autoscale_view(scalex=False)
return lines_inducing_index_points + [line_mean, line_stddev_lower, line_stddev_upper]
anim = animation.FuncAnimation(fig, animate, frames=num_frames,
interval=60, repeat_delay=5, blit=True)
# +
# HTML(anim.to_html5_video())
# +
kernel_final = kernel_cls(amplitude=history["amplitude"][-1], length_scale=history["length_scale"][-1])
vgp_final = tfd.VariationalGaussianProcess(
kernel=kernel_final,
index_points=X_q,
inducing_index_points=history["inducing_index_points"][-1],
variational_inducing_observations_loc=history["variational_inducing_observations_loc"][-1],
variational_inducing_observations_scale=history["variational_inducing_observations_scale"][-1],
observation_noise_variance=history["observation_noise_variance"][-1]
)
# -
p = tfd.LogNormal(loc=vgp_final.mean(), scale=vgp_final.stddev())
p
Y_q = np.linspace(-6, 4, 200).reshape(-1, 1)
Y_q.shape
a = tf.exp(p.log_prob(Y_q))
# +
fig, ax = plt.subplots()
ax.plot(X_q, latent_true, 'k-', label=r"$\log p(x) - \log q(x)$")
with tf.Session() as sess:
# sess.run(tf.global_variables_initializer())
ax.pcolor(sess.run(a), cmap="Blues")
plt.show()
# -
mean, stddev = p.mean(), p.stddev()
with tf.Session() as sess:
mean_value, stddev_value = sess.run([mean, stddev])
# +
fig, ax = plt.subplots()
ax.plot(X_q, density_ratio_value, 'k-', label=r"$p(x) / q(x)$")
ax.plot(X_q, mean_value, label="posterior predictive mean")
ax.fill_between(np.squeeze(X_q),
mean_value - stddev_value,
mean_value + stddev_value,
alpha=0.1, label="posterior predictive std dev")
# ax.scatter(X, Y, marker='x', color='k', label="noisy observations")
ax.set_xlabel('$x$')
ax.set_ylabel('$r(x) = \exp{f(x)}$')
ax.set_ylim(-1.0, 12.0)
ax.legend()
fig.savefig("r_posterior_predictive.pgf")
# plt.show()
# -
vgp_sample = vgp_final.sample(sample_shape=n_samples)
vgp_sample
make_likelihood(vgp_sample).mean()
with tf.Session() as sess:
vgp_sample_value, class_prob_sample_value = sess.run([vgp_sample, make_likelihood(vgp_sample).mean()])
# +
fig, ax = plt.subplots()
ax.plot(X_q, latent_true, 'k-', label=r"$\log p(x) - \log q(x)$")
ax.plot(X_q, vgp_sample_value.T, alpha=0.4)
# ax.scatter(X, Y, marker='x', color='k', label="noisy observations")
ax.set_xlabel('$x$')
ax.set_ylabel('$f(x)$')
ax.set_ylim(y_min, y_max)
ax.legend()
plt.show()
# +
names = ["sample", "x"]
v = [list(map(r"$s={}$".format, range(n_samples))), X_q.squeeze()]
index = pd.MultiIndex.from_product(v, names=names)
d = pd.DataFrame(class_prob_sample_value.ravel(), index=index, columns=["y"])
data = d.reset_index()
data
# +
fig, ax1 = plt.subplots()
ax1.scatter(X, Y, c=Y, s=12.**2,
marker='s', alpha=.2, cmap='coolwarm_r')
ax1.set_xlabel('$x$')
ax1.set_xlim(-5.5, 5.5)
ax1.set_ylim(-0.05, 1.05)
ax1.set_yticks([0, 1])
ax1.set_yticklabels(['$x_q \sim q(x)$',
'$x_p \sim p(x)$'])
ax2 = ax1.twinx()
ax2.plot(X_q, class_prob_true, 'k-')
ax2.plot(X_q, class_prob_sample_value.T, alpha=0.4)
# sns.lineplot(x='x', y='y', hue='sample', data=data, palette="tab20", alpha=0.4, ax=ax2)
ax2.set_xlim(-5.5, 5.5)
ax2.set_xlabel('$x$')
ax2.set_ylim(-0.05, 1.05)
# ax2.set_ylabel('$\mathcal{P}(y=1 \mid x)$')
ax2.set_ylabel('$\pi(x)$')
fig.savefig("pi_posterior_predictive_samples.pgf")
# plt.show()
# -
| scratch/sparse_gaussian_process_classification_binary_mnist.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
'''
Crawl Playful Kiss (2010) comments from https://mydramalist.com
URL: https://mydramalist.com/20-playful-kiss/reviews
'''
import requests
from bs4 import BeautifulSoup
import csv
def get_html(url):
    try:
        r = requests.get(url, timeout=30)
        r.raise_for_status()
        # site encoding
        r.encoding = 'utf-8'
        return r.text
    except requests.RequestException as e:
        print('ERROR:', e)
def get_content(url):
    dramaComments = []  # keep the list local so repeated calls do not duplicate earlier pages
    html = get_html(url)
    soup = BeautifulSoup(html, 'lxml')
# find comment list
commentList = soup.find('div', class_='app-body')
comments = commentList.find_all('div', class_='review')
for comment in comments:
data = {}
data['name'] = comment.find('a', class_='text-primary').text
data['review'] = ''
        try:
            review = comment.find('div', class_='review-body').contents[2:-4]
        except AttributeError:
            print("error: review-bodyfull-read")
            review = comment.find('div', class_='review-bodyfull-read').contents[2:]
for item in review:
if(isinstance(item, str)):
data['review'] += item.strip()
#get review text without br
data['vote'] = comment.find('div', class_='user-stats').small.b.text
data['overall'] = comment.find('div', class_='rating-overall').b.span.text
data['story'] = comment.find('div', class_='review-rating').findChildren()[1].text
data['cast'] = comment.find('div', class_='review-rating').findChildren()[2].span.text
data['music'] = comment.find('div', class_='review-rating').findChildren()[4].span.text
data['rewatch'] = comment.find('div', class_='review-rating').findChildren()[6].span.text
dramaComments.append(data)
return dramaComments
# write the downloaded contents to a local file
def saveCSV(contents):
    res = ['name', 'review', 'vote', 'overall', 'story', 'cast', 'music', 'rewatch']
    with open('review_new.csv', 'a+') as f:
        wr = csv.writer(f, quoting=csv.QUOTE_ALL, lineterminator='\n')
        if f.tell() == 0:  # write the header row only when the file is new
            wr.writerow(res)
        for comment in contents:
            wr.writerow(comment.values())
def main():
url = 'https://mydramalist.com/20-playful-kiss/reviews?page='
urlList = []
page = 8
for i in range(page):
urlList.append(url + str(i + 1))
for url in urlList:
content = get_content(url)
saveCSV(content)
main()
| assets/data/.ipynb_checkpoints/get_data-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# %matplotlib auto
# Automatically determine your backend.
# If auto-detection fails, run the following two lines to determine your backend.
# >>> import matplotlib
# >>> matplotlib.get_backend()
# It should be one of the following (or at least close to)
# ['auto', 'gtk', 'gtk3', 'inline', 'nbagg', 'notebook', 'osx', 'qt', 'qt4', 'qt5', 'tk', 'wx']
from sfqlib.sfqQubit import Sfq2LevelFancyQubit, Sfq3LevelFancyQubit, Sfq2LevelQubit, Sfq3LevelQubit
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from numpy import pi, dot
# Create an interactive plot.
plt.ion()
fig = plt.figure(figsize=(10, 10))
axis_01 = fig.add_subplot(2, 2, 1, projection='3d', label='0-1 subspace')
axis_12 = fig.add_subplot(2, 2, 2, projection='3d', label='1-2 subspace')
axis_alpha = fig.add_subplot(2, 2, 3)
axis_beta = fig.add_subplot(2, 2, 4)
# Create a qubit
qubit = Sfq3LevelFancyQubit((axis_01, axis_12), d_theta=pi/40, w_clock=2*pi*40e9,
w_qubit=(2*pi*5.0e9, 2*pi*9.8e9), theta=pi/2)
# Only plot the motion of the ground state and the excited state.
qubit.set_plot_kets(['G', 'E'])
# Apply a resonant sequence. Remove the pause if you don't want to see the animation.
for i in range(20):
qubit.pulse_and_precess()
plt.pause(0.01)
for j in range(7):
qubit.precess()
plt.pause(0.01)
# Plot the euler angles.
axis_alpha.plot(qubit.alpha_list, label=r'$\alpha$', color='r')
axis_beta.plot(qubit.beta_list, label=r'$\beta$', color='b')
axis_alpha.legend()
axis_beta.legend()
| notebooks/three_level_qubit.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.12 64-bit (''kaggle'': conda)'
# name: python3812jvsc74a57bd04cccc9253a97be6bfa179811742e359441dab23dea9b85717016d6b8ba8b672c
# ---
# # Assignment 1
# # Color-histogram-based method
#
# Reference:
#
# https://www.pyimagesearch.com/2014/07/14/3-ways-compare-histograms-using-opencv-python/
import cv2
# # Grayscale scenario
x = cv2.imread('resource/jpg/Bread/0.jpg')
y = cv2.imread('resource/jpg/Bread/1.jpg')
x = cv2.cvtColor(x, cv2.COLOR_BGR2GRAY)
y = cv2.cvtColor(y, cv2.COLOR_BGR2GRAY)
hist_x = cv2.calcHist([x], [0], None, [20], [0, 256])
hist_y = cv2.calcHist([y], [0], None, [20], [0, 256])
hist_norm_x = hist_x / hist_x.sum()
hist_norm_y = hist_y / hist_y.sum()
# +
## L1 norm
score = cv2.compareHist(hist_norm_x, hist_norm_y, cv2.NORM_L1)
print("l1 {}".format(score))
## L2 norm
score = 1 / (cv2.compareHist(hist_norm_x, hist_norm_y, cv2.NORM_L2) + 1)
print("l2 {}".format(score))
## Correlation
score = (cv2.compareHist(hist_norm_x, hist_norm_y, cv2.HISTCMP_CORREL) + 1)/2
print("correlation {}".format(score))
## Chi-Square
score = 1 / (cv2.compareHist(hist_norm_x, hist_norm_y, cv2.HISTCMP_CHISQR) + 1)
print("chisquare {}".format(score))
# Intersection
score = cv2.compareHist(hist_norm_x, hist_norm_y, cv2.HISTCMP_INTERSECT)
print("intersection {}".format(score))
# Bhattacharyya distance
score = 1 / (cv2.compareHist(hist_norm_x, hist_norm_y, cv2.HISTCMP_BHATTACHARYYA) + 1)
print("bhattacharyya distance {}".format(score))
# Alternative Chi-Square
score = 1 / (cv2.compareHist(hist_norm_x, hist_norm_y, cv2.HISTCMP_CHISQR_ALT) + 1)
print("alternative chisquare {}".format(score))
# -
# # Color scenario
x = cv2.imread('resource/jpg/data/Bread/0.jpg')
y = cv2.imread('resource/jpg/data/Bread/1.jpg')
x = cv2.cvtColor(x, cv2.COLOR_BGR2RGB)
y = cv2.cvtColor(y, cv2.COLOR_BGR2RGB)
hist_x = cv2.calcHist([x], [0,1,2], None, [20]*3, [0, 256]*3)
hist_y = cv2.calcHist([y], [0,1,2], None, [20]*3, [0, 256]*3)
hist_norm_x = hist_x / hist_x.sum()
hist_norm_y = hist_y / hist_y.sum()
# +
## L1 norm
score = cv2.compareHist(hist_norm_x, hist_norm_y, cv2.NORM_L1)
print("l1 {}".format(score))
## L2 norm
score = 1 / (cv2.compareHist(hist_norm_x, hist_norm_y, cv2.NORM_L2) + 1)
print("l2 {}".format(score))
## Correlation
score = (cv2.compareHist(hist_norm_x, hist_norm_y, cv2.HISTCMP_CORREL) + 1)/2
print("correlation {}".format(score))
## Chi-Square
score = 1 / (cv2.compareHist(hist_norm_x, hist_norm_y, cv2.HISTCMP_CHISQR) + 1)
print("chisquare {}".format(score))
# Intersection
score = cv2.compareHist(hist_norm_x, hist_norm_y, cv2.HISTCMP_INTERSECT)
print("intersection {}".format(score))
# Bhattacharyya distance
score = 1 / (cv2.compareHist(hist_norm_x, hist_norm_y, cv2.HISTCMP_BHATTACHARYYA) + 1)
print("bhattacharyya distance {}".format(score))
# Alternative Chi-Square
score = 1 / (cv2.compareHist(hist_norm_x, hist_norm_y, cv2.HISTCMP_CHISQR_ALT) + 1)
print("alternative chisquare {}".format(score))
# -
# ---
# # Texture-feature-based method
# Reference:
#
# https://learnopencv.com/edge-detection-using-opencv/
from skimage.metrics import structural_similarity as compare_ssim
x = cv2.imread('resource/jpg/Bread/0.jpg')
y = cv2.imread('resource/jpg/Bread/1.jpg')
x = cv2.cvtColor(x, cv2.COLOR_BGR2GRAY)
y = cv2.cvtColor(y, cv2.COLOR_BGR2GRAY)
# Sobel Edge Detection
edge_x = cv2.Sobel(src=x, ddepth=cv2.CV_64F, dx=1, dy=1, ksize=5)
edge_y = cv2.Sobel(src=y, ddepth=cv2.CV_64F, dx=1, dy=1, ksize=5)
(score, diff) = compare_ssim(edge_x, edge_y, full=True)
diff = (diff * 255).astype("uint8")
print("SSIM: {}".format(score))
| skip/skip/experiment.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import re
import os
# +
# references
# http://nbviewer.jupyter.org/github/KMJJ1/hiphop/blob/master/RNN%20-%20LSTM%EC%9D%84%20%EC%9D%B4%EC%9A%A9%ED%95%9C%20Hiphop%20%EA%B0%80%EC%82%AC%20%EC%83%9D%EC%84%B1.ipynb
# https://github.com/dyelax/encore.ai
# -
# %pwd
# # !pip install soynlp
# !pip show soynlp
data= pd.read_csv("song_data_fixed.csv")
song = pd.DataFrame(data)
song.head(3)
song.columns
song['artist'].value_counts().head()
song[song['artist'] == '키스'].head(3) # who is '키스'? => an American singer, to be removed
# number of unique artists
len(song['artist'].unique())
# number of songs
song.shape
song['lyrics'].isnull().sum()
song = song['lyrics'].dropna().reset_index() # with inplace=True the rows would be dropped from the original dataframe
song['lyrics'].isnull().sum()
song
# ## Text data preprocessing
def preprocessing(text):
    # remove newline characters
    text = text.strip('\t\n\r')
    pattern = re.compile(r'\s+')
    text = re.sub(pattern, ' ', text)
    # remove special characters
    # special characters and emoticons can sometimes carry meaning, but here we drop them.
    # text = re.sub('[?.,;:|\)*~`’!^\-_+<>@\#$%&-=#}※]', '', text)
    # keep only Korean, English, and digits.
    # text = re.sub('[^가-힣ㄱ-ㅎㅏ-ㅣa-zA-Z0-9]', ' ', text)
    # keep only Korean and English.
    text = re.sub('[^가-힣ㄱ-ㅎㅏ-ㅣa-zA-Z]', ' ', text)
    return text
sample_content = song['lyrics'][100]
type(sample_content)
sample_content = preprocessing(sample_content)
sample_content[:1000]
# # Prefixing a statement with %time prints how long the code takes to run
# %time sentences = song['lyrics'].apply(preprocessing)
sentences.head()
# ## Bigram
#
# To help generate lyrics that actually make sense, words that frequently occur together should be grouped into phrases.
#
# This is the n-gram technique; variants include bigrams (pairs of words) and trigrams (triples of words).
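# Before relying on gensim's Phrases model below, the core idea can be shown with plain Python:
# count adjacent word pairs and keep the frequent ones. This is only a toy sketch with a made-up
# corpus, not the gensim scoring formula.

```python
from collections import Counter

def most_common_bigrams(sentences, top_n=3):
    """Count adjacent word pairs across tokenized sentences.

    A toy illustration of the idea behind gensim's Phrases model,
    which promotes frequent pairs like these to single tokens.
    """
    counts = Counter()
    for tokens in sentences:
        counts.update(zip(tokens, tokens[1:]))
    return counts.most_common(top_n)

corpus = [
    ["new", "york", "is", "big"],
    ["i", "love", "new", "york"],
    ["new", "york", "pizza"],
]
print(most_common_bigrams(corpus))  # ("new", "york") appears 3 times
```

# gensim's Phrases additionally applies a min_count/threshold score before merging a pair,
# so rare co-occurrences are not promoted.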
# +
# before tokenizing, write the sentences out to a txt file
# %%time
USE_PREMADE_TEXT = False
text_filepath = 'all_lyrics_text.txt'
if not USE_PREMADE_TEXT:
with open(text_filepath, 'w', encoding='utf-8') as f:
for lyrics in sentences.values:
            if pd.isnull(lyrics): # skip null values
continue
f.write(lyrics + '\n')
else:
    assert os.path.exists(text_filepath)
# +
# save the bigram model
USE_PREMADE_BIGRAM_MODEL = False
all_bigram_model_filepath = 'all_bigram_model'
all_sentences_normalized_filepath = 'all_lyrics_text.txt'
all_unigram_sentences = LineSentence(all_sentences_normalized_filepath)
if not USE_PREMADE_BIGRAM_MODEL:
    all_bigram_model = Phrases(all_unigram_sentences)  # decides whether a word pair is a phrase or not
all_bigram_model.save(all_bigram_model_filepath)
else:
all_bigram_model = Phrases.load(all_bigram_model_filepath)
# +
# %%time
USE_PREMADE_BIGRAM_SENTENCES = False
all_bigram_sentences_filepath = 'all_sentences_for_word2vec.txt'
if not USE_PREMADE_BIGRAM_SENTENCES:
with open(all_bigram_sentences_filepath, 'w', encoding='utf-8') as f:
for unigram_sentence in all_unigram_sentences:
all_bigram_sentence = all_bigram_model[unigram_sentence]
f.write(' '.join(all_bigram_sentence) + '\n')
else:
    assert os.path.exists(all_bigram_sentences_filepath)
# -
# ## Building the Word2vec model
# +
# enable logging while training the word2vec model.
import logging
logging.basicConfig(
format='%(asctime)s : %(levelname)s : %(message)s',
level=logging.INFO)
# -
# total_examples
all2vec.corpus_count
# +
# %%time
USE_PREMADE_WORD2VEC = False
all2vec_filepath = 'all_word2vec_model'
if not USE_PREMADE_WORD2VEC:
lyrics_for_word2vec = LineSentence(all_bigram_sentences_filepath)
    all2vec = Word2Vec(lyrics_for_word2vec, size=100, window=5, min_count=1, sg=1)
    # sg=0 is CBOW, sg=1 is the Skip-Gram model
    # 100-dimensional vectors; 20 to 100 is typical
    # window=5 means looking at the 5 words before and after
    # a smaller window emphasizes syntactic information; a larger one captures more topical, contextual information
    for _ in range(9):
        all2vec.train(lyrics_for_word2vec, total_examples=16377, epochs=1)
        # state explicitly that the lyrics contain 16377 sentences in total
all2vec.save(all2vec_filepath)
else:
all2vec = Word2Vec.load(all2vec_filepath)
all2vec.init_sims()
# -
all2vec_filepath = 'all_word2vec_model'
all2vec = Word2Vec.load(all2vec_filepath)
# +
# we obtained 204,703 words in total
a = pd.DataFrame(all2vec.wv.index2word)
len(a)
# -
len(a)
# ## t-SNE visualization using only words that appear more than 100 times
# %%time
words = []
for i in range(len(a)):
    cnt = all2vec.wv.vocab[a[0][i]]  # occurrence count
    if cnt.count > 100:
        words.append(a[0][i])  # keep only words that occur more than 100 times
len(words)  # 3140 words occur more than 100 times
# +
# fetch the vector of each word that appears more than 100 times
X = []
for i in words:
X.append(all2vec.wv[i])
# -
X2 = pd.DataFrame(X, index = words)
# build a dataframe of the words and their vector values
X2.head()
# +
import pickle
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.manifold import TSNE
# fit the t-SNE model
USE_PREMADE_TSNE = False
tsne_filepath = 'tsne.pkl'
if not USE_PREMADE_TSNE:
tsne = TSNE(random_state=0)
tsne_points = tsne.fit_transform(X2)
with open(tsne_filepath, 'wb') as f:
pickle.dump(tsne_points, f)
else:
with open(tsne_filepath, 'rb') as f:
tsne_points = pickle.load(f)
tsne_df = pd.DataFrame(tsne_points, index=X2.index, columns=['x_coord', 'y_coord'])
tsne_df['word'] = tsne_df.index
# +
# visualize the t-SNE model
from bokeh.plotting import figure, show, output_notebook
from bokeh.models import HoverTool, ColumnDataSource, value
output_notebook()
# prepare the data in a form suitable for bokeh.
plot_data = ColumnDataSource(tsne_df)
# create the plot and configure it
tsne_plot = figure(title='t-SNE Word Embeddings',
plot_width = 800,
plot_height = 800,
active_scroll='wheel_zoom'
)
# add a hover tool to display words on roll-over
tsne_plot.add_tools( HoverTool(tooltips = '@word') )
tsne_plot.circle('x_coord', 'y_coord', source=plot_data,
color='blue', line_alpha=0.2, fill_alpha=0.1,
size=10, hover_line_color='orange')
# adjust visual elements of the plot
tsne_plot.title.text_font_size = value('16pt')
tsne_plot.xaxis.visible = False
tsne_plot.yaxis.visible = False
tsne_plot.grid.grid_line_color = None
tsne_plot.outline_line_color = None
# show time!
show(tsne_plot);
# -
# Hovering over a data point shows which words are mapped close together.
#
# 1. The denser a region, the more similar words are clustered there.
#
#
# 2. Korean, English, Japanese, and Chinese words each cluster together on their own.
#
#
# 3. Considering that almost no preprocessing was done, the result looks good.
# ## (Caution) Training on everything processed with bigrams, without filtering to words with more than 100 occurrences, gives poor results.
#
# Plain tokenizing followed by word2vec works much better; I am not sure why.
all2vec.most_similar(positive=['여자'])
# ## Tokenizing
# +
from soynlp.tokenizer import RegexTokenizer
import gensim
from gensim.models import Phrases
from gensim.models.word2vec import LineSentence
from gensim import corpora, models
from gensim.models import LdaMulticore
from gensim.models import Word2Vec
from gensim.corpora import Dictionary, MmCorpus
tokenizer = RegexTokenizer()
tokenizer
# -
# tokenize the sample text after preprocessing
tokened_content = tokenizer.tokenize(sample_content)
tokened_content
# %time tokens = sentences.apply(tokenizer.tokenize)
tokens[:3]
# size of the word2vec vocabulary
len(all2vec.wv.vocab)
# show only the top 10 entries of the vocabulary
vocab = all2vec.wv.vocab
sorted(vocab, key=vocab.get, reverse=True)[:10]
# extract the most similar words
all2vec.wv.most_similar('너')
all2vec.wv.most_similar('사랑')
all2vec.wv.most_similar('여자')
all2vec.wv.most_similar('남자')
all2vec.wv.most_similar('baby')
all2vec.wv.most_similar('이별')
all2vec.wv.most_similar('sexy')
all2vec.wv.most_similar('여자친구')
all2vec.wv.most_similar('남자친구')
all2vec.wv.most_similar('친구')
# ## Studying LSTM and trying to train it
# Instructions to Train:
# If you want to use our model to train your own artists, follow these steps:
#
# Pick an artist – it should be someone with a lot of lyrics. (Over 100,000 words).
# Collect all of the artist's lyrics from your favorite lyrics website. Save each song as a text file in data/artist_name/. We recommend leaving newlines in as a special token so that the network will learn line and stanza breaks.
# Train by navigating to the code directory and running python runner.py -a <artist_name> -m <model_save_name>.
# Our models were all trained for 30,000 steps.
# Generate new songs by running
# python runner.py -a <artist_name> -l ../save/models/<model_save_name>/<ckpt_file> -t.
# Optional: If you would like to specify "prime text" – the initial text that the model will generate from – pass in a string with the -p flag.
# Share your trained models with us so we can feature them on our website! Create an issue with a link to a public Dropbox or Google Drive containing your model's .ckpt file.
| NLP/NLP_sample_Jieun_ver02.ipynb |
// ---
// jupyter:
// jupytext:
// text_representation:
// extension: .cs
// format_name: light
// format_version: '1.5'
// jupytext_version: 1.14.4
// kernelspec:
// display_name: .NET (C#)
// language: C#
// name: .net-csharp
// ---
#r "nuget: System.Threading.Tasks.Dataflow, 4.10.0"
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow;
var bufferBlock = new BufferBlock<int>();
Action producer = () =>
{
var i = 0;
while (i <= 20)
{
Thread.Sleep(1000);
i += 1;
Console.WriteLine($"Posting {i}...");
bufferBlock.Post(i);
}
};
Action consumer = () =>
{
var items = new List<int>() as IList<int>;
while (true)
{
Thread.Sleep(3500);
if (bufferBlock.TryReceiveAll(out items))
Console.WriteLine($"Consumer received {string.Join(',', items)}...");
}
};
Task.Factory.StartNew(producer).ContinueWith(_ => bufferBlock.Complete());
Task.Factory.StartNew(consumer);
bufferBlock.Completion.Wait();
| csharp/tpl-dataflow/BufferBlock.TryReceiveAll.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Plan
#
# 0.1) Figure out visualization in Python
#
# 0.2) Figure out the theory
#
# 1) Write a neural network that computes normals in PyTorch, relying on [automatic differentiation](https://pytorch.org/tutorials/beginner/blitz/autograd_tutorial.html#sphx-glr-beginner-blitz-autograd-tutorial-py)
#
# 2) Checking against what PyTorch produces, derive the gradients by hand and rewrite everything in pure numpy.
#
# 3) Figure out visualization in JavaScript
#
# 4) Rewrite the network in JavaScript to get a demo HTML page
# # Demo in Python
#
# Let's just explore some math and then we'll rewrite it in JavaScript.
import matplotlib.pyplot as plt
import numpy as np
import torch
# ## Plotting SDF
#
# [example](https://matplotlib.org/examples/images_contours_and_fields/pcolormesh_levels.html) from documentation
# +
"""
Shows how to combine Normalization and Colormap instances to draw
"levels" in pcolor, pcolormesh and imshow type plots in a similar
way to the levels keyword argument to contour/contourf.
"""
import matplotlib.pyplot as plt
from matplotlib.colors import BoundaryNorm
from matplotlib.ticker import MaxNLocator
import numpy as np
# make these smaller to increase the resolution
dx, dy = 0.05, 0.05
# generate 2 2d grids for the x & y bounds
y, x = np.mgrid[slice(1, 5 + dy, dy),
slice(1, 5 + dx, dx)]
z = np.sin(x)**10 + np.cos(10 + y*x) * np.cos(x)
# x and y are bounds, so z should be the value *inside* those bounds.
# Therefore, remove the last value from the z array.
z = z[:-1, :-1]
levels = MaxNLocator(nbins=15).tick_values(z.min(), z.max())
# pick the desired colormap, sensible levels, and define a normalization
# instance which takes data values and translates those into levels.
cmap = plt.get_cmap('PiYG')
norm = BoundaryNorm(levels, ncolors=cmap.N, clip=True)
fig, (ax0, ax1) = plt.subplots(nrows=2)
im = ax0.pcolormesh(x, y, z, cmap=cmap, norm=norm)
fig.colorbar(im, ax=ax0)
ax0.set_title('pcolormesh with levels')
# contours are *point* based plots, so convert our bound into point
# centers
cf = ax1.contourf(x[:-1, :-1] + dx/2.,
y[:-1, :-1] + dy/2., z, levels=levels,
cmap=cmap)
fig.colorbar(cf, ax=ax1)
ax1.set_title('contourf with levels')
# adjust spacing between subplots so `ax1` title and `ax0` tick labels
# don't overlap
fig.tight_layout()
plt.show()
# -
# SDF for circle with radius 3, and center at (1, 2)
x0 = 1
y0 = 2
r = 3
sdf_circle = lambda x, y: np.sqrt((x - x0) ** 2 + (y - y0) ** 2) - r
# Let's plot it!
# +
dx, dy = 0.05, 0.05
# generate 2 2d grids for the x & y bounds
y, x = np.mgrid[slice(-5, 5 + dy, dy),
slice(-5, 5 + dx, dx)]
z = sdf_circle(x, y)
z = z[:-1, :-1]
levels = MaxNLocator(nbins=15).tick_values(z.min(), z.max())
cmap = plt.get_cmap('PiYG')
norm = BoundaryNorm(levels, ncolors=cmap.N, clip=True)
fig, ax0 = plt.subplots(nrows=1)
im = ax0.pcolormesh(x, y, z, cmap=cmap, norm=norm)
fig.colorbar(im, ax=ax0)
ax0.set_title('pcolormesh with levels')
# -
# ## Formal ML problem statement
#
# We pose a supervised learning problem. Given a training set of n pairs {(x, y), SDF(x, y)}, where
# - (x, y) is a point in the plane
# - SDF(x, y) is the value of the signed distance function at that point
#
# the task is to reconstruct the signed distance function over the whole plane.
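# To make the setup concrete, here is a minimal sketch that samples training pairs from the known circle SDF defined earlier (center (1, 2), radius 3); the sampling square and seed are arbitrary choices:

```python
import math
import random

# Ground-truth SDF for a circle with center (1, 2) and radius 3, as above
def sdf_circle(x, y):
    return math.sqrt((x - 1.0) ** 2 + (y - 2.0) ** 2) - 3.0

def sample_training_set(n, lo=-5.0, hi=5.0, seed=0):
    # n pairs {(x, y), SDF(x, y)} drawn uniformly from a square
    rng = random.Random(seed)
    points = [(rng.uniform(lo, hi), rng.uniform(lo, hi)) for _ in range(n)]
    return [((x, y), sdf_circle(x, y)) for (x, y) in points]

train = sample_training_set(100)
print(len(train), train[0])
```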
# ## Neural network
#
# Let's predict the signed distance function to a 2D shape with a neural network that has a single hidden layer. As the nonlinearity we take, for example, tanh. No nonlinearity is applied to the output layer.
#
# <img src="architecture.png">
# $z_1 = w^{[1]}_{11}x + w^{[1]}_{12}y + b^{[1]}_1$
#
# $z_2 = w^{[1]}_{21}x + w^{[1]}_{22}y + b^{[1]}_2$
#
# $z_3 = w^{[1]}_{31}x + w^{[1]}_{32}y + b^{[1]}_3$
#
# $z_4 = w^{[1]}_{41}x + w^{[1]}_{42}y + b^{[1]}_4$
#
#
# $a_1 = tanh(z_1)$
#
# $a_2 = tanh(z_2)$
#
# $a_3 = tanh(z_3)$
#
# $a_4 = tanh(z_4)$
#
# $ S = w^{[2]}_{11}a_1 + w^{[2]}_{12}a_2
# + w^{[2]}_{13}a_3 + w^{[2]}_{14}a_4$
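# The forward pass above can be written out directly in plain Python; the weights below are illustrative placeholders, not trained values:

```python
import math

def forward(x, y, W1, b1, W2):
    # hidden layer: z_i = w_i1 * x + w_i2 * y + b_i, then a_i = tanh(z_i)
    a = [math.tanh(w[0] * x + w[1] * y + b) for w, b in zip(W1, b1)]
    # output layer: S = sum_j w2_j * a_j, with no nonlinearity
    return sum(w2 * ai for w2, ai in zip(W2, a))

# 4 hidden units, arbitrary example weights
W1 = [(0.5, -0.2), (0.1, 0.3), (-0.4, 0.7), (0.2, 0.2)]
b1 = [0.0, 0.1, -0.1, 0.0]
W2 = [1.0, -1.0, 0.5, 0.5]
print(forward(1.0, 2.0, W1, b1, W2))
```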
# As the loss function we take MSE.
#
# $$Loss(W, b) = \frac{1}{n}\sum_{i = 1}^{n} (S(W, b, x_i, y_i) - SDF(x_i, y_i))^2$$
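# A direct translation of this loss for a batch of predictions (pure Python, no framework):

```python
def mse_loss(preds, targets):
    # mean of squared residuals over the n training pairs
    n = len(preds)
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / n

print(mse_loss([0.0, 1.0, 2.0], [0.0, 0.0, 0.0]))  # (0 + 1 + 4) / 3
```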
# ## Recovering normals
#
# The normal vector at a point (x, y) on the surface whose signed distance function our network predicts is expressed as:
#
# $$\vec{n} = (\frac{\partial S}{\partial x}, \frac{\partial S}{\partial y})^T$$
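# Before computing this with autograd, the gradient can be sanity-checked numerically with central differences; for the circle SDF the normal at a surface point should point radially outward (eps is an arbitrary step size):

```python
import math

# The circle SDF from earlier: center (1, 2), radius 3
def sdf_circle(x, y):
    return math.sqrt((x - 1.0) ** 2 + (y - 2.0) ** 2) - 3.0

def numeric_normal(f, x, y, eps=1e-6):
    # central differences approximate (dS/dx, dS/dy), then normalize
    nx = (f(x + eps, y) - f(x - eps, y)) / (2 * eps)
    ny = (f(x, y + eps) - f(x, y - eps)) / (2 * eps)
    length = math.hypot(nx, ny)
    return nx / length, ny / length

nx, ny = numeric_normal(sdf_circle, 4.0, 2.0)  # on the circle, right of center
print(nx, ny)  # ≈ (1.0, 0.0)
```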
# ## Pytorch
# Just poking at PyTorch to understand what `.backward()` does
x = torch.ones(2, 2, requires_grad=True)
print(x)
y = x + 2
print(y)
print(y.grad_fn)
z = y * y * 3
out = z.mean()
print(z, out)
out.backward()
print(x.grad)
| sandbox.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Assignment #01
# This week's assignment is about getting you more comfortable with linux and able to use python on your computer before doing more complicated things.
# ## Exercise #01-01: linux tutorial
# Follow the linux tutorial from {doc}`01-Linux` (on MyBinder or your laptop).
# ## Exercise #01-02a (optional during pandemics): install python on your university account
# Follow the instructions from {ref}`installation:uni` to install python on your linux account at the university.
# ## Exercise #01-02b: install python on your laptop
# Follow the instructions from {ref}`installation:pc` to install python on your laptop.
# ```{admonition} Online learning
# :class: note
# In previous years, I would make the installation in the University lab mandatory and optional on your laptop - this would be enough for the entire semester. This year is different though, and working in the computer lab is not as easy as before. For this week you can use MyBinder, but starting from next week I recommend having a working Python installation on your PC as well.
# ```
# ## Exercise #01-03: python script instead of bash
# Use linux on MyBinder for this exercise. Write a python script called ``my_cli_test.py``. The script can be as simple as you wish, but it should produce one printed output (a simple "Hello world!" would suffice!). Put this script in a new directory in your ``HOME``, called ``bin``. Using your knowledge from the Linux tutorial, make this script callable from everywhere (not just from the ``bin`` directory) with a call to ``$ my_cli_test.py``. If not on MyBinder, make this change "permanent" for your ``HOME`` environment (on MyBinder it is possible to do it as well, but it will be forgotten next time you open a session anyway).
| book/week_01/Assignment-01.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.5 64-bit (''base'': conda)'
# metadata:
# interpreter:
# hash: b7016158c24df4f3a8edbac2ec353508da107b1a683a7741fb92313752825eb0
# name: python3
# ---
# +
import pandas as pd
# Load the data
datos = pd.read_csv("./../Datasets/alturas.txt", delimiter="\t")
# Convert the heights (presumably in inches) to centimetres
datos = datos.apply(lambda x: x * 2.54)
# +
# Plot the data to see how they are related
import matplotlib.pyplot as plt
plt.scatter(datos.Father, datos.Son)
# -
# Basic normality check
plt.hist(datos)
# Show the correlation between the two columns
datos.corr()
# +
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
# Assign our input variable X for training, and the labels Y.
X = pd.DataFrame(datos['Father'])
Y = pd.DataFrame(datos['Son'])
# Split our data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.33, random_state=42)
# Create the Linear Regression model
regr = LinearRegression()
# Train our model
regr.fit(X_train, y_train)
# Make the predictions, which in the end form a line (in this 2D case)
y_pred_train = regr.predict(X_train)
# NOTE: these predictions are for the training set only
# Let's look at the coefficients obtained; in our case they give the slope of the line
print('Coefficients: \n', regr.coef_)
# This is the value where the line crosses the Y axis (at X=0)
print('Intercept (where the line crosses the Y axis): \n', regr.intercept_)
# Mean Squared Error
ECM_train = mean_squared_error(y_train, y_pred_train)
print("Mean squared error: %.2f" % ECM_train)
# Variance score; the best possible score is 1.0
R2_train = r2_score(y_train, y_pred_train)
print('R²: %.2f' % R2_train)
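# -
# As a sanity check on what `LinearRegression` fits: for a single feature, ordinary least squares has the closed form slope = cov(x, y) / var(x) and intercept = mean(y) - slope * mean(x). A minimal pure-Python version (illustrative, not the sklearn implementation):

```python
def ols_fit(xs, ys):
    # closed-form simple linear regression
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    slope = cov / var
    intercept = my - slope * mx
    return slope, intercept

print(ols_fit([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]))  # (2.0, 0.0)
```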
# +
# Plot our training data together with the fitted line
plt.scatter(X_train, y_train)
plt.plot(X_train, y_pred_train, color="red")
plt.show()
# +
# Let's see whether our model generalizes well by comparing the predictions on the test set
y_pred_test = regr.predict(X_test)
# Mean Squared Error
ECM_test = mean_squared_error(y_test, y_pred_test)
print("Training mean squared error: %.2f" % ECM_train)
print("Test mean squared error: %.2f" % ECM_test)
# Variance score; the best possible score is 1.0
R2_test = r2_score(y_test, y_pred_test)
print('Training R²: %.2f' % R2_train)
print('Test R²: %.2f' % R2_test)
| LR_1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
__author__ = "<NAME>, <NAME> and <NAME>"
__version__ = "CS224u, Final Project 2021 v1"
__updated__ = "Mar 23, 2021, 11pmCST"
# +
import os
import logging
from lxml import etree
import re
import pandas as pd
from bs4 import BeautifulSoup
from datetime import datetime as dt
from parse_token_utils import create_directory, parse_fileName_vals
from filing_parsers import *
today = str(dt.today()).split(' ')[0]
today
# -
# <div class="alert alert-info">
# <b>At some point</b><br> We will need to define which labels are '1's and which are '0's. The discrepancy between performance_earnings and performance_filing holds for about 10% of cases; this relates to our hypothesis, that the text we need should show a change in sentiment.
# +
## copied & adapted from data-engineering/parsers/filing_parsers.py (git)
forms = ['10-K']
def quick_parse(filepath):
    parser = etree.XMLParser(recover=True, huge_tree=True)
    tree = etree.parse(filepath, parser)
    notags = etree.tostring(tree, encoding='utf8', method='text')
    return notags
def parse_10q(filepath, cik, year):
    """
    Parses the 10-Q passed. Returns a dictionary:
    body: The body of the 10-Q
    :param filepath:
    :return:
    """
    from bs4 import BeautifulSoup
    results = {}
    with open(filepath) as fp:
        soup = BeautifulSoup(fp, "lxml")
        #results['TenKText'] = __extract_document_text(soup)
        results['headers'] = __parse_sec_header(soup, cik, year)
        results['documents'] = __parse_documents(soup)
    return results
def get_date(filingPath):
    """From FinBert"""
    accession_code = filingPath.split('/')[-2]
    ticker = filingPath.split('/')[-4]
    filing_type = filingPath.split('/')[-3]
    #print(accession_code, ticker, filing_type)
    code = accession_code + '.txt'
    with open(filingPath, 'r') as f:
        readlin = f.read()
    for i in readlin.split('\n'):
        # print(i)
        if code in i:
            date_extr = i.split(' : ')[1]
            return date_extr
def __parse_documents(soup):
    import time
    # Get all the documents
    result_documents = []
    documents = soup.find_all('document')
    for doc in documents:
        type_node = doc.find('type')
        if type_node:
            type_text = type_node.contents[0].strip()
            desc_node = type_node.find('description')
        else:
            type_text = 'NA'
            desc_node = None  # initialize so the check below cannot raise NameError
        seq_node = doc.find('sequence')
        if seq_node:
            seq_text = seq_node.contents[0].strip()
        else:
            seq_text = str(int(time.time()))[-6:]
        if desc_node:
            desc_text = desc_node.contents[0]
        else:
            desc_text = ''
        if type_text in forms:
            is_10_q = True
        else:
            is_10_q = False
        if type_text not in ["XML", "GRAPHIC", "EXCEL", "ZIP"]:
            logging.debug("Parsing document type: {}".format(type_text))
            result_documents.append(
                {'is_10_q': is_10_q,
                 'type': type_text,
                 'sequence': seq_text,
                 'description': desc_text,
                 'document': __extract_document_text(soup)}
            )
    return result_documents
def __parse_sec_header(soup, cik, year):
    sec_header = soup.find("sec-header")
    if not sec_header:
        sec_header = soup
    result = {
        'cik': cik,
        'year': year,
    }
    result['accession_number'] = __get_line_item(sec_header, 'ACCESSION NUMBER:')
    result['conformed_period_of_report'] = __get_line_item(sec_header, 'CONFORMED PERIOD OF REPORT:')
    result['filed_as_of_date'] = __get_line_item(sec_header, 'FILED AS OF DATE:')
    result['company_confirmed_name'] = __get_line_item(sec_header, 'COMPANY CONFORMED NAME:')
    result['central_index_key'] = __get_line_item(sec_header, 'CENTRAL INDEX KEY:')
    result['standard_industrial_classification'] = __get_line_item(sec_header, 'STANDARD INDUSTRIAL CLASSIFICATION:')
    result['state_of_incorporation'] = __get_line_item(sec_header, 'STATE OF INCORPORATION:')
    result['fiscal_year_end'] = __get_line_item(sec_header, 'FISCAL QUARTER END:')
    return result
def __get_line_item(sec_header, attr_name):
    find_results = re.findall(attr_name + '(.*?)\n', str(sec_header))
    if find_results:
        return find_results[0].strip()
    else:
        return None
def __extract_document_text(soup):
    if len(soup) == 1:
        TenKtext = soup.find('text').extract().text
        TenKtext = parser_custom(TenKtext)
        #tables = document.find_all('table')
        #for table in tables:
        #    table.decompose()
        return TenKtext
# -
# +
FULL_DATASET = True
PICKLES = '../pickle_data/'
DATA = 'data'
# GLOBAL START AND END:
START = '2016-01-01'
END = '2021-03-01'
doc_folder_name = 'item7_parsed_documents'
if FULL_DATASET:
    stocks_PATH = os.path.join(PICKLES, "performance_labeled.pickle")
    forms = ['10-K']
# -
stocks_PATH
stocks_df = pd.read_pickle(stocks_PATH)
stocks_df.head()
#parsed_sec_filings_path = f'../data/{today}_parsed_sec_filings_path/'
#create_directory(parsed_sec_filings_path)
parsed_sec_filings_path = '../data/2021-03-20_parsed_sec_filings_path/'
# +
# uncomment below - if we needed to re-download,
forms = ['10-K']
from bs4 import BeautifulSoup
#create_directory(os.path.join(sec_filings_path,doc_folder_name)) #if exists, will not create again
sec_filings_path = '../data/sec-edgar-filings/'
def run_parse(stocks_df):
    num_stocks = stocks_df.shape[0]
    c = 0
    for idx, row in stocks_df.iterrows():
        if row['form'] in forms:
            ticker_form_path = os.path.join(sec_filings_path, row['ticker'], row['form'][:4])
            ciks = os.listdir(ticker_form_path)
            #print(ticker_form_path)
            for cik in ciks:
                #print(cik)
                if not cik == '.DS_Store':
                    text_to_parse = os.path.join(ticker_form_path, cik, 'full-submission.txt')
                    if row['date_filed'] == dt.strptime(get_date(text_to_parse), '%Y%m%d'):
                        c += 1
                        print("{:0>2d} of {} {}".format(c, num_stocks, (30*'-')))
                        ticker, cik_10dig, filing_type, filing_qrtr, doc_reported_on, doc_year = parse_fileName_vals(cik, row, text_to_parse)
                        #input()
                        out_file_name = ticker+'_'+cik_10dig+'_'+filing_type+'_'+filing_qrtr+".txt"
                        out_file_path = os.path.join(parsed_sec_filings_path, out_file_name)
                        if os.path.exists(out_file_path):
                            continue
                        print(f'Parsing a {filing_type} for {ticker}, quarter {filing_qrtr}, dated {doc_reported_on}...')
                        #results = parse_10q(text_to_parse, cik, doc_year)
                        with open(text_to_parse) as fp:
                            soup = BeautifulSoup(fp, "lxml")
                        tenK = __extract_document_text(soup)
                        if tenK:
                            print(len(tenK))
                            #print(len(results))
                            with open(out_file_path, 'w') as f:
                                f.write(tenK)
                            print()
                            print(f'Parsing finished, output saved in:\n{out_file_path}')
                            print('__'*30)
                            print()
                        else:
                            print('Bad command or file name', text_to_parse)

run_parse(stocks_df)
# -
discards = []
def parser_custom(TenKtext):
    matches = re.compile(r'(item\s(7[\.\s]|8[\.\s])|'
                         'discussion\sand\sanalysis\sof\s(consolidated\sfinancial|financial)\scondition|'
                         '(consolidated\sfinancial|financial)\sstatements\sand\ssupplementary\sdata)', re.IGNORECASE)
    matches_array = pd.DataFrame([(match.group(), match.start()) for match in matches.finditer(TenKtext)])
    print(len(matches_array), matches_array.shape, matches_array.columns)
    # Set columns in the dataframe
    matches_array.columns = ['SearchTerm', 'Start']
    print('hi', len(matches_array), matches_array.shape, matches_array.columns)
    # Get the number of rows in the dataframe
    Rows = matches_array['SearchTerm'].count()
    # Create a new column in 'matches_array' called 'Selection' and add adjacent 'SearchTerm' (i and i+1 rows) text concatenated
    count = 0  # Counter to help with row location and iteration
    while count < (Rows-1):  # Can only iterate to the second last row
        matches_array.at[count, 'Selection'] = (matches_array.iloc[count, 0] + matches_array.iloc[count+1, 0]).lower()  # Convert to lower case
        count += 1
    # Set up 'Item 7/8 Search Pattern' regex patterns
    matches_item7 = re.compile(r'(item\s7\.discussion\s[a-z]*)')
    matches_item8 = re.compile(r'(item\s8\.(consolidated\sfinancial|financial)\s[a-z]*)')
    # Lists to store the locations of Item 7/8 Search Pattern matches
    Start_Loc = []
    End_Loc = []
    # Find and store the locations of Item 7/8 Search Pattern matches
    count = 0  # Set up counter
    while count < (Rows-1):  # Can only iterate to the second last row
        # Match Item 7 Search Pattern
        if re.match(matches_item7, matches_array.at[count, 'Selection']):
            # Column 1 = 'Start' column in 'matches_array'
            Start_Loc.append(matches_array.iloc[count, 1])  # Store in list => Item 7 will be the starting location
        # Match Item 8 Search Pattern
        if re.match(matches_item8, matches_array.at[count, 'Selection']):
            End_Loc.append(matches_array.iloc[count, 1])
        count += 1
    print(Start_Loc, End_Loc)
    input()
    if len(Start_Loc) == len(End_Loc) == 0:
        return False
    if len(Start_Loc) == len(End_Loc) == 1:
        print(Start_Loc, End_Loc)
        # Extract section of text and store in 'TenKItem7'
        TenKItem7 = TenKtext[Start_Loc[0]:End_Loc[0]]
        #print(len(TenKItem7))
    elif len(Start_Loc) != len(End_Loc):
        TenKItem7 = TenKtext[Start_Loc[0]:End_Loc[-1]]
    else:
        print(Start_Loc, End_Loc)
        # Extract section of text and store in 'TenKItem7'
        TenKItem7 = TenKtext[Start_Loc[1]:End_Loc[1]]
        #print(len(TenKItem7))
    # Clean newly extracted text
    TenKItem7 = TenKItem7.strip()  # Remove starting/ending white spaces
    TenKItem7 = TenKItem7.replace('\n', ' ')  # Replace \n (new line) with space
    TenKItem7 = TenKItem7.replace('\r', '')  # Remove \r (carriage returns, if you're on Windows)
    TenKItem7 = TenKItem7.replace('&nbsp;', ' ')  # Replace the HTML non-breaking-space entity with a space
    TenKItem7 = TenKItem7.replace('\xa0', ' ')  # Replace the non-breaking-space character with a space
    while '  ' in TenKItem7:
        TenKItem7 = TenKItem7.replace('  ', ' ')  # Collapse runs of spaces
    # Print first 500 characters of newly extracted text
    #print(TenKItem7[:500])
    return TenKItem7
# +
from bs4 import BeautifulSoup
badfile = '../data/sec-edgar-filings/AMAT/10-K/0000006951-18-000041/full-submission.txt'
badfile_html = '../data/sec-edgar-filings/AMAT/10-K/0000006951-18-000041/filing-details.html'
badfile = '../data/sec-edgar-filings/AMAT/10-K/0000006951-16-000068/full-submission.txt'
with open(badfile) as fp:
    #soup = BeautifulSoup(fp, "lxml-xml")
    soup = BeautifulSoup(fp, "html.parser")
print(type(soup), len(soup))
tenK = __extract_document_text(soup)
#tenK = filing_document.find('text').extract().text
#tenK = soup.find('text').extract().text
# with open(text_to_parse) as fp:
#     soup = BeautifulSoup(fp, "lxml")
if tenK:
    print(len(tenK))
    TenKtext = parser_custom(tenK)
else:
    print('nope')
TenKtext[:500]
# -
| notebooks/Step_1-0_Parser-10Qs.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import re
re.findall('abc','askdfj;akdfj;lajdf;laj;lakdsfabcasdfaabcdadf')
s = 'asdfjl;ajdf;la3534l2k3o;lkdagj;qi4touoq'
L = re.findall('[0-9]',s)
print(len(L))
print(L)
s = 'asdfjl;ajdf;la3534l2k3o;lkdagj;qi4touoq'
L = re.findall('[9610342587]',s)
print(len(L))
print(L)
s = 'asdfjl;ajdf;la353k4l2k3o;lkdagj;qi498u598touoq'
L = re.findall('[0-9][0-9][0-9][0-9]',s)
#print(L)
if len(L) > 0:
    print("Found")
else:
    print("Not Found")
documents = ['asdfj;laieorkdjf;aliejr;akjdf23k4j;lajds;l',
'asdfjoqweitulad;ai@<EMAIL>g;lajoetiuaodkgjier',
'asdkfjqoitlskdnfoqwiekhas;ioew=adgoie',
'askdfl_asdkfei_asdjkfla****askeasfff',
'{{{{{asdfjowei@@##askdfoie}}}}}']
regExp = '[0-9:"{}()@#&]'
for doc in documents:
    if len(re.findall(regExp, doc)) > 0:
        pass
    else:
        print(doc)
documents = ['asdfj;laieorkdjf;aliejr;akjdf23k4j;lajds;l',
'asdfjoqweitulad;<EMAIL>;lajoetiuaod<EMAIL>',
'asdkfjqoitlskdnfoqwiekhas;ioew=adgoie',
'askdfl_asdkfei_asdjkfla****askeasfff',
'{{{{{asdfjowei@@##askdfoie}}}}}']
regExp = '[^0-9:"{}()@#&]'
for doc in documents:
    if len(re.findall(regExp, doc)) == len(doc):
        print(doc)
print("'\'")
print("'\\'")
regExp = '[\\\]'
print(len(re.findall(regExp,'\n for new [line, ] \section and \document and \\\section \n')))
regExp = '[\\\]'
print(len(re.findall(regExp,r'\n for new [line, ] \section and \document and \\\section \n')))
regExp = '[\[\]\\\]'
print(len(re.findall(regExp,r'\n for new [line, ] \section and \document and \\\section \n')))
doc = r'\n for new [line, ] \section and \document and \\\section \n'
regExp_brackets = '[\[\]]'
brackets_count = len(re.findall(regExp_brackets,doc))
print(brackets_count)
regExp_slashes = '[\\\]'
slash_count = len(re.findall(regExp_slashes,doc))
print(slash_count)
print(brackets_count+slash_count)
doc = r'\n for new [line, ] \section and \document and \\\section \n'
regExp = '[\\\]section'
print(len(re.findall(regExp,doc)))
doc = ('asdfj;laieo2rkdjf;ali4ejr;akjdf23k4j;lajds;l'
'asdfjoqweitu lad;ai<EMAIL> dg;lajoetiuaodkgjier'
'asdkfjqoitlskdnfoqwiekhas;ioew=adgoie'
'askdfl_asdkfei_asdjkfla****askeasfff'
'{{{{{asdfjowei@@##askdfoie}}}}}')
regExp = '[\s\d_]'
rs = re.findall(regExp,doc)
print(rs)
print(len(rs))
doc = 'YahooYahoooYahoooooooYahoooooYahoasdfl;akj;lYahoaYahaoo'
regExp = 'Yahooo*'
rs = re.findall(regExp,doc)
print(rs)
doc = ('asdfj;35laieo2rkdjf;ali4ejr;akjdf23k4j;lajds;l'
'asdfjoqweitu lad;ai@<EMAIL> dg;lajoetiuaodkgjier'
'asdkfjqoitlskdnfoqwiekhas;ioew=adgoie'
'askdfl_asdkfei_asdjkfla****askeasfff'
'{{{{{asdfjowei@@##askdfoie}}}}}')
regExp = '[a-zA-Z_]\w*'
rs = re.findall(regExp,doc)
print(rs)
doc = 'YahooYahoooYahoooooooYahoooooYahoasdfl;akj;lYahoaYahaoo'
regExp = 'Yahoo+'
rs = re.findall(regExp,doc)
print(rs)
doc = 'YahooYahoooYaoooooooYahoooooYahoasdfl;akj;lYaooooaYahaoo'
regExp = 'Yah?oo+'
rs = re.findall(regExp,doc)
print(rs)
doc = ('asdfj;35laieo2rkdjf;ali4ejr;akjdf23k4j;lajds;l'
'asdfjoqweitu lad;ai<EMAIL> dg;lajoetiuaodkgjier'
'asdkfjqoitlskdnfoqwiekhas;ioew=adgoie'
'askdfl_asdkfei_asd567890jkfla****askeasfff'
'{{{{{asdfjowei@@##askdfoie}}}}}')
regExp = '\d{2,5}'
rs = re.findall(regExp,doc)
print(rs)
p = re.compile('a\d*b',re.IGNORECASE)
p.findall('a234122B23456Ab')
doc = ' \t \t \t how are you today ... asdfjla;joeito;angl;a4o3'
p = re.compile('\s*')
m = p.match(doc)
print(m.span(),m.start(),m.end(),len(m.group()))
print(doc[m.end():])
m.span() , m.start() , m.end()
doc = 'ad \t \t \t how are \t you today ... asdfjla;joeito;angl;a4o3'
p = re.compile('\s+')
m = p.search(doc)
print(m.span(),m.start(),m.end(),len(m.group()))
print(doc[m.end():])
doc = ('aSdfj;35lAieo2rkdjf;ali4ejr;akjdf23k4j;lajds;l'
'asdfjoqweitu lad;ai@weuta dg;lajoetiuaodkgjier'
'asdkfjqoitlskdnfoqwiekhas;ioew=adgoie'
'askdfl_asdkfei_asdjkfla****askeasfff'
'{{{{{asdfjowei@@##askdfoie}}}}}')
#regExp = '[a-zA-Z_]\w*'
#p = re.compile(regExp)
regExp = '[a-z_]\w*'
p = re.compile(regExp,re.IGNORECASE)
iterator = p.finditer(doc)
for m in iterator:
    print(m.span(), m.group())
doc = r'\n for new [line, ] \section and \document and \\\section \n'
regExp_brackets = '[\[\]]'
#brackets_count = len(re.findall(regExp_brackets,doc))
#print(brackets_count)
regExp_slashes = '[\\\]'
#slash_count = len(re.findall(regExp_slashes,doc))
print(len(re.findall(regExp_brackets+'|'+regExp_slashes,doc)))
doc = 'find in thethethethetheasdfthethe thethethefas'
p = re.compile('(the)+')
iterator = p.finditer(doc)
for m in iterator:
    print(m.span(), m.group())
p = re.compile('\W+')
p.split('This test, --++**\\ is short and sweet ,,... 45u4__&&##buft')
p = re.compile('(blue\s|white\s|red\s)+',re.IGNORECASE)
p.sub('color ','blue REd shoes and White BLue red white red socks')
doc = ' abd asd kluoeiur liueou lkuioe lieoj \t sd '
p1 = re.compile('\s+')
p2 = re.compile('^ | $')
p2.sub('',p1.sub(' ',doc))
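# The same whitespace normalization can be done without regular expressions: `str.split()` with no argument splits on any run of whitespace, and joining the pieces with a single space also drops the leading and trailing whitespace:

```python
doc = ' abd  asd   kluoeiur liueou  lkuioe lieoj \t sd '
# split on any whitespace run, then rejoin with single spaces
print(' '.join(doc.split()))
```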
| 01-Code/2a_RegularExpressions.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Good practices
#
# In python as in other languages, there are conventions on how to write code. You don't have to respect all of them, but some of them have become unanimously used. Let's see together a summary of the most important rules.
#
# ## PEP8 in summary !
#
# PEP 8 (for Python Extension Proposal) is a set of rules that allows to homogenize the code and to apply good practices. The best known example is the war between developers over whether to indent code with spaces or tabs. The PEP 8 is the winner: it is the spaces that win, 4 in number. End of the debate.
#
# This reflects Python's philosophy of being a language "that can be read" rather than just "a language that can be written". Writing code that a machine understands is rather easy; writing code understandable by the greatest number of coders (of any level) is a different story...
#
# PEP 8 provides rules and conventions that make code easier to read, and thus make the coder less stressed and more productive. The advantage of PEP 8 is that it makes the code more attractive; or perhaps introducing rules simply makes code that ignores PEP 8 look ugly. You can often tell from reading a script that the author hasn't followed these basic rules. That doesn't necessarily make them a bad coder, but...
#
# ### Indentation
# The indentation in your code must be 4 characters. The more is too much, the less is not enough. On the coffeescript side for example it's 2, it's ugly. Shame on them. It had to be written somewhere, it's done.
# 👌 Good :
def indent():
    my_var = "Use 4 characters for indent"
# ❌ Bad :
def indent():
  my_var = "Use 4 characters for indent"
# ### Code layout
# 79 characters per line, no more.
#
# Writing in python is like writing a kind of poem in alexandrine. There is the content but also the form. If you're a perfectionist, review your code, even for a single overflow; it won't change anything in the execution of your code, it will only waste your time,
# but passion is passion, rules are rules.
# 👌 Good :
def my_function(context, width, height, size=10000,
                color='black', emphasis=None, highlight=0):
    pass
# ❌ Bad :
def my_function(context, width, height, size=10000, color='black', emphasis=None, highlight=0):
    pass
# ### Import
# Imports are declared at the beginning of the script. It seems obvious, but it's always worth remembering. You can also add imports inside a function (if you can't do otherwise, or for exceptional reasons), but only after the docstring. Use one line per import, and group the imports: first the modules from Python's standard library, then third-party libraries like pandas, matplotlib, etc., then your own modules. Separate each group with a blank line.
# 👌 Good :
# ```python
# import os # STD lib imports first
# import sys # alphabetical
#
# import some_third_party_lib # 3rd party stuff next
# import some_third_party_other_lib # alphabetical
#
# import local_stuff # local stuff last
# import more_local_stuff
# ```
# ❌ Bad :
# ```python
# import local_stuff, more_local_stuff, dont_import_two, modules_in_one_line # IMPORTANT!
# import os, sys
# import some_third_party_lib
# ```
# ### Spaces
# Operators must be surrounded by spaces.
#
# The following should be done :
# 👌 Good :
# ```python
# name = 'Batman'
# color == 'black'
# 1 + 2
# ```
# ❌ Bad :
# ```python
# name='name'
# color=='black'
# 1+2
# ```
# There are two notable exceptions.
#
# The first is that the mathematical operators with the highest priority are grouped together to distinguish groups :
#
# 👌 Good :
#
# ```python
# a = x*2 - 1
# b = x*x + y*y
# c = (a+b) * (a-b)
#
# ```
# The second is the = sign in argument declaration and parameter passing :
#
# 👌 Good :
#
# ```python
# def my_function(arg='value'):
# pass
#
# result = my_fonction(arg='value')
#
# ```
#
# There is no space inside parentheses, brackets or braces.
#
# 👌 Good :
#
# ```python
# 2 * (3 + 4)
#
# def fonction(arg='valeur'):
#
# {str(x): x for x in range(10)}
#
# val = dic['key']
# ```
#
# ❌ Bad :
#
# ```python
# 2 * ( 3 + 4 )
#
# def fonction( arg= 'valeur' ):
#
# { str( x ): x for x in range( 10 )}
#
# val = dic ['key']
# ```
#
#
#
# You don't put a space before colons and commas, but you do afterwards.
#
# 👌 Good :
#
# ```python
# def my_function(arg1='valeur', arg2=None):
#
# dic = {'a': 1}
# ```
# ❌ Bad :
#
# ```python
# def my_function(arg='value' , arg2=None) :
#
# dico = {'a' : 1}
#
# ```
# ### Docstrings
# Docstrings can be single-line or multi-line; triple quotes are used in both cases. A single-line docstring fits on one line. Docstrings are used to describe a module, a class, or a function.
# Example:
# ```python
# def exam():
# """This is single line docstring"""
#
# """This is
# a
# multiline comment"""
# ```
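# Unlike an ordinary comment, a docstring is kept at runtime and is exposed through the function's `__doc__` attribute:

```python
def exam():
    """This is single line docstring"""
    return None

# the docstring is attached to the function object itself
print(exam.__doc__)  # This is single line docstring
```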
# ### Naming Conventions
# There are a few naming conventions that should be followed to make a program less complex and more readable. Admittedly, the naming conventions in Python are a bit of a mess, but here are a few that are easy to follow.
# The overriding principle is that names visible to the user as public parts of the API should follow conventions that reflect usage rather than implementation.
# Here are few other naming conventions:
#
# **variable, function and module** : Use [snake_case.](https://en.wikipedia.org/wiki/Snake_case)
#
# 👌 Good :
#
# ```python
# my_variable = 'Hello'
#
# def my_function(element):
# pass
#
# ```
# ❌ Bad :
#
# ```python
# myVariable = 'Hello'
# MyVariable = 'Hello'
#
# def myFunction(element):
# pass
#
# ```
#
# **class** : Use [PascalCase](https://techterms.com/definition/pascalcase)
#
# 👌 Good :
#
# ```python
# class MyClass:
#
# ```
# ❌ Bad :
# ```python
# class my_class:
#
# ```
#
# 
# ### In summary
# +
# #! /usr/bin/env python
# -*- coding: utf-8 -*-
"""This module's docstring summary line.
This is a multi-line docstring. Paragraphs are separated with blank lines.
Lines conform to 79-column limit.
Module and packages names should be short, lower_case_with_underscores.
Notice that this in not PEP8-cheatsheet.py
Seriously, use flake8. Atom.io with https://atom.io/packages/linter-flake8
is awesome!
See http://www.python.org/dev/peps/pep-0008/ for more PEP-8 details
"""
import os # STD lib imports first
import sys # alphabetical
import some_third_party_lib # 3rd party stuff next
import some_third_party_other_lib # alphabetical
import local_stuff # local stuff last
import more_local_stuff
import dont_import_two, modules_in_one_line # IMPORTANT!
from pyflakes_cannot_handle import * # and there are other reasons it should be avoided # noqa
# Using # noqa in the line above avoids flake8 warnings about line length!
_a_global_var = 2 # so it won't get imported by 'from foo import *'
_b_global_var = 3
A_CONSTANT = 'ugh.'
# 2 empty lines between top-level funcs + classes
def naming_convention():
    """Write docstrings for ALL public classes, funcs and methods.
    Functions use snake_case.
    """
    if x == 4:  # x is blue <== USEFUL 1-liner comment (2 spaces before #)
        x, y = y, x  # inverse x and y <== USELESS COMMENT (1 space after #)
    c = (a + b) * (a - b)  # operator spacing should improve readability.
    dict['key'] = dict[0] = {'x': 2, 'cat': 'not a dog'}
class NamingConvention(object):
    """First line of a docstring is short and next to the quotes.
    Class and exception names are CapWords.
    Closing quotes are on their own line
    """
    a = 2
    b = 4
    _internal_variable = 3
    class_ = 'foo'  # trailing underscore to avoid conflict with builtin
    # this will trigger name mangling to further discourage use from outside
    # this is also very useful if you intend your class to be subclassed, and
    # the children might also use the same var name for something else; e.g.
    # for simple variables like 'a' above. Name mangling will ensure that
    # *your* a and the children's a will not collide.
    __internal_var = 4
    # NEVER use double leading and trailing underscores for your own names
    __nooooooodontdoit__ = 0
    # don't call anything 'l', 'O' or 'I' (because some fonts make them hard to distinguish):
    l = 1
    O = 2
    I = 3
    # some examples of how to wrap code to conform to 79-columns limit:
    def __init__(self, width, height,
                 color='black', emphasis=None, highlight=0):
        if width == 0 and height == 0 and \
                color == 'red' and emphasis == 'strong' or \
                highlight > 100:
            raise ValueError('sorry, you lose')
        if width == 0 and height == 0 and (color == 'red' or
                                           emphasis is None):
            raise ValueError("I don't think so -- values are %s, %s" %
                             (width, height))
        Blob.__init__(self, width, height,
                      color, emphasis, highlight)
    # empty lines within method to enhance readability; no set rule
    short_foo_dict = {'loooooooooooooooooooong_element_name': 'cat',
                      'other_element': 'dog'}
    long_foo_dict_with_many_elements = {
        'foo': 'cat',
        'bar': 'dog'
    }
    # 1 empty line between in-class def'ns
    def foo_method(self, x, y=None):
        """Method and function names are lower_case_with_underscores.
        Always use self as first arg.
        """
        pass
    @classmethod
    def bar(cls):
        """Use cls!"""
        pass
# a 79-char ruler:
# 34567891123456789212345678931234567894123456789512345678961234567897123456789
"""
Common naming convention names:
snake_case
MACRO_CASE
camelCase
CapWords
"""
# Newline at end of file
# -
# ## Bonus
# flake8 is an excellent linter that checks your code style. It is available as a command-line tool or as a plugin for most editors.
#
# Likewise, mccabe checks the cyclomatic complexity of your code and gives you a score that tells you when it is getting out of hand. It is also available as a flake8 plugin and can be activated via an option.
#
# Tox lets you orchestrate all of this, in addition to your unit tests. I'll write an article on it one of these days.
#
# If you see comments like `# noqa`, `# xxx: ignore`, or `# xxx: disable=YYYY`, these tell those tools to disregard the lines they annotate.
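For instance, here is a minimal sketch of what such suppression comments look like in practice (`F401` is flake8's "imported but unused" code; `# type: ignore` is mypy's syntax):

```python
import os  # noqa: F401  - flake8 would otherwise flag this unused import


def parse_port(raw):  # type: ignore  - mypy skips type-checking this line
    """Convert a raw string such as "8080" to an integer port number."""
    return int(raw)


print(parse_port("8080"))
```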
#
# Because remember, these rules are there to help you. If at any time they cease to be useful, you have every right to ignore them.
#
# But these shared rules are what make the Python ecosystem exceptional. They make teamwork, sharing, and productivity much easier. Once you get used to them, working under other conditions will feel like an unnecessary burden.
#
#
# ## Resources
#
# * https://docs.python-guide.org/writing/style/
# * https://pep8.readthedocs.io/en/release-1.7.x/intro.html
| Content/1.python/2.python_advanced/09.Good_practices/good_practices.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
# + tags=["context"] editable=false dc={"key": "3"} run_control={"frozen": true} deletable=false
# ## 1. Introduction
# <p>How do musicians choose the chords they use in their songs? Do guitarists, pianists, and singers gravitate towards different kinds of harmony?</p>
# <p>We can uncover trends in the kinds of chord progressions used by popular artists by analyzing the harmonic data provided in the <a href="http://ddmal.music.mcgill.ca/research/billboard">McGill Billboard Dataset</a>. This dataset includes professionally tagged chords for several hundred pop/rock songs representative of singles that made the Billboard Hot 100 list between 1958 and 1991. Using the data-wrangling tools available in the <code>dplyr</code> package, and the visualization tools available in the <code>ggplot2</code> package, we can explore the most common chords and chord progressions in these songs, and contrast the harmonies of some guitar-led and piano-led artists to see where the "affordances" of those instruments may affect the chord choices artists make.</p>
# + tags=["sample_code"] dc={"key": "3"}
# Loading the tidyverse meta-package
library(tidyverse)
# Reading in the McGill Billboard chord data
bb <- read_csv("datasets/bb_chords.csv")
# Taking a look at the first rows in bb
head(bb)
# + tags=["context"] editable=false dc={"key": "10"} run_control={"frozen": true} deletable=false
# ## 2. The most common chords
# ## 2. The most common chords
# <p>As seen in the previous task, this is a <em>tidy</em> dataset: each row represents a single observation, and each column a particular variable or attribute of that observation. Note that the metadata for each song (title, artist, year) is repeated for each chord -- like "I Don't Mind" by <NAME>, 1961 -- while the unique attributes of each chord (chord symbol, chord quality, and analytical designations like integer and Roman-numeral notation) are included once for each chord change.</p>
# <p>A key element of the style of any popular musical artist is the kind of chords they use in their songs. But not all chords are created equal! In addition to differences in how they sound, some chords are simply easier to play than others. On top of that, some chords are easier to play on one instrument than they are on another. And while master musicians can play a wide variety of chords and progressions with ease, it's not a stretch to think that even the best musicians may choose more "idiomatic" chords and progressions for their instrument.</p>
# <p>To start to explore that, let's look at the most common chords in the McGill Billboard Dataset.</p>
# + tags=["sample_code"] dc={"key": "10"}
# Counting the most common chords
bb_count <- bb %>%
count(chord, sort = TRUE)
# Displaying the top 20 chords
head(bb_count, 20)
# + tags=["context"] editable=false dc={"key": "17"} run_control={"frozen": true} deletable=false
# ## 3. Visualizing the most common chords
# <p>Of course, it's easier to get a feel for just how common some of these chords are if we graph them and show the percentage of the total chord count represented by each chord.
# Musicians may notice right away that the most common chords in this corpus are chords that are easy to play on both the guitar and the piano: C, G, A, and D major — and to an extent, F and E major. (They also belong to keys, or scales, that are easy to play on most instruments, so they fit well with melodies and solos, as well.) After that, there is a steep drop off in the frequency with which individual chords appear. </p>
# <p>To illustrate this, here is a short video demonstrating the relative ease (and difficulty) of some of the most common (and not-so-common) chords in the McGill Billboard dataset.
# <br><br>
# <a href="https://player.vimeo.com/video/251381886" target="blank_"><img style="max-width: 500px;" src="https://s3.amazonaws.com/assets.datacamp.com/production/project_78/img/smaller_video_screenshot.jpeg"></a></p>
# + tags=["sample_code"] dc={"key": "17"}
# Creating a bar plot from `bb_count`
bb_count %>%
slice(1:20) %>%
mutate(share = n/sum(n),
chord = reorder(chord, share)) %>%
ggplot(aes(x=chord, y=share, color = chord, fill = chord)) +
geom_bar(stat="identity") +
coord_flip() +
ylab("SHARE") +
xlab("CHORD") +
theme(legend.position = 'none')
# + tags=["context"] editable=false dc={"key": "24"} run_control={"frozen": true} deletable=false
# ## 4. Chord "bigrams"
# <p>Just as some chords are more common and more idiomatic than others, not all chord <em>progressions</em> are created equal. To look for common patterns in the structuring of chord progressions, we can use many of the same modes of analysis used in text-mining to analyze phrases. A chord change is simply a <em>bigram</em> — a two-"word" phrase — composed of a starting chord and a following chord. Here are the most common two-chord "phrases" in the McGill Billboard dataset.
# To help you get your ear around some of these common progressions, here's a short audio clip containing some of the most common chord bigrams.
# <br><br></p>
# <audio controls src="http://assets.datacamp.com/production/project_79/img/bigrams.mp3">
# Your browser does not support the audio tag.
# </audio>
# + tags=["sample_code"] dc={"key": "24"}
# Wrangling and counting bigrams
bb_bigram_count <- bb %>%
mutate(next_chord = lead(chord),
next_title = lead(title),
bigram = str_c(chord, next_chord, sep= " ")) %>%
filter(title==next_title) %>%
count(bigram, sort = TRUE)
# Displaying the first 20 rows of bb_bigram_count
head(bb_bigram_count, 20)
# + tags=["context"] editable=false dc={"key": "31"} run_control={"frozen": true} deletable=false
# ## 5. Visualizing the most common chord progressions
# <p>We can get a better sense of just how popular some of these chord progressions are if we plot them on a bar graph. Note how the most common chord change, G major to D major, occurs more than twice as often as even some of the other top 20 chord bigrams.</p>
# + tags=["sample_code"] dc={"key": "31"}
# Creating a column plot from `bb_bigram_count`
bb_bigram_count %>%
slice(1:20) %>%
mutate(share = n/sum(n),
bigram = reorder(bigram, share)) %>%
ggplot(aes(x=bigram, y=share, fill = bigram)) +
geom_col() +
coord_flip() +
ylab("SHARE") +
xlab("BIGRAM") +
theme(legend.position = 'none')
# + tags=["context"] editable=false dc={"key": "38"} run_control={"frozen": true} deletable=false
# ## 6. Finding the most common artists
# <p>As noted above, the most common chords (and chord bigrams) are those that are easy to play on both the guitar and the piano. If the degree to which these chords are idiomatic on guitar or piano (or both) <em>determines</em> how common they are, we would expect to find the more idiomatic guitar chords (C, G, D, A, and E major) to be more common in guitar-driven songs, but we would expect the more idiomatic piano chords (C, F, G, D, and B-flat major) to be more common in piano-driven songs. (Note that there is some overlap between these two instruments.)</p>
# <p>The McGill Billboard dataset does not come with songs tagged as "piano-driven" or "guitar-driven," so to test this hypothesis, we'll have to do that manually. Rather than make this determination for every song in the corpus, let's focus on just a few to see if the hypothesis has some validity. If so, then we can think about tagging more artists in the corpus and testing the hypothesis more exhaustively.</p>
# <p>Here are the 30 artists with the most songs in the corpus. From this list, we'll extract a few artists who are obviously heavy on guitar or piano to compare.</p>
# + tags=["sample_code"] dc={"key": "38"}
# Finding and displaying the 30 artists with the most songs in the corpus
bb_30_artists <- bb %>%
select(artist, title) %>%
unique() %>%
count(artist, sort = TRUE)
bb_30_artists %>%
slice(1:30)
# + tags=["context"] editable=false dc={"key": "45"} run_control={"frozen": true} deletable=false
# ## 7. Tagging the corpus
# <p>There are relatively few artists in this list whose music is demonstrably "piano-driven," but we can identify a few that generally emphasize keyboards over guitar: Abba, <NAME>, <NAME>, and <NAME> — totaling 17 songs in the corpus. There are many guitar-centered artists in this list, so for our test, we'll focus on three well-known, guitar-heavy artists with a similar number of songs in the corpus: The Rolling Stones, The Beatles, and <NAME> (18 songs).</p>
# <p>Once we've subset the corpus to only songs by these seven artists and applied the "piano" and "guitar" tags, we can compare the chord content of piano-driven and guitar-driven songs.</p>
# + tags=["sample_code"] dc={"key": "45"}
tags <- tibble(
artist = c('Abba', '<NAME>', '<NAME>', '<NAME>', 'The Rolling Stones', 'The Beatles', '<NAME>'),
instrument = c('piano', 'piano', 'piano', 'piano', 'guitar', 'guitar', 'guitar'))
# Creating a new dataframe `bb_tagged` that includes a new column `instrument` from `tags`
bb_tagged <- bb %>%
inner_join(tags, by="artist")
# Displaying the new dataframe
bb_tagged
# + tags=["context"] editable=false dc={"key": "52"} run_control={"frozen": true} deletable=false
# ## 8. Comparing chords in piano-driven and guitar-driven songs
# <p>Let's take a look at any difference in how common chords are in these two song groups. To clean things up, we'll just focus on the 20 chords most common in the McGill Billboard dataset overall.</p>
# <p>While we want to be careful about drawing any conclusions from such a small set of songs, we can see that the chords easiest to play on the guitar <em>do</em> dominate the guitar-driven songs, especially G, D, E, and C major, as well as A major and minor. Similarly, "flat" chords (B-flat, E-flat, A-flat major) occur frequently in piano-driven songs, though they are nearly absent from the guitar-driven songs. In fact, the first and fourth most frequent piano chords are "flat" chords that occur rarely, if at all, in the guitar songs.</p>
# <p>So with all the appropriate caveats, it seems like the instrument-based-harmony hypothesis does have some merit and is worth further examination.</p>
# + tags=["sample_code"] dc={"key": "52"}
# The top 20 most common chords
top_20 <- bb_count$chord[1:20]
# Comparing the frequency of the 20 most common chords in piano- and guitar-driven songs
bb_tagged %>%
filter(chord %in% top_20) %>%
group_by(instrument) %>%
count(chord, sort=TRUE) %>%
ggplot(aes(x=chord, y=n)) + geom_col() +
coord_flip() +
xlab("CHORD") +
ylab("Number of times") +
facet_wrap(.~instrument)
# + tags=["context"] editable=false dc={"key": "59"} run_control={"frozen": true} deletable=false
# ## 9. Comparing chord bigrams in piano-driven and guitar-driven songs
# <p>Since chord occurrence and chord bigram occurrence are naturally strongly tied to each other, it would not be a reach to expect that a difference in chord frequency would be reflected in a difference in chord bigram frequency. Indeed that is what we find.</p>
# + tags=["sample_code"] dc={"key": "59"}
# The top 20 most common bigrams
top_20_bigram <- bb_bigram_count$bigram[1:20]
# Creating a faceted plot comparing guitar- and piano-driven songs for bigram frequency
bb_tagged %>%
mutate(next_chord = lead(chord),
next_title = lead(title),
bigram = str_c(chord, next_chord,sep=" ")) %>%
filter(title == next_title) %>%
  filter(bigram %in% top_20_bigram) %>%
  group_by(instrument) %>%
  count(bigram, sort = TRUE) %>%
  ggplot(aes(x = bigram, y = n)) + geom_col() +
  coord_flip() +
  xlab("BIGRAM") +
  ylab("Number of times") +
facet_wrap(.~instrument)
# + tags=["context"] editable=false dc={"key": "66"} run_control={"frozen": true} deletable=false
# ## 10. Conclusion
# <p>We set out asking if the degree to which a chord is "idiomatic" on an instrument affects how frequently it is used by a songwriter. It seems that is indeed the case. In a large representative sample of pop/rock songs from the historical Billboard charts, the chords most often learned first by guitarists and pianists are the most common. In fact, chords commonly deemed <em>easy</em> or <em>beginner-friendly</em> on <strong>both</strong> piano and guitar are far and away the most common in the corpus.</p>
# <p>We also examined a subset of 35 songs from seven piano- and guitar-heavy artists and found that guitarists and pianists tend to use different sets of chords for their songs. This was an extremely small (and likely not representative) sample, so we can do nothing more than hypothesize that this trend might carry over throughout the larger dataset. But it seems from this exploration that it's worth a closer look.</p>
# <p>There are still more questions to explore with this dataset. What about band-driven genres like classic R&B and funk, where artists like <NAME> and Chicago build chords from a large number of instruments each playing a single note? What about "progressive" bands like Yes and Genesis, where "easy" and "idiomatic" may be less of a concern during the songwriting process? And what if we compared this dataset to a collection of chords from classical songs, jazz charts, folk songs, liturgical songs?</p>
# <p>There's only one way to find out!</p>
# + tags=["sample_code"] dc={"key": "66"}
# Set to TRUE or FALSE to reflect your answer.
hypothesis_valid <- ....
# Set to TRUE or FALSE to reflect your answer.
more_data_needed <- ....
| notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
mucho_texto="vizcarra es incapaz"
print(mucho_texto[:])
print(mucho_texto[0:])
print(mucho_texto[0:8]+mucho_texto[-8]+mucho_texto[14:])
print(mucho_texto[0:8]+mucho_texto[8:12]+mucho_texto[14:])
# -
lista_1=[1,2,3,4,5,6,7,8,9]
lista_1
lista1=["miguel","alexis","gustavo"]
lista2=["Aaron","gandi","julio"]
lista3=["estudiante","egresado"]
lista1+lista2
lista1[0]="Aaron"
lista1
lista2[0]="miguel"
lista2
lista1.append("cerzoc")
lista1
lista2.append("meche")
lista2
lista1[1:3]=["coz","necio"]
lista1
lista1.insert(1,"alexis")
lista1
lista1.remove("coz")
lista1
lista1[2:]=[]
lista1
lista_total=lista1+lista2+lista3
lista_total
lista_total.index("gandi")
"meche" in lista_total
"coz kk" in lista_total
len("meche")
len("estudiante")
lista5=[lista2,lista3]
lista5
lista5[0][3]
lista5.index(["estudiante","egresado"])
# ### Up to here I have reviewed all of chapter 3
# ### We will continue with chapter 4.1, part 1
variable = input("enter the word: ")
type(variable)
variable2 = int(input("enter the desired number: "))
type(variable2)
variable3 = float(input("enter a decimal: "))
type(variable3)
# +
n = int(input("enter a number to analyze: "))
if n < 0:
    print("this number is negative")
elif n == 0:
    print("this number is zero")
else:
    print("this number is positive")
# -
lista=["oliver","jessica","junior","indira","mercedes"]
print("oliver" in lista)
print("mercedes" in lista)
print("coz" in lista)
palabra="amesquita"
print("a" in palabra)
# ### Chapter 4, part 2
# +
palabra="tres tristes tigres comen trigo"
for i in palabra:
print(i)
# +
lista_1=["perro","gato","vaca","murcielago"]
for i in lista_1:
print(i,len(i))
# -
for n in [3,4,5,6]:
print("bob es tonto",n)
for i in "mimamamemima":
print("mono", end="-")
email = input("enter an email address: ")
has_at = False
for m in email:
    if m == "@":  # scan the string for an @ sign
        has_at = True
print(has_at)
if has_at:
    print("valid")
else:
    print("invalid")
n_lista = []
num_e = int(input("number of elements: "))
i = 1
while i <= num_e:
    j = input("enter a value: ")
    n_lista.append(j)
    i = i + 1
print(n_lista)
n_lista1 = []
num = int(input("enter the number of animals: "))
i = 1
while i <= num:  # <= so that exactly num animals are collected
    z = input("enter an animal: ")
    n_lista1.append(z)
    i = i + 1
print(n_lista1)
dic={"mono":"negro","chikito":"blanco","boliche":"naranja"}
print(dic)
print(dic["mono"],dic["boliche"])
dic1={1:True,2:False,3:True}
print(dic1[1])
dic2={}
dic2["mono"]="negro"
dic2["boliche"]="naranja"
print(dic2)
del dic2["mono"]
print(dic2)
items=dic.items()
print(items)
for i in items:
print(i)
for i,j in items:
print(i)
for i,j in items:
print(j)
for i,j in items:
print(i,j)
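As a small aside (not part of the original review), two related dictionary tools worth knowing are `.get`, which returns a default value instead of raising `KeyError` for a missing key, and the `.keys()`/`.values()` views:

```python
colors = {"mono": "negro", "chikito": "blanco", "boliche": "naranja"}

# .get returns a default instead of raising KeyError for a missing key
print(colors.get("mono"))        # negro
print(colors.get("perro", "?"))  # ?

# .keys() and .values() return iterable views over the dictionary
print(list(colors.keys()))
print(sorted(colors.values()))
```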
| IntroToPython/repaso integral.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.5 64-bit (''hlu'': conda)'
# language: python
# name: python3
# ---
# # Site-Level Facilities Layout Using Ant Colony Optimization
# > Implementing the Ant Colony Optimization (ACO) algorithm from scratch in MATLAB. Solving the site-level facilities layout problem. Creating functions for pretty-plotting the pheromone matrix/weights and visualizing the solutions.
#
# - toc: true
# - author: <NAME>
# - badges: true
# - comments: true
# - image: images/ipynb/aco_site_layout_problem.png
# - hide: false
# - search_exclude: false
# - categories: [notebook, code implementation]
# - permalink: /blog/:title/
#
# <br>
#
# > Note: The complete MATLAB source code for this implementation can be found at *[this repository](https://github.com/Outsiders17711/Site-Layout-Ant-Colony-Optimization)*. This post and the linked repository reference the following publication for the design variables and constraints: <br><br> **<NAME> & <NAME> - [Site-Level Facilities Layout Using Genetic Algorithms](https://ascelibrary.org/doi/abs/10.1061/%28ASCE%290887-3801%281998%2912%3A4%28227%29)** <br><br> The file *Site_Layout_Ant_Colony_Optimization ([PDF](https://github.com/Outsiders17711/Site-Layout-Ant-Colony-Optimization/blob/main/Site_Layout_Ant_Colony_Optimization.pdf) [HTML](https://github.com/Outsiders17711/Site-Layout-Ant-Colony-Optimization/blob/main/Site_Layout_Ant_Colony_Optimization.html))* in the linked repository can serve as a TL;DR to this post.
#
# ---
# ## Background
#
# > Tip: You can skip over the **Background** section. Jump to the **[Code Implementation](#Code-Implementation:-Ant-Colony-Optimization)**.
#
# <br>
# ### Site-Level Facilities Layout
#
# [Systematic Layout Planning (SLP)](https://en.wikipedia.org/wiki/Systematic_layout_planning) - is a tool used to arrange a workplace in a plant by locating areas with high frequency and logical relationships close to each other. The process permits the quickest material flow in processing the product at the lowest cost and least amount of handling.
#
# In construction, SLP is commonly referred to as Site-Level Facilities Layout. The objective of this activity is to allocate appropriate locations to temporary site-level facilities such as warehouses, job offices, various workshops and batch plants.
#
# <a><img src="https://www.allaboutlean.com/wp-content/uploads/2018/11/Construction-Site-Plan.png" title="Credit: https://www.allaboutlean.com/lean-construction/construction-site-plan/" alt="Construction Site Layout" style="max-height:300px;"></a>
#
# <p style="font-size:small; text-align:center;"><i>Figure: Construction Site Layout</i></p>
#
# The layout of these facilities has an important impact on the production time and cost, especially for large projects. The optimal placement of facilities on a site helps to minimize transportation, minimize cost, minimize travel time, and enhance safety.
#
# > In the [reference paper](https://ascelibrary.org/doi/abs/10.1061/%28ASCE%290887-3801%281998%2912%3A4%28227%29), a construction site-level facility layout problem was described as allocating a set of predetermined facilities into a set of predetermined places, while satisfying layout constraints and requirements.
#
# ### Ant Colony Optimization
#
# Ant Colony Optimization (ACO) is a probabilistic technique introduced in the early 1990s for solving computational problems which can be reduced to finding good paths through graphs. The inspiring source of ant colony optimization is the foraging behavior of real ant colonies. This behavior is exploited in artificial ant colonies for the search of approximate solutions to discrete optimization problems, continuous optimization problems, and important problems in telecommunications, such as routing and load balancing *([source](https://www.sciencedirect.com/science/article/abs/pii/S1571064505000333))*.
#
# <a><img src="../images/ipynb/aco_shortpath_wikipedia.png" alt="Find The Shortest Path With Aco" title="Credit: https://en.wikipedia.org/wiki/Ant_colony_optimization_algorithms" onerror="this.onerror=null;this.src='{{ site.baseurl }}/images/ipynb/aco_shortpath_wikipedia.png';" style="max-height:300px;"/></a>
#
# <p style="font-size:small; text-align:center;"><i>Figure: Find The Shortest Path With ACO</i></p>
# Artificial ants locate optimal solutions by moving through a search space representing all possible solutions. Real ants lay down pheromones directing each other to resources while exploring their environment. The simulated ants similarly record their positions and the quality of their solutions, so that in later simulation iterations more ants locate better solutions *([source](https://en.wikipedia.org/wiki/Ant_colony_optimization_algorithms))*.
#
# ### Ant Colony Optimization Algorithm
#
# > Important: This section (and images) is a modified excerpt of *Chapter 13.5 - Modern Methods of Optimization* of **<NAME> - [Engineering Optimization: Theory and Practice (2020)](https://www.wiley.com/en-us/Engineering+Optimization%3A+Theory+and+Practice%2C+5th+Edition-p-9781119454793)**. This is an excellent reference textbook which *"...presents the techniques and applications of engineering optimization in a comprehensive manner."*
# The ACO algorithm can be explained by representing the optimization problem as a multilayered graph, where the number of layers is equal to the number of design variables and the number of nodes in a particular layer is equal to the number of discrete values permitted for the corresponding design variable. Thus, each node is associated with a permissible discrete value of a design variable.
#
#
# <a><img src="../images/ipynb/aco_multilayer_network.png" alt="GRaphical Representation Of The Aco Algorithm In The Form Of A Multi-Layered Network" title="Credit: Engineering Optimization: Theory and Practice - <NAME> (2020)" onerror="this.onerror=null;this.src='{{ site.baseurl }}/images/ipynb/aco_multilayer_network.png';" style="max-height:480px;"/></a>
#
# <p style="font-size:small; text-align:center;"><i>Figure: Graphical Representation Of The ACO Algorithm In The Form Of A Multi-Layered Network</i></p>
# Let the colony consist of `N` ants. The ants start at the home node, travel through the various layers from the first layer to the last or final layer, and end at the destination node in each cycle or iteration. Each ant can select only one node in each layer in accordance with the **state transition rule**. The nodes selected along the path visited by an ant represent a candidate solution.
#
# For example, a typical path visited by an ant is shown by thick lines in the figure above. This path represents the solution *(x12, x23, x31, x45, x56, x64)*. Once the path is complete, the ant deposits some pheromone on the path based on the **local updating rule**. When all the ants complete their paths, the pheromones on the globally best path are updated using the **global updating rule**.
#
# > In the beginning of the optimization process, all the edges are initialized with an equal amount of pheromone, thus, ants start from the home node and end at the destination node by randomly selecting a node in each layer.
#
# The optimization process is terminated if either the prespecified maximum number of iterations is reached or no better solution is found in a prespecified number of successive cycles or iterations. The values of the design variables denoted by the nodes on the path with largest amount of pheromone are considered as the components of the optimum solution vector. **In general, at the optimum solution, all ants travel along the same best (converged) path**.
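As a rough sketch of the state transition rule (written in Python rather than the MATLAB used later, with made-up pheromone values), node selection in each layer reduces to roulette-wheel sampling proportional to pheromone levels:

```python
import random

_rng = random.Random(42)  # fixed seed so the sketch is reproducible

def select_node(pheromones):
    """Pick a node index with probability proportional to its pheromone level."""
    r = _rng.random() * sum(pheromones)
    cumulative = 0.0
    for i, tau in enumerate(pheromones):
        cumulative += tau
        if r <= cumulative:
            return i
    return len(pheromones) - 1

# A layer whose third node (index 2) has accumulated most of the pheromone:
layer = [0.1, 0.2, 1.5, 0.2]
picks = [select_node(layer) for _ in range(1000)]
print(picks.count(2) / 1000)  # roughly 1.5 / 2.0 = 0.75
```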
#
# > Note: The step-by-step procedure of ACO algorithm for solving a minimization problem can be summarized as shown in the figure below. The steps and equations presented in this figure **(algorithm summary)** will be used in the MATLAB code implementation and constantly referred to.
#
#
# <a><img src="../images/ipynb/aco_algorithm_summary_e.png" alt="Aco Algorithm Summary For Solving A Minimization Problem" title="Credit: Engineering Optimization: Theory and Practice - <NAME> (2020)" onerror="this.onerror=null;this.src='{{ site.baseurl }}/images/ipynb/aco_algorithm_summary_e.png';" style="max-height:500px;"/></a>
#
# <p style="font-size:small; text-align:center;"><i>Figure: ACO Algorithm Summary For Solving A Minimization Problem **(open image in new tab for full size)**</i></p>
# <hr>
#
# ## Problem Analysis: Site-Level Facilities Layout
#
# > Note: All the information in this section is derived from the reference paper - *[Site-Level Facilities Layout Using Genetic Algorithms](https://ascelibrary.org/doi/abs/10.1061/%28ASCE%290887-3801%281998%2912%3A4%28227%29)*.
#
# <br>
#
# ### Layout Chromosome Representation
#
# The locations of the layout are displayed in the string representation, and the order of facilities is assumed to be sequential from 1 to 11 as follows:
#
# 1. *Site Office (SO)*
# 1. *Falsework Workshop (FW)*
# 1. *Labor Residence (LR)*
# 1. *Storeroom 1 (S1)*
# 1. *Storeroom 2 (S2)*
# 1. *Carpentry Workshop (CW)*
# 1. *Reinforcement Steel Workshop (RSW)*
# 1. *Side Gate (SG)*
# 1. *Electrical, Water, And Other Utilities Control Room (UCR)*
# 1. *Concrete Batch Workshop (CBW)*
# 1. *Main Gate (MG)*
#
# -- start: MATLAB code --
f_Facilities = ["Site Office", "Falsework Workshop", "Labor Residence", "Storeroom 1", ...
"Storeroom 2", "Carpentry Workshop", "Reinforcement Steel Workshop", ...
"Side Gate", "Utilities Control Room", "Concrete Batch Workshop", "Main Gate"];
# -- end: MATLAB code--
# > Thus, a layout string representation (chromosome) of **[ 7 5 3 11 8 6 4 1 2 9 10 ]** represents the following Facility@Location-pair information:<br><br>**[ SO@L7 , FW@L5 , LR@L3 , S1@L11 , S2@L8 , CW@L6 , RSW@L4 , SG@L1 , UCR@L2 , CBW@L9 , MG@L10 ]**
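To make that concrete, a small Python helper (hypothetical, for illustration only — the post's actual implementation is in MATLAB) can decode a chromosome into those Facility@Location pairs:

```python
# Facility abbreviations in their fixed 1-to-11 order, as listed above.
ABBREVIATIONS = ["SO", "FW", "LR", "S1", "S2", "CW", "RSW", "SG", "UCR", "CBW", "MG"]

def decode_layout(chromosome):
    """chromosome[i] is the location number assigned to facility i + 1."""
    return ["%s@L%d" % (abbr, loc) for abbr, loc in zip(ABBREVIATIONS, chromosome)]

print(decode_layout([7, 5, 3, 11, 8, 6, 4, 1, 2, 9, 10]))
# ['SO@L7', 'FW@L5', 'LR@L3', 'S1@L11', 'S2@L8', 'CW@L6',
#  'RSW@L4', 'SG@L1', 'UCR@L2', 'CBW@L9', 'MG@L10']
```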
#
# ### Design Variables
#
# The design variables in this problem are the following:
#
# - The matrix variable `facilitiesFrequenciesMatrix` represents the **frequency of trips (in 1 day)** made between facilities.
# - The matrix variable `locationsDistancesMatrix` represents the **distances between the available locations** (in meters) on the site.
#collapse-show
# -- start: MATLAB code --
facilitiesFrequenciesMatrix = [
0, 5, 2, 2, 1, 1, 4, 1, 2, 9, 1;
5, 0, 2, 5, 1, 2, 7, 8, 2, 3, 8;
2, 2, 0, 7, 4, 4, 9, 4, 5, 6, 5;
2, 5, 7, 0, 8, 7, 8, 1, 8, 5, 1;
1, 1, 4, 8, 0, 3, 4, 1, 3, 3, 6;
1, 2, 4, 7, 3, 0, 5, 8, 4, 7, 5;
4, 7, 9, 8, 4, 5, 0, 7, 6, 3, 2;
1, 8, 4, 1, 1, 8, 7, 0, 9, 4, 8;
2, 2, 5, 8, 3, 4, 6, 9, 0, 5, 3;
9, 3, 6, 5, 3, 7, 3, 4, 5, 0, 5;
1, 8, 5, 1, 6, 5, 2, 8, 3, 5, 0;
];
locationsDistancesMatrix = [
0, 15, 25, 33, 40, 42, 47, 55, 35, 30, 20;
15, 0, 10, 18, 25, 27, 32, 42, 50, 45, 35;
25, 10, 0, 8, 15, 17, 22, 32, 52, 55, 45;
33, 18, 8, 0, 7, 9, 14, 24, 44, 49, 53;
40, 25, 15, 7, 0, 2, 7, 17, 37, 42, 52;
42, 27, 17, 9, 2, 0, 5, 15, 35, 40, 50;
47, 32, 22, 14, 7, 5, 0, 10, 30, 35, 40;
55, 42, 32, 24, 17, 15, 10, 0, 20, 25, 35;
35, 50, 52, 44, 37, 35, 30, 20, 0, 5, 15;
30, 45, 55, 49, 42, 40, 35, 25, 5, 0, 10;
20, 35, 45, 53, 52, 50, 40, 35, 15, 10, 0;
];
[nLocations, nFacilities] = size(facilitiesFrequenciesMatrix);
# -- end: MATLAB code --
# <a><img src="../images/ipynb/aco_combined_variables_matrices.png" alt="Contour Plots Of The Design Variables" title="Credit: Mein.Platz (2021)" onerror="this.onerror=null;this.src='{{ site.baseurl }}/images/ipynb/aco_combined_variables_matrices.png';" style="max-height:300px;"/></a>
#
# <p style="font-size:small; text-align:center;"><i>Figure: Contour Plots Of The Design Variables **(open image in new tab for full size)**</i></p>
# The figure above shows the symmetric contour plots of the `facilitiesFrequenciesMatrix` (left), `locationsDistancesMatrix` (middle) and the element-wise product of the two matrices (right).
#
# A close study of each contour plot reveals some information about the optimization problem and gives a good idea of what to expect from the optimization algorithm. The contour plots are discussed in the rest of this section. Note that *Fn* and *Ln* refer to *Facility n* and *Location n*, respectively.
#
# 1. **facilitiesFrequenciesMatrix**: From the contour plot, there is a lot of traffic between some facilities, particularly between facility-pairs F3 & F7, F8 & F9, and F10 & F1. Thus, it is expected that the algorithm places these facility-pairs at adjacent locations on the site.
# 1. **locationsDistancesMatrix**: The contour plot shows that location-pairs L2 & L9, L3 & L10, and L4 & L11 are farthest from each other on the site. Intuitively, one can expect that only facility-pairs with little traffic between them are placed at these location-pairs and vice-versa.
# 1. **facilitiesFrequenciesMatrix .* locationsDistancesMatrix**: This combined matrix and its contour plot summarize the information contained in each matrix and show three pairs of facility-pair and location-pair combinations that should be avoided.<br><br>These three hotspots result from placing facility-pairs with high traffic at location-pairs with large distances. F2 & F8 should not both occupy L2 & L8 simultaneously. F3 & F10 should not both occupy L3 & L10 simultaneously. F4 & F9 should not both occupy L4 & L9 simultaneously.
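What the algorithm ultimately minimizes can be sketched as follows: for a candidate layout, the total daily travel distance is the sum, over every facility pair, of trip frequency times the distance between their assigned locations. A toy Python example with three facilities (made-up numbers, not the 11×11 matrices above):

```python
# Toy 3-facility example (made-up numbers, not the matrices above).
freq = [
    [0, 5, 2],
    [5, 0, 1],
    [2, 1, 0],
]
dist = [
    [0, 10, 30],
    [10, 0, 15],
    [30, 15, 0],
]

def layout_cost(layout):
    """layout[i] = location index (0-based) assigned to facility i.
    Total cost = sum over facility pairs of trips * distance."""
    n = len(layout)
    return sum(
        freq[i][j] * dist[layout[i]][layout[j]]
        for i in range(n)
        for j in range(i + 1, n)
    )

# Placing the high-traffic pair (facilities 0 and 1) at the closest
# location pair beats placing them farthest apart:
print(layout_cost([0, 1, 2]))  # 5*10 + 2*30 + 1*15 = 125
print(layout_cost([0, 2, 1]))  # 5*30 + 2*10 + 1*15 = 185
```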
# ### Constraints
#
# The relative positions of the gates -- *Side Gate (SG) and Main Gate (MG)* -- are not subject to change during the allocation process and are treated as special facilities clamped on the predetermined locations. Facility 8 *(Side Gate)* and Facility 11 *(Main Gate)* are clamped at Locations 1 and 10.
#
# Thus, the population of chromosomes will be represented as: **[ x x x x x x x 1 x x 10 ]** where the remaining **9** blank positions '**x**' are site locations to be assigned to the remaining **9** facilities by the optimization algorithm.
#collapse-show
# -- start: MATLAB code --
reservedLocations = [1, 10];
specialFacilities = [8, 11];
# -- end: MATLAB code --
# To encode these constraints (reserved locations for special facilities) into the ACO implementation, the pheromone matrix is initialized such that reservedLocation-specialFacility node has a 100% probability of being selected by ants at every iteration. This is done as shown in the figure below, where the initial pheromone value is set to 1.
# <a><img src="../images/ipynb/aco_initialized_pheromone_matrix.png" alt="Initialized Pheromone Matrix With Constraints" title="Credit: Mein.Platz (2021)" onerror="this.onerror=null;this.src='{{ site.baseurl }}/images/ipynb/aco_initialized_pheromone_matrix.png';" style="max-height:360px;"/></a>
#
# <p style="font-size:small; text-align:center;"><i>Figure: Initialized Pheromone Matrix With Constraints</i></p>
# The pheromone initializations of the entire row and column containing the reservedLocation-specialFacility node -- other than the reservedLocation-specialFacility node itself -- are set to zero. Thus, only one facility can be probabilistically chosen for that location and that same facility will always be probabilistically assigned to the same location.
# <hr>
#
# ## Code Implementation: Ant Colony Optimization
#
# > Note: All the code presented in this post is written in the **MATLAB scripting language**; however, the post was prepared in a Jupyter notebook (for convenience), so the syntax highlighting will be slightly off. Other than that, the code is all good and can be directly copy-pasted into MATLAB.
#
# <br>
# <a><img src="../images/ipynb/aco_code_implementation_overview.png" alt="GACO Code Implementation Overview" title="Credit: Mein.Platz (2021)" onerror="this.onerror=null;this.src='{{ site.baseurl }}/images/ipynb/aco_code_implementation_overview.png';" style="max-height:420px;"/></a>
#
# <p style="font-size:small; text-align:center;"><i>Figure: ACO Code Implementation Overview **(open image in new tab for full size)**</i></p>
#hide
import pandas as pd
import numpy as np
#
#
facilitiesFrequenciesMatrix = np.array([
[0, 5, 2, 2, 1, 1, 4, 1, 2, 9, 1],
[5, 0, 2, 5, 1, 2, 7, 8, 2, 3, 8],
[2, 2, 0, 7, 4, 4, 9, 4, 5, 6, 5],
[2, 5, 7, 0, 8, 7, 8, 1, 8, 5, 1],
[1, 1, 4, 8, 0, 3, 4, 1, 3, 3, 6],
[1, 2, 4, 7, 3, 0, 5, 8, 4, 7, 5],
[4, 7, 9, 8, 4, 5, 0, 7, 6, 3, 2],
[1, 8, 4, 1, 1, 8, 7, 0, 9, 4, 8],
[2, 2, 5, 8, 3, 4, 6, 9, 0, 5, 3],
[9, 3, 6, 5, 3, 7, 3, 4, 5, 0, 5],
[1, 8, 5, 1, 6, 5, 2, 8, 3, 5, 0],
])
locationsDistancesMatrix = np.array([
[ 0, 15, 25, 33, 40, 42, 47, 55, 35, 30, 20],
[15, 0, 10, 18, 25, 27, 32, 42, 50, 45, 35],
[25, 10, 0, 8, 15, 17, 22, 32, 52, 55, 45],
[33, 18, 8, 0, 7, 9, 14, 24, 44, 49, 53],
[40, 25, 15, 7, 0, 2, 7, 17, 37, 42, 52],
[42, 27, 17, 9, 2, 0, 5, 15, 35, 40, 50],
[47, 32, 22, 14, 7, 5, 0, 10, 30, 35, 40],
[55, 42, 32, 24, 17, 15, 10, 0, 20, 25, 35],
[35, 50, 52, 44, 37, 35, 30, 20, 0, 5, 15],
[30, 45, 55, 49, 42, 40, 35, 25, 5, 0, 10],
[20, 35, 45, 53, 52, 50, 40, 35, 15, 10, 0],
])
reservedLocations = [1, 10]
specialFacilities = [8, 11]
#
#
init_pheromoneMatrix = np.ones(locationsDistancesMatrix.shape, dtype=int)  # np.int was removed in NumPy >= 1.24
cnstr_pheromoneMatrix = init_pheromoneMatrix.copy()
for i in reservedLocations:
cnstr_pheromoneMatrix[i-1, :] = 0
for i in specialFacilities:
cnstr_pheromoneMatrix[:, i-1] = 0
for i in range(len(reservedLocations)):
cnstr_pheromoneMatrix[reservedLocations[i]-1, specialFacilities[i]-1] = 1
# The first order of business is to initialize the algorithm hyperparameters as follows (in order):
#
# - *fix the MATLAB random number generator seed for reproducibility*
# - *set the maximum number of iterations, if there is no convergence*
# - *set the number of iterations to determine convergence if the best fitness value remains the same*
# - *initialize a "history" matrix for logging the [best fitness | mean fitness | best solution] per iteration*
# - *set the number of ants in the population*
# - *set the pheromone decay factor (evaporation rate)*
# - *set the pheromone scaling parameter (how much to boost pheromone values for good solutions)*
# - *choose whether to pretty-plot the pheromone matrix weights every iteration*
#collapse-show
# -- start: MATLAB code --
rng(sum(double(char("AntColonyOptimization"))))
maxIterations = 100;
nIterConvergence = 10;
histFitnesses = zeros(maxIterations, nLocations+2);
nPopulation = 50;
pheromoneDecayFactor = 0.05;
pheromoneScalingParameter = 1.5;
printWeights = false;
# -- end: MATLAB code --
# Next, we initialize the pheromone matrix. This will be a `m x m` (11 x 11) matrix since the number of facilities and available locations are the same (11). The rows and columns represent the locations and facilities respectively.
#
#collapse-show
# -- start: MATLAB code --
pheromoneMatrix = ones(nLocations, nFacilities);
# -- end: MATLAB code --
# +
#hide
<details open style="cursor:pointer;">
<summary style="font-size:small;"><b><i>Initialized Pheromone Matrix:</i></b></summary>
<br>
| - | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 |
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 2 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 3 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 4 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 5 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 6 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 7 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 8 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 9 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 10 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 11 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
</details>
# -
# From the problem analysis, we have two constraints -- Facility 8 (Side Gate) and Facility 11 (Main Gate) are clamped at Locations 1 and 10 respectively. Thus, we modify the initialized pheromone matrix such that these special facilities (F8 and F11) are the only option for the ants traversing the reserved locations (L1 and L10) respectively.
#
# This is done by setting the reserved locations rows to zero i.e. no facilities assigned at those locations. Next, the special facilities columns are set to zero i.e. those facilities are not assigned to any locations. Finally, the special facilities are placed at their reserved locations i.e. the location-facility nodes are set to one.
#collapse-show
# -- start: MATLAB code --
for idx = 1:length(reservedLocations)
pheromoneMatrix(reservedLocations(idx),:) = 0;
pheromoneMatrix(:,specialFacilities(idx)) = 0;
pheromoneMatrix(reservedLocations(idx),specialFacilities(idx)) = 1;
end;
# -- end: MATLAB code --
# +
#hide
<details open style="cursor:pointer;">
<summary style="font-size:small;"><b><i>Constrained Pheromone Matrix:</i></b></summary>
<br>
| - | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 |
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 |
| 2 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 0 |
| 3 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 0 |
| 4 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 0 |
| 5 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 0 |
| 6 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 0 |
| 7 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 0 |
| 8 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 0 |
| 9 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 0 |
| 10 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
| 11 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 0 |
</details>
# -
# <hr>
#
# **Now we can start the algorithm loop i.e. *Iteration 1*.**
#
# **[1.]**
# From `Step 3a` of the algorithm summary above, we need to generate a random number for each ant for each location. The random numbers will be stored in a `nPopulation x nLocations` matrix -- the rows represent ants while the columns represent locations. These random numbers -- **generated once every iteration** -- will be used to probabilistically assign facilities to each location -- in conjunction with the pheromone matrix -- as each ant traverses the search space.
#
# **[2.]**
# We also initialize a matrix to store the path taken by each ant. This will also be a `nPopulation x nLocations` matrix. The entire matrix will initially be filled with ones -- meaning for all ants, Facility 1 is assigned to all locations. Obviously, this makes no sense. The correct facility allocations will be done subsequently.
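# Steps [1.] and [2.] can be sketched in NumPy as follows (the seed below is an arbitrary illustrative choice, not the one used in the MATLAB script):

```python
import numpy as np

n_population, n_locations = 50, 11
rng = np.random.default_rng(42)  # arbitrary seed, for reproducibility only

# [1.] one random number per ant per location, regenerated every iteration
rand_location_probs = rng.random((n_population, n_locations))

# [2.] placeholder paths: initially every location holds Facility 1
facilities_x_locations = np.ones((n_population, n_locations), dtype=int)
```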
#collapse-show
# -- start: MATLAB code --
iters = 1;
% [1.]
randLocationProbabilities = rand(nPopulation, nLocations);
% [2.]
facilitiesXlocations = ones(size(randLocationProbabilities));
# -- end: MATLAB code --
# <hr>
#
# Now, we need to determine the actual path taken by each ant i.e. the correct location-facility pair allocations.
#
# **[3.]**
# We create a temporary copy of the pheromone matrix for each ant. We won't be working with the actual pheromone matrix because we will be iteratively eliminating location-facility pairs as the allocations are made. **This prevents a facility from being assigned to two locations or two facilities being assigned to the same location**. This will be done in the same manner we constrained some special facilities to reserved locations earlier.
#
# > Once a facility has been assigned to a location, the facility's column in the temporary pheromone matrix is set to zero ensuring that the facility will never be selected again because its probabilities henceforth will be zero!
#
# **[4.]**
# From `Step 2a` of the algorithm summary above, we need to compute the cumulative probability ranges for each location in the temporary pheromone matrix. This will be done before each location-facility allocation to reflect any eliminations done in the temporary pheromone matrix. We will create the function *calcProbsMatrices.m* to handle this as follows:
#
# 1. Convert the entries in each row of the (temporary) pheromone matrix to probabilities by dividing the row by its sum. For example, given the row *[1, 2, 3]*, the sum is 6. Dividing the row by its sum, we get *[0.17, 0.33, 0.50]*. Thus, the new probabilities row will sum to 1.
# 2. Iteratively add each probability in the row to the previous entry. Continuing with the previous example, we get the cumulative probabilities row as *[0.17, (0.17+0.33), (0.17+0.33+0.50)] == [0.17, 0.50, 1.00]*. Thus the last entry in the new cumulative probabilities row will be 1.
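# These two steps translate almost line-for-line into NumPy. The sketch below mirrors the logic of *calcProbsMatrices.m* (an illustrative Python analogue, not the MATLAB file itself):

```python
import numpy as np

def calc_cumulative_probs(pheromone):
    """Row-normalize a pheromone matrix, then cumulatively sum each row."""
    pheromone = np.asarray(pheromone, dtype=float)
    row_sums = pheromone.sum(axis=1, keepdims=True)
    # guard against all-zero rows (eliminated facilities/locations)
    probs = np.divide(pheromone, row_sums,
                      out=np.zeros_like(pheromone), where=row_sums != 0)
    return np.cumsum(probs, axis=1)

print(calc_cumulative_probs([[1, 2, 3]]).round(2))  # [[0.17 0.5  1.  ]]
```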
# +
#
# >>> function @ file: calcProbsMatrices.m
# +
#collapse-hide
# -- start: MATLAB code --
function [cummProbsMatrix] = calcProbsMatrices(pheromoneMatrix)
[nLocations, nFacilities] = size(pheromoneMatrix);
probsMatrix = zeros(size(pheromoneMatrix));
for i = 1:nLocations
for j = 1:nFacilities
probsMatrix(i,j) = pheromoneMatrix(i,j) / sum(pheromoneMatrix(i,:));
end
end
cummProbsMatrix = probsMatrix;
for i = 1:nLocations
for j = 2:nFacilities
cummProbsMatrix(i,j) = cummProbsMatrix(i,j-1) + cummProbsMatrix(i,j);
end
end
end
# -- end: MATLAB code --
# -
# **[5.]**
# Using the `randLocationProbabilities` matrix with the constantly updated cumulative probabilities pheromone matrix, we determine the path taken by each ant using the roulette-wheel selection process (`Step 2b` in the algorithm summary). You can check out this [article](https://en.wikipedia.org/wiki/Fitness_proportionate_selection) or this [post](https://rocreguant.com/roulette-wheel-selection-python/2019/) for more information on roulette-wheel selection. The basic idea is as follows (excerpt from Wikipedia):
#
# > This could be imagined similar to a Roulette wheel in a casino. Usually, a proportion of the wheel is assigned to each of the possible selections based on their fitness value. This could be achieved by dividing the fitness of a selection by the total fitness of all the selections, thereby normalizing them to 1. Then a random selection is made similar to how the roulette wheel is rotated.
#
# The location-facility allocation is carried out as follows: **for each ant, for each location, the first facility whose cumulative probability value is greater than the random number generated is assigned**. Using the cumulative probabilities row in the previous example *[0.17, 0.50, 1.00]*, let us assume we randomly generated 0.47. The first value in the row greater than 0.47 is 0.50. Thus, Facility 2 will be assigned to the location represented by this row.
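# The selection rule itself reduces to a single comparison; an illustrative Python sketch:

```python
import numpy as np

def roulette_select(cumulative_probs, r):
    """Index of the first entry whose cumulative probability exceeds r."""
    return int(np.argmax(np.asarray(cumulative_probs) > r))

# Using the example row above with a random draw of 0.47:
print(roulette_select([0.17, 0.50, 1.00], 0.47))  # 1 -> Facility 2 (0-based index)
```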
#collapse-show
# -- start: MATLAB code --
for ant = 1:nPopulation
% [3.] create temporary pheromone matrix
tempPheromoneMatrix = pheromoneMatrix;
for i = 1:nLocations
% [4.] calculate new cumulative probabilities
[cummProbsMatrix] = calcProbsMatrices(tempPheromoneMatrix);
for j = 1:nFacilities
% [5.] roulette-wheel selection
if cummProbsMatrix(i,j) > randLocationProbabilities(ant, i)
facilitiesXlocations(ant,j) = i;
% [3.] eliminate selected facility from temporary pheromone matrix
tempPheromoneMatrix(:,j) = 0;
break
end
end
end
end;
# -- end: MATLAB code --
# <hr>
#
# **[6.]**
# We can carry out a feasibility check to ensure that a facility is not assigned to two locations or two facilities assigned to the same location. This is unnecessary because the iterative elimination detailed in **[3.]** prevents exactly this from happening. Still, it doesn't hurt to check. The function *calcFeasibilities.m* handles this as follows:
#
# - For a solution to be feasible, the number of unique location-facility pairs should be equal to the number of available locations (11).
# - If there are no infeasible solutions in the ant population, the number of feasible solutions should be equal to the number of ants in the population (50).
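# An illustrative Python analogue of this uniqueness check (mirroring the logic of *calcFeasibilities.m*):

```python
import numpy as np

def all_feasible(solutions):
    """True when every ant's solution assigns each facility exactly once."""
    solutions = np.asarray(solutions)
    n_ants, n_locations = solutions.shape
    # a row is feasible when its number of unique entries equals its length
    return all(len(set(row)) == n_locations for row in solutions)

print(all_feasible([[1, 2, 3], [3, 1, 2]]))  # True
print(all_feasible([[1, 1, 3], [3, 1, 2]]))  # False (facility 1 used twice)
```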
#
#collapse-show
# -- start: MATLAB code --
% [6.] feasibility check
calcFeasibilities(facilitiesXlocations, false);
# -- end: MATLAB code --
# +
#
# >>> function @ file: calcFeasibilities.m
# +
#collapse-hide
# -- start: MATLAB code --
function calcFeasibilities(solutionsMatrix, verbose)
[nPopulation, nVariables] = size(solutionsMatrix);
nUniques = zeros(1, nPopulation);
for i = 1:nPopulation
nUniques(i) = length(unique(solutionsMatrix(i,:)));
end
feasibilityCheck = nUniques == nVariables;
if sum(feasibilityCheck) ~= nPopulation
error("There are INVALID solutions in the population!")
else
if verbose
disp("All solutions in the population are VALID!")
end
end
end
# -- end: MATLAB code --
# -
# <hr>
#
# **[7.]**
# From `Step 3c` of the algorithm summary, we need to evaluate the objective function for the path traversed (solutions generated) by the ants in the population. This will be handled by the function *calcObjFunction.m* as follows:
#
# - Initialize a 1D matrix to hold the fitness values for the path traversed by each ant.
# - For each ant in the population:
# - For each location-facility pair in the ant's path (solution):
# - Calculate the total travel distance between the pair and all other location-facility pairs in the solution.
# - The total travel distance between two location-facility pairs is the distance between the locations multiplied by the trip frequency between the facilities.
# - Sum all the total travel distances for all location-facility pairs in the solution. This gives the fitness value of the path traversed (solution generated) by the ant.
# - Sort the fitness values for the ants in the population in ascending order (since we are minimizing total travel distance, the best fitness is the smallest).
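# As a sanity check of the logic, here is an illustrative NumPy analogue of the fitness computation (0-based indices, with hypothetical 3x3 matrices standing in for the project data):

```python
import numpy as np

def layout_fitness(chromosome, distances, frequencies):
    """Total travel distance; chromosome[i] is the location of facility i."""
    locs = np.asarray(chromosome)
    # distance between the locations of every facility pair,
    # weighted by the trip frequency between those facilities
    return int((distances[np.ix_(locs, locs)] * frequencies).sum())

dist = np.array([[0, 5, 9], [5, 0, 4], [9, 4, 0]])   # hypothetical
freq = np.array([[0, 3, 1], [3, 0, 2], [1, 2, 0]])   # hypothetical
print(layout_fitness([0, 1, 2], dist, freq))  # 64
```

# Note that each pair is counted twice (once per direction) because the matrices are symmetric, which matches the double loop in *calcObjFunction.m*.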
#
# **[8.]**
# Next, we log the best (minimum) fitness value, the mean of all fitness values and the best path traversed (solution generated) at the current iteration. Recall that we have initialized the matrix `histFitnesses` for this purpose. We also isolate the best path (solution) at the current iteration for updating the actual pheromone matrix later on.
#
# > Note: We have not altered the actual pheromone matrix so far. We used copies of it to allocate facilities to locations earlier. The last time we altered the actual pheromone matrix was when we assigned the special facilities to their reserved locations.
#
# <br>
#
# **[9.]**
# At this point, we can check out the pheromone matrix using a simple heatmap. I have also written a function *printPheromoneWeights.m* to create a pretty plot of the pheromone matrix with the best path (solution) superimposed. The function can optionally create .png images of the plots too. I won't go into the breakdown of the function but it can be turned on/off with the `printWeights` variable.
#
# +
#collapse-show
# -- start: MATLAB code --
% [7.]
[fitnesses, population] = calcObjFunction(facilitiesXlocations);
% [8.]
histFitnesses(iters, :) = [fitnesses(1) mean(fitnesses) population(1,:)];
bestSolution = population(1,:);
% [9.]
heatmap(pheromoneMatrix, "CellLabelFormat","%0.2g", "ColorScaling","log", "ColorbarVisible","off");
xlabel("Facilities"); ylabel("Locations"); pause(1);
if printWeights
printPheromoneWeights(pheromoneMatrix, bestSolution, iters, false)
end
# -- end: MATLAB code --
# +
#
# >>> function @ file: calcObjFunction.m
# >>> function @ file: printPheromoneWeights.m
# +
#collapse-hide
# -- start: MATLAB code --
function [popFitnesses, sortedPopulation] = calcObjFunction(population)
[nLocations, nFacilities] = size(facilitiesFrequenciesMatrix);
[nPopulation, ~] = size(population);
popFitnesses = zeros(nPopulation, 1);
for idx = 1:nPopulation
chrmsm = population(idx,:);
distXfreq = zeros(nLocations, nFacilities);
for i = 1:nLocations
for j = 1:nFacilities
dist = locationsDistancesMatrix(chrmsm(i),chrmsm(j));
freq = facilitiesFrequenciesMatrix(i,j);
distXfreq(i,j) = dist*freq;
end
end
popFitnesses(idx) = sum(sum(distXfreq));
end
if nPopulation > 1
temp = [population, popFitnesses];
temp = sortrows(temp, nLocations+1);
sortedPopulation = temp(:, 1:nLocations);
popFitnesses = temp(:, nLocations+1);
end
end
# -- break: MATLAB code --
function printPheromoneWeights(pheromoneMatrix, bestSolution, iters, display)
normPheromoneMatrix = zeros(size(pheromoneMatrix));
[nLocations, nFacilities] = size(pheromoneMatrix);
for i = 1:nLocations
normPheromoneMatrix(i,:) = pheromoneMatrix(i,:) - min(pheromoneMatrix(i,:));
normPheromoneMatrix(i,:) = normPheromoneMatrix(i,:) / max(normPheromoneMatrix(i,:));
end
if not(display)
figure('visible', 'off');
end
for i = 1:nLocations
for j = 1:nFacilities
scatter(i, j, 'o', 'MarkerEdgeAlpha', normPheromoneMatrix(i,j), "LineWidth", 3);
hold on;
end
end
grid on; box on;
axis([0 nFacilities+1 0 nLocations+1]);
xticks([1:1:nFacilities]); yticks([1:1:nLocations]);
xlabel("Locations"); ylabel("Facilities");
if not(display)
title(sprintf("Best Solution & Pheromone Weights - Iteration %02d", iters));
else
title("Current Best Solution & Pheromone Weights");
end
plot(bestSolution, [1:1:length(bestSolution)], "r--", "LineWidth",1);
hold off;
if not(display)
print(sprintf('Pheromone_Matrix_Weights_%02d', iters), "-dpng")
end
end
# -- end: MATLAB code --
# -
# <hr>
#
# **[10.]**
# Next, we check if the algorithm has converged (`Step 4`). The criterion we use is whether the best fitness value remains unchanged for `nIterConvergence` iterations. While this differs from the criterion specified in the algorithm summary -- checking if all the ants take the same best path -- both criteria should converge to the same solution. Also, we don't start checking for convergence until some iterations have passed (`2*nIterConvergence` iterations).
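# The stagnation test can be sketched in plain Python (an illustrative analogue, using a list of per-iteration best fitness values):

```python
def has_converged(best_history, n_iter_convergence):
    """True when the best fitness was unchanged for the last
    n_iter_convergence iterations, checked only after
    2*n_iter_convergence iterations have passed."""
    if len(best_history) <= 2 * n_iter_convergence:
        return False
    recent = best_history[-(n_iter_convergence + 1):]
    return len(set(recent)) == 1

print(has_converged([90] * 25, 10))          # True: fitness stagnated
print(has_converged(list(range(25)), 10))    # False: still improving
```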
#
# **[11.]**
# If the algorithm is yet to converge, we update the pheromone matrix (`Step 4`). The pheromone trails for location-facility pairs (nodes) on the best path (solution) are boosted. This increases the probability that the good paths will be taken by ants in subsequent iterations. On the other hand, all location-facility pairs (nodes) not on the best path (solution) have some of their pheromone trails evaporated.
#
# - **Pheromone Trails Boosting Update**: Divide the best (minimum) fitness value at the current iteration by the worst (maximum) fitness value and multiply by the `pheromoneScalingParameter`. The result is added to the pheromone matrix nodes CORRESPONDING to the best path.
# - **Pheromone Trails Evaporation Update**: Subtract the `pheromoneDecayFactor` from 1. The result multiplies the pheromone matrix nodes NOT CORRESPONDING to the best path.
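# Both updates in one illustrative NumPy sketch (0-based indices; `best_solution[j]` is assumed to hold the location assigned to facility j, matching the row layout used throughout):

```python
import numpy as np

def update_pheromone(pheromone, best_solution, best_fit, worst_fit,
                     decay=0.05, scale=1.5):
    """Evaporate all trails, then boost the nodes on the best path."""
    boost = scale * (best_fit / worst_fit)
    updated = pheromone * (1.0 - decay)      # evaporation everywhere...
    for facility, location in enumerate(best_solution):
        # ...except nodes on the best path, which are boosted instead
        updated[location, facility] = pheromone[location, facility] + boost
    return updated
```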
#
# +
#collapse-show
# -- start: MATLAB code --
% [10.]
if iters > 2*nIterConvergence && length(unique(histFitnesses(iters-nIterConvergence:iters, 1))) == 1
warning("Fitness has stagnated! Terminating at iteration %d!\n", iters)
histFitnesses(iters+1:end, :) = [];
break
end
% [11.]
bestNodeUpdate = pheromoneScalingParameter * (fitnesses(1)/fitnesses(end));
otherNodesUpdate = 1-pheromoneDecayFactor;
for i = bestSolution
for j = 1:nFacilities
if j == find(bestSolution == i)
pheromoneMatrix(i,j) = pheromoneMatrix(i,j) + bestNodeUpdate;
else
pheromoneMatrix(i,j) = pheromoneMatrix(i,j) * otherNodesUpdate;
end
end
end;
# -- end: MATLAB code --
# -
# <hr>
#
# **[12.]**
# **Finally, we proceed to the next iteration/generation!** We can refactor all the previous steps into a loop and wrap the entire code implementation into a single MATLAB file.
#
# I have also written a function `visualizeSolution.m` for visualizing location-facility pairs in a solution. This helps to eyeball how "sensible" the particular location-facility allocation is with respect to other location-facility allocations in the solution. Unfortunately, I won't go into the breakdown of the function but I promise the code is pretty straightforward.
# +
#
# >>> complete implementation script @ file: Site_Layout_Ant_Colony_Optimization_Code.m
# >>> complete implementation live script @ file: Site_Layout_Ant_Colony_Optimization.mlx
# +
#collapse-hide
# -- start: MATLAB code --
% --- START GENERATION LOOP ---
for iters = 1:maxIterations
% --- GENERATE LOCATION-FACILITY PLACEMENT PROBABILTES FOR ALL ANTS IN POPULATION
randLocationProbabilities = rand(nPopulation, nLocations);
% --- CHOOSE THE PATH FOR EACH ANT USING ROULETTE WHEEL SELECTION
locationsXfacilities = ones(size(randLocationProbabilities));
facilitiesXlocations = ones(size(randLocationProbabilities));
for ant = 1:nPopulation
% create a copy of the pheromone matrix which will be updated once a facility has been assigned to a location
% i.e. once a facility has been assigned to a location, its pheromone values (for its column) are set to zero such that
% there is zero probability of that same facility being assigned to another location in subsequent steps.
tempPheromoneMatrix = pheromoneMatrix;
for i = 1:nLocations
% (re)compute the probabilities matrices to reflect any updates in the temp pheromone matrix
[cummProbsMatrix] = calcProbsMatrices(tempPheromoneMatrix);
for j = 1:nFacilities
if cummProbsMatrix(i,j) > randLocationProbabilities(ant, i)
% TRANSLATION: facility j (column idx value) assigned to location i (column idx)
facilitiesXlocations(ant,j) = i;
% TRANSLATION: location i (column idx) contains facility j (column idx value)
locationsXfacilities(ant,i) = j;
% since facility j has been assigned to a location, set column j in the temp pheromone matrix to zero
% ensuring that facility j will never be selected again! because its probabilities henceforth will be zero!
tempPheromoneMatrix(:,j) = 0;
break
end
end
end
end; clear ant tempPheromoneMatrix i j locationsXfacilities cummProbsMatrix
% --- TEST THAT THERE ARE NO FACILITIES ASSIGNED TO MULTIPLE LOCATIONS
calcFeasibilities(facilitiesXlocations, false);
% --- CALCULATE THE FITNESSES OF SOLUTIONS IN THE POPULATION
[fitnesses, population] = calcObjFunction(facilitiesXlocations);
histFitnesses(iters, :) = [fitnesses(1) mean(fitnesses) population(1,:)];
bestSolution = population(1,:);
% --- PLOT THE BEST SOLUTION AND CURRENT PHEROMONE WEIGHTS AND SAVE TO FILE
heatmap(pheromoneMatrix, "CellLabelFormat","%0.2g", "ColorScaling","log", "ColorbarVisible","off");
xlabel("Facilities"); ylabel("Locations"); title(sprintf("Pheromone Matrix @Iteration %02d", iters)); pause(1);
if printWeights
printPheromoneWeights(pheromoneMatrix, bestSolution, iters, false)
end
% --- CHECK FOR CONVERGENCE
% terminate the algorithm if the best fitness value did not change for n iterations
if iters > 2*nIterConvergence && length(unique(histFitnesses(iters-nIterConvergence:iters, 1))) == 1
warning("Fitness has stagnated for %d iterations! Terminating at iteration %d!\n", nIterConvergence, iters)
histFitnesses(iters+1:end, :) = [];
break
end
% --- UPDATE THE PHEROMONE MATRIX
bestNodeUpdate = pheromoneScalingParameter * (fitnesses(1)/fitnesses(end)); % add this to the current value
otherNodesUpdate = 1-pheromoneDecayFactor; % multiply the current value by this
for i = bestSolution
for j = 1:nFacilities
if j == find(bestSolution == i)
pheromoneMatrix(i,j) = pheromoneMatrix(i,j) + bestNodeUpdate;
else
pheromoneMatrix(i,j) = pheromoneMatrix(i,j) * otherNodesUpdate;
end
end
end; clear i j bestNodeUpdate otherNodesUpdate bestSolution
% --- END GENERATION LOOP ---
end; clear iters randLocationProbabilities facilitiesXlocations
clear maxIterations nFacilities nIterConvergence nLocations nPopulation pheromoneDecayFactor pheromoneScalingParameter
# -- end: MATLAB code --
# +
#
# >>> function @ file: visualizeSolution.m
# +
#collapse-hide
# -- start: MATLAB code --
function visualizeSolution(solution, facility)
n_vals = length(solution);
y_vals = zeros(1, n_vals);
y_labels = [];
y_texts = [];
x_dists = zeros(1, n_vals);
t_freqs = zeros(1, n_vals);
for i = 1:length(solution)
y_vals(i) = i;
x_dists(i) = locationsDistancesMatrix(solution(facility), i);
if i == facility
t_freqs(solution(i)) = 1;
else
t_freqs(solution(i)) = facilitiesFrequenciesMatrix(facility, i);
end
y_labels{solution(i)} = sprintf("L%02d", solution(i));
y_texts{solution(i)} = sprintf("[ F%02d ]", i);
end
for i = 1:length(solution)
plot([0, x_dists(i)], [y_vals(i), y_vals(i)], 'LineWidth', t_freqs(i));
text(x_dists(i) + 1, y_vals(i), y_texts(i), 'FontWeight', "bold", 'Color', "b");
hold on;
end
grid("on");
ylim([0, 12]); yticks([1:1:11]);
yticklabels(y_labels);
ylabel("Facilities/Locations", 'FontWeight', "bold");
xlabel("Distances", 'FontWeight', "bold");
title(sprintf("Visualizing Solution: Distances & Frequencies @ Facility %02d", facility));
set(gcf, 'Position', [90 260 750 400]);
hold off;
end
# -- end: MATLAB code --
# -
# <hr>
#
# ## Optimum Solution: Site-Level Facilities Layout Using Ant Colony Optimization
#
# The optimum solution **[ 9 11 5 6 7 2 4 1 3 8 10 ]** -- representing Facility@Location-pair information -- was found at the 20th iteration using an ant colony population size of 50. However, the algorithm did not terminate until the 33rd iteration to ensure that the solution was not a local optimum.
#
# <a><img src="../images/ipynb/aco_print_pheromone_weights.gif" alt="Pretty-Plot Of The Pheromone Weights & Best (Optimum) Solution Per Iteration" title="Credit: Mein.Platz (2021)" onerror="this.onerror=null;this.src='{{ site.baseurl }}/images/ipynb/aco_print_pheromone_weights.gif';" style="max-height:360px;"/></a>
#
# <p style="font-size:small; text-align:center;"><i>Figure: Pretty-Plot Of The Pheromone Weights & Best (Optimum) Solution Per Iteration</i></p>
#
# <a><img src="../images/ipynb/aco_optimum_solution.png" alt="Pheromone Matrix Values (left) & Normalized Pheromone Weights and Best (Optimum) Solution" title="Credit: Mein.Platz (2021)" onerror="this.onerror=null;this.src='{{ site.baseurl }}/images/ipynb/aco_optimum_solution.png';" style="max-height:360px;"/></a>
#
# <p style="font-size:small; text-align:center;"><i>Figure: Pheromone Matrix Values (left) & Normalized Pheromone Weights and Best (Optimum) Solution (right)</i></p>
#
# The figure above shows that for F2, the only viable location was L11, even though F2 is not a reserved facility. However, for F5 and F10, multiple locations were contending for viability. Furthermore, the clustering of the pheromone weights around a small subset of the pheromone matrix shows how quickly the algorithm narrows down to optimal combinations of nodes.
#
# The figure below (derived from *visualizeSolution.m*) shows the distances/frequencies of the facility-location pair *(F3/L5)* relative to other facility-location pairs in the solution. The thickness of the lines represents the frequencies, while their lengths represent the distances. This helps in analyzing the optimality of the solution from a commonsensical point of view. The commonsense heuristic for the allocation of facilities to locations can be summarized as follows:
#
# <p style="text-align:center;"><b><i>Facilities with high trip frequencies between them should be placed at locations with minimal distances between them and vice-versa.</i></b></p>
#
# <a><img src="../images/ipynb/aco_visualize_solution_f3.png" alt="Visualizing Relative Frequencies & Distances For Facility F3 / Location L5" title="Credit: Mein.Platz (2021)" onerror="this.onerror=null;this.src='{{ site.baseurl }}/images/ipynb/aco_visualize_solution_f3.png';" style="max-height:360px;"/></a>
#
# <p style="font-size:small; text-align:center;"><i>Figure: Visualizing Relative Frequencies & Distances For <b>Facility F3 / Location L5</b></i></p>
#
# We can see that the facility-location pair *(F3/L5)* allocation in the optimum solution aligned with the commonsense heuristic. F3 has high trip frequencies with F4 and F7. Hence, those facilities were placed at adjacent locations. F3 has the least trip frequency with F2, thus, F2 was placed the furthest away! The same outcome can be observed for other facility-location pairs using `visualizeSolution(solution, facility)`.
# <hr>
#
# ## Conclusion
#
# I hope this post provided a good background and walkthrough on the site-level facilities layout and MATLAB code implementation of the ant colony optimization algorithm. The code was a bit involved but the individual steps were quite straightforward. From the historical best and mean fitness plots below, we can see how quickly the algorithm converged. Furthermore, the visualization of the optimal solution showed that the location-facility allocations were sensible.
#
# <a><img src="../images/ipynb/aco_historical_best_mean_fitness.png" alt="Historical Best & Mean Fitness Values Per Iteration" title="Credit: Mein.Platz (2021)" onerror="this.onerror=null;this.src='{{ site.baseurl }}/images/ipynb/aco_historical_best_mean_fitness.png';" style="max-height:360px;"/></a>
#
# <p style="font-size:small; text-align:center;"><i>Figure: Historical Best (left) & Mean Fitness Values (right) Per Iteration **(open image in new tab for full size)**</i></p>
#
#
# I encourage the reader to check out *Chapter 13.5 - Modern Methods of Optimization* of **<NAME> - [Engineering Optimization: Theory and Practice (2020)](https://www.wiley.com/en-us/Engineering+Optimization%3A+Theory+and+Practice%2C+5th+Edition-p-9781119454793)** if there is any confusion about the steps and equations of the ant colony optimization algorithm.
#
# > Note: The source code for this post can be found at ***[this repository](https://github.com/Outsiders17711/Site-Layout-Ant-Colony-Optimization)***. Kindly comment below if you spot any issues with the code.
#
# <br>
#
# This is the end of the post on MATLAB code implementation of the ant colony optimization algorithm towards solving a site-level facilities layout optimization problem. Thank you for reading. This post referenced the following resources:
#
# - <NAME> & <NAME> - [Site-Level Facilities Layout Using Genetic Algorithms](https://ascelibrary.org/doi/abs/10.1061/%28ASCE%290887-3801%281998%2912%3A4%28227%29)
# - <NAME> - [Ant Colony Optimization: Introduction And Recent Trends](https://www.sciencedirect.com/science/article/abs/pii/S1571064505000333)
# - Wikipedia - [Systematic layout planning](https://en.wikipedia.org/wiki/Systematic_layout_planning)
# - <NAME> - [Construction Site Layout](https://www.allaboutlean.com/lean-construction/construction-site-plan/)
# - Wikipedia - [Ant colony optimization algorithms](https://en.wikipedia.org/wiki/Ant_colony_optimization_algorithms)
# - <NAME> - [Engineering Optimization: Theory and Practice (2020)](https://www.wiley.com/en-us/Engineering+Optimization%3A+Theory+and+Practice%2C+5th+Edition-p-9781119454793)
# - Wikipedia - [Fitness proportionate (roulette-wheel) selection](https://en.wikipedia.org/wiki/Fitness_proportionate_selection)
# - Roc Reguant - [Roulette Wheel Selection In Python](https://rocreguant.com/roulette-wheel-selection-python/2019/)
#
# <br>
#
# > Tip: **[Jump To Top](#Background)**
#
#
# ---
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
from astropy.io import fits
from astropy.table import Table as Table
import matplotlib.pyplot as plt
import astropy.units as u
from astropy import constants as const
# %matplotlib inline
# +
stack1 = fits.open("/Users/jsmonzon/lbg_da/fits_data/composites/observed/lowz/composite.fits")
model1 = fits.open("/Users/jsmonzon/lbg_da/fits_data/composites/modeled/lowz/sed.fits")
stack2 = fits.open("/Users/jsmonzon/lbg_da/fits_data/composites/observed/hiz/composite.fits")
model2 = fits.open("/Users/jsmonzon/lbg_da/fits_data/composites/modeled/hiz/sed.fits")
lowgal= fits.open("/Users/jsmonzon/lbg_da/fits_data/CG274-fits-werr.fits")
# +
wave_1 = stack1[1].data["wavelength"]
flux_1 = stack1[1].data["flux"]
err_1 = stack1[1].data["flux_err"]
model_1 = model1[0].data
wave_2 = stack2[1].data["wavelength"]
flux_2 = stack2[1].data["flux"]
err_2 = stack2[1].data["flux_err"]
model_2 = model2[0].data
# +
fig, ax = plt.subplots(figsize=(10, 10), tight_layout=True)
#data
plt.subplot(2,1,1)
plt.plot(wave_1, flux_1,label="$z_{low}$ composite ($z_{med}$ = 2.43)",color="black")
plt.plot(wave_1, err_1, label="1$\sigma$ (bootstrap generated)",color="grey")
plt.plot(wave_1[wave_1 < 1225], model_1[wave_1 < 1225], label="extrapolation",color="blue")
plt.plot(wave_1[wave_1 > 1222], model_1[wave_1 > 1222], label="fitting region",color="#f03b20")
plt.fill_between(wave_1, model_1, flux_1, where= (1070 < wave_1) & (wave_1 < 1170),
color="purple", alpha = .4, label="$Lyα$ forest")
#interstellar absorption features
plt.text(1244.,.25,"$SiII$",fontsize=12,color="#fd8d3c")
plt.vlines(1260,0,4, lw = 15, color="grey", alpha=.2, label="ISM absorption")
plt.text(1285.,.25,"$SiII$",fontsize=12,color="#fd8d3c")
plt.vlines(1303,0,4, lw = 20, color="grey", alpha=.2)
plt.text(1320.,.25,"$CII$",fontsize=12,color="#fd8d3c")
plt.vlines(1334,0,4, lw = 15, color="grey", alpha=.2)
plt.text(1377.,.25,"$SiIV$",fontsize=12,color="#fd8d3c")
plt.vlines(1396,0,4, lw = 15, color="grey", alpha=.2)
#emission features
plt.text(1222,2.0,"$Lyα$",fontsize=12,color="#fd8d3c")
#axis
plt.xlabel("$\lambda_{rest} (\AA)$",fontsize=15)
plt.ylabel("Relative flux",fontsize=15)
plt.hlines(0,1000,1500, color="#fd8d3c",linestyle="--")
#misc
plt.legend(fontsize=12, loc=2)
plt.xticks(fontsize=12)
plt.yticks(fontsize=12)
plt.xlim(1050,1400)
plt.ylim(-.2,3.0)
#--------------------------------------------------------------------------
plt.subplot(2,1,2)
#data
plt.plot(wave_2, flux_2,label="$z_{high}$ composite ($z_{med}$ = 2.58)",color="black")
plt.plot(wave_2, err_2, label="1$\sigma$ (bootstrap generated)",color="grey")
plt.plot(wave_2[wave_2 < 1225], model_2[wave_2 < 1225], label="extrapolation",color="blue")
plt.plot(wave_2[wave_2 > 1222], model_2[wave_2 > 1222], label="fitting region",color="#f03b20")
plt.fill_between(wave_2, model_2, flux_2, where= (1070 < wave_2) & (wave_2 < 1170),
color="purple", alpha = .4, label="$Lyα$ forest")
#interstellar absorption features
plt.text(1244.,.25,"$SiII$",fontsize=12,color="#fd8d3c")
plt.vlines(1260,0,4, lw = 15, color="grey", alpha=.2, label="ISM absorption")
plt.text(1285.,.25,"$SiII$",fontsize=12,color="#fd8d3c")
plt.vlines(1303,0,4, lw = 20, color="grey", alpha=.2)
plt.text(1320.,.25,"$CII$",fontsize=12,color="#fd8d3c")
plt.vlines(1334,0,4, lw = 15, color="grey", alpha=.2)
plt.text(1377.,.25,"$SiIV$",fontsize=12,color="#fd8d3c")
plt.vlines(1396,0,4, lw = 15, color="grey", alpha=.2)
#emission features
plt.text(1222,2.0,"$Lyα$",fontsize=12,color="#fd8d3c")
#axis
plt.xlabel("$\lambda_{rest} (\AA)$",fontsize=15)
plt.ylabel("Relative flux",fontsize=15)
plt.hlines(0,1000,1500, color="#fd8d3c",linestyle="--")
#misc
plt.legend(fontsize=12, loc=2)
plt.xticks(fontsize=12)
plt.yticks(fontsize=12)
plt.xlim(1050,1400)
plt.ylim(-.2,3.8)
plt.savefig("/Users/jsmonzon/lbg_da/figures/composite_models.pdf")
plt.show()
# +
fix = np.argsort(lowgal[1].data[0]["wave"])
wave = lowgal[1].data[0]["wave"][fix]
flux = lowgal[1].data[0]["flux"][fix]
model = lowgal[1].data[0]["cont"][fix]
err = lowgal[1].data[0]["err"][fix]
# -
new_ISM = np.array([1175, 1238.82, 1316.2, 1335.71, 1393.76])
# +
plt.figure(figsize=(10,5))
plt.plot(wave, flux, label="CG 274", color="black")
plt.plot(wave[wave > 1226], model[wave > 1226], label="fitting region", color="#f03b20")
plt.plot(wave[wave < 1226], model[wave < 1226], label="extrapolation", color="blue")
plt.plot(wave, err, label="1$\sigma$", color="grey")
#------------------------------
for i, line in enumerate(new_ISM):
if i==0:
plt.vlines(line,0,3, color="grey", alpha=.2, lw=10, label="ISM absorption")
else:
plt.vlines(line,0,3, color="grey", alpha=.2, lw=10)
plt.vlines(1262,0,3, color="#f03b20", alpha=.2, lw=24, label="detector gap")
plt.vlines(1196,0,3, color="green", alpha=.2, lw=10, label="geocoronal line")
plt.vlines(1285,0,3, color="green", alpha=.2, lw=10)
#emission features
plt.text(1222,2.0,"$Lyα$",fontsize=12,color="#fd8d3c")
#axis
plt.xlabel("$\lambda_{rest} (\AA)$",fontsize=15)
plt.ylabel("Relative flux",fontsize=15)
plt.hlines(0,1000,1500, color="#fd8d3c",linestyle="--")
#misc
plt.legend(fontsize=12)
plt.xticks(fontsize=12)
plt.yticks(fontsize=12)
plt.xlim(1125,1400)
plt.ylim(-.2,2.5)
plt.savefig("/Users/jsmonzon/lbg_da/figures/CG274_sb99.pdf")
plt.show()
# -
| notebooks/composite_plots.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Loading Large Files
# +
# load libraries
import os
import sys
import gzip
import psutil
import numpy as np
import pandas as pd
# -
psutil.virtual_memory()
os.listdir()
# ### Reading Large Files in Python
ftrain = 'train.csv'
# +
os.path.getsize(ftrain)
# approx. 4GB file
# -
from itertools import islice
n = 6
# +
with open(ftrain) as myfile:
head = list(islice(myfile, n))
columns = head[0].replace('\n','').split(',')
columns
# +
rows = []
for i in range(1, n):
    rows.append(head[i].replace('\n','').split(','))  # head[i], not head[1], to keep each row
rows
# -
df = pd.DataFrame(rows)
df.columns = columns
df.head()
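# As a side note (a sketch assuming pandas is available), the manual header/row parsing above can also be done with `pd.read_csv` and its `nrows` parameter, which parses only the first few rows and stays cheap even on a multi-gigabyte file. The CSV string here is a toy stand-in for `train.csv`:

```python
import io
import pandas as pd

# In-memory stand-in for a large CSV file.
csv_text = "id,date_time,value\n1,2015-01-01,10\n2,2015-01-02,20\n3,2015-01-03,30\n"

# nrows limits parsing to the first rows, so the full file is never loaded.
preview = pd.read_csv(io.StringIO(csv_text), nrows=2)
```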
# ### Loading A Pandas DataFrame From GZ Files
fname = 'test.csv.gz'
# +
# read gz file
with gzip.open('test.csv.gz') as f:
test = pd.read_csv(f)
test.head()
# -
test.shape
# ### Pandas DataFrame Columns & Dtypes
cols = list(test.columns)
cols
for col in cols:
print(col, test[col].dtype)
# ### Loading A Large CSV in Chunks
os.path.getsize(fname)
cs = 100000
# +
chunks = pd.read_table(fname, chunksize=cs, sep=',', names=cols, index_col='id',
                       header=None, parse_dates=['date_time'])
chunks
# -
df = pd.DataFrame()
# %time df = pd.concat(chunks)
# #### The chunks iterator is now exhausted
df.shape
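# Concatenating every chunk rebuilds the full ~4GB table in memory. A memory-friendlier pattern (sketched here on toy data with hypothetical column names) is to reduce each chunk as it arrives and keep only the running aggregates:

```python
import io
import pandas as pd

csv_text = "id,value\n" + "\n".join(f"{i},{i % 3}" for i in range(10))

total = 0
n_rows = 0
# Each iteration yields a small DataFrame; only the aggregates survive.
for chunk in pd.read_csv(io.StringIO(csv_text), chunksize=4):
    total += chunk["value"].sum()
    n_rows += len(chunk)
```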
| projects/recommender/ds_python_loading_large_files_1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# ## Bin filtering and normalization
# __running time__: < 10 min
#
# Columns in the matrix with significantly fewer interaction counts than the rest are likely to be genomic regions with low mappability or high repeat content, like telomeres and centromeres. Keeping those columns affects the performance and accuracy of most normalization algorithms. It is therefore necessary to discard them prior to the normalization of the matrix.
#
# `tadbit normalize` performs a bin filtering before the normalization.
# ### Filter bins with low interaction counts
#
# #### Genome-wide filtering
# In `tadbit normalize` the default criterion to decide whether a column should be filtered out is the percentage of cis (intra-chromosomal) interactions over the total. Artifactual columns with a percentage of cis interactions below the estimated minimum or above the estimated maximum are discarded.
#
# 
# #### Threshold filtering
#
# In cases where the default methodology cannot be applied (for instance in sparse datasets or when we only have one chromosome) we can use thresholds with absolute values (`--min_count`) or percentages (`--perc_zeros`,`--min_perc`,`--max_perc`).
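# As a toy illustration of threshold filtering (a sketch, not TADbit's actual implementation), columns whose total interaction count falls below an absolute threshold, analogous to `--min_count`, can be masked out of both the rows and the columns of the matrix:

```python
import numpy as np

# Symmetric toy contact matrix; bin 2 has very low coverage.
W = np.array([
    [10.,  8.,  0.,  6.],
    [ 8., 12.,  1.,  7.],
    [ 0.,  1.,  0.,  0.],
    [ 6.,  7.,  0.,  9.],
])

min_count = 5
col_sums = W.sum(axis=0)
keep = col_sums >= min_count         # bins passing the absolute threshold
filtered = W[np.ix_(keep, keep)]     # drop the bad bin from rows and columns
```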
# ### Normalization algorithms
# *__Note__: if columns with many zeroes are present, the ICE normalization will take very long to converge, and those low-coverage columns will end up, after normalization, with a few cells holding very high interaction values.*
# ### Iterative Correction and Eigenvector decomposition (ICE)
# ICE normalization <a name="ref-1"/>[(Imakaev et al., 2012)](#cite-Imakaev2012a) assumes an equal experimental visibility of each bin and seeks iteratively for biases that equalize the sum of counts per bin in the matrix. At each iteration, a new matrix is generated by dividing each cell by the product of the sum of counts in its row times the sum of counts in its column. The process converges to a matrix in which all bins have identical sum.
#
# If $W$ is the raw matrix, $N$ is its size, and $i$($j$) the index of the columns(rows), the normalized matrix $M$ is iteratively computed as:
#
# $$M_{i,j} = \frac{W_{i,j}}{\sum_{n=0}^N{W_{i,n}} \times \sum_{n=0}^N{W_{n,j}}}$$
#
# This normalization usually has quite a strong effect, and visually the matrices look very smooth and regular.
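# A minimal numpy sketch of the iteration (a plain Sinkhorn-style balancing, not TADbit's code): repeatedly correcting by the normalized row/column sums drives every bin toward the same total.

```python
import numpy as np

def ice_balance(W, n_iter=100):
    """Iteratively equalize the bin sums of a symmetric contact matrix."""
    M = W.astype(float).copy()
    for _ in range(n_iter):
        s = M.sum(axis=1)
        s = s / s.mean()           # normalize so the overall scale is preserved
        M = M / np.outer(s, s)     # symmetric row/column correction
    return M

rng = np.random.default_rng(0)
W = rng.random((5, 5)) + 1.0
W = W + W.T                        # Hi-C matrices are symmetric
M = ice_balance(W)
row_sums = M.sum(axis=1)           # all (nearly) identical after convergence
```

# With real data, the accumulated corrections (the successive `s` vectors) would be stored as per-bin biases and applied to the raw counts on demand.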
# + language="bash"
#
# tadbit normalize -w ../results/PSC_rep1/ --normalization ICE \
# --resolution 10000 --min_count 100
# -
# The resulting biases are stored in a pickle file under the subfolder `04_normalization` and are only valid for matrices binned at 10 kb resolution. Together with the biases file, `tadbit normalize` produces a plot of the relation between the genomic distance separating two regions of the genome and the number of read pairs (interaction counts) with their ends mapped onto those two regions:
# + language="bash"
#
# ls ../results/PSC_rep1/04_normalization/
# -
from IPython.display import Image
Image(filename='../results/PSC_rep1/04_normalization/interactions_vs_genomic-coords.png_10000_9eb56390cc.png')
# + language="bash"
#
# tadbit bin -w ../results/PSC_rep1/ --only_plot \
# -c chr3:33950000-35450000 --resolution 10000 \
# --cmap Reds --format png \
# --norm raw norm
# -
# Raw matrix 10kb
Image(filename='../results/PSC_rep1/05_sub-matrices/raw_chr3:3395-3545_10kb_bab7be0e72.png')
# ICE normalized matrix 10kb
Image(filename='../results/PSC_rep1/05_sub-matrices/nrm_chr3:3395-3545_10kb_bab7be0e72.png')
# ### Vanilla coverage normalization
# The vanilla normalization <a name="ref-2"/>[(Rao et al., 2014)](#cite-Rao2014) is a variation of the ICE where a single iteration is performed.
# + language="bash"
#
# tadbit normalize -w ../results/PSC_rep1/ --normalization Vanilla \
# --resolution 10000 --min_count 100
# + language="bash"
#
# tadbit bin -w ../results/PSC_rep1/ --only_plot \
# -c chr3:33950000-35450000 --resolution 10000 \
# --cmap Reds --format png \
# --norm norm \
# --jobid 9
# -
# Vanilla normalized matrix 10kb
Image(filename='../results/PSC_rep1/05_sub-matrices/nrm_chr3:3395-3545_10kb_e89b5aa69f.png')
# ### Square root vanilla coverage (SQRT) normalization
# The SQRT vanilla normalization <a name="ref-2"/>[(Rao et al., 2014)](#cite-Rao2014) is a variation of the Vanilla coverage where each element in the matrix is divided by the square root of the product of sums of counts.
#
# $$M_{i,j} = \frac{W_{i,j}}{\sqrt{\sum_{n=0}^N{W_{i,n}} \times \sum_{n=0}^N{W_{n,j}}}}$$
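# A numpy sketch on a toy matrix (not TADbit's code) makes the difference between the two single-pass corrections explicit: Vanilla divides each cell by the product of the marginal sums, SQRT by its square root.

```python
import numpy as np

W = np.array([[4., 2.],
              [2., 2.]])
row = W.sum(axis=1)                        # [6., 4.]
col = W.sum(axis=0)                        # [6., 4.]

vanilla = W / np.outer(row, col)           # M_ij = W_ij / (S_i * S_j)
sqrt_vc = W / np.sqrt(np.outer(row, col))  # M_ij = W_ij / sqrt(S_i * S_j)
```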
# + language="bash"
#
# tadbit normalize -w ../results/PSC_rep1/ --normalization SQRT \
# --resolution 10000 --min_count 100
# + language="bash"
#
# tadbit bin -w ../results/PSC_rep1/ --only_plot \
# -c chr3:33950000-35450000 --resolution 10000 \
# --cmap Reds --format png \
# --norm norm \
# --jobid 11
# -
# SQRT normalized matrix 10kb
Image(filename='../results/PSC_rep1/05_sub-matrices/nrm_chr3:3395-3545_10kb_94045bfce3.png')
# ### OneD normalization
# OneD normalization <a name="ref-3"/>[(Vidal et al., 2018)](#cite-Vidal2018) is based on fitting a non-linear model between the total amount of contacts per bin and the known biases:
# - GC content
# - number of RE sites (the most important bias, the more cut sites, the more mapped reads)
# - read mappability (can be produced with genmap https://github.com/cpockrandt/genmap)
#
# As the estimation of each of these statistics is very important for the normalization, they are computed outside the normalization function, allowing the user to modify them.
# + language="bash"
#
# tadbit normalize -w ../results/PSC_rep1/ --normalization oneD \
# --resolution 10000 --min_count 100 \
# --fasta ../refGenome/mm39_chr3.fa \
# --renz MboI \
# --mappability ../refGenome/genmap/mappability/mm39_chr3.genmap.bedgraph
# + language="bash"
#
# tadbit bin -w ../results/PSC_rep1/ --only_plot \
# -c chr3:33950000-35450000 --resolution 10000 \
# --cmap Reds --format png \
# --norm norm \
# --jobid 13
# -
# oneD normalized matrix 10kb
Image(filename='../results/PSC_rep1/05_sub-matrices/nrm_chr3:3395-3545_10kb_0c74b80961.png')
# ### Other normalizations
# ICE and Vanilla normalizations are widely used; however, other more elaborate normalizations <a name="ref-4"/>[(Hu et al., 2012)](#cite-hu2012hicnorm) <a name="ref-5"/>[(Yaffe and Tanay, 2011)](#cite-Yaffe2011) can be computed outside TADbit and then loaded into TADbit as normalized matrices for further analysis.
# ### Best normalization
# Which normalization is best cannot be answered easily, because it depends on the type of data and the type of analysis.
#
# Hi-C experiments are usually conducted in different conditions and, for each condition, in several replicates. A good way to pick a normalization method may be to select the one that minimizes the differences between replicates while maximizing the differences between conditions (always in the context of the analysis to be performed).
# <!--bibtex
# @article{hu2012hicnorm,
# title={HiCNorm: removing biases in Hi-C data via Poisson regression},
# author={<NAME> <NAME> and <NAME> and <NAME> and <NAME>},
# journal={Bioinformatics},
# volume={28},
# number={23},
# pages={3131--3133},
# year={2012},
# publisher={Oxford Univ Press}
# }
# @article{Yaffe2011,
# abstract = {Hi-C experiments measure the probability of physical proximity between pairs of chromosomal loci on a genomic scale. We report on several systematic biases that substantially affect the Hi-C experimental procedure, including the distance between restriction sites, the GC content of trimmed ligation junctions and sequence uniqueness. To address these biases, we introduce an integrated probabilistic background model and develop algorithms to estimate its parameters and renormalize Hi-C data. Analysis of corrected human lymphoblast contact maps provides genome-wide evidence for interchromosomal aggregation of active chromatin marks, including DNase-hypersensitive sites and transcriptionally active foci. We observe extensive long-range (up to 400 kb) cis interactions at active promoters and derive asymmetric contact profiles next to transcription start sites and CTCF binding sites. Clusters of interacting chromosomal domains suggest physical separation of centromere-proximal and centromere-distal regions. These results provide a computational basis for the inference of chromosomal architectures from Hi-C experiments.},
# author = {<NAME> and <NAME>},
# doi = {10.1038/ng.947},
# file = {:home/fransua/.local/share/data/Mendeley Ltd./Mendeley Desktop/Downloaded/Yaffe, Tanay - 2011 - Probabilistic modeling of Hi-C contact maps eliminates systematic biases to characterize global chromosomal archit.pdf:pdf},
# issn = {1546-1718},
# journal = {Nature genetics},
# keywords = {Binding Sites,Chromosomes,Cluster Analysis,Epigenesis,Genetic,Human,Humans,Lymphocytes,Lymphocytes: ultrastructure,Models,Probability},
# mendeley-groups = {Research articles},
# month = {nov},
# number = {11},
# pages = {1059--65},
# pmid = {22001755},
# title = {{Probabilistic modeling of Hi-C contact maps eliminates systematic biases to characterize global chromosomal architecture.}},
# url = {http://www.ncbi.nlm.nih.gov/pubmed/22001755},
# volume = {43},
# year = {2011}
# }
# @article{Imakaev2012a,
# abstract = {Extracting biologically meaningful information from chromosomal interactions obtained with genome-wide chromosome conformation capture (3C) analyses requires the elimination of systematic biases. We present a computational pipeline that integrates a strategy to map sequencing reads with a data-driven method for iterative correction of biases, yielding genome-wide maps of relative contact probabilities. We validate this ICE (iterative correction and eigenvector decomposition) technique on published data obtained by the high-throughput 3C method Hi-C, and we demonstrate that eigenvector decomposition of the obtained maps provides insights into local chromatin states, global patterns of chromosomal interactions, and the conserved organization of human and mouse chromosomes.},
# author = {Imakaev, <NAME> Fudenberg, <NAME> McCord, <NAME> and Naumova, <NAME> Goloborodko, <NAME> Lajoie, <NAME> Dekker, <NAME>, <NAME>},
# doi = {10.1038/nmeth.2148},
# file = {:home/fransua/.local/share/data/Mendeley Ltd./Mendeley Desktop/Downloaded/Imakaev et al. - 2012 - Iterative correction of Hi-C data reveals hallmarks of chromosome organization.pdf:pdf},
# issn = {1548-7105},
# journal = {Nature methods},
# keywords = {Hi-C},
# mendeley-groups = {stats/Hi-C,Research articles},
# mendeley-tags = {Hi-C},
# month = {oct},
# number = {10},
# pages = {999--1003},
# pmid = {22941365},
# title = {{Iterative correction of Hi-C data reveals hallmarks of chromosome organization.}},
# url = {http://www.ncbi.nlm.nih.gov/pubmed/22941365},
# volume = {9},
# year = {2012}
# }
# @article{Rao2014,
# author = {<NAME> and Huntley, <NAME> and Durand, <NAME> and Stamenova, <NAME> and Bochkov, <NAME>. and {<NAME>} and Sanborn, <NAME>. and <NAME> and Omer, <NAME>. and Lander, <NAME>. and <NAME>},
# doi = {10.1016/j.cell.2014.11.021},
# file = {:home/fransua/.local/share/data/Mendeley Ltd./Mendeley Desktop/Downloaded/Rao et al. - 2014 - A 3D Map of the Human Genome at Kilobase Resolution Reveals Principles of Chromatin Looping.pdf:pdf},
# issn = {0092-8674},
# journal = {Cell},
# keywords = {Hi-C},
# mendeley-groups = {Research articles,projects/GEVO/CTCF},
# mendeley-tags = {Hi-C},
# number = {7},
# pages = {1665--1680},
# pmid = {25497547},
# publisher = {Elsevier Inc.},
# title = {{A 3D Map of the Human Genome at Kilobase Resolution Reveals Principles of Chromatin Looping}},
# url = {http://dx.doi.org/10.1016/j.cell.2014.11.021},
# volume = {159},
# year = {2014}
# }
#
# -->
# ### Questions
#
# - Could you explain the main hypothesis to motivate the ICE normalization?
# - Why do you think skipping the bin filtering step would significantly affect the normalization?
# ### References
#
# <a name="cite-Imakaev2012a"/><sup>[^](#ref-1) </sup><NAME> and Fudenberg, Geoffrey and McCord, <NAME> and Naumova, Natalia and Goloborodko, Anton and Lajoie, <NAME> and Dekker, Job and Mirny, <NAME>. 2012. _Iterative correction of Hi-C data reveals hallmarks of chromosome organization._. [URL](http://www.ncbi.nlm.nih.gov/pubmed/22941365)
#
# <a name="cite-Rao2014"/><sup>[^](#ref-2) </sup><NAME> and Huntley, <NAME>, <NAME> Bochkov, <NAME>. and <NAME> Sanborn, <NAME>. and <NAME> Omer, <NAME>. and <NAME>. and <NAME>. 2014. _A 3D Map of the Human Genome at Kilobase Resolution Reveals Principles of Chromatin Looping_. [URL](http://dx.doi.org/10.1016/j.cell.2014.11.021)
#
# <a name="cite-Vidal2018"/><sup>[^](#ref-3) </sup><NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>. 2018. _OneD: increasing reproducibility of Hi-C samples with abnormal karyotypes_. [URL](https://doi.org/10.1093/nar/gky064)
#
# <a name="cite-hu2012hicnorm"/><sup>[^](#ref-4) </sup><NAME> and Deng, <NAME> <NAME> <NAME>. 2012. _HiCNorm: removing biases in Hi-C data via Poisson regression_.
#
# <a name="cite-Yaffe2011"/><sup>[^](#ref-5) </sup><NAME> and <NAME>. 2011. _Probabilistic modeling of Hi-C contact maps eliminates systematic biases to characterize global chromosomal architecture._. [URL](http://www.ncbi.nlm.nih.gov/pubmed/22001755)
#
#
| assets/material/Notebooks/Day2/07-Bin_filtering_and_normalization.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
import sys
import numba
import swifter
import timeit
import numpy as np
import pandas as pd
# import seaborn as sns
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings('ignore')
pd.set_option('display.max_colwidth', -1)
pd.set_option('display.float_format', lambda x: '%.3f' % x)
# sns.set_style('whitegrid')
plt.style.use('seaborn-whitegrid')
# %matplotlib inline
__author__ = '<NAME>'
__version__ = 'Python 2'
'''
Analysis originally performed in Python 2 (deprecated)
Seaborn, Statsmodel, and * imports broken in Python 3
'''
# -
borrower_1 = pd.read_csv("../Data/borrower_listing_fe/borrower_listing_fe_25k.csv")
borrower_2 = pd.read_csv("../Data/borrower_listing_fe/borrower_listing_fe_50k.csv")
borrower_3 = pd.read_csv("../Data/borrower_listing_fe/borrower_listing_fe_75k.csv")
borrower_4 = pd.read_csv("../Data/borrower_listing_fe/borrower_listing_fe_100k.csv")
borrower_5 = pd.read_csv("../Data/borrower_listing_fe/borrower_listing_fe_128k.csv")
borrower_frames = [borrower_1, borrower_2, borrower_3, borrower_4, borrower_5]
data = pd.concat(borrower_frames)
data['Listing_Key'] = data.ListingKey
data = data[['Listing_Key', 'ListingKey', 'BorrowerCompletedListings', 'BorrowerRepaidListings', 'BorrowerTotalListings']]
data["BorrowerListingSuccessRate"] = data['BorrowerCompletedListings'] / data['BorrowerTotalListings']
data["BorrowerRepaymentSuccessRate"] = data['BorrowerRepaidListings'] / data['BorrowerCompletedListings']
data.head()
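# Note that dividing by `BorrowerCompletedListings` can produce `inf` (non-zero numerator over zero), and the `fillna(0)` applied later only replaces `NaN`, not `inf`. A sketch of a safer version of the ratio feature, on toy data with the same column names:

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({
    "BorrowerCompletedListings": [2, 0, 0],
    "BorrowerRepaidListings": [1, 3, 0],
})

rate = toy["BorrowerRepaidListings"] / toy["BorrowerCompletedListings"]
# 3/0 -> inf and 0/0 -> NaN; map both to 0 before any modelling step
rate = rate.replace([np.inf, -np.inf], np.nan).fillna(0)
```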
del borrower_1
del borrower_2
del borrower_3
del borrower_4
del borrower_5
del borrower_frames
# ## Mean
f_avg = {'ListingKey': ['max'],
'BorrowerCompletedListings': ['mean'],
'BorrowerRepaidListings': ['mean'],
'BorrowerTotalListings': ['mean'],
'BorrowerListingSuccessRate': ['mean'],
'BorrowerRepaymentSuccessRate': ['mean']
}
borrower_mean_attr = pd.DataFrame(data.groupby(["Listing_Key"]).agg(f_avg).as_matrix())
borrower_mean_attr = borrower_mean_attr.rename(index=str, columns={0: "BorrowerAvgRepaidListings",
1: "BorrowerAvgTotalListings",
2: "BorrowerAvgCompletedListings",
3: "BorrowerAvgListingSuccessRate",
4: "BorrowerAvgRepaymentSuccessRate",
5: "ListingKey"})
borrower_mean_attr.head()
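# The positional rename above relies on the column order `agg` happens to return, which is fragile (and `.as_matrix()` is removed in modern pandas in favour of `.to_numpy()`). A sketch of the same computation with named aggregation (pandas >= 0.25; toy data, column names from this notebook) keeps the mapping explicit:

```python
import pandas as pd

toy = pd.DataFrame({
    "Listing_Key": ["a", "a", "b"],
    "BorrowerTotalListings": [4, 6, 2],
    "BorrowerCompletedListings": [2, 4, 1],
})

# Output column names are stated explicitly instead of relying on position.
borrower_avg = toy.groupby("Listing_Key").agg(
    BorrowerAvgTotalListings=("BorrowerTotalListings", "mean"),
    BorrowerAvgCompletedListings=("BorrowerCompletedListings", "mean"),
).reset_index()
```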
# ## Median
# +
# f_median = {'ListingKey': ['max'],
# 'BorrowerCompletedListings': ['median'],
# 'BorrowerRepaidListings': ['median'],
# 'BorrowerTotalListings': ['median'],
# 'BorrowerListingSuccessRate': ['median'],
# 'BorrowerRepaymentSuccessRate': ['median']
# }
# borrower_median_attr = pd.DataFrame(data.groupby(["Listing_Key"]).agg(f_median).as_matrix())
# borrower_median_attr = borrower_median_attr.rename(index=str, columns={0: "BorrowerMedianRepaidListings",
# 1: "BorrowerMedianTotalListings",
# 2: "BorrowerMedianCompletedListings",
# 3: "BorrowerMedianListingSuccessRate",
# 4: "BorrowerMedianRepaymentSuccessRate",
# 5: "ListingKey"})
# borrower_median_attr.head()
# +
# np.mean(data[data["ListingKey"]=="00003383856420083050622"].groupby(["Listing_Key"]).agg(f_avg))
# -
# ## Standard Deviation
# +
# f_std = {'ListingKey': ['max'],
# 'BorrowerCompletedListings': ['std'],
# 'BorrowerRepaidListings': ['std'],
# 'BorrowerTotalListings': ['std'],
# 'BorrowerListingSuccessRate': ['std'],
# 'BorrowerRepaymentSuccessRate': ['std']
# }
# borrower_std_attr = pd.DataFrame(data.groupby(["Listing_Key"]).agg(f_std).as_matrix())
# borrower_std_attr = borrower_std_attr.rename(index=str, columns={0: "BorrowerStdRepaidListings",
# 1: "BorrowerStdTotalListings",
# 2: "BorrowerStdCompletedListings",
# 3: "BorrowerStdListingSuccessRate",
# 4: "BorrowerStdRepaymentSuccessRate",
# 5: "ListingKey"})
# borrower_std_attr.head()
# -
# ## Merge Listing Feature DataFrames
# Prepare final data
final_borrower_data = borrower_mean_attr
# final_borrower_data = final_borrower_data.merge(borrower_median_attr, on="ListingKey")
# final_borrower_data = final_borrower_data.merge(borrower_std_attr, on="ListingKey")
final_borrower_data["BorrowerExperience"] = (final_borrower_data.BorrowerAvgListingSuccessRate*1) + (final_borrower_data.BorrowerAvgRepaymentSuccessRate*2)
final_borrower_data = final_borrower_data[["ListingKey", "BorrowerExperience"]]
final_borrower_data.head()
# ## Import Class Variable
listing_data = pd.read_csv("../Data/ProjectLevelData.txt", sep="|")
listing_data = listing_data.loc[(listing_data['RepaidOrNot']==True) | (listing_data['RepaidOrNot']==False)]
listing_data = listing_data[["ListingKey", "RepaidOrNot"]]
listing_data.head()
# ## Merge Features and Class Variable
# +
# del data
# -
final_data = final_borrower_data.merge(listing_data, on="ListingKey")
final_data['RepaidOrNot'] = final_data['RepaidOrNot'].astype(int)
final_data = final_data.fillna(0)
final_data.head()
# ## Check Feature Correlation
import seaborn as sns  # re-import needed for the heatmap: the top-level seaborn import is commented out
corr = final_data.swifter.apply(pd.to_numeric, errors='coerce').corr(method='pearson')
mask = np.zeros_like(corr, dtype=bool)
mask[np.triu_indices_from(mask)] = True
plt.figure(figsize=(12,12))
sns.heatmap(corr,
xticklabels=corr.columns,
yticklabels=corr.columns,
cmap=sns.color_palette("coolwarm_r"),
mask = mask,
linewidths=.5,
vmin=-1,
vmax=1,
annot=True)
plt.title("Variable Correlation Heatmap")
plt.show()
# ## Save Data to CSV
## Save data to csv file
# final_lender_data.to_csv("../Data/lender_listing_attr_filtered.csv", index=False)
final_borrower_data = final_borrower_data.fillna(0)
final_borrower_data.to_csv("../Data/borrower_listing_attr.csv", index=False)
final_borrower_data.head()
| WebSci19-P2P-Lending/Notebooks/preprocessing/prosper_borrower_listing_feature_eng.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5" papermill={"duration": 1.812574, "end_time": "2021-12-27T21:55:20.189377", "exception": false, "start_time": "2021-12-27T21:55:18.376803", "status": "completed"} tags=[]
import torch, torchvision
print(torch.__version__, torch.cuda.is_available())
# + papermill={"duration": 175.050193, "end_time": "2021-12-27T21:58:15.251953", "exception": false, "start_time": "2021-12-27T21:55:20.201760", "status": "completed"} tags=[]
# !python -m pip install -q 'git+https://github.com/facebookresearch/detectron2.git'
# + papermill={"duration": 2.986391, "end_time": "2021-12-27T21:58:18.249324", "exception": false, "start_time": "2021-12-27T21:58:15.262933", "status": "completed"} tags=[]
import os, json, cv2, random
import copy
import time
import itertools
import warnings
from datetime import datetime
from pathlib import Path
from typing import Optional
from glob import glob

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import skimage.io as io
from tqdm import tqdm  # progress bar
from pycocotools.coco import COCO

import torch
import albumentations as A
from albumentations.pytorch.transforms import ToTensorV2
import numba
from numba import jit

warnings.filterwarnings('ignore')  # ignore "future" warnings and DataFrame-slicing warnings

# detectron2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.structures import BoxMode
from detectron2.engine import DefaultPredictor, DefaultTrainer, launch
from detectron2.evaluation import COCOEvaluator, inference_on_dataset
from detectron2.utils.visualizer import ColorMode, Visualizer
from detectron2.utils.logger import setup_logger
from detectron2.data import DatasetCatalog, MetadataCatalog, build_detection_test_loader, build_detection_train_loader
from detectron2.data import detection_utils as utils
import detectron2.data.transforms as T
setup_logger()
# + papermill={"duration": 0.016963, "end_time": "2021-12-27T21:58:18.277091", "exception": false, "start_time": "2021-12-27T21:58:18.260128", "status": "completed"} tags=[]
# # !pip install orjson
# + papermill={"duration": 0.016536, "end_time": "2021-12-27T21:58:18.304471", "exception": false, "start_time": "2021-12-27T21:58:18.287935", "status": "completed"} tags=[]
# import json
# with open("/kaggle/input/crossvalidationfold5/coco_cell_train_fold3.json") as f:
# val_data = json.loads(f.read())
# all_val_img = []
# for file_block in range(len(val_data["images"])):
# all_val_img.append(val_data["images"][file_block]["file_name"].replace("..","/kaggle"))
# + papermill={"duration": 0.016333, "end_time": "2021-12-27T21:58:18.331594", "exception": false, "start_time": "2021-12-27T21:58:18.315261", "status": "completed"} tags=[]
# len(all_val_img)
# + papermill={"duration": 4.543824, "end_time": "2021-12-27T21:58:22.886254", "exception": false, "start_time": "2021-12-27T21:58:18.342430", "status": "completed"} tags=[]
Data_Register_training = "sartorius_Cell_train"
Data_Register_valid = "sartorius_Cell_valid"

from detectron2.data.datasets import register_coco_instances

dataDir = Path('/kaggle/input/sartorius-cell-instance-segmentation')

register_coco_instances(Data_Register_training, {}, '/kaggle/input/crossvalidationfold5/coco_cell_train_fold3.json', dataDir)
register_coco_instances(Data_Register_valid, {}, '/kaggle/input/crossvalidationfold5/coco_cell_valid_fold3.json', dataDir)

metadata = MetadataCatalog.get(Data_Register_training)
dataset_train = DatasetCatalog.get(Data_Register_training)
dataset_valid = DatasetCatalog.get(Data_Register_valid)
# + papermill={"duration": 0.018306, "end_time": "2021-12-27T21:58:22.916769", "exception": false, "start_time": "2021-12-27T21:58:22.898463", "status": "completed"} tags=[]
# dataset_valid[2]
# + papermill={"duration": 0.997545, "end_time": "2021-12-27T21:58:23.925935", "exception": false, "start_time": "2021-12-27T21:58:22.928390", "status": "completed"} tags=[]
fig, ax = plt.subplots(figsize =(18,11))
d=dataset_valid[2]
img = cv2.imread(d["file_name"])
print(img.shape)
v = Visualizer(img[:, :, ::-1],
metadata=metadata,
scale=1,
instance_mode=ColorMode.IMAGE_BW # remove the colors of unsegmented pixels. This option is only available for segmentation models
)
out = v.draw_dataset_dict(d)
ax.grid(False)
ax.axis('off')
ax.imshow(out.get_image()[:, :, ::-1])
# + papermill={"duration": 0.040339, "end_time": "2021-12-27T21:58:23.993057", "exception": false, "start_time": "2021-12-27T21:58:23.952718", "status": "completed"} tags=[]
def custom_mapper(dataset_dict):
dataset_dict = copy.deepcopy(dataset_dict)
image = utils.read_image(dataset_dict["file_name"], format="BGR")
transform_list = [
T.RandomBrightness(0.9, 1.1),
T.RandomContrast(0.9, 1.1),
T.RandomSaturation(0.9, 1.1),
T.RandomLighting(0.9),
T.RandomFlip(prob=0.5, horizontal=False, vertical=True),
T.RandomFlip(prob=0.5, horizontal=True, vertical=False),
]
image, transforms = T.apply_transform_gens(transform_list, image)
dataset_dict["image"] = torch.as_tensor(image.transpose(2, 0, 1).astype("float32"))
annos = [
utils.transform_instance_annotations(obj, transforms, image.shape[:2])
for obj in dataset_dict.pop("annotations")
if obj.get("iscrowd", 0) == 0
]
instances = utils.annotations_to_instances(annos, image.shape[:2])
dataset_dict["instances"] = utils.filter_empty_instances(instances)
return dataset_dict
class AugTrainer(DefaultTrainer):
    @classmethod
    def build_train_loader(cls, cfg):
        return build_detection_train_loader(cfg, mapper=custom_mapper)
    @classmethod
    def build_evaluator(cls, cfg, dataset_name, output_folder=None):
        return MAPIOUEvaluator(dataset_name)
# + papermill={"duration": 0.043262, "end_time": "2021-12-27T21:58:24.063803", "exception": false, "start_time": "2021-12-27T21:58:24.020541", "status": "completed"} tags=[]
# Taken from https://www.kaggle.com/theoviel/competition-metric-map-iou
from detectron2.evaluation.evaluator import DatasetEvaluator
import pycocotools.mask as mask_util
def precision_at(threshold, iou):
matches = iou > threshold
true_positives = np.sum(matches, axis=1) == 1 # Correct objects
false_positives = np.sum(matches, axis=0) == 0 # Missed objects
false_negatives = np.sum(matches, axis=1) == 0 # Extra objects
return np.sum(true_positives), np.sum(false_positives), np.sum(false_negatives)
def score(pred, targ):
pred_masks = pred['instances'].pred_masks.cpu().numpy()
enc_preds = [mask_util.encode(np.asarray(p, order='F')) for p in pred_masks]
enc_targs = list(map(lambda x:x['segmentation'], targ))
ious = mask_util.iou(enc_preds, enc_targs, [0]*len(enc_targs))
prec = []
for t in np.arange(0.5, 1.0, 0.05):
tp, fp, fn = precision_at(t, ious)
p = tp / (tp + fp + fn)
prec.append(p)
return np.mean(prec)
class MAPIOUEvaluator(DatasetEvaluator):
def __init__(self, dataset_name):
dataset_dicts = DatasetCatalog.get(dataset_name)
self.annotations_cache = {item['image_id']:item['annotations'] for item in dataset_dicts}
def reset(self):
self.scores = []
def process(self, inputs, outputs):
for inp, out in zip(inputs, outputs):
if len(out['instances']) == 0:
self.scores.append(0)
else:
targ = self.annotations_cache[inp['image_id']]
self.scores.append(score(out, targ))
def evaluate(self):
return {"MaP IoU": np.mean(self.scores)}
class Trainer(DefaultTrainer):
@classmethod
def build_evaluator(cls, cfg, dataset_name, output_folder=None):
return MAPIOUEvaluator(dataset_name)
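# To see what `precision_at` computes, here is a tiny standalone sketch (NumPy only, no Detectron2) that averages precision over the competition's IoU thresholds 0.5 to 0.95. The 2x2 IoU matrix below is made up purely for illustration.

```python
import numpy as np

def precision_at(threshold, iou):
    matches = iou > threshold
    tp = np.sum(np.sum(matches, axis=1) == 1)  # predictions matching exactly one object
    fp = np.sum(np.sum(matches, axis=0) == 0)  # objects matched by no prediction
    fn = np.sum(np.sum(matches, axis=1) == 0)  # predictions matching no object
    return tp, fp, fn

# Hypothetical IoU matrix: 2 predictions (rows) x 2 ground-truth objects (columns).
iou = np.array([[0.60, 0.10],
                [0.20, 0.70]])

precisions = []
for t in np.arange(0.5, 1.0, 0.05):
    tp, fp, fn = precision_at(t, iou)
    precisions.append(tp / (tp + fp + fn))
print(np.mean(precisions))  # both pairs count only while the threshold stays below their IoU
```

Note how the score decays as the threshold climbs past each pair's IoU, which is exactly why tighter masks raise the MaP IoU.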
# + papermill={"duration": 0.034547, "end_time": "2021-12-27T21:58:24.125196", "exception": false, "start_time": "2021-12-27T21:58:24.090649", "status": "completed"} tags=[]
os.makedirs("detectron2cell/output", exist_ok=True)
# + papermill={"duration": 24.629432, "end_time": "2021-12-27T21:58:48.781566", "exception": true, "start_time": "2021-12-27T21:58:24.152134", "status": "failed"} tags=[]
cfg = get_cfg()
config_name = "COCO-InstanceSegmentation/mask_rcnn_R_101_FPN_3x.yaml"
cfg.merge_from_file(model_zoo.get_config_file(config_name))
cfg.DATASETS.TRAIN = (Data_Resister_training,)
cfg.DATASETS.TEST = (Data_Resister_valid,)
# cfg.MODEL.WEIGHTS ="/kaggle/input/detectron2cell/output/model_final.pth"
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(config_name)
cfg.DATALOADER.NUM_WORKERS = 2
cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 128 # 64 is slower but more accurate (128 faster but less accurate)
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 3
cfg.SOLVER.IMS_PER_BATCH = 2 #(2 is per defaults)
cfg.INPUT.MASK_FORMAT='bitmask'
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5
cfg.SOLVER.BASE_LR = 0.0005 #(quite high base learning rate but should drop)
#cfg.SOLVER.MOMENTUM = 0.9
#cfg.SOLVER.WEIGHT_DECAY = 0.0005
#cfg.SOLVER.GAMMA = 0.1
cfg.SOLVER.WARMUP_ITERS = 10 #How many iterations to go from 0 to the base LR
cfg.SOLVER.MAX_ITER = 3000 #Maximum number of iterations
cfg.SOLVER.STEPS = (500, 1000) #Iterations at which to decay the LR
cfg.TEST.EVAL_PERIOD = 250
cfg.SOLVER.CHECKPOINT_PERIOD=250
os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)
trainer = AugTrainer(cfg) # with data augmentation
# trainer = Trainer(cfg) # without data augmentation
trainer.resume_or_load(resume=False)
trainer.train()
# + papermill={"duration": null, "end_time": null, "exception": null, "start_time": null, "status": "pending"} tags=[]
# + papermill={"duration": null, "end_time": null, "exception": null, "start_time": null, "status": "pending"} tags=[]
| train/somu-detectron2-mrcnn-train.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Graphs
# - A data structure made of vertices (nodes) and edges
# - Degree: how many edges are connected to a vertex?
# - Cycle: a path that comes back to its starting vertex
#
# ### Why are graphs important?
# - Many things in the real world can be modeled as graphs, so there are many graph-related problems
# - There is a large body of mathematical results about graphs; graph theory is a field of its own
# - They are hard: the theory is hard, and so is the implementation
# ### Graph traversal
# - Think of a graph as a bag
# - What values are stored in this data structure?
# - Why walk around it?
#
# #### So by what rule do we traverse? -> graph traversal algorithms
# - Depth-first search (DFS): traverses the graph using a **stack**
# - Breadth-first search (BFS): traverses the graph using a **queue**
# ## Depth-First Search (DFS)
# - Uses a stack!
# - Visit the current vertex first, then move on to its neighbors
#
# ## Breadth-First Search (BFS)
# - Uses a queue!
# - Mark all adjacent nodes first, then expand outward
# 1. Pick a node
# 2. Mark its adjacent nodes
#
# - Look at who my neighbors are, then jump!
# Example 1)
# - DFS : 1-2-3-5-6-7-8-9-4-10-11
# - BFS : 1-2-4-3-11-9-5-6-8-7-10
#
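# The stack-vs-queue mechanics behind those two orders can be sketched with a plain adjacency list. The small graph below is made up for illustration (it is not the example graph above).

```python
from collections import deque

# A made-up adjacency list, just to contrast the two traversals.
graph = {1: [2, 3], 2: [1, 4], 3: [1, 4], 4: [2, 3]}

def dfs(start):
    visited, order, stack = set(), [], [start]
    while stack:                      # stack -> depth-first
        v = stack.pop()
        if v in visited:
            continue
        visited.add(v)
        order.append(v)
        for w in reversed(graph[v]):  # reversed so smaller neighbors pop first
            if w not in visited:
                stack.append(w)
    return order

def bfs(start):
    visited, order, q = {start}, [], deque([start])
    while q:                          # queue -> breadth-first
        v = q.popleft()
        order.append(v)
        for w in graph[v]:
            if w not in visited:
                visited.add(w)
                q.append(w)
    return order

print(dfs(1), bfs(1))  # [1, 2, 4, 3] [1, 2, 3, 4]
```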
# ## Why graph problems are hard
# - Recognizing that a problem is a graph problem is hard; the statement may not mention graphs at all
# - Even once recognized, devising the solution is hard, much like solving a math problem
# - Coding it up is hard
# Problem 2) Kindergarten picnic
#
# - Print how many cliques there are in the kindergarten and how many students belong to each clique
# - Students adjacent up/down/left/right are on the same side!
#
# - From the problem, first build a graph (create the vertices and edges)
# - Then pick one student and find their whole side (= traverse the graph)
# +
import queue
def isOkay(pMap, y, x):
    '''
    Return True if (y, x) is a valid coordinate in pMap, False otherwise
    '''
n = len(pMap)
if 0 <= y and y < n and 0 <= x and x < n :
return True
else:
return False
def findStudents(pMap, y, x, visited):
    '''
    Returns the number of students on the same side as the student at (y, x)
    in pMap, and marks them in visited.
    The DFS code will be shown, so let's write this one with BFS:
    1. Put the starting point in the queue and start the BFS
    2. Take one element out of the queue; this is the current position
    3. Put every unvisited adjacent vertex into the queue and mark it in visited
    4. Go back to step 2
    '''
myQueue = queue.Queue()
myQueue.put((y,x))
visited[y][x] = True
numStudents = 0
while not myQueue.empty():
        # myQueue.empty() => returns True when the queue is empty
current = myQueue.get()
numStudents += 1
y = current[0]
x = current[1]
if isOkay(pMap, y-1, x) and pMap[y-1][x] == 1 and visited[y-1][x] == False:
myQueue.put((y-1, x))
visited[y-1][x] = True
if isOkay(pMap, y, x-1) and pMap[y][x-1] == 1 and visited[y][x-1] == False:
myQueue.put((y, x-1))
visited[y][x-1] = True
if isOkay(pMap, y, x+1) and pMap[y][x+1] == 1 and visited[y][x+1] == False:
myQueue.put((y, x+1))
visited[y][x+1] = True
if isOkay(pMap, y+1, x) and pMap[y+1][x] == 1 and visited[y+1][x] == False:
myQueue.put((y+1, x))
visited[y+1][x] = True
return numStudents
def picnic(pMap):
    '''
    Write a function that returns the number of cliques in the kindergarten.
    '''
n = len(pMap)
visited = [[ False for i in range(n)] for j in range(n)]
result = []
for i in range(n) :
for j in range(n):
if pMap[i][j] == 1 and visited[i][j] == False :
                # now we just traverse the graph!
                numStudents = findStudents(pMap, i, j, visited) # returns the student count
result.append(numStudents)
result.sort()
return (len(result), result)
def read_input():
size = int(input())
returnMap = []
for i in range(size):
line = input()
__line = []
for j in range(len(line)) :
__line.append(int(line[j]))
returnMap.append(__line)
return returnMap
def main():
pMap = read_input()
print(picnic(pMap))
if __name__ == "__main__":
main()
# +
import queue
def isOkay(pMap, y, x):
    '''
    Return True if (y, x) is a valid coordinate in pMap, False otherwise
    '''
n = len(pMap)
if 0 <= y and y < n and 0 <= x and x < n :
return True
else:
return False
def findStudents(pMap, y, x, visited):
    '''
    Returns the number of students on the same side as the student at (y, x)
    in pMap, and marks them in visited.
    The DFS code will be shown, so let's write this one with BFS:
    1. Put the starting point in the queue and start the BFS
    2. Take one element out of the queue; this is the current position
    3. Put every unvisited adjacent vertex into the queue and mark it in visited
    4. Go back to step 2
    '''
myQueue = queue.Queue()
myQueue.put((y,x))
visited[y][x] = True
numStudents = 0
while not myQueue.empty():
        # myQueue.empty() => returns True when the queue is empty
current = myQueue.get()
numStudents += 1
y = current[0]
x = current[1]
        dy = [-1, 0, 0, 1]
        dx = [0, -1, 1, 0]
        for k in range(4):
            yy = y + dy[k]
            xx = x + dx[k]
            if isOkay(pMap, yy, xx) and pMap[yy][xx] == 1 and visited[yy][xx] == False:
                myQueue.put((yy, xx))
visited[yy][xx] = True
return numStudents
def picnic(pMap):
    '''
    Write a function that returns the number of cliques in the kindergarten.
    '''
n = len(pMap)
visited = [[ False for i in range(n)] for j in range(n)]
result = []
for i in range(n) :
for j in range(n):
if pMap[i][j] == 1 and visited[i][j] == False :
                # now we just traverse the graph!
                numStudents = findStudents(pMap, i, j, visited) # returns the student count
result.append(numStudents)
result.sort()
return (len(result), result)
def read_input():
size = int(input())
returnMap = []
for i in range(size):
line = input()
__line = []
for j in range(len(line)) :
__line.append(int(line[j]))
returnMap.append(__line)
return returnMap
def main():
pMap = read_input()
print(picnic(pMap))
if __name__ == "__main__":
main()
# -
# ### To claim you have learned graph algorithms (know at least these!)
# - Graph traversal (BFS, DFS)
# - Shortest-path algorithms (Dijkstra, Floyd)
# - Minimum spanning trees (Prim, Kruskal)
#
# ### Advanced
# - Strongly Connected Components
# - Maximum Flow (Ford-Fulkerson Algorithm)
# - Max-Flow Min-Cut Theorem
#
# ### Expert
# - Minimum Independent Set
# - Maximum Vertex Cover
# - NP-Completeness
#
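# As a taste of the shortest-path algorithms in the list above, here is a minimal Dijkstra sketch using a binary heap. The weighted graph below is made up purely for illustration.

```python
import heapq

def dijkstra(graph, start):
    '''graph: {node: [(neighbor, weight), ...]} -> {node: shortest distance}'''
    dist = {start: 0}
    pq = [(0, start)]                  # min-heap of (distance, node)
    while pq:
        d, v = heapq.heappop(pq)
        if d > dist.get(v, float("inf")):
            continue                   # stale entry, already improved
        for w, cost in graph[v]:
            nd = d + cost
            if nd < dist.get(w, float("inf")):
                dist[w] = nd
                heapq.heappush(pq, (nd, w))
    return dist

g = {1: [(2, 4), (3, 1)], 2: [(4, 1)], 3: [(2, 2), (4, 5)], 4: []}
print(dijkstra(g, 1))  # {1: 0, 2: 3, 3: 1, 4: 4}
```

The stale-entry check replaces a decrease-key operation, which Python's `heapq` does not provide.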
| Elice-algorithm/7 week(1) - Note (graph).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Instructions:
# Reeborg was exploring a dark maze and the battery in its flashlight ran out.
#
# Write a program using an if/elif/else statement so Reeborg can find the exit. The secret is to have Reeborg follow along the right edge of the maze, turning right if it can, going straight ahead if it can’t turn right, or turning left as a last resort.
#
# What you need to know:
# - The functions move() and turn_left().
# - Either the test front_is_clear() or wall_in_front(), right_is_clear() or wall_on_right(), and at_goal().
# - How to use a while loop and if/elif/else statements.
# - It might be useful to know how to use the negation of a test (not in Python).
#
# LINK FOR THE GAME: [Press Here](https://reeborg.ca/reeborg.html?lang=en&mode=python&menu=worlds%2Fmenus%2Freeborg_intro_en.json&name=Maze&url=worlds%2Ftutorial_en%2Fmaze1.json)
# +
#Helper that turns right (three left turns) to save a few lines of code:
def turn_right():
turn_left()
turn_left()
turn_left()
#Instructions for what the robot should do, following the wall on its right:
while not at_goal():
    if right_is_clear():      # turn right whenever possible
        turn_right()
        move()
    elif front_is_clear():    # otherwise go straight if the front is open
        move()
    else:                     # turn left as a last resort
        turn_left()
| 100 Days of Code The Complete Python Pro Bootcamp for 2022/Day 6 - Escaping the Maze.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %matplotlib inline
import matplotlib.pyplot as plt
# Some nice default configuration for plots
plt.rcParams['figure.figsize'] = 10, 7.5
plt.rcParams['axes.grid'] = True
plt.gray()
import numpy as np
import pandas as pd
# -
# # 5. Measuring Performance in Regression Models
# For models predicting a numeric outcome, some measure of accuracy is typically used to evaluate the effectiveness of the model. However, there are different ways to measure accuracy, each with its own nuance. To understand the strengths and weaknesses of a particular model, relying solely on a single metric is problematic. Visualizations of the model fit, particularly residual plots, are critical to understand whether the model is fit for purpose.
# ## 5.1 Quantitative Measures of Performance
# When the outcome is a number, the most common method for characterizing a model's predictive capabilities is to use the root mean squared error (RMSE). This metric is a function of the model residuals, which are the observed values minus the model predictions. The value is usually interpreted as either how far (on average) the residuals are from zero or as the average distance between the observed values and the model predictions.
#
# Another common metric is the coefficient of determination, i.e., $R^2$. This value can be interpreted as the proportion of the information in the data that is explained by the model, i.e., the proportion of variation is explained by the model. While it is an easily interpretable statistic, the practitioner must remember that $R^2$ is a measure of correlation, not accuracy. Also, it is dependent on the variation in the outcome. Practically, this dependence on the outcome variance can also have a drastic effect on how the model is viewed.
#
# In some cases, the goal of the model is to simply rank new samples, where the *rank correlation* between the observed and predicted values might be a more appropriate metric. The rank correlation takes the ranks of the observed outcome values and evaluates how close these are to ranks of the model predictions. To calculate this value, the ranks of the observed and predicted outcomes are obtained and the correlation coefficient between these ranks is calculated, which is known as Spearman's rank correlation.
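# The three measures just described can be computed directly from a set of predictions. A minimal NumPy-only sketch on simulated data (the variable names and data are ours, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
y_true = rng.uniform(0, 10, 200)
y_pred = y_true + rng.normal(0, 1, 200)   # imperfect but correlated predictions

# RMSE: typical distance between observed values and predictions
residuals = y_true - y_pred
rmse = np.sqrt(np.mean(residuals ** 2))

# R^2: proportion of outcome variation explained by the model
ss_res = np.sum(residuals ** 2)
ss_tot = np.sum((y_true - y_true.mean()) ** 2)
r2 = 1 - ss_res / ss_tot

def ranks(a):
    # rank of each element (0 = smallest); assumes no ties
    r = np.empty(len(a))
    r[np.argsort(a)] = np.arange(len(a))
    return r

# Spearman's rank correlation: Pearson correlation of the ranks
spearman = np.corrcoef(ranks(y_true), ranks(y_pred))[0, 1]
print(rmse, r2, spearman)
```

Because the outcome here has a large variance relative to the noise, $R^2$ is high even though each prediction is off by about one unit on average, illustrating the dependence on outcome variance noted above.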
# ## 5.2 The Variance-Bias Trade-off
# The MSE can be decomposed into more specific pieces. Formally, the MSE of a model is $$\text{MSE} = {1\over n} \sum_{i=1}^n (y_i - \hat{y}_i)^2,$$ where $y_i$ is the outcome and $\hat{y}_i$ is the model prediction of that sample's outcome. If we assume that the data points are statistically independent and that the residuals have a theoretical mean of zero and a constant variance of $\sigma^2$, then $$E[\text{MSE}] = \sigma^2 + (\text{Model Bias})^2 + \text{Model Variance}.$$ The first part $(\sigma^2)$ is usually called "irreducible noise" and cannot be eliminated by modeling. The second term is the squared bias of the model. This reflects how close the functional form of the model can get to the true relationship between the predictors and the outcome. The last term is the model variance.
# An extreme example of models that are either high bias or high variance.
# a simulated sin wave
X = np.random.uniform(2, 10, 100)
y = np.sin(X) + np.random.normal(0, 0.2, 100)
# +
# high bias estimate
est_bias = np.zeros(100)
est_bias[:50] = np.mean(y[np.argsort(X)[:50]])
est_bias[50:] = np.mean(y[np.argsort(X)[50:]])
# high variance estimate
def movingaverage(values, window):
    '''calculate simple moving average'''
    weights = np.repeat(1.0, window)/window
    # 'valid' mode only produces outputs where the window fully overlaps
    # the data, so the result is window-1 samples shorter than the input.
    smas = np.convolve(values, weights, 'valid')
    return smas
est_var = movingaverage(y[np.argsort(X)], 3) # MA(3)
# +
plt.scatter(X, y)
# plot high bias estimate
plt_bias, = plt.plot(np.insert(X[np.argsort(X)], 50, np.nan),
np.insert(est_bias, 50, np.nan), # insert discontinuous point
color='g', linewidth=2)
# plot high variance estimate
plt_var, = plt.plot(X[np.argsort(X)][2:], est_var, color='r')
plt.xlabel("Predictor")
plt.ylabel("Outcome")
plt.legend([plt_bias, plt_var], ['High bias model', 'High variance model'])
# -
# It is generally true that more complex models can have very high variance, which leads to over-fitting. On the other hand, simple models tend not to over-fit, but under-fit if they are not flexible enough to model the true relationship (thus high bias). Also, highly correlated predictors can lead to collinearity issues and this can greatly increase the model variance. Increase the bias in the model to greatly reduce the model variance is a way to mitigate the problem of collinearity. This is referred to as the *variance-bias trade-off*.
| notebooks/.ipynb_checkpoints/Chapter 5-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.image import imread
x = np.array([1.0,2.0,3.0])
print(x)
type(x)
x = np.array([1.0,2.0,3.0])
y = np.array([2.0,4.0,6.0])
print(x+y)
print(x-y)
print(x/y)
A = np.array([[1,2],[3,4]])
B = np.array([10,20])
A*B
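# The product above works through broadcasting: B, with shape (2,), is stretched across each row of the (2, 2) matrix A before the elementwise operation. A small sketch:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([10, 20])
# B is broadcast across each row of A.
print(A * B)  # [[10 40] [30 80]]
print(A + B)  # [[11 22] [13 24]]
```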
# +
X = np.array([[1,2,3],[4,5,6],[9,3,4]])
# Flatten
print(X.flatten())
# Get elements index 0,2
print(X[np.array([0,2])])
print(X>4)
print(X[X>4])
# +
# Array from 0 to 6 in steps of 0.1
x = np.arange(0, 6, 0.1)
y = np.sin(x)
plt.plot(x, y)
plt.show()
# +
x = np.arange(0, 6, 0.1)
y1 = np.sin(x)
y2 = np.cos(x)
plt.plot(x, y1, label="sin")
plt.plot(x, y2, label="cos", linestyle="--")
plt.xlabel("x") # x-axis name
plt.ylabel("y")
plt.title("sin & cos")
plt.legend()
plt.show()
# +
img = imread("/Users/hansblackcat/Pictures/Screen Shot 2020-02-19 at 9.20.15 AM.png")
plt.imshow(img)
plt.show()
# -
| MachineLearning/DeepLearning/ch1_base.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import pandas_profiling as pds
import pandas_profiling as ProfileReport
import glob
import os, fnmatch
import sys
from functools import reduce
import re
def get_file(directory, filetype, filenumber):
    """
    Collect every file of the given type in `directory` and return the file
    at position `filenumber` read into a DataFrame.
    """
listOfFiles = os.listdir(directory)
pattern = f"*{filetype}"
df_list = []
csv_files = []
for entry in listOfFiles:
if fnmatch.fnmatch(entry, pattern):
csv_files.append(entry)
df_list.append((directory + entry))
    filename = [csv.replace(filetype, '') for csv in csv_files] #strip the extension to get bare file names
    series = [pd.read_csv(df) for df in df_list]
    return series[filenumber] # return the requested DataFrame
circuits_raw_df = get_file('../data/raw/', '.csv', 0)
constructors_raw_df = get_file('../data/raw/', '.csv', 1)
constructor_results_raw_df = get_file('../data/raw/', '.csv', 2)
constructor_standings_raw_df = get_file('../data/raw/', '.csv', 3)
drivers_raw_df = get_file('../data/raw/', '.csv', 4)
driver_standings_raw_df = get_file('../data/raw/', '.csv', 5)
lap_times_raw_df = get_file('../data/raw/', '.csv', 6)
pit_stops_raw_df = get_file('../data/raw/', '.csv', 7)
qualifying_raw_df = get_file('../data/raw/', '.csv', 8)
races_raw_df = get_file('../data/raw/', '.csv', 9)
results_raw_df = get_file('../data/raw/', '.csv', 10)
seasons_raw_df = get_file('../data/raw/', '.csv', 11)
status_raw_df = get_file('../data/raw/', '.csv', 12)
# Time will be a problem, so convert all times to seconds for every dataframe that has time intervals:
def hh_mm_ss2seconds(hh_mm_ss):
return reduce(lambda acc, x: acc*60 + x, map(float, hh_mm_ss.split(':')))
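# The conversion folds left over the ':'-separated fields, multiplying the accumulated value by 60 before adding each next field, so it handles both `m:ss` and `h:mm:ss` strings. A self-contained sketch:

```python
from functools import reduce

def hh_mm_ss2seconds(hh_mm_ss):
    # Fold left: acc = acc * 60 + next field, for each ':'-separated field.
    return reduce(lambda acc, x: acc * 60 + x, map(float, hh_mm_ss.split(':')))

print(hh_mm_ss2seconds('1:30.5'))     # 90.5   (1 min 30.5 s)
print(hh_mm_ss2seconds('0:01:05.3'))  # 65.3   (1 min 5.3 s)
```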
circuits_raw_df
# The 'alt' column has no consistent information, so drop it
circuits_interim_df = pd.read_csv('../data/raw/circuits.csv').drop(['alt'], axis=1)
circuits_interim_df.to_csv('../data/interim/i_circuits.csv', index=False)
constructors_raw_df
# No changes made, so the filename was not modified
constructors_interim_df = pd.read_csv('../data/raw/constructors.csv')
constructors_interim_df.to_csv('../data/interim/constructors.csv', index=False)
constructor_results_raw_df
# The 'status' column carries no information (just \N), so drop it
constructor_results_interim_df = pd.read_csv('../data/raw/constructor_results.csv').drop(['status'], axis=1)
constructor_results_interim_df.to_csv('../data/interim/i_constructor_results.csv', index=False)
drivers_raw_df
drivers_raw_df['full name'] = drivers_raw_df[['forename','surname']].apply(lambda x: ' '.join(x), axis=1)
drivers_raw_df.drop(columns= ['forename', 'surname', 'url'], inplace = True)
drivers_raw_df.to_csv('../data/processed/driver.csv')
drivers_raw_df
rank_winners_2020 = ['<NAME>', '<NAME>', '<NAME>', '<NAME>', '<NAME>', '<NAME>', '<NAME>', '<NAME>', '<NAME>', '<NAME>']
# driver = [name for name in winners_2019 ]
# [row for row in drivers_raw_df['driverRef'] if driver[0] in row]
def name_check(stringy):
return stringy in rank_winners_2020
rank_winners_2020 = drivers_raw_df[drivers_raw_df['full name'].map(name_check)]
rank_winners_2020['driverId']
rank_winners_2020
# +
# 'hamilton' in ['hamilton', 'bottas', 'verstappen', 'leclerc', 'vettel', 'sainz', 'gasly', 'albon', 'ricciardo', 'perez']
# -
# No changes made, so the filename was not modified; wait to match relevant datasets before dropping \N
drivers_interim_df = pd.read_csv('../data/raw/drivers.csv')
drivers_interim_df.to_csv('../data/interim/drivers.csv', index=False)
driver_standings_raw_df
# No changes made, so the filename was not modified; wait to match relevant datasets before dropping \N
driver_standings_interim_df = pd.read_csv('../data/raw/driver_standings.csv')
driver_standings_interim_df.to_csv('../data/interim/driver_standings.csv', index=False)
lap_times_raw_df
lap_times_interim_df = pd.read_csv('../data/raw/lap_times.csv',
converters={'time': hh_mm_ss2seconds})
lap_times_interim_df
# time column converted to seconds as float
lap_times_interim_df.to_csv('../data/interim/i_lap_times.csv', index=False)
pit_stops_raw_df
pit_stops_interim_df = pd.read_csv('../data/raw/pit_stops.csv',
converters={'time': hh_mm_ss2seconds})
pit_stops_interim_df
# time column converted to seconds as float
pit_stops_interim_df.to_csv('../data/interim/i_pit_stops.csv', index=False)
qualifying_raw_df
# replace all null values with zeroes for time conversion:
qualifying_raw_df['q1'] = qualifying_raw_df['q1'].fillna(0)
qualifying_raw_df['q2'] = qualifying_raw_df['q2'].fillna(0)
qualifying_raw_df['q3'] = qualifying_raw_df['q3'].fillna(0)
qualifying_raw = qualifying_raw_df.replace({'q1': '\\N', 'q2': '\\N', 'q3': '\\N'}, 0)
qualifying_raw.to_csv('../data/interim/N_A_qualifying.csv', index=False)
qualifying_interim_df = pd.read_csv('../data/interim/N_A_qualifying.csv',
converters={'q1': hh_mm_ss2seconds, 'q2': hh_mm_ss2seconds, 'q3': hh_mm_ss2seconds})
qualifying_interim_df
#Had to remove null values and \\N
qualifying_interim_df.to_csv('../data/interim/i_qualifying.csv', index=False)
races_raw_df
# Just in case
races_raw = races_raw_df.replace({'time': '\\N'}, 0)
races_raw.to_csv('../data/interim/N_A_races.csv', index=False)
races_interim_df = pd.read_csv('../data/interim/N_A_races.csv',
converters={'time': hh_mm_ss2seconds})
races_interim_df
results_raw_df
results_raw_df['position'].value_counts()
results_raw_df["time"].value_counts()
results_raw_df['Win or Lost'] = results_raw_df['time']
results_raw_df['time'] = results_raw_df['time'].str.replace('+', '', regex=False)
results_raw_df['time']
results = results_raw_df.replace({'fastestLapTime': '\\N', 'time': '\\N', 'milliseconds': '\\N', 'fastestLapSpeed': '\\N', 'Win or Lost': '\\N', 'fastestLap': '\\N'}, 0)
results['milliseconds'].dtypes
results['milliseconds'] = pd.to_numeric(results['milliseconds'], downcast="float")
results['fastestLap'] = pd.to_numeric(results['fastestLap'])
results['milliseconds'] = results['milliseconds'].div(1000)
results.to_csv('../data/interim/N_A_results.csv', index=False)
results_interim_df = pd.read_csv('../data/interim/N_A_results.csv',
converters={'fastestLapTime': hh_mm_ss2seconds})
results_interim_df
results_interim_df.loc[(results_interim_df["Win or Lost"].str.match("\+\d+") | results_interim_df["Win or Lost"].str.match('0')) == True, 'Win or Lost'] = "lost"
results_interim_df.loc[(results_interim_df["Win or Lost"].str.match("lost") == False, 'Win or Lost')] = "win"
wl_results = results_interim_df.replace({'Win or Lost': '0'}, 'lost')
wl_results['Win or Lost'].value_counts()
s = wl_results.groupby('driverId')['Win or Lost']
counts = s.value_counts()
percent = s.value_counts(normalize=True)
# percent100 = s.value_counts(normalize=True).mul(100).round(1).astype(str) + '%'
percent100 = s.value_counts(normalize=True).mul(100).round(1)
wl_ratio = pd.DataFrame({'counts': counts, 'per': percent, 'per100': percent100})
wl_ratio.reset_index(inplace= True)
wl_ratio
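# The `groupby` + `value_counts` pattern used above generalizes to any per-group frequency table. A tiny self-contained sketch with made-up data:

```python
import pandas as pd

df = pd.DataFrame({
    "driverId": [1, 1, 1, 2, 2],
    "Win or Lost": ["win", "lost", "lost", "win", "win"],
})
s = df.groupby("driverId")["Win or Lost"]
counts = s.value_counts()                                      # wins/losses per driver
percent100 = s.value_counts(normalize=True).mul(100).round(1)  # as percentages
print(counts.loc[(1, "lost")], percent100.loc[(2, "win")])     # 2 100.0
```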
wl_results.to_csv('../data/processed/wl_results.csv', index=False)
fastest_lap_time = wl_results.groupby('driverId')['fastestLapTime'].agg(['min','max','mean'])
fastest_lap_time.rename(columns={'min':'Slowest Lap Time', 'max': 'Fastest Lap Time', 'mean': 'Average Lap Time'}).reset_index()
speed = wl_results.groupby('driverId')['fastestLapSpeed'].agg(['min','max','mean'])
speed.rename(columns={'min':'Minimum Speed', 'max': 'Maximum Speed', 'mean': 'Average Speed'})
lap_speed = wl_results.groupby('driverId')['fastestLap'].agg(['min','max','mean'])
lap_speed.rename(columns={'min':'Minimum Laps', 'max': 'Maximum Laps', 'mean': 'Average Laps'}).reset_index()
wl_ratio.to_csv('../data/processed/top10_wl.csv', index=False)
wl_ratio
top10_rank_stats = rank_winners_2020.merge(wl_ratio, on = 'driverId', how = 'left')
top10_rank_stats.drop(columns='url', inplace=True)
top10_rank_stats.to_csv('../data/processed/top10_results.csv', index=False)
speed_index = rank_winners_2020.merge(fastest_lap_time, on = 'driverId', how = 'left').merge(speed, on='driverId', how='left').merge(lap_speed, on='driverId', how='left').fillna(0)
speed_index.drop(columns='url', inplace=True)
speed_index.rename(columns={'min_x':'Slowest Lap Time', 'man_x':'Fastest Lap Time', 'mean_x': 'Average Lap Time', 'min_y': 'Minimum Speed', 'max_y': 'Maximum Speed', 'mean_y': 'Average Speed', 'min': 'Minimum Laps', 'max': 'Maximum Laps', 'mean': 'Average Laps'}, inplace = True)
speed_index
speed_index.to_csv('../data/processed/top10_speed_results.csv', index=False)
seasons_raw_df
races_interim_df['name']
# No changes made, so the filename was not modified; wait to match relevant datasets before dropping \N
seasons_interim_df = pd.read_csv('../data/raw/seasons.csv')
seasons_interim_df.to_csv('../data/interim/seasons.csv', index=False)
status_raw_df
# No changes made, so the filename was not modified; wait to match relevant datasets before dropping \N
status_interim_df = pd.read_csv('../data/raw/status.csv')
status_interim_df.to_csv('../data/interim/status.csv', index=False)
# The number of Grand Prix occurrences between (2010–2020)?
# What are the top
| notebooks/formula_etl.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_amazonei_pytorch_latest_p36
# language: python
# name: conda_amazonei_pytorch_latest_p36
# ---
# # MNIST Training using PyTorch
# ## Contents
#
# 1. [Background](#Background)
# 1. [Setup](#Setup)
# 1. [Data](#Data)
# 1. [Train](#Train)
# 1. [Host](#Host)
#
# ---
#
# ## Background
#
# MNIST is a widely used dataset for handwritten digit classification. It consists of 70,000 labeled 28x28 pixel grayscale images of hand-written digits. The dataset is split into 60,000 training images and 10,000 test images. There are 10 classes (one for each of the 10 digits). This tutorial will show how to train and test an MNIST model on SageMaker using PyTorch.
#
# For more information about the PyTorch in SageMaker, please visit [sagemaker-pytorch-containers](https://github.com/aws/sagemaker-pytorch-containers) and [sagemaker-python-sdk](https://github.com/aws/sagemaker-python-sdk) github repositories.
#
# ---
#
# ## Setup
#
# _This notebook was created and tested on an ml.m4.xlarge notebook instance._
#
# Let's start by creating a SageMaker session and specifying:
#
# - The S3 bucket and prefix that you want to use for training and model data. This should be within the same region as the Notebook Instance, training, and hosting.
# - The IAM role arn used to give training and hosting access to your data. See the documentation for how to create these. Note, if more than one role is required for notebook instances, training, and/or hosting, please replace the `sagemaker.get_execution_role()` with the appropriate full IAM role ARN string(s).
#
# Upgrade sagemaker sdk to v2
# !pip install sagemaker --upgrade -i https://pypi.tuna.tsinghua.edu.cn/simple
# +
import boto3
import sagemaker
sagemaker_session = sagemaker.Session()
bucket = sagemaker_session.default_bucket()
prefix = "sagemaker/DEMO-pytorch-mnist"
role = sagemaker.get_execution_role(sagemaker_session=sagemaker_session)
print("Sagemaker SDK version: {0}".format(sagemaker.__version__))
print("Sagemaker Execute Role: {0}".format(role))
print("Bucket: {0}".format(bucket))
# -
# ## Data
# ### Getting the data
#
#
# +
from torchvision import datasets, transforms
datasets.MNIST(
"data",
download=True,
transform=transforms.Compose(
[transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]
),
)
# -
# ### Uploading the data to S3
# We are going to use the `sagemaker.Session.upload_data` function to upload our datasets to an S3 location. The return value inputs identifies the location -- we will use later when we start the training job.
#
inputs = sagemaker_session.upload_data(path="data", bucket=bucket, key_prefix=prefix)
print("input spec (in this case, just an S3 path): {}".format(inputs))
# ## Train
# ### Training script
# The `mnist.py` script provides all the code we need for training and hosting a SageMaker model (`model_fn` function to load a model).
# The training script is very similar to a training script you might run outside of SageMaker, but you can access useful properties about the training environment through various environment variables, such as:
#
# * `SM_MODEL_DIR`: A string representing the path to the directory to write model artifacts to.
# These artifacts are uploaded to S3 for model hosting.
# * `SM_NUM_GPUS`: The number of gpus available in the current container.
# * `SM_CURRENT_HOST`: The name of the current container on the container network.
# * `SM_HOSTS`: JSON encoded list containing all the hosts.
#
# Supposing one input channel, 'training', was used in the call to the PyTorch estimator's `fit()` method, the following will be set, following the format `SM_CHANNEL_[channel_name]`:
#
# * `SM_CHANNEL_TRAINING`: A string representing the path to the directory containing data in the 'training' channel.
#
# For more information about training environment variables, please visit [SageMaker Containers](https://github.com/aws/sagemaker-containers).
#
# A typical training script loads data from the input channels, configures training with hyperparameters, trains a model, and saves a model to `model_dir` so that it can be hosted later. Hyperparameters are passed to your script as arguments and can be retrieved with an `argparse.ArgumentParser` instance.
#
# Because SageMaker imports the training script, you should put your training code in a main guard (``if __name__=='__main__':``) if you are using the same script to host your model as we do in this example, so that SageMaker does not inadvertently run your training code at the wrong point in execution.
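# The argument-and-environment handling described above can be sketched as follows. This is a hypothetical minimal skeleton, not the actual `mnist.py`; the hyperparameter names mirror the ones passed to the estimator later in this notebook.

```python
import argparse
import json
import os

parser = argparse.ArgumentParser()
# Hyperparameters arrive as command-line arguments.
parser.add_argument("--epochs", type=int, default=10)
parser.add_argument("--backend", type=str, default="gloo")
# SageMaker injects directories and host info through environment variables;
# local fallbacks are given so the sketch also runs outside a container.
parser.add_argument("--model-dir", type=str,
                    default=os.environ.get("SM_MODEL_DIR", "/opt/ml/model"))
parser.add_argument("--data-dir", type=str,
                    default=os.environ.get("SM_CHANNEL_TRAINING",
                                           "/opt/ml/input/data/training"))
args = parser.parse_args(["--epochs", "3"])  # simulate SageMaker's invocation
hosts = json.loads(os.environ.get("SM_HOSTS", '["algo-1"]'))
print(args.epochs, args.backend, hosts)
```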
#
# For example, the script run by this notebook:
# !pygmentize mnist.py
# ### Run training in SageMaker
#
# The `PyTorch` class allows us to run our training function as a training job on SageMaker infrastructure. We need to configure it with our training script, an IAM role, the number of training instances, the training instance type, and hyperparameters. In this case we are going to run our training job on a single ```ml.m5.xlarge``` instance, but this example can be run on one or multiple CPU or GPU instances ([full list of available instances](https://aws.amazon.com/sagemaker/pricing/instance-types/)). The hyperparameters parameter is a dict of values that will be passed to your training script -- you can see how to access these values in the `mnist.py` script above.
#
# +
from sagemaker.pytorch import PyTorch
estimator = PyTorch(
sagemaker_session=sagemaker_session,
entry_point="mnist.py",
role=role,
framework_version="1.6.0",
py_version="py3",
instance_count=1,
    instance_type="ml.m5.xlarge",
    use_spot_instances=True,  # use Spot capacity to reduce training cost
    max_run=3600,             # max training time, in seconds
    max_wait=7200,            # max time to wait for Spot capacity plus training
hyperparameters={"epochs": 6, "backend": "gloo"},
)
# -
# After we've constructed our `PyTorch` object, we can fit it using the data we uploaded to S3. SageMaker makes sure our data is available in the local filesystem, so our training script can simply read the data from disk.
#
estimator.fit({"training": "s3://sagemaker-cn-northwest-1-188642756190/sagemaker/DEMO-pytorch-mnist"})
# ## Host
# ### Create endpoint
# After training, we use the `PyTorch` estimator object to build and deploy a `PyTorchPredictor`. This creates a SageMaker Endpoint -- a hosted prediction service that we can use to perform inference.
#
# As mentioned above, the `mnist.py` script provides the required implementation of `model_fn`. We are going to use the default implementations of `input_fn`, `predict_fn`, `output_fn` and `transform_fn` defined in [sagemaker-pytorch-containers](https://github.com/aws/sagemaker-pytorch-containers).
#
# The arguments to the deploy function allow us to set the number and type of instances that will be used for the Endpoint. These do not need to be the same as the values we used for the training job. For example, you can train a model on a set of GPU-based instances, and then deploy the Endpoint to a fleet of CPU-based instances, but you need to make sure that you return or save your model as a CPU model, similar to what we did in `mnist.py`. Here we will deploy the model to a single `ml.m5.xlarge` instance.
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")
# ### Evaluate
# We can now use this predictor to classify hand-written digits. Drawing into the image box loads the pixel data into a `data` variable in this notebook, which we can then pass to the `predictor`.
# +
from IPython.display import HTML
HTML(open("input.html").read())
# +
import numpy as np
image = np.array([data], dtype=np.float32)
response = predictor.predict(image)
prediction = response.argmax(axis=1)[0]
print(prediction)
# -
# ### Cleanup
#
# After you have finished with this example, remember to delete the prediction endpoint to release the instance(s) associated with it.
predictor.delete_endpoint()
| pytorch_mnist/pytorch_mnist.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/neuroidss/silent_speech/blob/main/EMG_Silent_Speech_with_WaveNet%26DeepSpeech_via_BrainFlow.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="8SD4RO2F-xOT"
gen_tpu = 1 << 0
gen_gpu = 1 << 1
gen_pytorch = 1 << 2
gen_tf1 = 1 << 3
gen_tf2 = 1 << 4
gen_stylegan2 = 1 << 5
gen_sg2_nagolinc_pt = 1 << 6
gen_sg2_nvlabs_ada_pt = 1 << 7
gen_sg2_tf1 = 1 << 8
gen_sg2_tf2 = 1 << 9
gen_sg2_rosasalberto_tf2 = 1 << 10
gen_anime_tf2_npy = 1 << 11
gen_tadne_tf2_npy = 1 << 12
gen_anime_protraits_tf2_npy = 1 << 13
gen_abctract_art_tf2_npy = 1 << 14
gen_tf2_npy = 1 << 15
gen_sg2_moono_tf2 = 1 << 16
gen_anime_tf2 = 1 << 17
gen_sg2_shawwn_tpu = 1 << 18
gen_sg2_cyrilzakka_tpu = 1 << 19
gen_sg2_nvlabs = 1 << 20
gen_anime = 1 << 21
gen_sg2_shawwn = 1 << 22
gen_tadne = 1 << 23
gen_sg2_nvlabs_ada = 1 << 24
gen_anime_protraits = 1 << 25
gen_abctract_art = 1 << 26
gen_wavegan = 1 << 27
gen_drums = 1 << 28
gen_mp3 = 1 << 29
gen_wav = 1 << 30
gen_png = 1 << 31
gen_jpeg = 1 << 32
gen_heatmap = 1 << 33
gen_thdne = 1 << 34
gen_wg_stereo = 1 << 35
gen_wg_st_swap = 1 << 36
gen_webm = 1 << 37
gen_mp4 = 1 << 38
gen_mp4_pyav = 1 << 39
gen_mp4_imageio = 1 << 40
gen_mp4_moviepy = 1 << 41
gen_mp4_h264_nvenc = 1 << 42
gen_sg2_aydao_surgery_model_release = 1 << 43
gen_game = 1 << 44
gen_game_mode1 = 1 << 45
gen_game_mode3 = 1 << 46
gen_parallel = 1 << 47
gen_silent_speech = 1 << 48
gen_ss_dgaddy_pt = 1 << 49
gen_ss_wm50_tm07_dm070 = 1 << 50
gen_gpu_cuda = 1 << 51
# + id="Q08NMGPUAfmh"
#generate = gen_gpu | gen_pytorch | gen_stylegan2 | gen_sg2_nvlabs_ada_pt | gen_anime_protraits
#generate = gen_gpu | gen_pytorch | gen_stylegan2 | gen_sg2_nvlabs_ada_pt | gen_abctract_art
#generate = gen_gpu | gen_pytorch | gen_stylegan2 | gen_sg2_nvlabs_ada_pt | gen_abctract_art | gen_tf1 | gen_sg2_nvlabs_ada | gen_anime_protraits
#generate = gen_gpu | gen_pytorch | gen_stylegan2 | gen_sg2_nvlabs_ada_pt | gen_abctract_art | gen_tf1 | gen_sg2_nvlabs_ada | gen_anime_protraits | gen_wavegan | gen_drums
#generate = gen_gpu | gen_tf1 | gen_stylegan2 | gen_sg2_nvlabs_ada | gen_anime_protraits
#generate = gen_gpu | gen_tf1 | gen_stylegan2 | gen_sg2_nvlabs_ada | gen_abctract_art
#generate = gen_gpu | gen_tf1 | gen_stylegan2 | gen_sg2_shawwn | gen_tadne
#generate = gen_gpu | gen_pytorch | gen_sg2_nagolinc_pt | gen_tf1 | gen_stylegan2 | gen_sg2_shawwn | gen_tadne
#generate = gen_gpu | gen_tf1 | gen_wavegan | gen_drums
#generate = gen_gpu | gen_tf1 | gen_stylegan2 | gen_sg2_nvlabs_ada | gen_anime_protraits | gen_wavegan | gen_drums
#generate = gen_gpu | gen_tf1 | gen_stylegan2 | gen_sg2_nvlabs_ada | gen_abctract_art | gen_wavegan | gen_drums
#generate = gen_gpu | gen_tf1 | gen_stylegan2 | gen_sg2_shawwn | gen_tadne | gen_wavegan | gen_drums
#generate = gen_gpu | gen_pytorch | gen_sg2_nagolinc_pt | gen_tf1 | gen_stylegan2 | gen_sg2_shawwn | gen_tadne | gen_wavegan | gen_drums
#generate = gen_gpu | gen_tf1 | gen_stylegan2 | gen_sg2_nvlabs | gen_anime_protraits | gen_tf2_npy
#generate = gen_gpu | gen_tf1 | gen_stylegan2 | gen_sg2_nvlabs_ada | gen_anime_protraits | gen_tf2_npy
#generate = gen_gpu | gen_tf1 | gen_stylegan2 | gen_sg2_nvlabs_ada | gen_abctract_art | gen_tf2_npy
#generate = gen_gpu | gen_tf2 | gen_stylegan2 | gen_sg2_rosasalberto_tf2 | gen_anime_protraits_tf2_npy
#generate = gen_gpu | gen_tf2 | gen_stylegan2 | gen_sg2_rosasalberto_tf2 | gen_abctract_art_tf2_npy
#generate = gen_tpu | gen_tf2 | gen_stylegan2 | gen_sg2_rosasalberto_tf2 | gen_anime_protraits_tf2_npy
#generate = gen_tpu | gen_tf2 | gen_stylegan2 | gen_sg2_rosasalberto_tf2 | gen_abctract_art_tf2_npy
#generate = gen_tpu | gen_tf1 | gen_wavegan | gen_drums
#generate = gen_tf2 | gen_stylegan2 | gen_sg2_rosasalberto_tf2 | gen_anime_protraits_tf2_npy
#generate = gen_tf2 | gen_stylegan2 | gen_sg2_rosasalberto_tf2 | gen_abctract_art_tf2_npy
#generate = gen_tf1 | gen_wavegan | gen_drums
#generate = gen_tpu | gen_tf1 | gen_stylegan2 | gen_sg2_aydao_surgery_model_release | gen_thdne
#generate = gen_tpu | gen_tf1 | gen_stylegan2 | gen_sg2_shawwn_tpu | gen_thdne
#generate = gen_gpu | gen_tf1 | gen_stylegan2 | gen_sg2_shawwn | gen_thdne
#generate = gen_gpu | gen_pytorch | gen_stylegan2 | gen_sg2_nvlabs_ada_pt | gen_thdne
#generate = gen_gpu | gen_tf1 | gen_wavegan | gen_wg_stereo | gen_drums
#generate = gen_tf1 | gen_wavegan | gen_wg_stereo | gen_drums
#generate = gen_tf1 | gen_wavegan | gen_wg_stereo | gen_wg_st_swap | gen_drums
generate = gen_gpu | gen_pytorch | gen_tf2 | gen_silent_speech | gen_ss_dgaddy_pt | gen_ss_wm50_tm07_dm070
generate = generate | gen_wg_stereo | gen_wg_st_swap
#generate = generate | gen_png | gen_wav
generate = generate | gen_jpeg | gen_mp3
#generate = generate | gen_mp4
#generate = generate | gen_webm
#generate = generate | gen_mp4_pyav
#generate = generate | gen_mp4_imageio
#generate = generate | gen_mp4_moviepy
#generate = generate | gen_mp4_h264_nvenc
#generate = generate | gen_game
#generate = generate | gen_game_mode1
#generate = generate | gen_game_mode3
generate = generate | gen_parallel
#generate = generate | gen_gpu_cuda
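# The `generate` scheme above packs each option into a distinct bit of a single integer: options are combined with `|` and queried with `&`. A stripped-down illustration (a few flag names reused from above):

```python
# Each option occupies a distinct bit of one integer:
# set options with |, test them with &.
gen_gpu = 1 << 1
gen_pytorch = 1 << 2
gen_wavegan = 1 << 27

generate = gen_gpu | gen_pytorch      # enable two options
print(bool(generate & gen_pytorch))   # True
print(bool(generate & gen_wavegan))   # False
generate |= gen_wavegan               # turn another option on later
print(bool(generate & gen_wavegan))   # True
```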
# + id="DwpQzGQCZVrM"
device_ad7771 = 1 << 0
device_ads131m08 = 1 << 1
device = device_ad7771
#device = device_ads131m08
if device&device_ad7771:
sfreq=512
vref = 2.50 #2.5V voltage ref +/- 250nV
gain = 8
data_channels = 32
if device&device_ads131m08:
sfreq=250
#sfreq=83.3333333333
#sfreq=83
vref = 1.25 #2.5V voltage ref +/- 250nV
gain = 32
# data_channels = 32
data_channels = 128
stepSize = 1/pow(2,24)
vscale = (vref/gain)*stepSize #volts per step.
uVperStep = 1000000 * ((vref/gain)*stepSize) #uV per step.
scalar = 1/(1000000 / ((vref/gain)*stepSize)) #steps per uV.
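# As a sanity check, a worked example of the scaling above for the AD7771 settings (vref = 2.5 V, gain = 8, 24-bit converter): the resolution comes out to roughly 0.0186 µV per step.

```python
# ADC resolution: the full-scale range vref/gain is divided into 2**24 steps.
vref, gain = 2.5, 8
step_size = 1 / 2 ** 24
uV_per_step = 1_000_000 * (vref / gain) * step_size
print(round(uV_per_step, 4))  # 0.0186
```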
# + id="Mkq7jIXnJm1W"
##biosemi16-2:
ch_names_wg = ['FP1','F3','T7','C3','P3','Pz','O1','O2','P4','C4','T8','F4','FP2','Fz']
ch_locations_wg=[0,3,6,7,11,12,14,16,18,22,23,26,29,30]
#biosemi32_l14
ch_names_wg_l = ['FP1','AF3','F7','F3','FC1','FC5','T7','C3','CP1','CP5','P7','P3','PO3','O1']
ch_locations_wg_l=[0,1,2,3,4,5,6,7,8,9,10,11,13,14]
#biosemi32_r14
ch_names_wg_r_ = ['O2','PO4','P4','P8','CP6','CP2','C4','T8','FC6','FC2','F4','F8','AF4','FP2']
ch_locations_wg_r_=[16,17,18,19,20,21,22,23,24,25,26,27,28,29]
#biosemi32_r14_
ch_names_wg_r = ['FP2','AF4','F8','F4','FC2','FC6','T8','C4','CP2','CP6','P8','P4','PO4','O2']
ch_locations_wg_r=[29,28,27,26,25,24,23,22,21,20,19,18,17,16]
#biosemi128_45
#ch_names_sg2 = ['A1','A2','A3','A4','A5','A6','A7','A8','A9','A10','A11','A12','A13','A14','A15','A16','A17','A18','A19','A20','A21','A22','A23','A24','A25','A26','A27','A28','A29','A30','A31','A32',
# 'B1','B2','B3','B4','B5','B6','B7','B8','B9','B10','B11','B12','B13']
#ch_locations_sg2=[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,
# 32,33,34,35,36,37,38,39,40,41,42,43,44,45]
#biosemi128_32
#ch_names_sg2 = ['A1','A2','A3','A4','A5','A6','A7','A8','A9','A10','A11','A12','A13','A14','A15','A16','A17','A18','A19','A20','A21','A22','A23','A24','A25','A26','A27','A28','A29','A30','A31','A32']
#ch_locations_sg2=[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31]
#biosemi128
#ch_names_sg2 = ['A1','A2','A3','A4','A5','A6','A7','A8','A9','A10','A11','A12','A13','A14','A15','A16','A17','A18','A19','A20','A21','A22','A23','A24','A25','A26','A27','A28','A29','A30','A31','A32',
# 'B1','B2','B3','B4','B5','B6','B7','B8','B9','B10','B11','B12','B13','B14','B15','B16','B17','B18','B19','B20','B21','B22','B23','B24','B25','B26','B27','B28','B29','B30','B31','B32',
# 'C1','C2','C3','C4','C5','C6','C7','C8','C9','C10','C11','C12','C13','C14','C15','C16','C17','C18','C19','C20','C21','C22','C23','C24','C25','C26','C27','C28','C29','C30','C31','C32',
# 'D1','D2','D3','D4','D5','D6','D7','D8','D9','D10','D11','D12','D13','D14','D15','D16','D17','D18','D19','D20','D21','D22','D23','D24','D25','D26','D27','D28','D29','D30','D31','D32']
#ch_locations_sg2=[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,
# 32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,
# 64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,
# 96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127]
#biosemi32-1
#ch_names_sg2 = ['FP1','AF3','F7','F3','FC1','FC5','T7','C3','CP1','CP5','P7','P3','Pz','PO3','O1','Oz','O2','PO4','P4','P8','CP6','CP2','C4','T8','FC6','FC2','F4','F8','AF4','FP2','Cz']
#ch_locations_sg2=[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,26,27,28,29,30,31]
#silent_speech_8
ch_names_sg2 = ['1','2','3','4','5','6','7','8']
ch_locations_sg2=[0,1,2,3,4,5,6,7]
#biosemi32
#ch_names_sg2 = ['FP1','AF3','F7','F3','FC1','FC5','T7','C3','CP1','CP5','P7','P3','Pz','PO3','O1','Oz','O2','PO4','P4','P8','CP6','CP2','C4','T8','FC6','FC2','F4','F8','AF4','FP2','Fz','Cz']
#ch_locations_sg2=[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31]
##Bernard's 19ch:
#ch_names = ["FP2","FP1","O2","T6","T4","F8","F4","C4","P4","F3","C3","P3","O1","T5","T3","F7","FZ","PZ"]#,"other"]
#ch_locations=[4,24,0,1,2,3,5,6,7,25,26,27,28,29,30,31,16,12]#,8]
##Bernard's 2ch:
#ch_names = ["FP2","FP1"]#,"other"]
#ch_locations=[4,24]#,8]
bands = [[8.,12.]]
#bands = [[4.,7.],[8.,12.]]
#bands = [[8.,12.],[8.,12.],[8.,12.]]
methods = ['coh']
#methods = ['plv']
#methods = ['ciplv']
#methods = ['ppc']
#methods = ['pli']
#methods = ['wpli']
#methods = ['coh', 'plv', 'ciplv', 'ppc', 'pli', 'wpli']
vol=1
#vol=6
#vol=0.1
duration=5*1/8
overlap=0
#overlap=duration-0.1
if generate&gen_game:
xsize=128
ysize=128
else:
#xsize=256
#ysize=256
# xsize=128
# ysize=128
xsize=512
ysize=512
#xsize=512/2
#ysize=512/2
hz=44100
#fps=hz/(32768)
#if generate_stylegan2:
fps_sg2=1
#if generate_wavegan:
#fps_wg=((hz/(32768*2))/1)*0.25
#fps_wg=((hz/(32768*2))/1)*0.5
fps_wg=((hz/(32768*2))/1)*1
#fps_wg=((hz/(32768*2))/1)*2
#fps_wg=((hz/(32768*2))/1)*3
#fps_sg2=fps_wg
fps_sg2=fps_wg*1
#fps_sg2=2
#fps_sg2=6
#fps_sg2=1/5
#fps_sg2=fps_wg/3
#fps_sg2=fps_wg*4
fps_hm=fps_wg
fps2_sg2=((fps_sg2*24/8)/3)*1
#fps2_sg2=((fps_sg2*24/8)/3)*2
#if 1/fps_wg-0.2>duration:
# duration=1/fps_wg-0.2
# overlap=duration-0.1
if 2*1/fps_wg>duration:
duration=2*1/fps_wg
# overlap=0
overlap=(duration/2)-(duration/2)/(fps2_sg2/fps_sg2)
#fps2_sg2=1
#fps2_sg2=1
#duration=2*1/fps_wg
#overlap=duration-(fps_wg/fps_sg2)
if generate&gen_wavegan:
dim_wg = 100
if generate&gen_stylegan2:
dim_sg2 = 512
if generate&gen_sg2_shawwn:
dim_sg2 = 1024
if generate&gen_sg2_shawwn_tpu:
dim_sg2 = 1024
if generate&gen_sg2_aydao_surgery_model_release:
dim_sg2 = 1024
debug=False
#debug=True
#mp4_codec = 'h264_cuvid'
mp4_codec = 'libx264'
device=None
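# The `bands`/`methods` settings above configure a spectral-connectivity measure ('coh' = magnitude-squared coherence), computed over windows of `duration` seconds in the MNE-based pipeline. A self-contained NumPy sketch of the idea behind that measure (the 10 Hz test signal is illustrative):

```python
import numpy as np

def msc(x_epochs, y_epochs):
    # Magnitude-squared coherence from epoch-averaged spectra:
    # |<X Y*>|^2 / (<|X|^2> <|Y|^2>), one value per frequency bin.
    X = np.fft.rfft(x_epochs, axis=1)
    Y = np.fft.rfft(y_epochs, axis=1)
    Sxy = (X * np.conj(Y)).mean(axis=0)
    Sxx = (np.abs(X) ** 2).mean(axis=0)
    Syy = (np.abs(Y) ** 2).mean(axis=0)
    return np.abs(Sxy) ** 2 / (Sxx * Syy)

rng = np.random.default_rng(0)
t = np.arange(256) / 256.0                 # 1 s at 256 Hz -> 1 Hz bins
common = np.sin(2 * np.pi * 10.0 * t)      # shared 10 Hz component
x = common + 0.5 * rng.standard_normal((50, t.size))
y = common + 0.5 * rng.standard_normal((50, t.size))
coh = msc(x, y)
print(coh[10] > 0.9, coh[30] < 0.3)        # high at the shared 10 Hz bin, low elsewhere
```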
# + id="exdhoEe151Ja"
#from google.colab import drive
#drive.mount('/content/gdrive')
# + colab={"base_uri": "https://localhost:8080/"} id="7WErNtyaMBkg" outputId="56c5c465-afbe-43e5-a6b5-5f666785933e"
if generate&gen_tf1:
import os
if 'COLAB_GPU' in os.environ:
print("I'm running on Colab")
# %tensorflow_version 1.x
else:
# !pip install testresources
# !pip install tensorflow==1.15
if generate&gen_tf2:
import os
if 'COLAB_GPU' in os.environ:
print("I'm running on Colab")
# %tensorflow_version 2.x
else:
# !pip install tensorflow==2.6
import tensorflow as tf
print('Tensorflow version: {}'.format(tf.__version__) )
# + colab={"base_uri": "https://localhost:8080/"} id="BLsvM-V7Oh9S" outputId="2985f1f8-acaa-4f22-cea4-c7eee9b67f5a"
if generate&gen_tf1:
if generate&gen_gpu:
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
if generate&gen_tpu:
import os
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()
import pprint
assert 'COLAB_TPU_ADDR' in os.environ, 'Did you forget to switch to TPU?'
tpu_address = 'grpc://' + os.environ['COLAB_TPU_ADDR']
with tf.Session(tpu_address) as sess:
devices = sess.list_devices()
pprint.pprint(devices)
device_is_tpu = [True if 'TPU' in str(x) else False for x in devices]
assert True in device_is_tpu, 'Did you forget to switch to TPU?'
if generate&gen_tf2:
try: # detect TPUs
tpu = None
tpu = tf.distribute.cluster_resolver.TPUClusterResolver() # TPU detection
tf.config.experimental_connect_to_cluster(tpu)
tf.tpu.experimental.initialize_tpu_system(tpu)
strategy = tf.distribute.TPUStrategy(tpu)
except ValueError: # detect GPUs
strategy = tf.distribute.MirroredStrategy() # for GPU or multi-GPU machines
#strategy = tf.distribute.get_strategy() # default strategy that works on CPU and single GPU
#strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy() # for clusters of multi-GPU machines
print("Number of accelerators: ", strategy.num_replicas_in_sync)
# + colab={"base_uri": "https://localhost:8080/"} id="0Q265NLt3Unk" outputId="9ec4f62a-c08d-4273-8c7e-b60a21b71a3c"
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
# + id="FasUDeJx6L9R"
if generate&gen_sg2_shawwn_tpu:
# %env TPU_NAME={tpu_address}
if generate&gen_sg2_aydao_surgery_model_release:
# %env TPU_NAME={tpu_address}
# %env DEBUG=1
# + id="5nVIVsZOe_B4"
if generate&gen_sg2_aydao_surgery_model_release:
from google.colab import drive
drive.mount('/content/drive')
import os
if os.path.isdir("/content/drive/MyDrive/colab-sg2-aydao"):
# %cd "/content/drive/MyDrive/colab-sg2-aydao/stylegan2-surgery"
else:
#install script
# %cd "/content/drive/MyDrive/"
# !mkdir colab-sg2-aydao
# %cd colab-sg2-aydao
# !git clone --branch model-release https://github.com/aydao/stylegan2-surgery.git
# # !git clone --branch tpu https://github.com/shawwn/stylegan2.git
# # !git clone https://github.com/dvschultz/stylegan2-ada
# %cd stylegan2-surgery
# !mkdir downloads
# !mkdir datasets
# !gcloud auth login
# project_id = 'encoded-phalanx-326615'
project_id = 'local-abbey-335821'
# !gcloud config set project {project_id}
# GCP_PROJECT_ID = 'encoded-phalanx-326615'
GCP_PROJECT_ID = 'local-abbey-335821'
PROJECT_NUMBER = '0'
# !gcloud services --project $GCP_PROJECT_ID enable ml.googleapis.com cloudbuild.googleapis.com
from google.colab import auth
auth.authenticate_user()
# + colab={"base_uri": "https://localhost:8080/"} id="iaHracqcMbGG" outputId="e5ed4e31-db86-4a6c-ba4b-3b5bfc0afaa7"
if generate&gen_ss_dgaddy_pt:
import os
if os.path.isdir("/content/silent_speech-dgaddy-pytorch"):
# %cd "/content/silent_speech-dgaddy-pytorch"
else:
# !pip install torch==1.7.1
# %cd "/content/"
# !git clone https://github.com/dgaddy/silent_speech.git /content/silent_speech-dgaddy-pytorch
# %cd "/content/silent_speech-dgaddy-pytorch"
# !git clone https://github.com/NVIDIA/nv-wavenet.git nv_wavenet
# !git clone https://github.com/hubertsiuzdak/voice-conversion.git voice-conversion
# %cp -TRv voice-conversion/nv_wavenet nv_wavenet/
# %cd /content/silent_speech-dgaddy-pytorch/nv_wavenet/pytorch
        if 'Tesla K80' in str(device_lib.list_local_devices()):
# !sed -i 's/ARCH=sm_70/ARCH=sm_37/' ./Makefile
# !rm -rf /usr/local/cuda
# !ln -s /usr/local/cuda-10.1 /usr/local/cuda
# #!ln -s /usr/local/cuda-11.2 /usr/local/cuda
# !make
# !python build.py install
# %cd /content/silent_speech-dgaddy-pytorch
if generate&gen_sg2_nvlabs_ada_pt:
# %cd /content
# !git clone https://github.com/NVlabs/stylegan2-ada-pytorch /content/stylegan2-nvlabs-ada-pytorch
# %cd /content/stylegan2-nvlabs-ada-pytorch
if generate&gen_sg2_nvlabs:
# %cd /content
# !git clone https://github.com/NVlabs/stylegan2.git /content/stylegan2-nvlabs
# %cd /content/stylegan2-nvlabs
if generate&gen_sg2_nvlabs_ada:
# %cd /content
# !git clone https://github.com/NVlabs/stylegan2-ada.git /content/stylegan2-nvlabs-ada
# %cd /content/stylegan2-nvlabs-ada
if generate&gen_sg2_shawwn:
# %cd /content
# !git clone https://github.com/shawwn/stylegan2.git /content/stylegan2-shawwn
# %cd /content/stylegan2-shawwn
if generate&gen_sg2_cyrilzakka_tpu:
# %cd /content
# !git clone https://github.com/cyrilzakka/stylegan2-tpu.git /content/stylegan2-cyrilzakka-tpu
# %cd /content/stylegan2-cyrilzakka-tpu
if generate&gen_sg2_shawwn_tpu:
# %cd /content
# !git clone --branch tpu https://github.com/shawwn/stylegan2.git /content/stylegan2-shawwn-tpu
# %cd /content/stylegan2-shawwn-tpu
if generate&gen_sg2_moono_tf2:
# %cd /content
# !git clone https://github.com/moono/stylegan2-tf-2.x.git /content/stylegan2-moono-tf2
# %cd /content/stylegan2-moono-tf2
if generate&gen_sg2_rosasalberto_tf2:
# %cd /content
# !git clone https://github.com/rosasalberto/StyleGAN2-TensorFlow-2.x.git /content/stylegan2-rosasalberto-tf2
# %cd /content/stylegan2-rosasalberto-tf2
if generate&gen_sg2_nagolinc_pt:
# %cd /content
# !git clone https://github.com/nagolinc/stylegan2-pytorch.git /content/stylegan2-nagolinc-pytorch
# %cd /content/stylegan2-nagolinc-pytorch
#if generate&gen_sg2_aydao_surgery_model_release:
# %cd /content
# # !git clone --branch model-release https://github.com/aydao/stylegan2-surgery.git /content/stylegan2-aydao-surgery-model-release
# %cd stylegan2-aydao-surgery-model-release
# + id="LmDJ996ZBg4V"
def download_file_from_google_drive(file_id,dest_path):
import os.path
while not os.path.exists(dest_path):
# !mkdir -p $(dirname {dest_path})
# !wget --save-cookies cookies.txt 'https://docs.google.com/uc?export=download&id='{file_id} -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1/p' > confirm.txt
import os
if os.path.getsize("confirm.txt")==0:
# !wget -O {dest_path} 'https://docs.google.com/uc?export=download&id='{file_id}
else:
# !wget --load-cookies cookies.txt -O {dest_path} 'https://docs.google.com/uc?export=download&id='{file_id}'&confirm='$(<confirm.txt)
if os.path.getsize(dest_path)==0:
# !rm {dest_path}
# + id="MON1WhlMATAh"
files_path=[]
if generate&gen_drums:
files_path = [['1ZJir-_ls92s56LFmw_HVuQ7ANqFFN5WG', '/content/model/model.ckpt-18637.data-00000-of-00001'],
['1d5ayi4w-70AvKYPk-8sXzsSzpK1jMRgm', '/content/model/model.ckpt-18637.index'],
['15CWn0yK3FKsHbAOGNLQYVg4eZC1oNIrL', '/content/model/model.ckpt-18637.meta'],
['1x5QEFeoochk-rhvtvJc98kIB5_SAwn0u', '/content/model/args.txt'],
['1UgSZaBTCTDXaPbfv8l0wmpHbD5u051o5', '/content/model/graph.pbtxt'],
['1LGfAkuOFvA3NdFE_rOq9WyeXGGgEOf0F', '/content/model/checkpoint'],
['1bPD0bXCC_18oWbUjmjkacF-CShlA6yNd', '/content/model/infer/infer.pbtxt'],
['13OQuRx7Ku6KJ9o9FU-JN3yB0Njul9Vem', '/content/model/infer/infer.meta']]
for i in range(len(files_path)):
download_file_from_google_drive(file_id=files_path[i][0], dest_path=files_path[i][1])
files_path=[]
if generate&gen_ss_wm50_tm07_dm070:
files_path = [['1_x5Ath-6CRtjoiGXrkTqz1jhhYrAISX_', '/content/silent_speech-dgaddy-pytorch/models/wavenet_model/wavenet_model_50.pt',
'wavenet_model_50',''],
['1cHkkUC8xbwbCnV76ewwxU2t_GPr5r-jj', '/content/silent_speech-dgaddy-pytorch/models/transduction_model/model_07.pt',
'model_07',''],
['16UhHp3FLiDl1wwgEvOsz9hsFDxYAYnK0', '/content/silent_speech-dgaddy-pytorch/deepspeech-0.7.0-models.pbmm',
'deepspeech-0.7.0-models',''],
['1q34LabqWGIOKwf5DfJYOLZksnNuc2rpv', '/content/silent_speech-dgaddy-pytorch/deepspeech-0.7.0-models.scorer',
'deepspeech-0.7.0-models',''],
['1p97-FG984_OQhk0X2okpyUbByG3DJhdb', '/content/silent_speech-dgaddy-pytorch/emg_data.zip',
'emg_data',''],
['1RNYqqutEeSpFny_yYarHLeZvgxrvE_dm', '/content/silent_speech-dgaddy-pytorch/out.tar.gz',
'out',''],
['1Adhn8Y4qplXMtp44VqRm7esUdhxd3pQk', '/content/silent_speech-dgaddy-pytorch/books/War_of_the_Worlds.txt',
'book',''],
['1YoycqFrjtWnDM67vSc4yC3U-WxOUSvrN', '/content/silent_speech-dgaddy-pytorch/testset_onlinedev.json',
'testset_onlinedev','']]
if generate&gen_abctract_art:
files_path = [['1ie1vWw1JNsfrZWRtMvhteqzVz4mt4KGa', '/content/model/sg2-ada_abstract_network-snapshot-000188.pkl',
'sg2-ada_abstract_network-snapshot-000188','stylegan2-ada']]
if generate&gen_anime_protraits:
files_path = [['1aUrChOhq5jDEddZK1v_Dp1vYNlHSBL9o', '/content/model/sg2-ada_2020-01-11-skylion-stylegan2-animeportraits-networksnapshot-024664.pkl',
'sg2-ada_2020-01-11-skylion-stylegan2-animeportraits-networksnapshot-024664','stylegan2-ada']]
if generate&gen_abctract_art:
if generate&gen_anime_protraits:
files_path = [['1aUrChOhq5jDEddZK1v_Dp1vYNlHSBL9o', '/content/model/sg2-ada_2020-01-11-skylion-stylegan2-animeportraits-networksnapshot-024664.pkl',
'sg2-ada_2020-01-11-skylion-stylegan2-animeportraits-networksnapshot-024664','stylegan2-ada'],
['1ie1vWw1JNsfrZWRtMvhteqzVz4mt4KGa', '/content/model/sg2-ada_abstract_network-snapshot-000188.pkl',
'sg2-ada_abstract_network-snapshot-000188','stylegan2-ada']]
if generate&gen_tadne:
# files_path = [['1sdnL-lIl2kYAnuleafK-5GPiLNHfxh4W', '/content/model/sg2-ext_aydao-anime-danbooru2019s-512-5268480.pkl',
# 'sg2-ext_aydao-anime-danbooru2019s-512-5268480','stylegan2-shawwn']]
files_path = [['1LCkyOPmcWBsPlQX_DxKAuPM1Ew_nh83I', '/content/model/sg2-ext_network-tadne.pkl',
'sg2-ext_network-tadne','stylegan2-shawwn']]
#files_path = [['1l5zG0g_RMEAwFUK_veD1EZweVEoY9gUT', '/content/model/aydao-anime-danbooru2019s-512-5268480.pkl']]
# files_path = [['1BHeqOZ58WZ-vACR2MJkh1ZVbJK2B-Kle', '/content/model/network-snapshot-017325.pkl']]
# files_path = [['1WNQELgHnaqMTq3TlrnDaVkyrAH8Zrjez', '/content/model/network-snapshot-018528.pkl']]
if generate&gen_anime:
files_path = [['1YckI8gwqPbZBI8X4eaQAJCgWx-CqCTdi', '/content/model/sg2_anime_network-snapshot-018528.pkl',
'sg2_anime_network-snapshot-018528']]
if generate&gen_anime_tf2:
files_path = [['1-1neAg_FUymzBvCStMe7CfV94-VD22kk', '/content/stylegan2-moono-tf2/official-converted/cuda/ckpt-0.data-00000-of-00001'],
['1-4ih0wi68y4xH5tg0_kClpuWDSnvdmoE', '/content/stylegan2-moono-tf2/official-converted/cuda/ckpt-0.index'],
['1-C6H58vmfZykqWpilR1u9puH8oPFQtcQ', '/content/stylegan2-moono-tf2/official-converted/cuda/checkpoint']]
if generate&gen_abctract_art_tf2_npy:
# files_path = [['1cauGWIVGGiMJA0_OZftJU3-rVAVdFwZM', '/content/stylegan2-rosasalberto-tf2/weights/sg2-ada_abstract_network-snapshot-000188.npy',
# 'sg2-ada_abstract_network-snapshot-000188']]
files_path = [['1-CXjDfP_g5ZD5aC9AwOXEC5WNIf5dCEh', '/content/stylegan2-rosasalberto-tf2/weights/sg2-ada_abstract_network-snapshot-000188.npy',
'sg2-ada_abstract_network-snapshot-000188']]
if generate&gen_anime_protraits_tf2_npy:
# files_path = [['1-Cp-RRJnjvfCIrD0ylaUYxvLbxN4aj8K', '/content/stylegan2-rosasalberto-tf2/weights/sg2-ada_2020-01-11-skylion-stylegan2-animeportraits-networksnapshot-024664.npy',
# 'sg2-ada_2020-01-11-skylion-stylegan2-animeportraits-networksnapshot-024664']]
files_path = [['1-AiS_pdkssIz_nU9GYSLJRZiXgJpCrSo', '/content/stylegan2-rosasalberto-tf2/weights/sg2-ada_2020-01-11-skylion-stylegan2-animeportraits-networksnapshot-024664.npy',
'sg2-ada_2020-01-11-skylion-stylegan2-animeportraits-networksnapshot-024664']]
if generate&gen_tadne_tf2_npy:
files_path = [['1-36-rfueVBWvigBvCuwzrXKl1AeVtzu6', '/content/stylegan2-rosasalberto-tf2/weights/sg2-ext_aydao-anime-danbooru2019s-512-5268480.npy',
'sg2-ext_aydao-anime-danbooru2019s-512-5268480']]
if generate&gen_anime_tf2_npy:
files_path = [['1--ajK29hgTTAYNcZhQk9lLFhwUlXqxNA', '/content/stylegan2-rosasalberto-tf2/weights/sg2_anime_network-snapshot-018528.npy',
'sg2_anime_network-snapshot-018528']]
if generate&gen_thdne:
files_path = [['106KNd9oqMmslYKpGmjUkyjNZzpd_vKmM', '/content/model/sg2-ext_thdne-120320.pkl',
'sg2-ext_thdne','stylegan2-shawwn']]
# files_path = [['1-Fop9RImTgWh3-WtehUAKNfcPskmzx1O', '/content/model/sg2-ext_thdne-34048.pkl',
# 'sg2-ext_thdne','stylegan2-shawwn']]
# files_path = [['1-KLjFV2mCQw7AiofYO-EKSV5XZutbDcn', '/content/model/sg2-ext_thdne-16384.pkl',
# 'sg2-ext_thdne','stylegan2-shawwn']]
# files_path = [['1zBXH6O-6i3TPorFAVZ8HqgamjWq3WTeg', '/content/model/sg2-ext_thdne-113152.pkl',
# 'sg2-ext_thdne','stylegan2-shawwn']]
# files_path = [['1-GdLwuYWdu3QLGQJWKCRsLLYjgQpxrgd', '/content/model/sg2-ext_thdne-95232.pkl',
# 'sg2-ext_thdne','stylegan2-shawwn']]
# files_path = [['1O1dCRbeMjD0EemjVHWmoO_x-58vOzNuK', '/content/model/sg2-ext_thdne-latest.pkl',
# 'sg2-ext_thdne','stylegan2-shawwn']]
# files_path = [['1-Nhf4lcxSiUvCmQiHJNU1GfFTVNWobvY', '/content/model/sg2-ext_thdne-54784.pkl',
# 'sg2-ext_thdne','stylegan2-shawwn']]
#if generate&gen_thdne_256:
# files_path = [['1-J_b-nX0KnKK_fDkZ8KC0RgoP9CTmAxy', '/content/model/sg2-ext_network-thdne_256.pkl',
# 'sg2-ext_network-thdne_256','stylegan2-shawwn']]
#if generate&gen_thdne_256_pt:
# files_path = [['1L3joymV2LartzOSMXyzzniGr57Gqf_1z', '/content/model/sg2-ada-pt_thdne-snapshot-256-latest.pkl',
# 'sg2-ada-pt_thdne-snapshot-256-latest','stylegan2-ada-pytorch']]
#https://drive.google.com/file/d/1ie1vWw1JNsfrZWRtMvhteqzVz4mt4KGa/view?usp=sharing
#network-snapshot-000188.pkl
#https://drive.google.com/file/d/1YckI8gwqPbZBI8X4eaQAJCgWx-CqCTdi/view?usp=sharing
#network-snapshot-018528.pkl
for i in range(len(files_path)):
download_file_from_google_drive(file_id=files_path[i][0], dest_path=files_path[i][1])
# + id="S1cqleJU9YdF"
# #!python convert_ckpt_to_pkl.py --ckpt_model_dir gs://train_with_tpu/networks_aydao/sq-512-135680 --tpu_address={tpu_address} --output_pkl_dir gs://train_with_tpu/networks_aydao/ --reference_pkl gs://train_with_tpu/models/2020-11-27-aydao-stylegan2ext-danbooru2019s-512px-5268480.pkl
#files_path = [['', 'gs://train_with_tpu/networks_aydao/model.ckpt-113152.pkl', 'sg2-ext_thdne','stylegan2-shawwn']]
# #!gsutil cp -r gs://train_with_tpu/networks_aydao/model.ckpt-113152.pkl /content/drive/MyDrive/networks_aydao/model.ckpt-113152.pkl
# + id="YbqAzyggAXtL"
if generate&gen_stylegan2:
# !pip install Pillow
import PIL.Image
# !pip install tqdm
from tqdm import tqdm
# !pip install imageio==2.4.1
# !pip install imageio-ffmpeg==0.4.3 pyspng==0.1.0
# + id="uqYszSxLKjt1"
if generate&gen_sg2_rosasalberto_tf2:
import tensorflow as tf
import numpy as np
# !pip install matplotlib
import matplotlib.pyplot as plt
from utils.utils_stylegan2 import convert_images_to_uint8
def generate_and_plot_images(gen, seed, w_avg, truncation_psi=1):
""" plot images from generator output """
fig, ax = plt.subplots(1,3,figsize=(15,15))
for i in range(3):
# creating random latent vector
rnd = np.random.RandomState(seed)
z = rnd.randn(1, 512).astype('float32')
# running mapping network
dlatents = gen.mapping_network(z)
            # adjusting dlatents depending on truncation_psi; if truncation_psi == 1, no adjustment
dlatents = w_avg + (dlatents - w_avg) * truncation_psi
# running synthesis network
out = gen.synthesis_network(dlatents)
#converting image/s to uint8
img = convert_images_to_uint8(out, nchw_to_nhwc=True, uint8_cast=True)
#plotting images
ax[i].axis('off')
img_plot = ax[i].imshow(img.numpy()[0])
seed += 1
impl = 'ref' # 'ref' if cuda is not available in your machine
gpu = False # False if tensorflow cpu is used
if generate&gen_tpu:
impl = 'ref' # 'ref' if cuda is not available in your machine
gpu = False # False if tensorflow cpu is used
if generate&gen_gpu:
impl = 'cuda' # 'ref' if cuda is not available in your machine
gpu = True # False if tensorflow cpu is used
import tensorflow as tf
import numpy as np
from utils.weights_map import available_weights, synthesis_weights, mapping_weights, weights_stylegan2_dir
from utils.utils_stylegan2 import nf
from layers.dense_layer import DenseLayer
from layers.synthesis_main_layer import SynthesisMainLayer
from layers.to_rgb_layer import ToRgbLayer
from dnnlib.ops.upfirdn_2d import upsample_2d
class MappingNetwork(tf.keras.layers.Layer):
"""
StyleGan2 generator mapping network, from z to dlatents for tensorflow 2.x
"""
def __init__(self, resolution=1024, **kwargs):
super(MappingNetwork, self).__init__(**kwargs)
self.dlatent_size = 512
self.dlatent_vector = (int(np.log2(resolution))-1)*2
self.mapping_layers = 8
self.lrmul = 0.01
def build(self, input_shape):
self.weights_dict = {}
for i in range(self.mapping_layers):
setattr(self, 'Dense{}'.format(i), DenseLayer(fmaps=512, lrmul=self.lrmul, name='Dense{}'.format(i)))
self.g_mapping_broadcast = tf.keras.layers.RepeatVector(self.dlatent_vector)
def call(self, z):
z = tf.cast(z, 'float32')
# Normalize inputs
scale = tf.math.rsqrt(tf.reduce_mean(tf.square(z), axis=1, keepdims=True) + 1e-8)
x = tf.math.multiply(z, scale)
# Mapping
for i in range(self.mapping_layers):
x = getattr(self, 'Dense{}'.format(i))(x)
x = tf.math.multiply(tf.nn.leaky_relu(x, 0.2), tf.math.sqrt(2.))
# Broadcasting
dlatents = self.g_mapping_broadcast(x)
return dlatents
class SynthesisNetwork(tf.keras.layers.Layer):
"""
StyleGan2 generator synthesis network from dlatents to img tensor for tensorflow 2.x
"""
def __init__(self, resolution=1024, impl='cuda', gpu=True, **kwargs):
"""
Parameters
----------
resolution : int, optional
Resolution output of the synthesis network, will be parsed to the floor integer power of 2.
The default is 1024.
impl : str, optional
            Whether to run some convolutions in custom tensorflow operations or cuda operations. 'ref' and 'cuda' available.
The default is 'cuda'.
gpu : boolean, optional
            Whether to use gpu. The default is True.
"""
super(SynthesisNetwork, self).__init__(**kwargs)
self.impl = impl
self.gpu = gpu
self.resolution = resolution
self.resolution_log2 = int(np.log2(self.resolution))
self.resample_kernel = [1, 3, 3, 1]
def build(self, input_shape):
#constant layer
self.const_4_4 = self.add_weight(name='4x4/Const/const', shape=(1, 512, 4, 4),
initializer=tf.random_normal_initializer(0, 1), trainable=True)
#early layer 4x4
self.layer_4_4 = SynthesisMainLayer(fmaps=nf(1), impl=self.impl, gpu=self.gpu, name='4x4')
self.torgb_4_4 = ToRgbLayer(impl=self.impl, gpu=self.gpu, name='4x4')
#main layers
for res in range(3, self.resolution_log2 + 1):
res_str = str(2**res)
setattr(self, 'layer_{}_{}_up'.format(res_str, res_str),
SynthesisMainLayer(fmaps=nf(res-1), impl=self.impl, gpu=self.gpu, up=True, name='{}x{}'.format(res_str, res_str)))
setattr(self, 'layer_{}_{}'.format(res_str, res_str),
SynthesisMainLayer(fmaps=nf(res-1), impl=self.impl, gpu=self.gpu, name='{}x{}'.format(res_str, res_str)))
setattr(self, 'torgb_{}_{}'.format(res_str, res_str),
ToRgbLayer(impl=self.impl, gpu=self.gpu, name='{}x{}'.format(res_str, res_str)))
def call(self, dlatents_in):
dlatents_in = tf.cast(dlatents_in, 'float32')
y = None
# Early layers
x = tf.tile(tf.cast(self.const_4_4, 'float32'), [tf.shape(dlatents_in)[0], 1, 1, 1])
x = self.layer_4_4(x, dlatents_in[:, 0])
y = self.torgb_4_4(x, dlatents_in[:, 1], y)
# Main layers
for res in range(3, self.resolution_log2 + 1):
x = getattr(self, 'layer_{}_{}_up'.format(2**res, 2**res))(x, dlatents_in[:, res*2-5])
x = getattr(self, 'layer_{}_{}'.format(2**res, 2**res))(x, dlatents_in[:, res*2-4])
y = upsample_2d(y, k=self.resample_kernel, impl=self.impl, gpu=self.gpu)
y = getattr(self, 'torgb_{}_{}'.format(2**res, 2**res))(x, dlatents_in[:, res*2-3], y)
images_out = y
return tf.identity(images_out, name='images_out')
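The loop above consumes styles at indices `res*2-5`, `res*2-4` and `res*2-3`. A quick standalone check that, for the default 1024 resolution, those indices (plus the two used by the 4x4 block) exactly cover the `(log2(resolution)-1)*2` style vectors the mapping network broadcasts:

```python
import numpy as np

resolution = 1024
resolution_log2 = int(np.log2(resolution))   # 10
dlatent_vector = (resolution_log2 - 1) * 2   # 18 style vectors

used = [0, 1]  # consumed by layer_4_4 and torgb_4_4
for res in range(3, resolution_log2 + 1):
    used.extend([res * 2 - 5, res * 2 - 4, res * 2 - 3])

max_index = max(used)  # should be the last style index, dlatent_vector - 1
```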
class StyleGan2Generator(tf.keras.layers.Layer):
"""
StyleGan2 generator config f for tensorflow 2.x
"""
def __init__(self, resolution=1024, weights=None, impl='cuda', gpu=True, **kwargs):
"""
Parameters
----------
resolution : int, optional
Resolution output of the synthesis network, will be parsed
to the floor integer power of 2.
The default is 1024.
weights : string, optional
weights name in weights dir to be loaded. The default is None.
        impl : str, optional
            Whether to run some convolutions in custom tensorflow operations
            or cuda operations. 'ref' and 'cuda' available.
            The default is 'cuda'.
        gpu : boolean, optional
            Whether to use gpu. The default is True.
"""
super(StyleGan2Generator, self).__init__(**kwargs)
self.resolution = resolution
if weights is not None: self.__adjust_resolution(weights)
self.mapping_network = MappingNetwork(resolution=self.resolution,name='Mapping_network')
self.synthesis_network = SynthesisNetwork(resolution=self.resolution, impl=impl,
gpu=gpu, name='Synthesis_network')
# load weights
if weights is not None:
#we run the network to define it, not the most efficient thing to do...
_ = self(tf.zeros(shape=(1, 512)))
self.__load_weights(weights)
def call(self, z):
"""
Parameters
----------
z : tensor, latent vector of shape [batch, 512]
Returns
-------
img : tensor, image generated by the generator of shape [batch, channel, height, width]
"""
dlatents = self.mapping_network(z)
img = self.synthesis_network(dlatents)
return img
def __adjust_resolution(self, weights_name):
"""
Adjust resolution of the synthesis network output.
Parameters
----------
weights_name : name of the weights
Returns
-------
None.
"""
if weights_name == 'ffhq':
self.resolution = 1024
elif weights_name == 'car':
self.resolution = 512
elif weights_name in ['cat', 'church', 'horse']:
self.resolution = 256
elif weights_name == 'sg2-ada_2020-01-11-skylion-stylegan2-animeportraits-networksnapshot-024664':
self.resolution = 512
elif weights_name == 'sg2_anime_network-snapshot-018528':
self.resolution = 512
elif weights_name == 'sg2-ext_aydao-anime-danbooru2019s-512-5268480':
self.resolution = 1024
elif weights_name == 'sg2-ada_abstract_network-snapshot-000188':
self.resolution = 1024
def __load_weights(self, weights_name):
"""
Load pretrained weights, stored as a dict with numpy arrays.
Parameters
----------
weights_name : name of the weights
Returns
-------
None.
"""
if (weights_name in available_weights) and type(weights_name) == str:
data = np.load(weights_stylegan2_dir + weights_name + '.npy', allow_pickle=True)[()]
#datatmp = np.load(weights_stylegan2_dir + weights_name + '.npy', allow_pickle=True)[()]
#data=datatmp.copy()
#for key in datatmp.keys():
# if not (key[:4]=='disc'):
# del data[key]
            weights_mapping = [data.get(key) for key in mapping_weights]
weights_synthesis = [data.get(key) for key in synthesis_weights[weights_name]]
#print(weights_synthesis)
self.mapping_network.set_weights(weights_mapping)
self.synthesis_network.set_weights(weights_synthesis)
print("Loaded {} generator weights!".format(weights_name))
else:
raise Exception('Cannot load {} weights'.format(weights_name))
def generate_and_plot_images_notrunc(gen, seed):
""" plot images from generator output """
fig, ax = plt.subplots(1,3,figsize=(15,15))
for i in range(3):
# creating random latent vector
rnd = np.random.RandomState(seed)
z = rnd.randn(1, 512).astype('float32')
# running mapping network
dlatents = gen.mapping_network(z)
        # adjusting dlatents depending on truncation_psi; if truncation_psi == 1, no adjustment
#dlatents = w_avg + (dlatents - w_avg) * truncation_psi
# running synthesis network
out = gen.synthesis_network(dlatents)
#converting image/s to uint8
img = convert_images_to_uint8(out, nchw_to_nhwc=True, uint8_cast=True)
#plotting images
ax[i].axis('off')
img_plot = ax[i].imshow(img.numpy()[0])
#plt.axis('off')
#plt.imshow(img.numpy()[0])
#plt.show()
seed += 1
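The commented-out line above sketches the truncation trick: pull each style vector toward the average w. A self-contained NumPy version (here `w_avg` is a zero placeholder for illustration, not the real `*_dlatent_avg.npy` average):

```python
import numpy as np

def truncate_dlatents(dlatents, w_avg, truncation_psi=0.5):
    # interpolate each style vector toward the average w; psi=1 leaves it unchanged
    return w_avg + (dlatents - w_avg) * truncation_psi

w_avg = np.zeros((512,), dtype=np.float32)  # placeholder average w
dlatents = np.random.RandomState(0).randn(1, 18, 512).astype(np.float32)
truncated = truncate_dlatents(dlatents, w_avg, truncation_psi=1.0)  # identical to dlatents
```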
weights_name = files_path[0][2]
from utils.weights_map import synthesis_weights_1024, synthesis_weights_512, synthesis_weights_256
from utils.weights_map import discriminator_weights_1024, discriminator_weights_512, discriminator_weights_256
available_weights = ['ffhq', 'car', 'cat', 'church', 'horse',
'sg2-ada_2020-01-11-skylion-stylegan2-animeportraits-networksnapshot-024664',
'sg2_anime_network-snapshot-018528',
'sg2-ext_aydao-anime-danbooru2019s-512-5268480',
'sg2-ada_abstract_network-snapshot-000188']
mapping_weights = ['Dense0/weight', 'Dense0/bias',
                   'Dense1/weight', 'Dense1/bias',
                   'Dense2/weight', 'Dense2/bias',
                   'Dense3/weight', 'Dense3/bias',
                   'Dense4/weight', 'Dense4/bias',
                   'Dense5/weight', 'Dense5/bias',
                   'Dense6/weight', 'Dense6/bias',
                   'Dense7/weight', 'Dense7/bias']
synthesis_weights = {
'ffhq' : synthesis_weights_1024,
'car' : synthesis_weights_512,
'cat' : synthesis_weights_256,
'horse' : synthesis_weights_256,
'church' : synthesis_weights_256,
'sg2-ada_2020-01-11-skylion-stylegan2-animeportraits-networksnapshot-024664' : synthesis_weights_512,
'sg2_anime_network-snapshot-018528' : synthesis_weights_512,
'sg2-ext_aydao-anime-danbooru2019s-512-5268480' : synthesis_weights_1024,
'sg2-ada_abstract_network-snapshot-000188' : synthesis_weights_1024
}
discriminator_weights = {
'ffhq' : discriminator_weights_1024,
'car' : discriminator_weights_512,
'cat' : discriminator_weights_256,
'horse' : discriminator_weights_256,
'church' : discriminator_weights_256,
'sg2-ada_2020-01-11-skylion-stylegan2-animeportraits-networksnapshot-024664' : discriminator_weights_512,
'sg2_anime_network-snapshot-018528' : discriminator_weights_512,
'sg2-ext_aydao-anime-danbooru2019s-512-5268480' : discriminator_weights_1024,
'sg2-ada_abstract_network-snapshot-000188' : discriminator_weights_1024
}
# instantiating generator network
generator = StyleGan2Generator(weights=weights_name, impl=impl, gpu=gpu)
# loading w average
#w_average = np.load('weights/{}_dlatent_avg.npy'.format(weights_name))
# + id="lb6XC_M-AeoC"
if generate&gen_sg2_moono_tf2:
import os
import numpy as np
import tensorflow as tf
from PIL import Image
from stylegan2.utils import postprocess_images
from load_models import load_generator
from copy_official_weights import convert_official_weights_together
if True:
from tf_utils import allow_memory_growth
allow_memory_growth()
# common variables
ckpt_dir_base = './official-converted'
use_custom_cuda=True
# saving phase
#for use_custom_cuda in [True, False]:
# ckpt_dir = os.path.join(ckpt_dir_base, 'cuda') if use_custom_cuda else os.path.join(ckpt_dir_base, 'ref')
# convert_official_weights_together(ckpt_dir, use_custom_cuda)
# inference phase
ckpt_dir_cuda = os.path.join(ckpt_dir_base, 'cuda')
ckpt_dir_ref = os.path.join(ckpt_dir_base, 'ref')
g_clone = load_generator(g_params=None, is_g_clone=True, ckpt_dir=ckpt_dir_cuda, custom_cuda=use_custom_cuda)
#if generate_stylegan2_tpu:
# tflib.init_tf()
# import pretrained_networks
# _G, _D, Gs = pretrained_networks.load_networks(network_pkl)
# + id="cvoouJG1Pt-c"
if generate&gen_sg2_aydao_surgery_model_release:
import argparse
import numpy as np
import PIL.Image
import dnnlib
import dnnlib.tflib as tflib
import re
import sys
import pretrained_networks
#----------------------------------------------------------------------------
def generate_images(network_pkl, seeds, truncation_psi):
print('Loading networks from "%s"...' % network_pkl)
global _G, _D, Gs, Gs_kwargs
_G, _D, Gs = pretrained_networks.load_networks(network_pkl)
noise_vars = [var for name, var in Gs.components.synthesis.vars.items() if name.startswith('noise')]
Gs_kwargs = dnnlib.EasyDict()
Gs_kwargs.output_transform = dict(func=tflib.convert_images_to_uint8, nchw_to_nhwc=True)
Gs_kwargs.randomize_noise = False
if truncation_psi is not None:
Gs_kwargs.truncation_psi = truncation_psi
for seed_idx, seed in enumerate(seeds):
print('Generating image for seed %d (%d/%d) ...' % (seed, seed_idx, len(seeds)))
rnd = np.random.RandomState(seed)
z = rnd.randn(1, *Gs.input_shape[1:]) # [minibatch, component]
tflib.set_vars({var: rnd.randn(*var.shape.as_list()) for var in noise_vars}) # [height, width]
images = Gs.run(z, None, **Gs_kwargs) # [minibatch, height, width, channel]
PIL.Image.fromarray(images[0], 'RGB').save(dnnlib.make_run_dir_path('seed%04d.png' % seed))
#----------------------------------------------------------------------------
def style_mixing_example(network_pkl, row_seeds, col_seeds, truncation_psi, col_styles, minibatch_size=4):
print('Loading networks from "%s"...' % network_pkl)
global _G, _D, Gs, Gs_syn_kwargs
_G, _D, Gs = pretrained_networks.load_networks(network_pkl)
w_avg = Gs.get_var('dlatent_avg') # [component]
Gs_syn_kwargs = dnnlib.EasyDict()
Gs_syn_kwargs.output_transform = dict(func=tflib.convert_images_to_uint8, nchw_to_nhwc=True)
Gs_syn_kwargs.randomize_noise = False
Gs_syn_kwargs.minibatch_size = minibatch_size
print('Generating W vectors...')
all_seeds = list(set(row_seeds + col_seeds))
all_z = np.stack([np.random.RandomState(seed).randn(*Gs.input_shape[1:]) for seed in all_seeds]) # [minibatch, component]
all_w = Gs.components.mapping.run(all_z, None) # [minibatch, layer, component]
all_w = w_avg + (all_w - w_avg) * truncation_psi # [minibatch, layer, component]
w_dict = {seed: w for seed, w in zip(all_seeds, list(all_w))} # [layer, component]
print('Generating images...')
all_images = Gs.components.synthesis.run(all_w, **Gs_syn_kwargs) # [minibatch, height, width, channel]
image_dict = {(seed, seed): image for seed, image in zip(all_seeds, list(all_images))}
print('Generating style-mixed images...')
for row_seed in row_seeds:
for col_seed in col_seeds:
w = w_dict[row_seed].copy()
w[col_styles] = w_dict[col_seed][col_styles]
image = Gs.components.synthesis.run(w[np.newaxis], **Gs_syn_kwargs)[0]
image_dict[(row_seed, col_seed)] = image
print('Saving images...')
for (row_seed, col_seed), image in image_dict.items():
PIL.Image.fromarray(image, 'RGB').save(dnnlib.make_run_dir_path('%d-%d.png' % (row_seed, col_seed)))
print('Saving image grid...')
_N, _C, H, W = Gs.output_shape
canvas = PIL.Image.new('RGB', (W * (len(col_seeds) + 1), H * (len(row_seeds) + 1)), 'black')
for row_idx, row_seed in enumerate([None] + row_seeds):
for col_idx, col_seed in enumerate([None] + col_seeds):
if row_seed is None and col_seed is None:
continue
key = (row_seed, col_seed)
if row_seed is None:
key = (col_seed, col_seed)
if col_seed is None:
key = (row_seed, row_seed)
canvas.paste(PIL.Image.fromarray(image_dict[key], 'RGB'), (W * col_idx, H * row_idx))
canvas.save(dnnlib.make_run_dir_path('grid.png'))
#----------------------------------------------------------------------------
def _parse_num_range(s):
'''Accept either a comma separated list of numbers 'a,b,c' or a range 'a-c' and return as a list of ints.'''
range_re = re.compile(r'^(\d+)-(\d+)$')
m = range_re.match(s)
if m:
            return list(range(int(m.group(1)), int(m.group(2)) + 1))
vals = s.split(',')
return [int(x) for x in vals]
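`_parse_num_range` above accepts either a dash range or a comma-separated list. A self-contained restatement with example inputs, returning a plain list in both branches:

```python
import re

def parse_num_range(s):
    # 'a-b' -> [a..b]; otherwise comma-separated ints
    m = re.match(r'^(\d+)-(\d+)$', s)
    if m:
        return list(range(int(m.group(1)), int(m.group(2)) + 1))
    return [int(x) for x in s.split(',')]

seeds_range = parse_num_range('2-4')   # [2, 3, 4]
seeds_list = parse_num_range('1,5,9')  # [1, 5, 9]
```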
# + id="c2comFiMPa2H"
if generate&gen_sg2_aydao_surgery_model_release:
# #%env TF_XLA_FLAGS="--tf_xla_auto_jit=2 --tf_xla_cpu_global_jit"
# %env TF_XLA_FLAGS="--tf_xla_auto_jit=2 --tf_xla_cpu_global_jit --tf_xla_clustering_debug"
# %env TF_DUMP_GRAPH_PREFIX="gs://train_with_tpu/generated"
# #%env TF_DUMP_GRAPH_PREFIX="/content/generated"
# #%env XLA_FLAGS="--xla_dump_to=/content/generated"
# %env XLA_FLAGS="--xla_dump_hlo_as_text --xla_dump_to=gs://train_with_tpu/generated"
# #%env XLA_FLAGS="--xla_dump_hlo_as_text --xla_dump_to=/content/generated"
_G = None
_D = None
Gs = None
Gs_syn_kwargs = None
Gs_kwargs = None
class Namespace:
def __init__(self, **kwargs):
self.__dict__.update(kwargs)
#from argparse import Namespace
network_pkl=files_path[0][1]
truncation_psi=0.5
args = Namespace(command='generate-images', network_pkl=network_pkl,
result_dir='/content/result_dir', seeds=[66],
truncation_psi=0.5)
kwargs = vars(args)
subcmd = kwargs.pop('command')
sc = dnnlib.SubmitConfig()
sc.num_gpus = 8
# sc.num_gpus = 1
sc.submit_target = dnnlib.SubmitTarget.LOCAL
sc.local.do_not_copy_source_files = True
sc.run_dir_root = kwargs.pop('result_dir')
sc.run_desc = subcmd
func_name_map = {
'generate-images': 'run_generator.generate_images',
'style-mixing-example': 'run_generator.style_mixing_example'
}
dnnlib.submit_run(sc, func_name_map[subcmd], **kwargs)
print('Loading networks from "%s"...' % network_pkl)
_G, _D, Gs = pretrained_networks.load_networks(network_pkl)
noise_vars = [var for name, var in Gs.components.synthesis.vars.items() if name.startswith('noise')]
Gs_kwargs = dnnlib.EasyDict()
Gs_kwargs.output_transform = dict(func=tflib.convert_images_to_uint8, nchw_to_nhwc=True)
Gs_kwargs.randomize_noise = False
if truncation_psi is not None:
Gs_kwargs.truncation_psi = truncation_psi
# + id="MgmmI70rAiey"
if generate&gen_stylegan2:
if generate&gen_tf1:
if generate&gen_gpu:
#if generate_stylegan2_ada or generate_stylegan2_ext:
import dnnlib
import dnnlib.tflib as tflib
tflib.init_tf()
import pickle
network_pkl=files_path[0][1]
#with dnnlib.util.open_url(network_pkl) as fp:
# _G, _D, Gs = pickle.load(fp)
_G, _D, Gs = pickle.load(open(network_pkl, "rb"))
if generate&gen_tf2_npy:
import numpy as np
data = {}
# import pretrained_networks
# g, d, Gs_network = pretrained_networks.load_networks('/content/model/2020-01-11-skylion-stylegan2-animeportraits-networksnapshot-024664.pkl')
# for key in d.trainables.keys():
# data['disc_'+ key] = d.get_var(key)
#print(_G)
#print(_D)
#print(Gs)
_G.print_layers()
_D.print_layers()
Gs.print_layers()
for key in _G.trainables.keys():
data[key[key.find('/')+1:]] = _G.get_var(key)
#for key in Gs.trainables.keys():
# data[key[key.find('/')+1:]] = Gs.get_var(key)
#for key in _G.trainables.keys():
# data[key] = _G.get_var(key)
#for key in Gs.trainables.keys():
# data['gens_'+ key] = Gs.get_var(key)
for key in _D.trainables.keys():
data['disc_'+ key] = _D.get_var(key)
np.save('/content/model/{}.npy'.format(files_path[0][2]), data, allow_pickle=True)
#from google.colab import files
#files.download('/content/model/{}.npy'.format(files_path[0][2]))
from google.colab import drive
drive.mount('/content/gdrive')
# !mkdir /content/gdrive/MyDrive/EEG-GAN-audio-video/models
# !cp -r -v "/content/model/{files_path[0][2]}.npy" "/content/gdrive/MyDrive/EEG-GAN-audio-video/models/{files_path[0][2]}.npy"
# + id="bhYyUwxJ1oxu"
if generate&gen_sg2_nagolinc_pt:
import subprocess
CUDA_version = [s for s in subprocess.check_output(["nvcc", "--version"]).decode("UTF-8").split(", ") if s.startswith("release")][0].split(" ")[-1]
print("CUDA version:", CUDA_version)
if CUDA_version == "10.0":
torch_version_suffix = "+cu100"
elif CUDA_version == "10.1":
torch_version_suffix = "+cu101"
elif CUDA_version == "10.2":
torch_version_suffix = ""
else:
torch_version_suffix = "+cu110"
# !pip install ninja
# !pip install torch==1.7.1{torch_version_suffix} torchvision==0.8.2{torch_version_suffix} -f https://download.pytorch.org/whl/torch_stable.html ftfy regex
# #%cd /content/stylegan2-pytorch
from convert_weight import convertStyleGan2
    # convert the model from TensorFlow to PyTorch
ckpt, g, disc,g_train = convertStyleGan2(_G,_D,Gs)#,style_dim=dim_sg2,max_channel_size=dim_sg2)
latent_avg=ckpt["latent_avg"]
import torch
import matplotlib.pyplot as plt
def fmtImg(r):
img = ((r+1)/2*256).clip(0,255).astype(np.uint8).transpose(1,2,0)
return PIL.Image.fromarray(img, 'RGB')
device='cuda'
n_sample=1
g = g.to(device)
inputSize=1024#dim_sg2
import numpy as np
# + id="JrgaAwdJOkee"
if generate&gen_sg2_nvlabs_ada_pt:
# !pip install click requests tqdm pyspng ninja imageio-ffmpeg==0.4.3
# !pip install torch
# #!pip install torch==1.7.1
# %pip install ninja
# import pickle
import copy
import os
#from time import perf_counter
#import click
import imageio
import numpy as np
import PIL.Image
import torch
import torch.nn.functional as F
# %cd /content/stylegan2-nvlabs-ada-pytorch
import dnnlib
import legacy
network_pkl=files_path[0][1]
if len(files_path)>1:
network_pkl=files_path[1][1]
device = torch.device('cuda:0')
# device = torch.device('cuda')
with dnnlib.util.open_url(network_pkl) as fp:
G = legacy.load_network_pkl(fp)['G_ema'].requires_grad_(False).to(device) # type: ignore
# + id="EUKP12wZTYT7"
#generate_and_plot_images_notrunc(generator, seed=396)
# not using truncation
#generate_and_plot_images(generator, seed=96, w_avg=w_average)
# using truncation 0.5
#generate_and_plot_images(generator, seed=96, w_avg=w_average, truncation_psi=0.5)
# + id="wrZhTCe5USvq"
#def gen():
# global generator
# seed = 6600
# # creating random latent vector
# rnd = np.random.RandomState(seed)
# __z = rnd.randn(1, 512).astype('float32')
# # running mapping network
# dlatents = generator.mapping_network(__z)
#
# out = generator.synthesis_network(dlatents)
# #converting image/s to uint8
# images = convert_images_to_uint8(out, nchw_to_nhwc=True, uint8_cast=True)
#gen()
# + id="8aA95OcEv5YU"
if generate&gen_wavegan:
if generate&gen_drums:
# Load the model
tf.reset_default_graph()
saver = tf.train.import_meta_graph('/content/model/infer/infer.meta')
graph = tf.get_default_graph()
        sess = tf.InteractiveSession()
saver.restore(sess, f'/content/model/model.ckpt-18637')
#dim = 100
break_len = 65536
z = graph.get_tensor_by_name('z:0')
G_z = graph.get_tensor_by_name('G_z:0')
import numpy as np
from IPython.display import display, Audio
#from google.colab import files
import scipy.io.wavfile
import matplotlib.pyplot as plt
# %matplotlib inline
# !mkdir "./neuralfunk examples"
def generate_trajectory(n_iter, _z0=None, mov_last=None, jump=0.3, smooth=0.3, include_z0=True):
_z = np.empty((n_iter + int(not include_z0), dim))
_z[0] = _z0 if _z0 is not None else np.random.random(dim)*2-1
mov = mov_last if mov_last is not None else (np.random.random(dim)*2-1)*jump
for i in range(1, len(_z)):
mov = mov*smooth + (np.random.random(dim)*2-1)*jump*(1-smooth)
mov -= (np.abs(_z[i-1] + mov) > 1) * 2 * mov
_z[i] = _z[i-1] + mov
return _z[-n_iter:], mov
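`generate_trajectory` performs a smoothed random walk in latent space, reflecting a step whenever a coordinate would leave [-1, 1]. A standalone sketch with `dim` passed explicitly (the version above reads a global `dim`):

```python
import numpy as np

def random_walk(n_iter, dim, jump=0.3, smooth=0.3, seed=0):
    rng = np.random.RandomState(seed)
    z = np.empty((n_iter, dim))
    z[0] = rng.random_sample(dim) * 2 - 1
    mov = (rng.random_sample(dim) * 2 - 1) * jump
    for i in range(1, n_iter):
        # exponential smoothing of the step, then reflect at the [-1, 1] walls
        mov = mov * smooth + (rng.random_sample(dim) * 2 - 1) * jump * (1 - smooth)
        mov -= (np.abs(z[i - 1] + mov) > 1) * 2 * mov
        z[i] = z[i - 1] + mov
    return z

traj = random_walk(50, dim=100)  # stays inside the [-1, 1] hypercube
```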
# !pip install pydub
from pydub import AudioSegment
# !pip install ffmpeg
# + colab={"base_uri": "https://localhost:8080/"} id="_lt3tZX8iYM1" outputId="82358ea3-0dee-4b79-d67c-4524f2041ced"
if generate&gen_gpu_cuda:
# !curl https://colab.chainer.org/install | sh -
import chainer
chainer.print_runtime_info()
# %env MNE_USE_CUDA=true
# !pip install mne==0.23.3
# !pip install pandas
# !pip install matplotlib
import mne
from mne import io
from mne.datasets import sample
from mne.minimum_norm import read_inverse_operator, compute_source_psd
from mne.connectivity import spectral_connectivity, seed_target_indices
if generate&gen_gpu_cuda:
mne.cuda.init_cuda(verbose=True)
import pandas as pd
import numpy as np
# + id="9aHDP6GOVeVP"
if generate&gen_stylegan2:
# !pip install ffmpeg-python
import ffmpeg
import scipy
import moviepy.editor
# !pip install av
import av
from IPython.utils import io
# !mkdir '/content/out'
# + id="ZWAgsxLebf3T"
if generate&gen_game:
import io
from PIL import Image as pilimage
import matplotlib.pyplot as plt
# + colab={"base_uri": "https://localhost:8080/"} id="_lsqZe1gzdbl" outputId="cce84d34-d2d8-4e7f-db81-401fc3690c22"
if generate&gen_silent_speech:
# #!pip install ffmpeg-python
#import ffmpeg
#import scipy
#import moviepy.editor
# #!pip install av
#import av
from IPython.utils import io
import numpy as np
#from IPython.display import display, Audio
#import scipy.io.wavfile
import matplotlib.pyplot as plt
# %matplotlib inline
# !pip install pydub
from pydub import AudioSegment
# #!pip install ffmpeg
# !pip install pysndfile
# !pip install absl-py librosa soundfile matplotlib scipy numba jiwer unidecode deepspeech==0.8.2 praat-textgrids
# # !unzip -o emg_data.zip
# !unzip -n emg_data.zip
if generate&gen_ss_wm50_tm07_dm070:
# # !ln -s /content/emg_data ./emg_data
# # !ln -s /content/text_alignments ./text_alignments
# %cd /content/silent_speech
# %cd /content/silent_speech-dgaddy-pytorch/
# # !python evaluate.py --models ./models/transduction_model/model_07.pt --pretrained_wavenet_model ./models/wavenet_model/wavenet_model_50.pt --output_directory evaluation_output
# + id="Z0IYRVtf9I1v"
if generate&gen_silent_speech:
if False:
# !pip install brainflow time
import time
import brainflow
from brainflow.board_shim import BoardShim, BrainFlowInputParams, BoardIds
import mne
from mne.channels import read_layout
# use synthetic board for demo
params = BrainFlowInputParams()
board = BoardShim(BoardIds.SYNTHETIC_BOARD.value, params)
board.release_all_sessions()
board.prepare_session()
board.start_stream()
time.sleep(10)
data = board.get_board_data()
board.stop_stream()
board.release_session()
eeg_channels = BoardShim.get_eeg_channels(BoardIds.SYNTHETIC_BOARD.value)
eeg_data = data[eeg_channels, :]
eeg_data = eeg_data / 1000000 # BrainFlow returns uV, convert to V for MNE
# Creating MNE objects from brainflow data arrays
ch_types = ['eeg'] * len(eeg_channels)
ch_names = BoardShim.get_eeg_names(BoardIds.SYNTHETIC_BOARD.value)
sfreq = BoardShim.get_sampling_rate(BoardIds.SYNTHETIC_BOARD.value)
info = mne.create_info(ch_names=ch_names, sfreq=sfreq, ch_types=ch_types)
raw = mne.io.RawArray(eeg_data, info)
# its time to plot something!
raw.plot_psd(average=False)
# + colab={"base_uri": "https://localhost:8080/"} id="6hkldZZIdFGq" outputId="fda88333-1f21-4ffd-e6a6-c4c14f182d60"
if generate&gen_silent_speech:
# !pip install brainflow
# !pip install mne==0.23.3
# #!pip install pandas
# #!pip install matplotlib
# #!pip install brainflow time pyserial
# !pip install pyserial
#import os, pty, serial
# !apt install -y socat
generate = generate | gen_stylegan2
generate = generate | gen_wavegan
# !pip install sounddevice
# !apt-get install libasound-dev libportaudio2 -y
# + colab={"base_uri": "https://localhost:8080/"} id="d9inLq7xc0yO" outputId="a43b1297-f621-477a-81d9-920c2b713fe9"
# !pip install nltk
import nltk
nltk.download('punkt')
# + id="ngXGUGvQWYST"
import os
import nltk
class Book(object):
def __init__(self, book_file):
self.file = book_file
sent_detector = nltk.data.load('tokenizers/punkt/english.pickle')
with open(book_file) as f:
all_text = f.read()
paragraphs = all_text.split('\n\n')
sentences = [s for p in paragraphs for s in sent_detector.tokenize(p.strip())]
self.sentences = [s.replace('\n', ' ') for s in sentences]
bookmark_file = self.file + '.bookmark'
if os.path.exists(bookmark_file):
with open(bookmark_file) as f:
self.current_index = int(f.read().strip())
else:
self.current_index = 0
def __enter__(self):
return self
def __exit__(self, type, value, traceback):
bookmark_file = self.file + '.bookmark'
with open(bookmark_file, 'w') as f:
f.write(str(self.current_index))
def current_sentence(self):
return self.sentences[self.current_index]
def next(self):
self.current_index = (self.current_index+1) % len(self.sentences)
# + id="JvwXo6I9fkYw"
import subprocess
if True:
subprocess.Popen(["socat", "PTY,link=/dev/ttyS10", "PTY,link=/dev/ttyS11"])
# #!pip install brainflow
import time
import brainflow
from brainflow.board_shim import BoardShim, BrainFlowInputParams, BoardIds
# #!pip install mne==0.23.3
# #!pip install pandas
# #!pip install matplotlib
import mne
from mne.channels import read_layout
# #!pip install brainflow time pyserial
# #!pip install pyserial
import os, pty, serial
from IPython.display import Javascript
from IPython.display import display, Javascript
from google.colab.output import eval_js
from base64 import b64decode
from IPython.display import Javascript
import json
import base64
from io import BytesIO
from time import perf_counter
#master, slave = pty.openpty()
#s_name = os.ttyname(slave)
#ser = serial.Serial(s_name)
if True:
ser = serial.Serial('/dev/ttyS10', 921600)
# use synthetic board for demo
params = BrainFlowInputParams()
params.serial_port = '/dev/ttyS11'
#params.serial_port = os.ttyname(slave)
sample_rate = 512
board = BoardShim(BoardIds.FREEEEG32_BOARD.value, params)
board.release_all_sessions()
board.prepare_session()
board.start_stream()
# + id="YFdB1PnioRrs"
# + id="Yvvp4qtQa6sc" colab={"base_uri": "https://localhost:8080/"} outputId="8f7d9d55-dd33-4254-f4bf-66bd38c597ff"
# %cd /content/silent_speech-dgaddy-pytorch
# !ln -s /content/silent_speech-dgaddy-pytorch/nv_wavenet/pytorch/nv_wavenet_ext.egg-info ./nv_wavenet_ext.egg-info
# + id="42E-tvnsbxxS" colab={"base_uri": "https://localhost:8080/"} outputId="7a9f1330-5993-4a9d-9115-39cbd6bbb15f"
# %cd /content/silent_speech-dgaddy-pytorch
from nv_wavenet.pytorch.wavenet import WaveNet
# %cd /content/silent_speech-dgaddy-pytorch/nv_wavenet/pytorch
import nv_wavenet
# %cd /content/silent_speech-dgaddy-pytorch
# + id="HxoBl335nsf4" colab={"base_uri": "https://localhost:8080/"} outputId="5382a03d-a0fc-46e7-b633-16d9f8dc09d6"
# %cd /content/silent_speech-dgaddy-pytorch
# + id="mvWf5XzL9Mpk" colab={"base_uri": "https://localhost:8080/"} outputId="b8295787-719d-45d7-e180-903a421b3603"
# !pip show nv_wavenet_ext
# + id="ug3WLsdQMUpA"
egg_path='/usr/local/lib/python3.7/dist-packages/nv_wavenet_ext-0.0.0-py3.7-linux-x86_64.egg'
import sys
sys.path.append(egg_path)
import nv_wavenet_ext
# + id="2NrbHq2lcd0H"
from absl import flags
FLAGS = flags.FLAGS
for name in list(flags.FLAGS):
delattr(flags.FLAGS, name)
# + id="UnL-_K0igPft"
from absl import flags
FLAGS = flags.FLAGS
for name in list(flags.FLAGS):
delattr(flags.FLAGS, name)
#data_utils.py
import numpy as np
import librosa
import soundfile as sf
import torch
import matplotlib.pyplot as plt
from absl import flags
FLAGS = flags.FLAGS
flags.DEFINE_boolean('mel_spectrogram', False, 'use mel spectrogram features instead of mfccs for audio')
flags.DEFINE_string('normalizers_file', 'normalizers.pkl', 'file with pickled feature normalizers')
phoneme_inventory = ['aa','ae','ah','ao','aw','ax','axr','ay','b','ch','d','dh','dx','eh','el','em','en','er','ey','f','g','hh','hv','ih','iy','jh','k','l','m','n','nx','ng','ow','oy','p','r','s','sh','t','th','uh','uw','v','w','y','z','zh','sil']
def normalize_volume(audio):
rms = librosa.feature.rms(audio)
max_rms = rms.max() + 0.01
target_rms = 0.2
audio = audio * (target_rms/max_rms)
max_val = np.abs(audio).max()
if max_val > 1.0: # this shouldn't happen too often with the target_rms of 0.2
audio = audio / max_val
return audio
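`normalize_volume` rescales so the loudest frame sits near a target RMS, then rescales again only if the peak would clip. A librosa-free sketch of the same idea (the 160-sample frame size here is an assumption for illustration, not taken from librosa's defaults):

```python
import numpy as np

def normalize_volume_np(audio, target_rms=0.2, frame=160):
    # frame-wise RMS, scale the loudest frame to ~target_rms
    frames = audio[:len(audio) // frame * frame].reshape(-1, frame)
    max_rms = np.sqrt(np.mean(frames ** 2, axis=1)).max() + 0.01
    audio = audio * (target_rms / max_rms)
    peak = np.abs(audio).max()
    return audio / peak if peak > 1.0 else audio

loud = np.sin(np.linspace(0, 100, 16000)) * 0.9
quiet = normalize_volume_np(loud)
```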
def load_audio(filename, start=None, end=None, max_frames=None, renormalize_volume=False):
audio, r = sf.read(filename)
assert r == 16000
if len(audio.shape) > 1:
        audio = audio[:,0] # select first channel of stereo audio
if start is not None or end is not None:
audio = audio[start:end]
if renormalize_volume:
audio = normalize_volume(audio)
if FLAGS.mel_spectrogram:
mfccs = librosa.feature.melspectrogram(audio, sr=16000, n_mels=128, center=False, n_fft=512, win_length=432, hop_length=160).T
mfccs = np.log(mfccs+1e-5)
else:
mfccs = librosa.feature.mfcc(audio, sr=16000, n_mfcc=26, n_fft=512, win_length=432, hop_length=160, center=False).T
audio_discrete = librosa.core.mu_compress(audio, mu=255, quantize=True)+128
if max_frames is not None and mfccs.shape[0] > max_frames:
mfccs = mfccs[:max_frames,:]
audio_length = 160*mfccs.shape[0]+(432-160)
audio_discrete = audio_discrete[:audio_length] # cut off audio to match framed length
return mfccs.astype(np.float32), audio_discrete
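`load_audio` quantizes the waveform with 8-bit mu-law (`mu_compress`, then `+128` shifts the codes into [0, 255]). A NumPy-only sketch of that companding step, avoiding the librosa dependency (the exact rounding is an assumption, chosen to land in the same code range):

```python
import numpy as np

def mu_compress(x, mu=255):
    # mu-law companding of [-1, 1] floats, quantized to integers in [-128, 127]
    y = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)
    return np.floor((y + 1) / 2 * mu + 0.5).astype(np.int64) - 128

audio = np.sin(np.linspace(0, 2 * np.pi, 1600))
discrete = mu_compress(audio) + 128   # shifted into [0, 255] as in load_audio
```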
def double_average(x):
assert len(x.shape) == 1
f = np.ones(9)/9.0
v = np.convolve(x, f, mode='same')
w = np.convolve(v, f, mode='same')
return w
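`double_average` is two passes of a 9-point moving average; away from the edges it leaves a constant signal unchanged, while `mode='same'` shrinks the ends. A minimal restatement showing both effects:

```python
import numpy as np

def double_average(x):
    # two passes of a 9-point moving average (mirrors the helper above)
    f = np.ones(9) / 9.0
    return np.convolve(np.convolve(x, f, mode='same'), f, mode='same')

smoothed = double_average(np.ones(100))  # interior stays 1.0, edges taper
```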
def get_emg_features(emg_data, debug=False):
xs = emg_data - emg_data.mean(axis=0, keepdims=True)
frame_features = []
for i in range(emg_data.shape[1]):
x = xs[:,i]
w = double_average(x)
p = x - w
r = np.abs(p)
w_h = librosa.util.frame(w, frame_length=16, hop_length=6).mean(axis=0)
p_w = librosa.feature.rms(w, frame_length=16, hop_length=6, center=False)
p_w = np.squeeze(p_w, 0)
p_r = librosa.feature.rms(r, frame_length=16, hop_length=6, center=False)
p_r = np.squeeze(p_r, 0)
z_p = librosa.feature.zero_crossing_rate(p, frame_length=16, hop_length=6, center=False)
z_p = np.squeeze(z_p, 0)
r_h = librosa.util.frame(r, frame_length=16, hop_length=6).mean(axis=0)
s = abs(librosa.stft(np.ascontiguousarray(x), n_fft=16, hop_length=6, center=False))
# s has feature dimension first and time second
if debug:
plt.subplot(7,1,1)
plt.plot(x)
plt.subplot(7,1,2)
plt.plot(w_h)
plt.subplot(7,1,3)
plt.plot(p_w)
plt.subplot(7,1,4)
plt.plot(p_r)
plt.subplot(7,1,5)
plt.plot(z_p)
plt.subplot(7,1,6)
plt.plot(r_h)
plt.subplot(7,1,7)
plt.imshow(s, origin='lower', aspect='auto', interpolation='nearest')
plt.show()
frame_features.append(np.stack([w_h, p_w, p_r, z_p, r_h], axis=1))
frame_features.append(s.T)
frame_features = np.concatenate(frame_features, axis=1)
return frame_features.astype(np.float32)
class FeatureNormalizer(object):
def __init__(self, feature_samples, share_scale=False):
""" features_samples should be list of 2d matrices with dimension (time, feature) """
feature_samples = np.concatenate(feature_samples, axis=0)
self.feature_means = feature_samples.mean(axis=0, keepdims=True)
if share_scale:
self.feature_stddevs = feature_samples.std()
else:
self.feature_stddevs = feature_samples.std(axis=0, keepdims=True)
def normalize(self, sample):
sample -= self.feature_means
sample /= self.feature_stddevs
return sample
def inverse(self, sample):
sample = sample * self.feature_stddevs
sample = sample + self.feature_means
return sample
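`FeatureNormalizer.inverse` undoes `normalize` exactly (note that `normalize` modifies its argument in place, so callers should pass a copy). The roundtrip, written out with plain NumPy on synthetic (time, feature) samples:

```python
import numpy as np

rng = np.random.RandomState(0)
samples = rng.randn(200, 4) * 3.0 + 5.0   # synthetic (time, feature) data

means = samples.mean(axis=0, keepdims=True)
stds = samples.std(axis=0, keepdims=True)

normalized = (samples - means) / stds     # zero mean, unit std per feature
restored = normalized * stds + means      # recovers the original samples
```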
def combine_fixed_length(tensor_list, length):
total_length = sum(t.size(0) for t in tensor_list)
if total_length % length != 0:
pad_length = length - (total_length % length)
tensor_list = list(tensor_list) # copy
tensor_list.append(torch.zeros(pad_length,*tensor_list[0].size()[1:], dtype=tensor_list[0].dtype))
total_length += pad_length
tensor = torch.cat(tensor_list, 0)
n = total_length // length
return tensor.view(n, length, *tensor.size()[1:])
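`combine_fixed_length` zero-pads the concatenated sequences up to a multiple of `length` and reshapes them into fixed-length windows. The same bookkeeping in NumPy (a sketch, not the torch version used above):

```python
import numpy as np

def combine_fixed_length_np(arrays, length):
    total = sum(a.shape[0] for a in arrays)
    if total % length != 0:
        # pad with zeros so the total divides evenly into windows
        pad = length - (total % length)
        arrays = list(arrays) + [np.zeros((pad,) + arrays[0].shape[1:], dtype=arrays[0].dtype)]
        total += pad
    stacked = np.concatenate(arrays, axis=0)
    return stacked.reshape(total // length, length, *stacked.shape[1:])

chunks = [np.ones((7, 3)), np.ones((10, 3))]   # 17 rows -> padded to 20
batched = combine_fixed_length_np(chunks, length=5)  # shape (4, 5, 3)
```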
def decollate_tensor(tensor, lengths):
b, s, d = tensor.size()
tensor = tensor.view(b*s, d)
results = []
idx = 0
for length in lengths:
assert idx + length <= b * s
results.append(tensor[idx:idx+length])
idx += length
return results
def splice_audio(chunks, overlap):
chunks = [c.copy() for c in chunks] # copy so we can modify in place
assert np.all([c.shape[0]>=overlap for c in chunks])
result_len = sum(c.shape[0] for c in chunks) - overlap*(len(chunks)-1)
result = np.zeros(result_len, dtype=chunks[0].dtype)
ramp_up = np.linspace(0,1,overlap)
ramp_down = np.linspace(1,0,overlap)
i = 0
for chunk in chunks:
l = chunk.shape[0]
# note: this will also fade the beginning and end of the result
chunk[:overlap] *= ramp_up
chunk[-overlap:] *= ramp_down
result[i:i+l] += chunk
i += l-overlap
return result
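# splice_audio above overlap-adds chunks under complementary linear ramps, so a
# crossfaded region sums back to the original amplitude. A standalone sketch of
# one such join on constant signals (unlike splice_audio, this leaves the outer
# edges of the result unfaded):

```python
import numpy as np

overlap = 4
a, b = np.ones(10), np.ones(10)
a[-overlap:] *= np.linspace(1, 0, overlap)   # fade out end of first chunk
b[:overlap] *= np.linspace(0, 1, overlap)    # fade in start of second chunk
result = np.zeros(10 + 10 - overlap)
result[:10] += a
result[10 - overlap:] += b
# linspace(1,0,n) + linspace(0,1,n) == 1 everywhere, so the crossfade is seamless
assert np.allclose(result, 1.0)
```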
def print_confusion(confusion_mat, n=10):
# axes are (pred, target)
target_counts = confusion_mat.sum(0) + 1e-4
aslist = []
for p1 in range(len(phoneme_inventory)):
for p2 in range(p1):
if p1 != p2:
aslist.append(((confusion_mat[p1,p2]+confusion_mat[p2,p1])/(target_counts[p1]+target_counts[p2]), p1, p2))
aslist.sort()
aslist = aslist[-n:]
max_val = aslist[-1][0]
min_val = aslist[0][0]
val_range = max_val - min_val
print('Common confusions (confusion, accuracy)')
for v, p1, p2 in aslist:
p1s = phoneme_inventory[p1]
p2s = phoneme_inventory[p2]
print(f'{p1s} {p2s} {v*100:.1f} {(confusion_mat[p1,p1]+confusion_mat[p2,p2])/(target_counts[p1]+target_counts[p2])*100:.1f}')
# + id="Nxl_cApsqj0J"
#nv_wavenet.py
# *****************************************************************************
# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# * Neither the name of the NVIDIA CORPORATION nor the
# names of its contributors may be used to endorse or promote products
# derived from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY
# DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# *****************************************************************************
import torch
import nv_wavenet_ext
def interleave_lists(a, b, c, d, e, f, g):
return [x for t in zip(a, b, c, d, e, f, g) for x in t]
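# interleave_lists packs the seven per-layer parameter lists so that all of layer
# i's tensors end up adjacent. A smaller three-list sketch of the same comprehension:

```python
a, b, c = [1, 4], [2, 5], [3, 6]
flat = [x for t in zip(a, b, c) for x in t]
assert flat == [1, 2, 3, 4, 5, 6]  # layer 0's entries, then layer 1's
```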
def column_major(x):
"""
PyTorch Tensors are row major, so this just returns a contiguous transpose
"""
assert x.is_contiguous()  # is_contiguous is a method; without () the assert is always true
if len(x.size()) == 1:
return x
if len(x.size()) == 3:
assert(x.size(2)==1)
x = torch.squeeze(x)
if len(x.size())==2:
return torch.t(x).contiguous()
if len(x.size())==4:
return x.permute(3,2,1,0).contiguous()
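# The "contiguous transpose" trick above works because storing a transpose in
# row-major order is the same as storing the original matrix in column-major
# order, which is what the CUDA kernels expect. A numpy illustration:

```python
import numpy as np

x = np.arange(6).reshape(2, 3)
col_major = np.ascontiguousarray(x.T)  # row-major storage of the transpose
assert col_major.flags['C_CONTIGUOUS']
# its flat memory equals the original read in column-major (Fortran) order
assert np.array_equal(col_major.ravel(), x.ravel(order='F'))
```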
def enum(**enums):
return type('Enum', (), enums)
Impl = enum(AUTO=0, SINGLE_BLOCK=1, DUAL_BLOCK=2, PERSISTENT=3)
class NVWaveNet:
def __init__(self, embedding_prev,
embedding_curr,
conv_out_weight,
conv_end_weight,
dilate_weights,
dilate_biases,
max_dilation,
res_weights,
res_biases,
skip_weights,
skip_biases,
use_embed_tanh):
self.R = nv_wavenet_ext.num_res_channels()
self.S = nv_wavenet_ext.num_skip_channels()
self.A = nv_wavenet_ext.num_out_channels()
self.max_dilation = max_dilation
self.use_embed_tanh = use_embed_tanh
assert embedding_prev.size() == (self.A, self.R), \
("embedding_prev: {} doesn't match compiled"
" nv-wavenet size: {}").format(embedding_prev.size(),
(self.A, self.R))
self.embedding_prev = column_major(torch.t(embedding_prev))
assert embedding_curr.size() == (self.A, self.R), \
("embedding_curr: {} doesn't match compiled"
" nv-wavenet size: {}").format(embedding_curr.size(),
(self.A, self.R))
self.embedding_curr = column_major(torch.t(embedding_curr))
assert conv_out_weight.size()[:2] == (self.A, self.S), \
("conv_out_weight: {} doesn't match compiled"
" nv-wavenet size: {}").format(conv_out_weight.size()[:2],
(self.A, self.S))
self.conv_out = column_major(conv_out_weight)
assert conv_end_weight.size()[:2] == (self.A, self.A), \
("conv_end_weight: {} doesn't match compiled"
" nv-wavenet size: {}").format(conv_end_weight.size()[:2],
(self.A, self.A))
self.conv_end = column_major(conv_end_weight)
dilate_weights_prev = []
dilate_weights_curr = []
for weight in dilate_weights:
assert weight.size(2) == 2, \
"nv-wavenet only supports kernel_size 2"
assert weight.size()[:2] == (2*self.R, self.R), \
("dilated weight: {} doesn't match compiled"
" nv-wavenet size: {}").format(weight.size()[:2],
(2*self.R, self.R))
Wprev = column_major(weight[:,:,0])
Wcurr = column_major(weight[:,:,1])
dilate_weights_prev.append(Wprev)
dilate_weights_curr.append(Wcurr)
for bias in dilate_biases:
assert(bias.size(0) == 2*self.R)
for weight in res_weights:
assert weight.size()[:2] == (self.R, self.R), \
("residual weight: {} doesn't match compiled"
" nv-wavenet size: {}").format(weight.size()[:2],
(self.R, self.R))
for bias in res_biases:
assert(bias.size(0) == self.R), \
("residual bias: {} doesn't match compiled"
" nv-wavenet size: {}").format(bias.size(0), self.R)
for weight in skip_weights:
assert weight.size()[:2] == (self.S, self.R), \
("skip weight: {} doesn't match compiled"
" nv-wavenet size: {}").format(weight.size()[:2],
(self.S, self.R))
for bias in skip_biases:
assert(bias.size(0) == self.S), \
("skip bias: {} doesn't match compiled"
" nv-wavenet size: {}").format(bias.size(0), self.S)
dilate_biases = [column_major(bias) for bias in dilate_biases]
res_weights = [column_major(weight) for weight in res_weights]
res_biases = [column_major(bias) for bias in res_biases]
skip_weights = [column_major(weight) for weight in skip_weights]
skip_biases = [column_major(bias) for bias in skip_biases]
# There's an extra residual layer that's not used
res_weights.append(torch.zeros(self.R,self.R))
res_biases.append(torch.zeros(self.R))
assert(len(res_biases)==len(skip_biases) and
len(res_biases)==len(dilate_biases) and
len(res_weights)==len(skip_weights) and
len(res_weights)==len(dilate_weights)), \
"""Number of layers is inconsistent for different parameter types.
The list sizes should be the same for skip weights/biases and
dilate weights/biases. Additionally the residual weights/biases
lists should be one shorter. But their sizes are:
len(dilate_weights) = {}
len(dilate_biases) = {}
len(skip_weights) = {}
len(skip_biases) = {}
len(res_weights) = {}
len(res_biases) = {}""".format(len(dilate_weights),
len(dilate_biases),
len(skip_weights),
len(skip_biases),
len(res_weights)-1,
len(res_biases)-1)
self.num_layers = len(res_biases)
self.layers = interleave_lists(dilate_weights_prev,
dilate_weights_curr,
dilate_biases,
res_weights,
res_biases,
skip_weights,
skip_biases)
def infer(self, cond_input, implementation):
# cond_input is channels x batch x num_layers x samples
assert(cond_input.size()[0:3:2] == (2*self.R, self.num_layers)), \
"""Inputs are channels x batch x num_layers x samples.
Channels and num_layers should be sizes: {}
But input is: {}""".format((2*self.R, self.num_layers),
cond_input.size()[0:3:2])
batch_size = cond_input.size(1)
sample_count = cond_input.size(3)
cond_input = column_major(cond_input)
samples = torch.cuda.IntTensor(batch_size, sample_count)
nv_wavenet_ext.infer(samples,
sample_count,
batch_size,
self.embedding_prev,
self.embedding_curr,
self.conv_out,
self.conv_end,
cond_input,
self.num_layers,
self.use_embed_tanh,
self.max_dilation,
implementation,
self.layers)
return samples
# + id="Wm4yPyA5uFr6"
from nv_wavenet.pytorch import nv_wavenet
# + id="0_vP8gL3f_54"
from absl import flags
FLAGS = flags.FLAGS
for name in list(flags.FLAGS):
delattr(flags.FLAGS, name)
#read_emg.py
import re
import os
import numpy as np
import matplotlib.pyplot as plt
import random
from collections import defaultdict
import scipy
import json
import copy
import sys
import pickle
import string
import logging
import librosa
import soundfile as sf
from textgrids import TextGrid
import torch
#from data_utils import load_audio, get_emg_features, FeatureNormalizer, combine_fixed_length, phoneme_inventory
from absl import flags
FLAGS = flags.FLAGS
flags.DEFINE_boolean('mel_spectrogram', False, 'use mel spectrogram features instead of mfccs for audio')
flags.DEFINE_string('normalizers_file', 'normalizers.pkl', 'file with pickled feature normalizers')
flags.DEFINE_list('remove_channels', [], 'channels to remove')
#flags.DEFINE_list('silent_data_directories', ['./emg_data/silent_parallel_data'], 'silent data locations')
#flags.DEFINE_list('voiced_data_directories', ['./emg_data/voiced_parallel_data','./emg_data/nonparallel_data'], 'voiced data locations')
#flags.DEFINE_string('testset_file', 'testset_largedev.json', 'file with testset indices')
flags.DEFINE_list('silent_data_directories', ['./out'], 'silent data locations')
flags.DEFINE_list('voiced_data_directories', ['./out','./out'], 'voiced data locations')
flags.DEFINE_string('testset_file', 'testset_onlinedev.json', 'file with testset indices')
flags.DEFINE_string('text_align_directory', 'text_alignments', 'directory with alignment files')
def remove_drift(signal, fs):
b, a = scipy.signal.butter(3, 2, 'highpass', fs=fs)
return scipy.signal.filtfilt(b, a, signal)
def notch(signal, freq, sample_frequency):
b, a = scipy.signal.iirnotch(freq, 25, sample_frequency)
# b, a = scipy.signal.iirnotch(freq, 30, sample_frequency)
return scipy.signal.filtfilt(b, a, signal)
def notch_harmonics(signal, freq, sample_frequency):
max_harmonic=(sample_frequency//freq)//2
for harmonic in range(1,max_harmonic):
# for harmonic in range(1,8):
signal = notch(signal, freq*harmonic, sample_frequency)
return signal
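# A quick illustrative check that the notch design used above suppresses a mains
# tone (same iirnotch parameters as notch(), at the 1000 Hz EMG sample rate;
# filtfilt makes the filter zero-phase, so the EMG timing is preserved):

```python
import numpy as np
import scipy.signal

fs = 1000                                   # Hz, as for the voiced EMG recordings
t = np.arange(4 * fs) / fs
mains = np.sin(2 * np.pi * 50 * t)          # 50 Hz interference
b, a = scipy.signal.iirnotch(50, 25, fs)    # same design as notch() above
filtered = scipy.signal.filtfilt(b, a, mains)
middle = filtered[fs:-fs]                   # ignore the narrow-band edge transients
assert np.sqrt(np.mean(middle ** 2)) < 0.05 * np.sqrt(np.mean(mains ** 2))
```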
def subsample(signal, new_freq, old_freq):
times = np.arange(len(signal))/old_freq
sample_times = np.arange(0, times[-1], 1/new_freq)
result = np.interp(sample_times, times, signal)
return result
def apply_to_all(function, signal_array, *args, **kwargs):
results = []
for i in range(signal_array.shape[1]):
results.append(function(signal_array[:,i], *args, **kwargs))
return np.stack(results, 1)
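# subsample resamples by plain linear interpolation (np.interp), and apply_to_all
# maps it over each electrode column. An illustrative standalone restatement on a
# two-channel ramp, which linear interpolation reproduces exactly (note that this
# scheme does no band-limiting, so it relies on the preceding filtering):

```python
import numpy as np

old_freq, new_freq = 1000, 600  # the rates used for the 'emg' features above
two_channels = np.stack([np.arange(100.0), np.arange(100.0) * -2], 1)
times = np.arange(two_channels.shape[0]) / old_freq
sample_times = np.arange(0, times[-1], 1 / new_freq)
# like apply_to_all(subsample, two_channels, new_freq, old_freq):
resampled = np.stack([np.interp(sample_times, times, two_channels[:, i])
                      for i in range(two_channels.shape[1])], 1)
assert resampled.shape == (len(sample_times), 2)
assert np.allclose(resampled[:, 0], sample_times * old_freq)
```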
def load_utterance(base_dir, index, limit_length=False, debug=False, text_align_directory=None):
index = int(index)
raw_emg = np.load(os.path.join(base_dir, f'{index}_emg.npy'))
before = os.path.join(base_dir, f'{index-1}_emg.npy')
after = os.path.join(base_dir, f'{index+1}_emg.npy')
if os.path.exists(before):
raw_emg_before = np.load(before)
else:
raw_emg_before = np.zeros([0,raw_emg.shape[1]])
if os.path.exists(after):
raw_emg_after = np.load(after)
else:
raw_emg_after = np.zeros([0,raw_emg.shape[1]])
if 'out' in base_dir:
raw_emg_freq=512
# raw_emg_freq=1000
else:
raw_emg_freq=1000
x = np.concatenate([raw_emg_before, raw_emg, raw_emg_after], 0)
x = apply_to_all(notch_harmonics, x, 50, raw_emg_freq)
# x = apply_to_all(notch_harmonics, x, 60, raw_emg_freq)
x = apply_to_all(remove_drift, x, raw_emg_freq)
x = x[raw_emg_before.shape[0]:x.shape[0]-raw_emg_after.shape[0],:]
emg_orig = apply_to_all(subsample, x, 800, raw_emg_freq)
x = apply_to_all(subsample, x, 600, raw_emg_freq)
emg = x
for c in FLAGS.remove_channels:
emg[:,int(c)] = 0
emg_orig[:,int(c)] = 0
emg_features = get_emg_features(emg)
mfccs, audio_discrete = load_audio(os.path.join(base_dir, f'{index}_audio_clean.flac'),
max_frames=min(emg_features.shape[0], 800 if limit_length else float('inf')))
if emg_features.shape[0] > mfccs.shape[0]:
emg_features = emg_features[:mfccs.shape[0],:]
emg = emg[6:6+6*emg_features.shape[0],:]
emg_orig = emg_orig[8:8+8*emg_features.shape[0],:]
assert emg.shape[0] == emg_features.shape[0]*6
with open(os.path.join(base_dir, f'{index}_info.json')) as f:
info = json.load(f)
sess = os.path.basename(base_dir)
tg_fname = f'{text_align_directory}/{sess}/{sess}_{index}_audio.TextGrid'
if os.path.exists(tg_fname):
phonemes = read_phonemes(tg_fname, mfccs.shape[0], phoneme_inventory)
else:
phonemes = np.zeros(mfccs.shape[0], dtype=np.int64)+phoneme_inventory.index('sil')
return mfccs, audio_discrete, emg_features, info['text'], (info['book'],info['sentence_index']), phonemes, emg_orig.astype(np.float32)
def read_phonemes(textgrid_fname, mfcc_len, phone_inventory):
tg = TextGrid(textgrid_fname)
phone_ids = np.zeros(int(tg['phones'][-1].xmax*100), dtype=np.int64)
phone_ids[:] = -1
for interval in tg['phones']:
phone = interval.text.lower()
if phone in ['', 'sp', 'spn']:
phone = 'sil'
if phone[-1] in string.digits:
phone = phone[:-1]
ph_id = phone_inventory.index(phone)
phone_ids[int(interval.xmin*100):int(interval.xmax*100)] = ph_id
assert (phone_ids >= 0).all(), 'missing aligned phones'
phone_ids = phone_ids[1:mfcc_len+1] # mfccs is 2-3 shorter due to edge effects
return phone_ids
class EMGDirectory(object):
def __init__(self, session_index, directory, silent, exclude_from_testset=False):
self.session_index = session_index
self.directory = directory
self.silent = silent
self.exclude_from_testset = exclude_from_testset
def __lt__(self, other):
return self.session_index < other.session_index
def __repr__(self):
return self.directory
class SizeAwareSampler(torch.utils.data.Sampler):
def __init__(self, emg_dataset, max_len):
self.dataset = emg_dataset
self.max_len = max_len
def __iter__(self):
indices = list(range(len(self.dataset)))
random.shuffle(indices)
batch = []
batch_length = 0
for idx in indices:
directory_info, file_idx = self.dataset.example_indices[idx]
with open(os.path.join(directory_info.directory, f'{file_idx}_info.json')) as f:
info = json.load(f)
if not np.any([l in string.ascii_letters for l in info['text']]):
continue
length = sum([emg_len for emg_len, _, _ in info['chunks']])
if length > self.max_len:
logging.warning(f'Warning: example {idx} cannot fit within desired batch length')
if length + batch_length > self.max_len:
yield batch
batch = []
batch_length = 0
batch.append(idx)
batch_length += length
# dropping last incomplete batch
class EMGDataset(torch.utils.data.Dataset):
def __init__(self, base_dir=None, limit_length=False, dev=False, test=False, no_testset=False, no_normalizers=False):
self.text_align_directory = FLAGS.text_align_directory
if no_testset:
devset = []
testset = []
else:
with open(FLAGS.testset_file) as f:
testset_json = json.load(f)
devset = testset_json['dev']
testset = testset_json['test']
#print(testset)
directories = []
if base_dir is not None:
directories.append(EMGDirectory(0, base_dir, False))
else:
for sd in FLAGS.silent_data_directories:
for session_dir in sorted(os.listdir(sd)):
directories.append(EMGDirectory(len(directories), os.path.join(sd, session_dir), True))
has_silent = len(FLAGS.silent_data_directories) > 0
for vd in FLAGS.voiced_data_directories:
for session_dir in sorted(os.listdir(vd)):
directories.append(EMGDirectory(len(directories), os.path.join(vd, session_dir), False, exclude_from_testset=has_silent))
self.example_indices = []
self.voiced_data_locations = {} # map from book/sentence_index to directory_info/index
for directory_info in directories:
for fname in os.listdir(directory_info.directory):
m = re.match(r'(\d+)_info.json', fname)
if m is not None:
idx_str = m.group(1)
with open(os.path.join(directory_info.directory, fname)) as f:
info = json.load(f)
#print(info['book'],info['sentence_index'])
if info['sentence_index'] >= 0: # boundary clips of silence are marked -1
location_in_testset = [info['book'], info['sentence_index']] in testset
location_in_devset = [info['book'], info['sentence_index']] in devset
#print(location_in_testset,location_in_devset)
if (test and location_in_testset and not directory_info.exclude_from_testset) \
or (dev and location_in_devset and not directory_info.exclude_from_testset) \
or (not test and not dev and not location_in_testset and not location_in_devset):
self.example_indices.append((directory_info,int(idx_str)))
if not directory_info.silent:
location = (info['book'], info['sentence_index'])
self.voiced_data_locations[location] = (directory_info,int(idx_str))
self.example_indices.sort()
random.seed(0)
random.shuffle(self.example_indices)
self.no_normalizers = no_normalizers
if not self.no_normalizers:
self.mfcc_norm, self.emg_norm = pickle.load(open(FLAGS.normalizers_file,'rb'))
sample_mfccs, _, sample_emg, _, _, _, _ = load_utterance(self.example_indices[0][0].directory, self.example_indices[0][1])
self.num_speech_features = sample_mfccs.shape[1]
self.num_features = sample_emg.shape[1]
self.limit_length = limit_length
self.num_sessions = len(directories)
def silent_subset(self):
silent_indices = []
for i, (d, _) in enumerate(self.example_indices):
if d.silent:
silent_indices.append(i)
return torch.utils.data.Subset(self, silent_indices)
def __len__(self):
return len(self.example_indices)
def __getitem__(self, i):
directory_info, idx = self.example_indices[i]
mfccs, audio, emg, text, book_location, phonemes, raw_emg = load_utterance(directory_info.directory, idx, self.limit_length, text_align_directory=self.text_align_directory)
raw_emg = raw_emg / 10
if not self.no_normalizers:
mfccs = self.mfcc_norm.normalize(mfccs)
emg = self.emg_norm.normalize(emg)
emg = 8*np.tanh(emg/8.)
session_ids = np.full(emg.shape[0], directory_info.session_index, dtype=np.int64)
result = {'audio_features':mfccs, 'quantized_audio':audio, 'emg':emg, 'text':text, 'file_label':idx, 'session_ids':session_ids, 'book_location':book_location, 'silent':directory_info.silent, 'raw_emg':raw_emg}
if directory_info.silent:
voiced_directory, voiced_idx = self.voiced_data_locations[book_location]
voiced_mfccs, _, voiced_emg, _, _, phonemes, _ = load_utterance(voiced_directory.directory, voiced_idx, False, text_align_directory=self.text_align_directory)
if not self.no_normalizers:
voiced_mfccs = self.mfcc_norm.normalize(voiced_mfccs)
voiced_emg = self.emg_norm.normalize(voiced_emg)
voiced_emg = 8*np.tanh(voiced_emg/8.)
result['parallel_voiced_audio_features'] = voiced_mfccs
result['parallel_voiced_emg'] = voiced_emg
result['phonemes'] = phonemes # either from this example if vocalized or aligned example if silent
return result
@staticmethod
def collate_fixed_length(batch):
batch_size = len(batch)
audio_features = []
audio_feature_lengths = []
parallel_emg = []
for ex in batch:
if ex['silent']:
audio_features.append(ex['parallel_voiced_audio_features'])
audio_feature_lengths.append(ex['parallel_voiced_audio_features'].shape[0])
parallel_emg.append(ex['parallel_voiced_emg'])
else:
audio_features.append(ex['audio_features'])
audio_feature_lengths.append(ex['audio_features'].shape[0])
parallel_emg.append(np.zeros(1))
audio_features = [torch.from_numpy(af) for af in audio_features]
parallel_emg = [torch.from_numpy(pe) for pe in parallel_emg]
phonemes = [torch.from_numpy(ex['phonemes']) for ex in batch]
emg = [torch.from_numpy(ex['emg']) for ex in batch]
raw_emg = [torch.from_numpy(ex['raw_emg']) for ex in batch]
session_ids = [torch.from_numpy(ex['session_ids']) for ex in batch]
lengths = [ex['emg'].shape[0] for ex in batch]
silent = [ex['silent'] for ex in batch]
seq_len = 200
result = {'audio_features':combine_fixed_length(audio_features, seq_len),
'audio_feature_lengths':audio_feature_lengths,
'emg':combine_fixed_length(emg, seq_len),
'raw_emg':combine_fixed_length(raw_emg, seq_len*8),
'parallel_voiced_emg':parallel_emg,
'phonemes':phonemes,
'session_ids':combine_fixed_length(session_ids, seq_len),
'lengths':lengths,
'silent':silent}
return result
def make_normalizers():
dataset = EMGDataset(no_normalizers=True)
mfcc_samples = []
emg_samples = []
for d in dataset:
mfcc_samples.append(d['audio_features'])
emg_samples.append(d['emg'])
if len(emg_samples) > 50:
break
mfcc_norm = FeatureNormalizer(mfcc_samples, share_scale=True)
emg_norm = FeatureNormalizer(emg_samples, share_scale=False)
pickle.dump((mfcc_norm, emg_norm), open(FLAGS.normalizers_file, 'wb'))
if False:
FLAGS(sys.argv)
d = EMGDataset()
for i in range(1000):
d[i]
# + id="szcXBWSHZvsA"
from absl import flags
FLAGS = flags.FLAGS
for name in list(flags.FLAGS):
delattr(flags.FLAGS, name)
#wavenet_model.py
import sys
import os
import numpy as np
import soundfile as sf
import librosa
import logging
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.utils.data as torchdata
from nv_wavenet.pytorch.wavenet import WaveNet
#from nv_wavenet.pytorch import nv_wavenet
from data_utils import splice_audio
#from read_emg import EMGDataset
from read_librispeech import SpeechDataset
from absl import flags
FLAGS = flags.FLAGS
#flags.DEFINE_boolean('mel_spectrogram', False, 'use mel spectrogram features instead of mfccs for audio')
#flags.DEFINE_string('normalizers_file', 'normalizers.pkl', 'file with pickled feature normalizers')
flags.DEFINE_list('remove_channels', [], 'channels to remove')
#flags.DEFINE_list('silent_data_directories', ['./emg_data/silent_parallel_data'], 'silent data locations')
#flags.DEFINE_list('voiced_data_directories', ['./emg_data/voiced_parallel_data','./emg_data/nonparallel_data'], 'voiced data locations')
#flags.DEFINE_string('testset_file', 'testset_largedev.json', 'file with testset indices')
flags.DEFINE_list('silent_data_directories', ['./out'], 'silent data locations')
flags.DEFINE_list('voiced_data_directories', ['./out','./out'], 'voiced data locations')
flags.DEFINE_string('testset_file', 'testset_onlinedev.json', 'file with testset indices')
flags.DEFINE_string('text_align_directory', 'text_alignments', 'directory with alignment files')
flags.DEFINE_boolean('debug', False, 'debug')
flags.DEFINE_string('output_directory', 'output', 'where to save models and outputs')
flags.DEFINE_boolean('librispeech', False, 'train with librispeech data')
flags.DEFINE_string('pretrained_wavenet_model', None, 'filename of model to start training with')
flags.DEFINE_float('clip_norm', 0.1, 'gradient clipping max norm')
flags.DEFINE_boolean('wavenet_no_lstm', False, "don't use an LSTM before the wavenet")
class WavenetModel(nn.Module):
def __init__(self, input_dim):
super().__init__()
if not FLAGS.wavenet_no_lstm:
self.lstm = nn.LSTM(input_dim, 512, bidirectional=True, batch_first=True)
self.projection_layer = nn.Linear(512*2, 128)
else:
self.projection_layer = nn.Linear(input_dim, 128)
self.wavenet = WaveNet(n_in_channels=256, n_layers=16, max_dilation=128, n_residual_channels=64, n_skip_channels=256, n_out_channels=256, n_cond_channels=128, upsamp_window=432, upsamp_stride=160)
def pre_wavenet_processing(self, x):
if not FLAGS.wavenet_no_lstm:
x, _ = self.lstm(x)
x = F.dropout(x, 0.5, training=self.training)
x = self.projection_layer(x)
return x.permute(0,2,1)
def forward(self, x, audio):
x = self.pre_wavenet_processing(x)
return self.wavenet((x, audio))
def test(wavenet_model, testset, device):
wavenet_model.eval()
errors = []
dataloader = torchdata.DataLoader(testset, batch_size=1, shuffle=True, pin_memory=(device=='cuda'))
with torch.no_grad():
for batch in dataloader:
mfcc = batch['audio_features'].to(device)
audio = batch['quantized_audio'].to(device)
audio_out = wavenet_model(mfcc, audio)
loss = F.cross_entropy(audio_out, audio)
errors.append(loss.item())
wavenet_model.train()
return np.mean(errors)
#def save_output(wavenet_model, input_data, filename, device):
def save_wavenet_output(wavenet_model, input_data, filename, device):
wavenet_model.eval()
assert len(input_data.shape) == 2
X = torch.tensor(input_data, dtype=torch.float32).to(device).unsqueeze(0)
wavenet = wavenet_model.wavenet
inference_wavenet = NVWaveNet(**wavenet.export_weights())
# inference_wavenet = nv_wavenet.NVWaveNet(**wavenet.export_weights())
cond_input = wavenet_model.pre_wavenet_processing(X)
chunk_len = 400
overlap = 1
audio_chunks = []
for i in range(0, cond_input.size(2), chunk_len-overlap):
if cond_input.size(2)-i < overlap:
break # don't make segment at end that doesn't go past overlapped part
cond_chunk = cond_input[:,:,i:i+chunk_len]
wavenet_cond_input = wavenet.get_cond_input(cond_chunk)
audio_data = inference_wavenet.infer(wavenet_cond_input, nv_wavenet.Impl.SINGLE_BLOCK)
audio_chunk = librosa.core.mu_expand(audio_data.squeeze(0).cpu().numpy()-128, 255, True)
audio_chunks.append(audio_chunk)
audio_out = splice_audio(audio_chunks, overlap*160)
sf.write(filename, audio_out, 16000)
wavenet_model.train()
def train():
if FLAGS.librispeech:
dataset = SpeechDataset('LibriSpeech/train-clean-100-sliced', 'M', 'LibriSpeech/SPEAKERS.TXT')
testset = torch.utils.data.Subset(dataset, list(range(10)))
trainset = torch.utils.data.Subset(dataset, list(range(10,len(dataset))))
num_features = dataset.num_speech_features
batch_size = 4
logging.info('output example: %s', dataset.filenames[0])
else:
trainset = EMGDataset(dev=False, test=False, limit_length=True)
testset = EMGDataset(dev=True, limit_length=True)
num_features = testset.num_speech_features
batch_size = 1
logging.info('output example: %s', testset.example_indices[0])
if not os.path.exists(FLAGS.output_directory):
os.makedirs(FLAGS.output_directory)
device = 'cuda' if torch.cuda.is_available() and not FLAGS.debug else 'cpu'
wavenet_model = WavenetModel(num_features).to(device)
if FLAGS.pretrained_wavenet_model is not None:
wavenet_model.load_state_dict(torch.load(FLAGS.pretrained_wavenet_model))
optim = torch.optim.Adam(wavenet_model.parameters(), weight_decay=1e-7)
lr_sched = torch.optim.lr_scheduler.ReduceLROnPlateau(optim, 'min', 0.5, patience=2)
dataloader = torchdata.DataLoader(trainset, batch_size=batch_size, shuffle=True, pin_memory=(device=='cuda'))
best_dev_err = float('inf')
for epoch_idx in range(50):
losses = []
for batch in dataloader:
mfcc = batch['audio_features'].to(device)
audio = batch['quantized_audio'].to(device)
optim.zero_grad()
audio_out = wavenet_model(mfcc, audio)
loss = F.cross_entropy(audio_out, audio)
losses.append(loss.item())
loss.backward()
nn.utils.clip_grad_norm_(wavenet_model.parameters(), FLAGS.clip_norm)
optim.step()
train_err = np.mean(losses)
dev_err = test(wavenet_model, testset, device)
lr_sched.step(dev_err)
logging.info(f'finished epoch {epoch_idx+1} with error {dev_err:.2f}')
logging.info(f' train error {train_err:.2f}')
if dev_err < best_dev_err:
logging.info('saving model')
torch.save(wavenet_model.state_dict(), os.path.join(FLAGS.output_directory, 'wavenet_model.pt'))
best_dev_err = dev_err
wavenet_model.load_state_dict(torch.load(os.path.join(FLAGS.output_directory,'wavenet_model.pt'))) # re-load best parameters
for i, datapoint in enumerate(testset):
save_wavenet_output(wavenet_model, datapoint['audio_features'], os.path.join(FLAGS.output_directory, f'wavenet_output_{i}.wav'), device)
if False:
FLAGS(sys.argv)
os.makedirs(FLAGS.output_directory, exist_ok=True)
logging.basicConfig(handlers=[
logging.FileHandler(os.path.join(FLAGS.output_directory, 'log.txt'), 'w'),
logging.StreamHandler()
], level=logging.INFO, format="%(message)s")
logging.info(sys.argv)
train()
# + id="xPGIBL4XFfj9"
from absl import flags
FLAGS = flags.FLAGS
for name in list(flags.FLAGS):
delattr(flags.FLAGS, name)
#transduction_model.py
import sys
#sys.argv = " --train_dir training/".split(" ")
sys.argv = " ".split(" ")
import os
import sys
import numpy as np
import logging
import subprocess
import torch
from torch import nn
import torch.nn.functional as F
#from read_emg import EMGDataset, SizeAwareSampler
#from wavenet_model import WavenetModel, save_output as save_wavenet_output
from align import align_from_distances
from asr import evaluate
from transformer import TransformerEncoderLayer
#from data_utils import phoneme_inventory, decollate_tensor
from absl import app, flags
FLAGS = flags.FLAGS
flags.DEFINE_integer('verbosity', 1, 'logging verbosity')
flags.DEFINE_bool('debug', False, 'debug mode')
flags.DEFINE_boolean('mel_spectrogram', False, 'use mel spectrogram features instead of mfccs for audio')
flags.DEFINE_string('normalizers_file', 'normalizers.pkl', 'file with pickled feature normalizers')
flags.DEFINE_list('remove_channels', [], 'channels to remove')
flags.DEFINE_list('silent_data_directories', ['./emg_data/silent_parallel_data'], 'silent data locations')
flags.DEFINE_list('voiced_data_directories', ['./emg_data/voiced_parallel_data','./emg_data/nonparallel_data'], 'voiced data locations')
flags.DEFINE_string('testset_file', 'testset_largedev.json', 'file with testset indices')
flags.DEFINE_string('text_align_directory', 'text_alignments', 'directory with alignment files')
flags.DEFINE_boolean('run_with_pdb', False, 'Set to true for PDB debug mode')
flags.DEFINE_boolean('pdb_post_mortem', False,
'Set to true to handle uncaught exceptions with PDB '
'post mortem.')
flags.DEFINE_alias('pdb', 'pdb_post_mortem')
flags.DEFINE_boolean('run_with_profiling', False,
'Set to true for profiling the script. '
'Execution will be slower, and the output format might '
'change over time.')
flags.DEFINE_string('profile_file', None,
'Dump profile information to a file (for python -m '
'pstats). Implies --run_with_profiling.')
flags.DEFINE_boolean('use_cprofile_for_profiling', True,
'Use cProfile instead of the profile module for '
'profiling. This has no effect unless '
'--run_with_profiling is set.')
flags.DEFINE_boolean('only_check_args', False,
'Set to true to validate args and exit.',
allow_hide_cpp=True)
flags.DEFINE_integer('model_size', 768, 'number of hidden dimensions')
flags.DEFINE_integer('num_layers', 6, 'number of layers')
flags.DEFINE_integer('batch_size', 32, 'training batch size')
flags.DEFINE_float('learning_rate', 1e-3, 'learning rate')
flags.DEFINE_integer('learning_rate_patience', 5, 'learning rate decay patience')
flags.DEFINE_integer('learning_rate_warmup', 500, 'steps of linear warmup')
#flags.DEFINE_string('start_training_from', None, 'start training from this model')
flags.DEFINE_float('data_size_fraction', 1.0, 'fraction of training data to use')
flags.DEFINE_boolean('no_session_embed', False, "don't use a session embedding")
flags.DEFINE_float('phoneme_loss_weight', 0.1, 'weight of auxiliary phoneme prediction loss')
flags.DEFINE_float('l2', 1e-7, 'weight decay')
flags.DEFINE_string('start_training_from', './models/transduction_model/model_07.pt', 'start training from this model')
flags.DEFINE_string('pretrained_wavenet_model', "./models/wavenet_model/wavenet_model_50.pt", 'start training from this model')
flags.DEFINE_string('output_directory', "./models/transduction_model/07/", 'start training from this model')
class ResBlock(nn.Module):
def __init__(self, num_ins, num_outs, stride=1):
super().__init__()
self.conv1 = nn.Conv1d(num_ins, num_outs, 3, padding=1, stride=stride)
self.bn1 = nn.BatchNorm1d(num_outs)
self.conv2 = nn.Conv1d(num_outs, num_outs, 3, padding=1)
self.bn2 = nn.BatchNorm1d(num_outs)
if stride != 1 or num_ins != num_outs:
self.residual_path = nn.Conv1d(num_ins, num_outs, 1, stride=stride)
self.res_norm = nn.BatchNorm1d(num_outs)
else:
self.residual_path = None
def forward(self, x):
input_value = x
x = F.relu(self.bn1(self.conv1(x)))
x = self.bn2(self.conv2(x))
if self.residual_path is not None:
res = self.res_norm(self.residual_path(input_value))
else:
res = input_value
return F.relu(x + res)
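# Each ResBlock halves the time axis via conv1's kernel 3 / padding 1 / stride 2,
# so the three-block stack in Model below downsamples raw EMG by 8x -- consistent
# with the seq_len*8 used for 'raw_emg' in collate_fixed_length. An illustrative
# shape check of that single stride-2 convolution (assumes PyTorch is installed):

```python
import torch
import torch.nn as nn

conv = nn.Conv1d(8, 16, 3, padding=1, stride=2)  # conv1's shape in ResBlock
x = torch.randn(1, 8, 1600)                      # (batch, electrodes, raw samples)
with torch.no_grad():
    y = conv(x)
# output length = floor((1600 + 2*1 - 3) / 2) + 1 = 800
assert y.shape == (1, 16, 800)
```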
class Model(nn.Module):
def __init__(self, num_ins, num_outs, num_aux_outs, num_sessions):
super().__init__()
self.conv_blocks = nn.Sequential(
ResBlock(8, FLAGS.model_size, 2),
ResBlock(FLAGS.model_size, FLAGS.model_size, 2),
ResBlock(FLAGS.model_size, FLAGS.model_size, 2),
)
self.w_raw_in = nn.Linear(FLAGS.model_size, FLAGS.model_size)
if not FLAGS.no_session_embed:
emb_size = 32
self.session_emb = nn.Embedding(num_sessions, emb_size)
self.w_emb = nn.Linear(emb_size, FLAGS.model_size)
encoder_layer = TransformerEncoderLayer(d_model=FLAGS.model_size, nhead=8, relative_positional=True, relative_positional_distance=100, dim_feedforward=3072)
self.transformer = nn.TransformerEncoder(encoder_layer, FLAGS.num_layers)
self.w_out = nn.Linear(FLAGS.model_size, num_outs)
self.w_aux = nn.Linear(FLAGS.model_size, num_aux_outs)
def forward(self, x_feat, x_raw, session_ids):
# x shape is (batch, time, electrode)
x_raw = x_raw.transpose(1,2) # put channel before time for conv
x_raw = self.conv_blocks(x_raw)
x_raw = x_raw.transpose(1,2)
x_raw = self.w_raw_in(x_raw)
if FLAGS.no_session_embed:
x = x_raw
else:
emb = self.session_emb(session_ids)
x = x_raw + self.w_emb(emb)
x = x.transpose(0,1) # put time first
x = self.transformer(x)
x = x.transpose(0,1)
return self.w_out(x), self.w_aux(x)
def test(model, testset, device):
model.eval()
dataloader = torch.utils.data.DataLoader(testset, batch_size=32, collate_fn=testset.collate_fixed_length)
losses = []
accuracies = []
phoneme_confusion = np.zeros((len(phoneme_inventory),len(phoneme_inventory)))
with torch.no_grad():
for example in dataloader:
X = example['emg'].to(device)
X_raw = example['raw_emg'].to(device)
sess = example['session_ids'].to(device)
pred, phoneme_pred = model(X, X_raw, sess)
loss, phon_acc = dtw_loss(pred, phoneme_pred, example, True, phoneme_confusion)
losses.append(loss.item())
accuracies.append(phon_acc)
model.train()
return np.mean(losses), np.mean(accuracies), phoneme_confusion #TODO size-weight average
def save_output(model, datapoint, filename, device, gold_mfcc=False):
model.eval()
if gold_mfcc:
y = datapoint['audio_features']
else:
with torch.no_grad():
sess = torch.tensor(datapoint['session_ids'], device=device).unsqueeze(0)
X = torch.tensor(datapoint['emg'], dtype=torch.float32, device=device).unsqueeze(0)
X_raw = torch.tensor(datapoint['raw_emg'], dtype=torch.float32, device=device).unsqueeze(0)
pred, _ = model(X, X_raw, sess)
pred = pred.squeeze(0)
y = pred.cpu().detach().numpy()
wavenet_model = WavenetModel(y.shape[1]).to(device)
assert FLAGS.pretrained_wavenet_model is not None
wavenet_model.load_state_dict(torch.load(FLAGS.pretrained_wavenet_model))
save_wavenet_output(wavenet_model, y, filename, device)
model.train()
def dtw_loss(predictions, phoneme_predictions, example, phoneme_eval=False, phoneme_confusion=None):
device = predictions.device
predictions = decollate_tensor(predictions, example['lengths'])
phoneme_predictions = decollate_tensor(phoneme_predictions, example['lengths'])
audio_features = example['audio_features'].to(device)
phoneme_targets = example['phonemes']
audio_features = decollate_tensor(audio_features, example['audio_feature_lengths'])
losses = []
correct_phones = 0
total_length = 0
for pred, y, pred_phone, y_phone, silent in zip(predictions, audio_features, phoneme_predictions, phoneme_targets, example['silent']):
assert len(pred.size()) == 2 and len(y.size()) == 2
y_phone = y_phone.to(device)
if silent:
dists = torch.cdist(pred.unsqueeze(0), y.unsqueeze(0))
costs = dists.squeeze(0)
# pred_phone (seq1_len, 48), y_phone (seq2_len)
# phone_probs (seq1_len, seq2_len)
pred_phone = F.log_softmax(pred_phone, -1)
phone_lprobs = pred_phone[:,y_phone]
costs = costs + FLAGS.phoneme_loss_weight * -phone_lprobs
alignment = align_from_distances(costs.T.cpu().detach().numpy())
loss = costs[alignment,range(len(alignment))].sum()
if phoneme_eval:
alignment = align_from_distances(costs.T.cpu().detach().numpy())
pred_phone = pred_phone.argmax(-1)
correct_phones += (pred_phone[alignment] == y_phone).sum().item()
for p, t in zip(pred_phone[alignment].tolist(), y_phone.tolist()):
phoneme_confusion[p, t] += 1
else:
assert y.size(0) == pred.size(0)
dists = F.pairwise_distance(y, pred)
assert len(pred_phone.size()) == 2 and len(y_phone.size()) == 1
phoneme_loss = F.cross_entropy(pred_phone, y_phone, reduction='sum')
loss = dists.cpu().sum() + FLAGS.phoneme_loss_weight * phoneme_loss.cpu()
if phoneme_eval:
pred_phone = pred_phone.argmax(-1)
correct_phones += (pred_phone == y_phone).sum().item()
for p, t in zip(pred_phone.tolist(), y_phone.tolist()):
phoneme_confusion[p, t] += 1
losses.append(loss)
total_length += y.size(0)
return sum(losses)/total_length, correct_phones/total_length
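`align_from_distances` is imported from elsewhere in the project; as a rough illustration of the kind of monotonic alignment it is used for here, a minimal dynamic-programming version over a (target_len, source_len) cost matrix could look like this (a sketch, not the project's actual implementation):

```python
import numpy as np

def dtw_align(costs):
    """Pick a non-decreasing source index for each target frame, minimizing
    the summed cost. A rough stand-in for the external align_from_distances."""
    T, S = costs.shape
    dp = np.full((T, S), np.inf)
    back = np.zeros((T, S), dtype=int)
    dp[0] = costs[0]
    for t in range(1, T):
        best = np.minimum.accumulate(dp[t - 1])  # min over source indices <= s
        arg = np.zeros(S, dtype=int)
        cur = 0
        for s in range(1, S):
            if dp[t - 1, s] < dp[t - 1, cur]:
                cur = s
            arg[s] = cur
        dp[t] = best + costs[t]
        back[t] = arg
    # backtrack from the cheapest final cell; monotonic because back[t, s] <= s
    align = np.zeros(T, dtype=int)
    align[-1] = int(np.argmin(dp[-1]))
    for t in range(T - 1, 0, -1):
        align[t - 1] = back[t, align[t]]
    return align

costs = np.array([[0., 1, 1], [1, 0, 1], [1, 1, 0]])
print(dtw_align(costs))  # follows the cheap diagonal: [0 1 2]
```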
def train_model(trainset, devset, device, save_sound_outputs=True, n_epochs=80):
if FLAGS.data_size_fraction >= 1:
training_subset = trainset
else:
training_subset = torch.utils.data.Subset(trainset, list(range(int(len(trainset)*FLAGS.data_size_fraction))))
dataloader = torch.utils.data.DataLoader(training_subset, pin_memory=(device=='cuda'), collate_fn=devset.collate_fixed_length, num_workers=8, batch_sampler=SizeAwareSampler(trainset, 256000))
n_phones = len(phoneme_inventory)
model = Model(devset.num_features, devset.num_speech_features, n_phones, devset.num_sessions).to(device)
if FLAGS.start_training_from is not None:
state_dict = torch.load(FLAGS.start_training_from)
del state_dict['session_emb.weight']
model.load_state_dict(state_dict, strict=False)
optim = torch.optim.AdamW(model.parameters(), weight_decay=FLAGS.l2)
lr_sched = torch.optim.lr_scheduler.ReduceLROnPlateau(optim, 'min', 0.5, patience=FLAGS.learning_rate_patience)
def set_lr(new_lr):
for param_group in optim.param_groups:
param_group['lr'] = new_lr
target_lr = FLAGS.learning_rate
def schedule_lr(iteration):
iteration = iteration + 1
if iteration <= FLAGS.learning_rate_warmup:
set_lr(iteration*target_lr/FLAGS.learning_rate_warmup)
batch_idx = 0
for epoch_idx in range(n_epochs):
losses = []
for example in dataloader:
optim.zero_grad()
schedule_lr(batch_idx)
X = example['emg'].to(device)
X_raw = example['raw_emg'].to(device)
sess = example['session_ids'].to(device)
pred, phoneme_pred = model(X, X_raw, sess)
loss, _ = dtw_loss(pred, phoneme_pred, example)
losses.append(loss.item())
loss.backward()
optim.step()
batch_idx += 1
train_loss = np.mean(losses)
val, phoneme_acc, _ = test(model, devset, device)
lr_sched.step(val)
logging.info(f'finished epoch {epoch_idx+1} - validation loss: {val:.4f} training loss: {train_loss:.4f} phoneme accuracy: {phoneme_acc*100:.2f}')
torch.save(model.state_dict(), os.path.join(FLAGS.output_directory,'model.pt'))
if save_sound_outputs:
save_output(model, devset[0], os.path.join(FLAGS.output_directory, f'epoch_{epoch_idx}_output.wav'), device)
model.load_state_dict(torch.load(os.path.join(FLAGS.output_directory,'model.pt'))) # re-load best parameters
if save_sound_outputs:
for i, datapoint in enumerate(devset):
save_output(model, datapoint, os.path.join(FLAGS.output_directory, f'example_output_{i}.wav'), device)
evaluate(devset, FLAGS.output_directory)
return model
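The linear warmup in schedule_lr above maps the batch index to a learning rate; the same arithmetic in isolation (the defaults shown are the flag values defined in this notebook):

```python
def warmup_lr(batch_idx, target_lr=1e-3, warmup_steps=500):
    # mirrors schedule_lr: linear ramp over the first warmup_steps batches,
    # after which ReduceLROnPlateau takes over at target_lr
    iteration = batch_idx + 1
    if iteration <= warmup_steps:
        return iteration * target_lr / warmup_steps
    return target_lr

print(warmup_lr(0), warmup_lr(249), warmup_lr(499))  # 2e-06 0.0005 0.001
```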
def main(argvs):
os.makedirs(FLAGS.output_directory, exist_ok=True)
logging.basicConfig(handlers=[
logging.FileHandler(os.path.join(FLAGS.output_directory, 'log.txt'), 'w'),
logging.StreamHandler()
], level=logging.INFO, format="%(message)s")
logging.info(subprocess.run(['git','rev-parse','HEAD'], stdout=subprocess.PIPE, universal_newlines=True).stdout)
logging.info(subprocess.run(['git','diff'], stdout=subprocess.PIPE, universal_newlines=True).stdout)
logging.info(sys.argv)
trainset = EMGDataset(dev=False,test=False)
devset = EMGDataset(dev=True)
logging.info('output example: %s', devset.example_indices[0])
logging.info('train / dev split: %d %d',len(trainset),len(devset))
device = 'cuda' if torch.cuda.is_available() and not FLAGS.debug else 'cpu'
model = train_model(trainset, devset, device, save_sound_outputs=(FLAGS.pretrained_wavenet_model is not None))
#app.run(main)
#main()
# + id="yVGWXG-huOJT"
from absl import flags
FLAGS = flags.FLAGS
# clear flags defined by earlier cells so this cell can be re-run cleanly
for name in list(flags.FLAGS):
    delattr(flags.FLAGS, name)
import sys
#sys.argv = " --train_dir training/".split(" ")
sys.argv = " ".split(" ")
sys.argv = " --models ./models/transduction_model/model_07.pt --pretrained_wavenet_model ./models/wavenet_model/wavenet_model_50.pt --output_directory evaluation_output".split(" ")
#evaluate.py
import sys
import os
import logging
import torch
from torch import nn
#from transduction_model import test, save_output, Model
#from read_emg import EMGDataset
from asr import evaluate
#from data_utils import phoneme_inventory, print_confusion
from absl import flags, app#, logging
FLAGS = flags.FLAGS
flags.DEFINE_list('models', [], 'identifiers of models to evaluate')
flags.DEFINE_integer('verbosity', 1, 'logging verbosity')
flags.DEFINE_bool('debug', False, 'debug mode')
flags.DEFINE_boolean('mel_spectrogram', False, 'use mel spectrogram features instead of mfccs for audio')
flags.DEFINE_string('normalizers_file', 'normalizers.pkl', 'file with pickled feature normalizers')
flags.DEFINE_list('remove_channels', [], 'channels to remove')
#flags.DEFINE_list('silent_data_directories', ['./emg_data/silent_parallel_data'], 'silent data locations')
#flags.DEFINE_list('voiced_data_directories', ['./emg_data/voiced_parallel_data','./emg_data/nonparallel_data'], 'voiced data locations')
#flags.DEFINE_string('testset_file', 'testset_largedev.json', 'file with testset indices')
flags.DEFINE_list('silent_data_directories', ['./out'], 'silent data locations')
flags.DEFINE_list('voiced_data_directories', ['./out','./out'], 'voiced data locations')
flags.DEFINE_string('testset_file', 'testset_onlinedev.json', 'file with testset indices')
flags.DEFINE_string('text_align_directory', 'text_alignments', 'directory with alignment files')
flags.DEFINE_boolean('run_with_pdb', False, 'Set to true for PDB debug mode')
flags.DEFINE_boolean('pdb_post_mortem', False,
'Set to true to handle uncaught exceptions with PDB '
'post mortem.')
flags.DEFINE_alias('pdb', 'pdb_post_mortem')
flags.DEFINE_boolean('run_with_profiling', False,
'Set to true for profiling the script. '
'Execution will be slower, and the output format might '
'change over time.')
flags.DEFINE_string('profile_file', None,
'Dump profile information to a file (for python -m '
'pstats). Implies --run_with_profiling.')
flags.DEFINE_boolean('use_cprofile_for_profiling', True,
'Use cProfile instead of the profile module for '
'profiling. This has no effect unless '
'--run_with_profiling is set.')
flags.DEFINE_boolean('only_check_args', False,
'Set to true to validate args and exit.',
allow_hide_cpp=True)
flags.DEFINE_integer('model_size', 768, 'number of hidden dimensions')
flags.DEFINE_integer('num_layers', 6, 'number of layers')
flags.DEFINE_integer('batch_size', 32, 'training batch size')
flags.DEFINE_float('learning_rate', 1e-3, 'learning rate')
flags.DEFINE_integer('learning_rate_patience', 5, 'learning rate decay patience')
flags.DEFINE_integer('learning_rate_warmup', 500, 'steps of linear warmup')
#flags.DEFINE_string('start_training_from', None, 'start training from this model')
flags.DEFINE_float('data_size_fraction', 1.0, 'fraction of training data to use')
flags.DEFINE_boolean('no_session_embed', False, "don't use a session embedding")
flags.DEFINE_float('phoneme_loss_weight', 0.1, 'weight of auxiliary phoneme prediction loss')
flags.DEFINE_float('l2', 1e-7, 'weight decay')
#flags.DEFINE_boolean('debug', False, 'debug')
#flags.DEFINE_string('output_directory', 'output', 'where to save models and outputs')
flags.DEFINE_boolean('librispeech', False, 'train with librispeech data')
#flags.DEFINE_string('pretrained_wavenet_model', None, 'filename of model to start training with')
flags.DEFINE_float('clip_norm', 0.1, 'gradient clipping max norm')
flags.DEFINE_boolean('wavenet_no_lstm', False, "don't use a LSTM before the wavenet")
flags.DEFINE_string('start_training_from', './models/transduction_model/model_07.pt', 'start training from this model')
flags.DEFINE_string('pretrained_wavenet_model', "./models/wavenet_model/wavenet_model_50.pt", 'pretrained wavenet vocoder used to synthesize audio outputs')
flags.DEFINE_string('output_directory', "./evaluation_output", 'where to save models and outputs')
#flags.DEFINE_string('output_directory', "./models/transduction_model/07/", 'start training from this model')
class EnsembleModel(nn.Module):
def __init__(self, models):
super().__init__()
self.models = nn.ModuleList(models)
def forward(self, x, x_raw, sess):
ys = []
ps = []
for model in self.models:
y, p = model(x, x_raw, sess)
ys.append(y)
ps.append(p)
return torch.stack(ys,0).mean(0), torch.stack(ps,0).mean(0)
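EnsembleModel simply averages the per-model outputs; the same stack-and-mean pattern with plain numpy arrays standing in for two hypothetical model predictions:

```python
import numpy as np

# two hypothetical (time, features) predictions from different checkpoints
y1 = np.array([[0.2, 0.8], [0.6, 0.4]])
y2 = np.array([[0.4, 0.6], [0.2, 0.8]])

# equivalent of torch.stack(ys, 0).mean(0) in EnsembleModel.forward:
# stack along a new leading axis, then average it away
avg = np.stack([y1, y2], 0).mean(0)  # each entry is the mean of the two models
```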
def main(argvs):
os.makedirs(FLAGS.output_directory, exist_ok=True)
logging.basicConfig(handlers=[
logging.FileHandler(os.path.join(FLAGS.output_directory, 'eval_log.txt'), 'w'),
logging.StreamHandler()
], level=logging.INFO, format="%(message)s")
testset = EMGDataset(test=True)
device = 'cuda' if torch.cuda.is_available() and not FLAGS.debug else 'cpu'
models = []
for fname in FLAGS.models:
state_dict = torch.load(fname)
n_sess = 1 if FLAGS.no_session_embed else state_dict["session_emb.weight"].size(0)
model = Model(testset.num_features, testset.num_speech_features, len(phoneme_inventory), n_sess).to(device)
model.load_state_dict(state_dict)
models.append(model)
ensemble = EnsembleModel(models)
_, _, confusion = test(ensemble, testset, device)
print_confusion(confusion)
for i, datapoint in enumerate(testset):
save_output(ensemble, datapoint, os.path.join(FLAGS.output_directory, f'example_output_{i}.wav'), device)
evaluate(testset, FLAGS.output_directory)
if False:
FLAGS(sys.argv)
main()
#app.run(main)
# + colab={"base_uri": "https://localhost:8080/"} id="Ri7HdNvuuY8r" outputId="f93eebe7-63e3-464b-f73a-37c45bddcfb2"
FLAGS(sys.argv)
# + id="RGNLwPiJdWZ-"
# %rm -rf ./out
import sys
import os
import textwrap
#import curses
import soundfile as sf
import json
import numpy as np
book_file='books/War_of_the_Worlds.txt'
output_directory='./out/0'
if True:
os.makedirs(output_directory, exist_ok=True)
# os.makedirs(FLAGS.output_directory, exist_ok=False)
output_idx = 0
book = Book(book_file)
def display_sentence(sentence):#, win):
#height, width = win.getmaxyx()
height=20
width=80
#win.clear()
print(' ')
wrapped_sentence = textwrap.wrap(sentence, width)
for i, text in enumerate(wrapped_sentence):
if i >= height:
break
#win.addstr(i, 0, text)
print(text)
print(' ')
#win.refresh()
def save_data(output_idx, data, book):
emg, audio, button, chunk_info = data
emg_file = os.path.join(output_directory, f'{output_idx}_emg.npy')
# audio_file = os.path.join(output_directory, f'{output_idx}_audio.flac')
audio_file = os.path.join(output_directory, f'{output_idx}_audio_clean.flac')
button_file = os.path.join(output_directory, f'{output_idx}_button.npy')
info_file = os.path.join(output_directory, f'{output_idx}_info.json')
#assert not os.path.exists(emg_file), 'trying to overwrite existing file'
np.save(emg_file, emg)
sf.write(audio_file, audio, 16000)
np.save(button_file, button)
if book is None:
# special silence segment
bf = ''
bi = -1
t = ''
else:
bf = book.file
bi = book.current_index
t = book.current_sentence()
with open(info_file, 'w') as f:
json.dump({'book':bf, 'sentence_index':bi, 'text':t, 'chunks':chunk_info}, f)
def get_ends(data):
emg, audio, button, chunk_info = data
emg_start = emg[:500,:]
emg_end = emg[-500:,:]
dummy_audio = np.zeros(8000)
dummy_button = np.zeros(500, dtype=bool)
chunk_info = [(500,8000,500)]
return (emg_start, dummy_audio, dummy_button, chunk_info), (emg_end, dummy_audio, dummy_button, chunk_info)
# + id="tB35a5PF5joA"
#asr.py
import os
import logging
import deepspeech
import jiwer
import soundfile as sf
import numpy as np
from unidecode import unidecode
def evaluate(testset, audio_directory):
model = deepspeech.Model('deepspeech-0.7.0-models.pbmm')
model.enableExternalScorer('deepspeech-0.7.0-models.scorer')
predictions = []
targets = []
for i, datapoint in enumerate(testset):
#if i == 0:
audio, rate = sf.read(os.path.join(audio_directory,f'example_output_{i}.wav'))
assert rate == model.sampleRate(), 'wrong sample rate'
audio_int16 = (audio*(2**15)).astype(np.int16)
text = model.stt(audio_int16)
predictions.append(text)
target_text = unidecode(datapoint['text'])
targets.append(target_text)
transformation = jiwer.Compose([jiwer.RemovePunctuation(), jiwer.ToLowerCase()])
targets = transformation(targets)
predictions = transformation(predictions)
#logging.info(f'targets: {targets}')
logging.info(f'predictions: {predictions}')
#logging.info(f'wer: {jiwer.wer(targets, predictions)}')
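`jiwer.wer` reports word error rate derived from edit distance; a minimal self-contained version of the same metric (a sketch of the quantity being computed, not jiwer's actual internals):

```python
def word_error_rate(target, prediction):
    """Levenshtein distance over words divided by target length."""
    t, p = target.split(), prediction.split()
    d = [[0] * (len(p) + 1) for _ in range(len(t) + 1)]
    for i in range(len(t) + 1):
        d[i][0] = i
    for j in range(len(p) + 1):
        d[0][j] = j
    for i in range(1, len(t) + 1):
        for j in range(1, len(p) + 1):
            d[i][j] = min(d[i - 1][j] + 1,                              # deletion
                          d[i][j - 1] + 1,                              # insertion
                          d[i - 1][j - 1] + (t[i - 1] != p[j - 1]))     # substitution
    return d[-1][-1] / max(len(t), 1)

print(word_error_rate("the war of the worlds", "the wars of the worlds"))  # 0.2
```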
# + id="Okl1tri4CLYn" colab={"base_uri": "https://localhost:8080/"} outputId="394c6833-093c-4790-a48b-fae185312d42"
import time
from matplotlib.animation import FuncAnimation
import matplotlib.pyplot as plt
import numpy as np
import sounddevice as sd
import scipy.signal
import brainflow
from brainflow.board_shim import BoardShim, BrainFlowInputParams, BoardIds, IpProtocolType
from brainflow.data_filter import DataFilter, FilterTypes, AggOperations
def remove_drift(signal, fs):
b, a = scipy.signal.butter(3, 2, 'highpass', fs=fs)
return scipy.signal.filtfilt(b, a, signal)
def notch(signal, freq, sample_frequency):
b, a = scipy.signal.iirnotch(freq, 25, sample_frequency)
# b, a = scipy.signal.iirnotch(freq, 30, sample_frequency)
return scipy.signal.filtfilt(b, a, signal)
def notch_harmonics(signal, freq, sample_frequency):
for f in range(freq, sample_frequency//2, freq):
signal = notch(signal, f, sample_frequency)
return signal
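A quick check of the notch helper above on synthetic data: a 50 Hz interference tone is removed while a 20 Hz component passes through (sample rate 512 Hz as used by the recorder below; signal length and thresholds are illustrative):

```python
import numpy as np
import scipy.signal

fs = 512
t = np.arange(4 * fs) / fs                  # four seconds of samples
mains = np.sin(2 * np.pi * 50 * t)          # 50 Hz interference
tone = np.sin(2 * np.pi * 20 * t)           # 20 Hz signal of interest

b, a = scipy.signal.iirnotch(50, 25, fs)    # same design as notch() above
filtered = scipy.signal.filtfilt(b, a, mains + tone)

# away from the edges the 50 Hz component is gone and the 20 Hz tone survives
interior = slice(fs, -fs)
residual = np.std(filtered[interior] - tone[interior])
print(residual < 0.05 * np.std(mains))      # True
```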
def filter_signal(signals, fs):
""" signals is 2d: time, channels """
result = np.zeros_like(signals)
for i in range(signals.shape[1]):
x = signals[:,i]
x = notch_harmonics(x, 50, fs)
# x = notch_harmonics(x, 60, fs)
x = remove_drift(x, fs)
result[:,i] = x
return result
def get_last_sequence(chunk_list, n, k, do_filtering, fs):
cumulative_size = 0
selected_chunks = [np.zeros((0,k))]
for chunk in reversed(chunk_list):
selected_chunks.append(chunk)
cumulative_size += chunk.shape[0]
if cumulative_size > n:
break
selected_chunks.reverse()
result = np.concatenate(selected_chunks, 0)[-n:,:]
if do_filtering and result.shape[0] > 12:
result = filter_signal(result, fs)
if result.shape[0] < n:
result_padded = np.concatenate([np.zeros((n-result.shape[0],result.shape[1])), result], 0)
else:
result_padded = result
return result_padded
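get_last_sequence's selection-and-pad behavior, shown inline on tiny hypothetical chunks with the filtering step omitted (i.e. the do_filtering=False path):

```python
import numpy as np

# two hypothetical EMG chunks, 3 channels each
chunks = [np.ones((4, 3)), 2 * np.ones((5, 3))]
n = 12  # want the last 12 samples

# same logic as get_last_sequence without filtering:
# take the last n rows of the concatenated chunks, left-pad with zeros if short
result = np.concatenate(chunks, 0)[-n:, :]
if result.shape[0] < n:
    pad = np.zeros((n - result.shape[0], result.shape[1]))
    result = np.concatenate([pad, result], 0)

print(result.shape)      # (12, 3)
print(result[:3].sum())  # 0.0 -- three rows of zero padding
```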
from time import perf_counter  # used below to throttle frame updates
time000 = perf_counter()
last_frame_sg2=0
class Recorder(object):
def __init__(self, debug=False, display=True, num_channels=None, wifi=True):
# make audio stream
# self.audio_stream = sd.InputStream(device=None, channels=1, samplerate=16000)
# make emg stream
params = BrainFlowInputParams()
if debug:
board_id = -1 # synthetic
self.sample_rate = 256
else:
board_id = BoardIds.FREEEEG32_BOARD.value
params.serial_port = '/dev/ttyS11'
self.sample_rate = 512
self.emg_channels = BoardShim.get_emg_channels(board_id)
if num_channels is not None:
self.emg_channels = self.emg_channels[:num_channels]
#board = BoardShim(board_id, params)
global board
board.release_all_sessions()
board.prepare_session()
board.start_stream()
self.board = board
# config and make data holders
audio_multiplier = int(16000/self.sample_rate)
self.window = self.sample_rate*5
self.audio_data = []
self.emg_data = []
self.button_data = []
self.debug = debug
self.previous_sample_number = -1
# plot setup
self.display = display
if display:
print('init')
#plt.ion()
plt.figure()
fig, (audio_ax, emg_ax) = plt.subplots(2)
#audio_ax.axis((0, window*audio_multiplier, -1, 1))
audio_ax.axis((0, self.window, -300, 300))
emg_ax.axis((0, self.window, -300, 300))
#audio_lines = audio_ax.plot(np.zeros(window*audio_multiplier))
#audio_lines = audio_ax.plot(np.zeros(window),len(self.emg_channels))
emg_lines = emg_ax.plot(np.zeros((self.window,len(self.emg_channels))))
for l,c in zip(emg_lines, ['grey', 'mediumpurple', 'blue', 'green', 'yellow', 'orange', 'red', 'sienna']):
l.set_color(c)
text = emg_ax.text(50,-250,'RMS: 0')
for ax in (audio_ax, emg_ax):
ax.set_yticks([0])
ax.yaxis.grid(True)
ax.tick_params(bottom=False, top=False, labelbottom=False,
right=False, left=False, labelleft=False)
#self.fig.tight_layout(pad=0)
plt.close('all')
def update_plot(frame):
""" This is called by matplotlib for each plot update. """
# audio_to_plot = get_last_sequence(self.audio_data, window*audio_multiplier, 1, False, sample_rate)
# audio_to_plot = audio_to_plot.squeeze(1)
# audio_lines[0].set_ydata(audio_to_plot)
emg_to_plot = get_last_sequence(self.emg_data, self.window, len(self.emg_channels), True, self.sample_rate)
for column, line in enumerate(emg_lines):
line.set_ydata(emg_to_plot[:, column])
text.set_text('RMS: '+str(emg_to_plot[-self.sample_rate*2:-self.sample_rate//2].std()))
return emg_lines
#return audio_lines + emg_lines
#self.ani = FuncAnimation(self.fig, update_plot, interval=30)
def update(self):
global output_idx, book
#if self.display:
# next two lines seem to be a better alternative to plt.pause(0.005)
# https://github.com/matplotlib/matplotlib/issues/11131
# plt.gcf().canvas.draw_idle()
# plt.gcf().canvas.start_event_loop(0.005)
#else:
# time.sleep(0.005)
current_audio = []
#print(self.board.get_board_data_count())
while self.board.get_board_data_count() > 0: # because stream.read_available seems to max out, leading us to not read enough with one read
data = self.board.get_board_data()
#if True:
# if len(data[0])>int(512/fps_sg2):
# data=data[:,len(data[0])-int(512/fps_sg2):]
#print(data)
#assert not overflowed
current_audio.append(data)
if len(current_audio) > 0:
##self.audio_data.append(np.concatenate(current_audio,0))
#self.audio_data.append(data[self.emg_channels,0].T)
#print(len(data[0]))
self.audio_data.append(np.zeros(int(len(data[0])*(16000/self.sample_rate))))
#data = self.board.get_board_data() # get all data and remove it from internal buffer
self.emg_data.append(data[self.emg_channels,:].T)
#print('update:', self.emg_data)
if True:
# if not self.debug:
for sn in data[0,:]:
if self.previous_sample_number != -1 and sn != (self.previous_sample_number+1)%256:
print(f'skip from {self.previous_sample_number} to {sn}')
self.previous_sample_number = sn
is_digital_inputs = data[12,:] == 193
button_data = data[16,is_digital_inputs].astype(bool)
self.button_data.append(button_data)
if sum(button_data) != 0:
print('button pressed')
time100=perf_counter()
global time000, last_frame_sg2, fps_sg2
emg = np.concatenate(self.emg_data, 0)
#print(len(emg))
if len(emg)<int(512/fps_sg2)*2:
return []
this_frame_sg2=int((time100-time000)*fps_sg2)
send_sg2=False
if this_frame_sg2>last_frame_sg2:
last_frame_sg2=this_frame_sg2
send_sg2=True
if send_sg2:
if True:
data1 = self.get_data()
#if True:
# if len(data1)>int(512/fps_sg2):
# data1=data1[len(data1)-int(512/fps_sg2):]
output_idx=0
book.current_index=0
save_data(output_idx, data1, book)
return [True]
plt.figure()
#print('plt.figure()')
fig, (audio_ax, emg_ax) = plt.subplots(2)
#audio_ax.axis((0, window*audio_multiplier, -1, 1))
audio_ax.axis((0, self.window, -300, 300))
emg_ax.axis((0, self.window, -300, 300))
#audio_lines = audio_ax.plot(np.zeros(window*audio_multiplier))
#audio_lines = audio_ax.plot(np.zeros(window),len(self.emg_channels))
emg_lines = emg_ax.plot(np.zeros((self.window,len(self.emg_channels))))
for l,c in zip(emg_lines, ['grey', 'mediumpurple', 'blue', 'green', 'yellow', 'orange', 'red', 'sienna']):
l.set_color(c)
text = emg_ax.text(50,-250,'RMS: 0')
for ax in (audio_ax, emg_ax):
ax.set_yticks([0])
ax.yaxis.grid(True)
ax.tick_params(bottom=False, top=False, labelbottom=False,
right=False, left=False, labelleft=False)
emg_to_plot = get_last_sequence(self.emg_data, self.window, len(self.emg_channels), True, self.sample_rate)
for column, line in enumerate(emg_lines):
line.set_ydata(emg_to_plot[:, column])
text.set_text('RMS: '+str(emg_to_plot[-self.sample_rate*2:-self.sample_rate//2].std()))
buf2 = BytesIO()
buf2.seek(0)
#self.fig
plt.savefig(buf2, format='png')
#plt.show()
myimage=buf2.getvalue()
plt.close('all')
#print('plt.close()')
buf2.close()
#msg['buffers']=
if False:
data1 = self.get_data()
output_idx=0
book.current_index=0
save_data(output_idx, data1, book)
# if output_idx == 0:
# save_data(output_idx, data1, None)
# else:
# save_data(output_idx, data1, book)
# book.next()
#
# output_idx += 1
# display_sentence(book.current_sentence())#, text_win)
#print(myimage)
return [memoryview(myimage)]
else:
return []
def get_data(self):
#print('get_data:', self.emg_data)
emg = np.concatenate(self.emg_data, 0)
if True:
if len(emg)>int(512/fps_sg2):
emg=emg[len(emg)-int(512/fps_sg2):]
audio = np.concatenate(self.audio_data, 0)
if True:
if len(audio)>int(16000/fps_sg2):
audio=audio[len(audio)-int(16000/fps_sg2):]
#audio = np.concatenate(self.audio_data, 0).squeeze(1)
button = np.concatenate(self.button_data, 0)
chunk_sizes = [(e.shape[0],a.shape[0],b.shape[0]) for e, a, b in zip(self.emg_data, self.audio_data, self.button_data)]
self.emg_data = []
self.audio_data = []
self.button_data = []
return emg, audio, button, chunk_sizes
def __enter__(self):
# self.audio_stream.start()
return self
def __exit__(self, type, value, traceback):
# self.audio_stream.stop()
# self.audio_stream.close()
#self.board.stop_stream()
#self.board.release_session()
#print('plt.close()')
#plt.close()
return 0
if True:
r= Recorder(debug=False, display=True, wifi=False, num_channels=8)
# + id="fxupESH8TxGZ" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="bd900c4c-b1a8-42bb-98a0-60de48b659fb"
def target_func1(comm, msg):
global ser, master, board, r
# To Write to the device
msg_buffers_0_tobytes = msg['buffers'][0].tobytes()
msg['buffers']=[]
#print(msg_buffers_0_tobytes)
# print(len(msg_buffers_0_tobytes))
ser.write(msg_buffers_0_tobytes)
# print(len(msg_buffers_0_tobytes))
# To read from the device
# os.read(master,len(msg_buffers_0_tobytes))
# time.sleep(10)
if False:
data = board.get_board_data()
print(data)
# board.stop_stream()
# board.release_session()
if True:
msg['buffers']=r.update()
encoded=''
#print(len(msg['buffers']))
if len(msg['buffers'])>0:
msg['buffers']=[]
# if True:
global FLAGS
os.makedirs(FLAGS.output_directory, exist_ok=True)
logging.basicConfig(handlers=[
logging.FileHandler(os.path.join(FLAGS.output_directory, 'eval_log.txt'), 'w'),
logging.StreamHandler()
], level=logging.INFO, format="%(message)s")
testset = EMGDataset(test=True)
device = 'cuda' if torch.cuda.is_available() and not FLAGS.debug else 'cpu'
models = []
for fname in FLAGS.models:
# state_dict = torch.load(fname, map_location=torch.device('cpu'))
state_dict = torch.load(fname)
n_sess = 1 if FLAGS.no_session_embed else state_dict["session_emb.weight"].size(0)
model = Model(testset.num_features, testset.num_speech_features, len(phoneme_inventory), n_sess).to(device)
model.load_state_dict(state_dict)
models.append(model)
ensemble = EnsembleModel(models)
_, _, confusion = test(ensemble, testset, device)
print_confusion(confusion)
for i1, datapoint in enumerate(testset):
#if i == 0:
# save_output(ensemble, datapoint, os.path.join(FLAGS.output_directory, f'example_output_{i}.wav'), device)
model = ensemble
#datapoint
filename = os.path.join(FLAGS.output_directory, f'example_output_{i1}.wav')
#device
gold_mfcc=False
model.eval()
if gold_mfcc:
y = datapoint['audio_features']
else:
with torch.no_grad():
sess = torch.tensor(datapoint['session_ids'], device=device).unsqueeze(0)
X1 = torch.tensor(datapoint['emg'], dtype=torch.float32, device=device).unsqueeze(0)
X_raw = torch.tensor(datapoint['raw_emg'], dtype=torch.float32, device=device).unsqueeze(0)
pred, _ = model(X1, X_raw, sess)
pred = pred.squeeze(0)
y = pred.cpu().detach().numpy()
wavenet_model = WavenetModel(y.shape[1]).to(device)
assert FLAGS.pretrained_wavenet_model is not None
wavenet_model.load_state_dict(torch.load(FLAGS.pretrained_wavenet_model))
# wavenet_model.load_state_dict(torch.load(FLAGS.pretrained_wavenet_model, map_location=torch.device('cpu')))
#save_wavenet_output(wavenet_model, y, filename, device)
#wavenet_model
input_data = y
#filename
#device
wavenet_model.eval()
assert len(input_data.shape) == 2
X = torch.tensor(input_data, dtype=torch.float32).to(device).unsqueeze(0)
wavenet = wavenet_model.wavenet
inference_wavenet = NVWaveNet(**wavenet.export_weights())
# inference_wavenet = nv_wavenet.NVWaveNet(**wavenet.export_weights())
cond_input = wavenet_model.pre_wavenet_processing(X)
chunk_len = 400
overlap = 1
audio_chunks = []
for i in range(0, cond_input.size(2), chunk_len-overlap):
if cond_input.size(2)-i < overlap:
break # don't make segment at end that doesn't go past overlapped part
cond_chunk = cond_input[:,:,i:i+chunk_len]
wavenet_cond_input = wavenet.get_cond_input(cond_chunk)
audio_data = inference_wavenet.infer(wavenet_cond_input, nv_wavenet.Impl.SINGLE_BLOCK)
audio_chunk = librosa.core.mu_expand(audio_data.squeeze(0).cpu().numpy()-128, 255, True)
audio_chunks.append(audio_chunk)
audio_out = splice_audio(audio_chunks, overlap*160)
sf.write(filename, audio_out, 16000)
if True:
evaluate(testset, FLAGS.output_directory)
if True:
buffer = BytesIO()
if generate&gen_mp3:
buffer_wav = BytesIO()
sf.write(buffer_wav, audio_out, 16000, format='wav')
AudioSegment.from_wav(buffer_wav).export(buffer, format="mp3")
if generate&gen_wav:
sf.write(buffer, audio_out, 16000, format='wav')
buffer.seek(0)
mysound = buffer.getvalue()
msg['buffers']=[]
#msg['buffers']=[memoryview(mysound)]
if generate&gen_mp3:
encoded= "data:audio/mp3;base64,"+base64.b64encode(mysound).decode()
if generate&gen_wav:
encoded= "data:audio/wav;base64,"+base64.b64encode(mysound).decode()
#print('audio encoded')
#wavenet_model.train()
#model.train()
# evaluate(testset, FLAGS.output_directory)
if False:
# with Recorder(debug=False, display=True, wifi=False, num_channels=1) as r:
# with Recorder(debug=True, display=False, wifi=False, num_channels=1) as r:
# while True:
msg['buffers']=r.update()
if len(msg['buffers'])>0:
encoded= "binary:data:image/png"
#print('image encoded')
else:
encoded=''
if True:
comm.send({
'response': encoded,
# 'response': 'close',
}, None, msg['buffers']);
if False:
eeg_channels = BoardShim.get_eeg_channels(BoardIds.FREEEEG32_BOARD.value)
eeg_data = data[eeg_channels, :]
eeg_data = eeg_data / 1000000 # BrainFlow returns uV, convert to V for MNE
# Creating MNE objects from brainflow data arrays
ch_types = ['eeg'] * len(eeg_channels)
ch_names = [str(x) for x in range(len(eeg_channels))]
#ch_names = BoardShim.get_eeg_names(BoardIds.FREEEEG32_BOARD.value)
sfreq = BoardShim.get_sampling_rate(BoardIds.FREEEEG32_BOARD.value)
info = mne.create_info(ch_names=ch_names, sfreq=sfreq, ch_types=ch_types)
raw = mne.io.RawArray(eeg_data, info)
# its time to plot something!
raw.plot_psd(average=False)
if True:
comm.send({
'response': 'close',
}, None, msg['buffers']);
get_ipython().kernel.comm_manager.register_target('comm_target1', target_func1)
Javascript('''
//<NAME>, GPL (copyleft)
//import 'regenerator-runtime/runtime' //For async functions on node\\
class eeg32 { //Contains structs and necessary functions/API calls to analyze serial data for the FreeEEG32
constructor(
onDecodedCallback = this.onDecodedCallback,
onConnectedCallback = this.onConnectedCallback,
onDisconnectedCallback = this.onDisconnectedCallback,
CustomDecoder = this.decode,
//baudrate = 1500000//115200
baudrate = 921600//115200
) {
this.onDecodedCallback = onDecodedCallback;
this.onConnectedCallback = onConnectedCallback;
this.onDisconnectedCallback = onDisconnectedCallback;
this.decode = CustomDecoder;
//Free EEG 32 data structure:
// [stop byte, start byte, counter byte, 32x3 channel data bytes (24 bit), 3x2 accelerometer data bytes, stop byte, start byte...] Gyroscope not enabled yet but would be printed after the accelerometer..
// Total = 105 bytes/line
this.connected = false;
this.subscribed = false;
this.buffer = [];
this.startByte = 160; // Start byte value
this.stopByte = 192; // Stop byte value
this.searchString = new Uint8Array([this.stopByte,this.startByte]); //Byte search string
this.readRate = 16.666667; //Throttle EEG read speed. (1.953ms/sample min @103 bytes/line)
this.nChannels=%(data_channels)d;
if(this.nChannels==128)
{
this.readBufferSize = 8000; //Serial read buffer size, increase for slower read speeds (~1030bytes every 20ms) to keep up with the stream (or it will crash)
}
else
{
//this.readBufferSize = 8000; //Serial read buffer size, increase for slower read speeds (~1030bytes every 20ms) to keep up with the stream (or it will crash)
//this.readBufferSize = 4000; //Serial read buffer size, increase for slower read speeds (~1030bytes every 20ms) to keep up with the stream (or it will crash)
//this.readBufferSize = 1000; //Serial read buffer size, increase for slower read speeds (~1030bytes every 20ms) to keep up with the stream (or it will crash)
this.readBufferSize = 2000; //Serial read buffer size, increase for slower read speeds (~1030bytes every 20ms) to keep up with the stream (or it will crash)
}
//this.sps = 512; // Sample rate
//this.sps = 250; // Sample rate
this.sps=%(sfreq)d;
//this.sps=%(sfreq)f;
//this.nChannels = 128;
//this.nChannels = 32;
this.generate_game=%(generate_game)d;
this.generate_game_mode1=%(generate_game_mode1)d;
this.generate_game_mode3=%(generate_game_mode3)d;
this.nPeripheralChannels = 6; // accelerometer and gyroscope (2 bytes * 3 coordinates each)
this.updateMs = 1000/this.sps; //even spacing
this.stepSize = 1/Math.pow(2,24);
//this.vref = 2.50; //2.5V voltage ref +/- 250nV
//this.gain = 8;
//this.vref = 1.25; //2.5V voltage ref +/- 250nV
//this.gain = 32;
this.vref =%(vref)f; //2.5V voltage ref +/- 250nV
this.gain = %(gain)d;
this.vscale = (this.vref/this.gain)*this.stepSize; //volts per step.
this.uVperStep = 1000000 * ((this.vref/this.gain)*this.stepSize); //uV per step.
this.scalar = 1/(1000000 / ((this.vref/this.gain)*this.stepSize)); //steps per uV.
this.maxBufferedSamples = this.sps*60*2; //max samples in buffer this.sps*60*nMinutes = max minutes of data
this.data = { //Data object to keep our head from exploding. Get current data with e.g. this.data.A0[this.data.count-1]
count: 0,
startms: 0,
ms: [],
'A0': [],'A1': [],'A2': [],'A3': [],'A4': [],'A5': [],'A6': [],'A7': [], //ADC 0
'A8': [],'A9': [],'A10': [],'A11': [],'A12': [],'A13': [],'A14': [],'A15': [], //ADC 1
'A16': [],'A17': [],'A18': [],'A19': [],'A20': [],'A21': [],'A22': [],'A23': [], //ADC 2
'A24': [],'A25': [],'A26': [],'A27': [],'A28': [],'A29': [],'A30': [],'A31': [], //ADC 3
'Ax': [], 'Ay': [], 'Az': [], 'Gx': [], 'Gy': [], 'Gz': [] //Peripheral data (accelerometer, gyroscope)
};
this.bufferednewLines = 0;
this.data_slice=[];
this.data_slice_size=this.sps*(5*1/8+0.1);
this.ready_to_send_data = false;
this.data_send_count=0;
this.generate_parallel=%(generate_parallel)d;
this.xsize=%(xsize)d;
this.ysize=%(ysize)d;
this.generate_stylegan2=%(generate_stylegan2)d;
this.generate_wavegan=%(generate_wavegan)d;
this.generate_heatmap=%(generate_heatmap)d;
//data:audio/wav;base64,
//data:image/jpeg;base64,
this.time100=Date.now();
this.time000=Date.now();
this.this_frame_wg=-1;
this.last_frame_wg=-1;
this.send_wg=false;
this.this_frame_sg2=-1;
this.last_frame_sg2=-1;
this.send_sg2=false;
this.frame_last=0;
if(this.generate_stylegan2)
{
this.fps_sg2=1;
}
if(this.generate_wavegan)
{
this.hz=44100;
this.fps_wg=this.hz/(32768*2);
//this.fps=this.hz/(32768);
}
this.fps_wg=%(fps_wg)f;
this.fps_sg2=%(fps_sg2)f;
this.fps_hm=%(fps_hm)f;
this.fps=Math.max(this.fps_wg,this.fps_sg2,this.fps_hm)*4;
//this.fps=Math.max(this.fps_wg,this.fps_sg2,this.fps_hm)*5;
//this.fps=10;
this.samples_count=0;
//this.channel=None;
this.resetDataBuffers();
//navigator.serial utils
if(!navigator.serial){
console.error("navigator.serial not found! Enable #enable-experimental-web-platform-features in chrome://flags (search 'experimental')");
}
this.port = null;
this.reader = null;
this.baudrate = baudrate;
}
resetDataBuffers(){
this.data.count = 0;
this.data.startms = 0;
for(const prop in this.data) {
if(typeof this.data[prop] === "object"){
this.data[prop] = new Array(this.maxBufferedSamples).fill(0);
}
}
}
setScalar(gain=24,stepSize=1/(Math.pow(2,23)-1),vref=4.50) {
this.stepSize = stepSize;
this.vref = vref; //ADC voltage reference in volts
this.gain = gain;
this.vscale = (this.vref/this.gain)*this.stepSize; //volts per step.
this.uVperStep = 1000000 * ((this.vref/this.gain)*this.stepSize); //uV per step.
this.scalar = 1/(1000000 / ((this.vref/this.gain)*this.stepSize)); //steps per uV.
}
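//Worked example (values assumed for illustration, not taken from this file):
//with vref=2.5, gain=8 and stepSize=1/2^24, one step is (2.5/8)/2^24 V
//~= 1.86e-8 V, i.e. uVperStep ~= 0.0186, so a raw count of 1000 ~= 18.6 uV.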
getLatestData(channel="A0",count=1) { //Return slice of specified size of the latest data from the specified channel
let ct = count;
if(ct <= 1) {
return [this.data[channel][this.data.count-1]];
}
else {
if(ct > this.data.count) {
ct = this.data.count;
}
return this.data[channel].slice(this.data.count-ct,this.data.count);
}
}
bytesToInt16(x0,x1){ //Turns a 2 byte big-endian sequence into a 16 bit int
return x0 * 256 + x1;
}
int16ToBytes(y){ //Turns a 16 bit int into a 2 byte sequence
return [y & 0xFF , (y >> 8) & 0xFF];
}
bytesToInt24(x0,x1,x2){ //Turns a 3 byte sequence into a 24 bit int
return x0 * 65536 + x1 * 256 + x2;
}
int24ToBytes(y){ //Turns a 24 bit int into a 3 byte sequence
return [y & 0xFF , (y >> 8) & 0xFF , (y >> 16) & 0xFF];
}
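//NOTE: bytesToInt24 returns an unsigned value (0..16777215). If the front-end
//ADC emits two's-complement samples (an assumption; this file does not confirm
//the sample format), a signed variant would look like this sketch:
bytesToSignedInt24(x0,x1,x2){ //Turns a 3 byte big-endian sequence into a signed 24 bit int (two's complement)
var u = x0 * 65536 + x1 * 256 + x2;
return u > 0x7FFFFF ? u - 0x1000000 : u; //map 0x800000..0xFFFFFF to the negative range
}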
decode(buffer = this.buffer) { //Returns the number of decoded lines; returns undefined if fewer than two frame markers are found
var needle = this.searchString
var haystack = buffer;
var search = this.boyerMoore(needle);
var skip = search.byteLength;
var indices = [];
let newLines = 0;
for (var i = search(haystack); i !== -1; i = search(haystack, i + skip)) {
indices.push(i);
}
//console.log(indices);
if(indices.length >= 2){
for(let k = 1; k < indices.length; k++) {
if(indices[k] - indices[k-1] !== 105) {
//Not a valid 105-byte frame by size; skip this candidate and keep scanning
}
else {
var line = buffer.slice(indices[k-1],indices[k]+1); //Splice out this line to be decoded
// line[0] = stop byte, line[1] = start byte, line[2] = counter, line[3:99] = ADC data 32x3 bytes, line[99:105] = accelerometer data 3x2 bytes
//line found, decode.
if(this.data.count < this.maxBufferedSamples){
this.data.count++;
}
if(this.data.count-1 === 0) {this.data.ms[this.data.count-1]= Date.now(); this.data.startms = this.data.ms[0];}
else {
this.data.ms[this.data.count-1]=this.data.ms[this.data.count-2]+this.updateMs;
if(this.data.count >= this.maxBufferedSamples) {
this.data.ms.splice(0,5120);
this.data.ms.push(...new Array(5120).fill(0)); //append 5120 zeroed timestamps so the buffer length stays constant
}
}//Assume no dropped samples
var sample_count = line[2];
var sample_count_diff = sample_count-this.samples_count;
if(sample_count_diff<0){
sample_count_diff+=256;
}
if(sample_count_diff!=1)
{
console.error("dropped samples:"+sample_count_diff.toString());
}
this.samples_count=sample_count;
for(var i = 3; i < 99; i+=3) {
var channel = "A"+(i-3)/3;
this.data[channel][this.data.count-1]=this.bytesToInt24(line[i],line[i+1],line[i+2]);
if(this.data.count >= this.maxBufferedSamples) {
this.data[channel].splice(0,5120);
this.data[channel].push(...new Array(5120).fill(0));//shave off the oldest 5120 samples if buffer full (don't use shift())
}
//console.log(this.data[channel][this.data.count-1],indices[k], channel)
}
this.data["Ax"][this.data.count-1]=this.bytesToInt16(line[99],line[100]);
this.data["Ay"][this.data.count-1]=this.bytesToInt16(line[101],line[102]);
this.data["Az"][this.data.count-1]=this.bytesToInt16(line[103],line[104]);
if(this.data.count >= this.maxBufferedSamples) {
this.data["Ax"].splice(0,5120);
this.data["Ay"].splice(0,5120);
this.data["Az"].splice(0,5120);
this.data["Ax"].push(...new Array(5120).fill(0));
this.data["Ay"].push(...new Array(5120).fill(0));
this.data["Az"].push(...new Array(5120).fill(0));
this.data.count -= 5120;
}
//console.log(this.data)
newLines++;
//console.log(indices[k-1],indices[k])
//console.log(buffer[indices[k-1],buffer[indices[k]]])
//indices.shift();
}
}
if(newLines > 0) buffer.splice(0,indices[indices.length-1]);
return newLines;
//Continue
}
//else {this.buffer = []; return false;}
}
//Callbacks
onDecodedCallback(newLinesInt){
//console.log("new samples:", newLinesInt);
this.bufferednewLines=this.bufferednewLines+newLinesInt;
}
onConnectedCallback() {
console.log("port connected!");
}
onDisconnectedCallback() {
console.log("port disconnected!");
}
onReceive(value){
this.buffer.push(...value);
let newLines=this.buffer.length; //NOTE: this counts buffered bytes, not decoded lines, since decode() is bypassed in this path
this.onDecodedCallback(newLines);
if(this.ready_to_send_data)
{
//this.sendserial(this.buffer);
//console.log(this.buffer.length)
//this.buffer=[];
}
//console.log(value.length)
//let newLines = this.decode(this.buffer);
//console.log(this.data)
//console.log("decoding... ", this.buffer.length)
//if(newLines !== false && newLines !== 0 && !isNaN(newLines) ) this.onDecodedCallback(newLines);
}
async onPortSelected(port,baud=this.baudrate) {
try{
try {
await port.open({ baudRate: baud, bufferSize: this.readBufferSize });
this.onConnectedCallback();
this.connected = true;
this.subscribed = true;
this.subscribe(port);//this.subscribeSafe(port);
} //API inconsistency in syntax between linux and windows
catch {
await port.open({ baudrate: baud, buffersize: this.readBufferSize });
this.onConnectedCallback();
this.connected = true;
this.subscribed = true;
this.subscribe(port);//this.subscribeSafe(port);
}
}
catch(err){
console.log(err);
this.connected = false;
}
}
async subscribe(port){
if (this.port.readable && this.subscribed === true) {
this.reader = port.readable.getReader();
const streamData = async () => {
try {
const { value, done } = await this.reader.read();
if (done || this.subscribed === false) {
// Allow the serial port to be closed later.
await this.reader.releaseLock();
}
if (value) {
//console.log(value.length);
try{
this.onReceive(value);
}
catch (err) {console.log(err)}
//console.log("new Read");
//console.log(this.decoder.decode(value));
}
if(this.subscribed === true) {
setTimeout(()=>{streamData();}, this.readRate);//Throttled read 1/512sps = 1.953ms/sample @ 103 bytes / line or 1030bytes every 20ms
}
} catch (error) {
console.log(error);// TODO: Handle non-fatal read error.
if(error.message.includes('framing') || error.message.includes('overflow') || error.message.includes('Overflow') || error.message.includes('break')) {
this.subscribed = false;
setTimeout(async ()=>{
try{
if (this.reader) {
await this.reader.releaseLock();
this.reader = null;
}
} catch (er){ console.error(er);}
this.subscribed = true;
this.subscribe(port);
//if that fails then close port and reopen it
},30); //try to resubscribe
} else if (error.message.includes('parity') || error.message.includes('Parity') || error.message.includes('overrun') ) {
if(this.port){
this.subscribed = false;
setTimeout(async () => {
try{
if (this.reader) {
await this.reader.releaseLock();
this.reader = null;
}
await port.close();
} catch (er){ console.error(er);}
//this.port = null;
this.connected = false;
setTimeout(()=>{this.onPortSelected(this.port)},100); //close the port and reopen
}, 50);
}
}
else {
this.closePort();
}
}
}
streamData();
}
}
//Unfinished
async subscribeSafe(port) { //Using promises instead of async/await to cure hangs when the serial update does not meet tick requirements
var readable = new Promise((resolve,reject) => {
while(this.port.readable && this.subscribed === true){
this.reader = port.readable.getReader();
var looper = true;
var prom1 = new Promise((resolve,reject) => {
return this.reader.read();
});
var prom2 = new Promise((resolve,reject) => {
setTimeout(resolve,100,"readfail");
});
while(looper === true ) {
//console.log("reading...");
Promise.race([prom1,prom2]).then((result) => {
console.log("newpromise")
if(result === "readfail"){
console.log(result);
}
else{
const {value, done} = result;
if(done === true || this.subscribed === false) { var donezo = new Promise((resolve,reject) => {
resolve(this.reader.releaseLock())}).then(() => {
looper = false;
return;
});
}
else{
this.onReceive(value);
}
}
});
}
}
resolve("not readable");
});
}
async closePort(port=this.port) {
//if(this.reader) {this.reader.releaseLock();}
if(this.port){
this.subscribed = false;
setTimeout(async () => {
if (this.reader) {
await this.reader.releaseLock();
this.reader = null;
}
await port.close();
this.port = null;
this.connected = false;
this.onDisconnectedCallback();
}, 100);
}
}
async setupSerialAsync(baudrate=this.baudrate) { //You can specify baudrate just in case
const filters = [
{ usbVendorId: 0x10c4, usbProductId: 0x0043 } //CP2102 filter (e.g. for UART via ESP32); note: filters is not passed to requestPort() below, so no filtering is applied
];
this.port = await navigator.serial.requestPort();
navigator.serial.addEventListener("disconnect",(e) => {
this.closePort(this.port);
});
this.onPortSelected(this.port,baudrate);
//navigator.serial.addEventListener("onReceive", (e) => {console.log(e)});//this.onReceive(e));
}
//Boyer Moore fast byte search method copied from https://codereview.stackexchange.com/questions/20136/uint8array-indexof-method-that-allows-to-search-for-byte-sequences
asUint8Array(input) {
if (input instanceof Uint8Array) {
return input;
} else if (typeof(input) === 'string') {
// This naive transform only supports ASCII patterns. UTF-8 support
// not necessary for the intended use case here.
var arr = new Uint8Array(input.length);
for (var i = 0; i < input.length; i++) {
var c = input.charCodeAt(i);
if (c > 127) {
throw new TypeError("Only ASCII patterns are supported");
}
arr[i] = c;
}
return arr;
} else {
// Assume that it's already something that can be coerced.
return new Uint8Array(input);
}
}
boyerMoore(patternBuffer) {
// Implementation of Boyer-Moore substring search ported from page 772 of
// Algorithms Fourth Edition (Sedgewick, Wayne)
// http://algs4.cs.princeton.edu/53substring/BoyerMoore.java.html
// USAGE:
// needle should be ASCII string, ArrayBuffer, or Uint8Array
// haystack should be an ArrayBuffer or Uint8Array
// var search = boyerMoore(needle);
// var skip = search.byteLength;
// var indices = [];
// for (var i = search(haystack); i !== -1; i = search(haystack, i + skip)) {
// indices.push(i);
// }
var pattern = this.asUint8Array(patternBuffer);
var M = pattern.length;
if (M === 0) {
throw new TypeError("patternBuffer must be at least 1 byte long");
}
// radix
var R = 256;
var rightmost_positions = new Int32Array(R);
// position of the rightmost occurrence of the byte c in the pattern
for (var c = 0; c < R; c++) {
// -1 for bytes not in pattern
rightmost_positions[c] = -1;
}
for (var j = 0; j < M; j++) {
// rightmost position for bytes in pattern
rightmost_positions[pattern[j]] = j;
}
var boyerMooreSearch = (txtBuffer, start, end) => {
// Return offset of first match, -1 if no match.
var txt = this.asUint8Array(txtBuffer);
if (start === undefined) start = 0;
if (end === undefined) end = txt.length;
var pat = pattern;
var right = rightmost_positions;
var lastIndex = end - pat.length;
var lastPatIndex = pat.length - 1;
var skip;
for (var i = start; i <= lastIndex; i += skip) {
skip = 0;
for (var j = lastPatIndex; j >= 0; j--) {
var c = txt[i + j];
if (pat[j] !== c) {
skip = Math.max(1, j - right[c]);
break;
}
}
if (skip === 0) {
return i;
}
}
return -1;
};
boyerMooreSearch.byteLength = pattern.byteLength;
return boyerMooreSearch;
}
//---------------------end copy/pasted solution------------------------
async sendserial() {
//console.log('sending sendserial');
if(this.ready_to_send_data&&this.bufferednewLines)//&&this.data_slice[0].length)
{
if(!this.generate_parallel)
{
this.ready_to_send_data=false;
}
this.data_send_count++;
var array_to_send_as_json='';
var value = new Uint8Array(this.buffer);
this.bufferednewLines=0;
this.buffer=[];
//console.log('sending buffer');
//document.body.appendChild(document.createTextNode('sending buffer'));
const channel = await google.colab.kernel.comms.open('comm_target1', array_to_send_as_json, [value.buffer]);
let success = false;
for await (const message of channel.messages) {
//this.ready_to_send_data=true;
if (message.data.response == 'close') {
//if (message.data.response == 'got comm open!') {
//const responseBuffer = new Uint8Array(message.buffers[0]);
//for (let i = 0; i < buffer.length; ++i) {
// if (responseBuffer[i] != buffer[i]) {
// console.error('comm buffer different at ' + i);
// document.body.appendChild(document.createTextNode('comm buffer different at2 ' + i));
// return;
// }
//}
// Close the channel once the expected message is received. This should
// cause the messages iterator to complete and for the for-await loop to
// end.
//console.error('comm buffer same ' + responseBuffer);
//document.body.appendChild(document.createTextNode('comm buffer same2 ' + responseBuffer));
channel.close();
}
//console.log('audio&image received');
//var message_parsed=JSON.parse(message.data.response);
//console.log('audio&image decoded');
//for(let i = 0; i < message_parsed.length; ++i)
{
if (this.generate_wavegan)
{
//if((typeof message_parsed[i]) === 'string')
{
//console.log("audio to set")
if(message.data.response.startsWith('data:audio'))
//if(message_parsed[i].startsWith('data:audio'))
{
//document.body.appendChild(document.createTextNode('audio decoded'));
//await
//playAudio1(message_parsed[i]);
//await
//playAudio1(message.buffers[0]);
playAudio1(message.data.response);
device.time000=Date.now();
var frame_time=parseInt((1000/device.fps_wg));
var next_time=frame_time-((device.time000-device.time100)%%frame_time);
setTimeout(playAudio2,next_time);
//console.log("audio set")
}
if(0)
if(message.data.response.startsWith('data:video'))
//if(message_parsed[i].startsWith('data:audio'))
{
//document.body.appendChild(document.createTextNode('audio decoded'));
//await
//playAudio1(message_parsed[i]);
//await
playvideo1(message.data.response);
device.time000=Date.now();
var frame_time=parseInt((1000/device.fps_wg));
var next_time=frame_time-((device.time000-device.time100)%%frame_time);
setTimeout(playVideo2,next_time);
}
}
}
if (this.generate_stylegan2)
{
//if((typeof message_parsed[i]) === 'string')
//console.log(message.data.response);
{
//console.log(message.buffers);
//if(message.buffers[0].length>0)
if(message.data.response.startsWith('binary:data:image'))
{
//document.body.appendChild(document.createTextNode('image decoded'));
var image_type='';
if(message.data.response.startsWith('binary:data:image/png'))
{
image_type='image/png';
}
if(message.data.response.startsWith('binary:data:image/jpeg'))
{
image_type='image/jpeg';
}
if(message.data.response.includes('user:killed'))
{
console.log('user:killed');
avic01.clear();
avic02.clear();
}
if(message.data.response.includes('enemy:killed'))
{
console.log('enemy:killed');
avic02.clear();
}
if(message.data.response.includes('user:add'))
{
console.log('user:add');
avic01.displayPhoto1(message.buffers[0],this.xsize,this.ysize,image_type);
//displayPhoto1001(message.buffers[0],device.xsize,device.ysize,image_type);
this.time000=Date.now();
var frame_time=parseInt((1000/this.fps_sg2));
var next_time=frame_time-((this.time000-this.time100)%%frame_time);
setTimeout(function(){avic01.displayPhoto4();},next_time);
}
if(message.data.response.includes('user:attack'))
if(0)
{
console.log('user:attack');
displayPhoto10011(message.buffers[0],this.xsize,this.ysize,image_type);
//displayPhoto1001(message.buffers[0],device.xsize,device.ysize,image_type);
this.time000=Date.now();
var frame_time=parseInt((1000/this.fps_sg2));
var next_time=frame_time-((this.time000-this.time100)%%frame_time);
setTimeout(displayPhoto40011,next_time);
}
if(message.data.response.includes('enemy:add'))
{
console.log('enemy:add');
avic02.displayPhoto1(message.buffers[1],this.xsize,this.ysize,image_type);
//displayPhoto1001(message.buffers[0],device.xsize,device.ysize,image_type);
this.time000=Date.now();
var frame_time=parseInt((1000/this.fps_sg2));
var next_time=frame_time-((this.time000-this.time100)%%frame_time);
setTimeout(function(){avic02.displayPhoto4();},next_time);
}
if(message.data.response.includes('enemy:attack'))
if(0)
{
console.log('enemy:attack');
displayPhoto10021(message.buffers[1],this.xsize,this.ysize,image_type);
//displayPhoto1001(message.buffers[0],device.xsize,device.ysize,image_type);
this.time000=Date.now();
var frame_time=parseInt((1000/this.fps_sg2));
var next_time=frame_time-((this.time000-this.time100)%%frame_time);
setTimeout(displayPhoto40021,next_time);
}
if(message.data.response.includes('user:restored'))
if(0)
{
console.log('user:restored');
}
var image_buffer_shift=0;
if(message.data.response.includes('mode:3'))
{
var image_buffer_shift=3;
avic1.displayPhoto1(message.buffers[2],this.xsize,this.ysize,image_type);
//displayPhoto10(message.buffers[0],device.xsize,device.ysize,image_type);
this.time000=Date.now();
var frame_time=parseInt((1000/this.fps_sg2));
var next_time=frame_time-((this.time000-this.time100)%%frame_time);
setTimeout(function(){avic1.displayPhoto4();},next_time);
avic2.displayPhoto1(message.buffers[3],this.xsize,this.ysize,image_type);
//displayPhoto10(message.buffers[0],device.xsize,device.ysize,image_type);
this.time000=Date.now();
var frame_time=parseInt((1000/this.fps_sg2));
var next_time=frame_time-((this.time000-this.time100)%%frame_time);
setTimeout(function(){avic2.displayPhoto4();},next_time);
avic3.displayPhoto1(message.buffers[4],this.xsize,this.ysize,image_type);
//displayPhoto10(message.buffers[0],device.xsize,device.ysize,image_type);
this.time000=Date.now();
var frame_time=parseInt((1000/this.fps_sg2));
var next_time=frame_time-((this.time000-this.time100)%%frame_time);
setTimeout(function(){avic3.displayPhoto4();},next_time);
}
if(message.data.response.includes('user:cards_life'))
{
console.log('user:cards_life');
avic002.displayPhoto1(message.buffers[2+image_buffer_shift],this.xsize*2,this.ysize,'image/png');
//displayPhoto10(message.buffers[0],device.xsize,device.ysize,image_type);
this.time000=Date.now();
var frame_time=parseInt((1000/this.fps_sg2));
var next_time=frame_time-((this.time000-this.time100)%%frame_time);
setTimeout(function(){avic002.displayPhoto4();},next_time);
}
if(message.data.response.includes('enemy:cards_life'))
{
console.log('enemy:cards_life');
avic001.displayPhoto1(message.buffers[3+image_buffer_shift],this.xsize*2,this.ysize,'image/png');
//displayPhoto10(message.buffers[0],device.xsize,device.ysize,image_type);
this.time000=Date.now();
var frame_time=parseInt((1000/this.fps_sg2));
var next_time=frame_time-((this.time000-this.time100)%%frame_time);
setTimeout(function(){avic001.displayPhoto4();},next_time);
}
if(message.data.response.includes('user_attack_enemy:cards_life'))
{
console.log('user_attack_enemy:cards_life');
avic.displayPhoto1(message.buffers[4+image_buffer_shift],this.xsize*7,this.ysize*3,'image/png');
//avic.displayPhoto1(message.buffers[4],128,128,'image/png');
//displayPhoto10(message.buffers[0],device.xsize,device.ysize,image_type);
this.time000=Date.now();
var frame_time=parseInt((1000/this.fps_sg2));
var next_time=frame_time-((this.time000-this.time100)%%frame_time);
setTimeout(function(){avic.displayPhoto4();},next_time);
}
if(this.generate_game)
{
if(this.generate_game_mode1)
{
avic00.displayPhoto1(message.buffers[0],this.xsize,this.ysize,image_type);
}
}
else
{
console.log('avic00');
avic00.displayPhoto1(message.buffers[0],this.xsize,this.ysize,image_type);
//avic00.displayPhoto1(message.buffers[0],512,512,image_type);
}
//displayPhoto10(message.buffers[0],device.xsize,device.ysize,image_type);
this.time000=Date.now();
var frame_time=parseInt((1000/this.fps_sg2));
var next_time=frame_time-((this.time000-this.time100)%%frame_time);
setTimeout(function(){avic00.displayPhoto4();},next_time);
//displayPhoto10(message.buffers[0],128,128,image_type);
////displayPhoto10(message.buffers[0],device.xsize,device.ysize,image_type);
//device.time000=Date.now();
//var frame_time=parseInt((1000/device.fps_sg2));
//var next_time=frame_time-((device.time000-device.time100)%%frame_time);
//setTimeout(displayPhoto40,next_time);
var frame_now=(this.time000-this.time100)/frame_time;
if(Math.round(frame_now-this.frame_last)!=1)
{
console.log('f2:'+next_time+','+frame_time+','+
frame_now+','+this.frame_last+','+
(frame_now-this.frame_last));
}
this.frame_last=frame_now;
//console.log('show');
}
if(0)
if(message.data.response.startsWith('binary:data:video'))
//if(message_parsed[i].startsWith('data:image'))
{
//document.body.appendChild(document.createTextNode('image decoded'));
//await
//displayPhoto1(message_parsed[i]);
//await
var video_type='';
if(message.data.response.startsWith('binary:data:video/webm'))
{
video_type='video/webm';
}
if(message.data.response.startsWith('binary:data:video/mp4'))
{
video_type='video/mp4';
}
displayVideo10(message.buffers[0],device.xsize,device.ysize,video_type);
device.time000=Date.now();
var frame_time=parseInt((1000/device.fps_sg2));
var next_time=frame_time-((device.time000-device.time100)%%frame_time);
setTimeout(displayVideo40,next_time);
var frame_now=(device.time000-device.time100)/frame_time;
if(Math.round(frame_now-this.frame_last)!=1)
{
console.log('f2:'+next_time+','+frame_time+','+
frame_now+','+this.frame_last+','+
(frame_now-this.frame_last));
}
this.frame_last=frame_now;
}
//console.log("image string");
if(0)
if(message.data.response.startsWith('data:image'))
//if(message_parsed[i].startsWith('data:image'))
{
//document.body.appendChild(document.createTextNode('image decoded'));
//await
//displayPhoto1(message_parsed[i]);
//await
displayPhoto1(message.data.response,device.xsize,device.ysize);
device.time000=Date.now();
var frame_time=parseInt((1000/device.fps_sg2));
var next_time=frame_time-((device.time000-device.time100)%%frame_time);
setTimeout(displayPhoto4,next_time);
var frame_now=(device.time000-device.time100)/frame_time;
if(Math.round(frame_now-this.frame_last)!=1)
{
console.log('f2:'+next_time+','+frame_time+','+
frame_now+','+this.frame_last+','+
(frame_now-this.frame_last));
}
this.frame_last=frame_now;
}
if(0)
if(message.data.response.startsWith('data:video'))
//if(message_parsed[i].startsWith('data:image'))
{
//document.body.appendChild(document.createTextNode('image decoded'));
//await
//displayPhoto1(message_parsed[i]);
//await
var video_type='';
if(message.data.response.startsWith('data:video/webm'))
{
video_type='video/webm';
}
if(message.data.response.startsWith('data:video/mp4'))
{
video_type='video/mp4';
}
displayVideo1(message.data.response,device.xsize,device.ysize,video_type);
device.time000=Date.now();
var frame_time=parseInt((1000/device.fps_sg2));
var next_time=frame_time-((device.time000-device.time100)%%frame_time);
setTimeout(displayVideo2,next_time);
var frame_now=(device.time000-device.time100)/frame_time;
if(Math.round(frame_now-this.frame_last)!=1)
{
console.log('f2:'+next_time+','+frame_time+','+
frame_now+','+this.frame_last+','+
(frame_now-this.frame_last));
}
this.frame_last=frame_now;
}
}
//else
{
//console.log("image not string");
//displayPhoto3(message_parsed[i]);
}
}
}
//console.log("close");
channel.close();
}
this.ready_to_send_data = true;
//console.log("ready_to_send_data");
//document.body.appendChild(document.createTextNode('done2.'));
}
}
async takeandsendSlice(data_slice_from,data_slice_to) {
//this.lastnewLines=this.bufferednewLines;
//if(this.ready_to_send_data&&this.bufferednewLines){//&&this.data_slice[0].length)
if(this.ready_to_send_data&&this.bufferednewLines)//&&this.data_slice[0].length)
{
//this.ready_to_send_data = false;
//Collect the requested sample range from all 32 EEG channels (A0..A31)
this.data_slice = [];
for(let ch = 0; ch < 32; ch++) {
this.data_slice.push(this.data["A"+ch].slice(data_slice_from,data_slice_to));
}
this.bufferednewLines=0;
//const buffer = new Uint8Array(10);
//for (let i = 0; i < buffer.byteLength; ++i) {
// buffer[i] = i
//}
var data_slice_uint32array=new Uint32Array(this.data_slice.length*this.data_slice[0].length);
for(let i=0;i<this.data_slice.length;i++)
{
for(let j=0;j<this.data_slice[i].length;j++)
{
data_slice_uint32array[i*this.data_slice[i].length + j]=this.data_slice[i][j];
}
}
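//Layout note: data_slice_uint32array is row-major [channel][sample]; sample j
//of channel i sits at flat index i*sliceLength + j, so a receiver can recover
//channel i with buf.subarray(i*sliceLength, (i+1)*sliceLength) ("sliceLength"
//is hypothetical shorthand here for this.data_slice[i].length).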
//var data_slice_uint8array = new Int8Array(this.data_slice_array.buffer);
//var array_to_send_as_json = JSON.stringify(this.data_slice);
var array_to_send_as_json = JSON.stringify([]);
//document.body.appendChild(document.createTextNode('sending ready'));
this.data_send_count++;
//if(this.channel==None)
//{
// this.channel = await google.colab.kernel.comms.open('comm_target1', array_to_send_as_json, []);
//this.channel = await google.colab.kernel.comms.open(this.data_send_count.toString(), array_to_send_as_json, []);
//} else
//{
//this.channel.send(this.data_send_count.toString())
//}
//document.body.appendChild(document.createTextNode(array_to_send_as_json));
//const channel = await google.colab.kernel.comms.open('comm_target1', array_to_send_as_json, []);
// const channel = await google.colab.kernel.comms.open('comm_target1', array_to_send_as_json, [this.data_slice.buffer]);
const channel = await google.colab.kernel.comms.open('comm_target1', array_to_send_as_json, [data_slice_uint32array.buffer]);
//const channel = await google.colab.kernel.comms.open('comm_target1', array_to_send_as_json, [buffer.buffer]);
//const channel = await google.colab.kernel.comms.open('comm_target1', array_to_send_as_json, [this.data_slice]);
//const channel = await google.colab.kernel.comms.open('comm_target1', 'the data', [buffer.buffer]);
let success = false;
for await (const message of channel.messages) {
if (message.data.response == 'close') {
// Close the channel once the expected message is received. This should
// cause the messages iterator to complete and for the for-await loop to end.
channel.close();
}
//console.log('audio&image received');
//var message_parsed=JSON.parse(message.data.response);
//console.log('audio&image decoded');
//for(let i = 0; i < message_parsed.length; ++i)
{
if (this.generate_wavegan)
{
//if((typeof message_parsed[i]) === 'string')
{
if(message.data.response.startsWith('data:audio'))
//if(message_parsed[i].startsWith('data:audio'))
{
//document.body.appendChild(document.createTextNode('audio decoded'));
//await
//playAudio1(message_parsed[i]);
//await
//playAudio1(message.buffers[0]);
playAudio1(message.data.response);
device.time000=Date.now();
var frame_time=parseInt((1000/device.fps_wg));
var next_time=frame_time-((device.time000-device.time100)%%frame_time);
setTimeout(playAudio2,next_time);
}
if(0)
if(message.data.response.startsWith('data:video'))
//if(message_parsed[i].startsWith('data:audio'))
{
//document.body.appendChild(document.createTextNode('audio decoded'));
//await
//playAudio1(message_parsed[i]);
//await
playVideo1(message.data.response);
device.time000=Date.now();
var frame_time=parseInt((1000/device.fps_wg));
var next_time=frame_time-((device.time000-device.time100)%%frame_time);
setTimeout(playVideo2,next_time);
}
}
}
if (this.generate_stylegan2)
{
//if((typeof message_parsed[i]) === 'string')
{
//console.log(message.buffers);
//if(message.buffers[0].length>0)
if(0)
{
displayPhoto10(message.buffers[0],device.xsize,device.ysize);
device.time000=Date.now();
var frame_time=parseInt((1000/device.fps_sg2));
var next_time=frame_time-((device.time000-device.time100)%%frame_time);
setTimeout(displayPhoto40,next_time);
var frame_now=(device.time000-device.time100)/frame_time;
if(Math.round(frame_now-this.frame_last)!=1)
{
console.log('f2:'+next_time+','+frame_time+','+
frame_now+','+this.frame_last+','+
(frame_now-this.frame_last));
}
this.frame_last=frame_now;
}
//console.log("image string");
if(message.data.response.startsWith('data:image'))
//if(message_parsed[i].startsWith('data:image'))
{
//document.body.appendChild(document.createTextNode('image decoded'));
//await
//displayPhoto1(message_parsed[i]);
//await
displayPhoto1(message.data.response,device.xsize,device.ysize);
device.time000=Date.now();
var frame_time=parseInt((1000/device.fps_sg2));
var next_time=frame_time-((device.time000-device.time100)%%frame_time);
setTimeout(displayPhoto4,next_time);
var frame_now=(device.time000-device.time100)/frame_time;
if(Math.round(frame_now-this.frame_last)!=1)
{
console.log('f2:'+next_time+','+frame_time+','+
frame_now+','+this.frame_last+','+
(frame_now-this.frame_last));
}
this.frame_last=frame_now;
}
if(message.data.response.startsWith('data:video'))
//if(message_parsed[i].startsWith('data:image'))
{
//document.body.appendChild(document.createTextNode('image decoded'));
//await
//displayPhoto1(message_parsed[i]);
//await
var video_type='';
if(message.data.response.startsWith('data:video/webm'))
{
video_type='video/webm';
}
if(message.data.response.startsWith('data:video/mp4'))
{
video_type='video/mp4';
}
displayVideo1(message.data.response,device.xsize,device.ysize,video_type);
device.time000=Date.now();
var frame_time=parseInt((1000/device.fps_sg2));
var next_time=frame_time-((device.time000-device.time100)%%frame_time);
setTimeout(displayVideo2,next_time);
var frame_now=(device.time000-device.time100)/frame_time;
if(Math.round(frame_now-this.frame_last)!=1)
{
console.log('f2:'+next_time+','+frame_time+','+
frame_now+','+this.frame_last+','+
(frame_now-this.frame_last));
}
this.frame_last=frame_now;
}
}
//else
{
//console.log("image not string");
//displayPhoto3(message_parsed[i]);
}
}
}
}
this.ready_to_send_data = true;
//document.body.appendChild(document.createTextNode('done2.'));
}
}
}
device = new eeg32();
connect = async () => {
await this.device.setupSerialAsync();
}
disconnect = () => {
if (this.ui) this.ui.deleteNode()
this.device.closePort();
}
//const canvas = document.createElement('canvas');
//const audio = document.createElement('audio');
//const audio1 = document.createElement('audio');
//const audio2 = document.createElement('audio');
var audios = new Array();
var videos1 = new Array();
if(device.generate_wavegan)
{
const audios_length = 5;
for(let i = 0; i < audios_length; i++)
{
audios[i]=document.createElement('audio');
}
const videos1_length = 1;
for(let i = 0; i < videos1_length; i++)
{
videos1[i]=document.createElement('video');
}
}
class audio_video_image_canvas {
constructor(
canvases_length = 1,
images_length = 1,
videos_length = 1,
width = 128,
height = 128
) {
this.canvases = new Array();
this.ctxs = new Array();
this.canvases_length = canvases_length;//images_length;
for(let i = 0; i < this.canvases_length; i++)
{
//var ctx = canvas.getContext("2d");
this.canvases[i]=document.createElement('canvas');
this.ctx = this.canvases[i].getContext("2d");
this.ctxs[i]=this.canvases[i].getContext("2d");
this.canvases[i].width=width;
this.canvases[i].height=height;
this.ctxs[i].clearRect(0, 0, this.canvases[i].width, this.canvases[i].height);
}
this.images = new Array();
this.videos = new Array();
//if(device.generate_stylegan2)
{
this.images_length = images_length;
//const canvases_length = 1;//images_length;
for(let i = 0; i < this.images_length; i++)
{
//images[i]=document.createElement('image');
//canvases[i]=document.createElement('canvas');
//var ctx = canvases[i].getContext("2d");
this.images[i]=new Image();
//images[i].onload = function() {
// ctx.drawImage(images[i], 0, 0);
//};
}
this.videos_length = videos_length;
for(let i = 0; i < this.videos_length; i++)
{
//videos2[i]=new Video();
this.videos[i]=document.createElement('video');
const self = this; // capture the instance: inside a plain function, `this` is the video element, not the class
this.videos[i].addEventListener('play', function() {
var $this = this; //cache
(function loop() {
if (!$this.paused && !$this.ended) {
self.ctx.drawImage($this, 0, 0);
setTimeout(loop, 1000 / 10); // drawing at 10fps
}
})();
}, 0);
}
}
this.div = document.createElement('div');
//if(device.generate_stylegan2)
{
for(let i = 0; i <this.canvases.length; i++)
{
this.div.appendChild(this.canvases[i]);
}
for(let i = 0; i < this.videos.length; i++)
{
//div3.appendChild(videos2[i]);
//videos2[i].controls = true;
//videos2[i].autoplay = true;
}
}
this.image_now=0;
this.canvas_now=0;
}
async clear() {
for(let i = 0; i < this.canvases.length; i++)
{
this.ctxs[i].clearRect(0, 0, this.canvases[i].width, this.canvases[i].height);
}
this.canvas_now=0;
this.image_now=0;
}
async displayPhoto1(photodata,photoWidth=512,photoHeight=512,image_type="image/jpeg") {
//if(canvas.width != photoWidth) canvas.width = photoWidth;
//if(canvas.height != photoHeight) canvas.height = photoHeight;
await this.displayPhoto2(photodata,photoWidth,photoHeight,image_type);
}
async displayPhoto2(photodata,photoWidth,photoHeight,image_type="image/jpeg") {
//if(canvas.width != photoWidth) canvas.width = photoWidth;
//if(canvas.height != photoHeight) canvas.height = photoHeight;
//image.src = photodata;
{
if(this.canvases[this.canvas_now%%this.canvases.length].width != photoWidth)
{
this.canvases[this.canvas_now%%this.canvases.length].width = photoWidth;
}
if(this.canvases[this.canvas_now%%this.canvases.length].height != photoHeight)
{
this.canvases[this.canvas_now%%this.canvases.length].height = photoHeight;
}
//if(canvases[0].width != photoWidth)
//{
// canvases[0].width = photoWidth;
//}
//if(canvases[0].height != photoHeight)
//{
// canvases[0].height = photoHeight;
//}
//console.log(photodata);
var arrayBufferView = new Uint8Array( photodata );
var blob = new Blob( [ arrayBufferView ], { type: image_type } );
var urlCreator = window.URL || window.webkitURL;
var imageUrl = urlCreator.createObjectURL( blob );
//images[image_now%%images.length].src = photodata;
//if(images[image_now%%images.length].src)
//{
// URL.revokeObjectURL(images[image_now%%images.length].src);
//}
this.images[this.image_now%%this.images.length].src = imageUrl;
//audios[audio_now%%audios.length].play();
//console.log(imageUrl);
}
//image_now++;
}
async displayPhoto4(photodata){//},photoWidth,photoHeight) {
this.ctxs[this.canvas_now%%this.canvases.length].drawImage(this.images[this.image_now%%this.images.length], 0, 0, this.canvases[this.canvas_now%%this.canvases.length].width, this.canvases[this.canvas_now%%this.canvases.length].height);
//ctxs01[image01_now%%images01.length].drawImage(images01[image01_now%%images01.length], 0, 0);
if((this.image_now-1)%%this.images.length>=0)
{
if(this.images[(this.image_now-1)%%this.images.length].src)
{
URL.revokeObjectURL(this.images[(this.image_now-1)%%this.images.length].src);
}
}
this.image_now++;
this.canvas_now++;
}
}
const div = document.createElement('div');
const div01 = document.createElement('div');
const div02 = document.createElement('div');
const div03 = document.createElement('div');
const div2 = document.createElement('div');
const div3 = document.createElement('div');
const div4 = document.createElement('div');
const div5 = document.createElement('div');
const div6 = document.createElement('div');
const div7 = document.createElement('div');
const div8 = document.createElement('div');
const btnconnect = document.createElement('button');
const btndisconnect = document.createElement('button');
const capture = document.createElement('button');
if (device.generate_game)
{
if (device.generate_game_mode3)
{
avic1 = new audio_video_image_canvas(1,1,0);
avic2 = new audio_video_image_canvas(1,1,0);
avic3 = new audio_video_image_canvas(1,1,0);
div01.appendChild(avic1.div);
div02.appendChild(avic2.div);
div03.appendChild(avic3.div);
}
avic = new audio_video_image_canvas(1,1,0);
avic01 = new audio_video_image_canvas(7,7,0);
avic02 = new audio_video_image_canvas(7,7,0);
div3.appendChild(avic.div);
div4.appendChild(avic01.div);
div5.appendChild(avic02.div);
if (device.generate_game_mode1)
{
avic00 = new audio_video_image_canvas(1,1,0,device.xsize,device.ysize);
div6.appendChild(avic00.div);
}
avic001 = new audio_video_image_canvas(1,1,0,device.xsize*2,device.ysize);
avic002 = new audio_video_image_canvas(1,1,0,device.xsize*2,device.ysize);
div7.appendChild(avic001.div);
div8.appendChild(avic002.div);
}
else
{
avic00 = new audio_video_image_canvas(1,1,0,device.xsize,device.ysize);
// avic00 = new audio_video_image_canvas(1,1,0,512,512);
div6.appendChild(avic00.div);
}
async function takePhoto2(quality=1) {
btnconnect.remove();
capture.remove();
device.ready_to_send_data = true;
}
async function takePhoto(quality=1) {
btnconnect.textContent = 'connect';
div.appendChild(btnconnect);
btnconnect.onclick = this.connect;
btndisconnect.textContent = 'disconnect';
div.appendChild(btndisconnect);
btndisconnect.onclick = this.disconnect;
capture.textContent = 'Capture';
capture.onclick = takePhoto2;
div.appendChild(capture);
//div.appendChild(canvas);
//div.appendChild(audio);
//div.appendChild(audio1);
//div.appendChild(audio2);
if(device.generate_wavegan)
{
for(let i = 0; i < audios.length; i++)
{
div2.appendChild(audios[i]);
//audios[i].controls = true;
//audios[i].autoplay = true;
}
for(let i = 0; i < videos1.length; i++)
{
div2.appendChild(videos1[i]);
//videos1[i].controls = true;
//videos1[i].autoplay = true;
}
}
document.body.appendChild(div);
if(device.generate_wavegan)
{
//console.log("audio init")
document.body.appendChild(div2);
}
if(device.generate_game)
{
document.body.appendChild(div5);
document.body.appendChild(div2);
document.body.appendChild(div7);
div7.style.styleFloat = 'left';
div7.style.cssFloat = 'left';
}
if(device.generate_game_mode3)
{
document.body.appendChild(div01);
div01.style.styleFloat = 'left';
div01.style.cssFloat = 'left';
document.body.appendChild(div02);
div02.style.styleFloat = 'left';
div02.style.cssFloat = 'left';
document.body.appendChild(div03);
div03.style.styleFloat = 'left';
div03.style.cssFloat = 'left';
}
else
{
document.body.appendChild(div6);
div6.style.styleFloat = 'left';
div6.style.cssFloat = 'left';
}
if(device.generate_game)
{
document.body.appendChild(div8);
document.body.appendChild(div4);
document.body.appendChild(div3);
}
await new Promise((resolve) => capture.onclick = resolve);
btnconnect.remove();
capture.remove();
device.ready_to_send_data = true;
}
async function takePhoto1(quality=1) {
//var data_slice_send=this.device.data_slice;
//var data_slice_send=[this.device.data_slice[0],this.device.data_slice[1]];
//var data_slice_send=[this.device.data_slice[0]];
var data_slice_send=[[this.device.data_slice[0][0]]];
//console.log("data_slice_send[0].length:", data_slice_send[0].length);
//console.log("device.bufferednewLines:", device.bufferednewLines);
device.bufferednewLines=0;
return data_slice_send;
}
var audio_now=0;
async function playAudio1(audiodata){//},photoWidth=512,photoHeight=512) {
//const canvas = document.createElement('canvas');
//if(canvas.width != photoWidth) canvas.width = photoWidth;
//if(canvas.height != photoHeight) canvas.height = photoHeight;
//audio.controls = true;
//audio.autoplay = true;
//audio1.controls = true;
//audio1.autoplay = true;
//audio2.controls = true;
//audio2.autoplay = true;
//canvas.getContext('2d').drawImage(photodata, 0, 0);
//var canvas = document.getElementById("c");
///var ctx = canvas.getContext("2d");
///var image = new Image();
///image.onload = function() {
/// ctx.drawImage(image, 0, 0);
///};
//audio.src = audiodata;
//audio.play()
//if(audio_now%%2==0)
//{
// audio1.src = audiodata;
// audio1.play()
//}
//else
//{
// audio2.src = audiodata;
// audio2.play()
//}
//for(let i = 0; i < audios.length; ++i)
{
audios[audio_now%%audios.length].src = audiodata;
//if(audio_now==0)
{
//audios[audio_now%%audios.length].controls = false;
audios[audio_now%%audios.length].controls = true;
//audios[audio_now%%audios.length].autoplay = true;
}
//audios[audio_now%%audios.length].play();
}
//audio_now++;
//console.log("audio add")
}
async function playAudio2(audiodata){//},photoWidth=512,photoHeight=512) {
audios[audio_now%%audios.length].play();
audio_now++;
//console.log("audio play")
}
function addSourceToVideo(element, src, type) {
var source = document.createElement('source');
source.src = src;
source.type = type;
element.appendChild(source);
}
var video1_now=0;
async function playVideo1(videodata){//},photoWidth=512,photoHeight=512) {
//for(let i = 0; i < audios.length; ++i)
{
//addSourceToVideo(videos1[video1_now%%videos1.length], videodata, 'video/mp4');
videos1[video1_now%%videos1.length].src = videodata;
//if(audio_now==0)
{
videos1[video1_now%%videos1.length].controls = false;//true;
//videos1[video1_now%%videos1.length].load();
//audios[audio_now%%audios.length].autoplay = true;
}
//audios[audio_now%%audios.length].play();
}
//audio_now++;
}
async function playVideo2(videodata){//},photoWidth=512,photoHeight=512) {
videos1[video1_now%%videos1.length].play();
video1_now++;
}
var video2_now=0;
async function displayVideo1(videodata,photoWidth=512,photoHeight=512,video_type='video/mp4') {
if(videos2[video2_now%%videos2.length].width != photoWidth)
{
videos2[video2_now%%videos2.length].width = photoWidth;
}
if(videos2[video2_now%%videos2.length].height != photoHeight)
{
videos2[video2_now%%videos2.length].height = photoHeight;
}
//for(let i = 0; i < audios.length; ++i)
{
//addSourceToVideo(videos2[video2_now%%videos2.length], videodata, 'video/mp4');
videos2[video2_now%%videos2.length].src = videodata;
videos2[video2_now%%videos2.length].type = video_type;
//videos2[video2_now%%videos2.length].type = 'video/webm';
// videos2[video2_now%%videos2.length].type = 'video/mp4';
//if(audio_now==0)
{
//videos2[video2_now%%videos2.length].controls = true;
videos2[video2_now%%videos2.length].controls = false;
videos2[video2_now%%videos2.length].load();
//audios[audio_now%%audios.length].autoplay = true;
}
//audios[audio_now%%audios.length].play();
}
//audio_now++;
}
async function displayVideo2(videodata){//},photoWidth=512,photoHeight=512) {
videos2[video2_now%%videos2.length].play();
videos2[video2_now%%videos2.length].style.display = "block";
video2_now++;
videos2[video2_now%%videos2.length].style.display = "none";
}
async function displayVideo10(videodata,photoWidth=512,photoHeight=512,video_type='video/mp4') {
//if(videos2[video2_now%%videos2.length].width != photoWidth)
//{
// videos2[video2_now%%videos2.length].width = photoWidth;
//}
//if(videos2[video2_now%%videos2.length].height != photoHeight)
//{
// videos2[video2_now%%videos2.length].height = photoHeight;
//}
//console.log("canvases[video2_now%%videos2.length]:", canvases[video2_now%%videos2.length]);
//if(canvases[video2_now%%videos2.length].width != photoWidth)
//{
// canvases[video2_now%%videos2.length].width = photoWidth;
//}
//if(canvases[video2_now%%videos2.length].height != photoHeight)
//{
// canvases[video2_now%%videos2.length].height = photoHeight;
//}
if(canvases[0].width != photoWidth)
{
canvases[0].width = photoWidth;
}
if(canvases[0].height != photoHeight)
{
canvases[0].height = photoHeight;
}
//for(let i = 0; i < audios.length; ++i)
{
//addSourceToVideo(videos2[video2_now%%videos2.length], videodata, 'video/mp4');
var arrayBufferView = new Uint8Array( videodata );
var blob = new Blob( [ arrayBufferView ], { type: video_type } );
var urlCreator = window.URL || window.webkitURL;
var videoUrl = urlCreator.createObjectURL( blob );
//console.log("videoUrl:", videoUrl);
//images[image_now%%images.length].src = photodata;
videos2[video2_now%%videos2.length].src = videoUrl;
videos2[video2_now%%videos2.length].type = video_type;
//videos2[video2_now%%videos2.length].type = 'video/webm';
// videos2[video2_now%%videos2.length].type = 'video/mp4';
//if(audio_now==0)
{
videos2[video2_now%%videos2.length].controls = true;
//videos2[video2_now%%videos2.length].controls = false;
videos2[video2_now%%videos2.length].load();
//audios[audio_now%%audios.length].autoplay = true;
}
//audios[audio_now%%audios.length].play();
}
//audio_now++;
}
async function displayVideo20(videodata){//},photoWidth=512,photoHeight=512) {
videos2[video2_now%%videos2.length].play();
videos2[video2_now%%videos2.length].style.display = "block";
video2_now++;
videos2[video2_now%%videos2.length].style.display = "none";
}
async function displayVideo40(videodata){//},photoWidth,photoHeight) {
//ctx.drawImage(videos2[video2_now%%videos2.length], 0, 0);
videos2[video2_now%%videos2.length].play();
if((video2_now-1)%%videos2.length>=0)
{
videos2[(video2_now-1)%%videos2.length].pause();
if(videos2[(video2_now-1)%%videos2.length].src)
{
URL.revokeObjectURL(videos2[(video2_now-1)%%videos2.length].src);
}
}
video2_now++;
}
takePhoto();
data_count=0;
var frame_last=0;
async function check_to_send() {
//while(true)
{
//console.log("device.bufferednewLines:", device.bufferednewLines);
if(device.bufferednewLines)
{
//if(this.bufferednewLines>this.data_slice_size)
// {
// this.bufferednewLines=this.data_slice_size;
// }
/*device.time000=Date.now();
if(device.generate_wavegan)
{
device.this_frame_wg=parseInt((device.time000-device.time100)*device.fps_wg);
if(device.this_frame_wg>device.last_frame_wg)
{
device.last_frame_wg=device.this_frame_wg;
device.send_wg=true;
}
}
if(device.generate_stylegan2)
{
device.this_frame_sg2=parseInt((device.time000-device.time100)*device.fps_sg2);
if(device.this_frame_sg2>device.last_frame_sg2)
{
device.last_frame_sg2=device.this_frame_sg2;
device.send_sg2=true;
}
}
if(device.send_wg || device.send_sg2)
//if(this.bufferednewLines>512/(this.fps_sg2))
if(device.ready_to_send_data)*/
{
//this.bufferednewLines=512/this.fps;
//device.ready_to_send_data = true;
device.sendserial();
//device.takeandsendSlice(device.data.count-1-device.bufferednewLines,device.data.count-1);
device.send_wg=false;
device.send_sg2=false;
}
//this.takeSlice(this.data.count-1-this.bufferednewLines,this.data.count-1);
//this.takeandsendSlice(this.data.count-1-this.bufferednewLines,this.data.count-1);
//this.takeandsendSliceBroadcast(this.data.count-1-this.bufferednewLines,this.data.count-1);
}
}
device.time000=Date.now();
// var frame_time=parseInt((1000/device.fps_wg)/12);
var frame_time=parseInt((1000/device.fps));
var next_time=frame_time-((device.time000-device.time100)%%frame_time);
setTimeout(check_to_send,next_time);
var frame_now=(device.time000-device.time100)/frame_time;
if(Math.round(frame_now-frame_last)!=1)
{
console.log('f1:'+next_time+','+frame_time+','+frame_now+','+
frame_last+','+(frame_now-frame_last));
}
frame_last=frame_now;
}
device.time000=Date.now();
var frame_time=parseInt((1000/device.fps));
var next_time=frame_time-((device.time000-device.time100)%%frame_time);
setTimeout(check_to_send,next_time);
//console.log(next_time);
//var intervalID = setInterval(check_to_send,(1000/device.fps_wg)/10);
// window.requestAnimationFrame
''' % {'xsize':xsize,'ysize':ysize,'generate_stylegan2':generate&gen_stylegan2,'generate_wavegan':generate&gen_wavegan,'generate_heatmap':generate&gen_heatmap,
'fps_sg2':fps_sg2,'fps_wg':fps_wg,'fps_hm':fps_hm,'sfreq':sfreq,'vref':vref,'gain':gain,'data_channels':data_channels,
'generate_game':generate&gen_game,'generate_game_mode1':generate&gen_game_mode1,'generate_game_mode3':generate&gen_game_mode3,
'generate_parallel':generate&gen_parallel})
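# The JavaScript above repeatedly computes `frame_time - (elapsed % frame_time)` so each
# `setTimeout` callback lands on the next frame boundary rather than drifting. A minimal
# Python sketch of the same scheduling arithmetic (the function name is illustrative,
# not part of the notebook):

```python
def next_frame_delay(now_ms, start_ms, fps):
    """Milliseconds until the next frame boundary for the given fps."""
    frame_time = int(1000 / fps)          # e.g. 40 ms per frame at 25 fps
    return frame_time - ((now_ms - start_ms) % frame_time)

# At 25 fps, 105 ms after the start time the next boundary is 15 ms away;
# exactly on a boundary, the full frame interval is returned.
```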
# + id="SKDuqNM8HOBZ"
| EMG_Silent_Speech_with_WaveNet&DeepSpeech_via_BrainFlow.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8 mealsuggestion
# language: python
# name: mealsuggestion
# ---
# %matplotlib inline
from pycocotools.coco import COCO # has to be installed manually from https://github.com/cocodataset/cocoapi
import numpy as np
import skimage.io as io
import matplotlib.pyplot as plt
import pylab
from pathlib import Path
pylab.rcParams['figure.figsize'] = (8.0, 10.0)
#set path to coco data
dataDir=Path('C:/projects/mealsuggestion/coco/images')
annTrainFile=Path('C:/projects/mealsuggestion/coco/annotations/instances_train2017.json')
annValFile=Path('C:/projects/mealsuggestion/coco/annotations/instances_val2017.json')
print(dataDir,annTrainFile,annValFile)
# initialize COCO api for instance annotations
import os
print(os.getcwd())
coco_train =COCO(str(annTrainFile))
coco_val =COCO(str(annValFile))
# +
# display COCO categories and supercategories
cats = coco_train.loadCats(coco_train.getCatIds())
nms=[cat['name'] for cat in cats]
print('COCO categories: \n{}\n'.format(' '.join(nms)))
nms = set([cat['supercategory'] for cat in cats])
print('COCO supercategories: \n{}'.format(' '.join(nms)))
# +
# get all images containing food; getImgIds with several catIds returns images containing *all* of them, so we union per-category queries instead
catIds = coco_train.getCatIds(catNms=[ 'banana', 'apple', 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake']);
from itertools import chain
imgIds_train = set(chain.from_iterable(coco_train.getImgIds(catIds=catid) for catid in catIds))
imgIds_val = set(chain.from_iterable(coco_val.getImgIds(catIds=catid) for catid in catIds))
# example
img = coco_train.loadImgs(imgIds_train)[0]
print(len(imgIds_train))
print(len(imgIds_val))
print(img)
# +
# load and display example annotated image
from PIL import Image, ImageDraw
import requests
from io import BytesIO
def draw_img(img):
response = requests.get(img['coco_url'])
im = Image.open(BytesIO(response.content))
draw = ImageDraw.Draw(im)
annIds = coco_train.getAnnIds(imgIds=img['id'], catIds=catIds, iscrowd=None)
anns = coco_train.loadAnns(annIds)
for i in anns:
[x,y,w,h] = i['bbox']
draw.rectangle(((int(x), int(y)), (int(x+w), int(y+h))), outline="red")
draw.rectangle(((int(x), int(y)), (int(x+50), int(y+10))), fill="blue")
draw.text((x, y), coco_train.loadCats(ids=[i['category_id']])[0]['name'], fill=(255,255,255,128))
plt.imshow(im)
plt.show()
draw_img(img)
# +
#functions for creating darknet label
def get_new_id_of(class_name):
catNms = ['banana', 'apple', 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake']
return catNms.index(class_name)
def get_old_id_of(class_name):
catNms=['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light',
'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow',
'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee',
'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard',
'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',
'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch',
'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone',
'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear',
'hair drier', 'toothbrush']
return catNms.index(class_name)
def create_darknet_label(x_size, y_size, x, y, width, height, class_name, coco=False):
# Each row is class x_center y_center width height format.
x_center = (x+(width/2.0))/x_size
y_center = (y+(height/2.0))/y_size
if coco:
classid = get_old_id_of(class_name)
else:
classid = get_new_id_of(class_name)
return classid, x_center, y_center, 1.0*width/x_size, 1.0*height/y_size
# examples
annIds = coco_train.getAnnIds(imgIds=img['id'], catIds=catIds, iscrowd=None)
anns = coco_train.loadAnns(annIds)
for i in anns:
[x,y,w,h] = i['bbox']
print(create_darknet_label(img['width'], img['height'], x, y, w, h, coco_train.loadCats(ids=[i['category_id']])[0]['name']))
#banana
print(get_old_id_of("banana"))
print(get_new_id_of("banana"))
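# A quick standalone sanity check of the bbox conversion above, re-implemented without
# the class-id lookup (the helper name and the sample numbers are illustrative only):

```python
def to_darknet(x_size, y_size, x, y, w, h):
    """COCO pixel bbox (top-left x, y, width, height) -> normalized darknet (x_center, y_center, w, h)."""
    return ((x + w / 2.0) / x_size, (y + h / 2.0) / y_size, w / float(x_size), h / float(y_size))

# A 200x100 box at (100, 50) in a 640x480 image:
# x_center = (100 + 100) / 640 = 0.3125, y_center = (50 + 50) / 480 ~= 0.2083
print(to_darknet(640, 480, 100, 50, 200, 100))
```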
# +
# customize folder as needed
import os
path = "C:\\projects\\mealsuggestion\\food" # for model with 10 food categories
path_coco = "C:\\projects\\mealsuggestion\\food_coco" # for original COCO with 80 categories
img_train_path = os.path.join(path,"images\\train")
img_val_path = os.path.join(path,"images\\val")
labels_train_path = os.path.join(path,"labels\\train")
labels_val_path = os.path.join(path,"labels\\val")
labels_coco_train_path = os.path.join(path_coco,"labels\\train")
labels_coco_val_path = os.path.join(path_coco,"labels\\val")
if not os.path.exists(img_train_path):
os.makedirs(img_train_path)
if not os.path.exists(img_val_path):
os.makedirs(img_val_path)
if not os.path.exists(labels_train_path):
os.makedirs(labels_train_path)
if not os.path.exists(labels_val_path):
os.makedirs(labels_val_path)
if not os.path.exists(labels_coco_train_path):
os.makedirs(labels_coco_train_path)
if not os.path.exists(labels_coco_val_path):
os.makedirs(labels_coco_val_path)
#create symlink to folder of images
if not os.path.exists(os.path.join(path_coco,"images")): # does not work on windows
try:
os.symlink(os.path.join(path,"images"), os.path.join(path_coco,"images"))
except OSError:
# ! mklink /J {os.path.join(path_coco,"images")} {os.path.join(path,"images")}
# +
import sklearn.model_selection
import wget
from pathlib import Path
def lookup_food_data(coco, imgIds, is_train):
imgs = coco.loadImgs(imgIds)
annIds = coco.getAnnIds(imgIds=imgIds, catIds=catIds)
annos = coco.loadAnns(annIds)
images_to_load = [] # [tuple(origin, destination)]
for img in imgs:
img_anno = coco.getAnnIds(imgIds=img['id'], catIds=catIds)
anns = coco.loadAnns(img_anno)
labels = []
labels_coco = []
for ann in anns:
x,y,w,h = ann['bbox']
x_size = img['width']
y_size = img['height']
anno_class = coco.loadCats(ids=[ann['category_id']])[0]['name']
label = create_darknet_label(x_size, y_size, x, y, w, h, anno_class)
label_coco = create_darknet_label(x_size, y_size, x, y, w, h, anno_class, coco=True)
labels.append(" ".join(str(i) for i in label))
labels_coco.append(" ".join(str(i) for i in label_coco))
basename = Path(img['coco_url']).name
stem = Path(img['coco_url']).stem
if is_train:
image_path = Path(os.path.join(img_train_path, basename))
label_path = Path(os.path.join(labels_train_path, stem+".txt"))
label_coco_path = Path(os.path.join(labels_coco_train_path, stem+".txt"))
else:
image_path = Path(os.path.join(img_val_path, basename))
label_path = Path(os.path.join(labels_val_path, stem+".txt"))
label_coco_path = Path(os.path.join(labels_coco_val_path, stem+".txt"))
images_to_load.append((img['coco_url'],str(image_path))) # image location for later download
with open(label_path, "w") as f: # create label for the 10 food categories
f.write("\n".join(labels))
with open(label_coco_path, "w") as f: # create label with original COCO ids (80 categories)
f.write("\n".join(labels_coco))
return images_to_load
images_to_load_train = lookup_food_data(coco_train, imgIds_train, True)
images_to_load_val = lookup_food_data(coco_val, imgIds_val, False)
print(len(images_to_load_val), images_to_load_val[:10])
# +
import os
import requests
from multiprocessing.pool import ThreadPool
import tqdm
OVERWRITE = False
def url_response( obj ):
do_load = OVERWRITE
url, path = obj
if not OVERWRITE:
file = Path(path)
if not file.is_file():
do_load = True
if do_load:
r = requests.get(url, stream = True)
with open(path, 'wb') as f:
for ch in r:
f.write(ch)
pool = ThreadPool(9)
for _ in tqdm.tqdm(pool.imap_unordered(url_response, images_to_load_train), total=len(images_to_load_train)):
pass
for _ in tqdm.tqdm(pool.imap_unordered(url_response, images_to_load_val), total=len(images_to_load_val)):
pass
# -
| machine_learning/pycoco_ingredients.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.6.3
# language: julia
# name: julia-1.6
# ---
# # OrdinalMultinomialModels.jl
#
# OrdinalMultinomialModels.jl provides Julia utilities to fit ordered multinomial models, including [proportional odds model](https://en.wikipedia.org/wiki/Ordered_logit) and [ordered Probit model](https://en.wikipedia.org/wiki/Ordered_probit) as special cases.
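# Concretely, these models share the standard cumulative-probability form (a textbook formulation, not specific to this package):
#
# $$\Pr(Y \le j \mid x) = g^{-1}(\theta_j - x^\top \beta), \qquad j = 1, \dots, J-1,$$
#
# where $g$ is the link function (logit for the proportional odds model, probit for the ordered probit model) and the intercepts satisfy $\theta_1 \le \cdots \le \theta_{J-1}$.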
# ## Installation
#
# This package requires Julia v0.7 or later. The package has not yet been registered and must be installed using the repository location. Start julia and use the `]` key to switch to the package manager REPL
# ```julia
# (v1.6) pkg> add https://github.com/OpenMendel/OrdinalMultinomialModels.jl
# ```
# Machine info for results in this tutorial
versioninfo()
# for use in this tutorial
using OrdinalMultinomialModels, BenchmarkTools, RDatasets
# ## Example data
#
# `housing` is a data set from the R package [MASS](https://cran.r-project.org/web/packages/MASS/index.html). The outcome of interest is `Sat` (satisfaction), which takes the values `Low`, `Medium`, or `High`. Predictors include `Infl` (influence on management, categorical), `Type` (housing type, categorical), and `Cont` (contact with other residents, categorical). `Freq` gives the number of observations for each combination of levels.
housing = dataset("MASS", "housing")
# There are 72 unique combinations of levels, and the total number of observations is 1,681.
size(housing, 1), sum(housing[!,:Freq])
# ## Syntax
#
# `polr` is the main function of fitting ordered multinomial model. For documentation, type `?polr` at Julia REPL.
# ```@docs
# polr
# ```
# ## Fit ordered multinomial models
#
# ### Proportional odds model
#
# To fit an ordered multinomial model using default link `link=LogitLink()`, i.e., proportional odds model
house_po = polr(@formula(Sat ~ Infl + Type + Cont), housing, wts = housing[!, :Freq])
# Since there are $J=3$ categories in `Sat`, the fitted model has 2 intercept parameters $\theta_1$ and $\theta_2$ that satisfy $\theta_1 \le \theta_2$. $\beta_1, \beta_2$ are regression coefficients for `Infl` (3 levels), $\beta_3, \beta_4, \beta_5$ for `Type` (4 levels), and $\beta_6$ for `Cont` (2 levels).
#
# Deviance (-2 loglikelihood) of the fitted model is
deviance(house_po)
# Estimated regression coefficients are
coef(house_po)
# with standard errors
stderror(house_po)
# ### Ordered probit model
#
# To fit an ordered probit model, we use link `ProbitLink()`
house_op = polr(@formula(Sat ~ Infl + Type + Cont), housing, ProbitLink(), wts = housing[!, :Freq])
deviance(house_op)
# ### Proportional hazards model
#
# To fit a proportional hazards model, we use `CloglogLink()`
house_ph = polr(@formula(Sat ~ Infl + Type + Cont), housing, CloglogLink(), wts = housing[!, :Freq])
deviance(house_ph)
# From the deviances, we see that the proportional odds model (logit link) has the best fit among all three models.
deviance(house_po), deviance(house_op), deviance(house_ph)
# ### Alternative syntax without using DataFrame
#
# An alternative syntax is useful when it is inconvenient to use a DataFrame
# ```julia
# polr(X, y, link, solver; wts)
# ```
# where `y` is the response vector and `X` is the `n x p` predictor matrix **excluding** intercept.
# ## Optimization algorithms
#
# OrdinalMultinomialModels.jl relies on nonlinear programming (NLP) optimization algorithms to find the maximum likelihood estimate (MLE). Users can pass any solver supported by the MathProgBase.jl package (see <http://www.juliaopt.org>) as the 4th argument of the `polr` function. Common choices are:
# - Ipopt solver: `Ipopt.Optimizer()`. See [Ipopt.jl](https://github.com/JuliaOpt/Ipopt.jl) for numerous arguments to `Ipopt.Optimizer()`. For example, setting `print_level=5` is useful for diagnostic purposes.
# - [NLopt package](https://github.com/JuliaOpt/NLopt.jl): `NLopt.Optimizer()`. See [NLopt algorithms](https://nlopt.readthedocs.io/en/latest/NLopt_Algorithms/) for all algorithms in [NLopt.jl](https://github.com/JuliaOpt/NLopt.jl).
#
# When optimization fails, users can always try another algorithm.
# Use Ipopt (interior-point) solver
solver = Ipopt.Optimizer()
set_optimizer_attributes(solver, "print_level" => 3)
polr(@formula(Sat ~ Infl + Type + Cont), housing, LogitLink(),
solver; wts = housing[!, :Freq])
# Use SLSQP (sequential quadratic programming) in NLopt.jl package
solver = NLopt.Optimizer()
set_optimizer_attributes(solver, "algorithm" => :LD_SLSQP)
polr(@formula(Sat ~ Infl + Type + Cont), housing, LogitLink(),
solver; wts = housing[!, :Freq])
# Use LBFGS (quasi-Newton algorithm) in NLopt.jl package
solver = NLopt.Optimizer()
set_optimizer_attributes(solver, "algorithm" => :LD_LBFGS)
polr(@formula(Sat ~ 0 + Infl + Type + Cont), housing, LogitLink(),
solver; wts = housing[!, :Freq])
# ## Likelihood ratio test (LRT)
#
# The `polr` function calculates the Wald test (or t-test) p-value for each predictor in the model. To carry out the potentially more powerful likelihood ratio test (LRT), we need to fit the null and alternative models separately.
#
# **Step 1**: Fit the null model with only `Infl` and `Type` factors.
house_null = polr(@formula(Sat ~ Infl + Type), housing; wts = housing[!, :Freq])
# **Step 2**: To test significance of the `Cont` variable, we use `polrtest` function. The first argument is the fitted null model, the second argument is the predictor vector to be tested
# last column of model matrix is coding for Cont (2-level factor)
cont = modelmatrix(house_po.model)[:, end]
# calculate p-value
polrtest(house_null, cont; test=:LRT)
# ## Score test
#
# User can perform **score test** using the `polrtest` function too. Score test has the advantage that, when testing a huge number of predictors such as in genomewide association studies (GWAS), one only needs to fit the null model once and then testing each predictor is cheap. Both Wald and likelihood ratio test (LRT) need to fit a separate alternative model for each predictor being tested.
#
# **Step 1**: Fit the null model with only `Infl` and `Type` factors.
house_null = polr(@formula(Sat ~ Infl + Type), housing; wts = housing[!, :Freq])
# **Step 2**: To test significance of the `Cont` variable, we use `polrtest` function. The first argument is the fitted null model, the second argument is the predictor vector to be tested
# last column of model matrix is coding for Cont (2-level factor)
cont = modelmatrix(house_po.model)[:, end]
# calculate p-value
polrtest(house_null, cont; test=:score)
# **Step 3**: Now suppose we want to test the significance of another predictor, `z1`. We just need to call `polrtest` with `z1` and the same fitted null model. No extra model fitting is needed.
#
# For demonstration purposes, we generate `z1` randomly. The score test p-value of `z1` is, not surprisingly, large.
z1 = randn(nobs(house_null))
polrtest(house_null, z1)
# **Step 4**: We can also test a set of predictors or a factor.
z3 = randn(nobs(house_null), 3)
polrtest(house_null, z3)
| docs/OrdinalMultinomialModels.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Lecture 4 - Intro to ML: PCA, t-SNE
#
# ## 1. Importing Libraries
# +
import matplotlib.pyplot as plt
import numpy as np
import random as rd
import pandas as pd
from mpl_toolkits.mplot3d import Axes3D
from sklearn import decomposition
from sklearn import datasets
from sklearn.preprocessing import StandardScaler
from sklearn.manifold import TSNE
import seaborn as sn
from tensorflow.keras.datasets import mnist
# -
# ## 2. Scikit-Learn Example
# +
np.random.seed(5)
centers = [[1, 1], [-1, -1], [1, -1]]
iris = datasets.load_iris()
X = iris.data
y = iris.target
fig = plt.figure(1, figsize=(4, 3))
plt.clf()
ax = Axes3D(fig, rect=[0, 0, .95, 1], elev=48, azim=134)
plt.cla()
pca = decomposition.PCA(n_components=3)
pca.fit(X)
X = pca.transform(X)
for name, label in [('Setosa', 0), ('Versicolour', 1), ('Virginica', 2)]:
ax.text3D(X[y == label, 0].mean(),
X[y == label, 1].mean() + 1.5,
X[y == label, 2].mean(), name,
horizontalalignment='center',
bbox=dict(alpha=.5, edgecolor='w', facecolor='w'))
# Reorder the labels to have colors matching the cluster results
y = np.choose(y, [1, 2, 0]).astype(float)
ax.scatter(X[:, 0], X[:, 1], X[:, 2], c=y, cmap=plt.cm.nipy_spectral,
edgecolor='k')
ax.xaxis.set_ticklabels([])
ax.yaxis.set_ticklabels([])
ax.zaxis.set_ticklabels([])
plt.show()
# -
# ## PCA Implementation on MNIST
#
# ### Objective
#
# 1.1 Apply PCA to MNIST and see how many dimensions you need in the embedded space to explain 90%, 95%, or 99% of the variance.
# +
def checkcheck(check):
"""
Performs a check to see whether or not the three thresholds of % variance explained were reached
"""
return all(check.values())
def normalize(dataset):
"""
Normalizes the values in an image array
"""
return dataset/255.
# +
# Starts at 86 components because we already know that 90% is reached around 87 components
number_of_components = 86
components_explained_variance = {}
check = {"90%":False,"95%":False,"99%":False}
# +
# Reads the dataset and splits it between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()
# Normalizes
x_train, x_test = normalize(x_train), normalize(x_test)
# Reformats to be acceptable by PCA
x_train = [np.concatenate(i) for i in x_train]
x_test = [np.concatenate(i) for i in x_test]
# -
while not checkcheck(check):
print(number_of_components)
pca_model = decomposition.PCA(n_components=number_of_components, svd_solver='full')
pca_model.fit(x_train)
X_train_pca = pca_model.transform(x_train)
X_test_pca = pca_model.transform(x_test)
components_explained_variance[number_of_components] = [pca_model.components_,
pca_model.explained_variance_ratio_,
sum(pca_model.explained_variance_ratio_)]
if sum(pca_model.explained_variance_ratio_)>=0.9 and check["90%"]==False:
check["90%"] = True
print(f"90% variance explained mark reached at {number_of_components} components")
if sum(pca_model.explained_variance_ratio_)>=0.95 and check["95%"]==False:
check["95%"] = True
print(f"95% variance explained mark reached at {number_of_components} components")
if sum(pca_model.explained_variance_ratio_)>=0.99 and check["99%"]==False:
check["99%"] = True
print(f"99% variance explained mark reached at {number_of_components} components")
number_of_components += 1
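# The refit-and-check loop above works, but PCA only needs to be fitted once: the cumulative sum of the explained-variance ratios gives every threshold in one pass. The sketch below is a minimal illustration of that idea (the helper name and toy data are my own, not part of the exercise), using a plain NumPy SVD in place of `decomposition.PCA`.

```python
import numpy as np

def components_for_variance(X, thresholds=(0.90, 0.95, 0.99)):
    """Return, for each threshold, the smallest number of principal
    components whose cumulative explained variance reaches it."""
    Xc = X - X.mean(axis=0)                  # center the data
    s = np.linalg.svd(Xc, compute_uv=False)  # singular values
    ratios = s**2 / np.sum(s**2)             # explained variance ratios
    cum = np.cumsum(ratios)
    # argmax finds the first index where the threshold is met
    return {t: int(np.argmax(cum >= t)) + 1 for t in thresholds}

# toy data: 3 strong directions plus a little noise
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 10))
X += 0.01 * rng.normal(size=(200, 10))
print(components_for_variance(X))  # the 3 planted directions dominate
```

On MNIST this computes the same 87/154/331 style answer with a single decomposition instead of hundreds of refits.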
for status in [87, 154, 331]:
print(f"{status} components:")
PCA_components = components_explained_variance[status][0]
fig = plt.figure(figsize=(16, 9))
for i in range(0,10):
ax = fig.add_subplot(2, 5, i+1)
plt.imshow(PCA_components[i].reshape((28,28)),cmap='gray')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
plt.axis('off')
ax.set_title('Principal component ' + str(i+1))
plt.show()
components_explained_variance[87][2]
components_explained_variance[154][2]
components_explained_variance[331][2]
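# As a side note, scikit-learn can do this threshold search itself: passing a float in (0, 1) as `n_components` keeps just enough components to explain that fraction of the variance. A small sketch on toy data (the rank-3 matrix here is an assumption for illustration):

```python
import numpy as np
from sklearn.decomposition import PCA

# With a float n_components, PCA picks the component count for you;
# svd_solver='full' is required for this mode.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 10))  # rank-3 signal
pca = PCA(n_components=0.95, svd_solver='full').fit(X)
print(pca.n_components_)  # number of components needed for 95% variance
```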
# ## 2D visualization using PCA
#
# ### Objective
# 1.2 Reduce with PCA to dimension 2 and display the result (use different colors for the different classes)
# +
pca = decomposition.PCA(n_components=2)
pca_results = pca.fit_transform(x_train)
principalDf = pd.DataFrame(data = pca_results, columns = ['principal component 1', 'principal component 2'])
finalDf = pd.concat([principalDf, pd.DataFrame(data = y_train, columns = ['target'])], axis = 1)
# +
fig = plt.figure(figsize = (8,8))
ax = fig.add_subplot(1,1,1)
ax.set_xlabel('Principal Component 1', fontsize = 15)
ax.set_ylabel('Principal Component 2', fontsize = 15)
ax.set_title('2 component PCA', fontsize = 20)
targets = list(range(10))
for target in targets:
indicesToKeep = finalDf['target'] == target
ax.scatter(finalDf.loc[indicesToKeep, 'principal component 1']
, finalDf.loc[indicesToKeep, 'principal component 2']
, s = 50)
ax.legend(targets)
ax.grid()
# -
# ## t-SNE Implementation on MNIST
#
# ### Objective
#
# Part 2: t-SNE in dimension 2
# +
model = TSNE(n_components=2, random_state=0)
# configuring the parameters
# the number of components = 2
# default perplexity = 30
# default learning rate = 200
# default Maximum number of iterations for the optimization = 1000
tsne_data = model.fit_transform(x_train)
# creating a new data frame which helps us in plotting the result data
tsne_data = np.vstack((tsne_data.T, y_train)).T
tsne_df = pd.DataFrame(data=tsne_data, columns=("Dim_1", "Dim_2", "label"))
# Plotting the result of t-SNE
sn.FacetGrid(tsne_df, hue="label", height=6).map(plt.scatter, 'Dim_1', 'Dim_2').add_legend()
plt.show()
# -
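# Running t-SNE on all 60,000 training vectors is very slow, since its cost grows rapidly with the number of points; a common workaround is to fit it on a random subsample. A minimal sketch of the subsampling step (the helper name and sample size are assumptions):

```python
import numpy as np

def subsample(X, y, n, seed=0):
    """Draw n rows from X (and the matching labels from y) without replacement."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=n, replace=False)
    return X[idx], y[idx]

# toy stand-in for the MNIST arrays: row i of X encodes its own index
X = np.arange(100 * 4, dtype=float).reshape(100, 4)
y = np.arange(100)
Xs, ys = subsample(X, y, 10)
print(Xs.shape, ys.shape)  # (10, 4) (10,)
```

Fitting `TSNE` on `Xs` (with `ys` for the colors) gives a map that is usually indistinguishable in structure from the full-data one.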
| PCA/PCA.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from tqdm import tqdm
import seaborn as sns
# ## Class that performs Logistic Regression
class LogisticRegression():
"""Class that performs Logistic Regression
Parameters:
---------------
learningRate- The gradient-descent step size
tolerance- The minimum decrease in cost between iterations before stopping
maxIteration- The maximum number of iterations
"""
def __init__(self, learningRate, tolerance, maxIteration=5000):
"""Function to initialise the parameters"""
self.learningRate = learningRate
self.tolerance = tolerance
self.maxIteration = maxIteration
def datasetReader(self):
"""Function that reads in the data and divides it into training and test set"""
train_df = pd.read_excel('Lab3_data.xls', sheet_name='2004--2007 Data')
test_df = pd.read_excel('Lab3_data.xls', sheet_name='2004--2005 Data')
train_df, test_df = np.array(train_df, dtype=np.float64), np.array(test_df, dtype=np.float64)
X_train, y_train = train_df[:, 1:], train_df[:, 0]
X_test, y_test = test_df[:, 1:], test_df[:, 0]
return X_train, X_test, y_train, y_test
def removeIndex(self, X_train, y_train):
"""Function that removes the indexes of the user's choice"""
input_index = input('Enter the indexes you want to exclude:')
input_index = input_index.split()
input_index = [int(i) for i in input_index]
print('Entered indexes are; ', input_index)
X_train = np.delete(X_train, input_index, axis = 0)
y_train = np.delete(y_train, input_index, axis = 0)
return X_train, y_train
def addX0(self, X):
"""Function to add a column of ones to the dataset"""
return np.column_stack([np.ones([X.shape[0], 1]), X])
def sigmoid(self, z):
"""Function that performns the sigmoid operation"""
sig = 1 / (1 + np.exp(-z))
return sig
def costFunction(self, X, y):
"""Function that calculates the cost function"""
# approach 1
# pred_ = np.log(np.ones(X.shape[0]) + np.exp(X.dot(self.w))) - X.dot(self.w).dot(y) # negative log likelihood
# cost = pred_.sum()
# approach 2
sig = self.sigmoid(X.dot(self.w))
pred_ = y * np.log(sig) + (1 - y) * np.log(1 - sig)
cost = pred_.sum()
return cost
def gradient(self, X, y):
"""Function that calculates the gradient"""
sig = self.sigmoid(X.dot(self.w))
grad = (sig - y).dot(X)
return grad
def gradientDescent(self, X, y):
"""Function that performs gradient descent"""
costSequence = []
lastCost = float('inf')
for i in tqdm(range(self.maxIteration)):
self.w = self.w - self.learningRate * self.gradient(X, y)
currentCost = self.costFunction(X, y)
dif = lastCost - currentCost
lastCost = currentCost
costSequence.append(abs(currentCost))
if dif < self.tolerance:
print('The Model Has Stopped - No Further Improvement')
break
return costSequence
def plotCost(self, costSequence):
"""Function that plots the cost over every iteration"""
s = costSequence
t = list(range(len(costSequence)))
fig, ax = plt.subplots()
ax.plot(t, s)
ax.set(xlabel = 'iterations', ylabel = 'cost', title = 'cost trend')
ax.grid()
plt.show()
def predict(self, X):
"""Function that performs the prediction task"""
sig = self.sigmoid(X.dot(self.w))
return np.around(sig)
def evaluate(self, y, y_hat):
"""Function that evaluates the predictions of the model"""
y = (y == 1)
y_hat = (y_hat == 1)
accuracy = (y == y_hat).sum() / y.size
precision = (y & y_hat).sum() / y_hat.sum()
recall = (y & y_hat).sum() / y.sum()
return accuracy, recall, precision
def plot(self):
plt.figure(figsize=(12, 8))
ax = plt.axes(projection='3d')
# Data for three-dimensional scattered points
ax.scatter3D(self.X_train[:, 0], self.X_train[:, 1],
self.sigmoid(self.X_train.dot(self.w)),
c=self.y_train[:], cmap='viridis', s=100);
ax.set_xlim3d(55, 80)
ax.set_ylim3d(80, 240)
plt.xlabel('$x_1$ feature', fontsize=15)
plt.ylabel('$x_2$ feature', fontsize=15, )
ax.set_zlabel('$P(Y = 1|x_1, x_2)$', fontsize=15, rotation=0)
plt.show()
def scatterPlt(self):
# evenly sampled points
x_min, x_max = 55, 80
y_min, y_max = 80, 240
xx, yy = np.meshgrid(np.linspace(x_min, x_max, 250),
np.linspace(y_min, y_max, 250))
grid = np.c_[xx.ravel(), yy.ravel()]
probs = self.sigmoid(grid.dot(self.w)).reshape(xx.shape)
f, ax = plt.subplots(figsize=(14, 12))
ax.contour(xx, yy, probs, levels=[0.5], cmap="Greys", vmin=0, vmax=.6)
ax.scatter(self.X_train[:, 0], self.X_train[:, 1],
c=self.y_train[:], s=50,
cmap="RdBu", vmin=-.2, vmax=1.2,
edgecolor="white", linewidth=1)
plt.xlabel('x1 feature')
plt.ylabel('x2 feature')
plt.show()
def plot3D(self):
# evenly sampled points
x_min, x_max = 55, 80
y_min, y_max = 80, 240
xx, yy = np.meshgrid(np.linspace(x_min, x_max, 250),
np.linspace(y_min, y_max, 250))
grid = np.c_[xx.ravel(), yy.ravel()]
probs = self.sigmoid(grid.dot(self.w)).reshape(xx.shape)
fig = plt.figure(figsize=(14, 12))
ax = plt.axes(projection='3d')
ax.contour3D(xx, yy, probs, 50, cmap='binary')
ax.scatter3D(self.X_train[:, 0], self.X_train[:, 1],
c=self.y_train[:], s=50,
cmap="RdBu", vmin=-.2, vmax=1.2,
edgecolor="white", linewidth=1)
ax.set_xlabel('x1')
ax.set_ylabel('x2')
ax.set_zlabel('probs')
ax.set_title('3D contour')
plt.show()
def runModel(self):
"""Function to run the model"""
self.X_train, self.X_test, self.y_train, self.y_test = self.datasetReader()
print(self.X_train.shape)
self.w = np.zeros(self.X_train.shape[1], dtype=np.float64)
self.gradientDescent(self.X_train, self.y_train)
print(self.w)
y_hat_train = self.predict(self.X_train)
accuracy, recall, precision = self.evaluate(self.y_train, y_hat_train)
print('Training Accuracy: ', accuracy)
print('Training Recall: ', recall)
print('Training precision: ', precision)
print('--------------------------------')
print('Results without outliers', '\n'*5)
self.X_train, self.y_train = self.removeIndex(self.X_train, self.y_train)
print(self.X_train.shape, self.y_train.shape)
self.w = np.zeros(self.X_train.shape[1], dtype=np.float64)
self.gradientDescent(self.X_train, self.y_train)
print(self.w)
y_hat_train = self.predict(self.X_train)
accuracy, recall, precision = self.evaluate(self.y_train, y_hat_train)
print('Training Accuracy: ', accuracy)
print('Training Recall: ', recall)
print('Training precision: ', precision)
self.scatterPlt()
self.plot()
self.plot3D()
lr = LogisticRegression(tolerance=0.01, learningRate=0.001)
lr.runModel()
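# As a sanity check on the approach, the same batch gradient descent can be sketched in a few self-contained lines on synthetic data. This is an illustrative re-implementation, not the class above: `stable_cost` uses `logaddexp` to avoid the overflow that `y * log(sig)` can hit when the sigmoid saturates.

```python
import numpy as np

def stable_cost(X, y, w):
    """Negative log likelihood, log(1 + exp(z)) - y*z, via logaddexp for stability."""
    z = X.dot(w)
    return np.sum(np.logaddexp(0.0, z) - y * z)

def fit_logreg(X, y, lr=0.1, iters=2000):
    """Batch gradient descent on the logistic loss (no intercept term)."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        sig = 1.0 / (1.0 + np.exp(-X.dot(w)))
        w -= lr * (sig - y).dot(X) / len(y)
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # linearly separable through the origin
w = fit_logreg(X, y)
acc = np.mean((1.0 / (1.0 + np.exp(-X.dot(w))) > 0.5) == y)
print(acc)
```

Because the synthetic boundary passes through the origin, no intercept column is needed; on the lab data the class above would want `addX0` applied first.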
| LogisticReg.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + colab={} colab_type="code" id="27q1gU91a9CV"
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
from torch.autograd import Variable
import numpy as np
import matplotlib.pyplot as plt
# set batch size
batch_size = 100
# download and transform train dataset
train_loader = torch.utils.data.DataLoader(datasets.MNIST('../mnist_data', download=True, train=True, transform=transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,))])), batch_size=batch_size, shuffle=True)
# download and transform test dataset
test_loader = torch.utils.data.DataLoader(datasets.MNIST('../mnist_data', download=True, train=False, transform=transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,))])), batch_size=batch_size, shuffle=True)
# set training epochs
training_epochs = 20
# + colab={} colab_type="code" id="GehuHwXDhMQu"
# define cnn architecture
class CNN_MNIST(nn.Module):
def __init__(self):
super(CNN_MNIST, self).__init__()
self.conv1 = nn.Conv2d(1, 32, kernel_size=3, stride=1, padding=1)
self.conv2 = nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1)
self.conv3 = nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1)
self.conv4 = nn.Conv2d(128, 128, kernel_size=3, stride=2, padding=1)
self.conv5 = nn.Conv2d(128, 128, kernel_size=3, stride=2, padding=1)
self.fc6 = nn.Linear(2*2*128, 100)
self.fc7 = nn.Linear(100, 10)
def forward(self, x):
out = F.relu(self.conv1(x))
out = F.relu(self.conv2(out))
out = F.relu(self.conv3(out))
out = F.relu(self.conv4(out))
out = F.relu(self.conv5(out))
out = out.view(-1, 2*2*128)
out = F.relu(self.fc6(out))
out = self.fc7(out)
return F.log_softmax(out, dim=1)
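# The `2*2*128` input size of `fc6` follows from the standard convolution output-size formula, floor((n + 2p - k)/s) + 1. A quick pure-Python check (no torch needed) walks a 28x28 MNIST image through the five conv layers defined above:

```python
def conv_out(n, kernel=3, stride=1, padding=1):
    """Spatial output size of a square convolution: floor((n + 2p - k)/s) + 1."""
    return (n + 2 * padding - kernel) // stride + 1

n = 28                     # MNIST spatial size
n = conv_out(n, stride=1)  # conv1: 28 -> 28
for _ in range(4):         # conv2..conv5, each stride 2: 14, 7, 4, 2
    n = conv_out(n, stride=2)
print(n)  # 2, so the flattened feature vector has 2*2*128 = 512 entries
```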
# + colab={} colab_type="code" id="9DeuVj0lhXrU"
# train and test model
def Q2_part_1():
cnn = CNN_MNIST()
cnn = cnn.cuda()
optimizer = optim.Adam(cnn.parameters(), lr=0.001)
train_accuracy_list = []
test_accuracy_list = []
for epoch in range(training_epochs):
train_correct = 0
test_correct = 0
# training
for idx, (data, label) in enumerate(train_loader):
cnn.train()
data = Variable(data)
label = Variable(label)
data = data.cuda()
label = label.cuda()
optimizer.zero_grad()
logits = cnn(data)
loss = F.nll_loss(logits, label)
loss.backward()
optimizer.step()
_, pred = torch.max(logits.data, 1)
train_correct += (pred==label).sum().item()
# testing
for idx, (data, label) in enumerate(test_loader):
cnn.eval()
data = Variable(data)
label = Variable(label)
data = data.cuda()
label = label.cuda()
with torch.no_grad():
logits = cnn(data)
_, pred = torch.max(logits.data, 1)
test_correct += (pred==label).sum().item()
train_accuracy = float(train_correct)/len(train_loader.dataset)
train_accuracy_list.append(train_accuracy)
test_accuracy = float(test_correct)/len(test_loader.dataset)
test_accuracy_list.append(test_accuracy)
if(epoch % 4 == 0):
print("\nEpoch: ", epoch+1, ", Training Accuracy: ", train_accuracy, ", Test Accuracy: ", test_accuracy, "\n")
print("\nFinal Test Accuracy: ", test_accuracy)
return cnn, train_accuracy_list, test_accuracy_list
# + colab={"base_uri": "https://localhost:8080/", "height": 650} colab_type="code" id="xmYZYkbHh3Im" outputId="da98acff-173e-48b1-fbdc-a1149de041e3"
model, train_accuracy_list, test_accuracy_list = Q2_part_1()
plt.subplot(1, 2, 1)
plt.plot(np.arange(len(train_accuracy_list))+1, train_accuracy_list)
plt.xlabel("Number of epochs")
plt.ylabel("Training accuracy")
plt.subplot(1, 2, 2)
plt.plot(np.arange(len(test_accuracy_list))+1, test_accuracy_list)
plt.xlabel("Number of epochs")
plt.ylabel("Test accuracy")
plt.show()
# + colab={} colab_type="code" id="9FKMsLW-2Dei"
test_loader_2 = torch.utils.data.DataLoader(datasets.MNIST('../mnist_data', download=True, train=False, transform=transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,))])), batch_size=1, shuffle=True)
# + colab={} colab_type="code" id="0SpdJOPesHen"
# define untargeted fast gradient sign attack perturbations
def fgsm_untargeted(image, epsilon, data_grad):
perturbed_image = image + epsilon*data_grad.sign()
perturbed_image = torch.clamp(perturbed_image, -1.0, 1.0)
return perturbed_image
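# The clamp range (-1, 1) matches the `Normalize((0.5,), (0.5,))` transform used when loading MNIST. A NumPy sketch of the same step on toy numbers (values chosen purely for illustration) makes the sign update and clamping explicit:

```python
import numpy as np

# FGSM step: move each pixel by epsilon in the direction that increases
# the loss (the sign of the gradient), then clamp back into the valid range.
def fgsm_step(image, epsilon, grad, lo=-1.0, hi=1.0):
    return np.clip(image + epsilon * np.sign(grad), lo, hi)

image = np.array([0.0, 0.5, -0.98])
grad = np.array([2.0, -0.1, -3.0])
print(fgsm_step(image, 0.09, grad))  # 0.09, 0.41, -1.0 (last pixel clamped)
```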
# + colab={} colab_type="code" id="vnNPA348sOmK"
# get adversarial images from untargeted FGSM attack
def adversarial_untargeted(model, test_loader, epsilon):
model.eval()
adversarials = []
originals = []
predictions = []
confidences = []
truths = []
for data, label in test_loader:
data = data.cuda()
label = label.cuda()
true_data = data.clone()
out = model(data)
_, pred = torch.max(out.data, 1)
# generate adversarial images for correctly classified images
if(pred.item() == label.item() and (len(adversarials) < 10)):
prob = 0
data.requires_grad = True
while(prob < 0.9):
out = model(data)
loss = F.nll_loss(out, label)
model.zero_grad()
loss.backward()
data_grad = data.grad.data
perturbed_image = fgsm_untargeted(data, epsilon, data_grad)
out2 = model(perturbed_image)
_, pred2 = torch.max(out2.data, 1)
confidence = F.softmax(out2, dim=1)[0][pred2].data.cpu().numpy()[0]
prob = confidence
data = Variable(perturbed_image, requires_grad=True)
if((pred2.item() != label.item()) and (confidence >= 0.9)):
adversarials.append(perturbed_image.squeeze().detach().cpu().numpy())
originals.append(true_data.squeeze().detach().cpu().numpy())
predictions.append(pred2.item())
confidences.append(confidence.item())
truths.append(label.item())
break
if(len(adversarials) == 10):
break
return adversarials, originals, predictions, confidences, truths
# + colab={} colab_type="code" id="mtuDR1UZvWz8"
adversarials_1, originals_1, predictions_1, confidences_1, truths_1 = adversarial_untargeted(model, test_loader_2, 0.09)
# + colab={"base_uri": "https://localhost:8080/", "height": 68} colab_type="code" id="N0WVUYmvBKVS" outputId="b5b1b490-402e-4e22-d0d1-a804dd1099a4"
# check first 3 adversarial images
for i in range(len(adversarials_1[:3])):
print("Original image with label ", truths_1[i], " is predicted as ", predictions_1[i], " with confidence of ", confidences_1[i])
# + colab={"base_uri": "https://localhost:8080/", "height": 134} colab_type="code" id="-K9TyvfkPk9K" outputId="b76b3b74-9f17-463e-c5da-8b243fae2b22"
cnt = 0
plt.figure(figsize=(5,5))
for j in range(len(adversarials_1[:3])):
cnt += 1
plt.subplot(2,len(adversarials_1[:3]),cnt)
plt.xticks([], [])
plt.yticks([], [])
ex = adversarials_1[j]
plt.imshow(ex, cmap="gray")
plt.tight_layout()
plt.show()
# + colab={} colab_type="code" id="QhlADWd_XalQ"
# define targeted fast gradient sign attack perturbations
def fgsm_targeted(image, epsilon, data_grad):
perturbed_image = image - epsilon*data_grad.sign()
perturbed_image = torch.clamp(perturbed_image, -1.0, 1.0)
return perturbed_image
# + colab={} colab_type="code" id="xQkawEQ-Syog"
# get adversarial images from targeted FGSM attack
def adversarial_targeted(model, test_loader, epsilon):
model.eval()
adversarials = []
originals = []
predictions = []
confidences = []
truths = []
for data, label in test_loader:
data = data.cuda()
label = label.cuda()
true_data = data.clone()
target = Variable(torch.Tensor([7])).long()
target = target.cuda()
out = model(data)
_, pred = torch.max(out.data, 1)
# generate adversarial images for correctly classified images
if((pred.item() == label.item()) and (label.item() != 7)):
prob = 0
data.requires_grad = True
while(prob < 0.9):
out = model(data)
loss = F.nll_loss(out, target)
model.zero_grad()
loss.backward()
data_grad = data.grad.data
perturbed_image = fgsm_targeted(data, epsilon, data_grad)
out2 = model(perturbed_image)
_, pred2 = torch.max(out2.data, 1)
confidence = F.softmax(out2, dim=1)[0][pred2].data.cpu().numpy()[0]
prob = confidence
data = Variable(perturbed_image, requires_grad=True)
if((pred2.item() == target.item()) and confidence >= 0.9):
adversarials.append(perturbed_image.squeeze().detach().cpu().numpy())
originals.append(true_data.squeeze().detach().cpu().numpy())
predictions.append(pred2.item())
confidences.append(confidence.item())
truths.append(label.item())
if(len(adversarials) == 10):
break
return adversarials, originals, predictions, confidences, truths
# + colab={} colab_type="code" id="bQchHSy9UIyF"
adversarials_2, originals_2, predictions_2, confidences_2, truths_2 = adversarial_targeted(model, test_loader_2, 0.09)
# + colab={"base_uri": "https://localhost:8080/", "height": 68} colab_type="code" id="9Xa7zo47UVQP" outputId="69c386c9-90de-48d3-dcab-8abf383b7763"
# check first 3 adversarial images
for i in range(len(adversarials_2[:3])):
print("Original image with label ", truths_2[i], " is predicted as ", predictions_2[i], " with confidence of ", confidences_2[i])
# + colab={"base_uri": "https://localhost:8080/", "height": 134} colab_type="code" id="15MzAORcUWAA" outputId="0a94de70-138a-4442-8984-4a593f0ab2d1"
cnt = 0
plt.figure(figsize=(5,5))
for j in range(len(adversarials_2[:3])):
cnt += 1
plt.subplot(2,len(adversarials_2[:3]),cnt)
plt.xticks([], [])
plt.yticks([], [])
ex = adversarials_2[j]
plt.imshow(ex, cmap="gray")
plt.tight_layout()
plt.show()
| main.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="XibzlCzoojrK" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 513} outputId="16cc60a1-efc7-42a1-c784-920a9a106423" executionInfo={"status": "ok", "timestamp": 1583444372650, "user_tz": -60, "elapsed": 15256, "user": {"displayName": "<NAME>\u0142a", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ggnav9sCJsBvoHjyZAE5-hdytji0L2Acp21HmMJMQ=s64", "userId": "03238446980367325604"}}
# !pip install --upgrade tables
# !pip install eli5
# !pip install xgboost
# + id="_QXlq41Eo3DG" colab_type="code" colab={}
import pandas as pd
import numpy as np
from sklearn.dummy import DummyRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
import xgboost as xgb
from sklearn.metrics import mean_absolute_error as mae
from sklearn.model_selection import cross_val_score, KFold
import eli5
from eli5.sklearn import PermutationImportance
# + id="qDmF3dm0psup" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="5d319488-3405-4f88-8188-7983c340314d" executionInfo={"status": "ok", "timestamp": 1583444690705, "user_tz": -60, "elapsed": 1015, "user": {"displayName": "<NAME>\u0142a", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ggnav9sCJsBvoHjyZAE5-hdytji0L2Acp21HmMJMQ=s64", "userId": "03238446980367325604"}}
# cd "/content/drive/My Drive/Colab Notebooks/dw_matrix/matrix_two/dw_matrix_2"
# + id="deH2K4-HqILQ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="60ecfad5-7497-47e6-f981-1bb176886c5e" executionInfo={"status": "ok", "timestamp": 1583444740679, "user_tz": -60, "elapsed": 4842, "user": {"displayName": "<NAME>\u0142a", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ggnav9sCJsBvoHjyZAE5-hdytji0L2Acp21HmMJMQ=s64", "userId": "03238446980367325604"}}
df = pd.read_hdf('data/car.h5')
df.shape
# + id="wrjCfEw9qTcU" colab_type="code" colab={}
# + [markdown] id="ympqgJV8qlVe" colab_type="text"
# #Feature Engineering
# + id="OtWvcDdBqnwY" colab_type="code" colab={}
SUFFIX_CAT ='__cat'
for feat in df.columns:
if isinstance(df[feat][0], list): continue
factorized_values = df[feat].factorize()[0]
if SUFFIX_CAT in feat:
df[feat] = factorized_values
else:
df[feat + SUFFIX_CAT]= factorized_values
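# `factorize` is what turns the string-valued columns into integers that tree models can consume. A tiny sketch of what the loop above does to a single column (toy values, not the car data):

```python
import pandas as pd

# factorize returns (codes, uniques): codes are integers assigned in
# order of first appearance, uniques maps each code back to its value
codes, uniques = pd.factorize(pd.Series(['diesel', 'petrol', 'diesel', 'lpg']))
print(list(codes))    # [0, 1, 0, 2]
print(list(uniques))  # ['diesel', 'petrol', 'lpg']
```

Note that the codes carry no order, which is why tree-based models (decision tree, random forest, XGBoost) tolerate this encoding better than linear models would.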
# + id="VBzqykL1rL-G" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="bbfa260b-dc80-4173-9b33-536cb52c9aa1" executionInfo={"status": "ok", "timestamp": 1583445048702, "user_tz": -60, "elapsed": 1027, "user": {"displayName": "<NAME>\u0142a", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ggnav9sCJsBvoHjyZAE5-hdytji0L2Acp21HmMJMQ=s64", "userId": "03238446980367325604"}}
cat_feats = [x for x in df.columns if SUFFIX_CAT in x]
cat_feats = [x for x in cat_feats if 'price' not in x]
len(cat_feats)
# + id="6L6YYuW1rfk7" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="0c0c4758-4fc4-4b8b-da20-c7bc17293070" executionInfo={"status": "ok", "timestamp": 1583445303577, "user_tz": -60, "elapsed": 4081, "user": {"displayName": "<NAME>\u0142a", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ggnav9sCJsBvoHjyZAE5-hdytji0L2Acp21HmMJMQ=s64", "userId": "03238446980367325604"}}
x = df[cat_feats].values
y = df['price_value'].values
model = DecisionTreeRegressor(max_depth = 5)
scores = cross_val_score(model, x, y, cv = 3, scoring = 'neg_mean_absolute_error')
np.mean(scores), np.std(scores)
# + id="avq5duELr2mz" colab_type="code" colab={}
def run_model(model, feats):
x = df[feats].values
y = df['price_value'].values
scores = cross_val_score(model, x, y, cv = 3, scoring = 'neg_mean_absolute_error')
return np.mean(scores), np.std(scores)
# + id="PdBCUNhzvMpX" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="7b8a151a-53d4-47d1-d31a-c2144f3a9ee6" executionInfo={"status": "ok", "timestamp": 1583446086639, "user_tz": -60, "elapsed": 4806, "user": {"displayName": "<NAME>\u0142a", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ggnav9sCJsBvoHjyZAE5-hdytji0L2Acp21HmMJMQ=s64", "userId": "03238446980367325604"}}
run_model(DecisionTreeRegressor(max_depth=5), cat_feats)
# + id="E7M-7-twvU1Y" colab_type="code" colab={}
# + [markdown] id="y2DbwWEEvdtb" colab_type="text"
# #Random Forest
# + id="pmXIplz8vfIL" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="b68b4bfd-a734-4b34-a5ff-30a6f4e53b03" executionInfo={"status": "ok", "timestamp": 1583446304871, "user_tz": -60, "elapsed": 105118, "user": {"displayName": "<NAME>\u0142a", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ggnav9sCJsBvoHjyZAE5-hdytji0L2Acp21HmMJMQ=s64", "userId": "03238446980367325604"}}
model = RandomForestRegressor(max_depth=5, n_estimators=50, random_state=0)
run_model(model, cat_feats)
# + id="JVXgyQKnv416" colab_type="code" colab={}
# + [markdown] id="XRzpXszbv7B-" colab_type="text"
# #XGBoost
# + id="vcoDPjvuv8nO" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 85} outputId="fead8162-ab37-4380-be88-b1f8dd5f00c5" executionInfo={"status": "ok", "timestamp": 1583446442793, "user_tz": -60, "elapsed": 61274, "user": {"displayName": "<NAME>\u0142a", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ggnav9sCJsBvoHjyZAE5-hdytji0L2Acp21HmMJMQ=s64", "userId": "03238446980367325604"}}
xgb_params = {
'max_depth': 5,
'n_estimators': 50,
'learning_rate': 0.1,
'seed': 0
}
model= xgb.XGBRegressor(**xgb_params)
run_model(model, cat_feats)
# + id="Wnk8w3e_wgry" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 408} outputId="54df9906-5597-4b8f-e334-cd19a33fa9fc" executionInfo={"status": "ok", "timestamp": 1583446962990, "user_tz": -60, "elapsed": 369415, "user": {"displayName": "<NAME>\u0142a", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ggnav9sCJsBvoHjyZAE5-hdytji0L2Acp21HmMJMQ=s64", "userId": "03238446980367325604"}}
m = xgb.XGBRegressor(max_depth=5, n_estimators=50, learning_rate=0.1, seed=0)
m.fit(x, y)
imp = PermutationImportance(m, random_state=0).fit(x, y)
eli5.show_weights(imp, feature_names=cat_feats)
# + id="RHk8oVV8xY_b" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 85} outputId="d8e3b817-89c2-4445-c942-8cba33d0f8d0" executionInfo={"status": "ok", "timestamp": 1583447601257, "user_tz": -60, "elapsed": 13841, "user": {"displayName": "<NAME>\u0142a", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ggnav9sCJsBvoHjyZAE5-hdytji0L2Acp21HmMJMQ=s64", "userId": "03238446980367325604"}}
feats = ['param_napęd__cat', 'param_rok-produkcji__cat', 'param_stan__cat', 'param_skrzynia-biegów__cat', 'param_faktura-vat__cat', 'param_moc__cat', 'param_marka-pojazdu__cat', 'feature_kamera-cofania__cat', 'param_typ__cat', 'param_pojemność-skokowa__cat', 'seller_name__cat', 'feature_wspomaganie-kierownicy__cat', 'param_model-pojazdu__cat', 'param_wersja__cat', 'param_kod-silnika__cat', 'feature_system-start-stop__cat', 'feature_asystent-pasa-ruchu__cat', 'feature_czujniki-parkowania-przednie__cat', 'feature_łopatki-zmiany-biegów__cat', 'feature_regulowane-zawieszenie__cat' ]
run_model(xgb.XGBRegressor(**xgb_params), feats)
# + id="LUMvcNF70yv5" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 187} outputId="ecd20531-6b54-4f06-afdf-d161ee2cd56d" executionInfo={"status": "ok", "timestamp": 1583447663104, "user_tz": -60, "elapsed": 986, "user": {"displayName": "<NAME>\u0142a", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ggnav9sCJsBvoHjyZAE5-hdytji0L2Acp21HmMJMQ=s64", "userId": "03238446980367325604"}}
df['param_rok-produkcji'].unique()
# + id="YrMk-1wa1Zf7" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 102} outputId="7f2e32ee-317a-46ca-fdab-e772463dcb8b" executionInfo={"status": "ok", "timestamp": 1583447698219, "user_tz": -60, "elapsed": 917, "user": {"displayName": "<NAME>\u0142a", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ggnav9sCJsBvoHjyZAE5-hdytji0L2Acp21HmMJMQ=s64", "userId": "03238446980367325604"}}
df['param_rok-produkcji__cat'].unique()
# + id="U_SLXate1hoL" colab_type="code" colab={}
df['param_rok-produkcji'] = df['param_rok-produkcji'].map(lambda x: -1 if str(x) == 'None' else int(x))
# + id="Ir0gHu1s2IGs" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 85} outputId="04ecccb1-df8a-4831-eb2e-d709218318d6" executionInfo={"status": "ok", "timestamp": 1583447894171, "user_tz": -60, "elapsed": 14180, "user": {"displayName": "<NAME>\u0142a", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ggnav9sCJsBvoHjyZAE5-hdytji0L2Acp21HmMJMQ=s64", "userId": "03238446980367325604"}}
feats = ['param_napęd__cat', 'param_rok-produkcji', 'param_stan__cat', 'param_skrzynia-biegów__cat', 'param_faktura-vat__cat', 'param_moc__cat', 'param_marka-pojazdu__cat', 'feature_kamera-cofania__cat', 'param_typ__cat', 'param_pojemność-skokowa__cat', 'seller_name__cat', 'feature_wspomaganie-kierownicy__cat', 'param_model-pojazdu__cat', 'param_wersja__cat', 'param_kod-silnika__cat', 'feature_system-start-stop__cat', 'feature_asystent-pasa-ruchu__cat', 'feature_czujniki-parkowania-przednie__cat', 'feature_łopatki-zmiany-biegów__cat', 'feature_regulowane-zawieszenie__cat' ]
run_model(xgb.XGBRegressor(**xgb_params), feats)
# + id="ErEJbC5E2TAa" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 85} outputId="2f2824af-c13c-4ab2-d2b1-1cd6b4f6f1ad" executionInfo={"status": "ok", "timestamp": 1583449007603, "user_tz": -60, "elapsed": 13787, "user": {"displayName": "<NAME>\u0142a", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ggnav9sCJsBvoHjyZAE5-hdytji0L2Acp21HmMJMQ=s64", "userId": "03238446980367325604"}}
df['param_moc'] = df['param_moc'].map(lambda x: -1 if str(x) == 'None' else int(x.split(' ')[0]))
df['param_rok-produkcji'] = df['param_rok-produkcji'].map(lambda x: -1 if str(x) == 'None' else int(x))
feats = ['param_napęd__cat', 'param_rok-produkcji', 'param_stan__cat', 'param_skrzynia-biegów__cat', 'param_faktura-vat__cat', 'param_moc', 'param_marka-pojazdu__cat', 'feature_kamera-cofania__cat', 'param_typ__cat', 'param_pojemność-skokowa__cat', 'seller_name__cat', 'feature_wspomaganie-kierownicy__cat', 'param_model-pojazdu__cat', 'param_wersja__cat', 'param_kod-silnika__cat', 'feature_system-start-stop__cat', 'feature_asystent-pasa-ruchu__cat', 'feature_czujniki-parkowania-przednie__cat', 'feature_łopatki-zmiany-biegów__cat', 'feature_regulowane-zawieszenie__cat' ]
run_model(xgb.XGBRegressor(**xgb_params), feats)
# + id="z0Dk0vn02jGY" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 85} outputId="a58cb000-7426-4e95-fa9f-e369bf6f6589" executionInfo={"status": "ok", "timestamp": 1583449426711, "user_tz": -60, "elapsed": 13414, "user": {"displayName": "<NAME>\u0142a", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ggnav9sCJsBvoHjyZAE5-hdytji0L2Acp21HmMJMQ=s64", "userId": "03238446980367325604"}}
df['param_pojemność-skokowa'] = df['param_pojemność-skokowa'].map(lambda x: -1 if str(x) == 'None' else int(str(x).split('cm')[0].replace(' ', '')))
feats = ['param_napęd__cat', 'param_rok-produkcji', 'param_stan__cat', 'param_skrzynia-biegów__cat', 'param_faktura-vat__cat', 'param_moc', 'param_marka-pojazdu__cat', 'feature_kamera-cofania__cat', 'param_typ__cat', 'param_pojemność-skokowa', 'seller_name__cat', 'feature_wspomaganie-kierownicy__cat', 'param_model-pojazdu__cat', 'param_wersja__cat', 'param_kod-silnika__cat', 'feature_system-start-stop__cat', 'feature_asystent-pasa-ruchu__cat', 'feature_czujniki-parkowania-przednie__cat', 'feature_łopatki-zmiany-biegów__cat', 'feature_regulowane-zawieszenie__cat' ]
run_model(xgb.XGBRegressor(**xgb_params), feats)
# + id="zw7S0Fin74Qs" colab_type="code" colab={}
| day4_model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="5rGMaQn2cvZ5"
# # Kestrel+Model
# ### A [Bangkit 2021](https://grow.google/intl/id_id/bangkit/) Capstone Project
#
# Kestrel is a TensorFlow powered American Sign Language translator Android app that will make it easier for anyone to seamlessly communicate with people who have vision or hearing impairments. The Kestrel model builds on the state of the art MobileNetV2 model that is optimized for speed and latency on smartphones to accurately recognize and interpret sign language from the phone’s camera and display the translation through a beautiful, convenient and easily accessible Android app.
#
# # American Sign Language
# Fingerspelling alphabets
# from the [National Institute on Deafness and Other Communication Disorders (NIDCD)](https://www.nidcd.nih.gov/health/american-sign-language-fingerspelling-alphabets-image)
#
# <table>
# <tr><td>
# <img src="https://www.nidcd.nih.gov/sites/default/files/Content%20Images/NIDCD-ASL-hands-2019_large.jpg"
# alt="ASL fingerspelling alphabets" width="600">
# </td></tr>
# <tr><td align="center">
# <b>Figure 1.</b> <a href="https://www.nidcd.nih.gov/health/american-sign-language-fingerspelling-alphabets-image">ASL Fingerspelling Alphabets</a> <br/>
# </td></tr>
# </table>
# + colab={"base_uri": "https://localhost:8080/"} id="KsvnG3UmpXiM" executionInfo={"status": "ok", "timestamp": 1620807379937, "user_tz": -420, "elapsed": 27873, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhUzNB71B6oUWl-CxgzNaee2K6t1hICzdl9WbGQ=s64", "userId": "13069600450773884742"}} outputId="4e313bd1-34d6-4b73-a96c-b7cd8083dafc"
from google.colab import drive
drive.mount('/content/drive')
# + [markdown] id="1S9QJaQeNoNy"
# # Initial setup
# + colab={"base_uri": "https://localhost:8080/"} id="mAP-yk1Zgad-" executionInfo={"status": "ok", "timestamp": 1621650104131, "user_tz": -420, "elapsed": 11018, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhUzNB71B6oUWl-CxgzNaee2K6t1hICzdl9WbGQ=s64", "userId": "13069600450773884742"}} outputId="6754caff-0957-4585-ecc3-756acef015f0"
try:
# %tensorflow_version 2.x
except:
pass
import numpy as np
import matplotlib.pylab as plt
import tensorflow as tf
import tensorflow_hub as hub
import PIL
import PIL.Image
from os import listdir
import pathlib
from tqdm import tqdm
from tensorflow.keras.preprocessing import image_dataset_from_directory
print("\u2022 Using TensorFlow Version:", tf.__version__)
print("\u2022 Using TensorFlow Hub Version: ", hub.__version__)
print('\u2022 GPU Device Found.' if tf.config.list_physical_devices('GPU') else '\u2022 GPU Device Not Found. Running on CPU')
# + [markdown] id="TIV1xNnvlpuX"
# # Data preprocessing
# + [markdown] id="knFJFbTZthZp"
# ### (Optional) Unzip file on Google Drive
# + id="1g5Rf5mZtjOh"
import zipfile
import pathlib
zip_dir = pathlib.Path('/content/drive/Shareddrives/Kestrel/A - Copy.zip')
unzip_dir = pathlib.Path('/content/drive/Shareddrives/Kestrel/A_Unzipped')
with zipfile.ZipFile(zip_dir, 'r') as zip_ref:
zip_ref.extractall(unzip_dir)
# + [markdown] id="BBHgfg7-ltz7"
# ### Loading images from directory
# + id="GWguhrIJj7hF" executionInfo={"status": "ok", "timestamp": 1621650142152, "user_tz": -420, "elapsed": 18, "user": {"displayName": "<NAME> M2322239", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhUzNB71B6oUWl-CxgzNaee2K6t1hICzdl9WbGQ=s64", "userId": "13069600450773884742"}}
data_dir = pathlib.Path('/Dev/fingerspelling5/dataset5/Combined')
# + [markdown] id="NN6NCO-umTeo"
# ### (Optional) Counting the number of images in the dataset
# + colab={"base_uri": "https://localhost:8080/"} id="7f83imbimZA1" executionInfo={"status": "ok", "timestamp": 1621650145621, "user_tz": -420, "elapsed": 518, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhUzNB71B6oUWl-CxgzNaee2K6t1hICzdl9WbGQ=s64", "userId": "13069600450773884742"}} outputId="e0fb6076-ccdd-444c-cdae-07467223e3c5"
image_count = len(list(data_dir.glob('*/color*.png')))
print(image_count)
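For a quick sanity check of class balance, the same glob pattern can be tallied per class subdirectory. A minimal standard-library sketch (the `color*.png` naming is taken from the glob above, and it assumes one subdirectory per letter class):

```python
from collections import Counter
from pathlib import Path

def count_images_per_class(data_dir, pattern="color*.png"):
    """Tally matching images under each immediate subdirectory (class) of data_dir."""
    data_dir = Path(data_dir)
    counts = Counter()
    for class_dir in sorted(p for p in data_dir.iterdir() if p.is_dir()):
        counts[class_dir.name] = len(list(class_dir.glob(pattern)))
    return counts
```

Summing `counts.values()` should reproduce the `image_count` computed above.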
# + [markdown] id="GyVJSjb-oVFc"
# ### (Optional) Displaying one of the "a" letter sign language images:
# + colab={"base_uri": "https://localhost:8080/", "height": 149} id="IalNVQR2oUjh" executionInfo={"status": "ok", "timestamp": 1621650162455, "user_tz": -420, "elapsed": 485, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhUzNB71B6oUWl-CxgzNaee2K6t1hICzdl9WbGQ=s64", "userId": "13069600450773884742"}} outputId="2baa75b2-6d1f-4c7c-bb40-f2259f15101f"
two = list(data_dir.glob('*/color*.png'))
PIL.Image.open(str(two[0]))
# + [markdown] id="koiCqb7MqsgL"
# # Create the dataset
# + [markdown] id="wLOGSxw7qwMS"
# Loading the images off disk using [image_dataset_from_directory](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image_dataset_from_directory). Define some parameters for the loader:
# + id="D_ggH4TDqr3Z" executionInfo={"status": "ok", "timestamp": 1621650170428, "user_tz": -420, "elapsed": 16, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhUzNB71B6oUWl-CxgzNaee2K6t1hICzdl9WbGQ=s64", "userId": "13069600450773884742"}}
BATCH_SIZE = 30
IMG_SIZE = (160, 160)
# + [markdown] id="f3xkr1IZrn8B"
# ### Coursera method using ImageDataGenerator
# + id="TizyL6xrrtCc" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1621650177419, "user_tz": -420, "elapsed": 3258, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhUzNB71B6oUWl-CxgzNaee2K6t1hICzdl9WbGQ=s64", "userId": "13069600450773884742"}} outputId="54aef186-1b70-4c41-9255-a32fa2e6cdf8"
from tensorflow.keras.preprocessing.image import ImageDataGenerator
train_generator = ImageDataGenerator(
rescale = 1./255,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest',
validation_split=0.2)
validation_generator = ImageDataGenerator(
rescale = 1./255,
validation_split=0.2)
train_dataset = train_generator.flow_from_directory(data_dir,
batch_size = BATCH_SIZE,
class_mode = 'categorical',
subset='training',
target_size = IMG_SIZE,
shuffle=True,
)
validation_dataset = validation_generator.flow_from_directory(data_dir,
batch_size = BATCH_SIZE,
class_mode = 'categorical',
subset='validation',
target_size = IMG_SIZE,
shuffle=True,
)
# + [markdown] id="VVMmrEYbrB3N"
# Splitting images for training and validation
# + [markdown] id="Srzt94kBPHvj"
# ### (Optional) Visualize the data
# + [markdown] id="5dhnLfYjPKH2"
# Show the first 9 images and labels from the training set:
# + id="vax3TGmwPTdv"
#@title Showing 9 images
# Note: flow_from_directory iterators have no .take(), and the labels are
# one-hot because class_mode='categorical', so the tf.data version of this
# snippet needs a few adjustments:
class_names = sorted(train_dataset.class_indices, key=train_dataset.class_indices.get)
images, labels = next(train_dataset)
plt.figure(figsize=(10, 10))
for i in range(9):
    ax = plt.subplot(3, 3, i + 1)
    plt.imshow(images[i])  # images were already rescaled to [0, 1]
    plt.title(class_names[int(labels[i].argmax())])
    plt.axis("off")
# + colab={"base_uri": "https://localhost:8080/"} id="cAg5hy-kwBMg" executionInfo={"status": "ok", "timestamp": 1620857454829, "user_tz": -420, "elapsed": 225, "user": {"displayName": "<NAME> M2322239", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhUzNB71B6oUWl-CxgzNaee2K6t1hICzdl9WbGQ=s64", "userId": "13069600450773884742"}} outputId="54866c4e-a7eb-44ed-ff41-8688515d3741"
for image_batch, labels_batch in train_dataset:
print(image_batch.shape)
print(labels_batch.shape)
break
# + [markdown] id="CqffjjU1QrnZ"
# ### (Deprecated) Create a test set
# + [markdown] id="mB-fJOiGQvm3"
# To create a Test Set, determine how many batches of data are available in the validation set using ```tf.data.experimental.cardinality```, then move 20% of them to a test set.
# + colab={"base_uri": "https://localhost:8080/"} id="1Oqr9-sGQ4eY" executionInfo={"status": "ok", "timestamp": 1620738885070, "user_tz": -420, "elapsed": 1104, "user": {"displayName": "<NAME> M2322239", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhUzNB71B6oUWl-CxgzNaee2K6t1hICzdl9WbGQ=s64", "userId": "13069600450773884742"}} outputId="e9126388-3781-48a6-e202-71d13fc35332"
validation_batches = tf.data.experimental.cardinality(validation_dataset)
test_dataset = validation_dataset.take(validation_batches // 5)
validation_dataset = validation_dataset.skip(validation_batches // 5)
print('Number of validation batches: %d' % tf.data.experimental.cardinality(validation_dataset))
print('Number of test batches: %d' % tf.data.experimental.cardinality(test_dataset))
# + [markdown] id="SFi_k2kcSLNb"
# ### Configure the dataset for performance
# + [markdown] id="jCTIg4DxSeKG"
# Use buffered prefetching to load images from disk without having I/O become blocking. To learn more about this method see the [data performance](https://www.tensorflow.org/guide/data_performance) guide.
# + id="lfWJb_KTSkyR"
AUTOTUNE = tf.data.AUTOTUNE
# cache()/prefetch() exist only on tf.data.Dataset objects; the ImageDataGenerator
# iterators built above do not support them, so only apply them when available.
if isinstance(train_dataset, tf.data.Dataset):
    train_dataset = train_dataset.cache().prefetch(buffer_size=AUTOTUNE)
    validation_dataset = validation_dataset.cache().prefetch(buffer_size=AUTOTUNE)
# test_dataset = test_dataset.prefetch(buffer_size=AUTOTUNE)
# + [markdown] id="7o1FbuMeq0Ur"
# # Create the model
# + [markdown] id="G1IgOn30WlBi"
# ### Create the base model from the pre-trained convnets
# You will create the base model from the **MobileNet V2** model developed at Google. This is pre-trained on the ImageNet dataset, a large dataset consisting of 1.4M images and 1000 classes. ImageNet is a research training dataset with a wide variety of categories like `jackfruit` and `syringe`. This base of knowledge will help us classify the ASL hand signs in our specific dataset.
#
# First, you need to pick which layer of MobileNet V2 you will use for feature extraction. The very last classification layer (on "top", as most diagrams of machine learning models go from bottom to top) is not very useful. Instead, you will follow the common practice to depend on the very last layer before the flatten operation. This layer is called the "bottleneck layer". The bottleneck layer features retain more generality as compared to the final/top layer.
#
# First, instantiate a MobileNet V2 model pre-loaded with weights trained on ImageNet. By specifying the **include_top=False** argument, you load a network that doesn't include the classification layers at the top, which is ideal for feature extraction.
# + id="AZyiMHF5W9dP" executionInfo={"status": "ok", "timestamp": 1621650215537, "user_tz": -420, "elapsed": 3512, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhUzNB71B6oUWl-CxgzNaee2K6t1hICzdl9WbGQ=s64", "userId": "13069600450773884742"}}
# Create the base model from the pre-trained model MobileNet V2
IMG_SHAPE = IMG_SIZE + (3,)
base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
include_top=False,
weights='imagenet')
# + [markdown] id="GuUzSpLfXGoN"
# This feature extractor converts each `160 x 160` image into a `5x5x1280` block of features. Let's see what it does to an example batch of images:
# + colab={"base_uri": "https://localhost:8080/"} id="9k5oEpO0XmII" executionInfo={"status": "ok", "timestamp": 1621650221290, "user_tz": -420, "elapsed": 4390, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhUzNB71B6oUWl-CxgzNaee2K6t1hICzdl9WbGQ=s64", "userId": "13069600450773884742"}} outputId="047f4882-96c4-498c-c7d5-7bbde36f34b1"
image_batch, label_batch = next(iter(train_dataset))
feature_batch = base_model(image_batch)
print(feature_batch.shape)
# + [markdown] id="3UZ9h9TtYNEK"
# ### Freeze the convolutional base
# In this step, you will freeze the convolutional base created in the previous step and use it as a feature extractor. Then you add a classifier on top of it and train only that top-level classifier.
#
# It is important to freeze the convolutional base before you compile and train the model. Freezing (by setting layer.trainable = False) prevents the weights in a given layer from being updated during training. MobileNet V2 has many layers, so setting the entire model's `trainable` flag to False will freeze all of them.
# + id="6cIG-zKlYUjs" executionInfo={"status": "ok", "timestamp": 1621650222982, "user_tz": -420, "elapsed": 25, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhUzNB71B6oUWl-CxgzNaee2K6t1hICzdl9WbGQ=s64", "userId": "13069600450773884742"}}
base_model.trainable = False
# + colab={"base_uri": "https://localhost:8080/"} id="bgxi4i53YcrR" executionInfo={"status": "ok", "timestamp": 1621650225514, "user_tz": -420, "elapsed": 84, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhUzNB71B6oUWl-CxgzNaee2K6t1hICzdl9WbGQ=s64", "userId": "13069600450773884742"}} outputId="13580549-7bf3-489f-f8cf-5b69c6dbca07"
# Let's take a look at the base model architecture
base_model.summary()
# + [markdown] id="cCeUq3jig_1N"
# ### Adding new layer to the model
# + colab={"base_uri": "https://localhost:8080/"} id="_j8Kpkx_h-9t" executionInfo={"status": "ok", "timestamp": 1621650242258, "user_tz": -420, "elapsed": 25, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhUzNB71B6oUWl-CxgzNaee2K6t1hICzdl9WbGQ=s64", "userId": "13069600450773884742"}} outputId="4f51c9ae-0dd1-406c-fe90-560d3da78ada"
last_layer = base_model.get_layer('out_relu')
print('last layer output shape: ', last_layer.output_shape)
last_output = last_layer.output
# + colab={"base_uri": "https://localhost:8080/"} id="ZwUEbQPJhCPP" executionInfo={"status": "ok", "timestamp": 1621650309200, "user_tz": -420, "elapsed": 315, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhUzNB71B6oUWl-CxgzNaee2K6t1hICzdl9WbGQ=s64", "userId": "13069600450773884742"}} outputId="437b71ae-1a36-4de4-9f38-9d4fee1642e8"
from tensorflow.keras.optimizers import RMSprop
from tensorflow.keras import layers
from tensorflow.keras import Model
# Flatten the output layer to 1 dimension
x = layers.Flatten()(last_output)
# Add a fully connected layer with 1,024 hidden units and ReLU activation
x = layers.Dense(1024, activation='relu')(x)
# Add a dropout rate of 0.4
x = layers.Dropout(0.4)(x)
# Add a final layer for classification
x = layers.Dense(24, activation='softmax')(x)
model = Model(base_model.input, x)
model.summary()
# + id="4x9lnV-3sUaf"
# # !pip install scipy
# + [markdown] id="zJ-nqXrJreTj"
# ### Training the model
# + id="YhTg39oh-WPe" executionInfo={"status": "ok", "timestamp": 1621650337904, "user_tz": -420, "elapsed": 35, "user": {"displayName": "<NAME> M2322239", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhUzNB71B6oUWl-CxgzNaee2K6t1hICzdl9WbGQ=s64", "userId": "13069600450773884742"}}
checkpoint_path = "TensorFlow_Training_Checkpoint/Kestrel_Training_12_Max50Dropout0.4/cp.ckpt"
# + colab={"base_uri": "https://localhost:8080/"} id="UR0sbHgaSR3k" executionInfo={"status": "ok", "timestamp": 1621661672398, "user_tz": -420, "elapsed": 11304580, "user": {"displayName": "<NAME> M2322239", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhUzNB71B6oUWl-CxgzNaee2K6t1hICzdl9WbGQ=s64", "userId": "13069600450773884742"}} outputId="b69dd11e-ed6d-47b1-84ca-967f1d7e2ffc"
import os
# base_learning_rate = 0.0001
def get_uncompiled_model():
model = Model( base_model.input, x)
return model
def get_compiled_model():
model = get_uncompiled_model()
model.compile(
optimizer="rmsprop",
loss="categorical_crossentropy",
metrics=["accuracy"],
)
return model
checkpoint_dir = os.path.dirname(checkpoint_path)
if not os.path.exists(checkpoint_dir):
os.makedirs(checkpoint_dir)
def make_or_restore_model():
# Either restore the latest model, or create a fresh one
# if there is no checkpoint available.
checkpoints = [checkpoint_dir + "/" + name for name in os.listdir(checkpoint_dir)]
if checkpoints:
latest_checkpoint = max(checkpoints, key=os.path.getctime)
print("Restoring from", latest_checkpoint)
# Used if restoring from full model
# return tf.keras.models.load_model(latest_checkpoint)
# Used if restoring from weights only
model = Model( base_model.input, x)
model.load_weights(checkpoint_path)
model.compile(
optimizer="rmsprop",
loss="categorical_crossentropy",
metrics=["accuracy"],
)
return model
print("Creating a new model")
return get_compiled_model()
# Create a callback that saves the model's weights
model_checkpoint_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_path,
save_weights_only=True,
monitor='val_accuracy',
mode='auto',
save_best_only=True, # Only save a model if `val_accuracy` has improved.
verbose=1)
early_callbacks = [
tf.keras.callbacks.EarlyStopping(
# Stop training when `val_loss` is no longer improving
monitor="val_accuracy",
# "no longer improving" being defined as "no better than 1e-2 less"
# min_delta=1e-2,
# "no longer improving" being further defined as "for at least 2 epochs"
patience=10,
verbose=1,
)
]
model = make_or_restore_model()
history = model.fit(train_dataset,
epochs=40,
validation_data = validation_dataset,
verbose = 1,
callbacks=[model_checkpoint_callback, early_callbacks])# Pass callback to training
# This may generate warnings related to saving the state of the optimizer.
# These warnings (and similar warnings throughout this notebook)
# are in place to discourage outdated usage, and can be ignored.
# # EXERCISE: Use the tf.saved_model API to save your model in the SavedModel format.
# export_dir = 'saved_model/2'
# # YOUR CODE HERE
# tf.saved_model.save(model, export_dir)
# + [markdown] id="6wzPoTeYrlsS"
# ### Plotting the accuracy and loss
# + colab={"base_uri": "https://localhost:8080/", "height": 545} id="wLTZkG-Qbgyh" executionInfo={"status": "ok", "timestamp": 1621661676845, "user_tz": -420, "elapsed": 484, "user": {"displayName": "<NAME> M2322239", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhUzNB71B6oUWl-CxgzNaee2K6t1hICzdl9WbGQ=s64", "userId": "13069600450773884742"}} outputId="8e7b53a2-382f-4397-a26c-8e7059932f51"
import matplotlib.pyplot as plt
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'r', label='Training accuracy')
plt.plot(epochs, val_acc, 'b', label='Validation accuracy')
plt.title('Training and validation accuracy')
plt.legend(loc=0)
plt.figure()
plt.plot(epochs, loss, 'r', label='Training Loss')
plt.plot(epochs, val_loss, 'b', label='Validation Loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
# + [markdown] id="uLbmnlz-r6TM"
# # Exporting to TFLite
# You will now save the model to TFLite. Note that you will probably see some warning messages when running the code below; these have to do with software versions and should not cause any errors or prevent your code from running.
#
# + id="X0TeWXnosXW3" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1621378280383, "user_tz": -420, "elapsed": 32480, "user": {"displayName": "<NAME> M2322239", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhUzNB71B6oUWl-CxgzNaee2K6t1hICzdl9WbGQ=s64", "userId": "13069600450773884742"}} outputId="867f1a6e-79ef-45b4-93b5-fd1d09095e87"
# EXERCISE: Use the tf.saved_model API to save your model in the SavedModel format.
export_dir = 'saved_model/11_40Dropout0.4'
# YOUR CODE HERE
tf.saved_model.save(model, export_dir)
# + id="8vRn9z-7thBq"
# # Select mode of optimization
# mode = "Speed"
# if mode == 'Storage':
# optimization = tf.lite.Optimize.OPTIMIZE_FOR_SIZE
# elif mode == 'Speed':
# optimization = tf.lite.Optimize.OPTIMIZE_FOR_LATENCY
# else:
# optimization = tf.lite.Optimize.DEFAULT
# + id="K9DdUb4ytk1v"
# EXERCISE: Use the TFLiteConverter SavedModel API to initialize the converter
import tensorflow as tf
converter = tf.lite.TFLiteConverter.from_saved_model(export_dir) # YOUR CODE HERE
# Set the optimizations
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # YOUR CODE HERE
# Invoke the converter to finally generate the TFLite model
tflite_model = converter.convert()# YOUR CODE HERE
# + id="EYgOxxa5tmTq" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1621378322342, "user_tz": -420, "elapsed": 82, "user": {"displayName": "<NAME> M2322239", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhUzNB71B6oUWl-CxgzNaee2K6t1hICzdl9WbGQ=s64", "userId": "13069600450773884742"}} outputId="318497d8-2793-40e1-e770-28949743e62c"
tflite_model_file = pathlib.Path('saved_model/11_40Dropout0.4/model.tflite')
tflite_model_file.write_bytes(tflite_model)
# + id="HAN1iOmUmTr2"
# path_to_pb = "C:/saved_model/saved_model.pb"
# def load_pb(path_to_pb):
# with tf.gfile.GFile(path_to_pb, "rb") as f:
# graph_def = tf.GraphDef()
# graph_def.ParseFromString(f.read())
# with tf.Graph().as_default() as graph:
# tf.import_graph_def(graph_def, name='')
# return graph
# print(graph)
# + [markdown] id="HBOJpZ9YtpQ7"
# # Test the model with TFLite interpreter
# + id="iLYbJOCCttYd"
# Load TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
# + id="9d3NurtTtv7V" colab={"base_uri": "https://localhost:8080/", "height": 235} executionInfo={"status": "error", "timestamp": 1620869802531, "user_tz": -420, "elapsed": 87, "user": {"displayName": "<NAME> M2322239", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhUzNB71B6oUWl-CxgzNaee2K6t1hICzdl9WbGQ=s64", "userId": "13069600450773884742"}} outputId="78631ed2-da18-49a1-a41e-46d3b65c5fbd"
# Gather results for the randomly sampled test images
predictions = []
test_labels = []
test_images = []
# Assumes a tf.data test set: `data_dir` is a pathlib.Path and has no .map(),
# and `format_example` is not defined in this notebook, so reuse the
# `test_dataset` built in the deprecated section above instead:
test_batches = test_dataset.unbatch().batch(1)
for img, label in test_batches.take(50):
interpreter.set_tensor(input_index, img)
interpreter.invoke()
predictions.append(interpreter.get_tensor(output_index))
test_labels.append(label[0])
test_images.append(np.array(img))
# + id="NGiMQsoVtxqk"
# Utilities functions for plotting
def plot_image(i, predictions_array, true_label, img):
predictions_array, true_label, img = predictions_array[i], true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
img = np.squeeze(img)
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label.numpy():
color = 'green'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
100*np.max(predictions_array),
class_names[true_label]),
color=color)
def plot_value_array(i, predictions_array, true_label):
predictions_array, true_label = predictions_array[i], true_label[i]
plt.grid(False)
plt.xticks(list(range(24)))  # 24 output classes in this model
plt.yticks([])
thisplot = plt.bar(range(24), predictions_array[0], color="#777777")
plt.ylim([0, 1])
predicted_label = np.argmax(predictions_array[0])
thisplot[predicted_label].set_color('red')
thisplot[true_label].set_color('blue')
# + id="AQVQQjbltzsQ"
# Visualize the outputs
# Select index of image to display. Valid index values run from 0 to 49.
index = 5
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(index, predictions, test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(index, predictions, test_labels)
plt.show()
| TensorFlow/unused/archived_notebooks/Kestrel+Model+Max50Dropout0.4.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] graffitiCellId="id_chbughc"
# We saw a *similar* problem earlier in **Data Structures** course, **Maps and Hashing** lesson. There, we used an additional space to create a dictionary in order to solve the problem.
#
#
# ## Problem Statement
#
# Given an input array and a target value (integer), find two values in the array whose sum is equal to the target value. Solve the problem **without using extra space**. You can assume the array has unique values and will never have more than one solution.
# + graffitiCellId="id_9rkom1w"
def pair_sum(arr, target):
"""
:param: arr - input array
:param: target - target value
TODO: complete this method to find two numbers such that their sum is equal to the target
Return the two numbers in the form of a sorted list
"""
pass
# + [markdown] graffitiCellId="id_z5auf94"
# <span class="graffiti-highlight graffiti-id_z5auf94-id_mxw6vbb"><i></i><button>Show Solution</button></span>
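For reference, one approach that satisfies the no-extra-space constraint is to sort the array in place and then move two pointers inward from the ends. This is a sketch, not necessarily the course's hidden solution; the helper name is chosen here so it does not clobber the exercise stub above:

```python
def pair_sum_two_pointers(arr, target):
    """Find two values summing to target using an in-place sort plus two pointers.

    Returns the pair as a sorted list, or [None, None] if no pair exists.
    """
    arr.sort()                      # in-place, O(n log n), no extra array
    left, right = 0, len(arr) - 1
    while left < right:
        current = arr[left] + arr[right]
        if current == target:
            return [arr[left], arr[right]]
        elif current < target:
            left += 1               # need a larger sum
        else:
            right -= 1              # need a smaller sum
    return [None, None]
```

For example, `pair_sum_two_pointers([2, 7, 11, 15], 9)` returns `[2, 7]`, matching the first test case below.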
# + graffitiCellId="id_3eusmdv"
def test_function(test_case):
    input_list = test_case[0]
    target = test_case[1]
    solution = test_case[2]
    output = pair_sum(input_list, target)
    if output == solution:
        print("Pass")
    else:
        print("Fail")
# + graffitiCellId="id_lt2ac2g"
input_list = [2, 7, 11, 15]
target = 9
solution = [2, 7]
test_case = [input_list, target, solution]
test_function(test_case)
# + graffitiCellId="id_p8o19gq"
input_list = [0, 8, 5, 7, 9]
target = 9
solution = [0, 9]
test_case = [input_list, target, solution]
test_function(test_case)
# + graffitiCellId="id_f0dyr3c"
input_list = [110, 9, 89]
target = 9
solution = [None, None]
test_case = [input_list, target, solution]
test_function(test_case)
| Basic Algorithms/Sort Algorithms/Pair Sum Problem.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/TejDS/NUMPY/blob/master/NUMPY_END_TO_END.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="o0fiMv45ylhF" colab_type="text"
# **Numpy:**
#
#
# It is an acronym for Numerical Python. It is a Python library used for working with arrays, and it also has functions for working in the domains of linear algebra, Fourier transforms, and matrices.
#
#
# + [markdown] id="FABdnCwuyll7" colab_type="text"
# **Difference Between List and Numpy**
#
#
#
# * Speed: lists are slow, NumPy arrays are fast.
#
# 1. NumPy uses fixed types. A number stored as int32 occupies exactly 4 bytes of memory.
# 2. A Python list stores each number as a full object carrying:
#
#     (i) Size (int32)
#     (ii) Reference count, i.e. how many names point at the object (int64)
#     (iii) Object type (int64)
#     (iv) Object value (int64)
#
# 3. NumPy stores its elements in contiguous memory, whereas a list's elements are scattered across memory (certainly not next to each other).
#
#
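The memory points above are easy to verify; a minimal, self-contained sketch (exact byte counts on the list side vary by Python version, so only the comparison is asserted):

```python
import sys
import numpy as np

n = 1_000_000
np_arr = np.arange(n, dtype=np.int32)
py_list = list(range(n))

# NumPy: n contiguous int32 values, exactly 4 bytes each
print(np_arr.nbytes)  # 4000000

# List: n pointers in the list object plus n separate int objects
list_bytes = sys.getsizeof(py_list) + sum(sys.getsizeof(x) for x in py_list)
print(list_bytes > np_arr.nbytes)  # True
```

The list side pays for the object headers described above, which is why it is both larger and slower to traverse.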
# + [markdown] id="sHeJHQsPylsU" colab_type="text"
# **Applications Of Numpy:**
#
#
#
# 1. Mathematics
# 2. Matplotlib
# 3. Backend of Pandas
# 4. Digital Photography
# 5. Machine Learning
#
#
# + id="kyCfk2OOyzJf" colab_type="code" colab={}
# Importing Numpy Library
import numpy as np
# + id="BkoQG-poy1p2" colab_type="code" outputId="023c6e1a-41c6-474b-d0b2-10e41c7fb7ea" colab={"base_uri": "https://localhost:8080/", "height": 34}
a = np.array([1,2,3],dtype='int8')
print(f'a: {a}')
# + id="3-YDMVR0y3iM" colab_type="code" outputId="2c0e9f4b-b47b-48f7-ebf0-771158f1d7ae" colab={"base_uri": "https://localhost:8080/", "height": 34}
# Get Dimension
print(f'Dimension of a : {a.ndim}')
# + id="BAmU7SqQy5y2" colab_type="code" outputId="f03b8147-359b-4dbd-a724-e31f3e12f91e" colab={"base_uri": "https://localhost:8080/", "height": 34}
# Type and Size
print(f'Type: {a.dtype} Size: {a.itemsize}')
# + id="CjCRiY5Jy8eZ" colab_type="code" outputId="c9f6fc8f-95cd-4d0c-caaa-682c505ea434" colab={"base_uri": "https://localhost:8080/", "height": 34}
# Get Shape
a.shape
# + id="1-7CegQ50hiz" colab_type="code" outputId="efa35607-84e8-454e-b95e-16e45146e8a8" colab={"base_uri": "https://localhost:8080/", "height": 50}
# Get Total Size
print(a.size * a.itemsize)
print(a.nbytes)
# + id="oRoNDOS02jfb" colab_type="code" colab={}
# Accessing Elements in a array
b = np.array([[1,2,3,4,5,6,7,8,9],[10,11,12,13,14,15,16,17,18]])
# + id="Vod7fLcB3QSb" colab_type="code" outputId="7ff0488c-250a-4e43-8569-98385fd49d12" colab={"base_uri": "https://localhost:8080/", "height": 34}
b.shape
# + id="GxPxlHRi3Rf1" colab_type="code" outputId="09052974-2cc2-4e05-8c89-3f456952cf3f" colab={"base_uri": "https://localhost:8080/", "height": 50}
print(f'b: {b}')
# + id="vaReIRdU3VAG" colab_type="code" outputId="e215a386-09f3-4495-bdac-ed8f1630cf04" colab={"base_uri": "https://localhost:8080/", "height": 50}
# Accessing Element 17 from second row.
print(f'Choice one : {b[1][-2]}')
print(f'Choice two : {b[1,-2]}')
# + id="aC7btWF-3Zv9" colab_type="code" outputId="e62fa4e6-c0c4-4be0-c48b-b1391ab68439" colab={"base_uri": "https://localhost:8080/", "height": 50}
# Access Specific Row
print(f'Choice one : {b[1]}')
print(f'Choice two : {b[1,:]}')
# + id="3eUkJej_3q4o" colab_type="code" outputId="5702f095-871f-41ed-e6db-4ed7242d5907" colab={"base_uri": "https://localhost:8080/", "height": 34}
# Access Specific Column
print(b[:,1])
# + id="YwPooQ3w4mec" colab_type="code" outputId="32ef4d1b-630e-4d4d-d2fa-77c3544c54b8" colab={"base_uri": "https://localhost:8080/", "height": 34}
# Accessing Elements in more fancy way [startindex:endindex:stepsize]
b[0,1:6:2]
# + id="PjgqCMTB5tyQ" colab_type="code" outputId="2a1c9d34-a916-47a1-c2da-82dc4bf0c3a9" colab={"base_uri": "https://localhost:8080/", "height": 50}
# Replacing Specific Elements (15 with 20)
b[1,5] = 20
print(f'Replaced Matrix : {b}')
# + id="1iceFo726dKF" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 50} outputId="5c232351-9b18-41bf-86c6-0d261f8284ea"
# Replacing Elements in a column
b[:,2] = [15,20]
print(b)
# + [markdown] id="4vXBj3iKEeTi" colab_type="text"
# **3-D**
# + id="KWNbt8ERC3w_" colab_type="code" colab={}
c = np.array([[[1,2],[3,4]],[[5,6],[7,8]]])
# + id="lnUIuXKTEw_b" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 101} outputId="6f3c1341-18ca-4524-a42d-df20bd4d10f9"
print(f'3D Matrix : {c}')
# + id="Jj7OQZV9E1zy" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="8f037e84-67e0-4dce-eb95-a18805682b66"
c.shape
# + id="1OdDldUvE60s" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="2490431c-ffc8-42b9-c201-17c229e8449a"
c.ndim
# + id="eXHYMKsEFQAF" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="2c9ebfae-269d-4d20-8ba5-d488718f9c9c"
c.dtype
# + id="by9dV4TsFSsi" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 50} outputId="1ddeac9a-bbc0-452b-c5a5-f8326921a32d"
print(c.diagonal())
# + id="usJqbw_kFdx9" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="d513854b-8eeb-49f8-dd49-20787af57f04"
print(b.diagonal())
# + id="f1FP27YfFqu_" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 50} outputId="932369f1-dd88-4b5c-ab10-564e326292a9"
c[0]
# + id="qM_j2006Fy6C" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 50} outputId="3f4d5fae-483b-400f-e2ef-9bfd19670219"
c[1]
# + id="kC3mI9LjGrVW" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="2a3ba265-afc1-4702-8193-d4f88ca9db76"
c[0,0]
# + id="EtdlUv9VG_TV" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="ce6734b0-c0e5-4f5a-f0cb-b718c587c60b"
c[0,0][0]
# + id="K6yAnYbZHTDt" colab_type="code" colab={}
c[0,0][0] = 20
# + id="bc--5EzXHYxY" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 101} outputId="9800339c-bab4-4f39-8b2f-4fac0bddb30c"
print(c)
# + [markdown] id="JACdbxArH3zb" colab_type="text"
# **Initializing Different Type of Arrays**
# + id="JP7b2nQ4HaoW" colab_type="code" colab={}
# All 0's Matrix
# + id="l2kTnQvlHzMA" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="74609597-f997-4be8-c25f-7d9034f70834"
np.zeros(5)
# + id="ZkVIdKyoIIGG" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 50} outputId="1574da70-9787-4b5c-a8c4-3d6ea56983df"
np.zeros((2,3))
# + id="7l5Lgw9aISKD" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 134} outputId="03591aa5-3f55-44e9-928f-0c7fe3a788da"
np.zeros((2,3,3))
# + id="_z-7yZHwIXnW" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 420} outputId="acb95290-3f56-48b2-be55-0329916e53d7"
np.zeros((2,3,3,3))
# + id="zJj344lhIdGk" colab_type="code" colab={}
# All 1's Matrix
# + id="IoszO0wTI0wi" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 50} outputId="bed2d714-d855-4a53-8854-bcad14d87dcd"
np.ones((2,3))
# + id="zOD3N16KI5fO" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 134} outputId="31e32cf3-d52f-443a-b775-023562b08503"
np.ones((2,3,3))
# + id="cj4JpRxtJC_9" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 50} outputId="88819dc4-dab9-483c-cb72-94c7a2a387fc"
np.full((2,3),90)
# + id="OJx10afOJTdD" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 50} outputId="1c1640e6-a30f-45bb-e292-b532dccff1bc"
# Creating an array of size b
np.full_like(b,32)
# + id="U4wCRCDPJ2s4" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 84} outputId="03048bd6-7fa8-4c39-e120-3080aac828cc"
# Random Decimal Numbers Between 0 and 1
np.random.rand(4,2)
# + id="pG-Q0tp8KWPl" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 202} outputId="78f36038-4bed-4de6-f5ad-006f23712885"
np.random.rand(4,2,3)
# + id="flJhG2r7KaXu" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 84} outputId="90c8b971-2678-4201-a79d-7871bc2742b3"
# Random decimal Numbers in the shape of other matrix
np.random.random_sample(b.shape)
# + id="NZ7UFQpYKtk5" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 67} outputId="5ab0fa92-07f0-49a3-d879-f13bc62db045"
# Random Integer Values
np.random.randint(3,7,size=(3,3))
# + id="vno7RUKzLOex" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 67} outputId="9db50bc5-532e-4f64-e246-a2ba8c6f2231"
# Identity Matrix
np.identity(3)
# + id="jOTrNSqzLbK5" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 84} outputId="10da4390-29da-4d7e-8d2b-201cb86257d6"
# Repeat An Array
arr = np.array([[1,2,3]])
r1 = np.repeat(arr,4,axis=0)
print(r1)
# + id="0HCDly_8MAED" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 101} outputId="2906a6d9-cc64-4c16-bf88-476cf4a2cbca"
# Testing the fundamentals
output = np.ones((5,5))
print(output)
# + id="HtjNCkSgMgba" colab_type="code" colab={}
output[1:4,1:4] = 0
# + id="6zNkqY_nM0-K" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 101} outputId="297f7c9d-d304-44f4-96fe-2d91601361ee"
output
# + id="S_5YURMNM2QM" colab_type="code" colab={}
output[2,2] = 9
# + id="qIZeUyb6OK4Y" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 101} outputId="e1505efc-ba82-4479-fba6-bdc1349dd42a"
output
# + id="IRylkDC6OL6q" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="d78da170-15c5-4455-8384-75f5fc79101c"
# Copying an array
a = np.array([1,2,3])
b=a
b[0] = 100
print(a)
# + id="-INqxnMcOyIf" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="1315fddb-82ce-4fa2-cef6-d5013077972e"
a
# + id="LTxUyivtOzD0" colab_type="code" colab={}
# So the above code doesn't work for copying: b refers to the same array as a, so any change to b affects a
# + id="z_fpxsKtO8oc" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="24c51b87-1508-402d-f6b5-342680358545"
a = np.array([1,2,3])
b = a.copy()
b[0] = 100
print(a)
# + [markdown] id="hWtygtMHP2sl" colab_type="text"
# **Math**
# + id="FD4N6K4uPHBc" colab_type="code" colab={}
a = np.array([1,2,3,])
# + id="Drv8IVBXP-n9" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="e90d2131-dab3-40a1-87cc-be5b8b7d2e8a"
a+2
# + id="DvlmnplQQAJk" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="9799e866-0f1c-45db-85f6-ae8b1aeb7dde"
a-2
# + id="0K7KnXL-QBk_" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="e19fef5e-7615-4f07-aca7-42a87cbf17da"
a*2
# + id="g4hPMr-SQC-f" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="594ba0eb-bdc0-4e7f-8154-454460d98b19"
a**2
# + id="h5-QDh_6QEkW" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="439fd0ab-bbb4-4667-acfb-52a27d55ec25"
a/2
# + id="430SfkbcQF3d" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="eafca7fa-f3bb-429b-b8c3-1c2b81d5600c"
a//2
# + id="tYFCKzmaQG16" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="b2601e46-0e43-489e-df70-c3c17d0201a2"
np.cos(a)
# + id="lxUXXdsTQLLU" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="651a4eb3-d858-4a0b-92d6-da19c6fe3baf"
np.sin(a)
# + [markdown] id="BZcZQdGCQQoK" colab_type="text"
# **Linear Algebra**
# + id="lJvy6Ce6QNxo" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 101} outputId="2248e187-6838-4d1d-d146-7b2869f49f70"
# Rule of Matrix Multiplication is : Number of columns in matrix a must be equal to number of rows in matrix b
a = np.ones((2,3))
print(a)
b = np.full((3,2),2)
print(b)
# + id="-HMf6q8YRIUM" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 50} outputId="a6ae865e-e3b8-4a8e-d577-bdf12317682e"
np.matmul(a,b)
# + id="rSe76TqwRXOQ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="099a72b7-c496-4d9c-e59b-b39f09178bc2"
# Find the determinant
a = np.identity(3)
np.linalg.det(a)
# + id="vit8EUpfTPhN" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 67} outputId="2f6d6bc8-ca13-4c06-b80c-0be741ed5813"
np.linalg.eigh(a)
# + id="CsNMDbZjTydD" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="9bbe7cc2-57a9-49f6-9606-df56e939c1bb"
np.linalg.eigvals(a)
# + id="PRpv8o9NfsHt" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 67} outputId="80400f36-d664-4b06-ddd4-7a1d5ea7a525"
# Dot Product
np.dot(a,b)
# + [markdown] id="7dKxE9seX1g-" colab_type="text"
# **Statistics**
# + id="SWKB5JB0UVyx" colab_type="code" colab={}
stats = np.array([[1,2,3],[4,5,6]])
# + id="YK6iZKEPXxMU" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="95b62400-57f1-457a-e3c5-516914c5a82e"
np.min(stats)
# + id="DyW_W3cvYcU9" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="be0488e5-0aca-47f0-885e-ea9d1db88a76"
np.max(stats)
# + id="DvAfm27wYejS" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="0b804807-b8f9-4769-ad1d-a716e5a20054"
np.sum(stats)
# + id="Neo2tafgYhbi" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="579ed282-4320-4351-9db0-160896a87303"
np.cumsum(stats)
# + id="cbOzdqS_YkTn" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="af8c52d9-46c2-43ee-afde-0f4c41b4c31b"
np.cumproduct(stats)
# + id="hAudgm9FYoyn" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="db85488c-05d2-4f38-a3e5-7b87f8baeceb"
np.cumprod(stats)
# + id="d4osd1dWYwgG" colab_type="code" colab={}
# Reorganizing Arrays
# + id="gKFBRsHbZLIR" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 118} outputId="4e4170dc-1450-407c-a19d-48b5301ac0f3"
before = np.array([[1,2,3,4],[5,6,7,8]])
print(before)
after = before.reshape((4,2))
print(after)
# + id="I_5RjASeZa3F" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 84} outputId="6d98feed-ddc0-4d78-b208-3aa467ed5453"
a = np.array([2,3,4,5])
b = np.array([6,7,8,9])
np.vstack([a,b,a,b])
# + id="TJCQleudbk54" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="df6ca49a-bb1c-4e95-b4eb-ae53eea43759"
np.hstack([a,b,a,b])
# + id="GKE6WSyOeVQd" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="c6532581-1206-45af-cc1e-e0aad015dd5a"
# All Method (every element must be True)
my_array = [[True, False], [False, False]]
a_bool = np.all(my_array)
print(a_bool)
# + id="crk4kpvaeWyU" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="039a8bf5-595a-4a9c-fb81-77622a3aeca1"
# Any Method (at least one element must be True)
my_array = [[True, False], [False, False]]
a_bool = np.any(my_array)
print(a_bool)
# + id="la4saAn7e03H" colab_type="code" colab={}
| NUMPY_END_TO_END.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %matplotlib inline
import intake
import xarray as xr
import os
import pandas as pd
import numpy as np
import zarr
import rhg_compute_tools.kubernetes as rhgk
import matplotlib.pyplot as plt
import re
import yaml
import ast
import warnings
# -
'''client, cluster = rhgk.get_standard_cluster()
cluster'''
col = intake.open_esm_datastore("https://storage.googleapis.com/cmip6/pangeo-cmip6.json")
# +
def _paramfile_to_tuple(model, variable):
"""
takes in a model and variable, returns tuple from parameter file.
"""
param_file = '/home/jovyan/downscaling/downscale/workflows/parameters/{}-{}.yaml'.format(model, variable)
with open(param_file, 'r') as f:
var_dict = yaml.full_load(f)
# some string parsing
line = var_dict['jobs']
line1 = re.sub(r"\n", "", line)
line2 = re.sub(r"[\[\]]", "", line1)
return ast.literal_eval(line2.strip())
def _get_cmip6_dataset(model, variable, tuple_id, period='ssp'):
d_ssp = _paramfile_to_tuple(model, variable)[tuple_id][period]
cat = col.search(
activity_id=d_ssp['activity_id'],
experiment_id=d_ssp['experiment_id'],
table_id=d_ssp['table_id'],
variable_id=d_ssp['variable_id'],
source_id=d_ssp['source_id'],
member_id=d_ssp['member_id'],
grid_label=d_ssp['grid_label'],
version=int(d_ssp['version']),
)
return cat.to_dataset_dict(progressbar=False)
def compute_dtr(model, tuple_id=1):
"""
    takes in a model name, loads its tasmax and tasmin Datasets, and computes DTR (returned lazily)
"""
tasmax = _get_cmip6_dataset(model, 'tasmax', tuple_id)
k_tasmax = list(tasmax.keys())
if len(k_tasmax) != 1:
raise ValueError("there is likely an issue with {} tasmax".format(model))
tasmin = _get_cmip6_dataset(model, 'tasmin', tuple_id)
k_tasmin = list(tasmin.keys())
if len(k_tasmin) != 1:
raise ValueError("there is likely an issue with {} tasmin".format(model))
return tasmax[k_tasmax[0]]['tasmax'] - tasmin[k_tasmin[0]]['tasmin']
def check_dtr(dtr, model):
    """
    warns if the DTR for this model contains any non-positive values
    """
min_dtr = dtr.min('time')
neg_count = min_dtr.where(min_dtr <= 0).count().values
if neg_count > 0:
warnings.warn("DTR has negative values for {}".format(model))
# -
# checking models
# DTR negative:
# - GFDL-ESM4
# - GFDL-CM4
#
# DTR positive:
# - CanESM5
# - INM-CM4-8
# - INM-CM5-0
# - NorESM2-MM
# - NorESM2-LM
# - MIROC6
# - EC-Earth3-Veg-LR
# - EC-Earth3-Veg
# - EC-Earth3
# - KIOST-ESM
# - MIROC-ES2L
# - MPI-ESM1-2-LR
# - MPI-ESM1-2-HR
# - NESM3
# - MRI-ESM2-0
# - FGOALS-g3
# - CMCC-ESM2
# - BCC-CSM2-MR
# - AWI-CM-1-1-MR
# - ACCESS-CM2
#
# Parameter files to add or fix (could not check DTR):
# - UKESM1-0-LL
# - ACCESS-ESM1-5
# - MPI-ESM1-2-HAM
#
# Tasmin parameter files to add (could not check DTR):
# - CAMS-CSM1-0
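The negative-DTR screening above boils down to counting grid cells whose minimum of (tasmax - tasmin) over time is non-positive. A self-contained sketch of that logic on synthetic NumPy arrays (the real check runs on lazy xarray DataArrays; these values are illustrative only):

```python
import numpy as np

def count_negative_dtr(tasmax, tasmin):
    """Count spatial cells whose minimum DTR over time is <= 0."""
    dtr = tasmax - tasmin          # shape: (time, lat, lon)
    min_dtr = dtr.min(axis=0)      # minimum over the time axis
    return int((min_dtr <= 0).sum())

rng = np.random.default_rng(0)
tasmin = 280 + rng.random((10, 4, 4))
tasmax = tasmin + 5.0                        # DTR strictly positive everywhere
assert count_negative_dtr(tasmax, tasmin) == 0

tasmax_bad = tasmax.copy()
tasmax_bad[0, 0, 0] = tasmin[0, 0, 0] - 1.0  # invert one cell at one time step
assert count_negative_dtr(tasmax_bad, tasmin) == 1
```

A single inverted cell at any time step is enough to flag a model, which matches the warning behavior of `check_dtr` above.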
model = 'NorESM2-MM'
# +
# _get_cmip6_dataset(model, 'tasmax', 0)
# -
dtr = compute_dtr(model, tuple_id=0)
check_dtr(dtr, model)
| notebooks/downscaling_pipeline/pipeline_testing/check_dtr.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
import numpy as np
time_data_all = np.load('sudoku/Results/experiment_results_time_all.npy')
propagation_data_all = np.load('sudoku/Results/experiment_results_propagations_all.npy')
time_data = np.load('sudoku/Results/experiment_results_time__simplesplit.npy')
propagation_data = np.load('sudoku/Results/experiment_results_propagations__simplesplit.npy')
# %pylab inline
from matplotlib.backends.backend_pdf import PdfPages
from pylab import *
rcParams['legend.loc'] = 'best'
print time_data
print propagation_data
print mean(time_data)
print mean(propagation_data)
# +
X = np.arange(0, 9)
visits_mean = [Data[Data[:,Columns.index('difficult_level')] == x][:, Columns.index('visits_number')].mean() for x in X]
fig = figure(figsize=(8, 6))
ax = fig.add_subplot(111)
ax.plot(X, visits_mean, 'r', label='Mean')
visits_variance = [np.var(Data[Data[:,Columns.index('difficult_level')] == x][:,Columns.index('visits_number')]) for x in X]
ax.plot(X, visits_variance, 'b', label='Variance')
ax.set_yscale('log')
legend()
xlabel('Difficulty level')
grid(True)
with PdfPages('mean_variance.pdf') as pdf:
pdf.savefig(fig)
plt.show()
# +
from scipy import stats
for i in range(7):
d1 = Data[Data[:,Columns.index('difficult_level')] == i][:, Columns.index('visits_number')]
d2 = Data[Data[:,Columns.index('difficult_level')] == i + 1][:, Columns.index('visits_number')]
t, p = stats.ttest_ind(d1, d2, equal_var=False)
print(p)
# +
# Time and propagation histograms
num_bins = 100
colors = ['green', 'red', 'blue', 'yellow', 'pink', 'orange', 'cyan', 'magenta']
fig = figure(figsize=(8, 6))
grid(True)
n, bins, patches = hist(time_data, num_bins, normed=1, facecolor=colors[0], alpha=0.5)
fig = figure(figsize=(8, 6))
grid(True)
n, bins, patches = hist(propagation_data, num_bins, normed=1, facecolor=colors[1], alpha=0.5)
# fig = figure(figsize=(8, 6))
# ax = fig.add_subplot(111)
# for i in range(0, 4):
# x = Data[Data[:,0] == i][:,7]
# n, bins, patches = hist(x, num_bins, normed=1, facecolor=colors[i%len(colors)], alpha=0.5, label='Level ' + str(i))
# xlim([0, 3000])
# xlabel('Visits')
# legend()
# grid(True)
# with PdfPages('visits_0_3.pdf') as pdf:
# pdf.savefig(fig)
# plt.show()
# fig = figure(figsize=(8, 6))
# ax = fig.add_subplot(111)
# for i in range(4, 9):
# x = Data[Data[:,0] == i][:,7]
# n, bins, patches = hist(x, num_bins, normed=1, facecolor=colors[i%len(colors)], alpha=0.5, label='Level ' + str(i))
# legend()
# xlim([0,30000])
# xlabel('Visits')
# grid(True)
# with PdfPages('visits_5_8.pdf') as pdf:
# pdf.savefig(fig)
# plt.show()
# +
# Fixed variables
fig = figure(figsize=(8, 6))
ax = fig.add_subplot(111)
for i in range(0, 9):
x = Data[Data[:,0] == i][:,3]
n, bins, patches = plt.hist(x, num_bins, normed=1, facecolor=colors[i%len(colors)], alpha=0.5, label='Level ' + str(i))
legend()
xlabel('Fixed variables')
grid(True)
with PdfPages('fixed_variables.pdf') as pdf:
pdf.savefig(fig)
show()
# +
# Learned literals
fig = figure(figsize=(8, 6))
ax = fig.add_subplot(111)
for i in range(9):
x = Data[Data[:,0] == i][:,4]
n, bins, patches = hist(x, num_bins, normed=1, facecolor=colors[i%len(colors)], alpha=0.5, label='Level ' + str(i))
legend()
xlabel('Learned literals')
grid(True)
with PdfPages('learned_literals.pdf') as pdf:
pdf.savefig(fig)
plt.show()
# +
# Propogations
fig = figure(figsize=(8, 6))
ax = fig.add_subplot(111)
for i in range(4, -1, -1):
x = Data[Data[:,0] == i][:,6]
n, bins, patches = hist(x, num_bins, normed=1, facecolor=colors[i%len(colors)], alpha=0.5, label='Level ' + str(i))
legend()
xlim([0,1500])
xlabel('Propagations')
grid(True)
with PdfPages('propagations_0_4.pdf') as pdf:
pdf.savefig(fig)
show()
fig = figure(figsize=(8, 6))
ax = fig.add_subplot(111)
for i in range(8, 4, -1):
x = Data[Data[:,0] == i][:,6]
n, bins, patches = hist(x, num_bins, normed=1, facecolor=colors[i%len(colors)], alpha=0.5, label='Level ' + str(i))
legend()
xlim([0,7500])
xlabel('Propagations')
grid(True)
with PdfPages('propagations_5_8.pdf') as pdf:
pdf.savefig(fig)
show()
# -
| Assignment-2/.ipynb_checkpoints/DataAnalysis-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Looping and File I/O: A Worked Example
# Here we have a worked example of a real task involving looping and reading/writing files.
# +
# Standard imports (i.e., Python builtins) go at the top
from os import listdir
import os.path as op
from glob import glob
# Now third-party imports
import pandas as pd
# Finally, any local imports would go here
# -
# Here's an example file for us to build our workflow around
example_file = '/home/data/nbc/Sutherland_HIVCB/dset/sub-193/func/sub-193_task-errorawareness_run-01_events.tsv'
# does the file exist?
op.isfile(example_file)
# what are the contents?
df = pd.read_csv(example_file)
df.head()
help(pd.read_csv)
# so what should we change? The file is tab-separated, so pass sep='\t'
df = pd.read_csv(example_file, sep='\t')
df.head()
print(example_file)
# +
in_folder = '/home/data/nbc/Sutherland_HIVCB/dset'
subject_folders = sorted(glob(op.join(in_folder, 'sub-*')))
# I'm quite sure that there are no files starting with 'sub-',
# since that would not fit with BIDS, but, just to be safe,
# we can reduce the list to folders only.
subject_folders = [sf for sf in subject_folders if op.isdir(sf)]
# -
# Let's look through the subject-specific folders
for subject_folder in subject_folders:
func_folder = op.join(subject_folder, 'func')
# And grab *all* errorawareness task events files
events_files = sorted(glob(op.join(func_folder, '*_task-errorawareness_*_events.tsv')))
for ev_file in events_files:
df = pd.read_csv(ev_file, sep='\t')
df.head()
ev_file
subject_folders
# So what do we want from the files?
# 1. All incorrect go trials
# 2. All incorrect nogo trials
print(list(df['trial_type_2'].unique()))
# We don't know if that specific file exhibits *all* possible values
ttypes = []
for subject_folder in subject_folders:
func_folder = op.join(subject_folder, 'func')
# And grab *all* errorawareness task events files
events_files = sorted(glob(op.join(func_folder, '*_task-errorawareness_*_events.tsv')))
for ev_file in events_files:
df = pd.read_csv(ev_file, sep='\t')
ttypes += list(df['trial_type_2'].unique())
ttypes = sorted(list(set(ttypes)))
print(ttypes)
# We know that incorrect go trials are listed as goIncorrect or goUnaware
# So let's grab those
go_incorrect_df = df.loc[df['trial_type_2'].isin(['goIncorrect'])]
go_incorrect_onsets = go_incorrect_df['onset'].values
go_incorrect_onsets
# And let's check incorrect nogo trials while we're at it
nogo_incorrect_df = df.loc[df['trial_type_2'].isin(['nogoIncorrectAware', 'nogoIncorrectUnaware'])]
nogo_incorrect_onsets = nogo_incorrect_df['onset'].values
# +
in_folder = '/home/data/nbc/Sutherland_HIVCB/dset'
subject_folders = sorted(glob(op.join(in_folder, 'sub-*')))
# I'm quite sure that there are no files starting with 'sub-',
# since that would not fit with BIDS, but, just to be safe,
# we can reduce the list to folders only.
subject_folders = [sf for sf in subject_folders if op.isdir(sf)]
# Now let's put these things together
# We need an output directory to save things to
out_dir = '/home/data/nbc/Sutherland_HIVCB/derivatives/afni-processing/preprocessed-data/'
for subject_folder in subject_folders:
subject_id = op.basename(subject_folder)
print('Processing {}'.format(subject_id))
func_folder = op.join(subject_folder, 'func')
# And grab *all* errorawareness task events files
events_files = sorted(glob(op.join(func_folder, '*_task-errorawareness_*_events.tsv')))
out_sub_dir = op.join(out_dir, subject_id, 'func')
# Make lists to place all lines in
go_incorrect_onsets_text = []
nogo_incorrect_onsets_text = []
nogo_aware_onsets_text = []
nogo_unaware_onsets_text = []
nogo_correct_onsets_text = []
for ev_file in events_files:
df = pd.read_csv(ev_file, sep='\t')
# Grab incorrect go trials, which are labeled as goIncorrect
go_incorrect_df = df.loc[df['trial_type_2'].isin(['goIncorrect'])]
go_incorrect_onsets = go_incorrect_df['onset'].values
if go_incorrect_onsets.size == 0:
go_incorrect_onsets = ['*']
go_incorrect_onsets_text.append('\t'.join([str(num) for num in go_incorrect_onsets]))
# Grab incorrect nogo trials, which are labeled as nogoIncorrectAware or nogoIncorrectUnaware
nogo_incorrect_df = df.loc[df['trial_type_2'].isin(['nogoIncorrectAware', 'nogoIncorrectUnaware'])]
nogo_incorrect_onsets = nogo_incorrect_df['onset'].values
if nogo_incorrect_onsets.size == 0:
nogo_incorrect_onsets = ['*']
nogo_incorrect_onsets_text.append('\t'.join([str(num) for num in nogo_incorrect_onsets]))
# Grab incorrect nogo aware trials, which are labeled as nogoIncorrectAware
nogo_aware_df = df.loc[df['trial_type_2'].isin(['nogoIncorrectAware'])]
nogo_aware_onsets = nogo_aware_df['onset'].values
if nogo_aware_onsets.size == 0:
nogo_aware_onsets = ['*']
nogo_aware_onsets_text.append('\t'.join([str(num) for num in nogo_aware_onsets]))
# Grab incorrect nogo unaware trials, which are labeled as nogoIncorrectUnaware
nogo_unaware_df = df.loc[df['trial_type_2'].isin(['nogoIncorrectUnaware'])]
nogo_unaware_onsets = nogo_unaware_df['onset'].values
if nogo_unaware_onsets.size == 0:
nogo_unaware_onsets = ['*']
nogo_unaware_onsets_text.append('\t'.join([str(num) for num in nogo_unaware_onsets]))
# Grab correct nogo trials, which are labeled as nogoCorrect
nogo_correct_df = df.loc[df['trial_type_2'].isin(['nogoCorrect'])]
nogo_correct_onsets = nogo_correct_df['onset'].values
if nogo_correct_onsets.size == 0:
nogo_correct_onsets = ['*']
nogo_correct_onsets_text.append('\t'.join([str(num) for num in nogo_correct_onsets]))
#different line for each run
# Merge list of single-line strings into multiline string
go_incorrect_onsets_text = '\n'.join(go_incorrect_onsets_text)
nogo_incorrect_onsets_text = '\n'.join(nogo_incorrect_onsets_text)
nogo_aware_onsets_text = '\n'.join(nogo_aware_onsets_text)
nogo_unaware_onsets_text = '\n'.join(nogo_unaware_onsets_text)
nogo_correct_onsets_text = '\n'.join(nogo_correct_onsets_text)
try:
#different file for each event type
go_incorrect_file = op.join(out_sub_dir, 'go_incorrect.1D')
with open(go_incorrect_file, 'w') as fo:
fo.write(go_incorrect_onsets_text)
nogo_incorrect_file = op.join(out_sub_dir, 'nogo_incorrect.1D')
with open(nogo_incorrect_file, 'w') as fo:
fo.write(nogo_incorrect_onsets_text)
nogo_aware_file = op.join(out_sub_dir, 'nogo_aware.1D')
with open(nogo_aware_file, 'w') as fo:
fo.write(nogo_aware_onsets_text)
nogo_unaware_file = op.join(out_sub_dir, 'nogo_unaware.1D')
with open(nogo_unaware_file, 'w') as fo:
fo.write(nogo_unaware_onsets_text)
nogo_correct_file = op.join(out_sub_dir, 'nogo_correct.1D')
with open(nogo_correct_file, 'w') as fo:
fo.write(nogo_correct_onsets_text)
    except OSError:
        # the output folder for this subject does not exist
        print("missing subject")
# -
| error_awareness_task/make_decon_regressors/additional_event_files.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Table of Contents
# [1. Model Preparation](#1.-Model-Preperation)
# <br>
# * [1.1 Reviewing, Splitting data set](#1.1-Reviewing,-splitting-dataset-into-7:3-for-training-and-testing.)
# * [1.2 Plotting features against target feature](#1.2-Plot-to-compare-all-features-to-target-feature-to-help-make-decisions-to-keep-for-the-models.)
# * [1.2.1 Plotting datetime feature against target feature](#Plotting-datetime-feature-against-target-feature)
# * [1.2.2 Plotting numerical features against target feature](#Plotting-numerical-features-against-target-feature)
# * [1.2.3 Plotting categorical features against target feature](#Plotting-categorical-features-against-target-feature)
# * [1.3. Summary of all features](#1.3.-Summary-of-all-features)
# * [1.3.1 Numerical Features](#Numerical-Features)
# * [1.3.2 Categorical Features](#Categorical-Features)
# * [2. Linear Regression & Random Forest & Decision Trees & K-Nearest-Neighbour](#2.-Linear-Regression-&-Random-Forest-&-Decision-Trees-&-K-Nearest-Neighbour)
# * [3. Route model and taking the proportion of the prediction to calculate a journey time for the user](#3.-Route-model-and-taking-the-proportion-of-the-prediction-to-calculate-a-journey-time-for-the-user.)
# * [3.1 Calculating the proportion of each stop from the overall trip](#3.1-Calculating-the-proportion-of-each-stop-from-the-overall-trip.)
# * [4. Random Forest & Decision Trees](#4.-Random-Forest-&-Decision-Trees)
# * [5. Stop pair model](#5.-Stop-pair-model)
# * [5.1 First version of paired stop approach](#5.1-First-version-of-paired-stop-approach)
# * [5.2.1 Setting up for 46a stop pair models using first approach](#5.2.1-Setting-up-for-46a-stop-pair-models-using-first-approach)
# * [5.3 Stop pair based on entire leavetimes](#5.3-Stop-pair-based-on-entire-leavetimes)
# * [6. Final Stop Pair Model](#6.-Final-Stop-Pair-Model)
# Establishing a connection with sqlite database
# +
# import boto3
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
import sqlite3
import pickle
import time
# from sagemaker import et_execution_role
from patsy import dmatrices
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn import metrics
from math import log
from statistics import stdev
from statistics import mode
# ignore warnings
import warnings
warnings.filterwarnings('ignore')
# Connecting to s3
# role = get_execution_role()
# bucket='sagemaker-studio-520298385440-7in8n1t299'
# data_key = 'route_46a.feather'
# data_location = 's3://{}/{}'.format(bucket, data_key)
# -
# def function to create connection to db
def create_connection(db_file):
"""
create a database connection to the SQLite database specified by db_file
    :param db_file: database file
:return: Connection object or None
"""
conn = None
try:
conn = sqlite3.connect(db_file)
return conn
    except sqlite3.Error as e:
print(e)
return conn
# create connection to db
db_file = "C:/Users/fayea/UCD/ResearchPracticum/Data-Analytics-CityRoute/dublinbus.db"
conn = create_connection(db_file)
# initialise query
query = """
SELECT leavetimes.*, weather.*
FROM leavetimes, weather
WHERE TRIPID in
(SELECT TRIPID
FROM trips
WHERE LINEID = '46A' AND DIRECTION = '1')
AND leavetimes.DAYOFSERVICE = weather.dt;
"""
# execute query and read into dataframe
query_df = pd.read_sql(query, conn)
# # 1. Model Preparation
# Loading file
df = query_df
# Overwrite with the cached feather copy of the same query result
df = pd.read_feather('route46a.feather')
# ## 1.1 Reviewing, splitting dataset into 7:3 for training and testing.
df.head(5)
df.tail(5)
# Missing values
df.isnull().sum()
# Unique types for each feature
df.nunique()
# Check datatypes
df.dtypes
# Rows and columns
df.shape
df.describe().T
# **Review so far:**
# <br>
# There are no more missing values and the constant columns have been removed.
# * Remove index and dt.
# * Investigate level_0.
# * Convert the following to categorical: DAYOFWEEK, MONTHOFSERVICE, PROGRNUMBER, STOPPOINTID, VEHICLEID, IS_HOLIDAY, IS_WEEKDAY, TRIPID, weather_id, weather_main, weather_description
# * We have data for most of the days of the year and for each month.
#
df = df.drop(['level_0', 'dt','index'], axis=1)
# Sorting by trip then dayofservice
df['PROGRNUMBER'] = df['PROGRNUMBER'].astype('int64')
df = df.sort_values(by=['TRIPID', 'DAYOFSERVICE', 'PROGRNUMBER'])
# +
# Creating features
categorical_features = ['DAYOFWEEK', 'MONTHOFSERVICE', 'PROGRNUMBER', 'STOPPOINTID', 'PREVIOUS_STOPPOINTID',
'IS_HOLIDAY', 'IS_WEEKDAY', 'TRIPID', 'VEHICLEID', 'weather_id', 'weather_main', 'weather_description']
datetime_features = ['DAYOFSERVICE']
numerical_features = ['PLANNEDTIME_ARR', 'ACTUALTIME_ARR', 'PLANNEDTIME_DEP', 'ACTUALTIME_DEP',
'DWELLTIME', 'PLANNEDTIME_TRAVEL', 'temp', 'pressure', 'humidity', 'wind_speed', 'wind_deg', 'rain_1h', 'clouds_all']
target_feat = 'ACTUALTIME_TRAVEL'
# +
# Converting object to categorical
for column in categorical_features:
df[column] = df[column].astype('category')
# Converting dayofservice to datetime
df['DAYOFSERVICE'] = pd.to_datetime(df['DAYOFSERVICE'])
# -
# Replacing PROGRNUMBER equal to 1 of ACTUALTIME_TRAVEL with 0
df.loc[df['PROGRNUMBER'] == 1, 'ACTUALTIME_TRAVEL'] = 0
df.loc[df['PROGRNUMBER'] == 1, 'PLANNEDTIME_TRAVEL'] = 0
df.loc[df['PLANNEDTIME_TRAVEL'] < 0, 'PLANNEDTIME_TRAVEL'] = 0
df.loc[df['ACTUALTIME_TRAVEL'] < 0, 'ACTUALTIME_TRAVEL'] = 0
df['HOUROFSERVICE'] = [int(time.strftime("%H", time.gmtime(sec))) for sec in df['ACTUALTIME_DEP']]
df['eve_rushour'] = [1 if 16 <= hour <= 19 else 0 for hour in df['HOUROFSERVICE']]
df['morn_rushour'] = [1 if 7 <= hour <= 9 else 0 for hour in df['HOUROFSERVICE']]
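# The rush-hour flags hinge on converting seconds-after-midnight into an hour of day via `time.gmtime`. A minimal sketch of that conversion on a hypothetical departure time:

```python
import time

# 61200 seconds after midnight is 17:00
dep_seconds = 61200
hour = int(time.strftime("%H", time.gmtime(dep_seconds)))
eve_rushour = 1 if 16 <= hour <= 19 else 0

print(hour, eve_rushour)  # 17 1
```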
df = df.reset_index()
df.to_feather('route46a.feather')
# +
# Making new feature for previous stoppointid and let those with PROGRNUMBER = 1 to 0
# df['PREVIOUS_STOPPOINTID'] = df['STOPPOINTID'].shift()
# first_stop = {'0':'0'}
# df['PREVIOUS_STOPPOINTID'] = df['PREVIOUS_STOPPOINTID'].cat.add_categories(first_stop)
# df.loc[df['PROGRNUMBER'] == '1', 'PREVIOUS_STOPPOINTID'] = '0'
# -
# <br><br>
# Setting the target feature as _y and x_ as the remaining features in the dataframe.
# <br><br>
df = df.set_index(np.random.permutation(df.index))
# sort the resulting random index
df.sort_index(inplace=True)
# +
# Creating y and x axis
target_feature = df['ACTUALTIME_TRAVEL']
y = pd.DataFrame(target_feature)
X = df.drop(['ACTUALTIME_TRAVEL'], axis=1)
# Splitting dataset for train and testing data by 70/30
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)
# Printing shape of the new split data
print("The original range is: ",df.shape[0])
print("The training range (70%):\t rows 0 to", round(X_train.shape[0]))
print("The test range (30%): \t rows", round(X_train.shape[0]), "to", round(X_train.shape[0]) + X_test.shape[0])
# -
# ## 1.2 Plot to compare all features to target feature to help make decisions to keep for the models.
# #### Plotting datetime feature against target feature
# Plot datetime feature against target feature
X_train.DAYOFSERVICE = pd.to_numeric(X_train.DAYOFSERVICE)
df_temp = pd.concat([X_train['DAYOFSERVICE'], y_train], axis=1)
correlation_dt = df_temp[['DAYOFSERVICE', 'ACTUALTIME_TRAVEL']].corr(method='pearson')
correlation_dt
print('PLOT: DAYOFSERVICE')
df_temp.plot(kind='scatter', x='DAYOFSERVICE', y='ACTUALTIME_TRAVEL', label = "%.3f" % df_temp[['ACTUALTIME_TRAVEL', 'DAYOFSERVICE']].corr().to_numpy()[0,1], figsize=(15, 8))
plt.show()
# #### Plotting numerical features against target feature
for column in numerical_features:
df_temp = pd.concat([X_train[column], y_train], axis=1)
correlation_dt = df_temp[[column, 'ACTUALTIME_TRAVEL']].corr(method='pearson')
print('\n',correlation_dt)
for column in numerical_features:
df_temp = pd.concat([X_train[column], y_train], axis=1)
correlation_dt = df_temp[[column, 'ACTUALTIME_TRAVEL']].corr(method='spearman')
print('\n',correlation_dt)
# #### Pearson correlation method
print('NUMERICAL FEATURES: PEARSON')
for column in numerical_features:
df_temp = pd.concat([X_train[column], y_train], axis=1)
df_temp.plot(kind='scatter', x=column, y='ACTUALTIME_TRAVEL', label = "%.3f" % df_temp[['ACTUALTIME_TRAVEL', column]].corr(method='pearson').to_numpy()[0,1], figsize=(12, 8))
plt.show()
# #### Spearman correlation method
print('NUMERICAL FEATURES: SPEARMAN')
for column in numerical_features:
df_temp = pd.concat([X_train[column], y_train], axis=1)
df_temp.plot(kind='scatter', x=column, y='ACTUALTIME_TRAVEL', label = "%.3f" % df_temp[['ACTUALTIME_TRAVEL', column]].corr(method='spearman').to_numpy()[0,1], figsize=(12, 8))
plt.show()
print('NUMERICAL FEATURES: USING CORR()')
df.corr()['ACTUALTIME_TRAVEL'][:]
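# The pearson-vs-spearman comparison above can be illustrated on toy data: spearman is just pearson computed on ranks, so it picks up monotonic but non-linear trends. A numpy-only sketch (toy values are hypothetical):

```python
import numpy as np

# Toy data: y grows monotonically with x, but not linearly
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = x ** 5

# Pearson measures linear association
pearson = np.corrcoef(x, y)[0, 1]

# Spearman is Pearson on the ranks, so it captures any monotonic trend
rx = np.argsort(np.argsort(x))
ry = np.argsort(np.argsort(y))
spearman = np.corrcoef(rx, ry)[0, 1]

# spearman is exactly 1.0 here; pearson is noticeably below 1
print(round(pearson, 3), round(spearman, 3))
```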
df_numeric = df[numerical_features].copy()
for feature in df_numeric:
    df_numeric[feature] = np.log(df_numeric[feature])
df_numeric['ACTUALTIME_TRAVEL'] = np.log(df['ACTUALTIME_TRAVEL'])
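# One caveat with np.log here: rows where ACTUALTIME_TRAVEL was set to 0 earlier map to -inf. A hedged sketch of the issue and one common workaround (np.log1p); the notebook itself uses plain np.log:

```python
import numpy as np

travel_times = np.array([0.0, 10.0, 100.0])

# np.log maps 0 to -inf, which breaks plots and models downstream
with np.errstate(divide='ignore'):
    logged = np.log(travel_times)

# np.log1p (log(1 + x)) keeps zeros finite while preserving the shape
logged_safe = np.log1p(travel_times)

print(logged[0], logged_safe[0])  # -inf 0.0
```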
# +
print('NUMERICAL FEATURES USING LOG DATA')
# Creating y and x axis
target_feature_numeric = df_numeric['ACTUALTIME_TRAVEL']
y_numeric = pd.DataFrame(target_feature_numeric)
X_numeric = df_numeric.drop(['ACTUALTIME_TRAVEL'], axis=1)
# Splitting dataset for train and testing data by 70/30
X_train_numeric, X_test_numeric, y_train_numeric, y_test_numeric = train_test_split(X_numeric, y_numeric, test_size=0.3, random_state=1)
# Printing shape of the new split data
print("The original range is: ",df.shape[0])
print("The training range (70%):\t rows 0 to", round(X_train_numeric.shape[0]))
print("The test range (30%): \t rows", round(X_train_numeric.shape[0]), "to", round(X_train_numeric.shape[0]) + X_test_numeric.shape[0])
for column in numerical_features:
df_temp = pd.concat([X_train_numeric[column], y_train_numeric], axis=1)
df_temp.plot(kind='scatter', x=column, y='ACTUALTIME_TRAVEL', label = "%.3f" % df_temp[['ACTUALTIME_TRAVEL', column]].corr(method='spearman').to_numpy()[0,1], figsize=(12, 8))
plt.show()
# -
# #### Plotting categorical features against target feature
# +
year_features = ['eve_rushour', 'morn_rushour','DAYOFWEEK', 'IS_HOLIDAY', 'IS_WEEKDAY', 'MONTHOFSERVICE', 'weather_id', 'weather_main', 'weather_description']
for feature in year_features:
print(feature)
df_temp = pd.concat([X_train, y_train], axis=1)
unique = df_temp[feature].unique()
list_average = []
for value in unique:
list_values = df_temp[df_temp[feature]== value]['ACTUALTIME_TRAVEL'].tolist()
length_list = len(list_values)
average = sum(list_values)/length_list
list_average += [average]
# print(f'Sum of values / list of values: \n {sum(list_values)} / {length_list}')
# print(f'Average ACTUALTIME_TRAVEL: {average}, \n')
# taken from https://pythonspot.com/matplotlib-bar-chart/
y_pos = np.arange(len(unique))
plt.bar(y_pos, list_average, align='center')
plt.xticks(y_pos, unique)
plt.ylabel('Usage')
plt.title(feature)
plt.xticks(rotation=90)
plt.show()
# -
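# The per-category averages computed manually above (unique values, then sum/len) can also be obtained with pandas groupby; a sketch on hypothetical toy data:

```python
import pandas as pd

# Hypothetical miniature of the training frame
df_toy = pd.DataFrame({
    'DAYOFWEEK': ['Mon', 'Mon', 'Sat', 'Sat'],
    'ACTUALTIME_TRAVEL': [30.0, 50.0, 20.0, 24.0],
})

# Mean target per category, replacing the manual unique()/sum()/len() loop
avg = df_toy.groupby('DAYOFWEEK')['ACTUALTIME_TRAVEL'].mean()

print(avg['Mon'], avg['Sat'])  # 40.0 22.0
```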
# Average time for each vehicle id
df_temp = pd.concat([X_train, y_train], axis=1)
vehicleid = df_temp['VEHICLEID'].unique().tolist()
for id_ in vehicleid:
print(f'VEHICLEID: {id_}')
list_values = df_temp[df_temp['VEHICLEID']== id_]['ACTUALTIME_TRAVEL'].tolist()
length_list = len(list_values)
average = sum(list_values)/length_list
print(f'Average ACTUALTIME_TRAVEL: {average} \n')
# +
# Making dummy variables for categorical
cat = ['DAYOFWEEK', 'MONTHOFSERVICE', 'PROGRNUMBER', 'STOPPOINTID', 'IS_HOLIDAY', 'IS_WEEKDAY', 'weather_id', 'weather_main', 'weather_description']
df_temp = pd.concat([X_train, y_train], axis=1)
df_copy = df_temp.copy()
df_copy = df_copy[cat]
df_copy = pd.get_dummies(df_copy)
df_copy = pd.concat([df_copy, y_train], axis=1)
categorical_corr = df_copy.corr()['ACTUALTIME_TRAVEL'][:]
# -
with pd.option_context('display.max_rows', None, 'display.max_columns', None): # more options can be specified also
print(categorical_corr)
categorical_list = categorical_corr[categorical_corr > 0.04].index.tolist()
categorical_list.remove('ACTUALTIME_TRAVEL')
categorical_list
# ## 1.3. Summary of all features
# <br><br>
# #### Numerical Features
# <br><br>
#
# **DayOfService:**
# * The correlation to the target feature is very low of 0.03806.
# * Don't see it being a useful feature for the target feature.
# * Plot represents a straight line, which suggests little to no correlation.
# * Conclusion: dropped because of the low correlation score.
#
# **PlannedTime_Arr:**
# * There is very low correlation against the target feature though it gets better using spearman correlation.
# * After logging the data, the correlation plot did not make a huge difference when using the spearman method to plot it for the second time.
# * The pearson and spearman plots pre-log suggest little correlation, as each is a continuous straight line. However, this alone shouldn't mean it should be dropped.
# * Most target-feature values fall below 10 and barely change as the planned arrival time increases. This makes sense: the target feature is a difference between times, so a weak relationship with the raw arrival time is expected.
# * After logging the data, the plot is more spread out instead of a straight line, but the spearman correlation score stays similarly low, differing by only about 0.02.
# * Conclusion: this will be dropped.
#
# **ActualTime_Arr:**
# * Compared to Planned time arrival feature, the pearson correlation score is poorer but the spearman scores are more similar pre log.
# * It is similar to planned time arrival in that the plot represents a straight line, that suggests a poor relationship with the target feature.
# * After logging the data, it is found that the plot is more spread out. The score using spearman is not much different pre logging the data.
# * At first it seemed unwise to drop this feature, as it could help predict the travel time to the next stop.
# * Conclusion: on balance, this will be dropped.
#
# **PlannedTime_Dep:**
# * Planned time departure has little correlation with the target feature after looking at spearman and pearsons.
# * It doesn't have a linear relationship and the straight line on the plot of both methods proves this.
# * However, when plotted using the logged values we see that the correlation score hasn't changed but the data is more spread out.
# * This doesn't change the relationship much, however.
# * It initially seemed worth keeping: the planned departure would sit relatively close to the actual departure, even though it is only an estimate.
# * Conclusion: despite this, it will be dropped.
#
# **ActualTime_Dep:**
# * Actual time departure is again, more or less the same. It represents the departure for these times at a particular stop to go to the next stop. It is strange that the correlation is so low even after logging the data but it would make sense as you wouldn't expect there to be a linear relationship.
# * The plot is similar to the rest of the previous features mentioned so far.
# * However, it will still be kept because I feel it would still be a useful feature for predicting a time in seconds.
# * By taking the actual time departure for a particular stop it may help.
# * Conclusion: this will be dropped.
#
# **Dwell Time:**
# * Dwell time has a 0.03 correlation score with the target feature. The graph suggests that where dwell time equals 0, the target feature time increases. This might point to busy periods: rush hour, when a full bus dwells longer?
# * Plotting against the target feature after logging the data gives similar scores using the spearman correlation method. However, the graph differs from the pre-log plot: the points are more grouped together.
# * Although the score is fairer than for the previous features, it was not enough to justify keeping it.
# * Conclusion: dropped.
#
# **PlannedTime_Travel:**
# * When plotting using the pearson correlation method, it gave a correlation of 0.2. This is the highest correlation so far, and a small linear relationship is visible.
# * As the planned travel time increases, so does the target feature, which hints at that slight linear relationship.
# * Using spearman's method gave a 0.7 score, a good indication that the two features have a monotonic relationship.
# * Even so, this feature will be dropped.
#
# **Temp:**
# * Temp has a negative 0.009 correlation with the target feature and an even poorer linear relationship at -.002.
# * This indicates a poor linear/monotonic relationship and it will not serve useful for the model.
# * The graph plots does not give anymore useful information that would give further evidence that it should be kept.
# * Conclusion: drop.
#
# **Pressure:**
# * It also has a negative linear relationship with the target feature.
# * When looking at the graph plots for both spearman and pearsons, it does not give any further insights.
# * For this reason, this feature will be dropped.
#
# **Humidity:**
# * Humidity does not have a strong relationship with the target feature, be it linear or monotonic.
# * The reason being the correlation using both methods fell < 0.00.
# * Unfortunately, the graph does not represent anything useful either.
# * The logged-data plots show a slight difference, but it is not significant enough to keep this feature, as no distinct relationship is visible.
# * Conclusion: drop.
#
# **Windspeed:**
# * No linear relationship.
# * Indicates a small monotonic relationship.
# * This means that as the windspeed value increases, the value of the target feature tends to be higher as well.
# * But a spearman correlation of 0.01 is not strong enough of a feature to keep.
# * Conclusion: drop
#
# **Wind_Deg:**
# * This feature will be dropped immediately as both correlations are effectively zero.
#
# **Rain_1H:**
# * It doesn't have a strong linear relationship, but the spearman correlation shows some promising results once the data has been logged.
#
#
# <br><br>
# #### Categorical Features
# <br><br>
# **DayOfWeek:**
# * In the graph we see the actual time travel increasing during weekdays and slowly the travel time is less during weekends.
# * This suggests a relationship between the days of the week and the target feature in which weekdays have a higher tendency for the actualtime travel feature to be higher.
# * Conclusion: this will be kept.
#
# **MonthofService:**
# * In the graph, we don't really see a connection between each month against the target feature even if it is in order.
# * The overall actual travel time is higher in february before it dips, then rising during winter season.
# * The correlation score seems to be poor also for each month.
# * This feature will still be kept.
#
# **Progrnumber:**
# * Most progrnumbers will be dropped as a lot of the correlations are <0.00.
# * For this reason, this feature will be dropped.
#
# **StoppointID:**
# * Similarly to progrnumbers, there are a lot of low correlations falling <0.00.
# * Most stoppoint numbers are <0.00 correlation.
# * This indicates a very low relationship with the target feature.
# * For this reason, this feature will be dropped, except for those with a correlation > 0.04
#
# **Is_Holiday:**
# * After analyzing the graph, we see a relationship between the target feature and whether or not the time falls under a holiday date (non-school holiday).
# * If it is not a holiday, the actual travel time increases.
# * If it is a holiday, the actual time travel decreases.
# * This means that less people are using public transport if it is a holiday date.
# * For this reason, this feature will be kept.
#
# **Is_Weekday:**
# * Like Is_Holiday, we see a relationship between the target feature and whether or not the time is during a weekday or not.
# * We see a contrast between the two values in which 1, being a weekday, has a higher actual time travel, vice versa.
# * For this reason, it is a good indication of a relationship to the target feature.
# * Therefore, this feature will be kept.
#
# **VehicleID:**
# * When looking at the different averages, we see that the average differences are not big.
# * For this reason, it may be best to drop this feature because it doesn't give any indication it would be a useful feature to help the prediction models.
#
# ## 1.4 Cleaning up features
# ### Setting low correlation features - keep
# Categorical features
low_corr_categorical = ['DAYOFWEEK', 'MONTHOFSERVICE', 'IS_HOLIDAY', 'IS_WEEKDAY']
# ### Setting low correlation features - drop
# +
# Numerical features
low_corr_numerical = ['PLANNEDTIME_ARR', 'PLANNEDTIME_DEP', 'ACTUALTIME_ARR', 'ACTUALTIME_DEP','PLANNEDTIME_TRAVEL']
low_corr = ['DAYOFSERVICE', 'VEHICLEID', 'TRIPID', 'STOPPOINTID', 'PREVIOUS_STOPPOINTID', 'PROGRNUMBER', 'temp', 'pressure', 'humidity',
'wind_deg', 'weather_id', 'weather_description', 'clouds_all', 'wind_speed', 'PREVIOUS_STOPPOINTID', 'PLANNEDTIME_ARR', 'PLANNEDTIME_DEP', 'ACTUALTIME_ARR', 'ACTUALTIME_DEP',
'PLANNEDTIME_TRAVEL', 'DWELLTIME']
# -
# ### Setting high correlation features
# Numerical features
high_corr_numerical = ['DWELLTIME', 'PLANNEDTIME_TRAVEL']
# ### Dropping features & setting dummy features
df_copy = df.copy()
df_copy = df_copy.drop(low_corr, 1)
df_copy = pd.get_dummies(df_copy)
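# What pd.get_dummies does to the remaining categorical columns, sketched on a hypothetical one-column frame (column cast to int for a version-stable printout):

```python
import pandas as pd

toy = pd.DataFrame({'IS_HOLIDAY': ['0', '1', '0']})

# One dummy column per category value, set where the row takes that value
dummies = pd.get_dummies(toy)

print(list(dummies.columns))  # ['IS_HOLIDAY_0', 'IS_HOLIDAY_1']
print(dummies['IS_HOLIDAY_1'].astype(int).tolist())  # [0, 1, 0]
```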
# ### Training & Testing data
# All features
features = df_copy.columns.tolist()
features
datas = {'ACTUALTIME_TRAVEL': df_copy['ACTUALTIME_TRAVEL']}
y = pd.DataFrame(data=datas)
X = df_copy.drop(['ACTUALTIME_TRAVEL'],1)
# +
# Splitting the dataset into 2 datasets:
# Split the dataset into two datasets: 70% training and 30% test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,random_state=1)
print("The Original range of the dataset: ",df.shape[0])
print("The Training range taken from dataset: (70%): rows 0 to", round(X_train.shape[0]))
print("The Test range taken from dataset: (30%): rows", round(X_train.shape[0]), "to", round(X_train.shape[0]) + X_test.shape[0])
# -
print("\nDescriptive features in X:\n", X_train.head(5))
print("\nTarget feature in y:\n", y_train.head(5))
# The X_train printout below shows the rows are no longer in order after the split.
# In the next cell, the indexes of the training and test splits are reset.
X_train.head(5)
# Using .reset_index
# We see that they are in order again.
X_train.reset_index(drop=True, inplace=True)
y_train.reset_index(drop=True, inplace=True)
X_test.reset_index(drop=True, inplace=True)
y_test.reset_index(drop=True, inplace=True)
X_train.head(10)
# ***
# <br><br>
# # 2. Linear Regression & Random Forest & Decision Trees & K Nearest Neighbour
class EvaluationMetrics:
def __init__(self, dataframe, train_route):
self.dataframe = dataframe
self.train_route = train_route
self.list_stops = self.train_route.STOPPOINTID.unique().tolist()
self.linear_model = {}
self.rf_model = {}
self.dt_model = {}
self.knn_model = {}
def training_models(self):
for previous, current in zip(self.list_stops, self.list_stops[1:]):
df_stopid = self.dataframe[(self.dataframe['STOPPOINTID']==current) & (self.dataframe['PREVIOUS_STOPPOINTID']==previous)]
df_stopid = df_stopid.drop(low_corr, 1)
df_stopid = pd.get_dummies(df_stopid)
y = pd.DataFrame(df_stopid['ACTUALTIME_TRAVEL'])
df_stopid = df_stopid.drop('ACTUALTIME_TRAVEL', 1)
rfm = RandomForestRegressor(n_estimators=40, oob_score=True, random_state=1)
dtc_4 = DecisionTreeRegressor(max_depth=4, random_state=1)
knn = KNeighborsRegressor()
# Training models
linear_model = LinearRegression().fit(df_stopid, y)
rf_model = rfm.fit(df_stopid, y)
dt_model = dtc_4.fit(df_stopid, y)
knn_model = knn.fit(df_stopid, y)
# Storing models in dictionary
self.linear_model[current + '_' + previous] = linear_model
self.rf_model[current + '_' + previous] = rf_model
self.dt_model[current + '_' + previous] = dt_model
self.knn_model[current + '_' + previous] = knn_model
print('Models trained!')
def make_predictions(self, to_predict):
self.dataframe = to_predict
# Setting up list for predictions
        # One prediction slot per row of the frame to predict
        self.linear_pred = np.zeros(shape=(self.dataframe.shape[0], 1))
        self.rf_model_pred = np.zeros(shape=(self.dataframe.shape[0], 1))
        self.dt_model_pred = np.zeros(shape=(self.dataframe.shape[0], 1))
        self.knn_model_pred = np.zeros(shape=(self.dataframe.shape[0], 1))
predictions_1 = []
predictions_2 = []
predictions_3 = []
predictions_4 = []
index = 0
for previous, current in zip(self.list_stops, self.list_stops[1:]):
if previous == '807' and current == '817':
continue
            predictions_1 += [self.linear_model[current + '_' + previous].predict(self.dataframe.iloc[[index]])]
            predictions_2 += [self.rf_model[current + '_' + previous].predict(self.dataframe.iloc[[index]])]
            predictions_3 += [self.dt_model[current + '_' + previous].predict(self.dataframe.iloc[[index]])]
            predictions_4 += [self.knn_model[current + '_' + previous].predict(self.dataframe.iloc[[index]])]
index += 1
        for pred in range(len(predictions_1)):
self.linear_pred[pred] = predictions_1[pred][0][0]
self.rf_model_pred[pred] = predictions_2[pred][0]
self.dt_model_pred[pred] = predictions_3[pred][0]
self.knn_model_pred[pred] = predictions_4[pred][0]
self.master_prediction_list = [self.linear_pred, self.rf_model_pred, self.dt_model_pred, self.knn_model_pred]
return self.master_prediction_list
def get_evalmetrics(self, prediction_list, actual_predictions):
self.prediction_list = prediction_list
self.actual_predictions = actual_predictions
for model in self.prediction_list:
print("RMSE Score: ", np.sqrt(metrics.mean_squared_error(self.actual_predictions, model)))
print("MSE Score: ", metrics.mean_squared_error(self.actual_predictions, model))
print("MAE Score: ", metrics.mean_absolute_error(self.actual_predictions, model))
print("R2 Score: ", metrics.r2_score(self.actual_predictions, model))
actual_total_linear = sum(self.actual_predictions.ACTUALTIME_TRAVEL)
predicted_total_linear = sum(model)
            print(f'\nActual total journey time: {actual_total_linear} seconds.')
print(f'Predicted total journey time: {predicted_total_linear[0]} seconds')
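# The metrics printed throughout (RMSE, MSE, MAE) reduce to short numpy formulas; a sketch on hypothetical actual-vs-predicted travel times:

```python
import numpy as np

# Hypothetical actual vs predicted travel times (seconds)
y_true = np.array([3.0, 5.0, 2.0])
y_pred = np.array([2.5, 5.0, 3.0])

mae = np.mean(np.abs(y_true - y_pred))    # mean absolute error
mse = np.mean((y_true - y_pred) ** 2)     # mean squared error
rmse = np.sqrt(mse)                       # root mean squared error

print(mae)  # 0.5
```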
# +
routesample_1 = df[(df['TRIPID'] == '8591174') & (df['DAYOFSERVICE']=='2018-12-23')]
sample = routesample_1
routesample_1 = routesample_1.drop(low_corr, 1)
routesample_1 = pd.get_dummies(routesample_1)
actual_routesample_1 = pd.DataFrame(routesample_1['ACTUALTIME_TRAVEL'])
routesample_1 = routesample_1.drop('ACTUALTIME_TRAVEL', 1)
test = EvaluationMetrics(df, sample)
trained_models = test.training_models()
predictions = test.make_predictions(routesample_1)
test.get_evalmetrics(predictions, actual_routesample_1)
# +
# Setting up route samples
routesample_1 = df[(df['TRIPID'] == '8591174') & (df['DAYOFSERVICE']=='2018-12-23')]
routesample_2 = df[(df['TRIPID'] == '6106738') & (df['DAYOFSERVICE']=='2018-01-19')]
# List of stops for route 46a
stops_46a = routesample_1.STOPPOINTID.tolist()
# Setting up dummy features
routesample_1 = routesample_1.drop(low_corr, 1)
routesample_1 = pd.get_dummies(routesample_1)
actual_routesample_1 = pd.DataFrame(routesample_1['ACTUALTIME_TRAVEL'])
routesample_1 = routesample_1.drop('ACTUALTIME_TRAVEL', 1)
routesample_2 = routesample_2.drop(low_corr, 1)
routesample_2 = pd.get_dummies(routesample_2)
actual_routesample_2 = pd.DataFrame(routesample_2['ACTUALTIME_TRAVEL'])
routesample_2 = routesample_2.drop('ACTUALTIME_TRAVEL', 1)
# Setting up models for each model - two versions of training models
linear_model_v1 = {}
rf_model_v1 = {}
dt_model_v1 = {}
knn_model_v1 = {}
# Setting up list for predictions
linear_v1_pred = np.zeros(shape=(59,1))
linear_v2_pred = np.zeros(shape=(59,1))
rf_model_v1_pred = np.zeros(shape=(59,1))
rf_model_v2_pred = np.zeros(shape=(59,1))
dt_model_v1_pred = np.zeros(shape=(59,1))
dt_model_v2_pred = np.zeros(shape=(59,1))
knn_model_v1_pred = np.zeros(shape=(59,1))
knn_model_v2_pred = np.zeros(shape=(59,1))
# -
# <br><br>
# ## 2.1 Training without additional features - current stopid and previous stopid
for previous, current in zip(stops_46a, stops_46a[1:]):
df_stopid = df[(df['STOPPOINTID']==current) & (df['PREVIOUS_STOPPOINTID']==previous)]
df_stopid = df_stopid.drop(low_corr, 1)
df_stopid = pd.get_dummies(df_stopid)
y = pd.DataFrame(df_stopid['ACTUALTIME_TRAVEL'])
df_stopid = df_stopid.drop('ACTUALTIME_TRAVEL', 1)
rfm = RandomForestRegressor(n_estimators=40, oob_score=True, random_state=1)
dtc_4 = DecisionTreeRegressor(max_depth=4, random_state=1)
knn = KNeighborsRegressor()
# Training models
linear_model = LinearRegression().fit(df_stopid, y)
rf_model = rfm.fit(df_stopid, y)
dt_model = dtc_4.fit(df_stopid, y)
knn_model = knn.fit(df_stopid, y)
# Storing models in dictionary
linear_model_v1[current + '_' + previous] = linear_model
rf_model_v1[current + '_' + previous] = rf_model
dt_model_v1[current + '_' + previous] = dt_model
knn_model_v1[current + '_' + previous] = knn_model
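# The loop above keys one model per consecutive stop pair; zipping the stop list with its one-step shift yields those pairs. A sketch with hypothetical stop ids:

```python
# Hypothetical ordered stop list for a route
stops = ['807', '817', '818', '819']

# zip(stops, stops[1:]) pairs each stop with the next one
pairs = list(zip(stops, stops[1:]))

print(pairs)  # [('807', '817'), ('817', '818'), ('818', '819')]
```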
# ### 2.1.1 Obtaining predictions - route sample 1
# +
index = 0
predictions_1 = []
predictions_2 = []
predictions_3 = []
predictions_4 = []
for previous, current in zip(stops_46a, stops_46a[1:]):
if previous == '807' and current == '817':
continue
predictions_1 += [linear_model_v1[current + '_' + previous].predict(routesample_1.iloc[[index]])]
predictions_2 += [rf_model_v1[current + '_' + previous].predict(routesample_1.iloc[[index]])]
predictions_3 += [dt_model_v1[current + '_' + previous].predict(routesample_1.iloc[[index]])]
predictions_4 += [knn_model_v1[current + '_' + previous].predict(routesample_1.iloc[[index]])]
index += 1
for pred in range(len(predictions_1)):
linear_v1_pred[pred] = predictions_1[pred][0][0]
rf_model_v1_pred[pred] = predictions_2[pred][0]
dt_model_v1_pred[pred] = predictions_3[pred][0]
knn_model_v1_pred[pred] = predictions_4[pred][0]
# -
# ### 2.1.2 Obtaining predictions - route sample 2
# +
index = 0
predictions_1 = []
predictions_2 = []
predictions_3 = []
predictions_4 = []
for previous, current in zip(stops_46a, stops_46a[1:]):
if previous == '807' and current == '817':
continue
predictions_1 += [linear_model_v1[current + '_' + previous].predict(routesample_2.iloc[[index]])]
predictions_2 += [rf_model_v1[current + '_' + previous].predict(routesample_2.iloc[[index]])]
predictions_3 += [dt_model_v1[current + '_' + previous].predict(routesample_2.iloc[[index]])]
predictions_4 += [knn_model_v1[current + '_' + previous].predict(routesample_2.iloc[[index]])]
index += 1
for pred in range(len(predictions_1)):
linear_v2_pred[pred] = predictions_1[pred][0][0]
rf_model_v2_pred[pred] = predictions_2[pred][0]
dt_model_v2_pred[pred] = predictions_3[pred][0]
knn_model_v2_pred[pred] = predictions_4[pred][0]
# -
# <br><br>
# Printing evaluation metrics for route sample 1
# +
# Printing evaluation metrics
print('Linear Model Evaluation Metrics: \n')
print("RMSE Score: ", np.sqrt(metrics.mean_squared_error(actual_routesample_1, linear_v1_pred)))
print("MSE Score: ", metrics.mean_squared_error(actual_routesample_1, linear_v1_pred))
print("MAE Score: ", metrics.mean_absolute_error(actual_routesample_1, linear_v1_pred))
print("R2 Score: ", metrics.r2_score(actual_routesample_1, linear_v1_pred))
actual_total_linear = sum(actual_routesample_1.ACTUALTIME_TRAVEL)
predicted_total_linear = sum(linear_v1_pred)
print(f'\nActual total journey time: {actual_total_linear} seconds.')
print(f'Predicted total journey time: {predicted_total_linear[0]} seconds')
# +
# Printing evaluation metrics
print('Random Forest Evaluation Metrics: \n')
print("RMSE Score: ", np.sqrt(metrics.mean_squared_error(actual_routesample_1, rf_model_v1_pred)))
print("MSE Score: ", metrics.mean_squared_error(actual_routesample_1, rf_model_v1_pred))
print("MAE Score: ", metrics.mean_absolute_error(actual_routesample_1, rf_model_v1_pred))
print("R2 Score: ", metrics.r2_score(actual_routesample_1, rf_model_v1_pred))
actual_total_linear = sum(actual_routesample_1.ACTUALTIME_TRAVEL)
predicted_total_linear = sum(rf_model_v1_pred)
print(f'\nActual total journey time: {actual_total_linear} seconds.')
print(f'Predicted total journey time: {predicted_total_linear[0]} seconds')
# +
# Printing evaluation metrics
print('Decision Trees Evaluation Metrics: \n')
print("RMSE Score: ", np.sqrt(metrics.mean_squared_error(actual_routesample_1, dt_model_v1_pred)))
print("MSE Score: ", metrics.mean_squared_error(actual_routesample_1, dt_model_v1_pred))
print("MAE Score: ", metrics.mean_absolute_error(actual_routesample_1, dt_model_v1_pred))
print("R2 Score: ", metrics.r2_score(actual_routesample_1, dt_model_v1_pred))
actual_total_linear = sum(actual_routesample_1.ACTUALTIME_TRAVEL)
predicted_total_linear = sum(dt_model_v1_pred)
print(f'\nActual total journey time: {actual_total_linear} seconds.')
print(f'Predicted total journey time: {predicted_total_linear[0]} seconds')
# +
# Printing evaluation metrics
print('KNN Evaluation Metrics: \n')
print("RMSE Score: ", np.sqrt(metrics.mean_squared_error(actual_routesample_1, knn_model_v1_pred)))
print("MSE Score: ", metrics.mean_squared_error(actual_routesample_1, knn_model_v1_pred))
print("MAE Score: ", metrics.mean_absolute_error(actual_routesample_1, knn_model_v1_pred))
print("R2 Score: ", metrics.r2_score(actual_routesample_1, knn_model_v1_pred))
actual_total_linear = sum(actual_routesample_1.ACTUALTIME_TRAVEL)
predicted_total_linear = sum(knn_model_v1_pred)
print(f'\nActual total journey time: {actual_total_linear} seconds.')
print(f'Predicted total journey time: {predicted_total_linear[0]} seconds')
# -
# <br><br>
# Printing evaluation metrics for route sample 2
# +
# Printing evaluation metrics
print('Linear Model Evaluation Metrics: \n')
print("RMSE Score: ", np.sqrt(metrics.mean_squared_error(actual_routesample_2, linear_v2_pred)))
print("MSE Score: ", metrics.mean_squared_error(actual_routesample_2, linear_v2_pred))
print("MAE Score: ", metrics.mean_absolute_error(actual_routesample_2, linear_v2_pred))
print("R2 Score: ", metrics.r2_score(actual_routesample_2, linear_v2_pred))
actual_total_linear = sum(actual_routesample_2.ACTUALTIME_TRAVEL)
predicted_total_linear = sum(linear_v2_pred)
print(f'\nActual total journey time: {actual_total_linear} seconds.')
print(f'Predicted total journey time: {predicted_total_linear[0]} seconds')
# +
# Printing evaluation metrics
print('Random Forest Model Evaluation Metrics: \n')
print("RMSE Score: ", np.sqrt(metrics.mean_squared_error(actual_routesample_2, rf_model_v2_pred)))
print("MSE Score: ", metrics.mean_squared_error(actual_routesample_2, rf_model_v2_pred))
print("MAE Score: ", metrics.mean_absolute_error(actual_routesample_2, rf_model_v2_pred))
print("R2 Score: ", metrics.r2_score(actual_routesample_2, rf_model_v2_pred))
actual_total_linear = sum(actual_routesample_2.ACTUALTIME_TRAVEL)
predicted_total_linear = sum(rf_model_v2_pred)
print(f'\nActual total journey time: {actual_total_linear} seconds.')
print(f'Predicted total journey time: {predicted_total_linear[0]} seconds')
# +
# Printing evaluation metrics
print('Decision Tree Model Evaluation Metrics: \n')
print("RMSE Score: ", np.sqrt(metrics.mean_squared_error(actual_routesample_2, dt_model_v2_pred)))
print("MSE Score: ", metrics.mean_squared_error(actual_routesample_2, dt_model_v2_pred))
print("MAE Score: ", metrics.mean_absolute_error(actual_routesample_2, dt_model_v2_pred))
print("R2 Score: ", metrics.r2_score(actual_routesample_2, dt_model_v2_pred))
actual_total_linear = sum(actual_routesample_2.ACTUALTIME_TRAVEL)
predicted_total_linear = sum(dt_model_v2_pred)
print(f'\nActual total journey time: {actual_total_linear} seconds.')
print(f'Predicted total journey time: {predicted_total_linear[0]} seconds')
# +
# Printing evaluation metrics
print('KNN Model Evaluation Metrics: \n')
print("RMSE Score: ", np.sqrt(metrics.mean_squared_error(actual_routesample_2, knn_model_v2_pred)))
print("MSE Score: ", metrics.mean_squared_error(actual_routesample_2, knn_model_v2_pred))
print("MAE Score: ", metrics.mean_absolute_error(actual_routesample_2, knn_model_v2_pred))
print("R2 Score: ", metrics.r2_score(actual_routesample_2, knn_model_v2_pred))
actual_total_linear = sum(actual_routesample_2.ACTUALTIME_TRAVEL)
predicted_total_linear = sum(knn_model_v2_pred)
print(f'\nActual total journey time: {actual_total_linear} seconds.')
print(f'Predicted total journey time: {predicted_total_linear[0]} seconds')
# -
# ## 2.2 Training with additional features - current stopid and previous stopid
# [Back to top section](#2.1-Training-without-additional-features---current-stopid-and-previous-stopid)
import json
with open('stop_dwelltimes.json') as file:
    stop_dwelltimes = json.load(file)
low_corr = ['DAYOFSERVICE', 'VEHICLEID', 'TRIPID', 'STOPPOINTID', 'PREVIOUS_STOPPOINTID', 'PROGRNUMBER', 'temp', 'pressure', 'humidity',
'wind_deg', 'weather_id', 'weather_description', 'clouds_all', 'wind_speed', 'PREVIOUS_STOPPOINTID', 'PLANNEDTIME_ARR', 'PLANNEDTIME_DEP', 'ACTUALTIME_ARR', 'ACTUALTIME_DEP',
'PLANNEDTIME_TRAVEL']
# +
# Making new features
# df['HOUROFSERVICE'] = [int(time.strftime("%H",time.gmtime(hour))) for hour in df['ACTUALTIME_DEP']]
df['eve_rushour'] = [1 if int(time.strftime("%H",time.gmtime(hour))) >= 16 and int(time.strftime("%H",time.gmtime(hour))) <= 19 else 0 for hour in df['ACTUALTIME_DEP']]
df['morn_rushour'] = [1 if int(time.strftime("%H",time.gmtime(hour))) >= 7 and int(time.strftime("%H",time.gmtime(hour))) <= 9 else 0 for hour in df['ACTUALTIME_DEP']]
df['morn_rushour'] = df['morn_rushour'].astype('category')
df['eve_rushour'] = df['eve_rushour'].astype('category')
# df = df.drop('HOUROFSERVICE', 1)
# df = df.drop('morn_rushour', 1)
# Setting up route samples
routesample_1 = df[(df['TRIPID'] == '8591174') & (df['DAYOFSERVICE']=='2018-12-23')]
routesample_2 = df[(df['TRIPID'] == '6106738') & (df['DAYOFSERVICE']==' 2018-01-19')]
# List of stops for route 46a
stops_46a = routesample_1.STOPPOINTID.tolist()
# Setting up dummy features
routesample_1 = routesample_1.drop(low_corr, 1)
routesample_1 = pd.get_dummies(routesample_1)
actual_routesample_1 = pd.DataFrame(routesample_1['ACTUALTIME_TRAVEL'])
routesample_1 = routesample_1.drop('ACTUALTIME_TRAVEL', 1)
routesample_2 = routesample_2.drop(low_corr, 1)
routesample_2 = pd.get_dummies(routesample_2)
actual_routesample_2 = pd.DataFrame(routesample_2['ACTUALTIME_TRAVEL'])
routesample_2 = routesample_2.drop('ACTUALTIME_TRAVEL', 1)
# Setting up dictionary to store trained models
linear_model_v2 = {}
dt_model_v2 = {}
rf_model_v2 = {}
knn_model_v2 = {}
# Setting up empty arrays to feed predictions into them
linear_v1_pred = np.zeros(shape=(59,1))
linear_v2_pred = np.zeros(shape=(59,1))
rf_model_v1_pred = np.zeros(shape=(59,1))
rf_model_v2_pred = np.zeros(shape=(59,1))
dt_model_v1_pred = np.zeros(shape=(59,1))
dt_model_v2_pred = np.zeros(shape=(59,1))
knn_model_v1_pred = np.zeros(shape=(59,1))
knn_model_v2_pred = np.zeros(shape=(59,1))
# -
for previous, current in zip(stops_46a, stops_46a[1:]):
df_stopid = df[(df['STOPPOINTID']==current) & (df['PREVIOUS_STOPPOINTID']==previous)]
df_stopid = df_stopid.drop(low_corr, 1)
df_stopid = pd.get_dummies(df_stopid)
y = pd.DataFrame(df_stopid['ACTUALTIME_TRAVEL'])
df_stopid = df_stopid.drop('ACTUALTIME_TRAVEL', 1)
rfm = RandomForestRegressor(n_estimators=40, oob_score=True, random_state=1)
dtc_4 = DecisionTreeRegressor(max_depth=4, random_state=1)
knn = KNeighborsRegressor()
# Training models
linear_model = LinearRegression().fit(df_stopid, y)
rf_model = rfm.fit(df_stopid, y)
dt_model = dtc_4.fit(df_stopid, y)
knn_model = knn.fit(df_stopid, y)
# Storing models in dictionary
linear_model_v2[current + '_' + previous] = linear_model
rf_model_v2[current + '_' + previous] = rf_model
dt_model_v2[current + '_' + previous] = dt_model
knn_model_v2[current + '_' + previous] = knn_model
# ### 2.2.1 Obtaining predictions - route sample 1
# +
index = 0
predictions_1 = []
predictions_2 = []
predictions_3 = []
predictions_4 = []
for previous, current in zip(stops_46a, stops_46a[1:]):
if previous == '807' and current == '817':
continue
predictions_1 += [linear_model_v2[current + '_' + previous].predict(routesample_1.iloc[[index]])]
predictions_2 += [rf_model_v2[current + '_' + previous].predict(routesample_1.iloc[[index]])]
predictions_3 += [dt_model_v2[current + '_' + previous].predict(routesample_1.iloc[[index]])]
predictions_4 += [knn_model_v2[current + '_' + previous].predict(routesample_1.iloc[[index]])]
index += 1
for pred in range(len(predictions_1)):
linear_v1_pred[pred] = predictions_1[pred][0][0]
rf_model_v1_pred[pred] = predictions_2[pred][0]
dt_model_v1_pred[pred] = predictions_3[pred][0]
knn_model_v1_pred[pred] = predictions_4[pred][0]
# -
# ### 2.2.2 Obtaining predictions - route sample 2
# +
index = 0
predictions_1 = []
predictions_2 = []
predictions_3 = []
predictions_4 = []
for previous, current in zip(stops_46a, stops_46a[1:]):
if previous == '807' and current == '817':
continue
predictions_1 += [linear_model_v2[current + '_' + previous].predict(routesample_2.iloc[[index]])]
predictions_2 += [rf_model_v2[current + '_' + previous].predict(routesample_2.iloc[[index]])]
predictions_3 += [dt_model_v2[current + '_' + previous].predict(routesample_2.iloc[[index]])]
predictions_4 += [knn_model_v2[current + '_' + previous].predict(routesample_2.iloc[[index]])]
index += 1
for pred in range(len(predictions_1)):
linear_v2_pred[pred] = predictions_1[pred][0][0]
rf_model_v2_pred[pred] = predictions_2[pred][0]
dt_model_v2_pred[pred] = predictions_3[pred][0]
knn_model_v2_pred[pred] = predictions_4[pred][0]
# -
# <br><br>
# Printing evaluation metrics for route sample 1
# +
# Printing evaluation metrics
print('Linear Model Evaluation Metrics: \n')
print("RMSE Score: ", np.sqrt(metrics.mean_squared_error(actual_routesample_1, linear_v1_pred)))
print("MSE Score: ", metrics.mean_squared_error(actual_routesample_1, linear_v1_pred))
print("MAE Score: ", metrics.mean_absolute_error(actual_routesample_1, linear_v1_pred))
print("R2 Score: ", metrics.r2_score(actual_routesample_1, linear_v1_pred))
actual_total_linear = sum(actual_routesample_1.ACTUALTIME_TRAVEL)
predicted_total_linear = sum(linear_v1_pred)
print(f'\nActual total journey time: {actual_total_linear} seconds.')
print(f'Predicted total journey time: {predicted_total_linear[0]} seconds')
# +
# Printing evaluation metrics
print('Random Forest Evaluation Metrics: \n')
print("RMSE Score: ", np.sqrt(metrics.mean_squared_error(actual_routesample_1, rf_model_v1_pred)))
print("MSE Score: ", metrics.mean_squared_error(actual_routesample_1, rf_model_v1_pred))
print("MAE Score: ", metrics.mean_absolute_error(actual_routesample_1, rf_model_v1_pred))
print("R2 Score: ", metrics.r2_score(actual_routesample_1, rf_model_v1_pred))
actual_total_linear = sum(actual_routesample_1.ACTUALTIME_TRAVEL)
predicted_total_linear = sum(rf_model_v1_pred)
print(f'\nActual total journey time: {actual_total_linear} seconds.')
print(f'Predicted total journey time: {predicted_total_linear[0]} seconds')
# +
# Printing evaluation metrics
print('Decision Trees Evaluation Metrics: \n')
print("RMSE Score: ", np.sqrt(metrics.mean_squared_error(actual_routesample_1, dt_model_v1_pred)))
print("MSE Score: ", metrics.mean_squared_error(actual_routesample_1, dt_model_v1_pred))
print("MAE Score: ", metrics.mean_absolute_error(actual_routesample_1, dt_model_v1_pred))
print("R2 Score: ", metrics.r2_score(actual_routesample_1, dt_model_v1_pred))
actual_total_linear = sum(actual_routesample_1.ACTUALTIME_TRAVEL)
predicted_total_linear = sum(dt_model_v1_pred)
print(f'\nActual total journey time: {actual_total_linear} seconds.')
print(f'Predicted total journey time: {predicted_total_linear[0]} seconds')
# +
# Printing evaluation metrics
print('KNN Evaluation Metrics: \n')
print("RMSE Score: ", np.sqrt(metrics.mean_squared_error(actual_routesample_1, knn_model_v1_pred)))
print("MSE Score: ", metrics.mean_squared_error(actual_routesample_1, knn_model_v1_pred))
print("MAE Score: ", metrics.mean_absolute_error(actual_routesample_1, knn_model_v1_pred))
print("R2 Score: ", metrics.r2_score(actual_routesample_1, knn_model_v1_pred))
actual_total_linear = sum(actual_routesample_1.ACTUALTIME_TRAVEL)
predicted_total_linear = sum(knn_model_v1_pred)
print(f'\nActual total journey time: {actual_total_linear} seconds.')
print(f'Predicted total journey time: {predicted_total_linear[0]} seconds')
# -
# <br><br>
# Printing evaluation metrics for route sample 2
# +
# Printing evaluation metrics
print('Linear Model Evaluation Metrics: \n')
print("RMSE Score: ", np.sqrt(metrics.mean_squared_error(actual_routesample_2, linear_v2_pred)))
print("MSE Score: ", metrics.mean_squared_error(actual_routesample_2, linear_v2_pred))
print("MAE Score: ", metrics.mean_absolute_error(actual_routesample_2, linear_v2_pred))
print("R2 Score: ", metrics.r2_score(actual_routesample_2, linear_v2_pred))
actual_total_linear = sum(actual_routesample_2.ACTUALTIME_TRAVEL)
predicted_total_linear = sum(linear_v2_pred)
print(f'\nActual total journey time: {actual_total_linear} seconds.')
print(f'Predicted total journey time: {predicted_total_linear[0]} seconds')
# +
# Printing evaluation metrics
print('Random Forest Model Evaluation Metrics: \n')
print("RMSE Score: ", np.sqrt(metrics.mean_squared_error(actual_routesample_2, rf_model_v2_pred)))
print("MSE Score: ", metrics.mean_squared_error(actual_routesample_2, rf_model_v2_pred))
print("MAE Score: ", metrics.mean_absolute_error(actual_routesample_2, rf_model_v2_pred))
print("R2 Score: ", metrics.r2_score(actual_routesample_2, rf_model_v2_pred))
actual_total_linear = sum(actual_routesample_2.ACTUALTIME_TRAVEL)
predicted_total_linear = sum(rf_model_v2_pred)
print(f'\nActual total journey time: {actual_total_linear} seconds.')
print(f'Predicted total journey time: {predicted_total_linear[0]} seconds')
# +
# Printing evaluation metrics
print('Decision Tree Model Evaluation Metrics: \n')
print("RMSE Score: ", np.sqrt(metrics.mean_squared_error(actual_routesample_2, dt_model_v2_pred)))
print("MSE Score: ", metrics.mean_squared_error(actual_routesample_2, dt_model_v2_pred))
print("MAE Score: ", metrics.mean_absolute_error(actual_routesample_2, dt_model_v2_pred))
print("R2 Score: ", metrics.r2_score(actual_routesample_2, dt_model_v2_pred))
actual_total_linear = sum(actual_routesample_2.ACTUALTIME_TRAVEL)
predicted_total_linear = sum(dt_model_v2_pred)
print(f'\nActual total journey time: {actual_total_linear} seconds.')
print(f'Predicted total journey time: {predicted_total_linear[0]} seconds')
# +
# Printing evaluation metrics
print('KNN Model Evaluation Metrics: \n')
print("RMSE Score: ", np.sqrt(metrics.mean_squared_error(actual_routesample_2, knn_model_v2_pred)))
print("MSE Score: ", metrics.mean_squared_error(actual_routesample_2, knn_model_v2_pred))
print("MAE Score: ", metrics.mean_absolute_error(actual_routesample_2, knn_model_v2_pred))
print("R2 Score: ", metrics.r2_score(actual_routesample_2, knn_model_v2_pred))
actual_total_linear = sum(actual_routesample_2.ACTUALTIME_TRAVEL)
predicted_total_linear = sum(knn_model_v2_pred)
print(f'\nActual total journey time: {actual_total_linear} seconds.')
print(f'Predicted total journey time: {predicted_total_linear[0]} seconds')
# -
# ***
# <br><br>
# # 3. Route model: taking the proportion of the prediction to calculate a journey time for the user.
# ## 3.1 Calculating the proportion of each stop from the overall trip.
def proportion_stops(predictions):
# Sum from the first stop until each stop
sum_each_stop = np.zeros(predictions.shape[0], dtype=float)
proportion_each_stop = np.zeros(predictions.shape[0], dtype=float)
overall_prediction = np.sum(predictions)
# Adding sum up until current stop and dividing by overall prediction to get proportion of the trip
for length in range(predictions.shape[0]):
sum_each_stop = np.append(sum_each_stop, [predictions[length]])
sum_overall = np.sum(sum_each_stop) / overall_prediction*100
proportion_each_stop[length] = sum_overall
return proportion_each_stop
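# The running-sum logic above can be written more compactly with `np.cumsum`; a minimal standalone sketch (the example predictions are made up):

```python
import numpy as np

def proportion_stops_cumsum(predictions):
    # Cumulative share of the total predicted journey time, in percent
    predictions = np.asarray(predictions, dtype=float).ravel()
    return np.cumsum(predictions) / predictions.sum() * 100

# Hypothetical per-stop travel-time predictions in seconds
props = proportion_stops_cumsum([60, 120, 60])
# The last entry is always 100.0 (the full trip)
```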
# ## 3.2 Return the progrnumber based off the stoppointid in a route
# Finding the most common progrnumber based off the stoppointid. The most common value is used because it assumes that most trips for each line run the complete route, with the exception of a few trips that take a different route and skip some stops as a result.
# +
# Code taken from https://www.geeksforgeeks.org/python-find-most-frequent-element-in-a-list/
# array only accepts a pandas Series or numpy array
def most_common(array):
    List = array.tolist()
    mode_list = mode(List)
    if mode_list == '1':
        return 0
    return mode_list
# -
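# `statistics.mode` raises `StatisticsError` on multimodal input before Python 3.8; `collections.Counter` sidesteps that while keeping the notebook's special case for `'1'`. A sketch, assuming plain-list input:

```python
from collections import Counter

def most_common_counter(values):
    # most_common(1) returns the single highest-count (value, count) pair
    value, _ = Counter(values).most_common(1)[0]
    return 0 if value == '1' else value
```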
# ## 3.3 Calculating the journey time from a start to end destination based on user input
#
# Finding the travel time duration based on a stoppointid then getting the progrnumber
# +
def journey_time(start,end, prediction):
# Converting into int because the function returns a string
start_progrnum = int(most_common(df['PROGRNUMBER'][df['STOPPOINTID']==start]))
end_progrnum = int(most_common(df['PROGRNUMBER'][df['STOPPOINTID']==end]))
# print(start_progrnum)
# print(end_progrnum)
proportion_array = proportion_stops(prediction)
overall_prediction = np.sum(prediction)
# calculating the time difference from start to end destination
start_prediction = (proportion_array[start_progrnum]/100) * overall_prediction
end_prediction = (proportion_array[end_progrnum]/100) * overall_prediction
journeytime = end_prediction - start_prediction
# print(journeytime)
return journeytime
# +
user_start = '807'
user_end = '812'
journey_time(user_start, user_end, prediction_46a)
# -
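# The proportion arithmetic inside `journey_time` reduces to a difference of cumulative sums; a standalone sketch with hypothetical per-segment predictions (the real notebook looks the progrnumbers up from the dataframe):

```python
import numpy as np

def journey_time_from_progrnums(start_progrnum, end_progrnum, predictions):
    # Predicted time between two stops = difference of cumulative predicted times
    cumulative = np.cumsum(np.asarray(predictions, dtype=float))
    return cumulative[end_progrnum] - cumulative[start_progrnum]

# Hypothetical per-segment predictions (seconds) for a 5-stop route
duration = journey_time_from_progrnums(1, 3, [60, 90, 120, 80, 70])
```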
# ***
# <br><br>
# # 5. Stop pair model
# ## 5.1 First version of paired stop approach
# <br><br>
# This approach makes a model based on the stopid and its previous stopids
# Returns a paired list of stops
def paired_stops(df):
stopid = df['STOPPOINTID'].unique().tolist()
previous_stopid = []
for i in stopid:
prev = df['PREVIOUS_STOPPOINTID'][df['STOPPOINTID']==i]
# Adds most frequent previous stopid to list
previous_stopid += [prev.value_counts().idxmax()]
return [stopid, previous_stopid]
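# The `value_counts().idxmax()` call picks the most frequent previous stop for each stop; a toy check (column names taken from the notebook, data made up):

```python
import pandas as pd

toy = pd.DataFrame({
    'STOPPOINTID':          ['807', '812', '812', '812'],
    'PREVIOUS_STOPPOINTID': ['0',   '807', '807', '999'],
})
# Most frequent previous stop recorded for stop '812'
prev = toy['PREVIOUS_STOPPOINTID'][toy['STOPPOINTID'] == '812'].value_counts().idxmax()
```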
# Training a linear model per stop pair (paired_stops must be called first; the original subscripted the function object)
pair_stops = paired_stops(df)
for ids in range(len(pair_stops[0])):
    # Making new dataframe
    to_add = df[df['STOPPOINTID']==pair_stops[0][ids]]
    to_add = to_add.append(df[df['PREVIOUS_STOPPOINTID']==pair_stops[1][ids]])
    stops_df = pd.DataFrame(data=to_add)
    # Setting target feature
    y = stops_df['ACTUALTIME_TRAVEL']
    # Dropping target feature and low corr features
    stops_df = stops_df.drop(low_corr,1)
    stops_df = stops_df.drop('ACTUALTIME_TRAVEL',1)
    stops_df = pd.get_dummies(stops_df)
    # Fitting model based on stops
    linear_reg = LinearRegression().fit(stops_df, y)
    # Save to pickle file
# +
pair_stops = paired_stops(df)
to_add = df[df['STOPPOINTID']==pair_stops[0][5]]
to_add = to_add.append(df[df['PREVIOUS_STOPPOINTID']==pair_stops[1][5]])
stops_df = pd.DataFrame(to_add)
# Setting target feature
y = stops_df['ACTUALTIME_TRAVEL']
# Dropping target feature and low corr features
stops_df = stops_df.drop(low_corr,1)
stops_df = stops_df.drop('ACTUALTIME_TRAVEL',1)
stops_df = pd.get_dummies(stops_df)
# Fitting/Training model based on stops
linear_reg_model_ = LinearRegression().fit(stops_df, y)
# Saving to pickle File
with open('model_'+pair_stops[0][5]+'.pkl', 'wb') as handle:
pickle.dump(linear_reg_model_, handle)
# -
sampledf = stops_df.iloc[[0]]
sample_prediction = linear_reg_model_.predict(sampledf)
sample_prediction
with open('model_'+pair_stops[0][5]+'.pkl', 'rb') as handle:
model = pickle.load(handle)
model.predict(sampledf)
# ## 5.2.1 Setting up for 46a stop pair models using first approach
# Function to get previous stopid and return a paired list
def pair_stopids(current_stopids):
previous_stopid = []
for i in current_stopids:
prev = df['PREVIOUS_STOPPOINTID'][df['STOPPOINTID']==i]
# Adds most frequent previous stopid to list
previous_stopid += [prev.value_counts().idxmax()]
return [current_stopids, previous_stopid]
# Loading the json file
import json
file = open('routes_and_stops.json',)
routes_stops = json.load(file)
# +
# Get all stops for 46a going outbound ('1')
list_46a_stops = routes_stops['46A']['outbound']
# Pairing stopids and prev stopids from 46a route
pairing_46a_stopids = pair_stopids(list_46a_stops)
predictions = []
# -
for ids in range(len(pairing_46a_stopids[0])):
# Making new dataframe
to_add = df[df['STOPPOINTID']==pairing_46a_stopids[0][ids]]
to_add = to_add.append(df[df['PREVIOUS_STOPPOINTID']==pairing_46a_stopids[1][ids]])
stops_df = pd.DataFrame(data=to_add)
# Setting target feature
y = stops_df['ACTUALTIME_TRAVEL']
# Dropping target feature and low corr features
stops_df = stops_df.drop(low_corr,1)
stops_df = stops_df.drop('ACTUALTIME_TRAVEL',1)
stops_df = pd.get_dummies(stops_df)
# Fitting model based on stops
linear_reg_model = LinearRegression().fit(stops_df, y)
# Save to pickle file
# with open('model_'+pairing_46a_stopids[0][ids]+'.pkl', 'wb') as handle:
# pickle.dump(linear_reg_model, handle)
    # Predicting data with the model saved for this stop pair
    with open('model_'+pairing_46a_stopids[0][ids]+'.pkl', 'rb') as handle:
        model = pickle.load(handle)
    k = model.predict(route_46a.iloc[[ids]])
    predictions += [k]
# Printing evaluation metrics
print("RMSE Score: ", np.sqrt(metrics.mean_squared_error(actualtimes_46a, predictions)))
print("MSE Score: ", metrics.mean_squared_error(actualtimes_46a, predictions))
print("MAE Score: ", metrics.mean_absolute_error(actualtimes_46a, predictions))
print("R2 Score: ", metrics.r2_score(actualtimes_46a, predictions))
# <br><br>
# ##### Conclusion:
# The linear regression model is not very good: the RMSE is off by more than 1000 seconds and the R2 score is negative, so the parameters need to be tuned. Keeping dwell time as a feature might help.
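# As a quick sanity check of the metrics used throughout: RMSE is the square root of MSE, so it is back in the target's units (seconds). With made-up actual and predicted times:

```python
import numpy as np
from sklearn import metrics

actual = np.array([100.0, 200.0, 150.0])
pred = np.array([110.0, 190.0, 160.0])
mse = metrics.mean_squared_error(actual, pred)   # in seconds squared
rmse = np.sqrt(mse)                              # back in seconds
```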
# ## 5.3 Stop pair based on entire leavetimes
#
# [Scroll to Final Stop Pair Model](#6.-Final-Stop-Pair-Model)
# <br><br>
# 1) Make a function that will combine lists in a list together as one list
def combine_listsoflist(to_combine):
combined = []
for each_list in to_combine:
combined += each_list
return combined
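# The same one-level flattening is available in the standard library; a sketch using `itertools.chain`:

```python
from itertools import chain

def combine_listsoflist_chain(to_combine):
    # Flattens exactly one level of nesting, same as the loop above
    return list(chain.from_iterable(to_combine))
```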
# <br><br>
# 2) Make a function that will get rid of the duplicates in the list
def get_unique(stopids_list):
return list(set(stopids_list))
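# Note that `set()` discards the original ordering; if stop order matters, `dict.fromkeys` deduplicates while keeping first-seen order:

```python
def get_unique_ordered(stopids_list):
    # dict keys preserve insertion order (Python 3.7+), so first occurrences survive
    return list(dict.fromkeys(stopids_list))
```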
# <br><br>
# 3) Make a list to store all stopids for DIRECTION == outbound/1.
# +
# Loading the json file
import json
file = open('routes_and_stops.json',)
routes_stops = json.load(file)
# Looping through every lineid, outbound
stopids_outbound = []
for i,j in routes_stops.items():
try:
# print(i, '\n', routes_stops[i]['outbound'], '\n')
stopids_outbound += [routes_stops[i]['outbound']]
except KeyError:
continue
# Calling function to get combined list
combined_stopids_outbound = combine_listsoflist(stopids_outbound)
# Calling function to get unique stopids from combined list
unique_stopids_outbound = get_unique(combined_stopids_outbound)
# -
# <br><br>
# 4) Make a list to store all stopids for DIRECTION == inbound/2.
# +
# Looping through every lineid, inbound
stopids_inbound = []
for i,j in routes_stops.items():
try:
# print(i, '\n', routes_stops[i]['inbound'], '\n')
stopids_inbound += [routes_stops[i]['inbound']]
except KeyError:
continue
# Calling function to get combined list
combined_stopids_inbound = combine_listsoflist(stopids_inbound)
# Calling function to get unique stopids from combined list - using set() to get rid of stops already present in the outbound set
unique_stopids_inbound = list(set(combined_stopids_inbound) - set(combined_stopids_outbound))
# -
# <br><br>
# 5) Make a function that will get all previous stops of stops and return a unique set - then make another dictionary that gets the remaining missing previous stops from routes_stops.json.
# +
def return_previous_stopids(liststopids, direction):
previous_stops = {}
for stopid in liststopids:
list_toadd = []
for i,j in routes_stops.items():
try:
# print(i, '\n', routes_stops[i]['outbound'], '\n')
if stopid in routes_stops[i][direction]:
list_stops = routes_stops[i][direction]
index = list_stops.index(stopid)
if index > 0:
list_toadd += [list_stops[index - 1]]
elif index == 0:
continue
except KeyError:
continue
previous_stops[stopid] = get_unique(list_toadd)
return previous_stops
sample = return_previous_stopids(unique_stopids_inbound, 'inbound')
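# The index lookup above can be checked against a made-up routes dict (structure assumed to match routes_and_stops.json):

```python
toy_routes = {'46A': {'outbound': ['807', '812', '817']}}

def previous_stop(stopid, direction, routes=toy_routes):
    # Return the stop immediately before stopid on the first matching line, else None
    for line in routes.values():
        stops = line.get(direction, [])
        if stopid in stops and stops.index(stopid) > 0:
            return stops[stops.index(stopid) - 1]
    return None
```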
# +
# Get all of missing stop pairs from routes_json file
file_1 = open('stop_pairs_inbound.json')
prev_stops = json.load(file_1)
def get_remaining(routes_stops, prev_stops):
    to_add = {}
    for stop in prev_stops.keys():
        routes = routes_stops[stop]
        current_models = prev_stops[stop]
list_ = []
for value in routes:
if value not in current_models:
list_ += [value]
to_add[stop] = list_
# Get rid of empty keys
for stop in list(to_add.keys()):
o = to_add[stop]
if len(o) == 0:
del to_add[stop]
return to_add
# -
# <br><br>
# 6) Comparing the previous stopids available between the master set (routes_stops.json) and the custom one (previous_stopids_outbound/inbound)
file_1 = open('previous_stops_outbound.json')
prev_stops = json.load(file_1)
# +
# Make new json file for all previous stops outbound
previous_stops_outbound = {}
for stopid in unique_stopids_outbound:
query = "SELECT DISTINCT PREVIOUS_STOPPOINTID from leavetimes WHERE STOPPOINTID = " + stopid
df = pd.read_sql(query, conn)
list_ = df['PREVIOUS_STOPPOINTID'].tolist()
previous_stops_outbound[stopid] = [stopid_ for stopid_ in list_ if stopid_ != '0']
print('Finished, ', stopid)
with open('previous_stops_outbound.json', 'w') as fp:
json.dump(previous_stops_outbound, fp)
# +
# Comparing each stopid and their previous stopid
file_1 = open('previous_stops_inbound.json')
route_stops = sample
outbound_stops = json.load(file_1)
no_pairs = []
final_pairs = {}
for stopid in unique_stopids_inbound:
pairs = []
print('Current stopid: ', stopid)
routes_list = route_stops[stopid]
outbound_list = outbound_stops[stopid]
for element_routes in routes_list:
for element_outbound in outbound_list:
if element_routes == element_outbound:
print('Previous stopid: ', element_routes)
pairs += [element_routes]
else:
no_pairs += [stopid]
final_pairs[stopid] = pairs
# +
# Checking empty lists and adding to list
empty_stops = []
for key in final_pairs.keys():
list_ = final_pairs[key]
if len(list_) == 0:
empty_stops += [key]
empty_stops = get_unique(empty_stops)
print(len(empty_stops))
# -
# Filling empty stops with original previous stopids
for stopid in empty_stops:
toadd = outbound_stops[stopid]
final_pairs[stopid] = toadd
# Removing number 0's
for stopid in unique_stopids_inbound:
if '0' in final_pairs[stopid]:
print('here')
final_pairs[stopid].remove('0')
with open('stop_pairs_inbound.json', 'w') as fp:
json.dump(final_pairs, fp)
# <br><br>
# 7) Query to select the rows based on the previous stopids and append them to the dataframe of the current stopid
#
def df_prev_stops(query_prevstop_list):
query_prevstop_rows = "SELECT leavetimes.* FROM leavetimes WHERE leavetimes.PREVIOUS_STOPPOINTID IN " + str(query_prevstop_list)
df_prevstop = pd.read_sql(query_prevstop_rows, conn)
return df_prevstop
def df_prev_stops_one_element(query_prevstop_list):
query_prevstop_rows = "SELECT leavetimes.* FROM leavetimes WHERE leavetimes.PREVIOUS_STOPPOINTID = " + str(query_prevstop_list)
df_prevstop = pd.read_sql(query_prevstop_rows, conn)
return df_prevstop
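# Building the IN clause with `str(tuple)` breaks for a one-element list (SQLite sees the trailing comma) and concatenating ids into SQL invites injection; a parameterized sketch against an in-memory table using the notebook's column name:

```python
import sqlite3
import pandas as pd

def df_prev_stops_safe(conn, prevstop_ids):
    # One '?' placeholder per id works for any list length >= 1
    placeholders = ','.join('?' * len(prevstop_ids))
    query = f"SELECT * FROM leavetimes WHERE PREVIOUS_STOPPOINTID IN ({placeholders})"
    return pd.read_sql(query, conn, params=list(prevstop_ids))

# Tiny in-memory demo table (schema made up for illustration)
demo_conn = sqlite3.connect(':memory:')
demo_conn.execute("CREATE TABLE leavetimes (PREVIOUS_STOPPOINTID TEXT, t INTEGER)")
demo_conn.executemany("INSERT INTO leavetimes VALUES (?, ?)", [('807', 1), ('999', 2)])
demo_df = df_prev_stops_safe(demo_conn, ['807'])
```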
# <br><br>
# 8) Adding indexes on STOPPOINTID and PREVIOUS_STOPPOINTID
# +
# Adding indexes
# add_index1 = """CREATE INDEX stopid ON leavetimes(STOPPOINTID);"""
# add_index2 = """CREATE INDEX previous_stopid ON leavetimes(PREVIOUS_STOPPOINTID);"""
# add_index3 = """CREATE INDEX direction on trips(DIRECTION);"""
# conn.execute(add_index1)
# conn.execute(add_index2)
# conn.execute(add_index3)
# query = "SELECT name FROM sqlite_master WHERE type = 'index';"
# drop = "DROP INDEX previous_stopid"
# p = conn.execute(query)
# for x in p :
# print(x)
# -
# <br><br>
# 9) Piecing every step together
# +
# List all stops done so far, so training can resume after the laptop is restarted
import os
arr = os.listdir('C:/Users/fayea/UCD/ResearchPracticum/Data-Analytics-CityRoute/stop_pair_models_inbound')
j = []
for i in arr:
j += i.split('_')
h = []
for i in j:
h += i.split('.')
b = h[1:]
c = b[::5]
c = get_unique(c)
# g = [str(i) for i in h if i.isdigit()]
unique_stopids_outbound = [x for x in unique_stopids_inbound if x not in c]
len(unique_stopids_outbound)
# -
print(unique_stopids_outbound)
# +
previous_stops = {}
for stopid in unique_stopids_outbound:
# Get all previous stopids in list
query_previoustop = "SELECT DISTINCT leavetimes.PREVIOUS_STOPPOINTID FROM leavetimes WHERE leavetimes.STOPPOINTID = " + stopid
query_prevstop_df = pd.read_sql(query_previoustop, conn)
# Converting into a pandas series then to list
query_prevstop_series = query_prevstop_df['PREVIOUS_STOPPOINTID'].tolist()
query_prevstop_list = [stopid for stopid in query_prevstop_series if stopid != '0']
previous_stops[stopid] = query_prevstop_list
print('finished')
with open('previous_stops_outbound.json', 'w+') as fp:
json.dump(previous_stops, fp)
# +
# import boto3
import pandas as pd
import numpy as np
import sqlite3
import pickle
# from sagemaker import get_execution_role
from sklearn.linear_model import LinearRegression
from math import log
from multiprocessing import Pool
# ignore warnings
import warnings
warnings.filterwarnings('ignore')
# Connecting to s3
# role = get_execution_role()
# bucket='sagemaker-studio-520298385440-7in8n1t299'
# data_key = 'route_46a.feather'
# data_location = 's3://{}/{}'.format(bucket, data_key)
# -
low_corr = ['DAYOFSERVICE', 'VEHICLEID', 'TRIPID', 'STOPPOINTID', 'PREVIOUS_STOPPOINTID', 'PROGRNUMBER', 'temp', 'pressure', 'humidity',
'wind_speed', 'wind_deg', 'weather_id', 'weather_description', 'clouds_all', 'PREVIOUS_STOPPOINTID', 'PLANNEDTIME_ARR', 'PLANNEDTIME_DEP', 'ACTUALTIME_ARR', 'ACTUALTIME_DEP',
'PLANNEDTIME_TRAVEL', 'DWELLTIME', 'level_0', 'index_x', 'index_y']
# def function to create connection to db
def create_connection(db_file):
"""
create a database connection to the SQLite database specified by db_file
:param df_file: database file
:return: Connection object or None
"""
conn = None
try:
conn = sqlite3.connect(db_file)
return conn
    except sqlite3.Error as e:
print(e)
return conn
# create connection to db
db_file = "C:/Users/fayea/UCD/ResearchPracticum/Data-Analytics-CityRoute/dublinbus.db"
conn = create_connection(db_file)
# +
# Outbound
file = open('previous_stops_outbound.json',)
previous_stops = json.load(file)
# Master set of features
dummies_features = {'MONTHOFSERVICE_April': 0, 'MONTHOFSERVICE_August': 0, 'MONTHOFSERVICE_December': 0, 'MONTHOFSERVICE_February': 0, 'MONTHOFSERVICE_January': 0, 'MONTHOFSERVICE_July': 0,
'MONTHOFSERVICE_June': 0, 'MONTHOFSERVICE_March': 0, 'MONTHOFSERVICE_May': 0, 'MONTHOFSERVICE_November': 0, 'MONTHOFSERVICE_October': 0, 'MONTHOFSERVICE_September': 0,
'DAYOFWEEK_Friday': 0, 'DAYOFWEEK_Monday': 0, 'DAYOFWEEK_Thursday': 0,'DAYOFWEEK_Tuesday': 0, 'DAYOFWEEK_Wednesday': 0,
'DAYOFWEEK_Saturday': 0, 'DAYOFWEEK_Sunday': 0, 'weather_main_Clouds': 0, 'weather_main_Drizzle': 0,
'weather_main_Fog': 0, 'weather_main_Mist': 0,'weather_main_Rain': 0, 'weather_main_Snow': 0, 'weather_main_Clear': 0, 'rain_1h': 0,
'IS_HOLIDAY_0': 0, 'IS_WEEKDAY_1': 0, 'IS_WEEKDAY_0': 0, 'IS_HOLIDAY_1': 0, 'eve_rushour_1': 0, 'eve_rushour_0': 0, 'morn_rushour_0': 0, 'morn_rushour_1': 0}
dummy_keys = ['rain_1h', 'MONTHOFSERVICE_April', 'MONTHOFSERVICE_August',
'MONTHOFSERVICE_December', 'MONTHOFSERVICE_February',
'MONTHOFSERVICE_January', 'MONTHOFSERVICE_July', 'MONTHOFSERVICE_June',
'MONTHOFSERVICE_March', 'MONTHOFSERVICE_May', 'MONTHOFSERVICE_November',
'MONTHOFSERVICE_October', 'MONTHOFSERVICE_September',
'DAYOFWEEK_Friday', 'DAYOFWEEK_Monday', 'DAYOFWEEK_Saturday',
'DAYOFWEEK_Sunday', 'DAYOFWEEK_Thursday', 'DAYOFWEEK_Tuesday',
'DAYOFWEEK_Wednesday', 'IS_HOLIDAY_0', 'IS_HOLIDAY_1', 'IS_WEEKDAY_0',
'IS_WEEKDAY_1', 'weather_main_Clear', 'weather_main_Clouds',
'weather_main_Drizzle', 'weather_main_Fog', 'weather_main_Mist',
'weather_main_Rain', 'weather_main_Snow', 'eve_rushour_1', 'eve_rushour_0', 'morn_rushour_0', 'morn_rushour_1']
# Query to get all of weather
weather_query = "SELECT weather.* from weather"
weather_df = pd.read_sql(weather_query, conn)
weather_df = weather_df.rename(columns={"dt": "DAYOFSERVICE"})
low_corr = ['DAYOFSERVICE', 'VEHICLEID', 'TRIPID', 'STOPPOINTID', 'PREVIOUS_STOPPOINTID', 'PROGRNUMBER', 'temp', 'pressure', 'humidity',
'wind_speed', 'wind_deg', 'weather_id', 'weather_description', 'clouds_all', 'PREVIOUS_STOPPOINTID', 'PLANNEDTIME_ARR', 'PLANNEDTIME_DEP', 'ACTUALTIME_ARR', 'ACTUALTIME_DEP',
'PLANNEDTIME_TRAVEL', 'level_0', 'index_x', 'index_y']
index = 0
for current_stopid in list(to_add):  # snapshot of the keys; to_add is reassigned inside the loop
query_prevstop_series = previous_stops[current_stopid]
query_prevstop_list = tuple(query_prevstop_series)
if len(query_prevstop_list) == 1:
# Making query to db and make df
query_stopid = "SELECT leavetimes.* FROM leavetimes WHERE leavetimes.STOPPOINTID = " + current_stopid
df = pd.read_sql(query_stopid, conn)
# Append previous stops rows to main df
to_add = df_prev_stops_one_element(query_prevstop_series[0])
df = pd.concat([df,to_add])
df = df.merge(weather_df, on='DAYOFSERVICE', how='left')
elif len(query_prevstop_list) == 0:
continue
else:
# Making query to db and make df
query_stopid = "SELECT leavetimes.* FROM leavetimes WHERE leavetimes.STOPPOINTID = " + current_stopid
df = pd.read_sql(query_stopid, conn)
# Append previous stops rows to main df
to_add = df_prev_stops(query_prevstop_list)
df = pd.concat([df,to_add])
df = df.merge(weather_df, on='DAYOFSERVICE', how='left')
# Drop low correlated features and setting target feature
df = df.drop(low_corr, 1)
df['IS_HOLIDAY'] = df['IS_HOLIDAY'].astype('category')
df['IS_WEEKDAY'] = df['IS_WEEKDAY'].astype('category')
tf = df['ACTUALTIME_TRAVEL']
df = df.drop('ACTUALTIME_TRAVEL', 1)
df = pd.get_dummies(df)
if df.shape[1] < 31:
for key in dummy_keys:
            if key not in df.columns:
df[key] = dummies_features[key]
# Fitting model based on stops
linear_reg_model = LinearRegression().fit(df, tf)
# Save to pickle file
with open('C:/Users/fayea/UCD/ResearchPracticum/Data-Analytics-CityRoute/stop_models_outbound/stop_'+ current_stopid +'.pkl', 'wb') as handle:
pickle.dump(linear_reg_model, handle)
print(current_stopid, df.shape[1], index, ' Finished.')
index += 1
# -
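# Each iteration above pickles a fitted `LinearRegression` per stop; at prediction time a service only needs to load the file and call `predict` with the same feature columns used in training. A minimal sketch of that round trip (the two-column toy feature set and the temp-file name are illustrative, not the real 31-column layout):

```python
import os
import pickle
import tempfile

import pandas as pd
from sklearn.linear_model import LinearRegression

# Toy stand-in for one per-stop model (illustrative feature set).
X = pd.DataFrame({"rain_1h": [0.0, 0.2, 0.5, 1.0], "IS_HOLIDAY_1": [0, 0, 1, 1]})
y = pd.Series([60.0, 65.0, 80.0, 90.0])
model = LinearRegression().fit(X, y)

# Save, exactly as the training loop does.
path = os.path.join(tempfile.gettempdir(), "stop_demo.pkl")
with open(path, "wb") as handle:
    pickle.dump(model, handle)

# Later, e.g. in the prediction service: load and predict.
with open(path, "rb") as handle:
    loaded = pickle.load(handle)

pred = loaded.predict(pd.DataFrame({"rain_1h": [0.3], "IS_HOLIDAY_1": [0]}))
```

The only contract between the two sides is that the prediction-time frame carries the same columns, in the same order, as the training frame.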
# <br><br>
# <br><br>
# <br><br>
# # 6. Final Stop Pair Model
# [Back to Top](#Table-of-Contents)
# +
import pandas as pd
import numpy as np
import sqlite3
import json
import pickle
import time
from sklearn.linear_model import LinearRegression
from math import log
from multiprocessing import Pool
import warnings
warnings.filterwarnings('ignore')
# +
def create_connection(db_file):
"""
create a database connection to the SQLite database specified by db_file
:param df_file: database file
:return: Connection object or None
"""
conn = None
try:
conn = sqlite3.connect(db_file)
return conn
except sqlite3.Error as e:
print(e)
return conn
# create connection to db
db_file = "C:/Users/fayea/UCD/ResearchPracticum/Data-Analytics-CityRoute/dublinbus.db"
conn = create_connection(db_file)
# -
# <br><br>
# [Load functions](#5.3-Stop-pair-based-on-entire-leavetimes)
# <br><br>
# +
# Master set of features
dummies_features = {'MONTHOFSERVICE_April': 0, 'MONTHOFSERVICE_August': 0, 'MONTHOFSERVICE_December': 0, 'MONTHOFSERVICE_February': 0, 'MONTHOFSERVICE_January': 0, 'MONTHOFSERVICE_July': 0,
'MONTHOFSERVICE_June': 0, 'MONTHOFSERVICE_March': 0, 'MONTHOFSERVICE_May': 0, 'MONTHOFSERVICE_November': 0, 'MONTHOFSERVICE_October': 0, 'MONTHOFSERVICE_September': 0,
'DAYOFWEEK_Friday': 0, 'DAYOFWEEK_Monday': 0, 'DAYOFWEEK_Thursday': 0,'DAYOFWEEK_Tuesday': 0, 'DAYOFWEEK_Wednesday': 0,
'DAYOFWEEK_Saturday': 0, 'DAYOFWEEK_Sunday': 0, 'weather_main_Clouds': 0, 'weather_main_Drizzle': 0,
'weather_main_Fog': 0, 'weather_main_Mist': 0,'weather_main_Rain': 0, 'weather_main_Snow': 0, 'weather_main_Clear': 0, 'rain_1h': 0,
'IS_HOLIDAY_0': 0, 'IS_WEEKDAY_1': 0, 'IS_WEEKDAY_0': 0, 'IS_HOLIDAY_1': 0, 'eve_rushour_1': 0, 'eve_rushour_0': 0, 'morn_rushour_0': 0, 'morn_rushour_1': 0}
dummy_keys = ['rain_1h', 'MONTHOFSERVICE_April', 'MONTHOFSERVICE_August',
'MONTHOFSERVICE_December', 'MONTHOFSERVICE_February',
'MONTHOFSERVICE_January', 'MONTHOFSERVICE_July', 'MONTHOFSERVICE_June',
'MONTHOFSERVICE_March', 'MONTHOFSERVICE_May', 'MONTHOFSERVICE_November',
'MONTHOFSERVICE_October', 'MONTHOFSERVICE_September',
'DAYOFWEEK_Friday', 'DAYOFWEEK_Monday', 'DAYOFWEEK_Saturday',
'DAYOFWEEK_Sunday', 'DAYOFWEEK_Thursday', 'DAYOFWEEK_Tuesday',
'DAYOFWEEK_Wednesday', 'IS_HOLIDAY_0', 'IS_HOLIDAY_1', 'IS_WEEKDAY_0',
'IS_WEEKDAY_1', 'weather_main_Clear', 'weather_main_Clouds',
'weather_main_Drizzle', 'weather_main_Fog', 'weather_main_Mist',
'weather_main_Rain', 'weather_main_Snow', 'eve_rushour_1', 'eve_rushour_0', 'morn_rushour_0', 'morn_rushour_1']
# Query to get all of weather
weather_query = "SELECT weather.* from weather"
weather_df = pd.read_sql(weather_query, conn)
weather_df = weather_df.rename(columns={"dt": "DAYOFSERVICE"})
low_corr = ['DAYOFSERVICE', 'VEHICLEID', 'TRIPID', 'STOPPOINTID', 'PREVIOUS_STOPPOINTID', 'PROGRNUMBER', 'temp', 'pressure', 'humidity',
'wind_speed', 'wind_deg', 'weather_id', 'weather_description', 'clouds_all', 'PLANNEDTIME_ARR', 'PLANNEDTIME_DEP', 'ACTUALTIME_ARR', 'ACTUALTIME_DEP',
'PLANNEDTIME_TRAVEL', 'level_0', 'index_x', 'index_y', 'index']
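# The padding pattern used in the loops below (add any dummy column missing from a small data slice, defaulting it to 0, so every model sees the full master feature set) has a more compact equivalent in `DataFrame.reindex`; a sketch on toy data:

```python
import pandas as pd

# A reduced master feature set (illustrative; the real one is defined above).
master_columns = ["DAYOFWEEK_Monday", "DAYOFWEEK_Tuesday", "weather_main_Rain"]

# A slice of data that only ever saw Mondays and rain.
df = pd.get_dummies(pd.DataFrame({"DAYOFWEEK": ["Monday"], "weather_main": ["Rain"]}))

# Align to the master feature set; absent dummies are filled with 0.
df = df.reindex(columns=master_columns, fill_value=0)
```

Besides filling missing columns, `reindex` also fixes the column order, which matters when the pickled models are later fed plain arrays.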
# +
index = 0
file = open('stop_pairs_outbound.json')
previous_stops = json.load(file)
# f = pd.DataFrame()
for current_stopid in unique_stopids_outbound:
# print(current_stopid)
previous_stops_list = previous_stops[str(current_stopid)]
if len(previous_stops_list) > 0:
query_stopid = "SELECT leavetimes.* FROM leavetimes WHERE leavetimes.STOPPOINTID = " + current_stopid
df = pd.read_sql(query_stopid, conn)
# Adding Extra Features
df['eve_rushour'] = [1 if int(time.strftime("%H",time.gmtime(hour))) >= 16 and int(time.strftime("%H",time.gmtime(hour))) <= 19 else 0 for hour in df['ACTUALTIME_DEP']]
df['morn_rushour'] = [1 if int(time.strftime("%H",time.gmtime(hour))) >= 7 and int(time.strftime("%H",time.gmtime(hour))) <= 9 else 0 for hour in df['ACTUALTIME_DEP']]
df['morn_rushour'] = df['morn_rushour'].astype('category')
df['eve_rushour'] = df['eve_rushour'].astype('category')
df['IS_HOLIDAY'] = df['IS_HOLIDAY'].astype('category')
df['IS_WEEKDAY'] = df['IS_WEEKDAY'].astype('category')
df = df.merge(weather_df, on='DAYOFSERVICE', how='left')
# f = f.append(df)
for previous_stop in previous_stops_list:
new_df = df[df['PREVIOUS_STOPPOINTID']==previous_stop]
# f = f.append(new_df)
tf = new_df['ACTUALTIME_TRAVEL']
new_df = new_df.drop(columns='ACTUALTIME_TRAVEL')
new_df = new_df.drop(columns=low_corr)
new_df = pd.get_dummies(new_df)
if new_df.shape[1] < 36:
for key in dummy_keys:
if key not in new_df.columns:
new_df[key] = dummies_features[key]
# Fitting model based on stops
linear_reg_model = LinearRegression().fit(new_df, tf)
# Save to pickle file
with open('C:/Users/fayea/UCD/ResearchPracticum/Data-Analytics-CityRoute/stop_pair_models_outbound/stop_'+ current_stopid +'_' + previous_stop + '_outbound.pkl', 'wb') as handle:
pickle.dump(linear_reg_model, handle)
print(current_stopid, previous_stop, new_df.shape[1], index, ' Finished.')
elif len(previous_stops_list) == 0:
# print('here')
continue
index += 1
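# The rush-hour flags above call `time.strftime` once per row; assuming, as the `time.gmtime` usage suggests, that `ACTUALTIME_DEP` holds seconds since midnight, the same flags can be computed vectorised with integer division:

```python
import pandas as pd

# Toy departure times in seconds since midnight: 06:00, 08:00, 12:00, 17:00.
dep = pd.Series([6 * 3600, 8 * 3600, 12 * 3600, 17 * 3600])
hour = dep // 3600

# Same window definitions as the list comprehensions above.
morn_rushour = ((hour >= 7) & (hour <= 9)).astype(int)
eve_rushour = ((hour >= 16) & (hour <= 19)).astype(int)
```

For values under 86400 this matches `int(time.strftime("%H", time.gmtime(x)))`, without the per-row Python call overhead.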
# +
index = 0
file = open('stop_pairs_outbound.json')
previous_stops = json.load(file)
f = pd.DataFrame()
for current_stopid in list(to_add.keys()):
previous_stops_list = to_add[current_stopid]
if len(previous_stops_list) > 0:
query_stopid = "SELECT leavetimes.* FROM leavetimes WHERE leavetimes.STOPPOINTID = " + current_stopid
df = pd.read_sql(query_stopid, conn)
for previous_stop in previous_stops_list:
previous_stop_query = "SELECT leavetimes.* FROM leavetimes WHERE leavetimes.PREVIOUS_STOPPOINTID = " + previous_stop
prev_stop_df = pd.read_sql(previous_stop_query, conn)
# Append previous stops rows to main df
new_df = pd.concat([df,prev_stop_df])
new_df = new_df.merge(weather_df, on='DAYOFSERVICE', how='left')
# Adding Extra Features
new_df['eve_rushour'] = [1 if int(time.strftime("%H",time.gmtime(hour))) >= 16 and int(time.strftime("%H",time.gmtime(hour))) <= 19 else 0 for hour in new_df['ACTUALTIME_DEP']]
new_df['morn_rushour'] = [1 if int(time.strftime("%H",time.gmtime(hour))) >= 7 and int(time.strftime("%H",time.gmtime(hour))) <= 9 else 0 for hour in new_df['ACTUALTIME_DEP']]
new_df['morn_rushour'] = new_df['morn_rushour'].astype('category')
new_df['eve_rushour'] = new_df['eve_rushour'].astype('category')
new_df['IS_HOLIDAY'] = new_df['IS_HOLIDAY'].astype('category')
new_df['IS_WEEKDAY'] = new_df['IS_WEEKDAY'].astype('category')
tf = new_df['ACTUALTIME_TRAVEL']
new_df = new_df.drop(columns='ACTUALTIME_TRAVEL')
new_df = new_df.drop(columns=low_corr)
new_df = pd.get_dummies(new_df)
# f = f.append(new_df)
if new_df.shape[1] < 36:
for key in dummy_keys:
if key not in new_df.columns:
new_df[key] = dummies_features[key]
# Fitting model based on stops
linear_reg_model = LinearRegression().fit(new_df, tf)
# Save to pickle file
with open('C:/Users/fayea/UCD/ResearchPracticum/Data-Analytics-CityRoute/stop_pair_models_outbound/stop_'+ current_stopid +'_' + previous_stop + '_outbound.pkl', 'wb') as handle:
pickle.dump(linear_reg_model, handle)
print(current_stopid, previous_stop, new_df.shape[1], index, ' Finished.')
elif len(previous_stops_list) == 0:
continue
index += 1
# -
# ***
| Modelling/Data_Modelling.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import matplotlib.pyplot as plt
import matplotlib.colors as colors
import numpy as np
import xarray as xr
from salishsea_tools import viz_tools
# -
watercolor = 'lightskyblue'
landcolour = 'papayawhip'
mesh = xr.open_dataset('/home/sallen/MEOPAR/grid/mesh_mask201702.nc')
tmask = 1 - mesh.tmask[0, 0]
imin, imax = 520, 720
jmin, jmax = 100, 300
y_slice = np.arange(imin, imax)
x_slice = np.arange(jmin, jmax)
data3d = xr.open_dataset('/data/sallen/results/MIDOSS/Monte_Carlo/first240_beachingvolume.nc')
beachingvolume = np.ma.masked_array(data3d.Beaching_Volume, data3d.Beaching_Volume < 0.01)
fig, ax = plt.subplots(1, 1, figsize=(7.5, 7))
viz_tools.plot_land_mask(ax, '/home/sallen/MEOPAR/grid/bathymetry_201702.nc',
xslice=x_slice, yslice=y_slice, color=landcolour)
colours = ax.pcolormesh(np.arange(jmin, jmax+1)+0.75, np.arange(imin, imax+1)+0.75, beachingvolume[imin:imax, jmin:jmax], cmap='Blues', norm=colors.LogNorm(vmin=0.001, vmax=10));
cb = fig.colorbar(colours, ax=ax)
cb.set_label('Beaching Volume (m$^3$/gridcell/spill)')
viz_tools.set_aspect(ax);
fig.savefig('BeachingVolume_for_240.png')
| BeachingVolume.ipynb |
# ---
# jupyter:
# jupytext:
# formats: ipynb,py
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:PROJ_irox_oer] *
# language: python
# name: conda-env-PROJ_irox_oer-py
# ---
# ### Import Modules
# +
import os
print(os.getcwd())
import sys
import pickle
import pandas as pd
from ase import io
# -
dir_root = os.path.join(
os.environ["PROJ_DATA"],
"PROJ_IrOx_OER/seoin_irox_data/bulk",
)
# +
bulk_systems_dict = {
"iro2": [
"anatase-fm",
"brookite-fm",
"columbite-fm",
"pyrite-fm",
"rutile-fm",
],
"iro3": [
"Amm2",
"cmcm",
"pm-3m",
],
}
crystal_rename_dict = {
'anatase-fm': 'anatase',
'brookite-fm': 'brookite',
'columbite-fm': 'columbite',
'pyrite-fm': 'pyrite',
'rutile-fm': 'rutile',
'Amm2': 'amm2',
'cmcm': 'cmcm',
'pm-3m': 'pm-3m',
}
# -
def calc_dH(
e_per_atom,
stoich=None,
num_H_atoms=0,
):
"""
The original method is located in:
F:/Dropbox/01_norskov/00_git_repos/PROJ_IrOx_Active_Learning_OER/data/proj_data_irox.py
Based on a E_DFT/atom of -7.047516 for rutile-IrO2
See the following dir for derivation:
PROJ_IrOx_Active_Learning_OER/workflow/energy_treatment_deriv/calc_references
"""
# | - calc_dH
o_ref = -4.64915959
ir_metal_fit = -9.32910211636731
h_ref = -3.20624595
if stoich == "AB2":
dH = (2 + 1) * e_per_atom - 2 * o_ref - ir_metal_fit
dH_per_atom = dH / 3.
elif stoich == "AB3":
dH = (3 + 1) * e_per_atom - 3 * o_ref - ir_metal_fit
dH_per_atom = dH / 4.
elif stoich in ("IrHO3", "IrO3H", "iro3h", "iroh3"):
dH = (3 + 1 + 1) * e_per_atom - 3 * o_ref - ir_metal_fit - h_ref
dH_per_atom = dH / 5.
else:
raise ValueError("Unrecognized stoich: " + str(stoich))
return dH_per_atom
#__|
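# As a quick sanity check, the AB2 branch of `calc_dH` can be evaluated by hand against the reference energy quoted in the docstring; for rutile-IrO2 at E_DFT/atom = -7.047516 eV the result is roughly -0.84 eV/atom (constants reproduced from the function above):

```python
# Reference energies reproduced from calc_dH above.
o_ref = -4.64915959
ir_metal_fit = -9.32910211636731

e_per_atom = -7.047516  # rutile-IrO2, eV/atom (from the docstring)

# AB2 branch: per-formula-unit enthalpy, then averaged over the 3 atoms.
dH = (2 + 1) * e_per_atom - 2 * o_ref - ir_metal_fit
dH_per_atom = dH / 3.0
```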
# +
data_dict_list = []
for bulk_i, polymorphs_i in bulk_systems_dict.items():
# print(bulk_i)
if bulk_i == "iro2":
stoich_i = "AB2"
elif bulk_i == "iro3":
stoich_i = "AB3"
for poly_j in polymorphs_i:
# print(" ", poly_j, sep="")
# poly_new_j = crystal_rename_dict.get(poly_j, poly_j)
poly_new_j = crystal_rename_dict.get(poly_j, "TEMP")
path_rel_j = os.path.join(
bulk_i,
poly_j)
path_j = os.path.join(
dir_root,
path_rel_j)
# #################################################
# Reading OUTCAR atoms
atoms_j = io.read(os.path.join(path_j, "OUTCAR"))
pot_e_j = atoms_j.get_potential_energy()
volume_j = atoms_j.get_volume()
num_atoms_j = atoms_j.get_global_number_of_atoms()
volume_pa_j = volume_j / num_atoms_j
# print(atoms_j.get_chemical_formula())
# #################################################
# Calc formation energy
dH_i = calc_dH(
pot_e_j / num_atoms_j,
stoich=stoich_i,
num_H_atoms=0,
)
# #################################################
data_dict_j = dict()
# #################################################
data_dict_j["stoich"] = stoich_i
# data_dict_j["crystal"] = poly_j
data_dict_j["crystal"] = poly_new_j
data_dict_j["dH"] = dH_i
data_dict_j["volume"] = volume_j
data_dict_j["num_atoms"] = num_atoms_j
data_dict_j["volume_pa"] = volume_pa_j
data_dict_j["atoms"] = atoms_j
data_dict_j["path"] = path_rel_j
# #################################################
data_dict_list.append(data_dict_j)
# #################################################
# #########################################################
df = pd.DataFrame(
data_dict_list
)
# #########################################################
# -
# +
df_seoin_bulk = df
# Pickling data ###########################################
directory = os.path.join(
os.environ["PROJ_irox_oer"],
"workflow/seoin_irox_data/process_bulk_data",
"out_data")
if not os.path.exists(directory):
os.makedirs(directory)
with open(os.path.join(directory, "df_seoin_bulk.pickle"), "wb") as fle:
pickle.dump(df_seoin_bulk, fle)
# #########################################################
# -
df
# + active=""
#
#
#
# + jupyter={"source_hidden": true}
# 'amm2',
# 'anatase',
# 'brookite',
# 'cmcm',
# 'columbite',
# 'pm-3m',
# 'pyrite',
# 'rutile',
# + jupyter={"source_hidden": true}
# 'anatase-fm',
# 'brookite-fm',
# 'columbite-fm',
# 'pyrite-fm',
# 'rutile-fm',
# 'Amm2',
# 'cmcm',
# 'pm-3m',
# + jupyter={"source_hidden": true}
# df.crystal.tolist()
| workflow/seoin_irox_data/process_bulk_data/process_bulk.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:bruges]
# language: python
# name: conda-env-bruges-py
# ---
# # Welcome to `bruges`
# This notebook accompanies [a blog post on agilegeoscience.com](http://www.agilegeoscience.com/blog/).
#
# If you are running this locally, you need to install [`bruges`](https://github.com/agile-geoscience/bruges) first:
#
# pip install bruges
#
# This notebook also requires [`welly`](https://github.com/agile-geoscience/welly), which you can install like so:
#
# pip install welly
#
# Check you're on version 0.3.2+ and then do 'the usual' imports.
import bruges
bruges.__version__
# +
import numpy as np
# %matplotlib inline
import matplotlib.pyplot as plt
# -
# <hr />
# ## AVO calculations on individual layers
# | | Vp [m/s] | Vs [m/s] | Density [kg/m3] |
# |-------------|----------|----------|----------|
# | upper layer | 3300 | 1500 | 2400 |
# | lower layer | 3050 | 1400 | 2075 |
# +
# Upper layer rock properties
vp1 = 3300.0
vs1 = 1500.0
rho1 = 2400.0
# Lower layer rock properties
vp2 = 3050.0
vs2 = 1400.0
rho2 = 2075.0
# -
bruges.reflection.akirichards(vp1, vs1, rho1, vp2, vs2, rho2, theta1=0)
# This is an `np.ndarray` of zero dimensions. You can use it just like a scalar value (or, in this case, a complex scalar).
#
# When calling `akirichards`, it might be more convenient to use list unpacking. That way we can pass the arguments as two lists, or 'rocks':
rock1 = [vp1, vs1, rho1]
rock2 = [vp2, vs2, rho2]
bruges.reflection.akirichards(*rock1, *rock2, theta1=0)
# We can also request the reflection coefficient at a particular angle, or a range of angles:
bruges.reflection.akirichards(*rock1, *rock2, theta1=30)
# ## Terms of Shuey's approximation
# We can get the individual terms of Shuey's linear approximation:
rc_terms = bruges.reflection.shuey(*rock1, *rock2, theta1=30, terms=True)
rc_terms
# The second term is the product of gradient and $\sin^2 \theta$, but you can get at the gradient directly with an option:
intercept, gradient = bruges.reflection.shuey(*rock1, *rock2, return_gradient=True)
intercept, gradient
# You can equally well pass in a list or array of angles:
theta_list = [0, 10, 20, 30]
rc_list = bruges.reflection.akirichards(*rock1, *rock2, theta1=theta_list)
rc_list
# Create an array of angles from 0 to 70, incremented by 1:
theta_range = np.arange(0, 70)
rc_range = bruges.reflection.akirichards(*rock1, *rock2, theta1=theta_range)
# Compare the two-term Aki-Richards approximation with the full Zoeppritz equation for an interface between two rocks:
rc_z = bruges.reflection.zoeppritz_rpp(*rock1, *rock2, theta1=theta_range)
# Put all this data (just the real parts) on an AVO plot:
# +
style = {'color': 'blue',
'fontsize': 10,
'ha':'left',
'va':'top',}
fig = plt.figure(figsize=(12,5))
# AVO plot
ax1 = fig.add_subplot(121)
ax1.plot(theta_range, rc_z.real, 'k', lw=3, alpha=0.25, label='Zoeppritz')
ax1.plot(theta_range, rc_range.real, 'k.', lw=3, alpha=0.5, label='Aki-Richards')
# We'll also add the four angles...
ax1.plot(theta_list, rc_list.real, 'bo', ms=10, alpha=0.5)
# Putting some annotations on the plot.
for theta, rc in zip(theta_list, rc_list.real):
ax1.text(theta, rc-0.004, '{:.3f}'.format(rc), **style)
ax1.legend()
ax1.set_ylim((-0.15, -0.05))
ax1.set_xlabel('Angle (degrees)')
ax1.set_ylabel('Amplitude')
ax1.grid()
# Intercept-Gradient crossplot.
ax2 = fig.add_subplot(122)
ax2.plot(intercept, gradient, 'go', ms=10, alpha = 0.5)
# Put spines for x and y axis.
ax2.axvline(0, color='k')
ax2.axhline(0, color='k')
# Set square axes limits.
mx = 0.25
ax2.set_xlim((-mx, mx))
ax2.set_ylim((-mx, mx))
# Label the axes and add gridlines.
ax2.set_xlabel('Intercept')
ax2.set_ylabel('Gradient')
ax2.grid()
plt.show()
# -
# <hr />
# ## Elastic modulus calculations
# Say I want to compute the Lamé parameters λ and µ from V<sub>P</sub>, V<sub>S</sub>, and density. As long as my inputs are in SI units, I can insert these values directly:
# | | Vp [m/s] | Vs [m/s] | Density [kg/m3] |
# |-------------|----------|----------|----------|
# | upper layer | 3300 | 1500 | 2400 |
# | lower layer | 3050 | 1400 | 2075 |
# Upper layer only
bruges.rockphysics.lam(*rock1), bruges.rockphysics.mu(*rock1)
# Let's do this on real data. We'll use `welly` to load a well, inspect the curves, and compute using those.
import welly
welly.__version__
w = welly.Well.from_las('../data/R-39.las')
p = welly.Project([w])
from IPython.display import HTML
HTML(p.curve_table_html())
w.plot(extents=(2000,3500))
w.data['DT4P']
# Turn things into Python variables for convenience.
dtp = w.data['DT4P']
dts = w.data['DT4S']
rhob = w.data['RHOB']
# Convert velocities and get a depth basis:
z = dtp.basis
vp = 1e6 / dtp
vp.units = 'm/s'
vs = 1e6 / dts
vs.units = 'm/s'
vs.plot()
vs = vs.despike(24, z=0.5)
vs[vs < 0] = np.nan
vs.plot()
# Now the rock properties...
lm = bruges.rockphysics.lam(vp, vs, rhob)
mu = bruges.rockphysics.mu(vp, vs, rhob)
# Create a crossplot:
# +
plt.figure(figsize=(7,6))
plt.scatter(lm*rhob, mu*rhob,
s=30, # marker size
c=z, cmap="gist_earth_r",
edgecolor='none',
alpha=0.2)
# Give the plot a colorbar.
cb = plt.colorbar()
cb.ax.invert_yaxis()
cb.set_label("Depth [m]")
# Give the plot some annotation.
plt.xlabel(r'$\lambda \rho$', size=18)
plt.ylabel(r'$\mu \rho$', size=18)
plt.grid(color='k', alpha=0.2)
plt.xlim(0, 8e13)
plt.ylim(0, 8e13)
plt.show()
# -
# <hr />
# ## Backus averaging
lb = 60 # Backus averaging length in metres.
dz = vp.step # Sample interval of the log in metres.
vp0, vs0, rhob_filt = bruges.rockphysics.backus(vp, vs, rhob, lb, dz)
# +
fig, axs = plt.subplots(figsize=(5,12), ncols=2)
ax = axs[0]
ax.plot(vp, z, 'k', alpha=0.45, label='Vp')
ax.plot(vp0, z, 'r', label='Backus')
ax.set_title('Backus average, Vp')
ax.set_ylabel(r'depth [m]', size=12)
ax.invert_yaxis()
ax.set_xlabel(r'Vp [m/s]', size=12)
ax.grid(color='k', alpha=0.2)
ax.legend()
ax = axs[1]
ax.plot(vs, z, 'k', alpha=0.45, label='Vs')
ax.plot(vs0, z, 'g', label='Backus')
ax.set_title('Backus average, Vs')
ax.set_yticklabels([])
ax.invert_yaxis()
ax.set_xlabel(r'Vs [m/s]', size=12)
ax.grid(color='k', alpha=0.2)
plt.show()
# -
# ## Offset synthetic
# The logs don't start at 0, so let's make a replacement layer, and give it a velocity ramp up to the start of the Vp log.
repl = int(vp.start/vp.step)
velocity = np.pad(vp0, (repl, 0), mode='linear_ramp', end_values=2000)
# Now we pad the logs up to a depth of 0 m.
vp0 = np.pad(vp0, (repl, 0), mode='edge')
vs0 = np.pad(vs0, (repl, 0), mode='edge')
rhob_filt = np.pad(rhob_filt, (repl, 0), mode='edge')
z = np.pad(z, (repl, 0), mode='edge')
# Now we time-convert the logs we need:
vpt, t = bruges.transform.depth_to_time(vp0, velocity, dt=0.001, dz=vp.step, return_t=True)
vst = bruges.transform.depth_to_time(vs0, velocity, dt=0.001, dz=vp.step)
rhot = bruges.transform.depth_to_time(rhob_filt, velocity, dt=0.001, dz=vp.step)
t
# Make the reflection coefficient series:
rc = bruges.reflection.reflectivity(vpt, vst, rhot, theta=np.arange(60), method='zoeppritz_rpp')
# Because the RC series is 1 value shorter than the logs, we have to lose a value from the time basis:
t = t[1:]
# And a wavelet:
wavelet = bruges.filters.ricker(duration=0.128, dt=0.001, f=45)
# Then perform convolution to make the synthetic response:
syn = np.apply_along_axis(lambda tr: np.convolve(tr, wavelet, mode='same'), arr=rc, axis=-1)
# And plot the result:
# +
import matplotlib.gridspec as gridspec
gs = gridspec.GridSpec(1, 4)
fig = plt.figure(figsize=(10, 10))
ax = plt.subplot(gs[0])
ax.plot(syn.real[0], t, 'k')
ax.fill_betweenx(t, 0, syn.real[0], where=syn.real[0]>0, color='k')
ax.set_ylim(np.amax(t), 1.75)
ax.set_ylabel('Travel time [seconds]')
ax = plt.subplot(gs[1:])
ax.imshow(syn.real.T, cmap='RdBu', aspect='auto', extent=[0, 60, np.amax(t), 0])
ax.set_ylim(np.amax(t), 1.75)
ax.set_yticklabels([])
ax.set_xlabel('Offset [degrees]')
plt.show()
# -
# <hr />
# <img src="https://avatars1.githubusercontent.com/u/1692321?v=3&s=200" style="float:right;" width="40px" /><p style="color:gray; float:right;">© 2018 <a href="http://www.agilegeoscience.com/">Agile Geoscience</a> — <a href="https://creativecommons.org/licenses/by/4.0/">CC-BY</a> — Have fun! </p>
| notebooks_dev/Welcome_to_bruges.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#Import dependencies
import pandas as pd
from sklearn.preprocessing import OneHotEncoder
from pathlib import Path
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
#dataset
file_path = "crypto_data.csv"
crypto_df = pd.read_csv(file_path)
crypto_df.head()
crypto_df = crypto_df[crypto_df['IsTrading'] == True]
crypto_df.head()
#drop null
crypto_df.dropna(inplace=True)
crypto_df
#removing rows with no coins being mined
crypto_df = crypto_df[crypto_df["TotalCoinsMined"]> 0]
crypto_df
crypto_df = crypto_df.drop(columns='CoinName')
crypto_df
crypto_df.set_index('Unnamed: 0', inplace=True)
crypto_df = crypto_df.drop(columns=['IsTrading'])
crypto_df
#get info
crypto_df.info()
# Convert to numerical Algorithm and ProofType
X = pd.get_dummies(crypto_df, columns=['Algorithm', 'ProofType'])
X
# # Using PCA
#
# Standardize the data
crypto_scaled = StandardScaler().fit_transform(X[['TotalCoinsMined','TotalCoinSupply']])
crypto_scaled
# +
# Applying PCA, keeping enough components to explain 99% of the variance
# Initialize PCA model
pca = PCA(n_components=0.99)
# Transform the scaled crypto data into its principal components
crypto_pca = pca.fit_transform(crypto_scaled)
# -
# Transform PCA data to a DataFrame
crypto_pca_df = pd.DataFrame(
data=crypto_pca )
crypto_pca_df
# Fetch the explained variance
pca.explained_variance_ratio_
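# Passing the float `n_components=0.99` tells scikit-learn to keep the smallest number of components whose explained-variance ratios sum to at least 99%; the criterion can be checked directly on the fitted ratios (a toy sketch on random data):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))  # toy stand-in for the scaled features

# Keep enough components for >= 99% of the variance.
pca = PCA(n_components=0.99).fit(X)
cumulative = np.cumsum(pca.explained_variance_ratio_)
```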
# # Running KMeans with PCA
# +
# Finding the best value for k
inertia = []
k = list(range(1, 11))
# Calculate the inertia for the range of k values
for i in k:
km = KMeans(n_clusters=i, random_state=0)
km.fit(crypto_pca_df)
inertia.append(km.inertia_)
# Creating the Elbow Curve
elbow_data = {"k": k, "inertia": inertia}
df_elbow = pd.DataFrame(elbow_data)
plt.plot(df_elbow['k'], df_elbow['inertia'])
plt.xticks(list(range(11)))
plt.title('Elbow Curve')
plt.xlabel('Number of clusters')
plt.ylabel('Inertia')
plt.show()
# +
# Get predictions
model = KMeans(n_clusters=3, random_state=5)
#fit the model
model.fit(crypto_pca_df)
#predict clusters
predictions = model.predict(crypto_pca_df)
len(predictions)
# -
# # t-SNE
tsne = TSNE(learning_rate=35)
#Fit
tsne_features = tsne.fit_transform(crypto_pca_df)
tsne_features.shape
# +
# Prepare to plot the dataset
# The first column of transformed features
crypto_pca_df['x'] = tsne_features[:,0]
# The second column of transformed features
crypto_pca_df['y'] = tsne_features[:,1]
# -
# Scatter plot
plt.scatter(crypto_pca_df['x'], crypto_pca_df['y'])
plt.show()
crypto_pca_df['x']
crypto_pca_df['y']
#Color Scatter plot
plt.scatter(crypto_pca_df['x'], crypto_pca_df['y'], c=predictions)
plt.show()
| crypto.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Assignment 2
#
# Before working on this assignment please read these instructions fully. In the submission area, you will notice that you can click the link to **Preview the Grading** for each step of the assignment. This is the criteria that will be used for peer grading. Please familiarize yourself with the criteria before beginning the assignment.
#
# An NOAA dataset has been stored in the file `data/C2A2_data/BinnedCsvs_d400/3894b80f8e87b2f3edcca7c9b65f23c5cb708e1a0f4a2f6d4c35aec8.csv`. The data for this assignment comes from a subset of The National Centers for Environmental Information (NCEI) [Daily Global Historical Climatology Network](https://www1.ncdc.noaa.gov/pub/data/ghcn/daily/readme.txt) (GHCN-Daily). The GHCN-Daily is comprised of daily climate records from thousands of land surface stations across the globe.
#
# Each row in the assignment datafile corresponds to a single observation.
#
# The following variables are provided to you:
#
# * **id** : station identification code
# * **date** : date in YYYY-MM-DD format (e.g. 2012-01-24 = January 24, 2012)
# * **element** : indicator of element type
# * TMAX : Maximum temperature (tenths of degrees C)
# * TMIN : Minimum temperature (tenths of degrees C)
# * **value** : data value for element (tenths of degrees C)
#
# For this assignment, you must:
#
# 1. Read the documentation and familiarize yourself with the dataset, then write some python code which returns a line graph of the record high and record low temperatures by day of the year over the period 2005-2014. The area between the record high and record low temperatures for each day should be shaded.
# 2. Overlay a scatter of the 2015 data for any points (highs and lows) for which the ten year record (2005-2014) record high or record low was broken in 2015.
# 3. Watch out for leap days (i.e. February 29th), it is reasonable to remove these points from the dataset for the purpose of this visualization.
# 4. Make the visual nice! Leverage principles from the first module in this course when developing your solution. Consider issues such as legends, labels, and chart junk.
#
# The data you have been given is near **Berlin, Land Berlin, Germany**, and the stations the data comes from are shown on the map below.
# +
import matplotlib.pyplot as plt
import mplleaflet
import pandas as pd
def leaflet_plot_stations(binsize, hashid):
df = pd.read_csv('data/C2A2_data/BinSize_d{}.csv'.format(binsize))
station_locations_by_hash = df[df['hash'] == hashid]
lons = station_locations_by_hash['LONGITUDE'].tolist()
lats = station_locations_by_hash['LATITUDE'].tolist()
plt.figure(figsize=(8,8))
plt.scatter(lons, lats, c='r', alpha=0.7, s=200)
return mplleaflet.display()
leaflet_plot_stations(400,'3894b80f8e87b2f3edcca7c9b65f23c5cb708e1a0f4a2f6d4c35aec8')
# -
# ### Manipulating the Data
# Reading the data
df = pd.read_csv('data/C2A2_data/BinnedCsvs_d400/3894b80f8e87b2f3edcca7c9b65f23c5cb708e1a0f4a2f6d4c35aec8.csv')
df.head(10)
df.info()
# Convert the Date from an Object(string) type to a datetime
df['Date'] = pd.to_datetime(df.Date, format='%Y-%m-%d')
df.info()
# remove the leap days(i.e. February 29th)
index_remove = df[(df.Date.dt.month == 2) & (df.Date.dt.day==29)].index.values
df.drop(index_remove,axis=0,inplace=True)
r = df.sort_values(by=['Date'])
r.head()
t = r.copy()
t['Month_Day']=t['Date'].dt.strftime('%m/%d')
t['Year'] = t['Date'].dt.year
t.head()
'''
#make a copy
t = df.copy()
t['Month'] = t.Date.dt.month
t['Day'] = t.Date.dt.day
t['Month_Day'] = t[t.columns[4:]].apply(lambda x: '-'.join(x.dropna().astype(int).astype(str)),axis=1)
t.head()
'''
#make a copy
#t = df.copy()
# split off the 2015 data, leaving 2005-2014 in t
t_2015 = t[(t.Date.dt.year == 2015)]
t_2015_index = t_2015.index.values
t.drop(t_2015_index,axis=0,inplace=True)
# take the min temperature of each day and save the data in a list
mint_day = pd.DataFrame([t.groupby(t.Month_Day).Data_Value.min()]).T.reset_index()
# take the max temperature of each day and save the data in a list
maxt_day = pd.DataFrame([t.groupby(t.Month_Day).Data_Value.max()]).T.reset_index()
#Same for 2015
mint_day_2015 = pd.DataFrame([t_2015.groupby(t_2015.Month_Day).Data_Value.min()]).T.reset_index()
maxt_day_2015 = pd.DataFrame([t_2015.groupby(t_2015.Month_Day).Data_Value.max()]).T.reset_index()
import numpy as np
#Compute broken 2015 by comparing two arrays
# provide index values
broken_low = np.where(mint_day_2015.Data_Value < mint_day.Data_Value)
broken_high = np.where(maxt_day_2015.Data_Value > maxt_day.Data_Value)
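# These comparisons rely on both frames sharing the same day-of-year ordering; `np.where` then returns the positions at which the 2015 value beats the 2005-2014 record. A minimal check of the pattern on toy values (tenths of degrees, as in the dataset):

```python
import numpy as np

record_low = np.array([-50, -80, -120])  # 2005-2014 record lows per day
low_2015 = np.array([-60, -70, -130])    # 2015 lows for the same days

# Positions where 2015 broke the record low.
broken = np.where(low_2015 < record_low)
```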
# ### Visualization
# +
# %matplotlib notebook
import matplotlib as mpl
import matplotlib.pyplot as plt
from matplotlib.backends.backend_agg import FigureCanvasAgg
from matplotlib.figure import Figure
import matplotlib.dates as dates
# +
plt.figure()
# fix the x tick labels
l = list(range(0,365,25))+[364]
x_ticks = mint_day_2015.iloc[l].Month_Day
x = plt.gca().xaxis
[item.set_rotation(45) for item in x.get_ticklabels()]
plt.xticks(l,x_ticks)
#plot the records for the period 2005-2014 with the broken record in 2015
## line plot for the records
plt.plot(maxt_day.index,maxt_day.Data_Value/10.0, color='r', label ='record high Temp. 2005-2014')
plt.plot(mint_day.index,mint_day.Data_Value/10.0, color='b', label ='record low Temp. 2005-2014')
##scatter plot for the points
plt.scatter(broken_high[0][:],maxt_day_2015.Data_Value.iloc[broken_high[0][:]]/10.0, color=(0,0,0), marker='o', label='record high was broken in 2015')
plt.scatter(broken_low[0][:],mint_day_2015.Data_Value.iloc[broken_low[0][:]]/10.0, color='c', marker='o', label='record low was broken in 2015')
# fill the area between the high and low Temp. records
plt.gca().fill_between(range(len(mint_day.index)),
np.array(mint_day.Data_Value/10.0), np.array(maxt_day.Data_Value/10.0),
facecolor='blue',
alpha=0.25)
#set the frame, title, legend, and axes
#plt.legend(loc='best', frameon = False)
#plt.legend(loc = 4, frameon = False)
#plt.legend(bbox_to_anchor=(1.05, 1),loc=2, frameon = False)
plt.legend(loc='best', frameon = False,prop={'size': 8})
plt.title('Temperature records for Berlin area through 2005-2015')
plt.xlabel('Date')
plt.ylabel('Temperature Degrees C')
# Set date axis format
ax = plt.gca()
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#adjust the plot
plt.subplots_adjust(bottom=0.25)
# -
plt.savefig('assignment2_Mawas.png', format='png')
| Course2/Week2/Assignment2_Mawas.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import librosa
import librosa.display
import soundfile
import matplotlib.pyplot as plt
import os
# -
# # Process Monophonic samples
#
# - remove noise at the beginning by onset detection
# - normalise to -6dBFS
# +
nofx_path = '/Volumes/STEVE/DATASETS/IDMT-SMT-AUDIO-EFFECTS/Gitarre monophon/Samples/NoFX'
out_path = '/Volumes/Macintosh HD/DATASETS/NoFX_mono_preprocessed'
# constants
_sr = 44100
_n_fft = 2048
_win_length = _n_fft
_hop_length = int(_win_length/4)
# fade in vector
_fade_length_in_sec = 0.02
_fade_length_in_samples = int(_fade_length_in_sec * _sr)
fade_amp = np.linspace(0, 1, num=_fade_length_in_samples, endpoint=True)
# +
if not os.path.exists('%s' % out_path):
os.makedirs('%s' % out_path)
for file in os.listdir(nofx_path):
if not(file.startswith("._")) and file.endswith(".wav"):
filename = file[:-4]
# open file
audio_file, _ = librosa.load("%s/%s.wav" % (nofx_path, filename),
sr=_sr,
mono=True,
offset=0.0,
duration=None,
dtype=np.float32,
res_type='kaiser_best')
# clean file:
# envelope
oenv_raw = librosa.onset.onset_strength(y=audio_file, sr=_sr)
# onset without backtrack
onset_raw = librosa.onset.onset_detect(onset_envelope=oenv_raw, backtrack=False)
# get main onset index
main_onset_idx = np.argmax(oenv_raw[onset_raw])
# backtrack onsets
onset_raw_bt = librosa.onset.onset_backtrack(onset_raw, oenv_raw)
# onset times
ons_times_in_samples = librosa.frames_to_samples(onset_raw_bt, hop_length=_hop_length, n_fft=None)
# apply fade in
audio_file_proc = np.concatenate((
audio_file[0:ons_times_in_samples[main_onset_idx]-_fade_length_in_samples] * 0,
audio_file[ons_times_in_samples[main_onset_idx]-_fade_length_in_samples:ons_times_in_samples[main_onset_idx]] * fade_amp,
audio_file[ons_times_in_samples[main_onset_idx]:]))
# normalise to -6dBFS
audio_file_proc = audio_file_proc / (2 * max(abs(audio_file_proc)))
# write file
soundfile.write(file="%s/%s.wav" % (out_path, filename),
data=audio_file_proc,
samplerate=_sr,
subtype='PCM_16')
print('DONE!')
# -
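# A minimal NumPy-only sketch of the two processing steps above: a linear fade-in just before
# the main onset, then peak normalisation to -6 dBFS (dividing by twice the absolute peak
# leaves the peak at 0.5 full scale, i.e. 20·log10(0.5) ≈ -6.02 dB). The synthetic sine signal
# and the helper name `fade_in_and_normalise` are illustrative assumptions, not part of the
# dataset pipeline.

```python
import numpy as np

def fade_in_and_normalise(y, onset_sample, fade_len):
    """Zero everything before the fade window, ramp linearly up to the onset,
    then scale so the peak sits at 0.5 full scale (about -6 dBFS)."""
    y = y.copy()
    start = max(onset_sample - fade_len, 0)
    y[:start] = 0.0
    y[start:onset_sample] *= np.linspace(0, 1, onset_sample - start)
    return y / (2 * np.max(np.abs(y)))

# synthetic test signal: one second of a 440 Hz sine at 44.1 kHz
sr = 44100
t = np.arange(sr) / sr
y = np.sin(2 * np.pi * 440 * t)
out = fade_in_and_normalise(y, onset_sample=1000, fade_len=882)  # 20 ms fade at 44.1 kHz
peak_db = 20 * np.log10(np.max(np.abs(out)))
print(round(peak_db, 2))  # → -6.02
```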
# # Process Polyphonic samples
#
# - normalise to -6dBFS
# +
nofx_path = '/Volumes/STEVE/DATASETS/IDMT-SMT-AUDIO-EFFECTS/Gitarre polyphon/Samples/NoFX'
out_path = '/Volumes/Macintosh HD/DATASETS/NoFX_poly_preprocessed'
# constants
_sr = 44100
_n_fft = 2048
_win_length = _n_fft
_hop_length = int(_win_length/4)
# +
os.makedirs(out_path, exist_ok=True)
for file in os.listdir(nofx_path):
if not(file.startswith("._")) and file.endswith(".wav"):
filename = file[:-4]
# open file
audio_file, _ = librosa.load("%s/%s.wav" % (nofx_path, filename),
sr=_sr,
mono=True,
offset=0.0,
duration=None,
dtype=np.float32,
res_type='kaiser_best')
# normalise to -6dBFS
audio_file_proc = audio_file / (2 * max(abs(audio_file)))
# write file
soundfile.write(file="%s/%s.wav" % (out_path, filename),
data=audio_file_proc,
samplerate=_sr,
subtype='PCM_16')
print('DONE!')
# -
| src/scripts/NoFX_preprocess_script.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.10 64-bit (''DA2021'': conda)'
# metadata:
# interpreter:
# hash: 045f3ba9fbd7084f57766e40ea0c4bb3a1b26edfff4812021a661f88e63a0844
# name: python3
# ---
from cmdstanpy import CmdStanModel
import pandas as pd
import arviz as az
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as stats
# ## Exercise 1 - Generated Quantities Block
gen_quant = CmdStanModel(stan_file='code_1.stan')
samples = gen_quant.sample(data={'M':10},
fixed_param=True,
iter_sampling=1000,
iter_warmup=0,
chains = 1)
# ## Exercise 2 - Constraints on the data
bern1 = CmdStanModel(stan_file='code_2.stan')
samp_bern1 = bern1.sample(data={'N':2, 'y':[0,2]})
bern2 = CmdStanModel(stan_file='code_3.stan')
samp_bern2 = bern2.sample(data={'N':2, 'y':[0,2]})
# ## Exercise 3 - Constraints on parameters
# ### Unconstrained parameters
model_gm1 = CmdStanModel(stan_file='code_4.stan')
out_gamma1 = model_gm1.sample(output_dir='samples',iter_sampling=6000,iter_warmup=1000, seed=4838282)
out_gamma1.diagnose()
# +
N=500
xs = np.linspace(0,8,N)
pdfs = stats.gamma.pdf(xs, 1.25, scale = 1 / 1.25)
plt.plot(xs, pdfs, linewidth=2)
## add histogram of theta samples with 160 bins
plt.gca().set_xlabel("theta")
plt.gca().set_ylabel("Probability Density Function")
plt.show()
# -
# ### Constrained parameter
model_gm2 = CmdStanModel(stan_file='code_5.stan')
out_gamma2 = model_gm2.sample(output_dir='samples',iter_sampling=6000,iter_warmup=1000, seed=4838282)
out_gamma2.diagnose()
# +
N=500
xs = np.linspace(0,8,N)
pdfs = stats.gamma.pdf(xs, 1.25, scale = 1 / 1.25)
plt.plot(xs, pdfs, linewidth=2)
## add histogram of theta samples from the second model with 160 bins
plt.gca().set_xlabel("theta")
plt.gca().set_ylabel("Probability Density Function")
plt.show()
# -
# ## Exercise 4 - Selection of parameters using equation solving
#
#
# +
model_tune = CmdStanModel(stan_file='code_6.stan')
F = # number of letters in the first name
L = # number of letters in the last name
y0 = # initial guess for the equation solving
data={'y_guess':[y0],
'theta':[(F+L)/2]}
tunes = model_tune.sample(data=data, fixed_param=True, iter_sampling=1, iter_warmup=0, chains = 1)
# -
# ## Exercise 5 - different methods of defining models
#
# +
model_samp_st = CmdStanModel(stan_file='code_7.stan')
model_log_target = CmdStanModel(stan_file='code_8.stan')
model_log_target_ind = CmdStanModel(stan_file='code_9.stan')
data = {'N': F}
seed = #integer, your date of birth in the DDMMYYYY format without a leading zero (or, if you are GDPR-wary, use any other date you wish)
result_1 = model_samp_st.sample(data=data,seed=seed)
result_2 = model_log_target.sample(data=data,seed=seed)
result_3 = model_log_target_ind.sample(data=data,seed=seed)
# -
az.plot_density([result_1,result_2,result_3])
plt.show()
# ## Exercise 6 - generated quantities post sampling
model_gq = CmdStanModel(stan_file='code_10.stan')
# fill in with chosen result from previous exercise
mean_of_y = model_gq.generate_quantities(data=data,
mcmc_sample = )
# investigate the output and plot a histogram of the mean_y variable
| Data Analytics/Lab 2 - Intro to stan/lab2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# + [markdown] colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 17} colab_type="code" deletable=true editable=true executionInfo={"elapsed": 352, "status": "ok", "timestamp": 1530144465190, "user": {"displayName": "<NAME>", "photoUrl": "//lh4.googleusercontent.com/-y5jgaxSBf7U/AAAAAAAAAAI/AAAAAAAAAAg/t9OhWH_FsNk/s50-c-k-no/photo.jpg", "userId": "108065192564137085250"}, "user_tz": 240} id="iWEJt3g2trTx" outputId="3e426f7b-2789-4f1b-befd-77cd2bf9b5de"
# ## How to normalize features in TensorFlow
# + colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 202} colab_type="code" deletable=true editable=true executionInfo={"elapsed": 2393, "status": "ok", "timestamp": 1531404855794, "user": {"displayName": "<NAME>", "photoUrl": "//lh4.googleusercontent.com/-y5jgaxSBf7U/AAAAAAAAAAI/AAAAAAAAAAg/t9OhWH_FsNk/s50-c-k-no/photo.jpg", "userId": "108065192564137085250"}, "user_tz": 240} id="V3gsgdkgci_u" outputId="49f5da99-d6cd-405f-ad05-a2831e6abff5"
import shutil
import numpy as np
import pandas as pd
import tensorflow as tf
df = pd.read_csv("https://storage.googleapis.com/ml_universities/california_housing_train.csv", sep=",")
msk = np.random.rand(len(df)) < 0.8
traindf = df[msk]
evaldf = df[~msk]
traindf.head()
# + colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 125} colab_type="code" deletable=true editable=true executionInfo={"elapsed": 796, "status": "ok", "timestamp": 1531404856687, "user": {"displayName": "<NAME>", "photoUrl": "//lh4.googleusercontent.com/-y5jgaxSBf7U/AAAAAAAAAAI/AAAAAAAAAAg/t9OhWH_FsNk/s50-c-k-no/photo.jpg", "userId": "108065192564137085250"}, "user_tz": 240} id="vw6nMweu7ENH" outputId="ec2a2170-e7c8-455f-8236-88532b1e6604"
def get_normalization_parameters(traindf, features):
"""Get the normalization parameters (E.g., mean, std) for traindf for
features. We will use these parameters for training, eval, and serving."""
def _z_score_params(column):
mean = traindf[column].mean()
std = traindf[column].std()
return {'mean': mean, 'std': std}
normalization_parameters = {}
for column in features:
normalization_parameters[column] = _z_score_params(column)
return normalization_parameters
NUMERIC_FEATURES = ['housing_median_age', 'total_rooms', 'total_bedrooms',
'population', 'households', 'median_income']
normalization_parameters = get_normalization_parameters(traindf,
NUMERIC_FEATURES)
normalization_parameters
# + colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" deletable=true editable=true id="L4tC2AyetTfa"
def _numeric_column_normalized(column_name, normalizer_fn):
return tf.feature_column.numeric_column(column_name,
normalizer_fn=normalizer_fn)
# Define your feature columns
def create_feature_cols(features, use_normalization):
"""Create our feature columns using tf.feature_column. This function will
get executed during training, evaluation, and serving."""
normalized_feature_columns = []
for column_name in features:
if use_normalization:
column_params = normalization_parameters[column_name]
mean = column_params['mean']
std = column_params['std']
def normalize_column(col, mean=mean, std=std): # Bind this column's mean/std now; a bare closure would see only the last loop iteration's values.
return (col - mean)/std
normalizer_fn = normalize_column
else:
normalizer_fn = None
normalized_feature_columns.append(_numeric_column_normalized(column_name,
normalizer_fn))
return normalized_feature_columns
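# The per-column normalizer pattern above can be sketched in plain Python. Binding `mean`
# and `std` as default arguments is what gives each closure its own column's parameters —
# a closure over the loop variables would see only the last column's values when called.
# The helper name `make_z_scorers` and the toy rows below are assumptions for illustration.

```python
import statistics

def make_z_scorers(rows, columns):
    """Build one z-score function per column from a list of dict rows."""
    scorers = {}
    for col in columns:
        values = [row[col] for row in rows]
        mean, std = statistics.mean(values), statistics.pstdev(values)
        # Default arguments snapshot this iteration's mean/std for the closure.
        scorers[col] = lambda x, mean=mean, std=std: (x - mean) / std
    return scorers

rows = [{'rooms': 2, 'income': 1.0}, {'rooms': 4, 'income': 3.0}]
scorers = make_z_scorers(rows, ['rooms', 'income'])
print(scorers['rooms'](4))   # → 1.0  (one population std above the mean)
```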
# + colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" deletable=true editable=true id="jokD5LlIc9nd"
def input_fn(df, shuffle=True):
"""For training and evaluation inputs."""
return tf.estimator.inputs.pandas_input_fn(
x = df,
y = df["median_house_value"]/100000, # Scale target.
shuffle = shuffle)
def train_and_evaluate(use_normalization, outdir):
shutil.rmtree(outdir, ignore_errors = True) # start fresh each time
feature_columns = create_feature_cols(NUMERIC_FEATURES, use_normalization)
run_config = tf.estimator.RunConfig(save_summary_steps=10,
model_dir = outdir # More granular checkpointing for TensorBoard.
)
model = tf.estimator.LinearRegressor(feature_columns = feature_columns, config=run_config)
# Training input function.
train_spec = tf.estimator.TrainSpec(input_fn=input_fn(traindf),
max_steps=1000)
def json_serving_input_fn():
"""Build the serving inputs. For serving real-time predictions
using ml-engine."""
inputs = {}
for feat in feature_columns:
inputs[feat.name] = tf.placeholder(shape=[None], dtype=feat.dtype)
return tf.estimator.export.ServingInputReceiver(inputs, inputs)
# Evaluation and serving input function.
exporter = tf.estimator.FinalExporter('housing', json_serving_input_fn)
eval_spec = tf.estimator.EvalSpec(input_fn=input_fn(evaldf),
exporters=[exporter],
name='housing-eval')
# Train and evaluate the model.
tf.estimator.train_and_evaluate(model, train_spec, eval_spec)
return model
# + colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 1441} colab_type="code" deletable=true editable=true executionInfo={"elapsed": 4574, "status": "ok", "timestamp": 1531404862173, "user": {"displayName": "<NAME>", "photoUrl": "//lh4.googleusercontent.com/-y5jgaxSBf7U/AAAAAAAAAAI/AAAAAAAAAAg/t9OhWH_FsNk/s50-c-k-no/photo.jpg", "userId": "108065192564137085250"}, "user_tz": 240} id="Fdfy-nRG190n" outputId="06134c73-ae01-4169-e803-16939d116403"
results = train_and_evaluate(False, 'housing_trained')
normalized_results = train_and_evaluate(True, 'housing_trained_normalization')
# + [markdown] colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 34} colab_type="code" deletable=true editable=true executionInfo={"elapsed": 433, "status": "ok", "timestamp": 1530145527924, "user": {"displayName": "<NAME>", "photoUrl": "//lh4.googleusercontent.com/-y5jgaxSBf7U/AAAAAAAAAAI/AAAAAAAAAAg/t9OhWH_FsNk/s50-c-k-no/photo.jpg", "userId": "108065192564137085250"}, "user_tz": 240} id="ceasMqSUcOTm" outputId="42698cd6-3052-4c4e-a017-b8daf66d6cd0"
# ## Deploy on Google Cloud ML Engine
# + [markdown] deletable=true editable=true
# ### Test model training locally
# + deletable=true editable=true language="bash"
# OUTPUT_DIR='trained_model'
# export PYTHONPATH=${PYTHONPATH}:${PWD}/model_code
# python -m trainer.task --outdir $OUTPUT_DIR --normalize_input 1
# + [markdown] deletable=true editable=true
# ### Train on the cloud
# Test cloud parameters (optional)
# + deletable=true editable=true language="bash"
# OUTPUT_DIR='housing_trained_model'
# JOBNAME=my_ml_job_$(date -u +%y%m%d_%H%M%S)
# REGION='us-central1'
# PACKAGE_PATH=$PWD/model_code/trainer
#
# gcloud ml-engine local train\
# --package-path=$PACKAGE_PATH\
# --module-name=trainer.task\
# --\
# --outdir=$OUTPUT_DIR\
# --normalize_input=0
# + [markdown] deletable=true editable=true
# ### Submit job
# + deletable=true editable=true language="bash"
# JOBNAME=my_ml_job_$(date -u +%y%m%d_%H%M%S)
# REGION='us-central1'
# BUCKET='gs://crawles-sandbox'
# OUTPUT_DIR=$BUCKET/'housing_trained_model'
# PACKAGE_PATH=$PWD/model_code/trainer
#
# gcloud ml-engine jobs submit training $JOBNAME \
# --package-path=$PACKAGE_PATH \
# --module-name=trainer.task \
# --region=$REGION \
# --staging-bucket=$BUCKET\
# --scale-tier=BASIC \
# --runtime-version=1.8 \
# -- \
# --outdir=$OUTPUT_DIR\
# --normalize_input=0
| blogs/feature_column_normalization/feature_normalization.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
s = '1234567890'
print('s =', s)
print('isdecimal:', s.isdecimal())
print('isdigit:', s.isdigit())
print('isnumeric:', s.isnumeric())
s = '1234567890'
print('s =', s)
print('isdecimal:', s.isdecimal())
print('isdigit:', s.isdigit())
print('isnumeric:', s.isnumeric())
s = '-1.23'
print('s =', s)
print('isdecimal:', s.isdecimal())
print('isdigit:', s.isdigit())
print('isnumeric:', s.isnumeric())
s = '10\u00B2'
print('s =', s)
print('isdecimal:', s.isdecimal())
print('isdigit:', s.isdigit())
print('isnumeric:', s.isnumeric())
s = '\u00BD'
print('s =', s)
print('isdecimal:', s.isdecimal())
print('isdigit:', s.isdigit())
print('isnumeric:', s.isnumeric())
s = '\u2166'
print('s =', s)
print('isdecimal:', s.isdecimal())
print('isdigit:', s.isdigit())
print('isnumeric:', s.isnumeric())
s = '一二三四五六七八九〇'
print('s =', s)
print('isdecimal:', s.isdecimal())
print('isdigit:', s.isdigit())
print('isnumeric:', s.isnumeric())
s = '壱億参阡萬'
print('s =', s)
print('isdecimal:', s.isdecimal())
print('isdigit:', s.isdigit())
print('isnumeric:', s.isnumeric())
s = 'abc'
print('s =', s)
print('isalpha:', s.isalpha())
s = 'あいうえお'
print('s =', s)
print('isalpha:', s.isalpha())
s = 'アイウエオ'
print('s =', s)
print('isalpha:', s.isalpha())
s = '漢字'
print('s =', s)
print('isalpha:', s.isalpha())
s = '1234567890'
print('s =', s)
print('isalpha:', s.isalpha())
s = '1234567890'
print('s =', s)
print('isalpha:', s.isalpha())
s = '一二三四五六七八九'
print('s =', s)
print('isalpha:', s.isalpha())
s = '壱億参阡萬'
print('s =', s)
print('isalpha:', s.isalpha())
s = '〇'
print('s =', s)
print('isalpha:', s.isalpha())
s = '\u2166'
print('s =', s)
print('isalpha:', s.isalpha())
s = 'abc123'
print('s =', s)
print('isalnum:', s.isalnum())
print('isalpha:', s.isalpha())
print('isdecimal:', s.isdecimal())
print('isdigit:', s.isdigit())
print('isnumeric:', s.isnumeric())
s = 'abc123+-,.&'
print('s =', s)
print('isascii:', s.isascii())
print('isalnum:', s.isalnum())
s = 'あいうえお'
print('s =', s)
print('isascii:', s.isascii())
print('isalnum:', s.isalnum())
s = ''
print('s =', s)
print('isalnum:', s.isalnum())
print('isalpha:', s.isalpha())
print('isdecimal:', s.isdecimal())
print('isdigit:', s.isdigit())
print('isnumeric:', s.isnumeric())
print('isascii:', s.isascii())
print(bool(''))
print(bool('abc123'))
s = '-1.23'
print('s =', s)
print('isalnum:', s.isalnum())
print('isalpha:', s.isalpha())
print('isdecimal:', s.isdecimal())
print('isdigit:', s.isdigit())
print('isnumeric:', s.isnumeric())
print('isascii:', s.isascii())
print(float('-1.23'))
print(type(float('-1.23')))
# +
# print(float('abc'))
# ValueError: could not convert string to float: 'abc'
# -
def is_num(s):
try:
float(s)
except ValueError:
return False
else:
return True
print(is_num('123'))
print(is_num('-1.23'))
print(is_num('+1.23e10'))
print(is_num('abc'))
print(is_num('10,000,000'))
def is_num_delimiter(s):
try:
float(s.replace(',', ''))
except ValueError:
return False
else:
return True
print(is_num_delimiter('10,000,000'))
def is_num_delimiter2(s):
try:
float(s.replace(',', '').replace(' ', ''))
except ValueError:
return False
else:
return True
print(is_num_delimiter2('10,000,000'))
print(is_num_delimiter2('10 000 000'))
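As a complement to `is_num()`: characters for which `isnumeric()` is `True` but `float()` fails (e.g. the vulgar fraction '½' or the Roman numeral 'Ⅶ' seen earlier) can still be resolved to a number with the stdlib `unicodedata` module. The helper `numeric_value` below is an illustrative sketch.

```python
import unicodedata

def numeric_value(s):
    """Return the numeric value of a single character, or None if it has none."""
    try:
        return unicodedata.numeric(s)
    except (TypeError, ValueError):
        # TypeError: more than one character; ValueError: not a numeric character
        return None

print(numeric_value('\u00BD'))  # ½ → 0.5
print(numeric_value('\u2166'))  # Ⅶ → 7.0
print(numeric_value('a'))       # → None
```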
| notebook/str_num_determine.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/AnkiiG/create-coco-JSON/blob/master/coco%20JSON.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + colab={"base_uri": "https://localhost:8080/"} id="u_CtyvEeU5Mp" outputId="4a0dc050-6abc-4fb3-fcb3-02c65dd802de"
from google.colab import drive
drive.mount('/content/drive')
# + colab={"base_uri": "https://localhost:8080/"} id="rkymRXpHW-OY" outputId="dc78a355-b722-43f5-dd3f-4933dccadf70"
# !git clone https://github.com/Tony607/labelme2coco.git
# + colab={"base_uri": "https://localhost:8080/"} id="PXW8sJUjXn8p" outputId="bd6563ac-f43e-4b59-9a4c-afb4c7020da5"
# !pip install pyqt5
# + colab={"base_uri": "https://localhost:8080/"} id="XoDm31efXtYF" outputId="f36d7b0f-92db-4344-be28-60a2c6611b3f"
# !pip install labelme
# + id="8tS18B76W0Oq"
import os
import argparse
import json
from labelme import utils
import numpy as np
import glob
import PIL.Image
import PIL.ImageDraw  # used by polygons_to_mask below
# + colab={"base_uri": "https://localhost:8080/"} id="Adc3xlRTYAEl" outputId="d76e4fe0-57ca-45f8-cb02-09e47848c667"
class labelme2coco(object):
def __init__(self, labelme_json=[], save_json_path="./coco.json"):
"""
:param labelme_json: the list of all labelme json file paths
:param save_json_path: the path to save new json
"""
self.labelme_json = labelme_json
self.save_json_path = save_json_path
self.images = []
self.categories = []
self.annotations = []
self.label = []
self.annID = 1
self.height = 0
self.width = 0
self.save_json()
def data_transfer(self):
for num, json_file in enumerate(self.labelme_json):
with open(json_file, "r") as fp:
data = json.load(fp)
self.images.append(self.image(data, num))
for shapes in data["shapes"]:
label = shapes["label"].split("_")
if label not in self.label:
self.label.append(label)
points = shapes["points"]
self.annotations.append(self.annotation(points, label, num))
self.annID += 1
# Sort all text labels so they are in the same order across data splits.
self.label.sort()
for label in self.label:
self.categories.append(self.category(label))
for annotation in self.annotations:
annotation["category_id"] = self.getcatid(annotation["category_id"])
def image(self, data, num):
image = {}
img = utils.img_b64_to_arr(data["imageData"])
height, width = img.shape[:2]
img = None
image["height"] = height
image["width"] = width
image["id"] = num
image["file_name"] = data["imagePath"].split("/")[-1]
self.height = height
self.width = width
return image
def category(self, label):
category = {}
category["supercategory"] = label[0]
category["id"] = len(self.categories)
category["name"] = label[0]
return category
def annotation(self, points, label, num):
annotation = {}
contour = np.array(points)
x = contour[:, 0]
y = contour[:, 1]
area = 0.5 * np.abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))
annotation["segmentation"] = [list(np.asarray(points).flatten())]
annotation["iscrowd"] = 0
annotation["area"] = area
annotation["image_id"] = num
annotation["bbox"] = list(map(float, self.getbbox(points)))
annotation["category_id"] = label[0] # self.getcatid(label)
annotation["id"] = self.annID
return annotation
def getcatid(self, label):
for category in self.categories:
if label == category["name"]:
return category["id"]
print("label: {} not in categories: {}.".format(label, self.categories))
exit()
return -1
def getbbox(self, points):
polygons = points
mask = self.polygons_to_mask([self.height, self.width], polygons)
return self.mask2box(mask)
def mask2box(self, mask):
index = np.argwhere(mask == 1)
rows = index[:, 0]
clos = index[:, 1]
left_top_r = np.min(rows) # y
left_top_c = np.min(clos) # x
right_bottom_r = np.max(rows)
right_bottom_c = np.max(clos)
return [
left_top_c,
left_top_r,
right_bottom_c - left_top_c,
right_bottom_r - left_top_r,
]
def polygons_to_mask(self, img_shape, polygons):
mask = np.zeros(img_shape, dtype=np.uint8)
mask = PIL.Image.fromarray(mask)
xy = list(map(tuple, polygons))
PIL.ImageDraw.Draw(mask).polygon(xy=xy, outline=1, fill=1)
mask = np.array(mask, dtype=bool)
return mask
def data2coco(self):
data_coco = {}
data_coco["images"] = self.images
data_coco["categories"] = self.categories
data_coco["annotations"] = self.annotations
return data_coco
def save_json(self):
print("save coco json")
self.data_transfer()
self.data_coco = self.data2coco()
print(self.save_json_path)
os.makedirs(
os.path.dirname(os.path.abspath(self.save_json_path)), exist_ok=True
)
json.dump(self.data_coco, open(self.save_json_path, "w"), indent=4)
if __name__ == "__main__":
import argparse
parser = argparse.ArgumentParser(
description="labelme annotation to coco data json file."
)
parser.add_argument(
"labelme_images",
help="Directory to labelme images and annotation json files.",
type=str,
)
parser.add_argument(
"--output", help="Output json file path.", default="trainval.json"
)
args, unknown = parser.parse_known_args()
#args = parser.parse_args()
labelme_json = glob.glob(os.path.join(args.labelme_images, "*.json"))
labelme2coco(labelme_json, args.output)
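# The annotation() method above computes the polygon area with the shoelace formula
# (0.5 * |x·roll(y,1) - y·roll(x,1)|). A standalone sketch — the function name
# `shoelace_area` is ours, not part of the class:

```python
import numpy as np

def shoelace_area(points):
    """Polygon area via the shoelace formula, as used in annotation() above."""
    contour = np.array(points, dtype=float)
    x, y = contour[:, 0], contour[:, 1]
    return 0.5 * np.abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
print(shoelace_area(square))  # → 4.0
```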
| coco JSON.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#
# # Lab Instrumentation Projects
#
# This is a collection of Laboratory Instrumentation + Experimental Statistics notebooks for
# PHYS 230 at [UIndy](https://www.uindy.edu).
#
# 1. Project 1 (Introduction to Everything!):
# * Lab 1 [Arduino as a laboratory interface](proj1/Arduino%20as%20a%20Laboratory%20Interface.ipynb)
# * Stats 1 [Normal Distribution and Estimation of Parameters.ipynb](proj1/Normal%20Distribution%20and%20Estimation%20of%20Parameters.ipynb)
#
#
# 2. Project 2 (Amplify That)
# * Lab 2 [Transistors and Amplifiers](proj2/Transistors%20and%20Amplifiers.ipynb)
# * Stats 2 [PMFs and Friends](proj2/PMFs,%20PDFs,%20CDFs%20and%20All%20That.ipynb)
#
#
#
# 3. Project 3 (Calculus, Filtering and Modeling)
# * Lab 3 [Doing Math with Electronics](proj3/Calculus%20and%20Filtering%20with%20Op%20Amps.ipynb)
# * Stats 3 [Generating RNGs](proj3/Modeling%20Distributions%20and%20RNGs.ipynb)
#
#
#
# 4. Project 4 (Can we talk?)
# * Lab 4 [Communication](proj4/Communication.ipynb)
# * Stats 4 [Hypothesis Testing](proj4/Hypothesis%20Testing.ipynb)
#
#
#
# 5. Project 5 (Counting)
# * Lab 5 [Counting Experiments](proj5/Counting%20Experiments.ipynb)
# * Stats 5 [Poisson Distribution](proj5/Poisson%20Distribution.ipynb)
#
#
#
# 6. Project 6 (PSoC: Frequency Counting and Time Series)
# * Lab 6 [Intro to PSoC](proj6/Analog%20Meets%20Digital.ipynb)
# * Stats 6 [Time Series](proj6/Time%20Series%20Analysis.ipynb)
#
#
#
# 7. Project 7 (PID Control and Parameter Estimation)
# * Lab 7 [PID Control](proj7/PID%20Control.ipynb)
# * Stats 7 [Parameter Estimation](proj7/Parameter%20Estimation.ipynb)
#
#
#
| README.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python(.seesaw_env)
# language: python
# name: seesaw_env
# ---
import pandas as pd
import seesaw.user_data_analysis
import importlib
importlib.reload(seesaw.user_data_analysis)
from seesaw.user_data_analysis import *
accept_df = pd.concat([pd.read_parquet('./time_view_v3.parquet'), pd.read_parquet('./time_view_v4.parquet')], ignore_index=True)
accept_df = accept_df[accept_df.accepted <= 10]
accept_df[['session_id', 'uname']].apply(lambda x : x.session_id if not x.uname else x.uname, axis=1)
accept_df.groupby(['session_id']).size()
qaccept_df = accept_df.groupby(['qkey','mode','accepted']).elapsed_time.apply(bootstrap_stat).reset_index()
qaccept_df = qaccept_df.assign(grp=qaccept_df[['mode', 'accepted']].apply(tuple,axis=1))
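# `bootstrap_stat` comes from seesaw.user_data_analysis and is not shown here; judging by the
# lower/med/high columns used in the plot below, it returns a bootstrap interval per group.
# A generic percentile-bootstrap sketch of a median (NumPy only; the function name, key names,
# and toy data are assumptions, not the library's actual implementation):

```python
import numpy as np

def bootstrap_median(values, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap: resample with replacement, take the median of each
    resample, and report the (alpha/2, 1-alpha/2) quantile band."""
    rng = np.random.default_rng(seed)
    values = np.asarray(values)
    meds = np.median(rng.choice(values, size=(n_boot, len(values))), axis=1)
    lo, hi = np.quantile(meds, [alpha / 2, 1 - alpha / 2])
    return {'lower': lo, 'med': float(np.median(values)), 'high': hi}

stat = bootstrap_median([12, 45, 30, 22, 19, 41, 28, 33])
print(stat['med'])  # → 29.0
```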
from plotnine import *
codes = {
'pc':dict(dataset='bdd', qstr='police cars',
description='''Police vehicles that have lights and some marking related to police. ''',
negative_description='''Sometimes private security vehicles or ambulances look like police cars but should not be included'''),
'dg':dict(dataset='bdd', qstr='A - dogs'),
'cd':dict(dataset='bdd', qstr='C - car with open doors',
description='''Any vehicles with any open doors, including open trunks in cars, and rolled-up doors in trucks and trailers.''',
negative_description='''We dont count rolled down windows as open doors'''),
'wch':dict(dataset='bdd', qstr='B - wheelchairs',
description='''We include wheelchair alternatives such as electric scooters for the mobility impaired. ''',
negative_description='''We do not include wheelchair signs or baby strollers'''),
'mln':dict(dataset='coco', qstr='D - melon',
description='''We include both cantaloupe (orange melon) and honeydew (green melon), whole melons and melon pieces. ''',
negative_description='''We don't include any other types of melon, including watermelons, papaya or pumpkins, which can look similar.
If you cannot tell whether a fruit piece is really from melon, don't sweat it and leave it out.'''),
'spn':dict(dataset='coco', qstr='E - spoons',
description='''We include spoons or teaspons of any material for eating. ''',
negative_description='''We don't include large cooking or serving spoons, ladles for soup, or measuring spoons.'''),
'dst':dict(dataset='objectnet', qstr='F - dustpans',
description='''We include dustpans on their own or together with other tools, like brooms, from any angle.''',
negative_description='''We don't include brooms alone'''),
'gg':dict(dataset='objectnet', qstr='G - egg cartons',
description='''These are often made of cardboard or styrofoam. We include them viewed from any angle.''',
negative_description='''We don't include the permanent egg containers that come in the fridge''')
}
qaccept_df = qaccept_df.assign(qstr=qaccept_df.qkey.map(lambda x : codes[x]['qstr']))
qaccept_df = qaccept_df[~qaccept_df.qkey.isin(['pc'])]
qaccept_df = qaccept_df.assign(method=qaccept_df['mode'].map(lambda m : {'pytorch': 'this work', 'default':'baseline'}[m]))
show_minutes = lambda x : f'{int(x/60):d}'
qaccept_df
accept_df.groupby(['session_id', 'uname', 'qkey']).size()  # qaccept_df no longer carries session/user columns after the groupby above
plot = ( ggplot(qaccept_df) +
geom_errorbarh(aes(y='accepted', xmin='lower', xmax='high',
group='grp', color='method'), height=1., alpha=.5, position='identity', show_legend=False) +
geom_point(aes(y='accepted', x='med', group='grp', color='method'), alpha=.5, position='identity') +
# geom_text(aes(y='accepted', x='high', label='n',
# group='grp', color='mode'), va='bottom', ha='left', alpha=.5, position='identity') +
facet_wrap(['qstr'], ncol=2, ) +
scale_x_continuous(breaks=[0, 60, 120, 180, 240, 300, 360], labels=lambda a : list(map(show_minutes,a)) )+
scale_y_continuous(breaks=[0, 3, 6, 10]) +
xlab('elapsed time (min)') +
ylab('results marked relevant') +
annotate('vline', xintercept=6*60, linetype='dashed') +
# annotate('text', label=360, x=360,y=0, va='top')
theme(legend_position='top', legend_direction='horizontal', legend_title=element_blank(), legend_box_margin=0,
legend_margin=0, plot_margin=0, panel_grid_minor=element_blank(), figure_size=(3,5), )
)
plot
import matplotlib.pyplot as plt
f2 = plot.draw()
# +
#type(f2)
# -
f2.savefig('./user_study.pdf', bbox_inches='tight', dpi=200)
import PIL.Image
PIL.Image.open('./user_study.png')
| notebooks/time figure.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/fl4izdn4g/colab-training/blob/main/podstawy-notatnikow.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="N4wrviHRS6m3"
# # Basics
#
# Working with Colaboratory is simple and intuitive. Each notebook consists of cells. This cell is a text field. You can use Markdown here.
#
# Bullet list:
# * list item 1
# * list item 2
#
# Numbered list:
# 1. other list item 1
# 2. other list item 2
#
# ### Links
# [Go to Onet](https://onet.pl)
#
# ### Images
# 
#
# ### Images from Google Drive
# 1. In Google Drive, share the image file
#
# *https://drive.google.com/file/d/1B-EMZEfxpdyfpwJU3qecdVNZ89di6bL-/view?usp=sharing*
#
# 2. Extract the image identifier
#
# **1B-EMZEfxpdyfpwJU3qecdVNZ89di6bL-**
#
# 3. Paste it into the URL template
#
# https://drive.google.com/uc?export=view&id=[identifier]
#
# 
#
#
# + [markdown] id="f6Zufih0lR6f"
# # Code
# + id="2H2Gamz1S2zZ" colab={"base_uri": "https://localhost:8080/"} outputId="36bdcaf1-2c82-4283-bcf8-9973c32734a6"
# this is a code cell
print('Hello, Earthlings')
# + colab={"base_uri": "https://localhost:8080/"} id="NBvtxRZekiOZ" outputId="b2b37568-dc0b-4d35-8b98-17659b14f416"
a = 4
b = 6
a + b
# + colab={"base_uri": "https://localhost:8080/"} id="R8a_sZ9DksF7" outputId="a33da708-5951-4c34-a8d4-aa80431ee91d"
print(a)
# + colab={"base_uri": "https://localhost:8080/"} id="qD3rdBpDlC2y" outputId="ba8290a7-a252-4f3f-ca8c-f62ee8f7ffe5"
for i in range(10):
print(i)
# + colab={"base_uri": "https://localhost:8080/"} id="moA7EgOloc6e" outputId="b4047402-978c-4cbb-e7f5-6a42e4ddc79d"
tekst = "a"
for i in range(10):
print(tekst*i)
# + [markdown] id="ypDSqZdulXOB"
# ## Installing additional libraries
# + colab={"base_uri": "https://localhost:8080/"} id="7VYEDyEplW3E" outputId="c20454b5-5ba3-4bb8-f1bb-07358bfef8f5"
# adding libraries to Python
# !pip install curl
# + colab={"base_uri": "https://localhost:8080/"} id="jLjfPGSHlnXj" outputId="05d06c30-ac3f-4cda-de91-317c3f820219"
# adding libraries to the environment
# !apt-get -qq install -y libfluidsynth1
# + [markdown] id="429YicTDmdE9"
# # Forms
# Colab can generate forms for quick parameterization of code without digging through the source.
#
# Forms can be added from the menu:
#
# **Insert -> Add a form field**
#
# or by using the syntax shown below.
#
#
# + id="buJHsbQ4nF8N"
#@title Text fields
text = 'ala ma kota' #@param {type:"string"}
dropdown = 'jeden' #@param ["jeden", "dwa", "trzy"]
text_and_dropdown = 'value' #@param ["jeden", "trzy", "cztery"] {allow-input: true}
print(text)
print(dropdown)
print(text_and_dropdown)
# + id="Z7uabSG3nskR"
#@title Date
date_input = '2018-03-22' #@param {type:"date"}
print(date_input)
# + id="g42sHpaen-8l"
#@title Numeric fields
number_input = 10.0 #@param {type:"number"}
number_slider = 0 #@param {type:"slider", min:-1, max:1, step:0.1}
integer_input = 10 #@param {type:"integer"}
integer_slider = 1 #@param {type:"slider", min:0, max:100, step:1}
print(number_input)
print(number_slider)
print(integer_input)
print(integer_slider)
# + id="E3DJ2bIvoQ1J"
#@title Boolean fields
boolean_checkbox = True #@param {type:"boolean"}
boolean_dropdown = True #@param ["False", "True"] {type:"raw"}
print(boolean_checkbox)
print(boolean_dropdown)
# + [markdown] id="Lcrh4VdYngLG"
# To keep everything looking tidy, you can hide the form-generating code:
#
# **Right click -> Forms -> Hide code**
#
# + [markdown] id="AKbgN8Cko20U"
# # A bit of Data Science
#
#
# + [markdown] id="Ka6mFCVKrSvS"
# ## Numpy
# + colab={"base_uri": "https://localhost:8080/"} id="1b-qqdaMrR7b" outputId="18650fd8-6643-4ef5-d7e9-b45b50d889c4"
import numpy as np
a = np.array([2,3])
b = np.array([
[1,2],
[3,4]
])
b
# + colab={"base_uri": "https://localhost:8080/"} id="yjyWi9JvuHPV" outputId="6748c211-cf4e-4bab-8756-5eabc5447890"
3*a
# + colab={"base_uri": "https://localhost:8080/"} id="WyEQr42xuMbu" outputId="3cf9b404-5a2e-4965-9a82-b42021fd940f"
b**2  # element-wise square; note that ^ is bitwise XOR in NumPy, not exponentiation
# + colab={"base_uri": "https://localhost:8080/"} id="VMYq1kuhuXAX" outputId="6b6cd351-06f7-4d65-c4c5-b137aa9cba7f"
a*b
# + colab={"base_uri": "https://localhost:8080/"} id="CrTaJfrMud6z" outputId="cd85c25b-86a5-47ed-f22b-af9c20626ad2"
3 + a
# + colab={"base_uri": "https://localhost:8080/"} id="8Hhx35MYutkH" outputId="5353592f-72f5-4e13-a914-8b8f5aa2c558"
np.ones((3,4))
# + colab={"base_uri": "https://localhost:8080/"} id="6-r1dwAZuyRs" outputId="f46b9ee6-a121-47d2-e5b0-9b55b4b16cc8"
np.zeros((10,10))
# + colab={"base_uri": "https://localhost:8080/"} id="nQ6PTuLOu38G" outputId="b2632193-9e9c-4919-e3b6-95fbe8574c0e"
# transpose
print(b)
print(b.T)
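# One gotcha worth flagging in the examples above: `*` is element-wise multiplication (with broadcasting), while matrix multiplication uses `@` (or `np.dot`). A minimal sketch using the same arrays:

```python
import numpy as np

a = np.array([2, 3])
b = np.array([[1, 2],
              [3, 4]])

elementwise = a * b   # broadcasts a across each row of b
matmul = a @ b        # matrix-vector product

print(elementwise)    # [[ 2  6] [ 6 12]]
print(matmul)         # [11 16]
```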
# + [markdown] id="isYy-CZErYkI"
# ## Pandas
# + colab={"base_uri": "https://localhost:8080/"} id="h3ImTSmJqFqc" outputId="68cfc158-9b06-487a-f99d-f4901f1c4596"
import pandas as pd
df = pd.read_csv('/content/drive/MyDrive/covid19/total-and-daily-cases-covid-19.csv')
df.dtypes
# + colab={"base_uri": "https://localhost:8080/", "height": 204} id="29XlVMdpq1fd" outputId="09166963-5549-4ab3-f5fa-3f97ac7d705a"
df.head(5)
# + colab={"base_uri": "https://localhost:8080/", "height": 142} id="YP4lJ9gmq644" outputId="468e6030-5db3-4569-99fd-00abeb51f7ca"
df.sample(3)
# + colab={"base_uri": "https://localhost:8080/", "height": 297} id="O5ABubq0rDiz" outputId="b5ffb3b6-6dac-4326-f357-361afec41d75"
df.describe()
# + [markdown] id="AyiHZbjCrawA"
# ## Pyplot
# + colab={"base_uri": "https://localhost:8080/", "height": 281} id="EIPanr2HpIrn" outputId="cc374735-c4bf-4315-b6c6-f01ca7e4b1c7"
import numpy as np
from matplotlib import pyplot as plt
ys = 200 + np.random.randn(100)
x = [x for x in range(len(ys))]
plt.plot(x, ys, '.')
plt.fill_between(x, ys, 195, where=(ys > 195), facecolor='g', alpha=0.6)
plt.title("A simple plot")
plt.show()
| podstawy-notatnikow.ipynb |
# This notebook regroups the code sample of the video below, which is a part of the [Hugging Face course](https://huggingface.co/course).
# + cellView="form"
#@title
from IPython.display import HTML
HTML('<iframe width="560" height="315" src="https://www.youtube.com/embed/b3u8RzBCX9Y?rel=0&controls=0&showinfo=0" frameborder="0" allowfullscreen></iframe>')
# -
# Install the Transformers and Datasets libraries to run this notebook.
# ! pip install datasets transformers[sentencepiece]
# +
from transformers import pipeline
question_answerer = pipeline("question-answering")
context = """
🤗 Transformers is backed by the three most popular deep learning libraries — Jax, PyTorch and TensorFlow — with a seamless integration
between them. It's straightforward to train your models with one before loading them for inference with the other.
"""
question = "Which deep learning libraries back 🤗 Transformers?"
question_answerer(question=question, context=context)
# -
long_context = """
🤗 Transformers: State of the Art NLP
🤗 Transformers provides thousands of pretrained models to perform tasks on texts such as classification, information extraction,
question answering, summarization, translation, text generation and more in over 100 languages.
Its aim is to make cutting-edge NLP easier to use for everyone.
🤗 Transformers provides APIs to quickly download and use those pretrained models on a given text, fine-tune them on your own datasets and
then share them with the community on our model hub. At the same time, each python module defining an architecture is fully standalone and
can be modified to enable quick research experiments.
Why should I use transformers?
1. Easy-to-use state-of-the-art models:
- High performance on NLU and NLG tasks.
- Low barrier to entry for educators and practitioners.
- Few user-facing abstractions with just three classes to learn.
- A unified API for using all our pretrained models.
2. Lower compute costs, smaller carbon footprint:
  - Researchers can share trained models instead of always retraining.
- Practitioners can reduce compute time and production costs.
- Dozens of architectures with over 10,000 pretrained models, some in more than 100 languages.
3. Choose the right framework for every part of a model's lifetime:
- Train state-of-the-art models in 3 lines of code.
- Move a single model between TF2.0/PyTorch frameworks at will.
- Seamlessly pick the right framework for training, evaluation and production.
4. Easily customize a model or an example to your needs:
- We provide examples for each architecture to reproduce the results published by its original authors.
- Model internals are exposed as consistently as possible.
- Model files can be used independently of the library for quick experiments.
🤗 Transformers is backed by the three most popular deep learning libraries — Jax, PyTorch and TensorFlow — with a seamless integration
between them. It's straightforward to train your models with one before loading them for inference with the other.
"""
question_answerer(
question=question,
context=long_context
)
# +
from transformers import AutoTokenizer, TFAutoModelForQuestionAnswering
model_checkpoint = "distilbert-base-cased-distilled-squad"
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
model = TFAutoModelForQuestionAnswering.from_pretrained(model_checkpoint)
inputs = tokenizer(question, context, return_tensors="tf")
outputs = model(**inputs)
start_logits = outputs.start_logits
end_logits = outputs.end_logits
print(start_logits.shape, end_logits.shape)
# +
import tensorflow as tf
sequence_ids = inputs.sequence_ids()
# Mask everything apart from the tokens of the context
mask = [i != 1 for i in sequence_ids]
# Unmask the [CLS] token
mask[0] = False
mask = tf.constant(mask)[None]
start_logits = tf.where(mask, -10000, start_logits)
end_logits = tf.where(mask, -10000, end_logits)
start_probabilities = tf.math.softmax(start_logits, axis=-1)[0].numpy()
end_probabilities = tf.math.softmax(end_logits, axis=-1)[0].numpy()
# +
import numpy as np
# Score every (start, end) candidate pair, then keep only pairs where start <= end
scores = start_probabilities[:, None] * end_probabilities[None, :]
scores = np.triu(scores)
max_index = scores.argmax().item()
start_index = max_index // scores.shape[1]
end_index = max_index % scores.shape[1]
score = scores[start_index, end_index].item()
inputs_with_offsets = tokenizer(question, context, return_offsets_mapping=True)
offsets = inputs_with_offsets["offset_mapping"]
start_char, _ = offsets[start_index]
_, end_char = offsets[end_index]
answer = context[start_char:end_char]
print(f"Answer: '{answer}', score: {score:.4f}")
# -
inputs = tokenizer(
question,
long_context,
stride=128,
max_length=384,
padding="longest",
truncation="only_second",
return_overflowing_tokens=True,
return_offsets_mapping=True,
)
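# The `stride` / `return_overflowing_tokens` combination above splits a long context into overlapping windows. A plain-Python sliding window illustrates the idea (a hypothetical simplification, not the tokenizer's actual implementation):

```python
def chunk_with_stride(tokens, max_length, stride):
    """Split tokens into overlapping chunks: consecutive chunks share
    `stride` tokens, so an answer near a boundary appears whole in at
    least one chunk."""
    chunks = []
    step = max_length - stride
    for start in range(0, len(tokens), step):
        chunks.append(tokens[start:start + max_length])
        if start + max_length >= len(tokens):
            break
    return chunks

print(chunk_with_stride(list(range(10)), max_length=6, stride=2))
# -> [[0, 1, 2, 3, 4, 5], [4, 5, 6, 7, 8, 9]]
```

Each chunk is then scored independently, and the offsets of the best span are mapped back to the original context.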
| notebooks/course/videos/qa_pipeline_tf.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.6
# language: python
# name: python3
# ---
# Welcome to the 2nd lab. In this lab we run the boilerplate code, which we then gradually speed up. First, we start running it on a single CPU core.
#
# We use the famous [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. A data set of 60K training examples of handwritten monochrome digits with shape 28x28. The task is to classify the digits into categories 0-9.
#
# Now it's time to install TensorFlow 2.x
# !pip install tensorflow==2.2.0rc2
# Now just make sure you restart the Kernel so that the changes take effect:
#
# 
#
# After the kernel has been restarted, we'll check if we are on TensorFlow 2.x
# +
import tensorflow as tf
if not tf.__version__ == '2.2.0-rc2':
print(tf.__version__)
    raise ValueError('please upgrade to TensorFlow 2.2.0-rc2, or restart your Kernel (Kernel->Restart & Clear Output)')
# -
# So this worked out. Now it's time to create and run a keras model. Let's use the MNIST dataset. If you want to learn more on the data set check out the following links
#
# [MNIST](https://en.wikipedia.org/wiki/MNIST_database)
#
#
# So in a nutshell, MNIST contains 60000 28x28 pixel grey scale images of handwritten digits between 0-9 and the corresponding labels, plus an additional 10000 images for testing.
#
# Luckily, this data set is built into Keras, so let's load it:
# +
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# -
# As expected, we get 60000 images of 28 by 28 pixels:
train_images.shape
# The labels are simply a list of 60000 elements, each one is a number (label) between 0 and 9:
print(train_labels.shape)
print(train_labels)
# Let's have a look at one image:
# +
# %matplotlib inline
import matplotlib.pyplot as plt
plt.figure()
plt.imshow(train_images[0])
plt.show()
# -
# Let's normalize the data by making sure every pixel value is between 0..1; this is easy in this case:
# +
train_images = train_images / 255.0
test_images = test_images / 255.0
# -
# Let's build and train the model using Keras
# +
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Flatten, Dense
from tensorflow.nn import relu
from tensorflow.nn import softmax
from time import time
model = Sequential([
Flatten(input_shape=(28, 28)),
Dense(128, activation=relu),
Dense(10, activation=softmax)
])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
start = time()
model.fit(train_images, train_labels, epochs=5)
stop = time()
elapsed = stop - start
# -
print('Elapsed time is %f seconds.'% elapsed)
| gpu/nn_scaling.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os
import numpy as np
import pandas as pd
from datetime import date, timedelta
import matplotlib.pyplot as plt
pd.set_option('display.float_format', lambda x: '%.4f' % x)
# -
# ## Load Data
path = '../data/processed'
df = pd.read_pickle(os.path.join(path,'transactions.pkl'))
df.head()
# ## Target Matrix
# - cus_id
# - week_start
# - week_end
# - purchased_product_list
# - total_price
print('First transaction: ', df.t_dat.min())
print('Last transaction: ',df.t_dat.max())
df['week'] = df['t_dat'].dt.isocalendar().week
df['year'] = df['t_dat'].dt.year
df['week_start'] = df['t_dat'].dt.to_period('W').apply(lambda r: r.start_time)
df['week_end'] = df['t_dat'].dt.to_period('W').apply(lambda r: r.end_time).dt.normalize()
df.head()
# ### purchased_product_list
weekly_purchased = pd.DataFrame(df.groupby(['customer_id','week','year'])['article_id'].apply(lambda x: list(set(x))))\
.reset_index()\
.rename(columns={'article_id':'weekly_purchased_products'})
# weekly_purchased['weekly_purchased_products'] = weekly_purchased['weekly_purchased_products'].apply(lambda x: list(set(x)))
weekly_purchased.head()
# ### total_price
total_price = df.groupby(['customer_id','week','year'])\
                 .agg({'article_id':'count','price':'sum'})\
.reset_index()\
.rename(columns={'article_id':'total_articles','price':'total_amount'})
total_price.head()
# ### Final DataFrame
final_df = total_price.merge(weekly_purchased,on=['customer_id','week','year'],how='left')
final_df.head()
final_df.info()
final_df.to_pickle(os.path.join(path,'weekly_target.pkl'))
# ## Sampling Data
# - Exclude cold-start customers and one-transaction customers
# - Exclude customers with outlier transaction
#
path = '../data/processed'
trans = pd.read_pickle(os.path.join(path,'transactions.pkl'))
customers = pd.read_pickle(os.path.join(path,'customers.pkl'))
customers.head()
temp = trans.groupby('customer_id').agg({'t_dat':'nunique','article_id':'count'}).reset_index()
temp.describe(percentiles=[0.1,0.25,0.5,0.75,0.9,0.99])
# +
exclude_list = temp['customer_id'][(temp.t_dat<=1)|(temp.article_id>180)]
print('Total customers: ',len(temp))
print('Total excluded customers: ',len(exclude_list))
print('Exclusion ratio: {:.2%}'.format(len(exclude_list)/len(temp)))
print('Remained customers: {}'.format(len(temp)-len(exclude_list)))
# -
# ### Random Sampling
from random import sample
random_list = temp['customer_id'][~temp.customer_id.isin(exclude_list)].sample(100000)
# random_list = sample(eligible_list,100000)
temp[temp.customer_id.isin(random_list)].describe()
print('First t_dat: ', trans[trans.customer_id.isin(random_list)]['t_dat'].min())
print('Last t_dat: ', trans[trans.customer_id.isin(random_list)]['t_dat'].max())
print('Total transactions: ', len(trans[trans.customer_id.isin(random_list)]))
target_cus = trans['customer_id'][(trans.customer_id.isin(random_list))&(trans.t_dat >= '2020-09-15')].unique()
print('Total target customers: {} ({:.2%})'.format(len(target_cus),len(target_cus)/len(random_list)))
# ### Save Sampling
output = pd.DataFrame(target_cus,columns=['customer_id'])
path = '../data'
output.to_csv(os.path.join(path,'random_customer.csv'),index=False)
| model/0.Targeting_and_sampling.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Interactive Plots of COVID-19 Data
# This is a notebook to interact with COVID-19 data using [Jupyter](https://jupyter.org/) and [Hvplot](https://hvplot.holoviz.org/). Currently we are focused on data from the US but may expand our analyses in the near future.
# ## Load Johns Hopkins COVID-19 Data
# Here we load the COVID-19 confirmed case data from [The Center for Systems Science and Engineering (CSSE)](https://systems.jhu.edu) at Johns Hopkins University. The CSSE COVID-19 [GitHub Repo](https://github.com/CSSEGISandData/COVID-19) has more information about these data and their sources.
import numpy as np
import pandas as pd
pd.set_option('display.max_rows', 6)
import hvplot.pandas
import datetime
dr='https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_daily_reports/'
tday=datetime.date.today()
tday
tday=datetime.date.today()
day=datetime.timedelta(days=1)
yday=tday-day*1
fname=yday.strftime('%m-%d-%Y.csv')
src = dr + fname
src
df1 = pd.read_csv(src)
df1
dfus=df1[['Combined_Key','Admin2','Province_State','Country_Region','Last_Update', 'Lat', 'Long_','Confirmed']]
dfus=dfus[dfus.Country_Region=='US']
dfus.reset_index(drop=True, inplace=True)
#dfus.set_index('Combined_Key', inplace=True)
dfus
#dOld=dfus['Last_Update'][0]
dOld=pd.to_datetime(dfus['Last_Update'][0]).date()
dOld
#dfus.rename(columns={'Confirmed':dOld, 'Combined_Key': 'ck'}, inplace=True)
dfus1=dfus.rename(columns={'Confirmed':dOld, 'Combined_Key': 'ck'})
#dfus1=dfus.copy()
dfus1
# +
#dfus1=df.drop(columns=['Deaths', 'Recovered','Active'])
#dfus1
# -
i=2
dday=tday-day*i
fname2=dday.strftime('%m-%d-%Y.csv')
df2=pd.read_csv(dr+fname2)
fname2
dfus=df2[['Combined_Key','Admin2','Province_State','Country_Region','Last_Update', 'Lat', 'Long_','Confirmed']]
dfus=dfus[dfus.Country_Region=='US']
dfus.reset_index(drop=True, inplace=True)
#dfus.set_index('Combined_Key', inplace=True)
dfus
dNew=dfus['Last_Update'][0]
dNew
dfus2=dfus.rename(columns={'Combined_Key': 'ckNew',
'Admin2':'aNew',
'Province_State': 'psNew',
'Country_Region': 'crNew',
'Last_Update': 'luNew',
'Lat': 'latNew',
'Long_': 'lonNew'})
#dfus2=dfus.copy()
dfus2
dfusc=pd.concat([dfus1,dfus2], axis=1, join='outer')
dfusc
# +
#df1.set_index('Combined_Key', inplace=True)
#df2.set_index('Combined_Key', inplace=True)
#dfc=pd.merge(df1, df2, on=['Combined_Key','Combined_Key'])
#dfc
# -
# ## Now clean the table
dfusc.rename(columns={'Confirmed':dNew}, inplace=True)
# +
#dfusc.loc[0:1,:]
# -
rws_null=dfusc['ck'].isnull()
rws=dfusc[rws_null].index
rws
dfusc
#dfusc.loc[rws, 'ck' : dOld]
dfusc.iloc[rws, 0:8]
#dfr=dfusc.loc[rws, 'ckNew' : dNew]
dfr=dfusc.iloc[rws, 8:17]
dfr
# #### Note: replacing a chunk of the table in one go does not work naively, because pandas aligns on column labels during assignment; here it had to be done column by column.
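# For reference, a minimal toy sketch (a hypothetical frame, not the COVID data) of why block assignment fails and how `.to_numpy()` bypasses label alignment:

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2], 'b': [3, 4], 'c': [10, 20], 'd': [30, 40]})

# Direct block assignment aligns on column labels, so values coming from
# differently-named columns are NOT copied:
#   df[['a', 'b']] = df[['c', 'd']]   # fills NaN instead of the values
# Converting the right-hand side to a plain array drops the labels and
# copies the values positionally:
df[['a', 'b']] = df[['c', 'd']].to_numpy()
print(df)
```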
dfusc.columns[0:7]
#dfusc.replace(dfusc.iloc[rws, 1:8], dfr, inplace=True)
#dfusc.replace(dfusc.loc[rws, 'ck' : dOld], dfr, inplace=True)
#dfusc
#dfusc.iloc[rws[-4:-2],0:2]=[['tarak', 'tarak2'], ['1', '2']]
dfusc.iloc[:, 10:14]
for col in range(0,7):
#print(col)
dfusc.iloc[rws,col]=dfusc.iloc[rws, col+7+i]
dfusc
dfusc.drop(columns={'ckNew','aNew','psNew', 'crNew', 'luNew', 'lonNew', 'latNew'}, inplace=True)
dfusc
# ## Make a function
dfus1
def appendData(dfAll,i,tday):
try:
print(i)
#print(id(dfAll))
#print(dfAll.columns)
day=datetime.timedelta(days=1)
dday=yday-day*i
fname2=dday.strftime('%m-%d-%Y.csv')
df2=pd.read_csv(dr+fname2)
dfus=df2[['Combined_Key','Admin2','Province_State','Country_Region','Last_Update', 'Lat', 'Long_','Confirmed']]
dfus=dfus[dfus.Country_Region=='US']
dfus.reset_index(drop=True, inplace=True)
#dNew=dfus['Last_Update'][0]
dNew=pd.to_datetime(dfus['Last_Update'][0]).date()
print(dNew)
dfus2=dfus.rename(columns={'Combined_Key': 'ckNew',
'Admin2':'aNew',
'Province_State': 'psNew',
'Country_Region': 'crNew',
'Last_Update': 'luNew',
'Lat': 'latNew',
'Long_': 'lonNew'})
#dfus2=dfus.copy()
dfusc=pd.concat([dfAll,dfus2], axis=1)
dfusc2=dfusc.rename(columns={'Confirmed':dNew})
rws_null=dfusc2['ck'].isnull()
rws=dfusc2[rws_null].index
#print(dfusc2.columns)
for icol in range(0,7):
dfusc2.iloc[rws,icol]=dfusc2.iloc[rws, icol+ 7+i ]
dfusc3=dfusc2.drop(columns={'ckNew','aNew','psNew', 'crNew', 'luNew', 'lonNew', 'latNew'})
except Exception:
print('Issue with ' + fname2 )
# try:
dfus=df2[['Province/State', 'Country/Region', 'Last Update', 'Confirmed',
'Latitude', 'Longitude']]
#dNew=dfus['Last_Update'][0]
dNew=pd.to_datetime(dfus['Last Update'][0]).date()
print(dNew)
dfus2=dfus.rename(columns={'Province/State': 'psNew',
'Country/Region': 'crNew',
'Last Update': 'luNew',
'Latitude': 'latNew',
'Longitude': 'lonNew'})
#dfus2=dfus.copy()
dfusc=pd.concat([dfAll,dfus2], axis=1)
dfusc2=dfusc.rename(columns={'Confirmed':dNew})
rws_null=dfusc2['ck'].isnull()
rws=dfusc2[rws_null].index
print(dfusc2.columns)
for icol in range(2,7):
dfusc2.iloc[rws,icol]=dfusc2.iloc[rws, icol+ 5+i ]
dfusc3=dfusc2.drop(columns={'psNew', 'crNew', 'luNew', 'lonNew', 'latNew'})
pass
# except:
# print('Creating empty column for ' + fname2 )
# df2=pd.read_csv(dr+fname2)
# print(df2.columns)
# dNew=pd.to_datetime(df2['Last Update'][0]).date()
# dfusc3=dfAll
# dfusc3[dNew]=np.nan
#pass
dfusc3.iloc[:,-1]=pd.to_numeric(dfusc3.iloc[:,-1], errors='ignore', downcast='float')
return dfusc3
# +
#dfTest=appendData(dfus1,1,tday)
#dfTest
# -
# ## Run the function
dfus1
#ndays=pd.to_datetime(dOld).date()-pd.to_datetime(dNew).date()
deltaDay=datetime.date.today()-datetime.date(2020,3,23)
ndays=int(deltaDay/day)
ndays
days=range(1, ndays, 1)
tday=datetime.date.today()
dfAll=dfus1.copy()
for i in days:
dfAll=appendData(dfAll,i,tday)
#print('outside')
#print(dfAll.columns)
dfAll
dfUS=dfAll[dfAll.Country_Region=='US']
dfUS
dfUS.drop(columns={'Admin2', 'Province_State','Country_Region', 'Last_Update'}, inplace=True)
dfUS
dfUS[(dfUS.ck=='Suffolk, Massachusetts, US')]
dfm=pd.melt(dfUS, id_vars=dfUS.columns.values[0:3], var_name="Date", value_name="Value")
dfm.rename(columns = {'Lat':'lat', 'Long_':'lon','ck':'id'}, inplace = True)
dfm
dfm.to_csv('US_covid_conf.csv', index=False)
# +
#dff.Admin2.fillna('Total', inplace=True)
#dff.set_index(['Province_State', 'Admin2'], inplace=True)
#dff.sort_index(0)
# -
# ## Plot all cases on log scale
# Below is a quick plot of all cases on a logarithmic scale.
#
# Hvplot creates holoviews objects, and the `*` symbol means [overlay](http://holoviews.org/reference/containers/bokeh/Overlay.html). See [holoviz plot customization](http://holoviews.org/user_guide/Customizing_Plots.html) for available options.
| covid_states-Copy2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python2
# ---
# # Nearest Neighbor Example
#
# A nearest neighbor learning algorithm example using TensorFlow library.
# This example is using the MNIST database of handwritten digits (http://yann.lecun.com/exdb/mnist/)
#
# - Author: <NAME>
# - Project: https://github.com/aymericdamien/TensorFlow-Examples/
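# For comparison, the same L1 nearest-neighbor rule can be sketched in plain NumPy (a minimal re-implementation on toy data, not part of the original example):

```python
import numpy as np

def nn_predict(Xtr, Ytr, x):
    """Classify x by the label of its L1-nearest training example."""
    distances = np.sum(np.abs(Xtr - x), axis=1)  # L1 distance to each candidate
    return Ytr[np.argmin(distances)]

# Toy data: two 3-pixel "images" with one-hot labels
Xtr = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
Ytr = np.array([[1, 0], [0, 1]])
print(nn_predict(Xtr, Ytr, np.array([0.9, 0.8, 1.0])))  # -> [0 1]
```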
# +
import numpy as np
import tensorflow as tf
# Import MINST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
# +
# In this example, we limit mnist data
Xtr, Ytr = mnist.train.next_batch(5000) #5000 for training (nn candidates)
Xte, Yte = mnist.test.next_batch(200) #200 for testing
# tf Graph Input
xtr = tf.placeholder("float", [None, 784])
xte = tf.placeholder("float", [784])
# Nearest Neighbor calculation using L1 Distance
# Calculate L1 Distance
distance = tf.reduce_sum(tf.abs(tf.add(xtr, tf.negative(xte))), reduction_indices=1)
# Prediction: Get min distance index (Nearest neighbor)
pred = tf.arg_min(distance, 0)
accuracy = 0.
# Initialize the variables (i.e. assign their default value)
init = tf.global_variables_initializer()
# -
# Start training
with tf.Session() as sess:
sess.run(init)
# loop over test data
for i in range(len(Xte)):
# Get nearest neighbor
nn_index = sess.run(pred, feed_dict={xtr: Xtr, xte: Xte[i, :]})
# Get nearest neighbor class label and compare it to its true label
print "Test", i, "Prediction:", np.argmax(Ytr[nn_index]), \
"True Class:", np.argmax(Yte[i])
# Calculate accuracy
if np.argmax(Ytr[nn_index]) == np.argmax(Yte[i]):
accuracy += 1./len(Xte)
print "Done!"
print "Accuracy:", accuracy
| notebooks/2_BasicModels/nearest_neighbor.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] papermill={"duration": 0.014788, "end_time": "2020-09-16T01:25:01.710312", "exception": false, "start_time": "2020-09-16T01:25:01.695524", "status": "completed"} tags=[]
# ## Dependencies
# + _kg_hide-input=true papermill={"duration": 7.550153, "end_time": "2020-09-16T01:25:09.274249", "exception": false, "start_time": "2020-09-16T01:25:01.724096", "status": "completed"} tags=[]
import warnings, json, random, os
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.model_selection import KFold, StratifiedKFold
from sklearn.metrics import mean_squared_error
import tensorflow as tf
import tensorflow.keras.layers as L
import tensorflow.keras.backend as K
from tensorflow.keras import optimizers, losses, Model
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau
def seed_everything(seed=0):
random.seed(seed)
np.random.seed(seed)
tf.random.set_seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
os.environ['TF_DETERMINISTIC_OPS'] = '1'
SEED = 0
seed_everything(SEED)
warnings.filterwarnings('ignore')
# + [markdown] papermill={"duration": 0.013211, "end_time": "2020-09-16T01:25:09.301008", "exception": false, "start_time": "2020-09-16T01:25:09.287797", "status": "completed"} tags=[]
# # Model parameters
# + _cell_guid="79c7e3d0-c299-4dcb-8224-4455121ee9b0" _kg_hide-input=true _uuid="d629ff2d2480ee46fbb7e2d37f6b5fab8052498a" papermill={"duration": 0.02635, "end_time": "2020-09-16T01:25:09.341468", "exception": false, "start_time": "2020-09-16T01:25:09.315118", "status": "completed"} tags=[]
config = {
"BATCH_SIZE": 64,
"EPOCHS": 100,
"LEARNING_RATE": 1e-3,
"ES_PATIENCE": 10,
"N_FOLDS": 5,
"N_USED_FOLDS": 5,
"PB_SEQ_LEN": 107,
"PV_SEQ_LEN": 130,
}
with open('config.json', 'w') as json_file:
    json.dump(config, json_file)
config
# + [markdown] papermill={"duration": 0.013281, "end_time": "2020-09-16T01:25:09.370877", "exception": false, "start_time": "2020-09-16T01:25:09.357596", "status": "completed"} tags=[]
# # Load data
# + _kg_hide-input=true papermill={"duration": 0.717508, "end_time": "2020-09-16T01:25:10.102038", "exception": false, "start_time": "2020-09-16T01:25:09.384530", "status": "completed"} tags=[]
database_base_path = '/kaggle/input/stanford-covid-vaccine/'
train = pd.read_json(database_base_path + 'train.json', lines=True)
test = pd.read_json(database_base_path + 'test.json', lines=True)
print('Train samples: %d' % len(train))
display(train.head())
print(f'Test samples: {len(test)}')
display(test.head())
# + [markdown] papermill={"duration": 0.015326, "end_time": "2020-09-16T01:25:10.133099", "exception": false, "start_time": "2020-09-16T01:25:10.117773", "status": "completed"} tags=[]
# ## Auxiliary functions
# + _kg_hide-input=true papermill={"duration": 0.052403, "end_time": "2020-09-16T01:25:10.201060", "exception": false, "start_time": "2020-09-16T01:25:10.148657", "status": "completed"} tags=[]
token2int = {x:i for i, x in enumerate('().<KEY>')}
token2int_seq = {x:i for i, x in enumerate('ACGU')}
token2int_struct = {x:i for i, x in enumerate('().')}
token2int_loop = {x:i for i, x in enumerate('BEHIMSX')}
def plot_metrics(history):
    metric_list = [m for m in list(history.keys()) if m != 'lr']
size = len(metric_list)//2
fig, axes = plt.subplots(size, 1, sharex='col', figsize=(20, size * 5))
if size > 1:
axes = axes.flatten()
else:
axes = [axes]
for index in range(len(metric_list)//2):
metric_name = metric_list[index]
val_metric_name = metric_list[index+size]
axes[index].plot(history[metric_name], label='Train %s' % metric_name)
axes[index].plot(history[val_metric_name], label='Validation %s' % metric_name)
axes[index].legend(loc='best', fontsize=16)
axes[index].set_title(metric_name)
axes[index].axvline(np.argmin(history[metric_name]), linestyle='dashed')
axes[index].axvline(np.argmin(history[val_metric_name]), linestyle='dashed', color='orange')
plt.xlabel('Epochs', fontsize=16)
sns.despine()
plt.show()
def preprocess_inputs(df, encoder, cols=['sequence', 'structure', 'predicted_loop_type']):
input_lists = df[cols].applymap(lambda seq: [encoder[x] for x in seq]).values.tolist()
return np.transpose(np.array(input_lists), (0, 2, 1))
def evaluate_model(df, y_true, y_pred, target_cols):
# Complete data
metrics = []
metrics_clean_sn = []
metrics_noisy_sn = []
metrics_clean_sig = []
metrics_noisy_sig = []
for idx, col in enumerate(pred_cols):
metrics.append(np.sqrt(np.mean((y_true[:, :, idx] - y_pred[:, :, idx])**2)))
target_cols = ['Overall'] + target_cols
metrics = [np.mean(metrics)] + metrics
# SN_filter = 1
idxs = df[df['SN_filter'] == 1].index
for idx, col in enumerate(pred_cols):
metrics_clean_sn.append(np.sqrt(np.mean((y_true[idxs, :, idx] - y_pred[idxs, :, idx])**2)))
metrics_clean_sn = [np.mean(metrics_clean_sn)] + metrics_clean_sn
# SN_filter = 0
idxs = df[df['SN_filter'] == 0].index
for idx, col in enumerate(pred_cols):
metrics_noisy_sn.append(np.sqrt(np.mean((y_true[idxs, :, idx] - y_pred[idxs, :, idx])**2)))
metrics_noisy_sn = [np.mean(metrics_noisy_sn)] + metrics_noisy_sn
# signal_to_noise > 1
idxs = df[df['signal_to_noise'] > 1].index
for idx, col in enumerate(pred_cols):
metrics_clean_sig.append(np.sqrt(np.mean((y_true[idxs, :, idx] - y_pred[idxs, :, idx])**2)))
metrics_clean_sig = [np.mean(metrics_clean_sig)] + metrics_clean_sig
# signal_to_noise <= 1
idxs = df[df['signal_to_noise'] <= 1].index
for idx, col in enumerate(pred_cols):
metrics_noisy_sig.append(np.sqrt(np.mean((y_true[idxs, :, idx] - y_pred[idxs, :, idx])**2)))
metrics_noisy_sig = [np.mean(metrics_noisy_sig)] + metrics_noisy_sig
metrics_df = pd.DataFrame({'Metric/MCRMSE': target_cols, 'Complete': metrics, 'Clean (SN)': metrics_clean_sn,
'Noisy (SN)': metrics_noisy_sn, 'Clean (signal)': metrics_clean_sig,
'Noisy (signal)': metrics_noisy_sig})
return metrics_df
def get_dataset(x, y=None, labeled=True, shuffled=True, batch_size=32, buffer_size=-1, seed=0):
if labeled:
dataset = tf.data.Dataset.from_tensor_slices(({'inputs_seq': x[:, 0, :, :],
'inputs_struct': x[:, 1, :, :],
'inputs_loop': x[:, 2, :, :],},
{'outputs': y}))
else:
dataset = tf.data.Dataset.from_tensor_slices(({'inputs_seq': x[:, 0, :, :],
'inputs_struct': x[:, 1, :, :],
'inputs_loop': x[:, 2, :, :],}))
if shuffled:
dataset = dataset.shuffle(2048, seed=seed)
dataset = dataset.batch(batch_size)
dataset = dataset.prefetch(AUTO)
return dataset
def get_dataset_sampling(x, y=None, shuffled=True, seed=0):
dataset = tf.data.Dataset.from_tensor_slices(({'inputs_seq': x[:, 0, :, :],
'inputs_struct': x[:, 1, :, :],
'inputs_loop': x[:, 2, :, :],},
{'outputs': y}))
if shuffled:
dataset = dataset.shuffle(2048, seed=seed)
return dataset
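# The MCRMSE reported by `evaluate_model` above is the mean of the per-target column-wise RMSEs. A compact sketch of the metric itself (assuming arrays of shape `(samples, seq_len, n_targets)`):

```python
import numpy as np

def mcrmse(y_true, y_pred):
    """Mean column-wise RMSE: RMSE per target column, averaged over columns."""
    per_column_rmse = np.sqrt(np.mean((y_true - y_pred) ** 2, axis=(0, 1)))
    return per_column_rmse.mean()

# Every prediction off by exactly 1 -> MCRMSE of 1.0
print(mcrmse(np.zeros((2, 3, 2)), np.ones((2, 3, 2))))  # -> 1.0
```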
# + [markdown] papermill={"duration": 0.015175, "end_time": "2020-09-16T01:25:10.231945", "exception": false, "start_time": "2020-09-16T01:25:10.216770", "status": "completed"} tags=[]
# # Model
# + _kg_hide-output=true papermill={"duration": 1.784506, "end_time": "2020-09-16T01:25:12.032142", "exception": false, "start_time": "2020-09-16T01:25:10.247636", "status": "completed"} tags=[]
# def model_fn(embed_dim=75, hidden_dim=128, dropout=.5, sp_dropout=.2, pred_len=68, n_outputs=5):
def model_fn(embed_dim=75, hidden_dim=128, dropout=.5, sp_dropout=.2, pred_len=68, n_outputs=1):
inputs_seq = L.Input(shape=(None, 1), name='inputs_seq')
inputs_struct = L.Input(shape=(None, 1), name='inputs_struct')
inputs_loop = L.Input(shape=(None, 1), name='inputs_loop')
shared_embed = L.Embedding(input_dim=len(token2int), output_dim=embed_dim, name='shared_embedding')
embed_seq = shared_embed(inputs_seq)
embed_struct = shared_embed(inputs_struct)
embed_loop = shared_embed(inputs_loop)
x_concat = L.concatenate([embed_seq, embed_struct, embed_loop], axis=2, name='embedding_concatenate')
x_reshaped = L.Reshape((-1, x_concat.shape[2]*x_concat.shape[3]))(x_concat)
x = L.SpatialDropout1D(sp_dropout)(x_reshaped)
x = L.Bidirectional(L.GRU(hidden_dim, dropout=dropout, return_sequences=True))(x)
x = L.Bidirectional(L.GRU(hidden_dim, dropout=dropout, return_sequences=True))(x)
x = L.Bidirectional(L.GRU(hidden_dim, dropout=dropout, return_sequences=True))(x)
# Since we are only making predictions on the first part of each sequence, we have to truncate it
x_truncated = x[:, :pred_len]
outputs = L.Dense(n_outputs, activation='linear', name='outputs')(x_truncated)
model = Model(inputs=[inputs_seq, inputs_struct, inputs_loop], outputs=outputs)
opt = optimizers.Adam(learning_rate=config['LEARNING_RATE'])
model.compile(optimizer=opt, loss=losses.MeanSquaredError())
return model
model = model_fn()
model.summary()
# + [markdown] papermill={"duration": 0.015547, "end_time": "2020-09-16T01:25:12.064445", "exception": false, "start_time": "2020-09-16T01:25:12.048898", "status": "completed"} tags=[]
# # Pre-process
# + _kg_hide-input=true papermill={"duration": 0.496651, "end_time": "2020-09-16T01:25:12.577059", "exception": false, "start_time": "2020-09-16T01:25:12.080408", "status": "completed"} tags=[]
feature_cols = ['sequence', 'structure', 'predicted_loop_type']
target_cols = ['reactivity', 'deg_Mg_pH10', 'deg_pH10', 'deg_Mg_50C', 'deg_50C']
pred_cols = ['reactivity', 'deg_Mg_pH10', 'deg_Mg_50C']
encoder_list = [token2int, token2int, token2int]
train_features = np.array([preprocess_inputs(train, encoder_list[idx], [col]) for idx, col in enumerate(feature_cols)]).transpose((1, 0, 2, 3))
train_labels = np.array(train[pred_cols].values.tolist()).transpose((0, 2, 1))
train_labels_reac = np.array(train[['reactivity']].values.tolist()).transpose((0, 2, 1))
train_labels_ph = np.array(train[['deg_Mg_pH10']].values.tolist()).transpose((0, 2, 1))
train_labels_c = np.array(train[['deg_Mg_50C']].values.tolist()).transpose((0, 2, 1))
public_test = test.query("seq_length == 107").copy()
private_test = test.query("seq_length == 130").copy()
x_test_public = np.array([preprocess_inputs(public_test, encoder_list[idx], [col]) for idx, col in enumerate(feature_cols)]).transpose((1, 0, 2, 3))
x_test_private = np.array([preprocess_inputs(private_test, encoder_list[idx], [col]) for idx, col in enumerate(feature_cols)]).transpose((1, 0, 2, 3))
# + [markdown] papermill={"duration": 0.015626, "end_time": "2020-09-16T01:25:12.608783", "exception": false, "start_time": "2020-09-16T01:25:12.593157", "status": "completed"} tags=[]
# # Training
# + _kg_hide-input=true _kg_hide-output=true papermill={"duration": 24510.505148, "end_time": "2020-09-16T08:13:43.129805", "exception": false, "start_time": "2020-09-16T01:25:12.624657", "status": "completed"} tags=[]
AUTO = tf.data.experimental.AUTOTUNE
skf = StratifiedKFold(n_splits=config['N_USED_FOLDS'], shuffle=True, random_state=SEED)
history_list = []
oof = train[['id']].copy()
oof_preds = np.zeros(train_labels.shape)
test_public_preds = np.zeros((x_test_public.shape[0], config['PB_SEQ_LEN'], len(pred_cols)))
test_private_preds = np.zeros((x_test_private.shape[0], config['PV_SEQ_LEN'], len(pred_cols)))
train['signal_to_noise_int'] = train['signal_to_noise'].astype(int)
tasks = ['reactivity', 'deg_Mg_pH10', 'deg_Mg_50C']
tasks_labels = [train_labels_reac, train_labels_ph, train_labels_c]
for task_idx, task in enumerate(tasks):
print(f'\n ===== {task_idx} {task} =====')
for fold,(train_idx, valid_idx) in enumerate(skf.split(train_labels, train['signal_to_noise_int'])):
if fold >= config['N_USED_FOLDS']:
break
print(f'\nFOLD: {fold+1}')
### Create datasets
# Keep only the high signal-to-noise ("clean") samples within each split
train_clean_idxs = np.intersect1d(train[train['signal_to_noise'] > 1].index, train_idx)
valid_clean_idxs = np.intersect1d(train[train['signal_to_noise'] > 1].index, valid_idx)
x_train = train_features[train_clean_idxs]
y_train = tasks_labels[task_idx][train_clean_idxs]
x_valid = train_features[valid_clean_idxs]
y_valid = tasks_labels[task_idx][valid_clean_idxs]
train_ds = get_dataset(x_train, y_train, labeled=True, shuffled=True, batch_size=config['BATCH_SIZE'], buffer_size=AUTO, seed=SEED)
valid_ds = get_dataset(x_valid, y_valid, labeled=True, shuffled=False, batch_size=config['BATCH_SIZE'], buffer_size=AUTO, seed=SEED)
test_public_ds = get_dataset(x_test_public, labeled=False, shuffled=False, batch_size=config['BATCH_SIZE'], buffer_size=AUTO, seed=SEED)
test_private_ds = get_dataset(x_test_private, labeled=False, shuffled=False, batch_size=config['BATCH_SIZE'], buffer_size=AUTO, seed=SEED)
### Model
K.clear_session()
model = model_fn()
model_path = f'model_{task}_{fold}.h5'
es = EarlyStopping(monitor='val_loss', mode='min', patience=config['ES_PATIENCE'],
restore_best_weights=True, verbose=1)
rlrp = ReduceLROnPlateau(monitor='val_loss', mode='min', factor=0.1, patience=5, verbose=1)
### Train (current task)
history = model.fit(train_ds,
validation_data=valid_ds,
callbacks=[es, rlrp],
epochs=config['EPOCHS'],
batch_size=config['BATCH_SIZE'],
verbose=2).history
history_list.append(history)
# Save last model weights
model.save_weights(model_path)
### Inference
valid_preds = model.predict(get_dataset(train_features[valid_idx], labeled=False, shuffled=False,
batch_size=config['BATCH_SIZE'], buffer_size=AUTO, seed=SEED))
oof_preds[valid_idx, :, task_idx] = valid_preds.reshape(valid_preds.shape[:2])
# Short sequence (public test)
model = model_fn(pred_len= config['PB_SEQ_LEN'])
model.load_weights(model_path)
test_public_preds[:, :, task_idx] += model.predict(test_public_ds).reshape(test_public_preds.shape[:2]) * (1 / config['N_USED_FOLDS'])
# Long sequence (private test)
model = model_fn(pred_len= config['PV_SEQ_LEN'])
model.load_weights(model_path)
test_private_preds[:, :, task_idx] += model.predict(test_private_ds).reshape(test_private_preds.shape[:2]) * (1 / config['N_USED_FOLDS'])
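Each fold adds `preds * (1 / N_USED_FOLDS)` into the test arrays, which is just an incremental mean over folds. A NumPy sketch with made-up fold predictions:

```python
import numpy as np

n_folds = 5
shape = (2, 107, 1)

# made-up per-fold predictions: fold f predicts the constant f everywhere
fold_preds = [np.full(shape, float(f)) for f in range(n_folds)]

# accumulating preds * (1 / n_folds) equals averaging the folds
test_preds = np.zeros(shape)
for p in fold_preds:
    test_preds += p * (1 / n_folds)

print(round(test_preds[0, 0, 0], 6))  # 2.0, the mean of 0..4
```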
# + [markdown] papermill={"duration": 0.639276, "end_time": "2020-09-16T08:13:44.417122", "exception": false, "start_time": "2020-09-16T08:13:43.777846", "status": "completed"} tags=[]
# ## Model loss graph
# + _kg_hide-input=true papermill={"duration": 3.594529, "end_time": "2020-09-16T08:13:48.655375", "exception": false, "start_time": "2020-09-16T08:13:45.060846", "status": "completed"} tags=[]
for fold, history in enumerate(history_list):
print(f'\nFOLD: {fold+1}')
print(f"Train {np.array(history['loss']).min():.5f} Validation {np.array(history['val_loss']).min():.5f}")
plot_metrics(history)
# + [markdown] papermill={"duration": 0.665492, "end_time": "2020-09-16T08:13:49.982888", "exception": false, "start_time": "2020-09-16T08:13:49.317396", "status": "completed"} tags=[]
# # Post-processing
# + _kg_hide-input=true papermill={"duration": 4.350775, "end_time": "2020-09-16T08:13:54.997569", "exception": false, "start_time": "2020-09-16T08:13:50.646794", "status": "completed"} tags=[]
# Assign values to OOF set
# Assign labels
for idx, col in enumerate(pred_cols):
val = train_labels[:, :, idx]
oof = oof.assign(**{col: list(val)})
# Assign preds
for idx, col in enumerate(pred_cols):
val = oof_preds[:, :, idx]
oof = oof.assign(**{f'{col}_pred': list(val)})
# Assign values to test set
preds_ls = []
for df, preds in [(public_test, test_public_preds), (private_test, test_private_preds)]:
for i, uid in enumerate(df.id):
single_pred = preds[i]
single_df = pd.DataFrame(single_pred, columns=pred_cols)
single_df['id_seqpos'] = [f'{uid}_{x}' for x in range(single_df.shape[0])]
preds_ls.append(single_df)
preds_df = pd.concat(preds_ls)
# Fill missing columns (if there are any)
missing_columns = [col for col in target_cols if col not in pred_cols]
for col in missing_columns:
preds_df[col] = 0
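The per-position ids merged against `sample_submission.csv` are built as `f'{uid}_{x}'` in the loop above. A tiny sketch with a hypothetical sequence id:

```python
# one id_seqpos row per (sequence id, position) pair
uid = 'id_0049f53ba'   # hypothetical sequence id
n_positions = 3
id_seqpos = [f'{uid}_{x}' for x in range(n_positions)]

print(id_seqpos)  # ['id_0049f53ba_0', 'id_0049f53ba_1', 'id_0049f53ba_2']
```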
# + [markdown] papermill={"duration": 0.663562, "end_time": "2020-09-16T08:13:56.326045", "exception": false, "start_time": "2020-09-16T08:13:55.662483", "status": "completed"} tags=[]
# # Model evaluation
# + _kg_hide-input=true papermill={"duration": 0.721385, "end_time": "2020-09-16T08:13:57.750874", "exception": false, "start_time": "2020-09-16T08:13:57.029489", "status": "completed"} tags=[]
display(evaluate_model(train, train_labels, oof_preds, pred_cols))
# + [markdown] papermill={"duration": 0.659224, "end_time": "2020-09-16T08:13:59.073338", "exception": false, "start_time": "2020-09-16T08:13:58.414114", "status": "completed"} tags=[]
# # Visualize test predictions
# + _kg_hide-input=true papermill={"duration": 1.749452, "end_time": "2020-09-16T08:14:01.492803", "exception": false, "start_time": "2020-09-16T08:13:59.743351", "status": "completed"} tags=[]
submission = pd.read_csv(database_base_path + 'sample_submission.csv')
submission = submission[['id_seqpos']].merge(preds_df, on=['id_seqpos'])
# + [markdown] papermill={"duration": 0.660628, "end_time": "2020-09-16T08:14:02.814540", "exception": false, "start_time": "2020-09-16T08:14:02.153912", "status": "completed"} tags=[]
# # Test set predictions
# + _kg_hide-input=true papermill={"duration": 3.753971, "end_time": "2020-09-16T08:14:07.232762", "exception": false, "start_time": "2020-09-16T08:14:03.478791", "status": "completed"} tags=[]
display(submission.head(10))
display(submission.describe())
submission.to_csv('submission.csv', index=False)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
# Last updated: Feb 26, 2021<br>
# Redoing code so it uses the ASC from Ecospace as the basemap for the transition matrix / interpolation
# To do:<br>
# Next - review what a transition matrix is (Etten, 2014, section 4) and what values
# are needed in the cost raster.
# https://gis.stackexchange.com/questions/280593/understanding-the-values-from-transition-layers-produced-by-the-r-package-gdist
# Generate voronoi using marine cells from Ecospace SHP <br>
# Find nearest marine cell for each haulout (need code in R for spatial join)<br>
#
#
# ### Relative Foraging Intensity using Inverse Path Distance from Seal Haulout Counts
# input: Olesiuk et al's (2010) and subsequent seal haulout survey data downloaded from DFO's online portal <br>
# output: GeoTiffs of seal distribution based on distance to haulouts <br>
# created: Oct 2019 - Feb 2021<br>
# author: <NAME> <br>
# based on: gdistance in R by Etten (2014)
#
# #### Purpose:
# - Calculate on-water (non-euclidean) paths from seal haulouts to surrounding marine areas.
# - Use the cost-path surface to calculate inverse path distance weighting (instead of Euclidean distance) for each cell, based on the actual on-water distance to the haulout, respecting topographic barriers.
# - Sum and rescale scores for each cell for each haulout. Resulting map layer values sum to one (markov matrix) and can be used to distribute total regional seal biomass from other models or surveys.
# - To be used in representing distribution as spatial-temporal 'habitat' layers in Ecospace.
#
# #### General Steps:
# 1) Import haulout locations and fix issues with the locations (e.g., erroneous lat / lons) <br>
# 2) Wrangle the haulout data by interpolating between years; fix multiple surveys per year by taking the mean <br>
# 3) Import bathymetry and recode to use as a 'cost' layer <br>
# 4) Ensure the map projections are correct so the haulout points and cost layer actually overlap <br>
# 5) Main code loop <br>
# - creates a travel 'cost' raster layer for each haulout
# - converts the 'cost' to an inverse on-the-water distance grid
# - rescale the grid cell values to sum to 1 for each haulout
# - use the estimated seal abundance at each haulout as a measure of relative haulout size to weight the grid cell values
# - sum all haulout rasters using a 'map algebra' approach
# - rescale the grid cell values to 1 in the final raster
# - use the resulting raster with annual regional abundance estimates to distribute seals spatially in Ecospace
#
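The rescale-and-weight step in the list above (done later in the R loop with `cellStats`) has a simple invariant: after dividing by the cell sum and multiplying by the haulout count, the raster sums to that count. A NumPy sketch with made-up values:

```python
import numpy as np

# toy inverse-distance grid for one haulout (made-up values)
inv_dist = np.array([[0.0, 0.5],
                     [1.0, 0.5]])

# rescale cells to sum to 1, then weight by the haulout's seal count
seals = 40
weighted = inv_dist / inv_dist.sum() * seals

print(weighted.sum())  # 40.0
```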
# #### to do:
# - add in haulout time series for san juan islands (Jeffries et al)
# - completed: if haulout landlocked, causes issues... need manual edits to haulout locations
# - completed: some years a single haulout was surveyed twice? Need to take mean.
# - completed: surveys done sporadically - need to interpolate each haulout between survey years to fill in missing years
# - completed: surveys were done for regions annually and so the whole dataset can't be interpolated at once, it has to be for each region - or the data need to be interpolated so there is a haulout estimate each year for each haulout, rather than gaps. See table in write-up.
# - run for all years
# - completed: decide whether to output NAs instead of zeros
#
# ## TOC: <a class="anchor" id="top"></a>
# * [1. Read haulout locations, fix issues](#section-1)
# * [2. Spatial projections](#section-2)
# * [2a).](#section-2a)
# * [2b). ](#section-2b)
# * [2b)i.](#section-2bi)
# * [2b)ii.](#section-2bii)
# * [3. Interpolation](#section-3)
#
# * [4. ](#section-4)
# * [Crosschecks and experiments](#section-5)
# +
# to install r packages from notebook
#install.packages("magick", repos='http://cran.us.r-project.org')
# +
library(ipdw) # library for inverse path distance weighting
#https://github.com/jsta/ipdw/blob/master/R/pathdistGen.R
library(rgdal)
library(raster)
library(sp)
library(ncdf4)
library(sf) # reads CSV as 'simple features' (spatial)
library(ggplot2) # visualize
library(rnaturalearth)
library(rnaturalearthdata)
library(dplyr)
library(zoom)
library(zoo) #for na.approx
library(dplyr)
library(tidyverse)
# to do: the 'janitor' library may make cleaning data far easier
# +
hl_df <- read.csv("C:/Users/Greig/Sync/6. SSMSP Model/Model Greig/Data/2. Seals/AbundanceLocations_Olesiuketal/MODIFIED/Harbour_Seal_Counts_from_Strait_of_Georgia_Haulout_Locations.csv")
#hl_pts <- st_read("../AbundanceLocations_Olesiuketal/MODIFIED/Harbour_Seal_Counts_from_Strait_of_Georgia_Haulout_Locations.csv", options=c("X_POSSIBLE_NAMES=Longitude","Y_POSSIBLE_NAMES=Latitude"))
# bathy file for travel cost layer
ncpath <- "../../NEMO-Salish-Sea-2021/data/bathymetry/"
#ncname <- "Bathymetry_1km"
ncname <- "bathy_salishsea_1500m_20210208"
dname <- "bathymetry"
# -
# ### 1) Read haulout locations file and fix issues
# <a class="anchor" id="section-1"></a>
# [BACK TO TOP](#top)
# #### Fix issues with haulout data for non-survey years and years with multiple surveys
# +
# Note that hl_df is not actually interpreted as a data frame by R, rather a 'list'
# this is why the 'length' method doesn't work unless converted to df
# make into 'tibble' - easier to work with in R (tidyverse)
#https://www.datanovia.com/en/lessons/identify-and-remove-duplicate-data-in-r/
hl_df_tib <- as_tibble(hl_df)
typeof(hl_df_tib)
# to convert the 'count' from a factor to dbl given a 'list'
hl_df_tib["Count"] <- as.numeric(as.character(unlist(hl_df_tib[["Count"]])))
# get only duplicates
hl_df_grouped <- hl_df_tib %>% group_by(Site.Key, Date.1) %>%
filter(n()>1)
# calculate the mean of duplicates
# https://stackoverflow.com/questions/46661461/calculate-mean-by-group-using-dplyr-package
hl_df_dupemean <- hl_df_grouped %>%
dplyr::summarize(Mean = mean(Count, na.rm=TRUE))
# replace the 'count' with the mean for sites that had multiple surveys in a year
# exclude records with duplicate year/site key combo (keeps first record of duplicate)
hl_df_tibDistinct <- hl_df_tib %>% distinct(Date.1, Site.Key, .keep_all = TRUE)
# replaces count for duplicates with the mean value
hl_df_tibFixed <- hl_df_tibDistinct %>%
left_join(hl_df_dupemean, by = c("Site.Key","Date.1")) %>%
mutate(Count = ifelse(!is.na(Mean),Mean,Count))
#filter(!is.na(Mean))
hl_df_tibFixed
# -
# #### Interpolate values between survey years for site
# +
#https://stackoverflow.com/questions/43501670/complete-column-with-group-by-and-complete/43501854
all_years <- as_tibble(data.frame("Date.1" = as.integer(seq(1966, 2018))))
# insert years / surveys for years in which there were no surveys
hl_df_tibFixExtraYears <- all_years %>% left_join(hl_df_tibFixed, by = c("Date.1"))
hl_df_tibFixExtraYears
hl_df_expandsurveys <- hl_df_tibFixExtraYears %>%
group_by(Date.1,Site.Key) %>%
summarise(count_number = n()) %>%
ungroup() %>%
complete(Date.1,Site.Key, fill = list(count_number = 1)) %>%
left_join(hl_df_tibFixExtraYears, by = c("Site.Key","Date.1"))
# inspect one site - should have empty records for missing years
hl_df_expandsurveys %>% filter(Site.Key == "H0069")
SiteSurveyCounts <- hl_df_expandsurveys %>%
drop_na(Site.Key) %>%
group_by(Site.Key) %>%
summarise(non_na_count = sum(!is.na(Count)))
# if 1966 is NA set it to zero because interpolation needs a value there.
#mutate(x2 = na.approx(Count, Site.Key)) %>%
#arrange(Site.Key) %>%
#ungroup()
# interpolate between years for those sites w/ more than one year w/ a survey
hl_fg_tib_Interp <- hl_df_expandsurveys %>% left_join(SiteSurveyCounts, by = c("Site.Key")) %>%
filter(non_na_count > 1) %>%
group_by(Site.Key) %>%
arrange(Site.Key,Date.1) %>%
mutate(x2 = na.approx(Count,x = Date.1, na.rm=FALSE)) %>%
rename(Count_old = Count) %>%
rename(Count = x2) %>%
ungroup()
# join back the sites /w only one year survey
hl_df_tibAllSurveys <- hl_df_expandsurveys %>% left_join(SiteSurveyCounts, by = c("Site.Key")) %>%
filter(non_na_count == 1) %>%
filter(!is.na(Count)) %>%
bind_rows(hl_fg_tib_Interp) %>%
filter(!is.na(Count))
# get list of all unique sites
tab_metadata <- hl_df_tibAllSurveys %>% distinct(Site.Key,Location.Value,Location.Info,Latitude,Longitude,.keep_all = FALSE) %>%
drop_na(Latitude) %>%
rename(LocInfo2 = Location.Info) %>%
rename(LocVal2 = Location.Value) %>%
rename(Lat2 = Latitude) %>%
rename(Lon2 = Longitude)
hl_df_filledMetadata1 <- hl_df_tibAllSurveys %>% left_join(tab_metadata) %>%
filter(is.na(Location.Value) & is.na(Latitude)) %>%
mutate(Location.Value = LocVal2) %>%
mutate(Location.Info = LocInfo2) %>%
mutate(Latitude = Lat2) %>%
mutate(Longitude = Lon2)
hl_df_filledMetadata <- hl_df_tibAllSurveys %>%
filter(!is.na(Location.Value) & !is.na(Latitude)) %>%
bind_rows(hl_df_filledMetadata1)
hl_df_filledMetadata %>% group_by(Site.Key, Date.1) %>%
filter(n()>1)
# final step
# final table has interpolated counts between survey years
# duplicates surveys of same site in one year -
# fixing by taking mean value of all duplicates.
# A few have slightly different coordinates
# I did not inspect coordinate issues, kept the first record
hl_df_wrangled <- hl_df_filledMetadata %>% distinct(Date.1, Site.Key, .keep_all = TRUE) %>%
arrange(Site.Key, Date.1) %>%
select(-one_of(c("LocVal2","count_number","Mean","non_na_count","Count_old","LocInfo2","Lat2","Lon2")))
hl_df_wrangled
# -
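The per-site interpolation above uses `zoo::na.approx`, which fills linearly between survey years. The same idea sketched in Python with `np.interp` and made-up counts (note `np.interp` also extrapolates flat beyond the endpoints, whereas `na.approx(..., na.rm = FALSE)` leaves those years NA):

```python
import numpy as np

# one site surveyed in 1970 (100 seals) and 1974 (300 seals); fill the gap years
survey_years = np.array([1970, 1974])
survey_counts = np.array([100.0, 300.0])

years = np.arange(1970, 1975)
counts = np.interp(years, survey_years, survey_counts)

print(counts.tolist())  # [100.0, 150.0, 200.0, 250.0, 300.0]
```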
# ### 2) Spatial projections
# #### create spatial data frame from haulout locations
# <a class="anchor" id="section-2"></a>
# [BACK TO TOP](#top)
# +
# fix the issues with positive longitudes
#hl_pts$Longitude[hl_pts$Longitude > 0] <- #hl_pts$Longitude[hl_pts$Longitude > 0] * -1
#plot(hl_pts$Longitude, hl_pts$Latitude)
# fix issues with some points in wrong hemisphere
hl_df2 <- hl_df_wrangled %>% mutate(Longitude =
ifelse(Longitude>0, -Longitude, Longitude )
)
# create spatial pts object in lat / lon (sf R library)
hl_pts3 <- hl_df2 %>%
st_as_sf(coords = c("Longitude", "Latitude"), crs = 4326)
#SpatialPointsDataFrame
# tried this, but didn't work
#hl_pts2 <- SpatialPointsDataFrame(coords=hl_pts$geometry,data=hl_pts)
# reproject to BC Albers
hl_pts_4326 <- hl_pts3 %>% st_set_crs(NA) %>% st_set_crs(4326)
hl_pts_3005 <- hl_pts_4326 %>% st_transform(3005)
#hl_pts_3005
#plot the points
world <- ne_countries(scale = "medium", returnclass = "sf")
ggplot(data = world) +
geom_sf() +
geom_sf(data = hl_pts_3005, size = 1, shape = 1, color=alpha("black",0.9)) +
coord_sf(xlim = c(-121, -126), ylim = c(47, 52), expand = FALSE) #coord_sf(xlim = c(100, 140), ylim = c(45, 55), expand = FALSE)
# -
#global view to check no points are way off
world <- ne_countries(scale = "medium", returnclass = "sf")
ggplot(data = world) +
geom_sf() +
geom_sf(data = hl_pts_3005, size = 1, shape = 1, color=alpha("red",0.9))
# +
# see Etten (2014) for example
# create a raster from the Ecospace ASC
# create transition matrix with raster
# get nearesr marine cell row / col of ecospace map for each haulout
# issue: our grid is rotated and gdistance might be making distance
# corrections assuming not rotated.
# +
# read ecospace raster
ecospace_file = "C:/Users/Greig/Documents/GitHub/Ecosystem-Model-Data-Framework/data/basemap/ecospacedepthgrid.asc"
con = file(ecospace_file, "r")
i = 0
df1 <- read.table(ecospace_file, skip = 6, header = FALSE, sep = "")
r_mat <- data.matrix(df1)
# the values below determine the resolution using rows / cols.
# because of the rotation I have chosen max and min from mid-points of rotated rectangular map
r <- raster(r_mat,
xmn=-124.588135,
xmx=-123.2697,
ymn=48.54895,
ymx=50.314651,
crs=CRS("+proj=longlat +datum=WGS84"))
r
plot(r, xlab="longs are wrong", ylab="lats are wrong")
# -
#calculate conductances hence 1/max(x)
trans <- gdistance::transition(r, function(x) 1 / max(x), directions = 16)
# ### Get basemap from bathymetry and fix issues
# #### check projections
# +
# get the haulouts as geopackage (can also read shapefiles)
#file_pts="temp/Haulouts.gpkg"
#file_pts="temp/Haulouts1974.gpkg"
#lyr="Haulouts"
#lyr="Haulouts1974"
#pnts <- readOGR(dsn=file_pts, layer=lyr)
# did not use the 'costrastergen' function from ipdw
# given that I already have a grid that's easy enough to use
#pols <- readOGR(dsn="BCLand.shp.gpkg")
# this doesn't work (takes forever)
#costras <- costrasterGen(pnts, pols, extent = "pnts",projstr = projection(pols))
# Get bathymetry as .nc file and check
ncfname <- paste(ncpath, ncname, ".nc", sep="")
bathync <- nc_open(ncfname)
#print(bathync)
# read the netcdf file as raster and create a copy to fiddle with
# (note: min("nav_lon") etc. evaluate to the strings themselves, so the
# extent here effectively comes from the netCDF file, not these arguments)
tmpin <- raster(ncfname,
 varname = "Bathymetry",
 xmn=min("nav_lon"),
 xmx=max("nav_lon"),
 ymn=min("nav_lat"),
 ymx=max("nav_lat"),
 crs=CRS("+proj=longlat +datum=WGS84"))
#tmpin
x <- tmpin
# change the NA values to the raster to 10000 as a high 'cost'
x[is.na(x[])] <- 10000
# change all other values to 1
x[x != 10000] <- 1
x
plot(x)
# +
# check projection
crsBath <- projection(x)
print("CRS of bathymetry:")
crsBath
# check projection
#crsPts <- projection(hl_pts)
#crsPts
# define the Albers projection string here because projectRaster below needs it
crs_aea ="+proj=aea +lat_1=50 +lat_2=58.5 +lat_0=45 +lon_0=-126 +x_0=1000000 +y_0=0 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs"
# the raster and pnts are unprojected (geo wgs84)
# convert to albers equal area
#s <- spTransform(hl_pts,crs_aea)
#s
# I'm not sure this works! Won't display on map if I do so.
# May just be 'declaring projection'
x2 <- projectRaster(x,crs=crs_aea)
#x2 <- projectRaster(x,crs=3005)
x2
# -
x2
# +
#plot(x2, main="Salish Sea")
#points(hl_pts_3005)
# I got the crs projection info for aea from a geopackage
#pols <- readOGR(dsn="temp/BCLand.shp.gpkg")
#crsPols <- projection(pols)
#crsPols
# -
# #### Prep Map Plot Display Libraries etc
# +
# Custom function to help display raster on map
#
# this function is custom from here:
# https://stackoverflow.com/questions/48955504/how-to-overlay-a-transparent-raster-on-ggmap
#
#' Transform raster as data.frame to be later used with ggplot
#' Modified from rasterVis::gplot
#'
#' @param x A Raster* object
#' @param maxpixels Maximum number of pixels to use
#'
#' @details rasterVis::gplot is nice to plot a raster in a ggplot but
#' if you want to plot different rasters on the same plot, you are stuck.
#' If you want to add other information or transform your raster as a
#' category raster, you can not do it. With `SDMSelect::gplot_data`, you retrieve your
#' raster as a data.frame that can be modified as wanted using `dplyr` and
#' then plot in `ggplot` using `geom_tile`.
#' If Raster has levels, they will be joined to the final tibble.
#'
#' @export
gplot_data <- function(x, maxpixels = 1000000) {
x <- raster::sampleRegular(x, maxpixels, asRaster = TRUE)
coords <- raster::xyFromCell(x, seq_len(raster::ncell(x)))
## Extract values
dat <- utils::stack(as.data.frame(raster::getValues(x)))
names(dat) <- c('value', 'variable')
dat <- dplyr::as.tbl(data.frame(coords, dat))
if (!is.null(levels(x))) {
dat <- dplyr::left_join(dat, levels(x)[[1]],
by = c("value" = "ID"))
}
dat
}
# -
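`gplot_data` above flattens a raster into (x, y, value) rows for `geom_tile`. The same flattening sketched in NumPy, on a hypothetical 2x2 grid:

```python
import numpy as np

# a 2x2 raster with cell-center coordinates; raster rows run top to bottom
values = np.array([[1, 2],
                   [3, 4]])
xs = np.array([0.5, 1.5])
ys = np.array([1.5, 0.5])

# long form: one (x, y, value) row per cell, ready for a tile plot
xx, yy = np.meshgrid(xs, ys)
long_form = np.column_stack([xx.ravel(), yy.ravel(), values.ravel()])

print(long_form.tolist())
# [[0.5, 1.5, 1.0], [1.5, 1.5, 2.0], [0.5, 0.5, 3.0], [1.5, 0.5, 4.0]]
```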
# #### NOTE: the above code 'maxpixels' arg will simplify the raster and make it appear low res if it's set low
# +
# transform raster to data frame for map display
# (does not work if using gplot_data with x2, the attempted CRS 3005 raster)
ss_ras_df <- gplot_data(x)
world <- ne_countries(scale = "medium", returnclass = "sf")
ggplot(data = world) +
geom_sf() +
geom_sf(data = hl_pts_3005, size = 1, shape = 1, color=alpha("red",0.9)) +
geom_tile(data = ss_ras_df,
aes(x = x, y = y, fill = value), alpha=0.5) +
coord_sf(xlim = c(-121, -126), ylim = c(47, 52), expand = FALSE) #coord_sf(xlim = c(100, 140), ylim = c(45, 55), expand = FALSE)
# -
# ### Re-Project Bathymetry
# +
# must work with certain data types and projections
# reproject original raster
crs_aea ="+proj=aea +lat_1=50 +lat_2=58.5 +lat_0=45 +lon_0=-126 +x_0=1000000 +y_0=0 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs"
x2 <- projectRaster(x,crs=crs_aea)
crsBathx <- projection(x)
print("CRS bathymetry before:")
crsBathx
crsBathx2 <- projection(x2)
print("CRS bathymetry after:")
crsBathx2
crsBathPts <- projection(hl_pts_3005)
print("CRS haulouts before:")
crsBathPts
#reproject original points
#s <- spTransform(hl_pts3,crs_aea)
#s
#crsBathxPts2 <- projection(s)
#print("CRS haulouts after:")
#crsBathPts2
# -
# ### Set raster cells with haulouts as 'water'
# check that costs of travel across map are either 10,000 (land) or 1 (water)
hist(x2)
# #### Fix issues due to reprojection process
# +
s1 <- hl_pts_3005
#s1@data["count"] <- 1 #for testing
#s1@data
# set 'cost' to 1 for all cells with a haulout
# need further edits to ensure seals can make it out of landlocked cells
ii <- raster::extract(x2, s1, cellnumbers=TRUE)[,"cells"]
x2[ii] <- 1
# problem fix:
# - the unique values of x2 are no longer exactly 1 and 10000 due to the reprojection process
x2[x2 < 5000] <- 1
x2[x2 >= 5000] <- 10000
# -
# #### Exported raster and haulout points to QGIS, created points to connect some haulouts to water
# +
# fix haulout raster cells that are still landlocks
# (in some cases, the haulout locations are landlocked by more than one cell)
# I manually created this list using QGIS
raster::writeRaster(x2,filename = "tmp_costraster",format="GTiff", overwrite = TRUE, NAflag = -9999)
# filter only unique haulout locations
s2 <- hl_pts_3005 %>% distinct(Site.Key, .keep_all = TRUE)
st_write(s2, "tmp_haulouts.shp", delete_layer = TRUE) # overwrites
# -
# #### Import the points representing paths to water and apply to cost raster
# +
# read the 'paths to water' point shapefile, created manually in QGIS
# use points to set cost raster values to '1'
pathsFix_df <- read.csv("PathsToWater.csv")
# create spatial pts object in lat / lon (sf R library)
pathsFix_df2 <- pathsFix_df %>% st_as_sf(coords = c("X", "Y"), crs = 4326)
# reproject to BC Albers
pathsFix_4326 <- pathsFix_df2 %>% st_set_crs(NA) %>% st_set_crs(4326)
pathsFix_3005 <- pathsFix_4326 %>% st_transform(3005)
# set 'cost' to 1 for all cells with a haulout
# need further edits to ensure seals can make it out of landlocked cells
ii <- raster::extract(x2, pathsFix_3005, cellnumbers=TRUE)[,"cells"]
x2[ii] <- 1
# -
# #### Check that costs are still 10000 (land) or 1 (water)
unique(x2)
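The re-binarizing applied to `x2` earlier (reprojection resamples cells away from exactly 1 and 10000, so a 5000 threshold splits them back into water and land) looks like this sketched in NumPy, with made-up resampled costs:

```python
import numpy as np

# resampled cost cells no longer land exactly on 1 or 10000
cost = np.array([1.0, 3.2, 4999.0, 5000.0, 9998.7, 10000.0])

# below the 5000 threshold is water (cost 1), everything else is land (cost 10000)
cost = np.where(cost < 5000, 1.0, 10000.0)

print(np.unique(cost).tolist())  # [1.0, 10000.0]
```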
# ### 3) Interpolation
# ##### Transition matrix
# <a class="anchor" id="section-3"></a>
# [BACK TO TOP](#top)
# +
# below drawn from ipdw source code
# https://github.com/jsta/ipdw/blob/master/R/pathdistGen.R
# (1) calculates the cost to get anywhere from a point on the map
# RANGE = Maximum Foraging Distance (in bathy raster / map units of meters)
range = 40000
costras = x2
spdf = s1
ipdw_range <- range / raster::res(costras)[1] # convert range from map units (m) to a number of cells
# note the cells are not perfectly 500 m, so the ipdw_range is not as expected
#start interpolation
#calculate conductances hence 1/max(x)
trans <- gdistance::transition(costras, function(x) 1 / max(x), directions = 16)
# (note - GLO - low conductances = high cost of travel)
# NOTES - GLO
# in transition, users define a function f(i,j) to calculate the transition value for each pair of adjacent cells
# i and j" (quoting from https://cran.r-project.org/web/packages/gdistance/vignettes/gdistance1.pdf)
# also see https://gis.stackexchange.com/questions/280593/understanding-the-values-from-transition-layers-produced-by-the-r-package-gdist
# +
# OLD CODE
# problem is haulouts object is 'sf' and I need 'spdf'
# - https://gis.stackexchange.com/questions/239118/r-convert-sf-object-back-to-spatialpolygonsdataframe
#haulouts@data$Count
#spdf$Count2 <- (as.numeric(as.character(spdf$Count)))
#haulouts <- as(spdf,"Spatial")
#haulouts <- (haulouts["Count2"])
#haulouts <- haulouts[2,]
#haulouts
#seals = as.numeric(as.character(haulouts$Count))
#seals
#haulouts["Count"] <- as.numeric(as.character(haulouts@data$Count))
# -
#paramlist <- c("Count2")
#final.ipdw <- ipdw(haulouts, costras, range = range, paramlist,
# overlapped = TRUE)
#pathdistanceraster <- pathdistGen(haulouts, costras, range, yearmon = "default", progressbar = TRUE)
#raster::writeRaster(pathdistanceraster,filename = "tmp_pathdist",format="GTiff", overwrite = TRUE, NAflag = -9999)
ipdw_range
# +
# plot the transitionlayer
#plot(raster(trans))
# NOTE - GLO
# Not meant to be visualized as it is a multidimensional matrix
# transitionlayers when converted to raster are essentially averaging
# the cost to all adjacent cells, for each cell
# -
# ## 5) Test interpolation method
# +
# TEST - create cost surface for a point
# spdf needs to be sf
coord <- st_coordinates(spdf[1,])
costsurf <- gdistance::accCost(trans, coord)
# added by GLO to set to NA any cells outside the range
costsurf <- calc(costsurf, fun=function(x){ x[x > ipdw_range] <- NA; return(x)} )
# -
#raster::writeRaster(trans,filename = "tmp_transrast",format="GTiff", overwrite = TRUE, NAflag = -9999)
raster::writeRaster(costsurf,filename = "tmp_costsurf",format="GTiff", overwrite = TRUE, NAflag = -9999)
#raster::writeRaster(x2,filename = "tmp_costras",format="GTiff", overwrite = TRUE, NAflag = -9999)
# ### Tests - plot and ggplot of one seal haulout distance / cost surface
plot(costsurf)
points(st_coordinates(spdf[1,]))
# #### Visualize using ggplot library
# +
ss_ras_costsurf <- gplot_data(costsurf)
ggplot() +
geom_raster(data = ss_ras_costsurf,
aes(x = x, y = y,
fill = value)) +
coord_quickmap()
# +
# reproject raster for display
# NOTE - GLO -
#" it is generally not a good idea to use projectRaster with a crs= argument.
# It is better to provide a Raster* object as template to project to"
# https://stackoverflow.com/questions/51137838/raster-projection-to-utm-to-lat-lon-in-r
# https://spatialreference.org/ref/epsg/3005/proj4/
#
costsurf_4326 <- projectRaster(costsurf,x)
# transform raster to data frame for map display
# (does not work if using gplot_data with x2, the attempted CRS 3005 raster)
ss_ras_costsurf <- gplot_data(costsurf_4326)
ggplot(data = world) +
geom_sf(data = spdf[1,], size = 1, shape = 1, color=alpha("red",0.9)) +
geom_tile(data = ss_ras_costsurf,
aes(x = x, y = y, fill = value), alpha=0.5) +
coord_sf(xlim = c(-125, -122.5), ylim = c(47.2, 49.5), expand = FALSE) #coord_sf(xlim = c(100, 140), ylim = c(45, 55), expand = FALSE)
# +
x_ipdw <- 1/costsurf
# (5) fix NAs and infinite values
x_ipdw[is.infinite(x_ipdw[])] <- 1
x_ipdw[is.na(x_ipdw)] <- 0
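The two-line patch above (R: `1/costsurf`, then fixing `Inf` at the source cell and `NA` outside the range) can be sketched in NumPy with made-up accumulated costs:

```python
import numpy as np

# accumulated travel cost from the haulout: 0 at the source, nan beyond the range
cost = np.array([0.0, 2.0, 4.0, np.nan])

with np.errstate(divide='ignore'):
    inv = 1.0 / cost           # the 0-cost source cell becomes inf; nan stays nan

inv[np.isinf(inv)] = 1.0       # source cell gets full weight
inv[np.isnan(inv)] = 0.0       # out-of-range cells get zero weight

print(inv.tolist())  # [1.0, 0.5, 0.25, 0.0]
```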
# +
costsurf_4326 <- projectRaster(x_ipdw,x)
# transform raster to data frame for map display
# (does not work if using gplot_data with x2, the attempted CRS 3005 raster)
ss_ras_costsurf <- gplot_data(costsurf_4326)
ggplot(data = world) +
geom_sf(data = spdf[1,], size = 2, shape = 1, color=alpha("red",1)) +
geom_tile(data = ss_ras_costsurf,
aes(x = x, y = y, fill = value), alpha=0.9) +
coord_sf(xlim = c(-125, -126), ylim = c(50, 50.6), expand = FALSE) #coord_sf(xlim = c(100, 140), ylim = c(45, 55), expand = FALSE)
# -
# ### Get Unique years to loop through
# +
# get unique years (r language = year 'levels')
#y_unique <-unique(format(as.Date(spdf$Date.1, format="%Y-%m-%d"),"%Y"))
y_unique <- unique(format(spdf$Date.1))
# NOTE - GLO
# https://stackoverflow.com/questions/24783860/loop-over-unique-values-r
#TEST
y = 0
#for (y in 1:length(y_unique)){
# print(y_unique[y])
#}
sort(y_unique)
# +
# get only data from one year
y = 5 # begins at index 1, not 0
# filter for desired year
#spdf_yr <- filter(spdf, format(as.Date(spdf$Date.1, format="%Y-%m-%d"),"%Y") == y_unique[y])
spdf_yr <- filter(spdf, spdf$Date.1 == y_unique[y])
head(spdf_yr)
# -
# ## 6) Main Code Loops
y_unique <- sort(y_unique)
y <- 1
for (y in 1:length(y_unique)){
print(y_unique[y])
}
# ### Temp - only get some years (each year takes ~30 min)
y_unique = c("2000","2001","2002")
spdf_yr <- filter(spdf, Date.1 == y_unique[1])
spdf_yr
# +
y <- 1
for (y in 1:length(y_unique)){
spdf_yr <- filter(spdf, Date.1 == y_unique[y])
i <- 1 # haulout index
for(i in seq_len(nrow(spdf_yr))){
coord <- st_coordinates(spdf_yr[i,])
costsurf <- gdistance::accCost(trans, coord)
# (2) calculates the maximum distance?
#ipdw_dist <- raster::hist(costsurf, plot = FALSE)$breaks[2]
#if(ipdw_dist < ipdw_range){
# ipdw_dist <- ipdw_range
#}
# (2)
# added by GLO to set to NA any cells outside the range
costsurf <- calc(costsurf, fun=function(x){ x[x > ipdw_range] <- NA; return(x)} )
# (3) reclassify cost surface (is it accessible under the range / limit)
# problem - the upper class lumps all cells equal to max distance together
#costsurf_reclass <- raster::reclassify(costsurf,
# c(ipdw_dist, +Inf, NA,
# ipdw_range, #ipdw_dist, ipdw_range))
# (4) make it inverse distance
#x_ipdw <- 1/costsurf^2
#x_ipdw <- calc(costsurf, fun=function(x){ x[x > range] <- NA; return(x)} )
x_ipdw <- 1/costsurf
# (5) fix NAs and infinite values
x_ipdw[is.infinite(x_ipdw[])] <- 1
x_ipdw[is.na(x_ipdw)] <- 0
      # (6) Multiply each cell's ipdw value by the haulout size:
      # rescale each cell value so all cells sum to 1,
      # then multiply by the haulout size
tot_val = cellStats(x_ipdw,sum)
#seals = spdf[i,"count"]@data
pts_count <- spdf_yr %>% as.data.frame
# https://stackoverflow.com/questions/34469178/r-convert-factor-to-numeric-and-remove-levels
seals = as.numeric(as.character(pts_count$Count[i]))
#seals = spdf[i,"Count"]@data
var_f = tot_val*seals
#var_f = tot_val*seals[1,]
#print("seal count")
#print(seals)
#print("tot_val")
#print(tot_val)
#print("var_f")
#print(var_f)
#print("check 2")
# each cell = % of total foraging arena (or probability of foraging)
# weight by cell density at haulout
x2_ipdw <- x_ipdw / tot_val * seals
#x2_ipdw <- x_ipdw / tot_val * seals[1,]
#print("check 3")
#print(i)
#print(paste("seals: ", seals))
# (7) Write to disk
# (best practice is to write so not all rasters stored in RAM)
raster::writeRaster(x2_ipdw,filename = file.path(tempdir(), paste(y_unique[y],"sealcount", "A1ras", i, ".grd", sep = "")), overwrite = TRUE, NAflag = -99999)
#print("check 3")
} # end haulout loop
# (8) Open the temp rasters, create rasterstack (one for each haulout)
# get list of temp raster files
raster_flist <- list.files(path = file.path(tempdir()), pattern = paste("sealcount", "A1ras*", sep = ""), full.names = TRUE)
# filter to only get .grd files
raster_flist <- raster_flist[grep(".grd", raster_flist, fixed = TRUE)]
as.numeric(gsub('.*A1ras([0123456789]*)\\.grd$', '\\1', raster_flist)) -> fileNum # get raster by file
raster_flist <- raster_flist[order(fileNum)] # order them
rstack <- raster::stack(raster_flist)
rstack <- raster::reclassify(rstack, cbind(-99999, NA))
# delete temp rasters
file.remove(list.files(path = file.path(tempdir()),
pattern = paste("sealcount", "A1ras*", sep = ""),
full.names = TRUE))
  # (9) Combine the foraging intensity rasters for each haulout into one raster for all haulouts
# 'add' rasters.
# make sure NA is changed to zero
z <- rstack[[1]]
z[is.na(z[])] <- 0
for (i in 2:nlayers(rstack)){
each_ras <- rstack[[i]]
each_ras[is.na(each_ras[])] <- 0
z <- z + each_ras
}
# rescale each cell value in similar fashion as step (6)
# final result is each cell contains a multiplier (range 0 to 1)
# representing a relative foraging intensity
  # can be used to multiply total biomass for SoG to get B per cell
tot_val = cellStats(z, sum)
final_sealrast <- z / tot_val
#raster::writeRaster(x2_ipdw,filename = file.path(tempdir(), paste(y_unique[y],"sealcount", "A1ras", i, ".grd", sep = "")), overwrite = TRUE, NAflag = -99999)
raster::writeRaster(final_sealrast,filename = paste("sealforagingintens",y_unique[y],".tiff", sep=""),format="GTiff", overwrite = TRUE, NAflag = -9999)
#raster::writeRaster(z,filename = paste("TMPsealforagingintens",y_unique[y],".tiff", sep=""),format="GTiff", overwrite = TRUE, NAflag = -9999)
  print(paste("Finished year: ", y_unique[y]))
}# end year loop
# -
# where intermediate files are written
tempdir()
# should sum to 1
cellStats(final_sealrast,sum)
# helps with choosing symbology in QGIS
hist(final_sealrast)
# ### Inspect Results
cellStats(z, sum)
cellStats(final_sealrast, sum)
final_sealrast
# ## 7) Multiply foraging intensity rasters by abundances
# #### Create distribution map for animation creation, communication, display etc (not meant for Ecospace)
# get seal abundances from table
seal_abun <- read.csv("../Abundance_Nelsonetal2019/MODIFIED/seal_data.csv")
seal_abun_tib <- as.tibble(seal_abun)
seal_abun_tib
# get relative distribution raster
year1 = "1997"
mypath = "./"
#list.files(full.names = T)
intens_file <- paste("./sealforagingintens", year1, ".tif", sep="")
r <- raster(x = intens_file, package="raster")
#r
# +
# get the abundance for year of interest
#for (i in 1:length(seal_abun_tib)){
# print(seal_abun_tib$year[i])
# print(seal_abun_tib$sog[i])
#}
abun_yr <- seal_abun_tib %>%
filter(year == year1)%>%
pull(sog)
abun_yr
# multiply raster cells by abundance
r2 <- r * abun_yr
# export new raster
raster::writeRaster(r2,filename = paste("sealforagingintensWeighted",year1,".tiff", sep=""),format="GTiff", overwrite = TRUE, NAflag = -9999)
# -
# ### Make a GIF animation
library(magick) # used to read the rasters and make the animation
#library(purrr) # not available for R 3.5
# build the list of weighted raster filenames written above
# (assuming the .tiff extension used by writeRaster)
filenames <- paste0("sealforagingintensWeighted", 1976:2000, ".tiff")
# read all frames, animate, and write the GIF
image_read(filenames) %>%
  image_animate(fps = 2) %>%
  image_write("SealDist.gif")
# Step 2: List those Plots, Read them in, and then make animation
list.files(path = "./fig_output/ndwi/", pattern = "*.png", full.names = T) %>%
map(image_read) %>% # reads each path file
image_join() %>% # joins image
image_animate(fps=2) %>% # animates, can opt for number of loops
image_write("ndwi_aug_hgm.gif") # write to current dir
| notebooks/notebook archive/Seal Model - R.ipynb |
# # 📝 Exercise M4.01
#
# The aim of this exercise is two-fold:
#
# * understand the parametrization of a linear model;
# * quantify the fitting accuracy of a set of such models.
#
# We will reuse part of the code of the course to:
#
# * load data;
# * create the function representing a linear model.
#
# ## Prerequisites
#
# ### Data loading
# <div class="admonition note alert alert-info">
# <p class="first admonition-title" style="font-weight: bold;">Note</p>
# <p class="last">If you want a deeper overview regarding this dataset, you can refer to the
# Appendix - Datasets description section at the end of this MOOC.</p>
# </div>
# +
import pandas as pd
penguins = pd.read_csv("../datasets/penguins_regression.csv")
feature_name = "Flipper Length (mm)"
target_name = "Body Mass (g)"
data, target = penguins[[feature_name]], penguins[target_name]
# -
# ### Model definition
def linear_model_flipper_mass(
flipper_length, weight_flipper_length, intercept_body_mass
):
"""Linear model of the form y = a * x + b"""
body_mass = weight_flipper_length * flipper_length + intercept_body_mass
return body_mass
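As a quick sanity check of this helper, here is a hypothetical call (the slope of 45 g/mm and intercept of -5000 g are illustrative values, not fitted ones):

```python
def linear_model_flipper_mass(
    flipper_length, weight_flipper_length, intercept_body_mass
):
    """Linear model of the form y = a * x + b"""
    return weight_flipper_length * flipper_length + intercept_body_mass

# a 200 mm flipper with an assumed slope of 45 g/mm and intercept of -5000 g
mass = linear_model_flipper_mass(200, 45, -5000)
print(mass)  # 4000
```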
# ## Main exercise
#
# Given a vector of flipper lengths, define several weights and intercepts to
# plot several linear models that could fit our data. Use the above
# visualization helper function to visualize both the models and the data.
# +
import numpy as np
flipper_length_range = np.linspace(data.min(), data.max(), num=300)
# -
# Write your code here.
# weights = [...]
# intercepts = [...]
weights = [45, -40, 25]
intercepts = [-5000, 13000, 0]
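One way to draw these candidate lines is a minimal matplotlib sketch (the course's visualization helper is not reproduced in this excerpt, and the 170-230 mm flipper-length range is an assumption about the data, not a value from the notebook):

```python
import numpy as np
import matplotlib.pyplot as plt

# assumed plausible flipper-length range in mm
flipper_length_range = np.linspace(170, 230, num=300)
weights = [45, -40, 25]
intercepts = [-5000, 13000, 0]

# draw one line per (weight, intercept) candidate model
for weight, intercept in zip(weights, intercepts):
    predicted_mass = weight * flipper_length_range + intercept
    plt.plot(flipper_length_range, predicted_mass,
             label=f"{weight} * x + {intercept}")
plt.xlabel("Flipper Length (mm)")
plt.ylabel("Body Mass (g)")
plt.legend()
```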
# In the previous question, you were asked to create several linear models.
# The visualization allowed you to qualitatively assess if a model was better
# than another.
#
# Now, you should come up with a quantitative measure which will indicate the
# goodness of fit of each linear model. This quantitative metric should result
# in a single scalar and allow you to pick up the best model.
import numpy as np
def goodness_fit_measure(true_values, predictions):
# Write your code here.
# Define a measure indicating the goodness of fit of a model given the true
# values and the model predictions.
# print(f'true_values {true_values} - predictions {predictions}')
    # mean squared error; ravel() guards against (n, 1)-shaped predictions
    error = np.mean(np.square(true_values.to_numpy() - predictions.to_numpy().ravel()))
    return error
for model_idx, (weight, intercept) in enumerate(zip(weights, intercepts)):
target_predicted = linear_model_flipper_mass(data, weight, intercept)
print(f"Model #{model_idx}:")
print(f"{weight:.2f} (g / mm) * flipper length + {intercept:.2f} (g)")
print(f"Error: {goodness_fit_measure(target, target_predicted):.3f}\n")
| notebooks/linear_models_ex_01.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import torch
import torch.nn as nn
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
# ## Create some toy data
# create 3 classes which are not linearly separable
r = np.tile(np.r_[0:1:100j], 3)
t = np.r_[0:np.pi*8:300j] + np.random.rand(300)
x_train = np.c_[r*np.sin(t), r*np.cos(t)]
y_train = np.arange(3).repeat(100)
plt.scatter(x_train[:,0], x_train[:,1], c=y_train, cmap=plt.cm.Paired)
# ## Define a Neural Network model
# +
class NeuralNet(nn.Module):
def __init__(self, input_size, hidden_layer, num_classes):
super().__init__()
self.fc1 = nn.Linear(input_size, hidden_layer)
self.relu1 = nn.ReLU()
self.fc2 = nn.Linear(hidden_layer, hidden_layer)
self.relu2 = nn.ReLU()
self.fc3 = nn.Linear(hidden_layer, num_classes)
def forward(self, X):
out = self.fc1(X)
out = self.relu1(out)
out = self.fc2(out)
out = self.relu2(out)
out = self.fc3(out)
return out
def predict(self, X):
if isinstance(X, np.ndarray):
X = torch.from_numpy(X.astype(np.float32))
out = self.forward(X)
return torch.max(out, 1)[1]
def plot_dicision_boundary(x_train, y_train, model):
    x1_min, x2_min = x_train.min(0) - 0.5
    x1_max, x2_max = x_train.max(0) + 0.5
    x1, x2 = np.meshgrid(np.arange(x1_min, x1_max, 0.01),
                         np.arange(x2_min, x2_max, 0.01))
    y = model.predict(np.c_[x1.ravel(), x2.ravel()])
    plt.pcolormesh(x1, x2, y.detach().numpy().reshape(x1.shape), cmap=plt.cm.Paired)
    plt.scatter(x_train[:, 0], x_train[:, 1], c=y_train, edgecolors='k', cmap=plt.cm.Paired)
    plt.show()
# -
model = NeuralNet(2,128,3)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.0001)
for epoch in range(10001):
y_score = model(torch.from_numpy(x_train.astype(np.float32)))
optimizer.zero_grad()
loss = loss_fn(y_score, torch.from_numpy(y_train.astype(np.int64)))
loss.backward()
optimizer.step()
if epoch%1000==0:
print(f'Loss is: {loss} at epoch {epoch}')
plot_dicision_boundary(x_train, y_train, model)
def plot_dicision_boundary(x_train, y_train, model):
x1_min, x2_min = x_train.min(0) - 0.5
x1_max, x2_max = x_train.max(0) + 0.5
x1, x2 = np.meshgrid(np.arange(x1_min, x1_max, 0.01),
np.arange(x2_min, x2_max, 0.01))
y = model.predict(np.c_[x1.ravel(), x2.ravel()])
plt.pcolormesh(x1, x2, y.detach().numpy().reshape(x1.shape), cmap=plt.cm.Paired)
plt.scatter(x_train[:,0], x_train[:,1], c=y_train, edgecolors='k', cmap=plt.cm.Paired)
plt.show()
| backup/pytorch_basics_nn_2018_05_17.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <img src="http://dask.readthedocs.io/en/latest/_images/dask_horizontal.svg"
# width="30%"
# align=right
# alt="Dask logo">
#
# # Simple Array Computations
#
# This notebook creates a simple random array on a cluster and performs NumPy-like operations.
#
# It is useful to explore the syntax of [Dask.array](http://dask.pydata.org/en/latest/array.html) and interactions with the [distributed cluster](http://distributed.readthedocs.io/en/latest/api.html).
#
# You may want to run through the notebook once, and then increase the data size and try new and different numpy-like calculations.
# ### Connect to Cluster
from dask.distributed import Client, progress
c = Client()
c
# ### Make random array
import dask.array as da
x = da.random.random(size=(10000, 10000), chunks=(1000, 1000))
x
# ### Persist array in memory across the cluster
x = x.persist()
progress(x)
# ### Perform numpy computations as normal
x.sum().compute()
x.mean(axis=0).compute()
x[x < 0] = 0
x[x > x.mean(axis=0)] = 1
y = x + x.T
da.diag(y).compute()
| notebook/examples/04-dask-array.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# From http://www-inst.eecs.berkeley.edu/~cs61a/sp12/hw/hw1.html
# Q1. Fill in the following function definition for adding a to the absolute value of b, without calling abs:
from operator import add, sub
def a_plus_abs_b(a, b):
"""Return a+abs(b), but without calling abs."""
if b < 0:
op = sub
else:
op = add
return op(a, b)
def test_a_plus_abs_b():
assert a_plus_abs_b(5, -5) == 10
assert a_plus_abs_b(5, 5) == 10
test_a_plus_abs_b()
# Q2. Write a function that takes three positive numbers and returns the sum of the squares of the two larger numbers. Use only a single expression for the body of the function:
def two_of_three(a, b, c):
"""Return x**2 + y**2, where x and y are the two largest of a, b, c."""
return max(a**2 + b**2, a**2 + c**2, b**2 + c**2)
def test_two_of_three():
assert two_of_three(5, 4, 1) == 41
assert two_of_three(5, 5, 5) == 50
assert two_of_three(5, -1, -4) != 26 # Doesn't work for negative numbers!
test_two_of_three()
# Q3. Let's try to write a function that does the same thing as an if statement:
def if_function(condition, true_result, false_result):
"""Return true_result if condition is a true value, and false_result otherwise."""
if condition:
return true_result
else:
return false_result
# This function actually doesn't do the same thing as an if statement in all cases. To prove this fact, write functions c, t, and f such that one of these functions returns the number 1, but the other does not:
# +
def with_if_statement():
if c():
return t()
else:
return f()
def with_if_function():
return if_function(c(), t(), f())
# +
def c():
return True
def t():
return 1
def f():
raise Exception("Is this cheating?")
# -
print(with_if_statement())
print(with_if_function())
# Q4. <NAME>’s Pulitzer-prize-winning book, <NAME>, poses the following mathematical puzzle.
#
# Pick a positive number n
# If n is even, divide it by 2.
# If n is odd, multiply it by 3 and add 1.
# Continue this process until n is 1.
#
# The number n will travel up and down but eventually end at 1 (at least for all numbers that have ever been tried -- nobody has ever proved that the sequence will always terminate).
#
# The sequence of values of n is often called a Hailstone sequence, because hailstones also travel up and down in the atmosphere before falling to earth. Write a function that takes a single argument with formal parameter name n, prints out the hailstone sequence starting at n, and returns the number of steps in the sequence:
def hailstone(n, count=0):
    """Return the number of steps in the hailstone sequence starting at n."""
    if n == 1:
        return count
    elif n % 2 == 0:
        return hailstone(n // 2, count + 1)  # integer division keeps n an int
    else:
        return hailstone(n * 3 + 1, count + 1)
# Hailstone sequences can get quite long! Try 27. What's the longest you can find?
sorted({n: hailstone(n) for n in range(1, 10000000)}.items(), key=lambda k: -k[1])[0]
| homework1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %matplotlib inline
import random
import subprocess
import pandas as pd
import numpy as np
import seaborn as sns
pd.set_option('display.max_rows', 15000)
pd.set_option('display.max_columns', 500)
pd.set_option('display.width', 1000)
# -
# # Evaluation of ITE
#
# To confirm the effectiveness of the presented lightweight approach to constructive disjunction, we compare our ITE implementation to the corresponding standard methods contained in SICStus Prolog.
#
# We start by defining a couple of helper functions and settings, especially for allowing Latex compatible plotting.
# +
def figsize_column(scale, height_ratio=1.0):
fig_width_pt = 180 # Get this from LaTeX using \the\columnwidth
inches_per_pt = 1.0 / 72.27 # Convert pt to inch
golden_mean = (np.sqrt(5.0) - 1.0) / 2.0 # Aesthetic ratio (you could change this)
fig_width = fig_width_pt * inches_per_pt * scale # width in inches
fig_height = fig_width * golden_mean * height_ratio # height in inches
fig_size = [fig_width, fig_height]
return fig_size
def figsize_text(scale, height_ratio=1.0):
fig_width_pt = 361 # Get this from LaTeX using \the\textwidth
inches_per_pt = 1.0 / 72.27 # Convert pt to inch
golden_mean = (np.sqrt(5.0) - 1.0) / 2.0 # Aesthetic ratio (you could change this)
fig_width = fig_width_pt * inches_per_pt * scale # width in inches
fig_height = fig_width * golden_mean * height_ratio # height in inches
fig_size = [fig_width, fig_height]
return fig_size
pgf_with_latex = { # setup matplotlib to use latex for output
"pgf.texsystem": "pdflatex", # change this if using xetex or lautex
"text.usetex": True, # use LaTeX to write all text
"font.family": "serif",
"font.serif": [], # blank entries should cause plots to inherit fonts from the document
"font.sans-serif": [],
"font.monospace": [],
"axes.labelsize": 9,
"font.size": 9,
"legend.fontsize": 9,
"xtick.labelsize": 9,
"ytick.labelsize": 9,
"figure.figsize": figsize_column(1.0),
"text.latex.preamble": [
r"\usepackage[utf8x]{inputenc}", # use utf8 fonts because your computer can handle it :)
r"\usepackage[T1]{fontenc}", # plots will be generated using this preamble
]
}
sns.set_style("white", pgf_with_latex)
sns.set_context("paper")
import matplotlib.pyplot as plt
import matplotlib
LABELS = {
"cd/3_0.9": "cd/3 (*0.9)",
"cd/3_1": "cd/3 (*1)",
"cd/3_1.25": "cd/3 (*1.25)",
"cd/3_1.5": "cd/3 (*1.5)",
"cd/3_2": "cd/3 (*2)",
"cd/3_3": "cd/3 (*3)",
"cd/3_4": "cd/3 (*4)",
"cd/3_5": "cd/3 (*5)",
"native": "clp(FD)",
"reg": "clp(FD)",
"smt": "smt",
"global": "Global",
"nb_clauses": "Number of Clauses",
"time": "Propagation Time",
"mum": "Ultrametric",
"elemctr": "Element",
"disjctr": "Disjunctive",
"lexctr": "Lex",
"domctr": "Domain"
}
# -
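As a quick check of the sizing helpers above, the column-width variant converts the 180 pt LaTeX column width into inches and applies the golden ratio (values below are rounded):

```python
import numpy as np

def figsize_column(scale, height_ratio=1.0):
    fig_width_pt = 180                        # from LaTeX: \the\columnwidth
    inches_per_pt = 1.0 / 72.27               # TeX points per inch
    golden_mean = (np.sqrt(5.0) - 1.0) / 2.0  # aesthetic height/width ratio
    fig_width = fig_width_pt * inches_per_pt * scale
    fig_height = fig_width * golden_mean * height_ratio
    return [fig_width, fig_height]

width, height = figsize_column(1.0)
print(round(width, 2), round(height, 2))  # 2.49 1.54
```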
# ## Constraints
#
# Replace `constraints.csv` with the file containing the output from running `constraints.exe`.
df = pd.read_csv('constraints.csv', sep=';')
df.head()
benchmarks = df.benchmark.unique()
benchmarks
df[(df.result != 'timeout')].replace(LABELS)
# +
bdf = df[(df.benchmark == 'domctr2') & (df.result != 'timeout')].replace(LABELS)
bdf['n'] = bdf['n'].astype(int)
#bdf['time'] = bdf['time'] / 1000
fig, ax = plt.subplots(figsize=figsize_text(1.0, height_ratio=0.9))
sns.lineplot(x='n', y='time', hue='op', hue_order=['clp(FD)', 'Global', 'smt', 'cd', 'cd(2)', 'cd(3)'], data=bdf, ax=ax)
ax.set_yscale('log')
ax.set_xlim([bdf['n'].min(), bdf['n'].max()])
handles, labels = ax.get_legend_handles_labels()
ax.legend(bbox_to_anchor=(0.5, 1.28), loc=9, borderaxespad=0.,
handles=handles[1:], labels=labels[1:], ncol=3)
#ax.set_xticks(bdf['n'].unique())
ax.set_xlabel('Number of Elements $N$')
ax.set_ylabel('Time (in ms)')
ax.grid()
fig.tight_layout()
fig.savefig('domctr.pgf', dpi=500, bbox_inches='tight')
#fig.show()
# +
bdf = df[(df.benchmark == 'lexctr2') & (df.result != 'timeout')].replace(LABELS)
bdf['n'] = bdf['n'].astype(int)
#bdf.loc[bdf.time >= 60000, 'time'] = 0
bdf['time'] = bdf['time'] / 1000
#bdf.loc[(bdf.result == 'timeout') & (bdf.n == bdf.n.min()), 'time'] = 0.0
#bdf = bdf[~((bdf.result == 'timeout') & (bdf.n > bdf.n.min()))]
fig, ax = plt.subplots(figsize=figsize_text(1.0, height_ratio=0.9))
sns.lineplot(x='n', y='time', hue='op', hue_order=['clp(FD)', 'Global', 'smt', 'cd(2)', 'cd(3)'], data=bdf, ax=ax)
ax.set_yscale('log')
#ax.set_ylim([0, 60])
ax.set_xlim([bdf['n'].min(), bdf['n'].max()])
handles, labels = ax.get_legend_handles_labels()
ax.legend(bbox_to_anchor=(0.5, 1.13), loc=9, borderaxespad=0.,
handles=handles[1:], labels=labels[1:], ncol=5)
#ax.set_xticks([10, 50, 100, 150, 200, 250, 300, 350, 400, 450])
ax.set_xlabel('Number of elements $N$')
ax.set_ylabel('Time (in s)')
ax.grid()
fig.tight_layout()
fig.savefig('lexctr.pgf', dpi=500, bbox_inches='tight')
# +
bdf = df[(df.benchmark == 'elemctr2')].replace(LABELS)
bdf['n'] = bdf['n'].astype(int)
#bdf.loc[bdf.time >= 60000, 'time'] = 0
bdf['time'] = bdf['time'] / 1000
bdf.loc[(bdf.result == 'timeout') & (bdf.n == bdf.n.min()), 'time'] = 0.0
bdf = bdf[~((bdf.result == 'timeout') & (bdf.n > bdf.n.min()))]
fig, ax = plt.subplots(figsize=figsize_text(1.0, height_ratio=0.9))
sns.lineplot(x='n', y='time', hue='op', hue_order=['clp(FD)', 'Global', 'smt', 'cd', 'cd(2)', 'cd(3)'], data=bdf, ax=ax)
ax.set_yscale('log')
#ax.set_ylim([0, 60])
ax.set_xlim([bdf['n'].min(), bdf['n'].max()])
handles, labels = ax.get_legend_handles_labels()
ax.legend(bbox_to_anchor=(0.5, 1.28), loc=9, borderaxespad=0.,
handles=handles[1:], labels=labels[1:], ncol=3)
#ax.set_xticks([10, 50, 100, 150, 200, 250, 300, 350, 400, 450])
ax.set_xlabel('Number of elements $N$')
ax.set_ylabel('Time (in s)')
ax.grid()
fig.tight_layout()
fig.savefig('elemctr.pgf', dpi=500, bbox_inches='tight')
# +
bdf = df[(df.benchmark == 'mulctr2') & (df.result != 'timeout')].replace(LABELS)
bdf['n'] = bdf['n'].astype(int)
#bdf.loc[bdf.time >= 60000, 'time'] = 0
bdf['time'] = bdf['time'] / 1000
fig, ax = plt.subplots(figsize=figsize_text(1.0, height_ratio=0.9))
sns.lineplot(x='n', y='time', hue='op', data=bdf, ax=ax)
ax.set_yscale('log')
ax.set_xlim([bdf['n'].min(), bdf['n'].max()])
handles, labels = ax.get_legend_handles_labels()
ax.legend(bbox_to_anchor=(0.5, 1.28), loc=9, borderaxespad=0.,
handles=handles[1:], labels=labels[1:], ncol=4)
#ax.set_xticks([10, 50, 100, 150, 200, 250, 300, 350, 400, 450])
ax.set_xlabel('Number of elements $N$')
ax.set_ylabel('Time (in s)')
ax.grid()
fig.tight_layout()
fig.savefig('mulctr.pgf', dpi=500, bbox_inches='tight')
| evaluation/GlobalConstraints.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Build a pie chart depicting the ethnic composition of the population of Dagestan (as of 2010)
# Avars - 29.4%
#
# Dargins - 17.0%
#
# Kumyks - 14.9%
#
# Lezgins - 13.3%
#
# Laks - 5.6%
#
# Azerbaijanis - 4.5%
#
# Tabasarans - 4.1%
#
# Russians - 3.6%
#
# Others - 7.6%
# +
import matplotlib.pyplot as plt

vals = [29.4, 17, 14.9, 13.3, 5.6, 4.5, 4.1, 3.6, 7.6]
labels = ["Avars", "Dargins", "Kumyks", "Lezgins", "Laks", "Azerbaijanis", "Tabasarans", "Russians", "Others"]
explode = (0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1)
fig, ax = plt.subplots()
ax.pie(vals, labels=labels, autopct='%1.1f%%', shadow=True, explode=explode,
wedgeprops={'lw':1, 'ls':'--','edgecolor':"k"}, rotatelabels=True)
ax.axis("equal")
# -
| zad3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Now You Code 1: Check Splitter
#
# Write a Python program which splits up a restaurant check. It first prompts you to input the total amount of your check, then the number of people dining. It should then output the amout each diner must contribute to the check. For example:
#
# ```
# *** Check Splitter ***
# How much is the amount of the check? 100.00
# How many people ? 6
# Each person owes: $16.67
# ```
#
# NOTE: Use string formatters to display the output to two decimal places.
#
# ## Step 1: Problem Analysis
#
# Inputs:
#
# Outputs:
#
# Algorithm (Steps in Program):
#
#
# Step 2: Write code here
check = float(input("How much is the amount of the check? $"))
people = float(input("How many people? "))
cost = check / people
print("Each person owes: $%.2f" % cost)
# ## Step 3: Questions
#
# 1. What happens when you enter `TWO` instead of `2` for the number of people dining? or $60 as the amount of the check, instead of 60?
#
# Answer: The program will raise an error, because letters and symbols cannot be converted to a float or an integer; only numbers can.
#
#
# 2. What type of error is this? Do you think it can be handled in code?
#
# Answer: This is an exception error. I think it can be handled in code, possibly by re-prompting for a number, or by putting the $ in the prompt string so that the person does not type the $ themselves.
#
#
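One way such input errors could be handled (a hypothetical sketch, not part of the assignment's required solution) is to retry the conversion inside a `try`/`except`:

```python
def ask_float(prompt):
    """Keep prompting until the user enters a valid number."""
    while True:
        try:
            return float(input(prompt))
        except ValueError:
            print("Please enter a number such as 60 or 16.67 (no letters or $).")
```

With this wrapper, entering `TWO` or `$60` would print the reminder and ask again instead of crashing.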
# 3. Explain what happens when you enter `0.5` for the number of people dining? Does the program run? Does it make sense?
#
# Answer:
#
# Yes, the program runs, but it makes no sense. The cost would be double the amount of the check.
#
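A quick check of that case confirms the doubling:

```python
# entering 0.5 "people" for a $100.00 check
check = 100.00
people = 0.5
cost = check / people
print(f"Each person owes: ${cost:.2f}")  # Each person owes: $200.00
```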
# ## Step 4: Reflection
#
# Reflect upon your experience completing this assignment. This should be a personal narrative, in your own voice, and cite specifics relevant to the activity as to help the grader understand how you arrived at the code you submitted. Things to consider touching upon: Elaborate on the process itself. Did your original problem analysis work as designed? How many iterations did you go through before you arrived at the solution? Where did you struggle along the way and how did you overcome it? What did you learn from completing the assignment? What do you need to work on to get better? What was most valuable and least valuable about this exercise? Do you have any suggestions for improvements?
#
# To make a good reflection, you should journal your thoughts, questions and comments while you complete the exercise.
#
# Keep your response to between 100 and 250 words.
#
# `--== Write Your Reflection Below Here ==--`
# The original problem analysis did not work as designed; it took me two iterations before I arrived at the solution. It was a simple mess-up of mistyping the code. Next time, I should take it slow and make sure not to make any errors, so that I remember the steps and avoid simple mistakes. I realized that the code needs to be accurate and precise to make the program work correctly and in the way you want it to run. I learned to remember to cast everything to a float or an integer, or else it is impossible to divide. Also, remembering to put the %() inside the parentheses for print is very important, or else you will get a syntax error.
#
#
#
| content/lessons/02/Now-You-Code/NYC1-Check-Splitter.ipynb |