# ETVL Processes Using R -- 1 -- Interactive Use
Lecture notes on the extraction, transformation, visualization, and loading of data using R
**Juan David Velásquez Henao**
jdvelasq@unal.edu.co
Universidad Nacional de Colombia, Sede Medellín
Facultad de Minas
Medellín, Colombia
[License](https://github.com/jdvelasq/ETVL-R/blob/master/LICENCIA.txt)
[Readme](https://github.com/jdvelasq/ETVL-R/blob/master/Readme.md)
**Software used**.
> This is an interactive document written as a [Jupyter](http://jupyter.org) notebook, presenting a tutorial on the extraction, transformation, visualization, and loading of data using **R** in the context of data science. Jupyter notebooks make it possible to combine code, text, graphics, and equations in a single document. The code presented in this notebook can be run on Linux and OS X.
> Click [here](https://github.com/jdvelasq/guias-de-instalacion) for detailed instructions on how to install Jupyter on Windows and Mac OS X.
> Click [here](http://nbviewer.jupyter.org/github/jdvelasq/ETVL-R/blob/master/ETVL-R-1-uso-interactivo.ipynb) to view the latest version of this document on nbviewer.
> Download the latest version of this document to your hard drive; then load and run it online at [Try Jupyter!](https://try.jupyter.org)
#### Contents
> * [Numerical calculations](#Numerical-calculations)
* [Functions](#Functions)
* [Mathematical functions](#Mathematical-functions)
* [Character strings](#Character-strings)
* [Vectors](#Vectors)
* [Lists](#Lists)
**Bibliography**.
> [An introduction to R](https://cran.r-project.org/doc/manuals/R-intro.pdf) by W. N. Venables, D. M. Smith and the R Core Team
# Numerical calculations
[Contents](#Contents)
R can be used interactively as a calculator. This makes it possible to carry out data analysis interactively, much as with other tools such as Python or Matlab. The basic arithmetic operations are illustrated below.
```
2 + 2
50 - 5 * 6
(50 - 5 * 6) / 4 # division always returns a floating-point result in R
8 / 5 # real-valued result
8 %% 5 # remainder of the division
8 %/% 5 # integer part of the division
5 ** 2 # power
5 ^ 2 # power
```
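For readers coming from Python, the modulus and integer-division operators shown above have direct counterparts; a quick comparison (a hedged sketch, not part of the original tutorial):

```python
# Python analogues of the R operators shown above.
# R: 8 %% 5   -> Python: 8 % 5    (remainder)
# R: 8 %/% 5  -> Python: 8 // 5   (integer part of the division)
# R: 5 ^ 2    -> Python: 5 ** 2   (power; R accepts both ** and ^)
remainder = 8 % 5        # 3
int_division = 8 // 5    # 1
power = 5 ** 2           # 25
print(remainder, int_division, power)
```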
Assignment can be done in several ways.
```
x <- 1 # the usual assignment operator in R
x = 1 # traditional in many programming languages
1 -> x # possible, but rarely used
```
Variables can also be declared and used during an interactive session.
```
width <- 20
height <- 5 * 9
width * height
tax <- 12.5 / 100
price <- 100.50
price * tax
```
# Functions
[Contents](#Contents)
Functions are defined with the reserved word `function`. The following function returns the square of its argument.
```
square <- function(x) {
return (x**2) # the returned value must go in parentheses, because return is a function in R
}
square(2)
square <- function(x) {
x**2 # the value of the last expression evaluated in the function is what is returned
}
square(2)
square(1+2)
square(square(2))
square(1) + square(2)
sum_of_squares <- function(x, y) square(x) + square(y) # functions can be called inside other functions
sum_of_squares(1, 2)
```
# Mathematical functions
[Contents](#Contents)
```
cos(3.141516)
pi
```
---
**Exercise.--** Compute the value of the following expression:
$$\frac{5-(1 -(3 - \exp(\frac{1}{8})))}{3(4-2)(2-\frac{3}{8})} - 2!\,(-1)^3 + \sin (0.98\pi)$$
---
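One way to check your answer is to evaluate the expression step by step in code. The sketch below does so in Python; the equivalent R calls would be `exp()`, `factorial()`, `sin()`, and the constant `pi`:

```python
import math

# Numerator: 5 - (1 - (3 - exp(1/8)))
numerator = 5 - (1 - (3 - math.exp(1 / 8)))
# Denominator: 3 * (4 - 2) * (2 - 3/8)
denominator = 3 * (4 - 2) * (2 - 3 / 8)
value = numerator / denominator - math.factorial(2) * (-1) ** 3 + math.sin(0.98 * math.pi)
print(value)  # approximately 2.6645
```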
# Character strings
[Contents](#Contents)
Character strings can also be used in R. They may be delimited with either single or double quotes.
```
'hola mundo' # single quotes
"hola mundo" # double quotes
'--"--' # alternating quote styles.
"--'--"
'--\'--' # In these two cases the `\` is required to indicate that the inner quote is not the delimiter.
"--\"--"
```
The escape sequence `\n` denotes a newline.
```
s = 'First line.\nSecond line.'
s
print(s) # print shows the escape sequence; use cat() to render it.
cat('First line.\nSecond line.')
# A string literal in R may simply contain line breaks;
# the newlines become part of the string.
s2 <- "Multi-line strings can
span several lines inside the quotes
and are sometimes used as
comments"
cat(s2)
paste('Py', 'thon', sep = '')
paste(paste(rep('abc ', 3), collapse = ''), '012')
paste(paste(rep('def ', 2), collapse= ''), '014')
# remove the '-' characters from the string
gsub(pattern='-', replacement='', x='h-o-l-a- -m-u-n-d-o')
# replace each '-' with '='
gsub(pattern='-', replacement='=', x='h-o-l-a- -m-u-n-d-o')
gsub(pattern='-', replacement='+', x='h-o-l-a- -m-u-n-d-o')
# convert to upper case
toupper('hola mundo')
# convert to lower case
tolower('HOLA MUNDO')
```
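For comparison, the `gsub()`, `toupper()`, and `tolower()` calls above map onto standard string methods in Python; a hedged sketch for readers coming from that language:

```python
# Python analogues of the R string functions shown above.
s = 'h-o-l-a- -m-u-n-d-o'
no_dashes = s.replace('-', '')   # like gsub(pattern='-', replacement='', x=s)
equals = s.replace('-', '=')     # like gsub(pattern='-', replacement='=', x=s)
upper = 'hola mundo'.upper()     # like toupper('hola mundo')
lower = 'HOLA MUNDO'.lower()     # like tolower('HOLA MUNDO')
print(no_dashes, upper, lower)
```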
In R, the characters of a string are indexed starting at 1: the first character is at position 1, the second at position 2, and so on. Individual characters and substrings are extracted with `substr()`.
+---+---+---+---+---+---+
| P | y | t | h | o | n |
+---+---+---+---+---+---+
1 2 3 4 5 6
```
word <- 'Python'
substr(word, start = 1, stop = 1) # character at position 1
substr(word, start = 6, stop = 6) # character at position 6
nchar('abcde') # the nchar function returns the length of a character string.
substr(word, start = nchar(word), stop = nchar(word)) # last character
substr(word, start = nchar(word)-2, stop = nchar(word)-2) # third-to-last character
substr(word, start = 1, stop = 2)
substr(word, start = 3, stop = nchar(word)) # from position 3 to the end
w <- 'ciencia'
substr(w, start=3, stop=nchar(w))
```
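A point worth remembering when switching languages: R's `substr()` is 1-based and inclusive on both ends, while Python slicing is 0-based and exclusive on the right. A hedged comparison:

```python
word = 'Python'
first = word[0]       # R: substr(word, 1, 1)
last = word[-1]       # R: substr(word, nchar(word), nchar(word))
tail_part = word[2:]  # R: substr(word, 3, nchar(word))
print(first, last, tail_part)
```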
# Vectors
[Contents](#Contents)
Vectors are R's main structure for numerical computation.
```
squares <- c(1, 4, 9, 16, 25) # vectors are created with the c() function
squares
1:5 # one way to create sequences
seq(5) # sequence generation
seq(from=1, to=10, by=2)
squares[1] # elements are indexed starting at one
1:10-1 # careful: `:` binds tighter than `-`, so this is (1:10) - 1
1:(10-1)
rep(c(1, 2), times = 3) # repetition of sequences
rep(c(1, 2), each = 2)
rep(c(1, 2), times = 3, each = 2)
squares[-1] # negative indices exclude the corresponding elements.
squares[(length(squares)-2):length(squares)] # last three elements
tail(squares, n = 3)
tail(squares) # last elements (n = 6 by default).
head(squares, n = 3)
x <- 1:3 # concatenation of vectors
c(x, 0, x)
cubes = c(1, 8, 27, 65, 125) # vector of cubes with one wrong value
4 ** 3 # the cube of 4 is 64, not 65!
cubes[4] = 64 # the wrong value is replaced
cubes
cubes <- c(cubes, 216) # the cube of 6 is appended at the end of the vector.
cubes <- append(cubes, 7 ** 3) # and likewise the cube of 7 is appended at the end
cubes
cuadrado<-c(1,4,9,17,25)
4**2
cuadrado[4]=16
cuadrado
cuadrado<-c(cuadrado, 36)
cuadrado
cuadrado <- append(cuadrado, 7**2)
cuadrado
cuadrado<- head(cuadrado, n=7)
cuadrado
letters = c('a', 'b', 'c', 'd', 'e', 'f', 'g')
letters
letters[3:5] = c('C', 'D', 'E') # a range of positions can be replaced
letters
letters[-(2:5)]
letters[seq(from=1, to=7, by=2)]
letters = c('a', 'b', 'c', 'd') # the length() function returns the number of elements
length(letters)
a = c('a', 'b', 'c') # when types are mixed, c() coerces all elements to a common type.
n = c(1, 2, 3)
x = c(a, n) # x is a character vector: the numbers are coerced to strings
x
c(1, 3, 5, 7) + 1 # arithmetic operations are element-wise
c(1, 3, 5, 7) + c(2, 4, 6, 8)
c(1,2,3)+c(1,2,3)
1 / c(1, 2, 3)
sum(c(1, 2, 3, 4)) # sum of the elements of a vector
sum(1:4)
prod(1:4)
cumsum(1:4) # cumulative sum
cumprod(1:4) # cumulative product
```
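R's element-wise vector arithmetic has a close analogue in NumPy arrays; even in plain Python the same operations can be sketched with comprehensions and `itertools.accumulate` (a hedged comparison, not part of the original tutorial):

```python
from itertools import accumulate
import operator

plus_one = [x + 1 for x in [1, 3, 5, 7]]                        # like c(1, 3, 5, 7) + 1
pair_sum = [a + b for a, b in zip([1, 3, 5, 7], [2, 4, 6, 8])]  # like c(...) + c(...)
total = sum(range(1, 5))                                        # like sum(1:4)
running = list(accumulate(range(1, 5)))                         # like cumsum(1:4)
running_prod = list(accumulate(range(1, 5), operator.mul))      # like cumprod(1:4)
print(plus_one, pair_sum, total, running, running_prod)
```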
# Lists
[Contents](#Contents)
Lists are one of the main structures for storing information in R. Unlike vectors, their elements may be of different types, and arithmetic operations cannot be applied to them directly. Note the terminology across languages: a Python list is closer to an R vector, while a named R list behaves more like a Python dictionary.
```
squares = list(1, 4, 9, 16, 25) # lists are created with the list() function
squares
squares = list(a=1, b=4, c=9, d=16, e=25)
squares
squares$a = 2
squares
squares['b'] # access by name.
squares[2] # access by position.
print(list(list(1, 2, 3), list(4, 5, 6))) # list of lists
list(list(1, 2, 3), list(4, 5, 6)) # list of lists
x <- list(list(1, 2, 3), list(4, 5, 6))
x[1] # single brackets return a sub-list
x[[1]] # double brackets return the element itself
x[[1]][2]
```
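For readers coming from Python: under the analogy sketched above, a named R list corresponds roughly to a Python dict, while the `x[1]` (sub-list) versus `x[[1]]` (element) distinction has no direct Python counterpart. A rough, hedged sketch of the correspondence:

```python
# Rough Python analogue of the named R list above
squares = {'a': 1, 'b': 4, 'c': 9, 'd': 16, 'e': 25}
squares['a'] = 2                  # like squares$a <- 2 in R
by_name = squares['b']            # like squares[['b']]
# A nested structure, like list(list(1, 2, 3), list(4, 5, 6))
x = [[1, 2, 3], [4, 5, 6]]
first_sublist = x[0]              # like x[[1]] (note 0-based indexing)
element = x[0][1]                 # like x[[1]][[2]]
print(by_name, first_sublist, element)
```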
---
[Contents](#Contents)
<a rel="license" href="http://creativecommons.org/licenses/by-nc-nd/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-nd/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-nd/4.0/">Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License</a>.
```
import numpy as np
import pandas as pd
import sklearn
import torch
import torch.nn as nn
import matplotlib.pyplot as plt
from torch.autograd import Variable
from torch.utils.data import Dataset, DataLoader
infile = '../ChronoLSTM_1d_bin3/DATA_Linear/xvyw1beta9.5gammax1.0gammay1.0epsln1.0sgma1.0A1.0x01.122w0.8B0.15a1.0_h0.01_mix1.txt'
input_x, _=np.loadtxt(infile, unpack=True, max_rows=100000)
device = torch.device("cpu")
num_bins=3
sm_length=20
def running_mean(x, N):
    """Use convolution to do running average."""
    return np.convolve(x, np.ones((N,))/N, mode='valid')
def find_nearest(key_arr, target):
    """key_arr: array-like, storing keys.
    target: the value which we want to be closest to."""
    idx = np.abs(key_arr - target).argmin()
    return idx
def Rm_peaks_steps(traj):
    """
    Remove sudden changes in the trajectory such as peaks and small steps.
    In this method, I used gradient to identify the changes. If two nonzero
    gradients are too close (< threshold), we shall take this range as noise.
    """
    global threshold
    traj = np.array(traj)
    grad_traj = np.gradient(traj)  # gradient of trajectory
    idx_grad = np.where(grad_traj != 0)[0]
    threshold = 20
    idx0 = idx_grad[0]
    for idx in idx_grad:
        window = idx - idx0
        if window <= 1:  # neighbor
            continue
        elif window > 1 and window <= threshold:
            traj[idx0:idx0+window//2+1] = traj[idx0]
            traj[idx0+window//2+1:idx+1] = traj[idx+1]
            idx0 = idx
        elif window > threshold:
            idx0 = idx
    return traj
X = [1.5, 0, -1.5]
input_x = running_mean(input_x, sm_length) # smooth the data.
idx_x = map(lambda x: find_nearest(X, x), input_x) # convert to three bins.
idx_2d=list(idx_x) # list(zip(idx_x, idx_y))
idx_2d = Rm_peaks_steps(idx_2d) # remove peaks and short steps
text = idx_2d
all_combs = [i for i in range(num_bins)]
vocab=sorted(all_combs)
# Creating a mapping from unique characters to indices
char2idx = {u:i for i, u in enumerate(vocab)}
idx2char = np.array(vocab)
text_as_int = np.array([char2idx[c] for c in text])
class seq_data(Dataset):
    def __init__(self, traj, seq_length, shift):
        self.traj = traj
        self.seq_length = seq_length
        self.shift = shift
    def __len__(self):
        return self.traj[self.shift:].shape[0]//self.seq_length
    def __getitem__(self, idx):
        x = self.traj[:-self.shift][idx*self.seq_length:idx*self.seq_length+self.seq_length]
        y = self.traj[self.shift:][idx*self.seq_length:idx*self.seq_length+self.seq_length]
        return x, y
class NLP(nn.Module):
    def __init__(self, input_dim, embedding_dim, rnn_units):
        super(NLP, self).__init__()
        self.input_dim = input_dim
        self.embedding_dim = embedding_dim
        self.hidden_dim = rnn_units
        self.embedding = nn.Embedding(self.input_dim, self.embedding_dim)
        self.lstm = nn.LSTM(self.embedding_dim, self.hidden_dim, batch_first=True)
        self.linear = nn.Linear(self.hidden_dim, self.input_dim)
    def forward(self, input):
        batch_size = input.shape[0]
        embedding_out = self.embedding(input)
        lstm_in = embedding_out.view(batch_size, input.shape[1], self.embedding_dim)
        lstm_out, hidden = self.lstm(lstm_in)
        y_pred = self.linear(lstm_out)
        return y_pred
# Exploratory cells: these lines assume that `vocab_size`, `embedding_dim`,
# `rnn_units`, `model`, and `batch_X_train` (all defined further below) already
# exist in the session.
embedding = nn.Embedding(vocab_size, embedding_dim)
embedding_out = embedding(batch_X_train)
embedding_out.shape
lstm = nn.LSTM(embedding_dim, rnn_units, batch_first=True)
lstm_in = embedding_out.view(batch_X_train.shape[0], batch_X_train.shape[1], embedding_dim)
lstm_out = lstm(lstm_in)
lstm_out[1][0].shape
w_ii, w_if, w_ic, w_io = list(model.modules())[2].weight_ih_l0.chunk(4, 0)
w_hi, w_hf, w_hc, w_ho = list(model.modules())[2].weight_hh_l0.chunk(4, 0)
b_hi, b_hf, b_hc, b_ho = list(model.modules())[2].bias_ih_l0.chunk(4, 0)
# model.hidden[0][0][0][:10]  # note: the NLP class above does not store a
# `hidden` attribute; inspect the hidden state returned by the LSTM instead.
EPOCHS = 20
sequence_len = 100
shift=1
batch_size=64
dataset = seq_data(text_as_int, 100, 1)
dataset = DataLoader(dataset, batch_size=64, shuffle=True, drop_last=True)
# Length of the vocabulary in chars
vocab_size = len(vocab)
# The embedding dimension
embedding_dim = 8
# Number of RNN units
rnn_units = 32
# Batch size
batch_size=64
model = NLP(vocab_size, embedding_dim, rnn_units).to(device)
print(model)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
for epoch in range(EPOCHS):
    for batch_X_train, batch_Y_train in dataset:
        batch_X_train = batch_X_train.to(device)
        batch_Y_train = batch_Y_train.to(device)
        y_pred = model(batch_X_train)
        y = batch_Y_train.to(device)
        loss = loss_fn(y_pred.view(-1, vocab_size), y.view(-1))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    print(epoch, loss.item())
```
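Before running the full pipeline, a quick sanity check of the helper functions can be useful. The sketch below re-defines `running_mean` and `find_nearest` exactly as above and exercises them on tiny inputs (assuming numpy is installed):

```python
import numpy as np

def running_mean(x, N):
    """Use convolution to do running average."""
    return np.convolve(x, np.ones((N,)) / N, mode='valid')

def find_nearest(key_arr, target):
    """Return the index in key_arr whose value is closest to target."""
    return np.abs(key_arr - target).argmin()

# A window of 2 averages each adjacent pair
print(running_mean([1, 2, 3, 4], 2))                 # [1.5 2.5 3.5]
# With bins X = [1.5, 0, -1.5], a value of 1.2 maps to bin 0
print(find_nearest(np.array([1.5, 0, -1.5]), 1.2))   # 0
```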
# Save weights
```
PATH = 'test_weight'
torch.save(model.state_dict(), PATH)
```
# Reset model to batch_size=1 for prediction
```
model = NLP(vocab_size, embedding_dim, rnn_units).to(device)  # NLP takes no batch-size argument; batch size 1 is set by the input shape
model.load_state_dict(torch.load(PATH))
```
# Prediction
```
def generate_text(start_string):
    input_eval = torch.tensor([char2idx[s] for s in start_string], device=device)
    text_generated = np.empty(1)  # placeholder first entry; dropped when plotting below
    for i in range(1000):
        input_eval = input_eval[np.newaxis, ...]  # add a dimension for batch=1.
        prediction = model(input_eval)
        logits = prediction
        p = torch.nn.functional.softmax(logits, dim=-1)
        predicted_id = torch.multinomial(p[0, -1], 1)  # sample from the last step of the first batch
        input_eval = predicted_id
        text_generated = np.vstack((text_generated, idx2char[predicted_id].tolist()))
    return text_generated
text = idx_2d[:1000]
prediction=generate_text(text)
infile = 'prediction2'
prediction=np.loadtxt(infile)
import matplotlib.pyplot as plt
plt.plot(prediction[1:2000])
plt.show()
plt.plot(text_as_int[1:2000])
np.where(prediction==0)[0].shape
np.where(prediction==1)[0].shape
np.where(prediction==2)[0].shape
```
# Predicting Boston Housing Prices
## Using XGBoost in SageMaker (with Hyperparameter Tuning)
_Machine Learning Engineer Nanodegree | Deployment_
---
To introduce SageMaker's high-level Python API, we will look at a relatively simple problem: using the [Boston housing dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the median home price in the Boston area.
The reference documentation for the APIs used in this notebook can be found on the [SageMaker Developer Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/) page.
## General outline
Typically, when using SageMaker from a notebook instance, you will work through the following steps. Of course, not every step is needed for every project, and many steps leave plenty of room for variation, as you will see throughout these lessons.
1. Download or otherwise retrieve the data.
2. Process / prepare the data.
3. Upload the processed data to S3.
4. Train a chosen model.
5. Test the trained model (typically using a batch transform job).
6. Deploy the trained model.
7. Use the deployed model.
In this notebook we will only cover steps 1 through 5, since we are only concerned with creating a tuned model and testing its performance.
## Step 0: Setting up the notebook
We begin by setting up everything needed to run the notebook. First, we load all of the Python modules we will use.
```
%matplotlib inline
import os
import time
from time import gmtime, strftime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
import sklearn.model_selection
```
In addition to the modules above, we also need to import the various SageMaker modules that we will use.
```
import sagemaker
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
# This is an object that represents the SageMaker session that we are currently operating in. This
# object contains some useful information that we will need to access later such as our region.
session = sagemaker.Session()
# This is an object that represents the IAM role that we are currently assigned. When we construct
# and launch the training job later we will need to tell it what IAM role it should have. Since our
# use case is relatively simple we will simply assign the training job the role we currently have.
role = get_execution_role()
```
## Step 1: Downloading the data
Fortunately, this dataset can be retrieved through sklearn, so this step is relatively straightforward.
```
boston = load_boston()
```
## Step 2: Preparing and splitting the data
Since the data is already clean, tabular data, no processing is needed. However, we do need to split the rows of the dataset into training, testing, and validation sets.
```
# First we package up the input data and the target variable (the median value) as pandas dataframes. This
# will make saving the data to a file a little easier later on.
X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names)
Y_bos_pd = pd.DataFrame(boston.target)
# We split the dataset into 2/3 training and 1/3 testing sets.
X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33)
# Then we split the training set further into 2/3 training and 1/3 validation sets.
X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)
```
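The two successive splits above determine the final proportions: roughly 45% of the rows end up in the training set, 22% in validation, and 33% in testing. A small check of that arithmetic (plain Python, independent of sklearn; the `0.33` matches the `test_size` used above):

```python
# Proportions implied by the two successive train_test_split calls above
test_frac = 0.33                       # first split: held out for testing
remaining = 1 - test_frac
train_frac = remaining * (1 - 0.33)    # kept as training data by the second split
val_frac = remaining * 0.33            # held out for validation by the second split
print(round(train_frac, 3), round(val_frac, 3), round(test_frac, 3))
```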
## Step 3: Uploading the data files to S3
When a training job is created with SageMaker, a container that performs the training is executed. This container has access to data stored on S3, so we need to upload the data we want to train on to S3. In addition, when running a batch transform job, SageMaker expects the input data to be stored on S3. We can use the SageMaker API to do this, and it will handle some of the details behind the scenes.
### Save the data locally
First, we need to create the test, train, and validation csv files, which we will then upload to S3.
```
# This is our local data directory. We need to make sure that it exists.
data_dir = '../data/boston'
if not os.path.exists(data_dir):
    os.makedirs(data_dir)
# We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include header
# information or an index as this is required by the built in algorithms provided by Amazon. Also, for the train and
# validation data, it is assumed that the first entry in each row is the target variable.
X_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
```
### Upload to S3
Since we are currently running inside a SageMaker session, we can use the object representing that session to upload our data to the default S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to avoid accidentally clobbering data uploaded by some other notebook or project.
```
prefix = 'boston-xgboost-tuning-LL'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
```
## Step 4: Training and constructing the XGBoost model
Now that the training and validation data have been uploaded to S3, we can construct and train our XGBoost model. In an earlier notebook we trained a single model; this time we will use SageMaker's hyperparameter tuning functionality to train multiple models and use the one that performs best on the validation set.
### Set up the training job
First, we set up a training job for our model. This is very similar to constructing a training job in earlier notebooks. We describe a base training job, and then, in the hyperparameter tuning job, SageMaker will vary some of its hyperparameters in order to optimize it.
```
# We will need to know the name of the container that we want to use for training. SageMaker provides
# a nice utility method to construct this for us.
container = get_image_uri(session.boto_region_name, 'xgboost')
# We now specify the parameters we wish to use for our training job
training_params = {}
# We need to specify the permissions that this training job will have. For our purposes we can use
# the same permissions that our current SageMaker session has.
training_params['RoleArn'] = role
# Here we describe the algorithm we wish to use. The most important part is the container which
# contains the training code.
training_params['AlgorithmSpecification'] = {
"TrainingImage": container,
"TrainingInputMode": "File"
}
# We also need to say where we would like the resulting model artifacts stored.
training_params['OutputDataConfig'] = {
"S3OutputPath": "s3://" + session.default_bucket() + "/" + prefix + "/output"
}
# We also need to set some parameters for the training job itself. Namely we need to describe what sort of
# compute instance we wish to use along with a stopping condition to handle the case that there is
# some sort of error and the training script doesn't terminate.
training_params['ResourceConfig'] = {
"InstanceCount": 1,
"InstanceType": "ml.m4.xlarge",
"VolumeSizeInGB": 5
}
training_params['StoppingCondition'] = {
"MaxRuntimeInSeconds": 86400
}
# Next we set the algorithm specific hyperparameters. In this case, since we are setting up
# a training job which will serve as the base training job for the eventual hyperparameter
# tuning job, we only specify the _static_ hyperparameters. That is, the hyperparameters that
# we do _not_ want SageMaker to change.
training_params['StaticHyperParameters'] = {
"gamma": "4",
"subsample": "0.8",
"objective": "reg:linear",
"early_stopping_rounds": "10",
"num_round": "200"
}
# Now we need to tell SageMaker where the data should be retrieved from.
training_params['InputDataConfig'] = [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": train_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": val_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
}
]
```
### Set up the tuning job
Now that the base training job is set up, we describe the tuning job we would like SageMaker to perform. As in the high-level notebook, we specify which hyperparameters should be varied and the ranges of values they may take.
In addition, we specify the total number of models to build (`max_jobs`) and the number that can be trained in parallel (`max_parallel_jobs`). In the cell below we choose to train `20` models, `3` at a time in parallel. Note that this means `20` training jobs must run to completion, which can take a while — close to half an hour for this notebook. With more complicated models the wait can be much longer.
```
# We need to construct a dictionary which specifies the tuning job we want SageMaker to perform
tuning_job_config = {
# First we specify which hyperparameters we want SageMaker to be able to vary,
# and we specify the type and range of the hyperparameters.
"ParameterRanges": {
"CategoricalParameterRanges": [],
"ContinuousParameterRanges": [
{
"MaxValue": "0.5",
"MinValue": "0.05",
"Name": "eta"
},
],
"IntegerParameterRanges": [
{
"MaxValue": "12",
"MinValue": "3",
"Name": "max_depth"
},
{
"MaxValue": "8",
"MinValue": "2",
"Name": "min_child_weight"
}
]},
# We also need to specify how many models should be fit and how many can be fit in parallel
"ResourceLimits": {
"MaxNumberOfTrainingJobs": 20,
"MaxParallelTrainingJobs": 3
},
# Here we specify how SageMaker should update the hyperparameters as new models are fit
"Strategy": "Bayesian",
# And lastly we need to specify how we'd like to determine which models are better or worse
"HyperParameterTuningJobObjective": {
"MetricName": "validation:rmse",
"Type": "Minimize"
}
}
```
### Run the tuning job
Now that the data structure describing the tuning job we want SageMaker to perform is set up, it is time to run it.
```
# First we need to choose a name for the job. This is useful for if we want to recall information about our
# tuning job at a later date. Note that SageMaker requires a tuning job name and that the name needs to
# be unique, which we accomplish by appending the current timestamp.
tuning_job_name = "tuning-job" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# And now we ask SageMaker to create (and execute) the training job
session.sagemaker_client.create_hyper_parameter_tuning_job(HyperParameterTuningJobName = tuning_job_name,
HyperParameterTuningJobConfig = tuning_job_config,
TrainingJobDefinition = training_params)
```
SageMaker has now created the tuning job, and it is running. Since we need the tuning job's output, we have to wait for it to finish. We can ask SageMaker to print the logs generated by the tuning job and keep doing so until it has finished.
```
session.wait_for_tuning_job(tuning_job_name)
```
### Build the model
Now that the tuning job has finished and SageMaker has fit a number of models, the results are stored in a data structure that we can access using the name of the tuning job.
```
tuning_job_info = session.sagemaker_client.describe_hyper_parameter_tuning_job(HyperParameterTuningJobName=tuning_job_name)
```
Among the information contained in the `tuning_job_info` object is the name of the training job that performed best among all the models SageMaker fit to our data. Using that training job name we can access the resulting model artifacts and construct a model from them.
```
# We begin by asking SageMaker to describe for us the results of the best training job. The data
# structure returned contains a lot more information than we currently need, try checking it out
# yourself in more detail.
best_training_job_name = tuning_job_info['BestTrainingJob']['TrainingJobName']
training_job_info = session.sagemaker_client.describe_training_job(TrainingJobName=best_training_job_name)
model_artifacts = training_job_info['ModelArtifacts']['S3ModelArtifacts']
# Just like when we created a training job, the model name must be unique
model_name = best_training_job_name + "-model"
# We also need to tell SageMaker which container should be used for inference and where it should
# retrieve the model artifacts from. In our case, the xgboost container that we used for training
# can also be used for inference.
primary_container = {
"Image": container,
"ModelDataUrl": model_artifacts
}
# And lastly we construct the SageMaker model
model_info = session.sagemaker_client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
```
## Step 5: Testing the model
Now that we have fit our model to the training data, using the validation data to avoid overfitting, we can test it. We will use SageMaker's batch transform functionality: we need to set up and execute a batch transform job, in much the same way as we constructed the training job earlier.
### Set up the batch transform job
Just as when training the model, we first need to provide some information, in the form of a data structure that describes the batch transform job we wish to execute.
We will only use some of the options available here; to learn about the others, see the [CreateTransformJob](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTransformJob.html) page of the SageMaker documentation.
```
# Just like in each of the previous steps, we need to make sure to name our job and the name should be unique.
transform_job_name = 'boston-xgboost-batch-transform-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# Now we construct the data structure which will describe the batch transform job.
transform_request = \
{
"TransformJobName": transform_job_name,
# This is the name of the model that we created earlier.
"ModelName": model_name,
# This describes how many compute instances should be used at once. If you happen to be doing a very large
# batch transform job it may be worth running multiple compute instances at once.
"MaxConcurrentTransforms": 1,
# This says how big each individual request sent to the model should be, at most. One of the things that
# SageMaker does in the background is to split our data up into chunks so that each chunks stays under
# this size limit.
"MaxPayloadInMB": 6,
# Sometimes we may want to send only a single sample to our endpoint at a time, however in this case each of
# the chunks that we send should contain multiple samples of our input data.
"BatchStrategy": "MultiRecord",
# This next object describes where the output data should be stored. Some of the more advanced options which
# we don't cover here also describe how SageMaker should collect output from various batches.
"TransformOutput": {
"S3OutputPath": "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
},
# Here we describe our input data. Of course, we need to tell SageMaker where on S3 our input data is stored, in
# addition we need to detail the characteristics of our input data. In particular, since SageMaker may need to
# split our data up into chunks, it needs to know how the individual samples in our data file appear. In our
# case each line is its own sample and so we set the split type to 'line'. We also need to tell SageMaker what
# type of data is being sent, in this case csv, so that it can properly serialize the data.
"TransformInput": {
"ContentType": "text/csv",
"SplitType": "Line",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": test_location,
}
}
},
# And lastly we tell SageMaker what sort of compute instance we would like it to use.
"TransformResources": {
"InstanceType": "ml.m4.xlarge",
"InstanceCount": 1
}
}
```
### Run the batch transform job
Now that the request data structure is created, we ask SageMaker to set up and run the batch transform job. As in the earlier steps, SageMaker performs these tasks in the background; if you want to wait for the transform job to finish (and watch its progress), you can call the wait method, which blocks until the job completes.
```
transform_response = session.sagemaker_client.create_transform_job(**transform_request)
transform_desc = session.wait_for_transform_job(transform_job_name)
```
### Analyze the results
Now that the transform job has finished, the results are stored on S3 as we requested. Since we want to analyze the output inside the notebook, we will use a notebook feature to copy the output files from S3 to this instance.
```
transform_output = "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
!aws s3 cp --recursive $transform_output $data_dir
```
To see how well our model works, we can draw a simple scatter plot of the predicted values against the true values. If the model's predictions were perfectly accurate, the scatter plot would be the straight line $x=y$. As we can see, our model does reasonably well, but there is room for improvement.
```
Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
plt.scatter(Y_test, Y_pred)
plt.xlabel("Median Price")
plt.ylabel("Predicted Price")
plt.title("Median Price vs Predicted Price")
```
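Beyond the scatter plot, a single summary number such as the RMSE is often useful. A hedged sketch in plain Python (with the notebook's data you would pass `Y_test` and `Y_pred`; the values below are made up for illustration):

```python
import math

def rmse(y_true, y_pred):
    """Root mean squared error of two equal-length sequences."""
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

# Tiny illustration with made-up prices
error = rmse([22.0, 30.0, 17.0], [20.0, 31.0, 18.0])
print(error)  # sqrt((4 + 1 + 1) / 3) = sqrt(2) ≈ 1.414
```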
## Optional: Clean up
The default notebook instance on SageMaker does not have much excess disk space. As you keep completing and executing notebooks, you will eventually run out of space, leading to errors that can be difficult to diagnose. Once you are completely done with a notebook, it is a good idea to remove the files you created along the way. You can do this from the terminal or from the notebook hub. The cell below contains commands to clean up the files from within the notebook.
```
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
```
<a href="https://colab.research.google.com/github/Tessellate-Imaging/monk_v1/blob/master/study_roadmaps/1_getting_started_roadmap/8_expert_mode/3)%20Create%20experiment%20from%20scratch%20-%20Keras%20backend%20-%20train%2C%20validate%2C%20infer.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Goals
### Learn how to use the full potential of Monk in its expert mode
# Table of Contents
## [0. Install](#0)
## [1. Load data, setup model, select params, and Train](#1)
## [2. Run validation on trained classifier](#2)
## [3. Run inferencing on trained classifier](#3)
<a id='0'></a>
# Install Monk
- git clone https://github.com/Tessellate-Imaging/monk_v1.git
- cd monk_v1/installation/Linux && pip install -r requirements_cu9.txt
- (Select the requirements file as per OS and CUDA version)
```
!git clone https://github.com/Tessellate-Imaging/monk_v1.git
# Select the requirements file as per OS and CUDA version
!cd monk_v1/installation/Linux && pip install -r requirements_cu9.txt
```
## Dataset - Natural Images Classification
- https://www.kaggle.com/prasunroy/natural-images
```
! wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1sbQ_KaEDd7kRrTvna-4odLqxM2G0QT0Z' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1sbQ_KaEDd7kRrTvna-4odLqxM2G0QT0Z" -O natural-images.zip && rm -rf /tmp/cookies.txt
! unzip -qq natural-images.zip
```
# Imports
```
# Monk
import os
import sys
sys.path.append("monk_v1/monk/");
#Using keras backend
from keras_prototype import prototype
```
<a id='1'></a>
# Load data, setup model, select params, and Train
```
gtf = prototype(verbose=1);
gtf.Prototype("project", "expert_mode");
```
## Set Data params
```
gtf.Dataset_Params(dataset_path="natural-images/train",
split=0.9,
input_size=224,
batch_size=16,
shuffle_data=True,
num_processors=3);
```
## Apply Transforms
```
gtf.apply_random_horizontal_flip(train=True, val=True);
```
## Load Dataset
```
gtf.Dataset();
```
## Set Model Params
```
gtf.Model_Params(model_name="resnet50",
freeze_base_network=True,
use_gpu=True,
use_pretrained=True);
```
## Load Model
```
gtf.Model();
```
## Set Training params
```
gtf.Training_Params(num_epochs=5,
display_progress=True,
display_progress_realtime=True,
save_intermediate_models=True,
intermediate_model_prefix="intermediate_model_",
save_training_logs=True);
```
## Set Optimizer, losses and learning rate schedulers
```
gtf.optimizer_sgd(0.0001);
gtf.lr_fixed();
gtf.loss_crossentropy()
#Start Training
gtf.Train();
#Read the training summary generated once you run the cell and training is completed
```
<a id='2'></a>
# Validating the trained classifier
```
gtf = prototype(verbose=1);
gtf.Prototype("project", "expert_mode", eval_infer=True);
# Just for example purposes, validating on the training set itself
gtf.Dataset_Params(dataset_path="natural-images/train");
gtf.Dataset();
accuracy, class_based_accuracy = gtf.Evaluate();
```
<a id='3'></a>
# Running inference on test images
```
gtf = prototype(verbose=1);
gtf.Prototype("project", "expert_mode", eval_infer=True);
img_name = "natural-images/test/test3.jpg";
predictions = gtf.Infer(img_name=img_name);
#Display
from IPython.display import Image
Image(filename=img_name)
img_name = "natural-images/test/test2.jpg";
predictions = gtf.Infer(img_name=img_name);
#Display
from IPython.display import Image
Image(filename=img_name)
img_name = "natural-images/test/test3.jpg";
predictions = gtf.Infer(img_name=img_name);
#Display
from IPython.display import Image
Image(filename=img_name)
```
# Optimus VIRTUOSO Composer (ver. 3.0)
## "Music never allows falsehoods for even the deaf hear flat notes!" ---OV
***
Powered by tegridy-tools TMIDIX Optimus Processors: https://github.com/asigalov61/tegridy-tools
***
Credit for char-based GPT2 code used in this colab goes out to Andrej Karpathy: https://github.com/karpathy/minGPT
***
WARNING: This complete implementation is a functioning artificial-intelligence model. Please exercise great humility, care, and respect. https://www.nscai.gov/
***
#### Project Los Angeles
#### Tegridy Code 2021
***
# Setup Environment, clone needed repos, and install all required dependencies
```
#@title nvidia-smi gpu check
!nvidia-smi
#@title Install all dependencies (run only once per session)
!git clone https://github.com/asigalov61/tegridy-tools
!pip install torch
!pip install tqdm
!apt install fluidsynth # FluidSynth is a system binary, so it must be installed with apt rather than pip
!pip install midi2audio
!pip install pretty_midi
#@title Import all needed modules
print('Loading needed modules. Please wait...')
import os
from datetime import datetime
import secrets
import tqdm
from tqdm import auto
if not os.path.exists('/content/Dataset'):
os.makedirs('/content/Dataset')
print('Loading TMIDIX module...')
os.chdir('/content/tegridy-tools/tegridy-tools')
import TMIDIX
os.chdir('/content/tegridy-tools/tegridy-tools')
from minGPT import *
from midi2audio import FluidSynth
import pretty_midi
import librosa.display
import matplotlib.pyplot as plt
from mpl_toolkits import mplot3d
from IPython.display import display, Javascript, HTML, Audio
from google.colab import output, drive
os.chdir('/content/')
print('Loading complete. Enjoy! :)')
```
# Setup and prep the model
```
#@title Download ready-to-use pre-trained composer model and the TXT dataset
%cd /content/
print('=' * 70)
print('Downloading pre-trained dataset-model...Please wait...')
print('=' * 70)
!wget https://github.com/asigalov61/Optimus-VIRTUOSO/raw/main/Dataset-Model/MuseNet/Custom-Children-Songs/Optimus-VIRTUOSO-Dataset-Model.zip.001
!wget https://github.com/asigalov61/Optimus-VIRTUOSO/raw/main/Dataset-Model/MuseNet/Custom-Children-Songs/Optimus-VIRTUOSO-Dataset-Model.zip.002
!wget https://github.com/asigalov61/Optimus-VIRTUOSO/raw/main/Dataset-Model/MuseNet/Custom-Children-Songs/Optimus-VIRTUOSO-Dataset-Model.zip.003
!cat Optimus-VIRTUOSO-Dataset-Model.zip* > Optimus-VIRTUOSO-Dataset-Model.zip
print('=' * 70)
!unzip -j Optimus-VIRTUOSO-Dataset-Model.zip
print('=' * 70)
print('Done! Enjoy! :)')
print('=' * 70)
%cd /content/
#@title Load the downloaded model and the dataset
full_path_to_training_text_file = "/content/Optimus-VIRTUOSO-Music-Dataset.txt"
model_attention_span_in_tokens = 512
model_embed_size = 512
number_of_heads = 8
number_of_layers = 6
number_of_training_epochs = 5
training_batch_size = 48
number_of_dataloader_threads = 4
model_learning_rate = 6e-4
checkpoint_full_path = ""
if checkpoint_full_path == '':
checkpoint_full_path = None
trainer, model, train_dataset = MainLoader(full_path_to_training_text_file,
None,
number_of_dataloader_threads,
model_attention_span_in_tokens,
model_embed_size,
number_of_heads,
number_of_layers,
number_of_training_epochs,
training_batch_size,
model_learning_rate,
ckpt_path=checkpoint_full_path)
full_path_to_model_checkpoint = "/content/Optimus-VIRTUOSO-Trained-Model.pth"
model = torch.load(full_path_to_model_checkpoint)
model.eval()
#@title Visually check positional embeddings to make sure everything is ok
PlotPositionalEmbeddings(model, model_attention_span_in_tokens)
```
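As a rough sanity check on the hyper-parameters above: a GPT-style transformer holds about 12·n_layers·d_model² non-embedding parameters (4·d² for the attention projections plus 8·d² for the MLP in each block), so the settings used here imply a model of roughly 19M parameters. A quick estimate (the 12·L·d² rule is a standard approximation, not an exact count for minGPT):

```python
def approx_gpt_params(n_layers, d_model):
    # ~4*d^2 attention weights (Q, K, V, output projection) plus
    # ~8*d^2 MLP weights (two linear layers with a 4x hidden size) per block
    return 12 * n_layers * d_model ** 2

# Hyper-parameters from the cell above: 6 layers, embed size 512
n_params = approx_gpt_params(n_layers=6, d_model=512)  # ~18.9M
```

Token and positional embeddings add a further vocab_size·d_model and span·d_model parameters on top of this.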
# Generate original compound music
# STEP 1: Generate the composition seed
```
#@title (BEST) Self-continuation from the dataset
#@markdown NOTE: You can repeat this step as many times as you like until you find a seed you are happy with
try_to_start_with_intro = True #@param {type:"boolean"}
completion = ''
completion1 = ''
completion2 = ''
completion3 = ''
print('Optimus VIRTUOSO Model Generator')
# print('Starting up...')
number_of_tokens_to_generate = 1024
creativity_temperature = 1
top_k_prob = 64
input_prompt = "SONG="
self_continuation = True
os.chdir('/content/')
if self_continuation:
with open(full_path_to_training_text_file) as f:
dataset = f.read()
idx = secrets.randbelow(len(dataset)-256)
if try_to_start_with_intro:
input_prompt = 'SONG='
else:
input_prompt = 'SONG=Self-Continuation' + chr(10)
input_prompt += dataset[idx:idx+256]
completion = Generate(model,
train_dataset,
trainer,
number_of_tokens_to_generate,
creativity_temperature,
top_k_prob,
input_prompt)
# Stuff for datetime stamp
filename = '/content/Optimus-VIRTUOSO-Composition-' + 'generated-on-'
fname = TMIDIX.Tegridy_File_Time_Stamp(filename)
fname1 = TMIDIX.Tegridy_File_Time_Stamp(filename)
fname2 = TMIDIX.Tegridy_File_Time_Stamp(filename)
number_of_ticks_per_quarter = 500
dataset_time_denominator = 1
melody_conditioned_encoding = False
encoding_has_MIDI_channels = True
encoding_has_velocities = True
simulate_velocity = False
save_only_first_composition = False
chars_encoding_offset_used_for_dataset = 33
output_list, song_name = TMIDIX.Optimus_TXT_to_Notes_Converter(completion,
has_MIDI_channels=encoding_has_MIDI_channels,
simulate_velocity=simulate_velocity,
char_encoding_offset=chars_encoding_offset_used_for_dataset,
save_only_first_composition=save_only_first_composition,
dataset_MIDI_events_time_denominator=dataset_time_denominator,
has_velocities=encoding_has_velocities
)
# print('Converting Song to MIDI...')
output_signature = 'Optimus VIRTUOSO'
detailed_stats = TMIDIX.Tegridy_SONG_to_MIDI_Converter(output_list,
output_signature = output_signature,
output_file_name = fname,
track_name=song_name,
number_of_ticks_per_quarter=number_of_ticks_per_quarter)
fn = os.path.basename(fname + '.mid')
fn1 = fn.split('.')[0]
# print('Plotting the composition. Please wait...')
pm = pretty_midi.PrettyMIDI(fname + '.mid')
# Retrieve piano roll of the MIDI file
piano_roll = pm.get_piano_roll()
plt.figure(figsize=(14, 5))
librosa.display.specshow(piano_roll, x_axis='time', y_axis='cqt_note', fmin=1, hop_length=160, sr=16000, cmap=plt.cm.hot)
plt.title(fn1)
FluidSynth("/usr/share/sounds/sf2/FluidR3_GM.sf2", 16000).midi_to_audio(str(fname + '.mid'), str(fname + '.wav'))
Audio(str(fname + '.wav'), rate=16000)
#@title (ALTERNATIVE) Continue your own MIDI
#@markdown NOTE: The model was trained on a relatively small dataset, so custom MIDI continuations may not be perfect
full_path_to_custom_MIDI_file = "/content/tegridy-tools/tegridy-tools/seed.mid" #@param {type:"string"}
completion = ''
completion1 = ''
completion2 = ''
# Stuff for datetime stamp
filename = '/content/Optimus-VIRTUOSO-Composition-' + 'generated-on-'
fname = TMIDIX.Tegridy_File_Time_Stamp(filename)
fname1 = TMIDIX.Tegridy_File_Time_Stamp(filename)
fname2 = TMIDIX.Tegridy_File_Time_Stamp(filename)
data = TMIDIX.Optimus_MIDI_TXT_Processor(full_path_to_custom_MIDI_file,
output_velocity=True,
output_MIDI_channels=True,
MIDI_channel=16,
MIDI_patch=range(0, 127),
char_offset=33,
perfect_timings=True,
musenet_encoding=True)
custom_prime = data[0][-1024:]
completion3 = data[0]
completion4 = data[0][:len(custom_prime)]
completion = custom_prime
# print('Plotting the composition. Please wait...')
# print('Done!')
# print('Saving to', str(fname + '.txt'))
with open(fname + '.txt', "w") as text_file:
print(completion3, file=text_file)
number_of_ticks_per_quarter = 500
dataset_time_denominator = 1
melody_conditioned_encoding = False
encoding_has_MIDI_channels = True
encoding_has_velocities = True
simulate_velocity = False
save_only_first_composition = False
chars_encoding_offset_used_for_dataset = 33
output_list, song_name = TMIDIX.Optimus_TXT_to_Notes_Converter(completion3,
has_MIDI_channels=encoding_has_MIDI_channels,
simulate_velocity=simulate_velocity,
char_encoding_offset=chars_encoding_offset_used_for_dataset,
save_only_first_composition=save_only_first_composition,
dataset_MIDI_events_time_denominator=dataset_time_denominator,
has_velocities=encoding_has_velocities
)
# print('Converting Song to MIDI...')
output_signature = 'Optimus VIRTUOSO'
detailed_stats = TMIDIX.Tegridy_SONG_to_MIDI_Converter(output_list,
output_signature = output_signature,
output_file_name = fname,
track_name=song_name,
number_of_ticks_per_quarter=number_of_ticks_per_quarter)
fn = os.path.basename(fname + '.mid')
fn1 = fn.split('.')[0]
pm = pretty_midi.PrettyMIDI(fname + '.mid')
# Retrieve piano roll of the MIDI file
piano_roll = pm.get_piano_roll()
plt.figure(figsize=(14, 5))
librosa.display.specshow(piano_roll, x_axis='time', y_axis='cqt_note', fmin=1, hop_length=160, sr=16000, cmap=plt.cm.hot)
plt.title(fn1)
FluidSynth("/usr/share/sounds/sf2/FluidR3_GM.sf2", 16000).midi_to_audio(str(fname + '.mid'), str(fname + '.wav'))
Audio(str(fname + '.wav'), rate=16000)
```
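The `creativity_temperature` and `top_k_prob` settings above control how `Generate` samples each token: logits are divided by the temperature, everything outside the k most likely candidates is masked out, and a token is drawn from the renormalized distribution. A minimal NumPy sketch of that filtering step (an illustration of the idea, assuming `top_k_prob` is the number of candidates kept; this is not minGPT's exact code):

```python
import numpy as np

def top_k_probs(logits, temperature=1.0, k=64):
    """Temperature-scale logits, keep only the k largest, renormalize."""
    logits = np.asarray(logits, dtype=float) / temperature
    if k < logits.size:
        cutoff = np.sort(logits)[-k]                 # k-th largest logit
        logits = np.where(logits < cutoff, -np.inf, logits)
    e = np.exp(logits - np.max(logits))              # numerically stable softmax
    return e / e.sum()

# With k=2 only the two strongest candidates keep any probability
p = top_k_probs([2.0, 1.0, 0.5, -1.0], temperature=1.0, k=2)
```

A lower temperature sharpens the distribution (more conservative continuations); a larger k widens the candidate pool (more variety).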
# STEP 2: Continuation
```
#@title Generate continuation
#@markdown NOTE: You can repeat this step until you find a continuation you are happy with
print('Optimus VIRTUOSO Model Generator')
# print('Starting up...')
number_of_tokens_to_generate = 1024
creativity_temperature = 1
top_k_prob = 64
if completion2 == '':
input_prompt = completion
else:
input_prompt = completion2
self_continuation = False
os.chdir('/content/')
completion1 = Generate(model,
train_dataset,
trainer,
number_of_tokens_to_generate,
creativity_temperature,
top_k_prob,
input_prompt)[-2048:]
# print('Done!')
# print('Saving to', str(fname + '.txt'))
with open(fname2 + '.txt', "w") as text_file:
print(completion1, file=text_file)
number_of_ticks_per_quarter = 500
dataset_time_denominator = 1
melody_conditioned_encoding = False
encoding_has_MIDI_channels = True
encoding_has_velocities = True
simulate_velocity = False
save_only_first_composition = False
chars_encoding_offset_used_for_dataset = 33
output_list, song_name = TMIDIX.Optimus_TXT_to_Notes_Converter(completion1,
has_MIDI_channels=encoding_has_MIDI_channels,
simulate_velocity=simulate_velocity,
char_encoding_offset=chars_encoding_offset_used_for_dataset,
save_only_first_composition=save_only_first_composition,
dataset_MIDI_events_time_denominator=dataset_time_denominator,
has_velocities=encoding_has_velocities
)
# print('Converting Song to MIDI...')
output_signature = 'Optimus VIRTUOSO'
detailed_stats = TMIDIX.Tegridy_SONG_to_MIDI_Converter(output_list,
output_signature = output_signature,
output_file_name = fname2,
track_name=song_name,
number_of_ticks_per_quarter=number_of_ticks_per_quarter)
fn = os.path.basename(fname2 + '.mid')
fn1 = fn.split('.')[0]
# print('Plotting the composition. Please wait...')
pm = pretty_midi.PrettyMIDI(fname2 + '.mid')
# Retrieve piano roll of the MIDI file
piano_roll = pm.get_piano_roll()
plt.figure(figsize=(14, 5))
librosa.display.specshow(piano_roll, x_axis='time', y_axis='cqt_note', fmin=1, hop_length=160, sr=16000, cmap=plt.cm.hot)
plt.title(fn1)
FluidSynth("/usr/share/sounds/sf2/FluidR3_GM.sf2", 16000).midi_to_audio(str(fname2 + '.mid'), str(fname2 + '.wav'))
Audio(str(fname2 + '.wav'), rate=16000)
```
# STEP 3: Final composition
```
#@title (STEP 3) Add last continuation to the final composition
#@markdown NOTE: DO NOT REPEAT THIS STEP until you have generated the next continuation block in STEP 2
if completion2 == '':
completion2 = 'SONG=Continuation' + chr(10) + completion1
else:
completion2 += completion1[-1024:]
if completion3 != '' and custom_prime != '':
completion2 = completion3 + completion1[1024:]
completion3 = ''
# print('Done!')
# print('Saving to', str(fname + '.txt'))
with open(fname1 + '.txt', "w") as text_file:
print(completion2, file=text_file)
number_of_ticks_per_quarter = 500
dataset_time_denominator = 1
melody_conditioned_encoding = False
encoding_has_MIDI_channels = True
encoding_has_velocities = True
simulate_velocity = False
save_only_first_composition = False
chars_encoding_offset_used_for_dataset = 33
output_list, song_name = TMIDIX.Optimus_TXT_to_Notes_Converter(completion2,
has_MIDI_channels=encoding_has_MIDI_channels,
simulate_velocity=simulate_velocity,
char_encoding_offset=chars_encoding_offset_used_for_dataset,
save_only_first_composition=save_only_first_composition,
dataset_MIDI_events_time_denominator=dataset_time_denominator,
has_velocities=encoding_has_velocities
)
# print('Converting Song to MIDI...')
output_signature = 'Optimus VIRTUOSO Composer'
detailed_stats = TMIDIX.Tegridy_SONG_to_MIDI_Converter(output_list,
output_signature = output_signature,
output_file_name = fname1,
track_name=song_name,
number_of_ticks_per_quarter=number_of_ticks_per_quarter)
fn = os.path.basename(fname1 + '.mid')
fn1 = fn.split('.')[0]
# print('Plotting the composition. Please wait...')
pm = pretty_midi.PrettyMIDI(fname1 + '.mid')
# Retrieve piano roll of the MIDI file
piano_roll = pm.get_piano_roll()
plt.figure(figsize=(14, 5))
librosa.display.specshow(piano_roll, x_axis='time', y_axis='cqt_note', fmin=1, hop_length=160, sr=16000, cmap=plt.cm.hot)
plt.title(fn1)
FluidSynth("/usr/share/sounds/sf2/FluidR3_GM.sf2", 16000).midi_to_audio(str(fname1 + '.mid'), str(fname1 + '.wav'))
Audio(str(fname1 + '.wav'), rate=16000)
#@title (OPTIONAL STEP) Undo the last continuation
#@markdown You can undo the last continuation here just in case. Then you can go to STEP 2 to regenerate it
#@markdown NOTE: You can undo as many continuation blocks as you like
print('=' * 70)
print('Removing last continuation block...')
print('=' * 70)
print('Old song length is', len(completion2), 'tokens')
completion2 = completion2[:-1024]
print('New song length is', len(completion2), 'tokens')
print('=' * 70)
if completion2 != '':
# print('Plotting the composition. Please wait...')
# print('Done!')
# print('Saving to', str(fname + '.txt'))
with open(fname1 + '.txt', "w") as text_file:
print(completion2, file=text_file)
number_of_ticks_per_quarter = 500
dataset_time_denominator = 1
melody_conditioned_encoding = False
encoding_has_MIDI_channels = True
encoding_has_velocities = True
simulate_velocity = False
save_only_first_composition = False
chars_encoding_offset_used_for_dataset = 33
output_list, song_name = TMIDIX.Optimus_TXT_to_Notes_Converter(completion2,
has_MIDI_channels=encoding_has_MIDI_channels,
simulate_velocity=simulate_velocity,
char_encoding_offset=chars_encoding_offset_used_for_dataset,
save_only_first_composition=save_only_first_composition,
dataset_MIDI_events_time_denominator=dataset_time_denominator,
has_velocities=encoding_has_velocities
)
# print('Converting Song to MIDI...')
output_signature = 'Optimus VIRTUOSO'
detailed_stats = TMIDIX.Tegridy_SONG_to_MIDI_Converter(output_list,
output_signature = output_signature,
output_file_name = fname1,
track_name=song_name,
number_of_ticks_per_quarter=number_of_ticks_per_quarter)
fn = os.path.basename(fname1 + '.mid')
fn1 = fn.split('.')[0]
pm = pretty_midi.PrettyMIDI(fname1 + '.mid')
# Retrieve piano roll of the MIDI file
piano_roll = pm.get_piano_roll()
plt.figure(figsize=(14, 5))
librosa.display.specshow(piano_roll, x_axis='time', y_axis='cqt_note', fmin=1, hop_length=160, sr=16000, cmap=plt.cm.hot)
plt.title(fn1)
FluidSynth("/usr/share/sounds/sf2/FluidR3_GM.sf2", 16000).midi_to_audio(str(fname1 + '.mid'), str(fname1 + '.wav'))
Audio(str(fname1 + '.wav'), rate=16000)
```
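The undo cell above works because every continuation block is assumed to be exactly 1024 characters long; if a block of any other length were ever appended, slicing off a fixed 1024 would corrupt the song. A sketch of a more robust variant (hypothetical, not part of the original notebook) that records each block's length on a stack so undo always removes exactly what was appended:

```python
block_lengths = []  # one entry per appended continuation block

def append_block(song, block):
    block_lengths.append(len(block))
    return song + block

def undo_block(song):
    if not block_lengths:
        return song  # nothing left to undo
    return song[:-block_lengths.pop()]

song = append_block("SONG=", "abc")
song = append_block(song, "defgh")
song = undo_block(song)  # removes exactly the 5-char block
```

With this bookkeeping, repeated undo peels off blocks in reverse order regardless of their sizes.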
# STEP 4: Download the result
```
#@title Download the final composition
print('Downloading your composition now...')
print(fname1)
from google.colab import files
files.download(fname1 + '.mid')
```
# Congrats! You did it! :)
| github_jupyter |
# Setup
```
import pandas as pd
import numpy as np
import jsonlines
import seaborn as sns
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import torch.nn as nn
import torch
import torch.nn.functional as F
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms, utils
import torch_optimizer as optim
import os
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
from importlib import reload
pd.set_option('display.max_rows', 500)
pd.set_option('display.float_format', '{:0.3f}'.format)
pd.set_option('display.max_columns', 500)
pd.set_option('display.width', 1000)
pd.options.display.width = 0
import warnings
import torchvision
warnings.filterwarnings('ignore')
from facebook_hateful_memes_detector.utils.globals import set_global, get_global
set_global("cache_dir", "/home/ahemf/cache/cache")
set_global("dataloader_workers", 4)
set_global("use_autocast", True)
set_global("models_dir", "/home/ahemf/cache/")
from facebook_hateful_memes_detector.utils import read_json_lines_into_df, in_notebook, set_device, my_collate, clean_memory
get_global("cache_dir")
from facebook_hateful_memes_detector.models import Fasttext1DCNNModel, MultiImageMultiTextAttentionEarlyFusionModel, LangFeaturesModel, AlbertClassifer
from facebook_hateful_memes_detector.preprocessing import TextImageDataset, get_datasets, get_image2torchvision_transforms, TextAugment
from facebook_hateful_memes_detector.preprocessing import DefinedRotation, QuadrantCut, ImageAugment
from facebook_hateful_memes_detector.training import *
import facebook_hateful_memes_detector
reload(facebook_hateful_memes_detector)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
set_device(device)
device
from transformers import pipeline
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
import torch
from transformers import AutoModelWithLMHead, AutoTokenizer
from transformers import pipeline
from transformers import AutoModelWithLMHead, AutoTokenizer
from transformers import MarianMTModel, MarianTokenizer
from tqdm.auto import tqdm, trange
from transformers import MarianMTModel, MarianTokenizer
data = get_datasets(data_dir="/home/ahemf/cache/data/",
train_text_transform=None,
train_image_transform=None,
test_text_transform=None,
test_image_transform=None,
train_torchvision_pre_image_transform=None,
test_torchvision_pre_image_transform=None,
cache_images=False,
use_images=True,
dev=False,
test_dev=True,
keep_original_text=True,
keep_original_image=True,
keep_processed_image=True,
keep_torchvision_image=False,
train_mixup_config=None)
data["test"]["label"] = -1
data['test_unseen']["label"] = -1
dev_unseen = data['dev_unseen'].copy()
data['dev_unseen']["label"] = -1
df = pd.concat((data["train"],
data['dev_unseen'],
data["test"], data['test_unseen']))
df = pd.read_csv("/home/ahemf/cache/new_items.csv")
def build_translator(lang_models, model_type="huggingface"):
if model_type=="huggingface":
forward_model, backward_model = lang_models["fwd"], lang_models["inv"]
tokenizer = MarianTokenizer.from_pretrained(forward_model)
model = MarianMTModel.from_pretrained(forward_model)
model = model.to(get_device())
model = model.eval()
state = dict(fwd=(tokenizer, model))
tokenizer = MarianTokenizer.from_pretrained(backward_model)
model = MarianMTModel.from_pretrained(backward_model)
model = model.to(get_device())
model = model.eval()
state["inv"] = (tokenizer, model)
elif model_type=="pytorch":
forward_model, backward_model = lang_models["fwd"], lang_models["inv"]
if "fwd_checkpoint_file" in lang_models:
model = torch.hub.load('pytorch/fairseq', forward_model,
tokenizer='moses', bpe='fastbpe', checkpoint_file=lang_models["fwd_checkpoint_file"])
else:
model = torch.hub.load('pytorch/fairseq', forward_model, tokenizer='moses', bpe='fastbpe')
if "inv_checkpoint_file" in lang_models:
backward_model = torch.hub.load('pytorch/fairseq', backward_model,
tokenizer='moses', bpe='fastbpe', checkpoint_file=lang_models["inv_checkpoint_file"])
else:
backward_model = torch.hub.load('pytorch/fairseq', backward_model, tokenizer='moses', bpe='fastbpe')
model = model.to(get_device())
model = model.eval()
backward_model = backward_model.to(get_device())
backward_model = backward_model.eval()
state = dict(fwd=model, inv=backward_model)
def translate(text):
texts = [text]
if model_type=="huggingface":
fwd_tokenizer, fwd_model = state["fwd"]
inv_tokenizer, inv_model = state["inv"]
lang_codes = fwd_tokenizer.supported_language_codes
if "ROMANCE" in forward_model:
lang_codes = ['>>fr<<', '>>es<<', '>>it<<', '>>pt<<', '>>ro<<', '>>ca<<', '>>gl<<', '>>la<<', '>>wa<<', '>>fur<<', '>>oc<<', '>>sc<<', '>>an<<', '>>frp<<',]
better_lang_codes = ['>>fr<<', '>>es<<', '>>it<<', '>>pt<<', '>>ca<<', '>>fur<<', '>>oc<<', '>>sc<<', '>>an<<', '>>frp<<']
lang_codes = better_lang_codes
if "CELTIC" in forward_model:
lang_codes = ['>>ga<<']
if len(lang_codes) > 0:
texts = [t for text in texts for t in [lang+" "+text for lang in lang_codes]]
batch = fwd_tokenizer.prepare_translation_batch(texts)
for k, v in batch.items():
if isinstance(v, torch.Tensor):
v = v.to(get_device())
batch[k] = v
translated = fwd_model.generate(**batch)
fwd_translations = [fwd_tokenizer.decode(t, skip_special_tokens=True) for t in translated]
inv_batch = inv_tokenizer.prepare_translation_batch(fwd_translations)
for k, v in inv_batch.items():
if isinstance(v, torch.Tensor):
v = v.to(get_device())
inv_batch[k] = v
translated = inv_model.generate(**inv_batch)
tgt_text = [inv_tokenizer.decode(t, skip_special_tokens=True) for t in translated]
clean_memory()
return tgt_text
elif model_type=="pytorch":
intermediate = state["fwd"].translate(text)
res = state["inv"].translate(intermediate)
clean_memory()
return [res]
return translate
fox = "The quick brown fox jumps over the lazy dog."
cats = "The cat sat on the front porch sipping a pint of milk."
text = 'have you ever studied the history of the jews? did you know that they have always banded together as a tribe, infiltrated governments.'
text_long = 'have you ever studied the history of the jews? did you know that they have always banded together as a tribe, infiltrated governments, monopolized the financial systems of nations instigated wars and intentionally created chaos in societies? the jews have mass murdered millions of non- jews over the centuries they have seized control of the media so you will never find out study the history of the jews!'
hg_en_ru = dict(fwd='Helsinki-NLP/opus-mt-en-ru', inv='Helsinki-NLP/opus-mt-ru-en')
hg_en_de = dict(fwd='Helsinki-NLP/opus-mt-en-de', inv='Helsinki-NLP/opus-mt-de-en')
hg_en_celtic = dict(fwd='Helsinki-NLP/opus-mt-en-CELTIC', inv='sshleifer/opus-mt-CELTIC-en')
hg_en_romance = dict(fwd='Helsinki-NLP/opus-mt-en-ROMANCE', inv='Helsinki-NLP/opus-mt-ROMANCE-en')
translate = build_translator(hg_en_ru)
translate(fox)
translate(cats)
translate(text)
pt_en_de_1 = dict(fwd='transformer.wmt19.en-de.single_model', inv='transformer.wmt19.de-en.single_model')
pt_en_de_2 = dict(fwd='transformer.wmt19.en-de.single_model', inv='transformer.wmt19.de-en', inv_checkpoint_file='model1.pt:model2.pt:model3.pt:model4.pt')
pt_en_de_3 = dict(fwd='transformer.wmt19.en-de', fwd_checkpoint_file='model1.pt:model2.pt:model3.pt:model4.pt',
inv='transformer.wmt19.de-en.single_model')
pt_en_de_4 = dict(fwd='transformer.wmt19.en-de', fwd_checkpoint_file='model1.pt:model2.pt:model3.pt:model4.pt',
inv='transformer.wmt19.de-en', inv_checkpoint_file='model1.pt:model2.pt:model3.pt:model4.pt')
pt_en_de_5 = dict(fwd='transformer.wmt16.en-de', inv='transformer.wmt19.de-en.single_model')
pt_en_de_6 = dict(fwd='transformer.wmt16.en-de', inv='transformer.wmt19.de-en', inv_checkpoint_file='model1.pt:model2.pt:model3.pt:model4.pt')
pt_en_de_7 = dict(fwd='conv.wmt17.en-de', inv='transformer.wmt19.de-en.single_model')
pt_en_de_8 = dict(fwd='conv.wmt17.en-de', inv='transformer.wmt19.de-en', inv_checkpoint_file='model1.pt:model2.pt:model3.pt:model4.pt')
pt_en_ru = dict(fwd='transformer.wmt19.en-ru.single_model', inv='transformer.wmt19.ru-en.single_model')
translate = build_translator(pt_en_de_5, model_type="pytorch")
translate(fox)
translate(cats)
translate(text)
```
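One detail of `build_translator` worth isolating: for the multilingual Marian models (`ROMANCE`, `CELTIC`) each input is fanned out into one prompt per target-language code, so a single call returns several paraphrases at once. The fan-out is just a nested list comprehension; a standalone illustration:

```python
lang_codes = ['>>fr<<', '>>es<<', '>>it<<']  # target-language prefixes
texts = ["The quick brown fox jumps over the lazy dog."]

# Prefix every input text with every language code, as done inside translate()
texts = [t for text in texts for t in [lang + " " + text for lang in lang_codes]]
# texts[0] == '>>fr<< The quick brown fox jumps over the lazy dog.'
```

Each prefixed prompt is then translated forward and back independently, which is what makes one source sentence yield a whole batch of augmentations.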
# DAB
```
results = []
translate = build_translator(pt_en_de_6, model_type="pytorch")
for row in tqdm(df.iterrows(), total=df.shape[0]):
keys = row[1].index.values
values = row[1].values
d = dict(zip(keys, values))
t = translate(d["text"])
if isinstance(t, (list, tuple)):
r = [(d["id"],ts) for ts in t]
results.extend(r)
else:
results.append((d["id"], t))
results[1]
# flattened = []
# for identifier, translations in results:
# flattened.extend([(identifier, t) for t in translations])
len(results)
pd.DataFrame(results, columns=["id", "text"]).to_csv(os.path.join(get_global("models_dir"),"new_items_10.csv"), index=False)
```
# Combine
```
df = pd.read_csv("/home/ahemf/cache/new_items.csv", engine="python")
translated_cols = list(set(df.columns) - {'text'})
df_translated = df[translated_cols]
df = df[["id", "text"]]
dabs = []
for c in translated_cols:
if c!="id":
dabs.append(df_translated[["id", c]].rename(columns={c:"text"}))
# dabs = []
for i in range(1, 14):
db = pd.read_csv(os.path.join(get_global("models_dir"),"new_items_%s.csv"%i))
dabs.append(db)
dabs = pd.concat((dabs))
dabs.shape
dabs.text = dabs.text.astype(str)
df.shape
len(set(list(dabs.text.apply(lambda x: x.strip().lower()))))
len(set(list(dabs.text.apply(lambda x: x.lower().strip()))))
df.shape
len(set(list(df.text.apply(lambda x: x.strip().lower()))))
len(set(list(df.text.apply(lambda x: x.lower().strip()))))
dabs.head()
from collections import defaultdict
id2textset = defaultdict(set)
original_pairs = []
for row in df.iterrows():
idx = row[0]
identifier, text = row[1]
if text.lower().strip() in id2textset[identifier]:
continue
else:
id2textset[identifier].add(text.lower().strip())
for row in dabs.iterrows():
idx = row[0]
identifier, text = row[1]
if text.lower().strip() in id2textset[identifier]:
continue
else:
id2textset[identifier].add(text.lower().strip())
original_pairs.append((identifier, text))
len(original_pairs)
dab = pd.DataFrame(original_pairs, columns=["id", "text"])
dab.to_csv(os.path.join(get_global("models_dir"),"new_dab.csv"), index=False)
fdab = pd.read_csv("/home/ahemf/cache/fdab.csv", engine="python")
fdab.shape
fdab = pd.concat((fdab, dab))
fdab.head()
fdab.shape
fdab.to_csv("/home/ahemf/cache/fdab.csv", index=False)
dab.groupby(["id"]).count().min()
dab.id.nunique()
dab.head().values
df.head()
df.to_csv("id2text.csv", index=False)
!pwd
!head -n5 text.csv
```
# Dedup
```
dab = pd.read_csv("/home/ahemf/cache/fdab.csv", engine="python")
dab.shape
df.shape
from collections import defaultdict
id2textset = defaultdict(set)
original_pairs = []
for row in df[["id", "text"]].iterrows():
idx = row[0]
identifier, text = row[1]
if text.lower().strip() in id2textset[identifier]:
continue
else:
id2textset[identifier].add(text.lower().strip())
for row in dab.iterrows():
idx = row[0]
identifier, text = row[1]
if text.lower().strip() in id2textset[identifier]:
continue
else:
id2textset[identifier].add(text.lower().strip())
original_pairs.append((identifier, text))
len(original_pairs)
from collections import defaultdict
id2wordset = defaultdict(set)
original_pairs = []
for row in df[["id", "text"]].iterrows():
idx = row[0]
identifier, text = row[1]
id2wordset[identifier].update(set(text.lower().strip().split()))
for row in dab.iterrows():
idx = row[0]
identifier, text = row[1]
wsset = id2wordset[identifier]
csset = set(text.lower().strip().split())
if len(csset - wsset)<1:
continue
else:
id2wordset[identifier].update(csset)
original_pairs.append((identifier, text))
len(original_pairs)
dab = pd.DataFrame(original_pairs, columns=["id", "text"])
dab.to_csv("/home/ahemf/cache/hard_dab.csv", index=False)
dab = dab.sort_values(["id"])
df = df.sort_values(["id"])
dab.head()['text'].values
df.head(1)['text'].values
```
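The second pass above is the stricter of the two dedup heuristics: a back-translation survives only if it contributes at least one word not yet seen for its id, whereas the first pass only drops exact (case- and whitespace-insensitive) string matches. A toy run of the word-set heuristic:

```python
from collections import defaultdict

id2wordset = defaultdict(set)
kept = []
rows = [(1, "the cat sat"),
        (1, "the cat sat down"),  # adds the new word "down" -> kept
        (1, "sat the cat")]       # no new words -> dropped

for identifier, text in rows:
    words = set(text.lower().strip().split())
    if len(words - id2wordset[identifier]) < 1:
        continue
    id2wordset[identifier].update(words)
    kept.append((identifier, text))
# kept == [(1, 'the cat sat'), (1, 'the cat sat down')]
```

Note that pure word reorderings are discarded, which is usually the desired behavior for augmentation data.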
| github_jupyter |
```
## importing modules
from tkinter import *
import base64
#initialize window
root = Tk()
root.geometry('500x300')
root.resizable(0,0)
#title of the window
root.title("Haseeb's - Message Encode and Decode")
#label
Label(root, text ='ENCODE DECODE', font = 'arial 20 bold').pack()
Label(root, text ='Haseeb Project\'s', font = 'arial 20 bold').pack(side =BOTTOM)
#define variables
Text = StringVar()
private_key = StringVar()
mode = StringVar()
Result = StringVar()
####### define functions #####
#function to encode
def Encode(key,message):
enc=[]
for i in range(len(message)):
key_c = key[i % len(key)]
enc.append(chr((ord(message[i]) + ord(key_c)) % 256))
return base64.urlsafe_b64encode("".join(enc).encode()).decode()
#function to decode
def Decode(key,message):
dec=[]
message = base64.urlsafe_b64decode(message).decode()
for i in range(len(message)):
key_c = key[i % len(key)]
dec.append(chr((256 + ord(message[i])- ord(key_c)) % 256))
return "".join(dec)
#function to set mode
def Mode():
if(mode.get() == 'e'):
Result.set(Encode(private_key.get(), Text.get()))
elif(mode.get() == 'd'):
Result.set(Decode(private_key.get(), Text.get()))
else:
Result.set('Invalid Mode')
#Function to exit window
def Exit():
root.destroy()
#Function to reset
def Reset():
Text.set("")
private_key.set("")
mode.set("")
Result.set("")
#################### Label and Button #############
#Message
Label(root, font= 'arial 12 bold', text='MESSAGE').place(x= 60,y=60)
Entry(root, font = 'arial 10', textvariable = Text, bg = 'ghost white').place(x=290, y = 60)
#key
Label(root, font = 'arial 12 bold', text ='KEY').place(x=60, y = 90)
Entry(root, font = 'arial 10', textvariable = private_key , bg ='ghost white').place(x=290, y = 90)
#mode
Label(root, font = 'arial 12 bold', text ='MODE(e-encode, d-decode)').place(x=60, y = 120)
Entry(root, font = 'arial 10', textvariable = mode , bg= 'ghost white').place(x=290, y = 120)
#result
Entry(root, font = 'arial 10 bold', textvariable = Result, bg ='ghost white').place(x=290, y = 150)
######result button
Button(root, font = 'arial 10 bold', text = 'RESULT' ,padx =2,bg ='LightGray' ,command = Mode).place(x=60, y = 150)
#reset button
Button(root, font = 'arial 10 bold' ,text ='RESET' ,width =6, command = Reset,bg = 'LimeGreen', padx=2).place(x=80, y = 190)
#exit button
Button(root, font = 'arial 10 bold',text= 'EXIT' , width = 6, command = Exit,bg = 'OrangeRed', padx=2, pady=2).place(x=180, y = 190)
root.mainloop()
```
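The `Encode`/`Decode` pair above is a Vigenère-style shift cipher (each character is shifted by the matching key character, modulo 256) wrapped in URL-safe base64. Stripped of the Tkinter variables, the same logic round-trips any message under any non-empty key; a quick check:

```python
import base64

def encode(key, message):
    enc = [chr((ord(message[i]) + ord(key[i % len(key)])) % 256)
           for i in range(len(message))]
    return base64.urlsafe_b64encode("".join(enc).encode()).decode()

def decode(key, token):
    message = base64.urlsafe_b64decode(token).decode()
    return "".join(chr((256 + ord(message[i]) - ord(key[i % len(key)])) % 256)
                   for i in range(len(message)))

token = encode("secret", "hello world")
assert decode("secret", token) == "hello world"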
| github_jupyter |
# Lambda distribution (vs. Reff)
```
import matplotlib.pyplot as plt
import pickle
import numpy as np
## functions
def load_pickle(fname):
with open(fname, 'rb') as f:
return pickle.load(f)
def plot_lambda(catalog, i_early, i_late, i_bad, fn_out='./'):
import matplotlib.pyplot as plt
plt.ioff()
f = plt.figure()
ax = f.add_subplot(111)
#for i, val in enumerate(lambdar_arr):
for i in i_early:
a = np.asarray(catalog['lambda_arr'][i])
ax.plot(a, 'r-', alpha=0.5) # Red = Early
for i in i_late:
ax.plot(catalog['lambda_arr'][i], 'b-', alpha=0.3) # Blue = Late
#plt.xlabel() # in the unit of Reff
ax.set_title(r"$\lambda _{R}$")
ax.set_ylabel(r"$\lambda _{R}$")
ax.set_xlabel("["+ r'$R/R_{eff}$'+"]")
ax.set_xlim(right=9)
ax.set_ylim([0,1])
ax.set_xticks([0, 4.5, 9])
ax.set_xticklabels(["0", "0.5", "1"])
plt.savefig(fn_out)
plt.close()
def aexp2zred(aexp):
return [1.0/a - 1.0 for a in aexp]
def zred2aexp(zred):
return [1.0/(1.0 + z) for z in zred]
def lbt2aexp(lts):
import astropy.units as u
from astropy.cosmology import WMAP7, z_at_value
zreds = [z_at_value(WMAP7.lookback_time, ll * u.Gyr) for ll in lts]
return [1.0/(1+z) for z in zreds]
def density_map(x, y, sort=True):
from scipy.stats import gaussian_kde
xy = np.vstack([x,y])
z = gaussian_kde(xy)(xy)
z /= max(z)
idx = z.argsort()
xx, yy = x[idx], y[idx]
z = z[idx]
return xx, yy, z
```
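The scale-factor helpers above implement the standard relation z = 1/a − 1 between cosmological scale factor and redshift, so `aexp2zred` and `zred2aexp` are exact inverses of each other. A quick worked check:

```python
def aexp2zred(aexp):
    return [1.0 / a - 1.0 for a in aexp]

def zred2aexp(zred):
    return [1.0 / (1.0 + z) for z in zred]

aexp = [1.0, 0.5, 0.25]     # today, z=1, z=3
zred = aexp2zred(aexp)      # [0.0, 1.0, 3.0]
assert zred2aexp(zred) == aexp
```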
## I like this!
```
clusters = ['05427', '36413', '39990', '01605', '10002', '36415', '04466', '74010'][0:5]
lr_points = 5 # number of points in 1 Reff
nreff = 3
nbins = 20
def lambda_den_map(clusters, exclude, nout=187, lr_points = 5, nreff=3, nbins=20,
density_kernel=False):
print(" nout:", nout, "lr_points:", lr_points, "nreff:", nreff, "nbins:", nbins)
points = np.arange(lr_points * nreff)
x_ticks_label = ["0", "1", "2", "3", "4"][0:nreff]
x_tick_pos = [0]
[x_tick_pos.append((i+1)*lr_points) for i in range(nreff)]
# Need a compiled array of lambda_arr
fig, axs = plt.subplots(2,2, sharey=True)#, sharex=True)
mass_cut_l = [2e9, 2e9, 1e10, 1e11]
mass_cut_r = [1e13,1e10, 1e11, 1e13]
yticks_ok=[0.0, 0.2, 0.4, 0.6, 0.8]
lambda_range=[0.0, 0.8]
snout = str(nout)
for imass in range(4):
# Count number of galaxies
ngood=0
for iclu, cluster in enumerate(clusters):
wdir = '/home/hoseung/Work/data/' + cluster
catalog = load_pickle(wdir + '/catalog_GM/' + 'catalog' + snout + '.pickle')
#i_good = np.where((catalog['mstar'] > mass_cut_l[imass]) & (catalog['mstar'] < mass_cut_r[imass]))[0]
i_good = (catalog['mstar'] > mass_cut_l[imass]) & (catalog['mstar'] < mass_cut_r[imass])
for i, gal in enumerate(catalog['id']):
if gal in exclude[iclu]: i_good[i] = False
#ngood += len(i_good)
ngood += sum(i_good)
ax = axs.ravel()[imass]
all_lr = np.zeros((len(points), ngood))
# compile data
ngood=0
for iclu, cluster in enumerate(clusters):
wdir = '/home/hoseung/Work/data/' + cluster
catalog = load_pickle(wdir + '/catalog_GM/' + 'catalog' + snout + '.pickle')
#i_good = np.where((catalog['mstar'] > mass_cut_l[imass]) & (catalog['mstar'] < mass_cut_r[imass]))[0]
i_good = (catalog['mstar'] > mass_cut_l[imass]) & (catalog['mstar'] < mass_cut_r[imass])
for i, gal in enumerate(catalog['id']):
if gal in exclude[iclu]: i_good[i] = False
ind_good = np.arange(len(i_good))[i_good]
for i, i_gal in enumerate(ind_good):
all_lr[:,ngood + i] = catalog['lambda_arr'][i_gal][:len(points)]
#ngood +=len(i_good)
ngood += sum(i_good)
# Draw density maps
if density_kernel:
xpoints = np.tile(points, ngood)
xx,yy,z = density_map(xpoints,all_lr.transpose().ravel())
im = ax.scatter(xx, yy, c=z, s=150, edgecolor='')
ax.set_xlim([-0.5, nreff*lr_points])
ax.set_ylim([-0.1,0.9])
#x_tick_pos = ""
#ax.set_xticks([0,lr_points-1,2*lr_points - 1])
#x_ticks_label = ["0", "1", "2"] # Correct. by default, rscale_lambda = 2.0
#ax.set_xticklabels(labels = [z for z in x_ticks_label])
#ax.set_xlabel(r"$R/R_{eff}$")
ax.set_title(r"{:.1e} $< M_\ast <$ {:.1e}".format(mass_cut_l[imass], mass_cut_r[imass]))
ax.text(1,0.65, "# gals:" + str(ngood)) # data coordinates
else:
den_map = np.zeros((nbins, len(points)))
for i in range(len(points)):
den_map[:,i], ypoints = np.histogram(all_lr[i,:], bins=nbins, range=lambda_range)
#den_map[:,i] /= den_map[:,i].max() # normalize each bin.
den_map /= den_map.max()
im = ax.imshow(den_map, origin="lower", cmap="Blues", interpolation="none"
, extent=[0,lr_points * nreff,0,nbins], aspect='auto')
#ax.set_xlim([-1.5, lr_points*nreff])
ax.set_ylim([-0.5,nbins])
ax.set_title(r"{:.1e} $< M_\ast <$ {:.1e}".format(mass_cut_l[imass], mass_cut_r[imass]))
ax.text(2,17, "# gals:" + str(ngood)) # data coordinates
#ax.set_yticks([np.where(ypoints == yy)[0] for yy in [0.0, 0.2, 0.4, 0.6, 0.8]]) # 0.0, 0.2, 0.4, 0.6, 0.8
#ax.set_yticklabels([str(yy) for yy in yticks_ok])
if density_kernel:
for j in range(2):
for i in range(2):
axs[j,i].set_xticks(x_tick_pos)
axs[j,i].set_xticklabels(labels = [z for z in x_ticks_label])
axs[1,i].set_xlabel(r"$R/R_{eff}$")
axs[i,0].set_ylabel(r"$\lambda _R$")
#axs[i,j].set_yticks([np.where(ypoints == yy)[0] for yy in np.arange(lambda_range[0], lambda_range[1])]) # 0.0, 0.2, 0.4, 0.6, 0.8
axs[i,j].set_yticks([ly for ly in [0.0, 0.2, 0.4, 0.6, 0.8]])
axs[i,j].set_yticklabels([str(yy) for yy in yticks_ok])
else:
for j in range(2):
for i in range(2):
axs[j,i].set_xticks(x_tick_pos)
axs[j,i].set_xticklabels(labels = [z for z in x_ticks_label])
axs[1,i].set_xlabel(r"$R/R_{eff}$")
axs[i,0].set_ylabel(r"$\lambda _R$")
#axs[i,j].set_yticks([np.where(ypoints == yy)[0] for yy in np.arange(lambda_range[0], lambda_range[1])]) # 0.0, 0.2, 0.4, 0.6, 0.8
axs[i,j].set_yticks([ nbins * ly for ly in [0.0, 0.2, 0.4, 0.6, 0.8]])
axs[i,j].set_yticklabels([str(yy) for yy in yticks_ok])
# Add + mark at 0.5, 1.0, 2.0Reff
#fig.tight_layout()
cax = fig.add_axes([0.86, 0.1, 0.03, 0.8]) # [left corner x, left corner y, x width, y width]
plt.colorbar(im, cax=cax, label='normalized density')
plt.subplots_adjust(left=0.1, bottom=None, right=0.8, top=None, wspace=0.05, hspace=0.22)
#left = 0.125 # the left side of the subplots of the figure
#right = 0.9 # the right side of the subplots of the figure
#bottom = 0.1 # the bottom of the subplots of the figure
#top = 0.9 # the top of the subplots of the figure
#wspace = 0.2 # the amount of width reserved for blank space between subplots
#hspace = 0.5 # the amount of height reserved for white space between subplots
plt.show()
#lambda_den_map(clusters)
exclude=[[],[],[],[],[1],[],[]]
lambda_den_map(["05427", "36413", "39990", "28928", "01605", "36415", "10002"], exclude, nout=187, lr_points = lr_points, density_kernel=True)
```
The high-resolution run seems to have more galaxies.
Check the mass function.
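A quick way to check is to histogram the stellar masses of both runs in log-spaced bins. The sketch below uses synthetic placeholder masses; in practice the two arrays would come from `catalog['mstar']` of the low- and high-resolution catalogs loaded with `load_pickle`.

```python
# Galaxy stellar-mass function sketch: counts per log-spaced mass bin.
# The masses here are random placeholders standing in for catalog['mstar'].
import numpy as np

rng = np.random.RandomState(1)
mstar_lowres = 10 ** rng.uniform(9, 12, size=300)   # [Msun], placeholder
mstar_hires = 10 ** rng.uniform(9, 12, size=450)    # [Msun], placeholder

bins = np.logspace(9, 12, 16)                       # 15 log-spaced mass bins
n_low, _ = np.histogram(mstar_lowres, bins=bins)
n_hi, _ = np.histogram(mstar_hires, bins=bins)

# A resolution effect would show up as an excess of high-resolution counts,
# mostly in the low-mass bins.
print("low-res counts:", n_low.sum(), "high-res counts:", n_hi.sum())
```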
```
a=np.array([])
clusters = [5427, 36415, 39990, 1605, 10002, 36413, 4466, 74010][0:5]
# 74010 is unreliable.
# Why doesn't 36413 run?
#exclude_gals = [[],
# [],
# [],
# [123,155,],
# [2694,4684,5448,5885,5906,6967,6981,7047,7072,7151,7612],
# []]
lr_points = 10 # number of points in 1 Reff.
nreff = 3
points = np.arange(lr_points * nreff)
x_ticks_label = ["0", "1", "2", "3", "4"][0:nreff]
x_tick_pos = [0]
[x_tick_pos.append((i+1)*lr_points) for i in range(nreff)]
# Need a compiled array of lambda_arr
fig, axs = plt.subplots(2,2, sharey=True, sharex=True)
mass_cut_l = [0, 5e9, 1e10, 1e11, 1e12]
mass_cut_r = [1e13,1e10, 1e11, 1e12, 1e13]
#titles = #["All galaxies from all clusters",
# " {} $< M_{*} <$ {}".format(mass_cut_l[imass], mass_cut_r[imass])]
for imass in range(4):
ax = axs.ravel()[imass]
all_lr = np.zeros(0)
xpos = [] # why list??
ypos = []
zpos = []
clur = []
for i, cluster in enumerate(clusters):
wdir = '/home/hoseung/Work/data/' + str(cluster).zfill(5)
catalog = load_pickle(wdir + '/catalog_GM/' + 'catalog187.pickle')
i_good = np.where((catalog['mstar'] > mass_cut_l[imass]) & (catalog['mstar'] < mass_cut_r[imass]))[0]
for ij, j in enumerate(i_good):
all_lr = np.concatenate((all_lr, catalog['lambda_r'][j])) # some catalog has L_arr up to 2Rvir.
# Draw density maps
# x values
xpoints = np.tile(points, len(all_lr))
# Gaussian_kde measures 2D density. But in this case x-axis and y-axis are two independent parameters
#(not like x position and y position). So instead, measure 1-D histogram at each x point (R/Reff).
xx, yy, z = density_map(xpoints[:all_lr.shape[0]], all_lr.ravel())
ax.scatter(xx, yy, c=z, s=50, edgecolor='')
ax.set_xlim([-0.5,2*lr_points])
ax.set_ylim([0,0.8])
ax.set_title(r"{:.1e} $< M_\ast <$ {:.1e}".format(mass_cut_l[imass], mass_cut_r[imass]))
axs[1,0].set_xticks(x_tick_pos)#[0,lr_points-1,2*lr_points - 1])
axs[1,0].set_xticklabels(labels = [z for z in x_ticks_label])
axs[1,0].set_xlabel(r"$R/R_{eff}$")
axs[1,1].set_xticks(x_tick_pos)#[0,lr_points-1,2*lr_points - 1])
axs[1,1].set_xticklabels(labels = [z for z in x_ticks_label])
axs[1,1].set_xlabel(r"$R/R_{eff}$")
axs[0,0].set_ylabel(r"$\lambda _R$")
axs[1,0].set_ylabel(r"$\lambda _R$")
# Add + mark at 0.5, 1.0, 2.0Reff
plt.show()
plt.close()
```
#### Seaborn heat map
Looks better than `imshow`. (By the way, you can use `pcolormesh(X, Y, Z)` instead of `imshow(map)`.)
Choose a suitable color map from Seaborn color map templates.
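A sketch of the `pcolormesh` alternative mentioned above: passing explicit bin edges means the axes carry the physical units (R/Reff and lambda) directly, instead of the pixel indices that `imshow` uses. The `den_map` here is a random placeholder for the real density map.

```python
# pcolormesh with explicit X/Y bin edges (one more edge than cells per axis).
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

nbins, npoints = 20, 15
den_map = np.random.RandomState(2).rand(nbins, npoints)  # placeholder density

x_edges = np.linspace(0.0, 3.0, npoints + 1)   # R/Reff
y_edges = np.linspace(0.0, 0.8, nbins + 1)     # lambda_R
X, Y = np.meshgrid(x_edges, y_edges)

fig, ax = plt.subplots()
quad = ax.pcolormesh(X, Y, den_map, cmap="Blues")
ax.set_xlabel(r"$R/R_{eff}$")
ax.set_ylabel(r"$\lambda _R$")
fig.colorbar(quad, label="normalized density")
fig.savefig("pcolormesh_demo.png")
```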
```
#plt.clf()
fig, ax = plt.subplots(1)
import seaborn.apionly as sns
# reset rc params to defaults
sns.reset_orig()
#cmap = sns.diverging_palette(220, 10, as_cmap=True)
sns.heatmap(den_map, cmap="Blues", square=True, xticklabels=5, yticklabels=5,
linewidths=.2, cbar_kws={"shrink": .5}, ax=ax)
plt.gca().invert_yaxis()
plt.show()
# as a line
plt.close()
fig, ax = plt.subplots(len(clusters))
for i, cluster in enumerate(clusters):
wdir = '/home/hoseung/Work/data/' + str(cluster).zfill(5)
catalog = load_pickle(wdir + '/catalog_GM/' + 'catalog187.pickle')
#i_early = np.where(catalog['mstar'] > 5e11)[0]
i_early = np.where((catalog['mstar'] > 1e10) & (catalog['mstar'] < 1e11))[0]
for j in i_early:
ax[i].plot(points, catalog['lambda_arr'][j][:2 *lr_points], c='grey', alpha=0.3)
ax[i].set_xlim([-0.5,2*lr_points])
ax[i].set_ylim([0,0.8])
x_tick_pos = ""
ax[i].set_xticks([0,lr_points -1, 2*lr_points - 1])
x_ticks_label = ["0", "1", "2"] # Correct. by default, rscale_lambda = 2.0
ax[i].set_xticklabels(labels = [z for z in x_ticks_label])
ax[i].set_xlabel(r"$R/R_{eff}$")
plt.show()
len(catalog['lambda_arr'][j])
final_gals = list(cat['final_gal'])
# exclude disky galaxies
for bad_gal in exclude_gals[i]:
final_gals.remove(bad_gal)
ngals = len(final_gals)
mstar = np.zeros((ngals, nnouts))
l_r = np.zeros((ngals, nnouts))
```
| github_jupyter |
# requirements
```
osmium version 1.13.2
libosmium version 2.17.1
Supported PBF compression types: none zlib
```
```
# In this paper, we use New York pbf data ver.2022-02-24T21:21:27Z.
# !wget https://download.geofabrik.de/north-america/us/new-york-latest.osm.pbf -O ../data/new-york-latest.osm.pbf
# !osmium extract --overwrite --bbox -74.02,40.70,-73.9,40.84 -o ../data/newyork.osm.pbf ../data/new-york-latest.osm.pbf
from datetime import datetime, timedelta
import os
from pathlib import Path
pwd = Path(os.getcwd())
import pytz
import random
import sys
sys.path.append(str(pwd.parent))
sys.path.append(str(pwd.parent / "config"))
import warnings
warnings.simplefilter("ignore")
import networkx as nx
import numpy as np
from osmread import parse_file, Node, Way
import pandas as pd
from sklearn.neighbors import NearestNeighbors
from tqdm import tqdm
from config import Config
from util import haversine
config = Config.load()
!pip install PyYAML
```
# Create whole node dataframe.
```
file_path = "../data/newyork.osm.pbf"
file = parse_file(file_path)
nodes = []
ways = []
for data in tqdm(file):
if isinstance(data, Node):
if data.tags == {}:
nodes.append(data)
if isinstance(data, Way):
if data.nodes[0] != data.nodes[-1]:
ways.append(data)
node_df = pd.DataFrame(columns=["NodeID", "WayID"])
for way in tqdm(ways):
tmp_df = pd.DataFrame({"NodeID": way.nodes, "WayID": way.id})
node_df = pd.concat([node_df, tmp_df])
```
# Trim node dataframe with LOCAL_REGION_BOUND.
```
node_map_lat = {}
node_map_lon = {}
for node in nodes:
node_map_lat.update({node.id: node.lat})
node_map_lon.update({node.id: node.lon})
node_df["Longitude"] = node_df["NodeID"].map(node_map_lon)
node_df["Latitude"] = node_df["NodeID"].map(node_map_lat)
node_df = node_df[
(node_df["Longitude"] > config.LOCAL_REGION_BOUND.west_bound)
& (node_df["Longitude"] < config.LOCAL_REGION_BOUND.east_bound)
& (node_df["Latitude"] > config.LOCAL_REGION_BOUND.south_bound)
& (node_df["Latitude"] < config.LOCAL_REGION_BOUND.north_bound)
]
```
# create node connection information
```
connected_node_df = pd.DataFrame(columns=["NodeID", "NextNodeID", "WayID", "Longitude", "Latitude"])
connected_node_df_list = []
for way_id in tqdm(node_df["WayID"].drop_duplicates()):
tmp_df = node_df[node_df["WayID"]==way_id]
if len(tmp_df) > 1:
next_node_list = tmp_df["NodeID"].values.tolist()[1:] + [-1]
else:
next_node_list = [-1]
assert len(tmp_df) == len(next_node_list)
tmp_df["NextNodeID"] = next_node_list
tmp_df["NextNodeID"] = tmp_df["NextNodeID"].astype(int)
connected_node_df_list.append(tmp_df)
connected_node_df = pd.concat(connected_node_df_list)
```
# extract biggest graph.
```
graph = nx.Graph()
graph.add_nodes_from(connected_node_df["NodeID"].drop_duplicates().values)
for from_node, to_node in zip(connected_node_df["NodeID"].values, connected_node_df["NextNodeID"].values):
if to_node == -1:
continue
else:
graph.add_edge(from_node, to_node)
max_nodes = max(nx.connected_components(graph), key=len)
filtered_node_df = pd.DataFrame(columns=["NodeID", "NextNodeID", "WayID"])
for way_id in tqdm(connected_node_df["WayID"].drop_duplicates()):
node_list = connected_node_df[connected_node_df["WayID"]==way_id]["NodeID"].values
filtered_node_list = []
for node_id in node_list:
if node_id in max_nodes:
filtered_node_list.append(node_id)
if len(filtered_node_list) <= 1:
continue
next_node_list = filtered_node_list[1:] + [-1]
assert len(filtered_node_list) == len(next_node_list)
tmp_df = pd.DataFrame({"NodeID": filtered_node_list, "NextNodeID": next_node_list, "WayID": way_id})
tmp_df["NextNodeID"] = tmp_df["NextNodeID"].astype(int)
filtered_node_df = pd.concat([filtered_node_df, tmp_df])
filtered_node_df["Longitude"] = filtered_node_df["NodeID"].map(node_map_lon)
filtered_node_df["Latitude"] = filtered_node_df["NodeID"].map(node_map_lat)
node_info_df = filtered_node_df[["NodeID", "Longitude", "Latitude"]].drop_duplicates()
node_info = {
    node_id: {
        "node_index": node_index,
        "longitude": longitude,
        "latitude": latitude,
    } for node_index, (node_id, longitude, latitude) in enumerate(
        zip(node_info_df["NodeID"].values, node_info_df["Longitude"].values, node_info_df["Latitude"].values))
}
```
# check map.
```
import matplotlib.pyplot as plt
plt.figure(figsize=[15,15])
plt.scatter(node_info_df["Longitude"], node_info_df["Latitude"], s=8)
for from_node_id, to_node_id in zip(filtered_node_df["NodeID"].values, filtered_node_df["NextNodeID"].values):
if to_node_id == -1:
continue
from_lng = node_info[from_node_id]["longitude"]
from_lat = node_info[from_node_id]["latitude"]
to_lng = node_info[to_node_id]["longitude"]
to_lat = node_info[to_node_id]["latitude"]
plt.plot([from_lng, to_lng], [from_lat, to_lat], color="gray")
plt.title("New York Node Connection (2022-02-24)", fontsize=20)
plt.xlabel("longitude", fontsize=15)
plt.ylabel("latitude", fontsize=15)
plt.savefig(f"../data/{config.DATA_SIZE}/map_{config.LOCAL_REGION_BOUND}.png")
```
# create cost csv.
```
num_nodes = len(node_info_df)
adj_matrix = np.full([num_nodes, num_nodes], np.inf)  # no edge yet: infinite cost
np.fill_diagonal(adj_matrix, 0)
for from_node_id, to_node_id in zip(filtered_node_df["NodeID"].values, filtered_node_df["NextNodeID"].values):
if to_node_id == -1:
continue
from_node_idx = node_info[from_node_id]["node_index"]
to_node_idx = node_info[to_node_id]["node_index"]
cost = haversine(
node_info[from_node_id]["longitude"],
node_info[from_node_id]["latitude"],
node_info[to_node_id]["longitude"],
node_info[to_node_id]["latitude"],
) / 15 * 60  # minutes to arrival; cars drive at 15 kilometers per hour
adj_matrix[from_node_idx][to_node_idx] = cost
adj_matrix[to_node_idx][from_node_idx] = cost
for k in tqdm(range(num_nodes)):
for i in range(num_nodes):
for j in range(num_nodes):
if adj_matrix[i][k]!=np.inf and adj_matrix[k][j]!=np.inf:
adj_matrix[i][j] = min(adj_matrix[i][j], adj_matrix[i][k] + adj_matrix[k][j])
adj_matrix_df = pd.DataFrame(adj_matrix)
adj_matrix_df.to_csv(f"../data/{config.DATA_SIZE}/AccurateMap.csv", index=False, header=False)
```
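The triple loop above is the Floyd-Warshall all-pairs shortest-path recurrence. On a graph of this size the two inner Python loops are the bottleneck; they can be replaced by one NumPy broadcast per intermediate node `k`. A sketch on a hypothetical 4-node toy graph (not the New York network):

```python
# Vectorized Floyd-Warshall: same recurrence as the triple loop above,
# dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j]), with the i, j
# loops collapsed into a broadcasted elementwise minimum per k.
import numpy as np

inf = np.inf
dist = np.array([
    [0.0, 4.0, inf, 1.0],
    [4.0, 0.0, 2.0, inf],
    [inf, 2.0, 0.0, 5.0],
    [1.0, inf, 5.0, 0.0],
])

for k in range(dist.shape[0]):
    # column k (costs into k) + row k (costs out of k), broadcast to n x n
    dist = np.minimum(dist, dist[:, k][:, None] + dist[k, :][None, :])

print(dist)
```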
# create node connection csv.
```
filtered_node_df["NextNodeLongitude"] = filtered_node_df["NextNodeID"].apply(lambda x: node_info[x]["longitude"] if x != -1 else -9999)
filtered_node_df["NextNodeLatitude"] = filtered_node_df["NextNodeID"].apply(lambda x: node_info[x]["latitude"] if x != -1 else -9999)
filtered_node_df.to_csv(f"../data/{config.DATA_SIZE}/NodeConnection.csv", index=False)
```
# create node csv.
```
node_data_df = filtered_node_df.reset_index(drop=True)[["NodeID", "Longitude", "Latitude"]]
node_data_df.drop_duplicates(inplace=True)
node_data_df[["RoadName", "Gid", "Distance", "WayID"]] = np.nan
node_data_df["NodeIndex"] = node_data_df["NodeID"].apply(lambda x: node_info[x]["node_index"])
node_data_df[["NodeIndex", "NodeID", "WayID", "Longitude", "Latitude", "RoadName", "Gid", "Distance"]].to_csv("../data/Node.csv", index=False)
```
# create driver.
```
driver_start_points = random.choices(node_data_df["NodeID"].values, k=15000)
df = pd.DataFrame({"DriverID": range(0,15000), "NodeS": driver_start_points})
df.to_csv(f"../data/{config.DATA_SIZE}/Drivers0601.csv", index=False)
```
# create order.
```
order_df = pd.read_csv(f"../data/yellow_tripdata_2016-06.csv")
USE_COLUMNS = [
"tpep_pickup_datetime",
"tpep_dropoff_datetime",
"pickup_longitude",
"pickup_latitude",
"dropoff_longitude",
"dropoff_latitude",
]
RENAME_DICT = {
"tpep_pickup_datetime": "Start_time",
"tpep_dropoff_datetime": "End_time",
"pickup_longitude": "PointS_Longitude",
"pickup_latitude": "PointS_Latitude",
"dropoff_longitude": "PointE_Longitude",
"dropoff_latitude": "PointE_Latitude",
}
rename_order_df = order_df[USE_COLUMNS].rename(columns=RENAME_DICT).sort_values("Start_time").reset_index(drop=True)
# Remove records outside the New York area.
rename_order_without_outlier_df = rename_order_df[
(rename_order_df["PointS_Longitude"] > config.LOCAL_REGION_BOUND.west_bound)
& (rename_order_df["PointS_Longitude"] < config.LOCAL_REGION_BOUND.east_bound)
& (rename_order_df["PointE_Longitude"] > config.LOCAL_REGION_BOUND.west_bound)
& (rename_order_df["PointE_Longitude"] < config.LOCAL_REGION_BOUND.east_bound)
& (rename_order_df["PointS_Latitude"] > config.LOCAL_REGION_BOUND.south_bound)
& (rename_order_df["PointS_Latitude"] < config.LOCAL_REGION_BOUND.north_bound)
& (rename_order_df["PointE_Latitude"] > config.LOCAL_REGION_BOUND.south_bound)
& (rename_order_df["PointE_Latitude"] < config.LOCAL_REGION_BOUND.north_bound)
]
nn = NearestNeighbors(algorithm='ball_tree')
nn.fit(node_data_df[["Longitude", "Latitude"]].values)
node_id_list = node_data_df["NodeID"].values.tolist()
_, start_indices = nn.kneighbors(
rename_order_without_outlier_df[["PointS_Longitude", "PointS_Latitude"]], n_neighbors=1)
rename_order_without_outlier_df["NodeS"] = [node_id_list[i[0]] for i in start_indices]
_, end_indices = nn.kneighbors(
rename_order_without_outlier_df[["PointE_Longitude", "PointE_Latitude"]], n_neighbors=1
)
rename_order_without_outlier_df["NodeE"] = [node_id_list[i[0]] for i in end_indices]
def newyork_datetime_to_utc(datetime_str: str) -> int:
eastern = pytz.timezone('US/Eastern')
date = datetime.strptime(datetime_str, "%Y-%m-%d %H:%M:%S")
date = eastern.localize(date)  # attach US/Eastern; astimezone() on a naive datetime would assume the machine's local zone
return int(date.timestamp())
rename_order_without_outlier_df["Start_datetime"] = rename_order_without_outlier_df["Start_time"].apply(
lambda x: datetime.strptime(x, "%Y-%m-%d %H:%M:%S")
)
rename_order_without_outlier_df["ID"] = range(len(rename_order_without_outlier_df))
start_date = datetime(year=2016, month=6, day=1)
directory = "train"
while True:
next_date = start_date + timedelta(days=1)
tmp_df = rename_order_without_outlier_df[
(rename_order_without_outlier_df["Start_datetime"]<next_date)
& (rename_order_without_outlier_df["Start_datetime"]>start_date)
]
tmp_df["Start_time"] = tmp_df["Start_time"].apply(newyork_datetime_to_utc)
tmp_df["End_time"] = tmp_df["End_time"].apply(newyork_datetime_to_utc)
if start_date.day >= 24:
directory = "test"
tmp_df[["ID"] + list(RENAME_DICT.values()) + ["NodeS", "NodeE"]].to_csv(
f"../data/{config.DATA_SIZE}/Order/modified/{directory}/order_2016{str(start_date.month).zfill(2)}{str(start_date.day).zfill(2)}.csv",
index=False
)
start_date = next_date
if start_date.month != 6:
break
```
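One subtlety in the timestamp conversion above: with pytz, a zone is attached to a naive datetime via `localize`; calling `astimezone` on a naive datetime instead assumes the machine's local zone, and its return value must be kept since datetimes are immutable. A minimal check, using a date from the dataset's range:

```python
# localize() tags a naive New York wall-clock time with its zone, after
# which conversion to UTC (and to epoch seconds) is unambiguous.
from datetime import datetime
import pytz

eastern = pytz.timezone("US/Eastern")
naive = datetime(2016, 6, 1, 12, 0, 0)      # noon, New York wall clock
aware = eastern.localize(naive)             # June -> EDT, i.e. UTC-4
utc = aware.astimezone(pytz.utc)

print(utc)  # noon EDT is 16:00 UTC
```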
| github_jupyter |
```
from engine.api.cartesian import *
```
# Understanding how agents learn through communication
Agents can use communication to achieve two kinds of goals. On one hand, the agent might communicate in order to improve itself. On the other, multiple agents might work together to solve some problem that no agent could do alone. Both of these ideas play into the general notion that agents can learn how to attain some goal or minimize a cost.
Presumably, interaction between agents could cause one to change its goals or redefine a cost function. Agents might also share knowledge or awareness of the environment they exist in with each other.
To study the kinds of interactions and the kinds of learning that might happen between multiple agents, I have designed a virtual world that multiple agents can interact in. This notebook explores some ideas for the world, and outlines possible games or goals we could design for.
### Key terms:
- Environment - A virtual world that agents can interact with. It may provide sensory inputs, like a sense of position in space or a way to send/receive messages.
- Location - A collection of spatial integer coordinates, where each coordinate references a specific position in space. This may be a relative or absolute position in space.
- Agent - A piece of code that interacts with the environment to perform actions or receive sensory input.
- For slightly more detail, see engine/base_classes.py.
## Virtual world:
The `CartesianEnvironment` is a traversable n-dimensional Cartesian coordinate space of pre-defined shape that defines the ways agents can interact within this space. Agents use the environment to send each other messages, check the state of the grid, and move to new locations.
Some facts about the traversable space for a CartesianEnvironment:
- A coordinate in n-dimensional space consists of n integers, where each integer corresponds to a discrete location. 2D space is (x, y). A 3D coordinate is (x, y, z).
- There are no walls. For instance, in a 2D space, all 4 corners are connected.
- Coordinates are not used directly or even explicitly defined, but Locations are! One or more coordinates may make up a Location.
A Location allows us to define collections of coordinates in coordinate space. For instance, a location in 2D space might consist of two coordinates: [(0, 0), (0, 1)].
- `RelLoc` - A relative location denotes an offset from some undefined reference point. For instance, (-1, 0) might mean one coordinate above.
- `AbsLoc` - An absolute location is relative to some assumed reference point. Therefore, (0, 0) could be the origin of a Cartesian space.
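The arithmetic behind the two kinds of location can be sketched outside the engine: apply a relative offset to an absolute coordinate and wrap around the edges, since opposite sides of the space are connected. The names below (`grid_shape`, `abs_loc`, `rel_off`) are illustrative and not part of the engine's API.

```python
# Applying a relative offset to an absolute coordinate on a wrapping grid:
# there are no walls, so moving past an edge re-enters from the opposite side.
import numpy as np

grid_shape = np.array([3, 3])   # a 3x3 2D space
abs_loc = np.array([2, 0])      # an absolute coordinate
rel_off = np.array([1, -1])     # a relative offset from that coordinate

new_loc = (abs_loc + rel_off) % grid_shape  # modulo implements the wrap-around
print(new_loc)
```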
## Agents
An agent interacts with the environment to perform actions or receive
sensory input. It is identifiable by an integer AgentId.
Agents:
- Can exist at a Location, so they may have many coordinates and in fact can be distributed beings.
- Decide how to query the environment to get information from it.
- To an Agent, the Environment is just an API.
- An agent might ask only for relative positioning, or it might be aware of absolute position.
- It might not even know where its own body parts are located, or might have limited knowledge of where other agents are.
- Ask the environment to process requests to move in the space and send messages to other agents.
## Games (not implemented):
1. A Traveling Salesman game: Each coordinate in the Environment must be visited at least once, and all squares should be visited in the minimum number of moves and minimum amount of communication
1. Sequence Prediction: Agents can learn to predict the next movements of other agents in the environment
1. Self Awareness: A distributed agent (like a column of 3 coordinates) must learn how to move without separating itself. Once it can do this, it should figure out how to efficiently clean the space.
- Presumably, we could get agents to learn to work together to visit all squares. for instance a Toroid and a Cylinder in 3D space might learn to travel together.
## Tasks:
- Build out Cartesian Environment, Location
- Design some simple Agents that interact with the space
- Implement a way to visualize this space.
# Simple Example
```
# Create Cartesian Environment of 2 dimensions, and 9 coordinates
env2 = CartesianEnvironment((3, 3))
# Create an agent that moves down 1 coordinate at a time.
agent = DeterministicAgent([(1,0)])
# Place this agent at coordinate (1,1)
env2.add_agent(agent.ID, AbsLoc((1,1)))
print('created agent with ID: %s' % agent.ID)
# initially, the agent has visited no coordinates
env2.visit_counts()
# Show the location of this agent on the grid
list(env2.agents_at_location())
# Have agent interact with its environment twice
for _ in range(2):
# It will ask to move down 1 coordinate
agent.next_action(env2)
# Let's see what happened
print('agent locations', list(env2.agent_location(agent.ID)))
print('visit counts', '\n', env2.visit_counts())
```
### API
```
print('\n'.join(x for x in dir(env2) if not x.startswith('_')))
import numpy as np
env22 = CartesianEnvironment((10,10))
ad22 = DeterministicAgent(np.random.randint(-1, 2, (5, 3, 2)))
startloc = AbsLoc(np.random.randint(0, 3, (3, 2)), 2)
env22.add_agent(ad22.ID, startloc)
list(env22.agents_at_location())
ad22.next_action(env22)
list(env22.agents_at_location())
env22.visit_counts()
ad22.next_action(env22)
list(env22.agents_at_location())
env22.visit_counts()
```
### 3D grid
```
env3 = CartesianEnvironment((2,2,2))
a3 = DeterministicAgent([
RelLoc((0,0,1)),
RelLoc((0,0,1)),
RelLoc((0,1,0))
])
env3.add_agent(a3.ID, AbsLoc((0,0,0)))
print(env3.visit_counts())
print('---')
for _ in range(5):
a3.next_action(env3)
print(env3.visit_counts().reshape(env3.grid_shape))
print('---')
print(env3.visit_counts())
```
### Display
using OpenGL (via Pyglet)
```
d2 = cartesian_display(env2, scale=(50,50))
d2.schedule(agent.next_action, 1, env=env2)
d2.run() # note: the display renders in a window, not in the notebook
d22 = cartesian_display(env22, scale=(50,50))
d22.schedule(ad22.next_action, 1, env=env22)
d22.run() # note: the display renders in a window, not in the notebook
env = CartesianEnvironment((200, 200))
a1 = DeterministicAgent(np.random.randint(-1, 2, (1000, 100, 2)))
env.add_agent(a1.ID, n_coord=100)
a2 = DeterministicAgent(np.random.randint(-1, 2, (10000, 1, 2)))
env.add_agent(a2.ID, np.random.randint(0, 200, (1, 2)))
d = cartesian_display(env, refresh_interval=1/10, scale=(3, 3),
visit_counts=True, agents=True)
def run_agents(a1, a2, env):
a1.next_action(env)
a2.next_action(env)
d.schedule(run_agents, None, a1, a2, env)
d.run() # note: the display renders in a window, not in the notebook
env = CartesianEnvironment((500, 500))
n_coord = 10
a1 = DeterministicAgent(np.random.randint(-1, 2, (70, n_coord, 2)))
env.add_agent(a1.ID, n_coord=n_coord)
n_coord = 20
a2 = DeterministicAgent(np.random.randint(-1, 2, (80, n_coord, 2)))
env.add_agent(a2.ID, n_coord=n_coord)
d = cartesian_display(env, refresh_interval=1/10, scale=(1, 1),
visit_counts=True, agents=True)
def run_agents(a1, a2, env):
a1.next_action(env)
a2.next_action(env)
d.schedule(run_agents, None, a1, a2, env)
d.run() # note: the display renders in a window, not in the notebook
#d.window.close()
#d1.unschedule()
d.unschedule(all_displays=True)
```
| github_jupyter |
```
import catboost
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import time
import xgboost as xgb
from catboost import Pool, CatBoostRegressor, CatBoostClassifier, cv
from itertools import product, chain
from matplotlib.backends.backend_pdf import PdfPages
from matplotlib.pylab import rcParams
from mlxtend.preprocessing import DenseTransformer
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import cross_val_score, train_test_split, GridSearchCV, KFold
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import Imputer
from paramsearch import paramsearch
import util
from importlib import reload  # reload() is a builtin only in Python 2
reload(util)
from util import plot_top_features, crossvaltest, catboost_param_tune
# matplotlib settings
matplotlib.style.use('ggplot')
rcParams['figure.figsize'] = 12, 8
rcParams['font.size'] = 12
rcParams['axes.facecolor'] = 'white'
# ! pip install jupyternotify
%load_ext jupyternotify
n_time = 3
n_folds = 5
```
## Loading the data
```
data = pd.read_csv('../data/housing_russia/train.csv')
data.head(2)
```
### Data cleaning and new feature creation
```
def split_timestamp(df):
df.timestamp = pd.to_datetime(df.timestamp)
df['day'] = df.timestamp.dt.day
df['month'] = df.timestamp.dt.month
df['year'] = df.timestamp.dt.year
df['quarter'] = np.ceil(df['month'] / 3).astype(int)
split_timestamp(data)
data = data[data.full_sq < 5000]
data = data[data.life_sq < 5000]
data = data[(data.build_year < 2017) & (data.build_year > 1700)]
df_train, df_test, y_train, y_test = train_test_split(data, data.price_doc, test_size = 0.2,
shuffle = True)
df_train.is_copy = False
df_test.is_copy = False
# run once only
y_train, y_test = np.log1p(y_train), np.log1p(y_test)
df_train = df_train.drop(['price_doc', 'timestamp', 'id'], axis = 1)
df_test = df_test.drop(['timestamp', 'id'], axis = 1)
cat_features = list(df_train.select_dtypes(include=['object']).columns.values)
cat_features.extend(['floor', 'max_floor', 'num_room'])
df_train[cat_features] = df_train[cat_features].astype(str)
df_test[cat_features] = df_test[cat_features].astype(str)
cat_indices = [df_train.columns.get_loc(name) for name in cat_features]
```
## Exploratory Data Analysis
### Data statistics
```
print ("number of training instances = {0}".format(df_train.shape[0]))
print ("number of features = {0}".format(df_train.shape[1]))
print ("number of categorical features = {0}".format(len(cat_indices)))
```
#### Price distribution (log scale)
```
print("mean = {0}, std_dev = {1}".format(np.mean(y_train), np.std(y_train)))
sns.distplot(y_train, norm_hist=True)
plt.xlabel('Price (log scale)')
plt.ylabel('Percentage')
plt.title('Price distribution (Russia)')
plt.show()
```
# Catboost: training and evaluating
RMSLE is chosen as our primary evaluation metric.
The easiest way to compute it is to apply the RMSE metric, already implemented in most libraries, to $\ln(1 + y)$, where $y$ is the actual price.
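A small numerical check of that statement, with placeholder prices:

```python
# RMSLE on raw prices equals plain RMSE on log1p-transformed prices.
import numpy as np

y_true = np.array([1.0e6, 2.5e6, 4.0e6])   # actual prices (placeholders)
y_pred = np.array([1.2e6, 2.0e6, 4.5e6])   # model predictions (placeholders)

# RMSLE computed directly on the raw targets
rmsle = np.sqrt(np.mean((np.log1p(y_pred) - np.log1p(y_true)) ** 2))

# The same number, obtained as RMSE on the transformed targets
z_true, z_pred = np.log1p(y_true), np.log1p(y_pred)
rmse_log = np.sqrt(np.mean((z_pred - z_true) ** 2))

print(rmsle)
```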
```
pool_train = Pool(df_train, label = y_train, cat_features=cat_indices)
pool_test = Pool(df_test, cat_features=cat_indices)
model = CatBoostRegressor(loss_function = 'RMSE', custom_metric = 'RMSE', calc_feature_importance = True)
%%notify
model.fit(pool_train, logging_level='Silent')
top_features = plot_top_features(model, pool_train.get_feature_names(), 10)
```
### Evaluation
```
cv_params = model.get_params()
cv_params['logging_level'] = 'Silent'
del cv_params['calc_feature_importance']
%%notify
av_time = 0
for i in range(0, n_time):
t = time.time()
cv_data = cv(cv_params, pool_train, fold_count=n_folds, shuffle = True)
av_time += time.time() - t
print('average cv time (CatBoost) = {0:.2f}'.format(av_time / n_time))
print('RMSLE (CatBoost) = {0}'.format(cv_data['RMSE_test_avg'][-1]))
cat_grid_params = {
'depth': [1, 2, 3],
'learning_rate': [0.1, 0.05, 0.01],
'iterations' : [100, 500, 1000]
}
best_params = catboost_param_tune(cat_grid_params, df_train, y_train, cat_indices, 5)
print(best_params)
```
### Tuned version (with the best parameters found with grid search)
```
%%notify
model = CatBoostRegressor(**best_params)
cv_params = model.get_params()
del cv_params['calc_feature_importance']
av_time = 0
for i in range(0, n_time):
t = time.time()
cv_data = crossvaltest(cv_params, df_train, y_train, cat_indices, n_folds)
av_time += time.time() - t
print('average cv time (CatBoost) = {0:.2f}'.format(av_time / n_time))
print('RMSLE (CatBoost) = {0}'.format(-cv_data))
```
### Feature importance
```
model.fit(pool_train, logging_level='Silent')
top_features = plot_top_features(model, pool_train, 20)
```
# XGBoost
```
df_train_dummy = pd.get_dummies(df_train, columns = cat_features)
xgboost_clf = Pipeline([('to_dense', DenseTransformer()),
('clf', xgb.XGBRegressor(eval_metric = 'rmse'))])
_ = xgboost_clf.fit(df_train_dummy, y_train)
av_time = 0
for i in range(0, n_time):
t = time.time()
xgboost_cv = cross_val_score(xgboost_clf, df_train_dummy, y_train,
scoring='neg_mean_squared_error', cv=n_folds, n_jobs = -1, verbose=False)
av_time += time.time() - t
print('average cv time (XGBoost) = {0:.2f} sec'.format(av_time / n_time))
print('RMSLE (XGBoost) = {0}'.format(np.sqrt(-xgboost_cv.mean())))
_ = plot_top_features(xgboost_clf.named_steps['clf'], df_train_dummy.columns.values, 10)
grid_params = {
'clf__max_depth': [1, 2, 3],
'clf__learning_rate': [0.1, 0.05, 0.01],
'clf__n_estimators' : [100, 500, 1000]
}
grid_search_xgb = GridSearchCV(xgboost_clf, grid_params, scoring='neg_mean_squared_error',
cv=n_folds, verbose=False)
_ = grid_search_xgb.fit(df_train_dummy, y_train)
best_parameters_xgb = grid_search_xgb.best_params_  # grid_scores_ was removed in scikit-learn 0.20
best_parameters_xgb
```
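Since the target was log-transformed before training, the square root of the (negated) mean squared error from `cross_val_score` corresponds to the RMSLE on the original scale. For reference, the metric itself on raw (untransformed) values can be written in a few lines of plain Python (`rmsle` here is an illustrative helper, not part of the notebook):

```python
from math import log1p, sqrt

def rmsle(y_true, y_pred):
    """Root mean squared logarithmic error on raw (untransformed) values."""
    n = len(y_true)
    return sqrt(sum((log1p(t) - log1p(p)) ** 2
                    for t, p in zip(y_true, y_pred)) / n)

print(rmsle([1.0, 9.0], [1.0, 9.0]))  # 0.0 for a perfect prediction
```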
### Tuned version (with the best parameters found with grid search)
```
xgboost_clf = Pipeline([('to_dense', DenseTransformer()),
('clf', xgb.XGBRegressor(eval_metric = 'rmse',
learning_rate = best_parameters_xgb['clf__learning_rate'],
n_estimators = best_parameters_xgb['clf__n_estimators'],
max_depth = best_parameters_xgb['clf__max_depth']))])
av_time = 0
for i in range(0, n_time):
t = time.time()
xgboost_cv = cross_val_score(xgboost_clf, df_train_dummy, y_train,
scoring='neg_mean_squared_error', cv=n_folds, n_jobs = -1, verbose=False)
av_time += time.time() - t
print('average cv time (XGBoost) = {0:.2f} sec'.format(av_time / n_time))
print('RMSLE (XGBoost) = {0}'.format(np.sqrt(-xgboost_cv.mean())))
```
# GradientBoostingRegressor
```
sk_boost_clf = Pipeline([('replace_nan', Imputer()),
('to_dense', DenseTransformer()),
('clf', GradientBoostingRegressor())])
_ = sk_boost_clf.fit(df_train_dummy, y_train)
av_time = 0
for i in range(0, n_time):
t = time.time()
sklearn_cv = cross_val_score(sk_boost_clf, df_train_dummy, y_train,
scoring='neg_mean_squared_error', cv=n_folds, n_jobs = -1, verbose=False)
av_time += time.time() - t
print('average cv time (GradientBoostingRegressor) = {0:.2f} sec'.format(av_time / n_time))
print('RMSLE (GradientBoostingRegressor) = {0}'.format(np.sqrt(-sklearn_cv.mean())))
_ = plot_top_features(sk_boost_clf.named_steps['clf'], df_train_dummy.columns.values, 10)
grid_search_sk = GridSearchCV(sk_boost_clf, grid_params, scoring='neg_mean_squared_error',
cv=n_folds, verbose=True)
_ = grid_search_sk.fit(df_train_dummy, y_train)
best_parameters_sk = grid_search_sk.best_params_  # grid_scores_ was removed in scikit-learn 0.20
best_parameters_sk
```
### Tuned version (with the best parameters found with grid search)
```
sk_boost_clf = Pipeline([('replace_nan', Imputer()),
('to_dense', DenseTransformer()),
('clf', GradientBoostingRegressor(learning_rate = best_parameters_sk['clf__learning_rate'],
n_estimators = best_parameters_sk['clf__n_estimators'],
max_depth = best_parameters_sk['clf__max_depth']))])
av_time = 0
for i in range(0, n_time):
t = time.time()
sklearn_cv = cross_val_score(sk_boost_clf, df_train_dummy, y_train,
scoring='neg_mean_squared_error', cv=n_folds, n_jobs = -1, verbose=False)
av_time += time.time() - t
print('average cv time (GradientBoostingRegressor) = {0:.2f} sec'.format(av_time / n_time))
print('RMSLE (GradientBoostingRegressor) = {0}'.format(np.sqrt(-sklearn_cv.mean())))
```
<a href="https://colab.research.google.com/github/apergo-ai/CRASS-data-set/blob/main/OpenAIinterface_master.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import numpy as np
import io
import pandas as pd
import random
import re
import sys
!pip install openai
import openai
class color:
PURPLE = '\033[95m'
CYAN = '\033[96m'
DARKCYAN = '\033[36m'
BLUE = '\033[94m'
GREEN = '\033[92m'
YELLOW = '\033[93m'
RED = '\033[91m'
BOLD = '\033[1m'
UNDERLINE = '\033[4m'
END = '\033[0m'
## Load the CRASS dataset
url = 'https://raw.githubusercontent.com/apergo-ai/CRASS-data-set/main/CRASS_FTM_main_data_set.csv'
gpt3array = pd.read_csv(url, encoding='latin1',sep=";")
gpt3arraylength = len(gpt3array)
gpt3arraytransposed = gpt3array.T
## Access API using your credentials
openai.api_key = ""  # Enter your API key
engineselection = "ada" # pick one of the available OpenAI engines by editing this line (possibilities are: davinci, curie, babbage, ada)
## This function is used to test the question answering capabilities of GPT-3 in CRASS OSM (Open Scoring Mode) in zero-shot fashion.
def gpt3createanswerfunction():
gpt3arraycounter = 0
while gpt3arraycounter < 1: #choose how many answers you would like to fetch
inputtext = str(gpt3arraytransposed[gpt3arraycounter].PCTID) + ". " + gpt3arraytransposed[gpt3arraycounter].Premise + " " + gpt3arraytransposed[gpt3arraycounter].QCC
rawresponse=openai.Completion.create(
engine=engineselection,
prompt=inputtext,
max_tokens=15)
#print(rawresponse)
responsetext = rawresponse.choices[0].text.strip()
gpt3arraytransposed[gpt3arraycounter].gpt3response = responsetext
#print(gpt3arraytransposed[gpt3arraycounter])
gpt3arraycounter += 1
return responsetext
## This function can be used to generate some examples in the style of the CRASS dataset.
def gpt3createquestionfunction(gpt3array, gpt3arraytransposed, gpt3arraylength):
newarraylength = gpt3arraylength
numberofnewexamples = 0
inputexamplecounter = 1
inputtext = ""
while numberofnewexamples < 1:
while inputexamplecounter < 10:
randomexample = random.randrange(100)
inputtext = inputtext + "\n Premise " + str(inputexamplecounter) + ": " + gpt3arraytransposed[randomexample].Premise + "\n Question " + str(inputexamplecounter) + ": " + gpt3arraytransposed[randomexample].QCC + "\n Answer " + str(inputexamplecounter) + ": " + gpt3arraytransposed[randomexample].CorrectAnswer
inputexamplecounter += 1
rawresponse=openai.Completion.create(
engine=engineselection,
prompt=inputtext,
temperature=0.9,
stop="Premise 11",
max_tokens=45)
#print(rawresponse)
responsetext = rawresponse.choices[0].text.strip()
if (responsetext.find('Premise 10') != -1):
responsetext = re.split('Premise 10:|Question 10:|Answer 10:', responsetext) #Sometimes it can happen that GPT-3 generates unexpected text without the necessary markers.
newpremise = responsetext[1].strip()
newqcc = responsetext[2].strip()
newanswer = responsetext[3].strip()
else:
print(color.BOLD + color.RED + 'Unexpected GPT-3 output, please start function again.' + color.END)
sys.exit(1)
newarraylength += 1
gpt3array.loc[gpt3arraylength] = [newarraylength,'1',newpremise,newqcc,newanswer,'','','','']
gpt3arraylength +=1
numberofnewexamples += 1
return gpt3array
# calls the 'create-answer' function
gpt3newanswers = gpt3createanswerfunction()
# calls the 'create-question' function
gpt3newitems = gpt3createquestionfunction(gpt3array, gpt3arraytransposed, gpt3arraylength)
#Save Data to a target csv
df = pd.DataFrame(gpt3newitems)
df.to_csv('output.csv',sep=';', encoding='utf-8',index=False)
```
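The question generator above relies on `re.split` with an alternation pattern to carve the completion into premise, question and answer. A standalone illustration of that parsing step (the response text below is made up):

```python
import re

# made-up completion text in the format the prompt asks GPT-3 to follow
response = ("Premise 10: Max fell ill. "
            "Question 10: What would have happened if Max had rested? "
            "Answer 10: He would have stayed healthy.")

# split on any of the three markers; parts[0] is what precedes the first one
parts = re.split('Premise 10:|Question 10:|Answer 10:', response)
premise, question, answer = (p.strip() for p in parts[1:])
print(premise)   # Max fell ill.
print(question)  # What would have happened if Max had rested?
print(answer)    # He would have stayed healthy.
```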
# "Searching the Latest Songs on the Melon Site"
> "Problem solutions."
- toc: true
- branch: master
- badges: true
- comments: true
- author: yurim j
- categories: [Crawling, Python]
#### 1) From the Melon site, print the song title, artist, album, and like count for the top 10 latest domestic songs
url = 'https://www.melon.com/new/index.htm'
```
import requests
from bs4 import BeautifulSoup
url1='https://www.melon.com/new/index.htm'
headers = {'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.69 Safari/537.36 Edg/95.0.1020.53'}
response = requests.get(url1,headers=headers)
response.raise_for_status()
soup=BeautifulSoup(response.text,'lxml')
Songs = soup.find_all('div',attrs={'class':'ellipsis rank01'})
songlist=[]
Singers = soup.find_all('div',attrs={'class':'ellipsis rank02'})
singerlist=[]
Albums = soup.find_all('div',attrs={'class':'ellipsis rank03'})
albumlist=[]
Likes = soup.find_all('span',attrs={'class':'cnt'})
likelist=[]
for song in Songs[0:10]:
songlist.append(song.get_text())
for singer in Singers[0:10]:
singerlist.append(singer.get_text())
for album in Albums[0:10]:
albumlist.append(album.get_text())
for like in Likes[0:10]:
likelist.append(like.get_text())
print('Song titles:', songlist)
print('Artists:', singerlist)
print('Albums:', albumlist)
print('Like counts:', likelist)
```
### Example answer
```
response = requests.get(url1, headers=headers)
response.raise_for_status()
soup = BeautifulSoup(response.text, 'html.parser')
all_song=soup.find_all('tr')
for i,song in enumerate(all_song):
#print(song)
title = song.find_all('div', attrs={'class' : 'rank01'})
if len(title)==0: continue
if i > 10 : break
title = title[0].a.get_text()
print(i, title, end='')
singer = song.find_all('div',attrs={'class':'rank02'})
singer = singer[0].a.get_text()
print(singer, end='')
album = song.find_all('div',attrs={'class':'rank03'})
album = album[0].a.get_text()
print(', Album:', album)
```
### Answer 2
```
import pandas as pd
import requests
from bs4 import BeautifulSoup
url ='https://www.melon.com/new/index.htm'
table = pd.read_html(requests.get(url, headers=headers).text)
table[0]
```
#### 2) Search for BTS images on Naver and save 5 of the resulting images
url = 'https://search.naver.com/search.naver?where=image&sm=tab_jum&query=bts'
```
import requests
from bs4 import BeautifulSoup
import os
import time
from selenium import webdriver
# url2='https://search.naver.com/search.naver?sm=tab_hty.top&where=image&query=%EA%B5%AC%EC%8A%AC%EC%95%84%EC%9D%B4%EC%8A%A4+%EC%BF%A0%ED%82%A4&oquery=%EC%BF%A0%ED%82%A4%EB%9F%B0&tqi=hihZIwprvOsssZRca5ossssstXV-367695'
# headers = {'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.69 Safari/537.36 Edg/95.0.1020.53'}
# response = requests.get(url2,headers=headers)
# response.raise_for_status()
# soup = BeautifulSoup(response.text, 'lxml')
# cookie=soup.find_all('div',attrs={'class':'thumb'})
# print(cookie)
# for i, movie in enumerate(Kuromi):
# title = movie['alt']
# img_url = movie['src']
# #print((i+1),":",img_url)
# img_res=requests.get(img_url)
# img_res.raise_for_status()
# with open(img_dir+'m{}_{}'.format((i+1),title)+'.jpg','wb') as f:
# f.write(img_res.content)
# time.sleep(1)
# print('Images saved')
url='https://search.naver.com/search.naver?where=image&sm=tab_jum&query=%EC%BF%A0%EB%A1%9C%EB%AF%B8'
headers = {'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.69 Safari/537.36 Edg/95.0.1020.53'}
response = requests.get(url,headers=headers)
response.raise_for_status()
soup=BeautifulSoup(response.text, 'html.parser') # use the built-in parser
driver=webdriver.Chrome()
# use Selenium to navigate to the site, then wait 1 second
driver.get(url)
time.sleep(1)
soup = BeautifulSoup(driver.page_source)
kuromi_imgs = soup.find_all('img',attrs={'class':'_image_listImage'})
print('Number of images:',len(kuromi_imgs)) # number of images visible on the first screen
# create a folder to store the images
img_dir='./kuromi_img/'
if not os.path.exists(img_dir): # create the directory if it does not exist
os.makedirs(img_dir)
print('Folder created')
else:
print('Folder already exists')
for i,image in enumerate(kuromi_imgs):
img_url = image['src']
if(not img_url.startswith('http')): continue
img_src = requests.get(img_url,headers=headers)
img_src.raise_for_status()
with open(img_dir + 'kuromi_{}.jpg'.format(i),'wb')as f:
f.write(img_src.content)
if i > 10: break
print('Kuromi images saved')
temp1='http://www.naver.com'
temp2='http://www.daum.net'
print(temp1.startswith('http'))
print(temp2.startswith('http'))
print(temp1.endswith('com'))
```
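`find_all` with a class filter is doing ordinary HTML traversal under the hood. For intuition, here is a dependency-free sketch of the same idea using only the standard library's `html.parser`; the markup fragment is invented, not Melon's real page:

```python
from html.parser import HTMLParser

class ClassCollector(HTMLParser):
    """Collect the text of every <tag class="cls"> element, including
    text inside nested child tags (a toy version of find_all + get_text)."""
    def __init__(self, tag, cls):
        super().__init__()
        self.tag, self.cls = tag, cls
        self.depth = 0        # > 0 while inside a matching element
        self.texts = []
    def handle_starttag(self, tag, attrs):
        if self.depth:
            self.depth += 1 if tag == self.tag else 0
        elif tag == self.tag and dict(attrs).get('class') == self.cls:
            self.depth = 1
            self.texts.append('')
    def handle_endtag(self, tag):
        if self.depth and tag == self.tag:
            self.depth -= 1
    def handle_data(self, data):
        if self.depth:
            self.texts[-1] += data.strip()

html = '<div class="rank01"><a>Song A</a></div><div class="rank02"><a>Singer B</a></div>'
p = ClassCollector('div', 'rank01')
p.feed(html)
print(p.texts)  # ['Song A']
```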
# PULSIFI Task 2 Solutions
Build a model to **predict z from x and y**. Some questions to consider:
1. What kind of model is appropriate in this case considering the graphs?
2. How to do feature selection?
3. How to measure the performance of the model?
4. How to interpret the result?
## Import necessary libraries
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
```
## Get the data
```
#mount google drive
from google.colab import drive
drive.mount('/content/drive')
#locate the data
!ls "../content/drive/My Drive/ColabNotebooks/pulsifi"
```
## Exploratory Data Analysis
Let's look what does this data looks like.
```
df = pd.read_csv("../content/drive/My Drive/ColabNotebooks/pulsifi/test.csv")
df.describe()
df.head()
df.info()
df = df.drop(columns=["Unnamed: 0"])
df.head()
# create a scatter plot for the data for x vs z
ax1 = df.plot.scatter(x='x',
y='z',
c="DarkBlue")
# create a scatter plot for the data for y vs z
ax2 = df.plot.scatter(x='y',
y='z',
c="DarkBlue")
# create a scatter plot for the data for x,y vs z
ax = df.plot(kind='scatter', x='x', y='z', color='DarkBlue', label='x')
df.plot(kind='scatter', x='y', y='z',color='DarkGreen', label='y', ax=ax);
```
### Plot a regression line for the data
```
sns.lmplot(x='x',y='z',data=df)
sns.lmplot(x='y',y='z',data=df)
sns.pairplot(df[["x","y","z"]], diag_kind="kde")
```
### Check the features correlation
```
plt.figure(figsize=(7,3))
sns.heatmap(df.corr(),annot=True,cmap='cubehelix_r')
plt.show()
```
Conclusions:
1. Both x and y are positively correlated with z.
2. x has a correlation of 0.82 with y, and likewise 0.82 with z.
3. y has a correlation of 0.66 with z.
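The heatmap above uses pandas' `corr()`, which computes the Pearson correlation coefficient. As a reminder of what those 0.82 and 0.66 values mean, a minimal stdlib implementation:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(round(pearson([1, 2, 3, 4], [2, 4, 6, 8]), 6))  # 1.0
```

A value of 1 means a perfect positive linear relationship; 0 means no linear relationship at all.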
## Machine Learning
### Split the data into train and test set
```
from sklearn.model_selection import train_test_split as split
train, test = split(df, test_size = 0.1)
print (train.shape)
print (test.shape)
#get the features for training data
X_train = train[['x','y']]
#output(label) of training data
y_train = train['z']
#get the testing data features and output(label)
X_test = test[['x', 'y']]
y_test = test['z']
print(X_train.shape)
```
#### Train and Predict by using Support Vector Machine Regressor
```
# Create and train the Support Vector Machine (Regressor)
from sklearn.svm import SVR
svr = SVR(kernel='rbf', C=1e3, gamma=0.1)
svr.fit(X_train, y_train)
# Testing Model: Score returns the coefficient of determination R^2 of the prediction.
# The best possible score is 1.0
svm_confidence = svr.score(X_test, y_test)
print("svm confidence: ", svm_confidence)
y_pred = svr.predict(X_test)
df_pred = pd.DataFrame({'Actual': y_test, 'Predicted': y_pred})
df_pred
from sklearn.metrics import mean_absolute_error
print("MAE by using SVM is {}.".format(mean_absolute_error(y_test,y_pred)))
```
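The MAE reported above is simply the average of the absolute residuals; a minimal stdlib sketch of the metric used throughout this notebook:

```python
def mean_abs_error(y_true, y_pred):
    """Mean absolute error: the average of |actual - predicted|."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

print(mean_abs_error([1.0, 2.0, 3.0], [1.5, 2.0, 2.0]))  # 0.5
```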
#### Train and Predict by using Linear Regression
```
from sklearn.linear_model import LinearRegression
lr = LinearRegression()
# Train the model
lr.fit(X_train, y_train)
#To retrieve the intercept:
print(lr.intercept_)
#For retrieving the slope:
print(lr.coef_)
# Testing Model: Score returns the coefficient of determination R^2 of the prediction.
# The best possible score is 1.0
lr_confidence = lr.score(X_test, y_test)
print("lr confidence: ", lr_confidence)
y_pred = lr.predict(X_test)
df_pred = pd.DataFrame({'Actual': y_test, 'Predicted': y_pred})
df_pred
print("MAE by using linear regression is {} ".format(mean_absolute_error(y_test,y_pred)))
```
#### Train and Predict by using Random Forest Regressor
```
from sklearn.ensemble import RandomForestRegressor
rf = RandomForestRegressor(n_estimators = 1000, random_state = 42)
# Train the model
rf.fit(X_train, y_train)
# Testing Model: Score returns the coefficient of determination R^2 of the prediction.
# The best possible score is 1.0
rf_confidence = rf.score(X_test, y_test)
print("rf confidence: ", rf_confidence)
y_pred = rf.predict(X_test)
df_pred = pd.DataFrame({'Actual': y_test, 'Predicted': y_pred})
df_pred
print("MAE by using random forest regressor is {} ".format(mean_absolute_error(y_test,y_pred)))
```
#### Train Model by using XGBoost Regressor
```
from xgboost import XGBRegressor
xgb = XGBRegressor()
# Train the model
xgb.fit(X_train, y_train)
# Testing Model: Score returns the coefficient of determination R^2 of the prediction.
# The best possible score is 1.0
xgb_confidence = xgb.score(X_test, y_test)
print("xgb confidence: ", xgb_confidence)
y_pred = xgb.predict(X_test)
df_pred = pd.DataFrame({'Actual': y_test, 'Predicted': y_pred})
df_pred
print("MAE by using XGBoost regressor is {} ".format(mean_absolute_error(y_test,y_pred)))
```
#### Train and Predict by using Multi-layer Perceptron (MLP)
```
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.callbacks import ModelCheckpoint, EarlyStopping, ReduceLROnPlateau
X_train, X_val, y_train, y_val = split(X_train, y_train, test_size = 0.1)
model = Sequential()
model.add(Dense(1000, input_dim=2,activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(500, activation="relu"))
model.add(Dropout(0.5))
model.add(Dense(1, activation='linear'))
model.summary()
!pip install keras-rectified-adam
from keras_radam import RAdam
model.compile(loss='mse', optimizer=RAdam(), metrics=['mse','mae'])
checkpointer = [
ReduceLROnPlateau(monitor='val_loss',
patience=3,
verbose=1,
factor=0.5,
min_lr = 0.00001,
cooldown=0),
ModelCheckpoint(filepath='best.hdf5', verbose=1,
save_best_only=True)
]
history = model.fit(X_train, y_train, epochs=150, batch_size=128, verbose=1, validation_data=[X_val, y_val], shuffle=True, callbacks=checkpointer)
train_acc = model.evaluate(X_train, y_train, verbose=0)
val_acc = model.evaluate(X_val, y_val, verbose=0)
print(history.history.keys())
# "Loss"
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
# load the weights that yielded the best validation accuracy
model.load_weights('best.hdf5')
test_predictions =model.predict(X_test).flatten()
a = plt.axes(aspect='equal')
plt.scatter(y_test, test_predictions)
plt.xlabel('True Values')
plt.ylabel('Predictions')
lims = [-0.2, 1.2]
plt.xlim(lims)
plt.ylim(lims)
_ = plt.plot(lims, lims)
error = test_predictions - y_test
plt.hist(error, bins = 25)
plt.xlabel("Prediction Error")
_ = plt.ylabel("Count")
for true, pred in zip(y_test, test_predictions):
if pred == true:
print("{}\t -> {}".format(true, pred))
else:
print("\033[1m{}\t -> {}\033[0m".format(true, pred))
loss, mse, mae = model.evaluate(X_test, y_test, verbose=2)
print("Testing set Mean Abs Error: {:5.4f}".format(mae))
```
### Conclusion(1):
The classical machine learning models (SVM, XGBoost, linear regression and random forest) have a mean absolute error of around 0.18, while the MLP has a mean absolute error of 0.0434.
Since x has a correlation of 0.82 with both y and z, below I predict z using the x values only.
## Predict the z by using x only
#### Use x only as the features
```
#get the features for training data
X_train = train[['x']]
#output(label) of training data
y_train = train['z']
#get the testing data features and output(label)
X_test = test[['x']]
y_test = test['z']
print(X_train.shape)
```
### Using SVM regressor
```
# Create and train the Support Vector Machine (Regressor)
from sklearn.svm import SVR
svr = SVR(kernel='rbf', C=1e3, gamma=0.1)
svr.fit(X_train, y_train)
# Testing Model: Score returns the coefficient of determination R^2 of the prediction.
# The best possible score is 1.0
svm_confidence = svr.score(X_test, y_test)
print("svm confidence: ", svm_confidence)
y_pred = svr.predict(X_test)
df_pred = pd.DataFrame({'Actual': y_test, 'Predicted': y_pred})
df_pred
a = plt.axes(aspect='equal')
plt.scatter(y_test, y_pred.flatten())
plt.xlabel('True Values')
plt.ylabel('Predictions')
lims = [-0.2, 1.2]
plt.xlim(lims)
plt.ylim(lims)
_ = plt.plot(lims, lims)
print("MAE by using SVM is {}.".format(mean_absolute_error(y_test,y_pred)))
```
### Using MLP
```
X_train, X_val, y_train, y_val = split(X_train, y_train, test_size = 0.1)
model2 = Sequential()
model2.add(Dense(1000, input_dim=1,activation='relu'))
model2.add(Dropout(0.5))
model2.add(Dense(500, activation="relu"))
model2.add(Dropout(0.5))
model2.add(Dense(1, activation='linear'))
model2.summary()
from keras_radam import RAdam
model2.compile(loss='mse', optimizer=RAdam(), metrics=['mse','mae'])
checkpointer = [
ReduceLROnPlateau(monitor='val_loss',
patience=3,
verbose=1,
factor=0.5,
min_lr = 0.00001,
cooldown=0),
ModelCheckpoint(filepath='best2.hdf5', verbose=1,
save_best_only=True)
]
history = model2.fit(X_train, y_train, epochs=150, batch_size=128, verbose=1, validation_data=[X_val, y_val], shuffle=True, callbacks=checkpointer)
train_acc = model2.evaluate(X_train, y_train, verbose=0)
val_acc = model2.evaluate(X_val, y_val, verbose=0)
print(history.history.keys())
# "Loss"
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
# load the weights that yielded the best validation accuracy
model2.load_weights('best2.hdf5')
test_predictions =model2.predict(X_test).flatten()
a = plt.axes(aspect='equal')
plt.scatter(y_test, test_predictions)
plt.xlabel('True Values')
plt.ylabel('Predictions')
lims = [-0.2, 1.2]
plt.xlim(lims)
plt.ylim(lims)
_ = plt.plot(lims, lims)
error = test_predictions - y_test
plt.hist(error, bins = 25)
plt.xlabel("Prediction Error")
_ = plt.ylabel("Count")
for true, pred in zip(y_test, test_predictions):
if pred == true:
print("{}\t -> {}".format(true, pred))
else:
print("\033[1m{}\t -> {}\033[0m".format(true, pred))
loss, mse, mae = model2.evaluate(X_test, y_test, verbose=2)
print("Testing set Mean Abs Error: {:5.4f}".format(mae))
```
### Conclusion (2)
The results are almost the same as when predicting `z` from both `x` and `y`. This is because both features have correlations above 0.5 with `z`. So, next, we can try transforming (scaling) the features before fitting them into the machine learning models.
### Transform features before fit into machine learning models
```
from sklearn.model_selection import train_test_split as split
train, test = split(df, test_size = 0.1)
print (train.shape)
print (test.shape)
#get the features for training data
X_train = train[['x','y']]
#output(label) of training data
y_train = train['z']
#get the testing data features and output(label)
X_test = test[['x', 'y']]
y_test = test['z']
print(X_train.shape)
#normalize the data by using MinMaxScaler in sklearn
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
X_train= scaler.fit_transform(X_train)
X_train = pd.DataFrame(X_train, columns=["x","y"])
X_train.head()
X_test= scaler.transform(X_test) # reuse the scaler fitted on the training data
X_test = pd.DataFrame(X_test, columns=["x","y"])
X_test.head()
# Create and train the Support Vector Machine (Regressor)
from sklearn.svm import SVR
svr = SVR(kernel='rbf', C=1e3, gamma=0.1)
svr.fit(X_train, y_train)
# Testing Model: Score returns the coefficient of determination R^2 of the prediction.
# The best possible score is 1.0
svm_confidence = svr.score(X_test, y_test)
print("svm confidence: ", svm_confidence)
y_pred = svr.predict(X_test)
df_pred = pd.DataFrame({'Actual': y_test, 'Predicted': y_pred})
df_pred
from sklearn.metrics import mean_absolute_error
print("MAE by using SVM is {}.".format(mean_absolute_error(y_test,y_pred)))
```
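Min-max scaling itself is a two-step fit/transform: learn the feature range, then map values into [0, 1] with it (the range learned on the training data is reused for the test data). A stdlib sketch of what `MinMaxScaler` does per feature:

```python
def minmax_fit(xs):
    """Learn the feature range on the training data only."""
    return min(xs), max(xs)

def minmax_transform(xs, lo, hi):
    """Map values into [0, 1] using the fitted range."""
    return [(x - lo) / (hi - lo) for x in xs]

train = [2.0, 4.0, 6.0]
lo, hi = minmax_fit(train)              # fit on the training data
print(minmax_transform(train, lo, hi))  # [0.0, 0.5, 1.0]
print(minmax_transform([5.0], lo, hi))  # [0.75]
```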
#### Predict by using MLP
```
X_train, X_val, y_train, y_val = split(X_train, y_train, test_size = 0.1)
model3 = Sequential()
model3.add(Dense(1000, input_dim=2,activation='relu'))
model3.add(Dropout(0.5))
model3.add(Dense(500, activation="relu"))
model3.add(Dropout(0.5))
model3.add(Dense(1, activation='linear'))
model3.summary()
from keras_radam import RAdam
model3.compile(loss='mse', optimizer=RAdam(), metrics=['mse','mae'])
checkpointer = [
ReduceLROnPlateau(monitor='val_loss',
patience=3,
verbose=1,
factor=0.5,
min_lr = 0.00001,
cooldown=0),
ModelCheckpoint(filepath='best3.hdf5', verbose=1,
save_best_only=True)
]
history = model3.fit(X_train, y_train, epochs=150, batch_size=128, verbose=1, validation_data=[X_val, y_val], shuffle=True, callbacks=checkpointer)
train_acc = model3.evaluate(X_train, y_train, verbose=0)
val_acc = model3.evaluate(X_val, y_val, verbose=0)
print(history.history.keys())
# "Loss"
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
# load the weights that yielded the best validation accuracy
model3.load_weights('best3.hdf5')
test_predictions =model3.predict(X_test).flatten()
a = plt.axes(aspect='equal')
plt.scatter(y_test, test_predictions)
plt.xlabel('True Values')
plt.ylabel('Predictions')
lims = [-0.2, 1.2]
plt.xlim(lims)
plt.ylim(lims)
_ = plt.plot(lims, lims)
error = test_predictions - y_test
plt.hist(error, bins = 25)
plt.xlabel("Prediction Error")
_ = plt.ylabel("Count")
for true, pred in zip(y_test, test_predictions):
if pred == true:
print("{}\t -> {}".format(true, pred))
else:
print("\033[1m{}\t -> {}\033[0m".format(true, pred))
loss, mse, mae = model3.evaluate(X_test, y_test, verbose=2)
print("Testing set Mean Abs Error: {:5.4f}".format(mae))
```
### Discussions
The multi-layer perceptron (MLP) gives a lower mean absolute error than the classical machine learning algorithms.
[Mean Squared Error (MSE)](https://en.wikipedia.org/wiki/Mean_squared_error) is a common loss function used for regression problems (different loss functions are used for classification problems).
Similarly, evaluation metrics used for regression differ from classification. A common regression metric is [Mean Absolute Error (MAE)](https://en.wikipedia.org/wiki/Mean_absolute_error).
When numeric input features have values with different ranges, each feature should be scaled independently to the same range, so I used the [MinMaxScaler](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html) from sklearn to do this. I also tried the StandardScaler for feature scaling, but it turned out that MinMaxScaler gave better results.
If there is not much training data, one technique is to prefer a small network with few hidden layers to avoid overfitting.
I have tested more hidden layers and different batch sizes, and found that 2 layers are enough to reach a lower MAE. A batch size between 64 and 256 is usually considered good enough for an MLP.
I used [Rectified Adam (RAdam)](https://arxiv.org/pdf/1908.03265.pdf) as the optimizer, as it has been reported to learn better than Adam.
I reduced the learning rate when the validation loss was not improving, and loaded the best model's weights to predict the data.
I also split the data into a 9:1 train/test ratio for the classical machine learning algorithms, and an 8:1:1 train/validation/test ratio for the MLP. This is because the more data we have for training, the better the results usually are.
So, below are the full results to predict the `z` from `x` and `y`.
<table>
<caption> Table 1 Experimental Results </caption>
<tr>
<th>Algorithms</th>
<th>Feature Scaling</th>
<th>Mean Abs Error </th>
</tr>
<tr>
<td>Support Vector Machine Regressor</td>
<td>No</td>
<td>0.1670</td>
</tr>
<tr>
<td>Linear Regression </td>
<td>No</td>
<td>0.1691</td>
</tr>
<tr>
<td>Random Forest Regressor </td>
<td>No</td>
<td>0.1834</td>
</tr>
<tr>
<td>XGBoost Regressor </td>
<td>No</td>
<td>0.1753</td>
</tr>
<tr>
<td>Multi-layer Perceptron </td>
<td>No</td>
<td>0.0461</td>
</tr>
<tr>
<td>Support Vector Machine Regressor </td>
<td>Yes</td>
<td>0.1546</td>
</tr>
<tr>
<td>Multi-layer Perceptron </td>
<td>Yes</td>
<td>0.0375</td>
</tr>
</table>
**Future Works**
1. Increase the accuracy by using an LSTM or RNN neural network model. This should only be done if we have more data; otherwise, fine-tuning the hyperparameters of the MLP and the other machine learning models may turn out better.
2. Use k-fold cross validation to split the training and validation data.
## Conclusion
Data analysis and machine learning problems involve a lot of trial and error; we need to experiment to know which model works best. Besides that, we also need to consider the computing resources and data availability before designing the algorithms.
# Precipitation exercises
***
## <font color=steelblue>Exercise 3 - Double-mass curve</font>
<font color=steelblue>Perform a double-mass curve analysis with the data in sheet *Exercise_003* from file *RainfallData.xlsx*.</font>
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
sns.set_context('notebook')
from scipy.optimize import curve_fit
```
### Import data
```
# Import the data
data3 = pd.read_excel('../data/RainfallData.xlsx', sheet_name='Exercise_003',
skiprows=0, index_col=0)
# name of the gages
gages = data3.columns
# calculate the mean across stations
data3['AVG'] = data3.mean(axis=1)
data3.head()
```
### Double-mass curves
We are going to plot simultaneously the double-mass curve for all the stations, so we can start identifying stations that may have problems.
To plot several plots in the same figure, we will use the function `subplots` in `Matplotlib`.
```
fig, axes = plt.subplots(nrows=2, ncols=3, figsize=(12, 8), sharex=True, sharey=True)
for (gage, ax) in zip(gages, axes.flatten()):
# line of slope 1
ax.plot((0, 800), (0, 800), ':k', label='1:1 line')
# double-mass curve
ax.plot(data3.AVG.cumsum(), data3[gage].cumsum(), '.-', label='data')
ax.set_title('gage ' + gage)
ax.legend()
axes[1, 2].axis('off');
```
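The double-mass curve is nothing more than the running total of one gage plotted against the running total of the regional average; `itertools.accumulate` reproduces what `cumsum` computes (the rainfall values below are made up):

```python
from itertools import accumulate

gage = [100, 120, 90, 110]   # made-up annual precipitation at one gage
avg = [95, 115, 100, 105]    # regional average for the same years
x = list(accumulate(avg))    # cumulative regional average
y = list(accumulate(gage))   # cumulative gage totals
print(list(zip(x, y)))  # [(95, 100), (210, 220), (310, 310), (415, 420)]
```

A consistent record plots close to a straight line through the origin; a change in slope flags an inconsistency at the gage.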
From the plot we are certain that the series in gage C is correct, but there might be problems in the rest of the gages.
### Identify errors
The double-mass curve must represent a linear regression with no intercept. We will create a function representing this linear regression which we will use in the following steps.
```
def linear_reg(x, m):
"""Linear regression with no intercept
y = m * x
Input:
------
x: float. Independent variable
m: float. Slope of the linear regression
Output:
-------
y: float. Regressed value"""
y = m * x
return y
```
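`curve_fit` is convenient here, but for a one-parameter regression through the origin the least-squares slope also has a closed form, m = Σxy / Σx², which is handy as a sanity check on the fitted values (a stdlib sketch, not a replacement for the notebook's approach):

```python
def slope_through_origin(xs, ys):
    """Least-squares slope of y = m * x (no intercept): m = sum(x*y) / sum(x*x)."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

print(slope_through_origin([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]))  # 2.0
```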
#### Gage A
To identify errors, we have to fit the linear regression with no intercept to the series both before and after a specific year; if the difference between the two fitted slopes exceeds an error threshold, we identify that year as a break point in the double-mass curve. We iterate this process over each year, with an error threshold (or tolerance), to find all the possible break points in the series.
```
# define the gage
gage = 'A'
# define the error threshold
error = .2
for year in data3.index[3:-3]:
# fit the regression up to 'year'
m1 = curve_fit(linear_reg, data3.loc[:year, 'AVG'].cumsum(), data3.loc[:year, gage].cumsum())[0][0]
# fit the regression from 'year' onwards
m2 = curve_fit(linear_reg, data3.loc[year:, 'AVG'].cumsum(), data3.loc[year:, gage].cumsum())[0][0]
## correction factor
#factor = m1 / m2
#if (factor < 1 - error) | (factor > 1. + error):
if abs(m1 - m2) > error:
print('{0}    m1 = {1:.3f}    m2 = {2:.3f}'.format(year, m1, m2))
```
There are no errors in the series of gage A.
#### All gages
By simply changing the name of the gage in the previous section we can repeat the process. Let's create a function and then run it in a loop.
```
def identify_errors(dataGage, dataAVG, error=.1):
"""Identify possible break points in the double-mass curve
Parameters:
-----------
dataGage: series. Annual series for the gage to be checked
dataAVG: series. Annual series of the mean across gages in a region
error: float. Error threshold
Output:
-------
It will print the years with a difference in slopes higher than 'error', along with the values of the slopes.
"""
for year in dataGage.index[3:-3]:
# fit the regression up to 'year'
m1 = curve_fit(linear_reg, dataAVG.loc[:year].cumsum(), dataGage.loc[:year].cumsum())[0][0]
# fit the regression from 'year' onwards
m2 = curve_fit(linear_reg, dataAVG.loc[year:].cumsum(), dataGage.loc[year:].cumsum())[0][0]
## correction factor
#factor = m1 / m2
#if (factor < 1 - error) | (factor > 1. + error):
if abs(m1 - m2) > error:
print('{0} m1 = {1:.3f} m2 = {2:.3f}'.format(year, m1, m2))
for gage in gages:
print('Gage ', gage)
identify_errors(data3[gage], data3['AVG'], error=.1)
print()
```
We have identified errors in gages B, D and E. This automatic search let us discard the stations that are correct; now we have to analyse, one by one, the three stations that might have errors.
### Correct errors
#### Gage B
##### Analyse the series
We have identified anomalies in the years between 1929 and 1939, which probably means that there are two break points in the double-mass curve. Let's look at the double-mass curve and the specific points representing those two years.
```
# set gage and year corresponding to the break in the line
gage = 'B'
breaks = [1929, 1939]
# visualize
plt.figure(figsize=(5, 5))
plt.axis('equal')
plt.plot((0, 800), (0, 800), '--k')
plt.plot(data3.AVG.cumsum(), data3[gage].cumsum(), '.-', label='original')
plt.plot(data3.AVG.cumsum().loc[breaks], data3[gage].cumsum().loc[breaks], '.', label='breaks')
plt.legend();
```
At a glance, we can identify three periods. There is a period at the beginning of the series with a higher-than-usual slope, which seems to extend until 1930 (not 1929 as we had identified), and a period at the end of the series with a lower-than-usual slope, which seems to start in 1938 (not 1939 as we had identified).
We will reset the break points and calculate the slope of the regression to check it.
```
# reset the break points
breaks = [1930, 1938]
# fit the regression until the first break
m1 = curve_fit(linear_reg, data3.loc[:breaks[0], 'AVG'].cumsum(), data3.loc[:breaks[0], gage].cumsum())[0][0]
# fit the regression from the first to the second break
m2 = curve_fit(linear_reg, data3.loc[breaks[0]:breaks[1], 'AVG'].cumsum(), data3.loc[breaks[0]:breaks[1], gage].cumsum())[0][0]
# fit the regression from the second break on
m3 = curve_fit(linear_reg, data3.loc[breaks[1]:, 'AVG'].cumsum(), data3.loc[breaks[1]:, gage].cumsum())[0][0]
print('m1 = {0:.3f} m2 = {1:.3f} m3 = {2:.3f}'.format(m1, m2, m3))
```
As expected, there are three different slopes in the series. We will assume that the correct data are those from 1930 to 1937, because it is the longest period of the three and its slope is closest to 1. Therefore, we have to calculate correction factors for two periods, before 1930 and after 1937; with these factors we can correct the series.
##### Correct the series
```
# correction factors
factor12 = m2 / m1
factor23 = m2 / m3
factor12, factor23
# copy of the original series
data3['B_'] = data3[gage].copy()
# correct period before the first break
data3.loc[:breaks[0], 'B_'] *= factor12
# correct period after the second break
data3.loc[breaks[1]:, 'B_'] *= factor23
plt.figure(figsize=(5, 5))
plt.axis('equal')
plt.plot((0, 800), (0, 800), '--k')
plt.plot(data3.AVG.cumsum(), data3[gage].cumsum(), '.-', label='original')
plt.plot(data3.AVG.cumsum(), data3['B_'].cumsum(), '.-', label='corrected')
plt.legend();
```
Now we can check again for errors in the corrected series.
```
# check again for errors
identify_errors(data3.B_, data3.AVG)
```
There aren't any more errors, so we are done correcting the data from gage B.
#### Gage D
##### Analyse the series
We found a break point in year 1930.
```
# set gage and year corresponding to the break in the line
gage = 'D'
breaks = [1930]
# visualize
plt.figure(figsize=(5, 5))
plt.axis('equal')
plt.plot((0, 800), (0, 800), '--k')
plt.plot(data3.AVG.cumsum(), data3[gage].cumsum(), '.-', label='original')
plt.plot(data3.AVG.cumsum().loc[breaks], data3[gage].cumsum().loc[breaks], '.', label='breaks')
plt.legend();
# fit the regression until the break
m1 = curve_fit(linear_reg, data3.loc[:breaks[0], 'AVG'].cumsum(), data3.loc[:breaks[0], gage].cumsum())[0][0]
# fit the regression after the break
m2 = curve_fit(linear_reg, data3.loc[breaks[0]:, 'AVG'].cumsum(), data3.loc[breaks[0]:, gage].cumsum())[0][0]
print('m1 = {0:.3f} m2 = {1:.3f}'.format(m1, m2))
```
This case is simpler than the previous one, and we easily spot the break point in 1930. The period before 1930 has a slope closer to 1, so we will assume that this is the correct part of the series.
##### Correct the series
```
# correction factor
factor = m1 / m2
factor
# copy of the original series
data3[gage + '_'] = data3[gage].copy()
# correct period after the break
data3.loc[breaks[0]:, gage + '_'] *= factor
plt.figure(figsize=(5, 5))
plt.axis('equal')
plt.plot((0, 800), (0, 800), '--k')
plt.plot(data3.AVG.cumsum(), data3[gage].cumsum(), '.-', label='original')
plt.plot(data3.AVG.cumsum(), data3[gage + '_'].cumsum(), '.-', label='corrected')
plt.legend();
# check again for errors
identify_errors(data3[gage + '_'], data3.AVG, error=.1)
```
We identify two more possible break points in the corrected series. Both might indicate that the last section of the series has a higher slope than the initial one. Let's correct the series from 1935 on; this may also solve the second break point in 1937.
```
gage = 'D_'
breaks = [1935]
# fit the regression until the break
m1 = curve_fit(linear_reg, data3.loc[:breaks[0], 'AVG'].cumsum(), data3.loc[:breaks[0], gage].cumsum())[0][0]
# fit the regression after the break
m2 = curve_fit(linear_reg, data3.loc[breaks[0]:, 'AVG'].cumsum(), data3.loc[breaks[0]:, gage].cumsum())[0][0]
print('m1 = {0:.3f} m2 = {1:.3f}'.format(m1, m2))
# correction factor
factor = m1 / m2
factor
# copy of the original series
data3[gage + '_'] = data3[gage].copy()
# correct period after the break
data3.loc[breaks[0]:, gage + '_'] *= factor
plt.figure(figsize=(5, 5))
plt.axis('equal')
plt.plot((0, 800), (0, 800), '--k')
plt.plot(data3.AVG.cumsum(), data3[gage].cumsum(), '.-', label='original')
plt.plot(data3.AVG.cumsum(), data3[gage + '_'].cumsum(), '.-', label='corrected')
plt.legend();
# check again for errors
identify_errors(data3[gage + '_'], data3.AVG, error=.1)
```
#### Gage E
##### Analyse the series
The series in gage E has a similar behaviour to series B. There is an anomaly in the series between 1929 and 1938, indicating that there might be two break points in the double-mass curve.
```
# set gage and year corresponding to the break in the line
gage = 'E'
breaks = [1929, 1938]
# visualize
plt.figure(figsize=(5, 5))
plt.axis('equal')
plt.plot((0, 800), (0, 800), '--k')
plt.plot(data3.AVG.cumsum(), data3[gage].cumsum(), '.-', label='original')
plt.plot(data3.AVG.cumsum().loc[breaks], data3[gage].cumsum().loc[breaks], '.', label='breaks')
plt.legend();
# fit the regression until the first break
m1 = curve_fit(linear_reg, data3.loc[:breaks[0], 'AVG'].cumsum(), data3.loc[:breaks[0], gage].cumsum())[0][0]
# fit the regression from the first to the second break
m2 = curve_fit(linear_reg, data3.loc[breaks[0]:breaks[1], 'AVG'].cumsum(), data3.loc[breaks[0]:breaks[1], gage].cumsum())[0][0]
# fit the regression from the second break on
m3 = curve_fit(linear_reg, data3.loc[breaks[1]:, 'AVG'].cumsum(), data3.loc[breaks[1]:, gage].cumsum())[0][0]
print('m1 = {0:.3f} m2 = {1:.3f} m3 = {2:.3f}'.format(m1, m2, m3))
```
There seems to be only one break in the line, between the first and the second period. The slopes of the second and third periods are so close that, most probably, there isn't a change from 1938 on. Apart from that, the break in the line seems to be stronger in 1930 than in 1929, so we will change the breaks to include only 1930. We will assume that the period to be corrected is the one before 1930.
```
breaks = [1930]
# fit the regression until the first break
m1 = curve_fit(linear_reg, data3.loc[:breaks[0], 'AVG'].cumsum(), data3.loc[:breaks[0], gage].cumsum())[0][0]
# fit the regression from the first break
m2 = curve_fit(linear_reg, data3.loc[breaks[0]:, 'AVG'].cumsum(), data3.loc[breaks[0]:, gage].cumsum())[0][0]
m1, m2
```
##### Correct the series
```
# correction factor
factor = m2 / m1
factor
# copy of the original series
data3['E_'] = data3[gage].copy()
# correct period before the first break
data3.loc[:breaks[0], 'E_'] *= factor
plt.figure(figsize=(5, 5))
plt.axis('equal')
plt.plot((0, 800), (0, 800), '--k')
plt.plot(data3.AVG.cumsum(), data3[gage].cumsum(), '.-', label='original')
plt.plot(data3.AVG.cumsum(), data3[gage + '_'].cumsum(), '.-', label='corrected')
plt.legend();
# check again for errors
identify_errors(data3[gage + '_'], data3.AVG)
```
We don't identify any more errors, so the assumption that the slopes of the second and third periods were close enough was correct.
#### Redraw the double-mass plot
```
# recalculate the average
gages = ['A', 'B_', 'C', 'D__', 'E_']
data3['AVG_'] = data3[gages].mean(axis=1)
fig, axes = plt.subplots(nrows=2, ncols=3, figsize=(12, 8), sharex=True, sharey=True)
for (gage, ax) in zip(gages, axes.flatten()):
ax.plot((0, 800), (0, 800), ':k')
# double-mass curve
ax.plot(data3.AVG_.cumsum(), data3[gage].cumsum(), '.-', label='corrected')
ax.set_title('gage ' + gage)
axes[1, 2].axis('off');
# save figure
plt.savefig('../output/Ex3_double-mass curve.png', dpi=300)
# export corrected series
data3_ = data3.loc[:, gages]
data3_.columns = ['A', 'B', 'C', 'D', 'E']
data3_.to_csv('../output/Ex3_corrected series.csv', float_format='%.2f')
```
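The whole correction procedure applied above can be condensed into a short sketch. The data here are synthetic (not the notebook's gages), and the segment slopes are computed as ratios of sums, which is equivalent to a zero-intercept least-squares fit on the annual increments rather than the notebook's `curve_fit` on cumulative values:

```python
import numpy as np

rng = np.random.default_rng(0)
avg = rng.uniform(20., 40., 30)   # regional mean annual precipitation
gage = avg.copy()
gage[:10] *= 1.5                  # artificial inhomogeneity: first 10 years over-measure

# slope of the double-mass curve on each side of the (known) break
m1 = gage[:10].sum() / avg[:10].sum()   # before the break
m2 = gage[10:].sum() / avg[10:].sum()   # after the break

# assume the later, longer period is correct and rescale the earlier one
corrected = gage.copy()
corrected[:10] *= m2 / m1
```

After rescaling, the double-mass curve of `corrected` against `avg` collapses to a single straight line.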
* Building a word-count vector with scikit-learn
```
import pandas as pd
df = pd.read_csv("fake_or_real_news.csv")
# Import the necessary modules
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
# Print the head of df
df.head()
# Create a series to store the labels: y
y = df.label
# Create training and test sets
X_train, X_test, y_train, y_test = train_test_split(df['text'], y, test_size = 0.33, random_state=53)
# Initialize a CountVectorizer object: count_vectorizer
count_vectorizer = CountVectorizer(stop_words="english")
# Transform the training data using only the 'text' column values: count_train
count_train = count_vectorizer.fit_transform(X_train.values)
# Transform the test data using only the 'text' column values: count_test
count_test = count_vectorizer.transform(X_test.values)
# Print the first 10 features of the count_vectorizer
print(count_vectorizer.get_feature_names()[:10])
count_train.A[:10]
# Import TfidfVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer
# Initialize a TfidfVectorizer object: tfidf_vectorizer
tfidf_vectorizer = TfidfVectorizer(stop_words="english", max_df=0.7)
# Transform the training data: tfidf_train
tfidf_train = tfidf_vectorizer.fit_transform(X_train.values)
# Transform the test data: tfidf_test
tfidf_test = tfidf_vectorizer.transform(X_test.values)
# Print the first 10 features
print(tfidf_vectorizer.get_feature_names()[:10])
# Print the first 5 vectors of the tfidf training data
print(tfidf_train.A[:5])
# Create the CountVectorizer DataFrame: count_df
count_df = pd.DataFrame(count_train.A, columns=count_vectorizer.get_feature_names())
# Print the tail of count_df
count_df.tail()
# Create the TfidfVectorizer DataFrame: tfidf_df
tfidf_df = pd.DataFrame(tfidf_train.A, columns = tfidf_vectorizer.get_feature_names())
# Print the head of tfidf_df
tfidf_df.head()
# Calculate the difference in columns: difference
difference = set(count_df.columns) - set(tfidf_df.columns)
print(difference)
# Check whether the DataFrames are equal
print(count_df.equals(tfidf_df))
# Import the necessary modules
from sklearn import metrics
from sklearn.naive_bayes import MultinomialNB
# Instantiate a Multinomial Naive Bayes classifier: nb_classifier
nb_classifier = MultinomialNB()
# Fit the classifier to the training data
nb_classifier.fit(count_train, y_train)
# Create the predicted tags: pred
pred = nb_classifier.predict(count_test)
# Calculate the accuracy score: score
score = metrics.accuracy_score(y_test, pred)
print(score)
print(y_test.values[0:10])
print(pred[0:10])
# Calculate the confusion matrix: cm
cm = metrics.confusion_matrix(y_test, pred, labels=['FAKE', 'REAL'])
print(cm)
# Create a Multinomial Naive Bayes classifier: nb_classifier
nb_classifier = MultinomialNB()
# Fit the classifier to the training data
nb_classifier.fit(tfidf_train, y_train)
# Create the predicted tags: pred
pred = nb_classifier.predict(tfidf_test)
# Calculate the accuracy score: score
score = metrics.accuracy_score(y_test, pred)
print(score)
# Calculate the confusion matrix: cm
cm = metrics.confusion_matrix(y_test, pred, labels = ['FAKE', "REAL"])
print(cm)
# Import numpy and create the list of alphas: alphas
import numpy as np
alphas = np.arange(0, 1, 0.1)
# Define train_and_predict()
def train_and_predict(alpha):
# Instantiate the classifier: nb_classifier
nb_classifier = MultinomialNB(alpha=alpha)
# Fit to the training data
nb_classifier.fit(tfidf_train, y_train)
# Predict the labels: pred
pred = nb_classifier.predict(tfidf_test)
# Compute accuracy: score
score = metrics.accuracy_score(y_test, pred)
return score
# Iterate over the alphas and print the corresponding score
for alpha in alphas:
print('Alpha: ', alpha)
print('Score: ', train_and_predict(alpha))
print()
# Get the class labels: class_labels
class_labels = nb_classifier.classes_
# Extract the features: feature_names
feature_names = tfidf_vectorizer.get_feature_names()
# Zip the feature names together with the coefficient array and sort by weights: feat_with_weights
feat_with_weights = sorted(zip(nb_classifier.coef_[0], feature_names))
feat_with_weights[-20:]
# Print the first class label and the bottom 20 feat_with_weights entries
print(class_labels[0], feat_with_weights[:20])
# Print the second class label and the top 20 feat_with_weights entries
print(class_labels[1], feat_with_weights[-20:])
```
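On a toy corpus (made up here purely for illustration), the difference between the two vectorizers used above is easy to see: `CountVectorizer` stores raw term counts, while `TfidfVectorizer` down-weights terms that occur in every document:

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

corpus = ["the cat sat", "the cat ran", "the dog ran"]

# vocabulary is sorted alphabetically: cat, dog, ran, sat, the
count = CountVectorizer().fit_transform(corpus).toarray()   # raw term counts
tfidf = TfidfVectorizer().fit_transform(corpus).toarray()   # L2-normalised tf-idf

print(count)
print(tfidf)
```

In the first document, 'the' (present in all three documents) receives a lower tf-idf weight than 'sat' (present only there), even though both occur exactly once.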
```
# !wget https://raw.githubusercontent.com/UniversalDependencies/UD_English-EWT/master/en_ewt-ud-dev.conllu
# !wget https://raw.githubusercontent.com/UniversalDependencies/UD_English-EWT/master/en_ewt-ud-train.conllu
# !wget https://raw.githubusercontent.com/UniversalDependencies/UD_English-EWT/master/en_ewt-ud-test.conllu
# !wget https://storage.googleapis.com/xlnet/released_models/cased_L-12_H-768_A-12.zip -O xlnet.zip
# !unzip xlnet.zip
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '1'
tag2idx = {'PAD': 0, 'X': 1}
tag_idx = 2
import sentencepiece as spm
from prepro_utils import preprocess_text, encode_ids
sp_model = spm.SentencePieceProcessor()
sp_model.Load('xlnet_cased_L-12_H-768_A-12/spiece.model')
def tokenize_fn(text):
text = preprocess_text(text, lower= False)
return encode_ids(sp_model, text)
SEG_ID_A = 0
SEG_ID_B = 1
SEG_ID_CLS = 2
SEG_ID_SEP = 3
SEG_ID_PAD = 4
special_symbols = {
"<unk>" : 0,
"<s>" : 1,
"</s>" : 2,
"<cls>" : 3,
"<sep>" : 4,
"<pad>" : 5,
"<mask>" : 6,
"<eod>" : 7,
"<eop>" : 8,
}
VOCAB_SIZE = 32000
UNK_ID = special_symbols["<unk>"]
CLS_ID = special_symbols["<cls>"]
SEP_ID = special_symbols["<sep>"]
MASK_ID = special_symbols["<mask>"]
EOD_ID = special_symbols["<eod>"]
def process_corpus(corpus, until = None):
global word2idx, tag2idx, char2idx, word_idx, tag_idx, char_idx
sentences, words, depends, labels, pos, sequences = [], [], [], [], [], []
temp_sentence, temp_word, temp_depend, temp_label, temp_pos = [], [], [], [], []
segments, masks = [], []
first_time = True
for sentence in corpus:
try:
if len(sentence):
if sentence[0] == '#':
continue
if first_time:
print(sentence)
first_time = False
sentence = sentence.split('\t')
if sentence[7] not in tag2idx:
tag2idx[sentence[7]] = tag_idx
tag_idx += 1
temp_word.append(sentence[1])
temp_depend.append(int(sentence[6]) + 1)
temp_label.append(tag2idx[sentence[7]])
temp_sentence.append(sentence[1])
temp_pos.append(sentence[3])
else:
if len(temp_sentence) < 2 or len(temp_word) != len(temp_label):
temp_word = []
temp_depend = []
temp_label = []
temp_sentence = []
temp_pos = []
continue
bert_tokens = []
labels_ = []
depends_ = []
seq_ = []
for no, orig_token in enumerate(temp_word):
t = tokenize_fn(orig_token)
labels_.append(temp_label[no])
depends_.append(temp_depend[no])
bert_tokens.extend(t)
labels_.extend([1] * (len(t) - 1))
depends_.extend([0] * (len(t) - 1))
seq_.append(no + 1)
bert_tokens.extend([4, 3])
labels_.extend([0, 0])
depends_.extend([0, 0])
segment = [0] * (len(bert_tokens) - 1) + [SEG_ID_CLS]
input_mask = [0] * len(segment)
words.append(bert_tokens)
depends.append(depends_)
labels.append(labels_)
sentences.append(temp_sentence)
pos.append(temp_pos)
sequences.append(seq_)
segments.append(segment)
masks.append(input_mask)
temp_word = []
temp_depend = []
temp_label = []
temp_sentence = []
temp_pos = []
except Exception as e:
print(e, sentence)
return sentences[:-1], words[:-1], depends[:-1], labels[:-1], pos[:-1], sequences[:-1], segments[:-1], masks[:-1]
with open('en_ewt-ud-dev.conllu') as fopen:
dev = fopen.read().split('\n')
sentences_dev, words_dev, depends_dev, labels_dev, _, seq_dev, segments_dev, masks_dev = process_corpus(dev)
with open('en_ewt-ud-test.conllu') as fopen:
test = fopen.read().split('\n')
sentences_test, words_test, depends_test, labels_test, _, seq_test, segments_test, masks_test = process_corpus(test)
sentences_test.extend(sentences_dev)
words_test.extend(words_dev)
depends_test.extend(depends_dev)
labels_test.extend(labels_dev)
seq_test.extend(seq_dev)
segments_test.extend(segments_dev)
masks_test.extend(masks_dev)
with open('en_ewt-ud-train.conllu') as fopen:
train = fopen.read().split('\n')
sentences_train, words_train, depends_train, labels_train, _, _, segments_train, masks_train = process_corpus(train)
len(sentences_train), len(sentences_test)
idx2tag = {v:k for k, v in tag2idx.items()}
train_X = words_train
train_Y = labels_train
train_depends = depends_train
test_X = words_test
test_Y = labels_test
test_depends = depends_test
import xlnet
import model_utils
import tensorflow as tf
import numpy as np
kwargs = dict(
is_training=True,
use_tpu=False,
use_bfloat16=False,
dropout=0.1,
dropatt=0.1,
init='normal',
init_range=0.1,
init_std=0.05,
clamp_len=-1)
xlnet_parameters = xlnet.RunConfig(**kwargs)
xlnet_config = xlnet.XLNetConfig(json_path='xlnet_cased_L-12_H-768_A-12/xlnet_config.json')
epoch = 15
batch_size = 32
warmup_proportion = 0.1
num_train_steps = int(len(train_X) / batch_size * epoch)
num_warmup_steps = int(num_train_steps * warmup_proportion)
print(num_train_steps, num_warmup_steps)
training_parameters = dict(
decay_method = 'poly',
train_steps = num_train_steps,
learning_rate = 2e-5,
warmup_steps = num_warmup_steps,
min_lr_ratio = 0.0,
weight_decay = 0.00,
adam_epsilon = 1e-8,
num_core_per_host = 1,
lr_layer_decay_rate = 1,
use_tpu=False,
use_bfloat16=False,
dropout=0.0,
dropatt=0.0,
init='normal',
init_range=0.1,
init_std=0.02,
clip = 1.0,
clamp_len=-1,)
class Parameter:
def __init__(self, decay_method, warmup_steps, weight_decay, adam_epsilon,
num_core_per_host, lr_layer_decay_rate, use_tpu, learning_rate, train_steps,
min_lr_ratio, clip, **kwargs):
self.decay_method = decay_method
self.warmup_steps = warmup_steps
self.weight_decay = weight_decay
self.adam_epsilon = adam_epsilon
self.num_core_per_host = num_core_per_host
self.lr_layer_decay_rate = lr_layer_decay_rate
self.use_tpu = use_tpu
self.learning_rate = learning_rate
self.train_steps = train_steps
self.min_lr_ratio = min_lr_ratio
self.clip = clip
training_parameters = Parameter(**training_parameters)
class BiAAttention:
def __init__(self, input_size_encoder, input_size_decoder, num_labels):
self.input_size_encoder = input_size_encoder
self.input_size_decoder = input_size_decoder
self.num_labels = num_labels
self.W_d = tf.get_variable("W_d", shape=[self.num_labels, self.input_size_decoder],
initializer=tf.contrib.layers.xavier_initializer())
self.W_e = tf.get_variable("W_e", shape=[self.num_labels, self.input_size_encoder],
initializer=tf.contrib.layers.xavier_initializer())
self.U = tf.get_variable("U", shape=[self.num_labels, self.input_size_decoder, self.input_size_encoder],
initializer=tf.contrib.layers.xavier_initializer())
def forward(self, input_d, input_e, mask_d=None, mask_e=None):
batch = tf.shape(input_d)[0]
length_decoder = tf.shape(input_d)[1]
length_encoder = tf.shape(input_e)[1]
out_d = tf.expand_dims(tf.matmul(self.W_d, tf.transpose(input_d, [0, 2, 1])), 3)
out_e = tf.expand_dims(tf.matmul(self.W_e, tf.transpose(input_e, [0, 2, 1])), 2)
output = tf.matmul(tf.expand_dims(input_d, 1), self.U)
output = tf.matmul(output, tf.transpose(tf.expand_dims(input_e, 1), [0, 1, 3, 2]))
output = output + out_d + out_e
if mask_d is not None:
d = tf.expand_dims(tf.expand_dims(mask_d, 1), 3)
e = tf.expand_dims(tf.expand_dims(mask_e, 1), 2)
output = output * d * e
return output
class BiLinear:
def __init__(self, left_features, right_features, out_features):
self.left_features = left_features
self.right_features = right_features
self.out_features = out_features
self.U = tf.get_variable("U-bi", shape=[out_features, left_features, right_features],
initializer=tf.contrib.layers.xavier_initializer())
self.W_l = tf.get_variable("Wl", shape=[out_features, left_features],
initializer=tf.contrib.layers.xavier_initializer())
self.W_r = tf.get_variable("Wr", shape=[out_features, right_features],
initializer=tf.contrib.layers.xavier_initializer())
def forward(self, input_left, input_right):
left_size = tf.shape(input_left)
output_shape = tf.concat([left_size[:-1], [self.out_features]], axis = 0)
batch = tf.cast(tf.reduce_prod(left_size[:-1]), tf.int32)
input_left = tf.reshape(input_left, (batch, self.left_features))
input_right = tf.reshape(input_right, (batch, self.right_features))
tiled = tf.tile(tf.expand_dims(input_left, axis = 0), (self.out_features,1,1))
output = tf.transpose(tf.reduce_sum(tf.matmul(tiled, self.U), axis = 2))
output = output + tf.matmul(input_left, tf.transpose(self.W_l))\
+ tf.matmul(input_right, tf.transpose(self.W_r))
return tf.reshape(output, output_shape)
class Attention:
def __init__(self, word_dim, num_words, char_dim, num_chars, num_filters, kernel_size,
hidden_size, encoder_layers, num_labels, arc_space, type_space):
def cells(size, reuse=False):
return tf.nn.rnn_cell.LSTMCell(size,
initializer=tf.orthogonal_initializer(),reuse=reuse)
self.word_embedd = tf.Variable(tf.random_uniform([num_words, word_dim], -1, 1))
self.char_embedd = tf.Variable(tf.random_uniform([num_chars, char_dim], -1, 1))
self.conv1d = tf.layers.Conv1D(num_filters, kernel_size, 1, padding='VALID')
self.num_labels = num_labels
self.encoder = tf.nn.rnn_cell.MultiRNNCell([cells(hidden_size) for _ in range(encoder_layers)])
def encode(self, input_word, input_char):
word = tf.nn.embedding_lookup(self.word_embedd, input_word)
char = tf.nn.embedding_lookup(self.char_embedd, input_char)
b = tf.shape(char)[0]
wl = tf.shape(char)[1]
cl = tf.shape(char)[2]
d = char.shape[3]
char = tf.reshape(char, [b * wl, cl, d])
char = tf.reduce_max(self.conv1d(char), axis = 1)
char = tf.nn.tanh(char)
d = char.shape[-1]
char = tf.reshape(char, [b, wl, d])
src_encoding = tf.concat([word, char], axis=2)
output, hn = tf.nn.dynamic_rnn(self.encoder, src_encoding, dtype = tf.float32,
scope = 'encoder')
arc_h = tf.nn.elu(self.arc_h(output))
arc_c = tf.nn.elu(self.arc_c(output))
type_h = tf.nn.elu(self.type_h(output))
type_c = tf.nn.elu(self.type_c(output))
return (arc_h, arc_c), (type_h, type_c), hn
def forward(self, input_word, input_char, mask):
arcs, types, _ = self.encode(input_word, input_char)
out_arc = tf.squeeze(self.attention.forward(arcs[0], arcs[1], mask_d=mask, mask_e=mask), axis = 1)
return out_arc, types, mask
def loss(self, input_word, input_char, mask, heads, types):
out_arc, out_type, _ = self.forward(input_word, input_char, mask)
type_h, type_c = out_type
batch = tf.shape(out_arc)[0]
max_len = tf.shape(out_arc)[1]
batch_index = tf.range(0, batch)
t = tf.transpose(heads)
broadcasted = tf.broadcast_to(batch_index, tf.shape(t))
concatenated = tf.transpose(tf.concat([tf.expand_dims(broadcasted, axis = 0),
tf.expand_dims(t, axis = 0)], axis = 0))
type_h = tf.gather_nd(type_h, concatenated)
out_type = self.bilinear.forward(type_h, type_c)
minus_inf = -1e8
minus_mask = (1 - mask) * minus_inf
out_arc = out_arc + tf.expand_dims(minus_mask, axis = 2) + tf.expand_dims(minus_mask, axis = 1)
loss_arc = tf.nn.log_softmax(out_arc, dim=1)
loss_type = tf.nn.log_softmax(out_type, dim=2)
loss_arc = loss_arc * tf.expand_dims(mask, axis = 2) * tf.expand_dims(mask, axis = 1)
loss_type = loss_type * tf.expand_dims(mask, axis = 2)
num = tf.reduce_sum(mask) - tf.cast(batch, tf.float32)
child_index = tf.tile(tf.expand_dims(tf.range(0, max_len), 1), [1, batch])
t = tf.transpose(heads)
broadcasted = tf.broadcast_to(batch_index, tf.shape(t))
concatenated = tf.transpose(tf.concat([tf.expand_dims(broadcasted, axis = 0),
tf.expand_dims(t, axis = 0),
tf.expand_dims(child_index, axis = 0)], axis = 0))
loss_arc = tf.gather_nd(loss_arc, concatenated)
loss_arc = tf.transpose(loss_arc, [1, 0])
t = tf.transpose(types)
broadcasted = tf.broadcast_to(batch_index, tf.shape(t))
concatenated = tf.transpose(tf.concat([tf.expand_dims(broadcasted, axis = 0),
tf.expand_dims(child_index, axis = 0),
tf.expand_dims(t, axis = 0)], axis = 0))
loss_type = tf.gather_nd(loss_type, concatenated)
loss_type = tf.transpose(loss_type, [1, 0])
return tf.reduce_sum(-loss_arc) / num, tf.reduce_sum(-loss_type) / num
def decode(self, input_word, input_char, mask, leading_symbolic=0):
out_arc, out_type, _ = self.forward(input_word, input_char, mask)
batch = tf.shape(out_arc)[0]
max_len = tf.shape(out_arc)[1]
sec_max_len = tf.shape(out_arc)[2]
out_arc = out_arc + tf.linalg.diag(tf.fill([max_len], -np.inf))
minus_mask = tf.expand_dims(tf.cast(1 - mask, tf.bool), axis = 2)
minus_mask = tf.tile(minus_mask, [1, 1, sec_max_len])
out_arc = tf.where(minus_mask, tf.fill(tf.shape(out_arc), -np.inf), out_arc)
heads = tf.argmax(out_arc, axis = 1)
type_h, type_c = out_type
batch = tf.shape(type_h)[0]
max_len = tf.shape(type_h)[1]
batch_index = tf.range(0, batch)
t = tf.cast(tf.transpose(heads), tf.int32)
broadcasted = tf.broadcast_to(batch_index, tf.shape(t))
concatenated = tf.transpose(tf.concat([tf.expand_dims(broadcasted, axis = 0),
tf.expand_dims(t, axis = 0)], axis = 0))
type_h = tf.gather_nd(type_h, concatenated)
out_type = self.bilinear.forward(type_h, type_c)
out_type = out_type[:, :, leading_symbolic:]
types = tf.argmax(out_type, axis = 2)
return heads, types
class Model:
def __init__(
self,
learning_rate,
hidden_size_word,
cov = 0.0):
self.words = tf.placeholder(tf.int32, (None, None))
self.segment_ids = tf.placeholder(tf.int32, [None, None])
self.input_masks = tf.placeholder(tf.float32, [None, None])
self.heads = tf.placeholder(tf.int32, (None, None))
self.types = tf.placeholder(tf.int32, (None, None))
self.mask = tf.cast(tf.math.not_equal(self.words, 0), tf.float32)
self.maxlen = tf.shape(self.words)[1]
self.lengths = tf.count_nonzero(self.words, 1)
mask = self.mask
heads = self.heads
types = self.types
self.arc_h = tf.layers.Dense(hidden_size_word)
self.arc_c = tf.layers.Dense(hidden_size_word)
self.attention = BiAAttention(hidden_size_word, hidden_size_word, 1)
self.type_h = tf.layers.Dense(hidden_size_word)
self.type_c = tf.layers.Dense(hidden_size_word)
self.bilinear = BiLinear(hidden_size_word, hidden_size_word, len(tag2idx))
xlnet_model = xlnet.XLNetModel(
xlnet_config=xlnet_config,
run_config=xlnet_parameters,
input_ids=tf.transpose(self.words, [1, 0]),
seg_ids=tf.transpose(self.segment_ids, [1, 0]),
input_mask=tf.transpose(self.input_masks, [1, 0]))
output_layer = xlnet_model.get_sequence_output()
output_layer = tf.transpose(output_layer, [1, 0, 2])
arc_h = tf.nn.elu(self.arc_h(output_layer))
arc_c = tf.nn.elu(self.arc_c(output_layer))
type_h = tf.nn.elu(self.type_h(output_layer))
type_c = tf.nn.elu(self.type_c(output_layer))
out_arc = tf.squeeze(self.attention.forward(arc_h, arc_h, mask_d=self.mask,
mask_e=self.mask), axis = 1)
batch = tf.shape(out_arc)[0]
max_len = tf.shape(out_arc)[1]
sec_max_len = tf.shape(out_arc)[2]
batch_index = tf.range(0, batch)
decode_arc = out_arc + tf.linalg.diag(tf.fill([max_len], -np.inf))
minus_mask = tf.expand_dims(tf.cast(1 - mask, tf.bool), axis = 2)
minus_mask = tf.tile(minus_mask, [1, 1, sec_max_len])
decode_arc = tf.where(minus_mask, tf.fill(tf.shape(decode_arc), -np.inf), decode_arc)
self.heads_seq = tf.argmax(decode_arc, axis = 1)
t = tf.cast(tf.transpose(self.heads_seq), tf.int32)
broadcasted = tf.broadcast_to(batch_index, tf.shape(t))
concatenated = tf.transpose(tf.concat([tf.expand_dims(broadcasted, axis = 0),
tf.expand_dims(t, axis = 0)], axis = 0))
type_h = tf.gather_nd(type_h, concatenated)
out_type = self.bilinear.forward(type_h, type_c)
self.tags_seq = tf.argmax(out_type, axis = 2)
batch = tf.shape(out_arc)[0]
max_len = tf.shape(out_arc)[1]
batch_index = tf.range(0, batch)
t = tf.transpose(heads)
broadcasted = tf.broadcast_to(batch_index, tf.shape(t))
concatenated = tf.transpose(tf.concat([tf.expand_dims(broadcasted, axis = 0),
tf.expand_dims(t, axis = 0)], axis = 0))
type_h = tf.gather_nd(type_h, concatenated)
out_type = self.bilinear.forward(type_h, type_c)
minus_inf = -1e8
minus_mask = (1 - mask) * minus_inf
out_arc = out_arc + tf.expand_dims(minus_mask, axis = 2) + tf.expand_dims(minus_mask, axis = 1)
loss_arc = tf.nn.log_softmax(out_arc, dim=1)
loss_type = tf.nn.log_softmax(out_type, dim=2)
loss_arc = loss_arc * tf.expand_dims(mask, axis = 2) * tf.expand_dims(mask, axis = 1)
loss_type = loss_type * tf.expand_dims(mask, axis = 2)
num = tf.reduce_sum(mask) - tf.cast(batch, tf.float32)
child_index = tf.tile(tf.expand_dims(tf.range(0, max_len), 1), [1, batch])
t = tf.transpose(heads)
broadcasted = tf.broadcast_to(batch_index, tf.shape(t))
concatenated = tf.transpose(tf.concat([tf.expand_dims(broadcasted, axis = 0),
tf.expand_dims(t, axis = 0),
tf.expand_dims(child_index, axis = 0)], axis = 0))
loss_arc = tf.gather_nd(loss_arc, concatenated)
loss_arc = tf.transpose(loss_arc, [1, 0])
t = tf.transpose(types)
broadcasted = tf.broadcast_to(batch_index, tf.shape(t))
concatenated = tf.transpose(tf.concat([tf.expand_dims(broadcasted, axis = 0),
tf.expand_dims(child_index, axis = 0),
tf.expand_dims(t, axis = 0)], axis = 0))
loss_type = tf.gather_nd(loss_type, concatenated)
loss_type = tf.transpose(loss_type, [1, 0])
self.cost = (tf.reduce_sum(-loss_arc) / num) + (tf.reduce_sum(-loss_type) / num)
self.optimizer = tf.train.AdamOptimizer(
learning_rate = learning_rate
).minimize(self.cost)
mask = tf.sequence_mask(self.lengths, maxlen = self.maxlen)
self.prediction = tf.boolean_mask(self.tags_seq, mask)
mask_label = tf.boolean_mask(self.types, mask)
correct_pred = tf.equal(tf.cast(self.prediction, tf.int32), mask_label)
correct_index = tf.cast(correct_pred, tf.float32)
self.accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
self.prediction = tf.cast(tf.boolean_mask(self.heads_seq, mask), tf.int32)
mask_label = tf.boolean_mask(self.heads, mask)
correct_pred = tf.equal(self.prediction, mask_label)
correct_index = tf.cast(correct_pred, tf.float32)
self.accuracy_depends = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
tf.reset_default_graph()
sess = tf.InteractiveSession()
learning_rate = 2e-5
hidden_size_word = 128
model = Model(learning_rate, hidden_size_word)
sess.run(tf.global_variables_initializer())
import collections
import re
def get_assignment_map_from_checkpoint(tvars, init_checkpoint):
"""Compute the union of the current variables and checkpoint variables."""
assignment_map = {}
initialized_variable_names = {}
name_to_variable = collections.OrderedDict()
for var in tvars:
name = var.name
m = re.match('^(.*):\\d+$', name)
if m is not None:
name = m.group(1)
name_to_variable[name] = var
init_vars = tf.train.list_variables(init_checkpoint)
assignment_map = collections.OrderedDict()
for x in init_vars:
(name, var) = (x[0], x[1])
if name not in name_to_variable:
continue
assignment_map[name] = name_to_variable[name]
initialized_variable_names[name] = 1
initialized_variable_names[name + ':0'] = 1
return (assignment_map, initialized_variable_names)
tvars = tf.trainable_variables()
checkpoint = 'xlnet_cased_L-12_H-768_A-12/xlnet_model.ckpt'
assignment_map, initialized_variable_names = get_assignment_map_from_checkpoint(tvars,
checkpoint)
saver = tf.train.Saver(var_list = assignment_map)
saver.restore(sess, checkpoint)
from tensorflow.keras.preprocessing.sequence import pad_sequences
batch_x = train_X[:5]
batch_x = pad_sequences(batch_x,padding='post')
batch_y = train_Y[:5]
batch_y = pad_sequences(batch_y,padding='post')
batch_depends = train_depends[:5]
batch_depends = pad_sequences(batch_depends,padding='post')
batch_segments = segments_train[:5]
batch_segments = pad_sequences(batch_segments, padding='post', value = 4)
batch_masks = masks_train[:5]
batch_masks = pad_sequences(batch_masks, padding='post', value = 1)
sess.run([model.accuracy, model.accuracy_depends, model.cost],
feed_dict = {model.words: batch_x,
model.types: batch_y,
model.heads: batch_depends,
model.segment_ids: batch_segments,
model.input_masks: batch_masks})
tags_seq, heads = sess.run(
[model.tags_seq, model.heads_seq],
feed_dict = {
model.words: batch_x,
model.segment_ids: batch_segments,
model.input_masks: batch_masks
},
)
tags_seq[0], heads[0], batch_depends[0]
from tqdm import tqdm
for e in range(epoch):
    train_acc, train_loss = [], []
    test_acc, test_loss = [], []
    train_acc_depends, test_acc_depends = [], []
    pbar = tqdm(
        range(0, len(train_X), batch_size), desc = 'train minibatch loop'
    )
    for i in pbar:
        index = min(i + batch_size, len(train_X))
        batch_x = train_X[i: index]
        batch_x = pad_sequences(batch_x, padding = 'post')
        batch_y = train_Y[i: index]
        batch_y = pad_sequences(batch_y, padding = 'post')
        batch_depends = train_depends[i: index]
        batch_depends = pad_sequences(batch_depends, padding = 'post')
        batch_segments = segments_train[i: index]
        batch_segments = pad_sequences(batch_segments, padding = 'post', value = 4)
        batch_masks = masks_train[i: index]
        batch_masks = pad_sequences(batch_masks, padding = 'post', value = 1)
        acc_depends, acc, cost, _ = sess.run(
            [model.accuracy_depends, model.accuracy, model.cost, model.optimizer],
            feed_dict = {
                model.words: batch_x,
                model.types: batch_y,
                model.heads: batch_depends,
                model.segment_ids: batch_segments,
                model.input_masks: batch_masks
            },
        )
        train_loss.append(cost)
        train_acc.append(acc)
        train_acc_depends.append(acc_depends)
        pbar.set_postfix(cost = cost, accuracy = acc, accuracy_depends = acc_depends)
    pbar = tqdm(
        range(0, len(test_X), batch_size), desc = 'test minibatch loop'
    )
    for i in pbar:
        index = min(i + batch_size, len(test_X))
        batch_x = test_X[i: index]
        batch_x = pad_sequences(batch_x, padding = 'post')
        batch_y = test_Y[i: index]
        batch_y = pad_sequences(batch_y, padding = 'post')
        batch_depends = test_depends[i: index]
        batch_depends = pad_sequences(batch_depends, padding = 'post')
        batch_segments = segments_test[i: index]
        batch_segments = pad_sequences(batch_segments, padding = 'post', value = 4)
        batch_masks = masks_test[i: index]
        batch_masks = pad_sequences(batch_masks, padding = 'post', value = 1)
        acc_depends, acc, cost = sess.run(
            [model.accuracy_depends, model.accuracy, model.cost],
            feed_dict = {
                model.words: batch_x,
                model.types: batch_y,
                model.heads: batch_depends,
                model.segment_ids: batch_segments,
                model.input_masks: batch_masks
            },
        )
        test_loss.append(cost)
        test_acc.append(acc)
        test_acc_depends.append(acc_depends)
        pbar.set_postfix(cost = cost, accuracy = acc, accuracy_depends = acc_depends)
    print(
        'epoch: %d, training loss: %f, training acc: %f, training depends: %f, valid loss: %f, valid acc: %f, valid depends: %f\n'
        % (e, np.mean(train_loss),
           np.mean(train_acc),
           np.mean(train_acc_depends),
           np.mean(test_loss),
           np.mean(test_acc),
           np.mean(test_acc_depends)
        ))
tags_seq, heads = sess.run(
[model.tags_seq, model.heads_seq],
feed_dict = {
model.words: batch_x,
model.segment_ids: batch_segments,
model.input_masks: batch_masks
},
)
tags_seq[0], heads[0] - 1, batch_depends[0] - 1
def evaluate(heads_pred, types_pred, heads, types, lengths,
             symbolic_root=False, symbolic_end=False):
    batch_size, _ = heads_pred.shape
    ucorr = 0.
    lcorr = 0.
    total = 0.
    ucomplete_match = 0.
    lcomplete_match = 0.
    corr_root = 0.
    total_root = 0.
    start = 1 if symbolic_root else 0
    end = 1 if symbolic_end else 0
    for i in range(batch_size):
        ucm = 1.
        lcm = 1.
        for j in range(start, lengths[i] - end):
            total += 1
            if heads[i, j] == heads_pred[i, j]:
                ucorr += 1
                if types[i, j] == types_pred[i, j]:
                    lcorr += 1
                else:
                    lcm = 0
            else:
                ucm = 0
                lcm = 0
            if heads[i, j] == 0:
                total_root += 1
                corr_root += 1 if heads_pred[i, j] == 0 else 0
        ucomplete_match += ucm
        lcomplete_match += lcm
    return ucorr / total, lcorr / total, corr_root / total_root
arcs, types, roots = [], [], []
for i in range(0, len(test_X), batch_size):
    index = min(i + batch_size, len(test_X))
    batch_x = test_X[i: index]
    batch_x = pad_sequences(batch_x, padding = 'post')
    batch_y = test_Y[i: index]
    batch_y = pad_sequences(batch_y, padding = 'post')
    batch_depends = test_depends[i: index]
    batch_depends = pad_sequences(batch_depends, padding = 'post')
    batch_segments = segments_test[i: index]
    batch_segments = pad_sequences(batch_segments, padding = 'post', value = 4)
    batch_masks = masks_test[i: index]
    batch_masks = pad_sequences(batch_masks, padding = 'post', value = 1)
    tags_seq, heads = sess.run(
        [model.tags_seq, model.heads_seq],
        feed_dict = {
            model.words: batch_x,
            model.segment_ids: batch_segments,
            model.input_masks: batch_masks
        },
    )
    arc_accuracy, type_accuracy, root_accuracy = evaluate(heads - 1, tags_seq, batch_depends - 1, batch_y,
                                                          np.count_nonzero(batch_x, axis = 1))
    arcs.append(arc_accuracy)
    types.append(type_accuracy)
    roots.append(root_accuracy)
print('arc accuracy:', np.mean(arcs))
print('types accuracy:', np.mean(types))
print('root accuracy:', np.mean(roots))
```
## Model solution for the project task: descriptive statistics and open questions
#### Basis: dataset of the San Francisco Public Library, see https://zbmed.github.io/2020-2021-ZK_Data_Librarian_Modul_3/organisation/dataset/
##### Question 1: How many seniors and children are patrons of the San Francisco Public Library?
##### Question 2: How many users want to be notified by email?
##### Question 3: How old are these users on average compared to users who want to be notified by post?
##### Question 4: How many checkouts are made on average per age group and per year? Is the dispersion the same across groups?
##### Question 5: Which age group renews how often on average?
##### Question 6: What is the distribution of age groups in the Mission District?
##### Question 7: Explanation of normalization of cross tables (chapter 3 of the course notes)
```
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
%matplotlib inline
sns.set()
# only what we need, in abbreviated form
df = pd.read_csv(
"../data/Library_Usage.csv",
na_values="none"
)
# Read the dataset into the newly defined DataFrame df,
# overwriting missing values
#df
df.head()
# Overview of the DataFrame
```
# Question 1: How many seniors and children are patrons of the San Francisco Public Library?
To answer this question, the columns "Patron Type Definition" or "Age Range" come into consideration, so let us look at their entries (attribute values):
```
df['Patron Type Definition'].value_counts()
```
We can see that these are nominal values.
```
df['Age Range'].value_counts()
```
We can see that these are ordinal values.
The dataset description states that the column 'Age Range' is based on the date of birth. That is, we can assume that library users were sorted into the 'Age Range' categories at the time the dataset was published (2016). The column 'Patron Type Definition' contains various values that do not necessarily relate to age (e.g. VISITOR or SPECIAL). We can therefore assume that 'Age Range' comes closest to answering questions about age. Exact information on the actual age of the users is not available in this dataset.
At this point we need to define what seniors and children are. One option is to define seniors as Age Range = 65 to 74 years plus Age Range = 75 years and over; one could also include Age Range = 60 to 64 years. For what follows, we define seniors as users aged 65 and over and children as users up to 19 years. To answer question 1, it therefore suffices to sum the corresponding entries.
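As a side note, chains of `==` comparisons joined with `|` can be written more compactly with `isin`. A minimal sketch on invented toy data (only the 'Age Range' labels come from the real dataset):

```python
import pandas as pd

toy = pd.DataFrame({'Age Range': ['0 to 9 years', '10 to 19 years',
                                  '65 to 74 years', '75 years and over',
                                  '25 to 34 years']})
# select children (up to 19) and seniors (65 and over) in one call each
kinder_toy = toy[toy['Age Range'].isin(['0 to 9 years', '10 to 19 years'])]
senioren_toy = toy[toy['Age Range'].isin(['65 to 74 years', '75 years and over'])]
print(len(kinder_toy), len(senioren_toy))
```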
```
kinder = df.loc[
    (df['Age Range'] == "0 to 9 years") |
    (df['Age Range'] == "10 to 19 years")
] # helper variable; children up to 19 years, matching the definition above
len(kinder)
senioren = df.loc[
    (df['Age Range'] == "65 to 74 years") |
    (df['Age Range'] == "75 years and over")
] # helper variable
len(senioren)
```
# Answer to question 1:
```
print('There are ' + str(len(kinder)) + ' children (up to 19 years) and ' + str(len(senioren)) + ' seniors (65 and over) registered.')
```
-----
# Question 2: How many users want to be notified by email?
```
len(df.loc[(df['Notice Preference Definition'] == "email")])
```
# Answer to question 2:
```
print(str(len(df.loc[(df['Notice Preference Definition'] == "email")])) + ' users want to be notified by email.')
```
# Question 3: How old are these users on average compared to users who want to be notified by post?
Let us take a closer look at the relevant column on the preferred notification method:
```
df['Notice Preference Definition'].value_counts()
```
This is a nominal attribute.
Here, too, we run into a question of interpretation. Since the dataset does not provide an exact age, but only age brackets ('Age Range') or categories that do not always relate to age but rather to user status ('Patron Type Definition'), we have to consider what an average can mean. 'Age Range' is an ordinal attribute, i.e. we cannot compute an expected value, but we can still look at frequency tables.
To relate these two attributes (nominal and ordinal), we can look at a plot; for this we define a new variable by filtering for the values we are interested in:
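Since 'Age Range' is ordinal, it can also be convenient to convert the column into an ordered pandas `Categorical`, so that tables and plots sort the age brackets correctly without passing an explicit `order=` list each time. A sketch on invented toy data:

```python
import pandas as pd

order = ['0 to 9 years', '10 to 19 years', '20 to 24 years']
toy = pd.DataFrame({'Age Range': ['20 to 24 years', '0 to 9 years', '10 to 19 years']})
# declare the bracket order once, on the column itself
toy['Age Range'] = pd.Categorical(toy['Age Range'], categories=order, ordered=True)
# with sort=False, value_counts keeps the declared category order
print(toy['Age Range'].value_counts(sort=False))
```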
```
benachrichtigung=df.loc[
(df['Notice Preference Definition'] == "email") |
(df['Notice Preference Definition'] == "print")
] # helper variable for the plot, so that only the values 'email' and 'print' are shown
sns.catplot(x='Age Range', kind='count', hue='Notice Preference Definition', data=benachrichtigung, aspect=3, order=["0 to 9 years","10 to 19 years","20 to 24 years","25 to 34 years","35 to 44 years","45 to 54 years","55 to 59 years","60 to 64 years","65 to 74 years","75 years and over"])
# if data=df were used, we would also see entries with other notification preferences
```
A cross table helps us understand the numbers better:
```
pd.crosstab(
benachrichtigung['Notice Preference Definition'],
benachrichtigung['Age Range'],
margins=True
)
# Note that we are looking at a subset of the dataset here, via the variable "benachrichtigung".
# That is, we exclude the records where the column 'Notice Preference Definition' contains
# neither 'email' nor 'print'.
```
----
###### IMPORTANT:
Note that the sum of users notified by email is 323778 here. In question 2, however, we found that 323937 users actually want to be notified by email.
This is because some records (rows of the dataset) evidently have the value "email" in the "Notice Preference Definition" field but no entry in the "Age Range" field. The cross table considers the two attributes 'Age Range' and 'Notice Preference Definition', and thus only the records where both fields are filled in.
(This holds regardless of whether the cross table is computed on the "full" variable "df" or on "benachrichtigung" — feel free to try it out!)
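This effect can be reproduced on a tiny invented example: a record with a missing value in either of the two attributes simply does not appear in the cross table:

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({
    'pref': ['email', 'email', 'email', 'print'],
    'age':  ['0 to 9 years', '10 to 19 years', np.nan, '0 to 9 years'],
})
ct = pd.crosstab(toy['pref'], toy['age'], margins=True)
# the email row sums to 2, not 3: the record with the missing age is dropped
print(ct)
```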
----
```
# Now we look at this in percentage terms:
pd.crosstab(
benachrichtigung['Notice Preference Definition'],
benachrichtigung['Age Range'],
margins=True, normalize=0
)
```
# Answer to question 3:
We can answer the question as follows.
Of all users who want to be notified by email,
- approx. 9% are in the age class 0 to 9 years,
- approx. 12% are in the age class 10 to 19 years,
...
- approx. 6% are in the age class 65 to 74 years,
- approx. 2% are in the age class 75 years and over.
By comparison, of all users who want to be notified by post,
- approx. 13% are in the age class 0 to 9 years,
- approx. 17% are in the age class 10 to 19 years,
...
- approx. 86% are in the age class 65 to 74 years,
- approx. 65% are in the age class 75 years and over.
# Question 4: How many checkouts are made on average per age group and per year? Is the dispersion the same across groups?
The columns 'Total Checkouts' and 'Age Range' are relevant here. The first is a metric attribute, the second is ordinal.
We define a new column giving a user's average yearly checkouts.
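The cell below relies on `pd.to_numeric(..., errors='coerce')`; as a quick illustration on invented data, coercion turns unparseable entries into `NaN` instead of raising an error:

```python
import pandas as pd

s = pd.Series(['2015', '2016', 'None'])
coerced = pd.to_numeric(s, errors='coerce')
# '2015' and '2016' become numbers, 'None' becomes NaN
print(coerced)
```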
```
df['Circulation Active Year'] = pd.to_numeric(df['Circulation Active Year'], errors='coerce')
df['Membership Duration Years'] = (df['Circulation Active Year'] - df['Year Patron Registered'])+1
# for simplicity, whole years are assumed here
df["Average Checkouts per Year"] =(df['Total Checkouts']/df['Membership Duration Years'])
import matplotlib.pyplot as plt
%matplotlib inline
sns.set()
plt.figure(figsize=(16, 6))
my_order =['0 to 9 years', '10 to 19 years', '20 to 24 years', '25 to 34 years', '35 to 44 years', '45 to 54 years', '55 to 59 years', '60 to 64 years', '65 to 74 years', '75 years and over']
ax = sns.boxplot(y='Average Checkouts per Year', x='Age Range', data=df[df['Average Checkouts per Year']<df['Average Checkouts per Year'].quantile(0.95)], fliersize=0.5, whis=1.5, order=my_order)
# assigning to a new name avoids shadowing the pyplot module imported as plt
```
# Answer to question 4:
The answer can be read off the chart. It shows quite nicely that there is hardly any relationship between average checkouts per year and the age groups. Only the age groups 20 to 24 and 25 to 34 borrow visibly fewer books than the other age groups.
# Question 5: Which age group renews how often on average?
# Answer to question 5
```
Age_Range_0_9 = df.loc[df['Age Range'] == '0 to 9 years']
Age_Range_10_19 = df.loc[df['Age Range'] == '10 to 19 years']
Age_Range_20_24 = df.loc[df['Age Range'] == '20 to 24 years']
Age_Range_25_34 = df.loc[df['Age Range'] == '25 to 34 years']
Age_Range_35_44 = df.loc[df['Age Range'] == '35 to 44 years']
Age_Range_45_54 = df.loc[df['Age Range'] == '45 to 54 years']
Age_Range_55_59 = df.loc[df['Age Range'] == '55 to 59 years']
Age_Range_60_64 = df.loc[df['Age Range'] == '60 to 64 years']
Age_Range_65_74 = df.loc[df['Age Range'] == '65 to 74 years']
Age_Range_75 = df.loc[df['Age Range'] == '75 years and over']
# helper variables
print('On average, checkouts per year in the age groups are as follows:')
print('0 to 9 year-olds: ' + str(Age_Range_0_9['Average Checkouts per Year'].mean()))
print('10 to 19 year-olds: ' + str(Age_Range_10_19['Average Checkouts per Year'].mean()))
print('...')
print('65 to 74 year-olds: ' + str(Age_Range_65_74['Average Checkouts per Year'].mean()))
print('75 and over: ' + str(Age_Range_75['Average Checkouts per Year'].mean()))
```
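The ten helper variables above can be replaced by a single `groupby`. A sketch on invented toy data using the same column names:

```python
import pandas as pd

toy = pd.DataFrame({
    'Age Range': ['0 to 9 years', '0 to 9 years', '10 to 19 years'],
    'Average Checkouts per Year': [10.0, 20.0, 30.0],
})
# one mean per age group, in a single expression
means = toy.groupby('Age Range')['Average Checkouts per Year'].mean()
print(means)
```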
----
# Question 6: What is the distribution of age groups in the Mission District?
The variables 'Home Library Definition' and 'Age Range' are relevant here. The former is nominal, the latter ordinal. So here, too, expected value and variance cannot be computed. For a first graphical overview, however, we can simply count how many users of each age group belong to the Mission District.
```
mission_district = df.loc[(df['Home Library Definition'] == "Mission")] # helper variable
my_order =['0 to 9 years', '10 to 19 years', '20 to 24 years', '25 to 34 years', '35 to 44 years', '45 to 54 years', '55 to 59 years', '60 to 64 years', '65 to 74 years', '75 years and over']
sns.catplot(x='Age Range', kind="count", data=mission_district, order=my_order, aspect=3)
```
The chart above already answers the question quite well, but we can also obtain the exact values via a cross table:
```
pd.crosstab(
mission_district['Home Library Definition'],
mission_district['Age Range'],
margins=True, normalize=0
)
```
# Answer to question 6
The distribution of age groups in the Mission District is as follows:
- approx. 11% of users are between 0 and 9 years old,
- approx. 16% of users are between 10 and 19 years old,
...
- approx. 4% of users are between 65 and 74 years old,
- approx. 1% of users are 75 years or older.
----
# Question 7: Explanation of normalization of cross tables (chapter 3 of the course notes)
We use the example from above (age categories and notification method from question 3).
First, a non-normalized cross table:
```
# First, the cross table without normalization:
pd.crosstab(
benachrichtigung['Notice Preference Definition'],
benachrichtigung['Age Range'],
margins=True
)
# the order of the columns 'Notice Preference Definition' and 'Age Range' does not matter,
# feel free to try it out
# Now we normalize by rows (normalize=0)
pd.crosstab(
benachrichtigung['Notice Preference Definition'],
benachrichtigung['Age Range'],
margins=True, normalize=0
)
```
Here we see that each row sums to 1. That is, we can interpret the table as percentages by row.
For example, first row: of all users who want to be notified by email, approx. 13% (0.128020 out of 1) are in the age group 45 to 54 years.
```
# For comparison, we normalize by columns (normalize=1)
pd.crosstab(
benachrichtigung['Notice Preference Definition'],
benachrichtigung['Age Range'],
margins=True, normalize=1
)
```
Here we see that each column sums to 1. That is, we can interpret the table as percentages by column.
For example, first column: of all users between 0 and 9 years, approx. 13% (0.130813 out of 1) want to be notified by post.
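Both normalization directions can be verified on a tiny invented table: with `normalize=0` each row of the body sums to 1, with `normalize=1` each column does:

```python
import pandas as pd

toy = pd.DataFrame({
    'pref': ['email', 'email', 'print', 'print', 'print'],
    'age':  ['young', 'old', 'young', 'old', 'old'],
})
by_rows = pd.crosstab(toy['pref'], toy['age'], normalize=0)
by_cols = pd.crosstab(toy['pref'], toy['age'], normalize=1)
print(by_rows.sum(axis=1))  # each row sums to 1
print(by_cols.sum(axis=0))  # each column sums to 1
```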
----
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.

# Train and explain models locally and deploy model and scoring explainer
_**This notebook illustrates how to use the Azure Machine Learning Interpretability SDK to deploy a locally-trained model and its corresponding scoring explainer to Azure Container Instances (ACI) as a web service.**_
Problem: IBM employee attrition classification with scikit-learn (train and explain a model locally and use Azure Container Instances (ACI) for deploying your model and its corresponding scoring explainer as a web service.)
---
## Table of Contents
1. [Introduction](#Introduction)
1. [Setup](#Setup)
1. [Run model explainer locally at training time](#Explain)
1. Apply feature transformations
1. Train a binary classification model
1. Explain the model on raw features
1. Generate global explanations
1. Generate local explanations
1. [Visualize explanations](#Visualize)
1. [Deploy model and scoring explainer](#Deploy)
1. [Next steps](#Next)
## Introduction
This notebook showcases how to train and explain a classification model locally, and deploy the trained model and its corresponding explainer to Azure Container Instances (ACI).
It demonstrates the API calls that you need to make to submit a run for training and explaining a model to AMLCompute, download the computed explanations, and visualize the global and local explanations via a visualization dashboard that provides an interactive way of discovering patterns in model predictions and downloaded explanations. It also demonstrates how to use Azure Machine Learning MLOps capabilities to deploy your model and its corresponding explainer.
We will showcase one of the tabular data explainers: TabularExplainer (SHAP) and follow these steps:
1. Develop a machine learning script in Python which involves the training script and the explanation script.
2. Run the script locally.
3. Use the interpretability toolkit’s visualization dashboard to visualize predictions and their explanation. If the metrics and explanations don't indicate a desired outcome, loop back to step 1 and iterate on your scripts.
4. After a satisfactory run is found, create a scoring explainer and register the persisted model and its corresponding explainer in the model registry.
5. Develop a scoring script.
6. Create an image and register it in the image registry.
7. Deploy the image as a web service in Azure.
## Setup
Make sure you go through the [configuration notebook](../../../../configuration.ipynb) first if you haven't.
```
# Check core SDK version number
import azureml.core
print("SDK version:", azureml.core.VERSION)
```
## Initialize a Workspace
Initialize a workspace object from persisted configuration
```
from azureml.core import Workspace
ws = Workspace.from_config()
print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep='\n')
```
## Explain
Create An Experiment: **Experiment** is a logical container in an Azure ML Workspace. It hosts run records which can include run metrics and output artifacts from your experiments.
```
from azureml.core import Experiment
experiment_name = 'explain_model_at_scoring_time'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.start_logging()
# Get IBM attrition data
import os
import pandas as pd
outdirname = 'dataset.6.21.19'
try:
    from urllib import urlretrieve
except ImportError:
    from urllib.request import urlretrieve
import zipfile
zipfilename = outdirname + '.zip'
urlretrieve('https://publictestdatasets.blob.core.windows.net/data/' + zipfilename, zipfilename)
with zipfile.ZipFile(zipfilename, 'r') as unzip:
    unzip.extractall('.')
attritionData = pd.read_csv('./WA_Fn-UseC_-HR-Employee-Attrition.csv')
from sklearn.model_selection import train_test_split
import joblib
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier
from interpret.ext.blackbox import TabularExplainer
os.makedirs('./outputs', exist_ok=True)
# Dropping Employee count as all values are 1 and hence attrition is independent of this feature
attritionData = attritionData.drop(['EmployeeCount'], axis=1)
# Dropping Employee Number since it is merely an identifier
attritionData = attritionData.drop(['EmployeeNumber'], axis=1)
attritionData = attritionData.drop(['Over18'], axis=1)
# Since all values are 80
attritionData = attritionData.drop(['StandardHours'], axis=1)
# Converting target variables from string to numerical values
target_map = {'Yes': 1, 'No': 0}
attritionData["Attrition_numerical"] = attritionData["Attrition"].apply(lambda x: target_map[x])
target = attritionData["Attrition_numerical"]
attritionXData = attritionData.drop(['Attrition_numerical', 'Attrition'], axis=1)
# Creating dummy columns for each categorical feature
categorical = []
for col, value in attritionXData.items():  # iteritems() was removed in pandas 2.0
    if value.dtype == 'object':
        categorical.append(col)
# Store the numerical columns in a list numerical
numerical = attritionXData.columns.difference(categorical)
# We create the preprocessing pipelines for both numeric and categorical data.
numeric_transformer = Pipeline(steps=[
('imputer', SimpleImputer(strategy='median')),
('scaler', StandardScaler())])
categorical_transformer = Pipeline(steps=[
('imputer', SimpleImputer(strategy='constant', fill_value='missing')),
('onehot', OneHotEncoder(handle_unknown='ignore'))])
transformations = ColumnTransformer(
transformers=[
('num', numeric_transformer, numerical),
('cat', categorical_transformer, categorical)])
# Append classifier to preprocessing pipeline.
# Now we have a full prediction pipeline.
clf = Pipeline(steps=[('preprocessor', transformations),
('classifier', RandomForestClassifier())])
# Split data into train and test
x_train, x_test, y_train, y_test = train_test_split(attritionXData,
target,
test_size=0.2,
random_state=0,
stratify=target)
# Preprocess the data and fit the classification model
clf.fit(x_train, y_train)
model = clf.steps[-1][1]
model_file_name = 'log_reg.pkl'
# Save model in the outputs folder so it automatically get uploaded
with open(model_file_name, 'wb') as file:
    joblib.dump(value=clf, filename=os.path.join('./outputs/',
                                                 model_file_name))
# Explain predictions on your local machine
tabular_explainer = TabularExplainer(model,
initialization_examples=x_train,
features=attritionXData.columns,
classes=["Not leaving", "leaving"],
transformations=transformations)
# Explain overall model predictions (global explanation)
# Passing in test dataset for evaluation examples - note it must be a representative sample of the original data
# x_train can be passed as well, but with more examples explanations it will
# take longer although they may be more accurate
global_explanation = tabular_explainer.explain_global(x_test)
from azureml.interpret.scoring.scoring_explainer import TreeScoringExplainer, save
# ScoringExplainer
scoring_explainer = TreeScoringExplainer(tabular_explainer)
# Pickle scoring explainer locally
save(scoring_explainer, exist_ok=True)
# Register original model
run.upload_file('original_model.pkl', os.path.join('./outputs/', model_file_name))
original_model = run.register_model(model_name='local_deploy_model',
model_path='original_model.pkl')
# Register scoring explainer
run.upload_file('IBM_attrition_explainer.pkl', 'scoring_explainer.pkl')
scoring_explainer_model = run.register_model(model_name='IBM_attrition_explainer', model_path='IBM_attrition_explainer.pkl')
```
## Visualize
Visualize the explanations
```
from raiwidgets import ExplanationDashboard
ExplanationDashboard(global_explanation, clf, dataset=x_test)
```
## Deploy
Deploy Model and ScoringExplainer.
Please note that you must indicate azureml-defaults with version >= 1.0.45 as a pip dependency, because it contains the functionality needed to host the model as a web service.
```
from azureml.core.conda_dependencies import CondaDependencies
# azureml-defaults is required to host the model as a web service.
azureml_pip_packages = [
'azureml-defaults', 'azureml-core', 'azureml-telemetry',
'azureml-interpret'
]
# Note: this is to pin the scikit-learn and pandas versions to be same as notebook.
# In production scenario user would choose their dependencies
import pkg_resources
available_packages = pkg_resources.working_set
sklearn_ver = None
pandas_ver = None
for dist in available_packages:
    if dist.key == 'scikit-learn':
        sklearn_ver = dist.version
    elif dist.key == 'pandas':
        pandas_ver = dist.version
sklearn_dep = 'scikit-learn'
pandas_dep = 'pandas'
if sklearn_ver:
    sklearn_dep = 'scikit-learn=={}'.format(sklearn_ver)
if pandas_ver:
    pandas_dep = 'pandas=={}'.format(pandas_ver)
# Specify CondaDependencies obj
# The CondaDependencies specifies the conda and pip packages that are installed in the environment
# the submitted job is run in. Note the remote environment(s) needs to be similar to the local
# environment, otherwise if a model is trained or deployed in a different environment this can
# cause errors. Please take extra care when specifying your dependencies in a production environment.
myenv = CondaDependencies.create(pip_packages=['pyyaml', sklearn_dep, pandas_dep] + azureml_pip_packages,
pin_sdk_version=False)
with open("myenv.yml", "w") as f:
    f.write(myenv.serialize_to_string())
with open("myenv.yml", "r") as f:
    print(f.read())
from azureml.core.model import Model
# Retrieve scoring explainer for deployment
scoring_explainer_model = Model(ws, 'IBM_attrition_explainer')
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.environment import Environment
from azureml.exceptions import WebserviceException
aciconfig = AciWebservice.deploy_configuration(cpu_cores=1,
memory_gb=1,
tags={"data": "IBM_Attrition",
"method" : "local_explanation"},
description='Get local explanations for IBM Employee Attrition data')
myenv = Environment.from_conda_specification(name="myenv", file_path="myenv.yml")
inference_config = InferenceConfig(entry_script="score_local_explain.py", environment=myenv)
# Use configs and models generated above
service = Model.deploy(ws, 'model-scoring-deploy-local', [scoring_explainer_model, original_model], inference_config, aciconfig)
try:
    service.wait_for_deployment(show_output=True)
except WebserviceException as e:
    print(e.message)
    print(service.get_logs())
    raise
import requests
import json
# Create data to test service with
sample_data = '{"Age":{"899":49},"BusinessTravel":{"899":"Travel_Rarely"},"DailyRate":{"899":1098},"Department":{"899":"Research & Development"},"DistanceFromHome":{"899":4},"Education":{"899":2},"EducationField":{"899":"Medical"},"EnvironmentSatisfaction":{"899":1},"Gender":{"899":"Male"},"HourlyRate":{"899":85},"JobInvolvement":{"899":2},"JobLevel":{"899":5},"JobRole":{"899":"Manager"},"JobSatisfaction":{"899":3},"MaritalStatus":{"899":"Married"},"MonthlyIncome":{"899":18711},"MonthlyRate":{"899":12124},"NumCompaniesWorked":{"899":2},"OverTime":{"899":"No"},"PercentSalaryHike":{"899":13},"PerformanceRating":{"899":3},"RelationshipSatisfaction":{"899":3},"StockOptionLevel":{"899":1},"TotalWorkingYears":{"899":23},"TrainingTimesLastYear":{"899":2},"WorkLifeBalance":{"899":4},"YearsAtCompany":{"899":1},"YearsInCurrentRole":{"899":0},"YearsSinceLastPromotion":{"899":0},"YearsWithCurrManager":{"899":0}}'
headers = {'Content-Type':'application/json'}
# Send request to service
print("POST to url", service.scoring_uri)
resp = requests.post(service.scoring_uri, sample_data, headers=headers)
# Can convert back to Python objects from the JSON string if desired
print("prediction:", resp.text)
result = json.loads(resp.text)
# Plot the feature importance for the prediction
import numpy as np
import matplotlib.pyplot as plt; plt.rcdefaults()
labels = json.loads(sample_data)
labels = labels.keys()
objects = labels
y_pos = np.arange(len(objects))
performance = result["local_importance_values"][0][0]
plt.bar(y_pos, performance, align='center', alpha=0.5)
plt.xticks(y_pos, objects)
locs, labels = plt.xticks()
plt.setp(labels, rotation=90)
plt.ylabel('Feature impact - leaving vs not leaving')
plt.title('Local feature importance for prediction')
plt.show()
service.delete()
```
## Next
Learn about other use cases of the explain package on a:
1. [Training time: regression problem](https://github.com/interpretml/interpret-community/blob/master/notebooks/explain-regression-local.ipynb)
1. [Training time: binary classification problem](https://github.com/interpretml/interpret-community/blob/master/notebooks/explain-binary-classification-local.ipynb)
1. [Training time: multiclass classification problem](https://github.com/interpretml/interpret-community/blob/master/notebooks/explain-multiclass-classification-local.ipynb)
1. Explain models with engineered features:
1. [Simple feature transformations](https://github.com/interpretml/interpret-community/blob/master/notebooks/simple-feature-transformations-explain-local.ipynb)
1. [Advanced feature transformations](https://github.com/interpretml/interpret-community/blob/master/notebooks/advanced-feature-transformations-explain-local.ipynb)
1. [Save model explanations via Azure Machine Learning Run History](../run-history/save-retrieve-explanations-run-history.ipynb)
1. [Run explainers remotely on Azure Machine Learning Compute (AMLCompute)](../remote-explanation/explain-model-on-amlcompute.ipynb)
1. [Inferencing time: deploy a remotely-trained model and explainer](./train-explain-model-on-amlcompute-and-deploy.ipynb)
1. [Inferencing time: deploy a locally-trained keras model and explainer](./train-explain-model-keras-locally-and-deploy.ipynb)
```
!pip install chart_studio
import plotly.graph_objects as go
import plotly.offline as offline_py
from wordcloud import WordCloud
import matplotlib.pyplot as plt
import plotly.figure_factory as ff
import numpy as np
%matplotlib inline
import pandas as pd
df = pd.read_csv("https://raw.githubusercontent.com/DSEI21000-S21/project-product-price-prediction/main/data/random_samples/stratified_sampling_data_by_price_whigh_sz50000_1619218354.csv")
# size of dataset
print('The size of the dataset is: {} \n'.format(df.shape))
# different data types in the dataset
print('The types of the dataset: {}'.format(df.dtypes))
df.head()
df.price.describe()
# most popular categories -- Women, electronics and men
x = df['c1'].value_counts().index.values.astype('str')[:15]
y = df['c1'].value_counts().values[:15]
pct = [("%.2f"%(v*100))+"%" for v in (y/len(df))] [:15]
trace1 = go.Bar(x=x, y=y, text=pct)
layout = dict(title= 'Number of Items by Main Category',
yaxis = dict(title='Count'),
xaxis = dict(title='Category'))
fig=dict(data=[trace1], layout=layout)
offline_py.iplot(fig)
x = df['brand_name'].value_counts().index.values.astype('str')[:15]
y = df['brand_name'].value_counts().values[:15]
pct = [("%.2f"%(v*100))+"%" for v in (y/len(df))] [:15]
colorscale = [[0, '#FAEE1C'], [0.33, '#F3558E'], [0.66, '#9C1DE7'], [1, '#581B98']]
# most popular brands -- Nike & PINK
trace1 = go.Bar(x=x, y=y, text=pct, marker=dict(color = y, colorscale=colorscale, showscale=True))
layout = dict(title= 'Number of Items by brand name',
yaxis = dict(title='Count'),
xaxis = dict(title='Brand'))
fig=dict(data=[trace1], layout=layout)
offline_py.iplot(fig)
# visualize which words have the highest frequencies within the top-1 category
description = df.item_description[df.c1 == 'women']
plt.subplots(figsize = (8,8))
wordcloud = WordCloud (
background_color = 'white',
width = 512,
height = 384
).generate(' '.join(description))
plt.imshow(wordcloud) # image show
plt.axis('off') # to off the axis of x and y
plt.title('Top Words -- Women')
plt.show()
# relationship between price and shipping
dataframe = df[df.brand_name == 'Nike'][:100]
datawomen = dataframe.loc[:, ['price', 'shipping']]
datawomen["index"] = np.arange(1,len(datawomen)+1)
fig = ff.create_scatterplotmatrix(datawomen, diag='box', index='index',colormap='Portland',
colormap_type='cat',
height=700, width=700)
offline_py.iplot(fig)
```
```
import numpy as np
import pandas as pd
df = pd.read_csv('athlete_events.csv')
region_df = pd.read_csv('noc_regions.csv')
df
df.shape
df = df[df['Season'] == 'Summer']
df.tail()
df.shape
region_df.tail()
df = df.merge(region_df, on = 'NOC', how='left')
df.tail()
df['region'].unique().shape
df.isnull().sum()
df.duplicated().sum()
df.drop_duplicates(inplace = True)
df.duplicated().sum()
df['Medal'].value_counts()
df = pd.concat([df ,pd.get_dummies(df['Medal'])], axis =1 )
df.shape
df.tail()
df.groupby('NOC').sum()[['Gold', 'Silver','Bronze']].sort_values('Gold', ascending = False).reset_index()
## NOTE: the tally above over-counts medals. In team events (e.g. hockey), every
## player's row carries a medal, so one team Gold is counted once per player.
## Fix: drop duplicate rows over [Team, NOC, Games, Year, City, Sport, Event, Medal].
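## A minimal, self-contained illustration of the over-count (toy data, not from
## athlete_events.csv): one team Gold shared by 3 players inflates a naive
## per-row count to 3, while deduplicating on event-level columns restores 1.
toy = pd.DataFrame({'NOC':   ['IND'] * 3 + ['USA'],
                    'Year':  [2000] * 4,
                    'Event': ['Hockey'] * 3 + ['100m'],
                    'Medal': ['Gold'] * 4})
print(toy.groupby('NOC')['Medal'].count())      # naive: IND -> 3 (inflated)
toy_dedup = toy.drop_duplicates(subset=['NOC', 'Year', 'Event', 'Medal'])
print(toy_dedup.groupby('NOC')['Medal'].count())  # deduplicated: IND -> 1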
medal_tally = df.drop_duplicates(subset = ['Team' ,'NOC', 'Games','Year', 'City','Sport','Event','Medal'])
medal_tally = medal_tally.groupby('NOC').sum()[['Gold', 'Silver', 'Bronze']].sort_values('Gold',ascending =False).reset_index()
medal_tally[medal_tally['NOC'] == 'IND']
medal_tally['Total'] = medal_tally['Gold'] + medal_tally['Silver'] + medal_tally['Bronze']
medal_tally
years =df['Year'].unique().tolist()
years.sort()
years.insert(0,'overall')
years
country =np.unique(df['region'].dropna().values).tolist()
country
country.insert(0,'Overall')
country
medal_df = df.drop_duplicates(subset = ['Team' ,'NOC', 'Games','Year', 'City','Sport','Event','Medal'])
def fetch_metal_tally(df ,year , country):
medal_df = df.drop_duplicates(subset = ['Team' ,'NOC', 'Games','Year', 'City','Sport','Event','Medal'])
flag =0
if year =='Overall' and country =='Overall':
temp_df = medal_df
if year =='Overall' and country !='Overall':
flag =1
temp_df = medal_df[medal_df['region'] == country]
if year !='Overall' and country =='Overall':
temp_df = medal_df[medal_df['Year'] == int(year)]
if year !='Overall' and country !='Overall':
temp_df = medal_df[(medal_df['Year'] == int(year)) & (medal_df['region'] ==country)]
if flag ==1 :
x = temp_df.groupby('Year').sum()[['Gold', 'Silver', 'Bronze']].sort_values('Year').reset_index()
else:
x = temp_df.groupby('region').sum()[['Gold', 'Silver', 'Bronze']].sort_values('Gold',ascending =False).reset_index()
x [ 'Total' ] = x [ 'Gold' ] + x [ 'Silver' ] + x [ 'Bronze' ]
x [ 'Gold' ] = x [ 'Gold' ].astype('int')
x [ 'Silver' ] = x [ 'Silver' ].astype('int')
x [ 'Bronze' ] = x [ 'Bronze' ].astype('int')
x [ 'Total' ] = x [ 'Total' ].astype('int')
return x
fetch_metal_tally(df,year ="2016" ,country='India')
medal_df = df.drop_duplicates(subset = ['Team' ,'NOC', 'Games','Year', 'City','Sport','Event','Medal'])
medal_df[medal_df['region']== 'India']
## OVERALL ANALYSIS
df['Year'].unique().shape[0] -1
df['City'].unique()
df['Sport'].unique()
df['Name'].unique().shape
nation_over_time = df.drop_duplicates(['Year', 'region'])['Year'].value_counts().reset_index().sort_values('index')
nation_over_time
!pip install plotly
import plotly.express as px
df.drop_duplicates(['Year', 'Event'])['Year'].value_counts().reset_index().sort_values('index')
x = df.drop_duplicates(['Year','Sport','Event'])
import seaborn as sns
import matplotlib.pyplot as plt
plt.figure(figsize = (25, 25))
sns.heatmap(x.pivot_table(index = 'Sport', columns = 'Year', values ='Event', aggfunc= 'count').fillna(0).astype('int'),annot = True)
def most_successful(df, sport):
temp_df =df.dropna(subset =['Medal'])
if sport != 'Overall':
temp_df =temp_df[temp_df['Sport'] == sport]
x= temp_df['Name'].value_counts().reset_index().merge(df,left_on ='index',right_on = 'Name',how='left')[['index','Name_x','Sport','region']].drop_duplicates('index')
x.rename(columns ={'index':'Name','Name_x':'Medals'},inplace=True)
return x
most_successful(df,'Gymnastics')
####COUNTRY WISE
temp_df = df.dropna(subset = ['Medal'])
temp_df.drop_duplicates(subset = ['Team' ,'NOC', 'Games','Year', 'City','Sport','Event','Medal'],inplace =True)
new_df =temp_df[temp_df['region']=='USA']
new_df.groupby('Year').count()['Medal'].reset_index()
new_df = temp_df[temp_df['region'] == 'USA']
final_df = new_df.groupby('Year').count()['Medal'].reset_index()
fig = px.line(final_df,x='Year', y='Medal')
fig.show()
df.head()
new_df =temp_df[temp_df['region']=='UK']
plt.figure(figsize =(15,15))
sns.heatmap(new_df.pivot_table(index ='Sport',columns ='Year', values ='Medal', aggfunc='count').fillna(0).astype('int'),annot =True)
def most_successful(df, country):
temp_df =df.dropna(subset =['Medal'])
if country != 'Overall':
temp_df =temp_df[temp_df['region'] == country]
x= temp_df['Name'].value_counts().reset_index().merge(df,left_on ='index',right_on = 'Name',how='left')[['index','Name_x','Sport','region']].drop_duplicates('index')
x.rename(columns ={'index':'Name','Name_x':'Medals'},inplace=True)
return x
most_successful(df,'China')
###Athelete Analysis
import plotly.figure_factory as ff
athlete_df = df.drop_duplicates(subset =['Name','region'])
x1 = athlete_df['Age'].dropna()
x2 = athlete_df[athlete_df['Medal'] == 'Gold']['Age'].dropna()
x3 = athlete_df[athlete_df['Medal'] == 'Silver']['Age'].dropna()
x4 = athlete_df[athlete_df['Medal'] == 'Bronze']['Age'].dropna()
fig = ff.create_distplot([x1,x2,x3,x4],['Overall Age','Gold Medalist','Silver Medalist','Bronze Medalist'], show_hist=False ,show_rug=False)
fig.show()
athlete_df['Medal'].fillna('No Medal',inplace = True)
plt.figure(figsize =(10 , 10))
temp_df =athlete_df[athlete_df['Sport'] =='Athletics']
sns.scatterplot(temp_df['Weight'],temp_df['Height'],hue =temp_df['Medal'],style = temp_df['Sex'],s =100,)
men = athlete_df[athlete_df['Sex'] == 'M'].groupby('Year').count()['Name'].reset_index()
women = athlete_df[athlete_df['Sex'] == 'F'].groupby('Year').count()['Name'].reset_index()
final = men.merge(women ,on ='Year')
final.rename(columns ={'Name_x':'Male','Name_y':'Female'}, inplace =True)
fig = px.line(final , x = 'Year', y =['Male','Female'])
fig.show()
final.fillna(0)
```
Originally taken from https://www.easy-tensorflow.com and adapted for the purposes of this course.
# Imports
```
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
```
# Load the MNIST dataset
## Data dimensions
```
from tensorflow.examples.tutorials.mnist import input_data
img_h = img_w = 28 # MNIST images are 28x28
img_size_flat = img_h * img_w # 28x28=784, the total number of pixels
n_classes = 10 # Number of classes, one class per digit
n_channels = 1
```
## Helper functions to load the MNIST data
```
def load_data(mode='train'):
"""
Function to (download and) load the MNIST data
:param mode: train or test
:return: images and the corresponding labels
"""
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
if mode == 'train':
x_train, y_train, x_valid, y_valid = mnist.train.images, mnist.train.labels, \
mnist.validation.images, mnist.validation.labels
x_train, _ = reformat(x_train, y_train)
x_valid, _ = reformat(x_valid, y_valid)
return x_train, y_train, x_valid, y_valid
elif mode == 'test':
x_test, y_test = mnist.test.images, mnist.test.labels
x_test, _ = reformat(x_test, y_test)
return x_test, y_test
def reformat(x, y):
"""
Reformats the data to the format acceptable for convolutional layers
:param x: input array
:param y: corresponding labels
:return: reshaped input and labels
"""
img_size, num_ch, num_class = int(np.sqrt(x.shape[-1])), 1, len(np.unique(np.argmax(y, 1)))
dataset = x.reshape((-1, img_size, img_size, num_ch)).astype(np.float32)
labels = (np.arange(num_class) == y[:, None]).astype(np.float32)
return dataset, labels
def randomize(x, y):
""" Randomizes the order of data samples and their corresponding labels"""
permutation = np.random.permutation(y.shape[0])
shuffled_x = x[permutation, :, :, :]
shuffled_y = y[permutation]
return shuffled_x, shuffled_y
def get_next_batch(x, y, start, end):
x_batch = x[start:end]
y_batch = y[start:end]
return x_batch, y_batch
```
## Load the data and display the sizes
Now we can use the helper function defined above in "train" mode, which loads the training and validation images and their corresponding labels. We'll also display their sizes:
```
x_train, y_train, x_valid, y_valid = load_data(mode='train')
print("Size of:")
print("- Training-set:\t\t{}".format(len(y_train)))
print("- Validation-set:\t{}".format(len(y_valid)))
```
# Hyperparameters
```
logs_path = "./logs" # path to the folder that we want to save the logs for Tensorboard
lr = 0.001 # The optimization initial learning rate
epochs = 10 # Total number of training epochs
batch_size = 100 # Training batch size
display_freq = 100 # Frequency of displaying the training results
```
# Network configuration
```
# 1st Convolutional Layer
filter_size1 = 5 # Convolution filters are 5 x 5 pixels.
num_filters1 = 16 # There are 16 of these filters.
stride1 = 1 # The stride of the sliding window
# 2nd Convolutional Layer
filter_size2 = 5 # Convolution filters are 5 x 5 pixels.
num_filters2 = 32 # There are 32 of these filters.
stride2 = 1 # The stride of the sliding window
# Fully-connected layer.
h1 = 128 # Number of neurons in fully-connected layer.
```
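With these settings we can already trace the tensor shapes: "SAME"-padded convolutions preserve the 28×28 spatial size, and each 2×2/stride-2 max-pool used later in the notebook halves it, so the flattened vector entering the fully-connected layer has 7 · 7 · 32 = 1568 elements. A quick sanity check of that arithmetic (plain Python, not part of the network code):

```python
# Trace the spatial size through two conv+pool stages: "SAME" convolution keeps
# the size, each 2x2/stride-2 pool halves it, then flatten with 32 feature maps.
size = 28                  # img_h = img_w = 28
for _ in range(2):         # two conv + pool stages
    size //= 2             # 28 -> 14 -> 7
flat_features = size * size * 32   # 32 = num_filters2
print(flat_features)  # 1568
```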
# Create network helper functions
## Helper functions for creating new variables
```
# weight and bais wrappers
def weight_variable(shape):
"""
Create a weight variable with appropriate initialization
:param name: weight name
:param shape: weight shape
:return: initialized weight variable
"""
initer = tf.truncated_normal_initializer(stddev=0.01)
return tf.get_variable('W',
dtype=tf.float32,
shape=shape,
initializer=initer)
def bias_variable(shape):
"""
Create a bias variable with appropriate initialization
:param name: bias variable name
:param shape: bias variable shape
:return: initialized bias variable
"""
initial = tf.constant(0., shape=shape, dtype=tf.float32)
return tf.get_variable('b',
dtype=tf.float32,
initializer=initial)
```
## Helper-function for creating a new Convolutional Layer
```
def conv_layer(x, filter_size, num_filters, stride, name):
"""
Create a 2D convolution layer
:param x: input from previous layer
:param filter_size: size of each filter
:param num_filters: number of filters (or output feature maps)
:param stride: filter stride
:param name: layer name
:return: The output array
"""
with tf.variable_scope(name):
num_in_channel = x.get_shape().as_list()[-1]
shape = [filter_size, filter_size, num_in_channel, num_filters]
W = weight_variable(shape=shape)
tf.summary.histogram('weight', W)
b = bias_variable(shape=[num_filters])
tf.summary.histogram('bias', b)
layer = tf.nn.conv2d(x, W,
strides=[1, stride, stride, 1],
padding="SAME")
layer += b
return tf.nn.relu(layer)
```
## Helper-function for creating a new Max-pooling Layer
```
def max_pool(x, ksize, stride, name):
"""
Create a max pooling layer
:param x: input to max-pooling layer
:param ksize: size of the max-pooling filter
:param stride: stride of the max-pooling filter
:param name: layer name
:return: The output array
"""
return tf.nn.max_pool(x,
ksize=[1, ksize, ksize, 1],
strides=[1, stride, stride, 1],
padding="SAME",
name=name)
```
# Helper-function for flattening a layer
```
def flatten_layer(layer):
"""
Flattens the output of the convolutional layer to be fed into fully-connected layer
:param layer: input array
:return: flattened array
"""
with tf.variable_scope('Flatten_layer'):
layer_shape = layer.get_shape()
num_features = layer_shape[1:4].num_elements()
layer_flat = tf.reshape(layer, [-1, num_features])
return layer_flat
```
## Helper-function for creating a new fully-connected Layer
```
def fc_layer(x, num_units, name, use_relu=True):
"""
Create a fully-connected layer
:param x: input from previous layer
:param num_units: number of hidden units in the fully-connected layer
:param name: layer name
:param use_relu: boolean to add ReLU non-linearity (or not)
:return: The output array
"""
with tf.variable_scope(name):
in_dim = x.get_shape()[1]
W = weight_variable(shape=[in_dim, num_units])
tf.summary.histogram('weight', W)
b = bias_variable(shape=[num_units])
tf.summary.histogram('bias', b)
layer = tf.matmul(x, W)
layer += b
if use_relu:
layer = tf.nn.relu(layer)
return layer
```
# Network graph
## Placeholders for the inputs (x) and corresponding labels (y)
```
with tf.name_scope('Input'):
x = tf.placeholder(tf.float32, shape=[None, img_h, img_w, n_channels], name='X')
y = tf.placeholder(tf.float32, shape=[None, n_classes], name='Y')
```
## Create the network layers
```
conv1 = conv_layer(x, filter_size1, num_filters1, stride1, name='conv1')
pool1 = max_pool(conv1, ksize=2, stride=2, name='pool1')
conv2 = conv_layer(pool1, filter_size2, num_filters2, stride2, name='conv2')
pool2 = max_pool(conv2, ksize=2, stride=2, name='pool2')
layer_flat = flatten_layer(pool2)
fc1 = fc_layer(layer_flat, h1, 'FC1', use_relu=True)
output_logits = fc_layer(fc1, n_classes, 'OUT', use_relu=False)
```
## Define the loss function, optimizer, accuracy, and predicted class
```
with tf.variable_scope('Train'):
with tf.variable_scope('Loss'):
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=output_logits), name='loss')
tf.summary.scalar('loss', loss)
with tf.variable_scope('Optimizer'):
optimizer = tf.train.AdamOptimizer(learning_rate=lr, name='Adam-op').minimize(loss)
with tf.variable_scope('Accuracy'):
correct_prediction = tf.equal(tf.argmax(output_logits, 1), tf.argmax(y, 1), name='correct_pred')
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32), name='accuracy')
tf.summary.scalar('accuracy', accuracy)
with tf.variable_scope('Prediction'):
cls_prediction = tf.argmax(output_logits, axis=1, name='predictions')
```
## Initialize all variables and merge the summaries
```
# Initialize the variables
init = tf.global_variables_initializer()
# Merge all summaries
merged = tf.summary.merge_all()
```
# Train
```
sess = tf.InteractiveSession()
sess.run(init)
global_step = 0
summary_writer = tf.summary.FileWriter(logs_path, sess.graph)
# Number of training iterations in each epoch
num_tr_iter = int(len(y_train) / batch_size)
for epoch in range(epochs):
print('Training epoch: {}'.format(epoch + 1))
x_train, y_train = randomize(x_train, y_train)
for iteration in range(num_tr_iter):
global_step += 1
start = iteration * batch_size
end = (iteration + 1) * batch_size
x_batch, y_batch = get_next_batch(x_train, y_train, start, end)
# Run optimization op (backprop)
feed_dict_batch = {x: x_batch, y: y_batch}
sess.run(optimizer, feed_dict=feed_dict_batch)
if iteration % display_freq == 0:
# Calculate and display the batch loss and accuracy
loss_batch, acc_batch, summary_tr = sess.run([loss, accuracy, merged],
feed_dict=feed_dict_batch)
summary_writer.add_summary(summary_tr, global_step)
print("iter {0:3d}:\t Loss={1:.2f},\tTraining Accuracy={2:.01%}".
format(iteration, loss_batch, acc_batch))
# Run validation after every epoch
feed_dict_valid = {x: x_valid, y: y_valid}
loss_valid, acc_valid = sess.run([loss, accuracy], feed_dict=feed_dict_valid)
print('---------------------------------------------------------')
print("Epoch: {0}, validation loss: {1:.2f}, validation accuracy: {2:.01%}".
format(epoch + 1, loss_valid, acc_valid))
print('---------------------------------------------------------')
```
# Test
```
def plot_images(images, cls_true, cls_pred=None, title=None):
"""
Create figure with 3x3 sub-plots.
:param images: array of images to be plotted, (9, img_h*img_w)
:param cls_true: corresponding true labels (9,)
:param cls_pred: corresponding true labels (9,)
"""
fig, axes = plt.subplots(3, 3, figsize=(9, 9))
fig.subplots_adjust(hspace=0.3, wspace=0.3)
for i, ax in enumerate(axes.flat):
# Plot image.
ax.imshow(np.squeeze(images[i]), cmap='binary')
# Show true and predicted classes.
if cls_pred is None:
ax_title = "True: {0}".format(cls_true[i])
else:
ax_title = "True: {0}, Pred: {1}".format(cls_true[i], cls_pred[i])
ax.set_title(ax_title)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
if title:
plt.suptitle(title, size=20)
plt.show(block=False)
def plot_example_errors(images, cls_true, cls_pred, title=None):
"""
Function for plotting examples of images that have been mis-classified
:param images: array of all images, (#imgs, img_h*img_w)
:param cls_true: corresponding true labels, (#imgs,)
:param cls_pred: corresponding predicted labels, (#imgs,)
"""
# Negate the boolean array.
incorrect = np.logical_not(np.equal(cls_pred, cls_true))
# Get the images from the test-set that have been
# incorrectly classified.
incorrect_images = images[incorrect]
# Get the true and predicted classes for those images.
cls_pred = cls_pred[incorrect]
cls_true = cls_true[incorrect]
# Plot the first 9 images.
plot_images(images=incorrect_images[0:9],
cls_true=cls_true[0:9],
cls_pred=cls_pred[0:9],
title=title)
# Test the network when training is done
x_test, y_test = load_data(mode='test')
feed_dict_test = {x: x_test, y: y_test}
loss_test, acc_test = sess.run([loss, accuracy], feed_dict=feed_dict_test)
print('---------------------------------------------------------')
print("Test loss: {0:.2f}, test accuracy: {1:.01%}".format(loss_test, acc_test))
print('---------------------------------------------------------')
# Plot some of the correct and misclassified examples
cls_pred = sess.run(cls_prediction, feed_dict=feed_dict_test)
cls_true = np.argmax(y_test, axis=1)
plot_images(x_test, cls_true, cls_pred, title='Correct Examples')
plot_example_errors(x_test, cls_true, cls_pred, title='Misclassified Examples')
plt.show()
# close the session after you are done with testing
sess.close()
```
At this point the coding is done. To inspect the network further with TensorBoard, open a terminal, move into the notebooks folder (in my case *C:\Dev\UpdateConference2019\notebooks*) and type:
```
tensorboard --logdir=logs --host localhost
```
# Is it reasonable to consider only the largest merger among many?
Multiple mergers can occur at the same time. In particular, when a merger takes a long time, another merger can begin before the first one ends, and the effects of the individual mergers cannot then be separated. In such cases, I take the largest merger only.
But how reliable is that?
This script shows the distribution of mergers and their mass ratios.
1) Multiple mergers at a single snapshot are rare.
2) Multiple major mergers are even rarer.
3) Merger time overlaps sometimes happen.
4) Merger time overlaps among major mergers are rare.
5) When the measurement window (~5-10 snapshots before and after) is considered,
many more overlaps will be added.
Note: the figure can be pickled and re-displayed quickly later!
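The "largest merger only" rule can be sketched in a couple of lines (the mass ratios below are made-up values, not simulation output): among mergers that overlap in time, keep the one with the largest satellite-to-host mass ratio.

```python
# Hypothetical satellite-to-host mass ratios of three time-overlapping mergers
overlapping_mass_ratios = [0.02, 0.30, 0.005]

# Keep only the largest (the "major" one); the others are ignored
largest = max(overlapping_mass_ratios)
print(largest)  # 0.3
```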
```
def get_merger_info(main, atree, sat_root_idx,
dist_gal_scale_in=1.0,
dist_gal_scale_out=2.0):
"""
Returns merger mass ratio and beginning of the merger.
nout_init_this_merger, mass_this_merger = get_merger_info()
Assumes
"""
satellite = ctu.extract_main_tree(atree, sat_root_idx, no_subset=True)
nout_min = max([min(main['nout']), min(satellite['nout'])])
i_main_ok = (main['nout'] >= nout_min) * (main['nout'] <= max(satellite["nout"]))
i_sat_ok = (satellite['nout'] >= nout_min)
satellite = satellite[i_sat_ok]
# distances at all valid nouts.
dd = np.sqrt(np.square(main["x"][i_main_ok] - satellite['x']) \
+ np.square(main["y"][i_main_ok] - satellite['y']) \
+ np.square(main["z"][i_main_ok] - satellite['z'])) * 1e3
rgal_tot = (main['rvir'][i_main_ok] + satellite['rvir'])
#print(" Galaxy sizes : main {}, and the second {}, and the sum {}".format(
# main['r'][i_main_ok], satellite['r'], rgal_tot))
#print(" dd :", dd)
if sum(dist_gal_scale_in * rgal_tot > dd) > 0:
# First close encounter is technically the beginning of merger,
# but in practice that could be merely a flyby,
# and whether they will merger soon or not is not known.
# I can't call an encounter a merger if the encounter will end up merging in 100Gyrs.
#nout_init_this = min(satellite['nout'][dist_gal_scale * rgal_tot < dd])
# First try
# shouldn't go out 2Rgal.
i_dist_bad = np.where(dist_gal_scale_out * rgal_tot < dd)[0]
i_dist_ok = np.where(dist_gal_scale_in * rgal_tot > dd)[0]
if len(i_dist_bad) > 0:
i_dist_bad_last = min(i_dist_bad)
i_dist_final = i_dist_ok[i_dist_ok < i_dist_bad_last]
else:
i_dist_final = i_dist_ok
if len(i_dist_final) > 0:
nout_init_this = satellite['nout'][min(i_dist_final)]
# Second try
mass_this = satellite['m'][satellite['nout'] == nout_init_this].squeeze()
else:
nout_init_this = -1
mass_this = 0
else:
nout_init_this = -1
mass_this = 0
return nout_init_this, mass_this
def find_all_meger(alltrees,
idx_all,
nout_ini=37,
dist_gal_scale=2,
min_mass_ratio = 0.01,
verbose=False,
do_plot = False):
"""
Parameters
----------
dist_gal_scale
if two galaxies are closer than dist_gal_scale * (sum of raidus of the two),
that epoch is the nout_init_merger.
    nout_ini
        snapshots earlier than nout_ini are ignored.
"""
gal_list=[]
mr_list=[]
nout_list=[]
nout_ini_list=[] # initial time when two halos(Galaxy stellar components in this case) overlap.
for idx in idx_all:
# full tree of a galaxy
atree = ctu.extract_a_tree(alltrees.data, idx)
# main progenitor tree
main = ctu.extract_main_tree(atree, idx)
x_nout = main['nout'].flatten()
i_nout_ok = x_nout > nout_ini
main = main[i_nout_ok]
#x_nout = x_nout[i_nout_ok]
pos = np.zeros((3,len(main)))
pos[0,:] = main['x']
pos[1,:] = main['y']
pos[2,:] = main['z']
mass_ratios_this = []#np.zeros(len(main))
nout_inits_this = []#np.zeros(len(main))
nout_list_this = []
for i, nout in enumerate(main['nout']):
# merger ratio
i_prgs = np.where(atree['desc_id'] == main['id'][i])[0]
#print(" {} Progenitors at nout = {}".format(len(i_prgs), nout))
# multiple prgs = merger
if len(i_prgs) > 1:
#if verbose:
#print("{} {} Progenitors at nout = {}".format(idx, len(i_prgs), nout))
id_prgs = atree['id'][i_prgs]
mass_prgs = atree['m'][i_prgs]
m_r = mass_prgs / max(mass_prgs)
sats = id_prgs[mass_prgs < max(mass_prgs)]
mass_ratios_now=[]
nout_inits_now=[]
for this_sat in sats:
n_i_t, mass_this_sat = get_merger_info(main, atree, this_sat,
dist_gal_scale_in=dist_gal_scale,
dist_gal_scale_out = 3.0)
mass_ratio = mass_this_sat / max(mass_prgs)
if mass_ratio > min_mass_ratio:
nout_inits_now.append(n_i_t)
mass_ratios_now.append(1./mass_ratio)
nout_list_this.append(nout)
nout_inits_this.append(nout_inits_now)
mass_ratios_this.append(mass_ratios_now)
#mr = 1./mass_ratios
gal_list.append(idx)
nout_list.append(nout_list_this)
mr_list.append(mass_ratios_this)
nout_ini_list.append(nout_inits_this)
return gal_list, mr_list, nout_list, nout_ini_list
import tree.ctutils as ctu
import numpy as np
from analysis.misc import load_cat
# parameters used for lambda_arr clipping.
ind_upper = 20
ind_lower = 20
sig_upper = 2.0
sig_lower = 2.0
nout_ini = 62
nout_fi = 187
verbose=True
# In[4]:
base = './'
cdir = ['catalog/', 'easy/', 'catalog_GM/', "easy_final/"][3]
cluster = ['05427', '05420', '29172', \
'29176', '10002', '36415',
'06098', '39990', '36413','17891', '07206', '04466', '01605', '35663'][3]#[:-3]
wdir = base + cluster + '/'
alltrees = ctu.load_tree(wdir, is_gal=True)
ad = alltrees.data
tn = ad[ad['nout'] == nout_fi]
cat = load_cat(wdir + cdir + 'catalog' + str(nout_fi) + '.pickle')
#idx_all = [tn['id'][tn['Orig_halo_id'] == id_final][0] for id_final in cat['id']]
idx_all = cat['idx'][cat["idx"] > 0].astype(int) # why idx are float???
gal_list, mr_list, nout_list, nout_init_list = \
find_all_meger(alltrees,
idx_all,
nout_ini=37,
dist_gal_scale=1,
min_mass_ratio = 0.001,
verbose=False,
do_plot = False)
import matplotlib.pyplot as plt
# simple scatter
fig, ax = plt.subplots()
for nout_list_this, mr_list_this in zip(nout_list, mr_list):
for nout, mr in zip(nout_list_this, mr_list_this):
for mm in mr:
ax.scatter(nout, mm)
try:
ax.scatter(nout, mr[0], marker="*", color='r')
except:
#ax.scatter(nout, mr, marker="*", color='r')
pass
ax.set_yscale('log')
ax.set_ylabel("Merger mass ratio")
ax.set_xlabel("Nout")
plt.show()
import pickle
# merging process overlap
fig, ax = plt.subplots()
# each galaxy
for igal, (nout_init_this, nout_list_this, mr_list_this) in enumerate(zip(nout_init_list, nout_list, mr_list)):
# each snapshot
allnout_thisgal = []
allmm_thisgal = []
for nout_init, nout, mr in zip(nout_init_this, nout_list_this, mr_list_this):
# each merger
for ni, mm in zip(nout_init, mr):
ax.plot([ni, nout], [mm,mm], 'y.-')
#cm = ax.scatter(allnout_thisgal, igal * 10 + np.arange(len(allnout_thisgal)),
# s=1e3/np.array(allmm_thisgal))
#c=np.array(allmm_thisgal), cmap="Greys", vmin=0, vmax=1e3)
ax.set_ylabel("mergers (arbitrary value)")
ax.set_xlabel("Nout")
#plt.colorbar(cm)
plt.show()
# merging process overlap
fig, ax = plt.subplots()
# each galaxy
for igal, (nout_init_this, nout_list_this, mr_list_this) in enumerate(zip(nout_init_list, nout_list, mr_list)):
# each snapshot
allnout_thisgal = []
allmm_thisgal = []
for nout_init, nout, mr in zip(nout_init_this, nout_list_this, mr_list_this):
# each merger
for ni, mm in zip(nout_init, mr):
allnout_thisgal.append(nout)
allnout_thisgal.append(ni)
allmm_thisgal.append(mm)
allmm_thisgal.append(1e3) # one marker per one merger, and minimize the other one.
#
ax.plot(allnout_thisgal, igal * 10 + np.arange(len(allnout_thisgal)), '-')
cm = ax.scatter(allnout_thisgal, igal * 10 + np.arange(len(allnout_thisgal)),
s=1e3/np.array(allmm_thisgal))
#c=np.array(allmm_thisgal), cmap="Greys", vmin=0, vmax=1e3)
ax.set_ylabel("mergers (arbitrary value)")
ax.set_xlabel("Nout")
#plt.colorbar(cm)
pickle.dump(plt.gcf(), open("merger_overlap_plot.pickle", "wb"))
plt.show()
```
### A figure can be saved!
```
# On a separate ipython kernel, the following will generate the same figure!
import matplotlib.pyplot as plt
import pickle
fig = pickle.load(open("merger_overlap_plot.pickle", "rb"))
plt.show(fig)
```
# Huggingface Sagemaker-sdk - Deploy 🤗 Transformers for inference
Welcome to this getting-started guide. We will use the new Hugging Face Inference DLCs and the Amazon SageMaker Python SDK to deploy a transformer model for inference.
In this example we directly deploy one of the 10 000+ Hugging Face Transformers from the [Hub](https://huggingface.co/models) to Amazon SageMaker for Inference.
## API - [SageMaker Hugging Face Inference Toolkit](https://github.com/aws/sagemaker-huggingface-inference-toolkit)
Using the `transformers` `pipelines`, we designed an API which makes it easy for you to benefit from all `pipelines` features. The API follows the design of the [🤗 Accelerated Inference API](https://api-inference.huggingface.co/docs/python/html/detailed_parameters.html), meaning your inputs need to be defined in the `inputs` key, and if you want additional supported `pipelines` parameters you can add them in the `parameters` key. Below you can find examples of requests.
**text-classification request body**
```python
{
"inputs": "Camera - You are awarded a SiPix Digital Camera! call 09061221066 fromm landline. Delivery within 28 days."
}
```
**question-answering request body**
```python
{
"inputs": {
"question": "What is used for inference?",
"context": "My Name is Philipp and I live in Nuremberg. This model is used with sagemaker for inference."
}
}
```
**zero-shot classification request body**
```python
{
"inputs": "Hi, I recently bought a device from your company but it is not working as advertised and I would like to get reimbursed!",
"parameters": {
"candidate_labels": [
"refund",
"legal",
"faq"
]
}
}
```
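Since every task shares the same envelope — an `inputs` key plus an optional `parameters` key — request bodies like the ones above can be assembled with a small helper (this helper is my own illustration, not part of the SageMaker SDK or the inference toolkit):

```python
import json

def build_request(inputs, **parameters):
    """Assemble a toolkit-style request body: `inputs` plus optional `parameters`."""
    body = {"inputs": inputs}
    if parameters:
        body["parameters"] = parameters
    return body

# Reconstruct the zero-shot classification body shown above
zero_shot = build_request(
    "Hi, I recently bought a device from your company but it is not working as "
    "advertised and I would like to get reimbursed!",
    candidate_labels=["refund", "legal", "faq"],
)
print(json.dumps(zero_shot, indent=2))
```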
```
!pip install "sagemaker>=2.48.0" --upgrade
```
## Deploy one of the 10 000+ Hugging Face Transformers to Amazon SageMaker for Inference
_This is an experimental feature where the model is loaded after the endpoint is created. This can lead to errors, e.g. for models > 10GB._
To deploy a model directly from the Hub to SageMaker, we need to define 2 environment variables when creating the `HuggingFaceModel`:
- `HF_MODEL_ID`: defines the model id, which will be automatically loaded from [huggingface.co/models](http://huggingface.co/models) when creating your SageMaker Endpoint. The 🤗 Hub provides 10,000+ models, all available through this environment variable.
- `HF_TASK`: defines the task for the used 🤗 Transformers pipeline. A full list of tasks can be found [here](https://huggingface.co/transformers/main_classes/pipelines.html).
```
from sagemaker.huggingface import HuggingFaceModel
import sagemaker
role = sagemaker.get_execution_role()
# Hub Model configuration. https://huggingface.co/models
hub = {
'HF_MODEL_ID':'distilbert-base-uncased-distilled-squad', # model_id from hf.co/models
'HF_TASK':'question-answering' # NLP task you want to use for predictions
}
# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
env=hub,
role=role, # iam role with permissions to create an Endpoint
transformers_version="4.6", # transformers version used
pytorch_version="1.7", # pytorch version used
py_version="py36", # python version of the DLC
)
# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
initial_instance_count=1,
instance_type="ml.m5.xlarge"
)
# example request, you always need to define "inputs"
data = {
"inputs": {
"question": "What is used for inference?",
"context": "My Name is Philipp and I live in Nuremberg. This model is used with sagemaker for inference."
}
}
# request
predictor.predict(data)
# delete endpoint
predictor.delete_endpoint()
```
```
import os
import pandas as pd
import matplotlib.pyplot as plt
from keras.utils import np_utils
from keras.models import Sequential
from keras.callbacks import EarlyStopping, History, ModelCheckpoint
from keras.layers.core import Flatten, Dense, Dropout, Reshape, Lambda
from keras.layers.normalization import BatchNormalization
from sklearn.preprocessing import LabelEncoder
from keras.utils.np_utils import to_categorical
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split
import numpy as np
train_features = np.load('train_features.npy')
valid_features = np.load('valid_features.npy')
train_dir = "new_train/"
valid_dir = "new_valid/"
classes = os.listdir(train_dir)
# Get the labels
train_labels = []
for c in classes:
l = [c]*len(os.listdir(train_dir+c+'/'))
train_labels.extend(l)
valid_labels = []
for c in classes:
l = [c]*len(os.listdir(valid_dir+c+'/'))
valid_labels.extend(l)
onehot_train = to_categorical(LabelEncoder().fit_transform(train_labels))
onehot_valid = to_categorical(LabelEncoder().fit_transform(valid_labels))
model = Sequential()
model.add(Flatten(input_shape=train_features.shape[1:]))
model.add(Dense(4096, activation='relu'))
#model.add(BatchNormalization())
model.add(Dropout(0.2))
model.add(Dense(512, activation='relu'))
#model.add(BatchNormalization())
model.add(Dropout(0.3))
model.add(Dense(8, activation='softmax'))
model.compile(optimizer="adam",loss="categorical_crossentropy",metrics =["accuracy"])
callbacks = EarlyStopping(monitor='val_loss', patience=1, verbose=1, mode='auto')
# autosave best Model
best_model_file = "./batch_normalized_dropout_weights.h5"
best_model = ModelCheckpoint(best_model_file, monitor='val_acc', verbose = 1, save_best_only = True)
history = model.fit(train_features, onehot_train, batch_size=10, nb_epoch=10,
validation_data=(valid_features,onehot_valid),shuffle=True,callbacks = [callbacks,best_model])
model.load_weights("fully_connected_dropout_weights.h5")
# summarize history for accuracy
plt.figure(figsize=(15, 5))
plt.subplot(1, 2, 1)
plt.plot(history.history['acc']); plt.plot(history.history['val_acc']);
plt.title('model accuracy'); plt.ylabel('accuracy');
plt.xlabel('epoch'); plt.legend(['train', 'valid'], loc='upper left');
# summarize history for loss
plt.subplot(1, 2, 2)
plt.plot(history.history['loss']); plt.plot(history.history['val_loss']);
plt.title('model loss'); plt.ylabel('loss');
plt.xlabel('epoch'); plt.legend(['train', 'valid'], loc='upper left');
plt.show()
test_features = np.load("test_features.npy")
test_preds = model.predict_proba(test_features, verbose=1)
test_preds[0:5]
submission1 = pd.DataFrame(test_preds, columns= os.listdir(train_dir))
test_files = os.listdir("test_stg1/test_stg1/")
submission1.insert(0, 'image', test_files)
submission1.head()
clipped_preds = np.clip(test_preds,(1-0.82)/7,0.82)
submission2 = pd.DataFrame(clipped_preds, columns= os.listdir("train/train/"))
submission2.insert(0, 'image', test_files)
submission2.head()
submission2.to_csv("fully_connected_dropout.csv",index = False)
```
Hello everyone,
This is a special COVID-19 set of notes containing analysis code to illustrate the importance of the experimental choices we discussed in class, building on the experiment of Posner and Mitchell (1967). The code itself is only useful for those who are interested; everyone else can simply read the outputs of the code cells alongside my comments.
```
## Import the required Python (3.7) libraries
import os
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
```
## Reading and assembling your data
Here I create the variable that will hold the data from all the participants you collected; the output of cell 2 shows an excerpt of what I will be working with throughout these notes. I use Python, but the logic holds for Excel or any other analysis tool.
```
list_ = []#empty variable to store the data
for f in os.listdir('data/'):#for each file in the data directory
    raw_ = pd.read_csv('data/'+f)#read the file
    raw_['ordre'] = f.split('_')[0]#extract the order
    raw_['participant'] = f.split('_')[1]#and the participant's name
    #Drop the unneeded variables for clarity:
    del raw_['expName']
    del raw_['date']
    del raw_['Unnamed: 10']
    del raw_['tache.1']
    list_.append(raw_)
df = pd.concat(list_, ignore_index=True)#assemble everything
df = df[np.isfinite(df.trial)]#fix a small bug for some participants
df.participant = df.participant.replace(df.participant.unique(), np.arange(10)+1)#anonymize the participants by assigning them numbers
df.rt = df.rt*1000 #convert seconds to milliseconds for readability, rt = reaction time
print(df.head())#show an excerpt of the assembled data
df.to_csv('df.csv')#export the data
```
Notes on the columns: expected = expected response, response = given response, precision = correct (1) / incorrect (0) response, rt = reaction time, trial = trial number
# Analysis of the experimental choices
We "chose" (or at any rate lockdown forced us) to collect 10 participants
```
print(df.participant.unique())#here we list the unique participant identifiers
print(len(df.participant.unique()))#and here I simply count the number of participants
```
Each of these participants completed the three tasks plus the training task
```
print(df.tache.unique())
print(len(df.tache.unique()))
```
Each participant completed 96 trials per task (and 16 for the training task); here is an example for participant no. 1
```
print(df.groupby(['participant','tache']).trial.count().head(4))#here I group by participant and task, count the number of trials within each cell, and finally display only the first participant
```
# Experimental design
Our experimental design is the following:
$\underline{S}_{10} \times Tache_3$
As seen in class, in a within-subject (crossed) design we do not need to randomly assign participants to one level of our IV, since every participant goes through all the levels of the IV.
On the other hand, an important factor introduced by a within-subject design is the order factor, which is why we try to counterbalance the order of administration.
Initially, for the course, each experimenter had been assigned one particular order among the possible orders: 123, 231, 312, 213, 321, 132.
Unfortunately, lockdown obliging, only four of you were able to send me your data, so in the end not all orders are represented and some are represented more than others:
```
print(df.groupby(['ordre']).trial.count())#here I count the number of trials for each order
```
Note that for simplicity I assigned one order to each of you, thereby confounding the order factor with the experimenter factor; ideally each experimenter should have run all the orders.
# A priori choice of the dependent variable
As seen in class, a choice task automatically generates two DVs: the accuracy of the response (correct or incorrect) and the time needed to make the choice (reaction time).
Here we chose a relatively easy task in which we do not expect many errors; we can check this as follows:
```
df.precision.mean()#we compute the mean of the 0s (incorrect) and 1s (correct) in the precision column
```
The correct-response rate is indeed very high (about 95\%), so observing a change in this DV is likely to be difficult, since many participants will be very close to ceiling performance.
We therefore choose to work on reaction time (RT), and to observe the effect of our IV on this RT.
If we look at the distribution of reaction times, regardless of participant or task (so 96 trials * 3 tasks * 10 participants = 2880 observations), we get the following histogram:
```
fig=plt.figure(figsize=(10,5), dpi=80, facecolor='w', edgecolor='k')
df.rt.hist(bins=100)
plt.ylabel('Frequency')
plt.xlabel('Reaction time (milliseconds)')
plt.show()
```
Each bar/bin represents a discretization of the DV; e.g. the bar above 500 ms contains the RTs between (say) 475 and 525 ms.
The y value represents the frequency in the sample: an RT between 475 and 525 ms occurs about 225 times in our sample, so it is much more frequent than RTs around 2000 ms, for instance.
# Hypothesis
The simplified hypothesis we are going to test comes from the article by Posner and Mitchell (1967), who themselves build on Donders' subtractive method (1868).
Our prediction is that the RT reflects the passage of information through different processing modules, so we expect a task with one more processing stage than another to be slower than it.
- Task 1 was a discrimination of physical identity (e.g. A and a are different, A and A are the same)
- Task 2 was a discrimination of the symbolic identity of the letters (e.g. A and B are different, A and a are the same)
- Task 3 was a discrimination of category, consonant or vowel (e.g. A and B are different, A and E are the same)
The idea is that task 3 comprises all the stages of task 2 plus a category-identification stage. In the same way, task 2 comprises all the stages of task 1 plus a letter-identification stage.
So on our DV we predict:
RT task 3 > RT task 2 > RT task 1
Task 0, the training task (with digits), does not count in our analysis; we use it only to familiarize the participant with the task.
_Going further: we can test the effectiveness of our training task; we indeed expect our participants to get faster and faster as they become familiar with the task:_
```
#here I select the RTs for the training task and, for each trial (1, 2, ..., 16), average over my participants
essais = df[df.tache==0].groupby('trial').rt.mean()
#then I plot the trials on x and these mean RT values on y
plt.plot(essais.index, essais.values,'o-')
plt.ylabel('Reaction time (milliseconds)')
plt.xlabel('Trial number')
```
_I do observe that the first RT is very slow compared to the last ones; my participants indeed show a habituation effect to the task_
# Testing the hypothesis
My hypothesis concerns a difference in mean RT as a function of the task, with the order 3 > 2 > 1. I will therefore compute and plot these means
```
moyennes = df[df.tache != 0].groupby('tache').rt.mean()
print(moyennes)
plt.plot(moyennes.index, moyennes.values,'o-')
plt.ylabel('Mean RT (ms)')
plt.xlabel('Task number')
plt.xticks([1,2,3])
plt.show()
```
I do observe an increase in RT with the task number, in the order I theoretically justified beforehand.
Obviously, observing an effect on the means is not enough: we must take into account the variability of our data, and whether it could have generated the difference in our means by chance. We can already get a feel for this by simply looking at the standard deviation within these tasks
```
moyennes = df[df.tache != 0].groupby('tache').rt.mean()
#I first compute the mean RT for each participant in the three tasks, then compute the dispersion (standard deviation) of these means
sd = df[df.tache != 0].groupby(['participant','tache']).rt.mean().reset_index().groupby('tache').rt.std()
print(sd)
plt.errorbar(moyennes.index, moyennes.values, yerr=sd, marker='o')
plt.ylabel('Mean RT (ms)')
plt.xlabel('Task number')
plt.xticks([1,2,3])
plt.show()
```
I can clearly see that the within-task variability hardly accounts for the difference observed in my means; I can then test this formally by fitting a statistical model (e.g. a linear model)
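As a sketch of such a formal test (not part of the original notebook, and using simulated per-participant mean RTs since the real data frame is not reproduced here), a paired t statistic comparing tasks 1 and 3 can be computed by hand with NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10  # hypothetical number of participants

# Simulated per-participant mean RTs (ms) for tasks 1 and 3;
# task 3 is assumed slower, as the hypothesis predicts
rt_task1 = rng.normal(550, 60, n)
rt_task3 = rt_task1 + rng.normal(120, 40, n)

# Paired t statistic: mean of the differences divided by its standard error
diff = rt_task3 - rt_task1
t = diff.mean() / (diff.std(ddof=1) / np.sqrt(n))
print(round(t, 2))
```

The statistic would then be compared to a t distribution with n-1 degrees of freedom (e.g. with `scipy.stats.ttest_rel` in practice).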
# Exploring the impact of experimental choices
## Difference between a within-subject and a between-subject design
To illustrate the difference between a between-subject and a within-subject design, I will compute each participant's mean RT in each task
```
moyennes = df[df.tache != 0].groupby(['tache','participant']).rt.mean().reset_index()
#I also compute each participant's RT standard deviation within each task
sd = df[df.tache != 0].groupby(['participant','tache']).rt.std().reset_index()
print(sd)
```
In a between-subject design, each participant goes through only one level, so I have (here, virtually) only one observation point (a mean) per participant
```
for sujet, donnee_sujet in moyennes.groupby('participant'):
    plt.plot(donnee_sujet.tache, donnee_sujet.rt,'o',label=sujet, color='k')
plt.ylabel('Mean RT (ms)')
plt.xlabel('Task number')
plt.xticks([1,2,3])
plt.show()
```
It is thus hard to say, if I have few participants in each cell/level, whether this is really an effect of my IV or whether, through sampling bias, the participants in the third level were simply slower.
In a within-subject design, by contrast, I can observe the effect of my IV on each participant individually, since I do have one mean per level
```
for sujet, donnee_sujet in moyennes.groupby('participant'):
    plt.plot(donnee_sujet.tache, donnee_sujet.rt,'o-',label=sujet)
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.ylabel('Mean RT (ms)')
plt.xlabel('Task number')
plt.xticks([1,2,3])
plt.show()
```
Here I observe that nearly all participants conform to my prediction 3 > 2 > 1, strengthening the evidence in favor of my hypothesis. However, one can still imagine that through sampling bias I only selected participants showing a slowing effect, while the population actually contains just as many people showing a speed-up. Hence the importance, even in a within-subject design, of having enough participants to generalize the results beyond the sample.
# Impact of the number of observations
What happens if, instead of 96 trials, I only have about ten?
```
#here I randomly draw 10 trials for each participant x task combination
tirage = df[df.tache != 0].groupby(['participant','tache']).apply(lambda x: x.sample(10, random_state=111)).reset_index(drop=True)
tirage.head()
moyennes = tirage[tirage.tache != 0].groupby('tache').rt.mean()
#I first compute the mean RT for each participant in the three tasks, then compute the dispersion (standard deviation) of these means
sd = tirage[tirage.tache != 0].groupby(['participant','tache']).rt.mean().reset_index().groupby('tache').rt.std()
print(sd)
plt.errorbar(moyennes.index, moyennes.values, yerr=sd, marker='o')
plt.ylabel('Mean RT (ms)')
plt.xlabel('Task number')
plt.xticks([1,2,3])
plt.show()
```
We observe that the results go in the same direction even if variability has increased; fortunately, the effect seems robust/strong enough to show up even with few trials. If, on the other hand, I take a more extreme case and sample only one observation per participant:
```
#here I randomly draw 1 trial for each participant x task combination
tirage = df[df.tache != 0].groupby(['participant','tache']).apply(lambda x: x.sample(1, random_state=111)).reset_index(drop=True)
tirage.head()
moyennes = tirage[tirage.tache != 0].groupby('tache').rt.mean()
#I first compute the mean RT for each participant in the three tasks, then compute the dispersion (standard deviation) of these means
sd = tirage[tirage.tache != 0].groupby(['participant','tache']).rt.mean().reset_index().groupby('tache').rt.std()
print(sd)
plt.errorbar(moyennes.index, moyennes.values, yerr=sd, marker='o')
plt.ylabel('Mean RT (ms)')
plt.xlabel('Task number')
plt.xticks([1,2,3])
plt.show()
```
For instance, I no longer observe a difference between task 1 and task 2, which would not validate (at least in part) my theoretical hypothesis.
## Impact of order
To illustrate the impact of order, I will compute the means separately for the three orders I have (123, 213, 321). However, given how few orders were actually run and how few participants there are per order, it will be hard to draw conclusions about this effect of task administration order
```
moyennes = df[df.tache != 0].groupby(['tache','ordre']).rt.mean().reset_index()
#I first compute the mean RT per participant within each task and order, then the standard deviation of these means within each task x order cell
sd = df[df.tache != 0].groupby(['participant','tache','ordre']).rt.mean().reset_index().groupby(['tache','ordre']).rt.std().reset_index()
print(moyennes)
plt.errorbar(moyennes[moyennes.ordre == "123"].tache, moyennes[moyennes.ordre == "123"].rt, yerr = sd[sd.ordre == "123"].rt, marker='o', label="123")
plt.errorbar(moyennes[moyennes.ordre == "321"].tache+.05, moyennes[moyennes.ordre == "321"].rt, yerr = sd[sd.ordre == "321"].rt, marker='o', label="321")
plt.errorbar(moyennes[moyennes.ordre == "213"].tache-.05, moyennes[moyennes.ordre == "213"].rt, yerr = sd[sd.ordre == "213"].rt, marker='o', label="213")
plt.legend()
plt.ylabel('Mean RT (ms)')
plt.xlabel('Task number')
plt.xticks([1,2,3])
plt.show()
```
Here I see that participants with order 123 show a larger difference between the first and the last task. It may indeed be that participants' fatigue as the experiment progresses is confounded with my IV; however, given the small number of participants collected per order, I obviously cannot draw any conclusion about the nature of the observed difference.
# Conclusion
In conclusion, this small example let us explore how, from a theoretical hypothesis (the RT is composed of a succession of information-processing stages), I derive an operational hypothesis (3 > 2 > 1). If I want to infer something about my theoretical hypothesis from my experimental results, I must make sure that my conclusions do not depend on the experimental choices that made this translation possible, for example:
- the choice of material (in Posner and Mitchell's material, for instance, we can see that they use the same material in all three tasks)
- the measure I choose
- statistical power (influenced by the number of participants/trials, but also by the type of design and the chosen measure)
- confounded factors (e.g. order, lack of randomization/counterbalancing)
Best of luck to you all
For any question, I remain available by e-mail
Gabriel Weindel
```
import pandas as pd
import json
import requests
import numpy as np
```
# Points intersection
Procedure definition:
```sql
DROP FUNCTION get_aqpoints(weights_raw TEXT, points_array TEXT);
DROP TYPE weight_intersection;
CREATE TYPE weight_intersection AS (
    points_id numeric,
    basinid numeric,
    the_geom geometry,
    water_risk numeric,
    water_stress numeric,
    interannual_variability numeric,
    seasonal_variability numeric,
    flood_occurrence numeric,
    drought_severity numeric,
    upstream_storage numeric,
    groundwater_stress numeric,
    return_flow_ratio numeric,
    upstream_protected_land numeric,
    media_coverage numeric,
    access_to_water numeric,
    threatened_amphibians numeric
);
CREATE OR REPLACE FUNCTION get_aqpoints(weights_raw TEXT, points_array TEXT)
RETURNS SETOF weight_intersection as $$
DECLARE
    query1 TEXT;
    weights TEXT;
    weights_sum TEXT;
BEGIN
query1:='select array_to_string(array_agg(col::text ||''*''|| weights::text), ''+ '') as weights, sum(weights)::text as weights_sum from (select unnest(Array'|| weights_raw||') as weights, unnest(array[''bws_s'', ''wsv_s'', ''sv_s'', ''hfo_s'', ''dro_s'', ''stor_s'', ''gw_s'', ''wri_s'', ''eco_s_s'', ''mc_s'', ''wcg_s'', ''eco_v_s'']) as col) as tables where weights is not null';
EXECUTE query1 into weights, weights_sum;
RETURN query EXECUTE 'with points as (SELECT st_setsrid(st_geomfromtext(points),4326) as the_geom, row_number() over () as points_id FROM unnest(Array'|| points_array ||') as points), ranges as (select basinid, average, min(average) over (), max(average) over (), bws_s, wsv_s, sv_s, hfo_s, dro_s, stor_s, gw_s, wri_s, eco_s_s, mc_s, wcg_s, eco_v_s from (SELECT basinid, ('|| weights ||')/('|| weights_sum ||') as average, bws_s, wsv_s, sv_s, hfo_s, dro_s, stor_s, gw_s, wri_s, eco_s_s, mc_s, wcg_s, eco_v_s FROM water_risk_weights) initial) select points_id::numeric, ranges.basinid::numeric, points.the_geom, (((average-min)/(max-min))*5)::numeric as water_risk, bws_s::numeric as baseline_water_stress, wsv_s::numeric as interannual_variability, sv_s::numeric as seasonal_variability, hfo_s::numeric as flood_occurrence, dro_s::numeric as drought_severity, stor_s::numeric as upstream_storage, gw_s::numeric as groundwater_stress, wri_s::numeric as return_flow_ratio, eco_s_s::numeric as upstream_protected_land, mc_s::numeric as media_coverage, wcg_s::numeric as access_to_water, eco_v_s::numeric as threatened_amphibians from ranges inner join wri_subcatchements on ranges.basinid=wri_subcatchements.basinid right join points on st_intersects(wri_subcatchements.the_geom, points.the_geom)';
END
$$ language 'plpgsql';
```
Sql Example:
```sql
Select * from get_aqpoints('[4, 1, 0.5, 1, 1, 2, 2, 1, 0.5, 1, 2, 0.5]','[''POINT(84.8085584935 -14.20067639)'', ''POINT(54.0392656274 -70.8898132233)'', ''POINT(-28.5832686897 -7.71134965117)'', ''POINT(47.9458596199 82.5599787066)'', ''POINT(50.8126903314 -46.0154993389)'', ''POINT(-62.5229253542 -9.68983337791)'', ''POINT(-34.7977234627 84.9984574252)'', ''POINT(-80.1102876685 -33.9286081419)'', ''POINT(22.6686500117 -85.6713992254)'', ''POINT(44.359802466 -27.7294728889)'']')
```
Sql Template:
``` sql
Select * from get_aqpoints({weights_scheeme},{points array})
```
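If the call is issued from Python, the SQL template above can be filled in programmatically. The helper below is a hypothetical convenience (its name and structure are not part of the API); note the doubled single quotes around each WKT point, which the procedure's text-array parsing expects:

```python
def build_aqpoints_query(weights, points):
    """Build the SQL call for get_aqpoints from a list of numeric
    weights and a list of (lon, lat) pairs (hypothetical helper)."""
    weights_scheme = "[" + ", ".join(str(w) for w in weights) + "]"
    # each point must be wrapped in doubled single quotes inside the array literal
    wkt = ["''POINT({0} {1})''".format(lon, lat) for lon, lat in points]
    points_array = "[" + ", ".join(wkt) + "]"
    return "Select * from get_aqpoints('{0}','{1}')".format(weights_scheme, points_array)

q = build_aqpoints_query([4, 1, 0.5], [(84.8, -14.2), (54.0, -70.9)])
print(q)
```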
```
payload = {'q': "Select * from get_aqpoints('[4, 1, 0.5, 1, 1, 2, 2, 1, 0.5, 1, 2, 0.5]','[''POINT(84.8085584935 -14.20067639)'', ''POINT(54.0392656274 -70.8898132233)'', ''POINT(-28.5832686897 -7.71134965117)'', ''POINT(47.9458596199 82.5599787066)'', ''POINT(50.8126903314 -46.0154993389)'', ''POINT(-62.5229253542 -9.68983337791)'', ''POINT(-34.7977234627 84.9984574252)'', ''POINT(-80.1102876685 -33.9286081419)'', ''POINT(22.6686500117 -85.6713992254)'', ''POINT(44.359802466 -27.7294728889)'']')"}
r = requests.get('https://wri-01.carto.com/api/v2/sql', params=payload)
tableStructure= pd.read_json(json.dumps(r.json()['rows']), orient='records')
tableStructure.head(0)
```
# Annex: Points stress test
Test point counts: 10; 100; 1000; 10000
```
t = 180 * np.random.rand(1000000,2) - 90  # random coordinate pairs in [-90, 90)
```
Expected WKT point format: `'POINT(-72.0 42.2)'`
```
example1=[]
for point in t:
    example1.append('\'\'POINT(' +str(point[0])+ ' ' + str(point[1]) +')\'\'')
t10 = '['+', '.join(example1[:10])+']'
t100 = '['+', '.join(example1[:100])+']'
t1000 = '['+', '.join(example1[:1000])+']'
t10000 = '['+', '.join(example1[:10000])+']'
print(t10)
print(len(t10))
print(len(t100))
print(len(t1000))
print(len(t10000))
payload = {'q': ""}
payload['q']="Select * from get_aqpoints('[4, 1, 0.5, 1, 1, 2, 2, 1, 0.5, 1, 2, 0.5]',\'"+ t10 +"\')"
r = requests.get('https://wri-01.carto.com/api/v2/sql', params=payload)
tableStructure= pd.read_json(json.dumps(r.json()['rows']), orient='records')
tableStructure.head(2)
payload = {'q': ""}
payload['q']="Select * from get_aqpoints('[4, 1, 0.5, 1, 1, 2, 2, 1, 0.5, 1, 2, 0.5]',\'"+ t100 +"\')"
r = requests.get('https://wri-01.carto.com/api/v2/sql', params=payload)
tableStructure= pd.read_json(json.dumps(r.json()['rows']), orient='records')
tableStructure.head(0)
payload = {'q': ""}
payload['q']="Select * from get_aqpoints('[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]','[''Point(72.421875, 56.559482)'', ''Point(18.6328125, 10.8333059)'', ''Point(-109.6875, 42.03297)'']')"
r = requests.post('https://wri-01.carto.com/api/v2/sql', data=payload)
if r.status_code != 200:
    issue = json.loads(r.text)
    print(issue)
else:
    tableStructure= pd.read_json(json.dumps(r.json()['rows']), orient='records')
    tableStructure.head(10)
```
# Calculating the Bilingual Evaluation Understudy (BLEU) score: Ungraded Lab
In this ungraded lab, we will implement a popular metric for evaluating the quality of machine-translated text: the BLEU score. Proposed by Kishore Papineni et al. in their 2002 paper ["BLEU: a Method for Automatic Evaluation of Machine Translation"](https://www.aclweb.org/anthology/P02-1040.pdf), the BLEU score works by comparing "candidate" text to one or more "reference" translations. The closer the score is to 1, the better the result. Let's see how to get this value in the following sections.
# Part 1: BLEU Score
## 1.1 Importing the Libraries
We will first start by importing the Python libraries we will use in the first part of this lab. For learning, we will implement our own version of the BLEU Score using Numpy. To verify that our implementation is correct, we will compare our results with those generated by the [SacreBLEU library](https://github.com/mjpost/sacrebleu). This package provides hassle-free computation of shareable, comparable, and reproducible BLEU scores. It also knows all the standard test sets and handles downloading, processing, and tokenization.
```
%%capture
!pip3 install sacrebleu
%%capture
!wget https://raw.githubusercontent.com/martin-fabbri/colab-notebooks/master/deeplearning.ai/nlp/datasets/wmt19_can.txt
!wget https://raw.githubusercontent.com/martin-fabbri/colab-notebooks/master/deeplearning.ai/nlp/datasets/wmt19_ref.txt
!wget https://raw.githubusercontent.com/martin-fabbri/colab-notebooks/master/deeplearning.ai/nlp/datasets/wmt19_src.txt
import math
from collections import Counter
import matplotlib.pyplot as plt
import nltk
import numpy as np
import sacrebleu
from nltk.util import ngrams
nltk.download("punkt")
!pip list | grep "nltk\|sacrebleu"
```
## 1.2 Defining the BLEU Score
You have seen the formula for calculating the BLEU score in this week's lectures. More formally, we can express the BLEU score as:
$$BLEU = BP\Bigl(\prod_{i=1}^{4}precision_i\Bigr)^{(1/4)}$$
with the Brevity Penalty and precision defined as:
$$BP = min\Bigl(1, e^{(1-({ref}/{cand}))}\Bigr)$$
$$precision_i = \frac {\sum_{snt \in{cand}}\sum_{i\in{snt}}min\Bigl(m^{i}_{cand}, m^{i}_{ref}\Bigr)}{w^{i}_{t}}$$
where:
* $m^{i}_{cand}$, is the count of i-gram in candidate matching the reference translation.
* $m^{i}_{ref}$, is the count of i-gram in the reference translation.
* $w^{i}_{t}$, is the total number of i-grams in candidate translation.
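As a quick numeric check of the brevity penalty formula above (a toy example with made-up lengths, not part of the lab): a 10-token reference against an 8-token candidate gives $BP = e^{1-10/8} \approx 0.779$:

```python
import numpy as np

ref_len, cand_len = 10, 8  # toy lengths: the candidate is shorter than the reference
bp = min(1.0, np.exp(1 - ref_len / cand_len))
print(round(bp, 3))  # → 0.779
```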
## 1.3 Explaining the BLEU score
### Brevity Penalty (example):
```
ref_length = np.ones(100)
can_length = np.linspace(1.5, 0.5, 100)
x = ref_length / can_length
y = 1 - x
y = np.exp(y)
y = np.minimum(np.ones(y.shape), y)
# Code to make the plot
fig, ax = plt.subplots(1)
lines = ax.plot(x, y)
ax.set(
    xlabel="Ratio of the length of the reference to the candidate text",
    ylabel="Brevity Penalty",
)
plt.show()
```
The brevity penalty penalizes generated translations that are too short compared to the closest reference length with an exponential decay. The brevity penalty compensates for the fact that the BLEU score has no recall term.
### N-Gram Precision (example):
```
data = {"1-gram": 0.8, "2-gram": 0.7, "3-gram": 0.6, "4-gram": 0.5}
names = list(data.keys())
values = list(data.values())
fig, ax = plt.subplots(1)
bars = ax.bar(names, values)
ax.set(ylabel="N-gram precision")
plt.show()
```
The n-gram precision counts how many unigrams, bigrams, trigrams, and four-grams (i=1,...,4) match their n-gram counterpart in the reference translations. This term acts as a precision metric. Unigrams account for adequacy, while longer n-grams account for fluency of the translation. To avoid overcounting, the n-gram counts are clipped to the maximal n-gram count occurring in the reference ($m_{n}^{ref}$). Typically, precision shows exponential decay with the degree of the n-gram.
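To make the clipping concrete, here is a toy unigram example (with made-up token lists, not the lab's data): each candidate count is capped at the corresponding reference count before summing.

```python
from collections import Counter

reference = ["the", "cat", "is", "on", "the", "mat"]
candidate = ["the", "the", "the", "cat", "mat"]

ref_counts = Counter(reference)   # "the" appears twice in the reference
cand_counts = Counter(candidate)  # "the" appears three times in the candidate

# clip each candidate count at the corresponding reference count
clipped = {w: min(c, ref_counts[w]) for w, c in cand_counts.items()}
precision_1 = sum(clipped.values()) / sum(cand_counts.values())
print(precision_1)  # → 0.8 (clipped matches 2+1+1 out of 5 candidate tokens)
```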
### N-gram BLEU score (example):
```
data = {"1-gram": 0.8, "2-gram": 0.77, "3-gram": 0.74, "4-gram": 0.71}
names = list(data.keys())
values = list(data.values())
fig, ax = plt.subplots(1)
bars = ax.bar(names, values)
ax.set(ylabel="Modified N-gram precision")
plt.show()
```
When the n-gram precision is multiplied by the BP, the exponential decay of the n-gram precisions is almost fully compensated. The BLEU score corresponds to a geometric average of these modified n-gram precisions.
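For instance, the geometric mean of the four modified precisions plotted above, which is what the BLEU score multiplies by BP, can be computed directly (toy values from the bar chart):

```python
import math

precisions = [0.8, 0.77, 0.74, 0.71]  # example values from the bar chart above
# geometric mean = exp of the equally weighted (0.25) sum of log-precisions
geo_mean = math.exp(sum(0.25 * math.log(p) for p in precisions))
print(round(geo_mean, 3))  # → 0.754
```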
## 1.4 Example Calculations of the BLEU score
In this example we will have one reference translation and two candidate translations. We will tokenize all sentences using the NLTK package introduced in Course 2 of this NLP specialization.
```
reference = "The NASA Opportunity rover is battling a massive dust storm on planet Mars."
candidate_1 = "The Opportunity rover is combating a big sandstorm on planet Mars."
candidate_2 = "A NASA rover is fighting a massive storm on planet Mars."
tokenized_ref = nltk.word_tokenize(reference.lower())
tokenized_cand_1 = nltk.word_tokenize(candidate_1.lower())
tokenized_cand_2 = nltk.word_tokenize(candidate_2.lower())
print(f"{reference} -> {tokenized_ref}")
print("\n")
print(f"{candidate_1} -> {tokenized_cand_1}")
print("\n")
print(f"{candidate_2} -> {tokenized_cand_2}")
```
### STEP 1: Computing the Brevity Penalty
```
def brevity_penalty(candidate, reference):
    ref_length = len(reference)
    can_length = len(candidate)
    # Brevity Penalty
    if ref_length < can_length:  # if the reference is shorter than the candidate
        BP = 1  # set BP = 1
    else:
        penalty = 1 - (ref_length / can_length)  # else set BP = exp(1 - (ref_length/can_length))
        BP = np.exp(penalty)
    return BP
```
### STEP 2: Computing the Precision
```
def clipped_precision(candidate, reference):
    """
    Clipped precision for tokenized reference and candidate sentences
    """
    clipped_precision_score = []
    for i in range(1, 5):
        ref_n_gram = Counter(ngrams(reference, i))
        cand_n_gram = Counter(ngrams(candidate, i))
        c = sum(cand_n_gram.values())
        for j in cand_n_gram:  # for every n-gram up to 4 in the candidate text
            if j in ref_n_gram:  # check if it is in the reference n-grams
                if cand_n_gram[j] > ref_n_gram[j]:  # if the count of the candidate n-gram is bigger
                    # than the corresponding count in the reference n-gram,
                    cand_n_gram[j] = ref_n_gram[j]  # then clip the candidate count to the reference count
            else:
                cand_n_gram[j] = 0  # n-grams absent from the reference count as zero
        clipped_precision_score.append(sum(cand_n_gram.values()) / c)
    weights = [0.25] * 4
    s = (w_i * math.log(p_i) for w_i, p_i in zip(weights, clipped_precision_score))
    s = math.exp(math.fsum(s))
    return s
```
### STEP 3: Computing the BLEU score
```
def bleu_score(candidate, reference):
    BP = brevity_penalty(candidate, reference)
    precision = clipped_precision(candidate, reference)
    return BP * precision
```
### STEP 4: Testing with our Example Reference and Candidate Sentences
```
print(
    "Results reference versus candidate 1 our own code BLEU: ",
    round(bleu_score(tokenized_cand_1, tokenized_ref) * 100, 1),
)
print(
    "Results reference versus candidate 2 our own code BLEU: ",
    round(bleu_score(tokenized_cand_2, tokenized_ref) * 100, 1),
)
```
### STEP 5: Comparing the Results from our Code with the SacreBLEU Library
```
print(
    "Results reference versus candidate 1 sacrebleu library BLEU: ",
    round(sacrebleu.corpus_bleu(candidate_1, reference).score, 1),
)
print(
    "Results reference versus candidate 2 sacrebleu library BLEU: ",
    round(sacrebleu.corpus_bleu(candidate_2, reference).score, 1),
)
```
# Part 2: BLEU computation on a corpus
## Loading Data Sets for Evaluation Using the BLEU Score
In this section, we will show a simple pipeline for evaluating machine translated text. Due to storage and speed constraints, we will not be using our own model in this lab (you'll get to do that in the assignment!). Instead, we will be using [Google Translate](https://translate.google.com) to generate English to German translations and we will evaluate it against a known evaluation set. There are three files we will need:
1. A source text in English. In this lab, we will use the first 1671 words of the [wmt19](http://statmt.org/wmt19/translation-task.html) evaluation dataset downloaded via SacreBLEU. We just grabbed a subset because of limitations in the number of words that can be translated using Google Translate.
2. A reference translation to German of the corresponding first 1671 words from the original English text. This is also provided by SacreBLEU.
3. A candidate machine translation to German from the same 1671 words. This is generated by feeding the source text to a machine translation model. As mentioned above, we will use Google Translate to generate the translations in this file.
With that, we can now compare the reference and the candidate translation to get the BLEU Score.
```
# Loading the raw data
wmt19_src = open("wmt19_src.txt", "r")
wmt19_src_1 = wmt19_src.read()
wmt19_src.close()
wmt19_ref = open("wmt19_ref.txt", "r")
wmt19_ref_1 = wmt19_ref.read()
wmt19_ref.close()
wmt19_can = open("wmt19_can.txt", "r")
wmt19_can_1 = wmt19_can.read()
wmt19_can.close()
tokenized_corpus_src = nltk.word_tokenize(wmt19_src_1.lower())
tokenized_corpus_ref = nltk.word_tokenize(wmt19_ref_1.lower())
tokenized_corpus_cand = nltk.word_tokenize(wmt19_can_1.lower())
print("English source text:")
print("\n")
print(f"{wmt19_src_1[0:170]} -> {tokenized_corpus_src[0:30]}")
print("\n")
print("German reference translation:")
print("\n")
print(f"{wmt19_ref_1[0:219]} -> {tokenized_corpus_ref[0:35]}")
print("\n")
print("German machine translation:")
print("\n")
print(f"{wmt19_can_1[0:199]} -> {tokenized_corpus_cand[0:29]}")
print(
    "Results reference versus candidate 1 our own BLEU implementation: ",
    round(bleu_score(tokenized_corpus_cand, tokenized_corpus_ref) * 100, 1),
)
print(
    "Results reference versus candidate 1 sacrebleu library BLEU: ",
    round(sacrebleu.corpus_bleu(wmt19_can_1, wmt19_ref_1).score, 1),
)
```
**BLEU Score Interpretation on a Corpus**
|Score | Interpretation |
|:---------:|:-------------------------------------------------------------:|
| < 10 | Almost useless |
| 10 - 19 | Hard to get the gist |
| 20 - 29 | The gist is clear, but has significant grammatical errors |
| 30 - 40 | Understandable to good translations |
| 40 - 50 | High quality translations |
| 50 - 60 | Very high quality, adequate, and fluent translations |
| > 60 | Quality often better than human |
From the table above (taken from [here](https://cloud.google.com/translate/automl/docs/evaluate)), we can see the translation is high quality (*if you see "Hard to get the gist", please open your workspace, delete `wmt19_can.txt` and get the latest version via the Lab Help button*). Moreover, the results of our coded BLEU score are almost identical to those of the SacreBLEU package.
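The clipping of n-gram counts is the core idea behind both implementations compared above. Here is a minimal, self-contained sketch of that ingredient on toy sentences (not the wmt19 data, and not the notebook's `bleu_score` function — just the clipped-precision idea):

```python
from collections import Counter

def modified_ngram_precision(candidate, reference, n=1):
    """Clipped n-gram precision, the core quantity behind BLEU.

    Counts of each candidate n-gram are clipped to their maximum count
    in the reference, so repeating a reference word cannot inflate the score.
    """
    cand_ngrams = Counter(
        tuple(candidate[i:i + n]) for i in range(len(candidate) - n + 1)
    )
    ref_ngrams = Counter(
        tuple(reference[i:i + n]) for i in range(len(reference) - n + 1)
    )
    clipped = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
    return clipped / max(sum(cand_ngrams.values()), 1)

reference = "the cat sat on the mat".split()
# Degenerate candidate: unclipped precision would be 7/7 = 1.0, but
# clipping caps "the" at its reference count of 2.
candidate = "the the the the the the the".split()
print(modified_ngram_precision(candidate, reference))  # 2/7 ≈ 0.2857
print(modified_ngram_precision("the cat sat".split(), reference))  # 1.0
```

This is why the degenerate "the the the…" candidate, a classic BLEU failure case, is penalized rather than rewarded.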
# Lab 2: Inference in Graphical Models
### Machine Learning 2, 2016
* The lab exercises should be made in groups of two people.
* The deadline is Sunday, April 24, 23:59.
* Assignment should be sent to t.s.cohen at uva dot nl (Taco Cohen). The subject line of your email should be "[ML2_2016] lab#_lastname1\_lastname2".
* Put your and your teammate's names in the body of the email
* Attach the .IPYNB (IPython Notebook) file containing your code and answers. Naming of the file follows the same rule as the subject line.
Notes on implementation:
* You should write your code and answers in an IPython Notebook: http://ipython.org/notebook.html. If you have problems, please contact us.
* Among the first lines of your notebook should be "%pylab inline". This imports all required modules, and your plots will appear inline.
* NOTE: test your code and make sure we can run your notebook / scripts!
### Introduction
In this assignment, we will implement the sum-product and max-sum algorithms for factor graphs over discrete variables. The relevant theory is covered in chapter 8 of Bishop's PRML book, in particular section 8.4. Read this chapter carefully before continuing!
We will first implement sum-product and max-sum and apply it to a simple poly-tree structured factor graph for medical diagnosis. Then, we will implement a loopy version of the algorithms and use it for image denoising.
For this assignment we recommended you stick to numpy ndarrays (constructed with np.array, np.zeros, np.ones, etc.) as opposed to numpy matrices, because arrays can store n-dimensional arrays whereas matrices only work for 2d arrays. We need n-dimensional arrays in order to store conditional distributions with more than 1 conditioning variable. If you want to perform matrix multiplication on arrays, use the np.dot function; all infix operators including *, +, -, work element-wise on arrays.
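As a small illustration of why n-dimensional arrays are needed, the bronchitis table from section 1.1 below can be stored as a (2, 2, 2) ndarray and contracted against the priors with `np.tensordot` (the values come from the assignment; the contraction itself is just a sketch, not part of the required solution):

```python
import numpy as np

# p(B | I, S) for binary variables stored as a 3-d array:
# axis 0 indexes I, axis 1 indexes S, axis 2 indexes B.
p_b_given_is = np.zeros((2, 2, 2))
p_b_given_is[:, :, 1] = [[0.0001, 0.7],   # I=0: S=0, S=1
                         [0.9,    0.99]]  # I=1: S=0, S=1
p_b_given_is[:, :, 0] = 1 - p_b_given_is[:, :, 1]

# Every conditional distribution sums to one over the B axis.
print(p_b_given_is.sum(axis=2))

# Contracting against the priors p(I) and p(S) with np.tensordot
# sums out I and S, leaving the marginal p(B).
p_i = np.array([0.95, 0.05])
p_s = np.array([0.8, 0.2])
p_b = np.tensordot(np.tensordot(p_b_given_is, p_i, axes=(0, 0)), p_s, axes=(0, 0))
print(p_b)  # array([p(B=0), p(B=1)])
```

With a matrix type this conditional, which has two conditioning variables, could not be stored as a single object.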
## Part 1: The sum-product algorithm
We will implement a data structure to store a factor graph and to facilitate computations on this graph. Recall that a factor graph consists of two types of nodes: factors and variables. Below you will find some classes for these node types to get you started. Carefully inspect this code and make sure you understand what it does; you will have to build on it later.
```
%pylab inline
np.set_printoptions(precision=4)
class Node(object):
"""
Base-class for Nodes in a factor graph. Only instantiate sub-classes of Node.
"""
def __init__(self, name):
# A name for this Node, for printing purposes
self.name = name
# Neighbours in the graph, identified with their index in this list.
# i.e. self.neighbours contains neighbour 0 through len(self.neighbours) - 1.
self.neighbours = []
# Reset the node-state (not the graph topology)
self.reset()
def reset(self):
# Incoming messages; a dictionary mapping neighbours to messages.
# That is, it maps Node -> np.ndarray.
self.in_msgs = {}
# A set of neighbours for which this node has pending messages.
# We use a python set object so we don't have to worry about duplicates.
self.pending = set([])
def add_neighbour(self, nb):
self.neighbours.append(nb)
def send_sp_msg(self, other):
# To be implemented in subclass.
raise Exception('Method send_sp_msg not implemented in base-class Node')
def send_ms_msg(self, other):
# To be implemented in subclass.
raise Exception('Method send_ms_msg not implemented in base-class Node')
def receive_msg(self, other, msg):
# Store the incoming message, replacing previous messages from the same node
self.in_msgs[other] = msg
# TODO: add pending messages
# self.pending.update(...)
def __str__(self):
# This is printed when using 'print node_instance'
return self.name
class Variable(Node):
def __init__(self, name, num_states):
"""
Variable node constructor.
Args:
name: a name string for this node. Used for printing.
num_states: the number of states this variable can take.
Allowable states run from 0 through (num_states - 1).
For example, for a binary variable num_states=2,
and the allowable states are 0, 1.
"""
self.num_states = num_states
# Call the base-class constructor
super(Variable, self).__init__(name)
def set_observed(self, observed_state):
"""
Set this variable to an observed state.
Args:
observed_state: an integer value in [0, self.num_states - 1].
"""
# Observed state is represented as a 1-of-N variable
# Could be 0.0 for sum-product, but log(0.0) = -inf so a tiny value is preferable for max-sum
self.observed_state[:] = 0.000001
self.observed_state[observed_state] = 1.0
def set_latent(self):
"""
Erase an observed state for this variable and consider it latent again.
"""
# No state is preferred, so set all entries of observed_state to 1.0
# Using this representation we need not differentiate between observed and latent
# variables when sending messages.
self.observed_state[:] = 1.0
def reset(self):
super(Variable, self).reset()
self.observed_state = np.ones(self.num_states)
def marginal(self, Z=None):
"""
Compute the marginal distribution of this Variable.
It is assumed that message passing has completed when this function is called.
Args:
Z: an optional normalization constant can be passed in. If None is passed, Z is computed.
Returns: marginal, Z. The first is a numpy array containing the normalized marginal distribution.
Z is either equal to the input Z, or computed in this function (if Z=None was passed).
"""
# TODO: compute marginal
return None, Z
def send_sp_msg(self, other):
# TODO: implement Variable -> Factor message for sum-product
pass
def send_ms_msg(self, other):
# TODO: implement Variable -> Factor message for max-sum
pass
class Factor(Node):
def __init__(self, name, f, neighbours):
"""
Factor node constructor.
Args:
name: a name string for this node. Used for printing
f: a numpy.ndarray with N axes, where N is the number of neighbours.
That is, the axes of f correspond to variables, and the index along that axes corresponds to a value of that variable.
Each axis of the array should have as many entries as the corresponding neighbour variable has states.
neighbours: a list of neighbouring Variables. Bi-directional connections are created.
"""
# Call the base-class constructor
super(Factor, self).__init__(name)
assert len(neighbours) == f.ndim, 'Factor function f should accept as many arguments as this Factor node has neighbours'
for nb_ind in range(len(neighbours)):
nb = neighbours[nb_ind]
assert f.shape[nb_ind] == nb.num_states, 'The range of the factor function f is invalid for input %i %s' % (nb_ind, nb.name)
self.add_neighbour(nb)
nb.add_neighbour(self)
self.f = f
def send_sp_msg(self, other):
# TODO: implement Factor -> Variable message for sum-product
pass
def send_ms_msg(self, other):
# TODO: implement Factor -> Variable message for max-sum
pass
```
### 1.1 Instantiate network (10 points)
Convert the directed graphical model ("Bayesian Network") shown below to a factor graph. Instantiate this graph by creating Variable and Factor instances and linking them according to the graph structure.
To instantiate the factor graph, first create the Variable nodes and then create Factor nodes, passing a list of neighbour Variables to each Factor.
Use the following prior and conditional probabilities.
$$
p(\verb+Influenza+) = 0.05 \\\\
p(\verb+Smokes+) = 0.2 \\\\
$$
$$
p(\verb+SoreThroat+ = 1 | \verb+Influenza+ = 1) = 0.3 \\\\
p(\verb+SoreThroat+ = 1 | \verb+Influenza+ = 0) = 0.001 \\\\
p(\verb+Fever+ = 1| \verb+Influenza+ = 1) = 0.9 \\\\
p(\verb+Fever+ = 1| \verb+Influenza+ = 0) = 0.05 \\\\
p(\verb+Bronchitis+ = 1 | \verb+Influenza+ = 1, \verb+Smokes+ = 1) = 0.99 \\\\
p(\verb+Bronchitis+ = 1 | \verb+Influenza+ = 1, \verb+Smokes+ = 0) = 0.9 \\\\
p(\verb+Bronchitis+ = 1 | \verb+Influenza+ = 0, \verb+Smokes+ = 1) = 0.7 \\\\
p(\verb+Bronchitis+ = 1 | \verb+Influenza+ = 0, \verb+Smokes+ = 0) = 0.0001 \\\\
p(\verb+Coughing+ = 1| \verb+Bronchitis+ = 1) = 0.8 \\\\
p(\verb+Coughing+ = 1| \verb+Bronchitis+ = 0) = 0.07 \\\\
p(\verb+Wheezing+ = 1| \verb+Bronchitis+ = 1) = 0.6 \\\\
p(\verb+Wheezing+ = 1| \verb+Bronchitis+ = 0) = 0.001 \\\\
$$
```
from IPython.core.display import Image
Image(filename='bn.png')
# Variables
def init_variables():
I = Variable('Influenza', 2)
S = Variable('Smokes', 2)
ST = Variable('SoreThroat', 2)
F = Variable('Fever', 2)
B = Variable('Bronchitis', 2)
C = Variable('Coughing', 2)
W = Variable('Wheezing', 2)
return I, S, ST, F, B, C, W
```
$$
p(\verb+SoreThroat+ = 1 | \verb+Influenza+ = 1) = 0.3 \\\\
p(\verb+SoreThroat+ = 1 | \verb+Influenza+ = 0) = 0.001
$$
```
# Factor nodes
def init_f1(I, ST):
# Order: I, ST
f1_weights = np.empty((2, 2))
f1_weights[1, 1] = 0.3
f1_weights[0, 1] = 0.001
f1_weights[:, 0] = 1 - f1_weights[:, 1]
f1 = Factor('f1', f1_weights, [I, ST])
return f1
```
$$
p(\verb+Fever+ = 1| \verb+Influenza+ = 1) = 0.9 \\\\
p(\verb+Fever+ = 1| \verb+Influenza+ = 0) = 0.05
$$
```
def init_f2(I, F):
# Order: I, F
f2_weights = np.empty((2, 2))
f2_weights[1, 1] = 0.9
f2_weights[0, 1] = 0.05
f2_weights[:, 0] = 1 - f2_weights[:, 1]
f2 = Factor('f2', f2_weights, [I, F])
return f2
```
$$
p(\verb+Bronchitis+ = 1 | \verb+Influenza+ = 1, \verb+Smokes+ = 1) = 0.99 \\\\
p(\verb+Bronchitis+ = 1 | \verb+Influenza+ = 1, \verb+Smokes+ = 0) = 0.9 \\\\
p(\verb+Bronchitis+ = 1 | \verb+Influenza+ = 0, \verb+Smokes+ = 1) = 0.7 \\\\
p(\verb+Bronchitis+ = 1 | \verb+Influenza+ = 0, \verb+Smokes+ = 0) = 0.0001
$$
```
def init_f3(I, S, B):
# Order: I, S, B
f3_weights = np.empty((2, 2, 2))
f3_weights[1, 1, 1] = 0.99
f3_weights[1, 0, 1] = 0.9
f3_weights[0, 1, 1] = 0.7
f3_weights[0, 0, 1] = 0.0001
f3_weights[:, :, 0] = 1 - f3_weights[:, :, 1]
f3 = Factor('f3', f3_weights, [I, S, B])
return f3
```
$$
p(\verb+Coughing+ = 1| \verb+Bronchitis+ = 1) = 0.8 \\\\
p(\verb+Coughing+ = 1| \verb+Bronchitis+ = 0) = 0.07
$$
```
def init_f4(B, C):
# Order: B, C
f4_weights = np.empty((2, 2))
f4_weights[1, 1] = 0.8
f4_weights[0, 1] = 0.07
f4_weights[:, 0] = 1 - f4_weights[:, 1]
f4 = Factor('f4', f4_weights, [B, C])
return f4
```
$$
p(\verb+Wheezing+ = 1| \verb+Bronchitis+ = 1) = 0.6 \\\\
p(\verb+Wheezing+ = 1| \verb+Bronchitis+ = 0) = 0.001
$$
```
def init_f5(B, W):
# Order: B, W
f5_weights = np.empty((2, 2))
f5_weights[1, 1] = 0.6
f5_weights[0, 1] = 0.001
f5_weights[:, 0] = 1 - f5_weights[:, 1]
f5 = Factor('f5', f5_weights, [B, W])
return f5
```
$$
p(\verb+Smokes+) = 0.2
$$
```
def init_f6(S):
f6_weights = np.array([0.8, 0.2])
f6 = Factor('f6', f6_weights, [S])
return f6
```
$$
p(\verb+Influenza+) = 0.05
$$
```
def init_f7(I):
f7_weights = np.array([0.95, 0.05])
f7 = Factor('f7', f7_weights, [I])
return f7
def check_params_before_sending(self, other):
    # comprehensions instead of filter/map so the results are real lists
    # under both Python 2 and Python 3
    in_nodes = [nb for nb in self.neighbours if nb != other]
    # Checks if all the information required to pass a message is present
    for in_node in in_nodes:
        if in_node not in self.in_msgs:
            raise ValueError('Message from %s is missing for the factor %s' % (in_node, self))
    # A list of incoming messages
    in_msgs = [self.in_msgs[in_node] for in_node in in_nodes]
    return in_nodes, in_msgs
Node.check_params_before_sending = check_params_before_sending
```
### 1.2 Factor to variable messages (20 points)
Write a method `send_sp_msg(self, other)` for the Factor class, that checks if all the information required to pass a message to Variable `other` is present, computes the message and sends it to `other`. "Sending" here simply means calling the `receive_msg` function of the receiving node (we will implement this later). The message itself should be represented as a numpy array (np.array) whose length is equal to the number of states of the variable.
An elegant and efficient solution can be obtained using the n-way outer product of vectors. This product takes n vectors $\mathbf{x}^{(1)}, \ldots, \mathbf{x}^{(n)}$ and computes an $n$-dimensional tensor (ndarray) whose element $(i_1, \ldots, i_n)$ is given by $\prod_j \mathbf{x}^{(j)}_{i_j}$. In python, this is realized as `np.multiply.reduce(np.ix_(*vectors))` for a python list `vectors` of 1D numpy arrays. Try to figure out how this statement works -- it contains some useful functional programming techniques. Another function that you may find useful in computing the message is `np.tensordot`.
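A quick sanity check of this outer-product trick on toy vectors (the numbers are illustrative): `np.ix_` reshapes each vector so it broadcasts along its own axis. Note that on recent NumPy versions `np.multiply.reduce` may refuse to build an object array from the ragged tuple, in which case `functools.reduce` over the same grids is an equivalent, safe form.

```python
import numpy as np
from functools import reduce

# Toy messages from variables with 2 and 3 states.
x = np.array([1.0, 2.0])
y = np.array([3.0, 4.0, 5.0])

# np.ix_ reshapes so each vector broadcasts along its own axis:
# x -> shape (2, 1), y -> shape (1, 3).
gx, gy = np.ix_(x, y)
outer = gx * gy  # element [i, j] = x[i] * y[j]

# Same result via a reduce over the broadcastable grids.
outer2 = reduce(np.multiply, np.ix_(x, y))

print(outer.shape)                         # (2, 3)
print(np.allclose(outer, np.outer(x, y)))  # True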
```
from functools import reduce

def send_sp_msg(self, other):
    assert isinstance(other, Variable)
    if other not in self.neighbours:
        raise Exception('The specified node is not a neighbour')
    factor_indexes = list(range(len(self.neighbours)))
    factor_indexes.remove(self.neighbours.index(other))
    message_indexes = list(range(len(factor_indexes)))
    # extracting messages that are later used in computations
    mes = []
    for ne in self.neighbours:
        if ne == other:
            continue
        if ne not in self.in_msgs:
            raise Exception('Some messages are not received')
        mes.append(self.in_msgs[ne])
    if mes:
        # outer product of the incoming messages (a safe equivalent of
        # np.multiply.reduce over the np.ix_ grids), contracted against f
        mes = np.tensordot(self.f, reduce(np.multiply, np.ix_(*mes)),
                           axes=(factor_indexes, message_indexes))
    else:
        # leaf factor (a prior): the message is the factor itself
        mes = self.f
    # sending the message
    other.receive_msg(self, mes)
    self.pending.discard(other)
Factor.send_sp_msg = send_sp_msg
```
### 1.3 Variable to factor messages (10 points)
Write a method `send_sp_msg(self, other)` for the Variable class that checks if all the information required to pass a message to the Factor `other` is present, computes the message, and sends it to `other`.
```
def variable_send_sp_msg(self, other):
    in_nodes, in_msgs = self.check_params_before_sending(other)
    # Already with observed_state required for 1.7:
    # if the variable is unobserved, self.observed_state is a vector of
    # ones and we marginalize over all its values; if it is observed as,
    # say, state 1, then self.observed_state ~= [0, 1] and the state-0
    # entry is (numerically) zeroed out.
    out_msg = self.observed_state.copy()
    if in_msgs:
        # leaf variables have no incoming messages yet
        out_msg = out_msg * np.multiply.reduce(in_msgs)
    # Sends a message to other
    other.receive_msg(self, out_msg)
    self.pending.discard(other)
Variable.send_sp_msg = variable_send_sp_msg
```
### 1.4 Compute marginal (10 points)
Later in this assignment, we will implement message passing schemes to do inference. Once the message passing has completed, we will want to compute local marginals for each variable.
Write the method `marginal` for the Variable class, that computes a marginal distribution over that node.
```
def variable_marginal(self, Z=None):
    # Already with observed_state required for 1.7
    marginal = self.observed_state * np.multiply.reduce(list(self.in_msgs.values()))
    if Z is None:
        Z = np.sum(marginal)
    marginal /= Z
    return marginal, Z
Variable.marginal = variable_marginal
```
### 1.5 Receiving messages (10 points)
In order to implement the loopy and non-loopy message passing algorithms, we need some way to determine which nodes are ready to send messages to which neighbours. To do this in a way that works for both loopy and non-loopy algorithms, we make use of the concept of "pending messages", which is explained in Bishop (8.4.7):
"we will say that a (variable or factor)
node a has a message pending on its link to a node b if node a has received any
message on any of its other links since the last time it send (sic) a message to b. Thus,
when a node receives a message on one of its links, this creates pending messages
on all of its other links."
Keep in mind that for the non-loopy algorithm, nodes may not have received any messages on some or all of their links. Therefore, before we say node a has a pending message for node b, we must check that node a has received all messages needed to compute the message that is to be sent to b.
Modify the function `receive_msg`, so that it updates the self.pending variable as described above. The member self.pending is a set that is to be filled with Nodes to which self has pending messages. Modify the `send_msg` functions to remove pending messages as they are sent.
```
def node_receive_msg(self, other, msg):
    # Store the incoming message, replacing previous messages from the same node
    self.in_msgs[other] = msg
    print '%s receives message from %s: %s' % (self, other, msg)
    # A message to neighbour nb becomes pending once messages from all
    # the *other* links have arrived; the message from nb itself is not
    # needed to compute the message sent to nb.
    for nb in set(self.neighbours) - {other}:
        needed = set(self.neighbours) - {nb}
        if needed.issubset(self.in_msgs):
            self.pending.add(nb)
Node.receive_msg = node_receive_msg
```
### 1.6 Inference Engine (10 points)
Write a function `sum_product(node_list)` that runs the sum-product message passing algorithm on a tree-structured factor graph with given nodes. The input parameter `node_list` is a list of all Node instances in the graph, which is assumed to be ordered correctly. That is, the list starts with a leaf node, which can always send a message. Subsequent nodes in `node_list` should be capable of sending a message when the pending messages of preceding nodes in the list have been sent. The sum-product algorithm then proceeds by passing over the list from beginning to end, sending all pending messages at the nodes it encounters. Then the algorithm traverses the list in reverse order, once more sending all pending messages at each node as it is encountered. For this to work, you must initialize pending messages for all the leaf nodes, e.g. `influenza_prior.pending.add(influenza)`, where `influenza_prior` is a Factor node corresponding to the prior, `influenza` is a Variable node, and the only connection of `influenza_prior` goes to `influenza`.
```
def apply_algorithm(node_list, func):
for node in node_list:
for other in list(node.pending):
func(node, other)
def sum_product(node, other):
node.send_sp_msg(other)
def configure_experiment():
variables = init_variables()
I, S, ST, F, B, C, W = variables
f1 = init_f1(I, ST)
f2 = init_f2(I, F)
f3 = init_f3(I, S, B)
f4 = init_f4(B, C)
f5 = init_f5(B, W)
f6 = init_f6(S)
f7 = init_f7(I)
f6.pending.update([S])
f7.pending.update([I])
ST.pending.update([f1])
F.pending.update([f2])
C.pending.update([f4])
W.pending.update([f5])
return (I, S, ST, F, B, C, W), (f1, f2, f3, f4, f5, f6, f7)
def print_marginals(variables):
for variable in variables:
marginal, Z = variable.marginal(None)
print variable, marginal
variables, factors = configure_experiment()
I, S, ST, F, B, C, W = variables
f1, f2, f3, f4, f5, f6, f7 = factors
node_list = [f6, f7, W, C, F, f4, f5, S, f2, B, f3, I, f1, ST]
print 'Forward pass'
apply_algorithm(node_list, sum_product)
ST.pending.update([f1])
print 'Backward pass'
apply_algorithm(reversed(node_list), sum_product)
print_marginals(variables)
```
### 1.7 Observed variables and probabilistic queries (15 points)
We will now use the inference engine to answer probabilistic queries. That is, we will set certain variables to observed values, and obtain the marginals over latent variables. We have already provided functions `set_observed` and `set_latent` that manage a member of Variable called `observed_state`. Modify the `Variable.send_msg` and `Variable.marginal` routines that you wrote before, to use `observed_state` so as to get the required marginals when some nodes are observed.
```
variables, factors = configure_experiment()
I, S, ST, F, B, C, W = variables
f1, f2, f3, f4, f5, f6, f7 = factors
B.set_observed(1)
node_list = [f6, f7, W, C, F, f4, f5, S, f2, B, f3, I, f1, ST]
print 'Forward pass'
apply_algorithm(node_list, sum_product)
ST.pending.update([f1])
print 'Backward pass'
apply_algorithm(reversed(node_list), sum_product)
print_marginals(variables)
```
### 1.8 Sum-product and MAP states (5 points)
A maximum a posteriori state (MAP-state) is an assignment of all latent variables that maximizes the probability of latent variables given observed variables:
$$
\mathbf{x}_{\verb+MAP+} = \arg\max _{\mathbf{x}} p(\mathbf{x} | \mathbf{y})
$$
Could we use the sum-product algorithm to obtain a MAP state? If yes, how? If no, why not?
## Part 2: The max-sum algorithm
Next, we implement the max-sum algorithm as described in section 8.4.5 of Bishop.
### 2.1 Factor to variable messages (10 points)
Implement the function `Factor.send_ms_msg` that sends Factor -> Variable messages for the max-sum algorithm. It is analogous to the `Factor.send_sp_msg` function you implemented before.
```
from functools import reduce

def factor_send_ms_msg(self, other):
    in_nodes, in_msgs = self.check_params_before_sending(other)
    factor_indexes = list(range(len(self.neighbours)))
    # Excludes the index of the "other" node
    factor_indexes.remove(self.neighbours.index(other))
    # Replacing sum by max and product by sum (we work in the log domain)
    if in_msgs:
        # outer sum of the incoming log-messages along distinct axes
        out_msg = np.expand_dims(reduce(np.add, np.ix_(*in_msgs)),
                                 self.neighbours.index(other))
    else:
        # leaf factor (a prior): no incoming messages to add
        out_msg = 0.0
    out_msg = np.squeeze(np.apply_over_axes(np.amax, np.log(self.f) + out_msg, factor_indexes))
    # Sends a message to other
    other.receive_msg(self, out_msg)
    self.pending.discard(other)
Factor.send_ms_msg = factor_send_ms_msg
```
### 2.2 Variable to factor messages (10 points)
Implement the `Variable.send_ms_msg` function that sends Variable -> Factor messages for the max-sum algorithm.
```
def variable_send_ms_msg(self, other):
in_nodes, in_msgs = self.check_params_before_sending(other)
out_msg = np.log(self.observed_state)
if len(in_msgs) > 0:
out_msg += np.add.reduce(in_msgs)
# Sends a message to other
other.receive_msg(self, out_msg)
self.pending.remove(other)
Variable.send_ms_msg = variable_send_ms_msg
```
### 2.3 Find a MAP state (10 points)
Using the same message passing schedule we used for sum-product, implement the max-sum algorithm. For simplicity, we will ignore issues relating to non-unique maxima. So there is no need to implement backtracking; the MAP state is obtained by a per-node maximization (eq. 8.98 in Bishop). Make sure your algorithm works with both latent and observed variables.
```
def map_state(self):
    # Returns the most probable state (0 or 1)
    return np.argmax(np.add.reduce(list(self.in_msgs.values())) + np.log(self.observed_state))
Variable.map_state = map_state
def max_sum(node, other):
node.send_ms_msg(other)
def print_map_states(variables):
for variable in variables:
map_state = variable.map_state()
print variable, map_state
variables, factors = configure_experiment()
I, S, ST, F, B, C, W = variables
f1, f2, f3, f4, f5, f6, f7 = factors
B.set_observed(1)
node_list = [f6, f7, W, C, F, f4, f5, S, f2, B, f3, I, f1, ST]
print 'Forward pass'
apply_algorithm(node_list, max_sum)
ST.pending.update([f1])
print 'Backward pass'
apply_algorithm(reversed(node_list), max_sum)
print_map_states(variables)
```
## Part 3: Image Denoising and Loopy BP
Next, we will use a loopy version of max-sum to perform denoising on a binary image. The model itself is discussed in Bishop 8.3.3, but we will use loopy max-sum instead of Iterative Conditional Modes as Bishop does.
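Before building the full graph, here is a minimal sketch of how the pairwise potential drives denoising in the max-sum (log) domain. The agreement probability of 0.8 matches the `init_prob` array used in the construction code below; the four-neighbour scoring is only an illustration, not part of the required solution:

```python
import numpy as np

# Pairwise potential favouring agreement between neighbouring pixels.
pairwise = np.array([[0.8, 0.2],
                     [0.2, 0.8]])
log_pairwise = np.log(pairwise)

# Max-sum score of a pixel's two states given four neighbours all in
# state 1: agreeing contributes log(0.8) per link, disagreeing log(0.2),
# so a lone flipped pixel is pulled back to match its neighbourhood.
neighbour_state = 1
scores = 4 * log_pairwise[:, neighbour_state]
print(scores.argmax())  # state 1 (agreement) maximizes the score
```

The unary factor connecting each latent pixel to its observation uses the same matrix, so the final labeling trades off fidelity to the noisy image against smoothness.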
The following code creates some toy data. `im` is a quite large binary image, `test_im` is a smaller synthetic binary image. Noisy versions are also provided.
```
from pylab import imread, gray
# Load the image and binarize
im = np.mean(imread('dalmatian1.png'), axis=2) > 0.5
imshow(im)
gray()
# Add some noise
noise = np.random.rand(*im.shape) > 0.9
noise_im = np.logical_xor(noise, im)
figure()
imshow(noise_im)
test_im = np.zeros((10,10))
#test_im[5:8, 3:8] = 1.0
#test_im[5,5] = 1.0
figure()
imshow(test_im)
# Add some noise
noise = np.random.rand(*test_im.shape) > 0.9
noise_test_im = np.logical_xor(noise, test_im)
figure()
imshow(noise_test_im)
show()
```
### 3.1 Construct factor graph (10 points)
Convert the Markov Random Field (Bishop, fig. 8.31) to a factor graph and instantiate it.
```
from itertools import product
def create_factor_graph(img):
Y = np.empty(img.shape, dtype='object')
X = np.empty(img.shape, dtype='object')
fYX = np.empty(img.shape, dtype='object')
fXR = np.empty((img.shape[0] - 1, img.shape[1] - 1), dtype='object')
fXB = np.empty((img.shape[0] - 1, img.shape[1] - 1), dtype='object')
init_prob = np.array([[0.8, 0.2], [0.2, 0.8]])
for y, x in product(range(img.shape[0]), range(img.shape[1])):
Y[y, x] = Variable('y(%d,%d)' % (x, y), 2)
Y[y, x].set_observed(img[y, x])
X[y, x] = Variable('x(%d,%d)' % (x, y), 2)
fYX[y, x] = Factor('fXY(%d,%d)' % (x, y), init_prob, [Y[y, x], X[y, x]])
Y[y, x].pending.update([fYX[y, x]])
one_msg = np.ones(2)
for y, x in product(range(img.shape[0] - 1), range(img.shape[1] - 1)):
fXR[y, x] = Factor('fXR(%d,%d)' % (x, y), init_prob, [X[y, x], X[y, x + 1]])
fXB[y, x] = Factor('fXB(%d,%d)' % (x, y), init_prob, [X[y, x], X[y + 1, x]])
# Flooding schedule, simultaneously passing a message across every link in both directions
# Bishop 8.4.7
X[y, x].in_msgs[fXR[y, x]] = one_msg
X[y, x].in_msgs[fXB[y, x]] = one_msg
X[y, x + 1].in_msgs[fXR[y, x]] = one_msg
X[y + 1, x].in_msgs[fXB[y, x]] = one_msg
return Y, X, fYX, fXR, fXB
```
### 3.2 Loopy max-sum (10 points)
Implement the loopy max-sum algorithm, by passing messages from randomly chosen nodes iteratively until no more pending messages are created or a maximum number of iterations is reached.
Think of a good way to initialize the messages in the graph.
```
def denoise(img, niter=10):
Y, X, fYX, fXR, fXB = create_factor_graph(img)
for i in range(niter):
fXX = np.hstack((fXR.flatten(), fXB.flatten()))
np.random.shuffle(fXX)
# Preordered: first observed variables, then factors between observed variables and
# corresponding latent variables, then all latent variables, and then factors between
# latents in a random order.
node_list = np.hstack((Y.flatten(), fYX.flatten(), X.flatten(), fXX)).tolist()
apply_algorithm(node_list, max_sum)
result = np.zeros_like(img)
for y, x in product(range(img.shape[0]), range(img.shape[1])):
result[y, x] = X[y, x].map_state()
return result
imshow(denoise(noise_test_im))
show()
imshow(denoise(noise_im, niter=10))
show()
```
# Advanced Ray - Overview
© 2019-2020, Anyscale. All Rights Reserved

This tutorial, part of [Anyscale Academy](https://anyscale.com/academy), picks up where the [Ray Crash Course](../ray-crash-course/00-Ray-Crash-Course-Overview.ipynb) left off. It explores tasks and actors in more detail, including profiling and debugging applications, and it explains Ray's internal architecture.
See the instructions in the [README](../README.md) for setting up your environment to use this tutorial.
Go [here](../Overview.ipynb) for an overview of all tutorials.
> **Tip:** Recall that the [Ray Package Reference](https://docs.ray.io/en/latest/package-ref.html) in the [Ray Docs](https://docs.ray.io/en/latest/) is useful for exploring the API features we'll learn.
## Join Us at Ray Summit 2020!
Join us for the [_free_ Ray Summit 2020 virtual conference](https://events.linuxfoundation.org/ray-summit/?utm_source=dean&utm_medium=embed&utm_campaign=ray_summit&utm_content=anyscale_academy), September 30 - October 1, 2020. We have an amazing lineup of luminary keynote speakers and breakout sessions on the Ray ecosystem, third-party Ray libraries, and applications of Ray in the real world.

## Advanced Ray Tutorial Guide
| # | Lesson (Notebook) | Description |
| :- | :-------------------------------------------------------- | :---------------------------------------- |
| 00 | [Overview](00-Advanced-Ray-Overview.ipynb) | A _table of contents_ for this tutorial. |
| 01 | [Ray Tasks Revisited](01-Ray-Tasks-Revisited.ipynb) | More exploration of `ray.wait()` usage patterns, task dependencies and their management, and task profiling techniques. |
| 02 | [Ray Actors Revisited](02-Ray-Actors-Revisited.ipynb) | A more in-depth look at actor characteristics and profiling actor performance using the _Ray Dashboard_. |
| 03 | [Ray Internals](03-Ray-Internals.ipynb) | Explores the architecture of Ray, task scheduling, the Object Store, etc. |
In addition, exercise solutions for this tutorial can be found [here](solutions/Advanced-Ray-Solutions.ipynb).
## Getting Help
* The [#tutorial channel](https://ray-distributed.slack.com/archives/C011ML23W5B) on the [Ray Slack](https://ray-distributed.slack.com). [Click here](https://forms.gle/9TSdDYUgxYs8SA9e8) to join.
* [Email](mailto:academy@anyscale.com)
Find an issue? Please report it!
* [GitHub issues](https://github.com/anyscale/academy/issues)
#### Give Us Feedback!
Let us know what you like and don't like about this reinforcement learning and Ray RLlib tutorial.
* [Survey](https://forms.gle/PKYkFvrAf5M3jL3B8)
```
import yfinance as yf
import pandas as pd
#import numpy and pandas
import numpy as np
# used for dates
import datetime
from datetime import datetime, date
# Set formatting options
pd.set_option('display.notebook_repr_html', False)
pd.set_option('display.max_columns', 7)
pd.set_option('display.max_rows', 10)
pd.set_option('display.width', 60)
# bring in matplotlib for graphics
import matplotlib.pyplot as plt
%matplotlib inline
item={}
tickerStrings = ["SPY","AAPL","^TNX","^FVX","SPYG","SPYV","HYG","LQD","EUR=X",'CHF=X',"IVOL","QQQ","IWM","TQQQ","SQQQ"]
df_list = list()
for ticker in tickerStrings:
data = yf.download(ticker, group_by="Ticker")
data['ticker'] = ticker # add this column because the dataframe doesn't contain a column with the ticker
df_list.append(data)
# combine all dataframes into a single dataframe
df1 = pd.concat(df_list)
# save to csv
df1.to_csv('ticker.csv')
def pivot_tickers_to_columns(raw, column):
    # raw is the combined DataFrame built above (one row per date per
    # ticker, with a 'ticker' column); spread the tickers into columns
    # for the requested price field
    subset = raw[["ticker", column]]
    # reset the Date index so it survives the pivot
    ri = subset.reset_index()
    # return the pivot: dates as rows, tickers as columns
    return ri.pivot(index="Date", columns="ticker", values=column)
silver=pd.pivot(df1, values="Close", columns="ticker")
silver.reset_index()
# do the pivot with the helper defined above
close_px = pivot_tickers_to_columns(df1, "Close")
# peek at the result
close_px[:5]
# calculate daily percentage change
daily_pc = silver / silver.shift(1) - 1
daily_pc[5:]
silver.loc['2022-02-14']['AAPL'] / \
silver.loc['2022-02-11']['AAPL'] -1
daily_cr = (1 + daily_pc).cumprod()
daily_cr[5:]
daily_pc.hist(bins=50, figsize=(8,6));
from pandas.plotting import scatter_matrix
# plot the scatter of daily price changed for ALL stocks
scatter_matrix(daily_pc, diagonal='kde', figsize=(12,12));
# calculate the correlation between all the stocks relative
# to daily percentage change
corrs = daily_pc.corr()
corrs
# plot a heatmap of the correlations
plt.imshow(corrs, cmap='hot', interpolation='none')
plt.colorbar()
plt.xticks(range(len(corrs)), corrs.columns)
plt.yticks(range(len(corrs)), corrs.columns)
plt.gcf().set_size_inches(8,8)
# generate a scatter of the mean vs std of daily % change
plt.scatter(daily_pc.mean(), daily_pc.std())
plt.xlabel('Expected returns')
plt.ylabel('Risk')
# this adds fancy labels to each dot, with an arrow too
for label, x, y in zip(daily_pc.columns,
daily_pc.mean(),
daily_pc.std()):
plt.annotate(
label,
xy = (x, y), xytext = (30, -30),
textcoords = 'offset points', ha = 'right',
va = 'bottom',
bbox = dict(boxstyle = 'round,pad=0.5',
fc = 'yellow',
alpha = 0.5),
arrowprops = dict(arrowstyle = '->',
connectionstyle = 'arc3,rad=0'))
# set ranges and scales for good presentation
plt.xlim(-0.001, 0.003)
plt.ylim(0.005, 0.0275)
# set size
plt.gcf().set_size_inches(8,8)
# 75 period minimum
min_periods = 75
# calculate the volatility
vol = daily_pc.rolling(window=min_periods).std() * \
np.sqrt(min_periods)
# plot it
vol.plot(figsize=(10, 8));
```
| github_jupyter |
```
import numpy as np
import pickle
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
import glob
import csv
filenames = glob.glob("../Results/LastFMResults/AccRegret_03_09_23*")
random_result = {}
linucb_result = {}
c = 0
for fname in filenames:
random_result[c] = []
linucb_result[c] = []
print(fname)
with open(fname, newline='') as csvfile:
reader = csv.DictReader(csvfile)
for row in reader:
random_result[c].append(int(row["Time(Iteration)"]) - int(row['Random']))
linucb_result[c].append(int(row["Time(Iteration)"]) - int(row['LinUCB']))
c += 1
filenames = glob.glob("../Results/LastFMResults/AccRegret_03_10_00_0*")
for fname in filenames:
random_result[c] = []
linucb_result[c] = []
with open(fname, newline='') as csvfile:
reader = csv.DictReader(csvfile)
for row in reader:
random_result[c].append(int(row["Time(Iteration)"]) - int(row['Random']))
linucb_result[c].append(int(row["Time(Iteration)"]) - int(row['LinUCB']))
c += 1
filenames = glob.glob("../Results/LastFMResults/AccRegret_03_10_00_1*")
for fname in filenames:
random_result[c] = []
linucb_result[c] = []
with open(fname, newline='') as csvfile:
reader = csv.DictReader(csvfile)
for row in reader:
random_result[c].append(int(row["Time(Iteration)"]) - int(row['Random']))
linucb_result[c].append(int(row["Time(Iteration)"]) - int(row['LinUCB']))
c += 1
ratio = []
for i in range(c):
ratio.append(linucb_result[i][-1] / random_result[i][-1])
max(ratio)
result = {}
fname = "../Results/LastFMResults/AccRegret_03_09_23_34.csv"
r = []
l = []
result['LinUCB'] = []
with open(fname, newline='') as csvfile:
reader = csv.DictReader(csvfile)
for row in reader:
r.append(int(row["Time(Iteration)"]) - int(row['Random']))
l.append(int(row["Time(Iteration)"]) - int(row['LinUCB']))
result['LinUCB'].append((int(row["Time(Iteration)"]) - int(row['LinUCB'])) / (int(row["Time(Iteration)"]) - int(row['Random'])))
l[-1]/r[-1]
fname = "../Results/LastFMResults/AccRegret_03_09_23_57.csv"
r = []
l = []
result['NeuralUCB'] = []
with open(fname, newline='') as csvfile:
reader = csv.DictReader(csvfile)
for row in reader:
r.append(int(row["Time(Iteration)"]) - int(row['Random']))
l.append(int(row["Time(Iteration)"]) - int(row['NeuralUCB']))
result['NeuralUCB'].append((int(row["Time(Iteration)"]) - int(row['NeuralUCB'])) / (int(row["Time(Iteration)"]) - int(row['Random'])))
c += 1
l[-1]/r[-1]
fname = "../Results/LastFMResults/AccRegret_03_10_06_49.csv"
r = []
l = []
result['NeuralPHE'] = []
with open(fname, newline='') as csvfile:
reader = csv.DictReader(csvfile)
for row in reader:
r.append(int(row["Time(Iteration)"]) - int(row['Random']))
l.append(int(row["Time(Iteration)"]) - int(row['NeuralPHE']))
result['NeuralPHE'].append((int(row["Time(Iteration)"]) - int(row['NeuralPHE'])) / (int(row["Time(Iteration)"]) - int(row['Random'])))
c += 1
ax = pd.DataFrame(result).plot(title="LastFM Dataset")
ax.set_xlabel("impressions")
ax.set_ylabel("normalized payoff")
```
| github_jupyter |
_Lambda School Data Science_
# Join and Reshape datasets
Objectives
- concatenate data with pandas
- merge data with pandas
- understand tidy data formatting
- melt and pivot data with pandas
Links
- [Pandas Cheat Sheet](https://github.com/pandas-dev/pandas/blob/master/doc/cheatsheet/Pandas_Cheat_Sheet.pdf)
- [Tidy Data](https://en.wikipedia.org/wiki/Tidy_data)
- Combine Data Sets: Standard Joins
- Tidy Data
- Reshaping Data
- Python Data Science Handbook
- [Chapter 3.6](https://jakevdp.github.io/PythonDataScienceHandbook/03.06-concat-and-append.html), Combining Datasets: Concat and Append
- [Chapter 3.7](https://jakevdp.github.io/PythonDataScienceHandbook/03.07-merge-and-join.html), Combining Datasets: Merge and Join
- [Chapter 3.8](https://jakevdp.github.io/PythonDataScienceHandbook/03.08-aggregation-and-grouping.html), Aggregation and Grouping
- [Chapter 3.9](https://jakevdp.github.io/PythonDataScienceHandbook/03.09-pivot-tables.html), Pivot Tables
Reference
- Pandas Documentation: [Reshaping and Pivot Tables](https://pandas.pydata.org/pandas-docs/stable/reshaping.html)
- Modern Pandas, Part 5: [Tidy Data](https://tomaugspurger.github.io/modern-5-tidy.html)
## Download data
We’ll work with a dataset of [3 Million Instacart Orders, Open Sourced](https://tech.instacart.com/3-million-instacart-orders-open-sourced-d40d29ead6f2)!
```
!wget https://s3.amazonaws.com/instacart-datasets/instacart_online_grocery_shopping_2017_05_01.tar.gz
!tar --gunzip --extract --verbose --file=instacart_online_grocery_shopping_2017_05_01.tar.gz
#! runs a shell command in a subshell and shows its output
#% is an IPython line magic; e.g. %cd changes the notebook's working directory persistently
%cd instacart_2017_05_01
!ls -lh *.csv
#Let's look at the files...
!tail departments.csv
!head order_products__prior.csv
!head order_products__train.csv
!head products.csv
```
# Join Datasets
## Goal: Reproduce this example
The first two orders for user id 1:
```
from IPython.display import display, Image
url = 'https://cdn-images-1.medium.com/max/1600/1*vYGFQCafJtGBBX5mbl0xyw.png'
example = Image(url=url, width=600)
display(example)
#We assume order_id is a global order id and order_number is specific to each user.
```
## Load data
Here's a list of all six CSV filenames
```
!ls -lh *.csv
```
For each CSV
- Load it with pandas
- Look at the dataframe's shape
- Look at its head (first rows)
- `display(example)`
- Which columns does it have in common with the example we want to reproduce?
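The checklist above can be wrapped in a small helper (a sketch; the `peek` name is ours, not part of the lesson):

```python
import pandas as pd

def peek(df, name):
    """Print the dataframe's shape and return its head, per the checklist above."""
    print(name, df.shape)
    return df.head()

# usage: peek(pd.read_csv("aisles.csv"), "aisles")
```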
```
import pandas as pd
```
### aisles
```
aisles = pd.read_csv("aisles.csv")
aisles.head()
aisles.shape
```
### departments
```
departments = pd.read_csv("departments.csv")
departments.head()
departments.shape
```
### order_products__prior
```
order_products__prior = pd.read_csv("order_products__prior.csv")
order_products__prior.head()
order_products__prior.shape
```
### order_products__train
```
order_products__train = pd.read_csv("order_products__train.csv")
order_products__train.head()
order_products__train.shape
```
### orders
```
orders = pd.read_csv("orders.csv")
orders.head()
orders.tail()
```
### products
```
products = pd.read_csv("products.csv")
products.head()
```
## Concatenate order_products__prior and order_products__train
```
#pd.concat?
#help(pd.concat)
order_products = pd.concat([order_products__prior, order_products__train])
order_products.shape
(order_products.shape,order_products__prior.shape,order_products__train.shape)
#assert raises an error if the condition is false
assert len(order_products__prior) + len(order_products__train) == len(order_products)
assert order_products__prior.shape[0] + order_products__train.shape[0] == order_products.shape[0]
assert len(order_products.columns) == len(order_products__prior.columns) == len(order_products__train.columns)
#Check our goal
display(example)
```
## Get a subset of orders — the first two orders for user id 1
From `orders` dataframe:
- user_id
- order_id
- order_number
- order_dow
- order_hour_of_day
## Merge dataframes
Merge the subset from `orders` with columns from `order_products`
```
#Goal: rows for user_id == 1 with order_number < 3, so build a boolean mask
idx_user_id_1 = (orders['user_id'] == 1) & (orders['order_number'] < 3)
columns = ['user_id', 'order_id', 'order_number', 'order_dow', 'order_hour_of_day']
#loc first value is row, second value is columns
subset = orders.loc[idx_user_id_1,columns]
subset.head()
```
Merge with columns from `products`
```
#help(pd.merge)
columns = ['order_id', 'add_to_cart_order', 'product_id']
merge = pd.merge(subset, order_products[columns], on='order_id')
#Now in one dataframe, we have data of multiple sources
merge.head()
#We still need to add 'add to cart order', 'product id', and 'product name'
#Need to just add 'product name'
products.head()
final = pd.merge(merge, products[['product_id', 'product_name']], on='product_id')
final
final.columns
#Using a method called "dictionary comprehension", we can remove '_' in one line like so
mapper = {col_name: col_name.replace('_',' ') for col_name in final.columns}
mapper
#We could use the mapper in the next code cell as an alternative way to rename columns.
#Alternatively, set dictionary in a traditional way, use this way if there's more things to change:
d = {}
for col_name in final.columns:
d[col_name] = col_name.replace('_', ' ')
d
#Let's replace directly with "list comprehension"
final.columns = [col_name.replace('_',' ') for col_name in final.columns]
#Alternatively we could have used the mapper for the "dictionary comprehension" method,
#final = final.rename(index=str,columns=mapper)
final = final.sort_values(by=['order number','add to cart order'])
#the second entry in the 'by' list, 'add to cart order', is the tie-breaking sort key
final
#WE'RE DONE
#Compare to check if final matches the example
display(example)
```
## Dictionary of csv files
```
#Example of dictionary of csv files in next few code blocks
#Cleaner way of reading csv files if everything can be read the same way
csv_files = !ls -1 *.csv
csv_files
df_dict = {}
for csv_f in csv_files:
k = csv_f.split('.')[0] # strip the .csv extension to get the base name
df_dict[k] = pd.read_csv(csv_f)
df_dict.keys()
df_dict['aisles'].head()
del df_dict #To save memory on notebook
```
# Reshape Datasets
## Why reshape data?
#### Some libraries prefer data in different formats
For example, the Seaborn data visualization library generally prefers data in "tidy" format (though not always).
> "[Seaborn will be most powerful when your datasets have a particular organization.](https://seaborn.pydata.org/introduction.html#organizing-datasets) This format is alternately called “long-form” or “tidy” data and is described in detail by Hadley Wickham. The rules can be simply stated:
> - Each variable is a column
- Each observation is a row
> A helpful mindset for determining whether your data are tidy is to think backwards from the plot you want to draw. From this perspective, a “variable” is something that will be assigned a role in the plot."
#### Data science is often about putting square pegs in round holes
Here's an inspiring [video clip from _Apollo 13_](https://www.youtube.com/watch?v=ry55--J4_VQ): “Invent a way to put a square peg in a round hole.” It's a good metaphor for data wrangling!
## Hadley Wickham's Examples
From his paper, [Tidy Data](http://vita.had.co.nz/papers/tidy-data.html)
```
%matplotlib inline
import pandas as pd
import numpy as np
import seaborn as sns
table1 = pd.DataFrame(
[[np.nan, 2],
[16, 11],
[3, 1]],
index=['John Smith', 'Jane Doe', 'Mary Johnson'],
columns=['treatmenta', 'treatmentb'])
table2 = table1.T
```
"Table 1 provides some data about an imaginary experiment in a format commonly seen in the wild.
The table has two columns and three rows, and both rows and columns are labelled."
```
table1
```
"There are many ways to structure the same underlying data.
Table 2 shows the same data as Table 1, but the rows and columns have been transposed. The data is the same, but the layout is different."
```
table2
```
"Table 3 reorganises Table 1 to make the values, variables and observations more clear.
Table 3 is the tidy version of Table 1. Each row represents an observation, the result of one treatment on one person, and each column is a variable."
| name | trt | result |
|--------------|-----|--------|
| John Smith | a | - |
| Jane Doe | a | 16 |
| Mary Johnson | a | 3 |
| John Smith | b | 2 |
| Jane Doe | b | 11 |
| Mary Johnson | b | 1 |
## Table 1 --> Tidy
We can use the pandas `melt` function to reshape Table 1 into Tidy format.
```
#Melt gathers columns into rows (opposite of pivot)
table1.reset_index().melt(id_vars='index')
#Rename columns - we just did that, look at notes above. Use 'replace' or 'apply'
#apply method:
tmp = table1.reset_index().melt(id_vars='index')
tmp['trt'] = tmp['variable'].apply(lambda x: x.replace('treatment', ''))
#Hint to finish the job - rename columns, replace 'treatmenta' with 'a'
table1.reset_index()
#Melt the dataset into tidy format
tidy = table1.reset_index().melt(id_vars='index')
#Rename the columns
tidy = tidy.rename(columns={'index': 'name', 'variable': 'trt', 'value': 'result'})
#Replace trt value string of treatmenta->a, treatmentb->b
tidy['trt'] = tidy['trt'].apply(lambda x: x.replace('treatment', ''))
#Or: tidy['trt'] = tidy['trt'].str.replace('treatment', '')
tidy
```
## Table 2 --> Tidy
```
table2
#Reset index and melt the table
tidy = table2.reset_index().melt(id_vars='index')
tidy
#Reorder columns
tidy = tidy.reindex(columns=['variable','index','value'])
tidy
#Rename the columns
tidy = tidy.rename(columns={'variable': 'name', 'index': 'trt', 'value': 'result'})
tidy
#Remove 'treatment' string
tidy['trt'] = tidy['trt'].apply(lambda x: x.replace('treatment', ''))
#Or: tidy['trt'] = tidy['trt'].str.replace('treatment', '')
tidy
```
## Tidy --> Table 1
The `pivot_table` function is the inverse of `melt`.
```
#Pivot gathers rows into columns
tidy.pivot_table(index='name', columns='trt', values='result')
#Columns could be reordered to match Table 1 exactly, but we skip that here
```
## Tidy --> Table 2
```
tidy.pivot_table(index='trt',columns='name',values='result')
#Columns could be reordered to match Table 2 exactly, but we skip that here too
```
# Seaborn example
The rules can be simply stated:
- Each variable is a column
- Each observation is a row
A helpful mindset for determining whether your data are tidy is to think backwards from the plot you want to draw. From this perspective, a “variable” is something that will be assigned a role in the plot."
```
#Tidy format allows you to make the following plot easier
sns.catplot(x='trt', y='result', col='name',
kind='bar', data=tidy, height=2);
```
## Now with Instacart data
```
products = pd.read_csv('products.csv')
order_products = pd.concat([pd.read_csv('order_products__prior.csv'),
pd.read_csv('order_products__train.csv')])
orders = pd.read_csv('orders.csv')
```
## Goal: Reproduce part of this example
Instead of a plot with 50 products, we'll just do two — the first products from each list
- Half And Half Ultra Pasteurized
- Half Baked Frozen Yogurt
```
from IPython.display import display, Image
url = 'https://cdn-images-1.medium.com/max/1600/1*wKfV6OV-_1Ipwrl7AjjSuw.png'
example = Image(url=url, width=600)
display(example)
```
So, given a `product_name` we need to calculate its `order_hour_of_day` pattern.
## Subset and Merge
One challenge of merging this data is that the `products` and `orders` datasets share no common columns, so we use the `order_products` dataset as a bridge table supplying the keys for both merges.
```
a = products[['product_id','product_name']]
b = order_products[['order_id','product_id']]
c = orders[['order_id','order_hour_of_day']]
merged1 = pd.merge(a,b)
merged2 = pd.merge(merged1,c)
merged1.head()
merged2.head()
merged2.shape
product_names = [
'Half And Half Ultra Pasteurized',
'Half Baked Frozen Yogurt'
]
idx = (merged2['product_name'] == product_names[0]) | (merged2['product_name'] == product_names[1])
idx.sum()
subset = merged2[idx]
subset.head()
```
## 4 ways to reshape and plot
### 1. value_counts
```
cream = subset[subset['product_name'] == product_names[0]]
froyo = subset[subset['product_name'] == product_names[1]]
cream.shape, froyo.shape
cream['order_hour_of_day'].value_counts(normalize=True).sort_index().plot()
froyo['order_hour_of_day'].value_counts(normalize=True).sort_index().plot()
#compare with picture goal
display(example)
```
### 2. crosstab
```
table = pd.crosstab(subset['order_hour_of_day'],subset['product_name'])
table.plot()
pd.crosstab(subset['order_hour_of_day'],subset['product_name']).head()
```
### 3. Pivot Table
```
subset.pivot_table(index='order_hour_of_day', columns='product_name',values='order_id', aggfunc=len).plot()
subset.pivot_table(index='order_hour_of_day', columns='product_name',values='order_id', aggfunc=len).head()
#aggfunc=len gives number of counts of the row
```
### 4. melt
```
table.head()
melted = table.reset_index().melt(id_vars='order_hour_of_day')
melted.head()
sns.relplot(x='order_hour_of_day',
y='value',
data=melted,
hue='product_name',
kind='line');
```
# Assignment
## Join Data Section
These are the top 10 most frequently ordered products. How many times was each ordered?
1. Banana
2. Bag of Organic Bananas
3. Organic Strawberries
4. Organic Baby Spinach
5. Organic Hass Avocado
6. Organic Avocado
7. Large Lemon
8. Strawberries
9. Limes
10. Organic Whole Milk
First, write down which columns you need and which dataframes have them.
Next, merge these into a single dataframe.
Then, use pandas functions from the previous lesson to get the counts of the top 10 most frequently ordered products.
## Reshape Data Section
- Replicate the lesson code
- Complete the code cells we skipped near the beginning of the notebook
- Table 2 --> Tidy
- Tidy --> Table 2
- Load seaborn's `flights` dataset by running the cell below. Then create a pivot table showing the number of passengers by month and year. Use year for the index and month for the columns. You've done it right if you get 112 passengers for January 1949 and 432 passengers for December 1960.
```
#Assignment1 - using merge and counting most frequent order
a = products[['product_id','product_name']]
b = order_products[['order_id','product_id']]
merged = pd.merge(a,b)
merged['product_name'].value_counts().head(10)
#Assignment2 - Reshape Data Section
flights = sns.load_dataset('flights')
flights.head()
table = pd.pivot_table(flights, values='passengers', index='year', columns='month')
table
```
## Join Data Stretch Challenge
The [Instacart blog post](https://tech.instacart.com/3-million-instacart-orders-open-sourced-d40d29ead6f2) has a visualization of "**Popular products** purchased earliest in the day (green) and latest in the day (red)."
The post says,
> "We can also see the time of day that users purchase specific products.
> Healthier snacks and staples tend to be purchased earlier in the day, whereas ice cream (especially Half Baked and The Tonight Dough) are far more popular when customers are ordering in the evening.
> **In fact, of the top 25 latest ordered products, the first 24 are ice cream! The last one, of course, is a frozen pizza.**"
Your challenge is to reproduce the list of the top 25 latest ordered popular products.
We'll define "popular products" as products with more than 2,900 orders.
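One possible sketch for this challenge, assuming a dataframe like `merged2` built above (the function name and default threshold are ours):

```python
import pandas as pd

def top_latest_popular(merged, min_orders=2900, n=25):
    """Products ordered latest in the day, among 'popular' products.

    merged must have 'product_name' and 'order_hour_of_day' columns,
    one row per ordered product (like merged2 above).
    """
    counts = merged['product_name'].value_counts()
    popular = counts[counts > min_orders].index
    mean_hours = (merged[merged['product_name'].isin(popular)]
                  .groupby('product_name')['order_hour_of_day'].mean())
    return mean_hours.sort_values(ascending=False).head(n)

# e.g. top_latest_popular(merged2) should surface the late-evening ice creams
```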
## Reshape Data Stretch Challenge
_Try whatever sounds most interesting to you!_
- Replicate more of Instacart's visualization showing "Hour of Day Ordered" vs "Percent of Orders by Product"
- Replicate parts of the other visualization from [Instacart's blog post](https://tech.instacart.com/3-million-instacart-orders-open-sourced-d40d29ead6f2), showing "Number of Purchases" vs "Percent Reorder Purchases"
- Get the most recent order for each user in Instacart's dataset. This is a useful baseline when [predicting a user's next order](https://www.kaggle.com/c/instacart-market-basket-analysis)
- Replicate parts of the blog post linked at the top of this notebook: [Modern Pandas, Part 5: Tidy Data](https://tomaugspurger.github.io/modern-5-tidy.html)
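For the "most recent order for each user" idea above, a minimal sketch (assuming `order_number` increases with each successive order, as in the Instacart schema):

```python
import pandas as pd

def most_recent_orders(orders):
    """Keep the row with the highest order_number for each user."""
    return (orders.sort_values('order_number')
                  .groupby('user_id')
                  .tail(1))
```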
| github_jupyter |
## Acquisition success yearly trending
To run this standalone on the HEAD or GRETA networks:
```
# Get into Ska3 environment
git clone https://github.com/sot/aca_stats.git
cd aca_stats
jupyter nbconvert --to notebook --execute acq-success-trend-yearly.ipynb
ls -lrt
```
This will create the png and csv output files.
```
import numpy as np
from agasc import get_supplement_table
%matplotlib inline
from astropy.table import Table
from mica.stats.acq_stats import get_stats
from cxotime import CxoTime
import matplotlib.pyplot as plt
from chandra_aca.star_probs import binom_ppf
stats = Table(get_stats())
len(stats)
# Make truncated year column for doing grouping. If current time is less than
# half way through the year then slosh that data into the previous year.
curr_year = CxoTime.now().decimalyear
curr_year_trunc = np.trunc(curr_year)
obs_year = CxoTime(stats['guide_tstart']).decimalyear
if curr_year - curr_year_trunc < 0.5:
obs_year = obs_year.clip(None, curr_year_trunc - 1)
stats['iyear'] = np.trunc(obs_year)
# Correct ACA mags using AGASC supplement
obs_mags = get_supplement_table('mags', as_dict=True)
n_match = 0
for row in stats:
try:
row['mag'] = obs_mags[row['agasc_id']]['mag_aca']
n_match += 1
except KeyError:
pass
print(n_match)
# Select only faint stars.
# Original version of notebook filtered on stats['halfw'] >= 120 but don't do that here.
ok = (stats['mag'] >= 9.6) & (stats['mag'] < 10.6)
sok = stats[ok]
len(sok)
n_match
sok['imag'] = np.trunc(sok['mag'] * 5) / 5
sg = sok.group_by(keys=['imag', 'iyear'])['imag', 'iyear', 'star_tracked']
len(sg.groups)
for grp in sg.groups:
print(grp['imag'][0], grp['iyear'][0], np.sum(grp['star_tracked']), len(grp),
'{:.2f}'.format(np.sum(grp['star_tracked']) / len(grp)))
# This is the way to plot error bars, but they end up being too messy anyway.
binom_ppf(7, 13, [0.16, 0.5, 0.84])
# The mean of the star_tracked 0's and 1's is the tracking fraction.
sga = sg.groups.aggregate(np.mean)
print(sga)
# Make the plot
fig, ax = plt.subplots()
imags = sorted(set(sga['imag']))
for imag in imags:
ok = sga['imag'] == imag
ax.plot(sga['iyear'][ok], sga['star_tracked'][ok], '-o', ms=3,
label=f'{imag:.1f} - {imag+0.2:.1f} mag')
ax.legend()
ax.grid()
ax.set_title('Acquisition success rate trend')
ax.set_ylabel('Success fraction')
ax.set_xlabel('Year')
ax.set_ylim(0, 1);
plt.savefig('acq_success_yearly.png', dpi=150)
sga.write('acq_success_yearly.csv', overwrite=True)
```
| github_jupyter |
# Sorts
## The simplest ones
Implement the following three classic sorts, using the algorithms seen in class.
**Note: in Python, two variables can be swapped like this**
```
a = 5
b = 6
a,b = b,a
print(a)
print(b)
def triSelection(t):
"""
Selection sort
Selects a minimal element at each iteration
Input:
- t: an array
"""
pass # write the code
t = []
triSelection(t)
assert(t == [])
t = [5,4,3,2,1]
triSelection(t)
assert(t == [1,2,3,4,5])
triSelection(t)
assert(t == [1,2,3,4,5])
t = [1,5,2,13,4,2,4,13,4,5,7,6]
triSelection(t)
assert(t == [1, 2, 2, 4, 4, 4, 5, 5, 6, 7, 13, 13])
def triBulles(t):
"""
Bubble sort
Sweeps over adjacent pairs of values as many times as necessary
Input:
- t: an array
"""
pass # write the code
t = []
triBulles(t)
assert(t == [])
t = [5,4,3,2,1]
triBulles(t)
assert(t == [1,2,3,4,5])
triBulles(t)
assert(t == [1,2,3,4,5])
t = [1,5,2,13,4,2,4,13,4,5,7,6]
triBulles(t)
assert(t == [1, 2, 2, 4, 4, 4, 5, 5, 6, 7, 13, 13])
def triInsertion(t):
"""
Insertion sort
Inserts each element into the sorted left part of the array
Input:
- t: an array
"""
pass # write the code
t = []
triInsertion(t)
assert(t == [])
t = [5,4,3,2,1]
triInsertion(t)
assert(t == [1,2,3,4,5])
triInsertion(t)
assert(t == [1,2,3,4,5])
t = [1,5,2,13,4,2,4,13,4,5,7,6]
triInsertion(t)
assert(t == [1, 2, 2, 4, 4, 4, 5, 5, 6, 7, 13, 13])
```
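For reference, one possible solution for the insertion sort above (a sketch; the class algorithm may differ in details):

```python
def triInsertion(t):
    """In-place insertion sort: inserts t[i] into the sorted prefix t[:i]."""
    for i in range(1, len(t)):
        v = t[i]
        j = i
        while j > 0 and t[j - 1] > v:
            t[j] = t[j - 1]  # shift larger elements one slot to the right
            j -= 1
        t[j] = v
```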
Implement the functions below using the same algorithms as before. This time, each function must return 2 values: the number of comparisons of array elements and the number of writes to the array.
**Note 1:** Python functions can return several values at once, in the form of ***tuples***
```
def maFonction():
return 1,2
v = maFonction()
print(v)
print(v[0])
print(v[1])
```
**Note 2:** Swapping two array values, $a,b = b, a$, performs **2 writes** to the array.
**Note 3:** Your algorithm may differ slightly from mine, and the counting (and the tests) too. If you modify the tests, make sure you understand exactly what you are counting!
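As a worked example of the counting convention, here is a selection sort consistent with the values asserted in the tests below (note the unconditional swap at the end of each pass: 2 writes even when the element is already in place):

```python
def comptageTriSelection(t):
    """In-place selection sort that returns (comparisons, writes)."""
    comparaisons = 0
    ecritures = 0
    for i in range(len(t) - 1):
        imin = i
        for j in range(i + 1, len(t)):
            comparaisons += 1          # one comparison per candidate element
            if t[j] < t[imin]:
                imin = j
        t[i], t[imin] = t[imin], t[i]  # unconditional swap: 2 writes
        ecritures += 2
    return comparaisons, ecritures
```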
```
def comptageTriSelection(t):
"""
Selection sort
Selects the minimum on each pass
Input:
- t: an array
Output:
a,b where a is the number of comparisons and b the number of writes
"""
pass # write the code
t = []
assert(comptageTriSelection(t) == (0,0))
assert(t == [])
t = [5,4,3,2,1]
assert(comptageTriSelection(t) == (10,8))
assert(t == [1,2,3,4,5])
assert(comptageTriSelection(t) == (10,8))
assert(t == [1,2,3,4,5])
t = [1,5,2,13,4,2,4,13,4,5,7,6]
assert(comptageTriSelection(t) == (66,22))
assert(t == [1, 2, 2, 4, 4, 4, 5, 5, 6, 7, 13, 13])
def comptageTriBulles(t):
"""
Bubble sort
Sweeps over adjacent pairs of values as many times as necessary
Input:
- t: an array
Output:
a,b where a is the number of comparisons and b the number of writes
"""
pass # write the code
t = []
assert(comptageTriBulles(t) == (0,0))
assert(t == [])
t = [5,4,3,2,1]
assert(comptageTriBulles(t) == (10,20))
assert(t == [1,2,3,4,5])
assert(comptageTriBulles(t) == (4,0))
assert(t == [1,2,3,4,5])
t = [1,5,2,13,4,2,4,13,4,5,7,6]
assert(comptageTriBulles(t) == (38,36))
assert(t == [1, 2, 2, 4, 4, 4, 5, 5, 6, 7, 13, 13])
def comptageTriInsertion(t):
"""
Insertion sort
Inserts each element in turn into the sorted left part of the array
Input:
- t: an array
Output:
a,b where a is the number of comparisons and b the number of writes
Note: for the inner loop, count exactly one more comparison than the number of times the loop body is entered
"""
pass # write the code
t = []
assert(comptageTriInsertion(t) == (0,0))
assert(t == [])
t = [5,4,3,2,1]
assert(comptageTriInsertion(t) == (14,14))
assert(t == [1,2,3,4,5])
assert(comptageTriInsertion(t) == (4,4))
assert(t == [1,2,3,4,5])
t = [1,5,2,13,4,2,4,13,4,5,7,6]
assert(comptageTriInsertion(t) == (29,29))
assert(t == [1, 2, 2, 4, 4, 4, 5, 5, 6, 7, 13, 13])
```
## Quicksort
Start by implementing a pivot function.
```
def pivot(t, deb, fin):
"""
Pivot on an interval [deb, fin[ of an array
Acts on the values between index deb (included) and fin (excluded)
Chooses the array's first value t[deb] as the pivot and rearranges
the array so that every value less than or equal to t[deb]
is on its left and every larger value on its right.
Input:
- t: an array
- deb: the start index (included)
- fin: the end index (excluded)
Output:
the final index of the pivot
"""
pass # write the code
t = [1,2,3,4,5]
assert(pivot(t,0,5) == 0)
assert(all(t[i] > t[0] for i in range(1,5)))
t = [5,4,3,2,1]
assert(pivot(t,0,5) == 4)
assert(all(t[i] <= t[4] for i in range(4)))
t = [4,5,2,13,4,2,4,13,4,5,7,6]
assert(pivot(t,0,len(t)) == 5)
assert(all(t[i] <= t[5] for i in range(5)))
assert(all(t[i] > t[5] for i in range(6,len(t))))
```
Implement the quicksort algorithm seen in class, using your pivot function
```
def triRapide(t,deb,fin):
"""
Quicksort
Recursive sort relying on the pivot function
Input:
- t: an array
- deb: the start index (included)
- fin: the end index (excluded)
"""
pass # write the code
t = []
triRapide(t,0, len(t))
assert(t == [])
t = [5,4,3,2,1]
triRapide(t, 0, len(t))
assert(t == [1,2,3,4,5])
triRapide(t,0,len(t))
assert(t == [1,2,3,4,5])
t = [1,5,2,13,4,2,4,13,4,5,7,6]
triRapide(t,0,len(t))
assert(t == [1, 2, 2, 4, 4, 4, 5, 5, 6, 7, 13, 13])
```
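For reference, a Lomuto-style pivot plus the recursive quicksort, satisfying the functional tests above (a sketch; the comparison/write counts expected later assume the lecture's exact algorithm, which may differ):

```python
def pivot(t, deb, fin):
    """Partition t[deb:fin] around the pivot t[deb]; return its final index."""
    p = t[deb]
    m = deb
    for j in range(deb + 1, fin):
        if t[j] <= p:
            m += 1
            t[m], t[j] = t[j], t[m]  # move small values into the left block
    t[deb], t[m] = t[m], t[deb]      # put the pivot in its final place
    return m

def triRapide(t, deb, fin):
    """Recursive quicksort on t[deb:fin], built on pivot()."""
    if fin - deb > 1:
        m = pivot(t, deb, fin)
        triRapide(t, deb, m)
        triRapide(t, m + 1, fin)
```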
In the same way as for the other sorts, count the comparisons / writes performed by the sort.
For the pivot function, you will therefore return 3 values: the pivot's position, the number of comparisons, and the number of writes. It can be done like this
```
def maFonction():
return 1,2,3
a,b,c = maFonction()
print(a)
print(b)
print(c)
def comptagePivot(t, deb, fin):
"""
Pivot on a part of an array
Acts on the values between index deb (included) and fin (excluded)
Chooses the array's first value t[deb] as the pivot and rearranges
the array so that every value less than or equal to t[deb]
is on its left and every larger value on its right.
Input:
- t: an array
- deb: the start index (included)
- fin: the end index (excluded)
Output:
the final index of the pivot, the number of comparisons, the number of writes
"""
pass # write the code
# these tests were written for the pivot algorithm from the lecture;
# if your algorithm differs, you will get different values
t = [1,2,3,4,5]
assert(comptagePivot(t,0,5) == (0,6,2))
t = [5,4,3,2,1]
assert(comptagePivot(t,0,5) == (4,6,2))
t = [4,5,2,13,4,2,4,13,4,5,7,6]
assert(comptagePivot(t,0,len(t)) == (5,13,6))
def comptageTriRapide(t,deb,fin):
"""
Quicksort
Recursive sort relying on the pivot function
Input:
- t: an array
- deb: the start index (included)
- fin: the end index (excluded)
Output:
a,b where a is the number of comparisons and b the number of writes
"""
pass # write the code
t = []
assert(comptageTriRapide(t,0, len(t)) == (0,0))
t = [5,4,3,2,1]
assert(comptageTriRapide(t,0, len(t)) == (18,8))
assert(t == [1,2,3,4,5])
assert(comptageTriRapide(t,0, len(t)) == (18,8))
t = [1,5,2,13,4,2,4,13,4,5,7,6]
assert(comptageTriRapide(t,0, len(t)) == (50,22))
```
## Comparing sorts
Write a function that takes an array as a parameter, sorts it with each of the different methods, and returns the most efficient one in terms of comparisons.
**Warning: the sorts work in place, so you must copy the original array before applying each sort in order to compare the algorithms**
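The comparison logic can be sketched generically; here `compteurs` maps a sort name to one of the counting functions above (the name and signature are ours, not part of the exercise):

```python
def meilleur_tri(t, compteurs, critere=0):
    """Run each counting sort on a fresh copy of t and return the cheapest.

    compteurs: dict mapping a name to a function t -> (comparisons, writes)
    critere: 0 to compare on comparisons, 1 to compare on writes
    """
    couts = {nom: f(list(t))[critere] for nom, f in compteurs.items()}
    return min(couts, key=couts.get)
```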
```
# reminder: how to copy an array
t = [1,2,3]
tcopie = list(t)
tcopie[0] = 10
print(t)
print(tcopie)
def meilleurTriComparaison(t):
"""
Best sort -- comparisons
Runs each sort on a copy of the array t while counting comparisons
Input:
- t: an array
Output:
the most efficient sort in terms of comparisons: "Selection", "Bulles", "Insertion" or "Rapide"
"""
pass # write the code
t = [1,2,3,4,5]
assert(meilleurTriComparaison(t) in {"Bulles", "Insertion"})
t = [2,3,4,5,6,7,8,9,1]
assert(meilleurTriComparaison(t) == "Insertion")
t = [12,16,5,2,14,13,15,7,1,19,3,9,10,18,6,20]
assert(meilleurTriComparaison(t) == "Rapide")
def meilleurTriEcriture(t):
"""
Best sort -- writes
Runs each sort on a copy of the array t while counting writes
Input:
- t: an array
Output:
the most efficient sort in terms of writes: "Selection", "Bulles", "Insertion" or "Rapide"
"""
pass # write the code
t = [1,2,3,4,5]
assert(meilleurTriEcriture(t) == "Bulles") # possibly Insertion
t = [12,16,5,2,14,13,15,7,1,19,3,9,10,18,6,20]
assert(meilleurTriEcriture(t) == "Selection")
```
We can now observe how efficient the sorts are on random draws. The following function generates a random array of size n. Draw arrays of various sizes and observe which sorts are the most efficient using your previous functions.
```
def randomTableau(n):
"""
Returns a random array of size n (each value drawn uniformly from [[0,n]])
"""
import random
return [random.randint(0,n) for i in range(n)]
t = randomTableau(1000)
print(meilleurTriComparaison(t))
print(meilleurTriEcriture(t))
```
Implement a function that returns, as a Python dictionary, a histogram of which sorts were the most efficient.
A Python dictionary is an object that associates values with keys. For example:
```
d = {}
d["Rapide"] = 3
d["Insertion"] = 2
d["Selection"] = 1
d["Bulles"] = 0
d
```
The goal of the function is to build a dictionary where each value is the number of times that sort was the fastest.
```
def histogrammeComparaisons(n, m):
"""
Histogram -- comparisons
Input:
- n: the size of the random arrays
- m: the number of draws to perform
Output:
a Python dictionary representing the histogram of the most efficient sorts over the m draws
"""
pass # write the code
histogrammeComparaisons(10,100) # split between quicksort and insertion
histogrammeComparaisons(100,100) # quicksort only
histogrammeComparaisons(1000,10) # quicksort only
def histogrammeEcritures(n, m):
"""
Histogram -- writes
Input :
- n, the size of the random arrays
- m, the number of draws to perform
Output :
a Python dictionary representing the histogram of the most efficient sorts over the m draws
"""
pass # write the code here
histogrammeEcritures(10,100) # split between quicksort and selection
histogrammeEcritures(100,100) # selection only
histogrammeEcritures(1000,10) # selection only
```
We will now use another random generator that, this time, builds "almost" sorted arrays, controlled by a shuffling parameter.
```
def randomTableauPresqueTrie(n, k):
"""
Returns an array where each value at index i is chosen uniformly between i - k and i + k
Input :
- n, the size of the array
- k, the possible radius for each value
"""
import random
return [random.randint(i - k, i + k) for i in range(n)]
t = randomTableauPresqueTrie(10,0)
t
t = randomTableauPresqueTrie(10,5)
t
t = randomTableauPresqueTrie(10,10)
t
t = randomTableauPresqueTrie(10,100)
t
t = randomTableauPresqueTrie(100,10)
print(meilleurTriComparaison(t))
print(meilleurTriEcriture(t))
```
Write new histogram functions using this generator and observe the difference!
```
def histogrammeComparaisonsPresqueTrie(n, k, m):
"""
Histogram -- comparisons
Input :
- n, the size of the random arrays
- k, the number of random inversions
- m, the number of draws to perform
Output :
a Python dictionary representing the histogram of the most efficient sorts over the m draws
"""
pass # write the code here
histogrammeComparaisonsPresqueTrie(10,3,100)
histogrammeComparaisonsPresqueTrie(100,10,100)
histogrammeComparaisonsPresqueTrie(100,30,100)
histogrammeComparaisonsPresqueTrie(100,50,100)
```
Does this show up in the running time? Run the code below and experiment.
```
t = randomTableau(100)
tcopie = list(t)
%time triRapide(tcopie, 0, len(tcopie))
tcopie = list(t)
%time triInsertion(tcopie)
t = randomTableauPresqueTrie(100,10)
tcopie = list(t)
%time triRapide(tcopie, 0, len(tcopie))
tcopie = list(t)
%time triInsertion(tcopie)
```
What about Python's built-in *sort* function? Compare the execution speed of your algorithms against *sort* on random and almost-sorted arrays.
```
t = randomTableau(100)
tcopie = list(t)
%time tcopie.sort()
%time triRapide(t, 0, len(t))
```
# Around Quicksort
## k-selection
We want to find the median element of an array of $n$ values.
**Reminder:** the median of a set of values is a value that splits the set into two equal parts: one containing all the smaller elements, the other containing all the larger elements.
```
def kSelectionNaif(tab, k):
"""
kSelectionNaif
Returns the k-th smallest element of an array without using the pivot function.
Algorithm: find the smallest element, remove it, and iterate k - 1 times.
Input:
- tab, an array
- k, a strictly positive integer
"""
for i in range(k - 1):
m = min(tab)
tab.remove(m)
return min(tab)
def kSelection(tab, k):
"""
kSelection
Uses the pivot function to find the k-th element of an array in increasing order
Input:
- tab, an array
- k, a strictly positive integer
Output:
- the k-th element of the array in increasing order
"""
pass # write the code here
tab = [6,1,3,2,2,9,14]
assert(kSelection(tab, 1) == 1)
assert(kSelection(tab, 5) == 6)
t = randomTableau(100)
tcopy = t[:]
assert(kSelection(t, 50) == kSelectionNaif(tcopy, 50))
```
Compare the two algorithms above. Which one is more efficient?
```
t = randomTableau(2000)
tcopy = t[:]
%time kSelection(t, 1000)
%time kSelectionNaif(tcopy, 1000)
```
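For reference, the pivot-based idea behind `kSelection` can be sketched independently of the course's pivot function (the name `quickselect_sketch` and the partition-by-comprehension are illustrative, not the required solution):

```python
import random

def quickselect_sketch(tab, k):
    """Return the k-th smallest element of tab (k >= 1), in O(n) on average."""
    pivot = random.choice(tab)
    smaller = [x for x in tab if x < pivot]   # partition around the pivot
    equal = [x for x in tab if x == pivot]
    larger = [x for x in tab if x > pivot]
    if k <= len(smaller):                     # the answer lies left of the pivot
        return quickselect_sketch(smaller, k)
    if k <= len(smaller) + len(equal):        # the pivot itself is the answer
        return pivot
    return quickselect_sketch(larger, k - len(smaller) - len(equal))

print(quickselect_sketch([6, 1, 3, 2, 2, 9, 14], 1))  # 1
print(quickselect_sketch([6, 1, 3, 2, 2, 9, 14], 5))  # 6
```

Unlike a full sort, only one side of the partition is recursed into, which is why the average cost drops from O(n log n) to O(n).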
## Merge Sort and linked lists
As we saw in class, merge sort has one major drawback: on an array, the sort requires memory allocation. However, when working with **linked lists**, this is no longer the case.
Below is the list structure from TP2, slightly improved, which we will reuse.
```
class Cellule:
def __init__(self, valeur):
self.valeur = valeur
self.suivante = None
def __repr__(self):
return str(self.valeur)
class Liste:
def __init__(self, tab = None):
"""
Initialize a list (empty or from an iterable)
Input :
- tab (optional), an iterable of values
"""
self.premiere = None
if not tab is None:
c = None
for v in tab:
if c is None:
self.premiere = Cellule(v)
c = self.premiere
else:
c.suivante = Cellule(v)
c = c.suivante
def __repr__(self):
s = "["
c = self.premiere
while not c is None:
s += str(c)
if c.suivante is not None:
s+=" -> "
c = c.suivante
s+="]"
return s
def ajouteCelluleEnTete(self, c):
"""
Add a cell at the head of the list
Input :
- c, a cell
"""
c.suivante = self.premiere
self.premiere = c
def ajouteValeurEnTete(self, v):
"""
Allocate a new cell with value v and add it at the head of the list
Input :
- v, a value
"""
self.ajouteCelluleEnTete(Cellule(v))
L = Liste([1,2,3])
L
L.ajouteValeurEnTete(4)
L
```
**Write a function that splits a list in two at a given index and returns the list of the cut-off values**
Note: you only need to traverse the values once, and only up to $i$. **In any case, the complexity must not exceed O(n)**.
```
def coupeListe(L, i):
"""
Cut the list at index i (excluded)
Input :
- L, the list to cut
- i, the index
Output :
The list of the values at index >= i, or the empty list if the size of L is <= i
"""
pass # write the code here
L = Liste([1,2,3,4,5])
L2 = coupeListe(L,0)
assert(str(L) == "[]")
assert(str(L2) == "[1 -> 2 -> 3 -> 4 -> 5]")
L = Liste([1,2,3,4,5])
L2 = coupeListe(L,1)
assert(str(L) == "[1]")
assert(str(L2) == "[2 -> 3 -> 4 -> 5]")
L = Liste([1,2,3,4,5])
L2 = coupeListe(L,3)
assert(str(L) == "[1 -> 2 -> 3]")
assert(str(L2) == "[4 -> 5]")
L = Liste([1,2,3,4,5])
L2 = coupeListe(L,5)
assert(str(L) == "[1 -> 2 -> 3 -> 4 -> 5]")
assert(str(L2) == "[]")
L = Liste([1,2,3,4,5])
L2 = coupeListe(L,7)
assert(str(L) == "[1 -> 2 -> 3 -> 4 -> 5]")
assert(str(L2) == "[]")
```
We will now write an algorithm that merges two sorted linked lists. For practical reasons, it is better to take **cells** as parameters rather than lists: this lets us write the algorithm recursively.
Adapt the *Fusion* algorithm seen in class for arrays to linked lists: write a **recursive** algorithm that takes two cells as parameters. These cells are treated as the heads of sorted lists. The algorithm must return the first cell of the resulting sorted list.
```
def fusionCellules(c1,c2):
"""
Merge two sorted lists given their head cells
Input :
- c1, the first cell of list 1 (or None if the list is empty)
- c2, the first cell of list 2 (or None if the list is empty)
Output :
The first cell of the sorted list containing all the cells reachable from c1 and c2
"""
pass # write the code here
def fusionListes(L1,L2):
"""
Merge two sorted lists
We use the fusionCellules function to merge the two lists, then we
attach the new head cell to L1.
So at the end of the function, L1 contains the whole merged list and L2
is empty.
Input :
- L1, the first list
- L2, the second list
"""
L1.premiere = fusionCellules(L1.premiere, L2.premiere)
L2.premiere = None
L1 = Liste([1,1,3,3,4,5])
L2 = Liste([2,4,4,5,6])
fusionListes(L1,L2)
assert(str(L1) == "[1 -> 1 -> 2 -> 3 -> 3 -> 4 -> 4 -> 4 -> 5 -> 5 -> 6]")
assert(str(L2) == "[]")
L1 = Liste([1,1,3,3,4,5])
LVide = Liste()
fusionListes(L1,LVide)
assert(str(L1) == "[1 -> 1 -> 3 -> 3 -> 4 -> 5]")
assert(str(LVide) == "[]")
L1 = Liste([1,1,3,3,4,5])
LVide = Liste()
L3 = fusionListes(LVide,L1)
assert(str(LVide) == "[1 -> 1 -> 3 -> 3 -> 4 -> 5]")
assert(str(L1) == "[]")
```
We now have all the tools we need to write a merge sort on lists.
```
def triFusion(L, n):
"""
Merge Sort
Input :
- L, a list
- n, the size of the list
"""
pass # write the code here
L = Liste([5,1,4,2,4,3,2,4,8])
triFusion(L,9)
assert(str(L) == "[1 -> 2 -> 2 -> 3 -> 4 -> 4 -> 4 -> 5 -> 8]")
L = Liste()
triFusion(L,0)
assert(str(L) == "[]")
L = Liste([1])
triFusion(L,1)
assert(str(L) == "[1]")
t = randomTableau(50)
L1 = Liste(t)
triFusion(L1,50)
t.sort()
L2 = Liste(t)
assert(str(L1) == str(L2))
```
## Going further: optimizing the pivot choice
Back to quicksort: as you have noticed, the choice of the pivot is crucial to how the quicksort algorithm unfolds.
If the pivot "lands" too close to one end of the interval, it is a disaster...
If everything went well we would hope to split the interval in two each time, which would give an $O(n \log n)$ algorithm (why?)
We would therefore like the pivot to be the *median* value of the array (as many smaller elements as larger elements). There are strategies to increase the chances of finding the median. For example, we could pick three elements of the array and take the median of the three as the pivot.
Write such a pivot function and the corresponding quicksort. Compare it with your first version.
```
# 3 comparisons: optimal
def median3(a, b, c):
"""
Returns the median of three elements
Input:
- a, b, c, three values
Output:
- the median of a, b, c
"""
pass # write the code here
assert(median3(1,2,3) == 2)
assert(median3(3,1,2) == 2)
assert(median3(2,3,1) == 2)
def pivot3(t, deb, fin):
pass # write the code here
def triRapide3(t, deb, fin):
"""
Quicksort using the pivot3 function instead of the classic pivot
Input :
- t, an array
- deb, start index of the array (included)
- fin, end index of the array (excluded)
"""
pass # write the code here
```
Test the performance of your new function and compare it with the first version of quicksort (in particular, on already sorted arrays).
Suggest other optimizations.
# Arrays (Lists)
```
# We can index into a list
num = [0,1,2]
print(num[1], "\n")
# We can replace the value at a certain index
num[1] = "one"
print(num[1], "\n")
# Iterate through an array
for x in num:
print(x)
# Python can store multiple data types in an array
print()
# Iterate through an array using index
for x in range(len(num)):
print(num[x])
# Finding the max value in a list
num = [1,2,3,4,100,5,6,7]
# Set the initial max to the first element
maxi = num[0]
print(maxi)
# iterate through and update the max when it is more than the current max
for x in num:
if x > maxi:
maxi = x
print(maxi)
```
# Linked Lists
```
class Node(object):
def __init__(self, data):
self.data = data
self.nextNode = None
class LinkedList(object):
def __init__(self):
self.head = None
self.size=0
# O(1)
def insertStart(self, data):
self.size = self.size +1
newNode = Node(data)
if not self.head:
self.head = newNode
else:
newNode.nextNode = self.head
self.head = newNode
def remove(self, data):
if self.head is None:
return
self.size = self.size - 1
currentNode = self.head
previousNode = None
while currentNode.data != data:
previousNode = currentNode
currentNode = currentNode.nextNode
if previousNode is None:
self.head = currentNode.nextNode
else:
previousNode.nextNode = currentNode.nextNode
# O(1)
def size1(self):
return self.size
# O(N) not good!!
def size2(self):
actualNode = self.head
size=0
while actualNode is not None:
size+=1
actualNode = actualNode.nextNode
return size
# O(N)
def insertEnd(self, data):
self.size = self.size + 1
newNode = Node(data)
actualNode = self.head
while actualNode.nextNode is not None:
actualNode = actualNode.nextNode
actualNode.nextNode = newNode
def traverseList(self):
actualNode = self.head
while actualNode is not None:
print("%d" % actualNode.data)
actualNode = actualNode.nextNode
linked = LinkedList()
linked.insertStart(12)
linked.insertStart(122)
linked.insertEnd(3)
linked.traverseList()
linked.size1()
linked.remove(12)
linked.remove(122)
linked.remove(3)
linked.size1()
```
# Doubly Linked List
Can access head and tail!
Has a pointer to the next node AND the previous node
```
```
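The code cell above is empty in the notes; a minimal sketch (class and method names are illustrative) of a list whose nodes point both ways:

```python
class DNode:
    """A node that points to both of its neighbours."""
    def __init__(self, data):
        self.data = data
        self.prev = None
        self.next = None

class DoublyLinkedList:
    def __init__(self):
        self.head = None
        self.tail = None                 # direct access to the tail as well

    def append(self, data):
        # O(1) insertion at the end, thanks to the tail pointer
        node = DNode(data)
        if self.tail is None:
            self.head = self.tail = node
        else:
            node.prev = self.tail
            self.tail.next = node
            self.tail = node

    def to_list(self):
        out, current = [], self.head
        while current is not None:
            out.append(current.data)
            current = current.next
        return out

dll = DoublyLinkedList()
for v in (1, 2, 3):
    dll.append(v)
print(dll.to_list())         # [1, 2, 3]
print(dll.tail.prev.data)    # 2: we can walk backwards from the tail
```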
# Stack (Abstract Data Type)
LIFO structure (last in, first out)
Real-world applications:
- Back button
- Undo option
- Stack memory stores variables and function calls
```
class Stack:
def __init__(self):
self.stack = []
def isEmpty(self):
return self.stack == []
def push(self, data):
"""
Insert items onto the top of the stack
"""
self.stack.append(data)
def pop(self):
"""
Gets rid of the last item
"""
data = self.stack[-1]
del self.stack[-1]
return data
def peek(self):
"""
Returns the value of the last item without removing it; the structure of the stack remains the same
"""
return self.stack[-1]
def sizeStack(self):
return len(self.stack)
stack = Stack()
stack.push(1)
stack.push(2)
stack.push(3)
print(stack.sizeStack())
print(f"Popped: {stack.pop()}")
print(f"Popped: {stack.pop()}")
print(stack.sizeStack())
print(f"Peek: {stack.peek()}")
print(stack.sizeStack())
```
# Queue (Abstract Data type)
FIFO Structure (First in First Out)
```
class Queue:
def __init__(self):
self.queue = []
def isEmpty(self):
return self.queue == []
def enqueue(self, data):
self.queue.append(data)
def dequeue(self):
data = self.queue[0]
del self.queue[0]
return data
def peek(self):
return self.queue[0]
def sizeQueue(self):
return len(self.queue)
queue = Queue()
queue.enqueue(10)
queue.enqueue(20)
queue.enqueue(30)
print(queue.sizeQueue())
print(f"Dequeue: {queue.dequeue()}")
print(f"Dequeue: {queue.dequeue()}")
print(queue.sizeQueue())
```
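Note that `del self.queue[0]` shifts every remaining list element, so the dequeue above is O(n); the standard library's `collections.deque` offers O(1) operations at both ends. A quick sketch:

```python
from collections import deque

q = deque()
q.append(10)             # enqueue at the right end
q.append(20)
q.append(30)
print(q.popleft())       # 10: dequeue from the left end in O(1)
print(q.popleft())       # 20
print(len(q))            # 1
```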
# Binary Search Trees
Real World Cases
-Hierarchical data like in file systems
-Operating Systems
-Game trees
-Machine Learning (decision trees and boosting)
The running time of BST operations depends on the height of the binary search tree: we should keep the tree balanced in order to get the best performance
```
class Node(object):
def __init__(self, data):
self.data = data
self.leftChild = None
self.rightChild = None
class BinarySeachTree(object):
def __init__(self):
self.root = None
def insert(self, data):
if not self.root:
self.root = Node(data)
else:
self.insertNode(data, self.root)
# O(logN) if the tree is balanced
def insertNode(self, data, node):
if data < node.data:
if node.leftChild:
self.insertNode(data, node.leftChild)
else:
node.leftChild = Node(data)
else:
if node.rightChild:
self.insertNode(data, node.rightChild)
else:
node.rightChild = Node(data)
def removeNode(self, data, node):
if not node:
return node
if data < node.data:
node.leftChild = self.removeNode(data, node.leftChild)
elif data > node.data:
node.rightChild = self.removeNode(data, node.rightChild)
else:
if not node.leftChild and not node.rightChild:
print("Removing a leaf node...")
del node
return None
if not node.leftChild:
print("Removing a node with a single right child")
tempNode = node.rightChild
del node
return tempNode
elif not node.rightChild:
print("removing a node with a single left child")
tempNode = node.leftChild
del node
return tempNode
print("Removing a node with two children")
tempNode = self.getPredecessor(node.leftChild)
node.data = tempNode.data
node.leftChild = self.removeNode(tempNode.data, node.leftChild)
return node
def getPredecessor(self, node):
if node.rightChild:
return self.getPredecessor(node.rightChild)
return node
def remove(self, data):
if self.root:
self.root = self.removeNode(data, self.root)
def getMinValue(self):
if self.root:
return self.getMin(self.root)
def getMin(self, node):
if node.leftChild:
return self.getMin(node.leftChild)
return node.data
def getMaxValue(self):
if self.root:
return self.getMax(self.root)
def getMax(self, node):
if node.rightChild:
return self.getMax(node.rightChild)
return node.data
def traverse(self):
if self.root:
self.traverseInOrder(self.root)
def traverseInOrder(self, node):
if node.leftChild:
self.traverseInOrder(node.leftChild)
print("%s " % node.data)
if node.rightChild:
self.traverseInOrder(node.rightChild)
bst = BinarySeachTree()
bst.insert(10)
bst.insert(13)
bst.insert(5)
bst.insert(1)
bst.remove(5)
print(bst.getMinValue())
print(bst.getMaxValue(), "\n")
print(bst.traverse())
```
# AVL Tree
In an AVL tree, if the heights of the two child subtrees of any node differ by more than one, the tree is unbalanced and we need to rotate it
AVL trees are faster than red-black trees because they are more rigidly balanced, BUT this needs more work: we check on every insertion whether the tree is balanced or not
Rigidly balanced trees provide faster lookups
For a lookup-intensive task, use an AVL tree
Insertion/deletion is not so fast -> because we keep rebalancing the tree
```
class Node(object):
def __init__(self, data):
self.data = data
self.height = 0
self.leftChild = None
self.rightChild = None
class AVL(object):
def __init__(self):
self.root = None
def remove(self, data):
if self.root:
self.root = self.removeNode(data, self.root)
def insert(self, data):
self.root = self.insertNode(data, self.root)
def insertNode(self, data, node):
if not node:
return Node(data)
if data < node.data:
node.leftChild = self.insertNode(data, node.leftChild)
else:
node.rightChild = self.insertNode(data, node.rightChild)
node.height = max(self.calcHeight(node.leftChild), self.calcHeight(node.rightChild)) + 1
return self.settleViolation(data, node)
def removeNode(self, data, node):
if not node:
return node
if data < node.data:
node.leftChild = self.removeNode(data, node.leftChild)
elif data > node.data:
node.rightChild = self.removeNode(data, node.rightChild)
else:
if not node.leftChild and not node.rightChild:
print("Removing a leaf node")
del node
return None
if not node.leftChild:
print("Removing a node with a right child")
tempNode = node.rightChild
del node
return tempNode
elif not node.rightChild:
print("Removing a node with a left child")
tempNode = node.leftChild
del node
return tempNode
print("removing node with two children")
tempNode = self.getPredecessor(node.leftChild)
node.data = tempNode.data
node.leftChild = self.removeNode(tempNode.data, node.leftChild)
if not node:
return node # if tree had just a single node
node.height = max(self.calcHeight(node.leftChild), self.calcHeight(node.rightChild)) + 1
balance = self.calcBalance(node)
# left left case -> single right rotation
if balance > 1 and self.calcBalance(node.leftChild) >= 0:
return self.rotateRight(node)
# left right case
if balance > 1 and self.calcBalance(node.leftChild) < 0:
node.leftChild = self.rotateLeft(node.leftChild)
return self.rotateRight(node)
# right right case -> single left rotation
if balance < -1 and self.calcBalance(node.rightChild) <= 0:
return self.rotateLeft(node)
# right left case
if balance < -1 and self.calcBalance(node.rightChild) > 0:
node.rightChild = self.rotateRight(node.rightChild)
return self.rotateLeft(node)
return node
def settleViolation(self, data, node):
balance = self.calcBalance(node)
# case 1 -> left left heavy situation
if balance > 1 and data < node.leftChild.data:
print("Left left heavy situation")
return self.rotateRight(node)
# case 2 --> right right heavy situation --> single left rotation
if balance < -1 and data > node.rightChild.data:
print("Right right heavy situation")
return self.rotateLeft(node)
if balance > 1 and data > node.leftChild.data:
print("Left right heavy situation")
node.leftChild = self.rotateLeft(node.leftChild)
return self.rotateRight(node)
if balance < -1 and data < node.rightChild.data:
print("Right left heavy situation")
node.rightChild = self.rotateRight(node.rightChild)
return self.rotateLeft(node)
return node
def getPredecessor(self, node):
# the predecessor is the rightmost node of the left subtree
if node.rightChild:
return self.getPredecessor(node.rightChild)
return node
def calcHeight(self, node):
if not node:
return -1
return node.height
# if it return value > 1 it means it is a left heavy tree --> right rotation
# if it returns a values <1 it means its a right heavy tree -> left rotation
def calcBalance(self, node):
if not node:
return 0
return self.calcHeight(node.leftChild) - self.calcHeight(node.rightChild)
def traverse(self):
if self.root:
self.traverseInorder(self.root)
def traverseInorder(self, node):
# if the left node exists
if node.leftChild:
self.traverseInorder(node.leftChild)
print("%s " % node.data)
if node.rightChild:
self.traverseInorder(node.rightChild)
def rotateRight(self, node):
print("Rotating to the right on node", node.data)
tempLeftChild = node.leftChild
t = tempLeftChild.rightChild
tempLeftChild.rightChild = node
node.leftChild = t
node.height = max(self.calcHeight(node.leftChild), self.calcHeight(node.rightChild)) + 1
tempLeftChild.height = max(self.calcHeight(tempLeftChild.leftChild), self.calcHeight(tempLeftChild.rightChild)) + 1
return tempLeftChild
def rotateLeft(self, node):
print("Rotating to the left on node", node.data)
tempRightChild = node.rightChild
t = tempRightChild.leftChild
tempRightChild.leftChild = node
node.rightChild = t
node.height = max(self.calcHeight(node.leftChild), self.calcHeight(node.rightChild)) + 1
tempRightChild.height = max(self.calcHeight(tempRightChild.leftChild), self.calcHeight(tempRightChild.rightChild)) + 1
return tempRightChild
avl = AVL()
avl.insert(10)
avl.insert(20)
avl.insert(5)
avl.insert(4)
avl.insert(15)
avl.remove(5)
avl.remove(4)
avl.traverse()
```
# Red Black Trees
Each node is red or black
The root node is always black
All leaves (NIL or NULL) are black
Every red node must have two black child nodes and no other children -> it must have a black parent
Every path from a given node to any of its descendant NIL/NULL nodes contains the same number of black nodes
Every new node is red by default
We keep inserting new node in the same way as we have seen for binary search trees (or AVL trees)
On every insertion -> we have to check whether we have violated the red black properties or not
If we have violated the RB properties: we have to rebalance the tree
- make rotations
- OR just recolor the nodes
## Logic
1. every red node must have two black child nodes and no other children: it must have a black parent
2. every path from a given node to any of its descendant NIL/NULL nodes contains the same number (m) of black nodes
Let's assume that the shortest path from the root to any leaf has m black nodes
- if we want to construct longer paths we can insert new red nodes, but because of property 1 we cannot insert two red nodes in a row
- so the longest possible path consists of 2*m nodes (alternating black and red), since by property 2 every maximal path has the same number of black nodes
NO PATH IS MORE THAN TWICE AS LONG AS ANY OTHER PATH IN THE TREE
~so the tree is approximately balanced
If you have an application that has many insertions and deletions use a red black tree because the tree is not rigidly balanced.
```
```
# Priority Queue
- It is an abstract data type such as stack or queue
- BUT every item has an additional property: a priority value
- In a priority queue, an element with high priority is served before an element with lower priority
- Priority queues are usually implemented with heaps, but they can be implemented with self-balancing trees as well
- Very similar to queues with one modification: when we ask for the next element -> the highest priority element is retrieved first
- No FIFO structure here
# Heap
- It is a binary tree
- two main binary heap types: min and max heap
- In a max heap the keys of parent nodes are always greater than or equal to those of the children -> the highest key is in the root node
- In a min heap the keys of parent nodes are less than or equal to those of the children -> the lowest key is in the root node
- It is complete: it cannot be unbalanced! We insert every new item into the next available slot
- Applications: Dijkstra's algorithm, Prim's algorithm
- The heap is one maximally efficient implementation of the priority queue abstract data type
- It has nothing to do with the pool of memory from which dynamically allocated memory is served
- **A HEAP is constructed level by level, from left to right**
```
```
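Python's standard `heapq` module implements a binary min heap on top of a plain list, which is the usual way to get a priority queue; a small sketch:

```python
import heapq

pq = []                                  # a plain Python list backs the heap
heapq.heappush(pq, (2, "code"))          # (priority, task) tuples
heapq.heappush(pq, (1, "eat"))
heapq.heappush(pq, (3, "sleep"))

print(pq[0])                # min heap property: the smallest priority sits at pq[0]
print(heapq.heappop(pq))    # (1, 'eat') is served first, despite insertion order
print(heapq.heappop(pq))    # (2, 'code') next
```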
# Associative Arrays
- Does not support sorting
### Supported operations
- Adding key value pairs to the collection
- Removing key value pairs from the collection
- Update existing key value pairs from the collection
- Lookup of value associated with a given key.
## Applications
- databases: sometimes search trees, sometimes hashing is better
- counting word occurrences in a particular document
- Storing data + lookup tables (password checks)
- Lookup tables in huge networks (lookup for ip addresses)
- The hashing technique can be used for substring search (Rabin-Karp algorithm)
# Hash Tables and Dictionaries
- Balanced BST -> we can achieve O(logN) time complexity for several operations including search
- We can do better by using a Hashtable
- index = h(key) where h() is the hashfunction, it maps keys to indexes in the array.
- **Hashing** - mapping a key of any type to an array index
- if we have integer keys we just have to use the modulo operator to transform the number into the range [0,m-1]
- if the keys are strings: we can take the ASCII values of the characters and apply some transformation in order to end up with an index into the array
- **Hash Function**
- Distribute the keys uniformly into buckets
- n: number of keys
- m: number of buckets // size of array
- h(x) = x % m (modulo operator)
- We should use prime numbers both for the size of the array and in our hash function to make sure the distribution of the generated indexes will be uniform
- String keys: we could calculate the ASCII value for each character, add them up -> make % modulo
- **Load Factor** - number of entities divided by the number of slots/buckets
- n/m - This is the load factor. It is 0 if the hashtable is empty, it is 1 if the hashtable is full
- if the load factor is approximately 1 -> it means it is nearly full: the performance will decrease, the operations will be slow
- If the load factor is approximately 0 -> it means the table is nearly empty: there will be a lot of memory wasted
- **Dynamic Resizing** - Performance depends on the load factor: what is the number of entries and number of buckets ratio
- Space time tradeoff is important: the solution is to resize table, when its load factor exceeds a given threshold
- Python Threshold: 2/3 ~ 0.66
1. Hash values depend on the table's size, so the hashes of entries change when resizing: the algorithm can't just copy data from the old storage to the new one
2. Resizing takes O(n) time to complete, where n is the number of entries in the table. This fact may make dynamic-sized tables inappropriate for real-time applications
### Problems Include
- Keys are not always nonnegative integers. We have to do "prehashing" in order to map strings to indexes of an array
## **Collisions** - we map 2 keys to the same bucket
### Resolution
- Chaining - we store both values at the same bucket, using linked lists
- If there are many collisions: O(1) complexity gets worse
- It has an additional memory cost due to the references
- Open Addressing - if collision occurs we find an empty spot instead
- Linear Probing - if a collision occurs, we try the next slot. If there is a collision too we keep trying the next slot until we find an empty slot
- Quadratic Probing - we try slots 1, 4, 9, 16... units away (successive squares)
- Rehashing - we hash the result again in order to find an empty slot
## Applications
- Index generation for dictionaries
- Hashes are important for cryptography (cryptographic fingerprints)
- You can compare hashes very fast
- password verification
- blockchains
```
```
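A minimal sketch of a chaining hash table following the notes above (Python lists stand in for the linked-list buckets; the class name and default size are illustrative):

```python
class ChainingHashTable:
    def __init__(self, m=13):                 # prime bucket count, as suggested above
        self.m = m
        self.buckets = [[] for _ in range(m)]

    def _index(self, key):
        return hash(key) % self.m             # map any hashable key into [0, m-1]

    def put(self, key, value):
        bucket = self.buckets[self._index(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:                      # key already present -> update
                bucket[i] = (key, value)
                return
        bucket.append((key, value))           # collision -> chain inside the bucket

    def get(self, key):
        for k, v in self.buckets[self._index(key)]:
            if k == key:
                return v
        return None                           # search miss

table = ChainingHashTable()
table.put("apple", 100)
table.put("orange", 200)
table.put("apple", 101)                       # overwrite the existing key
print(table.get("apple"))     # 101
print(table.get("missing"))   # None
```

With a good hash function and a bounded load factor, each bucket stays short and the operations stay O(1) on average.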
# Ternary Search Trees
- With the help of tries we can search and sort things very very efficiently
- The problem is that tries consume a lot of memory, so we should use ternary search trees instead, which store fewer references and null objects
- **TST** stores charecters or strings in nodes
- Each node has **3** children: **less**(left child), **equal**(middle child), or **greater**(right child)
- **TST** can be balanced with rotations, but its not worth the trouble
- It can be used instead of a hashmap: it is as efficient as hashing
- Hashing needs to examine the entire string key, **TST** does not
- "Smaller" or "larger" than the parent refers to the character's position in the alphabet: a smaller index means smaller, a larger index means larger
- Supports sorting operation
- **TST** is better than hashing -> especially for search misses, and more flexible than a BST (usually there is no perfect hash function)
- TST is faster than hashmap and more flexible than binary search tree
### Important
- **hashmaps**: generates an index from the key with the hashfunction. We use every single character of the key
- **TST**: can come to the conclusion that there is no value with a given key without considering every character
- Example: We can return after the second character if its determined there is no associated value
### Applications
- It can be used to implement the auto complete feature very very efficiently
- Can be used for spell checkers
- Near Neighbor searching (of which a spell check is a special case)
- For databases especially when indexing by several non-key fields is desirable
- Very important in packet routing on the WWW -> routers direct packets in the direction of the longest matching prefix, which can be found very quickly with the help of TSTs
- Prefix matching - google search
- can use DFS instead as well
# Tries
- In general we have as many pointers /edges from every node as the number of characters in the alphabet
- We have to define an alphabet in advance + ALPHABET_SIZE
- For example: in the english alphabet there are 26 characters so ALPHABET_SIZE=26 -> 26 pointers from every node
```
# Trie Implementation
class Node:
def __init__(self, char):
self.char = char
self.children = {}
self.word_finished = False
self.counter = 0
class Trie:
def __init__(self):
self.root = Node("*") # To make the root empty
def insert(self, word):
current = self.root
for char in word:
if char in current.children:
current = current.children[char]
current.counter += 1
else:
new_node = Node(char)
current.children[char] = new_node
current = new_node
current.counter += 1
current.word_finished = True
def search(self, word):
if not self.root.children:
return False
current = self.root
for char in word:
if char in current.children:
current = current.children[char]
else:
return False
if current.word_finished:
return True
return False
tree = Trie()
tree.insert("bat")
tree.insert("hackathon")
tree.insert("hack")
tree.insert("hac")
print(tree.search("hac"))
print(tree.search("hack"))
print(tree.search("hackathon"))
print(tree.search("ha"))
print(tree.search("bat"))
```
# TST vs Hashing
### Hashing
- Need to examine the entire key (because that is the way the hash function works)
- Search hits and misses cost the same
- The running time and performance relies heavily on the hashfunction
- Does not support as many operations as TST (sorting)
### TST
- Works only for strings
- Only examines just enough key characters
- Search miss may only involve a few characters
- Support more operations (sorting)
- Faster than hashing (for misses especially) and more flexible than BST
```
# Ternary search tree implementation
class Node(object):
def __init__(self, character):
self.character = character
self.leftNode = None
self.middleNode = None
self.rightNode = None
self.value = 0
class TST(object):
def __init__(self):
self.rootNode = None
def put(self, key, value):
self.rootNode = self.putItem(self.rootNode, key, value, 0)
def putItem(self, node, key, value, index):
c = key[index]
if node == None:
node = Node(c)
if c < node.character:
node.leftNode = self.putItem(node.leftNode, key, value, index)
elif c > node.character:
node.rightNode = self.putItem(node.rightNode, key, value, index)
elif index < len(key)-1:
node.middleNode = self.putItem(node.middleNode, key, value, index+1)
else:
node.value = value
return node
def get(self, key):
node = self.getItem(self.rootNode, key, 0)
if node == None:
return -1
return node.value
def getItem(self, node, key, index):
if node == None:
return None
c = key[index]
if c < node.character:
return self.getItem(node.leftNode, key, index)
elif c > node.character:
return self.getItem(node.rightNode, key, index)
elif index < len(key)-1:
return self.getItem(node.middleNode, key, index+1)
else:
return node
tst = TST()
tst.put("apple", 100)
tst.put("orange", 200)
print(tst.get("apple"))
print(tst.get("orange"))
print(tst.get("not here"))
```
# Graph Theory
- Graphs G(V,E) are mathematical structures to model pairwise relations between given objects
- A graph is made up of vertices/nodes and edges
- There are two types of graphs: directed and undirected graphs
### How to model in programming languages?
- 1. Adjacency Matrices (i = row index, j = column index)
- We have an A matrix constructed out of the vertices of the graph:
- The A(i,j) value in the matrix is 1 if there is a connection between node i and node j
- Otherwise A(i,j) is 0
- 2. Edge List Representation
- Create a vertex class -> It stores the neighbors accordingly
- Will store a 1-D array of only connected neighbors
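The two representations above can be sketched on a small undirected graph (the vertex numbering and variable names here are just for illustration):

```python
# Small undirected graph with vertices 0..3 and edges (0,1), (0,2), (1,3).
edges = [(0, 1), (0, 2), (1, 3)]
n = 4

# 1. Adjacency matrix: A[i][j] is 1 if there is an edge between i and j.
A = [[0] * n for _ in range(n)]
for i, j in edges:
    A[i][j] = 1
    A[j][i] = 1  # undirected graph: the matrix is symmetric

# 2. Edge list (adjacency list): each vertex stores only its connected neighbors.
neighbors = [[] for _ in range(n)]
for i, j in edges:
    neighbors[i].append(j)
    neighbors[j].append(i)

print(A[0][1], A[0][3])   # 1 0
print(neighbors[0])       # [1, 2]
```

The matrix costs O(V^2) memory regardless of how many edges exist, while the adjacency list only stores the actual edges, which is why it is preferred for sparse graphs.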
### Applications
- Shortest Path algorithm (GPS, high frequency trading ...)
- graph traversing: web crawlers for google
- spanning trees
- maximum flow problem: lots of problems can be reduced to maximum flow
- Because graphs have a matrix representation as well: we can handle these problems with matrices too
- That's why most of Google's algorithms involve matrix operations even though they are fundamentally graph algorithms
## Breadth-first Search
- What is it good for?
- We have a graph and we want to visit every node -> we can do it with BFS
- We visit every vertex exactly once
- We visit the neighbors then the neighbors of these new vertices and so on
- Running time complexity: O(V+E) aka **Linear Time Complexity**
- V = number of vertices (nodes) in the graph
- E = number of edges in the graph
- Memory complexity is not good: we have to store lots of references
- That's why DFS (depth-first search) is usually preferred
- But BFS constructs shortest paths: Dijkstra's algorithm does a BFS if all the edge weights are equal to 1
- FIFO structure
### Applications of BFS
- In AI/ML it can prove to be very important: robots can discover the surroundings more easily with BFS than DFS.
- It is also very important in maximum flow: Edmonds-Karp algorithm uses BFS for finding augmenting paths
- Cheney's algorithm in garbage collection -> it helps maintain active references on the heap memory
- It uses BFS to detect all the references on the heap
- Serialization / deserialization of a tree like structure (for example when order does matter) -> it allows the tree to be reconstructed in an efficient manner
```
# Breadth First Search Implementation (Graph traversal Algo)
#BFS uses a queue
#DFS uses a stack but usually implements it with recursion
class Node(object):
def __init__(self, name):
self.name = name
self.adjacencyList = []
self.visited = False
self.predecessor = None
class BreadthFirstSearch(object):
def bfs(self, startNode):
queue = []
queue.append(startNode)
startNode.visited = True
#while the queue is not empty
while queue:
actualNode = queue.pop(0)
print("%s " % actualNode.name)
for n in actualNode.adjacencyList:
if not n.visited:
n.visited = True
queue.append(n)
node1 = Node("A")
node2 = Node("B")
node3 = Node("C")
node4 = Node("D")
node5 = Node("E")
node1.adjacencyList.append(node2)
node1.adjacencyList.append(node3)
node2.adjacencyList.append(node4)
node4.adjacencyList.append(node5)
bfs = BreadthFirstSearch()
bfs.bfs(node1)
```
### Depth First Search (DFS)
- Depth-first search is a widely used graph traversal algorithm besides breadth-first search
- It was investigated as a strategy for solving mazes by Trémaux in the 19th century
- It explores as far as possible along each branch before backtracking // BFS was a layer-by-layer algorithm
- Time complexity of traversing a graph with DFS: O(V+E)
- Memory complexity: a bit better than that of BFS
### Applications of DFS
- Topological ordering
- Kosaraju's algorithm for finding strongly connected components in a graph, which can prove to be very important in recommendation systems (YouTube)
- Detecting cycles (checking whether a graph is a DAG (Directed Acyclic Graph) or not)
- Generating mazes or finding way out of a maze
### The difference between DFS and BFS: DFS uses a stack and BFS uses a queue
- BFS is a layer-by-layer algorithm and uses a queue
- DFS goes down a branch as deep as possible before backtracking (it does not matter whether you start left or right) and uses a stack
- Both traverse the graph in O(V+E) time; DFS is preferred most of the time because of its smaller memory footprint
```
# Implementation of DFS
class Node(object):
def __init__(self, name):
self.name = name
self.adjacenciesList= []
self.visited = False
self.predecessor = None
# the operating system handles the stack since we are implementing recursion
class DepthFirstSearch(object):
def dfs(self, node):
node.visited = True
print("%s " % node.name)
for n in node.adjacenciesList:
if not n.visited:
self.dfs(n)
node1 = Node("A")
node2 = Node("B")
node3 = Node("C")
node4 = Node("D")
node5 = Node("E")
node1.adjacenciesList.append(node2)
node1.adjacenciesList.append(node3)
node2.adjacenciesList.append(node4)
node4.adjacenciesList.append(node5)
dfs = DepthFirstSearch()
dfs.dfs(node1)
```
## Graph Algorithms - Shortest Paths
- Shortest path problem: finding a path between two vertices in a graph such that the sum of the weights of its edges is minimized
- Dijkstra algorithm
- It was constructed by computer scientist Edsger Dijkstra in 1956
- Dijkstra can handle positive edge weights / the Bellman-Ford algorithm can have negative weights as well
- Several variants: it can find the shortest path from A to B, but it is able to construct a shortest path tree as well -> defines the shortest paths from a source to all other nodes
- This is asymptotically the fastest known single-source shortest-path algorithm for arbitrary directed graphs with unbounded non-negative weights
- Dijkstras algorithm time complexity: O(V*logV+E)
- Dijkstra's algorithm is a greedy one: it tries to find the global optimum with the help of local minima -> for this problem that turns out to work
- It is greedy -> on every iteration we want to find the minimum distance to the next possible vertex -> appropriate data structures: heaps (binary or Fibonacci) or in general a priority queue
- Does not work with negative edge weights.
- Uses Priority queue (Heap)
- Without priority queue (Heap) the time complexity is O(V*V+E)
- Bellman-Ford algorithm
- Invented in 1958 by Bellman and Ford independently
- Slower than Dijkstra's but more robust: it can handle negative edge weights too
- Dijkstra's algorithm chooses the edges greedily, with the lowest cost; Bellman-Ford relaxes all edges at the same time, for V-1 iterations
- Running time is O(V*E)
- Does V-1 iterations, plus one more to detect cycles: if any cost still decreases in the V-th iteration, there is a negative cycle, because all shortest paths are found within the first V-1 iterations
- Yen's optimization (1970): the Bellman-Ford algorithm with some optimizations
- We can terminate the algorithm if there is no change in the distances between two iterations
- It uses the same early-exit technique as bubble sort
- Application:
- Cycle detection can prove to be very important
- Negative cycles as well -> we have to run the Bellman-Ford algorithm, which can handle negative edge weights by default
- On the FOREX market it can detect arbitrage situations
- Additional/final iterations are needed so the algorithm can check whether any negative cycles are in the graph or not
- A* search
- Floyd-Warshall algorithm
```
# Dijkstra Algo Implementation
import sys
import heapq
class Edge(object):
def __init__(self, weight, startVertex, targetVertex):
self.weight = weight
self.startVertex = startVertex
self.targetVertex = targetVertex
class Node(object):
def __init__(self, name):
self.name = name
self.visited = False
self.predecessor = None
self.adjacenciesList = []
self.minDistance = sys.maxsize
    # heapq orders nodes by minDistance via __lt__ (Python 3 ignores __cmp__)
    def __lt__(self, other):
        return self.minDistance < other.minDistance
class Algorithm(object):
def calculateShortestPath(self, vertexList, startVertex):
q = []
startVertex.minDistance = 0
heapq.heappush(q, startVertex)
# while q is not empty
while q:
actualVertex = heapq.heappop(q)
for edge in actualVertex.adjacenciesList:
u = edge.startVertex
v = edge.targetVertex
newDistance = u.minDistance + edge.weight
if newDistance < v.minDistance:
v.predecessor = u
v.minDistance = newDistance
heapq.heappush(q, v)
def getShortestPathTo(self, targetVertex):
print("Shortest path to vertex is: ", targetVertex.minDistance)
node = targetVertex
while node is not None:
print("%s " % node.name)
node = node.predecessor
# Make the nodes
node1 = Node("A")
node2 = Node("B")
node3 = Node("C")
node4 = Node("D")
node5 = Node("E")
node6 = Node("F")
node7 = Node("G")
node8 = Node("H")
# Set the distances between the nodes (edges)
edge1 = Edge(5, node1, node2)
edge2 = Edge(8, node1, node8)
edge3 = Edge(9, node1, node5)
edge4 = Edge(15, node2, node4)
edge5 = Edge(12, node2, node3)
edge6 = Edge(4, node2, node8)
edge7 = Edge(7, node8, node3)
edge8 = Edge(6, node8, node6)
edge9 = Edge(5, node5, node8)
edge10 = Edge(4, node5, node6)
edge11 = Edge(20, node5, node7)
edge12 = Edge(1, node6, node3)
edge13 = Edge(13, node6, node7)
edge14 = Edge(3, node3, node4)
edge15 = Edge(11, node3, node7)
edge16 = Edge(9, node4, node7)
# Append the edges
node1.adjacenciesList.append(edge1)
node1.adjacenciesList.append(edge2)
node1.adjacenciesList.append(edge3)
node2.adjacenciesList.append(edge4)
node2.adjacenciesList.append(edge5)
node2.adjacenciesList.append(edge6)
node8.adjacenciesList.append(edge7)
node8.adjacenciesList.append(edge8)
node5.adjacenciesList.append(edge9)
node5.adjacenciesList.append(edge10)
node5.adjacenciesList.append(edge11)
node6.adjacenciesList.append(edge12)
node6.adjacenciesList.append(edge13)
node3.adjacenciesList.append(edge14)
node3.adjacenciesList.append(edge15)
node4.adjacenciesList.append(edge16)
# node1 is starting vertex
vertexList = (node1, node2, node3, node4, node5, node6, node7, node8)
algo = Algorithm()
algo.calculateShortestPath(vertexList, node1)
algo.getShortestPathTo(node7)
# Bellman-Ford Implementation
import sys
class Node(object):
def __init__(self, name):
self.name = name
self.visited = False
self.predecessor = None
self.adjacenciesList = []
self.minDistance = sys.maxsize
class Edge(object):
def __init__(self, weight, startVertex, targetVertex):
self.weight = weight
self.startVertex = startVertex
self.targetVertex = targetVertex
class BellmanFord(object):
HAS_CYCLE = False
def calculateShortestPath(self, vertexList, edgeList, startVertex):
startVertex.minDistance = 0
for i in range(0, len(vertexList) - 1):
for edge in edgeList:
u = edge.startVertex
v = edge.targetVertex
newDistance = u.minDistance + edge.weight
if newDistance < v.minDistance:
v.minDistance = newDistance
v.predecessor = u
for edge in edgeList:
if self.hasCycle(edge):
print("Negative Cycle detected...")
BellmanFord.HAS_CYCLE = True
return
def hasCycle(self, edge):
if (edge.startVertex.minDistance + edge.weight) < edge.targetVertex.minDistance:
return True
else:
return False
def getShortestPathTo(self, targetVertex):
if not BellmanFord.HAS_CYCLE:
print("Shortest path exists with value: ", targetVertex.minDistance)
node = targetVertex
while node is not None:
print("%s" % node.name)
node = node.predecessor
else:
print("Negative cycle detected")
# Make the nodes
node1 = Node("A")
node2 = Node("B")
node3 = Node("C")
node4 = Node("D")
node5 = Node("E")
node6 = Node("F")
node7 = Node("G")
node8 = Node("H")
# Set the distances between the nodes (edges)
edge1 = Edge(5, node1, node2)
edge2 = Edge(8, node1, node8)
edge3 = Edge(9, node1, node5)
edge4 = Edge(15, node2, node4)
edge5 = Edge(12, node2, node3)
edge6 = Edge(4, node2, node8)
edge7 = Edge(7, node8, node3)
edge8 = Edge(6, node8, node6)
edge9 = Edge(5, node5, node8)
edge10 = Edge(4, node5, node6)
edge11 = Edge(20, node5, node7)
edge12 = Edge(1, node6, node3)
edge13 = Edge(13, node6, node7)
edge14 = Edge(3, node3, node4)
edge15 = Edge(11, node3, node7)
edge16 = Edge(9, node4, node7)
# Negative Cycle Example
edge17 = Edge(1, node1, node2)
edge18 = Edge(1, node2, node3)
edge19 = Edge(-3, node3, node1)
# Append the edges
node1.adjacenciesList.append(edge1)
node1.adjacenciesList.append(edge2)
node1.adjacenciesList.append(edge3)
node2.adjacenciesList.append(edge4)
node2.adjacenciesList.append(edge5)
node2.adjacenciesList.append(edge6)
node8.adjacenciesList.append(edge7)
node8.adjacenciesList.append(edge8)
node5.adjacenciesList.append(edge9)
node5.adjacenciesList.append(edge10)
node5.adjacenciesList.append(edge11)
node6.adjacenciesList.append(edge12)
node6.adjacenciesList.append(edge13)
node3.adjacenciesList.append(edge14)
node3.adjacenciesList.append(edge15)
node4.adjacenciesList.append(edge16)
# node1 is starting vertex
vertexList = (node1, node2, node3, node4, node5, node6, node7, node8)
# Change the edge list depending on whether you're demonstrating a negative cycle or not
# edgeList = (edge1, edge2, edge3, edge4, edge5, edge6, edge7, edge8, edge9, edge10, edge11, edge12, edge13, edge14, edge15, edge16)
edgeList = (edge17, edge18, edge19) # negative cycle edge list
algo = BellmanFord()
algo.calculateShortestPath(vertexList, edgeList, node1)
algo.getShortestPathTo(node7)
```
# DAG Shortest Path (Directed Acyclic Graph)
- If the graph is a DAG, i.e. there are no directed cycles, it is easier to find the shortest path
- We sort the vertices into topological order: we iterate through the topological order relaxing all edges from the actual vertex
- The topological sort algorithm computes the shortest path tree in any edge-weighted (weights can be negative) DAG in O(E+V) time
- It is much faster than Bellman-Ford or Dijkstra
- Applications
- Solving Knapsack problem
- GPS, vehicle routing, navigation
- Detecting arbitrage situations in FX
- RIP - Routing Information Protocol
- This is a distributed algorithm
- Each node calculates the distances between itself and all other nodes and stores the information as a table
- Each node sends its table to all adjacent nodes
- When a node receives distance tables from its neighbors, it calculates the shortest routes to all other nodes and updates its own table to reflect any changes
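The DAG shortest-path idea above (topological sort, then relax every outgoing edge in that order) can be sketched as follows. The graph and its weights are illustrative, and `dag_shortest_paths` is a helper name for this sketch, not a library function:

```python
from collections import deque

def dag_shortest_paths(graph, source):
    """graph: {vertex: [(neighbor, weight), ...]} - must be a DAG.
    Returns the minimum distance from source to every vertex."""
    # Kahn's algorithm for a topological order.
    indegree = {v: 0 for v in graph}
    for v in graph:
        for u, _ in graph[v]:
            indegree[u] += 1
    queue = deque(v for v in graph if indegree[v] == 0)
    order = []
    while queue:
        v = queue.popleft()
        order.append(v)
        for u, _ in graph[v]:
            indegree[u] -= 1
            if indegree[u] == 0:
                queue.append(u)
    # Relax all outgoing edges in topological order
    # (negative weights are fine here, unlike in Dijkstra).
    dist = {v: float('inf') for v in graph}
    dist[source] = 0
    for v in order:
        for u, w in graph[v]:
            if dist[v] + w < dist[u]:
                dist[u] = dist[v] + w
    return dist

graph = {'A': [('B', 5), ('C', 3)],
         'B': [('D', -2)],
         'C': [('D', 6)],
         'D': []}
print(dag_shortest_paths(graph, 'A'))  # {'A': 0, 'B': 5, 'C': 3, 'D': 3}
```

Each vertex and edge is processed exactly once, which is where the O(E+V) running time comes from.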
# Avidan-Shamir method
- When we want to shrink an image for example in the browser or on a smartphone without distortion
- We want to make sure the image will not deform
- We have to eliminate the least significant strings of pixels
- We set up an "energy function" and remove the connected string of pixels containing the least energy
- Photoshop and GIMP use it
- We build a huge graph: the vertices are the pixels and the edges point from every vertex to its 3 downward neighbors
- The energy function determines what the edge weights will be
- It's acyclic: we can use the topological-order shortest path to find the string of pixels to be removed
# Longest Path Problem
- Problem of finding a simple path of maximum length in a given graph
- There is no known polynomial-time algorithm for general graphs: it is an NP-hard problem
- It has a linear time solution for directed acyclic graphs (DAGs), which has important applications in finding the critical path in scheduling problems
- We just have to negate the edge weights and run the shortest path algorithm
- We have to use an algorithm that can handle the negative edge weights produced by the negation (Bellman-Ford, or topological-order relaxation on a DAG)
- Application: parallel job scheduling problem
- Given a set of jobs with durations and precedence constraints, schedule the jobs - by finding a start time for each - so as to achieve the minimum completion time while respecting the constraints
# Critical Path Method (CPM): A form of longest path problem
- The method was first used between 1940 and 1943 in the Manhattan Project
- Problem formulation: we want an algorithm for scheduling a set of project activities so that the total running time is as small as possible
- The algorithm needs:
- A list of all activities required to complete the project
- The time (duration) that each activity will take to complete
- The dependencies between the activities
# Graph Algorithms - Spanning Trees
### Disjoint Sets
- Also known as a union-find data structure
- Data structure to keep track of a set of elements partitioned into a number of disjoint (non overlapping) subsets
- Three main operations: union, find, makeSet
- Disjoint sets can be represented with the help of linked lists, but usually we implement them as a tree-like structure
- In the Kruskal algorithm it will be useful: with disjoint sets we can decide in approximately O(1) time whether two vertices are in the same set or not
- Application:
- It is used mostly in Kruskal algorithm implementation
- We have to check whether adding a given edge to the MST would form a cycle or not
- For checking this -> the union-find data structure is extremely helpful
- We can check whether a cycle is present -> in asymptotically O(1) constant time complexity
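A minimal standalone sketch of a union-find structure with the two standard optimizations, path compression and union by rank (this is separate from the disjoint-set class inside the Kruskal implementation later in these notes; the class and method names here are illustrative):

```python
class DisjointSet:
    def __init__(self, n):
        self.parent = list(range(n))  # each element starts in its own set
        self.rank = [0] * n

    def find(self, x):
        # Path compression: point every visited node directly at the root.
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])
        return self.parent[x]

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return False  # same set already -> adding this edge would form a cycle
        # Union by rank: attach the shorter tree under the taller one.
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1
        return True

ds = DisjointSet(5)
ds.union(0, 1)
ds.union(1, 2)
print(ds.find(0) == ds.find(2))  # True: 0 and 2 are now in the same set
print(ds.union(0, 2))            # False: merging again would form a cycle
```

With both optimizations the amortized cost per operation is nearly constant, which is the "approximately O(1)" cycle check Kruskal relies on.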
### Spanning trees
- A spanning tree of an undirected G graph is a subgraph that includes all the vertices of G
- In general, a graph may have several spanning trees
- We can assign a weight to each edge
- A minimum spanning tree is then a spanning tree with weight less than or equal to the weight of every other spanning tree
- Has lots of applications: in big data analysis, clustering algorithms, finding minimum cost for a telecommunications company laying cable to a new neighborhood
- Standard algorithms: Prims-Jarnik, Kruskal -> greedy algorithms
- Kruskal Algorithm
- We sort the edges according to their edge weights
- It can be done in O(N*logN) with mergesort or quicksort
- Union find data structure: "disjoint set"
- We start adding edges to the MST (minimum spanning tree) and we want to make sure there will be no cycles in the spanning tree. This check can be done in O(logV) with the help of the union-find data structure
- We could use a heap instead of sorting the edges in the beginning, but the running time would be the same. So sometimes Kruskal's algorithm is implemented with priority queues
- Worst case running time: O(E*logE), so we can use it for huge graphs too
- If the edges are sorted: the algorithm will be quasi-linear
- If we multiply the weights with a constant or add a constant to the edge weights: the result will be the same
- **Disjoint Sets:** At the beginning we have as many sets as the number of vertices. Then sort the edges from least to greatest. Then go through the sorted list, seeing which edges from distinct sets can be combined without creating a cycle (adding an edge between two vertices already in the same set). When adding an edge, we merge two sets together. The algorithm stops when there is only a single set remaining.
- Prim-Jarnik Algorithm
- In the Kruskal implementation we build the spanning tree separately, adding the smallest edge to the spanning tree if there is no cycle
- In Prim's algorithm we build the spanning tree from a given vertex, adding the smallest edge to the MST
- Kruskal -> edge based
- Prim's -> vertex based
- There are two implementations: lazy and eager
- Lazy Implementation: add the new neighbor edges to the heap without deleting its content
- Eager implementation: we keep updating the heap if the distance from the vertex to the MST has changed
- Average running time: O(E*logE), but we need additional memory space O(E)
- Worst case: O(E*logV)
- Difference between Prim's and Kruskal
- Prim's algorithm is significantly faster in the limit when you have a really dense graph with many more edges than vertices
- Kruskal performs better in typical situations (sparse graphs) because it uses simpler data structures
- Kruskal can have better performance if the edges can be sorted in linear time or are already sorted
- Prim's is better if the number of edges relative to the number of vertices is high (dense graphs)
- **Spanning Trees Applications**
- Optimizing road/cable/pipe length
- We have N cities
- We want to make sure that every city can be reached by roads
- So the naive approach is to connect every city with every other city (Not that optimal)
- We have to find the minimum spanning tree: in order to connect all of the cities with the lowest cost possible (so minimum length of roads)
- Same problems -> want to lead in internet to a region or electricity or building motorways or oil pipelines
- K-means clustering
- We want to classify similar items
- For example dots in a 2-dimensional plane
- The dots that are closer to each other than to any other dots -> will be in the same cluster
- We construct a minimum spanning tree -> and remove the N-1 most expensive edges if we want to make N clusters
- Routing in LAN (Local Area Networks)
- The spanning tree protocol (STP) ensures a loop-free topology for any bridged Ethernet local area network
- Each switch would infinitely duplicate the first broadcast -> because there's nothing to prevent loops
- The idea behind a spanning tree topology is that bridges can discover a subset of the topology that is loop-free: that's the tree
- STP also makes sure there is enough connectivity to reach every portion of the network by spanning the entire LAN
```
# Kruskal Algorithm Implementation
class Vertex(object):
def __init__(self, name):
self.name = name
self.node = None
class Node(object):
def __init__(self, height, nodeId, parentNode):
self.height = height
self.nodeId = nodeId
self.parentNode = parentNode
class Edge(object):
def __init__(self, weight, startVertex, targetVertex):
self.weight= weight
self.startVertex = startVertex
self.targetVertex = targetVertex
    # edgeList.sort() orders edges by weight via __lt__ (Python 3 ignores __cmp__)
    def __lt__(self, other):
        return self.weight < other.weight
class DisjointSet(object):
    def __init__(self, vertexList):
        self.vertexList = vertexList
self.rootNodes = []
self.nodeCount = 0
self.setCount = 0
self.makeSets(vertexList)
def find(self, node):
currentNode = node
while currentNode.parentNode is not None:
currentNode = currentNode.parentNode
root = currentNode
currentNode = node
while currentNode is not root:
temp = currentNode.parentNode
currentNode.parentNode = root
currentNode = temp
return root.nodeId
def merge(self, node1, node2):
index1 = self.find(node1)
index2 = self.find(node2)
if index1 == index2:
return #they are in the same set
root1 = self.rootNodes[index1]
root2 = self.rootNodes[index2]
if root1.height < root2.height:
root1.parentNode = root2
elif root1.height > root2.height:
root2.parentNode = root1
else:
            root2.parentNode = root1
            root1.height = root1.height + 1
def makeSets(self, vertexList):
for v in vertexList:
self.makeSet(v)
def makeSet(self, vertex):
node = Node(0, len(self.rootNodes), None)
vertex.node = node
self.rootNodes.append(node)
self.setCount = self.setCount +1
self.nodeCount = self.nodeCount +1
class KruskalAlgorithm(object):
def spanningTree(self, vertexList, edgeList):
# Create the disjoint set and the disjoint set assigns a node to every vertex
disjointSet = DisjointSet(vertexList)
spanningTree = []
# Sort edges according to the edge weights
edgeList.sort()
for edge in edgeList:
u = edge.startVertex
v = edge.targetVertex
# if the nodes associated with u, v are not equal it means they are not in the same disjoint set
# it means we can append that given edge to the spanning tree
if disjointSet.find(u.node) is not disjointSet.find(v.node):
spanningTree.append(edge)
# then merge the 2 disjoint sets together
disjointSet.merge(u.node, v.node)
# print out the spanning tree
for edge in spanningTree:
print(edge.startVertex.name, " - ", edge.targetVertex.name)
vertex1 = Vertex("a")
vertex2 = Vertex("b")
vertex3 = Vertex("c")
vertex4 = Vertex("d")
vertex5 = Vertex("e")
vertex6 = Vertex("f")
vertex7 = Vertex("g")
edge1 = Edge(2, vertex1, vertex2)
edge2 = Edge(6, vertex1, vertex3)
edge3 = Edge(5, vertex1, vertex5)
edge4 = Edge(10, vertex1, vertex6)
edge5 = Edge(3, vertex2, vertex4)
edge6 = Edge(3, vertex2, vertex5)
edge7 = Edge(1, vertex3, vertex4)
edge8 = Edge(2, vertex3, vertex6)
edge9 = Edge(4, vertex4, vertex5)
edge10 = Edge(5, vertex4, vertex7)
edge11 = Edge(5, vertex6, vertex7)
vertexList = []
vertexList.append(vertex1)
vertexList.append(vertex2)
vertexList.append(vertex3)
vertexList.append(vertex4)
vertexList.append(vertex5)
vertexList.append(vertex6)
vertexList.append(vertex7)
edgeList = []
edgeList.append(edge1)
edgeList.append(edge2)
edgeList.append(edge3)
edgeList.append(edge4)
edgeList.append(edge5)
edgeList.append(edge6)
edgeList.append(edge7)
edgeList.append(edge8)
edgeList.append(edge9)
edgeList.append(edge10)
edgeList.append(edge11)
algorithm = KruskalAlgorithm()
algorithm.spanningTree(vertexList, edgeList)
# Prims-Jarnik Algorithm
import heapq
class Vertex():
def __init__(self, name):
self.name = name
self.visited = False
# store all the neighboring vertices
self.adjacencyList = []
# When we print out the Vertex the __str__ method will print out the name by default
def __str__(self):
return self.name
class Edge():
def __init__(self, weight, startVertex, targetVertex):
self.weight = weight
self.startVertex = startVertex
self.targetVertex = targetVertex
def __lt__(self, otherEdge):
# compares edges based on the weight parameter
selfPriority = self.weight
otherEdgePriority = otherEdge.weight
return selfPriority < otherEdgePriority
class PrimsJarnik():
def __init__(self, unvisitedList):
self.unvisitedList = unvisitedList
self.spanningTree = []
self.edgeHeap = []
self.fullCost = 0
def calculateSpanningTree(self, vertex):
self.unvisitedList.remove(vertex)
        # while we haven't visited every vertex in the graph
while self.unvisitedList:
# we visit every edge with an associated vertex
for edge in vertex.adjacencyList:
                # if we haven't visited the target vertex, push the edge onto the heap
if edge.targetVertex in self.unvisitedList:
heapq.heappush(self.edgeHeap, edge)
# Select the edge with the minimum weight parameter
minEdge = heapq.heappop(self.edgeHeap)
if minEdge.targetVertex in self.unvisitedList:
# append to spanning tree
self.spanningTree.append(minEdge)
print("Edge added to spanning tree: %s-%s" % (minEdge.startVertex.name, minEdge.targetVertex.name))
# update full cost
self.fullCost += minEdge.weight
# update the vertex
vertex = minEdge.targetVertex
self.unvisitedList.remove(vertex)
    def getSpanningTree(self):
return self.spanningTree
def getCost(self):
return self.fullCost
vertexA = Vertex("A")
vertexB = Vertex("B")
vertexC = Vertex("C")
vertexD = Vertex("D")
vertexE = Vertex("E")
vertexF = Vertex("F")
vertexG = Vertex("G")
edgeAB = Edge(2, vertexA, vertexB)
edgeBA = Edge(2, vertexB, vertexA)
edgeAE = Edge(5, vertexA, vertexE)
edgeEA = Edge(5, vertexE, vertexA)
edgeAC = Edge(6, vertexA, vertexC)
edgeCA = Edge(6, vertexC, vertexA)
edgeAF = Edge(10, vertexA, vertexF)
edgeFA = Edge(10, vertexF, vertexA)
edgeBE = Edge(3, vertexB, vertexE)
edgeEB = Edge(3, vertexE, vertexB)
edgeBD = Edge(3, vertexB, vertexD)
edgeDB = Edge(3, vertexD, vertexB)
edgeCD = Edge(1, vertexC, vertexD)
edgeDC = Edge(1, vertexD, vertexC)
edgeCF = Edge(2, vertexC, vertexF)
edgeFC = Edge(2, vertexF, vertexC)
edgeDE = Edge(4, vertexD, vertexE)
edgeED = Edge(4, vertexE, vertexD)
edgeDG = Edge(5, vertexD, vertexG)
edgeGD = Edge(5, vertexG, vertexD)
edgeFG = Edge(3, vertexF, vertexG)
edgeGF = Edge(3, vertexG, vertexF)
unvisitedList = []
unvisitedList.append(vertexA)
unvisitedList.append(vertexB)
unvisitedList.append(vertexC)
unvisitedList.append(vertexD)
unvisitedList.append(vertexE)
unvisitedList.append(vertexF)
unvisitedList.append(vertexG)
vertexA.adjacencyList.append(edgeAB)
vertexA.adjacencyList.append(edgeAC)
vertexA.adjacencyList.append(edgeAE)
vertexA.adjacencyList.append(edgeAF)
vertexB.adjacencyList.append(edgeBA)
vertexB.adjacencyList.append(edgeBD)
vertexB.adjacencyList.append(edgeBE)
vertexC.adjacencyList.append(edgeCA)
vertexC.adjacencyList.append(edgeCD)
vertexC.adjacencyList.append(edgeCF)
vertexD.adjacencyList.append(edgeDB)
vertexD.adjacencyList.append(edgeDC)
vertexD.adjacencyList.append(edgeDE)
vertexD.adjacencyList.append(edgeDG)
vertexE.adjacencyList.append(edgeEA)
vertexE.adjacencyList.append(edgeEB)
vertexE.adjacencyList.append(edgeED)
vertexF.adjacencyList.append(edgeFA)
vertexF.adjacencyList.append(edgeFC)
vertexF.adjacencyList.append(edgeFG)
vertexG.adjacencyList.append(edgeGD)
vertexG.adjacencyList.append(edgeGF)
algorithm = PrimsJarnik(unvisitedList)
algorithm.calculateSpanningTree(vertexD)
print(algorithm.getCost())
```
# Basic Sorting Algorithms
## Sorting
- A sorting algorithm is an algorithm that puts elements of an array in a certain order
- Numbers -> numerical ordering
- Strings, characters -> alphabetical ordering
- Comparison based algorithms
- bubble sort, insertion sort, selection sort, merge sort, quick sort
- Non-comparison based sorting
- radix sort, bucket sort
- Features
- Time complexity O(N^2), O(N log N), O(N)
- In place: Strictly an in-place sort needs only O(1) memory beyond the items being sorted
- So an in-place algorithm does not need any extra memory
- Recursive: some sorting algorithms are implemented in a recursive manner -> the divide and conquer ones especially (merge sort(stable) and quick sort(unstable))
- Stable: Stable sorting algorithms maintain the relative order of records with equal values
## Adaptive Sorting Algorithms
- An adaptive algorithm is an algorithm that changes its behavior based on information available at runtime
- Adaptive sort -> it takes advantage of existing order in its input
- It benefits from local orders -> sometimes an unsorted array contains sequences that are sorted by default -> the algorithm will sort faster
- Most of the time we just have to modify existing sorting algorithms in order to end up with an adaptive one
- Comparison based algorithms have optimal O(N log N) running time complexity
- Adaptive sort takes advantage of the existing order of the input to try to achieve better times: maybe O(N) could be reached
- The more presorted the input is, the faster it should be sorted
- IMPORTANT: nearly sorted sequences are common in practice
- Heapsort, merge sort: not adaptive algorithms, they do not take advantage of presorted sequences
- Shell sort: an adaptive algorithm, so it performs better if the input is partially sorted
## Bubble Sort
- Repeatedly steps through the list to be sorted, compares each pair of adjacent items and swaps them if they are in the wrong order
- It is too slow and impractical for most problems even when compared to insertion sort
- Bubble sort has worst-case and average complexity both O(N^2)
- Bubble sort is not a practical sorting algorithm
- It will not be efficient in the case of a reverse-ordered collection
- Stable sorting algorithm
- In place algorithm -> does not need any additional memory
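The description above can be sketched as a minimal in-place implementation; the `swapped` flag is the usual early-exit optimization that makes a pass over an already sorted array O(N):

```python
def bubble_sort(items):
    n = len(items)
    for i in range(n - 1):
        swapped = False
        # After pass i, the largest i+1 items have "bubbled" to the end.
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:  # no swaps in this pass -> already sorted, stop early
            break
    return items

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```

Only adjacent equal items are never swapped, which is why the algorithm is stable.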
## Selection Sort
- Another O(N^2) running time sorting algorithm
- Selection sort is noted for its simplicity and it has performance advantages over more complicated algorithms
- Particularly useful where auxiliary memory is limited
- The algorithm divides the input into two parts:
- the subarray of items already sorted
- and the subarray of items remaining to be sorted that occupy the rest of the array
- The algorithm proceeds by finding the smallest element in the unsorted subarray
- Exchange / swap it with the leftmost unsorted element -> putting it in sorted order
- Move the subarray boundary one element to the right
- It is an in place algorithm -> no need for extra memory
- Selection sort almost always outperforms bubble sort
- Not a stable sort -> does not preserve the order of keys with equal values
- Quite counterintuitive: selection sort and insertion sort are both typically faster for small arrays (arrays with 10-20 items)
- Usual optimization method -> recursive algorithms switch to insertion sort or selection sort for small subarrays
- Makes fewer writes than insertion sort -> this can be important if writes are significantly more expensive than reads
- For example with EEPROM or flash memory where every write lessens the lifespan of the memory
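The steps above can be sketched directly; note the single swap per pass, which is why selection sort makes only O(N) writes in total:

```python
def selection_sort(items):
    n = len(items)
    for i in range(n - 1):
        # Find the smallest element in the unsorted subarray items[i:].
        min_index = i
        for j in range(i + 1, n):
            if items[j] < items[min_index]:
                min_index = j
        # One swap per pass: at most N-1 writes for the whole sort.
        if min_index != i:
            items[i], items[min_index] = items[min_index], items[i]
    return items

print(selection_sort([64, 25, 12, 22, 11]))  # [11, 12, 22, 25, 64]
```

The swap can move an element past equal keys, which is why this version is not stable.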
## Insertion Sort
- It's a simple sorting algorithm that builds the final sorted array one item at a time
- It has quadratic running time O(N^2)
- On large datasets it is very inefficient but on arrays with 10-20 items it is quite good
- Simple implementation
- It is more efficient than other quadratic running time sorting procedures such as a bubble sort or selection sort
- Adaptive algorithm -> speeds up when array is already substantially sorted
- Stable sort -> preserve the order of the items with equal keys
- In place algorithm -> does not need any additional memory
- It is an online algorithm -> it can sort an array as it receives it, for example while downloading data from the web
- Hybrid algorithms use insertion sort if the subarray is small enough
- Insertion sort is faster for small subarrays than quicksort
- Variant of insertion sort is shell sort
- Sometimes selection sort is better: they are very similar algorithms
- Insertion sort requires more writes because the inner loop can require shifting large sections of the sorted portion of the array
- In general, insertion sort will write to the array O(N^2) times while selection sort will write only O(N) times
- For this reason selection sort may be preferable in cases where writing to memory is significantly more expensive than reading -> for example flash memory
## Quicksort
- It is an efficient sorting algorithm
- It was developed by Tony Hoare in 1959 (the same person who invented quickselect algorithm)
- A well implemented quick sort can outperform heapsort and merge sort -> the main competitors of quicksort
- A comparison-based algorithm -> not able to be faster than linearithmic time complexity
- The efficient implementation of quicksort is NOT stable -> does not keep the relative order of items with equal value
- It is in place -> does not need any additional memory
- On average case it has O(N log N) running time
- But the worst case running time is quadratic O(N^2)
- It is widely used in programming languages
- Primitive Types -> usually quicksort is used
- reference types/objects -> usually mergesort is used
- It is a divide and conquer algorithm:
- Pick an element from the array: this is the pivot item
- partition phase: rearrange the array so that all elements with values less than the pivot come before the pivot, while all elements with values greater than the pivot value come after it (equal values can go either way)
- recursively apply these steps on the subarrays
- Base Case for Recursion: Arrays of size zero or one never need to be sorted
- Choosing the pivot item:
- It is very important -> if we keep choosing bad pivots, the running time will be quadratic
1. We can choose a pivot at random (usually it is working fine)
2. Choose the middle index of the array as the pivot
## Mergesort
- Mergesort is a divide and conquer algorithm that was invented by John von Neumann in 1945
- Comparison based algorithm with running time complexity O(N logN)
- It is stable sorting algorithm
- Not an in-place algorithm
- Although heapsort has the same time bounds as mergesort -> heapsort requires only Θ(1) auxiliary space instead of mergesort's Θ(N)
- Efficient quicksort implementations generally outperforms mergesort
- Mergesort is often the best choice for sorting a linked list: in this situation it is relatively easy to implement a mergesort in such a way that it requires only O(1) extra space
- How does Mergesort work
- "Divide"
- Divide the array into two subarrays recursively
- Sort these subarrays recursively with mergesort again
- If there is only a single item left in the subarray -> we consider it to be sorted by definition
- Merge the subarrays to get the final sorted array
- "Conquer"
- We start at the beginning of the subarrays: we keep comparing them, we insert the smaller into the result array
    - VERY IMPORTANT: after the main merge loop we still have to copy over any remaining items from the left or right subarray
```
# Bubble Sort Implementation
def bubble_sort(nums):
for i in range(len(nums)-1):
        # iterate through nums until we reach the end of the unsorted part
        # -1 b/c of 0 index
        # -i b/c the last i items are already sorted
        for j in range(0, len(nums)-1-i, 1):
if nums[j] > nums[j+1]:
swap(nums, j, j+1)
return nums
def swap(nums, i, j):
temp = nums[i]
nums[i] = nums[j]
nums[j] = temp
a = [1,5,3,2,4,8,7]
print(bubble_sort(a))
# Selection Sort Implementation
def selection_sort(nums):
for i in range(len(nums)-1):
index = i
for j in range(i + 1, len(nums), 1):
            # switch the operator to change between ascending and descending
if nums[j] < nums[index]:
index=j
if index != i:
swap(nums, index,i)
return nums
def swap(nums, i, j):
temp = nums[i]
nums[i] = nums[j]
nums[j] = temp
nums = [5,2,1,7,6,-8,8,0]
print(selection_sort(nums))
# Insertion Sort Implementation
def insertion_sort(nums):
for i in range(len(nums)):
j = i
while j>0 and nums[j-1] > nums[j]:
swap(nums, j, j-1)
j = j -1
return nums
def swap(nums, i, j):
temp = nums[i]
nums[i] = nums[j]
nums[j] = temp
nums = [1,5,4,8,10,100,4]
print(insertion_sort(nums))
# Quicksort Implementation
def quick_sort(nums, low, high):
    if low >= high:
        return nums
    pivot_index = partition(nums, low, high)
    quick_sort(nums, low, pivot_index-1)
    quick_sort(nums, pivot_index+1, high)
    return nums
def partition(nums, low, high):
pivot_index = (low+high)//2
swap(nums, pivot_index, high)
i = low
for j in range(low, high, 1):
if nums[j] <= nums[high]:
swap(nums, i, j)
i = i + 1
swap(nums, i, high)
return i
def swap(nums, i, j):
temp = nums[i]
nums[i] = nums[j]
nums[j] = temp
nums = [1,5,4,8,-10,100,4]
print(quick_sort(nums, 0, len(nums)-1))
# Mergesort Implementation
def merge_sort(nums):
# --Divide--
# Split arrays until we have single items (base case)
    if len(nums) <= 1:
        return
# Generate middle index
middle_index = len(nums) // 2
# Create 2 sub arrays
left_half = nums[:middle_index]
right_half = nums[middle_index:]
    # Call recursively on the left and right sub arrays until base case is reached
merge_sort(left_half)
merge_sort(right_half)
# --Conquer--
    # Define 3 indices (the extra subarrays need additional memory -> this is why it's not an in-place algo)
    i = 0 # Tracks the left subarray
    j = 0 # Tracks the right subarray
    k = 0 # Tracks items in the result array
    # While "i" is less than the length of the left half and "j" is less than the
    # length of the right half, we still have items to compare from both the
    # left sub-array and the right sub-array
while i < len(left_half) and j < len(right_half):
# if the item in the left sub-array is smaller than the item in the right sub array
# we are going to insert it into the nums array
if left_half[i] < right_half[j]:
nums[k] = left_half[i]
# We have to increment "i" because it tracks the left sub array
# Since we added an item from the left sub array
i = i + 1
else:
nums[k] = right_half[j]
j = j +1
# Increment k since it tracks the result array
k = k + 1
# If we have considered every item from the right array then we copy the values from the left array
while i < len(left_half):
nums[k] = left_half[i]
k = k + 1
i = i + 1
    # If we have considered every item from the left array then we copy the values from the right array
while j < len(right_half):
nums[k] = right_half[j]
k = k + 1
j = j + 1
nums = [1,2,3,4,-5,-1, 10, 100, 300, 43, -88, -32, 32]
merge_sort(nums)
print(nums)
```
# Hybrid Algorithms
- It combines two or more algorithms to solve a given problem
- It chooses one algorithm depending on the data, or switches between them over the course of the algorithm
- This is generally done to combine desired features of each, so that the overall algorithm is better than the individual components
- Important: hybrid algorithm does not refer to simply combining multiple algorithms to solve a different problem, but only to combining algorithms that solve the same problem -> but differ in other characteristics (such as performance)
- The technique can be used when sorting
- Heapsort -> it has the advantage of a guaranteed O(N logN) running time
- Quicksort -> optimal implementations outperform both mergesort and heapsort
- BUT quicksort can have quadratic running time when we keep choosing "bad" pivots
- Solution: let's combine the two algorithms
- **Introsort** (Quicksort + Heapsort):
- It is a hybrid sorting algorithm that provides both fast average performance and optimal worst-case performance
- It begins with quicksort and switches to heapsort when quicksort becomes too slow.
- **Timsort** (Insertion Sort + Mergesort)
- Insertion Sort: Very efficient on small data (5-10 elements)
- Mergesort/Quicksort: Asymptotically optimal on large datasets, but the overhead becomes significant if applying them to small datasets.
- Solution: let's combine the two algorithms
- Highly optimized hybrid algorithm
- It was implemented by Tim Peters in 2002 for use in the Python programming language
- Best case running time: O(N)
- Worst case running time O(N logN)
- Worst space complexity O(N)
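The depth-limited idea behind introsort can be sketched in a few lines. This is only an illustration of the switching logic (simplified and not in-place, unlike real library implementations); the cutoff of 16 elements and the 2·log₂N depth limit are conventional choices, not taken from the notes above.

```python
import heapq
import math

def introsort(nums, max_depth=None):
    """Sketch of introsort: quicksort until a depth limit, then heapsort;
    insertion sort for small arrays."""
    if max_depth is None:
        max_depth = 2 * int(math.log2(len(nums) or 1) + 1)
    if len(nums) <= 16:
        # small array -> insertion sort (fast for 10-20 items)
        for i in range(1, len(nums)):
            j = i
            while j > 0 and nums[j-1] > nums[j]:
                nums[j-1], nums[j] = nums[j], nums[j-1]
                j -= 1
        return nums
    if max_depth == 0:
        # depth limit hit (too many bad pivots) -> heapsort guarantees O(N logN)
        heapq.heapify(nums)
        return [heapq.heappop(nums) for _ in range(len(nums))]
    # quicksort step (simplified: builds new lists instead of partitioning in place)
    pivot = nums[len(nums) // 2]
    left = [x for x in nums if x < pivot]
    mid = [x for x in nums if x == pivot]
    right = [x for x in nums if x > pivot]
    return introsort(left, max_depth - 1) + mid + introsort(right, max_depth - 1)
```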
# Non-comparison based sorting
- There are simpler algorithms that can sort a list using partial information about the keys
- For example
- Radix Sort
- Bucket Sort
- **Counting Sort**
- It operates by counting the number of objects that have each distinct key value
- Integer sorting algorithm: we assume the values to be integers
- And using arithmetic on those counts to determine the positions of each key value in the output sequence
- It is only suitable for direct use in situations where the variation in keys is not significantly greater than the number of items
- It can be used as a subroutine in radix sort
- Because counting sort uses key values as indexes into an array -> it is not a comparison-based sorting algorithm, so the linearithmic lower bound on comparison sorts does not apply
- Running time: O(N+k)
- N -> number of items we want to sort
- k -> difference between the max and min key values, basically the number of possible keys
- Conclusion: It is only suitable for direct use in situations where the variation in keys is not significantly greater than the number of items
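The counting idea above can be sketched directly (a minimal illustration, not tuned for production use):

```python
def counting_sort(nums):
    """Counting sort for integer keys: O(N + k) time, no comparisons."""
    if not nums:
        return []
    lo, hi = min(nums), max(nums)
    counts = [0] * (hi - lo + 1)          # one counter per possible key
    for x in nums:
        counts[x - lo] += 1               # count occurrences of each key
    result = []
    for offset, c in enumerate(counts):
        result.extend([lo + offset] * c)  # emit each key as many times as it occurred
    return result
```

Note how the auxiliary memory is proportional to k = max - min + 1, which is why the technique only pays off when the key range is not much larger than the number of items.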
- **Radix Sort**
- Can be very efficient, because there are no comparisons
- So O(N) running time can be reached
- Running time: O(k*N) where k is the number of digits in the input number
- We sort the elements according to individual characters or digits
- It is a stable sorting algorithm
- 2 different kinds
- LSD (Least Significant digit first string sorting)
- Consider characters from right to left
- We can use it for fixed length strings or fixed length numbers for example integers
- Sort the characters at the last columns... then keep going left and sort the columns independently
- Typical interview question: How to sort one million 32 bit integers
- MSD (Most-significant-digit-first string sorting)
- Consider characters from left to right
- It is sensitive to ASCII and Unicode representations
- It has several advantages
- MSD examines just enough characters to sort the key
- CAN BE SUBLINEAR IN INPUT SIZE
- MSD accesses memory randomly ... not so efficient
- Solutions: we should combine it with quicksort ... this is the 3-way radix quicksort
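An LSD radix sort sketch for non-negative integers (base 10 for readability; real implementations often use byte-sized digits). Each pass is a stable bucket distribution on one digit, least significant digit first:

```python
def radix_sort(nums):
    """LSD radix sort for non-negative integers: O(k*N), k = number of digits."""
    if not nums:
        return nums
    base = 10
    digits = len(str(max(nums)))
    for d in range(digits):
        # stable pass on digit d (least significant digit first)
        buckets = [[] for _ in range(base)]
        for x in nums:
            buckets[(x // base**d) % base].append(x)
        nums = [x for bucket in buckets for x in bucket]
    return nums
```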
# Algorithmic Problems in Python
### What is Recursion
- A method (procedure) where the solution to a problem depends on solutions to smaller instances of the same problem
- We break the task into smaller subtasks
- The approach can be applied to many types of problems and recursion is one of the central ideas of computer science
- We have to define base cases in order to avoid infinite loops
- We can solve problems with recursion or with iteration
- so we can transform recursion into iteration and vice versa
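The classic factorial function illustrates both approaches (a standard textbook example, not from the notes above):

```python
def factorial_recursive(n):
    # base case: without it the recursion would never terminate
    if n <= 1:
        return 1
    # the solution depends on the solution of a smaller instance
    return n * factorial_recursive(n - 1)

def factorial_iterative(n):
    # the same computation expressed as iteration
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result
```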
Avani Gupta <br>
Roll: 2019121004
# Exercise - Multi-class classification of MNIST using Perceptron
In binary perceptron, where $\mathbf{y} \in \{-1, +1\}$, we used to update our weights only for wrongly classified examples.
The multi-class perceptron is regarded as a generalization of the binary perceptron. Learning through iteration is the same as for the perceptron. Weighted inputs are passed through a multiclass signum activation function. If the predicted output label is the same as the true label then the weights are not updated. However, when the predicted output label $\neq$ true label, the wrongly classified input example is added to the weights of the correct label and subtracted from the weights of the incorrect label. Effectively, this amounts to 'rewarding' the correct weight vector, 'punishing' the misleading, incorrect weight vector, and leaving all other weight vectors alone.
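As a minimal sketch of the update rule just described (a toy illustration with assumed shapes: `weights` is features × classes; the full vectorized implementation used for this exercise appears in the code cells below):

```python
import numpy as np

def perceptron_step(weights, x, true_label, lr=1.0):
    """One multi-class perceptron update for a single example.
    weights: (num_features, num_classes), x: (num_features,)."""
    predicted = int(np.argmax(weights.T @ x))
    if predicted != true_label:
        weights[:, true_label] += lr * x  # 'reward' the correct weight vector
        weights[:, predicted] -= lr * x   # 'punish' the misleading weight vector
    return weights                        # all other weight vectors are untouched

# toy example: 2 features, 3 classes, one misclassified point
W = np.zeros((2, 3))
x = np.array([1.0, 2.0])
W = perceptron_step(W, x, true_label=1)
```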
```
from sklearn import datasets
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
import random
import numpy as np
import seaborn as sns; sns.set();
import pandas as pd
import math
import gif
import warnings
warnings.filterwarnings('ignore')
# Setting the seed to ensure reproducibility of experiments
np.random.seed(11)
# One-hot encoding of target label, Y
def one_hot(a):
b = -1 * np.ones((a.size, a.max()+1))
b[np.arange(a.size), a] = 1
return b
# Loading digits datasets
digits = datasets.load_digits()
# One-hot encoding of target label, Y
Y = digits.target
Y = one_hot(Y)
# Adding column of ones to absorb bias b of the hyperplane into X
X = digits.data
bias_ones = np.ones((len(X), 1))
X = np.hstack((X, bias_ones))
# Train-val-test data
X_train_val, X_test, Y_train_val, Y_test = train_test_split(X, Y, shuffle=True, test_size = 0.2)
X_train, X_val, Y_train, Y_val = train_test_split(X_train_val, Y_train_val, test_size = 0.12517)
print("Training dataset: ", X_train.shape)
print("Validation dataset: ", X_val.shape)
print("Test dataset: ", X_test.shape)
sns.reset_orig();
plt.gray()
plt.matshow(digits.images[10])
plt.show();
```
#### Write your code below
Functions from the tutorial notebook:
```
# Defining signum activation function
def signum(vec_w_x):
""" signum activation for perceptron
Parameters
------------
vec_w_x: ndarray
Weighted inputs
"""
vec_w_x[vec_w_x >= 0] = 1
vec_w_x[vec_w_x < 0] = -1
return vec_w_x
# multi-class signum
def multi_class_signum(vec_w_x):
""" Multiclass signum activation.
Parameters
------------
vec_w_x: ndarray
Weighted inputs
"""
flag = np.all(vec_w_x == 0)
if flag:
return vec_w_x
else:
num_examples, num_outputs = np.shape(vec_w_x)
range_examples = np.array(range(0, num_examples))
zero_idxs = np.argwhere(np.all(vec_w_x == 0, axis=1))
non_zero_examples = np.delete(range_examples, zero_idxs[:, 0])
signum_vec_w_x = vec_w_x[non_zero_examples]
maxvals = np.amax(signum_vec_w_x, axis=1)
for i in range(num_examples):
idx = np.argwhere(signum_vec_w_x == maxvals[i])[0]
signum_vec_w_x[idx[0], idx[1]] = 1
non_maxvals_idxs = np.argwhere(signum_vec_w_x != 1)
signum_vec_w_x[non_maxvals_idxs[:, 0], non_maxvals_idxs[:, 1]] = -1
vec_w_x[non_zero_examples] = signum_vec_w_x
return vec_w_x
# Evaluation for train, val, and test set.
def get_accuracy(y_predicted, Y_input_set, num_datapoints):
miscls_points = np.argwhere(np.any(y_predicted != Y_input_set, axis=1))
miscls_points = np.unique(miscls_points)
accuracy = (1-len(miscls_points)/num_datapoints)*100
return accuracy
def get_prediction(X_input_set, Y_input_set, weights, get_acc=True, model_type='perceptron', predict='no'):
if len(Y_input_set) != 0:
num_datapoints, num_categories = np.shape(Y_input_set)
vec_w_transpose_x = np.dot(X_input_set, weights)
if num_categories > 1: # Multi-class
if model_type == 'perceptron':
y_pred_out = multi_class_signum(vec_w_transpose_x)
elif model_type == 'logreg':
y_pred_out = softmax(X_input_set, vec_w_transpose_x, predict=predict)
else: # Binary class
if model_type == 'perceptron' or model_type == 'LinearDA':
y_pred_out = signum(vec_w_transpose_x)
elif model_type == 'logreg':
y_pred_out = sigmoid(vec_w_transpose_x, predict=predict)
# Both prediction and evaluation
if get_acc:
cls_acc = get_accuracy(y_pred_out, Y_input_set, num_datapoints)
return cls_acc
# Only prediction
return y_pred_out
# Perceptron training algorithm
def train(X_train, Y_train, weights, learning_rate=1, total_epochs=100):
"""Training method for Perceptron.
Parameters
-----------
X_train: ndarray (num_examples(rows) vs num_features(columns))
Input dataset which perceptron will use to learn optimal weights
Y_train: ndarray (num_examples(rows) vs class_labels(columns))
Class labels for input data
weights: ndarray (num_features vs n_output)
Weights used to train the network and predict on test set
learning_rate: int
Learning rate use to learn and update weights
total_epochs: int
Max number of epochs to train the perceptron model
"""
n_samples, _ = np.shape(X_train)
history_weights = []
epoch = 1
# Number of missclassified points we would like to see in the train set.
# While training, its value will change every epoch. If m==0, our training
# error will be zero.
m = 1
# If the most recent weights gave 0 misclassifications, break the loop.
# Else continue until total_epochs is completed.
while m != 0 and epoch <= total_epochs:
m = 0
# Compute weighted inputs and predict class labels on training set.
weights_transpose_x = np.dot(X_train, weights)
weights_transpose_x = signum(weights_transpose_x)
y_train_out = np.multiply(Y_train, weights_transpose_x)
epoch += 1
# Collecting misclassified indexes and count them
y_miscls_idxs = np.argwhere(y_train_out <= 0)[:, 0]
y_miscls_idxs = np.unique(y_miscls_idxs)
m = len(y_miscls_idxs)
# Calculate gradients and update weights
dweights = np.dot((X_train[y_miscls_idxs]).T, Y_train[y_miscls_idxs])
weights += (learning_rate/n_samples) * dweights
weights = np.round(weights, decimals=4)
# Append weights to visualize decision boundary later
history_weights.append(weights)
if m == 0 and epoch <= total_epochs:
print("Training has stabilized with all points classified: ", epoch)
else:
print(f'Training completed at {epoch-1} epochs. {m} misclassified points remain.')
return history_weights
```
My code
```
weights_arr = np.zeros((X_train.shape[1], Y_train.shape[1]))
for i in range(Y_train.shape[1]):
weights = np.zeros((X_train.shape[1], 1))
weights_arr[:, i:i+1] = train(X_train, Y_train[:, i].reshape((-1,1)), weights, 1, 10000)[-1].copy()
def accuracy(X, Y, W):
pred = X @ W
Class_value = np.max(pred, axis=1, keepdims=True)
pred = (pred == Class_value )
class1 = np.where(Y == 1, True, False)
match = pred[class1]
acc = np.mean(match) * 100
return acc
train_acc = accuracy(X_train, Y_train, weights_arr)
print("Train accuracy: ",train_acc)
val_acc = accuracy(X_val, Y_val, weights_arr)
print("Validation accuracy: ", val_acc)
test_acc = accuracy(X_test, Y_test, weights_arr)
print("Test accuracy: ", test_acc)
```
<a href="https://colab.research.google.com/github/kentokura/python_medical_images/blob/master/chapter3/chap3_10.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import matplotlib.pyplot as plt
import numpy as np
import cv2
import pandas as pd
from google.colab.patches import cv2_imshow
from google.colab import drive
from IPython.display import Image, display_png
drive.mount('/content/drive')
```
## 3.10 Images and frequency (the 2D Fourier transform)
Listing 3.10.1
```
import plotly.express as px
pix = cv2.imread('/content/drive/My Drive/DATASET/dataset/ChestXray256_uint8.png',
cv2.IMREAD_GRAYSCALE)
pix_line = cv2.rectangle(np.copy(pix), (0, 128-1), (256, 128-1), (0, 255, 0), 3)
fig = px.imshow(pix_line, color_continuous_scale='gray') # draw with plotly
fig.show()
c_row = pix[128-1, :]
data = {'intensity': c_row,
'col': np.linspace(0, 255, 256)
}
df = pd.DataFrame(data=data)
print(df.head())
fig = px.line(df, x='col', y='intensity', title='pixel intensities at row128')
fig.show()
```
Listing 3.10.2

Note: `idct` is the inverse transform; it's not obvious which direction should be used here.
```
from scipy.fft import idct
# Create an image with stripes
a=np.zeros((256,256)) # any square size works
a[0,10]=1
base=idct(idct(a,axis=0),axis=1) # build a periodic 2D array via the inverse discrete cosine transform
plt.imshow(base) # idct -> inverse discrete cosine transform
plt.show()
# Take the middle row and plot it as a line graph
row128=base[128-1,:]
data={'intensity':row128,"col":np.linspace(0,255,256)}
df=pd.DataFrame(data=data)
fig=px.line(df,x="col",y="intensity",title='pixel intensities at row128')
fig.show()
```
Listing 3.10.3
```
from numpy.fft import fft
# Use the chest-image row data from Listing 3.10.1
x=c_row
# Perform the Fourier transform
freq=fft(x)
# Take the absolute value of the result (the FFT returns complex numbers)
freq_abs=np.abs(freq) # fft result
'''
Plot the result and confirm that it is left-right symmetric.
Everything beyond the Nyquist frequency position (N/2) is a mirror image.
As a special step, the first and second halves of the array are swapped
to make the mirror image easier to see.
'''
swap=np.zeros(256)
swap[:128]=freq_abs[128:256]
swap[128]=freq_abs[0]
swap[129:]=freq_abs[1:128]
# Data for the graph
data={'freq':swap,"size":np.linspace(0,255,256)}
df=pd.DataFrame(data=data)
# Plot the symmetry
fig=px.line(df,x="size",y='freq',title="fft result with the mirror image at row128")
fig.show()
'''
For the actual calculation, the mirror-image part beyond the Nyquist
frequency position (N/2) is unnecessary.
Plot only up to the Nyquist position (N/2).
In this example N=256, so that position is 128.
'''
data={"freq":freq_abs[0:128],'size':np.linspace(0,128,128)}
df=pd.DataFrame(data=data)
fig=px.line(df,x="size",y='freq',title="fft result up to the Nyquist frequency at row128")
fig.show()
```
Listing 3.10.4: Viewing the 1D Fourier transform result as a 3D graph

Note: the Nyquist frequency sits at half the number of samples; everything beyond N/2 is the mirror image.
```
# Use the pixel array from row 128 of the chest image
x = c_row
# Take values up to the Nyquist frequency; everything beyond it is the mirror image and unnecessary
freq = fft(x)[0:128]
freq_max = np.max(np.abs(freq))
N = len(freq) # number of data points
n = np.arange(N) # array of N values in steps of 1
# Convert each frequency into a sine-wave signal
wave_arr = []
name_arr = []
location_arr = []
for i in range(N):
    norm = np.abs(freq[i] / freq_max) # normalization
freq_sin_wave = np.sin(np.abs(freq[i]) * 2 * np.pi * (n/N)) * norm
name = ['freq' + str(i)] * N
    if i < 16: # visualize only the first 16 low-frequency components
wave_arr.extend(freq_sin_wave)
name_arr.extend(name)
location_arr.extend(n)
# Graph data
data = {'amplitude' : wave_arr,
'sampling location': location_arr,
'freq': name_arr,
}
df = pd.DataFrame(data=data)
fig = px.line_3d(df, x = "sampling location", y = "freq",
z = "amplitude", color = "freq")
fig.show()
```
### 3.10.3 The 2D Fourier transform
Listing 3.10.5: 2D Fourier transform
```
import cv2
import numpy as np
from numpy.fft import *
import matplotlib.pyplot as plt
H = 16
W = 16
chest = cv2.imread(
'/content/drive/My Drive/DATASET/dataset/ChestXray256_uint8.png',
cv2.IMREAD_GRAYSCALE).astype(float)
# Downsample
chest = cv2.resize(chest, (W, H), cv2.INTER_LANCZOS4)
plt.subplot(131)
plt.title("original")
plt.tight_layout()
plt.imshow(chest, cmap="gray")
spectrum = np.fft.fft2(chest)
spectrum = np.fft.fftshift(spectrum) # swap
real = np.real(spectrum)
imag = np.imag(spectrum)
plt.subplot(132)
plt.tight_layout()
plt.imshow(real) #, cmap="gray")
plt.title("real")
plt.subplot(133)
plt.tight_layout()
plt.imshow(imag) #, cmap="gray")
plt.title("imaginary")
plt.show()
```
Listing 3.10.6
```
# 2D inverse Fourier transform
invert = ifft2(spectrum)
# Display the inverse-transform result
plt.imshow(np.abs(invert), cmap='gray') # take the absolute value
plt.title("i-2D-FFT")
plt.show()
# Visualize the process of reconstructing the image from basis functions
base_arr = [] # list of basis functions
ichest_arr = [] # reconstruction-process images
processImg = np.zeros((H, W), dtype='complex128')
# Compute pixel by pixel
for h in range(H):
for w in range(W):
a = np.zeros((H,W))
a[h, w] = 1
        # Create the basis function for the corresponding pixel position
base = ifft(ifft(a, axis=0), axis=1)
        # Multiply the 2D-FFT result by the basis function and accumulate
processImg += spectrum[h, w] * base
base_arr.append(base)
ichest_arr.append(np.copy(processImg))
_, axs_base = plt.subplots(H, W, figsize=(12, 12))
axs_base = axs_base.flatten()
for img, ax in zip(base_arr, axs_base):
ax.imshow(np.abs(img))
plt.show()
# show invert to original
_, axs_ichest = plt.subplots(H, W, figsize=(12,12))
axs_ichest = axs_ichest.flatten()
for img, ax in zip(ichest_arr, axs_ichest):
ax.imshow(np.abs(img), cmap='gray')
plt.show()
```
# Descriptive stats for datasets and network localization
-------------------
Author: Brin Rosenthal (sbrosenthal@ucsd.edu)
-------------------
Notebook to calculate localization of dDNV sets, and to measure the number of patients with a dDNV in established disease genes, with a dDNV in a non-disease gene (non-recurrent), or with no dDNVs.
<a id='import'></a>
## Import packages
```
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import networkx as nx
import pandas as pd
import random
from IPython.display import display
# latex rendering of text in graphs
import matplotlib as mpl
mpl.rc('text', usetex = False)
mpl.rc('font', family = 'serif')
from matplotlib import rcParams
rcParams['font.family'] = 'sans-serif'
rcParams['font.sans-serif'] = ['Arial']
sns.set_style('white')
sns.set_style("ticks", {"xtick.major.size": 15, "ytick.major.size": 15})
plt.rcParams['svg.fonttype'] = 'none'
import sys
%matplotlib inline
# need to have networkx version 1.11
nx.__version__
# import network localization methods
sys.path.append('netprop_zscore_code/')
import network_localization
```
[TOC](#toc)
<a id='importData'></a>
# Load the data- including ASD and CHD high confidence genes, and DNV sets
ASD genes come from table 4 in http://www.cell.com/neuron/fulltext/S0896-6273(15)00773-4
```
# ------- ASD high confidence genes (established disease genes) -------
HC_genes_temp = pd.read_csv('data/HC_genes/ASD_HC.tsv',sep='\t',index_col='Unnamed: 0')
ASD_HC = [str(g[1:-1]).strip("'") for g in HC_genes_temp['seed_genes'].tolist()[0][1:-1].split(', ')]
len(ASD_HC)
# ------- High confidence CHD genes (established disease genes) -------
# Load congenital heart defect recurring DNVs (from https://www.nature.com/articles/ng.3970)
HC_genes_temp = pd.read_csv('data/HC_genes/CHD_HC.tsv',sep='\t',index_col='Unnamed: 0')
CHD_HC = [str(g[1:-1]).strip("'") for g in HC_genes_temp['seed_genes'].tolist()[0][1:-1].split(', ')]
len(CHD_HC)
# Load all NDD DNVs (from supp materials of CHD paper; from https://www.nature.com/articles/ng.3970)
NDD_df = pd.read_excel('data/SSC/homsy_database_S08.xlsx',skiprows=1)
ASD_DNV = NDD_df[NDD_df['Study']=='SSC'] # simons simplex data
print('\nnumber total ASD damaging DNVs')
print(len(ASD_DNV))
ASD_DNV_VC = ASD_DNV['Gene'].value_counts() # count recurrences
ASD_DNV = list(np.unique(ASD_DNV['Gene'].tolist()))
print('\nnumber unique ASD damaging DNVs:')
print(len(ASD_DNV))
print('\nnumber recurrent ASD damaging DNVs')
print(sum(ASD_DNV_VC>1)) # number recurrent ASD
# Load the new cohort (Supp table 9 from https://www.nature.com/articles/ng.3970)
CHD_new_df = pd.read_excel('data/CHD_databases_2017_paper/ng.3970-S3.xlsx',sheetname='S9',skiprows=1)
CHD_new_df.index=CHD_new_df['Blinded ID']
# apply the same filtering as for old data
CHD_new_df = CHD_new_df[(CHD_new_df['Variant_Class']!='syn')&(CHD_new_df['Variant_Class']!='mis')]
print('number total CHD damaging DNVs:')
print(len(CHD_new_df))
DNV_noNDD_full = CHD_new_df[CHD_new_df['NDD']=='No']
DNV_noNDD_full = DNV_noNDD_full['Gene'].unique().tolist()
print('\nnumber damaging CHD DNVs without NDD:')
print(len(DNV_noNDD_full))
DNV_withNDD_full = CHD_new_df[CHD_new_df['NDD']=='Yes']
DNV_withNDD_full = DNV_withNDD_full['Gene'].unique().tolist()
print('\nnumber damaging CHD DNVs with NDD:')
print(len(DNV_withNDD_full))
CHD_DNV = CHD_new_df['Gene'].unique().tolist()
CHD_DNV_VC = CHD_new_df['Gene'].value_counts()
print('\nnumber unique CHD damaging DNVs:')
print(len(CHD_DNV))
print('\nnumber recurrent CHD damaging DNVs ')
print(sum(CHD_DNV_VC>1)) # number recurrent CHD
# Load control DNVs (from Database S3)
ctrl_DNV_df = pd.read_excel('data/SSC/homsy_database_S03.xlsx',skiprows=1)
# exclude synonymous and non-damaging missense
ctrl_DNV = ctrl_DNV_df[(ctrl_DNV_df['Variant Class']!='Synonymous')&(ctrl_DNV_df['Variant Class']!='Missense')]
ctrl_DNV_VC = ctrl_DNV['Gene'].value_counts()
print('number damaging control DNVs:')
print(len(ctrl_DNV))
#ctrl_DNV = ctrl_DNV_df
ctrl_DNV = list(np.unique(ctrl_DNV['Gene'].tolist()))
print('\nnumber unique damaging control DNVs:')
print(len(ctrl_DNV))
print('\nnumber recurrent damaging control DNVs:')
print(sum(ctrl_DNV_VC>1))
```
## Define gene sets for use in rest of analysis
```
# pick out which gene sets to investigate
genes_ASD = ASD_HC
print(len(genes_ASD))
genes_CHD = CHD_HC
print(len(genes_CHD))
```
[TOC](#toc)
<a id='plotHCgenes'></a>
# Make figure 1: summary of HC genes in CHD and ASD (showing low recurrence)
```
ASD_df = NDD_df[(NDD_df['Study']=='SSC')]
ASD_df.head()
CHD_DNV_df = CHD_new_df
CHD_DNV_df.head()
```
# Bar chart- how many DNVs within established disease genes compared to outside?
```
print(len(ASD_df))
display(ASD_df['Class (2)'].value_counts())
ASD_df.head()
CHD_new_df.head()
# display(CHD_DNV_for_merge['Class'].value_counts())
# print(len(CHD_DNV_for_merge))
# display(CHD_DNV_for_merge['Gene'].value_counts().head())
# CHD_DNV_for_merge.head()
num_CHD_DNV_in_HC = sum(CHD_new_df['Gene'].isin(ASD_HC+CHD_HC))
print(num_CHD_DNV_in_HC)
num_CHD_DNV_not_HC = sum(~CHD_new_df['Gene'].isin(ASD_HC+CHD_HC))
print(num_CHD_DNV_not_HC)
num_CHD_no_DNV = 2645-(num_CHD_DNV_in_HC+num_CHD_DNV_not_HC)
print(num_CHD_no_DNV)
num_ASD_DNV_in_HC = sum(ASD_df['Gene'].isin(ASD_HC+CHD_HC))
print(num_ASD_DNV_in_HC)
num_ASD_DNV_not_HC = sum(~ASD_df['Gene'].isin(ASD_HC+CHD_HC))
print(num_ASD_DNV_not_HC)
num_ASD_no_DNV = 2759-(num_ASD_DNV_in_HC+num_ASD_DNV_not_HC)
print(num_ASD_no_DNV)
num_CHD_DNV_in_HC+num_CHD_DNV_not_HC
# add a bar showing the number of patients without damaging DNVs
sns.set_style('whitegrid',{'axes.grid':False})
sns.set_style("ticks", {"xtick.major.size": 15, "ytick.major.size": 15})
plt.rcParams['svg.fonttype'] = 'none'
plt.figure(figsize=(3,3))
plt.bar([-.15,.85,1.85],np.divide([num_ASD_DNV_in_HC,num_ASD_DNV_not_HC,num_ASD_no_DNV],2759.0),width=.3,
color='#9FEE9D',alpha=.7,edgecolor='k',
label='ASD')
plt.bar([.15,1.15,2.15],np.divide([num_CHD_DNV_in_HC,num_CHD_DNV_not_HC,num_CHD_no_DNV],2645.0),width=.3,
color='#E69EE6',alpha=.7,edgecolor='k',
label='CHD')
plt.ylim([0,1])
plt.xticks([0,1,2],['dDNVs in \ndisease genes', 'dDNVs outside \ndisease genes','no dDNVs'],fontsize=14,rotation='vertical')
plt.ylabel('fraction cohort',fontsize=16)
plt.legend(loc='upper left',fontsize=14)
# plt.savefig('../../manuscript/figures_1911/Figure1/Figure1_final assets/DNVs_in_out_disease_genes.png',dpi=300,bbox_inches='tight')
# plt.savefig('../../manuscript/figures_1911/Figure1/Figure1_final assets/DNVs_in_out_disease_genes.svg',dpi=300,bbox_inches='tight')
```
[TOC](#toc)
<a id='localization'></a>
# Select focal interactome, calculate network localization of DNVs
PCnet downloaded from ndex and parsed to networkx format
https://ndexbio.org/#/network/f93f402c-86d4-11e7-a10d-0ac135e8bacf
```
# load the pre-computed interactome
# PCnet downloaded from ndex and parsed to networkx format
# https://ndexbio.org/#/network/f93f402c-86d4-11e7-a10d-0ac135e8bacf
Gint = nx.read_gpickle('/Users/brinrosenthal/Documents/CCBB_tickets_data/PCnet/G_PCnet.gpickle')
int_name = 'PCnet'
print(len(Gint.nodes()))
print(len(Gint.edges()))
```
### How much overlap is there between disease genes and selected interactome?
```
print(len(np.intersect1d(list(ASD_HC),Gint.nodes())))
print(len(ASD_HC))
print(len(np.intersect1d(list(CHD_HC),Gint.nodes())))
print(len(CHD_HC))
```
[TOC](#toc)
<a id='disease_localization'></a>
# How localized are the individual diseases?
Two localization methods implemented here:
1. Largest connected component (following Menche et al)
2. Number shared edges in gene set (similar to method on STRING-DB)
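A rough sketch of method 1 (largest connected component). This simplified version draws the random comparison sets uniformly rather than degree-matched as in Menche et al.; the actual analysis below uses `network_localization.localization`:

```python
import random
import networkx as nx

def lcc_size(G, genes):
    """Size of the largest connected component induced by a gene set."""
    sub = G.subgraph([g for g in genes if g in G])
    if sub.number_of_nodes() == 0:
        return 0
    return max(len(c) for c in nx.connected_components(sub))

def lcc_zscore(G, genes, num_reps=100, seed=0):
    """Z-score of the observed LCC size vs. uniformly sampled random gene sets
    (simplified: no degree matching)."""
    random.seed(seed)
    observed = lcc_size(G, genes)
    nodes = list(G.nodes())
    n = len([g for g in genes if g in G])
    rand_sizes = [lcc_size(G, random.sample(nodes, n)) for _ in range(num_reps)]
    mu = sum(rand_sizes) / num_reps
    sd = (sum((s - mu) ** 2 for s in rand_sizes) / num_reps) ** 0.5
    return observed, (observed - mu) / sd if sd > 0 else float('nan')
```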
## ASD localization
```
# ----- distributions for non-sampled case -----
# set numreps to 5000 for full run
num_reps=100
ASD_numedges_list, ASD_numedges_rand, ASD_LCC_list, ASD_LCC_size_rand = network_localization.localization(Gint,
focal_genes = ASD_DNV,
num_reps = num_reps,
sample_frac = 1.0,
method = 'both',
plot = False,print_counter=True)
# plot distributions for non-sampled case
sns.set_style('white')
sns.set_style("ticks", {"xtick.major.size": 15, "ytick.major.size": 15})
plt.figure(figsize=(1.6,1.54))
plt.vlines(np.mean(ASD_numedges_list),ymin=0,ymax=0.004,color='r',lw=.5,label='ASD dDNVs')
sns.kdeplot(ASD_numedges_rand,color='k',lw=.5,alpha=.5,shade=True,label='random')
plt.legend(loc='upper left',fontsize=8)
plt.ylabel('density',fontsize=8)
plt.xlabel('number shared edges',fontsize=8)
plt.xticks(fontsize=8)
plt.yticks(fontsize=8)
plt.ylim([0,.006])
# print the z-score and fdr
ASD_NE_z = (np.mean(ASD_numedges_list)-np.mean(ASD_numedges_rand))/np.std(ASD_numedges_rand)
from scipy.stats import norm
ptemp = norm.sf(abs(ASD_NE_z))
plt.title('permutation p = '+("%.2g" % ptemp),fontsize=8)
# plt.savefig('../../manuscript/figures_1911/Supplement/ASD_DNV_numedges_'+int_name+'_no_bootstrap'+str(num_reps)+'.png',dpi=300,bbox_inches='tight')
# plt.savefig('../../manuscript/figures_1911/Supplement/ASD_DNV_numedges_'+int_name+'_no_bootstrap'+str(num_reps)+'.svg',dpi=300,bbox_inches='tight')
```
## CHD localization
```
# ---- non-sampled case ----
# set numreps to 5000 for full run
num_reps=100
CHD_numedges_list, CHD_numedges_rand, CHD_LCC_list, CHD_LCC_size_rand = network_localization.localization(Gint,
focal_genes = CHD_DNV,
num_reps = num_reps,
sample_frac = 1.0,
method = 'both',
plot = False,print_counter=True)
# plot distributions for non-sampled case
sns.set_style('white')
sns.set_style("ticks", {"xtick.major.size": 15, "ytick.major.size": 15})
plt.figure(figsize=(1.6,1.54))
plt.vlines(np.mean(CHD_numedges_list),ymin=0,ymax=0.004,color='r',lw=.5,label='CHD dDNVs')
sns.kdeplot(CHD_numedges_rand,color='k',lw=.5,alpha=.5,shade=True,label='random')
plt.legend(loc='upper left',fontsize=8)
plt.ylabel('density',fontsize=8)
plt.xlabel('number shared edges',fontsize=8)
plt.xticks(fontsize=8)
plt.yticks(fontsize=8)
plt.ylim([0,.006])
# compute the z-score and one-sided permutation p-value
CHD_NE_z = (np.mean(CHD_numedges_list)-np.mean(CHD_numedges_rand))/np.std(CHD_numedges_rand)
from scipy.stats import norm
ptemp = norm.sf(abs(CHD_NE_z))
plt.title('permutation p = '+("%.2g" % ptemp),fontsize=8)
# plt.savefig('../../manuscript/figures_1911/Supplement/CHD_DNV_numedges_'+int_name+'_no_bootstrap'+str(num_reps)+'.png',dpi=300,bbox_inches='tight')
# plt.savefig('../../manuscript/figures_1911/Supplement/CHD_DNV_numedges_'+int_name+'_no_bootstrap'+str(num_reps)+'.svg',dpi=300,bbox_inches='tight')
```
## Control cohort localization
```
len(ctrl_DNV)
# set numreps to 5000 for full run
num_reps=100
CTRL_numedges_list, CTRL_numedges_rand, CTRL_LCC_list, CTRL_LCC_size_rand = network_localization.localization(Gint,
focal_genes = ctrl_DNV,
num_reps = num_reps,
sample_frac = 1.0,
method = 'both',
plot = False,print_counter=True)
# plot distributions for non-sampled case
sns.set_style('white')
sns.set_style("ticks", {"xtick.major.size": 15, "ytick.major.size": 15})
plt.figure(figsize=(1.6,1.54))
plt.vlines(np.mean(CTRL_numedges_list),ymin=0,ymax=0.02,color='r',lw=.5,label='CTRL dDNVs')
sns.kdeplot(CTRL_numedges_rand,color='k',lw=.5,alpha=.5,shade=True,label='random')
plt.legend(loc='upper left',fontsize=8)
plt.ylabel('density',fontsize=8)
plt.xlabel('number shared edges',fontsize=8)
plt.xticks(fontsize=8)
plt.yticks(fontsize=8)
plt.ylim([0,.04])
# compute the z-score and one-sided permutation p-value
CTRL_NE_z = (np.mean(CTRL_numedges_list)-np.mean(CTRL_numedges_rand))/np.std(CTRL_numedges_rand)
from scipy.stats import norm
ptemp = norm.sf(abs(CTRL_NE_z))
plt.title('permutation p = '+("%.2g" % ptemp),fontsize=8)
# plt.savefig('../../manuscript/figures_1911/Supplement/CTRL_DNV_numedges_'+int_name+'_no_bootstrap'+str(num_reps)+'.png',dpi=300,bbox_inches='tight')
# plt.savefig('../../manuscript/figures_1911/Supplement/CTRL_DNV_numedges_'+int_name+'_no_bootstrap'+str(num_reps)+'.svg',dpi=300,bbox_inches='tight')
```
[TOC](#toc)
# Open Data Workshop
**Level: Intermediate**
This Jupyter notebook teaches basic data-science concepts using the Datos Abiertos open-data platform of Cali, Colombia. In this notebook you will learn how to:
- Collect data from an API
- Clean data
- Visualize and plot data
- Build a polished data visual
<hr/>
# 1. Get the Data
- We want this dataset: [Homicidios comunes en el Municipio de Santiago de Cali](http://datos.cali.gov.co/dataset/homicidios-comunes-en-el-municipio-de-santiago-de-cali-segun-comuna-del-hecho-2001-2017/resource/670f2cd8-3b5f-4657-a136-577afefc38be)
- We can use the **API** to collect the data. The *API endpoint* is:
http://datos.cali.gov.co/en/api/3/action/datastore_search?resource_id=670f2cd8-3b5f-4657-a136-577afefc38be
```
# libraries for working with APIs
import requests
import json
# API endpoint (from the Datos Abiertos platform)
url = 'http://datos.cali.gov.co/en/api/3/action/datastore_search?resource_id=670f2cd8-3b5f-4657-a136-577afefc38be'
```
### Use the API and get a JSON response
(hint: look at the <a href='https://realpython.com/python-requests'>requests</a> documentation)
```
# use the API and get a response
response = requests.request("GET", url)
# convert the response into a JSON object
# inspect the JSON object
```
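One possible solution for this exercise (a sketch: the `extract_records` and `fetch_records` helper names are ours, and the endpoint must be reachable for `fetch_records` to return anything):

```python
import requests

URL = ('http://datos.cali.gov.co/en/api/3/action/datastore_search'
       '?resource_id=670f2cd8-3b5f-4657-a136-577afefc38be')

def extract_records(payload):
    # CKAN-style responses nest the rows under result -> records
    return payload['result']['records']

def fetch_records(url=URL):
    # call the API and decode the JSON body
    response = requests.get(url)
    response.raise_for_status()
    return extract_records(response.json())
```

Calling `fetch_records()` should return a list of dictionaries, one per row of the dataset.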
<hr/>
# 2. Clean the Data
```
import pandas as pd
```
### Create a DataFrame from the JSON object and clean it
- **Use the <a href='https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html'>pandas DataFrame library</a>**
- In the end, the DataFrame should look like this:
```
# do not use this CSV; it is only to show how your data should look once you finish cleaning it
pd.read_csv('datosLimpios.csv').head()
# create a 'DataFrame' object from the JSON
# you can use json['result']['records'] to see the information we want
```
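A sketch of the DataFrame step (it assumes the decoded response from the previous exercise is stored in a variable such as `datos_json`; the `records_a_dataframe` helper name is ours):

```python
import pandas as pd

def records_a_dataframe(payload):
    # the rows we want live under result -> records
    return pd.DataFrame(payload['result']['records'])
```

Then `datos = records_a_dataframe(datos_json)` followed by `datos.head()` should resemble the CSV preview above.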
### To clean the data, you need to:
- set the index to 'comuna'
- drop the 'TOTAL' and 'sin especificar' rows
- rename the columns to be the year
- make the whole DataFrame numeric
- replace missing data with -1
- sort the DataFrame
```
# CLEANING THE DATA
# Finish the function:
def limpiarLosDatos(datos):
    # set the index to the comuna number
    datos.set_index('No', inplace=True)
    # drop unnecessary rows
    datosLimpios = datos.loc[["1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", "17", "18", "19", "20", "21", "22"]]
    # drop unnecessary columns
    datosLimpios.drop("_id", axis=1, inplace=True)
    # rename the columns to be the year
    # make the column values numeric (you can use pd.to_numeric)
    # replace missing data with -1
    # sort the DataFrame by year
    return datosLimpios

# use the function to clean the data
# save the result in a variable
```
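One way to complete the remaining steps of `limpiarLosDatos` (a sketch under the assumption that each column label ends in its four-digit year; `finish_cleaning` is our own helper name):

```python
import pandas as pd

def finish_cleaning(df):
    # rename columns so only the four-digit year remains (assumed label format)
    df = df.rename(columns=lambda c: str(c)[-4:])
    # make every value numeric; unparseable entries become NaN
    df = df.apply(pd.to_numeric, errors='coerce')
    # replace missing data with -1
    df = df.fillna(-1)
    # sort the year columns in ascending order
    return df.sort_index(axis=1)
```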
<hr/>
# 3. Visualize the Data
*(hint: use the plot() function of pandas DataFrames)*
```
# with this, we can plot inside the Jupyter notebook
%matplotlib inline
import matplotlib.pyplot as plt
# first, plot all the data (use .T to transpose the data so it plots correctly)
# title and axis labels
plt.title('Homicides per comuna in Cali, 2001-2016')
plt.xlabel('year')
plt.ylabel('homicides')
# comuna 13 has far more homicides than the others, so let's plot it on its own:
# again, use .T to transpose the data so it plots correctly
```
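The plotting calls themselves are left as an exercise above; here is a sketch of what they might look like (the `'Agg'` backend line is only needed when running outside Jupyter, and `plot_homicides` is our own helper name):

```python
import pandas as pd
import matplotlib
matplotlib.use('Agg')  # headless backend; in Jupyter, %matplotlib inline is enough
import matplotlib.pyplot as plt

def plot_homicides(df, title='Homicides per comuna in Cali, 2001-2016'):
    # rows are comunas and columns are years, so transpose before plotting
    ax = df.T.plot(figsize=(10, 6), legend=False)
    ax.set_title(title)
    ax.set_xlabel('year')
    ax.set_ylabel('homicides')
    return ax
```

For a single comuna, `df.loc[[13]].T.plot()` (or `df.loc[['13']]` if the index is still strings) works the same way.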
<hr/>
# 4. Map the Data
For this dataset it is good to visualize homicides over time, but it is even more interesting to see where they happen on a map. We will now use the [geopandas](http://geopandas.org/) library to build a map of homicides in Cali.
```
import geopandas as gpd
caliShapeFileUrl = 'http://ws-idesc.cali.gov.co:8081/geoserver/idesc/ows?service=WFS&version=1.0.0&request=GetFeature&typeName=idesc:mc_comunas&maxFeatures=50&outputFormat=SHAPE-ZIP'
```
**Use the geopandas [read_file()](http://geopandas.org/io.html) function to get a GeoDataFrame with the Cali shapefile:**
```
caliShape = gpd.read_file(caliShapeFileUrl)
caliShape.head()
```
**Use the geopandas [plot()](http://geopandas.org/gallery/plotting_with_geoplot.html) function to see Cali:**
### Next
*Great!* We now have a DataFrame with homicide counts per comuna per year in Cali, and a DataFrame with the shapes of Cali's comunas on a map. Let's join them to build an interesting visual!
We will use the pandas [join()](https://pandas.pydata.org/pandas-docs/version/0.24.2/reference/api/pandas.DataFrame.join.html) function to do this.
**First: restructure the homicides DataFrame to be compatible with the map data:**
- Rename your earlier DataFrame to datosLimpios; you can use the following code to help you
```
# yourOldName = datosLimpios
datosLimpios.reset_index(inplace=True)
datosLimpios = datosLimpios.apply(pd.to_numeric)
datosLimpios.set_index('No', inplace=True)
datosLimpios.index.names=['comuna']
datosLimpios.head()
```
**Second: restructure the Cali shapefile DataFrame to have 'comuna' as the index**
```
# set the index to the comuna number
caliShape.set_index('comuna', inplace=True)
```
**Third: use the pandas [join()](https://pandas.pydata.org/pandas-docs/version/0.24.2/reference/api/pandas.DataFrame.join.html) function to merge the DataFrames**
```
# This function makes the map look nice
def personalizaLaMapa(vmin, vmax):
    # remove the axis
    ax.axis('off')
    # title
    ax.set_title('Number of homicides in 2001 by comuna in Cali', \
                 fontdict={'fontsize': '25',
                           'fontweight': '3'})
    # data source
    ax.annotate('Source: Datos Abiertos, Alcaldía de Cali',
                xy=(0.1, .08), xycoords='figure fraction',
                horizontalalignment='left', verticalalignment='top',
                fontsize=10, color='#555555')
    # create a colorbar as the map legend
    sm = plt.cm.ScalarMappable(cmap='Blues', norm=plt.Normalize(vmin=vmin, vmax=vmax))
    sm._A = []
    cbar = fig.colorbar(sm)
```
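The join itself might look like this (a sketch; it assumes both frames are already indexed by comuna number, as arranged in the two steps above, and `join_on_comuna` is our own helper name):

```python
import pandas as pd

def join_on_comuna(shapes_df, homicides_df):
    # keep the geometry columns and attach one column per year of homicide counts
    return shapes_df.join(homicides_df)
```

With the joined GeoDataFrame, a single year can then be drawn with something like `joined.plot(column='2001', cmap='Blues', ax=ax)` (use whatever label your year columns actually carry).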
**Try the geopandas [plot()](http://geopandas.org/gallery/plotting_with_geoplot.html) function to draw everything on the map of Cali:**
- first, for a single year only
- use the 'personalizaLaMapa' function to make the map look nice
- then, save the figure as a PNG image with the [savefig()](https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.savefig.html) function
```
# the year we want to see
year = 2001
vmin, vmax = 0, 250
# create a matplotlib figure
fig, ax = plt.subplots(1, figsize=(10, 10))
# hint: finish this --> plot(column= , cmap='Blues', ax= )
# customization
personalizaLaMapa(vmin, vmax)
# this will save the figure as a high-resolution PNG; you can also save it as SVG
# use the savefig('name.png', dpi=300) function
```
# Good Job!
Very cool! We used data from the [Datos Abiertos, Cali](http://datos.cali.gov.co/) platform and built our own visual! What other cool datasets could we use?

<hr/>
## The End.
<hr/>
# Image Classification
In this project we build a binary image classifier. The dataset consists of two classes, cats and dogs, with 4,000 training images and 1,000 test images per class. The model can also be tested on separately supplied images.
```
import pandas as pd
import numpy as np
from keras.models import Sequential
from keras.layers import Conv2D
from keras.layers import MaxPooling2D
from keras.layers import Flatten
from keras.layers import Dense
```
# Data Preparation
We use the ImageDataGenerator tool from the keras library, which converts images into batches of tensors. The generator also lets us apply transformations to the images: we rescale them and add random horizontal flips, shear, and zoom.
```
from keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(rescale = 1./255,
shear_range = 0.2,
zoom_range = 0.2,
horizontal_flip = True)
test_datagen = ImageDataGenerator(rescale = 1./255)
training_set = train_datagen.flow_from_directory('CNN_Data/training_set',
target_size = (64, 64),
batch_size = 32,
class_mode = 'binary')
test_set = test_datagen.flow_from_directory('CNN_Data/test_set',
target_size = (64, 64),
batch_size = 32,
class_mode = 'binary')
```
# Model
We train a convolutional neural network. Trying several architectures, adding and removing layers, we measured the quality on the test set.
The network that gave the best quality consists of seven layers:
1. A convolutional layer with a (3, 3) kernel.
2. A MaxPooling layer.
3. A convolutional layer with a (3, 3) kernel.
4. A MaxPooling layer.
5. A flattening layer.
6. A dense layer with relu activation.
7. A dense layer with sigmoid activation.
```
classifier = Sequential()
classifier.add(Conv2D(32, (3, 3), input_shape = (64, 64, 3), activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2, 2)))
classifier.add(Conv2D(32, (3, 3), activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2, 2)))
classifier.add(Flatten())
classifier.add(Dense(units = 128, activation = 'relu'))
classifier.add(Dense(units = 1, activation = 'sigmoid'))
classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
```
We train the model for 10 epochs. With more epochs the quality keeps growing, but we did not have enough compute.
```
classifier.fit_generator(training_set,
                         steps_per_epoch = 250,  # steps are batches: 8000 training images / batch size 32
                         epochs = 10,
                         validation_data = test_set,
                         validation_steps = 63)  # 2000 test images / batch size 32
```
Let's evaluate the quality on the test set.
```
classifier.evaluate_generator(test_set, steps=63)  # 2000 test images / batch size 32
```
# Predictions
We can also run a prediction on any other image. Let's test the model on an image of a "cat that looks like a dog" and on a corgi.
```
from keras.preprocessing import image
from IPython.display import display
def get_prediction(url):
    raw_test_image = image.load_img(url, target_size=(64, 64))
    test_image = image.img_to_array(raw_test_image)
    # rescale to match the 1./255 preprocessing used by the training generator
    test_image = np.expand_dims(test_image, axis=0) / 255.0
    result = classifier.predict(test_image)
    # the sigmoid output is a float in [0, 1]; threshold it instead of testing equality with 1
    if result[0][0] > 0.5:
        print('DOG')
    else:
        print('CAT')
    display(raw_test_image)
```
As you can see, the model handled these tricky cases brilliantly ;]
```
get_prediction('CNN_Data/for_prediction/cat_like_dog.jpg')
get_prediction('CNN_Data/for_prediction/corgi.jpg')
```
```
import time
from selenium import webdriver

driver = webdriver.Firefox()

class getValueFromLocations():
    def __init__(self, localizacao, near_or_in, estabelecimento):
        self.localizacao = localizacao
        self.near_or_in = near_or_in
        self.estabelecimento = estabelecimento

    def StartSearch(self):
        # driver = webdriver.Firefox()
        driver.get("https://www.google.pt/search?tbm=lcl&q={}+{}+{}".format(self.estabelecimento, self.near_or_in, self.localizacao))
        restos = []
        while True:
            time.sleep(1)
            rests = driver.find_elements_by_class_name("dbg0pd")
            print(len(rests))
            for i in rests:
                i.click()
                time.sleep(1)
                try:
                    telefone = driver.find_element_by_class_name("LrzXr.zdqRlf.kno-fv")
                except:
                    continue
                try:
                    nome = driver.find_element_by_class_name("SPZz6b")
                except:
                    continue
                try:
                    morada = driver.find_element_by_class_name('LrzXr')
                except:
                    continue
                restos.append([nome.text, telefone.text, morada.text])
                print(restos)
            # go to the next page of results, or stop when there is none
            try:
                btn = driver.find_element_by_css_selector('#pnnext > span:nth-child(2)')
                btn.click()
                time.sleep(3)
            except:
                break
        print('@a_')
        # return restos
        print(restos)

    def convertStreet_to_GPS(self):
        pass

# 'Shops', 'Places', 'DrugStore', 'Restaurants', 'Barber Shop', 'Bicycle Store', 'Car Dealer', 'Movie Theater', 'Museum', 'Music Store', 'Gas Station', 'High School'
x = getValueFromLocations('Torres Novas', 'in', 'Barber Shop').StartSearch()
print(x)
import time
driver = webdriver.Firefox()
def ExtractPlace():
    filter_locations = []
    while True:
        locations = driver.find_elements_by_class_name("dbg0pd")
        for i in locations:
            i.click()
            time.sleep(1)
            try:
                name = driver.find_element_by_class_name("SPZz6b")
                telephone = driver.find_element_by_class_name("LrzXr.zdqRlf.kno-fv")
                address = driver.find_element_by_class_name("LrzXr")
                rating = driver.find_element_by_css_selector(".Ob2kfd > div:nth-child(1) > span:nth-child(1)")
                filter_locations.append([name.text, telephone.text, address.text, rating.text])
            except:
                continue
        # go to the next page of results, or stop when there is none
        try:
            btn = driver.find_element_by_css_selector('#pnnext > span:nth-child(2)')
            btn.click()
            time.sleep(2)
        except:
            break
    return filter_locations

location = ('Torres Novas', 'in', 'Drug Store')
driver.get("https://www.google.pt/search?tbm=lcl&q={}+{}+{}&start=0".format(location[2], location[1], location[0]))
fl = ExtractPlace()
print(fl)
for l in fl:
    print(l)
    with open('local.drugstore', 'a', encoding='utf-8') as f:
        f.write('{}\n'.format(str(l)[1:-1]))
lf = ExtractPlace()
print(lf)
# coordinates
# click on "directions", wait, grab the link
# go back to the initial search page and increment i (i += 1)
```
# Part 9 - Intro to Encrypted Programs
Believe it or not, it is possible to compute on encrypted data. In other words, it is possible to run a program in which **all of the variables** are **encrypted**!
In this tutorial, we walk through the very basic tools of encrypted computation. In particular, we focus on one popular approach called Secure Multi-Party Computation. In this lesson, we will learn how to build an encrypted calculator that can perform calculations on encrypted numbers.
Authors:
- Andrew Trask - Twitter: [@iamtrask](https://twitter.com/iamtrask)
- Théo Ryffel - Github: [@LaRiffle](https://github.com/LaRiffle)
References:
- Morten Dahl - [Blog](https://mortendahl.github.io) - Twitter: [@mortendahlcs](https://twitter.com/mortendahlcs)
Translated using nbTranslate
Editor:
- Urvashi Raheja - Github: [@raheja](https://github.com/raheja)
# Step 1: Encryption Using Secure Multi-Party Computation
SMPC is, at first glance, a strange form of "encryption". Instead of using a public/private key to encrypt a variable, each value is split into multiple `shares`, each of which operates like a private key. Typically, these `shares` are distributed amongst 2 or more _owners_. Thus, in order to decrypt the variable, all owners must agree to allow the decryption. In essence, everyone has a private key.
### Encrypt()
So, let's say we wanted to "encrypt" a variable `x`; we could do so in the following way.
> Encryption doesn't use floats or real numbers, but takes place in a mathematical space called an [integer quotient ring](http://mathworld.wolfram.com/QuotientRing.html), which is basically the integers between `0` and `Q-1`, where `Q` is prime and "big enough" so that the space can contain all the numbers we use in our experiments. In practice, given an integer value `x`, we take `x % Q` to fit in the ring. (That is why we avoid using numbers `x > Q`.)
```
Q = 1234567891011
x = 25
import random
def encrypt(x):
    share_a = random.randint(-Q,Q)
    share_b = random.randint(-Q,Q)
    share_c = (x - share_a - share_b) % Q
    return (share_a, share_b, share_c)
encrypt(x)
```
As you can see here, we have split our variable `x` into 3 different shares, which could be sent to 3 different owners.
### Decrypt()
If we want to decrypt these 3 shares, we can simply sum them together and take the modulus of the result (mod Q).
```
def decrypt(*shares):
    return sum(shares) % Q
a,b,c = encrypt(25)
decrypt(a, b, c)
```
Importantly, notice that if we try to decrypt with only two of the shares, the decryption does not work!
```
decrypt(a, b)
```
Thus, we need all of the owners in order to decrypt the value. It is in this way that the `shares` act like private keys, all of which must be present to decrypt a value.
# Step 2: Basic Arithmetic Using SMPC
However, the truly extraordinary property of Secure Multi-Party Computation is the ability to perform computation **while the variables are still encrypted**. Let's demonstrate simple addition below.
```
x = encrypt(25)
y = encrypt(5)
def add(x, y):
    z = list()
    # the first worker adds their shares together
    z.append((x[0] + y[0]) % Q)
    # the second worker adds their shares together
    z.append((x[1] + y[1]) % Q)
    # the third worker adds their shares together
    z.append((x[2] + y[2]) % Q)
    return z
decrypt(*add(x,y))
```
### Success!!!
And there you have it! If each worker (separately) adds their shares together, the resulting shares will decrypt to the correct value (25 + 5 == 30).
As it turns out, SMPC protocols exist which allow this encrypted computation for the following operations:
- addition (which we have just seen)
- multiplication
- comparison
and using these basic underlying primitives, we can perform arbitrary computation!!!
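To make the multiplication primitive less mysterious, here is a minimal plain-Python sketch of Beaver-triple multiplication on additive shares. It is an illustration only: a trusted dealer generates the triple inside `beaver_mul`, whereas a real protocol delegates that role to a separate non-colluding party, and the helper names (`share`, `reconstruct`, `beaver_mul`) are ours, not PySyft's.

```python
import random

Q = 1234567891011  # same ring modulus as above

def share(x, n=3):
    # split x into n additive shares mod Q
    shares = [random.randint(0, Q - 1) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % Q)
    return shares

def reconstruct(shares):
    return sum(shares) % Q

def beaver_mul(x_sh, y_sh):
    # trusted dealer: a random triple with c = a*b, itself secret-shared
    a, b = random.randint(0, Q - 1), random.randint(0, Q - 1)
    a_sh, b_sh, c_sh = share(a), share(b), share((a * b) % Q)
    # the parties open the masked values e = x - a and f = y - b
    e = reconstruct([(xi - ai) % Q for xi, ai in zip(x_sh, a_sh)])
    f = reconstruct([(yi - bi) % Q for yi, bi in zip(y_sh, b_sh)])
    # each party computes z_i = c_i + e*b_i + f*a_i; one party also adds e*f
    z_sh = [(ci + e * bi + f * ai) % Q for ci, bi, ai in zip(c_sh, b_sh, a_sh)]
    z_sh[0] = (z_sh[0] + e * f) % Q
    return z_sh

print(reconstruct(beaver_mul(share(25), share(5))))  # 125
```

Opening `e` and `f` leaks nothing about `x` and `y`, because `a` and `b` act as uniformly random one-time masks.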
In the next section, we will learn how to use the PySyft library to perform these operations!
# Step 3: SMPC Using PySyft
In the previous sections, we outlined some basic intuitions around how SMPC works. In practice, however, we don't want to hand-write all of the primitive operations ourselves when writing our encrypted programs. So, in this section we will walk through the basics of encrypted computation using PySyft, focusing on the 3 primitives mentioned earlier: addition, multiplication, and comparison.
First, we need to create a few Virtual Workers (which you are hopefully now familiar with from our previous tutorials).
```
import torch
import syft as sy
hook = sy.TorchHook(torch)
bob = sy.VirtualWorker(hook, id="bob")
alice = sy.VirtualWorker(hook, id="alice")
bill = sy.VirtualWorker(hook, id="bill")
```
### Basic Encryption/Decryption
Encryption is as simple as taking any PySyft tensor and calling .share(). Decryption is as simple as calling .get() on the shared variable.
```
x = torch.tensor([25])
x
encrypted_x = x.share(bob, alice, bill)
encrypted_x.get()
```
### Introspecting the Encrypted Values
If we look closer at Bob's, Alice's, and Bill's workers, we can see the shares that get created!
```
bob._objects
x = torch.tensor([25]).share(bob, alice, bill)
# Bob's share
bobs_share = list(bob._objects.values())[0]
bobs_share
# Alice's share
alices_share = list(alice._objects.values())[0]
alices_share
# Bill's share
bills_share = list(bill._objects.values())[0]
bills_share
```
And if we wanted to, we could decrypt these values using the same approach we talked about earlier!!!
```
(bobs_share + alices_share + bills_share)
```
As you can see, when we called `.share()` it simply split the value into 3 shares and sent one share to each of the parties!
# Encrypted Arithmetic
And now you see that we can perform arithmetic on the underlying values! The API is built so that we can perform arithmetic the way we would with normal PyTorch tensors.
```
x = torch.tensor([25]).share(bob,alice)
y = torch.tensor([5]).share(bob,alice)
z = x + y
z.get()
z = x - y
z.get()
```
# Encrypted Multiplication
Multiplication requires an additional party who is responsible for consistently generating random numbers (and who does not collude with any of the other parties). We call this person a "crypto provider". For all intents and purposes, the crypto provider is just an additional VirtualWorker, but it is important to acknowledge that the crypto provider is not an "owner" in that he/she does not own shares, but is someone who can be trusted not to collude with any of the existing shareholders.
```
crypto_provider = sy.VirtualWorker(hook, id="crypto_provider")
x = torch.tensor([25]).share(bob,alice, crypto_provider=crypto_provider)
y = torch.tensor([5]).share(bob,alice, crypto_provider=crypto_provider)
# multiplication
z = x * y
z.get()
```
You can also perform matrix multiplication
```
x = torch.tensor([[1, 2],[3,4]]).share(bob,alice, crypto_provider=crypto_provider)
y = torch.tensor([[2, 0],[0,2]]).share(bob,alice, crypto_provider=crypto_provider)
# matrix multiplication
z = x.mm(y)
z.get()
```
# Encrypted Comparison
It is also possible to perform private comparisons between private values. We rely here on the SecureNN protocol, the details of which can be found [here](https://eprint.iacr.org/2018/442.pdf). The result of the comparison is also a private shared tensor.
```
x = torch.tensor([25]).share(bob,alice, crypto_provider=crypto_provider)
y = torch.tensor([5]).share(bob,alice, crypto_provider=crypto_provider)
z = x > y
z.get()
z = x <= y
z.get()
z = x == y
z.get()
z = x == y + 20
z.get()
```
You can also perform max operations
```
x = torch.tensor([2, 3, 4, 1]).share(bob,alice, crypto_provider=crypto_provider)
x.max().get()
x = torch.tensor([[2, 3], [4, 1]]).share(bob,alice, crypto_provider=crypto_provider)
max_values, max_ids = x.max(dim=0)
max_values.get()
```
# Congratulations!!! - Time to Join the Community!
Congratulations on completing this notebook tutorial! If you enjoyed this and would like to join the movement toward privacy preserving, decentralized ownership of AI and the AI supply chain (data), you can do so in the following ways!
### Star PySyft on GitHub
The easiest way to help our community is just by starring the repositories! This helps raise awareness of the cool tools we're building.
- [Star PySyft](https://github.com/OpenMined/PySyft)
### Join our Slack!
The best way to keep up to date on the latest advancements is to join our community! You can do so by filling out the form at [http://slack.openmined.org](http://slack.openmined.org)
### Join a Code Project!
The best way to contribute to our community is to become a code contributor! At any time you can go to the PySyft GitHub issues page and filter for "Projects". This will show you all the top level tickets for projects you can join! If you don't want to join a project, but you would like to do a bit of coding, you can also look for mini-projects by searching for GitHub issues marked "good first issue".
- [PySyft Projects](https://github.com/OpenMined/PySyft/issues?q=is%3Aopen+is%3Aissue+label%3AProject)
- [Good First Issue Tickets](https://github.com/OpenMined/PySyft/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22)
### Donate
If you don't have time to contribute to our codebase but would still like to lend support, you can also become a Backer on our Open Collective. All donations go toward our web hosting and other community expenses such as hackathons and meetups!
[OpenMined's Open Collective Page](https://opencollective.com/openmined)
```
# default_exp lambda
# export
import requests
import base64
import json
import pandas as pd
import boto3
import math
import sklearn
#hide
from nbdev.showdoc import *
```
# lambda code
> Build lambda handler
```
# export
try:
import spotify_net.retrieve_spot_00 as spot
import spotify_net.retrieve_last_01 as last
import spotify_net.prepModel_02 as prep
import spotify_net.hit_endpoint_03 as hit
except ModuleNotFoundError:
import retrieve_spot_00 as spot
import retrieve_last_01 as last
import prepModel_02 as prep
import hit_endpoint_03 as hit
# export
def handler(event, context):
    # Get spotify tracks
    c_id, c_secret, a_token, r_token = spot.cred()
    o_tracks = spot.get_tracks('3ubgXaHeBn1CWLUZPXvqkj', a_token, r_token, c_id, c_secret)
    n_tracks = spot.track_reduce(o_tracks, include=7)
    n_tracks.to_csv('s3://spotify-net/newer_tracks.csv')
    _ = spot.update('3ubgXaHeBn1CWLUZPXvqkj', a_token, r_token, c_id, c_secret, o_tracks, n_tracks)
    print('Retrieved net from Spotify')
    # Get last fm tracks
    API_KEY, USER_AGENT, USERNAME = last.last_cred()
    tracks = last.last_get('user.gettoptracks', USER_AGENT, USERNAME, API_KEY)
    formatted = last.last_format(tracks, 7)
    formatted.to_csv('s3://spotify-net/df_tracks.csv')
    print('Retrieved top recent from Last FM')
    # Combine and prep for model
    s3_objects = prep.load_s3()
    merged_df = prep.merge_frame(s3_objects['spot_tracks'], s3_objects['last_tracks'])
    try:
        transformed = prep.dummies_and_scale(merged_df, 0.0000001, s3_objects['scaler'])
        _ = prep.full_frame(transformed, s3_objects['gen_series'], s3_objects['svd'], s3_objects['key_series'], s3_objects['time_series'])
        # Get predictions
        df = hit.load_s3()
        name_frame = df[['name', 'uri', 'artist']].copy()
        df_json = hit.prep_frame(df)
        df_list = []
        index = 0
        # the prediction endpoint is called three tracks at a time
        for i in range(math.ceil(len(df) / 3)):
            temp_json = hit.prep_frame(df.iloc[index:index + 3])
            preds = requests.post(f'https://72fe4ffwc6.execute-api.us-east-1.amazonaws.com/dev/model/{temp_json}')
            preds = pd.read_json(preds.json()[0])
            index += 3
            df_list.append(preds)
        pred_frame = pd.concat(df_list, ignore_index=True)
        pred_frame = pred_frame.rename(columns={0: 'predictions'})
        name_frame = pd.concat([name_frame, pred_frame], axis=1)
        print(name_frame)
        # Add to playlist, delete from net
        client_id, client_secret, access_token, refresh_token = hit.cred()
        _ = hit.add_tracks(name_frame, client_id, client_secret, access_token, refresh_token)
        _ = hit.delete_tracks(name_frame, client_id, client_secret, access_token, refresh_token)
    except ValueError:
        print('Nothing to update')
    return 'Success'
# export
if __name__ == '__main__':
    try:
        _ = handler(event, context)
    except NameError:
        print('No event')
```
<a href="https://colab.research.google.com/github/bs3537/dengueAI/blob/master/V5_San_Juan_XGB_all_environmental_features.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
#https://www.drivendata.org/competitions/44/dengai-predicting-disease-spread/page/80/
#Your goal is to predict the total_cases label for each (city, year, weekofyear) in the test set.
#Performance metric = mean absolute error
```
## LIST OF FEATURES:
You are provided the following set of information on a (year, weekofyear) timescale:
(Where appropriate, units are provided as a _unit suffix on the feature name.)
### City and date indicators
1. city – City abbreviations: sj for San Juan and iq for Iquitos
2. week_start_date – Date given in yyyy-mm-dd format
### NOAA's GHCN daily climate data weather station measurements
1. station_max_temp_c – Maximum temperature
2. station_min_temp_c – Minimum temperature
3. station_avg_temp_c – Average temperature
4. station_precip_mm – Total precipitation
5. station_diur_temp_rng_c – Diurnal temperature range
### PERSIANN satellite precipitation measurements (0.25x0.25 degree scale)
6. precipitation_amt_mm – Total precipitation
### NOAA's NCEP Climate Forecast System Reanalysis measurements (0.5x0.5 degree scale)
7. reanalysis_sat_precip_amt_mm – Total precipitation
8. reanalysis_dew_point_temp_k – Mean dew point temperature
9. reanalysis_air_temp_k – Mean air temperature
10. reanalysis_relative_humidity_percent – Mean relative humidity
11. reanalysis_specific_humidity_g_per_kg – Mean specific humidity
12. reanalysis_precip_amt_kg_per_m2 – Total precipitation
13. reanalysis_max_air_temp_k – Maximum air temperature
14. reanalysis_min_air_temp_k – Minimum air temperature
15. reanalysis_avg_temp_k – Average air temperature
16. reanalysis_tdtr_k – Diurnal temperature range
### Satellite vegetation - Normalized difference vegetation index (NDVI) - NOAA's CDR Normalized Difference Vegetation Index (0.5x0.5 degree scale) measurements
17. ndvi_se – Pixel southeast of city centroid
18. ndvi_sw – Pixel southwest of city centroid
19. ndvi_ne – Pixel northeast of city centroid
20. ndvi_nw – Pixel northwest of city centroid
#### TARGET VARIABLE = total_cases label for each (city, year, weekofyear)
```
import sys
#Load train features and labels datasets
train_features = pd.read_csv('https://s3.amazonaws.com/drivendata/data/44/public/dengue_features_train.csv')
train_features.head()
train_features.shape
train_labels = pd.read_csv('https://s3.amazonaws.com/drivendata/data/44/public/dengue_labels_train.csv')
train_labels.head()
train_labels.shape
#Merge train features and labels datasets
train = pd.merge(train_features, train_labels)
train.head()
train.shape
#city, year and week of year columns are duplicate in train_features and train_labels datasets so the total_cases column is added to the features dataset
train.dtypes
#Data rows for San Juan
train.city.value_counts()
#San Juan has 936 rows which we can isolate and analyze separately
train = train[train['city'].str.match('sj')]
train.head(5)
train.shape
#Thus, we have isolated the train dataset with only city data for San Juan
#Distribution of the target
import seaborn as sns
sns.distplot(train['total_cases'])
#The target distribution is skewed
#Find outliers
train['total_cases'].describe()
#Remove outliers
train = train[(train['total_cases'] >= np.percentile(train['total_cases'], 0.5)) &
(train['total_cases'] <= np.percentile(train['total_cases'], 99.5))]
train.shape
sns.distplot(train['total_cases'])
#Do train, val split
from sklearn.model_selection import train_test_split
train, val = train_test_split(train, train_size=0.80, test_size=0.20,
random_state=42)
train.shape, val.shape
#Load test features dataset (for the competition)
test = pd.read_csv('https://s3.amazonaws.com/drivendata/data/44/public/dengue_features_test.csv')
#Pandas Profiling
```
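The `#Pandas Profiling` comment above is not followed by any code; as a lightweight, hedged alternative that assumes no extra package (`pandas-profiling` itself would need a separate install), a per-column summary can be built with pandas built-ins:

```python
import numpy as np
import pandas as pd

def quick_profile(df: pd.DataFrame) -> pd.DataFrame:
    """Per-column summary: dtype, missing values, unique values, basic stats."""
    summary = pd.DataFrame({
        'dtype': df.dtypes.astype(str),
        'n_missing': df.isna().sum(),
        'n_unique': df.nunique(),
    })
    # Mean/std only apply to numeric columns; others get NaN via index alignment
    numeric = df.select_dtypes(include='number')
    summary['mean'] = numeric.mean()
    summary['std'] = numeric.std()
    return summary

# Tiny frame mimicking the dengue data, just to show the output shape
demo = pd.DataFrame({'total_cases': [4.0, 5.0, np.nan, 10.0],
                     'city': ['sj', 'sj', 'iq', 'iq']})
print(quick_profile(demo))
```

Running `quick_profile(train)` on the real frame would give a quick view of the missing-value situation before modeling.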
##### Baseline statistics (mean and MAE) for the target variable total_cases in the train dataset, and baseline validation MAE
```
train['total_cases'].describe()
#Baseline mean and mean absolute error
guess = train['total_cases'].mean()
print(f'At the baseline, the mean total number of dengue cases in a year is: {guess:.2f}')
#If we had just guessed that the total number of dengue cases was 31.58 for a city in a particular year, we would be off by how much?
from sklearn.metrics import mean_absolute_error
# Arrange y target vectors
target = 'total_cases'
y_train = train[target]
y_val = val[target]
# Get mean baseline
print('Mean Baseline (using 0 features)')
guess = y_train.mean()
# Train Error
y_pred = [guess] * len(y_train)
mae = mean_absolute_error(y_train, y_pred)
print(f'Train mean absolute error: {mae:.2f} dengue cases per year')
# Test Error
y_pred = [guess] * len(y_val)
mae = mean_absolute_error(y_val, y_pred)
print(f'Validation mean absolute error: {mae:.2f} dengue cases per year')
# We need to convert week_start_date to numeric form using the pd.to_datetime function
#wrangle function
def wrangle(X):
X = X.copy()
# Convert week_start_date to numeric form
X['week_start_date'] = pd.to_datetime(X['week_start_date'], infer_datetime_format=True)
# Extract components from week_start_date, then drop the original column
X['year_recorded'] = X['week_start_date'].dt.year
X['month_recorded'] = X['week_start_date'].dt.month
#X['day_recorded'] = X['week_start_date'].dt.day
X = X.drop(columns='week_start_date')
X = X.drop(columns='year')
X = X.drop(columns='station_precip_mm')
# I engineered a few features that represent standing water, a high-risk factor for mosquitoes
#1. X['standing water feature 1'] = X['station_precip_mm'] / X['station_max_temp_c']
#Standing water features
X['total satellite vegetation index of city'] = X['ndvi_se'] + X['ndvi_sw'] + X['ndvi_ne'] + X['ndvi_nw']
#Standing water features
#Standing water feature 1 = 'NOAA GCN precipitation amount in kg per m2 reanalyzed' * (total vegetation, sum of all 4 parts of the city)
X['standing water feature 1'] = X['reanalysis_precip_amt_kg_per_m2'] * X['total satellite vegetation index of city']
#Standing water feature 2: 'NOAA GCN precipitation amount in kg per m2 reanalyzed'} * 'NOAA GCN mean relative humidity in pct reanalyzed'}
X['standing water feature 2'] = X['reanalysis_precip_amt_kg_per_m2'] * X['reanalysis_relative_humidity_percent']
#Standing water feature 3: 'NOAA GCN precipitation amount in kg per m2 reanalyzed'} * 'NOAA GCN mean relative humidity in pct reanalyzed'} * (total vegetation)
X['standing water feature 3'] = X['reanalysis_precip_amt_kg_per_m2'] * X['reanalysis_relative_humidity_percent'] * X['total satellite vegetation index of city']
#Standing water feature 4: 'NOAA GCN precipitation amount in kg per m2 reanalyzed'} / 'NOAA GCN max air temp reanalyzed'
X['standing water feature 4'] = X['reanalysis_precip_amt_kg_per_m2'] / X['reanalysis_max_air_temp_k']
#Standing water feature 5: ['NOAA GCN precipitation amount in kg per m2 reanalyzed'} * 'NOAA GCN mean relative humidity in pct reanalyzed'} * (total vegetation)]/['NOAA GCN max air temp reanalyzed']
X['standing water feature 5'] = X['reanalysis_precip_amt_kg_per_m2'] * X['reanalysis_relative_humidity_percent'] * X['total satellite vegetation index of city'] / X['reanalysis_max_air_temp_k']
#Rename columns
X.rename(columns= {'reanalysis_air_temp_k':'Mean air temperature in K'}, inplace=True)
X.rename(columns= {'reanalysis_min_air_temp_k':'Minimum air temperature in K'}, inplace=True)
X.rename(columns= {'weekofyear':'Week of Year'}, inplace=True)
X.rename(columns= {'station_diur_temp_rng_c':'Diurnal temperature range in C'}, inplace=True)
X.rename(columns= {'reanalysis_precip_amt_kg_per_m2':'Total precipitation kg/m2'}, inplace=True)
X.rename(columns= {'reanalysis_tdtr_k':'Diurnal temperature range in K'}, inplace=True)
X.rename(columns= {'reanalysis_max_air_temp_k':'Maximum air temperature in K'}, inplace=True)
X.rename(columns= {'year_recorded':'Year recorded'}, inplace=True)
X.rename(columns= {'reanalysis_relative_humidity_percent':'Mean relative humidity'}, inplace=True)
X.rename(columns= {'month_recorded':'Month recorded'}, inplace=True)
X.rename(columns= {'reanalysis_dew_point_temp_k':'Mean dew point temp in K'}, inplace=True)
X.rename(columns= {'precipitation_amt_mm':'Total precipitation in mm'}, inplace=True)
X.rename(columns= {'station_min_temp_c':'Minimum temp in C'}, inplace=True)
X.rename(columns= {'ndvi_se':'Southeast vegetation index'}, inplace=True)
X.rename(columns= {'ndvi_ne':'Northeast vegetation index'}, inplace=True)
X.rename(columns= {'ndvi_nw':'Northwest vegetation index'}, inplace=True)
X.rename(columns= {'ndvi_sw':'Southwest vegetation index'}, inplace=True)
X.rename(columns= {'reanalysis_avg_temp_k':'Average air temperature in K'}, inplace=True)
X.rename(columns= {'reanalysis_sat_precip_amt_mm':'Total precipitation in mm (2)'}, inplace=True)
X.rename(columns= {'reanalysis_specific_humidity_g_per_kg':'Mean specific humidity'}, inplace=True)
X.rename(columns= {'station_avg_temp_c':'Average temp in C'}, inplace=True)
X.rename(columns= {'station_max_temp_c':'Maximum temp in C'}, inplace=True)
X.rename(columns= {'total_cases':'Total dengue cases in the week'}, inplace=True)
#Drop columns
X = X.drop(columns='Year recorded')
X = X.drop(columns='Week of Year')
X = X.drop(columns='Month recorded')
X = X.drop(columns='Total precipitation in mm (2)')
X = X.drop(columns='Average temp in C')
X = X.drop(columns='Maximum temp in C')
X = X.drop(columns='Minimum temp in C')
X = X.drop(columns='Diurnal temperature range in C')
X = X.drop(columns='Average air temperature in K')
X = X.drop(columns='city')
# return the wrangled dataframe
return X
train = wrangle(train)
val = wrangle(val)
test = wrangle(test)
train.head().T
#Before we build the model to train on train dataset, log transform target variable due to skew
import numpy as np
import matplotlib.pyplot as plt
target_log = np.log1p(train['Total dengue cases in the week'])
sns.distplot(target_log)
plt.title('Log-transformed target');
target_log_series = pd.Series(target_log)
train = train.assign(log_total_cases = target_log_series)
#drop total_cases target column while training the model
train = train.drop(columns='Total dengue cases in the week')
#Do the same log transformation with validation dataset
target_log_val = np.log1p(val['Total dengue cases in the week'])
target_log_val_series = pd.Series(target_log_val)
val = val.assign(log_total_cases = target_log_val_series)
val = val.drop(columns='Total dengue cases in the week')
#Fitting XGBoost Regresser model
#Define target and features
# The status_group column is the target
target = 'log_total_cases'
# Get a dataframe with all train columns except the target
train_features = train.drop(columns=[target])
# Get a list of the numeric features
numeric_features = train_features.select_dtypes(include='number').columns.tolist()
# Combine the lists
features = numeric_features
# Arrange data into X features matrix and y target vector
X_train = train[features]
y_train = train[target]
X_val = val[features]
y_val = val[target]
!pip install category_encoders
from sklearn.pipeline import make_pipeline
import category_encoders as ce
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import OneHotEncoder
import xgboost as xgb
from xgboost import XGBRegressor
from sklearn import model_selection, preprocessing
processor = make_pipeline(
SimpleImputer(strategy='mean')
)
X_train_processed = processor.fit_transform(X_train)
X_val_processed = processor.transform(X_val)
model = XGBRegressor(n_estimators=200, objective='reg:squarederror', n_jobs=-1)
eval_set = [(X_train_processed, y_train),
(X_val_processed, y_val)]
model.fit(X_train_processed, y_train, eval_set=eval_set, eval_metric='mae',
early_stopping_rounds=10)
results = model.evals_result()
train_error = results['validation_0']['mae']
val_error = results['validation_1']['mae']
iterations = range(1, len(train_error) + 1)
plt.figure(figsize=(10,7))
plt.plot(iterations, train_error, label='Train')
plt.plot(iterations, val_error, label='Validation')
plt.title('XGBoost Validation Curve')
plt.ylabel('Mean Absolute Error (log transformed)')
plt.xlabel('Model Complexity (n_estimators)')
plt.legend();
#predict on X_val
y_pred = model.predict(X_val_processed)
print('XGBoost Validation Mean Absolute Error (log transformed)', mean_absolute_error(y_val, y_pred))
#Transform y_pred back to original units from log transformed
y_pred_original = np.expm1(y_pred)
y_val_original = np.expm1(y_val)
print('XGBoost Validation Mean Absolute Error (non-log transformed)', mean_absolute_error(y_val_original, y_pred_original))
```
# Introduction
This notebook demonstrates basic usage of BioThings Explorer, an engine for autonomously querying a distributed knowledge graph. BioThings Explorer can answer two classes of queries -- "PREDICT" and "EXPLAIN". PREDICT queries are described in [PREDICT_demo.ipynb](PREDICT_demo.ipynb). Here, we describe EXPLAIN queries and how to use BioThings Explorer to execute them. A more detailed overview of the BioThings Explorer systems is provided in [these slides](https://docs.google.com/presentation/d/1QWQqqQhPD_pzKryh6Wijm4YQswv8pAjleVORCPyJyDE/edit?usp=sharing).
EXPLAIN queries are designed to **identify plausible reasoning chains to explain the relationship between two entities**. For example, in this notebook, we explore the question:
"*Why does hydroxychloroquine have an effect on ACE2?*"
**To experiment with an executable version of this notebook, [load it in Google Colaboratory](https://colab.research.google.com/github/biothings/biothings_explorer/blob/master/jupyter%20notebooks/EXPLAIN_ACE2_hydroxychloroquine_demo.ipynb).**
## Step 0: Load BioThings Explorer modules
First, install the `biothings_explorer` and `biothings_schema` packages, as described in this [README](https://github.com/biothings/biothings_explorer/blob/master/jupyter%20notebooks/README.md#prerequisite). This only needs to be done once (but it is included here for compatibility with [colab](https://colab.research.google.com/)).
```
!pip install git+https://github.com/biothings/biothings_explorer#egg=biothings_explorer
```
Next, import the relevant modules:
* **Hint**: Find corresponding bio-entity representation used in BioThings Explorer based on user input (could be any database IDs, symbols, names)
* **FindConnection**: Find intermediate bio-entities which connects user specified input and output
```
# import modules from biothings_explorer
from biothings_explorer.hint import Hint
from biothings_explorer.user_query_dispatcher import FindConnection
```
## Step 1: Find representation of "ACE2" and "hydroxychloroquine" in BTE
In this step, BioThings Explorer translates our query strings "ACE2" and "hydroxychloroquine" into BioThings objects, which contain mappings to many common identifiers. Generally, the top result returned by the `Hint` module will be the correct item, but you should confirm that using the identifiers shown.
Search terms can correspond to any child of [BiologicalEntity](https://biolink.github.io/biolink-model/docs/BiologicalEntity.html) from the [Biolink Model](https://biolink.github.io/biolink-model/docs/), including `DiseaseOrPhenotypicFeature` (e.g., "lupus"), `ChemicalSubstance` (e.g., "acetaminophen"), `Gene` (e.g., "CDK2"), `BiologicalProcess` (e.g., "T cell differentiation"), and `Pathway` (e.g., "Citric acid cycle").
```
ht = Hint()
# find all potential representations of ACE2
ace2_hint = ht.query("ACE2")
# select the correct representation of ACE2
ace2 = ace2_hint['Gene'][0]
ace2
# find all potential representations of hydroxychloroquine
hydroxychloroquine_hint = ht.query("hydroxychloroquine")
# select the correct representation of hydroxychloroquine
hydroxychloroquine = hydroxychloroquine_hint['ChemicalSubstance'][0]
hydroxychloroquine
```
## Step 2: Find intermediate nodes connecting ACE2 and hydroxychloroquine
In this section, we find all paths in the knowledge graph that connect ACE2 and hydroxychloroquine. To do that, we will use `FindConnection`. This class is a convenient wrapper around two advanced functions for **query path planning** and **query path execution**. More advanced features for both query path planning and query path execution are in development and will be documented in the coming months.
The parameters for `FindConnection` are described below:
```
help(FindConnection.__init__)
```
Here, we formulate a `FindConnection` query with ACE2 as the `input_obj` and hydroxychloroquine as the `output_obj`. We further specify with the `intermediate_nodes` parameter that we are looking for paths joining ACE2 and hydroxychloroquine with *one* intermediate node that is a `BiologicalEntity`. (The ability to search for longer reasoning paths that include additional intermediate nodes will be added shortly.)
```
fc = FindConnection(input_obj=ace2, output_obj=hydroxychloroquine, intermediate_nodes=['BiologicalEntity'])
```
We next execute the `connect` method, which performs the **query path planning** and **query path execution** process. In short, BioThings Explorer is deconstructing the query into individual API calls, executing those API calls, then assembling the results.
A verbose log of this process is displayed below:
```
# set verbose=True will display all steps which BTE takes to find the connection
fc.connect(verbose=True)
```
## Step 3: Display and Filter results
This section demonstrates post-query filtering done in Python. Later, more advanced filtering functions will be added to the **query path execution** module for interleaved filtering, thereby enabling longer query paths. More details to come...
First, all matching paths can be exported to a data frame. Let's examine a sample of those results.
```
df = fc.display_table_view()
df.head()
```
While most results are based on edges from [semmed](https://skr3.nlm.nih.gov/SemMed/), edges from [DGIdb](http://www.dgidb.org/), [biolink](https://monarchinitiative.org/), [disgenet](http://www.disgenet.org/), [mydisease.info](https://mydisease.info) and [drugcentral](http://drugcentral.org/) were also retrieved from their respective APIs.
Next, let's look at which types of intermediate nodes connect the two entities.
```
df.node1_type.unique()
```
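The cell above only lists the node types; to see which intermediate entities are mentioned most often, a value count can be taken over the node-name column. A minimal sketch follows; the `node1_name` column and the sample values are assumptions for illustration, and the actual frame returned by `display_table_view` should be inspected first:

```python
import pandas as pd

# Hypothetical result table mirroring the shape of fc.display_table_view()
df = pd.DataFrame({
    'node1_name': ['TNF', 'IL6', 'TNF', 'ACE', 'TNF', 'IL6'],
    'node1_type': ['Gene'] * 6,
})

# Count how often each intermediate node appears and keep the top hits
top_nodes = df['node1_name'].value_counts().head(3)
print(top_nodes)
```

On the real frame, the most frequently occurring intermediate nodes are the strongest candidate explanations for the ACE2-hydroxychloroquine relationship.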
<center><img src="../Picture Data/logo.png" alt="Header" style="width: 800px;"/></center>
# Object Following - Basic
In this notebook, we will show how to track objects using JetBot!
We will use a pre-trained neural network trained on the [COCO dataset](http://cocodataset.org) to detect 90 different common objects. These include:
* Person (index 0)
* Cup (index 47)
There are many other detectable classes; you can check [this file](https://github.com/tensorflow/models/blob/master/research/object_detection/data/mscoco_complete_label_map.pbtxt) for a complete list of class indices.
This model comes from the [TensorFlow Object Detection API](https://github.com/tensorflow/models/tree/master/research/object_detection), which also provides utilities for training object detectors on custom tasks. Once trained, the model is optimized for the Jetson Nano with NVIDIA TensorRT, making the network fast enough to control the JetBot in real time.
First, we need to import the "ObjectDetector" class, which uses our pre-trained SSD engine.
### Single camera image detection
```
from jetbot import ObjectDetector
model = ObjectDetector('ssd_mobilenet_v2_coco.engine')
```
Internally, the 'ObjectDetector' class uses the TensorRT Python API to execute the engine we provide. It is also responsible for pre-processing the input of the neural network and parsing the detected objects.
Currently, it only works with engines created with the ``jetbotssd_tensorrt`` package. This package has utilities for converting models from the TensorFlow object detection API to the optimized TensorRT engine. Next, let's initialize the camera. Our detector needs 300x300 pixels of input, so we will set this parameter when creating the camera.
```
from jetbot import Camera
camera = Camera.instance(width=300, height=300)
```
Next, let's use some camera input to execute our network. By default, the ``ObjectDetector`` class expects the camera to generate a format of ``bgr8``.
However, if the input format is different, you can override the default preprocessor function.
If there are any COCO objects in the camera's field of view, they should now be stored in the ``detections`` variable.
# Display detected objects in the text area
We print them out with the code shown below:
```
detections = model(camera.value)
print(detections)
from IPython.display import display
import ipywidgets.widgets as widgets
detections_widget = widgets.Textarea()
detections_widget.value = str(detections)
display(detections_widget)
```
You can see the label, confidence, and border position of each object detected in each image. In this example there is only one image (our camera).
To print the first object detected in the first image, we can call the following command
>If no object is detected, an error may appear here.
```
image_number = 0
object_number = 0
print(detections[image_number][object_number])
```
# Control the robot to follow the center object
Now, we want the robot to follow an object of the specified class. To do this, we will:
1. Detect objects that match the specified class
2. Select the object closest to the center of the camera's field of view; this is the target object
3. Steer the robot toward the target object; if no object is detected, keep moving forward
We will also create widgets to control the target object label, the robot speed, and the turn gain, which scales how hard the robot turns based on the distance between the target object and the center of the robot's field of view.
```
import torch
import torchvision
import torch.nn.functional as F
import cv2
import numpy as np

# Run tensors on the GPU if available; preprocess() relies on this global
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
mean = 255.0 * np.array([0.485, 0.456, 0.406])
stdev = 255.0 * np.array([0.229, 0.224, 0.225])
normalize = torchvision.transforms.Normalize(mean, stdev)
def preprocess(camera_value):
global device, normalize
x = camera_value
# Resize the image to 224x224, matching the obstacle-avoidance model input
x = cv2.resize(x, (224, 224))
x = cv2.cvtColor(x, cv2.COLOR_BGR2RGB)
x = x.transpose((2, 0, 1))
x = torch.from_numpy(x).float()
x = normalize(x)
x = x.to(device)
x = x[None, ...]
return x
```
Create a robot instance of the drive motor.
```
from jetbot import Robot
robot = Robot()
```
We display all control widgets and connect the network execution function to the camera update.
```
from jetbot import bgr8_to_jpeg
image_widget = widgets.Image(format='jpeg', width=300, height=300)
label_widget = widgets.IntText(value=1, description='tracked label')
speed_widget = widgets.FloatSlider(value=0.4, min=0.0, max=1.0, description='speed')
turn_gain_widget = widgets.FloatSlider(value=0.8, min=0.0, max=2.0, description='turn gain')
display(widgets.VBox([
widgets.HBox([image_widget]),
label_widget,
speed_widget,
turn_gain_widget
]))
width = int(image_widget.width)
height = int(image_widget.height)
def detection_center(detection):
"""Calculate the center x, y coordinates of the object"""
bbox = detection['bbox']
center_x = (bbox[0] + bbox[2]) / 2.0 - 0.5
center_y = (bbox[1] + bbox[3]) / 2.0 - 0.5
return (center_x, center_y)
def norm(vec):
"""Calculate the length of a two-dimensional vector"""
return np.sqrt(vec[0]**2 + vec[1]**2)
def closest_detection(detections):
"""Find the detection closest to the center of the image"""
closest_detection = None
for det in detections:
center = detection_center(det)
if closest_detection is None:
closest_detection = det
elif norm(detection_center(det)) < norm(detection_center(closest_detection)):
closest_detection = det
return closest_detection
def execute(change):
image = change['new']
# Calculate all detected objects
detections = model(image)
# Draw all detected objects on the image
for det in detections[0]:
bbox = det['bbox']
cv2.rectangle(image, (int(width * bbox[0]), int(height * bbox[1])), (int(width * bbox[2]), int(height * bbox[3])), (255, 0, 0), 2)
# Select to match the detection of the selected class label
matching_detections = [d for d in detections[0] if d['label'] == int(label_widget.value)]
# Let the detection be closest to the center of the field and draw it
det = closest_detection(matching_detections)
if det is not None:
bbox = det['bbox']
cv2.rectangle(image, (int(width * bbox[0]), int(height * bbox[1])), (int(width * bbox[2]), int(height * bbox[3])), (0, 255, 0), 5)
# If no object is detected, the JetBot keeps moving forward
if det is None:
robot.forward(float(speed_widget.value))
# If an object is detected, steer the JetBot to follow it
else:
# Move the robot forward, turning proportionally to the target's x offset from the center
center = detection_center(det)
robot.set_motors(
float(speed_widget.value + turn_gain_widget.value * center[0]),
float(speed_widget.value - turn_gain_widget.value * center[0])
)
# Update image display to widget
image_widget.value = bgr8_to_jpeg(image)
execute({'new': camera.value})
```
Call the following code block to connect the execution function to each camera frame to update.
```
camera.unobserve_all()
camera.observe(execute, names='value')
```
If the robot is not blocked, you should see blue boxes around all detected objects, with the target object (the one the robot follows) displayed in green.
When a target is detected, the robot should turn toward it.
You can call the following code block to manually disconnect the camera and stop the robot.
```
import time
camera.unobserve_all()
time.sleep(1.0)
robot.stop()
```
# Programming Languages Practical Assignment
## Participants:
* Leandro Alfredo Carlos
* Lucas Pereira de Lira
## Dataset Specifications:
* Name:
    * Spotify Dataset 1922-2021, ~600k Tracks
* Structure:
    * tracks.csv
        * Primary:
            - id (track id generated by Spotify)
        * Numeric:
            - acousticness (ranges from 0 to 1)
            - danceability (ranges from 0 to 1)
            - energy (ranges from 0 to 1)
            - duration_ms (integer, typically ranging from 200k to 300k)
            - instrumentalness (ranges from 0 to 1)
            - valence (ranges from 0 to 1)
            - popularity (ranges from 0 to 100)
            - tempo (float, typically ranging from 50 to 150)
            - liveness (ranges from 0 to 1)
            - loudness (float, typically ranging from -60 to 0)
            - speechiness (ranges from 0 to 1)
        * Dummy:
            - mode (0 = Minor, 1 = Major)
            - explicit (0 = no explicit content, 1 = explicit content)
        * Categorical:
            - key (all keys in the octave, encoded as values ranging from 0 to 11, starting with C as 0, C# as 1, and so on...)
            - timesignature (the predicted time signature, most typically 4)
            - artists (list of the credited artists)
            - artists_id (ids of the credited artists)
            - release_date (release date, mostly in yyyy-mm-dd format, though the date precision may vary)
            - name (track name)
    * artists.csv
        * id (artist id)
        * name (artist name)
        * followers (total number of the artist's followers)
        * popularity (popularity of a given artist based on all of their tracks)
        * genres (genres associated with this artist)
    * dict_artists.json
        {
            "any": [
                "first",
                "second",
                "third",
                ...,
                "nth"
            ],
            "blank": [],
            "first": [
                "any",
                "third",
                "second"
            ],
            ...
        }
    * (The lists are sorted in descending order, so "first" is the most similar to "any", "second" the second most similar, and so on... The number of artists can range from 0 to 20)
* Access link: https://www.kaggle.com/yamaerenay/spotify-dataset-19212020-160k-tracks?select=dict_artists.json
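The dict_artists.json structure described above can be read with Python's standard `json` module; here is a minimal sketch, using a tiny inline example rather than the real file:

```python
import json

# Tiny inline stand-in for dict_artists.json
raw = '{"any": ["first", "second", "third"], "blank": []}'
related = json.loads(raw)

# Lists are sorted by similarity, so index 0 holds the most similar artist
most_similar = related['any'][0] if related['any'] else None
print(most_similar)  # -> first
```

With the real file, `json.load(open('Spotify Dataset 2021/dict_artists 1.0.json'))` would yield the same kind of dictionary, keyed by artist id.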
## General Remarks
* For this assignment we decided to drop the null values found; other techniques were not considered.
* The "artists" column contains arrays in its records, so these values had to be unpacked to produce more faithful results. However, the population is very large, making a complete enumeration of all its values infeasible and very slow. We therefore used a manageable sample (subset) of the data (note expressions like .head(X), where "X" is the sample size) that still supports a reliable analysis.
* In the cells containing code, some comments were added; please take them into account.
* Depending on the size of the dataframe, it may be necessary to start the notebook with the following command:
    * jupyter notebook --NotebookApp.iopub_data_rate_limit=1.0e10
```
import pandas as pd
import seaborn as sbn
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
# Note: to use plotly in Jupyter you must install it first; open the Anaconda prompt and run the
# following command:
# conda install -c https://conda.anaconda.org/plotly plotly
# or use pip install plotly
import plotly.express as px
artist_popularity_metrics = pd.read_csv('Spotify Dataset 2021/artists.csv')
audio_features_of_tracks = pd.read_csv('Spotify Dataset 2021/tracks.csv')
related_artists = pd.read_json('Spotify Dataset 2021/dict_artists 1.0.json')
df_data_by_year_o = pd.read_csv('Spotify Dataset 2021/data_by_year_o.csv')
artist_popularity_metrics.columns = ['id_artist', 'followers', 'genres', 'name', 'popularity']
audio_features_of_tracks.columns = ['id', 'name', 'popularity', 'duration_ms', 'explicit', 'artists', 'id_artist', 'release_date', 'danceability', 'energy', 'key', 'loudness', 'mode', 'speechiness', 'acousticness', 'instrumentalness', 'liveness', 'valence', 'tempo', 'time_signature']
```
# List of Questions
* First, the data were cleaned by checking for null values in the dataframe columns. For this, the DataFrame.isna() command was used, which detects null values in the dataset; the .sum() function was then applied to count those values, as shown in the next cell.
```
print(audio_features_of_tracks.isna().sum())
```
* Note in the output above that combining the two commands gives the number of null values per column, which makes fixing them easier. As highlighted at the beginning of this assignment, we chose to drop all records containing null values. Only the "name" column has null values, and they will be removed with the command below.
* Before proceeding, we check how many records exist in the dataframe using the DataFrame.shape command, which reports the dimensionality of our dataframe.
```
audio_features_of_tracks.shape
```
* The dataframe has 586,672 records.
* We apply the code to drop the null records found, assigning the result to a new dataframe named "not_null".
```
not_null = audio_features_of_tracks[ audio_features_of_tracks['name'].isnull() == False ]
```
* That done, we run the command to verify that the null values were indeed removed.
```
not_null.shape
```
* Note the reduction of 71 records.
* With the data cleaned, we now define and analyze the questions to be answered.
### 1 - What is the trend in the duration of the 100 most popular songs of 2021?
```
# Filtering songs released in 2021
_2021_songs = not_null[ not_null['release_date'].str.replace(' ', '').str.contains('2021') ]
# Listing the 100 most popular songs of 2021
_100_most_popular_songs = _2021_songs.sort_values(by='popularity', ascending=False).head(100)
df = _100_most_popular_songs['duration_ms']
# Plotting the histogram of the song-duration distribution
sbn.set_theme(style="darkgrid")
sbn.histplot(df)
```
* A histogram is a classic data-visualization tool, widely used in many fields; it represents the distribution of one or more variables by counting the number of occurrences falling within each interval. Image processing illustrates further uses, such as histogram equalization or histogram stretching to manipulate image contrast, among many others.
* Analysis: The histogram above shows that the most popular songs mostly last between 150000 ms (2.5 minutes) and 200000 ms (3.33 minutes). Furthermore, the distribution is not normal; it is skewed, tending toward values on the left.
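The skewness claim can be checked numerically with pandas' built-in `.skew()`; here is a sketch on a small synthetic sample (the real values come from `_100_most_popular_songs['duration_ms']`, which is not reproduced here):

```python
import pandas as pd

# Synthetic durations in ms: mostly 150k-200k with a long right tail
durations = pd.Series([150000, 160000, 170000, 175000, 180000,
                       185000, 190000, 200000, 250000, 320000])

# A positive coefficient confirms the distribution is right-skewed,
# i.e. its mass is concentrated on the left
print(f'skewness = {durations.skew():.2f}')
```

On the real column, `_100_most_popular_songs['duration_ms'].skew()` would quantify the asymmetry visible in the histogram.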
### 2 - Which music genres predominate among the artists featured in the 100 most popular songs of 2021?
```
# Filtering songs released in 2021
# Dates may contain only the year, among other formats, so contains() is used to check whether a song is from 2021
_2021_songs = not_null[ not_null['release_date'].str.replace(' ', '').str.contains('2021') ]
# Listing the 100 most popular songs of 2021
_100_most_popular_songs = _2021_songs.sort_values(by='popularity', ascending=False).head(100)
# Note: since the artists column holds array-like values, the data had to be
# processed as follows
list_id_artists = []
# Collecting the ids of the artists featured in the 100 most popular songs
for row in _100_most_popular_songs['id_artist']:
aux_list = row.replace('[', '').replace(']', '').replace("'", '').replace(' ', '').split(',')
# Checking to avoid duplicates
for i in range(0, len(aux_list)):
it_found = False
j = 0
while j < len(list_id_artists) and it_found == False:
if aux_list[i] == list_id_artists[j]:
it_found = True
j+=1
if it_found == False:
list_id_artists.append(aux_list[i])
genre_list = []
# Collecting the artists' music genres (as these are unique artists, genres may repeat)
for i in range(0, len(list_id_artists)):
for row in artist_popularity_metrics[artist_popularity_metrics['id_artist'].replace(' ', '') == list_id_artists[i]]['genres']:
aux_list = row.replace('[', '').replace(']', '').replace("'", '').split(',')
for j in range(0, len(aux_list)):
if aux_list[j].replace(' ', '') != '':
genre_list.append(aux_list[j].strip())
df_genre = pd.DataFrame(genre_list)
df_genre2 = df_genre.value_counts().sort_values(ascending=False)
labels = pd.unique(pd.Series(genre_list))
aux_list_dataframe = []
# Building the resulting list
for i in range(0, len(labels)):
aux_list_dataframe.append([labels[i], df_genre2[labels[i]]])
df_result = pd.DataFrame(aux_list_dataframe, columns=['name', 'value'])
df_result = df_result.sort_values(by='value', ascending=False)
plt.rcParams['figure.figsize'] = [20, 20]
fig1, ax1 = plt.subplots()
ax1.pie(df_result['value'], labels=df_result['name'], autopct='%1.1f%%', shadow=True, startangle=90)
ax1.axis('equal')
plt.show()
```
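As a side note, the manual de-duplication and counting loops above can be written more compactly with pandas string methods and `explode`; a minimal self-contained sketch (the toy `id_artist` strings below are hypothetical, mimicking the bracketed format in the dataset):

```python
import pandas as pd

# Toy rows in the same "['id1', 'id2']" string format as the dataset
df = pd.DataFrame({'id_artist': ["['a1', 'a2']", "['a2']", "['a3', 'a1']"]})

# Strip brackets/quotes/spaces, split into lists, and explode to one id per row
ids = (df['id_artist']
       .str.replace(r"[\[\]' ]", '', regex=True)
       .str.split(',')
       .explode())

unique_ids = ids.drop_duplicates().tolist()
print(unique_ids)          # ['a1', 'a2', 'a3']
print(ids.value_counts())  # a1 and a2 appear twice, a3 once
```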
* The pie chart, also known as a circle or sector chart, is a type of chart in which the value shown for each statistical category is proportional to its frequency.
* Analysis: The chart above shows the proportion of each music genre linked to the artists who took part in the 100 most popular songs of 2021. As we can see, the three most frequent genres among listeners are pop at 8.1%, trap latino at 6.6% and reggaeton at 4.5%.
* Up to this point we have seen that the dataset contains more than 586,000 songs collected from the Spotify API. The features include song, artist and release date, as well as some track characteristics such as acousticness, danceability, loudness, tempo and so on. To reduce the processing load of the code that follows, we will drop some columns from our dataframe, which can be done with the drop method, as shown in the next cell.
```
new_df = not_null.copy()
new_df.drop([ 'id', 'explicit', 'key', 'mode'], axis = 1, inplace = True)
new_df.head()
```
### 3 - Is there a correlation (similarity) between the "danceability" column and the "valence" column?
* The dataset includes several metrics. Some of them have self-evident meanings, such as tempo, loudness, energy, acousticness, liveness and speechiness, and they may be relevant to the analysis. At first glance, the "danceability" and "valence" columns appear to be related: "valence" measures the musical positiveness conveyed to the listener, so we can hypothesize that positiveness influences whether listeners feel like dancing to a given song. To analyze the relationship between the dataframe columns we will use the DataFrame.corr() method, which returns a set of coefficients quantifying the level of correlation between columns. The code below explores this functionality and presents the results in a more readable form.
```
corr = new_df[['acousticness', 'danceability', 'energy', 'instrumentalness', 'liveness', 'tempo', 'valence']].corr()
sbn.set(style='darkgrid')
plt.figure(figsize=(12,8))
sbn.heatmap(corr, annot=True)
```
* Analysis
* We used a heatmap, which maps values to colors to make the information easier to read, to plot the DataFrame result above. The bar on the right-hand side runs from positive values (lighter, "warmer" colors) to negative ones (darker, "cooler" colors). The plot confirms the correlation between the "danceability" and "valence" columns, with a positive value of 0.57. A correlation is also visible between "danceability" and "energy", and between "acousticness" and "instrumentalness", since those values are positive and close to 0.5. In contrast, "energy" and "acousticness" turn out to be very different musical traits, correlating negatively at -0.77.
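A single coefficient from the matrix can also be computed directly with `Series.corr`; a minimal self-contained sketch on toy data (the 0.57 reported above comes from the full dataset, not from this hypothetical example):

```python
import pandas as pd

# Hypothetical stand-ins for the danceability and valence columns
toy = pd.DataFrame({
    'danceability': [0.2, 0.4, 0.5, 0.7, 0.9],
    'valence':      [0.1, 0.5, 0.4, 0.8, 0.9],
})

# Pearson correlation between the two columns (the same number a heatmap cell would show)
r = toy['danceability'].corr(toy['valence'])
print(r)
```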
### 4 - Show the artists who produce songs that are both energetic (energy) and acoustic (acousticness), requiring an artist in this category to have a mean value of 0.5 or higher for both columns.
```
list_data = []
aux_df = new_df[['artists','energy','acousticness']]
#Note:
# The "artists" column stores vectors in its records, so these values had to be unpacked
# to produce more faithful results.
# However, the population is very large, so fully enumerating every value in it
# proved infeasible and very slow.
# We therefore use a sample (subset) of the data, of a manageable size, still sufficient
# for a trustworthy analysis.
aux_df = aux_df.head(3000)
for i in range(len(aux_df)):
list_artists = aux_df.values[i][0].replace('[', '').replace(']', '').replace("'", '').split(',')
for artist in list_artists:
list_data.append([artist, aux_df.values[i][1], aux_df.values[i][2]])
df_result = pd.DataFrame(list_data, columns=['artists', 'energy', 'acousticness'])
aux_df = df_result.groupby('artists').mean()
aux_df = aux_df[aux_df['energy'] > 0.5]
aux_df = aux_df[aux_df['acousticness'] > 0.5]
new_df_list = []
for artist in aux_df.index.tolist():
new_df_list.append([artist])
for i in range(len(aux_df)):
new_df_list[i].append(str(aux_df.values[i][0]))
new_df_list[i].append(str(aux_df.values[i][1]))
new_df_list = pd.DataFrame(new_df_list, columns=['artists', 'energy', 'acousticness'])
new_df_list['artists'] = new_df_list['artists'].astype("string")
new_df_list['energy'] = new_df_list['energy'].astype(float)
new_df_list['acousticness'] = new_df_list['acousticness'].astype(float)
f, ax = plt.subplots(figsize=(6, 5))
sbn.set_color_codes("muted")
sbn.barplot(x="acousticness", y="artists", data=new_df_list,
label="artists X acousticness", color="b")
ax.legend(ncol=2, loc="lower right", frameon=True)
ax.set(xlim=(0, 1), ylabel="artists",
xlabel="acousticness")
sbn.despine(left=True, bottom=True)
f, ax = plt.subplots(figsize=(6, 5))
sbn.set_color_codes("muted")
sbn.barplot(x="energy", y="artists", data=new_df_list,
label="Artists X Energy", color="b")
ax.legend(ncol=2, loc="lower right", frameon=True)
ax.set(xlim=(0, 1), ylabel="artists",
xlabel="energy")
sbn.despine(left=True, bottom=True)
```
* Analysis
* Energetic (energy) songs are essentially the opposite of acoustic (acousticness) songs: on one side fast, rhythm-heavy tracks, on the other calmer, slower ones. This versatility of style is usually hard to find in a single artist, and the data shown above confirm that the artists who produce the most energetic songs are the ones who least produce or sing acoustic songs. Our filter, however, highlights the artists who do have the versatility and talent to produce, release or sing songs in such distinct rhythmic styles.
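The filter used above — group by artist, take per-artist means, and keep only artists whose averages clear 0.5 on both columns — can be sketched on toy data as follows (names and values are hypothetical):

```python
import pandas as pd

# Toy rows: one (artist, energy, acousticness) record per song
songs = pd.DataFrame({
    'artists':      ['A', 'A', 'B', 'B', 'C'],
    'energy':       [0.9, 0.7, 0.3, 0.4, 0.8],
    'acousticness': [0.6, 0.8, 0.9, 0.9, 0.2],
})

# Per-artist means, then keep artists above 0.5 on BOTH columns
means = songs.groupby('artists').mean()
both = means[(means['energy'] > 0.5) & (means['acousticness'] > 0.5)]
print(both.index.tolist())  # only artist 'A' clears both thresholds
```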
### 5 - Overview of music trends between 1940 and 2021.
* The release_date column is formatted as a full date, e.g. 1900-01-01, but the analysis will be carried out on the yearly averages of the musical features. We will therefore reformat the date so that only the year (an integer) is kept.
```
df_6 = new_df.copy()
#Note:
# Parse release_date, coercing malformed values to NaT, keep only the year as an
# integer, and restrict the data to releases from 1939 onwards.
df_6['release_date'] = pd.to_datetime(df_6['release_date'], errors='coerce')
df_6['release_date'] = df_6['release_date'].dt.strftime('%Y')
df_6['release_date'] = df_6['release_date'].astype(int)
df_6 = df_6[df_6['release_date'] >= 1939]
year_avg = df_6[['danceability','energy','liveness','acousticness', 'valence','release_date']].groupby('release_date').mean().sort_values(by='release_date')
print(year_avg.head())
# Done: we now have the annual mean values for the five different measures.
# Next, create a figure and add a line for each trend/feature above.
plt.figure(figsize=(14,8))
plt.title("Music trends over time", fontsize=15)
lines = ['danceability','energy','liveness','acousticness','valence']
for line in lines:
ax = sbn.lineplot(x='release_date', y=line, data=year_avg)
plt.legend(lines)
plt.show()
```
* Analysis
* From the data above we see that acousticness was the trend around the 1940s, but from 1960 onwards it fell sharply and has stayed low to this day. Conversely, energetic songs were not a trend from the early 40s to the mid-60s, but from the 60s on they gained popularity and became the trend, which persists today. It is also interesting to note that musical trends dictate "fashion": the more listeners consume a given characteristic, the more of it gets produced. From this we gather that labels and producers really do follow the market, launching artists who stick to what is already an established niche.
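Since only the year is needed, the `to_datetime`/`strftime` pair used above can be replaced by slicing the first four characters of the string; a minimal sketch (this assumes every value starts with a four-digit year, which is why the coercing parse above is the safer route for messy data):

```python
import pandas as pd

# Hypothetical release-date strings in the dataset's format
dates = pd.Series(['1940-01-01', '1975-06-30', '2021-11-05'])

# Keep only the year as an integer, without parsing full dates
years = dates.str[:4].astype(int)
print(years.tolist())  # [1940, 1975, 2021]
```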
### 6 - Artists: let's look at the top 10 artists with the most songs in the dataset.
Some of them produce many songs, while a few artists have very few.
a) Let's look at the top 10 artists with the most songs in the dataset.
```
df_7 = new_df.copy()
#Note:
# The "artists" column stores vectors in its records, so these values had to be unpacked
# to produce more faithful results.
# However, the population is very large, so fully enumerating every value in it
# proved infeasible and very slow.
# We therefore use a sample (subset) of the data, of a manageable size, still sufficient
# for a trustworthy analysis.
df_7 = df_7.head(5000)
list_data = []
for i in range(len(df_7)):
list_artists = df_7.values[i][3].replace('[', '').replace(']', '').replace("'", '').split(',')
for artist in list_artists:
name = artist.replace('"', '').replace("'", '').strip()
if (name == 'Janusz Korczak' or name == 'Radclyffe Hall' or name == 'Alice White' or name == 'Synthesized Voice'
or name == 'Francisco Canaro' or name == 'Fernando Pessoa' or name == 'Ignacio Corsini'
or name == 'Louis Armstrong' or name == 'Bessie Smith' or name == 'Dick Haymes'):
list_data.append([name, int(df_7.values[i][5].split('-')[0]), name])
df_7 = pd.DataFrame(list_data, columns=['artists', 'release_date', 'aux_artists'])
df_7['artists'] = df_7['artists'].astype('string')
df_7['aux_artists'] = df_7['aux_artists'].astype('string')
df_7 = df_7.groupby(['artists', 'release_date']).count()
new_df_list = []
for values in df_7.index.tolist():
new_df_list.append([values[0], values[1]])
for i in range(len(df_7)):
new_df_list[i].append(str(df_7.values[i][0]))
new_df_list = pd.DataFrame(new_df_list, columns=['artists', 'release_date', 'count'])
new_df_list['artists'] = new_df_list['artists'].astype('string')
new_df_list['count'] = new_df_list['count'].astype(int)
# new_df_list.sort_values(by='count', ascending=False).head(10)
plt.figure(figsize=(14, 8))
sbn.lineplot(x='release_date', y='count', hue='artists', data=new_df_list)
```
Analysis:
By unpacking the "artists" column we obtained the 10 artists with the most productions in our sample.
## Note:
### We chose not to unpack the "artists" column here, since doing so requires heavy processing and becomes infeasible. We would nonetheless like to record the idea behind this treatment of the dataset. The Spotify dataset has more than 580,000 records covering many artists, so we thought of building an animated chart over the whole span of the "release_date" column (1900 to 2021) for the 10 artists who produced the most songs. The idea is to show the evolution of each artist's production, year by year. To do this, we accumulate the values in a new DataFrame and animate a bar chart. What mainly motivated us to record this exercise was developing the animated chart shown next.
```
df = new_df.copy()
print(df["artists"].nunique())
# There are 114030 artists in the whole dataset.
# Some of them produce many songs, while a few artists have very few.
# a) Let's look at the top 10 artists with the most songs in the dataset.
# b) We can build a new dataframe showing the yearly song production of these 10 artists.
# First we create a list using the index returned by the value_counts function
artist_list = df.artists.value_counts().index[:10]
df['release_date'] = pd.to_datetime(df['release_date'], errors='coerce')
df['release_date'] = df['release_date'].dt.strftime('%Y').astype(int)
# Now filter the dataframe using this list and group by year:
df_artists = df[df.artists.isin(artist_list)][['artists','release_date','energy']].groupby(['artists','release_date']).count().reset_index()
df_artists.rename(columns={'energy':'song_count'}, inplace=True)
# This dataframe contains the artist name, the year, and how many songs the artist produced that year.
# Now let's create a line chart comparing these artists' production.
plt.figure(figsize=(16,8))
sbn.lineplot(x='release_date', y='song_count', hue='artists', data=df_artists)
plt.show()
# We cannot really tell the lines apart. Because the period is so long (about 100 years), each artist appears on only part of the timeline.
# Let's check which artists are dominant in each year.
#
# First, create an empty dataframe covering the whole timeline (1900-2020) and the names of the top 10 artists.
df1 = pd.DataFrame(np.zeros((121,10)), columns=artist_list)
df1['release_date'] = np.arange(1900,2021)
#Convert the years to int so the merge uses the same dtype
df1['release_date'] = df1['release_date'].astype(int)
# The dataframe has 121 rows for 121 years and 11 columns (10 artists plus a year column).
df1 = df1.melt(id_vars='release_date',var_name='artists', value_name='song_count')
# The song count is zero for every year. Let's merge in the song counts from the df_artists dataframe using the merge function
df_merge = pd.merge(df1, df_artists, on=['release_date','artists'], how='outer').sort_values(by='release_date').reset_index(drop=True)
#Replace the NaN values with 0 and drop the song_count_x column.
df_merge.fillna(0, inplace=True)
df_merge.drop('song_count_x', axis=1, inplace=True)
df_merge.rename(columns={'song_count_y':'song_count'}, inplace=True)
# We also want to add a column with the cumulative sum of the
# songs each artist produced over the years. One way to do this is to combine the groupby and cumsum functions.
df_merge['cumsum'] = df_merge[['song_count','artists']].groupby('artists').cumsum()
df_merge.head(10)
fig = px.bar(df_merge,
x='artists', y='cumsum',
color='artists',
animation_frame='release_date', animation_group='release_date',
range_y=[0,5000],
title='Artists With the Most Songs Produced')
fig.show()
```
# Implementing the Gradient Descent Algorithm
In this lab, we'll implement the basic functions of the Gradient Descent algorithm to find the boundary in a small dataset. First, we'll start with some functions that will help us plot and visualize the data.
```
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
#Some helper functions for plotting and drawing lines
def plot_points(X, y):
admitted = X[np.argwhere(y==1)]
rejected = X[np.argwhere(y==0)]
plt.scatter([s[0][0] for s in rejected], [s[0][1] for s in rejected], s = 25, color = 'blue', edgecolor = 'k')
plt.scatter([s[0][0] for s in admitted], [s[0][1] for s in admitted], s = 25, color = 'red', edgecolor = 'k')
def display(m, b, color='g--'):
plt.xlim(-0.05,1.05)
plt.ylim(-0.05,1.05)
x = np.arange(-10, 10, 0.1)
plt.plot(x, m*x+b, color)
```
## Reading and plotting the data
```
data = pd.read_csv('data.csv', header=None)
X = np.array(data[[0,1]])
y = np.array(data[2])
plot_points(X,y)
plt.show()
```
## TODO: Implementing the basic functions
Now it's your turn to shine. Implement the following formulas, as explained in the text.
- Sigmoid activation function
$$\sigma(x) = \frac{1}{1+e^{-x}}$$
- Output (prediction) formula
$$\hat{y} = \sigma(w_1 x_1 + w_2 x_2 + b)$$
- Error function
$$Error(y, \hat{y}) = - y \log(\hat{y}) - (1-y) \log(1-\hat{y})$$
- The function that updates the weights
$$ w_i \longrightarrow w_i + \alpha (y - \hat{y}) x_i$$
$$ b \longrightarrow b + \alpha (y - \hat{y})$$
```
# Implement the following functions
# Activation (sigmoid) function
def sigmoid(x):
return 1/(1 + np.exp(-x))
# Output (prediction) formula
def output_formula(features, weights, bias):
return sigmoid(np.dot(features,weights) + bias)
# Error (log-loss) formula
def error_formula(y, output):
return -y*np.log(output) - (1-y)*np.log(1-output)
# Gradient descent step
def update_weights(x, y, weights, bias, learnrate):
output = output_formula(x, weights, bias)
delta_error = (y - output)
weights += learnrate * delta_error* x
bias += learnrate * delta_error
return weights, bias
```
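As a quick numeric sanity check of the functions above (re-declared here so the snippet runs on its own): the sigmoid of 0 should be 0.5, and one update step on a positive example should raise the predicted probability.

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def output_formula(features, weights, bias):
    return sigmoid(np.dot(features, weights) + bias)

def update_weights(x, y, weights, bias, learnrate):
    delta = y - output_formula(x, weights, bias)
    return weights + learnrate * delta * x, bias + learnrate * delta

x = np.array([1.0, 2.0])
weights, bias = np.zeros(2), 0.0

before = output_formula(x, weights, bias)   # 0.5 with zero weights and bias
weights, bias = update_weights(x, 1, weights, bias, 0.1)
after = output_formula(x, weights, bias)

print(before, after)  # the prediction moves toward the label y = 1
```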
## Training function
This function will help us iterate the gradient descent algorithm through all the data, for a number of epochs. It will also plot the data, and some of the boundary lines obtained as we run the algorithm.
```
np.random.seed(44)
epochs = 100
learnrate = 0.01
def train(features, targets, epochs, learnrate, graph_lines=False):
errors = []
n_records, n_features = features.shape
last_loss = None
weights = np.random.normal(scale=1 / n_features**.5, size=n_features)
bias = 0
for e in range(epochs):
del_w = np.zeros(weights.shape)
for x, y in zip(features, targets):
output = output_formula(x, weights, bias)
error = error_formula(y, output)
weights, bias = update_weights(x, y, weights, bias, learnrate)
# Printing out the log-loss error on the training set
out = output_formula(features, weights, bias)
loss = np.mean(error_formula(targets, out))
errors.append(loss)
if e % (epochs / 10) == 0:
print("\n========== Epoch", e,"==========")
if last_loss and last_loss < loss:
print("Train loss: ", loss, " WARNING - Loss Increasing")
else:
print("Train loss: ", loss)
last_loss = loss
predictions = out > 0.5
accuracy = np.mean(predictions == targets)
print("Accuracy: ", accuracy)
if graph_lines and e % (epochs / 100) == 0:
display(-weights[0]/weights[1], -bias/weights[1])
# Plotting the solution boundary
plt.title("Solution boundary")
display(-weights[0]/weights[1], -bias/weights[1], 'black')
# Plotting the data
plot_points(features, targets)
plt.show()
# Plotting the error
plt.title("Error Plot")
plt.xlabel('Number of epochs')
plt.ylabel('Error')
plt.plot(errors)
plt.show()
```
## Time to train the algorithm!
When we run the function, we'll obtain the following:
- 10 updates with the current training loss and accuracy
- A plot of the data and some of the boundary lines obtained. The final one is in black. Notice how the lines get closer and closer to the best fit, as we go through more epochs.
- A plot of the error function. Notice how it decreases as we go through more epochs.
```
train(X, y, epochs, learnrate, True)
```
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
scores = pd.read_csv('numerical_data.csv')
scores
scores.drop('Unnamed: 0', axis=1, inplace=True)
scores
a_scores = scores[['num_features', 'a_fitness', 'a_accuracy', 'a_sensitivity', 'a_specificity']]
a_scores
kin_df = pd.read_csv('reported_kinases.csv', header=None, names=range(32))
kin_df
kin = []
with open('reported_kinases.csv', 'r') as f:
for line in f:
kin.append(line.strip().split(','))
kin
k_cnts = {}
for l in kin:
for k in l:
if k in k_cnts:
k_cnts[k] += 1
else:
k_cnts[k] = 1
kin_cnt_ser = pd.Series(list(k_cnts.values()), index=k_cnts.keys())
kin_cnt_ser.sort_values(ascending=False, inplace=True)
kin_cnt_ser
groups = []
with open('linked_groups.csv', 'r') as f:
for line in f:
groups.append(line.strip().split(','))
for i in range(len(groups)):
groups[i] = set(groups[i])
groups
new_col = [-2 for i in range(len(kin_cnt_ser.index))]
for i in range(len(kin_cnt_ser.index)):
    for j in range(len(groups)):
        if kin_cnt_ser.index[i] in groups[j]:
            new_col[i] = j
            break  # stop at the first matching group
    if new_col[i] == -2:
        new_col[i] = -1
-2 in new_col
kin_cnt_df = pd.DataFrame({'Kinase': kin_cnt_ser.index, 'Count': kin_cnt_ser, 'Group': new_col})
kin_cnt_df
kin_cnt_df['Group'].value_counts()
fig1 = plt.figure(figsize=(8, 6))
ax1_1 = fig1.add_subplot(111)
ax1_1.bar(kin_cnt_df['Group'].value_counts().index, kin_cnt_df['Group'].value_counts())
ax1_1.set_xticks(list(range(-1, 15)))
ax1_1.set_title('Number of Kinases Reported From Linked Groups')
ax1_1.set_xlabel('Linked Group (-1 for None of the Listed Groups)')
ax1_1.set_ylabel('Reported Kinases')
plt.show()
grp_freq = kin_cnt_df['Group'].value_counts()
# Total kinases across the linked groups (renamed so we don't shadow the built-in sum)
total = 0
for g in groups:
    total += len(g)
total
for i in range(grp_freq.size):
if grp_freq.index[i] == -1:
grp_freq.iloc[i] /= (190-46)
else:
grp_freq.iloc[i] /= len(groups[grp_freq.index[i]])
grp_freq.sort_values(ascending=False, inplace=True)
grp_freq
a_scores
import seaborn as sns
sns.distplot(a_scores['num_features'])
sns.distplot(a_scores['a_fitness'])
sns.distplot(a_scores['a_accuracy'])
sns.distplot(a_scores['a_sensitivity'])
sns.distplot(a_scores['a_specificity'])
print('{:20} Mean | StdDev'.format(' '))
print('{:20} --------|----------'.format(' '))
for col in a_scores:
print('{:20} : {:7.4} | {:7.4}'.format(col, a_scores[col].mean(), a_scores[col].std()))
kin_cnt_df[(kin_cnt_df['Group'] == -1) & (kin_cnt_df['Count'] > 2)]
kin_cnt_df[(kin_cnt_df['Group'] != -1) & (kin_cnt_df['Count'] > 2)]
```
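As an aside, the manual dictionary counting above is exactly what `collections.Counter` does in one call; a minimal sketch on the same nested-list shape as `kin` (the kinase names here are hypothetical):

```python
from collections import Counter

import pandas as pd

# Hypothetical stand-in for the kin list of lists
kin = [['AKT1', 'MTOR'], ['AKT1'], ['EGFR', 'MTOR', 'AKT1']]

# Flatten and count in one pass
counts = Counter(k for row in kin for k in row)

# The same sorted Series as kin_cnt_ser above
kin_cnt_ser = pd.Series(counts).sort_values(ascending=False)
print(kin_cnt_ser)
```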
```
import __init__
from __init__ import DATA_PATH
from __init__ import PACKAGE_PATH
from dce import DCE
from cluster import Cluster
import utilities
import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
import seaborn as sns
from keras import Model
%matplotlib inline
from rdkit.Chem import MACCSkeys
df = pd.read_csv(os.path.join(DATA_PATH,'BOD_RDKit_Descriptors_1063.csv'))
df['cleaned_bod'] = utilities.clean_out_of_bound(df['value (% BOD)'])
df['bi_class_partition'] = utilities.divide_classes(df['cleaned_bod'], [60])
true_label_biclass = df['bi_class_partition'].values
plt.hist(true_label_biclass)
from descriptor import rdkitDescriptors as rDesc
fps = rDesc.batch_compute_MACCSkeys(df['SMILES'])
sns.set(style='white', font_scale=3)
```
## 1. Baseline Model: Kmeans
```
from cluster import KMeans
kmeans_cluster = KMeans(n_clusters=2)
kmeans_cluster.build_model()
kmeans_cluster.train_model(fps, true_labels=true_label_biclass)
fig, ax=plt.subplots(1,1, figsize=(16,9))
utilities.tsne_2d_visulization(input_feat=fps,
plot_labels=true_label_biclass,
labels=['Non-Biodegradable', 'Biodegradable'],
verbose=0,
ax=ax)
ax.set_xticks([])
ax.set_yticks([])
ax.legend()
plt.savefig('1.baseline_model.pdf', bbox_inches='tight')
```
## 2. Two-step Training: Autoencoder + KMeans
```
from dimreducer import DeepAutoEncoder as DAE
dims = [167, 120, 60]
autoencoder = DAE(dims, act='relu')
autoencoder.build_model(norm=False)
history = autoencoder.train_model(fps, loss="binary_crossentropy", verbose=0, epochs=60)
plt.plot(history.history['loss'])
encoder = Model(inputs=autoencoder.model.input,
outputs=autoencoder.model.get_layer(name='embedding_layer').output)
hidden_feat = encoder.predict(fps)
kmeans_cluster = KMeans(n_clusters=2)
kmeans_cluster.build_model()
kmeans_cluster.train_model(hidden_feat, true_labels=true_label_biclass)
fig, ax=plt.subplots(1,1, figsize=(16,9))
utilities.tsne_2d_visulization(input_feat=hidden_feat,
plot_labels=true_label_biclass,
labels=['Non-Biodegradable', 'Biodegradable'],
verbose=0,
ax=ax)
ax.set_xticks([])
ax.set_yticks([])
ax.legend()
plt.savefig('2.two_step_train.pdf', bbox_inches='tight')
```
## 3. Simultaneous training: Autoencoder + Clustering
```
from dce import DCE
autoencoder_dims = [167, 120, 60]
cl_weight = 0.5
dce = DCE(autoencoder_dims, n_clusters=2, update_interval=20)
dce.build_model(norm=False)
loss = dce.train_model(
data_train=fps,
clustering_loss='kld', decoder_loss='binary_crossentropy',
verbose=0,clustering_loss_weight=cl_weight)
q, _ = dce.model.predict(fps)
y_pred = q.argmax(1)
encoder = Model(inputs=dce.model.input,
outputs=dce.model.get_layer(name='embedding_layer').output)
hidden_feats = encoder.predict(fps)
plt.plot(loss[0],label='Total test loss')
plt.plot(loss[1],label='Clustering test loss')
plt.plot(loss[2],label='Decoder test loss')
plt.legend()
plt.title('clustering weight: ' + str(cl_weight))
Cluster.true_label_metrics(true_label_biclass,y_pred,print_metric=False)
fig, ax=plt.subplots(1,1, figsize=(16,9))
utilities.tsne_2d_visulization(input_feat=hidden_feats,
plot_labels=true_label_biclass,
labels=['Non-Biodegradable', 'Biodegradable'],
verbose=0,
ax=ax)
ax.set_xticks([])
ax.set_yticks([])
ax.legend()
plt.savefig('3.co_train_clustering.pdf', bbox_inches='tight')
```
## 4. Simultaneous training: Autoencoder + Classification
```
class_names = np.array(["Non-biodegradable", "Biodegradable"])
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
fps, true_label_biclass, test_size=0.25, random_state=42)
cl_weight = 0.5
autoencoder_dims = [167, 120, 60]
dce = DCE(autoencoder_dims, n_clusters=2, update_interval=20)
dce.build_model(norm=False)
train_loss, test_loss = dce.train_model(data_train=X_train, labels_train=y_train,
data_test=X_test, labels_test=y_test,
clustering_loss='kld', decoder_loss='binary_crossentropy',
verbose=0,clustering_loss_weight=cl_weight)
q, _ = dce.model.predict(X_train)
train_y_pred = q.argmax(1)
q, _ = dce.model.predict(X_test)
test_y_pred = q.argmax(1)
encoder = Model(inputs=dce.model.input,
outputs=dce.model.get_layer(name='embedding_layer').output)
train_hidden_feats = encoder.predict(X_train)
test_hidden_feats = encoder.predict(X_test)
plt.plot(train_loss[0],label='Total train loss')
plt.plot(train_loss[1],label='Clustering train loss')
plt.plot(train_loss[2],label='Decoder train loss')
plt.plot(test_loss[0],label='Total test loss')
plt.plot(test_loss[1],label='Clustering test loss')
plt.plot(test_loss[2],label='Decoder test loss')
plt.legend()
plt.title('clustering weight: ' + str(cl_weight))
print('Train Score:')
Cluster.true_label_metrics(y_train,train_y_pred,print_metric=True)
print('Test Score:')
Cluster.true_label_metrics(y_test,test_y_pred,print_metric=True)
fig, ax =plt.subplots(figsize=(16,9))
utilities.tsne_2d_visulization_test_and_train(
train_feat=train_hidden_feats,
train_labels=y_train,
test_feat=test_hidden_feats,
test_labels=y_test,
labels=['Non-biodegradable', 'Biodegradable'],
verbose=0,
ax=ax)
ax.set_xticks([])
ax.set_yticks([])
ax.legend(fontsize=20)
plt.savefig('4.co_train_classifying.pdf', bbox_inches='tight')
```
<a href="https://colab.research.google.com/github/Leonas2000/lil-Beethoven/blob/main/Lil'_Beethoven.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
from google.colab import drive
drive.mount('/content/drive')
#@title Imports
import numpy as np
import sys
import os
from scipy.io import wavfile
!pip install python_speech_features
from python_speech_features import mfcc
import librosa
import matplotlib.pyplot as plt
from scipy.io import wavfile,savemat
import os.path
# Import keras main libraries
from keras.preprocessing import sequence
from keras.utils import np_utils
from keras.models import Sequential, load_model
from keras.layers import Dense, Dropout, Activation
from keras.regularizers import l2
from keras import callbacks
from keras.callbacks import History, ModelCheckpoint, EarlyStopping
#@title Preprocessing / CreateList
# Read args
Label_text_source = "/content/drive/MyDrive/Lil_Beethoven/Input/Txt/";
Output_dir = "/content/drive/MyDrive/Lil_Beethoven/Output/";
f = open(Output_dir + 'train.lst','w')
for filename in os.listdir(Label_text_source):
f.write(filename + '\n')
f.close()
```
:)
Split the train.lst file into train_tr.lst, train_va.lst and test.lst,
then run the code below once for each list, setting source_List to the corresponding file path (_tr, _va and test).
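One way to perform that split programmatically is sketched below; the 80/10/10 fractions and the shuffling seed are assumptions, not taken from the notebook:

```python
import random

def split_lines(lines, fracs=(0.8, 0.1, 0.1), seed=0):
    """Shuffle the file list and cut it into train/val/test slices."""
    lines = list(lines)
    random.Random(seed).shuffle(lines)
    n = len(lines)
    n_tr = int(fracs[0] * n)
    n_va = int(fracs[1] * n)
    return lines[:n_tr], lines[n_tr:n_tr + n_va], lines[n_tr + n_va:]

files = ['song_%d.txt' % i for i in range(10)]
tr, va, te = split_lines(files)
print(len(tr), len(va), len(te))  # 8 1 1
```

Each returned list can then be written to train_tr.lst, train_va.lst and test.lst with '\n'.join(...).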
```
#@title Preprocessing / WAV2mat_batch
# Parameters
hop_length_in = 512
n_bins_in = 252
bins_octaves_in = 36
win_step = 0.01
number_notes = 88
num_cep_def = 40
num_filt_def = 40
length_per_file = 4000000
# Read args
source_List = "/content/drive/MyDrive/Lil_Beethoven/Output/test.lst";
source_WAV = "/content/drive/MyDrive/Lil_Beethoven/Input/Wav/";
source_Txt = "/content/drive/MyDrive/Lil_Beethoven/Input/Txt/";
out_mat = "/content/drive/MyDrive/Lil_Beethoven/Output/";
# Output .npz
train2mat = []
labels2mat = []
contador = 0
# Get the name of the list
source_list_split = source_List.split('.')
source_list_split = source_list_split[0].split('/')
list_name = source_list_split[-1]
# Open the list
file_List = open( source_List , "r")
# Iterate on every file
for filename in file_List:
filename_split = filename.split('.')
#### CQT feature extraction ####
# Transform the wav to raw data and get the sampling rate
sampling_freq, stereo_vector = wavfile.read(source_WAV + filename_split[0] + '.wav')
win_len = 512/float(sampling_freq)
#plt.imshow( np.array(np.absolute(cqt_feat)))
#plt.show()
# Transform to mono
mono_vector = np.mean(stereo_vector, axis = 1)
# Extract CQT features
cqt_feat = np.absolute(librosa.cqt(mono_vector, sampling_freq, hop_length=hop_length_in,n_bins=n_bins_in,bins_per_octave=bins_octaves_in)).transpose()
#### LABELING ####
# Number of frames in the file
number_Frames = np.max( cqt_feat.shape[0])
# Aux_Vector of times
vector_aux = np.arange(1, number_Frames + 1)*win_len
# Binary labels - we need multiple labels at the same time to represent the chords
labels = np.zeros((number_Frames, number_notes))
# Open the align txt labels
file = open( source_Txt + filename_split[0] + '.txt' , "r")
#f = open(out_mat + filename_split[0] + 'label.lst','w')
# Loop over all the lines
for line in file:
line_split = line.split()
if line_split[0] == "OnsetTime":
print ("Preprocessing operations . . .")
else:
# Get the values from the text
init_range, fin_range, pitch = float(line_split[0]), float(line_split[1]), int(line_split[2])
# Pitch move to 0-87 range
pitch = pitch - 21;
# Get the range indexes
index_min = np.where(vector_aux >= init_range)
index_max = np.where(vector_aux - 0.01 > int((fin_range)*100)/float(100))
labels[index_min[0][0]:index_max[0][0],pitch] = 1
#If you want to save the labels to a txt file
"""for i in range( number_Frames):
for j in range( 88 ):
if labels[i][j] == 1:
f.write('%f' %vector_aux[i] + ' - ' + '%d\n' %j)
f.close()
"""
file.close()
"""
plt.figure()
plt.imshow( np.array(labels.transpose()),aspect='auto')
plt.figure()
plt.imshow( np.array(np.absolute(cqt_feat)), aspect='auto')
plt.show()
"""
while (len(train2mat) + len(cqt_feat)) >= length_per_file:
size_to_add = length_per_file - len(train2mat)
# Append to add to npz
train2mat.extend(cqt_feat[0:size_to_add,:])
# Append the labels
labels2mat.extend(labels[0:size_to_add,:])
train2mat = np.array(train2mat)
labels2mat = np.array(labels2mat)
# Plotting stuff
print (" Shape of CQT features is " + str(train2mat.shape) + " - Saved in " + out_mat + list_name + '/' + str(contador) + list_name)
print (" Shape of Labels is " + str(labels2mat.shape) + " - Saved in " + out_mat + list_name + '/' + str(contador) + list_name)
np.save('{}_X'.format(out_mat + list_name + '/' + str(contador) + list_name ), train2mat)
np.save('{}_y'.format(out_mat + list_name + '/' + str(contador) + list_name), labels2mat)
contador = contador + 1;
train2mat = []
labels2mat = []
cqt_feat = cqt_feat[size_to_add:,:]
labels = labels[size_to_add:,:]
if len(cqt_feat) == length_per_file:
# Append to add to npz
train2mat.extend(cqt_feat)
# Append the labels
labels2mat.extend(labels)
train2mat = np.array(train2mat)
labels2mat = np.array(labels2mat)
# Plotting stuff
print (" Shape of CQT features is " + str(train2mat.shape) + " - Saved in " + out_mat + list_name + '/' + str(contador) + list_name)
print (" Shape of Labels is " + str(labels2mat.shape) + " - Saved in " + out_mat + list_name + '/' + str(contador) + list_name)
np.save('{}_X'.format(out_mat + list_name + '/' + str(contador) + list_name ), train2mat)
np.save('{}_y'.format(out_mat + list_name + '/' + str(contador) + list_name), labels2mat)
contador = contador + 1;
train2mat = []
labels2mat = []
elif len(cqt_feat) > 0:
# Append to add to npz
train2mat.extend(cqt_feat)
# Append the labels
labels2mat.extend(labels)
train2mat = np.array(train2mat)
labels2mat = np.array(labels2mat)
"""
plt.figure()
plt.imshow( np.array(labels2mat.transpose()),aspect='auto')
plt.colorbar()
plt.figure()
plt.imshow( np.array(train2mat.transpose()), aspect='auto')
plt.colorbar()
plt.show()
"""
# Plotting stuff
print (" Shape of MFCC is " + str(train2mat.shape) + " - Saved in " + out_mat + list_name + '/' + str(contador) + list_name)
print (" Shape of Labels is " + str(labels2mat.shape) + " - Saved in " + out_mat + list_name + '/' + str(contador) + list_name)
np.save('{}_X'.format(out_mat + list_name + '/' + str(contador) + list_name ), train2mat)
np.save('{}_y'.format(out_mat + list_name + '/' + str(contador) + list_name), labels2mat)
#out_mat = "/content/drive/MyDrive/Lil_Beethoven/Output/";
#np.save('{}_X'.format(out_mat + list_name + '/' + str(contador) + list_name ), train2mat)
#np.save('{}_y'.format(out_mat + list_name + '/' + str(contador) + list_name), labels2mat)
#@title Preprocessing / mat2norm_batch
# Read args
source = "/content/drive/MyDrive/Lil_Beethoven/Output/"
train_folder = "train_tr/"
val_folder = "train_va/"
test_folder = "test/"
mean_X = []
min_X = []
max_X = []
print ("Get max - min ")
# Iterate on every file
for filename in os.listdir(source + train_folder):
if "tr_X" in filename:
X_train = np.load(source + train_folder + filename)
max_X.append(X_train.max())
min_X.append(X_train.min())
max_train = max(max_X)
min_train = min(min_X)
print ("Get mean")
total_length = 0
# Iterate on every file
for filename in os.listdir(source + train_folder):
if "tr_X" in filename:
X_train = np.load(source + train_folder + filename)
X_train_norm = (X_train - min_train)/(max_train - min_train)
# Compute the mean
mean_X.append(np.sum(X_train_norm, axis = 0))
total_length = total_length + len(X_train_norm)
train_mean = np.sum(mean_X, axis = 0)/float(total_length)
print ("Normalize ")
# Iterate on every file
for filename in os.listdir(source + train_folder):
filename_split = filename.split('.')
if "tr_X" in filename:
X_train = np.load(source + train_folder + filename)
X_train_norm = (X_train - min_train)/(max_train - min_train)
X_train_norm = X_train_norm - train_mean
print ("X_train file : " + filename)
np.save('{}'.format(source + train_folder + filename_split[0] ), X_train_norm)
for filename in os.listdir(source + val_folder):
filename_split = filename.split('.')
if "va_X" in filename:
X_val = np.load(source + val_folder+ filename)
X_val_norm = (X_val - min_train)/(max_train - min_train)
X_val_norm = X_val_norm - train_mean
print ("X_val file : " + filename)
np.save('{}'.format(source + val_folder + filename_split[0]), X_val_norm)
for filename in os.listdir(source + test_folder):
filename_split = filename.split('.')
if "_X" in filename:
X_test = np.load(source + test_folder + filename)
X_test_norm = (X_test - min_train)/(max_train - min_train)
X_test_norm = X_test_norm - train_mean
print ("X_test file : " + filename)
np.save('{}'.format(source + test_folder + filename_split[0] ), X_test_norm)
print (train_mean)
print (min_train)
print (max_train)
!cp /content/drive/MyDrive/Lil_Beethoven/Output/test/*.npy /content/drive/MyDrive/Lil_Beethoven/Output2/
!cp /content/drive/MyDrive/Lil_Beethoven/Output/train_tr/*.npy /content/drive/MyDrive/Lil_Beethoven/Output2/
!cp /content/drive/MyDrive/Lil_Beethoven/Output/train_va/*.npy /content/drive/MyDrive/Lil_Beethoven/Output2/
#@title Preprocessing / minidataset
# Read args
source = "/content/drive/MyDrive/Lil_Beethoven/Output2/";
# Iterate on every file
for filename in os.listdir(source):
if "tr_X" in filename:
X_train = np.load(source + filename)
print ("X_train file : " + filename)
elif "va_X" in filename:
X_val = np.load(source + filename)
print ("X_val file : " + filename)
elif "_X" in filename:
X_test = np.load(source + filename)
print ("X_test file : " + filename)
elif "tr_y" in filename:
y_tr = np.load(source + filename)
        print ("y_train file : " + filename)
elif "va_y" in filename:
y_va = np.load(source + filename)
        print ("y_val file : " + filename)
# Keep only the first 5000 frames for the mini-dataset
X_train = X_train[:5000, :]
X_val = X_val[:5000, :]
y_tr = y_tr[:5000, :]
y_va = y_va[:5000, :]
# Normalization
max_train = X_train.max()
min_train = X_train.min()
max_val = X_val.max()
min_val = X_val.min()
max_test = X_test.max()
min_test = X_test.min()
"""max_Global = max(max_train, max_val, max_test)
min_Global = min(min_train, min_val, min_test)
X_val_norm = (X_val - min_Global)/(max_Global - min_Global)
X_test_norm = (X_test - min_Global)/(max_Global - min_Global)
X_train_norm = (X_train - min_Global)/(max_Global - min_Global)"""
X_val_norm = (X_val - min_train)/(max_train - min_train)
X_test_norm = (X_test - min_train)/(max_train - min_train)
X_train_norm = (X_train - min_train)/(max_train - min_train)
# Compute the mean
train_mean = np.mean(X_train_norm, axis = 0)
# Substract it
X_train_norm = X_train_norm - train_mean
X_val_norm = X_val_norm - train_mean
X_test_norm = X_test_norm - train_mean
# Get the name
np.save('{}X_train_norm'.format(source + 'normalized/' ), X_train_norm)
np.save('{}X_val_norm'.format(source + 'normalized/' ), X_val_norm)
np.save('{}y_train_norm'.format(source + 'normalized/' ), y_tr)
np.save('{}y_val_norm'.format(source + 'normalized/' ), y_va)
#@title Train 1
'''###### TRAIN 1: DNN - 3 layers - 256 units per layer ######'''
# Set the random seed so that we get the same results with the same parameters
np.random.seed(400)
mini_batch_size, num_epochs = 100, 100
input_size = 252
number_units = 256
number_layers = 3
number_classes = 88
best_accuracy = 0
contador_bad = 0
#Arg inputs
data_directory = "/content/drive/MyDrive/Lil_Beethoven/Output/"
weights_dir = "/content/drive/MyDrive/Lil_Beethoven/Saved_weights/"
print ('Build model...')
model = Sequential()
history = History()
print ('Load validation data...')
X_val = np.load(data_directory + "train_va/" + str(0) + "train_va_X.npy" )
y_val = np.load(data_directory + "train_va/" + str(0) + "train_va_y.npy" )
# Count the number of files in the training folder
num_tr_batches = len([name for name in os.listdir(data_directory + "train_tr/")])/2
num_tr_batches = int(num_tr_batches)
print ('Loading all data')
for i in range(num_tr_batches):
print ("Batching..." + str(i) + "train_tr_X.npy")
X_train = np.array(np.load(data_directory + "train_tr/" + str(i) + "train_tr_X.npy" ))
y_train = np.array(np.load(data_directory + "train_tr/" + str(i) + "train_tr_y.npy" ))
if i == 0:
X = X_train
y = y_train
else:
X = np.concatenate((X,X_train), axis = 0)
y = np.concatenate((y,y_train), axis = 0)
print (X.shape)
print ("Adding 1st layer of {} units".format(number_units) )
model.add(Dense(number_units, input_shape=(input_size,), kernel_initializer='normal', activation='relu'))
model.add(Dropout(0.2))
for i in range(number_layers-1):
print ("Adding %d" % (i+2) + "th layer of %d" % number_units + " units")
model.add(Dense(number_units, kernel_initializer='normal', activation='relu'))
model.add(Dropout(0.2))
print (" Adding classification layer")
model.add(Dense(number_classes, kernel_initializer='normal', activation='sigmoid'))
# Compile model
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mean_squared_error'])
checkpointer = ModelCheckpoint(filepath= weights_dir + "weights.hdf5", verbose=1, save_best_only=False)
early = EarlyStopping(monitor='val_loss', min_delta=0, patience=20, verbose=1, mode='auto')
training_log = open(weights_dir + "Training.log", "w")
print ('Train . . .')
# Fit the model on the full training set and keep the training history
save = model.fit(X, y,batch_size=mini_batch_size,epochs = num_epochs,validation_data=(X_val, y_val),verbose=1,callbacks=[checkpointer,early])
training_log.write(str(save.history) + "\n")
training_log.close()
#@title train load (don't need now)
'''###### TRAIN (resume): load the saved DNN and continue training ######'''
# Set the random seed so that we get the same results with the same parameters
np.random.seed(400)
mini_batch_size, num_epochs = 100, 100
input_size = 40
number_units = 256
number_layers = 3
number_classes = 88
best_accuracy = 0
#Arg inputs
data_directory = "/content/drive/MyDrive/Lil_Beethoven/Output/"
weights_dir = "/content/drive/MyDrive/Lil_Beethoven/Saved_weights/"
print ('Load model...' )
model = load_model(weights_dir + "weights.hdf5")
starting_epoch = 13
print ('Load validation data...')
X_val = np.load(data_directory + "train_va/" + str(0) + "train_va_X.npy" )
y_val = np.load(data_directory + "train_va/" + str(0) + "train_va_y.npy" )
# Count the number of files in the training folder
num_tr_batches = len([name for name in os.listdir(data_directory + "train_tr/")])/2
num_tr_batches = int(num_tr_batches)
print ('Loading all data')
for i in range(num_tr_batches):
print ("Batching..." + str(i) + "train_tr_X.npy")
X_train = np.array(np.load(data_directory + "train_tr/" + str(i) + "train_tr_X.npy" ))
y_train = np.array(np.load(data_directory + "train_tr/" + str(i) + "train_tr_y.npy" ))
if i == 0:
X = X_train
y = y_train
else:
X = np.concatenate((X,X_train), axis = 0)
y = np.concatenate((y,y_train), axis = 0)
checkpointer = ModelCheckpoint(filepath= weights_dir + "weights.hdf5", verbose=1, save_best_only=False)
early = EarlyStopping(monitor='val_loss', min_delta=0, patience=20, verbose=1, mode='auto')
training_log = open(weights_dir + "Training.log", "w")
print ('Train . . .')
# Fit the model on the full training set and keep the training history
save = model.fit(X, y,batch_size=mini_batch_size,epochs = num_epochs,validation_data=(X_val, y_val),verbose=1,callbacks=[checkpointer,early])
training_log.write(str(save.history) + "\n")
training_log.close()
#@title Text 2 text
'''###### TEST: evaluate the trained DNN on the test set ######'''
# Set the random seed so that we get the same results with the same parameters
np.random.seed(400)
mini_batch_size, num_epochs = 100, 50
input_size = 252
number_units = 256
number_layers = 3
number_classes = 88
size_samples = 100
data_directory = "/content/drive/MyDrive/Lil_Beethoven/Output/test/"
weights_dir = "/content/drive/MyDrive/Lil_Beethoven/Saved_weights/"
X = []
y = []
num_test_batches = len([name for name in os.listdir(data_directory )])/2
num_test_batches = int(num_test_batches)
print ('Loading test data')
for i in range(num_test_batches):
print ("Batching..." + str(i) + "test_X.npy")
X_test = np.array(np.load(data_directory + str(i) + "test_X.npy" ))
y_test = np.array(np.load(data_directory + str(i) + "test_y.npy" ))
if i == 0:
X = X_test
y = y_test
else:
X = np.concatenate((X,X_test), axis = 0)
y = np.concatenate((y,y_test), axis = 0)
# Load the model
model = load_model(weights_dir + "weights.hdf5")
TP = 0
FP = 0
FN = 0
print ("Predicting model. . . ")
predictions = model.predict(X, batch_size=mini_batch_size, verbose = 1)
predictions = np.array(predictions).round()
predictions[predictions > 1] = 1
np.save('{}predictions'.format(weights_dir), predictions)
print ("\nCalculating accuracy. . .")
TP = np.count_nonzero(np.logical_and( predictions == 1, y == 1 ))
FN = np.count_nonzero(np.logical_and( predictions == 0, y == 1 ))
FP = np.count_nonzero(np.logical_and( predictions == 1, y == 0 ))
print("TP:" + str(TP), "FP:" + str(FP), "FN:" + str(FN))
if (TP + FN) > 0 and (TP +FP) > 0:
R = TP/float(TP + FN)
P = TP/float(TP + FP)
A = 100*TP/float(TP + FP + FN)
if P == 0 and R == 0:
F = 0
else:
F = 100*2*P*R/(P + R)
else:
A = 0
F = 0
R = 0
P = 0
print ('\n F-measure pre-processed: ')
print (F)
print ('\n Accuracy pre-processed: ')
print (A)
print ("\nCleaning model . . .")
for a in range(predictions.shape[1]):
    for j in range(2, predictions.shape[0]-3):
        # Fill a two-frame gap inside a note: 1 0 0 1 -> 1 1 1 1
        if predictions[j-1,a] == 1 and predictions[j,a] == 0 and predictions[j+1,a] == 0 and predictions[j+2,a] == 1:
            predictions[j,a] = 1
            predictions[j+1,a] = 1
        # Remove an isolated two-frame activation: 0 0 1 1 0 0 -> all zeros
        if predictions[j-2,a] == 0 and predictions[j-1,a] == 0 and predictions[j,a] == 1 and predictions[j+1,a] == 1 and predictions[j+2,a] == 0 and predictions[j+3,a] == 0:
            predictions[j,a] = 0
            predictions[j+1,a] = 0
        # Remove an isolated one-frame activation: 0 1 0 0 -> 0 0 0 0
        if predictions[j-1,a] == 0 and predictions[j,a] == 1 and predictions[j+1,a] == 0 and predictions[j+2,a] == 0:
            predictions[j,a] = 0
        # Fill a one-frame gap inside a note: 1 0 1 1 -> 1 1 1 1
        if predictions[j-1,a] == 1 and predictions[j,a] == 0 and predictions[j+1,a] == 1 and predictions[j+2,a] == 1:
            predictions[j,a] = 1
print ("Calculating accuracy after cleaning. . .")
np.save('{}predictions_post'.format(weights_dir), predictions)
TP = np.count_nonzero(np.logical_and( predictions == 1, y == 1 ))
FN = np.count_nonzero(np.logical_and( predictions == 0, y == 1 ))
FP = np.count_nonzero(np.logical_and( predictions == 1, y == 0 ))
if (TP + FN) > 0 and (TP +FP) > 0:
R = TP/float(TP + FN)
P = TP/float(TP + FP)
A = 100*TP/float(TP + FP + FN)
if P == 0 and R == 0:
F = 0
else:
F = 100*2*P*R/(P + R)
else:
A = 0
F = 0
R = 0
P = 0
print ('\n F-measure post-processed: ')
print (F)
print ('\n Accuracy post-processed: ')
print (A)
main_data = open(weights_dir + "Accuracy.lst", "w")
# R/P/A/F were overwritten by the post-processing pass above, so recompute
# the pre-processing metrics from the predictions saved before cleaning.
pred_pre = np.load(weights_dir + "predictions.npy")
TP_pre = np.count_nonzero(np.logical_and(pred_pre == 1, y == 1))
FN_pre = np.count_nonzero(np.logical_and(pred_pre == 0, y == 1))
FP_pre = np.count_nonzero(np.logical_and(pred_pre == 1, y == 0))
R_pre = TP_pre/float(TP_pre + FN_pre) if (TP_pre + FN_pre) > 0 else 0
P_pre = TP_pre/float(TP_pre + FP_pre) if (TP_pre + FP_pre) > 0 else 0
A_pre = 100*TP_pre/float(TP_pre + FP_pre + FN_pre) if (TP_pre + FP_pre + FN_pre) > 0 else 0
F_pre = 100*2*P_pre*R_pre/(P_pre + R_pre) if (P_pre + R_pre) > 0 else 0
main_data.write("R-pre = " + str("%.6f" % R_pre) + "\n")
main_data.write("P-pre = " + str("%.6f" % P_pre) + "\n")
main_data.write("A-pre = " + str("%.6f" % A_pre) + "\n")
main_data.write("F-pre = " + str("%.6f" % F_pre) + "\n")
main_data.write("R-post = " + str("%.6f" % R) + "\n")
main_data.write("P-post = " + str("%.6f" % P) + "\n")
main_data.write("A-post = " + str("%.6f" % A) + "\n")
main_data.write("F-post = " + str("%.6f" % F) + "\n")
main_data.close()
#@title Plot result
'''###### PLOT: compare post-processed predictions with the ground truth ######'''
# Set the random seed so that we get the same results with the same parameters
np.random.seed(400)
mini_batch_size, num_epochs = 100, 50
input_size = 252
number_units = 256
number_layers = 3
number_classes = 88
data_directory = "/content/drive/MyDrive/Lil_Beethoven/Output/test/"
weights_dir = "/content/drive/MyDrive/Lil_Beethoven/Saved_weights/"
predictions_draw = []
y_draw = []
print ('Predict . . . ')
num_test_batches = len([name for name in os.listdir(data_directory)])/2
num_test_batches = int(num_test_batches)
y = []
print ('Loading test data')
for i in range(num_test_batches):
print ("Batching..." + str(i) + "test_X.npy")
y_test = np.array(np.load(data_directory + str(i) + "test_y.npy" ))
if i == 0:
y = y_test
else:
y = np.concatenate((y,y_test), axis = 0)
predictions = np.load(weights_dir + "predictions_post.npy" )
plt.figure()
plt.subplot(211)
plt.imshow(predictions.transpose(),cmap='Greys',aspect='auto')
plt.subplot(212)
plt.imshow(y.transpose(),cmap='Greys',aspect='auto')
plt.show()
```
# Neural networks with PyTorch
Deep learning networks tend to be massive, with dozens or hundreds of layers; that's where the term "deep" comes from. You can build one of these deep networks using only weight matrices as we did in the previous notebook, but in general it's very cumbersome and difficult to implement. PyTorch has a module `nn` that provides an efficient, convenient way to build large neural networks.
```
# Import necessary packages
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import torch
import helper
import matplotlib.pyplot as plt
```
Now we're going to build a larger network that can solve a (formerly) difficult problem: identifying the digit in an image. Here we'll use the MNIST dataset, which consists of greyscale handwritten digits. Each image is 28x28 pixels; you can see a sample below
<img src='assets/mnist.png'>
Our goal is to build a neural network that can take one of these images and predict the digit in the image.
First up, we need to get our dataset. This is provided through the `torchvision` package. The code below will download the MNIST dataset, then create training and test datasets for us. Don't worry too much about the details here, you'll learn more about this later.
```
### Run this cell
from torchvision import datasets, transforms
# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,)),
])
# Download and load the training data
trainset = datasets.MNIST('~/.pytorch/MNIST_data/', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
```
We have the training data loaded into `trainloader` and we make that an iterator with `iter(trainloader)`. Later, we'll use this to loop through the dataset for training, like
```python
for image, label in trainloader:
## do things with images and labels
```
You'll notice I created the `trainloader` with a batch size of 64, and `shuffle=True`. The batch size is the number of images we get in one iteration from the data loader and pass through our network, often called a *batch*. And `shuffle=True` tells it to shuffle the dataset every time we start going through the data loader again. But here I'm just grabbing the first batch so we can check out the data. We can see below that `images` is just a tensor with size `(64, 1, 28, 28)`. So, 64 images per batch, 1 color channel, and 28x28 images.
```
dataiter = iter(trainloader)
images, labels = next(dataiter)  # dataiter.next() was removed in recent PyTorch versions
print(type(images))
print(images.shape)
print(labels.shape)
```
This is what one of the images looks like.
```
plt.imshow(images[1].numpy().squeeze(), cmap='Greys_r');
```
First, let's try to build a simple network for this dataset using weight matrices and matrix multiplications. Then, we'll see how to do it using PyTorch's `nn` module which provides a much more convenient and powerful method for defining network architectures.
The networks you've seen so far are called *fully-connected* or *dense* networks. Each unit in one layer is connected to each unit in the next layer. In fully-connected networks, the input to each layer must be a one-dimensional vector (which can be stacked into a 2D tensor as a batch of multiple examples). However, our images are 28x28 2D tensors, so we need to convert them into 1D vectors. Thinking about sizes, we need to convert the batch of images with shape `(64, 1, 28, 28)` to a shape of `(64, 784)`; 784 is 28 times 28. This is typically called *flattening*: we flatten the 2D images into 1D vectors.
Previously you built a network with one output unit. Here we need 10 output units, one for each digit. We want our network to predict the digit shown in an image, so what we'll do is calculate probabilities that the image is of any one digit or class. This ends up being a discrete probability distribution over the classes (digits) that tells us the most likely class for the image. That means we need 10 output units for the 10 classes (digits). We'll see how to convert the network output into a probability distribution next.
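Once those 10 outputs are turned into probabilities, the predicted digit is simply the index of the largest one. A minimal sketch (the random tensor here is just a stand-in for real network output):

```python
import torch

# Stand-in for network output: a probability distribution over 10 classes
# for a batch of 64 images
ps = torch.softmax(torch.randn(64, 10), dim=1)

predicted_digits = ps.argmax(dim=1)  # most likely class for each image
print(predicted_digits.shape)        # torch.Size([64])
```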
> **Exercise:** Flatten the batch of images `images`. Then build a multi-layer network with 784 input units, 256 hidden units, and 10 output units using random tensors for the weights and biases. For now, use a sigmoid activation for the hidden layer. Leave the output layer without an activation, we'll add one that gives us a probability distribution next.
```
## Solution
def activation(x):
return 1/(1+torch.exp(-x))
# Flatten the input images
inputs = images.view(images.shape[0], -1)
# Create parameters
w1 = torch.randn(784, 256)
b1 = torch.randn(256)
w2 = torch.randn(256, 10)
b2 = torch.randn(10)
h = activation(torch.mm(inputs, w1) + b1)
out = torch.mm(h, w2) + b2
```
Now we have 10 outputs for our network. We want to pass in an image to our network and get out a probability distribution over the classes that tells us the likely class(es) the image belongs to. Something that looks like this:
<img src='assets/image_distribution.png' width=500px>
Here we see that the probability for each class is roughly the same. This represents an untrained network: it hasn't seen any data yet, so it just returns a uniform distribution with equal probabilities for each class.
To calculate this probability distribution, we often use the [**softmax** function](https://en.wikipedia.org/wiki/Softmax_function). Mathematically this looks like
$$
\Large \sigma(x_i) = \cfrac{e^{x_i}}{\sum_k^K{e^{x_k}}}
$$
What this does is squish each input $x_i$ between 0 and 1 and normalize the values to give you a proper probability distribution where the probabilities sum to one.
> **Exercise:** Implement a function `softmax` that performs the softmax calculation and returns probability distributions for each example in the batch. Note that you'll need to pay attention to the shapes when doing this. If you have a tensor `a` with shape `(64, 10)` and a tensor `b` with shape `(64,)`, doing `a/b` will give you an error because PyTorch will try to do the division across the columns (called broadcasting) but you'll get a size mismatch. The way to think about this is for each of the 64 examples, you only want to divide by one value, the sum in the denominator. So you need `b` to have a shape of `(64, 1)`. This way PyTorch will divide the 10 values in each row of `a` by the one value in each row of `b`. Pay attention to how you take the sum as well. You'll need to define the `dim` keyword in `torch.sum`. Setting `dim=0` takes the sum across the rows while `dim=1` takes the sum across the columns.
```
## Solution
def softmax(x):
return torch.exp(x)/torch.sum(torch.exp(x), dim=1).view(-1, 1)
probabilities = softmax(out)
# Does it have the right shape? Should be (64, 10)
print(probabilities.shape)
# Does it sum to 1?
print(probabilities.sum(dim=1))
```
## Building networks with PyTorch
PyTorch provides a module `nn` that makes building networks much simpler. Here I'll show you how to build the same one as above with 784 inputs, 256 hidden units, 10 output units and a softmax output.
```
from torch import nn
class Network(nn.Module):
def __init__(self):
super().__init__()
# Inputs to hidden layer linear transformation
self.hidden = nn.Linear(784, 256)
# Output layer, 10 units - one for each digit
self.output = nn.Linear(256, 10)
# Define sigmoid activation and softmax output
self.sigmoid = nn.Sigmoid()
self.softmax = nn.Softmax(dim=1)
def forward(self, x):
# Pass the input tensor through each of our operations
x = self.hidden(x)
x = self.sigmoid(x)
x = self.output(x)
x = self.softmax(x)
return x
```
Let's go through this bit by bit.
```python
class Network(nn.Module):
```
Here we're inheriting from `nn.Module`. Combined with `super().__init__()` this creates a class that tracks the architecture and provides a lot of useful methods and attributes. It is mandatory to inherit from `nn.Module` when you're creating a class for your network. The name of the class itself can be anything.
```python
self.hidden = nn.Linear(784, 256)
```
This line creates a module for a linear transformation, $x\mathbf{W} + b$, with 784 inputs and 256 outputs and assigns it to `self.hidden`. The module automatically creates the weight and bias tensors which we'll use in the `forward` method. You can access the weight and bias tensors once the network (`net`) is created with `net.hidden.weight` and `net.hidden.bias`.
```python
self.output = nn.Linear(256, 10)
```
Similarly, this creates another linear transformation with 256 inputs and 10 outputs.
```python
self.sigmoid = nn.Sigmoid()
self.softmax = nn.Softmax(dim=1)
```
Here I defined operations for the sigmoid activation and softmax output. Setting `dim=1` in `nn.Softmax(dim=1)` calculates softmax across the columns.
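A quick standalone check of what the `dim` argument does (not part of the network above):

```python
import torch
from torch import nn

t = torch.tensor([[1.0, 2.0, 3.0],
                  [1.0, 1.0, 1.0]])

row_softmax = nn.Softmax(dim=1)(t)  # softmax across the columns of each row
print(row_softmax.sum(dim=1))       # each row now sums to 1
```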
```python
def forward(self, x):
```
PyTorch networks created with `nn.Module` must have a `forward` method defined. It takes in a tensor `x` and passes it through the operations you defined in the `__init__` method.
```python
x = self.hidden(x)
x = self.sigmoid(x)
x = self.output(x)
x = self.softmax(x)
```
Here the input tensor `x` is passed through each operation and reassigned to `x`. We can see that the input tensor goes through the hidden layer, then a sigmoid function, then the output layer, and finally the softmax function. It doesn't matter what you name the variables here, as long as the inputs and outputs of the operations match the network architecture you want to build. The order in which you define things in the `__init__` method doesn't matter, but you'll need to sequence the operations correctly in the `forward` method.
Now we can create a `Network` object.
```
# Create the network and look at its text representation
model = Network()
model
```
You can define the network somewhat more concisely and clearly using the `torch.nn.functional` module. This is the most common way you'll see networks defined as many operations are simple element-wise functions. We normally import this module as `F`, `import torch.nn.functional as F`.
```
import torch.nn.functional as F
class Network(nn.Module):
def __init__(self):
super().__init__()
# Inputs to hidden layer linear transformation
self.hidden = nn.Linear(784, 256)
# Output layer, 10 units - one for each digit
self.output = nn.Linear(256, 10)
def forward(self, x):
# Hidden layer with sigmoid activation
x = F.sigmoid(self.hidden(x))
# Output layer with softmax activation
x = F.softmax(self.output(x), dim=1)
return x
```
### Activation functions
So far we've only been looking at the softmax activation, but in general any function can be used as an activation function. The only requirement is that for a network to approximate a non-linear function, the activation functions must be non-linear. Here are a few more examples of common activation functions: Tanh (hyperbolic tangent), and ReLU (rectified linear unit).
<img src="assets/activation.png" width=700px>
In practice, the ReLU function is used almost exclusively as the activation function for hidden layers.
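For reference, here's how these activations behave on a few sample values, using PyTorch's built-in functions (a standalone sketch, separate from the network code above):

```python
import torch

x = torch.linspace(-2.0, 2.0, steps=5)  # tensor([-2., -1., 0., 1., 2.])

print(torch.sigmoid(x))  # squashes values into (0, 1)
print(torch.tanh(x))     # squashes values into (-1, 1)
print(torch.relu(x))     # zeroes out all negative values
```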
### Your Turn to Build a Network
<img src="assets/mlp_mnist.png" width=600px>
> **Exercise:** Create a network with 784 input units, a hidden layer with 128 units and a ReLU activation, then a hidden layer with 64 units and a ReLU activation, and finally an output layer with a softmax activation as shown above. You can use a ReLU activation with the `nn.ReLU` module or `F.relu` function.
It's good practice to name your layers by their type of network, for instance 'fc' to represent a fully-connected layer. As you code your solution, use `fc1`, `fc2`, and `fc3` as your layer names.
```
## Solution
class Network(nn.Module):
def __init__(self):
super().__init__()
# Defining the layers, 128, 64, 10 units each
self.fc1 = nn.Linear(784, 128)
self.fc2 = nn.Linear(128, 64)
# Output layer, 10 units - one for each digit
self.fc3 = nn.Linear(64, 10)
def forward(self, x):
''' Forward pass through the network, returns the output logits '''
x = self.fc1(x)
x = F.relu(x)
x = self.fc2(x)
x = F.relu(x)
x = self.fc3(x)
x = F.softmax(x, dim=1)
return x
model = Network()
model
```
### Initializing weights and biases
The weights and such are automatically initialized for you, but it's possible to customize how they are initialized. The weights and biases are tensors attached to the layer you defined, you can get them with `model.fc1.weight` for instance.
```
print(model.fc1.weight)
print(model.fc1.bias)
```
For custom initialization, we want to modify these tensors in place. These are actually autograd *Variables*, so we need to get back the actual tensors with `model.fc1.weight.data`. Once we have the tensors, we can fill them with zeros (for biases) or random normal values.
```
# Set biases to all zeros
model.fc1.bias.data.fill_(0)
# sample from random normal with standard dev = 0.01
model.fc1.weight.data.normal_(std=0.01)
```
### Forward pass
Now that we have a network, let's see what happens when we pass in an image.
```
# Grab some data
dataiter = iter(trainloader)
images, labels = next(dataiter)
# Resize images into a 1D vector, new shape is (batch size, color channels, image pixels)
images.resize_(64, 1, 784)
# or images.resize_(images.shape[0], 1, 784) to automatically get batch size
# Forward pass through the network
img_idx = 0
ps = model.forward(images[img_idx,:])
img = images[img_idx]
helper.view_classify(img.view(1, 28, 28), ps)
```
As you can see above, our network has basically no idea what this digit is. That's because we haven't trained it yet; all the weights are random!
### Using `nn.Sequential`
PyTorch provides a convenient way to build networks like this where a tensor is passed sequentially through operations, `nn.Sequential` ([documentation](https://pytorch.org/docs/master/nn.html#torch.nn.Sequential)). Using this to build the equivalent network:
```
# Hyperparameters for our network
input_size = 784
hidden_sizes = [128, 64]
output_size = 10
# Build a feed-forward network
model = nn.Sequential(nn.Linear(input_size, hidden_sizes[0]),
nn.ReLU(),
nn.Linear(hidden_sizes[0], hidden_sizes[1]),
nn.ReLU(),
nn.Linear(hidden_sizes[1], output_size),
nn.Softmax(dim=1))
print(model)
# Forward pass through the network and display output
images, labels = next(iter(trainloader))
images.resize_(images.shape[0], 1, 784)
ps = model.forward(images[0,:])
helper.view_classify(images[0].view(1, 28, 28), ps)
```
The operations are available by passing in the appropriate index. For example, if you want to get the first Linear operation and look at the weights, you'd use `model[0]`.
```
print(model[0])
model[0].weight
```
You can also pass in an `OrderedDict` to name the individual layers and operations, instead of using incremental integers. Note that dictionary keys must be unique, so _each operation must have a different name_.
```
from collections import OrderedDict
model = nn.Sequential(OrderedDict([
('fc1', nn.Linear(input_size, hidden_sizes[0])),
('relu1', nn.ReLU()),
('fc2', nn.Linear(hidden_sizes[0], hidden_sizes[1])),
('relu2', nn.ReLU()),
('output', nn.Linear(hidden_sizes[1], output_size)),
('softmax', nn.Softmax(dim=1))]))
model
```
Now you can access layers either by integer or the name
```
print(model[0])
print(model.fc1)
```
In the next notebook, we'll see how we can train a neural network to accurately predict the numbers appearing in the MNIST images.
**Chapter 10 – Introduction to Artificial Neural Networks**
_This notebook contains all the sample code and solutions to the exercises in chapter 10._
# Setup
First, let's make sure this notebook works well in both Python 2 and 3, import a few common modules, ensure Matplotlib plots figures inline, and prepare a function to save the figures:
```
# To support both python 2 and python 3
from __future__ import division, print_function, unicode_literals
# Common imports
import numpy as np
import os
import tensorflow as tf  # used by reset_graph() below
# to make this notebook's output stable across runs
def reset_graph(seed=42):
tf.reset_default_graph()
tf.set_random_seed(seed)
np.random.seed(seed)
# To plot pretty figures
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "ann"
def save_fig(fig_id, tight_layout=True):
    folder = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
    if not os.path.isdir(folder):
        os.makedirs(folder)  # create the output folder if it doesn't exist yet
    path = os.path.join(folder, fig_id + ".png")
    print("Saving figure", fig_id)
    if tight_layout:
        plt.tight_layout()
    plt.savefig(path, format='png', dpi=300)
```
# Perceptrons
```
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import Perceptron
iris = load_iris()
X = iris.data[:, (2, 3)] # petal length, petal width
y = (iris.target == 0).astype(int)  # np.int is deprecated; use the builtin int
per_clf = Perceptron(random_state=42)
per_clf.fit(X, y)
y_pred = per_clf.predict([[2, 0.5]])
y_pred
a = -per_clf.coef_[0][0] / per_clf.coef_[0][1]
b = -per_clf.intercept_ / per_clf.coef_[0][1]
axes = [0, 5, 0, 2]
x0, x1 = np.meshgrid(
np.linspace(axes[0], axes[1], 500).reshape(-1, 1),
np.linspace(axes[2], axes[3], 200).reshape(-1, 1),
)
X_new = np.c_[x0.ravel(), x1.ravel()]
y_predict = per_clf.predict(X_new)
zz = y_predict.reshape(x0.shape)
plt.figure(figsize=(10, 4))
plt.plot(X[y==0, 0], X[y==0, 1], "bs", label="Not Iris-Setosa")
plt.plot(X[y==1, 0], X[y==1, 1], "yo", label="Iris-Setosa")
plt.plot([axes[0], axes[1]], [a * axes[0] + b, a * axes[1] + b], "k-", linewidth=3)
from matplotlib.colors import ListedColormap
custom_cmap = ListedColormap(['#9898ff', '#fafab0'])
plt.contourf(x0, x1, zz, cmap=custom_cmap, linewidth=5)
plt.xlabel("Petal length", fontsize=14)
plt.ylabel("Petal width", fontsize=14)
plt.legend(loc="lower right", fontsize=14)
plt.axis(axes)
save_fig("perceptron_iris_plot")
plt.show()
```
# Activation functions
```
def logit(z):
return 1 / (1 + np.exp(-z))
def relu(z):
return np.maximum(0, z)
def derivative(f, z, eps=0.000001):
return (f(z + eps) - f(z - eps))/(2 * eps)
z = np.linspace(-5, 5, 200)
plt.figure(figsize=(11,4))
plt.subplot(121)
plt.plot(z, np.sign(z), "r-", linewidth=2, label="Step")
plt.plot(z, logit(z), "g--", linewidth=2, label="Logit")
plt.plot(z, np.tanh(z), "b-", linewidth=2, label="Tanh")
plt.plot(z, relu(z), "m-.", linewidth=2, label="ReLU")
plt.grid(True)
plt.legend(loc="center right", fontsize=14)
plt.title("Activation functions", fontsize=14)
plt.axis([-5, 5, -1.2, 1.2])
plt.subplot(122)
plt.plot(z, derivative(np.sign, z), "r-", linewidth=2, label="Step")
plt.plot(0, 0, "ro", markersize=5)
plt.plot(0, 0, "rx", markersize=10)
plt.plot(z, derivative(logit, z), "g--", linewidth=2, label="Logit")
plt.plot(z, derivative(np.tanh, z), "b-", linewidth=2, label="Tanh")
plt.plot(z, derivative(relu, z), "m-.", linewidth=2, label="ReLU")
plt.grid(True)
#plt.legend(loc="center right", fontsize=14)
plt.title("Derivatives", fontsize=14)
plt.axis([-5, 5, -0.2, 1.2])
save_fig("activation_functions_plot")
plt.show()
def heaviside(z):
return (z >= 0).astype(z.dtype)
def sigmoid(z):
return 1/(1+np.exp(-z))
def mlp_xor(x1, x2, activation=heaviside):
return activation(-activation(x1 + x2 - 1.5) + activation(x1 + x2 - 0.5) - 0.5)
x1s = np.linspace(-0.2, 1.2, 100)
x2s = np.linspace(-0.2, 1.2, 100)
x1, x2 = np.meshgrid(x1s, x2s)
z1 = mlp_xor(x1, x2, activation=heaviside)
z2 = mlp_xor(x1, x2, activation=sigmoid)
plt.figure(figsize=(10,4))
plt.subplot(121)
plt.contourf(x1, x2, z1)
plt.plot([0, 1], [0, 1], "gs", markersize=20)
plt.plot([0, 1], [1, 0], "y^", markersize=20)
plt.title("Activation function: heaviside", fontsize=14)
plt.grid(True)
plt.subplot(122)
plt.contourf(x1, x2, z2)
plt.plot([0, 1], [0, 1], "gs", markersize=20)
plt.plot([0, 1], [1, 0], "y^", markersize=20)
plt.title("Activation function: sigmoid", fontsize=14)
plt.grid(True)
```
# FNN for MNIST
## using tf.learn
```
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/")
X_train = mnist.train.images
X_test = mnist.test.images
y_train = mnist.train.labels.astype("int")
y_test = mnist.test.labels.astype("int")
import tensorflow as tf
config = tf.contrib.learn.RunConfig(tf_random_seed=42) # not shown in the book
feature_cols = tf.contrib.learn.infer_real_valued_columns_from_input(X_train)
dnn_clf = tf.contrib.learn.DNNClassifier(hidden_units=[300,100], n_classes=10,
feature_columns=feature_cols, config=config)
dnn_clf = tf.contrib.learn.SKCompat(dnn_clf) # if TensorFlow >= 1.1
dnn_clf.fit(X_train, y_train, batch_size=50, steps=40000)
from sklearn.metrics import accuracy_score
y_pred = dnn_clf.predict(X_test)
accuracy_score(y_test, y_pred['classes'])
from sklearn.metrics import log_loss
y_pred_proba = y_pred['probabilities']
log_loss(y_test, y_pred_proba)
```
## Using plain TensorFlow
```
import tensorflow as tf
n_inputs = 28*28 # MNIST
n_hidden1 = 300
n_hidden2 = 100
n_outputs = 10
reset_graph()
X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X")
y = tf.placeholder(tf.int64, shape=(None), name="y")
def neuron_layer(X, n_neurons, name, activation=None):
with tf.name_scope(name):
n_inputs = int(X.get_shape()[1])
stddev = 2 / np.sqrt(n_inputs)
init = tf.truncated_normal((n_inputs, n_neurons), stddev=stddev)
W = tf.Variable(init, name="kernel")
b = tf.Variable(tf.zeros([n_neurons]), name="bias")
Z = tf.matmul(X, W) + b
if activation is not None:
return activation(Z)
else:
return Z
with tf.name_scope("dnn"):
hidden1 = neuron_layer(X, n_hidden1, name="hidden1",
activation=tf.nn.relu)
hidden2 = neuron_layer(hidden1, n_hidden2, name="hidden2",
activation=tf.nn.relu)
logits = neuron_layer(hidden2, n_outputs, name="outputs")
with tf.name_scope("loss"):
xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y,
logits=logits)
loss = tf.reduce_mean(xentropy, name="loss")
learning_rate = 0.01
with tf.name_scope("train"):
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
training_op = optimizer.minimize(loss)
with tf.name_scope("eval"):
correct = tf.nn.in_top_k(logits, y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
init = tf.global_variables_initializer()
saver = tf.train.Saver()
n_epochs = 40
batch_size = 50
with tf.Session() as sess:
init.run()
for epoch in range(n_epochs):
for iteration in range(mnist.train.num_examples // batch_size):
X_batch, y_batch = mnist.train.next_batch(batch_size)
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
acc_train = accuracy.eval(feed_dict={X: X_batch, y: y_batch})
acc_test = accuracy.eval(feed_dict={X: mnist.test.images,
y: mnist.test.labels})
print(epoch, "Train accuracy:", acc_train, "Test accuracy:", acc_test)
save_path = saver.save(sess, "./my_model_final.ckpt")
with tf.Session() as sess:
saver.restore(sess, "./my_model_final.ckpt") # or better, use save_path
X_new_scaled = mnist.test.images[:20]
Z = logits.eval(feed_dict={X: X_new_scaled})
y_pred = np.argmax(Z, axis=1)
print("Predicted classes:", y_pred)
print("Actual classes: ", mnist.test.labels[:20])
from IPython.display import clear_output, Image, display, HTML
def strip_consts(graph_def, max_const_size=32):
"""Strip large constant values from graph_def."""
strip_def = tf.GraphDef()
for n0 in graph_def.node:
n = strip_def.node.add()
n.MergeFrom(n0)
if n.op == 'Const':
tensor = n.attr['value'].tensor
size = len(tensor.tensor_content)
if size > max_const_size:
tensor.tensor_content = b"<stripped %d bytes>"%size
return strip_def
def show_graph(graph_def, max_const_size=32):
"""Visualize TensorFlow graph."""
if hasattr(graph_def, 'as_graph_def'):
graph_def = graph_def.as_graph_def()
strip_def = strip_consts(graph_def, max_const_size=max_const_size)
code = """
<script>
function load() {{
document.getElementById("{id}").pbtxt = {data};
}}
</script>
<link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()>
<div style="height:600px">
<tf-graph-basic id="{id}"></tf-graph-basic>
</div>
""".format(data=repr(str(strip_def)), id='graph'+str(np.random.rand()))
iframe = """
<iframe seamless style="width:1200px;height:620px;border:0" srcdoc="{}"></iframe>
""".format(code.replace('"', '"'))
display(HTML(iframe))
show_graph(tf.get_default_graph())
```
## Using `dense()` instead of `neuron_layer()`
Note: the book uses `tensorflow.contrib.layers.fully_connected()` rather than `tf.layers.dense()` (which did not exist when this chapter was written). It is now preferable to use `tf.layers.dense()`, because anything in the contrib module may change or be deleted without notice. The `dense()` function is almost identical to the `fully_connected()` function, except for a few minor differences:
* several parameters are renamed: `scope` becomes `name`, `activation_fn` becomes `activation` (and similarly the `_fn` suffix is removed from other parameters such as `normalizer_fn`), `weights_initializer` becomes `kernel_initializer`, etc.
* the default `activation` is now `None` rather than `tf.nn.relu`.
* a few more differences are presented in chapter 11.
```
n_inputs = 28*28 # MNIST
n_hidden1 = 300
n_hidden2 = 100
n_outputs = 10
reset_graph()
X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X")
y = tf.placeholder(tf.int64, shape=(None), name="y")
with tf.name_scope("dnn"):
hidden1 = tf.layers.dense(X, n_hidden1, name="hidden1",
activation=tf.nn.relu)
hidden2 = tf.layers.dense(hidden1, n_hidden2, name="hidden2",
activation=tf.nn.relu)
logits = tf.layers.dense(hidden2, n_outputs, name="outputs")
with tf.name_scope("loss"):
xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
loss = tf.reduce_mean(xentropy, name="loss")
learning_rate = 0.01
with tf.name_scope("train"):
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
training_op = optimizer.minimize(loss)
with tf.name_scope("eval"):
correct = tf.nn.in_top_k(logits, y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
init = tf.global_variables_initializer()
saver = tf.train.Saver()
n_epochs = 20
batch_size = 50
with tf.Session() as sess:
init.run()
for epoch in range(n_epochs):
for iteration in range(mnist.train.num_examples // batch_size):
X_batch, y_batch = mnist.train.next_batch(batch_size)
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
acc_train = accuracy.eval(feed_dict={X: X_batch, y: y_batch})
acc_test = accuracy.eval(feed_dict={X: mnist.test.images, y: mnist.test.labels})
print(epoch, "Train accuracy:", acc_train, "Test accuracy:", acc_test)
save_path = saver.save(sess, "./my_model_final.ckpt")
show_graph(tf.get_default_graph())
```
# Exercise solutions
## 1. to 8.
See appendix A.
## 9.
_Train a deep MLP on the MNIST dataset and see if you can get over 98% precision. Just like in the last exercise of chapter 9, try adding all the bells and whistles (i.e., save checkpoints, restore the last checkpoint in case of an interruption, add summaries, plot learning curves using TensorBoard, and so on)._
First let's create the deep net. It's exactly the same as earlier, with just one addition: we add a `tf.summary.scalar()` to track the loss and the accuracy during training, so we can view nice learning curves using TensorBoard.
```
n_inputs = 28*28 # MNIST
n_hidden1 = 300
n_hidden2 = 100
n_outputs = 10
reset_graph()
X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X")
y = tf.placeholder(tf.int64, shape=(None), name="y")
with tf.name_scope("dnn"):
hidden1 = tf.layers.dense(X, n_hidden1, name="hidden1",
activation=tf.nn.relu)
hidden2 = tf.layers.dense(hidden1, n_hidden2, name="hidden2",
activation=tf.nn.relu)
logits = tf.layers.dense(hidden2, n_outputs, name="outputs")
with tf.name_scope("loss"):
xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
loss = tf.reduce_mean(xentropy, name="loss")
loss_summary = tf.summary.scalar('log_loss', loss)
learning_rate = 0.01
with tf.name_scope("train"):
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
training_op = optimizer.minimize(loss)
with tf.name_scope("eval"):
correct = tf.nn.in_top_k(logits, y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
accuracy_summary = tf.summary.scalar('accuracy', accuracy)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
```
Now we need to define the directory to write the TensorBoard logs to:
```
from datetime import datetime
def log_dir(prefix=""):
now = datetime.utcnow().strftime("%Y%m%d%H%M%S")
root_logdir = "tf_logs"
if prefix:
prefix += "-"
name = prefix + "run-" + now
return "{}/{}/".format(root_logdir, name)
logdir = log_dir("mnist_dnn")
```
Now we can create the `FileWriter` that we will use to write the TensorBoard logs:
```
file_writer = tf.summary.FileWriter(logdir, tf.get_default_graph())
```
Hey! Why don't we implement early stopping? For this, we are going to need a validation set. Luckily, the dataset returned by TensorFlow's `input_data()` function (see above) is already split into a training set (55,000 instances, already shuffled for us), a validation set (5,000 instances) and a test set (10,000 instances). So we can easily define `X_valid` and `y_valid`:
```
X_valid = mnist.validation.images
y_valid = mnist.validation.labels
m, n = X_train.shape
n_epochs = 10001
batch_size = 50
n_batches = int(np.ceil(m / batch_size))
checkpoint_path = "/tmp/my_deep_mnist_model.ckpt"
checkpoint_epoch_path = checkpoint_path + ".epoch"
final_model_path = "./my_deep_mnist_model"
best_loss = np.infty
epochs_without_progress = 0
max_epochs_without_progress = 50
with tf.Session() as sess:
if os.path.isfile(checkpoint_epoch_path):
# if the checkpoint file exists, restore the model and load the epoch number
with open(checkpoint_epoch_path, "rb") as f:
start_epoch = int(f.read())
print("Training was interrupted. Continuing at epoch", start_epoch)
saver.restore(sess, checkpoint_path)
else:
start_epoch = 0
sess.run(init)
for epoch in range(start_epoch, n_epochs):
for iteration in range(mnist.train.num_examples // batch_size):
X_batch, y_batch = mnist.train.next_batch(batch_size)
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
accuracy_val, loss_val, accuracy_summary_str, loss_summary_str = sess.run([accuracy, loss, accuracy_summary, loss_summary], feed_dict={X: X_valid, y: y_valid})
file_writer.add_summary(accuracy_summary_str, epoch)
file_writer.add_summary(loss_summary_str, epoch)
if epoch % 5 == 0:
print("Epoch:", epoch,
"\tValidation accuracy: {:.3f}%".format(accuracy_val * 100),
"\tLoss: {:.5f}".format(loss_val))
saver.save(sess, checkpoint_path)
with open(checkpoint_epoch_path, "wb") as f:
f.write(b"%d" % (epoch + 1))
if loss_val < best_loss:
saver.save(sess, final_model_path)
best_loss = loss_val
else:
epochs_without_progress += 5
if epochs_without_progress > max_epochs_without_progress:
print("Early stopping")
break
os.remove(checkpoint_epoch_path)
with tf.Session() as sess:
saver.restore(sess, final_model_path)
accuracy_val = accuracy.eval(feed_dict={X: X_test, y: y_test})
accuracy_val
```
| github_jupyter |
<a href="https://colab.research.google.com/github/Ugo5738/MobileNets-Facial-Recognition-/blob/main/Face_recognition_with_MobileNet.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Dependencies
```
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Dense, Activation
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.metrics import categorical_crossentropy
from tensorflow.keras.preprocessing import image
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import Model
from tensorflow.keras.applications import imagenet_utils
from sklearn.metrics import confusion_matrix
import itertools
import os
import shutil
import random
import matplotlib.pyplot as plt
%matplotlib inline
```
# Vanilla MobileNet
```
def prepare_image(img_file):
img_path = '/content/drive/MyDrive/Recognition/train/jenny/'
img = image.load_img(img_path + img_file, target_size=(224, 224))
img_array = image.img_to_array(img)
img_array_expanded_dims = np.expand_dims(img_array, axis=0)
return tf.keras.applications.mobilenet.preprocess_input(img_array_expanded_dims)
preprocessed_image = prepare_image('IMG_0980.JPG')
preprocessed_image
from IPython.display import Image
Image(filename='/content/drive/MyDrive/Recognition/train/jenny/19.jpg', width=224, height=224)
mobile = tf.keras.applications.mobilenet.MobileNet()
mobile.summary()
# vanilla mobilenet prediction
preprocessed_image = prepare_image('16.jpg')
predictions = mobile.predict(preprocessed_image)
results = imagenet_utils.decode_predictions(predictions)
results
```
# Custom MobileNet
```
# Create working directories
train_dir = '/content/drive/MyDrive/Recognition/train'
validation_dir = '/content/drive/MyDrive/Recognition/validation'
```
## Use Data Augmentation with the Image Generator API
```
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(
preprocessing_function=tf.keras.applications.mobilenet.preprocess_input,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest'
)
train_generator = train_datagen.flow_from_directory(
train_dir,
target_size=(224, 224),
batch_size=5,
class_mode='categorical'
)
validation_datagen = ImageDataGenerator(preprocessing_function=tf.keras.applications.mobilenet.preprocess_input)
validation_generator = validation_datagen.flow_from_directory(
validation_dir,
target_size=(224, 224),
batch_size=4,
class_mode='categorical'
)
```
## Finetuning MobileNet
```
mobile = tf.keras.applications.mobilenet.MobileNet(input_shape=(224, 224, 3))
x = mobile.layers[-6].output
output = Dense(units=3, activation='softmax')(x)
model = Model(inputs=mobile.input, outputs=output)
for layer in model.layers[:-23]: # select the number of layers you want to finetune or train from
layer.trainable = False # set to true to finetune
model.summary()
model.compile(optimizer=Adam(learning_rate=0.0001),
loss='categorical_crossentropy',
metrics=['accuracy'])
```
## Train Image Classifier model
```
history = model.fit(
train_generator,
# steps_per_epoch=total_training_images/32,
epochs=100,
verbose=2,
validation_data=validation_generator,
# validation_steps=total_validation_images/32,
callbacks = [tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=2, mode='min')]
)
```
Use a Training and Validation Accuracy Graph to check for Overfitting
```
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'r', label='Training Accuracy')
plt.plot(epochs, val_acc, 'b', label='Validation Accuracy')
plt.title('Training Accuracy and Validation Accuracy')
plt.legend(loc=0)
plt.figure()
plt.show()
plt.plot(epochs, loss, 'r', label='Training Loss')
plt.plot(epochs, val_loss, 'b', label='Validation Loss')
plt.title('Training Loss and Validation Loss')
plt.legend(loc=0)
plt.figure()
plt.show()
```
## Test Classifier
```
def prepare_image(img_file):
img_path = '/content/drive/MyDrive/Test/'
img = image.load_img(img_path + img_file, target_size=(224, 224))
img_array = image.img_to_array(img)
img_array_expanded_dims = np.expand_dims(img_array, axis=0)
return tf.keras.applications.mobilenet.preprocess_input(img_array_expanded_dims)
from IPython.display import Image
Image(filename='/content/drive/MyDrive/Test/21.jpg', width=400, height=250)
preprocessed_image = prepare_image('21.jpg')
predictions = model.predict(preprocessed_image) # check dtype
prediction = predictions.tolist()[0]
print(prediction)
max_value = max(prediction)
max_index = prediction.index(max_value)
max_index
# `names` is assumed to hold the class labels, e.g. ['blessing', 'daniel', 'jenny']
print(f'This image is an image of {names[max_index]}')
```
## Confusion Matrix
```
def plot_confusion_matrix(cm,
classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting normalize=True.
"""
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print("Confusion matrix with normalization")
print(cm)
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, cm[i, j],
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
cm = confusion_matrix(y_true=test_labels, y_pred=predictions.argmax(axis=1))
test_batches.class_indices # print all classification labels in the console
cm_plot_labels = ['blessing', 'daniel', 'jenny']
plot_confusion_matrix(cm=cm, classes=cm_plot_labels, title='Confusion Matrix')
# Note on dropout: with a 20% dropout rate, each node has a 20% probability of
# being dropped; "bad" nodes are those contributing little to the final output.
```
## Save Model
```
# Two ways to create a TFLite converter:
#   tf.lite.TFLiteConverter.from_saved_model()  (recommended)
#   tf.lite.TFLiteConverter.from_keras_model()
# Export saved model
FACIAL_RECOGNITION_SAVED_MODEL_DIR = "fc_saved_model/1"
tf.saved_model.save(model, FACIAL_RECOGNITION_SAVED_MODEL_DIR)
# Define the generator for a representative calibration dataset
# (assumes `import tensorflow_datasets as tfds` and a `num_calibration_steps` value)
def generator():
data = tfds.load()  # a dataset name must be passed here
for _ in range(num_calibration_steps):
image, = data.take(1)
yield [image]
# Convert model
converter = tf.lite.TFLiteConverter.from_saved_model(FACIAL_RECOGNITION_SAVED_MODEL_DIR)
# optimization mode
converter.optimizations = [tf.lite.Optimize.DEFAULT]
# converter.representative_dataset = tf.lite.RepresentativeDataset(generator)
tflite_model = converter.convert()
# Save the model (note: requires `import pathlib`)
tflite_model_file = pathlib.Path('/content/fc_model.tflite')
tflite_model_file.write_bytes(tflite_model)
```
# TFLite Interpreter
```
# Load TFLite model and allocate tensors
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
# Get input and output tensors
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
# Point the data to be used for testing and run the interpreter
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
tflite_results = interpreter.get_tensor(output_details[0]['index'])
```
| github_jupyter |
# Recommender systems via the SVD
This is a simple example of how the SVD comes up in "recommender systems" or ["collaborative filters"](https://en.wikipedia.org/wiki/Collaborative_filtering) such as the "Netflix problem" of determining how best to recommend movies to users, so called because of the [Netflix prize](https://en.wikipedia.org/wiki/Netflix_Prize).
This notebook is just scratching the surface. The class [AMath 582](http://www.washington.edu/students/crscat/appmath.html#amath582), for example, covers some applications of the SVD in more detail.
```
%pylab inline
```
Define a matrix in which rows correspond to movies, columns to users and assume ratings are $-2, -1$ (dislike) or $1, 2$ (like) with 0 indicating missing data. For example:
```
A = array([[2,-2,1,0,-1],
[2,-1,0,-2,0],
[0,2,-1,0,0],
[-1,2,0,1,2]])
print A
```
Compute the SVD of $A = U\Sigma V^*$:
```
(U,S,Vstar) = svd(A)
```
If we make the simple assumption that there is variation in movies along a single axis and that people tend to either like or dislike movies depending on their location along this axis, then we are assuming $A$ is a rank 1 matrix. So compute $A_1$, the best rank-1 approximation to $A$:
```
A1 = dot(U[:,0:1], S[0]*Vstar[0:1,:])
print A1
```
This has filled in the elements in such a way that the sign of each element gives some indication of whether a given user will like a given movie.
This matrix is $A_1 = \sigma_1 u_1 v_1^*$ and the first left singular vector $u_1$ tells us something about the movies under this rank-1 assumption: elements of $u_1$ with the same sign correspond to similar movies that are either liked or disliked as a group by each user:
```
U[:,0:1]
```
The right singular vector $v_1$ tells us something about users. Elements with the same sign correspond to users with similar taste, while elements with opposite sign correspond to users with opposite taste:
```
Vstar[0:1,:].T
```
### Colorful representation of these matrices:
```
figure(figsize=(12,5))
subplot(1,2,1)
pcolor(flipud(array(A)), cmap=cm.RdYlBu, edgecolors='k')
xlabel('People')
ylabel('Movies')
xticks([])
yticks([])
title('Original data')
colorbar()
subplot(1,2,2)
pcolor(flipud(array(A1)), cmap=cm.RdYlBu, edgecolors='k')
xlabel('People')
ylabel('Movies')
xticks([])
yticks([])
title('Rank 1 approximation')
colorbar()
```
For this simple example a rank 1 matrix does a pretty good job -- the yellow squares on the left (missing values) have been filled in with colors hinting at whether each user will like a movie (bluish) or not (redish).
For more complicated data with many different genres of movies there might be more factors that come into people's tastes and the full matrix (if everyone rated every movie) would not be well approximated by a rank 1 matrix, but might be expected to be approximated by a low-rank matrix that explains the "principal components" of people's tastes. The SVD is closely related to [principal component analysis](https://en.wikipedia.org/wiki/Principal_component_analysis) (PCA).
We can get an idea of the relative importance of other variables by seeing how quickly the singular values decay:
```
print "Singular values are: ",S
```
## Matrix completion
A better way to solve the recommender problem is to view it as a problem of "matrix completion". Given a matrix $A$ that has some known elements and some missing elements, we want to fill in all the missing elements to obtain a matrix $M$ that can be used to recommend a movie. The matrix completion problem is to fill in the missing values in such a way that the matrix $M$ has minimal rank. This is a much harder optimization problem with no simple algorithm. See for example the [wikipedia page on Matrix completion](https://en.wikipedia.org/wiki/Matrix_completion).
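As noted above, there is no simple exact algorithm, but one common heuristic — iteratively soft-thresholding the singular values, in the spirit of the SoftImpute method — can be sketched in a few lines. This is only an illustration with arbitrary choices of threshold and iteration count, not the rank-1 approach used earlier:

```python
import numpy as np

def soft_impute(A, mask, tau=1.0, n_iters=100):
    """Fill entries where mask is False by repeatedly taking an SVD
    and shrinking the singular values (which pushes M toward low rank)."""
    M = np.where(mask, A, 0.0)
    for _ in range(n_iters):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        s = np.maximum(s - tau, 0.0)   # soft-threshold the singular values
        X = np.dot(U * s, Vt)          # low-rank reconstruction
        M = np.where(mask, A, X)       # keep the known ratings as-is
    return M

# the same ratings matrix as above; 0 marks a missing rating
A = np.array([[ 2., -2.,  1.,  0., -1.],
              [ 2., -1.,  0., -2.,  0.],
              [ 0.,  2., -1.,  0.,  0.],
              [-1.,  2.,  0.,  1.,  2.]])
mask = (A != 0)
M = soft_impute(A, mask)
```

The known entries are preserved exactly, while the missing ones are filled in by the shrunken low-rank reconstruction; larger `tau` values push the completed matrix toward lower rank.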
| github_jupyter |
```
%matplotlib inline
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import dicom
import os
import scipy.ndimage
import matplotlib.pyplot as plt
from skimage import measure, morphology
from mpl_toolkits.mplot3d.art3d import Poly3DCollection
# Some constants
patients = '/media/ai-master/my_passport/joyce/stage1/2a48f50391e22040f28e258a74dcaf95/'
images = os.listdir(patients)
slices = [dicom.read_file(patients+'/'+s) for s in os.listdir(patients)]
slices = np.stack([s.pixel_array for s in slices])
f, plots = plt.subplots(3, 3, figsize=(10, 10))
# show every 18th slice on a 3x3 grid (row/column computed from the slice counter)
for idx, i in enumerate(range(0, slices.shape[0], 18)):
plots[idx // 3, idx % 3].axis('off')
plots[idx // 3, idx % 3].imshow(slices[i], cmap=plt.cm.gray)
#slice[slice == -2000] = 0
# plt.figure()
# plt.imshow(slice)#cmap=plt.cm.gray)
sample_image = slices[6]
plt.imshow(sample_image)
sample_image[sample_image==-2000]=2000
plt.figure()
plt.imshow(sample_image)
slices = [dicom.read_file(patients+'/'+s) for s in os.listdir(patients)]
slices.sort(key = lambda x: float(x.ImagePositionPatient[2]))
try:
slice_thickness = np.abs(slices[0].ImagePositionPatient[2] - slices[1].ImagePositionPatient[2])
except:
slice_thickness = np.abs(slices[0].SliceLocation - slices[1].SliceLocation)
for s in slices:
s.SliceThickness = slice_thickness
def get_pixels_hu(slices):
image = np.stack([s.pixel_array for s in slices])
# Convert to int16 (from sometimes uint16),
# should be possible as values should always be low enough (<32k)
image = image.astype(np.int16)
# Set outside-of-scan pixels to 0
# The intercept is usually -1024, so air is approximately 0
image[image == -2000] = 0
# Convert to Hounsfield units (HU)
for slice_number in range(len(slices)):
intercept = slices[slice_number].RescaleIntercept
slope = slices[slice_number].RescaleSlope
if slope != 1:
image[slice_number] = slope * image[slice_number].astype(np.float64)
image[slice_number] = image[slice_number].astype(np.int16)
image[slice_number] += np.int16(intercept)
return np.array(image, dtype=np.int16)
def resample(image, scan, new_spacing=[1,1,1]):
# Determine current pixel spacing
spacing = np.array([scan[0].SliceThickness] + scan[0].PixelSpacing, dtype=np.float32)
resize_factor = spacing / new_spacing
new_real_shape = image.shape * resize_factor
new_shape = np.round(new_real_shape)
real_resize_factor = new_shape / image.shape
new_spacing = spacing / real_resize_factor
image = scipy.ndimage.interpolation.zoom(image, real_resize_factor, mode='nearest')
return image, new_spacing
image = np.stack([s.pixel_array for s in slices])
new_image = get_pixels_hu(slices)
new_image,spacing = resample(image, slices, new_spacing=[1,1,1])
print np.shape(image)
print np.shape(new_image)
plt.imshow(new_image[0])
plt.figure()
plt.imshow(image[0])
def plot_3d(image, threshold=-300):
# Position the scan upright,
# so the head of the patient would be at the top facing the camera
p = image.transpose(2,1,0)
verts, faces = measure.marching_cubes(p, threshold)
fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(111, projection='3d')
# Fancy indexing: `verts[faces]` to generate a collection of triangles
mesh = Poly3DCollection(verts[faces], alpha=0.1)
face_color = [0.5, 0.5, 1]
mesh.set_facecolor(face_color)
ax.add_collection3d(mesh)
ax.set_xlim(0, p.shape[0])
ax.set_ylim(0, p.shape[1])
ax.set_zlim(0, p.shape[2])
plt.show()
plot_3d(new_image, 200)
```
| github_jupyter |
# Learning MNIST & Fashion
In this exercise you will design a classifier for the very simple but very popular [MNIST dataset](http://yann.lecun.com/exdb/mnist/), a classic dataset in computer vision and one of the first real-world problems solved by neural networks.
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import SGD, Adam, RMSprop
from keras.utils import to_categorical
```
Keras provides access to a few simple datasets for convenience in the `keras.datasets` module. Here we will load MNIST, a standard benchmark dataset for image classification. This will download the dataset if you have run this code before.
```
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train.shape
```
MNIST is a simple dataset of grayscale hand-written digits 28x28 pixels big. So there are 10 classes in the dataset corresponding to the digits 0-9. We can get a sense for what this dataset is like (always a good idea) by looking at some random samples for the training data:
```
plt.imshow(X_train[np.random.randint(len(X_train))], cmap='gray')
```
We need to do a little preprocessing of the dataset. Firstly, we will flatten the 28x28 images to a 784-dimensional vector. This is because our first model below does not care about the spatial dimensions, only the pixel values. The images are represented by numpy arrays of integers between 0 and 255. Since this is a fixed range, we should scale the values down to be from 0 to 1. This normalization simplifies things and is usually a good idea, especially since weights are usually initialized randomly near zero.
Read the code below and make sure you understand what we are doing to the data.
```
X_train = X_train.reshape(60000, 784)
X_test = X_test.reshape(10000, 784)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')
y_train_cat = to_categorical(y_train, 10)
y_test_cat = to_categorical(y_test, 10)
```
## Exercise 1 - design a fully connected network for MNIST
Build a fully connected network. It is up to you what the structure of the model will be, but keep in mind that this problem is much higher dimensional than previous problems we have worked on. This is your first chance to design a model on real data! See if you can get 90% accuracy or better.
Here are some of the things you will need to decide about your model:
* number of layers
* activation function
* number of dimensions in each layer
* batch size
* number of epochs
* learning rate
Suggestions:
* You can pass the argument `verbose=2` to the `model.fit` method to quiet the output a bit, which will speed up the training as well.
* You already divided the training and test data, but since you will be trying a series of experiments and changing your model, it is good practice to set aside a **validation** dataset for you to use to track your model improvements. You should only use the test data after you believe you have a good model to evaluate the final performance. Keras can create a validation set for you if you pass the `validation_split=0.1` argument to `model.fit` to tell Keras to hold out 10% of the training data to use as validation.
* You can use the `plot_loss` function if you find it useful in setting your learning rate, etc., during your experiments.
* You can refer to previous notebooks and the [documentation](http://keras.io/models/sequential/).
If you want to talk over design decisions, feel free to ask.
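One possible starting point is sketched below — the layer sizes, dropout rate, and optimizer here are arbitrary choices, and `tensorflow.keras` is used so the sketch is self-contained (the notebook's standalone `keras` imports work the same way):

```python
import numpy as np
from tensorflow import keras

# a small fully connected network: 784 -> 512 -> 512 -> 10
model = keras.Sequential([
    keras.layers.Dense(512, activation='relu', input_shape=(784,)),
    keras.layers.Dropout(0.2),
    keras.layers.Dense(512, activation='relu'),
    keras.layers.Dropout(0.2),
    keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# hist = model.fit(X_train, y_train_cat, batch_size=128, epochs=20,
#                  validation_split=0.1, verbose=2)
```

Configurations along these lines are often reported to reach around 98% test accuracy on MNIST, though your results will vary with the hyperparameters you choose.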
```
def plot_loss(hist):
    loss = hist.history['loss']
    plt.plot(range(len(loss)), loss)
    plt.title('loss')
    plt.xlabel('epochs')

# Final test evaluation
score = model.evaluate(X_test, y_test_cat, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
```
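For orientation, a minimal baseline might look like the sketch below. The layer sizes, optimizer, and (commented-out) training settings are illustrative choices, not a prescribed solution, and the imports assume the `tensorflow.keras` API:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Input

model = Sequential([
    Input(shape=(784,)),              # flattened 28x28 pixel vectors
    Dense(128, activation='relu'),
    Dense(64, activation='relu'),
    Dense(10, activation='softmax'),  # one probability per digit class
])
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
# history = model.fit(X_train, y_train_cat, epochs=10, batch_size=128,
#                     validation_split=0.1, verbose=2)
```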
## Exercise 2: Fashion MNIST
Repeat the classification exercise using the Fashion MNIST dataset from Zalando Research:
https://github.com/zalandoresearch/fashion-mnist
This dataset has the same specs as MNIST, but it's designed to be more indicative of a real image classification problem. It contains 10 classes of clothing items:

| Label | Description |
|-------|-------------|
| 0 | T-shirt/top |
| 1 | Trouser |
| 2 | Pullover |
| 3 | Dress |
| 4 | Coat |
| 5 | Sandal |
| 6 | Shirt |
| 7 | Sneaker |
| 8 | Bag |
| 9 | Ankle boot |

Do you get similar performance?
```
import numpy as np
import os
import pandas as pd
import matplotlib.pyplot as plt
from scipy import stats
os.chdir('/Users/matthiasboeker/Desktop/Master_Thesis/Schizophrenia_Depression_Project')
from Modules.func.support_functions import *
```
## Data import and manipulation
```
#Import Schizophrenia data
shizophrenia_p, shizophrenia_c = load_data()
pat_dic = {'10':shizophrenia_p[10][:18000], '1':shizophrenia_p[1][:18000], '20':shizophrenia_p[20][:18000], '3':shizophrenia_p[3][:18000] }
con_dic = {'10':shizophrenia_c[10][:18000], '14':shizophrenia_c[14][:18000], '5':shizophrenia_c[5][:18000], '4':shizophrenia_c[4][:18000] }
patpat = pd.DataFrame(pat_dic)
concon = pd.DataFrame(con_dic)
#a.index = pd.to_datetime(a.index)
index = pd.date_range(pd.Timestamp("2013-08-23 00:00"), periods=len(pat_dic['10']), freq='min')
patpat.index = index
concon.index = index
fig, axes = plt.subplots(4, 2, figsize=(12, 10))
panels = [(patpat['10'], 'Patient 10'), (concon['10'], 'Control 10'),
          (patpat['1'], 'Patient 1'), (concon['14'], 'Control 14'),
          (patpat['20'], 'Patient 20'), (concon['5'], 'Control 5'),
          (patpat['3'], 'Patient 3'), (concon['4'], 'Control 4')]
for ax, (series, title) in zip(axes.flat, panels):
    ax.plot(series)
    ax.plot(series.rolling(window=30).mean())  # 30-minute rolling mean
    ax.set_ylim([0, 3500])
    ax.set_ylabel('Acceleration/min')
    ax.tick_params(labelrotation=45)
    ax.set_title(title)
fig.tight_layout()
```
## Full activity time series
```
fig, axes = plt.subplots(4, 2, figsize=(20, 10))
panels = [(shizophrenia_p[10], 'Patient 10'), (shizophrenia_c[10], 'Control 10'),
          (shizophrenia_p[1], 'Patient 1'), (shizophrenia_c[14], 'Control 14'),
          (shizophrenia_p[20], 'Patient 20'), (shizophrenia_c[5], 'Control 5'),
          (shizophrenia_p[3], 'Patient 3'), (shizophrenia_c[4], 'Control 4')]
for ax, (series, title) in zip(axes.flat, panels):
    ax.plot(series)
    ax.plot(series.rolling(window=30).mean())  # 30-minute rolling mean
    ax.set_ylim([0, 3500])
    ax.set_title(title)
fig.tight_layout()
```
Figure 1 shows the patients' data on the left side and the control group's data on the right side.
The data was recorded over around 14 days per person and reveals each person's circadian rhythm. The periods in which a person is active or awake appear as blocks; the periods between the blocks correspond to resting periods. These have a lower mean and a lower variance, and they appear to be shorter than the active periods. These observations match known human behavior: the time a person is asleep, at 6-8 hours (source), is shorter than the time spent awake.
The transition between these two periods appears quite abrupt. As soon as a person lies down to rest or sleep, movements are reduced or the person even lies still for several minutes.
Thus, there is an abrupt change of mean and variance over time, which might indicate heteroscedasticity.
Heteroscedasticity will be analysed at a later stage.
The circadian cycles observed here are of course shaped by the day-and-night rhythm and the social norms that come with it, but can differ from person to person. In conclusion, the time series almost certainly follows some form of seasonality.
A first visual comparison shows a difference in scale: the control group appears overall more active than the patients.
This will be investigated further when comparing the overall means of the time series, but the difference in scale appears mostly during the active periods.
Furthermore, the control group seems to vary more in activity during the active periods, while the patients seem to vary more over the whole recording.
It can also be observed that the control group sticks more closely to a rest-activity cycle than the patients: it is easier to see when a control subject is resting or active, or when the person went to bed. The patients do not seem to follow this cycle; especially patient 10 and patient 1 show a very low average activity with little change between day and night.
This observation fits the diagnosed characteristics of patients with a schizophrenic condition: a disturbed circadian cycle and behavioral abnormalities.
#### Key statements:
* general description of the data, like varying variance, seasonality, heteroscedasticity
* first visual differences between patients and controls, such as:
* lower mean activity
* Disturbed circadian cycle
### Comparison of mean activity
```
from sklearn.utils import resample
means_p = np.array([np.mean(shizophrenia_p[i]) for i in range(0,len(shizophrenia_p))])
means_c = np.array([np.mean(shizophrenia_c[i]) for i in range(0,len(shizophrenia_c))])
boot_p = resample(means_p, replace = True, n_samples = 100, random_state=0 )
boot_c = resample(means_c, replace = True, n_samples = 100, random_state=0 )
var_p = np.array([np.var(shizophrenia_p[i]) for i in range(0,len(shizophrenia_p))])
var_c = np.array([np.var(shizophrenia_c[i]) for i in range(0,len(shizophrenia_c))])
boot_var_p = resample(var_p, replace = True, n_samples = 100, random_state=0 )
boot_var_c = resample(var_c, replace = True, n_samples = 100, random_state=0 )
fig, (ax0, ax1) = plt.subplots(1, 2, figsize=(20, 10))
ax0.hist(boot_p, alpha=0.8,bins= 10 ,label = 'Schizophrenic group', rwidth=0.9)
ax0.hist(boot_c, alpha=0.8,bins= 10 ,label = 'Control group', rwidth=0.9)
ax0.axvline(np.median(boot_p), color='k', linestyle='dashed', linewidth=1)
ax0.axvline(np.median(boot_c), color='k', linestyle='dotted', linewidth=1)
ax0.legend()
ax0.set_title("Histogram of bootstrapped mean of time series")
ax1.hist(boot_var_p, alpha=0.8,bins= 10 ,label = 'Schizophrenic group', rwidth=0.9)
ax1.hist(boot_var_c, alpha=0.8,bins= 10 ,label = 'Control group', rwidth=0.9)
ax1.axvline(np.median(boot_var_p), color='k', linestyle='dashed', linewidth=1)
ax1.axvline(np.median(boot_var_c), color='k', linestyle= 'dotted', linewidth=1)
ax1.legend()
ax1.set_title("Histogram of bootstrapped variance of time series")
```
Figure 2 shows two histograms: one of the overall sample means and one of the sample variances of the individual time series. To improve the parameter estimates, bootstrapping was applied to increase the sample sizes of the schizophrenic and control groups from 22 and 32, respectively, to 100.
The histograms show that the control group has indeed a higher overall mean and variance of activity than the schizophrenic group.
There is thus already a difference in the first two moments of the two groups. In the following, the sample distribution of the data is analyzed.
According to the histograms, the two groups are indeed generated by different distributions, but this has to be further investigated by analysing the daily and nightly structure of the data. Moreover, it remains to analyse how the time series themselves are distributed, and whether the active and resting periods are also generated by different distributions.
#### Key statement
* schizophrenic and control group are most likely generated by different distributions, which suggests classification is possible
```
stats.ttest_ind(means_p,means_c)
```
## Analysis of the sample distribution of the time series
To get a better understanding of the underlying distribution of the time series, their sample distributions are analyzed.
The time series is non-negative, since activity itself cannot be negative. This natural property of the data restricts the set of plausible sample distributions.
The sensor measures the activity averaged over 60 seconds; it is not entirely clear how well the sensor captures very small movements.
One hypothesis is that the data follows a rectified Gaussian distribution; this would be supported if the differenced version of the series could be assumed to be normally distributed.
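The rectified-Gaussian idea can be illustrated with simulated data: clipping draws from a normal distribution at zero produces exactly the kind of point mass at zero seen in the histograms (location and scale below are arbitrary toy values):

```python
import numpy as np

rng = np.random.default_rng(0)
latent = rng.normal(loc=100, scale=300, size=10_000)  # hypothetical latent Gaussian activity
rectified = np.clip(latent, 0, None)                  # max(0, X): negative draws collapse to 0
zero_share = (rectified == 0).mean()
print(round(zero_share, 2))                           # roughly 0.37, a spike at exactly zero
```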
```
fig, axes = plt.subplots(6, 2, figsize=(20, 10))
panels = [(shizophrenia_p[10], 'Patient 10'), (shizophrenia_c[10], 'Control 10'),
          (shizophrenia_p[1], 'Patient 1'), (shizophrenia_c[14], 'Control 14'),
          (shizophrenia_p[20], 'Patient 20'), (shizophrenia_c[5], 'Control 5'),
          (shizophrenia_p[3], 'Patient 3'), (shizophrenia_c[4], 'Control 4'),
          (shizophrenia_p[15], 'Patient 15'), (shizophrenia_c[7], 'Control 7'),
          (shizophrenia_p[8], 'Patient 8'), (shizophrenia_c[30], 'Control 30')]
for ax, (series, title) in zip(axes.flat, panels):
    ax.hist(series, bins=50, rwidth=0.9)
    ax.set_title(title)
    ax.set_xlim([0, 1500])
    ax.set_ylim([0, 14500])
fig.tight_layout()
```
Figure 3 shows histograms of the full time series for 12 example subjects, with the patient group on the left side and the control group on the right.
Each histogram shows a very large number of zeros; together with the non-negativity of the data, this produces a spike at the lower boundary.
Comparing the patient and control group histograms shows that the share of zeros is higher for patients. Also, there are fewer observations above 800 in a patient's time series compared to the control group. These observations match the findings on the total mean and variance of each time series: schizophrenic persons seem to be more inactive, as indicated by the higher number of zero-activity counts, and the activity of patients covers a smaller range compared to the mentally healthy group.
## Analysis of the sample distribution of the differenced time series
```
fig, axes = plt.subplots(6, 2, figsize=(25, 15))
panels = [(shizophrenia_p[10], 'Patient 10'), (shizophrenia_c[10], 'Control 10'),
          (shizophrenia_p[1], 'Patient 1'), (shizophrenia_c[14], 'Control 14'),
          (shizophrenia_p[20], 'Patient 20'), (shizophrenia_c[5], 'Control 5'),
          (shizophrenia_p[3], 'Patient 3'), (shizophrenia_c[4], 'Control 4'),
          (shizophrenia_p[15], 'Patient 15'), (shizophrenia_c[7], 'Control 7'),
          (shizophrenia_p[8], 'Patient 8'), (shizophrenia_c[30], 'Control 30')]
for ax, (series, title) in zip(axes.flat, panels):
    ax.hist(series.diff()[1:], bins=50, rwidth=0.9)  # first difference, initial NaN dropped
    ax.set_title(title)
    ax.set_xlim([-1000, 1000])
    ax.set_ylim([0, 15000])
fig.tight_layout()
## Testing normality of the differenced ts with the Shapiro-Wilk test
shapiro_p_values_c = [stats.shapiro(shizophrenia_c[i].diff()[1:]) for i in range(0, len(shizophrenia_c))]
shapiro_p_values_p = [stats.shapiro(shizophrenia_p[i].diff()[1:]) for i in range(0, len(shizophrenia_p))]
## Testing normality of the differenced ts with the Kolmogorov-Smirnov test
ks_p_values_c = [stats.kstest(shizophrenia_c[i].diff()[1:], 'norm') for i in range(0, len(shizophrenia_c))]
ks_p_values_p = [stats.kstest(shizophrenia_p[i].diff()[1:], 'norm') for i in range(0, len(shizophrenia_p))]
```
As expected, there is a large number of zero counts within the data, and it is difficult to recognize a standard sample distribution in these histograms.
A normal distribution might be obtained by transforming the data. To overcome the restriction of non-negativity, one can difference the time series by one lag.
Differencing gives the series a slightly different interpretation: it is no longer the measured activity per minute, but the change in activity compared to the last minute. The differencing is done by
$y_t' = y_t - y_{t-1}$. A positive value can be interpreted as a positive change in activity: the person was less active before than now. Conversely, a negative value indicates that the person is less active now than before.
The histograms of the differenced time series appear closer to a normal distribution, even though they still show a high number of zeros. After conducting the Kolmogorov-Smirnov test as well as the Shapiro-Wilk test for normality on all time series, it can be concluded that the data is not normally distributed.
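The differencing described above can be sketched with pandas on a toy series (not the actigraphy data):

```python
import pandas as pd

activity = pd.Series([0, 120, 300, 280, 0, 0])  # toy activity counts per minute
change = activity.diff()[1:]                    # y'_t = y_t - y_{t-1}; drop the initial NaN
print(change.tolist())                          # [120.0, 180.0, -20.0, -280.0, 0.0]
```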
## Analysis of the log-transformed sample distribution of the time series
```
fig, axes = plt.subplots(6, 2, figsize=(25, 15))
panels = [(shizophrenia_p[10], 'Patient 10'), (shizophrenia_c[10], 'Control 10'),
          (shizophrenia_p[1], 'Patient 1'), (shizophrenia_c[14], 'Control 14'),
          (shizophrenia_p[20], 'Patient 20'), (shizophrenia_c[5], 'Control 5'),
          (shizophrenia_p[3], 'Patient 3'), (shizophrenia_c[4], 'Control 4'),
          (shizophrenia_p[15], 'Patient 15'), (shizophrenia_c[7], 'Control 7'),
          (shizophrenia_p[8], 'Patient 8'), (shizophrenia_c[30], 'Control 30')]
for ax, (series, title) in zip(axes.flat, panels):
    ax.hist(np.log(series + 1), bins=50, rwidth=0.9)  # log(x + 1) to handle the zeros
    ax.set_title(title)
    ax.set_xlim([0, 9])
    ax.set_ylim([0, 10000])
fig.tight_layout()
```
Another common transformation is the log-transformation of the time series. Since there are many zero values, one was added to the data before taking the natural logarithm.
Just like the other versions, the log-transformed histograms reveal a high number of zeros.
Besides the large number of zeros, the log-transformation reveals a second, smaller accumulation of observations to the right of each histogram. This might indicate that the data is generated by two different distributions. Given the knowledge about the nature of the data, one can hypothesize that the active and resting periods are generated by different distributions.
This raises the question of how the two different periods are distributed. Analysing their distributions is not trivial, since it is not known when a person is in an active or a resting phase.
To get a better insight, the daily structure of the time series and its distribution are analysed next.
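The log(x + 1) transform used here is equivalent to numpy's `log1p`, which sidesteps the undefined log of zero:

```python
import numpy as np

counts = np.array([0, 1, 120, 3000])  # includes the problematic zeros
transformed = np.log1p(counts)        # log(x + 1), defined at x = 0
assert np.allclose(transformed, np.log(counts + 1))
print(np.round(transformed, 2))       # values: 0.0, 0.69, 4.8, 8.01
```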
## Analysing the daily structure of the data
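The segmentation into 24 h windows performed in the next cell can be sketched with a plain numpy reshape (toy data at minute resolution; the real code additionally iterates over all subjects):

```python
import numpy as np

minutes_per_day = 60 * 24
x = np.arange(3 * minutes_per_day, dtype=float)  # 3 toy days of minute-level data
n_days = len(x) // minutes_per_day
days = x[:n_days * minutes_per_day].reshape(n_days, minutes_per_day)
day_profile = days.mean(axis=0)                  # average 24 h activity profile
print(days.shape, day_profile.shape)             # (3, 1440) (1440,)
```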
```
#Extract 24h daily structure
days_c = [np.array([shizophrenia_c[l][i*(60*24):(i+1)*(60*24)] for i in range(0,int(len(shizophrenia_c[l])/(60*24))-1)]) for l in range(0,len(shizophrenia_c))]
days_p = [np.array([shizophrenia_p[l][i*(60*24):(i+1)*(60*24)] for i in range(0,int(len(shizophrenia_p[l])/(60*24))-1)]) for l in range(0,len(shizophrenia_p))]
day_mean_p = [np.mean(days_p[i], axis=0) for i in range(0,len(days_p))]
day_mean_c = [np.mean(days_c[i], axis=0) for i in range(0,len(days_c))]
#Histograms --> change to better visualization
fig, (ax0,ax1) = plt.subplots(1, 2, figsize=(16, 4))
ax0.set_title('Patients average activity during 24h')
ax0.set_ylabel('Activity')
ax0.set_xlabel('Time')
ax0.set_ylim([0,1700])
for a in days_p:
ax0.plot(np.mean(a, axis = 0), alpha= 0.8)
ax1.set_title('Controls average activity during 24h')
ax1.set_ylabel('Activity')
ax1.set_xlabel('Time')
ax1.set_ylim([0,1700])
for a in days_c:
ax1.plot(np.mean(a, axis = 0), alpha= 0.8)
fig, (ax0,ax1) = plt.subplots(1, 2, figsize=(16, 4))
ax0.set_title('Boxplot of Patients average 24h activity')
ax0.set_ylabel('Activity')
ax0.set_xlabel('Patients')
ax0.set_ylim([0,1700])
ax0.boxplot(day_mean_p)
ax1.set_title('Boxplot of Control average 24h activity')
ax1.set_ylabel('Activity')
ax1.set_xlabel('Control Group')
ax1.set_ylim([0,1700])
ax1.boxplot(day_mean_c)
fig.tight_layout()
#Extract 12h structure 9-21 day
daily_c = [np.array([shizophrenia_c[l][i*(60*12):(i+1)*(60*12)] for i in range(0,int(len(shizophrenia_c[l])/(60*12))-1,2)]) for l in range(0,len(shizophrenia_c))]
daily_p = [np.array([shizophrenia_p[l][i*(60*12):(i+1)*(60*12)] for i in range(0,int(len(shizophrenia_p[l])/(60*12))-1, 2)]) for l in range(0,len(shizophrenia_p))]
daily_mean_p = [np.mean(daily_p[i], axis=0) for i in range(0,len(daily_p))]
daily_mean_c = [np.mean(daily_c[i], axis=0) for i in range(0,len(daily_c))]
#Extract 12h structure 21-9 night
night_c = [np.array([shizophrenia_c[l][i*(60*12):(i+1)*(60*12)] for i in range(1,int(len(shizophrenia_c[l])/(60*12))-1,2)]) for l in range(0,len(shizophrenia_c))]
night_p = [np.array([shizophrenia_p[l][i*(60*12):(i+1)*(60*12)] for i in range(1,int(len(shizophrenia_p[l])/(60*12))-1, 2)]) for l in range(0,len(shizophrenia_p))]
night_mean_p = [np.mean(night_p[i], axis=0) for i in range(0,len(night_p))]
night_mean_c = [np.mean(night_c[i], axis=0) for i in range(0,len(night_c))]
fig, ((ax0,ax1),(ax2,ax3)) = plt.subplots(2, 2, figsize=(20,10))
ax0.set_title('Patients average 12h daily activity')
ax0.set_ylabel('Activity')
ax0.set_xlabel('Time')
ax0.set_ylim([0,1700])
for a in daily_p:
ax0.plot(np.mean(a, axis = 0))
ax1.set_title('Controls average 12h daily activity')
ax1.set_ylabel('Activity')
ax1.set_xlabel('Time')
ax1.set_ylim([0,1700])
for a in daily_c:
ax1.plot(np.mean(a, axis = 0))
ax2.set_title('Patients average 12h nightly activity')
ax2.set_ylabel('Activity')
ax2.set_xlabel('Time')
ax2.set_ylim([0,1700])
for a in night_p:
ax2.plot(np.mean(a, axis = 0))
ax3.set_title('Controls average 12h nightly activity')
ax3.set_ylabel('Activity')
ax3.set_xlabel('Time')
ax3.set_ylim([0,1700])
for a in night_c:
ax3.plot(np.mean(a, axis = 0))
fig.tight_layout()
fig, ((ax0,ax1),(ax2,ax3)) = plt.subplots(2, 2, figsize=(20, 10))
ax0.set_title('Boxplot of Patients average 12h daily activity')
ax0.set_ylabel('Activity')
ax0.set_xlabel('Patients')
ax0.set_ylim([0,1700])
ax0.boxplot(daily_mean_p)
ax1.set_title('Boxplot of Controls average 12h daily activity')
ax1.set_ylabel('Activity')
ax1.set_xlabel('Control Group')
ax1.set_ylim([0,1700])
ax1.boxplot(daily_mean_c)
ax2.set_title('Boxplot of Patients average 12h nightly activity')
ax2.set_ylabel('Activity')
ax2.set_xlabel('Patients')
ax2.set_ylim([0,1500])
ax2.boxplot(night_mean_p)
ax3.set_title('Boxplot of Controls average 12h nightly activity')
ax3.set_ylabel('Activity')
ax3.set_xlabel('Control group ')
ax3.set_ylim([0,1500])
ax3.boxplot(night_mean_c)
fig, axes = plt.subplots(6, 2, figsize=(25, 15))
subjects = [11, 10, 1, 14, 20, 5, 3, 4, 15, 7, 8, 21]
for ax, i in zip(axes.flat, subjects):
    ax.hist(np.log(day_mean_p[i] + 1), alpha=0.7, bins=50, rwidth=0.9, label='Patient')
    ax.hist(np.log(day_mean_c[i] + 1), alpha=0.7, bins=50, rwidth=0.9, label='Control')
    ax.set_title(f'Histogram of 24h average daily log-transformed activity of Patient & Control {i}')
    ax.set_xlim([0, 8])
    ax.set_ylim([0, 200])
    ax.legend()
fig.tight_layout()
fig, axes = plt.subplots(6, 2, figsize=(25, 15))
subjects = [11, 10, 1, 14, 20, 5, 3, 4, 15, 7, 8, 21]
for ax, i in zip(axes.flat, subjects):
    ax.hist(np.log(daily_mean_p[i] + 1), alpha=0.7, bins=50, rwidth=0.9, label='Patient')
    ax.hist(np.log(daily_mean_c[i] + 1), alpha=0.7, bins=50, rwidth=0.9, label='Control')
    ax.set_title(f'Histogram of 12h average daily log-transformed activity of Patient & Control {i}')
    ax.set_xlim([0, 8])
    ax.set_ylim([0, 100])
    ax.legend()
fig.tight_layout()
fig, axes = plt.subplots(6, 2, figsize=(25, 15))
subjects = [11, 10, 1, 14, 20, 5, 3, 4, 15, 7, 8, 21]
for ax, idx in zip(axes.flat, subjects):
    ax.hist(np.log(night_mean_p[idx] + 1), alpha=0.7, bins=50, rwidth=0.9, label='Patient')
    ax.hist(np.log(night_mean_c[idx] + 1), alpha=0.7, bins=50, rwidth=0.9, label='Control')
    ax.set_title(f'Histogram of 12h average nightly log-transformed activity of Patient & Control {idx}')
    ax.set_xlim([0, 8])
    ax.set_ylim([0, 80])
    ax.legend()
fig.tight_layout()
fig, axes = plt.subplots(6, 2, figsize=(25, 15))
subjects = [('Patient', 11), ('Control', 10), ('Patient', 1), ('Control', 14),
            ('Patient', 20), ('Control', 5), ('Patient', 3), ('Control', 4),
            ('Patient', 15), ('Control', 7), ('Patient', 8), ('Control', 21)]
for ax, (group, idx) in zip(axes.flat, subjects):
    day_data = daily_mean_p[idx] if group == 'Patient' else daily_mean_c[idx]
    night_data = night_mean_p[idx] if group == 'Patient' else night_mean_c[idx]
    ax.hist(np.log(day_data + 1), alpha=0.7, bins=50, rwidth=0.9, label='Day')
    ax.hist(np.log(night_data + 1), alpha=0.7, bins=50, rwidth=0.9, label='Night')
    ax.set_title(f'Histogram of 12h average nightly & daily log-transformed activity of {group} {idx}')
    ax.set_xlim([0, 8])
    ax.set_ylim([0, 80])
    ax.legend()
fig.tight_layout()
mean_daily_mean_c = np.asarray([np.mean(obj) for obj in daily_mean_c])
std_daily_mean_c = np.asarray([np.std(obj) for obj in daily_mean_c])
mean_night_mean_c = np.asarray([np.mean(obj) for obj in night_mean_c])
std_night_mean_c = np.asarray([np.std(obj) for obj in night_mean_c])
mean_daily_mean_p = np.asarray([np.mean(obj) for obj in daily_mean_p])
std_daily_mean_p = np.asarray([np.std(obj) for obj in daily_mean_p])
mean_night_mean_p = np.asarray([np.mean(obj) for obj in night_mean_p])
std_night_mean_p = np.asarray([np.std(obj) for obj in night_mean_p])
m = np.concatenate((mean_daily_mean_c, mean_daily_mean_p ,mean_night_mean_c, mean_night_mean_p))
v = np.concatenate((std_daily_mean_c, std_daily_mean_p, std_night_mean_c, std_night_mean_p))
d = np.concatenate((np.ones(len(mean_daily_mean_c)+len(mean_daily_mean_p)), np.zeros(len(mean_night_mean_c)+len(mean_night_mean_p))))
c = np.concatenate((np.ones(len(mean_daily_mean_c)), np.zeros(len(mean_daily_mean_p)),np.ones(len(mean_night_mean_c)), np.zeros(len(mean_night_mean_p))))
dic = {'m' : m, 'v': v, 'd': d, 'c': c}
vis_data = pd.DataFrame(dic)
from matplotlib.lines import Line2D
fig = plt.figure(figsize=(8,5))
color = ['darkorange' if x>0 else 'navy' for x in vis_data['d']]
shape = ['o' if k>0 else 'x' for k in vis_data['c']]
for x, l, c, m in zip(vis_data['m'], vis_data['v'], color, shape ):
plt.scatter(x, l, alpha=0.8, c=c,marker=m)
plt.ylabel('Standard deviation')
plt.xlabel('Mean')
legend_elements = [Line2D([0], [0], color='navy', marker='o', linestyle='None',
                          markersize=5, label='Average nightly activity of Control'),
                   Line2D([0], [0], color='darkorange', marker='o', linestyle='None',
                          markersize=5, label='Average daily activity of Control'),
                   Line2D([0], [0], color='navy', marker='x', linestyle='None',
                          markersize=5, label='Average nightly activity of Patient'),
                   Line2D([0], [0], color='darkorange', marker='x', linestyle='None',
                          markersize=5, label='Average daily activity of Patient')]
plt.legend(handles=legend_elements)
plt.grid(True)
plt.title('Distribution of the first two moments for average daily and nightly activity')
# First-difference all series
shizophrenia_c_diff = [shizophrenia_c[l].diff()[1:] for l in range(0,len(shizophrenia_c))]
shizophrenia_p_diff = [shizophrenia_p[l].diff()[1:] for l in range(0,len(shizophrenia_p))]
days_diff_c = [np.array([shizophrenia_c_diff[l][i*(60*24):(i+1)*(60*24)] for i in range(0,int(len(shizophrenia_c[l])/(60*24))-1)]) for l in range(0,len(shizophrenia_c))]
days_diff_p = [np.array([shizophrenia_p_diff[l][i*(60*24):(i+1)*(60*24)] for i in range(0,int(len(shizophrenia_p[l])/(60*24))-1)]) for l in range(0,len(shizophrenia_p))]
day_diff_mean_p = [np.mean(a, axis=0) for a in days_diff_p]
day_diff_mean_c = [np.mean(a, axis=0) for a in days_diff_c]
#Extract 12h structure 21-9 night
night_diff_c = [np.array([shizophrenia_c_diff[l][i*(60*12):(i+1)*(60*12)] for i in range(1,int(len(shizophrenia_c[l])/(60*12))-1,2)]) for l in range(0,len(shizophrenia_c))]
night_diff_p = [np.array([shizophrenia_p_diff[l][i*(60*12):(i+1)*(60*12)] for i in range(1,int(len(shizophrenia_p[l])/(60*12))-1, 2)]) for l in range(0,len(shizophrenia_p))]
night_diff_mean_p = [np.mean(a, axis=0) for a in night_diff_p]
night_diff_mean_c = [np.mean(a, axis=0) for a in night_diff_c]
#Extract 12h structure 9-21 day
daily_diff_c = [np.array([shizophrenia_c_diff[l][i*(60*12):(i+1)*(60*12)] for i in range(0,int(len(shizophrenia_c[l])/(60*12))-1,2)]) for l in range(0,len(shizophrenia_c))]
daily_diff_p = [np.array([shizophrenia_p_diff[l][i*(60*12):(i+1)*(60*12)] for i in range(0,int(len(shizophrenia_p[l])/(60*12))-1, 2)]) for l in range(0,len(shizophrenia_p))]
daily_diff_mean_p = [np.mean(a, axis=0) for a in daily_diff_p]
daily_diff_mean_c = [np.mean(a, axis=0) for a in daily_diff_c]
fig, ((ax0,ax1),(ax2,ax3)) = plt.subplots(2, 2, figsize=(20,10))
ax0.set_title('Patients average 12h daily differenced activity')
ax0.set_ylabel('Activity')
ax0.set_xlabel('Time')
ax0.set_ylim([-600,600])
for a in daily_diff_p:
ax0.plot(np.mean(a, axis = 0))
ax1.set_title('Controls average 12h daily differenced activity')
ax1.set_ylabel('Activity')
ax1.set_xlabel('Time')
ax1.set_ylim([-600,600])
for a in daily_diff_c:
ax1.plot(np.mean(a, axis = 0))
ax2.set_title('Patients average 12h nightly differenced activity')
ax2.set_ylabel('Activity')
ax2.set_xlabel('Time')
ax2.set_ylim([-600,600])
for a in night_diff_p:
ax2.plot(np.mean(a, axis = 0))
ax3.set_title('Controls average 12h nightly differenced activity')
ax3.set_ylabel('Activity')
ax3.set_xlabel('Time')
ax3.set_ylim([-600,600])
for a in night_diff_c:
ax3.plot(np.mean(a, axis = 0))
fig.tight_layout()
fig, axes = plt.subplots(6, 2, figsize=(25, 15))
subjects = [('Patient', 11), ('Control', 10), ('Patient', 1), ('Control', 14),
            ('Patient', 20), ('Control', 5), ('Patient', 3), ('Control', 4),
            ('Patient', 15), ('Control', 7), ('Patient', 8), ('Control', 21)]
for ax, (group, idx) in zip(axes.flat, subjects):
    day_data = daily_diff_mean_p[idx] if group == 'Patient' else daily_diff_mean_c[idx]
    night_data = night_diff_mean_p[idx] if group == 'Patient' else night_diff_mean_c[idx]
    ax.hist(day_data, alpha=0.7, bins=50, rwidth=0.9, label='Day')
    ax.hist(night_data, alpha=0.7, bins=50, rwidth=0.9, label='Night')
    ax.set_title(f'Histogram of 12h average nightly & daily differenced activity of {group} {idx}')
    ax.set_xlim([-200, 200])
    ax.set_ylim([0, 200])
    ax.legend()
fig.tight_layout()
```
### Stationarity and Autocorrelation
#### Autocorrelation plot
```
from statsmodels.graphics.tsaplots import plot_acf,plot_pacf
fig, ax = plt.subplots(2, 2, figsize=(15, 10))
plot_acf(shizophrenia_p[11], lags=2000, ax = ax[0,0], title = 'Patients Autocorrelation')
plot_acf(shizophrenia_c[11], lags=2000, ax = ax[0,1], title = 'Controls Autocorrelation')
plot_acf(shizophrenia_p[0], lags=2000, ax = ax[1,0], title = 'Patients Autocorrelation')
plot_acf(shizophrenia_c[3], lags=2000, ax = ax[1,1], title = 'Controls Autocorrelation')
plt.show()
```
### Test for Stationarity
```
from Modules.func.support_functions import *
p_val_adf_p = [adf_test(shizophrenia_p[i]) for i in range(0, len(shizophrenia_p))]
p_val_adf_c = [adf_test(shizophrenia_c[j]) for j in range(0, len(shizophrenia_c))]
p_val_kpss_p =[kpss_test(shizophrenia_p[i]) for i in range(0, len(shizophrenia_p))]
p_val_kpss_c =[kpss_test(shizophrenia_c[j]) for j in range(0, len(shizophrenia_c))]
kpss = np.concatenate([p_val_kpss_p,p_val_kpss_c])
adf = np.concatenate([p_val_adf_p,p_val_adf_c])
dic = {'kpss': kpss, 'ADF': adf}
p_val_statio = pd.DataFrame(dic)
```
## Put Table in Appendix!
```
p_val_statio
```
# Complex Arithmetic
This is a tutorial designed to introduce you to complex arithmetic.
This topic isn't particularly expansive, but it's important to understand it to be able to work with quantum computing.
This tutorial covers the following topics:
* Imaginary and complex numbers
* Basic complex arithmetic
* Complex plane
* Modulus operator
* Imaginary exponents
* Polar representation
If you need to look up some formulas quickly, you can find them in [this cheatsheet](https://github.com/microsoft/QuantumKatas/blob/master/quickref/qsharp-quick-reference.pdf).
If you are curious to learn more, you can find more information at [Wikipedia](https://en.wikipedia.org/wiki/Complex_number).
This notebook has several tasks that require you to write Python code to test your understanding of the concepts. If you are not familiar with Python, [here](https://docs.python.org/3/tutorial/index.html) is a good introductory tutorial for it.
Let's start by importing some useful mathematical functions and constants, and setting up a few things necessary for testing the exercises. **Do not skip this step**.
Click the cell with code below this block of text and press `Ctrl+Enter` (`⌘+Enter` on Mac).
```
# Run this cell using Ctrl+Enter (⌘+Enter on Mac).
from testing import exercise
from typing import Tuple
import math
Complex = Tuple[float, float]
Polar = Tuple[float, float]
```
# Algebraic Perspective
## Imaginary numbers
For some purposes, real numbers aren't enough. Probably the most famous example is this equation:
$$x^{2} = -1$$
which has no solution for $x$ among real numbers. If, however, we abandon that constraint, we can do something interesting - we can define our own number. Let's say there exists some number that solves that equation. Let's call that number $i$.
$$i^{2} = -1$$
As we said before, $i$ can't be a real number. In that case, we'll call it an **imaginary unit**. However, there is no reason for us to define it as acting any different from any other number, other than the fact that $i^2 = -1$:
$$i + i = 2i \\
i - i = 0 \\
-1 \cdot i = -i \\
(-i)^{2} = -1$$
We'll call the number $i$ and its real multiples **imaginary numbers**.
A good video introduction to imaginary numbers can be found [here](https://youtu.be/SP-YJe7Vldo).
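These rules can be checked directly in Python, which has complex numbers built in (the imaginary unit is written `1j`). This quick check uses only the standard language, not the exercise framework below:

```python
# Python's built-in imaginary unit is written 1j.
i = 1j

assert i * i == -1          # the defining property: i^2 = -1
assert i + i == 2j          # real multiples of i are imaginary numbers
assert i - i == 0
assert (-1) * i == -i
assert (-i) * (-i) == -1

# Powers of i cycle with period 4: i, -1, -i, 1, i, -1, ...
powers = [i ** n for n in range(1, 9)]
assert powers[3] == 1 and powers[7] == 1
```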
### <span style="color:blue">Exercise 1</span>: Powers of $i$.
**Input:** An even integer $n$.
**Goal:** Return the $n$th power of $i$, or $i^n$.
Fill in the missing code (denoted by `...`) and run the cell below to test your work.
```
@exercise
def imaginary_power(n : int) -> int:
# If n is divisible by 4
if n % 4 == 0:
return ...
else:
return ...
```
## Complex Numbers
Adding imaginary numbers to each other is quite simple, but what happens when we add a real number to an imaginary number? The result of that addition will be partly real and partly imaginary, otherwise known as a **complex number**. A complex number is simply the real part and the imaginary part being treated as a single number. Complex numbers are generally written as the sum of their two parts: $a + bi$, where both $a$ and $b$ are real numbers. For example, $3 + 4i$, or $-5 - 7i$ are valid complex numbers. Note that purely real or purely imaginary numbers can also be written as complex numbers: $2$ is $2 + 0i$, and $-3i$ is $0 - 3i$.
When performing operations on complex numbers, it is often helpful to treat them as polynomials in terms of $i$.
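For instance, expanding a product the way you would a polynomial, and checking the hand计算 result against Python's built-in `complex` type:

```python
# Expand (2 + 3i)(1 + 2i) as a polynomial in i:
#   2*1 + 2*2i + 3i*1 + 3i*2i = 2 + 4i + 3i + 6i^2 = 2 + 7i - 6 = -4 + 7i
product = complex(2, 3) * complex(1, 2)
assert product == complex(-4, 7)
```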
### <span style="color:blue">Exercise 2</span>: Complex addition.
**Inputs:**
1. A complex number $x = a + bi$, represented as a tuple `(a, b)`.
2. A complex number $y = c + di$, represented as a tuple `(c, d)`.
**Goal:** Return the sum of these two numbers $x + y = z = g + hi$, represented as a tuple `(g, h)`.
> A tuple is an ordered, fixed-size collection of values; here we use tuples of two numbers.
> You can make a tuple by putting two numbers in parentheses like this: `(3, 4)`.
> * You can access the $n$th element of tuple `x` like so: `x[n]`
> * For this tutorial, complex numbers are represented as tuples where the first element is the real part, and the second element is the real coefficient of the imaginary part
> * For example, $1 + 2i$ would be represented by a tuple `(1, 2)`, and $7 - 5i$ would be represented by `(7, -5)`.
>
> You can find more details about Python's tuple data type in the [official documentation](https://docs.python.org/3/library/stdtypes.html#tuples).
<br/>
<details>
<summary><strong>Need a hint? Click here</strong></summary>
Remember, adding complex numbers is just like adding polynomials. Add components of the same type: add the real part to the real part and the imaginary part to the imaginary part. <br>
A video explanation can be found <a href="https://www.youtube.com/watch?v=SfbjqVyQljk">here</a>.
</details>
```
@exercise
def complex_add(x : Complex, y : Complex) -> Complex:
# You can extract elements from a tuple like this
a = x[0]
b = x[1]
c = y[0]
d = y[1]
# This creates a new variable and stores the real component into it
real = a + c
# Replace the ... with code to calculate the imaginary component
imaginary = ...
# You can create a tuple like this
ans = (real, imaginary)
return ans
```
### <span style="color:blue">Exercise 3</span>: Complex multiplication.
**Inputs:**
1. A complex number $x = a + bi$, represented as a tuple `(a, b)`.
2. A complex number $y = c + di$, represented as a tuple `(c, d)`.
**Goal:** Return the product of these two numbers $x \cdot y = z = g + hi$, represented as a tuple `(g, h)`.
<br/>
<details>
<summary><strong>Need a hint? Click here</strong></summary>
Remember, multiplying complex numbers is just like multiplying polynomials. Distribute one of the complex numbers:
$$(a + bi)(c + di) = a(c + di) + bi(c + di)$$
Then multiply through, and group the real and imaginary terms together.
<br/>
A video explanation can be found <a href="https://www.youtube.com/watch?v=cWn6g8Qqvs4">here</a>.
</details>
```
@exercise
def complex_mult(x : Complex, y : Complex) -> Complex:
# Fill in your own code
return ...
```
## Complex Conjugate
Before we discuss any other complex operations, we have to cover the **complex conjugate**. The conjugate is a simple operation: given a complex number $x = a + bi$, its complex conjugate is $\overline{x} = a - bi$.
The conjugate allows us to do some interesting things. The first and probably most important is multiplying a complex number by its conjugate:
$$x \cdot \overline{x} = (a + bi)(a - bi)$$
Notice that the second expression is a difference of squares:
$$(a + bi)(a - bi) = a^2 - (bi)^2 = a^2 - b^2i^2 = a^2 + b^2$$
This means that a complex number multiplied by its conjugate always produces a non-negative real number.
Another property of the conjugate is that it distributes over both complex addition and complex multiplication:
$$\overline{x + y} = \overline{x} + \overline{y} \\
\overline{x \cdot y} = \overline{x} \cdot \overline{y}$$
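Both properties are easy to verify numerically with Python's built-in `conjugate` method (a quick check, separate from the exercise below):

```python
x, y = complex(3, 4), complex(-1, 2)

# x * conj(x) = a^2 + b^2, a non-negative real number
assert x * x.conjugate() == 3**2 + 4**2   # 25

# Conjugation distributes over addition and multiplication
assert (x + y).conjugate() == x.conjugate() + y.conjugate()
assert (x * y).conjugate() == x.conjugate() * y.conjugate()
```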
### <span style="color:blue">Exercise 4</span>: Complex conjugate.
**Input:** A complex number $x = a + bi$, represented as a tuple `(a, b)`.
**Goal:** Return $\overline{x} = g + hi$, the complex conjugate of $x$, represented as a tuple `(g, h)`.
<br/>
<details>
<summary><b>Need a hint? Click here</b></summary>
A video explanation can be found <a href="https://www.youtube.com/watch?v=BZxZ_eEuJBM">here</a>.
</details>
```
@exercise
def conjugate(x : Complex) -> Complex:
return ...
```
## Complex Division
The next use for the conjugate is complex division. Let's take two complex numbers: $x = a + bi$ and $y = c + di \neq 0$ (not even complex numbers let you divide by $0$). What does $\frac{x}{y}$ mean?
Let's expand $x$ and $y$ into their component forms:
$$\frac{x}{y} = \frac{a + bi}{c + di}$$
Unfortunately, it isn't very clear what it means to divide by a complex number. We need some way to move either all real parts or all imaginary parts into the numerator. And thanks to the conjugate, we can do just that. Using the fact that any number (except $0$) divided by itself equals $1$, and any number multiplied by $1$ equals itself, we get:
$$\frac{x}{y} = \frac{x}{y} \cdot 1 = \frac{x}{y} \cdot \frac{\overline{y}}{\overline{y}} = \frac{x\overline{y}}{y\overline{y}} = \frac{(a + bi)(c - di)}{(c + di)(c - di)} = \frac{(a + bi)(c - di)}{c^2 + d^2}$$
By doing this, we re-wrote our division problem to have a complex multiplication expression in the numerator, and a real number in the denominator. We already know how to multiply complex numbers, and dividing a complex number by a real number is as simple as dividing both parts of the complex number separately:
$$\frac{a + bi}{r} = \frac{a}{r} + \frac{b}{r}i$$
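A numeric sketch of this procedure, compared against Python's built-in complex division:

```python
a, b = 1, 2    # x = 1 + 2i
c, d = 3, -4   # y = 3 - 4i

denom = c**2 + d**2                # y * conj(y), a real number
g = (a * c + b * d) / denom        # real part of x * conj(y), divided through
h = (b * c - a * d) / denom        # imaginary part of x * conj(y), divided through

assert abs(complex(g, h) - complex(a, b) / complex(c, d)) < 1e-12
```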
### <span style="color:blue">Exercise 5</span>: Complex division.
**Inputs:**
1. A complex number $x = a + bi$, represented as a tuple `(a, b)`.
2. A complex number $y = c + di \neq 0$, represented as a tuple `(c, d)`.
**Goal:** Return the result of the division $\frac{x}{y} = \frac{a + bi}{c + di} = g + hi$, represented as a tuple `(g, h)`.
<br/>
<details>
<summary><b>Need a hint? Click here</b></summary>
A video explanation can be found <a href="https://www.youtube.com/watch?v=Z8j5RDOibV4">here</a>.
</details>
```
@exercise
def complex_div(x : Complex, y : Complex) -> Complex:
return ...
```
# Geometric Perspective
## The Complex Plane
You may recall that real numbers can be represented geometrically using the [number line](https://en.wikipedia.org/wiki/Number_line) - a line on which each point represents a real number. We can extend this representation to include imaginary and complex numbers, which gives rise to an entirely different number line: the imaginary number line, which only intersects with the real number line at $0$.
A complex number has two components - a real component and an imaginary component. As you no doubt noticed from the exercises, these can be represented by two real numbers - the real component, and the real coefficient of the imaginary component. This allows us to map complex numbers onto a two-dimensional plane - the **complex plane**. The most common mapping is the obvious one: $a + bi$ can be represented by the point $(a, b)$ in the **Cartesian coordinate system**.

This mapping allows us to apply complex arithmetic to geometry, and, more importantly, apply geometric concepts to complex numbers. Many properties of complex numbers become easier to understand when viewed through a geometric lens.
## Modulus
One such property is the **modulus** operator. This operator generalizes the **absolute value** operator on real numbers to the complex plane. Just like the absolute value of a number is its distance from $0$, the modulus of a complex number is its distance from $0 + 0i$. Using the distance formula, if $x = a + bi$, then:
$$|x| = \sqrt{a^2 + b^2}$$
There is also a slightly different, but algebraically equivalent definition:
$$|x| = \sqrt{x \cdot \overline{x}}$$
Like the conjugate, the modulus distributes over multiplication.
$$|x \cdot y| = |x| \cdot |y|$$
Unlike the conjugate, however, the modulus doesn't distribute over addition. Instead, the interaction of the two comes from the triangle inequality:
$$|x + y| \leq |x| + |y|$$
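Python's built-in `abs` implements the modulus for complex numbers, so these properties can be checked directly:

```python
x, y = complex(3, 4), complex(-5, 12)

assert abs(x) == 5.0    # sqrt(3^2 + 4^2)
assert abs(y) == 13.0   # sqrt(5^2 + 12^2)

# The modulus distributes over multiplication...
assert abs(abs(x * y) - abs(x) * abs(y)) < 1e-9

# ...but only bounds addition (the triangle inequality)
assert abs(x + y) <= abs(x) + abs(y)
```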
### <span style="color:blue">Exercise 6</span>: Modulus.
**Input:** A complex number $x = a + bi$, represented as a tuple `(a, b)`.
**Goal:** Return the modulus of this number, $|x|$.
> Python's exponentiation operator is `**`, so $2^3$ is `2 ** 3` in Python.
>
> You will probably need some mathematical functions to solve the next few tasks. They are available in Python's math library. You can find the full list and detailed information in the [official documentation](https://docs.python.org/3/library/math.html).
<details>
<summary><strong>Need a hint? Click here</strong></summary>
In particular, you might be interested in <a href=https://docs.python.org/3/library/math.html#math.sqrt>Python's square root function.</a><br>
A video explanation can be found <a href="https://www.youtube.com/watch?v=FwuPXchH2rA">here</a>.
</details>
```
@exercise
def modulus(x : Complex) -> float:
return ...
```
## Imaginary Exponents
The next complex operation we're going to need is exponentiation. Raising an imaginary number to an integer power is a fairly simple task, but raising a number to an imaginary power, or raising an imaginary (or complex) number to a real power isn't quite as simple.
Let's start with raising real numbers to imaginary powers. Specifically, let's start with a rather special real number - Euler's constant, $e$:
$$e^{i\theta} = \cos \theta + i\sin \theta$$
(Here and later in this tutorial $\theta$ is measured in radians.)
Explaining why that happens is somewhat beyond the scope of this tutorial, as it requires some calculus, so we won't do that here. If you are curious, you can see [this video](https://youtu.be/v0YEaeIClKY) for a beautiful intuitive explanation, or [the Wikipedia article](https://en.wikipedia.org/wiki/Euler%27s_formula#Proofs) for a more mathematically rigorous proof.
Here are some examples of this formula in action:
$$e^{i\pi/4} = \frac{1}{\sqrt{2}} + \frac{i}{\sqrt{2}} \\
e^{i\pi/2} = i \\
e^{i\pi} = -1 \\
e^{2i\pi} = 1$$
> One interesting consequence of this is Euler's Identity:
>
> $$e^{i\pi} + 1 = 0$$
>
> While this doesn't have any notable uses, it is still an interesting identity to consider, as it combines 5 fundamental constants of algebra into one expression.
We can also calculate complex powers of $e$ as follows:
$$e^{a + bi} = e^a \cdot e^{bi}$$
Finally, using logarithms to express the base of the exponent as $r = e^{\ln r}$, we can use this to find complex powers of any positive real number.
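The standard `cmath` module computes complex exponentials, so both Euler's formula and the $r = e^{\ln r}$ trick can be sanity-checked numerically (up to floating-point error):

```python
import cmath
import math

theta = math.pi / 3
# Euler's formula: e^{i*theta} = cos(theta) + i*sin(theta)
euler = cmath.exp(1j * theta)
assert abs(euler - complex(math.cos(theta), math.sin(theta))) < 1e-12

# e^{i*pi} is -1, up to floating-point rounding
assert abs(cmath.exp(1j * math.pi) + 1) < 1e-12

# r^x for a positive real r, via r = e^{ln r}
r, x = 2.0, complex(0.5, 1.5)
assert abs(cmath.exp(x * math.log(r)) - r ** x) < 1e-9
```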
### <span style="color:blue">Exercise 7</span>: Complex exponents.
**Input:** A complex number $x = a + bi$, represented as a tuple `(a, b)`.
**Goal:** Return the complex number $e^x = e^{a + bi} = g + hi$, represented as a tuple `(g, h)`.
> Euler's constant $e$ is available in the [math library](https://docs.python.org/3/library/math.html#math.e),
> as are [Python's trigonometric functions](https://docs.python.org/3/library/math.html#trigonometric-functions).
```
@exercise
def complex_exp(x : Complex) -> Complex:
return ...
```
### <span style="color:blue">Exercise 8</span>*: Complex powers of real numbers.
**Inputs:**
1. A non-negative real number $r$.
2. A complex number $x = a + bi$, represented as a tuple `(a, b)`.
**Goal:** Return the complex number $r^x = r^{a + bi} = g + hi$, represented as a tuple `(g, h)`.
> Remember, you can use functions you have defined previously.
<br/>
<details>
<summary><strong>Need a hint? Click here</strong></summary>
You can use the fact that $r = e^{\ln r}$ to convert exponent bases. Remember though, $\ln r$ is only defined for positive numbers - make sure to check for $r = 0$ separately!
</details>
```
@exercise
def complex_exp_real(r : float, x : Complex) -> Complex:
return ...
```
## Polar coordinates
Consider the expression $e^{i\theta} = \cos\theta + i\sin\theta$. Notice that if we map this number onto the complex plane, it will land on a **unit circle** around $0 + 0i$. This means that its modulus is always $1$. You can also verify this algebraically: $\cos^2\theta + \sin^2\theta = 1$.
Using this fact we can represent complex numbers using **polar coordinates**. In a polar coordinate system, a point is represented by two numbers: its direction from origin, represented by an angle from the $x$ axis, and how far away it is in that direction.
Another way to think about this is that we're taking a point that is $1$ unit away (which is on the unit circle) in the specified direction, and multiplying it by the desired distance. And to get the point on the unit circle, we can use $e^{i\theta}$.
A complex number of the format $r \cdot e^{i\theta}$ will be represented by a point which is $r$ units away from the origin, in the direction specified by the angle $\theta$.

Sometimes $\theta$ will be referred to as the number's **phase**.
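As a sanity check on this representation, Python's `cmath` module already provides both conversions (shown only as an illustration; the exercises below ask you to implement them yourself):

```python
import cmath

z = complex(1.0, 1.0)          # 1 + i
r, theta = cmath.polar(z)      # r = sqrt(2), theta = pi/4
z_back = cmath.rect(r, theta)  # converts back to Cartesian form
```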
### <span style="color:blue">Exercise 9</span>: Cartesian to polar conversion.
**Input:** A complex number $x = a + bi$, represented as a tuple `(a, b)`.
**Goal:** Return the polar representation of $x = re^{i\theta}$ - return the distance from origin $r$ and phase $\theta$ as a tuple `(r, θ)`.
* $r$ should not be negative: $r \geq 0$
* $\theta$ should be between $-\pi$ and $\pi$: $-\pi < \theta \leq \pi$
<br/>
<details>
<summary><strong>Need a hint? Click here</strong></summary>
<a href=https://docs.python.org/3/library/math.html#math.atan2>Python has a separate function</a> for calculating $\theta$ for this purpose.<br>
A video explanation can be found <a href="https://www.youtube.com/watch?v=8RasCV_Lggg">here</a>.
</details>
```
@exercise
def polar_convert(x : Complex) -> Polar:
r = ...
theta = ...
return (r, theta)
```
### <span style="color:blue">Exercise 10</span>: Polar to Cartesian conversion.
**Input:** A complex number $x = re^{i\theta}$, represented in polar form as a tuple `(r, θ)`.
**Goal:** Return the Cartesian representation of $x = a + bi$, represented as a tuple `(a, b)`.
```
@exercise
def cartesian_convert(x : Polar) -> Complex:
return ...
```
### <span style="color:blue">Exercise 11</span>: Polar multiplication.
**Inputs:**
1. A complex number $x = r_{1}e^{i\theta_1}$ represented in polar form as a tuple `(r1, θ1)`.
2. A complex number $y = r_{2}e^{i\theta_2}$ represented in polar form as a tuple `(r2, θ2)`.
**Goal:** Return the result of the multiplication $x \cdot y = z = r_3e^{i\theta_3}$, represented in polar form as a tuple `(r3, θ3)`.
* $r_3$ should not be negative: $r_3 \geq 0$
* $\theta_3$ should be between $-\pi$ and $\pi$: $-\pi < \theta_3 \leq \pi$
* Try to avoid converting the numbers into Cartesian form.
<br/>
<details>
<summary><strong>Need a hint? Click here</strong></summary>
Remember, a number written in polar form already involves multiplication. What is $r_1e^{i\theta_1} \cdot r_2e^{i\theta_2}$?
</details>
```
@exercise
def polar_mult(x : Polar, y : Polar) -> Polar:
return ...
```
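One subtlety worth illustrating: after adding the phases, $\theta_1 + \theta_2$ can fall outside $(-\pi, \pi]$ and must be wrapped back into range. A sketch of just that wrapping step (the name is illustrative, not the exercise's required signature):

```python
import math

def wrap_phase(theta: float) -> float:
    # Bring an angle into the interval (-pi, pi]
    while theta > math.pi:
        theta -= 2 * math.pi
    while theta <= -math.pi:
        theta += 2 * math.pi
    return theta
```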
### <span style="color:blue">Exercise 12</span>**: Arbitrary complex exponents.
You now know enough about complex numbers to figure out how to raise a complex number to a complex power.
**Inputs:**
1. A complex number $x = a + bi$, represented as a tuple `(a, b)`.
2. A complex number $y = c + di$, represented as a tuple `(c, d)`.
**Goal:** Return the result of raising $x$ to the power of $y$: $x^y = (a + bi)^{c + di} = z = g + hi$, represented as a tuple `(g, h)`.
<br/>
<details>
<summary><strong>Need a hint? Click here</strong></summary>
Convert $x$ to polar form, and raise the result to the power of $y$.
</details>
```
@exercise
def complex_exp_arbitrary(x : Complex, y : Complex) -> Complex:
return ...
```
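For comparison, Python's built-in `complex` type evaluates arbitrary complex powers natively (via the same $x^y = e^{y \ln x}$ idea), which makes a handy cross-check while testing your solution:

```python
import cmath

x = 1 + 1j
y = 2 + 0.5j
direct = x ** y                        # built-in complex power
via_log = cmath.exp(y * cmath.log(x))  # x^y = e^(y * ln x)
```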
## Conclusion
Congratulations! You should now know enough complex arithmetic to get started with quantum computing. When you are ready, you can move on to the next tutorial in this series, covering [linear algebra](../LinearAlgebra/LinearAlgebra.ipynb).
# K-means and Gaussian Mixture Model
## 1) K-means clustering -- anisotropic data clusters, where it does not work well
Here we're going to apply K-means clustering to a dataset in which the data clusters have nonzero covariance. In this case, the K-means algorithm does not work well (as also discussed in the __'Isotropic_clusters_Kmeans' notebook__). We have to use a Gaussian mixture model to improve our clustering.
```
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from sklearn.datasets import make_blobs
from Kmeans_utils import *
from sklearn.cluster import KMeans
from scipy.stats import multivariate_normal
import matplotlib.mlab as mlab
%matplotlib inline
```
### Get data
Here we randomly generate some data using make_blobs() from scikit-learn. make_blobs() generates isotropic gaussian blobs, but we want the blobs to be anisotropic, i.e. to show correlation between the blob dimensions. In this case, the K-means clustering algorithm does not work well as it assumes isotropic clusters (see __'Isotropic_clusters_Kmeans' notebook__).
```
nb_samples = 500
nb_feat = 2 #2D data, so we can visualise easily
K = 4 #number of clusters
X, labels = make_blobs(nb_samples, nb_feat, K, random_state = 1) #6
transformation = [[0.634, -0.623], [-0.152, 0.461]]
X = Norm_data(np.dot(X, transformation))
#Visualise the data
colors = cm.rainbow(np.linspace(0, 1, K)) #colours for plotting
for i in range(K):
plt.scatter(X[labels==i,0], X[labels==i,1], color = colors[i])
```
### Try various different initial cluster centroid assignments -- it doesn't work!!
No matter how hard we try to make our K-means algorithm work, it just doesn't do a good job in general due to the covariance in the data clusters.
```
#initialise cluster centres and error values
fig, axs = plt.subplots(2,2, figsize = (10, 10)); axs = axs.ravel()
seeds = [0, 6, 9, 10]
Store_err = np.zeros(len(seeds))
for i, seed in enumerate(seeds):
#initialise
np.random.seed(seed)
clus = InitialiseCentres(K, X)
[err, err_prev] = [0, 1]
#The K-means algorithm
while (not np.isclose(err, err_prev, rtol = 1e-10)):
err_prev = err
idx, err = Expectation(X, clus)
clus = FindCenters(X, idx, K)
Store_err[i] = err
#show final cluster assignments
for j in range(K):
axs[i].scatter(X[j == idx,0], X[j == idx,1], color = colors[j], s = 15)
axs[i].set_title('Seed %d, total error %.2f' % (seed, err))
axs[np.argmin(Store_err)].text(0.6,0.9, 'We will use this one\n which is still bad!',
horizontalalignment = 'center',
verticalalignment = 'center',
transform=axs[np.argmin(Store_err)].transAxes,
fontsize = 15, bbox=dict(facecolor='red', alpha=0.3))
```
## 2) Gaussian Mixture Model
**_K-means_**
In K-means clustering we assigned each data point to its closest cluster centroid and said 'this point now belongs to this closest cluster with probability 1'.
The Expectation step --> returns an array of indices, one index for each datapoint to indicate to which cluster it belongs.
The Maximisation step --> returns the positions of the cluster centroids that minimise our error (the total distance between the data points and their closest cluster centroids)
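The `Expectation`/`FindCenters` helpers used in this notebook come from `Kmeans_utils` and aren't shown here; conceptually, one K-means iteration boils down to something like this sketch:

```python
import numpy as np

def kmeans_step(X, centres):
    # Expectation: assign each point to its nearest centre
    dists = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
    idx = dists.argmin(axis=1)
    # Maximisation: move each centre to the mean of its assigned points
    new_centres = np.array([X[idx == k].mean(axis=0)
                            for k in range(len(centres))])
    return idx, new_centres
```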
**_Gaussian Mixture Model (GMM)_**
In a GMM, we don't make such hard statements. Each datapoint has a certain probability of belonging to each cluster.
The Expectation step --> returns a (m x K) matrix in which each row i gives the probabilities of datapoint i belonging to each of the K clusters. This matrix is called the **responsibility matrix** because it tells us how responsible cluster k is for datapoint i.
The Maximisation step --> returns the means, covariances and weights of the K clusters that maximise the likelihood of the observed data.
Like K-means, a GMM can come up with different solutions depending on the initialisation. It is better to run it a few times and pick the solution with the highest likelihood, similar to what we did with K-means above (where we picked the solution with the lowest error).
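The `Responsibility` helper imported from `Kmeans_utils` isn't shown in this notebook; as a sketch, the E-step it performs amounts to:

```python
import numpy as np
from scipy.stats import multivariate_normal

def responsibilities(X, mus, covs, pi):
    # Unnormalised: pi_k * N(x | mu_k, cov_k) for each cluster k
    dens = np.column_stack([
        pi[k] * multivariate_normal.pdf(X, mean=mus[k], cov=covs[k])
        for k in range(len(pi))
    ])
    # Normalise each row so responsibilities sum to 1 per data point
    return dens / dens.sum(axis=1, keepdims=True)
```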
```
#initialise
np.random.seed(1)
mu_init = InitialiseCentres(K, X)
pi = [1.0/K]*K
cov_init = [np.identity(X.shape[1])]*K
r_init = Responsibility(X, mu_init, cov_init, pi)
mu = mu_init; cov = cov_init
```
### Expectation-Maximisation algorithm
```
#Exp-Max algorithm
nIter = 0
Store_err = np.zeros(60)
llh, llh_prev = [0, 1]
while(not np.isclose(llh, llh_prev, rtol = 1e-3)):
llh_prev = llh
#Expectation step
r = Responsibility(X, mu, cov, pi)
#Maximisation step
pi, mu, cov = Max(r, X)
#likelihood
llh = Log_llh(r, X, mu, cov, pi)
Store_err[nIter] = llh
nIter+= 1
print('Needed {} iterations to converge'.format(nIter))
```
### Plot results
We now plot the results of the GMM. We provide the contours of the final Gaussian distributions and colour the datapoints in the following way: each datapoint is assigned to the cluster for which it has the highest probability of belonging to. All points belonging to a specific cluster are given the same colour, the intensity depending on the probability of belonging to that cluster. The initial configuration as well as the final result are shown.
The log-likelihood as a function of the number of iterations of the GMM is also shown -- this should be monotonically increasing.
The final result looks pretty good! Much better than what the K-means algorithm produced.
```
fig, axs = plt.subplots(2,2, figsize = (15, 10)); axs = axs.ravel()
cmaps = ['Reds','Greens','Wistia','Blues']
titles = ['Initial guess', 'Final clustering']
mus = [mu_init, mu]
covs = [cov_init, cov]
rs = [r_init, r]
xmin, ymin = np.min(X, axis = 0)
xmax, ymax = np.max(X, axis = 0)
#actual cluster assignments
for k in range(K):
axs[0].scatter(X[labels==k,0], X[labels==k,1], color = colors[k])
axs[0].set_title('Actual cluster labels')
#likelihood changes throughout iterations
axs[1].plot(np.arange(nIter), Store_err[:nIter])
axs[1].set_title('Log-likelihood')
axs[1].set_xlabel('Iterations')
#plot distributions and data points
idxs = [np.argmax(r_init, axis = 1), np.argmax(r, axis = 1)] #cluster assignments initial guess and final result
for k in range(K):
for j in range(2):
#scatter data points
strs = [str(i) for i in rs[j][k == idxs[j],k]]
axs[j+2].scatter(X[k == idxs[j],0], X[k == idxs[j],1], c = strs, cmap = cmaps[k], alpha = 0.5)
#plot contours
PlotContours(mu = mus[j][k], cov = covs[j][k], X=X, axs = axs[j+2])
axs[j+2].set_title(titles[j])
```
# Multiple iterations for GMM
Also for the GMM the initialisation of the means and covariance of the base Gaussian distributions can influence the final cluster assignments. Here we repeat the GMM with four different initialisations and plot the final cluster assignments, assuming that each point belongs to the cluster to which it is most likely to belong.
Like we did for K-means, we choose the best configuration; here that means the highest log-likelihood rather than the lowest error.
```
fig, axs = plt.subplots(2,2, figsize = (15, 7)); axs = axs.ravel()
seeds = [1, 2, 3, 10]
Store_err = np.zeros(len(seeds))
for i2, seed in enumerate(seeds):
#initialise
np.random.seed(seed)
mu = InitialiseCentres(K, X)
pi = [1.0/K]*K
cov = [np.identity(X.shape[1])]*K
llh, llh_prev = [0, 1]
#Exp-Max algorithm
while (not np.isclose(llh, llh_prev, rtol = 1e-3)):
llh_prev = llh
#Expectation step
r = Responsibility(X, mu, cov, pi)
#Maximisation step
pi, mu, cov = Max(r, X)
#likelihood
llh = Log_llh(r, X, mu, cov, pi)
Store_err[i2] = llh
#plot
idxs = np.argmax(r, axis = 1)
for k in range(K):
axs[i2].scatter(X[k == idxs,0], X[k == idxs,1], c = colors[k])
axs[i2].set_title('Log-llh is %.1f' % Store_err[i2])
axs[np.argmax(Store_err)].text(0.25,0.15, 'We will use this one!',
horizontalalignment = 'center',
verticalalignment = 'center',
transform=axs[np.argmax(Store_err)].transAxes,
fontsize = 15, bbox=dict(facecolor='red', alpha=0.3))
```
### Finally, we'll run it some more times and pick out the best solution
```
llh_best = -1e8
for i in range(80):
#initialise
np.random.seed(i)
mu = InitialiseCentres(K, X)
pi = [1.0/K]*K
cov = [np.identity(X.shape[1])]*K
llh, llh_prev = [0, 1]
#Exp-Max algorithm
while (not np.isclose(llh, llh_prev, rtol = 1e-3)):
llh_prev = llh
#Expectation step
r = Responsibility(X, mu, cov, pi)
#Maximisation step
pi, mu, cov = Max(r, X)
#likelihood
llh = Log_llh(r, X, mu, cov, pi)
#Store best solution
if llh > llh_best:
r_best = r
mu_best = mu
cov_best = cov
llh_best = llh
fig, axs = plt.subplots(2,2, figsize = (15,10)); axs = axs.ravel()
#actual cluster assignments
for k in range(K):
axs[0].scatter(X[labels==k,0], X[labels==k,1], color = colors[k])
axs[0].set_title('Actual cluster labels')
#plot best solution
idxs = np.argmax(r_best, axis = 1)
for k in range(K):
axs[1].scatter(X[k == idxs,0], X[k == idxs,1], c = colors[k])
strs = [str(i) for i in r_best[k == idxs,k]]
axs[2].scatter(X[k == idxs,0], X[k == idxs,1], c = strs, cmap = cmaps[k], edgecolors = 'k')
#points we're most uncertain about to which cluster they belong
prob = r_best[np.arange(X.shape[0]),idxs] #prob of belonging to the most likely cluster
idx_unc = prob.argsort()[:20] #indices of most uncertain points
strs = [str(i) for i in prob[idx_unc]]
axs[3].scatter(X[:,0], X[:,1], color = 'b', alpha = 0.1) #all data in background for reference
axs[3].scatter(X[idx_unc, 0], X[idx_unc, 1], c = strs, cmap = 'Reds', edgecolors = 'k') #uncertain points
axs[1].set_title('Result, Log-likelihood is %.2f' % llh_best)
axs[2].set_title('Visualise uncertainty')
axs[3].set_title('Most uncertain points shown in red\n(lighter is more uncertain)')
```
When we assign each data point to the cluster to which it is most likely to belong, the algorithm is not 100% correct. This is because it's practically impossible to tell for sure which Gaussian generated a given point when the point lies in a region where two Gaussians overlap. The good thing about this Bayesian approach is that we are aware of not being certain about points in these overlapping regions, as can be seen in the lower two plots.
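In practice you rarely hand-roll EM: scikit-learn's `GaussianMixture` implements the same procedure, including the multiple-restart strategy used above (via its `n_init` parameter). A minimal sketch on synthetic data:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two well-separated Gaussian blobs in 2D
X_demo = np.vstack([rng.normal(0.0, 0.5, (100, 2)),
                    rng.normal(5.0, 0.5, (100, 2))])

gmm = GaussianMixture(n_components=2, n_init=5, random_state=0).fit(X_demo)
labels = gmm.predict(X_demo)        # hard cluster assignments
probs = gmm.predict_proba(X_demo)   # soft responsibilities per point
```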
# GMM for the old faithful dataset
Just because we can, we'll give another example of our GMM model on a different dataset: the Old Faithful dataset, which contains eruption durations and waiting times between eruptions for the Old Faithful geyser in Yellowstone. We don't really care what the data means exactly; we just want to see the clustering.
```
import pandas as pd
data_dir = '../Data/Data_Kmeans/'
data = pd.read_csv('{}faithful.csv'.format(data_dir), delimiter='\t')
X = Norm_data(data.values)
K = 2
#initialise
mu = np.zeros((2,2)); mu[0] = [-1.8, 1.5]; mu[1] = [1.8,-1.5]
pi = [1.0/K]*K
cov = [np.identity(X.shape[1])]*K
llh, llh_prev = [0, 1]
mu_init = mu; cov_init = cov; r_init = Responsibility(X, mu_init, cov_init, pi)
#Exp-Max algorithm
while (not np.isclose(llh, llh_prev, rtol = 1e-3)):
llh_prev = llh
#Expectation step
r = Responsibility(X, mu, cov, pi)
#Maximisation step
pi, mu, cov = Max(r, X)
#likelihood
llh = Log_llh(r, X, mu, cov, pi)
```
### Plot
```
fig, axs = plt.subplots(1, 2, figsize = (15, 5)); axs = axs.ravel()
#divide pars in [initial guess plot, final answer plot]
mus = [mu_init, mu]
covs = [cov_init, cov]
rs = [r_init, r]
idxs = [np.argmax(r_init, axis = 1), np.argmax(r, axis = 1)]
titles = ['Initial guess', 'Final answer']
for j in range(2):
axs[j].set_title(titles[j])
for k in range(K):
strs = [str(i) for i in rs[j][k == idxs[j],k]]
axs[j].scatter(X[k == idxs[j],0], X[k == idxs[j],1], c = strs, cmap = cmaps[k], alpha = 0.5)
PlotContours(mu = mus[j][k], cov = covs[j][k], X=X, axs = axs[j])
```
# Code Style
In this chapter, we'll discuss a number of important considerations to make when styling your code. If you think of writing code like writing an essay, considering code style improves your code the same way editing an essay improves your essay. Often, considering code style is referred to as making our code *pythonic*, meaning that it adheres to the foundational principles of the Python programming language.
Learning how to consider and improve your code style up front has a number of benefits. First, your code will be more user-friendly for anyone reading it. This includes you, as you will come back to and edit your code over time. Second, while considering code style and being pythonic is a bit more work up front for developers (the people writing the code), it pays off in the long run by making your code easier to maintain. Third, by learning this now, early on in your Python journey, you avoid falling into bad habits. It's much easier to learn something and implement it than it is to unlearn bad habits.
Note that what we're discussing here will not affect the functionality of your code. Unlike *programmatic errors* (i.e. errors and exceptions that require debugging for your code to execute properly), *stylistic errors* do not affect the functionality of your code. However, *stylistic errors* are considered bad style and are to be avoided, as they make your code harder to understand.
## Style Guides
Programming languages often have style guides, which include a set of conventions for how to write good code. While many of the concepts we'll cover here are applicable to other programming languages (e.g. being consistent), some of the specifics (e.g. variable naming conventions) are particular to programming in Python.
<div class="alert alert-success">
Coding style refers to a set of conventions for how to write good code.
</div>
### The Zen of Python
To explain the programming philosophy in Python, we'll first introduce what's known as *The Zen of Python*, which lays out the design principles of the individuals who developed the Python programming language. *The Zen of Python* is included as an easter egg in Python, so if you `import this` you're able to read its contents:
```
import this
```
While we won't discuss each of these above, we'll highlight two of these tenets that are particularly pertinent to the considerations in this chapter. Specifically, **beautiful is better than ugly** and **readability counts** together indicate that how one's code looks matters. Python prioritizes readability in its syntax (relative to other programming languages) and adheres to the idea that "code is more often read than it is written." As such, those who program in Python are encouraged to consider the beauty and readability of their code. To do so, we'll cover a handful of considerations here.
### Code Consistency
For very understandable and good reasons, beginner programmers often focus on getting their code to execute without throwing an error. In this process, however, they often forget about code style. While we'll discuss specific considerations to write well-styled python code in this chapter, the most important overarching concept is that **consistency is the goal**. Rules help us achieve consistency, and so we'll discuss a handful of rules and guidelines to help you write easy-to-read code with consistent code style. However, in doing so, we want you to keep the idea of consistency in mind, as programming is (at least partly) subjective. Since it's easier to recognize & read consistent style, do your best to follow the style guidelines presented in this chapter and once you pick a way to style your code, it's best to use that consistently across your code.
### PEP8
Python Enhancement Proposals (PEPs) are proposals for how something should be or how something should work in the Python programming language. These are written by the people responsible for writing and maintaining the Python programming language, and PEPs are voted on before incorporation. **[PEP8](https://www.python.org/dev/peps/pep-0008/)**, specifically, is an accepted proposal that outlines the style guidelines for the Python programming language.
<div class="alert alert-info">
<b><a href="https://www.python.org/dev/peps/pep-0008/">PEP8</a></b> is an accepted proposal that outlines the style guide for Python.
</div>
The general concepts laid out in PEP8 (and in *The Zen of Python*) are as follows:
- Be *explicit & clear*: prioritize readability over cleverness
- There should be a *specific, standard way to do things*: use them
- Coding style rules are *guidelines*: they are designed to help, but are not laws
#### PEP8: Structure
Throughout this section we'll highlight the PEP8 guideline, provide an example of what to avoid, and then demonstrate an improvement on the error. Note that for each "what to avoid" the code *will* execute without error. This is because we're discussing *stylistic* rather than *programmatic* errors here.
##### Blank Lines
- Use 2 blank lines between functions & classes and 1 between methods
- Use 1 blank line between segments to indicate logical structure
This allows you to, at a glance, identify what pieces of code are there. Using blank lines to separate out components in your code and your code's overall structure improves its readability.
**What to avoid**
In this example of what to avoid, there are no blank lines between segments within your code, making it more difficult to read. Note that if two functions were provided here, there would be 2 blank lines between the different function definitions.
```
def my_func():
my_nums = '123'
output = ''
for num in my_nums:
output += str(int(num) + 1)
return output
```
**How to improve**
To improve the above example, we can use what you see here, with variable definition being separated out from the `for` loop, being separated from the `return` statement. This code helps separate out the logical structures within a function. Note that we do *not* add a blank line between each line of code, as that would *decrease* the readability of the code.
```
# Goodness
def my_func():
    my_nums = '123'
    output = ''

    for num in my_nums:
        output += str(int(num) + 1)

    return output
```
##### PEP8: Indentation
Use spaces to indicate indentation levels, with each level defined as 4 spaces. Programming languages differ on the specifics of what constitutes a "tab," but Python has settled on a tab being equivalent to 4 spaces. When you hit "tab" on your keyboard within a Jupyter notebook, for example, the 4 spaces convention is implemented for you automatically, so you may not have even realized this convention before now!
**What to avoid**
Here, you'll note that, while the `print()` statement is indented, only *two* spaces are used. Jupyter will alert you to this by making the word `print` red, rather than its typical green.
```
if True:
  print('Words.')
```
**How to improve**
Conversely, here we see the accepted four spaces for a tab/indentation being utilized. Again, remember that the functionality of the code in this example is equivalent to that above; only the style has changed.
```
if True:
print('Words.')
```
##### PEP8: Spacing
- Put one (and only one) space between each element
- Index and assignment don't have a space between opening & closing '()' or '[]'
**What to avoid**
Building on the above, spacing within and surrounding your code should be considered. Here, we see that spaces are missing around the operators in the first line of code, whereas the second line has too many spaces around the assignment operator. We also see unnecessary spaces inside the square brackets of the list in line two, and missing spaces after each comma in that same line of code. Finally, in the third line of code there is an unnecessary space between `my_list` and the square bracket being used for indexing.
```
my_var=1+2==3
my_list = [ 1,2,3,4 ]
el = my_list [1]
```
**How to improve**
The above spacing issues have all been resolved below:
```
my_var = 1 + 2 == 3
my_list = [1, 2, 3, 4]
el = my_list[1]
```
##### PEP8: Line Length
- PEP8 recommends that each line be at most 79 characters long
Note that this specification is somewhat historical, as terminals and editors were once limited to displaying around 80 characters per line. There are tools and development environments that will help ensure that no single line of code exceeds 79 characters. However, in Jupyter notebooks, the general guideline "avoid lengthy lines of code or comments" can be used, as super long lines are hard to read at a glance.
**Multi-line**
To achieve this, know that you can always separate lines of code easily after a comma. In Jupyter notebooks, if you hit return/enter on your keyboard after a comma, your code will be aligned appropriately. For example, below you see that after the comma in the first line of code, the `6` is automatically aligned with the `1` from the line above. This visually makes it clear that all of the integers are part of the same list `my_long_list`. Using multiple lines to make your code easier to read is a great habit to get into.
```
my_long_list = [1, 2, 3, 4, 5,
6, 7, 8, 9, 10]
```
Further, note that you can explicitly state that the code on the following line is a continuation of the first line of code with a backslash (`\`) at the end of a line, as you see exemplified here:
```
my_string = 'Python is ' + \
'a pretty great language.'
```
**One Statement Per Line**
While on the topic of line length and readable code, note that while you *can* often condense multiple statements into one line of code, you usually shouldn't, as it makes it harder to read.
**What to avoid**
For example, for loops *can* syntactically be specified on a single line, as you see here:
```
for i in [1, 2, 3]: print(i**2 + i%2)
```
**How to Improve**
However, the code above is harder to read at a glance. Instead, what is being looped over should go on the first line, with the code being executed contained in an indented block on the lines underneath the `for` statement, as this is easier to read than the above example:
```
for i in [1, 2, 3]:
print(i**2 + i%2)
```
##### PEP8: Imports
- Import one module per line
- Avoid `*` imports
- Use the import order: standard library; 3rd party packages; local/custom code
**What to avoid**
While you may still be learning which packages are part of the standard library and which are third party packages, this will become more second nature over time. And, we haven't yet discussed local or custom code, but this includes functions/classes/code you've written and stored in `.py` files. This should be imported last.
In this example here, there are a number of issues! First, `numpy` is a third-party package, while `os` and `sys` are part of the standard library, so the order should be flipped. Second, `*` imports are to be avoided, as it would be unclear in any resultant code which functionality came from the `numpy` package. Third, `os` and `sys` should be imported on separate lines to be most clear.
```
from numpy import *
import os, sys
```
**How to Improve**
The above issues have been resolved in this set of imports:
```
import os
import sys
import numpy as np
```
##### PEP8: Naming
- Use descriptive names for all modules, variables, functions and classes, that are longer than 1 character
**What to avoid**
Here, single character, non-descriptive names are used.
```
a = 12
b = 24
```
**How to Improve**
Instead, python encourages object names that describe what is stored in the object or what the object is or does.
This is also important when you want to change an object name after the fact. If you were to "Find + Replace All" on the letter `a` that would change every single a in your code. However, if you "Find + Replace All" for `n_filters`, this would likely only change the places in your code you actually intended to replace.
```
n_filters = 12
n_freqs = 24
```
**Naming Style**
- CapWords (leading capitals, no separation) for Classes
- snake_case (all lowercase, underscore separator) for variables, functions, and modules
Note: snake_case is easier to read than CapWords, so we use snake_case for the things (variables, functions) that we name more frequently.
**What to avoid**
While we've been using this convention, it's important to state it explicitly here. Pythonistas (those who program in python) expect the above conventions to be used within their code. Thus, if they see a function `MyFunc`, there will be cognitive dissonance, as CapWords is to be used for classes, not functions. The same for `my_class`; this would require the reader of this code to work harder than necessary, as snake_case is to be used for functions, variables, and modules, not classes.
```
def MyFunc():
pass
class my_class():
def __init__():
pass
```
**How to Improve**
Instead, follow the guidelines above. Also, note that we've added two blank lines between the function and class definitions (to follow the guideline earlier in this chapter).
```
def my_func():
pass
class MyClass():
def __init__():
pass
```
##### String Quotes
In Python, single-quoted strings and double-quoted strings are the same. Note that *PEP8 does not make a recommendation for this*. Rather, you are encouraged to be consistent: **pick a rule and stick to it.** (The author of this book is *exceptionally* bad at following this advice.)
One place, however, to choose one approach over the other is when a string contains a single or double quote character. In this case, use the quote style that does not appear in the string, to avoid backslashes inside the string, as this improves readability. For example...
**What to avoid**
As you see below, you *could* use a backslash to "escape" the apostrophe within the string; however, this makes the string harder to read.
```
my_string = 'Prof\'s Project'
```
**How to Improve**
Instead, using double quotes to specify the string with the apostrophe (single quote) inside the string leads to more readable code, and is thus preferable.
```
my_string = "Prof's Project"
```
#### PEP8: Documentation
While documentation (including how to write docstrings and when, how and where to include code comments) will be covered more explicitly in the next chapter, we'll discuss the style considerations for including code comments and docstrings at this point.
##### PEP8: Comments
First, out-of-date comments are worse than no comments at all. Keep your comments up-to-date. While we encourage writing comments to explain your thinking as you're writing the code, you want to be sure to re-visit your code comments during your "editing" and "improving code style" sessions to ensure that what is stated in the comments matches what is done in your code to avoid confusion for any readers of your code.
**Block comments**
Block comments are comments that are on their own line and come before the code they intend to describe. They follow the following conventions:
- apply to some (or all) code that follows them
- are indented to the same level as that code
- each line of a block comment starts with a # and a single space
**What to avoid**
In the function below, while the code comment does come before the code it describes (good!), it is not at the same level of indentation of the code it describes (not good!) *and* there is no space between the pound sign/hashtag and the code comment:
```
import random
def encourage():
#help try to destress students by picking one thing from the following list using random
statements = ["You've totally got this!","You're so close!","You're going to do great!","Remember to take breaks!","Sleep, water, and food are really important!"]
out = random.choice(statements)
return out
encourage()
```
**How to Improve**
Instead, here, we see improved code comment style by 1) having the block comment at the same level of indentation as the code it describes, 2) having a space in between the `#` and the comment, and 3) breaking up the comment onto two separate lines to avoid having a too-long comment.
The code style is also further improved by considering spacing within the `statements` list *and* considering line spacing throughout the function.
```
def encourage():
# Randomly pick from list of de-stressing statements
# to help students as they finish the quarter.
statements = ["You've totally got this!",
"You're so close!",
"You're going to do great!",
"Remember to take breaks!",
"Sleep, water, and food are really important!"]
out = random.choice(statements)
return out
encourage()
```
**Inline comments**
Inline comments are those comments on the same line as the code they're describing. These are:
- to be used sparingly
- to be separated by at least two spaces from the statement
- start with a # and a single space
**What to avoid**
For example, we'll avoid inline comments that 1) are right up against the code they describe and 2) that fail to have a space after the `#`:
```
encourage()#words of encouragement
```
**How to Improve**
Instead, we'll have two spaces after the code, and a space after the `#`:
```
encourage() # words of encouragement
```
##### PEP8: Documentation
We'll cover docstrings in the following chapter, so for now we'll just note that PEP8 requires a descriptive docstring to be written and included for all functions & classes. We'll discuss how to approach this shortly!
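As a quick preview, here is a minimal sketch of what such a docstring might look like, reusing the `encourage` example from above (the wording and format are illustrative, not prescriptive):

```python
import random

def encourage():
    """Return a randomly chosen encouraging statement.

    Returns:
        str: one statement from a fixed list of de-stressing messages.
    """
    statements = ["You've totally got this!", "You're so close!"]
    return random.choice(statements)
```

With this in place, `help(encourage)` tells a reader what the function does without them having to read its body.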
## Exercises
Q1. **Considering code style, which of these is best - A, B, or C?**
A)
```python
def squared(input_number):
val = input_number
power = 2
output = val ** power
return output
```
B)
```python
def squared(input_number, power=2):
output = input_number ** power
return output
```
C)
```python
def squared(input_number):
val = input_number
power = 2
output = val ** power
return output
```
Q2. **Which of the following uses PEP-approved spacing?**
A) `my_list=[1,2,3,4,5]`
B) `my_list = [1,2,3,4,5]`
C) `my_list = [1, 2, 3, 4, 5]`
D) `my_list=[1, 2, 3, 4, 5]`
E) `my_list = [1, 2, 3, 4, 5]`
Q3. **If you were reading code and came across the following, which of the following would you expect to be a class?**
A) `Phillies_Game`
B) `PhilliesGame`
C) `phillies_game`
D) `philliesgame`
E) `PhIlLiEsGaMe`
Q4. **If you were reading code and came across the following, which of the following would you expect to be a function or variable name?**
A) `Phillies_Game`
B) `PhilliesGame`
C) `phillies_game`
D) `philliesgame`
E) `PhIlLiEsGaMe`
Q5. **Which of the following would not cause an error in Python and would store the string *You're so close!* ?**
A) `my_string = "You're so close!"`
B) `my_string = "You"re so close!"`
C) `my_string = 'You''re so close!'`
D) `my_string = "You\\'re so close"`
E) `my_string = 'You're so close!'`
Q6. **Identify and improve all of the PEP8/Code Style violations found in the following code**:
```python
def MyFunction(input_num):
my_list = [0,1,2,3]
if 1 in my_list: ind = 1
else:
ind = 0
qq = []
for i in my_list [ind:]:
qq.append(input_num/i)
return qq
```
Q7. **Identify and improve all of the PEP8/Code Style violations found in the following code**:
```python
def ff(jj):
oo = list(); jj = list(jj)
for ii in jj: oo.append(str(ord(ii)))
return '+'.join(oo)
```
| github_jupyter |
# NNCP Splitter
[](https://colab.research.google.com/github/byronknoll/tensorflow-compress/blob/master/nncp-splitter.ipynb)
Made by Byron Knoll. GitHub repository: https://github.com/byronknoll/tensorflow-compress
### Description
This notebook can be used to split files that have been preprocessed by NNCP. This is for compression using [tensorflow-compress](https://colab.research.google.com/github/byronknoll/tensorflow-compress/blob/master/tensorflow-compress.ipynb). The primary use-case is to get around Colab's session time limit by processing large files in smaller parts.
This file splitting does not use the naive method of dividing the file into consecutive parts. Instead, it takes into account the batch size used in tensorflow-compress so that the same sequence of symbols will be used for compressing the split parts as for the original file.
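To see why this matters, the batch-interleaved ordering can be sketched in a few lines of Python (illustrative only; the actual split logic, which additionally divides the stream into parts, is in the Split/Join cell below):

```python
import math

def batch_order(symbols, batch_size):
    """Return symbols in the order a batched model consumes them:
    row j of the batch holds the slice [j*split, (j+1)*split)."""
    split = math.ceil(len(symbols) / batch_size)
    order = []
    for step in range(split):            # one time step across all rows
        for row in range(batch_size):
            idx = row * split + step
            if idx < len(symbols):
                order.append(symbols[idx])
    return order

print(batch_order(list("abcdefgh"), batch_size=2))  # ['a', 'e', 'b', 'f', 'c', 'g', 'd', 'h']
```

A naive split of the file into consecutive halves would hand each part a different symbol order than the model saw for the whole file; slicing each batch row's segment preserves it.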
### Instructions
1. In tensorflow-compress, using "preprocess_only" mode, choose "nncp" preprocessor and download the result.
2. Upload the preprocessed file (named "preprocessed.dat") to this notebook, and download the split parts.
3. In tensorflow-compress, compress each split part sequentially, enabling the checkpoint option. Choose "nncp-done" as the preprocessor.
4. In tensorflow-compress, decompress each split part sequentially, enabling the checkpoint option. Choose "nncp-done" as the preprocessor.
5. Upload the decompressed parts to this notebook to reproduce the original file. The files should be named: part.0, part.1, ..., part.N. Also upload the original NNCP dictionary file (named "dictionary.words").
## Parameters
```
batch_size = 96 #@param {type:"integer"}
#@markdown >_Set this to the same value that will be used in tensorflow-compress._
mode = 'split' #@param ["split", "join"]
num_parts = 4 #@param {type:"integer"}
#@markdown >_This is the number of parts the file should be split to._
http_path = '' #@param {type:"string"}
#@markdown >_The file from this URL will be downloaded. It is recommended to use Google Drive URLs to get fast transfer speed. Use this format for Google Drive files: https://drive.google.com/uc?id= and paste the file ID at the end of the URL. You can find the file ID from the "Get Link" URL in Google Drive. You can enter multiple URLs here, space separated._
local_upload = False #@param {type:"boolean"}
#@markdown >_If enabled, you will be prompted in the "Setup Files" section to select files to upload from your local computer. You can upload multiple files. Note: the upload speed can be quite slow (use "http_path" for better transfer speeds)._
download_option = "no_download" #@param ["no_download", "local", "google_drive"]
#@markdown >_If this is set to "local", the output files will be downloaded to your computer. If set to "google_drive", they will be copied to your Google Drive account (which is significantly faster than downloading locally)._
```
## Setup
```
#@title Imports
from google.colab import files
from google.colab import drive
import math
#@title Mount Google Drive
if download_option == "google_drive":
drive.mount('/content/gdrive')
#@title Setup Files
!mkdir -p "data"
if local_upload:
%cd data
files.upload()
%cd ..
if http_path:
%cd data
paths = http_path.split()
for path in paths:
!gdown $path
%cd ..
if mode == "join":
!gdown --id 1EzVPbRkBIIbgOzvEMeM0YpibDi2R4SHD
!tar -xf nncp-2019-11-16.tar.gz
%cd nncp-2019-11-16/
!make preprocess
%cd ..
```
## Run
```
#@title Split/Join
if mode == "split":
input_path = "data/preprocessed.dat"
orig = open(input_path, 'rb').read()
int_list = []
for i in range(0, len(orig), 2):
int_list.append(orig[i] * 256 + orig[i+1])
file_len = len(int_list)
split = math.ceil(file_len / batch_size)
part_split = math.ceil(file_len / (num_parts * batch_size))
pos = 0
for i in range(num_parts):
output = []
for j in range(batch_size):
for k in range(part_split):
if pos + k >= split:
break
index = pos + (j*split) + k
if index >= file_len:
break
output.append(int_list[index])
pos += part_split
with open(("data/part." + str(i)), "wb") as out:
for j in range(len(output)):
out.write(bytes(((output[j] // 256),)))
out.write(bytes(((output[j] % 256),)))
if mode == "join":
file_len = 0
for i in range(num_parts):
part = open("data/part." + str(i), 'rb').read()
file_len += len(part) / 2
split = math.ceil(file_len / batch_size)
part_split = math.ceil(file_len / (num_parts * batch_size))
int_list = [0] * math.floor(file_len)
pos = 0
for i in range(num_parts):
part = open("data/part." + str(i), 'rb').read()
part_list = []
for j in range(0, len(part), 2):
part_list.append(part[j] * 256 + part[j+1])
index2 = 0
for j in range(batch_size):
for k in range(part_split):
if pos + k >= split:
break
index = pos + (j*split) + k
if index >= file_len:
break
int_list[index] = part_list[index2]
index2 += 1
pos += part_split
with open("data/output.dat", "wb") as out:
for i in range(len(int_list)):
out.write(bytes(((int_list[i] // 256),)))
out.write(bytes(((int_list[i] % 256),)))
!./nncp-2019-11-16/preprocess d data/dictionary.words ./data/output.dat ./data/final.dat
#@title File Sizes
!ls -l data
#@title MD5
!md5sum data/*
#@title Download Result
def download(path):
"""Downloads the file at the specified path."""
if download_option == 'local':
files.download(path)
elif download_option == 'google_drive':
!cp -f $path /content/gdrive/My\ Drive
if mode == "split":
for i in range(num_parts):
download("data/part." + str(i))
if mode == "join":
download("data/final.dat")
```
| github_jupyter |
# Point-based and Parallel Processing Water Observations from Space (WOfS) Product in Africa <img align="right" src="../Supplementary_data/DE_Africa_Logo_Stacked_RGB_small.jpg">
* **Products used:**
[ga_ls8c_wofs_2](https://explorer.digitalearth.africa/ga_ls8c_wofs_2)
## Description
The [Water Observations from Space (WOfS)](https://www.ga.gov.au/scientific-topics/community-safety/flood/wofs/about-wofs) product is derived from Landsat 8 satellite observations, as part of the provisional Landsat 8 Collection 2 surface reflectance, and shows surface water detected in Africa.
Individual water classified images are called Water Observation Feature Layers (WOFLs), and are created in a 1-to-1 relationship with the input satellite data.
Hence there is one WOFL for each satellite dataset processed for the occurrence of water.
The data in a WOFL is stored as a bit field. This is a binary number, where each digit of the number is independently set or not based on the presence (1) or absence (0) of a particular attribute (water, cloud, cloud shadow, etc.). In this way, the single decimal value associated with each pixel can provide information on a variety of features of that pixel.
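To make the bit-field idea concrete, here is a small plain-Python sketch of decoding such a value. The bit positions used here are hypothetical placeholders, not the real WOFL layout (see the notebooks referenced below for the actual flag definitions):

```python
# Hypothetical bit positions, for illustration only.
FLAGS = {"nodata": 0, "cloud_shadow": 5, "cloud": 6, "water_observed": 7}

def decode(pixel_value):
    """Expand a WOFL-style bit field into named boolean attributes."""
    return {name: bool(pixel_value >> bit & 1) for name, bit in FLAGS.items()}

print(decode(128))  # bit 7 set: water observed, all other flags False
```

Each flag can be tested independently, which is exactly what the masking utilities do for us below.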
For more information on the structure of WOFLs and how to interact with them, see [Water Observations from Space](../Datasets/Water_Observations_from_Space.ipynb) and [Applying WOfS bitmasking](../Frequently_used_code/Applying_WOfS_bitmasking.ipynb) notebooks.
This notebook explains how you can query the WOfS product for each collected validation point in Africa based on a point-based sampling approach.
The notebook demonstrates how to:
1. Load validation points for each partner institution, following the cleaning stage described in
2. Query WOFL data for the validation points and capture the available WOfS-defined class using point-based sampling and multiprocessing functionality
3. Extract a LUT for each point that contains the information for the validation points, the WOfS class, and the number of clear observations in each month
***
## Getting started
To run this analysis, run all the cells in the notebook, starting with the "Load packages" cell.
### Load packages
Import Python packages that are used for the analysis.
```
%matplotlib inline
import datacube
from datacube.utils import masking, geometry
import sys
import os
import rasterio
import xarray
import glob
import numpy as np
import pandas as pd
import seaborn as sn
import geopandas as gpd
import matplotlib.pyplot as plt
import multiprocessing as mp
import scipy, scipy.ndimage
import warnings
warnings.filterwarnings("ignore") #this will suppress the warnings for multiple UTM zones in your AOI
sys.path.append("../Scripts")
from geopandas import GeoSeries, GeoDataFrame
from shapely.geometry import Point
from sklearn.metrics import confusion_matrix, accuracy_score
from sklearn.metrics import plot_confusion_matrix, f1_score
from deafrica_plotting import map_shapefile,display_map, rgb
from deafrica_spatialtools import xr_rasterize
from deafrica_datahandling import wofs_fuser, mostcommon_crs,load_ard,deepcopy
from deafrica_dask import create_local_dask_cluster
from tqdm import tqdm
```
### Analysis parameters
To analyse validation points collected by each partner institution, we need to obtain WOfS surface water observation data that corresponds with the labelled input data locations.
- Path2csv: the path to the CEO validation points labelled by each partner institution in Africa
- ValPoints: CEO validation points labelled by each partner institution in Africa, in ESRI shapefile format
- Path: direct path to the ESRI shapefile, in case the shapefile is already available
- input_data: geopandas dataframe of the CEO validation points labelled by each partner institution in Africa
*** Note: Run the following cells in case you don't have an ESRI shapefile for the validation points.
```
path2csv = '../Data/Processed/AGRYHMET/AGRYHMET_ValidationPoints.csv'
df = pd.read_csv(path2csv,delimiter=",")
geometries = [Point(xy) for xy in zip(df.LON, df.LAT)]
crs = {'init': 'epsg:4326'}
ValPoints = GeoDataFrame(df, crs=crs, geometry=geometries)
ValPoints.to_file(filename='../Data/Processed/AGRYHMET/AGRYHMET_ValidationPoints.shp')
```
*** Note: In case you already have an ESRI shapefile for the validation points, please continue from this point onward.
```
path = '../Data/Processed/AGRYHMET/AGRYHMET_ValidationPoints.shp'
#reading the table and converting CRS to metric
input_data = gpd.read_file(path).to_crs('epsg:6933')
input_data.columns
input_data = input_data.drop(['Unnamed_ 0'], axis=1)
#Checking the size of the input data
input_data.shape
```
### Sample WOfS at the ground truth coordinates
To load WOFL data, we first create a re-usable query, as below, that defines two particular items: `group_by` solar day, which ensures that the data between scenes is combined correctly, and the `resampling` method, which is set to nearest. This query will later be updated in the script with the time period we are interested in, as well as the other parameters that are needed to correctly load the data.
We can convert the WOFL bit field into a binary array containing True and False values. This allows us to use the WOFL data as a mask that can be applied to other datasets. The `make_mask` function allows us to create a mask using the flag labels (e.g. "wet" or "dry") rather than the binary numbers we used above. For more details on how to do masking on WOfS, see the [Applying_WOfS_bit_masking](../Frequently_used_code/Applying_WOfS_bitmasking.ipynb) notebook in Africa sandbox.
```
#generate query object
query ={'group_by':'solar_day',
'resampling':'nearest'}
```
Defining a function to query the WOfS database for the five days before and after the start of each calendar month
```
def get_wofs_for_point(index, row, input_data, query, results_wet, results_clear):
dc = datacube.Datacube(app='WOfS_accuracy')
#get the month value for each index
month = input_data.loc[index]['MONTH']
#get the value for time including year, month, start date and end date
timeYM = '2018-'+f'{month:02d}'
start_date = np.datetime64(timeYM) - np.timedelta64(5,'D')
end_date = np.datetime64(timeYM) + np.timedelta64(5,'D')
time = (str(start_date),str(end_date))
plot_id = input_data.loc[index]['PLOT_ID']
#having the original query as it is
dc_query = deepcopy(query)
geom = geometry.Geometry(input_data.geometry.values[index].__geo_interface__, geometry.CRS('EPSG:6933'))
q = {"geopolygon":geom}
t = {"time":time}
#updating the query
dc_query.update(t)
dc_query.update(q)
#loading landsat-8 WOfs product and set the values for x and y (point-based) and also (window-based)
wofls = dc.load(product ="ga_ls8c_wofs_2",
y = (input_data.geometry.y[index], input_data.geometry.y[index]),
x =(input_data.geometry.x[index], input_data.geometry.x[index]),
#y = (input_data.geometry.y[index] - 30.5, input_data.geometry.y[index] + 30.5), # setting x and y coordinates based on 3*3 pixel window-based query
#x =(input_data.geometry.x[index] - 30.5, input_data.geometry.x[index] + 30.5),
crs = 'EPSG:6933',
time=time,
output_crs = 'EPSG:6933',
resolution=(-30,30))
#exclude the records that wofl return as empty for water
if not 'water' in wofls:
pass
else:
#Define a mask for wet and clear pixels
wet_nocloud = {"water_observed":True, "cloud_shadow":False, "cloud":False,"nodata":False}
#Define a mask for dry and clear pixels
dry_nocloud = {"water_observed":False, "cloud_shadow":False, "cloud":False, "nodata":False}
wofl_wetnocloud = masking.make_mask(wofls, **wet_nocloud).astype(int)
wofl_drynocloud = masking.make_mask(wofls, **dry_nocloud).astype(int)
clear = (wofl_wetnocloud | wofl_drynocloud).water.all(dim=['x','y']).values
#record the total number of clear observations for each point in each month and use it to filter out month with no valid data
n_clear = clear.sum()
#condition to identify whether WOfS has seen water in a specific month for a particular location
if n_clear > 0:
wet = wofl_wetnocloud.isel(time=clear).water.max().values
else:
wet = 0
#updating results for both wet and clear observations
results_wet.update({str(int(plot_id))+"_"+str(month) : int(wet)})
results_clear.update({str(int(plot_id))+"_"+str(month) : int(n_clear)})
return time
```
Define a function for parallel processing
```
def _parallel_fun(input_data, query, ncpus):
manager = mp.Manager()
results_wet = manager.dict()
results_clear = manager.dict()
# progress bar
pbar = tqdm(total=len(input_data))
def update(*a):
pbar.update()
with mp.Pool(ncpus) as pool:
for index, row in input_data.iterrows():
pool.apply_async(get_wofs_for_point,
[index,
row,
input_data,
query,
results_wet,
results_clear], callback=update)
pool.close()
pool.join()
pbar.close()
return results_wet, results_clear
```
Test the for loop
```
results_wet_test = dict()
results_clear_test = dict()
for index, row in input_data[0:14].iterrows():
time = get_wofs_for_point(index, row, input_data, query, results_wet_test, results_clear_test)
print(time)
```
Point-based query and parallel processing on WOfS
```
wet, clear = _parallel_fun(input_data, query, ncpus=15)
#extracting the final table with both CEO labels and WOfS class Wet and clear observations
wetdf = pd.DataFrame.from_dict(wet, orient = 'index')
cleardf = pd.DataFrame.from_dict(clear,orient='index')
df2 = wetdf.merge(cleardf, left_index=True, right_index=True)
df2 = df2.rename(columns={'0_x':'CLASS_WET','0_y':'CLEAR_OBS'})
#split the index (which is plot id + month) into separate columns
for index, row in df2.iterrows():
df2.at[index,'PLOT_ID'] = index.split('_')[0] +'.0'
df2.at[index,'MONTH'] = index.split('_')[1]
#reset the index
df2 = df2.reset_index(drop=True)
#convert plot id and month to str to help with matching
input_data['PLOT_ID'] = input_data.PLOT_ID.astype(str)
input_data['MONTH']= input_data.MONTH.astype(str)
# merge both dataframe at locations where plotid and month match
final_df = pd.merge(input_data, df2, on=['PLOT_ID','MONTH'], how='outer')
#Defining the shape of final table
final_df.shape
#Counting the number of rows in the final table with NaN values in class_wet and clear observation (Optional)
#This part tests that the parallel processing function returns identical results each time it runs
countA = final_df["CLASS_WET"].isna().sum()
countB = final_df["CLEAR_OBS"].isna().sum()
countA, countB
final_df.to_csv(('../../Results/WOfS_Assessment/Point_Based/Institutions/AGRYHMET_PointBased_5D.csv'))
print(datacube.__version__)
```
***
## Additional information
**License:** The code in this notebook is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
Digital Earth Africa data is licensed under the [Creative Commons by Attribution 4.0](https://creativecommons.org/licenses/by/4.0/) license.
**Contact:** If you need assistance, please post a question on the [Open Data Cube Slack channel](http://slack.opendatacube.org/) or on the [GIS Stack Exchange](https://gis.stackexchange.com/questions/ask?tags=open-data-cube) using the `open-data-cube` tag (you can view previously asked questions [here](https://gis.stackexchange.com/questions/tagged/open-data-cube)).
If you would like to report an issue with this notebook, you can file one on [Github](https://github.com/digitalearthafrica/deafrica-sandbox-notebooks).
**Last modified:** September 2020
**Compatible datacube version:**
## Tags
Browse all available tags on the DE Africa User Guide's [Tags Index](https://) (placeholder as this does not exist yet)
| github_jupyter |
```
from keras.regularizers import l2
from keras.models import Sequential
from keras.optimizers import Adam
from keras.layers import Conv2D, ZeroPadding2D, Activation, Input, concatenate
from keras.models import Model
from keras.preprocessing.text import one_hot
from keras.layers.normalization import BatchNormalization
from keras.layers.pooling import MaxPooling2D
from keras.layers.merge import Concatenate
from keras.layers.core import Lambda, Flatten, Dense
from keras.initializers import glorot_uniform
from keras.layers import Input, Dense, Flatten, GlobalMaxPool2D, GlobalAvgPool2D, Concatenate, Multiply, Dropout, Subtract, Add, Conv2D
import pandas as pd
#Data Prep
data = pd.read_csv("/content/persons.csv",header=None)
data.columns = ["first","second","variation"]
print(data.shape)
data.head()
data["merge"] = data["first"] + " " + data["second"]
data.head()
#list of characters
list_chars = list("0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ%,.-/&' ")
print(len(list_chars))
tokens = {i:list_chars[i] for i in range(0,len(list_chars))}
tokens_rev = {list_chars[i]:i for i in range(0,len(list_chars))}
print(tokens)
print(tokens_rev)
from sklearn.model_selection import train_test_split
X_temp, X_test, y_temp, y_test = train_test_split(data[['first', 'second']], data['variation'], test_size=0.2, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(X_temp, y_temp, test_size=0.2, random_state=42)
print(X_train.shape)
print(y_train.shape)
print(X_val.shape)
print(y_val.shape)
print(X_test.shape)
print(y_test.shape)
def character_onehot(column_list, tokens_rev):
    # Despite the name, this maps each character to its integer index
    # (not a one-hot vector); note that column_list is modified in place.
    for i in range(0, len(column_list)):
        name_list_current = list(column_list[i])
        column_list[i] = [tokens_rev[char] for char in name_list_current]
    return column_list
#character_onehot(X_train['second'].to_list(),tokens_rev)
train_q1_seq = character_onehot(X_train['first'].to_list(),tokens_rev)
train_q2_seq = character_onehot(X_train['second'].to_list(),tokens_rev)
val_q1_seq = character_onehot(X_val['first'].to_list(),tokens_rev)
val_q2_seq = character_onehot(X_val['second'].to_list(),tokens_rev)
test_q1_seq = character_onehot(X_test['first'].to_list(),tokens_rev)
test_q2_seq = character_onehot(X_test['second'].to_list(),tokens_rev)
#Vector length for train q1
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
print(train_q1_seq[0:2])
len_vec = [len(char_vec) for char_vec in train_q1_seq]
print("max length => ", np.max(len_vec))
sns.distplot(len_vec)
print(train_q2_seq[0:2])
len_vec = [len(char_vec) for char_vec in train_q2_seq]
print("max length => ", np.max(len_vec))
sns.distplot(len_vec)
from keras.preprocessing.sequence import pad_sequences
max_len = 50
train_q1_seq = pad_sequences(train_q1_seq, maxlen=max_len, padding='post')
train_q2_seq = pad_sequences(train_q2_seq, maxlen=max_len, padding='post')
val_q1_seq = pad_sequences(val_q1_seq, maxlen=max_len, padding='post')
val_q2_seq = pad_sequences(val_q2_seq, maxlen=max_len, padding='post')
test_q1_seq = pad_sequences(test_q1_seq, maxlen=max_len, padding='post')
test_q2_seq = pad_sequences(test_q2_seq, maxlen=max_len, padding='post')
print(train_q1_seq)
import tensorflow as tf
from keras.regularizers import l2
from keras.models import Sequential
from keras.optimizers import Adam
from keras.layers import Conv2D, ZeroPadding2D, Activation, Input, concatenate
from keras.models import Model
from keras.layers import Input, Concatenate, Conv2D, Flatten, Dense, Embedding, LSTM
from keras.models import Model
from keras.layers.normalization import BatchNormalization
from keras.layers.pooling import MaxPooling2D
from keras.layers.merge import Concatenate
from keras.layers.core import Lambda, Flatten, Dense
from keras.initializers import glorot_uniform
from keras.layers import Input, Dense, Flatten, GlobalMaxPool2D, GlobalAvgPool2D, Concatenate, Multiply, Dropout, Subtract, Add, Conv2D
from keras import backend as K
def cosine_distance(vests):
x, y = vests
x = K.l2_normalize(x, axis=-1)
y = K.l2_normalize(y, axis=-1)
return -K.mean(x * y, axis=-1, keepdims=True)
def cos_dist_output_shape(shapes):
shape1, shape2 = shapes
return (shape1[0],1)
from sklearn.metrics import roc_auc_score
def auroc(y_true, y_pred):
try:
return tf.py_function(roc_auc_score, (y_true, y_pred), tf.double)
except:
pass
input_1 = Input(shape=(train_q1_seq.shape[1],))
input_2 = Input(shape=(train_q2_seq.shape[1],))
common_embed = Embedding(name="synopsis_embedd",input_dim =100,
output_dim=30,
input_length=train_q1_seq.shape[1],trainable=True)
lstm_1 = common_embed(input_1)
lstm_2 = common_embed(input_2)
common_lstm = LSTM(64,return_sequences=True, activation="relu")
vector_1 = common_lstm(lstm_1)
vector_1 = Flatten()(vector_1)
vector_2 = common_lstm(lstm_2)
vector_2 = Flatten()(vector_2)
x3 = Subtract()([vector_1, vector_2])
x3 = Multiply()([x3, x3])
x1_ = Multiply()([vector_1, vector_1])
x2_ = Multiply()([vector_2, vector_2])
x4 = Subtract()([x1_, x2_])
#https://stackoverflow.com/a/51003359/10650182
x5 = Lambda(cosine_distance, output_shape=cos_dist_output_shape)([vector_1, vector_2])
conc = Concatenate(axis=-1)([x5,x4, x3])
x = Dense(100, activation="relu", name='conc_layer')(conc)
x = Dropout(0.01)(x)
out = Dense(1, activation="sigmoid", name = 'out')(x)
model = Model([input_1, input_2], out)
model.compile(loss="binary_crossentropy",metrics=['accuracy'], optimizer=Adam(0.00001))
model.summary()
model.fit([train_q1_seq,train_q2_seq],y_train.values.reshape(-1,1), epochs = 30,
batch_size=43,validation_data=([val_q1_seq, val_q2_seq],y_val.values.reshape(-1,1)))
first_term = character_onehot(["Vikas Nair"],tokens_rev)
second_term = character_onehot(["Vikas Nayar"],tokens_rev)
first_padded = pad_sequences(first_term, maxlen=max_len, padding='post')
second_padded = pad_sequences(second_term, maxlen=max_len, padding='post')
model.predict([first_padded,second_padded])
```
| github_jupyter |
#Twitter + Watson Tone Analyzer sample Notebook Part 1: Loading the data
In this Notebook, we show how to load the custom library generated as part of the Twitter + Watson Tone Analyzer streaming application. The code can be found here: https://github.com/ibm-cds-labs/spark.samples/tree/master/streaming-twitter.
The following code uses a pre-built jar that has been posted on the GitHub project, but you can replace it with your own URL if needed.
```
%AddJar https://github.com/DTAIEB/demos/raw/master/streaming-twitter-assembly-1.5.jar -f
```
##Set up the Twitter and Watson credentials
Please refer to the tutorial for details on how to find the Twitter and Watson credentials, then add the values in the placeholders specified in the code below
```
val demo = com.ibm.cds.spark.samples.StreamingTwitter
demo.setConfig("twitter4j.oauth.consumerKey","XXXXX")
demo.setConfig("twitter4j.oauth.consumerSecret","XXXXX")
demo.setConfig("twitter4j.oauth.accessToken","XXXXX")
demo.setConfig("twitter4j.oauth.accessTokenSecret","XXXXX")
demo.setConfig("watson.tone.url","https://gateway.watsonplatform.net/tone-analyzer-experimental/api")
demo.setConfig("watson.tone.password","XXXXX")
demo.setConfig("watson.tone.username","XXXXX")
```
##Start the Spark Stream to collect live tweets
Start a new Twitter Stream that collects the live tweets and enrich them with Sentiment Analysis scores. The stream is run for a duration specified in the second argument of the **startTwitterStreaming** method.
Note: if no duration is specified then the stream will run until the **stopTwitterStreaming** method is called.
```
import org.apache.spark.streaming._
demo.startTwitterStreaming(sc, Seconds(40))
```
##Create a SQLContext and a dataframe with all the tweets
Note: this method will register a SparkSQL table called tweets
```
val (sqlContext, df) = demo.createTwitterDataFrames(sc)
```
##Execute a SparkSQL query that contains all the data
```
val fullSet = sqlContext.sql("select * from tweets") //Select all columns
fullSet.show
```
##Persist the dataset into a parquet file on Object Storage service
The parquet file will be reloaded in IPython Part 2 Notebook
Note: you can disregard the warning messages related to SLF4J
```
fullSet.repartition(1).saveAsParquetFile("swift://notebooks.spark/tweetsFull.parquet")
```
##SparkSQL query example on the data.
Select all the tweets that have an Anger score greater than 70%
```
val angerSet = sqlContext.sql("select author, text, Anger from tweets where Anger > 70")
println(angerSet.count)
angerSet.show
```
| github_jupyter |
## Using Python Selenium to Automate Tasks
### First day: Selenium by Example
Selenium is a great tool to write functional/acceptance tests and automation scripts that require interaction with a webpage.
In this lesson we get Selenium running and look at two use cases. Then we have you code 1 or 2 scripts using Selenium.
To follow along you need to have Selenium installed and a webdriver:
1. `pip install selenium` (if you installed the requirements.txt in my setup video in the appendix you should already have it)
2. I used _PhantomJS_ before but now I got this error: `Selenium support for PhantomJS has been deprecated, please use headless versions of Chrome or Firefox instead`. So I downloaded the [ChromeDriver](https://sites.google.com/a/chromium.org/chromedriver/home) and put it in my `PATH` (`$HOME/bin`)
Here is the _Hello Selenium world_ example from [the docs](http://selenium-python.readthedocs.io/getting-started.html). Notice how easy it is to interact with forms:
```
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
driver = webdriver.Chrome() # replaced Firefox by Chrome
driver.get("http://www.python.org")
assert "Python" in driver.title
elem = driver.find_element_by_name("q")
elem.clear()
elem.send_keys("pycon")
elem.send_keys(Keys.RETURN)
assert "No results found." not in driver.page_source
driver.close()
```
#### Example 1. Packt ebook manager
Packt gives away [a free ebook each day](https://www.packtpub.com/packt/offers/free-learning). I've been grabbing quite a few of them (back in our 100 Days we [wrote a notification script](https://github.com/pybites/100DaysOfCode/blob/master/076/packt_notification.py)).
In this section I will make a simple Selenium script to search my collection for a title and provide me the download link. Of course my books are behind login, so I use Selenium to login first. This script/idea also came out of our own 100 days of code, see [day 66](https://github.com/pybites/100DaysOfCode/blob/master/066/packt.py).
How would this work? Quite simply:
1. Go to the Packt login URL and login:

2. You get to your Account page:

3. Go the _My eBooks_ links and parse the html:

Let's get coding!
First of all as per [The twelve-factor app](https://12factor.net/config) I store config details in the environment, __never__ in the script.
`os.environ.get` lets you retrieve environment variables, defaulting to `None` if not found. Checking explicitly for `None` I consider a more Pythonic pattern, and to increase readability I added my own exception: `NoLogin`.
```
import os

class NoLogin(Exception): pass

user = os.environ.get('PACKT_USER')
pw = os.environ.get('PACKT_PW')
if user is None or pw is None:
    raise NoLogin('PACKT_USER and PACKT_PW must be set')
```
As shown in the example before we create a `driver` object and go to the `login` URL. We find the `edit-name` and `edit-pass` form elements and send the user and password (stored in the `user` and `pw` variables respectively). Appending the `Keys.RETURN` submits the form (html forms can be submitted by hitting Enter on an input field, not so on a textarea box, there it would enter a newline).
```
login = 'https://www.packtpub.com/login'
driver = webdriver.Chrome()
driver.get(login)
driver.find_element_by_id('edit-name').send_keys(user)
driver.find_element_by_id('edit-pass').send_keys(pw + Keys.RETURN)
```
Note that at this point a Chrome browser window opened in the background. It will close when we close the driver later:

Also note that there is a natural delay between steps because we are using a notebook. In a script, though, the steps run one after the other at lightning speed.
Also in this case there is no pagination, so my 100+ books take some time to load. So if you use this as a script you might want to add: `driver.implicitly_wait(3)`
Now let's get to the actual content:
```
driver.find_element_by_link_text('My eBooks').click()
```
Next I use a _dictionary comprehension_ to store the book id (`nid`) as keys and the book titles as values. You could sync this to a local file or sqlite DB if you want to cache these results.
```
elements = driver.find_elements_by_class_name("product-line")
books = {e.get_attribute('nid'): e.get_attribute('title') for e in elements}
books
```
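As suggested above, the `books` mapping could be cached locally so the script does not have to log in on every run. A minimal sketch (the cache file name and helper names are arbitrary choices, not part of the original script):

```python
import json
import os

CACHE_FILE = 'packt_books.json'  # arbitrary cache location

def save_books(books, path=CACHE_FILE):
    """Persist the {nid: title} mapping to disk as JSON."""
    with open(path, 'w') as f:
        json.dump(books, f)

def load_books(path=CACHE_FILE):
    """Return the cached mapping, or None if no cache exists yet."""
    if not os.path.exists(path):
        return None
    with open(path) as f:
        return json.load(f)
```

A sqlite table would work just as well; JSON simply keeps the sketch dependency-free.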
First time around I made the mistake of closing `driver` right after getting `elements`, but the `get_attribute` calls used to build `books` still need the session. So only now should you close it. This also closes the Chrome app:
```
driver.close()
```
Now let's write a function that receives a regex search term and a book format, and prints the matching books from my collection:
```
import re
DOWNLOAD_URL = 'https://www.packtpub.com/ebook_download/{nid}/{ebook_format}'
BOOK_FORMATS = 'pdf epub mobi'
def get_books(grep, ebook_format):
    """Receives a grep regex and book format (epub, pdf, mobi)
    and prints the titles + urls of matching ebooks"""
    grep = grep.lower()
    ebook_format = ebook_format.lower()
    if ebook_format not in BOOK_FORMATS.split():
        raise ValueError(f'Not a valid book format (valid are: {BOOK_FORMATS})')
    for nid, title in books.items():
        if re.search(grep, title.lower()):
            url = DOWNLOAD_URL.format(nid=nid, ebook_format=ebook_format)
            print(title, url)

get_books('python.*data', 'mobi')
get_books('Machine.*Learning', 'PDF')
```
#### Example 2. autocreate a PyBites banner
Some time ago [I made a banner generator with Pillow and Flask](https://pybit.es/pillow-banner-flask.html). It is hosted [here](http://pybites-banners.herokuapp.com).
Although this is nice what if I want to make banners automatically? Let's try to do so using Selenium:
Let's break the task down into various steps:
1. Although the site can be used without login, authenticated users have their banners stored, so go straight to the login URL.
2. Log in.
3. Locate the form elements, provide the proper data, then click the submit button.
4. Download the output image it generates.
After the previous exercise this should be quite straightforward:
```
user = os.environ.get('PB_BANNER_USER')
pw = os.environ.get('PB_BANNER_PW')
class NoLogin(Exception):
    pass

if user is None or pw is None:
    raise NoLogin('Set PB_BANNER_USER and PB_BANNER_PW in your env')
login = 'https://pybites-banners.herokuapp.com/login'
driver = webdriver.Chrome()
driver.get(login)
driver.find_element_by_id('username').send_keys(user)
driver.find_element_by_id('password').send_keys(pw + Keys.RETURN)
from datetime import datetime

def get_title():
    """Creates a title to store banner as, e.g. newsYYYYWW
    (YYYY = year, WW = week number)"""
    now = datetime.now()
    year = now.year
    week = str(now.isocalendar()[1]).zfill(2)
    return f'news{year}{week}'

title = get_title()

# recompute year/week for the banner text below
now = datetime.now()
year = now.year
week = str(now.isocalendar()[1]).zfill(2)
news_option = 'pybites-news'
bg_image = 'http://www.allwhitebackground.com/images/2/2210.jpg'
banner_text = f'from pybites import News -> Twitter Digest {year} Week {week}'
driver.find_element_by_id('name').send_keys(title)
driver.find_element_by_xpath(f'//select[@name="image_url1"]/option[text()="{news_option}"]').click()
driver.find_element_by_id('text').send_keys(banner_text)
driver.find_element_by_id('image_url2').send_keys(bg_image + Keys.RETURN)
```
And the result: the banner is generated and saved to the account (screenshot omitted).
```
driver.close()
```
## Second + third day: practice time!
Now it's your turn. The goal is to have you get your hands dirty using Python Selenium.
### Testing with Selenium
I deliberately left testing out, because we have a nice code challenge for you to practice:
- First checkout the docs: [Using Selenium to write tests](http://selenium-python.readthedocs.io/getting-started.html#using-selenium-to-write-tests)
- Then head over to [Code Challenge 32 - Test a Simple Django App With Selenium](https://codechalleng.es/challenges/32/) and try to automate testing of [PyBites first ever Django App](http://pyplanet.herokuapp.com).
### Scratch your own itch
Although the testing option is our favorite, we want you to be free.
In this notebook I gave you two examples of automated tasks. Maybe you want to try them yourself, build them out?
Or what about trying to use Selenium on your favorite website or service? e.g. login to Facebook, Twitter, Reddit and read/post content ...
There are many possibilities. Again, having you code up a script using what you just learned will make all the difference in what you get out of this lesson.
Have fun and remember: _Keep calm and code in Python!_
### Time to share what you've accomplished!
Be sure to share your last couple of days work on Twitter or Facebook. Use the hashtag **#100DaysOfCode**.
Here are [some examples](https://twitter.com/search?q=%23100DaysOfCode) to inspire you. Consider including [@talkpython](https://twitter.com/talkpython) and [@pybites](https://twitter.com/pybites) in your tweets.
*See a mistake in these instructions? Please [submit a new issue](https://github.com/talkpython/100daysofcode-with-python-course/issues) or fix it and [submit a PR](https://github.com/talkpython/100daysofcode-with-python-course/pulls).*
<a href="https://colab.research.google.com/github/Adminixtrator/gpt-2/blob/master/GPT_2_With_SQuAD.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Calling file from Repository
```
!git clone https://github.com/adminixtrator/gpt-2.git
%cd gpt-2
%ls
```
# Using the gpt-2 model 345M
```
#Download the gpt-2 model 345M..
!python3 download_model.py 345M
#Encoding..
!export PYTHONIOENCODING=UTF-8
```
# Implementing GPT-2
```
#Changing directory..
import os
os.chdir('src')
!pip install regex #For OpenAI GPT
#Importing the necessary libraries..
import json
import numpy as np
import tensorflow as tf
import model, sample, encoder
#Function to use the interaction model..
def interact_model(model_name, seed, nsamples, batch_size, length, temperature, top_k, models_dir):
    models_dir = os.path.expanduser(os.path.expandvars(models_dir))
    if batch_size is None:
        batch_size = 1
    assert nsamples % batch_size == 0

    enc = encoder.get_encoder(model_name, models_dir)
    hparams = model.default_hparams()
    with open(os.path.join(models_dir, model_name, 'hparams.json')) as f:
        hparams.override_from_dict(json.load(f))

    if length is None:
        length = hparams.n_ctx // 2
    elif length > hparams.n_ctx:
        raise ValueError("Can't get samples longer than window size: %s" % hparams.n_ctx)

    with tf.Session(graph=tf.Graph()) as sess:
        context = tf.placeholder(tf.int32, [batch_size, None])
        np.random.seed(seed)
        tf.set_random_seed(seed)
        output = sample.sample_sequence(hparams=hparams, length=length, context=context,
                                        batch_size=batch_size, temperature=temperature, top_k=top_k)
        saver = tf.train.Saver(save_relative_paths=True)
        ckpt = tf.train.latest_checkpoint(os.path.join(models_dir, model_name))
        saver.restore(sess, ckpt)

        while True:
            raw_text = input("\nModel prompt >>> ")
            # typing this sentinel exits the interaction loop
            if raw_text == 'ADMIN_NIXTRATOR':
                raw_text = False
                break
            while not raw_text:
                print('\nPrompt should not be empty!')
                raw_text = input("\nModel prompt >>> ")
            context_tokens = enc.encode(raw_text)
            generated = 0
            for _ in range(nsamples // batch_size):
                out = sess.run(output, feed_dict={
                    context: [context_tokens for _ in range(batch_size)]
                })[:, len(context_tokens):]
                for i in range(batch_size):
                    generated += 1
                    text = enc.decode(out[i])
                    print("=" * 40 + " SAMPLE " + str(generated) + " " + "=" * 40)
                    print(text)
            print("=" * 80)
```
# **Code Explanation**
## **model_name**:
This indicates which model we are using. In our case, we are using the GPT-2 model with 345 million parameters (weights).
## **seed**:
Integer seed for the random number generators; fix the seed to reproduce results.
## **nsamples**:
This represents the number of sample texts generated in our output
## **batch_size**:
This only affects speed/memory. This must also divide nsamples
*Note: To generate more than one sample, you need to change the values of both nsamples and batch_size and also have to keep them equal.*
## **length**:
It represents the number of tokens in the generated text. If the length is None, then the number of tokens is decided by model hyperparameters
## **temperature**:
This controls randomness in Boltzmann distribution. Lower temperature results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive. Higher temperature results in more random completions
## **top_k**:
This parameter controls diversity. If the value of top_k is set to 1, this means that only 1 word is considered for each step (token). If top_k is set to 40, that means 40 words are considered at each step. 0 (default) is a special setting meaning no restrictions. top_k = 40 generally is a good value
## **models_dir**:
It represents the path to parent folder containing model subfolders (contains the <model_name> folder)
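To build some intuition for `temperature` and `top_k`, here is a small self-contained sketch of temperature-scaled softmax with top-k filtering over toy logits (an illustration only, not the GPT-2 implementation):

```python
import numpy as np

def softmax_with_temperature(logits, temperature=1.0):
    """Lower temperature -> sharper (less random) distribution over tokens."""
    scaled = np.asarray(logits, dtype=float) / temperature
    exp = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    return exp / exp.sum()

def top_k_filter(logits, k):
    """Keep only the k largest logits; mask the rest to -inf (k=0 means no filtering)."""
    logits = np.asarray(logits, dtype=float)
    if k <= 0:
        return logits
    cutoff = np.sort(logits)[-k]
    return np.where(logits < cutoff, -np.inf, logits)

logits = np.array([2.0, 1.0, 0.5, 0.1])
sharp = softmax_with_temperature(logits, temperature=0.1)
flat = softmax_with_temperature(logits, temperature=10.0)
# At low temperature almost all probability mass sits on the argmax token.
```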
# Results
```
#Using the arguments above..
interact_model('345M', None, 1, 1, 20, 1, 0, '/content/gpt-2/models')
```
# Fine-tuning on SQuAD for question-answering
```
#Checking Directory..
os.chdir('/content/gpt-2/SQuAD/')
%ls
#Importing the necessary libraries..
import numpy as np, pandas as pd
import json
import ast
from textblob import TextBlob
import nltk
import torch
import pickle
from scipy import spatial
import warnings
warnings.filterwarnings('ignore')
import spacy
from nltk import Tree
en_nlp = spacy.load('en')
from nltk.stem.lancaster import LancasterStemmer
st = LancasterStemmer()
from sklearn.feature_extraction.text import TfidfVectorizer, TfidfTransformer
#Train set
train = pd.read_json("data/train-v2.0.json")
#Familiarizing with the dataset..
train.shape
```
## Loading Embedding dictionary
```
def get_target(x):
    idx = -1
    for i in range(len(x["sentences"])):
        if x["text"] in x["sentences"][i]:
            idx = i
    return idx
train.data
train.dropna(inplace=True)
train.shape
```
## Data Processing
```
def process_data(train):
    print("step 1")
    train['sentences'] = train['context'].apply(lambda x: [item.raw for item in TextBlob(x).sentences])
    print("step 2")
    train["target"] = train.apply(get_target, axis=1)
    print("step 3")
    # dict_emb (a {sentence: embedding} mapping) is assumed to have been
    # loaded in the "Loading Embedding dictionary" step above
    train['sent_emb'] = train['sentences'].apply(
        lambda x: [dict_emb[item][0] if item in dict_emb else np.zeros(4096) for item in x])
    print("step 4")
    train['quest_emb'] = train['question'].apply(lambda x: dict_emb[x] if x in dict_emb else np.zeros(4096))
    return train

train = process_data(train)
def cosine_sim(x):
    li = []
    for item in x["sent_emb"]:
        li.append(spatial.distance.cosine(item, x["quest_emb"][0]))
    return li

def pred_idx(distances):
    return np.argmin(distances)

#Function to make predictions..
def predictions(train):
    train["cosine_sim"] = train.apply(cosine_sim, axis=1)
    train["diff"] = (train["quest_emb"] - train["sent_emb"])**2
    train["euclidean_dis"] = train["diff"].apply(lambda x: list(np.sum(x, axis=1)))
    del train["diff"]
    print("cosine start")
    train["pred_idx_cos"] = train["cosine_sim"].apply(lambda x: pred_idx(x))
    train["pred_idx_euc"] = train["euclidean_dis"].apply(lambda x: pred_idx(x))
    return train
#Making predictions..
predicted = predictions(train)
```
## Accuracy
```
#Function to check accuracy..
def accuracy(target, predicted):
    acc = (target == predicted).sum() / len(target)
    return acc
print(accuracy(predicted["target"], predicted["pred_idx_euc"])) #Accuracy for euclidean Distance
print(accuracy(predicted["target"], predicted["pred_idx_cos"])) #Accuracy for Cosine Similarity
```
## Combined Accuracy
```
label = []
for i in range(predicted.shape[0]):
    if predicted.iloc[i, 10] == predicted.iloc[i, 11]:
        label.append(predicted.iloc[i, 10])
    else:
        # keep both candidate indices when the two methods disagree
        label.append((predicted.iloc[i, 10], predicted.iloc[i, 11]))

ct = 0
for i in range(75206):
    item = predicted["target"][i]
    try:
        if label[i] == predicted["target"][i]:
            ct += 1
    except:
        if item in label[i]:
            ct += 1

ct / 75206  #Accuracy..
```
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# sklearn imports
from sklearn.compose import ColumnTransformer
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.pipeline import Pipeline
from sklearn.metrics import accuracy_score, f1_score, recall_score, roc_auc_score, precision_score
from sklearn.preprocessing import OneHotEncoder
# imblearn imports
from imblearn.over_sampling import RandomOverSampler
from imblearn.under_sampling import RandomUnderSampler
# DiCE imports
import dice_ml
from dice_ml.utils import helpers
```
## Import dataset using pandas
```
dataset = pd.read_csv('healthcare-dataset-stroke-data.csv')
dataset = dataset.dropna()
dataset = dataset.drop(['id'],axis=1)
display(dataset.head())
```
## Returning counts of labels
```
dataset['stroke'].value_counts()
```
The data is heavily imbalanced. We will use random over- and under-sampling and see which technique leads to better performance.
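The idea behind random oversampling can be sketched in plain Python before reaching for `imblearn` (a toy illustration, not the library's implementation):

```python
import random
from collections import Counter

def random_oversample(X, y, seed=0):
    """Duplicate random minority-class rows until every class matches the majority count."""
    rng = random.Random(seed)
    counts = Counter(y)
    target_n = max(counts.values())
    X_out, y_out = list(X), list(y)
    for label, n in counts.items():
        idx = [i for i, lab in enumerate(y) if lab == label]
        for _ in range(target_n - n):
            j = rng.choice(idx)  # resample an existing minority row
            X_out.append(X[j])
            y_out.append(y[j])
    return X_out, y_out

X = [[0], [1], [2], [3], [4]]
y = [0, 0, 0, 0, 1]  # 4 vs 1: imbalanced
X_bal, y_bal = random_oversample(X, y)
```

Random undersampling is the mirror image: drop majority-class rows instead of duplicating minority ones.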
## Train-Test Split
```
from collections import Counter
target = dataset["stroke"]
X = dataset.drop(['stroke'],axis=1)
rus = RandomUnderSampler()
# resampling X, y
dataset, target = rus.fit_resample(X, target)
# new class distribution
print(Counter(target))
print(type(dataset))
dataset = pd.concat([dataset,target],axis=1)
train_dataset, test_dataset, y_train, y_test = train_test_split(dataset,
target,
test_size=0.2,
random_state=0,
stratify=target)
x_train = train_dataset.drop('stroke', axis=1)
x_test = test_dataset.drop('stroke', axis=1)
```
## DICE object
```
d = dice_ml.Data(dataframe=train_dataset, continuous_features=['age','avg_glucose_level','bmi'], outcome_name='stroke')
```
## Pipeline
```
numerical = ['age','avg_glucose_level','bmi']
categorical = x_train.columns.difference(numerical)
categorical_transformer = Pipeline(steps=[
('onehot', OneHotEncoder(handle_unknown='ignore'))])
transformations = ColumnTransformer(
transformers=[
('cat', categorical_transformer, categorical)])
# Append classifier to preprocessing pipeline.
clf_rf = Pipeline(steps=[('preprocessor', transformations),
('clf', RandomForestClassifier(random_state=42))])
clf_svm = Pipeline(steps=[('preprocessor', transformations),
('clf', SVC(random_state=42))])
grid_params_rf = [{'clf__criterion': ['gini', 'entropy'],
'clf__max_features': ['auto', 'sqrt'],
'clf__n_estimators': [int(x) for x in np.linspace(start = 128, stop = 384, num = 32)],
'clf__min_samples_split': [2, 5, 10],
'clf__max_depth': [int(x) for x in np.linspace(start = 5, stop = 8, num = 1)]}]
grid_params_svm = [{'clf__kernel': ['poly', 'rbf','sigmoid'],
'clf__C': [0.1, 1, 10, 100, 1000],
'clf__gamma': [1, 0.1, 0.01, 0.001, 0.0001]}]
jobs = -1
RF = GridSearchCV(estimator=clf_rf,
param_grid=grid_params_rf,
scoring='roc_auc',
cv=10,
n_jobs=jobs)
SVM = GridSearchCV(estimator=clf_svm,
param_grid=grid_params_svm,
scoring='roc_auc',
cv=10,
n_jobs=jobs)
grids = [RF,SVM]
```
## Training models using Pipeline & GridSearchCV
```
# Creating a dict for our reference
grid_dict = {0: 'Random Forest',
1: 'Support Vector Machine'}
best_rf = {'score': 0, 'best_model':0}
best_svm = {'score': 0, 'best_model':0}
# Fit the grid search objects
print('Performing model optimizations...')
for idx, model in enumerate(grids):
    print('\nEstimator: %s' % grid_dict[idx])
    model.fit(x_train, y_train)
    print('Best params are : %s' % model.best_params_)
    # Best training data accuracy
    print('Best training accuracy: %.3f' % model.best_score_)
    # Predict on test data with best params
    y_pred = model.predict(x_test)
    print('Test set accuracy score for best params: %.3f ' % accuracy_score(y_test, y_pred))
    print('Precision score: {}'.format(precision_score(y_test, y_pred)))
    print('Recall score: {}'.format(recall_score(y_test, y_pred)))
    print('F1 score: {}'.format(f1_score(y_test, y_pred)))
    print('AUC-ROC score: {}'.format(roc_auc_score(y_test, y_pred)))
    # Track best (highest test AUC-ROC) model
    if idx == 0:
        if roc_auc_score(y_test, y_pred) > best_rf['score']:
            best_rf['score'] = roc_auc_score(y_test, y_pred)
            best_rf['best_model'] = model.best_params_
    elif idx == 1:
        if roc_auc_score(y_test, y_pred) > best_svm['score']:
            best_svm['score'] = roc_auc_score(y_test, y_pred)
            best_svm['best_model'] = model.best_params_

# save dicts to file
import json
with open('best_stroke_rf_params.txt', 'w') as file:
    file.write(json.dumps(best_rf))  # use `json.loads` to do the reverse
with open('best_stroke_svm_params.txt', 'w') as file:
    file.write(json.dumps(best_svm))  # use `json.loads` to do the reverse
```
##### Copyright 2021 The TF-Agents Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# DQN C51/Rainbow
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/agents/tutorials/9_c51_tutorial">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png" />
View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/agents/blob/master/docs/tutorials/9_c51_tutorial.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/agents/blob/master/docs/tutorials/9_c51_tutorial.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/agents/docs/tutorials/9_c51_tutorial.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
## Introduction
This example shows how to train a [Categorical DQN (C51)](https://arxiv.org/pdf/1707.06887.pdf) agent on the Cartpole environment using the TF-Agents library.

Make sure you take a look through the [DQN tutorial](https://github.com/tensorflow/agents/blob/master/docs/tutorials/1_dqn_tutorial.ipynb) as a prerequisite. This tutorial will assume familiarity with the DQN tutorial; it will mainly focus on the differences between DQN and C51.
## Setup
If you haven't installed tf-agents yet, run:
```
!sudo apt-get update
!sudo apt-get install -y xvfb ffmpeg
!pip install 'imageio==2.4.0'
!pip install pyvirtualdisplay
!pip install tf-agents
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import base64
import imageio
import IPython
import matplotlib
import matplotlib.pyplot as plt
import PIL.Image
import pyvirtualdisplay
import tensorflow as tf
from tf_agents.agents.categorical_dqn import categorical_dqn_agent
from tf_agents.drivers import dynamic_step_driver
from tf_agents.environments import suite_gym
from tf_agents.environments import tf_py_environment
from tf_agents.eval import metric_utils
from tf_agents.metrics import tf_metrics
from tf_agents.networks import categorical_q_network
from tf_agents.policies import random_tf_policy
from tf_agents.replay_buffers import tf_uniform_replay_buffer
from tf_agents.trajectories import trajectory
from tf_agents.utils import common
tf.compat.v1.enable_v2_behavior()
# Set up a virtual display for rendering OpenAI gym environments.
display = pyvirtualdisplay.Display(visible=0, size=(1400, 900)).start()
```
## Hyperparameters
```
env_name = "CartPole-v1" # @param {type:"string"}
num_iterations = 15000 # @param {type:"integer"}
initial_collect_steps = 1000 # @param {type:"integer"}
collect_steps_per_iteration = 1 # @param {type:"integer"}
replay_buffer_capacity = 100000 # @param {type:"integer"}
fc_layer_params = (100,)
batch_size = 64 # @param {type:"integer"}
learning_rate = 1e-3 # @param {type:"number"}
gamma = 0.99
log_interval = 200 # @param {type:"integer"}
num_atoms = 51 # @param {type:"integer"}
min_q_value = -20 # @param {type:"integer"}
max_q_value = 20 # @param {type:"integer"}
n_step_update = 2 # @param {type:"integer"}
num_eval_episodes = 10 # @param {type:"integer"}
eval_interval = 1000 # @param {type:"integer"}
```
## Environment
Load the environment as before, with one for training and one for evaluation. Here we use CartPole-v1 (vs. CartPole-v0 in the DQN tutorial), which has a larger max reward of 500 rather than 200.
```
train_py_env = suite_gym.load(env_name)
eval_py_env = suite_gym.load(env_name)
train_env = tf_py_environment.TFPyEnvironment(train_py_env)
eval_env = tf_py_environment.TFPyEnvironment(eval_py_env)
```
## Agent
C51 is a Q-learning algorithm based on DQN. Like DQN, it can be used on any environment with a discrete action space.
The main difference between C51 and DQN is that rather than simply predicting the Q-value for each state-action pair, C51 predicts a histogram model for the probability distribution of the Q-value:

*(Figure: a categorical distribution over Q-value support points, shown as vertical bars.)*
By learning the distribution rather than simply the expected value, the algorithm is able to stay more stable during training, leading to improved final performance. This is particularly true in situations with bimodal or even multimodal value distributions, where a single average does not provide an accurate picture.
In order to train on probability distributions rather than on values, C51 must perform some complex distributional computations in order to calculate its loss function. But don't worry, all of this is taken care of for you in TF-Agents!
To create a C51 Agent, we first need to create a `CategoricalQNetwork`. The API of the `CategoricalQNetwork` is the same as that of the `QNetwork`, except that there is an additional argument `num_atoms`. This represents the number of support points in our probability distribution estimates. (The above image includes 10 support points, each represented by a vertical blue bar.) As you can tell from the name, the default number of atoms is 51.
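The link between the categorical output and an ordinary Q-value can be sketched in a few lines of NumPy (a toy illustration, independent of TF-Agents):

```python
import numpy as np

num_atoms = 51
min_q_value, max_q_value = -20.0, 20.0
# Fixed support: 51 evenly spaced candidate Q-values between the extremes.
support = np.linspace(min_q_value, max_q_value, num_atoms)

# A (toy) probability distribution over the atoms for one action.
logits = np.random.randn(num_atoms)
probs = np.exp(logits - logits.max())
probs /= probs.sum()

# DQN would output this single expected value directly;
# C51 learns the whole distribution and derives it.
q_value = float((support * probs).sum())
```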
```
categorical_q_net = categorical_q_network.CategoricalQNetwork(
train_env.observation_spec(),
train_env.action_spec(),
num_atoms=num_atoms,
fc_layer_params=fc_layer_params)
```
We also need an `optimizer` to train the network we just created, and a `train_step_counter` variable to keep track of how many times the network was updated.
Note that one other significant difference from vanilla `DqnAgent` is that we now need to specify `min_q_value` and `max_q_value` as arguments. These specify the most extreme values of the support (in other words, the most extreme of the 51 atoms on either side). Make sure to choose these appropriately for your particular environment. Here we use -20 and 20.
```
optimizer = tf.compat.v1.train.AdamOptimizer(learning_rate=learning_rate)
train_step_counter = tf.compat.v2.Variable(0)
agent = categorical_dqn_agent.CategoricalDqnAgent(
train_env.time_step_spec(),
train_env.action_spec(),
categorical_q_network=categorical_q_net,
optimizer=optimizer,
min_q_value=min_q_value,
max_q_value=max_q_value,
n_step_update=n_step_update,
td_errors_loss_fn=common.element_wise_squared_loss,
gamma=gamma,
train_step_counter=train_step_counter)
agent.initialize()
```
One last thing to note is that we also added an argument to use n-step updates with $n$ = 2. In single-step Q-learning ($n$ = 1), we only compute the error between the Q-values at the current time step and the next time step using the single-step return (based on the Bellman optimality equation). The single-step return is defined as:
$G_t = R_{t + 1} + \gamma V(s_{t + 1})$
where we define $V(s) = \max_a{Q(s, a)}$.
N-step updates involve expanding the standard single-step return function $n$ times:
$G_t^n = R_{t + 1} + \gamma R_{t + 2} + \gamma^2 R_{t + 3} + \dots + \gamma^n V(s_{t + n})$
N-step updates enable the agent to bootstrap from further in the future, and with the right value of $n$, this often leads to faster learning.
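As a concrete sketch, the n-step return defined above can be computed numerically (toy numbers, unrelated to the agent):

```python
def n_step_return(rewards, gamma, bootstrap_value):
    """G_t^n = R_{t+1} + gamma*R_{t+2} + ... + gamma^n * V(s_{t+n})."""
    g = 0.0
    for k, r in enumerate(rewards):
        g += (gamma ** k) * r
    # bootstrap from the value estimate n steps ahead
    g += (gamma ** len(rewards)) * bootstrap_value
    return g

# n = 2 as in this tutorial: two observed rewards, then bootstrap from V(s_{t+2}).
g2 = n_step_return([1.0, 1.0], gamma=0.99, bootstrap_value=10.0)
```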
Although C51 and n-step updates are often combined with prioritized replay to form the core of the [Rainbow agent](https://arxiv.org/pdf/1710.02298.pdf), we saw no measurable improvement from implementing prioritized replay. Moreover, we find that when combining our C51 agent with n-step updates alone, our agent performs as well as other Rainbow agents on the sample of Atari environments we've tested.
## Metrics and Evaluation
The most common metric used to evaluate a policy is the average return. The return is the sum of rewards obtained while running a policy in an environment for an episode, and we usually average this over a few episodes. We can compute the average return metric as follows.
```
#@test {"skip": true}
def compute_avg_return(environment, policy, num_episodes=10):
    total_return = 0.0
    for _ in range(num_episodes):
        time_step = environment.reset()
        episode_return = 0.0
        while not time_step.is_last():
            action_step = policy.action(time_step)
            time_step = environment.step(action_step.action)
            episode_return += time_step.reward
        total_return += episode_return
    avg_return = total_return / num_episodes
    return avg_return.numpy()[0]

random_policy = random_tf_policy.RandomTFPolicy(train_env.time_step_spec(),
                                                train_env.action_spec())

compute_avg_return(eval_env, random_policy, num_eval_episodes)
# Please also see the metrics module for standard implementations of different
# metrics.
```
## Data Collection
As in the DQN tutorial, set up the replay buffer and the initial data collection with the random policy.
```
#@test {"skip": true}
replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer(
data_spec=agent.collect_data_spec,
batch_size=train_env.batch_size,
max_length=replay_buffer_capacity)
def collect_step(environment, policy):
    time_step = environment.current_time_step()
    action_step = policy.action(time_step)
    next_time_step = environment.step(action_step.action)
    traj = trajectory.from_transition(time_step, action_step, next_time_step)
    # Add trajectory to the replay buffer
    replay_buffer.add_batch(traj)

for _ in range(initial_collect_steps):
    collect_step(train_env, random_policy)
# This loop is so common in RL, that we provide standard implementations of
# these. For more details see the drivers module.
# Dataset generates trajectories with shape [BxTx...] where
# T = n_step_update + 1.
dataset = replay_buffer.as_dataset(
num_parallel_calls=3, sample_batch_size=batch_size,
num_steps=n_step_update + 1).prefetch(3)
iterator = iter(dataset)
```
## Training the agent
The training loop involves both collecting data from the environment and optimizing the agent's networks. Along the way, we will occasionally evaluate the agent's policy to see how we are doing.
The following will take ~7 minutes to run.
```
#@test {"skip": true}
try:
  %%time
except:
  pass
# (Optional) Optimize by wrapping some of the code in a graph using TF function.
agent.train = common.function(agent.train)
# Reset the train step
agent.train_step_counter.assign(0)
# Evaluate the agent's policy once before training.
avg_return = compute_avg_return(eval_env, agent.policy, num_eval_episodes)
returns = [avg_return]
for _ in range(num_iterations):

    # Collect a few steps using collect_policy and save to the replay buffer.
    for _ in range(collect_steps_per_iteration):
        collect_step(train_env, agent.collect_policy)

    # Sample a batch of data from the buffer and update the agent's network.
    experience, unused_info = next(iterator)
    train_loss = agent.train(experience)

    step = agent.train_step_counter.numpy()

    if step % log_interval == 0:
        print('step = {0}: loss = {1}'.format(step, train_loss.loss))

    if step % eval_interval == 0:
        avg_return = compute_avg_return(eval_env, agent.policy, num_eval_episodes)
        print('step = {0}: Average Return = {1:.2f}'.format(step, avg_return))
        returns.append(avg_return)
```
## Visualization
### Plots
We can plot return vs global steps to see the performance of our agent. In `CartPole-v1`, the environment gives a reward of +1 for every time step the pole stays up, and since the maximum number of steps is 500, the maximum possible return is also 500.
```
#@test {"skip": true}
steps = range(0, num_iterations + 1, eval_interval)
plt.plot(steps, returns)
plt.ylabel('Average Return')
plt.xlabel('Step')
plt.ylim(top=550)
```
### Videos
It is helpful to visualize the performance of an agent by rendering the environment at each step. Before we do that, let us first create a function to embed videos in this colab.
```
def embed_mp4(filename):
    """Embeds an mp4 file in the notebook."""
    video = open(filename, 'rb').read()
    b64 = base64.b64encode(video)
    tag = '''
    <video width="640" height="480" controls>
      <source src="data:video/mp4;base64,{0}" type="video/mp4">
      Your browser does not support the video tag.
    </video>'''.format(b64.decode())
    return IPython.display.HTML(tag)
```
The following code visualizes the agent's policy for a few episodes:
```
num_episodes = 3
video_filename = 'imageio.mp4'
with imageio.get_writer(video_filename, fps=60) as video:
    for _ in range(num_episodes):
        time_step = eval_env.reset()
        video.append_data(eval_py_env.render())
        while not time_step.is_last():
            action_step = agent.policy.action(time_step)
            time_step = eval_env.step(action_step.action)
            video.append_data(eval_py_env.render())

embed_mp4(video_filename)
```
C51 tends to do slightly better than DQN on CartPole-v1, but the difference between the two agents becomes more and more significant in increasingly complex environments. For example, on the full Atari 2600 benchmark, C51 demonstrates a mean score improvement of 126% over DQN after normalizing with respect to a random agent. Additional improvements can be gained by including n-step updates.
For a deeper dive into the C51 algorithm, see [A Distributional Perspective on Reinforcement Learning (2017)](https://arxiv.org/pdf/1707.06887.pdf).
# Multimode simulations
## Imports
```
from itertools import combinations, combinations_with_replacement, product
import jax
import jax.numpy as jnp
import sax
```
## Ports and modes per port
Let's denote a combination of a port and a mode by a string of the following format: `"{port}@{mode}"`. We can obtain all possible port-mode combinations with some magic itertools functions:
```
ports = ["in0", "out0"]
modes = ["te", "tm"]
portmodes = [
(f"{p1}@{m1}", f"{p2}@{m2}")
for (p1, m1), (p2, m2) in combinations_with_replacement(product(ports, modes), 2)
]
portmodes
```
If we disregard backreflection, this can be simplified further:
```
portmodes_without_backreflection = [
(p1, p2) for p1, p2 in portmodes if p1.split("@")[0] != p2.split("@")[0]
]
portmodes_without_backreflection
```
Sometimes cross-polarization terms can also be ignored:
```
portmodes_without_crosspolarization = [
(p1, p2) for p1, p2 in portmodes if p1.split("@")[1] == p2.split("@")[1]
]
portmodes_without_crosspolarization
```
## Multimode waveguide
Let's create a waveguide with two ports (`"in0"`, `"out0"`) and two modes (`"te"`, `"tm"`) without backreflection. Let's assume there is 5% cross-polarization and that the `"tm"`->`"tm"` transmission is 10% worse than the `"te"`->`"te"` transmission. Naturally, in more realistic waveguide models these percentages will be length-dependent, but this is just a dummy model serving as an example.
```
def waveguide(wl=1.55, wl0=1.55, neff=2.34, ng=3.4, length=10.0, loss=0.0):
    """a simple straight waveguide model

    Args:
        wl: wavelength
        neff: waveguide effective index
        ng: waveguide group index (used for linear neff dispersion)
        wl0: center wavelength at which neff is defined
        length: [m] waveguide length
        loss: [dB/m] waveguide loss
    """
    dwl = wl - wl0
    dneff_dwl = (ng - neff) / wl0
    neff = neff - dwl * dneff_dwl
    phase = 2 * jnp.pi * neff * length / wl
    transmission = 10 ** (-loss * length / 20) * jnp.exp(1j * phase)
    sdict = sax.reciprocal(
        {
            ("in0@te", "out0@te"): 0.95 * transmission,  # 5% lost to cross-polarization
            ("in0@te", "out0@tm"): 0.05 * transmission,  # 5% cross-polarization
            ("in0@tm", "out0@tm"): 0.85 * transmission,  # 10% worse tm->tm than te->te
            ("in0@tm", "out0@te"): 0.05 * transmission,  # 5% cross-polarization
        }
    )
    return sdict
waveguide()
```
## Multimode MZI
We can now combine these models into a circuit in much the same way as before. We just need to add the `modes=` keyword:
```
mzi = sax.circuit(
instances={
"lft": sax.models.coupler, # single mode models will be automatically converted to multimode models without cross polarization.
"top": sax.partial(waveguide, length=25.0),
"btm": sax.partial(waveguide, length=15.0),
"rgt": sax.models.coupler, # single mode models will be automatically converted to multimode models without cross polarization.
},
connections={
"lft,out0": "btm,in0",
"btm,out0": "rgt,in0",
"lft,out1": "top,in0",
"top,out0": "rgt,in1",
},
ports={
"in0": "lft,in0",
"in1": "lft,in1",
"out0": "rgt,out0",
"out1": "rgt,out1",
},
modes=("te", "tm"),
)
mzi()
```
We can convert this model back to a single-mode `SDict` as follows:
```
mzi_te = sax.singlemode(mzi, mode="te")
mzi_te()
```
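As a rough sanity check on what a wavelength sweep of `mzi_te` should produce, here is a minimal NumPy sketch (not calling sax itself) of the ideal interference pattern, assuming perfect 50/50 couplers, the 25/15 arm lengths from the circuit above, and the same linear-dispersion `neff` model as the waveguide function:

```python
import numpy as np

# Assumed parameters mirror the waveguide model above; couplers are idealized.
wl = np.linspace(1.50, 1.60, 1000)              # wavelength sweep
wl0, neff0, ng = 1.55, 2.34, 3.4
neff = neff0 - (wl - wl0) * (ng - neff0) / wl0  # linear neff dispersion
dphi = 2 * np.pi * neff * (25.0 - 15.0) / wl    # phase difference, dL = 10
T = np.cos(dphi / 2) ** 2                       # ideal bar-port transmission
```

The 10 path-length imbalance produces the familiar sinusoidal fringes; the sax model additionally carries the loss and cross-polarization terms.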
## Churn Prediction using Logistic Regression
## Data Dictionary
There are multiple variables in the dataset, which can be cleanly divided into 3 categories:
### Demographic information about customers
<b>customer_id</b> - Customer id
<b>vintage</b> - Vintage of the customer with the bank in number of days
<b>age</b> - Age of customer
<b>gender</b> - Gender of customer
<b>dependents</b> - Number of dependents
<b>occupation</b> - Occupation of the customer
<b>city</b> - City of customer (anonymised)
### Customer Bank Relationship
<b>customer_nw_category</b> - Net worth of customer (3:Low 2:Medium 1:High)
<b>branch_code</b> - Branch Code for customer account
<b>days_since_last_transaction</b> - No of Days Since Last Credit in Last 1 year
### Transactional Information
<b>current_balance</b> - Balance as of today
<b>previous_month_end_balance</b> - End of Month Balance of previous month
<b>average_monthly_balance_prevQ</b> - Average monthly balances (AMB) in Previous Quarter
<b>average_monthly_balance_prevQ2</b> - Average monthly balances (AMB) in previous to previous quarter
<b>current_month_credit</b> - Total Credit Amount current month
<b>previous_month_credit</b> - Total Credit Amount previous month
<b>current_month_debit</b> - Total Debit Amount current month
<b>previous_month_debit</b> - Total Debit Amount previous month
<b>current_month_balance</b> - Average Balance of current month
<b>previous_month_balance</b> - Average Balance of previous month
<b>churn</b> - Average balance of customer falls below minimum balance in the next quarter (1/0)
## Churn Prediction
* Load Data & Packages for model building & preprocessing
* Preprocessing & Missing value imputation
* Select features on the basis of EDA Conclusions & build baseline model
* Decide Evaluation Metric on the basis of business problem
* Build model using all features & compare with baseline
### Loading Packages
```
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, StratifiedKFold, train_test_split
from sklearn.metrics import roc_auc_score, accuracy_score, confusion_matrix, roc_curve, precision_score, recall_score, precision_recall_curve
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
warnings.simplefilter(action='ignore', category=UserWarning)
```
### Loading Data
```
df = pd.read_csv('churn_prediction.csv')
```
### Missing Values
Before we go on to build the model, we must look for missing values within the dataset as treating the missing values is a necessary step before we fit a model on the dataset.
```
pd.isnull(df).sum()
```
The result of this function shows that there are quite a few missing values in the columns gender, dependents, city, days since last transaction and percentage change in credits. Let us go through them one by one to find an appropriate missing-value imputation strategy for each.
#### Gender
Let us look at the categories within the gender column:
```
df['gender'].value_counts()
```
So there is a good mix of males and females, and arguably the missing values cannot be filled with either one. We could create a separate category by assigning the value -1 to all missing values in this column.
Before that, first we will convert the gender into 0/1 and then replace missing values with -1
```
#Convert Gender
dict_gender = {'Male': 1, 'Female':0}
df.replace({'gender': dict_gender}, inplace = True)
df['gender'] = df['gender'].fillna(-1)
```
#### Dependents, occupation and city with mode
Next we will take a quick look at the dependents and occupation columns and impute missing values with the mode, since these are essentially ordinal/categorical variables.
```
df['dependents'].value_counts()
df['occupation'].value_counts()
df['dependents'] = df['dependents'].fillna(0)
df['occupation'] = df['occupation'].fillna('self_employed')
```
Similarly, city can be imputed with its most common category, 1020.
```
df['city'] = df['city'].fillna(1020)
```
#### Days since Last Transaction
A fair assumption can be made for this column: since it records the number of days since the last transaction within 1 year, we can substitute missing values with a value greater than 1 year, say 999.
```
df['days_since_last_transaction'] = df['days_since_last_transaction'].fillna(999)
```
### Preprocessing
Now, before applying a linear model such as logistic regression, we need to scale the data and keep all features strictly numeric.
### Dummies with Multiple Categories
```
# Convert occupation to one hot encoded features
df = pd.concat([df,pd.get_dummies(df['occupation'],prefix = str('occupation'),prefix_sep='_')],axis = 1)
```
### Scaling Numerical Features for Logistic Regression
Now, we remember that there are a lot of outliers in the dataset especially when it comes to previous and current balance features. Also, the distributions are skewed for these features. We will take 2 steps to deal with that here:
* Log Transformation
* Standard Scaler
Standard scaling is in any case a necessity when it comes to linear models, and we apply it here after performing the log transformation on all balance features.
```
num_cols = ['customer_nw_category', 'current_balance',
'previous_month_end_balance', 'average_monthly_balance_prevQ2', 'average_monthly_balance_prevQ',
'current_month_credit','previous_month_credit', 'current_month_debit',
'previous_month_debit','current_month_balance', 'previous_month_balance']
for i in num_cols:
df[i] = np.log(df[i] + 17000)
std = StandardScaler()
scaled = std.fit_transform(df[num_cols])
scaled = pd.DataFrame(scaled,columns=num_cols)
df_df_og = df.copy()
df = df.drop(columns = num_cols,axis = 1)
df = df.merge(scaled,left_index=True,right_index=True,how = "left")
y_all = df.churn
df = df.drop(['churn','customer_id','occupation'],axis = 1)
```
## Model Building and Evaluation Metrics
Since this is a binary classification problem, we could use the following 2 popular metrics:
1. Recall
2. Area under the Receiver operating characteristic curve
Now, we are looking at the recall value here because a customer falsely marked as churn would not be as bad as a churning customer who goes undetected, in which case the bank cannot take appropriate measures to stop him/her from churning.
The ROC AUC is the area under the curve obtained by plotting the true positive rate (y-axis) against the false positive rate (x-axis) at varying classification thresholds.
Our main metric here would be the recall value, while the ROC AUC score would take care of how well the predicted probabilities are able to differentiate between the 2 classes.
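To make the two metrics concrete, here is a tiny example on hypothetical labels and scores, using the `sklearn.metrics` functions already imported above:

```python
import numpy as np
from sklearn.metrics import recall_score, roc_auc_score

y_true = np.array([0, 0, 1, 1])            # 1 = churn (hypothetical labels)
y_score = np.array([0.1, 0.4, 0.35, 0.8])  # predicted churn probabilities
y_pred = (y_score > 0.5).astype(int)       # default 0.5 threshold

print(recall_score(y_true, y_pred))        # 0.5 -- one churner was missed
print(roc_auc_score(y_true, y_score))      # 0.75 -- ranking quality of scores
```

Note that recall depends on the chosen threshold, while ROC AUC is computed from the raw scores across all thresholds.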
### Conclusions from EDA
* For debit values, we see that there is a significant difference in the distribution for churn and non-churn, and it might turn out to be an important feature
* For all the balance features the lower values have much higher proportion of churning customers
* For most frequent vintage values, the churning customers are slightly higher, while for higher values of vintage, we have mostly non churning customers which is in sync with the age variable
* We see significant difference for different occupations and certainly would be interesting to use as a feature for prediction of churn.
Now, we will first split our dataset into train and test sets and, using the above conclusions, select columns to build a baseline logistic regression model and check the ROC-AUC score and the confusion matrix.
### Baseline Columns
```
baseline_cols = ['current_month_debit', 'previous_month_debit','current_balance','previous_month_end_balance','vintage'
,'occupation_retired', 'occupation_salaried','occupation_self_employed', 'occupation_student']
df_baseline = df[baseline_cols]
```
### Train Test Split to create a validation set
```
# Splitting the data into Train and Validation set
xtrain, xtest, ytrain, ytest = train_test_split(df_baseline,y_all,test_size=1/3, random_state=11, stratify = y_all)
model = LogisticRegression()
model.fit(xtrain,ytrain)
pred = model.predict_proba(xtest)[:,1]
```
### AUC ROC Curve & Confusion Matrix
Now, let us quickly look at the AUC-ROC curve for our logistic regression model and also the confusion matrix to see where the logistic regression model is failing here.
```
from sklearn.metrics import roc_curve
fpr, tpr, _ = roc_curve(ytest,pred)
auc = roc_auc_score(ytest, pred)
plt.figure(figsize=(12,8))
plt.plot(fpr,tpr,label="Validation AUC-ROC="+str(auc))
x = np.linspace(0, 1, 1000)
plt.plot(x, x, linestyle='-')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend(loc=4)
plt.show()
# Confusion Matrix
pred_val = model.predict(xtest)
label_preds = pred_val
cm = confusion_matrix(ytest,label_preds)
def plot_confusion_matrix(cm, normalized=True, cmap='bone'):
plt.figure(figsize=[7, 6])
norm_cm = cm
if normalized:
norm_cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
sns.heatmap(norm_cm, annot=cm, fmt='g', xticklabels=['Predicted: No','Predicted: Yes'], yticklabels=['Actual: No','Actual: Yes'], cmap=cmap)
plot_confusion_matrix(cm)
# Recall Score
recall_score(ytest,pred_val)
```
### Cross validation
Cross validation is one of the most important concepts in any type of data modelling. The idea is simple: hold out a sample on which the model is not trained, and test the model on this sample before finalizing it.
We divide the entire population into k equal samples, train models on k-1 samples and validate on the remaining one. At the second iteration we train the model with a different sample held out as validation.
In k iterations, we have built a model on each subset and held each of them out as validation. This is a way to reduce selection bias and reduce the variance in prediction power.
Since it builds several models on different subsets of the dataset, we can be more sure of our model performance if we use CV for testing our models.
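As a minimal illustration of stratified splitting (on hypothetical data), each validation fold preserves the overall class ratio:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

y = np.array([0] * 40 + [1] * 10)   # 20% positive class, hypothetical target
X = np.arange(50).reshape(-1, 1)

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=12)
for train_idx, val_idx in skf.split(X, y):
    # every validation fold has 10 samples, exactly 2 of them positive
    assert len(val_idx) == 10 and y[val_idx].sum() == 2
```

This is exactly the splitting strategy used by the `cv_score` helper, which additionally fits a model and reports scores per fold.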
```
def cv_score(ml_model, rstate = 12, thres = 0.5, cols = df.columns):
i = 1
cv_scores = []
df1 = df.copy()
df1 = df[cols]
# 5 Fold cross validation stratified on the basis of target
kf = StratifiedKFold(n_splits=5,random_state=rstate,shuffle=True)
for df_index,test_index in kf.split(df1,y_all):
print('\n{} of kfold {}'.format(i,kf.n_splits))
xtr,xvl = df1.loc[df_index],df1.loc[test_index]
ytr,yvl = y_all.loc[df_index],y_all.loc[test_index]
# Define model for fitting on the training set for each fold
model = ml_model
model.fit(xtr, ytr)
pred_probs = model.predict_proba(xvl)
pp = []
# Use threshold to define the classes based on probability values
for j in pred_probs[:,1]:
if j>thres:
pp.append(1)
else:
pp.append(0)
# Calculate scores for each fold and print
pred_val = pp
roc_score = roc_auc_score(yvl,pred_probs[:,1])
recall = recall_score(yvl,pred_val)
precision = precision_score(yvl,pred_val)
sufix = ""
msg = ""
msg += "ROC AUC Score: {}, Recall Score: {:.4f}, Precision Score: {:.4f} ".format(roc_score, recall,precision)
print("{}".format(msg))
# Save scores
cv_scores.append(roc_score)
i+=1
return cv_scores
baseline_scores = cv_score(LogisticRegression(), cols = baseline_cols)
```
Now let us try using all columns available to check if we get significant improvement.
```
all_feat_scores = cv_score(LogisticRegression())
```
There is some improvement in both ROC AUC Scores and Precision/Recall Scores.
```
from sklearn.ensemble import RandomForestClassifier
rf_all_features = cv_score(RandomForestClassifier(n_estimators=100, max_depth=8))
```
## Comparison of Different model fold wise
Let us visualise the cross validation scores for each fold for the following 3 models and observe differences:
* Baseline Model
* Model based on all features
* Random forest model using all features
```
results_df = pd.DataFrame({'baseline':baseline_scores, 'all_feats': all_feat_scores, 'random_forest': rf_all_features})
results_df.plot(y=["baseline", "all_feats", "random_forest"], kind="bar")
```
Here, we can see that the random forest model gives the best result for each fold; students are encouraged to try to fine-tune the model to get the best results.
```
%load_ext autoreload
%reload_ext autoreload
%autoreload 2
%matplotlib inline
import os
# TO USE A DATABASE OTHER THAN SQLITE, USE THIS LINE
# Note that this is necessary for parallel execution amongst other things...
# os.environ['SNORKELDB'] = 'postgres:///snorkel-intro'
from snorkel import SnorkelSession
session = SnorkelSession()
from snorkel.models import candidate_subclass
ChemicalDisease = candidate_subclass('ChemicalDisease', ['chemical', 'disease'])
train_cands = session.query(ChemicalDisease).filter(ChemicalDisease.split == 0).all()
dev_cands = session.query(ChemicalDisease).filter(ChemicalDisease.split == 1).all()
print(len(train_cands),len(dev_cands))
# from util import load_external_labels
# %time load_external_labels(session, Spouse, annotator_name='gold')
from snorkel.annotations import load_gold_labels
#L_gold_dev = load_gold_labels(session, annotator_name='gold', split=1, zero_one=True)
#L_gold_test = load_gold_labels(session, annotator_name='gold', split=2, zero_one=True)
# L_gold_dev = load_gold_labels(session, annotator_name='gold', split=1)
# L_gold_test = load_gold_labels(session, annotator_name='gold', split=2)
L_gold_dev = load_gold_labels(session, annotator_name='gold', split=1, zero_one=True)
# gold_labels_dev = [L[0,0] if L[0,0]==1 else -1 for L in L_gold_dev]
gold_labels_dev = [L[0,0] for L in L_gold_dev]
from snorkel.learning.utils import MentionScorer
#gold_labels_dev = [x[0,0] for x in L_gold_dev.todense()]
#for i,L in enumerate(gold_labels_dev):
# print(i,gold_labels_dev[i])
# gold_labels_dev = []
# for i,L in enumerate(L_gold_dev):
# gold_labels_dev.append(L[0,0])
# gold_labels_test = []
# for i,L in enumerate(L_gold_test):
# gold_labels_test.append(L[0,0])
# print(len(gold_labels_dev),len(gold_labels_test))
# print(gold_labels_dev.count(1),gold_labels_dev.count(-1))
# print(len(gold_labels_dev))
print(gold_labels_dev.count(1),gold_labels_dev.count(0))
print(len(gold_labels_dev))
import numpy as np  # used below in get_word_vectors
from gensim.parsing.preprocessing import STOPWORDS
import gensim.matutils as gm
from gensim.models.keyedvectors import KeyedVectors
# Load pretrained model (since intermediate data is not included, the model cannot be refined with additional data)
model = KeyedVectors.load_word2vec_format('../../../snorkel/tutorials/glove_w2v.txt', binary=False) # C binary format
wordvec_unavailable= set()
def write_to_file(wordvec_unavailable):
with open("wordvec_unavailable.txt","w") as f:
for word in wordvec_unavailable:
f.write(word+"\n")
def preprocess(tokens):
btw_words = [word for word in tokens if word not in STOPWORDS]
btw_words = [word for word in btw_words if word.isalpha()]
return btw_words
def get_word_vectors(btw_words): # returns vector of embeddings of words
word_vectors= []
for word in btw_words:
try:
word_v = np.array(model[word])
word_v = word_v.reshape(len(word_v),1)
#print(word_v.shape)
word_vectors.append(model[word])
except:
wordvec_unavailable.add(word)
return word_vectors
def get_similarity(word_vectors,target_word): # sent(list of word vecs) to word similarity
similarity = 0
target_word_vector = 0
try:
target_word_vector = model[target_word]
except:
wordvec_unavailable.add(target_word+" t")
return similarity
target_word_sparse = gm.any2sparse(target_word_vector,eps=1e-09)
for wv in word_vectors:
wv_sparse = gm.any2sparse(wv, eps=1e-09)
similarity = max(similarity,gm.cossim(wv_sparse,target_word_sparse))
return similarity
import bz2
from six.moves.cPickle import load
with bz2.BZ2File('data/ctd.pkl.bz2', 'rb') as ctd_f:
ctd_unspecified, ctd_therapy, ctd_marker = load(ctd_f)
##### Discrete #########
def cand_in_ctd_unspecified(c):
return 1 if c.get_cids() in ctd_unspecified else 0
def cand_in_ctd_therapy(c):
return 1 if c.get_cids() in ctd_therapy else 0
def cand_in_ctd_marker(c):
return 1 if c.get_cids() in ctd_marker else 0
def LF_in_ctd_unspecified(c):
if(cand_in_ctd_unspecified(c)==1):
return (-1,1)
else:
return (0,0)
def LF_in_ctd_therapy(c):
if(cand_in_ctd_therapy(c)==1):
return (-1,1)
else:
return (0,0)
def LF_in_ctd_marker(c):
if(cand_in_ctd_marker(c)==1):
return (1,1)
else:
return (0,0)
import re
from snorkel.lf_helpers import (
get_tagged_text,
rule_regex_search_tagged_text,
rule_regex_search_btw_AB,
rule_regex_search_btw_BA,
rule_regex_search_before_A,
rule_regex_search_before_B,
)
# List to parenthetical
def ltp(x):
return '(' + '|'.join(x) + ')'
def LF_induce(c):
return (1,1) if re.search(r'{{A}}.{0,20}induc.{0,20}{{B}}', get_tagged_text(c), flags=re.I) else (0,0)
causal_past = ['induced', 'caused', 'due']
def LF_d_induced_by_c(c):
if (rule_regex_search_btw_BA(c, '.{0,50}' + ltp(causal_past) + '.{0,9}(by|to).{0,50}', 1)==1):
return (1,1)
return (0,0)
def LF_d_induced_by_c_tight(c):
if (rule_regex_search_btw_BA(c, '.{0,50}' + ltp(causal_past) + ' (by|to) ', 1)==1):
return (1,1)
return (0,0)
def LF_induce_name(c):
return (1,1) if 'induc' in c.chemical.get_span().lower() else (0,0)
causal = ['cause[sd]?', 'induce[sd]?', 'associated with']
def LF_c_cause_d(c):
return (1,1) if (
re.search(r'{{A}}.{0,50} ' + ltp(causal) + '.{0,50}{{B}}', get_tagged_text(c), re.I)
and not re.search('{{A}}.{0,50}(not|no).{0,20}' + ltp(causal) + '.{0,50}{{B}}', get_tagged_text(c), re.I)
) else (0,0)
treat = ['treat', 'effective', 'prevent', 'resistant', 'slow', 'promise', 'therap']
def LF_d_treat_c(c):
if (rule_regex_search_btw_BA(c, '.{0,50}' + ltp(treat) + '.{0,50}', -1)==-1):
return (-1,1)
return (0,0)
def LF_c_treat_d(c):
if (rule_regex_search_btw_AB(c, '.{0,50}' + ltp(treat) + '.{0,50}', -1)==-1):
return (-1,1)
return (0,0)
def LF_treat_d(c):
if (rule_regex_search_before_B(c, ltp(treat) + '.{0,50}', -1)==-1):
return (-1,1)
return (0,0)
def LF_c_treat_d_wide(c):
if (rule_regex_search_btw_AB(c, '.{0,200}' + ltp(treat) + '.{0,200}', -1)==-1):
return (-1,1)
return (0,0)
def LF_c_d(c):
return (1,1) if ('{{A}} {{B}}' in get_tagged_text(c)) else (0,0)
def LF_c_induced_d(c):
return (1,1) if (
('{{A}} {{B}}' in get_tagged_text(c)) and
(('-induc' in c[0].get_span().lower()) or ('-assoc' in c[0].get_span().lower()))
) else (0,0)
def LF_improve_before_disease(c):
if(rule_regex_search_before_B(c, 'improv.*', -1)==-1):
return (-1,1)
return (0,0)
pat_terms = ['in a patient with ', 'in patients with']
def LF_in_patient_with(c):
return (-1,1) if re.search(ltp(pat_terms) + '{{B}}', get_tagged_text(c), flags=re.I) else (0,0)
uncertain = ['combin', 'possible', 'unlikely']
def LF_uncertain(c):
if (rule_regex_search_before_A(c, ltp(uncertain) + '.*', -1)==-1):
return (-1,1)
return (0,0)
def LF_induced_other(c):
if (rule_regex_search_tagged_text(c, '{{A}}.{20,1000}-induced {{B}}', -1)==-1):
return (-1,1)
return (0,0)
def LF_far_c_d(c):
if (rule_regex_search_btw_AB(c, '.{100,5000}', -1)==-1):
return (-1,1)
return (0,0)
def LF_far_d_c(c):
if (rule_regex_search_btw_BA(c, '.{100,5000}', -1)==-1):
return (-1,1)
return (0,0)
def LF_risk_d(c):
if (rule_regex_search_before_B(c, 'risk of ', 1)==1):
return (1,1)
return (0,0)
def LF_develop_d_following_c(c):
return (1,1) if re.search(r'develop.{0,25}{{B}}.{0,25}following.{0,25}{{A}}', get_tagged_text(c), flags=re.I) else (0,0)
procedure, following = ['inject', 'administrat'], ['following']
def LF_d_following_c(c):
return (1,1) if re.search('{{B}}.{0,50}' + ltp(following) + '.{0,20}{{A}}.{0,50}' + ltp(procedure), get_tagged_text(c), flags=re.I) else (0,0)
def LF_measure(c):
return (-1,1) if re.search('measur.{0,75}{{A}}', get_tagged_text(c), flags=re.I) else (0,0)
def LF_level(c):
return (-1,1) if re.search('{{A}}.{0,25} level', get_tagged_text(c), flags=re.I) else (0,0)
def LF_neg_d(c):
return (-1,1) if re.search('(none|not|no) .{0,25}{{B}}', get_tagged_text(c), flags=re.I) else (0,0)
WEAK_PHRASES = ['none', 'although', 'was carried out', 'was conducted',
'seems', 'suggests', 'risk', 'implicated',
'the aim', 'to (investigate|assess|study)']
WEAK_RGX = r'|'.join(WEAK_PHRASES)
def LF_weak_assertions(c):
return (-1,1) if re.search(WEAK_RGX, get_tagged_text(c), flags=re.I) else (0,0)
def LF_ctd_marker_c_d(c):
l,s = LF_c_d(c)
cl = cand_in_ctd_marker(c)
return (l*cl,s*cl)
def LF_ctd_marker_induce(c):
l1,s1 = LF_c_induced_d(c)
l2,s2 = LF_d_induced_by_c_tight(c)
cl = cand_in_ctd_marker(c)
return ((l1 or l2)*cl,max(s1,s2)*cl)
def LF_ctd_therapy_treat(c):
l,s = LF_c_treat_d_wide(c)
cl = cand_in_ctd_therapy(c)
return (l*cl,s*cl)
def LF_ctd_unspecified_treat(c):
l,s = LF_c_treat_d_wide(c)
cl = cand_in_ctd_unspecified(c)
return (l*cl,s*cl)
def LF_ctd_unspecified_induce(c):
l1,s1 = LF_c_induced_d(c)
l2,s2 = LF_d_induced_by_c_tight(c)
cl = cand_in_ctd_unspecified(c)
return ((l1 or l2)*cl,max(s1,s2)*cl)
# def LF_ctd_marker_c_d(c):
# return LF_c_d(c) * cand_in_ctd_marker(c)
# def LF_ctd_marker_induce(c):
# return (LF_c_induced_d(c) or LF_d_induced_by_c_tight(c)) * cand_in_ctd_marker(c)
# def LF_ctd_therapy_treat(c):
# return LF_c_treat_d_wide(c) * cand_in_ctd_therapy(c)
# def LF_ctd_unspecified_treat(c):
# return LF_c_treat_d_wide(c) * cand_in_ctd_unspecified(c)
# def LF_ctd_unspecified_induce(c):
# return (LF_c_induced_d(c) or LF_d_induced_by_c_tight(c)) * cand_in_ctd_unspecified(c)
def LF_closer_chem(c):
# Get distance between chemical and disease
chem_start, chem_end = c.chemical.get_word_start(), c.chemical.get_word_end()
dis_start, dis_end = c.disease.get_word_start(), c.disease.get_word_end()
if dis_start < chem_start:
dist = chem_start - dis_end
else:
dist = dis_start - chem_end
# Try to find chemical closer than @dist/2 in either direction
sent = c.get_parent()
closest_other_chem = float('inf')
for i in range(dis_end, min(len(sent.words), dis_end + dist // 2)):
et, cid = sent.entity_types[i], sent.entity_cids[i]
if et == 'Chemical' and cid != sent.entity_cids[chem_start]:
return (-1,1)
for i in range(max(0, dis_start - dist // 2), dis_start):
et, cid = sent.entity_types[i], sent.entity_cids[i]
if et == 'Chemical' and cid != sent.entity_cids[chem_start]:
return (-1,1)
return (0,0)
def LF_closer_dis(c):
# Get distance between chemical and disease
chem_start, chem_end = c.chemical.get_word_start(), c.chemical.get_word_end()
dis_start, dis_end = c.disease.get_word_start(), c.disease.get_word_end()
if dis_start < chem_start:
dist = chem_start - dis_end
else:
dist = dis_start - chem_end
# Try to find another disease closer than @dist/8 in either direction
sent = c.get_parent()
for i in range(chem_end, min(len(sent.words), chem_end + dist // 8)):
et, cid = sent.entity_types[i], sent.entity_cids[i]
if et == 'Disease' and cid != sent.entity_cids[dis_start]:
return (-1,1)
for i in range(max(0, chem_start - dist // 8), chem_start):
et, cid = sent.entity_types[i], sent.entity_cids[i]
if et == 'Disease' and cid != sent.entity_cids[dis_start]:
return (-1,1)
return (0,0)
LFs = [LF_c_cause_d,LF_c_d,LF_c_induced_d,LF_c_treat_d,LF_c_treat_d_wide,LF_closer_chem,
LF_closer_dis,LF_ctd_marker_c_d,LF_ctd_marker_induce,LF_ctd_therapy_treat,
LF_ctd_unspecified_treat,LF_ctd_unspecified_induce,LF_d_following_c,
LF_d_induced_by_c,LF_d_induced_by_c_tight,LF_d_treat_c,LF_develop_d_following_c,
LF_far_c_d,LF_far_d_c,LF_improve_before_disease,LF_in_ctd_therapy,
LF_in_ctd_marker,LF_in_patient_with,LF_induce,LF_induce_name,LF_induced_other,
LF_level,LF_measure,LF_neg_d,LF_risk_d,LF_treat_d,LF_uncertain,LF_weak_assertions
]
LF_l = [
1,1,1,-1,-1,-1,
-1,1,1,-1,
-1,1,1,
1,1,-1,1,
-1,-1,-1,-1,
1,-1,1,1,-1,
-1,-1,-1,1,-1,-1,-1
]
print(len(LFs),len(LF_l))
import numpy as np
def distanceCD(c):
dist = 0
chem_start, chem_end = c.chemical.get_word_start(), c.chemical.get_word_end()
dis_start, dis_end = c.disease.get_word_start(), c.disease.get_word_end()
if dis_start < chem_start:
dist = chem_start - dis_end
else:
dist = dis_start - chem_end
return dist/5000
def distanceCD_(c,l):
dist = []
for w in l:
pattern = r'({{A}})(.*)('+w+r')(.*)({{B}})'
matchObj = re.search(pattern, get_tagged_text(c), flags=re.I)
if(matchObj):
match_groups = matchObj.group(2,4)
dist.append(sum([len(mg) for mg in match_groups]))
if(len(dist)>0):
return min(dist)
return 0
def distanceDC_(c,l):
dist = []
for w in l:
pattern = r'({{B}})(.*)('+w+r')(.*)({{A}})'
matchObj = re.search(pattern, get_tagged_text(c), flags=re.I)
if(matchObj):
match_groups = matchObj.group(2,4)
dist.append(sum([len(mg) for mg in match_groups]))
if(len(dist)>0):
return min(dist)
return 0
def levenshtein(source, target):
if len(source) < len(target):
return levenshtein(target, source)
# So now we have len(source) >= len(target).
if len(target) == 0:
return len(source)
# We call tuple() to force strings to be used as sequences
# ('c', 'a', 't', 's') - numpy uses them as values by default.
source = np.array(tuple(source))
target = np.array(tuple(target))
# We use a dynamic programming algorithm, but with the
# added optimization that we only need the last two rows
# of the matrix.
previous_row = np.arange(target.size + 1)
for s in source:
# Insertion (target grows longer than source):
current_row = previous_row + 1
# Substitution or matching:
# Target and source items are aligned, and either
# are different (cost of 1), or are the same (cost of 0).
current_row[1:] = np.minimum(
current_row[1:],
np.add(previous_row[:-1], target != s))
# Deletion (target grows shorter than source):
current_row[1:] = np.minimum(
current_row[1:],
current_row[0:-1] + 1)
previous_row = current_row
return previous_row[-1]
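# Quick sanity check of the edit-distance logic: an equivalent pure-Python
# two-row recurrence (self-contained, hypothetical helper name) that should
# agree with the NumPy levenshtein implementation above on textbook cases.
def edit_distance(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

print(edit_distance("kitten", "sitting"))  # 3
print(edit_distance("flaw", "lawn"))       # 2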
# ##### Smooth LFs #########
# def cand_in_ctd_unspecified(c):
# return 1 if c.get_cids() in ctd_unspecified else 0
# def cand_in_ctd_therapy(c):
# return 1 if c.get_cids() in ctd_therapy else 0
# def cand_in_ctd_marker(c):
# return 1 if c.get_cids() in ctd_marker else 0
# def LF_in_ctd_unspecified(c):
# if(cand_in_ctd_unspecified(c)==1):
# return (-1,1)
# else:
# return (0,0)
# def LF_in_ctd_therapy(c):
# if(cand_in_ctd_therapy(c)==1):
# return (-1,1)
# else:
# return (0,0)
# def LF_in_ctd_marker(c):
# if(cand_in_ctd_marker(c)==1):
# return (1,1)
# else:
# return (0,0)
# import re
# from snorkel.lf_helpers import (
# get_tagged_text,
# rule_regex_search_tagged_text,
# rule_regex_search_btw_AB,
# rule_regex_search_btw_BA,
# rule_regex_search_before_A,
# rule_regex_search_before_B,
# )
# import re
# from snorkel.lf_helpers import (
# get_left_tokens, get_right_tokens, get_between_tokens,
# get_text_between, get_tagged_text,
# )
# # List to parenthetical
# def ltp(x):
# return '(' + '|'.join(x) + ')'
# # def LF_induce(c):
# # return (1,1) if re.search(r'{{A}}.{0,20}induc.{0,20}{{B}}', get_tagged_text(c), flags=re.I) else (0,0)
# def LF_induce(c):
# return (1,distanceCD_(c,['induc'])) if re.search(r'{{A}}.*induc.*{{B}}', get_tagged_text(c), flags=re.I) else (0,0)
# causal_past = ['induced', 'caused', 'due']
# # def LF_d_induced_by_c(c):
# # if (rule_regex_search_btw_BA(c, '.{0,50}' + ltp(causal_past) + '.{0,9}(by|to).{0,50}', 1)==1):
# # return (1,1)
# # return (0,0)
# def LF_d_induced_by_c(c):
# sc = 0
# word_vectors = get_word_vectors(get_between_tokens(c))
# for w in causal_past:
# sc=max(sc,get_similarity(word_vectors,w))
# return (1,sc)
# # def LF_d_induced_by_c_tight(c):
# # if (rule_regex_search_btw_BA(c, '.{0,50}' + ltp(causal_past) + ' (by|to) ', 1)==1):
# # return (1,1)
# # return (0,0)
# def LF_d_induced_by_c_tight(c):
# if (rule_regex_search_btw_BA(c, '.*' + ltp(causal_past) + ' (by|to) ', 1)==1):
# return (1,(1-distanceDC_(c,causal_past)))
# return (0,0)
# def LF_induce_name(c):
# return (1,1) if 'induc' in c.chemical.get_span().lower() else (0,0)
# causal = ['cause[sd]?', 'induce[sd]?', 'associated with']
# # def LF_c_cause_d(c):
# # return (1,1) if (
# # re.search(r'{{A}}.{0,50} ' + ltp(causal) + '.{0,50}{{B}}', get_tagged_text(c), re.I)
# # and not re.search('{{A}}.{0,50}(not|no).{0,20}' + ltp(causal) + '.{0,50}{{B}}', get_tagged_text(c), re.I)
# # ) else (0,0)
# def LF_c_cause_d(c):
# return (1,(1-distanceCD_(c,causal))) if (
# re.search(r'{{A}}.* ' + ltp(causal) + '.*{{B}}', get_tagged_text(c), re.I)
# and not re.search('{{A}}.{0,50}(not|no).{0,20}' + ltp(causal) + '.{0,50}{{B}}', get_tagged_text(c), re.I)
# ) else (0,0)
# treat = ['treat', 'effective', 'prevent', 'resistant', 'slow', 'promise', 'therap']
# # def LF_d_treat_c(c):
# # if (rule_regex_search_btw_BA(c, '.{0,50}' + ltp(treat) + '.{0,50}', -1)==-1):
# # return (-1,1)
# # return (0,0)
# def LF_d_treat_c(c):
# if (rule_regex_search_btw_BA(c, '.{0,50}' + ltp(treat) + '.{0,50}', -1)==-1):
# return (-1,1-distanceDC_(c,treat))
# return (0,0)
# # def LF_c_treat_d(c):
# # if (rule_regex_search_btw_AB(c, '.{0,50}' + ltp(treat) + '.{0,50}', -1)==-1):
# # return (-1,1)
# # return (0,0)
# def LF_c_treat_d(c):
# if (rule_regex_search_btw_AB(c, '.{0,50}' + ltp(treat) + '.{0,50}', -1)==-1):
# return (-1,1-distanceCD_(c,treat))
# return (0,0)
# # def LF_treat_d(c):
# # if (rule_regex_search_before_B(c, ltp(treat) + '.{0,50}', -1)==-1):
# # return (-1,1)
# # return (0,0)
# def LF_treat_d(c):
# sc = 0
# word_vectors = get_word_vectors(get_left_tokens(c[1],7))
# for w in treat:
# sc=max(sc,get_similarity(word_vectors,w))
# if(re.search('(not|no|none) .* {{B}}', get_tagged_text(c), re.I)):
# return (0,0)
# else:
# return (-1,sc)
# # def LF_c_treat_d_wide(c):
# # if (rule_regex_search_btw_AB(c, '.{0,200}' + ltp(treat) + '.{0,200}', -1)==-1):
# # return (-1,1)
# # return (0,0)
# def LF_c_treat_d_wide(c):
# if (rule_regex_search_btw_AB(c, '.{0,200}' + ltp(treat) + '.{0,200}', -1)==-1):
# return (-1,1-distanceCD_(c,treat))
# return (0,0)
# def LF_c_d(c):
# return (1,1) if ('{{A}} {{B}}' in get_tagged_text(c)) else (0,0)
# def LF_c_induced_d(c):
# return (1,1) if (
# ('{{A}} {{B}}' in get_tagged_text(c)) and
# (('-induc' in c[0].get_span().lower()) or ('-assoc' in c[0].get_span().lower()))
# ) else (0,0)
# # def LF_improve_before_disease(c):
# # if(rule_regex_search_before_B(c, 'improv.*', -1)==-1):
# # return (-1,1)
# # return (0,0)
# def distanceImproveBeforeDisease(c):
# m=re.search(r'(improv)(.*)({{B}})', get_tagged_text(c), flags=re.I)
# if(m):
# return len(m.group(2))/5000
# return 0
# def LF_improve_before_disease(c):
# if(rule_regex_search_before_B(c, 'improv.*', -1) == -1):
# return (-1,1-distanceImproveBeforeDisease(c))
# else:
# return (0,0)
# pat_terms = ['in a patient with ', 'in patients with']
# def LF_in_patient_with(c):
# return (-1,1) if re.search(ltp(pat_terms) + '{{B}}', get_tagged_text(c), flags=re.I) else (0,0)
# uncertain = ['combin', 'possible', 'unlikely']
# # def LF_uncertain(c):
# # if (rule_regex_search_before_A(c, ltp(uncertain) + '.*', -1)==-1):
# # return (-1,1)
# # return (0,0)
# def LF_uncertain(c):
# sc = 0
# word_vectors = get_word_vectors(get_left_tokens(c[1],7))
# for w in uncertain:
# sc=max(sc,get_similarity(word_vectors,w))
# if(re.search('(not|no|none) .* {{B}}', get_tagged_text(c), re.I)):
# return (0,0)
# else:
# return (-1,sc)
# # def LF_induced_other(c):
# # if (rule_regex_search_tagged_text(c, '{{A}}.{20,1000}-induced {{B}}', -1)==-1):
# # return (-1,1)
# # return (0,0)
# def LF_induced_other(c):
# if (rule_regex_search_tagged_text(c, '{{A}}.{20,1000}-induced {{B}}', -1)==-1):
# return (-1,distanceCD(c))
# return (0,0)
# # def LF_far_c_d(c):
# # if (rule_regex_search_btw_AB(c, '.{100,5000}', -1)==-1):
# # return (-1,1)
# # return (0,0)
# def LF_far_c_d(c):
# if (rule_regex_search_btw_AB(c, '.{100,5000}', -1)==-1):
# return (-1,distanceCD(c))
# return (0,0)
# # def LF_far_d_c(c):
# # if (rule_regex_search_btw_BA(c, '.{100,5000}', -1)==-1):
# # return (-1,1)
# # return (0,0)
# def LF_far_d_c(c):
# if (rule_regex_search_btw_BA(c, '.{100,5000}', -1)==-1):
# return (-1,distanceCD(c))
# return (0,0)
# #without deps
# gen_model.weights.lf_accuracy
# # def LF_risk_d(c):
# # if (rule_regex_search_before_B(c, 'risk of ', 1)==1):
# # return (1,1)
# # return (0,0)
# def LF_risk_d(c):
# sc = 0
# word_vectors = get_word_vectors(get_left_tokens(c[1],7))
# sc=max(sc,get_similarity(word_vectors,'risk'))
# return (1,sc)
# # def LF_develop_d_following_c(c):
# # return (1,1) if re.search(r'develop.{0,25}{{B}}.{0,25}following.{0,25}{{A}}', get_tagged_text(c), flags=re.I) else (0,0)
# def distanceDevFol(c):
# dist = 0
# matchObj = re.search(r'(develop)(.*)({{B}})(.*)(following)(.*)({{A}})', get_tagged_text(c), flags=re.I)
# if(matchObj):
# match_groups = matchObj.group(2,4,6)
# dist = sum([len(mg) for mg in match_groups])
# return dist/5000
# def LF_develop_d_following_c(c):
# return (1,1-distanceDevFol(c)) if re.search(r'develop.*{{B}}.*following.*{{A}}', get_tagged_text(c), flags=re.I) else (0,0)
# procedure, following = ['inject', 'administrat'], ['following']
# # def LF_d_following_c(c):
# # return (1,distanceDFollC(c)) if re.search('{{B}}.{0,50}' + ltp(following) + '.{0,20}{{A}}.{0,50}' + ltp(procedure), get_tagged_text(c), flags=re.I) else (0,0)
# def LF_d_following_c(c):
# return (1,1-distanceDC_(c,following)) if re.search('{{B}}.*' + ltp(following) + '.*{{A}}.*' + ltp(procedure), get_tagged_text(c), flags=re.I) else (0,0)
# # def LF_measure(c):
# # return (-1,1) if re.search('measur.{0,75}{{A}}', get_tagged_text(c), flags=re.I) else (0,0)
# def distanceMeasureA(c):
# m = re.search('(measur)(.*)({{A}})', get_tagged_text(c), flags=re.I)
# if(m):
# return (5000-len(m.group(2)))/5000
# return 0
# def LF_measure(c):
# return (-1,distanceMeasureA(c)) if re.search('measur.{0,75}{{A}}', get_tagged_text(c), flags=re.I) else (0,0)
# # def LF_level(c):
# # return (-1,1) if re.search('{{A}}.{0,25} level', get_tagged_text(c), flags=re.I) else (0,0)
# def distanceLevel(c):
# m = re.search('({{A}})(.*)(level)', get_tagged_text(c), flags=re.I)
# if(m):
# return (5000-len(m.group(2)))/5000
# return 0
# def LF_level(c):
# return (-1,distanceLevel(c)) if re.search('{{A}}.{0,25} level', get_tagged_text(c), flags=re.I) else (0,0)
# # def LF_neg_d(c):
# # return (-1,1) if re.search('(none|not|no) .{0,25}{{B}}', get_tagged_text(c), flags=re.I) else (0,0)
# def distanceNeg(c):
# m = re.search('(none|not|no)(.*)({{B}})', get_tagged_text(c), flags=re.I)
# if(m):
# return (5000-len(m.group(2)))/5000
# return 0
# def LF_neg_d(c):
# return (-1,distanceNeg(c)) if re.search('(none|not|no) .{0,25}{{B}}', get_tagged_text(c), flags=re.I) else (0,0)
# WEAK_PHRASES = ['none', 'although', 'was carried out', 'was conducted',
# 'seems', 'suggests', 'risk', 'implicated',
# 'the aim', 'to (investigate|assess|study)']
# WEAK_RGX = r'|'.join(WEAK_PHRASES)
# def LF_weak_assertions(c):
# return (-1,1) if re.search(WEAK_RGX, get_tagged_text(c), flags=re.I) else (0,0)
# def LF_ctd_marker_c_d(c):
# l,s = LF_c_d(c)
# cl = cand_in_ctd_marker(c)
# return (l*cl,s*cl)
# def LF_ctd_marker_induce(c):
# l1,s1 = LF_c_induced_d(c)
# l2,s2 = LF_d_induced_by_c_tight(c)
# cl = cand_in_ctd_marker(c)
# return ((l1 or l2)*cl,max(s1,s2)*cl)
# def LF_ctd_therapy_treat(c):
# l,s = LF_c_treat_d_wide(c)
# cl = cand_in_ctd_therapy(c)
# return (l*cl,s*cl)
# def LF_ctd_unspecified_treat(c):
# l,s = LF_c_treat_d_wide(c)
# cl = cand_in_ctd_unspecified(c)
# return (l*cl,s*cl)
# def LF_ctd_unspecified_induce(c):
# l1,s1 = LF_c_induced_d(c)
# l2,s2 = LF_d_induced_by_c_tight(c)
# cl = cand_in_ctd_unspecified(c)
# return ((l1 or l2)*cl,max(s1,s2)*cl)
# # def LF_ctd_marker_c_d(c):
# # return LF_c_d(c) * cand_in_ctd_marker(c)
# # def LF_ctd_marker_induce(c):
# # return (LF_c_induced_d(c) or LF_d_induced_by_c_tight(c)) * cand_in_ctd_marker(c)
# # def LF_ctd_therapy_treat(c):
# # return LF_c_treat_d_wide(c) * cand_in_ctd_therapy(c)
# # def LF_ctd_unspecified_treat(c):
# # return LF_c_treat_d_wide(c) * cand_in_ctd_unspecified(c)
# # def LF_ctd_unspecified_induce(c):
# # return (LF_c_induced_d(c) or LF_d_induced_by_c_tight(c)) * cand_in_ctd_unspecified(c)
# def LF_closer_chem(c):
# # Get distance between chemical and disease
# chem_start, chem_end = c.chemical.get_word_start(), c.chemical.get_word_end()
# dis_start, dis_end = c.disease.get_word_start(), c.disease.get_word_end()
# if dis_start < chem_start:
# dist = chem_start - dis_end
# else:
# dist = dis_start - chem_end
# # Try to find chemical closer than @dist/2 in either direction
# sent = c.get_parent()
# closest_other_chem = float('inf')
# for i in range(dis_end, min(len(sent.words), dis_end + dist // 2)):
# et, cid = sent.entity_types[i], sent.entity_cids[i]
# if et == 'Chemical' and cid != sent.entity_cids[chem_start]:
# return (-1,1)
# for i in range(max(0, dis_start - dist // 2), dis_start):
# et, cid = sent.entity_types[i], sent.entity_cids[i]
# if et == 'Chemical' and cid != sent.entity_cids[chem_start]:
# return (-1,1)
# return (0,0)
# def LF_closer_dis(c):
# # Get distance between chemical and disease
# chem_start, chem_end = c.chemical.get_word_start(), c.chemical.get_word_end()
# dis_start, dis_end = c.disease.get_word_start(), c.disease.get_word_end()
# if dis_start < chem_start:
# dist = chem_start - dis_end
# else:
# dist = dis_start - chem_end
# # Try to find a closer disease within @dist/8 in either direction
# sent = c.get_parent()
# for i in range(chem_end, min(len(sent.words), chem_end + dist // 8)):
# et, cid = sent.entity_types[i], sent.entity_cids[i]
# if et == 'Disease' and cid != sent.entity_cids[dis_start]:
# return (-1,1)
# for i in range(max(0, chem_start - dist // 8), chem_start):
# et, cid = sent.entity_types[i], sent.entity_cids[i]
# if et == 'Disease' and cid != sent.entity_cids[dis_start]:
# return (-1,1)
# return (0,0)
# LFs = [LF_c_cause_d,LF_c_d,LF_c_induced_d,LF_c_treat_d,LF_c_treat_d_wide,LF_closer_chem,
# LF_closer_dis,LF_ctd_marker_c_d,LF_ctd_marker_induce,LF_ctd_therapy_treat,
# LF_ctd_unspecified_treat,LF_ctd_unspecified_induce,LF_d_following_c,
# LF_d_induced_by_c,LF_d_induced_by_c_tight,LF_d_treat_c,LF_develop_d_following_c,
# LF_far_c_d,LF_far_d_c,LF_improve_before_disease,LF_in_ctd_therapy,
# LF_in_ctd_marker,LF_in_patient_with,LF_induce,LF_induce_name,LF_induced_other,
# LF_level,LF_measure,LF_neg_d,LF_risk_d,LF_treat_d,LF_uncertain,LF_weak_assertions
# ]
# LF_l = [
# 1,1,1,-1,-1,-1,
# -1,1,1,-1,
# -1,1,1,
# 1,1,-1,1,
# -1,-1,-1,-1,
# 1,-1,1,1,-1,
# -1,-1,-1,1,-1,-1,-1
# ]
# print(len(LFs),len(LF_l))
''' output:
[[L_x1, S_x1],
 [L_x2, S_x2],
 ......
]
where L_xi (LF labels) and S_xi (LF scores) are lists over the labeling functions
'''
def get_L_S_Tensor(cands):
L_S = []
for i,ci in enumerate(cands):
L_S_ci=[]
L=[]
S=[]
P_ik = []
for LF in LFs:
# print(LF.__name__)
l,s = LF(ci)
L.append(l)
S.append((s+1)/2) #to scale scores in [0,1]
L_S_ci.append(L)
L_S_ci.append(S)
L_S.append(L_S_ci)
if(i%500==0 and i!=0):
print(str(i)+' data points labelled in',(time.time() - start_time)/60,'mins')
return L_S
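As a quick illustration of the nested structure that get_L_S_Tensor builds (hypothetical values; two candidates, three labeling functions), the result converts cleanly to a (n_candidates, 2, n_LFs) array:

```python
import numpy as np

# Hypothetical toy output: each entry is [L, S], where L holds the LF labels
# in {-1, 0, 1} and S holds the scores rescaled via (s + 1) / 2.
L_S = [
    [[-1, 0, 1], [0.9, 0.5, 0.7]],  # candidate 1
    [[1, 1, 0], [0.8, 0.6, 0.5]],   # candidate 2
]
arr = np.array(L_S)
print(arr.shape)  # (2, 2, 3): candidates x (labels, scores) x LFs
```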
# import matplotlib.pyplot as plt
import time
import datetime
import numpy as np
start_time = time.time()
lt = time.localtime()
print("started at: {}-{}-{}, {}:{}:{}".format(lt.tm_mday,lt.tm_mon,lt.tm_year,lt.tm_hour,lt.tm_min,lt.tm_sec))
dev_L_S = get_L_S_Tensor(dev_cands)
np.save("dev_L_S_discrete",np.array(dev_L_S))
# np.save("dev_L_S_smooth",np.array(dev_L_S))
train_L_S = get_L_S_Tensor(train_cands)
np.save("train_L_S_discrete",np.array(train_L_S))
# np.save("train_L_S_smooth",np.array(train_L_S))
print("time taken: ",str(datetime.timedelta(seconds=(time.time() - start_time))))
# test_L_S = get_L_S_Tensor(test_cands)
# pkl.dump(test_L_S,open("test_L_S.p","wb"))
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from collections import defaultdict
from sklearn.metrics import accuracy_score
from sklearn.metrics import precision_recall_fscore_support
from sklearn.metrics import precision_recall_curve
from sklearn.metrics import average_precision_score
from sklearn.metrics import roc_auc_score
def draw2DArray(a):
fig = plt.figure(figsize=(6, 3.2))
ax = fig.add_subplot(111)
ax.set_title('colorMap')
plt.imshow(np.array(a))
ax.set_aspect('equal')
cax = fig.add_axes([0.12, 0.1, 0.78, 0.8])
cax.get_xaxis().set_visible(False)
cax.get_yaxis().set_visible(False)
cax.patch.set_alpha(0)
cax.set_frame_on(False)
plt.colorbar(orientation='vertical')
plt.show()
def report2dict(cr):
# Parse rows
tmp = list()
for row in cr.split("\n"):
parsed_row = [x for x in row.split(" ") if len(x) > 0]
if len(parsed_row) > 0:
tmp.append(parsed_row)
# Store in dictionary
measures = tmp[0]
D_class_data = defaultdict(dict)
for row in tmp[1:]:
class_label = row[0]
for j, m in enumerate(measures):
D_class_data[class_label][m.strip()] = float(row[j + 1].strip())
return pd.DataFrame(D_class_data).T
def predictAndPrint(pl):
print("acc",accuracy_score(gold_labels_dev,pl))
# print(precision_recall_fscore_support(true_labels,pl,average='macro'))
print(confusion_matrix(gold_labels_dev,pl))
draw2DArray(confusion_matrix(gold_labels_dev,pl))
return report2dict(classification_report(gold_labels_dev, pl))# target_names=class_names))
def drawLossVsF1(y_loss,x_f1s,text,title):
fig, ax = plt.subplots()
ax.scatter(x_f1s, y_loss)
plt.xlabel('f1-score')
plt.ylabel('loss')
plt.title(title)
for i, txt in enumerate(text):
ax.annotate(txt, (x_f1s[i],y_loss[i]))
plt.savefig(title+".png")
def drawPRcurve(y_test,y_score,it_no):
fig = plt.figure()
splt = fig.add_subplot(111)
precision, recall, thresholds = precision_recall_curve(y_test, y_score,pos_label=1)
splt.step(recall, precision, color='b', alpha=0.2,
where='post')
splt.fill_between(recall, precision, step='post', alpha=0.2,
color='b')
# print("thresholds",thresholds,len(thresholds))
average_precision = average_precision_score(y_test, y_score)
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.ylim([0.0, 1.05])
plt.xlim([0.0, 1.05])
plt.title('{0:d} Precision-Recall curve: AP={1:0.2f}'.format(it_no,
average_precision))
import numpy as np
# dev_L_S = np.load("dev_L_S_discrete.npy")
# train_L_S = np.load("train_L_S_discrete.npy")
dev_L_S = np.load("dev_L_S_smooth.npy")
train_L_S = np.load("train_L_S_smooth.npy")
print(dev_L_S.shape,train_L_S.shape)
LF_l = [
1,1,1,-1,-1,-1,
-1,1,1,-1,
-1,1,1,
1,1,-1,1,
-1,-1,-1,-1,
1,-1,1,1,-1,
-1,-1,-1,1,-1,-1,-1
]
def merge(a,b):
c = []
for i in range(len(a)):
ci = []
ci_l = a[i,0,:].tolist()+b[i,0,:].tolist()
ci_s = a[i,1,:].tolist()+b[i,1,:].tolist()
ci.append(ci_l)
ci.append(ci_s)
c.append(ci)
return c
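Since merge just concatenates the discrete and smooth labels/scores per candidate, it should (under that reading) be equivalent to a single np.concatenate along the LF axis; a cheap sanity check on toy data (the merge copy below mirrors the definition above):

```python
import numpy as np

def merge(a, b):
    # Same logic as above: concatenate labels (row 0) and scores (row 1)
    c = []
    for i in range(len(a)):
        ci = [a[i, 0, :].tolist() + b[i, 0, :].tolist(),
              a[i, 1, :].tolist() + b[i, 1, :].tolist()]
        c.append(ci)
    return c

# Toy tensors shaped (candidates, 2, n_LFs), mirroring the saved .npy files
a = np.arange(12, dtype=float).reshape(2, 2, 3)      # stand-in "discrete"
b = np.arange(12, 24, dtype=float).reshape(2, 2, 3)  # stand-in "smooth"
merged = np.array(merge(a, b))
assert np.array_equal(merged, np.concatenate([a, b], axis=2))
print(merged.shape)  # (2, 2, 6)
```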
import numpy as np
dev_L_S_s = np.load("dev_L_S_smooth.npy")
train_L_S_s = np.load("train_L_S_smooth.npy")
dev_L_S_d = np.load("dev_L_S_discrete.npy")
train_L_S_d = np.load("train_L_S_discrete.npy")
dev_L_S = np.array(merge(dev_L_S_d,dev_L_S_s))
train_L_S = np.array(merge(train_L_S_d,train_L_S_s))
LF_l = LF_l + LF_l
print(len(LF_l))
test_L_S = dev_L_S
true_labels = gold_labels_dev
print(dev_L_S.shape,train_L_S.shape)
#call this only once for a kernel startup
from __future__ import absolute_import, division, print_function
import tensorflow as tf
# BATCH_SIZE = 32
NoOfLFs= len(LF_l)
NoOfClasses = 2
print(len(LF_l))
## un-normalized training with smooth LFs
def train_unl_s(lr,ep,th):
BATCH_SIZE = 1
tf.reset_default_graph()
seed = 12
with tf.Graph().as_default():
train_dataset = tf.data.Dataset.from_tensor_slices(train_L_S).batch(BATCH_SIZE)
dev_dataset = tf.data.Dataset.from_tensor_slices(test_L_S).batch(len(test_L_S))
iterator = tf.data.Iterator.from_structure(train_dataset.output_types,
train_dataset.output_shapes)
next_element = iterator.get_next()
train_init_op = iterator.make_initializer(train_dataset)
dev_init_op = iterator.make_initializer(dev_dataset)
print("next_element",next_element)
alphas = tf.get_variable('alphas', [NoOfLFs],\
initializer=tf.truncated_normal_initializer(0.1,0.1,seed),\
dtype=tf.float64)
# thetas = tf.get_variable('thetas', [1,NoOfLFs],\
# initializer=tf.truncated_normal_initializer(1,0.1,seed),\
# dtype=tf.float64)
thetas = tf.get_variable('thetas',[1,NoOfLFs],\
initializer=th,\
dtype=tf.float64)
print("thetas",thetas)
k = tf.convert_to_tensor(LF_l, dtype=tf.float64)
print("k",k)
print(alphas.graph)
print(thetas.graph)
l,s = tf.unstack(next_element,axis=1)
print(alphas)
print(s)
print("l",l)
print(s.graph)
s_ = tf.map_fn(lambda x : tf.maximum(tf.subtract(x,alphas), 0), s,name = "s_")
print("s_",s_)
def iskequalsy(v,s):
out = tf.where(tf.equal(v,s),tf.ones_like(v),\
-tf.ones_like(v))
print("out",out)
return out
# ls_ = tf.multiply(l,s_)
# nls_ = tf.multiply(l,s_)*-1
pout = tf.map_fn(lambda c: iskequalsy(l,c)*s_ ,np.array([-1,1],dtype=np.float64),name="pout")
# print("nls",nls_)
# lst = tf.matmul(ls_,thetas)
# print("lst",lst)
t_pout = tf.map_fn(lambda x: tf.matmul(x,thetas,transpose_b=True),pout,name="t_pout")
print("pout",pout)
print("t_pout",t_pout)
t = tf.squeeze(thetas)
print("t",t)
def ints(y):
ky = iskequalsy(k,y)
print("ky",ky)
out1 = alphas+((tf.exp((t*ky*(1-alphas)))-1)/(t*ky))
print("intsy",out1)
return out1
# zy = tf.map_fn(lambda y: tf.reduce_prod(1+tf.exp(t*iskequalsy(k,y)),axis=0),np.arange(NoOfClasses,dtype=np.float64))
zy = tf.map_fn(lambda y: tf.reduce_prod(1+ints(y),axis=0),\
np.array([-1,1],dtype=np.float64),name="zy")
print("zy",zy)
# zy = tf.map_fn(lambda y: tf.reduce_prod(1+ints(y),axis=0),\
# np.array(NoOfClasses,dtype=np.float64))
logz = tf.log(tf.reduce_sum(zy,axis=0),name="logz")
print("logz",logz)
tf.summary.scalar('logz', logz)
lsp = tf.reduce_logsumexp(t_pout,axis=0)
print("lsp",lsp)
tf.summary.scalar('lsp', tf.reduce_sum(lsp))
# normloss = tf.negative(tf.reduce_sum(tf.reduce_logsumexp(t_pout,axis=0)) - logz) # add z
normloss = tf.negative(tf.reduce_sum(lsp))
tf.summary.scalar('un-normloss', normloss)
# tf.summary.histogram('thetas', t)
# tf.summary.histogram('alphas', alphas)
print("normloss",normloss)
marginals = tf.nn.softmax(t_pout,axis=0)
print("marginals",marginals)
predict = tf.argmax(marginals,axis=0)
print("predict",predict)
# pre = tf.metrics.precision(labels,predict)
# rec = tf.metrics.recall(labels,predict)
# print("loss",loss)
# print("nls_",nls_)
# global_step = tf.Variable(0, trainable=False,dtype=tf.float64)
# starter_learning_rate = 1.0
# learning_rate = tf.train.exponential_decay(starter_learning_rate, global_step,
# 10, 0.96, staircase=True)
# train_step = tf.train.AdamOptimizer(learning_rate).minimize(normloss, global_step=global_step)
# train_step = tf.train.AdamOptimizer(0.001).minimize(normloss)
# reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
# reg_constant = 5.0 # Choose an appropriate one.
# totalloss = normloss + reg_constant * sum(reg_losses)
train_step = tf.train.AdamOptimizer(lr).minimize(normloss)
# train_step = tf.train.AdagradOptimizer(0.01).minimize(normloss)
# train_step = tf.train.MomentumOptimizer(0.01,0.2).minimize(normloss)
# train_step = tf.train.GradientDescentOptimizer(0.1).minimize(normloss)
summary_merged = tf.summary.merge_all()
train_writer = tf.summary.FileWriter('./summary/train',
tf.get_default_graph())
test_writer = tf.summary.FileWriter('./summary/test')
init_g = tf.global_variables_initializer()
init_l = tf.local_variables_initializer()
with tf.Session() as sess:
sess.run(init_g)
sess.run(init_l)
# Initialize an iterator over the training dataset.
for en in range(ep):
sess.run(train_init_op)
tl = 0
try:
it = 0
while True:
sm,_,ls,t,a = sess.run([summary_merged,train_step,normloss,thetas,alphas])
# print(tl)
train_writer.add_summary(sm, it)
tl = tl + ls
it = it + 1
except tf.errors.OutOfRangeError:
pass
print(en,"loss",tl)
sess.run(dev_init_op)
sm,a,t,m,pl = sess.run([summary_merged,alphas,thetas,marginals,predict])
test_writer.add_summary(sm, en)
print(a)
print(t)
unique, counts = np.unique(pl, return_counts=True)
print(dict(zip(unique, counts)))
print("acc",accuracy_score(true_labels,pl))
print(precision_recall_fscore_support(np.array(true_labels),np.array(pl),average="binary"))
print()
# Initialize an iterator over the validation dataset.
sess.run(dev_init_op)
a,t,m,pl = sess.run([alphas,thetas,marginals,predict])
print(a)
print(t)
unique, counts = np.unique(pl, return_counts=True)
print(dict(zip(unique, counts)))
print("acc",accuracy_score(true_labels,pl))
predictAndPrint(pl)
print(precision_recall_fscore_support(np.array(true_labels),np.array(pl),average="binary"))
print(precision_recall_fscore_support(np.array(true_labels),np.array(pl),average="macro"))
# cf = confusion_matrix(true_labels,pl)
# print(cf)
return pl
# 66 LFs
predicted_labels=train_unl_s(0.1/len(train_L_S),5,tf.truncated_normal_initializer(1,0.1,12))
Results = predictAndPrint(predicted_labels)
Results.to_csv("results.csv")
Results
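One detail worth isolating from train_unl_s is the score threshold s_ = max(s - alpha, 0): each LF's score is shifted by its learned alpha and clipped at zero, i.e. a per-LF ReLU. A minimal numpy sketch with hypothetical values:

```python
import numpy as np

# Per-LF learned thresholds and one candidate's LF scores (hypothetical)
alphas = np.array([0.1, 0.3, 0.5])
s = np.array([0.2, 0.2, 0.9])

# Mirrors s_ = tf.maximum(tf.subtract(s, alphas), 0): a score at or below
# its threshold contributes nothing to the potential.
s_ = np.maximum(s - alphas, 0.0)
print(s_)
```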
## normalized training with smooth LFs
def train_nl_s(lr,ep,th):
BATCH_SIZE = 1
tf.reset_default_graph()
seed = 12
with tf.Graph().as_default():
train_dataset = tf.data.Dataset.from_tensor_slices(train_L_S).batch(BATCH_SIZE)
dev_dataset = tf.data.Dataset.from_tensor_slices(test_L_S).batch(len(test_L_S))
iterator = tf.data.Iterator.from_structure(train_dataset.output_types,
train_dataset.output_shapes)
next_element = iterator.get_next()
train_init_op = iterator.make_initializer(train_dataset)
dev_init_op = iterator.make_initializer(dev_dataset)
print("next_element",next_element)
alphas = tf.get_variable('alphas', [NoOfLFs],\
initializer=tf.truncated_normal_initializer(0.1,0.1,seed),\
dtype=tf.float64)
# thetas = tf.get_variable('thetas', [1,NoOfLFs],\
# initializer=tf.truncated_normal_initializer(1,0.1,seed),\
# dtype=tf.float64)
thetas = tf.get_variable('thetas',[1,NoOfLFs],\
initializer=th,\
dtype=tf.float64)
print("thetas",thetas)
k = tf.convert_to_tensor(LF_l, dtype=tf.float64)
print("k",k)
print(alphas.graph)
print(thetas.graph)
l,s = tf.unstack(next_element,axis=1)
print(alphas)
print(s)
print("l",l)
print(s.graph)
s_ = tf.map_fn(lambda x : tf.maximum(tf.subtract(x,alphas), 0), s,name = "s_")
print("s_",s_)
def iskequalsy(v,s):
out = tf.where(tf.equal(v,s),tf.ones_like(v),\
-tf.ones_like(v))
print("out",out)
return out
# ls_ = tf.multiply(l,s_)
# nls_ = tf.multiply(l,s_)*-1
pout = tf.map_fn(lambda c: iskequalsy(l,c)*s_ ,np.array([-1,1],dtype=np.float64),name="pout")
# print("nls",nls_)
# lst = tf.matmul(ls_,thetas)
# print("lst",lst)
t_pout = tf.map_fn(lambda x: tf.matmul(x,thetas,transpose_b=True),pout,name="t_pout")
print("pout",pout)
print("t_pout",t_pout)
t = tf.squeeze(thetas)
print("t",t)
def ints(y):
ky = iskequalsy(k,y)
print("ky",ky)
out1 = alphas+((tf.exp((t*ky*(1-alphas)))-1)/(t*ky))
print("intsy",out1)
return out1
# zy = tf.map_fn(lambda y: tf.reduce_prod(1+tf.exp(t*iskequalsy(k,y)),axis=0),np.arange(NoOfClasses,dtype=np.float64))
zy = tf.map_fn(lambda y: tf.reduce_prod(1+ints(y),axis=0),\
np.array([-1,1],dtype=np.float64),name="zy")
print("zy",zy)
# zy = tf.map_fn(lambda y: tf.reduce_prod(1+ints(y),axis=0),\
# np.array(NoOfClasses,dtype=np.float64))
logz = tf.log(tf.reduce_sum(zy,axis=0),name="logz")
print("logz",logz)
tf.summary.scalar('logz', logz)
lsp = tf.reduce_logsumexp(t_pout,axis=0)
print("lsp",lsp)
tf.summary.scalar('lsp', tf.reduce_sum(lsp))
# normloss = tf.negative(tf.reduce_sum(tf.reduce_logsumexp(t_pout,axis=0)) - logz) # add z
normloss = tf.negative(tf.reduce_sum(lsp - logz ))
tf.summary.scalar('un-normloss', normloss)
# tf.summary.histogram('thetas', t)
# tf.summary.histogram('alphas', alphas)
print("normloss",normloss)
marginals = tf.nn.softmax(t_pout,axis=0)
print("marginals",marginals)
predict = tf.argmax(marginals,axis=0)
print("predict",predict)
# pre = tf.metrics.precision(labels,predict)
# rec = tf.metrics.recall(labels,predict)
# print("loss",loss)
# print("nls_",nls_)
# global_step = tf.Variable(0, trainable=False,dtype=tf.float64)
# starter_learning_rate = 1.0
# learning_rate = tf.train.exponential_decay(starter_learning_rate, global_step,
# 10, 0.96, staircase=True)
# train_step = tf.train.AdamOptimizer(learning_rate).minimize(normloss, global_step=global_step)
# train_step = tf.train.AdamOptimizer(0.001).minimize(normloss)
# reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
# reg_constant = 5.0 # Choose an appropriate one.
# totalloss = normloss + reg_constant * sum(reg_losses)
train_step = tf.train.AdamOptimizer(lr).minimize(normloss)
# train_step = tf.train.AdagradOptimizer(0.01).minimize(normloss)
# train_step = tf.train.MomentumOptimizer(0.01,0.2).minimize(normloss)
# train_step = tf.train.GradientDescentOptimizer(0.1).minimize(normloss)
summary_merged = tf.summary.merge_all()
train_writer = tf.summary.FileWriter('./summary/train',
tf.get_default_graph())
test_writer = tf.summary.FileWriter('./summary/test')
init_g = tf.global_variables_initializer()
init_l = tf.local_variables_initializer()
with tf.Session() as sess:
sess.run(init_g)
sess.run(init_l)
# Initialize an iterator over the training dataset.
for en in range(ep):
sess.run(train_init_op)
tl = 0
try:
it = 0
while True:
sm,_,ls,t = sess.run([summary_merged,train_step,normloss,thetas])
# print(t)
# print(tl)
train_writer.add_summary(sm, it)
# if(ls<1e-5):
# break
tl = tl + ls
it = it + 1
except tf.errors.OutOfRangeError:
pass
print(en,"loss",tl)
sess.run(dev_init_op)
sm,a,t,m,pl = sess.run([summary_merged,alphas,thetas,marginals,predict])
test_writer.add_summary(sm, en)
# print(a)
# print(t)
unique, counts = np.unique(pl, return_counts=True)
print(dict(zip(unique, counts)))
print("acc",accuracy_score(true_labels,pl))
print(precision_recall_fscore_support(np.array(true_labels),np.array(pl),average="binary"))
print()
# Initialize an iterator over the validation dataset.
sess.run(dev_init_op)
a,t,m,pl = sess.run([alphas,thetas,marginals,predict])
print(a)
print(t)
unique, counts = np.unique(pl, return_counts=True)
print(dict(zip(unique, counts)))
print("acc",accuracy_score(true_labels,pl))
predictAndPrint(pl)
print(precision_recall_fscore_support(np.array(true_labels),np.array(pl),average="binary"))
print(precision_recall_fscore_support(np.array(true_labels),np.array(pl),average="macro"))
# cf = confusion_matrix(true_labels,pl)
# print(cf)
return pl
# 66 LFs
predicted_labels=train_nl_s(0.1/len(train_L_S),5,tf.truncated_normal_initializer(1,0.1,12))
Results = predictAndPrint(predicted_labels)
Results.to_csv("results.csv")
Results
## Smooth LFs: un-normalized training with penalty reduce_sum(max(0,-theta))
def train_unl_p_s(lr,ep,th):
BATCH_SIZE = 1
tf.reset_default_graph()
seed = 12
with tf.Graph().as_default():
train_dataset = tf.data.Dataset.from_tensor_slices(train_L_S).batch(BATCH_SIZE)
dev_dataset = tf.data.Dataset.from_tensor_slices(dev_L_S).batch(dev_L_S.shape[0])
labels = tf.convert_to_tensor(gold_labels_dev)
iterator = tf.data.Iterator.from_structure(train_dataset.output_types,
train_dataset.output_shapes)
next_element = iterator.get_next()
train_init_op = iterator.make_initializer(train_dataset)
dev_init_op = iterator.make_initializer(dev_dataset)
print("next_element",next_element)
alphas = tf.get_variable('alphas', [NoOfLFs],\
initializer=tf.truncated_normal_initializer(0,0.1,seed),\
dtype=tf.float64)
# thetas = tf.get_variable('thetas', [1,NoOfLFs],\
# initializer=tf.truncated_normal_initializer(1,0.1,seed),\
# dtype=tf.float64)
thetas = tf.get_variable('thetas',[1,NoOfLFs],\
initializer=th,\
dtype=tf.float64)
print("thetas",thetas)
k = tf.convert_to_tensor(LF_l, dtype=tf.float64)
print("k",k)
print(alphas.graph)
print(thetas.graph)
l,s = tf.unstack(next_element,axis=1)
print(alphas)
print(s)
print("l",l)
print(s.graph)
s_ = tf.map_fn(lambda x : tf.maximum(tf.subtract(x,alphas), 0), s)
ls_ = tf.multiply(l,s_)
nls_ = tf.multiply(l,s_)*-1
pout = tf.map_fn(lambda x: ls_*x,np.array([-1,1],dtype=np.float64))
print("nls",nls_)
# lst = tf.matmul(ls_,thetas)
# print("lst",lst)
t_pout = tf.map_fn(lambda x: tf.matmul(x,thetas,transpose_b=True),pout)
print("pout",pout)
print("t_pout",t_pout)
t_k = k*tf.squeeze(thetas)
print("t_k",t_k)
zy = tf.map_fn(lambda y: tf.reduce_prod(1+tf.exp(t_k*y),axis=0),np.array([-1,1],dtype=np.float64))
logz = tf.log(tf.reduce_sum(zy,axis=0))
print("zy",zy)
print("logz",logz)
lsp = tf.reduce_logsumexp(t_pout)
print("lsp",lsp)
# normloss = tf.negative(tf.reduce_sum(tf.reduce_logsumexp(t_pout,axis=0)) - logz) # add z
normloss = tf.negative(tf.reduce_sum(tf.reduce_logsumexp(t_pout,axis=0) )) +\
tf.reduce_sum(tf.maximum(tf.zeros_like(thetas),-thetas))
print("normloss",normloss)
marginals = tf.nn.softmax(t_pout,axis=0)
print("marginals",marginals)
predict = tf.argmax(marginals,axis=0)
print("predict",predict)
# pre = tf.metrics.precision(labels,predict)
# rec = tf.metrics.recall(labels,predict)
# print("loss",loss)
# print("nls_",nls_)
# global_step = tf.Variable(0, trainable=False,dtype=tf.float64)
# starter_learning_rate = 1.0
# learning_rate = tf.train.exponential_decay(starter_learning_rate, global_step,
# 10, 0.96, staircase=True)
# train_step = tf.train.AdamOptimizer(learning_rate).minimize(normloss, global_step=global_step)
# train_step = tf.train.AdamOptimizer(0.001).minimize(normloss)
# reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
# reg_constant = 5.0 # Choose an appropriate one.
# totalloss = normloss + reg_constant * sum(reg_losses)
train_step = tf.train.AdamOptimizer(lr).minimize(normloss)
# train_step = tf.train.AdagradOptimizer(0.01).minimize(normloss)
# train_step = tf.train.MomentumOptimizer(0.01,0.2).minimize(normloss)
# train_step = tf.train.GradientDescentOptimizer(0.1).minimize(normloss)
init_g = tf.global_variables_initializer()
init_l = tf.local_variables_initializer()
with tf.Session() as sess:
sess.run(init_g)
sess.run(init_l)
# Initialize an iterator over the training dataset.
for it in range(ep):
sess.run(train_init_op)
tl = 0
try:
while True:
_,ls = sess.run([train_step,normloss])
tl = tl + ls
except tf.errors.OutOfRangeError:
pass
print(it,"loss",tl)
sess.run(dev_init_op)
a,t,m,pl = sess.run([alphas,thetas,marginals,predict])
print(a)
print(t)
# MentionScorer(dev_cands, L_gold_dev).score(m[1::].flatten())
unique, counts = np.unique(pl, return_counts=True)
print(dict(zip(unique, counts)))
print(precision_recall_fscore_support(np.array(gold_labels_dev),np.array(pl),average="binary"))
print()
# Initialize an iterator over the validation dataset.
sess.run(dev_init_op)
a,t,m,pl = sess.run([alphas,thetas,marginals,predict])
print(a)
print(t)
MentionScorer(dev_cands, L_gold_dev).score(m[1::].flatten())
unique, counts = np.unique(pl, return_counts=True)
print(dict(zip(unique, counts)))
print("acc",accuracy_score(gold_labels_dev,pl))
print(precision_recall_fscore_support(np.array(gold_labels_dev),np.array(pl)))
print(precision_recall_fscore_support(np.array(gold_labels_dev),np.array(pl),average="macro"))
cf = confusion_matrix(gold_labels_dev,pl)
print(cf)
print("prec: tp/(tp+fp)",cf[1][1]/(cf[1][1]+cf[0][1]),"recall: tp/(tp+fn)",cf[1][1]/(cf[1][1]+cf[1][0]))
print(precision_recall_fscore_support(np.array(gold_labels_dev),np.array(pl),average="binary"))
train_unl_p_s(0.01/len(train_L_S),5,tf.truncated_normal_initializer(1,0.1,12))
train_unl_p_s(0.1/len(train_L_S),5,tf.truncated_normal_initializer(1,0.1,12))
## normalized training with penalty reduce_sum(max(0,-theta))
def train_nl_p(lr,ep,th):
BATCH_SIZE = 1
tf.reset_default_graph()
seed = 12
with tf.Graph().as_default():
train_dataset = tf.data.Dataset.from_tensor_slices(train_L_S).batch(BATCH_SIZE)
dev_dataset = tf.data.Dataset.from_tensor_slices(dev_L_S).batch(dev_L_S.shape[0])
labels = tf.convert_to_tensor(gold_labels_dev)
iterator = tf.data.Iterator.from_structure(train_dataset.output_types,
train_dataset.output_shapes)
next_element = iterator.get_next()
train_init_op = iterator.make_initializer(train_dataset)
dev_init_op = iterator.make_initializer(dev_dataset)
print("next_element",next_element)
alphas = tf.get_variable('alphas', [NoOfLFs],\
initializer=tf.truncated_normal_initializer(0.2,0.1,seed),\
dtype=tf.float64)
# thetas = tf.get_variable('thetas', [1,NoOfLFs],\
# initializer=tf.truncated_normal_initializer(1,0.1,seed),\
# dtype=tf.float64)
thetas = tf.get_variable('thetas',[1,NoOfLFs],\
initializer=th,\
dtype=tf.float64)
print("thetas",thetas)
k = tf.convert_to_tensor(LF_l, dtype=tf.float64)
print("k",k)
print(alphas.graph)
print(thetas.graph)
l,s = tf.unstack(next_element,axis=1)
print(alphas)
print(s)
print("l",l)
print(s.graph)
s_ = tf.map_fn(lambda x : tf.maximum(tf.subtract(x,alphas), 0), s)
ls_ = tf.multiply(l,s_)
nls_ = tf.multiply(l,s_)*-1
pout = tf.map_fn(lambda x: l*x,np.array([-1,1],dtype=np.float64))
print("nls",nls_)
# lst = tf.matmul(ls_,thetas)
# print("lst",lst)
t_pout = tf.map_fn(lambda x: tf.matmul(x,thetas,transpose_b=True),pout)
print("pout",pout)
print("t_pout",t_pout)
t_k = k*tf.squeeze(thetas)
print("t_k",t_k)
zy = tf.map_fn(lambda y: tf.reduce_prod(1+tf.exp(t_k*y),axis=0),np.array([-1,1],dtype=np.float64))
logz = tf.log(tf.reduce_sum(zy,axis=0))
print("zy",zy)
print("logz",logz)
lsp = tf.reduce_logsumexp(t_pout)
print("lsp",lsp)
# normloss = tf.negative(tf.reduce_sum(tf.reduce_logsumexp(t_pout,axis=0)) - logz) # add z
normloss = tf.negative(tf.reduce_sum(tf.reduce_logsumexp(t_pout,axis=0) - logz)) +\
tf.reduce_sum(tf.maximum(tf.zeros_like(thetas),-thetas))
print("normloss",normloss)
marginals = tf.nn.softmax(t_pout,axis=0)
print("marginals",marginals)
predict = tf.argmax(marginals,axis=0)
print("predict",predict)
# pre = tf.metrics.precision(labels,predict)
# rec = tf.metrics.recall(labels,predict)
# print("loss",loss)
# print("nls_",nls_)
# global_step = tf.Variable(0, trainable=False,dtype=tf.float64)
# starter_learning_rate = 1.0
# learning_rate = tf.train.exponential_decay(starter_learning_rate, global_step,
# 10, 0.96, staircase=True)
# train_step = tf.train.AdamOptimizer(learning_rate).minimize(normloss, global_step=global_step)
# train_step = tf.train.AdamOptimizer(0.001).minimize(normloss)
# reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
# reg_constant = 5.0 # Choose an appropriate one.
# totalloss = normloss + reg_constant * sum(reg_losses)
train_step = tf.train.AdamOptimizer(lr).minimize(normloss)
# train_step = tf.train.AdagradOptimizer(0.01).minimize(normloss)
# train_step = tf.train.MomentumOptimizer(0.01,0.2).minimize(normloss)
# train_step = tf.train.GradientDescentOptimizer(0.1).minimize(normloss)
init_g = tf.global_variables_initializer()
init_l = tf.local_variables_initializer()
with tf.Session() as sess:
sess.run(init_g)
sess.run(init_l)
# Initialize an iterator over the training dataset.
for it in range(ep):
sess.run(train_init_op)
tl = 0
try:
while True:
_,ls = sess.run([train_step,normloss])
tl = tl + ls
except tf.errors.OutOfRangeError:
pass
print(it,"loss",tl)
sess.run(dev_init_op)
a,t,m,pl = sess.run([alphas,thetas,marginals,predict])
print(a)
print(t)
# MentionScorer(dev_cands, L_gold_dev).score(m[1::].flatten())
unique, counts = np.unique(pl, return_counts=True)
print(dict(zip(unique, counts)))
print(precision_recall_fscore_support(np.array(gold_labels_dev),np.array(pl),average="binary"))
print()
# Initialize an iterator over the validation dataset.
sess.run(dev_init_op)
a,t,m,pl = sess.run([alphas,thetas,marginals,predict])
print(a)
print(t)
MentionScorer(dev_cands, L_gold_dev).score(m[1::].flatten())
unique, counts = np.unique(pl, return_counts=True)
print(dict(zip(unique, counts)))
print("acc",accuracy_score(gold_labels_dev,pl))
print(precision_recall_fscore_support(np.array(gold_labels_dev),np.array(pl)))
print(precision_recall_fscore_support(np.array(gold_labels_dev),np.array(pl),average="macro"))
cf = confusion_matrix(gold_labels_dev,pl)
print(cf)
print("prec: tp/(tp+fp)",cf[1][1]/(cf[1][1]+cf[0][1]),"recall: tp/(tp+fn)",cf[1][1]/(cf[1][1]+cf[1][0]))
print(precision_recall_fscore_support(np.array(gold_labels_dev),np.array(pl),average="binary"))
# normalized with penalty init at 1
train_nl_p(0.01,5,tf.truncated_normal_initializer(1,0.1,12))
# normalized with penalty init at 0
train_nl_p(0.01,15,tf.truncated_normal_initializer(0,0.5,12))
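The penalty term added to `normloss` above, `tf.reduce_sum(tf.maximum(tf.zeros_like(thetas), -thetas))`, is a hinge that fires only on negative weights. A quick NumPy sketch with hypothetical theta values (not the trained model's):

```python
import numpy as np

# Hypothetical theta values, for illustration only.
thetas = np.array([0.8, -0.3, 1.2, -0.1])

# sum_i max(0, -theta_i): zero when every theta is non-negative,
# and grows linearly with the magnitude of each negative theta
# (here 0.3 + 0.1).
penalty = np.sum(np.maximum(np.zeros_like(thetas), -thetas))
print(penalty)
```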
## Normalized training with penalty 2: -tf.minimum(tf.reduce_min(thetas), 0)
def train_nl_p2(lr,ep,th):
BATCH_SIZE = 1
tf.reset_default_graph()
seed = 12
with tf.Graph().as_default():
train_dataset = tf.data.Dataset.from_tensor_slices(train_L_S).batch(BATCH_SIZE)
dev_dataset = tf.data.Dataset.from_tensor_slices(dev_L_S).batch(dev_L_S.shape[0])
labels = tf.convert_to_tensor(gold_labels_dev)
iterator = tf.data.Iterator.from_structure(train_dataset.output_types,
train_dataset.output_shapes)
train_init_op = iterator.make_initializer(train_dataset)
dev_init_op = iterator.make_initializer(dev_dataset)
next_element = iterator.get_next()
print("next_element",next_element)
alphas = tf.get_variable('alphas', [NoOfLFs],\
initializer=tf.truncated_normal_initializer(0.2,0.1,seed),\
dtype=tf.float64)
# thetas = tf.get_variable('thetas', [1,NoOfLFs],\
# initializer=tf.truncated_normal_initializer(1,0.1,seed),\
# dtype=tf.float64)
thetas = tf.get_variable('thetas',[1,NoOfLFs],\
initializer=th,\
dtype=tf.float64)
print("thetas",thetas)
k = tf.convert_to_tensor(LF_l, dtype=tf.float64)
print("k",k)
print(alphas.graph)
print(thetas.graph)
l,s = tf.unstack(next_element,axis=1)
print(alphas)
print(s)
print("l",l)
print(s.graph)
s_ = tf.map_fn(lambda x : tf.maximum(tf.subtract(x,alphas), 0), s)
ls_ = tf.multiply(l,s_)
nls_ = tf.multiply(l,s_)*-1
pout = tf.map_fn(lambda x: l*x,np.array([-1,1],dtype=np.float64))
print("nls",nls_)
# lst = tf.matmul(ls_,thetas)
# print("lst",lst)
t_pout = tf.map_fn(lambda x: tf.matmul(x,thetas,transpose_b=True),pout)
print("pout",pout)
print("t_pout",t_pout)
t_k = k*tf.squeeze(thetas)
print("t_k",t_k)
zy = tf.map_fn(lambda y: tf.reduce_prod(1+tf.exp(t_k*y),axis=0),np.array([-1,1],dtype=np.float64))
logz = tf.log(tf.reduce_sum(zy,axis=0))
print("zy",zy)
print("logz",logz)
lsp = tf.reduce_logsumexp(t_pout)
print("lsp",lsp)
# normloss = tf.negative(tf.reduce_sum(tf.reduce_logsumexp(t_pout,axis=0)) - logz) # add z
normloss = tf.negative(tf.reduce_sum(tf.reduce_logsumexp(t_pout,axis=0) - logz)) \
-tf.minimum( tf.reduce_min(thetas),0.0)
print("normloss",normloss)
marginals = tf.nn.softmax(t_pout,axis=0)
print("marginals",marginals)
predict = tf.argmax(marginals,axis=0)
print("predict",predict)
# pre = tf.metrics.precision(labels,predict)
# rec = tf.metrics.recall(labels,predict)
# print("loss",loss)
# print("nls_",nls_)
# global_step = tf.Variable(0, trainable=False,dtype=tf.float64)
# starter_learning_rate = 1.0
# learning_rate = tf.train.exponential_decay(starter_learning_rate, global_step,
# 10, 0.96, staircase=True)
# train_step = tf.train.AdamOptimizer(learning_rate).minimize(normloss, global_step=global_step)
# train_step = tf.train.AdamOptimizer(0.001).minimize(normloss)
# reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
# reg_constant = 5.0 # Choose an appropriate one.
# totalloss = normloss + reg_constant * sum(reg_losses)
train_step = tf.train.AdamOptimizer(lr).minimize(normloss)
# train_step = tf.train.AdagradOptimizer(0.01).minimize(normloss)
# train_step = tf.train.MomentumOptimizer(0.01,0.2).minimize(normloss)
# train_step = tf.train.GradientDescentOptimizer(0.1).minimize(normloss)
init_g = tf.global_variables_initializer()
init_l = tf.local_variables_initializer()
with tf.Session() as sess:
sess.run(init_g)
sess.run(init_l)
# Initialize an iterator over the training dataset.
for it in range(ep):
sess.run(train_init_op)
tl = 0
try:
while True:
_,ls = sess.run([train_step,normloss])
tl = tl + ls
except tf.errors.OutOfRangeError:
pass
print(it,"loss",tl)
sess.run(dev_init_op)
a,t,m,pl = sess.run([alphas,thetas,marginals,predict])
print(a)
print(t)
# MentionScorer(dev_cands, L_gold_dev).score(m[1::].flatten())
unique, counts = np.unique(pl, return_counts=True)
print(dict(zip(unique, counts)))
print(precision_recall_fscore_support(np.array(gold_labels_dev),np.array(pl),average="binary"))
print()
# Initialize an iterator over the validation dataset.
sess.run(dev_init_op)
a,t,m,pl = sess.run([alphas,thetas,marginals,predict])
print(a)
print(t)
MentionScorer(dev_cands, L_gold_dev).score(m[1::].flatten())
unique, counts = np.unique(pl, return_counts=True)
print(dict(zip(unique, counts)))
print("acc",accuracy_score(gold_labels_dev,pl))
print(precision_recall_fscore_support(np.array(gold_labels_dev),np.array(pl)))
print(precision_recall_fscore_support(np.array(gold_labels_dev),np.array(pl),average="macro"))
cf = confusion_matrix(gold_labels_dev,pl)
print(cf)
print("prec: tp/(tp+fp)",cf[1][1]/(cf[1][1]+cf[0][1]),"recall: tp/(tp+fn)",cf[1][1]/(cf[1][1]+cf[1][0]))
print(precision_recall_fscore_support(np.array(gold_labels_dev),np.array(pl),average="binary"))
# normalized with penalty 2 init at 0
train_nl_p2(0.01,15,tf.truncated_normal_initializer(0,0.5,12))
# normalized with penalty 2 init at 1
train_nl_p2(0.01,15,tf.truncated_normal_initializer(1,0.5,12))
# normalized with penalty 2 init at 0.2
train_nl_p2(0.01,15,tf.truncated_normal_initializer(0.2,0.5,12))
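Penalty 2 above, `-tf.minimum(tf.reduce_min(thetas), 0.0)`, differs from penalty 1 in that only the single most negative weight is penalized rather than the sum over all negative components. A NumPy sketch with hypothetical values:

```python
import numpy as np

thetas = np.array([0.8, -0.3, 1.2, -0.1])  # hypothetical values

# -min(min(thetas), 0): zero when all thetas are non-negative,
# otherwise the magnitude of the single most negative theta.
penalty2 = -min(np.min(thetas), 0.0)
print(penalty2)  # 0.3
```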
## Normalized training with penalty 3: log(1 + e^(-theta - pk))
def train_nl_p3(lr,ep,th,pk=0):
BATCH_SIZE = 1
tf.reset_default_graph()
seed = 12
with tf.Graph().as_default():
train_dataset = tf.data.Dataset.from_tensor_slices(train_L_S).batch(BATCH_SIZE)
dev_dataset = tf.data.Dataset.from_tensor_slices(dev_L_S).batch(dev_L_S.shape[0])
labels = tf.convert_to_tensor(gold_labels_dev)
iterator = tf.data.Iterator.from_structure(train_dataset.output_types,
train_dataset.output_shapes)
train_init_op = iterator.make_initializer(train_dataset)
dev_init_op = iterator.make_initializer(dev_dataset)
next_element = iterator.get_next()
print("next_element",next_element)
alphas = tf.get_variable('alphas', [NoOfLFs],\
initializer=tf.truncated_normal_initializer(0.2,0.1,seed),\
dtype=tf.float64)
# thetas = tf.get_variable('thetas', [1,NoOfLFs],\
# initializer=tf.truncated_normal_initializer(1,0.1,seed),\
# dtype=tf.float64)
thetas = tf.get_variable('thetas',[1,NoOfLFs],\
initializer=th,\
dtype=tf.float64)
print("thetas",thetas)
k = tf.convert_to_tensor(LF_l, dtype=tf.float64)
print("k",k)
print(alphas.graph)
print(thetas.graph)
l,s = tf.unstack(next_element,axis=1)
print(alphas)
print(s)
print("l",l)
print(s.graph)
s_ = tf.map_fn(lambda x : tf.maximum(tf.subtract(x,alphas), 0), s)
ls_ = tf.multiply(l,s_)
nls_ = tf.multiply(l,s_)*-1
pout = tf.map_fn(lambda x: l*x,np.array([-1,1],dtype=np.float64))
print("nls",nls_)
# lst = tf.matmul(ls_,thetas)
# print("lst",lst)
t_pout = tf.map_fn(lambda x: tf.matmul(x,thetas,transpose_b=True),pout)
print("pout",pout)
print("t_pout",t_pout)
t_k = k*tf.squeeze(thetas)
print("t_k",t_k)
zy = tf.map_fn(lambda y: tf.reduce_prod(1+tf.exp(t_k*y),axis=0),np.array([-1,1],dtype=np.float64))
logz = tf.log(tf.reduce_sum(zy,axis=0))
print("zy",zy)
print("logz",logz)
lsp = tf.reduce_logsumexp(t_pout)
print("lsp",lsp)
# normloss = tf.negative(tf.reduce_sum(tf.reduce_logsumexp(t_pout,axis=0)) - logz) # add z
normloss = tf.negative(tf.reduce_sum(tf.reduce_logsumexp(t_pout,axis=0) - logz)) \
+tf.reduce_sum(tf.log(1+tf.exp(-thetas-pk)))
print("normloss",normloss)
marginals = tf.nn.softmax(t_pout,axis=0)
print("marginals",marginals)
predict = tf.argmax(marginals,axis=0)
print("predict",predict)
# pre = tf.metrics.precision(labels,predict)
# rec = tf.metrics.recall(labels,predict)
# print("loss",loss)
# print("nls_",nls_)
# global_step = tf.Variable(0, trainable=False,dtype=tf.float64)
# starter_learning_rate = 1.0
# learning_rate = tf.train.exponential_decay(starter_learning_rate, global_step,
# 10, 0.96, staircase=True)
# train_step = tf.train.AdamOptimizer(learning_rate).minimize(normloss, global_step=global_step)
# train_step = tf.train.AdamOptimizer(0.001).minimize(normloss)
# reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
# reg_constant = 5.0 # Choose an appropriate one.
# totalloss = normloss + reg_constant * sum(reg_losses)
train_step = tf.train.AdamOptimizer(lr).minimize(normloss)
# train_step = tf.train.AdagradOptimizer(0.01).minimize(normloss)
# train_step = tf.train.MomentumOptimizer(0.01,0.2).minimize(normloss)
# train_step = tf.train.GradientDescentOptimizer(0.1).minimize(normloss)
init_g = tf.global_variables_initializer()
init_l = tf.local_variables_initializer()
with tf.Session() as sess:
sess.run(init_g)
sess.run(init_l)
# Initialize an iterator over the training dataset.
for it in range(ep):
sess.run(train_init_op)
tl = 0
try:
while True:
_,ls = sess.run([train_step,normloss])
tl = tl + ls
except tf.errors.OutOfRangeError:
pass
print(it,"loss",tl)
sess.run(dev_init_op)
a,t,m,pl = sess.run([alphas,thetas,marginals,predict])
print(a)
print(t)
# MentionScorer(dev_cands, L_gold_dev).score(m[1::].flatten())
unique, counts = np.unique(pl, return_counts=True)
print(dict(zip(unique, counts)))
print(precision_recall_fscore_support(np.array(gold_labels_dev),np.array(pl),average="binary"))
print()
# Initialize an iterator over the validation dataset.
sess.run(dev_init_op)
a,t,m,pl = sess.run([alphas,thetas,marginals,predict])
print(a)
print(t)
MentionScorer(dev_cands, L_gold_dev).score(m[1::].flatten())
unique, counts = np.unique(pl, return_counts=True)
print(dict(zip(unique, counts)))
print("acc",accuracy_score(gold_labels_dev,pl))
print(precision_recall_fscore_support(np.array(gold_labels_dev),np.array(pl)))
print(precision_recall_fscore_support(np.array(gold_labels_dev),np.array(pl),average="macro"))
cf = confusion_matrix(gold_labels_dev,pl)
print(cf)
print("prec: tp/(tp+fp)",cf[1][1]/(cf[1][1]+cf[0][1]),"recall: tp/(tp+fn)",cf[1][1]/(cf[1][1]+cf[1][0]))
print(precision_recall_fscore_support(np.array(gold_labels_dev),np.array(pl),average="binary"))
# normalized with penalty 3 init at 0.0
train_nl_p3(0.01,5,tf.truncated_normal_initializer(0.0,0.5,12))
# normalized with penalty 3 init at 1
train_nl_p3(0.01,5,tf.truncated_normal_initializer(1.0,0.5,12))
# normalized with penalty 3
train_nl_p3(0.01,5,tf.truncated_normal_initializer(0.0,0.5,12),3)
# normalized with penalty 3
train_nl_p3(0.01,5,tf.truncated_normal_initializer(1.0,0.5,12),3)
# normalized with penalty 3
train_nl_p3(0.01,5,tf.truncated_normal_initializer(0.2,0.5,12),3)
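Penalty 3 above, `tf.reduce_sum(tf.log(1 + tf.exp(-thetas - pk)))`, is a softplus: a smooth version of the hinge penalty whose transition point is shifted by `pk`. A NumPy sketch with hypothetical values:

```python
import numpy as np

thetas = np.array([0.8, -0.3, 1.2, -0.1])  # hypothetical values
pk = 3.0                                   # shift, as in train_nl_p3(..., 3)

# sum_i log(1 + exp(-theta_i - pk)); np.log1p is the numerically
# safer form of log(1 + x) for small x.
penalty3 = np.sum(np.log1p(np.exp(-thetas - pk)))
print(penalty3)

# With pk = 0 the same thetas are penalized much more strongly.
print(np.sum(np.log1p(np.exp(-thetas))))
```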
## Normalized training with different params (no penalty)
def train_nl(lr,ep,th):
BATCH_SIZE = 1
tf.reset_default_graph()
seed = 12
with tf.Graph().as_default():
train_dataset = tf.data.Dataset.from_tensor_slices(train_L_S).batch(BATCH_SIZE)
dev_dataset = tf.data.Dataset.from_tensor_slices(dev_L_S).batch(dev_L_S.shape[0])
labels = tf.convert_to_tensor(gold_labels_dev)
iterator = tf.data.Iterator.from_structure(train_dataset.output_types,
train_dataset.output_shapes)
train_init_op = iterator.make_initializer(train_dataset)
dev_init_op = iterator.make_initializer(dev_dataset)
next_element = iterator.get_next()
print("next_element",next_element)
alphas = tf.get_variable('alphas', [NoOfLFs],\
initializer=tf.truncated_normal_initializer(0.2,0.1,seed),\
dtype=tf.float64)
# thetas = tf.get_variable('thetas', [1,NoOfLFs],\
# initializer=tf.truncated_normal_initializer(1,0.1,seed),\
# dtype=tf.float64)
thetas = tf.get_variable('thetas',[1,NoOfLFs],\
initializer=th,\
dtype=tf.float64)
print("thetas",thetas)
k = tf.convert_to_tensor(LF_l, dtype=tf.float64)
print("k",k)
print(alphas.graph)
print(thetas.graph)
l,s = tf.unstack(next_element,axis=1)
print(alphas)
print(s)
print("l",l)
print(s.graph)
s_ = tf.map_fn(lambda x : tf.maximum(tf.subtract(x,alphas), 0), s)
ls_ = tf.multiply(l,s_)
nls_ = tf.multiply(l,s_)*-1
pout = tf.map_fn(lambda x: l*x,np.array([-1,1],dtype=np.float64))
print("nls",nls_)
# lst = tf.matmul(ls_,thetas)
# print("lst",lst)
t_pout = tf.map_fn(lambda x: tf.matmul(x,thetas,transpose_b=True),pout)
print("pout",pout)
print("t_pout",t_pout)
t_k = k*tf.squeeze(thetas)
print("t_k",t_k)
zy = tf.map_fn(lambda y: tf.reduce_prod(1+tf.exp(t_k*y),axis=0),np.array([-1,1],dtype=np.float64))
logz = tf.log(tf.reduce_sum(zy,axis=0))
print("zy",zy)
print("logz",logz)
lsp = tf.reduce_logsumexp(t_pout)
print("lsp",lsp)
# normloss = tf.negative(tf.reduce_sum(tf.reduce_logsumexp(t_pout,axis=0)) - logz) # add z
normloss = tf.negative(tf.reduce_sum(tf.reduce_logsumexp(t_pout,axis=0) - logz))
print("normloss",normloss)
marginals = tf.nn.softmax(t_pout,axis=0)
print("marginals",marginals)
predict = tf.argmax(marginals,axis=0)
print("predict",predict)
# pre = tf.metrics.precision(labels,predict)
# rec = tf.metrics.recall(labels,predict)
# print("loss",loss)
# print("nls_",nls_)
# global_step = tf.Variable(0, trainable=False,dtype=tf.float64)
# starter_learning_rate = 1.0
# learning_rate = tf.train.exponential_decay(starter_learning_rate, global_step,
# 10, 0.96, staircase=True)
# train_step = tf.train.AdamOptimizer(learning_rate).minimize(normloss, global_step=global_step)
# train_step = tf.train.AdamOptimizer(0.001).minimize(normloss)
# reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
# reg_constant = 5.0 # Choose an appropriate one.
# totalloss = normloss + reg_constant * sum(reg_losses)
train_step = tf.train.AdamOptimizer(lr).minimize(normloss)
# train_step = tf.train.AdagradOptimizer(0.01).minimize(normloss)
# train_step = tf.train.MomentumOptimizer(0.01,0.2).minimize(normloss)
# train_step = tf.train.GradientDescentOptimizer(0.1).minimize(normloss)
init_g = tf.global_variables_initializer()
init_l = tf.local_variables_initializer()
with tf.Session() as sess:
sess.run(init_g)
sess.run(init_l)
# Initialize an iterator over the training dataset.
for it in range(ep):
sess.run(train_init_op)
tl = 0
try:
while True:
_,ls = sess.run([train_step,normloss])
tl = tl + ls
except tf.errors.OutOfRangeError:
pass
print(it,"loss",tl)
sess.run(dev_init_op)
a,t,m,pl = sess.run([alphas,thetas,marginals,predict])
print(a)
print(t)
# MentionScorer(dev_cands, L_gold_dev).score(m[1::].flatten())
unique, counts = np.unique(pl, return_counts=True)
print(dict(zip(unique, counts)))
print(precision_recall_fscore_support(np.array(gold_labels_dev),np.array(pl),average="binary"))
print()
# Initialize an iterator over the validation dataset.
sess.run(dev_init_op)
a,t,m,pl = sess.run([alphas,thetas,marginals,predict])
print(a)
print(t)
MentionScorer(dev_cands, L_gold_dev).score(m[1::].flatten())
unique, counts = np.unique(pl, return_counts=True)
print(dict(zip(unique, counts)))
print("acc",accuracy_score(gold_labels_dev,pl))
print(precision_recall_fscore_support(np.array(gold_labels_dev),np.array(pl)))
print(precision_recall_fscore_support(np.array(gold_labels_dev),np.array(pl),average="macro"))
cf = confusion_matrix(gold_labels_dev,pl)
print(cf)
print("prec: tp/(tp+fp)",cf[1][1]/(cf[1][1]+cf[0][1]),"recall: tp/(tp+fn)",cf[1][1]/(cf[1][1]+cf[1][0]))
print(precision_recall_fscore_support(np.array(gold_labels_dev),np.array(pl),average="binary"))
train_nl(0.1/len(train_L_S),15,tf.truncated_normal_initializer(1,0.1,12))
train_nl(0.01,15,tf.truncated_normal_initializer(1,0.1,12))
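The partition term `logz` above relies on the identity that the sum over all fire/abstain patterns of the labeling functions factorizes into a product, Z = sum_y prod_i (1 + exp(theta_i k_i y)). A brute-force NumPy check on hypothetical weights:

```python
import numpy as np
from itertools import product

thetas = np.array([0.5, 1.0, -0.2])  # hypothetical weights
k = np.array([1.0, -1.0, 1.0])       # hypothetical LF polarities
t_k = thetas * k

# Product form, as computed in the graph:
# Z = sum_y prod_i (1 + exp(t_k_i * y)), y in {-1, +1}.
zy = [np.prod(1.0 + np.exp(t_k * y)) for y in (-1.0, 1.0)]
logz = np.log(np.sum(zy))

# Brute force: expand the product into a sum over all fire/abstain
# patterns b in {0,1}^3 for each label y.
brute = sum(np.exp(np.dot(b, t_k * y))
            for y in (-1.0, 1.0)
            for b in product([0, 1], repeat=3))
print(logz, np.log(brute))
```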
# Input: L_S is the stacked label/score matrix (e.g. train_L_S); K is the number of classes
def softmax(x):
"""Compute softmax values for each set of scores in x."""
return np.exp(x) / np.sum(np.exp(x), axis=0)
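The `softmax` helper above exponentiates raw scores directly, which overflows for large inputs. A shifted variant (a standard trick, not from the notebook) gives identical results because the shift cancels in the ratio:

```python
import numpy as np

def stable_softmax(x):
    """Softmax along axis 0, shifted by the max to avoid overflow."""
    z = x - np.max(x, axis=0)
    e = np.exp(z)
    return e / np.sum(e, axis=0)

# exp(1000) overflows the naive version; the shifted one is fine.
print(stable_softmax(np.array([1000.0, 1001.0])))
```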
def get_maj_prior(L_S,K):
maj_prior = []
print(L_S[:,0,:].shape)
for row in np.nditer(L_S[:,0,:],flags=['external_loop'], order='C'):
p = np.ones(K)/K
unique, counts = np.unique(row, return_counts=True)
unique = [int(x) for x in unique]
rc = dict(zip(unique, counts))
if -1 in rc:
p[0] = rc[-1]
if 1 in rc:
p[1] = rc[1]
p = softmax(p)
maj_prior.append(p)
return np.array(maj_prior)
dev_maj_pl=get_maj_prior(dev_L_S,2)
print(dev_maj_pl.shape)
train_maj_pl=get_maj_prior(train_L_S,2)
print(train_maj_pl.shape)
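`get_maj_prior` turns each row of LF votes into a soft class prior: the counts of -1 and +1 votes replace the uniform entries, then a softmax squashes them. A toy row (hypothetical, not from the real matrices) traces the same steps:

```python
import numpy as np

row = np.array([-1.0, 1.0, 1.0, 0.0, 1.0])  # toy LF votes
unique, counts = np.unique(row, return_counts=True)
rc = dict(zip([int(u) for u in unique], counts))

K = 2
p = np.ones(K) / K      # uniform prior when no LF fires
if -1 in rc:
    p[0] = rc[-1]       # negative-vote count
if 1 in rc:
    p[1] = rc[1]        # positive-vote count
p = np.exp(p) / np.sum(np.exp(p))  # same softmax as get_maj_prior
print(p)  # more mass on the positive class: 3 votes vs 1
```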
## Normalized loss with majority prior
def train_nlmp(lr,ep,th):
BATCH_SIZE = 1
tf.reset_default_graph()
seed = 12
with tf.Graph().as_default():
train_dataset = tf.data.Dataset.from_tensor_slices(train_L_S).batch(BATCH_SIZE)
dev_dataset = tf.data.Dataset.from_tensor_slices(dev_L_S).batch(dev_L_S.shape[0])
maj_train_dataset = tf.data.Dataset.from_tensor_slices(train_maj_pl).batch(BATCH_SIZE)
maj_dev_dataset = tf.data.Dataset.from_tensor_slices(dev_maj_pl).batch(dev_maj_pl.shape[0])
labels = tf.convert_to_tensor(gold_labels_dev)
iterator = tf.data.Iterator.from_structure(train_dataset.output_types,
train_dataset.output_shapes)
maj_iterator = tf.data.Iterator.from_structure(maj_train_dataset.output_types,
maj_train_dataset.output_shapes)
train_init_op = iterator.make_initializer(train_dataset)
maj_train_init_op = maj_iterator.make_initializer(maj_train_dataset)
dev_init_op = iterator.make_initializer(dev_dataset)
maj_dev_init_op = maj_iterator.make_initializer(maj_dev_dataset)
next_element = iterator.get_next()
maj_prior = tf.transpose(maj_iterator.get_next())
print("maj_label",maj_prior)
print("next_element",next_element)
alphas = tf.get_variable('alphas', [NoOfLFs],\
initializer=tf.truncated_normal_initializer(0.2,0.1,seed),\
dtype=tf.float64)
# thetas = tf.get_variable('thetas', [1,NoOfLFs],\
# initializer=tf.truncated_normal_initializer(1,0.1,seed),\
# dtype=tf.float64)
thetas = tf.get_variable('thetas',[1,NoOfLFs],\
initializer=th,\
dtype=tf.float64)
print("thetas",thetas)
k = tf.convert_to_tensor(LF_l, dtype=tf.float64)
print("k",k)
print(alphas.graph)
print(thetas.graph)
l,s = tf.unstack(next_element,axis=1)
print(alphas)
print(s)
print("l",l)
print(s.graph)
s_ = tf.map_fn(lambda x : tf.maximum(tf.subtract(x,alphas), 0), s)
ls_ = tf.multiply(l,s_)
nls_ = tf.multiply(l,s_)*-1
pout = tf.map_fn(lambda x: l*x,np.array([-1,1],dtype=np.float64))
print("nls",nls_)
# lst = tf.matmul(ls_,thetas)
# print("lst",lst)
t_pout = tf.map_fn(lambda x: tf.matmul(x,thetas,transpose_b=True),pout)
print("pout",pout)
print("t_pout",t_pout)
t_k = k*tf.squeeze(thetas)
print("t_k",t_k)
zy = tf.map_fn(lambda y: tf.reduce_prod(1+tf.exp(t_k*y),axis=0),np.array([-1,1],dtype=np.float64))
logz = tf.log(tf.reduce_sum(zy,axis=0))
print("zy",zy)
print("logz",logz)
stpout= tf.squeeze(t_pout)
print("stpout",stpout)
prod = tf.reduce_sum(maj_prior*stpout,axis=0)
print("prod",prod)
# normloss = tf.negative(tf.reduce_sum(tf.reduce_logsumexp(t_pout,axis=0)) - logz) # add z
normloss = tf.negative(tf.reduce_sum(tf.reduce_sum(maj_prior*tf.squeeze(t_pout-logz),axis=1) ))
print("normloss",normloss)
marginals = tf.nn.softmax(t_pout,axis=0)
print("marginals",marginals)
predict = tf.argmax(marginals,axis=0)
print("predict",predict)
# pre = tf.metrics.precision(labels,predict)
# rec = tf.metrics.recall(labels,predict)
# print("loss",loss)
# print("nls_",nls_)
# global_step = tf.Variable(0, trainable=False,dtype=tf.float64)
# starter_learning_rate = 1.0
# learning_rate = tf.train.exponential_decay(starter_learning_rate, global_step,
# 10, 0.96, staircase=True)
# train_step = tf.train.AdamOptimizer(learning_rate).minimize(normloss, global_step=global_step)
# train_step = tf.train.AdamOptimizer(0.001).minimize(normloss)
# reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
# reg_constant = 5.0 # Choose an appropriate one.
# totalloss = normloss + reg_constant * sum(reg_losses)
train_step = tf.train.AdamOptimizer(lr).minimize(normloss)
# train_step = tf.train.AdagradOptimizer(0.01).minimize(normloss)
# train_step = tf.train.MomentumOptimizer(0.01,0.2).minimize(normloss)
# train_step = tf.train.GradientDescentOptimizer(0.1).minimize(normloss)
init_g = tf.global_variables_initializer()
init_l = tf.local_variables_initializer()
with tf.Session() as sess:
sess.run(init_g)
sess.run(init_l)
# Initialize an iterator over the training dataset.
for it in range(ep):
sess.run(train_init_op)
sess.run(maj_train_init_op)
tl = 0
try:
while True:
_,ls = sess.run([train_step,normloss])
tl = tl + ls
except tf.errors.OutOfRangeError:
pass
print(it,"loss",tl)
sess.run(dev_init_op)
sess.run(maj_dev_init_op)
a,t,m,pl = sess.run([alphas,thetas,marginals,predict])
print(a)
print(t)
# MentionScorer(dev_cands, L_gold_dev).score(m[1::].flatten())
unique, counts = np.unique(pl, return_counts=True)
print(dict(zip(unique, counts)))
print(precision_recall_fscore_support(np.array(gold_labels_dev),np.array(pl),average="binary"))
print()
# Initialize an iterator over the validation dataset.
sess.run(dev_init_op)
sess.run(maj_dev_init_op)
a,t,m,pl = sess.run([alphas,thetas,marginals,predict])
print(a)
print(t)
MentionScorer(dev_cands, L_gold_dev).score(m[1::].flatten())
unique, counts = np.unique(pl, return_counts=True)
print(dict(zip(unique, counts)))
print("acc",accuracy_score(gold_labels_dev,pl))
print(precision_recall_fscore_support(np.array(gold_labels_dev),np.array(pl)))
print(precision_recall_fscore_support(np.array(gold_labels_dev),np.array(pl),average="macro"))
cf = confusion_matrix(gold_labels_dev,pl)
print(cf)
print("prec: tp/(tp+fp)",cf[1][1]/(cf[1][1]+cf[0][1]),"recall: tp/(tp+fn)",cf[1][1]/(cf[1][1]+cf[1][0]))
print(precision_recall_fscore_support(np.array(gold_labels_dev),np.array(pl),average="binary"))
train_nlmp(0.1/len(train_L_S),5,tf.truncated_normal_initializer(1,0.1,12))
## Unnormalized loss with majority prior (no partition term)
def train_unlmp(lr,ep,th):
BATCH_SIZE = 1
tf.reset_default_graph()
seed = 12
with tf.Graph().as_default():
train_dataset = tf.data.Dataset.from_tensor_slices(train_L_S).batch(BATCH_SIZE)
dev_dataset = tf.data.Dataset.from_tensor_slices(dev_L_S).batch(dev_L_S.shape[0])
maj_train_dataset = tf.data.Dataset.from_tensor_slices(train_maj_pl).batch(BATCH_SIZE)
maj_dev_dataset = tf.data.Dataset.from_tensor_slices(dev_maj_pl).batch(dev_maj_pl.shape[0])
labels = tf.convert_to_tensor(gold_labels_dev)
iterator = tf.data.Iterator.from_structure(train_dataset.output_types,
train_dataset.output_shapes)
maj_iterator = tf.data.Iterator.from_structure(maj_train_dataset.output_types,
maj_train_dataset.output_shapes)
train_init_op = iterator.make_initializer(train_dataset)
maj_train_init_op = maj_iterator.make_initializer(maj_train_dataset)
dev_init_op = iterator.make_initializer(dev_dataset)
maj_dev_init_op = maj_iterator.make_initializer(maj_dev_dataset)
next_element = iterator.get_next()
maj_prior = tf.transpose(maj_iterator.get_next())
print("maj_label",maj_prior)
print("next_element",next_element)
alphas = tf.get_variable('alphas', [NoOfLFs],\
initializer=tf.truncated_normal_initializer(0.2,0.1,seed),\
dtype=tf.float64)
# thetas = tf.get_variable('thetas', [1,NoOfLFs],\
# initializer=tf.truncated_normal_initializer(1,0.1,seed),\
# dtype=tf.float64)
thetas = tf.get_variable('thetas',[1,NoOfLFs],\
initializer=th,\
dtype=tf.float64)
print("thetas",thetas)
k = tf.convert_to_tensor(LF_l, dtype=tf.float64)
print("k",k)
print(alphas.graph)
print(thetas.graph)
l,s = tf.unstack(next_element,axis=1)
print(alphas)
print(s)
print("l",l)
print(s.graph)
s_ = tf.map_fn(lambda x : tf.maximum(tf.subtract(x,alphas), 0), s)
ls_ = tf.multiply(l,s_)
nls_ = tf.multiply(l,s_)*-1
pout = tf.map_fn(lambda x: l*x,np.array([-1,1],dtype=np.float64))
print("nls",nls_)
# lst = tf.matmul(ls_,thetas)
# print("lst",lst)
t_pout = tf.map_fn(lambda x: tf.matmul(x,thetas,transpose_b=True),pout)
print("pout",pout)
print("t_pout",t_pout)
t_k = k*tf.squeeze(thetas)
print("t_k",t_k)
zy = tf.map_fn(lambda y: tf.reduce_prod(1+tf.exp(t_k*y),axis=0),np.array([-1,1],dtype=np.float64))
logz = tf.log(tf.reduce_sum(zy,axis=0))
print("zy",zy)
print("logz",logz)
stpout= tf.squeeze(t_pout)
print("stpout",stpout)
prod = tf.reduce_sum(maj_prior*stpout,axis=0)
print("prod",prod)
# normloss = tf.negative(tf.reduce_sum(tf.reduce_logsumexp(t_pout,axis=0)) - logz) # add z
normloss = tf.negative(tf.reduce_sum(tf.reduce_sum(maj_prior*tf.squeeze(t_pout),axis=1) ))
print("normloss",normloss)
marginals = tf.nn.softmax(t_pout,axis=0)
print("marginals",marginals)
predict = tf.argmax(marginals,axis=0)
print("predict",predict)
# pre = tf.metrics.precision(labels,predict)
# rec = tf.metrics.recall(labels,predict)
# print("loss",loss)
# print("nls_",nls_)
# global_step = tf.Variable(0, trainable=False,dtype=tf.float64)
# starter_learning_rate = 1.0
# learning_rate = tf.train.exponential_decay(starter_learning_rate, global_step,
# 10, 0.96, staircase=True)
# train_step = tf.train.AdamOptimizer(learning_rate).minimize(normloss, global_step=global_step)
# train_step = tf.train.AdamOptimizer(0.001).minimize(normloss)
# reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
# reg_constant = 5.0 # Choose an appropriate one.
# totalloss = normloss + reg_constant * sum(reg_losses)
train_step = tf.train.AdamOptimizer(lr).minimize(normloss)
# train_step = tf.train.AdagradOptimizer(0.01).minimize(normloss)
# train_step = tf.train.MomentumOptimizer(0.01,0.2).minimize(normloss)
# train_step = tf.train.GradientDescentOptimizer(0.1).minimize(normloss)
init_g = tf.global_variables_initializer()
init_l = tf.local_variables_initializer()
with tf.Session() as sess:
sess.run(init_g)
sess.run(init_l)
# Initialize an iterator over the training dataset.
for it in range(ep):
sess.run(train_init_op)
sess.run(maj_train_init_op)
tl = 0
try:
while True:
_,ls = sess.run([train_step,normloss])
tl = tl + ls
except tf.errors.OutOfRangeError:
pass
print(it,"loss",tl)
sess.run(dev_init_op)
sess.run(maj_dev_init_op)
a,t,m,pl = sess.run([alphas,thetas,marginals,predict])
print(a)
print(t)
# MentionScorer(dev_cands, L_gold_dev).score(m[1::].flatten())
unique, counts = np.unique(pl, return_counts=True)
print(dict(zip(unique, counts)))
print(precision_recall_fscore_support(np.array(gold_labels_dev),np.array(pl),average="binary"))
print()
# Initialize an iterator over the validation dataset.
sess.run(dev_init_op)
sess.run(maj_dev_init_op)
a,t,m,pl = sess.run([alphas,thetas,marginals,predict])
print(a)
print(t)
MentionScorer(dev_cands, L_gold_dev).score(m[1::].flatten())
unique, counts = np.unique(pl, return_counts=True)
print(dict(zip(unique, counts)))
print("acc",accuracy_score(gold_labels_dev,pl))
print(precision_recall_fscore_support(np.array(gold_labels_dev),np.array(pl)))
print(precision_recall_fscore_support(np.array(gold_labels_dev),np.array(pl),average="macro"))
cf = confusion_matrix(gold_labels_dev,pl)
print(cf)
print("prec: tp/(tp+fp)",cf[1][1]/(cf[1][1]+cf[0][1]),"recall: tp/(tp+fn)",cf[1][1]/(cf[1][1]+cf[1][0]))
print(precision_recall_fscore_support(np.array(gold_labels_dev),np.array(pl),average="binary"))
train_unlmp(0.1/len(train_L_S),5,tf.truncated_normal_initializer(1,0.1,12))
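The manual precision/recall printed from the confusion matrix above uses prec = tp/(tp+fp) and recall = tp/(tp+fn), with the row index as the true label and the column index as the predicted label. A toy check (hypothetical labels, not the dev set) with plain NumPy:

```python
import numpy as np

y_true = np.array([0, 0, 1, 1, 1, 0, 1])  # toy gold labels
y_pred = np.array([0, 1, 1, 1, 0, 0, 1])  # toy predictions

tp = np.sum((y_true == 1) & (y_pred == 1))
fp = np.sum((y_true == 0) & (y_pred == 1))
fn = np.sum((y_true == 1) & (y_pred == 0))
prec = tp / (tp + fp)  # cf[1][1] / (cf[1][1] + cf[0][1])
rec = tp / (tp + fn)   # cf[1][1] / (cf[1][1] + cf[1][0])
print("prec", prec, "recall", rec)
```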
## Normalized loss with prior from other LFs
def train_nlp(lr,ep,th):
BATCH_SIZE = 1
tf.reset_default_graph()
seed = 12
with tf.Graph().as_default():
train_dataset = tf.data.Dataset.from_tensor_slices(train_L_S).batch(BATCH_SIZE)
dev_dataset = tf.data.Dataset.from_tensor_slices(dev_L_S).batch(dev_L_S.shape[0])
labels = tf.convert_to_tensor(gold_labels_dev)
iterator = tf.data.Iterator.from_structure(train_dataset.output_types,
train_dataset.output_shapes)
train_init_op = iterator.make_initializer(train_dataset)
dev_init_op = iterator.make_initializer(dev_dataset)
next_element = iterator.get_next()
print("next_element",next_element)
alphas = tf.get_variable('alphas', [NoOfLFs],\
initializer=tf.truncated_normal_initializer(0.2,0.1,seed),\
dtype=tf.float64)
# thetas = tf.get_variable('thetas', [1,NoOfLFs],\
# initializer=tf.truncated_normal_initializer(1,0.1,seed),\
# dtype=tf.float64)
thetas = tf.get_variable('thetas',[1,NoOfLFs],\
initializer=th,dtype=tf.float64)
print("thetas",thetas)
k = tf.convert_to_tensor(LF_l, dtype=tf.float64)
print("k",k)
print(alphas.graph)
print(thetas.graph)
l,s = tf.unstack(next_element,axis=1)
print(alphas)
print(s)
print("l",l)
print(s.graph)
s_ = tf.map_fn(lambda x : tf.maximum(tf.subtract(x,alphas), 0), s)
# ls_ = tf.multiply(l,s_)
# nls_ = tf.multiply(l,s_)*-1
pout = tf.map_fn(lambda li: tf.map_fn(lambda lij:li*lij,li ),l)
# print("nls",nls_)
print("pout",pout)
# lst = tf.matmul(ls_,thetas)
# print("lst",lst)
t_pout = tf.map_fn(lambda x: tf.matmul(x,thetas,transpose_b=True),pout)
print("t_pout",t_pout)
t_k = k*tf.squeeze(thetas)
print("t_k",t_k)
zy = tf.map_fn(lambda y: tf.reduce_prod(1+tf.exp(t_k*y),axis=0),np.array([-1,1],dtype=np.float64))
logz = tf.log(tf.reduce_sum(zy,axis=0))
print("zy",zy)
print("logz",logz)
sumy = t_pout-logz
print("sumy",sumy)
# normloss = tf.negative(tf.reduce_sum(tf.reduce_logsumexp(t_pout,axis=0)) - logz) # add z
normloss = tf.negative(tf.reduce_sum(tf.reduce_sum(t_pout-logz,axis=1) ))
def index_along_every_row(array, index):
N, _ = array.shape
return array[np.arange(N), index]
# Best LF: index of the labeling function with the highest weighted score
blf = tf.argmax(t_pout,axis=1)
print("blf",blf)
print("normloss",normloss)
marginals = tf.py_func(index_along_every_row, [tf.squeeze(t_pout), tf.squeeze(blf)], [tf.float64])[0]
print("marginals",marginals)
predict1 = tf.gather(k,tf.squeeze(blf))
predict = tf.where(tf.equal(predict1,1),tf.ones_like(predict1),tf.zeros_like(predict1))
print("predict",predict)
# pre = tf.metrics.precision(labels,predict)
# rec = tf.metrics.recall(labels,predict)
# print("loss",loss)
# print("nls_",nls_)
# global_step = tf.Variable(0, trainable=False,dtype=tf.float64)
# starter_learning_rate = 1.0
# learning_rate = tf.train.exponential_decay(starter_learning_rate, global_step,
# 10, 0.96, staircase=True)
# train_step = tf.train.AdamOptimizer(learning_rate).minimize(normloss, global_step=global_step)
# train_step = tf.train.AdamOptimizer(0.001).minimize(normloss)
# reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
# reg_constant = 5.0 # Choose an appropriate one.
# totalloss = normloss + reg_constant * sum(reg_losses)
train_step = tf.train.AdamOptimizer(lr).minimize(normloss)
# train_step = tf.train.AdagradOptimizer(0.01).minimize(normloss)
# train_step = tf.train.MomentumOptimizer(0.01,0.2).minimize(normloss)
# train_step = tf.train.GradientDescentOptimizer(0.1).minimize(normloss)
init_g = tf.global_variables_initializer()
init_l = tf.local_variables_initializer()
with tf.Session() as sess:
sess.run(init_g)
sess.run(init_l)
# Initialize an iterator over the training dataset.
for it in range(ep):
sess.run(train_init_op)
tl = 0
try:
while True:
_,ls = sess.run([train_step,normloss])
tl = tl + ls
except tf.errors.OutOfRangeError:
pass
print(it,"loss",tl)
sess.run(dev_init_op)
a,t,m,pl,b = sess.run([alphas,thetas,marginals,predict,blf])
# print(a)
print(t)
# MentionScorer(dev_cands, L_gold_dev).score(m[1::].flatten())
print("blfs")
unique, counts = np.unique(b.tolist(), return_counts=True)
print(dict(zip(unique, counts)))
unique, counts = np.unique(pl, return_counts=True)
print(dict(zip(unique, counts)))
print(precision_recall_fscore_support(np.array(gold_labels_dev),np.array(pl),average="binary"))
print()
# Initialize an iterator over the validation dataset.
sess.run(dev_init_op)
a,t,m,pl,b = sess.run([alphas,thetas,marginals,predict,blf])
# print(a)
print(t)
# MentionScorer(dev_cands, L_gold_dev).score(m[1::].flatten())
print("blfs")
unique, counts = np.unique(b.tolist(), return_counts=True)
print(dict(zip(unique, counts)))
unique, counts = np.unique(pl, return_counts=True)
print(dict(zip(unique, counts)))
print("acc",accuracy_score(gold_labels_dev,pl))
print(precision_recall_fscore_support(np.array(gold_labels_dev),np.array(pl)))
print(precision_recall_fscore_support(np.array(gold_labels_dev),np.array(pl),average="macro"))
cf = confusion_matrix(gold_labels_dev,pl)
print(cf)
print("prec: tp/(tp+fp)",cf[1][1]/(cf[1][1]+cf[0][1]),"recall: tp/(tp+fn)",cf[1][1]/(cf[1][1]+cf[1][0]))
print(precision_recall_fscore_support(np.array(gold_labels_dev),np.array(pl),average="binary"))
train_nlp(0.1/len(train_L_S),5,tf.truncated_normal_initializer(1,0.1,12))
# print blf
train_nlp(0.1/len(train_L_S),5,tf.truncated_normal_initializer(1,0.1,12))
## Unnormalized training with different parameters
def train_unl(lr,ep,th):
BATCH_SIZE = 1
tf.reset_default_graph()
seed = 12
with tf.Graph().as_default():
train_dataset = tf.data.Dataset.from_tensor_slices(train_L_S).batch(BATCH_SIZE)
dev_dataset = tf.data.Dataset.from_tensor_slices(dev_L_S).batch(dev_L_S.shape[0])
labels = tf.convert_to_tensor(gold_labels_dev)
iterator = tf.data.Iterator.from_structure(train_dataset.output_types,
train_dataset.output_shapes)
next_element = iterator.get_next()
train_init_op = iterator.make_initializer(train_dataset)
dev_init_op = iterator.make_initializer(dev_dataset)
next_element = iterator.get_next()
print("next_element",next_element)
alphas = tf.get_variable('alphas', [NoOfLFs],\
initializer=tf.truncated_normal_initializer(0.2,0.1,seed),\
dtype=tf.float64)
# thetas = tf.get_variable('thetas', [1,NoOfLFs],\
# initializer=tf.truncated_normal_initializer(1,0.1,seed),\
# dtype=tf.float64)
thetas = tf.get_variable('thetas',[1,NoOfLFs],\
initializer=th,\
dtype=tf.float64)
print("thetas",thetas)
k = tf.convert_to_tensor(LF_l, dtype=tf.float64)
print("k",k)
print(alphas.graph)
print(thetas.graph)
l,s = tf.unstack(next_element,axis=1)
print(alphas)
print(s)
print("l",l)
print(s.graph)
s_ = tf.map_fn(lambda x : tf.maximum(tf.subtract(x,alphas), 0), s)
ls_ = tf.multiply(l,s_)
nls_ = tf.multiply(l,s_)*-1
pout = tf.map_fn(lambda x: l*x,np.array([-1,1],dtype=np.float64))
print("nls",nls_)
# lst = tf.matmul(ls_,thetas)
# print("lst",lst)
t_pout = tf.map_fn(lambda x: tf.matmul(x,thetas,transpose_b=True),pout)
print("pout",pout)
print("t_pout",t_pout)
t_k = k*tf.squeeze(thetas)
print("t_k",t_k)
zy = tf.map_fn(lambda y: tf.reduce_prod(1+tf.exp(t_k*y),axis=0),np.array([-1,1],dtype=np.float64))
logz = tf.log(tf.reduce_sum(zy,axis=0))
print("zy",zy)
print("logz",logz)
lsp = tf.reduce_logsumexp(t_pout)
print("lsp",lsp)
# normloss = tf.negative(tf.reduce_sum(tf.reduce_logsumexp(t_pout,axis=0)) - logz) # add z
normloss = tf.negative(tf.reduce_sum(tf.reduce_logsumexp(t_pout,axis=0)))
print("normloss",normloss)
marginals = tf.nn.softmax(t_pout,axis=0)
print("marginals",marginals)
predict = tf.argmax(marginals,axis=0)
print("predict",predict)
# pre = tf.metrics.precision(labels,predict)
# rec = tf.metrics.recall(labels,predict)
# print("loss",loss)
# print("nls_",nls_)
# global_step = tf.Variable(0, trainable=False,dtype=tf.float64)
# starter_learning_rate = 1.0
# learning_rate = tf.train.exponential_decay(starter_learning_rate, global_step,
# 10, 0.96, staircase=True)
# train_step = tf.train.AdamOptimizer(learning_rate).minimize(normloss, global_step=global_step)
# train_step = tf.train.AdamOptimizer(0.001).minimize(normloss)
# reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
# reg_constant = 5.0 # Choose an appropriate one.
# totalloss = normloss + reg_constant * sum(reg_losses)
train_step = tf.train.AdamOptimizer(lr).minimize(normloss)
# train_step = tf.train.AdagradOptimizer(0.01).minimize(normloss)
# train_step = tf.train.MomentumOptimizer(0.01,0.2).minimize(normloss)
# train_step = tf.train.GradientDescentOptimizer(0.1).minimize(normloss)
init_g = tf.global_variables_initializer()
init_l = tf.local_variables_initializer()
with tf.Session() as sess:
sess.run(init_g)
sess.run(init_l)
# Initialize an iterator over the training dataset.
for it in range(ep):
sess.run(train_init_op)
tl = 0
try:
while True:
_,ls = sess.run([train_step,normloss])
tl = tl + ls
except tf.errors.OutOfRangeError:
pass
print(it,"loss",tl)
sess.run(dev_init_op)
a,t,m,pl = sess.run([alphas,thetas,marginals,predict])
print(a)
print(t)
# MentionScorer(dev_cands, L_gold_dev).score(m[1::].flatten())
unique, counts = np.unique(pl, return_counts=True)
print(dict(zip(unique, counts)))
print(precision_recall_fscore_support(np.array(gold_labels_dev),np.array(pl),average="binary"))
print()
# Initialize an iterator over the validation dataset.
sess.run(dev_init_op)
a,t,m,pl = sess.run([alphas,thetas,marginals,predict])
print(a)
print(t)
MentionScorer(dev_cands, L_gold_dev).score(m[1::].flatten())
unique, counts = np.unique(pl, return_counts=True)
print(dict(zip(unique, counts)))
print("acc",accuracy_score(gold_labels_dev,pl))
print(precision_recall_fscore_support(np.array(gold_labels_dev),np.array(pl)))
print(precision_recall_fscore_support(np.array(gold_labels_dev),np.array(pl),average="macro"))
cf = confusion_matrix(gold_labels_dev,pl)
print(cf)
print("prec: tp/(tp+fp)",cf[1][1]/(cf[1][1]+cf[0][1]),"recall: tp/(tp+fn)",cf[1][1]/(cf[1][1]+cf[1][0]))
print(precision_recall_fscore_support(np.array(gold_labels_dev),np.array(pl),average="binary"))
train_unl(0.1/len(train_L_S),15,tf.truncated_normal_initializer(1,0.1,12))
train_unl(0.01,15,tf.truncated_normal_initializer(1,0.1,12))
## Objective value with the normalized loss
def getNLObjValue(th):
BATCH_SIZE = 1
tf.reset_default_graph()
seed = 12
with tf.Graph().as_default():
train_dataset = tf.data.Dataset.from_tensor_slices(train_L_S).batch(BATCH_SIZE)
dev_dataset = tf.data.Dataset.from_tensor_slices(dev_L_S).batch(dev_L_S.shape[0])
labels = tf.convert_to_tensor(gold_labels_dev)
iterator = tf.data.Iterator.from_structure(train_dataset.output_types,
train_dataset.output_shapes)
next_element = iterator.get_next()
train_init_op = iterator.make_initializer(train_dataset)
dev_init_op = iterator.make_initializer(dev_dataset)
next_element = iterator.get_next()
print("next_element",next_element)
alphas = tf.get_variable('alphas', [NoOfLFs],\
initializer=tf.truncated_normal_initializer(0.2,0.1,seed),\
dtype=tf.float64)
# thetas = tf.get_variable('thetas', [1,NoOfLFs],\
# initializer=tf.truncated_normal_initializer(1,0.1,seed),\
# dtype=tf.float64)
thetas = tf.convert_to_tensor(th)
k = tf.convert_to_tensor(LF_l, dtype=tf.float64)
print("k",k)
print(alphas.graph)
print(thetas.graph)
l,s = tf.unstack(next_element,axis=1)
print(alphas)
print(s)
print("l",l)
print(s.graph)
s_ = tf.map_fn(lambda x : tf.maximum(tf.subtract(x,alphas), 0), s)
ls_ = tf.multiply(l,s_)
nls_ = tf.multiply(l,s_)*-1
pout = tf.map_fn(lambda x: l*x,np.array([-1,1],dtype=np.float64))
print("nls",nls_)
print("thetas",thetas)
# lst = tf.matmul(ls_,thetas)
# print("lst",lst)
t_pout = tf.map_fn(lambda x: tf.matmul(x,thetas,transpose_b=True),pout)
print("pout",pout)
print("t_pout",t_pout)
t_k = k*tf.squeeze(thetas)
print("t_k",t_k)
zy = tf.map_fn(lambda y: tf.reduce_prod(1+tf.exp(t_k*y),axis=0),np.array([-1,1],dtype=np.float64))
logz = tf.log(tf.reduce_sum(zy,axis=0))
print("zy",zy)
print("logz",logz)
lsp = tf.reduce_logsumexp(t_pout)
print("lsp",lsp)
# normloss = tf.negative(tf.reduce_sum(tf.reduce_logsumexp(t_pout,axis=0)) - logz) # add z
normloss = tf.negative(tf.reduce_sum(tf.reduce_logsumexp(t_pout,axis=0) - logz))
print("normloss",normloss)
marginals = tf.nn.softmax(t_pout,axis=0)
print("marginals",marginals)
predict = tf.argmax(marginals,axis=0)
print("predict",predict)
# pre = tf.metrics.precision(labels,predict)
# rec = tf.metrics.recall(labels,predict)
# print("loss",loss)
# print("nls_",nls_)
# global_step = tf.Variable(0, trainable=False,dtype=tf.float64)
# starter_learning_rate = 1.0
# learning_rate = tf.train.exponential_decay(starter_learning_rate, global_step,
# 10, 0.96, staircase=True)
# train_step = tf.train.AdamOptimizer(learning_rate).minimize(normloss, global_step=global_step)
# train_step = tf.train.AdamOptimizer(0.001).minimize(normloss)
# reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
# reg_constant = 5.0 # Choose an appropriate one.
# totalloss = normloss + reg_constant * sum(reg_losses)
# train_step = tf.train.AdamOptimizer(0.01).minimize(normloss)
# train_step = tf.train.AdagradOptimizer(0.01).minimize(normloss)
# train_step = tf.train.MomentumOptimizer(0.01,0.2).minimize(normloss)
# train_step = tf.train.GradientDescentOptimizer(0.1).minimize(normloss)
init_g = tf.global_variables_initializer()
init_l = tf.local_variables_initializer()
with tf.Session() as sess:
sess.run(init_g)
sess.run(init_l)
# Initialize an iterator over the training dataset.
tl = 0
for it in range(1):
sess.run(train_init_op)
try:
while True:
# _,ls = sess.run([train_step,normloss])
ls = sess.run(normloss) # to calculate loss on fixed thetas
tl = tl + ls
except tf.errors.OutOfRangeError:
pass
print(it,"train loss",tl)
# sess.run(dev_init_op)
# a,t,m,pl,dl = sess.run([alphas,thetas,marginals,predict,normloss])
# print(a)
# print(t)
# MentionScorer(dev_cands, L_gold_dev).score(m[1::].flatten())
# unique, counts = np.unique(pl, return_counts=True)
# print(dict(zip(unique, counts)))
# print(precision_recall_fscore_support(np.array(gold_labels_dev),np.array(pl),average="binary"))
# print()
# Initialize an iterator over the validation dataset.
sess.run(dev_init_op)
a,t,m,pl,dl = sess.run([alphas,thetas,marginals,predict,normloss])
print(a)
print(t)
print("dev loss",dl)
MentionScorer(dev_cands, L_gold_dev).score(m[1::].flatten())
unique, counts = np.unique(pl, return_counts=True)
print(dict(zip(unique, counts)))
print("acc",accuracy_score(gold_labels_dev,pl))
print(precision_recall_fscore_support(np.array(gold_labels_dev),np.array(pl)))
print(precision_recall_fscore_support(np.array(gold_labels_dev),np.array(pl),average="macro"))
cf = confusion_matrix(gold_labels_dev,pl)
print(cf)
print("prec: tp/(tp+fp)",cf[1][1]/(cf[1][1]+cf[0][1]),"recall: tp/(tp+fn)",cf[1][1]/(cf[1][1]+cf[1][0]))
print(precision_recall_fscore_support(np.array(gold_labels_dev),np.array(pl),average="binary"))
res = precision_recall_fscore_support(np.array(gold_labels_dev),np.array(pl),average="binary")
return (tl,dl,res)
print("snorkel thetas")
l_f1s = []
#snorkel thetas
l_f1s.append(getNLObjValue(np.array([[0.2053655 , 0.30936395, 0.28997198, 0.2367527 , 0.28647343,
0.32020677, 0.19884814, 0.30623267, 0.30723075, 0.2419398 ,
0.25581996, 0.28299078, 0.18634078, 0.21796294, 0.20752234,
0.19268362, 0.18442374, 0.26176987, 0.22819245, 0.18263638,
0.30204006, 0.36018888, 0.18171976, 0.19192962, 0.30156119,
0.22449297, 0.18345572, 0.18210333, 0.18615508, 0.18474442,
0.19665109, 0.19592285, 0.21894825]])))
print("un-norma thetas ep15 lr 0.1/len(train)")
l_f1s.append(getNLObjValue(np.array([[1.331602, 1.26669486, 1.40539061, 1.31680439, 1.54131584, 1.62985126,
1.35424048, 1.46772579, 1.58006601, 1.38292574, 1.33103475, 1.4570726,
0.99999682, 1.33831358, 1.23726225, 1.23667661, 0.8854989 , 1.37555159,
1.4405701 , 0.86946515, 1.56697556, 1.32867193, 1.02812733, 1.31907064,
1.26115131, 1.46762187, 1.16829513, 1.0909831 , 1.19160042, 1.00836655,
0.97116276, 0.94325368, 1.2503787 ]])))
print(l_f1s)
## Normalized loss: plot loss vs. F1
y_loss=[190172.47,247125.96,182772.825,182772.62]
x_f1s =[0.636,0.585,0.668,0.668]
text=["snorkel-thetas","unNormalized-thetas-ep7","normalized-trained-thetas-ep7","normalized-trained-thetas-ep15"]
drawLossVsF1(y_loss,x_f1s,text,"CDR-Normalized-Loss")
## Objective value on snorkel thetas with the unnormalized loss (logz removed from the objective)
def getUNLObjValue(th):
BATCH_SIZE = 1
tf.reset_default_graph()
seed = 12
with tf.Graph().as_default():
train_dataset = tf.data.Dataset.from_tensor_slices(train_L_S).batch(BATCH_SIZE)
dev_dataset = tf.data.Dataset.from_tensor_slices(dev_L_S).batch(dev_L_S.shape[0])
labels = tf.convert_to_tensor(gold_labels_dev)
iterator = tf.data.Iterator.from_structure(train_dataset.output_types,
train_dataset.output_shapes)
next_element = iterator.get_next()
train_init_op = iterator.make_initializer(train_dataset)
dev_init_op = iterator.make_initializer(dev_dataset)
next_element = iterator.get_next()
print("next_element",next_element)
alphas = tf.get_variable('alphas', [NoOfLFs],\
initializer=tf.truncated_normal_initializer(0.2,0.1,seed),\
dtype=tf.float64)
# thetas = tf.get_variable('thetas', [1,NoOfLFs],\
# initializer=tf.truncated_normal_initializer(1,0.1,seed),\
# dtype=tf.float64)
thetas = tf.convert_to_tensor(th)
k = tf.convert_to_tensor(LF_l, dtype=tf.float64)
print("k",k)
print(alphas.graph)
print(thetas.graph)
l,s = tf.unstack(next_element,axis=1)
print(alphas)
print(s)
print("l",l)
print(s.graph)
s_ = tf.map_fn(lambda x : tf.maximum(tf.subtract(x,alphas), 0), s)
ls_ = tf.multiply(l,s_)
nls_ = tf.multiply(l,s_)*-1
pout = tf.map_fn(lambda x: l*x,np.array([-1,1],dtype=np.float64))
print("nls",nls_)
print("thetas",thetas)
# lst = tf.matmul(ls_,thetas)
# print("lst",lst)
t_pout = tf.map_fn(lambda x: tf.matmul(x,thetas,transpose_b=True),pout)
print("pout",pout)
print("t_pout",t_pout)
t_k = k*tf.squeeze(thetas)
print("t_k",t_k)
zy = tf.map_fn(lambda y: tf.reduce_prod(1+tf.exp(t_k*y),axis=0),np.array([-1,1],dtype=np.float64))
logz = tf.log(tf.reduce_sum(zy,axis=0))
print("zy",zy)
print("logz",logz)
lsp = tf.reduce_logsumexp(t_pout)
print("lsp",lsp)
# normloss = tf.negative(tf.reduce_sum(tf.reduce_logsumexp(t_pout,axis=0)) - logz) # add z
normloss = tf.negative(tf.reduce_sum(tf.reduce_logsumexp(t_pout,axis=0) ))
print("normloss",normloss)
marginals = tf.nn.softmax(t_pout,axis=0)
print("marginals",marginals)
predict = tf.argmax(marginals,axis=0)
print("predict",predict)
# pre = tf.metrics.precision(labels,predict)
# rec = tf.metrics.recall(labels,predict)
# print("loss",loss)
# print("nls_",nls_)
# global_step = tf.Variable(0, trainable=False,dtype=tf.float64)
# starter_learning_rate = 1.0
# learning_rate = tf.train.exponential_decay(starter_learning_rate, global_step,
# 10, 0.96, staircase=True)
# train_step = tf.train.AdamOptimizer(learning_rate).minimize(normloss, global_step=global_step)
# train_step = tf.train.AdamOptimizer(0.001).minimize(normloss)
# reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
# reg_constant = 5.0 # Choose an appropriate one.
# totalloss = normloss + reg_constant * sum(reg_losses)
# train_step = tf.train.AdamOptimizer(0.01).minimize(normloss)
# train_step = tf.train.AdagradOptimizer(0.01).minimize(normloss)
# train_step = tf.train.MomentumOptimizer(0.01,0.2).minimize(normloss)
# train_step = tf.train.GradientDescentOptimizer(0.1).minimize(normloss)
init_g = tf.global_variables_initializer()
init_l = tf.local_variables_initializer()
with tf.Session() as sess:
sess.run(init_g)
sess.run(init_l)
# Initialize an iterator over the training dataset.
tl = 0
for it in range(1):
sess.run(train_init_op)
try:
while True:
# _,ls = sess.run([train_step,normloss])
ls = sess.run(normloss) # to calculate loss on fixed thetas
tl = tl + ls
except tf.errors.OutOfRangeError:
pass
print(it,"train loss",tl)
# sess.run(dev_init_op)
# a,t,m,pl,dl = sess.run([alphas,thetas,marginals,predict,normloss])
# print(a)
# print(t)
# MentionScorer(dev_cands, L_gold_dev).score(m[1::].flatten())
# unique, counts = np.unique(pl, return_counts=True)
# print(dict(zip(unique, counts)))
# print(precision_recall_fscore_support(np.array(gold_labels_dev),np.array(pl),average="binary"))
# print()
# Initialize an iterator over the validation dataset.
sess.run(dev_init_op)
a,t,m,pl,dl = sess.run([alphas,thetas,marginals,predict,normloss])
print(a)
print(t)
print("dev loss",dl)
MentionScorer(dev_cands, L_gold_dev).score(m[1::].flatten())
unique, counts = np.unique(pl, return_counts=True)
print(dict(zip(unique, counts)))
print("acc",accuracy_score(gold_labels_dev,pl))
print(precision_recall_fscore_support(np.array(gold_labels_dev),np.array(pl)))
print(precision_recall_fscore_support(np.array(gold_labels_dev),np.array(pl),average="macro"))
cf = confusion_matrix(gold_labels_dev,pl)
print(cf)
print("prec: tp/(tp+fp)",cf[1][1]/(cf[1][1]+cf[0][1]),"recall: tp/(tp+fn)",cf[1][1]/(cf[1][1]+cf[1][0]))
print(precision_recall_fscore_support(np.array(gold_labels_dev),np.array(pl),average="binary"))
res = precision_recall_fscore_support(np.array(gold_labels_dev),np.array(pl),average="binary")
return (tl,dl,res)
print("snorkel thetas")
l_f1s = []
#snorkel thetas
l_f1s.append(getUNLObjValue(np.array([[0.2053655 , 0.30936395, 0.28997198, 0.2367527 , 0.28647343,
0.32020677, 0.19884814, 0.30623267, 0.30723075, 0.2419398 ,
0.25581996, 0.28299078, 0.18634078, 0.21796294, 0.20752234,
0.19268362, 0.18442374, 0.26176987, 0.22819245, 0.18263638,
0.30204006, 0.36018888, 0.18171976, 0.19192962, 0.30156119,
0.22449297, 0.18345572, 0.18210333, 0.18615508, 0.18474442,
0.19665109, 0.19592285, 0.21894825]])))
print("normalized thetas ep15 lr0.01")
l_f1s.append(getUNLObjValue(np.array([[ 0.28575421, 0.51761933 , 0.5207703 , 0.36454638, 0.55417525, 0.64604823,
0.13772734, 0.5908674, 0.64182507, 0.41374798, 0.40126751, 0.53715197,
-0.05536051, 0.32497394, 0.21877467, 0.14599839, -0.05084094, 0.423559,
0.32848078, 0.05423492, 0.65924925, 1.0990666 , 0.05418686, 0.22025056,
0.55000554, 0.33751599, 0.06750857, 0.05696271, 0.11251414, -0.01585396,
0.11384963, 0.12932497, 0.32449617]])))
print(l_f1s)
## Unnormalized loss: plot loss vs. F1
y_loss=[-7326.53,-10754.9,-20825.271]
x_f1s =[0.636,0.671,0.585]
text=["snorkel-thetas","normalized-thetas-ep15","Un-normalized-trained-thetas-ep15"]
drawLossVsF1(y_loss,x_f1s,text,"CDR-Un-Normalized-Loss")
## Same network that did not train
BATCH_SIZE = 1
tf.reset_default_graph()
seed = 12
with tf.Graph().as_default():
train_dataset = tf.data.Dataset.from_tensor_slices(train_L_S).batch(BATCH_SIZE)
dev_dataset = tf.data.Dataset.from_tensor_slices(dev_L_S).batch(dev_L_S.shape[0])
labels = tf.convert_to_tensor(gold_labels_dev)
iterator = tf.data.Iterator.from_structure(train_dataset.output_types,
train_dataset.output_shapes)
next_element = iterator.get_next()
train_init_op = iterator.make_initializer(train_dataset)
dev_init_op = iterator.make_initializer(dev_dataset)
next_element = iterator.get_next()
print("next_element",next_element)
alphas = tf.get_variable('alphas', [NoOfLFs],\
initializer=tf.truncated_normal_initializer(0.2,0.1,seed),\
dtype=tf.float64)
thetas = tf.get_variable('thetas', [1,NoOfLFs],\
initializer=tf.truncated_normal_initializer(1,0.1,seed),\
dtype=tf.float64)
k = tf.convert_to_tensor(LF_l, dtype=tf.float64)
print("k",k)
print(alphas.graph)
print(thetas.graph)
l,s = tf.unstack(next_element,axis=1)
print(alphas)
print(s)
print("l",l)
print(s.graph)
s_ = tf.map_fn(lambda x : tf.maximum(tf.subtract(x,alphas), 0), s)
ls_ = tf.multiply(l,s_)
nls_ = tf.multiply(l,s_)*-1
pout = tf.map_fn(lambda x: l*x,np.array([-1,1],dtype=np.float64))
print("nls",nls_)
print("thetas",thetas)
# lst = tf.matmul(ls_,thetas)
# print("lst",lst)
t_pout = tf.map_fn(lambda x: tf.matmul(x,thetas,transpose_b=True),pout)
print("pout",pout)
print("t_pout",t_pout)
t_k = k*tf.squeeze(thetas)
print("t_k",t_k)
zy = tf.map_fn(lambda y: tf.reduce_prod(1+tf.exp(t_k*y),axis=0),np.array([-1,1],dtype=np.float64))
logz = tf.log(tf.reduce_sum(zy,axis=0))
print("zy",zy)
print("logz",logz)
lsp = tf.reduce_logsumexp(t_pout)
print("lsp",lsp)
# normloss = tf.negative(tf.reduce_sum(tf.reduce_logsumexp(t_pout,axis=0)) - logz) # add z
normloss = tf.negative(tf.reduce_sum(tf.reduce_logsumexp(t_pout,axis=0) - logz))
print("normloss",normloss)
marginals = tf.nn.softmax(t_pout,axis=0)
print("marginals",marginals)
predict = tf.argmax(marginals,axis=0)
print("predict",predict)
# pre = tf.metrics.precision(labels,predict)
# rec = tf.metrics.recall(labels,predict)
# print("loss",loss)
# print("nls_",nls_)
# global_step = tf.Variable(0, trainable=False,dtype=tf.float64)
# starter_learning_rate = 1.0
# learning_rate = tf.train.exponential_decay(starter_learning_rate, global_step,
# 10, 0.96, staircase=True)
# train_step = tf.train.AdamOptimizer(learning_rate).minimize(normloss, global_step=global_step)
# train_step = tf.train.AdamOptimizer(0.001).minimize(normloss)
# reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
# reg_constant = 5.0 # Choose an appropriate one.
# totalloss = normloss + reg_constant * sum(reg_losses)
train_step = tf.train.AdamOptimizer(0.01).minimize(normloss)
# train_step = tf.train.AdagradOptimizer(0.01).minimize(normloss)
# train_step = tf.train.MomentumOptimizer(0.01,0.2).minimize(normloss)
# train_step = tf.train.GradientDescentOptimizer(0.1).minimize(normloss)
init_g = tf.global_variables_initializer()
init_l = tf.local_variables_initializer()
with tf.Session() as sess:
sess.run(init_g)
sess.run(init_l)
# Initialize an iterator over the training dataset.
for it in range(30):
sess.run(train_init_op)
tl = 0
try:
while True:
_,ls = sess.run([train_step,normloss])
tl = tl + ls
except tf.errors.OutOfRangeError:
pass
print(it,"loss",tl)
sess.run(dev_init_op)
a,t,m,pl = sess.run([alphas,thetas,marginals,predict])
print(a)
print(t)
# MentionScorer(dev_cands, L_gold_dev).score(m[1::].flatten())
unique, counts = np.unique(pl, return_counts=True)
print(dict(zip(unique, counts)))
print(precision_recall_fscore_support(np.array(gold_labels_dev),np.array(pl),average="binary"))
print()
# Initialize an iterator over the validation dataset.
sess.run(dev_init_op)
a,t,m,pl = sess.run([alphas,thetas,marginals,predict])
print(a)
print(t)
# MentionScorer(dev_cands, L_gold_dev).score(m[1::].flatten())
unique, counts = np.unique(pl, return_counts=True)
print(dict(zip(unique, counts)))
print("acc",accuracy_score(gold_labels_dev,pl))
print(precision_recall_fscore_support(np.array(gold_labels_dev),np.array(pl)))
print(precision_recall_fscore_support(np.array(gold_labels_dev),np.array(pl),average="macro"))
cf = confusion_matrix(gold_labels_dev,pl)
print(cf)
print("prec: tp/(tp+fp)",cf[1][1]/(cf[1][1]+cf[0][1]),"recall: tp/(tp+fn)",cf[1][1]/(cf[1][1]+cf[1][0]))
print(precision_recall_fscore_support(np.array(gold_labels_dev),np.array(pl),average="binary"))
#snorkel
a =np.array([ 0.07472098, 0.07514459, 0.11910277, 0.11186369, 0.07306518,
0.69216714, 0.07467749, 0.16012659, 0.13682546, 0.08183363])
temp = a.flatten().argsort()
ranks = np.empty_like(temp)
ranks[temp] = np.arange(len(a))
print(temp)
a = np.array([-0.27082211, -0.01928787, -0.14063245, 0.37856253, 0.43681819, -0.15844807,
0.13280198, -0.01935702, -0.10775934, 0.34390113, 0.39762823, -0.14286955,
-0.39588527, -0.33699178, -0.37821404, 0.38378715, -0.39537146, 0.11504936,
0.21906794, 0.39699417, -0.27113816, 8.13838832, 0.39548336, -0.31328908,
0.25503373, 0.28019293, 0.39734506, 0.39700564, 0.37866251, -0.39156514,
0.38002959, 0.21917987, 0.13605525])
temp = a.flatten().argsort()
ranks = np.empty_like(temp)
ranks[temp] = np.arange(len(a))
print(temp)
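# The argsort trick above turns scores into ranks: temp[r] holds the index of
# the r-th smallest element, and ranks[i] holds the rank of element i.
# A tiny self-contained check (underscore names so the cell's variables are untouched):
_a = np.array([0.3, 0.1, 0.2])
_temp = _a.flatten().argsort()      # -> [1, 2, 0]
_ranks = np.empty_like(_temp)
_ranks[_temp] = np.arange(len(_a))  # -> [2, 0, 1]
print(_temp, _ranks)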
# rerun old network to get thetas
#stochastic + weighted cross entropy logits func + remove min(theta,0) in loss -- Marked
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
from tensorflow.contrib.tensorboard.plugins import projector
def train_NN():
print()
result_dir = "./"
config = projector.ProjectorConfig()
tf.logging.set_verbosity(tf.logging.INFO)
summary_writer = tf.summary.FileWriter(result_dir)
tf.reset_default_graph()
dim = 2 #(labels,scores)
_x = tf.placeholder(tf.float64,shape=(dim,len(LFs)))
alphas = tf.get_variable('alpha', _x.get_shape()[-1],initializer=tf.constant_initializer(0.2),
dtype=tf.float64)
thetas = tf.get_variable('theta', _x.get_shape()[-1],initializer=tf.constant_initializer(1),
dtype=tf.float64)
l,s = tf.unstack(_x)
prelu_out_s = tf.maximum(tf.subtract(s,alphas), tf.zeros(shape=(len(LFs)),dtype=tf.float64))
mul_L_S = tf.multiply(l,prelu_out_s)
phi_p1 = tf.reduce_sum(tf.multiply(mul_L_S,thetas))
phi_n1 = tf.reduce_sum(tf.multiply(tf.negative(mul_L_S),thetas))
phi_out = tf.stack([phi_n1,phi_p1])
predict = tf.argmax(tf.nn.softmax(phi_out))
loss = tf.negative(tf.reduce_logsumexp(phi_out))
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(loss)
check_op = tf.add_check_numerics_ops()
sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)
for i in range(5):
c = 0
te_prev=1
total_te = 0
for L_S_i in train_L_S:
a,t,te_curr,_ = sess.run([alphas,thetas,loss,train_step],feed_dict={_x:L_S_i})
total_te+=te_curr
if(abs(te_curr-te_prev)<1e-200):
break
if(c%4000==0):
pl = []
for L_S_i in dev_L_S:
a,t,de_curr,p = sess.run([alphas,thetas,loss,predict],feed_dict={_x:L_S_i})
pl.append(p)
predicted_labels = pl
print()
print(total_te/4000)
total_te=0
# print(a)
print(t)
# print()
print(predicted_labels.count(-1),predicted_labels.count(1))
print(c," d ",precision_recall_fscore_support(np.array(gold_labels_dev),np.array(predicted_labels),average='macro'))
c+=1
te_prev = te_curr
pl = []
for L_S_i in dev_L_S:
p = sess.run(predict,feed_dict={_x:L_S_i})
pl.append(p)
predicted_labels = pl
print(i,total_te)
print(predicted_labels.count(-1),predicted_labels.count(1))
print(precision_recall_fscore_support(np.array(gold_labels_dev),np.array(predicted_labels),average='macro'))
print(precision_recall_fscore_support(np.array(gold_labels_dev),np.array(predicted_labels),average='binary'))
train_NN()
# #stochastic + weighted cross entropy logits func + remove min(theta,0) in loss -- Marked
# from __future__ import absolute_import
# from __future__ import division
# from __future__ import print_function
# import tensorflow as tf
# from tensorflow.contrib.tensorboard.plugins import projector
# def train_NN():
# print()
# result_dir = "./"
# config = projector.ProjectorConfig()
# tf.logging.set_verbosity(tf.logging.INFO)
# summary_writer = tf.summary.FileWriter(result_dir)
# tf.reset_default_graph()
# dim = 2 #(labels,scores)
# _x = tf.placeholder(tf.float64,shape=(dim,len(LFs)))
# alphas = tf.get_variable('alpha', _x.get_shape()[-1],initializer=tf.constant_initializer(0.2),
# dtype=tf.float64)
# thetas = tf.get_variable('theta', _x.get_shape()[-1],initializer=tf.constant_initializer(1),
# dtype=tf.float64)
# l,s = tf.unstack(_x)
# prelu_out_s = tf.maximum(tf.subtract(s,alphas), tf.zeros(shape=(len(LFs)),dtype=tf.float64))
# mul_L_S = tf.multiply(l,prelu_out_s)
# phi_p1 = tf.reduce_sum(tf.multiply(mul_L_S,thetas))
# phi_n1 = tf.reduce_sum(tf.multiply(tf.negative(mul_L_S),thetas))
# phi_out = tf.stack([phi_n1,phi_p1])
# predict = tf.argmax(tf.nn.softmax(phi_out))
# loss = tf.negative(tf.reduce_logsumexp(phi_out))
# train_step = tf.train.GradientDescentOptimizer(0.01).minimize(loss)
# check_op = tf.add_check_numerics_ops()
# sess = tf.Session()
# init = tf.global_variables_initializer()
# sess.run(init)
# for i in range(1):
# c = 0
# te_prev=1
# total_te = 0
# for L_S_i in train_L_S:
# a,t,te_curr,_ = sess.run([alphas,thetas,loss,train_step],feed_dict={_x:L_S_i})
# total_te+=te_curr
# if(abs(te_curr-te_prev)<1e-200):
# break
# if(c%4000==0):
# pl = []
# for L_S_i in dev_L_S:
# a,t,de_curr,p = sess.run([alphas,thetas,loss,predict],feed_dict={_x:L_S_i})
# pl.append(p)
# predicted_labels = [-1 if x==0 else x for x in pl]
# print()
# print(total_te/4000)
# total_te=0
# # print(a)
# # print(t)
# # print()
# print(predicted_labels.count(-1),predicted_labels.count(1))
# print(c," d ",precision_recall_fscore_support(np.array(gold_labels_dev),np.array(predicted_labels),average='macro'))
# c+=1
# te_prev = te_curr
# pl = []
# for L_S_i in dev_L_S:
# p = sess.run(predict,feed_dict={_x:L_S_i})
# pl.append(p)
# predicted_labels = [-1 if x==0 else x for x in pl]
# print(i,total_te)
# print(predicted_labels.count(-1),predicted_labels.count(1))
# print(precision_recall_fscore_support(np.array(gold_labels_dev),np.array(predicted_labels),average='macro'))
# train_NN()
```
Global Variables
```
import print_metrics
file_path = "ITIL Books/ITIL 3/ITIL3 Service Operation chapter 4.pdf"
# extracted_text_file_path = "ITIL Books/ITIL 3/Continual service improvement chapter from notebook.txt"
# extracted_text_file_path = "ITIL Books/ITIL 3/Service operation chapter 4/Service operation chapter 4 - 4.txt"
# extracted_text_file_path = "ITIL Books\ITIL 3\Service operation chapter 4\Automated concepts extracted\\4.2\Service operation chapter 4 - 4.2 to 4.2.4 .txt"
output_file_path = "output/ITIL3 Continual Service Improvement.txt"
# Uncomment a single extracted_text_file_path and a single manual_concepts_file_path to run metric tests on that
# specific chapter subsection, e.g. 4.1 is uncommented now so metrics will be calculated for it within the last cell
# of the notebook
lemmatize = False
# lemmatize = True
### Chapter 4 start ###
# extracted_text_file_path = print_metrics.get_extracted_text_file_path(4)
# manual_concepts_file_path = print_metrics.get_manual_concepts_file_path(4)
### Chapter 4 - 4.1 to 4.1.4 ###
extracted_text_file_path = print_metrics.get_extracted_text_file_path(4.1)
manual_concepts_file_path = print_metrics.get_manual_concepts_file_path(4.1)
### Chapter 4 - 4.2 to 4.2.4 ###
# extracted_text_file_path = print_metrics.get_extracted_text_file_path(4.2)
# manual_concepts_file_path = print_metrics.get_manual_concepts_file_path(4.2)
# ### Chapter 4 - 4.3 to 4.3.4 ###
# extracted_text_file_path = print_metrics.get_extracted_text_file_path(4.3)
# manual_concepts_file_path = print_metrics.get_manual_concepts_file_path(4.3)
# ### Chapter 4 - 4.4 to 4.4.4 ###
# extracted_text_file_path = print_metrics.get_extracted_text_file_path(4.4)
# manual_concepts_file_path = print_metrics.get_manual_concepts_file_path(4.4)
# ### Chapter 4 - 4.5 to 4.5.4 ###
# extracted_text_file_path = print_metrics.get_extracted_text_file_path(4.5)
# manual_concepts_file_path = print_metrics.get_manual_concepts_file_path(4.5)
# ### Chapter 4 - 4.6 to 4.6.4 ###
# extracted_text_file_path = print_metrics.get_extracted_text_file_path(4.6)
# manual_concepts_file_path = print_metrics.get_manual_concepts_file_path(4.6)
```
Imports
```
import pdfminer
import nltk
import re
```
Global Functions
```
with open(extracted_text_file_path, 'r') as file:
extracted_text = file.read()
# extracted_text = "Particle dynamics involves the study of physics and chemistry"
# tokens = nltk.word_tokenize(extracted_text)
# print(tokens)
# ### Part of speech tagging ###
# part_of_speech_array = nltk.pos_tag(tokens)
# print(part_of_speech_array)
```
Text sanitization and word tokenizing
```
### Grab sections from text ###
# print(re.findall("^\d(\.|\d)*(\s|\w)*$", extracted_text))
# title_pattern = re.compile(r"^\d(\.|\d)*(\s|\w)*$", re.MULTILINE)
title_pattern = re.compile(r"^\d+.*$", re.MULTILINE)
sections = title_pattern.findall(extracted_text)
for section in sections[:]:  # iterate over a copy: calling remove() while iterating skips elements
if not (section.find("%") == -1 and section.find(")") == -1):
sections.remove(section)
### Sanitise extracted text ###
extracted_text_sanitised = extracted_text.replace("¦", "")
extracted_text_sanitised = extracted_text_sanitised.replace("–", "")
### Tokenise extracted text ###
tokens = nltk.word_tokenize(extracted_text_sanitised)
# print(tokens)
```
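The section-title pattern above keeps any line that begins with a digit (e.g. "4.1 Event management") and then filters out lines containing "%" or ")". A minimal self-contained check on a synthetic snippet (the filter is applied with a comprehension rather than in-place removal):

```
import re

sample = "4 Service operation\nIntroductory prose.\n4.1 Event management\n90% of cases\n"
title_pattern = re.compile(r"^\d+.*$", re.MULTILINE)
titles = [t for t in title_pattern.findall(sample)
          if t.find("%") == -1 and t.find(")") == -1]
print(titles)  # → ['4 Service operation', '4.1 Event management']
```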
Part-of-speech tagging
```
### Part of speech tagging ###
part_of_speech_array = nltk.pos_tag(tokens)
# print(part_of_speech_array)
from nltk.stem import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()
def lemmatize_part_of_speech_array(part_of_speech_array):
part_of_speech_array_lemmatized = []
for part_of_speech in part_of_speech_array:
part_of_speech_array_lemmatized.append(
(lemmatizer.lemmatize(part_of_speech[0]), part_of_speech[1])
)
return part_of_speech_array_lemmatized
# print(part_of_speech_array_lemmatized)
# Uncomment this if lemmatization is to be used
if lemmatize:
part_of_speech_array = lemmatize_part_of_speech_array(part_of_speech_array)
```
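The lemmatization pass above maps each `(token, tag)` pair to `(lemma, tag)`. Its shape can be checked with a stand-in lemmatizer (a toy suffix rule replaces `WordNetLemmatizer` so the sketch needs no corpus download; names are distinct from the cell's to avoid shadowing):

```
class ToyLemmatizer:
    # stand-in for WordNetLemmatizer: strips a trailing plural "s"
    def lemmatize(self, word):
        return word[:-1] if word.endswith("s") and len(word) > 3 else word

toy_lemmatizer = ToyLemmatizer()

def toy_lemmatize_pos_array(pos_array):
    # same structure as the cell above: lemmatize the token, keep the tag
    return [(toy_lemmatizer.lemmatize(tok), tag) for tok, tag in pos_array]

print(toy_lemmatize_pos_array([("services", "NNS"), ("ran", "VBD")]))
# → [('service', 'NNS'), ('ran', 'VBD')]
```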
Term extraction
```
### Term Extraction (NNP next to each other) ###
def extract_terms(part_of_speech_array_array, tags_to_use):
terms_array = []
term_phrase = []
start_new_term = True
for index, part in enumerate(part_of_speech_array_array):
if(part[1] in tags_to_use):
term_phrase.append(part[0])
start_new_term = not (index + 1 < len(part_of_speech_array_array) and part_of_speech_array_array[index + 1][1] in tags_to_use)  # bounds check so the final token closes its phrase
if start_new_term == True:
terms_array.append(" ".join(term_phrase))
term_phrase = []
return terms_array
### Term Extraction including adjectives (adjacent tagged words) ###
def extract_terms_with_adj(part_of_speech_array_array, tags_to_use):
terms_array = []
term_phrase = []
start_new_term = True
for index, part in enumerate(part_of_speech_array_array):
if(part[1] in tags_to_use):
term_phrase.append(part[0])
            # Guard the lookahead so the last token does not raise an IndexError.
            start_new_term = not (index + 1 < len(part_of_speech_array_array)
                                  and part_of_speech_array_array[index + 1][1] in tags_to_use)
if start_new_term == True:
terms_array.append(" ".join(term_phrase))
term_phrase = []
return terms_array
major_named_concepts = extract_terms(part_of_speech_array, {"NNP", "NNPS"})
other_concepts = extract_terms(part_of_speech_array, {"NN", "NNS"})
all_noun_phrases = extract_terms(part_of_speech_array, {"NNP", "NNPS", "NN", "NNS"})
all_noun_phrases_with_adj = extract_terms_with_adj(part_of_speech_array, {"NNP", "NNPS", "NN", "NNS", "JJ"})
print(f"major named concepts: \n{major_named_concepts}")
print(f"\nother concepts: \n{other_concepts}")
print(f"\nall noun phrases: \n{all_noun_phrases}")
print(f"\nall noun phrases with adj: \n{all_noun_phrases_with_adj}")
```
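The adjacent-tag grouping that `extract_terms` performs can be sketched without NLTK. The tiny tagged sentence below is invented for illustration; only the grouping rule (consecutive tokens whose tag is in `tags_to_use` are merged into one term) mirrors the function above:

```python
# NLTK-free sketch of the adjacent-tag grouping: consecutive tokens whose POS
# tag is in tags_to_use are joined into a single multi-word term.
def group_terms(tagged, tags_to_use):
    terms, phrase = [], []
    for word, tag in tagged:
        if tag in tags_to_use:
            phrase.append(word)
        elif phrase:
            terms.append(" ".join(phrase))
            phrase = []
    if phrase:  # flush a phrase that ends the sentence
        terms.append(" ".join(phrase))
    return terms

tagged = [("Service", "NNP"), ("Operation", "NNP"), ("manages", "VBZ"),
          ("daily", "JJ"), ("incidents", "NNS")]
print(group_terms(tagged, {"NNP", "NNPS"}))  # proper nouns grouped together
print(group_terms(tagged, {"NN", "NNS"}))    # common nouns only
```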
Major/common concept extraction
```
# concept_relationships = extract_terms(part_of_speech_array, {"VP"})
# print(concept_relationships)
# print(all_noun_phrases)
### Perform frequency analysis ###
### Concept Extraction Frequency analysis ###
major_named_concept_frequency_distribution = nltk.FreqDist(major_named_concepts)
other_concept_frequency_distribution = nltk.FreqDist(other_concepts)
all_noun_phrases_frequency_distribution = nltk.FreqDist(all_noun_phrases)
print(f"major named concepts: \n{major_named_concept_frequency_distribution.most_common(50)}")
print(f"\nother concepts: \n{other_concept_frequency_distribution.most_common(50)}")
print(f"\nall noun phrases: \n{all_noun_phrases_frequency_distribution.most_common(50)}")
```
Concept relationship extraction
```
def get_sentence_at_index(part_of_speech_array, index):
sentence_starting_index = 0
sentence_end_index = len(part_of_speech_array)
### Get sentence start index ###
for i in range(0, index):
# print(part_of_speech_array[index])
if part_of_speech_array[index - i][1] == ".":
sentence_starting_index = index - i
break
### Get sentence end index ###
    # Search forward, stopping at the end of the array to avoid an IndexError.
    for i in range(0, len(part_of_speech_array) - index):
if part_of_speech_array[index + i][1] == ".":
sentence_end_index = index + i
break
return (sentence_starting_index, sentence_end_index + 1)
### Term Extraction (NNP next to each other) ###
def extract_terms(part_of_speech_array_array, tags_to_use):
part_of_speech_array_with_terms = []
terms_array = []
term_phrase = []
start_new_term = True
for index, part in enumerate(part_of_speech_array_array):
if(part[1] in tags_to_use):
term_phrase.append(part[0])
            # Guard the lookahead so the last token does not raise an IndexError.
            start_new_term = not (index + 1 < len(part_of_speech_array_array)
                                  and part_of_speech_array_array[index + 1][1] in tags_to_use)
if start_new_term == True:
if len(term_phrase) > 1:
# part_of_speech_array_with_terms.append((" ".join(term_phrase), f"NPhrase-{part[1]}"))
part_of_speech_array_with_terms.append((" ".join(term_phrase), "NPhrase"))
else:
part_of_speech_array_with_terms.append((" ".join(term_phrase), part[1]))
term_phrase = []
else:
part_of_speech_array_with_terms.append((part[0], part[1]))
return part_of_speech_array_with_terms
part_of_speech_array_with_terms = extract_terms(part_of_speech_array, {"NNP", "NNPS", "NN", "NNS"})
# print(part_of_speech_array_with_terms)
sentences = []
temp_sentence = []
for word in part_of_speech_array_with_terms:
    if word[1] == ".":  # use ==, not "is", for string comparison
temp_sentence.append(word)
sentences.append(temp_sentence)
temp_sentence = []
else:
temp_sentence.append(word)
from nltk.corpus import stopwords
stop_words = set(stopwords.words('english'))
# for index, sentence in enumerate(sentences):
# sentence_with_no_stop_words = [word_pos for word_pos in sentence if not word_pos[0] in stop_words]
#
# if index > 3:
# break
#
# print(sentence)
# print(sentence_with_no_stop_words)
# print("-" * 100)
# from part_of_speech_regex import PartOfSpeechRegex
class PartOfSpeechRegex:
    # Placeholder stub for the commented-out part_of_speech_regex module above:
    # it only echoes the pattern and does not yet match it against the sentence.
    def parseAndReturnPatterns(self, pattern_string, sentence):
        print(pattern_string)
pattern_string = "*<JJ><NNP><VBZ><NP>"
print(sentences[3])
part_of_speech_regex = PartOfSpeechRegex()
# part_of_speech_regex.parseAndReturnPatterns()
part_of_speech_regex.parseAndReturnPatterns(pattern_string, sentences[3])
most_common_major_concepts = major_named_concept_frequency_distribution.most_common(50)
# print(most_common_major_concepts)
# print(tokens.index(most_common_major_concepts[0][0]))
# print(part_of_speech_array[206])
## Get indices of all common concepts
indices = [i for i, x in enumerate(part_of_speech_array) if x[0] == most_common_major_concepts[0][0]]
# print(indices)
# print(most_common_major_concepts[1][0])
def get_sentence_at_index(part_of_speech_array, index):
sentence_starting_index = 0
sentence_end_index = len(part_of_speech_array)
### Get sentence start index ###
for i in range(0, index):
if part_of_speech_array[index - i][1] == ".":
sentence_starting_index = index - i
break
### Get sentence end index ###
    # Search forward, stopping at the end of the array to avoid an IndexError.
    for i in range(0, len(part_of_speech_array) - index):
if part_of_speech_array[index + i][1] == ".":
sentence_end_index = index + i
break
return (sentence_starting_index + 1, sentence_end_index + 1)
def does_list_contain_verb_pos(part_of_speech_array):
for word_pos in part_of_speech_array:
if word_pos[1] in ['VB', 'VBD', 'VBG', 'VBN', 'VBP', 'VBZ']:
return True
return False
# sentence_index = get_sentence_at_index(part_of_speech_array, indices[0])
# print(sentence_index)
concept_relations = []
i = 0
for index in indices:
# if i < 3:
# i = i + 1
# continue
sentence_pos_containing_concept = part_of_speech_array[
get_sentence_at_index(part_of_speech_array, index)[0]:
get_sentence_at_index(part_of_speech_array, index)[1]
]
print(sentence_pos_containing_concept)
last_concept = ()
last_concept_index = -1
# For word part_of_speech in sentence_part_of_speech_containing_concept
for index, word_pos in enumerate(sentence_pos_containing_concept):
# print(f"{word_pos[0]}: {word_pos[1]}")
# print(word_pos[0] in all_noun_phrases)
# if (word_pos[0] in all_noun_phrases):
if (word_pos[0] in major_named_concepts):
if last_concept_index != -1:# and does_list_contain_verb_pos(sentence_pos_containing_concept[last_concept_index + 1:index]):
concept_relations.append(f"{last_concept}::{sentence_pos_containing_concept[last_concept_index + 1:index]}::{word_pos}")
last_concept = word_pos
last_concept_index = index
# print(related_concepts)
# print()
# print('-----')
i = i + 1
# if i == 4:
# break
for concept_relation in concept_relations:
print(concept_relation)
```
Metrics for term extraction chapter 4 first section
```
# This cell's metric code has been moved into the print_metrics.py module so it
# can be used for iterations 1 and 2 without duplication.
# # automatic_concepts_file_path = "ITIL Books/ITIL 3/Service operation chapter 4/Automated concepts extracted/4/Automated concepts extracted 4.txt"
# # manual_concepts_file_path = "ITIL Books/ITIL 3/Service operation chapter 4/Automated concepts extracted/4/Manual concepts extracted 4.txt"
#
# # automatic_concepts_file_path = "ITIL Books/ITIL 3/Service operation chapter 4/Automated concepts extracted/4.2/Automated concepts extracted 4.2.txt"
# # manual_concepts_file_path = "ITIL Books/ITIL 3/Service operation chapter 4/Automated concepts extracted/4.2/Manual concepts extracted 4.2.txt"
# #
# # with open(automatic_concepts_file_path, 'r') as file:
# # automatic_concepts = file.read()
#
# with open(manual_concepts_file_path, 'r') as file:
# manual_concepts = file.read()
#
# manual_concepts_list = manual_concepts.split('\n')
# manual_concepts_list = [x.lower() for x in manual_concepts_list]
#
# # print("Manual concepts")
# # print(list(dict.fromkeys(manual_concepts_list)))
# # print()
#
# # automatic_concepts_list = ['Service Operation', 'processes', 'paragraph', 'detail', 'chapter', 'reference', 'structure', 'processes', 'detail', 'chapter', 'Please note', 'roles', 'process', 'tools', 'process', 'Chapters', 'Management', 'process', 'monitors', 'events', 'IT infrastructure', 'operation', 'exception conditions', 'Incident Management', 'service', 'users', 'order', 'business impact', 'Problem Management', 'root-cause analysis', 'cause', 'events', 'incidents', 'activities', 'problems/incidents', 'Known Error subprocess', 'quicker diagnosis', 'resolution', 'incidents', 'NOTE', 'distinction', 'incidents', 'problems', 'Incident', 'Problem Records', 'danger', 'Incidents', 'support cycle', 'actions', 'recurrence', 'incidents', 'Incidents', 'root cause analysis', 'visibility', 'user ’ s service', 'SLA targets', 'service', 'users', 'expectations', 'results', 'number', 'incidents', '‘ purge ’', 'visibility', 'issues', 'Request Fulfilment', 'management', 'customer', 'user requests', 'incident', 'service delay', 'disruption', 'organizations', 'requests', 'category ’', 'incidents', 'information', 'Incident Management system', 'others', 'volumes', 'business priority', 'requests', 'provision', 'Request Fulfilment', 'Request Fulfilment process', 'practice', 'Request Fulfilment process', 'customer', 'user requests', 'types', 'requests', 'facilities', 'moves', 'supplies', 'IT services', 'requests', 'SLA measures', 'records', 'process flow', 'practice', 'organizations', 'Access Management', 'process', 'users', 'right', 'service', 'access', 'users', 'users', 'ability', 'access services', 'stages', 'resources', 'HR', 'lifecycle', 'Access Management', 'Identity', 'Rights Management', 'organizations', 'addition', 'processes', 'Service Operation', 'phases', 'Service Management Lifecycle', 'aspects', 'processes', 'part', 'chapter', 'include', 'Change Management', 'process', 'Configuration Management', 'Release Management', 'topics', 'Service Transition publication', 
'Capacity', 'Availability Management', 'aspects', 'publication', 'detail', 'Service Design publication', 'Financial Management', 'Service Strategy publication', 'Knowledge Management', 'Service Transition publication', 'IT Service Continuity', 'Service Design publication', 'Service Reporting', 'Measurement', 'Continual Service Improvement publication']
# automatic_concepts_list = all_noun_phrases
# automatic_concepts_list = [x.lower() for x in automatic_concepts_list]
#
# # print("all noun phrases")
# # print(list(dict.fromkeys(automatic_concepts_list)))
#
# count = 0
# for concept in manual_concepts_list:
# if concept in automatic_concepts_list:
# count = count + 1
#
# number_of_fully_correct_manual_concepts = count
#
# number_of_manual_concepts = len(manual_concepts_list)
#
# count = 0
# for concept in automatic_concepts_list:
# if concept in manual_concepts_list:
# count = count + 1
#
# number_of_fully_correct_automatic_concepts = count
#
# number_of_automatic_concepts = len(automatic_concepts_list)
#
# print(f"number_of_manual_concepts: {number_of_manual_concepts}")
# print(f"number_of_automatic_concepts: {number_of_automatic_concepts}")
# print(f"number_of_fully_correct_manual_concepts: {number_of_fully_correct_manual_concepts}")
# print(f"number_of_fully_correct_automatic_concepts: {number_of_fully_correct_automatic_concepts}")
#
# # Lists to words for partial matches
# automatic_concepts_list_single_words = [x.split() for x in automatic_concepts_list]
# # print(automatic_concepts_list_single_words)
#
# manual_concepts_list_single_words = [x.split() for x in manual_concepts_list]
# # print(manual_concepts_list_single_words)
#
# count = 0
# for concept in manual_concepts_list_single_words:
# for word in concept:
# if word in ' '.join(automatic_concepts_list).split():
# count = count + 1
# break
#
# number_of_full_and_partial_correct_manual_concepts = count
# print(f"number_of_full_and_partial_correct_manual_concepts: {number_of_full_and_partial_correct_manual_concepts}")
#
# count = 0
# for concept in automatic_concepts_list_single_words:
# for word in concept:
# if word in ' '.join(manual_concepts_list).split():
# count = count + 1
# break
#
# number_of_full_and_partial_correct_automatic_concepts = count
# print(f"number_of_full_and_partial_correct_automatic_concepts: {number_of_full_and_partial_correct_automatic_concepts}")
import print_metrics

# NOTE: manual_concepts_file_path must be set to the manually extracted concepts
# file (see the commented-out example paths above) before calling print_metrics.
part_of_speech_array = nltk.pos_tag(tokens)
lemmatized_part_of_speech_array = lemmatize_part_of_speech_array(part_of_speech_array)
# NOTE: the return values are discarded here; assign them if the grouped terms are needed.
extract_terms(part_of_speech_array, {"NNP", "NNPS", "NN", "NNS"})
extract_terms(lemmatized_part_of_speech_array, {"NNP", "NNPS", "NN", "NNS"})
print("With duplicates")
print_metrics.print_metrics(manual_concepts_file_path, all_noun_phrases, debug=True, remove_duplicates=False)
print()
print("Without duplicates")
print_metrics.print_metrics(manual_concepts_file_path, all_noun_phrases, debug=True, remove_duplicates=True)
```
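The exact-match counting that the commented-out cell performs (and which `print_metrics` now encapsulates) amounts to precision and recall over concept lists. A minimal, self-contained sketch — the concept lists below are invented for illustration, and this helper is not the `print_metrics` module itself:

```python
# Exact-match precision/recall over lowercased concept lists: recall counts
# manual concepts found among the automatic ones, precision the reverse.
def concept_metrics(manual_concepts, automatic_concepts):
    manual = [c.lower() for c in manual_concepts]
    automatic = [c.lower() for c in automatic_concepts]
    recall = sum(c in automatic for c in manual) / len(manual)
    precision = sum(c in manual for c in automatic) / len(automatic)
    return precision, recall

precision, recall = concept_metrics(
    ["Incident Management", "root cause", "service"],
    ["incident management", "service", "tooling", "service desk"],
)
print(f"precision: {precision:.2f}  recall: {recall:.2f}")
```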
# Sample notebook showcasing R on Jupyter
An overview of some plotting controls available in R for visualizing networks and visualizing tree models.
To execute a cell, select it and then use **[Shift] + [Enter]**.
```
# Default plot size is 7 inches x 7 inches; change to 7 x 3
options(repr.plot.height=3)
library(rpart) # CART tree models
library(rpart.plot) # Pretty plotting
library(vcd) # Spline plotting
titanic <- as.data.frame(Titanic)
head(titanic, n=5)
summary(titanic)
```
## Data visualization
Before making the tree models, try some visualization.
```
Survival.by.Sex <- xtabs(Freq~Sex+Survived, data=titanic)
Survival.by.Class <- xtabs(Freq~Class+Survived, data=titanic)
Survival.by.Age <- xtabs(Freq~Age+Survived, data=titanic)
oldpar <- par(mfrow=c(1,3))
options(repr.plot.width=7)
spineplot(Survival.by.Sex, col=c(rgb(0, 0, 0.5), rgb(0.3, 0.3, 1)))
spineplot(Survival.by.Class, col=c(rgb(0, 0, 0.5), rgb(0.3, 0.3, 1)))
spineplot(Survival.by.Age, col=c(rgb(0, 0, 0.5), rgb(0.3, 0.3, 1)))
par(oldpar)
cart.control <- rpart.control(minbucket=1, cp=0, maxdepth=5)
model.cart <- rpart(
Survived ~ . ,
data=titanic[ , -5],
weights=titanic$Freq,
method="class",
#xval=10,
control=cart.control
)
print(model.cart)
printcp(model.cart)
# The standard Tree plot
plot(model.cart, margin=0.01)
text(model.cart, use.n=TRUE, cex=.8)
options(repr.plot.height=5)
# Better visualization using rpart.plot
prp(x=model.cart,
fallen.leaves=TRUE, branch=.5, faclen=0, trace=1,
extra=1, under=TRUE,
branch.lty=3,
split.box.col="whitesmoke", split.border.col="darkgray", split.round=0.4)
# Confusion Matrix given a cutoff
threshold = 0.8
cm <- table(titanic$Survived,
predict(model.cart, titanic[,-5], type="prob")[,2] > threshold)
print(cm)
```
# For fun, let's make a Caffeine molecule
This notebook also demonstrates importing an extra library, `igraph`.
The Docker container installs this up front, so the student does not need to install anything
(grep the container setup for igraph).
We'll use an adjacency matrix to describe the network topology of Caffeine, and create the graph
using `graph.adjacency(<the-adjacency-matrix>)` to demonstrate some standard selection and
plotting functions using R's `igraph` library. The chemical formula below demonstrates use of inline LaTeX math markup, and the image inline image placement.
$$C_8H_{10}N_4O_2$$
<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/a/a1/Koffein_-_Caffeine.svg/220px-Koffein_-_Caffeine.svg.png" alt="Caffeine molecule"></img>
```
library(igraph)
caffeine.adjacency <- as.matrix(read.table("caffeine.txt", sep=" "))
caffeine <- graph.adjacency(caffeine.adjacency, mode='undirected')
V(caffeine)$name <- strsplit('CHHHNCOCNCHHHCHNCNCHHHCO', '')[[1]]
V(caffeine)$color <- rgb(1, 1, 1)
V(caffeine)[name == 'C']$color <- rgb(0, 0, 0, 0.7)
V(caffeine)[name == 'O']$color <- rgb(1, 0, 0, 0.7)
V(caffeine)[name == 'N']$color <- rgb(0, 0, 1, 0.7)
options(repr.plot.height=5, repr.plot.width=5)  # set plot size before plotting
plot(caffeine)
```
```
# Install RAPIDS (takes ~10 min).
!git clone https://github.com/rapidsai/rapidsai-csp-utils.git
!bash rapidsai-csp-utils/colab/rapids-colab.sh 0.18
import sys, os
dist_package_index = sys.path.index('/usr/local/lib/python3.7/dist-packages')
sys.path = sys.path[:dist_package_index] + ['/usr/local/lib/python3.7/site-packages'] + sys.path[dist_package_index:]
sys.path
exec(open('rapidsai-csp-utils/colab/update_modules.py').read(), globals())
# https://github.com/NVIDIA/NVTabular/blob/main/examples/winning-solution-recsys2020-twitter/01-02-04-Download-Convert-ETL-with-NVTabular-Training-with-XGBoost.ipynb
# Needed to fix conda and install nvtabular.
!conda install https://repo.anaconda.com/pkgs/main/linux-64/conda-4.9.2-py37h06a4308_0.tar.bz2
!pip install git+https://github.com/NVIDIA/NVTabular.git@main
# For rapidsai 0.19 ONLY, not working.
"""
!sudo add-apt-repository ppa:ubuntu-toolchain-r/test
!sudo apt-get update
!sudo apt-get install gcc-4.9
!sudo apt-get upgrade libstdc++6
!sudo apt-get dist-upgrade
!strings /usr/lib/x86_64-linux-gnu/libstdc++.so.6 | grep GLIBCXX"""
# External Dependencies
import time
import glob
import gc
import cupy as cp # CuPy is an implementation of NumPy-compatible multi-dimensional array on GPU
import cudf # cuDF is an implementation of Pandas-like Dataframe on GPU
import rmm # library for pre-allocating memory on GPU
import dask # dask is an open-source library to natively scale Python on multiple workers/nodes
import dask_cudf # dask_cudf uses dask to scale cuDF dataframes on multiple workers/nodes
import numpy as np
# NVTabular is the core library, we will use here for feature engineering/preprocessing on GPU
import nvtabular as nvt
import xgboost as xgb
# More dask / dask_cluster related libraries to scale NVTabular
from dask_cuda import LocalCUDACluster
from dask.distributed import Client
from dask.distributed import wait
from dask.utils import parse_bytes
from dask.delayed import delayed
from nvtabular.utils import device_mem_size
from nvtabular.column_group import ColumnGroup
!nvidia-smi
time_total_start = time.time()
# Assume dataset in MyDrive/RecSys2021
from google.colab import drive
drive.mount('/content/drive')
BASE_DIR = '/content/drive/MyDrive/RecSys2021/'
cluster = LocalCUDACluster(
protocol="tcp"
)
client = Client(cluster)
client
# Preparing our dataset
features = [
'text_tokens', ###############
'hashtags', #Tweet Features
'tweet_id', #
'media', #
'links', #
'domains', #
'tweet_type', #
'language', #
'timestamp', ###############
'a_user_id', ###########################
'a_follower_count', #Engaged With User Features
'a_following_count', #
'a_is_verified', #
'a_account_creation', ###########################
'b_user_id', #######################
'b_follower_count', #Engaging User Features
'b_following_count', #
'b_is_verified', #
'b_account_creation', #######################
'b_follows_a', #################### Engagement Features
'reply', #Target Reply
'retweet', #Target Retweet
'retweet_comment',#Target Retweet with comment
'like', #Target Like
####################
]
# Intended to split the entries in media on \t and keep only the first two values.
# NOTE: the early return below currently disables the split, so the column is
# passed through unchanged; remove that line to re-enable the code after it.
def splitmedia(col):
    if col.shape[0] == 0:
        return col
    else:
        return col  # split disabled
        return (col.str.split('\t', expand=True)[0].fillna('') + '_'
                + col.str.split('\t', expand=True)[1].fillna(''))
# Counts the number of token in a column (e.g. how many hashtags are in a tweet).
def count_token(col,token):
not_null = col.isnull()==0
return ((col.str.count(token)+1)*not_null).fillna(0)
# NVTabular overloads >>: it pipes a set of columns through an op, producing transformed columns.
count_features = (
nvt.ColumnGroup(['hashtags', 'domains', 'links']) >> (lambda col: count_token(col,'\t')) >> nvt.ops.Rename(postfix = '_count_t')
)
split_media = nvt.ColumnGroup(['media']) >> (lambda col: splitmedia(col))
# Encode categorical columns as a small, continuous integer to save memory.
# Before we can apply Categorify, we need to fill na/missing values in the columns hashtags, domains and links.
multihot_filled = ['hashtags', 'domains', 'links'] >> nvt.ops.FillMissing()
cat_features = (
split_media + multihot_filled + ['language', 'tweet_type', 'tweet_id', 'a_user_id', 'b_user_id'] >>
nvt.ops.Categorify()
)
label_name = ['reply', 'retweet', 'retweet_comment', 'like']
label_name_feature = label_name >> nvt.ops.FillMissing()
weekday = (
nvt.ColumnGroup(['timestamp']) >>
(lambda col: cudf.to_datetime(col, unit='s').dt.weekday) >>
nvt.ops.Rename(postfix = '_wd')
)
output = count_features+cat_features+label_name_feature+weekday
(output).graph
remaining_columns = [x for x in features if x not in (output.columns+['text_tokens'])]
remaining_columns
proc = nvt.Workflow(output+remaining_columns)
data_parts = []
for file in os.listdir(BASE_DIR):
if file.endswith(".tsv"):
data_parts.append(os.path.join(BASE_DIR, file))
trains_itrs = nvt.Dataset(data_parts,
header=None,
names=features,
engine='csv',
sep='\x01',
part_size='2GB')
client = Client(cluster) # Sample client connecting to `cluster` object
# client.run(cudf.set_allocator, "managed") # Uses managed memory instead of "default"
import torch, gc
gc.collect()
torch.cuda.empty_cache()
time_preproc_start = time.time()
proc.fit(trains_itrs)
time_preproc = time.time()-time_preproc_start
time_preproc
# We define the output datatypes for continuous columns to save memory. We can define the output datatypes as a dict and parse it to the to_parquet function.
dict_dtypes = {}
for col in label_name + ['media', 'language', 'tweet_type', 'tweet_id',
'a_user_id', 'b_user_id', 'hashtags', 'domains',
'links', 'timestamp', 'a_follower_count',
'a_following_count', 'a_account_creation',
'b_follower_count', 'b_following_count', 'b_account_creation']:
dict_dtypes[col] = np.uint32
time_preproc_start = time.time()
proc.transform(trains_itrs).to_parquet(output_path=BASE_DIR + 'preprocess/', dtypes=dict_dtypes)
time_preproc += time.time()-time_preproc_start
time_preproc
# Splitting dataset
# We split the training data by time into a train and validation set. The first 5 days are train and the last 2 days are for validation. We use the weekday for it.
# The first day of the dataset is a Thursday (weekday id = 3) and the last day is Wednesday (weekday id = 2) (Not sure for this year).
time_split_start = time.time()
import pandas as pd
df = dask_cudf.read_parquet(BASE_DIR + 'preprocess/*.parquet')
if 'text_tokens' in list(df.columns):
df = df.drop('text_tokens', axis=1)
VALID_DOW = [1, 2]
# pd.set_option('display.max_rows', 100)
# df.head(100)
valid = df[df['timestamp_wd'].isin(VALID_DOW)].reset_index(drop=True)
train = df[~df['timestamp_wd'].isin(VALID_DOW)].reset_index(drop=True)
train = train.sort_values(["b_user_id", "timestamp"]).reset_index(drop=True)
valid = valid.sort_values(["b_user_id", "timestamp"]).reset_index(drop=True)
train.to_parquet(BASE_DIR + 'nv_train/')
valid.to_parquet(BASE_DIR + 'nv_valid/')
time_split = time.time()-time_split_start
time_split
del train; del valid
gc.collect()
# Feature Engineering
# We count encode the columns media, tweet_type, language, a_user_id, b_user_id.
# For counting encoding info see https://github.com/rapidsai/deeplearning/blob/main/RecSys2020Tutorial/03_4_CountEncoding.ipynb
count_encode = (
['media', 'tweet_type', 'language', 'a_user_id', 'b_user_id'] >>
nvt.ops.JoinGroupby(cont_cols=['reply'],stats=["count"], out_path='./')
)
# We transform timestamp to datetime type and extract hours, minutes, seconds.
datetime = nvt.ColumnGroup(['timestamp']) >> (lambda col: cudf.to_datetime(col.astype('int32'), unit='s'))
hour = datetime >> (lambda col: col.dt.hour) >> nvt.ops.Rename(postfix = '_hour')
minute = datetime >> (lambda col: col.dt.minute) >> nvt.ops.Rename(postfix = '_minute')
seconds = datetime >> (lambda col: col.dt.second) >> nvt.ops.Rename(postfix = '_second')
# We difference encode b_follower_count, b_following_count, language grouped by b_user_id. First, we need to transform the datatype to float32 to prevent overflow/underflow.
# After DifferenceEncoding, we want to fill NaN values with 0.
# For difference encoding info see https://github.com/rapidsai/deeplearning/blob/main/RecSys2020Tutorial/05_2_TimeSeries_Differences.ipynb
diff_lag = (
nvt.ColumnGroup(['b_follower_count','b_following_count','language']) >>
(lambda col: col.astype('float32')) >>
nvt.ops.DifferenceLag(partition_cols=['b_user_id'], shift = [1, -1]) >>
nvt.ops.FillMissing(fill_val=0)
)
# Transform targets in binary labels.
LABEL_COLUMNS = ['reply', 'retweet', 'retweet_comment', 'like']
labels = nvt.ColumnGroup(LABEL_COLUMNS) >> (lambda col: (col>0).astype('int8'))
# We apply TargetEncoding with kfold of 5 and smoothing of 20.
# For target encoding info see https://medium.com/rapids-ai/target-encoding-with-rapids-cuml-do-more-with-your-categorical-data-8c762c79e784
# and https://github.com/rapidsai/deeplearning/blob/main/RecSys2020Tutorial/03_3_TargetEncoding.ipynb
target_encode = (
['media', 'tweet_type', 'language', 'a_user_id', 'b_user_id',
['domains','language','b_follows_a','tweet_type','media','a_is_verified']] >>
nvt.ops.TargetEncoding(
labels,
kfold=5,
p_smooth=20,
out_dtype="float32",
)
)
output = count_encode+hour+minute+seconds+diff_lag+labels+target_encode
(output).graph
# We want to keep all columns of the input dataset. Therefore, we extract all column names from the first input parquet file.
df_tmp = cudf.read_parquet(BASE_DIR + '/nv_train/part.0.parquet')
all_input_columns = df_tmp.columns
del df_tmp
gc.collect()
remaining_columns = [x for x in all_input_columns if x not in (output.columns+['text_tokens'])]
remaining_columns
# We initialize our NVTabular workflow and add the "remaining" columns to it.
proc = nvt.Workflow(output+remaining_columns)
# We initialize the train and valid as NVTabular datasets.
train_dataset = nvt.Dataset(glob.glob(BASE_DIR + 'nv_train/*.parquet'),
engine='parquet',
part_size="2GB")
valid_dataset = nvt.Dataset(glob.glob(BASE_DIR + 'nv_valid/*.parquet'),
engine='parquet',
part_size="2GB")
time_fe_start = time.time()
proc.fit(train_dataset)
time_fe = time.time()-time_fe_start
time_fe
# The columns a_is_verified, b_is_verified and b_follows_a have the datatype boolean.
# XGBoost does not support boolean datatypes and we need convert them to int8. We can define the output datatypes as a dict and parse it to the .to_parquet function.
dict_dtypes = {}
for col in ['a_is_verified','b_is_verified','b_follows_a']:
dict_dtypes[col] = np.int8
# We apply the transformation to the train and valid datasets.
time_fe_start = time.time()
proc.transform(train_dataset).to_parquet(output_path=BASE_DIR + 'nv_train_fe/', dtypes=dict_dtypes)
proc.transform(valid_dataset).to_parquet(output_path=BASE_DIR + 'nv_valid_fe/', dtypes=dict_dtypes)
time_fe += time.time()-time_fe_start
time_fe
# Training
train = dask_cudf.read_parquet(BASE_DIR + 'nv_train_fe/*.parquet')
valid = dask_cudf.read_parquet(BASE_DIR + 'nv_valid_fe/*.parquet')
train[['a_is_verified','b_is_verified','b_follows_a']].dtypes
# Some columns are only used for feature engineering. Therefore, we define the columns we want to ignore for training.
dont_use =[
'__null_dask_index__',
'text_tokens',
'timestamp',
'a_account_creation',
'b_account_creation',
'hashtags',
'tweet_id',
'links',
'domains',
'a_user_id',
'b_user_id',
'timestamp_wd',
'timestamp_to_datetime',
'a_following_count_a_ff_rate',
'b_following_count_b_ff_rate'
]
dont_use = [x for x in train.columns if x in dont_use]
label_names = ['reply', 'retweet', 'retweet_comment', 'like']
# Nvidia's experiments show that only 10% of the training dataset is required: the feature
# engineering (such as TargetEncoding) is fitted on the full dataset, so its information is
# preserved even when training on a sample.
# In the competition, Nvidia trained models on higher ratios (20% and 50%) without observing
# an improvement in performance.
# Since I'm using only a small part of the dataset, I will use (almost) all of it.
SAMPLE_RATIO = 0.999 # 0.1
SEED = 1
if SAMPLE_RATIO < 1.0:
train['sample'] = train['tweet_id'].map_partitions(lambda cudf_df: cudf_df.hash_encode(stop=10))
print(len(train))
train = train[train['sample']<10*SAMPLE_RATIO]
train, = dask.persist(train)
print(len(train))
Y_train = train[label_names]
Y_train, = dask.persist(Y_train)
train = train.drop(['sample']+label_names+dont_use,axis=1)
train, = dask.persist(train)
print('Using %i features'%(train.shape[1]))
# Similar to the training dataset, Nvidia experiments show that 35% of our validation dataset is enough to get a good estimate of the performance metric.
# 35% of the validation dataset has a similar size as the test set of the RecSys2020 competition.
SAMPLE_RATIO = 0.999 # 0.35
SEED = 1
if SAMPLE_RATIO < 1.0:
print(len(valid))
valid['sample'] = valid['tweet_id'].map_partitions(lambda cudf_df: cudf_df.hash_encode(stop=10))
valid = valid[valid['sample']<10*SAMPLE_RATIO]
valid, = dask.persist(valid)
print(len(valid))
Y_valid = valid[label_names]
Y_valid, = dask.persist(Y_valid)
valid = valid.drop(['sample']+label_names+dont_use,axis=1)
valid, = dask.persist(valid)
# We initialize our XGBoost parameter.
print('XGB Version',xgb.__version__)
xgb_parms = {
'max_depth':8,
'learning_rate':0.1,
'subsample':0.8,
'colsample_bytree':0.3,
'eval_metric':'logloss',
'objective':'binary:logistic',
'tree_method':'gpu_hist',
'predictor' : 'gpu_predictor'
}
train,valid = dask.persist(train,valid)
# We train our XGBoost models. The challenge requires predicting 4 targets: does a user
# - reply to a tweet
# - retweet a tweet
# - retweet a tweet with a comment
# - like a tweet
# We train 4 XGBoost models for 300 rounds each on a GPU.
time_train_start = time.time()
NROUND = 300
VERBOSE_EVAL = 50
preds = []
for i in range(4):
name = label_names[i]
print('#'*25);print('###',name);print('#'*25)
start = time.time(); print('Creating DMatrix...')
dtrain = xgb.dask.DaskDMatrix(client,data=train,label=Y_train.iloc[:, i])
print('Took %.1f seconds'%(time.time()-start))
start = time.time(); print('Training...')
model = xgb.dask.train(client, xgb_parms,
dtrain=dtrain,
num_boost_round=NROUND,
verbose_eval=VERBOSE_EVAL)
print('Took %.1f seconds'%(time.time()-start))
start = time.time(); print('Predicting...')
preds.append(xgb.dask.predict(client,model,valid))
print('Took %.1f seconds'%(time.time()-start))
del model, dtrain
time_train = time.time()-time_train_start
time_train
yvalid = Y_valid[label_names].values.compute()
oof = cp.array([i.values.compute() for i in preds]).T
yvalid.shape
# The hosts of the RecSys2020 competition provide code for calculating the performance metrics PRAUC and RCE.
# Nvidia optimized that code for speed; using cuDF/CuPy, the metrics can be computed on the GPU.
from sklearn.metrics import auc

def precision_recall_curve(y_true, y_pred):
    y_true = y_true.astype('float32')
    ids = cp.argsort(-y_pred)
    y_true = y_true[ids]
    y_pred = y_pred[ids]
    y_pred = cp.flip(y_pred, axis=0)
    acc_one = cp.cumsum(y_true)
    sum_one = cp.sum(y_true)
    precision = cp.flip(acc_one/cp.cumsum(cp.ones(len(y_true))), axis=0)
    precision[:-1] = precision[1:]
    precision[-1] = 1.
    recall = cp.flip(acc_one/sum_one, axis=0)
    recall[:-1] = recall[1:]
    recall[-1] = 0
    n = (recall == 1).sum()
    return precision[n-1:], recall[n-1:], y_pred[n:]

def compute_prauc(pred, gt):
    prec, recall, thresh = precision_recall_curve(gt, pred)
    recall, prec = cp.asnumpy(recall), cp.asnumpy(prec)
    prauc = auc(recall, prec)
    return prauc

def log_loss(y_true, y_pred, eps=1e-7, normalize=True, sample_weight=None):
    y_true = y_true.astype('int32')
    y_pred = cp.clip(y_pred, eps, 1 - eps)
    if y_pred.ndim == 1:
        y_pred = cp.expand_dims(y_pred, axis=1)
    if y_pred.shape[1] == 1:
        y_pred = cp.hstack([1 - y_pred, y_pred])
    y_pred /= cp.sum(y_pred, axis=1, keepdims=True)
    loss = -cp.log(y_pred)[cp.arange(y_pred.shape[0]), y_true]
    return _weighted_sum(loss, sample_weight, normalize).item()

def _weighted_sum(sample_score, sample_weight, normalize):
    if normalize:
        return cp.average(sample_score, weights=sample_weight)
    elif sample_weight is not None:
        return cp.dot(sample_score, sample_weight)
    else:
        return sample_score.sum()

def compute_rce_fast(pred, gt):
    cross_entropy = log_loss(gt, pred)
    yt = cp.mean(gt).item()
    # cross_entropy and yt are single numbers (no arrays) and using CPU is fast.
    strawman_cross_entropy = -(yt*np.log(yt) + (1 - yt)*np.log(1 - yt))
    return (1.0 - cross_entropy/strawman_cross_entropy)*100.0
# Finally, we calculate the performance metrics PRAUC and RCE for each target.
txt = ''
for i in range(4):
    prauc = compute_prauc(oof[:,i], yvalid[:, i])
    rce = compute_rce_fast(oof[:,i], yvalid[:, i]).item()
    txt_ = f"{label_names[i]:20} PRAUC:{prauc:.5f} RCE:{rce:.5f}"
    print(txt_)
    txt += txt_ + '\n'
# Performance metrics for RecSys Challenge 2021.
from sklearn.metrics import average_precision_score, log_loss
def calculate_ctr(gt):
    positive = len([x for x in gt if x == 1])
    ctr = positive/float(len(gt))
    return ctr

def compute_rce(pred, gt):
    cross_entropy = log_loss(gt, pred)
    data_ctr = calculate_ctr(gt)
    strawman_cross_entropy = log_loss(gt, [data_ctr for _ in range(len(gt))])
    return (1.0 - cross_entropy/strawman_cross_entropy)*100.0
# ground_truth = read_predictions("gt.csv") # will return data in the form (tweet_id, user_id, labed (1 or 0))
# predictions = read_predictions("predictions.csv") # will return data in the form (tweet_id, user_id, prediction)
# Finally, we calculate the performance metrics AP and RCE for each target.
txt = ''
for i in range(4):
    ap = average_precision_score(yvalid[:, i].get(), oof[:,i].get())
    rce = compute_rce(oof[:,i].get(), yvalid[:, i].get())
    txt_ = f"{label_names[i]:20} AP:{ap:.5f} RCE:{rce:.5f}"
    print(txt_)
    txt += txt_ + '\n'
# Timings
time_total = time.time()-time_total_start
print('Total time: {:.2f}s'.format(time_total))
print()
print('1. Preprocessing: {:.2f}s'.format(time_preproc))
print('2. Splitting: {:.2f}s'.format(time_split))
print('3. Feature engineering: {:.2f}s'.format(time_fe))
print('4. Training: {:.2f}s'.format(time_train))
```
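As an aside (not part of the original notebook), the RCE formula used above can be sanity-checked on the CPU with plain NumPy; the labels and predictions below are made-up toy values:

```python
import numpy as np

# Toy labels and predictions (hypothetical values, for illustration only).
gt = np.array([1., 0., 0., 1., 0.])
pred = np.array([0.9, 0.1, 0.2, 0.8, 0.3])

eps = 1e-7
p = np.clip(pred, eps, 1 - eps)
cross_entropy = -np.mean(gt * np.log(p) + (1 - gt) * np.log(1 - p))

# The "strawman" always predicts the positive rate of the data.
yt = gt.mean()
strawman = -(yt * np.log(yt) + (1 - yt) * np.log(1 - yt))

rce = (1.0 - cross_entropy / strawman) * 100.0
print(rce)  # positive: the predictions beat the constant baseline
```

A positive RCE means the model carries more information than the constant-CTR baseline; 0 means no improvement.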
---
# Inference and Validation
Now that you have a trained network, you can use it for making predictions. This is typically called **inference**, a term borrowed from statistics. However, neural networks have a tendency to perform *too well* on the training data and aren't able to generalize to data that hasn't been seen before. This is called **overfitting** and it impairs inference performance. To test for overfitting while training, we measure the performance on data not in the training set called the **validation** set. We avoid overfitting through regularization such as dropout while monitoring the validation performance during training. In this notebook, I'll show you how to do this in PyTorch.
As usual, let's start by loading the dataset through torchvision. You'll learn more about torchvision and loading data in a later part. This time we'll be taking advantage of the test set which you can get by setting `train=False` here:
```python
testset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform)
```
The test set contains images just like the training set. Typically you'll see 10-20% of the original dataset held out for testing and validation with the rest being used for training.
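As a rough sketch of what such a holdout looks like (the sizes here are made up; Fashion-MNIST actually ships pre-split into 60,000 training and 10,000 test images, which is what `train=False` selects):

```python
import random

# Pretend dataset of 1,000 example indices (hypothetical size).
indices = list(range(1000))
random.seed(0)
random.shuffle(indices)

# Hold out 20% for testing, keep the rest for training.
n_test = int(0.2 * len(indices))
test_idx, train_idx = indices[:n_test], indices[n_test:]
print(len(train_idx), len(test_idx))  # 800 200
```

With `torch.utils.data` you would usually reach for `random_split` instead of slicing indices by hand.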
```
import torch
from torchvision import datasets, transforms
# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,))])
# Download and load the training data
trainset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
# Download and load the test data
testset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=True)
```
Here I'll create a model like normal, using the same one from my solution for part 4.
```
from torch import nn, optim
import torch.nn.functional as F
class Classifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(784, 256)
        self.fc2 = nn.Linear(256, 128)
        self.fc3 = nn.Linear(128, 64)
        self.fc4 = nn.Linear(64, 10)

    def forward(self, x):
        # make sure input tensor is flattened
        x = x.view(x.shape[0], -1)

        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = F.relu(self.fc3(x))
        x = F.log_softmax(self.fc4(x), dim=1)

        return x
```
The goal of validation is to measure the model's performance on data that isn't part of the training set. Performance here is up to the developer to define though. Typically this is just accuracy, the percentage of classes the network predicted correctly. Other options are [precision and recall](https://en.wikipedia.org/wiki/Precision_and_recall#Definition_(classification_context)) and top-5 error rate. We'll focus on accuracy here. First I'll do a forward pass with one batch from the test set.
```
model = Classifier()
images, labels = next(iter(testloader))
# Get the class probabilities
ps = torch.exp(model(images))
# Make sure the shape is appropriate, we should get 10 class probabilities for 64 examples
print(ps.shape)
```
With the probabilities, we can get the most likely class using the `ps.topk` method. This returns the $k$ highest values. Since we just want the most likely class, we can use `ps.topk(1)`. This returns a tuple of the top-$k$ values and the top-$k$ indices. If the highest value is the fifth element, we'll get back 4 as the index.
```
top_p, top_class = ps.topk(1, dim=1)
# Look at the most likely classes for the first 10 examples
print(f'top_class[:10,:] : {top_class[:10,:]}')
print(f'top_p[:10,:] : {top_p[:10,:]}')
```
Now we can check if the predicted classes match the labels. This is simple to do by equating `top_class` and `labels`, but we have to be careful of the shapes. Here `top_class` is a 2D tensor with shape `(64, 1)` while `labels` is 1D with shape `(64)`. To get the equality to work out the way we want, `top_class` and `labels` must have the same shape.
If we do
```python
equals = top_class == labels
```
`equals` will have shape `(64, 64)`, try it yourself. What it's doing is comparing the one element in each row of `top_class` with each element in `labels` which returns 64 True/False boolean values for each row.
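PyTorch follows the same broadcasting rules as NumPy here, so the shape blow-up is easy to reproduce; NumPy is used below as a stand-in so no model or GPU is needed:

```python
import numpy as np

top_class = np.zeros((64, 1))  # stand-in for the (64, 1) prediction tensor
labels = np.zeros(64)          # stand-in for the (64,) label tensor

# Broadcasting (64, 1) against (64,) compares every row with every label.
print((top_class == labels).shape)                           # (64, 64)

# Reshaping labels to (64, 1) gives the element-wise comparison we want.
print((top_class == labels.reshape(top_class.shape)).shape)  # (64, 1)
```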
```
equals = top_class == labels.view(*top_class.shape)
```
Now we need to calculate the percentage of correct predictions. `equals` has binary values, either 0 or 1. This means that if we just sum up all the values and divide by the number of values, we get the percentage of correct predictions. This is the same operation as taking the mean, so we can get the accuracy with a call to `torch.mean`. If only it was that simple. If you try `torch.mean(equals)`, you'll get an error
```
RuntimeError: mean is not implemented for type torch.ByteTensor
```
This happens because `equals` has type `torch.ByteTensor` but `torch.mean` isn't implemented for tensors with that type. So we'll need to convert `equals` to a float tensor. Note that when we take `torch.mean` it returns a scalar tensor, to get the actual value as a float we'll need to do `accuracy.item()`.
```
accuracy = torch.mean(equals.type(torch.FloatTensor))
print(f'Accuracy: {accuracy.item()*100}%')
```
The network is untrained so it's making random guesses and we should see an accuracy around 10%. Now let's train our network and include our validation pass so we can measure how well the network is performing on the test set. Since we're not updating our parameters in the validation pass, we can speed up our code by turning off gradients using `torch.no_grad()`:
```python
# turn off gradients
with torch.no_grad():
    # validation pass here
    for images, labels in testloader:
        ...
```
>**Exercise:** Implement the validation loop below and print out the total accuracy after the loop. You can largely copy and paste the code from above, but I suggest typing it in because writing it out yourself is essential for building the skill. In general you'll always learn more by typing it rather than copy-pasting. You should be able to get an accuracy above 80%.
```
model = Classifier()
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.parameters(), lr=0.003)
epochs = 30
steps = 0
train_losses, test_losses = [], []
for e in range(epochs):
    running_loss = 0
    for images, labels in trainloader:
        optimizer.zero_grad()

        log_ps = model(images)
        loss = criterion(log_ps, labels)
        loss.backward()
        optimizer.step()

        running_loss += loss.item()
    else:
        ## TODO: Implement the validation pass and print out the validation accuracy
        test_loss = 0
        accuracy = 0
        with torch.no_grad():
            for images, labels in testloader:
                log_ps = model(images)
                test_loss += criterion(log_ps, labels)

                ps = torch.exp(log_ps)
                top_p, top_class = ps.topk(1, dim=1)
                equals = top_class == labels.view(*top_class.shape)
                accuracy += torch.mean(equals.type(torch.FloatTensor))

        train_losses.append(running_loss/len(trainloader))
        test_losses.append(test_loss/len(testloader))

        print("Epoch: {}/{}.. ".format(e+1, epochs),
              "Training Loss: {:.3f}.. ".format(running_loss/len(trainloader)),
              "Test Loss: {:.3f}.. ".format(test_loss/len(testloader)),
              "Test Accuracy: {:.3f}".format(accuracy/len(testloader)))
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
plt.plot(train_losses, label='Training loss')
plt.plot(test_losses, label='Validation loss')
plt.legend(frameon=False)
```
## Overfitting
If we look at the training and validation losses as we train the network, we can see a phenomenon known as overfitting.
<img src='assets/overfitting.png' width=450px>
The network learns the training set better and better, resulting in lower training losses. However, it starts having problems generalizing to data outside the training set leading to the validation loss increasing. The ultimate goal of any deep learning model is to make predictions on new data, so we should strive to get the lowest validation loss possible. One option is to use the version of the model with the lowest validation loss, here the one around 8-10 training epochs. This strategy is called *early-stopping*. In practice, you'd save the model frequently as you're training then later choose the model with the lowest validation loss.
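A framework-agnostic sketch of that early-stopping bookkeeping (the loss values below are made up): track the validation loss each epoch and keep the checkpoint from the epoch where it was lowest — in PyTorch you would call `torch.save(model.state_dict(), ...)` whenever a new best appears:

```python
# Hypothetical per-epoch validation losses.
val_losses = [0.52, 0.45, 0.41, 0.43, 0.47]

best_epoch, best_loss = None, float('inf')
for epoch, loss in enumerate(val_losses):
    if loss < best_loss:
        best_epoch, best_loss = epoch, loss
        # torch.save(model.state_dict(), 'checkpoint.pth')  # save new best here

print(best_epoch, best_loss)  # 2 0.41
```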
The most common method to reduce overfitting (outside of early-stopping) is *dropout*, where we randomly drop input units. This forces the network to share information between weights, increasing its ability to generalize to new data. Adding dropout in PyTorch is straightforward using the [`nn.Dropout`](https://pytorch.org/docs/stable/nn.html#torch.nn.Dropout) module.
```python
class Classifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(784, 256)
        self.fc2 = nn.Linear(256, 128)
        self.fc3 = nn.Linear(128, 64)
        self.fc4 = nn.Linear(64, 10)

        # Dropout module with 0.2 drop probability
        self.dropout = nn.Dropout(p=0.2)

    def forward(self, x):
        # make sure input tensor is flattened
        x = x.view(x.shape[0], -1)

        # Now with dropout
        x = self.dropout(F.relu(self.fc1(x)))
        x = self.dropout(F.relu(self.fc2(x)))
        x = self.dropout(F.relu(self.fc3(x)))

        # output so no dropout here
        x = F.log_softmax(self.fc4(x), dim=1)

        return x
```
During training we want to use dropout to prevent overfitting, but during inference we want to use the entire network. So, we need to turn off dropout during validation, testing, and whenever we're using the network to make predictions. To do this, you use `model.eval()`. This sets the model to evaluation mode where the dropout probability is 0. You can turn dropout back on by setting the model to train mode with `model.train()`. In general, the pattern for the validation loop will look like this, where you turn off gradients, set the model to evaluation mode, calculate the validation loss and metric, then set the model back to train mode.
```python
# turn off gradients
with torch.no_grad():
    # set model to evaluation mode
    model.eval()

    # validation pass here
    for images, labels in testloader:
        ...

# set model back to train mode
model.train()
```
> **Exercise:** Add dropout to your model and train it on Fashion-MNIST again. See if you can get a lower validation loss or higher accuracy.
```
import torch
from torch import nn
import torch.nn.functional as F
from torch import optim
from collections import OrderedDict
import matplotlib.pyplot as plt
## TODO: Define your model with dropout added
input_dim = 784
hidden_layers = [256, 128, 64]
output_dim = 10
#STEP1 building Model
class Classifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(input_dim, hidden_layers[0])
        self.fc2 = nn.Linear(hidden_layers[0], hidden_layers[1])
        self.fc3 = nn.Linear(hidden_layers[1], hidden_layers[2])
        self.fc4 = nn.Linear(hidden_layers[2], output_dim)
        self.dropout = nn.Dropout(p=0.2)

    def forward(self, x):
        # reshaping input
        x = x.view(x.shape[0], -1)
        x = self.dropout(F.relu(self.fc1(x)))
        x = self.dropout(F.relu(self.fc2(x)))
        x = self.dropout(F.relu(self.fc3(x)))
        x = F.log_softmax(self.fc4(x), dim=1)
        # x = nn.LogSoftmax(dim=1)(self.fc4(x))
        return x
classifier = Classifier()
classifier
# Step2 Set Loss
criterian = nn.NLLLoss()
# Step3 Set Optimizer
optimizer = optim.SGD(classifier.parameters(), lr = 0.003)
# Step4 Training
epochs = 10
train_losses, test_losses = [], []
for epoch in range(epochs):
    running_loss = 0
    for images, labels in trainloader:
        # Step4-1 initializing optimizer
        optimizer.zero_grad()
        # Step4-2 Get output
        log_ps = classifier(images)
        # Step4-3 Calculate loss
        loss = criterian(log_ps, labels)
        # Step4-4 Calculate loss backward
        loss.backward()
        # Step4-5 Update optimizer
        optimizer.step()
        running_loss += loss.item()
    else:
        test_loss = 0
        test_accuracy = 0
        # Step5-1 with torch.no_grad()
        with torch.no_grad():
            # Step5-2 evaluation mode (dropout off)
            classifier.eval()
            for images, labels in testloader:
                log_ps = classifier(images)
                loss = criterian(log_ps, labels)
                test_loss += loss.item()
                ps = torch.exp(log_ps)
                top_p, top_class = ps.topk(1, dim=1)
                equals = top_class == labels.view(*top_class.shape)
                test_accuracy += torch.mean(equals.type(torch.FloatTensor))

        train_losses.append(running_loss/len(trainloader))
        test_losses.append(test_loss/len(testloader))
        print(f'EPOCH : {epoch+1}/{epochs} '
              f'Training Loss : {train_losses[-1]} '
              f'Test Loss : {test_losses[-1]} '
              f'Test Accuracy : {test_accuracy/len(testloader)}')
        # Step5-3 back to train mode
        classifier.train()
# Plotting losses
plt.plot(train_losses, label='Training loss')
plt.plot(test_losses, label='Validation loss')
plt.legend(frameon=False)
```
## Inference
Now that the model is trained, we can use it for inference. We've done this before, but now we need to remember to set the model in inference mode with `model.eval()`. You'll also want to turn off autograd with the `torch.no_grad()` context.
```
# Import helper module (should be in the repo)
import helper
# Test out your network!
model.eval()
dataiter = iter(testloader)
images, labels = next(dataiter)
img = images[0]
# Convert 2D image to 1D vector
img = img.view(1, 784)
# Calculate the class probabilities (softmax) for img
with torch.no_grad():
output = model.forward(img)
ps = torch.exp(output)
# Plot the image and probabilities
helper.view_classify(img.view(1, 28, 28), ps, version='Fashion')
```
## Next Up!
In the next part, I'll show you how to save your trained models. In general, you won't want to train a model every time you need it. Instead, you'll train once, save it, then load the model when you want to train more or use it for inference.
---
# Plot co-visibility time of a source at two or more telescopes
```
from astropy.time import Time
from astroplan import download_IERS_A
import pandas as pd
from astropy.coordinates import EarthLocation
from astroplan import Observer,FixedTarget
import astroplan
from astroplan.plots import plot_airmass,plot_altitude
import astropy.units as u
from astropy.coordinates import SkyCoord
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
from datetime import datetime
from pytz import common_timezones
import pyorbital
pd.set_option("display.max_rows", None, "display.max_columns", None)
%matplotlib inline
hydraA=FixedTarget.from_name("Hydra_A")
#selecting target - Hydra_A
Ra, Dec = hydraA.ra, hydraA.dec  #Extracting RA and Dec since they were not entered manually
print(hydraA,Ra, Dec) #Checking if correct values were loaded
HydraA_skycoord = SkyCoord('139d31m24.765s', '-12d05m43.99s', frame='icrs')
ctio = Observer.at_site("ctio", timezone = 'America/Santiago')
location_spt = EarthLocation.from_geodetic(0.0*u.deg, -90.0*u.deg, 2800*u.m)  # lon, lat, height; the SPT sits at the geographic South Pole
spt= Observer(location=location_spt, name="South Pole Telescope", timezone='Pacific/Auckland')
print(spt)
kmtnet= Observer.at_site("Siding Spring Observatory", name='KMTNET',timezone = 'Australia/Sydney')
start_obs = Time('2020-11-25 00:00:00')
end_obs= Time('2020-11-25 23:00:00')
delta_t = end_obs - start_obs
dt = start_obs + delta_t*np.linspace(0, 1,75)
dt_datetime = [x.datetime for x in dt]
#dt
#Airmasses at ctio
airmass_ctio=ctio.altaz(dt, hydraA).secz
masked_airmass_ctio = np.ma.array(airmass_ctio, mask=airmass_ctio < 1)
#Airmasses at kmtnet
airmass_kmtnet=kmtnet.altaz(dt,hydraA).secz
masked_airmass_kmtnet = np.ma.array(airmass_kmtnet, mask=airmass_kmtnet < 1)
#Airmasses at SPT
airmass_spt=spt.altaz(dt, hydraA).secz
masked_airmass_spt = np.ma.array(airmass_spt, mask=airmass_spt < 1)
#Finding the intersecting point
ndx = np.argwhere(np.diff(np.sign(masked_airmass_ctio - masked_airmass_kmtnet))).flatten()
ndx=ndx[0]
intersect=dt_datetime[ndx]
intersect
```
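The `np.sign`/`np.diff` trick used above finds where two curves cross: the sign of their difference flips at the crossing, so the difference of the signs is non-zero there. A tiny self-contained illustration with made-up values:

```python
import numpy as np

a = np.array([3.0, 2.0, 1.2, 1.5, 2.5])
b = np.array([1.0, 1.4, 1.6, 1.3, 1.1])

# Sign of the difference flips wherever the curves cross.
crossings = np.argwhere(np.diff(np.sign(a - b))).flatten()
print(crossings)  # [1 2]: a-b changes sign between indices 1-2 and 2-3
```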
```
#Getting the start of intersect
ind = np.where(df['kmtnet-airmass'] < 3)
index2 = ind[0][0]  #Index of first occurrence
covis_start = df['datetimes'][index2]
covis_start
```
```
#Another method of getting the intersecting point
#df['diff']=df['ctio-airmass']-df['kmtnet-airmass']
#idx=np.max(np.where((abs(round(df['diff'],1)==0.0))))
#mid_time=df['datetimes'][idx]
#mid_time
#airmass_spt
from astropy.coordinates import get_moon
#Generate moon positions
moon_radec = []
for i in dt:
    moon_radec.append(get_moon(i))

#Convert them to RA/Dec string format
radecs = []
for i in moon_radec:
    c = SkyCoord(ra=i.ra, dec=i.dec)
    radecs.append(c.to_string('hmsdms'))

#Splitting them
radec2 = []
for e in radecs:
    radec2.append(e.split(' '))

#Converting to SkyCoord objects
moon_locs = []
for i in np.arange(0, len(radec2)):
    moon_locs.append(SkyCoord(radec2[i][0], radec2[i][1], frame='icrs'))

#Get separations in degrees
sep = []
for i in moon_locs:
    sep.append(HydraA_skycoord.separation(i).deg)

#Making them numpy arrays
sep = np.array(sep)
sunset_CT = ctio.sun_set_time(start_obs, which='nearest')
sunrise_CT = ctio.sun_rise_time(start_obs, which='nearest')
moonset_ctio = ctio.moon_set_time(start_obs, which='nearest')
moonrise_ctio = ctio.moon_rise_time(start_obs, which='next')
moonrise_ctio,moonset_ctio=moonrise_ctio.iso,moonset_ctio.iso
twilight_morning_ctio=ctio.twilight_morning_astronomical(start_obs, which='nearest')
twilight_evening_ctio=ctio.twilight_evening_astronomical(start_obs, which='nearest')
twilight_morning_ctio,twilight_evening_ctio=twilight_morning_ctio.iso,twilight_evening_ctio.iso
#KMTNET
sunset_kmtnet = kmtnet.sun_set_time(start_obs, which='nearest')
sunrise_kmtnet = kmtnet.sun_rise_time(start_obs, which='next')
sunset_kmtnet,sunrise_kmtnet=sunset_kmtnet.iso,sunrise_kmtnet.iso
moonset_kmtnet = kmtnet.moon_set_time(start_obs, which='nearest')
moonrise_kmtnet = kmtnet.moon_rise_time(start_obs, which='next')
moonrise_kmtnet,moonset_kmtnet=moonrise_kmtnet.iso,moonset_kmtnet.iso
twilight_morning_kmtnet=kmtnet.twilight_morning_astronomical(start_obs, which='nearest')
twilight_evening_kmtnet=kmtnet.twilight_evening_astronomical(start_obs, which='nearest')
twilight_morning_kmtnet,twilight_evening_kmtnet=twilight_morning_kmtnet.iso,twilight_evening_kmtnet.iso
#Plotting
fig, ax = plt.subplots(figsize=(16,9))
ax.plot(dt_datetime, masked_airmass_ctio, label='airmass@ctio',color='g',linewidth=3, alpha=0.9)
ax.plot(dt_datetime, masked_airmass_kmtnet, color='r', label='airmass@kmtnet',linewidth=3, alpha=0.9)
ax.invert_yaxis()
ax.set_ylim([3,1])
ax.set_ylabel('Airmasses', color='k',fontsize=14)
ax.set_xlabel("Time from {0} [UTC]".format(min(dt_datetime).date()),fontsize=14)
ax2 = ax.twinx()
ax.axhspan(3, 2, facecolor='y', alpha=0.09, label='Below airmass limit')
ax2.plot(dt_datetime, sep, color='b', linestyle='--', label='moon_sep', linewidth=2, alpha=0.9)
ax.legend(loc='lower left', prop={'size': 10})
ax2.set_ylabel('Moon-target Separation [degrees]', color='b', fontsize=14)
ax.grid()
#plotting intersect
ax.plot(dt_datetime[ndx], masked_airmass_ctio[ndx], 'ko')
```
```
#Calculating co-visibilities
date_1 = covis_start
date_2 = intersect
# returns a timedelta object
delta = date_2 - date_1
print('Difference: ', delta)
mins = delta.total_seconds() / 60
print('Total difference in minutes: ', str(mins) + ' mins')
print('Total time co-observable in minutes: ', str(mins*2) + ' mins')
```
```
#Plotting
fig, ax = plt.subplots(figsize=(15,7))
ax.invert_yaxis()
ax.set_ylim([3,1])
ax2 = ax.twinx()
ax.plot(dt_datetime, masked_airmass_ctio, '-',label='airmass@ctio', color='g')
ax2.plot(dt_datetime, masked_airmass_kmtnet, '-',color='r')
ax2.set_ylim([3,1])
ax3=ax2.twinx()
#plotting intersect
ax.plot(dt_datetime[ndx], masked_airmass_ctio[ndx], 'ko')
#Creating a dictionary for the dataframe
d={'datetimes':dt_datetime,'ctio-airmass':masked_airmass_ctio,'kmtnet-airmass':masked_airmass_kmtnet}
#Creating a dataframe
df=pd.DataFrame(data=d)
df = df.round(3)  # Rounding values
# Dropping NaN values but maintaining indexes in case there are NaNs
df = df.dropna()
# Checking the dataframe
df
covis_list=df.loc[(df['ctio-airmass']<3)&(df['ctio-airmass']>1)&((df['kmtnet-airmass']<3))&(df['kmtnet-airmass']>1)]
covis_list.reset_index(drop=True, inplace=True)
covis_list
cvs=covis_list['datetimes'][0]
cvd=covis_list['datetimes'][len(covis_list)-1]
print(cvs,'and',cvd)
#Calculating co visibility time in minutes
date_1=cvs
date_2=cvd
# returns a timedelta object
delta = date_2-date_1
print('Difference: ', delta)
mins = delta.total_seconds() / 60
print('Total difference in minutes: ',str(mins) + ' '+'mins')
print('Total time co-observable in minutes: ',str(mins) + ' '+'mins')
dec_range= np.linspace(-90,91,360)
RA_range=np.arange(1,361,1)
#RA_range
#fig, ax = plt.subplots(figsize=(15,7))
#ax2 = ax.twinx()
#ax.set_yticks(dec_range)
#ax.set_xticks(RA_range)
#ax2.scatter(139.52354583,-12.09555278)
#ax.grid()
mins
Z = np.random.rand(10, 5)
tr=([0,10],[10,20],[20,30],[30,40],[40,50],[50,60],[60,70],[70,80])
tr
fig, (ax0) = plt.subplots(figsize=(10,5))
c = ax0.pcolor(tr, edgecolors='w', linewidths=2)  #,cmap='RdBu'
ax0.set_title('thick edges')
fig.colorbar(c, ax=ax0, label='Co-visibility time [Mins]')
fig.tight_layout()
plt.show()
# make these smaller to increase the resolution
dx, dy = 0.15, 0.05
# generate 2 2d grids for the x & y bounds
y, x = np.mgrid[slice(-3, 3 + dy, dy),
slice(-3, 3 + dx, dx)]
z = (1 - x / 2. + x ** 5 + y ** 3) * np.exp(-x ** 2 - y ** 2)
# x and y are bounds, so z should be the value *inside* those bounds.
# Therefore, remove the last value from the z array.
z = z[:-1, :-1]
z_min, z_max = -np.abs(z).max(), np.abs(z).max()
fig, axs = plt.subplots()
ax = axs
c = ax.pcolor(x, y, z, cmap='RdBu', vmin=z_min, vmax=z_max)
ax.set_title('pcolor')
# set the limits of the plot to the limits of the data
ax.axis([x.min(), x.max(), y.min(), y.max()])
fig.colorbar(c, ax=axs)
fig.tight_layout()
plt.show()
fig, ax = plt.subplots(figsize=(12,7))
sc = ax.scatter([139.52354583, -12.09555278],[139.52354583, -12.09555278], c=[0, 60])
ax.set_ylabel('Dec[Degrees]', loc='center',fontsize=12)
ax.set_xlabel('RA[Degrees]', loc='center',fontsize=12)
ax.set_ylim(-90,90)
ax.set_xlim(0,360)
ax.set_title('Co-observing time of'+' '+hydraA.name+ ' at '+ctio.name+ ' and '+kmtnet.name+'',fontsize=12)
ax.grid()
ax2 = ax.twinx()
ax2.scatter(hydraA.ra.deg, hydraA.dec.deg,marker=',')
ax2.set_ylim(-90,90)
cbar = fig.colorbar(sc)
cbar.set_label('Co-visibility time [Mins]', loc='center', fontsize=12)
plt.show()
plt.figure(figsize=(12,8))
plt.ylabel('Dec[Degrees]', loc='center',fontsize=12)
plt.xlabel('RA[Degrees]', loc='center',fontsize=12)
x = hydraA.ra.deg
y = hydraA.dec.deg
colors = mins
plt.title('Co-observing time of'+' '+hydraA.name+ ' at '+ctio.name+ ' and '+kmtnet.name+'',fontsize=14)
plt.ylim(-90,90)
plt.xlim(0,360)
s=[500]
c = plt.scatter(x, y, s, c=colors, alpha=0.9, marker=',', cmap='Set1')  # alpha must lie in [0, 1]
plt.clim(0,60)
plt.colorbar(label='Co-visibility time [Mins]'); # show color scale
#ticks=range(60)
plt.grid()
```
---
```
project_name="reco-wikirecs"; branch="master"; account="sparsh-ai"
!cp /content/drive/MyDrive/mykeys.py /content
import mykeys
!rm /content/mykeys.py
path = "/content/" + project_name;
!mkdir "{path}"
%cd "{path}"
import sys; sys.path.append(path)
!git config --global user.email "sparsh@recohut.com"
!git config --global user.name "colab-sparsh"
!git init
!git remote add origin https://"{mykeys.git_token}":x-oauth-basic@github.com/"{account}"/"{project_name}".git
!git pull origin "{branch}"
%cd /content/reco-wikirecs/
!git status
!git add . && git commit -m 'commit' && git push origin "{branch}"
!pip install -r requirements.txt
```
---
## Setup
```
!pip install -q git+https://github.com/sparsh-ai/recochef.git
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import itertools
from scipy.sparse import csr_matrix, csc_matrix, lil_matrix, coo_matrix
from recochef.datasets.wikirecs import WikiRecs
from utils import *
from wiki_pull import *
%matplotlib inline
%load_ext autoreload
%autoreload 2
```
## Data loading
```
wikidata = WikiRecs()
df = wikidata.load_interactions()
df.info()
df.head()
```
## EDA
```
# Look at the distribution of edit counts
edit_counts = df.groupby('USERID').USERID.count().values
plt.figure(figsize=(20,8))
plt.subplot(1,2,1)
sns.distplot(edit_counts,kde=False,bins=np.arange(0,20000,200))
plt.xlabel('Number of edits by user')
plt.subplot(1,2,2)
sns.distplot(edit_counts,kde=False,bins=np.arange(0,200,1))
plt.xlim([0,200])
plt.xlabel('Number of edits by user')
num_counts = len(edit_counts)
print("Median edit counts: %d" % np.median(edit_counts))
thres = 5
over_thres = np.sum(edit_counts > thres)
print("Number over threshold %d: %d (%.f%%)" % (thres, over_thres, 100*over_thres/num_counts))
# Most edits by user
df.groupby(['USERID','USERNAME']).USERID.count().sort_values(ascending=False)
# Find the elbow in number of edits
plt.plot(df.groupby(['USERID','USERNAME']).USERID.count().sort_values(ascending=False).values)
# plt.ylim([0,20000])
# What are the most popular pages (edited by the most users)
page_popularity = df.drop_duplicates(subset=['TITLE','USERNAME']).groupby('TITLE').count().USERNAME.sort_values()
page_popularity.iloc[-1000:].iloc[::-1]
df.sample().USERNAME
cols = ['userid', 'user', 'pageid', 'title',
'timestamp', 'sizediff']
oneuser = get_edit_history(user="SanAnMan",
latest_timestamp="2021-07-08T22:02:09Z",
earliest_timestamp="2020-05-28T22:02:09Z")
oneuser = pd.DataFrame(oneuser).loc[:,cols]
oneuser
```
## Data cleaning
### Remove consecutive edits and summarize runs
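The cell below leans on `itertools.groupby`, which groups *consecutive* equal keys — exactly what collapsing runs of back-to-back edits needs. A minimal illustration with made-up (user, page) pairs:

```python
import itertools

edits = [('u1', 'pageA'), ('u1', 'pageA'), ('u1', 'pageB'),
         ('u2', 'pageA'), ('u2', 'pageA')]

# Each run of identical consecutive elements collapses into one (key, length) pair.
runs = [(key, len(list(group))) for key, group in itertools.groupby(edits)]
print(runs)  # [(('u1', 'pageA'), 2), (('u1', 'pageB'), 1), (('u2', 'pageA'), 2)]
```

Note that unlike a `DataFrame.groupby`, non-adjacent repeats are *not* merged, which is what makes it suitable for run-length summarization.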
```
%%time
def remove_consecutive_edits(df):
    c = dict(zip(df.columns, range(len(df.columns))))
    keyfunc = lambda x: (x[c['USERID']], x[c['ITEMID']])
    first_and_last = lambda run: [run[0][c['USERID']],
                                  run[0][c['USERNAME']],
                                  run[0][c['ITEMID']],
                                  run[0][c['TITLE']],
                                  run[-1][c['TIMESTAMP']],
                                  run[0][c['TIMESTAMP']],
                                  sum([abs(r[c['SIZEDIFF']]) for r in run]),
                                  len(run)]
    d = df.values.tolist()
    return pd.DataFrame([first_and_last(list(g)) for k, g in itertools.groupby(d, key=keyfunc)],
                        columns=['USERID', 'USER', 'ITEMID', 'TITLE', 'FIRST_TIMESTAMP',
                                 'LAST_TIMESTAMP', 'SUM_SIZEDIFF', 'CONSECUTIVE_EDITS'])

clean_df = remove_consecutive_edits(df)
```
### Remove top N most popular pages
```
# Get the top most popular pages
TOPN = 20
popularpages = df.drop_duplicates(subset=['TITLE','ITEMID','USERID']).groupby(['TITLE','ITEMID']).count().USERNAME.sort_values()[-TOPN:]
popularpages
# Remove those popular pages
before_count = len(df)
popular_pageids = popularpages.index.get_level_values(level='ITEMID').values
is_popular_page_edit = clean_df.ITEMID.isin(popular_pageids)
clean_df = clean_df.loc[~is_popular_page_edit].copy()
all_histories = None
after_count = len(clean_df)
print("%d edits (%.1f%%) were in top %d popular pages. Length after removing: %d" % (np.sum(is_popular_page_edit),
100* np.sum(is_popular_page_edit)/before_count,
TOPN,
after_count)
)
print("Number of unique page ids: {}".format(len(clean_df.ITEMID.unique())))
```
### Remove users with too many or too few edits
```
MIN_EDITS = 5
MAX_EDITS = 10000
# Get user edit counts
all_user_edit_counts = clean_df.groupby(['USERID','USER']).USERID.count()
# Remove users with too few edits
keep_user = all_user_edit_counts.values >= MIN_EDITS
# Remove users with too many edits
keep_user = keep_user & (all_user_edit_counts.values <= MAX_EDITS)
# Remove users with "bot" in the name
is_bot = ['bot' in username.lower() for username in all_user_edit_counts.index.get_level_values(1).values]
keep_user = keep_user & ~np.array(is_bot)
print("Keep %d users out of %d (%.1f%%)" % (np.sum(keep_user), len(all_user_edit_counts), 100*float(np.sum(keep_user))/len(all_user_edit_counts)))
# Remove those users
userids_to_keep = all_user_edit_counts.index.get_level_values(0).values[keep_user]
clean_df = clean_df.loc[clean_df.USERID.isin(userids_to_keep)]
clean_df = clean_df.reset_index(drop=True)
print("Length after removing users: {}".format(len(clean_df)))
```
## Build lookup tables
```
# Page id to title and back
lookup = clean_df.drop_duplicates(subset=['ITEMID']).loc[:,['ITEMID','TITLE']]
p2t = dict(zip(lookup.ITEMID, lookup.TITLE))
t2p = dict(zip(lookup.TITLE, lookup.ITEMID))
# User id to name and back
lookup = clean_df.drop_duplicates(subset=['USERID']).loc[:,['USERID','USER']]
u2n = dict(zip(lookup.USERID, lookup.USER))
n2u = dict(zip(lookup.USER, lookup.USERID))
# Page id and userid to index in cooccurence matrix and back
pageids = np.sort(clean_df.ITEMID.unique())
userids = np.sort(clean_df.USERID.unique())
p2i = {pageid:i for i, pageid in enumerate(pageids)}
u2i = {userid:i for i, userid in enumerate(userids)}
i2p = {v: k for k, v in p2i.items()}
i2u = {v: k for k, v in u2i.items()}
# User name and page title to index and back
n2i = {k:u2i[v] for k, v in n2u.items() if v in u2i}
t2i = {k:p2i[v] for k, v in t2p.items() if v in p2i}
i2n = {v: k for k, v in n2i.items()}
i2t = {v: k for k, v in t2i.items()}
```
## Build test and training set
```
# Make a test set from the most recent edit by each user
histories_test = clean_df.groupby(['USERID','USER'],as_index=False).first()
# Subtract it from the rest to make the training set
histories_train = dataframe_set_subtract(clean_df, histories_test)
histories_train.reset_index(drop=True, inplace=True)
# Make a dev set from the second most recent edit by each user
histories_dev = histories_train.groupby(['USERID','USER'],as_index=False).first()
# Subtract it from the rest to make the final training set
histories_train = dataframe_set_subtract(histories_train, histories_dev)
histories_train.reset_index(drop=True, inplace=True)
print("Length of test set: {}".format(len(histories_test)))
print("Length of dev set: {}".format(len(histories_dev)))
print("Length of training after removal of test: {}".format(len(histories_train)))
print("Number of pages in training set: {}".format(len(histories_train.ITEMID.unique())))
print("Number of users in training set: {}".format(len(histories_train.USERID.unique())))
print("Number of pages with > 1 user editing: {}".format(np.sum(histories_train.drop_duplicates(subset=['TITLE','USER']).groupby('TITLE').count().USER > 1)))
resurface_userids, discovery_userids = get_resurface_discovery(histories_train, histories_dev)
print("%d out of %d userids are resurfaced (%.1f%%)" % (len(resurface_userids), len(userids), 100*float(len(resurface_userids))/len(userids)))
print("%d out of %d userids are discovered (%.1f%%)" % (len(discovery_userids), len(userids), 100*float(len(discovery_userids))/len(userids)))
```
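The leave-one-out split above depends on `groupby(...).first()` returning each user's first row, which is the most recent edit only if the frame is sorted newest-first per user. Since `dataframe_set_subtract` is a project helper not shown here, this sketch uses a merge-indicator stand-in on hypothetical data:

```python
import pandas as pd

# Hypothetical edit log, sorted newest-first within each user
df = pd.DataFrame({"USERID": [1, 1, 1, 2, 2],
                   "ITEMID": [10, 11, 12, 20, 21]})

# Hold out each user's most recent edit (first row per group)
test_set = df.groupby("USERID", as_index=False).first()

# Subtract the held-out rows to form the training set
train_set = df.merge(test_set, how="left", indicator=True)
train_set = train_set[train_set["_merge"] == "left_only"].drop(columns="_merge")

print(test_set.ITEMID.tolist())   # [10, 20]
print(train_set.ITEMID.tolist())  # [11, 12, 21]
```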
## Build matrix for implicit collaborative filtering
```
# Get the user/page edit counts
for_implicit = histories_train.groupby(["USERID","ITEMID"]).count().FIRST_TIMESTAMP.reset_index().rename(columns={'FIRST_TIMESTAMP':'edits'})
for_implicit.loc[:,'edits'] = for_implicit.edits.astype(np.int32)
row = np.array([p2i[p] for p in for_implicit.ITEMID.values])
col = np.array([u2i[u] for u in for_implicit.USERID.values])
implicit_matrix_coo = coo_matrix((for_implicit.edits.values, (row, col)))
implicit_matrix = csc_matrix(implicit_matrix_coo)
```
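The sparse-matrix construction above can be checked on a tiny hypothetical count table: `row` holds page indices, `col` holds user indices, and duplicate coordinates would be summed by the COO format.

```python
import numpy as np
from scipy.sparse import coo_matrix, csc_matrix

# Two pages (rows) x two users (cols), three nonzero edit counts
row = np.array([0, 0, 1])
col = np.array([0, 1, 1])
edits = np.array([3, 1, 2], dtype=np.int32)

# Build in COO form, then convert to CSC for fast column slicing
m = csc_matrix(coo_matrix((edits, (row, col))))
print(m.toarray())
# [[3 1]
#  [0 2]]
```

CSC makes per-user (column) access cheap, which is the access pattern a user-based collaborative filter needs.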
## Saving artifacts
```
save_pickle((p2t, t2p, u2n, n2u, p2i, u2i, i2p, i2u, n2i, t2i, i2n, i2t), 'lookup_tables.pickle')
save_pickle((userids, pageids), 'users_and_pages.pickle')
save_pickle((resurface_userids, discovery_userids), 'resurface_discovery_users.pickle')
save_pickle(implicit_matrix,'implicit_matrix.pickle')
```
# Continuous Training with Kubeflow Pipeline and Vertex AI
**Learning Objectives:**
1. Learn how to use KF pre-built components
1. Learn how to use KF lightweight python components
1. Learn how to build a KF pipeline with these components
1. Learn how to compile, upload, and run a KF pipeline
In this lab, you will build, deploy, and run a KFP pipeline that orchestrates the **Vertex AI** services to train, tune, and deploy a **scikit-learn** model.
## Setup
```
from google.cloud import aiplatform
REGION = "us-central1"
PROJECT_ID = !(gcloud config get-value project)
PROJECT_ID = PROJECT_ID[0]
# Set `PATH` to include the directory containing KFP CLI
PATH = %env PATH
%env PATH=/home/jupyter/.local/bin:{PATH}
```
## Understanding the pipeline design
The workflow implemented by the pipeline is defined using a Python based Domain Specific Language (DSL). The pipeline's DSL is in the `pipeline_vertex/pipeline.py` file that we will generate below.
The pipeline's DSL has been designed to avoid hardcoding any environment specific settings like file paths or connection strings. These settings are provided to the pipeline code through a set of environment variables.
### Build the trainer image
The training step in the pipeline will require a custom training container. The custom training image is defined in `trainer_image/Dockerfile`.
```
!cat trainer_image_vertex/Dockerfile
```
Let's now build and push this trainer container to the container registry:
```
IMAGE_NAME = "trainer_image_covertype_vertex"
TAG = "latest"
TRAINING_CONTAINER_IMAGE_URI = f"gcr.io/{PROJECT_ID}/{IMAGE_NAME}:{TAG}"
TRAINING_CONTAINER_IMAGE_URI
!gcloud builds submit --timeout 15m --tag $TRAINING_CONTAINER_IMAGE_URI trainer_image_vertex
```
To match the ML framework version we use at training time while serving the model, we will have to supply the following serving container to the pipeline:
```
SERVING_CONTAINER_IMAGE_URI = (
"us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.0-20:latest"
)
```
**Note:** If you change the version of the training ML framework, you'll have to supply a serving container with a matching version (see [pre-built containers for prediction](https://cloud.google.com/vertex-ai/docs/predictions/pre-built-containers)).
## Building and deploying the pipeline
Let us write the pipeline to disk:
### Exercise
Implement
1. the `train_and_deploy` function in the `pipeline_vertex/training_lightweight_component.py`
1. the `tune_hyperparameters` function in the `pipeline_vertex/tuning_lightweight_component.py`
and complete the TODOs in the `pipeline.py` file below:
```
%%writefile ./pipeline_vertex/pipeline.py
# Copyright 2021 Google LLC
# Licensed under the Apache License, Version 2.0 (the "License"); you may not
# use this file except in compliance with the License. You may obtain a copy of
# the License at
# https://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS"
# BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# express or implied. See the License for the specific language governing
# permissions and limitations under the License.
"""Kubeflow Covertype Pipeline."""
import os
from kfp import dsl
from kfp.components import create_component_from_func_v2
from training_lightweight_component import train_and_deploy
from tuning_lightweight_component import tune_hyperparameters
PIPELINE_ROOT = os.getenv("PIPELINE_ROOT")
PROJECT_ID = os.getenv("PROJECT_ID")
REGION = os.getenv("REGION")
TRAINING_CONTAINER_IMAGE_URI = os.getenv("TRAINING_CONTAINER_IMAGE_URI")
SERVING_CONTAINER_IMAGE_URI = os.getenv("SERVING_CONTAINER_IMAGE_URI")
TRAINING_FILE_PATH = os.getenv("TRAINING_FILE_PATH")
VALIDATION_FILE_PATH = os.getenv("VALIDATION_FILE_PATH")
MAX_TRIAL_COUNT = int(os.getenv("MAX_TRIAL_COUNT", "5"))
PARALLEL_TRIAL_COUNT = int(os.getenv("PARALLEL_TRIAL_COUNT", "5"))
THRESHOLD = float(os.getenv("THRESHOLD", "0.6"))
tune_hyperparameters_component = create_component_from_func_v2(
tune_hyperparameters,
base_image="python:3.8",
output_component_file="covertype_kfp_tune_hyperparameters.yaml",
packages_to_install=["google-cloud-aiplatform"],
)
train_and_deploy_component = create_component_from_func_v2(
train_and_deploy,
base_image="python:3.8",
output_component_file="covertype_kfp_train_and_deploy.yaml",
packages_to_install=["google-cloud-aiplatform"],
)
@dsl.pipeline(
name="covertype-kfp-pipeline",
description="The pipeline training and deploying the Covertype classifier",
pipeline_root=PIPELINE_ROOT,
)
def covertype_train(
training_container_uri: str = TRAINING_CONTAINER_IMAGE_URI,
serving_container_uri: str = SERVING_CONTAINER_IMAGE_URI,
training_file_path: str = TRAINING_FILE_PATH,
validation_file_path: str = VALIDATION_FILE_PATH,
accuracy_deployment_threshold: float = THRESHOLD,
max_trial_count: int = MAX_TRIAL_COUNT,
parallel_trial_count: int = PARALLEL_TRIAL_COUNT,
pipeline_root: str = PIPELINE_ROOT,
):
staging_bucket = f"{pipeline_root}/staging"
tuning_op = tune_hyperparameters_component(
project=PROJECT_ID,
location=REGION,
container_uri=training_container_uri,
training_file_path=training_file_path,
validation_file_path=validation_file_path,
staging_bucket=staging_bucket,
max_trial_count=max_trial_count,
parallel_trial_count=parallel_trial_count,
)
accuracy = tuning_op.outputs["best_accuracy"]
with dsl.Condition(
accuracy >= accuracy_deployment_threshold, name="deploy_decision"
):
train_and_deploy_op = ( # pylint: disable=unused-variable
train_and_deploy_component(
project=PROJECT_ID,
location=REGION,
container_uri=training_container_uri,
serving_container_uri=serving_container_uri,
training_file_path=training_file_path,
validation_file_path=validation_file_path,
staging_bucket=staging_bucket,
alpha=tuning_op.outputs["best_alpha"],
max_iter=tuning_op.outputs["best_max_iter"],
)
)
```
### Compile the pipeline
Let's start by defining the environment variables that will be passed to the pipeline compiler:
```
ARTIFACT_STORE = f"gs://{PROJECT_ID}-kfp-artifact-store"
PIPELINE_ROOT = f"{ARTIFACT_STORE}/pipeline"
DATA_ROOT = f"{ARTIFACT_STORE}/data"
TRAINING_FILE_PATH = f"{DATA_ROOT}/training/dataset.csv"
VALIDATION_FILE_PATH = f"{DATA_ROOT}/validation/dataset.csv"
%env PIPELINE_ROOT={PIPELINE_ROOT}
%env PROJECT_ID={PROJECT_ID}
%env REGION={REGION}
%env SERVING_CONTAINER_IMAGE_URI={SERVING_CONTAINER_IMAGE_URI}
%env TRAINING_CONTAINER_IMAGE_URI={TRAINING_CONTAINER_IMAGE_URI}
%env TRAINING_FILE_PATH={TRAINING_FILE_PATH}
%env VALIDATION_FILE_PATH={VALIDATION_FILE_PATH}
```
Let us make sure that the `ARTIFACT_STORE` has been created, and let us create it if not:
```
!gsutil ls | grep ^{ARTIFACT_STORE}/$ || gsutil mb -l {REGION} {ARTIFACT_STORE}
```
**Note:** In case the artifact store was not created and properly set beforehand, you may need to run the following command in **CloudShell** to allow Vertex AI to access it:
```
PROJECT_ID=$(gcloud config get-value project)
PROJECT_NUMBER=$(gcloud projects list --filter="name=$PROJECT_ID" --format="value(PROJECT_NUMBER)")
gcloud projects add-iam-policy-binding $PROJECT_ID \
--member="serviceAccount:$PROJECT_NUMBER-compute@developer.gserviceaccount.com" \
--role="roles/storage.objectAdmin"
```
#### Use the CLI compiler to compile the pipeline
We compile the pipeline from the Python file we generated into a JSON description using the following command:
```
PIPELINE_JSON = "covertype_kfp_pipeline.json"
```
### Exercise
Compile the `pipeline_vertex/pipeline.py` with the `dsl-compile-v2` command line:
```
!dsl-compile-v2 --py pipeline_vertex/pipeline.py --output $PIPELINE_JSON
```
**Note:** You can also use the Python SDK to compile the pipeline from its python function
```python
from kfp.v2 import compiler  # matching the v2 DSL used above

compiler.Compiler().compile(
pipeline_func=covertype_train,
package_path=PIPELINE_JSON,
)
```
The result is the pipeline file.
```
!head {PIPELINE_JSON}
```
### Deploy the pipeline package
### Exercise
Upload and run the pipeline to Vertex AI using `aiplatform.PipelineJob`:
```
aiplatform.init(project=PROJECT_ID, location=REGION)
pipeline = aiplatform.PipelineJob(
display_name="covertype_kfp_pipeline",
template_path=PIPELINE_JSON,
enable_caching=False,
)
pipeline.run()
```
Copyright 2021 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
### Diabetes Case Study
You now have had the opportunity to work with a range of supervised machine learning techniques for both classification and regression. Before you apply these in the project, let's do one more example to see how the machine learning process works from beginning to end with another popular dataset.
We will start out by reading in the dataset and our necessary libraries. You will then gain an understanding of how to optimize a number of models using grid searching as you work through the notebook.
```
# Import our libraries
import pandas as pd
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split, RandomizedSearchCV
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
import matplotlib.pyplot as plt
from sklearn.svm import SVC
import seaborn as sns
sns.set(style="ticks")
import check_file as ch
%matplotlib inline
# Read in our dataset
diabetes = pd.read_csv('diabetes.csv')
# Take a look at the first few rows of the dataset
diabetes.head()
```
Because this course has been aimed at understanding machine learning techniques, we have largely ignored items related to parts of the data analysis process that come before building machine learning models - exploratory data analysis, feature engineering, data cleaning, and data wrangling.
> **Step 1:** Let's do a few steps here. Take a look at some of the usual summary statistics, then match the computed values to the appropriate keys in the dictionary below.
```
# Cells for work
diabetes.info()
diabetes['Outcome'].value_counts()
diabetes[ diabetes['Outcome']==1 ].shape[0] / diabetes.shape[0]
diabetes.hist();
sns.pairplot(diabetes, hue='Outcome');
sns.heatmap(diabetes.corr(), annot=True, cmap="YlGnBu");
# Possible keys for the dictionary
a = '0.65'
b = '0'
c = 'Age'
d = '0.35'
e = 'Glucose'
f = '0.5'
g = "More than zero"
# Fill in the dictionary with the correct values here
answers_one = {
'The proportion of diabetes outcomes in the dataset': d,
'The number of missing data points in the dataset': b,
'A dataset with a symmetric distribution': e,
'A dataset with a right-skewed distribution': c,
'This variable has the strongest correlation with the outcome': e
}
# Just to check your answer, don't change this
ch.check_one(answers_one)
```
> **Step 2**: Since our dataset here is quite clean, we will jump straight into the machine learning. Our goal here is to be able to predict cases of diabetes. First, you need to identify the y vector and X matrix. Then, the following code will divide your dataset into training and test data.
```
y = diabetes['Outcome']
X = diabetes[['Pregnancies','Glucose', 'BloodPressure', 'SkinThickness','Insulin', 'BMI', 'DiabetesPedigreeFunction', 'Age']]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
```
Now that you have a training and testing dataset, we need to create some models and ultimately find the best of them. However, unlike in earlier lessons, where we used the defaults, we can now tune these models to be the very best models they can be.
It can often be difficult (and extremely time consuming) to test all the possible hyperparameter combinations to find the best models. Therefore, it is often useful to set up a randomized search.
In practice, randomized searches across hyperparameters have been shown to be more time efficient than exhaustive grid searches, while still optimizing quite well. One article related to this topic is available [here](https://blog.h2o.ai/2016/06/hyperparameter-optimization-in-h2o-grid-search-random-search-and-the-future/). The documentation for using randomized search in sklearn can be found [here](http://scikit-learn.org/stable/auto_examples/model_selection/plot_randomized_search.html#sphx-glr-auto-examples-model-selection-plot-randomized-search-py) and [here](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RandomizedSearchCV.html).
In order to use randomized search effectively, you will want to have a reasonable understanding of the distributions that best describe your hyperparameters. Understanding what values are possible for each hyperparameter will allow you to write a search that performs well (and doesn't break).
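One way to encode that understanding is to pass `scipy.stats` distributions instead of fixed lists, so every trial samples fresh values; this is a hedged sketch with illustrative (untuned) ranges on synthetic data, not the notebook's actual search:

```python
from scipy.stats import randint, uniform
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=200, random_state=0)

# Distributions instead of lists: each trial draws new values
param_dist = {
    "n_estimators": randint(10, 200),   # integers in [10, 200)
    "max_features": uniform(0.1, 0.9),  # floats in [0.1, 1.0)
}

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions=param_dist,
    n_iter=5,
    random_state=0,
)
search.fit(X, y)
print(search.best_params_)
```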
> **Step 3**: In this step, I will show you how to use randomized search, and then you can set up grid searches for the other models in Step 4. However, you will be helping, as I don't remember exactly what each of the hyperparameters in SVMs do. Match each hyperparameter to its corresponding tuning functionality.
```
# build a classifier
clf_rf = RandomForestClassifier()
# Set up the hyperparameter search
param_dist = {"max_depth": [3, None],
"n_estimators": list(range(10, 200)),
"max_features": list(range(1, X_test.shape[1]+1)),
"min_samples_split": list(range(2, 11)),
"min_samples_leaf": list(range(1, 11)),
"bootstrap": [True, False],
"criterion": ["gini", "entropy"]}
# Run a randomized search over the hyperparameters
random_search = RandomizedSearchCV(clf_rf, param_distributions=param_dist)
# Fit the model on the training data
random_search.fit(X_train, y_train)
# Make predictions on the test data
rf_preds = random_search.best_estimator_.predict(X_test)
ch.print_metrics(y_test, rf_preds, 'random forest')
```
> **Step 4**: Now that you have seen how to run a randomized search using random forest, try this out for the AdaBoost and SVC classifiers. You might also decide to try out other classifiers that you saw earlier in the lesson to see what works best.
```
# build a classifier for ada boost
clf_ada = AdaBoostClassifier()
# Set up the hyperparameter search
# look at setting up your search for n_estimators, learning_rate
# http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.AdaBoostClassifier.html
param_dist = {"n_estimators": [10, 100, 200, 400],
"learning_rate": [0.001, 0.005, .01, 0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 1, 2, 10, 20]}
# Run a randomized search over the hyperparameters
ada_search = RandomizedSearchCV(clf_ada, param_distributions=param_dist)
# Fit the model on the training data
ada_search.fit(X_train, y_train)
# Make predictions on the test data
ada_preds = ada_search.best_estimator_.predict(X_test)
ch.print_metrics(y_test, ada_preds, 'adaboost')
# build a classifier for support vector machines
clf_svc = SVC()
# Set up the hyperparameter search
# look at setting up your search for C (recommend 0-10 range),
# kernel, and degree
# http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html
param_dist = {"C": [0.1, 0.5, 1, 3, 5],
"kernel": ['linear','rbf']
}
# Run a randomized search over the hyperparameters
svc_search = RandomizedSearchCV(clf_svc, param_distributions=param_dist)
# Fit the model on the training data
svc_search.fit(X_train, y_train)
# Make predictions on the test data
svc_preds = svc_search.best_estimator_.predict(X_test)
ch.print_metrics(y_test, svc_preds, 'svc')
```
> **Step 5**: Use the test below to see if your best model matches what we found after running the grid search.
```
a = 'randomforest'
b = 'adaboost'
c = 'supportvector'
best_model = b # put your best model here as a string or variable
ch.check_best(best_model)
```
Once you have found your best model, it is also important to understand why it is performing well. In regression models where you can see the weights, it can be much easier to interpret results.
> **Step 6**: Despite the fact that your models here are more difficult to interpret, there are some ways to get an idea of which features are important. Using the "best model" from the previous question, find the features that were most important in helping determine if an individual would have diabetes or not. Do your conclusions match what you might have expected during the exploratory phase of this notebook?
```
# Show your work here - the plot below was helpful for me
# https://stackoverflow.com/questions/44101458/random-forest-feature-importance-chart-using-python
features = diabetes.columns[:diabetes.shape[1]]
importances = random_search.best_estimator_.feature_importances_
indices = np.argsort(importances)
plt.title('Feature Importances')
plt.barh(range(len(indices)), importances[indices], color='b', align='center')
plt.yticks(range(len(indices)), features[indices])
plt.xlabel('Relative Importance');
```
> **Step 7**: Use your results above to complete the dictionary below.
```
# Check your solution by matching the correct values in the dictionary
# and running this cell
a = 'Age'
b = 'BloodPressure'
c = 'BMI'
d = 'DiabetesPedigreeFunction'
e = 'Insulin'
f = 'Glucose'
g = 'Pregnancy'
h = 'SkinThickness'
sol_seven = {
'The variable that is most related to the outcome of diabetes' : f,
'The second most related variable to the outcome of diabetes' : c,
'The third most related variable to the outcome of diabetes' : a,
'The fourth most related variable to the outcome of diabetes' : d
}
ch.check_q_seven(sol_seven)
```
> **Step 8**: Now provide a summary of what you did through this notebook, and how you might explain the results to a non-technical individual. When you are done, check out the solution notebook by clicking the orange icon in the upper left.
**In this case study, we looked at predicting diabetes for 768 patients. There was a reasonable amount of class imbalance with just under 35% of patients having diabetes. There were no missing data, and initial looks at the data showed it would be difficult to separate patients with diabetes from those that did not have diabetes.**
**Three advanced modeling techniques were used to predict whether or not a patient has diabetes. The most successful of these techniques proved to be an AdaBoost Classification technique, which had the following metrics:**
Accuracy score for adaboost : 0.7792207792207793
Precision score adaboost : 0.7560975609756098
Recall score adaboost : 0.5636363636363636
F1 score adaboost : 0.6458333333333333
**Based on the initial look at the data, it is unsurprising that `Glucose`, `BMI`, and `Age` were important in understanding if a patient has diabetes. These were consistent with more sophisticated approaches. An interesting finding was that pregnancy initially looked to be correlated with the outcome; however, this was likely due to its large correlation with age.**
```
from temporal_peaks_cluster import *
df = pd.read_csv('training_pm_nosat_150k.dat', sep=' ', header=None) # lazy way of reading the data
num_events = df[0]
# df[1] is just an integer 200. I don't know why.
# Information which won't be available in the experimental data (i.e. labels)
amp1, amp2 = df[2], df[7]
rise1, rise2 = df[3], df[8] # default decay2=0
decay1, decay2 = df[4], df[9] # default pos2=0
offset, pos1, pos2 = df[6], df[5], df[10] # default pos2=0
# Information which will actually be available in the experiment (i.e. features)
wave_forms = df[df.columns[11:]]
print('Data Read. Extracting useful information out of the data...')
wave_forms.columns = range(wave_forms.shape[1])
print("Renamed columns")
# Determine the standard deviation of the background
stdev_bg = np.diff(wave_forms.T[:30], axis=0).std(axis=0)
# when dealing with experimental data a more sophisticated method of determining the BG variation would be required,
# since not all traces may have the first 30 values = background
# classification step
features = []
for num, (ind, trace) in enumerate(wave_forms.iterrows()):
if num>30: # don't have time to classify all len(wave_forms) samples in this demo.
break
print(f'{ind=}, {amp1[ind]=}, {amp2[ind]=}',)
intersection_paths = scroll(trace.values, offset[ind], y_width=stdev_bg[ind])
point_cloud = intersection_paths[:,:]
minmax = MinMaxScaler()
# minmax.fit([[0,trace.min()],[len(trace),trace.max()]])
minmax.fit([[0,trace.min(), trace.min()],[len(trace),trace.max(), trace.max()]])
features.append(minmax.transform(point_cloud))
print(f'{len(intersection_paths)}')
features, labels = ary(features), num_events.values
# displaying the classification results
for ind, feat in enumerate(features):
# !!!! DBSCAN requres tuning of the eps !!!!
dbs = DBSCAN(eps=0.1)
pred = dbs.fit_predict(feat)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.set_title(f'{ind}: {num_events[ind]} events')
for lab in range(pred.max()+1):
ax.scatter(*(feat[pred==lab]).T, label=f'label={lab} has {(pred==lab).sum()} points')
line = wave_forms.loc[ind].values
scaled_line = (line-line.min())/(line.max()-line.min())
plt.plot(np.linspace(0,1, len(scaled_line)), scaled_line, np.zeros(len(scaled_line)))
plt.legend()
plt.show()
"""
Verdict:
Looks like this scroll approach is not very easy to use:
1. the DBSCAN eps value needs tuning.
2. the outputted labels require some filtering
(e.g. remove labels which occurs less than N times, where N=100? or N=len(feat)/30?
Or some other filtering condition depending on the location of the point cluster relative to the line?)
3. Scrolling algorithm parameters that we can tune:
3.1 use a different 'scrolling' algorithm: e.g. connect the furthest two cross-over points to extrapolate, instead of the nearest
3.2 width of the band used when 'scrolling'
4. background determination method
Log of failed methods:
1. Other sklearn.cluster algorithms considered
OPTICS: Clearly doesn't work, divides into too many different clusters.
And I remember testing one of (SpectralClustering, AgglomerativeClustering) and it also doesn't work.
Note: only these four algorithms were considered because the rest don't seem to recognize line clusters.
2.
"""
```
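Point 2 of the verdict above (dropping labels that occur fewer than N times) can be sketched independently of the scrolling pipeline; the choice of `min_size` is an assumption that would need tuning:

```python
import numpy as np

def filter_small_clusters(labels, min_size):
    """Relabel clusters with fewer than min_size points as noise (-1)."""
    labels = np.asarray(labels).copy()
    for lab in np.unique(labels):
        if lab != -1 and (labels == lab).sum() < min_size:
            labels[labels == lab] = -1
    return labels

# Toy DBSCAN output: cluster 2 has a single point, so it becomes noise
pred = np.array([0, 0, 0, 1, 1, 2, -1])
print(filter_small_clusters(pred, min_size=2).tolist())
# [0, 0, 0, 1, 1, -1, -1]
```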
# Introduction
## The problem
* Among people who saw the promotional commercial on Channel KLMN, Los Angeles viewers showed a higher watching rate (i.e., a higher fraction of people who watch) for “US Politics This Week” than viewers from other cities
* The executive producer suggested making new promotional commercials tailored to big US cities by featuring their Mayors
* Channel KLMN designed and conducted an experiment to test whether the new commercial works
* After the A/B test, surprisingly, the results showed that the old commercial was doing better: the fraction of people watching the show decreased when they viewed the new commercial
## The datasets
### test
* General info about the test results
* Columns:
* viewer_id
* date
* tv_make
* tv_size
* uhd_capable
* tv_provider
* total_time_watched
* watched
* test
### viewer
* Info about the users
* Columns:
* viewer_id
* gender
* age
* city
## The Questions
1. Reproduce the negative result found above. Is it actually negative?
2. Explain what might be happening. Are the commercials with local Mayors really driving a lower fraction of people to watch the show?
3. If you identify anything wrong, design an algorithm that would return False if the same problem happens in the future. If nothing is wrong, provide recommendations regarding whether or not they should continue airing the new commercials
```
# Import libraries
%matplotlib inline
import numpy as np
import pandas as pd
import pandas_profiling as pp
import matplotlib.pyplot as plt
import seaborn as sns
import calendar
import scipy
from scipy import stats
import statsmodels
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.formula.api import ols
from statsmodels import stats
from statsmodels.stats import proportion
from statsmodels.stats.proportion import proportions_ztest
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
import warnings
warnings.filterwarnings('ignore')
```
# Load data and data housekeeping
## Test data set
```
test = pd.read_csv('./Data/test_data.csv', parse_dates=['date'])
test.head()
```
Get to know more about the data, i.e., data quality, descriptive info, etc.
```
test.info()
test.describe(include='all')
pp.ProfileReport(test)
# Check if there are duplicates
sum(test.duplicated())
```
The test data is clean. There are no duplicates, no messy entries. Only TV provider has missing values.
As shown in the ProfileReport, 51.1% of viewers were in the control group and 48.9% in the experiment group. Without considering further details, the sample sizes are fairly even between the control and experiment groups.
## Viewer data set
```
viewer = pd.read_csv('./Data/viewer_data.csv')
viewer.head()
viewer.info()
viewer.describe(include='all')
pp.ProfileReport(viewer)
# Check if there are duplicates
sum(viewer.duplicated())
```
The viewer data is clean. There are no duplicates or missing values.
# Reproduce the negative A/B test results
Because we are interested in comparing watching rates, i.e., the fraction of people who watched the show, between the experiment and control groups, we need to use a proportions z-test to test for a statistical difference between the two rates.
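For reference, the two-sample z statistic that `proportions_ztest` computes uses the pooled proportion under the null hypothesis; a quick cross-check by hand on made-up counts (not the show's data):

```python
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

# Made-up counts: 100/1000 watched in one group, 160/1000 in the other
counts = np.array([100, 160])
nobs = np.array([1000, 1000])

# Pooled two-proportion z statistic by hand
p_pool = counts.sum() / nobs.sum()
se = np.sqrt(p_pool * (1 - p_pool) * (1 / nobs[0] + 1 / nobs[1]))
z_manual = (counts[0] / nobs[0] - counts[1] / nobs[1]) / se

z_sm, p_sm = proportions_ztest(counts, nobs, alternative="two-sided")
print(np.isclose(z_manual, z_sm))  # True
```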
```
exp_watched = sum(test[test.test == 1].watched)
exp_n = len(test[test.test == 1])
ctrl_watched = sum(test[test.test == 0].watched)
ctrl_n = len(test[test.test == 0])
# Use proportional z test in statsmodels
watched_counts = np.array([exp_watched,ctrl_watched])
n_viewer = np.array([exp_n,ctrl_n])
zscore, pvalue = proportions_ztest(watched_counts, n_viewer, alternative = 'two-sided')
print("zscore: {0}\np-value: {1}".format(zscore,pvalue))
print("Experimental group: " '{:.4%}'.format(exp_watched/exp_n))
print("Control group: " '{:.4%}'.format(ctrl_watched/ctrl_n))
```
The proportions z-test shows a very small p-value, far smaller than the commonly used significance level of 0.05. It leads to the rejection of the null hypothesis, i.e., no difference in watching rate, and the acceptance of the alternative hypothesis that the control group showed a higher watching rate than the experimental group.<br>
**Here I reproduced the negative A/B test results**
**Note that a general comparison between the experiment and control groups, without considering other relevant factors such as the city composition of each group, showed that the control version is doing better.**
**However, we noticed that LA viewers composed a higher fraction of the viewers who actually watched the show, and in the A/B test all LA viewers had to be counted as control group because their local Mayor is already in the old commercial. If they are willing-to-watch viewers, having them in the control group will pull the entire control group to a higher watching rate.**
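This is the classic Simpson's-paradox setup: one group can win in aggregate even when rates are identical within every city. A toy illustration with made-up numbers:

```python
import pandas as pd

# Made-up counts: every non-LA cell watches at 20%, but the high-rate
# LA viewers (50%) appear only in the control group
rows = [("LA",    "control",  500, 0.50),
        ("Other", "control",  500, 0.20),
        ("Other", "test",    1000, 0.20)]
df = pd.DataFrame(rows, columns=["city", "group", "n", "rate"])
df["watched"] = (df.n * df.rate).astype(int)

agg = df.groupby("group").agg(n=("n", "sum"), watched=("watched", "sum"))
print((agg.watched / agg.n).to_dict())
# {'control': 0.35, 'test': 0.2} -- control "wins" purely through composition
```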
# What caused the results? Is the new commercial really worse?
## Join test and user data
* For the convenience of data exploration and finding out potential causes
* Left join to test data because it has a slightly larger set of unique viewers
```
# Merge two tables
merged_data = pd.merge(test, viewer, how='left', on='viewer_id')
merged_data.head()
merged_data.to_csv('./Data/merged_data.csv')
merged_data.city.unique()
merged_data[merged_data.city == 'Los Angeles'].test.unique()
```
This confirms that viewers from Los Angeles were all in the control group.
## Test results across cities
Is the watching rate of control group constantly higher than the experimental group across cities?
```
cities = [name for name in merged_data['city'].unique() if name not in [np.nan,'Los Angeles']]
print('{0:15s} {1:>15s} {2:>15s} {3:>10s}'.format('City', 'Experiment Rate', 'Control Rate', 'P-Value'))
print('-' * 65)
for city in cities:
exp_watched = sum(merged_data[(merged_data.test == 1) & (merged_data.city == city)].watched)
exp_n = len(merged_data[(merged_data.test == 1) & (merged_data.city == city)])
ctrl_watched = sum(merged_data[(merged_data.test == 0) & (merged_data.city == city)].watched)
ctrl_n = len(merged_data[(merged_data.test == 0) & (merged_data.city == city)])
watched_counts = np.array([exp_watched,ctrl_watched])
n_viewer = np.array([exp_n,ctrl_n])
zscore, pvalue = proportions_ztest(watched_counts, n_viewer, alternative = 'two-sided')
print('{0:15s} {1:15.5f} {2:15.5f} {3:10f}'.format(city, exp_watched/exp_n, ctrl_watched/ctrl_n, pvalue))
```
**This confirms my previous suspicion. For cities other than LA, there are no significant differences in watching rate between the experiment and control groups. All the p-values are larger than 0.05.** <br>
If we want to take a closer look and consider differences that are marginally significant, i.e., p < 0.1, I noticed that:
* Detroit shows a marginally significant higher watching rate for the experiment group than that for the control group, p = 0.054
* New York shows a marginally significant higher watching rate for the control group than that for the experiment group, p = 0.069
```
# Visualize the comparisons across cities
fig = plt.figure(figsize = (25, 6))
sns.set(font_scale = 1.4)
sns.barplot(x='city', y='watched', hue='test', data=merged_data)
plt.title('Comparison of Watching Rate Between Experiment and Control Groups Across Cities')
```
Check out the proportion of viewers from different cities in the experiment vs. control groups
```
city_ls = [name for name in merged_data['city'].unique() if name is not np.nan]
print('{0:15s} {1:>20s} {2:>30s} {3:>25s}'.format('City', 'Proportion in Control','Proportion in Experiment',
'Proportion Difference'))
print('-' * 95)
for city in city_ls:
ctrl_proportion = (len(merged_data[(merged_data.test == 0) & (merged_data.city == city)])
/len(merged_data[merged_data.test == 0]))
exp_proportion = (len(merged_data[(merged_data.test == 1) & (merged_data.city == city)])
/len(merged_data[merged_data.test == 1]))
difference = ctrl_proportion - exp_proportion
print('{0:15s} {1:15.5f} {2:25.5f} {3:30.5f}'.format(city, ctrl_proportion, exp_proportion, difference))
```
Among control-group viewers, around 24.57% are LA viewers, whom we have already noticed to be more likely to watch the show. Their higher watching rate is driving up the control group's overall rate. <br>
**Comparisons of watching rate between the experiment and control groups for non-LA cities show no significant differences at a significance level of 0.05.**
**When conducting A/B tests, we want to balance the experiment and control groups with regard to factors that may affect the dependent variable. In this case, city seems to be such a factor.**<br>
Also note that Philadelphia viewers composed 11.99% of the experiment group, but only 1.99% of the control group. What do we know about Philadelphia viewers before the test? How did they behave with the old commercial?
```
# Check the watching rate across cities under the control condition, which can represent
# the historical performance of different cities
fig = plt.figure(figsize = (25, 6))
sns.set(font_scale = 1.4)
sns.barplot(x = merged_data[merged_data.test == 0].city, y = merged_data[merged_data.test == 0].watched)
plt.title('Comparison of Watching Rate Across Cities with the Old Commercial (control condition) ')
```
Philadelphia and Seattle have relatively low watching rates compared to the other non-LA cities. <br>
It appears that **Philadelphia** viewers were unwilling-to-watch viewers. More importantly, they composed 11.99% of the experiment group, which may drive the watching rate of the experiment group down.<br>
Recall the proportions of viewers in the experiment vs. control groups for different cities: **Seattle** viewers composed 7.76% of the experiment group, whereas they composed only 1.33% of the control group. Similar to Philadelphia, Seattle viewers were unwilling-to-watch viewers, and their higher share of the experiment group might drive its watching rate down too.
**In other words, the issue with this A/B test could be that viewers in some cities were not equally likely to be assigned to the experiment or control group; they were biased toward one of them. For example, LA viewers were all assigned to the control group, while Philadelphia and Seattle viewers were more likely to be assigned to the experiment group.**
## Any Other factors?
Explore whether other factors affect the test results. <br>
To avoid the effects of the high-watching-rate city LA and the low-watching-rate cities Philadelphia and Seattle, when exploring other factors I compared results from the entire data set with those from the data set excluding these cities.
```
regular_cities = merged_data[~merged_data.city.isin(['Los Angeles','Philadelphia','Seattle',np.nan])]
regular_cities.city.unique()
regular_cities.to_csv('./data/regular_cities.csv')
```
### Does total time watched matter?
```
# Visualization of the distribution of total time watched for the entire data set
fig, (ax1,ax2) = plt.subplots(nrows=1, ncols=2, figsize=(18, 6))
sns.set(font_scale = 1.5)
sns.distplot(merged_data[(merged_data.test == 1)&(merged_data.watched == 1)].total_time_watched,
label = 'Test',bins = 24, kde=True, ax=ax1)
sns.distplot(merged_data[(merged_data.test == 0)&(merged_data.watched == 1)].total_time_watched,
label = 'Control',bins = 24,kde=True, ax=ax1)
ax1.set_title('Distribution of Viewers Who Watched the Show over Total Time')
sns.distplot(merged_data[(merged_data.test == 1)&(merged_data.watched == 0)].total_time_watched,
label = 'Test',bins = 24,kde=True, ax=ax2)
sns.distplot(merged_data[(merged_data.test == 0)&(merged_data.watched == 0)].total_time_watched,
label = 'Control',bins = 24,kde=True, ax=ax2)
ax2.set_title('Distribution of Viewers Who Did Not Watch the Show over Total Time')
plt.tight_layout()
plt.legend()
```
From the plots above, I noticed that the distributions of total TV watching time were similar between viewers in the test and control groups, and between viewers who watched the show and those who did not.<br>
To check statistically, I can run a 2 (Condition: test vs. control) * 2 (Action: watched vs. not watched) ANOVA on the total TV watching time.
```
condition = merged_data.test
action = merged_data.watched
formula = 'merged_data.total_time_watched ~ C(condition) + C(action) + C(condition):C(action)'
model = ols(formula, merged_data).fit()
aov_table = statsmodels.stats.anova.anova_lm(model,typ=2)
print(aov_table)
```
There are no significant differences in total TV watching time between the test and control groups, nor between viewers who did and did not watch the show.
What about the regular cities?
```
# Visualization of the distribution of total time watched for regular cities
fig, (ax1,ax2) = plt.subplots(nrows=1, ncols=2, figsize=(18, 6))
sns.set(font_scale = 1.5)
sns.distplot(regular_cities[(regular_cities.test == 1)&(regular_cities.watched == 1)].total_time_watched,
label = 'Test',bins = 24, kde=True, ax=ax1)
sns.distplot(regular_cities[(regular_cities.test == 0)&(regular_cities.watched == 1)].total_time_watched,
label = 'Control',bins = 24,kde=True, ax=ax1)
ax1.set_title('Distribution of Viewers Who Watched the Show over Total Time')
sns.distplot(regular_cities[(regular_cities.test == 1)&(regular_cities.watched == 0)].total_time_watched,
label = 'Test',bins = 24,kde=True, ax=ax2)
sns.distplot(regular_cities[(regular_cities.test == 0)&(regular_cities.watched == 0)].total_time_watched,
label = 'Control',bins = 24,kde=True, ax=ax2)
ax2.set_title('Distribution of Viewers Who Did Not Watch the Show over Total Time')
plt.tight_layout()
plt.legend()
```
The distributions appear to be very similar to those for the entire data set. <br>
A 2 (Condition: test vs. control) * 2 (Action: watched vs. not watched) * 15 (City) ANOVA on the total TV watching time would answer whether samples are evenly assigned across conditions and cities.
```
city = merged_data.city
formula = 'merged_data.total_time_watched ~ C(condition) + C(action) + C(city) + C(condition):C(action) + C(city):C(action) + C(condition):C(city)'
model = ols(formula, merged_data).fit()
aov_table = statsmodels.stats.anova.anova_lm(model,typ=2)
print(aov_table)
```
The above ANOVA table shows no significant main effects or interactions, ps > 0.05. Therefore, I have ruled out an effect of total TV watching time on whether viewers watched the show and on whether they were assigned to the test vs. control condition.
### Does TV provider matter?
For the sake of time, I'll run statistical tests first; if there are significant results, I will plot figures to get a better idea. <br>
To test whether TV provider matters, I use a generalized linear model to analyze whether viewers watched the show with regard to condition (test vs. control), TV provider, city, and their interactions. Because the dependent variable is binary (watched vs. not), a generalized linear model of the binomial family is needed.
```
provider = merged_data.tv_provider
formula = 'action ~ C(condition) + C(provider) + C(city) + C(condition):C(provider) + C(condition):C(city) + C(city):C(provider)'
logit_model = smf.glm(formula, merged_data, family = sm.families.Binomial()).fit()
print(logit_model.summary())
```
The table is long because of the 15 levels of city and their interactions with other factors. <br>
For the question I care about here, there were no significant differences between TV providers in watching the show.<br>
With regard to the big question, the useful findings are:
* In general, being an LA viewer increases the probability of watching the show
* In general, being a Philadelphia, Seattle, or Miami viewer decreases the probability of watching the show to some extent
* For Seattle viewers, using DirecTV increases the probability of watching
* For Miami viewers, using DirecTV or Time Warner Cable increases the probability of watching
* For Houston or Phoenix viewers, using Time Warner Cable increases the probability of watching
### Does viewer gender matter?
```
gender = merged_data.gender
formula2 = 'action ~ C(condition) + C(gender) + C(city) + C(condition):C(gender) + C(condition):C(city) + C(city):C(gender)'
logit_model2 = smf.glm(formula2, merged_data, family = sm.families.Binomial()).fit()
print(logit_model2.summary())
```
There was no significant difference between genders in watching the show or not.
With this GLM model, the positive effect of being an LA viewer and the negative effects of being a Philadelphia or Seattle viewer are replicated.
### Does viewer age matter?
```
merged_data = pd.read_csv('./Data/merged_data.csv')
# Group ages for easier processing
def group_age(age):
if age <= 40:
return 'young'
elif age <= 60:
return 'middle'
else:
return 'senior'
merged_data['age_group'] = merged_data.age.apply(lambda x: group_age(x))
merged_data.age_group.unique()
merged_data.head()
merged_data.to_csv('./Data/merged_data.csv')
merged_data = pd.read_csv('./Data/merged_data.csv')
age_group = merged_data.age_group
action = merged_data.watched
condition = merged_data.test
city = merged_data.city
# To avoid repeated info, I only considered the interaction between age and city and the interaction between age
# and condition
formula3 = 'action ~ C(condition) + C(age_group) + C(city) + C(condition):C(age_group) + C(city):C(age_group)'
logit_model3 = smf.glm(formula3, merged_data, family = sm.families.Binomial()).fit()
print(logit_model3.summary())
```
Similarly, there was no main effect of age group. There are two interactions: senior viewers with condition, and senior viewers with Houston. These interactions are not directly related to the questions I was asked. <br>
**It appears that the negative results of the A/B test are mainly caused by the unbalanced condition assignment of viewers in some cities. LA viewers are willing-to-watch viewers, and they composed a good portion of the control group, driving that group's watching rate up. Philadelphia and Seattle viewers are unwilling-to-watch viewers, and they composed a good portion of the experiment group, driving that group's watching rate down.** <br>
**To summarize, the current A/B test results are not trustworthy due to the unbalanced composition of samples between the control and test groups. We don't have enough evidence that the new commercial is worse.**
# Write an algorithm to flag the problem for future practice
* Ideally, an A/B test should implement a randomized controlled trial, meaning subjects are randomly assigned to either the control group or the experiment group. With a large sample size, each subject in the sample should be equally likely to be assigned to either group, i.e., the probability equals 0.5.
* The current experiment violates random assignment in two ways:
    * All LA viewers were counted in the control group; there was no random assignment for LA viewers at all
    * Philadelphia and Seattle viewers seem to have had a higher probability of being assigned to the experiment group
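Under true randomization, each city's experiment-group count follows a Binomial(n, 0.5) distribution, so an exact per-city check can be sketched with `scipy.stats.binomtest` (SciPy >= 1.7). The counts below are illustrative, not the notebook's data:

```python
from scipy.stats import binomtest

def assignment_balance_pvalue(n_exp, n_ctrl, p=0.5):
    """Exact binomial test that each subject was assigned to the
    experiment group with probability p (0.5 under randomization)."""
    return binomtest(n_exp, n_exp + n_ctrl, p).pvalue

# Near 50/50 split: large p-value, consistent with random assignment
print(assignment_balance_pvalue(498, 502))
# Heavily skewed split: tiny p-value, assignment looks biased
print(assignment_balance_pvalue(900, 100))
```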
```
def sanity_check(size1, size2, significance = 0.05):
    n = size1 + size2
    # Two-sided critical value: z = norm.ppf(1 - significance / 2)
    z = scipy.stats.norm.ppf(1 - significance / 2)
    # Under Binomial(n, 0.5): mean n * 0.5, standard deviation sqrt(n * 0.5 * 0.5)
    confint = n * 0.5 + np.array([-1, 1]) * z * np.sqrt(n * 0.5 * 0.5)
    return confint
# Check the likelihood of viewers in each city to be assigned to either group
print('{0:10s} {1:>15s} {2:>20s} {3:>20s} {4:>13s} {5:>14s} {6:>14s}'.format('City', 'Sample Size','CI of Size',
'Experiment Size', 'Control Size','Exp within CI','Ctrl within CI'))
print('-' * 112)
city_ls = [name for name in merged_data['city'].unique() if name is not np.nan]
exp_n_ls = []
ctrl_n_ls = []
for city in city_ls:
n_city = len(merged_data[merged_data.city == city])
exp_n = len(merged_data[(merged_data.test == 1) & (merged_data.city == city)])
exp_n_ls.append(exp_n)
ctrl_n = len(merged_data[(merged_data.test == 0) & (merged_data.city == city)])
CI = sanity_check(exp_n, ctrl_n)
ctrl_n_ls.append(ctrl_n)
exp_w_CI = CI[0] < exp_n < CI[1]
ctrl_w_CI = CI[0] < ctrl_n < CI[1]
print('{0:10s} {1:>10d} {2:>35s} {3:>10d} {4:>10d} {5:>12s} {6:>14s}'.format(city, n_city, str(CI), exp_n, ctrl_n,
str(exp_w_CI),str(ctrl_w_CI)))
```
**Note that in this case, viewers from every city were not equally likely to be assigned to the experiment or control group.**
Alternatively, I used the chi-square test of independence to check whether city is associated with condition assignment, i.e., whether viewers from some cities are more likely to be assigned to one of the conditions.
```
observed_number = np.array([exp_n_ls, ctrl_n_ls])
from scipy import stats
stats.chi2_contingency(observed_number)
```
In this case, chi-square = 76002.42, p ≈ 0, df = 14, indicating that the condition assignment is not independent of city.<br>
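For intuition, the chi-square statistic compares observed counts to the counts expected under independence (expected = row total × column total / grand total). A minimal hand-rolled version, shown on a toy table rather than the notebook's data:

```python
import numpy as np

def chi2_statistic(observed):
    """Pearson chi-square statistic for a contingency table."""
    observed = np.asarray(observed, dtype=float)
    row = observed.sum(axis=1, keepdims=True)     # row totals
    col = observed.sum(axis=0, keepdims=True)     # column totals
    expected = row * col / observed.sum()         # counts under independence
    return ((observed - expected) ** 2 / expected).sum()

# Toy 2x2 table: experiment/control assignment counts for two cities
print(chi2_statistic([[90, 10], [40, 60]]))
```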
To formalize this into an algorithm (reusing some of the code above):
```
from scipy import stats

def eval_condition_assignment(df, related_factor):
    """Chi-square test of independence between condition assignment and a factor.
    Prints and returns False if assignment depends on the factor."""
    factor_levels = [level for level in df[related_factor].unique() if level is not np.nan]
    exp_n_ls = []
    ctrl_n_ls = []
    for level in factor_levels:
        exp_n = len(df[(df.test == 1) & (df[related_factor] == level)])
        exp_n_ls.append(exp_n)
        ctrl_n = len(df[(df.test == 0) & (df[related_factor] == level)])
        ctrl_n_ls.append(ctrl_n)
    observed_number = np.array([exp_n_ls, ctrl_n_ls])
    statistic_values = stats.chi2_contingency(observed_number)
    if statistic_values[1] < 0.05:  # p-value: assignment depends on the factor
        print('False')
        return False
    return True

eval_condition_assignment(merged_data, 'city')
```
# Future Directions
* Automate a pipeline that determines sample size based on power analysis
* Provide more guidance on, and implementation of, assigning viewers and collecting data in an unbiased way
# Network Visualization
This notebook demonstrates how to view **MASSpy** models on network maps using the [Escher](https://escher.github.io/#/) visualization tool <cite data-cite="KDragerE+15">(King et al., 2015)</cite>.
The **Escher** package must already be installed into the environment. To install **Escher**:
```python
pip install escher
```
## Viewing Models with Escher
The **MASSpy** package also comes with some maps for testing purposes.
```
from os.path import join
import numpy as np
import mass
import mass.test
# Load the glycolysis and hemoglobin models, then merge them
glycolysis = mass.test.create_test_model("Glycolysis")
hemoglobin = mass.test.create_test_model("Hemoglobin")
model = glycolysis.merge(hemoglobin, inplace=False)
# Set the path to the map file
map_filepath = join(mass.test.MAPS_DIR, "RBC.glycolysis.map.json")
# To view the list of available maps, remove the semicolon
mass.test.view_test_maps();
```
The primary object for viewing **Escher** maps is the `escher.Builder`, a Jupyter widget that can be viewed in a Jupyter notebook.
```
import escher
from escher import Builder
# Turns off the warning message when leaving or refreshing this page.
# The default setting is False to help avoid losing work.
escher.rc['never_ask_before_quit'] = True
```
To load an existing map, the path to the JSON file of the **Escher** map is provided to the `map_json` argument of the `Builder`. The `MassModel` can be loaded using the `model` argument.
```
escher_builder = Builder(
model=model,
map_json=map_filepath)
escher_builder
```
## Mapping Data onto Escher
### Viewing Reaction Data
Reaction data can be displayed on the **Escher** map using a dictionary that contains reaction identifiers, and values to map onto reaction arrows. The `dict` can be provided to the `reaction_data` argument upon initialization of the builder.
For example, to display the steady state fluxes on the map:
```
initial_flux_data = {
reaction.id: flux
for reaction, flux in model.steady_state_fluxes.items()}
# New instance to prevent modifications to the existing maps
escher_builder = Builder(
model=model,
map_json=map_filepath,
reaction_data=initial_flux_data)
# Display map in notebook
escher_builder
```
The color and size of the data scale can be altered by providing a tuple of at least two dictionaries. Each dictionary is considered a "stop" that defines the color and size at or near that particular value in the data set. The `type` key defines the type for the stop, the `color` key defines the color of the arrow, and the `size` key defines the thickness of the arrow.
```
# New instance to prevent modifications to the existing maps
escher_builder = Builder(
model=model,
map_json=map_filepath,
reaction_data=initial_flux_data,
reaction_scale=(
{"type": 'min', "color": 'green', "size": 5 },
{"type": 'value', "value": 1.12, "color": 'purple', "size": 10},
{"type": 'max', "color": 'blue', "size": 15 }),
)
# Display map in notebook
escher_builder
```
### Viewing Metabolite Data
Metabolite data also can be displayed on an **Escher** map by using a dictionary containing metabolite identifiers, and values to map onto metabolite nodes. In addition to setting the attributes to apply upon initializing the builder, the attributes also can be set for a map after initialization.
For example, to display metabolite concentrations on the map:
```
initial_conc_data = {
metabolite.id: round(conc, 8)
for metabolite, conc in model.initial_conditions.items()}
# New instance to prevent modifications to the existing maps
escher_builder = Builder(
model=model,
map_json=map_filepath,
metabolite_data=initial_conc_data)
# Display map in notebook
escher_builder
```
The secondary metabolites can be removed by setting `hide_secondary_metabolites` as `True` to provide a cleaner visualization of the primary metabolites in the network.
```
escher_builder.hide_secondary_metabolites = True
```
Note that changes made affect the already displayed map. Here, a preset scale is applied to the metabolite concentrations.
```
escher_builder.metabolite_scale_preset = "RdYlBu"
```
### Visualizing SBML models with Escher in Python
Suppose that we would like to visualize our SBML model on a network map as follows:
1. We would like to create this map with the **Escher** web-based API.
2. We would like to view the model on the network map within a Jupyter notebook using the **Escher** Python-based API.
3. We would like to display the value of forward rate constants for each reaction on the network map.
The JSON format is the preferred format for **Escher** to load models onto network maps ([read more here](https://escher.readthedocs.io/en/latest/escher_and_cobrapy.html#what-is-json-and-why-do-we-use-it)). Therefore, we must convert models between SBML and JSON formats to achieve our goal.
**Note:** The models and maps used in the following example are also available in the testing data.
```
import mass.io
```
Fortunately, the [mass.io](../autoapi/mass/io/index.rst) submodule is capable of exporting such models.
First the SBML model is loaded using the ``mass.io.sbml`` submodule. The model is then exported to a JSON format using the ``mass.io.json`` submodule for use in the [Escher web-based API](https://escher.github.io/#/).
```
# Define path to SBML model
path_to_sbml_model = join(mass.test.MODELS_DIR, "Simple_Toy.xml")
# Load SBML model
model = mass.io.sbml.read_sbml_model(path_to_sbml_model)
# Export as JSON
path_to_json_model = "./Simple_Toy.json"
mass.io.json.save_json_model(model, filename=path_to_json_model)
```
Suppose that we have now created our map using the **Escher** web-based API and saved it as the file "simple_toy_map.json". To display the map with the model:
```
# Define path to Escher map
path_to_map = join(mass.test.MAPS_DIR, "simple_toy_map.json")
escher_builder = Builder(
model_json=path_to_json_model,
map_json=path_to_map)
escher_builder
```
Finally the forward rate constant data from the ``MassModel`` object is added to the map:
```
escher_builder.reaction_data = dict(zip(
model.reactions.list_attr("id"),
model.reactions.list_attr("forward_rate_constant")
))
```
## Additional Examples
For additional information and examples on how to visualize networks and **MASSpy** models using **Escher**, see the following:
* [Animating Simulations with Escher](../gallery/visualization/animating_simulations.ipynb)
========================================
__Contents__
* Search usage
1. Import module & Load data
2. Defining parameter search space
3. Defining feature search space (optional)
4. Run search
* Log usage
1. Extract parameter & feature settings
2. Make meta feature for stacking
* Sample: run all backend searches
========================================
# Search usage
## 1. Import module & Load data
Here, the Breast Cancer Wisconsin dataset is used for modeling.
This is a binary classification dataset.
First, the dataset is split into two datasets (Train, Test).
```
import os ,sys
import numpy as np, pandas as pd, scipy as sp
from sklearn import datasets
from sklearn.model_selection import train_test_split, StratifiedKFold
from sklearn.linear_model import LogisticRegression
from cvopt.model_selection import SimpleoptCV
from cvopt.search_setting import search_category, search_numeric
dataset = datasets.load_breast_cancer()
Xtrain, Xtest, ytrain, ytest = train_test_split(dataset.data, dataset.target, test_size=0.3, random_state=0)
print("Train features shape:", Xtrain.shape)
print("Test features shape:", Xtest.shape)
from bokeh.io import output_notebook
output_notebook() # When you need search visualization, you need to run output_notebook()
```
## 2. Defining parameter search space
A common style can be used with all CV classes.
```
param_distributions = {
"penalty": search_category(['none', 'l2']),
"C": search_numeric(0.01, 3.0, "float"),
"tol" : search_numeric(0.0001, 0.001, "float"),
"class_weight" : search_category([None, "balanced", {0:0.5, 1:0.1}]),
}
```
### 2.A Other styles
Other styles can be used, depending on the base module.
### for HyperoptCV (base module: Hyperopt)
```python
param_distributions = {
"penalty": hp.choice("penalty", ['none', 'l2']),
"C": hp.loguniform("C", 0.01, 3.0),
"tol" : hp.loguniform("tol", 0.0001, 0.001),
"class_weight" : hp.choice("class_weight", [None, "balanced", {0:0.5, 1:0.1}]),
}
```
### for BayesoptCV (base module: GpyOpt)
__NOTE:__
* In GpyOpt, a search space is a list of dicts. In cvopt, it must be a dict of dicts (key: param name, value: dict).
* If `type` is `categorical`, the search space's dict must have a `categories` key, whose value is the list of category names.
```python
param_distributions = {
"penalty" : {"name": "penalty", "type":"categorical", "domain":(0,1), "categories":['none', 'l2']},
"C": {"name": "C", "type":"continuous", "domain":(0.01, 3.0)},
"tol" : {"name": "tol", "type":"continuous", "domain":(0.0001, 0.001)},
"class_weight" : {"name": "class_weight", "type":"categorical", "domain":(0,1), "categories":[None, "balanced", {0:0.5, 1:0.1}]},
}
```
### for GAoptCV, RandomoptCV
__NOTE:__
* Support search_setting.search_numeric, search_setting.search_category, or scipy.stats class.
```python
param_distributions = {
"penalty" : hp.choice("penalty", ['none', 'l2']),
"C": sp.stats.randint(low=0.01, high=3.0),
"tol" : sp.stats.uniform(loc=0.0001, scale=0.00009),
"class_weight" : hp.choice("class_weight", [None, "balanced", {0:0.5, 1:0.1}]),
}
```
## 3. Defining feature search space (optional)
Features are selected per `feature_group`.
__If `feature_group` is set to "-1", that group's features are always used.__
Groups can be separated, for example, at random, by feature engineering method, or by data source.
When you don't set `feature_group`, the optimizer uses all input features.
------------------------------------
### Example.
When data has 5 features(5 cols) and `feature_group` is set as shown below.
| feature index(data col index) | feature group |
|:------------:|:------------:|
| 0 | 0 |
| 1 | 0 |
| 2 | 0 |
| 3 | 1 |
| 4 | 1 |
Define as follows python's list.
```
feature_groups = [0, 0, 0, 1, 1]
```
As a search result, you may get a flag per `feature_group`.
```
feature_groups0: True
feature_groups1: False
```
This result means that the optimizer recommends using the group 0 features (col index: 0, 1, 2) and not using the group 1 features (col index: 3, 4).
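Assuming the optimizer returns per-group flags like the ones above, applying them to a data matrix is a per-column boolean mask (the group ids and flags here are made up for illustration):

```python
import numpy as np

group_flags = {0: True, 1: False}          # hypothetical search result
feature_groups = [0, 0, 0, 1, 1]           # group id for each column

# Keep a column iff its group's flag is True
mask = np.array([group_flags[g] for g in feature_groups])
X = np.arange(10).reshape(2, 5)            # toy data: 2 rows, 5 features
X_selected = X[:, mask]
print(mask)
print(X_selected.shape)
```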
------------------------------------
```
feature_groups = np.random.randint(0, 5, Xtrain.shape[1])
```
## 4. Run search
cvopt has an API like scikit-learn's cross-validation classes.
If you have used scikit-learn, you can use cvopt very easily.
For details on each optimizer class, please see the [API reference](https://genfifth.github.io/cvopt/).
```
estimator = LogisticRegression()
cv = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
opt = SimpleoptCV(estimator, param_distributions,
scoring="roc_auc", # Objective of search
cv=cv, # Cross validation setting
max_iter=32, # Number of search
n_jobs=3, # Number of jobs to run in parallel.
verbose=2, # 0: don't display status, 1:display status by stdout, 2:display status by graph
logdir="./search_usage", # If this path is specified, save the log.
model_id="search_usage", # used estimator's dir and file name in save.
save_estimator=2, # estimator save setting.
backend="hyperopt", # hyperopt,bayesopt, gaopt or randomopt.
)
opt.fit(Xtrain, ytrain, validation_data=(Xtest, ytest),
# validation_data is optional.
# This data is only used to compute validation score(don't fit).
# When this data is input & save_estimator=True, the estimator fitted on the whole Xtrain is saved.
feature_groups=feature_groups,
)
ytest_pred = opt.predict(Xtest)
pd.DataFrame(opt.cv_results_).head() # Search results
```
# Log usage
## 1. Extract parameter & feature settings
cvopt includes helper functions to handle log files easily.
When you want to extract settings from a log file, it can be done as follows.
```
from cvopt.utils import extract_params
target_index = pd.DataFrame(opt.cv_results_)[pd.DataFrame(opt.cv_results_)["mean_test_score"] == opt.best_score_]["index"].values[0]
estimator_params, feature_params, feature_select_flag = extract_params(logdir="./search_usage",
model_id="search_usage",
target_index=target_index,
feature_groups=feature_groups)
estimator.set_params(**estimator_params) # Set estimator parameters
Xtrain_selected = Xtrain[:, feature_select_flag] # Extract selected feature columns
print(estimator)
print("Train features shape:", Xtrain.shape)
print("Train selected features shape:",Xtrain_selected.shape)
```
## 2. Make meta features for stacking
When you want to make meta features for [stacking](https://mlwave.com/kaggle-ensembling-guide/), it can be done as follows.
When running the search, you need to set `save_estimator` > 0 to make meta features.
In addition, you need to set `save_estimator` > 1 to make meta features from data that was not fitted.
```
from cvopt.utils import mk_metafeature
target_index = pd.DataFrame(opt.cv_results_)[pd.DataFrame(opt.cv_results_)["mean_test_score"] == opt.best_score_]["index"].values[0]
Xtrain_meta, Xtest_meta = mk_metafeature(Xtrain, ytrain,
logdir="./search_usage",
model_id="search_usage",
target_index=target_index,
cv=cv,
validation_data=(Xtest, ytest),
feature_groups=feature_groups,
estimator_method="predict_proba")
print("Train features shape:", Xtrain.shape)
print("Train meta features shape:", Xtrain_meta.shape)
print("Test features shape:", Xtest.shape)
print("Test meta features shape:", Xtest_meta.shape)
```
# Sample: run all backend searches
```
for bk in ["hyperopt", "gaopt", "bayesopt", "randomopt"]:
estimator = LogisticRegression()
cv = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
opt = SimpleoptCV(estimator, param_distributions,
scoring="roc_auc", # Objective of search
cv=cv, # Cross validation setting
max_iter=32, # Number of search
n_jobs=3, # Number of jobs to run in parallel.
verbose=2, # 0: don't display status, 1:display status by stdout, 2:display status by graph
logdir="./search_usage", # If this path is specified, save the log.
model_id=bk, # used estimator's dir and file name in save.
save_estimator=2, # estimator save setting.
backend=bk, # hyperopt,bayesopt, gaopt or randomopt.
)
opt.fit(Xtrain, ytrain, validation_data=(Xtest, ytest),
# validation_data is optional.
# This data is only used to compute validation score(don't fit).
# When this data is input & save_estimator=True, the estimator fitted on the whole Xtrain is saved.
feature_groups=feature_groups,
)
ytest_pred = opt.predict(Xtest)
from cvopt.utils import extract_params
target_index = pd.DataFrame(opt.cv_results_)[pd.DataFrame(opt.cv_results_)["mean_test_score"] == opt.best_score_]["index"].values[0]
estimator_params, feature_params, feature_select_flag = extract_params(logdir="./search_usage",
model_id=bk,
target_index=target_index,
feature_groups=feature_groups)
estimator.set_params(**estimator_params) # Set estimator parameters
Xtrain_selected = Xtrain[:, feature_select_flag] # Extract selected feature columns
print(estimator)
print("Train features shape:", Xtrain.shape)
print("Train selected features shape:",Xtrain_selected.shape)
from cvopt.utils import mk_metafeature
Xtrain_meta, Xtest_meta = mk_metafeature(Xtrain, ytrain,
logdir="./search_usage",
model_id=bk,
target_index=target_index,
cv=cv,
validation_data=(Xtest, ytest),
feature_groups=feature_groups,
estimator_method="predict_proba")
print("Train features shape:", Xtrain.shape)
print("Train meta features shape:", Xtrain_meta.shape)
print("Test features shape:", Xtest.shape)
print("Test meta features shape:", Xtest_meta.shape)
```
# Dimensionality reduction using LDA
```
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
import pandas as pd
import numpy as np
import math
import scipy.stats as st
import scipy.linalg as la
from sklearn.preprocessing import LabelEncoder
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
import seaborn.apionly as sns
import matplotlib.pyplot as plt
import matplotlib.colors as mcolors
plt.style.use('fivethirtyeight')
plt.rcParams['font.family'] = 'sans-serif'
plt.rcParams['font.serif'] = 'Ubuntu'
plt.rcParams['font.monospace'] = 'Ubuntu Mono'
plt.rcParams['font.size'] = 10
plt.rcParams['axes.labelsize'] = 10
plt.rcParams['axes.labelweight'] = 'bold'
plt.rcParams['axes.titlesize'] = 10
plt.rcParams['xtick.labelsize'] = 8
plt.rcParams['ytick.labelsize'] = 8
plt.rcParams['legend.fontsize'] = 10
plt.rcParams['figure.titlesize'] = 12
plt.rcParams['image.cmap'] = 'jet'
plt.rcParams['image.interpolation'] = 'none'
plt.rcParams['figure.figsize'] = (16, 8)
plt.rcParams['lines.linewidth'] = 2
plt.rcParams['lines.markersize'] = 8
colors = ['#008fd5', '#fc4f30', '#e5ae38', '#6d904f', '#8b8b8b', '#810f7c',
'#137e6d', '#be0119', '#3b638c', '#af6f09', '#008fd5', '#fc4f30', '#e5ae38', '#6d904f', '#8b8b8b',
'#810f7c', '#137e6d', '#be0119', '#3b638c', '#af6f09']
cmap = mcolors.LinearSegmentedColormap.from_list("", ["#82cafc", "#069af3", "#0485d1", colors[0], colors[8]])
```
Read the dataset
```
df = pd.read_csv("../../data/iris.csv", header=0, delimiter=';')
df.head()
```
Derive the feature matrix and the target vector
```
X = np.array(df[df.columns[:-1]])
y = np.array(df[df.columns[-1]])
```
Encode the classes as integers
```
enc = LabelEncoder()
label_encoder = enc.fit(y)
y = label_encoder.transform(y) + 1
label_dict = {label_encoder.transform(['setosa'])[0]+1: 'Setosa',
label_encoder.transform(['versicolor'])[0]+1: 'Versicolor',
label_encoder.transform(['virginica'])[0]+1:'Virginica'}
```
Matrix of pairwise feature distributions. On the diagonal, the distribution of each individual feature
```
fig = plt.figure(figsize=(16, 8))
sns.pairplot(df, size=4, diag_kind='kde', hue='class', palette={'setosa': colors[1], 'versicolor': colors[2], 'virginica':colors[3]},
diag_kws=dict(shade=True, alpha=.5))
plt.show()
```
Mean feature values for each class
```
mean_vectors = []
for cl in range(1,4):
mean_vectors.append(np.mean(X[y==cl], axis=0))
print('Class {0:s}: {1}'.format(label_dict[cl], mean_vectors[cl-1]))
```
Derive the within-class scatter matrix
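The code below accumulates, for each class $c$ with mean vector $m_c$ and sample set $X_c$:

```latex
S_W = \sum_{c=1}^{3} \sum_{x \in X_c} (x - m_c)(x - m_c)^{T}
```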
```
S_W = np.zeros((4,4))
for cl,mv in zip(range(1,4), mean_vectors):
Xp = X[y == cl]
S_W += np.dot((Xp-mv).T,(Xp-mv))
print('Within-class scatter matrix:\n')
print(S_W)
```
Derive the between-class scatter matrix
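The loop below computes, with overall mean $m$ and $n_c$ samples in class $c$:

```latex
S_B = \sum_{c=1}^{3} n_c \, (m_c - m)(m_c - m)^{T}
```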
```
overall_mean = np.mean(X, axis=0)
S_B = np.zeros((4,4))
for cl,mv in zip(range(1,4), mean_vectors):
n = X[y==cl].shape[0]
cv = (mv-overall_mean).reshape(1,-1)
S_B += n*np.dot(cv.T,cv)
print('Between-class Scatter Matrix:\n')
print('{0}'.format(S_B))
```
Compute the eigenvalues and eigenvectors of $S_W^{-1}S_B$
```
eig_vals, eig_vecs = np.linalg.eig(np.linalg.inv(S_W).dot(S_B))
for i in range(len(eig_vals)):
eigvec_sc = eig_vecs[:,i].reshape(4,1)
print('Eigenvalue {0:d}: {1:.2e}. Eigenvector: {2}'.format(i+1, eig_vals[i].real, eigvec_sc.real.reshape(1,-1)))
```
Sort the eigenvalue-eigenvector pairs in decreasing order of eigenvalue magnitude
```
# Make a list of (eigenvalue, eigenvector) tuples
eig_pairs = [(np.abs(eig_vals[i]), eig_vecs[:,i]) for i in range(len(eig_vals))]
# Sort the (eigenvalue, eigenvector) tuples from high to low
eig_pairs = sorted(eig_pairs, key=lambda k: k[0], reverse=True)
print('Variance explained by each eigenvalue:')
eigv_sum = sum(eig_vals)
for i,j in enumerate(eig_pairs):
print('Eigenvalue {0}: {1:.2%}'.format(i+1, (j[0]/eigv_sum).real))
```
Build the projection matrix onto the hyperplane defined by the first new_dim eigenvectors
```
new_dim = 2
W = eig_pairs[0][1].reshape(4,1)
for i in range(1, new_dim):
W = np.hstack((W, eig_pairs[i][1].reshape(4,1)))
print('Projection matrix W:')
print(W.real)
```
Project the samples into the new space
```
X_lda = X.dot(W)
fig = plt.figure(figsize=(16,8))
for label in range(1,4):
plt.scatter(x=X_lda[:,0].real[y == label],
y=X_lda[:,1].real[y == label],
color=colors[label],
alpha=0.8,
label=label_dict[label])
plt.xlabel('LD1')
plt.ylabel('LD2')
plt.legend(loc='upper right', fancybox=True)
plt.title('LDA: projection onto a 2-dimensional space')
plt.show()
```
Using the scikit-learn implementation
```
sklearn_lda = LDA(n_components=2)
X_lda_sklearn = sklearn_lda.fit_transform(X, y)
fig = plt.figure(figsize=(16,8))
for label in range(1,4):
plt.scatter(x=X_lda_sklearn[:,0][y == label],
y=X_lda_sklearn[:,1][y == label] * -1,
color=colors[label],
alpha=0.8,
label=label_dict[label])
plt.xlabel('LD1')
plt.ylabel('LD2')
plt.legend(loc='upper right', fancybox=True)
plt.title('LDA via sklearn: projection onto a 2-dimensional space')
plt.show()
```
| github_jupyter |
# Pyro Examples
```
import pandas as pd
import numpy as np
from orbit.models.lgt import LGTMAP, LGTAggregated, LGTFull
from orbit.estimators.pyro_estimator import PyroEstimatorVI, PyroEstimatorMAP
from orbit.diagnostics.plot import plot_predicted_data
from orbit.diagnostics.plot import plot_predicted_components
pd.set_option('display.float_format', lambda x: '%.5f' % x)
```
## Data
```
DATA_FILE = "./data/iclaims_example.csv"
raw_df = pd.read_csv(DATA_FILE, parse_dates=['week'])
```
### Train / Test Split
```
df=raw_df.copy()
test_size=52
train_df=df[:-test_size]
test_df=df[-test_size:]
train_df.head(5)
```
## MAP Fit and Predict
```
lgt_map = LGTMAP(
response_col="claims",
date_col="week",
seasonality=52,
seed=8888,
is_multiplicative=True,
estimator_type=PyroEstimatorMAP,
)
%%time
lgt_map.fit(df=train_df)
predicted_df = lgt_map.predict(df=test_df)
plot_predicted_data(training_actual_df=train_df, predicted_df=predicted_df,
date_col=lgt_map.date_col, actual_col=lgt_map.response_col, pred_col='prediction',
test_actual_df=test_df)
```
## VI Fit and Predict
Pyro only supports Variational Inference (SVI) for full-sampling prediction. Note that `pyro` takes advantage of parallel processing in `vi`, and hence its computation time is similar to that of `map`.
```
lgt_vi = LGTFull(
response_col='claims',
date_col='week',
seasonality=52,
seed=8888,
num_steps=101,
num_sample=100,
learning_rate=0.1,
n_bootstrap_draws=-1,
estimator_type=PyroEstimatorVI,
)
%%time
lgt_vi.fit(df=train_df)
predicted_df = lgt_vi.predict(df=test_df)
plot_predicted_data(training_actual_df=train_df, predicted_df=predicted_df,
date_col=lgt_vi.date_col, actual_col=lgt_vi.response_col, pred_col='prediction',
test_actual_df=test_df)
```
## Fit with Regressors
### with MAP
```
lgt_reg_map = LGTMAP(
response_col="claims",
regressor_col=['trend.unemploy', 'trend.filling'],
regressor_sign=['=','='],
date_col="week",
seasonality=52,
seed=8888,
is_multiplicative=True,
estimator_type=PyroEstimatorMAP,
)
%%time
lgt_reg_map.fit(df=train_df)
predicted_df = lgt_reg_map.predict(df=df, decompose=True)
plot_predicted_components(predicted_df=predicted_df, date_col='week')
```
### with VI
```
lgt_reg_vi = LGTAggregated(
response_col='claims',
regressor_col=['trend.unemploy', 'trend.filling'],
regressor_sign=['=','='],
date_col='week',
seasonality=52,
seed=8888,
num_steps=101,
num_sample=100,
learning_rate=0.1,
estimator_type=PyroEstimatorVI,
)
%%time
lgt_reg_vi.fit(df=train_df)
predicted_df = lgt_reg_vi.predict(df=df, decompose=True)
plot_predicted_components(predicted_df=predicted_df, date_col='week')
```
| github_jupyter |
```
import torch
import torch.utils.data
from torch import nn
from torch.nn import functional as F
from ignite.engine import Events, Engine
from ignite.metrics import Accuracy, Loss
import numpy as np
import sklearn.datasets
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
class Model_bilinear(nn.Module):
def __init__(self, features, num_embeddings):
super().__init__()
self.gamma = 0.99
self.sigma = 0.3
embedding_size = 10
self.fc1 = nn.Linear(2, features)
self.fc2 = nn.Linear(features, features)
self.fc3 = nn.Linear(features, features)
self.W = nn.Parameter(torch.normal(torch.zeros(embedding_size, num_embeddings, features), 1))
self.register_buffer('N', torch.ones(num_embeddings) * 20)
self.register_buffer('m', torch.normal(torch.zeros(embedding_size, num_embeddings), 1))
self.m = self.m * self.N.unsqueeze(0)
def embed(self, x):
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
# i is batch, m is embedding_size, n is num_embeddings (classes)
x = torch.einsum('ij,mnj->imn', x, self.W)
return x
def bilinear(self, z):
embeddings = self.m / self.N.unsqueeze(0)
diff = z - embeddings.unsqueeze(0)
y_pred = (- diff**2).mean(1).div(2 * self.sigma**2).exp()
return y_pred
def forward(self, x):
z = self.embed(x)
y_pred = self.bilinear(z)
return z, y_pred
def update_embeddings(self, x, y):
z = self.embed(x)
# normalizing value per class, assumes y is one_hot encoded
self.N = torch.max(self.gamma * self.N + (1 - self.gamma) * y.sum(0), torch.ones_like(self.N))
# compute sum of embeddings on class by class basis
features_sum = torch.einsum('ijk,ik->jk', z, y)
self.m = self.gamma * self.m + (1 - self.gamma) * features_sum
np.random.seed(0)
torch.manual_seed(0)
l_gradient_penalty = 1.0
# Moons
noise = 0.1
X_train, y_train = sklearn.datasets.make_moons(n_samples=1500, noise=noise)
X_test, y_test = sklearn.datasets.make_moons(n_samples=200, noise=noise)
num_classes = 2
batch_size = 64
model = Model_bilinear(20, num_classes)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9, weight_decay=1e-4)
def calc_gradient_penalty(x, y_pred):
gradients = torch.autograd.grad(
outputs=y_pred,
inputs=x,
grad_outputs=torch.ones_like(y_pred),
create_graph=True,
)[0]
gradients = gradients.flatten(start_dim=1)
# L2 norm
grad_norm = gradients.norm(2, dim=1)
# Two sided penalty
gradient_penalty = ((grad_norm - 1) ** 2).mean()
# One sided penalty - down
# gradient_penalty = F.relu(grad_norm - 1).mean()
return gradient_penalty
def output_transform_acc(output):
y_pred, y, x, z = output
y = torch.argmax(y, dim=1)
return y_pred, y
def output_transform_bce(output):
y_pred, y, x, z = output
return y_pred, y
def output_transform_gp(output):
y_pred, y, x, z = output
return x, y_pred
def step(engine, batch):
model.train()
optimizer.zero_grad()
x, y = batch
x.requires_grad_(True)
z, y_pred = model(x)
loss1 = F.binary_cross_entropy(y_pred, y)
loss2 = l_gradient_penalty * calc_gradient_penalty(x, y_pred)
loss = loss1 + loss2
loss.backward()
optimizer.step()
with torch.no_grad():
model.update_embeddings(x, y)
return loss.item()
def eval_step(engine, batch):
model.eval()
x, y = batch
x.requires_grad_(True)
z, y_pred = model(x)
return y_pred, y, x, z
trainer = Engine(step)
evaluator = Engine(eval_step)
metric = Accuracy(output_transform=output_transform_acc)
metric.attach(evaluator, "accuracy")
metric = Loss(F.binary_cross_entropy, output_transform=output_transform_bce)
metric.attach(evaluator, "bce")
metric = Loss(calc_gradient_penalty, output_transform=output_transform_gp)
metric.attach(evaluator, "gp")
ds_train = torch.utils.data.TensorDataset(torch.from_numpy(X_train).float(), F.one_hot(torch.from_numpy(y_train)).float())
dl_train = torch.utils.data.DataLoader(ds_train, batch_size=batch_size, shuffle=True, drop_last=True)
ds_test = torch.utils.data.TensorDataset(torch.from_numpy(X_test).float(), F.one_hot(torch.from_numpy(y_test)).float())
dl_test = torch.utils.data.DataLoader(ds_test, batch_size=200, shuffle=False)
@trainer.on(Events.EPOCH_COMPLETED)
def log_results(trainer):
evaluator.run(dl_test)
metrics = evaluator.state.metrics
print("Test Results - Epoch: {} Acc: {:.4f} BCE: {:.2f} GP {:.2f}"
.format(trainer.state.epoch, metrics['accuracy'], metrics['bce'], metrics['gp']))
trainer.run(dl_train, max_epochs=30)
domain = 3
x_lin = np.linspace(-domain+0.5, domain+0.5, 100)
y_lin = np.linspace(-domain, domain, 100)
xx, yy = np.meshgrid(x_lin, y_lin)
X_grid = np.column_stack([xx.flatten(), yy.flatten()])
X_vis, y_vis = sklearn.datasets.make_moons(n_samples=1000, noise=noise)
mask = y_vis.astype(bool)
with torch.no_grad():
output = model(torch.from_numpy(X_grid).float())[1]
confidence = output.max(1)[0].numpy()
z = confidence.reshape(xx.shape)
plt.figure()
plt.contourf(x_lin, y_lin, z, cmap='cividis')
plt.scatter(X_vis[mask,0], X_vis[mask,1])
plt.scatter(X_vis[~mask,0], X_vis[~mask,1])
```
| github_jupyter |
```
## By Saina Srivastava
## Based on
# - https://towardsdatascience.com/machine-learning-part-19-time-series-and-autoregressive-integrated-moving-average-model-arima-c1005347b0d7
# - https://machinelearningmastery.com/arima-for-time-series-forecasting-with-python/
# - https://www.statsmodels.org/stable/generated/statsmodels.tsa.arima.model.ARIMA.html?highlight=arima#statsmodels.tsa.arima.model.ARIMA
import numpy as np
import pandas as pd
import datetime
from matplotlib import pyplot as plt
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.seasonal import seasonal_decompose
from statsmodels.tsa.arima_model import ARIMA
from pandas.plotting import register_matplotlib_converters
register_matplotlib_converters()
file_name = 'data/shampoo.csv'
df = pd.read_csv(file_name, parse_dates = ['Month'], index_col = ['Month'])
df.head()
# Reading the file and plotting the basics
plt.xlabel('Date')
plt.ylabel('Price')
plt.xticks(rotation=90)
plt.plot(df)
# Panda provides basic time operations
window_size = 3 # Size of window
rolling_mean = df.rolling(window = window_size).mean()
rolling_std = df.rolling(window = window_size).std() # Used for standard deviation
plt.plot(df, color = 'blue', label = 'Original')
plt.plot(rolling_mean, color = 'red', label = 'Rolling Mean')
plt.plot(rolling_std, color = 'black', label = 'Rolling Std')
plt.legend(loc = 'best')
plt.title('Rolling Mean & Rolling Standard Deviation for Shampoo Sales')
plt.xticks(rotation=90)
#for i, t in enumerate(plt.xticklabels()):
# if (i % 5) != 0:
# t.set_visible(False)
plt.show()
# Showing on a log scale
df_log = np.log(df)
plt.xticks(rotation=90)
plt.plot(df_log)
# Checking if a timeseries is stationary or not
# Stationary: the mean of the series should not grow over time,
# the variance of the series should not spread over time, and the
# covariance of the i-th and (i + m)-th terms should not be a function of time
def get_stationarity(timeseries):
# rolling statistics
rolling_mean = timeseries.rolling(window=12).mean()
rolling_std = timeseries.rolling(window=12).std()
# rolling statistics plot
original = plt.plot(timeseries, color='blue', label='Original')
mean = plt.plot(rolling_mean, color='red', label='Rolling Mean')
std = plt.plot(rolling_std, color='black', label='Rolling Std')
plt.xticks(rotation=90)
plt.legend(loc='best')
plt.title('Rolling Mean & Standard Deviation')
plt.show(block=False)
# Dickey–Fuller test:
result = adfuller(timeseries['Sales'])
print('ADF Statistic: {}'.format(result[0]))
print('p-value: {}'.format(result[1]))
print('Critical Values:')
for key, value in result[4].items():
print('\t{}: {}'.format(key, value))
# To see how it behaves over time
rolling_mean = df_log.rolling(window=12).mean()
df_log_minus_mean = df_log - rolling_mean
df_log_minus_mean.dropna(inplace=True)
get_stationarity(df_log_minus_mean)
# Used to turn the timeframe into a stationary one
rolling_mean_exp_decay = df_log.ewm(halflife=12, min_periods=0, adjust=True).mean()
df_log_exp_decay = df_log - rolling_mean_exp_decay
df_log_exp_decay.dropna(inplace=True)
get_stationarity(df_log_exp_decay)
# Another method to determine if there is a better method to turn a
# moving time series into a stationary one
df_log_shift = df_log - df_log.shift()
df_log_shift.dropna(inplace=True)
get_stationarity(df_log_shift)
## Fit model using ARIMA
# p: The number of lag observations included in the model, also called the lag order.
# d: The number of times that the raw observations are differenced, also called the degree of differencing.
# q: The size of the moving average window, also called the order of moving average.
model = ARIMA(df_log, order=(5,1,0))
model_fit = model.fit()
# summary of fit model
print(model_fit.summary())
# Building ARIMA model
# p: The number of lag observations included in the model, also called the lag order.
# d: The number of times that the raw observations are differenced, also called the degree of differencing.
# q: The size of the moving average window, also called the order of moving average.
model = ARIMA(df_log, order=(0,1,2))
# Fit the model.
# Result form is here: https://www.statsmodels.org/stable/generated/statsmodels.tsa.arima.model.ARIMAResults.html
results = model.fit(disp=-1)
# Show
plt.xticks(rotation=90)
plt.plot(df_log_shift)
plt.plot(results.fittedvalues, color='red')
# Allows values to be predicted in future
results.plot_predict(1,50)
```
| github_jupyter |
# Calculating Fall Velocity
>This program computes the settling velocity of a particle with the formulation of Dietrich (1982).
>
>For a more detailed description of the theoretical formulation of the problem and of the code, see Chapter 2 “Characterization of Sediment and Grain Size Distributions” [[download PPT]](http://hydrolab.illinois.edu/people/parkerg/_private/e-bookPowerPoint/RTe-bookCh2SedimentGSD.ppt) of Gary Parker's e-book.
><br>
>
>The input parameters are:
>
>* `particle__diameter`: Sediment particle size, $D$ (mm)
>* `fluid__kinematic_viscosity`: Kinematic viscosity of the liquid, $\nu$ (m$^2$/s)
>* `gravitational_acceleration`: Acceleration due to gravity, $g$ (m/s$^2$)
>* `fluid__density`: Density of water, $\rho_w$ (Kg/m$^3$)
>* `particle__density`: Density of sediment, $\rho_s$ (Kg/m$^3$)
>* `output_filename`: Path of the output file
>
><br>
>The output parameters are:
>
>* `particle__settling_velocity`: Particle settling velocity, $v_s$ (m/s)
>* `particle__Reynolds_number`: Particle Reynolds number, $R_e$
>* `particle__dimensionless_fall_velocity`: Dimensionless fall velocity of the particle, $R_f$
>
><br>
>The program solves the equations:
>
>$v_s = R_f \sqrt{RgD}$
>
>$R_e = \frac{D\sqrt{RgD}}{\nu}$
>
>$R_f = \frac{v_s}{\sqrt{RgD}}$
>
>### Notes:
>
>* These calculations are only valid for Reynolds numbers less than or equal to $2.5 \times 10^6$. If $R_e$ is greater than this upper limit, the code warns and exits.
>
>### References:
>
>Dietrich, W. E., 1982, Settling velocity of natural particles, Water Resources Research, 18(6), 1615-1626.
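As an illustration of the relations above, here is a minimal Python sketch (not the BMI component used below) that computes the particle Reynolds number and enforces the validity limit; the default densities and gravitational acceleration are assumed values for quartz sand in water.

```python
import math

def reynolds_number(D_mm, nu, g=9.81, rho_w=1000.0, rho_s=2650.0):
    """Particle Reynolds number Re = D*sqrt(R*g*D)/nu, with R = (rho_s - rho_w)/rho_w."""
    R = (rho_s - rho_w) / rho_w   # submerged specific gravity
    D = D_mm / 1000.0             # grain size is given in mm
    Re = D * math.sqrt(R * g * D) / nu
    if Re > 2.5e6:
        raise ValueError("Re exceeds the 2.5e6 validity limit of Dietrich (1982)")
    return Re

print(reynolds_number(0.1, 1.0e-6))  # a 0.1 mm grain settling in water
```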
This Jupyter Notebook runs the BMI version of Fall Velocity.
To run it, first import the Fall Velocity code and initialize it:
```
from STR.FallVelocity import FallVelocity
import numpy as np
f = FallVelocity()
f.initialize('input_files/FallVelocity.yaml')
f.get_input_var_names()
```
The code is initialized with default values for the input parameters. To access these variables, we can type:
```
print('Grain size:', f.get_value('particle__diameter'), 'mm')
print('Kinematic viscosity:', f.get_value('fluid__kinematic_viscosity'), 'm^2/s')
print('Gravitational acceleration:', f.get_value('gravitational_acceleration'), 'm/s^2')
print('Density of fluid:', f.get_value('fluid__density'), 'Kg/m^3')
print('Density of particle:', f.get_value('particle__density'), 'Kg/m^3')
```
To run the program, we type:
```
f.update()
```
This command just solves the equations. It doesn't produce any visible output on the screen and doesn't save a file. To access the output, we can call the individual variables:
```
print('Settling velocity:', f.get_value('particle__settling_velocity'), 'm/s')
print('Reynolds number of the particle:', f.get_value('particle__Reynolds_number'))
print('Dimensionless fall velocity:', f.get_value('particle__dimensionless_fall_velocity'))
```
We can change these parameters by assigning new values to them:
```
f.set_value('particle__diameter', 0.2)
print('Grain size:', f.get_value('particle__diameter'), 'mm')
```
Run the program again and check the output values for this new grain size:
```
f.update()
print('Settling velocity:', f.get_value('particle__settling_velocity'), 'm/s')
print('Reynolds number of the particle:', f.get_value('particle__Reynolds_number'))
print('Dimensionless fall velocity:', f.get_value('particle__dimensionless_fall_velocity'))
```
We can save the most recent input and output values to a YAML file by calling the `finalize` function. The path to the output file is set in the input file:
```
f.finalize()
```
| github_jupyter |
<a href="https://colab.research.google.com/github/CodingBad02/Face-Anonymisation/blob/main/PeakyBlinders.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
### In the name of the **AlgorithmX fellowship**!!!!!
Manjunathan.R ,
SSN CE
*A note on the novelty and implications of this work, along with the required references, for the **AlgorithmX** panel:*
1. This project is implemented **completely online**, uses Google drive as the source for all the files that are required. Done on **Google colab.**
2. The model I have built is **completely dynamic**, in the sense that even the variables that store the representations are created dynamically, meaning this could be done for **any** such picture where everyone's face except Thomas Shelby's needs to be anonymised.
3. The model uses **VGG-Net**, which works without training by using **Siamese-network-based similarity calculation of images**. We use two metrics here, the cosine distance and the Euclidean distance, with thresholds of 0.4 and 120 respectively, above which the images are considered dissimilar.
4. Hence, this works as an **image verification** problem: you know one face is Thomas Shelby's, and you try to match it against the other faces, blurring all except Thomas' face.
5. This avoids GPU bottlenecks and wasted training time, making it fast!
---
Links:
* My AlgorithmX drive [link](https://drive.google.com/drive/folders/1IK5gd-vh_D_Po9U0WmEG4KTwZVU2zOIm?usp=sharing)
* Haar Cascade Classifier XML [file](https://drive.google.com/file/d/1kDa3wLEUPJhUpg16Wx4ULe-02ZVl3ty_/view)
* My [Resume](https://drive.google.com/file/d/12VsiPM3pAfNSrhc-mfI0WD15O8NSMpSz/view?usp=sharing)
* Siamese Network Features for Image Matching [paper](https://users.aalto.fi/~kannalj1/publications/icpr2016.pdf)
* VGG Net architecture [Medium article](https://medium.com/analytics-vidhya/vggnet-architecture-explained-e5c7318aa5b6)
---
Thanks for this opportunity to learn and work!
```
#Let's first install OpenCV which will be the backbone for our project.
!apt-get -qq install -y libsm6 libxext6 && pip install -q -U opencv-python
#Import Google Drive
from google.colab import drive
drive.mount('/content/gdrive')
root_path = 'gdrive/My Drive/AlgorithmxFellowship'
!ls -a "/content/gdrive/MyDrive/AlgorithmxFellowship"
!pwd
```
**Imports**
```
import cv2
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image
#Reading the given image for anonymisation and further processing:
test_image=cv2.imread('/content/gdrive/MyDrive/AlgorithmxFellowship/peakyblinders.jpg')
#Ordering the color of the image to RGB and then converting into Grayscale:
test_image=cv2.cvtColor(test_image,cv2.COLOR_BGR2RGB)
gs_test_image=cv2.cvtColor(test_image,cv2.COLOR_RGB2GRAY)
```
For detecting the Faces in our model we use a trained Haar cascade classifier which is available at the Open source [Link](https://drive.google.com/file/d/1kDa3wLEUPJhUpg16Wx4ULe-02ZVl3ty_/view) got from the internet
```
face_detector=cv2.CascadeClassifier("/content/gdrive/MyDrive/AlgorithmxFellowship/classifier/haarcascade_frontalface_default.xml")
faces_image= face_detector.detectMultiScale(gs_test_image,1.3,5)
#Checking whether the output is good or not
faces_image
```
We observe that the given image has been reduced to faces. Each row of the array corresponds to the x position, y position, width and height of the located bounding box of a face.
```
##Plotting the figures using Matplotlib subplotting
fig = plt.figure(figsize=(10, 7))
rows = 1
columns = 10
peaky_face=[]
for i in range(len(faces_image)):
for (x_pos,y_pos,width,height) in faces_image:
peaky_face.append(test_image[y_pos:y_pos+height,x_pos:x_pos+width])
fig.add_subplot(rows, columns, i+1)
plt.imshow(peaky_face[i])
plt.axis('off')
```
We have successfully extracted all the faces! My plan now is to build a verifier based on Thomas Shelby's image and pass each face to it. If it matches a particular face, we can remove that face from the list and blur all the others. This can be treated as a verification problem rather than a recognition one.
---
---
**Now we shall start building the model. It is inspired by VGG-Net, Siamese-network-based image matching, and dynamic creation of global variables.**
```
from keras.models import Sequential,Model
from keras.layers import Activation, Dense,ZeroPadding2D,Convolution2D,MaxPooling2D,Dropout,Flatten
from keras.models import model_from_json
from keras.preprocessing import image
from keras.applications.vgg16 import preprocess_input
```
Instead of using multiple images to train the model, we can use **image verification** rather than image identification, just like biometrics used in offices.
The model we use is VGG-Face. It is similar in structure to the ImageNet models; however, at the output layer an image of size 224x224x3 is encoded into 2622 values, which are used for learning or, in our case, verification.

```
##Building a VGG network for image identification:
model = Sequential()
model.add(ZeroPadding2D((1,1),input_shape=(224,224, 3)))
model.add(Convolution2D(64, (3, 3), activation='relu'))
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D((2,2), strides=(2,2)))
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(128, (3, 3), activation='relu'))
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(128, (3, 3), activation='relu'))
model.add(MaxPooling2D((2,2), strides=(2,2)))
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(256, (3, 3), activation='relu'))
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(256, (3, 3), activation='relu'))
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(256, (3, 3), activation='relu'))
model.add(MaxPooling2D((2,2), strides=(2,2)))
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(512, (3, 3), activation='relu'))
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(512, (3, 3), activation='relu'))
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(512, (3, 3), activation='relu'))
model.add(MaxPooling2D((2,2), strides=(2,2)))
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(512, (3, 3), activation='relu'))
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(512, (3, 3), activation='relu'))
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(512, (3, 3), activation='relu'))
model.add(MaxPooling2D((2,2), strides=(2,2)))
model.add(Convolution2D(4096, (7, 7), activation='relu'))
model.add(Dropout(0.5))
model.add(Convolution2D(4096, (1, 1), activation='relu'))
model.add(Dropout(0.5))
model.add(Convolution2D(2622, (1, 1)))
model.add(Flatten())
model.add(Activation('softmax'))
model.summary()
model.load_weights('/content/gdrive/MyDrive/AlgorithmxFellowship/classifier/vgg_face_weights.h5')
```
Let's Build an instance of this descriptor!
```
vgg_face_descriptor = Model(inputs=model.layers[0].input, outputs=model.layers[-2].output)
```
Since this takes input only in a particular format, we preprocess the images; both the built-in VGG16 preprocessing and self-defined helpers can be used.
```
def preprocess_image(image_path):
img = image.load_img(image_path, target_size=(224, 224))
img = image.img_to_array(img)
img = np.expand_dims(img, axis=0)
img = preprocess_input(img)
return img
for i in range(len(faces_image)):
im = Image.fromarray(peaky_face[i])
im.save("face{0}.jpeg".format(i))
len(peaky_face)
```
Let us create global variables to store the faces represented as 1D vectors. This helps in dynamic modeling.
```
#Creating global Variables to store the img values of Faces.
i = 0
vals=[]
for j in range(len(faces_image)):
i += 1
globals()["facerep" + str(i)] = vgg_face_descriptor.predict(preprocess_image('/content/face{0}.jpeg'.format(j)))[0,:]
vals.append(vgg_face_descriptor.predict(preprocess_image('/content/face{0}.jpeg'.format(j)))[0,:])
np.shape(vals)
```
Let's verify the length of a face representation!
```
print(len(facerep1))
```
We now have our test image representations. All we need now is to load an image of Thomas Shelby and verify whether it is the same person in both images.
--------------------------------------------------------------------------------------------
**Shelby Image Cornering**
```
shelby=cv2.imread('/content/gdrive/MyDrive/AlgorithmxFellowship/shelby_test.jpg')
shelby1=cv2.cvtColor(shelby,cv2.COLOR_BGR2RGB)
shelby_in_gs=cv2.cvtColor(shelby1,cv2.COLOR_RGB2GRAY)
shelby_face= face_detector.detectMultiScale(shelby_in_gs,1.3,5)
shelby_face
for (x_pos,y_pos,width,height) in shelby_face:
shelby_img=shelby[y_pos:y_pos+height,x_pos:x_pos+width]
plt.imshow(shelby_img)
im = Image.fromarray(shelby_img)
im.save("Shelby.jpeg")
shelby_representation = vgg_face_descriptor.predict(preprocess_image('/content/Shelby.jpeg'))[0,:]
```
**End of Shelby Image cornering**
---
Now we have the representations of the images in 1D array format. Next we compare the cosine and Euclidean similarities, as used in **Siamese networks**.
```
def CosineSimilarity(source, test):
a = np.matmul(np.transpose(source), test)
b = np.sum(np.multiply(source, source))
c = np.sum(np.multiply(test, test))
return 1 - (a / (np.sqrt(b) * np.sqrt(c)))
def EuclideanDistance(source, test):
euclidean_distance = source - test
euclidean_distance = np.sum(np.multiply(euclidean_distance, euclidean_distance))
euclidean_distance = np.sqrt(euclidean_distance)
return euclidean_distance
```
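As a quick sanity check of the cosine metric (a standalone re-implementation for illustration): identical vectors give a cosine distance of 0, while orthogonal vectors give 1, well above the 0.40 threshold used for matching.

```python
import numpy as np

def cosine_distance(a, b):
    # 1 - cos(theta): 0 for identical directions, 1 for orthogonal ones
    return 1 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])
print(cosine_distance(a, a))  # 0.0 -> same face
print(cosine_distance(a, b))  # 1.0 -> different face
```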
These two values represent the closeness between the images. If they are pretty close, then they are the same image. If not, they can be considered as a different image. We hence set a threshold for the classification
```
thresh_cosine=0.40
thresh_euclidean=120
def MatchFace(rep1, rep2):
    cosine_similarity = CosineSimilarity(rep1, rep2)
    euclidean_distance = EuclideanDistance(rep1, rep2)
    # matching is decided by the cosine distance; the Euclidean distance
    # (threshold thresh_euclidean) is computed as a secondary diagnostic
    return cosine_similarity < thresh_cosine
MatchFace(shelby_representation,facerep7)
MatchFace(shelby_representation,facerep6)
for i in range(len(faces_image)):
if MatchFace(shelby_representation,vals[i]):
idx=i
idx
```
---
**Now onto the final part of our code: The Blurred image output:**
```
faces_image.shape
number,numberofelems=faces_image.shape
startlimit=idx*numberofelems
endlimit=(idx+1)*(numberofelems)
iter_idx=np.arange(startlimit,endlimit)
fin_faces=np.delete(faces_image,iter_idx)
fin_faces= fin_faces.reshape((number-1,numberofelems))
for (x_pos,y_pos,width,height) in fin_faces:
test_image[y_pos:y_pos+height,x_pos:x_pos+width]=cv2.blur(test_image[y_pos:y_pos+height,x_pos:x_pos+width],ksize=(10,10))
```
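An equivalent and arguably simpler way to drop the matched row (an alternative sketch, where `idx` is a hypothetical index of the matched face): boolean row indexing avoids flattening and reshaping the array.

```python
import numpy as np

# toy bounding boxes: (x, y, width, height) per detected face
faces = np.array([[10, 20, 30, 40],
                  [50, 60, 70, 80],
                  [90, 100, 110, 120]])
idx = 1  # hypothetical index of the matched (Shelby) face

keep = np.ones(len(faces), dtype=bool)
keep[idx] = False
fin = faces[keep]    # all rows except the matched one
print(fin.shape)     # (2, 4)
```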
**FINGERS CROSSED**
```
plt.figure(figsize=(16,10))
plt.imshow(test_image)
plt.axis('off')
```
**No training. Got them for good.**
```
im = Image.fromarray(test_image)
im.save("Anonymised_Image.jpeg")
```
| github_jupyter |
# Python program to find the area of a rectangle given that its length is 10 units and breadth is 20 units.
```
length=float(input("Type the length:"))
breadth=float(input("Type the breadth:"))
area= length * breadth
print("The area is:",area)
```
# Python program to find the sum of two numbers.
```
a=int(input())
b=int(input())
Sum=a+b
print(Sum)
```
# Check the memory address
```
num=29
id(num)
num1= 30-1
id(num1)
```
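Both names give the same id because CPython caches small integers (roughly -5 to 256); this is an implementation detail, not a language guarantee. A quick check:

```python
num = 29
num1 = 30 - 1
print(id(num) == id(num1))  # True on CPython: small integers are cached

# values built at runtime are distinct objects even when equal
big = int("1000000000")
big2 = int("1000000000")
print(big == big2, big is big2)
```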
# Program of explicit type conversion from int to float.
```
num1= 10
num2= 20
num3= num1+num2
print(num3)
print(type(num3))
num4= float(num1+num2)
print(num4)
print(type(num4))
```
# Program to check if a number is prime or not. A number that is divisible only by itself and 1.
```
num = int(input("Enter a number: "))
flag = False
# prime numbers are greater than 1
if num > 1:
# check for factors
for i in range(2, num):
if (num % i) == 0:
# if factor is found, set flag to True
flag = True
# break out of loop
break
if flag:
print("Not Prime")
else:
print("Prime")
num=int(input())
if num>1:
    for i in range(2, num):
        if (num % i) == 0:
            print("Not Prime")
            break
    else:
        print("Prime")
else:
    print("Not Prime")
```
# Python Program to find factorial of a number.
factorial of n = n*(n-1)*(n-2)*...*1
```
n=int(input())
def factorial(n):
fac=1
for i in range(0,n):
fac*=n-i
print(fac)
factorial(n)
```
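The same recurrence n! = n*(n-1)! can also be written recursively; a hypothetical alternative to the loop above, returning the result instead of printing it:

```python
def factorial_rec(n):
    # base case: 0! = 1! = 1
    if n <= 1:
        return 1
    return n * factorial_rec(n - 1)

print(factorial_rec(6))  # 720
```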
# Python program to print factors.
```
def fac(n):
for i in range(1,n+1):
if(n%i)==0:
print(i)
fac(5)
fac(6)
```
# Python Program to print all the prime numbers in an interval.
```
def factinterval(lower,upper):
for num in range(lower, upper + 1):
# all prime numbers are greater than 1
if num > 1:
for i in range(2, num):
if (num % i) == 0:
break
else:
print(num)
return
factinterval(2,5)
```
# Python program to check for an Armstrong number.
An Armstrong number is a number that equals the sum of its own digits, each raised to the power of the number of digits. For example, 0, 1, 153, 370, 371 and 407 are Armstrong numbers.
```
def arm(n):
sum=0
digit=0
temp=n
order=len(str(n))
while n>0:
digit=n%10
sum+=digit**order
n=n//10
if sum==temp:
print("Yes, the number is Armstrong number.")
else:
print("No!")
#driver
n=int(input())
arm(n)
```
# Python Program to Find the Square Root.
```
def Squareroot(n):
if n>1:
sq=n**0.5
print("Square root is:",sq)
else:
print("Square root is:",n)
n=int(input())
Squareroot(n)
```
For real or complex numbers
```
import cmath
def Squareroot_c(n):
nsqrt=cmath.sqrt(n)
print("The square root for the number is:",nsqrt)
n=complex(input())
Squareroot_c(n)
```
# Python Program to Solve Quadratic Equation.
```
import cmath
def solve(a,b,c):
dis=b*b-4*a*c
sqrt_val=cmath.sqrt(abs(dis))
if dis>0:
print(" real and different roots ")
print((-b + sqrt_val)/(2 * a))
print((-b - sqrt_val)/(2 * a))
elif dis == 0:
print(" real and same roots")
print(-b / (2 * a))
else:
print("Complex Roots")
print(- b / (2 * a), " + i", sqrt_val)
print(- b / (2 * a), " - i", sqrt_val)
#driver
a=int(input())
b=int(input())
c=int(input())
solve(a,b,c)
```
| github_jupyter |
```
import json
from authenticated_rest import get_pool_manager

pool_manager, cluster_info = get_pool_manager()
url = cluster_info["server"] + "/apis/"
headers = {'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/12.0.0-snapshot/python', 'Content-Type': 'application/json'}
r = pool_manager.request(
    "GET",
    url,
    headers=headers,
    preload_content=True
)
print(r.status)
# json.loads(r.data)

!kubectx

import yaml
import json

class Kubectl:
    def __init__(self):
        self.pool_manager, self.cluster_info = get_pool_manager()
        self.base_url = self.cluster_info["server"] + "/apis"

    def get(self, plural, api, namespace="default"):
        get_url = self.base_url + f"/{api}" + "/namespaces" + f"/{namespace}" + f"/{plural}"
        print("get_url = ", get_url)
        r = self.pool_manager.request(
            "GET",
            get_url,
            preload_content=True
        )
        return r

    def create(self, fpath, namespace="default", plural=None):
        with open(fpath) as f:
            deployment = yaml.safe_load(f)
        if not plural:
            plural = deployment["kind"].lower() + "s"
        url = self.base_url + f"/{deployment['apiVersion']}" + "/namespaces" + f"/{namespace}" + f"/{plural}"
        r = self.pool_manager.request(
            "POST",
            url,
            body=json.dumps(deployment),
            preload_content=True
        )
        return r

    def update(self, fpath, namespace="default", plural=None, resource_name=None):
        """
        args:
            resource_name: if you are doing an update, you need to add the name
                of the actual deployment to the end of the url
        """
        with open(fpath) as f:
            deployment = yaml.safe_load(f)
        if not plural:
            plural = deployment["kind"].lower() + "s"
        url = self.base_url + f"/{deployment['apiVersion']}" + "/namespaces" + f"/{namespace}" + f"/{plural}"
        if resource_name:
            url = url + f"/{resource_name}"
        print(url)
        request_body = json.dumps(deployment)
        print(type(request_body))
        r = self.pool_manager.request(
            "PATCH",
            url,
            body=request_body,
            preload_content=True,
            headers={'Accept': 'application/json', 'Content-Type': 'application/merge-patch+json', 'User-Agent': 'OpenAPI-Generator/12.0.0-snapshot/python'}
        )
        return r

kubectl = Kubectl()
update_sdep_r = kubectl.update("/Users/ntorba605/ntorba605/amp/cd/vsgdummy.json", resource_name="vsgdummy-002")
print(update_sdep_r.status)
list_sdeps_r = kubectl.get("seldondeployments", "machinelearning.seldon.io/v1alpha2")
print(list_sdeps_r.status)
pods = kubectl.get("pods", "apps/v1")
print(pods.status)
json.loads(update_sdep_r.data).keys()
json.loads(list_sdeps_r.data).keys()
```
# Aragon Conviction Voting Model - Version 2
New to this model are the following elements:
* Influence - Participant social network where participants influence each other's perception of a proposal.
* Conflict - A network with the notion of supporting one proposal may mean going against an alternative proposal. For proposals with conflicts, an edge is created between them with a function to calculate the degree of conflict.
* Sentiment - Participant sentiment
* Updated trigger function to better represent 1Hive's implementation
* Updated plotting
* Updated affinity distribution to between -1, 1
# An Introduction to Conviction Voting
Conviction Voting is an approach to organizing a community's preferences into discrete decisions in the management of that community's resources. Strictly speaking conviction voting is less like voting and more like signal processing. Framing the approach and the initial algorithm design was done by Michael Zargham and published in a short research proposal [Social Sensor Fusion](https://github.com/BlockScience/conviction/blob/master/social-sensorfusion.pdf). This work is based on a dynamic resource allocation algorithm presented in Zargham's PhD Thesis.
The work proceeded in collaboration with the Commons Stack, including expanding on the Python implementation to make up part of the Commons Simulator game. An implementation of Conviction Voting as a smart contract within the Aragon Framework was developed by 1hive.org and is currently being used for community decision making around allocation of their community currency, Honey.
## The Word Problem
Suppose a group of people want to coordinate to make a collective decision. Social dynamics such as discussions, signaling, and even changing one's mind based on feedback from others' input play an important role in these processes. While much of the decision making happens informally, in order to be fair the ultimate decision making process still requires a set of formal rules that the community collectively agrees to, which serves to functionally channel a plurality of preferences into discrete outcomes. In our case we are interested in a procedure which supports asynchronous interactions, and provides visibility into likely outcomes prior to their resolution, to serve as a driver of good faith debate and healthy forms of coalition building. Furthermore, participants should be able to show support for multiple initiatives, and to vary the level of support shown. Participants have a quantity of signaling power which may be fixed or variable, homogeneous or heterogeneous. For the purpose of this document, we'll focus on the case where the discrete decisions to be made are decisions to allocate funds from a shared funding pool towards projects of interest to the community.
## Converting to a Math Problem
Let's start taking these words and constructing a mathematical representation that supports a design that meets the description above. To start we need to define participants.
### Participants
Let $\mathcal{A}$ be the set of participants. Consider a participant $a\in \mathcal{A}$. Any participant $a$ has some capacity to participate in the voting process $h[a]$. In a fixed quantity, homogeneous system $h[a] = h$ for all $a\in \mathcal{A}$ where $h$ is a constant. The access control process managing how one becomes a participant determines the total supply of "votes" $S = \sum_{a\in \mathcal{A}} h[a] = n\cdot h$ where the number of participants is $n = |\mathcal{A}|$. In a smart contract setting, the set $\mathcal{A}$ is a set of addresses, and $h[a]$ is a quantity of tokens held by each address $a\in \mathcal{A}$.
### Proposals & Shared Resources
Next, we introduce the idea of proposals. Consider a proposal $i\in \mathcal{C}$. Any proposal $i$ is associated with a request for resources $r[i]$. Those requested resources would be allocated from a constrained pool of communal resources currently totaling $R$. The pool of resources may become depleted because when a proposal $i$ passes, $R^+= R-r[i]$. Therefore it makes sense for us to consider what fraction of the shared resources is being requested, $\mu_i = \frac{r[i]}{R}$, which means that the resource depletion from passing proposals can be bounded by requiring $\mu_i < \mu$ where $\mu$ is a constant representing the maximum fraction of the shared resources which can be dispersed by any one proposal. In order for the system to be sustainable a source of new resources is required. In the case where $R$ is funding, new funding can come from revenues, donations, or in some DAO use cases minting tokens.
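As a minimal sketch of this resource-depletion bound (the names `can_pass` and `max_share` are mine; the $\mu = 0.2$ default and the 48,000 funding pool echo the hyperparameters listed later in this notebook):

```python
def can_pass(requested, shared_funds, max_share=0.2):
    """Check the bound mu_i = r[i]/R < mu before a proposal may disperse funds."""
    mu_i = requested / shared_funds
    return mu_i < max_share

print(can_pass(8_000, 48_000))   # mu_i ≈ 0.167 < 0.2 → True
print(can_pass(12_000, 48_000))  # mu_i = 0.25 ≥ 0.2 → False
```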
### Participants Preferences for Proposals
Most of the interesting information in this system is distributed amongst the participants and it manifests as preferences over the proposals. This can be thought of as a matrix $W\in \mathbb{R}^{n \times m}$.

These private hidden signals drive discussions and voting actions. Each participant individually decides how to allocate their votes across the available proposals. Participant $a$ supports proposal $i$ by setting $x[a,i]>0$ but they are limited by their capacity $\sum_{k\in \mathcal{C}} x[a,k] \le h[a]$. Assuming each participant chooses a subset of the proposals to support, a support graph is formed.

## Aggregating Information
In order to break out of the synchronous voting model, a dynamical systems model of this system is introduced.
### Participants Allocate Voting Power

### System Accounts Proposal Conviction

### Understanding Alpha
* https://www.desmos.com/calculator/x9uc6w72lm
* https://www.desmos.com/calculator/0lmtia9jql
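The role of $\alpha$ can be illustrated with the standard conviction recurrence $y[t+1] = \alpha\, y[t] + x[t]$: under constant support $x$, conviction converges to the maximum $x/(1-\alpha)$, the same quantity computed as `mcv = supply/(1-alpha)` in the trigger sweep cell. The sketch below is my own illustration, not the model's code:

```python
def conviction_series(x, alpha, steps):
    """Accumulate conviction y[t+1] = alpha * y[t] + x under constant support x."""
    y = 0.0
    history = []
    for _ in range(steps):
        y = alpha * y + x
        history.append(y)
    return history

alpha = 1 - 0.5 ** 3  # the alpha used in the trigger sweep cell
series = conviction_series(x=100.0, alpha=alpha, steps=200)
max_conviction = 100.0 / (1 - alpha)
print(series[-1], max_conviction)  # the series converges toward x/(1-alpha)
```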
## Converting Signals to Discrete Decisions
Conviction as kinetic energy and Trigger function as required activation energy.
### The Trigger Function
https://www.desmos.com/calculator/yxklrjs5m3
Below we show a sweep of the trigger function threshold:
```
from model.model.conviction_helper_functions import *
import warnings
warnings.filterwarnings("ignore")
beta = .2 #later we should set this to be param so we can sweep it
# tuning param for the trigger function
rho = .001
#alpha = 1 - 0.9999599 #native timescale for app as in contract code
alpha = 1-.5**3 #timescale set in days with 3 day halflife (from comments in contract comments)
supply= 21706
mcv = supply/(1-alpha)
print('for reference: max conviction = '+str(np.log10(mcv))+' in log10 units')
supply_sweep = trigger_sweep('effective_supply',trigger_threshold,beta,rho,alpha, supply)
alpha_sweep = trigger_sweep('alpha',trigger_threshold,beta,rho,alpha, supply)
trigger_grid(supply_sweep, alpha_sweep)
```
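For reference, here is a hedged reconstruction of the trigger threshold swept above: conviction required grows with the requested share of funds and diverges as the share approaches $\beta$. The canonical implementation lives in `model/model/conviction_helper_functions.py`; treat the exact form and constants here as an approximation:

```python
def trigger_threshold_sketch(requested, funds, supply, beta=0.2, rho=0.001,
                             alpha=1 - 0.5 ** 3):
    """Conviction required before a proposal can pass; infinite when the
    request exceeds the maximum share beta of the funding pool."""
    share = requested / funds
    if share < beta:
        return rho * supply / ((beta - share) ** 2 * (1 - alpha))
    return float('inf')

print(trigger_threshold_sketch(4_800, 48_000, 21_706))   # small request: finite threshold
print(trigger_threshold_sketch(20_000, 48_000, 21_706))  # share > beta: inf
```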
### Resolving Passed Proposals

## Social Systems Modeling
Subjective, exploratory modeling of the social system interacting through the conviction voting algorithm.
### Sentiment
Global Sentiment -- the outside world appreciating the output of the community
Local Sentiment -- agents within the system feeling good about the community
### Social Networks
Preferences as mixing process (social influence)
### Relationships between Proposals
Some proposals are synergistic (passing one makes the other more desirable)
Some proposals are (partially) substitutable (passing one makes the other less desirable)
## cadCAD Overview
In the cadCAD simulation [methodology](https://community.cadcad.org/t/differential-specification-syntax-key/31), we operate on four layers: **Policies, Mechanisms, States**, and **Metrics**. Information flows do not have explicit feedback loop unless noted. **Policies** determine the inputs into the system dynamics, and can come from user input, observations from the exogenous environment, or algorithms. **Mechanisms** are functions that take the policy decisions and update the States to reflect the policy level changes. **States** are variables that represent the system quantities at the given point in time, and **Metrics** are computed from state variables to assess the health of the system. Metrics can often be thought of as KPIs, or Key Performance Indicators.
At a more granular level, to setup a model, there are system conventions and configurations that must be [followed.](https://community.cadcad.org/t/introduction-to-simulation-configurations/34)
The way to think of cadCAD modeling is analogous to machine learning pipelines which normally consist of multiple steps when training and running a deployed model. There is preprocessing, which includes segregating features between continuous and categorical, transforming or imputing data, and then instantiating, training, and running a machine learning model with specified hyperparameters. cadCAD modeling can be thought of in the same way as states, roughly translating into features, are fed into pipelines that have built-in logic to direct traffic between different mechanisms, such as scaling and imputation. Accuracy scores, ROC, etc. are analogous to the metrics that can be configured on a cadCAD model, specifying how well a given model is doing in meeting its objectives. The parameter sweeping capability of cadCAD can be thought of as a grid search, or way to find the optimal hyperparameters for a system by running through alternative scenarios. A/B style testing that cadCAD enables is used in the same way machine learning models are A/B tested, except out of the box, providing a side-by-side comparison of multiple different models to compare and contrast performance. Utilizing the field of Systems Identification, dynamical systems models can be used to "online learn" by providing a feedback loop to generative system mechanisms.
## Differential Specification

## Schema of the states
The model consists of a temporal in-memory graph database called *network* containing nodes of type **Participant** and type **Proposal**. Participants have *holdings* and *sentiment*, and Proposals have *funds_required*, *status* (candidate or active), and *conviction*. The model has three kinds of edges:
* (Participant, participant), we labeled this edge type "influencer" and it contains information about how the preferences and sentiment of one participant influence another
* (Proposal, Proposal), we labeled this edge type "conflict" and it contains information about how synergistic or anti-synergistic two proposals are; basically people are likely to support multiple things that have synergy (meaning once one is passed there is more utility from the other) but they are not likely to pass things that have antisynergy (meaning once one is passed there is less utility from the other).
* The edges between Participant and Proposal, which are described below.
Edges in the network go from nodes of type Participant to nodes of type Proposal with the edges having the key *type*, of which all will be set to *support*. Edges from participant $i$ to proposal $j$ will have the following additional characteristics:
* Each pairing (i,j) will have *affinity*, which determines how much $i$ likes or dislikes proposal $j$.
* Each participant $i$ assigns its *tokens* over the edges (i,j) such that ```Sum_j network.edges[(i,j)]['tokens'] <= network.nodes[i]['holdings']```; that is, the tokens a participant places on proposals must be less than or equal to the total number of tokens held by the participant.
* Each pairing (i,j) will have *conviction* local to that edge whose update at each timestep is computed using the value of *tokens* at that edge.
* Each proposal *j* will have a *conviction* which is equal to the sum of the conviction on its inbound edges: ```network.nodes[j]['conviction'] = Sum_i network.edges[(i,j)]['conviction']```.
The other state variable in the model is *funds*, which is a numpy floating point.
The system consists of 100 time steps without a parameter sweep or monte carlo.
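A minimal sketch of this schema in networkx (the attribute names follow the description above; the node ids and numeric values are made up for illustration):

```python
import networkx as nx

network = nx.DiGraph()
# participant and proposal nodes with the attributes described above
network.add_node(0, type='participant', holdings=100.0, sentiment=0.9)
network.add_node(1, type='proposal', funds_requested=500.0,
                 status='candidate', conviction=0.0)
# a support edge carrying affinity, staked tokens, and edge-local conviction
network.add_edge(0, 1, type='support', affinity=0.7, tokens=60.0, conviction=0.0)

# a proposal's conviction is the sum of conviction on its inbound support edges
network.nodes[1]['conviction'] = sum(
    network.edges[(i, 1)]['conviction']
    for i in network.predecessors(1)
    if network.edges[(i, 1)]['type'] == 'support'
)
print(network.nodes[1]['conviction'])  # 0.0 before any accumulation
```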
## Partial State Update Blocks
Each partial state update block is kind of a like a phase in a phased based board game. Everyone decides what to do and it reconciles all decisions. One timestep is a full turn, with each block being a phase of a timestep or turn. We will walk through the individaul Partial State update blocks one by one below.
```
{
    # system.py:
    'policies': {
        'random': driving_process
    },
    'variables': {
        'network': update_network,
        'funds': increment_funds,
    }
}
```
To simulate the arrival of participants and proposals into the system, we have a driving process to represent the arrival of individual agents. We use a random uniform distribution generator over [0, 1) to calculate the number of new participants. We then use an exponential distribution to calculate the participant's tokens, using a loc of 0.0 and a scale of expected holdings, which is calculated as 0.1 * supply / number of existing participants. We calculate the number of new proposals by
```
proposal_rate = 1/median_affinity * (1+total_funds_requested/funds)
rv2 = np.random.rand()
new_proposal = bool(rv2<1/proposal_rate)
```
The network state variable is updated to include the new participants and proposals, while the funds state variable is updated for the increase in system funds.
[To see the partial state update code, click here](model/model/system.py)
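The driving process described above can be sketched as follows, using the stdlib `random` module for self-containment (the function name and the participant arrival rate are illustrative, not taken from `system.py`):

```python
import random

def driving_process_sketch(supply, n_participants, median_affinity,
                           total_funds_requested, funds):
    # a new participant arrives based on a uniform draw over [0, 1)
    arrival_rate = 10 / n_participants                 # illustrative rate
    new_participant = random.random() < arrival_rate
    # a new participant's holdings are drawn from an exponential distribution
    expected_holdings = 0.1 * supply / n_participants  # scale parameter
    holdings = random.expovariate(1 / expected_holdings) if new_participant else 0.0
    # new proposal arrival, as in the snippet above
    proposal_rate = 1 / median_affinity * (1 + total_funds_requested / funds)
    new_proposal = random.random() < 1 / proposal_rate
    return new_participant, holdings, new_proposal

print(driving_process_sketch(21_706, 24, 0.5, 3_000, 48_000))
```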
```
{
    # participants.py
    'policies': {
        'completion': check_progress
    },
    'variables': {
        'sentiment': update_sentiment_on_completion,  # not completing projects decays sentiment, completing bumps it
        'network': complete_proposal
    }
},
```
In the next phase of the turn, [to see the logic code, click here](model/model/participants.py), the *check_progress* behavior checks for the completion of previously funded proposals. The code calculates the completion and failure rates as follows:
```
likelihood = 1.0 / (base_completion_rate + np.log(grant_size))
failure_rate = 1.0 / (base_failure_rate + np.log(grant_size))
if np.random.rand() < likelihood:
    completed.append(j)
elif np.random.rand() < failure_rate:
    failed.append(j)
```
With the base_completion_rate being 100 and the base_failure_rate as 200.
The mechanism then updates the respective *network* nodes and updates the sentiment variable on proposal completion.
```
{
    # proposals.py
    'policies': {
        'release': trigger_function
    },
    'variables': {
        'funds': decrement_funds,
        'sentiment': update_sentiment_on_release,  # releasing funds can bump sentiment
        'network': update_proposals
    }
},
```
The [trigger release function](model/model/proposals.py) checks to see if each proposal passes or not. If a proposal passes, funds are decremented by the amount of the proposal, while the proposal's status is changed in the network object.
```
{
    # participants.py
    'policies': {
        'participants_act': participants_decisions
    },
    'variables': {
        'network': update_tokens
    }
}
```
The participants decide, based on their affinity, which proposals they would like to support ([to see the logic code, click here](model/model/participants.py)). Proposals that participants have high affinity for receive more support and pledged tokens than proposals with lower affinity and sentiment. We then update everyone's holdings and their conviction for each proposal.
## Model next steps
The model described above is the second iteration and covers the core mechanisms of the Aragon Conviction Voting model. Below are additional dynamics we could add to enrich the model, providing workstreams for subsequent iterations of this lab notebook.
* Mixing of token holdings among participants
* Departure of participants
* Proposals which are good or no good together
* Effects of outcomes on sentiment
# Simulation
## Configuration
The configuration is factored out into its own module; here we pull out the config object and its partial state update blocks for review.
```
from model import economyconfig
# pull out configurations to illustrate
sim_config,genesis_states,seeds,partial_state_update_blocks = economyconfig.get_configs()
sim_config
partial_state_update_blocks
```
## Initialization
To create the genesis_states, we create our in-memory graph database within networkx.
### Hyperparameters
* $\beta$ = .2 # maximum share of funds a proposal can take
* $\rho$ = 0.002 # tuning param for the trigger function
* $\alpha$ = 1/8 = 1/2^3 = 3 day half-life when timestep is 1 day
* supply = 21706 # Honey supply balance as of 7-17-2020
* initial_sentiment = .9
* n= 24 #initial participants
* m= 3 #initial proposals
* sensitivity = .75
* tmin = 7 #unit days; minimum periods passed before a proposal can pass
* min_supp = 50 #number of tokens that must be staked for a proposal to be a candidate
* base_completion_rate = 100
* base_failure_rate = 200
* initial_funds = 48000 # in xDai
```
# import libraries
import networkx as nx
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns  # used for the affinity heatmap below
from model.model.conviction_helper_functions import *
# # Parameters
# # maximum share of funds a proposal can take
# beta = .2
# # tuning param for the trigger function
# rho = 0.002
# #alpha = 1 - 0.9999599 #native timescale for app as in contract code
# alpha = 1/2**7 #timescale set in days with 7 day halflife
# supply = 21706 # Honey supply balance as of 7-17-2020
# initial_sentiment = .9
# n= 24 #initial participants
# m= 5 #initial proposals
# sensitivity = .75
# tmin = 0 #unit days; minimum periods passed before a proposal can pass
# min_supp = 50 #number of tokens that must be stake for a proposal to be a candidate
# base_completion_rate = 100
# base_failure_rate = 200
# initial_funds = 48000 # in xDai
#initializers
network = genesis_states['network']
initial_funds = genesis_states['funds']
initial_sentiment = genesis_states['sentiment']
# Create initial states
# genesis_states = {
# 'network':network,
# 'funds':initial_funds,
# 'sentiment':initial_sentiment,
#}
genesis_states
```
#### Exploring the State Data Structure
A graph is a data structure that can evolve over time. A graph $\mathcal{G}(\mathcal{V},\mathcal{E})$ consists of vertices or nodes $\mathcal{V} = \{1, \dots, V\}$ connected by edges $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$.
See *Schema of the states* above for more details
Let's explore!
```
# To explore our model prior to the simulation, we extract key components from our networkX object into lists.
proposals = get_nodes_by_type(network, 'proposal')
participants = get_nodes_by_type(network, 'participant')
supporters = get_edges_by_type(network, 'support')
influencers = get_edges_by_type(network, 'influence')
competitors = get_edges_by_type(network, 'conflict')
#sample a participant
network.nodes[participants[0]]
# Let's look at the distribution of participant holdings at the start of the sim
plt.hist([ network.nodes[i]['holdings'] for i in participants])
plt.title('Histogram of Participants Token Holdings')
plt.xlabel('Amount of Honey')
plt.ylabel('Count of Participants')
nx.draw_spring(network, nodelist = participants, edgelist=influencers)
plt.title('Participants Social Network')
#lets look at proposals
network.nodes[proposals[0]]
```
Proposals initially start without any conviction, and with the status of candidate. If a proposal's amount of conviction is greater than its trigger, then the proposal moves to active and its requested funds are granted.
All initial proposals start with 0 conviction and state 'candidate', so we can simply examine the amounts of funds requested:
```
funds_array = np.array([ network.nodes[i]['funds_requested'] for i in proposals])
conviction_required = np.array([trigger_threshold(r, initial_funds, supply, beta, rho, alpha) for r in funds_array])
plt.bar( proposals, funds_array/initial_funds)
plt.title('Bar chart of Proposals Funds Requested')
plt.xlabel('Proposal identifier')
plt.ylabel('Amount of Honey requested(as a Fraction of Funds available)')
plt.bar( proposals, conviction_required)
plt.title('Bar chart of Proposals Conviction Required')
plt.xlabel('Proposal identifier')
plt.ylabel('Amount of Conviction')
```
Conviction is a concept that arises in the edges between participants and proposals. In the initial conditions there are no votes yet, so we will look at conviction later; however, the voting choices are driven by underlying affinities, which we can see now.
```
m = len(proposals)
n = len(participants)
affinities = np.empty((n,m))
for i_ind in range(n):
    for j_ind in range(m):
        i = participants[i_ind]
        j = proposals[j_ind]
        affinities[i_ind][j_ind] = network.edges[(i,j)]['affinity']
dims = (20, 5)
fig, ax = plt.subplots(figsize=dims)
sns.heatmap(affinities.T,
            xticklabels=participants,
            yticklabels=proposals,
            square=True,
            cbar=True,
            cmap=plt.cm.RdYlGn,
            ax=ax)
plt.title('affinities between participants and proposals')
plt.ylabel('proposal_id')
plt.xlabel('participant_id')
```
## Run simulation
Now we will create the final system configuration, append the genesis states we created, and run our simulation.
```
from cadCAD.configuration import Experiment
# Create configuration
exp = Experiment()
exp.append_configs(
    sim_configs=sim_config,
    initial_state=genesis_states,
    seeds=seeds,
    partial_state_update_blocks=partial_state_update_blocks
)
import numpy as np
import pandas as pd
from model.model.conviction_helper_functions import *
from model import run
from cadCAD import configs
pd.options.display.float_format = '{:.2f}'.format
%matplotlib inline
rdf = run.run(configs)
```
After the simulation has run successfully, we perform some postprocessing to extract node and edge values from the network object and add as columns to the pandas dataframe. For the rdf, we take only the values at the last substep of each timestep in the simulation.
```
df= run.postprocessing(rdf,0)
df.head(5)
df.plot('timestep','sentiment')
df.plot('timestep',['funds', 'candidate_funds'])
affinities_plot(df)
df.plot(x='timestep',y=['candidate_count','active_count','completed_count', 'killed_count', 'failed_count'],
kind='area')
plt.title('Proposal Status')
plt.ylabel('count of proposals')
plt.legend(ncol = 3,loc='upper center', bbox_to_anchor=(0.5, -0.15))
df.plot(x='timestep',y=['candidate_funds','active_funds','completed_funds', 'killed_funds', 'failed_funds'], kind='area')
plt.title('Proposal Status weighted by funds requested')
plt.ylabel('Funds worth of proposals')
plt.legend(ncol = 3,loc='upper center', bbox_to_anchor=(0.5, -0.15))
nets = df.network.values
K = 55
N = 56
snap_plot(nets[K:N], size_scale = 1/10,savefigs=True)
```
## Conclusion
We have created a simplified conviction voting model that illustrates the state objects, and provides descriptions of how the model fits together. In subsequent notebooks, we will expand the model to introduce additional complexity to more fit real world implementations.
# Exporting and Archiving
Most of the other user guides show you how to use HoloViews for interactive, exploratory visualization of your data, while the [Applying Customizations](03-Applying_Customizations.ipynb) user guide shows how to use HoloViews completely non-interactively, generating and rendering images directly to disk using `hv.save`. In this notebook, we show how HoloViews works together with the Jupyter Notebook to establish a fully interactive yet *also* fully reproducible scientific or engineering workflow for generating reports or publications. That is, as you interactively explore your data and build visualizations in the notebook, you can automatically generate and export them as figures that will feed directly into your papers or web pages, along with records of how those figures were generated and even storing the actual data involved so that it can be re-analyzed later.
```
import holoviews as hv
from holoviews import opts
from holoviews.operation import contours
hv.extension('matplotlib')
```
## Exporting specific files
During interactive exploration in the Jupyter Notebook, your results are always visible within the notebook itself, but you can explicitly request that any visualization is also exported to an external file on disk:
```
penguins = hv.RGB.load_image('../assets/penguins.png')
hv.save(penguins, 'penguin_plot.png', fmt='png')
penguins
```
This mechanism can be used to provide a clear link between the steps for generating the figure, and the file on disk. You can now load the exported PNG image back into HoloViews, if you like, using ``hv.RGB.load_image`` although the result would be a bit confusing due to the nested axes.
The ``fmt="png"`` part of the ``hv.save`` function call above specified that the file should be saved in PNG format, which is useful for posting on web pages or editing in raster-based graphics programs. Note that `hv.save` also accepts `HoloMap`s which can be saved to formats such as ``'scrubber'``, ``'widgets'`` or even ``'gif'`` or ``'mp4'`` (if the necessary matplotlib dependencies are available).
If the file extension is part of the filename, that will automatically be used to set the format. Conversely, if the format is explicitly specified, then the extension does not have to be part of the filename (and any filename extension that is provided will be ignored). Sometimes the two pieces of information are independent: for instance, a filename ending in `.html` can support either the `'widgets'` or `'scrubber'` formats.
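The rule above can be illustrated with a small helper that mimics the inference: an explicit format wins, otherwise the extension decides. This is an illustration of the logic only, not HoloViews' actual implementation:

```python
import os

def infer_format(filename, fmt=None):
    """Mimic the rule: an explicit fmt wins; otherwise use the file extension."""
    if fmt is not None:
        return fmt
    ext = os.path.splitext(filename)[1].lstrip('.')
    return ext or None

print(infer_format('plot.png'))                   # 'png'      (from the extension)
print(infer_format('plot', fmt='svg'))            # 'svg'      (explicit format)
print(infer_format('plot.html', fmt='scrubber'))  # 'scrubber' (explicit wins)
```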
For a publication, you will usually want to select SVG format because this vector format preserves the full resolution of all text and drawing elements. SVG files can be be used in some document preparation programs directly (e.g. [LibreOffice](http://www.libreoffice.org/)), and can easily be converted and manipulated in vector graphics editors such as [Inkscape](https://inkscape.org).
## Exporting notebooks
The ``hv.save`` function is useful when you want specific plots saved into specific files. Often, however, a notebook will contain an entire suite of results contained in multiple different cells, and manually specifying these cells and their filenames is error-prone, with a high likelihood of accidentally creating multiple files with the same name or using different names in different notebooks for the same objects.
To make the exporting process easier for large numbers of outputs, as well as more predictable, HoloViews also offers a powerful automatic notebook exporting facility, creating an archive of all your results. Automatic export is very useful in the common case of having a notebook that contains a series of figures to be used in a report or publication, particularly if you are repeatedly re-running the notebook as you finalize your results, and want the full set of current outputs to be available to an external document preparation system.
The advantage of using this archival system over simply converting the notebook to a static HTML file with nbconvert is that you can generate a collection of individual file assets in one or more desired file formats.
To turn on automatic adding of your files to the export archive, run ``hv.archive.auto()``:
```
hv.archive.auto()
```
This object's behavior can be customized extensively; try pressing tab within the parentheses for a list of options, which are described more fully below.
By default, the output will go into a directory with the same name as your notebook, and the names for each object will be generated from the groups and labels used by HoloViews. Objects that contain HoloMaps are not exported by default, since those are usually rendered as animations that are not suitable for inclusion in publications, but you can change it to ``.auto(holomap='gif')`` if you want those as well.
### Adding files to an archive
To see how the auto-exporting works, let's define a few HoloViews objects:
```
penguins[:,:,'R'].relabel("Red") + penguins[:,:,'G'].relabel("Green") + penguins[:,:,'B'].relabel("Blue")
penguins * hv.Arrow(0.15, 0.3, 'Penguin', '>')
cs = contours(penguins[:,:,'R'], levels=[0.10,0.80])
overlay = penguins[:, :, 'R'] * cs
overlay.opts(
opts.Contours(linewidth=1.3, cmap='Autumn'),
opts.Image(cmap="gray"))
```
We can now list what has been captured, along with the names that have been generated:
```
hv.archive.contents()
```
Here each object has resulted in two files, one in SVG format and one in Python "pickle" format (which appears as a ``zip`` file with extension ``.hvz`` in the listing). We'll ignore the pickle files for now, focusing on the SVG images.
The name generation code for these files is heavily customizable, but by default it consists of a list of dimension values and objects:
``{dimension},{dimension},...{group}-{label},{group}-{label},...``.
The ``{dimension}`` shows what dimension values are included anywhere in this object, if it contains any high-level ``Dimensioned`` objects like ``HoloMap``, ``NdOverlay``, and ``GridSpace``. Of course, nearly all HoloViews objects have dimensions, such as ``x`` and ``y`` in this case, but those dimensions are not used in the filenames because they are explicitly shown in the plots; only the top-level dimensions are used (those that determine which plot this is, not those that are shown in the plot itself.)
The ``{group}-{label}`` information lists the names HoloViews uses for default titles and for attribute access for the various objects that make up a given displayed object. E.g. the first SVG image in the list is a ``Layout`` of the three given ``Image`` objects, and the second one is an ``Overlay`` of an ``RGB`` object and an ``Arrow`` object. This information usually helps distinguish one plot from another, because they will typically be plots of objects that have different labels.
If the generated names are not unique, a numerical suffix will be added to make them unique. A maximum filename length is enforced, which can be set with ``hv.archive.max_filename=``_num_.
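A simplified sketch of such a name generator, combining the numeric uniqueness suffix and the maximum-length cap (the real formatter in HoloViews is far more configurable; this is my own illustration):

```python
def make_unique_name(base, existing, max_len=100):
    """Truncate to max_len and add a numeric suffix if the name is taken."""
    name = base[:max_len]
    candidate, suffix = name, 1
    while candidate in existing:
        suffix += 1
        tail = f"-{suffix}"
        candidate = name[:max_len - len(tail)] + tail
    existing.add(candidate)
    return candidate

used = set()
print(make_unique_name('Layout-Red-Green-Blue', used))  # 'Layout-Red-Green-Blue'
print(make_unique_name('Layout-Red-Green-Blue', used))  # 'Layout-Red-Green-Blue-2'
```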
If you prefer a fixed-width filename, you can use a hash for each name instead (or in addition), where ``:.8`` specifies how many characters to keep from the hash:
```
hv.archive.filename_formatter="{SHA:.8}"
cs  # display an object again so it is captured under the new filename format
hv.archive.contents()
```
You can see that the newest files added have the shorter, fixed-width format, though the names are no longer meaningful. If the ``filename_formatter`` had been set from the start, all filenames would have been of this type, which has both practical advantages (short names, all the same length) and disadvantages (no semantic clue about the contents).
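The fixed-width behavior of ``{SHA:.8}`` can be mimicked with the standard library (an illustration of the idea; HoloViews' actual hashing may differ in detail):

```python
import hashlib

def short_hash(name, length=8):
    # Truncated hex digest: fixed width, but no longer humanly meaningful
    return hashlib.sha256(name.encode()).hexdigest()[:length]

print(short_hash('Contours,Image'), len(short_hash('Contours,Image')))
```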
### Generated indexes
In addition to the files that were added to the archive for each of the cell outputs above, the archive exporter will also add an ``index.html`` file with a static copy of the notebook, with each cell labelled with the filename used to save it once `hv.archive.export()` is called (you can verify this for yourself after this call is executed below). This HTML file acts as a definitive index to your results, showing how they were generated and where they were exported on disk.
The exporter will also add a cleared, runnable copy of the notebook ``index.ipynb`` (with output deleted), so that you can later regenerate all of the output, with changes if necessary.
The exported archive will thus be a complete set of your results, along with a record of how they were generated, plus a recipe for regenerating them -- i.e., fully reproducible research! This HTML file and .ipynb file can then be submitted as supplemental materials for a paper, allowing any reader to build on your results, or it can just be kept privately so that future collaborators can start where this research left off.
### Adding your own data to the archive
Of course, your results may depend on a lot of external packages, libraries, code files, and so on, which will not automatically be included or listed in the exported archive.
Luckily, the archive support is very general, and you can add any object to it that you want to be exported along with your output. For instance, you can store arbitrary metadata of your choosing, such as version control information, here as a JSON-format text file:
```
import json
hv.archive.add(filename='metadata.json',
data=json.dumps({'repository':'git@github.com:ioam/holoviews.git',
'commit':'437e8d69'}), info={'mime_type':'text/json'})
```
The new file can now be seen in the contents listing:
```
hv.archive.contents()
```
You can get a more direct list of filenames using the ``listing`` method:
```
listing = hv.archive.listing()
listing
```
In this way, you should be able to automatically generate output files, with customizable filenames, storing any data or metadata you like along with them so that you can keep track of all the important information for reproducing these results later.
### Controlling the behavior of ``hv.archive``
The ``hv.archive`` object provides numerous parameters that can be changed. You can e.g.:
- output the whole directory to a single compressed ZIP or tar archive file (e.g. ``hv.archive.set_param(pack=False, archive_format='zip')`` or ``archive_format='tar'``)
- generate a new directory or archive every time the notebook is run (``hv.archive.uniq_name=True``); otherwise the old output directory is erased each time
- choose your own name for the output directory or archive (e.g. ``hv.archive.export_name="{timestamp}"``)
- change the format of the optional timestamp (e.g. to retain snapshots hourly, ``archive.set_param(export_name="{timestamp}", timestamp_format="%Y_%m_%d-%H")``)
- select PNG output, at a specified rendering resolution: ``hv.archive.exporters=[hv.renderer('bokeh').instance(size=50)]``
These options and any others listed above can all be set in the ``hv.archive.auto()`` call at the start, for convenience and to ensure that they apply to all of the files that are added.
### Writing the archive to disk
To actually write the files you have stored in the archive to disk, you need to call ``export()`` after any cell that might contain computation-intensive code. Usually it's best to do so as the last or nearly last cell in your notebook, though here we do it earlier because we wanted to show how to use the exported files.
```
hv.archive.export()
```
Shortly after the ``export()`` command has been executed, the output should be available as a directory on disk, by default in the same directory as the notebook file, named with the name of the notebook:
```
import os
os.getcwd()
if os.path.exists(hv.archive.notebook_name):
    print('\n'.join(sorted(os.listdir(hv.archive.notebook_name))))
```
For technical reasons to do with how the IPython Notebook interacts with JavaScript, if you use the Jupyter Notebook command ``Run all``, the ``hv.archive.export()`` command is not actually executed when the cell with that call is encountered during the run. Instead, the ``export()`` is queued until after the final cell in the notebook has been executed. This asynchronous execution has several awkward but not serious consequences:
- It is not possible for the ``export()`` cell to show whether any errors were encountered during exporting, because these will not occur until after the notebook has completed processing. To see any errors, you can run ``hv.archive.last_export_status()`` separately, *after* the ``Run all`` has completed. E.g. just press shift-[Enter] in the following cell, which will tell you whether the previous export was successful.
- If you use ``Run all``, the directory listing ``os.listdir()`` above will show the results from the *previous* time this notebook was run, since it executes before the export. Again, you can use shift-[Enter] to update the data once complete.
- The ``Export name:`` in the output of ``hv.archive.export()`` will not always show the actual name of the directory or archive that will be created. In particular, it may say ``{notebook}``, which when saving will actually expand to the name of your Jupyter Notebook.
```
hv.archive.last_export_status()
```
### Accessing your saved data
By default, HoloViews saves not only your rendered plots (PNG, SVG, etc.), but also the actual HoloViews objects that the plots visualize, which contain all your actual data. The objects are stored in compressed Python pickle files (``.hvz``), which are visible in the directory listings above but have been ignored until now. The plots are what you need for writing a document, but the raw data is a crucial record to keep as well. For instance, you can now load in the HoloViews object, and manipulate it just as you could when it was originally defined. E.g. we can re-load our ``Levels`` ``Overlay`` file, which has the contours overlaid on top of the image, and easily pull out the underlying ``Image`` object:
```
import os
from holoviews.core.io import Unpickler

c, a = None, None
hvz_file = [f for f in listing if f.endswith('hvz')][0]
path = os.path.join(hv.archive.notebook_name, hvz_file)

if os.path.isfile(path):
    print('Unpickling {filename}'.format(filename=hvz_file))
    obj = Unpickler.load(open(path, "rb"))
    print(obj)
else:
    print('Could not find file {path}'.format(path=path))
    print('Current directory is {cwd}'.format(cwd=os.getcwd()))
    print('Containing files and directories: {listing}'.format(listing=os.listdir(os.getcwd())))
```
Given the ``Image``, you can also access the underlying array data, because HoloViews objects are simply containers for your data and associated metadata. This means that years from now, as long as you can still run HoloViews, you can easily re-load and explore your data, plotting it in entirely different ways or running different analyses, even if you no longer have any of the original code you used to generate the data. All you need is HoloViews, which is permanently archived on GitHub and is fully open source and thus should always remain available. Because the data is stored conveniently in the archive alongside the figure that was published, you can see immediately which file corresponds to the data underlying any given plot in your paper, and immediately start working with the data, rather than laboriously trying to reconstruct the data from a saved figure.
If you do not want the pickle files, you can of course turn them off if you prefer, by changing ``hv.archive.auto()`` to:
```python
hv.archive.auto(exporters=[hv.renderer('matplotlib').instance(holomap=None)])
```
Here, the exporters list has been updated to include the usual default exporters *without* the `Pickler` exporter that would usually be included.
## Using HoloViews to do reproducible research
The export options from HoloViews help you establish a feasible workflow for doing reproducible research: starting from interactive exploration, either export specific files with ``hv.save``, or enable ``hv.archive.auto()``, which will store a copy of your notebook and its output ready for inclusion in a document but retaining the complete recipe for reproducing the results later.
### Why reproducible research matters
To understand why these capabilities are important, let's consider the process by which scientific results are typically generated and published without HoloViews. Scientists and engineers use a wide variety of data-analysis tools, ranging from GUI-based programs like Excel spreadsheets, through mixed GUI/command-line programs like Matlab, to purely scriptable tools like matplotlib or bokeh. The process by which figures are created in any of these tools typically involves copying data from its original source, selecting it, transforming it, choosing portions of it to put into a figure, choosing the various plot options for a subfigure, combining different subfigures into a complete figure, generating a publishable figure file with the full figure, and then inserting that into a report or publication.
If using GUI tools, often the final figure is the only record of that process, and even just a few weeks or months later a researcher will often be completely unable to say precisely how a given figure was generated. Moreover, this process needs to be repeated whenever new data is collected, which is an error-prone and time-consuming process. The lack of records is a serious problem for building on past work and revisiting the assumptions involved, which greatly slows progress both for individual researchers and for the field as a whole. Graphical environments for capturing and replaying a user's GUI-based workflow have been developed, but these have greatly restricted the process of exploration, because they only support a few of the many analyses required, and thus they have rarely been successful in practice. With GUI tools it is also very difficult to "curate" the sequence of steps involved, i.e., eliminating dead ends, speculative work, and unnecessary steps, with a goal of showing the clear path from incoming data to a final figure.
In principle, using scriptable or command-line tools offers the promise of capturing the steps involved, in a form that can be curated. In practice, however, the situation is often no better than with GUI tools, because the data is typically taken through many manual steps that culminate in a published figure, and without a laboriously manually created record of what steps are involved, the provenance of a given figure remains unknown. Where reproducible workflows are created in this way, they tend to be "after the fact", as an explicit exercise to accompany a publication, and thus (a) they are rarely done, (b) they are very difficult to do if any of the steps were not recorded originally.
A Jupyter notebook helps significantly to make the scriptable-tools approach viable, by recording both code and the resulting output, and can thus in principle act as a record for establishing the full provenance of a figure. But because typical plotting libraries require so much plotting-specific code before any plot is visible, the notebook quickly becomes unreadable. To make notebooks readable, researchers then typically move the plotting code for a specific figure to some external file, which then drifts out of sync with the notebook so that the notebook no longer acts as a record of the link between the original data and the resulting figure.
HoloViews provides the final missing piece in this approach, by allowing researchers to work directly with their data interactively in a notebook, using small amounts of code that focus on the data and analyses rather than plotting code, yet showing the results directly alongside the specification for generating them. This user guide will describe how to use a Jupyter notebook with HoloViews to export your results in a way that preserves the information about how those results were generated, providing a clear chain of provenance and making reproducible research practical at last.
For more information on how HoloViews can help build a reproducible workflow, see our [2015 paper on using HoloViews for reproducible research](http://conference.scipy.org/proceedings/scipy2015/pdfs/jean-luc_stevens.pdf).
| github_jupyter |
Homework 2
=====
Daphne Ippolito
```
import xor_network
```
What issues did you have?
-----
The first issue that I had was that I was trying to output a single scalar whose value could be thresholded to determine whether the network should return TRUE or FALSE. It turns out loss functions for this are much more complicated than if I had instead treated the XOR problem as a classification task with one output per possible label ('TRUE', 'FALSE'). This is the approach I have implemented here.
Another issue I encountered at first was that I was using too few hidden nodes. I originally thought that such a simple problem would only need a couple nodes in a single hidden layer to implement. However, such small networks were extremely slow to converge. This is exemplified in the Architectures section.
Lastly, when I was using small batch sizes (<= 5 examples), and randomly populating the batches, the network would sometimes fail to converge, probably because the batches didn't contain all the possible examples.
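A simple mitigation for the small-batch issue (my own sketch, not part of the assignment code) is to build each batch so that all four XOR cases are always present, filling the rest randomly:

```python
import itertools
import random

# All four XOR input/output cases
XOR_CASES = [((a, b), a ^ b) for a, b in itertools.product([0, 1], repeat=2)]

def make_batch(size, rng=random):
    """Batch that always contains every XOR case (assumes size >= 4)."""
    batch = list(XOR_CASES)
    batch += [rng.choice(XOR_CASES) for _ in range(size - len(batch))]
    rng.shuffle(batch)
    return batch

batch = make_batch(8)
print(sorted({inputs for inputs, _ in batch}))  # all four input pairs appear
```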
Which activation functions did you try? Which loss functions?
-----
I tried ReLU, sigmoid, and tanh activation functions. I only successfully used a softmax cross-entropy loss function.
The results for the different activation functions can be seen by running the block below. The sigmoid function consistently takes the longest to converge. I'm unsure why tanh does significantly better than sigmoid.
```
batch_size = 100
num_steps = 10000
num_hidden = 7
num_hidden_layers = 2
learning_rate = 0.2
xor_network.run_network(batch_size, num_steps, num_hidden, num_hidden_layers, learning_rate, False, 'sigmoid')
xor_network.run_network(batch_size, num_steps, num_hidden, num_hidden_layers, learning_rate, False, 'tanh')
xor_network.run_network(batch_size, num_steps, num_hidden, num_hidden_layers, learning_rate, False, 'relu')
```
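On the sigmoid-vs-tanh question above, one plausible factor (my aside, not a claim from the homework): tanh's gradient near zero, where weights typically start, is four times larger than sigmoid's, so early learning steps are bigger:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def d_sigmoid(x):
    s = sigmoid(x)
    return s * (1 - s)

def d_tanh(x):
    return 1 - math.tanh(x) ** 2

# Gradients of the two activations at 0
print(d_sigmoid(0.0), d_tanh(0.0))  # 0.25 1.0
```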
What architectures did you try? What were the different results? How long did it take?
-----
The results for several different architectures can be seen by running the code below. Since there is no reading from disk, each iteration takes almost exactly the same amount of time. Therefore, I will report "how long it takes" in number of iterations rather than in time.
```
# Network with 2 hidden layers of 5 nodes
xor_network.run_network(batch_size, num_steps, 5, 2, learning_rate, False, 'relu')
# Network with 5 hidden layers of 2 nodes each
num_steps = 3000 # (so it doesn't go on forever)
xor_network.run_network(batch_size, num_steps, 2, 5, learning_rate, False, 'relu')
```
**Conclusion from the above:** With the number of parameters held constant, a deeper network does not necessarily perform better than a shallower one. I am guessing this is because fewer nodes in a layer means that the network can keep around less information from layer to layer.
```
xor_network.run_network(batch_size, num_steps, 3, 5, learning_rate, False, 'relu')
```
**Conclusion from the above:** Indeed, the problem is not the number of layers, but the number of nodes in each layer.
```
# This is the minimum number of nodes I can use to consistently get convergence with Gradient Descent.
xor_network.run_network(batch_size, num_steps, 5, 1, learning_rate, False, 'relu')
# If I switch to using Adam Optimizer, I can get down to 2 hidden nodes and consistently have convergence.
xor_network.run_network(batch_size, num_steps, 2, 1, learning_rate, True, 'relu')
```
<a href="https://colab.research.google.com/github/yuliethbernal03/MDigitales/blob/main/Clase3.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
**Conditional structures**
---
When building a conditional statement, keep in mind that such statements are designed to support decision-making.
Example:
**If** Pedro goes to the store on the left he will arrive faster, **otherwise** he will take longer.
There are several types of conditional statements; some are simple and others compound. However, they only produce two results:
1. True
2. False
In Python these results are `True` and `False`.
To apply the conditionals covered below, recall the comparison operators seen in the previous class:
1. Equal to (==)
2. Not equal to (!=)
3. Less than (<)
4. Greater than (>)
5. Less than or equal to (<=)
6. Greater than or equal to (>=)
In compound conditionals, we can combine comparisons with the logical operators:
1. and
2. or
3. not
**Examples:**
3 == 8 and 8 > 12 → False
3 == 3 or 15 < 3 → True
not True → False
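These examples can be checked directly in the Python interpreter:

```python
# Verifying the logical-operator examples above
print(3 == 8 and 8 > 12)   # False
print(3 == 3 or 15 < 3)    # True
print(not True)            # False
```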
To apply the comparison and logical operators, we should also keep in mind what we call **flowcharts**, which help us organize our ideas for decision-making.
**The IF statement**
---
This statement evaluates whether a condition is true or false: if it is true, Python executes the instruction (or instructions) written immediately after the condition.
```
num = int(input("Enter a number: "))
if num == 200:
    print("You typed 200")
```
**The ELSE statement**
---
This statement ("otherwise") specifies the actions to perform when the condition is false.
```
num = int(input("Enter a number: "))
if num == 200:
    print("You typed 200")
else:
    print("The number you typed is not 200")
```
**The ELIF statement**
---
It means "else, if" and allows chaining conditions:
```
num = int(input("Enter a number: "))
if num == 200:
    print("You typed 200")
elif num > 200:
    print("The number you typed is greater than 200")
else:
    print("The number you typed is not 200")
```
**Multiple conditionals**
---
When a situation involves several conditions that depend on one another, it can be handled with several `if` statements or with appropriate use of `elif`. However, with many conditions the program tends to need many more lines of code.
In those cases logical operators such as `and` and `or` become necessary.
```
x = int(input("Value: "))  # Nested conditional
if 0 < x:
    if x < 10:
        print("x is a single-digit number")

x = int(input("Value: "))  # Boolean expression
if 0 < x and x < 10:
    print("x is a single-digit number")

x = int(input("Value: "))  # Chained comparison
if 0 < x < 10:
    print("x is a single-digit number")
```
This chained comparison is equivalent to both the compound boolean expression and the nested conditional above.
## **Iterative control structures**
---
Variables are key in iterative control structures, since they link the iteration to the condition being evaluated.
**What is iteration?**
Iteration is the repeated execution of code as many times as needed until the established conditions are met.
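A minimal example of iteration, using a `while` loop:

```python
# The body repeats while the condition holds
count = 0
while count < 3:
    print("iteration", count)
    count += 1   # without this update the loop would never end
```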
**Flags**
Flags are variables that take a value, preferably binary or boolean, and indicate a state.
Example:
```
suma = False
total = 0
a = 3
b = 10

if suma == False:
    total = a + b
    suma = True

if (suma == True):
    print("the total value of the sum is:", total)
```
The variable `suma` here is boolean; its role is to indicate when the sum has been executed. It therefore starts in the state `False` and, after the sum runs, takes the value `True`.
When this happens you may hear the phrase "the flag was raised", meaning that one action or another changed the flag's state.
Qualitative example: **CONTAGION CASE**
```
paciente = input("Enter the patient's name: ")
contagio_v = input("Have you taken the contagion test? ")

if (contagio_v == "No"):
    print("Patient", paciente, "has not taken the test; please get tested at the nearest health center")
    print("Please take the test")
if (contagio_v == "Pending"):
    print(paciente, ", please check the email where the test result was attached.")
if (contagio_v == "Yes"):
    print(paciente, ", since your test result is positive, keep your distance from other people for at least 15 days.")
```
The bootcamp began on July 26, 2021. The students were told they would be graded as follows:
1. Homework 10%
2. Workshops 25%
3. Attendance 5%
4. Participation 15%
5. Project 45%
Given these weights, the students decide to write a program that tells them what score they would need on the project to reach a final grade of at least 3.7 in the course, if the grades they have so far are:
1. Homework 4.5
2. Workshops 4.0
3. Attendance 5.0
4. Participation 4.0
What score would they need on the project?
```
```
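One way to fill in the exercise block above (a sketch of a possible solution; variable names are my own choice):

```python
# Weighted-average grade calculation for the bootcamp exercise
weights = {'homework': 0.10, 'workshops': 0.25, 'attendance': 0.05,
           'participation': 0.15, 'project': 0.45}
grades = {'homework': 4.5, 'workshops': 4.0, 'attendance': 5.0,
          'participation': 4.0}
target = 3.7

earned = sum(weights[k] * grades[k] for k in grades)   # points earned so far
needed = (target - earned) / weights['project']        # project grade required
print(round(needed, 2))  # 3.11
```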
```
%matplotlib inline
```
# Classifier comparison
A comparison of a several classifiers in scikit-learn on synthetic datasets.
The point of this example is to illustrate the nature of decision boundaries
of different classifiers.
This should be taken with a grain of salt, as the intuition conveyed by
these examples does not necessarily carry over to real datasets.
Particularly in high-dimensional spaces, data can more easily be separated
linearly and the simplicity of classifiers such as naive Bayes and linear SVMs
might lead to better generalization than is achieved by other classifiers.
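The high-dimensional caveat can be illustrated numerically (my own example, not part of the scikit-learn demo): with many more dimensions than points, even randomly labelled points admit a perfect linear separator.

```python
import numpy as np

rng = np.random.RandomState(0)
n_points, n_dims = 20, 100              # far more dimensions than points
X = rng.randn(n_points, n_dims)
y = rng.choice([-1, 1], size=n_points)  # labels assigned at random

# The minimum-norm least-squares solution already separates the classes,
# because the underdetermined system X w = y can be solved exactly.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.all(np.sign(X @ w) == y))
```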
The plots show training points in solid colors and testing points
semi-transparent. The lower right shows the classification accuracy on the test
set.
```
print(__doc__)
# Code source: Gaël Varoquaux
# Andreas Müller
# Modified for documentation by Jaques Grobler
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.datasets import make_moons, make_circles, make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
h = .02 # step size in the mesh
names = ["Nearest Neighbors", "Linear SVM", "RBF SVM", "Gaussian Process",
"Decision Tree", "Random Forest", "Neural Net", "AdaBoost",
"Naive Bayes", "QDA"]
classifiers = [
KNeighborsClassifier(3),
SVC(kernel="linear", C=0.025),
SVC(gamma=2, C=1),
GaussianProcessClassifier(1.0 * RBF(1.0)),
DecisionTreeClassifier(max_depth=5),
RandomForestClassifier(max_depth=5, n_estimators=10, max_features=1),
MLPClassifier(alpha=1, max_iter=1000),
AdaBoostClassifier(),
GaussianNB(),
QuadraticDiscriminantAnalysis()]
X, y = make_classification(n_features=2, n_redundant=0, n_informative=2,
random_state=1, n_clusters_per_class=1)
rng = np.random.RandomState(2)
X += 2 * rng.uniform(size=X.shape)
linearly_separable = (X, y)
datasets = [make_moons(noise=0.3, random_state=0),
make_circles(noise=0.2, factor=0.5, random_state=1),
linearly_separable
]
figure = plt.figure(figsize=(27, 9))
i = 1
# iterate over datasets
for ds_cnt, ds in enumerate(datasets):
    # preprocess dataset, split into training and test part
    X, y = ds
    X = StandardScaler().fit_transform(X)
    X_train, X_test, y_train, y_test = \
        train_test_split(X, y, test_size=.4, random_state=42)

    x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
    y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
    xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
                         np.arange(y_min, y_max, h))

    # just plot the dataset first
    cm = plt.cm.RdBu
    cm_bright = ListedColormap(['#FF0000', '#0000FF'])
    ax = plt.subplot(len(datasets), len(classifiers) + 1, i)
    if ds_cnt == 0:
        ax.set_title("Input data")
    # Plot the training points
    ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright,
               edgecolors='k')
    # Plot the testing points
    ax.scatter(X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright, alpha=0.6,
               edgecolors='k')
    ax.set_xlim(xx.min(), xx.max())
    ax.set_ylim(yy.min(), yy.max())
    ax.set_xticks(())
    ax.set_yticks(())
    i += 1

    # iterate over classifiers
    for name, clf in zip(names, classifiers):
        ax = plt.subplot(len(datasets), len(classifiers) + 1, i)
        clf.fit(X_train, y_train)
        score = clf.score(X_test, y_test)

        # Plot the decision boundary. For that, we will assign a color to each
        # point in the mesh [x_min, x_max]x[y_min, y_max].
        if hasattr(clf, "decision_function"):
            Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()])
        else:
            Z = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1]

        # Put the result into a color plot
        Z = Z.reshape(xx.shape)
        ax.contourf(xx, yy, Z, cmap=cm, alpha=.8)

        # Plot the training points
        ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright,
                   edgecolors='k')
        # Plot the testing points
        ax.scatter(X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright,
                   edgecolors='k', alpha=0.6)
        ax.set_xlim(xx.min(), xx.max())
        ax.set_ylim(yy.min(), yy.max())
        ax.set_xticks(())
        ax.set_yticks(())
        if ds_cnt == 0:
            ax.set_title(name)
        ax.text(xx.max() - .3, yy.min() + .3, ('%.2f' % score).lstrip('0'),
                size=15, horizontalalignment='right')
        i += 1

plt.tight_layout()
plt.show()
```
# Using Fuzzingbook Code in your own Programs
This notebook has instructions on how to use the `fuzzingbook` code in your own programs.
In short, there are three ways:
1. Simply run the notebooks in your browser, using the "mybinder" environment. Choose "Resources→Edit as Notebook" in any of the `fuzzingbook.org` pages; this will lead you to a preconfigured Jupyter Notebook environment where you can toy around at your leisure.
2. Import the code for your own Python programs. Using `pip install fuzzingbook`, you can install all code and start using it from your own code. See "Can I import the code for my own Python projects?", below.
3. Download or check out the code and/or the notebooks from the project site. This allows you to edit and run all things locally. However, be sure to also install the required packages; see below for details.
```
import bookutils
from bookutils import YouTubeVideo
YouTubeVideo("b4HitpWsJL4")
```
## Can I import the code for my own Python projects?
Yes, you can! (If you like Python, that is.) We provide a `fuzzingbook` Python package that you can install using the `pip` package manager:
```shell
$ pip install fuzzingbook
```
As of `fuzzingbook 1.0`, this is set up such that almost all additional required packages are also installed. For a full installation, also follow the steps in "Which other Packages do I need to use the Python Modules?" below.
Once the `pip` installation is complete, you can import individual classes, constants, or functions from each notebook using
```python
>>> from fuzzingbook.<notebook> import <identifier>
```
where `<identifier>` is the name of the class, constant, or function to use, and `<notebook>` is the name of the respective notebook. (If you read this at fuzzingbook.org, then the notebook name is the identifier preceding `".html"` in the URL).
Here is an example importing `RandomFuzzer` from [the chapter on fuzzers](Fuzzer.ipynb), whose notebook name is `Fuzzer`:
```python
>>> from fuzzingbook.Fuzzer import RandomFuzzer
>>> f = RandomFuzzer()
>>> f.fuzz()
'!7#%"*#0=)$;%6*;>638:*>80"=</>(/*:-(2<4 !:5*6856&?""11<7+%<%7,4.8,*+&,,$,."5%<%76< -5'
```
The "Synopsis" section at the beginning of a chapter gives a short survey on useful code features you can use.
## Which OS and Python versions are required?
As of `fuzzingbook 1.0`, Python 3.9 and later is required. Specifically, we use Python 3.9.7 for development and testing. This is also the version to be used if you check out the code from git, and the version you get if you use the fuzzing book within the "mybinder" environment.
To use the `fuzzingbook` code with earlier Python versions, use
```shell
$ pip install 'fuzzingbook==0.95'
```
Our notebooks generally assume a Unix-like environment; the code is tested on Linux and macOS. System-independent code may also run on Windows.
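If you want to guard your own scripts against an unsupported interpreter, a small check like this works (my sketch, not part of the package):

```python
import sys

# fuzzingbook 1.0+ assumes Python 3.9 or later
def fuzzingbook_supported(version_info=sys.version_info):
    return tuple(version_info[:2]) >= (3, 9)

print(fuzzingbook_supported())
```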
## Can I use the code from within a Jupyter notebook?
Yes, you can! You would first install the `fuzzingbook` package (as above); you can then access all code right from your notebook.
Another way to use the code is to _import the notebooks directly_. Download the notebooks from the menu. Then, add your own notebooks into the same folder. After importing `bookutils`, you can then simply import the code from other notebooks, just as our own notebooks do.
Here is again the above example, importing `RandomFuzzer` from [the chapter on fuzzers](Fuzzer.ipynb) – but now from a notebook:
```
import bookutils
from Fuzzer import RandomFuzzer
f = RandomFuzzer()
f.fuzz()
```
If you'd like to share your notebook, let us know; we can integrate it in the repository or even in the book.
## Can I check out the code from git and get the latest and greatest?
Yes, you can! We have a few continuous integration (CI) workflows running which do exactly that. After cloning the repository from [the project page](https://github.com/uds-se/fuzzingbook/) and installing the additional packages (see below), you can `cd` into `notebooks` and start `jupyter` right away!
There also is a `Makefile` provided with literally hundreds of targets; most important are the ones we also use in continuous integration:
* `make check-imports` checks whether your code is free of syntax errors
* `make check-style` checks whether your code is free of type errors
* `make check-code` runs all derived code, testing it
* `make check-notebooks` runs all notebooks, testing them
If you want to contribute to the project, ensure that the above tests run through.
The `Makefile` has many more, often experimental, targets. `make markdown` creates a `.md` variant in `markdown/`, and there's also `make word` and `make epub`, which are set to create Word and EPUB variants (with mixed results). Try `make help` for commonly used targets.
## Can I just run the Python code? I mean, without notebooks?
Yes, you can! You can download the code as Python programs; simply select "Resources $\rightarrow$ Download Code" for one chapter or "Resources $\rightarrow$ All Code" for all chapters. These code files can be executed, yielding (hopefully) the same results as the notebooks.
The code files can also be edited if you wish, but (a) they are very obviously generated from notebooks, (b) therefore not much fun to work with, and (c) if you fix any errors, you'll have to back-propagate them to the notebook before you can make a pull request. Use code files only under severely constrained circumstances.
If you only want to **use** the Python code, install the code package (see above).
## Which other Packages do I need to use the Python Modules?
After downloading the `fuzzingbook` code, installing the `fuzzingbook` package, or checking out `fuzzingbook` from the repository, here's what to do to obtain a complete set of packages.
### Step 1: Install Required Python Packages
The [`requirements.txt` file within the project root folder](https://github.com/uds-se/fuzzingbook/tree/master/) lists all _Python packages required_.
You can do
```sh
$ pip install -r requirements.txt
```
to install all required packages (but using `pipenv` is preferred; see below).
### Step 2: Install Additional Non-Python Packages
The [`apt.txt` file in the `binder/` folder](https://github.com/uds-se/fuzzingbook/tree/master/binder) lists all _Linux_ packages required.
In most cases, however, it suffices to install the `dot` graph drawing program (part of the `graphviz` package). Here are some instructions:
#### Installing Graphviz on Linux
On Linux, run
```sh
$ sudo apt-get install graphviz
```
to install it.
#### Installing Graphviz on macOS
On macOS, if you use `conda`, run
```sh
$ conda install graphviz
```
If you use HomeBrew, run
```sh
$ brew install graphviz
```
## Installing Fuzzingbook Code in an Isolated Environment
If you wish to install the `fuzzingbook` code in an environment that is isolated from your system interpreter,
we recommend using [Pipenv](https://pipenv.pypa.io/), which can automatically create a so-called *virtual environment* hosting all required packages.
To accomplish this, please follow these steps:
### Step 1: Install PyEnv
Optionally install `pyenv` following the [official instructions](https://github.com/pyenv/pyenv#installation) if you are on a Unix operating system.
If you are on Windows, consider using [pyenv-win](https://github.com/pyenv-win/pyenv-win) instead.
This will allow you to seamlessly install any version of Python.
### Step 2: Install PipEnv
Install Pipenv following the official [installation instructions](https://pypi.org/project/pipenv/).
If you have `pyenv` installed, Pipenv can automatically download and install the appropriate version of the Python distribution.
Otherwise, Pipenv will use your system interpreter, which may or may not be the right version.
### Step 3: Install Python Packages
Run
```sh
$ pipenv install -r requirements.txt
```
in the `fuzzingbook` root directory.
### Step 4: Install Additional Non-Python Packages
See above for instructions on how to install additional non-Python packages.
### Step 5: Enter the Environment
Enter the environment with
```sh
$ pipenv shell
```
where you can now execute
```sh
$ make -k check-code
```
to run the tests.
# Neural Machine Translation
Welcome to your first programming assignment for this week!
You will build a Neural Machine Translation (NMT) model to translate human readable dates ("25th of June, 2009") into machine readable dates ("2009-06-25"). You will do this using an attention model, one of the most sophisticated sequence to sequence models.
This notebook was produced together with NVIDIA's Deep Learning Institute.
Let's load all the packages you will need for this assignment.
```
from keras.layers import Bidirectional, Concatenate, Permute, Dot, Input, LSTM, Multiply
from keras.layers import RepeatVector, Dense, Activation, Lambda
from keras.optimizers import Adam
from keras.utils import to_categorical
from keras.models import load_model, Model
import keras.backend as K
import numpy as np
from faker import Faker
import random
from tqdm import tqdm
from babel.dates import format_date
from nmt_utils import *
import matplotlib.pyplot as plt
%matplotlib inline
```
## 1 - Translating human readable dates into machine readable dates
The model you will build here could be used to translate from one language to another, such as translating from English to Hindi. However, language translation requires massive datasets and usually takes days of training on GPUs. To give you a place to experiment with these models even without using massive datasets, we will instead use a simpler "date translation" task.
The network will input a date written in a variety of possible formats (*e.g. "the 29th of August 1958", "03/30/1968", "24 JUNE 1987"*) and translate it into a standardized, machine readable date (*e.g. "1958-08-29", "1968-03-30", "1987-06-24"*). We will have the network learn to output dates in the common machine-readable format YYYY-MM-DD.
<!--
Take a look at [nmt_utils.py](./nmt_utils.py) to see all the formatting. Count and figure out how the formats work, you will need this knowledge later. !-->
### 1.1 - Dataset
We will train the model on a dataset of 10000 human readable dates and their equivalent, standardized, machine readable dates. Let's run the following cells to load the dataset and print some examples.
```
m = 10000
dataset, human_vocab, machine_vocab, inv_machine_vocab = load_dataset(m)
dataset[:10]
```
You've loaded:
- `dataset`: a list of tuples of (human readable date, machine readable date)
- `human_vocab`: a python dictionary mapping all characters used in the human readable dates to an integer-valued index
- `machine_vocab`: a python dictionary mapping all characters used in machine readable dates to an integer-valued index. These indices are not necessarily consistent with `human_vocab`.
- `inv_machine_vocab`: the inverse dictionary of `machine_vocab`, mapping from indices back to characters.
Let's preprocess the data and map the raw text data into the index values. We will also use Tx=30 (which we assume is the maximum length of the human readable date; if we get a longer input, we would have to truncate it) and Ty=10 (since "YYYY-MM-DD" is 10 characters long).
```
Tx = 30
Ty = 10
X, Y, Xoh, Yoh = preprocess_data(dataset, human_vocab, machine_vocab, Tx, Ty)
print("X.shape:", X.shape)
print("Y.shape:", Y.shape)
print("Xoh.shape:", Xoh.shape)
print("Yoh.shape:", Yoh.shape)
```
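The actual helpers live in `nmt_utils.py`; the sketch below only illustrates the general idea behind this preprocessing — map characters to vocabulary indices, truncate/pad to a fixed length, then one-hot encode. The names `encode` and `one_hot` and the tiny vocabulary are illustrative assumptions, not the real `nmt_utils` API:

```python
import numpy as np

def encode(text, vocab, length):
    """Map each character to its vocab index, truncating/padding to `length`.
    Assumes the vocab contains '<pad>' and '<unk>' entries, as human_vocab does."""
    text = text.lower()[:length]                    # truncate inputs longer than `length`
    idx = [vocab.get(ch, vocab['<unk>']) for ch in text]
    idx += [vocab['<pad>']] * (length - len(idx))   # pad shorter inputs
    return np.array(idx)

def one_hot(idx, vocab_size):
    """One-hot encode an index vector into shape (len(idx), vocab_size)."""
    out = np.zeros((len(idx), vocab_size))
    out[np.arange(len(idx)), idx] = 1
    return out

# Toy vocabulary, standing in for the real human_vocab
vocab = {'<pad>': 0, '<unk>': 1, '3': 2, ' ': 3, 'm': 4, 'a': 5, 'y': 6}
x = encode('3 may', vocab, length=8)    # indices, padded to length 8
xoh = one_hot(x, len(vocab))            # shape (8, 7)
```

This mirrors the shapes described above: `X` holds padded index vectors of length $T_x$, and `Xoh` adds a one-hot dimension of size `len(human_vocab)`.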
You now have:
- `X`: a processed version of the human readable dates in the training set, where each character is replaced by an index mapped to the character via `human_vocab`. Each date is further padded to $T_x$ values with a special character (< pad >). `X.shape = (m, Tx)`
- `Y`: a processed version of the machine readable dates in the training set, where each character is replaced by the index it is mapped to in `machine_vocab`. You should have `Y.shape = (m, Ty)`.
- `Xoh`: one-hot version of `X`, the "1" entry's index is mapped to the character thanks to `human_vocab`. `Xoh.shape = (m, Tx, len(human_vocab))`
- `Yoh`: one-hot version of `Y`, the "1" entry's index is mapped to the character thanks to `machine_vocab`. `Yoh.shape = (m, Ty, len(machine_vocab))`. Here, `len(machine_vocab) = 11` since there are 11 characters ('-' as well as 0-9).
Let's also look at some examples of preprocessed training examples. Feel free to play with `index` in the cell below to navigate the dataset and see how source/target dates are preprocessed.
```
index = 0
print("Source date:", dataset[index][0])
print("Target date:", dataset[index][1])
print()
print("Source after preprocessing (indices):", X[index])
print("Target after preprocessing (indices):", Y[index])
print()
print("Source after preprocessing (one-hot):", Xoh[index])
print("Target after preprocessing (one-hot):", Yoh[index])
```
## 2 - Neural machine translation with attention
If you had to translate a book's paragraph from French to English, you would not read the whole paragraph, then close the book and translate. Even during the translation process, you would read/re-read and focus on the parts of the French paragraph corresponding to the parts of the English you are writing down.
The attention mechanism tells a Neural Machine Translation model which parts of the input it should pay attention to at any step.
### 2.1 - Attention mechanism
In this part, you will implement the attention mechanism presented in the lecture videos. Here is a figure to remind you how the model works. The diagram on the left shows the attention model. The diagram on the right shows what one "Attention" step does to calculate the attention variables $\alpha^{\langle t, t' \rangle}$, which are used to compute the context variable $context^{\langle t \rangle}$ for each timestep in the output ($t=1, \ldots, T_y$).
<table>
<td>
<img src="images/attn_model.png" style="width:500px;height:500px;"> <br>
</td>
<td>
<img src="images/attn_mechanism.png" style="width:500px;height:500px;"> <br>
</td>
</table>
<caption><center> **Figure 1**: Neural machine translation with attention</center></caption>
Here are some properties of the model that you may notice:
- There are two separate LSTMs in this model (see diagram on the left). Because the one at the bottom of the picture is a Bi-directional LSTM and comes *before* the attention mechanism, we will call it *pre-attention* Bi-LSTM. The LSTM at the top of the diagram comes *after* the attention mechanism, so we will call it the *post-attention* LSTM. The pre-attention Bi-LSTM goes through $T_x$ time steps; the post-attention LSTM goes through $T_y$ time steps.
- The post-attention LSTM passes $s^{\langle t \rangle}, c^{\langle t \rangle}$ from one time step to the next. In the lecture videos, we were using only a basic RNN for the post-attention sequence model, so the state was captured by the RNN output activations $s^{\langle t\rangle}$. But since we are using an LSTM here, the LSTM has both the output activation $s^{\langle t\rangle}$ and the hidden cell state $c^{\langle t\rangle}$. However, unlike previous text generation examples (such as Dinosaurus in week 1), in this model the post-attention LSTM at time $t$ will not take the specific generated $y^{\langle t-1 \rangle}$ as input; it only takes $s^{\langle t\rangle}$ and $c^{\langle t\rangle}$ as input. We have designed the model this way, because (unlike language generation where adjacent characters are highly correlated) there isn't as strong a dependency between the previous character and the next character in a YYYY-MM-DD date.
- We use $a^{\langle t \rangle} = [\overrightarrow{a}^{\langle t \rangle}; \overleftarrow{a}^{\langle t \rangle}]$ to represent the concatenation of the activations of both the forward-direction and backward-directions of the pre-attention Bi-LSTM.
- The diagram on the right uses a `RepeatVector` node to copy $s^{\langle t-1 \rangle}$'s value $T_x$ times, and then `Concatenate` to concatenate $s^{\langle t-1 \rangle}$ and $a^{\langle t' \rangle}$ to compute $e^{\langle t, t' \rangle}$, which is then passed through a softmax to compute $\alpha^{\langle t, t' \rangle}$. We'll explain how to use `RepeatVector` and `Concatenate` in Keras below.
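Before wiring this up in Keras, it can help to trace the shapes of a single attention step in plain NumPy. The sketch below is only a shape check — randomly initialized matrices stand in for the trained parameters of the two dense layers:

```python
import numpy as np

def softmax(x, axis=1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - np.max(x, axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

m, Tx, n_a, n_s = 2, 30, 32, 64
rng = np.random.default_rng(0)
a = rng.standard_normal((m, Tx, 2 * n_a))   # pre-attention Bi-LSTM states
s_prev = rng.standard_normal((m, n_s))      # previous post-attention hidden state

# RepeatVector: copy s_prev Tx times -> (m, Tx, n_s)
s_rep = np.repeat(s_prev[:, None, :], Tx, axis=1)
# Concatenate on the last axis -> (m, Tx, 2*n_a + n_s)
concat = np.concatenate([a, s_rep], axis=-1)
# Random stand-ins for the tanh and relu dense layers
W1 = rng.standard_normal((2 * n_a + n_s, 10))
W2 = rng.standard_normal((10, 1))
energies = np.maximum(np.tanh(concat @ W1) @ W2, 0)   # (m, Tx, 1)
# Softmax over the Tx axis yields the attention weights alphas
alphas = softmax(energies, axis=1)                    # sums to 1 over Tx
# Weighted sum of the Bi-LSTM states yields the context vector
context = np.sum(alphas * a, axis=1, keepdims=True)   # (m, 1, 2*n_a)
```

Note that the softmax runs over the $T_x$ axis (axis 1), which is exactly why the Keras implementation below uses a custom `softmax(axis=1)` rather than the default last-axis softmax.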
Let's implement this model. You will start by implementing two functions: `one_step_attention()` and `model()`.
**1) `one_step_attention()`**: At step $t$, given all the hidden states of the Bi-LSTM ($[a^{<1>},a^{<2>}, ..., a^{<T_x>}]$) and the previous hidden state of the second LSTM ($s^{<t-1>}$), `one_step_attention()` will compute the attention weights ($[\alpha^{<t,1>},\alpha^{<t,2>}, ..., \alpha^{<t,T_x>}]$) and output the context vector (see Figure 1 (right) for details):
$$context^{<t>} = \sum_{t' = 1}^{T_x} \alpha^{<t,t'>}a^{<t'>}\tag{1}$$
Note that we are denoting the attention in this notebook $context^{\langle t \rangle}$. In the lecture videos, the context was denoted $c^{\langle t \rangle}$, but here we are calling it $context^{\langle t \rangle}$ to avoid confusion with the (post-attention) LSTM's internal memory cell variable, which is sometimes also denoted $c^{\langle t \rangle}$.
**2) `model()`**: Implements the entire model. It first runs the input through a Bi-LSTM to get back $[a^{<1>},a^{<2>}, ..., a^{<T_x>}]$. Then, it calls `one_step_attention()` $T_y$ times (`for` loop). At each iteration of this loop, it gives the computed context vector $context^{<t>}$ to the second LSTM, and runs the output of the LSTM through a dense layer with softmax activation to generate a prediction $\hat{y}^{<t>}$.
**Exercise**: Implement `one_step_attention()`. The function `model()` will call the layers in `one_step_attention()` $T_y$ times using a for-loop, and it is important that all $T_y$ copies have the same weights. I.e., it should not re-initialize the weights every time. In other words, all $T_y$ steps should have shared weights. Here's how you can implement layers with shareable weights in Keras:
1. Define the layer objects (as global variables, for example).
2. Call these objects when propagating the input.
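The point of defining each layer object once is that every call to it reuses the same weights. A toy sketch of that idea in plain Python (not Keras — `SharedDense` is a made-up class, named so it doesn't clash with Keras's `Dense`):

```python
import numpy as np

class SharedDense:
    """A toy dense layer: its weights are created once, in __init__."""
    def __init__(self, n_in, n_out, seed=0):
        self.W = np.random.default_rng(seed).standard_normal((n_in, n_out))

    def __call__(self, x):
        # Every call reuses self.W -- no re-initialization
        return x @ self.W

densor_demo = SharedDense(4, 1)            # step 1: define the layer object once
x1, x2 = np.ones((1, 4)), 2 * np.ones((1, 4))
y1, y2 = densor_demo(x1), densor_demo(x2)  # step 2: call it at every time step
# Both calls went through the same self.W, so y2 == 2 * y1 exactly
```

Calling the Keras layer objects below inside the $T_y$-step loop works the same way: the weights live in the object, not in the call.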
We have defined the layers you need as global variables. Please run the following cells to create them. Please check the Keras documentation to make sure you understand what these layers are: [RepeatVector()](https://keras.io/layers/core/#repeatvector), [Concatenate()](https://keras.io/layers/merge/#concatenate), [Dense()](https://keras.io/layers/core/#dense), [Activation()](https://keras.io/layers/core/#activation), [Dot()](https://keras.io/layers/merge/#dot).
```
# Defined shared layers as global variables
repeator = RepeatVector(Tx)
concatenator = Concatenate(axis=-1)
densor1 = Dense(10, activation = "tanh")
densor2 = Dense(1, activation = "relu")
activator = Activation(softmax, name='attention_weights') # We are using a custom softmax(axis = 1) loaded in this notebook
dotor = Dot(axes = 1)
```
Now you can use these layers to implement `one_step_attention()`. In order to propagate a Keras tensor object X through one of these layers, use `layer(X)` (or `layer([X,Y])` if it requires multiple inputs). For example, `densor2(X)` will propagate X through the `Dense(1)` layer defined above.
```
# GRADED FUNCTION: one_step_attention
def one_step_attention(a, s_prev):
"""
Performs one step of attention: Outputs a context vector computed as a dot product of the attention weights
"alphas" and the hidden states "a" of the Bi-LSTM.
Arguments:
a -- hidden state output of the Bi-LSTM, numpy-array of shape (m, Tx, 2*n_a)
s_prev -- previous hidden state of the (post-attention) LSTM, numpy-array of shape (m, n_s)
Returns:
context -- context vector, input of the next (post-attetion) LSTM cell
"""
### START CODE HERE ###
# Use repeator to repeat s_prev to be of shape (m, Tx, n_s) so that you can concatenate it with all hidden states "a" (≈ 1 line)
s_prev = repeator(s_prev)
# Use concatenator to concatenate a and s_prev on the last axis (≈ 1 line)
concat = concatenator([a, s_prev])
# Use densor1 to propagate concat through a small fully-connected neural network to compute the "intermediate energies" variable e. (≈1 lines)
e = densor1(concat)
# Use densor2 to propagate e through a small fully-connected neural network to compute the "energies" variable energies. (≈1 lines)
energies = densor2(e)
# Use "activator" on "energies" to compute the attention weights "alphas" (≈ 1 line)
alphas = activator(energies)
# Use dotor together with "alphas" and "a" to compute the context vector to be given to the next (post-attention) LSTM-cell (≈ 1 line)
context = dotor([alphas,a])
### END CODE HERE ###
return context
```
You will be able to check the expected output of `one_step_attention()` after you've coded the `model()` function.
**Exercise**: Implement `model()` as explained in Figure 1 and the text above. Again, we have defined global layers that will share weights to be used in `model()`.
```
n_a = 32
n_s = 64
post_activation_LSTM_cell = LSTM(n_s, return_state = True)
output_layer = Dense(len(machine_vocab), activation=softmax)
```
Now you can use these layers $T_y$ times in a `for` loop to generate the outputs, and their parameters will not be reinitialized. You will have to carry out the following steps:
1. Propagate the input into a [Bidirectional](https://keras.io/layers/wrappers/#bidirectional) [LSTM](https://keras.io/layers/recurrent/#lstm)
2. Iterate for $t = 0, \dots, T_y-1$:
1. Call `one_step_attention()` on $[\alpha^{<t,1>},\alpha^{<t,2>}, ..., \alpha^{<t,T_x>}]$ and $s^{<t-1>}$ to get the context vector $context^{<t>}$.
2. Give $context^{<t>}$ to the post-attention LSTM cell. Remember to pass in the previous hidden-state $s^{\langle t-1\rangle}$ and cell-state $c^{\langle t-1\rangle}$ of this LSTM using `initial_state= [previous hidden state, previous cell state]`. Get back the new hidden state $s^{<t>}$ and the new cell state $c^{<t>}$.
3. Apply a softmax layer to $s^{<t>}$, get the output.
4. Save the output by adding it to the list of outputs.
3. Create your Keras model instance; it should have three inputs ("inputs", $s^{<0>}$ and $c^{<0>}$) and output the list of "outputs".
```
# GRADED FUNCTION: model
def model(Tx, Ty, n_a, n_s, human_vocab_size, machine_vocab_size):
"""
Arguments:
Tx -- length of the input sequence
Ty -- length of the output sequence
n_a -- hidden state size of the Bi-LSTM
n_s -- hidden state size of the post-attention LSTM
human_vocab_size -- size of the python dictionary "human_vocab"
machine_vocab_size -- size of the python dictionary "machine_vocab"
Returns:
model -- Keras model instance
"""
# Define the inputs of your model with a shape (Tx,)
# Define s0 and c0, initial hidden state for the decoder LSTM of shape (n_s,)
X = Input(shape=(Tx, human_vocab_size))
s0 = Input(shape=(n_s,), name='s0')
c0 = Input(shape=(n_s,), name='c0')
s = s0
c = c0
# Initialize empty list of outputs
outputs = []
### START CODE HERE ###
# Step 1: Define your pre-attention Bi-LSTM. Remember to use return_sequences=True. (≈ 1 line)
a = Bidirectional(LSTM(n_a, return_sequences=True))(X)
# Step 2: Iterate for Ty steps
for t in range(Ty):
# Step 2.A: Perform one step of the attention mechanism to get back the context vector at step t (≈ 1 line)
context = one_step_attention(a, s)
# Step 2.B: Apply the post-attention LSTM cell to the "context" vector.
# Don't forget to pass: initial_state = [hidden state, cell state] (≈ 1 line)
s, _, c = post_activation_LSTM_cell(context, initial_state=[s, c])
# Step 2.C: Apply Dense layer to the hidden state output of the post-attention LSTM (≈ 1 line)
out = output_layer(s)
# Step 2.D: Append "out" to the "outputs" list (≈ 1 line)
outputs.append(out)
# Step 3: Create model instance taking three inputs and returning the list of outputs. (≈ 1 line)
model = Model([X, s0, c0], outputs)
### END CODE HERE ###
return model
```
Run the following cell to create your model.
```
model = model(Tx, Ty, n_a, n_s, len(human_vocab), len(machine_vocab))
```
Let's get a summary of the model to check if it matches the expected output.
```
model.summary()
```
**Expected Output**:
Here is the summary you should see
<table>
<tr>
<td>
**Total params:**
</td>
<td>
52,960
</td>
</tr>
<tr>
<td>
**Trainable params:**
</td>
<td>
52,960
</td>
</tr>
<tr>
<td>
**Non-trainable params:**
</td>
<td>
0
</td>
</tr>
<tr>
<td>
**bidirectional_1's output shape**
</td>
<td>
(None, 30, 64)
</td>
</tr>
<tr>
<td>
**repeat_vector_1's output shape**
</td>
<td>
(None, 30, 64)
</td>
</tr>
<tr>
<td>
**concatenate_1's output shape**
</td>
<td>
(None, 30, 128)
</td>
</tr>
<tr>
<td>
**attention_weights's output shape**
</td>
<td>
(None, 30, 1)
</td>
</tr>
<tr>
<td>
**dot_1's output shape**
</td>
<td>
(None, 1, 64)
</td>
</tr>
<tr>
<td>
**dense_3's output shape**
</td>
<td>
(None, 11)
</td>
</tr>
</table>
As usual, after creating your model in Keras, you need to compile it and define what loss, optimizer and metrics you want to use. Compile your model using `categorical_crossentropy` loss, a custom [Adam](https://keras.io/optimizers/#adam) [optimizer](https://keras.io/optimizers/#usage-of-optimizers) (`learning rate = 0.005`, $\beta_1 = 0.9$, $\beta_2 = 0.999$, `decay = 0.01`) and `['accuracy']` metrics:
```
### START CODE HERE ### (≈2 lines)
opt = Adam(lr=0.005, beta_1=0.9, beta_2=0.999, decay=0.01)
model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
### END CODE HERE ###
```
The last step is to define all your inputs and outputs to fit the model:
- You already have X of shape $(m = 10000, T_x = 30)$ containing the training examples.
- You need to create `s0` and `c0` to initialize your `post_activation_LSTM_cell` with 0s.
- Given the `model()` you coded, you need `outputs` to be a list of 10 elements, each of shape (m, len(machine_vocab)), so that `outputs[t][i]` is the true label (one-hot character) of the $t^{th}$ output character for the $i^{th}$ training example (`X[i]`).
```
s0 = np.zeros((m, n_s))
c0 = np.zeros((m, n_s))
outputs = list(Yoh.swapaxes(0,1))
```
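The `swapaxes` call is what turns the single array `Yoh` of shape $(m, T_y, 11)$ into the list of $T_y$ arrays of shape $(m, 11)$ that Keras expects for a model with $T_y$ outputs. A small sketch of that reshaping on a zero-filled stand-in for `Yoh`:

```python
import numpy as np

m, Ty, vocab_size = 4, 10, 11
Yoh_demo = np.zeros((m, Ty, vocab_size))   # stand-in for the real Yoh
# swapaxes(0, 1) -> (Ty, m, vocab_size); list() then splits along the first axis
outputs_demo = list(Yoh_demo.swapaxes(0, 1))
```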
Let's now fit the model and run it for one epoch.
```
model.fit([Xoh, s0, c0], outputs, epochs=1, batch_size=100)
```
While training you can see the loss as well as the accuracy on each of the 10 positions of the output. The table below gives you an example of what the accuracies could be if the batch had 2 examples:
<img src="images/table.png" style="width:700px;height:200px;"> <br>
<caption><center>Thus, `dense_2_acc_8: 0.89` means that you are predicting the 7th character of the output correctly 89% of the time in the current batch of data. </center></caption>
We have run this model for longer, and saved the weights. Run the next cell to load our weights. (By training a model for several minutes, you should be able to obtain a model of similar accuracy, but loading our model will save you time.)
```
model.load_weights('models/model.h5')
```
You can now see the results on new examples.
```
EXAMPLES = ['3 May 1979', '5 April 09', '21th of August 2016', 'Tue 10 Jul 2007', 'Saturday May 9 2018', 'March 3 2001', 'March 3rd 2001', '1 March 2001']
for example in EXAMPLES:
source = string_to_int(example, Tx, human_vocab)
source = np.array(list(map(lambda x: to_categorical(x, num_classes=len(human_vocab)), source))).swapaxes(0,1)
prediction = model.predict([source, s0, c0])
prediction = np.argmax(prediction, axis = -1)
output = [inv_machine_vocab[int(i)] for i in prediction]
print("source:", example)
print("output:", ''.join(output))
```
You can also change these examples to test with your own examples. The next part will give you a better sense of what the attention mechanism is doing--i.e., what part of the input the network is paying attention to when generating a particular output character.
## 3 - Visualizing Attention (Optional / Ungraded)
Since the problem has a fixed output length of 10, it is also possible to carry out this task using 10 different softmax units to generate the 10 characters of the output. But one advantage of the attention model is that each part of the output (say the month) knows it needs to depend only on a small part of the input (the characters in the input giving the month). We can visualize what part of the output is looking at what part of the input.
Consider the task of translating "Saturday 9 May 2018" to "2018-05-09". If we visualize the computed $\alpha^{\langle t, t' \rangle}$ we get this:
<img src="images/date_attention.png" style="width:600px;height:300px;"> <br>
<caption><center> **Figure 8**: Full Attention Map</center></caption>
Notice how the output ignores the "Saturday" portion of the input. None of the output timesteps are paying much attention to that portion of the input. We see also that 9 has been translated as 09 and May has been correctly translated into 05, with the output paying attention to the parts of the input it needs in order to make the translation. The year mostly requires it to pay attention to the input's "18" in order to generate "2018."
### 3.1 - Getting the activations from the network
Let's now visualize the attention values in your network. We'll propagate an example through the network, then visualize the values of $\alpha^{\langle t, t' \rangle}$.
To figure out where the attention values are located, let's start by printing a summary of the model.
```
model.summary()
```
Navigate through the output of `model.summary()` above. You can see that the layer named `attention_weights` outputs the `alphas` of shape (m, 30, 1) before `dot_2` computes the context vector for every time step $t = 0, \ldots, T_y-1$. Let's get the activations from this layer.
The function `attention_map()` pulls out the attention values from your model and plots them.
```
attention_map = plot_attention_map(model, human_vocab, inv_machine_vocab, "Tuesday 09 Oct 1993", num = 7, n_s = 64)
```
On the generated plot you can observe the values of the attention weights for each character of the predicted output. Examine this plot and check that where the network is paying attention makes sense to you.
In the date translation application, you will observe that most of the time attention helps predict the year, and has little impact on predicting the day/month.
### Congratulations!
You have come to the end of this assignment!
<font color='blue'> **Here's what you should remember from this notebook**:
- Machine translation models can be used to map from one sequence to another. They are useful not just for translating human languages (like French->English) but also for tasks like date format translation.
- An attention mechanism allows a network to focus on the most relevant parts of the input when producing a specific part of the output.
- A network using an attention mechanism can translate from inputs of length $T_x$ to outputs of length $T_y$, where $T_x$ and $T_y$ can be different.
- You can visualize attention weights $\alpha^{\langle t,t' \rangle}$ to see what the network is paying attention to while generating each output.
Congratulations on finishing this assignment! You are now able to implement an attention model and use it to learn complex mappings from one sequence to another.