# <font color=green> PYTHON FOR DATA SCIENCE - PANDAS
---
# <font color=green> 1. INTRODUCTION TO PYTHON
---
# 1.1 Introduction
> Python is a high-level programming language that supports multiple programming paradigms. It is an *open source* project and, since its creation in 1991, has become one of the most popular interpreted programming languages.
>
> In recent years, Python has built an active scientific computing and data analysis community and has stood out as one of the most relevant languages for data science and machine learning, both in academia and in industry.
# 1.2 Installation and development environment
### Local installation
### https://www.python.org/downloads/
### or
### https://www.anaconda.com/distribution/
### Google Colaboratory
### https://colab.research.google.com
### Checking the version
```
!python -V
```
# 1.3 Working with data
```
import pandas as pd
pd.set_option('display.max_rows', 10)
pd.set_option('display.max_columns', 10)
dataset = pd.read_csv('db.csv', sep = ';')
dataset
dataset.dtypes
dataset[['Quilometragem', 'Valor']].describe()
dataset.info()
```
# <font color=green> 2. WORKING WITH TUPLES
---
# 2.1 Creating tuples
Tuples are immutable sequences used to store collections of items, usually heterogeneous. They can be constructed in several ways:
```
- Using a pair of parentheses: ( )
- Using a trailing comma: x,
- Using a pair of parentheses with comma-separated items: ( x, y, z )
- Using: tuple() or tuple(iterable)
```
```
()
1,2,3
nome = "Teste"
valor = 1
(nome,valor)
nomes_carros = tuple(['Jetta Variant', 'Passat', 'Crossfox', 'DS5'])
nomes_carros
type(nomes_carros)
```
# 2.2 Selections in tuples
```
nomes_carros = tuple(['Jetta Variant', 'Passat', 'Crossfox', 'DS5'])
nomes_carros
nomes_carros[0]
nomes_carros[1]
nomes_carros[-1]
nomes_carros[1:3]
nomes_carros = ('Jetta Variant', 'Passat', 'Crossfox', 'DS5', ('Fusca', 'Gol', 'C4'))
nomes_carros
nomes_carros[-1]
nomes_carros[-1][1]
```
# 2.3 Iterating over tuples
```
nomes_carros = ('Jetta Variant', 'Passat', 'Crossfox', 'DS5')
nomes_carros
for item in nomes_carros:
    print(item)
```
### Tuple unpacking
```
nomes_carros = ('Jetta Variant', 'Passat', 'Crossfox', 'DS5')
nomes_carros
carro_1, carro_2, carro_3, carro_4 = nomes_carros
carro_1
carro_2
carro_3
carro_4
_, A, _, B = nomes_carros
A
B
_, C, *_ = nomes_carros
C
```
## *zip()*
https://docs.python.org/3.6/library/functions.html#zip
```
carros = ['Jetta Variant', 'Passat', 'Crossfox', 'DS5']
carros
valores = [88078.64, 106161.94, 72832.16, 124549.07]
valores
zip(carros, valores)
list(zip(carros, valores))
for carro, valor in zip(carros, valores):
    print(carro, valor)
```
# <font color=green> 3. WORKING WITH DICTIONARIES
---
# 3.1 Creating dictionaries
Lists are sequential collections: their items are ordered and use integer indices to access values.
Dictionaries are a somewhat different kind of collection. They are data structures that represent a mapping. Mappings are collections of associations between pairs of values, where the first element of each pair is known as the key (*key*) and the second as the value (*value*).
```
dicionario = {key_1: value_1, key_2: value_2, ..., key_n: value_n}
```
https://docs.python.org/3.6/library/stdtypes.html#typesmapping
```
carros = ['Jetta Variant', 'Passat', 'Crossfox']
carros
valores = [88078.64, 106161.94, 72832.16]
valores
carros.index("Passat")
valores[carros.index("Passat")]
valores_carros = {"Jetta Variant": 88078.64, "Passat": 106161.94, "Crossfox": 72832.16}
valores_carros
type(valores_carros)
```
### Creating dictionaries with *zip()*
```
list(zip(carros, valores))
valores_carros = dict(zip(carros, valores))
valores_carros
```
# 3.2 Operations with dictionaries
```
valores_carros = dict(zip(carros, valores))
valores_carros
```
## *dict[ key ]*
Returns the value associated with the key (*key*) in the dictionary.
```
valores_carros["Passat"]
```
## *key in dict*
Returns **True** if the key (*key*) is found in the dictionary.
```
from termcolor import colored

is_it = colored('found', 'green') if "Passat" in valores_carros else colored('not found', 'red')
print(f'Is it there?\n A: {is_it}')
is_it = colored('found', 'green') if "Fusqueta" in valores_carros else colored('not found', 'red')
print(f'Is it there?\n A: {is_it}')
is_it = colored('not found', 'red') if "Passat" not in valores_carros else colored('found', 'green')
print(f'Is it there?\n A: {is_it}')
```
## *len(dict)*
Returns the number of items in the dictionary.
```
len(valores_carros)
```
## *dict[ key ] = value*
Adds an item to the dictionary (or replaces the value if the key already exists).
```
valores_carros["DS5"] = 124549.07
valores_carros
```
## *del dict[ key ]*
Removes the item with key (*key*) from the dictionary.
```
del valores_carros["DS5"]
valores_carros
```
# 3.3 Dictionary methods
## *dict.update()*
Updates the dictionary with the given key-value pairs.
```
valores_carros
valores_carros.update({'DS5': 124549.07})
valores_carros
valores_carros.update({'DS5': 124549.10, 'Fusca': 75000})
valores_carros
```
## *dict.copy()*
Creates a shallow copy of the dictionary.
```
copia = valores_carros.copy()
copia
del copia['Fusca']
copia
valores_carros
```
## *dict.pop(key[, default ])*
If the key is found in the dictionary, the item is removed and its value is returned. Otherwise, the value given as *default* is returned. If *default* is not provided and the key is not found, a KeyError is raised.
```
copia
copia.pop('Passat')
copia
# copia.pop('Passat')
copia.pop('Passat', 'Chave não encontrada')
copia.pop('DS5', 'Chave não encontrada')
copia
```
## *dict.clear()*
Removes all items from the dictionary.
```
copia.clear()
copia
```
# 3.4 Iterating over dictionaries
## *dict.keys()*
Returns a view object containing the dictionary's keys.
```
valores_carros.keys()
for key in valores_carros.keys():
    print(valores_carros[key])
```
## *dict.values()*
Returns a view object containing all of the dictionary's values.
```
valores_carros.values()
```
## *dict.items()*
Returns a view object containing a (key, value) tuple for each item in the dictionary.
```
valores_carros.items()
for item in valores_carros.items():
    print(item)
for key, value in valores_carros.items():
    print(key, value)
for key, value in valores_carros.items():
    if value >= 100000:
        print(key, value)
dados = {
'Crossfox': {'valor': 72000, 'ano': 2005},
'DS5': {'valor': 125000, 'ano': 2015},
'Fusca': {'valor': 150000, 'ano': 1976},
'Jetta': {'valor': 88000, 'ano': 2010},
'Passat': {'valor': 106000, 'ano': 1998}
}
for item in dados.items():
    if item[1]['ano'] >= 2000:
        print(item[0])
```
# <font color=green> 4. FUNCTIONS AND PACKAGES
---
Functions are reusable units of code that perform a specific task; they may take input and may return a result.
# 4.1 Built-in functions
The Python language has several built-in functions that are always available. We have already used some of them in this course: type(), print(), zip(), len(), set(), etc.
https://docs.python.org/3.6/library/functions.html
```
dados = {'Jetta Variant': 88078.64, 'Passat': 106161.94, 'Crossfox': 72832.16}
dados
valores = []
for valor in dados.values():
    valores.append(valor)
valores
soma = 0
for valor in dados.values():
    soma += valor
soma
list(dados.values())
sum(dados.values())
help(print)
print?
```
# 4.2 Defining functions with and without parameters
### Functions without parameters
#### Standard form
```
def <name>():
    <statements>
```
```
def mean():
    valor = (1+2+3)/3
    return valor
mean()
```
### Functions with parameters
#### Standard form
```
def <name>(<param_1>, <param_2>, ..., <param_n>):
    <statements>
```
```
def mean(lista):
    mean = sum(lista)/len(lista)
    return mean
media = mean([1,2,3])
print(f'A média é: {media}')
media = mean([65665656,96565454,4565545])
print(f'A média é: {media}')
dados = {
'Crossfox': {'km': 35000, 'ano': 2005},
'DS5': {'km': 17000, 'ano': 2015},
'Fusca': {'km': 130000, 'ano': 1979},
'Jetta': {'km': 56000, 'ano': 2011},
'Passat': {'km': 62000, 'ano': 1999}
}
def km_media(dataset, ano_atual):
    for item in dataset.items():
        result = item[1]['km'] / (ano_atual - item[1]['ano'])
        print(result)
km_media(dados,2019)
```
# 4.3 Defining functions that return values
### Functions that return a single value
#### Standard form
```
def <name>(<param_1>, <param_2>, ..., <param_n>):
    <statements>
    return <result>
```
```
def mean(lista):
    mean = sum(lista)/len(lista)
    return mean
result = mean([1,2,3])
result
```
### Functions that return more than one value
#### Standard form
```
def <name>(<param_1>, <param_2>, ..., <param_n>):
    <statements>
    return (<result_1>, <result_2>, ..., <result_n>)
```
```
def mean(lista):
    mean = sum(lista)/len(lista)
    return (mean, len(lista))
result = mean([1,2,3])
result
result, length = mean([1,2,3])
print(f'{result}, {length}')
dados = {
'Crossfox': {'km': 35000, 'ano': 2005},
'DS5': {'km': 17000, 'ano': 2015},
'Fusca': {'km': 130000, 'ano': 1979},
'Jetta': {'km': 56000, 'ano': 2011},
'Passat': {'km': 62000, 'ano': 1999}
}
def km_media(dataset, ano_atual):
    result = {}
    for item in dataset.items():
        media = item[1]['km'] / (ano_atual - item[1]['ano'])
        item[1].update({'km_media': media})
        result.update({item[0]: item[1]})
    return result
km_media(dados, 2019)
```
# <font color=green> 5. PANDAS BASICS
---
**version: 0.25.2**
Pandas is a high-level data manipulation tool built on top of the NumPy package. It provides data structures that are very convenient for manipulating data, which is why it is widely used by data scientists.
## Data Structures
### Series
A Series is a one-dimensional labeled array capable of holding any data type. The row labels are called the **index**. The basic way to create a Series is:
```
s = pd.Series(dados, index = index)
```
The *data* argument can be a dictionary, a list, a NumPy array, or a scalar constant.
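A quick sketch of those alternatives (the labels and prices below are illustrative values, not taken from the course dataset):
```
import pandas as pd

# From a list: a default integer index (0..n-1) is created.
s_list = pd.Series([88078.64, 106161.94, 72832.16])

# From a dictionary: the keys become the index labels.
s_dict = pd.Series({'Jetta Variant': 88078.64, 'Passat': 106161.94})

# From a scalar constant: the value is repeated for every index label.
s_scalar = pd.Series(0.0, index=['Jetta Variant', 'Passat'])

s_dict['Passat']
```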
### DataFrames
A DataFrame is a two-dimensional tabular data structure with labeled rows and columns. Like a Series, a DataFrame can hold any type of data.
```
df = pd.DataFrame(dados, index = index, columns = columns)
```
The *data* argument can be a dictionary, a list, a NumPy array, a Series, or another DataFrame.
**Documentation:** https://pandas.pydata.org/pandas-docs/version/0.25/
# 5.1 Data structures
```
import pandas as pd
```
### Creating a Series from a list
```
carros = ['Jetta Variant', 'Passat', 'Crossfox']
carros
pd.Series(carros)
```
### Creating a DataFrame from a list of dictionaries
```
dados = [
{'Nome': 'Jetta Variant', 'Motor': 'Motor 4.0 Turbo', 'Ano': 2003, 'Quilometragem': 44410.0, 'Zero_km': False, 'Valor': 88078.64},
{'Nome': 'Passat', 'Motor': 'Motor Diesel', 'Ano': 1991, 'Quilometragem': 5712.0, 'Zero_km': False, 'Valor': 106161.94},
{'Nome': 'Crossfox', 'Motor': 'Motor Diesel V8', 'Ano': 1990, 'Quilometragem': 37123.0, 'Zero_km': False, 'Valor': 72832.16}
]
dataset = pd.DataFrame(dados)
dataset
dataset[['Motor','Valor','Ano', 'Nome', 'Quilometragem', 'Zero_km']]
```
### Creating a DataFrame from a dictionary
```
dados = {
'Nome': ['Jetta Variant', 'Passat', 'Crossfox'],
'Motor': ['Motor 4.0 Turbo', 'Motor Diesel', 'Motor Diesel V8'],
'Ano': [2003, 1991, 1990],
'Quilometragem': [44410.0, 5712.0, 37123.0],
'Zero_km': [False, False, False],
'Valor': [88078.64, 106161.94, 72832.16]
}
dataset = pd.DataFrame(dados)
dataset
```
### Creating a DataFrame from an external file
```
dataset = pd.read_csv('db.csv', sep=';', index_col =0)
dataset
dados = {
'Crossfox': {'km': 35000, 'ano': 2005},
'DS5': {'km': 17000, 'ano': 2015},
'Fusca': {'km': 130000, 'ano': 1979},
'Jetta': {'km': 56000, 'ano': 2011},
'Passat': {'km': 62000, 'ano': 1999}
}
def km_media(dataset, ano_atual):
    result = {}
    for item in dataset.items():
        media = item[1]['km'] / (ano_atual - item[1]['ano'])
        item[1].update({'km_media': media})
        result.update({item[0]: item[1]})
    return result
km_media(dados, 2019)
import pandas as pd
carros = pd.DataFrame(km_media(dados, 2019)).T
carros
```
# 5.2 Selections with DataFrames
### Selecting columns
```
dataset.head()
dataset['Valor']
type(dataset['Valor'])
dataset[['Valor']]
type(dataset[['Valor']])
```
### Selecting rows - [ i : j ]
<font color=red>**Note:**</font> Indexing is zero-based, and in slices the row at index i is **included** while the row at index j is **excluded** from the result.
```
dataset[0:3]
```
### Using .loc for selections
<font color=red>**Note:**</font> Selects a group of rows and columns by labels or a boolean array.
```
dataset.loc[['Passat', 'DS5']]
dataset.loc[['Passat', 'DS5'], ['Motor', 'Ano']]
dataset.loc[:, ['Motor', 'Ano']]
```
### Using .iloc for selections
<font color=red>**Note:**</font> Selects by integer position, that is, based on where the data sits rather than on its labels.
```
dataset.head()
dataset.iloc[1]
dataset.iloc[[1]]
dataset.iloc[1:4]
dataset.iloc[1:4, [0, 5, 2]]
dataset.iloc[[1,42,22], [0, 5, 2]]
dataset.iloc[:, [0, 5, 2]]
import pandas as pd
dados = {
'Nome': ['Jetta', 'Passat', 'Crossfox', 'DS5', 'Fusca'],
'Motor': ['Motor 4.0 Turbo', 'Motor Diesel', 'Motor Diesel V8', 'Motor 2.0', 'Motor 1.6'],
'Ano': [2019, 2003, 1991, 2019, 1990],
'Quilometragem': [0.0, 5712.0, 37123.0, 0.0, 120000.0],
'Zero_km': [True, False, False, True, False],
'Valor': [88000.0, 106000.0, 72000.0, 89000.0, 32000.0]
}
dataset = pd.DataFrame(dados)
dataset[['Nome', 'Ano', 'Quilometragem', 'Valor']][1:3]
import pandas as pd
dados = {
'Motor': ['Motor 4.0 Turbo', 'Motor Diesel', 'Motor Diesel V8', 'Motor 2.0', 'Motor 1.6'],
'Ano': [2019, 2003, 1991, 2019, 1990],
'Quilometragem': [0.0, 5712.0, 37123.0, 0.0, 120000.0],
'Zero_km': [True, False, False, True, False],
'Valor': [88000.0, 106000.0, 72000.0, 89000.0, 32000.0]
}
dataset = pd.DataFrame(dados, index = ['Jetta', 'Passat', 'Crossfox', 'DS5', 'Fusca'])
dataset.loc[['Passat', 'DS5'], ['Motor', 'Valor']]
dataset.iloc[[1,3], [0,-1]]
```
# 5.3 Queries with DataFrames
```
dataset.head()
dataset.Motor
select = dataset.Motor == 'Motor Diesel'
type(select)
dataset[select]
dataset[(select) & (dataset.Zero_km == True)]
(select) & (dataset.Zero_km == True)
```
### Using the query method
```
dataset.query('Motor == "Motor Diesel" and Zero_km == True')
import pandas as pd
dados = {
'Motor': ['Motor 4.0 Turbo', 'Motor Diesel', 'Motor Diesel V8', 'Motor Diesel', 'Motor 1.6'],
'Ano': [2019, 2003, 1991, 2019, 1990],
'Quilometragem': [0.0, 5712.0, 37123.0, 0.0, 120000.0],
'Zero_km': [True, False, False, True, False],
'Valor': [88000.0, 106000.0, 72000.0, 89000.0, 32000.0]
}
dataset = pd.DataFrame(dados, index = ['Jetta', 'Passat', 'Crossfox', 'DS5', 'Fusca'])
dataset.query('Motor == "Motor Diesel" or Zero_km == True')
dataset.query('Motor == "Motor Diesel" | Zero_km == True')
```
# 5.4 Iterating over DataFrames
```
dataset.head()
for index, row in dataset.iterrows():
    if 2019 - row['Ano'] != 0:
        dataset.loc[index, "km_media"] = row['Quilometragem'] / (2019 - row['Ano'])
    else:
        dataset.loc[index, "km_media"] = 0
dataset
```
# 5.5 Data cleaning
```
dataset.head()
dataset.info()
dataset.Quilometragem.isna()
dataset[dataset.Quilometragem.isna()]
dataset.fillna(0, inplace = True)
dataset
dataset.query('Zero_km == True')
dataset = pd.read_csv('db.csv', sep=';')
dataset.dropna(subset = ['Quilometragem'], inplace = True)
dataset
```
<a href="https://colab.research.google.com/github/google/applied-machine-learning-intensive/blob/master/content/03_regression/04_polynomial_regression/colab.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
#### Copyright 2020 Google LLC.
```
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Polynomial Regression and Overfitting
So far in this course, we have dealt exclusively with linear models. These have all been "straight-line" models where we attempt to draw a straight line that fits a regression.
Today we will start building curved-lined models based on [polynomial equations](https://en.wikipedia.org/wiki/Polynomial).
## Generating Sample Data
Let's start by generating some data based on a second degree polynomial.
```
import numpy as np
import matplotlib.pyplot as plt
num_items = 100
np.random.seed(seed=420)
X = np.random.randn(num_items, 1)
# These coefficients are chosen arbitrarily.
y = 0.6*(X**2) - 0.4*X + 1.3
plt.plot(X, y, 'b.')
plt.show()
```
Let's add some randomness to create a more realistic dataset and re-plot the randomized data points and the fit line.
```
import numpy as np
import matplotlib.pyplot as plt
num_items = 100
np.random.seed(seed=420)
X = np.random.randn(num_items, 1)
# Create some randomness.
randomness = np.random.randn(num_items, 1) / 2
# This is the same equation as the plot above, with added randomness.
y = 0.6*(X**2) - 0.4*X + 1.3 + randomness
X_line = np.linspace(X.min(), X.max(), num=num_items)
y_line = 0.6*(X_line**2) - 0.4*X_line + 1.3
plt.plot(X, y, 'b.')
plt.plot(X_line, y_line, 'r-')
plt.show()
```
That looks much better! Now we can see that a 2-degree polynomial function fits this data reasonably well.
## Polynomial Fitting
We can now see a pretty obvious 2-degree polynomial that fits the scatter plot.
Scikit-learn offers a `PolynomialFeatures` class that handles polynomial combinations for a linear model. In this case, we know that a 2-degree polynomial is a good fit since the data was generated from a polynomial curve. Let's see if the model works.
We begin by creating a `PolynomialFeatures` instance of degree 2.
```
from sklearn.preprocessing import PolynomialFeatures
pf = PolynomialFeatures(degree=2, include_bias=False)
X_poly = pf.fit_transform(X)
X.shape, X_poly.shape
```
You might be wondering what the `include_bias` parameter is. By default, it is `True`, in which case it forces the first exponent to be 0.
This adds a constant bias term to the equation. When we ask for no bias we start our exponents at 1 instead of 0.
This preprocessor generates a new feature matrix consisting of all polynomial combinations of the features. Notice that the input shape of `(100, 1)` becomes `(100, 2)` after transformation.
In this simple case, we doubled the number of features, since we asked for a 2-degree polynomial and had one input feature. The number of generated features grows combinatorially as the number of input features and the polynomial degree increase.
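As a rough illustration of this growth (a sketch, assuming scikit-learn is installed), we can count the output columns for three input features at a few degrees:
```
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

X_three = np.zeros((10, 3))  # ten samples, three input features

for degree in (2, 3, 4):
    pf_demo = PolynomialFeatures(degree=degree, include_bias=False)
    n_features = pf_demo.fit_transform(X_three).shape[1]
    print(degree, n_features)  # 2 -> 9, 3 -> 19, 4 -> 34
```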
## Model Fitting
We can now fit the model by passing our polynomial preprocessing data to the linear regressor.
How close did the intercept and coefficient match the values in the function we used to generate our data?
```
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(X_poly, y)
lin_reg.intercept_, lin_reg.coef_
```
## Visualization
We can plot our fitted line against the equation we used to generate the data. The fitted line is green, and the actual curve is red.
```
np.random.seed(seed=420)
# Create 100 even-spaced x-values.
X_line_fitted = np.linspace(X.min(), X.max(), num=100)
# Start our equation with the intercept.
y_line_fitted = lin_reg.intercept_
# For each exponent, raise the X value to that exponent and multiply it by the
# appropriate coefficient
for i in range(len(pf.powers_)):
    exponent = pf.powers_[i][0]
    y_line_fitted = y_line_fitted + \
        lin_reg.coef_[0][i] * (X_line_fitted**exponent)
plt.plot(X_line_fitted, y_line_fitted, 'g-')
plt.plot(X_line, y_line, 'r-')
plt.plot(X, y, 'b.')
plt.show()
```
# Overfitting
When using polynomial regression, it can be easy to *overfit* the data so that it performs well on the training data but doesn't perform well in the real world.
To understand overfitting we will create a fake dataset generated off of a linear equation, but we will use a polynomial regression as the model.
```
np.random.seed(seed=420)
# Create 50 points from a linear dataset with randomness.
num_items = 50
X = 6 * np.random.rand(num_items, 1)
y = X + 2 + np.random.randn(num_items, 1)
X_line = np.array([X.min(), X.max()])
y_line = X_line + 2
plt.plot(X_line, y_line, 'r-')
plt.plot(X, y, 'b.')
plt.show()
```
Let's now create a 10 degree polynomial to fit the linear data and fit the model.
```
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
np.random.seed(seed=420)
poly_features = PolynomialFeatures(degree=10, include_bias=False)
X_poly = poly_features.fit_transform(X)
regression = LinearRegression()
regression.fit(X_poly, y)
```
## Visualization
Let's draw the polynomial line that we fit to the data. To draw the line, we need to execute the 10 degree polynomial equation.
$$
y = k_0 + k_1x^1 + k_2x^2 + k_3x^3 + ... + k_9x^9 + k_{10}x^{10}
$$
Coding the above equation by hand is tedious and error-prone. It also makes it difficult to change the degree of the polynomial we are fitting.
Let's see if there is a way to write the code more dynamically, using the `PolynomialFeatures` and `LinearRegression` functions.
The `PolynomialFeatures` class provides us with a list of exponents that we can use for each portion of the polynomial equation.
```
poly_features.powers_
```
The `LinearRegression` class provides us with a list of coefficients that correspond to the powers provided by `PolynomialFeatures`.
```
regression.coef_
```
It also provides an intercept.
```
regression.intercept_
```
Having this information, we can take a set of $X$ values (in the code below we use 100), then run our equation on those values.
```
np.random.seed(seed=420)
# Create 100 even-spaced x-values.
X_line_fitted = np.linspace(X.min(), X.max(), num=100)
# Start our equation with the intercept.
y_line_fitted = regression.intercept_
# For each exponent, raise the X value to that exponent and multiply it by the
# appropriate coefficient
for i in range(len(poly_features.powers_)):
    exponent = poly_features.powers_[i][0]
    y_line_fitted = y_line_fitted + \
        regression.coef_[0][i] * (X_line_fitted**exponent)
```
We can now plot the data points, the actual line used to generate them, and our fitted model.
```
plt.plot(X_line, y_line, 'r-')
plt.plot(X_line_fitted, y_line_fitted, 'g-')
plt.plot(X, y, 'b.')
plt.show()
```
Notice how our line is very wavy, and it spikes up and down to pass through specific data points. (This is especially true for the lowest and highest $x$-values, where the curve passes through them exactly.) This is a sign of overfitting. The line fits the training data reasonably well, but it may not be as useful on new data.
## Using a Simpler Model
The most obvious way to prevent overfitting in this example is to simply reduce the degree of the polynomial.
The code below uses a 2-degree polynomial and seems to fit the data much better. A linear model would work well too.
```
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
poly_features = PolynomialFeatures(degree=2, include_bias=False)
X_poly = poly_features.fit_transform(X)
regression = LinearRegression()
regression.fit(X_poly, y)
X_line_fitted = np.linspace(X.min(), X.max(), num=100)
y_line_fitted = regression.intercept_
for i in range(len(poly_features.powers_)):
    exponent = poly_features.powers_[i][0]
    y_line_fitted = y_line_fitted + \
        regression.coef_[0][i] * (X_line_fitted**exponent)
plt.plot(X_line, y_line, 'r-')
plt.plot(X_line_fitted, y_line_fitted, 'g-')
plt.plot(X, y, 'b.')
plt.show()
```
## Lasso Regularization
It is not always so clear what the "simpler" model choice is. Often, you will have to rely on regularization methods. A **regularization** is a method that penalizes large coefficients, with the aim of shrinking unnecessary coefficients to zero.
Least Absolute Shrinkage and Selection Operator (Lasso) regularization, also called L1 regularization, is a regularization method that adds the sum of the absolute values of the coefficients as a penalty in a cost function.
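In symbols, for coefficients $k_i$, $n$ samples, and predictions $\hat{y}_j$, scikit-learn's documented `Lasso` objective (note its $\tfrac{1}{2n}$ scaling of the error term) is:
$$
\min_{k} \; \frac{1}{2n} \sum_{j=1}^{n} \left( y_j - \hat{y}_j \right)^2 + \alpha \sum_{i} \lvert k_i \rvert
$$
where $\alpha$ is the `alpha` parameter passed to the `Lasso` constructor.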
In scikit-learn, we can use the [Lasso](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Lasso.html) model, which performs a linear regression with an L1 regression penalty.
In the resultant graph, you can see that the regression smooths out our polynomial curve quite a bit despite the polynomial being a degree 10 polynomial. Note that Lasso regression can make the impact of less important features completely disappear.
```
from sklearn.linear_model import Lasso
poly_features = PolynomialFeatures(degree=10, include_bias=False)
X_poly = poly_features.fit_transform(X)
lasso_reg = Lasso(alpha=5.0)
lasso_reg.fit(X_poly, y)
X_line_fitted = np.linspace(X.min(), X.max(), num=100)
y_line_fitted = lasso_reg.intercept_
for i in range(len(poly_features.powers_)):
    exponent = poly_features.powers_[i][0]
    y_line_fitted = y_line_fitted + lasso_reg.coef_[i] * (X_line_fitted**exponent)
plt.plot(X_line, y_line, 'r-')
plt.plot(X_line_fitted, y_line_fitted, 'g-')
plt.plot(X, y, 'b.')
plt.show()
```
## Ridge Regularization
Similar to Lasso regularization, [Ridge](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Ridge.html) regularization adds a penalty to the cost function of a model. In the case of Ridge, also called L2 regularization, the penalty is the sum of squares of the coefficients.
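For reference, the documented `Ridge` cost (scikit-learn does not divide this one by the number of samples) is:
$$
\min_{k} \; \sum_{j=1}^{n} \left( y_j - \hat{y}_j \right)^2 + \alpha \sum_{i} k_i^2
$$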
Again, we can see that the regression smooths out the curve of our 10-degree polynomial.
```
from sklearn.linear_model import Ridge
poly_features = PolynomialFeatures(degree=10, include_bias=False)
X_poly = poly_features.fit_transform(X)
ridge_reg = Ridge(alpha=0.5)
ridge_reg.fit(X_poly, y)
X_line_fitted = np.linspace(X.min(), X.max(), num=100)
y_line_fitted = ridge_reg.intercept_
for i in range(len(poly_features.powers_)):
    exponent = poly_features.powers_[i][0]
    y_line_fitted = y_line_fitted + ridge_reg.coef_[0][i] * (X_line_fitted**exponent)
plt.plot(X_line, y_line, 'r-')
plt.plot(X_line_fitted, y_line_fitted, 'g-')
plt.plot(X, y, 'b.')
plt.show()
```
## ElasticNet Regularization
Another common form of regularization is [ElasticNet](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.ElasticNet.html) regularization. This regularization method combines the concepts of L1 and L2 regularization by applying a penalty containing both a squared value and an absolute value.
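Writing `l1_ratio` as $r$, scikit-learn's documented `ElasticNet` objective combines both penalties:
$$
\min_{k} \; \frac{1}{2n} \sum_{j=1}^{n} \left( y_j - \hat{y}_j \right)^2 + \alpha \, r \sum_{i} \lvert k_i \rvert + \frac{\alpha (1 - r)}{2} \sum_{i} k_i^2
$$
Setting $r = 1$ recovers Lasso, and $r = 0$ gives a Ridge-style penalty.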
```
from sklearn.linear_model import ElasticNet
poly_features = PolynomialFeatures(degree=10, include_bias=False)
X_poly = poly_features.fit_transform(X)
elastic_reg = ElasticNet(alpha=2.0, l1_ratio=0.5)
elastic_reg.fit(X_poly, y)
X_line_fitted = np.linspace(X.min(), X.max(), num=100)
y_line_fitted = elastic_reg.intercept_
for i in range(len(poly_features.powers_)):
    exponent = poly_features.powers_[i][0]
    y_line_fitted = y_line_fitted + \
        elastic_reg.coef_[i] * (X_line_fitted**exponent)
plt.plot(X_line, y_line, 'r-')
plt.plot(X_line_fitted, y_line_fitted, 'g-')
plt.plot(X, y, 'b.')
plt.show()
```
## Other Strategies
Aside from regularization, there are other strategies that can be used to prevent overfitting. These include:
* [Early stopping](https://en.wikipedia.org/wiki/Early_stopping)
* [Cross-validation](https://en.wikipedia.org/wiki/Cross-validation_(statistics))
* [Ensemble methods](https://en.wikipedia.org/wiki/Ensemble_learning)
* Simplifying your model
* Removing features
# Exercises
For these exercises we will work with the [diabetes dataset](https://scikit-learn.org/stable/datasets/index.html#diabetes-dataset) that comes with scikit-learn. The data contains the following features:
1. age
1. sex
1. body mass index (bmi)
1. average blood pressure (bp)
It also contains six measures of blood serum, `s1` through `s6`. The target is a numeric assessment of the progression of the disease over the course of a year.
The data has been standardized.
```
from sklearn.datasets import load_diabetes
import numpy as np
import pandas as pd
data = load_diabetes()
df = pd.DataFrame(data.data, columns=data.feature_names)
df['progression'] = data.target
df.describe()
```
Let's plot how body mass index relates to blood pressure.
```
import matplotlib.pyplot as plt
plt.plot(df['bmi'], df['bp'], 'b.')
plt.show()
```
## Exercise 1: Polynomial Regression
Let's create a model to see if we can map body mass index to blood pressure.
1. Create a 10-degree polynomial preprocessor for our regression
1. Create a linear regression model
1. Fit and transform the `bmi` values with the polynomial features preprocessor
1. Fit the transformed data using the linear regression
1. Plot the fitted line over a scatter plot of the data points
**Student Solution**
```
# Your code goes here
```
---
## Exercise 2: Regularization
Your model from exercise one likely looked like it overfit. Experiment with the Lasso, Ridge, and/or ElasticNet classes in the place of the `LinearRegression`. Adjust the parameters for whichever regularization class you use until you create a line that doesn't look to be under- or over-fitted.
**Student Solution**
```
# Your code goes here
```
---
## Exercise 3: Other Models
Experiment with the [BayesianRidge](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.BayesianRidge.html). Does its fit line look better or worse than your other models?
**Student Solution**
```
# Your code goes here.
```
Does your fit line look better or worse than your other models?
> *Your Answer Goes Here*
---
# Portfolio Variance
```
import sys
!{sys.executable} -m pip install -r requirements.txt
import numpy as np
import pandas as pd
import time
import os
import quiz_helper
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
plt.rcParams['figure.figsize'] = (14, 8)
```
### data bundle
```
import os
import quiz_helper
from zipline.data import bundles
os.environ['ZIPLINE_ROOT'] = os.path.join(os.getcwd(), '..', '..','data','module_4_quizzes_eod')
ingest_func = bundles.csvdir.csvdir_equities(['daily'], quiz_helper.EOD_BUNDLE_NAME)
bundles.register(quiz_helper.EOD_BUNDLE_NAME, ingest_func)
print('Data Registered')
```
### Build pipeline engine
```
from zipline.pipeline import Pipeline
from zipline.pipeline.factors import AverageDollarVolume
from zipline.utils.calendars import get_calendar
universe = AverageDollarVolume(window_length=120).top(500)
trading_calendar = get_calendar('NYSE')
bundle_data = bundles.load(quiz_helper.EOD_BUNDLE_NAME)
engine = quiz_helper.build_pipeline_engine(bundle_data, trading_calendar)
```
### View Data
With the pipeline engine built, let's get the stocks at the end of the period in the universe we're using. We'll use these tickers to generate the returns data for our risk model.
```
universe_end_date = pd.Timestamp('2016-01-05', tz='UTC')
universe_tickers = engine\
.run_pipeline(
Pipeline(screen=universe),
universe_end_date,
universe_end_date)\
.index.get_level_values(1)\
.values.tolist()
universe_tickers
len(universe_tickers)
from zipline.data.data_portal import DataPortal
data_portal = DataPortal(
bundle_data.asset_finder,
trading_calendar=trading_calendar,
first_trading_day=bundle_data.equity_daily_bar_reader.first_trading_day,
equity_minute_reader=None,
equity_daily_reader=bundle_data.equity_daily_bar_reader,
adjustment_reader=bundle_data.adjustment_reader)
```
## Get pricing data helper function
```
from quiz_helper import get_pricing
```
## get pricing data into a dataframe
```
returns_df = \
get_pricing(
data_portal,
trading_calendar,
universe_tickers,
universe_end_date - pd.DateOffset(years=5),
universe_end_date)\
.pct_change()[1:].fillna(0) #convert prices into returns
returns_df
```
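`pct_change` converts a price series into simple returns, $r_t = p_t/p_{t-1} - 1$; the leading `NaN` (no prior price) and any missing values are what the `[1:]` slice and `fillna(0)` above deal with. A tiny, self-contained illustration with made-up prices:

```python
import pandas as pd

# pct_change turns prices into simple returns r_t = p_t / p_{t-1} - 1;
# the first entry has no prior price, so it is NaN.
prices = pd.Series([100.0, 110.0, 99.0])
returns = prices.pct_change()
print(returns.tolist())  # -> [nan, 0.1, -0.1] (up to float rounding)
```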
## Let's look at a two stock portfolio
Let's pretend we have a portfolio of two stocks. We'll pick Apple and Microsoft in this example.
```
aapl_col = returns_df.columns[3]
msft_col = returns_df.columns[312]
asset_return_1 = returns_df[aapl_col].rename('asset_return_aapl')
asset_return_2 = returns_df[msft_col].rename('asset_return_msft')
asset_return_df = pd.concat([asset_return_1,asset_return_2],axis=1)
asset_return_df.head(2)
```
## Factor returns
Let's make up a "factor" by taking an average of all stocks in our list. You can think of this as an equal weighted index of the 490 stocks, kind of like a measure of the "market". We'll also make another factor by calculating the median of all the stocks. These are mainly intended to help us generate some data to work with. We'll go into how some common risk factors are generated later in the lessons.
Also note that we're setting `axis=1` so that we calculate a value for each time period (row) instead of one value for each column (asset).
```
factor_return_1 = returns_df.mean(axis=1)
factor_return_2 = returns_df.median(axis=1)
factor_return_l = [factor_return_1, factor_return_2]
```
## Factor exposures
Factor exposures refer to how "exposed" a stock is to each factor. We'll get into this more later. For now, just think of this as one number for each stock, for each of the factors.
```
from sklearn.linear_model import LinearRegression
"""
For now, just assume that we're calculating a number for each
stock, for each factor, which represents how "exposed" each stock is
to each factor.
We'll discuss how factor exposure is calculated later in the lessons.
"""
def get_factor_exposures(factor_return_l, asset_return):
lr = LinearRegression()
X = np.array(factor_return_l).T
y = np.array(asset_return.values)
lr.fit(X,y)
return lr.coef_
factor_exposure_l = []
for i in range(len(asset_return_df.columns)):
factor_exposure_l.append(
get_factor_exposures(factor_return_l,
asset_return_df[asset_return_df.columns[i]]
))
factor_exposure_a = np.array(factor_exposure_l)
print(f"factor_exposures for asset 1 {factor_exposure_a[0]}")
print(f"factor_exposures for asset 2 {factor_exposure_a[1]}")
```
## Variance of stock 1
Calculate the variance of stock 1.
$\textrm{Var}(r_{1}) = \beta_{1,1}^2 \textrm{Var}(f_{1}) + \beta_{1,2}^2 \textrm{Var}(f_{2}) + 2\beta_{1,1}\beta_{1,2}\textrm{Cov}(f_{1},f_{2}) + \textrm{Var}(s_{1})$
```
factor_exposure_1_1 = factor_exposure_a[0][0]
factor_exposure_1_2 = factor_exposure_a[0][1]
common_return_1 = factor_exposure_1_1 * factor_return_1 + factor_exposure_1_2 * factor_return_2
specific_return_1 = asset_return_1 - common_return_1
covm_f1_f2 = np.cov(factor_return_1,factor_return_2,ddof=1) #this calculates a covariance matrix
# get the variance of each factor, and covariances from the covariance matrix covm_f1_f2
var_f1 = covm_f1_f2[0,0]
var_f2 = covm_f1_f2[1,1]
cov_f1_f2 = covm_f1_f2[0,1]
# calculate the specific variance.
var_s_1 = np.var(specific_return_1,ddof=1)
# calculate the variance of asset 1 in terms of the factors and specific variance
var_asset_1 = (factor_exposure_1_1**2 * var_f1) + \
(factor_exposure_1_2**2 * var_f2) + \
2 * (factor_exposure_1_1 * factor_exposure_1_2 * cov_f1_f2) + \
var_s_1
print(f"variance of asset 1: {var_asset_1:.8f}")
```
## Variance of stock 2
Calculate the variance of stock 2.
$\textrm{Var}(r_{2}) = \beta_{2,1}^2 \textrm{Var}(f_{1}) + \beta_{2,2}^2 \textrm{Var}(f_{2}) + 2\beta_{2,1}\beta_{2,2}\textrm{Cov}(f_{1},f_{2}) + \textrm{Var}(s_{2})$
```
factor_exposure_2_1 = factor_exposure_a[1][0]
factor_exposure_2_2 = factor_exposure_a[1][1]
common_return_2 = factor_exposure_2_1 * factor_return_1 + factor_exposure_2_2 * factor_return_2
specific_return_2 = asset_return_2 - common_return_2
# Notice we already calculated the variance and covariances of the factors
# calculate the specific variance of asset 2
var_s_2 = np.var(specific_return_2,ddof=1)
# calculate the variance of asset 2 in terms of the factors and specific variance
var_asset_2 = (factor_exposure_2_1**2 * var_f1) + \
(factor_exposure_2_2**2 * var_f2) + \
(2 * factor_exposure_2_1 * factor_exposure_2_2 * cov_f1_f2) + \
var_s_2
print(f"variance of asset 2: {var_asset_2:.8f}")
```
## Covariance of stocks 1 and 2
Calculate the covariance of stock 1 and 2.
$\textrm{Cov}(r_{1},r_{2}) = \beta_{1,1}\beta_{2,1}\textrm{Var}(f_{1}) + \beta_{1,1}\beta_{2,2}\textrm{Cov}(f_{1},f_{2}) + \beta_{1,2}\beta_{2,1}\textrm{Cov}(f_{1},f_{2}) + \beta_{1,2}\beta_{2,2}\textrm{Var}(f_{2})$
```
# TODO: calculate the covariance of assets 1 and 2 in terms of the factors
cov_asset_1_2 = (factor_exposure_1_1 * factor_exposure_2_1 * var_f1) + \
(factor_exposure_1_1 * factor_exposure_2_2 * cov_f1_f2) + \
(factor_exposure_1_2 * factor_exposure_2_1 * cov_f1_f2) + \
(factor_exposure_1_2 * factor_exposure_2_2 * var_f2)
print(f"covariance of assets 1 and 2: {cov_asset_1_2:.8f}")
```
## Quiz 1: calculate portfolio variance
We'll choose stock weights for now (in a later lesson, you'll learn how to use portfolio optimization that uses alpha factors and a risk factor model to choose stock weights).
$\textrm{Var}(r_p) = x_{1}^{2} \textrm{Var}(r_1) + x_{2}^{2} \textrm{Var}(r_2) + 2x_{1}x_{2}\textrm{Cov}(r_{1},r_{2})$
```
weight_1 = 0.60
weight_2 = 0.40
# TODO: calculate portfolio variance
var_portfolio = # ...
print(f"variance of portfolio is {var_portfolio:.8f}")
```
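One possible way to fill in the TODO, directly translating the formula above — a sketch, not the official solution (see the solution notebook linked at the end). The variance and covariance numbers here are made up for illustration; in the notebook they come from the cells above:

```python
# Illustrative sketch of Quiz 1 with made-up variance/covariance values;
# in the notebook, var_asset_1, var_asset_2 and cov_asset_1_2 come from
# the earlier cells.
var_asset_1 = 0.0002
var_asset_2 = 0.0001
cov_asset_1_2 = 0.00007
weight_1, weight_2 = 0.60, 0.40

# Var(r_p) = x1^2 Var(r_1) + x2^2 Var(r_2) + 2 x1 x2 Cov(r_1, r_2)
var_portfolio = (weight_1**2 * var_asset_1
                 + weight_2**2 * var_asset_2
                 + 2 * weight_1 * weight_2 * cov_asset_1_2)
print(f"variance of portfolio is {var_portfolio:.8f}")
```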
## Quiz 2: Do it with Matrices!
Create matrices $\mathbf{F}$, $\mathbf{B}$ and $\mathbf{S}$, where
$\mathbf{F}= \begin{pmatrix}
\textrm{Var}(f_1) & \textrm{Cov}(f_1,f_2) \\
\textrm{Cov}(f_2,f_1) & \textrm{Var}(f_2)
\end{pmatrix}$
is the covariance matrix of factors,
$\mathbf{B} = \begin{pmatrix}
\beta_{1,1}, \beta_{1,2}\\
\beta_{2,1}, \beta_{2,2}
\end{pmatrix}$
is the matrix of factor exposures, and
$\mathbf{S} = \begin{pmatrix}
\textrm{Var}(s_i) & 0\\
0 & \textrm{Var}(s_j)
\end{pmatrix}$
is the matrix of specific variances.
$\mathbf{X} = \begin{pmatrix}
x_{1} \\
x_{2}
\end{pmatrix}$
### Concept Question
What are the dimensions of the $\textrm{Var}(r_p)$ portfolio variance? Given this, when choosing whether to multiply a row vector or a column vector on the left and right sides of the $\mathbf{BFB}^T$, which choice helps you get the dimensions of the portfolio variance term?
In other words:
Given that $\mathbf{X}$ is a column vector, which makes more sense?
$\mathbf{X}^T(\mathbf{BFB}^T + \mathbf{S})\mathbf{X}$ ?
or
$\mathbf{X}(\mathbf{BFB}^T + \mathbf{S})\mathbf{X}^T$ ?
## Answer 2 here:
## Quiz 3: Calculate portfolio variance using matrices
```
# TODO: covariance matrix of factors
F = # ...
F
# TODO: matrix of factor exposures
B = # ...
B
# TODO: matrix of specific variances
S = # ...
S
```
#### Hint for column vectors
Try using [reshape](https://docs.scipy.org/doc/numpy-1.15.1/reference/generated/numpy.reshape.html)
```
# TODO: make a column vector for stock weights matrix X
X = # ...
X
# TODO: covariance matrix of assets
var_portfolio = # ...
print(f"portfolio variance is \n{var_portfolio[0][0]:.8f}")
```
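For the concept question above: since portfolio variance is a scalar, $\mathbf{X}^T(\mathbf{BFB}^T + \mathbf{S})\mathbf{X}$ is the form that works — a $(1 \times 2)(2 \times 2)(2 \times 1)$ product yields a $1 \times 1$ result. One possible way to fill in the TODOs, with made-up numbers standing in for the values computed earlier (a sketch, not the official solution):

```python
import numpy as np

# Illustrative sketch with made-up factor variances/covariances and exposures;
# in the notebook, F, B and S come from the values computed in earlier cells.
var_f1, var_f2, cov_f1_f2 = 0.0001, 0.00008, 0.00002
F = np.array([[var_f1,    cov_f1_f2],
              [cov_f1_f2, var_f2]])          # covariance matrix of factors
B = np.array([[1.1,  0.2],
              [0.9, -0.1]])                  # matrix of factor exposures
S = np.diag([0.00003, 0.00004])              # specific variances on the diagonal
X = np.array([0.60, 0.40]).reshape(-1, 1)    # column vector of stock weights

# X^T (B F B^T + S) X is (1x2)(2x2)(2x1) -> a 1x1 matrix, matching the
# fact that portfolio variance is a scalar.
var_portfolio = X.T.dot(B.dot(F).dot(B.T) + S).dot(X)
print(f"portfolio variance is \n{var_portfolio[0][0]:.8f}")
```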
## Solution
[Solution notebook is here](portfolio_variance_solution.ipynb)
# Unit Testing ML Code: Hands-on Exercise (Data Engineering)
## In this notebook we will explore unit tests for data engineering
#### We will use a classic toy dataset: the Iris plants dataset, which comes included with scikit-learn
Dataset details: https://scikit-learn.org/stable/datasets/index.html#iris-plants-dataset
As we progress through the course, the complexity of examples will increase, but we will start with something basic. This notebook is designed so that it can be run in isolation, once the setup steps described below are complete.
### Setup
Let's begin by importing the dataset and the libraries we are going to use. Make sure you have run `pip install -r requirements.txt` using the requirements file located in the same directory as this notebook. We recommend doing this in a separate virtual environment (see dedicated setup lecture).
If you need a refresher on jupyter, pandas or numpy, there are some links to resources in the section notes.
```
from sklearn import datasets
import pandas as pd
import numpy as np
# Access the iris dataset from sklearn
iris = datasets.load_iris()
# Load the iris data into a pandas dataframe. The `data` and `feature_names`
# attributes of the dataset are added by default by sklearn. We use them to
# specify the columns of our dataframes.
iris_frame = pd.DataFrame(iris.data, columns=iris.feature_names)
# Create a "target" column in our dataframe, and set the values to the correct
# classifications from the dataset.
iris_frame['target'] = iris.target
iris.feature_names
```
### Add the `SimplePipeline` from the Test Input Values notebook (same as previous lecture, no changes here)
```
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
class SimplePipeline:
def __init__(self):
self.frame = None
# Shorthand to specify that each value should start out as
# None when the class is instantiated.
self.X_train, self.X_test, self.y_train, self.y_test = None, None, None, None
self.model = None
self.load_dataset()
def load_dataset(self):
"""Load the dataset and perform train test split."""
# fetch from sklearn
dataset = datasets.load_iris()
# remove units ' (cm)' from variable names
self.feature_names = [fn[:-5] for fn in dataset.feature_names]
self.frame = pd.DataFrame(dataset.data, columns=self.feature_names)
for col in self.frame.columns:
self.frame[col] *= -1
self.frame['target'] = dataset.target
# we divide the data set using the train_test_split function from sklearn,
# which takes as parameters, the dataframe with the predictor variables,
# then the target, then the percentage of data to assign to the test set,
# and finally the random_state to ensure reproducibility.
self.X_train, self.X_test, self.y_train, self.y_test = train_test_split(
self.frame[self.feature_names], self.frame.target, test_size=0.65, random_state=42)
def train(self, algorithm=LogisticRegression):
# we set up a LogisticRegression classifier with default parameters
self.model = algorithm(solver='lbfgs', multi_class='auto')
self.model.fit(self.X_train, self.y_train)
def predict(self, input_data):
return self.model.predict(input_data)
def get_accuracy(self):
# use our X_test and y_test values generated when we used
# `train_test_split` to test accuracy.
# score is a method on the LogisticRegression classifier that
# returns the accuracy by default, but can be changed to other metrics, see:
# https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html#sklearn.linear_model.LogisticRegression.score
return self.model.score(X=self.X_test, y=self.y_test)
def run_pipeline(self):
"""Helper method to run multiple pipeline methods with one call."""
self.load_dataset()
self.train()
```
### Test Engineered Data (preprocessing)
Below we create an updated pipeline which inherits from the SimplePipeline but has new functionality to preprocess the data by applying a scaler. Linear models are sensitive to the scale of the features. For example features with bigger magnitudes tend to dominate if we do not apply a scaler.
```
from sklearn.preprocessing import StandardScaler
class PipelineWithDataEngineering(SimplePipeline):
def __init__(self):
# Call the inherited SimplePipeline __init__ method first.
super().__init__()
# scaler to standardize the variables in the dataset
self.scaler = StandardScaler()
# Train the scaler once upon pipeline instantiation:
# Compute the mean and standard deviation based on the training data
self.scaler.fit(self.X_train)
def apply_scaler(self):
# Scale the test and training data to be of mean 0 and of unit variance
self.X_train = self.scaler.transform(self.X_train)
self.X_test = self.scaler.transform(self.X_test)
def predict(self, input_data):
# apply scaler transform on inputs before predictions
scaled_input_data = self.scaler.transform(input_data)
return self.model.predict(scaled_input_data)
def run_pipeline(self):
"""Helper method to run multiple pipeline methods with one call."""
self.load_dataset()
self.apply_scaler() # updated in this class
self.train()
pipeline = PipelineWithDataEngineering()
pipeline.run_pipeline()
accuracy_score = pipeline.get_accuracy()
print(f'current model accuracy is: {accuracy_score}')
```
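What `StandardScaler` does can be sketched in a few lines of NumPy — a simplified illustration of fit/transform, not sklearn's actual implementation:

```python
import numpy as np

# Simplified sketch of standardization: subtract the training mean and
# divide by the training standard deviation, column by column.
X_train = np.array([[1.0, 100.0],
                    [2.0, 200.0],
                    [3.0, 300.0]])
mean = X_train.mean(axis=0)        # "fit": learn column means...
std = X_train.std(axis=0)          # ...and column standard deviations
X_scaled = (X_train - mean) / std  # "transform"
print(X_scaled.mean(axis=0))  # ~[0, 0]
print(X_scaled.std(axis=0))   # [1, 1]
```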
### Now we Unit Test
We focus specifically on the feature engineering step
```
pipeline.load_dataset()
# pd.DataFrame(pipeline.X_train).stack().mean()
for col in pipeline.X_train.columns:
pipeline.X_train[col] *= -1
pipeline.X_train
import unittest
class TestIrisDataEngineering(unittest.TestCase):
def setUp(self):
"""Call the first method of the tested class after instantiating"""
self.pipeline = PipelineWithDataEngineering()
self.pipeline.load_dataset()
def test_scaler_preprocessing_brings_x_train_mean_near_zero(self):
""""""
# Given
# convert the dataframe to be a single column with pandas stack
original_mean = self.pipeline.X_train.stack().mean()
# When
self.pipeline.apply_scaler()
# Then
# The idea behind StandardScaler is that it will transform your data
# to center the distribution at 0 and scale the variance at 1.
# Therefore we test that the mean has shifted to be less than the original
# and close to 0 using assertAlmostEqual to check to 3 decimal places:
# https://docs.python.org/3/library/unittest.html#unittest.TestCase.assertAlmostEqual
self.assertTrue(original_mean > self.pipeline.X_train.mean()) # X_train is a numpy array at this point.
self.assertAlmostEqual(self.pipeline.X_train.mean(), 0.0, places=3)
print(f'Original X train mean: {original_mean}')
print(f'Transformed X train mean: {self.pipeline.X_train.mean()}')
def test_scaler_preprocessing_brings_x_train_std_near_one(self):
# When
self.pipeline.apply_scaler()
# Then
# We also check that the standard deviation is close to 1
self.assertAlmostEqual(self.pipeline.X_train.std(), 1.0, places=3)
print(f'Transformed X train standard deviation : {self.pipeline.X_train.std()}')
import sys
suite = unittest.TestLoader().loadTestsFromTestCase(TestIrisDataEngineering)
unittest.TextTestRunner(verbosity=1, stream=sys.stderr).run(suite)
```
## Data Engineering Test: Hands-on Exercise
Change the pipeline class preprocessing so that the test fails. Do you understand why the test is failing?
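As a hedged hint (there are many valid answers): any preprocessing change that stops the scaled training data from having mean ≈ 0 and std ≈ 1 will break the assertions. A minimal stand-in for the scaling step, with made-up data, showing why a shift instead of standardization fails both checks:

```python
import numpy as np

# Hypothetical stand-in for the pipeline's scaling step; in the exercise you
# would edit PipelineWithDataEngineering.apply_scaler instead.
X_train = np.random.RandomState(0).normal(5.0, 2.0, size=(100, 4))

def broken_apply_scaler(X):
    # shift instead of standardize -> mean is no longer ~0, std no longer ~1
    return X + 100

X_scaled = broken_apply_scaler(X_train)
# The unit tests assert mean ~ 0 and std ~ 1; both would now fail:
print(round(X_scaled.mean(), 2), round(X_scaled.std(), 2))
```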
<a href="https://colab.research.google.com/github/abdurahman02/AcademicContent/blob/master/FederatedCF011.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import matplotlib.pyplot as plt
from pathlib import Path
import pandas as pd
import numpy as np
import os
import copy
import time
import string
!git clone "https://github.com/abdurahman02/ml-latest-small.git"
os.chdir("ml-latest-small")
os.listdir()
data = pd.read_csv("ratings.csv")
data.head()
def filtering_data(df,from_user, to_user, from_item, to_item):
if(from_user <= to_user and from_item <= to_item
and to_user < max(df["userId"]) and to_item < max(df["movieId"])
):
return df[(df.userId >= from_user) &
(df.userId <= to_user) &
(df.movieId >= from_item) &
(df.movieId <= to_item)
]
print("Error Range")
def getBatchForUser(data, u, batchSize):
if u >= len(data["userId"].unique()):
print("INvalid UserId requested")
return
if batchSize > len(data[data.userId == u]):
batchSize = len(data[data.userId == u])
return data[data.userId == u].sample(n=batchSize)
# split train and validation before encoding
# np.random.seed(3)
# msk = np.random.rand(len(data)) < 0.8
# train = data[msk].copy()
# val = data[~msk].copy()
# here is a handy function modified from fast.ai
def proc_col(col, train_col=None):
"""Encodes a pandas column with continous ids.
"""
if train_col is not None:
uniq = train_col.unique()
else:
uniq = col.unique()
name2idx = {o:i for i,o in enumerate(uniq)}
return name2idx, np.array([name2idx.get(x, -1) for x in col]), len(uniq)
def encode_data(df, train=None):
""" Encodes rating data with continous user and movie ids.
If train is provided, encodes df with the same encoding as train.
"""
df = df.copy()
for col_name in ["userId", "movieId"]:
train_col = None
if train is not None:
train_col = train[col_name]
_,col,_ = proc_col(df[col_name], train_col)
df[col_name] = col
df = df[df[col_name] >= 0]
return df
import torch
import torch.nn as nn
import torch.nn.functional as F
# encoding the train and validation data
# df_train = encode_data(train)
# df_val = encode_data(val, train)
# df_val.movieId.values
# df_train_numpy = df_train.to_numpy(dtype=int, copy=True)
# # print((df_train_numpy))
# diction={}
# for i in range(len(df_train_numpy)):
# diction[df_train_numpy[i][0],df_train_numpy[i][1]] = df_train_numpy[i][2]
class MF(nn.Module):
def __init__(self, userX_embedding, item_embed_mat, emb_size=100):
super(MF, self).__init__()
self.userX_embedding = userX_embedding
self.item_embed_mat = item_embed_mat
# print(userX_embedding.weight)
def forward(self, u, v):
u = self.userX_embedding(u)
v = self.item_embed_mat(v)
# print("u: ",u)
# print("v: ",v)
# print(len((u*v).sum(1)))
return (u*v).sum(1)
# emb_size=5
# items = torch.LongTensor(df_train.movieId.unique()) #.cuda()
# embx_item = nn.Embedding(num_items, emb_size)
# embx_user = nn.Embedding(1,emb_size)
# embx_user.weight.data.uniform_(0, 0.05)
# embx_item.weight.data.uniform_(0, 0.05)
# model01 = MF(embx_user,embx_item,5)
# model01.userX_embedding.weight
# pred = model01(torch.tensor([0]), torch.tensor(items[0]))
# model01.item_embed_mat.weight[0]
# print(pred,torch.tensor([diction[0,0]]))
# optimizer = torch.optim.Adam(model01.parameters(), lr=0.01, weight_decay=0.0)
# loss = F.mse_loss(pred,torch.FloatTensor([diction[0,0]]))
# optimizer.zero_grad()
# loss.backward()
# optimizer.step()
# print(model01.item_embed_mat.weight[0])
# print(model01.userX_embedding.weight)
# model = MF(num_users, num_items, emb_size=5) # .cuda() if you have a GPU
# print(len(df_train.movieId.unique()))
# print(max(df_train.movieId.values))
def add_model_parameters(model1, model2):
# Adds the parameters of model1 to model2
params1 = model1.named_parameters()
params2 = model2.named_parameters()
dict_params2 = dict(params2)
for name1, param1 in params1:
if name1 in dict_params2 and name1 != 'userX_embedding.weight':
dict_params2[name1].data.copy_(param1.data + dict_params2[name1].data)
model2.load_state_dict(dict_params2)
def sub_model_parameters(model1, model2):
# Subtracts the parameters of model2 with model1
params1 = model1.named_parameters()
params2 = model2.named_parameters()
dict_params2 = dict(params2)
for name1, param1 in params1:
if name1 in dict_params2 and name1 != 'userX_embedding.weight':
dict_params2[name1].data.copy_(dict_params2[name1].data - param1.data)
model2.load_state_dict(dict_params2)
def divide_model_parameters(model, f):
# Divides model parameters except for the user embeddings with f
params1 = model.named_parameters()
params2 = model.named_parameters()
dict_params2 = dict(params2)
for name1, param1 in params1:
if name1 != 'userX_embedding.weight':
dict_params2[name1].data.copy_(param1.data / f)
model.load_state_dict(dict_params2)
def zero_model_parameters(model):
# sets all parameters to zero
params1 = model.named_parameters()
params2 = model.named_parameters()
dict_params2 = dict(params2)
for name1, param1 in params1:
if name1 in dict_params2:
dict_params2[name1].data.copy_(param1.data - dict_params2[name1].data)
model.load_state_dict(dict_params2)
class RMSELoss(nn.Module):
def __init__(self, eps=1e-6):
super().__init__()
self.mse = nn.MSELoss()
self.eps = eps
def forward(self,yhat,y):
loss = torch.sqrt(self.mse(yhat,y) + self.eps)
return loss
def fed_train_client(model_server, df_train,epochs=10, lr=0.1):
emb_size=5
los=[]
los_usr=[]
dict_los={}
user_emb_dict = {}
item_emb_mat_dict = {}
model_diff = copy.deepcopy(model_server)
zero_model_parameters(model_diff)
# model02(torch.tensor([0]), torch.tensor(items[0]))
t1 = time.time()
for user_id in range(len(df_train.userId.unique())):
model02 = copy.deepcopy(model_server)
optimizer = torch.optim.Adam(model02.parameters(), lr=lr, weight_decay=1e-5)
batch = df_train[df_train.userId == user_id]
batch = batch.to_numpy(dtype=int, copy=True)
for e in range(epochs):
for data_point in batch:
# print(data_point)
y_hat = model02(torch.tensor([0]), torch.tensor(data_point[1]))
loss_fn = RMSELoss()
loss = loss_fn(y_hat,torch.FloatTensor([data_point[2]]))
optimizer.zero_grad()
loss.backward()
optimizer.step()
los.append(loss.item())
los_usr.append(np.sqrt(np.sum([x**2 for x in los])/len(batch)))
los.clear()
# running_loss = running_loss/len(data_point)
user_emb_dict[user_id] = copy.deepcopy(model02.userX_embedding.weight)
item_emb_mat_dict[user_id] = copy.deepcopy(model02.item_embed_mat.weight)
dict_los[user_id] = los_usr[len(los_usr)-1]
print("userId:", user_id, "training_loss: ", dict_los[user_id])
los_usr.clear()
sub_model_parameters(model_server, model02)
add_model_parameters(model02, model_diff)
# Take the average of the MLP and item vectors
divide_model_parameters(model_diff, (len(df_train.userId.unique())))
# Update the global model by adding the total change
add_model_parameters(model_diff, model_server)
t2 = time.time()
print("Time of round:", round(t2 - t1), "seconds")
return dict_los
# test_loss(model, unsqueeze)
def fed_eval(model, df_val):
los = []
los_usr = []
for user_id in range(len(df_val.userId.unique())):
batch = df_val[df_val.userId == user_id]
batch = batch.to_numpy(dtype=int, copy=True)
for data_point in batch:
y_hat = model(torch.tensor([0]), torch.tensor(data_point[1]))
# loss = RMSELoss(y_hat,torch.FloatTensor([data_point[2]]))
loss_fn = RMSELoss()
loss = loss_fn(y_hat,torch.FloatTensor([data_point[2]]))
los.append(loss.item())
los_usr.append(np.sqrt(np.sum([x**2 for x in los])/len(batch)))
los.clear()
return np.mean(los_usr),los_usr
def Server(from_user, to_user, from_item, to_item, epochs, emb_size, rounds, lr):
Max_BatchSize_User = 20
lr = lr
eta = 80
print("embedding size is: ",emb_size)
print("Max Batch Size is: ",Max_BatchSize_User)
avg_train_loss = []
dict_loss_train={}
avg_test_loss_vec = []
filtered_data = filtering_data(data, from_user, to_user, from_item, to_item)
np.random.seed(3)
msk = np.random.rand(len(filtered_data)) < 0.8
train = filtered_data[msk].copy()
val = filtered_data[~msk].copy()
df_train = encode_data(train)
df_val = encode_data(val, train)
embx_item = nn.Embedding(len(df_train.movieId.unique()), emb_size)
embx_user = nn.Embedding(1,emb_size)
if torch.cuda.is_available():
embx_user.weight.data.uniform_(0, 0.05).cuda()
embx_item.weight.data.uniform_(0, 0.05).cuda()
else:
embx_user.weight.data.uniform_(0, 0.05)
embx_item.weight.data.uniform_(0, 0.05)
model_server = MF(embx_user,embx_item,emb_size)
for t in range(rounds): # for each round
print("Starting round", t + 1)
# train one round
dict_loss_train = fed_train_client(model_server, df_train, epochs=epochs, lr=lr)
avg_train_loss.append(np.mean([dict_loss_train[x] for x in dict_loss_train]))
print("Evaluating model...")
avg_test_loss, test_loss_vec_userX = fed_eval(model_server, df_val)
avg_test_loss_vec.append(avg_test_loss)
print("Round ", t, " computed test loss:", avg_test_loss)
return avg_test_loss_vec, test_loss_vec_userX, avg_train_loss
from_user = 1
to_user = 300
from_item = 1
to_item = 10000
epochs=10
emb_size=100
rounds=100
lr=0.1
avg_test_loss_vec, \
test_loss_vec_userX,\
avg_train_loss = Server(from_user, to_user,
from_item, to_item,
epochs, emb_size, rounds, lr)
# from_user = 1
# to_user = 3
# from_item = 1
# to_item = 10000
# epochs=10
# emb_size=20
# rounds=100
plt.xlabel('number of rounds')
plt.ylabel('Average user test loss')
print("total users: ", abs(from_user-to_user))
print("total items: ", abs(from_item-to_item))
print("Embedding size: ",emb_size)
print("local epochs: ", epochs)
plt.plot(np.arange(start=1, stop=len(avg_test_loss_vec)+1, step=1), avg_test_loss_vec, 'g--')
plt.plot(np.arange(start=1, stop=len(avg_test_loss_vec)+1, step=1), avg_train_loss, 'b--')
plt.legend(["Test Loss", "Training loss"])
class MF_bias(nn.Module):
def __init__(self, num_users, num_items, emb_size=100):
super(MF_bias, self).__init__()
self.user_emb = nn.Embedding(num_users, emb_size)
self.user_bias = nn.Embedding(num_users, 1)
self.item_emb = nn.Embedding(num_items, emb_size)
self.item_bias = nn.Embedding(num_items, 1)
self.user_emb.weight.data.uniform_(0,0.05)
self.item_emb.weight.data.uniform_(0,0.05)
self.user_bias.weight.data.uniform_(-0.01,0.01)
self.item_bias.weight.data.uniform_(-0.01,0.01)
def forward(self, u, v):
U = self.user_emb(u)
V = self.item_emb(v)
b_u = self.user_bias(u).squeeze()
b_v = self.item_bias(v).squeeze()
return (U*V).sum(1) + b_u + b_v
# model = MF_bias(num_users, num_items, emb_size=100) #.cuda()
# train_epocs(model, epochs=10, lr=0.05, wd=1e-5)
# train_epocs(model, epochs=10, lr=0.01, wd=1e-5)
# train_epocs(model, epochs=10, lr=0.001, wd=1e-5)
```
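The `add_model_parameters` / `sub_model_parameters` / `divide_model_parameters` helpers above implement a FedAvg-style update: each client computes the change between its locally trained model and the server model, the server accumulates those deltas, averages them, and adds the average back. The same idea in a dependency-light NumPy sketch (the parameter arrays are made up for illustration, not the notebook's torch models):

```python
import numpy as np

# FedAvg-style aggregation sketch: the server parameters are updated with the
# average of the clients' parameter changes.
server = np.array([1.0, 2.0, 3.0])
client_updates = [np.array([1.5, 2.0, 3.0]),   # client 1's locally trained params
                  np.array([0.5, 2.0, 4.0])]   # client 2's locally trained params

# accumulate each client's delta (local - server), as sub/add_model_parameters do
total_delta = np.zeros_like(server)
for local in client_updates:
    total_delta += local - server

# average the deltas (divide_model_parameters) and apply them to the server model
server = server + total_delta / len(client_updates)
print(server)  # -> [1.  2.  3.5]
```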
```
### Human Motion Prediction Example ###
# state-of-the-art approaches use recursive encoders and decoders.
# this is meant to be a gentle introduction, not the "best" approach
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.optim as optim
import numpy as np
### Autoencoder Model ###
class Autoencoder(nn.Module):
def __init__(self, history_dim, prediction_dim, latent_dim, hidden_dim):
super(Autoencoder, self).__init__()
# encoder architecture
self.linear1 = nn.Linear(history_dim, hidden_dim)
self.linear2 = nn.Linear(hidden_dim, hidden_dim)
self.linear3 = nn.Linear(hidden_dim, latent_dim)
# decoder architecture
self.linear4 = nn.Linear(latent_dim, hidden_dim)
self.linear5 = nn.Linear(hidden_dim, hidden_dim)
self.linear6 = nn.Linear(hidden_dim, prediction_dim)
# loss function: ||x - y||^2
self.loss_fcn = nn.MSELoss()
# encoder takes in history and outputs latent z
def encoder(self, history):
h1 = torch.tanh(self.linear1(history))
h2 = torch.tanh(self.linear2(h1))
return self.linear3(h2)
# decoder takes in latent z and outputs prediction
def decoder(self, z):
h4 = torch.tanh(self.linear4(z))
h5 = torch.tanh(self.linear5(h4))
return self.linear6(h5)
# compare prediction to actual future
def forward(self, history, future):
prediction = self.decoder(self.encoder(history))
return self.loss_fcn(future, prediction)
### Generate the Training Data ###
# make N sine waves
# each sine wave is split in half:
# the first half is the history, and the second half is the future
# the amplitude and frequency are randomized
N = 100
start_t = 0.0
curr_t = 3.0
end_t = 6.0
history_timesteps = np.linspace(start_t, curr_t, 30)
future_timesteps = np.linspace(curr_t, end_t, 30)
dataset = []
for _ in range(N):
amp = np.random.uniform(0.2, 1.0)
freq = 2*np.pi*np.random.uniform(0.1, 1.0)
history = amp*np.sin(freq*history_timesteps)
future = amp*np.sin(freq*future_timesteps)
dataset.append((torch.FloatTensor(history), torch.FloatTensor(future)))
plt.plot(history_timesteps, history)
plt.plot(future_timesteps, future)
plt.show()
### Train the BC Model ###
# arguments: history_dim, prediction_dim, latent_dim, hidden_dim
model = Autoencoder(30, 30, 10, 32)
# hyperparameters for training
EPOCH = 2001
BATCH_SIZE_TRAIN = 100
LR = 0.001
# training loop
optimizer = optim.Adam(model.parameters(), lr=LR)
for epoch in range(EPOCH):
optimizer.zero_grad()
loss = 0
batch = np.random.choice(len(dataset), size=BATCH_SIZE_TRAIN, replace=False)
for index in batch:
item = dataset[index]
loss += model(item[0], item[1])
loss.backward()
optimizer.step()
if epoch % 100 == 0:
print(epoch, loss.item())
### Predict the Motion ###
# given this history
amp = np.random.uniform(0.2, 1.0)
freq = 2*np.pi*np.random.uniform(0.1, 1.0)
history = amp*np.sin(freq*history_timesteps)
# predict the future trajectory
z = model.encoder(torch.FloatTensor(history))
prediction = model.decoder(z).detach().numpy()
# plot the history, prediction, and actual trajectory
plt.plot(history_timesteps, history)
plt.plot(future_timesteps, amp*np.sin(freq*future_timesteps), 'x--')
plt.plot(future_timesteps, prediction, 'o-')
plt.show()
```
# Exploring Weather Trends
### by Phone Thiri Yadana
In this project, we will analyze global vs Singapore weather data using a 10-year moving average.
[<img src="./new24397338.png"/>](https://www.vectorstock.com/royalty-free-vector/kawaii-world-and-thermometer-cartoon-vector-24397338)
-------------
```
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
#load data
global_df = pd.read_csv("Data/global_data.csv")
city_df = pd.read_csv("Data/city_data.csv")
city_list_df = pd.read_csv("Data/city_list.csv")
```
## Check info, duplicate or missing data
```
global_df.head()
global_df.tail()
global_df.shape
sum(global_df.duplicated())
global_df.info()
city_df.head()
city_df.shape
city_df.info()
sum(city_df.duplicated())
city_list_df.head()
city_list_df.shape
city_list_df.info()
sum(city_list_df.duplicated())
```
## Calculate Moving Average
### Global Temperature
```
#yearly plot
plt.plot(global_df["year"], global_df["avg_temp"])
# 10 years Moving Average
global_df["10 Years MA"] = global_df["avg_temp"].rolling(window=10).mean()
global_df.iloc[8:18, :]
#10 years Moving Average
plt.plot(global_df["year"], global_df["10 Years MA"])
```
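`rolling(window=10).mean()` averages each value with the nine preceding years, so the first nine entries are `NaN`. A small example of how the window behaves (with a toy series and `window=3`):

```python
import pandas as pd

# With window=3, each entry is the average of that value and the two before
# it; the first two entries have incomplete windows and come out as NaN.
s = pd.Series([1.0, 2.0, 3.0, 4.0, 5.0])
print(s.rolling(window=3).mean().tolist())
# -> [nan, nan, 2.0, 3.0, 4.0]
```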
### Specific City Temperature (Singapore)
```
city_df.head()
singapore_df = city_df[city_df["country"] == "Singapore"]
singapore_df.head()
singapore_df.tail()
#check which rows are missing values
singapore_df[singapore_df["avg_temp"].isnull()]
```
Since Singapore data are missing from 1826 to 1862, it doesn't make sense to compare temperatures during that period.
```
singapore_df = singapore_df[singapore_df["year"] >= 1863]
# to make sure, check again for null values
singapore_df.info()
singapore_df.head()
# calculate 10 years moving average
singapore_df["10 Years MA"] = singapore_df["avg_temp"].rolling(window=10).mean()
singapore_df.iloc[8:18, :]
plt.plot(singapore_df["year"], singapore_df["10 Years MA"])
```
## Compare with Global Data (10 Years Moving Average)
```
years = global_df.query('year >= 1872 & year <= 2013')[["year"]]
global_ma = global_df.query('year >= 1872 & year <= 2013')[["10 Years MA"]]
singapore_ma = singapore_df.query('year >= 1872 & year <= 2013')["10 Years MA"]
plt.figure(figsize=[10,5])
plt.grid(True)
plt.plot(years, global_ma, label = "Global")
plt.plot(years,singapore_ma, label = "Singapore")
plt.xlabel("Year")
plt.ylabel("Temperature (C)")
plt.title("Temperature in Singapore vs Global (10 Years Moving Average)")
plt.legend()
plt.show()
global_ma.describe()
singapore_ma.describe()
```
----------------------
# Observations:
- As per the findings, the plot shows that both global and city-level (in this case, Singapore) temperatures are rising over the years.
- There are some ups and downs before 1920; since then, temperatures have been steadily increasing.
##### Copyright © 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# TFX Estimator Component Tutorial
***A Component-by-Component Introduction to TensorFlow Extended (TFX)***
Note: We recommend running this tutorial in a Colab notebook, with no setup required! Just click "Run in Google Colab".
<div class="devsite-table-wrapper"><table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/tfx/tutorials/tfx/components">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/tfx/blob/master/docs/tutorials/tfx/components.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/tfx/tree/master/docs/tutorials/tfx/components.ipynb">
<img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
</table></div>
This Colab-based tutorial will interactively walk through each built-in component of TensorFlow Extended (TFX).
It covers every step in an end-to-end machine learning pipeline, from data ingestion to pushing a model to serving.
When you're done, the contents of this notebook can be automatically exported as TFX pipeline source code, which you can orchestrate with Apache Airflow and Apache Beam.
Note: This notebook and its associated APIs are **experimental** and are
in active development. Major changes in functionality, behavior, and
presentation are expected.
## Background
This notebook demonstrates how to use TFX in a Jupyter/Colab environment. Here, we walk through the Chicago Taxi example in an interactive notebook.
Working in an interactive notebook is a useful way to become familiar with the structure of a TFX pipeline. It's also useful when doing development of your own pipelines as a lightweight development environment, but you should be aware that there are differences in the way interactive notebooks are orchestrated, and how they access metadata artifacts.
### Orchestration
In a production deployment of TFX, you will use an orchestrator such as Apache Airflow, Kubeflow Pipelines, or Apache Beam to orchestrate a pre-defined pipeline graph of TFX components. In an interactive notebook, the notebook itself is the orchestrator, running each TFX component as you execute the notebook cells.
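The core job of any of these orchestrators is to run components in dependency order. A toy sketch of that idea (not how TFX actually schedules work — component names here are just illustrative):

```python
def run_in_order(deps):
    """deps maps each component name to the components it depends on."""
    done, order = set(), []

    def visit(name):
        if name in done:
            return
        # Run every upstream dependency before this component.
        for upstream in deps.get(name, []):
            visit(upstream)
        done.add(name)
        order.append(name)

    for component in deps:
        visit(component)
    return order

# Example: a miniature TFX-like dependency graph.
pipeline_deps = {
    "ExampleGen": [],
    "StatisticsGen": ["ExampleGen"],
    "SchemaGen": ["StatisticsGen"],
    "Trainer": ["ExampleGen", "SchemaGen"],
}
```

In an interactive notebook, you play this role yourself by executing cells top to bottom.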
### Metadata
In a production deployment of TFX, you will access metadata through the ML Metadata (MLMD) API. MLMD stores metadata properties in a database such as MySQL or SQLite, and stores the metadata payloads in a persistent store such as on your filesystem. In an interactive notebook, both properties and payloads are stored in an ephemeral SQLite database in the `/tmp` directory on the Jupyter notebook or Colab server.
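To make the metadata story concrete, here is a toy illustration of recording artifact properties in an ephemeral SQLite database. MLMD's actual schema is much richer — this only sketches the idea:

```python
import os
import sqlite3
import tempfile

# An ephemeral database, similar in spirit to what InteractiveContext creates.
db_path = os.path.join(tempfile.mkdtemp(), "metadata.sqlite")
conn = sqlite3.connect(db_path)
conn.execute(
    "CREATE TABLE artifacts (id INTEGER PRIMARY KEY, type TEXT, uri TEXT)")

# Record a (type, uri) pair for a produced artifact; the payload itself
# lives on the filesystem at that URI.
conn.execute(
    "INSERT INTO artifacts (type, uri) VALUES (?, ?)",
    ("Examples", "/tmp/tfx-data/examples/1"))
conn.commit()

rows = conn.execute("SELECT type, uri FROM artifacts").fetchall()
```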
## Setup
First, we install and import the necessary packages, set up paths, and download data.
### Upgrade Pip
To avoid upgrading Pip on a local system, we first check that we're running in Colab. Local systems can of course be upgraded separately.
```
try:
import colab
!pip install --upgrade pip
except:
pass
```
### Install TFX
**Note: In Google Colab, because of package updates, the first time you run this cell you must restart the runtime (Runtime > Restart runtime ...).**
```
!pip install -q -U --use-feature=2020-resolver tfx
```
## Did you restart the runtime?
If you are using Google Colab, the first time that you run the cell above, you must restart the runtime (Runtime > Restart runtime ...). This is because of the way that Colab loads packages.
### Import packages
We import necessary packages, including standard TFX component classes.
```
import os
import pprint
import tempfile
import urllib
import absl
import tensorflow as tf
import tensorflow_model_analysis as tfma
tf.get_logger().propagate = False
pp = pprint.PrettyPrinter()
import tfx
from tfx.components import CsvExampleGen
from tfx.components import Evaluator
from tfx.components import ExampleValidator
from tfx.components import Pusher
from tfx.components import ResolverNode
from tfx.components import SchemaGen
from tfx.components import StatisticsGen
from tfx.components import Trainer
from tfx.components import Transform
from tfx.dsl.experimental import latest_blessed_model_resolver
from tfx.orchestration import metadata
from tfx.orchestration import pipeline
from tfx.orchestration.experimental.interactive.interactive_context import InteractiveContext
from tfx.proto import pusher_pb2
from tfx.proto import trainer_pb2
from tfx.proto.evaluator_pb2 import SingleSlicingSpec
from tfx.utils.dsl_utils import external_input
from tfx.types import Channel
from tfx.types.standard_artifacts import Model
from tfx.types.standard_artifacts import ModelBlessing
%load_ext tfx.orchestration.experimental.interactive.notebook_extensions.skip
```
Let's check the library versions.
```
print('TensorFlow version: {}'.format(tf.__version__))
print('TFX version: {}'.format(tfx.__version__))
```
### Set up pipeline paths
```
# This is the root directory for your TFX pip package installation.
_tfx_root = tfx.__path__[0]
# This is the directory containing the TFX Chicago Taxi Pipeline example.
_taxi_root = os.path.join(_tfx_root, 'examples/chicago_taxi_pipeline')
# This is the path where your model will be pushed for serving.
_serving_model_dir = os.path.join(
tempfile.mkdtemp(), 'serving_model/taxi_simple')
# Set up logging.
absl.logging.set_verbosity(absl.logging.INFO)
```
### Download example data
We download the example dataset for use in our TFX pipeline.
The dataset we're using is the [Taxi Trips dataset](https://data.cityofchicago.org/Transportation/Taxi-Trips/wrvz-psew) released by the City of Chicago. The columns in this dataset are:
<table>
<tr><td>pickup_community_area</td><td>fare</td><td>trip_start_month</td></tr>
<tr><td>trip_start_hour</td><td>trip_start_day</td><td>trip_start_timestamp</td></tr>
<tr><td>pickup_latitude</td><td>pickup_longitude</td><td>dropoff_latitude</td></tr>
<tr><td>dropoff_longitude</td><td>trip_miles</td><td>pickup_census_tract</td></tr>
<tr><td>dropoff_census_tract</td><td>payment_type</td><td>company</td></tr>
<tr><td>trip_seconds</td><td>dropoff_community_area</td><td>tips</td></tr>
</table>
With this dataset, we will build a model that predicts the `tips` of a trip.
```
_data_root = tempfile.mkdtemp(prefix='tfx-data')
DATA_PATH = 'https://raw.githubusercontent.com/tensorflow/tfx/master/tfx/examples/chicago_taxi_pipeline/data/simple/data.csv'
_data_filepath = os.path.join(_data_root, "data.csv")
urllib.request.urlretrieve(DATA_PATH, _data_filepath)
```
Take a quick look at the CSV file.
```
!head {_data_filepath}
```
*Disclaimer: This site provides applications using data that has been modified for use from its original source, www.cityofchicago.org, the official website of the City of Chicago. The City of Chicago makes no claims as to the content, accuracy, timeliness, or completeness of any of the data provided at this site. The data provided at this site is subject to change at any time. It is understood that the data provided at this site is being used at one’s own risk.*
### Create the InteractiveContext
Last, we create an InteractiveContext, which will allow us to run TFX components interactively in this notebook.
```
# Here, we create an InteractiveContext using default parameters. This will
# use a temporary directory with an ephemeral ML Metadata database instance.
# To use your own pipeline root or database, the optional properties
# `pipeline_root` and `metadata_connection_config` may be passed to
# InteractiveContext. Calls to InteractiveContext are no-ops outside of the
# notebook.
context = InteractiveContext()
```
## Run TFX components interactively
In the cells that follow, we create TFX components one-by-one, run each of them, and visualize their output artifacts.
### ExampleGen
The `ExampleGen` component is usually at the start of a TFX pipeline. It will:
1. Split data into training and evaluation sets (by default, 2/3 training + 1/3 eval)
2. Convert data into the `tf.Example` format
3. Copy data into the `_tfx_root` directory for other components to access
`ExampleGen` takes as input the path to your data source. In our case, this is the `_data_root` path that contains the downloaded CSV.
Note: In this notebook, we can instantiate components one-by-one and run them with `InteractiveContext.run()`. By contrast, in a production setting, we would specify all the components upfront in a `Pipeline` to pass to the orchestrator (see the "Export to Pipeline" section).
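The default split can be pictured as deterministic hash bucketing: each record lands in one of three buckets, two of which feed the training set and one the eval set. A toy sketch of that idea (this is not ExampleGen's exact hashing scheme):

```python
import hashlib

def assign_split(record_id: str, num_buckets: int = 3) -> str:
    """Deterministically route a record to 'train' (2 buckets) or 'eval' (1)."""
    digest = hashlib.sha256(record_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % num_buckets
    return "train" if bucket < 2 else "eval"

splits = [assign_split(f"row-{i}") for i in range(3000)]
```

Because the assignment depends only on the record, re-running the split reproduces the same partition.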
```
example_gen = CsvExampleGen(input=external_input(_data_root))
context.run(example_gen)
```
Let's examine the output artifacts of `ExampleGen`. This component produces two artifacts, training examples and evaluation examples:
```
artifact = example_gen.outputs['examples'].get()[0]
print(artifact.split_names, artifact.uri)
```
We can also take a look at the first three training examples:
```
# Get the URI of the output artifact representing the training examples, which is a directory
train_uri = os.path.join(example_gen.outputs['examples'].get()[0].uri, 'train')
# Get the list of files in this directory (all compressed TFRecord files)
tfrecord_filenames = [os.path.join(train_uri, name)
for name in os.listdir(train_uri)]
# Create a `TFRecordDataset` to read these files
dataset = tf.data.TFRecordDataset(tfrecord_filenames, compression_type="GZIP")
# Iterate over the first 3 records and decode them.
for tfrecord in dataset.take(3):
serialized_example = tfrecord.numpy()
example = tf.train.Example()
example.ParseFromString(serialized_example)
pp.pprint(example)
```
Now that `ExampleGen` has finished ingesting the data, the next step is data analysis.
### StatisticsGen
The `StatisticsGen` component computes statistics over your dataset for data analysis, as well as for use in downstream components. It uses the [TensorFlow Data Validation](https://www.tensorflow.org/tfx/data_validation/get_started) library.
`StatisticsGen` takes as input the dataset we just ingested using `ExampleGen`.
```
statistics_gen = StatisticsGen(
examples=example_gen.outputs['examples'])
context.run(statistics_gen)
```
After `StatisticsGen` finishes running, we can visualize the outputted statistics. Try playing with the different plots!
```
context.show(statistics_gen.outputs['statistics'])
```
### SchemaGen
The `SchemaGen` component generates a schema based on your data statistics. (A schema defines the expected bounds, types, and properties of the features in your dataset.) It also uses the [TensorFlow Data Validation](https://www.tensorflow.org/tfx/data_validation/get_started) library.
`SchemaGen` will take as input the statistics that we generated with `StatisticsGen`, looking at the training split by default.
```
schema_gen = SchemaGen(
statistics=statistics_gen.outputs['statistics'],
infer_feature_shape=False)
context.run(schema_gen)
```
After `SchemaGen` finishes running, we can visualize the generated schema as a table.
```
context.show(schema_gen.outputs['schema'])
```
Each feature in your dataset shows up as a row in the schema table, alongside its properties. The schema also captures all the values that a categorical feature takes on, denoted as its domain.
To learn more about schemas, see [the SchemaGen documentation](https://www.tensorflow.org/tfx/guide/schemagen).
### ExampleValidator
The `ExampleValidator` component detects anomalies in your data, based on the expectations defined by the schema. It also uses the [TensorFlow Data Validation](https://www.tensorflow.org/tfx/data_validation/get_started) library.
`ExampleValidator` will take as input the statistics from `StatisticsGen`, and the schema from `SchemaGen`.
```
example_validator = ExampleValidator(
statistics=statistics_gen.outputs['statistics'],
schema=schema_gen.outputs['schema'])
context.run(example_validator)
```
After `ExampleValidator` finishes running, we can visualize the anomalies as a table.
```
context.show(example_validator.outputs['anomalies'])
```
In the anomalies table, we can see that there are no anomalies. This is what we'd expect, since this the first dataset that we've analyzed and the schema is tailored to it. You should review this schema -- anything unexpected means an anomaly in the data. Once reviewed, the schema can be used to guard future data, and anomalies produced here can be used to debug model performance, understand how your data evolves over time, and identify data errors.
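The guarding idea can be illustrated with a toy check: compare new records against the domains captured by a schema. This is only a conceptual sketch, not TFDV's implementation, and the feature values are invented:

```python
def find_domain_anomalies(records, domains):
    """Flag (feature, value) pairs whose value falls outside the schema domain."""
    anomalies = []
    for record in records:
        for feature, value in record.items():
            if feature in domains and value not in domains[feature]:
                anomalies.append((feature, value))
    return anomalies

# The "schema": domains observed when the schema was generated.
schema_domains = {"payment_type": {"Cash", "Credit Card"}}

# "Future" data to guard: one record carries an unseen value.
new_data = [{"payment_type": "Cash"}, {"payment_type": "Bitcoin"}]
```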
### Transform
The `Transform` component performs feature engineering for both training and serving. It uses the [TensorFlow Transform](https://www.tensorflow.org/tfx/transform/get_started) library.
`Transform` will take as input the data from `ExampleGen`, the schema from `SchemaGen`, as well as a module that contains user-defined Transform code.
Let's see an example of user-defined Transform code below (for an introduction to the TensorFlow Transform APIs, [see the tutorial](https://www.tensorflow.org/tfx/tutorials/transform/simple)). First, we define a few constants for feature engineering:
Note: The `%%writefile` cell magic will save the contents of the cell as a `.py` file on disk. This allows the `Transform` component to load your code as a module.
```
_taxi_constants_module_file = 'taxi_constants.py'
%%writefile {_taxi_constants_module_file}
# Categorical features are assumed to each have a maximum value in the dataset.
MAX_CATEGORICAL_FEATURE_VALUES = [24, 31, 12]
CATEGORICAL_FEATURE_KEYS = [
'trip_start_hour', 'trip_start_day', 'trip_start_month',
'pickup_census_tract', 'dropoff_census_tract', 'pickup_community_area',
'dropoff_community_area'
]
DENSE_FLOAT_FEATURE_KEYS = ['trip_miles', 'fare', 'trip_seconds']
# Number of buckets used by tf.transform for encoding each feature.
FEATURE_BUCKET_COUNT = 10
BUCKET_FEATURE_KEYS = [
'pickup_latitude', 'pickup_longitude', 'dropoff_latitude',
'dropoff_longitude'
]
# Number of vocabulary terms used for encoding VOCAB_FEATURES by tf.transform
VOCAB_SIZE = 1000
# Count of out-of-vocab buckets in which unrecognized VOCAB_FEATURES are hashed.
OOV_SIZE = 10
VOCAB_FEATURE_KEYS = [
'payment_type',
'company',
]
# Keys
LABEL_KEY = 'tips'
FARE_KEY = 'fare'
def transformed_name(key):
return key + '_xf'
```
Next, we write a `preprocessing_fn` that takes in raw data as input, and returns transformed features that our model can train on:
```
_taxi_transform_module_file = 'taxi_transform.py'
%%writefile {_taxi_transform_module_file}
import tensorflow as tf
import tensorflow_transform as tft
import taxi_constants
_DENSE_FLOAT_FEATURE_KEYS = taxi_constants.DENSE_FLOAT_FEATURE_KEYS
_VOCAB_FEATURE_KEYS = taxi_constants.VOCAB_FEATURE_KEYS
_VOCAB_SIZE = taxi_constants.VOCAB_SIZE
_OOV_SIZE = taxi_constants.OOV_SIZE
_FEATURE_BUCKET_COUNT = taxi_constants.FEATURE_BUCKET_COUNT
_BUCKET_FEATURE_KEYS = taxi_constants.BUCKET_FEATURE_KEYS
_CATEGORICAL_FEATURE_KEYS = taxi_constants.CATEGORICAL_FEATURE_KEYS
_FARE_KEY = taxi_constants.FARE_KEY
_LABEL_KEY = taxi_constants.LABEL_KEY
_transformed_name = taxi_constants.transformed_name
def preprocessing_fn(inputs):
"""tf.transform's callback function for preprocessing inputs.
Args:
inputs: map from feature keys to raw not-yet-transformed features.
Returns:
Map from string feature key to transformed feature operations.
"""
outputs = {}
for key in _DENSE_FLOAT_FEATURE_KEYS:
# Preserve this feature as a dense float, setting nan's to the mean.
outputs[_transformed_name(key)] = tft.scale_to_z_score(
_fill_in_missing(inputs[key]))
for key in _VOCAB_FEATURE_KEYS:
# Build a vocabulary for this feature.
outputs[_transformed_name(key)] = tft.compute_and_apply_vocabulary(
_fill_in_missing(inputs[key]),
top_k=_VOCAB_SIZE,
num_oov_buckets=_OOV_SIZE)
for key in _BUCKET_FEATURE_KEYS:
outputs[_transformed_name(key)] = tft.bucketize(
_fill_in_missing(inputs[key]), _FEATURE_BUCKET_COUNT)
for key in _CATEGORICAL_FEATURE_KEYS:
outputs[_transformed_name(key)] = _fill_in_missing(inputs[key])
# Was this passenger a big tipper?
taxi_fare = _fill_in_missing(inputs[_FARE_KEY])
tips = _fill_in_missing(inputs[_LABEL_KEY])
outputs[_transformed_name(_LABEL_KEY)] = tf.where(
tf.math.is_nan(taxi_fare),
tf.cast(tf.zeros_like(taxi_fare), tf.int64),
# Test if the tip was > 20% of the fare.
tf.cast(
tf.greater(tips, tf.multiply(taxi_fare, tf.constant(0.2))), tf.int64))
return outputs
def _fill_in_missing(x):
"""Replace missing values in a SparseTensor.
Fills in missing values of `x` with '' or 0, and converts to a dense tensor.
Args:
x: A `SparseTensor` of rank 2. Its dense shape should have size at most 1
in the second dimension.
Returns:
A rank 1 tensor where missing values of `x` have been filled in.
"""
default_value = '' if x.dtype == tf.string else 0
return tf.squeeze(
tf.sparse.to_dense(
tf.SparseTensor(x.indices, x.values, [x.dense_shape[0], 1]),
default_value),
axis=1)
```
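Conceptually, analyzers like `tft.scale_to_z_score` and `tft.bucketize` compute full-pass statistics over the dataset and then apply them per example. A NumPy analogue of the two transforms — a sketch of the math only, not the TFT implementation:

```python
import numpy as np

def scale_to_z_score(x):
    """Standardize using dataset-wide mean and standard deviation."""
    return (x - x.mean()) / x.std()

def bucketize(x, num_buckets):
    """Assign each value to a quantile bucket in [0, num_buckets)."""
    # Interior quantile edges computed over the whole dataset.
    edges = np.quantile(x, np.linspace(0, 1, num_buckets + 1)[1:-1])
    return np.searchsorted(edges, x, side="right")

values = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
z = scale_to_z_score(values)
buckets = bucketize(values, num_buckets=5)
```

The key point TFT adds on top of this math is that the computed statistics are baked into a graph, so the exact same transformation is applied at serving time.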
Now, we pass in this feature engineering code to the `Transform` component and run it to transform your data.
```
transform = Transform(
examples=example_gen.outputs['examples'],
schema=schema_gen.outputs['schema'],
module_file=os.path.abspath(_taxi_transform_module_file))
context.run(transform)
```
Let's examine the output artifacts of `Transform`. This component produces two types of outputs:
* `transform_graph` is the graph that can perform the preprocessing operations (this graph will be included in the serving and evaluation models).
* `transformed_examples` represents the preprocessed training and evaluation data.
```
transform.outputs
```
Take a peek at the `transform_graph` artifact. It points to a directory containing three subdirectories.
```
train_uri = transform.outputs['transform_graph'].get()[0].uri
os.listdir(train_uri)
```
The `transformed_metadata` subdirectory contains the schema of the preprocessed data. The `transform_fn` subdirectory contains the actual preprocessing graph. The `metadata` subdirectory contains the schema of the original data.
We can also take a look at the first three transformed examples:
```
# Get the URI of the output artifact representing the transformed examples, which is a directory
train_uri = os.path.join(transform.outputs['transformed_examples'].get()[0].uri, 'train')
# Get the list of files in this directory (all compressed TFRecord files)
tfrecord_filenames = [os.path.join(train_uri, name)
for name in os.listdir(train_uri)]
# Create a `TFRecordDataset` to read these files
dataset = tf.data.TFRecordDataset(tfrecord_filenames, compression_type="GZIP")
# Iterate over the first 3 records and decode them.
for tfrecord in dataset.take(3):
serialized_example = tfrecord.numpy()
example = tf.train.Example()
example.ParseFromString(serialized_example)
pp.pprint(example)
```
After the `Transform` component has transformed your data into features, the next step is to train a model.
### Trainer
The `Trainer` component will train a model that you define in TensorFlow (either using the Estimator API or the Keras API with [`model_to_estimator`](https://www.tensorflow.org/api_docs/python/tf/keras/estimator/model_to_estimator)).
`Trainer` takes as input the schema from `SchemaGen`, the transformed data and graph from `Transform`, training parameters, as well as a module that contains user-defined model code.
Let's see an example of user-defined model code below (for an introduction to the TensorFlow Estimator APIs, [see the tutorial](https://www.tensorflow.org/tutorials/estimator/premade)):
```
_taxi_trainer_module_file = 'taxi_trainer.py'
%%writefile {_taxi_trainer_module_file}
import tensorflow as tf
import tensorflow_model_analysis as tfma
import tensorflow_transform as tft
from tensorflow_transform.tf_metadata import schema_utils
from tfx_bsl.tfxio import dataset_options
import taxi_constants
_DENSE_FLOAT_FEATURE_KEYS = taxi_constants.DENSE_FLOAT_FEATURE_KEYS
_VOCAB_FEATURE_KEYS = taxi_constants.VOCAB_FEATURE_KEYS
_VOCAB_SIZE = taxi_constants.VOCAB_SIZE
_OOV_SIZE = taxi_constants.OOV_SIZE
_FEATURE_BUCKET_COUNT = taxi_constants.FEATURE_BUCKET_COUNT
_BUCKET_FEATURE_KEYS = taxi_constants.BUCKET_FEATURE_KEYS
_CATEGORICAL_FEATURE_KEYS = taxi_constants.CATEGORICAL_FEATURE_KEYS
_MAX_CATEGORICAL_FEATURE_VALUES = taxi_constants.MAX_CATEGORICAL_FEATURE_VALUES
_LABEL_KEY = taxi_constants.LABEL_KEY
_transformed_name = taxi_constants.transformed_name
def _transformed_names(keys):
return [_transformed_name(key) for key in keys]
# Tf.Transform considers these features as "raw"
def _get_raw_feature_spec(schema):
return schema_utils.schema_as_feature_spec(schema).feature_spec
def _build_estimator(config, hidden_units=None, warm_start_from=None):
"""Build an estimator for predicting the tipping behavior of taxi riders.
Args:
config: tf.estimator.RunConfig defining the runtime environment for the
estimator (including model_dir).
hidden_units: [int], the layer sizes of the DNN (input layer first)
warm_start_from: Optional directory to warm start from.
Returns:
A dict of the following:
- estimator: The estimator that will be used for training and eval.
- train_spec: Spec for training.
- eval_spec: Spec for eval.
- eval_input_receiver_fn: Input function for eval.
"""
real_valued_columns = [
tf.feature_column.numeric_column(key, shape=())
for key in _transformed_names(_DENSE_FLOAT_FEATURE_KEYS)
]
categorical_columns = [
tf.feature_column.categorical_column_with_identity(
key, num_buckets=_VOCAB_SIZE + _OOV_SIZE, default_value=0)
for key in _transformed_names(_VOCAB_FEATURE_KEYS)
]
categorical_columns += [
tf.feature_column.categorical_column_with_identity(
key, num_buckets=_FEATURE_BUCKET_COUNT, default_value=0)
for key in _transformed_names(_BUCKET_FEATURE_KEYS)
]
categorical_columns += [
tf.feature_column.categorical_column_with_identity( # pylint: disable=g-complex-comprehension
key,
num_buckets=num_buckets,
default_value=0) for key, num_buckets in zip(
_transformed_names(_CATEGORICAL_FEATURE_KEYS),
_MAX_CATEGORICAL_FEATURE_VALUES)
]
return tf.estimator.DNNLinearCombinedClassifier(
config=config,
linear_feature_columns=categorical_columns,
dnn_feature_columns=real_valued_columns,
dnn_hidden_units=hidden_units or [100, 70, 50, 25],
warm_start_from=warm_start_from)
def _example_serving_receiver_fn(tf_transform_graph, schema):
"""Build the serving in inputs.
Args:
tf_transform_graph: A TFTransformOutput.
schema: the schema of the input data.
Returns:
Tensorflow graph which parses examples, applying tf-transform to them.
"""
raw_feature_spec = _get_raw_feature_spec(schema)
raw_feature_spec.pop(_LABEL_KEY)
raw_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
raw_feature_spec, default_batch_size=None)
serving_input_receiver = raw_input_fn()
transformed_features = tf_transform_graph.transform_raw_features(
serving_input_receiver.features)
return tf.estimator.export.ServingInputReceiver(
transformed_features, serving_input_receiver.receiver_tensors)
def _eval_input_receiver_fn(tf_transform_graph, schema):
"""Build everything needed for the tf-model-analysis to run the model.
Args:
tf_transform_graph: A TFTransformOutput.
schema: the schema of the input data.
Returns:
EvalInputReceiver function, which contains:
- Tensorflow graph which parses raw untransformed features, applies the
tf-transform preprocessing operators.
- Set of raw, untransformed features.
- Label against which predictions will be compared.
"""
# Notice that the inputs are raw features, not transformed features here.
raw_feature_spec = _get_raw_feature_spec(schema)
serialized_tf_example = tf.compat.v1.placeholder(
dtype=tf.string, shape=[None], name='input_example_tensor')
# Add a parse_example operator to the tensorflow graph, which will parse
# raw, untransformed, tf examples.
features = tf.io.parse_example(serialized_tf_example, raw_feature_spec)
# Now that we have our raw examples, process them through the tf-transform
# function computed during the preprocessing step.
transformed_features = tf_transform_graph.transform_raw_features(
features)
# The key name MUST be 'examples'.
receiver_tensors = {'examples': serialized_tf_example}
# NOTE: Model is driven by transformed features (since training works on the
  # materialized output of TFT), but slicing will happen on raw features.
features.update(transformed_features)
return tfma.export.EvalInputReceiver(
features=features,
receiver_tensors=receiver_tensors,
labels=transformed_features[_transformed_name(_LABEL_KEY)])
def _input_fn(file_pattern, data_accessor, tf_transform_output, batch_size=200):
"""Generates features and label for tuning/training.
Args:
file_pattern: List of paths or patterns of input tfrecord files.
data_accessor: DataAccessor for converting input to RecordBatch.
tf_transform_output: A TFTransformOutput.
batch_size: representing the number of consecutive elements of returned
dataset to combine in a single batch
Returns:
A dataset that contains (features, indices) tuple where features is a
dictionary of Tensors, and indices is a single Tensor of label indices.
"""
return data_accessor.tf_dataset_factory(
file_pattern,
dataset_options.TensorFlowDatasetOptions(
batch_size=batch_size, label_key=_transformed_name(_LABEL_KEY)),
tf_transform_output.transformed_metadata.schema)
# TFX will call this function
def trainer_fn(trainer_fn_args, schema):
"""Build the estimator using the high level API.
Args:
trainer_fn_args: Holds args used to train the model as name/value pairs.
schema: Holds the schema of the training examples.
Returns:
A dict of the following:
- estimator: The estimator that will be used for training and eval.
- train_spec: Spec for training.
- eval_spec: Spec for eval.
- eval_input_receiver_fn: Input function for eval.
"""
# Number of nodes in the first layer of the DNN
first_dnn_layer_size = 100
num_dnn_layers = 4
dnn_decay_factor = 0.7
train_batch_size = 40
eval_batch_size = 40
tf_transform_graph = tft.TFTransformOutput(trainer_fn_args.transform_output)
train_input_fn = lambda: _input_fn( # pylint: disable=g-long-lambda
trainer_fn_args.train_files,
trainer_fn_args.data_accessor,
tf_transform_graph,
batch_size=train_batch_size)
eval_input_fn = lambda: _input_fn( # pylint: disable=g-long-lambda
trainer_fn_args.eval_files,
trainer_fn_args.data_accessor,
tf_transform_graph,
batch_size=eval_batch_size)
train_spec = tf.estimator.TrainSpec( # pylint: disable=g-long-lambda
train_input_fn,
max_steps=trainer_fn_args.train_steps)
serving_receiver_fn = lambda: _example_serving_receiver_fn( # pylint: disable=g-long-lambda
tf_transform_graph, schema)
exporter = tf.estimator.FinalExporter('chicago-taxi', serving_receiver_fn)
eval_spec = tf.estimator.EvalSpec(
eval_input_fn,
steps=trainer_fn_args.eval_steps,
exporters=[exporter],
name='chicago-taxi-eval')
run_config = tf.estimator.RunConfig(
save_checkpoints_steps=999, keep_checkpoint_max=1)
run_config = run_config.replace(model_dir=trainer_fn_args.serving_model_dir)
estimator = _build_estimator(
      # Construct layer sizes with exponential decay
hidden_units=[
max(2, int(first_dnn_layer_size * dnn_decay_factor**i))
for i in range(num_dnn_layers)
],
config=run_config,
warm_start_from=trainer_fn_args.base_model)
# Create an input receiver for TFMA processing
receiver_fn = lambda: _eval_input_receiver_fn( # pylint: disable=g-long-lambda
tf_transform_graph, schema)
return {
'estimator': estimator,
'train_spec': train_spec,
'eval_spec': eval_spec,
'eval_input_receiver_fn': receiver_fn
}
```
Now, we pass in this model code to the `Trainer` component and run it to train the model.
```
trainer = Trainer(
module_file=os.path.abspath(_taxi_trainer_module_file),
transformed_examples=transform.outputs['transformed_examples'],
schema=schema_gen.outputs['schema'],
transform_graph=transform.outputs['transform_graph'],
train_args=trainer_pb2.TrainArgs(num_steps=10000),
eval_args=trainer_pb2.EvalArgs(num_steps=5000))
context.run(trainer)
```
#### Analyze Training with TensorBoard
Optionally, we can connect TensorBoard to the Trainer to analyze our model's training curves.
```
# Get the URI of the output artifact representing the training logs, which is a directory
model_run_dir = trainer.outputs['model_run'].get()[0].uri
%load_ext tensorboard
%tensorboard --logdir {model_run_dir}
```
### Evaluator
The `Evaluator` component computes model performance metrics over the evaluation set. It uses the [TensorFlow Model Analysis](https://www.tensorflow.org/tfx/model_analysis/get_started) library. The `Evaluator` can also optionally validate that a newly trained model is better than the previous model. This is useful in a production pipeline setting where you may automatically train and validate a model every day. In this notebook, we only train one model, so the `Evaluator` will automatically label the model as "good".
`Evaluator` will take as input the data from `ExampleGen`, the trained model from `Trainer`, and slicing configuration. The slicing configuration allows you to slice your metrics on feature values (e.g. how does your model perform on taxi trips that start at 8am versus 8pm?). See an example of this configuration below:
```
eval_config = tfma.EvalConfig(
model_specs=[
# Using signature 'eval' implies the use of an EvalSavedModel. To use
        # a serving model, remove the signature to default to 'serving_default'
# and add a label_key.
tfma.ModelSpec(signature_name='eval')
],
metrics_specs=[
tfma.MetricsSpec(
# The metrics added here are in addition to those saved with the
# model (assuming either a keras model or EvalSavedModel is used).
# Any metrics added into the saved model (for example using
# model.compile(..., metrics=[...]), etc) will be computed
# automatically.
metrics=[
tfma.MetricConfig(class_name='ExampleCount')
],
# To add validation thresholds for metrics saved with the model,
# add them keyed by metric name to the thresholds map.
thresholds = {
'accuracy': tfma.MetricThreshold(
value_threshold=tfma.GenericValueThreshold(
lower_bound={'value': 0.5}),
change_threshold=tfma.GenericChangeThreshold(
direction=tfma.MetricDirection.HIGHER_IS_BETTER,
absolute={'value': -1e-10}))
}
)
],
slicing_specs=[
# An empty slice spec means the overall slice, i.e. the whole dataset.
tfma.SlicingSpec(),
# Data can be sliced along a feature column. In this case, data is
# sliced along feature column trip_start_hour.
tfma.SlicingSpec(feature_keys=['trip_start_hour'])
])
```
Next, we give this configuration to `Evaluator` and run it.
```
# Use TFMA to compute evaluation statistics over features of a model and
# validate them against a baseline.
# The model resolver is only required if performing model validation in addition
# to evaluation. In this case we validate against the latest blessed model. If
# no model has been blessed before (as in this case) the evaluator will make our
# candidate the first blessed model.
model_resolver = ResolverNode(
instance_name='latest_blessed_model_resolver',
resolver_class=latest_blessed_model_resolver.LatestBlessedModelResolver,
model=Channel(type=Model),
model_blessing=Channel(type=ModelBlessing))
context.run(model_resolver)
evaluator = Evaluator(
examples=example_gen.outputs['examples'],
model=trainer.outputs['model'],
#baseline_model=model_resolver.outputs['model'],
# Change threshold will be ignored if there is no baseline (first run).
eval_config=eval_config)
context.run(evaluator)
```
Now let's examine the output artifacts of `Evaluator`.
```
evaluator.outputs
```
Using the `evaluation` output we can show the default visualization of global metrics on the entire evaluation set.
```
context.show(evaluator.outputs['evaluation'])
```
To see the visualization for sliced evaluation metrics, we can directly call the TensorFlow Model Analysis library.
```
import tensorflow_model_analysis as tfma
# Get the TFMA output result path and load the result.
PATH_TO_RESULT = evaluator.outputs['evaluation'].get()[0].uri
tfma_result = tfma.load_eval_result(PATH_TO_RESULT)
# Show data sliced along feature column trip_start_hour.
tfma.view.render_slicing_metrics(
tfma_result, slicing_column='trip_start_hour')
```
This visualization shows the same metrics, but computed at every feature value of `trip_start_hour` instead of on the entire evaluation set.
TensorFlow Model Analysis supports many other visualizations, such as Fairness Indicators and plotting a time series of model performance. To learn more, see [the tutorial](https://www.tensorflow.org/tfx/tutorials/model_analysis/tfma_basic).
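The slicing idea itself is simple: group examples by a feature value and compute the metric per group. A toy sketch (not TFMA's implementation; the example records are invented):

```python
from collections import defaultdict

def sliced_accuracy(examples, slice_key):
    """Compute accuracy per value of `slice_key`."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for ex in examples:
        slice_value = ex[slice_key]
        total[slice_value] += 1
        correct[slice_value] += int(ex["prediction"] == ex["label"])
    return {v: correct[v] / total[v] for v in total}

examples = [
    {"trip_start_hour": 8,  "label": 1, "prediction": 1},
    {"trip_start_hour": 8,  "label": 0, "prediction": 1},
    {"trip_start_hour": 20, "label": 1, "prediction": 1},
]
```

A per-slice view like this is what surfaces problems (e.g. a model that does well overall but poorly at a particular hour) that an aggregate metric would hide.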
Since we added thresholds to our config, validation output is also available. The presence of a `blessing` artifact indicates that our model passed validation. Since this is the first validation being performed, the candidate is automatically blessed.
```
blessing_uri = evaluator.outputs.blessing.get()[0].uri
!ls -l {blessing_uri}
```
We can also verify success by loading the validation result record:
```
PATH_TO_RESULT = evaluator.outputs['evaluation'].get()[0].uri
print(tfma.load_validation_result(PATH_TO_RESULT))
```
### Pusher
The `Pusher` component is usually at the end of a TFX pipeline. It checks whether a model has passed validation, and if so, exports the model to `_serving_model_dir`.
```
pusher = Pusher(
model=trainer.outputs['model'],
model_blessing=evaluator.outputs['blessing'],
push_destination=pusher_pb2.PushDestination(
filesystem=pusher_pb2.PushDestination.Filesystem(
base_directory=_serving_model_dir)))
context.run(pusher)
```
Let's examine the output artifacts of `Pusher`.
```
pusher.outputs
```
In particular, the Pusher will export your model in the SavedModel format, which looks like this:
```
push_uri = pusher.outputs.model_push.get()[0].uri
model = tf.saved_model.load(push_uri)
for item in model.signatures.items():
pp.pprint(item)
```
We've finished our tour of the built-in TFX components!
# Getting Started with BlazingSQL
In this notebook, we will cover:
- How to set up [BlazingSQL](https://blazingsql.com) and the [RAPIDS AI](https://rapids.ai/) suite.
- How to read and query csv files with cuDF and BlazingSQL.

## Setup
### Environment Sanity Check
RAPIDS packages (BlazingSQL included) require Pascal+ architecture to run. For Colab, this translates to a T4 GPU instance.
The cell below will let you know what type of GPU you've been allocated, and how to proceed.
```
!wget https://github.com/BlazingDB/bsql-demos/raw/master/utils/colab_env.py
!python colab_env.py
```
## Installs
The cell below pulls our Google Colab install script from the `bsql-demos` repo then runs it. The script first installs miniconda, then uses miniconda to install BlazingSQL and RAPIDS AI. This takes a few minutes to run.
```
!wget https://github.com/BlazingDB/bsql-demos/raw/master/utils/bsql-colab.sh
!bash bsql-colab.sh
import sys, os, time
sys.path.append('/usr/local/lib/python3.6/site-packages/')
os.environ['NUMBAPRO_NVVM'] = '/usr/local/cuda/nvvm/lib64/libnvvm.so'
os.environ['NUMBAPRO_LIBDEVICE'] = '/usr/local/cuda/nvvm/libdevice/'
import subprocess
subprocess.Popen(['blazingsql-orchestrator', '9100', '8889', '127.0.0.1', '8890'],stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
subprocess.Popen(['java', '-jar', '/usr/local/lib/blazingsql-algebra.jar', '-p', '8890'])
import pyblazing.apiv2.context as cont
cont.runRal()
time.sleep(1)
```
## Import packages and create Blazing Context
You can think of the BlazingContext much like a SparkContext: it stores information such as the file systems you have registered and the tables you have created. If you have issues running this cell, restart the runtime and try running it again.
```
from blazingsql import BlazingContext
import cudf
bc = BlazingContext()
```
## Read CSV
First we need to download a CSV file. Then we use cuDF to read the CSV file, which gives us a GPU DataFrame (GDF). To learn more about the GDF and how it enables end-to-end workloads on RAPIDS, check out our [blog post](https://blog.blazingdb.com/blazingsql-part-1-the-gpu-dataframe-gdf-and-cudf-in-rapids-ai-96ec15102240).
```
#Download the test CSV
!wget 'https://s3.amazonaws.com/blazingsql-colab/Music.csv'
# like pandas, cudf can simply read the csv
gdf = cudf.read_csv('Music.csv')
# let's see how it looks
gdf.head()
```
## Create a Table
Now we just need to create a table.
```
bc.create_table('music', gdf)
```
## Query a Table
That's it! Now when you write a SQL query, the data is processed on the GPU with BlazingSQL, and the output is a GPU DataFrame (GDF) inside RAPIDS!
```
# query 10 events with a rating of at least 7
result = bc.sql('SELECT * FROM main.music where RATING >= 7 LIMIT 10').get()
# get GDF
result_gdf = result.columns
# display GDF (just like pandas)
result_gdf
```
# You're Ready to Rock
And... that's it! You are now live with BlazingSQL.
Check out our [docs](https://docs.blazingdb.com) to get fancy or to learn more about how BlazingSQL works with the rest of [RAPIDS AI](https://rapids.ai/).
[0: NumPy and the ndarray](gridded_data_tutorial_0.ipynb) | **1: Introduction to xarray** | [2: Daymet data access](gridded_data_tutorial_2.ipynb) | [3: Investigating SWE at Mt. Rainier with Daymet](gridded_data_tutorial_3.ipynb)
# Notebook 1: Introduction to xarray
Waterhackweek 2020 | Steven Pestana (spestana@uw.edu)
**By the end of this notebook you will be able to:**
* Create xarray DataArrays and Datasets
* Index and slice DataArrays and Datasets
* Make plots using xarray objects
* Export xarray Datasets as NetCDF or CSV files
---
#### What do we mean by "gridded data"?
Broadly speaking, this can mean any data with a corresponding location in one or more dimensions. Typically, our dimensions represent points on the Earth's surface in two or three dimensions (latitude, longitude, and elevation), and often include time as an additional dimension. You may also hear the term "raster" data, which also means data points on some grid. These multi-dimensional datasets can be thought of as 2-D images, stacks of 2-D images, or data "cubes" in 3 or more dimensions.
Examples of gridded data:
* Satellite images of Earth's surface, where each pixel represents reflection or emission at some wavelength
* Climate model output, where the model is evaluated at discrete nodes or grid cells
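Before adding labels, the "data cube" idea can be sketched with plain NumPy. The array below is hypothetical (a 12-month stack of small grids), but it shows why labels matter: a bare ndarray gives no hint that axis 0 is time.

```python
import numpy as np

# A minimal "data cube": 12 monthly 4x5 grids, stacked along axis 0 (time).
# A bare ndarray carries no labels -- axis 0 being "time" is pure convention,
# which is exactly the gap labeled formats like NetCDF (and xarray) fill.
cube = np.arange(12 * 4 * 5).reshape(12, 4, 5)

print(cube.shape)      # (12, 4, 5)
print(cube[0].shape)   # one 2-D "image": (4, 5)
print(cube[:, 2, 3])   # a time series at grid cell (row 2, col 3)
```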
Examples of raster/gridded data formats that combine multi-dimensional data along with metadata in a single file:
* [NetCDF](https://www.unidata.ucar.edu/software/netcdf/docs/) (Network Common Data Form) for model data, satellite imagery, and more
* [GeoTIFF](https://trac.osgeo.org/geotiff/) for georeferenced raster imagery (satellite images, digital elevation models, maps, and more)
* [HDF-EOS](https://earthdata.nasa.gov/esdis/eso/standards-and-references/hdf-eos5) (Hierarchical Data Format - Earth Observing Systems)
* [GRIB](https://en.wikipedia.org/wiki/GRIB) (GRIdded Binary) for meteorological data
**How can we easily work with these types of data in python?**
Some python packages for working with gridded data:
* [rasterio](https://rasterio.readthedocs.io/en/latest/)
* [xarray](https://xarray.pydata.org/en/stable/)
* [rioxarray](https://corteva.github.io/rioxarray/stable/)
* [cartopy](https://scitools.org.uk/cartopy/docs/latest/)
**Today we'll be using xarray!**
---
# xarray
The [xarray](https://xarray.pydata.org/) library allows us to read, manipulate, and create **labeled** multi-dimensional arrays and datasets, such as [NetCDF](https://www.unidata.ucar.edu/software/netcdf/) files.
In the image below, we can imagine having two "data cubes" (3-dimensional data arrays) of temperature and precipitation values, each of which corresponds to a particular x and y spatial coordinate, and t time step.
<img src="https://xarray.pydata.org/en/stable/_images/dataset-diagram.png" width=700>
Let's import xarray and start to explore its features...
```
# import the package, and give it the alias "xr"
import xarray as xr
# we will also be using numpy and pandas, import both of these
import numpy as np
import pandas as pd
# for plotting, import matplotlib.pyplot
import matplotlib.pyplot as plt
# tell jupyter to display plots "inline" in the notebook
%matplotlib inline
```
---
# DataArrays
Similar to the `numpy.ndarray` object, the `xarray.DataArray` is a multi-dimensional array, with the addition of labeled dimensions, coordinates, and other metadata. A [DataArray](https://xarray.pydata.org/en/stable/generated/xarray.DataArray.html) contains the following:
* `values` which store the actual data values in a `numpy.ndarray`
* `dims` are the names for each dimension of the `values` array
* `coords` are arrays of labels for each point
* `attrs` is a [dictionary](https://docs.python.org/3/tutorial/datastructures.html#dictionaries) that can contain additional metadata
**Let's create some fake air temperature data to see how these different parts work together to form a DataArray.**
Our goal here is to have 100 years of annual maximum air temperature data for a 10 by 10 grid in a DataArray. (Our data will have a shape of 100 x 10 x 10)
I'm going to use a numpy function to generate some random numbers that are [normally distributed](https://numpy.org/devdocs/reference/random/generated/numpy.random.normal.html) (`np.random.normal()`).
```
# randomly generated annual maximum air temperature data for a 10 by 10 grid
# choose a mean and standard deviation for our random data
mean = 20
standard_deviation = 5
# specify that we want to generate 100 x 10 x 10 random samples
samples = (100, 10, 10)
# generate the random samples
air_temperature_max = np.random.normal(mean, standard_deviation, samples)
# look at this ndarray we just made
air_temperature_max
# look at the shape of this ndarray
air_temperature_max.shape
```
`air_temperature_max` will be the `values` within the DataArray. It is a three-dimensional array, and we've given it a shape of 100 x 10 x 10.
The three dimensions will need names (`dims`) and labels (`coords`).
**Make the `coords` that will be our 100 years**
```
# Make a sequence of 100 years to be our time dimension
years = pd.date_range('1920', periods=100, freq ='1Y')
```
**Make the `coords` that will be our longitudes and latitudes**
```
# Make a sequence of linearly spaced longitude and latitude values
lon = np.linspace(-119, -110, 10)
lat = np.linspace(30, 39, 10)
```
**Make the `dims` names**
```
# Name our dimensions time, lat, and lon, corresponding to the axes with lengths 100 (years), 10 (lat), and 10 (lon)
dimensions = ['time', 'lat', 'lon']
```
**Finally we can create a metadata dictionary which will be included in the DataArray**
```
metadata = {'units': 'C',
'description': 'maximum annual air temperature'}
```
**Now that we have all the individual components of an xarray DataArray, we can create it**
```
tair_max = xr.DataArray(air_temperature_max,
coords=[years, lat, lon],
dims=dimensions,
name='tair_max',
attrs=metadata)
```
**Inspect the DataArray we just created**
```
tair_max
# Get the DataArray dimensions (labels for coordinates)
tair_max.dims
# Get the DataArray coordinates
tair_max.coords
# Look at our attributes
tair_max.attrs
# Take a look at the data values
tair_max.values
```
---
## DataArray indexing/slicing methods
DataArrays can be [indexed or sliced](https://xarray.pydata.org/en/stable/indexing.html) much like ndarrays, but with the addition of using labels.
| Dimension lookup | Index lookup | DataArray syntax |
| --- | --- | --- |
| positional | by integer | `da[:,0]` |
| positional | by label | `da.loc[:,'east_watershed']` |
| by name | by integer | `da.isel(watershed=0)` |
| by name | by label | `da.sel(watershed='east_watershed')` |
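The four lookup styles from the table can be sketched on a small stand-in DataArray (the array and its coordinate values here are hypothetical, with the same `time`, `lat`, `lon` layout as `tair_max`):

```python
import numpy as np
import xarray as xr

# Tiny stand-in with the same (time, lat, lon) layout as tair_max
da = xr.DataArray(
    np.zeros((3, 2, 2)),
    coords=[[1920, 1921, 1922],   # time
            [30.0, 39.0],         # lat
            [-119.0, -110.0]],    # lon
    dims=['time', 'lat', 'lon'])

print(da[0].shape)                         # positional, by integer -> (2, 2)
print(da.loc[:, 30.0].shape)               # positional, by label   -> (3, 2)
print(da.isel(time=0).shape)               # by name, by integer    -> (2, 2)
print(da.sel(lat=30.0, lon=-119.0).shape)  # by name, by label      -> (3,)
```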
Let's select by name and by label, air temperature for just one year, and plot it. (Conveniently, xarray will add axes labels and a title by default.)
```
tair_max.sel(time='2019').plot()
```
Similarly, we can select by longitude and latitude to plot a timeseries. (We made this easy on ourselves by choosing whole numbers for our longitude and latitude coordinates.)
```
tair_max.sel(lat=34, lon=-114).plot()
```
Now let's select a shorter time range using a `slice()` to plot data for this location.
```
tair_max.sel(lat=34, lon=-114, time=slice('2000','2020')).plot()
```
And if we try to plot the whole DataArray, xarray gives us a histogram!
```
tair_max.plot()
```
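Plotting pairs nicely with xarray's reductions over *named* dimensions. A self-contained sketch (the randomly generated array here stands in for `tair_max`); calling `.plot()` on the first result gives a map, and on the second a time series:

```python
import numpy as np
import xarray as xr

# Stand-in with the same (time, lat, lon) shape as tair_max
da = xr.DataArray(
    np.random.normal(20, 5, (100, 10, 10)),
    coords=[np.arange(1920, 2020),
            np.linspace(30, 39, 10),
            np.linspace(-119, -110, 10)],
    dims=['time', 'lat', 'lon'])

print(da.mean(dim='time').shape)          # a (lat, lon) map: (10, 10)
print(da.mean(dim=['lat', 'lon']).shape)  # a domain-average time series: (100,)
```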
---
# Datasets
Similar to the `pandas.DataFrame`, the `xarray.Dataset` contains one or more labeled `xarray.DataArray` objects.
We can create a [Dataset](https://xarray.pydata.org/en/stable/data-structures.html#dataset) with our simulated data here.
**First, create two more DataArrays: annual minimum air temperature and annual cumulative precipitation**
```
# randomly generated annual minimum air temperature data for a 10 by 10 grid
air_temperature_min = np.random.normal(-10, 10, (100, 10, 10))
# randomly generated annual cumulative precipitation data for a 10 by 10 grid
cumulative_precip = np.random.normal(100, 25, (100, 10, 10))
```
Make the DataArrays (note that we're using the same `coords` and `dims` as our first maximum air temperature DataArray)
```
tair_min = xr.DataArray(air_temperature_min,
coords=[years, lat, lon],
dims=dimensions,
name='tair_min',
attrs={'units':'C',
'description': 'minimum annual air temperature'})
precip = xr.DataArray(cumulative_precip,
coords=[years, lat, lon],
dims=dimensions,
name='cumulative_precip',
attrs={'units':'cm',
'description': 'annual cumulative precipitation'})
```
**Now merge our two DataArrays and create a Dataset.**
```
my_data = xr.merge([tair_max, tair_min, precip])
# inspect the Dataset
my_data
```
## Dataset indexing/slicing methods
Datasets can also be [indexed or sliced](https://xarray.pydata.org/en/stable/indexing.html) using the `.isel()` or `.sel()` methods.
| Dimension lookup | Index lookup | Dataset syntax |
| --- | --- | --- |
| positional | by integer | *n/a* |
| positional | by label | *n/a* |
| by name | by integer | `ds.isel(location=0)` |
| by name | by label | `ds.sel(location='stream_gage_1')` |
**Select temperatures and precipitation for just one grid cell with `.sel()`**
```
# by name, by label
my_data.sel(lon=-114, lat=35)
```
**Select temperatures and precipitation for just one year with `.isel()`**
```
# by name, by integer
my_data.isel(time=0)
```
---
## Make some plots:
Using our indexing/slicing methods, create some plots showing 1) a timeseries of all three variables at a single point, and 2) maps of each variable at two points in time.
```
# 1) create timeseries of temperature and precipitation for a single location
# create a figure with 2 rows and 1 column of subplots
fig, ax = plt.subplots(nrows=2, ncols=1, figsize=(10,7), tight_layout=True)
# pick a longitude and latitude in our dataset
my_lon=-114
my_lat=35
# first subplot
# Plot tair_max
my_data.sel(lon=my_lon, lat=my_lat).tair_max.plot(ax=ax[0], color='r', linestyle='-', label='Tair_max')
# Plot tair_min
my_data.sel(lon=my_lon, lat=my_lat).tair_min.plot(ax=ax[0], color='b', linestyle='--', label='Tair_min')
# Add a title
ax[0].set_title('Annual maximum and minimum air temperatures at {}, {}'.format(my_lon,my_lat))
# Add a legend
ax[0].legend(loc='lower left')
# second subplot
# Plot precip
my_data.sel(lon=my_lon, lat=my_lat).cumulative_precip.plot(ax=ax[1], color='black', linestyle='-', label='Cumulative Precip.')
# Add a title
ax[1].set_title('Annual cumulative precipitation at {}, {}'.format(my_lon,my_lat))
# Add a legend
ax[1].legend(loc='lower left')
# Save the figure
plt.savefig('my_data_plot_timeseries.jpg')
# 2) plot maps of temperature and precipitation for two years
# create a figure with 2 rows and 3 columns of subplots
fig, ax = plt.subplots(nrows=2, ncols=3, figsize=(15,7), tight_layout=True)
# The two years we want to plot
year1 = '1980'
year2 = '2019'
# Plot tair_max for the year 1980
my_data.sel(time=year1).tair_max.plot(ax=ax[0,0], cmap='RdBu_r', vmin=-20, vmax=40)
# set a title for this subplot
ax[0,0].set_title('Tair_max {}'.format(year1));
# Plot tair_min for the year 1980
my_data.sel(time=year1).tair_min.plot(ax=ax[0,1], cmap='RdBu_r', vmin=-20, vmax=40)
# set a title for this subplot
ax[0,1].set_title('Tair_min {}'.format(year1));
# Plot cumulative_precip for the year 1980
my_data.sel(time=year1).cumulative_precip.plot(ax=ax[0,2], cmap='Blues')
# set a title for this subplot
ax[0,2].set_title('Precip {}'.format(year1));
# Plot tair_max for the year 2019
my_data.sel(time=year2).tair_max.plot(ax=ax[1,0], cmap='RdBu_r', vmin=-20, vmax=40)
# set a title for this subplot
ax[1,0].set_title('Tair_max {}'.format(year2));
# Plot tair_min for the year 2019
my_data.sel(time=year2).tair_min.plot(ax=ax[1,1], cmap='RdBu_r', vmin=-20, vmax=40)
# set a title for this subplot
ax[1,1].set_title('Tair_min {}'.format(year2));
# Plot cumulative_precip for the year 2019
my_data.sel(time=year2).cumulative_precip.plot(ax=ax[1,2], cmap='Blues')
# set a title for this subplot
ax[1,2].set_title('Precip {}'.format(year2));
# save the figure as a jpg image
plt.savefig('my_data_plot_rasters.jpg')
```
---
## Save our data to a file:
**As a NetCDF file:**
```
my_data.to_netcdf('my_data.nc')
```
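The saved file can be read back with `xr.open_dataset`. A minimal round-trip sketch (using a hypothetical tiny Dataset written to a temporary directory, so it runs anywhere):

```python
import os
import tempfile
import numpy as np
import xarray as xr

# Write a tiny Dataset to NetCDF, then read it back
ds_out = xr.Dataset({'tair_max': ('time', np.array([21.5, 19.8, 23.1]))})
path = os.path.join(tempfile.mkdtemp(), 'roundtrip.nc')
ds_out.to_netcdf(path)

ds_in = xr.open_dataset(path)
print(list(ds_in.data_vars))  # ['tair_max']
ds_in.close()                 # release the file handle when done
```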
**We can also convert a Dataset or DataArray to a pandas dataframe**
```
my_data.to_dataframe()
```
**Via a pandas dataframe, save our data to a csv file**
```
my_data.to_dataframe().to_csv('my_data.csv')
```
```
%matplotlib inline
from ipywidgets import interact, FloatSlider, HTML
from IPython.display import display
import matplotlib.pyplot as plt
import matplotlib
matplotlib.rc('font',size=18)
import matplotlib.ticker as plticker
import matplotlib.patches as patches
import numpy as np
import warnings
import os.path
from hyperfet.devices import SCMOSFET,VO2,HyperFET, Direction
import hyperfet.approximations as appr
import hyperfet.extractions as extr
from hyperfet.references import si#, mixed_vo2_params
from hyperfet.fitting import show_transistor
from hyperfet import ABSTRACT_IMAGE_DIR
def ylog():
with warnings.catch_warnings():
warnings.filterwarnings('error')
plt.yscale('log')
def tighten():
with warnings.catch_warnings():
warnings.filterwarnings('ignore')
plt.tight_layout()
# Parameters given for Figure 3
vo2_params={
"rho_m":si("3e-4 ohm cm"),
"rho_i":si("30 ohm cm"),
"J_MIT":si("1e6 A/cm^2"),
"J_IMT":si(".5e4 A/cm^2"),
"v_met": si(".05 V/ (20nm)")
}
#vo2=VO2(**vo2_params)
VDD=.5
opts={
'figsize': (6,7),
'linidvgpos': [.3,.68,.2,.2],
'linidvgxticks': [0,.25,.5],
'linidvgxlim': [0,.5],
'linidvgyticks': [100,200,300],
'linidvdpos': [.62,.25,.25,.3],
'linidvdxticks': [0,.25,.5],
'linidvdyticks': [0,100,200,300],
}
fet=None
@interact(VT0=FloatSlider(value=.32,min=0,max=1,step=.05,continuous_update=False),
W=FloatSlider(value=100,min=10,max=100,step=10,continuous_update=False),
Cinv_vxo=FloatSlider(value=2500,min=1000,max=5000,step=400,continuous_update=False),
SS=FloatSlider(value=.070,min=.05,max=.09,step=.005,continuous_update=False),
alpha=FloatSlider(value=2.5,min=0,max=5,step=.5,continuous_update=False),
beta=FloatSlider(value=1.8,min=0,max=4,step=.1,continuous_update=False),
VDD=FloatSlider(value=.5,min=.3,max=1,step=.05,continuous_update=False),
VDsats=FloatSlider(value=.1,min=.1,max=2,step=.1,continuous_update=False),
delta=FloatSlider(value=.01,min=0,max=.5,step=.05,continuous_update=False),
log10Gleak=FloatSlider(value=-12,min=-14,max=-5,step=1,continuous_update=False)
)
def show_HEMT(VT0,W,Cinv_vxo,SS,alpha,beta,VDsats,VDD,delta,log10Gleak):
global fet
plt.figure(figsize=(12,6))
fet=SCMOSFET(
W=W*1e-9,Cinv_vxo=Cinv_vxo,
VT0=VT0,alpha=alpha,SS=SS,delta=delta,
VDsats=VDsats,beta=beta,Gleak=10**log10Gleak)
plt.subplot(121)
VD=np.array(VDD)
VG=np.linspace(0,.5,500)
VDgrid,VGgrid=np.meshgrid(VD,VG)
I=fet.ID(VD=VDgrid,VG=VGgrid)
plt.plot(VG,I/fet.W,label=r"$V_D={:.2g}$".format(VDD))
plt.yscale('log')
plt.xlabel(r"$V_G\;\mathrm{[V]}$")
plt.ylabel(r"$I_D\;\mathrm{[\mu A/\mu m]}$")
plt.legend(loc='lower right',fontsize=16)
plt.subplot(122)
VD=np.linspace(0,VDD,500)
VG=np.linspace(0,VDD,10)
VDgrid,VGgrid=np.meshgrid(VD,VG)
I=fet.ID(VD=VDgrid,VG=VGgrid)
plt.plot(VD,I.T/fet.W)
plt.xlabel(r"$V_D\;\mathrm{[V]}$")
plt.ylabel(r"$I_D\;\mathrm{[\mu A/\mu m]}$")
#plt.legend([r"$V_G={:.2g}$".format(vg) for vg in VG],loc='lower right',fontsize=16)
plt.tight_layout()
opts={
'figsize': (6,7),
'linidvgpos': [.25,.69,.2,.2],
'linidvgxticks': [0,.5],
'linidvgxlim': [0,.5],
'linidvgyticks': [100,200,300,400],
'linidvdpos': [.62,.25,.25,.3],
'linidvdxticks': [0,.5],
'linidvdyticks': [100,200,300,400],
}
show_transistor(fet,VDD,data=None,**opts)
plt.gcf().get_axes()[0].set_ylim(1e-3,1e3)
plt.savefig(os.path.join(ABSTRACT_IMAGE_DIR,"Transistor.eps"))
plt.savefig(os.path.join(ABSTRACT_IMAGE_DIR,"Transistor.png"))
vo2=VO2(L=20e-9,W=15e-9,T=5e-9,**vo2_params)
fig, (ax, ax2) = plt.subplots(2, 1, sharex=True,figsize=(6,7))
I=np.logspace(-4,4,1000)*fet.W
V=vo2.V(I,direc=Direction.FORWARD)
plt.axes(ax)
plt.plot(V,I*1e6)
plt.ylabel(r"$I\;\mathrm{[{\bf\mu} A]}$",fontsize=20)
plt.ylim(10,250)
plt.axes(ax2)
plt.plot(V,I*1e9)
plt.ylim(0,10)
plt.ylabel(r"$I\;\mathrm{[{\bf n}A]}$",fontsize=20)
plt.xlim(0,.35)
plt.xlabel("$V\;\mathrm{[V]}$")
ax2.xaxis.set_major_locator(plticker.MultipleLocator(base=.1))
ax.spines['bottom'].set_visible(False)
ax2.spines['top'].set_visible(False)
ax.xaxis.tick_top()
ax.tick_params(labeltop=False) # don't put tick labels at the top
ax2.xaxis.tick_bottom()
d = .015 # how big to make the diagonal lines in axes coordinates
# arguments to pass to plot, just so we don't keep repeating them
kwargs = dict(transform=ax.transAxes, color='k', clip_on=False)
ax.plot((-d, +d), (-d, +d), **kwargs) # top-left diagonal
ax.plot((1 - d, 1 + d), (-d, +d), **kwargs) # top-right diagonal
kwargs.update(transform=ax2.transAxes) # switch to the bottom axes
ax2.plot((-d, +d), (1 - d, 1 + d), **kwargs) # bottom-left diagonal
ax2.plot((1 - d, 1 + d), (1 - d, 1 + d), **kwargs) # bottom-right diagonal
plt.axes([.6,.4,.25,.25])
plt.plot(V,I)
plt.yscale('log')
plt.xticks([0,.3])
plt.xlim(0,.35)
#plt.ylabel('$I\;\mathrm{[A]}$')
plt.title("$\mathrm{Log\;IV}$")
plt.yticks([1e-9,1e-7,1e-5])
with warnings.catch_warnings():
warnings.filterwarnings('ignore')
plt.tight_layout()
#plt.yscale('log')
#plt.yscale('log')
plt.savefig(os.path.join(ABSTRACT_IMAGE_DIR,"PCR.eps"))
plt.savefig(os.path.join(ABSTRACT_IMAGE_DIR,"PCR.png"))
VD=np.array(VDD)
VG=np.linspace(0,.5,500)
plt.figure(figsize=(6,7))
plt.plot(VG,fet.ID(VD,VG)/fet.W,label='Transistor')
hf=HyperFET(fet,vo2)
#hf=HyperFET(fet.shifted(appr.shift(HyperFET(fet,vo2),VDD)),vo2)
If,Ib=[np.ravel(i) for i in hf.I_double(VD=VD,VG=VG)]
l=plt.plot(VG[~np.isnan(If)],If[~np.isnan(If)]/fet.W,linewidth=2,label='HyperFET')[0]
plt.plot(VG[~np.isnan(Ib)],Ib[~np.isnan(Ib)]/fet.W,linewidth=2,color=l.get_color())
plt.yscale('log')
plt.xlabel(r"$V_G\;\mathrm{[V]}$")
plt.ylabel(r"$I_D\;\mathrm{[\mu A/\mu m]}$")
plt.legend(loc='lower right',fontsize=16)
plt.axes([.25,.69,.2,.2])
plt.gca().yaxis.tick_right()
plt.plot(VG,fet.ID(VD,VG)/fet.W)
l=plt.plot(VG[~np.isnan(If)],If[~np.isnan(If)]/fet.W,linewidth=2)[0]
plt.plot(VG[~np.isnan(Ib)],Ib[~np.isnan(Ib)]/fet.W,linewidth=2,color=l.get_color())
plt.xticks([0,.5])
plt.yticks([250,500])
plt.title("$\mathrm{Lin\;IV}$");
plt.tight_layout()
plt.savefig(os.path.join(ABSTRACT_IMAGE_DIR,"HyperFET.svg"))
plt.savefig(os.path.join(ABSTRACT_IMAGE_DIR,"HyperFET.png"))
VD=np.array(VDD)
VG=np.linspace(0,.5,500)
plt.figure(figsize=(8,4.5))
for LWT in ["20nm x (9nm)^2", "20nm x (10nm)^2", "20nm x (11nm)^2"]:
L,WT=[si(x) for x in LWT.split("x")]
W=np.sqrt(WT);T=np.sqrt(WT)
vo2=VO2(L=L,W=W,T=T,**vo2_params)
#plt.subplot(121)
plt.plot(VG,fet.ID(VD,VG)/fet.W,'k')
hf=HyperFET(fet,vo2)
If,Ib=[np.ravel(i) for i in hf.I_double(VD=VD,VG=VG)]
l=plt.plot(VG[~np.isnan(If)],If[~np.isnan(If)]/fet.W,label=LWT.replace("^2","$^2$"),linewidth=2)[0]
plt.plot(VG[~np.isnan(Ib)],Ib[~np.isnan(Ib)]/fet.W,linewidth=2,color=l.get_color())
plt.yscale('log')
plt.xlabel(r"$V_G\;\mathrm{[V]}$",fontsize=20,labelpad=-10)
plt.ylabel(r"$I_D\;\mathrm{[\mu A/\mu m]}$")
plt.legend(loc='upper left',fontsize=14)
plt.axes([.17,.35,.3,.25])
plt.plot(VG,fet.ID(VD,VG)/fet.W,'k')
for LWT in ["20nm x 15nm x 5nm", "20nm x 20nm x 5nm", "20nm x 25nm x 5nm"]:
L,W,T=[si(x) for x in LWT.split("x")]
vo2=VO2(L=L,W=W,T=T,**vo2_params)
hf=HyperFET(fet.shifted(appr.shift(HyperFET(fet,vo2),VDD)),vo2)
If,Ib=[np.ravel(i) for i in hf.I_double(VD=VD,VG=VG)]
l=plt.plot(VG[~np.isnan(If)],If[~np.isnan(If)]/fet.W,linewidth=2)[0]
plt.plot(VG[~np.isnan(Ib)],Ib[~np.isnan(Ib)]/fet.W,linewidth=2,color=l.get_color())
plt.yscale('log')
plt.yticks([])
plt.xticks([])
plt.text(.3,.1,"$\mathrm{Shifted}$",fontsize=20)
plt.axes([.7,.28,.2,.25])
plt.gca().yaxis.tick_right()
Ill_appr=[]
Ill_extr=[]
Ws=np.linspace(7,11,10)
for W in Ws:
L,W,T=20e-9,W*1e-9,W*1e-9
vo2=VO2(L=L,W=W,T=T,**vo2_params)
hf=HyperFET(fet,vo2)
If,Ib=[np.ravel(i) for i in hf.I_double(VD=VD,VG=VG)]
Ill_appr+=[appr.Ill(hf,VDD)]
Ill_extr+=[extr.left(VG,If,Ib)[1]]
plt.plot(Ws,np.array(Ill_appr)*1e3/fet.W,'k')
plt.plot(Ws,np.array(Ill_extr)*1e3/fet.W,'ko')
plt.yticks([25,50])
plt.xticks([7,11])
plt.xlabel(r'$\sqrt{wt}\ \mathrm{[nm]}$',labelpad=-10)
plt.tick_params(labelsize=12)
plt.title(r'$I_{ll}\ \mathrm{[mA/\mu m]}$',fontsize=16)
#plt.yscale('log')
tighten()
plt.savefig(os.path.join(ABSTRACT_IMAGE_DIR,"HFvsA.svg"))
plt.savefig(os.path.join(ABSTRACT_IMAGE_DIR,"HFvsA.png"))
VD=np.array(VDD)
VG=np.linspace(0,.5,500)
plt.figure(figsize=(8,4.5))
plt.plot(VG,fet.ID(VD,VG)/fet.W,'k')
for LWT in ["15nm x (10nm)^2", "20nm x (10nm)^2", "25nm x (10nm)^2"]:
L,WT=[si(x) for x in LWT.split("x")]
W=np.sqrt(WT);T=np.sqrt(WT);
vo2=VO2(L=L,W=W,T=T,**vo2_params)
hf=HyperFET(fet,vo2)
If,Ib=[np.ravel(i) for i in hf.I_double(VD=VD,VG=VG)]
l=plt.plot(VG[~np.isnan(If)],If[~np.isnan(If)]/fet.W,label=LWT.replace("^2","$^2$"),linewidth=2)[0]
plt.plot(VG[~np.isnan(Ib)],Ib[~np.isnan(Ib)]/fet.W,linewidth=2,color=l.get_color())
plt.yscale('log')
plt.xlabel(r"$V_G\;\mathrm{[V]}$",fontsize=20,labelpad=-10)
plt.ylabel(r"$I_D\;\mathrm{[\mu A/\mu m]}$")
plt.legend(loc='lower right',fontsize=14)
plt.axes([.17,.65,.3,.25])
plt.plot(VG,fet.ID(VD,VG)/fet.W,'k')
for LWT in ["15nm x 20nm x 5nm", "20nm x 20nm x 5nm", "25nm x 20nm x 5nm"]:
L,W,T=[si(x) for x in LWT.split("x")]
vo2=VO2(L=L,W=W,T=T,**vo2_params)
hf=HyperFET(fet.shifted(appr.shift(HyperFET(fet,vo2),VDD)),vo2)
If,Ib=[np.ravel(i) for i in hf.I_double(VD=VD,VG=VG)]
l=plt.plot(VG[~np.isnan(If)],If[~np.isnan(If)]/fet.W,linewidth=2)[0]
plt.plot(VG[~np.isnan(Ib)],Ib[~np.isnan(Ib)]/fet.W,linewidth=2,color=l.get_color())
plt.yscale('log')
plt.yticks([])
plt.xticks([])
plt.text(.3,.1,"$\mathrm{Shifted}$",fontsize=20)
plt.axes([.2,.26,.2,.25])
plt.gca().yaxis.tick_right()
Vright_appr=[]
Vright_extr=[]
Ls=np.linspace(10,25,10)
for L in Ls:
L,W,T=L*1e-9,20e-9,5e-9
vo2=VO2(L=L,W=W,T=T,**vo2_params)
hf=HyperFET(fet,vo2)
If,Ib=[np.ravel(i) for i in hf.I_double(VD=VD,VG=VG)]
r=appr.Vright(hf,VDD)
Vright_appr+=[r if r>appr.Vleft(hf,VDD) else np.NaN]
r=extr.right(VG,If,Ib)
Vright_extr+=[r[0] if not np.isnan(r[1]) else np.NaN]
plt.plot(Ls,np.array(Vright_appr),'k')
plt.plot(Ls,np.array(Vright_extr),'ko')
plt.yticks([.2,.5])
plt.xticks([10,25])
plt.xlabel(r'$l\ \mathrm{[nm]}$',labelpad=-10)
plt.tick_params(labelsize=12)
plt.title(r'$V_\mathrm{r}\ \mathrm{[V]}$',fontsize=16)
unst=\
max([l for l,v in zip(Ls,Vright_extr) if np.isnan(v)])
plt.gca().add_patch(patches.Rectangle(
(plt.xlim()[0],plt.ylim()[0]),
unst-plt.xlim()[0],
plt.ylim()[1]-plt.ylim()[0],
hatch='/',edgecolor='red',fill=None))
#plt.gca().yaxis.set_label_position("right")
#plt.yscale('log')
tighten()
plt.savefig(os.path.join(ABSTRACT_IMAGE_DIR,"HFvsl.svg"))
plt.savefig(os.path.join(ABSTRACT_IMAGE_DIR,"HFvsl.png"))
out=HTML()
vo2=None
fet2=None
hf=None
hf2=None
VTm,VTp=[None]*2
@interact(L=FloatSlider(value=20,min=1,max=45,step=1,continuous_update=False),
W=FloatSlider(value=10,min=.5,max=30,step=.5,continuous_update=False),
T=FloatSlider(value=10,min=.5,max=20,step=.5,continuous_update=False))
def show_hf(L,W,T):
global vo2, fet2,VTm,VTp, hf, hf2
plt.figure(figsize=(12,6))
vo2=VO2(L=L*1e-9,W=W*1e-9,T=T*1e-9,**vo2_params)
hf=HyperFET(fet,vo2)
shift=appr.shift(hf,VDD)
fet2=fet.shifted(shift)
hf2=HyperFET(fet2,vo2)
VD=np.array(VDD)
VG=np.linspace(0,VDD,500)
plt.subplot(131)
I=np.ravel(fet.ID(VD=VD,VG=VG))
plt.plot(VG,I/fet.W,'r')
If,Ib=[np.ravel(i) for i in hf.I_double(VD=VD,VG=VG)]
plt.plot(VG,If/fet.W,'b')
plt.plot(VG,Ib/fet.W,'g')
plt.ylim(1e-3,1e3)
plt.xlabel("$V_{GS}\;\mathrm{[V]}$")
plt.ylabel("$I/W\;\mathrm{[mA/mm]}$")
ylog()
plt.subplot(132)
plt.plot(VG,I/fet2.W,'r')
If2,Ib2=[np.ravel(i) for i in hf2.I_double(VD=VD,VG=VG)]
plt.plot(VG,If2/fet2.W,'b')
plt.plot(VG,Ib2/fet2.W,'g')
plt.ylim(1e-3,1e3)
plt.yticks([])
ylog()
out.value="Approx shift is {:.2g}mV, which equates the IOFF within {:.2g}%."\
" This is expected to increase ION by {:.2g}% and actually increases it by {:.2g}%"\
.format(shift*1e3,(If2[0]-I[0])/I[0]*100,appr.shiftedgain(hf,VDD)*100-100,(If2[-1]-I[-1])/I[-1]*100)
_,_,VTm,VTp=appr.shorthands(hf,VDD,None,"VTm","VTp",gridinput=False)
display(out)
appr.optsize(fet,VDD,Ml=1,Mr=0,**vo2_params)
from itertools import product
ion0=fet.ID(VDD,VDD)
ioff0=fet.ID(VDD,0)
def sweep(L,WT):
ION_extr=[]
sg_appr=[]
Ml=[]
Mr=[]
Mimt=[]
for Li,WiTi in product(L,WT):
Ti=Wi=np.sqrt(WiTi)
vo2=VO2(L=Li*1e-9,W=Wi*1e-9,T=Ti*1e-9,**vo2_params)
hf=HyperFET(fet,vo2)
hf2=HyperFET(fet.shifted(appr.shift(hf,VD)),vo2)
IONi=hf2.I(VD=VDD,VG=VDD,direc=Direction.FORWARD)
#print(np.ravel(IONi))
#print(IONu,IONl)
#print(Li,extr.boundaries_nonhysteretic(hf2,VDD))
if extr.boundaries_nonhysteretic(hf2,VDD) and (hf2.I(VD=VDD,VG=0,direc=Direction.FORWARD)-ioff0)/ioff0<.1:
ION_extr+=[IONi]
else:
ION_extr+=[np.NaN]
Ml+=[appr.Ill(hf2,VDD)/ioff0-1]
Mr+=[(VDD-appr.Vright(hf2,VDD))/fet.Vth]
Mimt+=[VDD-hf2.pcr.V_IMT-fet.Vth/2]
if Ml[-1]>0 and Mr[-1]>0 and Mimt[-1]>0:
sg_appr+=[appr.shiftedgain(hf,VDD)]
else:
sg_appr+=[np.NaN]
ION_extr=np.array(ION_extr)
ION_appr=np.array(sg_appr)*ion0
return ION_extr,ION_appr,Ml,Mr,Mimt
plt.figure(figsize=(6,6))
main=plt.gca()
#marg=plt.axes([.57,.17,.3,.26])
marg=main.twinx()
L=np.linspace(0.1,40.0,15)
#W=10
#ION_extr,ION_appr,Ml,Mr,Mimt=sweep(L,[W**2]) # sqrt(18*5)
#plt.sca(main)
#lp=plt.plot(L,ION_appr/ion0,linewidth=2,label="$\sqrt{{wt}}={:g}\mathrm{{nm}}$".format(W))[0]
#plt.plot(L,ION_extr/ion0,'o',color=lp.get_color())
#plt.sca(marg)
#plt.plot(L,Ml,'--',color=lp.get_color(),linewidth=2)
##plt.plot(L,Mr,'--',color=lp.get_color())
##plt.plot(L,Mimt,'-.',color=lp.get_color())
W=9
ION_extr,ION_appr,Ml,Mr,Mimt=sweep(L,[W**2]) # sqrt(15*5)
plt.sca(main)
lp=plt.plot(L,ION_appr/ion0,linewidth=2,label="$\sqrt{{wt}}={:g}\mathrm{{nm}}$".format(W))[0]
plt.plot(L,ION_extr/ion0,'o',color=lp.get_color())
plt.sca(marg)
plt.plot(L,Ml,'--',color=lp.get_color(),linewidth=2)
#plt.plot(L,Mr,'--',color=lp.get_color())
#plt.plot(L,Mimt,'-.',color=lp.get_color())
W=8
ION_extr,ION_appr,Ml,Mr,Mimt=sweep(L,[W**2])
plt.sca(main)
lp=plt.plot(L,ION_appr/ion0,linewidth=2,label="$\sqrt{{wt}}={:g}\mathrm{{nm}}$".format(W))[0]
plt.plot(L,ION_extr/ion0,'o',color=lp.get_color())
plt.sca(marg)
plt.plot(L,Ml,'--',color=lp.get_color(),linewidth=2)
#plt.plot(L,Mr,'--',color=lp.get_color())
#plt.plot(L,Mimt,'-.',color=lp.get_color())
W=7
ION_extr,ION_appr,Ml,Mr,Mimt=sweep(L,[W**2])
plt.sca(main)
lp=plt.plot(L,ION_appr/ion0,linewidth=2,label="$\sqrt{{wt}}={:g}\mathrm{{nm}}$".format(W))[0]
plt.plot(L,ION_extr/ion0,'o',color=lp.get_color())
plt.sca(marg)
plt.plot(L,Ml,'--',color=lp.get_color(),linewidth=2)
#plt.plot(L,Mr,'--',color=lp.get_color())
#plt.plot(L,Mimt,'-.',color=lp.get_color())
plt.sca(main)
plt.ylabel(r"$I_\mathrm{ON,hyper}/I_\mathrm{ON,orig}$",fontsize=22)
plt.xlabel(r"$l\mathrm{\ [nm]}$",fontsize=20)
plt.ylim(1)
plt.xlim(0,40)
handles1, labels1 = main.get_legend_handles_labels()
plt.sca(marg)
plt.legend(handles1,labels1,loc="center right",bbox_to_anchor=(1,.25),fontsize=18)
plt.ylim(0,5)
plt.tick_params(labelsize=14)
plt.gca().xaxis.set_major_locator(plticker.MultipleLocator(10))
mintb=(VDD-fet.Vth/2)/(vo2_params['J_IMT']*vo2_params['rho_i'])
plt.gca().add_patch(patches.Rectangle(
(mintb*1e9,plt.ylim()[0]),
plt.xlim()[1]-mintb*1e9,
plt.ylim()[1]-plt.ylim()[0],
hatch='/',edgecolor='k',fill=None))
plt.ylabel("$\mathrm{Safety\ Margin\ } M_r$",fontsize=22)
#plt.title("$\mathrm{Safety\ Margin}$",fontsize=18)
plt.tight_layout()
plt.savefig(os.path.join(ABSTRACT_IMAGE_DIR,"GainvsL.eps"))
plt.savefig(os.path.join(ABSTRACT_IMAGE_DIR,"GainvsL.png"))
plt.figure(figsize=(6,6))
main=plt.gca()
#marg=plt.axes([.57,.17,.3,.26])
marg=main.twinx()
W=np.sqrt(np.linspace(13**2,5**2,15))
# Same plotting recipe for each device length; a loop avoids the copy-pasted blocks.
for L in (15, 20, 25, 30):
    ION_extr,ION_appr,Ml,Mr,Mimt=sweep([L],W**2)
    plt.sca(main)
    lp=plt.plot(W,ION_appr/ion0,linewidth=2,label=r"$l={:g}\mathrm{{nm}}$".format(L))[0]
    plt.plot(W,ION_extr/ion0,'o',color=lp.get_color())
    plt.sca(marg)
    plt.plot(W,Ml,'--',color=lp.get_color(),linewidth=2)
    #plt.plot(W,Mr,'--',color=lp.get_color())
    #plt.plot(W,Mimt,'-.',color=lp.get_color())
plt.sca(main)
plt.ylabel(r"$I_\mathrm{ON,hyper}/I_\mathrm{ON,orig}$",fontsize=22)
plt.xlabel(r"$\sqrt{wt}\mathrm{\ [nm]}$",fontsize=20)
plt.ylim(1)
plt.xlim(6,13)
handles1, labels1 = main.get_legend_handles_labels()
plt.sca(marg)
plt.legend(handles1,labels1,loc="upper right",fontsize=18)#,bbox_to_anchor=(1,.25)
plt.ylim(0,5)
plt.tick_params(labelsize=14)
#plt.gca().xaxis.set_major_locator(plticker.MultipleLocator(10))
plt.ylabel(r"$\mathrm{Safety\ Margin\ } M_l$",fontsize=22)
#plt.title("$\mathrm{Safety\ Margin}$",fontsize=18)
plt.tight_layout()
plt.savefig(os.path.join(ABSTRACT_IMAGE_DIR,"GainvsW.eps"))
plt.savefig(os.path.join(ABSTRACT_IMAGE_DIR,"GainvsW.png"))
out=HTML()
def show_hf(L,W,T):
global vo2, fet2,VTm,VTp, hf, hf2
plt.figure(figsize=(6,6))
vo2=VO2(L=L,W=W,T=T,**vo2_params)
hf=HyperFET(fet,vo2)
shift=appr.shift(hf,VDD)
fet2=fet.shifted(shift)
hf2=HyperFET(fet2,vo2)
VD=np.array(VDD)
VG=np.linspace(0,VDD,500)
#plt.subplot(131)
I=np.ravel(fet.ID(VD=VD,VG=VG))
plt.plot(VG,I/fet.W,'r',label='transistor')
If,Ib=[np.ravel(i) for i in hf.I_double(VD=VD,VG=VG)]
plt.plot(VG,If/fet.W,'b',label='hyperfet',linewidth=2)
plt.plot(VG,Ib/fet.W,'b',linewidth=2)
plt.ylim(1e-3,1e3)
plt.xlabel("$V_{GS}\;\mathrm{[V]}$")
plt.ylabel("$I/W\;\mathrm{[mA/mm]}$")
ylog()
#plt.subplot(132)
plt.plot(VG,I/fet2.W,'r')
If2,Ib2=[np.ravel(i) for i in hf2.I_double(VD=VD,VG=VG)]
plt.plot(VG,If2/fet2.W,'g',label='shifted hyperfet',linewidth=2)
plt.plot(VG,Ib2/fet2.W,'g',linewidth=2)
#ylog()
#plt.ylim(1e-3,1e3)
#plt.yticks([])
plt.legend(loc='lower right',fontsize=14)
Ill=extr.left(VG,If,Ib)[1]
out.value="Approx shift is {:.2g}mV, which equates the IOFF within {:.2g}%."\
" This is expected to increase ION by {:.2g}% and actually increases it by {:.2g}%."\
" Ml effective is {:.3g}."\
.format(shift*1e3,(If2[0]-I[0])/I[0]*100,appr.shiftedgain(hf,VDD)*100-100,(If2[-1]-I[-1])/I[-1]*100,Ill/If2[0])
_,_,VTm,VTp=appr.shorthands(hf,VDD,None,"VTm","VTp",gridinput=False)
show_hf(*appr.optsize(fet,VDD,Ml=1.5,Mr=2,**vo2_params,verbose=False))
display(out)
plt.tight_layout()
plt.savefig(os.path.join(ABSTRACT_IMAGE_DIR,"opthf.eps"))
plt.savefig(os.path.join(ABSTRACT_IMAGE_DIR,"opthf.png"))
```
| github_jupyter |
"""Which Classifier Should I Choose?
This is one of the most important questions to ask when approaching a machine learning
problem. I find it easier to just test them all at once. """
```
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
def warn(*args, **kwargs): pass
import warnings
warnings.warn = warn
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import StratifiedShuffleSplit
train = pd.read_csv('train.csv')
test = pd.read_csv('test.csv')
import sys
print ('Total Number of arguments:', len(sys.argv), 'arguments.')
print ('Argument List:', str(sys.argv))
print (sys.argv[0])
#Data Preparation
# Swiss army knife function to organize the data
def encode(train, test):
le = LabelEncoder().fit(train.species)
labels = le.transform(train.species) # encode species strings
classes = list(le.classes_) # save column names for submission
test_ids = test.id # save test ids for submission
train = train.drop(['species', 'id'], axis=1)
test = test.drop(['id'], axis=1)
return train, labels, test, test_ids, classes
train, labels, test, test_ids, classes = encode(train, test)
train.shape
train.head()
len(labels)
labels
test.head()
len(test_ids)
test_ids
classes
len(classes)
print(labels.shape)
print(test.shape)
print(test_ids.shape)
print(classes)
```
"""Stratified Train/Test Split - Stratification is necessary for this dataset because
there is a relatively large number of classes (100 classes for 990 samples). This
will ensure we have all classes represented in both the train and test indices"""
```
sss = StratifiedShuffleSplit( n_splits=10, test_size=0.3, random_state=23)
print(sss.get_n_splits(train,labels))
for train_index, test_index in sss.split(train,labels):
X_train, X_test = train.values[train_index], train.values[test_index]
y_train, y_test = labels[train_index], labels[test_index]
```
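Because stratification preserves class proportions, every split keeps all classes represented on both sides. A small self-contained sketch of the effect on toy labels (not the leaf data), using the same `StratifiedShuffleSplit` API:

```python
from collections import Counter

import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit

# Toy labels: 3 classes with unequal sizes (60 / 30 / 10).
y = np.array([0] * 60 + [1] * 30 + [2] * 10)
X = np.zeros((len(y), 1))  # features are irrelevant to the split itself

sss = StratifiedShuffleSplit(n_splits=1, test_size=0.3, random_state=23)
train_idx, test_idx = next(sss.split(X, y))

# Each class keeps its original 6:3:1 proportion in both halves.
print(Counter(y[train_idx]))  # counts over the 70 training samples
print(Counter(y[test_idx]))   # counts over the 30 test samples
```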
"""Sklearn Classifiers
Simply looping through 4 classifiers and printing the results. Obviously, these
will perform much better after tuning their hyperparameters, but this gives you
a decent ballpark idea."""
```
from sklearn.metrics import accuracy_score, log_loss
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC, LinearSVC, NuSVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
classifiers = [
KNeighborsClassifier(3),
SVC(kernel="rbf", C=0.025, probability=True),
NuSVC(probability=True),
DecisionTreeClassifier(),
RandomForestClassifier(n_estimators=50, random_state=11),
GaussianNB()]
# Logging for Visual Comparison
log_cols=["Classifier", "Accuracy", "Log Loss"]
log = pd.DataFrame(columns=log_cols)
for clf in classifiers:
clf.fit(X_train, y_train)
name = clf.__class__.__name__
print("="*30)
print(name)
print('****Results****')
train_predictions = clf.predict(X_test)
acc = accuracy_score(y_test, train_predictions)
print("Accuracy: {:.4%}".format(acc))
train_predictions = clf.predict_proba(X_test)
ll = log_loss(y_test, train_predictions)
print("Log Loss: {}".format(ll))
log_entry = pd.DataFrame([[name, acc*100, ll]], columns=log_cols)
    log = pd.concat([log, log_entry], ignore_index=True)  # DataFrame.append was removed in newer pandas
print("="*30)
sns.set_color_codes("muted")
sns.barplot(x='Accuracy', y='Classifier', data=log, color="b")
plt.xlabel('Accuracy %')
plt.title('Classifier Accuracy')
plt.show()
sns.set_color_codes("muted")
sns.barplot(x='Log Loss', y='Classifier', data=log, color="g")
plt.xlabel('Log Loss')
plt.title('Classifier Log Loss')
plt.show()
#After this choose the classifier with the best accuracy for future predictions
import os
os.getcwd()
```
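The comment above says to choose the classifier with the best accuracy; with the `log` DataFrame that choice can be made programmatically. A minimal sketch with dummy results (same column names as the notebook's `log`):

```python
import pandas as pd

# Dummy results in the same shape as the notebook's `log` DataFrame.
log = pd.DataFrame(
    [["KNeighborsClassifier", 88.9, 1.57],
     ["RandomForestClassifier", 97.5, 0.77],
     ["GaussianNB", 57.1, 14.8]],
    columns=["Classifier", "Accuracy", "Log Loss"],
)

# Row with the highest accuracy wins.
best = log.loc[log["Accuracy"].idxmax()]
print(best["Classifier"])  # -> RandomForestClassifier
```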
| github_jupyter |
# Import key libraries
```
import numpy as np
import pandas as pd
import scipy
import bt
import ffn
import jhtalib as jhta
import datetime
# import matplotlib as plt
import seaborn as sns
sns.set()
import matplotlib.pyplot as plt
%matplotlib inline
```
# Import the datareader with fix
```
start = datetime.datetime(2005, 1, 1)
end = datetime.datetime(2019, 1, 27)
from pandas_datareader import data as pdr
import fix_yahoo_finance as fyf
fyf.pdr_override()
pd.core.common.is_list_like = pd.api.types.is_list_like
```
# Bring In some Commodity ETF data linked to the 3 main composition choices:
1. DBC - Invesco DB Commodity Index Tracking Fund
Net Assets: $2.49 billion
DBC
https://www.invesco.com/portal/site/us/investors/etfs/product-detail?productId=dbc
DBC is the elephant in the commodities room – by far the largest ETF in terms of assets under management. It tracks an index of 14 commodities using futures contracts for exposure. It tackles the weighting problem creatively, capping energy at 60% to allow for more exposure to non-consumables such as gold and silver. The fund's large size also gives it excellent liquidity.
source :https://www.investopedia.com/investing/commodities-etfs/
2. iPath Dow Jones-UBS Commodity ETN <<<<-------- this is the current incarnation of AIG Comm
Net Assets: $810.0 M
DJP
http://www.ipathetn.com/US/16/en/details.app?instrumentId=1193
The Bloomberg Commodity Index (BCOM) is a broadly diversified commodity price index distributed by Bloomberg Indexes. The index was originally launched in 1998 as the Dow Jones-AIG Commodity Index (DJ-AIGCI) and renamed to Dow Jones-UBS Commodity Index (DJ-UBSCI) in 2009, when UBS acquired the index from AIG. On July 1, 2014, the index was rebranded under its current name.
The BCOM tracks prices of futures contracts on physical commodities on the commodity markets. The index is designed to minimize concentration in any one commodity or sector. It currently has 22 commodity futures in seven sectors. No one commodity can compose less than 2% or more than 15% of the index, and no sector can represent more than 33% of the index (as of the annual weightings of the components). The weightings for each commodity included in BCOM are calculated in accordance with rules that ensure that the relative proportion of each of the underlying individual commodities reflects its global economic significance and market liquidity. Annual rebalancing and reweighting ensure that diversity is maintained over time
source : https://en.wikipedia.org/wiki/Bloomberg_Commodity_Index
3. iShares S&P GSCI Commodity-Indexed Trust
Net Assets: $1.32 billion
GSG
The S&P GSCI contains as many commodities as possible, with rules excluding certain commodities to maintain liquidity and investability in the underlying futures markets. The index currently comprises 24 commodities from all commodity sectors - energy products, industrial metals, agricultural products, livestock products and precious metals. The wide range of constituent commodities provides the S&P GSCI with a high level of diversification, across subsectors and within each subsector. This diversity mutes the impact of highly idiosyncratic events, which have large implications for the individual commodity markets, but are minimised when aggregated to the level of the S&P GSCI.
The diversity of the S&P GSCI's constituent commodities, along with their economic weighting allows the index to respond in a stable way to world economic growth, even as the composition of global growth changes across time. When industrialised economies dominate world growth, the metals sector of the GSCI generally responds more than the agricultural components. Conversely, when emerging markets dominate world growth, petroleum-based commodities and agricultural commodities tend to be more responsive.
The S&P GSCI is a world-production weighted index that is based on the average quantity of production of each commodity in the index, over the last five years of available data. This allows the S&P GSCI to be a measure of investment performance as well as serve as an economic indicator.
Production weighting is a quintessential attribute for the index to be a measure of investment performance. This is achieved by assigning a weight to each asset based on the amount of capital dedicated to holding that asset just as market capitalisation is used to assign weights to components of equity indices. Since the appropriate weight assigned to each commodity is in proportion to the amount of that commodity flowing through the economy, the index is also an economic indicator
source: https://en.wikipedia.org/wiki/S%26P_GSCI
From an investment point of view, the index designers are attempting to represent exposure to commodities, but commodities have not proven to have an inherent return, so concentration rules have been added to improve the return profile, without a great deal of success.
To capitalize on commodity markets, a strategy must be at liberty to go long as well as short, and to weight the exposure by metrics other than world production or some other "economic" metric.
```
DBC = pdr.get_data_yahoo('DBC',start= start)
DJP = pdr.get_data_yahoo('DJP',start= start)
GSG = pdr.get_data_yahoo('GSG',start= start)
ETFs = bt.merge(DBC['Adj Close'], DJP['Adj Close'],GSG['Adj Close'])
ETFs.columns = ['Invesco DB Commodity Index Tracking Fund',
                'iPath Dow Jones-UBS Commodity ETN',
                'iShares S&P GSCI Commodity-Indexed Trust']  # a flat list; the original [[...]] created a MultiIndex
ETFs.plot(figsize=(15,10))
ETFs_re = pd.DataFrame(ETFs)
# ETFs_re.plot(figsize=(15,10))
ETFs_re = ETFs.dropna()
ETFs_re = ffn.rebase(ETFs_re)
ETFs_re.plot(figsize=(15,10),fontsize=22, title='$100 Invested in different Commodity Indexes')
```
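For reference, the rebasing performed by `ffn.rebase` amounts to dividing each series by its first value and scaling it (ffn's default base is 100, which is why the chart reads "$100 Invested"). A pandas-only sketch of the same idea:

```python
import pandas as pd

prices = pd.DataFrame({
    "A": [25.0, 26.0, 24.0],
    "B": [50.0, 55.0, 60.0],
})

# Normalize every series to start at 100, like ffn.rebase(prices).
rebased = prices / prices.iloc[0] * 100
print(rebased)  # A goes 100 -> 104 -> 96; B goes 100 -> 110 -> 120
```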
| github_jupyter |
# Big Query Connector - Quick Start
The BigQuery connector enables you to read/write data within BigQuery with ease and integrate it with YData's platform.
Reading a dataset from BigQuery directly into a YData's `Dataset` allows its usage for Data Quality, Data Synthetisation and Preprocessing blocks.
## Storage and Performance Notes
BigQuery is not intended to hold large volumes of data as a pure data storage service. Its main advantages are based on the ability to execute SQL-like queries on existing tables which can efficiently aggregate data into new views. As such, for storage purposes we advise the use of Google Cloud Storage and provide the method `write_query_to_gcs`, available from the `BigQueryConnector`, that allows the user to export a given query to a Google Cloud Storage object.
```
from ydata.connectors import BigQueryConnector
from ydata.utils.formats import read_json
# Load your credentials from a file
token = read_json('{insert-path-to-credentials}')
# Instantiate the Connector
connector = BigQueryConnector(project_id='{insert-project-id}', keyfile_dict=token)
# Load a dataset
data = connector.query(
"SELECT * FROM {insert-dataset}.{insert-table}"
)
# Load a sample of a dataset
small_data = connector.query(
    "SELECT * FROM {insert-dataset}.{insert-table}",
    n_sample=10_000
)
# Check the available datasets
connector.datasets
# Check the available tables for a given dataset
connector.list_tables('{insert-dataset}')
connector.table_schema(dataset='{insert-dataset}', table='{insert-table}')
```
## Advanced
With `BigQueryConnector`, you can access useful properties and methods directly from the main class.
```
# List the datasets of a given project
connector.datasets
# Access the BigQuery Client
connector.client
# Create a new dataset
connector.get_or_create_dataset(dataset='{insert-dataset}')
# Delete a dataset. WARNING: POTENTIAL LOSS OF DATA
# connector.delete_table_if_exists(dataset='{insert-dataset}', table='{insert-table}')
# Delete a dataset. WARNING: POTENTIAL LOSS OF DATA
# connector.delete_dataset_if_exists(dataset='{insert-dataset}')
```
### Example #1 - Execute Pandas transformations and store to BigQuery
```
# export data to pandas
# small_df = small_data.to_pandas()
#
# DO TRANSFORMATIONS
# (...)
#
# Write results to BigQuery table
# connector.write_table_from_data(data=small_df, dataset='{insert-dataset}', table='{insert-table}')
```
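The commented template above leaves the transformation step open. As a concrete, hypothetical example of the kind of pandas work that might go there (lower-casing column names and dropping duplicate rows) before writing the result back:

```python
import pandas as pd

# Stand-in for small_data.to_pandas(); in practice this comes from the connector.
small_df = pd.DataFrame({
    "User_ID": [1, 2, 2, 3],
    "Country": ["PT", "US", "US", "PT"],
})

small_df.columns = [c.lower() for c in small_df.columns]  # normalize column names
small_df = small_df.drop_duplicates().reset_index(drop=True)

print(small_df)  # 3 unique rows with columns user_id / country
# connector.write_table_from_data(data=small_df, dataset='{insert-dataset}', table='{insert-table}')
```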
### Example #2 - Write a BigQuery results to Google Cloud Storage
```
# Run a query in BigQuery and store it in Google Cloud Storage
# connector.write_query_to_gcs(query="{insert-query}",
# path="gs://{insert-bucket}/{insert-filepath}")
```
| github_jupyter |
# Developing a Pretrained Alexnet model using ManufacturingNet
###### To know more about ManufacturingNet, please visit: http://manufacturingnet.io/
```
import ManufacturingNet
import numpy as np
```
First we import ManufacturingNet. Using ManufacturingNet, we can create deep learning models with greater ease.
It is important to note that all the dependencies of the package must also be installed in your environment.
##### Now we first need to download the data. You can use our dataset class where we have curated different types of datasets and you just need to run two lines of code to download the data :)
```
from ManufacturingNet import datasets
datasets.CastingData()
```
##### Alright! Now please check your working directory. The data should be present inside it. That was super easy!!
The Casting dataset is an image dataset with 2 classes, and the task we need to perform with the pretrained AlexNet is classification. ManufacturingNet also provides several other datasets in the package, which the user can choose from depending on the type of application.
The pretrained models use PyTorch's ImageFolder dataset, and the image size is (224, 224, channels). A pretrained model needs the root folder paths of the train and test images (in ImageFolder format). ManufacturingNet's pretrained models include an image-resizing feature.
```
#paths of root folder
train_data_address='casting_data/train/'
val_data_address='casting_data/test/'
```
#### Now all we have to do is import the pretrained model class and answer a few simple questions, and we will be all set. ManufacturingNet has been designed to make things easy for the user and to provide the tools to implement complex use cases
```
from ManufacturingNet.models import AlexNet
# from ManufacturingNet.models import ResNet
# from ManufacturingNet.models import DenseNet
# from ManufacturingNet.models import MobileNet
# from ManufacturingNet.models import GoogleNet
# from ManufacturingNet.models import VGG
```
###### We import the pretrained AlexNet model from the package and answer a few simple questions
```
model=AlexNet(train_data_address,val_data_address)
# model=ResNet(train_data_address,val_data_address)
# model=DenseNet(train_data_address,val_data_address)
# model=MobileNet(train_data_address,val_data_address)
# model=GoogleNet(train_data_address,val_data_address)
# model=VGG(train_data_address,val_data_address)
```
Alright! It's done: you have built your pretrained AlexNet using the ManufacturingNet package, just by answering a few simple questions. It is really easy.
The Casting dataset contains more than 7000 images, including training and testing. The results produced above are just for introducing ManufacturingNet; hence, only 3 epochs were performed. Better results can be obtained by running more epochs.
A few pointers about developing the pretrained models: these models require an image size of (224, 224, channels) as the input. The number of classes for classification can vary and is handled by the package. The user can also use only the architecture, without the pretrained weights.
The loss function, optimizer, number of epochs and scheduler are chosen by the user. The model summary, training accuracy, validation accuracy, confusion matrix and loss-vs-epoch plot are also provided by the package.
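Since these models expect (224, 224) inputs, images can also be resized up front. A minimal Pillow sketch; the in-memory image here is a stand-in for a file opened from `casting_data/`:

```python
from PIL import Image

# Stand-in for Image.open("casting_data/train/<class>/<image>.jpeg")
img = Image.new("RGB", (640, 480), color=(128, 128, 128))

resized = img.resize((224, 224))
print(resized.size)  # (224, 224)
```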
ManufacturingNet provides many pretrained models with similar scripts: ResNet (different variants), AlexNet, GoogleNet, VGG (different variants) and DenseNet (different variants).
Users can follow a similar tutorial for the pretrained ResNet (different variants).
| github_jupyter |
# d_logisticRegression
----
Written in the Python 3.7.9 Environment with the following package versions
* joblib 1.0.1
* numpy 1.19.5
* pandas 1.3.1
* scikit-learn 0.24.2
* tensorflow 2.5.0
By Nicole Lund
This Jupyter Notebook tunes a Logistic Regression model for Exoplanet classification from Kepler Exoplanet study data.
Column descriptions can be found at https://exoplanetarchive.ipac.caltech.edu/docs/API_kepcandidate_columns.html
**Source Data**
The source data used was provided by University of Arizona's Data Analytics homework assignment. Their data was derived from https://www.kaggle.com/nasa/kepler-exoplanet-search-results?select=cumulative.csv
The full data set was released by NASA at
https://exoplanetarchive.ipac.caltech.edu/cgi-bin/TblView/nph-tblView?app=ExoTbls&config=koi
```
# Import Dependencies
# Plotting
%matplotlib inline
import matplotlib.pyplot as plt
# Data manipulation
import numpy as np
import pandas as pd
from statistics import mean
from operator import itemgetter
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder, MinMaxScaler
from tensorflow.keras.utils import to_categorical
# Parameter Selection
from sklearn import tree
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
# Model Development
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Dropout
from tensorflow.keras.wrappers.scikit_learn import KerasClassifier
# Model Metrics
from sklearn.metrics import classification_report
# Save/load files
from tensorflow.keras.models import load_model
import joblib
# # Ignore deprecation warnings
# import warnings
# warnings.simplefilter('ignore', FutureWarning)
# Set the seed value for the notebook, so the results are reproducible
from numpy.random import seed
seed(1)
```
# Read the CSV and Perform Basic Data Cleaning
```
# Import data
df = pd.read_csv("../b_source_data/exoplanet_data.csv")
# print(df.info())
# Drop columns where all values are null
df = df.dropna(axis='columns', how='all')
# Drop rows containing null values
df = df.dropna()
# Display data info
print(df.info())
print(df.head())
print(df.koi_disposition.unique())
# Rename "FALSE POSITIVE" disposition values
df.koi_disposition = df.koi_disposition.str.replace(' ','_')
print(df.koi_disposition.unique())
```
# Select features
```
# Split dataframe into X and y
# Select features to analyze in X
select_option = 1
if select_option == 1:
# Option 1: Choose all features
X = df.drop("koi_disposition", axis=1)
elif select_option == 2:
# Option 2: Choose all features that are not associated with error measurements
X = df[['koi_fpflag_nt', 'koi_fpflag_ss', 'koi_fpflag_co', 'koi_fpflag_ec', 'koi_period', 'koi_time0bk', 'koi_impact', 'koi_duration','koi_depth', 'koi_prad', 'koi_teq', 'koi_insol', 'koi_model_snr', 'koi_tce_plnt_num', 'koi_steff', 'koi_slogg', 'koi_srad', 'ra', 'dec', 'koi_kepmag']]
elif select_option == 3:
# Option 3: Choose features from Decision Tree and Random Forest assessment.
tree_features = ['koi_fpflag_nt', 'koi_fpflag_co', 'koi_fpflag_ss', 'koi_model_snr']
forest_features = ['koi_fpflag_co', 'koi_fpflag_nt', 'koi_fpflag_ss', 'koi_model_snr', 'koi_prad']
    X = df[list(set(tree_features + forest_features))]  # pandas needs a list, not a set, for column selection
# Define y
y = df["koi_disposition"]
print(X.shape, y.shape)
```
# Create a Train Test Split
Use `koi_disposition` for the y values
```
# Split X and y into training and testing groups
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.3, random_state=42)
# Display training data
X_train.head()
```
# Pre-processing
```
# Scale the data with MinMaxScaler
X_scaler = MinMaxScaler().fit(X_train)
X_train_scaled = X_scaler.transform(X_train)
X_test_scaled = X_scaler.transform(X_test)
# One-Hot-Encode the y data
# Step 1: Label-encode data set
label_encoder = LabelEncoder()
label_encoder.fit(y_train)
encoded_y_train = label_encoder.transform(y_train)
encoded_y_test = label_encoder.transform(y_test)
# Step 2: Convert encoded labels to one-hot-encoding
y_train_categorical = to_categorical(encoded_y_train)
y_test_categorical = to_categorical(encoded_y_test)
print('Unique KOI Disposition Values')
print(y.unique())
print('-----------')
print('Sample KOI Disposition Values and Encoding')
print(y_test[:5])
print(y_test_categorical[:5])
```
# Create and Train the Model - LogisticRegression
```
# Create model newton-cg
model = LogisticRegression(solver='newton-cg', max_iter=1000)
# model = LogisticRegression(solver='sag', max_iter=1000)
# Train model
model.fit(X_train_scaled, y_train)
print(f"Training Data Score: {model.score(X_train_scaled, y_train)}")
print(f"Testing Data Score: {model.score(X_test_scaled, y_test)}")
```
# Hyperparameter Tuning
Use `GridSearchCV` to tune the model's parameters
```
# Create the GridSearchCV model
param_grid = {'solver': ['newton-cg', 'lbfgs', 'liblinear', 'sag', 'saga']}
grid = GridSearchCV(model, param_grid, verbose=3)
# Fit the model using the grid search estimator.
grid.fit(X_train_scaled, y_train)
print(grid.best_params_)
print(grid.best_score_)
```
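`classification_report` is imported at the top of this notebook but never used; it is a quick way to see the per-class precision and recall behind the aggregate scores below. A self-contained sketch on toy labels with the notebook's three classes:

```python
from sklearn.metrics import classification_report

# Toy ground truth and predictions.
y_true = ["CONFIRMED", "CANDIDATE", "FALSE_POSITIVE", "CONFIRMED"]
y_pred = ["CONFIRMED", "CANDIDATE", "FALSE_POSITIVE", "CANDIDATE"]

print(classification_report(y_true, y_pred))

# output_dict=True exposes the same numbers programmatically.
report = classification_report(y_true, y_pred, output_dict=True)
print(report["CONFIRMED"]["recall"])  # 1 of 2 true CONFIRMED found -> 0.5
```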
# Option 1: Model Results when using all features
* solver: 'newton-cg'
* score: 0.8553005758975291
* Training Data Score: 0.8600040874718986
* Testing Data Score: 0.847950428979981
# Option 2: Model Results when using all features not associated with error measurements
* solver: 'sag'
* score: 0.8148348446204654
* Training Data Score: 0.8150391347123959
* Testing Data Score: 0.8021925643469972
# Option 3: Model Results when using selected features from Decision Tree and Random Forest Classifiers
* solver: 'sag'
* score: 0.762941192444191
* Training Data Score: 0.7392192928673615
* Testing Data Score: 0.7397521448999047
# Save the Model
Option 1 was chosen as the model to save because it yielded the best score of all 3 input options.
```
# Save the model
joblib.dump(model, './d_logisticRegression_model.sav')
joblib.dump(grid, './d_logisticRegression_grid.sav')
```
# Model Discussion
The option 1 model score using the logistic regression method is reasonable for predicting exoplanet observations. However, the SVM and Neural Network models perform better.
| github_jupyter |
```
# reload packages
%load_ext autoreload
%autoreload 2
```
### Choose GPU (this may not be needed on your computer)
```
%env CUDA_DEVICE_ORDER=PCI_BUS_ID
%env CUDA_VISIBLE_DEVICES=''
```
### load packages
```
from tfumap.umap import tfUMAP
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from tqdm.autonotebook import tqdm
import umap
import pandas as pd
```
### Load dataset
```
from sklearn.datasets import make_moons
X_train, Y_train = make_moons(1000, random_state=0, noise=0.1)
X_test, Y_test = make_moons(1000, random_state=1, noise=0.1)
X_valid, Y_valid = make_moons(1000, random_state=2, noise=0.1)
def norm(x):
return (x - np.min(x)) / (np.max(x) - np.min(x))
X_train = norm(X_train)
X_valid = norm(X_valid)
X_test = norm(X_test)
X_train_flat = X_train
X_test_flat = X_test
plt.scatter(X_test[:,0], X_test[:,1], c=Y_test)
```
### Create model and train
### define networks
```
dims = (2,)  # a 1-element shape tuple; plain (2) is just the int 2
n_components = 2
from tfumap.vae import VAE, Sampling
encoder_inputs = tf.keras.Input(shape=dims)
x = tf.keras.layers.Flatten()(encoder_inputs)
x = tf.keras.layers.Dense(units=100, activation="relu")(x)
x = tf.keras.layers.Dense(units=100, activation="relu")(x)
x = tf.keras.layers.Dense(units=100, activation="relu")(x)
z_mean = tf.keras.layers.Dense(n_components, name="z_mean")(x)
z_log_var = tf.keras.layers.Dense(n_components, name="z_log_var")(x)
z = Sampling()([z_mean, z_log_var])
encoder = tf.keras.Model(encoder_inputs, [z_mean, z_log_var, z], name="encoder")
encoder.summary()
latent_inputs = tf.keras.Input(shape=(n_components,))
x = tf.keras.layers.Dense(units=100, activation="relu")(latent_inputs)
x = tf.keras.layers.Dense(units=100, activation="relu")(x)
x = tf.keras.layers.Dense(units=100, activation="relu")(x)
decoder_outputs = tf.keras.layers.Dense(units=2, activation="sigmoid")(x)
decoder = tf.keras.Model(latent_inputs, decoder_outputs, name="decoder")
decoder.summary()
```
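The `Sampling` layer imported from `tfumap.vae` presumably implements the standard VAE reparameterization trick, z = mean + exp(0.5 * log_var) * eps with eps ~ N(0, 1); this is an assumption about the layer's internals, based on the usual Keras VAE recipe. A NumPy sketch of that computation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(z_mean, z_log_var):
    # Reparameterization trick: draw eps from N(0, 1), then shift and scale it,
    # so gradients can flow through z_mean and z_log_var.
    eps = rng.standard_normal(z_mean.shape)
    return z_mean + np.exp(0.5 * z_log_var) * eps

z_mean = np.zeros((4, 2))
z_log_var = np.zeros((4, 2))  # log-variance 0 means standard deviation 1
z = sample(z_mean, z_log_var)
print(z.shape)  # (4, 2)
```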
### Create model and train
```
X_train.shape
vae = VAE(encoder, decoder)
vae.compile(optimizer=tf.keras.optimizers.Adam())
vae.fit(X_train, epochs=500, batch_size=128)
z = vae.encoder.predict(X_train)[0]
```
### Plot model output
```
fig, ax = plt.subplots( figsize=(8, 8))
sc = ax.scatter(
z[:, 0],
z[:, 1],
c=Y_train.astype(int)[:len(z)].flatten(),
cmap="tab10",
s=0.1,
alpha=0.5,
rasterized=True,
)
ax.axis('equal')
ax.set_title("VAE latent embedding", fontsize=20)  # this plot shows the VAE's latent space, not UMAP
plt.colorbar(sc, ax=ax);
z_recon = decoder.predict(z)
fig, ax = plt.subplots()
ax.scatter(z_recon[:,0], z_recon[:,1], s = 1, c = z_recon[:,0], alpha = 1)
ax.axis('equal')
```
### Save output
```
from tfumap.paths import ensure_dir, MODEL_DIR
dataset = 'moons'
output_dir = MODEL_DIR/'projections'/ dataset / 'vae'
ensure_dir(output_dir)
encoder.save(output_dir / 'encoder')
decoder.save(output_dir / 'decoder')  # was overwriting the 'encoder' directory
#loss_df.to_pickle(output_dir / 'loss_df.pickle')
np.save(output_dir / 'z.npy', z)
```
| github_jupyter |
```
import tensorflow as tf
label_dict={"with_mask":0, "without_mask":1} #dictionary
categories=["with_mask","without_mask"] #list
label=[0,1]
data_path="C:\\Users\\anush\\Documents\\dataset"
import cv2,os
data=[]
target=[] #empty lists
for category in categories:
folder_path=os.path.join(data_path,category)
img_names=os.listdir(folder_path)
for img_name in img_names:
img_path=os.path.join(folder_path,img_name)
img=cv2.imread(img_path)
try:
gray=cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
resized=cv2.resize(gray,(100,100))
data.append(resized)
target.append(label_dict[category])
except Exception as e:
pass
import numpy as np
data=np.array(data)
data=data/255.0
data
data.shape
data=np.reshape(data,(data.shape[0],100,100,1))
data.shape
target=np.array(target)
target.shape
from keras.utils import np_utils
new_target=np_utils.to_categorical(target)
new_target.shape
from keras.models import Sequential
from keras.layers import Dense, Activation, Flatten, Dropout
from keras.layers import Conv2D,MaxPooling2D
model = Sequential()
model.add(Conv2D(200,(3,3),input_shape=data.shape[1:], activation = "relu"))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Conv2D(100,(3,3), activation = "relu"))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Flatten())
model.add(Dropout(0.5))
model.add(Dense(50, activation='relu'))
model.add(Dense(2, activation='softmax'))
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"] )
from sklearn.model_selection import train_test_split
train_data,test_data,train_target,test_target =train_test_split(data,new_target,test_size=0.1)
from keras.callbacks import ModelCheckpoint
checkpoint=ModelCheckpoint("model-{epoch:03d}.model", save_best_only=True,mode="auto")
history=model.fit(train_data,train_target,epochs=30,validation_split=0.2,callbacks=[checkpoint])
face_cascader=cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
img=cv2.imread("C:\\Users\\anush\\Desktop\\Anushka.jpeg")
gray=cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
faces=face_cascader.detectMultiScale(img,1.3,5)
faces
labels_dict={0:'MASK',1:'NO MASK'}
color_dict={0:(0,255,0),1:(0,0,255)}
source=cv2.VideoCapture(0)
while(True):
ret,img=source.read()
#gray=cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
faces=face_cascader.detectMultiScale(img,1.3,5)
for (x,y,w,h) in faces:
        face_img=img[y:y+h,x:x+w]  # the row slice uses the height h, not w
resized=cv2.resize(face_img,(100,100))
#normalized=resized/255.0
#result=model.predict(normalized)
normimage=resized/255
reshapeimage=np.reshape(normimage,(-1,100,100,1))
modelop=model.predict(reshapeimage)
        label=np.argmax(modelop,axis=1)[0]  # one face per prediction, so index 0
cv2.rectangle(img,(x,y),(x+w,y+h),color_dict[label],2)
cv2.rectangle(img,(x,y-40),(x+w,y),color_dict[label],1)
cv2.putText(img, labels_dict[label], (x, y-10),cv2.FONT_HERSHEY_SIMPLEX,0.8,(255,255,255),2)
# cv2.rectangle(img,(x,y),(x+w,y+h),(0,255,0),2)
# cv2.rectangle(img,(x,y-40),(x+w,y),(0,0,255),1)
#cv2.putText(img, "face", (x, y-10),cv2.FONT_HERSHEY_SIMPLEX,0.8,(255,255,255),2)
cv2.imshow("checking...",img)
key=cv2.waitKey(2)
if(key==27):
break
cv2.destroyAllWindows()
source.release()
```
| github_jupyter |
# Power Production Project for *Fundamentals of Data Analysis* at GMIT
by Radek Wojtczak G00352936<br>
**Instructions:**
>In this project you must perform and explain simple linear regression using Python
on the powerproduction dataset. The goal is to accurately predict wind turbine power output from wind speed values using the data set as a basis.
Your submission must be in the form of a git repository containing, at a minimum, the
following items:
>1. Jupyter notebook that performs simple linear regression on the data set.
>2. In that notebook, an explanation of your regression and an analysis of its accuracy.
>3. Standard items in a git repository such as a README.
>To enhance your submission, you might consider comparing simple linear regression to
other types of regression on this data set.
# Wind power
**How does a wind turbine work?**
Wind turbines can turn the power of wind into the electricity we all use to power our homes and businesses. They can be stand-alone, supplying just one or a very small number of homes or businesses, or they can be clustered to form part of a wind farm.
The visible parts of a wind farm are the ones we're all used to seeing: those towering white or pale grey turbines. Each of these turbines consists of a set of blades, a box beside them called a nacelle and a shaft. The wind – and this can be just a gentle breeze – makes the blades spin, creating kinetic energy. The blades rotating in this way then also make the shaft in the nacelle turn, and a generator in the nacelle converts this kinetic energy into electrical energy.

**What happens to the wind-turbine generated electricity next?**
To connect to the national grid, the electrical energy is then passed through a transformer on the site that increases the voltage to that used by the national electricity system. It’s at this stage that the electricity usually moves onto the National Grid transmission network, ready to then be passed on so that, eventually, it can be used in homes and businesses. Alternatively, a wind farm or a single wind turbine can generate electricity that is used privately by an individual or small set of homes or businesses.
**How strong does the wind need to be for a wind turbine to work?**
Wind turbines can operate in anything from very light to very strong wind speeds. They generate around 80% of the time, but not always at full capacity. In really high winds they shut down to prevent damage.

**Where are wind farms located?**
Wind farms tend to be located in the windiest places possible, to maximise the energy they can create – this is why you’ll be more likely to see them on hillsides or at the coast. Wind farms that are in the sea are called offshore wind farms, whereas those on dry land are termed onshore wind farms.
**Wind energy in Ireland**
Wind energy is currently the largest contributing resource of renewable energy in Ireland. It is both Ireland's largest and cheapest renewable electricity resource. In 2018 wind provided 85% of Ireland's renewable electricity and 30% of our total electricity demand. It is the second greatest source of electricity generation in Ireland after natural gas. Ireland is one of the leading countries in its use of wind energy, ranking 3rd worldwide in 2018, after Denmark and Uruguay.

### Exploring dataset:
```
# importing all necessary packages
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn import linear_model as lm
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
import seaborn as sns
from sklearn import metrics
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import r2_score
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import PolynomialFeatures
from matplotlib import pyplot
# loading our dataset, setting column names and changing the index to start from 1 instead of 0
df = pd.read_csv('dataset/powerproduction.txt', sep=",", header=None)
df.columns = ["speed", "power"]
df = df[1:]
df
# checking for nan values
count_nan = len(df) - df.count()
count_nan
# Converting Strings to Floats
df = df.astype(float)
# showing first 20 results
df.head(20)
# basic statistic of speed column
df['speed'].describe()
# basic statistic of power column
df['power'].describe()
# histogram of 'speed' data
sns.set_style('darkgrid')
sns.distplot(df['speed'])
plt.show()
```
We can clearly see an approximately normal distribution in the 'speed' data above.
```
# histogram of 'power' data
sns.set_style('darkgrid')
sns.distplot(df['power'])
plt.show()
```
As we can see above, the 'power' distribution looks bimodal.
```
# scatter plot of our dataset
plt.xlabel('wind speed',fontsize = 16)
plt.ylabel('power',fontsize = 16)
plt.scatter(df['speed'],df['power'])
plt.show()
df
```
## Regression
Regression analysis is a set of statistical methods used for the estimation of relationships between a dependent variable and one or more independent variables. It can be utilized to assess the strength of the relationship between variables and for modeling the future relationship between them.
The term regression is used when you try to find the relationship between variables.
In Machine Learning, and in statistical modeling, that relationship is used to predict the outcome of future events.
## Linear Regression
The term “linearity” in algebra refers to a linear relationship between two or more variables. If we draw this relationship in a two-dimensional space (between two variables), we get a straight line.
Simple linear regression is useful for finding the relationship between two continuous variables: one is the predictor or independent variable and the other is the response or dependent variable. It looks for a statistical relationship, not a deterministic one. The relationship between two variables is said to be deterministic if one variable can be accurately expressed by the other; for example, using a temperature in degrees Celsius it is possible to accurately compute the temperature in Fahrenheit. A statistical relationship is not accurate in determining the relationship between two variables; for example, the relationship between height and weight.
The core idea is to obtain a line that best fits the data. The best fit line is the one for which the total prediction error (over all data points) is as small as possible, where error is the distance between a point and the regression line.
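This "smallest total squared error" idea can be sketched directly with NumPy before reaching for scikit-learn. The data below are synthetic, generated for illustration only (not the turbine dataset):

```python
import numpy as np

# Synthetic data: y is roughly 3x + 2 plus noise (an assumption for illustration).
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 3 * x + 2 + rng.normal(0, 0.5, size=x.size)

# Closed-form least squares: the slope and intercept that minimise
# the sum of squared vertical distances from the points to the line.
slope = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
intercept = y.mean() - slope * x.mean()

print(slope, intercept)  # close to 3 and 2
```

`np.polyfit(x, y, 1)` returns the same pair, and scikit-learn's `LinearRegression` below computes the same fit on the real data.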
```
# divide data to x = speed and y = power
x = df['speed']
y = df['power']
# model of Linear regression
model = LinearRegression(fit_intercept=True)
# fitting the model (reshape the pandas Series to a 2-D array of shape (n, 1))
model.fit(x.values[:, np.newaxis], y)
# making predictions
xfit = np.linspace(0, 25, 100)
yfit = model.predict(xfit[:, np.newaxis])
# creating plot
plt.xlabel('wind speed',fontsize = 16)
plt.ylabel('power',fontsize = 16)
plt.scatter(x, y)
plt.plot(xfit, yfit, color="red");
# slope and intercept parameters
print("Parameters:", model.coef_, model.intercept_)
print("Model slope: ", model.coef_[0])
print("Model intercept:", model.intercept_)
```
**Different approach: Simple linear regression model**
A fitted line helps to determine whether our model is predicting well on the test dataset.
Using the line, we can calculate the error of each data point based on how far it is from the line.
The error can be positive or negative, and from these errors we can calculate the cost function.
I have used a fitted line plot to display the relationship between one continuous predictor and a response. A fitted line plot shows a scatterplot of the data with a regression line representing the regression equation.
A best fitted line can be roughly determined using an eyeball method: draw a straight line on a scatter plot so that the number of points above and below the line is about equal (and the line passes through as many points as possible). As we can see below, our data are a little bit sinusoidal, and in this case the best fitted line tries to cover most of the points that lie on the diagonal; but since it also has to cover the other data points, it ends up slightly skewed by overestimation and underestimation.
I divided the data into training and testing samples at a 70/30 ratio. After that I will apply different models and compare their accuracy scores.
```
# training our main model
x_train,x_test,y_train,y_test = train_test_split(df[['speed']],df.power,test_size = 0.3)
```
Simple linear regression model
```
reg_simple = lm.LinearRegression()
reg_simple.fit(x_train,y_train)
```
Best fit line on test dataset with simple linear regression
```
plt.xlabel('wind speed',fontsize = 16)
plt.ylabel('power',fontsize = 16)
plt.scatter(x_test,y_test, color='blue')
plt.plot(x_test,reg_simple.predict(x_test),color = 'r')
plt.show()
```
Slope, y-intercept and score of our predictions.
```
reg_simple.coef_ #slope
reg_simple.intercept_ #y-intercept
reg_simple.score(x_test,y_test)
```
## Ridge regression and classification
Ridge regression is an extension of linear regression where the loss function is modified to minimize the complexity of the model. This modification is done by adding a penalty parameter that is equivalent to the square of the magnitude of the coefficients.
Ridge Regression is a technique for analyzing multiple regression data that suffer from multicollinearity. When multicollinearity occurs, least squares estimates are unbiased, but their variances are large, so they may be far from the true value. By adding a degree of bias to the regression estimates, ridge regression reduces the standard errors. The hope is that the net effect will be estimates that are more reliable.
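As a sketch of what the penalty does, the closed-form ridge solution `(X'X + alpha*I)^-1 X'y` can be computed directly with NumPy. The two-feature data below are assumed purely for illustration:

```python
import numpy as np

# Assumed illustrative data: two centred features, so no intercept is needed.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
y = X @ np.array([4.0, -2.0]) + rng.normal(0, 0.1, size=100)

def ridge_coef(X, y, alpha):
    # Closed-form ridge solution: (X'X + alpha*I)^-1 X'y.
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)

# The penalty shrinks the coefficients toward zero as alpha grows.
for alpha in (0.0, 10.0, 1000.0):
    print(alpha, np.round(ridge_coef(X, y, alpha), 2))
```

With `alpha=0` this reduces to ordinary least squares; scikit-learn's `Ridge` used below solves the same problem.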
```
reg_ridge = lm.Ridge(alpha=.5)
reg_ridge.fit(x_train,y_train)
plt.xlabel('wind speed',fontsize = 16)
plt.ylabel('power',fontsize = 16)
plt.scatter(x_test,y_test, color='blue')
plt.plot(x_test,reg_ridge.predict(x_test),color = 'r')
plt.show()
```
Slope, y-intercept and score of our predictions.
```
reg_ridge.coef_ #slope
reg_ridge.intercept_ #y-intercept
reg_ridge.score(x_test,y_test)
```
**With regularization parameter.**
```
reg_ridgecv = lm.RidgeCV(alphas=np.logspace(-6, 6, 13))
reg_ridgecv.fit(x_train,y_train)
plt.xlabel('wind speed',fontsize = 16)
plt.ylabel('power',fontsize = 16)
plt.scatter(x_test,y_test, color='blue')
plt.plot(x_test,reg_ridgecv.predict(x_test),color = 'r')
plt.show()
```
Slope, y-intercept and score of our predictions.
```
reg_ridgecv.coef_ #slope
reg_ridgecv.intercept_ #y-intercept
reg_ridgecv.score(x_test,y_test)
```
# Lasso
Lasso regression is a type of linear regression that uses shrinkage. Shrinkage is where data values are shrunk towards a central point, like the mean. The lasso procedure encourages simple, sparse models (i.e. models with fewer parameters). This particular type of regression is well-suited for models showing high levels of muticollinearity or when you want to automate certain parts of model selection, like variable selection/parameter elimination.
The acronym “LASSO” stands for Least Absolute Shrinkage and Selection Operator.
```
reg_lasso = lm.Lasso(alpha=0.1)
reg_lasso.fit(x_train,y_train)
plt.xlabel('wind speed',fontsize = 16)
plt.ylabel('power',fontsize = 16)
plt.scatter(x_test,y_test, color='blue')
plt.plot(x_test,reg_lasso.predict(x_test),color = 'r')
plt.show()
```
Slope, y-intercept and score of our predictions.
```
reg_lasso.coef_ #slope
reg_lasso.intercept_ #y-intercept
reg_lasso.score(x_test,y_test)
```
# LARS Lasso
In statistics, least-angle regression (LARS) is an algorithm for fitting linear regression models to high-dimensional data, developed by Bradley Efron, Trevor Hastie, Iain Johnstone and Robert Tibshirani.
Suppose we expect a response variable to be determined by a linear combination of a subset of potential covariates. Then the LARS algorithm provides a means of producing an estimate of which variables to include, as well as their coefficients.
Instead of giving a vector result, the LARS solution consists of a curve denoting the solution for each value of the L1 norm of the parameter vector. The algorithm is similar to forward stepwise regression, but instead of including variables at each step, the estimated parameters are increased in a direction equiangular to each one's correlations with the residual.
```
reg_lars = lm.Lars(n_nonzero_coefs=1)
reg_lars.fit(x_train,y_train)
plt.xlabel('wind speed',fontsize = 16)
plt.ylabel('power',fontsize = 16)
plt.scatter(x_test,y_test, color='blue')
plt.plot(x_test,reg_lars.predict(x_test),color = 'r')
plt.show()
```
Slope, y-intercept and score of our predictions.
```
reg_lars.coef_ #slope
reg_lars.intercept_ #y-intercept
reg_lars.score(x_test,y_test)
```
**Accuracy** of all the models is almost 78%, and models with accuracy between 70% and 80% are considered good models.<br>
If the score is between 80% and 90%, the model is considered excellent. If the score is between 90% and 100%, it is probably an overfitting case.
<img src="img/img2.png">
The image above explains over- and under-**estimation** of the data. In the image below we can see how the data points are overestimated and underestimated at some places.
<img src="img/img_exp.png">
## Logistic Regression
Logistic regression is a statistical method for predicting binary classes. The outcome or target variable is dichotomous in nature. Dichotomous means there are only two possible classes. For example, it can be used for cancer detection problems. It computes the probability of an event occurrence.
It is a special case of linear regression where the target variable is categorical in nature. It uses a log of odds as the dependent variable. Logistic Regression predicts the probability of occurrence of a binary event utilizing a logit function.
**Linear Regression Vs. Logistic Regression**
Linear regression gives you a continuous output, but logistic regression provides a discrete output. Examples of continuous outputs are house prices and stock prices. Examples of discrete outputs are predicting whether a patient has cancer or not, or predicting whether a customer will churn. Linear regression is estimated using Ordinary Least Squares (OLS) while logistic regression is estimated using the Maximum Likelihood Estimation (MLE) approach.
<img src="img/linlog.png">
```
# Logistic regression model
logistic_regression = LogisticRegression(max_iter=5000)
# importing necessary packages
from sklearn import preprocessing
from sklearn import utils
# encoding data to be able to proceed with Logistic regression
lab_enc = preprocessing.LabelEncoder()
y_train_encoded = lab_enc.fit_transform(y_train)
print(y_train_encoded)
print(utils.multiclass.type_of_target(y_train))
print(utils.multiclass.type_of_target(y_train.astype('int')))
print(utils.multiclass.type_of_target(y_train_encoded))
# training model
logistic_regression.fit(x_train, y_train_encoded)
# predicting "y"
y_pred = logistic_regression.predict(x_test)
# creating plot
plt.xlabel('wind speed',fontsize = 16)
plt.ylabel('power',fontsize = 16)
plt.scatter(x_test,y_test, color='blue')
plt.plot(x_test,logistic_regression.predict_proba(x_test)[:,1],color = 'r')
plt.show()
```
Slope, y-intercept and score of our predictions.
```
logistic_regression.coef_.mean() #slope
logistic_regression.intercept_.mean() #y-intercept
test_enc = preprocessing.LabelEncoder()
y_test_encoded = test_enc.fit_transform(y_test)
logistic_regression.score(x_test,y_test_encoded)
# trying to get rid of outliers
filter = df["power"]==0.0
filter
# using enumerate() + list comprehension
# to return true indices.
res = [i for i, val in enumerate(filter) if val]
# printing result
print ("The list indices having True values are : " + str(res))
# updating the dataset by dropping zero-power rows (not including the first few data points)
update = df.drop(df.index[[15, 16, 24, 26, 31, 35, 37, 39, 42, 43, 44, 47, 60, 65, 67, 70, 73, 74, 75, 83, 89, 105, 110, 111, 114, 133, 135, 136, 140, 149, 208, 340, 404, 456, 490, 491, 492, 493, 494, 495, 496, 497, 498, 499]])
update
# training updated data
x_train,x_test,y_train,y_test = train_test_split(update[['speed']],update.power,test_size = 0.3)
# updated model
log = LogisticRegression(max_iter=5000)
# encoding data again
lab_enc = preprocessing.LabelEncoder()
y_train_encoded = lab_enc.fit_transform(y_train)
print(y_train_encoded)
print(utils.multiclass.type_of_target(y_train))
print(utils.multiclass.type_of_target(y_train.astype('int')))
print(utils.multiclass.type_of_target(y_train_encoded))
# fitting data
log.fit(x_train, y_train_encoded)
# predicting "y"
y_pred = log.predict_proba(x_test)[:,1]
# creating plot
plt.xlabel('wind speed',fontsize = 16)
plt.ylabel('power',fontsize = 16)
plt.scatter(x_test,y_test, color='blue')
plt.plot(x_test,log.predict_proba(x_test)[:,1],color = 'r')
plt.show()
```
**Logistic regression** is not able to handle a large number of categorical features/variables. It is vulnerable to overfitting. Also, can't solve the non-linear problem with the logistic regression that is why it requires a transformation of non-linear features. Logistic regression will not perform well with independent variables that are not correlated to the target variable and are very similar or correlated to each other.
It performed very badly on our data, with a score below 0.05, even when I tried to cut the outliers.
## Polynomial regression
is a special case of linear regression where we fit a polynomial equation on the data with a curvilinear relationship between the target variable and the independent variables.
In a curvilinear relationship, the value of the target variable changes in a non-uniform manner with respect to the predictor (s).
The number of higher-order terms increases with the increasing value of n, and hence the equation becomes more complicated.
While there might be a temptation to fit a higher degree polynomial to get lower error, this can result in over-fitting. Always plot the relationships to see the fit and focus on making sure that the curve fits the nature of the problem. Here is an example of how plotting can help:
<img src="img/fitting.png">
Especially look out for the curve towards the ends and see whether those shapes and trends make sense. Higher-degree polynomials can end up producing weird results on extrapolation.
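The trade-off can be sketched by comparing train and test scores across degrees. The data below are synthetic curvilinear data, assumed for illustration (not the turbine dataset):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import PolynomialFeatures

# Synthetic curvilinear data (an assumption, not the turbine dataset).
rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 2, 60))[:, np.newaxis]
y = np.sin(2 * x).ravel() + rng.normal(0, 0.1, size=60)

x_tr, x_te, y_tr, y_te = train_test_split(x, y, test_size=0.3, random_state=0)

# The train score keeps climbing with the degree; the test score is what
# tells us whether the extra flexibility actually generalises.
for degree in (1, 4, 15):
    poly = PolynomialFeatures(degree=degree)
    model = LinearRegression().fit(poly.fit_transform(x_tr), y_tr)
    print(degree,
          round(model.score(poly.transform(x_tr), y_tr), 3),
          round(model.score(poly.transform(x_te), y_te), 3))
```

On data like this, degree 1 underfits, a moderate degree captures the curve, and a very high degree mainly chases the noise.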
```
# Training Polynomial Regression Model
poly_reg = PolynomialFeatures(degree = 4)
x_poly = poly_reg.fit_transform(x_train)
lin_reg = LinearRegression()
lin_reg.fit(x_poly, y_train)
# Predict results with Polynomial Regression (transform, not fit_transform, on the test data)
poly = lin_reg.predict(poly_reg.transform(x_test))
poly
# Change into array
x = np.array(df['speed'])
y = np.array(df['power'])
# Changing the shape of array
x = x.reshape(-1,1)
y = y.reshape(-1,1)
# Visualise the Results of Polynomial Regression
plt.scatter(x_train, y_train, color = 'blue')
plt.plot(x, lin_reg.predict(poly_reg.transform(x)), color = 'red')
plt.title('Polynomial Regression')
plt.xlabel('Wind speed')
plt.ylabel('Power')
plt.show()
```
Slope, y-intercept and score of our predictions.
```
lin_reg.coef_.mean() #slope
lin_reg.intercept_ #y-intercept
lin_reg.score(poly_reg.transform(x_test), y_test) #score
```
## Spearman’s Rank Correlation
This statistical method quantifies the degree to which ranked variables are associated by a monotonic function, meaning an increasing or decreasing relationship. As a statistical hypothesis test, the method assumes that the samples are uncorrelated (fail to reject H0).
>The Spearman rank-order correlation is a statistical procedure that is designed to measure the relationship between two variables on an ordinal scale of measurement.
>— Nonparametric Statistics for Non-Statisticians: A Step-by-Step Approach, 2009.
The intuition for the Spearman’s rank correlation is that it calculates a Pearson’s correlation (e.g. a parametric measure of correlation) using the rank values instead of the real values. Where the Pearson’s correlation is the calculation of the covariance (or expected difference of observations from the mean) between the two variables normalized by the variance or spread of both variables.
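This "Pearson on ranks" intuition can be checked directly on a small assumed sample with a monotonic but non-linear relationship:

```python
import numpy as np
from scipy.stats import pearsonr, rankdata, spearmanr

# Small assumed sample: y grows monotonically with x, but not linearly.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = x ** 3

# Spearman's rho equals Pearson's r computed on the ranks of the data.
rho, _ = spearmanr(x, y)
r_on_ranks, _ = pearsonr(rankdata(x), rankdata(y))

print(rho, r_on_ranks)  # both 1.0: the ranks agree perfectly
```

Pearson's r on the raw values would be below 1 here, because the relationship is curved; the rank correlation only cares that it is monotonic.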
Spearman’s rank correlation can be calculated in Python using the spearmanr() SciPy function.
The function takes two real-valued samples as arguments and returns both the correlation coefficient in the range between -1 and 1 and the p-value for interpreting the significance of the coefficient.
```
# importing sperman correlation
from scipy.stats import spearmanr
# prepare data
x = df['speed']
y = df['power']
# calculate spearman's correlation
coef, p = spearmanr(x, y)
print('Spearmans correlation coefficient: %.3f' % coef)
# interpret the significance
alpha = 0.05
if p > alpha:
print('Samples are uncorrelated (fail to reject H0) p=%.3f' % p)
else:
print('Samples are correlated (reject H0) p=%.3f' % p)
```
The statistical test reports a strong positive correlation with a value of 0.819. The p-value is close to zero, which means that the likelihood of observing the data given that the samples are uncorrelated is very unlikely (e.g. 95% confidence) and that we can reject the null hypothesis that the samples are uncorrelated.
## Kendall’s Rank Correlation
The intuition for the test is that it calculates a normalized score for the number of matching or concordant rankings between the two samples. As such, the test is also referred to as Kendall’s concordance test.
The Kendall’s rank correlation coefficient can be calculated in Python using the kendalltau() SciPy function. The test takes the two data samples as arguments and returns the correlation coefficient and the p-value. As a statistical hypothesis test, the method assumes (H0) that there is no association between the two samples.
```
# importing kendall correaltion
from scipy.stats import kendalltau
# calculate kendall's correlation
coef, p = kendalltau(x, y)
print('Kendall correlation coefficient: %.3f' % coef)
# interpret the significance
alpha = 0.05
if p > alpha:
print('Samples are uncorrelated (fail to reject H0) p=%.3f' % p)
else:
print('Samples are correlated (reject H0) p=%.3f' % p)
```
Running the example calculates the Kendall’s correlation coefficient as 0.728, which is highly correlated.
The p-value is close to zero (and printed as zero), as with the Spearman’s test, meaning that we can confidently reject the null hypothesis that the samples are uncorrelated.
## Conclusion
Spearman’s and Kendall’s rank correlations show us that our data are strongly correlated. After trying Linear, Ridge, Lasso and LARS Lasso regressions, all of them proved equally effective, so the simplest choice would be to stick with linear regression.
Wanting to find a better option, I tried logistic regression and found it pretty useless for our dataset, even after removing the outliers.
Next in line was polynomial regression, and it was a great success with a score of nearly 90%. Given these results, the best approach for our dataset would be polynomial regression, with linear regression as our second choice if we want to keep things simple.
**References:**
- https://www.goodenergy.co.uk/media/1775/howawindturbineworks.jpg?width=640&height=¢er=0.5,0.5&mode=crop
- https://www.nationalgrid.com/stories/energy-explained/how-does-wind-turbine-work
- https://www.pluralsight.com/guides/linear-lasso-ridge-regression-scikit-learn
- https://www.seai.ie/technologies/wind-energy/
- https://towardsdatascience.com/ridge-regression-python-example-f015345d936b
- https://towardsdatascience.com/ridge-and-lasso-regression-a-complete-guide-with-python-scikit-learn-e20e34bcbf0b
- https://realpython.com/linear-regression-in-python/
- https://en.wikipedia.org/wiki/Least-angle_regression
- https://towardsdatascience.com/simple-and-multiple-linear-regression-in-python-c928425168f9
- https://jakevdp.github.io/PythonDataScienceHandbook/05.06-linear-regression.html
- https://www.statisticshowto.com/lasso-regression/
- https://saskeli.github.io/data-analysis-with-python-summer-2019/linear_regression.html
- https://www.w3schools.com/python/python_ml_linear_regression.asp
- https://www.geeksforgeeks.org/linear-regression-python-implementation/
- https://www.kdnuggets.com/2019/03/beginners-guide-linear-regression-python-scikit-learn.html
- https://towardsdatascience.com/an-introduction-to-linear-regression-for-data-science-9056bbcdf675
- https://www.kaggle.com/ankitjha/comparing-regression-models
- https://machinelearningmastery.com/compare-machine-learning-algorithms-python-scikit-learn/
- https://www.datacamp.com/community/tutorials/understanding-logistic-regression-python
- https://www.researchgate.net/post/Is_there_a_test_which_can_compare_which_of_two_regression_models_is_best_explains_more_variance
- https://heartbeat.fritz.ai/logistic-regression-in-python-using-scikit-learn-d34e882eebb1
- https://www.analyticsvidhya.com/blog/2015/08/comprehensive-guide-regression/
- https://towardsdatascience.com/machine-learning-polynomial-regression-with-python-5328e4e8a386
- https://www.w3schools.com/python/python_ml_polynomial_regression.asp
- https://www.dailysmarty.com/posts/polynomial-regression
- https://www.analyticsvidhya.com/blog/2015/08/comprehensive-guide-regression/
- https://machinelearningmastery.com/how-to-calculate-nonparametric-rank-correlation-in-python/
# Building a Machine Learning model to detect spam in SMS
> Building a machine learning model to predict whether an SMS message is spam or not
- toc: true
- badges: true
- comments: true
- categories: [jupyter]
In this notebook, we'll show how to build a simple machine learning model to predict whether an SMS is spam or not.
The notebook was built to go along with my talk in May 2020 for [Vonage Developer Day](https://www.vonage.com/about-us/vonage-stories/vonage-developer-day/)
youtube: https://www.youtube.com/watch?v=5d4_HpMLXf4&t=1s
We'll be using the scikit-learn library to train a model on a set of messages which are labeled as spam and non spam(aka ham) messages.
After our model is trained, we'll deploy to an AWS Lambda in which its input will be a message, and its output will be the prediction(spam or ham).
Before we build a model, we'll need some data. So we'll use the [SMS Spam Collection DataSet](http://archive.ics.uci.edu/ml/datasets/SMS+Spam+Collection).
This dataset contains over 5k messages which are labeled spam or ham.
In the following cell, we'll download the dataset
```
!wget --no-check-certificate https://archive.ics.uci.edu/ml/machine-learning-databases/00228/smsspamcollection.zip
!unzip /content/smsspamcollection.zip
```
Once we have downloaded the dataset, we'll load it into a Pandas Dataframe and view the first few rows.
```
import pandas as pd
df = pd.read_csv("/content/SMSSpamCollection", sep='\t', header=None, names=['label', 'message'])
df.head()
```
Next, we need to understand the data before building a model.
We'll first see how many messages are labeled spam or ham.
```
df.label.value_counts()
```
From the cell above, we see that 4825 messages are valid messages, and only 747 messages are labeled as spam.
Lets now just view some messages that are ham and some that are spam
```
spam = df[df["label"] == "spam"]
spam.head()
ham = df[df["label"] == "ham"]
ham.head()
```
After looking at some ham and spam messages, we can see the spam messages do look spammy.
# Preprocessing
The next step is to get the dataset ready to build a model. A machine learning model can only deal with numbers, so we'll have to convert our text into numbers using `TfidfVectorizer`
TfidfVectorizer converts a collection of raw documents to a matrix of [term frequency-inverse document frequency](http://www.tfidf.com/) features. Also known as TF-IDF.
In our case, a document is each message. For each message, we compute the number of times a term occurs in the document divided by the total number of terms in the document, multiplied by the total number of documents divided by the number of documents that contain the specific term.

[source](https://towardsdatascience.com/spam-or-ham-introduction-to-natural-language-processing-part-2-a0093185aebd)
The output will be a matrix in which the rows will be all the terms, and the colums will be all the documents

[This notebook by Mike Bernico](https://github.com/mbernico/CS570/blob/master/module_1/TFIDF.ipynb) goes into more detail on TF-IDF and how to calculate it without using sklearn.
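As a small sketch of this weighting on a toy corpus (assumed messages, not the SMS dataset), a term that appears in fewer documents receives a higher weight:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy corpus of assumed messages, just to see what the vectorizer produces.
docs = ["free prize now", "free cash now", "see you at lunch"]

vec = TfidfVectorizer()
tfidf = vec.fit_transform(docs)  # sparse matrix: one row per document

row0 = tfidf.toarray()[0]
vocab = vec.vocabulary_  # maps each term to its column index

# "free" appears in 2 of the 3 documents while "prize" appears in only 1,
# so "prize" carries the higher weight in the first document.
print(row0[vocab["prize"]] > row0[vocab["free"]])  # True
```

Note that scikit-learn uses a smoothed idf and L2-normalizes each row by default, so the exact numbers differ slightly from the textbook formula, but the ordering of weights behaves as described.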
First, we'll split the dataset into a train and test set. For the training set, we'll take 80% of the data and use it to train the model. The rest of the dataset (20%) will be used for testing the model.
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(df['message'], df['label'], test_size = 0.2, random_state = 1)
```
Once we split our data, we can use the TfidfVectorizer. This will return a sparse matrix (a matrix with mostly 0's).
```
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(X_train)
X_test = vectorizer.transform(X_test)
```
After we fit the TfidfVectorizer to the sentences, let's view the matrix as a pandas dataframe to understand what TfidfVectorizer is doing.
```
feature_names = vectorizer.get_feature_names()
# X_train is the fitted sparse matrix; transpose it so the terms become the rows
tfid_df = pd.DataFrame(X_train.T.todense(), index=feature_names)
print(tfid_df[1200:1205])
```
In the table above, the rows are the words in our dataset and the columns are the sentence indices. We've only printed a few rows from the middle of the dataframe for a better view of the data.
Next, we'll train a model using Gaussian Naive Bayes in scikit-learn. It's a good starting algorithm for text classification. We'll then print out the model's accuracy on the test set along with the confusion matrix.
## Model Training
To train our model, we'll use a Naive Bayes algorithm.
The formula for Navie Bayes is:
\\[ P(s|w) = \frac{P(w|s) \times P(s)}{P(w|s) \times P(s) + P(w|h) \times P(h)} \\]
where:
**P(s|w)** - the probability(**P**) that a message is spam(**s**) given(**|**) a word(**w**)
**P(w|s)** - the probability(**P**) that the word(**w**) appears in spam(**s**) messages
**P(s)** - the overall probability(**P**) that any message is spam(**s**)
**P(w|h)** - the probability(**P**) that the word(**w**) appears in non-spam(**h**) messages
**P(h)** - the overall probability(**P**) that any message is not spam(**h**)
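Plugging some toy figures into the formula above shows how the pieces combine. All the numbers here are assumed, purely for illustration:

```python
# Assumed toy figures, purely for illustration:
p_w_given_s = 0.6   # P(w|s): the word appears in 60% of spam messages
p_w_given_h = 0.05  # P(w|h): the word appears in 5% of ham messages
p_s = 0.2           # P(s): 20% of all messages are spam
p_h = 1 - p_s       # P(h): the remaining 80% are ham

# Bayes' rule, exactly as in the formula above.
p_s_given_w = (p_w_given_s * p_s) / (p_w_given_s * p_s + p_w_given_h * p_h)

print(round(p_s_given_w, 2))  # 0.75
```

So even though only 20% of messages are spam, seeing this one word pushes the spam probability up to 75%; the real classifier combines such evidence over every word in the message.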
```
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
clf = GaussianNB()
clf.fit(X_train.toarray(),y_train)
y_true, y_pred = y_test, clf.predict(X_test.toarray())
accuracy_score(y_true, y_pred)
print(classification_report(y_true, y_pred))
cmtx = pd.DataFrame(
confusion_matrix(y_true, y_pred, labels=['ham', 'spam']),
index=['ham', 'spam'],
columns=['ham', 'spam']
)
print(cmtx)
```
## Grid Search
```
from sklearn.model_selection import GridSearchCV
parameters = {"var_smoothing":[1e-9, 1e-5, 1e-1]}
gs_clf = GridSearchCV(
GaussianNB(), parameters)
gs_clf.fit(X_train.toarray(),y_train)
gs_clf.best_params_
y_true, y_pred = y_test, gs_clf.predict(X_test.toarray())
accuracy_score(y_true, y_pred)
cmtx = pd.DataFrame(
confusion_matrix(y_true, y_pred, labels=['ham', 'spam']),
index=['ham', 'spam'],
columns=['ham', 'spam']
)
print(cmtx)
print(classification_report(y_true, y_pred))
```
From our trained model, we get about 96% accuracy, which is pretty good.
We also print out the confusion matrix. This shows how many messages were classified correctly: in the first row and first column, we see that 866 messages classified as ham were actually ham, and 136 messages predicted as spam were in fact spam.
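As a sketch of how such a matrix is read, accuracy, precision and recall all fall straight out of it. The off-diagonal counts below are assumed for illustration:

```python
import numpy as np

# A 2x2 confusion matrix (rows = true label, columns = predicted label).
# The off-diagonal counts here are assumed, for illustration only.
cm = np.array([[866,  10],    # true ham:  866 kept, 10 wrongly flagged
               [ 30, 136]])   # true spam: 30 missed, 136 caught

accuracy = np.trace(cm) / cm.sum()
spam_precision = cm[1, 1] / cm[:, 1].sum()  # of the flagged messages, how many were spam
spam_recall = cm[1, 1] / cm[1, :].sum()     # of the actual spam, how many were caught

print(round(accuracy, 3), round(spam_precision, 3), round(spam_recall, 3))
```

For a spam filter, precision matters most: every false positive is a legitimate message a user never sees.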
Next, lets test our model with some examples messages
## Inference
```
message = vectorizer.transform(["i'm on my way home"])
message = message.toarray()
gs_clf.predict(message)
message = vectorizer.transform(["this offer is to good to be true"])
message = message.toarray()
gs_clf.predict(message)
```
The final step is to save the model and the tf-idf vectorizer. We will use these when classifying incoming messages in our lambda function.
```
import joblib
joblib.dump(gs_clf, "model.pkl")
joblib.dump(vectorizer, "vectorizer.pkl")
```
# Lambda
Once our model is trained, we'll put it in a production environment.
For this example, we'll create a lambda function to host our model.
The lambda function will be attached to an API gateway in which we'll be able to have a endpoint to make our predictions
Deploying a scikit-learn model to lambda isnt as easy as you would think. You can't just import your libraries, espcially scikit-learn to work.
Here's what we'll need to do in order to deploy our model
* Spin up EC2 instance
* SSH into the instance and install our dependencies
* copy the lambda function code from this [repo](https://github.com/tbass134/SMS-Spam-Classifier-lambda)
* Run a bash script that:
  * zips up the code, including the packages
  * uploads the zip to S3
  * points the lambda function to the S3 file
## Create an EC2 instance
If you have an aws account:
* Go to EC2 on the console and click `Launch Instance`.
* Select the first available AMI(Amazon Linux 2 AMI).
* Select the t2.micro instance, then click `Review and Launch`
* Click the Next button
* Under IAM Role, click Create New Role
* Create a new role with the following policies:
  * AmazonS3FullAccess
  * AWSLambdaFullAccess
* Name your role and click Create Role
These permissions will be needed when uploading our code to your S3 bucket and pointing the lambda function to the zip file we'll create later.
* Create a new private key pair and click `Launch Instance`
* Note: in order to use the key, run `chmod 400` on it once downloaded to your local machine.
After the instance spins up, you'll need to connect to it via ssh
* Find the newly created instance on EC2 and click `Connect`
* On your local machine, open a terminal and run the command from the Example. It will look something like:
```bash
ssh -i "{PATH TO KEY}" {user_name}@ec2-{instance_ip}.compute-1.amazonaws.com
```
## Install packages
Before installing packages, you will need to install python and pip. You can follow the steps [here](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb-cli3-install-linux.html)
These will most likely be:
```bash
sudo yum install python37
curl -O https://bootstrap.pypa.io/get-pip.py
python3 get-pip.py --user
```
Verify pip is installed using:
```bash
pip --version
```
You will also need to install git
```bash
sudo yum install git -y
```
When connected to the instance, clone the repo
```bash
git clone https://github.com/tbass134/SMS-Spam-Classifier-lambda
```
This repo contains everything we need to make predictions, including the pickled model and vectorizer, as well as the lambda function that makes predictions and returns the response.
cd into the `SMS-Spam-Classifier-lambda/lambda` folder
* Next, you will need to install the `sklearn` library.
* On your instance, type:
`pip install -t . sklearn`
This will install the library into the current folder.
Next, if you want to use your own trained model, it will need to be uploaded to your EC2 instance.
If you're using Google Colab, navigate to the Files tab, right-click on `my_model.pkl` and `vectorizer.pkl`, and click Download.
Note: the sample repo already contains a trained model, so this step is optional.
There are a few ways to upload your trained model:
* Fork the repo, add your models, and check the fork out on the EC2 instance
* Use `scp` to copy the files from your local machine to the instance
To upload the vectorizer we saved:
```bash
scp -i {PATH_TO_KEY} vectorizer.pkl ec2-user@{INSTANCE_NAME}:
```
and we'll do the same for the model
```bash
scp -i {PATH_TO_KEY} my_model.pkl ec2-user@{INSTANCE_NAME}:
```
* The other method is to upload the files to S3 and have your lambda function load them from there using Boto3:
```Python
import boto3
import joblib
from io import BytesIO

s3 = boto3.resource('s3')

def load_s3_file(key):
    obj = s3.Object(MODEL_BUCKET, key)
    body = obj.get()['Body'].read()
    return joblib.load(BytesIO(body))

model = load_s3_file({PATH_TO_S3_MODEL})
vectorizer = load_s3_file({PATH_TO_S3_VECTORIZER})
```
## Create lambda function
* On the AWS console, navigate to https://console.aws.amazon.com/lambda
* Click on the Create function button
* Make sure `Author from scratch` is selected
* Name your function
* Set the runtime to Python 3.7
* Under Execution Role, create a new role with basic permissions
* Click `Create Function`
## Create S3 bucket
In order to push our code to a lambda function, we first need to zip up the code and libraries and copy the archive to an S3 bucket.
From here, our lambda function will load the zip file from this bucket.
* On the AWS console under `Services`, Search for `S3`
* Click `Create Bucket`
* Name your bucket, and click Create Bucket at the bottom of the page.
## Upload to lambda
Next, we'll run the `publish.sh` script in the root of the repo, which does the following:
* zips up the packages, including our Python code, model, and transformer
* uploads the zip to an S3 bucket
* points our lambda function to this bucket
When calling this script, we need to pass in 3 arguments:
* The name of the zip file. We can call it `zip.zip` for now
* The name of the S3 bucket that we will upload the zip to
* The name of the lambda function
```bash
bash publish.sh {ZIP_FILE_NAME} {S3_BUCKET} {LAMBDA_FUNCTION_NAME}
```
If everything is successful, your lambda function will be deployed.
If you see errors, make sure your EC2 instance has an IAM role with both S3 and Lambda permissions.
See this [guide](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html) for more info.
## Add HTTP endpoint
The final piece is to add an API Gateway.
On the Configuration tab of the lambda function:
* click `Add Trigger`
* Click on the select a trigger box and select `API Gateway`
* Click on `Create an API`
* Set API Type to `REST API`
* Set Security to `OPEN` (make sure to secure when deploying for production)
* At the bottom, click `Add`
For details, see this [documentation](https://docs.aws.amazon.com/apigateway/latest/developerguide/integrating-api-with-aws-services-lambda.html#api-as-lambda-proxy-create-api-resources)
We can now test the endpoint by making a call to it with curl.
Under the `API Gateway` section of the lambda function, click on the API to find the endpoint URL.
In the lambda function, we are looking for the `message` GET parameter. When we make our request, we'll pass a query parameter called `message`. This will contain the string we want to make a prediction on.
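A minimal sketch of what such a handler might look like: the factory function, response shape, and dependency injection below are assumptions for illustration; the actual handler lives in the repo linked above.

```python
import json

def make_handler(model, vectorizer):
    """Build a Lambda-style handler around a fitted model and vectorizer
    (hypothetical helper; the repo's handler may load these from disk or S3)."""
    def handler(event, context=None):
        # API Gateway passes GET query parameters under `queryStringParameters`
        params = event.get("queryStringParameters") or {}
        message = params.get("message")
        if not message:
            return {"statusCode": 400,
                    "body": json.dumps({"error": "missing 'message' parameter"})}
        # Vectorize the raw text exactly as during training, then predict
        features = vectorizer.transform([message]).toarray()
        label = model.predict(features)[0]
        return {"statusCode": 200,
                "body": json.dumps({"prediction": str(label)})}
    return handler
```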
```
ham_message = "im on my way home".replace(" ", "%20")
ham_message
%%bash -s "$ham_message"
curl --location --request GET "https://e18fmcospk.execute-api.us-east-1.amazonaws.com/default/spam-detection?message=$1"
spam_message = "this offer is to good to be true".replace(" ", "%20")
spam_message
%%bash -s "$spam_message"
curl --location --request GET "https://e18fmcospk.execute-api.us-east-1.amazonaws.com/default/spam-detection?message=$1"
```
# Google Cloud Functions
For non-Amazon users, we can use Google Cloud Functions to deploy our model for use in our Vonage SMS API app.

Code is [here](https://gist.github.com/tbass134/7985c0adf44c938d6e683c18dabac8f9)
# Create Vonage SMS Application
The final step is to build a Vonage SMS application.
Have a look at this blog post on how to build one yourself.
Our application will receive an SMS:
https://developer.nexmo.com/messaging/sms/code-snippets/receiving-an-sms
and will send an SMS back to the user with its prediction:
https://developer.nexmo.com/messaging/sms/code-snippets/send-an-sms

To work through this example, you will need the following:
* Log in / sign up to the [Vonage SMS API](https://dashboard.nexmo.com/sign-up)
* Rent a phone number
* Assign a publicly accessible URL via [ngrok](https://www.nexmo.com/blog/2017/07/04/local-development-nexmo-ngrok-tunnel-dr) to that phone number
We'll also build a simple Flask app that will make a request to our API Gateway
```bash
git clone https://github.com/tbass134/SMS-Spam-Classifier-lambda.git
cd app
```
Next we'll create a virtual environment and install the requirements using pip
```bash
virtualenv venv --python=python3
source venv/bin/activate
pip install -r requirments.txt
```
Next, create a `.env` file with the following:
```bash
NEXMO_API_KEY={YOUR_NEXMO_API_KEY}
NEXMO_API_SECRET={YOUR_NEXMO_API_SECRET}
NEXMO_NUMBER={YOUR_NEXMO_NUMBER}
API_GATEWAY_URL={FULL_API_GATEWAY}
```
Finally, you can run the application:
```bash
python app.py
```
This will spin up a web server listening on port 3000.
# Fin
```
import random
import torch.nn as nn
import torch
import pickle
import pandas as pd
from pandas import Series, DataFrame
from pandarallel import pandarallel
pandarallel.initialize(progress_bar=False)
from sklearn.metrics import roc_auc_score, roc_curve, accuracy_score, matthews_corrcoef, f1_score, precision_score, recall_score
import numpy as np
import torch.optim as optim
folder = "/data/AIpep-clean/"
import matplotlib.pyplot as plt
from vocabulary import Vocabulary
from datasethem import Dataset
from datasethem import collate_fn_no_activity as collate_fn
from models import Generator
from tqdm.autonotebook import trange, tqdm
import os
from collections import defaultdict
```
# Load data
```
df = pd.read_pickle(folder + "pickles/DAASP_RNN_dataset_with_hemolysis.plk")
df_training = df.query("Set == 'training' and (baumannii == True or aeruginosa == True) and isNotHemolytic==1")
df_test = df.query("Set == 'test' and (baumannii == True or aeruginosa == True) and isNotHemolytic==1")
if torch.cuda.is_available():
device = "cuda"
else:
device = "cpu"
print("Against A. baumannii or P. aeruginosa:\nactive training "+ str(len(df_training[df_training["activity"]==1])) \
+ "\nactive test " + str(len(df_test[df_test["activity"]==1])) \
+ "\ninactive training "+ str(len(df_training[df_training["activity"]==0])) \
+ "\ninactive test " + str(len(df_test[df_test["activity"]==0])))
```
# Define helper functions
```
def randomChoice(l):
return l[random.randint(0, len(l) - 1)]
def categoryFromOutput(output):
top_n, top_i = output.topk(1)
category_i = top_i[0].item()
return category_i
def nan_equal(a,b):
try:
np.testing.assert_equal(a,b)
except AssertionError:
return False
return True
def models_are_equal(model1, model2):
    # assert the structural comparisons, otherwise their results are silently discarded
    assert model1.vocabulary == model2.vocabulary
    assert model1.hidden_size == model2.hidden_size
    for a, b in zip(model1.model.parameters(), model2.model.parameters()):
        if nan_equal(a.detach().numpy(), b.detach().numpy()):
            print("true")
```
# Define hyper parameters
```
n_embedding = 100
n_hidden = 400
n_layers = 2
n_epoch = 200
learning_rate = 0.00001
momentum = 0.9
batch_size = 10
epoch = 22
```
# Loading and Training
```
if not os.path.exists(folder+"pickles/generator_TL_gramneg_results_hem.pkl"):
model = Generator.load_from_file(folder+"models/RNN-generator/ep{}.pkl".format(epoch))
model.to(device)
vocabulary = model.vocabulary
df_training_active = df_training.query("activity == 1")
df_test_active = df_test.query("activity == 1")
df_training_inactive = df_training.query("activity == 0")
df_test_inactive = df_test.query("activity == 0")
training_dataset_active = Dataset(df_training_active, vocabulary, with_activity=False)
test_dataset_active = Dataset(df_test_active, vocabulary, with_activity=False)
training_dataset_inactive = Dataset(df_training_inactive, vocabulary, with_activity=False)
test_dataset_inactive = Dataset(df_test_inactive, vocabulary, with_activity=False)
optimizer = optim.SGD(model.model.parameters(), lr = learning_rate, momentum=momentum)
# the only one used for training
training_dataloader_active = torch.utils.data.DataLoader(training_dataset_active, batch_size=batch_size, shuffle=True, collate_fn = collate_fn, drop_last=True, pin_memory=True, num_workers=4)
# used for evaluation
test_dataloader_active = torch.utils.data.DataLoader(test_dataset_active, batch_size=batch_size, shuffle=False, collate_fn = collate_fn, drop_last=False, pin_memory=True, num_workers=4)
training_dataloader_inactive = torch.utils.data.DataLoader(training_dataset_inactive, batch_size=batch_size, shuffle=False, collate_fn = collate_fn, drop_last=False, pin_memory=True, num_workers=4)
test_dataloader_inactive = torch.utils.data.DataLoader(test_dataset_inactive, batch_size=batch_size, shuffle=False, collate_fn = collate_fn, drop_last=False, pin_memory=True, num_workers=4)
training_dataloader_active_eval = torch.utils.data.DataLoader(training_dataset_active, batch_size=batch_size, shuffle=False, collate_fn = collate_fn, drop_last=False, pin_memory=True, num_workers=4)
training_dictionary = {}
for e in trange(1, n_epoch + 1):
print("Epoch {}".format(e))
for i_batch, sample_batched in tqdm(enumerate(training_dataloader_active), total=len(training_dataloader_active) ):
seq_batched = sample_batched[0].to(model.device, non_blocking=True)
seq_lengths = sample_batched[1].to(model.device, non_blocking=True)
nll = model.likelihood(seq_batched, seq_lengths)
loss = nll.mean()
optimizer.zero_grad()
loss.backward()
torch.nn.utils.clip_grad_value_(model.model.parameters(), 2)
optimizer.step()
model.save(folder+"models/RNN-generator-TL-hem/gramneg_ep{}.pkl".format(e))
print("\tExample Sequences")
sampled_seq = model.sample(5)
for s in sampled_seq:
print("\t\t{}".format(model.vocabulary.tensor_to_seq(s, debug=True)))
nll_training = []
with torch.no_grad():
for i_batch, sample_batched in enumerate(training_dataloader_active_eval):
seq_batched = sample_batched[0].to(model.device, non_blocking=True)
seq_lengths = sample_batched[1].to(model.device, non_blocking=True)
nll_training += model.likelihood(seq_batched, seq_lengths)
nll_training_active_mean = torch.stack(nll_training).mean().item()
print("\tNLL Train Active: {}".format(nll_training_active_mean))
del nll_training
nll_test = []
with torch.no_grad():
for i_batch, sample_batched in enumerate(test_dataloader_active):
seq_batched = sample_batched[0].to(model.device, non_blocking=True)
seq_lengths = sample_batched[1].to(model.device, non_blocking=True)
nll_test += model.likelihood(seq_batched, seq_lengths)
nll_test_active_mean = torch.stack(nll_test).mean().item()
print("\tNLL Test Active: {}".format(nll_test_active_mean))
del nll_test
nll_training = []
with torch.no_grad():
for i_batch, sample_batched in enumerate(training_dataloader_inactive):
seq_batched = sample_batched[0].to(model.device, non_blocking=True)
seq_lengths = sample_batched[1].to(model.device, non_blocking=True)
nll_training += model.likelihood(seq_batched, seq_lengths)
nll_training_inactive_mean = torch.stack(nll_training).mean().item()
print("\tNLL Train Inactive: {}".format(nll_training_inactive_mean))
del nll_training
nll_test = []
with torch.no_grad():
for i_batch, sample_batched in enumerate(test_dataloader_inactive):
seq_batched = sample_batched[0].to(model.device, non_blocking=True)
seq_lengths = sample_batched[1].to(model.device, non_blocking=True)
nll_test += model.likelihood(seq_batched, seq_lengths)
nll_test_inactive_mean = torch.stack(nll_test).mean().item()
print("\tNLL Test Inactive: {}".format(nll_test_inactive_mean))
del nll_test
print()
training_dictionary[e]=[nll_training_active_mean, nll_test_active_mean, nll_training_inactive_mean, nll_test_inactive_mean]
with open(folder+"pickles/generator_TL_gramneg_results_hem.pkl",'wb') as fd:
pickle.dump(training_dictionary, fd)
else:
with open(folder+"pickles/generator_TL_gramneg_results_hem.pkl",'rb') as fd:
training_dictionary = pickle.load(fd)
min_nll_test_active = float("inf")
for epoch, training_values in training_dictionary.items():
nll_test_active = training_values[1]
if nll_test_active < min_nll_test_active:
best_epoch = epoch
min_nll_test_active = nll_test_active
```
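The best-epoch search above is equivalent to a `min` over the dictionary keyed on the test-set active NLL (index 1 of each value list). A toy check with made-up NLL values:

```python
# Toy training_dictionary: each value is
# [nll_train_active, nll_test_active, nll_train_inactive, nll_test_inactive]
toy = {1: [5.0, 4.8, 6.0, 6.1],
       2: [4.0, 4.1, 6.2, 6.3],
       3: [3.9, 4.3, 6.4, 6.5]}

# Pick the epoch minimizing the test-set active NLL (index 1)
best_epoch = min(toy, key=lambda e: toy[e][1])
print(best_epoch)  # 2
```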
# Sampling evaluation
```
print(best_epoch)
model = Generator.load_from_file(folder+"models/RNN-generator-TL-hem/gramneg_ep{}.pkl".format(best_epoch))
```
199
```
training_seq = df_training.Sequence.values.tolist()
def _sample(model, n):
sampled_seq = model.sample(n)
sequences = []
for s in sampled_seq:
sequences.append(model.vocabulary.tensor_to_seq(s))
return sequences
def novelty(seqs, list_):
novel_seq = []
for s in seqs:
if s not in list_:
novel_seq.append(s)
return novel_seq, (len(novel_seq)/len(seqs))*100
def is_in_training(seq, list_ = training_seq):
if seq not in list_:
return False
else:
return True
def uniqueness(seqs):
unique_seqs = defaultdict(int)
for s in seqs:
unique_seqs[s] += 1
return unique_seqs, (len(unique_seqs)/len(seqs))*100
# sample
seqs = _sample(model, 50000)
unique_seqs, perc_uniqueness = uniqueness(seqs)
notintraining_seqs, perc_novelty = novelty(unique_seqs, training_seq)
# create dataframe
df_generated = pd.DataFrame(list(unique_seqs.keys()), columns =['Sequence'])
df_generated["Repetition"] = df_generated["Sequence"].map(lambda x: unique_seqs[x])
df_generated["inTraining"] = df_generated["Sequence"].map(is_in_training)
df_generated["Set"] = "generated-TL-GN-hem"
# save
df_generated.to_pickle(folder+"pickles/Generated-TL-gramneg-hem.pkl")
print(perc_uniqueness, perc_novelty)
```
82.89999999999999 99.61158021712907
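The `uniqueness` and `novelty` helpers reduce to simple counting; rerun here on a toy sequence list (made-up peptides) so the percentages can be checked by hand:

```python
from collections import defaultdict

def uniqueness(seqs):
    # Count occurrences; percentage = distinct sequences / total sampled
    unique_seqs = defaultdict(int)
    for s in seqs:
        unique_seqs[s] += 1
    return unique_seqs, (len(unique_seqs) / len(seqs)) * 100

def novelty(seqs, known):
    # Keep sequences absent from the reference list
    novel = [s for s in seqs if s not in known]
    return novel, (len(novel) / len(seqs)) * 100

sampled = ["AAK", "GLF", "AAK", "KWK"]  # toy "generated" sequences
training = ["GLF"]                      # toy training set

unique_seqs, perc_uniqueness = uniqueness(sampled)               # 3 of 4 distinct
novel_seqs, perc_novelty = novelty(list(unique_seqs), training)  # 2 of 3 novel
```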
# SEIRHVD model example
## Work in progress (equations not ready)
\begin{align}
\dot{S} & = S_f - \alpha\beta\frac{SI}{N+k_I I+k_R R} + r_{R\_S} R\\
\dot{E} & = E_f + \alpha\beta\frac{SI}{N+k_I I+k_R R} - E\frac{1}{t_{E\_I}} \\
\dot{I} & = I_f + E\frac{1}{t_{E\_I}} - I\frac{1}{t_{I\_R}} \\
\dot{R} & = R_f + I\frac{1}{t_{I\_R}} - r_{R\_S} R\\
\end{align}
Where:
* $S:$ Susceptible
* $E:$ Exposed
* $I:$ Infectious
* $R:$ Removed
* $\alpha:$ Mobilty
* $\beta:$ Infection rate
* $N:$ Total population
* $t_{E\_I}:$ Transition time between exposed and infectious
* $t_{I\_R}:$ Transition time between infectious and recovered
* $r_{R\_S}:$ Immunity loss rate ($\frac{1}{t_{R\_S}}$)
* $S_f,E_f,I_f,R_f:$ External flux
* $k_I:$ Infected saturation
* $k_R:$ Immunity shield
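The core SEIR dynamics above (dropping the external fluxes and setting $k_I = k_R = 0$) can be integrated with a simple explicit Euler scheme. All parameter values below are illustrative assumptions, not fitted ones:

```python
def seir_step(state, dt, alpha, beta, N, t_EI, t_IR, r_RS):
    """One Euler step of the S, E, I, R system above
    (no external fluxes, no saturation/shield terms)."""
    S, E, I, R = state
    infections = alpha * beta * S * I / N
    dS = -infections + r_RS * R
    dE = infections - E / t_EI
    dI = E / t_EI - I / t_IR
    dR = I / t_IR - r_RS * R
    return (S + dS * dt, E + dE * dt, I + dI * dt, R + dR * dt)

def simulate(days=120.0, dt=0.05):
    N = 10_000.0
    state = (N - 10.0, 0.0, 10.0, 0.0)  # seed 10 infectious individuals
    for _ in range(int(days / dt)):
        state = seir_step(state, dt, alpha=1.0, beta=0.3, N=N,
                          t_EI=3.0, t_IR=7.0, r_RS=0.0)
    return state

S, E, I, R = simulate()
# With r_RS = 0 and no fluxes the four derivatives sum to zero,
# so S + E + I + R stays at N throughout the integration.
```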
```
# Util libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# Adding lib paths
# cv19 libraries
from cv19gm.models.seirhvd import SEIRHVD
from cv19gm.utils import cv19functions
# For pop-up plots execute this code (optional)
import platform
OS = platform.system()
if OS == 'Linux':
%matplotlib tk
print('Linux')
elif OS == 'Windows':
%matplotlib qt
print('Windows')
elif OS == 'Darwin':
%matplotlib tk
    print('Mac (does it work?)')
```
# Variable CFR
## Discrete change
* pH_R from 0.7 to 0.5
* pH_D from 0.3 to 0.5
```
pH_R = cv19functions.events(values=[0.7,0.5],days=[[0,30],[30,500]])
pH_D = cv19functions.events(values=[0.3,0.5],days=[[0,30],[30,500]])
%%capture
# Input configuration file
config = 'cfg/SEIRHVD.toml'
# Build simulation object
model1 = SEIRHVD(config = config, H_cap=4000, pH_R = pH_R,pH_D = pH_D)
# Simulate (solve ODE)
model1.solve()
t = np.linspace(0,50,1000)
plt.plot(t,100*pH_R(t),label='pH_R')
plt.plot(t,100*pH_D(t),label='pH_D')
plt.plot(t,100*pH_D(t)*model1.pE_Icr(t),label='CFR')
plt.legend(loc=0)
plt.title('CFR change (%)')
plt.show()
t = model1.t
plt.plot(t,100*model1.pH_R(t),label='pH_R')
plt.plot(t,100*model1.pH_D(t),label='pH_D')
plt.plot(t,100*model1.CFR,label='CFR')
plt.xlim(0,50)
plt.legend(loc=0)
plt.title('CFR change (%)')
plt.show()
```
* Note: the transition appears to take place over about two days, but that is an artifact of the resolution at which the results are reported; the transition applied by the integrator is effectively instantaneous.
```
# Plot matplotlib
fig, axs = plt.subplots(figsize=(13,9),linewidth=5,edgecolor='black',facecolor="white")
axs2 = axs.twinx()
axs.plot(model1.t,model1.D_d,color='tab:red',label='Daily deaths')
axs.set_ylabel('Deaths',color='tab:red')
axs.tick_params(axis='y', labelcolor='tab:red')
t = model1.t
axs2.plot(t,100*pH_D(t)*model1.pE_Icr(t),color='tab:blue',label='CFR')
axs2.set_ylabel('CFR',color='tab:blue')
axs2.tick_params(axis='y',labelcolor='tab:blue')
axs.set_xlim(0,200)
axs2.set_xlim(0,200)
fig.legend(loc=8)
fig.suptitle('CFR vs Deaths')
fig.show()
```
## Continuous change
* pH_R from 0.7 to 0.5
* pH_D from 0.3 to 0.5
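A plain-Python sketch of a sigmoidal transition between two values — the steepness parametrization here is an assumption; `cv19functions.sigmoidal_transition` may define it differently:

```python
import math

def sigmoidal_transition(t_init, t_end, initvalue, endvalue, steepness=10.0):
    """Smooth step from initvalue to endvalue, centered midway between
    t_init and t_end (illustrative parametrization)."""
    t_mid = 0.5 * (t_init + t_end)
    k = steepness / (t_end - t_init)  # scale steepness to the window width
    def f(t):
        return initvalue + (endvalue - initvalue) / (1.0 + math.exp(-k * (t - t_mid)))
    return f

# Same shape as the transition used below: 0.7 -> 0.5 between days 20 and 40
pH_R = sigmoidal_transition(t_init=20, t_end=40, initvalue=0.7, endvalue=0.5)
```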
```
pH_R = cv19functions.sigmoidal_transition(t_init=20,t_end=40,initvalue = 0.7, endvalue = 0.5)
pH_D = cv19functions.sigmoidal_transition(t_init=20,t_end=40,initvalue = 0.3, endvalue = 0.5)
%%capture
# Input configuration file
config = 'cfg/SEIRHVD.toml'
# Build simulation object
model2 = SEIRHVD(config = config, H_cap=4000, pH_R = pH_R,pH_D = pH_D)
# Simulate (solve ODE)
model2.solve()
t = model2.t
plt.plot(t,100*model2.pH_R(t),label='pH_R')
plt.plot(t,100*model2.pH_D(t),label='pH_D')
plt.plot(t,100*model2.CFR,label='CFR')
plt.xlim(0,50)
plt.legend(loc=0)
plt.title('CFR change (%)')
plt.show()
# Plot matplotlib
fig, axs = plt.subplots(figsize=(13,9),linewidth=5,edgecolor='black',facecolor="white")
axs2 = axs.twinx()
axs.plot(model2.t,model2.D_d,color='tab:red',label='Daily deaths')
axs.set_ylabel('Deaths',color='tab:red')
axs.tick_params(axis='y', labelcolor='tab:red')
t = model2.t
axs2.plot(t,100*pH_D(t)*model2.pE_Icr(t),color='tab:blue',label='CFR')
axs2.set_ylabel('CFR',color='tab:blue')
axs2.tick_params(axis='y',labelcolor='tab:blue')
axs.set_xlim(0,200)
axs2.set_xlim(0,200)
fig.legend(loc=8)
fig.suptitle('CFR vs Deaths')
fig.show()
```
# Access CFR value
## As a variable
```
model2.CFR
```
## As part of the results DataFrame
```
model2.results['CFR']
```
| github_jupyter |
# Neural networks with PyTorch
Deep learning networks tend to be massive, with dozens or hundreds of layers; that's where the term "deep" comes from. You can build one of these deep networks using only weight matrices as we did in the previous notebook, but in general it's very cumbersome and difficult to implement. PyTorch has a module `nn` that provides a convenient way to efficiently build large neural networks.
```
# http://pytorch.org/
from os.path import exists
from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag
platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag())
cuda_output = !ldconfig -p|grep cudart.so|sed -e 's/.*\.\([0-9]*\)\.\([0-9]*\)$/cu\1\2/'
accelerator = cuda_output[0] if exists('/dev/nvidia0') else 'cpu'
!pip install -q http://download.pytorch.org/whl/{accelerator}/torch-0.4.1-{platform}-linux_x86_64.whl torchvision
import torch
# Import necessary packages
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import torch
import helper
import matplotlib.pyplot as plt
```
Now we're going to build a larger network that can solve a (formerly) difficult problem, identifying text in an image. Here we'll use the MNIST dataset, which consists of greyscale handwritten digits. Each image is 28x28 pixels; you can see a sample below
<img src='assets/mnist.png'>
Our goal is to build a neural network that can take one of these images and predict the digit in the image.
First up, we need to get our dataset. This is provided through the `torchvision` package. The code below will download the MNIST dataset, then create training and test datasets for us. Don't worry too much about the details here, you'll learn more about this later.
```
### Run this cell
from torchvision import datasets, transforms
# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,)),
])
# Download and load the training data
trainset = datasets.MNIST('~/.pytorch/MNIST_data/', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
```
We have the training data loaded into `trainloader` and we make that an iterator with `iter(trainloader)`. Later, we'll use this to loop through the dataset for training, like
```python
for image, label in trainloader:
## do things with images and labels
```
You'll notice I created the `trainloader` with a batch size of 64, and `shuffle=True`. The batch size is the number of images we get in one iteration from the data loader and pass through our network, often called a *batch*. And `shuffle=True` tells it to shuffle the dataset every time we start going through the data loader again. But here I'm just grabbing the first batch so we can check out the data. We can see below that `images` is just a tensor with size `(64, 1, 28, 28)`. So, 64 images per batch, 1 color channel, and 28x28 images.
```
dataiter = iter(trainloader)
images, labels = dataiter.next()
print(type(images))
print(images.shape)
print(labels.shape)
```
This is what one of the images looks like.
```
plt.imshow(images[1].numpy().squeeze(), cmap='Greys_r');
```
First, let's try to build a simple network for this dataset using weight matrices and matrix multiplications. Then, we'll see how to do it using PyTorch's `nn` module which provides a much more convenient and powerful method for defining network architectures.
The networks you've seen so far are called *fully-connected* or *dense* networks. Each unit in one layer is connected to each unit in the next layer. In fully-connected networks, the input to each layer must be a one-dimensional vector (which can be stacked into a 2D tensor as a batch of multiple examples). However, our images are 28x28 2D tensors, so we need to convert them into 1D vectors. Thinking about sizes, we need to convert the batch of images with shape `(64, 1, 28, 28)` to have a shape of `(64, 784)`; 784 is 28 times 28. This is typically called *flattening*: we flatten the 2D images into 1D vectors.
Previously you built a network with one output unit. Here we need 10 output units, one for each digit. We want our network to predict the digit shown in an image, so what we'll do is calculate probabilities that the image is of any one digit or class. This ends up being a discrete probability distribution over the classes (digits) that tells us the most likely class for the image. That means we need 10 output units for the 10 classes (digits). We'll see how to convert the network output into a probability distribution next.
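The flattening step can be illustrated with NumPy, whose reshape semantics match PyTorch's `view`/`reshape`; the zero array here is just a dummy stand-in for the image batch:

```python
import numpy as np

batch = np.zeros((64, 1, 28, 28))        # dummy batch shaped like `images`
flat = batch.reshape(batch.shape[0], -1)  # -1 infers 1 * 28 * 28 = 784
print(flat.shape)  # (64, 784)
```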
> **Exercise:** Flatten the batch of images `images`. Then build a multi-layer network with 784 input units, 256 hidden units, and 10 output units using random tensors for the weights and biases. For now, use a sigmoid activation for the hidden layer. Leave the output layer without an activation, we'll add one that gives us a probability distribution next.
```
def activation(x):
""" Sigmoid activation function
Arguments
---------
x: torch.Tensor
"""
return 1/ (1+torch.exp(-x))
## Your solution
# Hyperparameter settings
batch_size = 64
image_dim = 784
n_hidden = 256
n_output = 10
torch.manual_seed(1)
# inputs = images.view(images.shape[0], -1)
# Initializing weights and biases
W1 = torch.randn((image_dim, n_hidden))
W2 = torch.randn((n_hidden, n_output))
B1 = torch.randn((1, n_hidden))
B2 = torch.randn((1, n_output))
# Forward Prop
h1 = activation(torch.mm(images.view(batch_size, image_dim), W1) + B1) # (64, 256)
# output of your network, should have shape (64,10)
out = torch.mm(h1, W2) + B2
h1.shape, out.shape
```
Now we have 10 outputs for our network. We want to pass in an image to our network and get out a probability distribution over the classes that tells us the likely class(es) the image belongs to. Something that looks like this:
<img src='assets/image_distribution.png' width=500px>
Here we see that the probability for each class is roughly the same. This represents an untrained network: it hasn't seen any data yet, so it just returns a uniform distribution with equal probabilities for each class.
To calculate this probability distribution, we often use the [**softmax** function](https://en.wikipedia.org/wiki/Softmax_function). Mathematically this looks like
$$
\Large \sigma(x_i) = \cfrac{e^{x_i}}{\sum_k^K{e^{x_k}}}
$$
What this does is squish each input $x_i$ between 0 and 1 and normalize the values to give you a proper probability distribution where the probabilities sum to one.
> **Exercise:** Implement a function `softmax` that performs the softmax calculation and returns probability distributions for each example in the batch. Note that you'll need to pay attention to the shapes when doing this. If you have a tensor `a` with shape `(64, 10)` and a tensor `b` with shape `(64,)`, doing `a/b` will give you an error because PyTorch will try to do the division across the columns (called broadcasting) but you'll get a size mismatch. The way to think about this is for each of the 64 examples, you only want to divide by one value, the sum in the denominator. So you need `b` to have a shape of `(64, 1)`. This way PyTorch will divide the 10 values in each row of `a` by the one value in each row of `b`. Pay attention to how you take the sum as well. You'll need to define the `dim` keyword in `torch.sum`. Setting `dim=0` takes the sum across the rows while `dim=1` takes the sum across the columns.
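The shape pitfall described above can be checked quickly in NumPy, whose broadcasting rules match PyTorch's; the small array stands in for the `(64, 10)` logits:

```python
import numpy as np

a = np.arange(12, dtype=float).reshape(3, 4)  # stand-in for (64, 10) logits

row_sums = np.exp(a).sum(axis=1)              # shape (3,): raises on row-wise division
row_sums_col = np.exp(a).sum(axis=1, keepdims=True)  # shape (3, 1): broadcasts per row

probs = np.exp(a) / row_sums_col
print(probs.sum(axis=1))  # each row sums to 1
```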
```
def softmax(x):
""" Softmax activation function
Arguments
---------
x: torch.Tensor
"""
return torch.exp(x) / torch.sum(torch.exp(x), dim=1).view(-1, 1)
# Here, out should be the output of the network in the previous exercise with shape (64,10)
probabilities = softmax(out)
# Does it have the right shape? Should be (64, 10)
print(probabilities.shape)
# Does it sum to 1?
print(probabilities.sum(dim=1))
# torch.sum(torch.exp(output), dim=1).view(-1,1).shape
```
## Building networks with PyTorch
PyTorch provides a module `nn` that makes building networks much simpler. Here I'll show you how to build the same one as above with 784 inputs, 256 hidden units, 10 output units and a softmax output.
```
from torch import nn
class Network(nn.Module):
def __init__(self):
super().__init__()
# Inputs to hidden layer linear transformation
self.hidden = nn.Linear(784, 256)
# Output layer, 10 units - one for each digit
self.output = nn.Linear(256, 10)
# Define sigmoid activation and softmax output
self.sigmoid = nn.Sigmoid()
self.softmax = nn.Softmax(dim=1)
def forward(self, x):
# Pass the input tensor through each of our operations
x = self.hidden(x)
x = self.sigmoid(x)
x = self.output(x)
x = self.softmax(x)
return x
```
Let's go through this bit by bit.
```python
class Network(nn.Module):
```
Here we're inheriting from `nn.Module`. Combined with `super().__init__()` this creates a class that tracks the architecture and provides a lot of useful methods and attributes. It is mandatory to inherit from `nn.Module` when you're creating a class for your network. The name of the class itself can be anything.
```python
self.hidden = nn.Linear(784, 256)
```
This line creates a module for a linear transformation, $x\mathbf{W} + b$, with 784 inputs and 256 outputs and assigns it to `self.hidden`. The module automatically creates the weight and bias tensors which we'll use in the `forward` method. You can access the weight and bias tensors once the network (`net`) is created with `net.hidden.weight` and `net.hidden.bias`.
```python
self.output = nn.Linear(256, 10)
```
Similarly, this creates another linear transformation with 256 inputs and 10 outputs.
```python
self.sigmoid = nn.Sigmoid()
self.softmax = nn.Softmax(dim=1)
```
Here I defined operations for the sigmoid activation and softmax output. Setting `dim=1` in `nn.Softmax(dim=1)` calculates softmax across the columns.
```python
def forward(self, x):
```
PyTorch networks created with `nn.Module` must have a `forward` method defined. It takes in a tensor `x` and passes it through the operations you defined in the `__init__` method.
```python
x = self.hidden(x)
x = self.sigmoid(x)
x = self.output(x)
x = self.softmax(x)
```
Here the input tensor `x` is passed through each operation and reassigned to `x`. We can see that the input tensor goes through the hidden layer, then a sigmoid function, then the output layer, and finally the softmax function. It doesn't matter what you name the variables here, as long as the inputs and outputs of the operations match the network architecture you want to build. The order in which you define things in the `__init__` method doesn't matter, but you'll need to sequence the operations correctly in the `forward` method.
Now we can create a `Network` object.
```
# Create the network and look at its text representation
model = Network()
model
```
You can define the network somewhat more concisely and clearly using the `torch.nn.functional` module. This is the most common way you'll see networks defined as many operations are simple element-wise functions. We normally import this module as `F`, `import torch.nn.functional as F`.
```
import torch.nn.functional as F
class Network(nn.Module):
def __init__(self):
super().__init__()
# Inputs to hidden layer linear transformation
self.hidden = nn.Linear(784, 256)
# Output layer, 10 units - one for each digit
self.output = nn.Linear(256, 10)
def forward(self, x):
# Hidden layer with sigmoid activation
x = F.sigmoid(self.hidden(x))
# Output layer with softmax activation
x = F.softmax(self.output(x), dim=1)
return x
```
### Activation functions
So far we've only been looking at the softmax activation, but in general any function can be used as an activation function. The only requirement is that for a network to approximate a non-linear function, the activation functions must be non-linear. Here are a few more examples of common activation functions: Tanh (hyperbolic tangent), and ReLU (rectified linear unit).
<img src="assets/activation.png" width=700px>
In practice, the ReLU function is used almost exclusively as the activation function for hidden layers.
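These activations are all simple element-wise functions; a NumPy sketch (input values chosen arbitrarily) shows their characteristic ranges:

```python
import numpy as np

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
sigmoid = 1 / (1 + np.exp(-x))   # squashes into (0, 1)
tanh = np.tanh(x)                # squashes into (-1, 1), zero-centered
relu = np.maximum(0.0, x)        # zero for negatives, identity for positives
print(relu)                      # negative inputs are clipped to 0
```

ReLU's cheap, non-saturating gradient (0 or 1) is a large part of why it dominates in hidden layers.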
### Your Turn to Build a Network
<img src="assets/mlp_mnist.png" width=600px>
> **Exercise:** Create a network with 784 input units, a hidden layer with 128 units and a ReLU activation, then a hidden layer with 64 units and a ReLU activation, and finally an output layer with a softmax activation as shown above. You can use a ReLU activation with the `nn.ReLU` module or `F.relu` function.
```
## Your solution here
class DNN(nn.Module):
def __init__(self):
super().__init__()
# Inputs to hidden layers linear transformation
self.hidden1 = nn.Linear(784, 128)
self.hidden2 = nn.Linear(128, 64)
# Output layer, 10 units - one for each digit
self.output = nn.Linear(64, 10)
# Define sigmoid activation and softmax output
# self.sigmoid = nn.Sigmoid()
self.relu = nn.ReLU()
self.softmax = nn.Softmax(dim=1)
def forward(self, x):
# Pass the input tensor through each of our operations
x = self.hidden1(x)
x = self.relu(x)
x = self.hidden2(x)
x = self.relu(x)
x = self.output(x)
x = self.softmax(x)
return x
model = DNN()
model
```
### Initializing weights and biases
The weights and biases are automatically initialized for you, but it's possible to customize how they are initialized. The weights and biases are tensors attached to the layer you defined; for the `DNN` model above, you can get them with `model.hidden1.weight`, for instance.
```
print(model.hidden1.weight)
print(model.hidden1.bias)
```
For custom initialization, we want to modify these tensors in place. The underlying tensors are exposed as `model.hidden1.weight.data`; once we have them, we can fill them with zeros (for biases) or random normal values.
```
# Set biases to all zeros
model.hidden1.bias.data.fill_(0)
# sample from random normal with standard dev = 0.01
model.hidden1.weight.data.normal_(std=0.01)
```
### Forward pass
Now that we have a network, let's see what happens when we pass in an image.
```
# Grab some data
dataiter = iter(trainloader)
images, labels = next(dataiter)
# Resize images into a 1D vector, new shape is (batch size, color channels, image pixels)
images.resize_(64, 1, 784)
# or images.resize_(images.shape[0], 1, 784) to automatically get batch size
# Forward pass through the network
img_idx = 0
ps = model.forward(images[img_idx,:])
img = images[img_idx]
helper.view_classify(img.view(1, 28, 28), ps)
```
As you can see above, our network has basically no idea what this digit is. It's because we haven't trained it yet, all the weights are random!
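You can see why with a NumPy sketch (illustrative numbers only): softmax over the small, random logits an untrained layer produces is close to uniform, i.e. about 1/10 per digit.

```python
import numpy as np

rng = np.random.default_rng(0)
logits = rng.normal(scale=0.01, size=10)   # tiny random logits, as from untrained weights
probs = np.exp(logits) / np.exp(logits).sum()
print(probs.round(2))                      # all entries near 0.1: the network is guessing
```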
### Using `nn.Sequential`
PyTorch provides a convenient way to build networks like this where a tensor is passed sequentially through operations, `nn.Sequential` ([documentation](https://pytorch.org/docs/master/nn.html#torch.nn.Sequential)). Using this to build the equivalent network:
```
# Hyperparameters for our network
input_size = 784
hidden_sizes = [128, 64]
output_size = 10
# Build a feed-forward network
model = nn.Sequential(nn.Linear(input_size, hidden_sizes[0]),
nn.ReLU(),
nn.Linear(hidden_sizes[0], hidden_sizes[1]),
nn.ReLU(),
nn.Linear(hidden_sizes[1], output_size),
nn.Softmax(dim=1))
print(model)
# Forward pass through the network and display output
images, labels = next(iter(trainloader))
images.resize_(images.shape[0], 1, 784)
ps = model.forward(images[0,:])
helper.view_classify(images[0].view(1, 28, 28), ps)
```
Here our model is the same as before: 784 input units, a hidden layer with 128 units, ReLU activation, 64 unit hidden layer, another ReLU, then the output layer with 10 units, and the softmax output.
The operations are available by passing in the appropriate index. For example, if you want to get the first Linear operation and look at its weights, you'd use `model[0]`.
```
print(model[0])
model[0].weight
```
You can also pass in an `OrderedDict` to name the individual layers and operations, instead of using incremental integers. Note that dictionary keys must be unique, so _each operation must have a different name_.
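The uniqueness requirement is just ordinary dictionary semantics: a repeated key silently replaces the earlier entry, so one of your layers would vanish.

```python
from collections import OrderedDict

# two layers accidentally given the same name
d = OrderedDict([('relu', 'first'), ('relu', 'second')])
print(len(d))     # 1 -- the first 'relu' entry was overwritten
print(d['relu'])  # 'second'
```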
```
from collections import OrderedDict
model = nn.Sequential(OrderedDict([
('fc1', nn.Linear(input_size, hidden_sizes[0])),
('relu1', nn.ReLU()),
('fc2', nn.Linear(hidden_sizes[0], hidden_sizes[1])),
('relu2', nn.ReLU()),
('output', nn.Linear(hidden_sizes[1], output_size)),
('softmax', nn.Softmax(dim=1))]))
model
```
Now you can access layers either by integer index or by name:
```
print(model[0])
print(model.fc1)
```
In the next notebook, we'll see how we can train a neural network to accurately predict the numbers appearing in the MNIST images.
---
```
#pip install python-binance
from binance import Client
import pandas as pd
import matplotlib.pyplot as plt
import time
with open('access.txt') as f:
acc = f.readlines()
api = acc[0].strip()
key = acc[1].strip()
client = Client(api,key)
def get_interval_data(currency, interval, lookback):
interval_data = pd.DataFrame(client.get_historical_klines(currency, interval, lookback + ' minutes ago UTC'))
interval_data = interval_data.iloc[:, :6]
interval_data.columns = ['Time', 'Open', 'High','Low','Close', 'Volume']
interval_data.set_index('Time', inplace=True)
interval_data.index = pd.to_datetime(interval_data.index, unit ='ms')
interval_data = interval_data.astype(float)
return interval_data
frame = get_interval_data('DOGEUSDT', '1m', '30')
frame.Low.plot();
frame.High.plot();
class TestTrade:
def __init__(self, symb, qnty, entered=False):
self.symb = symb
self.qnty = qnty
self.entered = entered
self.buy_orders = []
self.sell_orders = []
def get_interval_data(self, interval, lookback):
interval_data = pd.DataFrame(client.get_historical_klines(self.symb, interval, lookback + ' minutes ago UTC'))
interval_data = interval_data.iloc[:, :6]
interval_data.columns = ['Time', 'Open', 'High','Low','Close', 'Volume']
interval_data.set_index('Time', inplace=True)
interval_data.index = pd.to_datetime(interval_data.index, unit ='ms')
interval_data = interval_data.astype(float)
return interval_data
def buy_order(self):
while True:
frame = self.get_interval_data('1m', '100')
change = (frame.Open.pct_change() +1).cumprod() - 1
if change.iloc[-1] < -0.005:
order = client.create_order(symbol=self.symb,side='BUY', type='MARKET', quantity=self.qnty)
print('BUY order executed')
self.entered = True
self.buy_orders.append(order)
break
def sell_order(self):
time_buy = pd.to_datetime(self.buy_orders[-1]['transactTime'],unit='ms')
while True:
frame = self.get_interval_data('1m', '100')
since_buy = frame.loc[frame.index > time_buy]
if len(since_buy) > 0:
change = (since_buy.Open.pct_change() +1).cumprod() -1
if change.iloc[-1] > 0.005:
order = client.create_order(symbol=self.symb,side='SELL', type='MARKET', quantity=self.qnty)
print('SELL order executed')
self.entered = False
self.sell_orders.append(order)
break
def trade(self):
while len(self.buy_orders) < 3:
if not self.entered:
self.buy_order()
if self.entered:
self.sell_order()
time.sleep(10)
test_trade = TestTrade('DOGEUSDT', 100)
#test_trade.trade()
b = [float(i['cummulativeQuoteQty']) for i in test_trade.buy_orders]
b
s = [float(i['cummulativeQuoteQty']) for i in test_trade.sell_orders]
s
```
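As a sanity check on the entry rule in `buy_order` above: `(frame.Open.pct_change() + 1).cumprod() - 1` is just the cumulative return of the Open price relative to the first bar, which NumPy reproduces directly (made-up prices):

```python
import numpy as np

opens = np.array([100.0, 99.8, 99.6, 99.2])
rets = opens[1:] / opens[:-1] - 1     # per-bar returns, like Open.pct_change()
cum = np.cumprod(1 + rets) - 1        # cumulative return since the first bar
print(cum[-1] < -0.005)               # True: price fell 0.8%, so the buy rule fires
```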
---
# Tigergraph<>Graphistry Fraud Demo: Raw REST
Accesses Tigergraph's fraud demo directly via manual REST calls
```
#!pip install graphistry
import pandas as pd
import graphistry
import requests
#graphistry.register(key='MY_API_KEY', server='labs.graphistry.com', api=2)
TIGER = "http://MY_TIGER_SERVER:9000"
#curl -X GET "http://MY_TIGER_SERVER:9000/query/circleDetection?srcId=111"
# string -> dict
def query_raw(query_string):
url = TIGER + "/query/" + query_string
r = requests.get(url)
return r.json()
def flatten (lst_of_lst):
try:
if type(lst_of_lst[0]) == list:
return [item for sublist in lst_of_lst for item in sublist]
else:
return lst_of_lst
except:
print('fail', lst_of_lst)
return lst_of_lst
#str * dict -> dict
def named_edge_to_record(name, edge):
record = {k: edge[k] for k in edge.keys() if not (type(edge[k]) == dict) }
record['type'] = name
nested = [k for k in edge.keys() if type(edge[k]) == dict]
if len(nested) == 1:
for k in edge[nested[0]].keys():
record[k] = edge[nested[0]][k]
else:
for prefix in nested:
for k in edge[prefix].keys():
record[prefix + "_" + k] = edge[prefix][k]
return record
def query(query_string):
results = query_raw(query_string)['results']
out = {}
for o in results:
for k in o.keys():
if type(o[k]) == list:
out[k] = flatten(o[k])
out = flatten([[named_edge_to_record(k,v) for v in out[k]] for k in out.keys()])
print('# results', len(out))
return pd.DataFrame(out)
def plot_edges(edges):
return graphistry.bind(source='from_id', destination='to_id').edges(edges).plot()
```
# 1. Fraud
## 1.a circleDetection
```
circle = query("circleDetection?srcId=10")
circle.sample(3)
plot_edges(circle)
```
## 1.b fraudConnectivity
```
connectivity = query("fraudConnectivity?inputUser=111&trustScore=0.1")
connectivity.sample(3)
plot_edges(connectivity)
```
## Combined
```
circle['provenance'] = 'circle'
connectivity['provenance'] = 'connectivity'
plot_edges(pd.concat([circle, connectivity]))
```
## Color by type
```
edges = pd.concat([circle, connectivity])
froms = edges.rename(columns={'from_id': 'id', 'from_type': 'node_type'})[['id', 'node_type']]
tos = edges.rename(columns={'to_id': 'id', 'to_type': 'node_type'})[['id', 'node_type']]
nodes = pd.concat([froms, tos], ignore_index=True).drop_duplicates().dropna()
nodes.sample(3)
nodes['node_type'].unique()
#https://labs.graphistry.com/docs/docs/palette.html
type2color = {
'User': 0,
'Transaction': 1,
'Payment_Instrument': 2,
'Device_Token': 3
}
nodes['color'] = nodes['node_type'].apply(lambda type_str: type2color[type_str])
nodes.sample(3)
graphistry.bind(source='from_id', destination='to_id', node='id', point_color='color').edges(edges).nodes(nodes).plot()
```
---
# Project 1
- **Team Members**: Chika Ozodiegwu, Kelsey Wyatt, Libardo Lambrano, Kurt Pessa

### Data set used:
* https://open-fdoh.hub.arcgis.com/datasets/florida-covid19-case-line-data
```
import requests
import pandas as pd
import io
import datetime as dt
import numpy as np
import matplotlib.pyplot as plt
from IPython.display import JSON
df = pd.read_csv("Resources/Florida_COVID19_Case_Line_Data_new.csv")
df.head(3)
#Clean dataframe
new_csv_data_df = df[['ObjectId', "County",'Age',"Age_group", "Gender", "Jurisdiction", "Travel_related", "Hospitalized","Case1"]]
new_csv_data_df.head()
#Create new csv
new_csv_data_df.to_csv ("new_covid_dataframe.csv")
```
# There is no change in hospitalizations since reopening
### Research Question to Answer:
* “There is no change in hospitalizations since reopening.”
### Part 1: Six (6) Steps for Hypothesis Testing
<details><summary> click to expand </summary>
#### 1. Identify
- **Populations** (divide Hospitalization data in two groups of data):
1. Prior to opening
2. After opening
* Decide on the **date**:
* May 4th - restaurants opening to 25% capacity
* June (Miami opening beaches)
- Distribution
#### 2. State the hypotheses
- **H0**: There is no change in hospitalizations after Florida has reopened
- **H1**: There is a change in hospitalizations after Florida has reopened
#### 3. Characteristics of the comparison distribution
- Population means, standard deviations
#### 4. Critical values
- p = 0.05
- Our hypothesis is nondirectional so our hypothesis test is **two-tailed**
#### 5. Calculate
#### 6. Decide!
</details>
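One way to carry out step 5 for proportions (hospitalized vs. not, before and after a reopening date) is a pooled two-proportion z-test; the counts below are made up purely to illustrate the mechanics:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Pooled two-sample z statistic for comparing two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                       # pooled proportion under H0
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# hypothetical counts: hospitalizations / total cases, before vs. after reopening
z = two_proportion_z(1200, 20000, 1500, 30000)
# two-tailed test at alpha = 0.05: reject H0 when |z| > 1.96
print(abs(z) > 1.96)
```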
### Part 2: Visualization
```
#Calculate total number of cases
Total_covid_cases = new_csv_data_df["ObjectId"].nunique()
Total_covid_cases = pd.DataFrame({"Total Number of Cases": [Total_covid_cases]})
Total_covid_cases
#Total number of cases per county
total_cases_county = new_csv_data_df.groupby(by="County").count().reset_index().loc[:,["County","Case1"]]
total_cases_county.rename(columns={"County": "County", "Case1": "Total Cases"})
#Total number of cases per county sorted
total_cases_county = total_cases_county.sort_values('Case1',ascending=False)
total_cases_county.head(20)
#Create bar chart for total cases per county
total_cases_county.plot(kind='bar',x='County',y='Case1', title ="Total Cases per County", figsize=(15, 10), color="blue")
plt.title("Total Cases per County")
plt.xlabel("County")
plt.ylabel("Number of Cases")
plt.legend(["Number of Cases"])
plt.show()
#Calculate top 10 counties with total cases
top10_county_cases = total_cases_county.sort_values(by="Case1",ascending=False).head(10)
top10_county_cases["Rank"] = np.arange(1,11)
top10_county_cases.set_index("Rank").style.format({"Case1":"{:,}"})
#Create bar chart for total cases for top 10 counties
top10_county_cases.plot(kind='bar',x='County',y='Case1', title ="Total Cases for Top 10 Counties", figsize=(15, 10), color="blue")
plt.title("Total Hospitalizations for Top 10 Counties")
plt.xlabel("County")
plt.ylabel("Number of Cases")
plt.legend(["Number of Cases"])
plt.show()
#Total number of cases by gender
total_cases_gender = new_csv_data_df.groupby(by="Gender").count().reset_index().loc[:,["Gender","Case1"]]
total_cases_gender.rename(columns={"Gender": "Gender", "Case1": "Total Cases"})
#Create pie chart for total number of cases by gender
total_cases_gender = new_csv_data_df["Gender"].value_counts()
colors=["pink", "blue", "green"]
explode=[0.1,0.1,0.1]
total_cases_gender.plot.pie(explode=explode,colors=colors, autopct="%1.1f%%", shadow=True, subplots=True, startangle=120);
plt.title("Total Number of Cases in Males vs. Females")
#Filter data to show only cases that include hospitalization
filt = new_csv_data_df["Hospitalized"] == "YES"
df = new_csv_data_df[filt]
df
#Calculate total number of hospitalizations
pd.DataFrame({
"Total Hospitalizations (Florida)" : [df.shape[0]]
}).style.format("{:,}")
#Total number of hospitalization for all counties
hospitalizations_county = df.groupby(by="County").count().reset_index().loc[:,["County","Hospitalized"]]
hospitalizations_county
#Total number of hospitalization for all counties sorted
hospitalizations_county = hospitalizations_county.sort_values('Hospitalized',ascending=False)
hospitalizations_county.head(10)
#Create bar chart for total hospitalizations per county
hospitalizations_county.plot(kind='bar',x='County',y='Hospitalized', title ="Total Hospitalizations per County", figsize=(15, 10), color="blue")
plt.title("Total Hospitalizations per County")
plt.xlabel("County")
plt.ylabel("Number of Hospitalizations")
plt.show()
#Calculate top 10 counties with hospitalizations
top10_county = hospitalizations_county.sort_values(by="Hospitalized",ascending=False).head(10)
top10_county["Rank"] = np.arange(1,11)
top10_county.set_index("Rank").style.format({"Hospitalized":"{:,}"})
#Create a bar chart for the top 10 counties with hospitalizations
top10_county.plot(kind='bar',x='County',y='Hospitalized', title ="Total Hospitalizations for the Top 10 Counties", figsize=(15, 10), color="blue")
plt.title("Total Hospitalizations for the Top 10 Counties")
plt.xlabel("County")
plt.ylabel("Number of Hospitalizations")
plt.show()
#Average number of hospitalization by county (Not done yet) (Kelsey)
average = hospitalizations_county["Hospitalized"].mean()
average
#Filter data to show only cases that include hospitalization
filt = new_csv_data_df["Hospitalized"] == "YES"
df = new_csv_data_df[filt]
df
#Percentage of hospitalization by gender # Create Visualization (Libardo)
#code on starter_notebook.ipynb
new_csv_data_df
import seaborn as sns
new_csv_data_df['Count']=np.where(new_csv_data_df['Hospitalized']=='YES', 1,0)
new_csv_data_df.head()
new_csv_data_df['Count2']=1
new_csv_data_df['Case1']=pd.to_datetime(new_csv_data_df['Case1'])
case_plot_df=pd.DataFrame(new_csv_data_df.groupby(['Hospitalized', pd.Grouper(key='Case1', freq='W')])['Count2'].count())
case_plot_df.reset_index(inplace=True)
plt.subplots(figsize=[15,7])
sns.lineplot(x='Case1', y='Count2', data=case_plot_df, hue='Hospitalized')
plt.xticks(rotation=45)
#Percentage of hospitalization by age group (Chika) #Create visualization
#Hospitalization by case date/month (needs more) (Libardo)
#Compare travel-related hospitalization to non-travelrelated cases (Not done yet) (Chika)
#Divide hospitalization data in two groups of data prior to reopening and create new dataframe (Kurt) consider total (Chika)
#Divide hospitalization data in two groups of data after reopening and create new dataframe (Kurt) condider total (Chika)
#Percentage of hospitalization before shut down (Not done yet) (Rephrase) (Chika)
#Percentage of hospitalization during shut down (backburner)
#Percentage of hospitalization after reopening(Not done yet) (Rephrase) (Chika)
#Statistical testing between before and after reopening
```
---
# Benchmark NumPyro in large dataset
This notebook uses `numpyro` and replicates experiments in references [1] which evaluates the performance of NUTS on various frameworks. The benchmark is run with CUDA 10.1 on a NVIDIA RTX 2070.
```
import time
import numpy as np
import jax.numpy as jnp
from jax import random
import numpyro
import numpyro.distributions as dist
from numpyro.examples.datasets import COVTYPE, load_dataset
from numpyro.infer import HMC, MCMC, NUTS
assert numpyro.__version__.startswith('0.3.0')
# NB: replace gpu by cpu to run this notebook in cpu
numpyro.set_platform("gpu")
```
We do preprocessing steps as in [source code](https://github.com/google-research/google-research/blob/master/simple_probabilistic_programming/no_u_turn_sampler/logistic_regression.py) of reference [1]:
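The two preprocessing steps on the features, standardizing each column and appending an intercept column of ones, can be checked on a tiny NumPy array (toy values):

```python
import numpy as np

X = np.array([[1.0, 10.0],
              [2.0, 20.0],
              [3.0, 30.0]])
Xn = (X - X.mean(0)) / X.std(0)                  # zero mean, unit variance per column
Xn = np.hstack([Xn, np.ones((Xn.shape[0], 1))])  # intercept column of ones
print(Xn.shape)                                  # one extra column: (3, 3)
```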
```
_, fetch = load_dataset(COVTYPE, shuffle=False)
features, labels = fetch()
# normalize features and add intercept
features = (features - features.mean(0)) / features.std(0)
features = jnp.hstack([features, jnp.ones((features.shape[0], 1))])
# make binary feature
_, counts = np.unique(labels, return_counts=True)
specific_category = jnp.argmax(counts)
labels = (labels == specific_category)
N, dim = features.shape
print("Data shape:", features.shape)
print("Label distribution: {} has label 1, {} has label 0"
.format(labels.sum(), N - labels.sum()))
```
Now, we construct the model:
```
def model(data, labels):
coefs = numpyro.sample('coefs', dist.Normal(jnp.zeros(dim), jnp.ones(dim)))
logits = jnp.dot(data, coefs)
return numpyro.sample('obs', dist.Bernoulli(logits=logits), obs=labels)
```
## Benchmark HMC
```
step_size = jnp.sqrt(0.5 / N)
kernel = HMC(model, step_size=step_size, trajectory_length=(10 * step_size), adapt_step_size=False)
mcmc = MCMC(kernel, num_warmup=500, num_samples=500, progress_bar=False)
mcmc.warmup(random.PRNGKey(2019), features, labels, extra_fields=('num_steps',))
mcmc.get_extra_fields()['num_steps'].sum().copy()
tic = time.time()
mcmc.run(random.PRNGKey(2020), features, labels, extra_fields=['num_steps'])
num_leapfrogs = mcmc.get_extra_fields()['num_steps'].sum().copy()
toc = time.time()
print("number of leapfrog steps:", num_leapfrogs)
print("avg. time for each step :", (toc - tic) / num_leapfrogs)
mcmc.print_summary()
```
In CPU, we get `avg. time for each step : 0.02782863507270813`.
## Benchmark NUTS
```
mcmc = MCMC(NUTS(model), num_warmup=50, num_samples=50, progress_bar=False)
mcmc.warmup(random.PRNGKey(2019), features, labels, extra_fields=('num_steps',))
mcmc.get_extra_fields()['num_steps'].sum().copy()
tic = time.time()
mcmc.run(random.PRNGKey(2020), features, labels, extra_fields=['num_steps'])
num_leapfrogs = mcmc.get_extra_fields()['num_steps'].sum().copy()
toc = time.time()
print("number of leapfrog steps:", num_leapfrogs)
print("avg. time for each step :", (toc - tic) / num_leapfrogs)
mcmc.print_summary()
```
In CPU, we get `avg. time for each step : 0.028006251705287415`.
## Compare to other frameworks
| | HMC | NUTS |
| ------------- |----------:|----------:|
| Edward2 (CPU) | | 56.1 ms |
| Edward2 (GPU) | | 9.4 ms |
| Pyro (CPU) | 35.4 ms | 35.3 ms |
| Pyro (GPU) | 3.5 ms | 4.2 ms |
| NumPyro (CPU) | 27.8 ms | 28.0 ms |
| NumPyro (GPU) | 1.6 ms | 2.2 ms |
Note that in some situations, HMC is slower than NUTS. The reason is that the number of leapfrog steps in each HMC trajectory is fixed at $10$, while NUTS adapts it per trajectory.
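Concretely, because the HMC cell sets `trajectory_length = 10 * step_size`, the leapfrog count per trajectory is pinned at 10 no matter what `step_size` is (the value below is a stand-in):

```python
step_size = 0.5                        # stand-in value; any positive number works
trajectory_length = 10 * step_size     # as in the HMC cell above
num_steps = int(trajectory_length / step_size)
print(num_steps)  # → 10
```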
**Some takeaways:**
+ The overhead of iterative NUTS is pretty small, so most of the computation time is indeed spent evaluating the potential function and its gradient.
+ GPU outperforms CPU by a large margin. The dataset is large, so evaluating the potential function on the GPU is clearly faster than doing so on the CPU.
## References
1. `Simple, Distributed, and Accelerated Probabilistic Programming,` [arxiv](https://arxiv.org/abs/1811.02091)<br>
Dustin Tran, Matthew D. Hoffman, Dave Moore, Christopher Suter, Srinivas Vasudevan, Alexey Radul, Matthew Johnson, Rif A. Saurous
---
```
import pandas as pd
df = pd.read_csv(r'C:\Users\rohit\Documents\Flight Delay\flightdata.csv')
df.head()
df.shape
df.isnull().values.any()
df.isnull().sum()
df = df.drop('Unnamed: 25', axis=1)
df.isnull().sum()
df = pd.read_csv(r'C:\Users\rohit\Documents\Flight Delay\flightdata.csv')
df = df[["MONTH", "DAY_OF_MONTH", "DAY_OF_WEEK", "ORIGIN", "DEST", "CRS_DEP_TIME", "DEP_DEL15", "CRS_ARR_TIME", "ARR_DEL15"]]
df.isnull().sum()
df[df.isnull().values.any(axis=1)].head()
df = df.fillna({'ARR_DEL15': 1})
df = df.fillna({'DEP_DEL15': 1})
df.iloc[177:185]
df.head()
import math
for index, row in df.iterrows():
df.loc[index, 'CRS_DEP_TIME'] = math.floor(row['CRS_DEP_TIME'] / 100)
df.loc[index, 'CRS_ARR_TIME'] = math.floor(row['CRS_ARR_TIME'] / 100)
df.head()
df = pd.get_dummies(df, columns=['ORIGIN', 'DEST'])
df.head()
from sklearn.model_selection import train_test_split
train_x, test_x, train_y, test_y = train_test_split(df.drop(['ARR_DEL15','DEP_DEL15'], axis=1), df[['ARR_DEL15','DEP_DEL15']], test_size=0.2, random_state=42)
train_x.shape
test_x.shape
train_y.shape
test_y.shape
from sklearn.ensemble import RandomForestClassifier
model = RandomForestClassifier(random_state=13)
model.fit(train_x, train_y)
predicted = model.predict(test_x)
model.score(test_x, test_y)
from sklearn.metrics import roc_auc_score
probabilities = model.predict_proba(test_x)
roc_auc_score(test_y, probabilities[0])
from sklearn.metrics import multilabel_confusion_matrix
multilabel_confusion_matrix(test_y, predicted)
from sklearn.metrics import precision_score
train_predictions = model.predict(train_x)
precision_score(train_y, train_predictions, average = None)
from sklearn.metrics import recall_score
recall_score(train_y, train_predictions, average = None)
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
from sklearn.metrics import roc_curve
fpr, tpr, _ = roc_curve(test_y, probabilities[0])
plt.plot(fpr, tpr)
plt.plot([0, 1], [0, 1], color='grey', lw=1, linestyle='--')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
def predict_delay(departure_date_time, arrival_date_time, origin, destination):
from datetime import datetime
try:
departure_date_time_parsed = datetime.strptime(departure_date_time, '%d/%m/%Y %H:%M:%S')
arrival_date_time_parsed = datetime.strptime(arrival_date_time, '%d/%m/%Y %H:%M:%S')
except ValueError as e:
return 'Error parsing date/time - {}'.format(e)
month = departure_date_time_parsed.month
day = departure_date_time_parsed.day
day_of_week = departure_date_time_parsed.isoweekday()
hour = departure_date_time_parsed.hour
origin = origin.upper()
destination = destination.upper()
input = [{'MONTH': month,
'DAY_OF_MONTH': day,
'DAY_OF_WEEK': day_of_week,
'CRS_DEP_TIME': hour,
'CRS_ARR_TIME': arrival_date_time_parsed.hour,
'ORIGIN_ATL': 1 if origin == 'ATL' else 0,
'ORIGIN_DTW': 1 if origin == 'DTW' else 0,
'ORIGIN_JFK': 1 if origin == 'JFK' else 0,
'ORIGIN_MSP': 1 if origin == 'MSP' else 0,
'ORIGIN_SEA': 1 if origin == 'SEA' else 0,
'DEST_ATL': 1 if destination == 'ATL' else 0,
'DEST_DTW': 1 if destination == 'DTW' else 0,
'DEST_JFK': 1 if destination == 'JFK' else 0,
'DEST_MSP': 1 if destination == 'MSP' else 0,
'DEST_SEA': 1 if destination == 'SEA' else 0 }]
return model.predict_proba(pd.DataFrame(input))[0][0]
```
---
<a href="https://colab.research.google.com/github/cindyhfls/NMA_DL_2021_project/blob/main/DifferentRegionsCorrelatedLatents/restandmove.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Focus on what matters: inferring low-dimensional dynamics from neural recordings
**By Neuromatch Academy**
__Content creators:__ Marius Pachitariu, Pedram Mouseli, Lucas Tavares, Jonny Coutinho,
Blessing Itoro, Gaurang Mahajan, Rishika Mohanta
**Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs**
<p align='center'><img src='https://github.com/NeuromatchAcademy/widgets/blob/master/sponsors.png?raw=True'/></p>
---
# Objective:
It is very difficult to interpret the activity of single neurons in the brain, because their firing patterns are noisy, and it is not clear how a single neuron can contribute to cognition and behavior. However, neurons in the brain participate in local, regional and brainwide dynamics. No neuron is isolated from these dynamics, and much of a single neuron's activity can be predicted from the dynamics. Furthermore, only populations of neurons as a whole can control cognition and behavior. Hence it is crucial to identify these dynamical patterns and relate them to stimuli or behaviors.
In this notebook, we generate simulated data from a low-dimensional dynamical system and then use seq-to-seq methods to predict one subset of neurons from another. This allows us to identify the low-dimensional dynamics that are sufficient to explain the activity of neurons in the simulation. The methods described in this notebook can be applied to large-scale neural recordings of hundreds to tens of thousans of neurons, such as the ones from the NMA-CN course.
---
# Setup
```
# Imports
import torch
import numpy as np
import torch.nn as nn
import torch.nn.functional as F
from matplotlib import pyplot as plt
import math
from sklearn.linear_model import LinearRegression
import copy
# @title Figure settings
from matplotlib import rcParams
rcParams['figure.figsize'] = [20, 4]
rcParams['font.size'] =15
rcParams['axes.spines.top'] = False
rcParams['axes.spines.right'] = False
rcParams['figure.autolayout'] = True
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(device)
def pearson_corr_tensor(input, output):
rpred = output.detach().cpu().numpy()
rreal = input.detach().cpu().numpy()
rpred_flat = np.ndarray.flatten(rpred)
rreal_flat = np.ndarray.flatten(rreal)
corrcoeff = np.corrcoef(rpred_flat, rreal_flat)
return corrcoeff[0,1]
#@title Set random seed
#@markdown Executing `set_seed(seed=seed)` you are setting the seed
# for DL its critical to set the random seed so that students can have a
# baseline to compare their results to expected results.
# Read more here: https://pytorch.org/docs/stable/notes/randomness.html
# Call `set_seed` function in the exercises to ensure reproducibility.
import random
import torch
def set_seed(seed=None, seed_torch=True):
if seed is None:
seed = np.random.choice(2 ** 32)
random.seed(seed)
np.random.seed(seed)
if seed_torch:
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
print(f'Random seed {seed} has been set.')
# In case that `DataLoader` is used
def seed_worker(worker_id):
worker_seed = torch.initial_seed() % 2**32
np.random.seed(worker_seed)
random.seed(worker_seed)
```
**Note:** If `cuda` is not enabled, go to `Runtime`--> `Change runtime type` and in `Hardware acceleration` choose `GPU`.
```
# Data Loading
#@title Data retrieval
import os, requests
fname = []
for j in range(3):
fname.append('steinmetz_part%d.npz'%j)
url = ["https://osf.io/agvxh/download"]
url.append("https://osf.io/uv3mw/download")
url.append("https://osf.io/ehmw2/download")
for j in range(len(url)):
if not os.path.isfile(fname[j]):
try:
r = requests.get(url[j])
except requests.ConnectionError:
print("!!! Failed to download data !!!")
else:
if r.status_code != requests.codes.ok:
print("!!! Failed to download data !!!")
else:
with open(fname[j], "wb") as fid:
fid.write(r.content)
alldat = np.array([])
for j in range(len(fname)):
alldat = np.hstack((alldat, np.load('steinmetz_part%d.npz'%j, allow_pickle=True)['dat']))
#@title Print Keys
print(alldat[0].keys())
#@title Define Steinmetz Class
class SteinmetzSession:
data = []
binSize = 10
nTrials = []
nNeurons = []
trialLen = 0
trimStart = "trialStart"
trimEnd = "trialEnd"
def __init__(self, dataIn):
self.data = copy.deepcopy(dataIn)
dims1 = np.shape(dataIn['spks'])
self.nTrials = dims1[1]
self.nNeurons = dims1[0]
self.trialLen = dims1[2]
def binData(self, binSizeIn): # Combines binSizeIn consecutive bins into one larger bin, e.g. binSizeIn=5 merges every five 10 ms bins into one 50 ms bin across all trials.
varsToRebinSum = ['spks']
varsToRebinMean = ['wheel', 'pupil']
spikes = self.data['spks']
histVec = range(0,self.trialLen+1, binSizeIn)
spikesBin = np.zeros((self.nNeurons, self.nTrials, len(histVec)))
print(histVec)
for trial in range(self.nTrials):
spikes1 = np.squeeze(spikes[:,trial,:])
for time1 in range(len(histVec)-1):
spikesBin[:,trial, time1] = np.sum(spikes1[:, histVec[time1]:histVec[time1+1]-1], axis=1)
spikesBin = spikesBin[:,:,:-1]
self.data['spks'] = spikesBin
self.trialLen = len(histVec) -1
self.binSize = self.binSize*binSizeIn
s = "Binned spikes, turning a " + repr(np.shape(spikes)) + " matrix into a " + repr(np.shape(spikesBin)) + " matrix"
print(s)
def plotTrial(self, trialNum): # Basic function to plot the firing rate during a single trial. Used for debugging trimming and binning
plt.imshow(np.squeeze(self.data['spks'][:,trialNum,:]), cmap='gray_r', aspect = 'auto')
plt.colorbar()
plt.xlabel("Time (bins)")
plt.ylabel("Neuron #")
def realign_data_to_movement(self,length_time_in_ms): # input has to be n * nTrials * nbins
align_time_in_bins = np.round(self.data['response_time']/self.binSize*1000)+ int(500/self.binSize) # has to add 0.5 s because the first 0.5 s is pre-stimulus
length_time_in_bins = int(length_time_in_ms/self.binSize)
validtrials = self.data['response']!=0
maxtime = self.trialLen
newshape = (self.nNeurons,self.nTrials)
newshape+=(length_time_in_bins,)
newdata = np.empty(newshape)
for count,align_time_curr_trial in enumerate(align_time_in_bins):
if (validtrials[count]==0)|(align_time_curr_trial+length_time_in_bins>maxtime) :
validtrials[count] = 0
else:
newdata[:,count,:]= self.data['spks'][:,count,int(align_time_curr_trial):int(align_time_curr_trial)+length_time_in_bins]
# newdata = newdata[:,validtrials,:]
self.data['spks'] = newdata
# self.validtrials = validtrials
print('spikes aligned to movement, returning validtrials')
return validtrials
def realign_data_to_rest(self,length_time_in_ms): # input has to be n * nTrials * nbins
align_time_in_bins = np.zeros(self.data['response_time'].shape)
length_time_in_bins = int(length_time_in_ms/self.binSize)
newshape = (self.nNeurons,self.nTrials)
newshape+=(length_time_in_bins,)
newdata = np.empty(newshape)
for count,align_time_curr_trial in enumerate(align_time_in_bins):
newdata[:,count,:]= self.data['spks'][:,count,int(align_time_curr_trial):int(align_time_curr_trial)+length_time_in_bins]
self.data['spks'] = newdata
print('spikes aligned to rest')
def get_areas(self):
print(set(list(self.data['brain_area'])))
return set(list(self.data['brain_area']))
def extractROI(self, region): #### extract neurons from single region
rmrt=list(np.where(self.data['brain_area']!=region))[0]
print(f' removing data from {len(rmrt)} neurons not contained in {region} ')
self.data['spks']=np.delete(self.data['spks'],rmrt,axis=0)
neur=len(self.data['spks'])
print(f'neurons remaining in trial {neur}')
self.data['brain_area']=np.delete(self.data['brain_area'],rmrt,axis=0)
self.data['ccf']=np.delete(self.data['ccf'],rmrt,axis=0)
def FlattenTs(self):
self.data['spks']=np.hstack(self.data['spks'][:])
def removeTrialAvgFR(self):
mFR = self.data['spks'].mean(1)
mFR = np.expand_dims(mFR, 1).repeat(self.data['spks'].shape[1],axis = 1)
print(np.shape(self.data['spks']))
print(np.shape(mFR))
self.data['spks'] = self.data['spks'].astype(float)
self.data['spks'] -= mFR
def permdims(self):
return torch.permute(torch.tensor(self.data['spks']),(2,1,0))
def smoothFR(self, smoothingWidth):# TODO: Smooth the data and save it back to the data structure
return 0
```
# function to run all areas
```
def run_each_region(PA_name,IA_name,ncomp,learning_rate_start,plot_on = False,verbose = False,niter = 400):
nTr = np.argwhere(validtrials) # since the other trials were defaulted to a zero value, only plot the valid trials
## plot a trial
if plot_on:
plt.figure()
curr_session.plotTrial(nTr[1])
plt.title('All')
PA = copy.deepcopy(curr_session)
###remove all neurons not in motor cortex
PA.extractROI(PA_name)
### plot a trial from motor neuron
if plot_on:
plt.figure()
PA.plotTrial(nTr[1])
plt.title('Predicted Area')
### permute the trials
PAdata = PA.permdims().float().to(device)
PAdata = PAdata[:,validtrials,:]
if IA_name == 'noise':
# generate some negative controls:
IAdata= torch.maximum(torch.randn(PAdata.shape),torch.zeros(PAdata.shape)) # for now say the shape of noise matches the predicted area, I doubt that matters?
if plot_on:
plt.figure()
plt.imshow(np.squeeze(IAdata[:,nTr[1],:].numpy().T),cmap = 'gray_r',aspect = 'auto')
plt.title('Random noise')
IAdata = IAdata.float().to(device)
else:
IA = copy.deepcopy(curr_session)
###remove all neurons not in motor cortex
IA.extractROI(IA_name)
if plot_on:
### plot a trial from motor neuron
plt.figure()
IA.plotTrial(nTr[1])
plt.title('Input Area')
IAdata = IA.permdims().float().to(device)
IAdata = IAdata[:,validtrials,:]
##@title get indices for trials (split into ~60%, 30%,10%)
N = PAdata.shape[1]
np.random.seed(42)
ii = torch.randperm(N).tolist()
idx_train = ii[:math.floor(0.6*N)]
idx_val = ii[math.floor(0.6*N):math.floor(0.9*N)]
idx_test = ii[math.floor(0.9*N):]
##@title split into train, test and validation set
x0 = IAdata
x0_train = IAdata[:,idx_train,:]
x0_val = IAdata[:,idx_val,:]
x0_test = IAdata[:,idx_test,:]
x1 = PAdata
x1_train = PAdata[:,idx_train,:]
x1_val = PAdata[:,idx_val,:]
x1_test = PAdata[:,idx_test,:]
NN1 = PAdata.shape[2]
NN2 = IAdata.shape[2]
class Net_singleinput(nn.Module): # our model
def __init__(self, ncomp, NN2, NN1, bidi=True): # NN2 is input dim, NN1 is output dim
super(Net_singleinput, self).__init__()
# play with some of the options in the RNN!
self.rnn1 = nn.RNN(NN2, ncomp, num_layers = 1, dropout = 0, # PA
bidirectional = bidi, nonlinearity = 'tanh')
self.fc = nn.Linear(ncomp,NN1)
def forward(self, x0):
y = self.rnn1(x0)[0] # ncomp IAs
if self.rnn1.bidirectional:
# if the rnn is bidirectional, it concatenates the activations from the forward and backward pass
# we want to add them instead, so as to enforce the latents to match between the forward and backward pass
q = (y[:, :, :ncomp] + y[:, :, ncomp:])/2
else:
q = y
# the softplus function is just like a relu but it's smoothed out so we can't predict 0
# if we predict 0 and there was a spike, that's an instant Inf in the Poisson log-likelihood which leads to failure
z = F.softplus(self.fc(q), 10)
return z, q
# @title train loop
# you can keep re-running this cell if you think the cost might decrease further
# we define the Poisson log-likelihood loss
def Poisson_loss(lam, spk):
return lam - spk * torch.log(lam)
def train(net,train_input,train_output,val_input,val_output,niter = niter):
set_seed(42)
optimizer = torch.optim.Adam(net.parameters(), lr=learning_rate_start)
training_cost = []
val_cost = []
for k in range(niter):
### training
optimizer.zero_grad()
# the network outputs the single-neuron prediction and the latents
z,_= net(train_input)
# our log-likelihood cost
cost = Poisson_loss(z, train_output).mean()
# train the network as usual
cost.backward()
optimizer.step()
training_cost.append(cost.item())
### test on validation data
z_val,_ = net(val_input)
cost = Poisson_loss(z_val, val_output).mean()
val_cost.append(cost.item())
if (k % 100 == 0) and verbose:
print(f'iteration {k}, cost {cost.item():.4f}')
return training_cost,val_cost
# @title train model PA->PA only
net_PAPA = Net_singleinput(ncomp, NN1, NN1, bidi = False).to(device)
net_PAPA.fc.bias.data[:] = x1.mean((0,1))
training_cost_PAPA,val_cost_PAPA = train(net_PAPA,x1_train,x1_train,x1_val,x1_val) # train
# @title train model IA->PA only
net_IAPA = Net_singleinput(ncomp, NN2, NN1, bidi = False).to(device)
net_IAPA.fc.bias.data[:] = x1.mean((0,1))
training_cost_IAPA,val_cost_IAPA = train(net_IAPA,x0_train,x1_train,x0_val,x1_val) # train
# get latents
z_PAPA,y_PAPA= net_PAPA(x1_train)
z_IAPA,y_IAPA= net_IAPA(x0_train)
#@title plot the training side-by-side
if plot_on:
plt.figure()
plt.plot(training_cost_PAPA,'b')
plt.plot(training_cost_IAPA,'b',linestyle = '--')
plt.plot(val_cost_PAPA,'r')
plt.plot(val_cost_IAPA,'r',linestyle = '--')
plt.legend(['training cost (PAPA)','training cost (IAPA)','validation cost(PAPA)',
'validation cost (IAPA)'])
plt.title('Training cost over epochs')
plt.ylabel('cost')
plt.xlabel('epochs')
# see if the latents are correlated?
plt.figure()
plt.subplot(2,1,1)
plt.plot(y_PAPA[:,0,:].detach().cpu().numpy())
plt.subplot(2,1,2)
plt.plot(y_IAPA[:,0,:].detach().cpu().numpy())
if verbose:
print(F.cosine_similarity(z_PAPA.flatten(start_dim = 0,end_dim = 1).T,z_IAPA.flatten(start_dim = 0,end_dim = 1).T).mean())
print(F.cosine_similarity(z_PAPA.flatten(start_dim = 0,end_dim = 1).T,x1_train.flatten(start_dim = 0,end_dim = 1).T).mean())
print(F.cosine_similarity(z_IAPA.flatten(start_dim = 0,end_dim = 1).T,x1_train.flatten(start_dim = 0,end_dim = 1).T).mean())
diff_cosine_similarity = torch.subtract(F.cosine_similarity(z_PAPA.flatten(start_dim = 0,end_dim = 1).T,x1_train.flatten(start_dim = 0,end_dim = 1).T).mean(),F.cosine_similarity(z_IAPA.flatten(start_dim = 0,end_dim = 1).T,x1_train.flatten(start_dim = 0,end_dim = 1).T).mean())
diff_cosine_similarity = diff_cosine_similarity.detach().cpu().tolist()
if plot_on:
plt.figure()
plt.hist(F.cosine_similarity(z_PAPA.flatten(start_dim = 0,end_dim = 1).T,x1_train.flatten(start_dim = 0,end_dim = 1).T).detach().cpu().numpy())
plt.hist(F.cosine_similarity(z_IAPA.flatten(start_dim = 0,end_dim = 1).T,x1_train.flatten(start_dim = 0,end_dim = 1).T).detach().cpu().numpy())
plt.legend(('PAPA','IAPA'))
plt.title('cosine_similarity by neuron')
def regress_tensor(X,y):
X = X.detach().cpu().numpy()
y = y.flatten().detach().cpu().numpy().reshape(-1,1)
model = LinearRegression()
model.fit(X, y)
r_sq = model.score(X, y)
if verbose:
print('coefficient of determination:', r_sq)
return r_sq
rsqmat = []
for i in range(ncomp):
rsqmat.append(regress_tensor(y_IAPA.flatten(start_dim = 0,end_dim = 1),y_PAPA[:,:,i].reshape(-1,1)))
Avg_rsq = sum(rsqmat)/len(rsqmat)
max_rsq = max(rsqmat)
if verbose:
print('Average Rsq for predicting the %i latents in IAPA from a linear combination of %i latents in PAPA is %2.3f'%(ncomp,ncomp,Avg_rsq))
print('Max Rsq for predicting the %i latents in IAPA from a linear combination of %i latents in PAPA is %2.3f'%(ncomp,ncomp,max_rsq))
return diff_cosine_similarity,rsqmat
```
# select session and area
```
# set the sessions
session_num = 30
curr_session=SteinmetzSession(alldat[session_num])
# some preprocessing
validtrials = curr_session.realign_data_to_movement(500) # get 500 ms from movement time,
# had to load again for at rest
curr_session=SteinmetzSession(alldat[session_num])
curr_session.realign_data_to_rest(500)
# cannot get realign and binning to work at the same time =[
# print areas
areas = curr_session.get_areas()
# CHANGE ME
# Set input/hyperparameters here:
ncomp = 10
learning_rate_start = 0.005
# set areas
PA_name = 'MOs' # predicted area
all_other_areas = ['noise']
all_other_areas = all_other_areas+list(areas-set([PA_name]))
print(all_other_areas)
counter = 0
for IA_name in all_other_areas:
print(IA_name)
diff_cosine_similarity,rsqmat = run_each_region(PA_name,IA_name,ncomp,learning_rate_start)
if counter == 0:
allrsq = np.array(rsqmat)
cos_sim_mat = np.array(diff_cosine_similarity)
else:
allrsq = np.vstack((allrsq,np.array(rsqmat)))
cos_sim_mat = np.vstack((cos_sim_mat,np.array(diff_cosine_similarity)))
counter +=1
summary = {'output area':PA_name,'input area':all_other_areas,'cosine_similarity_difference':cos_sim_mat,'all rsq':allrsq};
avg_rsq = summary['all rsq'].mean(1)
max_rsq = summary['all rsq'].max(1)
sort_index = np.argsort(np.array(avg_rsq)) # sort by average
input_areas = np.array(summary['input area'])
plt.figure()
plt.plot(avg_rsq[sort_index])
plt.plot(max_rsq[sort_index])
plt.xticks(range(len(sort_index)),input_areas[sort_index])
plt.legend(('average Rsq','max Rsq'))
plt.title('Rsq in predicting '+summary['output area']+' latents');
plt.ylabel('Rsq');
plt.xlabel('Regions')
outfile = 'summary_arrays_rest.npz'
np.savez(outfile, **summary)
import numpy as np
import matplotlib.pyplot as plt
summary = np.load('summary_arrays_rest.npz')
print('summary.files: {}'.format(summary.files))
print('summary["input area"]: {}'.format(summary["input area"]))
cos_sim_mat = np.array(summary['cosine_similarity_difference']).squeeze()
input_areas = summary['input area']
sort_index = np.argsort(cos_sim_mat)[::-1] # sort by average
plt.figure()
plt.rcParams["figure.figsize"] = (20, 10)
plt.rcParams.update({'font.size': 20})
plt.plot(cos_sim_mat[sort_index])
plt.xticks(range(len(sort_index)),input_areas[sort_index])
plt.title('Average difference in cosine similarity between area1-area1 prediction and area2-area1 prediction')
plt.xlabel('input areas')
plt.ylabel('cosine similarity')
summary_move = np.load('summary_arrays.npz')
allrsq_move = summary_move['all rsq']
indx =[0,4,1,3,6,5,7,2]
allrsq_move = allrsq_move[indx,:]
avg_rsq = summary['all rsq'].mean(1)
sort_index = np.argsort(np.array(avg_rsq)) # sort by average
print(sort_index)
input_areas = summary['input area']
plotx = np.array(range(1,9))
allrsq.shape
plt.figure()
_ = plt.boxplot(allrsq.T, positions = plotx-0.2,widths = 0.2,patch_artist = True,boxprops = dict(facecolor = 'blue'),medianprops = dict(color = 'white'))
_ = plt.boxplot(allrsq_move.T, positions = plotx+0.2,widths = 0.2,patch_artist = True,boxprops = dict(facecolor = 'red'),medianprops = dict(color = 'white'))
plt.xticks(range(1,len(sort_index)+1),input_areas)
plt.title('Rsq in predicting '+summary['output area'].tolist()+' latents');
plt.ylabel('Rsq');
plt.xlabel('Regions')
plt.legend(('rest','movement'))
print(summary['input area'])
print(summary_move['input area'])
input_area_move = summary_move['input area']
indx =[0,4,1,3,6,5,7,2]
print(input_area_move[indx])
```
```
import pymongo
import pandas as pd
import numpy as np
from pymongo import MongoClient
from bson.objectid import ObjectId
import datetime
import matplotlib.pyplot as plt
from collections import defaultdict
%matplotlib inline
import json
plt.style.use('ggplot')
import seaborn as sns
from math import log10, floor
## Connect to local DB
client = MongoClient('localhost', 27017)
print ("Setup db access")
#
# Get collections from mongodb
#
#db = client.my_test_db
db = client.test
chunk = 100000
start = 0
end = start + chunk
responses = db.anon_student_task_responses.find()[start:end]
df_responses = pd.DataFrame(list(responses))
print (df_responses.head())
df2 = df_responses.join(pd.DataFrame(df_responses["student"].to_dict()).T)
df2 = df2.join(pd.DataFrame(df2['level_summary'].to_dict()).T)
df2 = df2.join(pd.DataFrame(df2['problems'].to_dict()).T)
df3 = df2.copy()
## Look act columns
print (df_responses.columns)
## How many data samples
print (len(df_responses), "Number of entries")
## Make 'description' a feature with important words mapped
df3.columns
df3['percent_correct'] = df3['nright'].astype(float) / df3['ntotal']
df3.iloc[0]
for idx in range(100):
print ('index"', idx)
print (df3.iloc[idx]['lesson'])
print (df3.iloc[idx]['response'])
def stringify_response(resp):
my_val = str(resp).replace("': ","_")
my_val = my_val.replace("_{"," ")
my_val = my_val.replace("_[",", ")
    for c in [']','[','{','}',"'",","]:
my_val = my_val.replace(c,'')
return my_val
stringify_response(df3.iloc[0]['response'])
df3['response_str'] = df3['response'].apply(stringify_response)
for idx in range(20):
print (idx, df3['response_str'].iloc[idx])
df3.columns
## In Response:
### convert K, V, and all K_V into words in a text doc
### Then add text
### The add description
def make_string_from_list(key, elem_list):
    # Append key to each item in list
    ans = ''
    for elem in elem_list:
        ans += key + '_' + elem
    return ans
def make_string(elem, key=None, top=True):
ans = ''
if not elem:
return ans
if top:
top = False
top_keys = []
for idx in range(len(elem.keys())):
top_keys.append(True)
for idx, key in enumerate(elem.keys()):
if top_keys[idx]:
top = True
top_keys[idx] = False
ans += ' '
else:
top = False
#print ('ans = ', ans)
#print (type(elem[key]))
if type(elem[key]) is str or\
type(elem[key]) is int:
#print ('add value', elem[key])
value = str(elem[key])
#ans += key + '_' + value + ' ' + value + ' '
ans += key + '_' + value + ' '
elif type(elem[key]) is list:
#print ('add list', elem[key])
temp_elem = dict()
for item in elem[key]:
temp_elem[key] = item
ans += make_string(temp_elem, top=top)
elif type(elem[key]) is dict:
#print ('add dict', elem[key])
for item_key in elem[key].keys():
temp_elem = dict()
temp_elem[item_key] = elem[key][item_key]
ans += key + '_' + make_string(temp_elem, top=top)
elif type(elem[key]) is float:
#print ('add dict', elem[key])
sig = 2
value = elem[key]
value = round(value, sig-int(
floor(log10(abs(value))))-1)
value = str(value)
#ans += key + '_' + value + ' ' + value + ' '
ans += key + '_' + value + ' '
# ans += ' ' + key + ' '
#print ('not handled', elem[key])
return ans
df3['response_doc'] = df3['response'].map(make_string)
df3['response_doc'] = df3['response_doc'].map(lambda x: x + ' ')
df3['response_doc'] = df3['response_doc'] + df3['txt']
df3['response_doc'] = df3['response_doc'].map(lambda x: x + ' ')
df3['response_doc'] = df3['response_doc'] + df3['description']
df3['response_doc'] = df3['response_doc'].map(lambda x: x.replace("\n", ""))
df3['response_doc'] = df3['response_doc'].map(lambda x: x.replace("?", " "))
df3.iloc[100]['response_doc']
df3.iloc[100]['response']
for idx in range(20):
print (idx, df3['response_doc'].iloc[idx])
df3['response_doc'] = df3['response_doc'].map( lambda x: " ".join(x.split('/')) if '/' in x else x)
df3.iloc[100]['response_doc']
df3['response_doc'] = df3['response_doc'].map( lambda x: x.replace('[',' '))
df3['response_doc'] = df3['response_doc'].map( lambda x: x.replace(']',' '))
df3.iloc[100]['response_doc']
docs = list(df3['response_doc'])
from time import time
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.decomposition import NMF, LatentDirichletAllocation
data_samples = docs
n_features = 1000
n_samples = len(data_samples)
n_topics = 100
n_top_words = 30
print("Extracting tf-idf features for NMF...")
tfidf_vectorizer = TfidfVectorizer(max_df=0.95, min_df=2,
max_features=n_features,
stop_words='english')
t0 = time()
tfidf = tfidf_vectorizer.fit_transform(data_samples)
print("done in %0.3fs." % (time() - t0))
print("Extracting tf features for LDA...")
tf_vectorizer = CountVectorizer(max_df=0.95, min_df=2,
max_features=n_features,
stop_words='english')
t0 = time()
tf = tf_vectorizer.fit_transform(data_samples)
print("done in %0.3fs." % (time() - t0))
# Fit the NMF model
print("Fitting the NMF model with tf-idf features, "
"n_samples=%d and n_features=%d..."
% (n_samples, n_features))
t0 = time()
nmf = NMF(n_components=n_topics, random_state=1,
alpha=.1, l1_ratio=.5).fit(tfidf)
print("done in %0.3fs." % (time() - t0))
def print_top_words(model, feature_names, n_top_words):
for topic_idx, topic in enumerate(model.components_):
print("Topic #%d:" % topic_idx)
print(" ".join([feature_names[i]
for i in topic.argsort()[:-n_top_words - 1:-1]]))
print()
print("\nTopics in NMF model:")
tfidf_feature_names = tfidf_vectorizer.get_feature_names()
print_top_words(nmf, tfidf_feature_names, n_top_words)
print("Fitting LDA models with tf features, "
"n_samples=%d and n_features=%d..."
% (n_samples, n_features))
lda = LatentDirichletAllocation(n_topics=n_topics, max_iter=5,
learning_method='online',
learning_offset=50.,
random_state=0)
t0 = time()
lda.fit(tf)
print("done in %0.3fs." % (time() - t0))
print("\nTopics in LDA model:")
tf_feature_names = tf_vectorizer.get_feature_names()
print_top_words(lda, tf_feature_names, n_top_words)
n_features = 1000
n_samples = len(data_samples)
n_topics = 50
n_top_words = 20
print("Extracting tf-idf features for NMF...")
tfidf_vectorizer = TfidfVectorizer(max_df=0.95, min_df=2,
max_features=n_features,
stop_words='english')
t0 = time()
tfidf = tfidf_vectorizer.fit_transform(data_samples)
print("done in %0.3fs." % (time() - t0))
print("Extracting tf features for LDA...")
tf_vectorizer = CountVectorizer(max_df=0.95, min_df=2,
max_features=n_features,
stop_words='english')
t0 = time()
tf = tf_vectorizer.fit_transform(data_samples)
print("done in %0.3fs." % (time() - t0))
print("\nTopics in NMF model:")
tfidf_feature_names = tfidf_vectorizer.get_feature_names()
print_top_words(nmf, tfidf_feature_names, n_top_words)
print("Fitting LDA models with tf features, "
"n_samples=%d and n_features=%d..."
% (n_samples, n_features))
lda = LatentDirichletAllocation(n_topics=n_topics, max_iter=5,
learning_method='online',
learning_offset=50.,
random_state=0)
t0 = time()
lda.fit(tf)
print("done in %0.3fs." % (time() - t0))
print("\nTopics in LDA model:")
tf_feature_names = tf_vectorizer.get_feature_names()
print_top_words(lda, tf_feature_names, n_top_words)
from sklearn.cluster import KMeans, MiniBatchKMeans
true_k = 100
km = MiniBatchKMeans(n_clusters=true_k, init='k-means++', n_init=1,
init_size=1000, batch_size=1000)
print("Clustering sparse data with %s" % km)
t0 = time()
km.fit(tf)
print("done in %0.3fs" % (time() - t0))
print()
print("Top terms per cluster:")
order_centroids = km.cluster_centers_.argsort()[:, ::-1]
terms = tf_vectorizer.get_feature_names()
for i in range(true_k):
print("Cluster %d:" % i, end='')
for ind in order_centroids[i, :10]:
print(' %s' % terms[ind], end='')
print()
len(km.labels_)
np.bincount(km.labels_)
df3['cluster_100'] = km.labels_
df3['trait_1'] = df3['behavioral_traits'].apply(lambda x : x[0] if len(x) > 0 else 'None' )
df3['trait_2'] = df3['behavioral_traits'].apply(lambda x : x[1] if len(x) > 1 else 'None' )
df3['trait_1'].value_counts()
df3['trait_2'].value_counts()
df_cluster_100 = df3.groupby('cluster_100')
df_cluster_100.head()
df3['percent_correct'].groupby(df3['cluster_100']).describe()
df_trait_1 = df3.groupby(['cluster_100', 'trait_1']).size().unstack(fill_value=0)
df_trait_2 = df3.groupby(['cluster_100', 'trait_2']).size().unstack(fill_value=0)
df_trait_2
df_trait_2.columns
df_trait_1.columns
[x for x in df_trait_2.columns if x not in df_trait_1.columns ]
[x for x in df_trait_1.columns if x not in df_trait_2.columns ]
#df_trait_1 = df_trait_1.drop('None', axis=1)
#df_trait_2 = df_trait_2.drop('None', axis=1)
df_traits = pd.merge(left=df_trait_1,right=df_trait_2, how='left' )
df_trait_1.index.rename('cluster_100', inplace=True)
df_trait_2.index.rename('cluster_100', inplace=True)
df_traits.columns
df_traits = pd.concat([df_trait_1, df_trait_2], axis=1)
df_traits.columns
df_traits
df_traits = df_traits.drop('None', axis=1)
df_traits.to_csv('cluster_100.csv')
df_traits2 = pd.concat([df3['percent_correct'].groupby(df3['cluster_100']).describe(), df_traits], axis=1)
df_traits2.to_csv('cluster_100_plus_correct.csv')
df_traits_dict = df_traits.to_dict(orient='dict')
df_traits_dict
df_traits_dict2 = {}
cluster_with_no_trait = list(np.arange(100))
cluster_with_lt_10_trait = list(np.arange(100))
for trait in df_traits_dict:
#print (idx, trait)
df_traits_dict2[trait] = {}
for cluster in df_traits_dict[trait]:
#print (trait, cluster, df_traits_dict[trait][cluster])
if df_traits_dict[trait][cluster] > 0:
df_traits_dict2[trait][cluster] = df_traits_dict[trait][cluster]
if cluster in cluster_with_no_trait:
cluster_with_no_trait.remove(cluster)
if df_traits_dict[trait][cluster] > 9:
if cluster in cluster_with_lt_10_trait:
cluster_with_lt_10_trait.remove(cluster)
print (df_traits_dict2)
cluster_with_no_trait,
len(cluster_with_no_trait)
len(cluster_with_lt_10_trait)
x = list(df_traits.index)
y = df_traits.sum(axis=1)
y
plt.bar( x, y)
fig, ax = plt.subplots()
rects1 = ax.bar(x, y, color='b')
ax.set_xlabel('Cluster number')
ax.set_ylabel('Lessons with trait at this cluster')
ax.set_title('Traits per cluster')
```
# Imports
```
import pandas as pd
from sqlalchemy import create_engine
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('ggplot')
%matplotlib inline
np.set_printoptions(suppress=True)
```
Goal: Use SQLAlchemy to investigate the NBA data set.
```
#This setting allows us to see every column in the output cell
pd.set_option('display.max_columns', None)
#Import data
all_seasons_df = pd.read_csv('<File_location>/<File_Name>')
#Example: all_seasons_df = pd.read_csv('/Users/<hackerperson>/Desktop/Coding/Projects/Project3/all_seasons_df.csv')
#Check dataframe
all_seasons_df
engine = create_engine('postgresql://johnmetzger:localhost@localhost:5432/nba_ht')
nba_all_data = pd.read_csv('/Users/johnmetzger/Desktop/Coding/Projects/Project3/all_seasons_df.csv')
# Write the data to a table named "nba_all_data" in the "nba_ht" ("NBA Halftime") database
nba_all_data.to_sql('nba_all_data', engine, index=True)
```
Get data on a team
```
#Here, 'DET' is used as an example
# This uses the WHERE Command
Team ='''
SELECT * FROM nba_all_data WHERE "TEAM_ABBREVIATION"='DET';
'''
tanks = pd.read_sql(Team, engine)
tanks
#Here, 'DET' is used as an example
# Pick attribtues to order. Here you are selecting from the df=nba_all_data
# and finding out what values in columns (after first SELECT)
# are greater than the average of a variable. Here it was 'FG_PCT'
Winning ='''
WITH temporaryTable(averageValue) as (SELECT avg("FG_PCT")
from nba_all_data)
SELECT "SEASON_YEAR","TEAM_NAME", "FG_PCT"
FROM nba_all_data, temporaryTable
WHERE nba_all_data."FG_PCT" > temporaryTable.averageValue;
'''
Team_winning = pd.read_sql(Winning, engine)
Team_winning
```
**PERSONAL FOULS**
```
# Teams whose total personal fouls exceed 500 (HAVING must use an aggregate)
PFs ='''
SELECT "TEAM_ABBREVIATION"
FROM nba_all_data
GROUP BY "TEAM_ABBREVIATION"
HAVING SUM("PF") > 500;
'''
PFs = pd.read_sql(PFs, engine)
PFs
# This one uses HAVING and GROUPBY. It shows that Boston Celtics was the only
# team with more than an average of 10 personal fouls per game.
PFs ='''
SELECT AVG("PF"), "TEAM_ABBREVIATION"
FROM nba_all_data
GROUP BY "TEAM_ABBREVIATION"
HAVING AVG("PF") > 10;
'''
PFs = pd.read_sql(PFs, engine)
PFs
## Sorted where first half score MAX was higher than 50.
Score ='''
SELECT MAX("PTS"), "TEAM_ABBREVIATION"
FROM nba_all_data
GROUP BY "TEAM_ABBREVIATION"
HAVING MAX("PTS") > 50;
'''
Score = pd.read_sql(Score, engine)
Score
```
# Intro
```
query = '''
SELECT * FROM nba_all_data;
'''
simpsons=pd.read_sql(query, engine)
simpsons
# Sketch of creating a table and bulk-loading the CSV.
# Note: BULK INSERT is SQL Server syntax; on Postgres you would use COPY instead.
create_query = '''CREATE TABLE namename (column1 TEXT);'''
query = '''BULK INSERT namename
FROM '/Users/johnmetzger/Desktop/Coding/Projects/Project3/all_seasons_df.csv'
WITH( FIRSTROW = 2,
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n')'''
```
<h1 align='center'> 8.2 Combining and Merging Datasets </h1>
<b>Database-Style DataFrame Joins</b>
```
import pandas as pd
import numpy as np
df1 = pd.DataFrame({'key': ['b', 'b', 'a', 'c', 'a', 'a', 'b'],
'data1': range(7)})
df1
df2 = pd.DataFrame({'key': ['a', 'b', 'd'],
'data2': range(3)})
df2
pd.merge(df1,df2)
```
Note that I didn’t specify which column to join on. If that information is not specified, merge uses the overlapping column names as the keys. It’s good practice to specify explicitly, though:
```
pd.merge(df1,df2,on='key')
```
If the column names are different in each object, you can specify them separately:
```
df1 = pd.DataFrame({'1_key': ['b', 'b', 'a', 'c', 'a', 'a', 'b'],
'data1': range(7)})
df2 = pd.DataFrame({'2_key': ['a', 'b', 'd'],
'data2': range(3)})
pd.merge(df1,df2,left_on='1_key',right_on='2_key')
```
You may notice that the 'c' and 'd' values and associated data are missing from the result. By default merge does an 'inner' join; the keys in the result are the intersection, or the common set found in both tables. Other possible options are 'left', 'right', and 'outer'. The outer join takes the union of the keys, combining the effect of applying both left and right joins:
It's just like SQL: many-to-many joins form the Cartesian product of the rows.
```
pd.merge(df1,df2,left_on='1_key',right_on='2_key',how='outer')
pd.merge(df1,df2,left_on='1_key',right_on='2_key',how='right')
```
To determine which key combinations will appear in the result depending on the choice of merge method, think of the multiple keys as forming an array of tuples to be used as a single join key (even though it’s not actually implemented that way).
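As a concrete sketch of a multiple-key merge (the frames and column names here are invented for illustration, not taken from the data above):

```python
import pandas as pd

# Two hypothetical frames sharing the key columns 'key1' and 'key2'
left = pd.DataFrame({'key1': ['foo', 'foo', 'bar'],
                     'key2': ['one', 'two', 'one'],
                     'lval': [1, 2, 3]})
right = pd.DataFrame({'key1': ['foo', 'foo', 'bar', 'bar'],
                      'key2': ['one', 'one', 'one', 'two'],
                      'rval': [4, 5, 6, 7]})
# The two key columns together behave as one composite join key;
# ('foo', 'one') appears once on the left and twice on the right,
# so it contributes two rows to the result
merged = pd.merge(left, right, on=['key1', 'key2'], how='outer')
print(merged)
```

With how='outer', key pairs present on only one side survive with NaN filled in for the other side's columns.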
When you’re joining columns-on-columns, the indexes on the passed DataFrame objects are discarded.
A last issue to consider in merge operations is the treatment of overlapping column names. While you can address the overlap manually (see the earlier section on renaming axis labels), merge has a suffixes option for specifying strings to append to overlapping names in the left and right DataFrame objects:
pd.merge(left, right, on='key1', suffixes=('_left', '_right'))
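A minimal runnable sketch of the suffixes behavior (the small frames here are made up for illustration):

```python
import pandas as pd

left = pd.DataFrame({'key1': ['foo', 'bar'], 'value': [1, 2]})
right = pd.DataFrame({'key1': ['foo', 'bar'], 'value': [3, 4]})
# Both frames have a 'value' column; without suffixes pandas would
# append the defaults '_x' and '_y' to disambiguate them
merged = pd.merge(left, right, on='key1', suffixes=('_left', '_right'))
print(merged.columns.tolist())  # ['key1', 'value_left', 'value_right']
```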
<b>Merging on Index</b>
In some cases, the merge key(s) in a DataFrame will be found in its index. In this case, you can pass left_index=True or right_index=True (or both) to indicate that the index should be used as the merge key:
```
left1 = pd.DataFrame({'key': ['a', 'b', 'a', 'a', 'b', 'c'],
'value': range(6)})
right1 = pd.DataFrame({'group_val': [3.5, 7]},
index=['a', 'b'])
left1
right1
pd.merge(left1, right1, left_on='key', right_index=True)
```
With hierarchically indexed data, things are more complicated, as joining on index is implicitly a multiple-key merge:
```
lefth = pd.DataFrame({'key1': ['Ohio', 'Ohio', 'Ohio',
'Nevada', 'Nevada'],
'key2': [2000, 2001, 2002, 2001, 2002],
'data': np.arange(5.)})
righth = pd.DataFrame(np.arange(12).reshape((6, 2)),
index=[['Nevada', 'Nevada', 'Ohio', 'Ohio','Ohio', 'Ohio'],
[2001, 2000, 2000, 2000, 2001, 2002]]
,columns=['event1', 'event2'])
lefth
righth
```
In this case, you have to indicate multiple columns to merge on as a list (note the handling of duplicate index values with how='outer'):
```
pd.merge(lefth, righth, left_on=['key1', 'key2'], right_index=True)
```
DataFrame has a convenient join instance for merging by index. It can also be used to combine together many DataFrame objects having the same or similar indexes but non-overlapping columns.
left.join(right, how='outer')
DataFrame’s join method performs a left join on the join keys, exactly preserving the left frame’s row index. It also supports joining the index of the passed DataFrame on one of the columns of the calling DataFrame:
left.join(right, on='key')
Lastly, for simple index-on-index merges, you can pass a list of DataFrames to join as an alternative to using the more general concat function described in the next section:
left2.join([right2, another])
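A self-contained sketch of these join forms; left2, right2, and another are stand-in frames invented here, since the snippets above don't define them:

```python
import pandas as pd

left2 = pd.DataFrame({'Ohio': [1., 3., 5.]}, index=['a', 'c', 'e'])
right2 = pd.DataFrame({'Missouri': [7., 9., 11.]}, index=['b', 'c', 'e'])
another = pd.DataFrame({'New York': [7., 8., 9.]}, index=['a', 'c', 'f'])

# index-on-index outer join: result index is the union of the two indexes
outer = left2.join(right2, how='outer')
print(outer)

# passing a list of frames joins them all on the index (left join by default,
# so the result keeps left2's row labels)
multi = left2.join([right2, another])
print(multi)
```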
<b>Concatenating Along an Axis</b>
```
arr = np.arange(12).reshape((3, 4))
np.concatenate([arr, arr], axis=1)
```
In the context of pandas objects such as Series and DataFrame, having labeled axes enables you to further generalize array concatenation. In particular, you have a number of additional things to think about:
* If the objects are indexed differently on the other axes, should we combine the distinct elements in these axes or use only the shared values (the intersection)?
* Do the concatenated chunks of data need to be identifiable in the resulting object?
* Does the “concatenation axis” contain data that needs to be preserved? In many cases, the default integer labels in a DataFrame are best discarded during concatenation.
```
s1 = pd.Series([0, 1], index=['a', 'b'])
s2 = pd.Series([2, 3, 4], index=['c', 'd', 'e'])
s3 = pd.Series([5, 6], index=['f', 'g'])
pd.concat([s1, s2, s3])
pd.concat([s1, s2, s3],axis=1)
s4=pd.concat([s1,s3])
s4
pd.concat([s4, s3],axis=1,join='inner')
```
A potential issue is that the concatenated pieces are not identifiable in the result. Suppose instead you wanted to create a hierarchical index on the concatenation axis. To do this, use the keys argument.
```
result = pd.concat([s1, s1, s3], keys=['one', 'two', 'three'])
result
```
In the case of combining Series along axis=1, the keys become the DataFrame column headers:
```
result = pd.concat([s1, s1, s3], keys=['one', 'two', 'three'],axis=1)
result
```
The same logic extends to DataFrame objects:
```
df1 = pd.DataFrame(np.arange(6).reshape(3, 2), index=['a', 'b', 'c'],
columns=['one', 'two'])
df2 = pd.DataFrame(5 + np.arange(4).reshape(2, 2), index=['a', 'c'],
columns=['three', 'four'])
pd.concat([df1, df2], axis=1, keys=['level1', 'level2'])
```
A last consideration concerns DataFrames in which the row index does not contain any relevant data. In this case, you can pass ignore_index=True:
pd.concat([df1, df2], ignore_index=True)
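For example (the df1 and df2 here are small throwaway frames defined for this sketch, not the ones used earlier):

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame(np.random.randn(3, 4), columns=['a', 'b', 'c', 'd'])
df2 = pd.DataFrame(np.random.randn(2, 3), columns=['b', 'd', 'a'])
# Rows are stacked and the original integer labels are discarded,
# giving a fresh default index 0..4
result = pd.concat([df1, df2], ignore_index=True)
print(result.index.tolist())  # [0, 1, 2, 3, 4]
```

Column alignment still happens: the 'c' column, missing from df2, is NaN in the last two rows.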
<b>Combining Data with Overlap</b>
There is another data combination situation that can’t be expressed as either a merge or concatenation operation. You may have two datasets whose indexes overlap in full or part. As a motivating example, consider NumPy’s where function, which performs the array-oriented equivalent of an if-else expression:
```
a = pd.Series([np.nan, 2.5, np.nan, 3.5, 4.5, np.nan],
index=['f', 'e', 'd', 'c', 'b', 'a'])
b = pd.Series(np.arange(len(a), dtype=np.float64),
index=['f', 'e', 'd', 'c', 'b', 'a'])
a
b.iloc[-1] = np.nan  # positional assignment; plain b[-1] is ambiguous on a labeled index
b
```
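The array-oriented if-else mentioned above can be made concrete with np.where; this sketch recreates a and b from the preceding cell so it runs on its own:

```python
import numpy as np
import pandas as pd

a = pd.Series([np.nan, 2.5, np.nan, 3.5, 4.5, np.nan],
              index=['f', 'e', 'd', 'c', 'b', 'a'])
b = pd.Series(np.arange(len(a), dtype=np.float64),
              index=['f', 'e', 'd', 'c', 'b', 'a'])
b.iloc[-1] = np.nan
# Vectorized if-else: wherever a is null, take the value from b; otherwise keep a
result = np.where(pd.isnull(a), b, a)
print(result)  # values: 0.0, 2.5, 2.0, 3.5, 4.5, nan
```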
Series has a combine_first method, which performs the equivalent of this operation along with pandas’s usual data alignment logic:
```
b[:-2].combine_first(a[2:])
```
With DataFrames, combine_first does the same thing column by column, so you can think of it as “patching” missing data in the calling object with data from the object you pass:
```
df1 = pd.DataFrame({'a': [1., np.nan, 5., np.nan],
'b': [np.nan, 2., np.nan, 6.],
'c': range(2, 18, 4)})
df2 = pd.DataFrame({'a': [5., 4., np.nan, 3., 7.],
'b': [np.nan, 3., 4., 6., 8.]})
df1
df2
df1.combine_first(df2)
```
# Python Dictionaries
## Dictionaries
* Collection of Key - Value pairs
* also known as associative array
* unordered (though insertion order is preserved since Python 3.7)
* keys unique in one dictionary
* storing, extracting
```
emptyd = {}
len(emptyd)
type(emptyd)
tel = {'jack': 4098, 'sape': 4139}
print(tel)
tel['guido'] = 4127
print(tel.keys())
print(tel.values())
# add key 'valdis' with value 4127 to our tel dictionary
tel['valdis'] = 4127
tel
#get value from key in dictionary
# very fast even in large dictionaries! O(1)
tel['jack']
tel['sape'] = 54545
# remove a key-value pair
del tel['sape']
tel['sape']  # raises KeyError since 'sape' no longer exists
'valdis' in tel.keys()
'karlis' in tel.keys()
# this will be slower going through all the key:value pairs
4127 in tel.values()
type(tel.values())
dir(tel.values())
tel['irv'] = 4127
tel
list(tel.keys())
list(tel.values())
sorted([5,7,1,66], reverse=True)
?sorted
tel.keys()
sorted(tel.keys())
'guido' in tel
'Valdis' in tel
'valdis' in tel
# alternative way of creating a dictionary using tuples ()
t2=dict([('sape', 4139), ('guido', 4127), ('jack', 4098)])
print(t2)
names = ['Valdis', 'valdis', 'Antons', 'Anna', 'Kārlis', 'karlis']
names
sorted(names)
```
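Related to the tuple-based constructor above, `dict` also pairs naturally with `zip`; a small sketch (the names and numbers here are illustrative):

```
names = ['jack', 'sape', 'guido']
numbers = [4098, 4139, 4127]
tel2 = dict(zip(names, numbers))  # equivalent to dict of (key, value) tuples
print(tel2)  # {'jack': 4098, 'sape': 4139, 'guido': 4127}
```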
* `globals()` always returns the dictionary of the module namespace
* `locals()` always returns a dictionary of the current namespace
* `vars()` returns either a dictionary of the current namespace (if called with no argument) or the dictionary of the argument.
```
globals()
'print(a,b)' in globals()['In']
vars().keys()
sorted(vars().keys())
# return value of the key AND destroy the key:value
# if key does not exist, then KeyError will appear
tel.pop('valdis')
# calling pop again on the same key raises KeyError,
# since 'valdis' was already removed above
tel.pop('valdis')
# we can store anything in dictionaries
# including other dictionaries and lists
mydict = {'mylist':[1,2,6,6,"Badac"], 55:165, 'innerd':{'a':100,'b':[1,2,6]}}
mydict
mydict.keys()
# we can use numeric keys as well!
mydict[55]
mydict['55'] = 330
mydict
mlist = mydict['mylist']
mlist
mytext = mlist[-1]
mytext
mychar = mytext[-3]
mychar
# get letter d
mydict['mylist'][-1][-3]
mydict['mylist'][-1][2]
mlist[-1][2]
mydict['real55'] = mydict[55]
del mydict[55]
mydict
sorted(mydict.keys())
mydict.get('55')
# we get None on nonexisting key instead of KeyError
mydict.get('53253242452')
# here we will get KeyError on nonexisting key
mydict['53253242452']
mydict.get("badkey") == None
k,v = mydict.popitem()
k,v
# update for updating multiple dictionary values at once
mydict.update({'a':[1,3,'valdis',5],'anotherkey':567})
mydict
mydict.setdefault('b', 3333)
mydict
# setdefault sets a value ONLY if the key does not already exist
mydict.setdefault('a', 'aaaaaaaa')
mydict
# here we overwrite no matter what
mydict['a'] = 'changed a value'
mydict
# and we clear our dictionary
mydict.clear()
mydict
type(mydict)
mydict = 5
type(mydict)
```
# Chapter 1 - Softmax from First Principles
## Language barriers between humans and autonomous systems
If our goal is to help humans and autonomous systems communicate, we need to speak in a common language. Just as humans have verbal and written languages to communicate ideas, so we have developed mathematical languages to communicate information. Probability is one of those languages and, thankfully for us, autonomous systems are pretty good at describing probabilities, even if humans aren't. This document shows one technique for translating a human language (English) into a language known by autonomous systems (probability).
Our translator is something called the **SoftMax classifier**, which is one type of probability distribution that takes discrete labels and translates them to probabilities. We'll show you the details on how to create a softmax model, but let's get to the punchline first: we can decompose elements of human language to represent a partitioning of arbitrary state spaces.
Say, for instance, we'd like to specify the location of an object in two-dimensional Cartesian coordinates. Our state space is all combinations of *x* and *y*, and we'd like to translate human language into some probability that our target is at a given combination of *x* and *y*. One common tactic humans use to communicate position is range (near, far, next to, etc.) and bearing (North, South, Southeast, etc.). This already completely partitions our *xy* space: if something is north, it's not south; if it's east, it's not west; and so on.
A softmax model that translates range and bearing into probability in a state space is shown below:
<img src="https://raw.githubusercontent.com/COHRINT/cops_and_robots/master/notebooks/softmax/img/softmax_range_bearing.png" alt="Softmax range and bearing" width=500px>
Assuming that *next to* doesn't require a range, we see seventeen different word combinations we can use to describe something's position: two ranges (*nearby* and *far*) for each cardinal and intercardinal direction (eight total), and then one extra label for *next to*. This completely partitions our entire state space $\mathbb{R}^2$.
This range and bearing language is, by its nature, inexact. If I say, "That boat is far north.", you don't have a deterministic notion of exactly where the boat is -- but you have a good sense of where it is, and where it is not. We can represent that sense probabilistically, such that the probability of a target existing at a location described by a range and bearing label is nonzero over the entire state space, but that probability is very small if not in the area most associated with that label.
What do we get from this probabilistic interpretation of the state space? We get a two-way translation between humans and autonomous systems to describe anything we'd like. If our state space is one-dimensional relative velocity (i.e. the derivative of range without bearing), I can say, "She's moving really fast!", to give the autonomous system a probability distribution over my target's velocity with an expected value of, say, 4 m/s. Alternatively, if my autonomous system knows my target's moving at 0.04352 m/s, it can tell me, "Your target is moving slowly." Our labeled partitioning of the state space (that is, our classifier) is the mechanism that translates for us.
## Softmax model construction
The [SoftMax function](http://en.wikipedia.org/wiki/Softmax_function) goes by many names: normalized exponential, multinomial logistic function, log-linear model, sigmoidal function. We use the SoftMax function to develop a classification model for our state space:
$$
\begin{equation}
P(L=i \vert \mathbf{x}) = \frac{e^{\mathbf{w}_i^T \mathbf{x} + b_i}}{\sum_{k=1}^M e^{\mathbf{w}_k^T\mathbf{x} + b_k}}
\end{equation}
$$
Where $L = i$ is our random variable of class labels instantiated as class $i$, $\mathbf{x}$ is our state vector, $\mathbf{w}_i$ is a vector of parameters (or weights) associated with our class $i$, $b_i$ is a bias term for class $i$, and $M$ is the total number of classes.
The terms *label* and *class* require some distinction: a label is a set of words associated with a class (i.e. *far northwest*) whereas a class is a probability distribution over the entire state space. They are sometimes used interchangeably, and the specific meaning should be clear from context.
Several key factors come out of the SoftMax equation:
- The probabilities of all classes for any given point $\mathbf{x}$ sum to 1.
- The probability any single class for any given point $\mathbf{x}$ is bounded by 0 and 1.
- The space can be partitioned into an arbitrary number of classes (with some restrictions about those classes - more on this later).
- The probability of one class for a given point $\mathbf{x}$ is determined by that class' weighted exponential sum of the state vector *relative* to the weighted exponential sums of *all* classes.
- Since the probability of a class is conditioned on $\mathbf{x}$, we can apply estimators such as [Maximum Likelihood](http://en.wikipedia.org/wiki/Maximum_likelihood) to learn SoftMax models.
- $P(L=i \vert \mathbf{x})$ is convex in $\mathbf{w_i}$ for any $\mathbf{x}$.
Let's try to get some intuition about this setup. For a two-dimensional case with state $\mathbf{x} = \begin{bmatrix}x & y\end{bmatrix}^T$, each class $i$ has weights $\mathbf{w}_i = \begin{bmatrix}w_{i,x} & w_{i,y}\end{bmatrix}^T$. Along with the constant bias term $b_i$, we have one weighted linear function of $x$ and one weighted linear function of $y$. Each class's probability is normalized with respect to the sum of all other classes, so the weights can be seen as a relative scaling of one class over another in any given state. The bias weight increases a class's probability in all cases, the $x$ weight increases the class's probability for greater values of $x$ (and positive weights), and the $y$ weight, naturally, increases the class's probability for greater values of $y$ (and positive weights).
We can get fancy with our state space, having states of the form $\mathbf{x} = \begin{bmatrix}x & y & x^2 & y^2 & 2xy\end{bmatrix}^T$, but we'll build up to states like that. Let's look at some simpler concepts first.
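The normalized exponential above can be sketched directly in NumPy. This is a minimal stand-in for the `SoftMax` class used later in this notebook; the weight and bias values are illustrative:

```
import numpy as np

def softmax_probs(x, weights, biases):
    """P(L=i | x): normalized exponential of w_i^T x + b_i for each class i."""
    scores = weights @ x + biases      # one linear score per class
    scores = scores - scores.max()     # shift for numerical stability
    exp_scores = np.exp(scores)
    return exp_scores / exp_scores.sum()

# Illustrative: the four intercardinal classes from the Pac-Man example below
weights = np.array([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.]])  # SW, NW, SE, NE
biases = np.zeros(4)
probs = softmax_probs(np.array([2., 3.]), weights, biases)
print(probs.sum())  # ~1.0: class probabilities always sum to one
```

Subtracting the maximum score before exponentiating leaves the probabilities unchanged (it cancels in the ratio) but avoids overflow for large weights.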
## Class boundaries
For any two classes, we can take the ratio of their probabilities to determine the **odds** of one class instead of the other:
$$
L(i,j) =\frac{P(L=i \vert \mathbf{x})}{P(L=j \vert \mathbf{x})} =
\frac{\frac{e^{\mathbf{w}_i^T \mathbf{x} + b_i}}{\sum_{k=1}^M e^{\mathbf{w}_k^T\mathbf{x} + b_k}}}{\frac{e^{\mathbf{w}_j^T \mathbf{x} + b_j}}{\sum_{k=1}^M e^{\mathbf{w}_k^T\mathbf{x} + b_k}}} = \frac{e^{\mathbf{w}_i^T \mathbf{x} + b_i}}{e^{\mathbf{w}_j^T\mathbf{x} + b_j}}
$$
When $L(i,j)=1$, the two classes have equal probability. This doesn't give us a whole lot of insight until we take the **log-odds** (the logarithm of the odds):
$$
\begin{align}
L_{log}(i,j) &=
\log{\frac{P(L=i \vert \mathbf{x})}{P(L=j \vert \mathbf{x})}}
= \log{\frac{e^{\mathbf{w}_i^T \mathbf{x} + b_i}}{e^{\mathbf{w}_j^T\mathbf{x} + b_j}}}
= (\mathbf{w}_i^T\mathbf{x} + b_i)- (\mathbf{w}_j^T\mathbf{x} + b_j) \\
&= (\mathbf{w}_i - \mathbf{w}_j)^T\mathbf{x} + (b_i - b_j)
\end{align}
$$
When $L_{log}(i,j) = \log{L(i,j)} = \log{1} = 0$, we have equal probability between the two classes, and we've also stumbled upon the equation for an n-dimensional affine hyperplane dividing the two classes:
$$
\begin{align}
0 &= (\mathbf{w}_i - \mathbf{w}_j)^T\mathbf{x} + (b_i - b_j) \\
&= (w_{i,x_1} - w_{j,x_1})x_1 + (w_{i,x_2} - w_{j,x_2})x_2 + \dots + (w_{i,x_n} - w_{j,x_n})x_n + (b_i - b_j)
\end{align}
$$
This follows from the general definition of an <a href="http://en.wikipedia.org/wiki/Plane_(geometry)#Point-normal_form_and_general_form_of_the_equation_of_a_plane">Affine Hyperplane</a> (that is, an n-dimensional flat plane):
$$
a_1x_1 + a_2x_2 + \dots + a_nx_n + b = 0
$$
Where $a_1 = w_{i,x_1} - w_{j,x_1}$, $a_2 = w_{i,x_2} - w_{j,x_2}$, and so on. This gives us a general formula for the division of class boundaries -- that is, we can specify the class boundaries directly, rather than specifying the weights leading to those class boundaries.
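As a quick numerical check of the hyperplane formula, here is a sketch with illustrative weights (not taken from any particular model):

```
import numpy as np

# Two hypothetical classes i and j in a 2D state space
w_i, b_i = np.array([1., 1.]), 0.    # class i weights and bias
w_j, b_j = np.array([-1., 1.]), 0.   # class j weights and bias

# The boundary satisfies (w_i - w_j)^T x + (b_i - b_j) = 0.
# Here that reduces to 2x = 0, i.e. the y-axis.
x = np.array([0., 2.5])              # any point on the y-axis
log_odds = (w_i - w_j) @ x + (b_i - b_j)
print(log_odds)  # 0.0 -> the two classes are equally probable at this point
```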
### Example
Let's take a step back and look at an example. Suppose I'm playing Pac-Man, and I want to warn our eponymous hero of a ghost approaching him. Let's restrict my language to the four intercardinal directions: NE, SE, SW and NW. My state space is $\mathbf{x} = \begin{bmatrix}x & y\end{bmatrix}^T$ (one term for each cartesian direction in $\mathbb{R}^2$).
<img src="https://raw.githubusercontent.com/COHRINT/cops_and_robots/master/notebooks/softmax/img/pacman.png" alt="Pacman with intercardinal bearings" width="500px">
In this simple problem, we can expect our weights to be something along the lines of:
$$
\begin{align}
\mathbf{w}_{SW} &= \begin{bmatrix}-1 & -1 \end{bmatrix}^T \\
\mathbf{w}_{NW} &= \begin{bmatrix}-1 & 1 \end{bmatrix}^T \\
\mathbf{w}_{SE} &= \begin{bmatrix}1 & -1 \end{bmatrix}^T \\
\mathbf{w}_{NE} &= \begin{bmatrix}1 & 1 \end{bmatrix}^T \\
\end{align}
$$
If we run these weights in our SoftMax model, we get the following results:
```
# See source at: https://github.com/COHRINT/cops_and_robots/blob/master/src/cops_and_robots/robo_tools/fusion/softmax.py
import numpy as np
from cops_and_robots.robo_tools.fusion.softmax import SoftMax
%matplotlib inline
labels = ['SW', 'NW', 'SE', 'NE']
weights = np.array([[-1, -1],
[-1, 1],
[1, -1],
[1, 1],
])
pacman = SoftMax(weights, class_labels=labels)
pacman.plot(title='Unshifted Pac-Man Bearing Model')
```
Which is along the right path, but needs to be shifted down to Pac-Man's location. Say Pac-Man is approximately one quarter of the map south from the center point, we can bias our model accordingly (assuming a $10m \times 10m$ space):
$$
\begin{align}
b_{SW} &= -2.5\\
b_{NW} &= 2.5\\
b_{SE} &= -2.5\\
b_{NE} &= 2.5\\
\end{align}
$$
```
biases = np.array([-2.5, 2.5, -2.5, 2.5,])
pacman = SoftMax(weights, biases, class_labels=labels)
pacman.plot(title='Y-Shifted Pac-Man Bearing Model')
```
Looking good! Note that we'd get the same answer had we used the following biases:
$$
\begin{align}
b_{SW} &= -5\\
b_{NW} &= 0\\
b_{SE} &= -5\\
b_{NE} &= 0\\
\end{align}
$$
Because the class boundaries and probability distributions are defined by the *relative differences*.
But this simply shifts the weights in the $y$ direction. How do we go about shifting weights in any state dimension?
Remember that our biases will essentially scale an entire class, so, what we did was scale up the two classes that have a positive scaling for negative $y$ values. If we want to place the center of the four classes in the top-left, for instance, we'll want to bias the NW class less than the other classes.
Let's think of what happens if we use another coordinate system:
$$
\mathbf{x}' = \mathbf{x} + \mathbf{b}
$$
Where $\mathbf{x}'$ is our new state vector and $\mathbf{b}$ are offsets to each state in our original coordinate frame (assume the new coordinate system is unbiased). For example, something like:
$$
\mathbf{x}' = \begin{bmatrix}x & y\end{bmatrix}^T + \begin{bmatrix}2 & -3\end{bmatrix}^T = \begin{bmatrix}x + 2 & y -3\end{bmatrix}^T
$$
Can we represent this shift simply by adjusting our biases, instead of having to redefine our state vector? Assuming we're just shifting the distributions, the probabilities, and thus, the hyperplanes, will simply be shifted as well, so we have:
$$
0 = (\mathbf{w}_i - \mathbf{w}_j)^T \mathbf{x}' = (\mathbf{w}_i - \mathbf{w}_j)^T \mathbf{x} + (\mathbf{w}_i - \mathbf{w}_j)^T \mathbf{b}
$$
Which retains our original state and shifts only our biases. If we distribute the offset $\mathbf{b}$, we can define each class's bias term:
$$
\begin{align}
b_i - b_j &= (\mathbf{w}_i - \mathbf{w}_j)^T \mathbf{b} \\
&= \mathbf{w}_i^T \mathbf{b} - \mathbf{w}_j^T \mathbf{b}
\end{align}
$$
Our bias for each class $i$ in our original coordinate frame is simply $\mathbf{w}_i^T \mathbf{b}$.
Let's try this out with $\mathbf{b} = \begin{bmatrix}2 & -3\end{bmatrix}^T$ (remembering that this will push the shifted origin negatively along the x-axis and positively along the y-axis):
$$
\begin{align}
b_{SW} &= \begin{bmatrix}-1 & -1 \end{bmatrix} \begin{bmatrix}2 \\ -3\end{bmatrix} = 1\\
b_{NW} &= \begin{bmatrix}-1 & 1 \end{bmatrix} \begin{bmatrix}2 \\ -3\end{bmatrix} =-5 \\
b_{SE} &= \begin{bmatrix}1 & -1 \end{bmatrix} \begin{bmatrix}2 \\ -3\end{bmatrix} = 5\\
b_{NE} &= \begin{bmatrix}1 & 1 \end{bmatrix} \begin{bmatrix}2 \\ -3\end{bmatrix} = -1 \\
\end{align}
$$
```
biases = np.array([1, -5, 5, -1,])
pacman = SoftMax(weights, biases, class_labels=labels)
pacman.plot(title='Shifted Pac-Man Bearing Model')
```
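Rather than computing each bias by hand, the whole vector follows from a single matrix product, since $b_i = \mathbf{w}_i^T \mathbf{b}$. A sketch using the weights and offset from the text:

```
import numpy as np

weights = np.array([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.]])  # SW, NW, SE, NE
shift = np.array([2., -3.])   # the offset b from the text
biases = weights @ shift      # b_i = w_i^T b for every class at once
print(biases)                 # [ 1. -5.  5. -1.]
```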
One other thing we can illustrate with this example: how would the SoftMax model change if we multiplied all our weights and biases by 10?
We get:
```
weights = np.array([[-10, -10],
[-10, 10],
[10, -10],
[10, 10],
])
biases = np.array([10, -50, 50, -10,])
pacman = SoftMax(weights, biases, class_labels=labels)
pacman.plot(title='Steep Pac-Man Bearing Model')
```
Why does this increase in slope happen? Let's investigate.
## SoftMax slope for linear states
The [gradient](http://en.wikipedia.org/wiki/Gradient) of $P(L=i \vert \mathbf{x})$ will give us a function for the slope of our SoftMax model of class $i$. For a linear state space, such as our go-to $\mathbf{x} = \begin{bmatrix}x & y\end{bmatrix}$, our gradient is defined as:
$$
\nabla P(L=i \vert \mathbf{x}) = \nabla \frac{e^{\mathbf{w}_i^T \mathbf{x} + b_i}}{\sum_{k=1}^M e^{\mathbf{w}_k^T\mathbf{x} + b_k}} =
\frac{\partial}{\partial x} \frac{e^{\mathbf{w}_i^T \mathbf{x} + b_i}}{\sum_{k=1}^M e^{\mathbf{w}_k^T\mathbf{x} + b_k}} \mathbf{\hat{i}} +
\frac{\partial}{\partial y} \frac{e^{\mathbf{w}_i^T \mathbf{x} + b_i}}{\sum_{k=1}^M e^{\mathbf{w}_k^T\mathbf{x} + b_k}} \mathbf{\hat{j}}
$$
Where $\mathbf{\hat{i}}$ and $\mathbf{\hat{j}}$ are unit vectors in the $x$ and $y$ dimensions, respectively. Given the structure of our equation, the form of either partial derivative will be the same as the other, so let's look at the partial with respect to $x$, using some abused notation:
$$
\begin{align}
\frac{\partial P(L = i \vert \mathbf{x})} {\partial x} &= \frac{d P(L = i \vert x)} {dx} =
\frac{\partial}{\partial x} \frac{e^{w_{i,x}x}}{\sum_{k=1}^M e^{w_{k,x}x}} \\
&= \frac{w_{i,x}e^{w_{i,x}x}\sum_{k=1}^M e^{w_{k,x}x} - e^{w_{i,x}x}(\sum_{k=1}^M w_{k,x}e^{w_{k,x}x})}{(\sum_{k=1}^M e^{w_{k,x}x})^2} \\
&= \frac{w_{i,x}e^{w_{i,x}x}\sum_{k=1}^M e^{w_{k,x}x}}{(\sum_{k=1}^M e^{w_{k,x}x})^2} -
\frac{e^{w_{i,x}x}(\sum_{k=1}^M w_{k,x}e^{w_{k,x}x})}{(\sum_{k=1}^M e^{w_{k,x}x})^2}\\
&= w_{i,x} \left( \frac{e^{w_{i,x}x}}{\sum_{k=1}^M e^{w_{k,x}x}}\right) -
\left( \frac{e^{w_{i,x}x}}{\sum_{k=1}^M e^{w_{k,x}x}}\right)\frac{\sum_{k=1}^M w_{k,x}e^{w_{k,x}x}}{\sum_{k=1}^M e^{w_{k,x}x}}\\
& = P(L = i \vert x) \left(w_{i,x} - \frac{\sum_{k=1}^M w_{k,x}e^{w_{k,x}x}}{\sum_{k=1}^M e^{w_{k,x}x}}\right) \\
& = P(L = i \vert x) \left(w_{i,x} - \sum_{k=1}^M w_{k,x}P(L = k \vert x) \right) \\
\end{align}
$$
Where line 2 was found using the quotient rule. This is still hard to interpret, so let's break it down into multiple cases:
If $P(L = i \vert x) \approx 1$, the remaining probabilities are near zero, thus reducing the impact of their weights, leaving:
$$
\frac{\partial P(L = i \vert \mathbf{x})} {\partial x}
\approx P(L = i \vert x) \left(w_{i,x} - w_{i,x}P(L = i \vert x) \right)
= 0
$$
This makes sense: a dominating probability will be flat.
If $P(L = i \vert x) \approx 0$, we get:
$$
\frac{\partial P(L = i \vert \mathbf{x})} {\partial x}
\approx 0 \left(w_{i,x} - w_{i,x}P(L = i \vert x) \right)
= 0
$$
This also makes sense: a diminished probability will be flat.
We can expect the greatest slope of a [logistic function](http://en.wikipedia.org/wiki/Logistic_function) (which is simply a univariate SoftMax function) to appear at its midpoint $P(L = i \vert x) = 0.5$. Our maximum slope, then, is:
$$
\frac{\partial P(L = i \vert \mathbf{x})} {\partial x}
= 0.5 \left(w_{i,x} - \sum_{k=1}^M w_{k,x}P(L = k \vert x) \right) \\
= 0.5 \left(w_{i,x} - \sum^M _{\substack{k = 1, \\ k \neq i}} w_{k,x}P(L = k \vert x) - 0.5w_{i,x}\right) \\
= 0.25w_{i,x} - 0.5\sum^M _{\substack{k = 1, \\ k \neq i}} w_{k,x}P(L = k \vert x) \\
$$
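The derivative formula derived above, $\frac{\partial P}{\partial x} = P(L=i \vert x)(w_{i,x} - \sum_k w_{k,x} P(L=k \vert x))$, can be sanity-checked numerically. A univariate sketch with illustrative weights (not tied to the Pac-Man model):

```
import numpy as np

def probs(x, w):
    """Univariate SoftMax: P(L=k | x) for scalar x and per-class weights w."""
    e = np.exp(w * x)
    return e / e.sum()

w = np.array([-1., 0.5, 2.])                  # illustrative class weights
x = 0.3
p = probs(x, w)
analytic = p * (w - (w * p).sum())            # dP_i/dx from the derivation
h = 1e-6                                      # central finite difference
numeric = (probs(x + h, w) - probs(x - h, w)) / (2 * h)
print(np.allclose(analytic, numeric))         # True
```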
NOTE: This section feels really rough, and possibly unnecessary. I need to work on it some more.
## Rotations
Just as we were able to shift our SoftMax distributions to a new coordinate origin, we can apply a [rotation](http://en.wikipedia.org/wiki/Rotation_matrix) to our weights and biases. Let's once again update our weights and biases through a new, rotated, coordinate scheme:
$$
R(\theta)\mathbf{x}' = R(\theta)(\mathbf{x} + \mathbf{b})
$$
As before, we examine the case at the linear hyperplane boundaries:
$$
0 = (\mathbf{w}_i - \mathbf{w}_j)^T \mathbf{x}' = (\mathbf{w}_i - \mathbf{w}_j)^T R(\theta)\mathbf{x} + (\mathbf{w}_i - \mathbf{w}_j)^T R(\theta) \mathbf{b}
$$
Our weights are already defined, so we simply need to multiply them by $R(\theta)$ to find our rotated weights. Let's find our biases:
$$
\begin{align}
b_i - b_j &= (\mathbf{w}_i - \mathbf{w}_j)^T R(\theta) \mathbf{b} \\
&= \mathbf{w}_i^T R(\theta) \mathbf{b} - \mathbf{w}_j^T R(\theta) \mathbf{b}
\end{align}
$$
So, under rotation, $b_i = \mathbf{w}_i^T R(\theta) \mathbf{b}$.
Let's try this with a two-dimensional rotation matrix using $\theta = \frac{\pi}{4} rad$ and $\mathbf{b} = \begin{bmatrix}2 & -3\end{bmatrix}^T$:
$$
\begin{align}
b_{SW} &= \begin{bmatrix}-1 & -1 \end{bmatrix}
\begin{bmatrix}\frac{\sqrt{2}}{2} & -\frac{\sqrt{2}}{2}\\ \frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2}\end{bmatrix}
\begin{bmatrix}2 \\ -3\end{bmatrix} = -2\sqrt{2} \\
b_{NW} &= \begin{bmatrix}-1 & 1 \end{bmatrix}
\begin{bmatrix}\frac{\sqrt{2}}{2} & -\frac{\sqrt{2}}{2}\\ \frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2}\end{bmatrix}
\begin{bmatrix}2 \\ -3\end{bmatrix} = -3\sqrt{2} \\
b_{SE} &= \begin{bmatrix}1 & -1 \end{bmatrix}
\begin{bmatrix}\frac{\sqrt{2}}{2} & -\frac{\sqrt{2}}{2}\\ \frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2}\end{bmatrix}
\begin{bmatrix}2 \\ -3\end{bmatrix} = 3\sqrt{2} \\
b_{NE} &= \begin{bmatrix}1 & 1 \end{bmatrix}\begin{bmatrix}\frac{\sqrt{2}}{2} & -\frac{\sqrt{2}}{2}\\ \frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2}\end{bmatrix}
\begin{bmatrix}2 \\ -3\end{bmatrix} = 2\sqrt{2} \\
\end{align}
$$
```
# Define rotation matrix
theta = np.pi/4
R = np.array([[np.cos(theta), -np.sin(theta)],
[np.sin(theta), np.cos(theta)]])
# Rotate weights
weights = np.array([[-1, -1],
[-1, 1],
[1, -1],
[1, 1],
])
weights = np.dot(weights,R)
# Apply rotated biases
biases = np.array([-2 * np.sqrt(2),
-3 * np.sqrt(2),
3 * np.sqrt(2),
2 * np.sqrt(2),])
pacman = SoftMax(weights, biases, class_labels=labels)
pacman.plot(title='Rotated and Shifted Pac-Man Bearing Model')
```
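As with the pure shift, the rotated biases reduce to a single product, $b_i = \mathbf{w}_i^T R(\theta)\mathbf{b}$. A sketch mirroring the hand computation above:

```
import numpy as np

theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
weights = np.array([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.]])  # SW, NW, SE, NE
shift = np.array([2., -3.])
biases = weights @ R @ shift          # b_i = w_i^T R(theta) b
print(biases / np.sqrt(2))            # approximately [-2, -3, 3, 2]
```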
## Summary
That should be a basic introduction to the SoftMax model. We've only barely scraped the surface of why you might want to use SoftMax models as a tool for aspects of HRI.
Let's move on to [Chapter 2](02_from_normals.ipynb) where we examine a more practical way of constructing SoftMax distributions.
```
from IPython.core.display import HTML
# Borrowed style from Probabilistic Programming and Bayesian Methods for Hackers
def css_styling():
styles = open("../styles/custom.css", "r").read()
return HTML(styles)
css_styling()
```
---
_You are currently looking at **version 1.1** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-text-mining/resources/d9pwm) course resource._
---
# Assignment 1
In this assignment, you'll be working with messy medical data and using regex to extract relevant information from the data.
Each line of the `dates.txt` file corresponds to a medical note. Each note has a date that needs to be extracted, but each date is encoded in one of many formats.
The goal of this assignment is to correctly identify all of the different date variants encoded in this dataset and to properly normalize and sort the dates.
Here is a list of some of the variants you might encounter in this dataset:
* 04/20/2009; 04/20/09; 4/20/09; 4/3/09
* Mar-20-2009; Mar 20, 2009; March 20, 2009; Mar. 20, 2009; Mar 20 2009;
* 20 Mar 2009; 20 March 2009; 20 Mar. 2009; 20 March, 2009
* Mar 20th, 2009; Mar 21st, 2009; Mar 22nd, 2009
* Feb 2009; Sep 2009; Oct 2010
* 6/2008; 12/2009
* 2009; 2010
Once you have extracted these date patterns from the text, the next step is to sort them in ascending chronological order according to the following rules:
* Assume all dates in xx/xx/xx format are mm/dd/yy
* Assume all dates where year is encoded in only two digits are years from the 1900's (e.g. 1/5/89 is January 5th, 1989)
* If the day is missing (e.g. 9/2009), assume it is the first day of the month (e.g. September 1, 2009).
* If the month is missing (e.g. 2010), assume it is the first of January of that year (e.g. January 1, 2010).
* Watch out for potential typos as this is a raw, real-life derived dataset.
With these rules in mind, find the correct date in each note and return a pandas Series in chronological order of the original Series' indices.
For example if the original series was this:
0 1999
1 2010
2 1978
3 2015
4 1985
Your function should return this:
0 2
1 4
2 0
3 1
4 3
Your score will be calculated using [Kendall's tau](https://en.wikipedia.org/wiki/Kendall_rank_correlation_coefficient), a correlation measure for ordinal data.
*This function should return a Series of length 500 and dtype int.*
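The final chronological-ordering step, in isolation, can be sketched on the toy Series from the prompt (the real notes additionally need the regex extraction developed in this notebook):

```
import pandas as pd

# Toy example matching the prompt: year-only dates
s = pd.to_datetime(pd.Series(['1999', '2010', '1978', '2015', '1985']))
order = pd.Series(s.sort_values().index)  # original indices in chronological order
print(order.tolist())  # [2, 4, 0, 1, 3]
```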
```
# Load the data
# Reference: https://necromuralist.github.io/data_science/posts/extracting-dates-from-medical-data/
import re
import pandas
doc = []
with open('dates.txt') as file:
for line in file:
doc.append(line)
data = pandas.Series(doc)
data.head(10)
data.describe()
# 4 The Grammar
# 4.1 Cardinality
ZERO_OR_MORE = '*'
ONE_OR_MORE = "+"
ZERO_OR_ONE = '?'
EXACTLY_TWO = "{2}"
ONE_OR_TWO = "{1,2}"
EXACTLY_ONE = '{1}'
# 4.2 Groups and Classes
GROUP = r"({})"
NAMED = r"(?P<{}>{})"
CLASS = "[{}]"
NEGATIVE_LOOKAHEAD = "(?!{})"
NEGATIVE_LOOKBEHIND = "(?<!{})"
POSITIVE_LOOKAHEAD = "(?={})"
POSITIVE_LOOKBEHIND = "(?<={})"
ESCAPE = r"\{}"
# 4.3 Numbers
DIGIT = r"\d"
ONE_DIGIT = DIGIT + EXACTLY_ONE
ONE_OR_TWO_DIGITS = DIGIT + ONE_OR_TWO
NON_DIGIT = NEGATIVE_LOOKAHEAD.format(DIGIT)
TWO_DIGITS = DIGIT + EXACTLY_TWO
THREE_DIGITS = DIGIT + "{3}"
EXACTLY_TWO_DIGITS = DIGIT + EXACTLY_TWO + NON_DIGIT
FOUR_DIGITS = DIGIT + r"{4}" + NON_DIGIT
# 4.4 String Literals
SLASH = r"/"
OR = r'|'
LOWER_CASE = "a-z"
SPACE = r"\s"
DOT = "."
DASH = "-"
COMMA = ","
PUNCTUATION = CLASS.format(DOT + COMMA + DASH)
EMPTY_STRING = ""
# 4.5 Dates
# These are parts to build up the date-expressions.
MONTH_SUFFIX = (CLASS.format(LOWER_CASE) + ZERO_OR_MORE
+ CLASS.format(SPACE + DOT + COMMA + DASH) + ONE_OR_TWO)
MONTH_PREFIXES = "Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec".split()
MONTHS = [month + MONTH_SUFFIX for month in MONTH_PREFIXES]
MONTHS = GROUP.format(OR.join(MONTHS))
DAY_SUFFIX = CLASS.format(DASH + COMMA + SPACE) + ONE_OR_TWO
DAYS = ONE_OR_TWO_DIGITS + DAY_SUFFIX
YEAR = FOUR_DIGITS
# This is for dates like Mar 21st, 2009, those with suffixes on the days.
CONTRACTED = (ONE_OR_TWO_DIGITS
+ LOWER_CASE
+ EXACTLY_TWO
)
CONTRACTION = NAMED.format("contraction",
MONTHS
+ CONTRACTED
+ DAY_SUFFIX
+ YEAR)
# This is for dates that have no days in them, like May 2009.
NO_DAY_BEHIND = NEGATIVE_LOOKBEHIND.format(DIGIT + SPACE)
NO_DAY = NAMED.format("no_day", NO_DAY_BEHIND + MONTHS + YEAR)
# This is for the most common form (that I use) - May 21, 2017.
WORDS = NAMED.format("words", MONTHS + DAYS + YEAR)
BACKWARDS = NAMED.format("backwards", ONE_OR_TWO_DIGITS + SPACE + MONTHS + YEAR)
slashed = SLASH.join([ONE_OR_TWO_DIGITS,
ONE_OR_TWO_DIGITS,
EXACTLY_TWO_DIGITS])
dashed = DASH.join([ONE_OR_TWO_DIGITS,
ONE_OR_TWO_DIGITS,
EXACTLY_TWO_DIGITS])
TWENTIETH_CENTURY = NAMED.format("twentieth",
OR.join([slashed, dashed]))
NUMERIC = NAMED.format("numeric",
SLASH.join([ONE_OR_TWO_DIGITS,
ONE_OR_TWO_DIGITS,
FOUR_DIGITS]))
NO_PRECEDING_SLASH = NEGATIVE_LOOKBEHIND.format(SLASH)
NO_PRECEDING_SLASH_DIGIT = NEGATIVE_LOOKBEHIND.format(CLASS.format(SLASH + DIGIT))
NO_ONE_DAY = (NO_PRECEDING_SLASH_DIGIT
+ ONE_DIGIT
+ SLASH
+ FOUR_DIGITS)
NO_TWO_DAYS = (NO_PRECEDING_SLASH
+ TWO_DIGITS
+ SLASH
+ FOUR_DIGITS)
NO_DAY_NUMERIC = NAMED.format("no_day_numeric",
NO_ONE_DAY
+ OR
+ NO_TWO_DAYS
)
CENTURY = GROUP.format('19' + OR + "20") + TWO_DIGITS
DIGIT_SLASH = DIGIT + SLASH
DIGIT_DASH = DIGIT + DASH
DIGIT_SPACE = DIGIT + SPACE
LETTER_SPACE = CLASS.format(LOWER_CASE) + SPACE
COMMA_SPACE = COMMA + SPACE
YEAR_PREFIX = NEGATIVE_LOOKBEHIND.format(OR.join([
DIGIT_SLASH,
DIGIT_DASH,
DIGIT_SPACE,
LETTER_SPACE,
COMMA_SPACE,
]))
YEAR_ONLY = NAMED.format("year_only",
YEAR_PREFIX + CENTURY
)
IN_PREFIX = POSITIVE_LOOKBEHIND.format(CLASS.format('iI') + 'n' + SPACE) + CENTURY
SINCE_PREFIX = POSITIVE_LOOKBEHIND.format(CLASS.format("Ss") + 'ince' + SPACE) + CENTURY
AGE = POSITIVE_LOOKBEHIND.format("Age" + SPACE + TWO_DIGITS + COMMA + SPACE) + CENTURY
AGE_COMMA = POSITIVE_LOOKBEHIND.format("Age" + COMMA + SPACE + TWO_DIGITS + COMMA + SPACE) + CENTURY
OTHERS = ['delivery', "quit", "attempt", "nephrectomy", THREE_DIGITS]
OTHERS = [POSITIVE_LOOKBEHIND.format(label + SPACE) + CENTURY for label in OTHERS]
OTHERS = OR.join(OTHERS)
LEFTOVERS_PREFIX = OR.join([IN_PREFIX, SINCE_PREFIX, AGE, AGE_COMMA]) + OR + OTHERS
LEFTOVERS = NAMED.format("leftovers", LEFTOVERS_PREFIX)
DATE = NAMED.format("date", OR.join([NUMERIC,
TWENTIETH_CENTURY,
WORDS,
BACKWARDS,
CONTRACTION,
NO_DAY,
NO_DAY_NUMERIC,
YEAR_ONLY,
LEFTOVERS]))
def twentieth_century(date):
"""adds a 19 to the year
Args:
date (re.Regex): Extracted date
"""
month, day, year = date.group(1).split(SLASH)
year = "19{}".format(year)
return SLASH.join([month, day, year])
def take_two(line):
match = re.search(TWENTIETH_CENTURY, line)
if match:
return twentieth_century(match)
return line
def extract_and_count(expression, data, name):
"""extract all matches and report the count
Args:
expression (str): regular expression to match
data (pandas.Series): data with dates to extratc
name (str): name of the group for the expression
Returns:
tuple (pandas.Series, int): extracted dates, count
"""
extracted = data.str.extractall(expression)[name]
count = len(extracted)
print("'{}' matched {} rows".format(name, count))
return extracted, count
numeric, numeric_count = extract_and_count(NUMERIC, data, 'numeric')
# 'numeric' matched 25 rows
twentieth, twentieth_count = extract_and_count(TWENTIETH_CENTURY, data, 'twentieth')
# 'twentieth' matched 100 rows
words, words_count = extract_and_count(WORDS, data, 'words')
# 'words' matched 34 rows
backwards, backwards_count = extract_and_count(BACKWARDS, data, 'backwards')
# 'backwards' matched 69 rows
contraction_data, contraction = extract_and_count(CONTRACTION, data, 'contraction')
# 'contraction' matched 0 rows
no_day, no_day_count = extract_and_count(NO_DAY, data, 'no_day')
# 'no_day' matched 115 rows
no_day_numeric, no_day_numeric_count = extract_and_count(NO_DAY_NUMERIC, data,
"no_day_numeric")
# 'no_day_numeric' matched 112 rows
year_only, year_only_count = extract_and_count(YEAR_ONLY, data, "year_only")
# 'year_only' matched 15 rows
leftovers, leftovers_count = extract_and_count(LEFTOVERS, data, "leftovers")
# 'leftovers' matched 30 rows
found = data.str.extractall(DATE)
total_found = len(found.date)
print("Total Found: {}".format(total_found))
print("Remaining: {}".format(len(data) - total_found))
print("Discrepancy: {}".format(total_found - (numeric_count
+ twentieth_count
+ words_count
+ backwards_count
+ contraction
+ no_day_count
+ no_day_numeric_count
+ year_only_count
+ leftovers_count)))
# Total Found: 500
# Remaining: 0
# Discrepancy: 0
missing = [label for label in data.index if label not in found.index.levels[0]]
try:
print(missing[0], data.loc[missing[0]])
except IndexError:
print("all rows matched")
# all rows matched
def clean(source, expression, replacement, sample=5):
"""applies the replacement to the source
as a side-effect shows sample rows before and after
Args:
source (pandas.Series): source of the strings
expression (str): regular expression to match what to replace
replacement: function or expression to replace the matching expression
sample (int): number of randomly chosen examples to show
Returns:
pandas.Series: the source with the replacement applied to it
"""
print("Random Sample Before:")
print(source.sample(sample))
cleaned = source.str.replace(expression, replacement)
print("\nRandom Sample After:")
print(cleaned.sample(sample))
print("\nCount of cleaned: {}".format(len(cleaned)))
assert len(source) == len(cleaned)
return cleaned
def clean_punctuation(source, sample=5):
"""removes punctuation
Args:
source (pandas.Series): data to clean
sample (int): size of sample to show
Returns:
pandas.Series: source with punctuation removed
"""
print("Cleaning Punctuation")
if any(source.str.contains(PUNCTUATION)):
source = clean(source, PUNCTUATION, EMPTY_STRING)
return source
LONG_TO_SHORT = dict(January="Jan",
February="Feb",
March="Mar",
April="Apr",
May="May",
June="Jun",
July="Jul",
August="Aug",
September="Sep",
October="Oct",
November="Nov",
December="Dec")
# it turns out there are spelling errors in the data so this has to be fuzzy
LONG_TO_SHORT_EXPRESSION = OR.join([GROUP.format(month)
+ CLASS.format(LOWER_CASE)
+ ZERO_OR_MORE
for month in LONG_TO_SHORT.values()])
def long_month_to_short(match):
"""convert long month to short
Args:
match (re.Match): object matching a long month
Returns:
str: shortened version of the month
"""
return match.group(match.lastindex)
def convert_long_months_to_short(source, sample=5):
"""convert long month names to short
Args:
source (pandas.Series): data with months
sample (int): size of sample to show
Returns:
pandas.Series: data with short months
"""
    return clean(source,
                 LONG_TO_SHORT_EXPRESSION,
                 long_month_to_short,
                 sample)
def add_month_date(match):
"""adds 01/01 to years
Args:
match (re.Match): object that only matched a 4-digit year
Returns:
str: 01/01/YYYY
"""
return "01/01/" + match.group()
def add_january_one(source):
"""adds /01/01/ to year-only dates
Args:
source (pandas.Series): data with the dates
Returns:
pandas.Series: years in source with /01/01/ added
"""
return clean(source, YEAR_ONLY, add_month_date)
two_digit_expression = GROUP.format(ONE_OR_TWO_DIGITS) + POSITIVE_LOOKAHEAD.format(SLASH)
def two_digits(match):
"""add a leading zero if needed
Args:
match (re.Match): match with one or two digits
Returns:
str: the matched string with leading zero if needed
"""
    # the '{:02}' zero-padding format spec is only valid for numbers,
    # so cast the matched string to an int first
return "{:02}".format(int(match.group()))
def clean_two_digits(source, sample=5):
"""makes sure source has two-digits
Args:
source (pandas.Series): data with digit followed by slash
sample (int): number of samples to show
Returns:
pandas.Series: source with digits coerced to two digits
"""
return clean(source, two_digit_expression, two_digits, sample)
def clean_two_digits_isolated(source, sample=5):
"""cleans two digits that are standalone
Args:
source (pandas.Series): source of the data
sample (int): number of samples to show
Returns:
pandas.Series: converted data
"""
return clean(source, ONE_OR_TWO_DIGITS, two_digits, sample)
digits = ("{:02}".format(month) for month in range(1, 13))
MONTH_TO_DIGITS = dict(zip(MONTH_PREFIXES, digits))
SHORT_MONTHS_EXPRESSION = OR.join((GROUP.format(month) for month in MONTH_TO_DIGITS))
def month_to_digits(match):
"""converts short month to digits
Args:
match (re.Match): object with short-month
Returns:
str: month as two-digit number (e.g. Jan -> 01)
"""
return MONTH_TO_DIGITS[match.group()]
def convert_short_month_to_digits(source, sample=5):
"""converts three-letter months to two-digits
Args:
source (pandas.Series): data with three-letter months
sample (int): number of samples to show
Returns:
        pandas.Series: source with short-months converted to digits
"""
return clean(source,
SHORT_MONTHS_EXPRESSION,
month_to_digits,
sample)
def clean_months(source, sample=5):
"""clean up months (which start as words)
Args:
source (pandas.Series): source of the months
        sample (int): number of random samples to show
    Returns:
        pandas.Series: source with months converted to two-digit numbers
    """
cleaned = clean_punctuation(source)
print("Converting long months to short")
cleaned = clean(cleaned,
LONG_TO_SHORT_EXPRESSION,
long_month_to_short, sample)
print("Converting short months to digits")
cleaned = clean(cleaned,
SHORT_MONTHS_EXPRESSION,
month_to_digits, sample)
return cleaned
def frame_to_series(frame, index_source, samples=5):
"""re-combines data-frame into a series
Args:
frame (pandas.DataFrame): frame with month, day, year columns
index_source (pandas.series): source to copy index from
        samples (int): number of random entries to print when done
Returns:
pandas.Series: series with dates as month/day/year
"""
combined = frame.month + SLASH + frame.day + SLASH + frame.year
combined.index = index_source.index
print(combined.sample(samples))
return combined
year_only_cleaned = add_january_one(year_only)
# Random Sample Before:
# match
# 472 0 2010
# 495 0 1979
# 497 0 2008
# 481 0 1974
# 486 0 1973
# Name: year_only, dtype: object
# Random Sample After:
# match
# 495 0 01/01/1979
# 470 0 01/01/1983
# 462 0 01/01/1988
# 481 0 01/01/1974
# 480 0 01/01/2013
# Name: year_only, dtype: object
# Count of cleaned: 15
leftovers_cleaned = add_january_one(leftovers)
# Random Sample Before:
# match
# 487 0 1992
# 477 0 1994
# 498 0 2005
# 488 0 1977
# 484 0 2004
# Name: leftovers, dtype: object
# Random Sample After:
# match
# 464 0 01/01/2016
# 455 0 01/01/1984
# 465 0 01/01/1976
# 475 0 01/01/2015
# 498 0 01/01/2005
# Name: leftovers, dtype: object
# Count of cleaned: 30
cleaned = pandas.concat([year_only_cleaned, leftovers_cleaned])
print(len(cleaned))
no_day_numeric_cleaned = clean_two_digits(no_day_numeric)
no_day_numeric_cleaned = clean(no_day_numeric_cleaned,
SLASH,
lambda m: "/01/")
original = len(cleaned)
cleaned = pandas.concat([cleaned, no_day_numeric_cleaned])
assert len(cleaned) == no_day_numeric_count + original
print(len(cleaned))
no_day_cleaned = clean_months(no_day)
no_day_cleaned = clean(no_day_cleaned,
SPACE + ONE_OR_MORE,
lambda match: "/01/")
original = len(cleaned)
cleaned = pandas.concat([cleaned, no_day_cleaned])
print(len(cleaned))
assert len(cleaned) == no_day_count + original
frame = pandas.DataFrame(backwards.str.split().tolist(),
columns="day month year".split())
frame.head()
frame.day = clean_two_digits(frame.day)
frame.month = clean_months(frame.month)
backwards_cleaned = frame_to_series(frame, backwards)
original = len(cleaned)
cleaned = pandas.concat([cleaned, backwards_cleaned])
assert len(cleaned) == original + backwards_count
print(len(cleaned))
frame = pandas.DataFrame(words.str.split().tolist(), columns="month day year".split())
print(frame.head())
frame.month = clean_months(frame.month)
frame.day = clean_punctuation(frame.day)
frame.head()
words_cleaned = frame_to_series(frame, words)
original = len(cleaned)
cleaned = pandas.concat([cleaned, words_cleaned])
assert len(cleaned) == original + words_count
print(len(cleaned))
print(twentieth.iloc[21])
twentieth_cleaned = twentieth.str.replace(DASH, SLASH)
print(cleaned.iloc[21])
frame = pandas.DataFrame(twentieth_cleaned.str.split(SLASH).tolist(),
columns=["month", "day", "year"])
print(frame.head())
frame.month = clean_two_digits_isolated(frame.month)
frame.day = clean_two_digits_isolated(frame.day)
frame.head()
frame.year = clean(frame.year, TWO_DIGITS, lambda match: "19" + match.group())
twentieth_cleaned = frame_to_series(frame, twentieth)
original = len(cleaned)
cleaned = pandas.concat([cleaned, twentieth_cleaned])
assert len(cleaned) == original + twentieth_count
print(numeric.head())
has_dashes = numeric.str.contains(DASH)
print(numeric[has_dashes])
frame = pandas.DataFrame(numeric.str.split(SLASH).tolist(),
columns="month day year".split())
print(frame.head())
frame.month = clean_two_digits_isolated(frame.month)
frame.day = clean_two_digits_isolated(frame.day)
numeric_cleaned = frame_to_series(frame, numeric)
original = len(cleaned)
cleaned = pandas.concat([cleaned, numeric_cleaned])
assert len(cleaned) == original + numeric_count
print(len(cleaned))
cleaned = pandas.concat([numeric_cleaned,
twentieth_cleaned,
words_cleaned,
backwards_cleaned,
no_day_cleaned,
no_day_numeric_cleaned,
year_only_cleaned,
leftovers_cleaned,
])
print(len(cleaned))
print(cleaned.head())
assert len(cleaned) == len(data)
print(cleaned.head())
datetimes = pandas.to_datetime(cleaned, format="%m/%d/%Y")
print(datetimes.head())
sorted_dates = datetimes.sort_values()
print(sorted_dates.head())
print(sorted_dates.tail())
answer = pandas.Series(sorted_dates.index.labels[0])  # note: MultiIndex.labels was renamed to .codes in newer pandas
print(answer.head())
def date_sorter():
return answer
```
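The whole pipeline above boils down to repeated regex substitutions over a `pandas.Series`, each one applied with `str.replace` and a string or callable replacement. A minimal, self-contained sketch of that pattern on made-up dates (the data and patterns here are illustrative, not the notebook's actual constants):

```
import pandas as pd

dates = pd.Series(["March 1985", "Mar. 1999", "2008"])

# strip punctuation (cf. clean_punctuation)
out = dates.str.replace(r"[.,]", "", regex=True)
# month words -> two-digit months via a callable replacement (cf. clean_months)
months = {"Mar": "03"}
out = out.str.replace(r"(Mar)[a-z]*", lambda m: months[m.group(1)], regex=True)
# no-day dates: turn the separating space into /01/ (cf. the no_day step)
out = out.str.replace(r"\s+", "/01/", regex=True)
# year-only rows get January 1st (cf. add_january_one)
out = out.str.replace(r"^(\d{4})$", lambda m: "01/01/" + m.group(1), regex=True)
print(out.tolist())  # ['03/01/1985', '03/01/1999', '01/01/2008']
```

Once every row is in `MM/DD/YYYY` form, a single `pandas.to_datetime(out, format="%m/%d/%Y")` call parses the lot, exactly as at the end of the cell above.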
# Predicting Boston Housing Prices
## Using XGBoost in SageMaker (Batch Transform)
_Deep Learning Nanodegree Program | Deployment_
---
As an introduction to using SageMaker's High Level Python API we will look at a relatively simple problem. Namely, we will use the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the median value of a home in the area of Boston, Mass.
The documentation for the high level API can be found on the [ReadTheDocs page](http://sagemaker.readthedocs.io/en/latest/).
## General Outline
Typically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step will need to be done with each project. Also, there is quite a lot of room for variation in many of the steps, as you will see throughout these lessons.
1. Download or otherwise retrieve the data.
2. Process / Prepare the data.
3. Upload the processed data to S3.
4. Train a chosen model.
5. Test the trained model (typically using a batch transform job).
6. Deploy the trained model.
7. Use the deployed model.
In this notebook we will only be covering steps 1 through 5 as we just want to get a feel for using SageMaker. In later notebooks we will talk about deploying a trained model in much more detail.
## Step 0: Setting up the notebook
We begin by setting up all of the necessary bits required to run our notebook. To start, that means loading all of the Python modules we will need.
```
%matplotlib inline
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
import sklearn.model_selection
```
In addition to the modules above, we need to import the various bits of SageMaker that we will be using.
```
import sagemaker
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
from sagemaker.predictor import csv_serializer
# This is an object that represents the SageMaker session that we are currently operating in. This
# object contains some useful information that we will need to access later such as our region.
session = sagemaker.Session()
# This is an object that represents the IAM role that we are currently assigned. When we construct
# and launch the training job later we will need to tell it what IAM role it should have. Since our
# use case is relatively simple we will simply assign the training job the role we currently have.
role = get_execution_role()
# dir(role)
```
## Step 1: Downloading the data
Fortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward.
```
boston = load_boston()
print(boston.DESCR)
```
## Step 2: Preparing and splitting the data
Given that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets.
```
# First we package up the input data and the target variable (the median value) as pandas dataframes. This
# will make saving the data to a file a little easier later on.
X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names)
Y_bos_pd = pd.DataFrame(boston.target)
# We split the dataset into 2/3 training and 1/3 testing sets.
X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33)
# Then we split the training set further into 2/3 training and 1/3 validation sets.
X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)
```
## Step 3: Uploading the data files to S3
When a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. In addition, when we perform a batch transform job, SageMaker expects the input data to be stored on S3. We can use the SageMaker API to do this and hide some of the details.
### Save the data locally
First we need to create the test, train and validation csv files which we will then upload to S3.
```
# This is our local data directory. We need to make sure that it exists.
data_dir = '../data/boston'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include header
# information or an index as this is required by the built in algorithms provided by Amazon. Also, for the train and
# validation data, it is assumed that the first entry in each row is the target variable.
X_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
```
### Upload to S3
Since we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project.
```
prefix = 'boston-xgboost-HL'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
```
## Step 4: Train the XGBoost model
Now that we have the training and validation data uploaded to S3, we can construct our XGBoost model and train it. We will be making use of the high level SageMaker API to do this which will make the resulting code a little easier to read at the cost of some flexibility.
To construct an estimator, the object which we wish to train, we need to provide the location of a container which contains the training code. Since we are using a built-in algorithm this container is provided by Amazon. However, the full name of the container is a bit lengthy and depends on the region that we are operating in. Fortunately, SageMaker provides a useful utility method called `get_image_uri` that constructs the image name for us.
To use the `get_image_uri` method we need to provide it with our current region, which can be obtained from the session object, and the name of the algorithm we wish to use. In this notebook we will be using XGBoost; however, you could try another algorithm if you wish. The list of built-in algorithms can be found in the list of [Common Parameters](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-algo-docker-registry-paths.html).
```
# As stated above, we use this utility method to construct the image name for the training container.
container = get_image_uri(session.boto_region_name, 'xgboost')
# Now that we know which container to use, we can construct the estimator object.
xgb = sagemaker.estimator.Estimator(container, # The image name of the training container
role, # The IAM role to use (our current role in this case)
train_instance_count=1, # The number of instances to use for training
train_instance_type='ml.m4.xlarge', # The type of instance to use for training
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
# Where to save the output (the model artifacts)
sagemaker_session=session) # The current SageMaker session
```
Before asking SageMaker to begin the training job, we should probably set any model-specific hyperparameters. There are quite a few that can be set when using the XGBoost algorithm; below are just a few of them. If you would like to change the hyperparameters below or modify additional ones, you can find more information on the [XGBoost hyperparameter page](https://docs.aws.amazon.com/sagemaker/latest/dg/xgboost_hyperparameters.html).
```
xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
                        objective='reg:linear',  # renamed to 'reg:squarederror' in newer XGBoost versions
early_stopping_rounds=10,
num_round=200)
```
Now that we have our estimator object completely set up, it is time to train it. To do this we make sure that SageMaker knows our input data is in csv format and then execute the `fit` method.
```
# This is a wrapper around the location of our train and validation data, to make sure that SageMaker
# knows our data is in csv format.
s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')
xgb.fit({'train': s3_input_train, 'validation': s3_input_validation})
```
## Step 5: Test the model
Now that we have fit our model to the training data, using the validation data to avoid overfitting, we can test our model. To do this we will make use of SageMaker's Batch Transform functionality. To start with, we need to build a transformer object from our fit model.
```
xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
```
Next we ask SageMaker to begin a batch transform job using our trained model and applying it to the test data we previously stored in S3. We need to make sure to provide SageMaker with the type of data that we are providing to our model, in our case `text/csv`, so that it knows how to serialize our data. In addition, we need to make sure to let SageMaker know how to split our data up into chunks if the entire data set happens to be too large to send to our model all at once.
Note that when we ask SageMaker to do this it will execute the batch transform job in the background. Since we need to wait for the results of this job before we can continue, we use the `wait()` method. An added benefit of this is that we get some output from our batch transform job which lets us know if anything went wrong.
```
xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
xgb_transformer.wait()
```
Now that the batch transform job has finished, the resulting output is stored on S3. Since we wish to analyze the output inside of our notebook we can use a bit of notebook magic to copy the output file from its S3 location and save it locally.
```
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
```
To see how well our model works we can create a simple scatter plot between the predicted and actual values. If the model was completely accurate the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay but there is room for improvement.
```
Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
plt.scatter(Y_test, Y_pred)
plt.xlabel("Median Price")
plt.ylabel("Predicted Price")
plt.title("Median Price vs Predicted Price")
```
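A single error number complements the scatter plot. Below is a quick RMSE sketch; in the notebook you would feed it the flattened `Y_test` and `Y_pred` values, while the arrays here are illustrative stand-ins, not actual results:

```
import numpy as np

# Illustrative stand-ins for the flattened Y_test / Y_pred columns.
y_true = np.array([24.0, 21.6, 34.7])
y_pred = np.array([25.1, 20.9, 33.0])

# Root-mean-squared error: quadratic penalty on large misses,
# reported in the same units as the target.
rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
print("RMSE: {:.2f}".format(rmse))
```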
## Optional: Clean up
The default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
```
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
data_dir = '../data/aclImdb/train/unsup'
!rm $data_dir/*
# !rmdir $data_dir
!ls $data_dir
```
# Imports
```
import torch
from torch.autograd import Variable
from torch.utils.data import DataLoader
import matplotlib.pyplot as plt
import numpy as np
import sys
sys.path.insert(0, "lib/")
from utils.preprocess_sample import preprocess_sample
from utils.collate_custom import collate_custom
from utils.utils import to_cuda_variable
from utils.json_dataset_evaluator import evaluate_boxes,evaluate_masks
from model.detector import detector
import utils.result_utils as result_utils
import utils.vis as vis_utils
import skimage.io as io
from utils.blob import prep_im_for_blob
import utils.dummy_datasets as dummy_datasets
from utils.selective_search import selective_search # needed for proposal extraction in Fast RCNN
from PIL import Image
torch_ver = torch.__version__[:3]
```
# Parameters
```
# Pretrained model
arch='resnet50'
# COCO minival2014 dataset path
coco_ann_file='datasets/data/coco/annotations/instances_minival2014.json'
img_dir='datasets/data/coco/val2014'
# model type
model_type='mask' # change here
if model_type=='mask':
# https://s3-us-west-2.amazonaws.com/detectron/35858828/12_2017_baselines/e2e_mask_rcnn_R-50-C4_2x.yaml.01_46_47.HBThTerB/output/train/coco_2014_train%3Acoco_2014_valminusminival/generalized_rcnn/model_final.pkl
pretrained_model_file = 'files/trained_models/mask/model_final.pkl'
use_rpn_head = True
use_mask_head = True
elif model_type=='faster':
# https://s3-us-west-2.amazonaws.com/detectron/35857281/12_2017_baselines/e2e_faster_rcnn_R-50-C4_2x.yaml.01_34_56.ScPH0Z4r/output/train/coco_2014_train%3Acoco_2014_valminusminival/generalized_rcnn/model_final.pkl
pretrained_model_file = 'files/trained_models/faster/model_final.pkl'
use_rpn_head = True
use_mask_head = False
elif model_type=='fast':
# https://s3-us-west-2.amazonaws.com/detectron/36224046/12_2017_baselines/fast_rcnn_R-50-C4_2x.yaml.08_22_57.XFxNqEnL/output/train/coco_2014_train%3Acoco_2014_valminusminival/generalized_rcnn/model_final.pkl
pretrained_model_file = 'files/trained_models/fast/model_final.pkl'
use_rpn_head = False
use_mask_head = False
```
# Load image
```
image_fn = 'demo/33823288584_1d21cf0a26_k.jpg'
# Load image
image = io.imread(image_fn)
if len(image.shape) == 2: # convert grayscale to RGB
image = np.repeat(np.expand_dims(image,2), 3, axis=2)
orig_im_size = image.shape
# Preprocess image
im_list, im_scales = prep_im_for_blob(image)
# Build sample
sample = {}
sample['image'] = torch.FloatTensor(im_list[0]).permute(2,0,1).unsqueeze(0)
sample['scaling_factors'] = torch.FloatTensor([im_scales[0]])
sample['original_im_size'] = torch.FloatTensor(orig_im_size)
# Extract proposals
if model_type=='fast':
# extract proposals using selective search (xmin,ymin,xmax,ymax format)
rects = selective_search(pil_image=Image.fromarray(image),quality='f')
sample['proposal_coords']=torch.FloatTensor(preprocess_sample().remove_dup_prop(rects)[0])*im_scales[0]
else:
sample['proposal_coords']=torch.FloatTensor([-1]) # dummy value
# Convert to cuda variable
sample = to_cuda_variable(sample)
```
# Create detector model
```
model = detector(arch=arch,
detector_pkl_file=pretrained_model_file,
use_rpn_head = use_rpn_head,
use_mask_head = use_mask_head)
model = model.cuda()
```
# Evaluate
```
def eval_model(sample):
class_scores,bbox_deltas,rois,img_features=model(sample['image'],
sample['proposal_coords'],
scaling_factor=sample['scaling_factors'].cpu().data.numpy().item())
return class_scores,bbox_deltas,rois,img_features
if torch_ver=="0.4":
with torch.no_grad():
class_scores,bbox_deltas,rois,img_features=eval_model(sample)
else:
class_scores,bbox_deltas,rois,img_features=eval_model(sample)
# postprocess output:
# - convert coordinates back to original image size,
# - threshold proposals based on score,
# - do NMS.
scores_final, boxes_final, boxes_per_class = result_utils.postprocess_output(rois,
sample['scaling_factors'],
sample['original_im_size'],
class_scores,
bbox_deltas)
if model_type=='mask':
# compute masks
boxes_final_th = Variable(torch.cuda.FloatTensor(boxes_final))*sample['scaling_factors']
masks=model.mask_head(img_features,boxes_final_th)
# postprocess mask output:
h_orig = int(sample['original_im_size'].squeeze()[0].data.cpu().numpy().item())
w_orig = int(sample['original_im_size'].squeeze()[1].data.cpu().numpy().item())
cls_segms = result_utils.segm_results(boxes_per_class, masks.cpu().data.numpy(), boxes_final, h_orig, w_orig)
else:
cls_segms = None
print('Done!')
```
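The "do NMS" step inside `postprocess_output` is standard greedy non-maximum suppression: keep the highest-scoring box, discard every remaining box that overlaps it too much, and repeat. A minimal NumPy sketch of the idea, independent of the `result_utils` helpers:

```
import numpy as np

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy non-maximum suppression; boxes are (xmin, ymin, xmax, ymax)."""
    order = scores.argsort()[::-1]  # indices sorted by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # intersection of the kept box with the remaining boxes
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = ((boxes[order[1:], 2] - boxes[order[1:], 0])
                  * (boxes[order[1:], 3] - boxes[order[1:], 1]))
        iou = inter / (area_i + area_r - inter)
        # drop boxes that overlap the kept box above the threshold
        order = order[1:][iou <= iou_threshold]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # [0, 2] -- box 1 is suppressed by box 0
```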
# Visualize
```
output_dir = 'demo/output/'
vis_utils.vis_one_image(
    image,  # skimage.io loads RGB, so no BGR conversion is needed here
image_fn,
output_dir,
boxes_per_class,
cls_segms,
None,
dataset=dummy_datasets.get_coco_dataset(),
box_alpha=0.3,
show_class=True,
thresh=0.7,
kp_thresh=2,
show=True
)
```
```
%matplotlib inline
from esper.stdlib import *
from esper.prelude import *
from esper.spark_util import *
from esper.validation import *
import IPython
import shutil
shows = get_shows()
print('Schema:', shows)
print('Count:', shows.count())
videos = get_videos()
print('Schema:', videos)
print('Count:', videos.count())
shots = get_shots()
print('Schema:', shots)
print('Count:', shots.count())
speakers = get_speakers()
print('Schema:', speakers)
print('Count:', speakers.count())
# speakers.where(speakers.in_commercial == True).show()
# speakers.where(speakers.in_commercial == False).show()
segments = get_segments()
print('Schema:', segments)
print('Count:', segments.count())
# segments.where(segments.in_commercial == True).show()
# segments.where(segments.in_commercial == False).show()
faces = get_faces()
print('Schema:', faces)
print('Count:', faces.count())
face_genders = get_face_genders()
print('Schema:', face_genders)
print('Count:', face_genders.count())
face_identities = get_face_identities()
print('Schema:', face_identities)
print('Count:', face_identities.count())
commercials = get_commercials()
print('Schema:', commercials)
print('Count:', commercials.count())
```
# Queries
```
def format_time(seconds, padding=4):
return '{{:0{}d}}:{{:02d}}:{{:02d}}'.format(padding).format(
int(seconds/3600), int(seconds/60 % 60), int(seconds % 60))
def format_number(n):
def fmt(n):
suffixes = {
6: 'thousand',
9: 'million',
12: 'billion',
15: 'trillion'
}
log = math.log10(n)
suffix = None
key = None
for k in sorted(suffixes.keys()):
if log < k:
suffix = suffixes[k]
key = k
break
return '{:.2f} {}'.format(n / float(10**(key-3)), suffix)
if isinstance(n, list):
return map(fmt, n)
else:
return fmt(n)
def show_df(table, ordering, clear=True):
if clear:
IPython.display.clear_output()
return pd.DataFrame(table)[ordering]
def format_hour(h):
    """format a 24-hour clock hour as a 12-hour AM/PM label"""
    if h == 0:
        return '12 AM'
    if h < 12:
        return '{} AM'.format(h)
    if h == 12:
        return '12 PM'
    return '{} PM'.format(h - 12)
def video_stats(key, labels):
if key is not None:
rows = videos.groupBy(key).agg(
videos[key],
func.count('duration'),
func.avg('duration'),
func.sum('duration'),
func.stddev_pop('duration')
).collect()
else:
rows = videos.agg(
func.count('duration'),
func.avg('duration'),
func.sum('duration'),
func.stddev_pop('duration')
).collect()
rmap = {(0 if key is None else r[key]): r for r in rows}
return [{
'label': label['name'],
'count': rmap[label['id']]['count(duration)'],
'duration': format_time(int(rmap[label['id']]['sum(duration)'])),
'avg_duration': '{} (σ = {})'.format(
format_time(int(rmap[label['id']]['avg(duration)'])),
format_time(int(rmap[label['id']]['stddev_pop(duration)']), padding=0))
    } for label in labels if key is None or label['id'] in rmap]
video_ordering = ['label', 'count', 'duration', 'avg_duration']
hours = [
r['hour'] for r in
Video.objects.annotate(
hour=Extract('time', 'hour')
).distinct('hour').order_by('hour').values('hour')
]
```
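For reference, the two formatting helpers behave as follows. These are self-contained copies of the functions above (with a fallback for very large numbers), so the cell runs on its own:

```python
import math

def format_time(seconds, padding=4):
    # H:MM:SS with a zero-padded hour field of the given width
    return '{{:0{}d}}:{{:02d}}:{{:02d}}'.format(padding).format(
        int(seconds / 3600), int(seconds / 60 % 60), int(seconds % 60))

def format_number(n):
    # Render n with a thousand/million/billion/trillion suffix
    suffixes = {6: 'thousand', 9: 'million', 12: 'billion', 15: 'trillion'}
    log = math.log10(n)
    for k in sorted(suffixes):
        if log < k:
            return '{:.2f} {}'.format(n / float(10 ** (k - 3)), suffixes[k])
    return str(n)

print(format_time(3725))       # '0001:02:05' -> 1 hour, 2 minutes, 5 seconds
print(format_number(1234567))  # '1.23 million'
```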
## All Videos
```
show_df(
video_stats(None, [{'id': 0, 'name': 'whole dataset'}]),
video_ordering)
```
## Videos by Channel
```
show_df(
video_stats('channel_id', list(Channel.objects.all().values('id', 'name'))),
video_ordering)
```
## Videos by Show
"Situation Room with Wolf Blitzer" and "Special Report with Bret Baier" were ingested as 60 ten-minute segments each, whereas the other shows were ingested as 10 segments of at least one hour each.
```
show_df(
video_stats('show_id', list(Show.objects.all().values('id', 'name'))),
video_ordering)
```
## Videos by Canonical Show
```
show_df(
video_stats('canonical_show_id', list(CanonicalShow.objects.all().values('id', 'name'))),
video_ordering)
```
## Videos by time of day
Initial selection of videos was prime-time only, i.e., between 4 PM and 11 PM.
```
show_df(
video_stats('hour', [{'id': hour, 'name': format_hour(hour)} for hour in hours]),
video_ordering)
```
# Shots
```
med_withcom = shots.approxQuantile('duration', [0.5], 0.01)[0]
print('Median shot length with commercials: {:0.2f}s'.format(med_withcom))
med_nocom = shots.where(
shots.in_commercial == False
).approxQuantile('duration', [0.5], 0.01)[0]
print('Median shot length w/o commercials: {:0.2f}s'.format(med_nocom))
med_channels = {
c.name: shots.where(
shots.channel_id == c.id
).approxQuantile('duration', [0.5], 0.01)[0]
for c in Channel.objects.all()
}
print('Median shot length by_channel:')
for c, v in med_channels.items():
print(' {}: {:0.2f}s'.format(c, v))
pickle.dump({
'withcom': med_withcom,
'nocom': med_nocom,
'channels': med_channels
}, open('/app/data/shot_medians.pkl', 'wb'))
all_shot_durations = np.array(
[r['duration'] for r in shots.select('duration').collect()]
)
hist, edges = np.histogram(all_shot_durations, bins=list(range(0, 3600)) + [10000000])
pickle.dump(hist, open('/app/data/shot_histogram.pkl', 'wb'))
```
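`approxQuantile('duration', [0.5], 0.01)` estimates the median to within 1% relative error without collecting the whole column. On a small in-memory sample the exact equivalent is just the ordinary median:

```python
import statistics

# Toy shot durations in seconds (made-up values for illustration)
durations = [1.2, 2.5, 3.1, 4.0, 4.6, 5.2, 8.9]

# The exact quantity that approxQuantile approximates at q = 0.5
median = statistics.median(durations)
print('Median shot length: {:0.2f}s'.format(median))  # Median shot length: 4.00s
```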
## Shot Validation
```
# TODO: what is this hack?
shot_precision = 0.97
shot_recall = 0.97
def shot_error_interval(n):
return [n * shot_precision, n * (2 - shot_recall)]
def shot_stats(key, labels, shots=shots):
    if key is not None:
        df = shots.groupBy(key)
        rows = df.agg(
            shots[key],
            func.count('duration'),
            func.avg('duration'),
            func.sum('duration'),
            func.stddev_pop('duration')
        ).collect()
    else:
        df = shots
        rows = df.agg(
            func.count('duration'),
            func.avg('duration'),
            func.sum('duration'),
            func.stddev_pop('duration')
        ).collect()
rmap = {(0 if key is None else r[key]): r for r in rows}
out_rows = []
for label in labels:
try:
out_rows.append({
'label': label['name'],
'count': rmap[label['id']]['count(duration)'], #format_number(shot_error_interval(rmap[label['id']]['count(duration)'])),
'duration': format_time(int(rmap[label['id']]['sum(duration)'])),
'avg_duration': '{:06.2f}s (σ = {:06.2f})'.format(
rmap[label['id']]['avg(duration)'],
rmap[label['id']]['stddev_pop(duration)'])
})
except KeyError:
pass
return out_rows
shot_ordering = ['label', 'count', 'duration', 'avg_duration']
```
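The interval above brackets a raw shot count between a precision-scaled lower bound and a recall-scaled upper bound. A self-contained sketch with the same hardcoded 0.97 values:

```python
# With precision p and recall r, a raw count n is bracketed by [n*p, n*(2-r)]:
# at most a (1-p) fraction are false positives, and up to a (1-r) fraction
# of true shots were missed.
shot_precision = 0.97
shot_recall = 0.97

def shot_error_interval(n):
    return [n * shot_precision, n * (2 - shot_recall)]

lo, hi = shot_error_interval(1000)
print('Between {:.0f} and {:.0f} true shots'.format(lo, hi))
```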
## All Shots
```
show_df(
shot_stats(None, [{'id': 0, 'name': 'whole dataset'}]),
shot_ordering)
```
## Shots by Channel
```
show_df(
shot_stats('channel_id', list(Channel.objects.all().values('id', 'name'))),
shot_ordering)
```
## Shots by Show
```
show_df(
shot_stats('show_id', list(Show.objects.all().values('id', 'name'))),
shot_ordering)
```
## Shots by Canonical Show
```
show_df(
shot_stats('canonical_show_id', list(CanonicalShow.objects.all().values('id', 'name'))),
shot_ordering)
```
## Shots by Time of Day
```
show_df(
shot_stats('hour', [{'id': hour, 'name': format_hour(hour)} for hour in hours]),
shot_ordering)
```
# Commercials
```
def commercial_stats(key, labels):
if key is not None:
rows = commercials.groupBy(key).agg(
commercials[key],
func.count('duration'),
func.avg('duration'),
func.sum('duration')
).collect()
else:
rows = commercials.agg(
func.count('duration'),
func.avg('duration'),
func.sum('duration')
).collect()
rmap = {(0 if key is None else r[key]): r for r in rows}
out_rows = []
for label in labels:
try:
out_rows.append({
'label': label['name'],
'count': format_number(rmap[label['id']]['count(duration)']),
'duration': format_time(int(rmap[label['id']]['sum(duration)'])),
'avg_duration': '{:06.2f}s'.format(rmap[label['id']]['avg(duration)'])
})
except KeyError:
pass
return out_rows
commercial_ordering = ['label', 'count', 'duration', 'avg_duration']
```
## All Commercials
```
show_df(
commercial_stats(None, [{'id': 0, 'name': 'whole dataset'}]),
commercial_ordering)
print('Average # of commercials per video: {:0.2f}'.format(
commercials.groupBy('video_id').count().agg(
func.avg(func.col('count'))
).collect()[0]['avg(count)']
))
```
## Commercials by Channel
```
show_df(
commercial_stats('channel_id', list(Channel.objects.all().values('id', 'name'))),
commercial_ordering)
```
## Commercials by Show
```
show_df(
commercial_stats('show_id', list(Show.objects.all().values('id', 'name'))),
commercial_ordering)
```
## Commercials by Canonical Show
```
show_df(
commercial_stats('canonical_show_id', list(CanonicalShow.objects.all().values('id', 'name'))),
commercial_ordering)
```
## Commercials by Time of Day
```
show_df(
commercial_stats('hour', [{'id': hour, 'name': format_hour(hour)} for hour in hours]),
commercial_ordering)
```
# Faces
## Face Validation
```
base_face_stats = face_validation('All faces', lambda x: x)
big_face_stats = face_validation(
'Faces height > 0.2', lambda qs: qs.annotate(height=F('bbox_y2') - F('bbox_y1')).filter(height__gte=0.2))
shot_precision = 0.97
shot_recall = 0.97
def face_error_interval(n, face_stats):
(face_precision, face_recall, _) = face_stats
return [n * shot_precision * face_precision, n * (2 - shot_recall) * (2 - face_recall)]
```
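Since faces are detected within shots, the face interval compounds both detectors' errors multiplicatively. A sketch with hypothetical face precision/recall values (in the notebook these come from `face_validation`):

```python
shot_precision, shot_recall = 0.97, 0.97
face_precision, face_recall = 0.95, 0.90  # assumed values for illustration only

def face_error_interval(n):
    # Lower bound scales by both precisions; upper bound by both recall slacks.
    return [n * shot_precision * face_precision,
            n * (2 - shot_recall) * (2 - face_recall)]

lo, hi = face_error_interval(10000)
print('Between {:.0f} and {:.0f} true faces'.format(lo, hi))
```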
## All Faces
```
print('Total faces: {}'.format(
format_number(face_error_interval(faces.count(), base_face_stats[2]))))
total_duration = videos.agg(func.sum('duration')).collect()[0]['sum(duration)'] - \
commercials.agg(func.sum('duration')).collect()[0]['sum(duration)']
face_duration = faces.groupBy('shot_id') \
.agg(
func.first('duration').alias('duration')
).agg(func.sum('duration')).collect()[0]['sum(duration)']
print('% of time a face is on screen: {:0.2f}'.format(100.0 * face_duration / total_duration))
```
# Genders
These queries analyze the distribution of men vs. women across a number of axes. We use faces detected by [MTCNN](https://github.com/kpzhang93/MTCNN_face_detection_alignment/) and gender detected by [rude-carnie](https://github.com/dpressel/rude-carnie). We only consider faces with a height > 20% of the frame to eliminate people in the background.
Time for a given gender is the amount of time during which at least one person of that gender was on screen. Percentages are (gender screen time) / (total time any person was on screen).
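To make that definition concrete, here is a toy computation of gender screen time over made-up per-shot data (all names and numbers below are illustrative only):

```python
# One record per shot: its duration and which genders have at least one
# on-screen face during the shot (made-up data).
toy_shots = [
    {'duration': 10, 'genders': {'M'}},
    {'duration': 20, 'genders': {'M', 'F'}},
    {'duration': 15, 'genders': {'F'}},
    {'duration': 5,  'genders': set()},  # no face on screen
]

def screen_time(g):
    # Time a gender is on screen: sum of durations of shots where it appears
    return sum(s['duration'] for s in toy_shots if g in s['genders'])

# Denominator: total time ANY person was on screen
base = sum(s['duration'] for s in toy_shots if s['genders'])

for g in ['M', 'F']:
    print('{}: {}s ({:.0f}%)'.format(g, screen_time(g), 100.0 * screen_time(g) / base))
```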
```
_, Cm = gender_validation('Gender w/ face height > 0.2', big_face_stats)
def P(y, yhat):
d = {'M': 0, 'F': 1, 'U': 2}
return float(Cm[d[y]][d[yhat]]) / sum([Cm[i][d[yhat]] for i in d.values()])
# TODO: remove a host -- use face features to identify and remove rachel maddow from computation
# TODO: more discrete time zones ("sunday mornings", "prime time", "daytime", "late evening")
# TODO: by year
# TODO: specific dates, e.g. during the RNC
MALE = Gender.objects.get(name='M')
FEMALE = Gender.objects.get(name='F')
UNKNOWN = Gender.objects.get(name='U')
gender_names = {g.id: g.name for g in Gender.objects.all()}
def gender_singlecount_stats(key, labels, min_dur=None):
if key == 'topic':
# TODO: Fix this
df1 = face_genders.join(segment_links, face_genders.segment_id == segment_links.segment_id)
df2 = df1.join(things, segment_links.thing_id == things.id)
topic_type = ThingType.objects.get(name='topic').id
df3 = df2.where(things.type_id == topic_type).select(
*(['duration', 'channel_id', 'show_id', 'hour', 'week_day', 'gender_id'] + \
[things.id.alias('topic'), 'shot_id']))
full_df = df3
else:
full_df = face_genders
keys = ['duration', 'channel_id', 'show_id', 'hour', 'week_day']
aggs = [func.count('gender_id')] + [func.first(full_df[k]).alias(k) for k in keys] + \
([full_df.topic] if key == 'topic' else [])
groups = ([key] if key is not None else []) + ['gender_id']
counts = full_df.groupBy(
# this is very brittle, need to add joined fields like 'canonical_show_id' here
*(['shot_id', 'gender_id', 'canonical_show_id'] + (['topic'] if key == 'topic' else []))
).agg(*aggs)
rows = counts.where(
counts['count(gender_id)'] > 0
).groupBy(
*groups
).agg(
func.sum('duration')
).collect()
if key is not None:
base_counts = full_df.groupBy(
['shot_id', key]
).agg(full_df[key], func.first('duration').alias('duration')) \
.groupBy(key).agg(full_df[key], func.sum('duration')).collect()
else:
base_counts = full_df.groupBy(
'shot_id'
).agg(
func.first('duration').alias('duration')
).agg(func.sum('duration')).collect()
base_map = {
(row[key] if key is not None else 0): row['sum(duration)']
for row in base_counts
}
out_rows = []
for label in labels:
label_rows = {
row.gender_id: row for row in rows if key is None or row[key] == label['id']
}
if len(label_rows) < 3:
continue
base_dur = int(base_map[label['id']])
        if min_dur is not None and base_dur < min_dur:
continue
durs = {
g.id: int(label_rows[g.id]['sum(duration)'])
for g in [MALE, FEMALE, UNKNOWN]
}
def adjust(g):
return int(
reduce(lambda a, b:
a + b, [durs[g2] * P(gender_names[g], gender_names[g2])
for g2 in durs]))
adj_durs = {
g: adjust(g)
for g in durs
}
out_rows.append({
key: label['name'],
'M': format_time(durs[MALE.id]),
'F': format_time(durs[FEMALE.id]),
'U': format_time(durs[UNKNOWN.id]),
'base': format_time(base_dur),
'M%': int(100.0 * durs[MALE.id] / base_dur),
'F%': int(100.0 * durs[FEMALE.id] / base_dur),
'U%': int(100.0 * durs[UNKNOWN.id] / base_dur),
# 'M-Adj': format_time(adj_durs[MALE.id]),
# 'F-Adj': format_time(adj_durs[FEMALE.id]),
# 'U-Adj': format_time(adj_durs[UNKNOWN.id]),
# 'M-Adj%': int(100.0 * adj_durs[MALE.id] / base_dur),
# 'F-Adj%': int(100.0 * adj_durs[FEMALE.id] / base_dur),
# 'U-Adj%': int(100.0 * adj_durs[UNKNOWN.id] / base_dur),
#'Overlap': int(100.0 * float(male_dur + female_dur) / base_dur) - 100
})
return out_rows
gender_ordering = ['M', 'M%', 'F', 'F%', 'U', 'U%']
#gender_ordering = ['M', 'M%', 'M-Adj', 'M-Adj%', 'F', 'F%', 'F-Adj', 'F-Adj%', 'U', 'U%', 'U-Adj', 'U-Adj%']
def gender_multicount_stats(key, labels, min_dur=None, no_host=False, just_host=False):
df0 = face_genders
if no_host:
df0 = df0.where(df0.is_host == False)
if just_host:
df0 = df0.where(df0.is_host == True)
if key == 'topic':
df1 = df0.join(segment_links, df0.segment_id == segment_links.segment_id)
df2 = df1.join(things, segment_links.thing_id == things.id)
topic_type = ThingType.objects.get(name='topic').id
df3 = df2.where(things.type_id == topic_type).select(
*(['duration', 'channel_id', 'show_id', 'hour', 'week_day', 'gender_id'] + \
[things.id.alias('topic'), 'shot_id']))
full_df = df3
else:
full_df = df0
groups = ([key] if key is not None else []) + ['gender_id']
rows = full_df.groupBy(*groups).agg(func.sum('duration')).collect()
out_rows = []
for label in labels:
label_rows = {row.gender_id: row for row in rows if key is None or row[key] == label['id']}
if len(label_rows) < 3: continue
male_dur = int(label_rows[MALE.id]['sum(duration)'])
female_dur = int(label_rows[FEMALE.id]['sum(duration)'])
unknown_dur = int(label_rows[UNKNOWN.id]['sum(duration)'])
base_dur = male_dur + female_dur
        if min_dur is not None and base_dur < min_dur:
continue
out_rows.append({
key: label['name'],
'M': format_time(male_dur),
'F': format_time(female_dur),
'U': format_time(unknown_dur),
'base': format_time(base_dur),
'M%': int(100.0 * male_dur / base_dur),
'F%': int(100.0 * female_dur / base_dur),
'U%': int(100.0 * unknown_dur / (base_dur + unknown_dur)),
'Overlap': 0,
})
return out_rows
def gender_speaker_stats(key, labels, min_dur=None, no_host=False):
keys = ['duration', 'channel_id', 'show_id', 'hour', 'week_day']
df0 = speakers
if no_host:
df0 = df0.where(df0.has_host == False)
if key == 'topic':
df1 = df0.join(segment_links, speakers.segment_id == segment_links.segment_id)
df2 = df1.join(things, segment_links.thing_id == things.id)
topic_type = ThingType.objects.get(name='topic').id
df3 = df2.where(things.type_id == topic_type).select(
*(keys + ['gender_id', things.id.alias('topic')]))
full_df = df3
else:
full_df = df0
aggs = [func.count('gender_id')] + [func.first(full_df[k]).alias(k) for k in keys] + \
([full_df.topic] if key == 'topic' else [])
groups = ([key] if key is not None else []) + ['gender_id'] + (['topic'] if key == 'topic' else [])
rows = full_df.groupBy(*groups).agg(func.sum('duration')).collect()
if key is not None:
base_counts = full_df.groupBy(key).agg(full_df[key], func.sum('duration')).collect()
else:
base_counts = full_df.agg(func.sum('duration')).collect()
base_map = {
(row[key] if key is not None else 0): row['sum(duration)']
for row in base_counts
}
out_rows = []
for label in labels:
label_rows = {row.gender_id: row for row in rows if key is None or row[key] == label['id']}
if len(label_rows) < 2: continue
male_dur = int(label_rows[MALE.id]['sum(duration)'])
female_dur = int(label_rows[FEMALE.id]['sum(duration)'])
base_dur = int(base_map[label['id']])
        if min_dur is not None and base_dur < min_dur:
continue
out_rows.append({
key: label['name'],
'M': format_time(male_dur),
'F': format_time(female_dur),
'base': format_time(base_dur),
'M%': int(100.0 * male_dur / base_dur),
'F%': int(100.0 * female_dur / base_dur),
})
return out_rows
gender_speaker_ordering = ['M', 'M%', 'F', 'F%']
```
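The cell above defines both single-count and multi-count statistics; the difference is easiest to see on toy data (all rows below are made up). Single-count credits a gender at most once per shot, while multi-count credits every detected face:

```python
# One row per detected face, carrying its shot's duration (made-up data).
toy_faces = [
    {'shot_id': 1, 'duration': 10, 'gender': 'M'},
    {'shot_id': 1, 'duration': 10, 'gender': 'M'},  # two men in the same shot
    {'shot_id': 2, 'duration': 20, 'gender': 'F'},
]

def multicount(g):
    # Every face contributes its shot's duration
    return sum(f['duration'] for f in toy_faces if f['gender'] == g)

def singlecount(g):
    # Each shot contributes once if >= 1 face of gender g appears in it
    shot_ids = {f['shot_id'] for f in toy_faces if f['gender'] == g}
    durs = {f['shot_id']: f['duration'] for f in toy_faces}
    return sum(durs[s] for s in shot_ids)

print(multicount('M'), singlecount('M'))  # 20 10
```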
## All Gender
```
print('Singlecount')
show_df(gender_singlecount_stats(None, [{'id': 0, 'name': 'whole dataset'}]),
gender_ordering)
print('Multicount')
gender_screen_all = gender_multicount_stats(None, [{'id': 0, 'name': 'whole dataset'}])
gender_screen_all_nh = gender_multicount_stats(None, [{'id': 0, 'name': 'whole dataset'}],
no_host=True)
show_df(gender_screen_all, gender_ordering)
show_df(gender_screen_all_nh, gender_ordering)
print('Speaking time')
gender_speaking_all = gender_speaker_stats(None, [{'id': 0, 'name': 'whole dataset'}])
gender_speaking_all_nh = gender_speaker_stats(
None, [{'id': 0, 'name': 'whole dataset'}],
no_host=True)
show_df(gender_speaking_all, gender_speaker_ordering)
show_df(gender_speaking_all_nh, gender_speaker_ordering)
```
### Persist for Report
```
pd.DataFrame(gender_screen_all).to_csv('/app/data/screen_all.csv')
pd.DataFrame(gender_screen_all_nh).to_csv('/app/data/screen_all_nh.csv')
pd.DataFrame(gender_speaking_all).to_csv('/app/data/speaking_all.csv')
```
## Gender by Channel
```
print('Singlecount')
show_df(
gender_singlecount_stats('channel_id', list(Channel.objects.values('id', 'name'))),
['channel_id'] + gender_ordering)
print('Multicount')
show_df(
gender_multicount_stats('channel_id', list(Channel.objects.values('id', 'name'))),
['channel_id'] + gender_ordering)
print('Speaking time')
show_df(
gender_speaker_stats('channel_id', list(Channel.objects.values('id', 'name'))),
['channel_id'] + gender_speaker_ordering)
```
## Gender by Show
```
print('Singlecount')
show_df(
gender_singlecount_stats('show_id', list(Show.objects.values('id', 'name')), min_dur=3600*500),
['show_id'] + gender_ordering)
print('Multicount')
gender_screen_show = gender_multicount_stats('show_id', list(Show.objects.values('id', 'name')), min_dur=3600*250)
gender_screen_show_nh = gender_multicount_stats('show_id', list(Show.objects.values('id', 'name')), min_dur=3600*250, no_host=True)
gender_screen_show_jh = gender_multicount_stats('show_id', list(Show.objects.values('id', 'name')), min_dur=3600*50, just_host=True)
show_df(gender_screen_show, ['show_id'] + gender_ordering)
gshow = face_genders.groupBy('video_id', 'gender_id').agg(func.sum('duration').alias('screen_sum'), func.first('show_id').alias('show_id'))
gspeak = speakers.groupBy('video_id', 'gender_id').agg(func.sum('duration').alias('speak_sum'))
rows = gshow.join(gspeak, ['video_id', 'gender_id']).toPandas()
# TODO: this is really sketchy and clobbers some variables such as videos
# show = Show.objects.get(name='Fox and Friends First')
# rows2 = rows[rows.show_id == show.id]
# videos = collect([r for _, r in rows2.iterrows()], lambda r: int(r.video_id))
# bs = []
# vkeys = []
# for vid, vrows in videos.iteritems():
# vgender = {int(r.gender_id): r for r in vrows}
# def balance(key):
# return vgender[1][key] / float(vgender[1][key] + vgender[2][key])
# try:
# bs.append(balance('screen_sum') / balance('speak_sum'))
# except KeyError:
# bs.append(0)
# vkeys.append(vid)
# idx = np.argsort(bs)[-20:]
# print(np.array(vkeys)[idx].tolist(), np.array(bs)[idx].tolist())
show_df(gender_screen_show_nh, ['show_id'] + gender_ordering)
print('Speaking time')
gender_speaking_show = gender_speaker_stats('show_id', list(Show.objects.values('id', 'name')), min_dur=3600*3)
gender_speaking_show_nh = gender_speaker_stats('show_id', list(Show.objects.values('id', 'name')), min_dur=3600*3, no_host=True)
show_df(
gender_speaking_show,
['show_id'] + gender_speaker_ordering)
show_df(
gender_speaking_show_nh,
['show_id'] + gender_speaker_ordering)
```
### Persist for Report
```
pd.DataFrame(gender_screen_show).to_csv('/app/data/screen_show.csv')
pd.DataFrame(gender_screen_show_nh).to_csv('/app/data/screen_show_nh.csv')
pd.DataFrame(gender_screen_show_jh).to_csv('/app/data/screen_show_jh.csv')
pd.DataFrame(gender_speaking_show).to_csv('/app/data/speaking_show.csv')
pd.DataFrame(gender_speaking_show_nh).to_csv('/app/data/speaking_show_nh.csv')
```
## Gender by Canonical Show
```
print('Singlecount')
show_df(
gender_singlecount_stats(
'canonical_show_id',
list(CanonicalShow.objects.values('id', 'name')),
min_dur=3600*500
),
['canonical_show_id'] + gender_ordering
)
print('Multicount')
gender_screen_canonical_show = gender_multicount_stats(
'canonical_show_id',
list(CanonicalShow.objects.values('id', 'name')),
min_dur=3600*250
)
gender_screen_canonical_show_nh = gender_multicount_stats(
'canonical_show_id',
list(CanonicalShow.objects.values('id', 'name')),
min_dur=3600*250,
no_host=True
)
gender_screen_canonical_show_jh = gender_multicount_stats(
'canonical_show_id',
list(CanonicalShow.objects.values('id', 'name')),
min_dur=3600*50,
just_host=True
)
show_df(gender_screen_canonical_show, ['canonical_show_id'] + gender_ordering)
print('Speaking time')
gender_speaking_canonical_show = gender_speaker_stats(
'canonical_show_id',
list(CanonicalShow.objects.values('id', 'name')),
min_dur=3600*3
)
gender_speaking_canonical_show_nh = gender_speaker_stats(
'canonical_show_id',
list(CanonicalShow.objects.values('id', 'name')),
min_dur=3600*3,
no_host=True
)
show_df(
gender_speaking_canonical_show,
['canonical_show_id'] + gender_speaker_ordering)
show_df(
gender_speaking_canonical_show_nh,
['canonical_show_id'] + gender_speaker_ordering)
```
### Persist for Report
```
pd.DataFrame(gender_screen_canonical_show).to_csv('/app/data/screen_canonical_show.csv')
pd.DataFrame(gender_screen_canonical_show_nh).to_csv('/app/data/screen_canonical_show_nh.csv')
pd.DataFrame(gender_screen_canonical_show_jh).to_csv('/app/data/screen_canonical_show_jh.csv')
pd.DataFrame(gender_speaking_canonical_show).to_csv('/app/data/speaking_canonical_show.csv')
pd.DataFrame(gender_speaking_canonical_show_nh).to_csv('/app/data/speaking_canonical_show_nh.csv')
```
## Gender by time of day
```
print('Singlecount')
show_df(
gender_singlecount_stats('hour', [{'id': hour, 'name': format_hour(hour)} for hour in hours]),
['hour'] + gender_ordering)
print('Multicount')
gender_screen_tod = gender_multicount_stats('hour', [{'id': hour, 'name': format_hour(hour)} for hour in hours])
show_df(gender_screen_tod, ['hour'] + gender_ordering)
print('Speaking time')
gender_speaking_tod = gender_speaker_stats('hour', [{'id': hour, 'name': format_hour(hour)} for hour in hours])
show_df(gender_speaking_tod, ['hour'] + gender_speaker_ordering)
```
## Gender by Day of the Week
```
dotw = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']
print('Singlecount')
show_df(
gender_singlecount_stats('week_day', [{'id': i + 1, 'name': d} for i, d in enumerate(dotw)]),
['week_day'] + gender_ordering)
print('Multicount')
show_df(
gender_multicount_stats('week_day', [{'id': i + 1, 'name': d} for i, d in enumerate(dotw)]),
['week_day'] + gender_ordering)
print('Speaking time')
show_df(
gender_speaker_stats('week_day', [{'id': i + 1, 'name': d} for i, d in enumerate(dotw)]),
['week_day'] + gender_speaker_ordering)
```
## Gender by topic
```
# TODO: FIX ME
# THOUGHTS:
# - Try topic analysis just on a "serious" news show.
# - Generate a panel from multiple clips, e.g. endless panel of people on a topic
# - Produce an endless stream of men talking about, e.g. birth control
# print('Singlecount')
# show_df(
# gender_singlecount_stats(
# 'topic', [{'id': t.id, 'name': t.name} for t in Thing.objects.filter(type__name='topic')],
# min_dur=3600*5),
# ['topic'] + gender_ordering)
# check this
# M% is the percent of time that men are on screen when this topic is being discussed
# print('Multicount')
# gender_screen_topic = gender_multicount_stats(
# 'topic', [{'id': t.id, 'name': t.name} for t in Thing.objects.filter(type__name='topic')],
# min_dur=3600*300)
# gender_screen_topic_nh = gender_multicount_stats(
# 'topic', [{'id': t.id, 'name': t.name} for t in Thing.objects.filter(type__name='topic')],
# min_dur=3600*300, no_host=True)
# show_df(gender_screen_topic, ['topic'] + gender_ordering)
# print('Speaking time')
# gender_speaking_topic = gender_speaker_stats(
# 'topic', [{'id': t.id, 'name': t.name} for t in Thing.objects.filter(type__name='topic')],
# min_dur=3600*100)
# gender_speaking_topic_nh = gender_speaker_stats(
# 'topic', [{'id': t.id, 'name': t.name} for t in Thing.objects.filter(type__name='topic')],
# min_dur=3600*100, no_host=True)
# show_df(gender_speaking_topic, ['topic'] + gender_speaker_ordering)
```
## Male vs. female faces in panels
* Smaller percentage of women in panels relative to overall dataset.
```
# # TODO: female-domainated situations?
# # TODO: slice this on # of people in the panel
# # TODO: small visualization that shows sample of segments
# # TODO: panels w/ majority male vs. majority female
# print('Computing panels')
# panels = queries.panels()
# print('Computing gender stats')
# frame_ids = [frame.id for (frame, _) in panels]
# counts = filter_gender(lambda qs: qs.filter(face__person__frame__id__in=frame_ids), lambda qs: qs)
# show_df([counts], ordering)
```
# Pose
* Animatedness of people (specifically hosts)
* e.g. Rachel Maddow vs. others
* Pick 3-4 hours of a few specific hosts, compute dense poses and tracks
* Devise acceleration metric
* More gesturing on heated exchanges?
* Sitting vs. standing
* Repeated gestures (debates vs. state of the union)
* Head/eye orientation (are people looking at each other?)
* Camera orientation (looking at someone from above/below)
* How much are the hosts facing each other
* Quantify aggressive body language
# Topics
```
df = pd.DataFrame(gender_screen_tod)
ax = df.plot('hour', 'M%')
pd.DataFrame(gender_speaking_tod).plot('hour', 'M%', ax=ax)
ax.set_ylim(0, 100)
ax.set_xticks(range(len(df)))
ax.set_xticklabels(df.hour)
ax.axhline(50, color='r', linestyle='--')
ax.legend(['Screen time', 'Speaking time', '50%'])
# pd.DataFrame(gender_screen_topic).to_csv('/app/data/screen_topic.csv')
# pd.DataFrame(gender_screen_topic_nh).to_csv('/app/data/screen_topic_nh.csv')
# pd.DataFrame(gender_speaking_topic).to_csv('/app/data/speaking_topic.csv')
# pd.DataFrame(gender_speaking_topic_nh).to_csv('/app/data/speaking_topic_nh.csv')
```
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Custom Federated Algorithms, Part 1: Introduction to the Federated Core
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/federated/tutorials/custom_federated_algorithms_1"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/federated/blob/master/docs/tutorials/custom_federated_algorithms_1.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/federated/blob/master/docs/tutorials/custom_federated_algorithms_1.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
This tutorial is the first part of a two-part series that demonstrates how to
implement custom types of federated algorithms in TensorFlow Federated (TFF)
using the [Federated Core (FC)](../federated_core.md) - a set of lower-level
interfaces that serve as a foundation upon which we have implemented the
[Federated Learning (FL)](../federated_learning.md) layer.
This first part is more conceptual; we introduce some of the key concepts and
programming abstractions used in TFF, and we demonstrate their use on a very
simple example with a distributed array of temperature sensors. In
[the second part of this series](custom_federated_algorithms_2.ipynb), we use
the mechanisms we introduce here to implement a simple version of federated
training and evaluation algorithms. As a follow-up, we encourage you to study
[the implementation](https://github.com/tensorflow/federated/blob/master/tensorflow_federated/python/learning/federated_averaging.py)
of federated averaging in `tff.learning`.
By the end of this series, you should be able to recognize that the applications
of Federated Core are not necessarily limited to learning. The programming
abstractions we offer are quite generic, and could be used, e.g., to implement
analytics and other custom types of computations over distributed data.
Although this tutorial is designed to be self-contained, we encourage you to
first read tutorials on
[image classification](federated_learning_for_image_classification.ipynb) and
[text generation](federated_learning_for_text_generation.ipynb) for a
higher-level and more gentle introduction to the TensorFlow Federated framework
and the [Federated Learning](../federated_learning.md) APIs (`tff.learning`), as
it will help you put the concepts we describe here in context.
## Intended Uses
In a nutshell, Federated Core (FC) is a development environment that makes it
possible to compactly express program logic that combines TensorFlow code with
distributed communication operators, such as those that are used in
[Federated Averaging](https://arxiv.org/abs/1602.05629) - computing
distributed sums, averages, and other types of distributed aggregations over a
set of client devices in the system, broadcasting models and parameters to those
devices, etc.
You may be aware of
[`tf.contrib.distribute`](https://www.tensorflow.org/api_docs/python/tf/contrib/distribute),
and a natural question to ask at this point may be: in what ways does this
framework differ? Both frameworks attempt to make TensorFlow computations
distributed, after all.
One way to think about it is that, whereas the stated goal of
`tf.contrib.distribute` is *to allow users to use existing models and training
code with minimal changes to enable distributed training*, and much focus is on
how to take advantage of distributed infrastructure to make existing training
code more efficient, the goal of TFF's Federated Core is to give researchers and
practitioners explicit control over the specific patterns of distributed
communication they will use in their systems. The focus in FC is on providing a
flexible and extensible language for expressing distributed data flow
algorithms, rather than a concrete set of implemented distributed training
capabilities.
One of the primary target audiences for TFF's FC API is researchers and
practitioners who might want to experiment with new federated learning
algorithms and evaluate the consequences of subtle design choices that affect
the manner in which the flow of data in the distributed system is orchestrated,
yet without getting bogged down by system implementation details. The level of
abstraction that FC API is aiming for roughly corresponds to pseudocode one
could use to describe the mechanics of a federated learning algorithm in a
research publication - what data exists in the system and how it is transformed,
but without dropping to the level of individual point-to-point network message
exchanges.
TFF as a whole is targeting scenarios in which data is distributed, and must
remain such, e.g., for privacy reasons, and where collecting all data at a
centralized location may not be a viable option. This has implications for the
implementation of machine learning algorithms that require an increased degree
of explicit control, as compared to scenarios in which all data can be
accumulated in a centralized location at a data center.
## Before we start
Before we dive into the code, please try to run the following "Hello World"
example to make sure your environment is correctly set up. If it doesn't work,
please refer to the [Installation](../install.md) guide for instructions.
```
#@test {"skip": true}
!pip install --quiet --upgrade tensorflow_federated_nightly
!pip install --quiet --upgrade nest_asyncio
import nest_asyncio
nest_asyncio.apply()
import collections
import numpy as np
import tensorflow as tf
import tensorflow_federated as tff
@tff.federated_computation
def hello_world():
return 'Hello, World!'
hello_world()
```
## Federated data
One of the distinguishing features of TFF is that it allows you to compactly
express TensorFlow-based computations on *federated data*. We will be using the
term *federated data* in this tutorial to refer to a collection of data items
hosted across a group of devices in a distributed system. For example,
applications running on mobile devices may collect data and store it locally,
without uploading to a centralized location. Or, an array of distributed sensors
may collect and store temperature readings at their locations.
Federated data like those in the above examples are treated in TFF as
[first-class citizens](https://en.wikipedia.org/wiki/First-class_citizen), i.e.,
they may appear as parameters and results of functions, and they have types. To
reinforce this notion, we will refer to federated data sets as *federated
values*, or as *values of federated types*.
The important point to understand is that we are modeling the entire collection
of data items across all devices (e.g., the entire collection of temperature
readings from all sensors in a distributed array) as a single federated value.
For example, here's how one would define in TFF the type of a *federated float*
hosted by a group of client devices. A collection of temperature readings that
materialize across an array of distributed sensors could be modeled as a value
of this federated type.
```
federated_float_on_clients = tff.type_at_clients(tf.float32)
```
More generally, a federated type in TFF is defined by specifying the type `T` of
its *member constituents* - the items of data that reside on individual devices,
and the group `G` of devices on which federated values of this type are hosted
(plus a third, optional bit of information we'll mention shortly). We refer to
the group `G` of devices hosting a federated value as the value's *placement*.
Thus, `tff.CLIENTS` is an example of a placement.
```
str(federated_float_on_clients.member)
str(federated_float_on_clients.placement)
```
A federated type with member constituents `T` and placement `G` can be
represented compactly as `{T}@G`, as shown below.
```
str(federated_float_on_clients)
```
The curly braces `{}` in this concise notation serve as a reminder that the
member constituents (items of data on different devices) may differ, as you
would expect of, e.g., temperature sensor readings, so the clients as a group are
jointly hosting a [multi-set](https://en.wikipedia.org/wiki/Multiset) of
`T`-typed items that together constitute the federated value.
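The multiset intuition can be sketched in plain Python with `collections.Counter` (an analogy only — TFF does not actually store federated values this way, and there is no per-device key in a federated value):

```python
from collections import Counter

# Three sensors report readings; two of them happen to coincide.
readings = [68.5, 70.3, 68.5]

# A multiset keeps duplicates (as counts) but has no per-device key,
# mirroring how {float32}@CLIENTS carries no notion of device identity.
multiset = Counter(readings)
print(multiset[68.5])          # → 2
print(sum(multiset.values()))  # → 3
```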
It is important to note that the member constituents of a federated value are
generally opaque to the programmer, i.e., a federated value should not be
thought of as a simple `dict` keyed by an identifier of a device in the system -
these values are intended to be collectively transformed only by *federated
operators* that abstractly represent various kinds of distributed communication
protocols (such as aggregation). If this sounds too abstract, don't worry - we
will return to this shortly, and we will illustrate it with concrete examples.
Federated types in TFF come in two flavors: those where the member constituents
of a federated value may differ (as just seen above), and those where they are
known to be all equal. This is controlled by the third, optional `all_equal`
parameter in the `tff.FederatedType` constructor (defaulting to `False`).
```
federated_float_on_clients.all_equal
```
A federated type with a placement `G` in which all of the `T`-typed member
constituents are known to be equal can be compactly represented as `T@G` (as
opposed to `{T}@G`, that is, with the curly braces dropped to reflect the fact
that the multi-set of member constituents consists of a single item).
```
str(tff.type_at_clients(tf.float32, all_equal=True))
```
One example of a federated value of such type that might arise in practical
scenarios is a hyperparameter (such as a learning rate, a clipping norm, etc.)
that has been broadcasted by a server to a group of devices that participate in
federated training.
Another example is a set of parameters for a machine learning model pre-trained
at the server, that were then broadcasted to a group of client devices, where
they can be personalized for each user.
For example, suppose we have a pair of `float32` parameters `a` and `b` for a
simple one-dimensional linear regression model. We can construct the
(non-federated) type of such models for use in TFF as follows. The angle braces
`<>` in the printed type string are a compact TFF notation for named or unnamed
tuples.
```
simple_regression_model_type = (
    tff.StructType([('a', tf.float32), ('b', tf.float32)]))
str(simple_regression_model_type)
```
Note that we are only specifying `dtype`s above. Non-scalar types are also
supported. In the above code, `tf.float32` is a shortcut notation for the more
general `tff.TensorType(dtype=tf.float32, shape=[])`.
When this model is broadcasted to clients, the type of the resulting federated
value can be represented as shown below.
```
str(tff.type_at_clients(
    simple_regression_model_type, all_equal=True))
```
Per symmetry with *federated float* above, we will refer to such a type as a
*federated tuple*. More generally, we'll often use the term *federated XYZ* to
refer to a federated value in which member constituents are *XYZ*-like. Thus, we
will talk about things like *federated tuples*, *federated sequences*,
*federated models*, and so on.
Now, coming back to `float32@CLIENTS` - while it appears replicated across
multiple devices, it is actually a single `float32`, since all members are the
same. In general, you may think of any *all-equal* federated type, i.e., one of
the form `T@G`, as isomorphic to a non-federated type `T`, since in both cases,
there's actually only a single (albeit potentially replicated) item of type `T`.
Given the isomorphism between `T` and `T@G`, you may wonder what purpose, if
any, the latter types might serve. Read on.
## Placements
### Design Overview
In the preceding section, we've introduced the concept of *placements* - groups
of system participants that might be jointly hosting a federated value, and
we've demonstrated the use of `tff.CLIENTS` as an example specification of a
placement.
To explain why the notion of a *placement* is so fundamental that we needed to
incorporate it into the TFF type system, recall what we mentioned at the
beginning of this tutorial about some of the intended uses of TFF.
Although in this tutorial, you will only see TFF code being executed locally in
a simulated environment, our goal is for TFF to enable writing code that you
could deploy for execution on groups of physical devices in a distributed
system, potentially including mobile or embedded devices running Android. Each
of those devices would receive a separate set of instructions to execute
locally, depending on the role it plays in the system (an end-user device, a
centralized coordinator, an intermediate layer in a multi-tier architecture,
etc.). It is important to be able to reason about which subsets of devices
execute what code, and where different portions of the data might physically
materialize.
This is especially important when dealing with, e.g., application data on mobile
devices. Since the data is private and can be sensitive, we need the ability to
statically verify that this data will never leave the device (and prove facts
about how the data is being processed). The placement specifications are one of
the mechanisms designed to support this.
TFF has been designed as a data-centric programming environment, and as such,
unlike some of the existing frameworks that focus on *operations* and where
those operations might *run*, TFF focuses on *data*, where that data
*materializes*, and how it's being *transformed*. Consequently, placement is
modeled as a property of data in TFF, rather than as a property of operations on
data. Indeed, as you're about to see in the next section, some of the TFF
operations span across locations, and run "in the network", so to speak, rather
than being executed by a single machine or a group of machines.
Representing the type of a certain value as `T@G` or `{T}@G` (as opposed to just
`T`) makes data placement decisions explicit, and together with a static
analysis of programs written in TFF, it can serve as a foundation for providing
formal privacy guarantees for sensitive on-device data.
An important thing to note at this point, however, is that while we encourage
TFF users to be explicit about *groups* of participating devices that host the
data (the placements), the programmer will never deal with the raw data or
identities of the *individual* participants.
(Note: While it goes far outside the scope of this tutorial, we should mention
that there is one notable exception to the above, a `tff.federated_collect`
operator that is intended as a low-level primitive, only for specialized
situations. Its explicit use in situations where it can be avoided is not
recommended, as it may limit the possible future applications. For example, if
during the course of static analysis, we determine that a computation uses such
low-level mechanisms, we may disallow its access to certain types of data.)
Within the body of TFF code, by design, there's no way to enumerate the devices
that constitute the group represented by `tff.CLIENTS`, or to probe for the
existence of a specific device in the group. There's no concept of a device or
client identity anywhere in the Federated Core API, the underlying set of
architectural abstractions, or the core runtime infrastructure we provide to
support simulations. All the computation logic you write will be expressed as
operations on the entire client group.
Recall here what we mentioned earlier about values of federated types being
unlike Python `dict`, in that one cannot simply enumerate their member
constituents. Think of values that your TFF program logic manipulates as being
associated with placements (groups), rather than with individual participants.
Placements *are* designed to be a first-class citizen in TFF as well, and can
appear as parameters and results of a `placement` type (to be represented by
`tff.PlacementType` in the API). In the future, we plan to provide a variety of
operators to transform or combine placements, but this is outside the scope of
this tutorial. For now, it suffices to think of `placement` as an opaque
primitive built-in type in TFF, similar to how `int` and `bool` are opaque
built-in types in Python, with `tff.CLIENTS` being a constant literal of this
type, not unlike `1` being a constant literal of type `int`.
### Specifying Placements
TFF provides two basic placement literals, `tff.CLIENTS` and `tff.SERVER`, to
make it easy to express the rich variety of practical scenarios that are
naturally modeled as client-server architectures, with multiple *client* devices
(mobile phones, embedded devices, distributed databases, sensors, etc.)
orchestrated by a single centralized *server* coordinator. TFF is designed to
also support custom placements, multiple client groups, multi-tiered and other,
more general distributed architectures, but discussing them is outside the scope
of this tutorial.
TFF doesn't prescribe what either `tff.CLIENTS` or `tff.SERVER` actually
represents.
In particular, `tff.SERVER` may be a single physical device (a member of a
singleton group), but it might just as well be a group of replicas in a
fault-tolerant cluster running state machine replication - we do not make any
special architectural assumptions. Rather, we use the `all_equal` bit mentioned
in the preceding section to express the fact that we're generally dealing with
only a single item of data at the server.
Likewise, `tff.CLIENTS` in some applications might represent all clients in the
system - what in the context of federated learning we sometimes refer to as the
*population*, but e.g., in
[production implementations of Federated Averaging](https://arxiv.org/abs/1602.05629),
it may represent a *cohort* - a subset of the clients selected for participation
in a particular round of training. The abstractly defined placements are given
concrete meaning when a computation in which they appear is deployed for
execution (or simply invoked like a Python function in a simulated environment,
as is demonstrated in this tutorial). In our local simulations, the group of
clients is determined by the federated data supplied as input.
## Federated computations
### Declaring federated computations
TFF is designed as a strongly-typed functional programming environment that
supports modular development.
The basic unit of composition in TFF is a *federated computation* - a section of
logic that may accept federated values as input and return federated values as
output. Here's how you can define a computation that calculates the average of
the temperatures reported by the sensor array from our previous example.
```
@tff.federated_computation(tff.type_at_clients(tf.float32))
def get_average_temperature(sensor_readings):
  return tff.federated_mean(sensor_readings)
```
Looking at the above code, at this point you might be asking - aren't there
already decorator constructs to define composable units such as
[`tf.function`](https://www.tensorflow.org/api_docs/python/tf/function)
in TensorFlow, and if so, why introduce yet another one, and how is it
different?
The short answer is that the code generated by the `tff.federated_computation`
wrapper is *neither* TensorFlow, *nor is it* Python - it's a specification of a
distributed system in an internal platform-independent *glue* language. At this
point, this will undoubtedly sound cryptic, but please bear this intuitive
interpretation of a federated computation as an abstract specification of a
distributed system in mind. We'll explain it in a minute.
First, let's play with the definition a bit. TFF computations are generally
modeled as functions - with or without parameters, but with well-defined type
signatures. You can print the type signature of a computation by querying its
`type_signature` property, as shown below.
```
str(get_average_temperature.type_signature)
```
The type signature tells us that the computation accepts a collection of
different sensor readings on client devices, and returns a single average on the
server.
Before we go any further, let's reflect on this for a minute - the input and
output of this computation are *in different places* (on `CLIENTS` vs. at the
`SERVER`). Recall what we said in the preceding section on placements about how
*TFF operations may span across locations, and run in the network*, and what we
just said about federated computations as representing abstract specifications
of distributed systems. We have just defined one such computation - a simple
distributed system in which data is consumed at client devices, and the
aggregate results emerge at the server.
In many practical scenarios, the computations that represent top-level tasks
will tend to accept their inputs and report their outputs at the server - this
reflects the idea that computations might be triggered by *queries* that
originate and terminate on the server.
However, FC API does not impose this assumption, and many of the building blocks
we use internally (including numerous `tff.federated_...` operators you may find
in the API) have inputs and outputs with distinct placements, so in general, you
should not think about a federated computation as something that *runs on the
server* or is *executed by a server*. The server is just one type of participant
in a federated computation. In thinking about the mechanics of such
computations, it's best to always default to the global network-wide
perspective, rather than the perspective of a single centralized coordinator.
In general, functional type signatures are compactly represented as `(T -> U)`
for types `T` and `U` of inputs and outputs, respectively. The type of the
formal parameter (such as `sensor_readings` in this case) is specified as the
argument to the decorator. You don't need to specify the type of the result -
it's determined automatically.
Although TFF does offer limited forms of polymorphism, programmers are strongly
encouraged to be explicit about the types of data they work with, as that makes
understanding, debugging, and formally verifying properties of your code easier.
In some cases, explicitly specifying types is a requirement (e.g., polymorphic
computations are currently not directly executable).
### Executing federated computations
In order to support development and debugging, TFF allows you to directly invoke
computations defined this way as Python functions, as shown below. Where the
computation expects a value of a federated type with the `all_equal` bit set to
`False`, you can feed it as a plain `list` in Python, and for federated types
with the `all_equal` bit set to `True`, you can just directly feed the (single)
member constituent. This is also how the results are reported back to you.
```
get_average_temperature([68.5, 70.3, 69.8])
```
When running computations like this in simulation mode, you act as an external
observer with a system-wide view, who has the ability to supply inputs and
consume outputs at any locations in the network, as indeed is the case here -
you supplied client values at input, and consumed the server result.
Now, let's return to a note we made earlier about the
`tff.federated_computation` decorator emitting code in a *glue* language.
Although the logic of TFF computations can be expressed as ordinary functions in
Python (you just need to decorate them with `tff.federated_computation` as we've
done above), and you can directly invoke them with Python arguments just
like any other Python functions in this notebook, behind the scenes, as we noted
earlier, TFF computations are actually *not* Python.
What we mean by this is that when the Python interpreter encounters a function
decorated with `tff.federated_computation`, it traces the statements in this
function's body once (at definition time), and then constructs a
[serialized representation](https://github.com/tensorflow/federated/blob/master/tensorflow_federated/proto/v0/computation.proto)
of the computation's logic for future use - whether for execution, or to be
incorporated as a sub-component into another computation.
You can verify this by adding a print statement, as follows:
```
@tff.federated_computation(tff.type_at_clients(tf.float32))
def get_average_temperature(sensor_readings):
  print('Getting traced, the argument is "{}".'.format(
      type(sensor_readings).__name__))
  return tff.federated_mean(sensor_readings)
```
You can think of Python code that defines a federated computation similarly to
how you would think of Python code that builds a TensorFlow graph in a non-eager
context (if you're not familiar with the non-eager uses of TensorFlow, think of
your Python code defining a graph of operations to be executed later, but not
actually running them on the fly). The non-eager graph-building code in
TensorFlow is Python, but the TensorFlow graph constructed by this code is
platform-independent and serializable.
Likewise, TFF computations are defined in Python, but the Python statements in
their bodies, such as `tff.federated_mean` in the example we've just shown,
are compiled into a portable and platform-independent serializable
representation under the hood.
As a developer, you don't need to concern yourself with the details of this
representation, as you will never need to directly work with it, but you should
be aware of its existence, the fact that TFF computations are fundamentally
non-eager, and cannot capture arbitrary Python state. Python code contained in a
TFF computation's body is executed at definition time, when the body of the
Python function decorated with `tff.federated_computation` is traced before
getting serialized. It's not retraced again at invocation time (except when the
function is polymorphic; please refer to the documentation pages for details).
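The trace-once-at-definition-time behavior can be illustrated with a toy plain-Python decorator. This is a sketch for intuition only, not TFF's implementation; `trace_log` and `federated_computation_sketch` are hypothetical names:

```python
# Record of tracing events; the real TFF builds a serialized representation
# instead of appending to a list.
trace_log = []

def federated_computation_sketch(fn):
    trace_log.append('traced ' + fn.__name__)  # happens once, at definition
    return fn

@federated_computation_sketch
def get_average(xs):
    return sum(xs) / len(xs)

# Invoking the function afterwards does not re-trigger the tracing step.
get_average([68.5, 70.3, 69.8])
get_average([1.0, 2.0])
print(trace_log)  # → ['traced get_average']
```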
You may wonder why we've chosen to introduce a dedicated internal non-Python
representation. One reason is that ultimately, TFF computations are intended to
be deployable to real physical environments, and hosted on mobile or embedded
devices, where Python may not be available.
Another reason is that TFF computations express the global behavior of
distributed systems, as opposed to Python programs which express the local
behavior of individual participants. You can see that in the simple example
above, with the special operator `tff.federated_mean` that accepts data on
client devices, but deposits the results on the server.
The operator `tff.federated_mean` cannot be easily modeled as an ordinary
operator in Python, since it doesn't execute locally - as noted earlier, it
represents a distributed system that coordinates the behavior of multiple system
participants. We will refer to such operators as *federated operators*, to
distinguish them from ordinary (local) operators in Python.
The TFF type system, and the fundamental set of operations supported in TFF's
language, thus deviate significantly from those in Python, necessitating the
use of a dedicated representation.
### Composing federated computations
As noted above, federated computations and their constituents are best
understood as models of distributed systems, and you can think of composing
federated computations as composing more complex distributed systems from
simpler ones. You can think of the `tff.federated_mean` operator as a kind of
built-in template federated computation with a type signature `({T}@CLIENTS ->
T@SERVER)` (indeed, just like computations you write, this operator also has a
complex structure - under the hood we break it down into simpler operators).
The same is true of composing federated computations. The computation
`get_average_temperature` may be invoked in a body of another Python function
decorated with `tff.federated_computation` - doing so will cause it to be
embedded in the body of the parent, much in the same way `tff.federated_mean`
was embedded in its own body earlier.
An important restriction to be aware of is that bodies of Python functions
decorated with `tff.federated_computation` must consist *only* of federated
operators, i.e., they cannot directly contain TensorFlow operations. For
example, you cannot directly use `tf.nest` interfaces to add a pair of
federated values. TensorFlow code must be confined to blocks of code decorated
with a `tff.tf_computation` discussed in the following section. Only when
wrapped in this manner can the wrapped TensorFlow code be invoked in the body of
a `tff.federated_computation`.
The reasons for this separation are technical (it's hard to trick operators such
as `tf.add` into working with non-tensors) as well as architectural. The language of
federated computations (i.e., the logic constructed from serialized bodies of
Python functions decorated with `tff.federated_computation`) is designed to
serve as a platform-independent *glue* language. This glue language is currently
used to build distributed systems from embedded sections of TensorFlow code
(confined to `tff.tf_computation` blocks). In the fullness of time, we
anticipate the need to embed sections of other, non-TensorFlow logic, such as
relational database queries that might represent input pipelines, all connected
together using the same glue language (the `tff.federated_computation` blocks).
## TensorFlow logic
### Declaring TensorFlow computations
TFF is designed for use with TensorFlow. As such, the bulk of the code you will
write in TFF is likely to be ordinary (i.e., locally-executing) TensorFlow code.
In order to use such code with TFF, as noted above, it just needs to be
decorated with `tff.tf_computation`.
For example, here's how we could implement a function that takes a number and
adds `0.5` to it.
```
@tff.tf_computation(tf.float32)
def add_half(x):
  return tf.add(x, 0.5)
```
Once again, looking at this, you may be wondering why we should define another
decorator `tff.tf_computation` instead of simply using an existing mechanism
such as `tf.function`. Unlike in the preceding section, here we are
dealing with an ordinary block of TensorFlow code.
There are a few reasons for this, the full treatment of which goes beyond the
scope of this tutorial, but it's worth naming the main one:
* In order to embed reusable building blocks implemented using TensorFlow code
in the bodies of federated computations, they need to satisfy certain
properties - such as getting traced and serialized at definition time,
having type signatures, etc. This generally requires some form of a
decorator.
In general, we recommend using TensorFlow's native mechanisms for composition,
such as `tf.function`, wherever possible, as the exact manner in
which TFF's decorator interacts with eager functions can be expected to evolve.
Now, coming back to the example code snippet above, the computation `add_half`
we just defined can be treated by TFF just like any other TFF computation. In
particular, it has a TFF type signature.
```
str(add_half.type_signature)
```
Note this type signature does not have placements. TensorFlow computations
cannot consume or return federated types.
You can now also use `add_half` as a building block in other computations. For
example, here's how you can use the `tff.federated_map` operator to apply
`add_half` pointwise to all member constituents of a federated float on client
devices.
```
@tff.federated_computation(tff.type_at_clients(tf.float32))
def add_half_on_clients(x):
  return tff.federated_map(add_half, x)
str(add_half_on_clients.type_signature)
```
### Executing TensorFlow computations
Execution of computations defined with `tff.tf_computation` follows the same
rules as those we described for `tff.federated_computation`. They can be invoked
as ordinary callables in Python, as follows.
```
add_half_on_clients([1.0, 3.0, 2.0])
```
Once again, it is worth noting that invoking the computation
`add_half_on_clients` in this manner simulates a distributed process. Data is
consumed on clients, and returned on clients. Indeed, this computation has each
client perform a local action. There is no `tff.SERVER` explicitly mentioned in
this system (even if in practice, orchestrating such processing might involve
one). Think of a computation defined this way as conceptually analogous to the
`Map` stage in `MapReduce`.
Also, keep in mind that what we said in the preceding section about TFF
computations getting serialized at the definition time remains true for
`tff.tf_computation` code as well - the Python body of `add_half_on_clients`
gets traced once at definition time. On subsequent invocations, TFF uses its
serialized representation.
The only difference between Python methods decorated with
`tff.federated_computation` and those decorated with `tff.tf_computation` is
that the latter are serialized as TensorFlow graphs (whereas the former are not
allowed to contain TensorFlow code directly embedded in them).
Under the hood, each method decorated with `tff.tf_computation` temporarily
disables eager execution in order to allow the computation's structure to be
captured. While eager execution is locally disabled, you are welcome to use
eager TensorFlow, AutoGraph, TensorFlow 2.0 constructs, etc., so long as you
write the logic of your computation in a manner such that it can get correctly
serialized.
For example, the following code will fail:
```
try:
  # Eager mode
  constant_10 = tf.constant(10.)

  @tff.tf_computation(tf.float32)
  def add_ten(x):
    return x + constant_10

except Exception as err:
  print(err)
```
The above fails because `constant_10` has already been constructed outside of
the graph that `tff.tf_computation` constructs internally in the body of
`add_ten` during the serialization process.
On the other hand, invoking python functions that modify the current graph when
called inside a `tff.tf_computation` is fine:
```
def get_constant_10():
  return tf.constant(10.)

@tff.tf_computation(tf.float32)
def add_ten(x):
  return x + get_constant_10()

add_ten(5.0)
```
Note that the serialization mechanisms in TensorFlow are evolving, and we expect
the details of how TFF serializes computations to evolve as well.
### Working with `tf.data.Dataset`s
As noted earlier, a unique feature of `tff.tf_computation`s is that they allow
you to work with `tf.data.Dataset`s defined abstractly as formal parameters by
your code. Parameters to be represented in TensorFlow as data sets need to be
declared using the `tff.SequenceType` constructor.
For example, the type specification `tff.SequenceType(tf.float32)` defines an
abstract sequence of float elements in TFF. Sequences can contain either
tensors, or complex nested structures (we'll see examples of those later). The
concise representation of a sequence of `T`-typed items is `T*`.
```
float32_sequence = tff.SequenceType(tf.float32)
str(float32_sequence)
```
Suppose that in our temperature sensor example, each sensor holds not just one
temperature reading, but multiple. Here's how you can define a TFF computation
in TensorFlow that calculates the average of temperatures in a single local data
set using the `tf.data.Dataset.reduce` operator.
```
@tff.tf_computation(tff.SequenceType(tf.float32))
def get_local_temperature_average(local_temperatures):
  sum_and_count = (
      local_temperatures.reduce((0.0, 0), lambda x, y: (x[0] + y, x[1] + 1)))
  return sum_and_count[0] / tf.cast(sum_and_count[1], tf.float32)
str(get_local_temperature_average.type_signature)
```
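The sum-and-count accumulator above mirrors Python's built-in `functools.reduce`. Here is a plain-Python equivalent of the same reduction, for intuition only (not TFF code):

```python
from functools import reduce

readings = [68.5, 70.3, 69.8]

# The accumulator is a (running_sum, running_count) pair, just like the
# (0.0, 0) zero element and the lambda in the TFF version above.
total, count = reduce(lambda acc, t: (acc[0] + t, acc[1] + 1),
                      readings, (0.0, 0))
print(total / count)  # ≈ 69.53, the local average
```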
In the body of a method decorated with `tff.tf_computation`, formal parameters
of a TFF sequence type are represented simply as objects that behave like
`tf.data.Dataset`, i.e., support the same properties and methods (they are
currently not implemented as subclasses of that type - this may change as the
support for data sets in TensorFlow evolves).
You can easily verify this as follows.
```
@tff.tf_computation(tff.SequenceType(tf.int32))
def foo(x):
  return x.reduce(np.int32(0), lambda x, y: x + y)
foo([1, 2, 3])
```
Keep in mind that unlike ordinary `tf.data.Dataset`s, these dataset-like objects
are placeholders. They don't contain any elements, since they represent abstract
sequence-typed parameters, to be bound to concrete data when used in a concrete
context. Support for abstractly-defined placeholder data sets is still somewhat
limited at this point, and in the early days of TFF, you may encounter certain
restrictions, but we won't need to worry about them in this tutorial (please
refer to the documentation pages for details).
When locally executing a computation that accepts a sequence in a simulation
mode, such as in this tutorial, you can feed the sequence as a Python list, as
below (as well as in other ways, e.g., as a `tf.data.Dataset` in eager mode, but
for now, we'll keep it simple).
```
get_local_temperature_average([68.5, 70.3, 69.8])
```
Like all other TFF types, sequences like those defined above can use the
`tff.StructType` constructor to define nested structures. For example,
here's how one could declare a computation that accepts a sequence of pairs `A`,
`B`, and returns the sum of their products. We include the tracing statements in
the body of the computation so that you can see how the TFF type signature
translates into the dataset's `output_types` and `output_shapes`.
```
@tff.tf_computation(tff.SequenceType(collections.OrderedDict([('A', tf.int32), ('B', tf.int32)])))
def foo(ds):
  print('element_structure = {}'.format(ds.element_spec))
  return ds.reduce(np.int32(0), lambda total, x: total + x['A'] * x['B'])
str(foo.type_signature)
foo([{'A': 2, 'B': 3}, {'A': 4, 'B': 5}])
```
The support for using `tf.data.Datasets` as formal parameters is still somewhat
limited and evolving, although functional in simple scenarios such as those used
in this tutorial.
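The sum-of-products reduction can be checked in plain Python on the same sample data (a sketch for intuition; the TFF version runs over a dataset of structures):

```python
# Same pairs as in the example above.
pairs = [{'A': 2, 'B': 3}, {'A': 4, 'B': 5}]

# 2*3 + 4*5 = 26, matching what the dataset reduce computes.
result = sum(x['A'] * x['B'] for x in pairs)
print(result)  # → 26
```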
## Putting it all together
Now, let's try again to use our TensorFlow computation in a federated setting.
Suppose we have a group of sensors that each have a local sequence of
temperature readings. We can compute the global temperature average by averaging
the sensors' local averages as follows.
```
@tff.federated_computation(
    tff.type_at_clients(tff.SequenceType(tf.float32)))
def get_global_temperature_average(sensor_readings):
  return tff.federated_mean(
      tff.federated_map(get_local_temperature_average, sensor_readings))
```
Note that this isn't a simple average across all local temperature readings from
all clients, as that would require weighing contributions from different clients
by the number of readings they locally maintain. We leave it as an exercise for
the reader to update the above code; the `tff.federated_mean` operator
accepts the weight as an optional second argument (expected to be a federated
float).
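The arithmetic behind the suggested weighting can be checked in plain Python. This is a sketch of the math only, not the TFF solution to the exercise:

```python
readings_per_client = [[68.0, 70.0], [71.0], [68.0, 72.0, 70.0]]

local_avgs = [sum(r) / len(r) for r in readings_per_client]  # [69.0, 71.0, 70.0]
counts = [len(r) for r in readings_per_client]               # [2, 1, 3]

# Unweighted mean of local averages (what get_global_temperature_average computes):
unweighted = sum(local_avgs) / len(local_avgs)               # 70.0

# Weighted by reading counts - equals the mean over all readings pooled together:
weighted = sum(a * c for a, c in zip(local_avgs, counts)) / sum(counts)
pooled = sum(sum(r) for r in readings_per_client) / sum(counts)
print(weighted == pooled)  # → True
```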
Also note that the input to `get_global_temperature_average` now becomes a
*federated float sequence*. Federated sequences are how we will typically represent
on-device data in federated learning, with sequence elements typically
representing data batches (you will see examples of this shortly).
```
str(get_global_temperature_average.type_signature)
```
Here's how we can locally execute the computation on a sample of data in Python.
Notice that the way we supply the input is now as a `list` of `list`s. The outer
list iterates over the devices in the group represented by `tff.CLIENTS`, and
the inner ones iterate over elements in each device's local sequence.
```
get_global_temperature_average([[68.0, 70.0], [71.0], [68.0, 72.0, 70.0]])
```
This concludes the first part of the tutorial. We encourage you to continue on
to the [second part](custom_federated_algorithms_2.ipynb).
# Check Cell Count
## Libraries
```
import pandas
import MySQLdb
import numpy as np
import pickle
import os
```
## Functions and definitions
```
# - - - - - - - - - - - - - - - - - - - -
# Define Experiment
table = 'IsabelCLOUPAC_Per_Image'
# - - - - - - - - - - - - - - - - - - - -
def ensure_dir(file_path):
'''
Function to ensure a file path exists, else creates the path
:param file_path:
:return:
'''
directory = os.path.dirname(file_path)
if not os.path.exists(directory):
os.makedirs(directory)
```
## Main Functions
```
def create_Single_CellCounts(db_table):
db = MySQLdb.connect("menchelabdb.int.cemm.at", "root", "cqsr4h", "ImageAnalysisDDI")
string = "select Image_Metadata_ID_A from "+db_table+" group by Image_Metadata_ID_A;"
data = pandas.read_sql(string, con=db)['Image_Metadata_ID_A']
#with open('../results/FeatureVectors/SingleVectors_' + str(min(plates)) + '_to_' + str(
# max(plates)) + '_NoCutoff_' + str(cast_int) + '.pickle', 'rb') as handle:
# single_Vectors = pickle.load(handle)
singles = list(data)
singles.sort()
if 'PosCon' in singles:
singles.remove('PosCon')
if 'DMSO' in singles:
singles.remove('DMSO')
# Define Database to check for missing Images
    string = "select SUM(Image_Count_Cytoplasm), Image_Metadata_Well, Image_Metadata_ID_A,Image_Metadata_Conc_A,Image_Metadata_Plate from " + db_table + " where Image_Metadata_ID_A not like 'DMSO' and Image_Metadata_Transfer_A like 'YES' group by Image_Metadata_ID_A,Image_Metadata_Plate,Image_Metadata_Well;"
data = pandas.read_sql(string,con=db)
ensure_dir('../results/'+table+'/CellCount/SinglesCellCount.csv')
fp_out = open('../results/'+table+'/CellCount/SinglesCellCount.csv','w')
fp_out.write('Drug,Conc,AVG_CellCount\n')
for drug in singles:
drug_values = data.loc[data['Image_Metadata_ID_A'] == drug][['SUM(Image_Count_Cytoplasm)','Image_Metadata_Conc_A']]
concentrations = list(set(drug_values['Image_Metadata_Conc_A'].values))
concentrations.sort()
for conc in concentrations:
if len(drug_values.loc[drug_values['Image_Metadata_Conc_A'] == conc]['SUM(Image_Count_Cytoplasm)'].values) > 0:
cellcount = np.mean(drug_values.loc[drug_values['Image_Metadata_Conc_A'] == conc]['SUM(Image_Count_Cytoplasm)'].values)
cellcount = int(cellcount)
else:
cellcount = 'nan'
fp_out.write(drug+','+str(conc)+','+str(cellcount) +'\n')
fp_out.close()
def create_Single_CellCounts_individualReplicates(db_table):
db = MySQLdb.connect("menchelabdb.int.cemm.at", "root", "cqsr4h", "ImageAnalysisDDI")
string = "select Image_Metadata_ID_A from "+db_table+" group by Image_Metadata_ID_A;"
data = pandas.read_sql(string, con=db)['Image_Metadata_ID_A']
#with open('../results/FeatureVectors/SingleVectors_' + str(min(plates)) + '_to_' + str(
# max(plates)) + '_NoCutoff_' + str(cast_int) + '.pickle', 'rb') as handle:
# single_Vectors = pickle.load(handle)
singles = list(data)
singles.sort()
if 'PosCon' in singles:
singles.remove('PosCon')
if 'DMSO' in singles:
singles.remove('DMSO')
#plates = range(1315001, 1315124, 10)
#string = "select SUM(Image_Count_Cytoplasm), Image_Metadata_Well, Image_Metadata_ID_A,Image_Metadata_ID_B,Image_Metadata_Plate from " + db_table + " where Image_Metadata_ID_B like 'DMSO' and Image_Metadata_Transfer_A like 'YES' and Image_Metadata_Transfer_B like 'YES' group by Image_Metadata_ID_A,Image_Metadata_Plate,Image_Metadata_Well;"
    string = "select SUM(Image_Count_Cytoplasm), Image_Metadata_Well, Image_Metadata_ID_A,Image_Metadata_Conc_A,Image_Metadata_Plate from " + db_table + " where Image_Metadata_ID_A not like 'DMSO' and Image_Metadata_Transfer_A like 'YES' group by Image_Metadata_ID_A,Image_Metadata_Plate,Image_Metadata_Well;"
data = pandas.read_sql(string,con=db)
ensure_dir('../results/' + table + '/CellCount/SinglesCellCount_AllReplicates.csv')
fp_out = open('../results/' + table + '/CellCount/SinglesCellCount_AllReplicates.csv','w')
#fp_out.write('Drug,CellCounts\n')
fp_out.write('Drug,Conc,Replicate1,Replicate2\n')
for drug in singles:
drug_values = data.loc[data['Image_Metadata_ID_A'] == drug][['SUM(Image_Count_Cytoplasm)','Image_Metadata_Conc_A']]
concentrations = list(set(drug_values['Image_Metadata_Conc_A'].values))
concentrations.sort()
for conc in concentrations:
if len(drug_values.loc[drug_values['Image_Metadata_Conc_A'] == conc]['SUM(Image_Count_Cytoplasm)'].values) > 0:
cellcounts = drug_values.loc[drug_values['Image_Metadata_Conc_A'] == conc]['SUM(Image_Count_Cytoplasm)'].values
fp_out.write(drug + ',' +str(conc)+','+ ','.join([str(x) for x in cellcounts]) + '\n')
fp_out.close()
def getDMSO_Untreated_CellCount(db_table):
# Define Database to check for missing Images
db = MySQLdb.connect("menchelabdb.int.cemm.at", "root", "cqsr4h", "ImageAnalysisDDI")
string = "select SUM(Image_Count_Cytoplasm), Image_Metadata_Well, Image_Metadata_Plate from " + db_table + " where Image_Metadata_ID_A like 'DMSO' and Image_Metadata_Transfer_A like 'YES' group by Image_Metadata_Well,Image_Metadata_Plate;"
data = pandas.read_sql(string,con=db)
mean = np.mean(data['SUM(Image_Count_Cytoplasm)'])
std = np.std(data['SUM(Image_Count_Cytoplasm)'])
max_val = np.percentile(data['SUM(Image_Count_Cytoplasm)'],98)
ensure_dir('../results/' + table + '/CellCount/DMSO_Overview.csv')
fp_out = open('../results/' + table + '/CellCount/DMSO_Overview.csv', 'w')
fp_out.write('Mean,Std,Max\n%f,%f,%f' %(mean,std,max_val))
fp_out.close()
fp_out = open('../results/' + table + '/CellCount/DMSO_Replicates.csv', 'w')
fp_out.write('Plate,Well,CellCount\n')
for row in data.iterrows():
fp_out.write(str(row[1][2])+','+row[1][1]+','+str(row[1][0])+'\n')
fp_out.close()
def get_CellCount_perWell(db_table):
# Define Database to check for missing Images
db = MySQLdb.connect("menchelabdb.int.cemm.at", "root", "cqsr4h", "ImageAnalysisDDI")
string = "select SUM(Image_Count_Cytoplasm),Image_Metadata_ID_A, Image_Metadata_Well, Image_Metadata_Plate,Image_Metadata_Transfer_A from " + db_table + " group by Image_Metadata_Well,Image_Metadata_Plate;"
data = pandas.read_sql(string,con=db)
    data = data.sort_values(by=['Image_Metadata_Plate','Image_Metadata_Well'])
ensure_dir('../results/' + db_table + '/CellCount/Individual_Well_Results.csv')
fp_out = open('../results/' + db_table + '/CellCount/Individual_Well_Results.csv', 'w')
    fp_out.write('ID_A,Plate,Well,CellCount,TransferOK\n')
    for row in data.iterrows():
        ID_A = row[1][1]
        Trans_A = row[1][4]
        # the transfer flag alone determines whether the well worked,
        # regardless of compound (drug, DMSO or PosCon)
        worked = 'TRUE' if Trans_A == 'YES' else 'FALSE'
        fp_out.write(ID_A+','+str(row[1][3])+','+row[1][2]+','+str(row[1][0])+','+worked+'\n')
fp_out.close()
def PlotResult_file(table,all=False):
from matplotlib import pylab as plt
drug_values = {}
dmso_values = []
fp = open('../results/' + table + '/CellCount/Individual_Well_Results.csv')
    next(fp)  # skip the header line
for line in fp:
tmp = line.strip().split(',')
if tmp[4] == 'TRUE':
if tmp[0] != 'DMSO':
                if tmp[0] in drug_values:
                    drug_values[tmp[0]].append(float(tmp[3]))
                else:
                    drug_values[tmp[0]] = [float(tmp[3])]
if tmp[0] == 'DMSO':
dmso_values.append(float(tmp[3]))
max_val = np.mean(dmso_values) + 0.5 * np.std(dmso_values)
#max_val = np.mean([np.mean(x) for x in drug_values.values()]) + 1.2 * np.std([np.mean(x) for x in drug_values.values()])
effect = 0
normalized = []
for drug in drug_values:
scaled = (np.mean(drug_values[drug]) - 0) / max_val
if scaled <= 1:
normalized.append(scaled)
else:
normalized.append(1)
if scaled < 0.5:
effect +=1
    print('Number of drugs with more than 50%% cytotoxicity: %d' % effect)
    print('Number of drugs with less than 50%% cytotoxicity: %d' % (len(drug_values) - effect))
plt.hist(normalized,bins='auto', color = '#40B9D4')
#plt.show()
plt.xlabel('Viability')
plt.ylabel('Frequency')
plt.savefig('../results/' + table + '/CellCount/CellCountHistogram.pdf')
plt.close()
#create_Single_CellCounts(table)
#create_Single_CellCounts_individualReplicates(table)
#getDMSO_Untreated_CellCount(table)
#get_CellCount_perWell(table)
PlotResult_file(table)
```
# Fluorescence per phase
This module allows calculations for a second fluorescence channel, based on cells that have been binned into cell cycle phases. There is also an option to ignore the phase information.
```
import os
import re
import string
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from loguru import logger
from GEN_Utils import FileHandling
```
### Set some sample-specific parameters
```
input_folder = 'python/gauss_models/normalised/'
output_folder = 'python/phase_fluorescence/'
fluorescence_col = 'TPE'
plate_samples = ['TPE only', '1', '1.5', '2', '3', '4']*4
plate_cords = [f'{x}{y}' for x in string.ascii_uppercase[0:4]
for y in range(1, 7)]
sample_map = dict(zip(plate_cords, plate_samples))
if not os.path.exists(output_folder):
os.mkdir(output_folder)
# Generate filelist
file_list = [filename for filename in os.listdir(input_folder)]
```
### Collect important info into summary df, grouped according to phase
```
sample_data = []
for filename in file_list:
sample_name = os.path.splitext(filename)[0]
    raw_data = pd.read_csv(f'{input_folder}{filename}')
raw_data.rename(columns={fluorescence_col: "fluorescence"}, inplace=True)
fluo_data = raw_data.copy()[['phase', 'fluorescence']]
fluo_data = fluo_data.groupby('phase').median().T
fluo_data['sample'] = sample_name
sample_data.append(fluo_data)
summary_df = pd.concat(sample_data).reset_index(drop=True)
summary_df.head()
summary_df['plate'] = summary_df['sample'].str[0]
summary_df['well'] = summary_df['sample'].str[1:]
summary_df['sample'] = summary_df['well'].map(sample_map)
summary_df.sort_values(['sample'], inplace=True)
summary_df.head(10)
```
### Generate equivalent dataset, ignoring phase
```
sample_data = {}
for filename in file_list:
sample_name = os.path.splitext(filename)[0]
    raw_data = pd.read_csv(f'{input_folder}{filename}')
raw_data.rename(columns={fluorescence_col: "fluorescence"}, inplace=True)
fluo_data = raw_data.copy()['fluorescence']
sample_data[sample_name] = fluo_data.median()
summary_df = pd.DataFrame.from_dict(sample_data, orient='index').reset_index()
summary_df.rename(columns={'index': 'sample',
0: 'med_fluorescence'}, inplace=True)
summary_df['plate'] = summary_df['sample'].str[0]
summary_df['well'] = summary_df['sample'].str[1:]
summary_df['sample'] = summary_df['well'].map(sample_map)
summary_df.sort_values(['plate', 'sample'], inplace=True)
summary_df.head(10)
```
```
# Binary Tree Basic Implimentations
# For harder questions and answers, refer to:
# https://github.com/volkansonmez/Algorithms-and-Data-Structures-1/blob/master/Binary_Tree_All_Methods.ipynb
import numpy as np
np.random.seed(0)
class BST():
def __init__(self, root = None):
self.root = root
def add_node(self, value):
if self.root == None:
self.root = Node(value)
else:
self._add_node(self.root, value)
def _add_node(self, key_node, value):
if key_node == None: return
if value < key_node.cargo: # go left
if key_node.left == None:
key_node.left = Node(value)
key_node.left.parent = key_node
else:
self._add_node(key_node.left, value)
elif value > key_node.cargo: # go right
if key_node.right == None:
key_node.right = Node(value)
key_node.right.parent = key_node
else:
self._add_node(key_node.right, value)
else: # if the value already exists
return
def add_random_nodes(self):
numbers = np.arange(0,20)
self.random_numbers = np.random.permutation(numbers)
for i in self.random_numbers:
self.add_node(i)
def find_node(self, value): # find if the value exists in the tree
if self.root == None: return None
if self.root.cargo == value:
return self.root
else:
return self._find_node(self.root, value)
def _find_node(self, key_node, value):
if key_node == None: return None
if key_node.cargo == value: return key_node
if value < key_node.cargo: # go left
key_node = key_node.left
return self._find_node(key_node, value)
else:
key_node = key_node.right
return self._find_node(key_node, value)
def print_in_order(self): # do a dfs, print from left leaf to the right leaf
if self.root == None: return
key_node = self.root
self._print_in_order(key_node)
def _print_in_order(self, key_node):
if key_node == None: return
self._print_in_order(key_node.left)
print(key_node.cargo, end = ' ')
self._print_in_order(key_node.right)
def print_leaf_nodes_by_stacking(self):
all_nodes = [] # append the node objects
leaf_nodes = [] # append the cargos of the leaf nodes
if self.root == None: return None
all_nodes.append(self.root)
while len(all_nodes) > 0:
curr_node = all_nodes.pop() # pop the last item, last in first out
if curr_node.left != None:
all_nodes.append(curr_node.left)
if curr_node.right != None:
all_nodes.append(curr_node.right)
elif curr_node.left == None and curr_node.right == None:
leaf_nodes.append(curr_node.cargo)
return leaf_nodes
def print_bfs(self, todo = None):
if todo == None: todo = []
if self.root == None: return
todo.append(self.root)
while len(todo) > 0:
            curr_node = todo.pop(0)  # pop from the front (FIFO) so the traversal is breadth-first
if curr_node.left != None:
todo.append(curr_node.left)
if curr_node.right != None:
todo.append(curr_node.right)
print(curr_node.cargo, end = ' ')
def find_height(self):
if self.root == None: return 0
else:
return self._find_height(self.root, left = 0, right = 0)
    def _find_height(self, key_node, left, right):
        if key_node == None: return max(left, right)
        # take the deeper of the two subtrees
        return max(self._find_height(key_node.left, left + 1, right),
                   self._find_height(key_node.right, left, right + 1))
def is_valid(self):
if self.root == None: return True
        return self._is_valid(self.root, -np.inf, np.inf)
def _is_valid(self, key_node, min_value , max_value):
if key_node == None: return True
if key_node.cargo > max_value or key_node.cargo < min_value: return False
left_valid = True
right_valid = True
if key_node != None and key_node.left != None:
left_valid = self._is_valid(key_node.left, min_value, key_node.cargo)
if key_node != None and key_node.right != None:
right_valid = self._is_valid(key_node.right, key_node.cargo, max_value)
return left_valid and right_valid
def zig_zag_printing_top_to_bottom(self):
if self.root == None: return
even_stack = [] # stack the nodes in levels that are in even numbers
odd_stack = [] # stack the nodes in levels that are in odd numbers
print_nodes = [] # append the items' cargos in zigzag order
even_stack.append(self.root)
while len(even_stack) > 0 or len(odd_stack) > 0:
while len(even_stack) > 0:
tmp = even_stack.pop()
print_nodes.append(tmp.cargo)
if tmp.right != None:
odd_stack.append(tmp.right)
if tmp.left != None:
odd_stack.append(tmp.left)
while len(odd_stack) > 0:
tmp = odd_stack.pop()
print_nodes.append(tmp.cargo)
if tmp.left != None:
even_stack.append(tmp.left)
if tmp.right != None:
even_stack.append(tmp.right)
return print_nodes
def lowest_common_ancestor(self, node1, node2): # takes two cargos and prints the lca node of them
if self.root == None: return
node1_confirm = self.find_node(node1)
if node1_confirm == None: return
node2_confirm = self.find_node(node2)
if node2_confirm == None: return
key_node = self.root
print('nodes are in the tree')
return self._lowest_common_ancestor(key_node, node1, node2)
def _lowest_common_ancestor(self, key_node, node1, node2):
if key_node == None: return
if node1 < key_node.cargo and node2 < key_node.cargo:
key_node = key_node.left
return self._lowest_common_ancestor(key_node, node1, node2)
elif node1 > key_node.cargo and node2 > key_node.cargo:
key_node = key_node.right
return self._lowest_common_ancestor(key_node, node1, node2)
else:
return key_node , key_node.cargo
    def maximum_path_sum(self): # function to find the maximum path sum
        if self.root == None: return
        self.max_value = -np.inf
        self._maximum_path_sum(self.root)
        return self.max_value
    def _maximum_path_sum(self, key_node): # recursive helper; tracks the best path through each node in self.max_value
        if key_node == None: return 0
        left = max(self._maximum_path_sum(key_node.left), 0) # negative branches are dropped
        right = max(self._maximum_path_sum(key_node.right), 0)
        self.max_value = max(self.max_value, key_node.cargo + left + right)
        return key_node.cargo + max(left, right)
class Node():
def __init__(self, cargo = None, parent = None, left = None, right = None):
self.cargo = cargo
self.parent = parent
self.left = left
self.right = right
test_bst = BST()
test_bst.add_random_nodes()
#print(test_bst.print_in_order())
#test_bst.find_node(11)
#test_bst.print_leaf_nodes_by_stacking()
#test_bst.print_bfs()
#test_bst.find_height()
#test_bst.is_valid()
test_bst.zig_zag_printing_top_to_bottom()
#test_bst.lowest_common_ancestor(8, 0)
#test_bst.maximum_path_sum()
```
# Machine Learning Trading Bot
In this Challenge, you’ll assume the role of a financial advisor at one of the top five financial advisory firms in the world. Your firm constantly competes with the other major firms to manage and automatically trade assets in a highly dynamic environment. In recent years, your firm has heavily profited by using computer algorithms that can buy and sell faster than human traders.
The speed of these transactions gave your firm a competitive advantage early on. But, people still need to specifically program these systems, which limits their ability to adapt to new data. You’re thus planning to improve the existing algorithmic trading systems and maintain the firm’s competitive advantage in the market. To do so, you’ll enhance the existing trading signals with machine learning algorithms that can adapt to new data.
## Instructions:
Use the starter code file to complete the steps that the instructions outline. The steps for this Challenge are divided into the following sections:
* Establish a Baseline Performance
* Tune the Baseline Trading Algorithm
* Evaluate a New Machine Learning Classifier
* Create an Evaluation Report
#### Establish a Baseline Performance
In this section, you’ll run the provided starter code to establish a baseline performance for the trading algorithm. To do so, complete the following steps.
Open the Jupyter notebook. Restart the kernel, run the provided cells that correspond with the first three steps, and then proceed to step four.
1. Import the OHLCV dataset into a Pandas DataFrame.
2. Generate trading signals using short- and long-window SMA values.
3. Split the data into training and testing datasets.
4. Use the `SVC` classifier model from SKLearn's support vector machine (SVM) learning method to fit the training data and make predictions based on the testing data. Review the predictions.
5. Review the classification report associated with the `SVC` model predictions.
6. Create a predictions DataFrame that contains columns for “Predicted” values, “Actual Returns”, and “Strategy Returns”.
7. Create a cumulative return plot that shows the actual returns vs. the strategy returns. Save a PNG image of this plot. This will serve as a baseline against which to compare the effects of tuning the trading algorithm.
8. Write your conclusions about the performance of the baseline trading algorithm in the `README.md` file that’s associated with your GitHub repository. Support your findings by using the PNG image that you saved in the previous step.
#### Tune the Baseline Trading Algorithm
In this section, you’ll tune, or adjust, the model’s input features to find the parameters that result in the best trading outcomes. (You’ll choose the best by comparing the cumulative products of the strategy returns.) To do so, complete the following steps:
1. Tune the training algorithm by adjusting the size of the training dataset. To do so, slice your data into different periods. Rerun the notebook with the updated parameters, and record the results in your `README.md` file. Answer the following question: What impact resulted from increasing or decreasing the training window?
> **Hint** To adjust the size of the training dataset, you can use a different `DateOffset` value—for example, six months. Be aware that changing the size of the training dataset also affects the size of the testing dataset.
2. Tune the trading algorithm by adjusting the SMA input features. Adjust one or both of the windows for the algorithm. Rerun the notebook with the updated parameters, and record the results in your `README.md` file. Answer the following question: What impact resulted from increasing or decreasing either or both of the SMA windows?
3. Choose the set of parameters that best improved the trading algorithm returns. Save a PNG image of the cumulative product of the actual returns vs. the strategy returns, and document your conclusion in your `README.md` file.
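The windowing in step 1 above can be sketched with a few lines of pandas; this is a minimal illustration on hypothetical daily data (in the notebook, `X` is the features DataFrame built from the OHLCV file), showing how widening the `DateOffset` grows the training slice and shrinks the testing slice.

```python
import pandas as pd
from pandas.tseries.offsets import DateOffset

# Hypothetical daily feature data standing in for the notebook's X.
idx = pd.date_range("2020-01-01", periods=400, freq="D")
X = pd.DataFrame({"SMA_Fast": range(400)}, index=idx)

training_begin = X.index.min()
training_end = training_begin + DateOffset(months=6)  # e.g. 6 months instead of 3

X_train = X.loc[training_begin:training_end]           # inclusive label slicing
X_test = X.loc[training_end + DateOffset(hours=1):]    # everything after training
print(len(X_train), len(X_test))  # → 183 217
```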
#### Evaluate a New Machine Learning Classifier
In this section, you’ll use the original parameters that the starter code provided. But, you’ll apply them to the performance of a second machine learning model. To do so, complete the following steps:
1. Import a new classifier, such as `AdaBoost`, `DecisionTreeClassifier`, or `LogisticRegression`. (For the full list of classifiers, refer to the [Supervised learning page](https://scikit-learn.org/stable/supervised_learning.html) in the scikit-learn documentation.)
2. Using the original training data as the baseline model, fit another model with the new classifier.
3. Backtest the new model to evaluate its performance. Save a PNG image of the cumulative product of the actual returns vs. the strategy returns for this updated trading algorithm, and write your conclusions in your `README.md` file. Answer the following questions: Did this new model perform better or worse than the provided baseline model? Did this new model perform better or worse than your tuned trading algorithm?
#### Create an Evaluation Report
In the previous sections, you updated your `README.md` file with your conclusions. To accomplish this section, you need to add a summary evaluation report at the end of the `README.md` file. For this report, express your final conclusions and analysis. Support your findings by using the PNG images that you created.
```
# Imports
import pandas as pd
import numpy as np
from pathlib import Path
import hvplot.pandas
import matplotlib.pyplot as plt
from sklearn import svm
from sklearn.preprocessing import StandardScaler
from pandas.tseries.offsets import DateOffset
from sklearn.metrics import classification_report
```
---
## Establish a Baseline Performance
In this section, you’ll run the provided starter code to establish a baseline performance for the trading algorithm. To do so, complete the following steps.
Open the Jupyter notebook. Restart the kernel, run the provided cells that correspond with the first three steps, and then proceed to step four.
### Step 1: Import the OHLCV dataset into a Pandas DataFrame.
```
# Import the OHLCV dataset into a Pandas Dataframe
ohlcv_df = pd.read_csv(
Path("./Resources/emerging_markets_ohlcv.csv"),
index_col='date',
infer_datetime_format=True,
parse_dates=True
)
# Review the DataFrame
ohlcv_df.head()
# Filter the date index and close columns
signals_df = ohlcv_df.loc[:, ["close"]]
# Use the pct_change function to generate returns from close prices
signals_df["Actual Returns"] = signals_df["close"].pct_change()
# Drop all NaN values from the DataFrame
signals_df = signals_df.dropna()
# Review the DataFrame
display(signals_df.head())
display(signals_df.tail())
```
### Step 2: Generate trading signals using short- and long-window SMA values.
```
# Set the short window and long window
short_window = 4
long_window = 100
# Generate the fast and slow simple moving averages (4 and 100 days, respectively)
signals_df['SMA_Fast'] = signals_df['close'].rolling(window=short_window).mean()
signals_df['SMA_Slow'] = signals_df['close'].rolling(window=long_window).mean()
signals_df = signals_df.dropna()
# Review the DataFrame
display(signals_df.head())
display(signals_df.tail())
# Initialize the new Signal column
signals_df['Signal'] = 0.0
# When Actual Returns are greater than or equal to 0, generate signal to buy stock long
signals_df.loc[(signals_df['Actual Returns'] >= 0), 'Signal'] = 1
# When Actual Returns are less than 0, generate signal to sell stock short
signals_df.loc[(signals_df['Actual Returns'] < 0), 'Signal'] = -1
# Review the DataFrame
display(signals_df.head())
display(signals_df.tail())
signals_df['Signal'].value_counts()
# Calculate the strategy returns and add them to the signals_df DataFrame
signals_df['Strategy Returns'] = signals_df['Actual Returns'] * signals_df['Signal'].shift()
# Review the DataFrame
display(signals_df.head())
display(signals_df.tail())
# Plot Strategy Returns to examine performance
(1 + signals_df['Strategy Returns']).cumprod().plot()
```
### Step 3: Split the data into training and testing datasets.
```
# Assign a copy of the sma_fast and sma_slow columns to a features DataFrame called X
X = signals_df[['SMA_Fast', 'SMA_Slow']].shift().dropna()
# Review the DataFrame
X.head()
# Create the target set selecting the Signal column and assigning it to y
y = signals_df['Signal']
# Review the value counts
y.value_counts()
# Select the start of the training period
training_begin = X.index.min()
# Display the training begin date
print(training_begin)
# Select the ending period for the training data with an offset of 3 months
training_end = X.index.min() + DateOffset(months=3)
# Display the training end date
print(training_end)
# Generate the X_train and y_train DataFrames
X_train = X.loc[training_begin:training_end]
y_train = y.loc[training_begin:training_end]
# Review the X_train DataFrame
X_train.head()
# Generate the X_test and y_test DataFrames
X_test = X.loc[training_end+DateOffset(hours=1):]
y_test = y.loc[training_end+DateOffset(hours=1):]
# Review the X_test DataFrame
X_test.head()
# Scale the features DataFrames
# Create a StandardScaler instance
scaler = StandardScaler()
# Apply the scaler model to fit the X-train data
X_scaler = scaler.fit(X_train)
# Transform the X_train and X_test DataFrames using the X_scaler
X_train_scaled = X_scaler.transform(X_train)
X_test_scaled = X_scaler.transform(X_test)
```
### Step 4: Use the `SVC` classifier model from SKLearn's support vector machine (SVM) learning method to fit the training data and make predictions based on the testing data. Review the predictions.
```
# From SVM, instantiate SVC classifier model instance
svm_model = svm.SVC()
# Fit the model to the data using the training data
svm_model = svm_model.fit(X_train_scaled, y_train)
# Use the testing data to make the model predictions
svm_pred = svm_model.predict(X_test_scaled)
# Review the model's predicted values
svm_pred
```
### Step 5: Review the classification report associated with the `SVC` model predictions.
```
# Use a classification report to evaluate the model using the predictions and testing data
svm_testing_report = classification_report(y_test, svm_pred)
# Print the classification report
print(svm_testing_report)
```
### Step 6: Create a predictions DataFrame that contains columns for “Predicted” values, “Actual Returns”, and “Strategy Returns”.
```
# Create a new empty predictions DataFrame.
# Create a predictions DataFrame
predictions_df = pd.DataFrame(index=X_test.index)
# Add the SVM model predictions to the DataFrame
predictions_df['Predicted'] = svm_pred
# Add the actual returns to the DataFrame
predictions_df['Actual Returns'] = signals_df['Actual Returns']
# Add the strategy returns to the DataFrame
predictions_df['Strategy Returns'] = predictions_df["Actual Returns"] * predictions_df['Predicted']
# Review the DataFrame
display(predictions_df.head())
display(predictions_df.tail())
```
### Step 7: Create a cumulative return plot that shows the actual returns vs. the strategy returns. Save a PNG image of this plot. This will serve as a baseline against which to compare the effects of tuning the trading algorithm.
```
# Plot the actual returns versus the SVM strategy returns
(1+predictions_df[["Actual Returns", "Strategy Returns"]]).cumprod().plot(title= "SVM Strategy Returns")
```
---
## Tune the Baseline Trading Algorithm
In this section, you’ll tune, or adjust, the model’s input features to find the parameters that result in the best trading outcomes. You’ll choose the best by comparing the cumulative products of the strategy returns.
### Step 1: Tune the training algorithm by adjusting the size of the training dataset.
To do so, slice your data into different periods. Rerun the notebook with the updated parameters, and record the results in your `README.md` file.
Answer the following question: What impact resulted from increasing or decreasing the training window?
### Step 2: Tune the trading algorithm by adjusting the SMA input features.
Adjust one or both of the windows for the algorithm. Rerun the notebook with the updated parameters, and record the results in your `README.md` file.
Answer the following question: What impact resulted from increasing or decreasing either or both of the SMA windows?
### Step 3: Choose the set of parameters that best improved the trading algorithm returns.
Save a PNG image of the cumulative product of the actual returns vs. the strategy returns, and document your conclusion in your `README.md` file.
---
## Evaluate a New Machine Learning Classifier
In this section, you’ll use the original parameters that the starter code provided. But, you’ll apply them to the performance of a second machine learning model.
### Step 1: Import a new classifier, such as `AdaBoost`, `DecisionTreeClassifier`, or `LogisticRegression`. (For the full list of classifiers, refer to the [Supervised learning page](https://scikit-learn.org/stable/supervised_learning.html) in the scikit-learn documentation.)
```
# Import a new classifier from SKLearn
from sklearn.ensemble import RandomForestClassifier
# Initiate the model instance
model= RandomForestClassifier(n_estimators=1000)
```
### Step 2: Using the original training data as the baseline model, fit another model with the new classifier.
```
# Fit the model using the training data
model = model.fit(X_train_scaled, y_train)
# Use the testing dataset to generate the predictions for the new model
pred = model.predict(X_test_scaled)
# Review the model's predicted values
pred[:10]
```
### Step 3: Backtest the new model to evaluate its performance.
Save a PNG image of the cumulative product of the actual returns vs. the strategy returns for this updated trading algorithm, and write your conclusions in your `README.md` file.
Answer the following questions:
Did this new model perform better or worse than the provided baseline model?
Did this new model perform better or worse than your tuned trading algorithm?
```
# Use a classification report to evaluate the model using the predictions and testing data
# Report based on the RandomForestTree Classifier Model
report=classification_report(y_test, pred)
# Print the classification report
print(report)
# Create a new empty predictions DataFrame.
# Create a predictions DataFrame
rfc_predictions_df = pd.DataFrame(index=X_test.index)
# Add the Random Forest model predictions to the DataFrame
rfc_predictions_df["Random Forest predictions"]= pred
# Add the actual returns to the DataFrame
rfc_predictions_df["Actual Returns"]= signals_df["Actual Returns"]
# Add the strategy returns to the DataFrame
rfc_predictions_df['Strategy Returns'] = rfc_predictions_df['Actual Returns'] * rfc_predictions_df['Random Forest predictions']
# Review the DataFrame
rfc_predictions_df
# Plot the actual returns versus the strategy returns
(1+rfc_predictions_df[["Actual Returns", "Strategy Returns"]]).cumprod().plot(title= "Random Forest Classifier Strategy")
```
The SVM model achieved greater accuracy and produced higher cumulative returns than the RandomForestClassifier.
# SVI Part II: Conditional Independence, Subsampling, and Amortization
## The Goal: Scaling SVI to Large Datasets
For a model with $N$ observations, running the `model` and `guide` and constructing the ELBO involves evaluating log pdf's whose complexity scales badly with $N$. This is a problem if we want to scale to large datasets. Luckily, the ELBO objective naturally supports subsampling provided that our model/guide have some conditional independence structure that we can take advantage of. For example, in the case that the observations are conditionally independent given the latents, the log likelihood term in the ELBO can be approximated with
$$ \sum_{i=1}^N \log p({\bf x}_i | {\bf z}) \approx \frac{N}{M}
\sum_{i\in{\mathcal{I}_M}} \log p({\bf x}_i | {\bf z}) $$
where $\mathcal{I}_M$ is a mini-batch of indices of size $M$ with $M<N$ (for a discussion please see references [1,2]). Great, problem solved! But how do we do this in Pyro?
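Before turning to the Pyro machinery, note that the scaled mini-batch sum above is an unbiased estimator of the full log likelihood. Here is a minimal pure-Python sketch (illustrative only, outside of Pyro) for the Bernoulli coin-flip likelihood used below:

```python
import math
import random

def log_lik(x, f):
    # Bernoulli log pdf: x*log(f) + (1-x)*log(1-f)
    return x * math.log(f) + (1 - x) * math.log(1 - f)

def full_log_lik(data, f):
    # exact sum over all N observations
    return sum(log_lik(x, f) for x in data)

def minibatch_log_lik(data, f, M, rng):
    # unbiased estimate: scale a random size-M subsample by N/M
    N = len(data)
    idx = rng.sample(range(N), M)
    return (N / M) * sum(log_lik(data[i], f) for i in idx)

rng = random.Random(0)
data = [1] * 6 + [0] * 4   # 6 heads and 4 tails, as in the example below
exact = full_log_lik(data, 0.6)
# averaging many mini-batch estimates recovers the exact value
est = sum(minibatch_log_lik(data, 0.6, 5, rng) for _ in range(2000)) / 2000
```

Averaging many such estimates converges to the exact sum, which is what makes subsampling-based SVI consistent.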
## Marking Conditional Independence in Pyro
If a user wants to do this sort of thing in Pyro, he or she first needs to make sure that the model and guide are written in such a way that Pyro can leverage the relevant conditional independencies. Let's see how this is done. Pyro provides two language primitives for marking conditional independencies: `irange` and `iarange`. Let's start with the simpler of the two.
### `irange`
Let's return to the example we used in the [previous tutorial](svi_part_i.html). For convenience let's replicate the main logic of `model` here:
```python
def model(data):
# sample f from the beta prior
f = pyro.sample("latent_fairness", dist.beta, alpha0, beta0)
# loop over the observed data using pyro.sample with the obs keyword argument
for i in range(len(data)):
# observe datapoint i using the bernoulli likelihood
pyro.sample("obs_{}".format(i), dist.bernoulli,
f, obs=data[i])
```
For this model the observations are conditionally independent given the latent random variable `latent_fairness`. To explicitly mark this in Pyro we basically just need to replace the Python builtin `range` with the Pyro construct `irange`:
```python
def model(data):
# sample f from the beta prior
f = pyro.sample("latent_fairness", dist.beta, alpha0, beta0)
# loop over the observed data [WE ONLY CHANGE THE NEXT LINE]
for i in pyro.irange("data_loop", len(data)):
# observe datapoint i using the bernoulli likelihood
pyro.sample("obs_{}".format(i), dist.bernoulli,
f, obs=data[i])
```
We see that `pyro.irange` is very similar to `range` with one main difference: each invocation of `irange` requires the user to provide a unique name. The second argument is an integer just like for `range`.
So far so good. Pyro can now leverage the conditional independency of the observations given the latent random variable. But how does this actually work? Basically `pyro.irange` is implemented using a context manager. At every execution of the body of the `for` loop we enter a new (conditional) independence context which is then exited at the end of the `for` loop body. Let's be very explicit about this:
- because each observed `pyro.sample` statement occurs within a different execution of the body of the `for` loop, Pyro marks each observation as independent
- this independence is properly a _conditional_ independence _given_ `latent_fairness` because `latent_fairness` is sampled _outside_ of the context of `data_loop`.
Before moving on, let's mention some gotchas to be avoided when using `irange`. Consider the following variant of the above code snippet:
```python
# WARNING do not do this!
my_reified_list = list(pyro.irange("data_loop", len(data)))
for i in my_reified_list:
pyro.sample("obs_{}".format(i), dist.bernoulli, f, obs=data[i])
```
This will _not_ achieve the desired behavior, since `list()` will enter and exit the `data_loop` context completely before a single `pyro.sample` statement is called. Similarly, the user needs to take care not to leak mutable computations across the boundary of the context manager, as this may lead to subtle bugs. For example, `pyro.irange` is not appropriate for temporal models where each iteration of a loop depends on the previous iteration; in this case a `range` should be used instead.
## `iarange`
Conceptually `iarange` is the same as `irange` except that it is a vectorized operation (as `torch.arange` is to `range`). As such it potentially enables large speed-ups compared to the explicit `for` loop that appears with `irange`. Let's see how this looks for our running example. First we need `data` to be in the form of a tensor:
```python
data = Variable(torch.zeros(10, 1))
data.data[0:6, 0] = torch.ones(6) # 6 heads and 4 tails
```
Then we have:
```python
with iarange('observe_data'):
pyro.sample('obs', dist.bernoulli, f, obs=data)
```
Let's compare this to the analogous `irange` construction point-by-point:
- just like `irange`, `iarange` requires the user to specify a unique name.
- note that this code snippet only introduces a single (observed) random variable (namely `obs`), since the entire tensor is considered at once.
- since there is no need for an iterator in this case, there is no need to specify the length of the tensor(s) involved in the `iarange` context
Note that the gotchas mentioned in the case of `irange` also apply to `iarange`.
## Subsampling
We now know how to mark conditional independence in Pyro. This is useful in and of itself (see the [dependency tracking section](svi_part_iii.html) in SVI Part III), but we'd also like to do subsampling so that we can do SVI on large datasets. Depending on the structure of the model and guide, Pyro supports several ways of doing subsampling. Let's go through these one by one.
### Automatic subsampling with `irange` and `iarange`
Let's look at the simplest case first, in which we get subsampling for free with one or two additional arguments to `irange` and `iarange`:
```python
for i in pyro.irange("data_loop", len(data), subsample_size=5):
pyro.sample("obs_{}".format(i), dist.bernoulli, f, obs=data[i])
```
That's all there is to it: we just use the argument `subsample_size`. Whenever we run `model()` we now only evaluate the log likelihood for 5 randomly chosen datapoints in `data`; in addition, the log likelihood will be automatically scaled by the appropriate factor of $\tfrac{10}{5} = 2$. What about `iarange`? The incantation is entirely analogous:
```python
with iarange('observe_data', size=10, subsample_size=5) as ind:
pyro.sample('obs', dist.bernoulli, f,
obs=data.index_select(0, ind))
```
Importantly, `iarange` now returns a tensor of indices `ind`, which, in this case will be of length 5. Note that in addition to the argument `subsample_size` we also pass the argument `size` so that `iarange` is aware of the full size of the tensor `data` so that it can compute the correct scaling factor. Just like for `irange`, the user is responsible for selecting the correct datapoints using the indices provided by `iarange`.
Finally, note that the user must pass the argument `use_cuda=True` to `irange` or `iarange` if `data` is on the GPU.
### Custom subsampling strategies with `irange` and `iarange`
Every time the above `model()` is run `irange` and `iarange` will sample new subsample indices. Since this subsampling is stateless, this can lead to some problems: basically for a sufficiently large dataset even after a large number of iterations there's a nonnegligible probability that some of the datapoints will have never been selected. To avoid this the user can take control of subsampling by making use of the `subsample` argument to `irange` and `iarange`. See [the docs](http://docs.pyro.ai/primitives.html#pyro.__init__.iarange) for details.
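One simple stateful strategy, sketched here in plain Python (an illustrative assumption, not Pyro code), shuffles all indices once per epoch so every datapoint is guaranteed to be visited; each resulting index list could then be handed to `irange`/`iarange` through the `subsample` argument mentioned above:

```python
import random

def epoch_minibatches(N, M, rng):
    # shuffle all N indices once per epoch, then yield size-M mini-batches,
    # so every datapoint is visited exactly once per epoch
    idx = list(range(N))
    rng.shuffle(idx)
    for start in range(0, N, M):
        yield idx[start:start + M]

rng = random.Random(0)
# for a dataset of 10 points and mini-batches of 5, one epoch yields
# two disjoint batches that together cover all indices
batches = list(epoch_minibatches(10, 5, rng))
```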
### Subsampling when there are only local random variables
We have in mind a model with a joint probability density given by
$$ p({\bf x}, {\bf z}) = \prod_{i=1}^N p({\bf x}_i | {\bf z}_i) p({\bf z}_i) $$
For a model with this dependency structure the scale factor introduced by subsampling scales all the terms in the ELBO by the same amount. Consequently there's no need to invoke any special Pyro constructs. This is the case, for example, for a vanilla VAE. This explains why for the VAE it's permissible for the user to take complete control over subsampling and pass mini-batches directly to the model and guide without using `irange` or `iarange`. To see how this looks in detail, see the [VAE tutorial](vae.html)
### Subsampling when there are both global and local random variables
In the coin flip examples above `irange` and `iarange` appeared in the model but not in the guide, since the only thing being subsampled was the observations. Let's look at a more complicated example where subsampling appears in both the model and guide. To make things simple let's keep the discussion somewhat abstract and avoid writing a complete model and guide.
Consider the model specified by the following joint distribution:
$$ p({\bf x}, {\bf z}, \beta) = p(\beta)
\prod_{i=1}^N p({\bf x}_i | {\bf z}_i) p({\bf z}_i | \beta) $$
There are $N$ observations $\{ {\bf x}_i \}$ and $N$ local latent random variables
$\{ {\bf z}_i \}$. There is also a global latent random variable $\beta$. Our guide will be factorized as
$$ q({\bf z}, \beta) = q(\beta) \prod_{i=1}^N q({\bf z}_i | \beta, \lambda_i) $$
Here we've been explicit about introducing $N$ local variational parameters
$\{\lambda_i \}$, while the other variational parameters are left implicit. Both the model and guide have conditional independencies. In particular, on the model side, given the $\{ {\bf z}_i \}$ the observations $\{ {\bf x}_i \}$ are independent. In addition, given $\beta$ the latent random variables $\{\bf {z}_i \}$ are independent. On the guide side, given the variational parameters $\{\lambda_i \}$ and $\beta$ the latent random variables $\{\bf {z}_i \}$ are independent. To mark these conditional independencies in Pyro and do subsampling we need to make use of either `irange` or `iarange` in _both_ the model _and_ the guide. Let's sketch out the basic logic using `irange` (a more complete piece of code would include `pyro.param` statements, etc.). First, the model:
```python
def model(data):
beta = pyro.sample("beta", ...) # sample the global RV
for i in pyro.irange("locals", len(data)):
z_i = pyro.sample("z_{}".format(i), ...)
# compute the parameter used to define the observation
# likelihood using the local random variable
theta_i = compute_something(z_i)
pyro.sample("obs_{}".format(i), dist.mydist,
theta_i, obs=data[i])
```
Note that in contrast to our running coin flip example, here we have `pyro.sample` statements both inside and outside of the `irange` context. Next the guide:
```python
def guide(data):
beta = pyro.sample("beta", ...) # sample the global RV
for i in pyro.irange("locals", len(data), subsample_size=5):
# sample the local RVs
pyro.sample("z_{}".format(i), ..., lambda_i)
```
Note that crucially the indices will only be subsampled once in the guide; the Pyro backend makes sure that the same set of indices is used during execution of the model. For this reason `subsample_size` only needs to be specified in the guide.
## Amortization
Let's again consider a model with global and local latent random variables and local variational parameters:
$$ p({\bf x}, {\bf z}, \beta) = p(\beta)
\prod_{i=1}^N p({\bf x}_i | {\bf z}_i) p({\bf z}_i | \beta) \qquad \qquad
q({\bf z}, \beta) = q(\beta) \prod_{i=1}^N q({\bf z}_i | \beta, \lambda_i) $$
For small to medium-sized $N$ using local variational parameters like this can be a good approach. If $N$ is large, however, the fact that the space we're doing optimization over grows with $N$ can be a real problem. One way to avoid this nasty growth with the size of the dataset is *amortization*.
This works as follows. Instead of introducing local variational parameters, we're going to learn a single parametric function $f(\cdot)$ and work with a variational distribution that has the form
$$q(\beta) \prod_{i=1}^N q({\bf z}_i | f({\bf x}_i))$$
The function $f(\cdot)$—which basically maps a given observation to a set of variational parameters tailored to that datapoint—will need to be sufficiently rich to capture the posterior accurately, but now we can handle large datasets without having to introduce an obscene number of variational parameters.
This approach has other benefits too: for example, during learning $f(\cdot)$ effectively allows us to share statistical power among different datapoints. Note that this is precisely the approach used in the [VAE](vae.html).
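To make the contrast with $N$ local parameters $\lambda_i$ concrete, here is a toy, plain-Python sketch (hypothetical names and functional forms, not Pyro or VAE code) of a single shared map $f(\cdot)$ producing per-datapoint variational parameters:

```python
import math

def make_amortized_f(w, b):
    # one shared pair of parameters (w, b) plays the role of f(.):
    # it maps any datapoint x to that datapoint's variational
    # parameters (mu, sigma); the linear and exp forms are illustrative
    def f(x):
        mu = w * x + b
        sigma = math.exp(0.1 * x)  # exp keeps the scale positive
        return mu, sigma
    return f

f = make_amortized_f(w=0.5, b=-1.0)
# the same two numbers serve every datapoint, instead of N
# separate lambda_i parameter vectors
params = [f(x) for x in [0.0, 2.0, 4.0]]
```

In practice $f(\cdot)$ is a neural network (the encoder of a VAE) rather than a fixed linear map, but the parameter-sharing idea is the same.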
## References
[1] `Stochastic Variational Inference`,
<br/>
Matthew D. Hoffman, David M. Blei, Chong Wang, John Paisley
[2] `Auto-Encoding Variational Bayes`,<br/>
Diederik P Kingma, Max Welling
| github_jupyter |
# Students Scores Prediction
Predicting the percentage score of a student based on the number of study hours using a simple linear regressor.
### Data Importing
First, we need to import our data to our environment using read_csv() method from pandas library.
```
# import pandas under alias pd
import pandas as pd
# read our csv file into 'student_scores' using read_csv(), passing it the path of our file
student_scores = pd.read_csv('https://raw.githubusercontent.com/AdiPersonalWorks/Random/master/student_scores%20-%20student_scores.csv')
print('Data imported successfully!')
```
### Data Exploration
We need to know more information about our dataset, so we use the .head() and .info() methods of our pandas DataFrame 'student_scores'.
```
# view our first data observations
student_scores.head()
# view meta-data about our dataset
student_scores.info()
```
### Quantitative EDA (Descriptive Statistics)
We need some statistics that describe our dataset.
```
student_scores.describe()
```
There is another statistic that describes the correlation between 2 variables, called Pearson's r. We can see the correlation matrix between all variables in our pandas DataFrame by using the .corr() method on our DataFrame object (student_scores).
```
# view correlation between student_scores variables
student_scores.corr()
```
#### We can see that there is 'High Positive Correlation' between the two variables in our dataset 'Hours' and 'Scores' (0.976), so we can predict each variable by the other.
#### NOTE: "Correlation does not imply causation".
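As an aside, the value reported by .corr() above is Pearson's r; here is a from-scratch sketch of the formula (illustrative only, not the pandas implementation):

```
import math

def pearson_r(xs, ys):
    # Pearson's r = cov(x, y) / (std(x) * std(y))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# perfectly linear toy data gives r = 1.0
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))
```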
### Graphical EDA
We need now to see visually our data points with our only feature variable on the x-axis and our target variable on the y-axis.
```
# import our helpful libraries under aliases plt for pyplot, and sns for seaborn
import matplotlib.pyplot as plt
import seaborn as sns
# plotting scatter plot using seaborn and setting up our plot title
sns.scatterplot(x='Hours', y='Scores', data=student_scores)
plt.title('Students Studying Hours vs Students Scores')
plt.show()
```
So, we can see here from the value of Pearson's r _which captures the linear relationship between 2 variables_ and from the visuals that there is a strong linear relationship between the 2 variables, and we can use Linear Regression to find the formula that best describes that relation.
### Data Modeling using Linear Regression
We will use a simple linear regression model to train on our dataset so can find the best fit model (formula).
We need first to split our data into a training set and a validation set, so we can measure our model's performance on data it has never seen before; then we train the model on our training data, and finally we measure our score.
```
# import our train_test_split function from sklearn.model_selection
from sklearn.model_selection import train_test_split
# Data Preparing
X = pd.DataFrame(student_scores.Hours)
y = pd.DataFrame(student_scores.Scores)
# split our dataset into training data and validation data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
# Model building
from sklearn.linear_model import LinearRegression
regressor = LinearRegression()
# Model Training
regressor.fit(X_train, y_train)
print('Training Complete!')
```
#### We can plot now our regression line which best fit our data.
```
# Plotting the regression line
slope = regressor.coef_
y_intercept = regressor.intercept_
linear_equation = slope * X + y_intercept
# Plotting for the training & test data
plt.scatter(X, y)
plt.plot(X, linear_equation);
plt.show()
```
### Making Predictions
Now that we have trained our algorithm, it's time to make some predictions on our testing data.
```
# Model Predictions on validation set
preds = regressor.predict(X_test)
pd.DataFrame({'Predicted': preds.flatten(), 'Actual': y_test.values.flatten()})
```
### Model Evaluation
After using our model for prediction, let's evaluate the model with the mean absolute error (MAE) metric to measure its performance.
```
# Model performance measuring by using mean absolute error (MAE) metric
from sklearn.metrics import mean_absolute_error
print('MAE:', mean_absolute_error(y_test, preds))
```
We can also use the model's .score() method, which uses the R-squared metric by default.
```
print('R2:', regressor.score(X_test, y_test))
```
### What will be predicted score if a student studies for 9.25 hrs/day?
```
print('Predicted score for student studies for 9.25 hrs/day = ', regressor.predict([[9.25]]))
```
| github_jupyter |
# How to Create NBA Shot Charts in Python #
In this post I go over how to extract a player's shot chart data and then plot it using matplotlib and seaborn.
```
%matplotlib inline
import requests
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
import json
```
## Getting the data ##
Getting the data from stats.nba.com is pretty straightforward. While there isn't a public API provided by the NBA,
we can actually access the API that the NBA uses for stats.nba.com using the requests library.
[This blog post](http://www.gregreda.com/2015/02/15/web-scraping-finding-the-api/)
by Greg Reda does a great job on explaining how to access this API (or finding an API to any web app for that matter).
```
playerID='2200'
shot_chart_url ='http://stats.nba.com/stats/shotchartdetail?CFID=33&CFPARAMS=2015-16&' \
'ContextFilter=&ContextMeasure=FGA&DateFrom=&DateTo=&GameID=&GameSegment=&LastNGames=0&' \
'LeagueID=00&Location=&MeasureType=Base&Month=0&OpponentTeamID=0&Outcome=&PaceAdjust=N&' \
'PerMode=PerGame&Period=0&PlayerID='+playerID+'&PlusMinus=N&Position=&Rank=N&RookieYear=&' \
'Season=2015-16&SeasonSegment=&SeasonType=Regular+Season&TeamID=0&VsConference=&' \
'VsDivision=&mode=Advanced&showDetails=0&showShots=1&showZones=0'
print(shot_chart_url)
```
The above url sends us to the JSON file containing the data we want.
Also note that the url contains the various API parameters used to access the data.
The PlayerID parameter in the url is set from the playerID variable defined above (James Harden's PlayerID, for example, is 201935).
Now lets use requests to get the data we want
```
# Get the webpage containing the data
response = requests.get(shot_chart_url)
# Grab the headers to be used as column headers for our DataFrame
headers = response.json()['resultSets'][0]['headers']
# Grab the shot chart data
shots = response.json()['resultSets'][0]['rowSet']
```
Create a pandas DataFrame using the scraped shot chart data.
```
shot_df = pd.DataFrame(shots, columns=headers)
# View the head of the DataFrame and all its columns
from IPython.display import display
with pd.option_context('display.max_columns', None):
display(shot_df.head())
```
The above shot chart data contains all the field goal attempts James Harden took during the 2015-16
regular season. The data we want is found in LOC_X and LOC_Y. These are coordinate values for each shot
attempt, which can then be plotted onto a set of axes that represent the basketball court.
### Plotting the Shot Chart Data ###
Lets just quickly plot the data just too see how it looks.
```
sns.set_style("white")
sns.set_color_codes()
plt.figure(figsize=(12,11))
plt.scatter(shot_df.LOC_X, shot_df.LOC_Y)
plt.show()
```
Please note that the above plot misrepresents the data. The x-axis values are the inverse
of what they actually should be. Lets plot the shots taken from only the right side to see
this issue.
```
right = shot_df[shot_df.SHOT_ZONE_AREA == "Right Side(R)"]
plt.figure(figsize=(12,11))
plt.scatter(right.LOC_X, right.LOC_Y)
plt.xlim(-300,300)
plt.ylim(-100,500)
plt.show()
```
As we can see, the shots categorized as coming from the "Right Side(R)",
which should appear to the viewer's right, are actually plotted to the left side of the hoop.
This is something we will need to fix when creating our final shot chart.
### Drawing the Court ###
But first we need to figure out how to draw the court lines onto our plot. By looking at the first plot and
at the data we can roughly estimate that the center of the hoop is at the origin. We can also estimate that
every 10 units on the x and y axes represents one foot. We can verify this by just looking at the first
observation in our DataFrame. That shot was taken from the Right Corner 3 spot from a distance of 22 feet
with a LOC_X value of 226. So the shot was taken from about 22.6 feet to the right of the hoop. Now that we
know this we can actually draw the court onto our plot.
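As a quick sanity check on that scale, a small helper (an illustrative sketch, not part of the original notebook) converts a shot's coordinates to a distance in feet:

```
import math

def shot_distance_ft(loc_x, loc_y):
    # the hoop sits at the origin and 10 coordinate units equal 1 foot,
    # so the Euclidean distance divided by 10 gives feet
    return math.hypot(loc_x, loc_y) / 10.0

# the corner three from the first observation: LOC_X of 226 gives 22.6 ft
print(round(shot_distance_ft(226, 0), 1))
```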
The dimensions of a basketball court can be seen [here](http://www.sportscourtdimensions.com/wp-content/uploads/2015/02/nba_court_dimensions_h.png), and [here](http://www.sportsknowhow.com/basketball/dimensions/nba-basketball-court-dimensions.html).
Using those dimensions we can convert them to fit the scale of our plot and just draw them using
[Matplotlib Patches](http://matplotlib.org/api/patches_api.html). We'll be using Circle, Rectangle, and [Arc](http://matplotlib.org/api/patches_api.html#matplotlib.patches.Arc) objects to draw our court.
Now to create our function that draws our basketball court.
NOTE: While you can draw lines onto the plot using [Line2D](http://matplotlib.org/api/lines_api.html?highlight=line#matplotlib.lines.Line2D) I found it more convenient to use Rectangles (without a height or width) instead.
EDIT (Aug 4, 2015): I made a mistake in drawing the outer lines and the half court arcs. The outer lines' height was changed from the incorrect value of 442.5 to 470. The y-values for the centers of the center court arcs were changed from 395 to 422.5. The ylim values for the plots were changed from (395, -47.5) to (422.5, -47.5)
```
from matplotlib.patches import Circle, Rectangle, Arc
def draw_court(ax=None, color='black', lw=2, outer_lines=False):
# If an axes object isn't provided to plot onto, just get current one
if ax is None:
ax = plt.gca()
# Create the various parts of an NBA basketball court
# Create the basketball hoop
# Diameter of a hoop is 18" so it has a radius of 9", which is a value
# 7.5 in our coordinate system
hoop = Circle((0, 0), radius=7.5, linewidth=lw, color=color, fill=False)
# Create backboard
backboard = Rectangle((-30, -7.5), 60, -1, linewidth=lw, color=color)
# The paint
    # Create the outer box of the paint, width=16ft, height=19ft
outer_box = Rectangle((-80, -47.5), 160, 190, linewidth=lw, color=color,
fill=False)
    # Create the inner box of the paint, width=12ft, height=19ft
inner_box = Rectangle((-60, -47.5), 120, 190, linewidth=lw, color=color,
fill=False)
# Create free throw top arc
top_free_throw = Arc((0, 142.5), 120, 120, theta1=0, theta2=180,
linewidth=lw, color=color, fill=False)
# Create free throw bottom arc
bottom_free_throw = Arc((0, 142.5), 120, 120, theta1=180, theta2=0,
linewidth=lw, color=color, linestyle='dashed')
# Restricted Zone, it is an arc with 4ft radius from center of the hoop
restricted = Arc((0, 0), 80, 80, theta1=0, theta2=180, linewidth=lw,
color=color)
# Three point line
# Create the side 3pt lines, they are 14ft long before they begin to arc
corner_three_a = Rectangle((-220, -47.5), 0, 140, linewidth=lw,
color=color)
corner_three_b = Rectangle((220, -47.5), 0, 140, linewidth=lw, color=color)
# 3pt arc - center of arc will be the hoop, arc is 23'9" away from hoop
# I just played around with the theta values until they lined up with the
# threes
three_arc = Arc((0, 0), 475, 475, theta1=22, theta2=158, linewidth=lw,
color=color)
# Center Court
center_outer_arc = Arc((0, 422.5), 120, 120, theta1=180, theta2=0,
linewidth=lw, color=color)
center_inner_arc = Arc((0, 422.5), 40, 40, theta1=180, theta2=0,
linewidth=lw, color=color)
# List of the court elements to be plotted onto the axes
court_elements = [hoop, backboard, outer_box, inner_box, top_free_throw,
bottom_free_throw, restricted, corner_three_a,
corner_three_b, three_arc, center_outer_arc,
center_inner_arc]
if outer_lines:
# Draw the half court line, baseline and side out bound lines
outer_lines = Rectangle((-250, -47.5), 500, 470, linewidth=lw,
color=color, fill=False)
court_elements.append(outer_lines)
#Add the court elements onto the axes
for element in court_elements:
ax.add_patch(element)
return ax
```
Lets draw our court
```
plt.figure(figsize=(12,11))
draw_court(outer_lines=True)
plt.xlim(-300,300)
plt.ylim(-100,500)
plt.show()
```
### Creating some Shot Charts ###
Now plot our properly adjusted shot chart data along with the court. We can adjust
the x-values in two ways. We can either pass the negated values of LOC_X to
plt.scatter or we can pass descending values to plt.xlim . We'll do the latter to plot
our shot chart.
```
plt.figure(figsize=(12,11))
plt.scatter(shot_df.LOC_X, shot_df.LOC_Y)
draw_court(outer_lines=True)
# Descending values along the axis from left to right
plt.xlim(300,-300)
plt.show()
```
Lets orient our shot chart with the hoop by the top of the chart, which is the same orientation as the shot charts on stats.nba.com. We do this by setting descending y-values from the bottom to the top of the y-axis. When we do this we no longer need to adjust the x-values of our plot.
```
plt.figure(figsize=(12,11))
plt.scatter(shot_df.LOC_X, shot_df.LOC_Y)
draw_court(outer_lines=True)
# Adjust plot limits to just fit in half court
plt.xlim(-250,250)
# Descending values along th y axis from bottom to top
# in order to place the hoop by the top of plot
plt.ylim(422.5, -47.5)
# get rid of axis tick labels
plt.tick_params(labelbottom=False, labelleft=False)
plt.show()
```
Lets create a few shot charts using jointplot from seaborn .
```
# create our jointplot
joint_shot_chart = sns.jointplot(shot_df.LOC_X, shot_df.LOC_Y,
stat_func=None,kind='scatter', space=0, alpha=0.5)
joint_shot_chart.fig.set_size_inches(12,11)
# A joint plot has 3 Axes, the first one called ax_joint
# is the one we want to draw our court onto and adjust some other settings
ax = joint_shot_chart.ax_joint
draw_court(ax, outer_lines=True)
# Adjust the axis limits and orientation of the plot in order
# to plot half court, with the hoop by the top of the plot
ax.set_xlim(-250,250)
ax.set_ylim(422.5, -47.5)
# Get rid of axis labels and tick marks
ax.set_xlabel('')
ax.set_ylabel('')
ax.tick_params(labelbottom=False, labelleft=False)
# Add a title
ax.set_title('James Harden FGA \n2015-16 Reg. Season',
y=1.2, fontsize=18)
# Add Data Source and Author
authors="""Data Source: stats.nba.com
Author: Juan Ignacio Gil
Original code by Savvas Tjortjoglou (savvastjortjoglou.com)"""
ax.text(-250,460,authors,fontsize=12)
plt.show()
```
### Getting a Player's Image ###
We could also scrape James Harden's picture from stats.nba.com and place it on our plot.
We can find his image at [this url](http://stats.nba.com/media/players/230x185/201935.png).
To retrieve the image for our plot we can use urlretrieve from urllib.request as follows:
```
import urllib.request
# we pass in the link to the image as the 1st argument
# the 2nd argument tells urlretrieve what we want to scrape
pic = urllib.request.urlretrieve("http://stats.nba.com/media/players/230x185/"+playerID+".png",
playerID+".png")
# urlretrieve returns a tuple with our image as the first
# element and imread reads in the image as a
# mutlidimensional numpy array so matplotlib can plot it
harden_pic = plt.imread(pic[0])
# plot the image
plt.imshow(harden_pic)
plt.show()
```
Now to plot Harden's face on a jointplot we will import OffsetImage from
matplotlib.offsetbox, which will allow us to place the image at the top right corner of the plot.
So lets create our shot chart like we did above, but this time we will create a [KDE](https://en.wikipedia.org/wiki/Kernel_density_estimation) jointplot and at the end add
on our image.
```
from matplotlib.offsetbox import OffsetImage
# create our jointplot
# get our colormap for the main kde plot
# Note we can extract a color from cmap to use for
# the plots that lie on the side and top axes
cmap=plt.cm.YlOrRd_r
# n_levels sets the number of contour lines for the main kde plot
joint_shot_chart = sns.jointplot(shot_df.LOC_X, shot_df.LOC_Y, stat_func=None,
kind='kde', space=0, color=cmap(0.1),
cmap=cmap, n_levels=50)
joint_shot_chart.fig.set_size_inches(12,11)
# A joint plot has 3 Axes, the first one called ax_joint
# is the one we want to draw our court onto and adjust some other settings
ax = joint_shot_chart.ax_joint
draw_court(ax,outer_lines=True)
# Adjust the axis limits and orientation of the plot in order
# to plot half court, with the hoop by the top of the plot
ax.set_xlim(-250,250)
ax.set_ylim(422.5, -47.5)
# Get rid of axis labels and tick marks
ax.set_xlabel('')
ax.set_ylabel('')
ax.tick_params(labelbottom=False, labelleft=False)
# Add a title
ax.set_title('James Harden FGA \n2015-16 Reg. Season',
y=1.2, fontsize=18)
# Add Data Source and Author
ax.text(-250,460,authors,fontsize=12)
# Add Harden's image to the top right
# First create our OffSetImage by passing in our image
# and set the zoom level to make the image small enough
# to fit on our plot
img = OffsetImage(harden_pic, zoom=0.6)
# Pass in a tuple of x,y coordinates to set_offset
# to place the plot where you want, I just played around
# with the values until I found a spot where I wanted
# the image to be
img.set_offset((625,621))
# add the image
ax.add_artist(img)
plt.show()
```
And another jointplot but with hexbins.
```
# create our jointplot
cmap=plt.cm.gist_heat_r
joint_shot_chart = sns.jointplot(shot_df.LOC_X, shot_df.LOC_Y, stat_func=None,
kind='hex', space=0, color=cmap(.2), cmap=cmap)
joint_shot_chart.fig.set_size_inches(12,11)
# A joint plot has 3 Axes, the first one called ax_joint
# is the one we want to draw our court onto
ax = joint_shot_chart.ax_joint
draw_court(ax)
# Adjust the axis limits and orientation of the plot in order
# to plot half court, with the hoop by the top of the plot
ax.set_xlim(-250,250)
ax.set_ylim(422.5, -47.5)
# Get rid of axis labels and tick marks
ax.set_xlabel('')
ax.set_ylabel('')
ax.tick_params(labelbottom=False, labelleft=False)
# Add a title
ax.set_title('FGA 2015-16 Reg. Season', y=1.2, fontsize=14)
# Add Data Source and Author
ax.text(-250,450,authors, fontsize=12)
# Add James Harden's image to the top right
img = OffsetImage(harden_pic, zoom=0.6)
img.set_offset((625,621))
ax.add_artist(img)
plt.show()
```
| github_jupyter |
```
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0' # specify GPUs locally
package_paths = [
'./input/pytorch-image-models/pytorch-image-models-master', #'../input/efficientnet-pytorch-07/efficientnet_pytorch-0.7.0'
'./input/pytorch-gradual-warmup-lr-master'
]
import sys
for pth in package_paths:
sys.path.append(pth)
from glob import glob
from sklearn.model_selection import GroupKFold, StratifiedKFold
import cv2
from skimage import io
import torch
from torch import nn
from datetime import datetime
import time
import random
import torchvision
from torchvision import transforms
import pandas as pd
import numpy as np
from tqdm import tqdm
import matplotlib.pyplot as plt
from torch.utils.data import Dataset,DataLoader
from torch.utils.data.sampler import SequentialSampler, RandomSampler
from torch.cuda.amp import autocast, GradScaler
from torch.nn.modules.loss import _WeightedLoss
import torch.nn.functional as F
import timm
import sklearn
import warnings
import joblib
from sklearn.metrics import roc_auc_score, log_loss
from sklearn import metrics
#from efficientnet_pytorch import EfficientNet
from scipy.ndimage.interpolation import zoom
from adamp import AdamP
CFG = {
'fold_num': 5,
'seed': 719,
'model_arch': 'regnety_040',
'model_path' : 'regnety_040_bs24_epoch20_reset_swalr_step',
'img_size': 512,
'epochs': 20,
'train_bs': 24,
'valid_bs': 8,
'T_0': 10,
'lr': 1e-4,
'min_lr': 1e-6,
'weight_decay':1e-6,
'num_workers': 4,
    'accum_iter': 1, # support batch accumulation for backprop with an effectively larger batch size
'verbose_step': 1,
'device': 'cuda:0',
'target_size' : 5,
'smoothing' : 0.2
}
if not os.path.isdir(CFG['model_path']):
os.mkdir(CFG['model_path'])
train = pd.read_csv('./input/cassava-leaf-disease-classification/merged.csv')
# delete_id
## 2019 dataset: images where one side is smaller than 500 or larger than 1000
## 2020 dataset: three duplicate images
delete_id = ['train-cbb-1.jpg', 'train-cbb-12.jpg', 'train-cbb-126.jpg', 'train-cbb-134.jpg', 'train-cbb-198.jpg',
'train-cbb-244.jpg', 'train-cbb-245.jpg', 'train-cbb-30.jpg', 'train-cbb-350.jpg', 'train-cbb-369.jpg',
'train-cbb-65.jpg', 'train-cbb-68.jpg', 'train-cbb-77.jpg', 'train-cbsd-1354.jpg', 'train-cbsd-501.jpg',
'train-cgm-418.jpg', 'train-cmd-1145.jpg', 'train-cmd-2080.jpg', 'train-cmd-2096.jpg', 'train-cmd-332.jpg',
'train-cmd-494.jpg', 'train-cmd-745.jpg', 'train-cmd-896.jpg', 'train-cmd-902.jpg', 'train-healthy-118.jpg',
'train-healthy-181.jpg', 'train-healthy-5.jpg','train-cbb-69.jpg', 'train-cbsd-463.jpg', 'train-cgm-547.jpg',
'train-cgm-626.jpg', 'train-cgm-66.jpg', 'train-cgm-768.jpg', 'train-cgm-98.jpg', 'train-cmd-110.jpg',
'train-cmd-1208.jpg', 'train-cmd-1566.jpg', 'train-cmd-1633.jpg', 'train-cmd-1703.jpg', 'train-cmd-1917.jpg',
'train-cmd-2197.jpg', 'train-cmd-2289.jpg', 'train-cmd-2304.jpg', 'train-cmd-2405.jpg', 'train-cmd-2490.jpg',
'train-cmd-412.jpg', 'train-cmd-587.jpg', 'train-cmd-678.jpg', 'train-healthy-250.jpg']
delete_id += ['2947932468.jpg', '2252529694.jpg', '2278017076.jpg']
train = train[~train['image_id'].isin(delete_id)].reset_index(drop=True)
print(train.shape)
submission = pd.read_csv('./input/cassava-leaf-disease-classification/sample_submission.csv')
submission.head()
def seed_everything(seed):
random.seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
def get_img(path):
im_bgr = cv2.imread(path)
im_rgb = im_bgr[:, :, ::-1]
#print(im_rgb)
return im_rgb
def rand_bbox(size, lam):
W = size[0]
H = size[1]
cut_rat = np.sqrt(1. - lam)
cut_w = int(W * cut_rat)  # np.int was removed in NumPy 1.24; use the builtin
cut_h = int(H * cut_rat)
# uniform
cx = np.random.randint(W)
cy = np.random.randint(H)
bbx1 = np.clip(cx - cut_w // 2, 0, W)
bby1 = np.clip(cy - cut_h // 2, 0, H)
bbx2 = np.clip(cx + cut_w // 2, 0, W)
bby2 = np.clip(cy + cut_h // 2, 0, H)
return bbx1, bby1, bbx2, bby2
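# --- Hedged aside (not part of the original notebook) ---
# rand_bbox is the standard CutMix helper: `lam` sets the target patch area,
# and because the box is clipped to the image, the label weight is recomputed
# from the area actually kept. Self-contained illustration with a fixed
# example box (the numbers below are made up):
import numpy as np
W, H = 512, 512
bbx1, bby1, bbx2, bby2 = 128, 128, 384, 384  # an example box from rand_bbox
lam_adjusted = 1 - ((bbx2 - bbx1) * (bby2 - bby1)) / (W * H)
# mixed target: loss = lam_adjusted * loss(y_a) + (1 - lam_adjusted) * loss(y_b)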
class CassavaDataset(Dataset):
def __init__(self, df, data_root,
transforms=None,
output_label=True,
):
super().__init__()
self.df = df.reset_index(drop=True).copy()
self.transforms = transforms
self.data_root = data_root
self.output_label = output_label
self.labels = self.df['label'].values
def __len__(self):
return self.df.shape[0]
def __getitem__(self, index: int):
# get labels
if self.output_label:
target = self.labels[index]
img = get_img("{}/{}".format(self.data_root, self.df.loc[index]['image_id']))
if self.transforms:
img = self.transforms(image=img)['image']
if self.output_label == True:
return img, target
else:
return img
from albumentations.core.transforms_interface import DualTransform
from albumentations.augmentations import functional as F  # note: this shadows torch.nn.functional imported as F above
class GridMask(DualTransform):
"""GridMask augmentation for image classification and object detection.
Author: Qishen Ha
Email: haqishen@gmail.com
2020/01/29
Args:
num_grid (int): number of grid in a row or column.
fill_value (int, float, list of int, list of float): value for dropped pixels.
rotate ((int, int) or int): range from which a random angle is picked. If rotate is a single int
an angle is picked from (-rotate, rotate). Default: (-90, 90)
mode (int):
0 - cropout a quarter of the square of each grid (left top)
1 - reserve a quarter of the square of each grid (left top)
2 - cropout 2 quarter of the square of each grid (left top & right bottom)
Targets:
image, mask
Image types:
uint8, float32
Reference:
| https://arxiv.org/abs/2001.04086
| https://github.com/akuxcw/GridMask
"""
def __init__(self, num_grid=3, fill_value=0, rotate=0, mode=0, always_apply=False, p=0.5):
super(GridMask, self).__init__(always_apply, p)
if isinstance(num_grid, int):
num_grid = (num_grid, num_grid)
if isinstance(rotate, int):
rotate = (-rotate, rotate)
self.num_grid = num_grid
self.fill_value = fill_value
self.rotate = rotate
self.mode = mode
self.masks = None
self.rand_h_max = []
self.rand_w_max = []
def init_masks(self, height, width):
if self.masks is None:
self.masks = []
n_masks = self.num_grid[1] - self.num_grid[0] + 1
for n, n_g in enumerate(range(self.num_grid[0], self.num_grid[1] + 1, 1)):
grid_h = height / n_g
grid_w = width / n_g
this_mask = np.ones((int((n_g + 1) * grid_h), int((n_g + 1) * grid_w))).astype(np.uint8)
for i in range(n_g + 1):
for j in range(n_g + 1):
this_mask[
int(i * grid_h) : int(i * grid_h + grid_h / 2),
int(j * grid_w) : int(j * grid_w + grid_w / 2)
] = self.fill_value
if self.mode == 2:
this_mask[
int(i * grid_h + grid_h / 2) : int(i * grid_h + grid_h),
int(j * grid_w + grid_w / 2) : int(j * grid_w + grid_w)
] = self.fill_value
if self.mode == 1:
this_mask = 1 - this_mask
self.masks.append(this_mask)
self.rand_h_max.append(grid_h)
self.rand_w_max.append(grid_w)
def apply(self, image, mask, rand_h, rand_w, angle, **params):
h, w = image.shape[:2]
mask = F.rotate(mask, angle) if self.rotate[1] > 0 else mask
mask = mask[:,:,np.newaxis] if image.ndim == 3 else mask
image *= mask[rand_h:rand_h+h, rand_w:rand_w+w].astype(image.dtype)
return image
def get_params_dependent_on_targets(self, params):
img = params['image']
height, width = img.shape[:2]
self.init_masks(height, width)
mid = np.random.randint(len(self.masks))
mask = self.masks[mid]
rand_h = np.random.randint(self.rand_h_max[mid])
rand_w = np.random.randint(self.rand_w_max[mid])
angle = np.random.randint(self.rotate[0], self.rotate[1]) if self.rotate[1] > 0 else 0
return {'mask': mask, 'rand_h': rand_h, 'rand_w': rand_w, 'angle': angle}
@property
def targets_as_params(self):
return ['image']
def get_transform_init_args_names(self):
return ('num_grid', 'fill_value', 'rotate', 'mode')
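# --- Hedged aside (illustrative only, not part of GridMask above) ---
# The core of init_masks in mode 0 is a tiled mask that zeroes the top-left
# quarter of each grid cell; a minimal numpy reconstruction on a toy size:
import numpy as np
def tiny_gridmask(height, width, n_g):
    grid_h, grid_w = height / n_g, width / n_g
    mask = np.ones((height, width), dtype=np.uint8)
    for i in range(n_g):
        for j in range(n_g):
            mask[int(i * grid_h): int(i * grid_h + grid_h / 2),
                 int(j * grid_w): int(j * grid_w + grid_w / 2)] = 0
    return mask
# tiny_gridmask(4, 4, 2) keeps 12 of 16 pixels (four 1x1 holes)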
from albumentations import (
HorizontalFlip, VerticalFlip, IAAPerspective, ShiftScaleRotate, CLAHE, RandomRotate90,
Transpose, ShiftScaleRotate, Blur, OpticalDistortion, GridDistortion, HueSaturationValue,
IAAAdditiveGaussianNoise, GaussNoise, MotionBlur, MedianBlur, IAAPiecewiseAffine, RandomResizedCrop,
IAASharpen, IAAEmboss, RandomBrightnessContrast, Flip, OneOf, Compose, Normalize, Cutout, CoarseDropout, ShiftScaleRotate, CenterCrop, Resize
)
from albumentations.pytorch import ToTensorV2
def get_train_transforms():
return Compose([
Resize(600, 800),
RandomResizedCrop(CFG['img_size'], CFG['img_size']),
Transpose(p=0.5),
HorizontalFlip(p=0.5),
VerticalFlip(p=0.5),
ShiftScaleRotate(p=0.5),
HueSaturationValue(hue_shift_limit=0.2, sat_shift_limit=0.2, val_shift_limit=0.2, p=0.5),
RandomBrightnessContrast(brightness_limit=(-0.1,0.1), contrast_limit=(-0.1, 0.1), p=0.5),
Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225], max_pixel_value=255.0, p=1.0),
CoarseDropout(p=0.5),
GridMask(num_grid=3, p=0.5),
ToTensorV2(p=1.0),
], p=1.)
def get_valid_transforms():
return Compose([
Resize(600, 800),
CenterCrop(CFG['img_size'], CFG['img_size'], p=1.),
Resize(CFG['img_size'], CFG['img_size']),
Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225], max_pixel_value=255.0, p=1.0),
ToTensorV2(p=1.0),
], p=1.)
def get_inference_transforms():
return Compose([
Resize(600, 800),
OneOf([
Resize(CFG['img_size'], CFG['img_size'], p=1.),
CenterCrop(CFG['img_size'], CFG['img_size'], p=1.),
RandomResizedCrop(CFG['img_size'], CFG['img_size'], p=1.)
], p=1.),
Transpose(p=0.5),
HorizontalFlip(p=0.5),
#VerticalFlip(p=0.5),
Resize(CFG['img_size'], CFG['img_size']),
Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225], max_pixel_value=255.0, p=1.0),
ToTensorV2(p=1.0),
], p=1.)
class CassvaImgClassifier(nn.Module):
def __init__(self, model_arch, n_class, pretrained=False):
super().__init__()
self.model = timm.create_model(model_arch, pretrained=pretrained)
if model_arch == 'regnety_040':
self.model.head = nn.Sequential(
nn.AdaptiveAvgPool2d((1,1)),
nn.Flatten(),
nn.Linear(1088, n_class)
)
elif model_arch == 'regnety_320':
self.model.head = nn.Sequential(
nn.AdaptiveAvgPool2d((1,1)),
nn.Flatten(),
nn.Linear(3712, n_class)
)
elif model_arch == 'regnety_080':
self.model.head = nn.Sequential(
nn.AdaptiveAvgPool2d((1,1)),
nn.Flatten(),
nn.Linear(2016, n_class)
)
elif model_arch == 'regnety_160':
self.model.head = nn.Sequential(
nn.AdaptiveAvgPool2d((1,1)),
nn.Flatten(),
nn.Linear(3024, n_class)
)
else:
n_features = self.model.classifier.in_features
self.model.classifier = nn.Linear(n_features, n_class)
def forward(self, x):
x = self.model(x)
return x
def prepare_dataloader(df, trn_idx, val_idx, data_root='./input/cassava-leaf-disease-classification/train_images/'):
# from catalyst.data.sampler import BalanceClassSampler
train_ = df.loc[trn_idx,:].reset_index(drop=True)
valid_ = df.loc[val_idx,:].reset_index(drop=True)
train_ds = CassavaDataset(train_, data_root, transforms=get_train_transforms(), output_label=True)
valid_ds = CassavaDataset(valid_, data_root, transforms=get_valid_transforms(), output_label=True)
train_loader = torch.utils.data.DataLoader(
train_ds,
batch_size=CFG['train_bs'],
pin_memory=False,
drop_last=False,
shuffle=True,
num_workers=CFG['num_workers'],
#sampler=BalanceClassSampler(labels=train_['label'].values, mode="downsampling")
)
val_loader = torch.utils.data.DataLoader(
valid_ds,
batch_size=CFG['valid_bs'],
num_workers=CFG['num_workers'],
shuffle=False,
pin_memory=False,
)
return train_loader, val_loader
def train_one_epoch(epoch, model, loss_fn, optimizer, train_loader, device, scheduler=None, schd_batch_update=False):
model.train()
t = time.time()
running_loss = None
# pbar = tqdm(enumerate(train_loader), total=len(train_loader))
for step, (imgs, image_labels) in enumerate(train_loader):
imgs = imgs.to(device).float()
image_labels = image_labels.to(device).long()
with autocast():
image_preds = model(imgs) #output = model(input)
loss = loss_fn(image_preds, image_labels)
scaler.scale(loss).backward()
if running_loss is None:
running_loss = loss.item()
else:
running_loss = running_loss * .99 + loss.item() * .01
if ((step + 1) % CFG['accum_iter'] == 0) or ((step + 1) == len(train_loader)):
scaler.step(optimizer)
scaler.update()
optimizer.zero_grad()
if scheduler is not None and schd_batch_update:
scheduler.step()
if scheduler is not None and not schd_batch_update:
scheduler.step()
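# --- Hedged aside (not in the original) ---
# The running_loss above is an exponential moving average with decay 0.99;
# a self-contained numeric illustration of the same update rule:
ema = None
for loss_value in [1.0, 0.5, 0.25]:
    ema = loss_value if ema is None else ema * .99 + loss_value * .01
# ema == 1.0*.99*.99 + 0.5*.01*.99 + 0.25*.01 == 0.98755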
def valid_one_epoch(epoch, model, loss_fn, val_loader, device, scheduler=None, schd_loss_update=False):
model.eval()
t = time.time()
loss_sum = 0
sample_num = 0
image_preds_all = []
image_targets_all = []
# pbar = tqdm(enumerate(val_loader), total=len(val_loader))
for step, (imgs, image_labels) in enumerate(val_loader):
imgs = imgs.to(device).float()
image_labels = image_labels.to(device).long()
image_preds = model(imgs) #output = model(input)
image_preds_all += [torch.argmax(image_preds, 1).detach().cpu().numpy()]
image_targets_all += [image_labels.detach().cpu().numpy()]
loss = loss_fn(image_preds, image_labels)
loss_sum += loss.item()*image_labels.shape[0]
sample_num += image_labels.shape[0]
# if ((step + 1) % CFG['verbose_step'] == 0) or ((step + 1) == len(val_loader)):
# description = f'epoch {epoch} loss: {loss_sum/sample_num:.4f}'
# pbar.set_description(description)
image_preds_all = np.concatenate(image_preds_all)
image_targets_all = np.concatenate(image_targets_all)
print('epoch = {}'.format(epoch+1), 'validation multi-class accuracy = {:.4f}'.format((image_preds_all==image_targets_all).mean()))
if scheduler is not None:
if schd_loss_update:
scheduler.step(loss_sum/sample_num)
else:
scheduler.step()
def inference_one_epoch(model, data_loader, device):
model.eval()
image_preds_all = []
# pbar = tqdm(enumerate(data_loader), total=len(data_loader))
with torch.no_grad():
for step, (imgs, _labels) in enumerate(data_loader):
imgs = imgs.to(device).float()
image_preds = model(imgs) #output = model(input)
image_preds_all += [torch.softmax(image_preds, 1).detach().cpu().numpy()]
image_preds_all = np.concatenate(image_preds_all, axis=0)
return image_preds_all
# reference: https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/173733
class MyCrossEntropyLoss(_WeightedLoss):
def __init__(self, weight=None, reduction='mean'):
super().__init__(weight=weight, reduction=reduction)
self.weight = weight
self.reduction = reduction
def forward(self, inputs, targets):
lsm = F.log_softmax(inputs, -1)
if self.weight is not None:
lsm = lsm * self.weight.unsqueeze(0)
loss = -(targets * lsm).sum(-1)
if self.reduction == 'sum':
loss = loss.sum()
elif self.reduction == 'mean':
loss = loss.mean()
return loss
# ====================================================
# Label Smoothing
# ====================================================
class LabelSmoothingLoss(nn.Module):
def __init__(self, classes, smoothing=0.0, dim=-1):
super(LabelSmoothingLoss, self).__init__()
self.confidence = 1.0 - smoothing
self.smoothing = smoothing
self.cls = classes
self.dim = dim
def forward(self, pred, target):
pred = pred.log_softmax(dim=self.dim)
with torch.no_grad():
true_dist = torch.zeros_like(pred)
true_dist.fill_(self.smoothing / (self.cls - 1))
true_dist.scatter_(1, target.data.unsqueeze(1), self.confidence)
return torch.mean(torch.sum(-true_dist * pred, dim=self.dim))
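# --- Hedged aside (illustrative only) ---
# LabelSmoothingLoss builds a target distribution with `1 - smoothing` on the
# true class and `smoothing / (classes - 1)` spread over the rest; the same
# math in plain numpy:
import numpy as np
def smooth_targets(n_classes, target, smoothing):
    dist = np.full(n_classes, smoothing / (n_classes - 1))
    dist[target] = 1.0 - smoothing
    return dist
# smooth_targets(5, 2, 0.2) -> [0.05, 0.05, 0.8, 0.05, 0.05]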
from torchcontrib.optim import SWA
from sklearn.metrics import accuracy_score
for c in range(5):
train[c] = 0
folds = StratifiedKFold(n_splits=CFG['fold_num'], shuffle=True, random_state=CFG['seed']).split(np.arange(train.shape[0]), train.label.values)
for fold, (trn_idx, val_idx) in enumerate(folds):
print('Training fold {} started'.format(fold))
print(len(trn_idx), len(val_idx))
train_loader, val_loader = prepare_dataloader(train, trn_idx, val_idx, data_root='./input/cassava-leaf-disease-classification/train/')
device = torch.device(CFG['device'])
model = CassvaImgClassifier(CFG['model_arch'], train.label.nunique(), pretrained=True).to(device)
scaler = GradScaler()
base_opt = AdamP(model.parameters(), lr=CFG['lr'], weight_decay=CFG['weight_decay'])
# base_opt = torch.optim.Adam(model.parameters(), lr=CFG['lr'], weight_decay=CFG['weight_decay'])
optimizer = SWA(base_opt, swa_start=2*len(trn_idx)//CFG['train_bs'], swa_freq=len(trn_idx)//CFG['train_bs'])
scheduler = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(optimizer, T_0=CFG['T_0'], T_mult=1, eta_min=CFG['min_lr'], last_epoch=-1)
loss_tr = LabelSmoothingLoss(classes=CFG['target_size'], smoothing=CFG['smoothing']).to(device)
loss_fn = nn.CrossEntropyLoss().to(device)
for epoch in range(CFG['epochs']):
train_one_epoch(epoch, model, loss_tr, optimizer, train_loader, device, scheduler=scheduler, schd_batch_update=False)
with torch.no_grad():
valid_one_epoch(epoch, model, loss_fn, val_loader, device, scheduler=None, schd_loss_update=False)
optimizer.swap_swa_sgd()
optimizer.bn_update(train_loader, model, device)
with torch.no_grad():
valid_one_epoch(epoch, model, loss_fn, val_loader, device, scheduler=None, schd_loss_update=False)
torch.save(model.state_dict(),'./{}/swa_{}_fold_{}_{}'.format(CFG['model_path'],CFG['model_arch'], fold, epoch))
tst_preds = []
for tta in range(5):
tst_preds += [inference_one_epoch(model, val_loader, device)]
train.loc[val_idx, [0, 1, 2, 3, 4]] = np.mean(tst_preds, axis=0)
del model, optimizer, train_loader, val_loader, scaler, scheduler
torch.cuda.empty_cache()
train['pred'] = np.array(train[[0, 1, 2, 3, 4]]).argmax(axis=1)
print(accuracy_score(train['label'].values, train['pred'].values))
```
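The inference loop above collects softmax outputs over five TTA passes, averages them, and takes the per-image argmax. A minimal, self-contained numpy sketch of that aggregation (the probability values are made up):

```python
import numpy as np

# three hypothetical TTA passes over two images with 5 classes each
tta_preds = [
    np.array([[0.10, 0.20, 0.60, 0.05, 0.05],
              [0.70, 0.10, 0.10, 0.05, 0.05]]),
    np.array([[0.20, 0.10, 0.50, 0.10, 0.10],
              [0.60, 0.20, 0.10, 0.05, 0.05]]),
    np.array([[0.15, 0.15, 0.55, 0.10, 0.05],
              [0.65, 0.15, 0.10, 0.05, 0.05]]),
]
mean_probs = np.mean(tta_preds, axis=0)  # average over TTA passes
labels = mean_probs.argmax(axis=1)       # final per-image predictions
```

Averaging probabilities before the argmax is what makes the TTA ensemble smoother than majority-voting the individual argmaxes.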
<a href="https://colab.research.google.com/github/temiafeye/Colab-Projects/blob/master/Fraud_Detection_Algorithm(Using_SOMs).ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
!pip install numpy
#Build Hybrid Deep Learning Model
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
#Importing The Dataset
from google.colab import files
uploaded = files.upload()
import io  # needed for io.BytesIO below
dataset = pd.read_csv(io.BytesIO(uploaded['Credit_Card_Applications.csv']))
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1].values
# Feature Scaling
from sklearn.preprocessing import MinMaxScaler
sc = MinMaxScaler(feature_range = (0, 1))
X = sc.fit_transform(X)
#Importing the SOM
from google.colab import files
uploaded = files.upload()
# Training the SOM
from minisom import MiniSom
som = MiniSom(x = 10, y = 10, input_len = 15, sigma = 1.0, learning_rate = 0.5)
som.random_weights_init(X)
som.train_random(data = X, num_iteration = 100)
#Visualizing the results
from pylab import bone, pcolor, colorbar, plot, show
bone()
pcolor(som.distance_map().T)
colorbar()
markers = ['o', 's']
colors = ['r', 'g']
for i, x in enumerate(X):
w = som.winner(x)
plot(w[0] + 0.5,
w[1] + 0.5,
markers[y[i]],
markeredgecolor = colors[y[i]],
markerfacecolor = 'None',
markersize = 10,
markeredgewidth = 2)
show()
# Finding the frauds
mappings = som.win_map(X)
frauds = np.concatenate((mappings[(2,4)], mappings[(8,8)]), axis = 0)
frauds = sc.inverse_transform(frauds)
#Part 2 - Create a supervised deep learning model
#Creates a matrix of features
customers = dataset.iloc[:, 1:].values
#Create the dependent variable
is_fraud = np.zeros(len(dataset)) # one entry per customer, initialised to 0 (not fraud)
# loop over the dataset and flag customers whose ID appears in the frauds list
for i in range(len(dataset)):
if dataset.iloc[i,0] in frauds:
is_fraud[i] = 1
#train artificial neural network
# Feature Scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
customers = sc.fit_transform(customers)
from keras.models import Sequential
from keras.layers import Dense
# Initialising the ANN
classifier = Sequential()
# Adding the input layer and the first hidden layer
classifier.add(Dense(units = 2, kernel_initializer = 'uniform', activation = 'relu', input_dim = 15))
# Adding the output layer
classifier.add(Dense(units = 1, kernel_initializer = 'uniform', activation = 'sigmoid'))
# Compiling the ANN
classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
# Fitting the ANN to the Training set
classifier.fit(customers, is_fraud, batch_size = 1, epochs = 2)
# Part 3 - Making predictions and evaluating the model
# Predicting the probabilities of fraud
y_pred = classifier.predict(customers)
y_pred = np.concatenate((dataset.iloc[:,0:1].values, y_pred), axis = 1)
#Sort predictions by the fraud-probability column
y_pred = y_pred[y_pred[:,1].argsort()]
y_pred.shape
```
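The notebook scales features with `MinMaxScaler` and later maps the flagged rows back with `inverse_transform`. A self-contained numpy sketch of that round trip for a single feature (values are made up):

```python
import numpy as np

x = np.array([10.0, 20.0, 40.0])
lo, hi = x.min(), x.max()
scaled = (x - lo) / (hi - lo)       # what fit_transform does per column
restored = scaled * (hi - lo) + lo  # what inverse_transform undoes
```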
```
import pandas as pd
import numpy as np
from sklearn.svm import SVR
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
df = pd.read_csv('../doge_v1.csv')
df = df.set_index(pd.DatetimeIndex(df['Date'].values))
df
df = df.resample('D').ffill()
df.Close.plot(figsize=(16, 2), color="red", label='Close price', lw=2, alpha =.7)
future_days = 1
columnName = str(future_days)+'_day_price_forecast'
#added new column
df[columnName] = df[['Close']].shift(-future_days)
df[['Close', columnName]]
df.info()
X = np.array(df[["High", "Low", "Volume", "Open", "twitter_followers", "reddit_average_posts_48h",
"reddit_average_comments_48h", "reddit_subscribers", "reddit_accounts_active_48h", "forks", "stars",
"subscribers", "total_issues", "closed_issues", "pull_requests_merged", "pull_request_contributors",
"commit_count_4_weeks", "dogecoin_monthly", "dogecoin"]])
print(df.shape)
X = X[:df.shape[0] - future_days]
print(X)
y = np.array(df[columnName])
y = y[:-future_days]
print(y)
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(X,y, test_size = 0.2, shuffle=False)
from sklearn.preprocessing import StandardScaler
StdS_X = StandardScaler()
StdS_y = StandardScaler()
X_l = StdS_X.fit_transform(x_train)
y_p = StdS_y.fit_transform(y_train.reshape(-1,1))
print("Scaled X_l:")
print(X_l)
print("Scaled y_p:")
print(y_p)
StdS_X_test = StandardScaler()
StdS_y_test = StandardScaler()
X_l_test = StdS_X_test.fit_transform(x_test)
y_p_test = StdS_y_test.fit_transform(y_test.reshape(-1,1))
print("Scaled X_l_test:")
print(X_l_test)
print("Scaled y_p_test:")
print(y_p_test)
from sklearn.svm import SVR
svr_sigmoid = SVR(kernel='sigmoid', C = 0.0185, epsilon=0.0002)
svr_sigmoid.fit(X_l, y_p)
from sklearn.linear_model import LinearRegression
# Create and train the Linear Regression Model
lr = LinearRegression()
# Train the model
lr.fit(X_l, y_p)
# Testing Model: Score returns the coefficient of determination R^2 of the prediction.
# The best possible score is 1.0
lr_confidence = lr.score(X_l_test, y_p_test)
print("lr confidence: ", lr_confidence)
svr_linear_confidence = svr_sigmoid.score(X_l_test, y_p_test)
print('svr_linear confidence', svr_linear_confidence)
svr_prediction = svr_sigmoid.predict(X_l_test)
print(svr_prediction)
final_prediction =svr_prediction.reshape(-1,1)
final_prediction = StdS_y_test.inverse_transform(final_prediction)
print(final_prediction)
print(y_test)
print(len(final_prediction))
print(len(y_test))
df
plt.figure(figsize=(17,5))
plt.plot(final_prediction, label='Prediction', lw=2, alpha =.7, color = "green")
plt.plot(y_test, label='Actual', lw=2, alpha =.7, color = "red")
plt.legend(['predicted', "actual"])
plt.title('Prediction vs Actual')
plt.ylabel('Price in USD')
plt.xlabel('Time')
plt.show()
from math import sqrt
from sklearn.metrics import mean_absolute_error, mean_squared_error
print("R^2")
print(svr_linear_confidence)
print("\nMAE")
print(mean_absolute_error(y_test,final_prediction))
print("\nRMSE")
print(sqrt(mean_squared_error(y_test, final_prediction)))
```
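The final metrics printed above reduce to the usual formulas; a self-contained numpy check of MAE and RMSE on made-up values:

```python
import numpy as np

y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.5, 2.0, 2.0])
mae = np.mean(np.abs(y_true - y_pred))           # mean absolute error
rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))  # root mean squared error
```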
```
# THIS SCRIPT IS TO GENERATE AGGREGATIONS OF EXPLANATIONS for interesting FINDINGS
%load_ext autoreload
%autoreload 2
import os
import json
import numpy as np
from matplotlib.colors import LinearSegmentedColormap
import torch.nn.functional as F
import torchvision
from torchvision import models
from torchvision import transforms
import torch
import torchvision
torch.set_num_threads(1)
torch.manual_seed(0)
np.random.seed(0)
from torchvision.models import *
from visualisation.core.utils import device, image_net_postprocessing
from torch import nn
from operator import itemgetter
from visualisation.core.utils import imshow
from IPython.core.debugger import Tracer
NN_flag = True
layer = 4
if NN_flag:
feature_extractor = nn.Sequential(*list(resnet34(pretrained=True).children())[:layer-6]).to(device)
model = resnet34(pretrained=True).to(device)
model.eval()
# %matplotlib notebook
import glob
import matplotlib.pyplot as plt
import numpy as np
import torch
from utils import *
from PIL import Image
plt.rcParams["figure.figsize"]= 16,8
def make_dir(path):
if os.path.isdir(path) == False:
os.mkdir(path)
import glob
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.pyplot import imshow
from visualisation.core.utils import device
from PIL import Image
from torchvision.transforms import ToTensor, Resize, Compose, ToPILImage
from visualisation.core import *
from visualisation.core.utils import image_net_preprocessing
size = 224
# Pre-process the image and convert into a tensor
transform = torchvision.transforms.Compose([
torchvision.transforms.Resize(256),
torchvision.transforms.CenterCrop(224),
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
])
img_num = 50
# methods = ['Conf', 'GradCAM', 'EP', 'SHAP', 'NNs', 'PoolNet', 'AIonly']
methods = ['Conf', 'GradCAM', 'EP', 'NNs', 'PoolNet', 'AIonly']
task = 'Natural'
# Adversarial_Nat
# Added for loading ImageNet classes
def load_imagenet_label_map():
"""
Load ImageNet label dictionary.
:return: dict mapping class index to label text
"""
input_f = open("input_txt_files/imagenet_classes.txt")
label_map = {}
for line in input_f:
parts = line.strip().split(": ")
(num, label) = (int(parts[0]), parts[1].replace('"', ""))
label_map[num] = label
input_f.close()
return label_map
# Added for loading ImageNet classes
def load_imagenet_id_map():
"""
Load ImageNet ID dictionary.
:return: dict mapping WordNet synset id to label text
"""
input_f = open("input_txt_files/synset_words.txt")
label_map = {}
for line in input_f:
parts = line.strip().split(" ")
(num, label) = (parts[0], ' '.join(parts[1:]))
label_map[num] = label
input_f.close()
return label_map
def convert_imagenet_label_to_id(label_map, key_list, val_list, prediction_class):
"""
Convert imagenet label to ID: for example - 245 -> "French bulldog" -> n02108915
:param label_map:
:param key_list:
:param val_list:
:param prediction_class:
:return:
"""
class_to_label = label_map[prediction_class]
prediction_id = key_list[val_list.index(class_to_label)]
return prediction_id
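# --- Hedged aside (toy data, not the real ImageNet mapping files) ---
# The conversion above chains two lookups: class index -> label text -> synset
# id. The same two-step lookup on a one-entry toy mapping:
_toy_label_map = {245: 'French bulldog'}
_toy_id_map = {'n02108915': 'French bulldog'}
_keys, _vals = list(_toy_id_map.keys()), list(_toy_id_map.values())
_toy_pred_id = _keys[_vals.index(_toy_label_map[245])]  # 'n02108915'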
# gt_dict = load_imagenet_validation_gt()
id_map = load_imagenet_id_map()
label_map = load_imagenet_label_map()
key_list = list(id_map.keys())
val_list = list(id_map.values())
def convert_imagenet_id_to_label(label_map, key_list, val_list, class_id):
"""
Convert imagenet label to ID: for example - n02108915 -> "French bulldog" -> 245
:param label_map:
:param key_list:
:param val_list:
:param prediction_class:
:return:
"""
return key_list.index(str(class_id))
from torchray.attribution.extremal_perturbation import extremal_perturbation, contrastive_reward
from torchray.attribution.grad_cam import grad_cam
import PIL.Image
def get_EP_saliency_maps(model, path):
img_index = (path.split('.jpeg')[0]).split('images/')[1]
img = PIL.Image.open(path)
x = transform(img).unsqueeze(0).to(device)
out = model(x)
p = torch.nn.functional.softmax(out, dim=1)
score, index = torch.topk(p, 1)
category_id_1 = index[0][0].item()
gt_label_id = img_index.split('val_')[1][9:18]
input_prediction_id = convert_imagenet_label_to_id(label_map, key_list, val_list, category_id_1)
masks, energy = extremal_perturbation(
model, x, category_id_1,
areas=[0.025, 0.05, 0.1, 0.2],
num_levels=8,
step=7,
sigma=7 * 3,
max_iter=800,
debug=False,
jitter=True,
smooth=0.09,
perturbation='blur'
)
saliency = masks.sum(dim=0, keepdim=True)
saliency = saliency.detach()
return saliency[0].to('cpu')
# import os
import os.path
from visualisation.core.utils import tensor2cam
postprocessing_t = image_net_postprocessing
import cv2 as cv
import sys
imagenet_train_path = '/home/dexter/Downloads/train'
## Creating colormap
cMap = 'Reds'
id_list= list()
conf_dict = {}
eps=1e-5
cnt = 0
K = 3 # Change to your expected number of nearest neighbors
import csv
reader = csv.reader(open('csv_files/definition.csv'))
definition_dict = dict()
for row in reader:
key = row[0][:9]
definition = row[0][12:]
definition_dict[key] = definition
id_map = load_imagenet_id_map()
Q1_path = '/home/dexter/Downloads/A-journey-into-Convolutional-Neural-Network-visualization-/Finding_explanations/SOD_wrong_dogs_aggregate'
Q2_path = '/home/dexter/Downloads/A-journey-into-Convolutional-Neural-Network-visualization-/Finding_explanations/NNs_hard_imagenet_aggregate'
Q3_path = '/home/dexter/Downloads/A-journey-into-Convolutional-Neural-Network-visualization-/Finding_explanations/NNs_adversarial_imagenet_aggregate'
Q4_path = '/home/dexter/Downloads/A-journey-into-Convolutional-Neural-Network-visualization-/Finding_explanations/Conf_adversarial_dog_aggregate'
Q5_path = '/home/dexter/Downloads/A-journey-into-Convolutional-Neural-Network-visualization-/Finding_explanations/GradCAM_norm_imagenet_aggregate'
Q6_path = '/home/dexter/Downloads/A-journey-into-Convolutional-Neural-Network-visualization-/Finding_explanations/NNs_easy_imagenet_aggregate'
Q_datapath = ['/home/dexter/Downloads/Human_experiments/Dataset/Dog/mixed_images',
'/home/dexter/Downloads/Human_experiments/Dataset/Natural/mixed_images',
'/home/dexter/Downloads/Human_experiments/Dataset/Adversarial_Nat/mixed_images',
'/home/dexter/Downloads/Human_experiments/Dataset/Adversarial_Dog/mixed_images',
'/home/dexter/Downloads/Human_experiments/Dataset/Natural/mixed_images',
'/home/dexter/Downloads/Human_experiments/Dataset/Natural/mixed_images']
for idx, question_path in enumerate([Q1_path, Q2_path, Q3_path, Q4_path, Q5_path, Q6_path]):
representatives = glob.glob(question_path + '/*.*')
# Tracer()()
if idx != 1:
continue
for representative in representatives:
if '21805' not in representative:
continue
sample_idx = representative.split('aggregate/')[1]
image_path = os.path.join(Q_datapath[idx], sample_idx)
# image_path = os.path.join('/home/dexter/Downloads/val', sample_folder, sample_idx)
# if '6952' not in image_path:
# continue
distance_dict = dict()
neighbors = list()
categories_confidences = list()
confidences = list()
img = Image.open(image_path)
if NN_flag:
embedding = feature_extractor(transform(img).unsqueeze(0).to(device)).flatten(start_dim=1)
input_image = img.resize((size, size), Image.LANCZOS)  # Image.ANTIALIAS was removed in Pillow 10
# Get the ground truth of the input image
gt_label_id = image_path.split('val_')[1][9:18]
gt_label = id_map.get(gt_label_id)
id = key_list.index(gt_label_id)
gt_label = gt_label.split(',')[0]
# Get the prediction for the input image
img = Image.open(image_path)
x = transform(img).unsqueeze(0).to(device)
out = model(x)
p = torch.nn.functional.softmax(out, dim=1)
score, index = torch.topk(p, 1)
input_category_id = index[0][0].item()
predicted_confidence = score[0][0].item()
predicted_confidence = ("%.2f") %(predicted_confidence)
input_prediction_id = convert_imagenet_label_to_id(label_map, key_list, val_list, input_category_id)
predicted_label = id_map.get(input_prediction_id).split(',')[0]
predicted_label = predicted_label[0].lower() + predicted_label[1:]
print(predicted_label)
print(predicted_confidence)
# Original image
plt.gca().set_axis_off()
plt.subplots_adjust(top = 1, bottom = 0, right = 1, left = 0,
hspace = 0, wspace = 0)
plt.margins(0,0)
plt.gca().xaxis.set_major_locator(plt.NullLocator())
plt.gca().yaxis.set_major_locator(plt.NullLocator())
fig = plt.figure()
# plt.figure(figsize=(6.0,4.5), dpi=300)
plt.axis('off')
#predicted_label = 'african hunting dog'
plt.title('{}: {}'.format(predicted_label, predicted_confidence), fontsize=30)
plt.imshow(input_image)
plt.savefig('tmp/original.jpeg', figsize=(6.0,4.5), dpi=300, bbox_inches='tight', pad_inches=0)
plt.close()
cmd = 'convert tmp/original.jpeg -resize 630x600\! tmp/original.jpeg'
os.system(cmd)
# Edge image
img = cv.resize(cv.imread(image_path,0),((size,size)))
edges = cv.Canny(img,100,200)
edges = edges - 255
fig = plt.figure()
# plt.figure(figsize=(6.0,4.5), dpi=300)
plt.axis('off')
plt.title(' ', fontsize=60)
plt.imshow(edges, cmap = 'gray')
plt.savefig('tmp/Edge.jpeg', figsize=(6.0,4.5), dpi=300, bbox_inches='tight', pad_inches=0)
plt.close()
# GradCAM
saliency = grad_cam(
model, x, input_category_id,
saliency_layer='layer4',
resize=True
)
saliency *= 1.0/saliency.max()
GradCAM = saliency[0][0].cpu().detach().numpy()
fig = plt.figure()
# plt.figure(figsize=(6.0,4.5), dpi=300)
plt.axis('off')
plt.title('GradCAM', fontsize=30)
mlb = plt.imshow(GradCAM, cmap=cMap, vmin=0, vmax=1)
# plt.colorbar(orientation='vertical')
plt.savefig('tmp/heatmap.jpeg', figsize=(6.0,4.5), dpi=300, bbox_inches='tight', pad_inches=0)
# plt.close()
myCmd = 'composite -blend 10 tmp/Edge.jpeg -gravity SouthWest tmp/heatmap.jpeg tmp/GradCAM.jpeg'
os.system(myCmd)
cmd = 'convert tmp/GradCAM.jpeg -resize 600x600\! tmp/GradCAM.jpeg'
os.system(cmd)
# draw a new figure and replot the colorbar there
fig,ax = plt.subplots()
cbar = plt.colorbar(mlb,ax=ax)
cbar.ax.tick_params(labelsize=20)
ax.remove()
# save the same figure with some approximate autocropping
plt.title(' ', fontsize=30)
plt.savefig('tmp/color_bar.jpeg', dpi=300, bbox_inches='tight')
cmd = 'convert tmp/color_bar.jpeg -resize 100x600\! tmp/color_bar.jpeg'
os.system(cmd)
# Extremal Perturbation
saliency = get_EP_saliency_maps(model, image_path)
ep_saliency_map = tensor2img(saliency)
ep_saliency_map *= 1.0/ep_saliency_map.max()
fig = plt.figure()
# plt.figure(figsize=(6.0,4.5), dpi=300)
plt.axis('off')
plt.title('EP', fontsize=30)
plt.imshow(ep_saliency_map, cmap=cMap, vmin=0, vmax=1)
# plt.colorbar(orientation='vertical')
plt.savefig('tmp/heatmap.jpeg', figsize=(6.0,4.5), dpi=300, bbox_inches='tight', pad_inches=0)
plt.close()
# Get overlay version
myCmd = 'composite -blend 10 tmp/Edge.jpeg -gravity SouthWest tmp/heatmap.jpeg tmp/EP.jpeg'
os.system(myCmd)
cmd = 'convert tmp/EP.jpeg -resize 600x600\! tmp/EP.jpeg'
os.system(cmd)
# Salient Object Detection
from shutil import copyfile, rmtree
def rm_and_mkdir(path):
if os.path.isdir(path) == True:
rmtree(path)
os.mkdir(path)
# Prepare dataset
rm_and_mkdir('/home/dexter/Downloads/run-0/run-0-sal-p/')
rm_and_mkdir('/home/dexter/Downloads/PoolNet-master/data/PASCALS/Imgs/')
if os.path.isdir('/home/dexter/Downloads/PoolNet-master/data/PASCALS/test.lst'):
os.remove('/home/dexter/Downloads/PoolNet-master/data/PASCALS/test.lst')
src_paths = [image_path]
for src_path in src_paths:
dst_path = '/home/dexter/Downloads/PoolNet-master/data/PASCALS/Imgs/' + src_path.split('images/')[1]
copyfile(src_path, dst_path)
cmd = 'ls /home/dexter/Downloads/PoolNet-master/data/PASCALS/Imgs/ > /home/dexter/Downloads/PoolNet-master/data/PASCALS/test.lst'
os.system(cmd)
cmd = 'python /home/dexter/Downloads/PoolNet-master/main.py --mode=\'test\' --model=\'/home/dexter/Downloads/run-0/run-0/models/final.pth\' --test_fold=\'/home/dexter/Downloads/run-0/run-0-sal-p/\' --sal_mode=\'p\''
os.system(cmd)
npy_file_paths = glob.glob('/home/dexter/Downloads/run-0/run-0-sal-p/*.*')
npy_file = np.load(npy_file_paths[0])
fig = plt.figure()
# plt.figure(figsize=(6.0,4.5), dpi=300)
plt.axis('off')
plt.title('SOD', fontsize=30)
plt.imshow(npy_file, cmap=cMap, vmin=0, vmax=1)
# plt.colorbar(orientation='vertical')
plt.savefig('tmp/heatmap.jpeg', figsize=(6.0,4.5), dpi=300, bbox_inches='tight', pad_inches=0)
plt.close()
# Get overlay version
myCmd = 'composite -blend 10 tmp/Edge.jpeg -gravity SouthWest tmp/heatmap.jpeg tmp/SOD.jpeg'
os.system(myCmd)
cmd = 'convert tmp/SOD.jpeg -resize 600x600\! tmp/SOD.jpeg'
os.system(cmd)
# Nearest Neighbors
imagenet_train_path = '/home/dexter/Downloads/train'
if NN_flag:
from utils import *
## Nearest Neighbors
predicted_set_path = os.path.join(imagenet_train_path, input_prediction_id)
predicted_set_img_paths = glob.glob(predicted_set_path + '/*.*')
predicted_set_color_images= list()
embedding = embedding.detach()
embedding.to(device)
# Build search space for nearest neighbors
for i, path in enumerate(predicted_set_img_paths):
img = Image.open(path)
if img.mode != 'RGB':
img.close()
del img
continue
x = transform(img).unsqueeze(0).to(device)
out = model(x)
p = torch.nn.functional.softmax(out, dim=1)
del out
score, index = torch.topk(p, 1)
del p
category_id = index[0][0].item()
del score, index
# This is to avoid the confusion from crane 134 and crane 517 and to make NNs work :)
# Because in Imagenet, annotators mislabeled 134 and 517
if input_category_id != 134 and input_category_id != 517 and category_id != 134 and category_id != 517:
if input_category_id != category_id:
continue
f = feature_extractor(x)
feature_vector = f.flatten(start_dim=1).to(device)
feature_vector = feature_vector.detach()
del f
distance_dict[path] = torch.dist(embedding, feature_vector)
del feature_vector
torch.cuda.empty_cache()
img.close()
del img
predicted_set_color_images.append(path)
# Get K most similar images
res = dict(sorted(distance_dict.items(), key = itemgetter(1))[:K])
print("Before...")
print(res)
# Tracer()()
    # Discard nearest matches whose feature distance is implausibly small (presumably near-duplicates of the query image)
    while distance_dict[list(res.keys())[0]] < 100:
del distance_dict[list(res.keys())[0]]
res = dict(sorted(distance_dict.items(), key = itemgetter(1))[:K])
print("After...")
print(res)
# del distance_dict
del embedding
similar_images = list(res.keys())
for similar_image in similar_images:
img = Image.open(similar_image)
neighbors.append(img.resize((size,size), Image.ANTIALIAS))
x = transform(img).unsqueeze(0).to(device)
out = model(x)
p = torch.nn.functional.softmax(out, dim=1)
score, index = torch.topk(p, 1) # Get 1 most probable classes
category_id = index[0][0].item()
confidence = score[0][0].item()
label = label_map.get(category_id).split(',')[0].replace("\"", "")
label = label[0].lower() + label[1:]
print(label + ": %.2f" %(confidence))
categories_confidences.append((label + ": %.2f" %(confidence)))
confidences.append(confidence)
img.close()
for index, neighbor in enumerate(neighbors):
fig = plt.figure()
# plt.figure(figsize=(6.0,4.5), dpi=300)
plt.axis('off')
if index == 1: # Make title for the middle image (2nd image) to annotate the 3 NNs
plt.title('3-NN'.format(predicted_label), fontsize=30)
else:
plt.title(' ', fontsize=30)
plt.imshow(neighbor)
plt.savefig('tmp/{}.jpeg'.format(index), figsize=(6.0,4.5), dpi=300, bbox_inches='tight', pad_inches=0)
plt.close()
cmd = 'convert tmp/{}.jpeg -resize 600x600\! tmp/{}.jpeg'.format(index, index)
os.system(cmd)
myCmd = 'montage tmp/[0-2].jpeg -tile 3x1 -geometry +0+0 tmp/NN.jpeg'
os.system(myCmd)
# Sample images and definition
print(image_path)
gt_label = image_path.split('images/')[1][34:43]
print(image_path)
print(gt_label)
sample_path = '/home/dexter/Downloads/A-journey-into-Convolutional-Neural-Network-visualization-/sample_images'
predicted_sample_path = os.path.join(sample_path, gt_label + '.jpeg')
textual_label = id_map.get(gt_label).split(',')[0]
textual_label = textual_label[0].lower() + textual_label[1:]
definition = '{}: {}'.format(textual_label, definition_dict[gt_label])
definition = definition.replace("'s", "")
print(definition)
# definition = 'any sluggish bottom-dwelling ray of the order Torpediniformes having a rounded body and electric organs on each side of the head capable of emitting strong electric discharges'
# Responsive annotation of imagemagick (only caption has responsive functions)
cmd = 'convert {} -resize 2400x600\! tmp/sample_def.jpeg'.format(predicted_sample_path)
os.system(cmd)
cmd = 'convert tmp/sample_def.jpeg -background White -size 2395x \
-pointsize 50 -gravity Center \
caption:\'{}\' \
+swap -gravity Center -append tmp/sample_def.jpeg'.format(definition)
os.system(cmd)
# Top-k predictions
img = Image.open(image_path)
x = transform(img).unsqueeze(0).to(device)
out = model(x)
p = torch.nn.functional.softmax(out, dim=1)
score, index = torch.topk(p, 5)
# Tracer()()
predicted_labels = []
predicted_confidences = []
colors = []
for i in range(5):
input_prediction_id = convert_imagenet_label_to_id(label_map, key_list, val_list, index[0][i].item())
if input_prediction_id == gt_label:
colors.append('lightcoral')
else:
colors.append('mediumslateblue')
predicted_label = id_map.get(input_prediction_id).split(',')[0]
predicted_label = predicted_label[0].lower() + predicted_label[1:]
predicted_labels.append(predicted_label)
predicted_confidences.append(score[0][i].item())
# plt.rcdefaults()
fig, ax = plt.subplots()
y_pos = np.arange(len(predicted_labels))
ax.tick_params(axis='y', direction='in',pad=-100)
ax.tick_params(axis = "x", which = "both", bottom = False, top = False) # turn off xtick
ax.barh(predicted_labels, predicted_confidences, align='center', color=colors, height=1.0)
ax.set_xlim(0,1)
ax.set_yticklabels(predicted_labels, horizontalalignment = "left", fontsize=64, weight='bold')
ax.invert_yaxis() # labels read top-to-bottom
ax.set_title(textual_label, fontsize=60, weight='bold') #1
# remove the x and y ticks
ax.set_xticks([])
plt.savefig('tmp/top5.jpeg', figsize=(6.0,4.5), dpi=300, bbox_inches='tight', pad_inches=0)
plt.close()
#cmd = 'convert tmp/top5.jpeg -resize 570x400\! tmp/top5.jpeg'
cmd = 'convert tmp/top5.jpeg -resize 580x400\! tmp/top5.jpeg'
os.system(cmd)
cmd = 'convert tmp/top5.jpeg -gravity North -background white -extent 100x150% tmp/top5.jpeg'
os.system(cmd)
# cmd = 'montage original.jpeg GradCAM.jpeg EP.jpeg SOD.jpeg color_bar.jpeg -tile 5x1 -geometry 600x600+0+0 agg1.jpeg'
cmd = 'convert tmp/original.jpeg tmp/GradCAM.jpeg tmp/EP.jpeg tmp/SOD.jpeg tmp/color_bar.jpeg -gravity center +append tmp/agg1.jpeg'
os.system(cmd)
# cmd = 'montage top5.jpeg NN.jpeg -tile 2x1 -geometry +0+0 agg2.jpeg'
# cmd = 'montage top5.jpeg [0-2].jpeg -tile 4x1 -geometry 600x600+0+0 agg2.jpeg'
cmd = 'convert tmp/top5.jpeg tmp/[0-2].jpeg -gravity center +append tmp/agg2.jpeg'
os.system(cmd)
cmd = 'convert tmp/agg2.jpeg -gravity West -background white -extent 101.5x100% tmp/agg2.jpeg'
os.system(cmd)
cmd = 'convert tmp/agg1.jpeg tmp/agg2.jpeg tmp/sample_def.jpeg -gravity center -append {}'.format(representative)
print(cmd)
os.system(cmd)
```
# Groundwater exercises
```
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
%matplotlib inline
plt.style.use('dark_background')
#plt.style.use('seaborn-whitegrid')
```
## <font color=steelblue>Exercise 1 - Infiltration. Green-Ampt method
<font color=steelblue>Using the Green-Ampt model, compute the __cumulative infiltration__, the __infiltration rate__ and the __depth of the wetting front__ for a constant rainfall of 5 cm/h lasting 2 h on a typical silt loam with an initial water content of 0.45.
Typical properties of a silt loam are: <br>
$\phi=0.485$ <br>
$K_{s}=2.59 cm/h$ <br>
$|\Psi_{ae}|=78.6 cm$ <br>
$b=5.3$ <br>
```
# data from the problem statement
phi = 0.485 # -
theta_o = 0.45 # -
Ks = 2.59 # cm/h
psi_ae = 78.6 # cm
b = 5.3 # -
ho = 0 # cm
i = 5 # cm/h
tc = 2 # h
epsilon = 0.001 # cm
```
### Green-Ampt infiltration model
Assumptions:
* Soil ponded under a water sheet of depth $h_o$ from the start.
* Flat wetting front (piston flow).
* Deep, homogeneous soil ($\theta_o$, $\theta_s$, $K_s$ constant).
Infiltration rate, $f \left[ \frac{L}{T} \right]$:
$$f = K_s \left( 1 + \frac{\Psi_f · \Delta\theta}{F} \right) \qquad \textrm{(1)}$$
Cumulative infiltration, $F \left[ L \right]$:
$$F = K_s · t + \Psi_f · \Delta\theta · \ln \left(1 + \frac{F}{\Psi_f · \Delta\theta} \right) \qquad \textrm{(2)}$$
This is an implicit equation. It can be solved, for example, with Picard's (fixed-point) method: take an initial value ($F_o=K_s·t$) and iterate the following computation until convergence ($F_{m+1}-F_m<\varepsilon$):
$$F_{m+1} = K_s · t + \Psi_f · \Delta\theta · \ln \left(1 + \frac{F_m}{\Psi_f · \Delta\theta} \right) \qquad \textrm{(3)}$$
##### Soil not ponded at the start
If the assumption of ponding from the start does not hold, the ponding time ($t_p$) and the amount of water infiltrated up to that moment ($F_p$) must be computed first:
$$t_p = \frac{K_s · \Psi_f · \Delta\theta}{i \left( i - K_s \right)} \qquad \textrm{(4)}$$
$$F_p = i · t_p = \frac{K_s · \Psi_f · \Delta\theta}{i - K_s} \qquad \textrm{(5)}$$
With $t_p$ and $F_p$ known, Eq. (1) is solved on a shifted time variable $t_p'=t_p-t_o$, which leads to the following implicit equation:
$$F_{m+1} = K_s · (t - t_o) + \Psi_f · \Delta\theta · \ln \left(1 + \frac{F_m}{\Psi_f · \Delta\theta} \right) \qquad \textrm{(6)}$$
where $t_o$ is:<br>
$$t_o = t_p - \frac{F_p - \Psi_f · \Delta\theta · \ln \left(1 + \frac{F_p}{\Psi_f · \Delta\theta} \right)}{K_s} \qquad \textrm{(7)}$$
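As a quick standalone sanity check of the formulas above, using the parameters of this exercise: at the ponding time of Eq. (4) the rate given by Eq. (1) must equal the rainfall intensity $i$, and the implicit Eq. (6) can be solved by plain Picard iteration. This is only a sketch; the notebook's own implementation follows below.

```python
import numpy as np

# Parameters of the exercise
phi, theta_o, Ks, psi_ae, b, i, tc = 0.485, 0.45, 2.59, 78.6, 5.3, 5.0, 2.0
dtheta = phi - theta_o                          # soil-moisture increment
psi_f = (2 * b + 3) / (2 * b + 6) * psi_ae      # wetting-front suction

# Eqs. (4)-(5): ponding time and cumulative infiltration at ponding
tp = psi_f * dtheta * Ks / (i * (i - Ks))
Fp = i * tp
# Consistency check: Eq. (1) evaluated at F = Fp must return the rainfall intensity
assert abs(Ks * (1 + psi_f * dtheta / Fp) - i) < 1e-9

# Eq. (7), then Picard iteration of Eq. (6)
to = tp - (Fp - psi_f * dtheta * np.log(1 + Fp / (psi_f * dtheta))) / Ks
F = Ks * (tc - to)                              # initial guess F_0
for _ in range(100):
    F_new = Ks * (tc - to) + psi_f * dtheta * np.log(1 + F / (psi_f * dtheta))
    if abs(F_new - F) < 1e-6:                   # converged
        break
    F = F_new
# F is now (approximately) a fixed point of Eq. (6)
assert abs(F - (Ks * (tc - to) + psi_f * dtheta * np.log(1 + F / (psi_f * dtheta)))) < 1e-5
```

The fixed point found here should coincide with the `Fc` computed by the cell below.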
```
# auxiliary variables
Atheta = phi - theta_o                          # soil-moisture increment
psi_f = (2 * b + 3) / (2 * b + 6) * psi_ae      # wetting-front suction
# ponding time
tp = psi_f * Atheta * Ks / (i * (i - Ks))
# cumulative infiltration at ponding
Fp = tp * i
# start time of the infiltration curve
to = tp - (Fp - psi_f * Atheta * np.log(1 + Fp / (psi_f * Atheta))) / Ks
# cumulative infiltration at the computation time
Fo = Ks * (tc - to)
Fi = Ks * (tc - to) + psi_f * Atheta * np.log(1 + Fo / (psi_f * Atheta))
while (Fi - Fo) > epsilon:
    Fo = Fi
    Fi = Ks * (tc - to) + psi_f * Atheta * np.log(1 + Fo / (psi_f * Atheta))
    print(Fo, Fi)
Fc = Fi
print()
print('Fc = {0:.3f} cm'.format(Fc))
# infiltration rate at the computation time
fc = Ks * (1 + psi_f * Atheta / Fc)
print('fc = {0:.3f} cm/h'.format(fc))
# wetting-front depth
L = Fc / Atheta
print('L = {0:.3f} cm'.format(L))
def GreenAmpt(i, tc, ho, phi, theta_o, Ks, psi_ae, b=5.3, epsilon=0.001):
"""Se calcula la infiltración en un suelo para una precipitación constante mediante el método de Green-Ampt.
Entradas:
---------
i: float. Intensidad de precipitación (cm/h)
tc: float. Tiempo de cálculo (h)
ho: float. Altura de la lámina de agua del encharcamiento en el inicio (cm)
phi: float. Porosidad (-)
theta_o: float. Humedad del suelo en el inicio (-)
Ks: float. Conductividad saturada (cm/h)
psi_ae: float. Tensión del suelo para el punto de entrada de aire (cm)
b: float. Coeficiente para el cálculo de la tensión en el frente húmedo (cm)
epsilo: float. Error tolerable en el cálculo (cm)
Salidas:
--------
Fc: float. Infiltración acumulada en el tiempo de cálculo (cm)
fc: float. Tasa de infiltración en el tiempo de cálculo (cm/h)
L: float. Profundidad del frente húmedo en el tiempo de cálculo (cm)"""
    # auxiliary variables
    Atheta = phi - theta_o                      # soil-moisture increment
    psi_f = (2 * b + 3) / (2 * b + 6) * psi_ae  # wetting-front suction
    if ho > 0:  # ponded at the start
tp = 0
to = 0
    elif ho == 0:  # NOT ponded at the start
        # ponding time
        tp = psi_f * Atheta * Ks / (i * (i - Ks))
        # cumulative infiltration at ponding
        Fp = tp * i
        # start time of the infiltration curve
        to = tp - (Fp - psi_f * Atheta * np.log(1 + Fp / (psi_f * Atheta))) / Ks
    # cumulative infiltration at the computation time
if tc <= tp:
Fc = i * tc
elif tc > tp:
Fo = Ks * (tc - to)
Fi = Ks * (tc - to) + psi_f * Atheta * np.log(1 + Fo / (psi_f * Atheta))
while (Fi - Fo) > epsilon:
Fo = Fi
Fi = Ks * (tc - to) + psi_f * Atheta * np.log(1 + Fo / (psi_f * Atheta))
Fc = Fi
    # infiltration rate at the computation time
fc = Ks * (1 + psi_f * Atheta / Fc)
    # wetting-front depth
L = Fc / Atheta
return Fc, fc, L
Fc, fc, L = GreenAmpt(i, tc, ho, phi, theta_o, Ks, psi_ae, b, epsilon)
print('Fc = {0:.3f} cm'.format(Fc))
print('fc = {0:.3f} cm/h'.format(fc))
print('L = {0:.3f} cm'.format(L))
# save results
results = pd.DataFrame([Fc, fc, L], index=['Fc (cm)', 'fc (cm/h)', 'L (cm)']).transpose()
results.to_csv('../output/Ej1_resultados.csv', index=False, float_format='%.3f')
```
```
import numpy as np
'''
Convolution class using no padding
param func - activation function
param d_func - derivative of activation function
param last_layer - point to last layer, which pass the value over
param input_num - numbers of input feature maps/images
param input_size - input feature maps/images size
param filter_size - size of the filter/kernel
param filter_num - numbers of filters, refer to number of output feature maps/images
param stride - moving step of the kernel function
param is_first - whether this layer is the first layer
'''
class ConvLayer():
def __init__(self, func, d_func, last_layer, input_size, input_num, filter_size, filter_num, stride, is_first):
# Activation Functions
self.act_func = func # Activation function
self.d_act_func = d_func # Derivative of activate function
self.last_layer = last_layer
self.input_num = int(input_num)
self.input_size = int(input_size)
self.filter_size = int(filter_size)
self.filter_num = int(filter_num)
self.stride = int(stride)
self.is_first = is_first
# Initial kernel
bound = np.sqrt(6 / ((input_num + filter_num) * filter_size * filter_size))
self.kernel = np.random.uniform(-bound, bound, (int(filter_num),int(input_num),int(filter_size),int(filter_size)))
# self.kernel = np.random.randn(int(filter_num),int(input_num),int(filter_size),int(filter_size))
# self.kernel = np.random.randn(int(filter_num),int(input_num),int(filter_size),int(filter_size)) / np.sqrt(1/(filter_size*filter_size)
# Initial bias
self.bias = np.zeros(self.filter_num)
self.delta_w_sum = np.zeros((int(filter_num),int(input_num),int(filter_size),int(filter_size)))
self.delta_bias_sum = np.zeros(self.filter_num)
        # Check whether the input size, filter size and stride are compatible
        self.output_size = (input_size + stride - filter_size) / stride
        if (self.output_size % 1 != 0):
            # returning -1 from __init__ would raise a TypeError, so fail explicitly
            raise ValueError("Invalid parameters! Please check input size, filter size and stride")
self.input_img = 0
self.stride_shape = (int(input_num),int(self.output_size),int(self.output_size),int(filter_size),int(filter_size))
self.strides = (int(input_size*input_size*8), int(input_size*stride*8), int(stride*8), int(input_size*8), 8)
self.conv_img = 0
self.output_img = 0
self.d_func_img = 0
self.delta_bias = 0 # Correction of bias
self.pass_error = 0 # Error passed to previous layer
self.delta_w = 0 # Weight correction
self.error = 0 # input error * derivative of activation function
def forwrad_pass(self):
if not self.is_first:
self.extract_value()
self.strided_img = np.lib.stride_tricks.as_strided(self.input_img, shape=self.stride_shape, strides=self.strides) # Cut the image with kernel size
# Convolution operations
self.conv_img = np.einsum("ijklm,ailm->ajk", self.strided_img, self.kernel)
# print("=================self.conv_img=================")
# print(self.conv_img)
# print("=================self.bias=================")
# print(self.bias)
self.conv_img += self.bias.reshape(len(self.bias),1, 1)
# print("=================self.conv_img=================")
# print(self.conv_img)
self.output_img = self.act_func(self.conv_img) # Through activation function
self.d_func_img = self.d_act_func(self.conv_img) # Through derivative of activation function
# print("=================self.input_img=================")
# print(self.input_img[0])
# print("=================self.strided_img=================")
# print(self.strided_img[0][13])
# print("=================self.kernel=================")
# print(self.kernel[0])
# print("=================self.conv_img=================")
# print(self.conv_img)
# print("=================self.output_img=================")
# print(self.output_img)
'''
Adjust weights, using backpropagation
For error function, e = y_predict - y_desire
For weight correction, w_n+1 = w_n - delta_w
'''
def adjust_weight(self, lr_rate, need_update):
# Calculate error
self.error = self.d_func_img * self.bp_vec
# print(self.bp_vec)
# Adjust weight
self.delta_w = np.einsum("ijklm,ajk->ailm", self.strided_img, self.error)
self.delta_w_sum += self.delta_w
# Adjust bias
self.delta_bias = np.einsum("ijk->i", self.error)
self.delta_bias_sum += self.delta_bias
# Update weight if reach bias
if need_update:
# print("update")
            self.kernel -= lr_rate * self.delta_w_sum
            self.bias -= lr_rate * self.delta_bias_sum  # use the accumulated bias correction, matching the kernel update
self.delta_w_sum.fill(0)
self.delta_bias_sum.fill(0)
# print("=================self.d_func_img=================")
# print(self.d_func_img)
# print("=================self.bp_vec=================")
# print(self.bp_vec)
# print("=================self.strided_img=================")
# print(self.strided_img)
# print("=================self.error=================")
# print(self.error)
# print("=================self.delta_w=================")
# print(self.delta_w)
# print("=================self.delta_bias=================")
# print(self.delta_bias)
# Calculate pass error
if not self.is_first:
pass_error_tmp = np.einsum("aijk,alm->ilmjk", self.kernel, self.error)
img_shape = (int(self.input_num),int(self.output_size),int(self.output_size),int(self.filter_size),int(self.filter_size))
img_strides = (int(self.output_size*self.output_size*self.input_size*self.input_size*8), int((self.input_size * self.input_size + self.input_size)*self.stride*8), int((self.input_size * self.input_size + self.stride)*8), int(self.input_size*8), 8)
# Use to map error position
            self.pass_error = np.zeros((int(self.input_num),int(self.output_size),int(self.output_size),int(self.input_size),int(self.input_size)), dtype=float)  # np.float was removed in modern NumPy
inv_stride = self.pass_error.strides
inv_shape = self.pass_error.shape
A = np.lib.stride_tricks.as_strided(self.pass_error, shape=img_shape, strides=img_strides) # Cut the image with kernel size
A += pass_error_tmp
A = np.lib.stride_tricks.as_strided(A, shape=inv_shape, strides=inv_stride) # Cut the image with kernel size
self.pass_error = np.einsum("ijklm->ilm", A)
self.last_layer.pass_bp(self.pass_error)
# for i in range(len(pass_error_tmp[0])):
# for j in range(len(pass_error_tmp[0][0])):
# print(A[:,i,j], pass_error_tmp[:,i,j])
# self.pass_error[:,i,j] += pass_error_tmp[:,i,j]
# for img_h in range(len(self.error[0])):
# for img_w in range(len(self.error[0][0])):
# left_corner_h = img_h * self.stride
# left_corner_w = img_w * self.stride
# for feature in range(len(self.error)):
# # error pass to previous layer
# self.pass_error[:,int(left_corner_h):int(left_corner_h+self.filter_size),int(left_corner_w):int(left_corner_w+self.filter_size)] += self.kernel[feature] * self.error[feature][img_h][img_w]
def extract_value(self):
self.input_img = self.last_layer.get_output()
return self.input_img
def get_output(self):
return self.output_img.copy()
def get_output_size(self):
return self.output_size
'''
    Set input variable, used for the first layer which receives the input value
@param x - input value for the network
'''
def set_input(self, x):
self.input_img = x.copy()
'''
Pass backpropagation value back to previous layer
'''
def pass_bp(self, bp_value):
self.bp_vec = bp_value.copy()
'''
Pooling class using no padding
param last_layer - point to last layer, which pass the value over
param input_size - input feature maps/images size
param input_num - numbers of input feature maps/images
param filter_pattern - pattern of the filter
param stride - moving step of the kernel function
'''
class AvgPooling():
def __init__(self, last_layer, input_size, input_num, filter_size, stride, is_first):
self.last_layer = last_layer
self.input_size = int(input_size)
self.input_num = int(input_num)
self.filter_size = int(filter_size)
self.filter_pattern = np.full((int(input_num), int(input_num), int(filter_size), int(filter_size)), 1/(filter_size * filter_size))
self.d_filter_pattern = np.full((int(input_num),int(input_num),int(filter_size),int(filter_size)), 1/(filter_size * filter_size))
self.stride = stride
self.is_first = is_first
        # Check whether the input size, filter size and stride are compatible
        self.output_size = (input_size + stride - self.filter_size) / stride
        if (self.output_size % 1 != 0):
            # returning -1 from __init__ would raise a TypeError, so fail explicitly
            raise ValueError("Invalid parameters! Please check input size, filter size and stride")
self.input_img = 0
self.stride_shape = (int(input_num),int(self.output_size),int(self.output_size),int(filter_size),int(filter_size))
self.strides = (int(input_size*input_size*8), int(input_size*stride*8), int(stride*8), int(input_size*8), 8)
self.conv_img = 0
self.output_img = 0
self.d_func_img = 0
self.pass_error = 0 # Error passed to previous layer
self.error = 0 # input error * derivative of activation function
def forwrad_pass(self):
if not self.is_first:
self.extract_value()
strided_img = np.lib.stride_tricks.as_strided(self.input_img, shape=self.stride_shape, strides=self.strides) # Cut the image with kernel size
# Convolution operations
self.output_img = np.einsum("ijklm,ailm->ajk", strided_img, self.filter_pattern)
## In max pooling, need to record last max value position
'''
Adjust weights, using backpropagation
For error function, e = y_predict - y_desire
For weight correction, w_n+1 = w_n - delta_w
'''
def adjust_weight(self, lr_rate, need_update):
if not self.is_first:
self.error = self.bp_vec # error
# print(self.bp_vec )
# Calculate pass error
pass_error_tmp = np.einsum("aijk,alm->ilmjk", self.d_filter_pattern, self.error)
img_shape = (int(self.input_num),int(self.output_size),int(self.output_size),int(self.filter_size),int(self.filter_size))
img_strides = (int(self.output_size*self.output_size*self.input_size*self.input_size*8), int((self.input_size * self.input_size + self.input_size)*self.stride*8), int((self.input_size * self.input_size + self.stride)*8), int(self.input_size*8), 8)
# Use to map error position
            self.pass_error = np.zeros((int(self.input_num),int(self.output_size),int(self.output_size),int(self.input_size),int(self.input_size)), dtype=float)  # np.float was removed in modern NumPy
inv_stride = self.pass_error.strides
inv_shape = self.pass_error.shape
A = np.lib.stride_tricks.as_strided(self.pass_error, shape=img_shape, strides=img_strides) # Cut the image with kernel size
A += pass_error_tmp
A = np.lib.stride_tricks.as_strided(A, shape=inv_shape, strides=inv_stride) # Cut the image with kernel size
self.pass_error = np.einsum("ijklm->ilm", A)
self.last_layer.pass_bp(self.pass_error)
def extract_value(self):
self.input_img = self.last_layer.get_output()
return self.input_img
def get_output(self):
return self.output_img.copy()
def get_output_size(self):
return self.output_size
'''
    Set input variable, used for the first layer which receives the input value
@param x - input value for the network
'''
def set_input(self, x):
self.input_img = x.copy()
'''
Pass backpropagation value back to previous layer
'''
def pass_bp(self, bp_value):
self.bp_vec = bp_value.copy()
'''
Flattening class using no padding
param last_layer - point to last layer, which pass the value over
param input_size - input feature maps/images size
param input_num - numbers of input feature maps/images
'''
class Flattening():
def __init__(self, last_layer, input_size, input_num, is_first):
self.last_layer = last_layer
self.input_size = int(input_size)
self.input_num = int(input_num)
self.is_first = is_first
def forwrad_pass(self):
if not self.is_first:
self.extract_value()
self.output_img = self.input_img.reshape(int(self.input_num*self.input_size*self.input_size))
def extract_value(self):
self.input_img = self.last_layer.get_output()
return self.input_img
def get_output(self):
return self.output_img.copy()
'''
    Set input variable, used for the first layer which receives the input value
@param x - input value for the network
'''
def set_input(self, x):
self.input_img = x.copy()
def get_node_num(self):
self.neuron_num = self.input_num*self.input_size*self.input_size
return self.neuron_num
'''
Pass backpropagation value back to previous layer
'''
def pass_bp(self, bp_value):
self.bp_vec = bp_value.copy()
'''
Adjust weights, using backpropagation
For error function, e = y_predict - y_desire
For weight correction, w_n+1 = w_n - delta_w
'''
def adjust_weight(self, lr_rate, need_update):
if not self.is_first:
self.pass_error = self.bp_vec.reshape(int(self.input_num), int(self.input_size), int(self.input_size))
self.last_layer.pass_bp(self.pass_error)
#@title
'''
Activation function for the network
'''
def test_act_func(x):
return x*11
'''
ReLU
'''
def ReLU(x):
    out = x.copy()   # work on a copy so the caller's array is not mutated
    out[out <= 0] = 0
    return out
'''
Sigmoid
'''
def Sigmoid(x):
return 1/(1+np.exp(-x))
#@title
'''
Derivative of the activation function for the network
'''
def d_test_act_func(x):
return x+2
'''
Derivative of ReLU
'''
def d_ReLU(x):
    out = x.copy()   # work on a copy so the caller's array is not mutated
    out[out > 0] = 1
    out[out < 0] = 0
    return out
'''
Derivative of Sigmoid
'''
def d_Sigmoid(x):
s = 1/(1+np.exp(-x))
return s * (1 - s)
```
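As a standalone illustration (not part of the original notebook) of the `as_strided` + `einsum` convolution used in the classes above, here is a minimal sketch for one 6×6 float64 map, a 3×3 kernel and stride 1, checked against a naive loop. The hard-coded 8-byte strides mirror the class, which implicitly assumes float64 inputs:

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

img = np.arange(36, dtype=np.float64).reshape(1, 6, 6)   # one 6x6 input map
kernel = np.random.randn(1, 1, 3, 3)                     # one 3x3 filter
out_size = 6 - 3 + 1                                     # no padding, stride 1

# View of all 3x3 patches: shape (maps, out_h, out_w, kh, kw);
# byte strides follow the class: (size*size*8, size*stride*8, stride*8, size*8, 8)
patches = as_strided(img, shape=(1, out_size, out_size, 3, 3),
                     strides=(6 * 6 * 8, 6 * 8, 8, 6 * 8, 8))
conv = np.einsum("ijklm,ailm->ajk", patches, kernel)     # correlate every patch

# Naive reference implementation
ref = np.zeros((1, out_size, out_size))
for r in range(out_size):
    for c in range(out_size):
        ref[0, r, c] = np.sum(img[0, r:r + 3, c:c + 3] * kernel[0, 0])
assert np.allclose(conv, ref)
```

Expressing the strides in terms of `img.strides` and `img.itemsize` would make the same trick dtype-independent.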
```
%matplotlib inline
```
Training a Classifier
=====================
This is it. You have seen how to define neural networks, compute loss and make
updates to the weights of the network.
Now you might be thinking,
What about data?
----------------
Generally, when you have to deal with image, text, audio or video data,
you can use standard python packages that load data into a numpy array.
Then you can convert this array into a ``torch.*Tensor``.
- For images, packages such as Pillow, OpenCV are useful
- For audio, packages such as scipy and librosa
- For text, either raw Python or Cython based loading, or NLTK and
SpaCy are useful
Specifically for vision, we have created a package called
``torchvision``, that has data loaders for common datasets such as
Imagenet, CIFAR10, MNIST, etc. and data transformers for images, viz.,
``torchvision.datasets`` and ``torch.utils.data.DataLoader``.
This provides a huge convenience and avoids writing boilerplate code.
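A minimal sketch of that NumPy-to-Tensor step (random data stands in for a real image here; `torch.from_numpy` shares memory with the source array):

```python
import numpy as np
import torch

# Load with any standard package into a NumPy array, then convert to a Tensor
arr = np.random.rand(32, 32, 3).astype(np.float32)  # HWC image as a NumPy array
t = torch.from_numpy(arr)        # zero-copy: shares memory with arr
t = t.permute(2, 0, 1)           # HWC -> CHW, the layout torch models expect
assert tuple(t.shape) == (3, 32, 32)
```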
For this tutorial, we will use the CIFAR10 dataset.
It has the classes: ‘airplane’, ‘automobile’, ‘bird’, ‘cat’, ‘deer’,
‘dog’, ‘frog’, ‘horse’, ‘ship’, ‘truck’. The images in CIFAR-10 are of
size 3x32x32, i.e. 3-channel color images of 32x32 pixels in size.
(Figure: cifar10 — sample images from the ten CIFAR-10 classes.)
Training an image classifier
----------------------------
We will do the following steps in order:
1. Load and normalize the CIFAR10 training and test datasets using
   ``torchvision``
2. Define a Convolutional Neural Network
3. Define a loss function
4. Train the network on the training data
5. Test the network on the test data
1. Loading and normalizing CIFAR10
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Using ``torchvision``, it’s extremely easy to load CIFAR10.
```
import torch
import torchvision
import torchvision.transforms as transforms
```
The outputs of torchvision datasets are PILImage images in the range [0, 1].
We transform them to Tensors with values normalized to the range [-1, 1].
```
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
shuffle=True, num_workers=2)
testset = torchvision.datasets.CIFAR10(root='./data', train=False,
download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4,
shuffle=False, num_workers=2)
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
```
Let us show some of the training images, for fun.
```
import matplotlib.pyplot as plt
import numpy as np
# functions to show an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
# get some random training images
dataiter = iter(trainloader)
images, labels = next(dataiter)
# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join('%5s' % classes[labels[j]] for j in range(4)))
```
2. Define a Convolutional Neural Network
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Copy the neural network from the Neural Networks section before and modify it to
take 3-channel images (instead of 1-channel images as it was defined).
```
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16 * 5 * 5)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
net = Net()
```
3. Define a Loss function and optimizer
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Let's use a Classification Cross-Entropy loss and SGD with momentum.
```
import torch.optim as optim
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
```
4. Train the network
^^^^^^^^^^^^^^^^^^^^
This is when things start to get interesting.
We simply have to loop over our data iterator, and feed the inputs to the
network and optimize.
```
for epoch in range(2): # loop over the dataset multiple times
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
# get the inputs
inputs, labels = data
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.item()
if i % 2000 == 1999: # print every 2000 mini-batches
print('[%d, %5d] loss: %.3f' %
(epoch + 1, i + 1, running_loss / 2000))
running_loss = 0.0
print('Finished Training')
```
5. Test the network on the test data
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
We have trained the network for 2 passes over the training dataset.
But we need to check if the network has learnt anything at all.
We will check this by predicting the class label that the neural network
outputs, and checking it against the ground-truth. If the prediction is
correct, we add the sample to the list of correct predictions.
Okay, first step. Let us display an image from the test set to get familiar.
```
dataiter = iter(testloader)
images, labels = next(dataiter)  # dataiter.next() was removed in newer PyTorch
# print images
imshow(torchvision.utils.make_grid(images))
print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))
```
Okay, now let us see what the neural network thinks these examples above are:
```
outputs = net(images)
```
The outputs are energies for the 10 classes.
The higher the energy for a class, the more the network
thinks that the image is of that particular class.
So, let's get the index of the highest energy:
```
_, predicted = torch.max(outputs, 1)
print('Predicted: ', ' '.join('%5s' % classes[predicted[j]]
for j in range(4)))
```
The results seem pretty good.
Let us look at how the network performs on the whole dataset.
```
correct = 0
total = 0
with torch.no_grad():
for data in testloader:
images, labels = data
outputs = net(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: %d %%' % (
100 * correct / total))
```
That looks waaay better than chance, which is 10% accuracy (randomly picking
a class out of 10 classes).
Seems like the network learnt something.
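As a quick sanity check on that 10% baseline, a short simulation of random guessing over 10 classes (seeded for reproducibility) lands right around chance:

```python
import random

random.seed(0)
trials = 100_000
# a random guess matches a uniformly random label ~10% of the time
hits = sum(random.randrange(10) == random.randrange(10) for _ in range(trials))
chance_acc = hits / trials
print(chance_acc)  # close to 0.10
```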
Hmmm, what are the classes that performed well, and the classes that did
not perform well:
```
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
with torch.no_grad():
for data in testloader:
images, labels = data
outputs = net(images)
_, predicted = torch.max(outputs, 1)
c = (predicted == labels).squeeze()
for i in range(4):
label = labels[i]
class_correct[label] += c[i].item()
class_total[label] += 1
for i in range(10):
print('Accuracy of %5s : %2d %%' % (
classes[i], 100 * class_correct[i] / class_total[i]))
```
Okay, so what next?
How do we run these neural networks on the GPU?
Training on GPU
----------------
Just like how you transfer a Tensor on to the GPU, you transfer the neural
net onto the GPU.
Let's first define our device as the first visible cuda device if we have
CUDA available:
```
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# Assume that we are on a CUDA machine, then this should print a CUDA device:
print(device)
net.to(device)
for epoch in range(2): # loop over the dataset multiple times
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
# get the inputs
inputs, labels = data
inputs, labels = inputs.to(device), labels.to(device)
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs).to(device)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.item()
if i % 2000 == 1999: # print every 2000 mini-batches
print('[%d, %5d] loss: %.3f' %
(epoch + 1, i + 1, running_loss / 2000))
running_loss = 0.0
print('Finished Training')
```
The rest of this section assumes that `device` is a CUDA device.
Then these methods will recursively go over all modules and convert their
parameters and buffers to CUDA tensors:
```
net.to(device)
```
Remember that you will have to send the inputs and targets at every step
to the GPU too:
```
inputs, labels = inputs.to(device), labels.to(device)
```
Why don't I notice a MASSIVE speedup compared to CPU? Because your network
is really small.
**Exercise:** Try increasing the width of your network (argument 2 of
the first ``nn.Conv2d``, and argument 1 of the second ``nn.Conv2d`` –
they need to be the same number), see what kind of speedup you get.
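To get a feel for what the exercise changes, the conv-layer parameter counts can be tallied directly; the sketch below uses a hypothetical widened channel count of 12 (the original uses 6) in plain Python:

```python
def conv2d_params(c_in, c_out, k):
    # weights (c_out * c_in * k * k) plus one bias per output channel
    return c_out * (c_in * k * k + 1)

original = conv2d_params(3, 6, 5) + conv2d_params(6, 16, 5)
widened = conv2d_params(3, 12, 5) + conv2d_params(12, 16, 5)
print(original, widened)  # -> 2872 5728
```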
**Goals achieved**:
- Understanding PyTorch's Tensor library and neural networks at a high level.
- Train a small neural network to classify images
Training on multiple GPUs
-------------------------
If you want to see even more MASSIVE speedup using all of your GPUs,
please check out :doc:`data_parallel_tutorial`.
Where do I go next?
-------------------
- :doc:`Train neural nets to play video games </intermediate/reinforcement_q_learning>`
- `Train a state-of-the-art ResNet network on imagenet`_
- `Train a face generator using Generative Adversarial Networks`_
- `Train a word-level language model using Recurrent LSTM networks`_
- `More examples`_
- `More tutorials`_
- `Discuss PyTorch on the Forums`_
- `Chat with other users on Slack`_
```
#from nbdev import *
%load_ext autoreload
%autoreload 2
#%nbdev_hide
#import sys
#sys.path.append("..")
```
# Examples
> Examples of the PCT library in use.
```
import gym
render=False
runs=1
#gui
render=True
runs=2000
```
## Cartpole
Cartpole is an OpenAI Gym environment for the inverted pendulum problem. The goal is to keep the pole balanced by moving the cart left or right.
The environment provides observations (perceptions) for the state of the cart and pole.
0 - Cart Position
1 - Cart Velocity
2 - Pole Angle
3 - Pole Angular Velocity
It takes a single value, 0 or 1, which applies a force to the left or right, respectively.
The PCT solution is a four-level hierarchy for controlling the perceptions at goal values. Only one goal reference is manually set, the highest level which is the pole angle of 0.
This example shows how a perceptual control hierarchy can be implemented with this library.
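The core idea — each control unit outputs in proportion to the error between its reference and its perception — can be sketched in a few lines of plain Python. This is a toy single-level loop for intuition only, not the library's API:

```python
def control_output(reference, perception, gain):
    # a perceptual control unit acts to reduce (reference - perception)
    return gain * (reference - perception)

state = 1.0  # toy environment: the output moves the state directly
for _ in range(50):
    state += control_output(reference=0.0, perception=state, gain=0.5)
print(abs(state) < 1e-6)  # the perception converges to the reference -> True
```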
```
import matplotlib.pyplot as plt
import numpy as np
from pct.hierarchy import PCTHierarchy
from pct.putils import FunctionsList
from pct.environments import CartPoleV1
from pct.functions import IndexedParameter
from pct.functions import Integration
from pct.functions import GreaterThan
from pct.functions import PassOn
```
Create a hierarchy of 4 levels each with one node.
```
cartpole_hierarchy = PCTHierarchy(levels=4, cols=1, name="cartpoleh", build=False)
namespace=cartpole_hierarchy.namespace
cartpole_hierarchy.get_node(0, 0).name = 'cart_velocity_node'
cartpole_hierarchy.get_node(1, 0).name = 'cart_position_node'
cartpole_hierarchy.get_node(2, 0).name = 'pole_velocity_node'
cartpole_hierarchy.get_node(3, 0).name = 'pole_angle_node'
#FunctionsList.getInstance().report()
#cartpole_hierarchy.summary(build=True)
```
Create the Cartpole gym environment function. This will apply the "action" output from the hierarchy and provide the new observations.
```
cartpole = CartPoleV1(name="CartPole-v1", render=render, namespace=namespace)
```
Create functions for each of the observation parameters of the Cartpole environment. Insert them into the hierarchy at the desired places.
```
cartpole_hierarchy.insert_function(level=0, col=0, collection="perception", function=IndexedParameter(index=1, name="cart_velocity", links=[cartpole], namespace=namespace))
cartpole_hierarchy.insert_function(level=1, col=0, collection="perception", function=IndexedParameter(index=0, name="cart_position", links=[cartpole], namespace=namespace))
cartpole_hierarchy.insert_function(level=2, col=0, collection="perception", function=IndexedParameter(index=3, name="pole_velocity", links=[cartpole], namespace=namespace))
cartpole_hierarchy.insert_function(level=3, col=0, collection="perception", function=IndexedParameter(index=2, name="pole_angle", links=[cartpole], namespace=namespace))
```
Link the references to the outputs of the level up.
```
cartpole_hierarchy.insert_function(level=0, col=0, collection="reference", function=PassOn(name="cart_velocity_reference", links=['proportional1'], namespace=namespace))
cartpole_hierarchy.insert_function(level=1, col=0, collection="reference", function=PassOn(name="cart_position_reference", links=['proportional2'], namespace=namespace))
cartpole_hierarchy.insert_function(level=2, col=0, collection="reference", function=PassOn(name="pole_velocity_reference", links=['proportional3'], namespace=namespace))
```
Set the highest level reference.
```
top = cartpole_hierarchy.get_function(level=3, col=0, collection="reference")
top.set_name("pole_angle_reference")
top.set_value(0)
```
Link the output of the hierarchy back to the Cartpole environment.
```
cartpole_hierarchy.summary(build=True)
cartpole_hierarchy.insert_function(level=0, col=0, collection="output", function=Integration(gain=-0.05, slow=4, name="force", links='subtract', namespace=namespace))
```
Set the names and gains of the output functions. This also shows another way of getting a function, by name.
```
FunctionsList.getInstance().get_function(namespace=namespace, name="proportional3").set_name("pole_angle_output")
FunctionsList.getInstance().get_function(namespace=namespace, name="pole_angle_output").set_property('gain', 3.5)
FunctionsList.getInstance().get_function(namespace=namespace, name="proportional2").set_name("pole_velocity_output")
FunctionsList.getInstance().get_function(namespace=namespace, name="pole_velocity_output").set_property('gain', 0.5)
FunctionsList.getInstance().get_function(namespace=namespace, name="proportional1").set_name("cart_position_output")
FunctionsList.getInstance().get_function(namespace=namespace, name="cart_position_output").set_property('gain', 2)
```
Add a post function to convert the output to 1 or 0 as required by the Cartpole environment.
```
greaterthan = GreaterThan(threshold=0, upper=1, lower=0, links='force', namespace=namespace)
cartpole_hierarchy.add_postprocessor(greaterthan)
```
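Assuming `GreaterThan` simply thresholds its input (as its parameters suggest), its behavior amounts to this pure-Python equivalent:

```python
def greater_than(value, threshold=0, upper=1, lower=0):
    # map a continuous force to the discrete action the environment expects
    return upper if value > threshold else lower

print(greater_than(0.37), greater_than(-0.2))  # -> 1 0
```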
Add the cartpole function as one that is executed before the actual hierarchy.
```
cartpole_hierarchy.add_preprocessor(cartpole)
```
Set the output of the hierarchy as the action input to the Cartpole environment.
```
#link = cartpole_hierarchy.get_output_function()
cartpole.add_link(greaterthan)
```
Sit back and observe the brilliance of your efforts.
```
cartpole_hierarchy.set_order("Down")
cartpole_hierarchy.summary()
#gui
cartpole_hierarchy.draw(font_size=10, figsize=(8,12), move={'CartPole-v1': [-0.075, 0]}, node_size=1000, node_color='red')
cartpole_hierarchy.save("cartpole.json")
import networkx as nx
gr = cartpole_hierarchy.graph()
print(nx.info(gr))
print(gr.nodes())
```
Run the hierarchy for the configured number of steps.
```
cartpole_hierarchy.run(1,verbose=False)
cartpole_hierarchy.run(runs,verbose=False)
cartpole.close()
```
A notebook to visualize some of the test systems in the C++ test code in `Code/GraphMol/RGroupDecomposition/testRGroupDecomp.cpp`
```
from rdkit import Chem
from rdkit.Chem import AllChem
from rdkit.Chem.Draw import IPythonConsole
IPythonConsole.ipython_useSVG=True
from rdkit.Chem.rdRGroupDecomposition import RGroupDecomposition, RGroupDecompositionParameters, \
RGroupMatching, RGroupScore, RGroupLabels, RGroupCoreAlignment
import pandas as pd
from rdkit.Chem import PandasTools
from collections import OrderedDict
from IPython.display import HTML
from rdkit import rdBase
from io import StringIO
from rdkit.Chem import Draw
rdBase.DisableLog("rdApp.debug")
```
### testSDFGRoupMultiCoreNoneShouldMatch
Cores, compounds and python code
```
sdcores = """
Mrv1813 05061918272D
13 14 0 0 0 0 999 V2000
-1.1505 0.0026 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
-1.1505 -0.8225 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
-0.4360 -1.2350 0.0000 N 0 0 0 0 0 0 0 0 0 0 0 0
0.2784 -0.8225 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.2784 0.0026 0.0000 N 0 0 0 0 0 0 0 0 0 0 0 0
-0.4360 0.4151 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
-1.9354 0.2575 0.0000 A 0 0 0 0 0 0 0 0 0 0 0 0
-2.4202 -0.4099 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
-1.9354 -1.0775 0.0000 A 0 0 0 0 0 0 0 0 0 0 0 0
0.9907 -1.2333 0.0000 R# 0 0 0 0 0 0 0 0 0 0 0 0
-0.4360 1.2373 0.0000 N 0 0 0 0 0 0 0 0 0 0 0 0
0.2784 1.6497 0.0000 R# 0 0 0 0 0 0 0 0 0 0 0 0
-3.2452 -0.4098 0.0000 R# 0 0 0 0 0 0 0 0 0 0 0 0
6 1 1 0 0 0 0
1 7 1 0 0 0 0
1 2 1 0 0 0 0
2 3 1 0 0 0 0
9 2 1 0 0 0 0
3 4 1 0 0 0 0
4 5 1 0 0 0 0
4 10 1 0 0 0 0
5 6 1 0 0 0 0
6 11 1 0 0 0 0
7 8 1 0 0 0 0
8 13 1 0 0 0 0
8 9 1 0 0 0 0
11 12 1 0 0 0 0
M RGP 3 10 1 12 2 13 3
M END
$$$$
Mrv1813 05061918272D
13 14 0 0 0 0 999 V2000
6.9524 0.1684 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
6.9524 -0.6567 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
7.6668 -1.0692 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
8.3813 -0.6567 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
8.3813 0.1684 0.0000 N 0 0 0 0 0 0 0 0 0 0 0 0
7.6668 0.5809 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
6.1674 0.4233 0.0000 A 0 0 0 0 0 0 0 0 0 0 0 0
5.6827 -0.2441 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
6.1674 -0.9117 0.0000 A 0 0 0 0 0 0 0 0 0 0 0 0
9.0935 -1.0675 0.0000 R# 0 0 0 0 0 0 0 0 0 0 0 0
7.6668 1.4031 0.0000 N 0 0 0 0 0 0 0 0 0 0 0 0
8.3813 1.8155 0.0000 R# 0 0 0 0 0 0 0 0 0 0 0 0
4.8576 -0.2440 0.0000 R# 0 0 0 0 0 0 0 0 0 0 0 0
6 1 1 0 0 0 0
1 7 1 0 0 0 0
1 2 1 0 0 0 0
2 3 1 0 0 0 0
9 2 1 0 0 0 0
3 4 1 0 0 0 0
4 5 1 0 0 0 0
4 10 1 0 0 0 0
5 6 1 0 0 0 0
6 11 1 0 0 0 0
7 8 1 0 0 0 0
8 13 1 0 0 0 0
8 9 1 0 0 0 0
11 12 1 0 0 0 0
M RGP 3 10 1 12 2 13 3
M END
$$$$"""
supplier = Chem.SDMolSupplier()
supplier.SetData(sdcores)
cores = [x for x in supplier]
for core in cores:
AllChem.Compute2DCoords(core)
Draw.MolsToGridImage(cores)
sdmols="""CTAB(
Mrv1813 05061918322D
15 17 0 0 0 0 999 V2000
0.1742 0.6899 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.8886 0.2774 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.8886 -0.5476 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.1742 -0.9601 0.0000 N 0 0 0 0 0 0 0 0 0 0 0 0
0.1742 -1.7851 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.8886 -2.1976 0.0000 N 0 0 0 0 0 0 0 0 0 0 0 0
0.8886 -3.0226 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.1742 -3.4351 0.0000 N 0 0 0 0 0 0 0 0 0 0 0 0
-0.5403 -3.0226 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
-1.3249 -3.2775 0.0000 N 0 0 0 0 0 0 0 0 0 0 0 0
-1.8099 -2.6101 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
-1.3249 -1.9426 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
-0.5403 -2.1976 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
-0.5403 -0.5476 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
-0.5403 0.2774 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1 2 1 0 0 0 0
1 15 1 0 0 0 0
2 3 1 0 0 0 0
3 4 1 0 0 0 0
4 5 1 0 0 0 0
4 14 1 0 0 0 0
5 6 1 0 0 0 0
5 13 1 0 0 0 0
6 7 1 0 0 0 0
7 8 1 0 0 0 0
8 9 1 0 0 0 0
9 10 1 0 0 0 0
9 13 1 0 0 0 0
10 11 1 0 0 0 0
11 12 1 0 0 0 0
12 13 1 0 0 0 0
14 15 1 0 0 0 0
M END
$$$$
Mrv1813 05061918322D
14 15 0 0 0 0 999 V2000
6.4368 0.3002 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
5.7223 -0.1123 0.0000 N 0 0 0 0 0 0 0 0 0 0 0 0
5.7223 -0.9373 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
6.4368 -1.3498 0.0000 N 0 0 0 0 0 0 0 0 0 0 0 0
6.4368 -2.1748 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
5.7223 -2.5873 0.0000 N 0 0 0 0 0 0 0 0 0 0 0 0
5.0078 -2.1748 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
4.2232 -2.4297 0.0000 S 0 0 0 0 0 0 0 0 0 0 0 0
3.7383 -1.7623 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
4.2232 -1.0949 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
3.9683 -0.3102 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
3.1613 -0.1387 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
4.5203 0.3029 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
5.0078 -1.3498 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1 2 1 0 0 0 0
2 3 1 0 0 0 0
3 4 1 0 0 0 0
3 14 1 0 0 0 0
4 5 1 0 0 0 0
5 6 1 0 0 0 0
6 7 1 0 0 0 0
7 8 1 0 0 0 0
7 14 1 0 0 0 0
8 9 1 0 0 0 0
9 10 1 0 0 0 0
10 11 1 0 0 0 0
10 14 1 0 0 0 0
11 13 1 0 0 0 0
11 12 1 0 0 0 0
M END
$$$$
Mrv1813 05061918322D
14 15 0 0 0 0 999 V2000
0.8289 -7.9643 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.1144 -8.3768 0.0000 N 0 0 0 0 0 0 0 0 0 0 0 0
0.1144 -9.2018 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.8289 -9.6143 0.0000 N 0 0 0 0 0 0 0 0 0 0 0 0
0.8289 -10.4393 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.1144 -10.8518 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
-0.6000 -10.4393 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
-1.3847 -10.6942 0.0000 S 0 0 0 0 0 0 0 0 0 0 0 0
-1.8696 -10.0268 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
-1.3847 -9.3593 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
-1.6396 -8.5747 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
-2.4466 -8.4032 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
-1.0876 -7.9616 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
-0.6000 -9.6143 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1 2 1 0 0 0 0
2 3 1 0 0 0 0
3 4 1 0 0 0 0
3 14 1 0 0 0 0
4 5 1 0 0 0 0
5 6 1 0 0 0 0
6 7 1 0 0 0 0
7 8 1 0 0 0 0
7 14 1 0 0 0 0
8 9 1 0 0 0 0
9 10 1 0 0 0 0
10 11 1 0 0 0 0
10 14 1 0 0 0 0
11 13 1 0 0 0 0
11 12 1 0 0 0 0
M END
$$$$
Mrv1813 05061918322D
12 13 0 0 0 0 999 V2000
5.3295 -8.1871 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
5.5844 -7.4025 0.0000 N 0 0 0 0 0 0 0 0 0 0 0 0
5.0995 -6.7351 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
5.5844 -6.0676 0.0000 N 0 0 0 0 0 0 0 0 0 0 0 0
6.3690 -6.3226 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
7.0835 -5.9101 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
7.0835 -5.0851 0.0000 N 0 0 0 0 0 0 0 0 0 0 0 0
7.7980 -6.3226 0.0000 N 0 0 0 0 0 0 0 0 0 0 0 0
7.7980 -7.1476 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
8.5124 -7.5601 0.0000 O 0 0 0 0 0 0 0 0 0 0 0 0
7.0835 -7.5601 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
6.3690 -7.1476 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1 2 1 0 0 0 0
2 3 1 0 0 0 0
2 12 1 0 0 0 0
3 4 1 0 0 0 0
4 5 1 0 0 0 0
5 12 1 0 0 0 0
5 6 1 0 0 0 0
6 7 1 0 0 0 0
6 8 1 0 0 0 0
8 9 1 0 0 0 0
9 10 1 0 0 0 0
9 11 1 0 0 0 0
11 12 1 0 0 0 0
M END
$$$$)CTAB"""
supplier = Chem.SDMolSupplier()
supplier.SetData(sdmols)
mols = [x for x in supplier]
for mol in mols:
AllChem.Compute2DCoords(mol)
Draw.MolsToGridImage(mols)
options = RGroupDecompositionParameters()
options.onlyMatchAtRGroups = False
options.removeHydrogensPostMatch = False
decomp = RGroupDecomposition(cores, options)
for mol in mols:
decomp.Add(mol)
decomp.Process()
cols= decomp.GetRGroupsAsColumns()
cols['mol'] = mols
for c in cols['Core']:
AllChem.Compute2DCoords(c)
Draw.MolsToGridImage(cols['Core'])
df = pd.DataFrame(cols);
PandasTools.ChangeMoleculeRendering(df)
HTML(df.to_html())
rows = decomp.GetRGroupsAsRows();
for i, r in enumerate(rows):
labels = ['{}:{}'.format(l, Chem.MolToSmiles(r[l])) for l in r]
print('{} {}'.format(str(i+1), ' '.join(labels)))
```
### testMultiCorePreLabelled
```
sdcores = """CTAB(
RDKit 2D
9 9 0 0 0 0 0 0 0 0999 V2000
1.1100 -1.3431 0.0000 N 0 0 0 0 0 0 0 0 0 0 0 0
1.5225 -0.6286 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.9705 -0.0156 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.2168 -0.3511 0.0000 N 0 0 0 0 0 0 0 0 0 0 0 0
0.3029 -1.1716 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1.1419 0.7914 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
0.5289 1.3431 0.0000 O 0 0 0 0 0 0 0 0 0 0 0 0
1.9266 1.0463 0.0000 R# 0 0 0 0 0 0 0 0 0 0 0 0
-0.4976 0.0613 0.0000 R# 0 0 0 0 0 0 0 0 0 0 0 0
1 2 1 0
2 3 2 0
3 4 1 0
4 5 1 0
1 5 2 0
3 6 1 0
6 7 2 0
6 8 1 0
4 9 1 0
M RGP 2 8 1 9 2
V 8 *
V 9 *
M END
$$$$
RDKit 2D
12 13 0 0 0 0 0 0 0 0999 V2000
-6.5623 0.3977 0.0000 R# 0 0 0 0 0 0 0 0 0 0 0 0
-5.8478 -0.0147 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
-5.1333 0.3977 0.0000 A 0 0 0 0 0 0 0 0 0 0 0 0
-4.4188 -0.0147 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
-4.4188 -0.8397 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
-5.1333 -1.2522 0.0000 A 0 0 0 0 0 0 0 0 0 0 0 0
-5.8478 -0.8397 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
-3.7044 -1.2522 0.0000 A 0 0 0 0 0 0 0 0 0 0 0 0
-3.7044 0.3977 0.0000 N 0 0 0 0 0 0 0 0 0 0 0 0
-2.9899 -0.0147 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
-2.9899 -0.8397 0.0000 A 0 0 0 0 0 0 0 0 0 0 0 0
-2.2754 0.3978 0.0000 R# 0 0 0 0 0 0 0 0 0 0 0 0
3 4 1 0
4 5 2 0
5 6 1 0
6 7 2 0
2 3 2 0
2 7 1 0
9 10 2 0
10 11 1 0
8 11 2 0
8 5 1 0
4 9 1 0
10 12 1 0
1 2 1 0
M RGP 2 1 2 12 1
V 1 *
V 12 *
M END
$$$$
)CTAB"""
supplier = Chem.SDMolSupplier()
supplier.SetData(sdcores)
cores = [x for x in supplier]
for core in cores:
AllChem.Compute2DCoords(core)
Draw.MolsToGridImage(cores)
smiles = ["CNC(=O)C1=CN=CN1CC", "Fc1ccc2ccc(Br)nc2n1"]
mols = [Chem.MolFromSmiles(s) for s in smiles]
Draw.MolsToGridImage(mols)
def decomp(options):
options.onlyMatchAtRGroups = True
options.removeHydrogensPostMatch = True
decomp = RGroupDecomposition(cores, options)
for mol in mols:
decomp.Add(mol)
decomp.Process()
cols = decomp.GetRGroupsAsColumns()
return cols
def show_decomp(cols):
cols['mol'] = mols
df = pd.DataFrame(cols);
PandasTools.ChangeMoleculeRendering(df)
return HTML(df.to_html())
# for when we can't display structures ("non-ring aromatic")
def show_decomp_smiles(cols):
cols['mol'] = mols
for c in cols:
cols[c] = ['{}:{}'.format(c, Chem.MolToSmiles(m)) for m in cols[c]]
df = pd.DataFrame(cols);
PandasTools.ChangeMoleculeRendering(df)
return HTML(df.to_html())
options = RGroupDecompositionParameters()
options.labels = RGroupLabels.AutoDetect
options.alignment = RGroupCoreAlignment.MCS
cols = decomp(options)
Draw.MolsToGridImage(cols['Core'])
show_decomp(cols)
options = RGroupDecompositionParameters()
options.labels = RGroupLabels.MDLRGroupLabels | RGroupLabels.RelabelDuplicateLabels
options.alignment = RGroupCoreAlignment.MCS
cols=decomp(options)
Draw.MolsToGridImage(cols['Core'])
show_decomp(cols)
options = RGroupDecompositionParameters()
options.labels = RGroupLabels.AutoDetect
options.alignment = RGroupCoreAlignment.NoAlignment
cols=decomp(options)
Draw.MolsToGridImage(cols['Core'])
show_decomp(cols)
options = RGroupDecompositionParameters()
options.labels = RGroupLabels.MDLRGroupLabels | RGroupLabels.RelabelDuplicateLabels
options.alignment = RGroupCoreAlignment.NoAlignment
cols=decomp(options)
Draw.MolsToGridImage(cols['Core'])
show_decomp(cols)
for core in cores:
for atom in core.GetAtoms():
if atom.HasProp("_MolFileRLabel"):
atom.ClearProp("_MolFileRLabel")
if atom.GetIsotope():
atom.SetIsotope(0)
if atom.GetAtomMapNum():
print("atom map num")
atom.SetAtomMapNum(0)
options = RGroupDecompositionParameters()
options.labels = RGroupLabels.AutoDetect
options.alignment = RGroupCoreAlignment.MCS
cols=decomp(options)
Draw.MolsToGridImage(cols['Core'])
show_decomp_smiles(cols)
options = RGroupDecompositionParameters()
options.labels = RGroupLabels.DummyAtomLabels | RGroupLabels.RelabelDuplicateLabels
options.alignment = RGroupCoreAlignment.MCS
cols=decomp(options)
Draw.MolsToGridImage(cols['Core'])
show_decomp_smiles(cols)
```
# Getting Started with NumPy
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Getting-Started-with-NumPy" data-toc-modified-id="Getting-Started-with-NumPy-1"><span class="toc-item-num">1 </span>Getting Started with NumPy</a></span><ul class="toc-item"><li><span><a href="#Learning-Objectives" data-toc-modified-id="Learning-Objectives-1.1"><span class="toc-item-num">1.1 </span>Learning Objectives</a></span></li><li><span><a href="#What-is-NumPy?" data-toc-modified-id="What-is-NumPy?-1.2"><span class="toc-item-num">1.2 </span>What is NumPy?</a></span></li><li><span><a href="#The-NumPy-Array-Object" data-toc-modified-id="The-NumPy-Array-Object-1.3"><span class="toc-item-num">1.3 </span>The NumPy Array Object</a></span></li><li><span><a href="#Data-types" data-toc-modified-id="Data-types-1.4"><span class="toc-item-num">1.4 </span>Data types</a></span><ul class="toc-item"><li><span><a href="#Basic-Numerical-Data-Types-Available-in-NumPy" data-toc-modified-id="Basic-Numerical-Data-Types-Available-in-NumPy-1.4.1"><span class="toc-item-num">1.4.1 </span>Basic Numerical Data Types Available in NumPy</a></span></li><li><span><a href="#Data-Type-Promotion" data-toc-modified-id="Data-Type-Promotion-1.4.2"><span class="toc-item-num">1.4.2 </span>Data Type Promotion</a></span></li></ul></li><li><span><a href="#Going-Further" data-toc-modified-id="Going-Further-1.5"><span class="toc-item-num">1.5 </span>Going Further</a></span></li></ul></li></ul></div>
## Learning Objectives
- Understand NumPy Array Object
## What is NumPy?
- NumPy provides the numerical backend for nearly every scientific or technical library for Python. In fact, NumPy is the foundation library for scientific computing in Python since it provides data structures and high-performing functions that the basic Python standard library cannot provide. Therefore, knowledge of this library is essential in terms of numerical calculations since its correct use can greatly influence the performance of your computations.
- NumPy provides the following additional features:
 - `ndarray`: A multidimensional array that is much faster and more efficient
than the sequences provided by Python's standard library. The core of NumPy is implemented in C and provides efficient functions for manipulating and processing arrays.
- `Element-wise computation`: A set of functions for performing this type of calculation with arrays and mathematical operations between arrays.
- `Integration with other languages such as C, C++, and FORTRAN`: A
set of tools to integrate code developed with these programming
languages.
- At a first glance, NumPy arrays bear some resemblance to Python’s list data structure. But an important difference is that while Python lists are generic containers of objects:
- NumPy arrays are homogenous and typed arrays of fixed size.
- Homogenous means that all elements in the array have the same data type.
- Fixed size means that an array cannot be resized (without creating a new array).
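A small illustration of the homogeneity point: assigning a float into an integer-typed array silently truncates the value, because the element type is fixed at creation time.

```python
import numpy as np

a = np.array([1, 2, 3])  # integer dtype is inferred from the data
a[0] = 7.9               # the value is cast to the array's dtype
print(a)                 # -> [7 2 3]
```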
## The NumPy Array Object
- The core of the NumPy Library is one main object: `ndarray` (which stands for N-dimensional array)
- This object is a multi-dimensional homogeneous array with a predetermined number of items
- In addition to the data stored in the array, this data structure also contains important metadata about the array, such as its shape, size, data type, and other attributes.
**Basic Attributes of the ndarray Class**
| Attribute | Description |
|-----------|----------------------------------------------------------------------------------------------------------|
| shape | A tuple that contains the number of elements (i.e., the length) for each dimension (axis) of the array. |
| size | The total number elements in the array. |
| ndim | Number of dimensions (axes). |
| nbytes | Number of bytes used to store the data. |
| dtype | The data type of the elements in the array. |
| itemsize  | The size in bytes of each item in the array.                                                             |
| data | A buffer containing the actual elements of the array. |
In order to use the NumPy library, we need to import it in our program. By convention,
the NumPy module is imported under the alias `np`, like so:
```
import numpy as np
```
After this, we can access functions and classes in the numpy module using the np
namespace. Throughout this notebook, we assume that the NumPy module is imported in
this way.
```
data = np.array([[10, 2], [5, 8], [1, 1]])
data
```
Here the ndarray instance data is created from a nested Python list using the
function `np.array`. More ways to create ndarray instances from data and from rules of
various kinds are introduced later in this tutorial.
```
type(data)
data.ndim
data.size
data.dtype
data.nbytes
data.itemsize
data.data
```
## Data types
- `dtype` attribute of the `ndarray` describes the data type of each element in the array.
- Since NumPy arrays are homogeneous, all elements have the same data type.
### Basic Numerical Data Types Available in NumPy
| dtype | Variants | Description |
|---------|-------------------------------------|---------------------------------------|
| int | int8, int16, int32, int64 | Integers |
| uint | uint8, uint16, uint32, uint64 | Unsigned (non-negative) integers |
| bool | Bool | Boolean (True or False) |
| float | float16, float32, float64, float128 | Floating-point numbers |
| complex | complex64, complex128, complex256 | Complex-valued floating-point numbers |
Once a NumPy array is created, its `dtype` cannot be changed, other than by creating a new copy with type-casted array values
```
data = np.array([5, 9, 87], dtype=np.float32)
data
data = np.array(data, dtype=np.int32) # use np.array function for type-casting
data
data = np.array([5, 9, 87], dtype=np.float32)
data
data = data.astype(np.int32) # Use astype method of the ndarray class for type-casting
data
```
### Data Type Promotion
When working with NumPy arrays, the data type might get promoted from one type to another, if required by the operation.
For instance, when adding a float-valued array and an integer-valued array, the resulting array is a float-valued array:
```
arr1 = np.array([0, 2, 3], dtype=float)
arr1
arr2 = np.array([10, 20, 30], dtype=int)
arr2
res = arr1 + arr2
res
res.dtype
```
<div class="alert alert-block alert-info">
In some cases, depending on the application and its requirements, it is essential to create arrays with the data type set appropriately. The default data type is `float`:
</div>
```
np.sqrt(np.array([0, -1, 2]))
np.sqrt(np.array([0, -1, 2], dtype=complex))
```
Here, using the `np.sqrt` function to compute the square root of each element in
an array gives different results depending on the data type of the array. Only when the data type of the array is complex does the square root of `-1` yield the imaginary unit (denoted as `1j` in Python).
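The promotion rules themselves can be queried directly with `np.promote_types`, which is handy when checking ahead of time what dtype an operation will produce:

```python
import numpy as np

# the smallest dtype to which both inputs can be safely cast
print(np.promote_types(np.int32, np.float64))      # -> float64
print(np.promote_types(np.float64, np.complex64))  # -> complex128
```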
## Going Further
The NumPy library is the topic of several books, including the *Guide to NumPy* by the creator of NumPy, T. Oliphant, available for free online at http://web.mit.edu/dvp/Public/numpybook.pdf, as well as *Numerical Python* (2019) and *Python for Data Analysis* (2017).
- [NumPy Reference Documentation](https://docs.scipy.org/doc/numpy/reference/)
- Robert Johansson, *Numerical Python*, 2nd ed. Urayasu-shi: Apress, 2019.
- Wes McKinney, *Python for Data Analysis*, 2nd ed. Sebastopol: O'Reilly, 2017.
<div class="alert alert-block alert-success">
<p>Next: <a href="02_memory_layout.ipynb">Memory Layout</a></p>
</div>
RMedian : Phase 3 / Clean Up Phase
```
import math
import random
import statistics
```
Test cases:
```
# User input
testcase = 3
# Automatic
X = [i for i in range(101)]
cnt = [0 for _ in range(101)]
# ------------------------------------------------------------
# Testcase 1 : Det - max(sumL, sumR) > n/2
# Unlabanced
if testcase == 1:
X = [i for i in range(101)]
L = [[i, i+1] for i in reversed(range(0, 21, 2))]
C = [i for i in range(21, 28)]
R = [[i, i+1] for i in range(28, 100, 2)]
# ------------------------------------------------------------
# Testcase 2 : AKS - |C| < log(n)
elif testcase == 2:
X = [i for i in range(101)]
L = [[i, i+1] for i in reversed(range(0, 48, 2))]
C = [i for i in range(48, 53)]
R = [[i, i+1] for i in range(53, 100, 2)]
# ------------------------------------------------------------
# Testcase 3 : Recursive - Neither
elif testcase == 3:
L = [[i, i+1] for i in reversed(range(0, 30, 2))]
C = [i for i in range(30, 71)]
R = [[i, i+1] for i in range(71, 110, 2)]
# ------------------------------------------------------------
lc = len(C)
# ------------------------------------------------------------
# Show Testcase
print('L :', L)
print('C :', C)
print('R :', R)
```
Algorithm: Phase 3
```
def phase3(X, L, C, R, cnt):
res = 'error'
n = len(X)
sumL, sumR = 0, 0
for l in L:
sumL += len(l)
for r in R:
sumR += len(r)
s = sumL - sumR
# Det Median
if max(sumL, sumR) > n/2:
res = 'DET'
if len(X) % 2 == 0:
return (X[int(len(X)/2 - 1)] + X[int(len(X)/2)]) / 2, cnt, res, s
else:
return X[int(len(X) / 2 - 0.5)], cnt, res, s
# AKS
if len(C) < math.log(n) / math.log(2):
res = 'AKS'
C.sort()
if len(C) % 2 == 0:
return (C[int(len(C)/2 - 1)] + C[int(len(C)/2)]) / 2, cnt, res, s
else:
return C[int(len(C) / 2 - 0.5)], cnt, res, s
print(sumR)
# Expand
if s < 0:
rs = []
for r in R:
rs += r
random.shuffle(rs)
for i in range(-s):
C.append(rs[i])
for r in R:
if rs[i] in r:
r.remove(rs[i])
else:
ls = []
for l in L:
ls += l
random.shuffle(ls)
for i in range(s):
C.append(ls[i])
for l in L:
if ls[i] in l:
l.remove(ls[i])
res = 'Expand'
return -1, cnt, res, s
# Test case
med, cnt, res, s = phase3(X, L, C, R, cnt)
```
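The DET branch above uses the standard even/odd median formulas. Extracted on their own (a small sketch, not the notebook's code) they agree with `statistics.median`:

```python
import statistics

def det_median(xs):
    # median of a list: mean of the two middle items for even length,
    # the single middle item for odd length
    xs = sorted(xs)
    n = len(xs)
    if n % 2 == 0:
        return (xs[n // 2 - 1] + xs[n // 2]) / 2
    return xs[n // 2]

print(det_median([3, 1, 4, 2]), statistics.median([3, 1, 4, 2]))  # -> 2.5 2.5
```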
Result:
```
def test(X, L, C, R, lc, med, cnt, res, s):
n, l, c, r, sumL, sumR, mx = len(X), len(L), len(C), len(R), 0, 0, max(cnt)
m = statistics.median(X)
for i in range(len(L)):
sumL += len(L[i])
sumR += len(R[i])
print('')
print('Test case:')
print('=======================================')
print('|X| / |L| / |C| / |R| :', n, '/', sumL, '/', c, '/', sumR)
print('=======================================')
print('Case :', res)
print('SumL - SumR :', s)
print('|C| / |C_new| :', lc, '/', len(C))
print('---------------------------------------')
print('Algo / Median :', med, '/', m)
print('=======================================')
print('max(cnt) :', mx)
print('=======================================')
return
# Test case
test(X, L, C, R, lc, med, cnt, res, s)
```
# CS229: Problem Set 1
## Problem 3: Gaussian Discriminant Analysis
**C. Combier**
This iPython Notebook provides solutions to Stanford's CS229 (Machine Learning, Fall 2017) graduate course problem set 1, taught by Andrew Ng.
The problem set can be found here: [./ps1.pdf](ps1.pdf)
I chose to write the solutions to the coding questions in Python, whereas the Stanford class is taught with Matlab/Octave.
## Notation
- $x^i$ is the $i^{th}$ feature vector
- $y^i$ is the expected outcome for the $i^{th}$ training example
- $m$ is the number of training examples
- $n$ is the number of features
### Question 3.a)
The gist of the solution is simply to apply Bayes rule, and simplify the exponential terms in the denominator which gives us the sigmoid function. The calculations are somewhat heavy:
$$
\begin{align*}
p(y=1 \mid x) & = \frac{p(x \mid y=1)p(y=1)}{p(x)} \\
& = \frac{p(x \mid y=1)p(y=1)}{p(x \mid y=1)p(y=1)+ p(x \mid y=-1)p(y=-1)} \\
& = \frac{\frac{1}{(2\pi)^{\frac{n}{2}} \lvert \Sigma \rvert^{\frac{1}{2}}} \exp \left(-\frac{1}{2} \left(x-\mu_{1} \right)^T\Sigma^{-1} \left(x-\mu_{1} \right) \right) \phi }{ \frac{1}{(2\pi)^{\frac{n}{2}} \lvert \Sigma \rvert^{\frac{1}{2}}} \exp \left(-\frac{1}{2} \left(x-\mu_{1} \right)^T\Sigma^{-1} \left(x-\mu_{1} \right) \right) \phi + \frac{1}{(2\pi)^{\frac{n}{2}} \lvert \Sigma \rvert^{\frac{1}{2}}} \exp \left(-\frac{1}{2} \left(x-\mu_{-1} \right)^T\Sigma^{-1} \left(x-\mu_{-1} \right) \right)\left(1-\phi \right)} \\
& = \frac{\phi \exp \left(-\frac{1}{2} \left(x-\mu_{1} \right)^T\Sigma^{-1} \left(x-\mu_{1} \right) \right) }{\phi \exp \left(-\frac{1}{2} \left(x-\mu_{1} \right)^T\Sigma^{-1} \left(x-\mu_{1} \right) \right) + \left(1-\phi \right) \exp \left(-\frac{1}{2} \left(x-\mu_{-1} \right)^T\Sigma^{-1} \left(x-\mu_{-1} \right) \right)} \\
& = \frac{1}{1+ \exp \left(\log\left(\frac{\left(1-\phi \right)}{\phi}\right) -\frac{1}{2} \left(x-\mu_{-1} \right)^T\Sigma^{-1} \left(x-\mu_{-1} \right) + \frac{1}{2} \left(x-\mu_{1} \right)^T\Sigma^{-1} \left(x-\mu_{1} \right) \right)} \\
& = \frac{1}{1+\exp \left(\log \left(\frac{1-\phi}{\phi}\right) -\frac{1}{2} \left(x^T \Sigma^{-1}x -2x^T \Sigma^{-1}\mu_{-1}+ \mu_{-1}^T \Sigma^{-1} \mu_{-1}\right) + \frac{1}{2} \left(x^T \Sigma^{-1}x -2x^T \Sigma^{-1}\mu_{1}+ \mu_{1}^T \Sigma^{-1} \mu_{1} \right)\right)} \\
& = \frac{1}{1+\exp \left(\log \left(\frac{1-\phi}{\phi}\right) + x^T \Sigma^{-1} \mu_{-1} - x^T \Sigma^{-1} \mu_1 - \frac{1}{2} \mu_{-1}^T \Sigma^{-1} \mu_{-1} + \frac{1}{2} \mu_1^T\Sigma^{-1}\mu_1 \right)} \\
& = \frac{1}{1+ \exp\left(\log\left(\frac{1-\phi}{\phi}\right) + x^T \Sigma^{-1} \left(\mu_{-1}-\mu_1 \right) - \frac{1}{2}\mu_{-1}^T\Sigma^{-1}\mu_{-1} + \frac{1}{2}\mu_1^T \Sigma^{-1} \mu_1 \right)} \\
\\
\end{align*}
$$
With:
- $\theta_0 = \frac{1}{2}\left(\mu_{-1}^T \Sigma^{-1} \mu_{-1}- \mu_1^T \Sigma^{-1}\mu_1 \right)-\log\frac{1-\phi}{\phi} $
- $\theta = \Sigma^{-1}\left(\mu_{1}-\mu_{-1} \right)$
we have:
$$
p(y \mid x) = \frac{1}{1+\exp \left(-y(\theta^Tx + \theta_0) \right)}
$$
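As a numeric sanity check, the sigmoid form can be compared against a direct application of Bayes' rule with the two Gaussian densities. This is a standalone sketch with made-up parameter values (the names `mu_pos`/`mu_neg` for $\mu_1$/$\mu_{-1}$ are local conventions, not from the problem set):

```python
import numpy as np

def gaussian_pdf(x, mu, Sigma):
    """Multivariate normal density, written out explicitly."""
    d = x - mu
    n = len(x)
    norm = np.sqrt((2 * np.pi) ** n * np.linalg.det(Sigma))
    return np.exp(-0.5 * d @ np.linalg.inv(Sigma) @ d) / norm

phi = 0.6
mu_pos = np.array([1.0, 2.0])    # mu_1
mu_neg = np.array([-1.0, 0.5])   # mu_{-1}
Sigma = np.array([[2.0, 0.4], [0.4, 1.0]])
Sigma_inv = np.linalg.inv(Sigma)

# theta and theta_0 exactly as derived above
theta = Sigma_inv @ (mu_pos - mu_neg)
theta0 = 0.5 * (mu_neg @ Sigma_inv @ mu_neg - mu_pos @ Sigma_inv @ mu_pos) \
         - np.log((1 - phi) / phi)

x = np.array([0.3, -1.2])
p_bayes = (gaussian_pdf(x, mu_pos, Sigma) * phi
           / (gaussian_pdf(x, mu_pos, Sigma) * phi
              + gaussian_pdf(x, mu_neg, Sigma) * (1 - phi)))
p_sigmoid = 1.0 / (1.0 + np.exp(-(theta @ x + theta0)))  # y = 1
print(np.isclose(p_bayes, p_sigmoid))  # True
```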
### Questions 3.b) and 3.c)
Question 3.b) is the special case where $n=1$. Let us prove the general case directly, as required in 3.c):
$$
\begin{align*}
\ell \left(\phi, \mu_{-1}, \mu_1, \Sigma \right) & = \log \prod_{i=1}^m p(x^{i}\mid y^i; \phi, \mu_{-1}, \mu_1, \Sigma)p(y^{i};\phi) \\
& = \sum_{i=1}^m \log p(x^{i}\mid y^{i}; \phi, \mu_{-1}, \mu_1, \Sigma) + \sum_{i=1}^m \log p(y^{i};\phi) \\
& = \sum_{i=1}^m \left[\log \frac{1}{\left(2 \pi \right)^{\frac{n}{2}} \lvert \Sigma \rvert^{\frac{1}{2}}} - \frac{1}{2} \left(x^{i} - \mu_{y^{i}} \right)^T \Sigma^{-1} \left(x^{i} - \mu_{y^{i}} \right) + \log \phi^{y^{i}} + \log \left(1- \phi \right)^{\left(1-y^{i} \right)} \right] \\
& \simeq \sum_{i=1}^m \left[- \frac{1}{2} \log \lvert \Sigma \rvert - \frac{1}{2} \left(x^{i} - \mu_{y^{i}} \right)^T \Sigma^{-1} \left(x^{i} - \mu_{y^{i}} \right) + y^{i} \log \phi + \left(1-y^{i} \right) \log \left(1- \phi \right) \right] \\
\end{align*}
$$
Now we find the maximum likelihood estimates by calculating the gradient of the log-likelihood with respect to the parameters and setting it to $0$:
$$
\begin{align*}
\frac{\partial \ell}{\partial \phi} &= \sum_{i=1}^{m}( \frac{y^i}{\phi} - \frac{1-y^i}{1-\phi}) \\
&= \frac{\sum_{i=1}^{m}1(y^i = 1)}{\phi} - \frac{m-\sum_{i=1}^{m}1(y^i = 1)}{1-\phi}
\end{align*}
$$
Therefore, $\phi = \frac{1}{m} \sum_{i=1}^m 1(y^i =1 )$, i.e. the fraction of the training examples such that $y^i = 1$.
Now for $\mu_{-1}:$
$$
\begin{align*}
\nabla_{\mu_{-1}} \ell & = - \frac{1}{2} \sum_{i : y^{i}=-1} \nabla_{\mu_{-1}} \left[ -2 \mu_{-1}^T \Sigma^{-1} x^{i} + \mu_{-1}^T \Sigma^{-1} \mu_{-1} \right] \\
& = - \frac{1}{2} \sum_{i : y^{i}=-1} \left[-2 \Sigma^{-1}x^{i} + 2 \Sigma^{-1} \mu_{-1} \right]
\end{align*}
$$
Again, we set the gradient to $0$:
$$
\begin{align*}
\sum_{i:y^i=-1} \left[\Sigma^{-1}x^{i}-\Sigma^{-1} \mu_{-1} \right] &= 0 \\
\sum_{i=1}^m 1 \left\{y^{i}=-1\right\} \Sigma^{-1} x^{i} - \sum_{i=1}^m 1 \left\{y^{i}=-1 \right\} \Sigma^{-1} \mu_{-1} &=0 \\
\end{align*}
$$
This yields:
$$
\Sigma^{-1} \mu_{-1} \sum_{i=1}^m 1 \left\{y^{i}=-1 \right\} = \Sigma^{-1} \sum_{i=1}^m 1 \left\{y^{i}=-1\right\} x^{i}
$$
Allowing us to finally write:
$$\mu_{-1} = \frac{\sum_{i=1}^m 1 \left\{y^{i}=-1\right\} x^{i}}{\sum_{i=1}^m 1 \left\{y^{i}=-1 \right\}}$$
The calculations are similar for $\mu_1$, and we obtain:
$$\mu_{1} = \frac{\sum_{i=1}^m 1 \left\{y^{i}=1\right\} x^{i}}{\sum_{i=1}^m 1 \left\{y^{i}=1 \right\}}$$
The last step is to calculate the gradient with respect to $\Sigma$. To simplify calculations, let us calculate the gradient with respect to $S = \Sigma^{-1}$.
$$
\begin{align*}
\nabla_{S} \ell & = - \frac{1}{2}\sum_{i=1}^m \nabla_{S} \left[-\log \lvert S \rvert + \left(x^{i}- \mu_{y^{i}} \right)^T S \left(x^{i}- \mu_{y^{i}} \right) \right] \\
& = - \frac{1}{2}\sum_{i=1}^m \left[-S^{-1} + \left(x^{i}- \mu_{y^{i}} \right)\left(x^{i}- \mu_{y^{i}} \right)^T \right] \\
& = \sum_{i=1}^m \frac{1}{2} \Sigma - \frac{1}{2} \sum_{i=1}^m \left(x^{i}- \mu_{y^{i}} \right)\left(x^{i}- \mu_{y^{i}} \right)^T\\
\end{align*}
$$
Again, we set the gradient to $0$, allowing us to write:
$$
\frac{1}{2} m \Sigma = \frac{1}{2} \sum_{i=1}^m \left(x^{i}- \mu_{y^{i}} \right)\left(x^{i}- \mu_{y^{i}} \right)^T \\
$$
Finally, we obtain the maximum likelihood estimate for $\Sigma$:
$$
\Sigma = \frac{1}{m}\sum_{i=1}^m \left(x^{i}- \mu_{y^{i}} \right)\left(x^{i}- \mu_{y^{i}} \right)^T
$$
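The four closed-form estimates translate directly into numpy. Below is a sketch on synthetic data drawn from a GDA model with made-up ground-truth parameters, to check that the estimates recover them (all names and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up ground-truth parameters for a 2-D GDA model
m = 5000
phi_true = 0.7
mu_pos_true = np.array([1.0, 1.0])    # mu_1
mu_neg_true = np.array([-1.0, 0.0])   # mu_{-1}
Sigma_true = np.array([[1.0, 0.3], [0.3, 0.5]])

# Sample labels, then features from the class-conditional Gaussians
y = np.where(rng.random(m) < phi_true, 1, -1)
chol = np.linalg.cholesky(Sigma_true)
X = np.where((y == 1)[:, None], mu_pos_true, mu_neg_true) \
    + rng.normal(size=(m, 2)) @ chol.T

# Closed-form maximum-likelihood estimates, as derived above
phi_hat = np.mean(y == 1)
mu_pos_hat = X[y == 1].mean(axis=0)
mu_neg_hat = X[y == -1].mean(axis=0)
centered = X - np.where((y == 1)[:, None], mu_pos_hat, mu_neg_hat)
Sigma_hat = centered.T @ centered / m

print(phi_hat)       # close to 0.7
print(mu_pos_hat)    # close to [1, 1]
print(Sigma_hat)     # close to Sigma_true
```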
# Images
```
import pathlib
import tensorflow as tf
import matplotlib.pyplot as plt
dataset_url = "https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz"
data_dir = tf.keras.utils.get_file(origin=dataset_url,
fname='flower_photos',
untar=True)
data_dir = pathlib.Path(data_dir)
image_count = len(list(data_dir.glob('*/*.jpg')))
image_count
```
### Loading images
```
dataset = tf.data.Dataset.list_files(str(data_dir/'*/*'))
for f in dataset.take(5):
print(f.numpy())
def load_image(path):
img_height = 180
img_width = 180
binary_format = tf.io.read_file(path)
image = tf.image.decode_jpeg(binary_format, channels=3)
return tf.image.resize(image, [img_height, img_width])
dataset = dataset.map(load_image, num_parallel_calls=tf.data.AUTOTUNE)
dataset = dataset.cache().shuffle(buffer_size=1000) # cache only if the dataset fits in memory
dataset = dataset.batch(2)
dataset = dataset.prefetch(buffer_size=tf.data.AUTOTUNE)
for f in dataset.take(5):
print(f.numpy().shape)
images = next(iter(dataset))
images.shape
```
### Filters
Each filter is a 3-dimensional tensor. TensorFlow stores the weights of the different filters for a given pixel and channel in the last dimension. Therefore, the structure of a tensor of filters is:
```python
[rows, columns, channels, filters]
```
where *channels* is the number of channels (i.e. the number of filters) in the layer's input tensor.
```
hfilter = tf.stack([tf.stack([tf.zeros(3), tf.ones(3), tf.zeros(3)]) for _ in range(3)])
hfilter
vfilter = tf.transpose(hfilter, [0, 2, 1])
vfilter
```
Given that the values of each filter (for a concrete pixel and channel) are in the last axis, we are going to stack both filters in the last axis.
```
filters = tf.stack([hfilter, vfilter], axis=-1)
filters.shape
outputs = tf.nn.conv2d(images, filters, strides=1, padding="SAME")
plt.figure(figsize=(20,60))
ax = plt.subplot(1, 3, 1)
plt.axis("off")
plt.imshow(images[1].numpy().astype("uint8"))
for i in range(2):
ax = plt.subplot(1, 3, i + 2)
plt.imshow(outputs[1, :, :, i], cmap="gray")
plt.axis("off")
```
### Pooling
```
outputs = tf.nn.max_pool(images, ksize=(1,2,2,1), strides=(1,2,2,1), padding='SAME')
images.shape, outputs.shape
plt.figure(figsize=(8, 8))
for i in range(2):
ax = plt.subplot(2, 2, i*2 + 1)
plt.imshow(images[i, :, :, i], cmap="gray")
plt.axis("off")
ax = plt.subplot(2, 2, i*2 + 2)
plt.imshow(outputs[i, :, :, i], cmap="gray")
plt.axis("off")
```
### Depthwise pooling
Pooling along all the channels for each pixel.
```
outputs = tf.nn.max_pool(images, ksize=(1,1,1,3), strides=(1,1,1,3), padding='SAME')
images.shape, outputs.shape
plt.figure(figsize=(8, 8))
for i in range(2):
ax = plt.subplot(2, 2, i*2 + 1)
plt.imshow(images[i, :, :, i], cmap="gray")
plt.axis("off")
ax = plt.subplot(2, 2, i*2 + 2)
plt.imshow(outputs[i, :, :, 0], cmap="gray")
plt.axis("off")
```
## Keras vs Tensorflow
```
import os
import numpy as np
batch_size = 32
img_height = 180
img_width = 180
list_ds = tf.data.Dataset.list_files(str(data_dir/'*/*'), shuffle=False)
list_ds = list_ds.shuffle(image_count, reshuffle_each_iteration=False)
class_names = np.array(sorted([item.name for item in data_dir.glob('*') if item.name != "LICENSE.txt"]))
print(class_names)
val_size = int(image_count * 0.2)
train_ds = list_ds.skip(val_size)
val_ds = list_ds.take(val_size)
print(tf.data.experimental.cardinality(train_ds).numpy())
print(tf.data.experimental.cardinality(val_ds).numpy())
def get_label(file_path):
# convert the path to a list of path components
parts = tf.strings.split(file_path, os.path.sep)
# The second to last is the class-directory
one_hot = parts[-2] == class_names
# Integer encode the label
return tf.argmax(one_hot)
get_label(b'/Users/nerea/.keras/datasets/flower_photos/tulips/8686332852_c6dcb2e86b.jpg').numpy()
def decode_img(img):
# convert the compressed string to a 3D uint8 tensor
img = tf.image.decode_jpeg(img, channels=3)
# resize the image to the desired size
return tf.image.resize(img, [img_height, img_width])
def process_path(file_path):
label = get_label(file_path)
# load the raw data from the file as a string
img = tf.io.read_file(file_path)
img = decode_img(img)
return img, label
# Set `num_parallel_calls` so multiple images are loaded/processed in parallel.
train_ds = train_ds.map(process_path, num_parallel_calls=tf.data.AUTOTUNE)
val_ds = val_ds.map(process_path, num_parallel_calls=tf.data.AUTOTUNE)
def configure_for_performance(ds):
ds = ds.cache()
ds = ds.shuffle(buffer_size=1000)
ds = ds.batch(batch_size)
ds = ds.prefetch(buffer_size=tf.data.AUTOTUNE)
return ds
train_ds = configure_for_performance(train_ds)
val_ds = configure_for_performance(val_ds)
image_batch, label_batch = next(iter(train_ds))
plt.figure(figsize=(10, 10))
for i in range(9):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(image_batch[i].numpy().astype("uint8"))
label = label_batch[i]
plt.title(class_names[label])
plt.axis("off")
```
### Keras
```
from tensorflow.keras import layers
normalization_layer = tf.keras.layers.experimental.preprocessing.Rescaling(1./255)
normalized_ds = train_ds.map(lambda x, y: (normalization_layer(x), y))
image_batch, labels_batch = next(iter(normalized_ds))
num_classes = 5
model = tf.keras.Sequential([
layers.experimental.preprocessing.Rescaling(1./255),
layers.Conv2D(16, 3, activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(32, 3, activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(64, 3, activation='relu'),
#layers.MaxPooling2D(),
layers.GlobalAvgPool2D(),
layers.Flatten(),
layers.Dense(64, activation='relu'),
layers.Dense(num_classes)
])
model.compile(
optimizer='adam',
loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model.fit(
train_ds,
validation_data=val_ds,
epochs=1
)
model.summary()
```
### Tensorflow
#### Model Implementation
```
images = tf.divide(images, 255.)
shape = tf.TensorShape([3,3,3,16])
filters_1 = tf.Variable(
initial_value=tf.initializers.glorot_uniform()(shape),
shape=shape,
name='filters_1',
dtype=tf.float32,
trainable=True,
synchronization=tf.VariableSynchronization.AUTO,
caching_device=None
)
feature_maps_1 = tf.nn.leaky_relu(
tf.nn.conv2d(images, filters_1, strides=[1, 1, 1, 1], padding="SAME"),
alpha=0.2
)
pooled_maps_1 = tf.nn.max_pool(
feature_maps_1,
ksize=(1,3,3,1),
strides=(1,3,3,1),
padding='SAME'
)
pooled_maps_1.shape
shape = tf.TensorShape([3,3,16,32])
filters_2 = tf.Variable(
initial_value=tf.initializers.glorot_uniform()(shape),
shape=shape,
name='filters_2',
dtype=tf.float32,
trainable=True,
synchronization=tf.VariableSynchronization.AUTO,
caching_device=None
)
feature_maps_2 = tf.nn.leaky_relu(
tf.nn.conv2d(
pooled_maps_1,
filters_2,
strides=[1, 1, 1, 1],
padding="SAME"
),
alpha=0.2
)
pooled_maps_2 = tf.nn.max_pool(
feature_maps_2,
ksize=(1,3,3,1),
strides=(1,3,3,1),
padding='SAME'
)
pooled_maps_2.shape
shape = tf.TensorShape([3,3,32,64])
filters_3 = tf.Variable(
initial_value=tf.initializers.glorot_uniform()(shape),
shape=shape,
name='filters_3',
dtype=tf.float32,
trainable=True,
synchronization=tf.VariableSynchronization.AUTO,
caching_device=None
)
feature_maps_3 = tf.nn.leaky_relu(
tf.nn.conv2d(
pooled_maps_2,
filters_3,
strides=[1, 1, 1, 1],
padding="SAME"
),
alpha=0.2
)
feature_maps_3.shape
pooled_maps_3 = tf.nn.max_pool(
feature_maps_3,
ksize=(1,60,60,1),
strides=(1,60,60,1),
padding='SAME'
)
pooled_maps_3.shape
flatten = tf.reshape(
pooled_maps_3,
shape=tf.TensorShape((2, 64))
)
shape = tf.TensorShape([64, 64])
W_1 = tf.Variable(
initial_value=tf.initializers.glorot_uniform()(shape),
shape=shape,
name='W_1',
dtype=tf.float32,
trainable=True,
synchronization=tf.VariableSynchronization.AUTO,
caching_device=None
)
X_1 = tf.nn.dropout(
tf.nn.leaky_relu(
tf.matmul(flatten, W_1)
),
rate=0.3
)
shape = tf.TensorShape([64, 5])
W_2 = tf.Variable(
initial_value=tf.initializers.glorot_uniform()(shape),
shape=shape,
name='W_2',
dtype=tf.float32,
trainable=True,
synchronization=tf.VariableSynchronization.AUTO,
caching_device=None
)
X_2 = tf.nn.dropout(
tf.nn.leaky_relu(
tf.matmul(X_1, W_2)
),
rate=0.3
)
scores = tf.nn.softmax(X_2)
```
#### Training
```
weights = {
"filters_1": filters_1,
"filters_2": filters_2,
"filters_3": filters_3,
"W_1": W_1,
"W_2": W_2
}
@tf.function
def classify(images, weights):
normalized_images = tf.divide(images, 255.)
feature_maps_1 = tf.nn.leaky_relu(
tf.nn.conv2d(normalized_images, filters_1, strides=[1, 1, 1, 1], padding="SAME"),
alpha=0.2
)
pooled_maps_1 = tf.nn.max_pool(
feature_maps_1,
ksize=(1,3,3,1),
strides=(1,3,3,1),
padding='SAME'
)
feature_maps_2 = tf.nn.leaky_relu(
tf.nn.conv2d(
pooled_maps_1,
filters_2,
strides=[1, 1, 1, 1],
padding="SAME"
),
alpha=0.2
)
pooled_maps_2 = tf.nn.max_pool(
feature_maps_2,
ksize=(1,3,3,1),
strides=(1,3,3,1),
padding='SAME'
)
feature_maps_3 = tf.nn.leaky_relu(
tf.nn.conv2d(
pooled_maps_2,
filters_3,
strides=[1, 1, 1, 1],
padding="SAME"
),
alpha=0.2
)
pooled_maps_3 = tf.nn.max_pool(
feature_maps_3,
ksize=(1,60,60,1),
strides=(1,60,60,1),
padding='SAME'
)
print(pooled_maps_3.shape)
flatten = tf.reshape(
pooled_maps_3,
shape=tf.TensorShape((32, 64))
)
X_1 = tf.nn.dropout(
tf.nn.leaky_relu(
tf.matmul(flatten, W_1)
),
rate=0.3
)
X_2 = tf.nn.dropout(
tf.nn.leaky_relu(
tf.matmul(X_1, W_2)
),
rate=0.3
)
scores = tf.nn.softmax(X_2)
return scores
optimizer = tf.optimizers.Adam(0.01)
num_epochs = 2
for e in range(num_epochs):
for imgs, labels in train_ds:
with tf.GradientTape() as tape:
#print(imgs.shape)
outputs = classify(imgs, weights)
#current_loss = tf.losses.SparseCategoricalCrossentropy(labels, outputs)
current_loss = tf.losses.categorical_crossentropy(outputs, tf.one_hot(labels, 5))
grads = tape.gradient(current_loss, weights)
#optimizer.apply_gradients(zip(grads, weights))
print(tf.reduce_mean(current_loss))
```
# Debug centering issue
```
# Imports
import os
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm
%matplotlib inline
from astropy.io import fits
import astropy.units as u
import hcipy as hc
from hcipy.optics.segmented_mirror import SegmentedMirror
os.chdir('../../pastis/')
import util_pastis as util
from e2e_simulators.luvoir_imaging import LuvoirAPLC
# Instantiate LUVOIR
apodizer_design = 'small'
sampling = 4
# This path is specific to the paths used in the LuvoirAPLC class
optics_input = '/Users/ilaginja/Documents/LabWork/ultra/LUVOIR_delivery_May2019/'
luvoir = LuvoirAPLC(optics_input, apodizer_design, sampling)
# Make reference image
luvoir.flatten()
psf_unaber, ref, inter = luvoir.calc_psf(ref=True, return_intermediate='efield')
# Make dark hole
dh_outer = hc.circular_aperture(2*luvoir.apod_dict[apodizer_design]['owa'] * luvoir.lam_over_d)(luvoir.focal_det)
dh_inner = hc.circular_aperture(2*luvoir.apod_dict[apodizer_design]['iwa'] * luvoir.lam_over_d)(luvoir.focal_det)
dh_mask = (dh_outer - dh_inner).astype('bool')
inter.keys()
to_plot = inter['before_lyot']
plt.figure(figsize=(12,12))
plt.subplot(2, 2, 1)
plt.title("Intensity")
hc.imshow_field(np.log10(to_plot.intensity))
plt.subplot(2, 2, 2)
plt.title("Phase")
hc.imshow_field(to_plot.phase)
plt.subplot(2, 2, 3)
plt.title("Real")
hc.imshow_field(to_plot.real)
plt.subplot(2, 2, 4)
plt.title("Imaginary")
hc.imshow_field(to_plot.imag)
(np.arange(10) + 0.5 - 10/2)
plt.figure(figsize=(10,10))
hc.imshow_field(luvoir.fpm)
#plt.grid(color='w', linestyle='-', linewidth=2)
plt.figure(figsize=(6, 6))
plt.imshow(luvoir.fpm.shaped)
print(luvoir.fpm.shaped)
util.write_fits(luvoir.fpm.shaped, '/Users/ilaginja/Documents/fpm.fits')
res = util.FFT(luvoir.fpm.shaped)
new_plot = res
im = util.zoom_point(new_plot, new_plot.shape[0]/2, new_plot.shape[0]/2, 200)
plt.figure(figsize=(12,12))
plt.subplot(2, 2, 1)
plt.title("Intensity")
plt.imshow(np.log10(np.abs(new_plot)**2))
plt.subplot(2, 2, 2)
plt.title("Phase")
plt.imshow(np.angle(new_plot))
plt.colorbar()
plt.subplot(2, 2, 3)
plt.title("Real")
#hc.imshow_field(to_plot.real)
plt.subplot(2, 2, 4)
plt.title("Imaginary")
#hc.imshow_field(to_plot.imag)
np.angle(new_plot)
print(np.min(np.angle(new_plot)))
# Plot
plt.figure(figsize=(18, 6))
plt.subplot(131)
hc.imshow_field(psf_unaber.intensity/ref.intensity.max(), norm=LogNorm())
plt.subplot(132)
hc.imshow_field(dh_mask)
plt.subplot(133)
hc.imshow_field(psf_unaber.intensity/ref.intensity.max(), norm=LogNorm(), mask=dh_mask)
dh_intensity = psf_unaber.intensity/ref.intensity.max() * dh_mask
baseline_contrast = util.dh_mean(dh_intensity, dh_mask)
#np.mean(dh_intensity[np.where(dh_intensity != 0)])
print('Baseline contrast:', baseline_contrast)
imsize = 10
im = np.zeros((imsize, imsize))
focal_plane_mask = util.circle_mask(im, imsize/2, imsize/2, imsize/2)
plt.imshow(focal_plane_mask)
out = util.FFT(focal_plane_mask)
plt.imshow(np.abs(out))
plt.imshow(np.angle(out))
np.angle(out)
```
# Some fun with functions and fractals (Informatics II)
author: Tsjerk Wassenaar
The topic of this tutorial is advanced functions in Python. This consists of several aspects:
* Functions with variable arguments lists (\*args and \*\*kwargs)
* Recursive functions
* Functions as objects
* Functions returning functions (closures)
The last two of these are mainly there to give a bit of a feel for what functions are (in Python) and what you can do with them, and are meant for *passive learning*. The first two are part of the core of Informatics 2.
The aspects of functions named above are here demonstrated by making fractals, which are mathematical images with *scaled symmetry*: the image consists of smaller copies of itself, which consist of smaller copies of themselves. Such fractals actually occur in biological systems, and can be seen in the structures of weeds and trees. Nice examples can be found <a href="http://paulbourke.net/fractals/fracintro/">here</a>.
We'll be drawing the fractals first in 2D with turtle graphics. Towards the end, we'll be able to extend to 3D and generate a fractal structure for drawing with **pypovray** (optional).
Just to set a few things straight:
* You **don't** need to know fractals, L-systems and any of the specific ones named and used.
* You **do** need to understand *recursive functions* and *recursion depth*
* You **don't** need to know (reproduce) the functions used in this tutorial
* You **do** need to understand how the functions work and be able to put them to use in the template
* You **do** need to write a template and put the functions in to make this work. Although... you can also work interactively to try things out.
When writing a turtle program using the template, you can start with the following basic main function, to keep the image until you press the _any_ key:
```
def main(args):
"""Docstring goes here"""
# Preparation
# Processing
# Finishing
input('Press any key to continue')
return 0
```
## Recursion
```
def recurse():
print("A recursive function is a function that calls itself, like:")
recurse()
recurse()
```
So, a recursive function is one that calls itself.
Well, that's that. So, we can continue to the next topic...
On the other hand, it is good to think about how that works in practice, and about why you'd want to do it. Just to set things straight: there is nothing you can do with recursion that you can't do with a for loop and creative use of (nested) lists. However, in some cases you'd have to get really creative, and you may be better off splitting your problem into parts and applying the same function/strategy to each part.
A classic example of a recursive function is the factorial. The factorial of an integer is the product of that number and *all* smaller positive integers. It's typically written as n!. So the outcome of 5! is 5\*4\*3\*2\*1 = 120. There is one additional rule: by definition 0! = 1
With that, we can write a factorial function. First, look at the for-loop way:
```
def factorial(n):
result = 1
for num in range(n,0,-1):
result *= num
return result
```
We can turn this into a recursive function, by considering two cases: n is 0, or n is not. If n is 0, then the result is 1 (by definition). If n is not (yet) zero, then the result is n * (n-1)!, so n times the factorial of n-1:
```
def factorial(n):
if not n:
return 1
return n * factorial(n - 1)
```
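A quick sanity check that the two versions agree (the names are suffixed here so both definitions fit in one snippet):

```python
def factorial_iterative(n):
    """Iterative factorial, as in the for-loop version above."""
    result = 1
    for num in range(n, 0, -1):
        result *= num
    return result

def factorial_recursive(n):
    """Recursive factorial: n! = n * (n-1)!, with 0! = 1 by definition."""
    if not n:
        return 1
    return n * factorial_recursive(n - 1)

# Both versions should agree for all small inputs
for n in range(8):
    assert factorial_iterative(n) == factorial_recursive(n)

print(factorial_recursive(5))  # 120
```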
### Assignment:
Write a program factorial.py that takes a number as command line argument and prints the factorial of that number. Start with a correct template and use the recursive function. Write docstrings!
## Fractals and recursion
For fractals, we'll focus on Lindenmayer fractals (L-systems). These are written as a series of steps, like forward, right, and left. The trick is that a step can be replaced by a sequence of steps, in which steps can be replaced by that sequence of steps again, and so on and so forth. In practice, that has to end somewhere, and we'll call the number of replacement rounds the depth of the sequence.
So, the L-system consists of:
* the **axiom**: the start sequence
* the **rules**: the replacement rules
* the **depth**: the depth of the recursion
The result is a sequence of instructions (forward, right, and left) that we can nicely pass to our turtle friend Don.
```
import turtle
don = turtle.Turtle(shape="turtle")
```
We start off with the **Hilbert function**, which can be written as an L-system (thanks Wikipedia):
* axiom: A
* rules:
- A → -BF+AFA+FB-
- B → +AF-BFB-FA+
Here, "F" means "draw forward", "−" means "turn left 90°", "+" means "turn right 90°" (see turtle graphics), and "A" and "B" are ignored during drawing.
**This means** that we start with 'A', and then replace 'A' with '-BF+AFA+FB-'. In the result, we replace each 'A' with that same string, but each B is replaced with '+AF-BFB-FA+'. And we can repeat that...
```
def hilbert(depth, sequence='A'):
if not depth:
return sequence
out = []
for character in sequence:
if character == 'A':
out.extend('-BF+AFA+FB-')
elif character == 'B':
out.extend('+AF-BFB-FA+')
else:
out.append(character)
return hilbert(depth - 1, out)
print("".join(hilbert(0)))
print("".join(hilbert(1)))
print("".join(hilbert(2)))
print("".join(hilbert(3)))
```
Now, for each F in the sequence don goes forward, for each - he goes left and for each + he goes right. We can write this with an if/elif clause:
```
for char in hilbert(3):
if char == 'F':
don.forward(10)
elif char == '+':
don.right(90)
elif char == '-':
don.left(90)
```
This drawing loop is specific to the Hilbert curve, which is pretty cool, as it generates a maze-like drawing. But there are many other interesting L-systems, and we can capture more of them using the advanced function syntax, which allows us to specify an arbitrary number of keyword arguments:
```
def l_system(depth, axiom, **rules):
if not depth:
return axiom
# Basic, most straight-forward implementation
# Note 1: it doesn't matter if axiom is a string or a list
# Note 2: consider the difference between .extend() and .append()
out = []
for char in axiom:
if char in rules:
out.extend(rules[char])
else:
out.append(char)
# Two alternative implementations. If you want to try
# an alternative, comment out the original first.
# It won't change the answer, but it will take more time
# if you keep the code active.
# I. Alternative implementation using dict.get
# --------------------------------------------
# out = []
# for char in axiom:
# out.extend(rules.get(char, [char]))
# II. Alternative implementation in one line using list comprehension
# -------------------------------------------------------------------
# out = [i for char in axiom for i in rules.get(char, char)]
# Note 3: See how comments are used to annotate the code... :)
return l_system(depth - 1, out, **rules)
```
With this, we can write the Hilbert function much shorter:
```
def hilbert(depth):
return l_system(depth, axiom='A', A='-BF+AFA+FB-', B='+AF-BFB-FA+')
```
And we can write a Sierpinski gasket, using
* **axiom**: f
* **rules**:
- F: f+F+f
- f: F-f-F
With the note that both f and F mean forward.
```
def sierpinski_gasket(depth):
return l_system(depth, axiom='f', F='f+F+f', f='F-f-F')
for char in sierpinski_gasket(7):
if char in 'Ff':
don.forward(1)
elif char == '+':
don.right(60)
elif char == '-':
don.left(60)
```
The next step is getting rid of the if/elif/elif/... clause, to make the handling of actions a bit nicer.
## Functions as objects
Getting rid of an if/elif/.. construct typically involves introducing a dictionary. A good reason to do that is that a dictionary requires less bookkeeping. However, in our case, we deal with actions, not values. Then again, actions are processes, which can be described as functions. So, we'll put *functions* in a dictionary!
Again, what we'll do is just a different approach; whether it's actually better depends on the situation.
The idea of putting functions in a dictionary hinges on using functions as objects. Functions are objects that are *callable*: you can add parentheses to invoke the action. Without parentheses, it's just the function object. Consider the following example:
```
blabla = print
blabla("Hello World!")
```
So, we assign the print *object* to a new variable, called *blabla*, and we can use that name too as print function. Likewise, we can store the function in a tuple, list, set or dictionary:
```
actions = {"p": print}
actions["p"]("Hello World!")
```
Notice what happens there. We store the print function in a dictionary, bound to the key "p". Then we use the key "p" to get the corresponding value from the dictionary, and we *call* the process, by adding the parentheses with the argument "Hello World!".
Now let's do that with the actions for Don.
```
def forward(turt, step=5):
turt.forward(step)
def right(turt, angle=90):
turt.right(angle)
def left(turt, angle=90):
turt.left(angle)
actions = {'F': forward, 'f': forward, '+': right, '-': left}
```
Take a moment and think about the function definitions (and write docstrings!). The functions are very simple, but it's not easily possible to actually put the turtle functions in the dictionary. Well, actually it *is* easy once you know how, but it's not actually easy to call them nicely then. The approach above is easier to deal with:
```
for char in hilbert(5):
if char in actions:
actions[char](don)
```
## Functions with function definitions
And now we'll be taking a step further. Note: **this is not mandatory stuff for the exam**. However, the following things may give you a good feel for the idea of functions being objects, just like (other) variables. So, to allow the actions to have different angles/steps, to deal with the Sierpinski thing, we'll generate the actions dictionary with a function, in which we can set the angle and the step:
```
def actions(step, angle):
def forward(turt):
turt.forward(step)
def right(turt):
turt.right(angle)
def left(turt):
turt.left(angle)
return {'F': forward, 'f': forward, '+': right, '-': left}
```
Now, put this function in your code and write the docstring. Take a moment to see what is happening here. *Within the function **actions** we define three functions, which take a **turt** as argument. The three functions are put in a dictionary, and this dictionary is returned.* The dictionary can then be used:
```
actions_dict = actions(step=1, angle=60)
for char in sierpinski_gasket(7):
if char in actions_dict:
actions_dict[char](don)
```
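To see that each call to `actions` really captures its own `step` and `angle`, here is a sketch with a hypothetical `RecorderTurtle` (not part of the tutorial above) that only logs the calls instead of drawing, so it runs without a screen:

```python
class RecorderTurtle:
    """A hypothetical stand-in for turtle.Turtle that only logs the calls."""
    def __init__(self):
        self.log = []
    def forward(self, step):
        self.log.append(('forward', step))
    def right(self, angle):
        self.log.append(('right', angle))
    def left(self, angle):
        self.log.append(('left', angle))

def actions(step, angle):
    """Return an action dictionary whose functions close over step and angle."""
    def forward(turt):
        turt.forward(step)
    def right(turt):
        turt.right(angle)
    def left(turt):
        turt.left(angle)
    return {'F': forward, 'f': forward, '+': right, '-': left}

# Two independent dictionaries: each set of closures remembers its own values
hilbert_actions = actions(step=10, angle=90)
gasket_actions = actions(step=1, angle=60)

don = RecorderTurtle()
for char in 'F+F-':
    gasket_actions[char](don)
print(don.log)  # [('forward', 1), ('right', 60), ('forward', 1), ('left', 60)]
```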
```
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
%matplotlib inline
import astropy.coordinates as coord
from astropy.table import Table
import astropy.units as u
import gala.coordinates as gc
import gala.dynamics as gd
from gala.dynamics import mockstream
import gala.potential as gp
from gala.units import galactic
plt.style.use('notebook')
t = Table.read('../data/stream_track.txt', format='ascii.commented_header', delimiter=',')
tp = Table.read('../data/pvd_stream.dat', format='ascii.commented_header', delimiter=' ')
```
### Rotate the galaxy to lie along z=0
```
pa = 154*u.deg # https://www.flickr.com/photos/dcrowson/35166799656
theta = 64*u.deg
x = np.cos(theta)*t['x'] + np.sin(theta)*t['y']
z = -np.sin(theta)*t['x'] + np.cos(theta)*t['y']
xpvd = np.cos(theta)*tp['x_pvd_kpc'] + np.sin(theta)*tp['y_pvd_kpc']
zpvd = -np.sin(theta)*tp['x_pvd_kpc'] + np.cos(theta)*tp['y_pvd_kpc']
# progenitor as densest location on the stream
xp_rot, zp_rot = -38.7, -2.3
xp_ = np.cos(theta)*xp_rot + np.sin(theta)*zp_rot
zp_ = -np.sin(theta)*xp_rot + np.cos(theta)*zp_rot
plt.plot(t['x'], t['y'], 'ko', alpha=0.1)
plt.plot(x, z, 'ko')
plt.plot(xp_, zp_, 'kx', ms=10, mew=2)
plt.xlabel('x [kpc]')
plt.ylabel('z [kpc]')
plt.gca().set_aspect('equal')
```
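The two lines above are a standard 2-D rotation. As a standalone numpy sketch (with the same assumed angle, and made-up test points), applying the transpose undoes it, since the inverse of a rotation matrix is its transpose:

```python
import numpy as np

theta = np.deg2rad(64)  # same angle as in the cell above
R = np.array([[np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])  # matches x' = cos*x + sin*y, z' = -sin*x + cos*y

pts = np.array([[-38.7, 10.0],
                [-2.3, 5.0]])   # columns are (x, y) points
rotated = R @ pts
recovered = R.T @ rotated       # inverse rotation
print(np.allclose(recovered, pts))  # True
```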
### Set up gravitational potential
```
# most params from Martinez-Delgado paper, + tuned halo mass to reproduce Casertano measurement of max vcirc
# https://ui.adsabs.harvard.edu/abs/2008ApJ...689..184M/abstract
# adopted halo flattening of 0.95 to match the trailing tail curvature
ham = gp.Hamiltonian(gp.MilkyWayPotential(nucleus=dict(m=0),
halo=dict(c=0.95, m=1.96e11*u.Msun, r_s=8.2*u.kpc),
bulge=dict(m=2.3e10*u.Msun, c=0.6*u.kpc),
disk=dict(m=8.4e10*u.Msun, a=6.24*u.kpc, b=0.26*u.kpc)))
xyz = np.zeros((3, 128))
xyz[0] = np.linspace(1, 25, 128)
print('maximal circular velocity {:.0f}'.format(np.max(ham.potential.circular_velocity(xyz))))
plt.figure(figsize=(8,5))
plt.plot(xyz[0], ham.potential.circular_velocity(xyz))
plt.axhline(227, color='k')
plt.xlabel('r [kpc]')
plt.ylabel('$V_c$ [km s$^{-1}$]')
plt.tight_layout()
for d in [200,225,250]:
print('{:.0f} kpc {:.2g}'.format(d, ham.potential.mass_enclosed(d*u.kpc)[0]))
```
### Pick orbit for the satellite
```
# trial progenitor 6D location
xp = np.array([xp_, 0, zp_]) * u.kpc
vp = np.array([30,85,165]) * u.km/u.s
w0 = gd.PhaseSpacePosition(xp, vel=vp)
dt = 0.5*u.Myr
n_steps = 900
orbit_fwd = ham.integrate_orbit(w0, dt=dt, n_steps=n_steps)
orbit_rr = ham.integrate_orbit(w0, dt=-dt, n_steps=n_steps)
plt.plot(x, z, 'ko')
for orbit in [orbit_fwd, orbit_rr]:
plt.plot(orbit.cartesian.x, orbit.cartesian.z, '-', color='tab:blue')
plt.xlabel('x [kpc]')
plt.ylabel('z [kpc]')
plt.gca().set_aspect('equal')
```
### Create a stream model
```
f = 3
prog_orbit = ham.integrate_orbit(w0, dt=-dt/f, n_steps=5200*f)
prog_orbit = prog_orbit[::-1]
n_times = np.size(prog_orbit.t)
prog_mass = np.linspace(2e8, 0, n_times)
# stream = mockstream.fardal_stream(ham, prog_orbit, prog_mass=prog_mass, release_every=1, seed=4359)
# fardal values for particle release conditions
k_mean = np.array([2., 0, 0, 0, 0.3, 0])
k_disp = np.array([0.5, 0, 0.5, 0, 0.5, 0.5])
# tweaks to reproduce smaller offset of tidal arms, trailing tail extension
k_mean = np.array([1.2, 0, 0, 0.0, 0.1, 0])
k_disp = np.array([0.5, 0, 0.5, 0.02, 0.5, 0.5])
stream = mockstream.mock_stream(ham, prog_orbit, prog_mass=prog_mass, release_every=1, seed=4359,
k_mean=k_mean, k_disp=k_disp)
plt.figure(figsize=(10,10))
plt.plot(x, z, 'ko', ms=4, label='Dragonfly (Colleen)')
plt.plot(xpvd, zpvd, 'ro', ms=4, label='Dragonfly (Pieter)')
plt.plot(prog_orbit.cartesian.x, prog_orbit.cartesian.z, '-', color='tab:blue', label='Orbit', alpha=0.5)
plt.plot(stream.cartesian.x, stream.cartesian.z, '.', color='0.3', ms=1, alpha=0.05, label='Stream model')
plt.legend(fontsize='small', loc=1)
plt.xlabel('x [kpc]')
plt.ylabel('z [kpc]')
plt.xlim(-40,130)
plt.ylim(-60,70)
plt.gca().set_aspect('equal')
plt.savefig('../plots/trial_model_xz.png', dpi=200)
Ns = np.size(stream.cartesian.x)
Nsh = int(Ns/2)
Nsq = int(Ns/4)
xp, vp
tout_stream = Table([stream.cartesian.x, stream.cartesian.z], names=('x', 'z'))
tout_stream.write('../data/stream.fits', overwrite=True)
tout_orbit = Table([prog_orbit.cartesian.x, prog_orbit.cartesian.z, prog_orbit.t], names=('x', 'z', 't'))
tout_orbit.write('../data/orbit.fits', overwrite=True)
```
```
import arviz as az
import pystan
import numpy as np
import ujson as json
with open("radon.json", "rb") as f:
    radon_data = json.load(f)
key_renaming = {"x": "floor_idx", "county": "county_idx", "u": "uranium"}
radon_data = {
key_renaming.get(key, key): np.array(value) if isinstance(value, list) else value
for key, value in radon_data.items()
}
radon_data["county_idx"] = radon_data["county_idx"] + 1  # shift to 1-based indexing for Stan
prior_code = """
data {
int<lower=0> J;
int<lower=0> N;
int floor_idx[N];
int county_idx[N];
real uranium[J];
}
generated quantities {
real g[2];
real<lower=0> sigma_a = exponential_rng(1);
real<lower=0> sigma = exponential_rng(1);
real b = normal_rng(0, 1);
real za_county[J];
real y_hat[N];
real a[J];
real a_county[J];
g[1] = normal_rng(0, 10);
g[2] = normal_rng(0, 10);
for (i in 1:J) {
za_county[i] = normal_rng(0, 1);
a[i] = g[1] + g[2] * uranium[i];
a_county[i] = a[i] + za_county[i] * sigma_a;
}
for (j in 1:N) {
y_hat[j] = normal_rng(a_county[county_idx[j]] + b * floor_idx[j], sigma);
}
}
"""
prior_model = pystan.StanModel(model_code=prior_code)
prior_data = {key: value for key, value in radon_data.items() if key not in ("county_name", "y")}
prior = prior_model.sampling(data=prior_data, iter=500, algorithm="Fixed_param")
radon_code = """
data {
int<lower=0> J;
int<lower=0> N;
int floor_idx[N];
int county_idx[N];
real uranium[J];
real y[N];
}
parameters {
real g[2];
real<lower=0> sigma_a;
real<lower=0> sigma;
real za_county[J];
real b;
}
transformed parameters {
real theta[N];
real a[J];
real a_county[J];
for (i in 1:J) {
a[i] = g[1] + g[2] * uranium[i];
a_county[i] = a[i] + za_county[i] * sigma_a;
}
for (j in 1:N)
theta[j] = a_county[county_idx[j]] + b * floor_idx[j];
}
model {
g ~ normal(0, 10);
sigma_a ~ exponential(1);
za_county ~ normal(0, 1);
b ~ normal(0, 1);
sigma ~ exponential(1);
for (j in 1:N)
y[j] ~ normal(theta[j], sigma);
}
generated quantities {
real log_lik[N];
real y_hat[N];
for (j in 1:N) {
log_lik[j] = normal_lpdf(y[j] | theta[j], sigma);
y_hat[j] = normal_rng(theta[j], sigma);
}
}
"""
stan_model = pystan.StanModel(model_code=radon_code)
model_data = {key: value for key, value in radon_data.items() if key not in ("county_name",)}
fit = stan_model.sampling(data=model_data, control={"adapt_delta": 0.99}, iter=1500, warmup=1000)
coords = {
"level": ["basement", "floor"],
"obs_id": np.arange(radon_data["y"].size),
"county": radon_data["county_name"],
"g_coef": ["intercept", "slope"],
}
dims = {
"g" : ["g_coef"],
"za_county" : ["county"],
"y" : ["obs_id"],
"y_hat" : ["obs_id"],
"floor_idx" : ["obs_id"],
"county_idx" : ["obs_id"],
"theta" : ["obs_id"],
"uranium" : ["county"],
"a" : ["county"],
"a_county" : ["county"],
}
idata = az.from_pystan(
posterior=fit,
posterior_predictive="y_hat",
prior=prior,
prior_predictive="y_hat",
observed_data=["y"],
constant_data=["floor_idx", "county_idx", "uranium"],
log_likelihood={"y": "log_lik"},
coords=coords,
dims=dims,
).rename({"y_hat": "y"}) # renames both prior and posterior predictive
idata
idata.to_netcdf("pystan.nc")
```
# Variable Distribution Type Tests (Gaussian)
- Shapiro-Wilk Test
- D’Agostino’s K^2 Test
- Anderson-Darling Test
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(font_scale=2, palette= "viridis")
from scipy import stats
data = pd.read_csv('../data/pulse_data.csv')
data.head()
```
## Visual Normality Check
```
data.Height.describe()
data.skew()
data.kurtosis()
plt.figure(figsize=(10,8))
sns.histplot(data=data, x='Height')
plt.show()
plt.figure(figsize=(10,8))
sns.histplot(data=data, x='Age', kde=True)
plt.show()
# Checking for normality by Q-Q plot graph
plt.figure(figsize=(12, 8))
stats.probplot(data['Age'], plot=plt, dist='norm')
plt.show()
```
__The data points should lie on the red line; points far from it indicate deviations from normality.__
```
# Checking for normality by Q-Q plot graph
plt.figure(figsize=(12, 8))
stats.probplot(data['Height'], plot=plt, dist='norm')
plt.show()
```
__The same reading applies to Height: points far from the red line indicate deviations from normality.__
## Shapiro-Wilk Test
Tests whether a data sample has a Gaussian distribution/normal distribution.
### Assumptions
Observations in each sample are independent and identically distributed (iid).
### Interpretation
- H0: The sample has a Gaussian/normal distribution.
- Ha: The sample does not have a Gaussian/normal distribution.
```
stats.shapiro(data['Age'])
stat, p_value = stats.shapiro(data['Age'])
print(f'statistic = {stat}, p-value = {p_value}')
alpha = 0.05
if p_value > alpha:
    print("The sample has a normal distribution (fail to reject the null hypothesis; the result is not significant)")
else:
    print("The sample does not have a normal distribution (reject the null hypothesis; the result is significant)")
```
## D’Agostino’s K^2 Test
Tests whether a data sample has a Gaussian distribution/normal distribution.
### Assumptions
Observations in each sample are independent and identically distributed (iid).
### Interpretation
- H0: The sample has a Gaussian/normal distribution.
- Ha: The sample does not have a Gaussian/normal distribution.
```
stats.normaltest(data['Age'])
stat, p_value = stats.normaltest(data['Age'])
print(f'statistic = {stat}, p-value = {p_value}')
alpha = 0.05
if p_value > alpha:
    print("The sample has a normal distribution (fail to reject the null hypothesis; the result is not significant)")
else:
    print("The sample does not have a normal distribution (reject the null hypothesis; the result is significant)")
```
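The header above also lists a third test, Anderson-Darling, which the cells don't cover. A sketch of it (the helper name is illustrative) follows the same H0/Ha reading, except SciPy reports critical values at fixed significance levels instead of a p-value:

```python
import numpy as np
from scipy import stats

def anderson_normality(sample):
    """Anderson-Darling normality test: returns the test statistic and,
    for each tabulated significance level, whether the sample passes
    (True = cannot reject H0 that the sample is Gaussian)."""
    result = stats.anderson(np.asarray(sample), dist='norm')
    verdicts = {}
    for crit, sig in zip(result.critical_values, result.significance_level):
        verdicts[float(sig)] = bool(result.statistic < crit)
    return result.statistic, verdicts

# e.g. anderson_normality(data['Age']) with the DataFrame loaded above
```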
__Remember__
- If Data Is Gaussian:
- Use Parametric Statistical Methods
- Else:
- Use Nonparametric Statistical Methods
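The rule of thumb above can be sketched as a helper that picks the two-sample test automatically (the function name and the Shapiro-based gate are illustrative choices, not the only option):

```python
from scipy import stats

def compare_groups(sample_a, sample_b, alpha=0.05):
    """Compare two independent samples: use a parametric t-test when both
    samples pass the Shapiro-Wilk normality check, otherwise fall back to
    the nonparametric Mann-Whitney U test."""
    normal_a = stats.shapiro(sample_a).pvalue > alpha
    normal_b = stats.shapiro(sample_b).pvalue > alpha
    if normal_a and normal_b:
        # both samples look Gaussian -> parametric method
        return 't-test', stats.ttest_ind(sample_a, sample_b)
    # otherwise -> nonparametric method
    return 'mann-whitney', stats.mannwhitneyu(sample_a, sample_b)
```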
```
#data manipulation
from pathlib import Path
import numpy as np
from numpy import percentile
from datetime import datetime, timedelta
import xarray as xr
import pandas as pd
import statsmodels.api as sm
from statsmodels.sandbox.regression.predstd import wls_prediction_std
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn import metrics
from sklearn.metrics import r2_score
from sklearn.metrics import mean_squared_error
from math import sqrt
import scipy.stats
from scipy.stats import weibull_min
#plotting
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.ticker as mticker
import matplotlib.patches as mpatch
from matplotlib.transforms import offset_copy
import matplotlib.colors as colors
import seaborn as seabornInstance
import seaborn as sns
from reliability.Fitters import Fit_Weibull_2P
%matplotlib inline
# CSV file location
# swh_sa.csv holds the Saral/AltiKa altimeter wind speed and Hs data
df = pd.read_csv("swh_sa.csv", sep='\t')
df.head()
#Satellite wind speed data within 0.5 dd of 44017
colocated7= df[((df[['lon','lat']] - [287.951,40.693])**2).sum(axis=1) < 0.5**2]
yy=colocated7['swh']
xx=colocated7['wind_speed_alt']
data = colocated7["wind_speed_alt"]
fig,ax=plt.subplots(figsize=(10,9))
shape, loc, scale = weibull_min.fit(data, floc=0, fc=2)  # location fixed at 0; shape fixed at 2 via fc=2
x = np.linspace(data.min(), data.max(), 100)
plt.plot(x, weibull_min(shape, loc, scale).pdf(x), color="blue", label="Buoy 44017 - 0.5 decimal degrees Saral/AltiKa (Scale: "+str(round(scale, 2))+"; Shape: "+str(round(shape, 2))+")")
sns.distplot(xx,hist_kws=dict(alpha=1),color='lightskyblue',kde_kws=dict(alpha=0))
def change_width(ax, new_value):
    for patch in ax.patches:
        current_width = patch.get_width()
        diff = current_width - new_value
        patch.set_width(new_value)              # change the bar width
        patch.set_x(patch.get_x() + diff * .5)  # recenter the bar
change_width(ax, 1.6)
plt.xlabel('Wind Speed (m/s)', fontsize=15)
plt.ylabel('Density Function', fontsize=15)
plt.tick_params(axis='both', which='major', labelsize=15)
plt.xlim(0,25)
plt.ylim(0,0.17)
# parameters
A = round(scale,4) # from weibull
k = round(shape,4)
air_density = 1.225 # kg/m^3
from scipy.special import gamma, factorial
mean_energy_density = 0.5*air_density*A**3*gamma(1+3/k)
A
k
mean_energy_density
#Corresponding buoy wind speed data at 0.5 decimal degrees radius
df2=pd.read_csv('44017_df_50.csv')
x1=df2['Buoy 44017 U10']
y1=df2['Buoy 44017 Wave Height']
df2
df2['Date'] = pd.to_datetime(df2["Buoy 44017 Time"])
df2['month'] = df2['Date'].dt.month_name()
df2['day'] = df2['Date'].dt.day_name()
df2.describe()
data = df2["Buoy 44017 U10"]
fig,ax=plt.subplots(figsize=(10,9))
sns.distplot(x1,hist_kws=dict(alpha=1),color='lightskyblue',kde_kws=dict(alpha=0))
shape, loc, scale = weibull_min.fit(data, floc=0, fc=2)  # location fixed at 0; shape fixed at 2 via fc=2
x = np.linspace(data.min(), data.max(), 100)
plt.plot(x, weibull_min(shape, loc, scale).pdf(x), color="blue", label="Buoy 44017 - 0.5 decimal degrees (Scale: "+str(round(scale, 2))+"; Shape: "+str(round(shape, 2))+")")
def change_width(ax, new_value):
    for patch in ax.patches:
        current_width = patch.get_width()
        diff = current_width - new_value
        patch.set_width(new_value)              # change the bar width
        patch.set_x(patch.get_x() + diff * .5)  # recenter the bar
change_width(ax, 1.6)
plt.xlabel('$u_{10}$ (m/s)', fontsize=15)
plt.ylabel('Density Function', fontsize=15)
plt.tick_params(axis='both', which='major', labelsize=15)
plt.xlim(0,25)
plt.ylim(0,0.17)
# parameters
A = round(scale,4) # from weibull
k = round(shape,4)
air_density = 1.225 # kg/m^3
from scipy.special import gamma, factorial
mean_energy_density = 0.5*air_density*A**3*gamma(1+3/k)
A
k
# Satellite wave height within 0.5 dd of buoy 44017
data = colocated7['swh']
fig,ax=plt.subplots(figsize=(10,9))
shape, loc, scale = weibull_min.fit(data, floc=0) # if you want to fix shape as 2: set fc=2
x = np.linspace(data.min(), data.max(), 100)
plt.plot(x, weibull_min(shape, loc, scale).pdf(x), color="blue", label="Buoy 44017 - 0.5 decimal degrees Saral/AltiKa (Scale: "+str(round(scale, 2))+"; Shape: "+str(round(shape, 2))+")")
sns.distplot(yy,hist_kws=dict(alpha=1),color='lightskyblue',kde_kws=dict(alpha=0))
def change_width(ax, new_value):
    for patch in ax.patches:
        current_width = patch.get_width()
        diff = current_width - new_value
        patch.set_width(new_value)              # change the bar width
        patch.set_x(patch.get_x() + diff * .5)  # recenter the bar
change_width(ax, 0.31)
plt.xlabel('$H_s$ (m)', fontsize=15)
plt.ylabel('Density Function', fontsize=15)
plt.tick_params(axis='both', which='major', labelsize=15)
plt.xlim(0,6)
plt.ylim(0,0.89)
# parameters
A = round(scale,4) # from weibull
k = round(shape,4)
air_density = 1.225 # kg/m^3
from scipy.special import gamma, factorial
mean_energy_density = 0.5*air_density*A**3*gamma(1+3/k)
A
k
#corresponding buoy wave height
data = df2['Buoy 44017 Wave Height']
fig,ax=plt.subplots(figsize=(10,9))
sns.distplot(y1,hist_kws=dict(alpha=1),color='lightskyblue',kde_kws=dict(alpha=0))
shape, loc, scale = weibull_min.fit(data, floc=0) # if you want to fix shape as 2: set fc=2
x = np.linspace(data.min(), data.max(), 100)
plt.plot(x, weibull_min(shape, loc, scale).pdf(x), color="blue", label="Buoy 44017 - 0.5 decimal degrees (Scale: "+str(round(scale, 2))+"; Shape: "+str(round(shape, 2))+")")
def change_width(ax, new_value):
    for patch in ax.patches:
        current_width = patch.get_width()
        diff = current_width - new_value
        patch.set_width(new_value)              # change the bar width
        patch.set_x(patch.get_x() + diff * .5)  # recenter the bar
change_width(ax, 0.3)
plt.xlabel('$H_s$ (m)', fontsize=15)
plt.ylabel('Density Function', fontsize=15)
plt.xlim(0,6)
plt.ylim(0,0.89)
plt.tick_params(axis='both', which='major', labelsize=15)
# parameters
A = round(scale,4) # from weibull
k = round(shape,4)
air_density = 1.225 # kg/m^3
from scipy.special import gamma, factorial
mean_energy_density = 0.5*air_density*A**3*gamma(1+3/k)
A
k
# Buoy 44017 wind and wave data
df1=pd.read_csv('b44017_wind_wave.csv', sep='\t')
x2=df1['u10']
y2=df1['WVHT']
data = df1['u10']
fig,ax=plt.subplots(figsize=(10,9))
#plt.hist(data, density=True, alpha=0.5)
shape, loc, scale = weibull_min.fit(data, floc=0, fc=2)  # location fixed at 0; shape fixed at 2 via fc=2
x = np.linspace(data.min(), data.max(), 100)
plt.plot(x, weibull_min(shape, loc, scale).pdf(x), color="blue", label="Buoy 44017 (Scale: "+str(round(scale, 2))+"; Shape: "+str(round(shape, 2))+")")
sns.distplot(x2,hist_kws=dict(alpha=1),color='lightskyblue',kde_kws=dict(alpha=0))
def change_width(ax, new_value):
    for patch in ax.patches:
        current_width = patch.get_width()
        diff = current_width - new_value
        patch.set_width(new_value)              # change the bar width
        patch.set_x(patch.get_x() + diff * .5)  # recenter the bar
change_width(ax, 1.6)
plt.xlabel('$u_{10}$ (m/s)', fontsize=15)
plt.ylabel('Density Function', fontsize=15)
plt.xlim(0,25)
plt.ylim(0,0.17)
plt.tick_params(axis='both', which='major', labelsize=15)
# parameters
A = round(scale,4) # from weibull
k = round(shape,4)
air_density = 1.225 # kg/m^3
from scipy.special import gamma, factorial
mean_energy_density = 0.5*air_density*A**3*gamma(1+3/k)
A
data = df1['WVHT']
fig,ax=plt.subplots(figsize=(10,9))
shape, loc, scale = weibull_min.fit(data, floc=0) # if you want to fix shape as 2: set fc=2
x = np.linspace(data.min(), data.max(), 100)
plt.plot(x, weibull_min(shape, loc, scale).pdf(x), color="blue", label="Buoy 44017 (Scale: "+str(round(scale, 2))+"; Shape: "+str(round(shape, 2))+")")
sns.distplot(y2,hist_kws=dict(alpha=1),color='lightskyblue',kde_kws=dict(alpha=0))
def change_width(ax, new_value):
    for patch in ax.patches:
        current_width = patch.get_width()
        diff = current_width - new_value
        patch.set_width(new_value)              # change the bar width
        patch.set_x(patch.get_x() + diff * .5)  # recenter the bar
change_width(ax, 0.3)
plt.xlabel('$H_s$ (m)', fontsize=15)
plt.ylabel('Density Function', fontsize=15)
plt.xlim(0,6)
plt.ylim(0,0.89)
plt.tick_params(axis='both', which='major', labelsize=15)
# parameters
A = round(scale,4) # from weibull
k = round(shape,4)
air_density = 1.225 # kg/m^3
from scipy.special import gamma, factorial
mean_energy_density = 0.5*air_density*A**3*gamma(1+3/k)
A
k
```
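Each fit above finishes by evaluating the mean wind power density E = 0.5 · ρ · A³ · Γ(1 + 3/k) by hand; it could be wrapped once in a small helper (a sketch — the function name is illustrative):

```python
from scipy.special import gamma

def weibull_mean_energy_density(scale_a, shape_k, air_density=1.225):
    """Mean wind power density (W/m^2) for a Weibull wind-speed distribution
    with scale A (m/s) and shape k: 0.5 * rho * A**3 * Gamma(1 + 3/k)."""
    return 0.5 * air_density * scale_a**3 * gamma(1 + 3 / shape_k)
```

This avoids re-deriving the density after every `weibull_min.fit` call above.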
# Collaboration and Competition
---
In this notebook, you will learn how to use the Unity ML-Agents environment for the third project of the [Deep Reinforcement Learning Nanodegree](https://www.udacity.com/course/deep-reinforcement-learning-nanodegree--nd893) program.
### 1. Start the Environment
We begin by importing the necessary packages. If the code cell below returns an error, please revisit the project instructions to double-check that you have installed [Unity ML-Agents](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Installation.md) and [NumPy](http://www.numpy.org/).
```
from unityagents import UnityEnvironment
import numpy as np
import copy
from collections import namedtuple, deque
import random
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import matplotlib.pyplot as plt
%matplotlib inline
```
Next, we will start the environment! **_Before running the code cell below_**, change the `file_name` parameter to match the location of the Unity environment that you downloaded.
- **Mac**: `"path/to/Tennis.app"`
- **Windows** (x86): `"path/to/Tennis_Windows_x86/Tennis.exe"`
- **Windows** (x86_64): `"path/to/Tennis_Windows_x86_64/Tennis.exe"`
- **Linux** (x86): `"path/to/Tennis_Linux/Tennis.x86"`
- **Linux** (x86_64): `"path/to/Tennis_Linux/Tennis.x86_64"`
- **Linux** (x86, headless): `"path/to/Tennis_Linux_NoVis/Tennis.x86"`
- **Linux** (x86_64, headless): `"path/to/Tennis_Linux_NoVis/Tennis.x86_64"`
For instance, if you are using a Mac, then you downloaded `Tennis.app`. If this file is in the same folder as the notebook, then the line below should appear as follows:
```
env = UnityEnvironment(file_name="Tennis.app")
```
```
env = UnityEnvironment(file_name="Tennis_Linux_NoVis/Tennis.x86_64")
```
Environments contain **_brains_** which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python.
```
# get the default brain
brain_name = env.brain_names[0]
brain = env.brains[brain_name]
```
### 2. Examine the State and Action Spaces
In this environment, two agents control rackets to bounce a ball over a net. If an agent hits the ball over the net, it receives a reward of +0.1. If an agent lets a ball hit the ground or hits the ball out of bounds, it receives a reward of -0.01. Thus, the goal of each agent is to keep the ball in play.
The observation space consists of 8 variables corresponding to the position and velocity of the ball and racket. Two continuous actions are available, corresponding to movement toward (or away from) the net, and jumping.
Run the code cell below to print some information about the environment.
```
# reset the environment
env_info = env.reset(train_mode=True)[brain_name]
# number of agents
num_agents = len(env_info.agents)
print('Number of agents:', num_agents)
# size of each action
action_size = brain.vector_action_space_size
print('Size of each action:', action_size)
# examine the state space
states = env_info.vector_observations
state_size = states.shape[1]
print('There are {} agents. Each observes a state with length: {}'.format(states.shape[0], state_size))
print('The state for the first agent looks like:', states[0])
```
### 3. Take Random Actions in the Environment
In the next code cell, you will learn how to use the Python API to control the agents and receive feedback from the environment.
Once this cell is executed, you will watch the agents' performance as they select actions at random at each time step. A window should pop up that allows you to observe the agents.
Of course, as part of the project, you'll have to change the code so that the agents are able to use their experiences to gradually choose better actions when interacting with the environment!
```
for i in range(1, 6):                                      # play game for 5 episodes
    env_info = env.reset(train_mode=False)[brain_name]     # reset the environment
    states = env_info.vector_observations                  # get the current state (for each agent)
    scores = np.zeros(num_agents)                          # initialize the score (for each agent)
    while True:
        actions = np.random.randn(num_agents, action_size) # select an action (for each agent)
        actions = np.clip(actions, -1, 1)                  # all actions between -1 and 1
        env_info = env.step(actions)[brain_name]           # send all actions to the environment
        next_states = env_info.vector_observations         # get next state (for each agent)
        rewards = env_info.rewards                         # get reward (for each agent)
        dones = env_info.local_done                        # see if episode finished
        scores += env_info.rewards                         # update the score (for each agent)
        states = next_states                               # roll over states to next time step
        if np.any(dones):                                  # exit loop if episode finished
            break
    print('Score (max over agents) from episode {}: {}'.format(i, np.max(scores)))
```
When finished, you can close the environment.
```
# env.close()
```
### 4. It's Your Turn!
Now it's your turn to train your own agent to solve the environment! When training the environment, set `train_mode=True`, so that the line for resetting the environment looks like the following:
```python
env_info = env.reset(train_mode=True)[brain_name]
```
### 5. My Multi DDPG
```
from ddpg.multi_ddpg_agent import Agent
agent_0 = Agent(state_size, action_size, num_agents=1, random_seed=0)
agent_1 = Agent(state_size, action_size, num_agents=1, random_seed=0)
def get_actions(states, add_noise):
    '''gets actions for each agent and then combines them into one array'''
    action_0 = agent_0.act(states, add_noise)  # agent 0 chooses an action
    action_1 = agent_1.act(states, add_noise)  # agent 1 chooses an action
    return np.concatenate((action_0, action_1), axis=0).flatten()
SOLVED_SCORE = 0.5
CONSEC_EPISODES = 100
PRINT_EVERY = 10
ADD_NOISE = True
def run_multi_ddpg(n_episodes=2000, max_t=1000, train_mode=True):
    """Multi-Agent Deep Deterministic Policy Gradient (MADDPG)

    Params
    ======
        n_episodes (int)  : maximum number of training episodes
        max_t (int)       : maximum number of timesteps per episode
        train_mode (bool) : if 'True' set environment to training mode
    """
    scores_window = deque(maxlen=CONSEC_EPISODES)
    scores_all = []
    moving_average = []
    best_score = -np.inf
    best_episode = 0
    already_solved = False

    for i_episode in range(1, n_episodes+1):
        env_info = env.reset(train_mode=train_mode)[brain_name]      # reset the environment
        states = np.reshape(env_info.vector_observations, (1, 48))   # get states and combine them
        agent_0.reset()
        agent_1.reset()
        scores = np.zeros(num_agents)
        while True:
            actions = get_actions(states, ADD_NOISE)                 # choose agent actions and combine them
            env_info = env.step(actions)[brain_name]                 # send both agents' actions together to the environment
            next_states = np.reshape(env_info.vector_observations, (1, 48))  # combine the agent next states
            rewards = env_info.rewards                               # get reward
            done = env_info.local_done                               # see if episode finished
            agent_0.step(states, actions, rewards[0], next_states, done, 0)  # agent 0 learns
            agent_1.step(states, actions, rewards[1], next_states, done, 1)  # agent 1 learns
            scores += np.max(rewards)                                # credit both agents with the episode's best reward
            states = next_states                                     # roll over states to next time step
            if np.any(done):                                         # exit loop if episode finished
                break

        ep_best_score = np.max(scores)
        scores_window.append(ep_best_score)
        scores_all.append(ep_best_score)
        moving_average.append(np.mean(scores_window))

        # save best score
        if ep_best_score > best_score:
            best_score = ep_best_score
            best_episode = i_episode

        # print results
        if i_episode % PRINT_EVERY == 0:
            print(f'Episodes {i_episode}\tMax Reward: {np.max(scores_all[-PRINT_EVERY:]):.3f}\tMoving Average: {moving_average[-1]:.3f}')

        # determine if environment is solved and keep best performing models
        if moving_average[-1] >= SOLVED_SCORE:
            if not already_solved:
                print(f'Solved in {i_episode-CONSEC_EPISODES} episodes!'
                      f'\n<-- Moving Average: {moving_average[-1]:.3f} over past {CONSEC_EPISODES} episodes')
                already_solved = True
                torch.save(agent_0.actor_local.state_dict(), 'checkpoint_actor_0.pth')
                torch.save(agent_0.critic_local.state_dict(), 'checkpoint_critic_0.pth')
                torch.save(agent_1.actor_local.state_dict(), 'checkpoint_actor_1.pth')
                torch.save(agent_1.critic_local.state_dict(), 'checkpoint_critic_1.pth')
            elif ep_best_score >= best_score:
                print(f'Best episode {i_episode}\tMax Reward: {ep_best_score:.3f}\tMoving Average: {moving_average[-1]:.3f}')
                torch.save(agent_0.actor_local.state_dict(), 'checkpoint_actor_0.pth')
                torch.save(agent_0.critic_local.state_dict(), 'checkpoint_critic_0.pth')
                torch.save(agent_1.actor_local.state_dict(), 'checkpoint_actor_1.pth')
                torch.save(agent_1.critic_local.state_dict(), 'checkpoint_critic_1.pth')
            elif (i_episode - best_episode) >= 200:
                # stop training if the model stops improving
                print('Done')
                break
            else:
                continue

    return scores_all, moving_average
scores, avgs = run_multi_ddpg()
plt.plot(np.arange(1, len(scores)+1), scores, label='Score')
plt.plot(np.arange(len(scores)), avgs, c='r', label='100 Average')
plt.legend(loc=0)
plt.ylabel('Score')
plt.xlabel('Episode #')
plt.title('Udacity Project3 Solution by Bongsang')
plt.savefig('result.png')
plt.show()
env.close()
```
# PA005: High Value Customer Identification (Insiders)
## Solution Planning (IOT: Input, Output, Tasks)
### Input
1. Business problem
    * Select the most valuable customers to join a loyalty program.
2. Dataset
    * One year of sales from an online e-commerce site.
### Output
1. The list of people who will take part in the Insiders program
    - List: client_id | is_insider
2. A report answering the business questions:
1. **Who are the people eligible to participate in the Insiders program?**
2. **How many customers will be part of the group?**
3. **What are the main characteristics of these customers?**
4. **What percentage of revenue contribution comes from Insiders?**
5. **What is this group's expected revenue for the coming months?**
6. **What are the conditions for a person to be eligible for Insiders?**
7. **What are the conditions for a person to be removed from Insiders?**
8. **What is the guarantee that the Insiders program is better than the rest of the base?**
9. **What actions can the marketing team take to increase revenue?**
### Tasks
1. **Who are the people eligible to participate in the Insiders program?**
    - What does it mean to be eligible? What is a "valuable" customer for the company?
    - Revenue:
        - High average ticket
        - High LTV
        - Low recency or high frequency (time between purchases)
        - High basket size (average number of products purchased)
        - Low churn probability
        - High predicted LTV
        - High purchase propensity
    - Cost:
        - Low number of returns
    - Experience:
        - High average review score
2. **How many customers will be part of the group?**
    - Number of customers
    - % relative to the total customer base
3. **What are the main characteristics of these customers?**
    - Describe the customers' main attributes:
        - Age
        - Country
        - Salary
    - Describe the customers' main purchasing behavior (the business metrics):
        - See the metrics above
4. **What percentage of revenue contribution comes from Insiders?**
    - Compute the company's total revenue over the year.
    - Compute the revenue (%) of the Insiders cluster alone.
5. **What is this group's expected revenue for the coming months?**
    - Compute the LTV of the Insiders group
    - Time series (ARMA, ARIMA, Holt-Winters, etc.)
6. **What are the conditions for a person to be eligible for Insiders?**
    - What is the evaluation period?
    - The customer's "performance" is close to the Insiders cluster average.
7. **What are the conditions for a person to be removed from Insiders?**
    - The customer's "performance" is no longer close to the Insiders cluster average.
8. **What is the guarantee that the Insiders program is better than the rest of the base?**
    - Hypothesis testing
    - A/B testing
9. **What actions can the marketing team take to increase revenue?**
    - Discounts
    - Purchase preferences
    - Exclusive products
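Items 6 and 7 above — eligibility tied to proximity to the Insiders cluster mean — could be prototyped as a simple centroid-distance rule (all names and the threshold are illustrative assumptions, not the project's final criterion):

```python
import numpy as np

def is_near_insiders(customer_features, insiders_centroid, threshold=2.0):
    """Flag whether a customer's (standardized) RFM-style feature vector lies
    within `threshold` Euclidean distance of the Insiders cluster centroid."""
    distance = np.linalg.norm(np.asarray(customer_features) - np.asarray(insiders_centroid))
    return distance <= threshold
```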
# 0 IMPORTS
```
# %pip install plotly
import pandas as pd
import numpy as np
import seaborn as sns
from matplotlib import pyplot as plt
from IPython.display import HTML
import inflection
from sklearn import cluster as c
from yellowbrick.cluster import KElbowVisualizer,SilhouetteVisualizer
from sklearn import metrics as m
from plotly import express as px
import umap.umap_ as umap
```
## 0.1 Helper Functions
```
def jupyter_settings():
    %matplotlib inline
    %pylab inline
    plt.style.use('bmh')
    plt.rcParams['figure.figsize'] = [25, 12]
    plt.rcParams['font.size'] = 24
    display(HTML('<style>.container { width:100% !important; }</style>'))
    pd.options.display.max_columns = None
    pd.options.display.max_rows = None
    pd.set_option('display.expand_frame_repr', False)
    sns.set()

jupyter_settings()
```
# 0.2 Loading dataset
```
df_raw=pd.read_csv('/home/tc0019/DS/insiders_cluster/dataset/Ecommerce.csv', encoding='unicode_escape')
df_raw=df_raw.drop( columns = ['Unnamed: 8'], axis=1)
df_raw.head()
```
# 1.0 Data Description
```
df1=df_raw.copy()
```
## 1.1 Rename columns
```
cols_old=['InvoiceNo', 'StockCode', 'Description', 'Quantity', 'InvoiceDate',
'UnitPrice', 'CustomerID', 'Country']
snakecase = lambda x: inflection.underscore(x)
cols_new = list( map( snakecase, cols_old ) )
df1.columns=cols_new
df1.sample()
```
## 1.2. Data Dimensions
```
print('Number of Rows: {}'.format(df1.shape[0]))
print('Number of Cols: {}'.format(df1.shape[1]))
```
## 1.3. Data Types
```
df1.dtypes
```
## 1.4. Check NA
```
df1.isna().sum()
```
### 1.4.1 Remove NA
```
df1 = df1.dropna(subset=['description', 'customer_id'])
print('Removed data: {:.2f}'.format(1-(df1.shape[0]/df_raw.shape[0])))
df1.isna().sum()
df1.shape
```
## 1.5. Descriptive Statistics
### 1.5.1. Numerical Attributes
## 1.6 Change dtypes
```
# invoice date
df1['invoice_date'] = pd.to_datetime(df1['invoice_date'], format='%d-%b-%y')
```
# 2.0 Feature Engineering
```
df2=df1.copy()
# data reference
df_ref=df2.drop(['invoice_no', 'stock_code', 'description',
'quantity', 'invoice_date', 'unit_price', 'country'], axis=1).drop_duplicates(ignore_index=True)
df_ref.head()
# Gross revenue (quantity * price)
df2['gross_revenue'] = df2['quantity'] * df2['unit_price']
# Monetary
df_monetary=df2[['customer_id', 'gross_revenue']].groupby('customer_id').sum().reset_index()
df_ref=pd.merge(df_ref, df_monetary, on='customer_id', how='left')
# Recency - last day purchase
df_recency=df2[['customer_id', 'invoice_date']].groupby('customer_id').max().reset_index()
df_recency['recency_days'] = (df2['invoice_date'].max()-df_recency['invoice_date']).dt.days
df_recency=df_recency[['customer_id','recency_days']].groupby('customer_id').sum().reset_index()
df_ref=pd.merge(df_ref, df_recency, on='customer_id', how='left')
# Frequency
df_freq=df2[['customer_id', 'invoice_no']].drop_duplicates().groupby('customer_id').count().reset_index()
df_ref=pd.merge(df_ref,df_freq, on='customer_id', how='left')
df_ref.isna().sum()
# AVG ticket
df_avg_ticket = df2[['customer_id', 'gross_revenue']].groupby('customer_id').mean().reset_index().rename(columns={'gross_revenue': 'avg_ticket'})
df_ref = pd.merge(df_ref,df_avg_ticket, on='customer_id', how='left')
df_ref.head()
```
# 3.0 Variable Filtering
```
df3=df_ref.copy()
```
# 4.0 EDA
```
df4=df3.copy()
```
# 5.0 Data preparation
```
df5=df4.copy()
```
# 6.0 Feature Selection
```
df6=df5.copy()
```
# 7.0 Hyperparameter fine tuning
```
df7=df6.copy()
X = df6.drop(columns='customer_id')
clusters = [2, 3, 4, 5, 6, 7]
```
## 7.1 Within-Cluster Sum of Square (WSS)
```
wss = []
for k in clusters:
    # model definition
    kmeans = c.KMeans(init='random', n_clusters=k, n_init=10, max_iter=300, random_state=42)
    # model training
    kmeans.fit(X)
    # validation
    wss.append(kmeans.inertia_)
# plot wss elbow method
plt.plot(clusters, wss, linestyle='--', marker='o', color='b')
plt.xlabel('K');
plt.ylabel('Within-Cluster Sum Square');
plt.title('WSS vs K')
kmeans = KElbowVisualizer( c.KMeans(), k=clusters, timings=False )
kmeans.fit(X)
kmeans.show()
```
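The elbow curve above is driven entirely by `inertia_`. A dependency-light sketch of the same loop with plain scikit-learn on synthetic blobs (yellowbrick is only needed for the automated plot; data here is invented for illustration):

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Synthetic data with 4 well-separated groups
X_demo, _ = make_blobs(n_samples=300, centers=4, random_state=42)

wss_demo = []
for k in [2, 3, 4, 5, 6, 7]:
    km = KMeans(n_clusters=k, init='random', n_init=10, random_state=42).fit(X_demo)
    wss_demo.append(km.inertia_)

# Inertia keeps shrinking as k grows; the "elbow" marks diminishing returns
print([round(w, 1) for w in wss_demo])
```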
## 7.2 Silhouette Score
```
kmeans = KElbowVisualizer( c.KMeans(), k=clusters, metric='silhouette', timings=False )
kmeans.fit(X)
kmeans.show()
```
## 7.3 Silhouette Analysis
```
fig, ax = plt.subplots( 3, 2, figsize=(25, 18))
for k in clusters:
km = c.KMeans(n_clusters=k, init='random', n_init=10, max_iter=100, random_state=42)
q, mod = divmod(k,2)
visualizer = SilhouetteVisualizer(km, colors='yellowbrick', ax=ax[q-1][mod])
visualizer.fit(X)
visualizer.finalize()
```
# 8.0 Model Training
```
df8=df7.copy()
```
## 8.1 K-Means
```
# model definition
k=4
kmeans = c.KMeans( init='random', n_clusters=k, n_init=10, max_iter=300, random_state=42)
# model training
kmeans.fit(X)
# clustering
labels=kmeans.labels_
```
## 8.2 Cluster Validation
```
## WSS (within-cluster sum of squares)
print ('WSS Value: {}'.format(kmeans.inertia_))
## SS (silhouette score)
print ('SS Value: {}'.format(m.silhouette_score(X, labels, metric='euclidean')))
```
# 9.0 Cluster Analysis
```
df9=df8.copy()
df9['cluster'] = labels
df9.head()
```
## 9.1 Visualization Inspections
```
visualizer = SilhouetteVisualizer( kmeans, colors='yellowbrick')
visualizer.fit(X)
visualizer.show()
```
### 9.1.1 2D and 3D Plots
```
df_viz= df9.drop(columns='customer_id', axis=1)
sns.pairplot( df_viz, hue='cluster')
fig = px.scatter_3d(df9, x='recency_days', y='invoice_no', z='gross_revenue', color='cluster')
fig.show()
```
## 9.2 Cluster Profile
```
# Number of customer
df_cluster = df9[['customer_id', 'cluster']].groupby( 'cluster' ).count().reset_index()
df_cluster['perc_customer'] = 100*( df_cluster['customer_id'] / df_cluster['customer_id'].sum() )
# Avg Gross revenue
df_avg_gross_revenue = df9[['gross_revenue', 'cluster']].groupby( 'cluster' ).mean().reset_index()
df_cluster = pd.merge( df_cluster, df_avg_gross_revenue, how='inner', on='cluster' )
# Avg recency days
df_avg_recency_days = df9[['recency_days', 'cluster']].groupby( 'cluster' ).mean().reset_index()
df_cluster = pd.merge( df_cluster, df_avg_recency_days, how='inner', on='cluster' )
# Avg invoice_no
df_invoice_no = df9[['invoice_no', 'cluster']].groupby( 'cluster' ).mean().reset_index()
df_cluster = pd.merge( df_cluster, df_invoice_no, how='inner', on='cluster' )
# Avg Ticket
df_ticket = df9[['avg_ticket', 'cluster']].groupby( 'cluster' ).mean().reset_index()
df_cluster = pd.merge( df_cluster, df_ticket, how='inner', on='cluster' )
df_cluster
```
## 9.3 UMAP
```
reducer = umap.UMAP( n_neighbors=90, random_state=42 )
embedding = reducer.fit_transform( X )
# embedding
df_viz['embedding_x'] = embedding[:, 0]
df_viz['embedding_y'] = embedding[:, 1]
# plot UMAP
sns.scatterplot( x='embedding_x', y='embedding_y',
hue='cluster',
palette=sns.color_palette( 'hls', n_colors=len( df_viz['cluster'].unique() ) ),
data=df_viz )
```
### Cluster 01: ( Insider candidate )
- Number of customers: 6 (0.14% of customers)
- Average recency: 7 days
- Average purchases: 89 purchases
- Average revenue: $182,182.00
### Cluster 02:
- Number of customers: 31 (0.71% of customers)
- Average recency: 14 days
- Average purchases: 53 purchases
- Average revenue: $40,543.52
### Cluster 03:
- Number of customers: 4,335 (99% of customers)
- Average recency: 92 days
- Average purchases: 5 purchases
- Average revenue: $1,372.57
# 10.0 Deploy to production
```
df10=df9.copy()
```
# K-Nearest Neighbours
Let’s build a K-Nearest Neighbours model from scratch.
First, we will define some generic `KNN` object. In the constructor, we pass three parameters:
- The number of neighbours being used to make predictions
- The distance measure we want to use
- Whether or not we want to use weighted distances
```
import sys
sys.path.append("D:/source/skratch/source")
from collections import Counter
import numpy as np
from utils.distances import euclidean
class KNN:
def __init__(self, k, distance=euclidean, weighted=False):
self.k = k
self.weighted = weighted # Whether or not to use weighted distances
self.distance = distance
```
Now we will define the fit function, which is the function which describes how to train a model. For a K-Nearest Neighbours model, the training is rather simplistic. Indeed, all there needs to be done is to store the training instances as the model’s parameters.
```
def fit(self, X, y):
self.X_ = X
self.y_ = y
return self
```
Similarly, we can build an update function which will update the state of the model as more data points are provided for training. Training a model by feeding it data in a stream-like fashion is often referred to as online learning. Not all models allow for computationally efficient online learning, but K-Nearest Neighbours does.
```
def update(self, X, y):
self.X_ = np.concatenate((self.X_, X))
self.y_ = np.concatenate((self.y_, y))
return self
```
In order to make predictions, we also need to create a predict function. For a K-Nearest Neighbours model, a prediction is made in two steps:
- Find the K-nearest neighbours by computing their distances to the data point we want to predict
- Given these neighbours and their distances, compute the predicted output
```
def predict(self, X):
predictions = []
for x in X:
neighbours, distances = self._get_neighbours(x)
prediction = self._vote(neighbours, distances)
predictions.append(prediction)
return np.array(predictions)
```
Retrieving the neighbours can be done by calculating all pairwise distances between the data point and the data stored inside the state of the model. Once these distances are known, the K instances that have the shortest distance to the example are returned.
```
def _get_neighbours(self, x):
distances = np.array([self.distance(x, x_) for x_ in self.X_])
indices = np.argsort(distances)[:self.k]
return self.y_[indices], distances[indices]
```
In case we would like to use weighted distances, we need to compute the weights. By default, these weights are all set to 1 to make all instances equal. To weigh the instances, neighbours that are closer are typically favoured by giving them a weight equal to 1 divided by their distance.
>If neighbours have distance 0, since we can’t divide by zero, their weight is set to 1, and all other weights are set to 0. This is also how scikit-learn deals with this problem according to their source code.
```
def _get_weights(self, distances):
weights = np.ones_like(distances, dtype=float)
if self.weighted:
if any(distances == 0):
weights[distances != 0] = 0
else:
weights /= distances
return weights
```
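As a quick sanity check of this weighting rule, the same logic as `_get_weights` can be exercised as a standalone function (a free-function mirror for illustration only):

```python
import numpy as np

def get_weights(distances, weighted=True):
    # Mirror of KNN._get_weights: inverse-distance weights, except that
    # zero-distance neighbours take all the weight when present
    weights = np.ones_like(distances, dtype=float)
    if weighted:
        if np.any(distances == 0):
            weights[distances != 0] = 0
        else:
            weights /= distances
    return weights

print(get_weights(np.array([1.0, 2.0, 4.0])))  # inverse distances
print(get_weights(np.array([0.0, 2.0, 4.0])))  # zero-distance case
```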
The only function that we have yet to define is the vote function that is called in the predict function. Depending on the implementation of that function, K-Nearest Neighbours can be used for regression, classification, or even as a meta-learner.
## KNN for Regression
In order to use K-Nearest Neighbour for regression, the vote function is defined as the average of the neighbours. In case weighting is used, the vote function returns the weighted average, favouring closer instances.
```
class KNN_Regressor(KNN):
def _vote(self, targets, distances):
weights = self._get_weights(distances)
return np.sum(weights * targets) / np.sum(weights)
```
## KNN for Classification
In the classification case, the vote function uses a majority voting scheme. If weighting is used, each neighbour has a different impact on the prediction.
```
class KNN_Classifier(KNN):
def _vote(self, classes, distances):
weights = self._get_weights(distances)
prediction = None
max_weighted_frequency = 0
for c in classes:
weighted_frequency = np.sum(weights[classes == c])
if weighted_frequency > max_weighted_frequency:
prediction = c
max_weighted_frequency = weighted_frequency
return prediction
```
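Putting the pieces together, here is a condensed, self-contained version of the classifier run on a toy 2D dataset (the distance function is a plain NumPy stand-in for `utils.distances.euclidean`; the class is a compacted sketch of the ones above, not a drop-in replacement):

```python
from collections import Counter
import numpy as np

def euclidean(a, b):
    # Stand-in for utils.distances.euclidean
    return np.sqrt(np.sum((np.asarray(a) - np.asarray(b)) ** 2))

class TinyKNNClassifier:
    """Unweighted majority-vote KNN, condensed from the classes above."""
    def __init__(self, k, distance=euclidean):
        self.k = k
        self.distance = distance

    def fit(self, X, y):
        self.X_, self.y_ = np.asarray(X), np.asarray(y)
        return self

    def predict(self, X):
        preds = []
        for x in X:
            # Distances to every stored instance, then the k closest
            d = np.array([self.distance(x, x_) for x_ in self.X_])
            idx = np.argsort(d)[:self.k]
            preds.append(Counter(self.y_[idx]).most_common(1)[0][0])
        return np.array(preds)

X = [[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]]
y = [0, 0, 0, 1, 1, 1]
model = TinyKNNClassifier(k=3).fit(X, y)
print(model.predict([[0.2, 0.1], [5.5, 5.5]]))  # -> [0 1]
```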
<a href="https://cocl.us/Data_Science_with_Scalla_top"><img src = "https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/SC0103EN/adds/Data_Science_with_Scalla_notebook_top.png" width = 750, align = "center"></a>
<br/>
<a><img src="https://ibm.box.com/shared/static/ugcqz6ohbvff804xp84y4kqnvvk3bq1g.png" width="200" align="center"></a>
# Basic Statistics and Data Types
## Hypothesis Testing
## Lesson Objectives
After completing this lesson, you should be able to:
- Perform hypothesis testing for goodness of fit and independence
- Perform hypothesis testing for equality of probability distributions
- Perform kernel density estimation
## Hypothesis Testing
- Used to determine whether a result is statistically significant, that is, whether it occurred by chance or not
- Supported tests:
- Pearson's Chi-Squared test for goodness of fit
- Pearson's Chi-Squared test for independence
- Kolmogorov-Smirnov test for equality of distribution
- Inputs of type `RDD[LabeledPoint]` are also supported, enabling feature selection
### Pearson's Chi-Squared Test for Goodness of Fit
- Determines whether an observed frequency distribution differs from a given distribution or not
- Requires an input of type Vector containing the frequencies of the events
- It runs against a uniform distribution if a second vector to test against is not supplied
- Available as the `chiSqTest()` function in Statistics
### Libraries required for examples
```
import org.apache.spark.mllib.linalg.{Vector, Vectors}
import org.apache.spark.mllib.linalg.{Matrix, Matrices}
import org.apache.spark.mllib.stat.Statistics
val vec: Vector = Vectors.dense(0.3, 0.2, 0.15, 0.1, 0.1, 0.1, 0.05)
val goodnessOfFitTestResult = Statistics.chiSqTest(vec)
goodnessOfFitTestResult
```
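For readers without a Spark session at hand, the same goodness-of-fit test can be sketched in Python with SciPy (an illustrative equivalent, not part of the MLlib API; with no expected vector supplied, both APIs test against a uniform distribution):

```python
from scipy import stats

# Same observed frequencies as the MLlib example above
observed = [0.3, 0.2, 0.15, 0.1, 0.1, 0.1, 0.05]
result = stats.chisquare(observed)
print(result.statistic, result.pvalue)
```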
### Pearson's Chi-Squared Test for Independence
- Determines whether unpaired observations on two variables are independent of each other
- Requires an input of type Matrix, representing a contingency table, or an `RDD[LabeledPoint]`
- Available as `chiSqTest()` function in Statistics
- May be used for feature selection
```
// Testing for Independence
import org.apache.spark.mllib.linalg.{Matrix, Matrices}
import org.apache.spark.mllib.stat.Statistics
import org.apache.spark.rdd.RDD
val mat: Matrix = Matrices.dense(3, 2,
Array(13.0, 47.0, 40.0, 80.0, 11.0, 9.0))
val independenceTestResult = Statistics.chiSqTest(mat)
independenceTestResult
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.mllib.stat.test.ChiSqTestResult
val obs: RDD[LabeledPoint] = sc.parallelize(Array(
LabeledPoint(0, Vectors.dense(1.0, 2.0)),
LabeledPoint(0, Vectors.dense(0.5, 1.5)),
LabeledPoint(1, Vectors.dense(1.0, 8.0))))
val featureTestResults: Array[ChiSqTestResult] = Statistics.chiSqTest(obs)
featureTestResults
```
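An illustrative SciPy equivalent of the contingency-table test (note that `Matrices.dense(3, 2, ...)` is column-major, so the two columns of the table are `[13, 47, 40]` and `[80, 11, 9]`):

```python
import numpy as np
from scipy import stats

# Same 3x2 contingency table as the MLlib example above
table = np.array([[13.0, 80.0],
                  [47.0, 11.0],
                  [40.0,  9.0]])
chi2, p, dof, expected = stats.chi2_contingency(table)
print(chi2, p, dof)  # dof = (3-1) * (2-1) = 2
```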
### Kolmogorov-Smirnov Test
- Determines whether or not two probability distributions are equal
- One sample, two sided test
- Supported distributions to test against:
- normal distribution (distName='norm')
- customized cumulative density function (CDF)
- Available as `kolmogorovSmirnovTest()` function in Statistics
```
// Test for Equality of Distribution
import org.apache.spark.mllib.random.RandomRDDs.normalRDD
val data: RDD[Double] = normalRDD(sc, size=100, numPartitions=1, seed=13L)
val testResult = Statistics.kolmogorovSmirnovTest(data, "norm", 0, 1)
// Test for Equality of Distribution
import org.apache.spark.mllib.random.RandomRDDs.uniformRDD
val data1: RDD[Double] = uniformRDD(sc, size = 100, numPartitions=1, seed=13L)
val testResult1 = Statistics.kolmogorovSmirnovTest(data1, "norm", 0, 1)
```
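The same two comparisons can be sketched with SciPy's `kstest` (illustrative only; sample sizes and seed mirror the Spark code, but the underlying random streams differ):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(13)
normal_sample = rng.normal(0.0, 1.0, size=100)
uniform_sample = rng.uniform(0.0, 1.0, size=100)

# One-sample, two-sided KS test against a standard normal
res_normal = stats.kstest(normal_sample, 'norm')
res_uniform = stats.kstest(uniform_sample, 'norm')
print(res_normal.pvalue, res_uniform.pvalue)
```

The normal sample should not be rejected, while the uniform sample yields a vanishingly small p-value.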
### Kernel Density Estimation
- Computes an estimate of the probability density function of a random variable, evaluated at a given set of points
- Does not require assumptions about the particular distribution that the observed samples are drawn from
- Requires an RDD of samples
- Available as `estimate()` function in KernelDensity
- In Spark, only Gaussian kernel is supported
```
// Kernel Density Estimation I
import org.apache.spark.mllib.stat.KernelDensity
val data: RDD[Double] = normalRDD(sc, size=1000, numPartitions=1, seed=17L)
val kd = new KernelDensity().setSample(data).setBandwidth(0.1)
val densities = kd.estimate(Array(-1.5, -1, -0.5, 1, 1.5))
densities
// Kernel Density Estimation II
val data: RDD[Double] = uniformRDD(sc, size=1000, numPartitions=1, seed=17L)
val kd = new KernelDensity().setSample(data).setBandwidth(0.1)
val densities = kd.estimate(Array(-0.25, 0.25, 0.5, 0.75, 1.25))
densities
```
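A SciPy sketch of the same idea (Gaussian kernel, mirroring `KernelDensity.estimate`; note SciPy interprets the bandwidth argument slightly differently, as a factor of the sample's standard deviation):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(17)
sample = rng.normal(0.0, 1.0, size=1000)

# Evaluate the density estimate at a few points
kde = stats.gaussian_kde(sample, bw_method=0.1)
densities = kde([-1.5, -1.0, -0.5, 1.0, 1.5])
print(densities)
```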
## Lesson Summary
- Having completed this lesson, you should be able to:
- Perform hypothesis testing for goodness of fit and independence
- Perform hypothesis testing for equality of probability distributions
- Perform kernel density estimation
### About the Authors
[Petro Verkhogliad](https://www.linkedin.com/in/vpetro) is Consulting Manager at Lightbend. He holds a Masters degree in Computer Science with specialization in Intelligent Systems. He is passionate about functional programming and applications of AI.
```
import pandas as pd
import numpy as np
import json
from cold_start import get_cold_start_rating
import pyspark
spark = pyspark.sql.SparkSession.builder.getOrCreate()
sc = spark.sparkContext
ratings_df = spark.read.json('data/ratings.json').toPandas()
metadata = pd.read_csv('data/movies_metadata.csv')
request_df = spark.read.json('data/requests.json').toPandas()
ratings_df['user_id'].nunique()
ratings_df['rating'].value_counts()
ratings_df.isna().sum()
len(metadata), metadata['tagline'].isna().sum()
metadata.loc[0]['genres']
len(request_df)
users = []
for line in open('data/users.dat', 'r'):
item = line.split('\n')
users.append(item[0].split("::"))
user_df = pd.read_csv('data/users.dat', sep='::', header=None, names=['id', 'gender', 'age', 'occupation', 'zip'])
movie_info_df = pd.read_csv('data/movies.dat', sep='::', header=None, names=['id', 'name', 'genres'])
user_df[20:53]
movie_info_df.head()
movie_info_df['genres'] = movie_info_df['genres'].apply(lambda x: x.split('|'))
movie_info_df.head()
all_genres = set([item for movie in movie_info_df['genres'] for item in movie])
all_genres
user_df = user_df.drop('zip', axis=1)
user_df.head()
from sklearn.preprocessing import OneHotEncoder
from sklearn.model_selection import train_test_split
from sklearn.cluster import KMeans
def ohe_columns(series, name):
ohe = OneHotEncoder(categories='auto')
ohe.fit(series)
cols = ohe.get_feature_names(name)
ohe = ohe.transform(series)
final_df = pd.DataFrame(ohe.toarray(), columns=cols)
return final_df
# OHE the user cols
my_cols = ['gender', 'age', 'occupation']
ohe_multi = OneHotEncoder(categories='auto')
ohe_multi.fit(user_df[my_cols])
ohe_mat = ohe_multi.transform(user_df[my_cols])
# Then KMeans cluster
k_clusters = KMeans(n_clusters=8, random_state=42)
k_clusters.fit(ohe_mat)
preds = k_clusters.predict(ohe_mat)
preds
preds.shape
def add_clusters_to_users(n_clusters=8):
"""
parameters:number of clusters
return: user dataframe
"""
# Get the user data
user_df = pd.read_csv('data/users.dat', sep='::', header=None
, names=['id', 'gender', 'age', 'occupation', 'zip'])
# OHE for clustering
my_cols = ['gender', 'age', 'occupation']
ohe_multi = OneHotEncoder(categories='auto')
ohe_multi.fit(user_df[my_cols])
ohe_mat = ohe_multi.transform(user_df[my_cols])
# Then KMeans cluster
k_clusters = KMeans(n_clusters=8, random_state=42)
k_clusters.fit(ohe_mat)
preds = k_clusters.predict(ohe_mat)
# Add clusters to user df
user_df['cluster'] = preds
return user_df
test_df = add_clusters_to_users()
test_df.to_csv('data/u_info.csv')
ohe = OneHotEncoder(categories='auto')
ohe.fit(user_df[['gender']])
gender_ohe = ohe.transform(user_df[['gender']])
gender_df = pd.DataFrame(gender_ohe.toarray(), columns=['F', 'M'])
gender_df.head()
ohe_2 = OneHotEncoder(categories='auto')
ohe_2.fit(user_df[['age']])
temp_ohe = ohe_2.get_feature_names(['age'])
age_ohe = ohe_2.transform(user_df[['age']])
age_df = pd.DataFrame(age_ohe.toarray(), columns=temp_ohe)
age_df.head()
ohe_3 = OneHotEncoder(categories='auto')
ohe_3.fit(user_df[['occupation']])
cols = ohe_3.get_feature_names(['occupation'])
occ_ohe = ohe_3.transform(user_df[['occupation']])
occ_df = pd.DataFrame(occ_ohe.toarray(), columns=cols)
occ_df.head()
all_cat = pd.concat([gender_df, age_df, occ_df], axis=1)
all_cat.head()
k_clusters = KMeans(n_clusters=8, random_state=42)
k_clusters.fit(all_cat)
preds = k_clusters.predict(all_cat)
preds
user_df['cluster'] = preds
user_df[user_df['id'] == 6040]
cluster_dict = {}
for k, v in zip(user_df['id'].tolist(), user_df['cluster'].tolist()):
cluster_dict[k] = v
ratings_df['cluster'] = ratings_df['user_id'].apply(lambda x: cluster_dict[x])
def add_cluster_to_ratings(user_df):
"""
given user_df with clusters, add clusters to ratings data
parameters
---------
user_df: df with user data
returns
-------
ratings_df: ratings_df with cluster column
"""
# Read in ratings file
#Get ratings file
ratings_df = spark.read.json('data/ratings.json').toPandas()
# Set up clusters
cluster_dict = {}
for k, v in zip(user_df['id'].tolist(), user_df['cluster'].tolist()):
cluster_dict[k] = v
# Add cluster to ratings
ratings_df['cluster'] = ratings_df['user_id'].apply(lambda x: cluster_dict[x])
return ratings_df
all_df = add_cluster_to_ratings(user_df)
all_df.to_csv('data/user_cluster.csv')
movie_by_cluster = all_df.groupby(by=['cluster', 'movie_id']).agg({'rating': 'mean'}).reset_index()
movie_by_cluster.head()
movie_by_cluster = pd.read_csv('data/u_info.csv', index_col=0)
movie_by_cluster.head()
ratings_df.head()
request_df.head()
def cluster_rating(df, movie_id, cluster):
cluster_rating = df[(df['movie_id'] == movie_id) & (df['cluster'] == cluster)]
return cluster_rating['rating'].mean()
def user_bias(df, user_id):
return df.loc[df['user_id'] == user_id, 'rating'].mean() - df['rating'].mean()
def item_bias(df, movie_id):
return df.loc[df['movie_id'] == movie_id, 'rating'].mean() - df['rating'].mean()
avg = cluster_rating(df=ratings_df, movie_id=1617, cluster=1)
u = user_bias(ratings_df, 6040)
i = item_bias(ratings_df, 2019)
avg + u + i
movie_info_df[movie_info_df['id'] == 1617]
def get_cold_start_rating(user_id, movie_id):
"""
Given user_id and movie_id, return a predicted rating
parameters
----------
user_id, movie_id
returns
-------
movie rating (float)
"""
# Get user df with clusters
user_df = pd.read_csv('data/user_cluster.csv', index_col=0)
u_clusters = pd.read_csv('data/u_info.csv', index_col=0)
# Get ratings data, with clusters
ratings_df = pd.read_csv('data/movie_cluster_avg.csv', index_col=0)
# User Cluster
user_cluster = u_clusters.loc[u_clusters['id'] == user_id]['cluster'].tolist()[0]
# Get score components
avg = ratings_df.loc[(ratings_df['movie_id'] == movie_id) & (ratings_df['cluster'] == user_cluster)]['rating'].tolist()[0]
u = user_bias(user_df, user_id)
i = item_bias(user_df, movie_id)
pred_rating = avg + u + i
return pred_rating
blah = get_cold_start_rating(user_id=53, movie_id=9999)
blah
df = pd.read_csv('data/user_cluster.csv', index_col=0)
ratings_df = pd.read_csv('data/movie_cluster_avg.csv', index_col=0)
ratings_df.head()
```
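The cold-start prediction combines a cluster-level average with a user bias and an item bias. A self-contained sketch of that arithmetic on toy data (column names mirror the notebook; all values are invented for illustration):

```python
import pandas as pd

ratings = pd.DataFrame({
    'user_id':  [1, 1, 2, 2, 3],
    'movie_id': [10, 20, 10, 30, 20],
    'cluster':  [0, 0, 1, 1, 0],
    'rating':   [4.0, 3.0, 5.0, 2.0, 4.0],
})

global_mean = ratings['rating'].mean()
# How far user 1 and movie 10 deviate from the overall average
user_bias = ratings.loc[ratings['user_id'] == 1, 'rating'].mean() - global_mean
item_bias = ratings.loc[ratings['movie_id'] == 10, 'rating'].mean() - global_mean
# Average rating of movie 10 within cluster 0
cluster_avg = ratings.loc[(ratings['movie_id'] == 10) &
                          (ratings['cluster'] == 0), 'rating'].mean()

pred = cluster_avg + user_bias + item_bias
print(pred)
```

Here the global mean is 3.6, user 1's bias is -0.1, movie 10's bias is +0.9, and the cluster average is 4.0, giving a prediction of 4.8.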
```
import os
import sys
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torchsummary import summary
sys.path.append('../')
sys.path.append('../src/')
from src import utils
from src import generators
import imp
os.environ['CUDA_VISIBLE_DEVICES'] = "0"
```
# Inference
```
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model_LoadWeights = '../data/trainings/train_UNETA_class/vgg_5.pkl'
mvcnn = torch.load(model_LoadWeights)
test_patient_information = utils.get_PatientInfo('/home/alex/Dataset3/', test=True)
sep = generators.SEPGenerator(base_DatabasePath='/home/alex/Dataset3/',
channels=1,
resize=296,
normalization='min-max')
test_generator = sep.generator(test_patient_information, dataset='test')
final = []
with torch.no_grad():
for v_m, v_item in enumerate(test_generator):
image_3D, p_id = torch.tensor(v_item[0], device=device).float(), v_item[1]
if image_3D.shape[0] == 0:
print(p_id)
continue
output = mvcnn(image_3D, batch_size=1, mvcnn=True)
print(output, p_id)
final.append((p_id, output.to('cpu').detach().numpy()))
if v_m == len(test_patient_information) - 1:
break
keys = {0: 0.0,
1: 1.0,
2: 1.5,
3: 2.0,
4: 2.5,
5: 3.0,
6: 3.5,
7: 4.0,
8: 4.5,
9: 5.0,
10: 5.5,
11: 6.0,
12: 6.5,
13: 7.0,
14: 7.5,
15: 8.0,
16: 8.5,
17: 9.0}
list(map(lambda a : [[int(a[0])], [keys[np.argmax(a[1])]]], (final)))
final[1][1]
import csv
csvData = [["Sequence_id", "EDSS"]] + list(map(lambda a: [int(a[0]), keys[np.argmax(a[1])]], final))
with open('AZmed_Unet.csv', 'w', newline='') as csvFile:
    writer = csv.writer(csvFile)
    writer.writerows(csvData)
csvData
database_path =
train_patient_information, valid_patient_information = get_PatientInfo(database_path)
# Create train and valid generators
sep = SEPGenerator(database_path,
channels=channels,
resize=resize,
normalization=normalization)
train_generator = sep.generator(train_patient_information)
valid_generator = sep.generator(valid_patient_information, train=False)
with torch.no_grad():
for v_m, v_item in enumerate(valid_generator):
image_3D, label = torch.tensor(v_item[0], device=device).float(), torch.tensor(v_item[1], device=device).float()
if image_3D.shape[0] == 0:
continue
output = mvcnn(image_3D, batch_size, use_mvcnn)
total_ValidLoss += criterion(output, label)
```
# Models
## Base Mode - CNN_1
```
class VGG(nn.Module):
def __init__(self):
super(VGG,self).__init__()
pad = 1
self.cnn = nn.Sequential(nn.BatchNorm2d(1),
nn.Conv2d(1,32,3,padding=pad),
nn.ReLU(),
nn.BatchNorm2d(32),
nn.Conv2d(32,32,3,padding=pad),
nn.ReLU(),
nn.MaxPool2d(2,2),
nn.BatchNorm2d(32),
nn.Conv2d(32,64,3,padding=pad),
nn.ReLU(),
nn.BatchNorm2d(64),
nn.Conv2d(64,64,3,padding=pad),
nn.ReLU(),
nn.MaxPool2d(2,2),
nn.BatchNorm2d(64),
nn.Conv2d(64,128,3,padding=pad),
nn.ReLU(),
nn.BatchNorm2d(128),
nn.Conv2d(128,128,3,padding=pad),
nn.ReLU(),
nn.MaxPool2d(2,2),
nn.BatchNorm2d(128),
nn.Conv2d(128,256,3,padding=pad),
nn.ReLU(),
nn.BatchNorm2d(256),
nn.Conv2d(256,256,3,padding=pad),
nn.ReLU(),
nn.MaxPool2d(2,2),
nn.BatchNorm2d(256),
nn.Conv2d(256,256,3,padding=pad),
nn.ReLU(),
nn.BatchNorm2d(256),
nn.Conv2d(256,256,3,padding=pad),
nn.ReLU(),
nn.MaxPool2d(2,2),
nn.BatchNorm2d(256),
nn.Conv2d(256,512,3,padding=pad),
nn.ReLU(),
nn.BatchNorm2d(512),
nn.Conv2d(512,512,3,padding=pad),
nn.ReLU(),
nn.MaxPool2d(2,2))
self.fc1 = nn.Sequential(nn.Linear(8192, 1096),
nn.ReLU(),
nn.Dropout(0.8),
nn.Linear(1096, 96),
nn.ReLU(),
nn.Dropout(0.9),
nn.Linear(96, 1))
# self.fc2 = nn.Sequential(nn.Linear(8192, 4096),
# nn.ReLU(),
# nn.Dropout(0.8),
# nn.Linear(4096, 4096),
# nn.ReLU(),
# nn.Dropout(0.9),
# nn.Linear(4096, 1))
def forward(self, x, batch_size=1, mvcnn=False):
if mvcnn:
view_pool = []
# Assuming x has shape (x, 1, 299, 299)
for n, v in enumerate(x):
v = v.unsqueeze(0)
v = self.cnn(v)
v = v.view(v.size(0), 512 * 4 * 4)
view_pool.append(v)
pooled_view = view_pool[0]
for i in range(1, len(view_pool)):
pooled_view = torch.max(pooled_view, view_pool[i])
output = self.fc1(pooled_view)
else:
x = self.cnn(x)
x = x.view(-1, 512 * 4* 4)
x = self.fc1(x)
output = F.sigmoid(x)
return output
device = torch.device("cuda" if torch.cuda.is_available() else "cpu") # PyTorch v0.4.0
model = VGG().to(device)
summary(model, (1, 299, 299))
```
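The view-pooling loop in `forward` reduces N per-view feature vectors to a single one by element-wise maximum. The operation itself is framework-agnostic, so it can be sketched with NumPy (8192 = 512 * 4 * 4, matching the flattened CNN features above):

```python
import numpy as np

rng = np.random.default_rng(0)
views = rng.standard_normal((5, 8192))  # 5 views, one feature vector each

# Iterative pooling, as written in the forward pass
pooled = views[0]
for i in range(1, views.shape[0]):
    pooled = np.maximum(pooled, views[i])

# Equivalent to a single reduction over the view axis
assert np.array_equal(pooled, views.max(axis=0))
print(pooled.shape)  # (8192,)
```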
Since patients have a varying number of images, create a single image per patient in which the channels hold that patient's slices
```
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
mvcnn = MVCNN().to(device)
criterion = nn.MSELoss()
optimizer = optim.Adam(mvcnn.parameters(), lr=0.0003)
file_path = '/home/alex/Dataset 1/Dataset - 1.xlsx'
df = pd.read_excel(file_path, sheet_name='Feuil1')
edss = df['EDSS'].tolist()
p_id = df['Sequence_id'].tolist()
channels = 1
resize = 299
normalization = 'min-max'
patient_information = [(p_id[i], edss[i]) for i in range(df.shape[0])]
train_patient_information = patient_information[:int(0.9*len(patient_information))]
valid_patient_information = patient_information[int(0.9*len(patient_information)):]
base_DatabasePath = '/home/alex/Dataset 1'
generator_inst = generators.SEPGenerator(base_DatabasePath,
channels=channels,
resize=resize,
normalization=normalization)
train_generator = generator_inst.generator(train_patient_information)
valid_generator = generator_inst.generator(valid_patient_information)
#dataloader = torch.utils.data.DataLoader(train_generator, batch_size=1, shuffle=True)
valid_iterations
total_loss = 0
train_iterations = 100
valid_iterations = len(valid_patient_information)
epochs = 5
for epoch in range(epochs):
total_TrainLoss = 0
for t_m, t_item in enumerate(train_generator):
image_3D, label = torch.tensor(t_item[0], device=device).float(), torch.tensor(t_item[1], device=device).float()
output = mvcnn(image_3D, 1)
loss = criterion(output, label)
loss.backward()
optimizer.step()
total_TrainLoss += loss
if not (t_m+1)%50:
print("On_Going_Epoch : {} \t | Iteration : {} \t | Training Loss : {}".format(epoch+1, t_m+1, total_TrainLoss/(t_m+1)))
if (t_m+1) == train_iterations:
total_ValidLoss = 0
with torch.no_grad():
for v_m, v_item in enumerate(valid_generator):
image_3D, label = torch.tensor(v_item[0], device=device).float(), torch.tensor(v_item[1], device=device).float()
output = mvcnn(image_3D, 1)
total_ValidLoss += criterion(output, label)
print(total_ValidLoss)
if (v_m + 1) == valid_iterations:
break
print("Epoch : {} \t | Training Loss : {} \t | Validation Loss : {} ".format(epoch+1, total_TrainLoss/(t_m+1), total_ValidLoss/(v_m+1)) )
torch.save(mvcnn, './' + 'vgg_' + str(epoch) + '.pkl')
break
total_ValidLoss
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
c = torch.randn(90, 512, 4, 4).to(device)
#torch.randn(90, 1, 299, 299)
for n,v in enumerate(c):
v = v.view(1, 512*4*4).to(device)
print(n)
if n:
pooled_view = torch.max(pooled_view, v).to(device)
else:
pooled_view = v.to(device)
```
# Augmenter
```
def generate_images(image, transformation='original', angle=30):
"""
Function to generate images based on the requested transfomations
Args:
- image (nd.array) : input image array
- transformation (str) : image transformation to be effectuated
- angle (int) : rotation angle if transformation is a rotation
Returns:
- trans_image (nd.array) : transformed image array
"""
def rotateImage(image, angle):
"""
Function to rotate an image at its center
"""
image_center = tuple(np.array(image.shape[1::-1]) / 2)
rot_mat = cv2.getRotationMatrix2D(image_center, angle, 1.0)
result = cv2.warpAffine(image, rot_mat, image.shape[1::-1], flags=cv2.INTER_LINEAR)
return result
# Image transformations
if transformation == 'original':
trans_image = image
elif transformation == 'flip_v':
trans_image = cv2.flip(image, 0)
elif transformation == 'flip_h':
trans_image = cv2.flip(image, 1)
elif transformation == 'flip_vh':
trans_image = cv2.flip(image, -1)
elif transformation == 'rot_c':
trans_image = rotateImage(image, -angle)
elif transformation == 'rot_ac':
trans_image = rotateImage(image, angle)
else:
raise ValueError("Invalid transformation value passed : {}".format(transformation))
return trans_image
"""
The augmenter ought to be able to do the following:
- Get list of patient paths and their respective scores (make sure to do the validation and test splits before)
- Select a random augmentation (flag='test')
- Select a patient path and his/her corresponding score
- With each .dcm file do following:
- read image
- normalized image
- resize image
- get percentage of white matter (%, n) and append to list
- transform image
- store in an array
- yield image_3D (top 70 images with white matter), label
"""
class SEP_generator(object):
    def __init__(self,
                 resize,
                 normalization,
                 transformations):
        self.resize = resize
        self.normalization = normalization
        self.transformations = transformations
import imgaug as ia
from imgaug import augmenters as iaa
class ImageBaseAug(object):
def __init__(self):
sometimes = lambda aug: iaa.Sometimes(0.5, aug)
self.seq = iaa.Sequential(
[
# Blur each image with varying strength using
# gaussian blur (sigma between 0 and 3.0),
# average/uniform blur (kernel size between 2x2 and 7x7)
# median blur (kernel size between 3x3 and 11x11).
iaa.OneOf([
iaa.GaussianBlur((0, 3.0)),
iaa.AverageBlur(k=(2, 7)),
iaa.MedianBlur(k=(3, 11)),
]),
# Sharpen each image, overlay the result with the original
# image using an alpha between 0 (no sharpening) and 1
# (full sharpening effect).
sometimes(iaa.Sharpen(alpha=(0, 0.5), lightness=(0.75, 1.5))),
# Add gaussian noise to some images.
sometimes(iaa.AdditiveGaussianNoise(loc=0, scale=(0.0, 0.05*255), per_channel=0.5)),
# Add a value of -5 to 5 to each pixel.
sometimes(iaa.Add((-5, 5), per_channel=0.5)),
# Change brightness of images (80-120% of original value).
sometimes(iaa.Multiply((0.8, 1.2), per_channel=0.5)),
# Improve or worsen the contrast of images.
sometimes(iaa.ContrastNormalization((0.5, 2.0), per_channel=0.5)),
],
# do all of the above augmentations in random order
random_order=True
)
def __call__(self, sample):
seq_det = self.seq.to_deterministic()
image, label = sample['image'], sample['label']
image = seq_det.augment_images([image])[0]
return {'image': image, 'label': label}
```
# UNET
```
def double_conv(in_channels, out_channels):
return nn.Sequential(
nn.Conv2d(in_channels, out_channels, 3, padding=1),
nn.ReLU(inplace=True),
nn.Conv2d(out_channels, out_channels, 3, padding=1),
nn.ReLU(inplace=True)
)
class UNet(nn.Module):
def __init__(self, n_class=1):
super().__init__()
self.dconv_down1 = double_conv(1, 32)
self.dconv_down2 = double_conv(32, 64)
self.dconv_down3 = double_conv(64, 128)
self.dconv_down4 = double_conv(128, 256)
self.maxpool = nn.MaxPool2d(2)
self.upsample = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)
self.dconv_up3 = double_conv(128 + 256, 128)
self.dconv_up2 = double_conv(64 + 128, 64)
self.dconv_up1 = double_conv(32 + 64, 32)
self.conv_last = nn.Sequential(nn.BatchNorm2d(32),
nn.MaxPool2d(2,2))
def forward(self, x):
conv1 = self.dconv_down1(x)
x = self.maxpool(conv1)
conv2 = self.dconv_down2(x)
x = self.maxpool(conv2)
conv3 = self.dconv_down3(x)
x = self.maxpool(conv3)
x = self.dconv_down4(x)
x = self.upsample(x)
x = torch.cat([x, conv3], dim=1)
x = self.dconv_up3(x)
x = self.upsample(x)
x = torch.cat([x, conv2], dim=1)
x = self.dconv_up2(x)
x = self.upsample(x)
x = torch.cat([x, conv1], dim=1)
x = self.dconv_up1(x)
out = self.conv_last(x)
return out
import torch
import torch.nn as nn
def attention_block():
return nn.Sequential(
nn.ReLU(),
nn.Conv2d(1, 1, 1, padding=0),
nn.BatchNorm2d(1),
nn.Sigmoid()
)
def double_conv(in_channels, out_channels):
return nn.Sequential(
nn.BatchNorm2d(in_channels),
nn.Conv2d(in_channels, out_channels, 3, padding=1),
nn.ReLU(inplace=True),
nn.BatchNorm2d(out_channels),
nn.Conv2d(out_channels, out_channels, 3, padding=1),
nn.ReLU(inplace=True))
def one_conv(in_channels, padding=0):
return nn.Sequential(
nn.BatchNorm2d(in_channels),
nn.Conv2d(in_channels, 1, 1, padding=padding))
class UNet(nn.Module):
def __init__(self, n_class):
super().__init__()
self.dconv_down1 = double_conv(1, 32)
self.dconv_down2 = double_conv(32, 64)
self.dconv_down3 = double_conv(64, 128)
self.dconv_down4 = double_conv(128, 256)
self.maxpool = nn.MaxPool2d(2)
self.upsample = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)
self.oneconv = one_conv
self.attention = attention_block()
self.oneconvx3 = one_conv(128)
self.oneconvg3 = one_conv(256)
self.dconv_up3 = double_conv(128 + 256, 128)
self.oneconvx2 = one_conv(64)
self.oneconvg2 = one_conv(128)
self.dconv_up2 = double_conv(64 + 128, 64)
self.conv_last = nn.Sequential(nn.BatchNorm2d(64),
nn.Conv2d(64,32,3,padding=0),
nn.ReLU(),
nn.MaxPool2d(2,2),
nn.BatchNorm2d(32),
nn.Conv2d(32,8,3,padding=0),
nn.ReLU(),
nn.MaxPool2d(2,2))
self.fc1 = nn.Sequential(nn.Linear(9800, 1096),
nn.ReLU(),
nn.Dropout(0.8),
nn.Linear(1096, 96),
nn.ReLU(),
nn.Dropout(0.9),
nn.Linear(96, 1))
def forward(self, x):
conv1 = self.dconv_down1(x) # 1 -> 32 filters
x = self.maxpool(conv1)
conv2 = self.dconv_down2(x) # 32 -> 64 filters
x = self.maxpool(conv2)
conv3 = self.dconv_down3(x) # 64 -> 128 filters
x = self.maxpool(conv3)
x = self.dconv_down4(x) # 128 -> 256 filters
x = self.upsample(x)
_g = self.oneconvg3(x)
_x = self.oneconvx3(conv3)
_xg = _g + _x
psi = self.attention(_xg)
conv3 = conv3*psi
x = torch.cat([x, conv3], dim=1)
x = self.dconv_up3(x) # 128 + 256 -> 128 filters
x = self.upsample(x)
_g = self.oneconvg2(x)
_x = self.oneconvx2(conv2)
_xg = _g + _x
psi = self.attention(_xg)
conv2 = conv2*psi
x = torch.cat([x, conv2], dim=1)
x = self.dconv_up2(x)
# x = self.upsample(x)
# _g = self.oneconvg1(x)
# _x = self.oneconvx1(conv1)
# _xg = _g + _x
# psi = self.attention(_xg)
# conv1 = conv1*psi
# x = torch.cat([x, conv1], dim=1)
# x = self.dconv_up1(x)
x = self.conv_last(x)
x = x.view(-1, 35*35*8)
x = self.fc1(x)
return x
net = UNet(1)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu") # PyTorch v0.4.0
model = UNet(1).to(device)
from torchsummary import summary  # summary is not imported above; assumed to come from torchsummary
summary(model, (1, 296, 296))
import torchvision
import torchvision.transforms as transforms
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
shuffle=True, num_workers=2)
testset = torchvision.datasets.CIFAR10(root='./data', train=False,
download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4,
shuffle=False, num_workers=2)
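# A quick sanity check (sketch): pull one minibatch from the loader defined
# above to confirm shapes before training.
dataiter = iter(trainloader)
images, labels = next(dataiter)
print(images.shape)  # torch.Size([4, 3, 32, 32]): batch_size=4, RGB 32x32
print(labels.shape)  # torch.Size([4])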
```
# Trials (PyTorch)
```
import os
import torch
import numpy as np
os.environ['CUDA_VISIBLE_DEVICES'] = "2"
## TENSORS
# create an 'un-initialized' matrix
x = torch.empty(5, 3)
print(x)
# construct a randomly 'initialized' matrix
x = torch.rand(5, 3)
print(x)
# construct a matrix filled with zeros and dtype=long
x = torch.zeros(5, 3, dtype=torch.long)
print(x)
# construct a tensor from data
x = torch.tensor([[5.5, 3]])
print(x)
# Create a tensor based on existing tensor
x = x.new_ones(5, 3, dtype=torch.double)
print(x)
x = torch.randn_like(x, dtype=torch.float)
print(x)
## OPERATIONS
# Addition syntax 1
y = torch.rand(5, 3)
print(x + y)
# Addition syntax 2
print(torch.add(x, y))
# Addition with the output written into an existing tensor
result = torch.empty(5,3)
torch.add(x, y, out=result)
print(result)
# In-place addition (note the trailing underscore: add_ mutates y)
y.add_(x)
print(y)
# Any operation that mutates a tensor in-place is post-fixed with an _.
x.copy_(y)
x.t_()
# Resizing tensors
x = torch.randn(4, 4)
y = x.view(16)
z = x.view(-1,8)
print(x.size(), y.size(), z.size())
# Get the Python value of a one-element tensor
x = torch.randn(1)
print(x)
print(x.item())
## NUMPY BRIDGE
# Torch tensor to numpy array
a = torch.ones(5)
b = a.numpy()
print(a)
print(b)
a.add_(1)
print(a)
print(b)
# Numpy array to torch tensor
a = np.ones(5)
b = torch.from_numpy(a)
np.add(a, 1, out=a)
print(a)
print(b)
## USING CUDA
if torch.cuda.is_available():
device = torch.device("cuda") # Cuda device object
y = torch.ones_like(x, device=device) # Directly creates a tensor on GPU
x = x.to(device) #
z = x + y
print(z)
print(z.to("cpu", torch.double))
"""
AUTO-GRAD
- The autograd package provides automatic differntation for all
opeations on tensors.
- A define-by-run framework i.e backprop defined by how code
is run and every single iteration can be different.
TENSOR
- torch.tensor is the central class of the 'torch' package.
- If one sets attribute '.requires_grad()' as 'True', all
operations on it are tracked.
- When computations are finished one can call'backward()'
and have all the gradients computed.
- Gradient of a tensor is accumulated into '.grad' attribute.
- To stop tensor from tracking history, call '.detach()' to detach
it from computation history and prevent future computation
from being tracked
- To prevent tacking histroy and using memory, wrap the code
block in 'with torch.no_grad()'. Helpful when evaluating a model
cause model has trainable parameters with 'requires_grad=True'
- 'Function' class is very important for autograd implementation
- 'Tensor' and 'Function' are interconnected and buid up an acyclic
graph that encodes a complete history of computation.
- Each tensor has a '.grad_fn' attribute that references a 'Function'
that has created the 'Tensor' (except for tensors created by user)
- To compute derivates, '.backward()' is called on a Tensor. If
tensor is a scalar, no arguments ought to be passed to '.backward()'
if not, a 'gradient' argument ought to be specified.
"""
## TENSORS
# Create a tensor and track all operations on it
x = torch.ones(2,2, requires_grad=True)
print(x)
y = x + 2
print(y)
z = y * y * 3
out = z.mean()
print(z, out)
## GRADIENTS
# Performing backprop on 'out'
out.backward()
print(x.grad)
# An example of vector-Jacobian product
x = torch.randn(3, requires_grad=True)
y = x * 2
while y.data.norm() < 1000:
y = y * 2
print(y)
v = torch.tensor([0.1, 1.0, 0.0001], dtype=torch.float)
y.backward(v)
print(x.grad)
# Stop autograd from tracking history on Tensors
# with .requires_grad=True
print(x.requires_grad)
print((x ** 2).requires_grad)
with torch.no_grad():
print((x**2).requires_grad)
# (Assumes an `image` tensor loaded elsewhere in the notebook;
# .requires_grad_(True) turns gradient tracking on in place.)
# image.requires_grad_(True)
# image
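# The notes above also mention .detach(); a minimal sketch using the x
# defined earlier:
y = x.detach()
print(y.requires_grad)  # False: y shares storage with x but is cut from the graph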
"""
## NEURAL NETWORKS
- Can be constructed using 'torch.nn' package
- 'nn' depends on 'autograd' to define models and differentiate
them.
- 'nn.Module' contains layers and a method forward(input) that
returns the 'output'.
- Training procedure:
    - Define a neural network that has some learnable parameters
- Iterate over a dataset of inputs
- Process input through the network
- Compute loss
- Propagate gradients back into the network's parameters
- Update weights
"""
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super().__init__()
# Convolutional Layers
self.conv1 = nn.Conv2d(1, 6, 3)
self.conv2 = nn.Conv2d(6, 16, 3)
# An affine operation
self.fc1 = nn.Linear(16*6*6, 128)
self.fc2 = nn.Linear(128, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
x = F.max_pool2d(F.relu(self.conv2(x)), 2)
x = x.view(-1, self.num_flat_features(x))
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
def num_flat_features(self, x):
size = x.size()[1:]
num_features = 1
for s in size:
num_features *= s
return num_features
net = Net()
print(net)
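# The training procedure from the docstring above, sketched as one update
# step. (The MSE loss, SGD optimizer and 32x32 input size are illustrative
# assumptions, not from the original.)
import torch.optim as optim
criterion = nn.MSELoss()
optimizer = optim.SGD(net.parameters(), lr=0.01)
input = torch.randn(1, 1, 32, 32)   # dummy 1-channel 32x32 image
target = torch.randn(1, 10)         # dummy target matching fc3's 10 outputs
optimizer.zero_grad()               # clear gradients accumulated in .grad
output = net(input)
loss = criterion(output, target)
loss.backward()                     # propagate gradients into the parameters
optimizer.step()                    # update the weights
print(loss.item())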
```
```
import math
import torch
from d2l.torch import load_data_nmt
from torch import nn
from d2l import torch as d2l
x = torch.randint(1,4,size=(3,3),dtype=torch.float)
x.dim()
x.reshape(-1)
torch.repeat_interleave(x.reshape(-1),repeats=2,dim=0)
#@save
def sequence_mask(X, valid_len, value=0):
"""在序列中屏蔽不相关的项"""
maxlen = X.size(1)
mask = torch.arange((maxlen), dtype=torch.float32,
device=X.device)[None, :] < valid_len[:, None]
X[~mask] = value
return X
X = torch.tensor([[1, 2, 3], [4, 5, 6]])
sequence_mask(X, torch.tensor([1, 2]))
X = torch.ones(2, 3, 4)
sequence_mask(X, torch.tensor([1, 2]), value=False)
#@save
def masked_softmax(X, valid_lens):
"""通过在最后一个轴上掩蔽元素来执行softmax操作"""
# X:3D张量,valid_lens:1D或2D张量
if valid_lens is None:
return nn.functional.softmax(X, dim=-1)
else:
shape = X.shape
if valid_lens.dim() == 1:
valid_lens = torch.repeat_interleave(valid_lens, shape[1])
else:
valid_lens = valid_lens.reshape(-1)
        # Elements masked on the last axis are replaced with a very large
        # negative value, so that their softmax output is 0
X = sequence_mask(X.reshape(-1, shape[-1]), valid_lens,
value=-1e6)
return nn.functional.softmax(X.reshape(shape), dim=-1)
#[batch_size,query_nums,key_nums]
masked_softmax(torch.rand(2, 2, 4), torch.tensor([2, 3]))
#score shape:[batch_size,query_nums,key_nums]
masked_softmax(torch.rand(2, 4, 4), torch.tensor([[1,2,3,4],[1,2,3,4]]))
#@save
class AdditiveAttention(nn.Module):
"""加性注意力"""
def __init__(self, key_size, query_size, num_hiddens, dropout, **kwargs):
super(AdditiveAttention, self).__init__(**kwargs)
self.W_k = nn.Linear(key_size, num_hiddens, bias=False)
self.W_q = nn.Linear(query_size, num_hiddens, bias=False)
self.w_v = nn.Linear(num_hiddens, 1, bias=False)
self.dropout = nn.Dropout(dropout)
    def forward(self, queries, keys, values, valid_lens):
        queries, keys = self.W_q(queries), self.W_k(keys)
        # After dimension expansion:
        # shape of queries: (batch_size, no. of queries, 1, num_hiddens)
        # shape of keys: (batch_size, 1, no. of key-value pairs, num_hiddens)
        # Sum them up with broadcasting
        features = queries.unsqueeze(2) + keys.unsqueeze(1)
        features = torch.tanh(features)
        # self.w_v has only one output, so remove the last axis from the shape.
        # shape of scores: (batch_size, no. of queries, no. of key-value pairs)
        scores = self.w_v(features).squeeze(-1)
        self.attention_weights = masked_softmax(scores, valid_lens)
        # shape of values: (batch_size, no. of key-value pairs, value dimension)
        return torch.bmm(self.dropout(self.attention_weights), values)
queries, keys = torch.normal(0, 1, (2, 1, 20)), torch.ones((2, 10, 2))
# The two value matrices in the values minibatch are identical
values = torch.arange(40, dtype=torch.float32).reshape(1, 10, 4).repeat(
2, 1, 1)
valid_lens = torch.tensor([2, 6])
attention = AdditiveAttention(key_size=2, query_size=20, num_hiddens=8,
dropout=0.1)
attention.eval()
res = attention(queries, keys, values, valid_lens)
res.shape
attention.attention_weights.shape
#@save
class DotProductAttention(nn.Module):
"""缩放点积注意力"""
def __init__(self, dropout, **kwargs):
super(DotProductAttention, self).__init__(**kwargs)
self.dropout = nn.Dropout(dropout)
    # shape of queries: (batch_size, no. of queries, d)
    # shape of keys: (batch_size, no. of key-value pairs, d)
    # shape of values: (batch_size, no. of key-value pairs, value dimension)
    # shape of valid_lens: (batch_size,) or (batch_size, no. of queries)
    def forward(self, queries, keys, values, valid_lens=None):
        d = queries.shape[-1]
        # Swap the last two dimensions of keys with transpose(1, 2)
scores = torch.bmm(queries, keys.transpose(1,2)) / math.sqrt(d)
self.attention_weights = masked_softmax(scores, valid_lens)
return torch.bmm(self.dropout(self.attention_weights), values)
queries = torch.normal(0, 1, (2, 1, 2))
attention = DotProductAttention(dropout=0.5)
attention.eval()
attention(queries, keys, values, valid_lens)
keys.shape,values.shape
#@save
class AttentionDecoder(d2l.Decoder):
"""带有注意力机制解码器的基本接口"""
def __init__(self, **kwargs):
super(AttentionDecoder, self).__init__(**kwargs)
@property
def attention_weights(self):
raise NotImplementedError
class Seq2SeqAttentionDecoder(AttentionDecoder):
def __init__(self, vocab_size, embed_size, num_hiddens, num_layers,
dropout=0, **kwargs):
super(Seq2SeqAttentionDecoder, self).__init__(**kwargs)
self.attention = d2l.AdditiveAttention(
num_hiddens, num_hiddens, num_hiddens, dropout)
self.embedding = nn.Embedding(vocab_size, embed_size)
self.rnn = nn.GRU(
embed_size + num_hiddens, num_hiddens, num_layers,
dropout=dropout)
self.dense = nn.Linear(num_hiddens, vocab_size)
    def init_state(self, enc_outputs, enc_valid_lens, *args):
        # shape of outputs: (batch_size, num_steps, num_hiddens)
        # shape of hidden_state: (num_layers, batch_size, num_hiddens)
        outputs, hidden_state = enc_outputs
        return (outputs.permute(1, 0, 2), hidden_state, enc_valid_lens)
    def forward(self, X, state):
        # shape of enc_outputs: (batch_size, num_steps, num_hiddens)
        # shape of hidden_state: (num_layers, batch_size, num_hiddens)
        enc_outputs, hidden_state, enc_valid_lens = state
        # shape of the output X: (num_steps, batch_size, embed_size)
        X = self.embedding(X).permute(1, 0, 2)
        outputs, self._attention_weights = [], []
        for x in X:
            # shape of query: (batch_size, 1, num_hiddens)
            query = torch.unsqueeze(hidden_state[-1], dim=1)
            # shape of context: (batch_size, 1, num_hiddens)
            context = self.attention(
                query, enc_outputs, enc_outputs, enc_valid_lens)
            # Concatenate on the feature dimension;
            # shape of x: (batch_size, 1, embed_size + num_hiddens)
            x = torch.cat((context, torch.unsqueeze(x, dim=1)), dim=-1)
            # Reshape x to (1, batch_size, embed_size + num_hiddens)
            out, hidden_state = self.rnn(x.permute(1, 0, 2), hidden_state)
            outputs.append(out)
            self._attention_weights.append(self.attention.attention_weights)
        # After the fully connected layer transformation, the shape of
        # outputs is (num_steps, batch_size, vocab_size)
        outputs = self.dense(torch.cat(outputs, dim=0))
return outputs.permute(1, 0, 2), [enc_outputs, hidden_state,
enc_valid_lens]
@property
def attention_weights(self):
return self._attention_weights
encoder = d2l.Seq2SeqEncoder(vocab_size=10, embed_size=8, num_hiddens=16,
num_layers=2)
encoder.eval()
decoder = Seq2SeqAttentionDecoder(vocab_size=10, embed_size=8, num_hiddens=16,
num_layers=2)
decoder.eval()
X = torch.zeros((4, 7), dtype=torch.long) # (batch_size,num_steps)
state = decoder.init_state(encoder(X), None)
output, state = decoder(X, state)
output.shape, len(state), state[0].shape, len(state[1]), state[1][0].shape
torch.cat(decoder.attention_weights,dim=1).shape
decoder.attention_weights[0].shape
import os
def read_data_nmt():
"""Load the English-French dataset.
Defined in :numref:`sec_machine_translation`"""
data_dir = d2l.download_extract('fra-eng')
with open(os.path.join(data_dir, 'fra.txt'), 'r',encoding='utf-8') as f:
return f.read()
def preprocess_nmt(text):
"""Preprocess the English-French dataset.
Defined in :numref:`sec_machine_translation`"""
def no_space(char, prev_char):
return char in set(',.!?') and prev_char != ' '
# Replace non-breaking space with space, and convert uppercase letters to
# lowercase ones
text = text.replace('\u202f', ' ').replace('\xa0', ' ').lower()
# Insert space between words and punctuation marks
out = [' ' + char if i > 0 and no_space(char, text[i - 1]) else char
for i, char in enumerate(text)]
return ''.join(out)
def truncate_pad(line, num_steps, padding_token):
"""Truncate or pad sequences.
Defined in :numref:`sec_machine_translation`"""
if len(line) > num_steps:
return line[:num_steps] # Truncate
return line + [padding_token] * (num_steps - len(line)) # Pad
def build_array_nmt(lines, vocab, num_steps):
"""Transform text sequences of machine translation into minibatches.
Defined in :numref:`subsec_mt_data_loading`"""
lines = [vocab[l] for l in lines]
lines = [l + [vocab['<eos>']] for l in lines]
array = d2l.tensor([truncate_pad(
l, num_steps, vocab['<pad>']) for l in lines])
valid_len = d2l.reduce_sum(
d2l.astype(array != vocab['<pad>'], d2l.int32), 1)
return array, valid_len
def load_data_nmt(batch_size, num_steps, num_examples=600):
"""Return the iterator and the vocabularies of the translation dataset.
Defined in :numref:`subsec_mt_data_loading`"""
text = preprocess_nmt(read_data_nmt())
source, target = d2l.tokenize_nmt(text, num_examples)
src_vocab = d2l.Vocab(source, min_freq=2,
reserved_tokens=['<pad>', '<bos>', '<eos>'])
tgt_vocab = d2l.Vocab(target, min_freq=2,
reserved_tokens=['<pad>', '<bos>', '<eos>'])
src_array, src_valid_len = build_array_nmt(source, src_vocab, num_steps)
tgt_array, tgt_valid_len = build_array_nmt(target, tgt_vocab, num_steps)
data_arrays = (src_array, src_valid_len, tgt_array, tgt_valid_len)
data_iter = d2l.load_array(data_arrays, batch_size)
return data_iter, src_vocab, tgt_vocab
embed_size, num_hiddens, num_layers, dropout = 32, 32, 2, 0.1
batch_size, num_steps = 64, 10
lr, num_epochs, device = 0.005, 250, d2l.try_gpu()
train_iter, src_vocab, tgt_vocab = load_data_nmt(batch_size, num_steps)
encoder = d2l.Seq2SeqEncoder(
len(src_vocab), embed_size, num_hiddens, num_layers, dropout)
decoder = Seq2SeqAttentionDecoder(
len(tgt_vocab), embed_size, num_hiddens, num_layers, dropout)
net = d2l.EncoderDecoder(encoder, decoder)
d2l.train_seq2seq(net, train_iter, lr, num_epochs, tgt_vocab, device)
engs = ['go .', "i lost .", 'he\'s calm .', 'i\'m home .']
fras = ['va !', 'j\'ai perdu .', 'il est calme .', 'je suis chez moi .']
for eng, fra in zip(engs, fras):
translation, dec_attention_weight_seq = d2l.predict_seq2seq(
net, eng, src_vocab, tgt_vocab, num_steps, device, True)
print(f'{eng} => {translation}, ',
f'bleu {d2l.bleu(translation, fra, k=2):.3f}')
#@save
class MultiHeadAttention(nn.Module):
"""多头注意力"""
def __init__(self, key_size, query_size, value_size, num_hiddens,
num_heads, dropout, bias=False, **kwargs):
super(MultiHeadAttention, self).__init__(**kwargs)
self.num_heads = num_heads
self.attention = d2l.DotProductAttention(dropout)
self.W_q = nn.Linear(query_size, num_hiddens, bias=bias)
self.W_k = nn.Linear(key_size, num_hiddens, bias=bias)
self.W_v = nn.Linear(value_size, num_hiddens, bias=bias)
self.W_o = nn.Linear(num_hiddens, num_hiddens, bias=bias)
    def forward(self, queries, keys, values, valid_lens):
        # shape of queries, keys, values:
        # (batch_size, no. of queries or key-value pairs, num_hiddens)
        # shape of valid_lens: (batch_size,) or (batch_size, no. of queries)
        # After transposing, shape of the output queries, keys, values:
        # (batch_size * num_heads, no. of queries or key-value pairs,
        # num_hiddens / num_heads)
queries = transpose_qkv(self.W_q(queries), self.num_heads)
keys = transpose_qkv(self.W_k(keys), self.num_heads)
values = transpose_qkv(self.W_v(values), self.num_heads)
#q,k,v shape: [batch_size,num_heads,qkv_nums,num_hiddens/num_heads]
        if valid_lens is not None:
            # On axis 0, copy the first item (scalar or vector) num_heads
            # times, then copy the next item, and so on
            valid_lens = torch.repeat_interleave(
                valid_lens, repeats=self.num_heads, dim=0)
        # shape of output: (batch_size * num_heads, no. of queries,
        # num_hiddens / num_heads)
output = self.attention(queries, keys, values, valid_lens)
        # shape of output_concat: (batch_size, no. of queries, num_hiddens)
output_concat = transpose_output(output, self.num_heads)
return self.W_o(output_concat)
#@save
def transpose_qkv(X, num_heads):
"""为了多注意力头的并行计算而变换形状"""
# 输入X的形状:(batch_size,查询或者“键-值”对的个数,num_hiddens)
# 输出X的形状:(batch_size,查询或者“键-值”对的个数,num_heads,
# num_hiddens/num_heads)
X = X.reshape(X.shape[0], X.shape[1], num_heads, -1)
# 输出X的形状:(batch_size,num_heads,查询或者“键-值”对的个数,
# num_hiddens/num_heads)
X = X.permute(0, 2, 1, 3)
# 最终输出的形状:(batch_size*num_heads,查询或者“键-值”对的个数,
# num_hiddens/num_heads)
return X.reshape(-1, X.shape[2], X.shape[3])
#@save
def transpose_output(X, num_heads):
"""逆转transpose_qkv函数的操作"""
X = X.reshape(-1, num_heads, X.shape[1], X.shape[2])
X = X.permute(0, 2, 1, 3)
    return X.reshape(X.shape[0], X.shape[1], -1)
valid_lens = torch.tensor([[1,2,3],[4,5,6]])
torch.repeat_interleave(valid_lens, repeats=3, dim=0)
x = torch.arange(16).reshape(2,2,4)
x,x[1]
x.reshape(2,2,2,2)
x.reshape(2,2,2,2).permute(0,2,1,3).reshape(-1,2,2)
num_hiddens, num_heads = 100, 5
attention = d2l.MultiHeadAttention(num_hiddens, num_hiddens, num_hiddens,
num_hiddens, num_heads, 0.5)
attention.eval()
batch_size, num_queries, valid_lens = 2, 4, torch.tensor([3, 2])
X = torch.ones((batch_size, num_queries, num_hiddens))
attention(X, X, X, valid_lens).shape
#@save
class PositionalEncoding(nn.Module):
"""位置编码"""
def __init__(self, num_hiddens, dropout, max_len=1000):
super(PositionalEncoding, self).__init__()
self.dropout = nn.Dropout(dropout)
        # Create a long enough P
self.P = torch.zeros((1, max_len, num_hiddens))
X = torch.arange(max_len, dtype=torch.float32).reshape(-1, 1) / torch.pow(10000, torch.arange(0, num_hiddens, 2, dtype=torch.float32) / num_hiddens)
self.P[:, :, 0::2] = torch.sin(X)
self.P[:, :, 1::2] = torch.cos(X)
def forward(self, X):
X = X + self.P[:, :X.shape[1], :].to(X.device)
return self.dropout(X)
max_len = 1000
num_hiddens=100
P = torch.zeros((1, max_len, num_hiddens))
P[:].shape
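# A usage sketch for the PositionalEncoding class above (sizes here are
# illustrative): the input shape is preserved, positions are added in.
pe = PositionalEncoding(num_hiddens=20, dropout=0)
pe.eval()
pe(torch.zeros((1, 60, 20))).shape  # torch.Size([1, 60, 20])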
class PositionWiseFFN(nn.Module):
def __init__(self,ffn_num_input,ffn_num_hiddens,ffn_num_outputs,**kwargs):
super(PositionWiseFFN, self).__init__(**kwargs)
self.dense1 = nn.Linear(ffn_num_input,ffn_num_hiddens)
self.relu = nn.ReLU()
self.dense2 = nn.Linear(ffn_num_hiddens,ffn_num_outputs)
def forward(self,X):
return self.dense2(self.relu(self.dense1(X)))
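# Usage sketch: the FFN above acts position-wise, so only the last
# dimension changes (sizes here are illustrative):
ffn = PositionWiseFFN(4, 8, 8)
ffn.eval()
ffn(torch.ones((2, 3, 4))).shape  # torch.Size([2, 3, 8])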
ln = nn.LayerNorm(2)
bn = nn.BatchNorm1d(2)
X = torch.tensor([[1, 2], [2, 3]], dtype=torch.float32)
# Compute the mean and variance of X in training mode
print('layer norm:', ln(X), '\nbatch norm:', bn(X))
class AddNorm(nn.Module):
def __init__(self,normalized_shape,dropout,**kwargs):
super(AddNorm, self).__init__(**kwargs)
self.dropout = nn.Dropout(dropout)
self.ln = nn.LayerNorm(normalized_shape)
def forward(self,X,Y):
return self.ln(self.dropout(Y)+X)
add_norm = AddNorm([3, 4], 0.5)
add_norm.eval()
add_norm(torch.ones((2, 3, 4)), torch.ones((2, 3, 4))).shape
class EncoderBlock(nn.Module):
def __init__(self,key_size,query_size,value_size,num_hiddens,norm_shape,ffn_num_input,
ffn_num_hiddens,num_heads,dropout,use_bias=False,**kwargs):
super(EncoderBlock, self).__init__(**kwargs)
self.attention = MultiHeadAttention(key_size,query_size,value_size,num_hiddens,num_heads,dropout,use_bias)
self.addnorm1 = AddNorm(norm_shape,dropout)
self.ffn = PositionWiseFFN(ffn_num_input,ffn_num_hiddens,num_hiddens)
self.addnorm2 = AddNorm(norm_shape,dropout)
def forward(self,X,valid_lens):
Y = self.addnorm1(X,self.attention(X,X,X,valid_lens))
return self.addnorm2(Y,self.ffn(Y))
X = torch.ones((2, 100, 24))
valid_lens = torch.tensor([50, 60])
encoder_blk = EncoderBlock(24, 24, 24, 24, [100, 24], 24, 48, 8, 0.5)
encoder_blk.eval()
encoder_blk(X, valid_lens).shape
#@save
class TransformerEncoder(d2l.Encoder):
"""transformer编码器"""
def __init__(self, vocab_size, key_size, query_size, value_size,
num_hiddens, norm_shape, ffn_num_input, ffn_num_hiddens,
num_heads, num_layers, dropout, use_bias=False, **kwargs):
super(TransformerEncoder, self).__init__(**kwargs)
self.num_hiddens = num_hiddens
self.embedding = nn.Embedding(vocab_size, num_hiddens)
self.pos_encoding = d2l.PositionalEncoding(num_hiddens, dropout)
self.blks = nn.Sequential()
for i in range(num_layers):
self.blks.add_module("block"+str(i),
EncoderBlock(key_size, query_size, value_size, num_hiddens,
norm_shape, ffn_num_input, ffn_num_hiddens,
num_heads, dropout, use_bias))
    def forward(self, X, valid_lens, *args):
        # Since the positional encoding values are between -1 and 1, the
        # embedding values are multiplied by the square root of the
        # embedding dimension to rescale them before they are summed up
        X = self.pos_encoding(self.embedding(X) * math.sqrt(self.num_hiddens))
self.attention_weights = [None] * len(self.blks)
for i, blk in enumerate(self.blks):
X = blk(X, valid_lens)
self.attention_weights[
i] = blk.attention.attention.attention_weights
return X
encoder = TransformerEncoder(
200, 24, 24, 24, 24, [100, 24], 24, 48, 8, 2, 0.5)
encoder.eval()
encoder(torch.ones((2, 100), dtype=torch.long), valid_lens).shape
class DecoderBlock(nn.Module):
"""解码器中第i个块"""
def __init__(self, key_size, query_size, value_size, num_hiddens,
norm_shape, ffn_num_input, ffn_num_hiddens, num_heads,
dropout, i, **kwargs):
super(DecoderBlock, self).__init__(**kwargs)
self.i = i
self.attention1 = d2l.MultiHeadAttention(
key_size, query_size, value_size, num_hiddens, num_heads, dropout)
self.addnorm1 = AddNorm(norm_shape, dropout)
self.attention2 = d2l.MultiHeadAttention(
key_size, query_size, value_size, num_hiddens, num_heads, dropout)
self.addnorm2 = AddNorm(norm_shape, dropout)
self.ffn = PositionWiseFFN(ffn_num_input, ffn_num_hiddens,num_hiddens)
self.addnorm3 = AddNorm(norm_shape, dropout)
    def forward(self, X, state):
        enc_outputs, enc_valid_lens = state[0], state[1]
        # During training, all the tokens of the output sequence are
        # processed at the same time, so state[2][self.i] is initialized
        # as None. During prediction, the output sequence is decoded one
        # token at a time, so state[2][self.i] contains the representations
        # of the decoded output at the i-th block up to the current time step
        if state[2][self.i] is None:
            key_values = X
        else:
            key_values = torch.cat((state[2][self.i], X), axis=1)
        state[2][self.i] = key_values
        if self.training:
            batch_size, num_steps, _ = X.shape
            # shape of dec_valid_lens: (batch_size, num_steps), where every
            # row is [1, 2, ..., num_steps]; used in self-attention so that
            # each new token only attends to the tokens decoded so far,
            # not to the whole sequence
            dec_valid_lens = torch.arange(
                1, num_steps + 1, device=X.device).repeat(batch_size, 1)
        else:
            dec_valid_lens = None
        # Self-attention (dec_valid_lens is taken into account here)
        X2 = self.attention1(X, key_values, key_values, dec_valid_lens)
        Y = self.addnorm1(X, X2)
        # Encoder-decoder attention.
        # shape of enc_outputs: (batch_size, num_steps, num_hiddens)
        Y2 = self.attention2(Y, enc_outputs, enc_outputs, enc_valid_lens)
        Z = self.addnorm2(Y, Y2)
return self.addnorm3(Z, self.ffn(Z)), state
X = torch.ones((2, 100, 24))
Y = torch.ones((2, 100, 24))
torch.cat((X,Y),dim=1).shape
batch_size
torch.arange(1, num_steps + 1, device=X.device).repeat(batch_size, 1)
decoder_blk = DecoderBlock(24, 24, 24, 24, [100, 24], 24, 48, 8, 0.5, 0)
decoder_blk.eval()
X = torch.ones((2, 100, 24))
state = [encoder_blk(X, valid_lens), valid_lens, [None]]
decoder_blk(X, state)[0].shape
class TransformerDecoder(d2l.AttentionDecoder):
def __init__(self, vocab_size, key_size, query_size, value_size,
num_hiddens, norm_shape, ffn_num_input, ffn_num_hiddens,
num_heads, num_layers, dropout, **kwargs):
super(TransformerDecoder, self).__init__(**kwargs)
self.num_hiddens = num_hiddens
self.num_layers = num_layers
self.embedding = nn.Embedding(vocab_size, num_hiddens)
self.pos_encoding = d2l.PositionalEncoding(num_hiddens, dropout)
self.blks = nn.Sequential()
for i in range(num_layers):
self.blks.add_module("block"+str(i),
DecoderBlock(key_size, query_size, value_size, num_hiddens,
norm_shape, ffn_num_input, ffn_num_hiddens,
num_heads, dropout, i))
self.dense = nn.Linear(num_hiddens, vocab_size)
def init_state(self, enc_outputs, enc_valid_lens, *args):
return [enc_outputs, enc_valid_lens, [None] * self.num_layers]
def forward(self, X, state):
X = self.pos_encoding(self.embedding(X) * math.sqrt(self.num_hiddens))
self._attention_weights = [[None] * len(self.blks) for _ in range (2)]
for i, blk in enumerate(self.blks):
X, state = blk(X, state)
            # Decoder self-attention weights
self._attention_weights[0][
i] = blk.attention1.attention.attention_weights
            # Encoder-decoder attention weights
self._attention_weights[1][
i] = blk.attention2.attention.attention_weights
return self.dense(X), state
@property
def attention_weights(self):
return self._attention_weights
num_hiddens, num_layers, dropout, batch_size, num_steps = 32, 2, 0.1, 64, 10
lr, num_epochs, device = 0.005, 200, d2l.try_gpu()
ffn_num_input, ffn_num_hiddens, num_heads = 32, 64, 4
key_size, query_size, value_size = 32, 32, 32
norm_shape = [32]
train_iter, src_vocab, tgt_vocab = load_data_nmt(batch_size, num_steps)
encoder = TransformerEncoder(
len(src_vocab), key_size, query_size, value_size, num_hiddens,
norm_shape, ffn_num_input, ffn_num_hiddens, num_heads,
num_layers, dropout)
decoder = TransformerDecoder(
len(tgt_vocab), key_size, query_size, value_size, num_hiddens,
norm_shape, ffn_num_input, ffn_num_hiddens, num_heads,
num_layers, dropout)
net = d2l.EncoderDecoder(encoder, decoder)
d2l.train_seq2seq(net, train_iter, lr, num_epochs, tgt_vocab, device)
engs = ['go .', "i lost .", 'he\'s calm .', 'i\'m home .']
fras = ['va !', 'j\'ai perdu .', 'il est calme .', 'je suis chez moi .']
for eng, fra in zip(engs, fras):
translation, dec_attention_weight_seq = d2l.predict_seq2seq(
net, eng, src_vocab, tgt_vocab, num_steps, device, True)
print(f'{eng} => {translation}, ',
f'bleu {d2l.bleu(translation, fra, k=2):.3f}')
```
```
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
import matplotlib
matplotlib.rcParams['figure.figsize'] = [12.0, 8.0]
def plot_project_data(data_x, data_list_y, plt_range_min_x, plt_range_max_x,
short_colors = ['b', 'g', 'r'],
labels = ['mapreduce', 'hive', 'spark'], types = ['', '', ''],
title='Job', loc=2):
plt_range_min_y = int(min(data_list_y[0]) * 0.9)
plt_range_max_y = int(max(data_list_y[0]) * 1.1)
for dy in data_list_y:
plt_range_min_y = int(min(dy + [plt_range_min_y]) * 0.99)
plt_range_max_y = int(max(dy + [plt_range_max_y]) * 1.01)
    handles = []
for i in range(len(data_list_y)):
app, _ = plt.plot(data_x[:len(data_list_y[i])], data_list_y[i], short_colors[i]+types[i], data_x[:len(data_list_y[i])], data_list_y[i], short_colors[i]+'o', label=labels[i])
handles = handles + [app]
plt.axis([plt_range_min_x, plt_range_max_x, plt_range_min_y, plt_range_max_y])
plt.ylabel('seconds')
plt.xlabel('MB')
plt.title(title)
plt.legend(handles=handles, loc=loc)
plt.show()
data_x = [5, 26, 70, 184, 292, 591]
max_x = 600
data_mr1L = [12, 14, 18, 25, 33, 40]
data_hive1L = [8, 8, 9, 12, 17, 21]
data_spark1L = [7, 11, 14, 19, 25, 27]
plot_project_data(data_x, [data_mr1L, data_hive1L, data_spark1L],
0, max_x, title='Job 1 - Local')
data_mr2L = [12, 15, 19, 31, 43, 81]
data_hive2L = [7, 8, 10, 16, 19, 25]
data_spark2L = [8, 12, 16, 28, 39, 71]
plot_project_data(data_x, [data_mr2L, data_hive2L, data_spark2L],
0, max_x, title='Job 2 - Local')
data_mr31FL = [12, 12, 13, 20, 27, 69]
data_mr31SL = [11, 12, 19, 34, 65, 275]
data_mr31L = [12+11+5, 12+12+5, 13+19+5, 20+34+5, 27+65+5, 69+275+5]
data_mr32L = [11, 13, 30, 210]
data_hive3L = [13, 14, 53, 145, 308, 482]
data_spark3L = [9, 13, 46, 225, 405, 417]
plot_project_data(data_x,
[data_mr31FL, data_mr31SL, data_mr31L, data_mr32L, data_hive3L, data_spark3L],
0, max_x,
short_colors = ['y', 'c', 'b', 'm', 'g', 'r'],
labels = ['mapreduce v1 (first)', 'mapreduce v1 (second)', 'mapreduce v1 - total', 'mapreduce v2', 'hive', 'spark'],
types = ['--', '--', '', '', '', '', ''],
title='Job 3 - Local')
data_mr1C = [12, 14, 16, 20, 27, 30]
data_hive1C = [68, 69, 70, 71, 76, 88]
data_spark1C = [28, 28, 36, 41, 43, 46]
plot_project_data(data_x, [data_mr1C, data_hive1C, data_spark1C],
0, max_x, title='Job 1 - Cluster')
data_mr2C = [11, 15, 16, 22, 24, 29]
data_hive2C = [68, 70, 73, 77, 59, 89]
data_hive2C_no = [68, 70, 73, 77, 80, 89]
data_spark2C = [21, 33, 36, 44, 52, 70]
plot_project_data(data_x, [data_mr2C, data_hive2C, data_hive2C_no, data_spark2C],
0, max_x,
short_colors = ['b', 'g', 'g', 'r'],
labels = ['mapreduce', 'hive (real)', 'hive (no outlier)', 'spark'],
types = ['', '', '--', ''],
title='Job 2 - Cluster', loc=4)
data_mr31FC = [11, 11, 13, 14, 28, 72]
data_mr31SC = [10, 12, 19, 21, 54, 162]
data_mr31C = [11+10+5, 11+12+5, 13+19+5, 14+21+5, 28+54+5, 72+162+5]
data_mr32C = [11, 12, 25, 193]
data_hive3C = [34, 43, 56, 102, 206, 245]
data_spark3C = [29, 30, 45, 108, 244, 303]
plot_project_data(data_x,
[data_mr31FC, data_mr31SC, data_mr31C, data_mr32C, data_hive3C, data_spark3C],
0, max_x,
short_colors = ['y', 'c', 'b', 'm', 'g', 'r'],
labels = ['mapreduce v1 (first)', 'mapreduce v1 (second)', 'mapreduce v1 - total', 'mapreduce v2', 'hive', 'spark'],
types = ['--', '--', '', '', '', '', ''],
title='Job 3 - Cluster')
plot_project_data(data_x, [data_mr1L, data_mr1C],
0, max_x, title='Job 1 - MapReduce',
labels = ['local', 'cluster'])
plot_project_data(data_x, [data_mr2L, data_mr2C],
0, max_x, title='Job 2 - MapReduce',
labels = ['local', 'cluster'])
plot_project_data(data_x, [data_mr31L, data_mr31C, data_mr32L, data_mr32C],
0, max_x, title='Job 3 - MapReduce',
short_colors = ['b', 'g', 'c', 'y'],
types = ['', '', '', '', ''],
labels = ['local (v1)', 'cluster (v1)', 'local (v2)', 'cluster (v2)'])
plot_project_data(data_x, [data_hive1L, data_hive1C, [x-60 for x in data_hive1C]],
0, max_x, title='Job 1 - Hive',
short_colors = ['b', 'g', 'c'],
labels = ['local', 'cluster', 'cluster (compare)'],
types = ['', '', '--'])
plot_project_data(data_x, [data_hive2L, data_hive2C, data_hive2C_no, [x-61 for x in data_hive2C_no]],
0, max_x, title='Job 2 - Hive',
labels = ['local', 'cluster (real)', 'cluster (no outlier)', 'cluster (compare)'],
short_colors = ['b', 'g', 'g', 'c'],
types = ['', '', '--', '--'], loc=4)
plot_project_data(data_x, [data_hive3L, data_hive3C],
0, max_x, title='Job 3 - Hive',
labels = ['local', 'cluster'])
plot_project_data(data_x, [data_spark1L, data_spark1C, [x-21 for x in data_spark1C]],
0, max_x, title='Job 1 - Spark',
short_colors = ['b', 'g', 'c'],
labels = ['local', 'cluster', 'cluster (compare)'],
types = ['', '', '--'], loc=4)
plot_project_data(data_x, [data_spark2L, data_spark2C],
0, max_x, title='Job 2 - Spark',
labels = ['local', 'cluster'])
plot_project_data(data_x, [data_spark3L, data_spark3C],
0, max_x, title='Job 3 - Spark',
labels = ['local', 'cluster'])
```
```
import random
import time
import os
print()
print('''Welcome to the slot machine
You will start with $50. You will be asked whether you want to play.
Answer with yes / no; you can also use y / n.
There is no case sensitivity, type it however you like!
To win you must get one of the following combinations:
BAR\tBAR\tBAR\t\tpays\t$250
BELL\tBELL\tBELL/BAR\tpays\t$20
PLUM\tPLUM\tPLUM/BAR\tpays\t$14
ORANGE\tORANGE\tORANGE/BAR\tpays\t$10
CHERRY\tCHERRY\tCHERRY\t\tpays\t$7
CHERRY\tCHERRY\t -\t\tpays\t$5
CHERRY\t -\t -\t\tpays\t$2
7\t 7\t 7\t\tpays\t The Jackpot!
''')
time.sleep(10)
#Constants:
INIT_STAKE = 50
INIT_BALANCE = 1000
ITEMS = ["CHERRY", "LEMON", "ORANGE", "PLUM", "BELL", "BAR", "7"]
firstWheel = None
secondWheel = None
thirdWheel = None
stake = INIT_STAKE
balance = INIT_BALANCE
def play():
global stake, firstWheel, secondWheel, thirdWheel
playQuestion = askPlayer()
while(stake != 0 and playQuestion == True):
firstWheel = spinWheel()
secondWheel = spinWheel()
thirdWheel = spinWheel()
printScore()
playQuestion = askPlayer()
def askPlayer():
'''
Asks the player whether they want to play again,
expecting the user to answer with yes, y, no or n.
The answer is not case sensitive: YES, yes, Y, y, No... they all work.
'''
global stake
global balance
while(True):
os.system('cls' if os.name == 'nt' else 'clear')
if (balance <= 1):
print("Resetting the machine.")
balance = 1000
print("The jackpot currently stands at: $" + str(balance) + ".")
answer = input("Would you like to play? Or check your money? ")
answer = answer.lower()
if(answer == "yes" or answer == "y"):
return True
elif(answer == "no" or answer == "n"):
print("You finished the game with $" + str(stake) + " in your hand. Great job!")
time.sleep(5)
return False
elif(answer == "check"):
print("You currently have $" + str(stake) + ".")
else:
print("Whoops! I didn't understand that.")
def spinWheel():
'''
returns a random item from the wheel
'''
randomNumber = random.randint(0, len(ITEMS) - 1)  # includes "7", so the jackpot can actually be hit
return ITEMS[randomNumber]
def printScore():
'''
prints the current score
'''
global stake, firstWheel, secondWheel, thirdWheel, balance
if((firstWheel == "CHERRY") and (secondWheel != "CHERRY")):
win = 2
balance = balance - 2
elif((firstWheel == "CHERRY") and (secondWheel == "CHERRY") and (thirdWheel != "CHERRY")):
win = 5
balance = balance - 5
elif((firstWheel == "CHERRY") and (secondWheel == "CHERRY") and (thirdWheel == "CHERRY")):
win = 7
balance = balance - 7
elif((firstWheel == "ORANGE") and (secondWheel == "ORANGE") and ((thirdWheel == "ORANGE") or (thirdWheel == "BAR"))):
win = 10
balance = balance - 10
elif((firstWheel == "PLUM") and (secondWheel == "PLUM") and ((thirdWheel == "PLUM") or (thirdWheel == "BAR"))):
win = 14
balance = balance - 14
elif((firstWheel == "BELL") and (secondWheel == "BELL") and ((thirdWheel == "BELL") or (thirdWheel == "BAR"))):
win = 20
balance = balance - 20
elif((firstWheel == "BAR") and (secondWheel == "BAR") and (thirdWheel == "BAR")):
win = 250
balance = balance - 250
elif((firstWheel == "7") and (secondWheel == "7") and (thirdWheel == "7")):
win = balance
balance = balance - win
else:
win = -1
balance = balance + 1
stake += win
if firstWheel == secondWheel == thirdWheel == "7":  # win == balance fails here: balance is already 0 after the payout
print("You won the JACKPOT!!")
if(win > 0):
print(firstWheel + '\t' + secondWheel + '\t' + thirdWheel + ' -- You won $' + str(win))
time.sleep(3)
os.system('cls' if os.name == 'nt' else 'clear')
else:
print(firstWheel + '\t' + secondWheel + '\t' + thirdWheel + ' -- You lost')
time.sleep(2)
os.system('cls' if os.name == 'nt' else 'clear')
play()
```
| github_jupyter |
Pin the scipy version for Inception
```
!pip install scipy==1.3.3
```
Import libraries
```
from __future__ import division, print_function
from torchvision import datasets, models, transforms
import copy
import matplotlib.pyplot as plt
import numpy as np
import os
import shutil
import time
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
import zipfile
```
Mount Google Drive
```
from google.colab import drive
drive.mount('/content/drive')
```
Define constants
```
DEVICE = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
ZIP_FILE_PATH = './dataset.zip'
DATASET_PATH = './dataset'
INCEPTION = 'inception'
VGG19 = 'vgg-19'
MODEL = INCEPTION # Defines which model type to use.
IMG_SIZE = {
INCEPTION: 299,
VGG19: 224,
}[MODEL]
NORMALIZE_MEAN = [0.485, 0.456, 0.406]
NORMALIZE_STD = [0.229, 0.224, 0.225]
BATCH_SIZE = 4
NUM_WORKERS = 4
TRAIN = 'train'
VAL = 'val'
TEST = 'test'
PHASES = {
TRAIN: 'train',
VAL: 'val',
TEST: 'test',
}
print(DEVICE)
```
Clean up the dataset directory
```
shutil.rmtree(DATASET_PATH, ignore_errors=True)  # don't fail if the directory doesn't exist yet
```
Extract the dataset
```
zip_file = zipfile.ZipFile(ZIP_FILE_PATH)
zip_file.extractall()
zip_file.close()
```
Load the dataset
```
# Data augmentation for training,
# normalization only for validation and test.
data_transforms = {
TRAIN: transforms.Compose([
transforms.Resize(IMG_SIZE),
transforms.RandomHorizontalFlip(),
transforms.RandomRotation(15),
transforms.ToTensor(),
transforms.Normalize(NORMALIZE_MEAN, NORMALIZE_STD),
]),
VAL: transforms.Compose([
transforms.Resize(IMG_SIZE),
transforms.ToTensor(),
transforms.Normalize(NORMALIZE_MEAN, NORMALIZE_STD),
]),
TEST: transforms.Compose([
transforms.Resize(IMG_SIZE),
transforms.ToTensor(),
transforms.Normalize(NORMALIZE_MEAN, NORMALIZE_STD),
]),
}
data_sets = {
phase: datasets.ImageFolder(
os.path.join(DATASET_PATH, PHASES[phase]),
data_transforms[phase],
) for phase in PHASES
}
data_loaders = {
phase: torch.utils.data.DataLoader(
data_sets[phase],
batch_size = BATCH_SIZE,
shuffle = True,
num_workers = NUM_WORKERS,
) for phase in PHASES
}
data_sizes = {
phase: len(data_sets[phase]) for phase in PHASES
}
class_names = data_sets[TRAIN].classes
print(data_sets)
print(data_loaders)
print(data_sizes)
print(class_names)
```
Helper functions
```
# Displays an image from a Tensor.
def imshow(data):
mean = np.array(NORMALIZE_MEAN)
std = np.array(NORMALIZE_STD)
image = data.numpy().transpose((1, 2, 0))
image = std * image + mean
image = np.clip(image, 0, 1)
plt.imshow(image)
# Trains the model and returns the trained model.
def train_model(model_type, model, optimizer, criterion, num_epochs = 25):
start_time = time.time()
num_epochs_without_improvement = 0
best_acc = 0.0
best_model = copy.deepcopy(model.state_dict())
torch.save(best_model, 'model.pth')
for epoch in range(num_epochs):
print('Epoch {}/{} ...'.format(epoch + 1, num_epochs))
for phase in PHASES:
if phase == TRAIN:
model.train()
elif phase == VAL:
model.eval()
else:
continue
running_loss = 0.0
running_corrects = 0
for data, labels in data_loaders[phase]:
data = data.to(DEVICE)
labels = labels.to(DEVICE)
optimizer.zero_grad()
with torch.set_grad_enabled(phase == TRAIN):
outputs = model(data)
if phase == TRAIN and model_type == INCEPTION:
outputs = outputs.logits
_, preds = torch.max(outputs, 1)
loss = criterion(outputs, labels)
if phase == TRAIN:
loss.backward()
optimizer.step()
running_loss += loss.item() * data.size(0)
running_corrects += torch.sum(preds == labels.data)
epoch_loss = running_loss / data_sizes[phase]
epoch_acc = running_corrects.double() / data_sizes[phase]
print('{} => Loss: {:.4f}, Acc: {:.4f}'.format(phase, epoch_loss, epoch_acc))
if phase == VAL:
if epoch_acc > best_acc:
num_epochs_without_improvement = 0
best_acc = epoch_acc
best_model = copy.deepcopy(model.state_dict())
torch.save(best_model, 'model.pth')
else:
num_epochs_without_improvement += 1
if num_epochs_without_improvement == 50:
print('Exiting early...')
break
elapsed_time = time.time() - start_time
print('Took {:.0f}m {:.0f}s'.format(elapsed_time // 60, elapsed_time % 60))
print('Best Acc: {:4f}'.format(best_acc))
model.load_state_dict(best_model)
return model
# Visualizes a few predictions of the model.
def visualize_model(model, num_images = 6):
was_training = model.training
model.eval()
fig = plt.figure()
images_so_far = 0
with torch.no_grad():
for i, (data, labels) in enumerate(data_loaders[TEST]):
data = data.to(DEVICE)
labels = labels.to(DEVICE)
outputs = model(data)
_, preds = torch.max(outputs, 1)
for j in range(data.size()[0]):
images_so_far += 1
ax = plt.subplot(num_images // 2, 2, images_so_far)
ax.axis('off')
ax.set_title('Predicted: {}'.format(class_names[preds[j]]))
imshow(data.cpu().data[j])
if images_so_far == num_images:
model.train(mode = was_training)
return
model.train(mode = was_training)
# Tests the model.
def test_model(model, criterion):
was_training = model.training
model.eval()
running_loss = 0.0
running_corrects = 0
with torch.no_grad():
for data, labels in data_loaders[TEST]:
data = data.to(DEVICE)
labels = labels.to(DEVICE)
outputs = model(data)
_, preds = torch.max(outputs, 1)
loss = criterion(outputs, labels)
running_loss += loss.item() * data.size(0)
running_corrects += torch.sum(preds == labels.data)
loss = running_loss / data_sizes[TEST]
acc = running_corrects.double() / data_sizes[TEST]
print('Loss: {:4f}, Acc: {:4f}'.format(loss, acc))
model.train(mode = was_training)
```
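A caveat about `train_model` above: the `break` after 50 epochs without improvement only exits the inner `for phase` loop, so the outer epoch loop keeps running. A framework-free sketch of patience-based early stopping that actually stops the epoch loop (the function name and the mock accuracy sequence are mine, for illustration only):

```python
# Mock early stopping: `val_accs` stands in for per-epoch validation accuracy.
def train_with_patience(val_accs, patience=3):
    best_acc = 0.0
    epochs_without_improvement = 0
    stopped_at = None
    for epoch, acc in enumerate(val_accs):
        if acc > best_acc:
            best_acc = acc
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
        if epochs_without_improvement >= patience:
            stopped_at = epoch
            break  # this break leaves the *epoch* loop, not just an inner one
    return best_acc, stopped_at

best, stop = train_with_patience([0.5, 0.6, 0.6, 0.6, 0.6, 0.9])
```

With the sequence above the run stops at epoch index 4, before ever seeing the late 0.9, which is exactly the tradeoff patience controls.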
Show a sample of the dataset
```
data, labels = next(iter(data_loaders[TRAIN]))
grid = torchvision.utils.make_grid(data)
imshow(grid)
```
Define the model
```
if MODEL == INCEPTION:
model = models.inception_v3(pretrained = True, progress = True)
print(model.fc)
for param in model.parameters():
param.requires_grad = False
num_features = model.fc.in_features
model.fc = nn.Linear(num_features, len(class_names))
model = model.to(DEVICE)
optimizer = optim.SGD(model.fc.parameters(), lr = 0.001, momentum = 0.9)
elif MODEL == VGG19:
model = models.vgg19(pretrained = True, progress = True)
print(model.classifier[6])
for param in model.parameters():
param.requires_grad = False
num_features = model.classifier[6].in_features
model.classifier[6] = nn.Linear(num_features, len(class_names))
model = model.to(DEVICE)
optimizer = optim.SGD(model.classifier[6].parameters(), lr = 0.001, momentum = 0.9)
else:
print('ERROR: No model type defined!')
criterion = nn.CrossEntropyLoss()
print(model)
```
Train the model
```
model = train_model(MODEL, model, optimizer, criterion)
```
Visualize the model
```
visualize_model(model)
```
Test the model
```
model.load_state_dict(torch.load('model.pth'))
test_model(model, criterion)
```
Save the model for CPU
```
model = model.cpu()
torch.save(model.state_dict(), 'model-cpu.pth')
```
Save to Google Drive
```
torch.save(model.state_dict(), '/content/drive/My Drive/model-inception.pth')
```
| github_jupyter |
# WorkFlow
### Imports
### Load the data
### Cleaning
### FE
### Data.corr()
### Analytics
### Preprocessing
### Decomposition
### Feature Selection
### Modelling
### Random Search
### Grid Search
## Imports
```
import random
import seaborn as sns
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import sklearn
import torch,torchvision
from torch.nn import *
from torch.optim import *
# Preprocessing
from sklearn.preprocessing import (
StandardScaler,
RobustScaler,
MinMaxScaler,
MaxAbsScaler,
OneHotEncoder,
Normalizer,
Binarizer
)
# Decomposition
from sklearn.decomposition import PCA
from sklearn.decomposition import KernelPCA
# Feature Selection
from sklearn.feature_selection import VarianceThreshold
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.feature_selection import RFECV
from sklearn.feature_selection import SelectFromModel
# Model Eval
from sklearn.compose import make_column_transformer
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import cross_val_score,train_test_split
from sklearn.metrics import mean_absolute_error,mean_squared_error,accuracy_score,precision_score,f1_score,recall_score
# Models
from sklearn.neighbors import KNeighborsRegressor
from sklearn.linear_model import LogisticRegression,LogisticRegressionCV
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import GradientBoostingRegressor,AdaBoostRegressor,VotingRegressor,BaggingRegressor,RandomForestRegressor
from sklearn.svm import SVR
from sklearn.ensemble import BaggingRegressor
from sklearn.ensemble import ExtraTreesRegressor
from catboost import CatBoost,CatBoostRegressor
from xgboost import XGBRegressor,XGBRFRegressor
from flaml import AutoML
# Other
import pickle
import wandb
PROJECT_NAME = 'House-Prices-Advanced-Regression-Techniques-V9'
device = 'cuda'
np.random.seed(21)
random.seed(21)
torch.manual_seed(21)
```
### Functions
```
def make_submission(model):
pass
def valid(model,X,y,valid=False):
preds = model.predict(X)
if valid:
results = {
'val mean_absolute_error':mean_absolute_error(y_true=y,y_pred=preds),
'val mean_squared_error':mean_squared_error(y_true=y,y_pred=preds),
}
else:
results = {
'mean_absolute_error':mean_absolute_error(y_true=y,y_pred=preds),
'mean_squared_error':mean_squared_error(y_true=y,y_pred=preds),
}
return results
def train(model,X_train,X_test,y_train,y_test,name):
wandb.init(project=PROJECT_NAME,name=name)
model.fit(X_train,y_train)
wandb.log(valid(model,X_train,y_train))
wandb.log(valid(model,X_test,y_test,True))
make_submission(model)
return model
def object_to_int(data,col):
data_col = data[col].to_dict()
idx = -1
labels_and_int_index = {}
for data_col_vals in data_col.values():
if data_col_vals not in labels_and_int_index.keys():
idx += 1
labels_and_int_index[data_col_vals] = idx
new_data = []
for data_col_vals in data_col.values():
new_data.append(labels_and_int_index[data_col_vals])
data[col] = new_data
return data,idx,labels_and_int_index,new_data
def fe(data,col,quantile_max_num=0.99,quantile_min_num=0.05):
max_num = data[col].quantile(quantile_max_num)
min_num = data[col].quantile(quantile_min_num)
print(max_num)
print(min_num)
data = data[data[col] < max_num]
data = data[data[col] > min_num]
return data
def decomposition(X,pca=False,kernal_pca=False):
if pca:
pca = PCA()
X = pca.fit_transform(X)
if kernal_pca:
kernal_pca = KernelPCA()
X = kernal_pca.fit_transform(X)
return X
def feature_selection_prep_data(model,X,y,select_from_model=False,variance_threshold=False,select_k_best=False,rfecv=False):
if select_from_model:
transform = SelectFromModel(estimator=model.fit(X, y))
X = transform.transform(X)
if variance_threshold:
transform = VarianceThreshold()
X = transform.fit_transform(X)
if select_k_best:
X = SelectKBest(chi2, k='all').fit_transform(X, y)
if rfecv:
selector = RFECV(model, step=1, cv=5).fit(X, y)
X = selector.transform(X)
return X
def prep_data(X,transformer):
mct = make_column_transformer(
(transformer,list(X.columns)),
remainder='passthrough'
)
X = mct.fit_transform(X)
return X
```
## Load the data
```
data = pd.read_csv('./data/train.csv')
preproccessings = [StandardScaler,RobustScaler,MinMaxScaler,MaxAbsScaler,OneHotEncoder,Normalizer,Binarizer]
models = [
['KNeighborsRegressor',KNeighborsRegressor],
['LogisticRegression',LogisticRegression],
['LogisticRegressionCV',LogisticRegressionCV],
['DecisionTreeRegressor',DecisionTreeRegressor],
['GradientBoostingRegressor',GradientBoostingRegressor],
['AdaBoostRegressor',AdaBoostRegressor],
['RandomForestRegressor',RandomForestRegressor],
['BaggingRegressor',BaggingRegressor],
['GaussianNB',GaussianNB],
['ExtraTreesRegressor',ExtraTreesRegressor],
['CatBoost',CatBoost],
['CatBoostRegressor',CatBoostRegressor],
['XGBRegressor',XGBRegressor],
['XGBRFRegressor',XGBRFRegressor],
['ExtraTreesRegressor',ExtraTreesRegressor],
]
```
## Cleaning the data
```
X = data.drop('SalePrice',axis=1)
y = data['SalePrice']
str_cols = []
int_cols = []
for col_name,num_of_missing_rows,dtype in zip(list(X.columns),X.isna().sum(),X.dtypes):
if dtype == object:
str_cols.append(col_name)
else:
int_cols.append(col_name)
for str_col in str_cols:
X,idx,labels_and_int_index,new_data = object_to_int(X,str_col)
X.head()
nan_cols = []
for col_name,num_of_missing_rows,dtype in zip(list(X.columns),X.isna().sum(),X.dtypes):
if num_of_missing_rows > 0:
nan_cols.append(col_name)
for nan_col in nan_cols:
X[nan_col].fillna(X[nan_col].median(),inplace=True)
nan_cols = []
for col_name,num_of_missing_rows,dtype in zip(list(X.columns),X.isna().sum(),X.dtypes):
if num_of_missing_rows > 0:
nan_cols.append(col_name)
# train(GradientBoostingRegressor(),X,X,y,y,name='baseline-without-fe')
X_old = X.copy()
```
## FE
```
# for col_name in list(X.columns):
# try:
# X = X_old.copy()
# X = fe(X,col_name)
# train(GradientBoostingRegressor(),X,X,y,y,name=f'baseline-with-fe-{col_name}')
# except:
# print('*'*50)
# print('*'*50)
# X = X_old.copy()
X_corr = X_old.corr()
keep_cols = []
```
## Data.corr()
```
# for key,val in zip(X_corr.to_dict().keys(),X_corr.to_dict().values()):
# for val_key,val_vals in zip(val.keys(),val.values()):
# if val_key == key:
# pass
# else:
# if val_vals > 0.0:
# if val_key not in keep_cols:
# print(val_vals)
# keep_cols.append(val_key)
# fig,ax = plt.subplots(figsize=(25,12))
# ax = sns.heatmap(X_corr,annot=True,linewidths=0.5,fmt='.2f',cmap='YlGnBu')
# keep_cols
# len(keep_cols)
```
## Analytics
```
X.head()
```
## Preprocessing
```
X_old = X.copy()
for preproccessing in preproccessings:
X = X_old.copy()
preproccessing = preproccessing()
X = preproccessing.fit_transform(X)
train(GradientBoostingRegressor(),X,X,y,y,name=f'{preproccessing}-preproccessing')
X = X_old.copy()
X = decomposition(X, pca=True)
train(GradientBoostingRegressor(),X,X,y,y,name=f'PCA=True-kernal_pca=False-decomposition')
X = X_old.copy()
X = decomposition(X, kernal_pca=True)
train(GradientBoostingRegressor(),X,X,y,y,name=f'PCA=False-kernal_pca=True-decomposition')
```
| github_jupyter |
<a href="https://colab.research.google.com/github/flych3r/IA025_2022S1/blob/main/ex04/matheus_xavier/IA025_A04.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Softmax regression on MNIST using minibatch stochastic gradient descent
This exercise consists of training a single linear-layer model on MNIST **without** using the following pytorch functions:
- torch.nn.Linear
- torch.nn.CrossEntropyLoss
- torch.nn.NLLLoss
- torch.nn.LogSoftmax
- torch.optim.SGD
- torch.utils.data.DataLoader
## Importing the libraries
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import random
import torch
import torchvision
from torchvision.datasets import MNIST
```
## Fixing the seeds
```
random.seed(123)
np.random.seed(123)
torch.manual_seed(123)
```
## Dataset and dataloader
### Defining the minibatch size
```
batch_size = 50
```
### Loading the data, creating the dataset and the dataloader
```
dataset_dir = '../data/'
dataset_train_full = MNIST(
dataset_dir, train=True, download=True,
transform=torchvision.transforms.ToTensor()
)
print(dataset_train_full.data.shape)
print(dataset_train_full.targets.shape)
```
### Using only 1000 MNIST samples
In this exercise we will use 1000 training samples.
```
indices = torch.randperm(len(dataset_train_full))[:1000]
dataset_train = torch.utils.data.Subset(dataset_train_full, indices)
# Write here the equivalent of the code below:
# loader_train = torch.utils.data.DataLoader(dataset_train, batch_size=batch_size, shuffle=False)
import math
class DataLoader:
def __init__(self, dataset: torch.utils.data.Dataset, batch_size: int = 1, shuffle: bool = True):
self.dataset = dataset
self.batch_size = batch_size
self.shuffle = shuffle
self.idx = 0
self.indexes = np.arange(len(dataset))
self._size = math.ceil(len(dataset) / self.batch_size)
def __iter__(self):
self.idx = 0
return self
def __next__(self):
if self.idx < len(self):
if self.idx == 0 and self.shuffle:
np.random.shuffle(self.indexes)
batch = self.indexes[self.idx * self.batch_size: (self.idx + 1) * self.batch_size]
self.idx += 1
x_batch, y_batch = [], []
for b in batch:
x, y = self.dataset[b]
x_batch.append(x)
y_batch.append(y)
return torch.stack(x_batch), torch.tensor(y_batch)
raise StopIteration
def __len__(self):
return self._size
loader_train = DataLoader(dataset_train, batch_size=batch_size, shuffle=False)
print('Number of training minibatches:', len(loader_train))
x_train, y_train = next(iter(loader_train))
print("\nDimensions of the data in one minibatch:", x_train.size())
print("Min and max pixel values: ", torch.min(x_train), torch.max(x_train))
print("Data type of the images: ", type(x_train))
print("Type of the image classes: ", type(y_train))
```
## Model
```
# Write here the code to create a model equivalent to:
# model = torch.nn.Linear(28*28, 10)
# model.load_state_dict(dict(weight=torch.zeros(model.weight.shape), bias=torch.zeros(model.bias.shape)))
class Model:
def __init__(self, in_features: int, out_features: int):
self.weight = torch.zeros(out_features, in_features, requires_grad=True)
self.bias = torch.zeros(out_features, requires_grad=True)
def __call__(self, x: torch.Tensor) -> torch.Tensor:
y_pred = x.mm(torch.t(self.weight)) + self.bias.unsqueeze(0)
return y_pred
def parameters(self):
return self.weight, self.bias
model = Model(28*28, 10)
```
## Training
### Parameter initialization
```
n_epochs = 50
lr = 0.1
```
## Defining the Loss
```
# Write here the equivalent of:
# criterion = torch.nn.CrossEntropyLoss()
class CrossEntropyLoss:
def __init__(self):
self.loss = 0
def __call__(self, inputs: torch.Tensor, targets: torch.Tensor):
log_sum_exp = torch.log(torch.sum(torch.exp(inputs), dim=1, keepdim=True))
logits = inputs.gather(dim=1, index=targets.unsqueeze(dim=1))
return torch.mean(-logits + log_sum_exp)
criterion = CrossEntropyLoss()
```
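A numerical side note on the loss above: `torch.log(torch.sum(torch.exp(inputs)))` can overflow for large logits. The standard log-sum-exp trick subtracts the maximum first; a pure-Python sketch of the idea (illustrative only, not part of the assignment):

```python
import math

def log_sum_exp(values):
    # log(sum(exp(v))) = m + log(sum(exp(v - m))), with m = max(v);
    # subtracting the max keeps every exp() argument <= 0, so nothing overflows.
    m = max(values)
    return m + math.log(sum(math.exp(v - m) for v in values))

# math.exp(1000.0) alone would overflow, but this stays finite:
stable = log_sum_exp([1000.0, 1000.0])
```

For the small logits produced by this zero-initialized model the naive form works fine, which is why the assert later in the notebook still passes.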
# Defining the Optimizer
```
# Write here the equivalent of:
# optimizer = torch.optim.SGD(model.parameters(), lr)
from typing import Iterable
class SGD:
def __init__(self, parameters: Iterable[torch.Tensor], learning_rate: float):
self.parameters = parameters
self.learning_rate = learning_rate
def step(self):
for p in self.parameters:
p.data -= self.learning_rate * p.grad
def zero_grad(self):
for p in self.parameters:
p.grad = torch.zeros_like(p.data)
optimizer = SGD(model.parameters(), lr)
```
### Parameter training loop
```
epochs = []
loss_history = []
loss_epoch_end = []
total_trained_samples = 0
for i in range(n_epochs):
# Replace loader_train here according to your dataloader implementation.
for x_train, y_train in loader_train:
# Flattens the input to one dimension
inputs = x_train.view(-1, 28 * 28)
# network forward pass
outputs = model(inputs)
# computes the loss
loss = criterion(outputs, y_train)
# zero the grads, backpropagate, update parameters via gradient descent
# Write here code whose result is equivalent to the 3 lines below:
optimizer.zero_grad()
loss.backward()
optimizer.step()
total_trained_samples += x_train.size(0)
epochs.append(total_trained_samples / len(dataset_train))
loss_history.append(loss.item())
loss_epoch_end.append(loss.item())
print(f'Epoch: {i:d}/{n_epochs - 1:d} Loss: {loss.item()}')
```
### Visualizing the loss during training
```
plt.plot(epochs, loss_history)
plt.xlabel('epoch')
```
### Usual visualization of the loss, with only one point per epoch
```
n_batches_train = len(loader_train)
plt.plot(epochs[::n_batches_train], loss_history[::n_batches_train])
plt.xlabel('epoch')
# Assert on the loss history
target_loss_epoch_end = np.array([
1.1979684829711914,
0.867622971534729,
0.7226786613464355,
0.6381281018257141,
0.5809749960899353,
0.5387411713600159,
0.5056464076042175,
0.4786270558834076,
0.4558936357498169,
0.4363219141960144,
0.4191650450229645,
0.4039044976234436,
0.3901679515838623,
0.3776799440383911,
0.3662314713001251,
0.35566139221191406,
0.34584277868270874,
0.33667415380477905,
0.32807353138923645,
0.31997355818748474,
0.312318354845047,
0.3050611615180969,
0.29816246032714844,
0.29158851504325867,
0.28531041741371155,
0.2793029546737671,
0.273544579744339,
0.2680158317089081,
0.26270008087158203,
0.2575823664665222,
0.25264936685562134,
0.24788929522037506,
0.24329163134098053,
0.23884665966033936,
0.23454584181308746,
0.23038141429424286,
0.22634628415107727,
0.22243399918079376,
0.2186385989189148,
0.21495483815670013,
0.21137762069702148,
0.20790249109268188,
0.20452524721622467,
0.20124195516109467,
0.19804897904396057,
0.1949428766965866,
0.19192075729370117,
0.188979372382164,
0.18611609935760498,
0.1833282858133316])
assert np.allclose(np.array(loss_epoch_end), target_loss_epoch_end, atol=1e-6)
```
## Exercise
Write code that answers the following questions:
Which sample is classified correctly with the highest probability?
Which sample is classified incorrectly with the highest probability?
Which sample is classified correctly with the lowest probability?
Which sample is classified incorrectly with the lowest probability?
```
# Write the code here:
loader_eval = DataLoader(dataset_train, batch_size=len(dataset_train), shuffle=False)
x, y = next(iter(loader_eval))
logits = model(x.view(-1, 28 * 28))
exp_logits = torch.exp(logits)
sum_exp_logits = torch.sum(exp_logits, dim=1, keepdim=True)
softmax = (exp_logits / sum_exp_logits).detach()
y_pred = torch.argmax(softmax, dim=1)
y_proba = softmax.gather(-1, y_pred.view(-1, 1)).ravel()
corret_preditions = (y == y_pred)
wrong_predictions = (y != y_pred)
def plot_image_and_proba(images, probas, idx, title):
plt.figure(figsize=(16, 8))
x_labels = list(range(10))
plt.subplot(121)
plt.imshow(images[idx][0])
plt.subplot(122)
plt.bar(x_labels, probas[idx])
plt.xticks(x_labels)
plt.suptitle(title)
plt.show()
# Which sample is classified correctly with the highest probability?
mask = corret_preditions
idx = torch.argmax(y_proba[mask])
title = 'Predicted: {} | Probability: {:.4f} | True: {}'.format(
y_pred[mask][idx],
y_proba[mask][idx],
y[mask][idx],
)
plot_image_and_proba(x[mask], softmax[mask], idx, title)
# Which sample is classified incorrectly with the highest probability?
mask = wrong_predictions
idx = torch.argmax(y_proba[mask])
title = 'Predicted: {} | Probability: {:.4f} | True: {}'.format(
y_pred[mask][idx],
y_proba[mask][idx],
y[mask][idx],
)
plot_image_and_proba(x[mask], softmax[mask], idx, title)
# Which sample is classified correctly with the lowest probability?
mask = corret_preditions
idx = torch.argmin(y_proba[mask])
title = 'Predicted: {} | Probability: {:.4f} | True: {}'.format(
y_pred[mask][idx],
y_proba[mask][idx],
y[mask][idx],
)
plot_image_and_proba(x[mask], softmax[mask], idx, title)
# Which sample is classified incorrectly with the lowest probability?
mask = wrong_predictions
idx = torch.argmin(y_proba[mask])
title = 'Predicted: {} | Probability: {:.4f} | True: {}'.format(
y_pred[mask][idx],
y_proba[mask][idx],
y[mask][idx],
)
plot_image_and_proba(x[mask], softmax[mask], idx, title)
```
## Bonus Exercise
Implement a dataloader that accepts as an input parameter the probability distribution of the classes that should compose a batch.
For example, if the probability distribution passed as input is:
`[0.01, 0.01, 0.72, 0.2, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01]`
On average, 72% of the examples in the batch should be of class 2, 20% should be of class 3, and the rest should come from the other classes.
Also show that your implementation is correct.
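One possible sketch of such a sampler in plain Python (the class name `WeightedBatchSampler` and the toy data are my own; `samples` is any list of `(x, label)` pairs, not necessarily a torch dataset):

```python
import random
from collections import defaultdict

class WeightedBatchSampler:
    """Yields batches whose class composition follows class_probs on average."""
    def __init__(self, samples, class_probs, batch_size, seed=0):
        self.samples = samples
        self.batch_size = batch_size
        self.rng = random.Random(seed)
        self.by_class = defaultdict(list)  # class label -> sample indices
        for idx, (_, label) in enumerate(samples):
            self.by_class[label].append(idx)
        self.classes = sorted(self.by_class)
        self.weights = [class_probs[c] for c in self.classes]

    def next_batch(self):
        # Draw a class for every slot first, then a uniform sample within it.
        chosen = self.rng.choices(self.classes, weights=self.weights, k=self.batch_size)
        return [self.samples[self.rng.choice(self.by_class[c])] for c in chosen]

# Correctness check: with probabilities [0.9, 0.1] over two classes,
# roughly 90% of the drawn labels should belong to class 0.
data = [(i, i % 2) for i in range(100)]
sampler = WeightedBatchSampler(data, [0.9, 0.1], batch_size=50)
labels = [lbl for _ in range(200) for _, lbl in sampler.next_batch()]
frac_class0 = labels.count(0) / len(labels)
```

Drawing the class first and then sampling uniformly inside it gives the requested composition in expectation, regardless of how unbalanced the underlying dataset is.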
| github_jupyter |
```
%matplotlib notebook
import sys
sys.path.insert(1, '../../../script/')
import math
import numpy as np
import pandas as pd
import scipy.stats as stats
import matplotlib.pyplot as plt
import seaborn as sns
#import missingno as msno
from scipy.stats import mode
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, dendrogram, fcluster
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from collections import defaultdict
from scipy.stats.stats import pearsonr
from fim import apriori
df = pd.read_csv('data/training.csv')
df[:10]
X = df[['Make', 'Model']]
gkk = X.groupby(['Make', 'Model'])
gkk.first()
#for key, item in gkk:
# print(key)
df["Model"].value_counts()
```
# Data Cleaning
We can't use our usual cleaning function here because it also handles missing values, and the main task of pattern mining is precisely to find rules for substituting those missing values. So here we do all the data cleaning EXCEPT the handling of missing values.
<b>Typo correction</b>
```
df.iat[6895, 11] = 'MANUAL'
df.iat[42627, 6] = 'SCION'
#a = df[(df['Nationality']=='TOP LINE ASIAN') | (df['Nationality']=='OTHER ASIAN')].index
#for x in a:
# df['Nationality'].values[x] = 'ASIAN'
# WheelTypeID 0.0 correction
df.iat[3897, 12] = 1.0
df.iat[23432, 12] = 1.0
df.iat[23831, 12] = 2.0
df.iat[45666, 12] = 1.0
# SubModel: the mode over the group-bys
# (essentially the mode over the more or less specific group-bys)
df.iat[28961, 9] = '4D SEDAN SE1'
df.iat[35224, 9] = '4D SEDAN SXT FFV'
df.iat[48641, 9] = '4D SEDAN SXT FFV'
df.iat[28280, 9] = 'PASSENGER 3.9L SE'
df.iat[33225, 9] = '4D SUV 4.6L'
df.iat[50661, 9] = 'REG CAB 2.2L FFV'
df.iat[23019, 9] = '4D SEDAN'
# Size: the mode over the group-bys
df.iat[18532, 16] = 'MEDIUM SUV'
df.iat[20016, 16] = 'SMALL SUV'
df.iat[35157, 16] = 'SMALL SUV'
df.iat[15769, 16] = 'MEDIUM SUV'
```
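The "mode over the group-bys" idea used for the corrections above can be sketched generically in plain Python (the column names and rows below are made up for illustration; the notebook itself applies the corrections by hand):

```python
from collections import Counter

def impute_by_group_mode(rows, group_key, target_key):
    """Fill None values of target_key with the mode of target_key
    among rows that share the same group_key value."""
    modes = {}
    for row in rows:
        if row[target_key] is not None:
            modes.setdefault(row[group_key], Counter())[row[target_key]] += 1
    for row in rows:
        if row[target_key] is None and row[group_key] in modes:
            row[target_key] = modes[row[group_key]].most_common(1)[0][0]
    return rows

cars = [
    {'Model': 'FOCUS', 'Size': 'MEDIUM'},
    {'Model': 'FOCUS', 'Size': 'MEDIUM'},
    {'Model': 'FOCUS', 'Size': None},        # filled with the FOCUS mode
    {'Model': 'TAHOE', 'Size': 'LARGE SUV'},
]
cars = impute_by_group_mode(cars, 'Model', 'Size')
```

More specific group-bys (e.g. Make plus Model) simply mean a tighter `group_key`, at the cost of smaller groups to take the mode over.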
<b>Dropped features</b>
```
del df['PRIMEUNIT']
del df['AUCGUART']
del df['RefId']
del df['VNZIP1']
del df['Auction']
del df['IsOnlineSale']
del df['SubModel']
del df['Color']
del df['VehYear']
del df['PurchDate']
del df['Trim']
del df['TopThreeAmericanName']
del df['WheelType']
del df['BYRNO']
del df['MMRAcquisitionAuctionCleanPrice']
del df['MMRAcquisitionRetailAveragePrice']
del df['MMRAcquisitonRetailCleanPrice']
del df['MMRCurrentAuctionAveragePrice']
del df['MMRCurrentAuctionCleanPrice']
del df['MMRCurrentRetailAveragePrice']
del df['MMRCurrentRetailCleanPrice']
```
<b>Row deletion outliers</b>
```
features = ['VehOdo',
'MMRAcquisitionAuctionAveragePrice',
'VehBCost',
'WarrantyCost',
'VehicleAge']
for feature in features:
for isBadBuy in [0,1]:
q1 = df[(df.IsBadBuy == isBadBuy)][feature].quantile(0.25)
q3 = df[(df.IsBadBuy == isBadBuy)][feature].quantile(0.75)
iqr = q3 - q1
qlow = q1 - 1.5*iqr
qhigh = q3 + 1.5*iqr
df.drop(df[(df.IsBadBuy == isBadBuy) & (df[feature] <= qlow)].index, inplace=True)
df.drop(df[(df.IsBadBuy == isBadBuy) & (df[feature] >= qhigh)].index, inplace=True)
```
# Data Preparation
We have 5 numerical variables: VehicleAge, VehOdo, MMRAcquisitionAuctionAveragePrice, VehBCost and WarrantyCost.
The VehicleAge is almost a categorical variable (it has only 8 possible values: from 1 to 8), but all the others have thousands of unique values. For pattern mining this means that all these values would create distinct patterns, which is not really useful for us. So we decided to cluster 4 of these variables: VehOdo, MMRAcquisitionAuctionAveragePrice, VehBCost and WarrantyCost, and substitute each variable with its cluster class.
As the clustering method we chose the hierarchical one. We are not sure whether this holds in general, but we saw that for VehBCost hierarchical clustering produced clusters whose ranges (from minimum to maximum cost) were roughly equal, even though the cluster sizes were not; k-means, on the other hand, gave us clusters of similar size but with very different ranges.
We reasoned that in real life, when we want to buy a car, the price groups do not contain equal numbers of options (there are many cars in the medium range and only a few super expensive ones); we start our search from the amount of money we have, so the key factor is the range, not the size of the cluster.
In other papers we saw that the authors simply write that they chose 7 clusters (or 4; the exact number is not important) and nothing else. We at least inspected the candidate clusterings and found some justification for our choice. We do not want to reopen the whole clustering discussion here, so let us just assume we use hierarchical clustering.
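The range-versus-size tradeoff discussed above can be illustrated without any clustering library (this is only an illustration of the tradeoff, not the hierarchical clustering itself): equal-width bins keep the ranges equal while the counts vary, and quantile bins do the opposite.

```python
def equal_width_bins(values, k):
    # Equal ranges: every bin spans the same width, counts may differ.
    lo, hi = min(values), max(values)
    width = (hi - lo) / k
    bins = [[] for _ in range(k)]
    for v in values:
        i = min(int((v - lo) / width), k - 1)  # clamp the maximum into the last bin
        bins[i].append(v)
    return bins

def quantile_bins(values, k):
    # Equal counts: every bin holds the same number of values, ranges may differ.
    s = sorted(values)
    size = len(s) // k
    return [s[i * size:(i + 1) * size] for i in range(k)]

prices = [1000, 1100, 1200, 5000, 5100, 5200, 5300, 5400, 5500, 20000]
by_width = equal_width_bins(prices, 2)   # equal ranges, unequal counts
by_count = quantile_bins(prices, 2)      # equal counts, unequal ranges
```

Our hierarchical clusters behave like the first variant (similar ranges, different sizes), which matches how a buyer narrows down a price bracket.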
```
df[:10]
```
<b>VehBCost clustering</b>
What we did here: took VehBCost, made hierarchical clustering for this variable, chose the threshold and then substituted the VehBCost column with VehBCost-Class, which has 5 classes, each named by its [min; max] range: [1720.0; 3815.0], [3820.0; 5745.0], [5750.0; 7450.0], [7455.0; 9815.0], [9820.0; 11645.0]
```
X = df[["VehBCost"]]
scaler = StandardScaler()
scaler.fit(X)
X = scaler.transform(X)
data_dist = pdist(X, metric='euclidean')
data_link = linkage(data_dist, method='complete', metric='euclidean')
res = dendrogram(data_link, color_threshold=2, truncate_mode='lastp')
color_threshold = 2
num_clusters = 5
clusters = fcluster(data_link, color_threshold, criterion='distance')
df['VehBCost-Class'] = clusters
mapClassName = {}
for i in range(1, num_clusters+1):
    classVehBCost = df[df['VehBCost-Class'] == i]['VehBCost']
    mapClassName[i] = "[" + str(classVehBCost.min()) + "; " + str(classVehBCost.max()) + "]"
df['VehBCost-Class'] = df['VehBCost-Class'].map(mapClassName).astype(str)
del df['VehBCost']
df['VehBCost-Class'].value_counts()
```
<b>VehOdo clustering</b>
What we did here: we took VehOdo, ran hierarchical clustering on it, chose a threshold, and replaced the VehOdo column with VehOdo-Class, which has 5 classes named by their [min; max] ranges: [30212; 45443], [45449; 61627], [61630; 71437], [71439; 91679], [91683; 112029].
```
X = df[["VehOdo"]]
scaler = StandardScaler()
scaler.fit(X)
X = scaler.transform(X)
data_dist = pdist(X, metric='euclidean')
data_link = linkage(data_dist, method='complete', metric='euclidean')
res = dendrogram(data_link, color_threshold=1.8, truncate_mode='lastp')
color_threshold = 1.8
num_clusters = 5
clusters = fcluster(data_link, color_threshold, criterion='distance')
df['VehOdo-Class'] = clusters
mapClassName = {}
for i in range(1, num_clusters+1):
    classVehBCost = df[df['VehOdo-Class'] == i]['VehOdo']
    mapClassName[i] = "[" + str(classVehBCost.min()) + "; " + str(classVehBCost.max()) + "]"
df['VehOdo-Class'] = df['VehOdo-Class'].map(mapClassName).astype(str)
del df['VehOdo']
df['VehOdo-Class'].value_counts()
```
<b>MMRAcquisitionAuctionAveragePrice</b>
What we did here: we took MMRAcquisitionAuctionAveragePrice, ran hierarchical clustering on it, chose a threshold, and replaced the MMRAcquisitionAuctionAveragePrice column with MMRAcquisitionAuctionAveragePrice-Class, which has 4 classes named by their [min; max] ranges: [884.0; 3619.0], [3620.0; 6609.0], [6610.0; 10416.0], [10417.0; 12951.0].
There are also missing values here, so there is one extra group: NaN. Keep in mind that the value 0.0 is not a real price but a missing value, so as a first step we convert 0.0 to NaN.
```
# An acquisition price of 0 is really a missing value, so convert it to NaN.
# Assign only to this column; assigning to df.loc[mask] alone would wipe whole rows.
df.loc[df["MMRAcquisitionAuctionAveragePrice"] == 0, "MMRAcquisitionAuctionAveragePrice"] = np.nan
X = df[df['MMRAcquisitionAuctionAveragePrice'].notnull()][['MMRAcquisitionAuctionAveragePrice']]
scaler = StandardScaler()
scaler.fit(X)
X = scaler.transform(X)
data_dist = pdist(X, metric='euclidean')
data_link = linkage(data_dist, method='complete', metric='euclidean')
res = dendrogram(data_link, color_threshold=1.8, truncate_mode='lastp')
color_threshold = 1.8
num_clusters = 4
clusters = fcluster(data_link, color_threshold, criterion='distance')
df["MMRAcquisitionAuctionAveragePrice-Class"] = np.nan
df.loc[df["MMRAcquisitionAuctionAveragePrice"].notnull(), "MMRAcquisitionAuctionAveragePrice-Class"] = clusters
mapClassName = {}
for i in range(1, num_clusters+1):
    classVehBCost = df[df['MMRAcquisitionAuctionAveragePrice-Class'] == i]['MMRAcquisitionAuctionAveragePrice']
    mapClassName[i] = "[" + str(classVehBCost.min()) + "; " + str(classVehBCost.max()) + "]"
df['MMRAcquisitionAuctionAveragePrice-Class'] = df['MMRAcquisitionAuctionAveragePrice-Class'].map(mapClassName).astype(str)
del df['MMRAcquisitionAuctionAveragePrice']
df['MMRAcquisitionAuctionAveragePrice-Class'].value_counts()
```
<b>WarrantyCost</b>
What we did here: we took WarrantyCost, ran hierarchical clustering on it, chose a threshold, and replaced the WarrantyCost column with WarrantyCost-Class, which has 5 classes named by their [min; max] ranges: [462.0; 728.0], [754.0; 1223.0], [1241.0; 1808.0], [1857.0; 2282.0], [2322.0; 2838.0]. There are also missing values here, so there is one extra group: NaN.
```
X = df[df['WarrantyCost'].notnull()][['WarrantyCost']]
scaler = StandardScaler()
scaler.fit(X)
X = scaler.transform(X)
data_dist = pdist(X, metric='euclidean')
data_link = linkage(data_dist, method='complete', metric='euclidean')
res = dendrogram(data_link, color_threshold=1.2, truncate_mode='lastp')
color_threshold = 1.2
num_clusters = 5
clusters = fcluster(data_link, color_threshold, criterion='distance')
df["WarrantyCost-Class"] = np.nan
df.loc[df["WarrantyCost"].notnull(), "WarrantyCost-Class"] = clusters
mapClassName = {}
for i in range(1, num_clusters+1):
    classVehBCost = df[df['WarrantyCost-Class'] == i]['WarrantyCost']
    mapClassName[i] = "[" + str(classVehBCost.min()) + "; " + str(classVehBCost.max()) + "]"
df['WarrantyCost-Class'] = df['WarrantyCost-Class'].map(mapClassName).astype(str)
del df['WarrantyCost']
df['WarrantyCost-Class'].value_counts()
```
So after all the transformations we should get something like this:
```
df[:10]
```
But to get this result we had to run hierarchical clustering four times, which is really time consuming. So we created a shortcut that applies the cluster boundaries directly, so that from now on we do not have to wait so long to bin the numerical variables.
```
# VehBCost
df["VehBCost-Class"] = np.nan
criteria = [df['VehBCost'].between(1720, 3815), df['VehBCost'].between(3820, 5745), df['VehBCost'].between(5750, 7450), df['VehBCost'].between(7455, 9815), df['VehBCost'].between(9820, 11645)]
values = ["[1720; 3815]", "[3820; 5745]", "[5750; 7450]", "[7455; 9815]", "[9820; 11645]"]
df['VehBCost-Class'] = np.select(criteria, values, 0)
del df["VehBCost"]
# VehOdo
df["VehOdo-Class"] = np.nan
criteria = [df['VehOdo'].between(30212, 45443), df['VehOdo'].between(45449, 61627), df['VehOdo'].between(61630, 71437), df['VehOdo'].between(71439, 91679), df['VehOdo'].between(91683, 112029)]
values = ["[30212; 45443]", "[45449; 61627]", "[61630; 71437]", "[71439; 91679]", "[91683; 112029]"]
df['VehOdo-Class'] = np.select(criteria, values, 0)
del df["VehOdo"]
# MMRAcquisitionAuctionAveragePrice
df.loc[df["MMRAcquisitionAuctionAveragePrice"] == 0, "MMRAcquisitionAuctionAveragePrice"] = np.nan
df["MMRAcquisitionAuctionAveragePrice-Class"] = np.nan
criteria = [df['MMRAcquisitionAuctionAveragePrice'].between(884, 3619), df['MMRAcquisitionAuctionAveragePrice'].between(3620, 6609), df['MMRAcquisitionAuctionAveragePrice'].between(6610, 10416), df['MMRAcquisitionAuctionAveragePrice'].between(10417, 12951)]
values = ["[884; 3619]", "[3620; 6609]", "[6610; 10416]", "[10417; 12951]"]
df['MMRAcquisitionAuctionAveragePrice-Class'] = np.select(criteria, values, np.nan)
del df["MMRAcquisitionAuctionAveragePrice"]
# WarrantyCost
df["WarrantyCost-Class"] = np.nan
criteria = [df['WarrantyCost'].between(462, 728), df['WarrantyCost'].between(754, 1223), df['WarrantyCost'].between(1241, 1808), df['WarrantyCost'].between(1857, 2282), df['WarrantyCost'].between(2322, 2838)]
values = ["[462; 728]", "[754; 1223]", "[1241; 1808]", "[1857; 2282]", "[2322; 2838]"]
df['WarrantyCost-Class'] = np.select(criteria, values, np.nan)
del df["WarrantyCost"]
```
# Apriori algorithm
```
help(apriori)
baskets = df.values.tolist()
baskets[0]
itemsets = apriori(baskets, supp=80, zmin=1, target='a')
print('Number of itemsets:', len(itemsets))
itemsets
rules = apriori(baskets, supp=10, zmin=2, target='r', conf=60, report='ascl')
print('Number of rule:', len(rules))
for r in rules:
    if r[0] == 1:
        print(r)
```
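To make the `supp` and `conf` parameters concrete (both are percentages in this `apriori` implementation), the underlying definitions can be checked by hand. This sketch uses made-up toy baskets and item names, not the real data:

```python
# Toy transactions; each basket is a set of items.
baskets = [
    {'bad_buy', 'high_odo'},
    {'bad_buy', 'high_odo', 'old'},
    {'good_buy', 'low_odo'},
    {'bad_buy', 'old'},
]

def support(itemset, baskets):
    """Fraction of baskets containing every item of `itemset`."""
    hits = sum(1 for b in baskets if itemset <= b)
    return hits / len(baskets)

def confidence(antecedent, consequent, baskets):
    """P(consequent | antecedent), estimated from the baskets."""
    return support(antecedent | consequent, baskets) / support(antecedent, baskets)

print(support({'bad_buy'}, baskets))                   # 0.75
print(confidence({'high_odo'}, {'bad_buy'}, baskets))  # 1.0
```

A rule passes the thresholds above when its support is at least 0.10 and its confidence at least 0.60 on this scale.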
# Science User Case - Inspecting a Candidate List
Ogle et al. (2016) mined the NASA/IPAC Extragalactic Database (NED) to identify a new type of galaxy: Superluminous Spiral Galaxies. Here's the paper: https://ui.adsabs.harvard.edu//#abs/2016ApJ...817..109O/abstract
Table 1 lists the positions of these Super Spirals. Based on those positions, let's create multiwavelength cutouts for each super spiral to see what is unique about this new class of objects.
## 1. Import the Python modules we'll be using.
```
# Suppress unimportant warnings.
import warnings
warnings.filterwarnings("ignore", module="astropy.io.votable.*")
warnings.filterwarnings("ignore", module="pyvo.utils.xml.*")
warnings.filterwarnings('ignore', '.*RADECSYS=*', append=True)
import matplotlib.pyplot as plt
import numpy as np
from astropy.coordinates import SkyCoord
from astropy.io import fits
from astropy.nddata import Cutout2D
import astropy.visualization as vis
from astropy.wcs import WCS
from astroquery.ned import Ned
import pyvo as vo
```
## 2. Search NED for objects in this paper.
Consult QuickReference.md to figure out how to use astroquery to search NED for all objects in a paper, based on the refcode of the paper. Inspect the resulting astropy table.
## 3. Filter the NED results.
The results from NED will include galaxies, but also other kinds of objects. Print the 'Type' column to see the full range of classifications. Next, print the 'Type' of just the first source in the table, in order to determine its data type (since Python 3 distinguishes between strings and byte strings). Finally, use the data type information to filter the results so that we only keep the galaxies in the list.
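As a plain-Python illustration of the byte-string pitfall (the real result is an astropy Table; the rows and type codes below are invented for the example):

```python
# In some astropy versions the 'Type' column holds byte strings, so
# b'G' != 'G'. These fake rows show the comparison and the filter.
rows = [
    {'Object Name': 'OBJ1', 'Type': b'G'},       # galaxy
    {'Object Name': 'OBJ2', 'Type': b'RadioS'},  # not a galaxy
    {'Object Name': 'OBJ3', 'Type': b'G'},       # galaxy
]

first_type = rows[0]['Type']
print(type(first_type))   # bytes, not str

# Keep only the galaxies, matching on the byte string b'G'.
galaxies = [r for r in rows if r['Type'] == b'G']
print(len(galaxies))
```

With a real astropy Table the same idea applies as a boolean mask, e.g. `table[table['Type'] == b'G']` (or `== 'G'` if the column is a plain string type).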
## 4. Search the NAVO Registry for image resources.
The paper selected super spirals using WISE, SDSS, and GALEX images. Search the NAVO registry for all image resources, using the 'service_type' search parameter. How many image resources are currently available?
## 5. Search the NAVO Registry for image resources that will allow you to search for AllWISE images.
There are hundreds of image resources...too many to quickly read through. Try adding the 'keyword' search parameter to your registry search, and find the image resource you would need to search the AllWISE images. Remember from the Known Issues that 'keywords' must be a list.
## 6. Select the AllWISE image service that you are interested in.
Hint: there should be only one service after searching with ['allwise']
## 7. Make a SkyCoord from the first galaxy in the NED list.
```
ra = galaxies['RA'][0]
dec = galaxies['DEC'][0]
pos = SkyCoord(ra, dec, unit = 'deg')
```
## 8. Search for a list of AllWISE images that cover this galaxy.
How many images are returned? Which are you most interested in?
## 9. Use the .to_table() method to view the results as an Astropy table.
## 10. From the result in 8., select the first record for an image taken in WISE band W1 (3.6 micron)
Hints:
* Loop over records and test on the `.bandpass_id` attribute of each record
* Print the `.title` and `.bandpass_id` of the record you find, to verify it is the right one.
## 11. Visualize this AllWISE image.
```
allwise_w1_image = fits.open(allwise_image_record.getdataurl())
fig = plt.figure()
wcs = WCS(allwise_w1_image[0].header)
ax = fig.add_subplot(1, 1, 1, projection=wcs)
ax.imshow(allwise_w1_image[0].data, cmap='gray_r', origin='lower', vmax = 10)
ax.scatter(ra, dec, transform=ax.get_transform('fk5'), s=500, edgecolor='red', facecolor='none')
```
## 12. Plot a cutout of the AllWISE image, centered on your position.
Try a 60 arcsecond cutout. Use `Cutout2D` that we imported earlier.
## 13. Try visualizing a cutout of a GALEX image that covers your position.
Repeat steps 4, 5, 6, 8 through 12 for GALEX.
## 14. Try visualizing a cutout of an SDSS image that covers your position.
Hints:
* Search the registry using `keywords=['sloan']`
* Find the service with a `short_name` of `b'SDSS SIAP'`
* From Known Issues, recall that an empty string must be passed to the `format` parameter due to a bug in the service.
* After obtaining your search results, select r-band images using the `.title` attribute of the records that are returned, since `.bandpass_id` is not populated.
## 15. Try looping over the first few positions and plotting multiwavelength cutouts.
### Importing Libraries
```
import tensorflow as tf
from keras.preprocessing.image import ImageDataGenerator
tf.__version__
```
### Data Preprocessing
#### Preprocessing the training set
- preprocessing the training set helps prevent overfitting
- generating new images, with feature scaling (the `rescale` param)
- data augmentation transformations: i) shear ii) zoom iii) horizontal flip

**`target_size` is the final image size when the images are fed into the CNN (bigger images are slower)**
```
train_datagen = ImageDataGenerator(
rescale=1./255,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True)
training_set = train_datagen.flow_from_directory(
'dataset/training_set',
target_size=(64, 64),
batch_size=32,
class_mode='binary')
```
#### Preprocessing the test set
- Only do feature scaling
- don't apply augmentation transformations
```
test_datagen = ImageDataGenerator(rescale=1./255)
test_set = test_datagen.flow_from_directory(
'dataset/test_set',
target_size=(64, 64),
batch_size=32,
class_mode='binary')
```
### Building the CNN model
##### Initialising the CNN model
```
cnn = tf.keras.models.Sequential()
```
##### Add Convolution layer
```
cnn.add(tf.keras.layers.Conv2D(filters=32, kernel_size=3, activation='relu',input_shape=[64,64,3]))
```
##### Add Pooling Layer to convolutional layer (max pooling)
```
cnn.add(tf.keras.layers.MaxPool2D(pool_size=2, strides=2))
```
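To build intuition, the effect of `MaxPool2D(pool_size=2, strides=2)` can be sketched on a tiny feature map with NumPy alone (the 4×4 input below is made up):

```python
import numpy as np

# A made-up 4x4 feature map.
fmap = np.array([[ 1,  2,  3,  4],
                 [ 5,  6,  7,  8],
                 [ 9, 10, 11, 12],
                 [13, 14, 15, 16]])

# pool_size=2, strides=2: split into non-overlapping 2x2 blocks and take
# the max of each block, halving both spatial dimensions.
pooled = fmap.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(pooled)   # block maxima: 6, 8, 14, 16
```

Each output value keeps only the strongest activation of its 2×2 window, which is what makes pooling cheap translation tolerance.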
##### Add second Convolutional Layer
```
cnn.add(tf.keras.layers.Conv2D(filters=32, kernel_size=3, activation='relu'))
cnn.add(tf.keras.layers.MaxPool2D(pool_size=2, strides=2))
```
##### Add flattening layer
```
cnn.add(tf.keras.layers.Flatten())
```
#### Add Fully Connected Layer
```
#units refers to hidden neurons
cnn.add(tf.keras.layers.Dense(units=128, activation='relu'))
```
#### Output layer
```
#units =1 because this is a binary classification
cnn.add(tf.keras.layers.Dense(units=1, activation='sigmoid'))
```
### Compiling the CNN
```
cnn.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
```
#### Train the CNN
- train on training set and evaluating on the test set
```
cnn.fit(x=training_set, validation_data=test_set, epochs=25)
```
#### Making a single prediction
```
import numpy as np
from keras.preprocessing import image
test_image = image.load_img('dataset/single_prediction/cat_or_dog_2.jpg', target_size=(64,64))
test_image = image.img_to_array(test_image)
#add the batch dimension to test image since images were trained in batches
test_image = np.expand_dims(test_image, axis=0)
result = cnn.predict(test_image)
print(training_set.class_indices)
#in result[0][0] the first index represents the batch and the second index represents the actual prediction
if result[0][0] > 0.5:
prediction = 'dog'
else:
prediction = 'cat'
print(prediction)
```

<font size=3 color="midnightblue" face="arial">
<h1 align="center">Escuela de Ciencias Básicas, Tecnología e Ingeniería</h1>
</font>
<font size=3 color="navy" face="arial">
<h1 align="center">ECBTI</h1>
</font>
<font size=2 color="darkorange" face="arial">
<h1 align="center">Course:</h1>
</font>
<font size=2 color="navy" face="arial">
<h1 align="center">Introduction to the Python Programming Language</h1>
</font>
<font size=1 color="darkorange" face="arial">
<h1 align="center">February 2020</h1>
</font>
<h2 align="center">Session 11 - The Python Ecosystem - Pandas</h2>
## Instructor:
> <strong> *Carlos Alberto Álvarez Henao, I.C. Ph.D.* </strong>
## *Pandas*
Pandas is an open-source Python module (library) that provides flexible data structures and makes it possible to work with data efficiently (much of Pandas is implemented in `C/Cython` for good performance).
The official Pandas page is available at [this link](http://pandas.pydata.org "Pandas").
Before *Pandas*, *Python* was used mainly for data manipulation and preparation, and contributed very little to data analysis. *Pandas* solved this problem. Using *Pandas*, we can carry out the five typical steps of data processing and analysis, regardless of where the data comes from:
- load,
- prepare,
- manipulate,
- model, and
- analyze.
## Main features of *Pandas*
- A fast and efficient DataFrame object with default and custom indexing.
- Tools for loading data into in-memory data objects from different file formats.
- Data alignment and integrated handling of missing data.
- Reshaping and pivoting of data sets.
- Label-based slicing, indexing and subsetting of large data sets.
- Columns can be inserted into or deleted from a data structure.
- Group-by functionality for aggregation and transformations.
- High-performance merging and joining of data.
- Time-series functionality.
### Setting up *Pandas*
The standard Python distribution does not include the `pandas` module. It must be installed, and the procedure differs depending on the environment and operating system.
If you use the *[Anaconda](https://anaconda.org/)* environment, the simplest alternative is to install it with `pip`,
or with *conda*.
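The install command cells appear to have been dropped from this export; the usual forms (which may vary per environment) are:

```shell
# With pip:
pip install pandas

# Or, in an Anaconda environment, with conda:
conda install pandas
```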
### Data structures in *Pandas*
Pandas offers several data structures that will be very useful and that we will look at little by little. The data structures it offers today are:
- **`Series`:** One-dimensional indexed (labeled) arrays, similar to dictionaries. They can be created from dictionaries or from lists.
- **`DataFrame`:** Similar to the tables of relational databases such as `SQL`.
- **`Panel`, `Panel4D` and `PanelND`:** Allow working with more than two dimensions. Since working with arrays of more than two dimensions is complex and rarely needed, we will not cover panels in this introductory Pandas tutorial.
## Dimensions and description
The best way to think about these data structures is that the higher-dimensional structure contains the lower-dimensional one:
a `DataFrame` contains `Series`, and a `Panel` contains `DataFrames`.
| Data structure | Dimensions | Description |
|----------------|:---------:|-------------|
|`Series` | 1 | Homogeneous 1-dimensional array of immutable size |
|`DataFrame` | 2 | Tabular 2-dimensional structure, mutable in size, with heterogeneous columns|
|`Panel` | 3 | General 3-dimensional array of variable size|
Building and handling two- or higher-dimensional arrays is a tedious task: the burden falls on the user to consider the orientation of the data set when writing functions. The *Pandas* data structures reduce this mental effort.
- For example, with tabular data (`DataFrame`) it is semantically more useful to think in terms of the index (the rows) and the columns, rather than axis 0 and axis 1.
### Mutability
All *Pandas* structures are value-mutable (their values can be changed) and, except for `Series`, all are size-mutable. `DataFrames` are the most widely used; `Panel` is used much less.
## Loading the *Pandas* module
```
import pandas as pd
import numpy as np
```
## `Series`:
A Series is defined with a constructor that takes the following parameters:
- `data` is the data vector.
- `index` (optional) is the vector of indices the Series will use. If the indices are dates, a `TimeSeries` instance is created instead of a `Series`. If omitted, it defaults to `np.arange(n)`.
- `dtype`, the data type. If omitted, the type is inferred.
- `copy`, copies the data; defaults to `False`.
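Putting those four parameters together (the constructor snippet itself seems to be missing from this export), a fully spelled-out call looks like this:

```python
import numpy as np
import pandas as pd

# General form: pd.Series(data, index=None, dtype=None, copy=False)
s = pd.Series(data=np.array([10, 20, 30]),
              index=['x', 'y', 'z'],
              dtype=float,
              copy=True)
print(s)
```

All four arguments are optional except `data` in practice; the examples below exercise them one at a time.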
Let's see an example of how to create this kind of data container. First we will create a `Series` and let `Pandas` create the indices automatically:
#### Creating an empty `Series` (no data)
```
s = pd.Series()
print(s)
```
#### Creating a `Series` with data
If the data comes from an `ndarray`, the index passed must have the same length. If no index is passed, the default index is `range(n)`, where `n` is the length of the array, i.e. $[0, 1, 2, \ldots, n-1]$.
```
data = np.array(['a','b','c','d'])
s = pd.Series(data)
print(s)
```
- Note that no index was passed, so by default the indices from `0` to `len(data) - 1`, i.e. from `0` to `3`, were assigned.
```
data = np.array(['a','b','c','d'])
s = pd.Series(data,index=[150,1,"can?",10])
print(s)
```
- Here we passed the index values ourselves. Now the output shows the custom indices.
#### Creating a `Series` from a dictionary
```
data = {'a' : 0., 'b' : 1.,True : 2.}
s = pd.Series(data)
print(s)
```
- The dictionary `keys` are used to build the index.
```
data = {'a' : 0., 'b' : 1., 'c' : 2.}
s = pd.Series(data,index=['b','c','d','a'])
print(s)
```
- The order of the index is preserved, and the missing element is filled with `NaN` (*Not a Number*).
#### Creating a `Series` from a scalar
```
s = pd.Series(5, index=[0, 1, 2, 3])
print(s)
```
#### Accessing `Series` data by position
Data in a `Series` can be accessed much like in an `ndarray`.
```
s = pd.Series([1,2,3,4,5],index = ['a','b','c','d','e'])
print(s['c']) # retrieve the element labeled 'c'
```
Now, let's retrieve the first three elements of the `Series`. A leading `:` extracts all elements from the start up to the given position; with two parameters (separated by `:`) the elements between the two positions are extracted (excluding the stop position).
```
print(s[:3]) # retrieve the first three elements
```
Retrieve the last three elements:
```
print(s[-3:])
```
#### Retrieving data by index label
Retrieve a single element using its index value:
```
print(s['a'])
```
Retrieve multiple elements using a list of index values:
```
print(s[['a','c','d']])
```
If a label is not contained in the index, an exception (error) is raised:
```
print(s['f'])
```
* Let's create a series of random values with automatically generated indices:
```
# series with automatic indices
serie = pd.Series(np.random.random(10))
print('Series with automatic indices')
print('{}'.format(serie))
print(type(serie))
```
* Now let's create a series where we specify the indices to use ourselves (user-defined):
```
serie = pd.Series(np.random.randn(4),
index = ['itzi','kikolas','dieguete','nicolasete'])
print('Series with user-defined indices')
print('{}'.format(serie))
print(type(serie))
```
* Finally, let's create a time series using dates as indices.
```
# (time) series with dates as indices
n = 60
serie = pd.Series(np.random.randn(n),
                  index = pd.date_range('2001/01/01', periods = n))
print('Time series with date indices')
print('{}'.format(serie))
print(type(serie))
```
In the previous examples we created the series from a `numpy array`, but we can create them from many other things: lists, dictionaries, numpy arrays, ... Let's see some examples:
```
serie_lista = pd.Series([i*i for i in range(10)])
print('Series from a list')
print('{}'.format(serie_lista))
```
Series from a dictionary:
```
dicc = {'cuadrado de {}'.format(i) : i*i for i in range(10)}
serie_dicc = pd.Series(dicc)
print('Series from a dictionary')
print('{}'.format(serie_dicc))
```
Series from the values of another series...
```
serie_serie = pd.Series(serie_dicc.values)
print('Series from the values of another (pandas) series')
print('{}'.format(serie_serie))
```
Series from a constant value...
```
serie_cte = pd.Series(-999, index = np.arange(10))
print('Series from a constant value')
print('{}'.format(serie_cte))
```
A series (`Series` or `TimeSeries`) can be handled just like a one-dimensional `numpy array` or like a dictionary. Let's see examples of this:
```
serie = pd.Series(np.random.randn(10),
index = ['a','b','c','d','e','f','g','h','i','j'])
print('Series we will use in this example:')
print('{}'.format(serie))
```
Examples of `numpy array`-like behavior:
```
print('serie.max() {}'.format(serie.max()))
print('serie.sum() {}'.format(serie.sum()))
print('serie.abs()')
print('{}'.format(serie.abs()))
print('serie[serie > 0]')
print('{}'.format(serie[serie > 0]))
#...
print('\n')
```
Examples of dictionary-like behavior:
```
print("It behaves like a dictionary:")
print("================================")
print("serie['a'] \n {}".format(serie['a']))
print("'a' in the series \n {}".format('a' in serie))
print("'z' in the series \n {}".format('z' in serie))
```
Operations are 'vectorized' and carried out element by element, with the elements aligned by index.
- If, for example, two series are added and an element is missing from one of them (i.e. the index does not exist in that series), the result for that index will be `NaN`.
- In short, a union of the indices is performed, which works differently from `numpy arrays`.
The scheme can be seen in the following example:
```
s1 = serie[1:]
s2 = serie[:-1]
suma = s1 + s2
print(' s1 s2 s1 + s2')
print('------------------ ------------------ ------------------')
for clave in sorted(set(list(s1.keys()) + list(s2.keys()))):
    # convert to str so missing labels (None from get) can still be formatted
    print('{0:1} {1:>20} + {0:1} {2:>20} = {0:1} {3:>20}'.format(clave,
          str(s1.get(clave)),
          str(s2.get(clave)),
          str(suma.get(clave))))
```
The previous code uses the `get` method to avoid a `KeyError`, which would be raised if, e.g., `s1['a']` were used.
## `DataFrame`
A `DataFrame` is a 2-dimensional structure, i.e. the data is aligned in tabular form in rows and columns.
### `DataFrame` features
- The columns can be of different types.
- Its size can change.
- Labeled axes (rows and columns).
- Arithmetic operations can be performed on rows and columns.
### `pandas.DataFrame`
A `DataFrame` structure can be created with a constructor that takes the following parameters:
- **`data`:** Can take many forms, such as `ndarray`, `Series`, `map`, `lists`, `dict`, constants, or another `DataFrame`.
- **`index`:** The row labels; the index to use for the resulting frame. Optional; defaults to `np.arange(n)` if no index is given.
- **`columns`:** The column labels; defaults to `np.arange(n)` if not specified.
- **`dtype`:** The data type of each column.
- **`copy`:** Used to copy the data. Defaults to `False`.
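For reference (the constructor snippet itself appears to be missing from this export), a call with every parameter spelled out looks like this:

```python
import pandas as pd

# General form: pd.DataFrame(data, index=None, columns=None, dtype=None, copy=...)
df = pd.DataFrame(data=[[1, 2], [3, 4]],
                  index=['r1', 'r2'],
                  columns=['c1', 'c2'],
                  dtype=float)
print(df)
```

The sections below exercise these parameters one input type at a time.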
#### Creating a `DataFrame`
A Pandas `DataFrame` can be created from different inputs, such as `lists`, `dictionaries`, `Series`, `ndarrays`, or other `DataFrames`.
#### Creating an empty `DataFrame`
```
df = pd.DataFrame()
print(df)
```
#### Creating a `DataFrame` from lists
```
data = [1,2,3,4,5]
df = pd.DataFrame(data)
print(df)
data = [['Alex',10],['Bob',12],['Clarke',13]]
df = pd.DataFrame(data,columns=['Name','Age'])
print(df)
df = pd.DataFrame(data,columns=['Name','Age'],dtype=float)
print(df)
```
#### Creating a `DataFrame` from a dictionary of `ndarrays`/`lists`
All the `ndarrays` must have the same length. If an index is passed, its length must equal the length of the arrays.
If no index is passed, the default index is `range(n)`, where `n` is the length of the arrays.
```
data = {'Name':['Tom', 'Jack', 'Steve', 'Ricky'],'Age':[28,34,29,42]}
df = pd.DataFrame(data)
print(df)
```
- Note the values $0,1,2,3$. They are the default index assigned using `range(n)`.
Now we will create an indexed `DataFrame` using `arrays`:
```
df = pd.DataFrame(data, index=['rank1','rank2','rank3','rank4'])
print(df)
```
#### Creating a `DataFrame` from a list of dictionaries
A list of dictionaries can be passed as input data to create a `DataFrame`. By default the `keys` are used as column names.
```
data = [{'a': 1, 'b': 2},{'a': 5, 'b': 10, 'c': 20}]
df = pd.DataFrame(data)
print(df)
```
- A `NaN` appears where data is missing.
The following example shows how a `DataFrame` is created by passing a list of dictionaries together with the row indices:
```
df = pd.DataFrame(data, index=['first', 'second'])
print(df)
```
The following example shows how a `DataFrame` is created by passing a list of dictionaries together with row and column indices:
```
# With two column indices whose values match the dictionary keys
df1 = pd.DataFrame(data, index=['first', 'second'], columns=['a', 'b'])
# With two column indices, one of them with a different name
df2 = pd.DataFrame(data, index=['first', 'second'], columns=['a', 'b1'])
print(df1)
print(df2)
```
- Note that the `DataFrame` `df2` is created with a column index that is not a key of the dictionary, so `NaN`s are generated in its place; whereas `df1` is created with column indices equal to the dictionary keys, so no `NaN` is added.
#### Creating a `DataFrame` from a dictionary of `Series`
A dictionary of `Series` can be passed to form a `DataFrame`. The resulting index is the union of all the passed series indices.
```
d = {'one' : pd.Series([1, 2, 3], index=['a', 'b', 'c']),
'two' : pd.Series([1, 2, 3, 4], index=['a', 'b', 'c', 'd'])}
df = pd.DataFrame(d)
print(df)
```
- For series `one` there is no label `'d'`, but in the result `NaN` is added for label `d`.
Now let's look at column selection, addition and deletion through examples.
#### Column selection
```
df = pd.DataFrame(d)
print(df['one'])
```
#### Column addition
```
df = pd.DataFrame(d)
# Adding a new column to an existing DataFrame object with column label by passing new series
print("Adding a new column by passing a Series:\n")
df['three']=pd.Series([10,20,30],index=['a','b','c'])
print(df,'\n')
print("Adding a new column using existing DataFrame columns:\n")
df['four']=df['one']+df['three']
print(df)
```
#### Column deletion
```
d = {'one' : pd.Series([1, 2, 3], index=['a', 'b', 'c']),
'two' : pd.Series([1, 2, 3, 4], index=['a', 'b', 'c', 'd']),
'three' : pd.Series([10,20,30], index=['a','b','c'])}
df = pd.DataFrame(d)
print ("Our dataframe is:\n")
print(df, '\n')
# using del function
print ("Deleting the first column using DEL function:\n")
del df['one']
print(df,'\n')
# using pop function
print ("Deleting another column using POP function:\n")
df.pop('two')
print(df)
```
### Row selection, addition and deletion
#### Selection by label
Rows can be selected by passing the row label to the `loc` function:
```
d = {'one' : pd.Series([1, 2, 3], index=['a', 'b', 'c']),
'two' : pd.Series([1, 2, 3, 4], index=['a', 'b', 'c', 'd'])}
df = pd.DataFrame(d)
print(df)
print(df.loc['b'])
```
- The result is a series with the `DataFrame` column names as labels, and the name of the series is the label used to retrieve it.
#### Selection by integer location
Rows can be selected by passing the integer location to the `iloc` function.
```
df = pd.DataFrame(d)
print(df.iloc[2])
```
#### Row slicing
Multiple rows can be selected using the `:` operator:
```
df = pd.DataFrame(d)
print(df[2:4])
```
#### Adding rows
New rows can be added to a `DataFrame` with the `append` function, which appends the rows at the end (note that `DataFrame.append` was removed in pandas 2.0; `pd.concat` is the current way to do this).
```
df = pd.DataFrame([[1, 2], [3, 4]])
df2 = pd.DataFrame([[5, 6], [7, 8]])
print(df)
df = pd.concat([df, df2])  # DataFrame.append was removed in pandas 2.0
print(df)
```
#### Deleting rows
Use the index label to delete or drop rows from a `DataFrame`. If the label is duplicated, multiple rows will be deleted.
Note that in the following example the labels are duplicated. Let's drop one label and see how many rows are dropped.
```
df = pd.DataFrame([[1, 2], [3, 4]], columns = ['a','b'])
df2 = pd.DataFrame([[5, 6], [7, 8]], columns = ['a','b'])
df = pd.concat([df, df2])  # DataFrame.append was removed in pandas 2.0
print(df)
# Drop rows with label 0
df = df.drop(0)
print(df)
```
- In the example above, two rows were dropped because both carried the same label `0`.
### Reading / Writing in Pandas
One of Pandas' great strengths is the power it brings when reading and/or writing data files.
- Pandas can read data from `csv`, `excel`, `HDF5`, `sql`, `json`, `html`, ... files.
When working with third-party data, which can come from very diverse sources, one of the most tedious parts of the job is getting the data ready to work with: filling gaps, putting dates into a usable format, skipping headers, ...
Without a doubt, one of the most heavily used functions is `read_csv()`, which offers great flexibility when reading a plain-text file.
```
help(pd.read_csv)
```
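Many of the cleanup chores just mentioned (gaps, dates, headers) can be handled directly by `read_csv` parameters. A self-contained sketch using an in-memory file (the data shown is hypothetical):

```python
import io
import pandas as pd

raw = """# exported 2020-01-01
fecha;valor
2020-01-01;1,5
2020-01-02;NA
"""

df = pd.read_csv(io.StringIO(raw),
                 sep=';',               # field separator
                 skiprows=1,            # skip the comment line at the top
                 parse_dates=['fecha'], # parse dates while reading
                 na_values=['NA'],      # mark gaps as NaN
                 decimal=',')           # European decimal comma
print(df.dtypes)
```

The same parameters work when passing a file path instead of a `StringIO` buffer.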
[This link](http://pandas.pydata.org/pandas-docs/stable/io.html "pandas docs") lists all the formats Pandas can work with.
Each of these format-specific readers (`read_FormatName`) accepts a large number of parameters that can be found in the documentation; we will not cover them all here, as the explanation would be far too long.
For most of the cases we will encounter, the relevant parameters are the following: the file to read, its separator, whether the first line of the file contains the column names and, if it does not, the column names to use passed via `names`.
Let's see an example reading the file with the users' data from a dataset, where the first 10 lines look like this:
```
# Load users info
userHeader = ['ID', 'Sexo', 'Edad', 'Ocupacion', 'PBOX']
users = pd.read_csv('Datasets/users.txt', engine='python', sep='::', header=None, names=userHeader)
# print the first 10 users
print('# First 10 users: \n%s' % users[:10])
```
To write a `DataFrame` to a text file you can use the [writer methods](http://pandas.pydata.org/pandas-docs/stable/io.html) for whatever format you need.
- For example, the `to_csv()` method writes the `DataFrame` in the standard comma-separated format; but we can also tell the method to use a different separator, for example a hyphen, instead of a comma.
If we want to write the `users` `DataFrame` to a file this way, we can do it as follows:
```
users.to_csv('Datasets/MyUsers3.txt', sep='-')
```
### Merge
A very powerful feature of Pandas is the ability to join data with `merge` (the equivalent of a `JOIN` in databases) whenever the data allows it.
The dataset we are working with shows this feature very intuitively, since its data was obtained from a relational database.
Let's see how to do a `JOIN` (a `merge`) of the `users.txt` and `ratings.txt` files on `user_id`:
```
# Load users info
userHeader = ['user_id', 'gender', 'age', 'ocupation', 'zip']
users = pd.read_csv('Datasets/users.txt', engine='python', sep='::', header=None, names=userHeader)
# Load ratings
ratingHeader = ['user_id', 'movie_id', 'rating', 'timestamp']
ratings = pd.read_csv('Datasets/ratings.txt', engine='python', sep='::', header=None, names=ratingHeader)
# Merge tables users + ratings on the user_id field
merger_ratings_users = pd.merge(users, ratings, on='user_id')
print('%s' % merger_ratings_users[:10])
```
Just as we did the `JOIN` of users and ratings, we can do the same thing adding the movie data as well:
```
userHeader = ['user_id', 'gender', 'age', 'ocupation', 'zip']
users = pd.read_csv('Datasets/users.txt', engine='python', sep='::', header=None, names=userHeader)
movieHeader = ['movie_id', 'title', 'genders']
movies = pd.read_csv('Datasets/movies.txt', engine='python', sep='::', header=None, names=movieHeader)
ratingHeader = ['user_id', 'movie_id', 'rating', 'timestamp']
ratings = pd.read_csv('Datasets/ratings.txt', engine='python', sep='::', header=None, names=ratingHeader)
# Merge data
#mergeRatings = pd.merge(pd.merge(users, ratings), movies)
mergeRatings = pd.merge(merger_ratings_users, movies)
```
If we wanted to inspect one element of this newly created `JOIN` (for example position 1000), we could do it as follows:
```
info1000 = mergeRatings.loc[1000]
print('Info at position 1000 of the table: \n%s' % info1000)
```
### Working with Data: Indexing and Selection
How can we select, add, delete or move columns and rows?
- To select a column, just use its name as if indexing a dictionary (or as an attribute).
- To add a column, simply use a column name that does not exist yet and assign it the values for that column.
- To delete a column we can use `del` or the `DataFrame`'s `pop` method.
- To move a column we can combine the approaches above.
As an example, let's create a `DataFrame` with random data and select the values of one column:
```
import numpy as np

df = pd.DataFrame(np.random.randn(5,3),
index = ['primero','segundo','tercero','cuarto','quinto'],
columns = ['velocidad', 'temperatura','presion'])
print(df)
print(df['velocidad'])
print(df.velocidad)
```
We can access the `velocidad` column in two ways:
- using the column name as if it were a dictionary key, or
- using the column name as an attribute.
If the column names are numbers, the second option cannot be used.
Let's add a new column to the `DataFrame`. It is as simple as using a column name that does not exist yet and assigning it the data:
```
df['velocidad_maxima'] = np.random.randn(df.shape[0])
print(df)
```
But what if we want to add the column at a specific position? For that we can use the `insert` method (and, along the way, see how to delete a column):
**Option 1:**
- Delete the `velocidad_maxima` column at the end of the df using `del`
- Re-insert the deleted column at the position we specify
```
print(df)
columna = df['velocidad_maxima']
del df['velocidad_maxima']
df.insert(1, 'velocidad_maxima', columna)
print(df)
```
**Option 2:** Using the `pop` method: delete with `pop` and add the removed column back again at the last position.
```
print(df)
columna = df.pop('velocidad_maxima')
print(df)
#print(columna)
df.insert(3, 'velocidad_maxima', columna)
print(df)
```
To select specific data from a `DataFrame` we can use the index, a slice (*slicing*), boolean values, the column name, ...
- Select the velocity column:
```
print(df.velocidad)
```
- Select all columns for the row whose index equals `tercero`:
```
print(df.xs('tercero'))
```
- Select all rows whose index is between `tercero` and `quinto` (in this case both endpoints are inclusive):
```
print(df.loc['tercero':'quinto'])
```
- Select all velocity values where temperature > 0:
```
print(df['velocidad'][df['temperatura']>0])
```
Select all values of a column by index using an integer slice (`slice`).
- In this case the upper bound of the slice is not included (standard Python behavior):
```
print(df.iloc[1:3])
```
- Select rows and columns:
```
# iloc takes integer positions only, so select the columns by name afterwards
print(df.iloc[1:3][['velocidad', 'presion']])
# (the old df.ix indexer has been removed from modern pandas)
```
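A point worth remembering from the examples above: label slices with `.loc` include both endpoints, while integer slices with `.iloc` follow the usual Python convention and exclude the upper bound. A quick check:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.arange(15).reshape(5, 3),
                  index=['primero', 'segundo', 'tercero', 'cuarto', 'quinto'],
                  columns=['velocidad', 'temperatura', 'presion'])

print(len(df.loc['segundo':'cuarto']))  # label slice: both endpoints included
print(len(df.iloc[1:3]))                # integer slice: upper bound excluded
```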
```
import pandas as pd
import numpy as np
import skbio
from collections import Counter
import seaborn as sns
import matplotlib.pyplot as plt
from scipy import stats
from statsmodels.formula.api import ols
import researchpy as rp
luminescence_means = "../../data/luminescence/to_be_sorted/24.11.19/output_means.csv"
luminescence_raw = "../../data/luminescence/to_be_sorted/24.11.19/output_raw.csv"
luminescence_means_df = pd.read_csv(luminescence_means, header=0)
luminescence_raw_df = pd.read_csv(luminescence_raw, header=0)
luminescence_means_df
luminescence_raw_df
#add promoter names column
luminescence_raw_df['Promoter'] = luminescence_raw_df.name
luminescence_raw_df.loc[luminescence_raw_df.name == '71 + 72', 'Promoter'] = 'UBQ10'
luminescence_raw_df.loc[luminescence_raw_df.name == '25+72', 'Promoter'] = 'NIR1'
luminescence_raw_df.loc[luminescence_raw_df.name == '35+72', 'Promoter'] = 'NOS'
luminescence_raw_df.loc[luminescence_raw_df.name == '36+72', 'Promoter'] = 'STAP4'
luminescence_raw_df.loc[luminescence_raw_df.name == '92+72', 'Promoter'] = 'NRP'
luminescence_raw_df
#set style to ticks
sns.set(style="ticks", color_codes=True)
plot = sns.catplot(x="Promoter", y="nluc/fluc", data=luminescence_raw_df, hue='condition', kind='violin')
#plot points
ax = sns.swarmplot(x="Promoter", y="nluc/fluc", data=luminescence_raw_df, color=".25")
ax.get_figure().savefig('../../data/plots/luminescence/24.11.19/luminescence_violin.pdf', format='pdf')
#bar chart, 95% confidence intervals
plot = sns.barplot(x="Promoter", y="nluc/fluc", hue="condition", data=luminescence_raw_df)
plt.ylabel("Mean_luminescence")
#plot raw UBQ10
plot = sns.barplot(x="Promoter", y="fluc_luminescence", hue="condition", data=luminescence_raw_df[luminescence_raw_df.Promoter == 'UBQ10'])
plt.ylabel("Mean_luminescence")
```
### get names of each condition for later
```
pd.Categorical(luminescence_raw_df.condition)
names = luminescence_raw_df.condition.unique()
for name in names:
print(name)
#get list of promoters
pd.Categorical(luminescence_raw_df.Promoter)
prom_names = luminescence_raw_df.Promoter.unique()
for name in prom_names:
print(name)
```
### test normality
```
#returns test statistic, p-value
for name1 in prom_names:
    for name in names:
        subset = luminescence_raw_df['nluc/fluc'][(luminescence_raw_df.condition == name) &
                                                  (luminescence_raw_df.Promoter == name1)]
        print('{} {}: {}'.format(name1, name, stats.shapiro(subset)))
```
#### not normal
```
#test variance
stats.levene(luminescence_raw_df['nluc/fluc'][luminescence_raw_df.condition == names[0]],
luminescence_raw_df['nluc/fluc'][luminescence_raw_df.condition == names[1]],
luminescence_raw_df['nluc/fluc'][luminescence_raw_df.condition == names[2]])
# summary statistics of nluc/fluc for each promoter
test = luminescence_raw_df.groupby('Promoter')['nluc/fluc'].describe()
test
```
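Since Shapiro–Wilk rejected normality, a nonparametric alternative to ANOVA such as the Kruskal–Wallis test would be a usual next step. A sketch with synthetic stand-in data (the real call would pass the per-condition `nluc/fluc` slices used above):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# hypothetical stand-ins for the nluc/fluc values of three conditions
group_a = rng.exponential(1.0, size=30)
group_b = rng.exponential(1.5, size=30)
group_c = rng.exponential(2.0, size=30)

# Kruskal-Wallis H-test: compares medians without assuming normality
statistic, p_value = stats.kruskal(group_a, group_b, group_c)
print(statistic, p_value)
```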
# Ordinary Differential Equations Exercise 1
## Imports
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from scipy.integrate import odeint
from ipywidgets import interact, fixed  # IPython.html.widgets is deprecated
```
## Euler's method
[Euler's method](http://en.wikipedia.org/wiki/Euler_method) is the simplest numerical approach for solving a first order ODE numerically. Given the differential equation
$$ \frac{dy}{dx} = f(y(x), x) $$
with the initial condition:
$$ y(x_0)=y_0 $$
Euler's method performs updates using the equations:
$$ y_{n+1} = y_n + h f(y_n,x_n) $$
$$ h = x_{n+1} - x_n $$
Write a function `solve_euler` that implements the Euler method for a 1d ODE and follows the specification described in the docstring:
```
def solve_euler(derivs, y0, x):
"""Solve a 1d ODE using Euler's method.
Parameters
----------
derivs : function
The derivative of the diff-eq with the signature deriv(y,x) where
y and x are floats.
y0 : float
The initial condition y[0] = y(x[0]).
x : np.ndarray, list, tuple
The array of x values at which to solve the diff-eq.
Returns
-------
y : np.ndarray
Array of solutions y[i] = y(x[i])
"""
y = [y0]
for n in range(1, len(x)):
y.append(y[n-1] + (x[n] - x[n-1])*derivs(y[n-1], x[n-1]))
return np.asarray(y)
assert np.allclose(solve_euler(lambda y, x: 1, 0, [0,1,2]), [0,1,2])
```
The [midpoint method](https://en.wikipedia.org/wiki/Midpoint_method) is another numerical method for solving the above differential equation. In general it is more accurate than the Euler method. It uses the update equation:
$$ y_{n+1} = y_n + h f\left(y_n+\frac{h}{2}f(y_n,x_n),x_n+\frac{h}{2}\right) $$
Write a function `solve_midpoint` that implements the midpoint method for a 1d ODE and follows the specification described in the docstring:
```
def solve_midpoint(derivs, y0, x):
"""Solve a 1d ODE using the Midpoint method.
Parameters
----------
derivs : function
The derivative of the diff-eq with the signature deriv(y,x) where y
and x are floats.
y0 : float
The initial condition y[0] = y(x[0]).
x : np.ndarray, list, tuple
The array of x values at which to solve the diff-eq.
Returns
-------
y : np.ndarray
Array of solutions y[i] = y(x[i])
"""
# YOUR CODE HERE
y = [y0]
for n in range(1, len(x)):
h = x[n] - x[n-1]
y.append(y[n-1] + h*derivs(y[n-1] + h/2*derivs(y[n-1], x[n-1]), x[n-1] + h/2))
return np.asarray(y)
assert np.allclose(solve_midpoint(lambda y, x: 1, 0, [0,1,2]), [0,1,2])
```
You are now going to solve the following differential equation:
$$
\frac{dy}{dx} = x + 2y
$$
which has the analytical solution:
$$
y(x) = 0.25 e^{2x} - 0.5 x - 0.25
$$
First, write a `solve_exact` function that compute the exact solution and follows the specification described in the docstring:
```
def solve_exact(x):
"""compute the exact solution to dy/dx = x + 2y.
Parameters
----------
x : np.ndarray
Array of x values to compute the solution at.
Returns
-------
y : np.ndarray
Array of solutions at y[i] = y(x[i]).
"""
# YOUR CODE HERE
return 0.25*np.exp(2*x) - 0.5*x - 0.25
assert np.allclose(solve_exact(np.array([0,1,2])),np.array([0., 1.09726402, 12.39953751]))
```
In the following cell you are going to solve the above ODE using four different algorithms:
1. Euler's method
2. Midpoint method
3. `odeint`
4. Exact
Here are the details:
* Generate an array of x values with $N=11$ points over the interval $[0,1]$ ($h=0.1$).
* Define the `derivs` function for the above differential equation.
* Using the `solve_euler`, `solve_midpoint`, `odeint` and `solve_exact` functions to compute
the solutions using the 4 approaches.
Visualize the solutions on a single figure with two subplots:
1. Plot the $y(x)$ versus $x$ for each of the 4 approaches.
2. Plot $\left|y(x)-y_{exact}(x)\right|$ versus $x$ for each of the 3 numerical approaches.
Your visualization should have legends, labeled axes, titles and be customized for beauty and effectiveness.
While your final plot will use $N=11$ points, first try making $N$ larger and smaller to see how that affects the errors of the different approaches.
```
# YOUR CODE HERE
x = np.linspace(0, 1, 11)

def derivs(y, x):
    # dy/dx = x + 2y; return a scalar so odeint treats this as a 1d problem
    return x + 2*y

y0 = 0.0  # matches the exact solution, which has y(0) = 0
y_euler = solve_euler(derivs, y0, x)
y_midpoint = solve_midpoint(derivs, y0, x)
y_odeint = odeint(derivs, y0, x).ravel()
y_exact = solve_exact(x)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))
for y, label in [(y_euler, 'Euler'), (y_midpoint, 'Midpoint'),
                 (y_odeint, 'odeint'), (y_exact, 'Exact')]:
    ax1.plot(x, y, label=label)
ax1.set(xlabel='$x$', ylabel='$y(x)$', title='Solutions of $dy/dx = x + 2y$')
ax1.legend(loc='best')
for y, label in [(y_euler, 'Euler'), (y_midpoint, 'Midpoint'), (y_odeint, 'odeint')]:
    ax2.plot(x, np.abs(y - y_exact), label=label)
ax2.set(xlabel='$x$', ylabel='$|y(x) - y_{exact}(x)|$', title='Absolute error')
ax2.legend(loc='best')
assert True # leave this for grading the plots
```
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.
# Deploying a web service to Azure Kubernetes Service (AKS)
This notebook shows the steps for deploying a service: registering a model, creating an image, provisioning a cluster (one time action), and deploying a service to it.
We then test and delete the service, image and model.
```
from azureml.core import Workspace
from azureml.core.compute import AksCompute, ComputeTarget
from azureml.core.webservice import Webservice, AksWebservice
from azureml.core.image import Image
from azureml.core.model import Model
import azureml.core
print(azureml.core.VERSION)
```
# Get workspace
Load existing workspace from the config file info.
```
from azureml.core.workspace import Workspace
ws = Workspace.from_config()
print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\n')
```
# Register the model
Register an existing trained model, adding a description and tags.
```
#Register the model
from azureml.core.model import Model
model = Model.register(model_path = "sklearn_regression_model.pkl", # this points to a local file
model_name = "sklearn_regression_model.pkl", # this is the name the model is registered as
tags = {'area': "diabetes", 'type': "regression"},
description = "Ridge regression model to predict diabetes",
workspace = ws)
print(model.name, model.description, model.version)
```
# Create an image
Create an image using the registered model and the script that will load and run the model.
```
%%writefile score.py
import pickle
import json
import numpy
from sklearn.externals import joblib
from sklearn.linear_model import Ridge
from azureml.core.model import Model
def init():
global model
# note here "sklearn_regression_model.pkl" is the name of the model registered under
# this is a different behavior than before when the code is run locally, even though the code is the same.
model_path = Model.get_model_path('sklearn_regression_model.pkl')
# deserialize the model file back into a sklearn model
model = joblib.load(model_path)
# note you can pass in multiple rows for scoring
def run(raw_data):
try:
data = json.loads(raw_data)['data']
data = numpy.array(data)
result = model.predict(data)
# you can return any data type as long as it is JSON-serializable
return result.tolist()
except Exception as e:
error = str(e)
return error
from azureml.core.conda_dependencies import CondaDependencies
myenv = CondaDependencies.create(conda_packages=['numpy','scikit-learn'])
with open("myenv.yml","w") as f:
f.write(myenv.serialize_to_string())
from azureml.core.image import ContainerImage
image_config = ContainerImage.image_configuration(execution_script = "score.py",
runtime = "python",
conda_file = "myenv.yml",
description = "Image with ridge regression model",
tags = {'area': "diabetes", 'type': "regression"}
)
image = ContainerImage.create(name = "myimage1",
# this is the model object
models = [model],
image_config = image_config,
workspace = ws)
image.wait_for_creation(show_output = True)
```
# Provision the AKS Cluster
This is a one time setup. You can reuse this cluster for multiple deployments after it has been created. If you delete the cluster or the resource group that contains it, then you would have to recreate it.
```
# Use the default configuration (can also provide parameters to customize)
prov_config = AksCompute.provisioning_configuration()
aks_name = 'my-aks-9'
# Create the cluster
aks_target = ComputeTarget.create(workspace = ws,
name = aks_name,
provisioning_configuration = prov_config)
%%time
aks_target.wait_for_completion(show_output = True)
print(aks_target.provisioning_state)
print(aks_target.provisioning_errors)
```
## Optional step: Attach existing AKS cluster
If you have existing AKS cluster in your Azure subscription, you can attach it to the Workspace.
```
'''
# Use the default configuration (can also provide parameters to customize)
resource_id = '/subscriptions/92c76a2f-0e1c-4216-b65e-abf7a3f34c1e/resourcegroups/raymondsdk0604/providers/Microsoft.ContainerService/managedClusters/my-aks-0605d37425356b7d01'
create_name='my-existing-aks'
# Create the cluster
aks_target = AksCompute.attach(workspace=ws, name=create_name, resource_id=resource_id)
# Wait for the operation to complete
aks_target.wait_for_completion(True)
'''
```
# Deploy web service to AKS
```
#Set the web service configuration (using default here)
aks_config = AksWebservice.deploy_configuration()
%%time
aks_service_name ='aks-service-1'
aks_service = Webservice.deploy_from_image(workspace = ws,
name = aks_service_name,
image = image,
deployment_config = aks_config,
deployment_target = aks_target)
aks_service.wait_for_deployment(show_output = True)
print(aks_service.state)
```
# Test the web service
We test the web service by passing data.
```
%%time
import json
test_sample = json.dumps({'data': [
[1,2,3,4,5,6,7,8,9,10],
[10,9,8,7,6,5,4,3,2,1]
]})
test_sample = bytes(test_sample,encoding = 'utf8')
prediction = aks_service.run(input_data = test_sample)
print(prediction)
```
# Clean up
Delete the service, image and model.
```
%%time
aks_service.delete()
image.delete()
model.delete()
```
##### Copyright 2020 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# TF Lattice Custom Estimators
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/lattice/tutorials/custom_estimators"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/lattice/blob/master/docs/tutorials/custom_estimators.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/lattice/blob/master/docs/tutorials/custom_estimators.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/lattice/docs/tutorials/custom_estimators.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
## Overview
You can use custom estimators to create arbitrarily monotonic models using TFL layers. This guide outlines the steps needed to create such estimators.
## Setup
Installing TF Lattice package:
```
#@test {"skip": true}
!pip install tensorflow-lattice
```
Importing required packages:
```
import tensorflow as tf
import logging
import numpy as np
import pandas as pd
import sys
import tensorflow_lattice as tfl
from tensorflow import feature_column as fc
from tensorflow_estimator.python.estimator.canned import optimizers
from tensorflow_estimator.python.estimator.head import binary_class_head
logging.disable(sys.maxsize)
```
Downloading the UCI Statlog (Heart) dataset:
```
csv_file = tf.keras.utils.get_file(
'heart.csv', 'http://storage.googleapis.com/download.tensorflow.org/data/heart.csv')
df = pd.read_csv(csv_file)
target = df.pop('target')
train_size = int(len(df) * 0.8)
train_x = df[:train_size]
train_y = target[:train_size]
test_x = df[train_size:]
test_y = target[train_size:]
df.head()
```
Setting the default values used for training in this guide:
```
LEARNING_RATE = 0.1
BATCH_SIZE = 128
NUM_EPOCHS = 1000
```
## Feature Columns
As for any other TF estimator, data needs to be passed to the estimator, which is typically via an input_fn and parsed using [FeatureColumns](https://www.tensorflow.org/guide/feature_columns).
```
# Feature columns.
# - age
# - sex
# - ca number of major vessels (0-3) colored by flourosopy
# - thal 3 = normal; 6 = fixed defect; 7 = reversable defect
feature_columns = [
fc.numeric_column('age', default_value=-1),
fc.categorical_column_with_vocabulary_list('sex', [0, 1]),
fc.numeric_column('ca'),
fc.categorical_column_with_vocabulary_list(
'thal', ['normal', 'fixed', 'reversible']),
]
```
Note that categorical features do not need to be wrapped by a dense feature column, since the `tfl.layers.CategoricalCalibration` layer can directly consume category indices.
## Creating input_fn
As for any other estimator, you can use input_fn to feed data to the model for training and evaluation.
```
train_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
x=train_x,
y=train_y,
shuffle=True,
batch_size=BATCH_SIZE,
num_epochs=NUM_EPOCHS,
num_threads=1)
test_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
x=test_x,
y=test_y,
shuffle=False,
batch_size=BATCH_SIZE,
num_epochs=1,
num_threads=1)
```
## Creating model_fn
There are several ways to create a custom estimator. Here we will construct a `model_fn` that calls a Keras model on the parsed input tensors. To parse the input features, you can use `tf.feature_column.input_layer`, `tf.keras.layers.DenseFeatures`, or `tfl.estimators.transform_features`. If you use the latter, you will not need to wrap categorical features with dense feature columns, and the resulting tensors will not be concatenated, which makes it easier to use the features in the calibration layers.
To construct a model, you can mix and match TFL layers or any other Keras layers. Here we create a calibrated lattice Keras model out of TFL layers and impose several monotonicity constraints. We then use the Keras model to create the custom estimator.
```
def model_fn(features, labels, mode, config):
"""model_fn for the custom estimator."""
del config
input_tensors = tfl.estimators.transform_features(features, feature_columns)
inputs = {
key: tf.keras.layers.Input(shape=(1,), name=key) for key in input_tensors
}
lattice_sizes = [3, 2, 2, 2]
lattice_monotonicities = ['increasing', 'none', 'increasing', 'increasing']
lattice_input = tf.keras.layers.Concatenate(axis=1)([
tfl.layers.PWLCalibration(
input_keypoints=np.linspace(10, 100, num=8, dtype=np.float32),
# The output range of the calibrator should be the input range of
# the following lattice dimension.
output_min=0.0,
output_max=lattice_sizes[0] - 1.0,
monotonicity='increasing',
)(inputs['age']),
tfl.layers.CategoricalCalibration(
# Number of categories including any missing/default category.
num_buckets=2,
output_min=0.0,
output_max=lattice_sizes[1] - 1.0,
)(inputs['sex']),
tfl.layers.PWLCalibration(
input_keypoints=[0.0, 1.0, 2.0, 3.0],
output_min=0.0,
        output_max=lattice_sizes[2] - 1.0,
# You can specify TFL regularizers as tuple
# ('regularizer name', l1, l2).
kernel_regularizer=('hessian', 0.0, 1e-4),
monotonicity='increasing',
)(inputs['ca']),
tfl.layers.CategoricalCalibration(
num_buckets=3,
output_min=0.0,
        output_max=lattice_sizes[3] - 1.0,
# Categorical monotonicity can be partial order.
# (i, j) indicates that we must have output(i) <= output(j).
# Make sure to set the lattice monotonicity to 'increasing' for this
# dimension.
monotonicities=[(0, 1), (0, 2)],
)(inputs['thal']),
])
output = tfl.layers.Lattice(
lattice_sizes=lattice_sizes, monotonicities=lattice_monotonicities)(
lattice_input)
training = (mode == tf.estimator.ModeKeys.TRAIN)
model = tf.keras.Model(inputs=inputs, outputs=output)
logits = model(input_tensors, training=training)
if training:
optimizer = optimizers.get_optimizer_instance_v2('Adagrad', LEARNING_RATE)
else:
optimizer = None
head = binary_class_head.BinaryClassHead()
return head.create_estimator_spec(
features=features,
mode=mode,
labels=labels,
optimizer=optimizer,
logits=logits,
trainable_variables=model.trainable_variables,
update_ops=model.updates)
```
## Training and Estimator
Using the `model_fn` we can create and train the estimator.
```
estimator = tf.estimator.Estimator(model_fn=model_fn)
estimator.train(input_fn=train_input_fn)
results = estimator.evaluate(input_fn=test_input_fn)
print('AUC: {}'.format(results['auc']))
```
Introduction to Spark
====
This lecture is an introduction to the Spark framework for distributed computing, the basic data and control flow abstractions, and getting comfortable with the functional programming style needed to write a Spark application.
- What problem does Spark solve?
- SparkContext and the master configuration
- RDDs
- Actions
- Transforms
- Key-value RDDs
- Example - word count
- Persistence
- Merging key-value RDDs
Learning objectives
----
- Overview of Spark
- Working with Spark RDDs
- Actions and transforms
- Working with Spark DataFrames
- Using the `ml` and `mllib` for machine learning
#### Not covered
- Spark GraphX (library for graph algorithms)
- Spark Streaming (library for streaming (microbatch) data)
Installation
----
You should use the current version of Spark at https://spark.apache.org/downloads.html. Choose the package `Pre-built for Hadoop2.7 and later`. The instructions below use the version current as of 9 April 2018.
```bash
cd ~
wget https://www.apache.org/dyn/closer.lua/spark/spark-2.3.0/spark-2.3.0-bin-hadoop2.7.tgz
tar xzf spark-2.3.0-bin-hadoop2.7.tgz
rm spark-2.3.0-bin-hadoop2.7.tgz
mv spark-2.3.0-bin-hadoop2.7 spark
```
Install the `py4j` Python package needed for `pyspark`
```
pip install py4j
```
You need to define these environment variables before starting the notebook.
```bash
export SPARK_HOME=~/spark
export PYSPARK_PYTHON=python3
export PYTHONPATH=$SPARK_HOME/python/:$PYTHONPATH
export PYSPARK_SUBMIT_ARGS="--packages ${PACKAGES} pyspark-shell"
```
In Unix/Mac, this can be done in `.bashrc` or `.bash_profile`.
For the adventurous, see [Running Spark on an AWS EMR cluster](https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-spark.html).
Resources
----
- [Quick Start](http://spark.apache.org/docs/latest/quick-start.html)
- [Spark Programming Guide](http://spark.apache.org/docs/latest/programming-guide.html)
- [DataFramews, DataSets and SQL](http://spark.apache.org/docs/latest/sql-programming-guide.html)
- [MLLib](http://spark.apache.org/docs/latest/mllib-guide.html)
- [GraphX](http://spark.apache.org/docs/latest/graphx-programming-guide.html)
- [Streaming](http://spark.apache.org/docs/latest/streaming-programming-guide.html)
Overview of Spark
----
With massive data, we need to load, extract, transform and analyze the data on multiple computers to overcome I/O and processing bottlenecks. However, when working on multiple computers (possibly hundreds to thousands), there is a high risk of failure in one or more nodes. Distributed computing frameworks are designed to handle failures gracefully, allowing the developer to focus on algorithm development rather than system administration.
The first such widely used open source framework was the Hadoop MapReduce framework. This provided transparent fault tolerance, and popularized the functional programming approach to distributed computing. The Hadoop work-flow uses repeated invocations of the following instructions:
```
load dataset from disk to memory
map function to elements of dataset
reduce results of map to get new aggregate dataset
save new dataset to disk
```
Hadoop has two main limitations:
- the repeated saving and loading of data to disk can be slow, and makes interactive development very challenging
- restriction to only `map` and `reduce` constructs results in increased code complexity, since every problem must be tailored to the `map-reduce` format
Spark is a more recent framework for distributed computing that addresses the limitations of Hadoop by allowing the use of in-memory datasets for iterative computation, and providing a rich set of functional programming constructs to make the developer's job easier. Spark also provides libraries for common big data tasks, such as the need to run SQL queries, perform machine learning and process large graphical structures.
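The classic word-count example from the outline illustrates this functional style well. Sketched in plain Python, the three steps below correspond to Spark's `flatMap`, `map`, and `reduceByKey` transformations; in `pyspark` the same pipeline would be chained on an RDD instead of run eagerly:

```python
from collections import defaultdict

lines = ["to be or not to be", "to do is to be"]

# flatMap: split each line into words
words = [w for line in lines for w in line.split()]
# map: pair each word with a count of 1
pairs = [(w, 1) for w in words]
# reduceByKey: sum the counts for each word
counts = defaultdict(int)
for word, n in pairs:
    counts[word] += n
print(dict(counts))
```

With a SparkContext `sc`, the equivalent would be `sc.parallelize(lines).flatMap(str.split).map(lambda w: (w, 1)).reduceByKey(lambda a, b: a + b)`.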
Languages supported
----
Fully supported
- Java
- Scala
- Python
- R
## Distributed computing background
With distributed computing, you interact with a network of computers that communicate via message passing as if issuing instructions to a single computer.

Source: https://image.slidesharecdn.com/distributedcomputingwithspark-150414042905-conversion-gate01/95/distributed-computing-with-spark-21-638.jpg
### Hadoop and Spark
- There are 3 major components to a distributed system
- storage
- cluster management
- computing engine
- Hadoop is a framework that provides all 3
- distributed storage (HDFS)
- cluster management (YARN)
- computing engine (MapReduce)
- Spark only provides the (in-memory) distributed computing engine, and relies on other frameworks for storage and cluster management. It is most frequently used on top of the Hadoop framework, but can also use other distributed storage (e.g. S3 and Cassandra) or cluster management (e.g. Mesos) software.
### Distributed storage

Source: http://slideplayer.com/slide/3406872/12/images/15/HDFS+Framework+Key+features+of+HDFS:.jpg
### Role of YARN
- Resource manager (manages cluster resources)
- Scheduler
- Applications manager
- Node manager (manages single machine/node)
- manages data containers/partitions
- monitors resource usage
- reports to resource manager

Source: https://kannandreams.files.wordpress.com/2013/11/yarn1.png
### YARN operations

Source: https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/yarn_architecture.gif
### Hadoop MapReduce versus Spark
Spark has several advantages over Hadoop MapReduce
- Use of RAM rather than disk means faster processing for multi-step operations
- Allows interactive applications
- Allows real-time applications
- More flexible programming API (full range of functional constructs)

Source: https://i0.wp.com/s3.amazonaws.com/acadgildsite/wordpress_images/bigdatadeveloper/10+steps+to+master+apache+spark/hadoop_spark_1.png
### Overall Ecosystem

Source: https://cdn-images-1.medium.com/max/1165/1*z0Vm749Pu6mHdlyPsznMRg.png
### Spark Ecosystem
- Spark is written in Scala, a functional programming language built on top of the Java Virtual Machine (JVM)
- Traditionally, you have to code in Scala to get the best performance from Spark
- With Spark DataFrames and vectorized operations (Spark 2.3 onwards) Python is now competitive

Source: https://data-flair.training/blogs/wp-content/uploads/apache-spark-ecosystem-components.jpg
### Livy and Spark magic
- Livy provides a REST interface to a Spark cluster.

Source: https://cdn-images-1.medium.com/max/956/0*-lwKpnEq0Tpi3Tlj.png
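As a sketch of what talking to Livy looks like, the REST workflow is: create a session, then post code as statements. The host and session id below are placeholders (`8998` is Livy's default port), and no request is actually sent here since no cluster is attached:

```python
import json
from urllib import request

LIVY_URL = "http://livy-server:8998"   # hypothetical host; 8998 is Livy's default port

# 1. Create an interactive PySpark session
session_payload = json.dumps({"kind": "pyspark"}).encode()
req = request.Request(f"{LIVY_URL}/sessions", data=session_payload,
                      headers={"Content-Type": "application/json"})

# 2. Once the session is idle, submit code as a statement
stmt_payload = json.dumps({"code": "sc.parallelize(range(10)).sum()"}).encode()
stmt = request.Request(f"{LIVY_URL}/sessions/0/statements", data=stmt_payload,
                       headers={"Content-Type": "application/json"})

# request.urlopen(req) would actually send these requests
print(req.full_url, stmt.full_url)
```

The `%%spark` magics shown below wrap exactly this kind of REST traffic so you don't have to issue it by hand.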
### PySpark

Source: http://i.imgur.com/YlI8AqEl.png
### Resilient distributed datasets (RDDs)

Source: https://mapr.com/blog/real-time-streaming-data-pipelines-apache-apis-kafka-spark-streaming-and-hbase/assets/blogimages/msspark/imag12.png
### Spark fault tolerance

Source: https://image.slidesharecdn.com/deep-dive-with-spark-streamingtathagata-dasspark-meetup2013-06-17-130623151510-phpapp02/95/deep-dive-with-spark-streaming-tathagata-das-spark-meetup-20130617-13-638.jpg
```
%%spark
%%info
```
### Configuring allocated resources
Note the proxyUser from `%%info`.
```
%%configure -f
{"driverMemory": "2G",
"numExecutors": 10,
"executorCores": 2,
"executorMemory": "2048M",
"proxyUser": "user06021",
"conf": {"spark.master": "yarn"}}
```
### Python version
The default version of Python with the PySpark kernel is Python 2.
```
import sys
sys.version_info
```
### Remember to shut down the notebook after use
When you are done running Spark jobs with this notebook, go to the notebook's file menu, and select the "Close and Halt" option to terminate the notebook's kernel and clear the Spark session.
| github_jupyter |
# Analyzing Street Trees: Diversity Indices and the 10/20/30 Rule
This notebook analyzes the diversity indices of the street trees inside and outside the city center you've selected, and then checks the tree inventory against the 10/20/30 rule, discussed below.
```
# library import
import pandas as pd
import geopandas as gpd
import numpy as np
import matplotlib.pyplot as plt
import descartes
import treeParsing as tP
```
# Import Tree Inventory and City Center Boundary
Import your tree data and city center boundary data below. These data may use any geospatial data format (SHP, GeoJSON, GeoPackage) and should be in the same coordinate projection.
Your tree data will need the following columns:
* Point geographic location
* Diameter at breast height (DBH)
* Tree Scientific Name
* Tree Genus Name
* Tree Family Name
Your city center geography simply needs to be a single, dissolved geometry representing your city center area.
```
### Enter the path to your data below ###
tree_data_path = 'example_data/trees_paris.gpkg'
tree_data = gpd.read_file(tree_data_path)
tree_data.plot()
### Enter the path to your data below ###
city_center_boundary_path = 'example_data/paris.gpkg'
city_center = gpd.read_file(city_center_boundary_path)
city_center.plot()
```
# Clean Data and Calculate Basal Area
To start, we need to remove features with missing data and drop the top quantile of values. Removing missing data and the top quantile helps filter out erroneous entries that are far larger or smaller than we would expect. If your data has already been cleaned, feel free to skip the second cell below.
```
### Enter your column names here ###
scientific_name_column = 'Scientific'
genus_name_column = 'genus'
family_name_column = 'family'
diameter_breast_height_column = 'DBH'
### Ignore if data is already cleaned ###
# Exclude Data Missing DBH
tree_data = tree_data[tree_data[diameter_breast_height_column]>0]
# Exclude data larger than the 99th quantile (often erroneously large)
tree_data = tree_data[tree_data[diameter_breast_height_column]<=tree_data.quantile(0.99).DBH]
# Calculate Basal Area
basal_area_column = 'BA'
tree_data[basal_area_column] = tree_data[diameter_breast_height_column]**2 * 0.00007854
```
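A quick check of the constant used above (assuming DBH is recorded in centimetres and basal area is wanted in square metres): the basal area of a circular stem is $\pi (d/2)^2$, so with a radius of $d/200$ metres, $BA = \pi d^2 / 40000 \approx 0.00007854\, d^2$:

```python
import math

# Basal area of a circular stem: pi * radius^2.
# With DBH in cm and BA in m^2, radius = d/200 m, so BA = pi * d^2 / 40000.
dbh_cm = 30.0
ba_exact = math.pi * (dbh_cm / 200) ** 2
ba_approx = 0.00007854 * dbh_cm ** 2
print(round(ba_exact, 6), round(ba_approx, 6))  # 0.070686 0.070686
```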
# Calculating Simpson and Shannon Diversity Indices
The following cells spatially join your city center geometry to your tree inventory data, and then calculate the Simpson and Shannon diversity indices for the city center and for the area outside it, based on both basal area and tree count.
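The `treeParsing` helpers (`tP.ShannonEntropy`, `tP.simpson_di`) live in an external module not shown here; for reference, a sketch of the two indices under their common definitions (the module's exact signatures may differ):

```python
import math

def shannon_entropy(proportions):
    # H = -sum(p * ln p); higher means a more even, more diverse mix
    return -sum(p * math.log(p) for p in proportions if p > 0)

def simpson_index(amounts):
    # D = 1 - sum(p_i^2): probability two randomly drawn trees differ in taxon
    total = sum(amounts)
    return 1 - sum((a / total) ** 2 for a in amounts)

# Four taxa, perfectly even
print(round(shannon_entropy([0.25] * 4), 4))      # ln(4) ≈ 1.3863
print(round(simpson_index([10, 10, 10, 10]), 4))  # 0.75
```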
```
# Add dummy column to city center geometry
city_center['inside'] = True
city_center = city_center[['geometry','inside']]
# Spatial Join -- this may take a while
sjoin_tree_data = gpd.sjoin(tree_data, city_center, how="left")
def GenerateIndices(label, df, scientific_name_column, genus_name_column, family_name_column, basal_area_column):
# Derive counts, areas, for species, genus, and family
species_count = df[[scientific_name_column, basal_area_column]].groupby(scientific_name_column).count().reset_index()
species_area = df[[scientific_name_column, basal_area_column]].groupby(scientific_name_column).sum().reset_index()
genus_count = df[[genus_name_column, basal_area_column]].groupby(genus_name_column).count().reset_index()
genus_area = df[[genus_name_column, basal_area_column]].groupby(genus_name_column).sum().reset_index()
family_count = df[[family_name_column, basal_area_column]].groupby(family_name_column).count().reset_index()
family_area = df[[family_name_column, basal_area_column]].groupby(family_name_column).sum().reset_index()
# Calculate Percentages by count and area
species_count["Pct"] = species_count[basal_area_column]/sum(species_count[basal_area_column])
species_area["Pct"] = species_area[basal_area_column]/sum(species_area[basal_area_column])
genus_count["Pct"] = genus_count[basal_area_column]/sum(genus_count[basal_area_column])
genus_area["Pct"] = genus_area[basal_area_column]/sum(genus_area[basal_area_column])
family_count["Pct"] = family_count[basal_area_column]/sum(family_count[basal_area_column])
family_area["Pct"] = family_area[basal_area_column]/sum(family_area[basal_area_column])
# Calculate Shannon Indices
species_shannon_count = tP.ShannonEntropy(list(species_count["Pct"]))
species_shannon_area = tP.ShannonEntropy(list(species_area["Pct"]))
genus_shannon_count = tP.ShannonEntropy(list(genus_count["Pct"]))
genus_shannon_area = tP.ShannonEntropy(list(genus_area["Pct"]))
family_shannon_count = tP.ShannonEntropy(list(family_count["Pct"]))
family_shannon_area = tP.ShannonEntropy(list(family_area["Pct"]))
# Calculate Simpson Indices
species_simpson_count = tP.simpson_di(list(species_count[scientific_name_column]), list(species_count[basal_area_column]))
species_simpson_area = tP.simpson_di(list(species_area[scientific_name_column]),list(species_area[basal_area_column]))
genus_simpson_count = tP.simpson_di(list(genus_count[genus_name_column]), list(genus_count[basal_area_column]))
genus_simpson_area = tP.simpson_di(list(genus_area[genus_name_column]), list(genus_area[basal_area_column]))
family_simpson_count = tP.simpson_di(list(family_count[family_name_column]), list(family_count[basal_area_column]))
family_simpson_area = tP.simpson_di(list(family_area[family_name_column]), list(family_area[basal_area_column]))
return {
'Geography':label,
'species_simpson_count': species_simpson_count,
'species_simpson_area': species_simpson_area,
'genus_simpson_count': genus_simpson_count,
'genus_simpson_area': genus_simpson_area,
'family_simpson_count': family_simpson_count,
'family_simpson_area': family_simpson_area,
'species_shannon_count': species_shannon_count,
'species_shannon_area': species_shannon_area,
'genus_shannon_count': genus_shannon_count,
'genus_shannon_area': genus_shannon_area,
'family_shannon_count': family_shannon_count,
'family_shannon_area': family_shannon_area
}
# Generate results and load into dataframe
temp_results = []
city_center_data = sjoin_tree_data[sjoin_tree_data.inside == True]
outside_center_data = sjoin_tree_data[sjoin_tree_data.inside != True]
temp_results.append(
GenerateIndices(
'Inside City Center',
city_center_data,
scientific_name_column,
genus_name_column,
family_name_column,
basal_area_column
)
)
temp_results.append(
GenerateIndices(
'Outside City Center',
outside_center_data,
scientific_name_column,
genus_name_column,
family_name_column,
basal_area_column
)
)
results = pd.DataFrame(temp_results)
results.head()
# Split up results for plotting
shannon_area = results.round(4)[['species_shannon_area','genus_shannon_area','family_shannon_area']].values
shannon_count = results.round(4)[['species_shannon_count','genus_shannon_count','family_shannon_count']].values
simpson_area = results.round(4)[['species_simpson_area','genus_simpson_area','family_simpson_area']].values
simpson_count = results.round(4)[['species_simpson_count','genus_simpson_count','family_simpson_count']].values
def autolabel(rects, axis):
"""Attach a text label above each bar in *rects*, displaying its height."""
for rect in rects:
height = rect.get_height()
axis.annotate('{}'.format(height),
xy=(rect.get_x() + rect.get_width() / 2, height),
xytext=(0, 3), # 3 points vertical offset
textcoords="offset points",
ha='center', va='bottom')
labels = ['Species', 'Genus', 'Family']
plt.rcParams["figure.figsize"] = [14, 7]
x = np.arange(len(labels)) # the label locations
width = 0.35 # the width of the bars
fig, axs = plt.subplots(2, 2)
rects1 = [axs[0,0].bar(x - width/2, shannon_area[0], width, color="lightsteelblue", label='City Center'), axs[0,0].bar(x + width/2, shannon_area[1], width, color="darkgreen", label='Outside City Center')]
rects2 = [axs[0,1].bar(x - width/2, shannon_count[0], width, color="lightsteelblue", label='City Center'), axs[0,1].bar(x + width/2, shannon_count[1], width, color="darkgreen", label='Outside City Center')]
rects3 = [axs[1,0].bar(x - width/2, simpson_area[0], width, color="lightsteelblue", label='City Center'), axs[1,0].bar(x + width/2, simpson_area[1], width, color="darkgreen", label='Outside City Center')]
rects4 = [axs[1,1].bar(x - width/2, simpson_count[0], width, color="lightsteelblue", label='City Center'), axs[1,1].bar(x + width/2, simpson_count[1], width, color="darkgreen", label='Outside City Center')]
axs[0,0].set_ylabel('Diversity Index')
axs[0,0].set_title('Shannon Diversity by Basal Area')
axs[0,0].set_xticks(x)
axs[0,0].set_xticklabels(labels)
axs[0,0].legend()
axs[0,1].set_ylabel('Diversity Index')
axs[0,1].set_title('Shannon Diversity by Count')
axs[0,1].set_xticks(x)
axs[0,1].set_xticklabels(labels)
axs[0,1].legend()
axs[1,0].set_ylabel('Diversity Index')
axs[1,0].set_title('Simpson Diversity by Basal Area')
axs[1,0].set_xticks(x)
axs[1,0].set_xticklabels(labels)
axs[1,0].legend()
axs[1,1].set_ylabel('Diversity Index')
axs[1,1].set_title('Simpson Diversity by Count')
axs[1,1].set_xticks(x)
axs[1,1].set_xticklabels(labels)
axs[1,1].legend()
autolabel(rects1[0], axs[0,0])
autolabel(rects1[1], axs[0,0])
autolabel(rects2[0], axs[0,1])
autolabel(rects2[1], axs[0,1])
autolabel(rects3[0], axs[1,0])
autolabel(rects3[1], axs[1,0])
autolabel(rects4[0], axs[1,1])
autolabel(rects4[1], axs[1,1])
axs[0,0].set_ylim([0,max(shannon_count.max(), shannon_area.max())+0.5])
axs[0,1].set_ylim([0,max(shannon_count.max(), shannon_area.max())+0.5])
axs[1,0].set_ylim([0,1])
axs[1,1].set_ylim([0,1])
fig.tight_layout()
plt.show()
```
# Interpreting these Results
For both indices, a higher score represents a more diverse body of street trees. If your city follows our general findings, the city center tends to be less diverse. The results provide some context on the evenness of the diversity in the city center and outside areas. The cells below calculate how well your street trees adhere to the 10/20/30 standard.
____
# 10/20/30 Standard
The 10/20/30 rule suggests that urban forests should contain no more than 10% of any one species, 20% of any one genus, or 30% of any one family. A stricter version, the 5/10/15 rule, argues that those values should be halved.
Below, we'll calculate how well your tree inventory data adheres to these rules and then chart the results.
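As a quick illustration before running the real calculation, checking the most common taxon at each level against the rule looks like this (the taxon names and percentages below are hypothetical):

```python
# Hypothetical most-common taxa and their share of the inventory (percent)
most_common = {"species": ("Platanus x hispanica", 14.0),
               "genus":   ("Platanus", 18.0),
               "family":  ("Platanaceae", 24.0)}
limits = {"species": 10, "genus": 20, "family": 30}

for level, (name, pct) in most_common.items():
    verdict = "OK" if pct <= limits[level] else "over the limit"
    print(f"{level}: {name} = {pct}% (limit {limits[level]}%) -> {verdict}")
```

With these made-up numbers the inventory would fail the species threshold but pass the genus and family thresholds.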
```
def GetPctRule(df, column, basal_area_column, predicate, location):
if predicate == 'area':
tempData = df[[basal_area_column,column]].groupby(column).sum().sort_values(basal_area_column, ascending=False).reset_index()
else:
tempData = df[[basal_area_column,column]].groupby(column).count().sort_values(basal_area_column, ascending=False).reset_index()
total = tempData[basal_area_column].sum()
return {
'name':column,
'location': location,
'predicate':predicate,
'most common': tempData.iloc[0][column],
'amount': tempData.iloc[0][basal_area_column],
'percent': round(tempData.iloc[0][basal_area_column]/total*100,2),
'total': total,
}
temp_results = []
for location in ['City Center', 'Outside City Center']:
for column in [scientific_name_column, genus_name_column, family_name_column]:
for predicate in ['area', 'count']:
if location == 'City Center':
df = city_center_data
else:
df = outside_center_data
temp_results.append(GetPctRule(df, column, basal_area_column, predicate, location))
results = pd.DataFrame(temp_results)
results.head()
results[results.name=='Scientific']
fig, axs = plt.subplots(3,2)
columns = [scientific_name_column, genus_name_column, family_name_column]
predicates=['area', 'count']
max_value = results.percent.max() * 1.2
for row in [0,1,2]:
temp_data = results[results.name==columns[row]]
for col in [0,1]:
temp_col_data = temp_data[temp_data.predicate==predicates[col]]
if row == 0:
x_value = 10
text="Species Benchmark"
elif row == 1:
x_value = 20
text="Genus Benchmark"
else:
x_value = 30
text="Family Benchmark"
if col == 0:
title = text + ' (Area)'
else:
title = text + ' (Count)'
axs[row,col].set_xlabel('Percent of Tree Inventory')
axs[row,col].set_xlim([0,max_value])
axs[row,col].set_ylim([-0.1,0.1])
axs[row,col].get_yaxis().set_visible(False)
axs[row,col].plot([0,max_value], [0,0], c='darkgray')
axs[row,col].scatter(x=x_value, y=0, marker='|', s=1000, c='darkgray')
axs[row,col].text(x=x_value+1, y=0.02, linespacing=2, s=text, c='black')
axs[row,col].set_title(title)
axs[row,col].scatter(x=float(temp_col_data[temp_col_data.location=='City Center'].percent), y=0, s=100, c='lightsteelblue', label='City Center')
axs[row,col].scatter(x=float(temp_col_data[temp_col_data.location=='Outside City Center'].percent), y=0, s=100, c='darkgreen', label='Outside City Center')
axs[row,col].legend()
plt.tight_layout()
plt.show()
```
# Interpreting these Results
The closer the green dots are to the vertical benchmark line, the closer the tree inventory is to meeting the benchmark. Ideally, each dot should sit at or to the left of that taxonomy level's benchmark (10, 20, or 30%). The charts in the left column reflect tree diversity by basal area, which weights each tree by its size, while the right column uses tree counts, the measure more widely used in urban forestry.
These charts do not define the success of the street tree inventory you are exploring, but they do highlight whether or not the data adheres to suggested urban forestry standards.
___
Want to share your results? Contact us at ***senseable-trees@mit.edu***; we'd love to hear how you used this notebook!
<img src='./img/intel-logo.jpg' width=30%>
<font size=7><div align='left'>Pandas Fundamentals Course<br>
<br>
<font size=6><div align='left'>04. Combining Data<br>
<font size=3><div align='right'>
<div align='right'>Minsuk Sung</div>
<div align='right'>Hoesung Ryu</div>
<div align='right'>Ike Lee</div>
<h1>Course Outline<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#pd.concat()" data-toc-modified-id="pd.concat()-1"><span class="toc-item-num">1 </span>pd.concat()</a></span><ul class="toc-item"><li><span><a href="#concat(-axis-=1-)" data-toc-modified-id="concat(-axis-=1-)-1.1"><span class="toc-item-num">1.1 </span>concat( axis =1 )</a></span></li><li><span><a href="#concat(axis=0)" data-toc-modified-id="concat(axis=0)-1.2"><span class="toc-item-num">1.2 </span>concat(axis=0)</a></span></li><li><span><a href="#concat(axis=0)-ignore_index" data-toc-modified-id="concat(axis=0)-ignore_index-1.3"><span class="toc-item-num">1.3 </span>concat(axis=0) ignore_index</a></span></li></ul></li><li><span><a href="#pd.merge()" data-toc-modified-id="pd.merge()-2"><span class="toc-item-num">2 </span>pd.merge()</a></span><ul class="toc-item"><li><span><a href="#주요-파라미터-(-pd.merge(df1,df2,-on-=-???))" data-toc-modified-id="주요-파라미터-(-pd.merge(df1,df2,-on-=-???))-2.1"><span class="toc-item-num">2.1 </span>Key parameters ( pd.merge(df1,df2, on = ???))</a></span></li><li><span><a href="#inner-join" data-toc-modified-id="inner-join-2.2"><span class="toc-item-num">2.2 </span>inner join</a></span></li><li><span><a href="#left-outer-join" data-toc-modified-id="left-outer-join-2.3"><span class="toc-item-num">2.3 </span>left outer join</a></span></li><li><span><a href="#right-outer-join" data-toc-modified-id="right-outer-join-2.4"><span class="toc-item-num">2.4 </span>right outer join</a></span></li><li><span><a href="#fully-outer-join" data-toc-modified-id="fully-outer-join-2.5"><span class="toc-item-num">2.5 </span>fully outer join</a></span></li></ul></li></ul></div>
## pd.concat()
To combine $n$ DataFrames along a chosen axis, set the `axis` parameter and use the `concat()` function.
### concat( axis =1 )
<img src="img/concat_axis1.png" style="width: 700px;"/>
```
# Create DataFrames
import pandas as pd
# Create df1
df1 = pd.DataFrame([
['Hong', 'Gildong'],
['Sung', 'Munsuk'],
['Ryu', 'Hoesung'],
['Hwang', 'Jinha'],
], index=['1','2','3', '4'], columns=['Last name', 'First name']
)
# Using display instead of print renders the DataFrame nicely.
display(df1)
# Create df2
df2 = pd.DataFrame([
['Pyonyang', 21],
['Seoul', 27],
['Jeju', 29],
['Gyeonggi-do', 30]
], index=['1','2','3', '4'], columns=['City', 'Age']
)
display(df2)
# Perform concat with axis=1
df = pd.concat([df1,df2], axis=1,sort=True)
df
```
### concat(axis=0)
<img src="img/concat_axis0.png" style="width: 700px;"/>
```
# Create df1
df1 = pd.DataFrame([
['Hong', 'Gildong'],
['Sung', 'Munsuk'],
['Ryu', 'Hoesung'],
['Hwang', 'Jinha'],
], index=['1','2','3', '4'], columns=['Last name', 'First name']
)
# Create df2
df2 = pd.DataFrame([
['Lee', 'Chang'],
['Kim', 'Chi'],
], index=['1','2'], columns=['Last name', 'First name']
)
# Perform concat with axis=0
df = pd.concat([df1,df2], axis=0,sort=True)
df
```
### concat(axis=0) ignore_index
<img src="img/concat_axis0_ignore.png" style="width: 700px;"/>
```
df = pd.concat([df1,df2], axis=0,sort=True,ignore_index=True)
df
```
## pd.merge()
### Key parameters ( pd.merge(df1,df2, on = ???))
- Combines DataFrame rows based on one or more keys, just like the join operation in SQL or other relational databases.
- Key parameters
. left, right : the DataFrame objects to merge
. how = 'inner', # or left, right, outer
. on = None, # column(s) to merge on
. left_on = None, # key column of the left DataFrame
. right_on = None, # key column of the right DataFrame
```
# Create DataFrames
df_left = pd.DataFrame({'KEY': ['k0', 'k1', 'k2', 'k3'],
'A': ['a0', 'a1', 'a2', 'a3'],
'B': ['b0', 'b1', 'b2', 'b3']})
df_right = pd.DataFrame({'KEY': ['k2', 'k3', 'k4', 'k5'],
'C': ['c2', 'c3', 'c4', 'c5'],
'D': ['d2', 'd3', 'd4', 'd5']})
print('df_left:')
display(df_left)
print('-'*15)
print('df_right:')
display(df_right)
```
### inner join
```
# 4 * 4 = 16 key comparisons are performed
# Only rows with matching keys are kept => k2, k3
pd.merge(df_left,df_right,how='inner') # state the inner join explicitly
```
### left outer join
```
# Keep every row from the left DataFrame, filling missing matches with NaN
pd.merge(df_left,df_right,how='left')
```
### right outer join
```
pd.merge(df_left,df_right,how='right')
```
### fully outer join
```
pd.merge(df_left,df_right,how='outer') # keep rows present in either DataFrame
```
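A useful addition when auditing joins like the ones above: `pd.merge` accepts `indicator=True`, which appends a `_merge` column recording whether each row came from the left frame, the right frame, or both. A small self-contained sketch:

```python
import pandas as pd

df_left = pd.DataFrame({'KEY': ['k0', 'k1', 'k2', 'k3'], 'A': ['a0', 'a1', 'a2', 'a3']})
df_right = pd.DataFrame({'KEY': ['k2', 'k3', 'k4', 'k5'], 'C': ['c2', 'c3', 'c4', 'c5']})

# _merge is 'left_only', 'right_only', or 'both' for each row
merged = pd.merge(df_left, df_right, how='outer', indicator=True)
print(merged[['KEY', '_merge']])
```

Only k2 and k3 appear in both frames, so those are the rows an inner join would keep.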
```
import sqlite3
from urllib.parse import urlparse, urlsplit
from hashlib import sha256 as hash
sqlite_file = 'D:/data/sqlite3/url_kb.sqlite3'
batch_size = 10000
with sqlite3.connect(sqlite_file) as conn:
cur = conn.cursor()
cur.execute("SELECT * FROM raw_url;")
while True:
all = cur.fetchmany(batch_size)
if len(all) > 0:
print(len(all))
else:
break
cur.close()
with sqlite3.connect(sqlite_file) as conn:
cur = conn.cursor()
cur.execute("SELECT * FROM raw_url;")
all = cur.fetchmany(10)
cur.close()
t = all[0]
target_url = t[2]
target_url
urlparse(target_url)
def create_table_parsed_urls():
sql = """
CREATE TABLE IF NOT EXISTS parsed_urls (
hash text NOT NULL PRIMARY KEY,
scheme text NOT NULL,
netloc text NOT NULL,
path text NOT NULL,
params text NOT NULL,
query text NOT NULL,
fragment text NOT NULL,
count integer NOT NULL,
url text NOT NULL
);
"""
with sqlite3.connect(sqlite_file) as conn:
try:
cur = conn.cursor()
cur.execute(sql)
cur.close()
        except sqlite3.Error as e:
print(e)
def insert_parsed_urls(sql_params):
sql = ''' INSERT INTO parsed_urls(hash, scheme, netloc, path, params, query, fragment, count, url) VALUES (?,?,?,?,?,?,?,?,?)
ON CONFLICT (hash) DO
UPDATE
SET count = count + 1
WHERE hash=?
;'''
with sqlite3.connect(sqlite_file) as conn:
try:
cur = conn.cursor()
cur.execute(sql, sql_params)
cur.close()
#conn.commit()
except Exception as e:
print(e)
def get_page(page_number):
batch_size = 10000
offset = (page_number - 1) * batch_size
with sqlite3.connect(sqlite_file) as conn:
cur = conn.cursor()
cur.execute("SELECT * FROM raw_url where id > 86301 LIMIT ? OFFSET ?;", (batch_size, offset))
all = cur.fetchall()
cur.close()
return all
def process_raw_url():
page_number = 1
while True:
records = get_page(page_number)
record_count = len(records)
if record_count > 0:
for r in records:
target_url = r[2]
parse_result = urlparse(target_url)
h = hash(target_url.encode('UTF8')).hexdigest()
q = (h, parse_result.scheme, parse_result.netloc, parse_result.path, parse_result.params, parse_result.query, parse_result.fragment, 1, target_url, h)
insert_parsed_urls(q)
page_number = page_number + 1
else:
break
create_table_parsed_urls()
parse_result = urlparse(target_url)
print(parse_result)
print(parse_result.scheme)
h = hash(target_url.encode('UTF8')).hexdigest()
q = (h, parse_result.scheme, parse_result.netloc, parse_result.path, parse_result.params, parse_result.query, parse_result.fragment, 1, target_url, h)
q
insert_parsed_urls(q)
process_raw_url()
```
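The `ON CONFLICT ... DO UPDATE` clause used in `insert_parsed_urls` is SQLite's upsert syntax (available since SQLite 3.24). A minimal self-contained demonstration of the same insert-or-increment pattern on an in-memory database:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE urls (hash TEXT PRIMARY KEY, count INTEGER NOT NULL)")

# Insert a new hash with count 1, or bump the count if the hash already exists
sql = ("INSERT INTO urls(hash, count) VALUES (?, 1) "
       "ON CONFLICT(hash) DO UPDATE SET count = count + 1")
for h in ['abc', 'abc', 'def', 'abc']:
    conn.execute(sql, (h,))

print(conn.execute("SELECT hash, count FROM urls ORDER BY hash").fetchall())
# [('abc', 3), ('def', 1)]
```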
<a href="https://colab.research.google.com/github/Homedepot5/DataScience/blob/deeplearning/GradientDescent.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import numpy as np
import tensorflow as tf
from tensorflow import keras
import pandas as pd
from matplotlib import pyplot as plt
%matplotlib inline
import io
df=pd.read_csv('insurance_data.csv')
df
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(df[['age','affordibility']],df.bought_insurance,test_size=0.2, random_state=25)
X_train
X_trainscaled= X_train.copy()
X_trainscaled['age']=X_trainscaled['age']/100
X_testscaled=X_test.copy()
X_testscaled.age=X_testscaled['age']/100
X_trainscaled
model = keras.Sequential([
keras.layers.Dense(1, input_shape=(2,), activation='sigmoid', kernel_initializer='ones', bias_initializer='zeros')
])
```
`binary_crossentropy` is the same quantity as the log loss function
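A quick pure-Python check of that equivalence (the epsilon clipping mirrors the `log_loss` helper defined later in this notebook, and avoids `log(0)`):

```python
import math

def log_loss(y_true, y_pred, eps=1e-15):
    # Mean of -[t*log(p) + (1-t)*log(1-p)], the binary cross-entropy
    total = 0.0
    for t, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
        total += t * math.log(p) + (1 - t) * math.log(1 - p)
    return -total / len(y_true)

print(round(log_loss([1, 0, 1], [0.9, 0.2, 0.8]), 4))  # 0.1839
```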
```
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(X_trainscaled, y_train, epochs=5000)
model.evaluate(X_testscaled,y_test)
model.predict(X_testscaled)
y_test
coef, intercept = model.get_weights()
coef, intercept
def sigmoid(x):
import math
return 1 / (1 + math.exp(-x))
sigmoid(18)
X_test
def prediction_function(age, affordibility):
weighted_sum = coef[0]*age + coef[1]*affordibility + intercept
return sigmoid(weighted_sum)
prediction_function(.47, 1)
prediction_function(.18, 1)
def sigmoid_numpy(X):
return 1/(1+np.exp(-X))
sigmoid_numpy(np.array([12,0,1]))
def log_loss(y_true, y_predicted):
epsilon = 1e-15
y_predicted_new = [max(i,epsilon) for i in y_predicted]
y_predicted_new = [min(i,1-epsilon) for i in y_predicted_new]
y_predicted_new = np.array(y_predicted_new)
return -np.mean(y_true*np.log(y_predicted_new)+(1-y_true)*np.log(1-y_predicted_new))
def gradient_descent(age, affordability, y_true, epochs, loss_thresold):
w1 = w2 = 1
bias = 0
rate = 0.5
n = len(age)
for i in range(epochs):
weighted_sum = w1 * age + w2 * affordability + bias
y_predicted = sigmoid_numpy(weighted_sum)
loss = log_loss(y_true, y_predicted)
w1d = (1/n)*np.dot(np.transpose(age),(y_predicted-y_true))
w2d = (1/n)*np.dot(np.transpose(affordability),(y_predicted-y_true))
bias_d = np.mean(y_predicted-y_true)
w1 = w1 - rate * w1d
w2 = w2 - rate * w2d
bias = bias - rate * bias_d
print (f'Epoch:{i}, w1:{w1}, w2:{w2}, bias:{bias}, loss:{loss}')
if loss<=loss_thresold:
break
return w1, w2, bias
gradient_descent(X_trainscaled['age'],X_trainscaled['affordibility'],y_train,1000, 0.4631)
```
```
##### import modules #####
from os.path import join as opj
from nipype.interfaces.ants import ApplyTransforms
from nipype.interfaces.utility import IdentityInterface
from nipype.interfaces.freesurfer import FSCommand, MRIConvert
from nipype.interfaces.io import SelectFiles, DataSink, FreeSurferSource
from nipype.pipeline.engine import Workflow, Node, MapNode
from nipype.algorithms.misc import Gunzip
# FreeSurfer - Specify the location of the freesurfer folder
fs_dir = '/media/lmn/86A406A0A406933B2/TNAC_BIDS/derivatives/mindboggle/freesurfer_subjects/'
FSCommand.set_default_subjects_dir(fs_dir)
##### set paths and define parameters #####
experiment_dir = '/media/lmn/86A406A0A406933B2/TNAC_BIDS/'
output_dir = 'derivatives/masks/output_inverse_transform_ROIs'
working_dir = 'derivatives/masks/workingdir_inverse_transform_ROIs'
input_dir_preproc = 'derivatives/preprocessing/output_preproc'
input_dir_reg = 'derivatives/preprocessing/output_registration'
#location of atlas --> downloaded from alpaca
input_dir_ROIs = 'derivatives/anat_rois_norman-haignere/anatlabels_surf_mni/mni152_te11-te10-te12-pt-pp'
# list of subjects
subject_list = ['sub-03', 'sub-04', 'sub-05', 'sub-06', 'sub-07', 'sub-08', 'sub-09', 'sub-10', 'sub-11', 'sub-12', 'sub-13', 'sub-14']
#### specify workflow-nodes #####
# FreeSurferSource - Data grabber specific for FreeSurfer data
fssource = Node(FreeSurferSource(subjects_dir=fs_dir),
run_without_submitting=True,
name='fssource')
# Convert FreeSurfer's MGZ format into NIfTI.gz-format (brain.mgz-anatomical)
convert2niigz = Node(MRIConvert(out_type='niigz'), name='convert2niigz')
# Transform the volumetric ROIs to the target space
inverse_transform_rois = MapNode(ApplyTransforms(args='--float',
input_image_type=3,
interpolation='Linear',
invert_transform_flags=[False],
num_threads=1,
terminal_output='file'),
name='inverse_transform_rois', iterfield=['input_image'])
# Gunzip - unzip the output ROI-images to use them in further DCM-analysis
gunzip_rois = MapNode(Gunzip(), name="gunzip_rois", iterfield=['in_file'])
# Gunzip - unzip the anatomical reference-image to use it in further DCM-analysis
gunzip_anat = Node(Gunzip(), name="gunzip_anat")
##### specify input and output stream #####
# Infosource - a function free node to iterate over the list of subject names
infosource = Node(IdentityInterface(fields=['subject_id']),
name="infosource")
infosource.iterables = [('subject_id', subject_list)]
templates = {'inverse_transform_composite': opj(input_dir_reg, 'registrationtemp', '{subject_id}', 'transformInverseComposite.h5'),
'atlas_ROIs': opj(input_dir_ROIs, '*.nii.gz')
}
# SelectFiles - to grab the data (alternativ to DataGrabber),
selectfiles = Node(SelectFiles(templates,
base_directory=experiment_dir),
name="selectfiles")
# Datasink - creates output folder for important outputs
datasink = Node(DataSink(base_directory=experiment_dir,
container=output_dir),
name="datasink")
# Use the following DataSink output substitutions
substitutions = [('_subject_id_', '')]
datasink.inputs.substitutions = substitutions
##### initiate the workflow and connect nodes #####
# Initiation of the inverse transform ROIs workflow
inverse_transform_ROIs = Workflow(name='inverse_transform_ROIs')
inverse_transform_ROIs.base_dir = opj(experiment_dir, working_dir)
# Connect up ANTS normalization components
inverse_transform_ROIs.connect([(fssource, convert2niigz, [('brain', 'in_file')]),
(convert2niigz, inverse_transform_rois, [('out_file', 'reference_image')]),
(inverse_transform_rois, gunzip_rois, [('output_image', 'in_file')]),
(convert2niigz, gunzip_anat, [('out_file', 'in_file')]),
])
# Connect SelectFiles and DataSink to the workflow
inverse_transform_ROIs.connect([(infosource, selectfiles, [('subject_id', 'subject_id')]),
(infosource, fssource, [('subject_id', 'subject_id')]),
(selectfiles, inverse_transform_rois, [('atlas_ROIs', 'input_image')]),
(selectfiles, inverse_transform_rois, [('inverse_transform_composite', 'transforms')]),
(convert2niigz, datasink, [('out_file', 'convert2niigz.@anatomical_niigz_transform')]),
(gunzip_rois, datasink, [('out_file', 'inverse_transform_rois.@roi_transform')]),
(gunzip_anat, datasink, [('out_file', 'unzipped_anatomical.@unzipped_anatomical')]),
])
##### visualize the pipeline #####
# Create a colored output graph
inverse_transform_ROIs.write_graph(graph2use='colored',format='png', simple_form=True)
# Create a detailed output graph
inverse_transform_ROIs.write_graph(graph2use='flat',format='png', simple_form=True)
# Visualize the simple graph
from IPython.display import Image
Image(filename='/media/lmn/86A406A0A406933B2/TNAC_BIDS/derivatives/masks/workingdir_inverse_transform_ROIs/inverse_transform_ROIs/graph.png')
# Visualize the detailed graph
from IPython.display import Image
Image(filename='/media/lmn/86A406A0A406933B2/TNAC_BIDS/derivatives/masks/workingdir_inverse_transform_ROIs/inverse_transform_ROIs/graph_detailed.png')
##### run the workflow using multiple cores #####
inverse_transform_ROIs.run('MultiProc', plugin_args={'n_procs':4})
!tree /media/lmn/86A406A0A406933B2/TNAC_BIDS/derivatives/masks/output_inverse_transform_ROIs/
```
# Linear Regression
---
- Author: Diego Inácio
- GitHub: [github.com/diegoinacio](https://github.com/diegoinacio)
- Notebook: [regression_linear.ipynb](https://github.com/diegoinacio/machine-learning-notebooks/blob/master/Machine-Learning-Fundamentals/regression_linear.ipynb)
---
Overview and implementation of *Linear Regression* analysis.
```
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
from regression__utils import *
# Synthetic data 1
x, yA, yB, yC, yD = synthData1()
```

## 1. Simple
---
$$ \large
y_i=mx_i+b
$$
Where **m** describes the angular coefficient (or line slope) and **b** the linear coefficient (or line y-intercept).
$$ \large
m=\frac{\sum_i^n (x_i-\overline{x})(y_i-\overline{y})}{\sum_i^n (x_i-\overline{x})^2}
$$
$$ \large
b=\overline{y}-m\overline{x}
$$
```
class linearRegression_simple(object):
def __init__(self):
self._m = 0
self._b = 0
def fit(self, X, y):
X = np.array(X)
y = np.array(y)
X_ = X.mean()
y_ = y.mean()
num = ((X - X_)*(y - y_)).sum()
den = ((X - X_)**2).sum()
self._m = num/den
self._b = y_ - self._m*X_
def pred(self, x):
x = np.array(x)
return self._m*x + self._b
lrs = linearRegression_simple()
%%time
lrs.fit(x, yA)
yA_ = lrs.pred(x)
lrs.fit(x, yB)
yB_ = lrs.pred(x)
lrs.fit(x, yC)
yC_ = lrs.pred(x)
lrs.fit(x, yD)
yD_ = lrs.pred(x)
```
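As a quick sanity check (not in the original notebook), the closed-form estimates above recover a noiseless line exactly; here computed directly with NumPy:

```python
import numpy as np

# y = 2x + 1 should be recovered exactly by the closed-form fit
x_chk = np.array([0., 1., 2., 3., 4.])
y_chk = 2 * x_chk + 1
m = (((x_chk - x_chk.mean()) * (y_chk - y_chk.mean())).sum()
     / ((x_chk - x_chk.mean()) ** 2).sum())
b = y_chk.mean() - m * x_chk.mean()
print(m, b)  # 2.0 1.0
```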

$$ \large
MSE=\frac{1}{n} \sum_i^n (Y_i- \hat{Y}_i)^2
$$

## 2. Multiple
---
$$ \large
y=m_1x_1+m_2x_2+...+m_nx_n+b
$$
```
class linearRegression_multiple(object):
def __init__(self):
self._m = 0
self._b = 0
def fit(self, X, y):
X = np.array(X).T
y = np.array(y).reshape(-1, 1)
X_ = X.mean(axis = 0)
y_ = y.mean(axis = 0)
num = ((X - X_)*(y - y_)).sum(axis = 0)
den = ((X - X_)**2).sum(axis = 0)
self._m = num/den
self._b = y_ - (self._m*X_).sum()
def pred(self, x):
x = np.array(x).T
return (self._m*x).sum(axis = 1) + self._b
lrm = linearRegression_multiple()
%%time
# Synthetic data 2
M = 10
s, t, x1, x2, y = synthData2(M)
# Prediction
lrm.fit([x1, x2], y)
y_ = lrm.pred([x1, x2])
```


## 3. Gradient Descent
---
$$ \large
e_{m,b}=\frac{1}{n} \sum_i^n (y_i-(mx_i+b))^2
$$
To perform the gradient descent as a function of the error, it is necessary to calculate the gradient vector $\nabla$ of the function, described by:
$$ \large
\nabla e_{m,b}=\Big\langle\frac{\partial e}{\partial m},\frac{\partial e}{\partial b}\Big\rangle
$$
where:
$$ \large
\begin{aligned}
\frac{\partial e}{\partial m}&=\frac{2}{n} \sum_{i}^{n}-x_i(y_i-(mx_i+b)), \\
\frac{\partial e}{\partial b}&=\frac{2}{n} \sum_{i}^{n}-(y_i-(mx_i+b))
\end{aligned}
$$
```
class linearRegression_GD(object):
def __init__(self,
mo = 0,
bo = 0,
rate = 0.001):
self._m = mo
self._b = bo
self.rate = rate
def fit_step(self, X, y):
X = np.array(X)
y = np.array(y)
n = X.size
dm = (2/n)*np.sum(-X*(y - (self._m*X + self._b)))  # use the argument X, not a global
db = (2/n)*np.sum(-(y - (self._m*X + self._b)))
self._m -= dm*self.rate
self._b -= db*self.rate
def pred(self, x):
x = np.array(x)
return self._m*x + self._b
%%time
lrgd = linearRegression_GD(rate=0.01)
# Synthetic data 3
x, x_, y = synthData3()
iterations = 3072
for i in range(iterations):
lrgd.fit_step(x, y)
y_ = lrgd.pred(x)
```

## 4. Non-linear analysis
---
```
# Synthetic data 4
# Anscombe's quartet
x1, y1, x2, y2, x3, y3, x4, y4 = synthData4()
%%time
lrs.fit(x1, y1)
y1_ = lrs.pred(x1)
lrs.fit(x2, y2)
y2_ = lrs.pred(x2)
lrs.fit(x3, y3)
y3_ = lrs.pred(x3)
lrs.fit(x4, y4)
y4_ = lrs.pred(x4)
```


| github_jupyter |
## Gaussian Process Latent Variable Model
The [Gaussian Process Latent Variable Model](https://en.wikipedia.org/wiki/Nonlinear_dimensionality_reduction#Gaussian_process_latent_variable_models) (GPLVM) is a dimensionality reduction method that uses a Gaussian process to learn a low-dimensional representation of (potentially) high-dimensional data. In the typical setting of Gaussian process regression, where we are given inputs $X$ and outputs $y$, we choose a kernel and learn hyperparameters that best describe the mapping from $X$ to $y$. In the GPLVM, we are not given $X$: we are only given $y$. So we need to learn $X$ along with the kernel hyperparameters.
We do not do maximum likelihood inference on $X$. Instead, we set a Gaussian prior for $X$ and learn the mean and variance of the approximate (Gaussian) posterior $q(X|y)$. In this notebook, we show how this can be done using the `pyro.contrib.gp` module. In particular we reproduce a result described in [2].
```
import os
import matplotlib.pyplot as plt
import pandas as pd
import torch
from torch.nn import Parameter
import pyro
import pyro.contrib.gp as gp
import pyro.distributions as dist
import pyro.ops.stats as stats
smoke_test = ('CI' in os.environ) # ignore; used to check code integrity in the Pyro repo
assert pyro.__version__.startswith('1.1.0')
pyro.enable_validation(True) # can help with debugging
pyro.set_rng_seed(1)
```
### Dataset
The data we are going to use consists of [single-cell](https://en.wikipedia.org/wiki/Single-cell_analysis) [qPCR](https://en.wikipedia.org/wiki/Real-time_polymerase_chain_reaction) data for 48 genes obtained from mice (Guo *et al.*, [1]). This data is available at the [Open Data Science repository](https://github.com/sods/ods). The data contains 48 columns, with each column corresponding to (normalized) measurements of each gene. Cells differentiate during their development and these data were obtained at various stages of development. The various stages are labelled from the 1-cell stage to the 64-cell stage. For the 32-cell stage, the data is further differentiated into 'trophectoderm' (TE) and 'inner cell mass' (ICM). ICM further differentiates into 'epiblast' (EPI) and 'primitive endoderm' (PE) at the 64-cell stage. Each of the rows in the dataset is labelled with one of these stages.
```
# license: Copyright (c) 2014, the Open Data Science Initiative
# license: https://www.elsevier.com/legal/elsevier-website-terms-and-conditions
URL = "https://raw.githubusercontent.com/sods/ods/master/datasets/guo_qpcr.csv"
df = pd.read_csv(URL, index_col=0)
print("Data shape: {}\n{}\n".format(df.shape, "-" * 21))
print("Data labels: {}\n{}\n".format(df.index.unique().tolist(), "-" * 86))
print("Show a small subset of the data:")
df.head()
```
### Modelling
First, we need to define the output tensor $y$. To predict values for all $48$ genes, we need $48$ Gaussian processes. So the required shape for $y$ is `num_GPs x num_data = 48 x 437`.
```
data = torch.tensor(df.values, dtype=torch.get_default_dtype())
# we need to transpose data to correct its shape
y = data.t()
```
Now comes the most interesting part. We know that the observed data $y$ has latent structure: in particular different datapoints correspond to different cell stages. We would like our GPLVM to learn this structure in an unsupervised manner. In principle, if we do a good job of inference then we should be able to discover this structure---at least if we choose reasonable priors. First, we have to choose the dimension of our latent space $X$. We choose $dim(X)=2$, since we would like our model to disentangle 'capture time' ($1$, $2$, $4$, $8$, $16$, $32$, and $64$) from cell branching types (TE, ICM, PE, EPI). Next, when we set the mean of our prior over $X$, we set the first dimension to be equal to the observed capture time. This will help the GPLVM discover the structure we are interested in and will make it more likely that that structure will be axis-aligned in a way that is easier for us to interpret.
```
capture_time = y.new_tensor([int(cell_name.split(" ")[0]) for cell_name in df.index.values])
# we scale the time into the interval [0, 1]
time = capture_time.log2() / 6
# we setup the mean of our prior over X
X_prior_mean = torch.zeros(y.size(1), 2) # shape: 437 x 2
X_prior_mean[:, 0] = time
```
We will use a sparse version of Gaussian process inference to make training faster. Remember that we also need to define $X$ as a `Parameter` so that we can set a prior and guide (variational distribution) for it.
```
kernel = gp.kernels.RBF(input_dim=2, lengthscale=torch.ones(2))
# we clone here so that we don't change our prior during the course of training
X = Parameter(X_prior_mean.clone())
# we will use SparseGPRegression model with num_inducing=32;
# initial values for Xu are sampled randomly from X_prior_mean
Xu = stats.resample(X_prior_mean.clone(), 32)
gplvm = gp.models.SparseGPRegression(X, y, kernel, Xu, noise=torch.tensor(0.01), jitter=1e-5)
```
We will use the [autoguide()](http://docs.pyro.ai/en/dev/contrib.gp.html#pyro.contrib.gp.parameterized.Parameterized.autoguide) method from the [Parameterized](http://docs.pyro.ai/en/dev/contrib.gp.html#module-pyro.contrib.gp.parameterized) class to set an auto Normal guide for $X$.
```
# we use `.to_event()` to tell Pyro that the prior distribution for X has no batch_shape
gplvm.X = pyro.nn.PyroSample(dist.Normal(X_prior_mean, 0.1).to_event())
gplvm.autoguide("X", dist.Normal)
```
### Inference
As mentioned in the [Gaussian Processes tutorial](gp.ipynb), we can use the helper function [gp.util.train](http://docs.pyro.ai/en/dev/contrib.gp.html#pyro.contrib.gp.util.train) to train a Pyro GP module. By default, this helper function uses the Adam optimizer with a learning rate of `0.01`.
```
# note that training is expected to take a minute or so
losses = gp.util.train(gplvm, num_steps=4000)
# let's plot the loss curve after 4000 steps of training
plt.plot(losses)
plt.show()
```
After inference, the mean and standard deviation of the approximated posterior $q(X) \sim p(X | y)$ will be stored in the parameters `X_loc` and `X_scale`. To get a sample from $q(X)$, we need to set the `mode` of `gplvm` to `"guide"`.
```
gplvm.mode = "guide"
X = gplvm.X # draw a sample from the guide of the variable X
```
### Visualizing the result
Let’s see what we got by applying GPLVM to our dataset.
```
plt.figure(figsize=(8, 6))
colors = plt.get_cmap("tab10").colors[::-1]
labels = df.index.unique()
X = gplvm.X_loc.detach().numpy()
for i, label in enumerate(labels):
X_i = X[df.index == label]
plt.scatter(X_i[:, 0], X_i[:, 1], c=[colors[i]], label=label)
plt.legend()
plt.xlabel("pseudotime", fontsize=14)
plt.ylabel("branching", fontsize=14)
plt.title("GPLVM on Single-Cell qPCR data", fontsize=16)
plt.show()
```
We can see that the first dimension of the latent $X$ for each cell (horizontal axis) corresponds well with the observed capture time (colors). On the other hand, the 32-cell TE and 64-cell TE stages are clustered near each other. And the fact that ICM cells differentiate into PE and EPI can also be observed from the figure!
### Remarks
+ The sparse version scales well (linearly) with the number of data points. So the GPLVM can be used with large datasets. Indeed in [2] the authors have applied GPLVM to a dataset with 68k peripheral blood mononuclear cells.
+ Much of the power of Gaussian Processes lies in the function prior defined by the kernel. We recommend users try out different combinations of kernels for different types of datasets! For example, if the data contains periodicities, it might make sense to use a [Periodic kernel](http://docs.pyro.ai/en/dev/contrib.gp.html#periodic). Other kernels can also be found in the [Pyro GP docs](http://docs.pyro.ai/en/dev/contrib.gp.html#module-pyro.contrib.gp.kernels).
### References
[1] `Resolution of Cell Fate Decisions Revealed by Single-Cell Gene Expression Analysis from Zygote to Blastocyst`,<br />
Guoji Guo, Mikael Huss, Guo Qing Tong, Chaoyang Wang, Li Li Sun, Neil D. Clarke, Paul Robson
[2] `GrandPrix: Scaling up the Bayesian GPLVM for single-cell data`,<br />
Sumon Ahmed, Magnus Rattray, Alexis Boukouvalas
[3] `Bayesian Gaussian Process Latent Variable Model`,<br />
Michalis K. Titsias, Neil D. Lawrence
[4] `A novel approach for resolving differences in single-cell gene expression patterns from zygote to blastocyst`,<br />
Florian Buettner, Fabian J. Theis
| github_jupyter |
```
import sys
sys.path.append('../src')
import csv
import yaml
import tqdm
import math
import pickle
import numpy as np
import pandas as pd
import itertools
import operator
from operator import concat, itemgetter
from pickle_wrapper import unpickle, pickle_it
import matplotlib.pyplot as plt
import dask
from dask.distributed import Client
from pathlib import Path
from collections import defaultdict
from functools import reduce
import ast
from mcmc_norm_learning.algorithm_1_v4 import to_tuple
from mcmc_norm_learning.algorithm_1_v4 import create_data
from mcmc_norm_learning.rules_4 import get_prob, get_log_prob
from mcmc_norm_learning.environment import position,plot_env
from mcmc_norm_learning.robot_task_new import task, robot, plot_task
from mcmc_norm_learning.algorithm_1_v4 import algorithm_1, over_dispersed_starting_points
from mcmc_norm_learning.mcmc_convergence import prepare_sequences, calculate_R
from mcmc_norm_learning.rules_4 import q_dict, rule_dict, get_log_prob
from algorithm_2_utilities import Likelihood
from mcmc_norm_learning.mcmc_performance import performance
from collections import Counter
with open("../params_nc.yaml", 'r') as fd:
params = yaml.safe_load(fd)
```
### Step 1: Default Environment and params
```
##Get default env
env = unpickle('../data/env.pickle')
##Get default task
true_norm_exp = params['true_norm']['exp']
num_observations = params['num_observations']
obs_data_set = params['obs_data_set']
w_nc=params["w_nc"]
n = params['n']
m = params['m']
rf = params['rf']
rhat_step_size = params['rhat_step_size']
top_n = params["top_norms_n"]
colour_specific = params['colour_specific']
shape_specific = params['shape_specific']
target_area_parts = params['target_area'].replace(' ','').split(';')
target_area_part0 = position(*map(float, target_area_parts[0].split(',')))
target_area_part1 = position(*map(float, target_area_parts[1].split(',')))
target_area = (target_area_part0, target_area_part1)
print(target_area_part0.coordinates())
print(target_area_part1.coordinates())
the_task = task(colour_specific, shape_specific,target_area)
fig,axs=plt.subplots(1,2,figsize=(9,4),dpi=100);
plot_task(env,axs[0],"Initial Task State",the_task,True)
axs[1].text(0,0.5,"\n".join([str(x) for x in true_norm_exp]),wrap=True)
axs[1].axis("off")
```
### Step 2: Non Compliant Obs
```
obs = nc_obs= create_data(true_norm_exp,env,name=None,task=the_task,random_task=False,
num_actionable=np.nan,num_repeat=num_observations,w_nc=w_nc,verbose=False)
true_norm_prior = get_prob("NORMS",true_norm_exp)
true_norm_log_prior = get_log_prob("NORMS",true_norm_exp)
if not Path('../data_nc/observations_ad_0.1.pickle').exists():
pickle_it(obs, '../data_nc/observations_ad_0.1.pickle')
```
### Step 3: MCMC chains
```
%%time
%%capture
num_chains = math.ceil(m/2)
starts, info = over_dispersed_starting_points(num_chains,obs,env,\
the_task,time_threshold=math.inf,w_normative=(1-w_nc))
with open('../metrics/starts_info_nc_parallel.txt', 'w') as chain_info:
chain_info.write(info)
@dask.delayed
def delayed_alg1(obs,env,the_task,q_dict,rule_dict,start,rf,max_iters,w_nc):
exp_seq,log_likelihoods = algorithm_1(obs,env,the_task,q_dict,rule_dict,
"dummy value",start = start,relevance_factor=rf,\
max_iterations=max_iters,w_normative=1-w_nc,verbose=False)
log_posteriors = [None]*len(exp_seq)
for i in range(len(exp_seq)):
exp = exp_seq[i]
ll = log_likelihoods[i]
log_prior = get_log_prob("NORMS",exp) # Note: this imports the rules dict from rules_4.py
log_posteriors[i] = log_prior + ll
return {'chain': exp_seq, 'log_posteriors': log_posteriors}
%%time
%%capture
chains_and_log_posteriors=[]
for i in tqdm.tqdm(range(num_chains),desc="Loop for Individual Chains"):
chains_and_log_posteriors.append(
delayed_alg1(obs,env,the_task,q_dict,rule_dict,starts[i],rf,4*n,w_nc).compute())
from joblib import Parallel, delayed
def delayed_alg1_joblib(start_i):
alg1_result=delayed_alg1(obs=obs,env=env,the_task=the_task,q_dict=q_dict,\
rule_dict=rule_dict,start=start_i,rf=rf,\
max_iters=4*n,w_nc=w_nc).compute()
return (alg1_result)
%%time
%%capture
chains_and_log_posteriors=[]
chains_and_log_posteriors=Parallel(verbose = 2,n_jobs = -1\
)(delayed( delayed_alg1_joblib )(starts[run])\
for run in tqdm.tqdm(range(num_chains),desc="Loop for Individual Chains"))
pickle_it(chains_and_log_posteriors, '../data_nc/chains_and_log_posteriors.pickle')
```
### Step 4: Pass to analyse chains
```
with open('../metrics/chain_posteriors_nc.csv', 'w', newline='') as csvfile, \
open('../metrics/chain_info.txt', 'w') as chain_info:
chain_info.write(f'Number of chains: {len(chains_and_log_posteriors)}\n')
chain_info.write(f'Length of each chain: {len(chains_and_log_posteriors[0]["chain"])}\n')
csv_writer = csv.writer(csvfile)
csv_writer.writerow(('chain_number', 'chain_pos', 'expression', 'log_posterior'))
exps_in_chains = [None]*len(chains_and_log_posteriors)
for i,chain_data in enumerate(chains_and_log_posteriors): # Consider skipping first few entries
chain = chain_data['chain']
log_posteriors = chain_data['log_posteriors']
exp_lp_pairs = list(zip(chain,log_posteriors))
exps_in_chains[i] = set(map(to_tuple, chain))
#print(sorted(log_posteriors, reverse=True))
lps_to_exps = defaultdict(set)
for exp,lp in exp_lp_pairs:
lps_to_exps[lp].add(to_tuple(exp))
num_exps_in_chain = len(exps_in_chains[i])
print(lps_to_exps.keys())
print('\n')
chain_info.write(f'Num. expressions in chain {i}: {num_exps_in_chain}\n')
decreasing_lps = sorted(lps_to_exps.keys(), reverse=True)
chain_info.write("Expressions by decreasing log posterior\n")
for lp in decreasing_lps:
chain_info.write(f'lp = {lp} [{len(lps_to_exps[lp])} exps]:\n')
for exp in lps_to_exps[lp]:
chain_info.write(f' {exp}\n')
chain_info.write('\n')
chain_info.write('\n')
changed_exp_indices = [i for i in range(1,len(chain)) if chain[i] != chain[i-1]]
print(f'Writing {len(exp_lp_pairs)} rows to CSV file\n')
csv_writer.writerows(((i,j,chain_lp_pair[0],chain_lp_pair[1]) for j,chain_lp_pair in enumerate(exp_lp_pairs)))
all_exps = set(itertools.chain(*exps_in_chains))
chain_info.write(f'Total num. distinct exps across all chains (including warm-up): {len(all_exps)}\n')
true_norm_exp = params['true_norm']['exp']
true_norm_tuple = to_tuple(true_norm_exp)
chain_info.write(f'True norm in some chain(s): {true_norm_tuple in all_exps}\n')
num_chains_in_to_exps = defaultdict(set)
for exp in all_exps:
num_chains_in = operator.countOf(map(operator.contains,
exps_in_chains,
(exp for _ in range(len(exps_in_chains)))
),
True)
num_chains_in_to_exps[num_chains_in].add(exp)
for num in sorted(num_chains_in_to_exps.keys(), reverse=True):
chain_info.write(f'Out of {len(exps_in_chains)} chains ...\n')
chain_info.write(f'{len(num_chains_in_to_exps[num])} exps are in {num} chains.\n')
csvfile.close()
chain_info.close()
result=pd.read_csv("../metrics/chain_posteriors_nc.csv")
log_post_no_norm=Likelihood(["Norms",["No-Norm"]],the_task,obs,env,w_normative=1-w_nc)
log_post_true_norm=Likelihood(true_norm_exp,the_task,obs,env,w_normative=1-w_nc)
print(log_post_no_norm,log_post_true_norm)
result.groupby("chain_number")[["log_posterior"]].agg(['min','max','mean','std'])
hist_plot=result['log_posterior'].hist(by=result['chain_number'],bins=10)
plt.savefig("../data_nc/nc_hist.jpg")
grouped = result.groupby('chain_number')[["log_posterior"]]
ncols=2
nrows = int(np.ceil(grouped.ngroups/ncols))
fig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(14,5), sharey=False)
for (key, ax) in zip(grouped.groups.keys(), axes.flatten()):
grouped.get_group(key).plot(ax=ax)
ax.axhline(y=log_post_no_norm,label="No Norm",c='r')
ax.axhline(y=log_post_true_norm,label="True Norm",c='g')
ax.title.set_text("For chain={}".format(key))
ax.legend()
plt.show()
plt.savefig("../plots/nc_movement.jpg")
```
### Step 5: Convergence Tests
```
def conv_test(chains):
convergence_result, split_data = calculate_R(chains, rhat_step_size)
with open('../metrics/conv_test_nc.txt', 'w') as f:
f.write(convergence_result.to_string())
return reduce(concat, split_data)
chains = list(map(itemgetter('chain'), chains_and_log_posteriors))
posterior_sample = conv_test(prepare_sequences(chains, warmup=True))
pickle_it(posterior_sample, '../data_nc/posterior_nc.pickle')
```
### Step 6: Extract Top Norms
```
learned_expressions=Counter(map(to_tuple, posterior_sample))
top_norms_with_freq = learned_expressions.most_common(top_n)
top_norms = list(map(operator.itemgetter(0), top_norms_with_freq))
exp_posterior_df = pd.read_csv('../metrics/chain_posteriors_nc.csv', usecols=['expression','log_posterior'])
exp_posterior_df = exp_posterior_df.drop_duplicates()
exp_posterior_df['post_rank'] = exp_posterior_df['log_posterior'].rank(method='dense',ascending=False)
exp_posterior_df.sort_values('post_rank', inplace=True)
exp_posterior_df['expression'] = exp_posterior_df['expression'].transform(ast.literal_eval)
exp_posterior_df['expression'] = exp_posterior_df['expression'].transform(to_tuple)
exp_posterior_df
def log_posterior(exp, exp_lp_df):
return exp_lp_df.loc[exp_lp_df['expression'] == exp]['log_posterior'].iloc[0]
with open('../metrics/precision_recall_nc.txt', 'w') as f:
f.write(f"Number of unique Norms in sequence={len(learned_expressions)}\n")
f.write(f"Top {top_n} norms:\n")
for expression,freq in top_norms_with_freq:
f.write(f"Freq. {freq}, lp {log_posterior(expression, exp_posterior_df)}: ")
f.write(f"{expression}\n")
f.write("\n")
pr_result=performance(the_task,env,true_norm_exp,learned_expressions,
folder_name="temp",file_name="top_norm",
top_n=n,beta=1,repeat=100000,verbose=False)
top_norms[3]
true_norm_exp
```
| github_jupyter |
# Clusters as Knowledge Areas of Annotators
```
# import required packages
import sys
sys.path.append("../..")
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from annotlib import ClusterBasedAnnot
from sklearn.datasets import make_classification
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score
from sklearn.cluster import KMeans
```
A popular approach to simulate annotators is to use clustering methods.
By using clustering methods, we can emulate areas of knowledge.
The assumption is that an annotator's knowledge is not constant over the whole classification problem: there are areas where the annotator has deep knowledge and areas where the knowledge is sparse.
As the samples lie in a feature space, we can model the area of knowledge as an area in the feature space.
The simulation of annotators by means of clustering is implemented by the class [ClusterBasedAnnot](../annotlib.cluster_based.rst).
To create such annotators, you have to provide the samples `X`, their corresponding true class labels `y_true` and the cluster labels `y_cluster`.
In this section, we introduce the following simulation options:
- class labels as clustering,
- clustering algorithms to find clustering,
- and feature space as a single cluster.
The code below generates a two-dimensional (`n_features=2`) artificial data set with `n_samples=500` samples and `n_classes=4` classes.
```
X, y_true = make_classification(n_samples=500, n_features=2,
n_informative=2, n_redundant=0,
n_repeated=0, n_classes=4,
n_clusters_per_class=1,
flip_y=0.1, random_state=4)
plt.figure(figsize=(5, 3), dpi=150)
plt.scatter(X[:, 0], X[:, 1], marker='o', c=y_true, s=10)
plt.title('artificial data set: samples with class labels', fontsize=7)
plt.xticks(fontsize=7)
plt.yticks(fontsize=7)
plt.show()
```
## 1. Class Labels as Clustering
If you do not provide any cluster labels `y_cluster`, the true class labels `y_true` are assumed to form a representative clustering.
As a result, the class labels and cluster labels are equivalent (`y_cluster = y_true`) and define the knowledge areas of the simulated annotators.
To simulate annotators on this dataset, we create an instance of the [ClusterBasedAnnot](../annotlib.cluster_based.rst) class by providing the samples `X` with the true labels `y_true` as input.
```
# simulate annotators where the clusters are defined by the class labels
clust_annot_cls = ClusterBasedAnnot(X=X, y_true=y_true, random_state=42)
```
The above simulated annotators have knowledge areas defined by the class label distribution.
As a result, there are four knowledge areas respectively clusters.
In the default setting, the number of annotators is equal to the number of defined clusters.
Correspondingly, there are four simulated annotators in our example.
☝🏽An important aspect is the simulation of the labelling performances of the annotators on the different clusters.
By default, each annotator is assumed to be an expert on a single cluster.
Since we have four clusters and four annotators, each cluster has only one annotator as expert.
Being an expert means that an annotator has a higher probability for providing the correct class label for a sample than in the clusters of low expertise.
Let the number of clusters be $K$ (`n_clusters`) and the number of annotators be $A$ (`n_annotators`).
For the case $K=A$, in which annotator $a_i$ is expert on cluster $c_i$ with $i \in \{0,\dots,A-1\}$, the probability of providing the correct class label $y^{\text{true}}_\mathbf{x}$ for a sample $\mathbf{x} \in c_i$ is defined by
$$p(y^{\text{true}}_\mathbf{x} \mid \mathbf{x}, a_i, c_i) = U(0.8, 1.0)$$
where $U(a,b)$ means that a value is drawn uniformly from the interval $[a, b]$.
In contrast for the clusters of low expertise, the default probability for providing a correct class label is defined by
$$p(y^{\text{true}}_\mathbf{x} \mid \mathbf{x}, a_i, c_j) = U\left(\frac{1}{C}, \text{min}(\frac{1}{C}+0.1,1)\right),$$
where $j=0,\dots,A-1$, $j\neq i$ and $C$ denotes the number of classes (`n_classes`).
These properties apply only for the default settings.
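The default `'one_hot'` scheme described above can be sketched as follows. This is a hand-rolled illustration of the interval construction, not the library's actual implementation; `one_hot_acc_intervals` is a hypothetical helper:

```python
import numpy as np

def one_hot_acc_intervals(n_annotators, n_clusters, n_classes):
    """Sketch of the default 'one_hot' scheme: annotator a_i is expert
    on cluster c_i; everywhere else the accuracy is near chance level."""
    low = 1.0 / n_classes                       # chance level 1/C
    acc = np.full((n_annotators, n_clusters, 2),
                  [low, min(low + 0.1, 1.0)])   # low-expertise intervals
    for i in range(min(n_annotators, n_clusters)):
        acc[i, i] = [0.8, 1.0]                  # expert interval U(0.8, 1.0)
    return acc

acc = one_hot_acc_intervals(n_annotators=4, n_clusters=4, n_classes=4)
# draw one labelling accuracy per (annotator, cluster) pair
rng = np.random.default_rng(42)
p_correct = rng.uniform(acc[..., 0], acc[..., 1])
print(acc[0])  # a_0: [0.8, 1.0] on c_0, [0.25, 0.35] elsewhere
```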
The actual labelling accuracies per cluster are plotted below, exemplarily for annotator $a_0$.
```
acc_cluster = clust_annot_cls.labelling_performance_per_cluster(accuracy_score)
x = np.arange(len(np.unique(clust_annot_cls.y_cluster_)))
plt.figure(figsize=(4, 2), dpi=150)
plt.bar(x, acc_cluster[0])
plt.xticks(x, ('cluster $c_0$', 'cluster $c_1$', 'cluster $c_2$',
'cluster $c_3$'), fontsize=7)
plt.ylabel('labelling accuracy', fontsize=7)
plt.title('labelling accuracy of annotator $a_0$',
fontsize=7)
plt.show()
```
The above figure matches the description of the default behaviour.
We can see that the accuracy of annotator $a_0$ is high in cluster $c_0$, whereas the labelling accuracy on the remaining clusters is comparable to randomly guessing of class labels.
You can also manually define properties of the annotators.
This may be interesting when you want to evaluate the performance of a developed method coping with multiple uncertain annotators.
Let's see how the ranges of uniform distributions for correct class labels on the clusters can be defined manually. For the default setting, we observe the following ranges:
```
print('ranges of uniform distributions for correct'
+' class labels on the clusters:')
for a in range(clust_annot_cls.n_annotators()):
print('annotator a_' + str(a) + ':\n'
+ str(clust_annot_cls.cluster_labelling_acc_[a]))
```
The attribute `cluster_labelling_acc_` is an array with the shape `(n_annotators, n_clusters, 2)` and can be defined by means of the parameter `cluster_labelling_acc`.
This parameter may be either a `str` or array-like.
By default, `cluster_labelling_acc='one_hot'` is valid, which indicates that each annotator is expert on one cluster.
Another option is `cluster_labelling_acc='equidistant'`, which is explained in one of the following examples.
The entry `cluster_labelling_acc_[i, j, 0]` indicates the lower limit of the uniform distribution for correct class labels of annotator $a_i$ on cluster $c_j$. Analogously, the entry `cluster_labelling_acc_[i, j, 1]` represents the upper limit.
The sampled probabilities for correct class labels are also the confidence scores of the annotators.
An illustration of the annotators $a_0$ and $a_1$ simulated with default values on the predefined data set is given in the following plots.
The confidence scores correspond to the size of the crosses and dots.
```
clust_annot_cls.plot_class_labels(X=X, y_true=y_true, annotator_ids=[0, 1],
plot_confidences=True)
print('The confidence scores correspond to the size of the crosses and dots.')
plt.tight_layout()
plt.show()
```
☝🏽To sum up, by using the true class labels `y_true` as proxy of a clustering and specifying the input parameter `cluster_labelling_acc`, annotators being experts on different classes can be simulated.
## 2. Clustering Algorithms to Find Clustering
There are several algorithms available for performing clustering on a data set. The framework *scikit-learn* provides many clustering algorithms, e.g.
- `sklearn.cluster.KMeans`,
- `sklearn.cluster.DBSCAN`,
- `sklearn.cluster.AgglomerativeClustering`,
- `sklearn.cluster.bicluster.SpectralBiclustering`,
- `sklearn.mixture.BayesianGaussianMixture`,
- and `sklearn.mixture.GaussianMixture`.
As an example, we apply the `KMeans` algorithm, a very popular clustering method.
For this purpose, you have to specify the number of clusters.
By doing so, you determine the number of different knowledge areas in the feature space for the simulated annotators.
We set `n_clusters = 3` as number of clusters.
The clusters found by `KMeans` on the previously defined data set are given in the following:
```
# standardize features of samples
X_z = StandardScaler().fit_transform(X)
# apply k-means algorithm
y_cluster_k_means = KMeans(n_clusters=3).fit_predict(X_z)
# plot found clustering
plt.figure(figsize=(5, 3), dpi=150)
plt.scatter(X[:, 0], X[:, 1], c=y_cluster_k_means, s=10)
plt.title('samples with cluster labels of k-means algorithm', fontsize=7)
plt.xticks(fontsize=7)
plt.yticks(fontsize=7)
plt.show()
```
The clusters are found on the standardised data set, so that the mean of each feature is 0 and the variance is 1.
The computed cluster labels `y_cluster` are used as input parameter to simulate two annotators, where the annotator $a_0$ is expert on two clusters and the annotator $a_1$ is expert on one cluster.
```
# define labelling accuracy ranges on three clusters for two annotators
clu_label_acc_km = np.array([[[0.8, 1], [0.8, 1], [0.3, 0.5]],
[[0.3, 0.5], [0.3, 0.5], [0.8, 1]]])
# simulate annotators
cluster_annot_kmeans = ClusterBasedAnnot(X=X, y_true=y_true,
y_cluster=y_cluster_k_means,
n_annotators=2,
cluster_labelling_acc=clu_label_acc_km)
# scatter plots of annotators
cluster_annot_kmeans.plot_class_labels(X=X, y_true=y_true,
plot_confidences=True,
annotator_ids=[0, 1])
plt.tight_layout()
plt.show()
```
☝🏽The use of different clusterings makes it possible to define almost arbitrary knowledge areas and offers huge flexibility.
However, the clusters should reflect the actual regions within a feature space.
## 3. Feature Space as a Single Cluster
Finally, you can simulate annotators whose knowledge does not depend on clusters.
Hence, their knowledge level is constant over the whole feature space.
To emulate such a behaviour, you create a clustering array `y_cluster_const`, in which all samples in the feature space are assigned to the same cluster.
```
y_cluster_const = np.zeros(len(X), dtype=int)
cluster_annot_const = ClusterBasedAnnot(X=X, y_true=y_true,
y_cluster=y_cluster_const,
n_annotators=5,
cluster_labelling_acc='equidistant')
# plot labelling accuracies
cluster_annot_const.plot_labelling_accuracy(X=X, y_true=y_true,
figsize=(4, 2), fontsize=6)
plt.show()
# print predefined labelling accuracies
print('ranges of uniform distributions for correct class '
+ 'labels on the clusters:')
for a in range(cluster_annot_const.n_annotators()):
print('annotator a_' + str(a) + ': '
+ str(cluster_annot_const.cluster_labelling_acc_[a]))
```
Five annotators are simulated whose labelling accuracy intervals are increasing with the index number of the annotator.
☝🏽The input parameter `cluster_labelling_acc='equidistant'` means that the lower bounds of the labelling accuracy intervals of two consecutive annotators are always the same distance apart.
In general, the interval of the correct labelling probability for annotator $a_i$ is computed by
$$d = \frac{1 - \frac{1}{C}}{A+1},$$
$$p(y^{(\text{true})}_\mathbf{x} \mid \mathbf{x}, a_i, c_j) = U\left(\frac{1}{C} + i \cdot d, \frac{1}{C} + 2 \cdot i \cdot d\right),$$
where $i=0,\dots,A-1$ and $j=0,\dots,K-1$, with $K$ denoting the number of clusters.
This procedure ensures that the intervals of the correct labelling probabilities are overlapping.
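The interval construction above can be sketched directly. This follows the formula as stated, with upper bounds clipped to 1 as an assumption; `equidistant_intervals` is a hypothetical helper, not annotlib's implementation:

```python
def equidistant_intervals(n_annotators, n_classes):
    """Intervals (lower, upper) per annotator for the 'equidistant'
    scheme; the clip to 1.0 is an assumption, not taken from annotlib."""
    A, C = n_annotators, n_classes
    d = (1.0 - 1.0 / C) / (A + 1)
    return [(1.0 / C + i * d, min(1.0 / C + 2 * i * d, 1.0))
            for i in range(A)]

# five annotators on a four-class problem, as in the example above
for i, (low, high) in enumerate(equidistant_intervals(5, 4)):
    print(f"annotator a_{i}: U({low:.3f}, {high:.3f})")
```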
| github_jupyter |
**This notebook is an exercise in the [Introduction to Machine Learning](https://www.kaggle.com/learn/intro-to-machine-learning) course. You can reference the tutorial at [this link](https://www.kaggle.com/dansbecker/your-first-machine-learning-model).**
---
## Recap
So far, you have loaded your data and reviewed it with the following code. Run this cell to set up your coding environment where the previous step left off.
```
# Code you have previously used to load data
import pandas as pd
# Path of the file to read
iowa_file_path = '../input/home-data-for-ml-course/train.csv'
home_data = pd.read_csv(iowa_file_path)
# Set up code checking
from learntools.core import binder
binder.bind(globals())
from learntools.machine_learning.ex3 import *
print("Setup Complete")
```
# Exercises
## Step 1: Specify Prediction Target
Select the target variable, which corresponds to the sales price. Save this to a new variable called `y`. You'll need to print a list of the columns to find the name of the column you need.
```
# print the list of columns in the dataset to find the name of the prediction target
home_data.columns
y = home_data.SalePrice
# Check your answer
step_1.check()
# The lines below will show you a hint or the solution.
# step_1.hint()
# step_1.solution()
```
## Step 2: Create X
Now you will create a DataFrame called `X` holding the predictive features.
Since you want only some columns from the original data, you'll first create a list with the names of the columns you want in `X`.
You'll use just the following columns in the list (you can copy and paste the whole list to save some typing, though you'll still need to add quotes):
* LotArea
* YearBuilt
* 1stFlrSF
* 2ndFlrSF
* FullBath
* BedroomAbvGr
* TotRmsAbvGrd
After you've created that list of features, use it to create the DataFrame that you'll use to fit the model.
```
# Create the list of features below
feature_names = ['LotArea', 'YearBuilt', '1stFlrSF', '2ndFlrSF', 'FullBath', 'BedroomAbvGr', 'TotRmsAbvGrd']
# Select data corresponding to features in feature_names
X = home_data[feature_names]
# Check your answer
step_2.check()
# step_2.hint()
# step_2.solution()
```
## Review Data
Before building a model, take a quick look at **X** to verify it looks sensible.
```
# Review data
# print description or statistics from X
X.describe()
# print the top few lines
X.head()
```
## Step 3: Specify and Fit Model
Create a `DecisionTreeRegressor` and save it as `iowa_model`. Ensure you've done the relevant import from sklearn to run this command.
Then fit the model you just created using the data in `X` and `y` that you saved above.
```
from sklearn.tree import DecisionTreeRegressor
#specify the model.
#For model reproducibility, set a numeric value for random_state when specifying the model
iowa_model = DecisionTreeRegressor(random_state=2021)
# Fit the model
iowa_model.fit(X, y)
# Check your answer
step_3.check()
# step_3.hint()
# step_3.solution()
```
## Step 4: Make Predictions
Make predictions with the model's `predict` command using `X` as the data. Save the results to a variable called `predictions`.
```
predictions = iowa_model.predict(X)
print(predictions)
# Check your answer
step_4.check()
# step_4.hint()
# step_4.solution()
```
## Think About Your Results
Use the `head` method to compare the top few predictions to the actual home values (in `y`) for those same homes. Anything surprising?
```
# You can write code in this cell
y.head()
```
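As a sketch of that comparison, with toy stand-ins for the notebook's `y` and `predictions`: a decision tree with no depth limit, evaluated on its own training data, typically reproduces the training targets exactly, which is why the comparison looks suspiciously perfect.

```python
import pandas as pd

# Toy stand-ins for the notebook's `y` (actual prices) and `predictions`
y = pd.Series([208500, 181500, 223500, 140000, 250000], name="SalePrice")
predictions = [208500.0, 181500.0, 223500.0, 140000.0, 250000.0]

comparison = pd.DataFrame({"actual": y.head(), "predicted": predictions[:5]})
print(comparison)
# On training data, an unconstrained tree matches the targets exactly
assert (comparison["actual"] == comparison["predicted"]).all()
```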
It's natural to ask how accurate the model's predictions will be and how you can improve that. That will be your next step.
# Keep Going
You are ready for **[Model Validation](https://www.kaggle.com/dansbecker/model-validation).**
---
*Have questions or comments? Visit the [Learn Discussion forum](https://www.kaggle.com/learn-forum/161285) to chat with other Learners.*
---
```
from six.moves import cPickle as pickle
import keras
from keras.models import Sequential
from keras.layers import Conv1D, MaxPooling1D, Flatten, Dense, Dropout
from keras.callbacks import ModelCheckpoint
from google.colab import drive
drive.mount('/content/drive')
data_dir = '/content/drive/My Drive/Colab Notebooks/HEX New folder'
import glob
import os
import numpy as np
import pandas as pd
import seaborn as sns
from scipy import stats
import matplotlib.pyplot as plt
import statsmodels.api as sm
from sklearn.cluster import KMeans
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score, mean_absolute_error
%matplotlib inline
# normalize inputs from 0-255 to 0-1
import keras
import tensorflow as tf
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers import Dropout
#from keras.utils import to_categorical
from tensorflow.keras.utils import to_categorical
from sklearn.preprocessing import LabelEncoder
import pandas.util.testing as tm
def ReshapeY(Y_train,n):
Y = list()
for x in Y_train:
Y.append(find_1(x, n))
Y = np.array(Y)
return Y
# look for 1 ( spoof) in each
def find_1(x, n):
if 1 in x:
res = 1
else:
res = 0
return res
def LOAD_data(path ):
filenames = glob.glob(path + "/*.csv")
dfs = []
for filename in filenames:
df=pd.read_csv(filename)
if 'le0.csv'== filename[-7:]:
df['attack'] = 0
df = df[190:]
else:
df['attack'] = 1
dfa = df['attack']
df = df[14:]
df = df.iloc[:-180]
df = df.select_dtypes(exclude=['object','bool']) #remove nan
df = df.loc[:, (df != 0).any(axis=0)] #remove zeros
df = df.drop(df.std()[(df.std() == 0)].index, axis=1) #remove equals
df=((df-df.min())/(df.max()-df.min()))*1
df['attack'] = dfa
dfs.append(df)
# Concatenate all data into one DataFrame
df = pd.concat(dfs, ignore_index=True)
#df.head()
df = df.select_dtypes(exclude=['object','bool']) #remove nan
df = df.loc[:, (df != 0).any(axis=0)] #remove zeros
df = df.drop(df.std()[(df.std() == 0)].index, axis=1) #remove equals
sf = df[['roll', 'pitch', 'heading', 'rollRate', 'pitchRate', 'yawRate',
'groundSpeed', 'altitudeRelative',
'throttlePct', 'estimatorStatus.horizPosRatio',
'estimatorStatus.vertPosRatio',
'estimatorStatus.horizPosAccuracy','gps.courseOverGround']]
scaled_data = scale(sf)
pca = PCA(n_components = 9)
pca.fit(scaled_data)
pca_data = pca.transform(scaled_data)
pca_data = pd.DataFrame(pca_data)
df_sf = pd.concat([pca_data, df[['attack']]], axis=1)
sf_t =df_sf
data_dim = sf_t.shape[1] -1
timesteps = 60
num_classes = 2
X = sf_t.drop(['attack'], axis =1).values
Y = sf_t[['attack']].values
ll = sf_t.shape[0] // timesteps
ll
x = np.array(X[0: (timesteps*ll)])
y = np.array(Y[0: (timesteps*ll)])
x.shape
X_t = np.reshape(x,(-1,timesteps,data_dim))
Y_t = np.reshape(y,(-1,timesteps,1))
Y_t = ReshapeY(Y_t,timesteps )
print(X_t.shape)
print(Y_t.shape)
# lb_make = LabelEncoder()
# Y_t = lb_make.fit_transform(Y_t)
# Y_t = tf.keras.utils.to_categorical(Y_t)
# X_t = X_t.astype("float32")
# Y_t = Y_t.astype("float32")
# X_t /= 255
return (X_t,Y_t)
def put_together(combined_array, asd):
combined_array = np.concatenate((combined_array, asd), axis=0)
#combined_array = np.delete(combined_array, 0, axis=0)
return combined_array
def Delete_first(combined_array):
combined_array = np.delete(combined_array, 0, axis=0)
return combined_array
import os
paths = []
# rootdir = r'C:\Users\lenovo\OneDrive - aggies.ncat.edu\Desktop\new correct files\HEX New folder'
for file in os.listdir(data_dir):
d = os.path.join(data_dir, file)
if os.path.isdir(d):
paths.append(d)
paths
from sklearn.preprocessing import scale
i = 0
for path in paths:
(Xa,Ya) = LOAD_data(path)
if (i == 0):
X_ = Xa
Y_ = Ya
i = i + 1
else:
X_ = np.concatenate((X_, Xa), axis=0)
Y_ = np.concatenate((Y_, Ya), axis=0)
print(X_.shape)
print(Y_.shape)
X_train_D,X_test_D, Y_train_D, Y_test_D = train_test_split(X_, Y_, test_size=0.10, random_state=1)
print(Y_test_D.shape, ':y test')
print(Y_train_D.shape, ':y train')
# normalize inputs from 0-255 to 0-1
import keras
import tensorflow as tf
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers import Dropout
#from keras.utils import to_categorical
from tensorflow.keras.utils import to_categorical
from sklearn.preprocessing import LabelEncoder
# one-hot encode the labels
num_classes = 2
Y_train_D_hot = tf.keras.utils.to_categorical(Y_train_D-1, num_classes)
Y_test_D_hot = tf.keras.utils.to_categorical(Y_test_D-1, num_classes)
# # break training set into training and validation sets
# (X_train, X_valid) = X_train_D[300:], X_train_D[:300]
# (Y_train, Y_valid) = Y_train_D_hot[300:], Y_train_D_hot[:300]
X_train,X_valid, Y_train, Y_valid = train_test_split(X_train_D, Y_train_D_hot, test_size=0.1, random_state=1)
# X_train = X_train_D
# Y_train = Y_train_D_hot
X_test = X_test_D
Y_test = Y_test_D_hot
Y_valid.shape
# X_train = np.transpose(X_train, (1, 0, 2))
# X_test = np.transpose(X_test, (1, 0, 2))
# X_valid = np.transpose(X_valid, (1, 0, 2))
# Y_train = np.transpose(Y_train, (1, 0, 2))
# Y_test = np.transpose(Y_test, (1, 0, 2))
# Y_valid = np.transpose(Y_valid, (1, 0, 2))
X_train.shape
CNNch = 9
# epch
ne = 100
modelC2 = Sequential()
#1
modelC2.add(Conv1D(filters=16, kernel_size=64,strides = 16, padding='same', activation='relu',
input_shape=(60, CNNch)))
modelC2.add(MaxPooling1D(pool_size=1))
#2
modelC2.add(Conv1D(filters=16, kernel_size=3, strides = 1, padding='same', activation='relu'))
modelC2.add(MaxPooling1D(pool_size=1))
#3
modelC2.add(Conv1D(filters=32, kernel_size=3, strides = 1, padding='same', activation='relu'))
modelC2.add(MaxPooling1D(pool_size=1))
modelC2.add(Dropout(0.2))
#4
modelC2.add(Conv1D(filters=32, kernel_size=3, strides = 1, padding='same', activation='relu'))
modelC2.add(MaxPooling1D(pool_size=1))
modelC2.add(Dropout(0.2))
#5
modelC2.add(Conv1D(filters=32, kernel_size=3, strides = 1, padding='same', activation='relu'))
# The paper uses no padding here, to make the 5th layer output width 6 (3 after pooling).
# 'same' padding seems to perform slightly better, perhaps because of more parameters,
# so we keep padding='same' even though it differs slightly from the paper.
modelC2.add(MaxPooling1D(pool_size=1))
modelC2.add(Flatten())
modelC2.add(Dense(10, activation='relu'))
modelC2.add(Dropout(0.2))
modelC2.add(Dense(2, activation='softmax'))
modelC2.summary()
# compile the model
modelC2.compile(loss='categorical_crossentropy', optimizer='rmsprop',
metrics=['accuracy'])
# train the model
checkpointer = ModelCheckpoint(filepath='CNNC2.weights.best.hdf5', verbose=1,
save_best_only=True)
hist = modelC2.fit(X_train[:,:,0:CNNch], Y_train, batch_size=32, epochs=ne,
validation_data=(X_valid[:,:,0:CNNch], Y_valid), callbacks=[checkpointer],
verbose=1, shuffle=True)
# load the weights that yielded the best validation accuracy
modelC2.load_weights('CNNC2.weights.best.hdf5')
# evaluate and print test accuracy
score = modelC2.evaluate(X_test[:,:,0:CNNch], Y_test, verbose=0)
print('\n', 'CNN Test accuracy:', score[1])
score = modelC2.evaluate(X_train[:,:,0:CNNch], Y_train, verbose=0)
print('\n', 'CNN train accuracy:', score[1])
score = modelC2.evaluate(X_valid[:,:,0:CNNch], Y_valid, verbose=0)
print('\n', 'CNN validation accuracy:', score[1])
import keras
from matplotlib import pyplot as plt
#history = model.fit(train_x, train_y,validation_split = 0.1, epochs=50, batch_size=4)
plt.plot(hist.history['accuracy'])
plt.plot(hist.history['val_accuracy'])
plt.title('Model Accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Training accuracy', 'Validation accuracy'], loc='lower right')
plt.show()
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, cm[i, j],
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
y_pred = modelC2.predict(X_test)
y_pred.round()
ypreddf = pd.DataFrame(y_pred.round())
ytestdf = pd.DataFrame(Y_test)
from sklearn.metrics import classification_report, confusion_matrix
import itertools
print (classification_report(Y_test, y_pred.round()))
cm = confusion_matrix(ytestdf[0], ypreddf[0])
cm_plot_labels = ['Normal','Spoofed']
plot_confusion_matrix(cm=cm, classes=cm_plot_labels, title='Confusion Matrix')
from sklearn.metrics import jaccard_score, f1_score, accuracy_score,recall_score, precision_score
print("Avg F1-score: %.4f" % f1_score(Y_test, y_pred.round(), average='weighted'))
print("Jaccard score: %.4f" % jaccard_score(Y_test, y_pred.round(), average='weighted'))
print("Recall score: %.4f" % recall_score(Y_test, y_pred.round(), average='weighted'))
print("Precision score: %.4f" % precision_score(Y_test, y_pred.round(), average='weighted'))
```
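The windowing logic buried in `LOAD_data` above — reshaping a flat time series into fixed-length windows and collapsing per-row labels into one label per window (spoofed if any row in the window is spoofed) — can be sketched in isolation with toy data:

```python
import numpy as np

timesteps = 4
# Toy per-row features (10 rows, 3 features) and per-row attack labels
X = np.arange(30, dtype=float).reshape(10, 3)
Y = np.array([0, 0, 0, 1, 0, 0, 0, 0, 1, 1]).reshape(-1, 1)

ll = X.shape[0] // timesteps          # number of complete windows
x = X[: timesteps * ll]               # drop the incomplete tail
y = Y[: timesteps * ll]
X_t = x.reshape(-1, timesteps, 3)     # (windows, timesteps, features)
Y_t = y.reshape(-1, timesteps, 1)
# A window counts as spoofed if any of its rows is labelled 1
Y_win = np.array([1 if 1 in w else 0 for w in Y_t])
print(X_t.shape, Y_win)               # → (2, 4, 3) [1 0]
```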
---
<a href="https://colab.research.google.com/github/VitoriaCampos/Super-Computador-Projeto-C125/blob/main/Laboratorio1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Lab #1
### Instructions
1. For each of the exercises below, do the following:
* Whenever possible, add comments to your code. Comments serve to document the code.
* Use **docstrings** briefly explaining what each implemented function does.
+ **docstrings** are the multi-line comments that appear **right after** the function header.
* Choose descriptive names for your functions and variables.
2. When you finish the lab exercises, go to the Jupyter or Colab menu and select the option to download the notebook.
* Notebooks have the .ipynb extension.
* This is the file you will submit.
* In Jupyter, go to **File** -> **Download as** -> **Notebook (.ipynb)**.
* In Colab, go to **File** -> **Download .ipynb**.
3. After downloading the notebook, go to the assignments tab in MS Teams, locate the assignment for this lab, and upload your notebook. Note that there is an option to attach files to the assignment.
**NAME:** Vitória Campos Neves
**STUDENT ID:** 651
## Exercises
#### 1) Implement 4 different functions that receive two values, x and y, and return the result of the operations below. Add a docstring to each function.
1. **addition** (`adição`):
* Example: 2 + 3 = 5
* Test code snippet:
```python
print('O resultado da adição é:', adição(2, 3))
```
2. **subtraction** (`subtração`):
* Example: 7 – 4 = 3
* Test code snippet:
```python
print('O resultado da subtração é:', subtração(7, 4))
```
3. **division** (`divisão`):
* Example: 8 / 2 = 4
* Test code snippet:
```python
print('O resultado da divisão é:', divisão(8, 2))
```
4. **multiplication** (`multiplicação`):
* Example: 3 * 5 = 15
* Test code snippet:
```python
print('O resultado da multiplicação é:', multiplicação(3, 5))
```
**Arithmetic operators**
Below is the list of arithmetic operators used in Python.
| Operator | Name | Example | Result |
|:--------:|:---------------:|:--------:|:---------:|
| + | Addition | a = 1 + 1 | 2 |
| - | Subtraction | a = 2 - 1 | 1 |
| * | Multiplication | a = 2 * 2 | 4 |
| / | Division | a = 100 / 4 | 25.0 |
| % | Modulo | a = 5 % 3 | 2 |
| ** | Exponentiation | a = 2 ** 3 | 8 |
| // | Integer division | a = 100 // 4 | 25 |
**NOTE: After implementing the functions, don't forget to invoke them with some test values, as shown above.**
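As a quick, self-contained sanity check, each row of the table above can be verified directly in Python:

```python
# Each assertion mirrors one row of the operators table
assert 1 + 1 == 2
assert 2 - 1 == 1
assert 2 * 2 == 4
assert 100 / 4 == 25.0
assert 5 % 3 == 2
assert 2 ** 3 == 8
assert 100 // 4 == 25
print("all operator examples match the table")
```

Note that `/` always returns a float, while `//` returns an int for int operands.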
#### Addition function
```
# Define the code of the 'adição' function here and then
# invoke it with some test values.
def adição(x,y):
    """Return the sum of x and y."""
    return x+y
print('O resultado da adição é:', adição(2, 3))
```
#### Subtraction function
```
# Define the code of the 'subtração' function here and then
# invoke it with some test values.
def subtração(x,y):
    """Return the difference between x and y."""
    return x-y
print('O resultado da subtração é:', subtração(7, 4))
```
#### Division function
```
# Define the code of the 'divisão' function here and then
# invoke it with some test values.
def divisão(x,y):
    """Return the quotient of x divided by y."""
    return x/y
print('O resultado da divisão é:', divisão(8, 2))
```
#### Multiplication function
```
# Define the code of the 'multiplicação' function here and then
# invoke it with some test values.
def multiplicação(x,y):
    """Return the product of x and y."""
    return x*y
print('O resultado da multiplicação é:', multiplicação(3, 5))
```
#### 2) Given the value of a restaurant bill, create a function called `gorjeta` that computes the waiter's tip, considering that the tip is always 10% of the bill.
**NOTE: After implementing the function, don't forget to invoke it with some test value, as shown in the code snippet below.**
```python
print('O valor da gorjeta é de:', gorjeta(100))
```
```
# Define the code of the 'gorjeta' (tip) function here and then
# invoke it with some test value.
def gorjeta(x):
    """Return the 10% tip for a bill of value x."""
    return x*(10/100)
print('O valor da gorjeta é de:', gorjeta(100))
```
#### 3) Run the code below and notice that an error occurs. Then fix the errors until the code runs correctly.
**Hint**: Remember what was discussed about the differences between Python and other programming languages and about defining functions in Python.
```
def foo(a,b,c):
    var = a + b * c
    return var
'''
Invoking the function called 'foo'
and printing the result it returns.
'''
print('A função foo retorna o valor: ', foo(1,2,3))
```
---
# Creating your own dataset from Google Images
*by: Francisco Ingham and Jeremy Howard. Inspired by [Adrian Rosebrock](https://www.pyimagesearch.com/2017/12/04/how-to-create-a-deep-learning-dataset-using-google-images/)*
```
!pip install fastai
#!pip install -upgrade pip
#!pip install -q fastai —upgrade pip
```
In this tutorial we will see how to easily create an image dataset through Google Images. **Note**: You will have to repeat these steps for any new category you want to Google (e.g once for dogs and once for cats).
```
from fastai.vision import *
```
## Get a list of URLs
### Search and scroll
Go to [Google Images](http://images.google.com) and search for the images you are interested in. The more specific you are in your Google Search, the better the results and the less manual pruning you will have to do.
Scroll down until you've seen all the images you want to download, or until you see a button that says 'Show more results'. All the images you scrolled past are now available to download. To get more, click on the button, and continue scrolling. The maximum number of images Google Images shows is 700.
It is a good idea to put things you want to exclude into the search query, for instance if you are searching for the Eurasian wolf, "canis lupus lupus", it might be a good idea to exclude other variants:
"canis lupus lupus" -dog -arctos -familiaris -baileyi -occidentalis
You can also limit your results to show only photos by clicking on Tools and selecting Photos from the Type dropdown.
### Download into file
Now you must run some JavaScript code in your browser which will save the URLs of all the images you want for your dataset.
Press <kbd>Ctrl</kbd><kbd>Shift</kbd><kbd>J</kbd> in Windows/Linux or <kbd>Cmd</kbd><kbd>Opt</kbd><kbd>J</kbd> in Mac, and a small window, the JavaScript 'Console', will appear. That is where you will paste the JavaScript commands.
You will need to get the urls of each of the images. Before running the following commands, you may want to disable ad blocking extensions (uBlock, AdBlockPlus etc.) in Chrome. Otherwise the window.open() command doesn't work. Then you can run the following commands:
```javascript
urls = Array.from(document.querySelectorAll('.rg_di .rg_meta')).map(el=>JSON.parse(el.textContent).ou);
window.open('data:text/csv;charset=utf-8,' + escape(urls.join('\n')));
```
### Create directory and upload urls file into your server
Choose an appropriate name for your labeled images. You can run these steps multiple times to create different labels.
```
folder = 'mountainbikes'
file = 'urls_mountainbikes.csv'
folder = 'racingcycles'
file = 'urls_racingcycles.csv'
```
You will need to run this cell once per each category.
```
path = Path('data/bikes')
dest = path/folder
dest.mkdir(parents=True, exist_ok=True)
path.ls()
```
Finally, upload your urls file. You just need to press 'Upload' in your working directory and select your file, then click 'Upload' for each of the displayed files.

## Download images
Now you will need to download your images from their respective urls.
fast.ai has a function that allows you to do just that. You just have to specify the urls filename as well as the destination folder and this function will download and save all images that can be opened. If they have some problem in being opened, they will not be saved.
Let's download our images! Notice you can choose a maximum number of images to be downloaded. In this case we will not download all the urls.
You will need to run this line once for every category.
```
file
path
folder
classes = ['mountainbikes','racingcycles']
download_images(path/file, dest, max_pics=200)
# If you have problems download, try with `max_workers=0` to see exceptions:
download_images(path/file, dest, max_pics=20, max_workers=0)
```
Then we can remove any images that can't be opened:
```
for c in classes:
print(c)
verify_images(path/c, delete=True, max_size=500)
```
## View data
```
#np.random.seed(42)
#data = ImageDataBunch.from_folder(path, train=".", valid_pct=0.2,
# ds_tfms=get_transforms(), size=224, num_workers=4).normalize(imagenet_stats)
# If you already cleaned your data, run this cell instead of the one before
np.random.seed(42)
data = ImageDataBunch.from_csv(path, folder=".", valid_pct=0.2, csv_labels='cleaned.csv',
ds_tfms=get_transforms(), size=224, num_workers=4).normalize(imagenet_stats)
```
Good! Let's take a look at some of our pictures then.
```
data.classes
data.show_batch(rows=3, figsize=(7,8))
data.classes, data.c, len(data.train_ds), len(data.valid_ds)
```
## Train model
```
learn = cnn_learner(data, models.resnet34, metrics=error_rate)
learn.fit_one_cycle(4)
learn.save('stage-1')
learn.unfreeze()
learn.lr_find()
# If the plot is not showing try to give a start and end learning rate
# learn.lr_find(start_lr=1e-5, end_lr=1e-1)
learn.recorder.plot()
learn.fit_one_cycle(2, max_lr=slice(3e-5,3e-4))
learn.save('stage-2')
```
## Interpretation
```
learn.load('stage-2');
interp = ClassificationInterpretation.from_learner(learn)
interp.plot_confusion_matrix()
```
## Cleaning Up
Some of our top losses aren't due to bad performance by our model. There are images in our data set that shouldn't be.
Using the `ImageCleaner` widget from `fastai.widgets` we can prune our top losses, removing photos that don't belong.
```
from fastai.widgets import *
```
First we need to get the file paths from our top_losses. We can do this with `.from_toplosses`. We then feed the top losses indexes and corresponding dataset to `ImageCleaner`.
Notice that the widget will not delete images directly from disk but it will create a new csv file `cleaned.csv` from where you can create a new ImageDataBunch with the corrected labels to continue training your model.
In order to clean the entire set of images, we need to create a new dataset without the split. The video lecture demonstrated the use of the `ds_type` param, which no longer has any effect. See [the thread](https://forums.fast.ai/t/duplicate-widget/30975/10) for more details.
```
db = (ImageList.from_folder(path)
.split_none()
.label_from_folder()
.transform(get_transforms(), size=224)
.databunch()
)
# If you already cleaned your data using indexes from `from_toplosses`,
# run this cell instead of the one before to proceed with removing duplicates.
# Otherwise all the results of the previous step would be overwritten by
# the new run of `ImageCleaner`.
db = (ImageList.from_csv(path, 'cleaned.csv', folder='.')
.split_none()
.label_from_df()
.transform(get_transforms(), size=224)
.databunch()
)
```
Then we create a new learner to use our new databunch with all the images.
```
learn_cln = cnn_learner(db, models.resnet34, metrics=error_rate)
learn_cln.load('stage-2');
ds, idxs = DatasetFormatter().from_toplosses(learn_cln)
```
Make sure you're running this notebook in Jupyter Notebook, not Jupyter Lab. That is accessible via [/tree](/tree), not [/lab](/lab). Running the `ImageCleaner` widget in Jupyter Lab is [not currently supported](https://github.com/fastai/fastai/issues/1539).
```
# Don't run this in google colab or any other instances running jupyter lab.
# If you do run this on Jupyter Lab, you need to restart your runtime and
# runtime state including all local variables will be lost.
ImageCleaner(ds, idxs, path)
```
If the code above does not show any GUI (images and buttons) rendered by widgets but only text output, that may be caused by a configuration problem with ipywidgets. Try the solution in this [link](https://github.com/fastai/fastai/issues/1539#issuecomment-505999861) to solve it.
Flag photos for deletion by clicking 'Delete'. Then click 'Next Batch' to delete the flagged photos and keep the rest in that row. `ImageCleaner` will show you a new row of images until there are no more to show — that is, until there are none left from `top_losses`.
You can also find duplicates in your dataset and delete them! To do this, you need to run `.from_similars` to get the potential duplicates' ids and then run `ImageCleaner` with `duplicates=True`. The API works in a similar way as with misclassified images: just choose the ones you want to delete and click 'Next Batch' until there are no more images left.
Make sure to recreate the databunch and `learn_cln` from the `cleaned.csv` file. Otherwise the file would be overwritten from scratch, losing all the results from cleaning the data from toplosses.
```
ds, idxs = DatasetFormatter().from_similars(learn_cln)
ImageCleaner(ds, idxs, path, duplicates=True)
??ImageCleaner
```
Remember to recreate your ImageDataBunch from your `cleaned.csv` to include the changes you made in your data!
## Putting your model in production
First thing first, let's export the content of our `Learner` object for production:
```
learn.export()
```
This will create a file named 'export.pkl' in the directory where we were working that contains everything we need to deploy our model (the model, the weights but also some metadata like the classes or the transforms/normalization used).
You probably want to use CPU for inference, except at massive scale (and you almost certainly don't need to train in real-time). If you don't have a GPU that happens automatically. You can test your model on CPU like so:
```
defaults.device = torch.device('cpu')
img = open_image(path/'mountainbikes'/'00000021.jpg')
img
```
We create our `Learner` in the production environment like this; just make sure that `path` contains the file 'export.pkl' from before.
```
learn = load_learner(path)
pred_class,pred_idx,outputs = learn.predict(img)
pred_class
```
So you might create a route something like this ([thanks](https://github.com/simonw/cougar-or-not) to Simon Willison for the structure of this code):
```python
@app.route("/classify-url", methods=["GET"])
async def classify_url(request):
bytes = await get_bytes(request.query_params["url"])
img = open_image(BytesIO(bytes))
_,_,losses = learner.predict(img)
return JSONResponse({
"predictions": sorted(
zip(cat_learner.data.classes, map(float, losses)),
key=lambda p: p[1],
reverse=True
)
})
```
(This example is for the [Starlette](https://www.starlette.io/) web app toolkit.)
## Things that can go wrong
- Most of the time things will train fine with the defaults
- There's not much you really need to tune (despite what you've heard!)
- Most likely are
- Learning rate
- Number of epochs
### Learning rate (LR) too high
```
learn = cnn_learner(data, models.resnet34, metrics=error_rate)
learn.fit_one_cycle(1, max_lr=0.5)
```
### Learning rate (LR) too low
```
learn = cnn_learner(data, models.resnet34, metrics=error_rate)
```
Previously we had this result:
```
Total time: 00:57
epoch train_loss valid_loss error_rate
1 1.030236 0.179226 0.028369 (00:14)
2 0.561508 0.055464 0.014184 (00:13)
3 0.396103 0.053801 0.014184 (00:13)
4 0.316883 0.050197 0.021277 (00:15)
```
```
learn.fit_one_cycle(5, max_lr=1e-5)
learn.recorder.plot_losses()
```
As well as taking a really long time, it's getting too many looks at each image, so may overfit.
### Too few epochs
```
learn = cnn_learner(data, models.resnet34, metrics=error_rate, pretrained=False)
learn.fit_one_cycle(1)
```
### Too many epochs
```
np.random.seed(42)
data = ImageDataBunch.from_folder(path, train=".", valid_pct=0.9, bs=32,
ds_tfms=get_transforms(do_flip=False, max_rotate=0, max_zoom=1, max_lighting=0, max_warp=0
),size=224, num_workers=4).normalize(imagenet_stats)
learn = cnn_learner(data, models.resnet50, metrics=error_rate, ps=0, wd=0)
learn.unfreeze()
learn.fit_one_cycle(40, slice(1e-6,1e-4))
```
---
# Meet in the Middle Attack
- Given prime `p`
- then `Zp* = {1, 2, 3, ..., p-1}`
- let `g` and `h` be elements in `Zp*`
- such that `h = g^x mod p`, where `0 < x < 2^40`
- find `x` given `h`, `g`, and `p`
# Idea
- let `B = 2^20` then `B^2 = 2^40`
- then `x = x0 * B + x1`, where `x0` and `x1` are in `{0, 1, ..., B-1}`
- The smallest x is `x = 0 * B + 0 = 0`
- The largest x is `x = B * (B-1) + B - 1 = B^2 - B + B - 1 = B^2 - 1 = 2^40 - 1`
- Then:
```
h = g^x
h = g^(x0 * B + x1)
h = g^(x0 * B) * g^(x1)
h / g^(x1) = g^(x0 * B)
```
- Find `x0` and `x1` given `g`, `h`, `B`
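A small-number sanity check of this decomposition (toy `p`, `g`, and `B` chosen only for illustration; the notebook itself uses `B = 2^20`):

```python
# Toy parameters: p prime, g a base, B a small block size for illustration
p, g = 1019, 2
B = 2 ** 4
x0, x1 = 5, 9
x = x0 * B + x1                      # x = 89

h = pow(g, x, p)
# h = g^(x0*B) * g^(x1)  (mod p)
assert h == (pow(g, x0 * B, p) * pow(g, x1, p)) % p
# Equivalently: h / g^(x1) = (g^B)^(x0)  (mod p), using the Fermat inverse
g_x1_inv = pow(pow(g, x1, p), p - 2, p)
assert (h * g_x1_inv) % p == pow(pow(g, B, p), x0, p)
print(x, h)
```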
# Strategy
- Build a hash table with key `h / g^(x1) mod p` and value `x1`, for `x1` in `{0, 1, 2, ..., 2^20 - 1}`
- For each value `x0` in `{0, 1, 2, ..., 2^20 - 1}`, check whether `(g^B)^(x0) mod p` is in the hash table. If it is, you've found `x0` and `x1`
- Return `x = x0 * B + x1`
### Modulo Division
```
(x mod p) / ( y mod p) = ((x mod p) * (y_inverse mod p)) mod p
```
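A small worked example of this rule with a toy prime `p = 13`, using Python's built-in three-argument `pow` for modular exponentiation:

```python
# Modular division: (x / y) mod p == x * y_inverse mod p, for prime p
p = 13
x, y = 7, 5
y_inv = pow(y, p - 2, p)             # Fermat: y^(p-2) mod p is the inverse of y
assert (y * y_inv) % p == 1          # definition of the modular inverse
quotient = (x * y_inv) % p
assert (quotient * y) % p == x % p   # "dividing" by y undoes multiplying by y
print(y_inv, quotient)               # → 8 4
```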
### Definition of inverse
```
Definition of modular inverse in Zp
y_inverse * y mod p = 1
```
### Inverse of `x` in `Zp*`
```
Given p is prime,
then every element x in the set Zp* = {1, ..., p - 1}
is invertible (there exists an x_inverse such that:
x_inverse * x mod p = 1)
The following holds (by Fermat's little theorem, 1640):
> x^(p - 1) mod p = 1
> x^(p - 2) * x mod p = 1
> x_inverse = x^(p - 2) mod p
```
# Notes
- Work is `2^20` multiplications and `2^20` lookups in the worst case
- If we brute forced it, we would do `2^40` multiplications
- So the work is squareroot of brute force
# Test Numbers
```
p = 134078079299425970995740249982058461274793658205923933\
77723561443721764030073546976801874298166903427690031\
858186486050853753882811946569946433649006084171
g = 11717829880366207009516117596335367088558084999998952205\
59997945906392949973658374667057217647146031292859482967\
5428279466566527115212748467589894601965568
h = 323947510405045044356526437872806578864909752095244\
952783479245297198197614329255807385693795855318053\
2878928001494706097394108577585732452307673444020333
```
# Library used
- https://gmpy2.readthedocs.io/en/latest/mpz.html
```
from gmpy2 import mpz
from gmpy2 import t_mod, invert, powmod, add, mul, is_prime
def build_table(h, g, p, B):
table, z = {}, h
g_inverse = invert(g, p)
table[h] = 0
for x1 in range(1, B):
z = t_mod(mul(z, g_inverse), p)
table[z] = x1
return table
def lookup(table, g, p, B):
gB, z = powmod(g, B, p), 1
for x0 in range(B):
if z in table:
x1 = table[z]
return x0, x1
z = t_mod(mul(z, gB), p)
return None, None
def find_x(h, g, p, B):
table = build_table(h, g, p, B)
x0, x1 = lookup(table, g, p, B)
# assert x0 != None and x1 != None
Bx0 = mul(x0, B)
x = add(Bx0, x1)
print(x0, x1)
return x
p_string = '13407807929942597099574024998205846127479365820592393377723561443721764030073546976801874298166903427690031858186486050853753882811946569946433649006084171'
g_string = '11717829880366207009516117596335367088558084999998952205599979459063929499736583746670572176471460312928594829675428279466566527115212748467589894601965568'
h_string = '3239475104050450443565264378728065788649097520952449527834792452971981976143292558073856937958553180532878928001494706097394108577585732452307673444020333'
p = mpz(p_string)
g = mpz(g_string)
h = mpz(h_string)
B = mpz(2) ** mpz(20)
assert is_prime(p)
assert g < p
assert h < p
x = find_x(h, g, p, B)
print(x)
assert h == powmod(g, x, p)
```
---