## Importing Libraries
```
import numpy as np
import pandas as pd
import seaborn as sns
import missingno as msno
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
```
## Viewing the Data
Importing the datasets obtained from the Federal Highway Police (Polícia Rodoviária Federal) website:
https://antigo.prf.gov.br/dados-abertos-acidentes
```
df_17 = pd.read_excel('acidentes2017.xlsx')
df_18 = pd.read_excel('acidentes2018.xlsx')
df_19 = pd.read_excel('acidentes2019.xlsx')
df_20 = pd.read_excel('acidentes2020.xlsx')
```
Inspecting the datasets we will use
```
df_17.info()
df_18.info()
df_19.info()
df_20.info()
```
Creating a column in each dataset containing the year of the occurrence
```
df_17['ano']=2017
df_18['ano']=2018
df_19['ano']=2019
df_20['ano']=2020
```
Creating a single dataset
```
df_17_20 = pd.concat([df_17,df_18,df_19,df_20])
```
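One detail worth noting: `pd.concat` keeps each source frame's original index, so the combined frame carries duplicate index labels (harmless here, since the CSV is later written with `index=False`). A minimal sketch with toy data, not the accident files, showing the difference `ignore_index=True` makes:

```python
import pandas as pd

# Two small frames standing in for the yearly datasets
a = pd.DataFrame({'valor': [1, 2]})
b = pd.DataFrame({'valor': [3]})

# Default concat repeats the original labels: 0, 1, 0
stacked = pd.concat([a, b])
# ignore_index=True produces a clean 0..n-1 index instead
clean = pd.concat([a, b], ignore_index=True)
print(list(stacked.index))  # [0, 1, 0]
print(list(clean.index))    # [0, 1, 2]
```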
Exporting the unified dataset under the name **df_17_20.csv**
```
df_17_20.to_csv('df_17_20.csv',index=False)
```
# Fixing the Latitude/Longitude (latlon)
Due to the size of the code, the latitude/longitude correction was done in a separate notebook, Correcao_Latilon;
its output, **df_17_20_v02.csv**, is the file we load back here.
**df_17_20** remains as a backup from before the latlon handling was considered.
## Analyzing the Data
Inspecting the dataset's information
```
df_17_20_indv = pd.read_csv('df_17_20_v02.csv')
df_17_20_indv.info()
```
Inspecting the columns
```
df_17_20_indv.columns
```
Creating a new variable, named **df_17_20_indv_cut**, with the columns we will use
```
df_17_20_indv_cut = df_17_20_indv[['id','data_inversa', 'dia_semana', 'horario', 'uf', 'causa_acidente', 'tipo_acidente',
'classificacao_acidente', 'fase_dia',
'condicao_metereologica', 'tipo_pista', 'tracado_via', 'uso_solo',
'tipo_veiculo', 'ano_fabricacao_veiculo',
'tipo_envolvido', 'idade', 'sexo', 'ilesos',
'mortos', 'latitude', 'longitude', 'ano']].copy()
```
In the new variable, we decided to group the light-injury and serious-injury columns into a single column named *feridos_cal*
```
df_17_20_indv_cut['feridos_cal'] = df_17_20_indv['feridos_leves'] + df_17_20_indv['feridos_graves']
```
According to the Federal Highway Police report, a stretch with asphalt is recorded as *Sim* and a stretch without asphalt as *Não*. Based on this, we decided to replace the values *Sim* with *'Urbano'* and *Não* with *'Rural'*
```
df_17_20_indv_cut.loc[df_17_20_indv_cut['uso_solo']=='Sim','uso_solo']='Urbano'
df_17_20_indv_cut.loc[df_17_20_indv_cut['uso_solo']=='Não','uso_solo']='Rural'
df_17_20_indv_cut.sample(9)
```
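The same recoding can be written in one pass with a dict-based `replace`, which only touches the listed values and leaves anything else (including NaN) intact. A toy sketch, not the full dataset:

```python
import pandas as pd

uso_solo = pd.Series(['Sim', 'Não', 'Sim'])
# replace with a dict maps only the listed values; others pass through unchanged
recoded = uso_solo.replace({'Sim': 'Urbano', 'Não': 'Rural'})
print(recoded.tolist())  # ['Urbano', 'Rural', 'Urbano']
```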
Checking unique values
```
df_17_20_indv_cut.nunique()
```
Checking the most frequent values in each column
```
df_17_20_indv_cut['causa_acidente'].value_counts()
df_17_20_indv_cut['tipo_acidente'].value_counts()
df_17_20_indv_cut['classificacao_acidente'].value_counts()
df_17_20_indv_cut['fase_dia'].value_counts()
df_17_20_indv_cut['condicao_metereologica'].value_counts()
```
Since we have *"Ignorado"* values, we decided to replace them with null
```
df_17_20_indv_cut['condicao_metereologica'] = df_17_20_indv_cut['condicao_metereologica'].replace('Ignorado', np.nan)
df_17_20_indv_cut['tipo_pista'].value_counts()
df_17_20_indv_cut['tracado_via'].value_counts()
```
Since we have "Não Informado" values, we decided to replace them with null
```
df_17_20_indv_cut['tracado_via'] = df_17_20_indv_cut['tracado_via'].replace('Não Informado', np.nan)
df_17_20_indv_cut['uso_solo'].value_counts()
df_17_20_indv_cut['tipo_veiculo'].value_counts()
```
Since we have "Não Informado" and "Outros" values, we decided to replace them with null
```
df_17_20_indv_cut['tipo_veiculo'] = df_17_20_indv_cut['tipo_veiculo'].replace(['Não Informado', 'Outros'], np.nan)
df_17_20_indv_cut['ano_fabricacao_veiculo'].value_counts()
```
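The scattered sentinel replacements above could also be expressed as a single nested-dict `replace` call, one entry per column. A sketch with toy data (the column names match the dataset; the values are illustrative):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'condicao_metereologica': ['Céu Claro', 'Ignorado'],
    'tracado_via': ['Reta', 'Não Informado'],
    'tipo_veiculo': ['Automóvel', 'Outros'],
})
# column -> {sentinel -> NaN}; one pass over the whole frame
sentinels = {
    'condicao_metereologica': {'Ignorado': np.nan},
    'tracado_via': {'Não Informado': np.nan},
    'tipo_veiculo': {'Não Informado': np.nan, 'Outros': np.nan},
}
df = df.replace(sentinels)
print(int(df.isnull().sum().sum()))  # 3
```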
Inspecting the vehicle manufacturing years with the help of a box plot
```
sns.boxplot(df_17_20_indv_cut['ano_fabricacao_veiculo'])
df_17_20_indv_cut['tipo_envolvido'].value_counts()
```
Since our project's question concerns the accident itself, we saw no reason to keep witness ("Testemunha") records in the dataset, as a witness neither participates in nor influences the event. Additionally, since there are only 4 "Não Informado" records, we decided to remove those as well.
```
df_17_20_indv_cut.drop(df_17_20_indv_cut.index[df_17_20_indv_cut['tipo_envolvido'] == 'Testemunha'], inplace = True)
df_17_20_indv_cut.drop(df_17_20_indv_cut.index[df_17_20_indv_cut['tipo_envolvido'] == 'Não Informado'], inplace = True)
df_17_20_indv_cut['tipo_envolvido'].value_counts()
df_17_20_indv_cut['idade'].value_counts()
```
Checking 'idade' (age) with the help of a box plot
```
sns.boxplot(df_17_20_indv_cut['idade'])
```
As we can see, there are erroneous values, with ages above 500.
We therefore used the age of the oldest person in the world (117 years) as our cutoff age
```
df_17_20_indv_cut.loc[df_17_20_indv_cut.idade>117,'idade'] = np.nan # Treat outliers as missing values...
df_17_20_indv_cut.loc[df_17_20_indv_cut.idade<0,'idade'] = np.nan # Treat outliers as missing values...
df_17_20_indv_cut['sexo'].value_counts()
```
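Both bounds can be applied in a single step with `Series.where` plus `between`, which keeps in-range values and nulls everything else. A toy sketch of the same cutoff:

```python
import pandas as pd

idade = pd.Series([30, 500, -1, 80])
# between(0, 117) is inclusive on both ends; where() nulls everything outside it
idade_limpa = idade.where(idade.between(0, 117))
print(int(idade_limpa.isnull().sum()))  # 2
```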
Since we have values equal to "Não Informado" and "Ignorado", both were replaced with nulls
```
df_17_20_indv_cut['sexo'] = df_17_20_indv_cut['sexo'].replace(['Não Informado', 'Ignorado'], np.nan)
df_17_20_indv_cut['sexo'].value_counts()
df_17_20_indv_cut['ilesos'].value_counts()
df_17_20_indv_cut['mortos'].value_counts()
df_17_20_indv_cut['ano'].value_counts()
df_17_20_indv_cut['feridos_cal'].value_counts()
```
## Replacing the Null Values
**Numeric variables**
Since the "ano_fabricacao_veiculo" column has null values, we decided to fill them with the column's mean year.
Column *ano_fabricacao_veiculo*
```
df_17_20_indv_cut['ano_fabricacao_veiculo'] = df_17_20_indv_cut['ano_fabricacao_veiculo'].fillna(df_17_20_indv_cut['ano_fabricacao_veiculo'].mean())
df_17_20_indv_cut['ano_fabricacao_veiculo'].isnull().sum()
```
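Mean imputation is the simplest option; for a distribution skewed by a few very old vehicles, the median is a common alternative worth considering. A toy sketch of the fill:

```python
import pandas as pd

anos = pd.Series([2000.0, 2010.0, None, 2020.0])
# Mean imputation, as used above; swap .mean() for .median() on skewed data
preenchido = anos.fillna(anos.mean())
print(int(preenchido.isnull().sum()))  # 0
print(preenchido.iloc[2])              # 2010.0
```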
Column *idade*
```
sns.boxplot(df_17_20_indv_cut['idade'])
df_17_20_indv_cut['idade'].isnull().sum()
```
Filling the column's missing values with the mean of the values present in 'idade'.
```
df_17_20_indv_cut['idade'] = df_17_20_indv_cut['idade'].fillna(df_17_20_indv_cut['idade'].mean())
df_17_20_indv_cut['idade'].isnull().sum()
df_17_20_indv_cut.isnull().sum()
```
With this, most of the null/missing values have been corrected, but some nominal variables are still missing in our dataset.
```
df_17_20_indv_cut.to_csv('df_ind.csv',index=False)
'''Grouping of the data and selection of the main columns to be analyzed (2017-2020).
id 0 - Unchanged
dia_semana 0 - Unchanged
horario 0 - Unchanged
uf 0 - Unchanged
causa_acidente 0 - Unchanged
tipo_acidente 0 - Unchanged
classificacao_acidente 0 - Unchanged
fase_dia 0 - Unchanged
condicao_metereologica 8597 - Ignorado -> np.nan
tipo_pista 0 - Unchanged
tracado_via 66609 - Não Informado -> np.nan
uso_solo 0 - Sim -> Urbano, Não -> Rural
tipo_veiculo 2645 - Outros and Não Informado -> np.nan
ano_fabricacao_veiculo 0 - np.nan -> mean of the manufacturing years
tipo_envolvido 0 - Não Informado and Testemunha -> dropped from the dataframe
idade 0 - 0 < idade < 117 | outliers -> np.nan -> mean age
sexo 28618 - Não Informado and Ignorado -> np.nan
ilesos 0 - Unchanged
mortos 0 - Unchanged
ano 0 - Created for tracking
feridos_cal 0 - feridos_cal = feridos_leves + feridos_graves (created)
'''
msno.matrix(df_17_20_indv_cut)
```
**Nominal variables**
```
df_ind = pd.read_csv('df_ind.csv')
df_ind.info()
def cat_plot(dataframe):
    for i in dataframe.columns:
        plt.figure(figsize=(10, 10))
        sns.barplot(x=dataframe[i].value_counts().index, y=dataframe[i].value_counts())
        plt.xticks(rotation=90)
'''Possible improvement: place the subplots on a grid, e.g. 2 columns by 10 rows'''
# fig, axes = plt.subplots(10, 2, figsize=(25,100), sharey=True)
#
# sns.barplot(ax=axes[0,0],x=df_17_20['dia_semana'].value_counts().index ,y = df_17_20['dia_semana'].value_counts())
# plt.xticks(rotation=90);
#
# sns.barplot(ax=axes[0,1],x=df_17_20['tipo_pista'].value_counts().index ,y = df_17_20['tipo_pista'].value_counts())
# plt.xticks(rotation=90);
```
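The grid layout suggested in the comments above could be sketched as follows. This helper is illustrative, not from the original notebook; it uses plain matplotlib bars for brevity, computes the number of rows from the column count, and uses `squeeze=False` so `axes` is always 2-D:

```python
import math
import matplotlib
matplotlib.use('Agg')  # headless backend; safe outside notebooks
import matplotlib.pyplot as plt
import pandas as pd

def cat_plot_grid(dataframe, n_cols=2):
    """Plot value counts of every column on one figure, n_cols plots per row."""
    n_rows = math.ceil(len(dataframe.columns) / n_cols)
    fig, axes = plt.subplots(n_rows, n_cols, squeeze=False,
                             figsize=(12, 5 * n_rows))
    for ax, col in zip(axes.ravel(), dataframe.columns):
        counts = dataframe[col].value_counts()
        ax.bar(counts.index, counts.values)
        ax.set_title(col)
        ax.tick_params(axis='x', rotation=90)
    return fig, axes

df = pd.DataFrame({'a': ['x', 'y', 'x'], 'b': ['p', 'p', 'q']})
fig, axes = cat_plot_grid(df)
print(axes.shape)  # (1, 2)
```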
As the chart below shows, several variables have null values. We could simply drop those rows, but the chart shows that most of the missing values are unrelated to missingness in other columns, so dropping would cause a large loss of data in our dataset. We therefore chose to treat these values instead.
```
# https://github.com/ResidentMario/missingno
msno.matrix(df_ind)
msno.bar(df_ind)
msno.heatmap(df_ind)
```
Checking which columns have missing data.
```
df_ind.isnull().sum()
```
With the help of bar charts, we plotted the distributions of the variables present in the dataset, for both *condicao_metereologica* and *tipo_veiculo*.
```
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 5))
plt.setp( ax1.xaxis.get_majorticklabels(),rotation=90)
plt.setp( ax2.xaxis.get_majorticklabels(),rotation=90)
sns.barplot(ax=ax1,x=df_ind['condicao_metereologica'].value_counts().index ,y = df_ind['condicao_metereologica'].value_counts())
sns.barplot(ax=ax2,x=df_ind['tipo_veiculo'].value_counts().index ,y = df_ind['tipo_veiculo'].value_counts())
```
After analyzing the charts, a few points stand out:
In the condicao_metereologica column:
1. There are 8597 missing values
This is less than 1.5% of our dataset (8597 / 623089), so we decided to drop them, since the share of missing data is low.
In the tipo_veiculo column:
2. There are 2645 missing values
This is less than 0.42% of our dataset (2645 / 623089), so we decided to drop these as well, since the share of missing data is low.
In total we will have removed about 1.8% of our dataset, with the removed rows distributed across the years.
623089 -> 611956
```
df_ind.dropna(subset=['condicao_metereologica','tipo_veiculo'],inplace=True)
df_ind.isnull().sum()
```
For the tracado_via and sexo columns, the amount of missing data is substantial, so they will be treated instead. We concluded that the missing values should be filled proportionally, i.e., they receive categories in the same proportions as the correctly filled (non-null) values.
Column *tracado_via*
```
sns.barplot(x=df_ind['classificacao_acidente'].value_counts().index ,y = df_ind['classificacao_acidente'].value_counts())
plt.xticks(rotation=90);
sns.barplot(x=df_ind['tracado_via'].value_counts().index ,y = df_ind['tracado_via'].value_counts())
plt.xticks(rotation=90);
# tracado_via ~11% missing: fill proportionally?
```
Proportion of values present in the dataset:
```
round(df_ind['tracado_via'].value_counts()/df_ind['tracado_via'].count()*100)
```
Knowing the proportions in the dataset, we need to compute how many of each category are required to keep those proportions, given that there are 65435 values to be filled.
```
round(df_ind['tracado_via'].value_counts()/df_ind['tracado_via'].count()*65435)
```
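The "how many of each category" computation can be wrapped in a small helper (illustrative, not from the notebook) built on `value_counts(normalize=True)`:

```python
import pandas as pd

def proportional_counts(series, n_missing):
    # share of each category among non-null values, scaled to n_missing
    props = series.value_counts(normalize=True)
    return (props * n_missing).round().astype(int)

tracado = pd.Series(['Reta'] * 7 + ['Curva'] * 2 + ['Ponte'] * 1)
print(proportional_counts(tracado, 10))
# Reta 7, Curva 2, Ponte 1
```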
We created a new variable, named temp, to receive the changes.
```
temp = df_ind.copy()
temp.isnull().sum()
sns.barplot(x=temp['tracado_via'].value_counts().index ,y = temp['tracado_via'].value_counts())
plt.xticks(rotation=90);
```
To solve the problem, we wrote a function that replaces the null values at random positions in the dataset, so that no bias is introduced during the fill
```
def replace_randomly_nan(df_to_replace, column, new_value, size_to_replace):
    # pick a random sample of the null rows and assign the new value in one shot
    idx = df_to_replace.loc[df_to_replace[column].isnull(), column].sample(size_to_replace).index
    df_to_replace.loc[idx, column] = new_value
replace_randomly_nan(temp,'tracado_via','Reta',45261)
```
Testing whether the function filled the 45261 values correctly.
```
temp.isnull().sum()
```
Since the function worked, it was applied to the other categories, where we needed to fill:
* Curva 11320
* Interseção de vias 3571
* Desvio Temporário 2089
* Rotatória 1381
* Retorno Regulamentado 835
* Viaduto 495
* Ponte 411
* Túnel 72
```
replace_randomly_nan(temp,'tracado_via','Curva',11320)
replace_randomly_nan(temp,'tracado_via','Interseção de vias',3571)
replace_randomly_nan(temp,'tracado_via','Desvio Temporário',2089)
replace_randomly_nan(temp,'tracado_via','Rotatória',1381)
replace_randomly_nan(temp,'tracado_via','Retorno Regulamentado',835)
replace_randomly_nan(temp,'tracado_via','Viaduto',495)
replace_randomly_nan(temp,'tracado_via','Ponte',411)
replace_randomly_nan(temp,'tracado_via','Túnel',72)
```
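The per-category calls above can also be collapsed into one vectorized step: sample replacement values directly from the observed distribution with `numpy.random.Generator.choice`. An alternative sketch, not the notebook's function (the seed is only for reproducibility):

```python
import numpy as np
import pandas as pd

def fill_proportionally(df, column, seed=0):
    rng = np.random.default_rng(seed)
    mask = df[column].isnull()
    # observed category shares among non-null values
    props = df[column].value_counts(normalize=True)
    # draw one replacement per missing cell, weighted by those shares
    df.loc[mask, column] = rng.choice(props.index.to_numpy(),
                                      size=mask.sum(), p=props.values)
    return df

df = pd.DataFrame({'tracado_via': ['Reta', 'Reta', 'Curva', None, None]})
df = fill_proportionally(df, 'tracado_via')
print(int(df['tracado_via'].isnull().sum()))  # 0
```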
Checking the proportion of the data.
```
temp['tracado_via'].value_counts()/temp['tracado_via'].count()*100
```
As we can see, the proportions remain the same.
```
fig, (ax3, ax4) = plt.subplots(1, 2, figsize=(12, 5))
plt.setp( ax3.xaxis.get_majorticklabels(),rotation=90)
plt.setp( ax4.xaxis.get_majorticklabels(),rotation=90)
sns.barplot(ax = ax3,x=temp['tracado_via'].value_counts().index ,y = temp['tracado_via'].value_counts())
sns.barplot(ax = ax4,x=df_ind['sexo'].value_counts().index ,y = df_ind['sexo'].value_counts())
sns.barplot(x=temp['tracado_via'].value_counts().index ,y = temp['tracado_via'].value_counts())
plt.xticks(rotation=90);
```
With that, the *tracado_via* column is fixed.
```
temp.isnull().sum()
```
Column *sexo*
```
sns.barplot(x=df_ind['sexo'].value_counts().index ,y = df_ind['sexo'].value_counts())
plt.xticks(rotation=90);
```
Proportion of values present in the dataset:
```
round(df_ind['sexo'].value_counts()/df_ind['sexo'].count()*100)
```
Knowing the proportions in the dataset, we need to compute how many of each category are required to keep those proportions, given that there are 26089 values to be filled.
```
round(df_ind['sexo'].value_counts()/df_ind['sexo'].count()*26089)
```
Using the same function as above, we only need to pass in the counts computed previously to perform the repair.
```
replace_randomly_nan(temp,'sexo','Masculino',19929)
replace_randomly_nan(temp,'sexo','Feminino',6160)
```
Checking the proportion of the data.
```
round(temp['sexo'].value_counts()/temp['sexo'].count()*100)
```
With this, we finish correcting this column.
```
sns.barplot(x=temp['sexo'].value_counts().index ,y = temp['sexo'].value_counts())
plt.xticks(rotation=90);
```
This concludes the cleaning of our data, together with the correction of the values that showed some kind of problem.
```
temp.isnull().sum()
msno.matrix(temp)
```
Exporting the cleaned dataset under the name **'df_acidentes.csv'**
```
temp.to_csv('df_acidentes.csv',index=False)
'''
Part 2
Data correction and cleaning (EDA and ETL)
id 0 - Unchanged
data_inversa 0 - Unchanged
dia_semana 0 - Unchanged
horario 0 - Unchanged
uf 0 - Unchanged
causa_acidente 0 - Unchanged
tipo_acidente 0 - Unchanged
classificacao_acidente 0 - Unchanged
fase_dia 0 - Unchanged
condicao_metereologica 0 - np.nan -> dropna
tipo_pista 0 - Unchanged
tracado_via 0 - np.nan -> replace_randomly_nan() - proportional filling
uso_solo 0 - Unchanged
tipo_veiculo 0 - np.nan -> dropna
ano_fabricacao_veiculo 0 - Unchanged
tipo_envolvido 0 - Unchanged
idade 0 - Unchanged
sexo 0 - np.nan -> replace_randomly_nan() - proportional filling
ilesos 0 - Unchanged
mortos 0 - Unchanged
ano 0 - Unchanged
feridos_cal 0 - Unchanged
==========================================================================================================================
--------------------------------------------------------------------------------------------------------------------------
==========================================================================================================================
Part 1
Grouping of the data and selection of the main columns to be analyzed (2017-2020).
id 0 - Unchanged
data_inversa 0 - Unchanged
dia_semana 0 - Unchanged
horario 0 - Unchanged
uf 0 - Unchanged
causa_acidente 0 - Unchanged
tipo_acidente 0 - Unchanged
classificacao_acidente 0 - Unchanged
fase_dia 0 - Unchanged
condicao_metereologica 8597 - Ignorado -> np.nan
tipo_pista 0 - Unchanged
tracado_via 66609 - Não Informado -> np.nan
uso_solo 0 - Sim -> Urbano, Não -> Rural
tipo_veiculo 2645 - Outros and Não Informado -> np.nan
ano_fabricacao_veiculo 0 - np.nan -> mean of the manufacturing years
tipo_envolvido 0 - Não Informado and Testemunha -> dropped from the dataframe
idade 0 - 0 < idade < 117 | outliers -> np.nan -> mean age
sexo 28618 - Não Informado and Ignorado -> np.nan
ilesos 0 - Unchanged
mortos 0 - Unchanged
ano 0 - Created for tracking
feridos_cal 0 - feridos_cal = feridos_leves + feridos_graves (created)
==========================================================================================================================
--------------------------------------------------------------------------------------------------------------------------
==========================================================================================================================
Part 0
id 0
pesid 4
data_inversa 0
dia_semana 0
horario 0
uf 0
br 1083
km 1083
municipio 0
causa_acidente 0
tipo_acidente 0
classificacao_acidente 0
fase_dia 0
sentido_via 0
condicao_metereologica 0
tipo_pista 0
tracado_via 0
uso_solo 0
id_veiculo 4
tipo_veiculo 0
marca 30885
ano_fabricacao_veiculo 36269
tipo_envolvido 0
estado_fisico 0
idade 58641
sexo 0
ilesos 0
feridos_leves 0
feridos_graves 0
mortos 0
latitude 0 -> treated and corrected; inconsistent values removed
longitude 0 -> treated and corrected; inconsistent values removed
regional 0
delegacia 0
uop 27830
ano 0
'''
temp[['classificacao_acidente','ilesos','feridos_cal','mortos']].sample(5)
```
# Bayesian Cognitive Modeling in PyMC3
PyMC3 port of Lee and Wagenmakers' [Bayesian Cognitive Modeling - A Practical Course](http://bayesmodels.com)
All the code is in Jupyter notebooks, with the models expressed as distributions (as in the book). For background information on the models, please consult the book. You can also compare the results with the original code associated with the book ([WinBUGS and JAGS](https://webfiles.uci.edu/mdlee/Code.zip); [Stan](https://github.com/stan-dev/example-models/tree/master/Bayesian_Cognitive_Modeling))
_All the code is currently tested under PyMC3 v3.1.rc3 with theano 0.9.0.dev_
## Part II - PARAMETER ESTIMATION
### [Chapter 3: Inferences with binomials](./ParameterEstimation/Binomial.ipynb)
[3.1 Inferring a rate](./ParameterEstimation/Binomial.ipynb#3.1-Inferring-a-rate)
[3.2 Difference between two rates](./ParameterEstimation/Binomial.ipynb#3.2-Difference-between-two-rates)
[3.3 Inferring a common rate](./ParameterEstimation/Binomial.ipynb#3.3-Inferring-a-common-rate)
[3.4 Prior and posterior prediction](./ParameterEstimation/Binomial.ipynb#3.4-Prior-and-posterior-prediction)
[3.5 Posterior prediction](./ParameterEstimation/Binomial.ipynb#3.5-Posterior-Predictive)
[3.6 Joint distributions](./ParameterEstimation/Binomial.ipynb#3.6-Joint-distributions)
### [Chapter 4: Inferences with Gaussians](./ParameterEstimation/Gaussian.ipynb)
[4.1 Inferring a mean and standard deviation](./ParameterEstimation/Gaussian.ipynb#4.1-Inferring-a-mean-and-standard-deviation)
[4.2 The seven scientists](./ParameterEstimation/Gaussian.ipynb#4.2-The-seven-scientists)
[4.3 Repeated measurement of IQ](./ParameterEstimation/Gaussian.ipynb#4.3-Repeated-measurement-of-IQ)
### [Chapter 5: Some examples of data analysis](./ParameterEstimation/DataAnalysis.ipynb)
[5.1 Pearson correlation](./ParameterEstimation/DataAnalysis.ipynb#5.1-Pearson-correlation)
[5.2 Pearson correlation with uncertainty](./ParameterEstimation/DataAnalysis.ipynb#5.2-Pearson-correlation-with-uncertainty)
[5.3 The kappa coefficient of agreement](./ParameterEstimation/DataAnalysis.ipynb#5.3-The-kappa-coefficient-of-agreement)
[5.4 Change detection in time series data](./ParameterEstimation/DataAnalysis.ipynb#5.4-Change-detection-in-time-series-data)
[5.5 Censored data](./ParameterEstimation/DataAnalysis.ipynb#5.5-Censored-data)
[5.6 Recapturing planes](./ParameterEstimation/DataAnalysis.ipynb#5.6-Recapturing-planes)
### [Chapter 6: Latent-mixture models](./ParameterEstimation/Latent-mixtureModels.ipynb)
[6.1 Exam scores](./ParameterEstimation/Latent-mixtureModels.ipynb#6.1-Exam-scores)
[6.2 Exam scores with individual differences](./ParameterEstimation/Latent-mixtureModels.ipynb#6.2-Exam-scores-with-individual-differences)
[6.3 Twenty questions](./ParameterEstimation/Latent-mixtureModels.ipynb#6.3-Twenty-questions)
[6.4 The two-country quiz](./ParameterEstimation/Latent-mixtureModels.ipynb#6.4-The-two-country-quiz)
[6.5 Assessment of malingering](./ParameterEstimation/Latent-mixtureModels.ipynb#6.5-Assessment-of-malingering)
[6.6 Individual differences in malingering](./ParameterEstimation/Latent-mixtureModels.ipynb#6.6-Individual-differences-in-malingering)
[6.7 Alzheimer’s recall test cheating](./ParameterEstimation/Latent-mixtureModels.ipynb#6.7-Alzheimer's-recall-test-cheating)
## Part III - MODEL SELECTION
### [Chapter 8: Comparing Gaussian means](./ModelSelection/ComparingGaussianMeans.ipynb)
[8.1 One-sample comparison](./ModelSelection/ComparingGaussianMeans.ipynb#8.1-One-sample-comparison)
[8.2 Order-restricted one-sample comparison](./ModelSelection/ComparingGaussianMeans.ipynb#8.2-Order-restricted-one-sample-comparison)
[8.3 Two-sample comparison](./ModelSelection/ComparingGaussianMeans.ipynb#8.3-Two-sample-comparison)
### [Chapter 9: Comparing binomial rates](./ModelSelection/ComparingBinomialRates.ipynb)
[9.1 Equality of proportions](./ModelSelection/ComparingBinomialRates.ipynb#9.1-Equality-of-proportions)
[9.2 Order-restricted equality of proportions](./ModelSelection/ComparingBinomialRates.ipynb#9.2-Order-restricted-equality-of-proportions)
[9.3 Comparing within-subject proportions](./ModelSelection/ComparingBinomialRates.ipynb#9.3-Comparing-within-subject-proportions)
[9.4 Comparing between-subject proportions](./ModelSelection/ComparingBinomialRates.ipynb#9.4-Comparing-between-subject-proportions)
[9.5 Order-restricted between-subjects comparison](./ModelSelection/ComparingBinomialRates.ipynb#9.5-Order-restricted-between-subject-proportions)
## Part IV - CASE STUDIES
### [Chapter 10: Memory retention](./CaseStudies/MemoryRetention.ipynb)
[10.1 No individual differences](./CaseStudies/MemoryRetention.ipynb#10.1-No-individual-differences)
[10.2 Full individual differences](./CaseStudies/MemoryRetention.ipynb#10.2-Full-individual-differences)
[10.3 Structured individual differences](./CaseStudies/MemoryRetention.ipynb#10.3-Structured-individual-differences)
### [Chapter 11: Signal detection theory](./CaseStudies/SignalDetectionTheory.ipynb)
[11.1 Signal detection theory](./CaseStudies/SignalDetectionTheory.ipynb#11.1-Signal-detection-theory)
[11.2 Hierarchical signal detection theory](./CaseStudies/SignalDetectionTheory.ipynb#11.2-Hierarchical-signal-detection-theory)
[11.3 Parameter expansion](./CaseStudies/SignalDetectionTheory.ipynb#11.3-Parameter-expansion)
### [Chapter 12: Psychophysical functions](./CaseStudies/PsychophysicalFunctions.ipynb)
[12.1 Psychophysical functions](./CaseStudies/PsychophysicalFunctions.ipynb#12.1-Psychophysical-functions)
[12.2 Psychophysical functions under contamination](./CaseStudies/PsychophysicalFunctions.ipynb#12.2-Psychophysical-functions-under-contamination)
### [Chapter 13: Extrasensory perception](./CaseStudies/ExtrasensoryPerception.ipynb)
[13.1 Evidence for optional stopping](./CaseStudies/ExtrasensoryPerception.ipynb#13.1-Evidence-for-optional-stopping)
[13.2 Evidence for differences in ability](./CaseStudies/ExtrasensoryPerception.ipynb#13.2-Evidence-for-differences-in-ability)
[13.3 Evidence for the impact of extraversion](./CaseStudies/ExtrasensoryPerception.ipynb#13.3-Evidence-for-the-impact-of-extraversion)
### [Chapter 14: Multinomial processing trees](./CaseStudies/MultinomialProcessingTrees.ipynb)
[14.1 Multinomial processing model of pair-clustering](./CaseStudies/MultinomialProcessingTrees.ipynb#14.1-Multinomial-processing-model-of-pair-clustering)
[14.2 Latent-trait MPT model](./CaseStudies/MultinomialProcessingTrees.ipynb#14.2-Latent-trait-MPT-model)
### [Chapter 15: The SIMPLE model of memory](./CaseStudies/TheSIMPLEModelofMemory.ipynb)
[15.1 The SIMPLE model](./CaseStudies/TheSIMPLEModelofMemory.ipynb#15.1-The-SIMPLE-model)
[15.2 A hierarchical extension of SIMPLE](./CaseStudies/TheSIMPLEModelofMemory.ipynb#15.2-A-hierarchical-extension-of-SIMPLE)
### [Chapter 16: The BART model of risk taking](./CaseStudies/TheBARTModelofRiskTaking.ipynb)
[16.1 The BART model](./CaseStudies/TheBARTModelofRiskTaking.ipynb#16.1-The-BART-model)
[16.2 A hierarchical extension of the BART model](./CaseStudies/TheBARTModelofRiskTaking.ipynb#16.2-A-hierarchical-extension-of-the-BART-model)
### [Chapter 17: The GCM model of categorization](./CaseStudies/TheGCMModelofCategorization.ipynb)
[17.1 The GCM model](./CaseStudies/TheGCMModelofCategorization.ipynb#17.1-The-GCM-model)
[17.2 Individual differences in the GCM](./CaseStudies/TheGCMModelofCategorization.ipynb#17.2-Individual-differences-in-the-GCM)
[17.3 Latent groups in the GCM](./CaseStudies/TheGCMModelofCategorization.ipynb#17.3-Latent-groups-in-the-GCM)
### [Chapter 18: Heuristic decision-making](./CaseStudies/HeuristicDecisionMaking.ipynb)
[18.1 Take-the-best](./CaseStudies/HeuristicDecisionMaking.ipynb#18.1-Take-the-best)
[18.2 Stopping](./CaseStudies/HeuristicDecisionMaking.ipynb#18.2-Stopping)
[18.3 Searching](./CaseStudies/HeuristicDecisionMaking.ipynb#18.3-Searching)
[18.4 Searching and stopping](./CaseStudies/HeuristicDecisionMaking.ipynb#18.4-Searching-and-stopping)
### [Chapter 19: Number concept development](./CaseStudies/NumberConceptDevelopment.ipynb)
[19.1 Knower-level model for Give-N](./CaseStudies/NumberConceptDevelopment.ipynb#19.1-Knower-level-model-for-Give-N)
[19.2 Knower-level model for Fast-Cards](./CaseStudies/NumberConceptDevelopment.ipynb#19.2-Knower-level-model-for-Fast-Cards)
[19.3 Knower-level model for Give-N and Fast-Cards](./CaseStudies/NumberConceptDevelopment.ipynb#19.3-Knower-level-model-for-Give-N-and-Fast-Cards)
# Creating the Florida school number crosswalks
```
from os import path
import os
import numpy as np
import pandas as pd
from fuzzywuzzy import fuzz
from fuzzywuzzy import process
```
## Assert folders are in place
```
folders = [
    '../data/intermediary/keys/',
]
for folder in folders:
    if path.exists(folder):
        print("{folder} is already here!".format(folder=folder))
    else:
        try:
            os.makedirs(folder)
        except OSError:
            print("I couldn't make {folder}!".format(folder=folder))
        else:
            print("{folder} successfully made!".format(folder=folder))
```
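Since Python 3.2, the try/except dance can be replaced by `os.makedirs(folder, exist_ok=True)`, which creates intermediate directories and is a no-op when the folder already exists. A quick sketch using a temporary directory rather than the project paths:

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    folder = os.path.join(tmp, 'data', 'intermediary', 'keys')
    # exist_ok=True makes repeated calls a no-op instead of raising OSError
    os.makedirs(folder, exist_ok=True)
    os.makedirs(folder, exist_ok=True)
    print(os.path.isdir(folder))  # True
```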
## Broward
```
keys = pd.read_csv('../data/intermediary/scorecard/names/broward.csv')
raw_attendance = pd.read_csv('../data/input/foia/broward.csv')
attendance = pd.DataFrame()
attendance['school_name'] = raw_attendance['SchoolName'].unique()
choices = keys['school_name_l'].unique()
attendance['key_guess'] = attendance['school_name'].apply(lambda x: process.extract(x, choices, limit=1, scorer=fuzz.token_sort_ratio)[0])
attendance['guess_confidence'] = attendance['key_guess'].apply(lambda x: x[1])
attendance['key_guess'] = attendance['key_guess'].apply(lambda x: x[0])
```
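`fuzz.token_sort_ratio` sorts each string's tokens before comparing, so word order does not affect the score. The standard library's `difflib` gives a rough analogue of the underlying ratio; this is a sketch of the idea, not a drop-in replacement for fuzzywuzzy:

```python
from difflib import SequenceMatcher

def token_sort_ratio(a, b):
    # sort tokens so 'ELEMENTARY PLANTATION' matches 'PLANTATION ELEMENTARY'
    sa = ' '.join(sorted(a.lower().split()))
    sb = ' '.join(sorted(b.lower().split()))
    return round(SequenceMatcher(None, sa, sb).ratio() * 100)

print(token_sort_ratio('ELEMENTARY PLANTATION', 'PLANTATION ELEMENTARY'))  # 100
```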
Check rows that have a low guess confidence score:
```
attendance[attendance['guess_confidence'] < 75]
```
Check whether the number of unique guesses equals the number of schools:
```
print("key guesses:", attendance['key_guess'].nunique())
print("Actual district schools:", attendance['school_name'].nunique())
print("Off by:", attendance['school_name'].nunique() - attendance['key_guess'].nunique())
```
List out which rows have duplicate guesses:
```
for school in attendance['key_guess'].unique():
    if (attendance[attendance['key_guess'] == school]['school_name'].nunique()) > 1:
        print("this guess used multiple times:", school)
        print('for these schools:')
        print(attendance[attendance['key_guess'].str.contains(school)]['school_name'].unique())
        print("\n")
```
List out more potential guesses for those that need fixes:
```
need_fixes = ['PINE RIDGE ALTERNATIVE CENTER', 'PLANTATION ELEMENTARY']
for need in need_fixes:
    print(need)
    for guess in process.extract(need, choices, limit=10, scorer=fuzz.token_sort_ratio):
        print("\t", guess)
```
Replace incorrect options:
```
school = 'PLANTATION ELEMENTARY'
fixed_guess = 'PLANTATION ELEMENTARY SCHOOL'
attendance['key_guess'] = np.where(attendance['school_name'] == school, fixed_guess, attendance['key_guess'])
```
Remove rows that don't seem to have matching data:
```
school = 'PINE RIDGE ALTERNATIVE CENTER'
attendance = attendance[attendance['school_name'] != school]
```
Save as a key file:
```
KEYS_FILENAME = '../data/intermediary/keys/broward.csv'
df = attendance.merge(keys, left_on='key_guess', right_on='school_name_l')
df[[
'school_name',
'key_guess',
'guess_confidence',
'district_number',
'district_name',
'school_number',
'school_name_l'
]].to_csv(KEYS_FILENAME, index=False)
```
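One way to confirm that every attendance row found a key after the merge is pandas' `indicator` flag, which adds a `_merge` column. A hedged sketch with toy frames, not the Broward files:

```python
import pandas as pd

attendance = pd.DataFrame({'key_guess': ['A', 'B']})
keys = pd.DataFrame({'school_name_l': ['A', 'B'], 'school_number': [1, 2]})
merged = attendance.merge(keys, left_on='key_guess',
                          right_on='school_name_l', how='left', indicator=True)
# '_merge' is 'both' for matched rows, 'left_only' for misses
print(bool((merged['_merge'] == 'both').all()))  # True
```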
```
!apt-get install -y -qq software-properties-common python-software-properties module-init-tools
!add-apt-repository -y ppa:alessandro-strada/ppa 2>&1 > /dev/null
!apt-get update -qq 2>&1 > /dev/null
!apt-get -y install -qq google-drive-ocamlfuse fuse
from google.colab import auth
auth.authenticate_user()
from oauth2client.client import GoogleCredentials
creds = GoogleCredentials.get_application_default()
import getpass
!google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret} < /dev/null 2>&1 | grep URL
vcode = getpass.getpass()
!echo {vcode} | google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret}
!mkdir -p drive
!google-drive-ocamlfuse drive
!ls
cd drive/Houston/
import numpy as np
import cv2
area_hsi = []
for i in range(3):
    for j in range(14):
        img_path = "area/pc_area_" + str(i) + "_" + str(j) + ".png"
        img = cv2.imread(img_path, cv2.IMREAD_GRAYSCALE)
        area_hsi.append(img)
area_hsi = np.array(area_hsi)
area_lidar = []
for i in range(14):
    img_path = "area/lidar_area_" + str(i) + ".png"
    img = cv2.imread(img_path, cv2.IMREAD_GRAYSCALE)
    area_lidar.append(img)
area_lidar = np.array(area_lidar)
hsi = []
for i in range(3):
    img_path = "pca/hsi_pc" + str(i) + ".png"
    img = cv2.imread(img_path, cv2.IMREAD_GRAYSCALE)
    hsi.append(img)
hsi = np.array(hsi)
lidar = cv2.imread("pca/lidar.png", cv2.IMREAD_GRAYSCALE)
lidar = lidar.reshape(1, lidar.shape[0], lidar.shape[1])
print(area_hsi.shape)
print(area_lidar.shape)
print(hsi.shape)
print(lidar.shape)
train_file_name = "labels/train.txt"
test_file_name = "labels/test.txt"
file = open(train_file_name)
triplets = file.read().split()
for i in range(0, len(triplets)):
    triplets[i] = triplets[i].split(",")
train_array = np.array(triplets, dtype=int)
file.close()
file = open(test_file_name)
triplets = file.read().split()
for i in range(0, len(triplets)):
    triplets[i] = triplets[i].split(",")
test_array = np.array(triplets, dtype=int)
file.close()
HEIGHT = train_array.shape[0]
WIDTH = train_array.shape[1]
area_hsi_train_data = []
area_hsi_test_data = []
area_lidar_train_data = []
area_lidar_test_data = []
hsi_train_data = []
hsi_test_data = []
lidar_train_data = []
lidar_test_data = []
train_labels = []
test_labels = []
for i in range(HEIGHT):
    for j in range(WIDTH):
        if train_array[i, j] != 0:
            area_hsi_train_data.append(area_hsi[:, i, j])
            area_lidar_train_data.append(area_lidar[:, i, j])
            hsi_train_data.append(hsi[:, i, j])
            lidar_train_data.append(lidar[:, i, j])
            train_labels.append(train_array[i, j])
        if test_array[i, j] != 0:
            area_hsi_test_data.append(area_hsi[:, i, j])
            area_lidar_test_data.append(area_lidar[:, i, j])
            hsi_test_data.append(hsi[:, i, j])
            lidar_test_data.append(lidar[:, i, j])
            test_labels.append(test_array[i, j])
area_hsi_train_data = np.array(area_hsi_train_data)
area_lidar_train_data = np.array(area_lidar_train_data)
area_hsi_test_data = np.array(area_hsi_test_data)
area_lidar_test_data = np.array(area_lidar_test_data)
hsi_train_data = np.array(hsi_train_data)
lidar_train_data = np.array(lidar_train_data)
hsi_test_data = np.array(hsi_test_data)
lidar_test_data = np.array(lidar_test_data)
train_labels = np.array(train_labels)
test_labels = np.array(test_labels)
print(area_hsi_train_data.shape)
print(area_lidar_train_data.shape)
print(area_hsi_test_data.shape)
print(area_lidar_test_data.shape)
print(hsi_train_data.shape)
print(lidar_train_data.shape)
print(hsi_test_data.shape)
print(lidar_test_data.shape)
import keras
train_one_hot = keras.utils.to_categorical(train_labels-1)
test_one_hot = keras.utils.to_categorical(test_labels-1)
print(train_one_hot.shape)
print(test_one_hot.shape)
HSI_PATCH_SIZE = 27
LiDAR_PATCH_SIZE = 41
CONV1 = 500
CONV2 = 100
FC1 = 200
FC2 = 84
LEARNING_RATE = 0.005
padded_area_hsi = np.lib.pad(area_hsi, ((0,0), (HSI_PATCH_SIZE//2, HSI_PATCH_SIZE//2), (HSI_PATCH_SIZE//2,HSI_PATCH_SIZE//2)), 'reflect')
padded_area_lidar = np.lib.pad(area_lidar, ((0,0), (LiDAR_PATCH_SIZE//2, LiDAR_PATCH_SIZE//2), (LiDAR_PATCH_SIZE//2,LiDAR_PATCH_SIZE//2)), 'reflect')
padded_hsi = np.lib.pad(hsi, ((0,0), (HSI_PATCH_SIZE//2, HSI_PATCH_SIZE//2), (HSI_PATCH_SIZE//2,HSI_PATCH_SIZE//2)), 'reflect')
padded_lidar = np.lib.pad(lidar, ((0,0), (LiDAR_PATCH_SIZE//2, LiDAR_PATCH_SIZE//2), (LiDAR_PATCH_SIZE//2,LiDAR_PATCH_SIZE//2)), 'reflect')
print(padded_area_hsi.shape)
print(padded_area_lidar.shape)
print(padded_hsi.shape)
print(padded_lidar.shape)
def get_patches(data, patch_size, row, column):
offset = patch_size // 2
row_low = row - offset
row_high = row + offset
col_low = column - offset
col_high = column + offset
    # Transpose (rather than reshape) so each band's spatial window stays intact
    # in the returned (patch_size, patch_size, bands) patch.
    return data[:, row_low:row_high + 1, col_low:col_high + 1].transpose(1, 2, 0)
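# Hedged sanity check (illustrative only; the _demo_* names are local assumptions):
# reflect-padding by patch_size//2 shifts pixel (i, j) of the original image to
# (i + patch_size//2, j + patch_size//2) in the padded array, which is why the
# patch-extraction loops below add PATCH_SIZE//2 to each index.
import numpy as np
_demo_img = np.arange(25).reshape(1, 5, 5)
_demo_pad = np.pad(_demo_img, ((0, 0), (1, 1), (1, 1)), 'reflect')
print(_demo_pad[0, 2 + 1, 2 + 1] == _demo_img[0, 2, 2])  # True: centers line up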
area_hsi_train_patches = []
area_hsi_test_patches = []
area_lidar_train_patches = []
area_lidar_test_patches = []
hsi_train_patches = []
hsi_test_patches = []
lidar_train_patches = []
lidar_test_patches = []
for i in range(HEIGHT):
for j in range(WIDTH):
if train_array[i, j] != 0:
area_hsi_train_patches.append(get_patches(padded_area_hsi, HSI_PATCH_SIZE, i+HSI_PATCH_SIZE//2, j+HSI_PATCH_SIZE//2))
area_lidar_train_patches.append(get_patches(padded_area_lidar, LiDAR_PATCH_SIZE, i+LiDAR_PATCH_SIZE//2, j+LiDAR_PATCH_SIZE//2))
hsi_train_patches.append(get_patches(padded_hsi, HSI_PATCH_SIZE, i+HSI_PATCH_SIZE//2, j+HSI_PATCH_SIZE//2))
lidar_train_patches.append(get_patches(padded_lidar, LiDAR_PATCH_SIZE, i+LiDAR_PATCH_SIZE//2, j+LiDAR_PATCH_SIZE//2))
if test_array[i, j] != 0:
area_hsi_test_patches.append(get_patches(padded_area_hsi, HSI_PATCH_SIZE, i+HSI_PATCH_SIZE//2, j+HSI_PATCH_SIZE//2))
area_lidar_test_patches.append(get_patches(padded_area_lidar, LiDAR_PATCH_SIZE, i+LiDAR_PATCH_SIZE//2, j+LiDAR_PATCH_SIZE//2))
hsi_test_patches.append(get_patches(padded_hsi, HSI_PATCH_SIZE, i+HSI_PATCH_SIZE//2, j+HSI_PATCH_SIZE//2))
lidar_test_patches.append(get_patches(padded_lidar, LiDAR_PATCH_SIZE, i+LiDAR_PATCH_SIZE//2, j+LiDAR_PATCH_SIZE//2))
area_hsi_train_patches = np.array(area_hsi_train_patches)
area_hsi_test_patches = np.array(area_hsi_test_patches)
area_lidar_train_patches = np.array(area_lidar_train_patches)
area_lidar_test_patches = np.array(area_lidar_test_patches)
hsi_train_patches = np.array(hsi_train_patches)
hsi_test_patches = np.array(hsi_test_patches)
lidar_train_patches = np.array(lidar_train_patches)
lidar_test_patches = np.array(lidar_test_patches)
print(area_hsi_train_patches.shape)
print(area_hsi_test_patches.shape)
print(area_lidar_train_patches.shape)
print(area_lidar_test_patches.shape)
print(hsi_train_patches.shape)
print(hsi_test_patches.shape)
print(lidar_train_patches.shape)
print(lidar_test_patches.shape)
from tensorflow.python.keras.models import Sequential
from tensorflow.python.keras.layers import Conv2D, Dense, Flatten
from tensorflow.python.keras.layers import InputLayer
from tensorflow.python.keras.layers import MaxPooling2D
from tensorflow.python.keras.layers import BatchNormalization, Dropout
from tensorflow.python.keras.optimizers import Adam,SGD
BANDS = area_hsi_train_patches.shape[3]
NUM_CLS = train_one_hot.shape[1]
BATCH_SIZE = 25
area_hsi_model = Sequential()
area_hsi_model.add(InputLayer(input_shape=(HSI_PATCH_SIZE, HSI_PATCH_SIZE, BANDS)))
area_hsi_model.add(Conv2D(kernel_size=6, strides=2, filters=CONV1, padding='same', activation='relu', name='conv1'))
area_hsi_model.add(BatchNormalization())
area_hsi_model.add(MaxPooling2D(pool_size=2, strides=2))
area_hsi_model.add(Conv2D(kernel_size=5, strides=2, filters=CONV2, padding='same', activation='relu', name='conv2'))
area_hsi_model.add(BatchNormalization())
area_hsi_model.add(MaxPooling2D(pool_size=2, strides=2))
area_hsi_model.add(Flatten())
area_hsi_model.add(Dense(FC1, activation='relu'))
area_hsi_model.add(Dropout(0.6))
area_hsi_model.add(Dense(FC2, activation='relu'))
area_hsi_model.add(Dropout(0.4))
area_hsi_model.add(Dense(NUM_CLS, activation='softmax'))
area_hsi_model.summary()
from tensorflow.python.keras.models import Sequential
from tensorflow.python.keras.layers import Conv2D, Dense, Flatten
from tensorflow.python.keras.layers import InputLayer
from tensorflow.python.keras.layers import MaxPooling2D
from tensorflow.python.keras.layers import BatchNormalization, Dropout
from tensorflow.python.keras.optimizers import Adam,SGD
BANDS = area_lidar_train_patches.shape[3]
NUM_CLS = train_one_hot.shape[1]
area_lidar_model = Sequential()
area_lidar_model.add(InputLayer(input_shape=(LiDAR_PATCH_SIZE, LiDAR_PATCH_SIZE, BANDS)))
area_lidar_model.add(Conv2D(kernel_size=6, strides=2, filters=CONV1, padding='same', activation='relu', name='conv1'))
area_lidar_model.add(BatchNormalization())
area_lidar_model.add(MaxPooling2D(pool_size=2, strides=2))
area_lidar_model.add(Conv2D(kernel_size=5, strides=2, filters=CONV2, padding='same', activation='relu', name='conv2'))
area_lidar_model.add(BatchNormalization())
area_lidar_model.add(MaxPooling2D(pool_size=2, strides=2))
area_lidar_model.add(Flatten())
area_lidar_model.add(Dense(FC1, activation='relu'))
area_lidar_model.add(Dropout(0.7))
area_lidar_model.add(Dense(FC2, activation='relu'))
area_lidar_model.add(Dropout(0.5))
area_lidar_model.add(Dense(NUM_CLS, activation='softmax'))
area_lidar_model.summary()
from tensorflow.python.keras.models import Sequential
from tensorflow.python.keras.layers import Conv2D, Dense, Flatten
from tensorflow.python.keras.layers import InputLayer
from tensorflow.python.keras.layers import MaxPooling2D
from tensorflow.python.keras.layers import BatchNormalization, Dropout
from tensorflow.python.keras.optimizers import Adam,SGD
BANDS = hsi_train_patches.shape[3]
NUM_CLS = train_one_hot.shape[1]
BATCH_SIZE = 25
hsi_model = Sequential()
hsi_model.add(InputLayer(input_shape=(HSI_PATCH_SIZE, HSI_PATCH_SIZE, BANDS)))
hsi_model.add(Conv2D(kernel_size=6, strides=2, filters=CONV1, padding='same', activation='relu', name='conv1'))
hsi_model.add(BatchNormalization())
hsi_model.add(MaxPooling2D(pool_size=2, strides=2))
hsi_model.add(Conv2D(kernel_size=5, strides=2, filters=CONV2, padding='same', activation='relu', name='conv2'))
hsi_model.add(BatchNormalization())
hsi_model.add(MaxPooling2D(pool_size=2, strides=2))
hsi_model.add(Flatten())
hsi_model.add(Dense(FC1, activation='relu'))
hsi_model.add(Dropout(0.75))
hsi_model.add(Dense(FC2, activation='relu'))
hsi_model.add(Dropout(0.6))
hsi_model.add(Dense(NUM_CLS, activation='softmax'))
from tensorflow.python.keras.models import Sequential
from tensorflow.python.keras.layers import Conv2D, Dense, Flatten
from tensorflow.python.keras.layers import InputLayer
from tensorflow.python.keras.layers import MaxPooling2D
from tensorflow.python.keras.layers import BatchNormalization, Dropout
from tensorflow.python.keras.optimizers import Adam,SGD
BANDS = lidar_train_patches.shape[3]
NUM_CLS = train_one_hot.shape[1]
BATCH_SIZE = 25
lidar_model = Sequential()
lidar_model.add(InputLayer(input_shape=(LiDAR_PATCH_SIZE, LiDAR_PATCH_SIZE, BANDS)))
lidar_model.add(Conv2D(kernel_size=6, strides=2, filters=CONV1, padding='same', activation='relu', name='conv1'))
lidar_model.add(BatchNormalization())
lidar_model.add(MaxPooling2D(pool_size=2, strides=2))
lidar_model.add(Conv2D(kernel_size=5, strides=2, filters=CONV2, padding='same', activation='relu', name='conv2'))
lidar_model.add(BatchNormalization())
lidar_model.add(MaxPooling2D(pool_size=2, strides=2))
lidar_model.add(Flatten())
lidar_model.add(Dense(FC1, activation='relu'))
lidar_model.add(Dropout(0.75))
lidar_model.add(Dense(FC2, activation='relu'))
lidar_model.add(Dropout(0.6))
lidar_model.add(Dense(NUM_CLS, activation='softmax'))
hsi_model.load_weights('Models/hsi_model_weights.h5')
lidar_model.load_weights('Models/lidar_model_weights.h5')
area_hsi_model.load_weights('Models/area_hsi_model_weights.h5')
area_lidar_model.load_weights('Models/area_lidar_model_weights.h5')
from operator import truediv
def AA_andEachClassAccuracy(confusion_matrix):
counter = confusion_matrix.shape[0]
list_diag = np.diag(confusion_matrix)
list_raw_sum = np.sum(confusion_matrix, axis=1)
each_acc = np.nan_to_num(truediv(list_diag, list_raw_sum))
average_acc = np.mean(each_acc)
return each_acc, average_acc
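# Hedged usage example (illustrative only, not part of the original pipeline):
# per-class accuracy is the confusion-matrix diagonal divided by the row sums,
# which is what AA_andEachClassAccuracy computes; a 2-class toy check:
import numpy as np
_toy_cm = np.array([[8, 2],
                    [1, 9]])
_toy_each = np.diag(_toy_cm) / _toy_cm.sum(axis=1)
print(_toy_each, _toy_each.mean())  # per-class approx. [0.8, 0.9], average approx. 0.85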
test_cls = test_labels - 1
prediction = []
for i in range(area_hsi_test_patches.shape[0]):
prediction.append(area_hsi_model.predict(area_hsi_test_patches[i].reshape(1, area_hsi_test_patches.shape[1], area_hsi_test_patches.shape[2], area_hsi_test_patches.shape[3])).argmax(axis=-1))
#area_hsi_model.predict(area_hsi_test_patches[0].reshape(1, area_hsi_test_patches.shape[1], area_hsi_test_patches.shape[2], area_hsi_test_patches.shape[3])).argmax(axis=-1)
prediction = np.array(prediction).ravel()  # flatten the per-sample (1,)-shaped argmax results to a 1D array
#prediction = area_hsi_model.predict(area_hsi_test_patches).argmax(axis=-1)
from sklearn import metrics, preprocessing
overall_acc = metrics.accuracy_score(prediction, test_cls)
kappa = metrics.cohen_kappa_score(prediction, test_cls)
confusion_matrix = metrics.confusion_matrix(prediction, test_cls)
each_acc, average_acc = AA_andEachClassAccuracy(confusion_matrix)
print("Overall Accuracy of test samples : ", overall_acc)
print("Average Accuracy of test samples : ", average_acc)
print("Kappa statistic of test samples : ", kappa)
print("Each class accuracy of test samples : ", each_acc)
print("Confusion matrix :", confusion_matrix)
test_cls = test_labels - 1
prediction = area_lidar_model.predict(area_lidar_test_patches).argmax(axis=-1)
from sklearn import metrics, preprocessing
overall_acc = metrics.accuracy_score(prediction, test_cls)
kappa = metrics.cohen_kappa_score(prediction, test_cls)
confusion_matrix = metrics.confusion_matrix(prediction, test_cls)
each_acc, average_acc = AA_andEachClassAccuracy(confusion_matrix)
print("Overall Accuracy of test samples : ", overall_acc)
print("Average Accuracy of test samples : ", average_acc)
print("Kappa statistic of test samples : ", kappa)
print("Each class accuracy of test samples : ", each_acc)
print("Confusion matrix :", confusion_matrix)
test_cls = test_labels - 1
prediction = hsi_model.predict(hsi_test_patches).argmax(axis=-1)
from sklearn import metrics, preprocessing
overall_acc = metrics.accuracy_score(prediction, test_cls)
kappa = metrics.cohen_kappa_score(prediction, test_cls)
confusion_matrix = metrics.confusion_matrix(prediction, test_cls)
each_acc, average_acc = AA_andEachClassAccuracy(confusion_matrix)
print("Overall Accuracy of test samples : ", overall_acc)
print("Average Accuracy of test samples : ", average_acc)
print("Kappa statistic of test samples : ", kappa)
print("Each class accuracy of test samples : ", each_acc)
print("Confusion matrix :", confusion_matrix)
test_cls = test_labels - 1
prediction = lidar_model.predict(lidar_test_patches).argmax(axis=-1)
from sklearn import metrics, preprocessing
overall_acc = metrics.accuracy_score(prediction, test_cls)
kappa = metrics.cohen_kappa_score(prediction, test_cls)
confusion_matrix = metrics.confusion_matrix(prediction, test_cls)
each_acc, average_acc = AA_andEachClassAccuracy(confusion_matrix)
print("Overall Accuracy of test samples : ", overall_acc)
print("Average Accuracy of test samples : ", average_acc)
print("Kappa statistic of test samples : ", kappa)
print("Each class accuracy of test samples : ", each_acc)
print("Confusion matrix :", confusion_matrix)
from tensorflow.python.keras.models import Model
intermediate_layer_area_hsi_model = Model(area_hsi_model.layers[0].input,outputs=area_hsi_model.layers[6].output)
intermediate_layer_area_lidar_model = Model(area_lidar_model.layers[0].input,outputs=area_lidar_model.layers[6].output)
intermediate_layer_hsi_model = Model(hsi_model.layers[0].input,outputs=hsi_model.layers[6].output)
intermediate_layer_lidar_model = Model(lidar_model.layers[0].input,outputs=lidar_model.layers[6].output)
area_hsi_train_flatten = intermediate_layer_area_hsi_model.predict(area_hsi_train_patches)
area_lidar_train_flatten = intermediate_layer_area_lidar_model.predict(area_lidar_train_patches)
area_hsi_test_flatten = intermediate_layer_area_hsi_model.predict(area_hsi_test_patches)
area_lidar_test_flatten = intermediate_layer_area_lidar_model.predict(area_lidar_test_patches)
hsi_train_flatten = intermediate_layer_hsi_model.predict(hsi_train_patches)
lidar_train_flatten = intermediate_layer_lidar_model.predict(lidar_train_patches)
hsi_test_flatten = intermediate_layer_hsi_model.predict(hsi_test_patches)
lidar_test_flatten = intermediate_layer_lidar_model.predict(lidar_test_patches)
print(area_hsi_train_flatten.shape)
print(area_lidar_train_flatten.shape)
print(area_hsi_test_flatten.shape)
print(area_lidar_test_flatten.shape)
print(hsi_train_flatten.shape)
print(lidar_train_flatten.shape)
print(hsi_test_flatten.shape)
print(lidar_test_flatten.shape)
train_fusion = np.concatenate((np.concatenate((area_hsi_train_flatten, area_lidar_train_flatten), axis=1), np.concatenate((hsi_train_flatten, lidar_train_flatten), axis=1)), axis=1)
test_fusion = np.concatenate((np.concatenate((area_hsi_test_flatten, area_lidar_test_flatten), axis=1), np.concatenate((hsi_test_flatten, lidar_test_flatten), axis=1)), axis=1)
print(train_fusion.shape)
print(test_fusion.shape)
from tensorflow.keras.layers import concatenate
from tensorflow.python.keras.models import Sequential
from tensorflow.python.keras.layers import Conv2D, Dense, Flatten
from tensorflow.python.keras.layers import InputLayer
from tensorflow.python.keras.layers import MaxPooling2D
from tensorflow.python.keras.layers import BatchNormalization, Dropout
from tensorflow.python.keras.optimizers import Adam,SGD
fusion_model = Sequential()
fusion_model.add(Dense(512, input_dim=train_fusion.shape[1], activation='relu'))
fusion_model.add(Dropout(0.8))
fusion_model.add(Dense(256, activation='relu'))
fusion_model.add(Dropout(0.7))
fusion_model.add(Dense(128, activation='relu'))
fusion_model.add(Dropout(0.6))
fusion_model.add(Dense(NUM_CLS, activation='softmax'))
sgd = SGD(lr=LEARNING_RATE/2, decay=1e-6, momentum=0.9, nesterov=True)
fusion_model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
fusion_model.summary()
history = fusion_model.fit(train_fusion, train_one_hot, batch_size=BATCH_SIZE, shuffle=True, epochs=50)
fusion_model.load_weights('Models/full_fusion_model_weights.h5')
test_cls = test_labels - 1
prediction = fusion_model.predict(test_fusion).argmax(axis=-1)
from sklearn import metrics, preprocessing
overall_acc = metrics.accuracy_score(prediction, test_cls)
kappa = metrics.cohen_kappa_score(prediction, test_cls)
confusion_matrix = metrics.confusion_matrix(prediction, test_cls)
each_acc, average_acc = AA_andEachClassAccuracy(confusion_matrix)
print("Overall Accuracy of test samples : ", overall_acc)
print("Average Accuracy of test samples : ", average_acc)
print("Kappa statistic of test samples : ", kappa)
print("Each class accuracy of test samples : ", each_acc)
print("Confusion matrix :\n", confusion_matrix)
fusion_model.save_weights('Models/full_fusion_model_weights.h5')
```
# Finite-Difference Playground: Using NRPy+-Generated C Codes in a Larger Project
## Author: Zach Etienne
### Formatting improvements courtesy Brandon Clark
## Introduction:
To illustrate how NRPy+-based codes can be used, we write a C code that makes use of the NRPy+-generated C code from the [previous module](Tutorial-Finite_Difference_Derivatives.ipynb). This is a rather silly example, as the C code generated by NRPy+ could be easily generated by hand. However, as we will see in later modules, NRPy+'s true strengths lie in its automatic handling of far more complex and generic expressions, in higher dimensions. For the time being, bear with NRPy+; its true powers will become clear soon!
<a id='toc'></a>
# Table of Contents
$$\label{toc}$$
This notebook is organized as follows
1. [Step 1](#outputc): Output the C file `finite_diff_tutorial-second_deriv.h`
1. [Step 2](#fdplayground): Finite-Difference Playground: A Complete C Code for Analyzing Finite-Difference Expressions Output by NRPy+
1. [Step 3](#exercise): Exercises to students
1. [Step 4](#latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file
<a id='outputc'></a>
# Step 1: Output the C file `finite_diff_tutorial-second_deriv.h` \[Back to [top](#toc)\]
$$\label{outputc}$$
We start with the NRPy+ code from the [previous module](Tutorial-Finite_Difference_Derivatives.ipynb), and output it to the C file `finite_diff_tutorial-second_deriv.h`.
```
# Step P1: Import needed NRPy+ core modules:
from outputC import *            # NRPy+: Core C code output module (provides lhrh)
import NRPy_param_funcs as par   # NRPy+: Parameter interface (needed for par.set_paramsvals_value below)
import finite_difference as fin # NRPy+: Finite difference C code generation module
import grid as gri # NRPy+: Functions having to do with numerical grids
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
# Set the spatial dimension to 1
par.set_paramsvals_value("grid::DIM = 1")
# Register the input gridfunction "phi" and the gridfunction to which data are output, "output":
phi, output = gri.register_gridfunctions("AUX",["phi","output"])
# Declare phi_dDD as a rank-2 indexed expression: phi_dDD[i][j] = \partial_i \partial_j phi
phi_dDD = ixp.declarerank2("phi_dDD","nosym")
# Set output to \partial_0^2 phi
output = phi_dDD[0][0]
# Output to the screen the core C code for evaluating the finite difference derivative
fin.FD_outputC("stdout",lhrh(lhs=gri.gfaccess("out_gf","output"),rhs=output))
# Now, output the above C code to a file named "finite_diff_tutorial-second_deriv.h".
fin.FD_outputC("finite_diff_tutorial-second_deriv.h",lhrh(lhs=gri.gfaccess("aux_gfs","output"),rhs=output))
```
<a id='fdplayground'></a>
# Step 2: Finite-Difference Playground: A Complete C Code for Analyzing Finite-Difference Expressions Output by NRPy+ \[Back to [top](#toc)\]
$$\label{fdplayground}$$
NRPy+ is designed to generate C code "kernels" at the heart of more advanced projects. As an example of its utility, let's now write a simple C code that imports the above file `finite_diff_tutorial-second_deriv.h` to evaluate the finite-difference second derivative of
$$f(x) = \sin(x)$$
at fourth-order accuracy. Let's call the finite-difference second derivative of $f$ evaluated at a point $x$ $f''(x)_{\rm FD}$. A fourth-order-accurate $f''(x)_{\rm FD}$ will, in the truncation-error-dominated regime, satisfy the equation
$$f''(x)_{\rm FD} = f''(x)_{\rm exact} + \mathcal{O}(\Delta x^4).$$
Therefore, the [relative error](https://en.wikipedia.org/wiki/Approximation_error) between the finite-difference derivative and the exact value should be given to good approximation by
$$E_{\rm Rel} = \left| \frac{f''(x)_{\rm FD} - f''(x)_{\rm exact}}{f''(x)_{\rm exact}}\right| \propto \Delta x^4,$$
so that (taking the logarithm of both sides of the equation):
$$\log_{10} E_{\rm Rel} = 4 \log_{10} (\Delta x) + \log_{10} (k),$$
where $k$ is the proportionality constant, divided by $f''(x)_{\rm exact}$.
Let's confirm this is true using our finite-difference playground code, which imports the NRPy+-generated C code generated above for evaluating $f''(x)_{\rm FD}$ at fourth-order accuracy, and outputs $\log_{10} (\Delta x)$ and $\log_{10} E_{\rm Rel}$ in a range of $\Delta x$ that is truncation-error dominated.
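Before moving to C, the expected scaling can be sketched directly in NumPy; this is a hedged illustration using the standard fourth-order central stencil, independent of the NRPy+-generated kernel:

```python
import numpy as np

def fpp_fd4(f, x, dx):
    # Fourth-order-accurate 5-point central stencil for the second derivative.
    return (-f(x - 2*dx) + 16*f(x - dx) - 30*f(x)
            + 16*f(x + dx) - f(x + 2*dx)) / (12*dx**2)

x0 = np.pi / 4
exact = -np.sin(x0)  # f''(x) for f(x) = sin(x)
for dx in (1e-1, 5e-2, 2.5e-2):
    rel_err = abs((fpp_fd4(np.sin, x0, dx) - exact) / exact)
    print(f"dx = {dx:.2e}   E_rel = {rel_err:.3e}")
# Halving dx should shrink E_rel by roughly 2^4 = 16 in the
# truncation-error-dominated regime.
```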
```
%%writefile finite_difference_playground.c
// Part P1: Import needed header files
#include "stdio.h" // Provides printf()
#include "stdlib.h" // Provides malloc() and free()
#include "math.h" // Provides sin()
// Part P2: Declare the IDX2(gf,i) macro, which enables us to store 2-dimensions of
// data in a 1D array. In this case, consecutive values of "i"
// ("gf" held to a fixed value) are consecutive in memory, where
// consecutive values of "gf" (fixing "i") are separated by N elements in
// memory.
#define IDX2(gf, i) ( (i) + Npts_in_stencil * (gf) )
// Part P3: Set PHIGF and OUTPUTGF macros
#define PHIGF 0
#define OUTPUTGF 1
// Part P4: Import code generated by NRPy+ to compute f''(x)
// as a finite difference derivative.
void f_dDD_FD(double *in_gfs,double *aux_gfs,const int i0,const int Npts_in_stencil,const double invdx0) {
#include "finite_diff_tutorial-second_deriv.h"
}
// Part P5: Define the function we wish to differentiate, as well as its exact second derivative:
double f(const double x) { return sin(x); } // f(x)
double f_dDD_exact(const double x) { return -sin(x); } // f''(x)
// Part P6: Define x_i = (x_0 + i*Delta_x)
double x_i(const double x_0,const int i,const double Delta_x) {
return (x_0 + (double)i*Delta_x);
}
// main() function
int main(int argc,char *argv[]) {
// Step 0: Read command-line arguments (TODO)
// Step 1: Set some needed constants
  const int Npts_in_stencil = 5; // Equal to the finite difference order (here 4), plus one.
const double PI = 3.14159265358979323846264338327950288; // The scale over which the sine function varies.
const double x_eval = PI/4.0; // x_0 = desired x at which we wish to compute f(x)
// Step 2: Evaluate f''(x_eval) using the exact expression:
double EX = f_dDD_exact(x_eval);
// Step 3: Allocate space for two gridfunctions
double *in_gfs = (double *)malloc(sizeof(double)*Npts_in_stencil*2);
// Step 4: Loop over grid spacings
for(double Delta_x = 1e-3*(2*PI);Delta_x<=1.5e-1*(2*PI);Delta_x*=1.1) {
// Step 4a: x_eval is the center point of the finite differencing stencil,
// thus x_0 = x_eval - 2*dx for fourth-order-accurate first & second finite difference derivs,
// and x_0 = x_eval - 3*dx for sixth-order-accurate first & second finite difference derivs, etc.
// In general, for the integer Npts_in_stencil, we have
// x_0 = x_eval - (double)(Npts_in_stencil/2)*Delta_x,
// where we rely upon integer arithmetic (which always rounds down) to ensure
// Npts_in_stencil/2 = 5/2 = 2 for fourth-order-accurate first & second finite difference derivs:
const double x_0 = x_eval - (double)(Npts_in_stencil/2)*Delta_x;
// Step 4b: Set \phi=PHIGF to be f(x) as defined in the
// f(const double x) function above, where x_i = stencil_start_x + i*Delta_x:
for(int ii=0;ii<Npts_in_stencil;ii++) {
in_gfs[IDX2(PHIGF, ii)] = f(x_i(x_0,ii,Delta_x));
}
// Step 4c: Set invdx0, which is needed by the NRPy+-generated "finite_diff_tutorial-second_deriv.h"
const double invdx0 = 1.0/Delta_x;
// Step 4d: Evaluate the finite-difference second derivative of f(x):
const int i0 = Npts_in_stencil/2; // The derivative is evaluated at the center of the stencil.
f_dDD_FD(in_gfs,in_gfs,i0,Npts_in_stencil,invdx0);
double FD = in_gfs[IDX2(OUTPUTGF,i0)];
// Step 4e: Print log_10(\Delta x) and log_10([relative error])
printf("%e\t%.15e\n",log10(Delta_x),log10(fabs((EX-FD)/(EX))));
}
// Step 5: Free the allocated memory for the gridfunctions.
free(in_gfs);
return 0;
}
```
Next we compile and run the C code.
```
import cmdline_helper as cmd
cmd.C_compile("finite_difference_playground.c", "fdp")
cmd.delete_existing_files("data.txt")
cmd.Execute("fdp", "", "data.txt")
```
Finally, let's plot $\log_{10} E_{\rm Rel}$ as a function of $\log_{10} (\Delta x)$. Again, the expression at fourth-order accuracy should obey
$$\log_{10} E_{\rm Rel} = 4 \log_{10} (\Delta x) + \log_{10} (k).$$
Defining $\hat{x} = \log_{10} (\Delta x)$ and $y(\hat{x})=\log_{10} E_{\rm Rel}$, we can write the above equation in the more suggestive form:
$$y(\hat{x}) = 4 \hat{x} + \log_{10} (k),$$
so $y(\hat{x}) = \log_{10} E_{\rm Rel}\left(\log_{10} (\Delta x)\right)$ should be a line with positive slope of 4.
```
%matplotlib inline
import matplotlib.pyplot as plt
# from https://stackoverflow.com/questions/12311767/how-to-plot-files-with-numpy
plt.plotfile('data.txt', delimiter = '\t', cols=(0,1), names=('log10(Delta_x)','log10([Relative Error])'))
```
A quick glance at the above plot indicates that between $\log_{10}(\Delta x) \approx -2.0$ and $\log_{10}(\Delta x) \approx -1.0$, the logarithmic relative error $\log_{10} E_{\rm Rel}$ increases by about 4, indicating a positive slope of approximately 4. Thus we have confirmed fourth-order convergence.
<a id='exercise'></a>
# Step 3: Exercises to students \[Back to [top](#toc)\]
$$\label{exercise}$$
1. Use NumPy's [`polyfit()`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.polyfit.html) function to evaluate the least-squares slope of the above line.
2. Explore $\log_{10}(\Delta x)$ outside the above (truncation-error-dominated) range. What other errors dominate outside the truncation-error-dominated regime?
3. Adjust the above NRPy+ and C codes to support 6th-order-accurate finite differencing. What should the slope of the resulting plot of $\log_{10} E_{\rm Rel}$ versus $\log_{10}(\Delta x)$ be? Explain why this case does not provide as clean a slope as the 4th-order case.
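As a hint for the first exercise, the least-squares slope can be extracted with `np.polyfit`; here is a hedged sketch on fabricated slope-4 data standing in for the two tab-separated columns of `data.txt`:

```python
import numpy as np

# Fabricated (log10 dx, log10 E_rel) pairs on an exact slope-4 line, standing in
# for the two columns written by the playground code.
log_dx = np.linspace(-3.0, -1.0, 20)
log_err = 4.0 * log_dx + 0.7
slope, intercept = np.polyfit(log_dx, log_err, 1)
print(slope)  # ~4.0
```

On real data the fitted slope will deviate slightly from 4 because of round-off and higher-order error terms.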
<a id='latex_pdf_output'></a>
# Step 4: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\]
$$\label{latex_pdf_output}$$
The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename
[Tutorial-Start_to_Finish-Finite_Difference_Playground.pdf](Tutorial-Start_to_Finish-Finite_Difference_Playground.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
```
!jupyter nbconvert --to latex --template latex_nrpy_style.tplx --log-level='WARN' Tutorial-Start_to_Finish-Finite_Difference_Playground.ipynb
!pdflatex -interaction=batchmode Tutorial-Start_to_Finish-Finite_Difference_Playground.tex
!pdflatex -interaction=batchmode Tutorial-Start_to_Finish-Finite_Difference_Playground.tex
!pdflatex -interaction=batchmode Tutorial-Start_to_Finish-Finite_Difference_Playground.tex
!rm -f Tut*.out Tut*.aux Tut*.log
```
# Getting started with Random Matrix Theory: Wigner semicircle
Author: Mirco Milletari' <milletari@gmail.com>
In this notebook we give a simple numerical validation of the Wigner semicircle distribution discussed in this [blog article](https://medium.com/cantors-paradise/getting-started-with-random-matrices-a-step-by-step-guide-81e5902384e). To run this notebook you need Python 3 together with NumPy, Matplotlib, and (optionally) timeit. If you do not have them, I suggest installing the Anaconda distribution for simplicity. This notebook has been tested on the following configuration:
- processor: 3.5 GHz Dual-Core Intel Core i7.
- Memory: 16 GB 2133 MHz LPDDR3 memory.
all the reported timings refer to this configuration.
For the sake of cleanliness, I moved the utility functions into a separate file, `src/rm_utils.py`, where you can look into the details of the implementation.
```
import os, sys
import numpy as np
from numpy.linalg import eigvalsh
from matplotlib import pyplot as plt
from timeit import default_timer
os.chdir('../')
from src.rm_utils import normal, rm_sampling, wigner, rmse, get_bulk_edge_values
print('system version:', sys.version, "\n")
print('numpy version:', np.__version__)
```
It is convenient to define a random number generator with a given seed, so that all the expressions will be reproducible throughout the notebook.
```
rng = np.random.default_rng(seed = 0)
```
We will be using numpy's normal distribution to generate instances of the random matrix. Let us check below how this works for the simple scalar case and compare the numerical result to its analytical expression:
$$ p(x) = \frac{1}{\sqrt{2 \pi \sigma^2} } e^{- \frac{x^2}{2 \sigma^2} } \equiv \mathcal{N}(0,\sigma)$$
Note that below we are using $\sigma = 1/\sqrt{N}$ in order to make contact with the normalization used for the Random Matrix result.
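The `normal` helper is imported from `src/rm_utils.py`; a minimal sketch consistent with the density above (an assumption about, not a copy of, the actual implementation) would be:

```python
import numpy as np

def normal(x, sigma):
    # Zero-mean Gaussian density N(0, sigma) evaluated at x.
    return np.exp(-x**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
```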
```
N = 2000
sigma = N**(-0.5)
x = rng.normal(scale= sigma, size= N).astype(np.float32)
z = np.arange(-.1,.1, .001, dtype= np.float32)
gauss = normal(z, sigma)
plt.figure(figsize=(8, 5))
plt.plot(z,gauss)
plt.hist(x, bins= 50,density=True)
plt.xlabel(r"$x$", size= 14)
plt.ylabel(r"$p(x)$", size= 14)
plt.show()
```
## Warm-up: generating a (small) single instance
As a first step we generate a Random Matrix (RM) instance $X$ using numpy's random library. We are interested in RMs whose entries are i.i.d. Gaussian distributed. We generate the RM $X$ by sampling from the normal distribution; however, the result will not be automatically symmetric. We can get the desired symmetric matrix by using the following formula:
$$X_s = \frac{X+X^T}{\sqrt{2} }$$
where the normalization has been chosen to conform to the one used in the derivation of the analytical results.
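As a quick, illustrative numerical check: each off-diagonal entry of $X_s$ is a sum of two independent $\mathcal{N}(0,\sigma)$ variables divided by $\sqrt{2}$, so this normalization keeps the entry variance at $\sigma^2$ (variable names below are local to this check):

```python
import numpy as np

rng_check = np.random.default_rng(0)  # separate generator, used for this check only
N_check, sigma_check = 400, 1.0
X_check = rng_check.normal(scale=sigma_check, size=(N_check, N_check))
Xs_check = (X_check + X_check.T) / 2**0.5
off_diag = Xs_check[~np.eye(N_check, dtype=bool)]
print(off_diag.var())  # close to sigma_check**2 = 1
```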
Let us check this explicitly with a small matrix. We create a square matrix with $N = 10$ and matrix elements sampled from a normal distribution centered at $0$ with standard deviation $\sigma = 1/\sqrt{N}$ (see [blog post](https://medium.com/cantors-paradise/getting-started-with-random-matrices-a-step-by-step-guide-81e5902384e)):
```
N= 10
sigma = N**(-0.5)
X = rng.normal(scale= sigma, size=(N,N)).astype(np.float32)
Xs= (X+X.T)/2**(0.5)
```
And simply check the symmetry condition $X_{ij} = X_{ji}$ for some choice of $i$ and $j$:
```
X[1,2] == X[2,1]
Xs[1,2] == Xs[2,1]
```
You can check all the elements at once using the equivalent matrix expression: $X_s = X_s^T$:
```
(Xs == Xs.T).all()
```
Moving to the eigenvalues, we can easily get them using numpy's linear algebra library; in particular, we are going to use [eigvalsh](https://numpy.org/doc/stable/reference/generated/numpy.linalg.eigvalsh.html#numpy.linalg.eigvalsh), which is numerically optimized to evaluate the eigenvalues of real symmetric (and complex Hermitian) matrices. Owing to their symmetry, symmetric matrices have only $N(N+1)/2$ independent elements, so the numpy function is evaluated using only the upper/lower triangular part of the matrix.
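A small, illustrative check that `eigvalsh` agrees with the generic eigenvalue routine on a symmetric matrix, and returns the eigenvalues in ascending order:

```python
import numpy as np
from numpy.linalg import eigvalsh, eigvals

rng_demo = np.random.default_rng(1)
A = rng_demo.normal(size=(5, 5))
As = (A + A.T) / 2**0.5
vals_sym = eigvalsh(As)                # real output, sorted ascending
vals_gen = np.sort(eigvals(As).real)   # generic routine, sorted for comparison
print(np.allclose(vals_sym, vals_gen))  # True
```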
These are the 10 eigenvalues of $X_s$ and their plot:
```
lambdas= eigvalsh(Xs).astype(np.float32)
lambdas
plt.figure(figsize=(8, 5))
plt.hist(lambdas, bins= N, density = True )
plt.xlabel(r"$\lambda$", size= 14)
plt.ylabel("count", size= 14)
plt.show()
```
Note that we will always plot normalized histograms, as we are interested in distributions. However, this is not much of a distribution yet, as it comes from a single realization (instance) of the random matrix. In the next section we sample a larger portion of the distribution and show how the result depends on the matrix size.
## Sampling
In this section we move on to sampling the parameter space for different sizes of the random matrix $X$. For convenience I have defined a sampling function that performs the operations described in the previous section multiple times.
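The sampling function itself is defined in the accompanying utilities and not shown here; a minimal sketch consistent with how it is called below (argument order `(n_samples, N, rng)`, returning a flat array of all sampled eigenvalues) might look like:

```python
import numpy as np
from numpy.linalg import eigvalsh

def rm_sampling(n_samples, N, rng):
    """Eigenvalues of n_samples symmetric Gaussian random matrices.

    Hypothetical re-implementation of the helper used below: entries
    are i.i.d. normal with standard deviation 1/sqrt(N), and each
    matrix is symmetrized as in the previous section.
    """
    sigma = N ** -0.5
    lambdas = np.empty((n_samples, N), dtype=np.float32)
    for i in range(n_samples):
        X = rng.normal(scale=sigma, size=(N, N)).astype(np.float32)
        Xs = (X + X.T) / 2 ** 0.5
        lambdas[i] = eigvalsh(Xs)
    return lambdas.ravel()
```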
Let us consider the same matrix size $N=10$ and draw 100 samples of it
```
N = 10
n_samples = 100
lambdas = rm_sampling(n_samples, N, rng)
plt.figure(figsize=(8, 5))
plt.hist( lambdas, bins= int(n_samples**(0.5)), density=True )
plt.xlabel(r"$\lambda$", size= 14)
plt.ylabel(r"$\rho$", size= 14)
plt.show()
```
Now this starts looking like a distribution, even though it is still quite choppy. As we draw more samples, the distribution begins to take a better shape:
```
n_samples= 1000
N = 10
lambdas = rm_sampling(n_samples, N, rng)
plt.figure(figsize=(8, 5))
plt.hist( lambdas, bins= int(n_samples**(0.5)), density = True )
plt.xlabel(r"$\lambda$", size= 14)
plt.ylabel(r"$\rho$", size= 14)
plt.show()
```
### Wigner law
At this point we can compare the numerical result with the Wigner law we found in the [blog article](https://medium.com/cantors-paradise/getting-started-with-random-matrices-a-step-by-step-guide-81e5902384e); for convenience, let me rewrite the final result below:
$$
\rho(\lambda) = \begin{cases}
0 & |\lambda| > 2 \\
\frac{1}{2\pi} \sqrt{4 - \lambda^2} & |\lambda| \le 2
\end{cases}
$$
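The `wigner` utility evaluates this density; a minimal sketch (hypothetical, vectorized over $\lambda$ and returning zero outside $[-2, 2]$) could be:

```python
import numpy as np

def wigner(l):
    """Wigner semicircle density: sqrt(4 - l**2) / (2*pi) for |l| <= 2.

    Sketch of the utility function used below; zero outside [-2, 2].
    """
    l = np.asarray(l, dtype=np.float32)
    inside = np.abs(l) <= 2.0
    rho = np.sqrt(np.clip(4.0 - l ** 2, 0.0, None)) / (2.0 * np.pi)
    return np.where(inside, rho, 0.0)
```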
We can visualize this function in its domain of validity using the `wigner` utility function:
```
l = np.arange(-2.,2., .001, dtype= np.float32)
rho = wigner(l)
plt.figure(figsize=(8, 5))
plt.plot(l,rho)
plt.xlabel(r"$\lambda$", size= 14)
plt.ylabel(r"$\rho$", size= 14)
plt.show()
```
## Comparisons
Let us plot our findings together. You can see that the numerical results start looking like the analytical ones, even though there are some deviations, especially in the tails. You should not be surprised by that, since the analytical expression was obtained in the limit of large $N$, and clearly $N=10$ is not that large. The difference between the large and finite $N$ expressions is generally known as a finite size correction; we will come back to it later.
```
n_samples = 1000
N = 10
lambdas = rm_sampling(n_samples, N, rng)
plt.figure(figsize=(8, 5))
plt.plot(l,rho)
plt.hist( lambdas, bins= int(n_samples**(0.5)), density= True)
plt.xlabel(r"$\lambda$", size= 14)
plt.ylabel(r"$\rho$", size= 14)
plt.show()
```
Let us now run our sampling experiment for a larger RM, say $N=500$. Note that depending on your machine, this may take a while; I report my running time below to give you an idea of how long it takes on my laptop:
```
start = default_timer()
N = 500
n_samples = 1000
lambdas = rm_sampling(n_samples, N, rng)
print('sampling completed in {} s'.format(default_timer()- start))
plt.figure(figsize=(8, 5))
plt.plot(l,rho, color= 'red', linewidth=2)
r= plt.hist( lambdas, bins=int(n_samples**(0.5)), density= True, color= 'lightblue')
plt.plot(r[1][:-1], r[0], marker='x', linestyle=' ', markersize=8, color= 'black')
plt.xlabel(r"$\lambda$", size= 14)
plt.ylabel(r"$\rho$", size= 14)
plt.show()
```
You can see how the tails now "shrink" to lie almost inside the boundaries of the semicircle. Here I have also plotted the bin values of the numerics as an "x" to better show the difference. Clearly the result gets better as the matrix size increases. To get a better, quantitative understanding of what "better" means, in the next section I evaluate the error between the numerical and analytical expressions as a function of $N$.
## Finite Size Error
In this last section I present a simple scaling analysis of the result. In particular, we are going to evaluate the Root Mean Square Error (RMSE) between the numerical and analytical results as a function of the matrix size. For completeness, let me remind you of its definition:
$$ rmse(\rho, \hat{\rho}) = \sqrt{ \frac{1}{N} \sum_{i=1}^N (\rho_i - \hat{\rho}_i )^2 } $$
where $\rho$ is the "measured" numerical value and $\hat{\rho}$ the analytical expression obtained in the large $N$ limit. In `rm_utils.py` I have defined two functions:
- `rmse`: evaluates the RMSE as explained above.
- `get_bulk_edge_values`: separates the bulk numerical eigenvalues, i.e. those $\lambda \in [-2, 2]$, from the tails.

What we want to verify is that $rmse(\rho_b, \hat{\rho}) \to 0$ as $N$ increases (b stands for bulk), and that the tail values vanish as well. For consistency, I will evaluate the latter in terms of the error $rmse(\rho_t, 0)$ for the tails ... I know it is a bit of overkill, but it doesn't cost us much extra effort :)
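For reference, minimal sketches of these two helpers, consistent with how they are called below (the actual implementations live in `rm_utils.py`; using the left bin edges as the $\lambda$ values is an assumption):

```python
import numpy as np

def rmse(measured, expected):
    """Root mean square error; `expected` may be an array or a scalar."""
    measured = np.asarray(measured, dtype=np.float32)
    return float(np.sqrt(np.mean((measured - expected) ** 2)))

def get_bulk_edge_values(counts, bins):
    """Split histogram values into bulk (|lambda| <= 2) and tails."""
    lambdas = bins[:-1]               # left edge of each bin
    bulk = np.abs(lambdas) <= 2.0
    return counts[bulk], lambdas[bulk], counts[~bulk], lambdas[~bulk]
```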
Note that running this section may take a while, depending on your machine; the reported running time refers to the configuration described at the beginning of the notebook.
```
sizes=[10, 50, 100, 300, 500, 1000]
n_samples= 1000
bulk_errors = []
tails_errors= []
data = []
start=default_timer()
for n in sizes:
lambdas = rm_sampling(n_samples, n, rng)
counts, bins = np.histogram(lambdas, bins= n, density=True)
bulk_rho, bulk_lambdas, tails_rho, tails_lambdas = get_bulk_edge_values(counts.astype(np.float32),
bins.astype(np.float32)
)
rho_n = wigner(bulk_lambdas)
bulk_errors.append(rmse(bulk_rho, rho_n))
tails_errors.append(rmse(tails_rho, 0))
print('sampling of {} random matrices completed in {} s'.format( len(sizes), default_timer()- start))
fig, ax = plt.subplots(figsize= [8,5])
ax.plot(sizes, bulk_errors, marker='x', markersize=8, color= 'black', label='bulk_rmse')
ax.plot(sizes, tails_errors, marker='.', markersize=8, color= 'blue', label='tails_rmse')
ax.set_title('RMSE for bulk/tails eigenvalue distribution')
ax.set_xlabel('N', size=14)
ax.set_ylabel('rmse', size=14)
ax.legend(prop={'size': 14})
plt.show()
```
You can see how the error goes to zero as the matrix size increases, both for the bulk and the tails of the distribution.
# Drug Type Predictor
This notebook creates a model to predict the drug to prescribe to a patient, given their demographic and other clinical data.
The dataset is taken from [this Kaggle Dataset](https://www.kaggle.com/prathamtripathi/drug-classification)
# The Dataset
The dataset's target variable (drug type) contains 5 different medications: Drug A, Drug B, Drug C, Drug X, and Drug Y.
The feature sets of this dataset are Age, Sex, Blood Pressure, and Cholesterol of patients, and the target is the drug that each patient responded to.
We can see that it is an example of a multiclass classification problem.
First let's import the libraries
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
sns.set(color_codes=True)
```
# The Data
Read the data using pandas.
```
df = pd.read_csv('Drug Data.csv', delimiter=",")
df.head()
```
Some info on the dataset
```
df.info()
df.describe(include='all')
```
Alright. First, we need to encode the categorical features.
## Label Encoding
As you may figure out, some features in this dataset are categorical such as **Sex** or **BP**.
Unfortunately, Sklearn Decision Trees do not handle categorical variables.
So, we convert them into numerical values using `LabelEncoder` from the `sklearn` library.
```
from sklearn import preprocessing
le_sex = preprocessing.LabelEncoder()
le_sex.fit(['F','M'])
df.loc[:,'Sex'] = le_sex.transform(df.loc[:,'Sex'])
le_BP = preprocessing.LabelEncoder()
le_BP.fit([ 'LOW', 'NORMAL', 'HIGH'])
df.loc[:,'BP'] = le_BP.transform(df.loc[:,'BP'])
le_Chol = preprocessing.LabelEncoder()
le_Chol.fit([ 'NORMAL', 'HIGH'])
df.loc[:,'Cholesterol'] = le_Chol.transform(df.loc[:,'Cholesterol'])
df.head()
```
Now we should also save the data.
```
df.to_csv('Label Encoded Drug Data.csv')
```
# Exploratory Analysis
Now we need to explore the data and see its connection with the target variable.
First, let's create box plots to see the relation between the continuous variables and the target, and heatmaps to see the relation between the categorical variables and the target.
We use the non-encoded data here to better see how the relationships play out.
```
df = pd.read_csv('Drug Data.csv')
continuous = ['Age', 'Na_to_K']
categorical = ['Sex', 'BP', 'Cholesterol']
for feature in continuous:
plt.figure(figsize=(10,8))
sns.boxplot(y=feature, x='Drug', data=df)
for feature in categorical:
plt.figure(figsize=(10,8))
heatmap_df = pd.crosstab(df['Drug'], df[feature])
sns.heatmap(data=heatmap_df, annot=True, cmap='Blues')
```
It seems that every variable has the potential to help us in classification, so we will use all of them to try to predict the drug type for our patient.
# Modeling
From the graphs, it seems as if each variable can help us divide into fairly discrete groups of drug type to choose from.
Also, the dataset is not too big. So we can be fairly liberal with the time/space complexity too.
So, let's try all the different models and see which one would work the best.
## Testing Different Models
```
df = pd.read_csv('Label Encoded Drug Data.csv')
X = df[['Age', 'Sex', 'BP', 'Cholesterol', 'Na_to_K']]
y = df['Drug']
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
models = [
LogisticRegression(),
KNeighborsClassifier(),
DecisionTreeClassifier(),
SVC()
]
for model in models:
cv_score = cross_val_score(estimator=model, X=X, y=y, scoring='f1_micro')
print(f'F1 Micro Score for {model}: ', cv_score, '\n')
```
It looks like Decision Tree is the clear winner.
Logistic Regression even fails to converge. Although this could likely be fixed by scaling, increasing the iteration limit, or other methods, it is doubtful it would turn out better than the Decision Tree results shown here.
So, we are going to use Decision Trees to model our Drug Type Predictor.
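As an aside, a scaling step would likely fix the convergence warning; a quick sketch (shown only to illustrate the point, on synthetic stand-in data, not used in the rest of the notebook):

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

# Standardizing features before LogisticRegression usually removes the
# convergence warning; synthetic data stands in for the drug dataset here.
X_demo, y_demo = make_classification(n_samples=100, n_features=5, random_state=0)
scaled_lr = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(scaled_lr, X_demo, y_demo, scoring='f1_micro')
```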
# Decision Tree Modeling
To build the decision tree we use the `DecisionTreeClassifier` from `sklearn`.
We use entropy as the splitting criterion rather than Gini, since it is often considered a slightly better metric and time/space complexity is not an issue here.
## Split into train/test
We will use a __train/test split__ to separate the data into training and testing sets.
Let's import first.
```
from sklearn.model_selection import train_test_split
```
We will be splitting in the train/test ratio of 7/3 with random state as 3.
```
X_trainset, X_testset, y_trainset, y_testset = train_test_split(X, y, test_size=0.3, random_state=3)
drugTree = DecisionTreeClassifier(criterion='entropy')
```
Next, we will fit the data with the training feature matrix _X_trainset_ and response vector _y_trainset_
```
drugTree.fit(X_trainset,y_trainset)
```
# Prediction and Evaluation
Now, we need to make predictions on the testing dataset and then use metrics to evaluate our model.
```
y_hat = drugTree.predict(X_testset)
```
Next, we need to evaluate it.
We would be using `metrics` in `sklearn`.
```
from sklearn import metrics
print("DecisionTrees's Accuracy: ", metrics.accuracy_score(y_testset, y_hat))
print("\n")
print("DecisionTrees's Jaccard Index (micro): ", metrics.jaccard_score(y_testset, y_hat, average='micro'))
print("DecisionTrees's Jaccard Index (macro): ", metrics.jaccard_score(y_testset, y_hat, average='macro'))
print("\n")
print("DecisionTrees's precision Score (micro): ", metrics.precision_score(y_testset, y_hat, average='micro'))
print("DecisionTrees's precision Score (macro): ", metrics.precision_score(y_testset, y_hat, average='macro'))
print("\n")
print("DecisionTrees's recall Score (micro): ", metrics.recall_score(y_testset, y_hat, average='micro'))
print("DecisionTrees's recall Score (macro): ", metrics.recall_score(y_testset, y_hat, average='macro'))
print("\n")
print("DecisionTrees's F1 Score (micro): ", metrics.f1_score(y_testset, y_hat, average='micro'))
print("DecisionTrees's F1 Score (macro): ", metrics.f1_score(y_testset, y_hat, average='macro'))
```
Those are good scores. So, we can confidently say that our model is decent and is able to predict the drug for the patient to take with high accuracy.
# Grid Search
Let's try to optimize the max depth of the decision tree and get (hopefully) the best model.
First, let's import
```
from sklearn.model_selection import GridSearchCV
```
Now let's create the estimator and params grid.
```
dtreeClassifier = DecisionTreeClassifier(criterion='entropy')
dtreeClassifier.get_params()
hyper_params = {'max_depth': [3, 4, 5, 6, 7, None]}
```
Now we need to pass into the `GridSearchCV` and get the object.
```
grid_dtree = GridSearchCV(dtreeClassifier, hyper_params, scoring='f1_micro')
```
Now fit it. The grid search uses the F1 micro score specified above to rank the candidates.
```
grid_dtree.fit(X, y)
```
Now, let's see the best parameters and score
```
grid_dtree.best_params_
grid_dtree.best_score_
```
So, it turns out that 4 was the best max depth for the decision tree.
# Evaluation of Best Decision Tree from Grid Search
Now let's get the estimator and get all the relevant scores.
First, let's get the estimator.
```
best_dtree = grid_dtree.best_estimator_
best_dtree.fit(X_trainset, y_trainset)
```
Now, let's predict from our test set
```
y_hat_b = best_dtree.predict(X_testset)
```
Finally, the scores:
```
print("DecisionTrees's Accuracy: ", metrics.accuracy_score(y_testset, y_hat_b))
print("\n")
print("DecisionTrees's Jaccard Index (micro): ", metrics.jaccard_score(y_testset, y_hat_b, average='micro'))
print("DecisionTrees's Jaccard Index (macro): ", metrics.jaccard_score(y_testset, y_hat_b, average='macro'))
print("\n")
print("DecisionTrees's precision Score (micro): ", metrics.precision_score(y_testset, y_hat_b, average='micro'))
print("DecisionTrees's precision Score (macro): ", metrics.precision_score(y_testset, y_hat_b, average='macro'))
print("\n")
print("DecisionTrees's recall Score (micro): ", metrics.recall_score(y_testset, y_hat_b, average='micro'))
print("DecisionTrees's recall Score (macro): ", metrics.recall_score(y_testset, y_hat_b, average='macro'))
print("\n")
print("DecisionTrees's F1 Score (micro): ", metrics.f1_score(y_testset, y_hat_b, average='micro'))
print("DecisionTrees's F1 Score (macro): ", metrics.f1_score(y_testset, y_hat_b, average='macro'))
```
# Conclusion
There you have it: a model to predict which drug a patient should take, with high accuracy.
# Author
By Abhinav Garg
```
import pandas as pd
import numpy as np
import os
import pickle
import platform
from sklearn.preprocessing import StandardScaler
from mabwiser.mab import MAB, LearningPolicy
from mabwiser.linear import _RidgeRegression, _Linear
class LinTSExample(_RidgeRegression):
def predict(self, x):
if self.scaler is not None:
x = self._scale_predict_context(x)
covar = np.dot(self.alpha**2, self.A_inv)
beta_sampled = self.rng.multivariate_normal(self.beta, covar)
return np.dot(x, beta_sampled)
class LinearExample(_Linear):
factory = {"ts": LinTSExample}
def __init__(self, rng, arms, n_jobs=1, backend=None, l2_lambda=1, alpha=1, regression='ts', arm_to_scaler = None):
super().__init__(rng, arms, n_jobs, backend, l2_lambda, alpha, regression)
self.l2_lambda = l2_lambda
self.alpha = alpha
self.regression = regression
# Create ridge regression model for each arm
self.num_features = None
if arm_to_scaler is None:
arm_to_scaler = dict((arm, None) for arm in arms)
self.arm_to_model = dict((arm, LinearExample.factory.get(regression)(rng, l2_lambda,
alpha, arm_to_scaler[arm])) for arm in arms)
```
# Create Data Set
```
from sklearn.datasets import make_classification
dfs = []
for i in range(4):
X, y = make_classification(n_samples=100, n_features=20, n_classes=2, n_informative=15, random_state=i)
df = pd.DataFrame(X)
df['arm'] = i
df['reward'] = y
dfs.append(df)
data = pd.concat(dfs)
from sklearn.model_selection import train_test_split
train, test = train_test_split(data, random_state=43, test_size=0.3)
train.head()
context_features = [c for c in data.columns if c not in ['arm', 'reward']]
decisions = MAB._convert_array(train['arm'])
rewards = MAB._convert_array(train['reward'])
contexts = MAB._convert_matrix(train[context_features]).astype('float')
test_contexts = MAB._convert_matrix(test[context_features]).astype('float')
rng = np.random.RandomState(seed=11)
mab = LinearExample(rng=rng, arms=[0, 1, 2, 3], l2_lambda=1, alpha=1, regression='ts', n_jobs=1, backend=None)
mab.fit(decisions, rewards, contexts)
for arm in mab.arms:
u, s, vh = np.linalg.svd(mab.arm_to_model[arm].A_inv)
print(s)
```
The fitted matrices have duplicate singular values, and will thus reproduce the non-deterministic behavior
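To see why duplicate singular values matter: when singular values repeat, the corresponding singular vectors are only defined up to a rotation within the degenerate subspace, so different linear-algebra backends can return different (equally valid) decompositions. A small illustration:

```python
import numpy as np

# The identity matrix has all singular values equal to 1, so any
# orthonormal basis is a valid set of singular vectors.
A = np.eye(3)
u, s, vh = np.linalg.svd(A)

# Any rotation R gives an equally valid factorization A = R @ diag(s) @ R.T,
# which is why results can differ across platforms/backends.
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
assert np.allclose(s, 1.0)
assert np.allclose(R @ np.diag(s) @ R.T, A)
```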
```
context_features
train['set'] = 'train'
test['set'] = 'test'
data = pd.concat([train, test])
data.to_csv('simulated_data.csv', index=False)
```
# SageMaker
```
platform.platform()
print(np.__version__)
data = pd.read_csv('simulated_data.csv')
train = data[data['set']=='train']
test = data[data['set']=='test']
context_features = [c for c in data.columns if c not in ['set', 'arm', 'reward']]
decisions = MAB._convert_array(train['arm'])
rewards = MAB._convert_array(train['reward'])
contexts = MAB._convert_matrix(train[context_features]).astype('float')
test_contexts = MAB._convert_matrix(test[context_features]).astype('float')
print(context_features)
rng = np.random.RandomState(seed=11)
mab = LinearExample(rng=rng, arms=[0, 1, 2, 3], l2_lambda=1, alpha=1, regression='ts', n_jobs=1, backend=None)
mab.arm_to_model[1]
mab.fit(decisions, rewards, contexts)
expectations = mab.predict_expectations(test_contexts)
expectations[0][1]
pickle.dump(mab, open(os.path.join('output', 'sgm_mab.pkl'), 'wb'))
pickle.dump(expectations, open(os.path.join('output', 'sgm_expectations.pkl'), 'wb'))
```
# Cholesky
```
mab = MAB(arms=[0, 1, 2, 3], learning_policy=LearningPolicy.LinTS(l2_lambda=1, alpha=1), n_jobs=1, backend=None, seed=11)
mab._imp.arm_to_model[1]
mab.fit(decisions, rewards, contexts)
expectations = mab.predict_expectations(test_contexts)
expectations[0][1]
pickle.dump(mab, open(os.path.join('output', 'sgm_ch_mab.pkl'), 'wb'))
pickle.dump(expectations, open(os.path.join('output', 'sgm_ch_expectations.pkl'), 'wb'))
```
# Running ProjectQ code on AWS Braket service provided devices
## Compiling code for AWS Braket Service
In this tutorial we will see how to run code on some of the devices provided by the Amazon AWS Braket service. The supported AWS Braket devices are: the State Vector Simulator 'SV1', the Rigetti device 'Aspen-8', and the IonQ device 'IonQ'.
You need to have a valid AWS account, have created an access key/secret key pair, and have activated the Braket service. As part of the service activation, a specific S3 bucket and folder associated with the service should be configured.
First we need to do the required imports. That includes the main compiler engine (MainEngine), the backend (AWSBraketBackend in this case) and the operations to be used in the circuit
```
from projectq import MainEngine
from projectq.backends import AWSBraketBackend
from projectq.ops import Measure, H, C, X, All
```
Prior to the instantiation of the backend we need to configure the credentials, the S3 storage folder and the device to be used (in the example the State Vector Simulator SV1)
```
creds = {
'AWS_ACCESS_KEY_ID': 'aws_access_key_id',
'AWS_SECRET_KEY': 'aws_secret_key',
} # replace with your Access key and Secret key
s3_folder = ['S3Bucket', 'S3Directory'] # replace with your S3 bucket and directory
device = 'SV1' # replace by the device you want to use
```
Next we instantiate the engine with the AWSBraketBackend, including the credentials and S3 configuration. By setting the 'use_hardware' parameter to False we indicate the use of the simulator. In addition, we set the number of times we want to run the circuit and the interval in seconds at which to poll for the results. For a complete list of parameters and descriptions, please check the documentation.
```
eng = MainEngine(AWSBraketBackend(use_hardware=False,
credentials=creds,
s3_folder=s3_folder,
num_runs=10,
interval=10))
```
We can now allocate the required qubits and create the circuit to be run. With the last instruction we ask the backend to run the circuit.
```
# Allocate the required qubits
qureg = eng.allocate_qureg(3)
# Create the circuit. In this example a quantum teleportation algorithms that teleports the first qubit to the third one.
H | qureg[0]
H | qureg[1]
C(X) | (qureg[1], qureg[2])
C(X) | (qureg[0], qureg[1])
H | qureg[0]
C(X) | (qureg[1], qureg[2])
# At the end we measure the qubits to get the results; should be all-0 or all-1
All(Measure) | qureg
# And run the circuit
eng.flush()
```
The backend will automatically create the task and generate a unique identifier (the task Arn) that can be used to recover the status of the task and results later on.
Once the circuit is executed the indicated number of times, the results are stored in the S3 folder configured previously and can be recovered to obtain the probabilities of each of the states.
```
# Obtain and print the probabilities of the states
prob_dict = eng.backend.get_probabilities(qureg)
print("Probabilities for each of the results: ", prob_dict)
```
## Retrieve results from a previous execution
We can retrieve the results later on (from this job or a previously executed one) using the task Arn provided when it was run. In addition, you have to remember the number of qubits involved in the job and the order in which you used them. The latter is required since we need to set up a mapping for the qubits when retrieving results of a previously executed job.
To retrieve the results we need to configure the backend with the parameter 'retrieve_execution' set to the task Arn of the job. To be able to get the probabilities of each state, we need to configure the qubits and ask the backend to get the results.
```
# Set the Task Arn of the job to be retrieved and instantiate the engine with the AWSBraketBackend
task_arn = 'your_task_arn' # replace with the actual TaskArn you want to use
eng1 = MainEngine(AWSBraketBackend(retrieve_execution=task_arn, credentials=creds, num_retries=2, verbose=True))
# Configure the qubits to get the state probabilities
qureg1 = eng1.allocate_qureg(3)
# Ask the backend to retrieve the results
eng1.flush()
# Obtain and print the probabilities of the states
prob_dict1 = eng1.backend.get_probabilities(qureg1)
print("Probabilities ", prob_dict1)
```
We can also plot a histogram of the probabilities.
```
import matplotlib.pyplot as plt
%matplotlib inline
from projectq.libs.hist import histogram
histogram(eng1.backend, qureg1)
plt.show()
```
# Deep Probabilistic Programming: CVAE
## Overview
This example applies MindSpore's deep probabilistic programming to train a Conditional Variational Autoencoder (CVAE) model.
The overall workflow is as follows:
1. Prepare the dataset;
2. Define the conditional variational autoencoder network;
3. Define the loss function and optimizer;
4. Train the generative model;
5. Generate new samples or reconstruct input samples.
> This example is intended for GPU and Ascend environments.
## Data Preparation
### Download the Dataset
This example uses the MNIST_Data dataset. Run the following commands to download it and extract it to the appropriate location:
```
!wget -N https://obs.dualstack.cn-north-4.myhuaweicloud.com/mindspore-website/notebook/datasets/MNIST_Data.zip
!unzip -o MNIST_Data.zip -d ./datasets
!tree ./datasets/MNIST_Data/
```
### Data Augmentation
Augment the dataset to meet the requirements of CVAE training. Here we mainly resize the original images from $28\times28$ to $32\times32$ pixels, and group multiple images into one `batch` to speed up training.
```
import mindspore.common.dtype as mstype
import mindspore.dataset as ds
import mindspore.dataset.vision.c_transforms as CV
def create_dataset(data_path, batch_size=32, repeat_size=1,
num_parallel_workers=1):
"""
create dataset for train or test
"""
# define dataset
mnist_ds = ds.MnistDataset(data_path)
resize_height, resize_width = 32, 32
rescale = 1.0 / 255.0
shift = 0.0
# define map operations
resize_op = CV.Resize((resize_height, resize_width)) # Bilinear mode
rescale_op = CV.Rescale(rescale, shift)
hwc2chw_op = CV.HWC2CHW()
# apply map operations on images
mnist_ds = mnist_ds.map(operations=resize_op, input_columns="image", num_parallel_workers=num_parallel_workers)
mnist_ds = mnist_ds.map(operations=rescale_op, input_columns="image", num_parallel_workers=num_parallel_workers)
mnist_ds = mnist_ds.map(operations=hwc2chw_op, input_columns="image", num_parallel_workers=num_parallel_workers)
# apply DatasetOps
mnist_ds = mnist_ds.batch(batch_size)
mnist_ds = mnist_ds.repeat(repeat_size)
return mnist_ds
```
## Define the Conditional Variational Autoencoder Network
A variational autoencoder consists of three main parts: the encoder, the decoder, and the latent space.
Specifically:
The encoder (Encoder) reduces the dimensionality of the training data, compressing it and extracting features to form a feature vector, which is stored in the latent space.
The decoder (Decoder) decodes the latent-space distribution parameters of the training data to reconstruct and generate new images.
The latent space stores the model's features according to some distribution, serving as the bridge between the encoder and the decoder.
In this example, the conditional variational autoencoder (CVAE) adds label information to the training on top of the plain variational autoencoder, so that during the subsequent random-sampling generation step a label can be supplied to generate images of that class.
```
import os
import mindspore.nn as nn
from mindspore import context, Tensor
import mindspore.ops as ops
context.set_context(mode=context.GRAPH_MODE,device_target="GPU")
IMAGE_SHAPE=(-1,1,32,32)
image_path = os.path.join("./datasets/MNIST_Data","train")
class Encoder(nn.Cell):
def __init__(self, num_classes):
super(Encoder, self).__init__()
self.fc1 = nn.Dense(1024 + num_classes, 400)
self.relu = nn.ReLU()
self.flatten = nn.Flatten()
self.concat = ops.Concat(axis=1)
self.one_hot = nn.OneHot(depth=num_classes)
def construct(self, x, y):
x = self.flatten(x)
y = self.one_hot(y)
input_x = self.concat((x, y))
input_x = self.fc1(input_x)
input_x = self.relu(input_x)
return input_x
class Decoder(nn.Cell):
def __init__(self):
super(Decoder, self).__init__()
self.fc2 = nn.Dense(400, 1024)
self.sigmoid = nn.Sigmoid()
self.reshape = ops.Reshape()
def construct(self, z):
z = self.fc2(z)
z = self.reshape(z, IMAGE_SHAPE)
z = self.sigmoid(z)
return z
```
## Define the Optimizer and Loss Function
Define the loss function for the conditional variational autoencoder, associating images with their labels.
The loss function is the ELBO, which measures the difference between the decoded image and the original image, comparing both the images themselves and the means of their distributions.
The optimizer is `nn.Adam`, used to minimize the loss value.
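In standard notation (not MindSpore-specific), the ELBO objective for a CVAE can be written as:

$$\mathcal{L}_{\mathrm{ELBO}} = \mathbb{E}_{q_\phi(z \mid x, y)}\left[\log p_\theta(x \mid z, y)\right] - D_{\mathrm{KL}}\left(q_\phi(z \mid x, y) \,\|\, p(z)\right)$$

where the first term is the reconstruction likelihood and the second penalizes the divergence of the approximate posterior from the prior; training maximizes this quantity (equivalently, minimizes its negative).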
```
from mindspore.nn.probability.dpn import ConditionalVAE
from mindspore.nn.probability.infer import ELBO, SVI
class CVAEWithLossCell(nn.WithLossCell):
"""
Rewrite WithLossCell for CVAE
"""
def construct(self, data, label):
out = self._backbone(data, label)
return self._loss_fn(out, label)
# define the encoder and decoder
encoder = Encoder(num_classes=10)
decoder = Decoder()
# define the vae model
cvae = ConditionalVAE(encoder, decoder, hidden_size=400, latent_size=20,num_classes=10)
# define the loss function
net_loss = ELBO(latent_prior='Normal', output_prior='Normal')
# define the optimizer
optimizer = nn.Adam(params=cvae.trainable_params(), learning_rate=0.001)
net_with_loss = CVAEWithLossCell(cvae,net_loss)
vi = SVI(net_with_loss=net_with_loss,optimizer=optimizer)
```
Parameter explanations:
- num_classes: the number of classes; in this example the digits 0-9, 10 classes in total.
- ConditionalVAE: the conditional variational autoencoder model, initialized with the encoder, decoder, compressed size, latent dimension, and number of classes.
- `encoder`: the encoder network.
- `decoder`: the decoder network.
- `hidden_size`: the size of the compressed data, 400 in this example.
- `latent_size`: the dimension of the latent-space vector; the larger the dimension, the more feature dimensions are separated and the clearer the image features. In this example it is set to 20.
- `num_classes`: the number of classes.
- ELBO: the loss function of the variational autoencoder.
- `latent_prior`: the prior distribution of the latent space; in this example the latent parameters follow a normal distribution.
- `output_prior`: the prior distribution of the output weights; in this example they are initialized following a normal distribution.
- nn.Adam: the optimizer.
- CVAEWithLossCell: this example rewrites the `nn.WithLossCell` cell so that the generated data carries its label.
- SVI: the model wrapper, similar to Model in MindSpore; it is the dedicated model function for variational autoencoders.
## Train the Generative Model
Generate the training data and call the training mode of `vi` defined above to train the model; after training completes, print the model's loss value.
```
# define the training dataset
ds_train = create_dataset(image_path, 32, 1)
# run the vi to return the trained network.
cvae = vi.run(train_dataset=ds_train, epochs=5)
# get the trained loss
trained_loss = vi.get_train_loss()
print(trained_loss)
```
### Sample Reconstruction
First define the plotting function `plot_image`, used to visualize reconstructed samples and conditionally sampled data.
Use the trained model to check its reconstruction ability: take a batch of original data and reconstruct it by running the following code:
```
import matplotlib.pyplot as plt
import numpy as np
def plot_image(sample_data,col_num=4,row_num=8,count=0):
for i in sample_data:
plt.subplot(col_num,row_num,count+1)
plt.imshow(np.squeeze(i.asnumpy()))
plt.axis("off")
count += 1
plt.show()
sample = next(ds_train.create_dict_iterator(output_numpy=True, num_epochs=1))
sample_x = Tensor(sample['image'], dtype=mstype.float32)
sample_y = Tensor(sample['label'], dtype=mstype.int32)
reconstructed_sample = cvae.reconstruct_sample(sample_x, sample_y)
print('The shape of the reconstructed sample is ', reconstructed_sample.shape)
print("\n=============The Original Images=============")
plot_image(sample_x)
print("\n============The Reconstruct Images=============")
plot_image(reconstructed_sample)
```
Compared with the original images, the images generated by the CVAE clearly correspond to the originals, but are still somewhat blurry. The training works, but there is still room for improvement.
### Conditional Sampling
Sample conditionally in the latent space. This example uses the condition `(0,1)` to generate image data for the digits `(0,1)`, and visualizes the sampled data.
```
# test function: generate_sample
sample_label = Tensor([i for i in range(0,2)]*16, dtype=mstype.int32)
# test function: generate_sample
generated_sample = cvae.generate_sample(sample_label, 32, IMAGE_SHAPE)
# test function: reconstruct_sample
print('The shape of the generated sample is ', generated_sample.shape)
plot_image(generated_sample,4,8)
```
In the conditional `(0,1)` sampling, some of the generated images look like other digits. This indicates that in the feature distribution, some features of other digits overlap with those of `(0,1)`; the random sampling happens to hit these overlapping features, so the `(0,1)` images show features of other digits.
# Example: Using MIRAGE to Generate Wide Field Slitless Exposures
This notebook shows how to use Mirage to create Wide Field Slitless Spectroscopy (WFSS) data, beginning with an APT file. This can be done for NIRCam or NIRISS.
*Table of Contents:*
* [Getting Started](#getting_started)
* [Create input yaml files from an APT proposal](#yaml_from_apt)
* [Make WFSS simulated observations](#make_wfss)
* [Provide a single wfss mode yaml file](#single_yaml)
* [Provide mulitple yaml files](#multiple_yamls)
* [Provide a single yaml file and an hdf5 file containing SED curves of the sources](#yaml_plus_hdf5)
* [Outputs](#wfss_outputs)
* [Make imaging simulated observations](#make_imaging)
* [Outputs](#imaging_outputs)
---
<a id='getting_started'></a>
## Getting Started
<div class="alert alert-block alert-warning">
**Important:**
Before proceeding, ensure you have set the MIRAGE_DATA environment variable to point to the directory that contains the reference files associated with MIRAGE.
<br/><br/>
If you want JWST pipeline calibration reference files to be downloaded in a specific directory, you should also set the CRDS_DATA environment variable to point to that directory. This directory will also be used by the JWST calibration pipeline during data reduction.
<br/><br/>
You may also want to set the CRDS_SERVER_URL environment variable set to https://jwst-crds.stsci.edu. This is not strictly necessary, and Mirage will do it for you if you do not set it, but if you import the crds package, or any package that imports the crds package, you should set this environment variable first, in order to avoid an error.
</div>
<div class="alert alert-block alert-info">
**Dependencies:**<br>
1) Install GRISMCONF from https://github.com/npirzkal/GRISMCONF<br>
2) Install NIRCAM_Gsim from https://github.com/npirzkal/NIRCAM_Gsim. This is the disperser software, which works for both NIRCam and NIRISS.
</div>
```
import os
# Set environment variables
# It may be helpful to set these within your .bashrc or .cshrc file, so that CRDS will
# know where to look for reference files during future runs of the JWST calibration
# pipeline.
#os.environ["MIRAGE_DATA"] = "/my/mirage_data/"
os.environ["CRDS_PATH"] = os.path.join(os.path.expandvars('$HOME'), "crds_cache")
os.environ["CRDS_SERVER_URL"] = "https://jwst-crds.stsci.edu"
from glob import glob
import pkg_resources
import yaml
from astropy.io import fits
import astropy.units as u
from astropy.visualization import simple_norm, imshow_norm
import h5py
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from mirage import imaging_simulator
from mirage import wfss_simulator
from mirage.utils.constants import FLAMBDA_CGS_UNITS, FLAMBDA_MKS_UNITS, FNU_CGS_UNITS
from mirage.yaml import yaml_generator
TEST_DATA_DIRECTORY = os.path.normpath(os.path.join(pkg_resources.resource_filename('mirage', ''),
'../examples/wfss_example_data'))
```
---
<a id='yaml_from_apt'></a>
## Create a series of yaml files from an [APT](https://jwst-docs.stsci.edu/display/JPP/JWST+Astronomers+Proposal+Tool+Overview) proposal
With your proposal file open in APT, export the "xml" and "pointing" files. These will serve as the inputs to the yaml file generator function.
```
# Input files from APT
xml_file = os.path.join(TEST_DATA_DIRECTORY, 'niriss_wfss_example.xml')
pointing_file = os.path.join(TEST_DATA_DIRECTORY, 'niriss_wfss_example.pointing')
```
See [Mirage's yaml_generator documentation](https://mirage-data-simulator.readthedocs.io/en/latest/yaml_generator.html#additional-yaml-generator-inputs "Yaml Generator Inputs")
for details on the formatting options for the inputs listed below. The formats will vary based on the complexity of your inputs and observations (number of targets, number of observations, instruments used).
```
# Source catalogs to be used. In this relatively simple case with a single target
# and a single instrument, there are two ways to supply the source catalogs. You
# may specify with or without the target name from the APT file as a dictionary key.
#catalogs = {'MAIN-TARGET': {'point_source': os.path.join(TEST_DATA_DIRECTORY,'point_sources.cat')}}
catalogs = {'point_source': os.path.join(TEST_DATA_DIRECTORY,'point_sources.cat')}
# Set reference file values.
# Setting to 'crds_full_name' will search for and download needed
# calibration reference files (commonly referred to as CRDS reference files) when
# the yaml_generator is run.
#
# Setting to 'crds' will put placeholders in the yaml files and save the downloading
# for when the simulated images are created.
reffile_defaults = 'crds'
# Optionally set the cosmic ray library and rate
cosmic_rays = {'library': 'SUNMAX', 'scale': 1.0}
# Optionally set the background signal rates to be used
background = 'medium'
# Optionally set the telescope roll angle (PAV3) for the observations
pav3 = 12.5
# Optionally set the observation date to use for the data. Note that this information
# is placed in the headers of the output files, but not used by Mirage in any way.
dates = '2022-10-31'
```
For NIRISS simulations, users can add optical ghosts to the data. By default (i.e. if the keywords below are omitted from the call to the yaml_generator), ghosts will be added for point sources only. Ghosts can also be added for galaxy or extended targets if you have a stamp image for each source. See the [documentation for adding ghosts](https://mirage-data-simulator.readthedocs.io/en/latest/ghosts.html)
for details.
```
ghosts = True
convolve_ghosts = False
```
You can specify the data reduction state of the Mirage outputs. Options are 'raw', 'linear', or 'linear, raw'.
* If 'raw' is specified, the output is a completely uncalibrated file, with a filename ending in "uncal.fits".
* If 'linear' is specified, the output is a file with linearized signals, ending in "linear.fits". This is equivalent to having been run through the dq_init, saturation flagging, superbias subtraction, reference pixel subtraction, and non-linearity correction steps of the calibration pipeline. Note that this product does not include dark current subtraction.
* If 'linear, raw' is specified, both outputs are saved.

In order to fully process the Mirage output with the default steps used by the pipeline, it would be best to use the 'raw' output and run the entire calibration pipeline.
```
datatype = 'linear, raw'
```
Provide the output directory for the yaml files themselves, as well as the output directory where you want the simulated files to eventually be saved. This information will be placed in the yaml files.
```
print(catalogs)
# Create a series of Mirage input yaml files
# using the APT files
yaml_output_dir = '/where/to/put/yaml/files'
simulations_output_dir = '/where/to/put/simulated/data'
# Run the yaml generator
yam = yaml_generator.SimInput(input_xml=xml_file, pointing_file=pointing_file,
catalogs=catalogs, cosmic_rays=cosmic_rays,
background=background, roll_angle=pav3,
dates=dates, reffile_defaults=reffile_defaults,
add_ghosts=ghosts, convolve_ghosts_with_psf=convolve_ghosts,
verbose=True, output_dir=yaml_output_dir,
simdata_output_dir=simulations_output_dir,
datatype=datatype)
yam.create_inputs()
```
One yaml file will be created for each exposure and detector. The naming convention of the files follows that for [JWST exposure filenames](https://jwst-docs.stsci.edu/display/JDAT/File+Naming+Conventions+and+Data+Products). For example, the first exposure in proposal number 12345, Observation 3, Visit 2, assuming it is made using NIRCam (the A1 detector in this case), will be named jw12345003002_01101_00001_nrca1_uncal.fits. Note that Mirage does not yet create activity IDs in the same way as the JWST flight software, so filenames will be slightly different from what they will be in flight for the same APT proposal.
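The filename convention just described can be unpacked programmatically. The helper below is an illustrative sketch based only on the convention as stated above (jw + 5-digit program + 3-digit observation + 3-digit visit, then a visit-group/activity segment, exposure number, detector, and suffix); it is not part of Mirage, and the field names are assumptions.

```python
import re

# Sketch: decompose a JWST-style exposure filename into its pieces.
# Field names are assumptions based on the convention described above.
FILENAME_RE = re.compile(
    r"jw(?P<program>\d{5})(?P<observation>\d{3})(?P<visit>\d{3})"
    r"_(?P<visit_group>\d{5})"   # visit group / parallel sequence / activity
    r"_(?P<exposure>\d{5})"
    r"_(?P<detector>[a-z0-9]+)"
    r"_(?P<suffix>[a-z]+)\.fits$"
)

def parse_exposure_filename(name):
    match = FILENAME_RE.match(name)
    if match is None:
        raise ValueError("unrecognized exposure filename: {}".format(name))
    return match.groupdict()

parts = parse_exposure_filename("jw12345003002_01101_00001_nrca1_uncal.fits")
# parts["program"] == "12345", parts["observation"] == "003", parts["visit"] == "002"
```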
Look to see which yaml files are for WFSS and which are imaging
```
yaml_files = glob(os.path.join(yam.output_dir,"jw*.yaml"))
yaml_WFSS_files = []
yaml_imaging_files = []
for f in yaml_files:
my_dict = yaml.safe_load(open(f))
if my_dict["Inst"]["mode"]=="wfss":
yaml_WFSS_files.append(f)
if my_dict["Inst"]["mode"]=="imaging":
yaml_imaging_files.append(f)
print("WFSS files:",len(yaml_WFSS_files))
print("Imaging files:",len(yaml_imaging_files))
```
Each output yaml file contains details on the simulation.
```
with open(yaml_WFSS_files[0], 'r') as infile:
parameters = yaml.safe_load(infile)
for key in parameters:
for level2_key in parameters[key]:
print('{}: {}: {}'.format(key, level2_key, parameters[key][level2_key]))
```
---
<a id='make_wfss'></a>
## Make WFSS simulated observations
Create simulated data from the WFSS yaml files. This is accomplished using the **wfss_simulator** module, which wraps around the various stages of Mirage. There are several input options available for the **wfss_simulator**.
* [Provide a single wfss mode yaml file](#single_yaml)
* [Provide multiple yaml files](#multiple_yamls)
* [Provide a single yaml file and an hdf5 file containing SED curves of the sources](#yaml_plus_hdf5)
A brief explanation of the available keywords for the **wfss_simulator**:
* If an appropriate (linearized, or linearized and cut to the proper number of groups) dark current exposure already exists, the dark current preparation step can be skipped by providing the name of the dark file in **override_dark**.
* The **save_dispersed_seed** option will save the dispersed seed image to a fits file.
* The name of the fits file can be given in the **disp_seed_filename** keyword or, if that is left as None, Mirage will create a filename based on the simulated data output name in the WFSS mode yaml file.
* If **extrapolate_SED** is set to True, then the continuum calculated by Mirage will be extrapolated to cover the necessary wavelengths if the filters in the input yaml files do not span the entire wavelength range.
* If the **source_stamps_file** is set to the name of an [hdf5](https://www.h5py.org/) file, then the disperser will save 2D stamp images of the dispersed spectral orders for each target. These are intended as aids for spectral extraction. (**NOTE that turning this option on will lead to significantly longer run times for Mirage, as so much more data will be generated.**)
* The **SED_file** keyword can be used to input an existing hdf5 file containing source spectra to be used in the simulation.
* If you have source spectra created within your notebook or Python session, these can be added using the **SED_dict** keyword.
* If there are normalized spectra within your **SED_file** or **SED_dict**, you must also provide the **SED_normalizing_catalog_column**. This is the magnitude column name within the ascii source catalog to use for scaling the normalized spectra. Only spectra with units specified as "normalized" will be scaled.
* The **create_continuum_seds** keyword declares whether or not Mirage will use the information in the ascii source catalog to create a set of source SEDs, save them to an hdf5 file, and provide them to the disperser. The only case where the user-input value of this keyword is respected is when multiple yaml files (and no hdf5 file) are input into the **wfss_simulator**. Only in this situation is it possible to run the disperser using either the multiple imaging seed images alone, or from multiple imaging seed images plus an hdf5 file.
<a id='single_yaml'></a>
### Provide a single wfss mode yaml file
Here, we provide a single yaml file as input. In this case, Mirage will create a direct (undispersed) seed image for the yaml file. For each source, Mirage will construct a continuum spectrum by either:
1. Interpolating the filtered magnitudes in the catalogs listed in the yaml file
2. If only a single filter's magnitude is given, Mirage will extrapolate to produce a flat continuum
This continuum spectrum will then be placed in the dispersed seed image, which will then be combined with a dark current exposure in order to create the final simulated exposure.
```
m = wfss_simulator.WFSSSim(yaml_WFSS_files[0], override_dark=None, save_dispersed_seed=True,
extrapolate_SED=True, disp_seed_filename=None, source_stamps_file=None,
SED_file=None, SED_normalizing_catalog_column=None, SED_dict=None,
create_continuum_seds=True)
m.create()
```
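The single-magnitude fallback above (option 2) can be illustrated with a short sketch: a flat F_nu continuum derived from one AB magnitude. This is not Mirage's internal code, only the idea; the AB zero point relation m_AB = -2.5 log10(f_nu) - 48.6 (f_nu in erg / s / cm^2 / Hz) is the standard definition.

```python
import numpy as np

def flat_fnu_spectrum(ab_magnitude, wavelengths_micron):
    """Illustrative flat continuum: a constant F_nu over the given wavelengths,
    derived from a single AB magnitude via m_AB = -2.5*log10(f_nu) - 48.6,
    with f_nu in erg / s / cm^2 / Hz."""
    fnu = 10.0 ** (-(ab_magnitude + 48.6) / 2.5)
    return np.full(len(wavelengths_micron), fnu)

waves = np.arange(1.0, 5.5, 0.1)
flux = flat_fnu_spectrum(20.0, waves)   # constant across all wavelengths
```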
<a id='multiple_yamls'></a>
### Provide multiple yaml files
Here, we provide multiple yaml files as input. There are two options when operating in this way.
* [Set **create_continuum_seds=False**](#multiple_yamls_no_sed). In this case, Mirage will create a direct (undispersed) seed image for each yaml file. For each source, the disperser determines an object's SED by *interpolating that object's signal across the seed images*. This continuum spectrum will then be placed in the dispersed seed image, which will then be combined with a dark current exposure in order to create the final simulated exposure.
* [Set **create_continuum_seds=True**](#multiple_yamls_make_sed). In this case Mirage will produce the SEDs by *interpolating the source magnitudes given in the ascii source catalog*. These SEDs are saved to an hdf5 file. The hdf5 file is then provided to the disperser along with one undispersed seed image. The advantage of this option is processing time. In this case, the **wfss_simulator** only produces a single undispersed seed image, whereas if no hdf5 file is produced, Mirage will construct seed images from all of the input yaml files.
NOTE: In this case, all of the supplied yaml files MUST have the same pointing!
```
test_yaml_files = ['jw00042001001_01101_00003_nis.yaml', 'jw00042001001_01101_00005_nis.yaml',
'jw00042001001_01101_00009_nis.yaml']
test_yaml_files = [os.path.join(yaml_output_dir, yfile) for yfile in test_yaml_files]
```
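A quick sanity check on the same-pointing requirement noted above is to compare the pointing entries across the yaml files before dispersing them together. The `Telescope: ra` / `Telescope: dec` keys used here are an assumption — verify them against the yaml files the generator actually produced.

```python
import yaml

def have_same_pointing(yaml_paths):
    """Return True if all yaml files share identical pointing entries.

    Assumes pointing lives under the 'Telescope' section with 'ra' and 'dec'
    keys; adjust the keys if your yaml files store it differently.
    """
    pointings = set()
    for path in yaml_paths:
        with open(path) as fh:
            params = yaml.safe_load(fh)
        telescope = params["Telescope"]
        pointings.add((telescope["ra"], telescope["dec"]))
    return len(pointings) <= 1
```

If this returns False for your set of yaml files, they should not be combined in a single **wfss_simulator** call.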
<a id='multiple_yamls_no_sed'></a>
#### Multiple yaml files, do not create continuum SED file
```
disp_seed_image = 'multiple_yaml_input_no_continuua_dispersed_seed_image.fits'
m = wfss_simulator.WFSSSim(test_yaml_files, override_dark=None, save_dispersed_seed=True,
extrapolate_SED=True, disp_seed_filename=disp_seed_image, source_stamps_file=None,
SED_file=None, SED_normalizing_catalog_column=None, SED_dict=None,
create_continuum_seds=False)
m.create()
```
<a id='multiple_yamls_make_sed'></a>
#### Multiple yaml files, create continuum SED file
```
disp_seed_image = 'multiple_yaml_input_with_continuua_dispersed_seed_image.fits'
m = wfss_simulator.WFSSSim(test_yaml_files, override_dark=None, save_dispersed_seed=True,
extrapolate_SED=True, disp_seed_filename=disp_seed_image, source_stamps_file=None,
SED_file=None, SED_normalizing_catalog_column=None, SED_dict=None,
create_continuum_seds=True)
m.create()
```
<a id='yaml_plus_hdf5'></a>
### Provide a single yaml file and an hdf5 file containing SED curves of the sources
In this case, a single WFSS mode yaml file is provided as input to Mirage, along with an [hdf5](https://www.h5py.org/) file. This file contains a Spectral Energy Distribution (SED) curve for each target, in units of F_lambda (`erg / second / cm^2 / Angstrom` or `W / m^2 / micron`, or units that can be converted to F_lambda), in units of F_nu (`erg / second / cm^2 / Hz` or `W / m^2 / Hz`), or as a normalized SED. Along with the SED, the user must provide a set of wavelengths or frequencies. See the [hdf5 example](#make_sed_file) and [manual example](#manual_seds) below for more information on units.
The advantage of this input scenario is that you are not limited to simple continuum spectra for your targets. Emission and absorption features can be added. Normalized SEDs will be scaled by the magnitudes listed in one of the magnitude columns of the ascii input catalog. The desired column name is provided through the `SED_normalizing_catalog_column` keyword.
The disperser software will then use the SED along with the segmentation map in the direct seed image to place spectra into the dispersed seed image. In the cell below, we show a simple example of how to create an hdf5 file with SEDs. In this case the spectrum is flat, with no emission or absorption features.
```
target_1_wavelength = np.arange(1.0, 5.5, 0.1)
target_1_flux = np.repeat(1e-16, len(target_1_wavelength))
wavelengths = [target_1_wavelength]
fluxes = [target_1_flux]
# Examples for the case where you want to include data on more sources
# Add fluxes for target number 2
#target_2_wavelength = np.arange(0.8, 5.4, 0.05)
#target_2_flux = np.repeat(1.4e-16, len(target_2_wavelength))
#wavelengths.append(target_2_wavelength)
#fluxes.append(target_2_flux)
# Add a normalized input spectrum
#target_3_wavelength = np.arange(0.8, 5.4, 0.05)
#target_3_flux = np.linspace(1.3, 0.75, len(target_3_wavelength))
#wavelengths.append(target_3_wavelength)
#fluxes.append(target_3_flux)
```
<a id='make_sed_file'></a>
#### Create HDF5 file containing object SEDs
If you wish to add information about the units of the wavelengths and fluxes, that can be done by setting attributes of each dataset as it is created. See the example below where the file **test_sed_file.hdf5** is created. If units are not provided, Mirage assumes wavelength units of `microns` and flux density units of F_lambda in CGS units `(erg / second / cm^2 / Angstrom)`. hdf5 files only support the use of strings as dataset attributes, so we specify units using strings. Mirage will convert these strings to astropy units when working with the data.
Also note that in this hdf5 file (as well as in the manually created source SEDs below), each SED can have its own units.
The hdf5 file is populated by inserting one "dataset" for each source. The dataset contains the wavelengths and flux densities of the SED. The datasets are organized within the file similarly to entries in a dictionary. You can reference a particular dataset by using its key. In this case, the keys in the file are the source index numbers from the ascii source catalogs to be used in the simulation.
BE SURE THAT THE KEYS IN THE HDF5 FILE MATCH THE INDEX NUMBERS IN THE SOURCE CATALOGS. IF THERE IS A MISMATCH, THE SED WILL BE APPLIED TO THE INCORRECT SOURCE.
As part of this, if you are using multiple source catalogs for your simulation (e.g. a point source and a galaxy catalog), be sure that the index numbers of the catalogs do not overlap. See the [Source Index Numbers](https://mirage-data-simulator.readthedocs.io/en/latest/catalogs.html#source_index_numbers) section of Mirage's online documentation for details on how to create multiple catalogs with non-overlapping indexes.
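The non-overlapping-index requirement can be checked in a few lines before building the hdf5 file. This is a generic sketch, not a Mirage utility; it assumes you can pull the index column out of each catalog as a sequence of integers.

```python
def find_overlapping_indexes(*index_lists):
    """Return the set of source index numbers appearing in more than one catalog."""
    seen = set()
    duplicates = set()
    for indexes in index_lists:
        current = set(indexes)
        duplicates |= seen & current
        seen |= current
    return duplicates

# e.g. a point source catalog using 1-1000 and a galaxy catalog starting at 1001
overlap = find_overlapping_indexes(range(1, 1001), range(1001, 1501))
# overlap == set(), so the two catalogs are safe to use together
```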
```
wavelength_units = 'microns'
flux_units = 'flam'
sed_file = 'test_sed_file.hdf5'
sed_file = os.path.join(yaml_output_dir, sed_file)
with h5py.File(sed_file, "w") as file_obj:
for i in range(len(fluxes)):
dset = file_obj.create_dataset(str(i+1), data=[wavelengths[i], fluxes[i]], dtype='f',
compression="gzip", compression_opts=9)
dset.attrs[u'wavelength_units'] = wavelength_units
if i < 2:
dset.attrs[u'flux_units'] = flux_units
else:
dset.attrs[u'flux_units'] = 'normalized'
```
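As a suggested (not Mirage-mandated) check, the file can be read back to confirm the dataset keys and unit attributes came out as intended. Depending on the h5py version, string attributes may round-trip as `str` or `bytes`.

```python
import h5py
import numpy as np

def read_sed_file(sed_path):
    """Read an SED hdf5 file back into {key: (wavelengths, fluxes, attrs)}."""
    seds = {}
    with h5py.File(sed_path, "r") as file_obj:
        for key in file_obj:
            data = np.array(file_obj[key])
            seds[key] = (data[0], data[1], dict(file_obj[key].attrs))
    return seds

# seds = read_sed_file(sed_file)
# seds['1'][2]['wavelength_units'] should match the units written above
```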
<a id='manual_seds'></a>
#### Manual SED inputs
This example also shows the option to manually provide an SED. In this case the SED must be a dictionary where the key is the index number of the object (corresponding to the index number in the ascii catalog). The dictionary entry must contain a `'wavelengths'` and a `'fluxes'` entry for each object. Both of these must be lists or numpy arrays. Astropy units can optionally be attached to each list. Currently Mirage supports only `F_lambda` (or equivalent) units, `F_nu` (or equivalent) units, or normalized units, which can be specified using astropy's `pct` unit. In the example below, note the use of `FLAMBDA_CGS_UNITS`, `FLAMBDA_MKS_UNITS`, and `FNU_CGS_UNITS`, which have been imported from Mirage. *Target_7* also uses a set of frequencies (note the specification of Hz for units), rather than wavelengths. Convertible frequency units (e.g. MHz, GHz) are also allowed.
As when using an hdf5 file as above, be sure that the keys in this dictionary match the proper source index numbers from the input ascii catalogs, otherwise the SEDs will be applied to the wrong sources.
```
my_sed = {}
target_2_wavelength = np.arange(0.8, 5.4, 0.05) * u.micron
target_2_flux = np.linspace(1.1, 0.95, len(target_2_wavelength)) * u.pct
my_sed[2] = {"wavelengths": target_2_wavelength,
"fluxes": target_2_flux}
# Examples in the case you want to add information for other sources
#target_5_wavelength = np.arange(0.8, 5.4, 0.05) * u.micron
#target_5_flux = np.linspace(1e-16, 1e-17, len(target_5_wavelength)) * FLAMBDA_CGS_UNITS
#my_sed[4] = {"wavelengths": target_5_wavelength,
# "fluxes": target_5_flux}
#target_6_wavelength = np.arange(0.8, 5.4, 0.05) * u.micron
#target_6_flux = np.linspace(1e-15, 1e-16, len(target_5_wavelength)) * FLAMBDA_MKS_UNITS
#my_sed[5] = {"wavelengths": target_6_wavelength,
# "fluxes": target_6_flux}
#target_7_wavelength = np.linspace(5.6e13, 3.7e14, 10) * u.Hz
#target_7_flux = np.linspace(1.6e-26, 1.6e-27, len(target_7_wavelength)) * FNU_CGS_UNITS
#my_sed[5] = {"wavelengths": target_7_wavelength,
# "fluxes": target_7_flux}
# Input the SED file and SED dictionary along with a WFSS mode yaml file to Mirage
m = wfss_simulator.WFSSSim(test_yaml_files[0], override_dark=None, save_dispersed_seed=True,
extrapolate_SED=True, disp_seed_filename=None, source_stamps_file=None,
SED_file=sed_file, SED_normalizing_catalog_column='niriss_f200w_magnitude',
SED_dict=my_sed, create_continuum_seds=True)
m.create()
```
<a id='wfss_outputs'></a>
### Outputs
Regardless of whether the **wfss_simulator** is called with multiple yaml files or a yaml and an hdf5 file, the outputs will be the same. The final output will be **jw\*uncal.fits** (or **jw\*linear.fits**, depending on whether raw or linear outputs are specified in the yaml files) files in your output directory. These files are in DMS format and can be fed directly into the **calwebb_detector1** pipeline for further calibration, if desired.
The seed image is also saved as an intermediate output. This seed image is a noiseless rate image of the same scene as in the final output file. The seed image can be thought of as an ideal version of the scene that excludes (most) detector effects.
#### Examine the dispersed seed image
```
with fits.open(m.disp_seed_filename) as seedfile:
dispersed_seed = seedfile[1].data
fig, ax = plt.subplots(figsize=(10, 10))
norm = simple_norm(dispersed_seed, stretch='log', min_cut=0.25, max_cut=10)
cax = ax.imshow(dispersed_seed, norm=norm)
cbar = fig.colorbar(cax)
plt.show()
```
#### Examine the final output file
```
final_file = os.path.join(yaml_output_dir, 'jw00042001001_01101_00003_nis_uncal.fits')
with fits.open(final_file) as hdulist:
data = hdulist['SCI'].data
hdulist.info()
fig, ax = plt.subplots(figsize=(10, 10))
norm = simple_norm(data[0, 4, :, :], stretch='log', min_cut=5000, max_cut=50000)
cax = ax.imshow(data[0, 4, :, :], norm=norm)
cbar = fig.colorbar(cax)
plt.show()
```
---
<a id='make_imaging'></a>
# Make imaging simulated observations
Similar to the **wfss_simulator** module for WFSS observations, imaging data can be created using the **imaging_simulator** module. This can be used to create the direct (NIRCam and NIRISS) and out-of-field (NIRCam) exposures that accompany WFSS observations, as well as the shortwave channel data for NIRCam, which is always imaging while the longwave detector observes through the grism.
```
for yaml_imaging_file in yaml_imaging_files[0:1]:
print("Imaging simulation for {}".format(yaml_imaging_file))
img_sim = imaging_simulator.ImgSim()
img_sim.paramfile = yaml_imaging_file
img_sim.create()
```
<a id='imaging_outputs'></a>
### Outputs
As with WFSS outputs, the **imaging_simulator** will create **jw\*uncal.fits** or **jw\*linear.fits** files, depending on which was specified in the associated yaml files.
#### Examine the seed image
```
fig, ax = plt.subplots(figsize=(10, 10))
norm = simple_norm(img_sim.seedimage, stretch='log', min_cut=0.25, max_cut=1000)
cax = ax.imshow(img_sim.seedimage, norm=norm)
cbar = fig.colorbar(cax)
plt.show()
```
#### Examine the output file
```
final_file = os.path.join(yaml_output_dir, 'jw00042001001_01101_00001_nis_uncal.fits')
with fits.open(final_file) as hdulist:
data = hdulist['SCI'].data
hdulist.info()
fig, ax = plt.subplots(figsize=(10, 10))
norm = simple_norm(data[0, 4, :, :], stretch='log', min_cut=5000, max_cut=50000)
cax = ax.imshow(data[0, 4, :, :], norm=norm)
cbar = fig.colorbar(cax)
plt.show()
```
# Neighborhood Structures in the ArcGIS Spatial Statistics Library
1. Spatial Weights Matrix
2. On-the-fly Neighborhood Iterators [GA Table]
3. Constructing PySAL Spatial Weights
# Spatial Weight Matrix File
1. Stores the spatial weights so they do not have to be re-calculated for each analysis.
2. In row-compressed format.
3. Little endian byte encoded.
4. Requires a unique long/short field to identify each feature. **Can NOT be the OID/FID.**
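Row-compressed storage (point 2 above) can be illustrated in plain Python. This sketch shows the general CSR-style idea — only the neighbors that exist are stored, as parallel flattened arrays — not the exact byte layout of the .swm file.

```python
# Per-feature neighbor lists and weights (toy data).
neighbors = {0: [1, 2], 1: [0], 2: [0, 1]}
weights = {0: [0.5, 0.5], 1: [1.0], 2: [0.4, 0.6]}

row_counts = []   # number of neighbors stored for each feature
nbr_ids = []      # flattened neighbor ids, row after row
nbr_wts = []      # flattened weights, aligned with nbr_ids
for fid in sorted(neighbors):
    row_counts.append(len(neighbors[fid]))
    nbr_ids.extend(neighbors[fid])
    nbr_wts.extend(weights[fid])
# row_counts == [2, 1, 2]; nbr_ids == [1, 2, 0, 0, 1]
```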
## Construction
```
import Weights as WEIGHTS
import os as OS
inputFC = r'../data/CA_Polygons.shp'
fullFC = OS.path.abspath(inputFC)
fullPath, fcName = OS.path.split(fullFC)
masterField = "MYID"
```
### Distance-Based Options
``` INPUTS:
inputFC (str): path to the input feature class
swmFile (str): path to the SWM file.
masterField (str): field in table that serves as the mapping.
fixed (boolean): fixed (1) or inverse (0) distance?
concept: {str, EUCLIDEAN}: EUCLIDEAN or MANHATTAN
exponent {float, 1.0}: distance decay
threshold {float, None}: distance threshold
kNeighs (int): number of neighbors to return
rowStandard {bool, True}: row standardize weights?
```
*Example: Fixed Distance*
```
swmFile = OS.path.join(fullPath, "fixed250k.swm")
fixedSWM = WEIGHTS.distance2SWM(fullFC, swmFile, masterField,
threshold = 250000)
```
*Example: Inverse Distance Squared*
```
swmFile = OS.path.join(fullPath, "inv2_250k.swm")
fixedSWM = WEIGHTS.distance2SWM(fullFC, swmFile, masterField, fixed = False,
exponent = 2.0, threshold = 250000)
```
### k-Nearest Neighbors Options
``` INPUTS:
inputFC (str): path to the input feature class
swmFile (str): path to the SWM file.
masterField (str): field in table that serves as the mapping.
concept: {str, EUCLIDEAN}: EUCLIDEAN or MANHATTAN
kNeighs {int, 1}: number of neighbors to return
rowStandard {bool, True}: row standardize weights?
```
*Example: 8-nearest neighbors*
```
swmFile = OS.path.join(fullPath, "knn8.swm")
fixedSWM = WEIGHTS.kNearest2SWM(fullFC, swmFile, masterField, kNeighs = 8)
```
*Example: Fixed Distance - k-nearest neighbor hybrid [i.e. at least k neighbors but may have more...]*
```
swmFile = OS.path.join(fullPath, "fixed250k_knn8.swm")
fixedSWM = WEIGHTS.distance2SWM(fullFC, swmFile, masterField, kNeighs = 8,
threshold = 250000)
```
### Delaunay Triangulation Options
``` INPUTS:
inputFC (str): path to the input feature class
swmFile (str): path to the SWM file.
masterField (str): field in table that serves as the mapping.
rowStandard {bool, True}: row standardize weights?
```
*Example: delaunay*
```
swmFile = OS.path.join(fullPath, "delaunay.swm")
fixedSWM = WEIGHTS.delaunay2SWM(fullFC, swmFile, masterField)
```
### Polygon Contiguity Options <a id="poly_options"></a>
``` INPUTS:
inputFC (str): path to the input feature class
swmFile (str): path to the SWM file.
masterField (str): field in table that serves as the mapping.
concept: {str, EUCLIDEAN}: EUCLIDEAN or MANHATTAN
kNeighs {int, 0}: number of neighbors to return (1)
rowStandard {bool, True}: row standardize weights?
contiguityType {str, Rook}: {Rook = Edges Only, Queen = Edges/Vertices}
NOTES:
(1) kNeighs is an option often used when you know there are polygon
features that are not contiguous (e.g. islands). A kNeighs value
of 2 will assure that ALL features have at least 2 neighbors.
If a polygon is determined to only touch a single other polygon,
then a nearest neighbor search based on true centroids is used
to find the additional neighbor.
```
*Example: Rook [Binary]*
```
swmFile = OS.path.join(fullPath, "rook_bin.swm")
WEIGHTS.polygon2SWM(inputFC, swmFile, masterField, rowStandard = False)
```
*Example: Queen Contiguity [Row Standardized]*
```
swmFile = OS.path.join(fullPath, "queen.swm")
WEIGHTS.polygon2SWM(inputFC, swmFile, masterField, contiguityType = "QUEEN")
```
*Example: Queen Contiguity - KNN Hybrid [Prevents Islands w/ no Neighbors] [(1)](#poly_options)*
```
swmFile = OS.path.join(fullPath, "hybrid.swm")
WEIGHTS.polygon2SWM(inputFC, swmFile, masterField, kNeighs = 4)
```
# On-the-fly Neighborhood Iterators [GA Table]
1. Reads centroids of input features into spatial tree structure.
2. Distance Based Queries.
3. Scalable: In-memory/disk-space swap for large data.
4. Requires a unique long/short field to identify each feature. **Can be the OID/FID.**
5. Uses ```requireSearch = True``` when using ```ssdo.obtainData```
*Pre-Example: Load the Data into GA Version of SSDataObject*
```
import SSDataObject as SSDO
inputFC = r'../data/CA_Polygons.shp'
ssdo = SSDO.SSDataObject(inputFC)
uniqueIDField = ssdo.oidName
ssdo.obtainData(uniqueIDField, requireSearch = True)
```
*Example: NeighborSearch - When you only need your Neighbor IDs*
*gaSearch.init_nearest(distance_band, minimum_num_neighs, {"euclidean", "manhattan"})*
```
import arcgisscripting as ARC
import WeightsUtilities as WU
import gapy as GAPY
gaSearch = GAPY.ga_nsearch(ssdo.gaTable)
concept, gaConcept = WU.validateDistanceMethod('EUCLIDEAN', ssdo.spatialRef)
gaSearch.init_nearest(0.0, 4, gaConcept)
neighSearch = ARC._ss.NeighborSearch(ssdo.gaTable, gaSearch)
for i in range(len(neighSearch)):
neighOrderIDs = neighSearch[i]
if i < 5:
print(neighOrderIDs)
import arcgisscripting as ARC
import WeightsUtilities as WU
import gapy as GAPY
import SSUtilities as UTILS
inputGrid = r'D:\Data\UC\UC17\Island\Dykstra\Dykstra.gdb\emerge'
ssdo = SSDO.SSDataObject(inputGrid)
ssdo.obtainData(ssdo.oidName, requireSearch = True)
gaSearch = GAPY.ga_nsearch(ssdo.gaTable)
concept, gaConcept = WU.validateDistanceMethod('EUCLIDEAN', ssdo.spatialRef)
gaSearch.init_nearest(300., 0, gaConcept)
neighSearch = ARC._ss.NeighborSearch(ssdo.gaTable, gaSearch)
for i in range(len(neighSearch)):
neighOrderIDs = neighSearch[i]
x0,y0 = ssdo.xyCoords[i]
if i < 5:
nhs = ", ".join([str(i) for i in neighOrderIDs])
dist = []
for nh in neighOrderIDs:
x1,y1 = ssdo.xyCoords[nh]
dij = WU.euclideanDistance(x0,y0,x1,y1)
dist.append(UTILS.formatValue(dij, "%0.2f"))
print("ID {0} has {1} neighs, they are {2}".format(i, len(neighOrderIDs), nhs))
print("The Distances are... {0}".format(", ".join(dist)))
```
*Example: NeighborWeights - When you need non-uniform spatial weights (E.g. Inverse Distance Squared)*
*NeighborWeights(gaTable, gaSearch, weight_type [0: inverse_distance, **1: fixed_distance**], exponent = 1.0, row_standard = True, include_self = False)*
```
gaSearch = GAPY.ga_nsearch(ssdo.gaTable)
gaSearch.init_nearest(250000, 0, gaConcept)
neighSearch = ARC._ss.NeighborWeights(ssdo.gaTable, gaSearch, weight_type = 0, exponent = 2.0)
for i in range(len(neighSearch)):
neighOrderIDs, neighWeights = neighSearch[i]
if i < 3:
print(neighOrderIDs)
print(neighWeights)
```
# Constructing PySAL Spatial Weights
1. Convert masterID to orderID when using ssdo.obtainData (SWM File, Polygon Contiguity)
2. Data is already in orderID when using ssdo.obtainDataGA (Distance Based)
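The masterID-to-orderID conversion in point 1 is a dictionary lookup; here is a minimal illustration with made-up IDs (not real data):

```python
# Hypothetical unique ID field values (master IDs), in table order.
master_ids = [105, 230, 17, 86]
# Order IDs are simply 0..n-1 positions in the data arrays.
master2order = {m: i for i, m in enumerate(master_ids)}

# Re-key neighbor lists from master IDs to order IDs:
neigh_master = {105: [230, 17], 230: [105], 17: [105], 86: []}
neigh_order = {master2order[m]: [master2order[j] for j in nbrs]
               for m, nbrs in neigh_master.items()}
# neigh_order == {0: [1, 2], 1: [0], 2: [0], 3: []}
```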
**Methods in next cell can be imported from pysal2ArcGIS.py**
```
import arcpy as ARCPY
import arcgisscripting as ARC
import pysal as PYSAL
import gapy as GAPY
import WeightsUtilities as WU
import SSUtilities as UTILS
def swm2Weights(ssdo, swmfile):
"""Converts ArcGIS Sparse Spatial Weights Matrix (*.swm) file to
PySAL Sparse Spatial Weights Class.
INPUTS:
ssdo (class): instance of SSDataObject [1,2]
swmFile (str): full path to swm file
NOTES:
(1) Data must already be obtained using ssdo.obtainData()
(2) The masterField for the swm file and the ssdo object must be
the same and may NOT be the OID/FID/ObjectID
"""
neighbors = {}
weights = {}
#### Create SWM Reader Object ####
swm = WU.SWMReader(swmfile)
#### SWM May NOT be a Subset of the Data ####
if ssdo.numObs > swm.numObs:
ARCPY.AddIDMessage("ERROR", 842, ssdo.numObs, swm.numObs)
raise SystemExit()
#### Parse All SWM Records ####
for r in UTILS.ssRange(swm.numObs):
info = swm.swm.readEntry()
masterID, nn, nhs, w, sumUnstandard = info
#### Must Have at Least One Neighbor ####
if nn:
#### Must be in Selection Set (If Exists) ####
if masterID in ssdo.master2Order:
outNHS = []
outW = []
#### Transform Master ID to Order ID ####
orderID = ssdo.master2Order[masterID]
#### Neighbors and Weights Adjusted for Selection ####
for nhInd, nhVal in enumerate(nhs):
try:
nhOrder = ssdo.master2Order[nhVal]
outNHS.append(nhOrder)
weightVal = w[nhInd]
if swm.rowStandard:
weightVal = weightVal * sumUnstandard[0]
outW.append(weightVal)
except KeyError:
pass
#### Add Selected Neighbors/Weights ####
if len(outNHS):
neighbors[orderID] = outNHS
weights[orderID] = outW
swm.close()
#### Construct PySAL Spatial Weights and Standardize as per SWM ####
w = PYSAL.W(neighbors, weights)
if swm.rowStandard:
w.transform = 'R'
return w
def poly2Weights(ssdo, contiguityType = "ROOK", rowStandard = True):
"""Uses GP Polygon Neighbor Tool to construct contiguity relationships
and stores them in PySAL Sparse Spatial Weights class.
INPUTS:
ssdo (class): instance of SSDataObject [1]
contiguityType {str, ROOK}: ROOK or QUEEN contiguity
rowStandard {bool, True}: whether to row standardize the spatial weights
NOTES:
(1) Data must already be obtained using ssdo.obtainData() or ssdo.obtainDataGA ()
"""
neighbors = {}
weights = {}
polyNeighDict = WU.polygonNeighborDict(ssdo.inputFC, ssdo.masterField,
contiguityType = contiguityType)
for masterID, neighIDs in UTILS.iteritems(polyNeighDict):
orderID = ssdo.master2Order[masterID]
neighbors[orderID] = [ssdo.master2Order[i] for i in neighIDs]
w = PYSAL.W(neighbors)
if rowStandard:
w.transform = 'R'
return w
def distance2Weights(ssdo, neighborType = 1, distanceBand = 0.0, numNeighs = 0, distanceType = "euclidean",
exponent = 1.0, rowStandard = True, includeSelf = False):
"""Uses ArcGIS Neighborhood Searching Structure to create a PySAL Sparse Spatial Weights Matrix.
INPUTS:
ssdo (class): instance of SSDataObject [1]
neighborType {int, 1}: 0 = inverse distance, 1 = fixed distance,
2 = k-nearest-neighbors, 3 = delaunay
distanceBand {float, 0.0}: return all neighbors within this distance for inverse/fixed distance
numNeighs {int, 0}: number of neighbors for k-nearest-neighbor, can also be used to set a minimum
number of neighbors for inverse/fixed distance
distanceType {str, euclidean}: manhattan or euclidean distance [2]
exponent {float, 1.0}: distance decay factor for inverse distance
rowStandard {bool, True}: whether to row standardize the spatial weights
includeSelf {bool, False}: whether to return self as a neighbor
NOTES:
(1) Data must already be obtained using ssdo.obtainDataGA()
(2) Chordal Distance is used for GCS Data
"""
neighbors = {}
weights = {}
gaSearch = GAPY.ga_nsearch(ssdo.gaTable)
if neighborType == 3:
gaSearch.init_delaunay()
neighSearch = ARC._ss.NeighborWeights(ssdo.gaTable, gaSearch, weight_type = 1)
else:
if neighborType == 2:
distanceBand = 0.0
weightType = 1
else:
weightType = neighborType
concept, gaConcept = WU.validateDistanceMethod(distanceType.upper(), ssdo.spatialRef)
gaSearch.init_nearest(distanceBand, numNeighs, gaConcept)
neighSearch = ARC._ss.NeighborWeights(ssdo.gaTable, gaSearch, weight_type = weightType,
exponent = exponent, include_self = includeSelf)
for i in range(len(neighSearch)):
neighOrderIDs, neighWeights = neighSearch[i]
neighbors[i] = neighOrderIDs
weights[i] = neighWeights
w = PYSAL.W(neighbors, weights)
if rowStandard:
w.transform = 'R'
return w
```
# Converting Spatial Weight Matrix Formats (e.g. *.swm, *.gwt, *.gal)
- Follow directions at the PySAL-ArcGIS-Toolbox Git Repository [https://github.com/Esri/PySAL-ArcGIS-Toolbox]
- Please make note of the section on **Adding a Git Project to your ArcGIS Installation Python Path**.
```
import WeightConvertor as W_CONVERT
swmFile = OS.path.join(fullPath, "queen.swm")
galFile = OS.path.join(fullPath, "queen.gal")
convert = W_CONVERT.WeightConvertor(swmFile, galFile, inputFC, "MYID", "SWM", "GAL")
convert.createOutput()
```
**Calling MaxP Regions Using SWM Based on Rook Contiguity, No Row Standardization**
```
import numpy as NUM
NUM.random.seed(100)
ssdo = SSDO.SSDataObject(inputFC)
uniqueIDField = "MYID"
fieldNames = ['PCR2010', 'POP2010', 'PERCNOHS']
ssdo.obtainDataGA(uniqueIDField, fieldNames)
df = ssdo.getDataFrame()
X = df.values  # df.as_matrix() was removed in newer pandas versions
swmFile = OS.path.join(fullPath, "rook_bin.swm")
w = swm2Weights(ssdo, swmFile)
maxp = PYSAL.region.Maxp(w, X[:,0:2], 3000000., floor_variable = X[:,2])
maxpGroups = NUM.empty((ssdo.numObs,), int)
for regionID, orderIDs in enumerate(maxp.regions):
maxpGroups[orderIDs] = regionID
print((regionID, orderIDs))
```
**Calling MaxP Regions Using Rook Contiguity, No Row Standardization**
```
NUM.random.seed(100)
w = poly2Weights(ssdo, rowStandard = False)
maxp = PYSAL.region.Maxp(w, X[:,0:2], 3000000., floor_variable = X[:,2])
maxpGroups = NUM.empty((ssdo.numObs,), int)
for regionID, orderIDs in enumerate(maxp.regions):
maxpGroups[orderIDs] = regionID
print((regionID, orderIDs))
```
**Identical results because the random seed was set to 100 and they have the same spatial neighborhood**
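The reproducibility claim can be illustrated with the standard library alone (a toy sketch: Maxp above seeds NumPy's generator the same way before each run):

```python
import random

# Re-seeding before each run makes the pseudo-random draws identical,
# which is why the two region-growing runs above agree exactly.
random.seed(100)
first = [random.random() for _ in range(5)]

random.seed(100)
second = [random.random() for _ in range(5)]

print(first == second)  # True
```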
**Calling MaxP Regions Using Fixed Distance 250000, Hybrid to Ensure at Least 2 Neighbors**
```
NUM.random.seed(100)
w = distance2Weights(ssdo, distanceBand = 250000.0, numNeighs = 2)
maxp = PYSAL.region.Maxp(w, X[:,0:2], 3000000., floor_variable = X[:,2])
maxpGroups = NUM.empty((ssdo.numObs,), int)
for regionID, orderIDs in enumerate(maxp.regions):
maxpGroups[orderIDs] = regionID
print((regionID, orderIDs))
```
**Same random seed, but a different result because the neighborhood differs from the two previous runs**
```
import pandas as pd
cols=['data','label','index']
```
# Malayalam Data
```
mal_train = pd.read_csv('/content/drive/MyDrive/mal_full_offensive_train.csv',sep='\t',names=cols)
mal_dev= pd.read_csv('/content/drive/MyDrive/mal_full_offensive_dev.csv',sep='\t',names=cols)
mal_test = pd.read_csv('/content/drive/MyDrive/mal_full_offensive_test.csv',sep='\t',names=['data'])
mal_train = mal_train[['data','label']]
mal_dev = mal_dev[['data','label']]
mal_test = mal_test[['data']]
mal_train.head()
mal_train.info()
mal_train[mal_train['label']=='not-malayalam']
mal_train['label'].value_counts()
print(len(mal_train))
mal_train = mal_train.drop_duplicates()
print(len(mal_train))
mal_train['label'].value_counts()
mal_train['token_length'] = [len(x.split(" ")) for x in mal_train.data]
print(max(mal_train.token_length))
print(min(mal_train.token_length))
print(sum(mal_train.token_length)/len(mal_train.token_length))
```
# Tamil Data
```
tamil_train = pd.read_csv('/content/drive/MyDrive/tamil_offensive_full_train.csv',sep='\t',names=cols)
tamil_dev= pd.read_csv('/content/drive/MyDrive/tamil_offensive_full_dev.csv',sep='\t',names=cols)
tamil_test = pd.read_csv('/content/drive/MyDrive/tamil_offensive_full_test.csv',sep='\t',names=['data'])
tamil_train = tamil_train[['data','label']]
tamil_dev = tamil_dev[['data','label']]
tamil_test = tamil_test[['data']]
tamil_train.head()
tamil_train.info()
tamil_train['label'].value_counts()
print(len(tamil_train))
tamil_train = tamil_train.drop_duplicates()
print(len(tamil_train))
tamil_train['label'].value_counts()
tamil_train['token_length'] = [len(x.split(" ")) for x in tamil_train.data]
print(max(tamil_train.token_length))
print(min(tamil_train.token_length))
print(sum(tamil_train.token_length)/len(tamil_train.token_length))
```
# Kannada data
```
kannada_train = pd.read_csv('/content/drive/MyDrive/kannada_offensive_train.csv',sep='\t',names=cols)
kannada_dev= pd.read_csv('/content/drive/MyDrive/kannada_offensive_dev.csv',sep='\t',names=cols)
kannada_test = pd.read_csv('/content/drive/MyDrive/kannada_offensive_test.csv',sep='\t',names=['data'])
kannada_train = kannada_train[['data','label']]
kannada_dev = kannada_dev[['data','label']]
kannada_test = kannada_test[['data']]
kannada_train.head()
kannada_train.info()
print(len(kannada_train))
kannada_train = kannada_train.drop_duplicates()
print(len(kannada_train))
kannada_train['label'].value_counts()
kannada_train['token_length'] = [len(x.split(" ")) for x in kannada_train.data]
print(max(kannada_train.token_length))
print(min(kannada_train.token_length))
print(sum(kannada_train.token_length)/len(kannada_train.token_length))
```
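The per-language statistics above repeat the same max/min/mean pattern three times; a small helper could factor it out. This is a sketch operating on a plain list of strings rather than the DataFrames above:

```python
def token_length_stats(texts):
    """Return (max, min, mean) whitespace-token counts for a list of strings."""
    lengths = [len(t.split(" ")) for t in texts]
    return max(lengths), min(lengths), sum(lengths) / len(lengths)

# e.g. token_length_stats(mal_train['data']) would reproduce the three prints above
print(token_length_stats(["one two three", "one two", "one"]))  # (3, 1, 2.0)
```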
# Tokenization
```
!pip install indic-nlp-library
!git clone https://github.com/anoopkunchukuttan/indic_nlp_resources.git
INDIC_NLP_RESOURCES=r"/content/indic_nlp_resources"
from indicnlp import common
common.set_resources_path(INDIC_NLP_RESOURCES)
from indicnlp import loader
loader.load()
```
Normalization
```
mal_lines = []
for i in mal_train['data']:
mal_lines.append(i)
len(mal_lines)
from indicnlp.normalize.indic_normalize import IndicNormalizerFactory
# input_text="പലദേശം. പല ഭാഷ ഒരേ ഒരു രാജാവ് അല്ലാതെ സ്വന്തം രാജവയത് അല്ല"
# remove_nuktas=False
factory=IndicNormalizerFactory()
normalizer=factory.get_normalizer("ml")
# %%time
nor_mal_lines = []
for i in range(len(mal_lines)):
nor_mal_line = normalizer.normalize(mal_lines[i])
nor_mal_lines.append(nor_mal_line)
# new_mal_lines
len(nor_mal_lines)
```
Tokenization word level
```
from indicnlp.tokenize import indic_tokenize
tokenized_mal_lines = []
for i in range(len(mal_lines)):
tokenized_mal_line = indic_tokenize.trivial_tokenize(nor_mal_lines[i])
tokenized_mal_lines.append(tokenized_mal_line)
# tokenized_mal_lines
tokenized_mal_lines[2]
mal_lines[2]
from indicnlp.transliterate.unicode_transliterate import ItransTransliterator
transliterated_mal_lines = []
flags=[]
for i in range(len(mal_lines)):
transliterated_mal_line = ItransTransliterator.from_itrans(mal_lines[i],'ml')
if(transliterated_mal_line == mal_lines[i]):
flag=1
else:
flag=0
flags.append(flag)
transliterated_mal_lines.append(transliterated_mal_line)
transliterated_mal_lines[0]
print('native malayalam sentences: ',sum(flags))
from indicnlp.transliterate.unicode_transliterate import ItransTransliterator
en_transliterated_mal_lines = []
flags=[]
for i in range(len(mal_lines)):
en_transliterated_mal_line = ItransTransliterator.to_itrans(mal_lines[i],'ml')
if(en_transliterated_mal_line == mal_lines[i]):
flag=1
else:
flag=0
flags.append(flag)
en_transliterated_mal_lines.append(en_transliterated_mal_line)
en_transliterated_mal_lines[0]
whole_mal_train = ''
for i in range(len(mal_lines)):
whole_mal_train+=str(mal_lines[i])
len(whole_mal_train)
from indicnlp.langinfo import *
lang='ml'
vowels = 0
for i in range(len(whole_mal_train)):
if(is_vowel(whole_mal_train[i],lang)):
vowels+=1
print('Total characters: ',len(whole_mal_train))
print('Total vowels: ',vowels)
```
# ULMFiT Malayalam
```
!pip install sentencepiece
#reference: https://github.com/goru001/nlp-for-malyalam/blob/master/classification/Malyalam_Classification_Model.ipynb
from fastai.text import *
import numpy as np
from sklearn.model_selection import train_test_split
import pickle
import sentencepiece as spm
import re
import pdb
import fastai, torch
fastai.__version__ , torch.__version__
def handle_all_caps(t: str) -> str:
tokens = t.split()
tokens = replace_all_caps(tokens)
return ' '.join(tokens)
def handle_upper_case_first_letter(t: str) -> str:
tokens = t.split()
tokens = deal_caps(tokens)
return ' '.join(tokens)
def lower_case_everything(t: str) -> str:
return t.lower()
class CodeMixedMalayalamTokenizer(BaseTokenizer):
def __init__(self, lang:str):
self.lang = lang
self.sp = spm.SentencePieceProcessor()
self.sp.Load(str("/content/drive/MyDrive/AggressionNLP/code-mixed-enma/tokenizer_mixed_script/mlen_spm.model"))
def tokenizer(self, t:str) -> List[str]:
return self.sp.EncodeAsPieces(t)
sp = spm.SentencePieceProcessor()
sp.Load(str("/content/drive/MyDrive/AggressionNLP/code-mixed-enma/tokenizer_mixed_script/mlen_spm.model"))
itos = [sp.IdToPiece(int(i)) for i in range(25000)]
# 25,000 is the vocab size that we chose in sentencepiece
mlen_vocab = Vocab(itos)
tokenizer = Tokenizer(lang='mlen', tok_func=CodeMixedMalayalamTokenizer)
tokenizer.pre_rules.append(lower_case_everything)
tokenizer.pre_rules.append(handle_all_caps)
tokenizer.pre_rules.append(handle_upper_case_first_letter)
tokenizer.special_cases, tokenizer.pre_rules, tokenizer.post_rules
label_cols = ['label']
df_test_pred=pd.read_csv('/content/mal_test_preds_2.csv',names=['query','predicted_label'],skiprows=1)
df_test_pred
df_test_pred['data']=df_test_pred['query']
df_test_pred['label']=df_test_pred['predicted_label']
mal_train_new = pd.concat([mal_train,df_test_pred])
mal_train_new
from sklearn.model_selection import train_test_split
X_train, X_val, y_train, y_val = train_test_split(mal_train_new['data'],mal_train_new['label'], test_size = 0.2, random_state = 42, stratify=mal_train_new['label'])
X_train
X_train_df = pd.concat([X_train, y_train], axis=1, keys=['text', 'label'])
X_val_df = pd.concat([X_val, y_val], axis=1, keys=['text', 'label'])
X_test_df = pd.concat([mal_dev['data'], mal_dev['label']], axis=1, keys=['text', 'label'])
# X_train_df['text']
data_lm = TextLMDataBunch.from_df(path='/content', train_df=X_train_df, valid_df=X_val_df, test_df=X_test_df, tokenizer=tokenizer, vocab=mlen_vocab,text_cols='text')
data_lm.show_batch()
learn = language_model_learner(data_lm, arch=AWD_LSTM, drop_mult=0.3, pretrained=False)
# !unzip '/content/drive/MyDrive/AggressionNLP/code-mixed-enma/lm_mixed_script/models.zip' -d '/content/drive/MyDrive/AggressionNLP/MalayalamEnglish/lm_mixed_script/'
# Load the language model pretrained on Malayalam Wikipedia
learn.load('/content/drive/MyDrive/AggressionNLP/MalayalamEnglish/lm_mixed_script/models/best_model', with_opt=True)
learn.freeze()
learn.fit_one_cycle(1, 1e-2)
learn.unfreeze()
learn.fit_one_cycle(5, 1e-3)
learn.save_encoder('mal_en_fine_tuned_enc')
data_clas = TextClasDataBunch.from_df(path='/content', train_df=X_train_df, valid_df=X_val_df, test_df=X_test_df, tokenizer=tokenizer, vocab=mlen_vocab,text_cols=['text'], label_cols=['label'], bs=16)
data_clas.show_batch()
learn = text_classifier_learner(data_clas, arch=AWD_LSTM, drop_mult=0.5)
learn.load_encoder('mal_en_fine_tuned_enc')
learn.freeze()
learn.loss_func
mcc = MatthewsCorreff()
learn.metrics = [mcc, accuracy]
learn.fit_one_cycle(1, 1e-2)
learn.freeze_to(-2)
learn.fit_one_cycle(1, 1e-2)
learn.save('mal_en-second-full')
learn.unfreeze()
learn.fit_one_cycle(5, 1e-3, callbacks=[callbacks.SaveModelCallback(learn, every='improvement', monitor='accuracy', name='mal_en_final')])
learn.load('mal_en_final')
from sklearn.metrics import accuracy_score, matthews_corrcoef
df_dict = {'query': list(mal_dev['data']), 'actual_label': list(mal_dev['label']), 'predicted_label': ['']*mal_dev.shape[0]}
all_nodes = list(set(mal_train['label']))
for node in all_nodes:
df_dict[node] = ['']*mal_dev.shape[0]
i2c = {}
for key, value in learn.data.c2i.items():
i2c[value] = key
df_result = pd.DataFrame(df_dict)
preds = learn.get_preds(ds_type=DatasetType.Test, ordered=True)
for index, row in df_result.iterrows():
for node in all_nodes:
row[node] = preds[0][index][learn.data.c2i[node]].item()
row['predicted_label'] = i2c[np.argmax(preds[0][index]).data.item()]
df_result.head()
mal_test=mal_test.to_frame()
preds = []
for index, row in mal_test.iterrows():
p = learn.predict(row['data'])
preds.append(str(p[0]))
mal_test['text']=mal_test['data']
from sklearn.metrics import accuracy_score, matthews_corrcoef
data_lm_2 = TextClasDataBunch.from_df(path='/content',train_df=X_train_df, valid_df=X_val_df, test_df=mal_test, tokenizer=tokenizer, vocab=mlen_vocab,text_cols='text',label_cols=['label'])
learn = text_classifier_learner(data_lm_2, arch=AWD_LSTM, drop_mult=0.5)
learn.load_encoder('/content/drive/MyDrive/AggressionNLP/MalayalamEnglish/models/mal_en_fine_tuned_enc')
learn.load('/content/drive/MyDrive/AggressionNLP/MalayalamEnglish/models/mal_en_final')
df_dict = {'query': list(mal_test['text']), 'predicted_label': ['']*mal_test.shape[0]}
all_nodes = list(set(mal_train['label']))
for node in all_nodes:
df_dict[node] = ['']*mal_test.shape[0]
i2c = {}
for key, value in learn.data.c2i.items():
i2c[value] = key
df_result_2 = pd.DataFrame(df_dict)
preds = learn.get_preds(ds_type=DatasetType.Test, ordered=True)
for index, row in df_result_2.iterrows():
for node in all_nodes:
row[node] = preds[0][index][learn.data.c2i[node]].item()
row['predicted_label'] = i2c[np.argmax(preds[0][index]).data.item()]
df_result_2.head()
df_test_pred_2 = pd.DataFrame(
{'query': mal_test['data'],
'predicted_label': preds })
df_test_pred_2.to_csv('mal_test_preds_2.csv')
accuracy_score(df_result['actual_label'], df_result['predicted_label'])
matthews_corrcoef(df_result['actual_label'], df_result['predicted_label'])
from sklearn.metrics import classification_report
print(classification_report(df_result['actual_label'], df_result['predicted_label']))
#                                        precision    recall  f1-score   support
#                        Not_offensive       0.97      0.98      0.97      1779
#      Offensive_Targeted_Insult_Group       0.57      0.31      0.40        13
# Offensive_Targeted_Insult_Individual       0.80      0.33      0.47        24
#                Offensive_Untargetede       0.55      0.60      0.57        20
#                        not-malayalam       0.81      0.87      0.83       163
#                             accuracy                           0.95      1999
#                            macro avg       0.74      0.62      0.65      1999
#                         weighted avg       0.95      0.95      0.95      1999
df_result.to_excel('mal_ml_2.xlsx', index=False,encoding='utf-16')
df_result_2.to_excel('mal_ml_test_preds.xlsx', index=False,encoding='utf-16')
!mv '/content/models' '/content/drive/MyDrive/AggressionNLP/MalayalamEnglish'
```
# ULMFiT Tamil
```
!pip install sentencepiece
#reference: https://github.com/goru001/nlp-for-malyalam/blob/master/classification/Malyalam_Classification_Model.ipynb
from fastai.text import *
import numpy as np
from sklearn.model_selection import train_test_split
import pickle
import sentencepiece as spm
import re
import pdb
import fastai, torch
fastai.__version__ , torch.__version__
def handle_all_caps(t: str) -> str:
tokens = t.split()
tokens = replace_all_caps(tokens)
return ' '.join(tokens)
def handle_upper_case_first_letter(t: str) -> str:
tokens = t.split()
tokens = deal_caps(tokens)
return ' '.join(tokens)
def lower_case_everything(t: str) -> str:
return t.lower().replace('@user', '').replace('#tag ', '').replace('rt ', '').strip()
class CodeMixedTamilTokenizer(BaseTokenizer):
def __init__(self, lang:str):
self.lang = lang
self.sp = spm.SentencePieceProcessor()
self.sp.Load(str("/content/drive/MyDrive/AggressionNLP/tokenizer/taen_spm.model"))
def tokenizer(self, t:str) -> List[str]:
return self.sp.EncodeAsPieces(t)
sp = spm.SentencePieceProcessor()
sp.Load(str("/content/drive/MyDrive/AggressionNLP/tokenizer/taen_spm.model"))
itos = [sp.IdToPiece(int(i)) for i in range(8000)]
# 8,000 is the vocab size that we chose in sentencepiece
taen_vocab = Vocab(itos)
tokenizer = Tokenizer(lang='taen', tok_func=CodeMixedTamilTokenizer)
tokenizer.pre_rules.append(lower_case_everything)
tokenizer.pre_rules.append(handle_all_caps)
tokenizer.pre_rules.append(handle_upper_case_first_letter)
tokenizer.special_cases, tokenizer.pre_rules, tokenizer.post_rules
label_cols = ['label']
tamil_train
df_result_2['data']=df_result_2['query']
df_result_2['label']=df_result_2['predicted_label']
tamil_train_new = pd.concat([tamil_train,df_result_2])
tamil_train_new
from sklearn.model_selection import train_test_split
X_train, X_val, y_train, y_val = train_test_split(tamil_train_new['data'],tamil_train_new['label'], test_size = 0.2, random_state = 42, stratify=tamil_train_new['label'])
X_train
X_train_df = pd.concat([X_train, y_train], axis=1, keys=['text', 'label'])
X_val_df = pd.concat([X_val, y_val], axis=1, keys=['text', 'label'])
X_test_df = pd.concat([tamil_dev['data'], tamil_dev['label']], axis=1, keys=['text', 'label'])
# X_train_df['text']
data_lm = TextLMDataBunch.from_df(path='/content', train_df=X_train_df, valid_df=X_val_df, test_df=X_test_df, tokenizer=tokenizer, vocab=taen_vocab,text_cols='text')
data_lm.show_batch()
learn = language_model_learner(data_lm, arch=AWD_LSTM, drop_mult=0.3, pretrained=False)
# !unzip '/content/drive/MyDrive/AggressionNLP/TamilPretrainedLanguageModel/models.zip' -d '/content/drive/MyDrive/AggressionNLP/TamilPretrainedLanguageModel'
# Load the language model pretrained on Tamil Wikipedia
learn.load('/content/drive/MyDrive/AggressionNLP/TamilPretrainedLanguageModel/models/best_model', with_opt=True)
learn.freeze()
learn.fit_one_cycle(1, 1e-2)
learn.unfreeze()
learn.fit_one_cycle(5, 1e-3)
learn.save_encoder('ta_en_fine_tuned_enc')
data_clas = TextClasDataBunch.from_df(path='/content', train_df=X_train_df, valid_df=X_val_df,test_df=X_test_df, tokenizer=tokenizer, vocab=taen_vocab,text_cols=['text'], label_cols=['label'], bs=16)
data_clas.show_batch()
learn = text_classifier_learner(data_clas, arch=AWD_LSTM, drop_mult=0.5)
learn.load_encoder('ta_en_fine_tuned_enc')
learn.freeze()
learn.loss_func.func
mcc = MatthewsCorreff()
learn.metrics = [mcc, accuracy]
learn.fit_one_cycle(1, 1e-2)
learn.freeze_to(-2)
learn.fit_one_cycle(1, 1e-2)
learn.save('ta_en_second-full')
learn.unfreeze()
learn.fit_one_cycle(5, 1e-3, callbacks=[callbacks.SaveModelCallback(learn, every='improvement', monitor='accuracy', name='ta_en_final')])
learn.load('ta_en_final')
from sklearn.metrics import accuracy_score, matthews_corrcoef
df_dict = {'query': list(tamil_dev['data']), 'actual_label': list(tamil_dev['label']), 'predicted_label': ['']*tamil_dev.shape[0]}
all_nodes = list(set(tamil_train['label']))
for node in all_nodes:
df_dict[node] = ['']*tamil_dev.shape[0]
i2c = {}
for key, value in learn.data.c2i.items():
i2c[value] = key
df_result = pd.DataFrame(df_dict)
preds = learn.get_preds(ds_type=DatasetType.Test, ordered=True)
for index, row in df_result.iterrows():
for node in all_nodes:
row[node] = preds[0][index][learn.data.c2i[node]].item()
row['predicted_label'] = i2c[np.argmax(preds[0][index]).data.item()]
df_result.head()
from sklearn.metrics import accuracy_score, matthews_corrcoef
data_lm_2 = TextClasDataBunch.from_df(path='/content',train_df=X_train_df, valid_df=X_val_df, test_df=tamil_test, tokenizer=tokenizer, vocab=taen_vocab,text_cols='text',label_cols=['label'])
learn = text_classifier_learner(data_lm_2, arch=AWD_LSTM, drop_mult=0.5)
learn.load_encoder('ta_en_fine_tuned_enc')
learn.load('/content/models/ta_en_final')
df_dict = {'query': list(tamil_test['text']), 'predicted_label': ['']*tamil_test.shape[0]}
all_nodes = list(set(tamil_train['label']))
for node in all_nodes:
df_dict[node] = ['']*tamil_test.shape[0]
i2c = {}
for key, value in learn.data.c2i.items():
i2c[value] = key
df_result_2 = pd.DataFrame(df_dict)
preds = learn.get_preds(ds_type=DatasetType.Test, ordered=True)
for index, row in df_result_2.iterrows():
for node in all_nodes:
row[node] = preds[0][index][learn.data.c2i[node]].item()
row['predicted_label'] = i2c[np.argmax(preds[0][index]).data.item()]
df_result_2.head()
df_result_2
accuracy_score(df_result['actual_label'], df_result['predicted_label'])
matthews_corrcoef(df_result['actual_label'], df_result['predicted_label'])
from sklearn.metrics import classification_report
print(classification_report(df_result['actual_label'], df_result['predicted_label']))
df_result.to_excel('taen_ml.xlsx', index=False,encoding='utf-16')
df_result_2.to_excel('taen_ml_test_preds.xlsx', index=False,encoding='utf-16')
# precision recall f1-score support
# Not_offensive 0.81 0.96 0.88 3193
# Offensive_Targeted_Insult_Group 0.41 0.13 0.20 295
# Offensive_Targeted_Insult_Individual 0.52 0.26 0.35 307
# Offensive_Targeted_Insult_Other 0.00 0.00 0.00 65
# Offensive_Untargetede 0.46 0.25 0.32 356
# not-Tamil 0.86 0.70 0.77 172
# accuracy 0.78 4388
# macro avg 0.51 0.38 0.42 4388
# weighted avg 0.72 0.78 0.73 4388
!mv '/content/models' '/content/drive/MyDrive/AggressionNLP/TamilEnglishResults'
```
# ORF recognition by CNN
Use a variable number of bases between START and STOP. Thus, ncRNA will have its STOP out of frame or too close to the START, while pcRNA will have its STOP in frame and far from the START.
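The labeling rule can be sketched in plain Python. This is an illustration of the criterion only, not the actual Random_Base_Oracle code used below:

```python
def has_long_inframe_orf(rna, min_cds=16):
    """True if the first STOP codon after the first START is in frame with it
    and at least min_cds bases downstream -- the 'coding' criterion above."""
    stops = {"TAA", "TAG", "TGA"}
    start = rna.find("ATG")
    if start < 0:
        return False
    for i in range(start + 3, len(rna) - 2, 3):  # walk in-frame codons
        if rna[i:i + 3] in stops:
            return (i - start) >= min_cds
    return False

print(has_long_inframe_orf("ATG" + "AAA" * 6 + "TAA"))  # True: in-frame STOP, 21 bases downstream
print(has_long_inframe_orf("ATGAAATAA"))                # False: STOP too close to the START
```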
```
import time
t = time.time()
time.strftime('%Y-%m-%d %H:%M:%S %Z', time.localtime(t))
PC_SEQUENCES=10000 # how many protein-coding sequences
NC_SEQUENCES=10000 # how many non-coding sequences
PC_TESTS=1000
NC_TESTS=1000
RNA_LEN=32 # how long is each sequence
CDS_LEN=16 # min CDS len to be coding
ALPHABET=4 # how many different letters are possible
INPUT_SHAPE_2D = (RNA_LEN,ALPHABET,1) # Conv2D needs 3D inputs
INPUT_SHAPE = (RNA_LEN,ALPHABET) # Conv1D needs 2D inputs
FILTERS = 16 # how many different patterns the model looks for
NEURONS = 16
DROP_RATE = 0.3
WIDTH = 3 # how wide each pattern is, in bases
STRIDE_2D = (1,1) # For Conv2D how far in each direction
STRIDE = 1 # For Conv1D, how far between pattern matches, in bases
EPOCHS=100 # how many times to train on all the data
SPLITS=3 # SPLITS=3 means train on 2/3 and validate on 1/3
FOLDS=3 # train the model this many times (range 1 to SPLITS)
import sys
IN_COLAB = False
try:
from google.colab import drive
IN_COLAB = True
except:
pass
if IN_COLAB:
print("On Google CoLab, mount cloud-local file, get our code from GitHub.")
PATH='/content/drive/'
#drive.mount(PATH,force_remount=True) # hardly ever need this
#drive.mount(PATH) # Google will require login credentials
DATAPATH=PATH+'My Drive/data/' # must end in "/"
import requests
r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/RNA_describe.py')
with open('RNA_describe.py', 'w') as f:
f.write(r.text)
from RNA_describe import ORF_counter
from RNA_describe import Random_Base_Oracle
r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/RNA_prep.py')
with open('RNA_prep.py', 'w') as f:
f.write(r.text)
from RNA_prep import prepare_inputs_len_x_alphabet
else:
print("CoLab not working. On my PC, use relative paths.")
DATAPATH='data/' # must end in "/"
sys.path.append("..") # append parent dir in order to use sibling dirs
from SimTools.RNA_describe import ORF_counter,Random_Base_Oracle
from SimTools.RNA_prep import prepare_inputs_len_x_alphabet
MODELPATH="BestModel" # saved on cloud instance and lost after logout
#MODELPATH=DATAPATH+MODELPATH # saved on Google Drive but requires login
from os import listdir
import csv
from zipfile import ZipFile
import numpy as np
import pandas as pd
from scipy import stats # mode
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from keras.models import Sequential
from keras.layers import Dense,Embedding,Dropout
from keras.layers import Conv1D,Conv2D
from keras.layers import Flatten,MaxPooling1D,MaxPooling2D
from keras.losses import BinaryCrossentropy
# tf.keras.losses.BinaryCrossentropy
import matplotlib.pyplot as plt
from matplotlib import colors
mycmap = colors.ListedColormap(['red','blue']) # list color for label 0 then 1
np.set_printoptions(precision=2)
rbo=Random_Base_Oracle(RNA_LEN,True)
pc_all,nc_all = rbo.get_partitioned_sequences(CDS_LEN,10) # just testing
pc_all,nc_all = rbo.get_partitioned_sequences(CDS_LEN,PC_SEQUENCES+PC_TESTS)
print("Use",len(pc_all),"PC seqs")
print("Use",len(nc_all),"NC seqs")
# Describe the sequences
def describe_sequences(list_of_seq):
oc = ORF_counter()
num_seq = len(list_of_seq)
rna_lens = np.zeros(num_seq)
orf_lens = np.zeros(num_seq)
for i in range(0,num_seq):
rna_len = len(list_of_seq[i])
rna_lens[i] = rna_len
oc.set_sequence(list_of_seq[i])
orf_len = oc.get_max_orf_len()
orf_lens[i] = orf_len
print ("Average RNA length:",rna_lens.mean())
print ("Average ORF length:",orf_lens.mean())
print("Simulated sequences prior to adjustment:")
print("PC seqs")
describe_sequences(pc_all)
print("NC seqs")
describe_sequences(nc_all)
pc_train=pc_all[:PC_SEQUENCES]
nc_train=nc_all[:NC_SEQUENCES]
pc_test=pc_all[PC_SEQUENCES:]
nc_test=nc_all[NC_SEQUENCES:]
# Use code from our SimTools library.
X,y = prepare_inputs_len_x_alphabet(pc_train,nc_train,ALPHABET) # shuffles
print("Data ready.")
def make_DNN():
print("make_DNN")
print("input shape:",INPUT_SHAPE)
dnn = Sequential()
#dnn.add(Embedding(input_dim=INPUT_SHAPE,output_dim=INPUT_SHAPE))
dnn.add(Conv1D(filters=FILTERS,kernel_size=WIDTH,strides=STRIDE,padding="same",
input_shape=INPUT_SHAPE))
dnn.add(Conv1D(filters=FILTERS,kernel_size=WIDTH,strides=STRIDE,padding="same"))
dnn.add(MaxPooling1D())
dnn.add(Conv1D(filters=FILTERS,kernel_size=WIDTH,strides=STRIDE,padding="same"))
dnn.add(Conv1D(filters=FILTERS,kernel_size=WIDTH,strides=STRIDE,padding="same"))
dnn.add(MaxPooling1D())
dnn.add(Flatten())
dnn.add(Dense(NEURONS,activation="sigmoid",dtype=np.float32))
dnn.add(Dropout(DROP_RATE))
dnn.add(Dense(1,activation="sigmoid",dtype=np.float32))
dnn.compile(optimizer='adam',
loss=BinaryCrossentropy(from_logits=False),
metrics=['accuracy']) # add to default metrics=loss
dnn.build(input_shape=INPUT_SHAPE)
#ln_rate = tf.keras.optimizers.Adam(learning_rate = LN_RATE)
#bc=tf.keras.losses.BinaryCrossentropy(from_logits=False)
#model.compile(loss=bc, optimizer=ln_rate, metrics=["accuracy"])
return dnn
model = make_DNN()
print(model.summary())
from keras.callbacks import ModelCheckpoint
def do_cross_validation(X,y):
cv_scores = []
fold=0
mycallbacks = [ModelCheckpoint(
filepath=MODELPATH, save_best_only=True,
monitor='val_accuracy', mode='max')]
splitter = KFold(n_splits=SPLITS) # this does not shuffle
for train_index,valid_index in splitter.split(X):
if fold < FOLDS:
fold += 1
X_train=X[train_index] # inputs for training
y_train=y[train_index] # labels for training
X_valid=X[valid_index] # inputs for validation
y_valid=y[valid_index] # labels for validation
print("MODEL")
# Call constructor on each CV. Else, continually improves the same model.
model = make_DNN()
print("FIT") # model.fit() implements learning
start_time=time.time()
history=model.fit(X_train, y_train,
epochs=EPOCHS,
verbose=1, # ascii art while learning
callbacks=mycallbacks, # called at end of each epoch
validation_data=(X_valid,y_valid))
end_time=time.time()
elapsed_time=(end_time-start_time)
print("Fold %d, %d epochs, %d sec"%(fold,EPOCHS,elapsed_time))
# print(history.history.keys()) # all these keys will be shown in figure
pd.DataFrame(history.history).plot(figsize=(8,5))
plt.grid(True)
plt.gca().set_ylim(0,1) # any losses > 1 will be off the scale
plt.show()
do_cross_validation(X,y)
from keras.models import load_model
X,y = prepare_inputs_len_x_alphabet(pc_test,nc_test,ALPHABET)
best_model=load_model(MODELPATH)
scores = best_model.evaluate(X, y, verbose=0)
print("The best model parameters were saved during cross-validation.")
print("Best was defined as maximum validation accuracy at end of any epoch.")
print("Now re-load the best model and test it on previously unseen data.")
print("Test on",len(pc_test),"PC seqs")
print("Test on",len(nc_test),"NC seqs")
print("%s: %.2f%%" % (best_model.metrics_names[1], scores[1]*100))
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
ns_probs = [0 for _ in range(len(y))]
bm_probs = best_model.predict(X)
ns_auc = roc_auc_score(y, ns_probs)
bm_auc = roc_auc_score(y, bm_probs)
ns_fpr, ns_tpr, _ = roc_curve(y, ns_probs)
bm_fpr, bm_tpr, _ = roc_curve(y, bm_probs)
plt.plot(ns_fpr, ns_tpr, linestyle='--', label='Guess, auc=%.4f'%ns_auc)
plt.plot(bm_fpr, bm_tpr, marker='.', label='Model, auc=%.4f'%bm_auc)
plt.title('ROC')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend()
plt.show()
print("%s: %.2f%%" %('AUC',bm_auc*100.0))
t = time.time()
time.strftime('%Y-%m-%d %H:%M:%S %Z', time.localtime(t))
```
# Data Scraping:
> ### An Introduction to Requests, Beautifulsoup, and Xpath
***
王成军 (Wang Chengjun)
wangchengjun@nju.edu.cn
计算传播网 http://computational-communication.com
# How Crawlers Work
http://www.cnblogs.com/zhaof/p/6898138.html
# Problems a Crawler Must Solve
- Parsing pages
- Retrieving source data hidden behind JavaScript
- Paginating automatically
- Logging in automatically
- Connecting to API endpoints
- For most data-scraping tasks, requests combined with beautifulsoup is enough.
- This is especially true for sites whose URLs change in a regular pattern as you page through them: you only need to handle the patterned URLs.
- A simple example is scraping posts about a given keyword from the Tianya forum.
- On Tianya, the first page of posts about smog (雾霾) is:
http://bbs.tianya.cn/list.jsp?item=free&nextid=0&order=8&k=雾霾
- and the second page is:
http://bbs.tianya.cn/list.jsp?item=free&nextid=1&order=8&k=雾霾
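Since `nextid` is the only part of the URL that changes from page to page, the page URLs can be generated in a simple loop (a sketch):

```python
# nextid is the only query parameter that changes between pages
base = "http://bbs.tianya.cn/list.jsp?item=free&nextid={}&order=8&k=雾霾"
urls = [base.format(page) for page in range(3)]
print(urls[1])  # http://bbs.tianya.cn/list.jsp?item=free&nextid=1&order=8&k=雾霾
```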
# The First Crawler
Beautifulsoup Quick Start
http://www.crummy.com/software/BeautifulSoup/bs4/doc/
http://computational-class.github.io/bigdata/data/test.html
```
import requests
from bs4 import BeautifulSoup
help(requests.get)
url = 'http://computational-class.github.io/bigdata/data/test.html'
content = requests.get(url)
help(content)
print(content.text)
content.encoding
```
# Beautiful Soup
> Beautiful Soup is a Python library designed for quick turnaround projects like screen-scraping. Three features make it powerful:
- Beautiful Soup provides a few simple methods. It doesn't take much code to write an application
- Beautiful Soup automatically converts incoming documents to Unicode and outgoing documents to UTF-8. Then you just have to specify the original encoding.
- Beautiful Soup sits on top of popular Python parsers like `lxml` and `html5lib`.
# Install beautifulsoup4
### open your terminal/cmd
> $ pip install beautifulsoup4
# html.parser
Beautiful Soup supports the html.parser included in Python’s standard library
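A minimal illustration of that built-in parser, with no third-party dependency (a sketch; Beautiful Soup wraps this machinery for you):

```python
from html.parser import HTMLParser

class TitleExtractor(HTMLParser):
    """Collect the text inside the <title> element."""
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""
    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.in_title = True
    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False
    def handle_data(self, data):
        if self.in_title:
            self.title += data

parser = TitleExtractor()
parser.feed("<html><head><title>The Dormouse's story</title></head></html>")
print(parser.title)  # The Dormouse's story
```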
# lxml
but it also supports a number of third-party Python parsers. One is the lxml parser `lxml`. Depending on your setup, you might install lxml with one of these commands:
> $ apt-get install python-lxml
> $ easy_install lxml
> $ pip install lxml
# html5lib
Another alternative is the pure-Python html5lib parser `html5lib`, which parses HTML the way a web browser does. Depending on your setup, you might install html5lib with one of these commands:
> $ apt-get install python-html5lib
> $ easy_install html5lib
> $ pip install html5lib
```
url = 'http://computational-class.github.io/bigdata/data/test.html'
content = requests.get(url)
content = content.text
soup = BeautifulSoup(content, 'html.parser')
soup
print(soup.prettify())
```
- html
- head
- title
- body
- p (class = 'title', 'story' )
- a (class = 'sister')
- href/id
# The Select Method
- Tag names take no prefix
- Class names are prefixed with a dot
- id names are prefixed with #
We can exploit these CSS-selector conventions with the soup.select() method, which filters elements and returns a list.
## Building a Selector in Three Steps
- Inspect
- Copy
- Copy Selector
- Select the title `The Dormouse's story` with the mouse, right-click, and choose Inspect
- Hover over the highlighted source code
- Right-click, then Copy --> Copy Selector
`body > p.title > b`
```
soup.select('body > p.title > b')[0].text
```
### The Select Method: Searching by Tag Name
```
soup.select('title')
soup.select('a')
soup.select('b')
```
### The Select Method: Searching by Class Name
```
soup.select('.title')
soup.select('.sister')
soup.select('.story')
```
### The Select Method: Searching by id
```
soup.select('#link1')
soup.select('#link1')[0]['href']
```
### The Select Method: Combined Searches
Tag names, class names, and id names can be combined.
- For example, find the content with id equal to link1 inside a p tag
```
soup.select('p #link1')
```
### The Select Method: Attribute and Child Searches
Further filters can be added to a selector:
- Attribute filters are written in square brackets, and direct child elements are joined with `>`
- An attribute and its tag belong to the same node, so no space may appear between them.
```
soup.select("head > title")
soup.select("body > p")
```
# The find_all Method
```
soup('p')
soup.find_all('p')
[i.text for i in soup('p')]
for i in soup('p'):
print(i.text)
for tag in soup.find_all(True):
print(tag.name)
soup('head') # or soup.head
soup('body') # or soup.body
soup('title') # or soup.title
soup('p')
soup.p
soup.title.name
soup.title.string
soup.title.text
# the .text attribute is recommended
soup.title.parent.name
soup.p
soup.p['class']
soup.find_all('p', {'class', 'title'})
soup.find_all('p', class_= 'title')
soup.find_all('p', {'class', 'story'})
soup.find_all('p', {'class', 'story'})[0].find_all('a')
soup.a
soup('a')
soup.find(id="link3")
soup.find_all('a')
soup.find_all('a', {'class', 'sister'}) # compare with soup.find_all('a')
soup.find_all('a', {'class', 'sister'})[0]
soup.find_all('a', {'class', 'sister'})[0].text
soup.find_all('a', {'class', 'sister'})[0]['href']
soup.find_all('a', {'class', 'sister'})[0]['id']
soup.find_all(["a", "b"])
print(soup.get_text())
```
***
***
# Data Scraping:
> # Scraping the Content of WeChat Public Account Articles
***
***
Wang Chengjun
wangchengjun@nju.edu.cn
Computational Communication http://computational-communication.com
```
from IPython.display import display_html, HTML
HTML(url = 'http://mp.weixin.qq.com/s?__biz=MzA3MjQ5MTE3OA==&mid=206241627&idx=1&sn=471e59c6cf7c8dae452245dbea22c8f3&3rd=MzA3MDU4NTYzMw==&scene=6#rd')
# the webpage we would like to crawl
```
# Viewing the Page Source (Inspect)
```
url = "http://mp.weixin.qq.com/s?__biz=MzA3MjQ5MTE3OA==&mid=206241627&idx=1&sn=471e59c6cf7c8dae452245dbea22c8f3&3rd=MzA3MDU4NTYzMw==&scene=6#rd"
content = requests.get(url).text # fetch the page's HTML text
soup = BeautifulSoup(content, 'html.parser')
title = soup.select("#activity-name") # #activity-name
title[0].text.strip()
soup.find('h2', {'class': 'rich_media_title'}).text.strip()
print(soup.find('div', {'class': 'rich_media_meta_list'}) )
soup.select('#publish_time')
article = soup.find('div', {'class': 'rich_media_content'}).text
print(article)
rmml = soup.find('div', {'class': 'rich_media_meta_list'})
#date = rmml.find(id = 'post-date').text
rmc = soup.find('div', {'class': 'rich_media_content'})
content = rmc.get_text()
print(title[0].text.strip())
#print(date)
print(content)
```
# wechatsogou
> pip install wechatsogou --upgrade
https://github.com/Chyroc/WechatSogou
```
!pip install wechatsogou --upgrade
import wechatsogou
# Configurable parameters
# Direct connection
ws_api = wechatsogou.WechatSogouAPI()
# Number of retries when the captcha is entered incorrectly; default is 1
ws_api = wechatsogou.WechatSogouAPI(captcha_break_time=3)
# All parameters of the requests library can be used here
# e.g. configure proxies; the list must contain at least one HTTPS proxy, and the proxies must be reachable
ws_api = wechatsogou.WechatSogouAPI(proxies={
    "http": "127.0.0.1:8889",
    "https": "127.0.0.1:8889",
})
# e.g. set a timeout
ws_api = wechatsogou.WechatSogouAPI(timeout=0.1)
ws_api =wechatsogou.WechatSogouAPI()
ws_api.get_gzh_info('南航青年志愿者')
articles = ws_api.search_article('南京航空航天大学')
for i in articles:
    print(i)
```
# requests + XPath: the Douban Movies Example
XPath (XML Path Language) is a language for addressing parts of an XML document.
XPath is based on XML's tree structure and provides the ability to locate nodes in that tree. It was originally proposed as a general-purpose syntax model between XPointer and XSL, but was quickly adopted by developers as a small query language.
Getting an element's XPath and extracting its text:
The "element's XPath" here has to be obtained manually:
- locate the target element
- on the page: right-click > Inspect
- Copy XPath
- append '/text()' to the XPath
Reference: https://mp.weixin.qq.com/s/zx3_eflBCrrfOqFEWjAUJw
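Before hitting the network, the XPath idea can be tried offline: the standard library's `xml.etree.ElementTree` supports a useful subset of XPath. The markup below is a made-up, well-formed stand-in for a movie page, not Douban's real structure:

```python
import xml.etree.ElementTree as ET

# Made-up, well-formed stand-in for a movie page (NOT Douban's real markup)
html = """
<html><body>
  <div id="content"><h1><span>The Shawshank Redemption</span></h1></div>
  <div id="info"><span>Director: Frank Darabont</span></div>
</body></html>
"""

root = ET.fromstring(html)
# ElementTree supports a subset of XPath, including attribute predicates
title = root.find('.//div[@id="content"]/h1/span').text
director = root.find('.//div[@id="info"]/span').text
print(title)     # The Shawshank Redemption
print(director)  # Director: Frank Darabont
```

lxml's `etree.HTML`, used below, accepts the full XPath syntax (including `/text()`), but the navigation idea is the same.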
```
import requests
from lxml import etree
url = 'https://movie.douban.com/subject/26611804/'
data = requests.get(url).text
s = etree.HTML(data)
```
If the movie title's XPath is xpath_info, then the title is obtained as:
`title = s.xpath('xpath_info/text()')`
where xpath_info is:
`//*[@id="content"]/h1/span[1]`
```
title = s.xpath('//*[@id="content"]/h1/span[1]/text()')[0]
director = s.xpath('//*[@id="info"]/span[1]/span[2]/a/text()')
actors = s.xpath('//*[@id="info"]/span[3]/span[2]/a/text()')
type1 = s.xpath('//*[@id="info"]/span[5]/text()')
type2 = s.xpath('//*[@id="info"]/span[6]/text()')
type3 = s.xpath('//*[@id="info"]/span[7]/text()')
time = s.xpath('//*[@id="info"]/span[11]/text()')
length = s.xpath('//*[@id="info"]/span[13]/text()')
score = s.xpath('//*[@id="interest_sectl"]/div[1]/div[2]/strong/text()')[0]
print(title, director, actors, type1, type2, type3, time, length, score)
```
## Douban API
https://developers.douban.com/wiki/?title=guide
https://github.com/computational-class/douban-api-docs
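The API returns plain JSON; parsing it can be sketched offline with the stdlib `json` module. The payload below is synthetic and only mimics the keys accessed in the next cell, not real API output:

```python
import json

# Synthetic payload: only the keys accessed in the next cell; all values are made up
payload = '''{
  "rating": {"average": 9.0, "max": 10},
  "alt": "https://movie.douban.com/subject/26611804/",
  "casts": [{"name": "Tim Robbins"}],
  "directors": [{"name": "Frank Darabont"}],
  "genres": ["Drama"]
}'''

jsonm = json.loads(payload)
print(sorted(jsonm.keys()))           # ['alt', 'casts', 'directors', 'genres', 'rating']
print(jsonm['rating']['average'])     # 9.0
print(jsonm['directors'][0]['name'])  # Frank Darabont
```

`requests.get(url).json()` in the cell below performs exactly this `json.loads` step on the response body.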
```
import requests
# https://movie.douban.com/subject/26611804/
url = 'https://api.douban.com/v2/movie/subject/26611804?apikey=0b2bdeda43b5688921839c8ecb20399b&start=0&count=20&client=&udid='
jsonm = requests.get(url).json()
jsonm.keys()
#jsonm.values()
jsonm['rating']
jsonm['alt']
jsonm['casts'][0]
jsonm['directors']
jsonm['genres']
```
## Assignment: Scraping the Douban Movie Top 250
```
import requests
from bs4 import BeautifulSoup
from lxml import etree
url0 = 'https://movie.douban.com/top250?start=0&filter='
data = requests.get(url0).text
s = etree.HTML(data)
s.xpath('//*[@id="content"]/div/div[1]/ol/li[1]/div/div[2]/div[1]/a/span[1]/text()')[0]
s.xpath('//*[@id="content"]/div/div[1]/ol/li[2]/div/div[2]/div[1]/a/span[1]/text()')[0]
s.xpath('//*[@id="content"]/div/div[1]/ol/li[3]/div/div[2]/div[1]/a/span[1]/text()')[0]
import requests
from bs4 import BeautifulSoup
url0 = 'https://movie.douban.com/top250?start=0&filter='
data = requests.get(url0).text
soup = BeautifulSoup(data, 'lxml')
movies = soup.find_all('div', {'class': 'info'})
len(movies)
movies[0].a['href']
movies[0].find('span', {'class': 'title'}).text
movies[0].find('div', {'class': 'star'})
movies[0].find('span', {'class': 'rating_num'}).text
people_num = movies[0].find('div', {'class': 'star'}).find_all('span')[-1]
people_num.text.split('人评价')[0]
for i in movies:
    url = i.a['href']
    title = i.find('span', {'class': 'title'}).text
    des = i.find('div', {'class': 'star'})
    rating = des.find('span', {'class': 'rating_num'}).text
    rating_num = des.find_all('span')[-1].text.split('人评价')[0]
    print(url, title, rating, rating_num)
for i in range(0, 250, 25):
    print('https://movie.douban.com/top250?start=%d&filter='% i)
import requests
from bs4 import BeautifulSoup
dat = []
for j in range(0, 250, 25):
    urli = 'https://movie.douban.com/top250?start=%d&filter='% j
    data = requests.get(urli).text
    soup = BeautifulSoup(data, 'lxml')
    movies = soup.find_all('div', {'class': 'info'})
    for i in movies:
        url = i.a['href']
        title = i.find('span', {'class': 'title'}).text
        des = i.find('div', {'class': 'star'})
        rating = des.find('span', {'class': 'rating_num'}).text
        rating_num = des.find_all('span')[-1].text.split('人评价')[0]
        listi = [url, title, rating, rating_num]
        dat.append(listi)
import pandas as pd
df = pd.DataFrame(dat, columns = ['url', 'title', 'rating', 'rating_num'])
df['rating'] = df.rating.astype(float)
df['rating_num'] = df.rating_num.astype(int)
df.head()
%matplotlib inline
import matplotlib.pyplot as plt
plt.hist(df.rating_num)
plt.show()
plt.hist(df.rating)
plt.show()
# viz
fig = plt.figure(figsize=(16, 16),facecolor='white')
plt.plot(df.rating_num, df.rating, 'bo')
for i in df.index:
    plt.text(df.rating_num[i], df.rating[i], df.title[i],
             fontsize = df.rating[i],
             color = 'red', rotation = 45)
plt.show()
df[df.rating > 9.4]
alist = []
for i in df.index:
    alist.append( [df.rating_num[i], df.rating[i], df.title[i] ])
blist =[[df.rating_num[i], df.rating[i], df.title[i] ] for i in df.index]
alist
from IPython.display import display_html, HTML
HTML('<iframe src=http://nbviewer.jupyter.org/github/computational-class/bigdata/blob/gh-pages/vis/douban250bubble.html \
width=1000 height=500></iframe>')
```
# Assignments:
- Scrape the content of the latest issue of the Fudan New Media WeChat public account
# Simulating Douban Login with requests.post (Including Captcha Handling)
https://blog.csdn.net/zhuzuwei/article/details/80875538
# Scraping Ten Years of Jiangsu CPPCC Proposals
```
# headers = {
# 'Accept': 'application/json, text/javascript, */*; q=0.01',
# 'Accept-Encoding': 'gzip, deflate',
# 'Accept-Language': 'zh-TW,zh;q=0.9,en-US;q=0.8,en;q=0.7,zh-CN;q=0.6',
# 'Cache-Control': 'no-cache',
# 'Connection': 'keep-alive',
# 'Cookie': 'JSESSIONID=992CB756ADE61B87409672DC808FDD92',
# 'DNT': '1',
# 'Host': 'www.jszx.gov.cn',
# 'Pragma': 'no-cache',
# 'Referer': 'http://www.jszx.gov.cn/zxta/2019ta/',
# 'Upgrade-Insecure-Requests': '1',
# 'User-Agent': 'Mozilla/5.0 (iPad; CPU OS 11_0 like Mac OS X) AppleWebKit/604.1.34 (KHTML, like Gecko) Version/11.0 Mobile/15A5341f Safari/604.1'
# }
```
Open http://www.jszx.gov.cn/zxta/2019ta/
- Click "next page": the URL does not change!
> So the data updates are pushed via JavaScript
- Analyzing the content of the Network tab reveals proposalList.jsp
- Inspecting its headers reveals the form_data
<img src = './img/form_data.png'>
http://www.jszx.gov.cn/zxta/2019ta/
```
import requests
from bs4 import BeautifulSoup
form_data = {'year':2019,
'pagenum':1,
'pagesize':20
}
url = 'http://www.jszx.gov.cn/wcm/zxweb/proposalList.jsp'
content = requests.get(url, form_data)
content.encoding = 'utf-8'
js = content.json()
js['data']['totalcount']
dat = js['data']['list']
pagenum = js['data']['pagecount']
```
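A note on the request above: `requests.get(url, form_data)` passes `form_data` as the `params` argument, so it is URL-encoded into the query string rather than sent as a POST body. The encoding can be previewed offline with the standard library:

```python
from urllib.parse import urlencode

url = 'http://www.jszx.gov.cn/wcm/zxweb/proposalList.jsp'
form_data = {'year': 2019, 'pagenum': 1, 'pagesize': 20}

# requests.get(url, form_data) sends form_data as GET query parameters,
# i.e. it appends this urlencoded string to the URL
full_url = url + '?' + urlencode(form_data)
print(full_url)
# http://www.jszx.gov.cn/wcm/zxweb/proposalList.jsp?year=2019&pagenum=1&pagesize=20
```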
### Scraping the Links to All Proposals
```
for i in range(2, pagenum+1):
    print(i)
    form_data['pagenum'] = i
    content = requests.get(url, form_data)
    content.encoding = 'utf-8'
    js = content.json()
    for j in js['data']['list']:
        dat.append(j)
len(dat)
dat[0]
import pandas as pd
df = pd.DataFrame(dat)
df.head()
df.groupby('type').size()
```
### Scraping the Proposal Contents
http://www.jszx.gov.cn/zxta/2019ta/index_61.html?pkid=18b1b347f9e34badb8934c2acec80e9e
http://www.jszx.gov.cn/wcm/zxweb/proposalInfo.jsp?pkid=18b1b347f9e34badb8934c2acec80e9e
```
url_base = 'http://www.jszx.gov.cn/wcm/zxweb/proposalInfo.jsp?pkid='
urls = [url_base + i for i in df['pkid']]
import sys
def flushPrint(www):
    sys.stdout.write('\r')
    sys.stdout.write('%s' % www)
    sys.stdout.flush()
text = []
for k, i in enumerate(urls):
    flushPrint(k)
    content = requests.get(i)
    content.encoding = 'utf-8'
    js = content.json()
    js = js['data']['binfo']['_content']
    soup = BeautifulSoup(js, 'html.parser')
    text.append(soup.text)
len(text)
df['content'] = text
df.head()
df.to_csv('../data/jszx2019.csv', index = False)
dd = pd.read_csv('../data/jszx2019.csv')
dd.head()
```
***
```
%matplotlib inline
import re
import numpy as np
import pandas as pd
from IPython.display import display, HTML
from pathlib import Path
from matplotlib import pyplot as plt
from datetime import datetime
def extract(string, key, dtype):
    if dtype is bool:
        return re.search(' {}=((True)|(False)) '.format(key), string).group(1) == 'True'
    if dtype is float:
        return float(re.search(r' {}=(\d+(\.\d+)?(?:[eE][+\-]?\d+)?)'.format(key), string).group(1))
    if dtype is int:
        return int(re.search(r' {}=(\d+) '.format(key), string).group(1))
def get_result_metrics(filepath):
    # get lines with results
    lines = []
    marker = 'Got result: '
    with open(filepath) as f:
        for line in f.readlines():
            if marker in line:
                lines.append(line.split(marker, 1)[1])
    # extract metrics
    runs = []
    for run in lines:
        if not extract(run, 'success', bool):
            continue
        data = {
            'time_created': extract(run, 'time_created', float),
            'time_input_received': extract(run, 'time_input_received', float),
            'time_compute_started': extract(run, 'time_compute_started', float),
            'time_result_sent': extract(run, 'time_result_sent', float),
            'time_result_received': extract(run, 'time_result_received', float),
            'time_running': extract(run, 'time_running', float),
            'time_serialize_inputs': extract(run, 'time_serialize_inputs', float),
            'time_deserialize_inputs': extract(run, 'time_deserialize_inputs', float),
            'time_serialize_results': extract(run, 'time_serialize_results', float),
            'time_deserialize_results': extract(run, 'time_deserialize_results', float),
            'time_async_resolve_proxies': extract(run, 'time_async_resolve_proxies', float),
        }
        data['client_to_method_server'] = data['time_input_received'] - (
            data['time_created'] + data['time_serialize_inputs'])
        data['worker_to_method_server'] = data['time_result_sent'] - (
            data['time_compute_started'] + data['time_running'] +
            data['time_deserialize_inputs'] + data['time_serialize_results'] +
            data['time_async_resolve_proxies'])
        data['method_server_to_client'] = data['time_result_received'] - (
            data['time_result_sent'] + data['time_deserialize_results'])
        data['time_serialization'] = sum(data[key] for key in data if 'serialize' in key)
        data['time_created_to_result_received'] = data['time_result_received'] - data['time_created']
        runs.append(data)
    return pd.DataFrame(runs)
def aggregate_runs(run_paths):
    data = []
    for path in run_paths:
        results = pd.Series({'path': path})
        with open(path) as f:
            first_line = f.readline()
        timestamp = first_line.split(' - ')[0]
        value_server = True if re.search('use_value_server=((True)|(False))', first_line).group(1) == 'True' else False
        results['value_server'] = value_server
        results['reuse_data'] = True if re.search('reuse_data=((True)|(False))', first_line).group(1) == 'True' else False
        results['task_count'] = int(re.search(r'task_count=(\d+),', first_line).group(1))
        results['task_input_size'] = float(re.search(r'task_input_size=(\d+(\.\d+)?(?:[eE][+\-]?\d+)?),', first_line).group(1))
        results['task_interval'] = float(re.search(r'task_interval=(\d+(\.\d+)?(?:[eE][+\-]?\d+)?),', first_line).group(1))
        results['task_output_size'] = float(re.search(r'task_output_size=(\d+(\.\d+)?(?:[eE][+\-]?\d+)?),', first_line).group(1))
        results['time_start'] = datetime.strptime(timestamp, "%Y-%m-%d %H:%M:%S,%f").timestamp()
        function_results = get_result_metrics(path)
        results['n_tasks'] = len(function_results.index)
        results['total'] = None
        results = pd.concat([results, function_results.median()])
        data.append(results)
    return pd.concat(data, axis=1).T
cols = ['client_to_method_server', 'worker_to_method_server', 'method_server_to_client', 'time_serialization', 'time_async_resolve_proxies', 'time_running', 'total']
ind = np.arange(len(cols))
width = 0.35
```
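To see how `extract` parses a result line, here is a standalone check against a made-up log fragment. The line below is synthetic and only mimics the `key=value` layout parsed above; it is not real Colmena output:

```python
import re

def extract(string, key, dtype):
    # Same parsing logic as the notebook's extract(), restated for a standalone demo
    if dtype is bool:
        return re.search(' {}=((True)|(False)) '.format(key), string).group(1) == 'True'
    if dtype is float:
        return float(re.search(r' {}=(\d+(\.\d+)?(?:[eE][+\-]?\d+)?)'.format(key), string).group(1))
    if dtype is int:
        return int(re.search(r' {}=(\d+) '.format(key), string).group(1))

# Made-up result line, mimicking only the key=value layout parsed above
line = 'Got result: Result( success=True time_running=1.5e-2 task_count=50 )'
print(extract(line, 'success', bool))        # True
print(extract(line, 'time_running', float))  # 0.015
print(extract(line, 'task_count', int))      # 50
```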
## Colmena Integration (auto value server usage)
```
rundir = 'runs/50KBx50_1s'
run_paths = list(map(str, Path(rundir).rglob('*runtime.log')))
results = aggregate_runs(run_paths)
results['total'] = results[cols].sum(axis=1)
display(results)
no_value_server_unique = results.loc[results['value_server'] == False].loc[results['reuse_data'] == False][cols]
value_server_unique = results.loc[results['value_server'] == True].loc[results['reuse_data'] == False][cols]
no_value_server_reuse = results.loc[results['value_server'] == False].loc[results['reuse_data'] == True][cols]
value_server_reuse = results.loc[results['value_server'] == True].loc[results['reuse_data'] == True][cols]
width = 0.35
fig, ax = plt.subplots(1, 2, figsize=(8,4), dpi= 100)
ax[0].bar(ind, no_value_server_unique.to_numpy()[0], width, label='Default')
ax[0].bar(ind+width, value_server_unique.to_numpy()[0], width, label='Value Server')
ax[1].bar(ind, no_value_server_reuse.to_numpy()[0], width, label='Default')
ax[1].bar(ind+width, value_server_reuse.to_numpy()[0], width, label='Value Server')
fig.suptitle('1 Node, {} tasks, {}MB input, {}s interval'.format(results['n_tasks'].iloc[0], results['task_input_size'].iloc[0], results['task_interval'].iloc[0]))
plt.setp(ax, ylabel='Median Time per Task (s)', xticks=ind + width/2, xticklabels=cols)
plt.setp(ax[0].get_xticklabels(), rotation=35, ha='right')
plt.setp(ax[1].get_xticklabels(), rotation=35, ha='right')
ax[0].set_title('Unique task input')
ax[1].set_title('Constant task input')
ax[0].legend(loc='best')
ax[1].legend(loc='best')
fig.tight_layout()
plt.show()
rundir = 'runs/1MBx50_1s'
run_paths = list(map(str, Path(rundir).rglob('*runtime.log')))
results = aggregate_runs(run_paths)
results['total'] = results[cols].sum(axis=1)
display(results)
no_value_server_unique = results.loc[results['value_server'] == False].loc[results['reuse_data'] == False][cols]
value_server_unique = results.loc[results['value_server'] == True].loc[results['reuse_data'] == False][cols]
no_value_server_reuse = results.loc[results['value_server'] == False].loc[results['reuse_data'] == True][cols]
value_server_reuse = results.loc[results['value_server'] == True].loc[results['reuse_data'] == True][cols]
width = 0.35
fig, ax = plt.subplots(1, 2, figsize=(8,4), dpi= 100)
ax[0].bar(ind, no_value_server_unique.to_numpy()[0], width, label='Default')
ax[0].bar(ind+width, value_server_unique.to_numpy()[0], width, label='Value Server')
ax[1].bar(ind, no_value_server_reuse.to_numpy()[0], width, label='Default')
ax[1].bar(ind+width, value_server_reuse.to_numpy()[0], width, label='Value Server')
fig.suptitle('1 Node, {} tasks, {}MB input, {}s interval'.format(results['n_tasks'].iloc[0], results['task_input_size'].iloc[0], results['task_interval'].iloc[0]))
plt.setp(ax, ylabel='Median Time per Task (s)', xticks=ind + width/2, xticklabels=cols)
plt.setp(ax[0].get_xticklabels(), rotation=35, ha='right')
plt.setp(ax[1].get_xticklabels(), rotation=35, ha='right')
ax[0].set_title('Unique task input')
ax[1].set_title('Same task input')
ax[0].legend(loc='best')
ax[1].legend(loc='best')
fig.tight_layout()
plt.show()
rundir = 'runs/1MBin1MBoutx50_5s'
run_paths = list(map(str, Path(rundir).rglob('*runtime.log')))
results = aggregate_runs(run_paths)
results['total'] = results[cols].sum(axis=1)
display(results)
no_value_server_unique = results.loc[results['value_server'] == False].loc[results['reuse_data'] == False][cols]
value_server_unique = results.loc[results['value_server'] == True].loc[results['reuse_data'] == False][cols]
#no_value_server_reuse = results.loc[results['value_server'] == False].loc[results['reuse_data'] == True][cols]
#value_server_reuse = results.loc[results['value_server'] == True].loc[results['reuse_data'] == True][cols]
width = 0.35
fig, ax = plt.subplots(1, 1, figsize=(4,4), dpi= 100)
ax.bar(ind, no_value_server_unique.to_numpy()[0], width, label='Default')
ax.bar(ind+width, value_server_unique.to_numpy()[0], width, label='Value Server')
fig.suptitle('1 Node, {} tasks, {}MB input, {}MB output, {}s interval'.format(results['n_tasks'].iloc[0], results['task_input_size'].iloc[0], results['task_output_size'].iloc[0], results['task_interval'].iloc[0]))
plt.setp(ax, ylabel='Median Time per Task (s)', xticks=ind + width/2, xticklabels=cols)
plt.setp(ax.get_xticklabels(), rotation=35, ha='right')
ax.set_title('Unique task input')
ax.legend(loc='best')
fig.tight_layout()
plt.show()
rundir = 'runs/1MBx200_1s_4node'
run_paths = list(map(str, Path(rundir).rglob('*runtime.log')))
results = aggregate_runs(run_paths)
results['total'] = results[cols].sum(axis=1)
display(results)
no_value_server_unique = results.loc[results['value_server'] == False].loc[results['reuse_data'] == False][cols]
value_server_unique = results.loc[results['value_server'] == True].loc[results['reuse_data'] == False][cols]
no_value_server_reuse = results.loc[results['value_server'] == False].loc[results['reuse_data'] == True][cols]
value_server_reuse = results.loc[results['value_server'] == True].loc[results['reuse_data'] == True][cols]
width = 0.35
fig, ax = plt.subplots(1, 2, figsize=(8,4), dpi= 100)
ax[0].bar(ind, no_value_server_unique.to_numpy()[0], width, label='Default')
ax[0].bar(ind+width, value_server_unique.to_numpy()[0], width, label='Value Server')
ax[1].bar(ind, no_value_server_reuse.to_numpy()[0], width, label='Default')
ax[1].bar(ind+width, value_server_reuse.to_numpy()[0], width, label='Value Server')
fig.suptitle('4 Node, {} tasks, {}MB input, {}s interval'.format(results['n_tasks'].iloc[0], results['task_input_size'].iloc[0], results['task_interval'].iloc[0]))
plt.setp(ax, ylabel='Median Time per Task (s)', xticks=ind + width/2, xticklabels=cols)
plt.setp(ax[0].get_xticklabels(), rotation=35, ha='right')
plt.setp(ax[1].get_xticklabels(), rotation=35, ha='right')
ax[0].set_title('Unique task input')
ax[1].set_title('Constant task input')
ax[0].legend(loc='best')
ax[1].legend(loc='best')
fig.tight_layout()
plt.show()
rundir = 'runs/variable_input'
run_paths = list(map(str, Path(rundir).rglob('*runtime.log')))
results = aggregate_runs(run_paths)
results['total'] = results[cols].sum(axis=1)
default = results.loc[results['value_server'] == False][['task_input_size', 'total']].to_numpy()
value_server = results.loc[results['value_server'] == True][['task_input_size', 'total']].to_numpy()
default.sort(axis=0)
value_server.sort(axis=0)
results = np.append(default, np.reshape(value_server[:, 1], (-1, 1)), axis=1)
display(results)
results[:, 2] = 100 * (results[:, 1] - results[:, 2]) / results[:, 2]
display(results)
fig, ax = plt.subplots(1, 1, figsize=(4,4), dpi= 100)
ax.scatter(results[:, 0], results[:, 2])
ax.hlines(0, 0, 20, color='black')
plt.setp(ax, ylabel='Median Time Improvement (%)')
ax.set_xscale('log')
ax.set_title('Value Server Overhead vs Default (no-op tasks)')
ax.set_xlabel('Input Size (MB)')
fig.tight_layout()
plt.show()
```
***
# Visualizing hypergraphs
As with pairwise networks, visualizing hypergraphs is a hard task, and no algorithm can work exhaustively for any given input structure. Here we show how to visualize some toy structures using the visualization function contained in the ```drawing``` module, which relies heavily on [networkx](https://networkx.org/documentation/stable/reference/drawing.html) and [matplotlib](https://matplotlib.org/).
```
import matplotlib.pyplot as plt
import numpy as np
import random
import xgi
```
Let us first create a small toy hypergraph containing edges of different sizes.
```
H = xgi.Hypergraph()
H.add_edges_from([[1,2,3],[3,4,5],[3,6],[6,7,8,9],[1,4,10,11,12],[1,4]])
```
The first step in drawing a hypergraph is choosing a layout for the nodes.
At the moment the four available layouts are:
* ```random_layout```: to position nodes uniformly at random in the unit square ([exactly as networkx](https://networkx.org/documentation/stable/reference/generated/networkx.drawing.layout.random_layout.html)).
* ```pairwise_spring_layout```: to position the nodes using the Fruchterman-Reingold force-directed algorithm on the projected graph. In this case the hypergraph is first projected into a graph (1-skeleton) using the ```xgi.convert_to_graph(H)``` function and then networkx's [spring_layout](https://networkx.org/documentation/stable/reference/generated/networkx.drawing.layout.spring_layout.html) is applied.
* ```barycenter_spring_layout```: to position the nodes using the Fruchterman-Reingold force-directed algorithm on an augmented version of the graph projection of the hypergraph, where _phantom nodes_ (that we call barycenters) are created for each edge of order $d>1$ (composed of more than two nodes). Weights are then assigned to all hyperedges of order 1 (links) and to all connections to phantom nodes within each hyperedge to keep them together. Weights scale with the size of the hyperedges. Finally, the weighted version of networkx's [spring_layout](https://networkx.org/documentation/stable/reference/generated/networkx.drawing.layout.spring_layout.html) is applied.
* ```weighted_barycenter_spring_layout```: same as ```barycenter_spring_layout```, but here the weighted version of the Fruchterman-Reingold force-directed algorithm is used. Weights are assigned to all hyperedges of order 1 (links) and
to all connections to phantom nodes within each hyperedge to keep them together. Weights scale with the order of the group interaction.
Each layout returns a dictionary that maps node IDs to (x, y) coordinates.
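The barycenter augmentation described above can be sketched without any graph library: every hyperedge with more than two nodes is replaced by a star centered on a phantom barycenter node. This is a simplified illustration of the idea, not xgi's actual implementation:

```python
def augment_with_barycenters(hyperedges):
    """Return an edge list where every hyperedge of size > 2 is replaced
    by star edges to a phantom barycenter node."""
    edges = []
    for k, he in enumerate(hyperedges):
        if len(he) == 2:           # order-1 hyperedges stay plain links
            edges.append(tuple(he))
        else:                      # phantom node, one spoke per member
            phantom = 'b{}'.format(k)
            edges.extend((phantom, v) for v in he)
    return edges

hyperedges = [[1, 2, 3], [3, 6], [1, 4]]
print(augment_with_barycenters(hyperedges))
# [('b0', 1), ('b0', 2), ('b0', 3), (3, 6), (1, 4)]
```

A force-directed layout run on this augmented edge list pulls each hyperedge's members toward their shared barycenter; the phantom nodes are then discarded before drawing.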
```
pos = xgi.barycenter_spring_layout(H)
```
We can now pass the ```pos``` dictionary to the ```drawing``` function:
```
xgi.draw(H, pos)
```
**Colors of the hyperedges** are designed to match the hyperedge size. Both sequential and qualitative [colormaps](https://matplotlib.org/stable/tutorials/colors/colormaps.html) can be passed as an argument. Sequential colormaps would simply be discretized according to the sizes of the provided hypergraph:
```
plt.figure(figsize=(10,4))
#Sequential colormap
cmap = plt.cm.Blues
ax = plt.subplot(1,2,1)
xgi.draw(H, pos, cmap=cmap, ax=ax)
#Qualitative colormap
cmap = plt.cm.Set1
ax = plt.subplot(1,2,2)
xgi.draw(H, pos, cmap=cmap, ax=ax)
```
Some other parameters can be tweaked as well:
```
cmap = plt.cm.Reds
edge_lc = 'gray'
edge_lw = 4
node_fc = 'black'
node_ec = 'white'
node_lw = 2
node_size = 0.07
xgi.draw(H, pos, cmap=cmap, edge_lc=edge_lc, edge_lw=edge_lw,
node_fc=node_fc, node_ec=node_ec, node_lw=node_lw, node_size=node_size)
```
# Visualizing simplicial complexes
Simplicial complexes can be visualized using the same functions for node layout and drawing.
### Technical note
By definition, a simplicial complex object contains all sub-simplices. This would make the visualization heavy, since all sub-simplices contained in a maximal simplex would overlap. The automatic solution for this, implemented by default in all the layouts, is to convert the simplicial complex into a hypergraph composed solely of its maximal simplices.
### Visual note
To visually distinguish simplicial complexes from hypergraphs, the ```draw``` function will also show all links contained in each maximal simplex (while omitting simplices of intermediate orders).
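Extracting the maximal simplices mentioned above is straightforward: a simplex is maximal when no other simplex strictly contains it. A plain-Python sketch (not xgi's internal routine):

```python
def maximal_simplices(simplices):
    """Keep only simplices not strictly contained in another simplex."""
    sets = [frozenset(s) for s in simplices]
    return [sorted(s) for s in sets
            if not any(s < other for other in sets)]

simplices = [[3, 4, 5], [3, 4], [4, 5], [3, 6], [6, 7, 8, 9]]
print(maximal_simplices(simplices))
# [[3, 4, 5], [3, 6], [6, 7, 8, 9]]
```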
```
SC = xgi.SimplicialComplex()
SC.add_simplices_from([[3,4,5],[3,6],[6,7,8,9],[1,4,10,11,12],[1,4]])
pos = xgi.pairwise_spring_layout(SC)
xgi.draw(SC, pos)
```
# Example: generative model
We generate and visualize a [Chung-Lu hypergraph](https://doi.org/10.1093/comnet/cnx001).
```
n = 500
k1 = {i : random.randint(10, 30) for i in range(n)}
k2 = {i : sorted(k1.values())[i] for i in range(n)}
H = xgi.chung_lu_hypergraph(k1, k2)
pos = xgi.barycenter_spring_layout(H)
```
Since there are more nodes, we reduce the ```node_size```:
```
plt.figure(figsize=(10,10))
ax = plt.subplot(111)
xgi.draw(H, pos, node_size = 0.01, ax=ax)
```
### Degree
Using its simplest (higher-order) definition, the degree is the number of hyperedges (of any size) incident on a node.
```
centers, heights = xgi.degree_histogram(H)
plt.figure(figsize=(12,4))
ax = plt.subplot(111)
ax.bar(centers, heights)
ax.set_ylabel('Count')
ax.set_xlabel('Degree')
ax.set_xticks(np.arange(1, max(centers)+1, step=1));
```
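With this definition, degrees can be computed directly from the edge list; a sketch independent of xgi:

```python
from collections import Counter

def hypergraph_degrees(hyperedges):
    # Each incidence of a node in any hyperedge counts once toward its degree
    return Counter(v for he in hyperedges for v in he)

degrees = hypergraph_degrees([[1, 2, 3], [3, 4, 5], [3, 6], [1, 4]])
print(degrees[3])  # 3: node 3 appears in three hyperedges
print(degrees[1])  # 2
```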
***
```
import sys
import os
from pathlib import Path
import numpy as np
import pandas as pd
import skimage.io as io
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn
import albumentations as A
from albumentations.pytorch import ToTensorV2
from pytorch_toolbelt.utils import to_numpy, rle_encode
from dotenv import load_dotenv
from src.visualization import plot_two_masks, plot_mask_bbox
from src.postprocessing import remove_overlapping_pixels, postprocess_predictions
from torchvision.ops.boxes import nms
current_dir = Path(".")
load_dotenv()
current_dir.absolute()
# Global configuration
dataset_path = Path(os.environ['dataset_path'])
test_images_dir = dataset_path / "test"
weights_dir = current_dir / "weights" / "maxim_baseline.ckpt"
device = "cpu"
# Local tunable parameters of evaluation
score_threshold = 0.0 # all predictions are counted, even those with low scores
nms_threshold = 0.1 # overlapping instances are dropped; lower values permit less overlap
mask_threshold = 0.5 # binarize masks at this threshold
assert test_images_dir.is_dir(), f"Check test dir path for correctness, was looking at {test_images_dir.absolute()}"
assert weights_dir.is_file(), f"File not found, was looking at {weights_dir.absolute()}"
preprocess_image = A.Compose([
A.Normalize(mean=(0.485,), std=(0.229,)),
ToTensorV2(),
])
model = maskrcnn_resnet50_fpn(progress=False, num_classes=2)
model.load_state_dict(torch.load(weights_dir, map_location=torch.device(device)))
model.to(device)
model.eval()
def predict_masks(image: np.ndarray, model) -> np.ndarray:
    """Predicts masks for the given single image"""
    image = preprocess_image(image=image)['image']
    image = image.to(device)
    with torch.no_grad():
        outputs = model.forward([image])
    output = postprocess_predictions(outputs,
                                     mask_threshold=mask_threshold,
                                     score_threshold=score_threshold,
                                     nms_threshold=None)[0]
    masks, boxes = output['masks'], output['boxes']
    plot_mask_bbox(image.cpu(), boxes, masks, figure_scale=8)
    answer_masks = remove_overlapping_pixels(masks)
    assert np.max(np.sum(answer_masks, axis=0)) <= 1, "Masks overlap"
    return answer_masks
answers = {
"id": [],
"predicted" : [],
}
for image_path in test_images_dir.glob("**/*.png"):
    image = io.imread(str(image_path))
    masks = predict_masks(image, model)
    answers["id"].extend(image_path.stem for i in range(len(masks)))
    answers["predicted"].extend(" ".join(map(str, rle_encode(mask.transpose()))) for mask in masks)
submission = pd.DataFrame(answers)
submission.sample(8)
submission.to_csv("submission.csv", index=False)
```
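The submission format above relies on run-length encoding of the binary masks (via pytorch_toolbelt's `rle_encode`; the column-major flattening is handled by `mask.transpose()`). The Kaggle-style encoding, 1-indexed (start, length) pairs over the flattened mask, can be sketched in plain Python. This is a simplified stand-in, not necessarily byte-identical to pytorch_toolbelt's output:

```python
def rle_encode_simple(flat_mask):
    """Run-length encode a flat binary mask as 1-indexed (start, length) pairs."""
    runs = []
    start = None
    for i, v in enumerate(flat_mask):
        if v and start is None:
            start = i + 1              # Kaggle RLE positions are 1-indexed
        elif not v and start is not None:
            runs.extend([start, i + 1 - start])
            start = None
    if start is not None:              # mask ends inside a run
        runs.extend([start, len(flat_mask) + 1 - start])
    return runs

print(rle_encode_simple([0, 1, 1, 1, 0, 0, 1, 0]))  # [2, 3, 7, 1]
```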
***
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# TensorFlow Addons Optimizers: LazyAdam
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/addons/tutorials/optimizers_lazyadam"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/addons/blob/master/docs/tutorials/optimizers_lazyadam.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/addons/blob/master/docs/tutorials/optimizers_lazyadam.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/addons/docs/tutorials/optimizers_lazyadam.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
## Overview
This notebook will demonstrate how to use the LazyAdam optimizer from the Addons package.
## LazyAdam
> LazyAdam is a variant of the Adam optimizer that handles sparse updates more efficiently.
The original Adam algorithm maintains two moving-average accumulators for
each trainable variable; the accumulators are updated at every step.
This class provides lazier handling of gradient updates for sparse
variables. It only updates moving-average accumulators for sparse variable
indices that appear in the current batch, rather than updating the
accumulators for all indices. Compared with the original Adam optimizer,
it can provide large improvements in model training throughput for some
applications. However, it provides slightly different semantics than the
original Adam algorithm, and may lead to different empirical results.
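The lazy update semantics can be illustrated with a toy accumulator: only rows whose indices appear in the current sparse batch get their moving averages updated, while all other rows are left untouched. This is a conceptual sketch of the idea only, not TFA's implementation:

```python
def lazy_moment_update(m, grads, indices, beta=0.5):
    """Update first-moment accumulators only at the sparse indices seen this step."""
    for i, g in zip(indices, grads):
        m[i] = beta * m[i] + (1 - beta) * g
    return m

m = [0.0] * 5  # one accumulator slot per embedding row
m = lazy_moment_update(m, grads=[1.0, 2.0], indices=[1, 3])
print(m)  # [0.0, 0.5, 0.0, 1.0, 0.0] -- rows not in the batch stay untouched
```

Dense Adam would decay every slot toward the new gradient estimate each step; skipping the untouched rows is what makes the lazy variant cheaper for sparse workloads, at the cost of slightly different semantics.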
## Setup
```
from __future__ import absolute_import, division, print_function, unicode_literals
try:
    %tensorflow_version 2.x
except:
    pass
import tensorflow as tf
!pip install --no-deps tensorflow-addons~=0.7
import tensorflow_addons as tfa
import tensorflow_datasets as tfds
import numpy as np
from matplotlib import pyplot as plt
# Hyperparameters
batch_size=64
epochs=10
```
## Build the Model
```
model = tf.keras.Sequential([
tf.keras.layers.Dense(64, input_shape=(784,), activation='relu', name='dense_1'),
tf.keras.layers.Dense(64, activation='relu', name='dense_2'),
tf.keras.layers.Dense(10, activation='softmax', name='predictions'),
])
```
## Prepare the Data
```
# Load MNIST dataset as NumPy arrays
dataset = {}
num_validation = 10000
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
# Preprocess the data
x_train = x_train.reshape(-1, 784).astype('float32') / 255
x_test = x_test.reshape(-1, 784).astype('float32') / 255
```
## Train and Evaluate
Simply replace a typical Keras optimizer with the new TFA optimizer:
```
# Compile the model
model.compile(
optimizer=tfa.optimizers.LazyAdam(0.001), # Utilize TFA optimizer
loss=tf.keras.losses.SparseCategoricalCrossentropy(),
metrics=['accuracy'])
# Train the network
history = model.fit(
x_train,
y_train,
batch_size=batch_size,
epochs=epochs)
# Evaluate the network
print('Evaluate on test data:')
results = model.evaluate(x_test, y_test, batch_size=128, verbose = 2)
print('Test loss = {0}, Test acc: {1}'.format(results[0], results[1]))
```
***
# 1 Download Raw Data
This module downloads the original Excel files that contain the historical LAM traffic data.
Run all cells of this notebook to write the data for Askisto, Mäntsälä and Kemijärvi to "raw_dataset.csv".
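Before running the downloader, the report-URL scheme can be previewed with the standard library alone. The example below uses one sample month; the path pattern mirrors the `get_file_url` function defined in the code that follows:

```python
import calendar

# Example month; the URL pattern mirrors get_file_url() in the next cell
base_url = "https://aineistot.liikennevirasto.fi/lam/reports/LAM/"
y, m, file_prefix = 17, 3, "168_Askisto"

last_day = calendar.monthrange(2000 + y, m)[1]  # last day of the month
dir_name = "20{:02}{:02}11/".format(y, m)
file_name = file_prefix + "_20{:02}{:02}01_20{:02}{:02}{:02}.xls".format(y, m, y, m, last_day)
print(base_url + dir_name + file_name)
# https://aineistot.liikennevirasto.fi/lam/reports/LAM/20170311/168_Askisto_20170301_20170331.xls
```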
```
import urllib.request
import calendar
import glob
import pandas as pd
import subprocess
from dateutil import parser
def get_file_url(y, m, file_prefix, next_month=False):
base_url = "https://aineistot.liikennevirasto.fi/lam/reports/LAM/"
dir_m = m
dir_y = y
if next_month:
if dir_m < 12:
dir_m += 1
else:
dir_m = 1
dir_y += 1
dir_name = "20{:02}{:02}11/".format(dir_y, dir_m)
first_date = parser.parse("20{:02}-{:02}-01".format(y, m))
d = calendar.monthrange(first_date.year, first_date.month)[1]
file_name = file_prefix + "_20{:02}{:02}01_20{:02}{:02}{:02}.xls".format(y, m, y, m, d)
file_url = base_url + dir_name + file_name
return file_url, file_name
def download_xls_files(file_prefix, to_location="./data/"):
for y in range(10, 18):
for m in range(1, 13):
try:
file_url, file_name = get_file_url(y, m, file_prefix)
urllib.request.urlretrieve(file_url, to_location + file_name)
print("downloaded: " + file_name)
except Exception:
try:
file_url, file_name = get_file_url(y, m, file_prefix, next_month=True)
urllib.request.urlretrieve(file_url, to_location + file_name)
print("downloaded: " + file_name)
except Exception:
print("not found: " + file_url)
def download_all_excel_files(data_dir='./data/'):
file_prefixes = ["168_Askisto", "110_M%c3%84NTS%c3%84L%c3%84", "1403_KEMIJ%c3%84RVI"]
for prefix in file_prefixes:
download_xls_files(file_prefix=prefix, to_location=data_dir)
def convert_xls_to_csv(data_dir="./data/"):
for f in (glob.glob("{}*.xls".format(data_dir))):
# this step requires ssconvert; on macOS, install it with 'brew install gnumeric' first
subprocess.call(["ssconvert", f, f[:-3] + "csv"])
def read_csv_files_to_dataframe(columns, data_dir="data/"):
dfs = []
for f in (glob.glob("{}*.csv".format(data_dir))):
try:
df = pd.read_csv(f)
df.columns = columns
dfs.append(df)
except Exception:
print("Failed: " + f)
df = pd.concat(dfs)
return df
def export_dataframe_to_csv(df, filename="raw_dataset.csv"):
df.to_csv(filename, index=False)
# download all files
download_all_excel_files()
# convert xls files to csv files
convert_xls_to_csv()
# read the csv files to a DataFrame
columns = [
'location_id',
'location_name',
'date',
'direction',
'vehicle_type',
'hour_1',
'hour_2',
'hour_3',
'hour_4',
'hour_5',
'hour_6',
'hour_7',
'hour_8',
'hour_9',
'hour_10',
'hour_11',
'hour_12',
'hour_13',
'hour_14',
'hour_15',
'hour_16',
'hour_17',
'hour_18',
'hour_19',
'hour_20',
'hour_21',
'hour_22',
'hour_23',
'hour_24',
]
data = read_csv_files_to_dataframe(columns).fillna(0)
# Inspect the DataFrame and ensure it looks OK
print(data.info())
data.head()
# export the DataFrame to a csv file
export_dataframe_to_csv(data)
```
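The filename logic in `get_file_url` hinges on `calendar.monthrange` returning the last day of the month; the standalone `month_span` helper below is illustrative (not part of the notebook) and reproduces just that piece:

```python
import calendar

def month_span(y, m):
    """Return (first, last) day-of-month strings for 20YY-MM,
    mirroring how get_file_url builds the file name."""
    last_day = calendar.monthrange(2000 + y, m)[1]
    return "20{:02}{:02}01".format(y, m), "20{:02}{:02}{:02}".format(y, m, last_day)

print(month_span(16, 2))  # ('20160201', '20160229') -- 2016 is a leap year
print(month_span(17, 4))  # ('20170401', '20170430')
```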
| github_jupyter |
# Feature selection
```
from feature_selector import FeatureSelector
import pandas as pd
```
* Import csv files into dataframes
* Make sure to remove the orange category label row in the csv
* Also move the original features in front of the new features
* Also remove the other targets in the set
```
meta = ['Reference DOI','Composition ID']
coercivity = pd.read_csv('SplitDB\\Coercivity7-26.csv').drop(columns=meta)
core_loss = pd.read_csv('SplitDB\\CoreLoss7-26.csv').drop(columns=meta)
curie_temp = pd.read_csv('SplitDB\\CurieTemperature7-26.csv').drop(columns=meta)
electrical_resistivity = pd.read_csv('SplitDB\\ElectricalResistivity7-26.csv').drop(columns=meta)
grain_size = pd.read_csv('SplitDB\\GrainSize7-26.csv').drop(columns=meta)
magnetic_saturation = pd.read_csv('SplitDB\\MagneticSaturation7-26.csv').drop(columns=meta)
magnetostriction = pd.read_csv('SplitDB\\Magnetostriction7-26.csv').drop(columns=meta)
permeability = pd.read_csv('SplitDB\\Permeability7-26.csv').drop(columns=meta)
```
# Dataframes, for reference
* coercivity
* core_loss
* curie_temp
* electrical_resistivity
* grain_size
* magnetic_saturation
* magnetostriction
* permeability
```
# Defining training labels
coercivity_labels = coercivity['Coercivity']
core_loss_labels = core_loss['Core Loss']
curie_temp_labels = curie_temp['Curie Temp']
electrical_resistivity_labels = electrical_resistivity['Electrical Resistivity']
grain_size_labels = grain_size['Grain Diameter']
magnetic_saturation_labels = magnetic_saturation['Magnetic Saturation']
magnetostriction_labels = magnetostriction['Magnetostriction']
permeability_labels = permeability['Permeability']
# Defining training features
coercivity_features = coercivity.drop(columns=['Coercivity'])
core_loss_features = core_loss.drop(columns=['Core Loss'])
curie_temp_features = curie_temp.drop(columns=['Curie Temp'])
electrical_resistivity_features = electrical_resistivity.drop(columns=['Electrical Resistivity'])
grain_size_features = grain_size.drop(columns=['Grain Diameter'])
magnetic_saturation_features = magnetic_saturation.drop(columns=['Magnetic Saturation'])
magnetostriction_features = magnetostriction.drop(columns=['Magnetostriction'])
permeability_features = permeability.drop(columns=['Permeability'])
# Building feature selector objects from labels and features
fs_coercivity = FeatureSelector(data = coercivity_features, labels = coercivity_labels)
fs_core_loss = FeatureSelector(data = core_loss_features, labels = core_loss_labels)
fs_curie_temp = FeatureSelector(data = curie_temp_features, labels = curie_temp_labels)
fs_electrical_resistivity = FeatureSelector(data = electrical_resistivity_features, labels = electrical_resistivity_labels)
fs_grain_size = FeatureSelector(data = grain_size_features, labels = grain_size_labels)
fs_magnetic_saturation = FeatureSelector(data = magnetic_saturation_features, labels = magnetic_saturation_labels)
fs_magnetostriction = FeatureSelector(data = magnetostriction_features, labels = magnetostriction_labels)
fs_permeability = FeatureSelector(data = permeability_features, labels = permeability_labels)
fs_coercivity.identify_all(selection_params = {'missing_threshold': 0.5, 'correlation_threshold': 0.95,
'task': 'regression', 'eval_metric': 'l1',
'cumulative_importance': 0.95})
fs_curie_temp.identify_all(selection_params = {'missing_threshold': 0.5, 'correlation_threshold': 0.99,
'task': 'regression', 'eval_metric': 'l1',
'cumulative_importance': 0.99})
#fs_electrical_resistivity.identify_all(selection_params = {'missing_threshold': 0.5, 'correlation_threshold': 0.95,
# 'task': 'regression', 'eval_metric': 'l1',
# 'cumulative_importance': 0.95})
fs_grain_size.identify_all(selection_params = {'missing_threshold': 0.5, 'correlation_threshold': 0.95,
'task': 'regression', 'eval_metric': 'l1',
'cumulative_importance': 0.95})
fs_magnetic_saturation.identify_all(selection_params = {'missing_threshold': 0.5, 'correlation_threshold': 0.95,
'task': 'regression', 'eval_metric': 'l1',
'cumulative_importance': 0.99})
fs_magnetostriction.identify_all(selection_params = {'missing_threshold': 0.5, 'correlation_threshold': 0.99,
'task': 'regression', 'eval_metric': 'l1',
'cumulative_importance': 0.99})
fs_permeability.identify_all(selection_params = {'missing_threshold': 0.5, 'correlation_threshold': 0.95,
'task': 'regression', 'eval_metric': 'l1',
'cumulative_importance': 0.95})
coercivity_removed_all = fs_coercivity.remove(methods = 'all', keep_one_hot = False)
#core_loss_removed_all = fs_core_loss.remove(methods = 'all', keep_one_hot = False)
curie_temp_removed_all = fs_curie_temp.remove(methods = 'all', keep_one_hot = False)
#electrical_resistivity_removed_all = fs_electrical_resistivity.remove(methods = 'all', keep_one_hot = False)
grain_size_removed_all = fs_grain_size.remove(methods = 'all', keep_one_hot = False)
magnetic_saturation_removed_all = fs_magnetic_saturation.remove(methods = 'all', keep_one_hot = False)
magnetostriction_removed_all = fs_magnetostriction.remove(methods = 'all', keep_one_hot = False)
permeability_removed_all = fs_permeability.remove(methods = 'all', keep_one_hot = False)
coercivity_best = list(coercivity_removed_all.columns)
#core_loss_best = list(core_loss_removed_all.columns)
curie_temp_best = list(curie_temp_removed_all.columns)
#electrical_resistivity_best = list(electrical_resistivity_removed_all.columns)
grain_size_best = list(grain_size_removed_all.columns)
magnetic_saturation_best = list(magnetic_saturation_removed_all.columns)
magnetostriction_best = list(magnetostriction_removed_all.columns)
permeability_best = list(permeability_removed_all.columns)
with open('kept_coercivity.txt', 'w') as file_handler:
for item in coercivity_best:
file_handler.write("{}\n".format(item))
#with open('kept_core_loss.txt', 'w') as file_handler:
# for item in core_loss_best:
# file_handler.write("{}\n".format(item))
with open('kept_curie_temp.txt', 'w') as file_handler:
for item in curie_temp_best:
file_handler.write("{}\n".format(item))
#with open('kept_electrical_resistivity.txt', 'w') as file_handler:
# for item in electrical_resistivity_best:
# kept_electrical_resistivity.write("%s\n" % item)
with open('kept_grain_size.txt', 'w') as file_handler:
for item in grain_size_best:
file_handler.write("{}\n".format(item))
with open('kept_magnetic_saturation.txt', 'w') as file_handler:
for item in magnetic_saturation_best:
file_handler.write("{}\n".format(item))
with open('kept_magnetostriction.txt', 'w') as file_handler:
for item in magnetostriction_best:
file_handler.write("{}\n".format(item))
with open('kept_permeability.txt', 'w') as file_handler:
for item in permeability_best:
file_handler.write("{}\n".format(item))
```
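The eight near-identical label/feature blocks above could be driven by a single mapping; below is a minimal pure-Python sketch of that pattern (the `targets` mapping and `split_columns` helper are illustrative, not part of the notebook):

```python
# Hypothetical refactor: one mapping from property name to target column,
# then derive the feature columns as everything else.
targets = {
    'coercivity': 'Coercivity',
    'core_loss': 'Core Loss',
    'curie_temp': 'Curie Temp',
    'grain_size': 'Grain Diameter',
}

def split_columns(all_columns, target_column):
    """Feature columns are all columns except the target column."""
    return [c for c in all_columns if c != target_column]

columns = ['Fe', 'Si', 'B', 'Coercivity']
print(split_columns(columns, targets['coercivity']))  # ['Fe', 'Si', 'B']
```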
| github_jupyter |
<img src="../_static/pymt-logo-header-text.png">
## Coastline Evolution Model + Waves
* Link to this notebook: https://github.com/csdms/pymt/blob/master/notebooks/cem_and_waves.ipynb
* Install command: `$ conda install notebook pymt_cem`
This example explores how to use a BMI implementation to couple the Waves component with the Coastline Evolution Model component.
### Links
* [CEM source code](https://github.com/csdms/cem-old): Look at the files that have *deltas* in their name.
* [CEM description on CSDMS](http://csdms.colorado.edu/wiki/Model_help:CEM): Detailed information on the CEM model.
### Interacting with the Coastline Evolution Model BMI using Python
Some magic that allows us to view images within the notebook.
```
%matplotlib inline
import numpy as np
```
Import the `Cem` and `Waves` classes and instantiate them. In Python, a model with a BMI will have no arguments for its constructor. Note that although the classes have been instantiated, they're not yet ready to be run. We'll get to that later!
```
from pymt import models
cem, waves = models.Cem(), models.Waves()
```
Even though we can't run our waves model yet, we can still get some information about it. *Just don't try to run it.* For instance, we can get the names of its output variables, and of Cem's input variables.
```
waves.get_output_var_names()
cem.get_input_var_names()
```
We can also get information about specific variables. Here we'll look at some info about wave direction. This is the main input of the Cem model. Notice that BMI components always use [CSDMS standard names](http://csdms.colorado.edu/wiki/CSDMS_Standard_Names). The CSDMS Standard Name for wave angle is,
"sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity"
Quite a mouthful, I know. With that name we can get information about that variable and the grid that it is on.
OK. We're finally ready to run the models. Well, not quite. First we initialize each model with the BMI **initialize** method. Normally we would pass it a string that names an input file; here each component's **setup** method generates default configuration arguments, which we then pass to **initialize**.
```
args = cem.setup(number_of_rows=100, number_of_cols=200, grid_spacing=200.)
cem.initialize(*args)
args = waves.setup()
waves.initialize(*args)
```
Here I define a convenience function for plotting the water depth and making it look pretty. You don't need to worry too much about its internals for this tutorial. It just saves us some typing later on.
```
def plot_coast(spacing, z):
import matplotlib.pyplot as plt
xmin, xmax = 0., z.shape[1] * spacing[1] * 1e-3
ymin, ymax = 0., z.shape[0] * spacing[0] * 1e-3
plt.imshow(z, extent=[xmin, xmax, ymin, ymax], origin='lower', cmap='ocean')
plt.colorbar().ax.set_ylabel('Water Depth (m)')
plt.xlabel('Along shore (km)')
plt.ylabel('Cross shore (km)')
```
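The `imshow` extent in `plot_coast` converts grid indices to kilometres: cell spacing is in metres, so multiplying by `1e-3` gives km. The `domain_extent_km` helper below is an illustrative restatement of just that arithmetic:

```python
def domain_extent_km(spacing, shape):
    """(xmin, xmax, ymin, ymax) in km for a grid of `shape` cells
    with `spacing` (row, col) cell sizes in metres."""
    xmax = shape[1] * spacing[1] * 1e-3
    ymax = shape[0] * spacing[0] * 1e-3
    return 0.0, xmax, 0.0, ymax

# For the 100 x 200 grid with 200 m spacing set up earlier:
print(domain_extent_km((200.0, 200.0), (100, 200)))  # (0.0, 40.0, 0.0, 20.0)
```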
It generates plots that look like this. We begin with a flat delta (green) and a linear coastline (y = 3 km). The bathymetry drops off linearly to the top of the domain.
```
grid_id = cem.get_var_grid('sea_water__depth')
spacing = cem.get_grid_spacing(grid_id)
shape = cem.get_grid_shape(grid_id)
z = np.empty(shape)
cem.get_value('sea_water__depth', out=z)
plot_coast(spacing, z)
```
Allocate memory for the sediment discharge array and set the discharge at the coastal cell to some value.
```
qs = np.zeros_like(z)
qs[0, 100] = 750
```
The CSDMS Standard Name for this variable is:
"land_surface_water_sediment~bedload__mass_flow_rate"
You can get an idea of the units based on the quantity part of the name. "mass_flow_rate" indicates mass per time. You can double-check this with the BMI method function **get_var_units**.
```
cem.get_var_units('land_surface_water_sediment~bedload__mass_flow_rate')
waves.set_value('sea_shoreline_wave~incoming~deepwater__ashton_et_al_approach_angle_asymmetry_parameter', .3)
waves.set_value('sea_shoreline_wave~incoming~deepwater__ashton_et_al_approach_angle_highness_parameter', .7)
cem.set_value("sea_surface_water_wave__height", 2.)
cem.set_value("sea_surface_water_wave__period", 7.)
```
Set the bedload flux and run the model.
```
for time in range(3000):
waves.update()
angle = waves.get_value('sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity')
cem.set_value('sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity', angle)
cem.set_value('land_surface_water_sediment~bedload__mass_flow_rate', qs)
cem.update()
cem.get_value('sea_water__depth', out=z)
plot_coast(spacing, z)
```
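The coupling pattern in the loop above — step the wave component, read its output, push it into the coastline component, step that — generalizes beyond pymt. A minimal sketch with stand-in components (`Producer` and `Consumer` are illustrative stubs, not pymt classes):

```python
# Stub "wave" component: produces a value each timestep.
class Producer:
    def __init__(self):
        self.t = 0
    def update(self):
        self.t += 1
    def get_value(self):
        return self.t  # e.g. a wave angle

# Stub "coastline" component: consumes whatever the producer emits.
class Consumer:
    def __init__(self):
        self.received = []
    def set_value(self, v):
        self.received.append(v)
    def update(self):
        pass  # would advance the coastline state here

waves, cem = Producer(), Consumer()
for _ in range(3):      # same update-get-set-update rhythm as above
    waves.update()
    cem.set_value(waves.get_value())
    cem.update()

print(cem.received)  # [1, 2, 3]
```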
Let's add another sediment source with a different flux and update the model.
```
qs[0, 150] = 500
for time in range(3750):
waves.update()
angle = waves.get_value('sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity')
cem.set_value('sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity', angle)
cem.set_value('land_surface_water_sediment~bedload__mass_flow_rate', qs)
cem.update()
cem.get_value('sea_water__depth', out=z)
plot_coast(spacing, z)
```
Here we shut off the sediment supply completely.
```
qs.fill(0.)
for time in range(4000):
waves.update()
angle = waves.get_value('sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity')
cem.set_value('sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity', angle)
cem.set_value('land_surface_water_sediment~bedload__mass_flow_rate', qs)
cem.update()
cem.get_value('sea_water__depth', out=z)
plot_coast(spacing, z)
```
| github_jupyter |
This application demonstrates how to build a simple neural network using the Graph mark.
Interactions can be enabled by adding event handlers (click, hover etc) on the nodes of the network.
See the [Mark Interactions notebook](../Interactions/Mark Interactions.ipynb) and the [Scatter Notebook](../Marks/Scatter.ipynb) for details.
```
from itertools import chain, product
import numpy as np
from bqplot import *
class NeuralNet(Figure):
def __init__(self, **kwargs):
self.height = kwargs.get('height', 600)
self.width = kwargs.get('width', 960)
self.directed_links = kwargs.get('directed_links', False)
self.num_inputs = kwargs['num_inputs']
self.num_hidden_layers = kwargs['num_hidden_layers']
self.nodes_output_layer = kwargs['num_outputs']
self.layer_colors = kwargs.get('layer_colors',
['Orange'] * (len(self.num_hidden_layers) + 2))
self.build_net()
super(NeuralNet, self).__init__(**kwargs)
def build_net(self):
# create nodes
self.layer_nodes = []
self.layer_nodes.append(['x' + str(i+1) for i in range(self.num_inputs)])
for i, h in enumerate(self.num_hidden_layers):
self.layer_nodes.append(['h' + str(i+1) + ',' + str(j+1) for j in range(h)])
self.layer_nodes.append(['y' + str(i+1) for i in range(self.nodes_output_layer)])
self.flattened_layer_nodes = list(chain(*self.layer_nodes))
# build link matrix
i = 0
node_indices = {}
for layer in self.layer_nodes:
for node in layer:
node_indices[node] = i
i += 1
n = len(self.flattened_layer_nodes)
self.link_matrix = np.empty((n,n))
self.link_matrix[:] = np.nan
for i in range(len(self.layer_nodes) - 1):
curr_layer_nodes_indices = [node_indices[d] for d in self.layer_nodes[i]]
next_layer_nodes = [node_indices[d] for d in self.layer_nodes[i+1]]
for s, t in product(curr_layer_nodes_indices, next_layer_nodes):
self.link_matrix[s, t] = 1
# set node x locations
self.nodes_x = np.repeat(np.linspace(0, 100,
len(self.layer_nodes) + 1,
endpoint=False)[1:],
[len(n) for n in self.layer_nodes])
# set node y locations
self.nodes_y = np.array([])
for layer in self.layer_nodes:
n = len(layer)
ys = np.linspace(0, 100, n+1, endpoint=False)[1:]
self.nodes_y = np.append(self.nodes_y, ys[::-1])
# set node colors
n_layers = len(self.layer_nodes)
self.node_colors = np.repeat(np.array(self.layer_colors[:n_layers]),
[len(layer) for layer in self.layer_nodes]).tolist()
xs = LinearScale(min=0, max=100)
ys = LinearScale(min=0, max=100)
self.graph = Graph(node_data=[{'label': d,
'label_display': 'none'} for d in self.flattened_layer_nodes],
link_matrix=self.link_matrix,
link_type='line',
colors=self.node_colors,
directed=self.directed_links,
scales={'x': xs, 'y': ys},
x=self.nodes_x,
y=self.nodes_y,
# color=2 * np.random.rand(len(self.flattened_layer_nodes)) - 1
)
self.graph.hovered_style = {'stroke': '1.5'}
self.graph.unhovered_style = {'opacity': '0.4'}
self.graph.selected_style = {'opacity': '1',
'stroke': 'red',
'stroke-width': '2.5'}
self.marks = [self.graph]
self.title = 'Neural Network'
self.layout.width = str(self.width) + 'px'
self.layout.height = str(self.height) + 'px'
NeuralNet(num_inputs=3, num_hidden_layers=[10, 10, 8, 5], num_outputs=1)
```
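The node x-positions in `build_net` come from `np.linspace` with `endpoint=False`: `n_layers + 1` evenly spaced points in `[0, 100)`, with the leading `0` sliced off so the layers sit strictly inside the figure. A standalone check of that slicing (the `layer_x_positions` helper is illustrative):

```python
import numpy as np

def layer_x_positions(n_layers):
    # Same expression as in build_net, minus the np.repeat over nodes
    return np.linspace(0, 100, n_layers + 1, endpoint=False)[1:]

# NeuralNet(num_inputs=3, num_hidden_layers=[10, 10, 8, 5], num_outputs=1)
# has 6 layers, so consecutive layers end up 100/7 units apart:
xs = layer_x_positions(6)
print(len(xs))                  # 6
print(round(xs[1] - xs[0], 4))  # 14.2857
```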
| github_jupyter |
```
from __future__ import print_function
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
import numpy as np
import matplotlib.pyplot as plt
import os
import sys
from torch.utils.data import DataLoader, TensorDataset
import torchvision.utils
from torchvision import models
import torchvision.datasets as dsets
import torchvision.transforms as transforms
# import torchattacks
# from torchattacks import PGD
import matplotlib.pyplot as plt
# from torchattacks import RPGD
# epsilons = [0, .05, .1, .15, .2, .25, .3]
pretrained_model = '/gdrive/My Drive/Tmp/cifar_net.pth' #pretrained_model = "lenet_mnist_model.pth"
use_cuda=True
from google.colab import drive
drive.mount('/gdrive')
# define a scaled ReLU (sReLU) activation as a plain function
def srelu(input, slope):
    return slope * F.relu(input)
class SReLU(nn.Module):
def __init__(self):
super().__init__() # init the base class
def forward(self, input, slope):
    return srelu(input, slope)  # apply the sReLU defined above
class Net(nn.Module):
def __init__(self, slope):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
self.slope = slope
def forward(self, x):
x = self.pool(srelu(self.conv1(x), self.slope))
x = self.pool(srelu(self.conv2(x), self.slope))
x = x.view(-1, 16 * 5 * 5)
x = srelu(self.fc1(x), self.slope)
x = srelu(self.fc2(x), self.slope)
x = self.fc3(x)
return F.log_softmax(x, dim=1)
# Define what device we are using
print("CUDA Available: ",torch.cuda.is_available())
device = torch.device("cuda" if (use_cuda and torch.cuda.is_available()) else "cpu")
# # Initialize the network
# model = Net().to(device)
# # Load the pretrained model
# model.load_state_dict(torch.load(pretrained_model, map_location='cpu'))
# # Set the model in evaluation mode. In this case this is for the Dropout layers
# model.eval()
transform = transforms.Compose(
[transforms.ToTensor()])
# cifar10_train = dsets.CIFAR10(root='./data', train=True,
# download=True, transform=transform)
cifar10_test = dsets.CIFAR10(root='./data', train=False,
download=True, transform=transform)
test_loader = torch.utils.data.DataLoader(cifar10_test, batch_size=1,
shuffle=False, num_workers=1)
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
# FGSM attack code
def fgsm_attack(image, epsilon, data_grad):
# Collect the element-wise sign of the data gradient
sign_data_grad = data_grad.sign()
# Create the perturbed image by adjusting each pixel of the input image
perturbed_image = image + epsilon*sign_data_grad
# Adding clipping to maintain [0,1] range
perturbed_image = torch.clamp(perturbed_image, 0, 1)
# Return the perturbed image
return perturbed_image
def test( model, device, test_loader, epsilon, myTarget):
# Accuracy counter
correct = 0
adv_examples = []
# Loop over all examples in test set
# import pdb; pdb.set_trace()
for data, target in test_loader:
# Send the data and label to the device
data, target, myTarget = data.to(device), target.to(device), myTarget.to(device)
# Set requires_grad attribute of tensor. Important for Attack
data.requires_grad = True
# Forward pass the data through the model
output = model(data)
init_pred = output.max(1, keepdim=True)[1] # get the index of the max log-probability
if init_pred.item() == myTarget.item():
continue
# Calculate the loss
# loss = F.nll_loss(output, target)
loss = F.nll_loss(output, myTarget)
# Zero all existing gradients
model.zero_grad()
# Calculate gradients of model in backward pass
loss.backward()
# Collect datagrad
data_grad = data.grad.data
# Call FGSM Attack
perturbed_data = fgsm_attack(data, epsilon, -1*data_grad)
# Re-classify the perturbed image
output = model(perturbed_data)
# Check for success
final_pred = output.max(1, keepdim=True)[1] # get the index of the max log-probability
if final_pred.item() == myTarget.item():
correct += 1
# Special case for saving 0 epsilon examples
if (len(adv_examples) < 5):
adv_ex = perturbed_data.squeeze().detach().cpu().numpy()
adv_examples.append( (init_pred.item(), final_pred.item(), adv_ex) )
# else:
# # Save some adv examples for visualization later
# if len(adv_examples) < 5:
# adv_ex = perturbed_data.squeeze().detach().cpu().numpy()
# adv_examples.append( (init_pred.item(), final_pred.item(), adv_ex) )
# Calculate final accuracy for this epsilon
final_acc = correct/float(len(test_loader))
print("Epsilon: {}\tTest Accuracy = {} / {} = {}".format(epsilon, correct, len(test_loader), final_acc))
# Return the accuracy and an adversarial example
return final_acc, adv_examples
epsilons = [0, 0.003, 0.007, 0.01, 0.05, 0.1 ]
slopes = [.5, 1, 2, 5, 10, 100]
symbs = ['*-', 'o-', 's-', 'd-', '+-', 'x-', '^-', '<-']
all_all_accuracies = []
for num in range(10):
all_accuracies = []
all_examples = []
# Run test for each slope
for sl in slopes:
print(f'\n Running class={classes[num]} slope={sl} ... ')
# Initialize the network
model = Net(sl).to(device)
# Load the pretrained model
model.load_state_dict(torch.load(pretrained_model, map_location='cpu'))
# Set the model in evaluation mode. In this case this is for the Dropout layers
model.eval()
# Run test for each epsilon
def_accuracies = []
def_examples = []
# myTarget = torch.tensor([2])
for eps in epsilons:
myTarget = torch.tensor([num]) #torch.randint(10, (1,1)).squeeze(0)
# myTarget = torch.randint(10, (1,1)).squeeze(0)
acc, ex = test(model, device, test_loader, eps, myTarget)
def_accuracies.append(acc)
# def_examples.append(ex)
all_accuracies.append(def_accuracies)
# all_examples.append(def_examples)
all_all_accuracies.append(all_accuracies)
all_accuracies = np.array(all_accuracies)
# all_accuracies = all_accuracies.T
fig =plt.figure(figsize=(5,5))
for idx in range(len(all_accuracies)):
plt.plot(all_accuracies[idx,:], symbs[idx])
# plt.plot(epsilons, np.array(all_accuracies).T, "*-")
plt.yticks(np.arange(0, .5, step=0.1))
plt.xticks(np.arange(0, len(epsilons), step=1), epsilons)
plt.title(f"Accuracy vs Epsilon (converting to {classes[num]})")
plt.xlabel("Epsilon")
plt.ylabel("Accuracy")
plt.legend(slopes)
plt.show()
fig.savefig(f'/gdrive/My Drive/Tmp/Targeted/plot-{classes[num]}.png')
np.save(f'/gdrive/My Drive/Tmp/Targeted/CIFAR_slopes.npy', all_all_accuracies)
all_all_accuracies = np.load(f'/gdrive/My Drive/Tmp/Targeted/CIFAR_slopes.npy')
np.save(f'/gdrive/My Drive/Tmp/Targeted/CIFAR_slopes_FINAL.npy', all_all_accuracies)
all_res = np.array(all_all_accuracies)
all_res_m = all_res.mean(axis=0).T
fig =plt.figure(figsize=(5,5))
for idx in range(len(all_res_m)):
plt.plot(all_res_m[:,idx], symbs[idx])
# plt.plot(epsilons, np.array(all_accuracies).T, "*-")
plt.yticks(np.arange(0, .5, step=0.1))
plt.xticks(np.arange(0, 6, step=1), epsilons)
plt.title(f"Accuracy vs Epsilon (Avg. over digits)")
plt.xlabel("Epsilon")
plt.ylabel("Accuracy")
plt.legend(slopes)
plt.show()
fig.savefig(f'/gdrive/My Drive/Tmp/Targeted/CIFAR_avg.png')
# torch.save(all_accuracies,'/gdrive/My Drive/Tmp/accs_CIFAR_Targeted.npy')
model_1 = Net(1).to(device)
model_1.load_state_dict(torch.load(pretrained_model, map_location='cpu'))
model_1.eval()
model_2 = Net(100).to(device)
model_2.load_state_dict(torch.load(pretrained_model, map_location='cpu'))
model_2.eval()
# Run test for each epsilon
def_examples_1 = []
def_examples_2 = []
eps = .1
myTarget = torch.tensor([0]) #torch.randint(10, (1,1)).squeeze(0)
# test_loader_2 = torch.utils.data.DataLoader(cifar10_test, batch_size=1,
# shuffle=False, num_workers=1)
_, ex = test(model_1, device, test_loader, eps, myTarget)
def_examples_1.append(ex)
_, ex = test(model_2, device, test_loader, eps, myTarget)
def_examples_2.append(ex)
# Plot several examples of adversarial samples at each epsilon
cnt = 0
plt.figure(figsize=(8,10))
for j in range(len(def_examples_1[0])):
cnt += 1
plt.subplot(1,len(def_examples_1[0]),cnt)
plt.xticks([], [])
plt.yticks([], [])
if j == 0:
plt.ylabel("Eps: {}".format(eps), fontsize=14)
orig,adv,ex = def_examples_1[0][j]
plt.title("{} -> {}".format(classes[orig], classes[adv]))
plt.imshow(ex.transpose(1,2,0), cmap="gray")
plt.tight_layout()
plt.show()
cnt = 0
plt.figure(figsize=(8,10))
for j in range(len(def_examples_2[0])):
cnt += 1
plt.subplot(1,len(def_examples_2[0]),cnt)
plt.xticks([], [])
plt.yticks([], [])
if j == 0:
plt.ylabel("Eps: {}".format(eps), fontsize=14)
orig,adv,ex = def_examples_2[0][j]
plt.title("{} -> {}".format(classes[orig], classes[adv]))
plt.imshow(ex.transpose(1,2,0), cmap="gray")
plt.tight_layout()
plt.show()
```
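The core of `fgsm_attack` above — step each pixel by epsilon in the direction of the gradient sign, then clip back to `[0, 1]` — is re-stated here in NumPy for illustration (the notebook's *targeted* variant passes `-1 * data_grad`, i.e. it descends the loss toward the chosen target class):

```python
import numpy as np

def fgsm_step(image, epsilon, grad):
    # Move each pixel by epsilon in the sign of its gradient...
    perturbed = image + epsilon * np.sign(grad)
    # ...then clip back into the valid [0, 1] pixel range.
    return np.clip(perturbed, 0.0, 1.0)

img = np.array([0.0, 0.5, 1.0])
grad = np.array([-2.0, 3.0, 0.5])
print(fgsm_step(img, 0.1, grad))  # [0.  0.6 1. ]
```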
| github_jupyter |
## 1) Preprocess all the necessary variables
### 1.1) Build feature, target and initial normalization files
```
#!ln -s /filer/z-sv-pool12c/t/Tom.Beucler/SPCAM/CBRAIN-CAM/cbrain \
#/filer/z-sv-pool12c/t/Tom.Beucler/SPCAM/CBRAIN-CAM/notebooks/tbeucler_devlog/cbrain
from cbrain.imports import *
from cbrain.data_generator import *
from cbrain.models import MasConsLay
from cbrain.utils import limit_mem
import tensorflow as tf
import tensorflow.math as tfm
import xarray as xr
import numpy as np
# Otherwise tensorflow will use ALL your GPU RAM for no reason
limit_mem()
TRAINDIR = '/local/Tom.Beucler/SPCAM_PHYS/'
DATADIR = '/project/meteo/w2w/A6/S.Rasp/SP-CAM/sp32fbp_andkua/'
PREFIX = '32_col_lgsc_12m_'
%cd /filer/z-sv-pool12c/t/Tom.Beucler/SPCAM/CBRAIN-CAM
```
tgb - 2/4/2019 - This is where this notebook starts to differ from 001
1) The new goal is to take into account large-scale forcings in the input file and see if it makes a difference since the CRM knows about them.
2) If it were the 8-column dataset, I would throw in UBP (the zonal wind) just in case and the Jacobian will be a good way of checking if it makes a difference.
3) We'll call the new config file 32col_mp_lgsc.
In that file:
inputs : [QBP, QCBP, QIBP, TBP, UBP, VBP, Qdt_adiabatic, QCdt_adiabatic, QIdt_adiabatic, Tdt_adiabatic, Udt_adiabatic, Vdt_adiabatic, PS, SOLIN, SHFLX, LHFLX]
outputs : [PHQ, PHCLDLIQ, PHCLDICE, TPHYSTND, DTVKE, FSNT, FSNS, FLNT, FLNS, PRECT, PRECTEND, PRECST, PRECSTEN]
Error 1: there shouldn't be any `[1,:]` indexing in the `preprocess_aqua.py` script when calculating the right name based on the `dt_adiabatic` name.
```
!python cbrain/Test01_preprocess_aqua.py \
--config pp_config/32col_mp_lgsc_tbeucler_local.yml \
--aqua_names '*.h1.0000-*-0[1-12]-*' \
--out_pref 32_col_lgsc_12m_train
```
### 1.2) Create validation dataset
```
!python cbrain/Test01_preprocess_aqua.py \
--config pp_config/32col_mp_lgsc_tbeucler_local.yml \
--aqua_names '*.h1.0001-*-0[1-3]-*' \
--out_pref 32_col_lgsc_12m_valid --ext_norm Nope
```
### 1.3) Shuffle the training dataset
tgb - 1/16/2019 - Adapted from Stephan's entire workflow for the 32-column run
```
%cd /filer/z-sv-pool12c/t/Tom.Beucler/SPCAM/CBRAIN-CAM
!python cbrain/shuffle_ds.py --pref $TRAINDIR/32_col_lgsc_12m_train
!python cbrain/shuffle_ds.py --pref $TRAINDIR/32_col_lgsc_12m_valid
```
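Conceptually, what `shuffle_ds.py` must guarantee is that features and targets are reordered with the *same* permutation, so every sample keeps its matching target. A toy NumPy sketch of that invariant (the real script operates on the NetCDF files above):

```python
import numpy as np

rng = np.random.default_rng(0)
features = np.arange(10).reshape(5, 2)  # 5 samples, 2 features each
targets = np.arange(5) * 10             # 5 matching targets

# One shared permutation applied to both arrays
perm = rng.permutation(len(features))
shuffled_x, shuffled_y = features[perm], targets[perm]

# Row k of `features` is [2k, 2k+1] and its target is 10k, so the pairing
# survives the shuffle exactly when shuffled_y == 5 * shuffled_x[:, 0].
print(np.array_equal(shuffled_y, 5 * shuffled_x[:, 0]))  # True
```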
### 1.4) Change the output's normalization using pressure levels
tgb - 2/6/2019 - See notebook 001 for careful test of each of the steps below
```
ds = xr.open_dataset(TRAINDIR + PREFIX + 'train_norm.nc')
# Open the pickle files containing the pressure converters
with open(os.path.join('/filer/z-sv-pool12c/t/Tom.Beucler/SPCAM/CBRAIN-CAM/cbrain', 'hyai_hybi.pkl'), 'rb') as f:
hyai, hybi = pickle.load(f)
# Take a representative value for PS, since the purpose here is normalization
PS = 1e5; P0 = 1e5
P = P0*hyai + PS*hybi  # Total pressure [Pa]
dP = P[1:] - P[:-1]    # Differential pressure [Pa]
dt = 30*60             # Timestep [s]
ds.target_conv[:150] = np.multiply(ds.target_conv[:150],np.concatenate((dP,dP,dP,dP,np.divide(dP,dt))))
# Copy old normalization file
path1 = os.path.join(TRAINDIR,PREFIX+'train_norm.nc')
path2 = os.path.join(TRAINDIR,PREFIX+'train_oldnorm.nc')
!cp $path1 $path2
# Create new dataset with characteristics of modified ds
new_ds = xr.Dataset({
'feature_means': ds.feature_means,
'feature_stds': ds.feature_stds,
'feature_mins': ds.feature_mins,
'feature_maxs': ds.feature_maxs,
'target_means': ds.target_means,
'target_stds': ds.target_stds,
'target_mins': ds.target_mins,
'target_maxs': ds.target_maxs,
'feature_names': ds.feature_names,
'target_names': ds.target_names,
'feature_stds_by_var': ds.feature_stds_by_var,
'target_conv': ds.target_conv
})
# 4.3 Write new data set to initial target_conv file
!rm $path1 # Remove normalization file
new_ds.to_netcdf(path1) # Save the new dataset as the new normalization file
```
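The pressure weights above follow from the hybrid-coordinate formula `P = P0*hyai + PS*hybi`; since `dP` is a telescoping difference, the layer thicknesses always sum to the surface-to-top pressure difference. A check with synthetic coefficients (illustrative values, same structure as the `hyai`/`hybi` loaded from the pickle file):

```python
import numpy as np

# Synthetic hybrid interface coefficients, top of atmosphere to surface
hyai = np.array([0.00, 0.01, 0.02, 0.01, 0.00])
hybi = np.array([0.00, 0.10, 0.40, 0.80, 1.00])

PS = P0 = 1e5
P = P0 * hyai + PS * hybi  # Interface pressures [Pa]
dP = P[1:] - P[:-1]        # Layer thicknesses [Pa]

# Telescoping sum: sum(dP) == P[-1] - P[0]
print(bool(np.isclose(dP.sum(), P[-1] - P[0])))  # True
```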
## 2) Create data generator and produce data sample
### 2.1) Create data generator from training dataset
```
xr.open_dataset(path1).close() # Don't forget to close xarray handler!!
train_gen_obj = DataGenerator(
data_dir=TRAINDIR,
feature_fn=PREFIX+'train_shuffle_features.nc',
target_fn=PREFIX+'train_shuffle_targets.nc',
batch_size=512,
norm_fn=PREFIX+'train_norm.nc',
fsub='feature_means', # Subtract the mean
fdiv='feature_stds_by_var', # Then divide by the std
tmult='target_conv', # For targets/outputs: use values from preprocess_aqua
shuffle=True,
)
gen = train_gen_obj.return_generator()
# Produce data sample
x, y = next(gen)
# and check its shape
x.shape, y.shape
```
### 2.2) Create data generator from validation dataset and produce sample
```
valid_gen_obj = DataGenerator(
data_dir=TRAINDIR,
feature_fn=PREFIX+'valid_shuffle_features.nc',
target_fn=PREFIX+'valid_shuffle_targets.nc',
batch_size=512,
norm_fn=PREFIX+'train_norm.nc',
fsub='feature_means', # Subtract the mean
fdiv='feature_stds_by_var', # Then divide by the std
tmult='target_conv', # For targets/outputs: use values from preprocess_aqua
shuffle=True,
)
validgen = valid_gen_obj.return_generator()
xval, yval = next(validgen)
xval.shape, yval.shape
```
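The generator's `fsub`/`fdiv` options amount to a standard z-scoring of the features — subtract the stored means, divide by the stored standard deviations. A minimal NumPy sketch of that transform:

```python
import numpy as np

def normalize(features, means, stds):
    # Same operation the DataGenerator applies via fsub/fdiv
    return (features - means) / stds

x = np.array([[1.0, 10.0], [3.0, 30.0]])
means = x.mean(axis=0)  # [2., 20.]
stds = x.std(axis=0)    # [1., 10.]
xn = normalize(x, means, stds)

# Each column now has zero mean and unit spread
print(np.allclose(xn, [[-1.0, -1.0], [1.0, 1.0]]))  # True
```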
## 3) Neural network attempts (tgb - started 1/15/2019)
```
from keras.layers import *
from keras.models import *
```
### 3.1) Energy conservation strategy (tgb - started 1/18/2019)
#### Step 0: Load all the variables to calculate mass-weighted vertical integrals
```
# 1) Open the file containing the normalization of the targets
ds = xr.open_dataset(TRAINDIR + PREFIX + 'train_norm.nc')
# 2) Open the pickle files containing the pressure converters
with open(os.path.join('/filer/z-sv-pool12c/t/Tom.Beucler/SPCAM/CBRAIN-CAM/cbrain', 'hyai_hybi.pkl'), 'rb') as f:
hyai, hybi = pickle.load(f)
# 3) Define fsub, fdiv, normq
fsub = ds.feature_means.values
fdiv = ds.feature_stds_by_var.values
normq = ds.target_conv.values
print('fsub.shape=',fsub.shape)
print('fdiv.shape=',fdiv.shape)
print('normq.shape=',normq.shape)
print('hyai.shape=',hyai.shape)
print('hybi.shape=',hybi.shape)
ds.close()
```
#### Step 1: Implement mass conservation layer
##### Enforcing mass conservation
Reference for mass and energy conservation:
(Original CAM F90 script, look for "Compute vertical integrals of dry static energy and water" in the script)
https://gitlab.com/mspritch/spcam3.0-neural-net/blob/master/models/atm/cam/src/physics/cam1/check_energy.F90
(Stephan's energy/mass verification scripts)
https://github.com/raspstephan/CBRAIN-CAM/blob/master/notebooks/dev/old_notebooks/energy_conservation.ipynb
##### Mass/Water conservation equation in W/m2
If the network predicts the water vapor tendency, defined as the difference between the moisture after and before physics, divided by the timestep dt and normalized to energy units (W/m2):
$$\delta q_{v,i,l}\left(p\right)\overset{\mathrm{def}}{=}\frac{L_{v}\Delta p_{\mathrm{norm}}}{g}\frac{q_{v,i,l}^{a}\left(p\right)-q_{v,i,l}^{b}\left(p\right)}{\Delta t}$$
The water conservation equation is (normalized in energy units W/m2):
$$\underbrace{\int_{0}^{\widetilde{p_{s}}}d\widetilde{p}\left(\delta q_{v}+\delta q_{l}+\delta q_{i}\right)}_{\mathrm{Difference\ after-before\ physics}}=\underbrace{\int_{\widetilde{t}}^{\widetilde{t}+1} \widetilde{dt}\ \left(LHF-L_{v}P-10^{-3}\cdot L_{v} P_{tend}\right)}_{\mathrm{Cond-Precip\ during\ \Delta t}}
$$
where we have defined:
$$
\widetilde{p}\overset{\mathrm{def}}{=}\frac{p}{p_{\mathrm{norm}}}
$$
$$
\widetilde{t}\overset{\mathrm{def}}{=}\frac{t}{\Delta t}
$$
Note that the precipitation variables here sum up to the water flux from the atmosphere to the surface due to precipitation:
$$
\mathrm{Precipitation\ flux\ atm\rightarrow surf.\ \left[kg.m^{-2}.s^{-1}\right]}=P+10^{-3}\cdot P_{tend}
$$
The idea is to predict all variables but one. The specific humidity at the lowest level of the model (here 30) is likely to be one of the most penalized output variables as it has one of the largest tendencies (in W/m2) in the final cost function. If we predict all other variables and calculate that variable as a residual, it yields:
$$\Delta \widetilde{p}_{30} \delta q_{v}^{30}=\int_{\widetilde{t}}^{\widetilde{t}+1} \widetilde{dt}\ \left(LHF-L_{v}P-10^{-3}\cdot L_{v} P_{tend}\right)-\int_{0}^{\widetilde{p_{s}}}d\widetilde{p}\left(\delta q_{l}+\delta q_{i}\right)-\int_{0}^{\widetilde{p_{30}}}d\widetilde{p}\delta q_{v}
$$
Note that because we are already working with tendencies, the timestep variable dt is not needed in the water conservation layer.
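The residual computation above can be sketched in numpy with toy shapes before implementing it as a tensor operation (all names and values here are illustrative; every tendency is assumed already normalized to W/m2):

```python
import numpy as np

# Sketch of the level-30 water vapor residual, mirroring the budget equation above.
nlev = 30
rng = np.random.default_rng(0)
dp_tild = rng.uniform(0.5, 1.5, nlev)   # non-dimensional pressure thicknesses
dq_v = rng.normal(size=nlev - 1)        # vapor tendency, levels 1..29 (level 30 withheld)
dq_l = rng.normal(size=nlev)            # liquid tendency, levels 1..30
dq_i = rng.normal(size=nlev)            # ice tendency, levels 1..30
lhf, prec = 80.0, 60.0                  # latent heat flux and Lv*(P + 1e-3*P_tend)

# Level-30 vapor tendency inferred from the water budget:
dq_v30 = (lhf - prec
          - np.sum(dp_tild * (dq_l + dq_i))
          - np.sum(dp_tild[:-1] * dq_v)) / dp_tild[-1]

# The full column now closes the water budget exactly:
full_dq_v = np.append(dq_v, dq_v30)
residual = np.sum(dp_tild * (full_dq_v + dq_l + dq_i)) - (lhf - prec)
print(abs(residual) < 1e-10)  # True
```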
```
# tgb - 2/5/2019 - Adapted the mass conservation layer to new input format
class MasConsLay(Layer):
def __init__(self, fsub, fdiv, normq, hyai, hybi, output_dim, **kwargs):
self.fsub = fsub # Subtraction for normalization of inputs
self.fdiv = fdiv # Division for normalization of inputs
self.normq = normq # Normalization of output's water concentration
self.hyai = hyai # CAM constants to calculate d_pressure
self.hybi = hybi # CAM constants to calculate d_pressure
self.output_dim = output_dim # Dimension of output
super().__init__(**kwargs)
def build(self, input_shape):
super().build(input_shape) # Be sure to call this somewhere!
# tgb - 2/6/2019 - following https://github.com/keras-team/keras/issues/4871
def get_config(self):
config = {'fsub': list(self.fsub), 'fdiv': list(self.fdiv),
'normq': list(self.normq), 'hyai': list(self.hyai),
'hybi': list(self.hybi), 'output_dim': self.output_dim}
base_config = super(MasConsLay, self).get_config()
return dict(list(base_config.items()) + list(config.items()))
def call(self, arrs):
# arrs (for arrays) is a list with
# [inputs=inp and the output of the previous layer=densout]
# inputs will be [n_sample, 304 = 30*10+4] with
# [QBP, QCBP, QIBP, TBP, VBP, Qdt_adiabatic, QCdt_adiabatic, QIdt_adiabatic,
# Tdt_adiabatic, Vdt_adiabatic, PS, SOLIN, SHFLX, LHFLX]
        # outputs of the previous dense layer will be [n_samples, 156 = 30*5+8-2] with
        # [DELQ\{PHQ AT LOWEST LVL}, DELCLDLIQ, DELCLDICE,
        # TPHYSTND\{TPHYSTND AT LOWEST LVL}, DTVKE,
        # FSNT, FSNS, FLNT, FLNS, PRECT, PRECTEND, PRECST, PRECSTEN]
# Split between the inputs inp & the output of the densely connected
# neural network, densout
inp, densout = arrs
# 0) Constants
G = 9.80616; # Reference gravity constant [m.s-2]
        L_V = 2.501e6; # Latent heat of vaporization of water [J.kg-1]
P0 = 1e5; # Reference surface pressure [Pa]
# 1) Get non-dimensional pressure differences (p_tilde above)
        # In the input vector, PS is the 301st element after
        # the first elements = [QBP, ..., Vdt_adiabatic with shape 30*10=300]
PS = tfm.add( tfm.multiply( inp[:,300], self.fdiv[300]), self.fsub[300])
# Reference for calculation of d_pressure is cbrain/models.py (e.g. QLayer)
P = tfm.add( tfm.multiply( P0, self.hyai), \
tfm.multiply( PS[:,None], self.hybi))
dP = tfm.subtract( P[:, 1:], P[:, :-1])
# norm_output = dp_norm * L_V/G so dp_norm = norm_output * G/L_V
dP_NORM = tfm.divide( \
tfm.multiply(self.normq[:30], \
G), L_V)
# dp_tilde = dp/dp_norm
        # tfm.divide broadcasts dP of shape [batch, 30] against dP_NORM of shape [30]
dP_TILD = tfm.divide( dP, dP_NORM)
# 2) Calculate cloud water vertical integral from level 1 to level 30
# The indices are tricky here because we are missing del(q_v)@(level 30)
# so e.g. q_liq@(level 1) is the 30th element of the output of the
# previous dense layer
CLDVEC = tfm.multiply( dP_TILD, \
tfm.add( densout[:, 29:59], densout[:, 59:89]))
CLDINT = tfm.reduce_sum( CLDVEC, axis=1)
# 3) Calculate water vapor vertical integral from level 1 to level 29
VAPVEC = tfm.multiply( dP_TILD[:, :29], \
densout[:, :29])
VAPINT = tfm.reduce_sum( VAPVEC, axis=1)
# 4) Calculate forcing on the right-hand side (Net Evaporation-Precipitation)
# E-P is already normalized to units W.m-2 in the output vector
# so all we need to do is input-unnormalize LHF that is taken from the input vector
LHF = tfm.add( tfm.multiply( inp[:,303], self.fdiv[303]), self.fsub[303])
# Note that total precipitation = PRECT + 1e-3*PRECTEND in the CAM model
# PRECTEND already multiplied by 1e-3 in output vector so no need to redo it
PREC = tfm.add( densout[:, 152], densout[:, 153])
# 5) Infer water vapor tendency at level 30 as a residual
        # Nested tfm.add calls (tfm.add_n on a list of tensors would also work)
DELQV30 = tfm.divide( \
tfm.add( tfm.add( tfm.add (\
LHF, tfm.negative(PREC)), \
tfm.negative(CLDINT)), \
tfm.negative(VAPINT)), \
dP_TILD[:, 29])
# 6) Concatenate the water tendencies with the newly inferred tendency
        # to get the final vector out of shape (#samples,157) with
        # [DELQ, DELCLDLIQ, DELCLDICE,
        # TPHYSTND\{TPHYSTND AT LOWEST LVL}, DTVKE,
        # FSNT, FSNS, FLNT, FLNS, PRECT, PRECTEND, PRECST, PRECSTEN]
# Uses https://www.tensorflow.org/api_docs/python/tf/concat
DELQV30 = tf.expand_dims(DELQV30,1) # Adds dimension=1 to axis=1
out = tf.concat([densout[:, :29], DELQV30, densout[:, 29:]], 1)
return out
def compute_output_shape(self, input_shape):
        return (input_shape[0][0], self.output_dim) # The output has size 157=30*5+8-1
        # and is ready to be fed to the energy conservation layer
        # before we reach the total number of outputs = 158
```
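The pressure-thickness calculation inside the layer can also be prototyped in numpy: interface pressures follow CAM's hybrid coordinate P = P0*hyai + PS*hybi, and dP is the difference between consecutive interfaces. The coefficient values below are made up for illustration; the real ones come from `hyai_hybi.pkl`:

```python
import numpy as np

P0 = 1e5                                        # reference pressure [Pa]
nlev = 30
hyai_demo = np.linspace(0.002, 0.0, nlev + 1)   # hypothetical interface coefficients
hybi_demo = np.linspace(0.0, 1.0, nlev + 1)

PS = np.array([1.0e5, 9.8e4])                   # surface pressure per sample [Pa]
P = P0 * hyai_demo + PS[:, None] * hybi_demo    # interface pressures [n_samples, nlev+1]
dP = P[:, 1:] - P[:, :-1]                       # layer thicknesses [n_samples, nlev]
# The thicknesses telescope back to the surface-minus-top pressure difference
print(dP.shape, np.allclose(dP.sum(axis=1), P[:, -1] - P[:, 0]))
```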
#### Step 2: Implement energy conservation layer
##### Energy conservation equation in W/m2
The total energy conservation in CAM is much more delicate than the water mass conservation. One key simplification is that the net advection of moist static energy in the CRM (within a grid-box) is zero, so we only need to focus on the thermodynamics.
The two reference scripts of SPCAM are:
1) https://gitlab.com/mspritch/spcam3.0-neural-net/blob/master/models/atm/cam/src/physics/cam1/check_energy.F90 (look for te_tdn)
2) https://gitlab.com/mspritch/spcam3.0-neural-net/blob/master/models/atm/cam/src/physics/cam1/tphysbc_internallythreaded.F90 (look for wtricesink and icesink)
Another very useful script is Stephan's attempt at conserving energy/making sense of the variables:
https://github.com/raspstephan/CBRAIN-CAM/blob/master/notebooks/dev/old_notebooks/energy_conservation.ipynb
which is extremely helpful since some NETCDF flags are misleading/have the wrong attributes.
Following these files, we define the total energy, where the enthalpy component uses ice as the reference phase of zero energy (therefore, each gram of water in liquid or vapor form adds to the total energy of the system):
$$
e\ \left[\mathrm{J\ kg^{-1}}\right]\overset{\mathrm{def}}{=}\frac{\overrightarrow{u}\cdot\overrightarrow{u}}{2}+c_{p}T+L_{s}q_{v}+L_{f}q_{l}
$$
We then isolate the column tendency of each variable that is due to precipitation or phase change between ice and liquid within the column.
$$
\delta QV_{SP}\ \left[\mathrm{kg\ m^{-2}\ s^{-1}}\right]\overset{\mathrm{def}}{=}\int_{0}^{p_{s}}\frac{dp}{g}\left(\frac{dq_{v}}{dt}\right)-\frac{\mathrm{LHF}}{L_{v}}
$$
$$
\delta QL_{SP}\ \left[\mathrm{kg\ m^{-2}\ s^{-1}}\right]\overset{\mathrm{def}}{=}\int_{0}^{p_{s}}\frac{dp}{g}\left(\frac{dq_{l}}{dt}\right)
$$
$$
\delta T_{SP}\ \left[\mathrm{K\ kg\ m^{-2}\ s^{-1}}\right]\overset{\mathrm{def}}{=}\int_{0}^{p_{s}}\frac{dp}{g}\left(\frac{dT}{dt}\right)-\frac{\mathrm{RAD}}{c_{p}}-\frac{\mathrm{SHF}}{c_{p}}-\int_{0}^{p_{s}}\frac{dp}{g}\left(\frac{dT_{KE}}{dt}\right)
$$
where we have introduced the net radiative heating of the column:
$$
\mathrm{RAD}\overset{\mathrm{def}}{=}\mathrm{SW}_{t}-\mathrm{SW}_{s}+\mathrm{LW}_{s}-\mathrm{LW}_{t}
$$
Physically, we have removed the following energetic contributions:
- The net evaporation due to the latent heat flux from the column water vapor tendency,
- The net radiative flux, the sensible heat flux and the column turbulent dissipation of kinetic energy from the column temperature tendency,
- Nothing from the column liquid water tendency, since it is all due to conversion to vapor or ice.
The next step is to calculate the total energy tendency due to precipitation/phase change by summing all of the components we have calculated before:
$$
\delta E_{\mathrm{SP}}\ \left[\mathrm{W\ m^{-2}}\right]=c_{p}\delta T_{SP}+L_{s}\delta QV_{SP}+L_{f}\delta QL_{SP}
$$
Note that:
- Because the current setup of SPCAM does not resolve momentum transfer, we have left the change in kinetic energy out.
- Because the reference state is ice (and not liquid as is often seen for the frozen moist static energy), the latent heat flux is multiplied by the ratio L_s/L_v>1 of the latent heat of sublimation to the latent heat of vaporization.
The change from phase change/precipitation is a sum of two terms:
- An energy source from the net change from (liquid) to (ice) within the column "SNOW".
- An energy sink from the precipitation flux from the (atmosphere) to the (surface) "P".
$$
\delta E_{\mathrm{SP}}\ \left[\mathrm{W\ m^{-2}}\right]=L_{f}\left(SNOW+10^{-3}SNOW_{tend}-P-10^{-3}P_{tend}\ \left[\mathrm{All\ in\ kg\ m^{-2}\ s^{-1}}\right]\right)
$$
We work with the same non-dimensional variables as before, since all variables have been normalized in the output vector to have units W/m2. In addition to:
$$
\delta q_{v,i,l}\left(p\right)\overset{\mathrm{def}}{=}\frac{L_{v}\Delta p_{\mathrm{norm}}}{g}\frac{q_{v,i,l}^{a}\left(p\right)-q_{v,i,l}^{b}\left(p\right)}{\Delta t}
$$
we now also introduce the level-by-level temperature tendency in units W/m2:
$$
\delta T\left(p\right)\overset{\mathrm{def}}{=}\frac{c_{p}\Delta p_{\mathrm{norm}}}{g}\left(\frac{dT}{dt}\right)\left(p\right)
$$
The equations become:
$$
L_{v}\cdot\delta QV_{SP}\ \left[\mathrm{W\ m^{-2}}\right]\overset{\mathrm{def}}{=}\int_{0}^{\widetilde{p_{s}}}d\widetilde{p}\cdot\delta q_{v}-\mathrm{LHF}
$$
$$
L_{v}\cdot\delta QL_{SP}\ \left[\mathrm{W\ m^{-2}}\right]\overset{\mathrm{def}}{=}\int_{0}^{\widetilde{p_{s}}}d\widetilde{p}\cdot\delta q_{l}
$$
$$
c_{p}\cdot\delta T_{SP}\ \left[\mathrm{W\ m^{-2}}\right]\overset{\mathrm{def}}{=}\int_{0}^{\widetilde{p_{s}}}d\widetilde{p}\cdot\delta T-\mathrm{RAD}-\mathrm{SHF}-\int_{0}^{\widetilde{p_{s}}}d\widetilde{p}\cdot\delta T_{TKE}
$$
We now isolate the temperature tendency at level 30 and write it as a residual of the energy budget using all the variables normalized to units W/m2:
$$
\begin{aligned}\Delta\widetilde{p}_{30}\cdot\delta T_{30} & =\frac{L_{f}}{L_{v}}\left(L_{v}SNOW+10^{-3}L_{v}SNOW_{tend}-L_{v}P-10^{-3}L_{v}P_{tend}\right)\\
& +\mathrm{RAD}+\mathrm{SHF}+\int_{0}^{\widetilde{p_{s}}}d\widetilde{p}\cdot\delta T_{TKE}\\
& -\frac{L_{s}}{L_{v}}\left(\int_{0}^{\widetilde{p_{s}}}d\widetilde{p}\cdot\delta q_{v}-\mathrm{LHF}\right)-\frac{L_{f}}{L_{v}}\int_{0}^{\widetilde{p_{s}}}d\widetilde{p}\cdot\delta q_{l}\\
& -\int_{0}^{\widetilde{p_{30}}}d\widetilde{p}\cdot\delta T
\end{aligned}
$$
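Before implementing this as a tensor operation, the level-30 temperature residual can be checked in numpy with toy values (names and magnitudes are illustrative; every tendency is assumed already normalized to W/m2):

```python
import numpy as np

# Sketch of the level-30 temperature residual, mirroring the equation above.
L_F, L_V = 3.337e5, 2.501e6   # latent heats of fusion and vaporization [J/kg]
L_S = L_F + L_V               # latent heat of sublimation [J/kg]
rng = np.random.default_rng(2)
dp_tild = rng.uniform(0.5, 1.5, 30)
dq_v = rng.normal(size=30)    # vapor tendencies [W/m2]
dq_l = rng.normal(size=30)    # liquid tendencies [W/m2]
dT = rng.normal(size=29)      # temperature tendencies, levels 1..29 [W/m2]
dT_ke = rng.normal(size=30)   # turbulent KE dissipative heating [W/m2]
rad, shf, lhf = 100.0, 20.0, 80.0
phas = (L_F / L_V) * 5.0      # Lf/Lv * (snow terms - precip terms)

# Residual of the energy budget, term by term as in the equation above:
dT30 = (phas + rad + shf + np.sum(dp_tild * dT_ke)
        - (L_S / L_V) * (np.sum(dp_tild * dq_v) - lhf)
        - (L_F / L_V) * np.sum(dp_tild * dq_l)
        - np.sum(dp_tild[:29] * dT)) / dp_tild[29]
print(np.isfinite(dT30))  # True
```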
```
# tgb - 2/5/2019 - Change to adapt to new input format
class EntConsLay(Layer):
def __init__(self, fsub, fdiv, normq, hyai, hybi, output_dim, **kwargs):
self.fsub = fsub # Subtraction for normalization of inputs
self.fdiv = fdiv # Division for normalization of inputs
self.normq = normq # Normalization of output's water concentration
self.hyai = hyai # CAM constants to calculate d_pressure
self.hybi = hybi # CAM constants to calculate d_pressure
self.output_dim = output_dim # Dimension of output
super().__init__(**kwargs)
def build(self, input_shape):
super().build(input_shape) # Be sure to call this somewhere!
# tgb - 2/6/2019 - following https://github.com/keras-team/keras/issues/4871
def get_config(self):
config = {'fsub': list(self.fsub), 'fdiv': list(self.fdiv),
'normq': list(self.normq), 'hyai': list(self.hyai),
'hybi': list(self.hybi), 'output_dim': self.output_dim}
base_config = super(EntConsLay, self).get_config()
return dict(list(base_config.items()) + list(config.items()))
def call(self, arrs):
# arrs (for arrays) is a list with
# [inputs=inp and the output of the previous layer=massout]
# inputs will be [n_sample, 304 = 30*10+4] with
# [QBP, QCBP, QIBP, TBP, VBP, Qdt_adiabatic, QCdt_adiabatic, QIdt_adiabatic,
# Tdt_adiabatic, Vdt_adiabatic, PS, SOLIN, SHFLX, LHFLX]
        # outputs of the previous (mass conservation) layer will be [n_samples, 157 = 30*5+8-1] with
# [DELQ, DELCLDLIQ, DELCLDICE,
# TPHYSTND\{TPHYSTND AT LOWEST LVL}, DTVKE,
# FSNT, FSNS, FLNT, FLNS, PRECT, PRECTEND, PRECST, PRECSTEN]
# Split between the inputs inp & the output of the densely connected
# neural network, massout
inp, massout = arrs
# 0) Constants
G = 9.80616; # Reference gravity constant [m.s-2]
        L_F = 3.337e5; # Latent heat of fusion of water [J.kg-1]
        L_V = 2.501e6; # Latent heat of vaporization of water [J.kg-1]
        L_S = L_F + L_V; # Latent heat of sublimation of water [J.kg-1]
        P0 = 1e5; # Reference surface pressure [Pa]
# 1) Get non-dimensional pressure differences (p_tilde above)
        # In the input vector, PS is the 301st element after
        # the first elements = [QBP, ..., Vdt_adiabatic with shape 30*10=300]
PS = tfm.add( tfm.multiply( inp[:,300], self.fdiv[300]), self.fsub[300])
# Reference for calculation of d_pressure is cbrain/models.py (e.g. QLayer)
P = tfm.add( tfm.multiply( P0, self.hyai), \
tfm.multiply( PS[:,None], self.hybi))
dP = tfm.subtract( P[:, 1:], P[:, :-1])
# norm_output = dp_norm * L_V/G so dp_norm = norm_output * G/L_V
dP_NORM = tfm.divide( \
tfm.multiply(self.normq[:30], \
G),\
L_V)
# dp_tilde = dp/dp_norm
dP_TILD = tfm.divide( dP, dP_NORM)
# 2) Calculate net energy input from phase change and precipitation
# PHAS = Lf/Lv*((PRECST+PRECSTEN)-(PRECT+PRECTEND))
PHAS = tfm.divide( tfm.multiply( tfm.subtract(\
tfm.add( massout[:,155], massout[:,156]),\
tfm.add( massout[:,153], massout[:,154])),\
L_F),\
L_V)
# 3) Calculate net energy input from radiation, sensible heat flux and turbulent KE
# 3.1) RAD = FSNT-FSNS-FLNT+FLNS
RAD = tfm.add(\
tfm.subtract( massout[:,149], massout[:,150]),\
tfm.subtract( massout[:,152], massout[:,151]))
# 3.2) Unnormalize sensible heat flux
SHF = tfm.add( tfm.multiply( inp[:,302], self.fdiv[302]), self.fsub[302])
        # 3.3) Net turbulent kinetic energy dissipative heating is the
        # column-integrated turbulent kinetic energy dissipative heating
KEDVEC = tfm.multiply( dP_TILD, massout[:, 119:149])
KEDINT = tfm.reduce_sum( KEDVEC, axis=1)
# 4) Calculate tendency of normalized column water vapor due to phase change
# 4.1) Unnormalize latent heat flux
LHF = tfm.add( tfm.multiply( inp[:,303], self.fdiv[303]), self.fsub[303])
# 4.2) Column water vapor is the column integral of specific humidity
PHQVEC = tfm.multiply( dP_TILD, massout[:, :30])
PHQINT = tfm.reduce_sum( PHQVEC, axis=1)
# 4.3) Multiply by L_S/L_V to normalize (explanation above)
SPDQINT = tfm.divide( tfm.multiply( tfm.subtract(\
PHQINT, LHF),\
L_S),\
L_V)
# 5) Same operation for liquid water tendency but multiplied by L_F/L_V
SPDQCINT = tfm.divide( tfm.multiply(\
tfm.reduce_sum(\
tfm.multiply( dP_TILD, massout[:, 30:60]),\
axis=1),\
L_F),\
L_V)
# 6) Same operation for temperature but only integrate from level 1 to level 29
DTINT = tfm.reduce_sum( tfm.multiply( dP_TILD[:, :29], massout[:, 90:119]), axis=1)
# 7) Now calculate dT30 as a residual
dT30 = tfm.divide(tfm.add(tfm.add(tfm.add(tfm.add(tfm.add(tfm.add(\
PHAS,RAD),\
SHF),\
KEDINT),\
tfm.negative( SPDQINT)),\
tfm.negative( SPDQCINT)),\
tfm.negative( DTINT)),\
dP_TILD[:, 29])
dT30 = tf.expand_dims(dT30,1)
out = tf.concat([massout[:, :119], dT30, massout[:, 119:]], 1)
return out
def compute_output_shape(self, input_shape):
return (input_shape[0][0], self.output_dim)
# and is ready to be used in the cost function
```
#### Step 3: Implement custom loss function
$$
\mathrm{Loss}\left[\mathrm{W^{2}.m^{-4}}\right]=\alpha\cdot\mathrm{MSE}+\left(1-\alpha\right)\left(\mathrm{Enthalpy\ residual}^{2}+\mathrm{Mass\ residual}^{2}\right)\ \ |\ \ \alpha\in[0,1]
$$
tgb - 2/5/2019 - Inspired by
1) https://stackoverflow.com/questions/46858016/keras-custom-loss-function-to-pass-arguments-other-than-y-true-and-y-pred for the custom loss function
2) https://stackoverflow.com/questions/46464549/keras-custom-loss-function-accessing-current-input-pattern for using the inputs in the custom loss function
Uses the function massent_check as a reference for the squared energy and mass residuals.
```
def customLoss(input_tensor,fsub,fdiv,normq,hyai,hybi,alpha = 0.5):
# tgb - 2/5/2019 - Loss function written above
def lossFunction(y_true,y_pred):
loss = tfm.multiply(alpha, mse(y_true, y_pred))
loss += tfm.multiply(tfm.subtract(1.0,alpha), \
massent_res(input_tensor,y_pred,fsub,fdiv,normq,hyai,hybi))
return loss
# tgb - 2/5/2019 - Mass and enthalpy residual function
# Adapted from massent_check by converting numpy to tensorflow
def massent_res(x,y,fsub,fdiv,normq,hyai,hybi):
# 0) Constants
G = 9.80616; # Reference gravity constant [m.s-2]
        L_F = 3.337e5; # Latent heat of fusion of water [J.kg-1]
        L_V = 2.501e6; # Latent heat of vaporization of water [J.kg-1]
        L_S = L_F+L_V; # Latent heat of sublimation of water [J.kg-1]
P0 = 1e5; # Reference surface pressure [Pa]
# WATER&ENTHALPY) Get non-dimensional pressure differences (p_tilde above)
        # In the input vector, PS is the 301st element after
        # the first elements = [QBP, ..., Vdt_adiabatic with shape 30*10=300]
PS = tfm.add( tfm.multiply( x[:,300], fdiv[300]), fsub[300])
# Reference for calculation of d_pressure is cbrain/models.py (e.g. QLayer)
P = tfm.add( tfm.multiply( P0, hyai), \
tfm.multiply( PS[:,None], hybi))
dP = tfm.subtract( P[:, 1:], P[:, :-1])
# norm_output = dp_norm * L_V/G so dp_norm = norm_output * G/L_V
dP_NORM = tfm.divide( \
tfm.multiply(normq[:30], \
G),\
L_V)
# dp_tilde = dp/dp_norm
dP_TILD = tfm.divide( dP, dP_NORM)
# WATER.1) Calculate water vertical integral from level 1 to level 30
WATVEC = tfm.multiply( dP_TILD, tfm.add(tfm.add(y[:, :30],\
y[:, 30:60]),\
y[:, 60:90]))
WATINT = tfm.reduce_sum( WATVEC, axis=1)
# WATER.2) Calculate forcing on the right-hand side (Net Evaporation-Precipitation)
# E-P is already normalized to units W.m-2 in the output vector
# so all we need to do is input-unnormalize LHF that is taken from the input vector
LHF = tfm.add( tfm.multiply( x[:,303], fdiv[303]), fsub[303])
# Note that total precipitation = PRECT + 1e-3*PRECTEND in the CAM model
# PRECTEND already multiplied by 1e-3 in output vector so no need to redo it
PREC = tfm.add( y[:, 154], y[:, 155])
# WATER.FINAL) Residual = E-P-DWATER/DT
WATRES = tfm.add(tfm.add(LHF,\
tfm.negative(PREC)),\
tfm.negative(WATINT))
# ENTHALPY.1) Calculate net energy input from phase change and precipitation
# PHAS = Lf/Lv*((PRECST+PRECSTEN)-(PRECT+PRECTEND))
PHAS = tfm.divide( tfm.multiply( tfm.subtract(\
tfm.add( y[:,156], y[:,157]),\
tfm.add( y[:,154], y[:,155])),\
L_F),\
L_V)
# ENTHALPY.2) Calculate net energy input from radiation, sensible heat flux and turbulent KE
# 2.1) RAD = FSNT-FSNS-FLNT+FLNS
RAD = tfm.add(\
tfm.subtract( y[:,150], y[:,151]),\
tfm.subtract( y[:,153], y[:,152]))
# 2.2) Unnormalize sensible heat flux
SHF = tfm.add( tfm.multiply( x[:,302], fdiv[302]), fsub[302])
        # 2.3) Net turbulent kinetic energy dissipative heating is the
        # column-integrated turbulent kinetic energy dissipative heating
KEDVEC = tfm.multiply( dP_TILD, y[:, 120:150])
KEDINT = tfm.reduce_sum( KEDVEC, axis=1)
# ENTHALPY.3) Calculate tendency of normalized column water vapor due to phase change
# 3.1) Column water vapor is the column integral of specific humidity
PHQVEC = tfm.multiply( dP_TILD, y[:, :30])
PHQINT = tfm.reduce_sum( PHQVEC, axis=1)
# 3.2) Multiply by L_S/L_V to normalize (explanation above)
SPDQINT = tfm.divide( tfm.multiply( tfm.subtract(\
PHQINT, LHF),\
L_S),\
L_V)
# ENTHALPY.4) Same operation for liquid water tendency but multiplied by L_F/L_V
SPDQCINT = tfm.divide( tfm.multiply(\
tfm.reduce_sum(\
tfm.multiply( dP_TILD, y[:, 30:60]),\
axis=1),\
L_F),\
L_V)
# ENTHALPY.5) Same operation for temperature tendency
DTINT = tfm.reduce_sum( tfm.multiply( dP_TILD[:, :30], y[:, 90:120]), axis=1)
# ENTHALPY.FINAL) Residual = SPDQ+SPDQC+DTINT-RAD-SHF-PHAS
ENTRES = tfm.add(tfm.add(tfm.add(tfm.add(tfm.add(tfm.add(SPDQINT,\
SPDQCINT),\
DTINT),\
tfm.negative(RAD)),\
tfm.negative(SHF)),\
tfm.negative(PHAS)),\
tfm.negative(KEDINT))
# Return sum of water and enthalpy square residuals
return tfm.add( tfm.square(WATRES), tfm.square(ENTRES))
return lossFunction
```
#### Step 4: Formulate mass/energy conserving model, and unconstrained model
We start with inputs [QBP, QCBP, QIBP, TBP, VBP, Qdt_adiabatic, QCdt_adiabatic, QIdt_adiabatic, Tdt_adiabatic, Vdt_adiabatic, PS, SOLIN, SHFLX, LHFLX] of shape 304.
We then stack a few dense layers whose sizes are powers of 2.
For now, the only "physically-constraining" layers are the mass and energy conservation layers, which take an input of shape 156.
The mass layer outputs a vector of shape 157.
The energy layer then outputs the final vector of shape 158,
i.e. [PHQ, PHCLDLIQ, PHCLDICE, TPHYSTND, DTVKE, FSNT, FSNS, FLNT, FLNS, PRECT, PRECTEND, PRECST, PRECSTEN]
```
# Conservative model with 5 dense layers
inp = Input(shape=(304,))
densout = Dense(512, activation='relu')(inp)
for i in range (4):
densout = Dense(512, activation='relu')(densout)
densout = Dense(156, activation='relu')(densout)
massout = MasConsLay(
input_shape=(156,), fsub=fsub, fdiv=fdiv, normq=normq,\
hyai=hyai, hybi=hybi, output_dim = 157
)([inp, densout])
out = EntConsLay(
input_shape=(157,), fsub=fsub, fdiv=fdiv, normq=normq,\
hyai=hyai, hybi=hybi, output_dim = 158
)([inp, massout])
mod_cons_5dens = Model(inputs=inp, outputs=out)
# Same model, unconstrained
inp = Input(shape=(304,))
densout = Dense(512, activation='relu')(inp)
for i in range (4):
densout = Dense(512, activation='relu')(densout)
out = Dense(158, activation='relu')(densout)
mod_uncons_5dens = Model(inputs=inp, outputs=out)
# LeakyReLU version
inp = Input(shape=(304,))
densout = Dense(512, activation='linear')(inp)
densout = LeakyReLU(alpha=0.3)(densout)
for i in range (4):
densout = Dense(512, activation='linear')(densout)
densout = LeakyReLU(alpha=0.3)(densout)
densout = Dense(158, activation='linear')(densout)
out = LeakyReLU(alpha=0.3)(densout)
lru_uncons_5dens = Model(inputs=inp, outputs=out)
# Using a Lagrange multipler in the loss function
# alpha = 0.5 (equal contrib from MSE and residual)
inp05 = Input(shape=(304,))
densout = Dense(512, activation='relu')(inp05)
for i in range (4):
densout = Dense(512, activation='relu')(densout)
out = Dense(158, activation='relu')(densout)
mod_lagr05_5dens = Model(inputs=inp05, outputs=out)
# Leaky ReLU version
inp05 = Input(shape=(304,))
densout = Dense(512, activation='linear')(inp05)
densout = LeakyReLU(alpha=0.3)(densout)
for i in range (4):
densout = Dense(512, activation='linear')(densout)
densout = LeakyReLU(alpha=0.3)(densout)
densout = Dense(158, activation='linear')(densout)
out = LeakyReLU(alpha=0.3)(densout)
lru_lagr05_5dens = Model(inputs=inp05, outputs=out)
# Leaky ReLU version
inp001 = Input(shape=(304,))
densout = Dense(512, activation='linear')(inp001)
densout = LeakyReLU(alpha=0.3)(densout)
for i in range (4):
densout = Dense(512, activation='linear')(densout)
densout = LeakyReLU(alpha=0.3)(densout)
densout = Dense(158, activation='linear')(densout)
out = LeakyReLU(alpha=0.3)(densout)
lru_lagr001_5dens = Model(inputs=inp001, outputs=out)
# Leaky ReLU version
inp099 = Input(shape=(304,))
densout = Dense(512, activation='linear')(inp099)
densout = LeakyReLU(alpha=0.3)(densout)
for i in range (4):
densout = Dense(512, activation='linear')(densout)
densout = LeakyReLU(alpha=0.3)(densout)
densout = Dense(158, activation='linear')(densout)
out = LeakyReLU(alpha=0.3)(densout)
lru_lagr099_5dens = Model(inputs=inp099, outputs=out)
# alpha = 0.01 (mostly residual)
inp001 = Input(shape=(304,))
densout = Dense(512, activation='relu')(inp001)
for i in range (4):
densout = Dense(512, activation='relu')(densout)
out = Dense(158, activation='relu')(densout)
mod_lagr001_5dens = Model(inputs=inp001, outputs=out)
# alpha = 0.99 (mostly mse)
inp099 = Input(shape=(304,))
densout = Dense(512, activation='relu')(inp099)
for i in range (4):
densout = Dense(512, activation='relu')(densout)
out = Dense(158, activation='relu')(densout)
mod_lagr099_5dens = Model(inputs=inp099, outputs=out)
# USING ELU instead of RELU
# Conservative model with 5 dense layers
inp = Input(shape=(304,))
densout = Dense(512, activation='elu')(inp)
for i in range (4):
densout = Dense(512, activation='elu')(densout)
densout = Dense(156, activation='elu')(densout)
massout = MasConsLay(
input_shape=(156,), fsub=fsub, fdiv=fdiv, normq=normq,\
hyai=hyai, hybi=hybi, output_dim = 157
)([inp, densout])
out = EntConsLay(
input_shape=(157,), fsub=fsub, fdiv=fdiv, normq=normq,\
hyai=hyai, hybi=hybi, output_dim = 158
)([inp, massout])
elu_cons_5dens = Model(inputs=inp, outputs=out)
```
tgb - 2/6/2019 - Added the leaky ReLU using the following issue:
https://github.com/keras-team/keras/issues/117
```
# Using LeakyRELU instead of RELU
# Conserving model with 5 dense layers
inp = Input(shape=(304,))
densout = Dense(512, activation='linear')(inp)
densout = LeakyReLU(alpha=0.3)(densout)
for i in range (4):
densout = Dense(512, activation='linear')(densout)
densout = LeakyReLU(alpha=0.3)(densout)
densout = Dense(156, activation='linear')(densout)
densout = LeakyReLU(alpha=0.3)(densout)
massout = MasConsLay(
input_shape=(156,), fsub=fsub, fdiv=fdiv, normq=normq,\
hyai=hyai, hybi=hybi, output_dim = 157
)([inp, densout])
out = EntConsLay(
input_shape=(157,), fsub=fsub, fdiv=fdiv, normq=normq,\
hyai=hyai, hybi=hybi, output_dim = 158
)([inp, massout])
lru_cons_5dens = Model(inputs=inp, outputs=out)
```
tgb - 2/6/2019 - It really looks like LeakyReLU (for now with alpha=0.3) is by far the best activation function for minimizing the MSE. So we'll:
(1) Play with the optimizer using this one.
(2) Re-write all other networks using LeakyReLU.
```
# Playing with the optimizer
inp = Input(shape=(304,))
densout = Dense(512, activation='linear')(inp)
densout = LeakyReLU(alpha=0.3)(densout)
for i in range (4):
densout = Dense(512, activation='linear')(densout)
densout = LeakyReLU(alpha=0.3)(densout)
densout = Dense(156, activation='linear')(densout)
densout = LeakyReLU(alpha=0.3)(densout)
massout = MasConsLay(
input_shape=(156,), fsub=fsub, fdiv=fdiv, normq=normq,\
hyai=hyai, hybi=hybi, output_dim = 157
)([inp, densout])
out = EntConsLay(
input_shape=(157,), fsub=fsub, fdiv=fdiv, normq=normq,\
hyai=hyai, hybi=hybi, output_dim = 158
)([inp, massout])
#adad_cons_5dens = Model(inputs=inp, outputs=out)
#adam_cons_5dens = Model(inputs=inp, outputs=out)
lru_cons_5dens = Model(inputs=inp, outputs=out)
```
tgb - 2/5/2019 - Here, I compile the standard models with .compile(optimizer,loss)
or the model with custom loss function with .compile(loss=custom_loss_wrapper(input_tensor), optimizer=XX)
See https://stackoverflow.com/questions/46464549/keras-custom-loss-function-accessing-current-input-pattern for more information.
```
lru_lagr05_5dens.compile(loss=customLoss(inp05,fsub,fdiv,normq,hyai,hybi,alpha = 0.5),\
optimizer='rmsprop')
lru_lagr001_5dens.compile(loss=customLoss(inp001,fsub,fdiv,normq,hyai,hybi,alpha = 0.01),\
optimizer='rmsprop')
lru_lagr099_5dens.compile(loss=customLoss(inp099,fsub,fdiv,normq,hyai,hybi,alpha = 0.99),\
optimizer='rmsprop')
lru_uncons_5dens.compile('rmsprop','mse')
#mod_cons_5dens.compile('adadelta','mse')
#mod_uncons_5dens.compile('adadelta','mse') # careful, apparently compile does NOT reset weights and biases
# See https://stackoverflow.com/questions/47995324/does-model-compile-initialize-all-the-weights-and-biases-in-keras-tensorflow
#adad_cons_5dens.compile('adadelta','mse')
#adam_cons_5dens.compile('adam','mse')
lru_cons_5dens.compile('rmsprop','mse')
lru_lagr099_5dens.summary()
#hist_cons_5dens = mod_cons_5dens.fit_generator(gen, train_gen_obj.n_batches, epochs=20, \
# validation_data=validgen, validation_steps= valid_gen_obj.n_batches)
hlru_uncons_5dens = lru_uncons_5dens.fit_generator(gen, train_gen_obj.n_batches, epochs=20, \
validation_data=validgen, validation_steps= valid_gen_obj.n_batches)
hlru_lagr001_5dens = lru_lagr001_5dens.fit_generator(gen, train_gen_obj.n_batches, epochs=20, \
validation_data=validgen, validation_steps= valid_gen_obj.n_batches)
hlru_lagr05_5dens = lru_lagr05_5dens.fit_generator(gen, train_gen_obj.n_batches, epochs=20, \
validation_data=validgen, validation_steps= valid_gen_obj.n_batches)
hlru_lagr099_5dens = lru_lagr099_5dens.fit_generator(gen, train_gen_obj.n_batches, epochs=20, \
validation_data=validgen, validation_steps= valid_gen_obj.n_batches)
helu_cons_5dens = elu_cons_5dens.fit_generator(gen, train_gen_obj.n_batches, epochs=20, \
validation_data=validgen, validation_steps= valid_gen_obj.n_batches)
#hadad_cons_5dens = adad_cons_5dens.fit_generator(gen, train_gen_obj.n_batches, epochs=20, \
# validation_data=validgen, validation_steps= valid_gen_obj.n_batches)
#hadam_cons_5dens = adam_cons_5dens.fit_generator(gen, train_gen_obj.n_batches, epochs=20, \
# validation_data=validgen, validation_steps= valid_gen_obj.n_batches)
hrmsp_cons_5dens = lru_cons_5dens.fit_generator(gen, train_gen_obj.n_batches, epochs=20, \
validation_data=validgen, validation_steps= valid_gen_obj.n_batches)
import matplotlib.pyplot as plt
fig = plt.figure()
ax = plt.subplot(111)
for index in range (5):
if index==0: hdict = hrmsp_cons_5dens.history; colo = 'bo'; col = 'b'; lab = 'C';
elif index==1: hdict = hlru_uncons_5dens.history; colo = 'ro'; col = 'r'; lab = 'U';
elif index==2: hdict = hlru_lagr001_5dens.history; colo = 'go'; col = 'g'; lab = 'W001';
elif index==3: hdict = hlru_lagr05_5dens.history; colo = 'co'; col = 'c'; lab = 'W05';
elif index==4: hdict = hlru_lagr099_5dens.history; colo = 'mo'; col = 'm'; lab = 'W099';
train_loss_values = hdict['loss']
valid_loss_values = hdict['val_loss']
epochs = range(1, len(train_loss_values) + 1)
ax.plot(epochs, train_loss_values, colo, label=lab+' Train')
ax.plot(epochs, valid_loss_values, col, label=lab+' Valid')
#plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss (W/m^2)')
plt.ylim((110, 130))
# https://stackoverflow.com/questions/4700614/how-to-put-the-legend-out-of-the-plot
# for legend at the right place
#ax.legend(loc='upper center', bbox_to_anchor=(0.5, 1.05),
# ncol=5, fancybox=True, shadow=True);
plt.show()
import matplotlib.pyplot as plt
fig = plt.figure(); ax = plt.subplot(111)
for index in range (2):
if index==2: hdict = hist_cons_5dens.history; colo = 'bo'; col = 'b'; lab = 'Adadelta';
elif index==0: hdict = hadam_cons_5dens.history; colo = 'ro'; col = 'r'; lab = 'Adam';
elif index==1: hdict = hrmsp_cons_5dens.history; colo = 'go'; col = 'g'; lab = 'RmsProp';
train_loss_values = hdict['loss']; valid_loss_values = hdict['val_loss']
epochs = range(1, len(train_loss_values) + 1)
ax.plot(epochs, train_loss_values, colo, label=lab+' Train')
ax.plot(epochs, valid_loss_values, col, label=lab+' Valid')
plt.xlabel('Epochs'); plt.ylabel('Loss (W/m^2)'); plt.ylim((100, 200))
ax.legend(loc='upper center', bbox_to_anchor=(0.5, 1.05),
ncol=5, fancybox=True, shadow=True); plt.show()
# import matplotlib.pyplot as plt
fig = plt.figure(num=None, figsize=(20, 13.5), dpi=80, facecolor='w', edgecolor='k')
ax = plt.subplot(111)
for index in range (3):
if index==0: hdict = hist_cons_5dens.history; colo = 'bo'; col = 'b'; lab = 'ReLU';
elif index==2: hdict = helu_cons_5dens.history; colo = 'ro'; col = 'r'; lab = 'ExpLU';
elif index==1: hdict = hlru_cons_5dens.history; colo = 'go'; col = 'g'; lab = 'LeakyReLU';
train_loss_values = hdict['loss']; valid_loss_values = hdict['val_loss']
epochs = range(1, len(train_loss_values) + 1)
ax.plot(epochs, train_loss_values, colo, label=lab+' Train')
ax.plot(epochs, valid_loss_values, col, label=lab+' Valid')
plt.xlabel('Epochs', fontsize=25); plt.ylabel('Loss (W per m2)', fontsize=25);
plt.xticks(fontsize=25); plt.yticks(fontsize=25)
plt.ylim((0, 500))
ax.legend(loc='upper center', bbox_to_anchor=(0.5, 1.05),
ncol=5, fancybox=True, shadow=True, fontsize=25); plt.show()
pred_uncons = mod_uncons_5dens.predict_on_batch(x)
pred_cons = mod_cons_5dens.predict_on_batch(x)
ind_test = 150;
plt.plot(pred_uncons[ind_test,:], label='unconstrained')
plt.plot(pred_cons[ind_test,:], label='conserving')
plt.plot(y[ind_test,:], label='truth')
plt.legend();
print('truth=',y[ind_test,140:])
print('unconstrained=',pred_uncons[ind_test,140:])
print('conserving=',pred_cons[ind_test,140:])
```
#### Step 5: Check energy and mass conservation for the predictions
If we coded the mass/enthalpy conservation layers properly:
- `pred_cons`, from the mass/enthalpy-conserving `model_cons`, should conserve mass and enthalpy;
- `pred_uncons`, from a "naive" dense network, should not a priori conserve mass or enthalpy.

The function below is directly adapted from the tested NumPy mass/enthalpy conservation layers that were used to develop the TensorFlow layers.
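As a stripped-down illustration of the water-budget idea (a hypothetical two-level column; the full check below uses the real 30-level output layout and also checks enthalpy):

```
import numpy as np

# Toy column water budget: residual = E - P - d(water)/dt, where d(water)/dt
# is the pressure-weighted column integral of the moistening tendencies.
# All values here are hypothetical and chosen to balance exactly.
dp_tilde = np.array([[0.4, 0.6]])   # non-dimensional layer thicknesses
dq       = np.array([[1.0, 2.0]])   # total moistening tendency per layer
lhf      = np.array([2.0])          # evaporation forcing (E)
prec     = np.array([0.4])          # precipitation sink (P)

watint = np.sum(dp_tilde * dq, axis=1)  # column-integrated water tendency
watres = lhf - prec - watint            # vanishes for a conserving model
print(watres)
```

A conserving model drives this residual to (numerical) zero for every sample; an unconstrained network generally does not.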
```
def massent_check(x,y,fsub=fsub,fdiv=fdiv,normq=normq,hyai=hyai,hybi=hybi,outtype="graph"):
import numpy as np
# 0) Constants
G = 9.80616; # Reference gravity constant [m.s-2]
    L_F = 3.337e5; # Latent heat of fusion of water [J.kg-1]
    L_V = 2.501e6; # Latent heat of vaporization of water [J.kg-1]
    L_S = L_F+L_V; # Latent heat of sublimation of water [J.kg-1]
P0 = 1e5; # Reference surface pressure [Pa]
# WATER&ENTHALPY) Get non-dimensional pressure differences (p_tilde above)
# In the input vector, PS is the 151st element after
# the first elements = [QBP, ..., VBP with shape 30*5=150]
PS = np.add( np.multiply( x[:,300], fdiv[300]), fsub[300])
# Reference for calculation of d_pressure is cbrain/models.py (e.g. QLayer)
P = np.add( np.multiply( P0, hyai), \
np.multiply( PS[:,None], hybi))
dP = np.subtract( P[:, 1:], P[:, :-1])
# norm_output = dp_norm * L_V/G so dp_norm = norm_output * G/L_V
dP_NORM = np.divide( \
np.multiply(normq[:30], \
G),\
L_V)
# dp_tilde = dp/dp_norm
dP_TILD = np.divide( dP, dP_NORM)
# WATER.1) Calculate water vertical integral from level 1 to level 30
WATVEC = np.multiply( dP_TILD, y[:, :30] + y[:, 30:60] + y[:, 60:90])
WATINT = np.sum( WATVEC, axis=1)
# WATER.2) Calculate forcing on the right-hand side (Net Evaporation-Precipitation)
# E-P is already normalized to units W.m-2 in the output vector
# so all we need to do is input-unnormalize LHF that is taken from the input vector
LHF = np.add( np.multiply( x[:,303], fdiv[303]), fsub[303])
# Note that total precipitation = PRECT + 1e-3*PRECTEND in the CAM model
# PRECTEND already multiplied by 1e-3 in output vector so no need to redo it
PREC = np.add( y[:, 154], y[:, 155])
# WATER.FINAL) Residual = E-P-DWATER/DT
WATRES = LHF-PREC-WATINT
# ENTHALPY.1) Calculate net energy input from phase change and precipitation
# PHAS = Lf/Lv*((PRECST+PRECSTEN)-(PRECT+PRECTEND))
PHAS = np.divide( np.multiply( np.subtract(\
np.add( y[:,156], y[:,157]),\
np.add( y[:,154], y[:,155])),\
L_F),\
L_V)
# ENTHALPY.2) Calculate net energy input from radiation, sensible heat flux and turbulent KE
# 2.1) RAD = FSNT-FSNS-FLNT+FLNS
RAD = np.add(\
np.subtract( y[:,150], y[:,151]),\
np.subtract( y[:,153], y[:,152]))
# 2.2) Unnormalize sensible heat flux
SHF = np.add( np.multiply( x[:,302], fdiv[302]), fsub[302])
# 2.3) Net turbulent kinetic energy dissipative heating is the column-integrated
    # turbulent kinetic energy dissipative heating
KEDVEC = np.multiply( dP_TILD, y[:, 120:150])
KEDINT = np.sum( KEDVEC, axis=1)
# ENTHALPY.3) Calculate tendency of normalized column water vapor due to phase change
# 3.1) Column water vapor is the column integral of specific humidity
PHQVEC = np.multiply( dP_TILD, y[:, :30])
PHQINT = np.sum( PHQVEC, axis=1)
# 3.2) Multiply by L_S/L_V to normalize (explanation above)
SPDQINT = np.divide( np.multiply( np.subtract(\
PHQINT, LHF),\
L_S),\
L_V)
# ENTHALPY.4) Same operation for liquid water tendency but multiplied by L_F/L_V
SPDQCINT = np.divide( np.multiply(\
np.sum(\
np.multiply( dP_TILD, y[:, 30:60]),\
axis=1),\
L_F),\
L_V)
# ENTHALPY.5) Same operation for temperature tendency
DTINT = np.sum( np.multiply( dP_TILD[:, :30], y[:, 90:120]), axis=1)
# ENTHALPY.FINAL) Residual = SPDQ+SPDQC+DTINT-RAD-SHF-PHAS
ENTRES = SPDQINT+SPDQCINT+DTINT-RAD-SHF-PHAS-KEDINT
if outtype=="graph":
import matplotlib.pyplot as plt
from matplotlib import rc
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
plt.figure(num=None, figsize=(12, 3), dpi=80, facecolor='w', edgecolor='k')
plt.subplot(121)
plt.hist(WATRES)
plt.xlabel(r"$\mathrm{Water\ Residual\ \left[W.m^{-2}\right]}$", fontsize=16)
plt.ylabel(r'Number of samples', fontsize=16)
plt.xticks(fontsize=14); plt.yticks(fontsize=14)
plt.subplot(122)
plt.hist(ENTRES)
plt.xlabel(r"$\mathrm{Enthalpy\ Residual\ \left[W.m^{-2}\right]}$", fontsize=16)
plt.ylabel(r'Number of samples', fontsize=16)
plt.xticks(fontsize=14); plt.yticks(fontsize=14)
elif outtype=="list":
return WATRES,ENTRES
import matplotlib.pyplot as plt
from matplotlib import rc
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
plt.figure(num=None, figsize=(20, 13.5), dpi=80, facecolor='w', edgecolor='k')
XMAX = 180; bins = np.linspace(0, XMAX, 100)
xval, yval = next(validgen)
for index in range (5):
if index==0: pred = lru_uncons_5dens.predict_on_batch(xval); lab = 'U';
elif index==1: pred = lru_cons_5dens.predict_on_batch(xval); lab = 'C';
elif index==2: pred = lru_lagr001_5dens.predict_on_batch(xval); lab = 'W001';
elif index==3: pred = lru_lagr05_5dens.predict_on_batch(xval); lab = 'W05';
elif index==4: pred = lru_lagr099_5dens.predict_on_batch(xval); lab = 'W099';
res = np.mean((pred-yval)**2, axis=1);
ax = plt.subplot(5,1,index+1)
ax.hist(res, bins, alpha=0.5, edgecolor='k', label = lab+' m%i' %np.mean(res)+' std%i' %np.std(res))
plt.ylabel(r'Nb samples', fontsize=16)
plt.xticks(fontsize=14); plt.yticks(fontsize=14)
plt.ylim((0, 60)); plt.xlim((0, XMAX));
ax.legend(loc='upper center', bbox_to_anchor=(0.5, 1.05),
ncol=5, fancybox=True, shadow=True, fontsize = 20);
import matplotlib.pyplot as plt
from matplotlib import rc
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
plt.figure(num=None, figsize=(20, 13.5), dpi=80, facecolor='w', edgecolor='k')
XMAX = 100; bins = np.linspace(-XMAX, XMAX, 100)
xval, yval = next(validgen)
for index in range (5):
if index==0: pred = lru_uncons_5dens.predict_on_batch(xval); lab = 'U';
elif index==1: pred = lru_cons_5dens.predict_on_batch(xval); lab = 'C';
elif index==2: pred = lru_lagr001_5dens.predict_on_batch(xval); lab = 'W001';
elif index==3: pred = lru_lagr05_5dens.predict_on_batch(xval); lab = 'W05';
elif index==4: pred = lru_lagr099_5dens.predict_on_batch(xval); lab = 'W099';
watres,entres = massent_check(xval,pred,fsub=fsub,fdiv=fdiv,normq=normq,hyai=hyai,hybi=hybi,outtype="list");
ax = plt.subplot(5,2,2*index+1)
ax.hist(watres, bins, alpha=0.5, edgecolor='k', label = lab+' mSQ%i' %np.mean(watres**2)+' stdSQ%i' %np.std(watres**2))
plt.ylabel(r'Nb samples', fontsize=16)
plt.xticks(fontsize=14); plt.yticks(fontsize=14)
plt.ylim((0, 50)); plt.xlim((-XMAX, XMAX));
ax.legend(loc='upper left', bbox_to_anchor=(0.5, 1.05),
ncol=5, fancybox=True, shadow=True, fontsize = 20);
ax = plt.subplot(5,2,2*index+2)
ax.hist(entres, bins, alpha=0.5, edgecolor='k', label = lab+' mSQ%i' %np.mean(entres**2)+' stdSQ%i' %np.std(entres**2))
plt.ylabel(r'Nb samples', fontsize=16)
plt.xticks(fontsize=14); plt.yticks(fontsize=14)
plt.ylim((0, 50)); plt.xlim((-XMAX, XMAX));
ax.legend(loc='upper left', bbox_to_anchor=(0.5, 1.05),
ncol=5, fancybox=True, shadow=True, fontsize = 20);
```
#### Step 6: Check positivity of water species
There are two necessary steps:
1) Load the water species concentrations "before physics" from the input vector and unnormalize them
2) Invert the output normalization to get the water concentrations "after physics"
$$
\delta q_{v,i,l}\left(p\right)=\frac{L_{v}\Delta p_{\mathrm{norm}}}{g}\frac{q_{v,i,l}^{a}\left(p\right)-q_{v,i,l}^{b}\left(p\right)}{\Delta t}\ \Rightarrow\ q_{v,i,l}^{a}\left(p\right)=q_{v,i,l}^{b}\left(p\right)+\frac{g\Delta t}{L_{v}\Delta p_{\mathrm{norm}}}\delta q_{v,i,l}\left(p\right)
$$
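Numerically, the inversion is just a rearrangement of the normalization. A quick round-trip check with hypothetical values (the real `watpos_check` below applies the same idea to the first 90 output channels):

```
import numpy as np

# Hypothetical stand-ins for the real normalization constants
G, L_V, dt = 9.80616, 2.501e6, 30 * 60
dp_norm = np.array([50.0, 80.0])      # stand-in for the normq-derived dp
q_before = np.array([1e-3, 2e-3])     # q^b, kg/kg

# Forward: normalized tendency as defined in the equation above
q_after_true = np.array([1.1e-3, 1.9e-3])
delta_q = (L_V * dp_norm / G) * (q_after_true - q_before) / dt

# Inverse: recover q^a from the network output
q_after = q_before + (G * dt / (L_V * dp_norm)) * delta_q
print(np.allclose(q_after, q_after_true))  # True
```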
```
def watpos_check(x,y,fsub=fsub,fdiv=fdiv,normq=normq,dt=30*60):
import numpy as np
# 1) Extract water species concentrations from inputs
QVB = np.add( np.multiply( x[:, :30], fdiv[ :30]), fsub[ :30])
QLB = np.add( np.multiply( x[:, 30:60], fdiv[ 30:60]), fsub[ 30:60])
QSB = np.add( np.multiply( x[:, 60:90], fdiv[ 60:90]), fsub[ 60:90])
# 2) Inverse output normalization and get water concentration after physics
QVA = QVB + np.divide( dt*y[:, :30] , normq[:30])
QLA = QLB + np.divide( dt*y[:, 30:60] , normq[:30])
QSA = QSB + np.divide( dt*y[:, 60:90] , normq[:30])
import matplotlib.pyplot as plt
from matplotlib import rc
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
plt.figure(num=None, figsize=(12, 6), dpi=80, facecolor='w', edgecolor='k')
plt.subplot(231)
plt.hist(1e3*QVA)
plt.xlabel(r"$\mathrm{Water\ vapor\ concentration\ \left[g/kg\right]}$", fontsize=16)
plt.ylabel(r'Number of samples', fontsize=16)
plt.xticks(fontsize=14); plt.yticks(fontsize=14)
plt.subplot(232)
plt.hist(1e3*QLA)
plt.xlabel(r"$\mathrm{Liquid\ water\ concentration\ \left[g/kg\right]}$", fontsize=16)
plt.xticks(fontsize=14); plt.yticks(fontsize=14)
plt.subplot(233)
plt.hist(1e3*QSA)
plt.xlabel(r"$\mathrm{Ice\ concentration\ \left[g/kg\right]}$", fontsize=16)
plt.xticks(fontsize=14); plt.yticks(fontsize=14)
plt.subplot(234)
plt.hist(1e3*(QVA-QVB))
plt.xlabel(r"$\mathrm{Water\ vapor\ change\ \left[g/kg\right]}$", fontsize=16)
plt.ylabel(r'Number of samples', fontsize=16)
plt.xticks(fontsize=14); plt.yticks(fontsize=14)
plt.subplot(235)
plt.hist(1e3*(QLA-QLB))
plt.xlabel(r"$\mathrm{Liquid\ water\ change\ \left[g/kg\right]}$", fontsize=16)
plt.xticks(fontsize=14); plt.yticks(fontsize=14)
plt.subplot(236)
plt.hist(1e3*(QSA-QSB))
plt.xlabel(r"$\mathrm{Ice\ change\ \left[g/kg\right]}$", fontsize=16)
plt.xticks(fontsize=14); plt.yticks(fontsize=14)
plt.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=0.5)
watpos_check(xval,yval,fsub,fdiv,normq,dt)
watpos_check(xval,pred6,fsub,fdiv,normq,dt)
```
#### Last step: Save trained models as h5 files
```
%cd $TRAINDIR/HDF5_DATA
!pwd
#lru_lagr05_5dens.save('lru_lagr05_5dens.h5')
#lru_lagr001_5dens.save('lru_lagr001_5dens.h5')
#lru_lagr099_5dens.save('lru_lagr099_5dens.h5')
#lru_uncons_5dens.save('lru_uncons_5dens.h5')
#lru_cons_5dens.save('lru_cons_5dens.h5')
rmsp_cons_5dens.save_weights('tmp.h5')
lru_cons_5dens.load_weights('tmp.h5')
```
---
# Nearest Neighbors
When exploring a large set of documents -- such as Wikipedia, news articles, StackOverflow, etc. -- it can be useful to get a list of related material. To find relevant documents you typically
* Decide on a notion of similarity
* Find the documents that are most similar
In the assignment you will
* Gain intuition for different notions of similarity and practice finding similar documents.
* Explore the tradeoffs with representing documents using raw word counts and TF-IDF
* Explore the behavior of different distance metrics by looking at the Wikipedia pages most similar to President Obama’s page.
**Note to Amazon EC2 users**: To conserve memory, make sure to stop all the other notebooks before running this notebook.
## Import necessary packages
As usual we need to first import the Python packages that we will need.
```
import graphlab
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
```
## Load Wikipedia dataset
We will be using the same dataset of Wikipedia pages that we used in the Machine Learning Foundations course (Course 1). Each element of the dataset consists of a link to the wikipedia article, the name of the person, and the text of the article (in lowercase).
```
wiki = graphlab.SFrame('people_wiki.gl')
wiki
```
## Extract word count vectors
As we have seen in Course 1, we can extract word count vectors using a GraphLab utility function. We add this as a column in `wiki`.
```
wiki['word_count'] = graphlab.text_analytics.count_words(wiki['text'])
wiki
```
## Find nearest neighbors
Let's start by finding the nearest neighbors of the Barack Obama page, using the word count vectors to represent the articles and Euclidean distance to measure distance. For this, we will again use a GraphLab Create implementation of nearest neighbor search.
```
model = graphlab.nearest_neighbors.create(wiki, label='name', features=['word_count'],
method='brute_force', distance='euclidean')
```
Let's look at the top 10 nearest neighbors by performing the following query:
```
model.query(wiki[wiki['name']=='Barack Obama'], label='name', k=10)
```
All of the 10 people are politicians, but about half of them have rather tenuous connections with Obama, other than the fact that they are politicians.
* Francisco Barrio is a Mexican politician, and a former governor of Chihuahua.
* Walter Mondale and Don Bonker are Democrats who made their careers in the late 1970s.
* Wynn Normington Hugh-Jones is a former British diplomat and Liberal Party official.
* Andy Anstett is a former politician in Manitoba, Canada.
Nearest neighbors with raw word counts got some things right, showing all politicians in the query result, but missed finer and important details.
For instance, let's find out why Francisco Barrio was considered a close neighbor of Obama. To do this, let's look at the most frequently used words in each of Barack Obama and Francisco Barrio's pages:
```
def top_words(name):
"""
Get a table of the most frequent words in the given person's wikipedia page.
"""
row = wiki[wiki['name'] == name]
word_count_table = row[['word_count']].stack('word_count', new_column_name=['word','count'])
return word_count_table.sort('count', ascending=False)
obama_words = top_words('Barack Obama')
obama_words
barrio_words = top_words('Francisco Barrio')
barrio_words
```
Let's extract the list of most frequent words that appear in both Obama's and Barrio's documents. We've so far sorted all words from Obama and Barrio's articles by their word frequencies. We will now use a dataframe operation known as **join**. The **join** operation is very useful when it comes to playing around with data: it lets you combine the content of two tables using a shared column (in this case, the word column). See [the documentation](https://dato.com/products/create/docs/generated/graphlab.SFrame.join.html) for more details.
For instance, running
```
obama_words.join(barrio_words, on='word')
```
will extract the rows from both tables that correspond to the common words.
```
combined_words = obama_words.join(barrio_words, on='word')
combined_words
```
Since both tables contained the column named `count`, SFrame automatically renamed one of them to prevent confusion. Let's rename the columns to tell which one is for which. By inspection, we see that the first column (`count`) is for Obama and the second (`count.1`) for Barrio.
```
combined_words = combined_words.rename({'count':'Obama', 'count.1':'Barrio'})
combined_words
```
**Note**. The **join** operation does not enforce any particular ordering on the shared column. So to obtain, say, the five common words that appear most often in Obama's article, sort the combined table by the Obama column. Don't forget `ascending=False` to display largest counts first.
```
combined_words.sort('Obama', ascending=False)
```
**Quiz Question**. Among the words that appear in both Barack Obama and Francisco Barrio, take the 5 that appear most frequently in Obama. How many of the articles in the Wikipedia dataset contain all of those 5 words?
Hint:
* Refer to the previous paragraph for finding the words that appear in both articles. Sort the common words by their frequencies in Obama's article and take the largest five.
* Each word count vector is a Python dictionary. For each word count vector in SFrame, you'd have to check if the set of the 5 common words is a subset of the keys of the word count vector. Complete the function `has_top_words` to accomplish the task.
- Convert the list of top 5 words into set using the syntax
```
set(common_words)
```
where `common_words` is a Python list. See [this link](https://docs.python.org/2/library/stdtypes.html#set) if you're curious about Python sets.
- Extract the list of keys of the word count dictionary by calling the [`keys()` method](https://docs.python.org/2/library/stdtypes.html#dict.keys).
- Convert the list of keys into a set as well.
- Use [`issubset()` method](https://docs.python.org/2/library/stdtypes.html#set) to check if all 5 words are among the keys.
* Now apply the `has_top_words` function on every row of the SFrame.
* Compute the sum of the result column to obtain the number of articles containing all the 5 top words.
```
common_words = ... # YOUR CODE HERE
def has_top_words(word_count_vector):
# extract the keys of word_count_vector and convert it to a set
unique_words = ... # YOUR CODE HERE
# return True if common_words is a subset of unique_words
# return False otherwise
return ... # YOUR CODE HERE
wiki['has_top_words'] = wiki['word_count'].apply(has_top_words)
# use has_top_words column to answer the quiz question
... # YOUR CODE HERE
```
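If the `issubset` mechanics are unfamiliar, here is the same check on plain dictionaries (the words and counts are hypothetical; this does not fill in the quiz answer):

```
word_counts = {'obama': 3, 'law': 2, 'president': 1}
common = ['obama', 'law']

# A set of words is "contained" in an article if it is a subset of the
# keys of that article's word count dictionary
print(set(common).issubset(set(word_counts.keys())))      # True
print(set(['obama', 'bush']).issubset(set(word_counts)))  # False
```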
**Checkpoint**. Check your `has_top_words` function on two random articles:
```
print 'Output from your function:', has_top_words(wiki[32]['word_count'])
print 'Correct output: True'
print 'Also check the length of unique_words. It should be 167'
print 'Output from your function:', has_top_words(wiki[33]['word_count'])
print 'Correct output: False'
print 'Also check the length of unique_words. It should be 188'
```
**Quiz Question**. Measure the pairwise distance between the Wikipedia pages of Barack Obama, George W. Bush, and Joe Biden. Which of the three pairs has the smallest distance?
Hint: To compute the Euclidean distance between two dictionaries, use `graphlab.toolkits.distances.euclidean`. Refer to [this link](https://dato.com/products/create/docs/generated/graphlab.toolkits.distances.euclidean.html) for usage.
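If you want to sanity-check the toolkit function, Euclidean distance between two sparse word-count dictionaries can also be computed directly in plain Python (hypothetical counts):

```
import math

def sparse_euclidean(a, b):
    """Euclidean distance between two {word: count} dictionaries."""
    words = set(a) | set(b)
    return math.sqrt(sum((a.get(w, 0) - b.get(w, 0)) ** 2 for w in words))

x = {'obama': 3, 'senate': 2}
y = {'senate': 2, 'bush': 1}
print(sparse_euclidean(x, y))  # sqrt(3^2 + 0^2 + 1^2) = sqrt(10)
```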
**Quiz Question**. Collect all words that appear both in Barack Obama and George W. Bush pages. Out of those words, find the 10 words that show up most often in Obama's page.
**Note.** Even though common words are swamping out important subtle differences, commonalities in rarer political words still matter on the margin. This is why politicians are being listed in the query result instead of musicians, for example. In the next subsection, we will introduce a different metric that will place greater emphasis on those rarer words.
## TF-IDF to the rescue
Much of the perceived commonality between Obama and Barrio was due to occurrences of extremely frequent words, such as "the", "and", and "his". So nearest neighbors is sometimes recommending plausible results for the wrong reasons.
To retrieve articles that are more relevant, we should focus more on rare words that don't happen in every article. **TF-IDF** (term frequency–inverse document frequency) is a feature representation that penalizes words that are too common. Let's use GraphLab Create's implementation of TF-IDF and repeat the search for the 10 nearest neighbors of Barack Obama:
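The idea behind the weighting can be sketched in a few lines of plain Python, using the common tf · log(N/df) form (GraphLab's exact variant may differ in smoothing details; the documents here are hypothetical):

```
import math

# Three toy documents as {word: count} dictionaries
docs = [
    {'the': 5, 'president': 2, 'senate': 1},
    {'the': 4, 'governor': 3},
    {'the': 6, 'senate': 2},
]
n_docs = len(docs)

# Document frequency: in how many documents each word appears
df = {}
for doc in docs:
    for word in doc:
        df[word] = df.get(word, 0) + 1

def tf_idf(doc):
    # Weight = term frequency * log(N / document frequency)
    return {w: tf * math.log(n_docs / df[w]) for w, tf in doc.items()}

weights = tf_idf(docs[0])
print(weights['the'])         # 0.0 -- appears everywhere, so idf = log(1) = 0
print(weights['senate'] > 0)  # True -- rarer words get positive weight
```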
```
wiki['tf_idf'] = graphlab.text_analytics.tf_idf(wiki['word_count'])
model_tf_idf = graphlab.nearest_neighbors.create(wiki, label='name', features=['tf_idf'],
method='brute_force', distance='euclidean')
model_tf_idf.query(wiki[wiki['name'] == 'Barack Obama'], label='name', k=10)
```
Let's determine whether this list makes sense.
* With a notable exception of Roland Grossenbacher, the other 8 are all American politicians who are contemporaries of Barack Obama.
* Phil Schiliro, Jesse Lee, Samantha Power, and Eric Stern worked for Obama.
Clearly, the results are more plausible with the use of TF-IDF. Let's take a look at the word vectors for Obama's and Schiliro's pages. Notice that the TF-IDF representation assigns a weight to each word, capturing the relative importance of that word in the document. Let us sort the words in Obama's article by their TF-IDF weights; we do the same for Schiliro's article as well.
```
def top_words_tf_idf(name):
row = wiki[wiki['name'] == name]
word_count_table = row[['tf_idf']].stack('tf_idf', new_column_name=['word','weight'])
return word_count_table.sort('weight', ascending=False)
obama_tf_idf = top_words_tf_idf('Barack Obama')
obama_tf_idf
schiliro_tf_idf = top_words_tf_idf('Phil Schiliro')
schiliro_tf_idf
```
Using the **join** operation we learned earlier, try your hands at computing the common words shared by Obama's and Schiliro's articles. Sort the common words by their TF-IDF weights in Obama's document.
The first 10 words should say: Obama, law, democratic, Senate, presidential, president, policy, states, office, 2011.
**Quiz Question**. Among the words that appear in both Barack Obama and Phil Schiliro, take the 5 that have largest weights in Obama. How many of the articles in the Wikipedia dataset contain all of those 5 words?
```
common_words = ... # YOUR CODE HERE
def has_top_words(word_count_vector):
# extract the keys of word_count_vector and convert it to a set
unique_words = ... # YOUR CODE HERE
# return True if common_words is a subset of unique_words
# return False otherwise
return ... # YOUR CODE HERE
wiki['has_top_words'] = wiki['word_count'].apply(has_top_words)
# use has_top_words column to answer the quiz question
... # YOUR CODE HERE
```
Notice the huge difference in this calculation using TF-IDF scores instead of raw word counts. We've eliminated noise arising from extremely common words.
## Choosing metrics
You may wonder why Joe Biden, Obama's running mate in two presidential elections, is missing from the query results of `model_tf_idf`. Let's find out why. First, compute the distance between TF-IDF features of Obama and Biden.
**Quiz Question**. Compute the Euclidean distance between TF-IDF features of Obama and Biden. Hint: When using Boolean filter in SFrame/SArray, take the index 0 to access the first match.
The distance is larger than the distances we found for the 10 nearest neighbors, which we repeat here for readability:
```
model_tf_idf.query(wiki[wiki['name'] == 'Barack Obama'], label='name', k=10)
```
But one may wonder, is Biden's article that different from Obama's, more so than, say, Schiliro's? It turns out that, when we compute nearest neighbors using the Euclidean distances, we unwittingly favor short articles over long ones. Let us compute the length of each Wikipedia document, and examine the document lengths for the 100 nearest neighbors to Obama's page.
```
def compute_length(row):
return len(row['text'])
wiki['length'] = wiki.apply(compute_length)
nearest_neighbors_euclidean = model_tf_idf.query(wiki[wiki['name'] == 'Barack Obama'], label='name', k=100)
nearest_neighbors_euclidean = nearest_neighbors_euclidean.join(wiki[['name', 'length']], on={'reference_label':'name'})
nearest_neighbors_euclidean.sort('rank')
```
To see how these document lengths compare to the lengths of other documents in the corpus, let's make a histogram of the document lengths of Obama's 100 nearest neighbors and compare to a histogram of document lengths for all documents.
```
plt.figure(figsize=(10.5,4.5))
plt.hist(wiki['length'], 50, color='k', edgecolor='None', histtype='stepfilled', normed=True,
label='Entire Wikipedia', zorder=3, alpha=0.8)
plt.hist(nearest_neighbors_euclidean['length'], 50, color='r', edgecolor='None', histtype='stepfilled', normed=True,
label='100 NNs of Obama (Euclidean)', zorder=10, alpha=0.8)
plt.axvline(x=wiki['length'][wiki['name'] == 'Barack Obama'][0], color='k', linestyle='--', linewidth=4,
label='Length of Barack Obama', zorder=2)
plt.axvline(x=wiki['length'][wiki['name'] == 'Joe Biden'][0], color='g', linestyle='--', linewidth=4,
label='Length of Joe Biden', zorder=1)
plt.axis([1000, 5500, 0, 0.004])
plt.legend(loc='best', prop={'size':15})
plt.title('Distribution of document length')
plt.xlabel('# of words')
plt.ylabel('Percentage')
plt.rcParams.update({'font.size':16})
plt.tight_layout()
```
Relative to the rest of Wikipedia, nearest neighbors of Obama are overwhelmingly short, most of them being shorter than 2000 words. The bias towards short articles is not appropriate in this application, as there is really no reason to favor short articles over long articles (they are all Wikipedia articles, after all). Many Wikipedia articles are 2500 words or more, and both Obama and Biden are over 2500 words long.
**Note:** Both word-count features and TF-IDF are proportional to word frequencies. While TF-IDF penalizes very common words, longer articles tend to have longer TF-IDF vectors simply because they have more words in them.
To remove this bias, we turn to **cosine distances**:
$$
d(\mathbf{x},\mathbf{y}) = 1 - \frac{\mathbf{x}^T\mathbf{y}}{\|\mathbf{x}\| \|\mathbf{y}\|}
$$
Cosine distances let us compare word distributions of two articles of varying lengths.
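A quick NumPy check makes the point: scaling a vector (a "longer article" with the same word distribution) changes its Euclidean distances but not its cosine distances. The vectors here are hypothetical:

```
import numpy as np

def cosine_distance(x, y):
    # d(x, y) = 1 - x.y / (|x| |y|), as in the formula above
    return 1 - np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))

short_doc = np.array([1.0, 2.0, 0.0])
long_doc = 3 * short_doc            # same distribution, three times the length
other    = np.array([0.0, 1.0, 2.0])

print(np.linalg.norm(short_doc - other), np.linalg.norm(long_doc - other))
print(cosine_distance(short_doc, other), cosine_distance(long_doc, other))  # equal
print(cosine_distance(short_doc, long_doc))  # ~0: length is ignored
```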
Let us train a new nearest neighbor model, this time with cosine distances. We then repeat the search for Obama's 100 nearest neighbors.
```
model2_tf_idf = graphlab.nearest_neighbors.create(wiki, label='name', features=['tf_idf'],
method='brute_force', distance='cosine')
nearest_neighbors_cosine = model2_tf_idf.query(wiki[wiki['name'] == 'Barack Obama'], label='name', k=100)
nearest_neighbors_cosine = nearest_neighbors_cosine.join(wiki[['name', 'length']], on={'reference_label':'name'})
nearest_neighbors_cosine.sort('rank')
```
From a glance at the above table, things look better. For example, we now see Joe Biden as Barack Obama's nearest neighbor! We also see Hillary Clinton on the list. This list looks even more plausible as nearest neighbors of Barack Obama.
Let's make a plot to better visualize the effect of having used cosine distance in place of Euclidean on our TF-IDF vectors.
```
plt.figure(figsize=(10.5,4.5))
plt.hist(wiki['length'], 50, color='k', edgecolor='None', histtype='stepfilled', normed=True,
label='Entire Wikipedia', zorder=3, alpha=0.8)
plt.hist(nearest_neighbors_euclidean['length'], 50, color='r', edgecolor='None', histtype='stepfilled', normed=True,
label='100 NNs of Obama (Euclidean)', zorder=10, alpha=0.8)
plt.hist(nearest_neighbors_cosine['length'], 50, color='b', edgecolor='None', histtype='stepfilled', normed=True,
label='100 NNs of Obama (cosine)', zorder=11, alpha=0.8)
plt.axvline(x=wiki['length'][wiki['name'] == 'Barack Obama'][0], color='k', linestyle='--', linewidth=4,
label='Length of Barack Obama', zorder=2)
plt.axvline(x=wiki['length'][wiki['name'] == 'Joe Biden'][0], color='g', linestyle='--', linewidth=4,
label='Length of Joe Biden', zorder=1)
plt.axis([1000, 5500, 0, 0.004])
plt.legend(loc='best', prop={'size':15})
plt.title('Distribution of document length')
plt.xlabel('# of words')
plt.ylabel('Percentage')
plt.rcParams.update({'font.size': 16})
plt.tight_layout()
```
Indeed, the 100 nearest neighbors using cosine distance provide a sampling across the range of document lengths, rather than just short articles like Euclidean distance provided.
**Moral of the story**: In deciding the features and distance measures, check if they produce results that make sense for your particular application.
# Problem with cosine distances: tweets vs. long articles
Happily ever after? Not so fast. Cosine distances ignore all document lengths, which may be great in certain situations but not in others. For instance, consider the following (admittedly contrived) example.
```
+--------------------------------------------------------+
| +--------+ |
| One that shall not be named | Follow | |
| @username +--------+ |
| |
| Democratic governments control law in response to |
| popular act. |
| |
| 8:05 AM - 16 May 2016 |
| |
| Reply Retweet (1,332) Like (300) |
| |
+--------------------------------------------------------+
```
How similar is this tweet to Barack Obama's Wikipedia article? Let's transform the tweet into TF-IDF features, using an encoder fit to the Wikipedia dataset. (That is, let's treat this tweet as an article in our Wikipedia dataset and see what happens.)
```
sf = graphlab.SFrame({'text': ['democratic governments control law in response to popular act']})
sf['word_count'] = graphlab.text_analytics.count_words(sf['text'])
encoder = graphlab.feature_engineering.TFIDF(features=['word_count'], output_column_prefix='tf_idf')
encoder.fit(wiki)
sf = encoder.transform(sf)
sf
```
Let's look at the TF-IDF vectors for this tweet and for Barack Obama's Wikipedia entry, just to visually see their differences.
```
tweet_tf_idf = sf[0]['tf_idf.word_count']
tweet_tf_idf
obama_tf_idf
```
Now, compute the cosine distance between the Barack Obama article and this tweet:
```
obama = wiki[wiki['name'] == 'Barack Obama']
obama_tf_idf = obama[0]['tf_idf']
graphlab.toolkits.distances.cosine(obama_tf_idf, tweet_tf_idf)
```
Let's compare this distance to the distance between the Barack Obama article and all of its Wikipedia 10 nearest neighbors:
```
model2_tf_idf.query(obama, label='name', k=10)
```
With cosine distances, the tweet is "nearer" to Barack Obama than everyone else, except for Joe Biden! This probably is not something we want. If someone is reading the Barack Obama Wikipedia page, would you want to recommend they read this tweet? Ignoring article lengths completely resulted in nonsensical results. In practice, it is common to enforce maximum or minimum document lengths. After all, when someone is reading a long article from _The Atlantic_, you wouldn't recommend a tweet to them.
---
# PRMT-1960 Can we use the presence of an error code at a particular point in the process to designate a transfer as failed
### Context
Data range: 01/09/2020 - 28/02/2021 (6 months)
### Hypothesis
**We believe that** transfers in which certain error codes appear at certain points in the GP2GP process **can** automatically be considered failures.
**We will know this to be true when** we can see in the data that whenever given error codes appear at a given stage of the transfer process (e.g. in intermediate, sender, or final message(s)), those transfers have no successful integrations.
### Scope
We have:
- looked at the effect of re-designating as failed any transfers that have a pending-with-error status and contain the fatal intermediate error codes - see fatal error codes in Notebook 16: PRMT-1622
- for each error code, for each stage in the process, looked at the eventual status of the transfer
- identified which error codes appearing at which stage can automatically be assumed to indicate failure.
- This analysis covers a 6-month time frame, from September 2020 to February 2021 (using the transfers - duplicate hypothesis - dataset).
```
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import paths
import data
error_code_lookup_file = pd.read_csv(data.gp2gp_response_codes.path)
transfer_file_location = "s3://prm-gp2gp-data-sandbox-dev/transfers-duplicates-hypothesis/"
transfer_files = [
"9-2020-transfers.parquet",
"10-2020-transfers.parquet",
"11-2020-transfers.parquet",
"12-2020-transfers.parquet",
"1-2021-transfers.parquet",
"2-2021-transfers.parquet",
]
transfer_input_files = [transfer_file_location + f for f in transfer_files]
transfers_raw = pd.concat((
pd.read_parquet(f)
for f in transfer_input_files
))
# This is only needed when using transfers-duplicates-hypothesis datasets
transfers_raw = transfers_raw.drop(["sending_supplier", "requesting_supplier"], axis=1)
# Given the findings in PRMT-1742 - many duplicate EHR errors are misclassified, the below reclassifies the relevant data
successful_transfers_bool = transfers_raw['request_completed_ack_codes'].apply(lambda x: True in [(np.isnan(i) or i==15) for i in x])
transfers = transfers_raw.copy()
transfers.loc[successful_transfers_bool, "status"] = "INTEGRATED"
```
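To make the reclassification rule above concrete, here is a sketch on toy data (the values are illustrative, not taken from the real dataset): a transfer counts as integrated if any of its request-completed acknowledgement codes is NaN (a positive acknowledgement) or 15 (duplicate EHR).

```python
import numpy as np
import pandas as pd

# Toy transfers: request_completed_ack_codes holds a list of codes per transfer;
# NaN means a positive acknowledgement, 15 means "duplicate EHR" (treated as success)
toy = pd.DataFrame({
    "request_completed_ack_codes": [[np.nan], [15.0, 12.0], [12.0], [30.0]],
    "status": ["PENDING_WITH_ERROR", "FAILED", "FAILED", "PENDING_WITH_ERROR"],
})

# Same rule as above: any NaN or 15 among the ack codes marks the transfer integrated
is_success = toy["request_completed_ack_codes"].apply(
    lambda codes: any(np.isnan(c) or c == 15 for c in codes)
)
toy.loc[is_success, "status"] = "INTEGRATED"
print(toy["status"].tolist())  # ['INTEGRATED', 'INTEGRATED', 'FAILED', 'PENDING_WITH_ERROR']
```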
# Part 1: Pending with Error
## Fatal Error Codes - Effect on pending with error transfers
We want to find:
- the total number of transfers with "pending with error" status
- broken down by the four "Likely Fatal" error codes (common errors with no integrations)
- broken down by the four "Likely Fatal" error codes plus the two "Seems Fatal" error codes (tiny chance of integration)
### Data set information
```
start_time = transfers['date_requested'].min()
end_time = transfers['date_requested'].max()
start_date = start_time.date()
end_date = end_time.date()
print(f"Min time of dataset: {start_time}")
print(f"Max time of dataset: {end_time}")
total_number_transfers = transfers["status"].value_counts().sum()
print(f"Total number of transfers: {total_number_transfers}")
print("Breakdown by status:")
transfers["status"].value_counts()
transfers_with_pending_bool = transfers.loc[:, "status"] == "PENDING"
transfers_with_pending = transfers.loc[transfers_with_pending_bool]
print("To confirm that no pending transfers have any intermediate error codes")
transfers_with_pending["intermediate_error_codes"].apply(len).value_counts()
print("To confirm that no pending transfers have a sender error code")
transfers_with_pending["sender_error_code"].value_counts(dropna=False)
```
### Fatal Errors
```
transfers_with_pending_with_error_bool = transfers.loc[:, "status"] == "PENDING_WITH_ERROR"
transfers_with_pending_with_error = transfers.loc[transfers_with_pending_with_error_bool].copy()
transfers_with_pending_with_error["intermediate_and_sender_error_codes"] = transfers_with_pending_with_error.apply(lambda row: np.append(row["intermediate_error_codes"], row["sender_error_code"]), axis=1)
print(f"Total number of transfers with pending with error status:")
print(transfers["status"].value_counts()["PENDING_WITH_ERROR"])
print(f"Validating transfers_with_pending_with_error data frame is the correct size")
transfers_with_pending_with_error.shape
print('Do pending with error transfers, contain fatal error codes? Just error codes which are 100% fatal [PRMT-1622]:')
fatal_error_codes = [10, 6, 7, 24, 99, 15]
transfers_with_fatal_errors_bool = transfers_with_pending_with_error["intermediate_and_sender_error_codes"].apply(lambda interm_error_codes: list(set(interm_error_codes) & set(fatal_error_codes))).apply(len) > 0
transfers_with_fatal_errors_bool.value_counts().iloc[[1,0]]
print('Do pending with error transfers, contain fatal error codes? All error codes which are 99% + fatal [PRMT-1622]:')
extended_fatal_error_codes = fatal_error_codes + [30, 14, 23]
transfers_with_extended_fatal_errors_bool = transfers_with_pending_with_error["intermediate_and_sender_error_codes"].apply(lambda interm_error_codes: list(set(interm_error_codes) & set(extended_fatal_error_codes))).apply(len) > 0
transfers_with_extended_fatal_errors_bool.value_counts()
pd.pivot_table(transfers, index="sender_error_code", columns="status", aggfunc="count", values="conversation_id").fillna(0).astype(int)
```
From the figures above, it appears that almost all transfers with "pending with error" status contain sender error 30 (LM general failure) or 14 (message not sent because requesting LM messages). Error codes 30 and 14 are large-message issues and are deemed usually fatal, so we may be able to reclassify the vast majority of these transfers as failed.
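As a sketch of what such a reclassification could look like (hypothetical toy data; this is not part of the analysis pipeline itself), transfers pending with a sender error of 30 or 14 would be relabelled as failed:

```python
import numpy as np
import pandas as pd

# Hypothetical rule: pending-with-error transfers whose sender error code
# is 30 or 14 (large-message failures) are reclassified as FAILED
toy = pd.DataFrame({
    "status": ["PENDING_WITH_ERROR", "PENDING_WITH_ERROR", "INTEGRATED"],
    "sender_error_code": [30.0, 12.0, np.nan],
})
fatal_sender_codes = [30, 14]
to_fail = (toy["status"] == "PENDING_WITH_ERROR") & toy["sender_error_code"].isin(fatal_sender_codes)
toy.loc[to_fail, "status"] = "FAILED"
print(toy["status"].tolist())  # ['FAILED', 'PENDING_WITH_ERROR', 'INTEGRATED']
```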
#### Given this finding, let's open this up to all error types (e.g. sender, final, intermediate, request-completed acknowledgement)
# Part 2: All Error Types
## Error Code, Error Type (Sender/Intermediate/Final) and transfer status
Looking at all transfers that have any error codes (either as a sender error code, final error code, or intermediate error code) - and what their final transfer status is (failed/integrated/pending or pending with error), in order to see any patterns.
```
# Sender Errors
transfers_with_sender_error_bool = transfers["sender_error_code"].apply(lambda sender_error_code: np.isfinite(sender_error_code))
transfers_with_sender_error = transfers[transfers_with_sender_error_bool]
transfers_with_sender_error = transfers_with_sender_error[["sender_error_code", "status"]]
transfers_with_sender_error["Error Type"] = "Sender"
transfers_with_sender_error = transfers_with_sender_error.rename({ "sender_error_code": "Error Code" }, axis=1)
# Final Errors
transfers_with_final_error_bool = transfers["final_error_code"].apply(lambda final_error_code: np.isfinite(final_error_code))
transfers_with_final_error = transfers[transfers_with_final_error_bool]
transfers_with_final_error = transfers_with_final_error[["final_error_code", "status"]]
transfers_with_final_error["Error Type"] = "Final"
transfers_with_final_error = transfers_with_final_error.rename({ "final_error_code": "Error Code" }, axis=1)
# Intermediate Errors
has_intermediate_errors_bool = transfers["intermediate_error_codes"].apply(len) > 0
transfers_with_intermediate_errors_exploded = transfers[has_intermediate_errors_bool].explode("intermediate_error_codes")
# A single transfer may have the same duplicate error code repeatedly - let's only count each one once by dropping duplicates
transfers_with_unique_interm_errors = transfers_with_intermediate_errors_exploded.drop_duplicates(subset=["conversation_id", "intermediate_error_codes"])
transfers_with_unique_interm_errors = transfers_with_unique_interm_errors[["intermediate_error_codes", "status"]]
transfers_with_unique_interm_errors["Error Type"] = "intermediate"
transfers_with_unique_interm_errors = transfers_with_unique_interm_errors.rename({ "intermediate_error_codes": "Error Code" }, axis=1)
# Request Completed Acknowledgement Errors [As added by pipeline branch created by PRMT-1622; there are "final" error codes being lost by the current pipeline stored here]
has_req_ack_errors_bool = transfers['request_completed_ack_codes'].apply(len) > 0
transfers_with_req_ack_errors_exploded = transfers[has_req_ack_errors_bool].explode("request_completed_ack_codes")
# A single transfer may have the same duplicate error code repeatedly - let's only count each one once by dropping duplicates
transfers_with_req_ack_errors = transfers_with_req_ack_errors_exploded.drop_duplicates(subset=["conversation_id", "request_completed_ack_codes"])
transfers_with_req_ack_errors = transfers_with_req_ack_errors[["request_completed_ack_codes", "status"]]
transfers_with_req_ack_errors["Error Type"] = "Request completed acknowledgement"
transfers_with_req_ack_errors = transfers_with_req_ack_errors.rename({ "request_completed_ack_codes": "Error Code" }, axis=1).dropna()
transfers_with_errors = pd.concat([transfers_with_unique_interm_errors, transfers_with_final_error, transfers_with_sender_error,transfers_with_req_ack_errors])
transfers_with_errors["Error Type"].value_counts()
transfers_with_errors["Error Description"] = transfers_with_errors["Error Code"]
transfers_with_errors["Error Description"] = transfers_with_errors["Error Description"].replace(error_code_lookup_file["ErrorCode"].values, error_code_lookup_file["ErrorName"].values)
error_code_summary_pivot_table = pd.pivot_table(transfers_with_errors, index=["Error Code", "Error Description", "Error Type"], columns="status", aggfunc=lambda x: len(x), fill_value=0, margins=True, margins_name="Total")
pd.set_option('display.max_rows', len(error_code_summary_pivot_table))
#error_code_summary_pivot_table
sender_pending=[6, 7, 10, 14, 23, 24]
failure_when_sender =[30,99]
sender_mixed_outcome=[19, 20]
intermediate_mixed_outcome=[29]
end_mixed_outcome=[11, 12,31]
end_failures=[17, 21, 25,9,99]
distinct=[15, 205]
error_code_summary_pivot_table.loc[sender_pending+failure_when_sender].reset_index().set_index('Error Type').loc['Sender'].sum()
# All Sender Error Codes: Almost always end up Pending with Error
error_code_summary_pivot_table.loc[sender_pending]
key_metrics=dict()
raw_metric_data=error_code_summary_pivot_table.loc[sender_pending].sum()
key_metrics['We can classify sender codes as Failures']=(100*raw_metric_data['INTEGRATED']/raw_metric_data['Total'])<0.5
# 30 is a mix but when it is a sender error code, it almost always ends up Pending with Error
error_code_summary_pivot_table.loc[failure_when_sender]
error_code_summary_pivot_table.loc[sender_mixed_outcome]
error_code_summary_pivot_table.loc[intermediate_mixed_outcome]
error_code_summary_pivot_table.loc[end_mixed_outcome]
error_code_summary_pivot_table.loc[end_failures]
error_code_summary_pivot_table.loc[distinct]
```
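The explode-then-deduplicate step used above for intermediate and request-completed acknowledgement codes can be illustrated on toy data (column names mirror the real dataset; the values are made up):

```python
import pandas as pd

toy = pd.DataFrame({
    "conversation_id": ["a", "b"],
    "intermediate_error_codes": [[29, 29, 12], [29]],
})
# One row per (transfer, error code) pair...
exploded = toy.explode("intermediate_error_codes")
# ...then count each code at most once per transfer
unique = exploded.drop_duplicates(subset=["conversation_id", "intermediate_error_codes"])
print(unique["intermediate_error_codes"].value_counts().to_dict())  # {29: 2, 12: 1}
```

Transfer "a" repeats code 29 twice but contributes it only once, so the counts reflect how many transfers saw each code rather than how many times it was raised.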
### Verification of numbers
```
print("To verify above values")
transfers_with_sender_error["Error Code"].value_counts()
transfers_with_unique_interm_errors["Error Code"].value_counts()
transfers_with_final_error["Error Code"].value_counts()
key_metrics
```
| github_jupyter |
```
# default_exp plots
from IPython.core.debugger import set_trace
from IPython.utils import traitlets as _traitlets
```
<h1><center> Plotting Playing Sequence </center></h1>
This module is heavily inspired by the [`matplotsoccer` library](https://github.com/TomDecroos/matplotsoccer/blob/master/matplotsoccer/fns.py), with some bug fixes and improvements
# Config
```
# export
import itertools
import math
import os
import pickle
from pathlib import Path
from typing import List, Tuple
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from fastcore.basics import *
from fastcore.foundation import *
from fastcore.xtras import *
from matplotlib.patches import Arc
from matplotlib.pyplot import cm
from scipy.ndimage import gaussian_filter
SPADL_CONFIG = {
"length": 105,
"width": 68,
"penalty_box_length": 16.5,
"penalty_box_width": 40.3,
"six_yard_box_length": 5.5,
"six_yard_box_width": 18.3,
"penalty_spot_distance": 11,
"goal_width": 7.3,
"goal_length": 2,
"origin_x": 0,
"origin_y": 0,
"circle_radius": 9.15,
}
ZLINE = 8000
ZFIELD = -5000
ZACTION = 9000
ZHEATMAP = 7000
# TODO change this to use some local data
data_path = Path("./data")
action_examples_path = data_path.ls(file_exts=".csv")
action_examples_path
```
# Pitch
```
# export
def _plot_rectangle(x1, y1, x2, y2, ax, color):
ax.plot([x1, x1], [y1, y2], color=color, zorder=ZLINE)
ax.plot([x2, x2], [y1, y2], color=color, zorder=ZLINE)
ax.plot([x1, x2], [y1, y1], color=color, zorder=ZLINE)
ax.plot([x1, x2], [y2, y2], color=color, zorder=ZLINE)
def _field(
ax=None,
fig=None,
linecolor="black",
fieldcolor="white",
alpha=1,
figsize=None,
field_config=SPADL_CONFIG,
):
cfg = field_config
# Create figure
if fig is None:
fig, ax = plt.subplots()
# Pitch Outline & Centre Line
x1, y1, x2, y2 = (
cfg["origin_x"],
cfg["origin_y"],
cfg["origin_x"] + cfg["length"],
cfg["origin_y"] + cfg["width"],
)
d = cfg["goal_length"]
rectangle = plt.Rectangle(
(x1 - 2 * d, y1 - 2 * d),
cfg["length"] + 4 * d,
cfg["width"] + 4 * d,
fc=fieldcolor,
alpha=alpha,
zorder=ZFIELD,
)
ax.add_patch(rectangle)
_plot_rectangle(x1, y1, x2, y2, ax=ax, color=linecolor)
ax.plot([(x1 + x2) / 2, (x1 + x2) / 2], [y1, y2], color=linecolor, zorder=ZLINE)
# Left Penalty Area
x1 = cfg["origin_x"]
x2 = cfg["origin_x"] + cfg["penalty_box_length"]
m = (cfg["origin_y"] + cfg["width"]) / 2
y1 = m - cfg["penalty_box_width"] / 2
y2 = m + cfg["penalty_box_width"] / 2
_plot_rectangle(x1, y1, x2, y2, ax=ax, color=linecolor)
# Right Penalty Area
x1 = cfg["origin_x"] + cfg["length"] - cfg["penalty_box_length"]
x2 = cfg["origin_x"] + cfg["length"]
m = (cfg["origin_y"] + cfg["width"]) / 2
y1 = m - cfg["penalty_box_width"] / 2
y2 = m + cfg["penalty_box_width"] / 2
_plot_rectangle(x1, y1, x2, y2, ax=ax, color=linecolor)
# Left 6-yard Box
x1 = cfg["origin_x"]
x2 = cfg["origin_x"] + cfg["six_yard_box_length"]
m = (cfg["origin_y"] + cfg["width"]) / 2
y1 = m - cfg["six_yard_box_width"] / 2
y2 = m + cfg["six_yard_box_width"] / 2
_plot_rectangle(x1, y1, x2, y2, ax=ax, color=linecolor)
# Right 6-yard Box
x1 = cfg["origin_x"] + cfg["length"] - cfg["six_yard_box_length"]
x2 = cfg["origin_x"] + cfg["length"]
m = (cfg["origin_y"] + cfg["width"]) / 2
y1 = m - cfg["six_yard_box_width"] / 2
y2 = m + cfg["six_yard_box_width"] / 2
_plot_rectangle(x1, y1, x2, y2, ax=ax, color=linecolor)
# Left Goal
x1 = cfg["origin_x"] - cfg["goal_length"]
x2 = cfg["origin_x"]
m = (cfg["origin_y"] + cfg["width"]) / 2
y1 = m - cfg["goal_width"] / 2
y2 = m + cfg["goal_width"] / 2
_plot_rectangle(x1, y1, x2, y2, ax=ax, color=linecolor)
# Right Goal
x1 = cfg["origin_x"] + cfg["length"]
x2 = cfg["origin_x"] + cfg["length"] + cfg["goal_length"]
m = (cfg["origin_y"] + cfg["width"]) / 2
y1 = m - cfg["goal_width"] / 2
y2 = m + cfg["goal_width"] / 2
_plot_rectangle(x1, y1, x2, y2, ax=ax, color=linecolor)
# Prepare Circles
mx, my = (cfg["origin_x"] + cfg["length"]) / 2, (cfg["origin_y"] + cfg["width"]) / 2
centreCircle = plt.Circle(
(mx, my), cfg["circle_radius"], color=linecolor, fill=False, zorder=ZLINE
)
centreSpot = plt.Circle((mx, my), 0.4, color=linecolor, zorder=ZLINE)
lx = cfg["origin_x"] + cfg["penalty_spot_distance"]
leftPenSpot = plt.Circle((lx, my), 0.4, color=linecolor, zorder=ZLINE)
rx = cfg["origin_x"] + cfg["length"] - cfg["penalty_spot_distance"]
rightPenSpot = plt.Circle((rx, my), 0.4, color=linecolor, zorder=ZLINE)
# Draw Circles
ax.add_patch(centreCircle)
ax.add_patch(centreSpot)
ax.add_patch(leftPenSpot)
ax.add_patch(rightPenSpot)
# Prepare Arcs
r = cfg["circle_radius"] * 2
leftArc = Arc(
(lx, my),
height=r,
width=r,
angle=0,
theta1=307,
theta2=53,
color=linecolor,
zorder=ZLINE,
)
rightArc = Arc(
(rx, my),
height=r,
width=r,
angle=0,
theta1=127,
theta2=233,
color=linecolor,
zorder=ZLINE,
)
# Draw Arcs
ax.add_patch(leftArc)
ax.add_patch(rightArc)
## Tidy Axes
ax.axis("off")
# Display Pitch
if figsize:
h, w = fig.get_size_inches()
##newh, neww = figsize, w / h * figsize
newh, neww = figsize, 68 / 105 * figsize
fig.set_size_inches(newh, neww, forward=True)
return fig, ax
def field(color="white", figsize=None, fig=None, ax=None, linecolor="black"):
"""
Plot football pitch in different colors
Parameters
----------
color: str
Current options are `white` or `green`
figsize: int
figure size in inches
ax: matplotlib.axis
the matplotlib.axis to update
show: bool
Should we call the `.show()` method on the created plot.
Returns
-------
matplotlib.axis
The pitch attached to a matplotlib.axis
"""
if color == "white":
return _field(
fig=fig,
ax=ax,
linecolor=linecolor,
fieldcolor=color,
alpha=1,
figsize=figsize,
field_config=SPADL_CONFIG,
)
elif color == "green":
return _field(
fig=fig,
ax=ax,
linecolor=linecolor,
fieldcolor=color,
alpha=0.4,
figsize=figsize,
field_config=SPADL_CONFIG,
)
else:
raise Exception("Invalid field color")
from fastai.vision.all import *
fig, ctxs = get_grid(n=6, nrows=2, ncols=3, figsize=(15, 10), return_fig=True)
for i, ctx in enumerate(ctxs):
field(color="white", figsize=15, fig=fig, ax=ctx)
```
# Actions
```
# export
def get_lines(labels):
labels = np.asarray(labels)
if labels.ndim == 1:
labels = labels.reshape(-1, 1)
assert labels.ndim == 2
labels = list([list([str(l) for l in ls]) for ls in labels])
maxlen = {i: 0 for i in range(len(labels[0]))}
for ls in labels:
for i, l in enumerate(ls):
maxlen[i] = max(maxlen[i], len(l))
labels = [[l.ljust(maxlen[i]) for i, l in enumerate(ls)] for ls in labels]
return [" | ".join(ls) for ls in labels]
def plot_actions(
location,
action_type=None,
result=None,
team=None,
label=None,
labeltitle=None,
color="white",
fig=None,
ax=None,
figsize=None,
zoom=False,
show_legend=True,
return_fig=False,
):
"""Plot SPADL actions on a football pitch"""
fig, ax = field(ax=ax, fig=fig, color=color, figsize=figsize)
figsize, _ = fig.get_size_inches()
arrowsize = math.sqrt(figsize)
# SANITIZING INPUT
location = np.asarray(location)
if action_type is None:
m, n = location.shape
action_type = ["pass" for i in range(m)]
if label is None:
show_legend = False
action_type = np.asarray(action_type)
if team is None:
team = ["Team X" for t in action_type]
team = np.asarray(team)
assert team.ndim == 1
if result is None:
result = [1 for t in action_type]
result = np.asarray(result)
assert result.ndim == 1
if label is None:
label = [[t] for t in action_type]
label = np.asarray(label)
if label.ndim == 1:
label = label.reshape(-1, 1)
assert label.ndim == 2
indexa = np.asarray([list(range(1, len(label) + 1))]).reshape(-1, 1)
label = np.concatenate([indexa, label], axis=1)
if labeltitle is not None:
labeltitle = list(labeltitle)
labeltitle.insert(0, "")
labeltitle = [labeltitle]
label = np.concatenate([labeltitle, label])
lines = get_lines(label)
titleline = lines[0]
ax.plot(np.nan, np.nan, "-", color="none", label=titleline)
ax.plot(np.nan, np.nan, "-", color="none", label="-" * len(titleline))
lines = lines[1:]
else:
lines = get_lines(label)
m, n = location.shape
if n != 2 and n != 4:
raise ValueError("Location must have 2 or 4 columns")
if n == 2:
loc_end = location.copy()
loc_end[:-1, :] = loc_end[1:, :]
location = np.concatenate([location, loc_end], axis=1)
assert location.shape[1] == 4
text_offset = 3
if zoom:
x = np.concatenate([location[:, 0], location[:, 2]])
y = np.concatenate([location[:, 1], location[:, 3]])
xmin = min(x)
xmax = max(x)
ymin = min(y)
ymax = max(y)
mx = (xmin + xmax) / 2
dx = (xmax - xmin) / 2
my = (ymin + ymax) / 2
dy = (ymax - ymin) / 2
if type(zoom) == bool:
d = max(dx, dy)
else:
d = zoom
text_offset = 0.07 * d
zoompad = 5
xmin = max(mx - d, 0) - zoompad
xmax = min(mx + d, SPADL_CONFIG["length"]) + zoompad
ax.set_xlim(xmin, xmax)
ymin = max(my - d, 0) - zoompad
ymax = min(my + d, SPADL_CONFIG["width"]) + zoompad
ax.set_ylim(ymin, ymax)
h, w = fig.get_size_inches()
h, w = xmax - xmin, ymax - ymin
newh, neww = figsize, w / h * figsize
fig.set_size_inches(newh, neww, forward=True)
arrowsize = (w + h) / 2 / 105 * arrowsize
eventmarkers = itertools.cycle(["s", "p", "h"])
event_types = set(action_type)
eventmarkerdict = {"pass": "o"}
for eventtype in event_types:
if eventtype != "pass":
eventmarkerdict[eventtype] = next(eventmarkers)
markersize = figsize
def get_color(type_name, te):
home_team = team[0]
if type_name == "dribble":
return "black"
elif te == home_team:
return "blue"
else:
return "red"
colors = np.array([get_color(ty, te) for ty, te in zip(action_type, team)])
blue_n = np.sum(colors == "blue")
red_n = np.sum(colors == "red")
blue_markers = iter(list(cm.Blues(np.linspace(0.1, 0.8, blue_n))))
red_markers = iter(list(cm.Reds(np.linspace(0.1, 0.8, red_n))))
cnt = 1
for ty, r, loc, color, line in zip(action_type, result, location, colors, lines):
[sx, sy, ex, ey] = loc
ax.text(sx + text_offset, sy, str(cnt))
cnt += 1
if color == "blue":
c = next(blue_markers)
elif color == "red":
c = next(red_markers)
else:
c = "black"
if ty == "dribble":
ax.plot(
[sx, ex],
[sy, ey],
color=c,
linestyle="--",
linewidth=2,
label=line,
zorder=ZACTION,
)
else:
ec = "black" if r else "red"
m = eventmarkerdict[ty]
ax.plot(
sx,
sy,
linestyle="None",
marker=m,
markersize=markersize,
label=line,
color=c,
mec=ec,
zorder=ZACTION,
)
if abs(sx - ex) > 1 or abs(sy - ey) > 1:
ax.arrow(
sx,
sy,
ex - sx,
ey - sy,
head_width=arrowsize,
head_length=arrowsize,
linewidth=1,
fc=ec,
ec=ec,
length_includes_head=True,
zorder=ZACTION,
)
if show_legend:
leg = ax.legend(
bbox_to_anchor=(1.0, 0.75, 1, 0.05),
loc="best",
prop={"family": "monospace"},
)
if return_fig:
return fig, ax
else:
return ax
## selects id
play_sequences = [pd.read_csv(_file) for _file in action_examples_path]
fig, ctxs = get_grid(
n=len(play_sequences), ncols=1, figsize=(12, 12 * 1.7), return_fig=True
)
for i, ctx in enumerate(ctxs):
play_sequence = play_sequences[i].tail(8)
labels = play_sequence[
["time_seconds", "type_name", "player_name", "team_name", "result_name"]
]
plot_actions(
location=play_sequence[["start_x", "start_y", "end_x", "end_y"]],
action_type=play_sequence.type_name,
team=play_sequence.team_name,
result=play_sequence.result_name,
label=labels,
labeltitle=["time", "action", "player", "team", "result"],
zoom=False,
fig=fig,
ax=ctx,
show_legend=True,
)
```
# Heatmap
The aim of this function is to visualize the density of a specific action on the pitch. The computation proceeds as follows:
1. The pitch is broken down into cells, and the probability of scoring for that action is estimated over each cell.
2. These estimated probabilities are smoothed.
3. The resulting smoothed density is plotted on the pitch.
The plot should let us check whether the probability distribution over the pitch is in line with what we expect.
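These steps can be sketched independently of the plotting code below; the grid resolution and toy shot locations here are made up, and only the pitch dimensions come from the SPADL config:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

length, width = 105, 68          # SPADL pitch size in metres
n_x, n_y = 10, 10                # grid resolution

# Step 1: bin toy shot locations, storing a scoring-probability estimate per cell
xs = np.array([100.0, 98.0, 50.0])
ys = np.array([34.0, 30.0, 34.0])
probs = np.array([0.3, 0.25, 0.02])

grid = np.zeros((n_y, n_x))
xi = np.clip((xs / length * n_x).astype(int), 0, n_x - 1)
yj = np.clip((ys / width * n_y).astype(int), 0, n_y - 1)
grid[yj, xi] = probs

# Step 2: smooth the cell estimates with a Gaussian filter
smoothed = gaussian_filter(grid, sigma=1)

# Step 3 would plot `smoothed` on the pitch; smoothing spreads each
# estimate over neighbouring cells, so the peak value shrinks
print(smoothed.shape)  # (10, 10)
```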
## Density Estimate
```
# export
def dens_prob(
action_prob_df: pd.DataFrame,
n_x: int = 50,
n_y: int = 50,
aggr: str = "median",
smooth: bool = True,
**kwargs
) -> np.ndarray:
"""
Estimate the probability density for a given action
Parameters
----------
action_prob_df: pd.DataFrame
should have at least column `start_x`, `start_y` and `proba_goal`
n_x, n_y: int
number of cells to use over x and y axis
aggr: str
how to aggregate the probability estimate
smooth: bool
should the density be smoothed by a gaussian filter?
kwargs: dict
keyword parameters to be passed to `scipy.ndimage.gaussian_filter()` besides `input` and `sigma`
Returns
-------
numpy array of shape (`n_y`, `n_x`)
"""
xmin = SPADL_CONFIG["origin_x"]
ymin = SPADL_CONFIG["origin_y"]
xdiff = SPADL_CONFIG["length"]
ydiff = SPADL_CONFIG["width"]
probs_cp = action_prob_df.copy()
xi = (probs_cp.start_x - xmin) / xdiff * n_x
yj = (probs_cp.start_y - ymin) / ydiff * n_y
xi = xi.astype(int).clip(0, n_x - 1)
yj = yj.astype(int).clip(0, n_y - 1)
probs_cp["flat_indexes"] = n_x * (n_y - 1 - yj) + xi
agg_probs = probs_cp.groupby(["flat_indexes"])["proba_goal"].agg(aggr)
flat_probs = np.zeros(n_x * n_y)
flat_probs[agg_probs.index] = agg_probs
flat_probs = flat_probs.reshape((n_y, n_x))
if smooth:
flat_probs = gaussian_filter(input=flat_probs, sigma=1, **kwargs)
return flat_probs
def plot_heatmap(
dens_arr,
fig=None,
ax=None,
figsize=None,
alpha=1,
cmap="hot",
fieldcolor="green",
linecolor="white",
cbar=True,
title=None,
show=True,
):
fig, ax = field(
ax=ax, fig=fig, color=fieldcolor, figsize=figsize, linecolor=linecolor
)
figsize, _ = fig.get_size_inches()
x1, y1, x2, y2 = (
SPADL_CONFIG["origin_x"],
SPADL_CONFIG["origin_y"],
SPADL_CONFIG["origin_x"] + SPADL_CONFIG["length"],
SPADL_CONFIG["origin_y"] + SPADL_CONFIG["width"],
)
extent = (x1, x2, y1, y2)
limits = ax.axis()
imobj = ax.imshow(
dens_arr, extent=extent, aspect="auto", alpha=alpha, cmap=cmap, zorder=ZHEATMAP
)
ax.axis(limits)
if cbar:
# dirty hack
# https://stackoverflow.com/questions/18195758/set-matplotlib-colorbar-size-to-match-graph
colorbar = plt.gcf().colorbar(
imobj, ax=ax, fraction=0.035, aspect=15, pad=-0.05
)
colorbar.minorticks_on()
plt.axis("scaled")
if title is not None:
ax.set_title(title)
if show:
plt.show()
return ax
model_name = "LSTM_FCN_bidir-True_layers-2_no_goal_prop-2-v1"
probs = pd.read_csv(Path("./models") / model_name / "sample_probs.csv").dropna()
action_name = "Shot on target"
hm_tabl = dens_prob(probs[probs.type_name == action_name])
_ = plot_heatmap(
dens_arr=hm_tabl,
fieldcolor="white",
linecolor="black",
cmap="Blues",
title=action_name,
)
```
| github_jupyter |
```
import random
import pandas as pd
import matplotlib.pyplot as plt
from pathlib import Path
import sys
sys.path.insert(0, '..')
from utils.latex import add_colname, show_latex, TABLES
from utils.config import PATHS
from utils.data_process import concat_annotated, drop_disregard, fix_week_14
pd.set_option('display.max_columns', None)
pd.set_option('display.max_colwidth', None)
```
# Load data
```
# load and concatenate annotations
path = PATHS.getpath('data_expr_july')
df = concat_annotated(path)
# load batch info
path = PATHS.getpath('data_to_inception_conll')
info = pd.concat([pd.read_pickle(fp) for fp in path.glob('week_*.pkl')], ignore_index=True)
df = df.merge(
info[['NotitieID', 'MDN', 'source', 'samp_meth']],
how='left',
on=['NotitieID', 'MDN'],
)
```
# Annotated notes
```
caption = "Weeks 14-26: Number of annotated notes (incl. disregard)"
label = "w14-w26_annot_n_notes"
df.pivot_table(
index='source',
values='NotitieID',
aggfunc='nunique',
margins=True,
margins_name='total',
).rename(columns={'NotitieID': 'n_notes'}
).join(
df.query("disregard == True").pivot_table(
index='source',
values='NotitieID',
aggfunc='nunique',
margins=True,
margins_name='total',
).rename(columns={'NotitieID': 'n_disregard'})
).assign(
n_annotated=lambda df: df.n_notes - df.n_disregard,
prc_disregard=lambda df: (df.n_disregard / df.n_notes).mul(100).round(1),
).pipe(show_latex, caption=caption, label=label)
```
# Annotated sentences
```
# remove "disregard" notes & remove MBW annotations from week 14
adjusted = drop_disregard(df).pipe(fix_week_14)
# select rows with domain labels
domains = ['ADM', 'ATT', 'BER', 'ENR', 'ETN', 'FAC', 'INS', 'MBW', 'STM']
rows_with_domain = adjusted.loc[adjusted[domains].any(axis=1)]
domain_totals_per_sen_id = rows_with_domain.groupby(['source', 'sen_id'])[domains].any()
caption = "Weeks 14-26: Number of sentences with domain labels (excl. disregard)"
label = "w14-w26_annot_sents_w_domain_labels"
n_sent = adjusted.groupby('source').sen_id.nunique()
n_sent_with_label = adjusted.assign(
has_domain = lambda df: df[domains].any(axis=1),
).query("has_domain == True").groupby('source').sen_id.nunique()
table = pd.concat([
n_sent.rename('n_all_sents'),
n_sent_with_label.rename('n_sents_with_labels'),
], axis=1)
table.loc['total'] = table.sum()
table.assign(prc_sents_with_labels=lambda df: (df.n_sents_with_labels / df.n_all_sents).mul(100).round(1)
).pipe(show_latex, caption=caption, label=label)
```
# Distribution of domains
```
caption = "Weeks 14-26: Distribution of domains"
label = "w14-w26_annot_domains"
n_labels = domain_totals_per_sen_id.pivot_table(
index='source',
values=domains,
aggfunc='sum',
margins=True,
margins_name='total',
).assign(total=lambda df: df.sum(axis=1))
p_labels = (n_labels.div(n_labels.iloc[:, -1], axis=0) * 100).round()
piv = n_labels.pipe(add_colname, 'n').join(
p_labels.pipe(add_colname, '%')
).astype('Int64'
).sort_index(axis=1, level=[0,1], ascending=[True, False])
piv.pipe(show_latex, caption=caption, label=label)
# total number of labels
fig, ax = plt.subplots(figsize=(6, 4))
piv.loc[['total']].xs('n', axis=1, level=1).iloc[:,:-1].T.plot.bar(
ax=ax,
legend=False,
grid=True,
title='Weeks 14-26: Total number of domain labels',
)
fig.savefig('figures/w14-w26_total_n_domains.png')
```
# Distribution of levels per domain
```
caption = "Weeks 14-26: Distribution of levels per domain"
label = "w14-w26_annot_levels"
stats = []
for lvl in [f"{i}_lvl" for i in domains]:
notna = adjusted.loc[adjusted[lvl].notna()]
stat = notna.groupby(['source', 'sen_id'])[lvl].apply(lambda s: {i for i in s if i==i})
stat = stat.explode().groupby(level=0).value_counts()
stats.append(stat)
table = pd.concat(stats, axis=1)
table = pd.concat([table, pd.concat([table.groupby(level=1).sum()], keys=['total'])])
table.index = pd.MultiIndex.from_tuples([(i,int(j)) for i,j in table.index])
# sums = table.groupby(level=0).sum()
# sums.index = pd.MultiIndex.from_tuples([(i, 'total') for i in sums.index])
# table = pd.concat([table, sums]).sort_index(level=0)
table.pipe(show_latex, caption=caption, label=label)
cols = table.index.levels[0]
rows = table.columns[:5]
nrows = len(rows)
ncols = len(cols)
fig, axes = plt.subplots(nrows, ncols, figsize=(2*ncols,2*nrows), constrained_layout=True)
for i, row in enumerate(rows):
for j, col in enumerate(cols):
ylabel = row if j == 0 else ''
table.xs(col)[row].plot.pie(ax=axes[i,j])
axes[i,j].set_ylabel(ylabel=ylabel, labelpad=16)
for ax, col in zip(axes[0], cols):
ax.set_title(col)
fig.savefig('figures/w14-w26_levels_part1.png')
cols = table.index.levels[0]
rows = table.columns[5:]
nrows = len(rows)
ncols = len(cols)
fig, axes = plt.subplots(nrows, ncols, figsize=(2*ncols,2*nrows), constrained_layout=True)
for i, row in enumerate(rows):
for j, col in enumerate(cols):
ylabel = row if j == 0 else ''
table.xs(col)[row].plot.pie(ax=axes[i,j])
axes[i,j].set_ylabel(ylabel=ylabel, labelpad=16)
for ax, col in zip(axes[0], cols):
ax.set_title(col)
fig.savefig('figures/w14-w26_levels_part2.png')
```
# Levels aggregated on a note-level
```
fig, axes = plt.subplots(3, 3, figsize=(12, 8), constrained_layout=True)
for i, lev in enumerate([f'{d}_lvl' for d in domains]):
x, y = i%3, i//3
s = adjusted.groupby('NotitieID')[lev].nunique().loc[lambda s: s>0]
s.value_counts().plot.bar(ax=axes[x,y], grid=True)
axes[x,y].set_title(lev)
fig.savefig('figures/w14-w26_dist_nunique_levels_per_note.png')
fig, axes = plt.subplots(3, 3, figsize=(12, 8), constrained_layout=True)
for i, lev in enumerate([f'{d}_lvl' for d in domains]):
x, y = i%3, i//3
to_agg = ['nunique', 'min', 'max']
sel_rows = lambda df: df['nunique'] > 1
s = adjusted.groupby('NotitieID')[lev].agg(to_agg).loc[sel_rows]
diff = (s['max'] - s['min']).astype(int)
diff.value_counts().sort_index().plot.bar(ax=axes[x,y], grid=True)
axes[x,y].set_title(lev)
fig.savefig('figures/w14-w26_dist_diff_minmax_levels_per_note.png')
```
# Randomly-selected notes vs. Keyword-selected notes
```
# % disregard notes (out of all notes)
compare_samp = df.assign(
samp_meth=lambda df: df.samp_meth.str.split('_').str[0]
).groupby(['samp_meth']).apply(
lambda grp: grp.groupby('NotitieID').disregard_note.first().agg({'n':'size', 'p':'sum'})
)
pct_disregard = (compare_samp.p / compare_samp.n).mul(100).round(1).rename(r'% disregard notes')
# % sentences with labels (out of all sents, excl. disregard)
adjusted = adjusted.assign(
samp_meth=lambda df: df.samp_meth.str.split('_').str[0]
)
n_sents = adjusted.groupby('samp_meth').sen_id.nunique()
n_sents_with_label = adjusted.assign(
has_domain = lambda df: df[domains].any(axis=1),
).query("has_domain == True").groupby('samp_meth').sen_id.nunique()
pct_sents_with_label = (n_sents_with_label / n_sents).mul(100).round(1).rename(r'% sentences with labels')
# distribution of domains
rows_with_domain = adjusted.loc[adjusted[domains].any(axis=1)]
domain_totals_per_sen_id = rows_with_domain.groupby(['samp_meth', 'sen_id'])[domains].any()
n_labels = domain_totals_per_sen_id.pivot_table(
index='samp_meth',
values=domains,
aggfunc='sum',
).assign(total=lambda df: df.sum(axis=1))
pct_labels = (n_labels.div(n_labels.iloc[:, -1], axis=0) * 100).round(1).iloc[:,:-1]
# put everything together
caption = "Weeks 14-26: Comparison between randomly-selected and keyword-selected notes"
label = "w14-w26_annot_kwd_vs_rndm"
pd.concat([pct_disregard, pct_sents_with_label, pct_labels], axis=1).T.pipe(show_latex, caption=caption, label=label)
prefix = 'w14-w26_annot'
for idx, table in enumerate(TABLES):
with open(f'./tables/{prefix}_{idx}.tex', 'w', encoding='utf8') as f:
f.write(table)
```
| github_jupyter |
<h3>Problem: As a PM, I write lots of blogs. How do I know if they will be received well by readers?</h3>
<table>
<tr>
<td><img src="https://jayclouse.com/wp-content/uploads/2019/06/hacker_news.webp" height=300 width=300></img></td>
<td><img src="https://miro.medium.com/max/852/1*wJ18DgYgtsscG63Sn56Oyw.png" height=300 width=300></img></td>
</tr>
</table>
<h1>Background on Spark ML</h1>
- **DataFrame**: this ML API uses the DataFrame from Spark SQL as an ML dataset, which can hold a variety of data types. E.g., a DataFrame could have different columns storing text, feature vectors, true labels, and predictions.
- **Transformer**: an algorithm which can transform one DataFrame into another DataFrame. E.g., an ML model is a Transformer which transforms a DataFrame with features into a DataFrame with predictions.
- **Estimator**: an algorithm which can be fit on a DataFrame to produce a Transformer. E.g., a learning algorithm is an Estimator which trains on a DataFrame and produces a model.
- **Pipeline**: chains multiple Transformers and Estimators together to specify an ML workflow.
- **Parameter**: all Transformers and Estimators share a common API for specifying parameters.
```
from IPython.display import Image
Image(url='https://spark.apache.org/docs/3.0.0-preview/img/ml-Pipeline.png')
```
<h2>Loading Hackernews Text From BigQuery</h2>
```
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
scala_minor_version = str(spark.sparkContext._jvm.scala.util.Properties.versionString().replace("version ","").split('.')[1])
spark = SparkSession.builder.config("spark.jars.packages", "com.google.cloud.spark:spark-bigquery-with-dependencies_2." + scala_minor_version + ":0.18.0") \
.enableHiveSupport() \
.getOrCreate()
df = spark.read \
.format("bigquery") \
.load("google.com:crosbie-test-project.demos.hackernewssample")
df.describe().show()
```
<h2>Prepare the data using Spark SQL</h2>
<h4>Create a random ID to distribute between test and training sets</h4>
<h4>Make the score a binary variable so we can run a logistic regression model on it</h4>
```
df.registerTempTable("df")
from pyspark.sql import functions as F
df_full = spark.sql("select cast(round(rand() * 100) as int) as id, text, case when score > 10 THEN 1.0 else 0.0 end as label from df")
df_full.groupby('id').count().sort('count', ascending=False).show()
```
<h4>Create our training and test sets</h4>
```
#use the above table to identify ~10% holdback for test
holdback = "(22,39,25,55,23,47,38,71,5,98)"
#create test set by dropping label
df_test = df_full.where("id in {}".format(holdback))
df_test = df_test.drop("label")
rdd_test = df_test.rdd
test = rdd_test.map(tuple)
testing = spark.createDataFrame(test,["id", "text"])
#training data - Spark ML is expecting tuples so convert to RDD to map back to tuples (may not be required)
df_train = df_full.where("id not in {}".format(holdback))
rdd_train = df_train.rdd
train = rdd_train.map(tuple)
training = spark.createDataFrame(train,["id", "text", "label"])
#a little less than 10% of the training data is positively reviewed. Should be okay.
training.where("label > 0").count()
```
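The hard-coded holdback list above pins roughly 10% of the random 0-100 ids to the test set. An equivalent, reproducible way to carve out a holdout is to hash each record's key; here is a sketch in plain Python, outside Spark (the `in_holdout` helper is hypothetical, not part of the pipeline above):

```python
import zlib

def in_holdout(key, pct=10):
    # Deterministic bucket: the same key always hashes to the same bucket,
    # so the split is stable across runs without storing an id list.
    return zlib.crc32(str(key).encode()) % 100 < pct

rows = list(range(1000))
test_rows = [r for r in rows if in_holdout(r)]
train_rows = [r for r in rows if not in_holdout(r)]
```

Spark itself also offers `DataFrame.randomSplit([0.9, 0.1], seed=...)` for the same purpose.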
<h2>Build our ML Pipeline</h2>
<h3>Configure an ML pipeline, which consists of three stages: tokenizer, hashingTF, and lr.</h3>
```
from pyspark.ml import Pipeline
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.feature import HashingTF, Tokenizer
tokenizer = Tokenizer(inputCol="text", outputCol="words")
hashingTF = HashingTF(inputCol=tokenizer.getOutputCol(), outputCol="features")
lr = LogisticRegression(maxIter=10, regParam=0.001)
pipeline = Pipeline(stages=[tokenizer, hashingTF, lr])
# Fit the pipeline to hacker news articles
model = pipeline.fit(training)
```
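`HashingTF` avoids building a vocabulary by hashing each token straight to a bucket index. A minimal pure-Python sketch of that idea (illustrative only, not Spark's implementation, which produces sparse vectors):

```python
def hashing_tf(words, num_features=16):
    # Term frequencies via the hashing trick: no dictionary is needed,
    # at the cost of possible collisions between different tokens.
    vec = [0] * num_features
    for w in words:
        vec[hash(w) % num_features] += 1
    return vec

features = hashing_tf("spark makes big data simple".split())
```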
<h3>Review model based on test set</h3>
```
# Make predictions on test documents and print columns of interest.
prediction = model.transform(testing)
selected = prediction.select("id", "text", "probability", "prediction").where("prediction > 0")
for row in selected.collect():
rid, text, prob, prediction = row
print("(%d, %s) --> prob=%s, prediction=%f" % (rid, text, str(prob), prediction))
```
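For intuition, the fitted `LogisticRegression` stage scores each document by pushing a weighted sum of its hashed features through a sigmoid. A toy version of that scoring step, with made-up weights:

```python
import math

def predict_proba(features, weights, bias=0.0):
    # Weighted sum of features, squashed by the sigmoid into P(label = 1)
    z = bias + sum(f * w for f, w in zip(features, weights))
    return 1.0 / (1.0 + math.exp(-z))

p = predict_proba([1, 0, 2], [0.5, -0.3, 0.1])  # z = 0.7
label = 1.0 if p > 0.5 else 0.0
```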
<h2>Use the model to decide which PM blog to use</h2>
```
my_blog = """
The life of a data scientist can be challenging. If you’re in this role, your job may involve anything from understanding the day-to-day business behind the data to keeping up with the latest machine learning academic research. With all that a data scientist must do to be effective, you shouldn’t have to worry about migrating data environments or dealing with processing limitations associated with working with raw data.
Google Cloud’s Dataproc lets you run cloud-native Apache Spark and Hadoop clusters easily. This is especially helpful as data growth relocates data scientists and machine learning researchers from personal servers and laptops into distributed cluster environments like Apache Spark, which offers Python and R interfaces for data of any size. You can run open source data processing on Google Cloud, making Dataproc one of the fastest ways to extend your existing data analysis to cloud-sized datasets.
We’re announcing the general availability of several new Dataproc features that will let you apply the open source tools, algorithms, and programming languages that you use today to large datasets. This can be done without having to manage clusters and computers. These new GA features make it possible for data scientists and analysts to build production systems based on personalized development environments.
"""
pmm_blog = """
Dataproc makes open source data and analytics processing fast, easy, and more secure in the cloud.
New customers get $300 in free credits to spend on Dataproc or other Google Cloud products during the first 90 days.
Go to console
Spin up an autoscaling cluster in 90 seconds on custom machines
Build fully managed Apache Spark, Apache Hadoop, Presto, and other OSS clusters
Only pay for the resources you use and lower the total cost of ownership of OSS
Encryption and unified security built into every cluster
Accelerate data science with purpose-built clusters
"""
boss_blog = """
In 2014, we made a decision to build our core data platform on Google Cloud Platform and one of the products which was critical for the decision was Google BigQuery. The scale at which it enabled us to perform analysis we knew would be critical in long run for our business. Today we have more than 200 unique users performing analysis on a monthly basis.
Once we started using Google BiqQuery at scale we soon realized our analysts needed better tooling around it. The key requests we started getting were
Ability to schedule jobs: Analysts needed to have ability to run queries at regular intervals to generate data and metrics.
Define workflow of queries: Basically analysts wanted to run multiple queries in a sequence and share data across them through temp tables.
Simplified data sharing: Finally it became clear teams needed to share this data generated with other systems. For example download it to leverage in R programs or send it to another system to process through Kafka.
"""
pm_blog_off = spark.createDataFrame([
('me', my_blog),
('pmm', pmm_blog),
('sudhir', boss_blog)
], ["id", "text"])
blog_prediction = model.transform(pm_blog_off)
blog_prediction.select("id","prediction").show()
```
<h2>Save our trained model to GCS</h2>
```
model.save("gs://crosbie-dev/blog-validation-model")
```
SOP055 - Uninstall azdata CLI (using pip)
=========================================
Steps
-----
### Common functions
Define helper functions used in this notebook.
```
# Define `run` function for transient fault handling, hyperlinked suggestions, and scrolling updates on Windows
import sys
import os
import re
import json
import platform
import shlex
import shutil
import datetime
from subprocess import Popen, PIPE
from IPython.display import Markdown
retry_hints = {} # Output in stderr known to be transient, therefore automatically retry
error_hints = {} # Output in stderr where a known SOP/TSG exists which will be HINTed for further help
install_hint = {} # The SOP to help install the executable if it cannot be found
first_run = True
rules = None
debug_logging = False
def run(cmd, return_output=False, no_output=False, retry_count=0):
"""Run shell command, stream stdout, print stderr and optionally return output
NOTES:
1. Commands that need this kind of ' quoting on Windows e.g.:
kubectl get nodes -o jsonpath={.items[?(@.metadata.annotations.pv-candidate=='data-pool')].metadata.name}
Need to actually pass in as '"':
kubectl get nodes -o jsonpath={.items[?(@.metadata.annotations.pv-candidate=='"'data-pool'"')].metadata.name}
The ' quote approach, although correct when pasting into Windows cmd, will hang at the line:
`iter(p.stdout.readline, b'')`
The shlex.split call does the right thing for each platform, just use the '"' pattern for a '
"""
MAX_RETRIES = 5
output = ""
retry = False
global first_run
global rules
if first_run:
first_run = False
rules = load_rules()
# When running `azdata sql query` on Windows, replace any \n in """ strings, with " ", otherwise we see:
#
# ('HY090', '[HY090] [Microsoft][ODBC Driver Manager] Invalid string or buffer length (0) (SQLExecDirectW)')
#
if platform.system() == "Windows" and cmd.startswith("azdata sql query"):
cmd = cmd.replace("\n", " ")
# shlex.split is required on bash and for Windows paths with spaces
#
cmd_actual = shlex.split(cmd)
# Store this (i.e. kubectl, python etc.) to support binary context aware error_hints and retries
#
user_provided_exe_name = cmd_actual[0].lower()
# When running python, use the python in the ADS sandbox ({sys.executable})
#
if cmd.startswith("python "):
cmd_actual[0] = cmd_actual[0].replace("python", sys.executable)
# On Mac, when ADS is not launched from terminal, LC_ALL may not be set, which causes pip installs to fail
# with:
#
# UnicodeDecodeError: 'ascii' codec can't decode byte 0xc5 in position 4969: ordinal not in range(128)
#
# Setting it to a default value of "en_US.UTF-8" enables pip install to complete
#
if platform.system() == "Darwin" and "LC_ALL" not in os.environ:
os.environ["LC_ALL"] = "en_US.UTF-8"
# When running `kubectl`, if AZDATA_OPENSHIFT is set, use `oc`
#
if cmd.startswith("kubectl ") and "AZDATA_OPENSHIFT" in os.environ:
cmd_actual[0] = cmd_actual[0].replace("kubectl", "oc")
# To aid supportabilty, determine which binary file will actually be executed on the machine
#
which_binary = None
# Special case for CURL on Windows. The version of CURL in Windows System32 does not work to
# get JWT tokens, it returns "(56) Failure when receiving data from the peer". If another instance
# of CURL exists on the machine use that one. (Unfortunately the curl.exe in System32 is almost
# always the first curl.exe in the path, and it can't be uninstalled from System32, so here we
# look for the 2nd installation of CURL in the path)
if platform.system() == "Windows" and cmd.startswith("curl "):
path = os.getenv('PATH')
for p in path.split(os.path.pathsep):
p = os.path.join(p, "curl.exe")
if os.path.exists(p) and os.access(p, os.X_OK):
if p.lower().find("system32") == -1:
cmd_actual[0] = p
which_binary = p
break
# Find the path based location (shutil.which) of the executable that will be run (and display it to aid supportability), this
# seems to be required for .msi installs of azdata.cmd/az.cmd. (otherwise Popen returns FileNotFound)
#
# NOTE: Bash needs cmd to be the list of the space separated values hence shlex.split.
#
if which_binary == None:
which_binary = shutil.which(cmd_actual[0])
if which_binary == None:
if user_provided_exe_name in install_hint and install_hint[user_provided_exe_name] is not None:
display(Markdown(f'HINT: Use [{install_hint[user_provided_exe_name][0]}]({install_hint[user_provided_exe_name][1]}) to resolve this issue.'))
raise FileNotFoundError(f"Executable '{cmd_actual[0]}' not found in path (where/which)")
else:
cmd_actual[0] = which_binary
start_time = datetime.datetime.now().replace(microsecond=0)
print(f"START: {cmd} @ {start_time} ({datetime.datetime.utcnow().replace(microsecond=0)} UTC)")
print(f" using: {which_binary} ({platform.system()} {platform.release()} on {platform.machine()})")
print(f" cwd: {os.getcwd()}")
# Command-line tools such as CURL and AZDATA HDFS commands output
# scrolling progress bars, which causes Jupyter to hang forever, to
# workaround this, use no_output=True
#
# Work around a infinite hang when a notebook generates a non-zero return code, break out, and do not wait
#
wait = True
try:
if no_output:
p = Popen(cmd_actual)
else:
p = Popen(cmd_actual, stdout=PIPE, stderr=PIPE, bufsize=1)
with p.stdout:
for line in iter(p.stdout.readline, b''):
line = line.decode()
if return_output:
output = output + line
else:
if cmd.startswith("azdata notebook run"): # Hyperlink the .ipynb file
regex = re.compile(' "(.*)"\: "(.*)"')
match = regex.match(line)
if match:
if match.group(1).find("HTML") != -1:
display(Markdown(f' - "{match.group(1)}": "{match.group(2)}"'))
else:
display(Markdown(f' - "{match.group(1)}": "[{match.group(2)}]({match.group(2)})"'))
wait = False
break # otherwise infinite hang, have not worked out why yet.
else:
print(line, end='')
if rules is not None:
apply_expert_rules(line)
if wait:
p.wait()
except FileNotFoundError as e:
if install_hint is not None:
display(Markdown(f'HINT: Use {install_hint} to resolve this issue.'))
raise FileNotFoundError(f"Executable '{cmd_actual[0]}' not found in path (where/which)") from e
exit_code_workaround = 0 # WORKAROUND: azdata hangs on exception from notebook on p.wait()
if not no_output:
for line in iter(p.stderr.readline, b''):
try:
line_decoded = line.decode()
except UnicodeDecodeError:
# NOTE: Sometimes we get characters back that cannot be decoded(), e.g.
#
# \xa0
#
# For example see this in the response from `az group create`:
#
# ERROR: Get Token request returned http error: 400 and server
# response: {"error":"invalid_grant",# "error_description":"AADSTS700082:
# The refresh token has expired due to inactivity.\xa0The token was
# issued on 2018-10-25T23:35:11.9832872Z
#
# which generates the exception:
#
# UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa0 in position 179: invalid start byte
#
print("WARNING: Unable to decode stderr line, printing raw bytes:")
print(line)
line_decoded = ""
pass
else:
# azdata emits a single empty line to stderr when doing an hdfs cp, don't
# print this empty "ERR:" as it confuses.
#
if line_decoded == "":
continue
print(f"STDERR: {line_decoded}", end='')
if line_decoded.startswith("An exception has occurred") or line_decoded.startswith("ERROR: An error occurred while executing the following cell"):
exit_code_workaround = 1
# inject HINTs to next TSG/SOP based on output in stderr
#
if user_provided_exe_name in error_hints:
for error_hint in error_hints[user_provided_exe_name]:
if line_decoded.find(error_hint[0]) != -1:
display(Markdown(f'HINT: Use [{error_hint[1]}]({error_hint[2]}) to resolve this issue.'))
# apply expert rules (to run follow-on notebooks), based on output
#
if rules is not None:
apply_expert_rules(line_decoded)
# Verify if a transient error, if so automatically retry (recursive)
#
if user_provided_exe_name in retry_hints:
for retry_hint in retry_hints[user_provided_exe_name]:
if line_decoded.find(retry_hint) != -1:
if retry_count < MAX_RETRIES:
print(f"RETRY: {retry_count} (due to: {retry_hint})")
retry_count = retry_count + 1
output = run(cmd, return_output=return_output, retry_count=retry_count)
if return_output:
return output
else:
return
elapsed = datetime.datetime.now().replace(microsecond=0) - start_time
# WORKAROUND: We avoid infinite hang above in the `azdata notebook run` failure case, by inferring success (from stdout output), so
# don't wait here, if success known above
#
if wait:
if p.returncode != 0:
raise SystemExit(f'Shell command:\n\n\t{cmd} ({elapsed}s elapsed)\n\nreturned non-zero exit code: {str(p.returncode)}.\n')
else:
if exit_code_workaround !=0 :
raise SystemExit(f'Shell command:\n\n\t{cmd} ({elapsed}s elapsed)\n\nreturned non-zero exit code: {str(exit_code_workaround)}.\n')
print(f'\nSUCCESS: {elapsed}s elapsed.\n')
if return_output:
return output
def load_json(filename):
"""Load a json file from disk and return the contents"""
with open(filename, encoding="utf8") as json_file:
return json.load(json_file)
def load_rules():
"""Load any 'expert rules' from the metadata of this notebook (.ipynb) that should be applied to the stderr of the running executable"""
try:
# Load this notebook as json to get access to the expert rules in the notebook metadata.
#
j = load_json("sop055-uninstall-azdata.ipynb")
except:
        pass # If the user has renamed the notebook, we can't load ourselves. NOTE: is there a way in Jupyter to know your own filename?
else:
if "metadata" in j and \
"azdata" in j["metadata"] and \
"expert" in j["metadata"]["azdata"] and \
"rules" in j["metadata"]["azdata"]["expert"]:
rules = j["metadata"]["azdata"]["expert"]["rules"]
rules.sort() # Sort rules, so they run in priority order (the [0] element). Lowest value first.
# print (f"EXPERT: There are {len(rules)} rules to evaluate.")
return rules
def apply_expert_rules(line):
"""Determine if the stderr line passed in, matches the regular expressions for any of the 'expert rules', if so
inject a 'HINT' to the follow-on SOP/TSG to run"""
global rules
for rule in rules:
# rules that have 9 elements are the injected (output) rules (the ones we want). Rules
# with only 8 elements are the source (input) rules, which are not expanded (i.e. TSG029,
# not ../repair/tsg029-nb-name.ipynb)
if len(rule) == 9:
notebook = rule[1]
cell_type = rule[2]
output_type = rule[3] # i.e. stream or error
output_type_name = rule[4] # i.e. ename or name
output_type_value = rule[5] # i.e. SystemExit or stdout
details_name = rule[6] # i.e. evalue or text
expression = rule[7].replace("\\*", "*") # Something escaped *, and put a \ in front of it!
if debug_logging:
print(f"EXPERT: If rule '{expression}' satisfied', run '{notebook}'.")
if re.match(expression, line, re.DOTALL):
if debug_logging:
                    print("EXPERT: MATCH: name = value: '{0}' = '{1}' matched expression '{2}', therefore HINT '{3}'".format(output_type_name, output_type_value, expression, notebook))
match_found = True
display(Markdown(f'HINT: Use [{notebook}]({notebook}) to resolve this issue.'))
print('Common functions defined successfully.')
# Hints for binary (transient fault) retry, (known) error and install guide
#
retry_hints = {'python': []}
error_hints = {'python': [['Library not loaded: /usr/local/opt/unixodbc', 'SOP008 - Backup HDFS files to Azure Data Lake Store Gen2 with distcp', '../backup-restore/sop008-distcp-backup-to-adl-gen2.ipynb'], ['WARNING: You are using pip version', 'SOP040 - Upgrade pip in ADS Python sandbox', '../install/sop040-upgrade-pip.ipynb']]}
install_hint = {'python': []}
```
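The docstring of `run` leans on `shlex.split` for quoting. Its behavior is easy to check in isolation, since it tokenizes a command string the way a POSIX shell would:

```python
import shlex

# Spaces split tokens, but quoted substrings stay together as one argv entry:
argv = shlex.split('kubectl get pod "my pod"')

# Interleaving single and double quotes passes literal quote characters
# through into a single token, as the docstring's kubectl example relies on:
nested = shlex.split("""echo '"'data-pool'"'""")
```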
### Uninstall azdata CLI
```
import sys
run(f'python -m pip uninstall -r https://aka.ms/azdata -y')
```
### Pip list
Verify there are no azdata modules in the list
```
run(f'python -m pip list')
```
### Related (SOP055, SOP064)
```
print('Notebook execution complete.')
```
```
# -*- coding: utf-8 -*-
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
```
# EDA for the first time
### URLs
- [Pandas documentation](https://pandas.pydata.org/docs/)
- [Pandas tutorials](https://pandas.pydata.org/pandas-docs/version/0.15/tutorials.html)
### Training on ...
- how to read data
- how to analyze every attribute
- how to analyze relationships between attributes
- how to visualize data (which kinds of visualizations, their characteristics, ...)
### EDA questions to answer by analyzing
- Which questions?
- What is the goal of our work?
# Country dataset - Reading from a file using Pandas
```
import matplotlib
import matplotlib.pylab as pylab
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
import scipy.stats as stats
filename = "data/countries.csv"
df = pd.read_csv(filename)
df.head()
```
**Reading the file again with the proper `decimal` separator for this file's numbers**
```
df = pd.read_csv(filename, decimal=',')
df.head()
df.info()
df.Region.unique()
```
**String cleaning - remove blank spaces**
```
df.Region = df.Region.str.strip()
df.Country = df.Country.str.strip()
df.Region.unique()
```
## Missing value?
**In how many rows?**
```
df.shape[0] - df.dropna().shape[0]
df.isnull().sum()
df.isnull().sum().sum()
df[df.isnull().any(axis=1)]
```
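Once the missing values are located, the two standard moves are dropping or imputing them. A small self-contained sketch on a toy frame (hypothetical column names, not the countries data):

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({"a": [1.0, np.nan, 3.0], "b": ["x", "y", None]})
dropped = toy.dropna()                                     # rows with any NaN removed
imputed = toy.fillna({"a": toy.a.mean(), "b": "unknown"})  # per-column fill values
```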
## Visualization
```
df.info()
plt.rcParams["figure.figsize"] = (10,5)
sns.boxplot(x='Region',
y='GDP ($ per capita)',
data=df)
pylab.xticks(rotation=90)
df.Region.value_counts().plot(kind='bar')
sns.pairplot(df.dropna()[['Pop. Density (per sq. mi.)',
'GDP ($ per capita)',
'Birthrate',
'Net migration',
'Literacy (%)',
'Phones (per 1000)',
'Deathrate']])
df['GDP ($ per capita)'].corr(df['Birthrate'])
sns.scatterplot(x='GDP ($ per capita)',
y='Birthrate',
data=df)
df['Phones (per 1000)'].corr(df['Birthrate'])
sns.scatterplot(x='Phones (per 1000)',
y='Birthrate',
data=df)
```
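The `.corr()` calls above report Pearson's r. The same coefficient can be checked by hand on a toy pair of vectors:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.0, 6.0, 8.0])   # perfectly linear in x
r = np.corrcoef(x, y)[0, 1]          # Pearson's r, +1 for a perfect line

# Manual version: covariance divided by the product of standard deviations
r_manual = ((x - x.mean()) * (y - y.mean())).sum() / (
    np.sqrt(((x - x.mean()) ** 2).sum()) * np.sqrt(((y - y.mean()) ** 2).sum())
)
```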
### In IAU, we will deal with supervised learning
- Regression: Y ∈ ℝ
- Classification: Y ∈ {C₁, C₂, …, C_N}
# Obese dataset
```
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
import scipy.stats as stats
df = pd.read_csv("data/obese.csv")
df.info()
```
One column's name is too long, so I'm going to shorten it
```
df.rename(columns={'Share of adults who are obese (%)' : 'obesity_rate'}, inplace=True)
df.Entity.unique()
```
**Visualization**
```
af = df[df.Entity == 'Africa']
sns.regplot(x=af.Year, y=af.obesity_rate)
usa = df[df.Entity == 'United States']
sns.regplot(x=usa.Year, y=usa.obesity_rate)
plt.legend(labels=['Africa', 'United States'])
```
**Simple Y=f(X)**
```
slope, intercept, r_value, p_value, std_err = stats.linregress(af.Year, af.obesity_rate)
line = slope * af.Year + intercept
plt.plot(af.Year, af.obesity_rate, 'o', af.Year, line)
slope, intercept, r_value, p_value, std_err = stats.linregress(usa.Year, usa.obesity_rate)
line = slope * usa.Year + intercept
plt.plot(usa.Year, usa.obesity_rate, 'o', usa.Year, line)
x = 2300
y = slope * x + intercept
y
```
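The prediction above is plain linear extrapolation from the fitted slope and intercept. The same line can be recovered with `np.polyfit` (toy numbers below, not the obesity data); note that extrapolating to a year like 2300 leaves the data's range entirely, so the result should not be trusted:

```python
import numpy as np

years = np.array([2000.0, 2005.0, 2010.0, 2015.0])
rate = np.array([10.0, 12.0, 14.0, 16.0])      # made-up, perfectly linear
slope, intercept = np.polyfit(years, rate, 1)  # degree-1 least-squares fit
pred = slope * 2020 + intercept                # extrapolated value for 2020
```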
# Diamond dataset
```
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
import scipy.stats as stats
df = pd.read_csv('data/diamonds.csv')
df.describe()
df.color.value_counts()
df.color.value_counts().plot(kind='bar')
df.color.value_counts().plot(kind='pie')
```
**Your code:**
# Monitoring dataset
```
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
import scipy.stats as stats
filename = 'data/monitoring.csv'
df = pd.read_csv(filename)
df.describe()
%%bash
head -n 10 data/monitoring.csv
data = pd.read_csv(filename,
sep='\t',
header=None,
na_values=[-999, -9999],
index_col=0)
data.head()
```
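The `na_values=[-999, -9999]` argument turns the file's sentinel codes into real NaN at load time. A quick check of the mechanism on an inline tab-separated string:

```python
import io
import pandas as pd

raw = io.StringIO("t\tv\n1\t-999\n2\t5\n")
d = pd.read_csv(raw, sep="\t", na_values=[-999])
n_missing = int(d.v.isna().sum())   # the -999 row becomes NaN
```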
**Your code:**
```
import datetime
lipidname = "OEMC"
tail = "CCDDC CDCC"
link = "G G"
head = "C P"
description = "; A general model Plasmenylcholines (MC) lipid \n; C18:2(9c,12c) linoleic acid, C18:1(9c) oleic acid \n"
modeledOn="; This topology follows the standard Martini 2.0 lipid definitions and building block rules.\n; Reference(s): \n; S.J. Marrink, A.H. de Vries, A.E. Mark. Coarse grained model for semi-quantitative lipid simulations. JPC-B, 108:750-760, \n; 2004. doi:10.1021/jp036508g \n; S.J. Marrink, H.J. Risselada, S. Yefimov, D.P. Tieleman, A.H. de Vries. The MARTINI force field: coarse grained model for \n; biomolecular simulations. JPC-B, 111:7812-7824, 2007. doi:10.1021/jp071097f \n; T.A. Wassenaar, H.I. Ingolfsson, R.A. Bockmann, D.P. Tieleman, S.J. Marrink. Computational lipidomics with insane: a versatile \n; tool for generating custom membranes for molecular simulations. JCTC, 150410125128004, 2015. doi:10.1021/acs.jctc.5b00209\n; Created: "
now = datetime.datetime.now()
membrane="testmembrane"
insane="../insane+SF.py"
mdparams="../test.mdp"
martinipath="../martini.ff/"
ITPCatalogue="./epithelial.cat"
ITPMasterFile="martini_v2_epithelial.itp"
modeledOn+= now.strftime("%Y.%m.%d")+"\n"
# Cleaning up intermediate files from previous runs
!rm -f *#*
!rm -f *step*
!rm -f {membrane}*
import fileinput
import os.path
print("Create itp")
!python {martinipath}/lipid-martini-itp-v06.py -o {lipidname}.itp -alname {lipidname} -name {lipidname} -alhead '{head}' -allink '{link}' -altail '{tail}'
#update description and parameters
with fileinput.FileInput(lipidname+".itp", inplace=True) as file:
for line in file:
if line == "; This is a ...\n":
print(description, end='')
elif line == "; Was modeled on ...\n":
print(modeledOn, end='')
else:
print(line, end='')
!sed -i 's/GL2/N0/g' {lipidname}.itp
!sed -i 's/C1B/C3/g' {lipidname}.itp
#Add this ITP file to the catalogue file
if not os.path.exists(ITPCatalogue):
ITPCatalogueData = []
else:
with open(ITPCatalogue, 'r') as file :
ITPCatalogueData = file.read().splitlines()
ITPCatalogueData = [x for x in ITPCatalogueData if not x==lipidname+".itp"]
ITPCatalogueData.append(lipidname+".itp")
with open(ITPCatalogue, 'w') as file :
file.writelines("%s\n" % item for item in ITPCatalogueData)
#build ITPFile
with open(martinipath+ITPMasterFile, 'w') as masterfile:
for ITPfilename in ITPCatalogueData:
with open(ITPfilename, 'r') as ITPfile :
for line in ITPfile:
masterfile.write(line)
print("Done")
# build a simple membrane to visualize this species
!python2 {insane} -o {membrane}.gro -p {membrane}.top -d 0 -x 3 -y 3 -z 3 -sol PW -center -charge 0 -orient -u {lipidname}:1 -l {lipidname}:1 -itpPath {martinipath}
import os #Operating system specific commands
import re #Regular expression library
print("Test")
print("Grompp")
grompp = !gmx grompp -f {mdparams} -c {membrane}.gro -p {membrane}.top -o {membrane}.tpr
success=True
for line in grompp:
if re.search("ERROR", line):
success=False
if re.search("Fatal error", line):
success=False
#if not success:
print(line)
if success:
print("Run")
!export GMX_MAXCONSTRWARN=-1
!export GMX_SUPPRESS_DUMP=1
run = !gmx mdrun -v -deffnm {membrane}
summary=""
logfile = membrane+".log"
if not os.path.exists(logfile):
print("no log file")
print("== === ====")
for line in run:
print(line)
else:
try:
file = open(logfile, "r")
fe = False
for line in file:
if fe:
success=False
summary=line
elif re.search("^Steepest Descents.*converge", line):
success=True
summary=line
break
elif re.search("Fatal error", line):
fe = True
except IOError as exc:
        success = False
        summary = exc
if success:
print("Success")
else:
print(summary)
```
# Speed and Quality of Katz-Eigen Community Detection vs Louvain
```
import zen
import pandas as pd
import numpy as np
from clusteringAlgo import lineClustering
import matplotlib.pyplot as plt
```
#### Compare the speed of the Katz-eigen plot method of community detection with that of Louvain community detection, using the 328-node Amazon product network.
```
def katz(G,tol=0.01,max_iter=1000,alpha=0.001,beta=1):
iteration = 0
centrality = np.zeros(G.num_nodes)
while iteration < max_iter:
iteration += 1 # increment iteration count
centrality_old = centrality.copy()
for node in G.nodes_():
Ax = 0
for neighbor in G.neighbors_(node):
weight = G.weight_(G.edge_idx_(neighbor,node))
Ax += np.multiply(centrality[neighbor],weight)
#Ax += centrality[neighbor] #exclude weight due to overflow in multiplication
centrality[node] = np.multiply(alpha,Ax)+beta
if np.sum(np.abs(np.subtract(centrality,centrality_old))) < tol:
return centrality
def modular_graph(Size1, Size2, edges1, edges2, common, katz_alpha=0.001):
g1 = zen.generating.barabasi_albert(Size1,edges1)
avgDeg1 = (2.0 * g1.num_edges)/g1.num_nodes
lcc1 = np.mean(zen.algorithms.clustering.lcc_(g1))
g2 = zen.generating.barabasi_albert(Size2,edges2)
avgDeg2 = (2.0 * g2.num_edges)/g2.num_nodes
lcc2 = np.mean(zen.algorithms.clustering.lcc_(g2))
Size = Size1 + Size2
G = zen.Graph()
for i in range(Size):
G.add_node(i)
for edge in g1.edges_iter():
u = edge[0]
v = edge[1]
G.add_edge(u,v)
for edge in g2.edges_iter():
u = edge[0]+Size1
v = edge[1]+Size1
G.add_edge(u,v)
# Select random pairs of nodes to connect the subgraphs
join_nodes = np.empty((common,2),dtype=np.int64)
nodes1 = np.random.randint(0,Size1,size=common)
nodes2 = np.random.randint(Size1,Size,size=common)
join_nodes[:,0] = nodes1
join_nodes[:,1] = nodes2
for edge in join_nodes:
if not G.has_edge(edge[0],edge[1]):
G.add_edge(edge[0],edge[1])
return G
def modularity(G,classDict,classList):
Q = zen.algorithms.modularity(G,classDict)
# Maximum Modularity
count=0.0
for e in G.edges():
n1 = G.node_idx(e[0])
n2 = G.node_idx(e[1])
if classList[n1] == classList[n2]:
count += 1
same = count / G.num_edges
rand = same - Q
qmax = 1 - rand
return Q, qmax
from zen.algorithms.community import spectral_modularity as spm
def spectral_community_detection(G,ke_plot=False):
cset = spm(G)
if ke_plot:
evc = zen.algorithms.eigenvector_centrality_(G)
kc = katz(G,alpha=1e-4)
#scale
evc = evc - np.min(evc)
evc = evc / np.max(evc)
kc = kc - np.min(kc)
kc = kc / np.max(kc)
comm_dict = {}
comm_list = np.zeros(G.num_nodes)
for i,community in enumerate(cset.communities()):
comm_dict[i] = community.nodes()
comm_list[community.nodes_()] = i
if ke_plot:
plt.scatter(evc[community.nodes_()],kc[community.nodes_()],s=3,label='cluster %d'%i)
if ke_plot:
plt.xlabel('Eigenvector Centrality (normalized)')
        plt.ylabel('Katz Centrality (normalized)')
plt.legend()
plt.show()
q,qmax = modularity(G,comm_dict,comm_list)
print '%d communities found.'%(i+1)
print 'Q: %.3f'%q
print 'Normalized Q: %.3f'%(q/qmax)
def ke_community_detection(G,dtheta=0.01,dx=0.5,window=10,plot=False,ke_plot=False):
evc = zen.algorithms.eigenvector_centrality_(G)
kc = katz(G,alpha=1e-4)
#scale
evc = evc - np.min(evc)
evc = evc / np.max(evc)
kc = kc - np.min(kc)
kc = kc / np.max(kc)
clusters = lineClustering(evc,kc,dtheta=dtheta,dx=dx,window=window,plot=plot)
ClassDict = {}
ClassList = np.zeros(G.num_nodes)
for i,c in enumerate(clusters):
ClassDict[i] = [G.node_object(x) for x in c]
ClassList[c]=i
if ke_plot:
plt.scatter(evc[c],kc[c],s=3,label='cluster %d'%i)
if ke_plot:
plt.xlabel('Eigenvector Centrality (normalized)')
        plt.ylabel('Katz Centrality (normalized)')
plt.legend()
plt.show()
q,qmax = modularity(G,ClassDict,ClassList)
print '%d communities found.'%(i+1)
print 'Q: %.3f'%q
print 'Normalized Q: %.3f'%(q/qmax)
from zen.algorithms.community import louvain
def louvain_community_detection(G,ke_plot=False):
cset = louvain(G)
if ke_plot:
evc = zen.algorithms.eigenvector_centrality_(G)
kc = katz(G,alpha=1e-4)
#scale
evc = evc - np.min(evc)
evc = evc / np.max(evc)
kc = kc - np.min(kc)
kc = kc / np.max(kc)
comm_dict = {}
comm_list = np.zeros(G.num_nodes)
for i,community in enumerate(cset.communities()):
comm_dict[i] = community.nodes()
comm_list[community.nodes_()] = i
if ke_plot:
            plt.scatter(evc[community.nodes_()],kc[community.nodes_()],s=3,label='cluster %d'%i)
if ke_plot:
plt.xlabel('Eigenvector Centrality (normalized)')
        plt.ylabel('Katz Centrality (normalized)')
plt.legend()
plt.show()
q,qmax = modularity(G,comm_dict,comm_list)
print '%d communities found.'%(i+1)
print 'Q: %.3f'%q
print 'Normalized Q: %.3f'%(q/qmax)
```
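The iterative `katz()` defined above converges, for α below 1/λ_max of the adjacency matrix, to the closed form x = β(I − αA)⁻¹·1. A numpy sketch of that closed form on a tiny hand-built graph (not using zen):

```python
import numpy as np

# 3-node star: node 0 linked to nodes 1 and 2
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)
alpha, beta = 0.1, 1.0

# Closed form: x = beta * (I - alpha * A)^-1 @ ones
x = beta * np.linalg.solve(np.eye(3) - alpha * A, np.ones(3))
# The hub (node 0) gets the highest centrality; the two leaves tie.
```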
## Test on Amazon Product Graph
```
G = zen.io.gml.read('amazon_product.gml',weight_fxn=lambda x: x['weight'])
%%time
ke_community_detection(G)
%%time
louvain_community_detection(G)
%%time
spectral_community_detection(G)
```
## Test on Amazon Beauty Graph
```
G = zen.io.gml.read('amazon_reviews_beauty.gml',weight_fxn=lambda x: x['weight'])
G_ = zen.io.gml.read('amazon_reviews_beauty.gml',weight_fxn=lambda x: 1.0)
print G.num_nodes
print G.num_edges
%%time
ke_community_detection(G,dx=0.3)
%%time
spectral_community_detection(G_)
```
### Test on Amazon Health Graph
```
G = zen.io.gml.read('amazon_reviews_health.gml',weight_fxn=lambda x: x['weight'])
G_ = zen.io.gml.read('amazon_reviews_health.gml',weight_fxn=lambda x: 1.0)
print(G.num_nodes)
print(G.num_edges)
%%time
ke_community_detection(G,dx=0.3)
%%time
spectral_community_detection(G_)
```
### Test on DBLP Graph
```
#G = zen.io.edgelist.read('com-dblp.ungraph.txt')
G = zen.io.gml.read('dblp_top_2_weighted.gml',weight_fxn=lambda x:x['weight'])
G_ = zen.io.gml.read('dblp_top_2_weighted.gml',weight_fxn=lambda x: 1.0)
print(G.num_nodes)
print(G.num_edges)
%%time
ke_community_detection(G,dx=0.07)
%%time
louvain_community_detection(G)
%%time
spectral_community_detection(G_)
```
### Test on synthetic graphs
```
G_synth = modular_graph(500,500,15,20,100,katz_alpha=1e-4)
print("Nodes: %d" % G_synth.num_nodes)
print("Edges: %d" % G_synth.num_edges)
%%time
ke_community_detection(G_synth)
%%time
louvain_community_detection(G_synth)
%%time
spectral_community_detection(G_synth)
G_synth = modular_graph(1000,1000,4,7,100,katz_alpha=1e-4)
print("Nodes: %d" % G_synth.num_nodes)
print("Edges: %d" % G_synth.num_edges)
%%time
ke_community_detection(G_synth)
%%time
louvain_community_detection(G_synth)
%%time
spectral_community_detection(G_synth)
G_synth = modular_graph(5000,5000,5,14,300,katz_alpha=1e-4)
print("Nodes: %d" % G_synth.num_nodes)
print("Edges: %d" % G_synth.num_edges)
%%time
ke_community_detection(G_synth)
%%time
louvain_community_detection(G_synth)
%%time
spectral_community_detection(G_synth)
```
| github_jupyter |
# Denoising Autoencoder
Sticking with the MNIST dataset, let's add noise to our data and see if we can define and train an autoencoder to _de_-noise the images.
<img src='notebook_ims/autoencoder_denoise.png' width=70%/>
Let's get started by importing our libraries and getting the dataset.
```
# The MNIST datasets are hosted on yann.lecun.com that has moved under CloudFlare protection
# Run this script to enable the datasets download
# Reference: https://github.com/pytorch/vision/issues/1938
from six.moves import urllib
opener = urllib.request.build_opener()
opener.addheaders = [('User-agent', 'Mozilla/5.0')]
urllib.request.install_opener(opener)
import torch
import numpy as np
from torchvision import datasets
import torchvision.transforms as transforms
# convert data to torch.FloatTensor
transform = transforms.ToTensor()
# load the training and test datasets
train_data = datasets.MNIST(root='~/.pytorch/MNIST_data/', train=True,
download=True, transform=transform)
test_data = datasets.MNIST(root='~/.pytorch/MNIST_data/', train=False,
download=True, transform=transform)
# Create training and test dataloaders
num_workers = 0
# how many samples per batch to load
batch_size = 20
# prepare data loaders
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size, num_workers=num_workers)
```
### Visualize the Data
```
import matplotlib.pyplot as plt
%matplotlib inline
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = next(dataiter)
images = images.numpy()
# get one image from the batch
img = np.squeeze(images[0])
fig = plt.figure(figsize = (5,5))
ax = fig.add_subplot(111)
ax.imshow(img, cmap='gray')
```
---
# Denoising
As I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1.
>**We'll use noisy images as input and the original, clean images as targets.**
Below is an example of some of the noisy images I generated and the associated, denoised images.
<img src='notebook_ims/denoising.png' />
Since this is a harder problem for the network, we'll want to use _deeper_ convolutional layers here; layers with more feature maps. You might also consider adding additional layers. I suggest starting with a depth of 32 for the convolutional layers in the encoder, and the same depths going backward through the decoder.
#### TODO: Build the network for the denoising autoencoder. Add deeper and/or additional layers compared to the model above.
```
import torch.nn as nn
import torch.nn.functional as F
# define the NN architecture
class ConvDenoiser(nn.Module):
def __init__(self):
super(ConvDenoiser, self).__init__()
## encoder layers ##
## decoder layers ##
        ## a kernel of 2 and a stride of 2 will double the spatial dims
def forward(self, x):
## encode ##
## decode ##
return x
# initialize the NN
model = ConvDenoiser()
print(model)
```
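As a starting point for the TODO above, here is one possible architecture. It is a sketch following the suggested depth of 32 feature maps, not a reference solution; the kernel sizes and the number of layers are assumptions you should feel free to change:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvDenoiser(nn.Module):
    def __init__(self):
        super(ConvDenoiser, self).__init__()
        ## encoder layers ##
        # two conv layers with 32 feature maps each; max-pooling halves the spatial dims
        self.conv1 = nn.Conv2d(1, 32, 3, padding=1)
        self.conv2 = nn.Conv2d(32, 32, 3, padding=1)
        self.pool = nn.MaxPool2d(2, 2)
        ## decoder layers ##
        # transpose convolutions with kernel 2 and stride 2 double the spatial dims
        self.t_conv1 = nn.ConvTranspose2d(32, 32, 2, stride=2)
        self.t_conv2 = nn.ConvTranspose2d(32, 1, 2, stride=2)

    def forward(self, x):
        ## encode: 28x28 -> 14x14 -> 7x7 ##
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        ## decode: 7x7 -> 14x14 -> 28x28; sigmoid keeps pixel values in [0, 1] ##
        x = F.relu(self.t_conv1(x))
        x = torch.sigmoid(self.t_conv2(x))
        return x

# sanity check: the output shape matches the input shape
model = ConvDenoiser()
out = model(torch.zeros(1, 1, 28, 28))
print(out.shape)  # torch.Size([1, 1, 28, 28])
```

The final sigmoid matters here: since the targets are clipped to [0, 1], the output should live in the same range before it reaches `MSELoss`.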
---
## Training
We are only concerned with the training images, which we can get from the `train_loader`.
>In this case, we are actually **adding some noise** to these images and we'll feed these `noisy_imgs` to our model. The model will produce reconstructed images based on the noisy input. But, we want it to produce _normal_ un-noisy images, and so, when we calculate the loss, we will still compare the reconstructed outputs to the original images!
Because we're comparing pixel values in input and output images, it will be best to use a loss that is meant for a regression task. Regression is all about comparing quantities rather than probabilistic values. So, in this case, I'll use `MSELoss`. And compare output images and input images as follows:
```
loss = criterion(outputs, images)
```
```
# specify loss function
criterion = nn.MSELoss()
# specify optimizer
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
# number of epochs to train the model
n_epochs = 20
# for adding noise to images
noise_factor=0.5
for epoch in range(1, n_epochs+1):
# monitor training loss
train_loss = 0.0
###################
# train the model #
###################
for data in train_loader:
# _ stands in for labels, here
# no need to flatten images
images, _ = data
## add random noise to the input images
noisy_imgs = images + noise_factor * torch.randn(*images.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# clear the gradients of all optimized variables
optimizer.zero_grad()
## forward pass: compute predicted outputs by passing *noisy* images to the model
outputs = model(noisy_imgs)
# calculate the loss
# the "target" is still the original, not-noisy images
loss = criterion(outputs, images)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update running training loss
train_loss += loss.item()*images.size(0)
    # print avg training statistics (per-sample, since the running loss was scaled by batch size)
    train_loss = train_loss/len(train_loader.dataset)
print('Epoch: {} \tTraining Loss: {:.6f}'.format(
epoch,
train_loss
))
```
## Checking out the results
Here I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly good job of removing the noise, even though it's sometimes difficult to tell what the original number is.
```
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = next(dataiter)
# add noise to the test images
noisy_imgs = images + noise_factor * torch.randn(*images.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# get sample outputs
output = model(noisy_imgs)
# prep images for display
noisy_imgs = noisy_imgs.numpy()
# output is resized into a batch of images
output = output.view(batch_size, 1, 28, 28)
# use detach when it's an output that requires_grad
output = output.detach().numpy()
# plot the first ten input images and then reconstructed images
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(25,4))
# input images on top row, reconstructions on bottom
for imgs, row in zip([noisy_imgs, output], axes):
    for img, ax in zip(imgs, row):
ax.imshow(np.squeeze(img), cmap='gray')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
```
| github_jupyter |
```
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from matplotlib.backends.backend_pdf import PdfPages
import datetime
import os
plt.style.use('ggplot')
sns.set_style("whitegrid")
%matplotlib inline
%config InlineBackend.figure_format = 'svg'
def read_df(f, i):
items = f.split("/")
binner, assembly, l = items[i-2], items[i-1], items[i]
_df = pd.read_csv(f, sep="\t", header=0, index_col=0)
data = {"binner": [binner]*_df.shape[0],
"assembly": [assembly]*_df.shape[0],
"min_contig_length": [l]*_df.shape[0]}
_df = pd.merge(pd.DataFrame(data, index=_df.index), _df,
left_index=True, right_index=True)
return _df
def read_summary_stats(f):
"""
Reads the basic summary statistics for bins
"""
df = pd.read_csv(f, sep="\t", header=0)
value_vars = ["Mbp", "GC", "contigs", "n50"]
id_vars = ["binner", "assembly"]
# Use contig length for rows if doing multiple plotting
if len(df.min_contig_length.unique()) > 1:
id_vars.append("min_contig_length")
row = "min_contig_length"
else:
row = None
# Melt the stats dataframe
dfm = pd.melt(df, id_vars=id_vars, value_vars=value_vars)
bin_counts = df.groupby(["binner","assembly","min_contig_length"]).count().loc[:,"bp"]
bin_counts = pd.DataFrame(bin_counts).rename(columns={'bp': 'bins'}).reset_index()
return dfm, bin_counts, row
def read_checkm_stats(f):
"""
Reads extended statistics from checkm qa
"""
df = pd.read_csv(f, sep="\t", index_col=0, header=0)
df = df.assign(Purity = 100 - df.Contamination)
df = df.assign(Mbp = df["Genome size (bp)"] / 1000000)
return df
with PdfPages(snakemake.output[0]) as pdf:
#### Read the summary statistics ####
stats = [f for f in snakemake.input if os.path.basename(f) == "binning_summary.tsv"]
df, bin_counts, row = read_summary_stats(stats[0])
#### Plot the summary stats
plt.figure(figsize=(8, 8))
plt.title('Overall statistics')
sns.catplot(kind="strip", col="variable", hue="assembly", y="value", x="binner", data=df,
hue_order=sorted(df.assembly.unique()), sharey=False, linewidth=.5, row=row)
pdf.savefig(bbox_inches="tight")
plt.close()
#### Plot number of bins
plt.rc('text', usetex=False)
plt.figure(figsize=(8, 8))
plt.title("Number of bins")
sns.catplot(data=bin_counts, hue="assembly", kind="bar", x="binner", y="bins",
hue_order=sorted(bin_counts.assembly.unique()), col="min_contig_length", sharey=True)
pdf.savefig(bbox_inches="tight")
plt.close()
#### Read checkm stats if available ####
checkm_stats = [f for f in snakemake.input if os.path.basename(f) == "checkm.stats.tsv"]
if len(checkm_stats) > 0:
checkm_df = read_checkm_stats(checkm_stats[0])
lengths = sorted(checkm_df.min_contig_length.unique())
plt.rc('text', usetex=False)
fig, a = plt.subplots(nrows=len(lengths), ncols=1, figsize=(8,6*len(lengths)))
if len(lengths) > 1:
axes = [axis for axis in a]
else:
axes = [a]
for i, l in enumerate(lengths):
ax = sns.scatterplot(data=checkm_df.loc[checkm_df.min_contig_length==l], y="Completeness", x="Purity", style="binner", hue="assembly",
hue_order=sorted(checkm_df.assembly.unique()), size="Mbp", linewidth=.5, ax=axes[i])
ax.set_title("min contig length={}".format(l))
ax.legend(bbox_to_anchor=(1,1))
ax.set_xlabel("Purity (%)")
ax.set_ylabel("Completeness (%)")
pdf.savefig(fig, bbox_inches="tight")
plt.close()
# Set the file's metadata via the PdfPages object:
d = pdf.infodict()
d['Title'] = 'Binning report for NBIS-meta workflow'
d['Author'] = 'NBIS'
d['CreationDate'] = datetime.datetime.today()
```
| github_jupyter |
# Recommender Systems using Affinity Analysis
<hr>
Here we will look at affinity analysis that determines when objects occur
frequently together. This is also called market basket analysis, after one of
the use cases of determining when items are purchased together frequently.
In this example, we wish to work out when
two movies are recommended by the same reviewers.
### Affinity analysis
Affinity analysis is the task of determining when objects are used in similar
ways. The data for affinity analysis is often described in the form of a
transaction. Intuitively, this comes from a transaction at a store—determining
when objects are purchased together.
The classic algorithm for affinity analysis is called the Apriori algorithm. It addresses
the exponential problem of creating sets of items that occur frequently within a
database, called frequent itemsets. Once these frequent itemsets are discovered,
creating association rules is straightforward.
#### Apriori algorithm
First, we ensure that a rule
has sufficient support within the dataset. Defining a minimum support level is the
key parameter for Apriori. To build a frequent itemset, for an itemset (A, B) to have a
support of at least 30, both A and B must occur at least 30 times in the database. This
property extends to larger sets as well. For an itemset (A, B, C, D) to be considered
frequent, the set (A, B, C) must also be frequent (as must D).
These frequent itemsets can be built up and possible itemsets that are not frequent
(of which there are many) will never be tested. This saves significant time in testing
new rules.
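To make the downward-closure property concrete, here is a tiny self-contained sketch; the transactions and item names below are invented for illustration:

```python
# five invented transactions (sets of item IDs)
transactions = [
    {"A", "B", "C"},
    {"A", "B"},
    {"A", "C"},
    {"B", "C"},
    {"A", "B", "C", "D"},
]

def support(itemset, transactions):
    """Number of transactions containing every item in the itemset."""
    return sum(1 for t in transactions if itemset <= t)

# (A, B) can only be frequent at min_support=3 because A and B each are too
print(support({"A"}, transactions))       # 4
print(support({"B"}, transactions))       # 4
print(support({"A", "B"}, transactions))  # 3
```

Since the support of a superset can never exceed the support of any of its subsets, Apriori can discard every candidate built from an infrequent itemset without counting it.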
Other example algorithms for affinity analysis include the Eclat and FP-growth
algorithms. There are many improvements to these algorithms in the data mining
literature that further improve the efficiency of the method. In this chapter, we will
focus on the basic Apriori algorithm.
#### Choosing parameters
To perform association rule mining for affinity analysis, we first use the Apriori
to generate frequent itemsets. Next, we create association rules (for example, if a
person recommended movie X, they would also recommend movie Y) by testing
combinations of premises and conclusions within those frequent itemsets.
For the first stage, the Apriori algorithm needs a value for the minimum support
that an itemset needs to be considered frequent. Any itemsets with less support will
not be considered. Setting this minimum support too low will cause Apriori to test a
larger number of itemsets, slowing the algorithm down. Setting it too high will result
in fewer itemsets being considered frequent.
In the second stage, after the frequent itemsets have been discovered, association
rules are tested based on their confidence. We could choose a minimum confidence
level, a number of rules to return, or simply return all of them and let the user decide
what to do with them.
Here, we will return only rules above a given confidence level. Therefore,
we need to set our minimum confidence level. Setting this too low will result in rules
that have a high support, but are not very accurate. Setting this higher will result in
only more accurate rules being returned, but with fewer rules being discovered.
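Confidence is the fraction of transactions containing the premise that also contain the conclusion. A minimal self-contained sketch, again with invented transactions:

```python
# invented transactions for illustration
transactions = [
    {"X", "Y"},
    {"X", "Y"},
    {"X"},
    {"Y"},
]

def confidence(premise, conclusion, transactions):
    """support(premise ∪ {conclusion}) / support(premise)"""
    with_premise = [t for t in transactions if premise <= t]
    correct = sum(1 for t in with_premise if conclusion in t)
    return correct / len(with_premise)

# the rule "if X then Y" holds in 2 of the 3 transactions containing X
print(confidence({"X"}, "Y", transactions))  # 0.666...
```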
### The movie recommendation problem
Product recommendation is big business. Online stores use it to up-sell to
customers by recommending other products that they could buy. Making better
recommendations leads to better sales. When online shopping is selling to millions
of customers every year, there is a lot of potential money to be made by selling more
items to these customers.
Product recommendations have been researched for many years; however, the field
gained a significant boost when Netflix ran their Netflix Prize between 2007 and
2009. This competition aimed to determine if anyone can predict a user's rating of a
film better than Netflix was currently doing. The prize went to a team that was just
over 10 percent better than the current solution. While this may not seem like a large
improvement, such an improvement would net millions to Netflix in revenue from
better movie recommendations.
### Obtaining the dataset
Since the inception of the Netflix Prize, Grouplens, a research group at the University
of Minnesota, has released several datasets that are often used for testing algorithms
in this area. They have released several versions of a movie rating dataset, which
have different sizes. There is a version with 100,000 reviews, one with 1 million
reviews and one with 10 million reviews.
The datasets are available from http://grouplens.org/datasets/movielens/
and the dataset we are going to use in this chapter is the MovieLens 100k
dataset (the code below reads its u.data file). Download this dataset and unzip it in your data folder. We then load the dataset using pandas. The MovieLens dataset is in good shape; however, there are some changes from the
default options in pandas.read_csv that we need to make. To start with, the data is
separated by tabs, not commas. Next, there is no heading line. This means the first
line in the file is actually data and we need to manually set the column names. When loading the file, we set the delimiter parameter to the tab character, tell pandas
not to read the first row as the header (with header=None), and set the column
names.
```
ratings_filename = "data/ml-100k/u.data"
import pandas as pd
all_ratings = pd.read_csv(ratings_filename, delimiter="\t", header=None, names = ["UserID", "MovieID", "Rating", "Datetime"])
all_ratings["Datetime"] = pd.to_datetime(all_ratings['Datetime'],unit='s')
all_ratings[:5]
```
Sparse data formats:
This dataset is in a sparse format. Each row can be thought of as a cell in a large
feature matrix of the type used in previous chapters, where rows are users and
columns are individual movies. The first column would be each user's review
of the first movie, the second column would be each user's review of the second
movie, and so on.
There are 943 users and 1,682 movies in this dataset, which means that the full
matrix would be quite large. We may run into issues storing the whole matrix in
memory and computing on it would be troublesome. However, this matrix has the
property that most cells are empty, that is, most users have not reviewed most
movies. User #675, for example, has no review for movie #213, nor for most
other combinations of user and movie.
```
# As you can see, user #675 has no review for most movies, such as #213
all_ratings[all_ratings["UserID"] == 675].sort_values("MovieID")
```
The format given here represents the full matrix, but in a more compact way.
The first row indicates that user #196 reviewed movie #242, giving it a rating
of 3 (out of 5) on December 4, 1997.
Any combination of user and movie that isn't in this database is assumed to not exist.
This saves significant space, as opposed to storing a bunch of zeroes in memory. This
type of format is called a sparse matrix format. As a rule of thumb, if you expect
about 60 percent or more of your dataset to be empty or zero, a sparse format will
take less space to store.
When computing on sparse matrices, the focus isn't usually on the data we don't
have—comparing all of the zeroes. We usually focus on the data we have and
compare those.
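This (row, column, value) layout maps directly onto a sparse matrix type such as scipy's COO format. The triples below are invented for illustration, following the same layout as u.data:

```python
import numpy as np
from scipy.sparse import coo_matrix

# invented (user, movie, rating) triples
users   = np.array([196, 186, 22])
movies  = np.array([242, 302, 377])
ratings = np.array([3, 3, 1])

# rows are users, columns are movies; every absent cell is an implicit zero
m = coo_matrix((ratings, (users, movies)))
print(m.shape)  # (197, 378)
print(m.nnz)    # only 3 stored values out of roughly 74,000 cells
```

Only the non-zero entries are stored, which is exactly why the sparse layout saves so much memory when most cells are empty.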
### The Apriori implementation
The goal of this chapter is to produce rules of the following form: if a person
recommends these movies, they will also recommend this movie. We will also discuss
extensions where a person recommends a set of movies is likely to recommend
another particular movie.
To do this, we first need to determine if a person recommends a movie. We can
do this by creating a new feature Favorable, which is True if the person gave a
favorable review to a movie:
```
# Not all reviews are favourable! Our goal is "other recommended movies", so we only want favourable reviews
all_ratings["Favorable"] = all_ratings["Rating"] > 3
all_ratings[10:15]
all_ratings[all_ratings["UserID"] == 1][:5]
```
We will sample our dataset to form a training dataset. This also helps reduce
the size of the dataset that will be searched, making the Apriori algorithm run faster.
We obtain all reviews from the first 200 users:
```
# Sample the dataset. You can try increasing the size of the sample, but the run time will be considerably longer
ratings = all_ratings[all_ratings['UserID'].isin(range(200))] # & ratings["UserID"].isin(range(100))]
```
Next, we can create a dataset of only the favorable reviews in our sample:
```
# We start by creating a dataset of each user's favourable reviews
favorable_ratings = ratings[ratings["Favorable"]]
favorable_ratings[:5]
```
We will be searching the user's favorable reviews for our itemsets. So, the next thing
we need is the movies to which each user has given a favorable review. We can compute this
by grouping the dataset by the User ID and iterating over the movies in each group:
```
# Map each user to the frozenset of movies they reviewed favourably
favorable_reviews_by_users = dict((k, frozenset(v.values)) for k, v in favorable_ratings.groupby("UserID")["MovieID"])
len(favorable_reviews_by_users)
```
In the preceding code, we stored the values as a frozenset, allowing us to quickly
check if a movie has been rated by a user. Sets are much faster than lists for this type
of operation, and we will use them again in later code.
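A quick check of why frozenset (rather than a plain set) is the right choice here; the IDs are arbitrary examples:

```python
# frozensets are hashable, so they can be used as dictionary keys...
counts = {frozenset((50, 64)): 10}
print(counts[frozenset((50, 64))])  # 10

# ...while plain (mutable) sets cannot
try:
    {set((50, 64)): 10}
except TypeError:
    print("plain sets are unhashable")

# and subset tests against a user's reviews are fast set operations
reviews = frozenset((1, 50, 64, 181))
print(frozenset((50, 64)).issubset(reviews))  # True
```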
Finally, we can create a DataFrame that tells us how frequently each movie has been
given a favorable review:
```
# Find out how many movies have favourable ratings
num_favorable_by_movie = ratings[["MovieID", "Favorable"]].groupby("MovieID").sum()
num_favorable_by_movie.sort_values("Favorable", ascending=False)[:5]
```
### The Apriori algorithm revisited
The Apriori algorithm is part of our affinity analysis and deals specifically with
finding frequent itemsets within the data. The basic procedure of Apriori builds
up new candidate itemsets from previously discovered frequent itemsets. These
candidates are tested to see if they are frequent, and then the algorithm iterates as
explained here:
1. Create initial frequent itemsets by placing each item in its own itemset. Only items with at least the minimum support are used in this step.
2. New candidate itemsets are created from the most recently discovered frequent itemsets by finding supersets of the existing frequent itemsets.
3. All candidate itemsets are tested to see if they are frequent. If a candidate is not frequent then it is discarded. If there are no new frequent itemsets from this step, go to the last step.
4. Store the newly discovered frequent itemsets and go to the second step.
5. Return all of the discovered frequent itemsets.
#### Implementation
On the first iteration of Apriori, the newly discovered itemsets will have a length
of 2, as they will be supersets of the initial itemsets created in the first step. On the
second iteration (after applying the fourth step), the newly discovered itemsets will
have a length of 3. This allows us to quickly identify the newly discovered itemsets,
as needed in second step.
We can store our discovered frequent itemsets in a dictionary, where the key is the
length of the itemsets. This allows us to quickly access the itemsets of a given length,
and therefore the most recently discovered frequent itemsets, with the help of the
following code:
```
frequent_itemsets = {} # itemsets are sorted by length
```
We also need to define the minimum support needed for an itemset to be considered frequent. This value is chosen based on the dataset but feel free to try different
values. I recommend only changing it by 10 percent at a time though, as the time the
algorithm takes to run will be significantly different! Let's apply minimum support:
```
min_support = 50
```
To implement the first step of the Apriori algorithm, we create an itemset with each
movie individually and test if the itemset is frequent. We use frozenset, as they
allow us to perform set operations later on, and they can also be used as keys in our
counting dictionary (normal sets cannot).
```
# k=1 candidates are the movies with more than min_support favourable reviews
frequent_itemsets[1] = dict((frozenset((movie_id,)), row["Favorable"])
for movie_id, row in num_favorable_by_movie.iterrows()
if row["Favorable"] > min_support)
```
We implement the second and third steps together for efficiency by creating a
function that takes the newly discovered frequent itemsets, creates the supersets,
and then tests if they are frequent. First, we set up the function and the counting
dictionary. In keeping with our rule of thumb of reading through the data as little as possible,
we iterate over the dataset once per call to this function. While this doesn't matter too
much in this implementation (our dataset is relatively small), it is a good practice to
get into for larger applications. We iterate over all of the users and their reviews. Next, we go through each of the previously discovered itemsets and see if it is a
subset of the current set of reviews. If it is, this means that the user has reviewed
each movie in the itemset. We can then go through each individual movie that the user has reviewed that isn't
in the itemset, create a superset from it, and record in our counting dictionary that
we saw this particular itemset. We end our function by testing which of the candidate itemsets have enough support
to be considered frequent and return only those :
```
from collections import defaultdict
def find_frequent_itemsets(favorable_reviews_by_users, k_1_itemsets, min_support):
counts = defaultdict(int)
for user, reviews in favorable_reviews_by_users.items():
for itemset in k_1_itemsets:
if itemset.issubset(reviews):
for other_reviewed_movie in reviews - itemset:
current_superset = itemset | frozenset((other_reviewed_movie,))
counts[current_superset] += 1
return dict([(itemset, frequency) for itemset, frequency in counts.items() if frequency >= min_support])
```
To run our code, we create a loop that iterates over the steps of the Apriori
algorithm, storing the new itemsets as we go. In this loop, k represents the length
of the soon-to-be discovered frequent itemsets, allowing us to access the most
recently discovered ones by looking in our frequent_itemsets dictionary using the
key k - 1. We create the frequent itemsets and store them in our dictionary by their
length. We want to break out the preceding loop if we didn't find any new frequent itemsets
(and also to print a message to let us know what is going on). If we do find frequent itemsets, we print out a message to let us know the loop will
be running again. This algorithm can take a while to run, so it is helpful to know that
the code is still running while you wait for it to complete! Finally, after the end of the loop, we are no longer interested in the first set of
itemsets anymore—these are itemsets of length one, which won't help us create
association rules – we need at least two items to create association rules. Let's
delete them :
```
import sys
print("There are {} movies with more than {} favorable reviews".format(len(frequent_itemsets[1]), min_support))
sys.stdout.flush()
for k in range(2, 20):
# Generate candidates of length k, using the frequent itemsets of length k-1
# Only store the frequent itemsets
cur_frequent_itemsets = find_frequent_itemsets(favorable_reviews_by_users, frequent_itemsets[k-1],
min_support)
if len(cur_frequent_itemsets) == 0:
print("Did not find any frequent itemsets of length {}".format(k))
sys.stdout.flush()
break
else:
print("I found {} frequent itemsets of length {}".format(len(cur_frequent_itemsets), k))
#print(cur_frequent_itemsets)
sys.stdout.flush()
frequent_itemsets[k] = cur_frequent_itemsets
# We aren't interested in the itemsets of length 1, so remove those
del frequent_itemsets[1]
```
This code may take a few minutes to run.
```
print("Found a total of {0} frequent itemsets".format(sum(len(itemsets) for itemsets in frequent_itemsets.values())))
```
As we can see it returns 2968 frequent itemsets of varying lengths. You'll notice
that the number of itemsets grows as the length increases before it shrinks. It grows
because of the increasing number of possible rules. After a while, the large number
of combinations no longer has the support necessary to be considered frequent.
This results in the number shrinking. This shrinking is the benefit of the Apriori
algorithm. If we search all possible itemsets (not just the supersets of frequent ones),
we would be searching thousands of times more itemsets to see if they are frequent.
### Extracting association rules
After the Apriori algorithm has completed, we have a list of frequent itemsets.
These aren't exactly association rules, but they are similar to it. A frequent itemset
is a set of items with a minimum support, while an association rule has a premise
and a conclusion.
We can make an association rule from a frequent itemset by taking one of the movies
in the itemset and denoting it as the conclusion. The other movies in the itemset will
be the premise. This will form rules of the following form: if a reviewer recommends all
of the movies in the premise, they will also recommend the conclusion.
For each itemset, we can generate a number of association rules by setting each
movie to be the conclusion and the remaining movies as the premise.
In code, we first generate a list of all of the rules from each of the frequent itemsets,
by iterating over each of the discovered frequent itemsets of each length. We then iterate over every movie in this itemset, using it as our conclusion.
The remaining movies in the itemset are the premise. We save the premise and
conclusion as our candidate rule. This returns a very large number of candidate rules. We can see some by printing
out the first few rules in the list.
```
# Now we create the association rules. First, they are candidates until the confidence has been tested
candidate_rules = []
for itemset_length, itemset_counts in frequent_itemsets.items():
for itemset in itemset_counts.keys():
for conclusion in itemset:
premise = itemset - set((conclusion,))
candidate_rules.append((premise, conclusion))
print("There are {} candidate rules".format(len(candidate_rules)))
print(candidate_rules[:5])
```
The first five rules appear in the resulting output.
In these rules, the first part (the frozenset) is the list of movies in the premise,
while the number after it is the conclusion. In the first case, if a reviewer
recommends movie 50, they are also likely to recommend movie 64.
Next, we compute the confidence of each of these rules. The process starts by creating dictionaries to store how many times we see the
premise leading to the conclusion (a correct example of the rule) and how many
times it doesn't (an incorrect example). We iterate over all of the users, their favorable reviews, and over each candidate
association rule. We then test to see if the premise is applicable to this user. In other words, did the
user favorably review all of the movies in the premise? If the premise applies, we see if the conclusion movie was also rated favorably.
If so, the rule is correct in this instance. If not, it is incorrect. We then compute the confidence for each rule by dividing the correct count by the
total number of times the rule was seen.
```
# Now, we compute the confidence of each of these rules. This is very similar to what we did in chapter 1
correct_counts = defaultdict(int)
incorrect_counts = defaultdict(int)
for user, reviews in favorable_reviews_by_users.items():
for candidate_rule in candidate_rules:
premise, conclusion = candidate_rule
if premise.issubset(reviews):
if conclusion in reviews:
correct_counts[candidate_rule] += 1
else:
incorrect_counts[candidate_rule] += 1
rule_confidence = {candidate_rule:
correct_counts[candidate_rule] / float(correct_counts[candidate_rule] + incorrect_counts[candidate_rule])
for candidate_rule in candidate_rules}
# Choose only rules above a minimum confidence level
min_confidence = 0.9
# Filter out the rules with poor confidence
rule_confidence = {rule: confidence for rule, confidence in rule_confidence.items() if confidence > min_confidence}
print(len(rule_confidence))
```
Now we can print the top five rules by sorting this confidence dictionary and
printing the results:
```
from operator import itemgetter
sorted_confidence = sorted(rule_confidence.items(), key=itemgetter(1), reverse=True)
for index in range(5):
print("Rule #{0}".format(index + 1))
(premise, conclusion) = sorted_confidence[index][0]
print("Rule: If a person recommends {0} they will also recommend {1}".format(premise, conclusion))
print(" - Confidence: {0:.3f}".format(rule_confidence[(premise, conclusion)]))
print("")
```
The resulting printout shows only the movie IDs, which isn't very helpful without
the names of the movies as well. The dataset came with a file called u.item, which
stores the movie names and their corresponding MovieID (as well as other
information, such as the genre).
We can load the titles from this file using pandas. Additional information about
the file and categories is available in the README that came with the dataset.
The data in the file is in CSV format, but with fields separated by the | symbol;
it has no header, and the encoding is important to set. The column names were
found in the README file.
```
# Even better, we can get the movie titles themselves from the dataset
movie_name_filename = 'data/ml-100k/u.item'
movie_name_data = pd.read_csv(movie_name_filename, delimiter="|", header=None, encoding = "mac-roman")
movie_name_data.columns = ["MovieID", "Title", "Release Date", "Video Release", "IMDB", "<UNK>", "Action", "Adventure",
"Animation", "Children's", "Comedy", "Crime", "Documentary", "Drama", "Fantasy", "Film-Noir",
"Horror", "Musical", "Mystery", "Romance", "Sci-Fi", "Thriller", "War", "Western"]
```
Getting the movie title is important, so we will create a function that will return a
movie's title from its MovieID, saving us the trouble of looking it up each time. We look up the given MovieID in the movie_name_data DataFrame and return only
the title column. We use the values attribute to get the actual value (and not the pandas Series
object that is currently stored in title_object). We are only interested in the first
value (there should only be one title for a given MovieID anyway). We end the function by returning the title as needed.
```
def get_movie_name(movie_id):
title_object = movie_name_data[movie_name_data["MovieID"] == movie_id]["Title"]
title = title_object.values[0]
return title
get_movie_name(4)
```
We adjust our previous code for printing out the top
rules to also include the titles
```
for index in range(5):
print("Rule #{0}".format(index + 1))
(premise, conclusion) = sorted_confidence[index][0]
premise_names = ", ".join(get_movie_name(idx) for idx in premise)
conclusion_name = get_movie_name(conclusion)
print("Rule: If a person recommends {0} they will also recommend {1}".format(premise_names, conclusion_name))
print(" - Confidence: {0:.3f}".format(rule_confidence[(premise, conclusion)]))
print("")
```
The result is much more readable (there are still some issues, but we can ignore them
for now).
### Evaluation
In a broad sense, we can evaluate the association rules using the same concept as for
classification. We use a test set of data that was not used for training, and evaluate
our discovered rules based on their performance in this test set.
To do this, we will compute the test set confidence, that is, the confidence of each
rule on the testing set.
We won't apply a formal evaluation metric in this case; we simply examine the rules
and look for good examples.
First, we extract the test dataset, which is all of the records we didn't use in the
training set. We used the first 200 users (by ID value) for the training set, and we will
use all of the rest for the testing dataset. As with the training set, we will also get the
favorable reviews for each of the users in this dataset as well.
```
# Evaluation using test data
test_dataset = all_ratings[~all_ratings['UserID'].isin(range(200))]
test_favorable = test_dataset[test_dataset["Favorable"]]
#test_not_favorable = test_dataset[~test_dataset["Favorable"]]
test_favorable_by_users = dict((k, frozenset(v.values)) for k, v in test_favorable.groupby("UserID")["MovieID"])
#test_not_favorable_by_users = dict((k, frozenset(v.values)) for k, v in test_not_favorable.groupby("UserID")["MovieID"])
#test_users = test_dataset["UserID"].unique()
test_dataset[:5]
```
We then count the correct instances where the premise leads to the conclusion, in the
same way we did before. The only change here is the use of the test data instead of
the training data.
```
correct_counts = defaultdict(int)
incorrect_counts = defaultdict(int)
for user, reviews in test_favorable_by_users.items():
for candidate_rule in candidate_rules:
premise, conclusion = candidate_rule
if premise.issubset(reviews):
if conclusion in reviews:
correct_counts[candidate_rule] += 1
else:
incorrect_counts[candidate_rule] += 1
```
Next, we compute the confidence of each rule from the correct counts.
```
test_confidence = {candidate_rule: correct_counts[candidate_rule] / float(correct_counts[candidate_rule] + incorrect_counts[candidate_rule])
for candidate_rule in rule_confidence}
print(len(test_confidence))
sorted_test_confidence = sorted(test_confidence.items(), key=itemgetter(1), reverse=True)
print(sorted_test_confidence[:5])
```
Finally, we print out the best association rules with the titles instead of the
movie IDs.
```
for index in range(10):
print("Rule #{0}".format(index + 1))
(premise, conclusion) = sorted_confidence[index][0]
premise_names = ", ".join(get_movie_name(idx) for idx in premise)
conclusion_name = get_movie_name(conclusion)
print("Rule: If a person recommends {0} they will also recommend {1}".format(premise_names, conclusion_name))
print(" - Train Confidence: {0:.3f}".format(rule_confidence.get((premise, conclusion), -1)))
print(" - Test Confidence: {0:.3f}".format(test_confidence.get((premise, conclusion), -1)))
print("")
```
The fifth rule, for instance, has a perfect confidence in the training data (1.000), but it
is only accurate in 60 percent of cases for the test data (0.609). Many of the other rules in
the top 10 have high confidences in test data though, making them good rules for
making recommendations.
### Summary
In this example, we performed affinity analysis in order to recommend movies based
on a large set of reviewers. We did this in two stages. First, we found frequent
itemsets in the data using the Apriori algorithm. Then, we created association rules
from those itemsets.
The use of the Apriori algorithm was necessary due to the size of the dataset.
We performed training on a subset of our data in order to find the association rules,
and then tested those rules on the rest of the data—a testing set. From what we
discussed in the previous chapters, we could extend this concept to use cross-fold
validation to better evaluate the rules. This would lead to a more robust evaluation
of the quality of each rule.
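As a sketch of that idea (the helper names here are hypothetical; it assumes a dict mapping each user to a frozenset of favorably reviewed movies, and candidate rules stored as (premise, conclusion) pairs, like the structures built earlier):

```python
from collections import defaultdict

def rule_confidence_on(users_reviews, rules):
    """Confidence of each (premise, conclusion) rule over the given users."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for reviews in users_reviews.values():
        for premise, conclusion in rules:
            if premise <= reviews:                  # premise applies to this user
                total[(premise, conclusion)] += 1
                correct[(premise, conclusion)] += conclusion in reviews
    return {rule: correct[rule] / total[rule] for rule in total}

def kfold_confidence(users_reviews, rules, k=5):
    """Average per-fold confidence of each rule, using simple user-level folds."""
    user_ids = sorted(users_reviews)
    per_rule = defaultdict(list)
    for fold in range(k):
        fold_users = {u: users_reviews[u] for u in user_ids[fold::k]}
        for rule, conf in rule_confidence_on(fold_users, rules).items():
            per_rule[rule].append(conf)
    return {rule: sum(confs) / len(confs) for rule, confs in per_rule.items()}
```

Averaging the per-fold confidences makes a rule's score less dependent on which particular users happened to land in a single train/test split.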
___
# Gaussian Process Fitting
by Sarah Blunt
### Prerequisites
This tutorial assumes knowledge of the basic `radvel` API for $\chi^2$ likelihood fitting. As such, please complete the following before beginning this tutorial:
- radvel/docs/tutorials/164922_Fitting+MCMC.ipynb
- radvel/docs/tutorials/K2-24_Fitting+MCMC.ipynb
This tutorial also assumes knowledge of Gaussian Processes (GPs) as applied to radial velocity (RV) timeseries modeling. Grunblatt et al. (2015) and Rajpaul et al. (2015) contain excellent introductions to this topic. Also check out "Gaussian Processes for Machine Learning," by Rasmussen & Williams, a free online textbook hosted at gaussianprocesses.org.
### Objectives
Using the K2-131 (EPIC-228732031) dataset published in Dai et al. (2017), I will show how to:
- perform a maximum a posteriori (MAP) fit using a quasi-periodic kernel GP regression to model stellar activity (with data from multiple telescopes)
- do an MCMC exploration of the corresponding parameter space (with data from multiple telescopes)
### Tutorial
Do some preliminary imports:
```
import numpy as np
import pandas as pd
import os
import radvel
import radvel.likelihood
from radvel.plot import orbit_plots, mcmc_plots
from scipy import optimize
%matplotlib inline
```
Read in RV data from Dai et al. (2017):
```
data = pd.read_csv(os.path.join(radvel.DATADIR,'k2-131.txt'), sep=' ')
t = np.array(data.time)
vel = np.array(data.mnvel)
errvel = np.array(data.errvel)
tel = np.array(data.tel)
telgrps = data.groupby('tel').groups
instnames = telgrps.keys()
```
We'll use a quasi-periodic covariance kernel in this fit. An element of the covariance matrix, $C_{ij}$, is defined as follows:
$$
C_{ij} = \eta_1^2 \exp\left[-\frac{|t_i-t_j|^2}{\eta_2^2} - \frac{\sin^2(\pi|t_i-t_j|/\eta_3)}{2\eta_4^2}\right]
$$
Several other kernels are implemented in `radvel`. The code for all kernels lives in radvel/gp.py. Check out that file if you'd like to implement a new kernel.
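For intuition, the kernel above can be evaluated directly with NumPy. This is a standalone sketch, independent of the radvel implementation; the hyperparameter values are arbitrary placeholders:

```python
import numpy as np

def quasiper_kernel(t, eta1, eta2, eta3, eta4):
    """Quasi-periodic covariance matrix C_ij for observation times t (days)."""
    dt = np.abs(t[:, None] - t[None, :])                  # |t_i - t_j|
    decay = dt**2 / eta2**2                               # non-periodic decay term
    periodic = np.sin(np.pi * dt / eta3)**2 / (2 * eta4**2)
    return eta1**2 * np.exp(-decay - periodic)

t = np.linspace(0, 20, 5)
C = quasiper_kernel(t, eta1=25.0, eta2=13.4, eta3=9.64, eta4=0.39)
# C is symmetric, with eta1**2 on the diagonal (dt = 0 there)
```

Note how $\eta_1$ sets the overall amplitude, $\eta_2$ the decay length of correlations, $\eta_3$ the recurrence period, and $\eta_4$ how smooth the variation is within one period.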
Side Note: to see a list of all implemented kernels and examples of possible names for their associated hyperparameters...
```
print(radvel.gp.KERNELS)
```
Define the GP hyperparameters we will use in our fit:
```
hnames = [
'gp_amp', # eta_1; GP variability amplitude
'gp_explength', # eta_2; GP non-periodic characteristic length
'gp_per', # eta_3; GP variability period
'gp_perlength', # eta_4; GP periodic characteristic length
]
```
Define some numbers (derived from photometry) that we will use in our priors on the GP hyperparameters:
```
gp_explength_mean = 9.5*np.sqrt(2.) # sqrt(2)*tau in Dai+ 2017 [days]
gp_explength_unc = 1.0*np.sqrt(2.)
gp_perlength_mean = np.sqrt(1./(2.*3.32)) # sqrt(1/(2*gamma)) in Dai+ 2017
gp_perlength_unc = 0.019
gp_per_mean = 9.64 # T_bar in Dai+ 2017 [days]
gp_per_unc = 0.12
Porb = 0.3693038 # orbital period [days]
Porb_unc = 0.0000091
Tc = 2457582.9360 # [BJD]
Tc_unc = 0.0011
```
Dai et al. (2017) derive the above from photometry (see sect 7.2.1). I'm currently working on implementing joint modeling of RVs & photometry and RVs & activity indicators in `radvel`, so stay tuned if you'd like to use those features!
Initialize `radvel.Parameters` object:
```
nplanets=1
params = radvel.Parameters(nplanets,basis='per tc secosw sesinw k')
```
Set initial guesses for each fitting parameter:
```
params['per1'] = radvel.Parameter(value=Porb)
params['tc1'] = radvel.Parameter(value=Tc)
params['sesinw1'] = radvel.Parameter(value=0.,vary=False) # fix eccentricity = 0
params['secosw1'] = radvel.Parameter(value=0.,vary=False)
params['k1'] = radvel.Parameter(value=6.55)
params['dvdt'] = radvel.Parameter(value=0.,vary=False)
params['curv'] = radvel.Parameter(value=0.,vary=False)
```
Set initial guesses for GP hyperparameters:
```
params['gp_amp'] = radvel.Parameter(value=25.0)
params['gp_explength'] = radvel.Parameter(value=gp_explength_mean)
params['gp_per'] = radvel.Parameter(value=gp_per_mean)
params['gp_perlength'] = radvel.Parameter(value=gp_perlength_mean)
```
Instantiate a `radvel.model.RVmodel` object, with `radvel.Parameters` object as attribute:
```
gpmodel = radvel.model.RVModel(params)
```
Initialize `radvel.likelihood.GPLikelihood` objects (one for each telescope):
```
jit_guesses = {'harps-n':0.5, 'pfs':5.0}
likes = []
def initialize(tel_suffix):
# Instantiate a separate likelihood object for each instrument.
# Each likelihood must use the same radvel.RVModel object.
indices = telgrps[tel_suffix]
like = radvel.likelihood.GPLikelihood(gpmodel, t[indices], vel[indices],
errvel[indices], hnames, suffix='_'+tel_suffix,
kernel_name="QuasiPer"
)
# Add in instrument parameters
like.params['gamma_'+tel_suffix] = radvel.Parameter(value=np.mean(vel[indices]), vary=False, linear=True)
like.params['jit_'+tel_suffix] = radvel.Parameter(value=jit_guesses[tel_suffix], vary=True)
likes.append(like)
for tel in instnames:
initialize(tel)
```
Instantiate a `radvel.likelihood.CompositeLikelihood` object that has both GP likelihoods as attributes:
```
gplike = radvel.likelihood.CompositeLikelihood(likes)
```
Instantiate a `radvel.Posterior` object:
```
gppost = radvel.posterior.Posterior(gplike)
```
Add in priors (see Dai et al. 2017 section 7.2):
```
gppost.priors += [radvel.prior.Gaussian('per1', Porb, Porb_unc)]
gppost.priors += [radvel.prior.Gaussian('tc1', Tc, Tc_unc)]
gppost.priors += [radvel.prior.Jeffreys('k1', 0.01, 10.)] # min and max for Jeffrey's priors estimated by Sarah
gppost.priors += [radvel.prior.Jeffreys('gp_amp', 0.01, 100.)]
gppost.priors += [radvel.prior.Jeffreys('jit_pfs', 0.01, 10.)]
gppost.priors += [radvel.prior.Jeffreys('jit_harps-n', 0.01,10.)]
gppost.priors += [radvel.prior.Gaussian('gp_explength', gp_explength_mean, gp_explength_unc)]
gppost.priors += [radvel.prior.Gaussian('gp_per', gp_per_mean, gp_per_unc)]
gppost.priors += [radvel.prior.Gaussian('gp_perlength', gp_perlength_mean, gp_perlength_unc)]
```
Note: our prior on `'gp_perlength'` isn't equivalent to the one Dai et al. (2017) use because our formulations of the quasi-periodic kernel are slightly different. The results aren't really affected.
Do a MAP fit:
```
res = optimize.minimize(
gppost.neglogprob_array, gppost.get_vary_params(), method='Nelder-Mead',
options=dict(maxiter=200, maxfev=100000, xatol=1e-8)
)
print(gppost)
```
Explore the parameter space with MCMC:
```
chains = radvel.mcmc(gppost,nrun=100,ensembles=3,savename='rawchains.h5')
```
Note: for reliable results, run MCMC until the chains have converged. For this example, nrun=10000 should do the trick, but that would take a minute or two, and I won't presume to take up that much of your time with this tutorial.
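For a quantitative convergence check, a rough split-chain Gelman-Rubin statistic can be computed on any single parameter's samples. This is a standalone sketch, not part of the radvel API; it assumes the samples form a flat 1-D array (e.g. one column of the chains DataFrame):

```python
import numpy as np

def split_rhat(samples, nsplit=4):
    """Rough Gelman-Rubin R-hat: split one flat chain into nsplit pieces and
    compare within-piece to between-piece variance. Values near 1 suggest
    convergence; values above ~1.1 suggest the chain needs more iterations."""
    samples = np.asarray(samples)
    n = len(samples) // nsplit
    pieces = samples[:n * nsplit].reshape(nsplit, n)
    W = pieces.var(axis=1, ddof=1).mean()        # within-piece variance
    B = n * pieces.mean(axis=1).var(ddof=1)      # between-piece variance
    var_hat = (n - 1) / n * W + B / n            # pooled variance estimate
    return float(np.sqrt(var_hat / W))

# e.g. split_rhat(chains['gp_per']) after the MCMC run above
```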
Make some nice plots:
```
# try switching some of these (optional) keywords to "True" to see what they do!
GPPlot = orbit_plots.GPMultipanelPlot(
gppost,
subtract_gp_mean_model=False,
plot_likelihoods_separately=False,
subtract_orbit_model=False
)
GPPlot.plot_multipanel()
Corner = mcmc_plots.CornerPlot(gppost, chains) # posterior distributions
Corner.plot()
quants = chains.quantile([0.159, 0.5, 0.841]) # median & 1sigma limits of posterior distributions
for par in gppost.params.keys():
if gppost.params[par].vary:
med = quants[par][0.5]
high = quants[par][0.841] - med
low = med - quants[par][0.159]
err = np.mean([high,low])
err = radvel.utils.round_sig(err)
med, err, errhigh = radvel.utils.sigfig(med, err)
print('{} : {} +/- {}'.format(par, med, err))
```
Compare posterior characteristics with those of Dai et al. (2017):
```
per1 : 0.3693038 +/- 9.1e-06
tc1 : 2457582.936 +/- 0.0011
k1 : 6.6 +/- 1.5
gp_amp : 26.0 +/- 6.2
gp_explength : 11.6 +/- 2.3
gp_per : 9.68 +/- 0.15
gp_perlength : 0.35 +/- 0.02
gamma_harps-n : -6695 +/- 11
jit_harps-n : 2.0 +/- 1.5
gamma_pfs : -1 +/- 11
jit_pfs : 5.3 +/- 1.4
```
Thanks for going through this tutorial! As always, if you have any questions, feature requests, or problems, please file an issue on the `radvel` GitHub repo (github.com/California-Planet-Search/radvel).
This notebook was prepared by [Donne Martin](https://github.com/donnemartin). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges).
# Solution Notebook
## Problem: Implement Fizz Buzz.
* [Constraints](#Constraints)
* [Test Cases](#Test-Cases)
* [Algorithm](#Algorithm)
* [Code](#Code)
* [Unit Test](#Unit-Test)
## Constraints
* What is fizz buzz?
* Return the string representation of numbers from 1 to n
* Multiples of 3 -> 'Fizz'
* Multiples of 5 -> 'Buzz'
* Multiples of 3 and 5 -> 'FizzBuzz'
* Can we assume the inputs are valid?
* No
* Can we assume this fits memory?
* Yes
## Test Cases
<pre>
* None -> Exception
* < 1 -> Exception
* 15 ->
[
'1',
'2',
'Fizz',
'4',
'Buzz',
'Fizz',
'7',
'8',
'Fizz',
'Buzz',
'11',
'Fizz',
'13',
'14',
'FizzBuzz'
]
</pre>
## Algorithm
There is no fancy algorithm to solve fizz buzz.
* Iterate from 1 through n
* Use the mod operator to determine if the current iteration is divisible by:
* 3 and 5 -> 'FizzBuzz'
* 3 -> 'Fizz'
* 5 -> 'Buzz'
* else -> string of current iteration
* return the results
Complexity:
* Time: O(n)
* Space: O(n)
## Code
```
class Solution(object):
def fizz_buzz(self, num):
if num is None:
raise TypeError('num cannot be None')
if num < 1:
raise ValueError('num cannot be less than one')
results = []
for i in range(1, num + 1):
if i % 3 == 0 and i % 5 == 0:
results.append('FizzBuzz')
elif i % 3 == 0:
results.append('Fizz')
elif i % 5 == 0:
results.append('Buzz')
else:
results.append(str(i))
return results
```
## Unit Test
```
%%writefile test_fizz_buzz.py
from nose.tools import assert_equal, assert_raises
class TestFizzBuzz(object):
def test_fizz_buzz(self):
solution = Solution()
assert_raises(TypeError, solution.fizz_buzz, None)
assert_raises(ValueError, solution.fizz_buzz, 0)
expected = [
'1',
'2',
'Fizz',
'4',
'Buzz',
'Fizz',
'7',
'8',
'Fizz',
'Buzz',
'11',
'Fizz',
'13',
'14',
'FizzBuzz'
]
assert_equal(solution.fizz_buzz(15), expected)
print('Success: test_fizz_buzz')
def main():
test = TestFizzBuzz()
test.test_fizz_buzz()
if __name__ == '__main__':
main()
```

```
%run -i test_fizz_buzz.py
```
```
import matplotlib.pyplot as plt
import matplotlib
import librosa
import numpy as np
import librosa.display
import scipy.io.wavfile
s, da = scipy.io.wavfile.read('schnitzel.wav')
data = da.astype('float32')
# y, sr = librosa.load(librosa.ex('trumpet'))
librosa.feature.melspectrogram(y=data, sr=s)
D = np.abs(librosa.stft(data))**2
S = librosa.feature.melspectrogram(S=D, sr=s)
# Passing through arguments to the Mel filters
S = librosa.feature.melspectrogram(y=data, sr=s, n_mels=128,
fmax=8000)
fig, ax = plt.subplots()
S_dB = librosa.power_to_db(S, ref=np.max)
img = librosa.display.specshow(S_dB, x_axis='time',
y_axis='mel', sr=s,
fmax=10000, ax=ax, cmap='nipy_spectral')
# bone, grey_r, YlOrRd
# 'Greys', 'Purples', 'Blues', 'Greens', 'Oranges', 'Reds',
# 'YlOrBr', 'YlOrRd', 'OrRd', 'PuRd', 'RdPu', 'BuPu',
# 'GnBu', 'PuBu', 'YlGnBu', 'PuBuGn', 'BuGn', 'YlGn'
# 'PiYG', 'PRGn', 'BrBG', 'PuOr', 'RdGy', 'RdBu',
# 'RdYlBu', 'RdYlGn', 'Spectral', 'coolwarm', 'bwr', 'seismic'
# 'flag', 'prism', 'ocean', 'gist_earth', 'terrain', 'gist_stern',
# 'gnuplot', 'gnuplot2', 'CMRmap', 'cubehelix', 'brg',
# 'gist_rainbow', 'rainbow', 'jet', 'turbo', 'nipy_spectral',
# 'gist_ncar'
# 'Pastel1', 'Pastel2', 'Paired', 'Accent',
# 'Dark2', 'Set1', 'Set2', 'Set3',
# 'tab10', 'tab20', 'tab20b', 'tab20c'
# 'twilight', 'twilight_shifted', 'hsv'
fig.colorbar(img, ax=ax, format='%+2.0f dB')
# ax.set(title='Mel-frequency spectrogram')
# Note: an unregistered cmap name such as 'grey_r' raises a ValueError;
# 'gray_r' is the registered spelling.
```
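Only registered colormap names are accepted for `cmap`; the full list can be printed programmatically, which is handy when experimenting with the alternatives listed in the comments above:

```python
import matplotlib
matplotlib.use('Agg')  # headless backend, safe for scripts
import matplotlib.pyplot as plt

# All registered colormap names (the "_r" suffix denotes a reversed map)
names = plt.colormaps()
print(len(names), 'registered colormaps')
print('gray_r' in names)  # the valid spelling of the reversed gray map
```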
# Workshop 4 - functions<a id=top></a>
<font size=2>Before working with this notebook, I recommend running the code in the last cell (it contains HTML and CSS), which will make everything look nicer :)</font>
<a href='#Workshop-4---functions'>Workshop 4</a>
<ul>
<li><a href='#Syntax'><span>Syntax</span></a></li>
<li><a href='#The-return-statement'><span>The return statement</span></a></li>
<li><a href='#Default-arguments'><span>Default arguments</span></a></li>
<li><a href='#Arbitrary-number-of-arguments'><span>Arbitrary number of arguments</span></a></li>
<li><a href='#Lambda-functions'><span>Lambda functions</span></a></li>
</ul>
Most programs will require running the same (or similar) sequences of commands and transformations many times. Typing them in by hand, in full, every time we need a given set of operations is very uneconomical - in terms of time, memory use, and code readability.<br>
Instead, we can simply use <b>functions</b>.
A function is a named, separated fragment of code; calling that name executes all the operations contained in it.<br>
The basic syntax for creating a function looks like this:
```python
def function_name(argument1, argument2, ..., argumentN):
    <code to execute>
    <code to execute>
    <code to execute>
    return returned_variable
```
Any code that can be written in Python may appear inside a function. Let's create a simple function that greets users.
```
def powitanie():
print ("Witaj, użytkowniku!")
powitanie()
powitanie()
```
The code above contains two occurrences of our function. The first occurrence is always the <b>declaration</b> (definition) of the function, i.e. a 'recipe' for executing a specific set of operations (in this case, printing a greeting to the console). The declaration must appear before the first call of the function. Most often, all declarations are placed together at the beginning of the script, and the executable code of the program comes only after all the declared elements.
The second use of the function is a <b>call</b>, during which the code contained inside the function is executed. It is enough to invoke the function's name to apply the whole code block, which saves us a lot of time and protects us from mistakes when re-typing the same code elsewhere - it suffices to call the function again.<br>
An important aspect of how functions work is the ability to pass data into them. Let's look at the next example:
```
def powitanie(imie):
print ("Witaj, użytkowniku %s!" % (imie))
zmienna = input("Jak masz na imie? ")
powitanie(zmienna)
```
By including the variable imie in the parentheses of the powitanie function declaration, we can pass it a variable that will be used while the code inside the function is executed. 'imie' is a so-called <b>local variable</b>, which means it is visible only to the elements inside the function - it cannot be used outside of it.
```
def powitanie(imie2):
print ("Witaj, użytkowniku %s!" % (imie2))
powitanie("Marcin")
print (imie2)
```
This division into <b>local</b> and <b>global</b> variables is very useful - it keeps the code tidy (in long programs it minimizes the chance of accidentally repeating a variable name) and allows using the same name for variables inside different functions (so there is no need to create e.g. imie1, nowe_imie, imie_imie etc.), which makes the code easier to understand.
Functions can be called at any point of the program - also inside other functions. Let's see how we can create a second function that automates the process of getting the name from the user.
```
def powitanie(imie):
print ("Witaj, użytkowniku %s!" % (imie))
def wez_imie():
zmienna = input("Jak masz na imie? ")
powitanie(zmienna)
wez_imie()
```
<a href='#top' style='float: right; font-size: 13px;'>Back to top</a>
### The return statement
As we saw at the very beginning, a <b>return</b> statement may appear at the end of a function. It declares what value the function should return, i.e. make available to other functions and assignable e.g. to a global variable.<br>
We will see this using the simple arithmetic function below:
```
def dodaj (pierwsza, druga):
return pierwsza+druga
zmienna = dodaj(2,2)
print (dodaj(2,3))
```
A function can return any object - a string, a number, a list, or a dictionary. It can also return more than one element - they must then be separated by commas. If we assign a multi-element return value to a single variable, the whole thing is stored as a tuple. We can, however, provide two variables (separated by a comma), so that each element is assigned to a separate variable.
```
def arytmetyka (pierwsza, druga):
    '''Function description
    Returns the results of the operations in order: dodaj (add), odejmij (subtract), mnoz (multiply), dziel (divide)
    '''
dodaj = pierwsza+druga
odejmij = pierwsza-druga
mnoz = pierwsza*druga
dziel = pierwsza/druga
return [dodaj, odejmij, mnoz, dziel], dodaj+odejmij+mnoz+dziel
zmienna = arytmetyka(2,2)
print (zmienna)
print (type(zmienna), "\n")
lista, suma = arytmetyka(2,4)
print (lista)
print (suma)
help(arytmetyka)
```
<a href='#top' style='float: right; font-size: 13px;'>Back to top</a>
### Default arguments
Besides the arguments that the user must provide at every call of the function, we can also declare arguments that have a default value. Thanks to this, there is no need to enter them, but it remains possible if the user wishes to.
```
def mnoznik(liczba, druga=5):
return liczba*druga
print (mnoznik(5))
print (mnoznik(5,10))
```
We can see that when only one argument is provided, the second one takes its default value. Providing the second argument when calling the function overrides the default version.
### Arbitrary number of arguments
There may be situations in which we will not want to limit the number of arguments the user can pass to a given function. An unknown number of arguments can be handled with the special <b>*args</b> expression, which lets the Python interpreter accept any number of them.
```
def dodawanie(*args):
suma = 0
for i in args:
suma+=i
return suma
print (dodawanie(*[1,2,3,4,5,6,7,8,9,10,11,12,13,14]))
```
In the example above, the important part is the " \* " character; the word "args" is just a convention - it can be anything. The asterisk means that the received arguments are packed into a single sequence (a tuple), and, when used at call time, it forces the interpreter to unpack a list into individual arguments.
```
def dodawanie(*args):
return args
print (dodawanie(1,2,3,4,5,6,7,8,9,10))
print (dodawanie([1,2,3,4,5,6,7,8,9,10]))
print (dodawanie(*[1,2,3,4,5,6,7,8,9,10]))
def dodawanie(args):
suma = 0
for i in args:
suma+=i
return suma
print (dodawanie([1,2,3,4,5,6,7,8,9,10]))
```
Besides that, there is also the <b>**kwargs</b> construct, which interprets the incoming arguments as items of a dictionary.
```
def dodawanie(**kwargs):
return kwargs
print (dodawanie(arg1=1,arg2=2,arg3=3))
slownik={'arg1': 1,'arg2': 2,'arg3': 3}
print (dodawanie(**slownik))
print (dodawanie(slownik))
```
We could see two strategies:<br>
1. using the asterisk(s) only in the function declaration - the interpreter will expect the following arguments to form a specific structure, an ordered list or a dictionary.
2. using the asterisk(s) when calling the function - the interpreter will take the single indicated element and interpret it as a list/dictionary.
For simple functions such tricks are usually not necessary, but it is worth noting that passing arguments this way can, for example, allow feeding whole sets of "settings" into the indicated function, without having to list all the elements in every call.
```
def dodawanie(**kwargs):
return sum(kwargs.values())
slownik={'arg1': 1,'arg2': 2,'arg3': 3}
print (dodawanie(**slownik))
```
<a href='#top' style='float: right; font-size: 13px;'>Back to top</a>
### Lambda functions
It is also possible to create functions without an assigned name and with a simplified syntax - only a single expression is evaluated. They are particularly useful when working with lists (see: list comprehensions).
```
zet = lambda x: x*x+4
print (zet(5))
zet = lambda x, y: x*y+4
print (zet(5,7))
```
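A common practical use is passing a lambda as a key function, e.g. sorting a list of pairs by their second element (a small illustrative sketch):

```python
pary = [('a', 3), ('b', 1), ('c', 2)]
posortowane = sorted(pary, key=lambda p: p[1])  # sort by the second element
print(posortowane)  # [('b', 1), ('c', 2), ('a', 3)]
```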
### Function exercises
Create a function fibbonacci that takes one argument (an integer); the effect of its work should be the Fibonacci sequence with the number of elements indicated by the argument.<br>
Formula for the n-th element of the sequence: $fib_n=fib_{n-2}+fib_{n-1}$
```
def fibbonacci(liczba):
    pass  # your code here

fibbonacci(10)
```
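One possible solution sketch (assuming the sequence starts 1, 1; the function and parameter names follow the exercise stub above):

```python
def fibbonacci(liczba):
    '''Print and return the first `liczba` elements of the Fibonacci sequence.'''
    elementy = []
    a, b = 1, 1
    for _ in range(liczba):
        elementy.append(a)
        a, b = b, a + b  # fib_n = fib_(n-2) + fib_(n-1)
    print(elementy)
    return elementy

fibbonacci(10)  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```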
```
from IPython.core.display import HTML
from urllib.request import urlopen
HTML(urlopen("https://raw.githubusercontent.com/mkoculak/Warsztat-programowania/master/ipython.css").read().decode("utf-8"))
```
# Exploration of one customer
Analysis of:
* global stats
* daily pattern
Also, found a week of interest (early 2011-12) for further work ([Solar home control bench](https://github.com/pierre-haessig/solarhome-control-bench) and SGE 2018 paper)
* daily pattern the month before this week
To be done: [clustering of daily trajectories](#Clustering-of-daily-trajectories)
PH December 23, 2017
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
```
## Load customer data
```
import solarhome as sh
df_raw = sh.read_csv('2011-2012')
df, missing_rec = sh.reshape(df_raw)
```
Choice: customer 12:
* no controlled load (CL channel)
* no oddities in consumption and production during 2011-2012
```
n_cust = 12
d = df[n_cust]
d.to_csv('customer/12/data_2011-2012.csv')
d.head()
```
### Customer global stats (over the year)
Statistics close to the [dataset average](Solar%20home%20exploration.ipynb#Global-statistics-of-PV-generation):
* Consumption: avg 0.7 kW, max 4.0 kW. Yearly total of 5 900 kWh/yr
* PV max 0.9 kW (1.04 kW capacity). Yield of 1250 kWh/yr/kWc
```
d.describe([])
```
#### Histograms
```
prod = d.GG
cons = d.GC
cons.hist(bins=30, label='Consumption')
(prod[prod>=0.01]).hist(label='PV prod (>0)', alpha=0.8)
plt.legend();
```
PV yearly production (kWh)
```
dt = 0.5 # hours
E_pv = prod.sum() * dt
E_pv
d_cust_cap = df_raw[['Customer', 'Generator Capacity']]
gen_cap = d_cust_cap.groupby('Customer')['Generator Capacity'].max()
gen_cap = gen_cap[n_cust]
gen_cap
```
PV yield (kWh/kWc/yr)
```
E_pv/gen_cap
```
Yearly consumption
```
E_cons = cons.sum() * dt
E_cons
```
### Time plot
```
%matplotlib inline
fig, (ax1, ax2) = plt.subplots(2,1, sharex=True)
#sl = slice('2011-10-12','2011-10-18') # 3 semi-cloudy, 1 very cloudy, 3 sunny days
sl = slice('2011-11-29','2011-12-05')
#sl=slice(0, -1)
cons[sl].plot(ax=ax1, label='Consumption')
prod[sl].plot(ax=ax2, color='tab:orange', label='PV production')
ax1.legend(loc='upper right')
ax2.legend(loc='upper right')
ax1.set(
title='Customer %d data: 7 days extract' % n_cust,
ylabel='Power (kW)'
)
ax2.set(
ylabel='Power (kW)'
);
fig.tight_layout()
fig.savefig('customer/12/data_week_%s.png' % sl.start, dpi=200, bbox_inches='tight')
```
## Daily pattern
i.e. stats as a function of the hour of the day
```
def hod(tstamp):
    'hour of the day (fractional)'
return tstamp.hour + tstamp.minute/60
d_dm = d.groupby(by=hod).mean()
d_d05 = d.groupby(by=hod).quantile(.05)
d_d25 = d.groupby(by=hod).quantile(.25)
d_d75 = d.groupby(by=hod).quantile(.75)
d_d95 = d.groupby(by=hod).quantile(.95)
fig, (ax1, ax2) = plt.subplots(2,1, sharex=True)
c = 'tab:blue'
d_dm.GC.plot(ax=ax1, color=c, label='Consumption')
ax1.fill_between(d_dm.index, d_d05.GC, d_d95.GC, alpha=0.3, color=c, lw=0)
ax1.fill_between(d_dm.index, d_d25.GC, d_d75.GC, alpha=0.3, color=c, lw=0)
ax1.set_ylim(ymin=0)
ax1.legend(loc='upper left')
c = 'tab:orange'
d_dm.GG.plot(ax=ax2, color=c, label='PV production')
ax2.fill_between(d_dm.index, d_d05.GG, d_d95.GG, alpha=0.3, color=c, lw=0)
ax2.fill_between(d_dm.index, d_d25.GG, d_d75.GG, alpha=0.3, color=c, lw=0)
ax2.legend(loc='upper left')
ax1.set(
title='Customer %d daily pattern' % n_cust,
ylabel='Power (kW)'
);
ax2.set(
xlabel='hour of the day',
ylabel='Power (kW)'
);
fig.tight_layout()
fig.savefig('customer/12/daily_pattern_2011-2012.png', dpi=200, bbox_inches='tight')
```
#### Compute all quantiles, to save the pattern for later reuse
```
quantiles = np.linspace(0.05, 0.95, 19)
quantiles
def daily_pattern(ts):
'''compute statistics for each hour of the day (min, max, mean and quantiles)
of the time series `ts`
returns DataFrame with columns 'mean','min', 'qXX'..., 'max'
and rows being the hours of the day between 0. and 24.
'''
dstats = pd.DataFrame({
'q{:02.0f}'.format(q*100) : ts.groupby(by=hod).quantile(q)
for q in quantiles
})
dstats.insert(0, 'min', ts.groupby(by=hod).min())
dstats.insert(0, 'mean', ts.groupby(by=hod).mean())
dstats['max'] = ts.groupby(by=hod).max()
return dstats
prod_dstats = daily_pattern(d.GG)
prod_dstats.to_csv('customer/12/daily_pattern_prod_2011-2012.csv')
cons_dstats = daily_pattern(d.GC)
cons_dstats.to_csv('customer/12/daily_pattern_cons_2011-2012.csv')
def plot_daily_pattern(dstats, title):
fig, ax = plt.subplots(1,1)
q_names = [c for c in dstats.columns if c.startswith('q')]
dstats[q_names[:9]].plot(ax=ax, color='tab:blue', lw=0.5)
dstats['q50'].plot(ax=ax, color='k')
dstats[q_names[11:]].plot(ax=ax, color='tab:red', lw=0.5)
dstats['min'].plot(ax=ax, color='tab:blue', label='min')
dstats['max'].plot(ax=ax, color='tab:red')
dstats['mean'].plot(ax=ax, color='k', lw=6, alpha=0.5)
plt.legend(ax.lines[-3:], ['min', 'max', 'mean']);
ax.set(
xlabel='hour of the day',
ylabel='Power (kW)',
title=title)
fig.tight_layout()
return fig, ax
fig, ax = plot_daily_pattern(cons_dstats,
title='Customer %d daily consumption pattern' % n_cust)
fig.savefig('customer/12/daily_pattern_cons_2011-2012.png', dpi=200)
fig, ax = plot_daily_pattern(prod_dstats,
title='Customer %d daily production pattern' % n_cust)
fig.savefig('customer/12/daily_pattern_prod_2011-2012.png', dpi=200)
```
### The month before 2011-11-29
i.e. before the week extract above
#### Compute all quantiles
```
sl = slice('2011-10-29','2011-11-28')
#sl = '2011-10'
daily_pattern(cons[sl]).to_csv('customer/12/daily_pattern_cons_M-1-%s.csv' % sl.stop)
daily_pattern(prod[sl]).to_csv('customer/12/daily_pattern_prod_M-1-%s.csv' % sl.stop)
fig, ax = plot_daily_pattern(daily_pattern(cons[sl]),
title='Customer %d consumption pattern \nthe month before %s' % (n_cust, sl.stop))
ax.set_ylim(ymax=3)
fig.savefig('customer/12/daily_pattern_cons_M-1-%s.png' % sl.stop, dpi=200)
fig, ax = plot_daily_pattern(daily_pattern(prod[sl]),
title='Customer %d production pattern \nthe month before %s' % (n_cust, sl.stop))
fig.savefig('customer/12/daily_pattern_prod_M-1-%s.png' % sl.stop, dpi=200)
```
#### Spaghetti plots, to compare with quantiles
code inspired by the analysis of daily patterns in [Pattern_daily_consumption.ipynb](Pattern_daily_consumption.ipynb#A-look-at-individual-customer)
```
def daily_spaghetti(df, title):
fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True, figsize=(5.5,4.5))
hod_lin = np.arange(48)/2
GC_day_m = df.GC.groupby(by=hod).mean()
GG_day_m = df.GG.groupby(by=hod).mean()
GC_day_traj = df.GC.values.reshape((-1, 48)).T
GG_day_traj = df.GG.values.reshape((-1, 48)).T
ax1.plot(hod_lin, GC_day_traj, 'k', lw=0.5, alpha=0.15);
ax1.plot(hod_lin, GC_day_m, color='tab:blue', lw=3, alpha=0.7)
ax2.plot(hod_lin, GG_day_traj, 'k', lw=0.5, alpha=0.15);
ax2.plot(hod_lin, GG_day_m, color='tab:orange', lw=3, alpha=0.7)
ax1.legend(ax1.lines[-2:], ['each day', 'mean'], loc='upper left')
ax2.legend(ax2.lines[-2:], ['each day', 'mean'], loc='upper left')
ax1.set(
title=title,
ylabel='Consumption (kW)'
)
ax2.set(
xlim=(0, 23.5),
xticks=range(0,24,3),
xlabel='hour of the day',
ylabel='PV production (kW)'
)
fig.tight_layout()
return fig, (ax1, ax2)
fig, (ax1, ax2) = daily_spaghetti(d[sl],
title='Customer %d the month before %s' % (n_cust, sl.stop))
ax1.set_ylim(ymax=3);
fig.savefig('customer/12/daily_traj_M-1-%s.png' % sl.stop, dpi=200)
```
Variation: plot data with solar panel upscaled to 4 kWp
```
d4k = d.copy()
d4k.GG *= 4/1.04
fig, (ax1, ax2) = daily_spaghetti(d4k[sl],
title='')
ax1.set_ylim(ymax=3);
fig.savefig('customer/12/daily_traj_M-1-%s_PV4kWp.png' % sl.stop, dpi=200)
fig.savefig('customer/12/daily_traj_M-1-%s_PV4kWp.pdf' % sl.stop, bbox_inches='tight')
```
### Clustering of daily trajectories
to be done.
## Day ahead forecast
model: autoregression on the previous half-hour and on the previous day, with an effect of the hour of the day.
$$ y_k = f(y_{k-1}, y_{k-48}, h) $$
More precisely, a linear autoregression whose coefficients depend on the hour of the day:
$$ y_k = a_0(h) + a_1(h)\, y_{k-1} + a_2(h)\, y_{k-48} $$
In addition, the series of coefficients $a_0(h)$, $a_1(h)$, $a_2(h)$, ... may require some smoothing, i.e. a penalization of their variations: either the absolute variation around average time-independent coefficients, or the variation along the day.
→ pivoted data is saved for further processing in Julia: [Forecast.ipynb](Forecast.ipynb)
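As a rough numpy illustration of this model (the real fit is done elsewhere), the coefficients can be estimated with one ordinary least-squares regression per half-hour slot. This is a sketch under the assumption of a plain 1-D half-hourly array `y` whose first sample falls on slot 0; the function name is invented:

```python
import numpy as np

def fit_hourly_ar(y, n_slots=48):
    """Fit y[k] = a0(h) + a1(h)*y[k-1] + a2(h)*y[k-n_slots] by ordinary
    least squares, one coefficient triple per slot h of the day.
    `y` is a 1-D array at half-hourly resolution, first sample on slot 0."""
    coeffs = np.zeros((n_slots, 3))
    k = np.arange(n_slots, len(y))   # indices where both lags are available
    slot_of_k = k % n_slots
    for h in range(n_slots):
        idx = k[slot_of_k == h]
        # design matrix: intercept, previous half-hour, previous day
        X = np.column_stack([np.ones(len(idx)), y[idx - 1], y[idx - n_slots]])
        coeffs[h], *_ = np.linalg.lstsq(X, y[idx], rcond=None)
    return coeffs
```

Smoothing the coefficient series could then be added as a ridge-style penalty on the differences `coeffs[h] - coeffs[h-1]`, along the lines suggested above.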
### Data preparation: group by day (pivoting)
pivot data: one day per row, one hour per column
```
d1 = d.copy()
d1['date'] = pd.DatetimeIndex(d1.index.date)
d1['hod'] = hod(d1.index)
d1.head()
prod_dpivot = d1.pivot(index='date', columns='hod', values='GG')
cons_dpivot = d1.pivot(index='date', columns='hod', values='GC')
cons_dpivot.head(3)
prod_dpivot[12.0].plot(label='prod @ 12:00')
prod_dpivot[18.0].plot(label='prod @ 18:00')
plt.legend();
```
Save as CSV for further use
```
prod_dpivot.to_csv('customer/12/daily_pivot_prod_2011-2012.csv')
cons_dpivot.to_csv('customer/12/daily_pivot_cons_2011-2012.csv')
```
### PV Production heatmap
Notice the effect of **daylight saving** between days ~92 (Oct 1st) and ~274 (March 31st).
→ this is a *problem for forecasting*
```
prod_dpivot.index[92], prod_dpivot.index[274]
fig = plt.figure(figsize=(7,4))
plt.imshow(prod_dpivot.values.T, aspect='auto',
origin='lower', extent=[0, 365, 0, 24], cmap='inferno');
plt.ylim([4, 20])
cbar = plt.colorbar()
cbar.set_label('Power (kW)')
cbar.locator
ax = plt.gca()
ax.set(
title='Customer %d production 2011-2012' % n_cust,
xlabel='day',
ylabel='hour of day',
yticks=[0, 6, 12, 18, 24]
)
fig.tight_layout()
fig.savefig('customer/12/daily_pivot_prod_2011-2012.png', dpi=200, bbox_inches='tight')
```
### Consumption heatmap
Notice: vmax is set to 2 kW (→ saturation), otherwise the plot would be dominated by the few spikes between 2.5 and 4 kW
Obs:
* the day starts at about 6 am, and is not influenced by daylight saving
```
fig = plt.figure(figsize=(7,4))
plt.imshow(cons_dpivot.values.T, aspect='auto',
vmax=2,
origin='lower', extent=[0, 365, 0, 24]);
#plt.ylim([4, 20])
cbar = plt.colorbar()
cbar.set_label('Power (kW) [saturated]')
fig.tight_layout()
ax = plt.gca()
ax.set(
title='Customer %d consumption 2011-2012' % n_cust,
xlabel='day',
ylabel='hour of day',
yticks=[0, 6, 12, 18, 24]
)
fig.tight_layout()
fig.savefig('customer/12/daily_pivot_cons_2011-2012.png', dpi=200, bbox_inches='tight')
```
Same plot, without saturation, but using a compression of high values:
$$ v \to \sqrt{v/v_{max}}$$
```
v = cons_dpivot.values.T
v = v/v.max()
v = v**(0.5)
fig = plt.figure(figsize=(7,4))
plt.imshow(v, aspect='auto',
origin='lower', extent=[0, 365, 0, 24]);
#plt.ylim([4, 20])
cbar = plt.colorbar()
cbar.set_label('normed sqrt(Power)')
ax = plt.gca()
ax.set(
title='Customer %d consumption 2011-2012' % n_cust,
xlabel='day',
ylabel='hour of day',
yticks=[0, 6, 12, 18, 24]
)
fig.tight_layout()
```
# Pedersen N07 neutral case with heat flux
## Nalu-Wind with K-SGS model
Comparison between Nalu-wind and Pedersen (2014)
**Note**: To convert this notebook to PDF, use the command
```bash
$ jupyter nbconvert --TagRemovePreprocessor.remove_input_tags='{"hide_input"}' --to pdf postpro_n07.ipynb
```
```
%%capture
# Important header information
naluhelperdir = '../../utilities/'
# Import libraries
import sys
import numpy as np
import matplotlib.pyplot as plt
sys.path.insert(1, naluhelperdir)
import plotABLstats
import yaml as yaml
from IPython.display import Image
from matplotlib.lines import Line2D
import matplotlib.image as mpimg
%matplotlib inline
# Nalu-wind parameters
rundir = '/ascldap/users/lcheung/GPFS1/2020/amrcodes/testruns/neutral_n07_ksgs'
statsfile = 'abl_statistics.nc.run2'
avgtimes = [82800,86400]
# Load nalu-wind data
data = plotABLstats.ABLStatsFileClass(stats_file=rundir+'/'+statsfile);
Vprof, vheader = plotABLstats.plotvelocityprofile(data, None, tlims=avgtimes, exportdata=True)
Tprof, theader = plotABLstats.plottemperatureprofile(data, None, tlims=avgtimes, exportdata=True)
# Pedersen parameters
datadir = '../pedersen2014_data'
ped_umag = np.loadtxt(datadir+'/Pedersen2014_N07_velocity.csv', delimiter=',')
ped_T = np.loadtxt(datadir+'/Pedersen2014_N07_temperature.csv', delimiter=',')
h = 757
# Plot the velocity profile comparisons
plt.figure(figsize=(10,8));
plt.rc('font', size=14)
plt.plot(Vprof[:,4], Vprof[:,0]/h, 'b', label='Nalu-wind (k-sgs)')
plt.plot(ped_umag[:,0], ped_umag[:,1], 'r', label='Pedersen(2014)')
# Construct a legend
plt.legend()
plt.ylim([0, 1.5]);
plt.xlim([0, 12])
plt.xlabel('Velocity [m/s]')
plt.ylabel('Z/h')
#plt.grid()
plt.title('N07 Wind speed')
# Plot the temperature profile comparisons
plt.figure(figsize=(10,8));
plt.rc('font', size=14)
plt.plot(Tprof[:,1], Tprof[:,0], 'b', label='Nalu-wind (k-sgs)')
plt.plot(ped_T[:,0], ped_T[:,1], 'r', label='Pedersen(2014)')
# Construct a legend
plt.legend()
plt.ylim([0, 1500]);
#plt.xlim([0, 12])
plt.xlabel('Temperature [K]')
plt.ylabel('Z [m]')
#plt.grid()
plt.title('N07 Temperature')
# Extract TKE and Reynolds stresses
REstresses, REheader = plotABLstats.plottkeprofile(data, None, tlims=avgtimes, exportdata=True)
# Extract the fluxes
tfluxes, tfluxheader = plotABLstats.plottfluxprofile(data, None, tlims=avgtimes, exportdata=True)
# Extract the fluxes
sfstfluxes, sfstfluxheader= plotABLstats.plottfluxsfsprofile(data, None, tlims=[avgtimes[-1]-1, avgtimes[-1]], exportdata=True)
# Extract Utau
avgutau = plotABLstats.avgutau(data, None, tlims=avgtimes)
print('Avg Utau = %f'%avgutau)
# Calculate the inversion height
zi, utauz = plotABLstats.calcInversionHeight(data, [750.0], tlims=avgtimes)
print('zi = %f'%zi)
# Export the Nalu-Wind data for other people to compare
np.savetxt('NaluWind_N07_velocity.dat', Vprof, header=vheader)
np.savetxt('NaluWind_N07_temperature.dat', Tprof, header=theader)
np.savetxt('NaluWind_N07_reynoldsstresses.dat', REstresses, header=REheader)
np.savetxt('NaluWind_N07_temperaturefluxes.dat', tfluxes, header=tfluxheader)
np.savetxt('NaluWind_N07_sfstemperaturefluxes.dat', sfstfluxes, header=sfstfluxheader)
# Write the YAML file with integrated quantities
import yaml
savedict={'zi':float(zi), 'ustar':float(avgutau)}
f=open('istats.yaml','w')
f.write('# Averaged quantities from %f to %f\n'%(avgtimes[0], avgtimes[1]))
f.write(yaml.dump(savedict, default_flow_style=False))
f.close()
```
# Chapter 6 - Unsupervised Machine Learning
```
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
sns.set_style("whitegrid")
# Import Data
df = pd.read_csv("../../datasets/dataset_wisc_sd.csv")
print(df.shape)
# Cleaning up
df = df.replace(r'\\n','', regex=True)
df = df.dropna()
print(df.shape)
# Check first few rows
df.head()
sns.countplot(x='diagnosis', data=df);
import matplotlib
from matplotlib import rc
font = {'size' : 16}
matplotlib.rc('font', **font)
# Encode the labels to be 1 for malignant and 0 for benign
df['diagnosis'] = df['diagnosis'].map({'M':1,'B':0})
df.head()
select_feats = ["diagnosis", "radius_mean", "texture_mean", "smoothness_mean"]
sns_plot = sns.pairplot(df[select_feats], hue = 'diagnosis', markers=["s", "o"])
sns_plot.savefig("c6_cancer_pairplot.png")
sns.scatterplot(x="radius_mean", y="texture_mean", hue="diagnosis", style='diagnosis', data=df, markers=["s", "o"])
df.shape
# We can drop a few variables to avoid any multicollinearity, however KMeans clustering is not generally affected by it.
import numpy as np
# Create correlation matrix
corr_matrix = df.corr().abs()
# Select upper triangle of correlation matrix
upper = corr_matrix.where(np.triu(np.ones(corr_matrix.shape), k=1).astype(bool))
# Find features with correlation greater than 0.90
to_drop = [column for column in upper.columns if any(upper[column] > 0.90)]
# Drop features
df.drop(to_drop, axis=1, inplace=True)
df.shape
df.columns
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X = df.drop(columns = ["id", "diagnosis"])
y = df.diagnosis.values
X_scaled = pd.DataFrame(scaler.fit_transform(X), columns = X.columns)
X_scaled.head(3)
```
### AgglomerativeClustering
```
from sklearn.cluster import AgglomerativeClustering
agc = AgglomerativeClustering(n_clusters=2, linkage="ward")
agc_featAll_pred = agc.fit_predict(X_scaled.iloc[:, :2])
plt.figure(figsize=(20, 5))
plt.subplot(131)
plt.title("Actual Results")
ax = sns.scatterplot(x="radius_mean", y="texture_mean", hue=y, style=y, data=X_scaled, markers=["s", "o"])
ax.legend(loc="upper right")
plt.subplot(132)
plt.title("Agglomerative Clustering")
ax = sns.scatterplot(x="radius_mean", y="texture_mean", hue=agc_featAll_pred, style=agc_featAll_pred, data=X_scaled, markers=["s", "o"])
ax.legend(loc="upper right")
from sklearn.metrics import accuracy_score
print(accuracy_score(y, agc_featAll_pred))
```
### KMeans:
```
from sklearn.cluster import KMeans
kmc = KMeans(n_clusters=2, n_init=10, init="k-means++")
kmc_feat2_pred = kmc.fit_predict(X_scaled.iloc[:, :2])
kmc_feat2_pred_inv = 1-kmc_feat2_pred
kmc_feat2_pred_inv[:5]
kmc_feat2_pred[:5]
plt.figure(figsize=(20, 5))
plt.subplot(131)
plt.title("Actual Results")
ax = sns.scatterplot(x="radius_mean", y="texture_mean", hue=y, style=y, data=X_scaled, markers=["s", "o"])
ax.legend(loc="upper right")
plt.subplot(132)
plt.title("KMeans Results (Features=2)")
ax = sns.scatterplot(x="radius_mean", y="texture_mean", hue=kmc_feat2_pred, style=kmc_feat2_pred, data=X_scaled, markers=["s", "o"])
ax.legend(loc="upper right")
# Non-pythonic, for illustration only, best to iterate
kmc_feat2_pred = kmc.fit_predict(X_scaled.iloc[:, :2])
kmc_feat3_pred = kmc.fit_predict(X_scaled.iloc[:, :3])
kmc_feat4_pred = kmc.fit_predict(X_scaled.iloc[:, :4])
kmc_featall_pred = kmc.fit_predict(X_scaled.iloc[:, :])
plt.figure(figsize=(20, 5))
plt.subplot(141)
plt.title("KMeans Results (Features=2)")
ax = sns.scatterplot(x="radius_mean", y="texture_mean", hue=kmc_feat2_pred, style=kmc_feat2_pred, data=X_scaled, markers=["s", "o"])
ax.legend(loc="upper right")
plt.subplot(142)
plt.title("KMeans Results (Features=3)")
ax = sns.scatterplot(x="radius_mean", y="texture_mean", hue=kmc_feat3_pred, style=kmc_feat3_pred, data=X_scaled, markers=["s", "o"])
ax.legend(loc="upper right")
plt.subplot(143)
plt.title("KMeans Results (Features=4)")
ax = sns.scatterplot(x="radius_mean", y="texture_mean", hue=kmc_feat4_pred, style=kmc_feat4_pred, data=X_scaled, markers=["s", "o"])
ax.legend(loc="upper right")
plt.subplot(144)
plt.title("KMeans Results (Features=All)")
ax = sns.scatterplot(x="radius_mean", y="texture_mean", hue=kmc_featall_pred, style=kmc_featall_pred, data=X_scaled, markers=["s", "o"])
ax.legend(loc="upper right")
from sklearn.metrics import accuracy_score
print(accuracy_score(y, kmc_feat2_pred))
print(accuracy_score(y, kmc_feat3_pred))
print(accuracy_score(y, kmc_feat4_pred))
print(accuracy_score(y, kmc_featall_pred))
```
### Gaussian Mixture
```
from sklearn.mixture import GaussianMixture
gmm = GaussianMixture(n_components=2, covariance_type="full").fit(X_scaled.iloc[:, :2])
gmm_featAll_pred = 1-gmm.predict(X_scaled.iloc[:, :2])
print(accuracy_score(y, gmm_featAll_pred))
plt.figure(figsize=(20, 5))
plt.subplot(131)
plt.title("Actual Results")
ax = sns.scatterplot(x="radius_mean", y="texture_mean", hue=y, style=y, data=X_scaled, markers=["s", "o"])
ax.legend(loc="upper right")
plt.subplot(132)
plt.title("Gaussian Mixture Results (Features=2)")
ax = sns.scatterplot(x="radius_mean", y="texture_mean", hue=gmm_featAll_pred, style=gmm_featAll_pred, data=X_scaled, markers=["s", "o"])
ax.legend(loc="upper right")
gmm_feat2_pred = 1-gmm.fit(X_scaled.iloc[:, :2]).predict(X_scaled.iloc[:, :2])
gmm_feat3_pred = 1-gmm.fit(X_scaled.iloc[:, :3]).predict(X_scaled.iloc[:, :3])
gmm_feat4_pred = 1-gmm.fit(X_scaled.iloc[:, :4]).predict(X_scaled.iloc[:, :4])
gmm_featall_pred = 1-gmm.fit(X_scaled.iloc[:, :]).predict(X_scaled.iloc[:, :])
plt.figure(figsize=(20, 15))
plt.subplot(221)
plt.title("Gaussian Mixture Results (Features=2)")
ax = sns.scatterplot(x="radius_mean", y="texture_mean", hue=gmm_feat2_pred, style=gmm_feat2_pred, data=X_scaled, markers=["s", "o"])
ax.legend(loc="upper right")
plt.subplot(222)
plt.title("Gaussian Mixture Results (Features=3)")
ax = sns.scatterplot(x="radius_mean", y="texture_mean", hue=gmm_feat3_pred, style=gmm_feat3_pred, data=X_scaled, markers=["s", "o"])
ax.legend(loc="upper right")
plt.subplot(223)
plt.title("Gaussian Mixture Results (Features=4)")
ax = sns.scatterplot(x="radius_mean", y="texture_mean", hue=gmm_feat4_pred, style=gmm_feat4_pred, data=X_scaled, markers=["s", "o"])
ax.legend(loc="upper right")
plt.subplot(224)
plt.title("Gaussian Mixture Results (Features=All)")
ax = sns.scatterplot(x="radius_mean", y="texture_mean", hue=gmm_featall_pred, style=gmm_featall_pred, data=X_scaled, markers=["s", "o"])
ax.legend(loc="upper right")
from sklearn.metrics import accuracy_score
print(accuracy_score(y, gmm_feat2_pred))
print(accuracy_score(y, gmm_feat3_pred))
print(accuracy_score(y, gmm_feat4_pred))
print(accuracy_score(y, gmm_featall_pred))
```
### Principal Component Analysis:
```
from sklearn.decomposition import PCA
pca_2d = PCA(n_components=2, svd_solver='full')
pca_2d.fit(X_scaled)
data_pca_2d = pca_2d.fit_transform(X_scaled)
print(pca_2d.explained_variance_ratio_)
plt.xlabel("Principal Component 1")
plt.ylabel("Principal Component 2")
sns.scatterplot(x=data_pca_2d[:,0], y=data_pca_2d[:,1], hue=y, style=y, markers=["s", "o"])
from sklearn.mixture import GaussianMixture
from sklearn.metrics import accuracy_score
gmm = GaussianMixture(n_components=2, covariance_type="full")
gmm_featAll_pred = 1-gmm.fit(data_pca_2d).predict(data_pca_2d)
%%timeit
from sklearn.mixture import GaussianMixture
from sklearn.metrics import accuracy_score
gmm = GaussianMixture(n_components=2, covariance_type="full")
gmm_featAll_pred = 1-gmm.fit(data_pca_2d).predict(data_pca_2d)
%%timeit
from sklearn.mixture import GaussianMixture
from sklearn.metrics import accuracy_score
gmm = GaussianMixture(n_components=2, covariance_type="full")
gmm_featAll_pred = 1-gmm.fit_predict(X_scaled)
%%timeit
from sklearn.mixture import GaussianMixture
from sklearn.metrics import accuracy_score
gmm = GaussianMixture(n_components=2, covariance_type="full")
gmm_featAll_pred = 1-gmm.fit_predict(X)
```
# Training Neural Networks: Optimization and Regularization
**Developer: Artem Babenko**
In this seminar you will need to (1) implement a Dropout layer and examine its effect on the generalization ability of the network, and (2) implement a BatchNormalization layer and observe its effect on the convergence speed of training.
## Dropout (0.6 points)
As usual, we will experiment on the MNIST dataset. MNIST is a standard benchmark dataset and can be loaded with pytorch utilities.
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.display import clear_output
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
import torchvision.datasets as dsets
import torchvision.transforms as transforms
import torch.optim as optim
from torch.utils.data.sampler import SubsetRandomSampler
input_size = 784
num_classes = 10
batch_size = 128
train_dataset = dsets.MNIST(root='./MNIST/',
train=True,
transform=transforms.ToTensor(),
download=True)
test_dataset = dsets.MNIST(root='./MNIST/',
train=False,
transform=transforms.ToTensor())
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
batch_size=batch_size,
shuffle=False)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
batch_size=batch_size,
shuffle=False)
```
Let's define a number of standard functions from previous seminars
```
def train_epoch(model, optimizer, batchsize=32):
loss_log, acc_log = [], []
model.train()
for batch_num, (x_batch, y_batch) in enumerate(train_loader):
data = Variable(x_batch)
target = Variable(y_batch)
optimizer.zero_grad()
output = model(data)
pred = torch.max(output, 1)[1].data.numpy()
acc = np.mean(pred == y_batch)
acc_log.append(acc)
loss = F.nll_loss(output, target).cpu()
loss.backward()
optimizer.step()
        loss = loss.item()
loss_log.append(loss)
return loss_log, acc_log
def test(model):
loss_log, acc_log = [], []
model.eval()
for batch_num, (x_batch, y_batch) in enumerate(test_loader):
data = Variable(x_batch)
target = Variable(y_batch)
output = model(data)
loss = F.nll_loss(output, target).cpu()
pred = torch.max(output, 1)[1].data.numpy()
acc = np.mean(pred == y_batch)
acc_log.append(acc)
        loss = loss.item()
loss_log.append(loss)
return loss_log, acc_log
def plot_history(train_history, val_history, title='loss'):
plt.figure()
plt.title('{}'.format(title))
plt.plot(train_history, label='train', zorder=1)
points = np.array(val_history)
plt.scatter(points[:, 0], points[:, 1], marker='+', s=180, c='orange', label='val', zorder=2)
plt.xlabel('train steps')
plt.legend(loc='best')
plt.grid()
plt.show()
def train(model, opt, n_epochs):
train_log, train_acc_log = [], []
val_log, val_acc_log = [], []
for epoch in range(n_epochs):
train_loss, train_acc = train_epoch(model, opt, batchsize=batch_size)
val_loss, val_acc = test(model)
train_log.extend(train_loss)
train_acc_log.extend(train_acc)
steps = train_dataset.train_labels.shape[0] / batch_size
val_log.append((steps * (epoch + 1), np.mean(val_loss)))
val_acc_log.append((steps * (epoch + 1), np.mean(val_acc)))
clear_output()
plot_history(train_log, val_log)
plot_history(train_acc_log, val_acc_log, title='accuracy')
```
Create the simplest one-layer model, a single fully connected layer, and train it with the optimization parameters given below.
```
class Flatten(nn.Module):
def forward(self, x):
return x.view(x.size()[0], -1)
model = nn.Sequential(
#<your code>
)
opt = torch.optim.Adam(model.parameters(), lr=0.0005)
train(model, opt, 20)
```
The parameter of the trained network is a weight matrix in which each class corresponds to one of the 784-dimensional columns. Visualize the learned vector for each class by reshaping it into a 28x28 two-dimensional image. You can reuse the MNIST image visualization code from previous seminars.
```
weights = #<your code>
plt.figure(figsize=[10, 10])
for i in range(10):
plt.subplot(5, 5, i + 1)
plt.title("Label: %i" % i)
plt.imshow(weights[i].reshape([28, 28]), cmap='gray');
```
Implement a Dropout layer for a fully connected network. Remember that this layer behaves differently during training and during inference.
```
class DropoutLayer(nn.Module):
def __init__(self, p):
super().__init__()
#<your code>
def forward(self, input):
if self.training:
#<your code>
else:
#<your code>
```
Add a Dropout layer to the network architecture, run the optimization with the parameters given earlier, and visualize the learned weights. Is there a difference between the weights learned with and without Dropout? Take the Dropout probability equal to 0.7.
```
modelDp = nn.Sequential(
#<your code>
)
opt = torch.optim.Adam(modelDp.parameters(), lr=0.0005)
train(modelDp, opt, 20)
weights = #<your code>
plt.figure(figsize=[10, 10])
for i in range(10):
plt.subplot(5, 5, i + 1)
plt.title("Label: %i" % i)
plt.imshow(weights[i].reshape([28, 28]), cmap='gray');
```
Train one more model in which Dropout regularization is replaced by L2 regularization with coefficient 0.05 (the weight_decay parameter of the optimizer). Visualize the weights and compare them with the two previous approaches.
```
model = nn.Sequential(
Flatten(),
nn.Linear(input_size,num_classes),
nn.LogSoftmax(dim=-1)
)
opt = torch.optim.Adam(model.parameters(), lr=0.0005, weight_decay=0.05)
train(model, opt, 20)
weights = #<your code>
plt.figure(figsize=[10, 10])
for i in range(10):
plt.subplot(5, 5, i + 1)
plt.title("Label: %i" % i)
plt.imshow(weights[i].reshape([28, 28]), cmap='gray');
```
## Batch normalization (0.4 points)
Implement a BatchNormalization layer for a fully connected network. In this implementation it is enough to center the activations and divide by the square root of the variance; the affine correction (gamma and beta) may be omitted in this task.
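The idea can again be illustrated framework-agnostically in numpy (an illustrative sketch with invented names, not the graded torch solution): batch statistics are used in training mode, while running averages accumulated during training are used at inference.

```python
import numpy as np

class SimpleBatchNorm:
    """Batch normalization without the affine gamma/beta correction:
    batch statistics in training mode, running averages in eval mode.
    `x` has shape (batch, features)."""
    def __init__(self, num_features, eps=1e-5, momentum=0.1):
        self.eps = eps
        self.momentum = momentum
        self.running_mean = np.zeros(num_features)
        self.running_var = np.ones(num_features)

    def __call__(self, x, training):
        if training:
            mean, var = x.mean(axis=0), x.var(axis=0)
            # update running statistics for later use at inference
            self.running_mean = (1 - self.momentum) * self.running_mean + self.momentum * mean
            self.running_var = (1 - self.momentum) * self.running_var + self.momentum * var
        else:
            mean, var = self.running_mean, self.running_var
        return (x - mean) / np.sqrt(var + self.eps)
```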
```
class BnLayer(nn.Module):
def __init__(self, num_features):
super().__init__()
#<your code>
def forward(self, input):
if self.training:
#<your code>
else:
#<your code>
return #<your code>
```
Train a three-layer fully connected network (use a hidden layer size of 100) with sigmoids as activation functions.
```
model = nn.Sequential(
#<your code>
)
opt = torch.optim.RMSprop(model.parameters(), lr=0.01)
train(model, opt, 3)
```
Repeat the training with the same parameters for a network with the same architecture, but with a BatchNorm layer added for all three hidden layers.
```
modelBN = nn.Sequential(
#<your code>
)
opt = torch.optim.RMSprop(modelBN.parameters(), lr=0.01)
train(modelBN, opt, 3)
```
Compare the training curves and draw a conclusion about the effect of BatchNorm on the course of training.
# Gradient Checking
Welcome to the final assignment for this week! In this assignment you will learn to implement and use gradient checking.
You are part of a team working to make mobile payments available globally, and are asked to build a deep learning model to detect fraud--whenever someone makes a payment, you want to see if the payment might be fraudulent, such as if the user's account has been taken over by a hacker.
But backpropagation is quite challenging to implement, and sometimes has bugs. Because this is a mission-critical application, your company's CEO wants to be really certain that your implementation of backpropagation is correct. Your CEO says, "Give me a proof that your backpropagation is actually working!" To give this reassurance, you are going to use "gradient checking".
Let's do it!
```
# Packages
import numpy as np
from testCases import *
from gc_utils import sigmoid, relu, dictionary_to_vector, vector_to_dictionary, gradients_to_vector
```
## 1) How does gradient checking work?
Backpropagation computes the gradients $\frac{\partial J}{\partial \theta}$, where $\theta$ denotes the parameters of the model. $J$ is computed using forward propagation and your loss function.
Because forward propagation is relatively easy to implement, you're confident you got that right, and so you're almost 100% sure that you're computing the cost $J$ correctly. Thus, you can use your code for computing $J$ to verify the code for computing $\frac{\partial J}{\partial \theta}$.
Let's look back at the definition of a derivative (or gradient):
$$ \frac{\partial J}{\partial \theta} = \lim_{\varepsilon \to 0} \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2 \varepsilon} \tag{1}$$
If you're not familiar with the "$\displaystyle \lim_{\varepsilon \to 0}$" notation, it's just a way of saying "when $\varepsilon$ is really really small."
We know the following:
- $\frac{\partial J}{\partial \theta}$ is what you want to make sure you're computing correctly.
- You can compute $J(\theta + \varepsilon)$ and $J(\theta - \varepsilon)$ (in the case that $\theta$ is a real number), since you're confident your implementation for $J$ is correct.
Let's use equation (1) and a small value for $\varepsilon$ to convince your CEO that your code for computing $\frac{\partial J}{\partial \theta}$ is correct!
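As a quick standalone illustration (not part of the graded exercises), the centered difference of equation (1) closely matches the analytic derivative of any smooth function; here we try $f(\theta) = \theta^3$, whose derivative at $\theta = 2$ is $3\theta^2 = 12$:

```python
def centered_difference(f, theta, eps=1e-7):
    """Two-sided (centered) finite-difference approximation of f'(theta),
    with O(eps**2) truncation error."""
    return (f(theta + eps) - f(theta - eps)) / (2 * eps)

# f(theta) = theta**3 has derivative 3*theta**2, which is 12 at theta = 2;
# the approximation lands very close to that value
approx = centered_difference(lambda t: t ** 3, 2.0)
```

The truncation error of the centered difference is $O(\varepsilon^2)$, which is why it is preferred over the one-sided difference $\frac{J(\theta + \varepsilon) - J(\theta)}{\varepsilon}$, whose error is only $O(\varepsilon)$.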
## 2) 1-dimensional gradient checking
Consider a 1D linear function $J(\theta) = \theta x$. The model contains only a single real-valued parameter $\theta$, and takes $x$ as input.
You will implement code to compute $J(.)$ and its derivative $\frac{\partial J}{\partial \theta}$. You will then use gradient checking to make sure your derivative computation for $J$ is correct.
<img src="images/1Dgrad_kiank.png" style="width:600px;height:250px;">
<caption><center> <u> **Figure 1** </u>: **1D linear model**<br> </center></caption>
The diagram above shows the key computation steps: First start with $x$, then evaluate the function $J(x)$ ("forward propagation"). Then compute the derivative $\frac{\partial J}{\partial \theta}$ ("backward propagation").
**Exercise**: implement "forward propagation" and "backward propagation" for this simple function. I.e., compute both $J(.)$ ("forward propagation") and its derivative with respect to $\theta$ ("backward propagation"), in two separate functions.
```
# GRADED FUNCTION: forward_propagation
def forward_propagation(x, theta):
"""
Implement the linear forward propagation (compute J) presented in Figure 1 (J(theta) = theta * x)
Arguments:
x -- a real-valued input
theta -- our parameter, a real number as well
Returns:
J -- the value of function J, computed using the formula J(theta) = theta * x
"""
### START CODE HERE ### (approx. 1 line)
J = None
### END CODE HERE ###
return J
x, theta = 2, 4
J = forward_propagation(x, theta)
print ("J = " + str(J))
```
**Expected Output**:
<table>
<tr>
<td> ** J ** </td>
<td> 8</td>
</tr>
</table>
**Exercise**: Now, implement the backward propagation step (derivative computation) of Figure 1. That is, compute the derivative of $J(\theta) = \theta x$ with respect to $\theta$. To save you from doing the calculus, you should get $dtheta = \frac { \partial J }{ \partial \theta} = x$.
```
# GRADED FUNCTION: backward_propagation
def backward_propagation(x, theta):
"""
Computes the derivative of J with respect to theta (see Figure 1).
Arguments:
x -- a real-valued input
theta -- our parameter, a real number as well
Returns:
dtheta -- the gradient of the cost with respect to theta
"""
### START CODE HERE ### (approx. 1 line)
dtheta = None
### END CODE HERE ###
return dtheta
x, theta = 2, 4
dtheta = backward_propagation(x, theta)
print ("dtheta = " + str(dtheta))
```
**Expected Output**:
<table>
<tr>
<td> ** dtheta ** </td>
<td> 2 </td>
</tr>
</table>
**Exercise**: To show that the `backward_propagation()` function is correctly computing the gradient $\frac{\partial J}{\partial \theta}$, let's implement gradient checking.
**Instructions**:
- First compute "gradapprox" using the formula above (1) and a small value of $\varepsilon$. Here are the Steps to follow:
1. $\theta^{+} = \theta + \varepsilon$
2. $\theta^{-} = \theta - \varepsilon$
3. $J^{+} = J(\theta^{+})$
4. $J^{-} = J(\theta^{-})$
5. $gradapprox = \frac{J^{+} - J^{-}}{2 \varepsilon}$
- Then compute the gradient using backward propagation, and store the result in a variable "grad"
- Finally, compute the relative difference between "gradapprox" and the "grad" using the following formula:
$$ difference = \frac {\mid\mid grad - gradapprox \mid\mid_2}{\mid\mid grad \mid\mid_2 + \mid\mid gradapprox \mid\mid_2} \tag{2}$$
You will need 3 Steps to compute this formula:
- 1'. compute the numerator using np.linalg.norm(...)
- 2'. compute the denominator. You will need to call np.linalg.norm(...) twice.
- 3'. divide them.
- If this difference is small (say less than $10^{-7}$), you can be quite confident that you have computed your gradient correctly. Otherwise, there may be a mistake in the gradient computation.
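For illustration only (not the graded solution), here are the same steps applied to a throwaway function $f(\theta) = \theta^2$ with its known analytic gradient $2\theta$:

```python
import numpy as np

def toy_gradient_check(theta, epsilon=1e-7):
    """Walk through steps 1-5 and the relative-difference formula (2) for
    f(theta) = theta**2, whose analytic gradient is 2*theta."""
    f = lambda t: t ** 2
    grad = 2 * theta                                                  # "backprop" result
    gradapprox = (f(theta + epsilon) - f(theta - epsilon)) / (2 * epsilon)
    numerator = np.linalg.norm(grad - gradapprox)                     # Step 1'
    denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox)   # Step 2'
    return numerator / denominator                                    # Step 3'
```

Because the analytic gradient is correct here, the returned difference is far below the $10^{-7}$ threshold.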
```
# GRADED FUNCTION: gradient_check
def gradient_check(x, theta, epsilon = 1e-7):
"""
Implement the backward propagation presented in Figure 1.
Arguments:
x -- a real-valued input
theta -- our parameter, a real number as well
epsilon -- tiny shift to the input to compute approximated gradient with formula(1)
Returns:
difference -- difference (2) between the approximated gradient and the backward propagation gradient
"""
# Compute gradapprox using left side of formula (1). epsilon is small enough, you don't need to worry about the limit.
### START CODE HERE ### (approx. 5 lines)
    thetaplus = theta + epsilon                              # Step 1
    thetaminus = theta - epsilon                             # Step 2
    J_plus = forward_propagation(x, thetaplus)               # Step 3
    J_minus = forward_propagation(x, thetaminus)             # Step 4
    gradapprox = (J_plus - J_minus) / (2 * epsilon)          # Step 5
### END CODE HERE ###
# Check if gradapprox is close enough to the output of backward_propagation()
### START CODE HERE ### (approx. 1 line)
    grad = backward_propagation(x, theta)
### END CODE HERE ###
### START CODE HERE ### (approx. 1 line)
    numerator = np.linalg.norm(grad - gradapprox)                    # Step 1'
    denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox)  # Step 2'
    difference = numerator / denominator                             # Step 3'
### END CODE HERE ###
if difference < 1e-7:
print ("The gradient is correct!")
else:
print ("The gradient is wrong!")
return difference
x, theta = 2, 4
difference = gradient_check(x, theta)
print("difference = " + str(difference))
```
**Expected Output**:
The gradient is correct!
<table>
<tr>
<td> ** difference ** </td>
<td> 2.9193358103083e-10 </td>
</tr>
</table>
Congrats, the difference is smaller than the $10^{-7}$ threshold. So you can have high confidence that you've correctly computed the gradient in `backward_propagation()`.
Now, in the more general case, your cost function $J$ has more than a single 1D input. When you are training a neural network, $\theta$ actually consists of multiple matrices $W^{[l]}$ and biases $b^{[l]}$! It is important to know how to do a gradient check with higher-dimensional inputs. Let's do it!
## 3) N-dimensional gradient checking
The following figure describes the forward and backward propagation of your fraud detection model.
<img src="images/NDgrad_kiank.png" style="width:600px;height:400px;">
<caption><center> <u> **Figure 2** </u>: **deep neural network**<br>*LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID*</center></caption>
Let's look at your implementations for forward propagation and backward propagation.
```
def forward_propagation_n(X, Y, parameters):
"""
Implements the forward propagation (and computes the cost) presented in Figure 3.
Arguments:
X -- training set for m examples
Y -- labels for m examples
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
W1 -- weight matrix of shape (5, 4)
b1 -- bias vector of shape (5, 1)
W2 -- weight matrix of shape (3, 5)
b2 -- bias vector of shape (3, 1)
W3 -- weight matrix of shape (1, 3)
b3 -- bias vector of shape (1, 1)
Returns:
cost -- the cost function (logistic cost for one example)
"""
# retrieve parameters
m = X.shape[1]
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
W3 = parameters["W3"]
b3 = parameters["b3"]
# LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
Z1 = np.dot(W1, X) + b1
A1 = relu(Z1)
Z2 = np.dot(W2, A1) + b2
A2 = relu(Z2)
Z3 = np.dot(W3, A2) + b3
A3 = sigmoid(Z3)
# Cost
logprobs = np.multiply(-np.log(A3),Y) + np.multiply(-np.log(1 - A3), 1 - Y)
cost = 1./m * np.sum(logprobs)
cache = (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3)
return cost, cache
```
Now, run backward propagation.
```
def backward_propagation_n(X, Y, cache):
"""
Implement the backward propagation presented in figure 2.
Arguments:
X -- input datapoint, of shape (input size, 1)
Y -- true "label"
cache -- cache output from forward_propagation_n()
Returns:
gradients -- A dictionary with the gradients of the cost with respect to each parameter, activation and pre-activation variables.
"""
m = X.shape[1]
(Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
dW3 = 1./m * np.dot(dZ3, A2.T)
db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
dA2 = np.dot(W3.T, dZ3)
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
dW2 = 1./m * np.dot(dZ2, A1.T) * 2
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
dW1 = 1./m * np.dot(dZ1, X.T)
db1 = 4./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,
"dA2": dA2, "dZ2": dZ2, "dW2": dW2, "db2": db2,
"dA1": dA1, "dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
```
You obtained some results on the fraud detection test set but you are not 100% sure of your model. Nobody's perfect! Let's implement gradient checking to verify if your gradients are correct.
**How does gradient checking work?**.
As in 1) and 2), you want to compare "gradapprox" to the gradient computed by backpropagation. The formula is still:
$$ \frac{\partial J}{\partial \theta} = \lim_{\varepsilon \to 0} \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2 \varepsilon} \tag{1}$$
However, $\theta$ is not a scalar anymore. It is a dictionary called "parameters". We implemented a function "`dictionary_to_vector()`" for you. It converts the "parameters" dictionary into a vector called "values", obtained by reshaping all parameters (W1, b1, W2, b2, W3, b3) into vectors and concatenating them.
The inverse function is "`vector_to_dictionary`" which outputs back the "parameters" dictionary.
<img src="images/dictionary_to_vector.png" style="width:600px;height:400px;">
<caption><center> <u> **Figure 2** </u>: **dictionary_to_vector() and vector_to_dictionary()**<br> You will need these functions in gradient_check_n()</center></caption>
We have also converted the "gradients" dictionary into a vector "grad" using gradients_to_vector(). You don't need to worry about that.
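Although these helpers are provided for you, a minimal sketch of what `dictionary_to_vector()` and `vector_to_dictionary()` do (with made-up parameter shapes, not the course's exact implementation) looks like this:

```python
import numpy as np

def dictionary_to_vector(parameters, keys=("W1", "b1", "W2", "b2")):
    # Reshape each parameter into a column and stack them into one long vector
    return np.concatenate([parameters[k].reshape(-1, 1) for k in keys], axis=0)

def vector_to_dictionary(vector, shapes={"W1": (2, 3), "b1": (2, 1), "W2": (1, 2), "b2": (1, 1)}):
    # Invert the flattening: slice the vector back into the original shapes
    parameters, start = {}, 0
    for key, shape in shapes.items():
        size = int(np.prod(shape))
        parameters[key] = vector[start:start + size].reshape(shape)
        start += size
    return parameters

params = {"W1": np.arange(6.).reshape(2, 3), "b1": np.zeros((2, 1)),
          "W2": np.ones((1, 2)), "b2": np.zeros((1, 1))}
vec = dictionary_to_vector(params)    # shape (11, 1): 6 + 2 + 2 + 1 entries
restored = vector_to_dictionary(vec)  # round-trips back to the original dict
```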
**Exercise**: Implement gradient_check_n().
**Instructions**: Here is pseudo-code that will help you implement the gradient check.
For each i in num_parameters:
- To compute `J_plus[i]`:
1. Set $\theta^{+}$ to `np.copy(parameters_values)`
2. Set $\theta^{+}_i$ to $\theta^{+}_i + \varepsilon$
3. Calculate $J^{+}_i$ using `forward_propagation_n(x, y, vector_to_dictionary(`$\theta^{+}$ `))`.
- To compute `J_minus[i]`: do the same thing with $\theta^{-}$
- Compute $gradapprox[i] = \frac{J^{+}_i - J^{-}_i}{2 \varepsilon}$
Thus, you get a vector gradapprox, where gradapprox[i] is an approximation of the gradient with respect to `parameter_values[i]`. You can now compare this gradapprox vector to the gradients vector from backpropagation. Just like for the 1D case (Steps 1', 2', 3'), compute:
$$ difference = \frac {\| grad - gradapprox \|_2}{\| grad \|_2 + \| gradapprox \|_2 } \tag{3}$$
```
# GRADED FUNCTION: gradient_check_n
def gradient_check_n(parameters, gradients, X, Y, epsilon = 1e-7):
"""
Checks if backward_propagation_n computes correctly the gradient of the cost output by forward_propagation_n
Arguments:
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
grad -- output of backward_propagation_n, contains gradients of the cost with respect to the parameters.
x -- input datapoint, of shape (input size, 1)
y -- true "label"
epsilon -- tiny shift to the input to compute approximated gradient with formula(1)
Returns:
difference -- difference (2) between the approximated gradient and the backward propagation gradient
"""
# Set-up variables
parameters_values, _ = dictionary_to_vector(parameters)
grad = gradients_to_vector(gradients)
num_parameters = parameters_values.shape[0]
J_plus = np.zeros((num_parameters, 1))
J_minus = np.zeros((num_parameters, 1))
gradapprox = np.zeros((num_parameters, 1))
# Compute gradapprox
for i in range(num_parameters):
# Compute J_plus[i]. Inputs: "parameters_values, epsilon". Output = "J_plus[i]".
        # "_" is used because forward_propagation_n outputs two values but we only care about the first one
### START CODE HERE ### (approx. 3 lines)
        thetaplus = np.copy(parameters_values)                                        # Step 1
        thetaplus[i][0] = thetaplus[i][0] + epsilon                                   # Step 2
        J_plus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaplus))   # Step 3
### END CODE HERE ###
# Compute J_minus[i]. Inputs: "parameters_values, epsilon". Output = "J_minus[i]".
### START CODE HERE ### (approx. 3 lines)
        thetaminus = np.copy(parameters_values)                                        # Step 1
        thetaminus[i][0] = thetaminus[i][0] - epsilon                                  # Step 2
        J_minus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaminus))  # Step 3
### END CODE HERE ###
# Compute gradapprox[i]
### START CODE HERE ### (approx. 1 line)
        gradapprox[i] = (J_plus[i] - J_minus[i]) / (2 * epsilon)
### END CODE HERE ###
# Compare gradapprox to backward propagation gradients by computing difference.
### START CODE HERE ### (approx. 1 line)
    numerator = np.linalg.norm(grad - gradapprox)                    # Step 1'
    denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox)  # Step 2'
    difference = numerator / denominator                             # Step 3'
### END CODE HERE ###
if difference > 2e-7:
print ("\033[93m" + "There is a mistake in the backward propagation! difference = " + str(difference) + "\033[0m")
else:
print ("\033[92m" + "Your backward propagation works perfectly fine! difference = " + str(difference) + "\033[0m")
return difference
X, Y, parameters = gradient_check_n_test_case()
cost, cache = forward_propagation_n(X, Y, parameters)
gradients = backward_propagation_n(X, Y, cache)
difference = gradient_check_n(parameters, gradients, X, Y)
```
**Expected output**:
<table>
<tr>
<td> ** There is a mistake in the backward propagation!** </td>
<td> difference = 0.285093156781 </td>
</tr>
</table>
It seems that there were errors in the `backward_propagation_n` code we gave you! Good that you've implemented the gradient check. Go back to `backward_propagation_n` and try to find/correct the errors *(Hint: check dW2 and db1)*. Rerun the gradient check when you think you've fixed it. Remember, you'll need to re-execute the cell defining `backward_propagation_n()` if you modify the code.
Can you get gradient check to declare your derivative computation correct? Even though this part of the assignment isn't graded, we strongly urge you to try to find the bug and re-run gradient check until you're convinced backprop is now correctly implemented.
**Note**
- Gradient Checking is slow! Approximating the gradient with $\frac{\partial J}{\partial \theta} \approx \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2 \varepsilon}$ is computationally costly. For this reason, we don't run gradient checking at every iteration during training. Just a few times to check if the gradient is correct.
- Gradient Checking, at least as we've presented it, doesn't work with dropout. You would usually run the gradient check algorithm without dropout to make sure your backprop is correct, then add dropout.
Congrats, you can be confident that your deep learning model for fraud detection is working correctly! You can even use this to convince your CEO. :)
<font color='blue'>
**What you should remember from this notebook**:
- Gradient checking verifies closeness between the gradients from backpropagation and the numerical approximation of the gradient (computed using forward propagation).
- Gradient checking is slow, so we don't run it in every iteration of training. You would usually run it only to make sure your code is correct, then turn it off and use backprop for the actual learning process.
## Model-Agnostic Meta-Learning
Based on the paper: <i>Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks</i>. Data from this [notebook](https://github.com/hereismari/tensorflow-maml/blob/master/maml.ipynb): sinusoid dataset of sine waves with different amplitude and phase, representing different "tasks". These are shuffled prior to batching, representing the task sampling step.
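The dataset files loaded below were generated along these lines: each task samples an amplitude and phase, then draws points from the resulting sine wave. This is a hedged reconstruction; the exact sampling ranges used for the saved `.npy` files may differ:

```python
import numpy as np

def sample_sinusoid_task(n_points=25, rng=np.random.default_rng(0)):
    # One "task" = one sine wave with its own amplitude and phase
    amplitude = rng.uniform(0.1, 5.0)
    phase = rng.uniform(0.0, np.pi)
    x = rng.uniform(-5.0, 5.0, size=(n_points, 1))
    y = amplitude * np.sin(x - phase)
    return x, y

x, y = sample_sinusoid_task()
print(x.shape, y.shape)  # (25, 1) (25, 1)
```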
```
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
import matplotlib.pyplot as plt
from tensorflow.keras import backend as K
from tensorflow.keras.layers import Input,Dense,Dropout,Activation
from tensorflow.keras.models import Model,Sequential
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.losses import BinaryCrossentropy,MeanSquaredError
from sklearn.model_selection import train_test_split
from sklearn.metrics import balanced_accuracy_score,mean_squared_error
from sklearn.utils import shuffle
```
### Gather data
```
sin_x = np.load("../data/other/sin_x.npy")
sin_y = np.load("../data/other/sin_y.npy")
sin_x_test = np.load("../data/other/sin_x_test.npy")
sin_y_test = np.load("../data/other/sin_y_test.npy")
train_x,test_x = sin_x[:10000],sin_x[10000:]
train_y,test_y = sin_y[:10000],sin_y[10000:]
train_x,train_y = shuffle(train_x,train_y)
test_x,test_y = shuffle(test_x,test_y)
print(train_x.shape,train_y.shape)
print(sin_x_test.shape,sin_y_test.shape)
plt.scatter(train_x[:1000],train_y[:1000])
plt.show()
```
### Modeling
```
def get_model():
""" Model instantiation
"""
x = Input(shape=(1))
h = Dense(50,activation="relu")(x)
h = Dense(50,activation="relu")(h)
o = Dense(1,activation=None)(h)
model = Model(inputs=x,outputs=o)
return model
n_epochs = 100
batch_size = 25
meta_optimizer = Adam(0.01)
meta_model = get_model()
for epoch_i in range(n_epochs):
losses = []
    for i in range(0, len(train_x), batch_size):
task_train_x,task_train_y = train_x[i:i+batch_size],train_y[i:i+batch_size]
task_test_x,task_test_y = train_x[i:i+batch_size],train_y[i:i+batch_size]
with tf.GradientTape() as meta_tape: # optimization step
with tf.GradientTape() as tape:
task_train_pred = meta_model(task_train_x) # this cannot be done with model_copy
task_train_loss = MeanSquaredError()(task_train_y,task_train_pred)
gradients = tape.gradient(task_train_loss, meta_model.trainable_variables)
model_copy = get_model()
model_copy.set_weights(meta_model.get_weights())
            k = 0  # gradient descent this way does not break gradient flow from model_copy -> meta_model
            for layer_i in range(1, len(model_copy.layers)):  # first layer is the input layer
                model_copy.layers[layer_i].kernel = meta_model.layers[layer_i].kernel - 0.01*gradients[k]
                model_copy.layers[layer_i].bias = meta_model.layers[layer_i].bias - 0.01*gradients[k+1]
                k += 2
task_test_pred = model_copy(task_test_x)
task_test_loss = MeanSquaredError()(task_test_y,task_test_pred)
gradients = meta_tape.gradient(task_test_loss,meta_model.trainable_variables)
meta_optimizer.apply_gradients(zip(gradients,meta_model.trainable_variables))
losses.append(float(task_test_loss))
if (epoch_i+1)%10==0 or epoch_i==0:
print("Epoch {}: {}".format(epoch_i+1,sum(losses)/len(losses)))
```
# "Hardware Build - Part 2"
> "Buckle up, Assembling the hardware now."
- toc: true
- branch: master
- badges: false
- comments: true
- categories: [SelfDriving]
- hide: false
- search_exclude: false
- image: images/post-thumbnails/Hardware_Config.png
- metadata_key1: MUSHR
- metadata_key2:
# Purpose
The intention of this page is NOT to replace the existing documentation by MUSHR. Please follow it [here](https://mushr.io), but visit this page if things don't go as planned. I will highlight the sections where I ran into problems and the solutions for them. This is **NOT** a TUTORIAL but rather a buffet of practical problems faced and solutions to overcome them easily.

# Servo Motor Install Tips
During the installation of the SERVO MOTOR, you need to remove the hex screws. BUT the hex screws are tough to remove, and you might end up stripping the screw heads. This damage can be permanent and will prevent you from removing the existing SERVO MOTOR! In other words, the screws will SCREW you! And they did in my case. See below. To counter that, I had to remove the TOP parts (melt them, crack them, etc.) so the screws would naturally come free at the bottom. This took a few hours to complete; since the screw heads were damaged, there was no other way to remove the screws.

## Weak 3D Printing Parts
Some of the 3D-printed parts may be weak and break. No issues, just use glue and put them back!

# Buck Converter Steps
If you have a connector different from the one in the MUSHR videos, you might need to connect it as shown to configure the buck converter.

# Push Button Install Tip
Use the threads instead of screws. The screws didn't fit well into the back of the push button.

# Camera Install Screw Alternative
This screw is provided as part of the BUCK CONVERTER

# Lower Platform Imperfections
3D Imperfections are OK

# Upper Platform Trick
Stick a piece of tape to the nut and then remove it. This is useful for the screws on the backside of the platform, which might otherwise fall off.

# Final Door Trick
If you screw the final door shut, you will need to remove it again to charge the batteries (which happens more often than you think). To solve the issue,
use VELCRO as shown

# Misc Hardware Pics

## Sorting and Grouping
The Search service offers the ability to organize your result set in different ways. Whether you choose to sort based on a specific property, group properties or boost the result set, you have the option to override the default ordering returned. In addition, you can also use navigation to organize your buckets of data returned.
By default, the result set will provide some basic ranking, depending on the type of query or filter performed.
```
import refinitiv.dataplatform as rdp
import pandas as pd
rdp.open_desktop_session('Your API Key here')
pd.set_option('display.max_colwidth', 140)
rdp.__version__
```
#### Example - Default ranking and how to boost results
To demonstrate default sorting, I'm going to search for documents that match a specific ticker. This will result in the documents ranked based on the relevancy and scores assigned to each document.
Search for Vodafone (VOD) against the ticker:
```
rdp.search(
filter = "TickerSymbol eq 'VOD'",
select = "_, RCSExchangeCountryLeaf"
)
```
The above output ranked the common share 'VOD.L', listed within the UK, at the top. This is largely because 'VOD.L' is the most significant or liquid asset in the list. However, if I decide to override this default ranking by specifying that I want to show the listed documents within the United States at the top, I can do this
by applying a *Boost* parameter.
```
rdp.search(
filter = "TickerSymbol eq 'VOD'",
boost = "RCSExchangeCountryLeaf eq 'United States'",
select = "_, RCSExchangeCountryLeaf"
)
```
#### Example - List the youngest CEOs
The `order_by` parameter will sort, ascending (default) or descending, based on the birth-year property.
Note: In the following example, not every document that identifies a CEO will have a reported birth year. Because we are sorting based on the year they were born,
all CEOs whose year of birth is not recorded will be bumped to the bottom of the list.
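This null-last behavior can be illustrated locally with pandas, which also pushes missing values to the bottom of a sort by default (an analogy only; the actual ordering is done server-side by the Search service):

```python
import numpy as np
import pandas as pd

ceos = pd.DataFrame({
    "FullName": ["A. Young", "B. Old", "C. Unknown"],
    "YearOfBirth": [1985, 1950, np.nan],
})
# Descending sort; NaN birth years land at the end (na_position defaults to "last")
ranked = ceos.sort_values("YearOfBirth", ascending=False)
print(ranked.FullName.tolist())  # ['A. Young', 'B. Old', 'C. Unknown']
```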
```
rdp.search(
view = rdp.SearchViews.People,
filter = "OfficerDirector(RoleTitleCode eq 'CEO' and RoleStatus eq 'Active')",
order_by = "YearOfBirth desc, LastName, FirstName",
select = "FullName, YearOfBirth, DTCharacteristics, PrimaryOfficerDirectorRIC, PrimaryOfficerDirector"
)
```
Instead of sorting by the year of birth, let's organize the output so the results are grouped by company.
**Note**: While we can see the companies grouped, we can also observe that many of the entries do not have a year of birth recorded.
```
rdp.search(
view = rdp.SearchViews.People,
filter = "OfficerDirector(RoleTitleCode eq 'CEO' and RoleStatus eq 'Active')",
group_by = "PrimaryOfficerDirector",
top = 20,
select = "FullName, YearOfBirth, DTCharacteristics, PrimaryOfficerDirectorRIC, PrimaryOfficerDirector"
)
```
#### Example - Sorting using Navigators
By default, when you use a navigator against a property, it will sort all results based on the number of matches for each value within a bucket. For example, if I were to list the top 10 exchanges within Canada, we can see the count value ranked, indicating the number of instruments matched on that exchange.
```
response = rdp.Search.search(
view = rdp.SearchViews.EquityQuotes,
filter = "RCSExchangeCountryLeaf eq 'Canada'",
top = 0,
navigators = "ExchangeName(buckets:10)"
)
response.data.raw["Navigators"]["ExchangeName"]["Buckets"]
```
Using the above example, I can instead choose to sort based on the average daily volume within the exchange. The following search will result in the top 5 Canadian exchanges, ranked based on the 90 day average volume:
```
response = rdp.Search.search(
view = rdp.SearchViews.EquityQuotes,
filter = "RCSExchangeCountryLeaf eq 'Canada'",
top = 0,
navigators = "ExchangeName(buckets:5, desc:sum_AvgVol90D)"
)
# Pretty display of the listing
data = []
for exch in response.data.raw["Navigators"]["ExchangeName"]["Buckets"]:
data.append([exch["Label"], f'{exch["sum_AvgVol90D"]:,}'])
pd.DataFrame(data, columns=["Exchange", "Average 90-day Volume"])
```
```
#Importing the required libraries
#Use pandas, seaborn, numpy and matplotlib
import os
print(os.listdir("../input"))
import pandas as pd
import seaborn as sns
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
#Read the values of the dataset into a Pandas Dataframe
#Also display the dataframe
df = pd.read_csv("../input/glass.csv")
df.head()
# Display class values
df.Type.value_counts().sort_index()
# Convert the target feature(Household) into a binary feature
# glass_type 1, 2, 3 are window glass
# glass_type 5, 6, 7 are non-window glass
df['household'] = df.Type.map({1:0, 2:0, 3:0, 5:1, 6:1, 7:1})
df.head()
# Plot Aluminum (al) vs household
plt.scatter(df.Al, df.household)
plt.xlabel('Al')
plt.ylabel('household')
#Create Train/Test Split using sklearn
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(df[['Al']],df.household,train_size=0.7)
```
**Predicting with Linear Regression**
```
from sklearn.linear_model import LinearRegression
# Fit the model
linear_model = LinearRegression()
linear_model = linear_model.fit(X_train, y_train)
# Create a separate table to store predictions
glass_df = X_train[['Al']]
glass_df['household_actual'] = y_train
# Predict with Linear Regression
glass_df['household_pred_linear'] = linear_model.predict(X_train)
# Examine the first 15 linear regression predictions
linear_model.predict(X_train)[0:15]
```
Notice there are some numbers below 0 and above 1 (NOT GOOD)
```
# Plot Linear Regression Line
sns.regplot(x='Al', y='household_actual', data=glass_df, logistic=False)
```
Linear regression is making predictions outside the range of 0 and 1
**Logistic Regression**
```
from sklearn.linear_model import LogisticRegression
# Fit logistic regression model
logistic_model = LogisticRegression(class_weight='balanced')
logistic_model = logistic_model.fit(X_train, y_train)
```
**Predict Class Probabilities & Class Predictions**
```
# Make class label predictions
logistic_model.predict(X_train)[:15]
# Make class probability predictions
logistic_model.predict_proba(X_train)[:15]
# Predict with Logistic Regression
glass_df['household_pred_log'] = logistic_model.predict(X_train)
# Predict Probability with Logistic Regression
glass_df['household_pred_prob_log'] = logistic_model.predict_proba(X_train)[:,1]
# Plot logistic regression line
sns.regplot(x='Al', y='household_actual', data=glass_df, logistic=True, color='b')
```
**Compare Predictions**
```
# Examine the table
glass_df.head(10)
```
**Model Evaluation**
```
# Observe class predictions on test set
logistic_model.predict(X_test)
# Store predictions
predicted = logistic_model.predict(X_test)
from sklearn import metrics
# Print Confusion Matrix
print (metrics.confusion_matrix(y_test, predicted))
# Print the metrics classification report
print (metrics.classification_report(y_test, predicted))
# Using the statsmodel library
import statsmodels.api as sm
# Define independent variables
iv = ['RI','Na','Mg','Al','Si','K','Ca','Ba','Fe']
# Fit the logistic regression function
logReg = sm.Logit(df.household,df[iv])
answer = logReg.fit()
# Display the parameter coefficients
np.exp(answer.params)
# The End
```
# USB Webcam
This notebook shows how to use a USB web camera attached to the Pynq-Z1 board. An image is captured using [fswebcam](http://manpages.ubuntu.com/manpages/wily/man1/fswebcam.1.html). The image can then be manipulated using the Python Imaging Library (Pillow).
The webcam used is the Logitech USB HD Webcam C270 and the driver for this webcam has already been installed on the Pynq-Z1 board image.
#### References
http://pillow.readthedocs.org/en/3.1.x/handbook/tutorial.html<br>
http://manpages.ubuntu.com/manpages/lucid/man1/fswebcam.1.html <br>
http://www.logitech.com/en-us/product/hd-webcam-c270
```
from PIL import Image as PIL_Image
orig_img_path = '/home/xilinx/jupyter_notebooks/common/data/webcam.jpg'
!fswebcam --no-banner --save {orig_img_path} -d /dev/video0 2> /dev/null
rgb_img = PIL_Image.open(orig_img_path)
rgb_img
# import the necessary packages
from imutils import contours
from skimage import measure
import numpy as np
import imutils
import cv2
# load the image, convert it to grayscale, and blur it
rgb_array = cv2.imread(orig_img_path)
#rgb_array[:,:,1] = 0
#rgb_array[:,:,2] = 0
#gray_array = cv2.cvtColor(rgb_array, cv2.COLOR_BGR2GRAY)
#blurred = cv2.GaussianBlur(gray_array, (3, 3), 0)
#img = PIL_Image.fromarray(blurred, 'L')
red_channel = rgb_array[:,:,0]
blurred = cv2.GaussianBlur(red_channel, (3, 3), 0)
img = PIL_Image.fromarray(blurred)
img
# threshold the image to reveal light regions in the
# blurred image
thresh = cv2.threshold(blurred, 180, 255, cv2.THRESH_BINARY)[1]
# perform a series of erosions and dilations to remove
# any small blobs of noise from the thresholded image
thresh = cv2.erode(thresh, None, iterations=2)
thresh = cv2.dilate(thresh, None, iterations=4)
# perform a connected component analysis on the thresholded
# image, then initialize a mask to store only the "large"
# components
labels = measure.label(thresh, neighbors=8, background=0)
mask = np.zeros(thresh.shape, dtype="uint8")
# loop over the unique components
for label in np.unique(labels):
# if this is the background label, ignore it
if label == 0:
continue
# otherwise, construct the label mask and count the number of pixels
labelMask = np.zeros(thresh.shape, dtype="uint8")
labelMask[labels == label] = 255
numPixels = cv2.countNonZero(labelMask)
# if the number of pixels in the component is sufficiently
# large, then add it to our mask of "large blobs"
if numPixels > 300:
mask = cv2.add(mask, labelMask)
# find the contours in the mask, then sort them from left to
# right
cnts = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL,
cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if imutils.is_cv2() else cnts[1]
cnts = contours.sort_contours(cnts)[0]
# loop over the contours
for (i, c) in enumerate(cnts):
# draw the bright spot on the image
(x, y, w, h) = cv2.boundingRect(c)
((cX, cY), radius) = cv2.minEnclosingCircle(c)
cv2.circle(rgb_array, (int(cX), int(cY)), int(radius),
(0, 0, 255), 3)
cv2.putText(rgb_array, "#{}".format(i + 1), (x, y - 15),
cv2.FONT_HERSHEY_SIMPLEX, 0.45, (0, 0, 255), 2)
# show the output image
img = PIL_Image.fromarray(rgb_array, 'RGB')
img
```
# PETs/TETs – Hyperledger Aries / PySyft – Manufacturer 2 (Holder) 🚛
```
%%javascript
document.title = '🚛 Manufacturer2'
```
## PART 3: Connect with City to Analyze Data
**What:** Share encrypted data with City agent in a trust- and privacy-preserving manner
**Why:** Share data with City agent (e.g., to obtain funds)
**How:** <br>
1. [Initiate Manufacturer's AgentCommunicationManager (ACM)](#1)
2. [Connect anonymously with the City agent via a multi-use SSI invitation](#2)
3. [Prove Manufacturer2 Agent is a certified manufacturer via VCs](#3)
4. [Establish anonymous Duet Connection to share encrypted data](#4)
**Accompanying Agents and Notebooks:**
* City 🏙️️: `03_connect_with_manufacturer.ipynb`
* Optional – Manufacturer1 🚗: `03_connect_with_city.ipynb`
* Optional – Manufacturer3 🛵: `03_connect_with_city.ipynb`
---
### 0 - Setup
#### 0.1 - Imports
```
import os
import numpy as np
import pandas as pd
import syft as sy
import torch
from aries_cloudcontroller import AriesAgentController
from libs.agent_connection_manager import CredentialHolder
```
#### 0.2 – Variables
```
# Get relevant details from .env file
api_key = os.getenv("ACAPY_ADMIN_API_KEY")
admin_url = os.getenv("ADMIN_URL")
webhook_port = int(os.getenv("WEBHOOK_PORT"))
webhook_host = "0.0.0.0"
```
---
<a id=1></a>
### 1 – Initiate Manufacturer2 Agent
#### 1.1 – Init ACA-PY agent controller
```
# Setup
agent_controller = AriesAgentController(admin_url,api_key)
print(f"Initialising a controller with admin api at {admin_url} and an api key of {api_key}")
```
#### 1.2 – Start Webhook Server to enable communication with other agents
@todo: is communication with other agents, or with other docker containers?
```
# Listen on webhook server
await agent_controller.init_webhook_server(webhook_host, webhook_port)
print(f"Listening for webhooks from agent at http://{webhook_host}:{webhook_port}")
```
#### 1.3 – Init ACM Credential Holder
```
# The CredentialHolder registers relevant webhook servers and event listeners
manufacturer2_agent = CredentialHolder(agent_controller)
# Verify if Manufacturer already has a VC
# (if there are manufacturer credentials, there is no need to execute the notebook)
manufacturer2_agent.get_credentials()
```
---
<a id=2></a>
### 2 – Establish a connection with the City agent 🏙️
A connection between prover and verifier must be established before a proof presentation can be exchanged. In this scenario, the City agent publishes a multi-use invitation, which the Manufacturer2 agent joins in order to later prove that it is a certified manufacturer. In real life, the invitation can be shared via video call, phone call, or e-mail. In this PoC, this is represented by copying and pasting the invitation into the manufacturers' notebooks.
#### 2.1 Join invitation of City agent 🏙️
Copy and paste the multi-use invitation of the city agent, and establish a connection with them.
```
# Variables
alias = "undisclosedM2"
auto_accept = True
# Receive connection invitation
connection_id = manufacturer2_agent.receive_connection_invitation(alias=alias, auto_accept=auto_accept)
```
<div style="font-size: 25px"><center><b>Break Point 2 / 3 / 4</b></center></div>
<div style="font-size: 50px"><center>🚛 ➡️ 🚗 / 🛵 / 🏙️ </center></div><br>
<center><b>Please proceed to the remaining Manufacturers. <br> If you have established a connection between the City and all Manufacturers, proceed to the City Notebook's Step 2.2</b></center>
---
<a id=3></a>
### 3 – Create Presentation to Send Proof Presentation
#### 3.1 – Create presentation that satisfies requirements of proof request
Before you can present a presentation, you must identify the presentation record which you wish to respond to with a presentation. To do so, the `prepare_presentation()` function runs through the following steps:
1. Get all proof requests that were sent through `connection_id`
2. Get the most recent `presentation_exchange_id` and the corresponding `proof_request` from (1)
3. Get the restrictions the City agent defined in `proof_request` from (2)
4. Compare all VCs the Manufacturer2 agent has stored, and find (if available) a VC that satisfies the restrictions from (3)
5. Return a presentation dictionary from a VC from (4) that satisfies all requirements. Generally, a presentation consists of three classes of attributes: <br>
a. `requested_attributes`: Attributes that were signed by an issuer and have been revealed in the presentation process <br>
b. `self_attested_attributes`: Attributes that the prover has self attested to in the presentation object. <br>
c. `requested_predicates` (predicate proofs): Attribute values that have been proven to meet some statement. (TODO: Show how you can parse this information)
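Schematically, such a presentation is a dictionary with these three sections. The field names and values below are illustrative placeholders, not the exact ACA-Py payload:

```python
# Illustrative shape of a presentation object (hypothetical referents and cred_id)
presentation = {
    "requested_attributes": {
        # issuer-signed attributes, revealed to the verifier
        "attr1_referent": {"cred_id": "12345-abcd", "revealed": True},
    },
    "self_attested_attributes": {
        # values the prover asserts without an issuer signature
        "self1_referent": "Manufacturer2",
    },
    "requested_predicates": {
        # attributes proven to satisfy a statement without revealing the value
        "pred1_referent": {"cred_id": "12345-abcd"},
    },
}
print(sorted(presentation))
```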
```
presentation, presentation_exchange_id = manufacturer2_agent.prepare_presentation(connection_id)
```
#### 3.2 – Send Presentation
Send the presentation to the recipient of `presentation_exchange_id`
```
manufacturer2_agent.send_proof_presentation(presentation_exchange_id, presentation)
```
<div style="font-size: 25px"><center><b>Break Point 6 / 7 / 8</b></center></div>
<div style="font-size: 50px"><center>🚛 ➡️ 🚗 / 🛵 / 🏙️ </center></div><br>
<center><b>Please proceed to the remaining Manufacturers and run all cells between Steps 3 and 4.1 <br> If you have sent proof presentations from all manufacturers, proceed to the City Notebook's Step 3.3 </b></center>
---
<a id=4></a>
### 4 – Do Data Science
Assuming that the City agent will acknowledge the proofs and deem them to be correct, proceed by inviting the City agent to a Duet Connection.
#### 4.1 – Establish a Duet Connection with City Agent: Send Duet invitation
Duet is a package that allows you to exchange encrypted data and run privacy-preserving arithmetic operations on them (e.g., through homomorphic encryption or secure multiparty computation).
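For a flavor of what secure multiparty computation does under the hood, here is a minimal additive secret-sharing sketch in plain Python. This is illustrative only; Duet and its SMPC backend handle this internally over a finite field:

```python
import random

Q = 2**31 - 1  # public modulus; shares live in the ring of integers mod Q

def share(secret, n_parties=3):
    # Split a secret into random shares that sum to it mod Q
    shares = [random.randrange(Q) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % Q)
    return shares

def reconstruct(shares):
    return sum(shares) % Q

a_shares, b_shares = share(20), share(22)
# Each party adds its two shares locally; no party ever sees a or b in the clear
sum_shares = [(x + y) % Q for x, y in zip(a_shares, b_shares)]
print(reconstruct(sum_shares))  # 42
```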
```
# Set up connection_id to use for duet connection
manufacturer2_agent._update_connection(connection_id=connection_id, is_duet_connection=True, reset_duet=True)
# Create duet invitation for city agent
duet = sy.launch_duet(credential_exchanger=manufacturer2_agent)
```
#### 4.2 - Load data to duet store
```
# Verify data store of duet
duet.store.pandas # There should only be an MPC session statement by the City agent
```
Process the data before loading it into the duet store. We use a synthetically created dataset of CO2 emissions per trip across the City agent's city (in this case Berlin, Germany).
```
# Get zipcode data (zipcode data from https://daten.odis-berlin.de/de/dataset/plz/)
df_zipcode = pd.read_csv("data/geo/berlin_zipcodes.csv").rename(columns={"plz":"zipcode"})
valid_zipcodes = list(df_zipcode.zipcode)
df_zipcode.head()
# Get trip data
df_co2 = pd.read_csv("data/trips/data.csv", index_col=0)
df_co2 = df_co2[df_co2.zipcode.isin(valid_zipcodes)]
df_co2["hour"] = df_co2.timestamp.apply(lambda x: int(x[11:13]))
df_co2.head()
```
The trip data is then grouped by zipcode to sum the CO2 emission per hour per zipcode.
```
# Get hourly co2
df_hourly_co2 = df_co2[["zipcode", "hour","co2_grams"]].groupby(["zipcode", "hour"]).sum().reset_index()
df_hourly_co2 = df_hourly_co2.pivot(index=["zipcode"], columns=["hour"])["co2_grams"].replace(np.nan, 0)
# Get matrix of shape (4085, 24)
df_hourly_zipcode = df_zipcode.set_index("zipcode").reindex(columns=list(range(0,24))).replace(np.nan,0)#.reset_index()
# Merge dataframes together
df = df_hourly_zipcode.add(df_hourly_co2, fill_value=0)
print(df.shape)
df.head()
```
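The groupby → pivot → add pattern above can be checked on a tiny synthetic frame (toy zipcodes and hours, not the real trip data):

```python
import numpy as np
import pandas as pd

# Two trips in zipcode 10115 at hour 8, one trip in 10117 at hour 9
trips = pd.DataFrame({"zipcode": [10115, 10115, 10117],
                      "hour": [8, 8, 9],
                      "co2_grams": [100.0, 50.0, 30.0]})

# Sum CO2 per (zipcode, hour), then pivot hours into columns
hourly = trips.groupby(["zipcode", "hour"]).sum().reset_index()
hourly = hourly.pivot(index="zipcode", columns="hour")["co2_grams"].replace(np.nan, 0)

# A zero-filled frame covering every zipcode/hour ensures missing cells become 0
base = pd.DataFrame(0.0, index=[10115, 10117], columns=[8, 9])
result = base.add(hourly, fill_value=0)
print(result)
```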
Then, convert the dataset to a tensor, and upload the tensor with shape (4085 × 24) to the duet data store
```
# Configure tensor
hourly_co2_torch = torch.tensor(df.values)
hourly_co2_torch = hourly_co2_torch.tag("hourly-co2-per-zip_2021-08-19")
hourly_co2_torch = hourly_co2_torch.describe("Total CO2 per Zipcode per Hour on August 19, 2021. Shape: zipcode (10115-14199) x hour (0-23) = 4085 x 24")
# Load tensor to datastore
hourly_co2_torch_pointer = hourly_co2_torch.send(duet, pointable=True)
# Verify datastore
duet.store.pandas
```
#### 4.3 – Authorize City agent to `.reconstruct()` the data
Authorize the City agent to reconstruct the data once it is shared and joined with the other manufacturers' data
```
duet.requests.add_handler(
#name="reconstruct",
action="accept"
)
```
---
### 5 – Terminate Controller
Once you have finished with this notebook, be sure to terminate the controller. This is especially important if your business logic runs across multiple notebooks.
(Note: terminating the controller will not terminate the Duet session.)
```
await agent_controller.terminate()
```
---
<div style="font-size: 25px"><center><b>Break Point 10 / 11 / 12</b></center></div>
<div style="font-size: 50px"><center>🚛 ➡️ 🚗 / 🛵 / 🏙️ </center></div><br>
<center><b>Please proceed to the remaining Manufacturers and run all cells between Steps 4.2 and 4.3 <br> If you have uploaded all data to the manufacturers' datastores, proceed to the City Notebook's Step 4.2</b></center>
## 🔥🔥🔥 You can close this notebook now 🔥🔥🔥
| github_jupyter |
# Logistic Regression with a Neural Network mindset
Welcome to your first (required) programming assignment! You will build a logistic regression classifier to recognize cats. This assignment will step you through how to do this with a Neural Network mindset, and so will also hone your intuitions about deep learning.
**Instructions:**
- Do not use loops (for/while) in your code, unless the instructions explicitly ask you to do so.
**You will learn to:**
- Build the general architecture of a learning algorithm, including:
- Initializing parameters
- Calculating the cost function and its gradient
- Using an optimization algorithm (gradient descent)
- Gather all three functions above into a main model function, in the right order.
## 1 - Packages ##
First, let's run the cell below to import all the packages that you will need during this assignment.
- [numpy](https://www.numpy.org) is the fundamental package for scientific computing with Python.
- [h5py](http://www.h5py.org) is a common package to interact with a dataset that is stored on an H5 file.
- [matplotlib](http://matplotlib.org) is a famous library to plot graphs in Python.
- [PIL](http://www.pythonware.com/products/pil/) and [scipy](https://www.scipy.org/) are used here to test your model with your own picture at the end.
```
import numpy as np
import matplotlib.pyplot as plt
import h5py
import scipy
from PIL import Image
from scipy import ndimage
from lr_utils import load_dataset
%matplotlib inline
```
## 2 - Overview of the Problem set ##
**Problem Statement**: You are given a dataset ("data.h5") containing:
- a training set of m_train images labeled as cat (y=1) or non-cat (y=0)
- a test set of m_test images labeled as cat or non-cat
- each image is of shape (num_px, num_px, 3) where 3 is for the 3 channels (RGB). Thus, each image is square (height = num_px) and (width = num_px).
You will build a simple image-recognition algorithm that can correctly classify pictures as cat or non-cat.
Let's get more familiar with the dataset. Load the data by running the following code.
```
# Loading the data (cat/non-cat)
train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_dataset()
```
We added "_orig" at the end of image datasets (train and test) because we are going to preprocess them. After preprocessing, we will end up with train_set_x and test_set_x (the labels train_set_y and test_set_y don't need any preprocessing).
Each line of your train_set_x_orig and test_set_x_orig is an array representing an image. You can visualize an example by running the following code. Feel free also to change the `index` value and re-run to see other images.
```
# Example of a picture
index = 25
plt.imshow(train_set_x_orig[index])
print ("y = " + str(train_set_y[:, index]) + ", it's a '" + classes[np.squeeze(train_set_y[:, index])].decode("utf-8") + "' picture.")
```
Many software bugs in deep learning come from having matrix/vector dimensions that don't fit. If you can keep your matrix/vector dimensions straight you will go a long way toward eliminating many bugs.
**Exercise:** Find the values for:
- m_train (number of training examples)
- m_test (number of test examples)
- num_px (= height = width of a training image)
Remember that `train_set_x_orig` is a numpy-array of shape (m_train, num_px, num_px, 3). For instance, you can access `m_train` by writing `train_set_x_orig.shape[0]`.
```
### START CODE HERE ### (≈ 3 lines of code)
m_train = train_set_x_orig.shape[0]
m_test = test_set_x_orig.shape[0]
num_px = train_set_x_orig.shape[1]
### END CODE HERE ###
print ("Number of training examples: m_train = " + str(m_train))
print ("Number of testing examples: m_test = " + str(m_test))
print ("Height/Width of each image: num_px = " + str(num_px))
print ("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)")
print ("train_set_x shape: " + str(train_set_x_orig.shape))
print ("train_set_y shape: " + str(train_set_y.shape))
print ("test_set_x shape: " + str(test_set_x_orig.shape))
print ("test_set_y shape: " + str(test_set_y.shape))
```
**Expected Output for m_train, m_test and num_px**:
<table style="width:15%">
<tr>
<td>**m_train**</td>
<td> 209 </td>
</tr>
<tr>
<td>**m_test**</td>
<td> 50 </td>
</tr>
<tr>
<td>**num_px**</td>
<td> 64 </td>
</tr>
</table>
For convenience, you should now reshape images of shape (num_px, num_px, 3) in a numpy-array of shape (num_px $*$ num_px $*$ 3, 1). After this, our training (and test) dataset is a numpy-array where each column represents a flattened image. There should be m_train (respectively m_test) columns.
**Exercise:** Reshape the training and test data sets so that images of size (num_px, num_px, 3) are flattened into single vectors of shape (num\_px $*$ num\_px $*$ 3, 1).
A trick when you want to flatten a matrix X of shape (a,b,c,d) to a matrix X_flatten of shape (b$*$c$*$d, a) is to use:
```python
X_flatten = X.reshape(X.shape[0], -1).T # X.T is the transpose of X
```
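A quick shape check of this trick on a small random array (standalone, separate from the graded cell):

```python
import numpy as np

X = np.random.rand(7, 4, 4, 3)           # 7 "images" of shape (4, 4, 3)
X_flatten = X.reshape(X.shape[0], -1).T  # -> (4*4*3, 7) = (48, 7)
print(X_flatten.shape)
# Each column of X_flatten is one flattened image
```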
```
# Reshape the training and test examples
### START CODE HERE ### (≈ 2 lines of code)
train_set_x_flatten = train_set_x_orig.reshape(train_set_x_orig.shape[0], -1).T
test_set_x_flatten = test_set_x_orig.reshape(test_set_x_orig.shape[0], -1).T
### END CODE HERE ###
print ("train_set_x_flatten shape: " + str(train_set_x_flatten.shape))
print ("train_set_y shape: " + str(train_set_y.shape))
print ("test_set_x_flatten shape: " + str(test_set_x_flatten.shape))
print ("test_set_y shape: " + str(test_set_y.shape))
print ("sanity check after reshaping: " + str(train_set_x_flatten[0:5,0]))
```
**Expected Output**:
<table style="width:35%">
<tr>
<td>**train_set_x_flatten shape**</td>
<td> (12288, 209)</td>
</tr>
<tr>
<td>**train_set_y shape**</td>
<td>(1, 209)</td>
</tr>
<tr>
<td>**test_set_x_flatten shape**</td>
<td>(12288, 50)</td>
</tr>
<tr>
<td>**test_set_y shape**</td>
<td>(1, 50)</td>
</tr>
<tr>
<td>**sanity check after reshaping**</td>
<td>[17 31 56 22 33]</td>
</tr>
</table>
To represent color images, the red, green and blue channels (RGB) must be specified for each pixel, and so the pixel value is actually a vector of three numbers ranging from 0 to 255.
One common preprocessing step in machine learning is to center and standardize your dataset, meaning that you subtract the mean of the whole numpy array from each example, and then divide each example by the standard deviation of the whole numpy array. But for picture datasets, it is simpler and more convenient, and works almost as well, to just divide every row of the dataset by 255 (the maximum value of a pixel channel).
<!-- During the training of your model, you're going to multiply weights and add biases to some initial inputs in order to observe neuron activations. Then you backpropogate with the gradients to train the model. But, it is extremely important for each feature to have a similar range such that our gradients don't explode. You will see that more in detail later in the lectures. !-->
Let's standardize our dataset.
```
train_set_x = train_set_x_flatten/255.
test_set_x = test_set_x_flatten/255.
```
<font color='blue'>
**What you need to remember:**
Common steps for pre-processing a new dataset are:
- Figure out the dimensions and shapes of the problem (m_train, m_test, num_px, ...)
- Reshape the datasets such that each example is now a vector of size (num_px \* num_px \* 3, 1)
- "Standardize" the data
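A quick sanity check of the divide-by-255 scaling on a toy pixel array:

```python
import numpy as np

# uint8 pixel values in [0, 255], as in the raw image data
pixels = np.array([[0, 128, 255], [17, 31, 56]], dtype=np.uint8)
scaled = pixels / 255.
print(scaled.min(), scaled.max())  # all values now lie in [0, 1]
```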
## 3 - General Architecture of the learning algorithm ##
It's time to design a simple algorithm to distinguish cat images from non-cat images.
You will build a Logistic Regression, using a Neural Network mindset. The following Figure explains why **Logistic Regression is actually a very simple Neural Network!**
<img src="images/LogReg_kiank.png" style="width:650px;height:400px;">
**Mathematical expression of the algorithm**:
For one example $x^{(i)}$:
$$z^{(i)} = w^T x^{(i)} + b \tag{1}$$
$$\hat{y}^{(i)} = a^{(i)} = sigmoid(z^{(i)})\tag{2}$$
$$ \mathcal{L}(a^{(i)}, y^{(i)}) = - y^{(i)} \log(a^{(i)}) - (1-y^{(i)} ) \log(1-a^{(i)})\tag{3}$$
The cost is then computed by summing over all training examples:
$$ J = \frac{1}{m} \sum_{i=1}^m \mathcal{L}(a^{(i)}, y^{(i)})\tag{6}$$
**Key steps**:
In this exercise, you will carry out the following steps:
- Initialize the parameters of the model
- Learn the parameters for the model by minimizing the cost
- Use the learned parameters to make predictions (on the test set)
- Analyse the results and conclude
## 4 - Building the parts of our algorithm ##
The main steps for building a Neural Network are:
1. Define the model structure (such as number of input features)
2. Initialize the model's parameters
3. Loop:
- Calculate current loss (forward propagation)
- Calculate current gradient (backward propagation)
- Update parameters (gradient descent)
You often build 1-3 separately and integrate them into one function we call `model()`.
### 4.1 - Helper functions
**Exercise**: Using your code from "Python Basics", implement `sigmoid()`. As you've seen in the figure above, you need to compute $sigmoid( w^T x + b) = \frac{1}{1 + e^{-(w^T x + b)}}$ to make predictions. Use np.exp().
```
# GRADED FUNCTION: sigmoid
def sigmoid(z):
"""
Compute the sigmoid of z
Arguments:
z -- A scalar or numpy array of any size.
Return:
s -- sigmoid(z)
"""
### START CODE HERE ### (≈ 1 line of code)
s = 1/(1 + np.exp(-z))
### END CODE HERE ###
return s
print ("sigmoid([0, 2]) = " + str(sigmoid(np.array([0,2]))))
```
**Expected Output**:
<table>
<tr>
<td>**sigmoid([0, 2])**</td>
<td> [ 0.5 0.88079708]</td>
</tr>
</table>
### 4.2 - Initializing parameters
**Exercise:** Implement parameter initialization in the cell below. You have to initialize w as a vector of zeros. If you don't know what numpy function to use, look up np.zeros() in the Numpy library's documentation.
```
# GRADED FUNCTION: initialize_with_zeros
def initialize_with_zeros(dim):
"""
This function creates a vector of zeros of shape (dim, 1) for w and initializes b to 0.
Argument:
dim -- size of the w vector we want (or number of parameters in this case)
Returns:
w -- initialized vector of shape (dim, 1)
b -- initialized scalar (corresponds to the bias)
"""
### START CODE HERE ### (≈ 1 line of code)
w = np.zeros((dim, 1))
b = 0
### END CODE HERE ###
assert(w.shape == (dim, 1))
assert(isinstance(b, float) or isinstance(b, int))
return w, b
dim = 2
w, b = initialize_with_zeros(dim)
print ("w = " + str(w))
print ("b = " + str(b))
```
**Expected Output**:
<table style="width:15%">
<tr>
<td> ** w ** </td>
<td> [[ 0.]
[ 0.]] </td>
</tr>
<tr>
<td> ** b ** </td>
<td> 0 </td>
</tr>
</table>
For image inputs, w will be of shape (num_px $\times$ num_px $\times$ 3, 1).
### 4.3 - Forward and Backward propagation
Now that your parameters are initialized, you can do the "forward" and "backward" propagation steps for learning the parameters.
**Exercise:** Implement a function `propagate()` that computes the cost function and its gradient.
**Hints**:
Forward Propagation:
- You get X
- You compute $A = \sigma(w^T X + b) = (a^{(1)}, a^{(2)}, ..., a^{(m)})$
- You calculate the cost function: $J = -\frac{1}{m}\sum_{i=1}^{m}y^{(i)}\log(a^{(i)})+(1-y^{(i)})\log(1-a^{(i)})$
Here are the two formulas you will be using:
$$ \frac{\partial J}{\partial w} = \frac{1}{m}X(A-Y)^T\tag{7}$$
$$ \frac{\partial J}{\partial b} = \frac{1}{m} \sum_{i=1}^m (a^{(i)}-y^{(i)})\tag{8}$$
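The analytic gradients (7) and (8) can be sanity-checked against finite differences on a tiny example (a standalone sketch, independent of the graded `propagate` cell; the toy values match the test cell further below):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def cost(w, b, X, Y):
    """Cross-entropy cost J for given parameters."""
    m = X.shape[1]
    A = sigmoid(np.dot(w.T, X) + b)
    return float(-1.0/m * np.sum(Y*np.log(A) + (1-Y)*np.log(1-A)))

w, b = np.array([[1.0], [2.0]]), 2.0
X, Y = np.array([[1.0, 2.0], [3.0, 4.0]]), np.array([[1.0, 0.0]])
m = X.shape[1]
A = sigmoid(np.dot(w.T, X) + b)

dw = 1.0/m * np.dot(X, (A - Y).T)  # formula (7)
db = 1.0/m * np.sum(A - Y)         # formula (8)

# Central finite differences approximate the same derivatives numerically
eps = 1e-6
db_numeric = (cost(w, b + eps, X, Y) - cost(w, b - eps, X, Y)) / (2 * eps)
w_plus, w_minus = w.copy(), w.copy()
w_plus[0, 0] += eps
w_minus[0, 0] -= eps
dw0_numeric = (cost(w_plus, b, X, Y) - cost(w_minus, b, X, Y)) / (2 * eps)
print(abs(db - db_numeric), abs(dw[0, 0] - dw0_numeric))
```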
```
# GRADED FUNCTION: propagate
def propagate(w, b, X, Y):
"""
Implement the cost function and its gradient for the propagation explained above
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of size (num_px * num_px * 3, number of examples)
Y -- true "label" vector (containing 0 if non-cat, 1 if cat) of size (1, number of examples)
Return:
cost -- negative log-likelihood cost for logistic regression
dw -- gradient of the loss with respect to w, thus same shape as w
db -- gradient of the loss with respect to b, thus same shape as b
Tips:
- Write your code step by step for the propagation. np.log(), np.dot()
"""
m = X.shape[1]
# FORWARD PROPAGATION (FROM X TO COST)
### START CODE HERE ### (≈ 2 lines of code)
A = sigmoid(np.dot(w.T, X) + b) # compute activation
cost = -1.0/m * np.sum( Y * np.log(A) + (1 - Y) * np.log(1 - A)) # compute cost
### END CODE HERE ###
# BACKWARD PROPAGATION (TO FIND GRAD)
### START CODE HERE ### (≈ 2 lines of code)
dw = 1.0/m * np.dot(X, (A - Y).T)
db = 1.0/m * np.sum(A - Y)
### END CODE HERE ###
assert(dw.shape == w.shape)
assert(db.dtype == float)
cost = np.squeeze(cost)
assert(cost.shape == ())
grads = {"dw": dw,
"db": db}
return grads, cost
w, b, X, Y = np.array([[1],[2]]), 2, np.array([[1,2],[3,4]]), np.array([[1,0]])
grads, cost = propagate(w, b, X, Y)
print ("dw = " + str(grads["dw"]))
print ("db = " + str(grads["db"]))
print ("cost = " + str(cost))
```
**Expected Output**:
<table style="width:50%">
<tr>
<td> ** dw ** </td>
<td> [[ 0.99993216]
[ 1.99980262]]</td>
</tr>
<tr>
<td> ** db ** </td>
<td> 0.499935230625 </td>
</tr>
<tr>
<td> ** cost ** </td>
<td> 6.000064773192205</td>
</tr>
</table>
### 4.4 - Optimization
- You have initialized your parameters.
- You are also able to compute a cost function and its gradient.
- Now, you want to update the parameters using gradient descent.
**Exercise:** Write down the optimization function. The goal is to learn $w$ and $b$ by minimizing the cost function $J$. For a parameter $\theta$, the update rule is $ \theta = \theta - \alpha \text{ } d\theta$, where $\alpha$ is the learning rate.
```
# GRADED FUNCTION: optimize
def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost = False):
"""
This function optimizes w and b by running a gradient descent algorithm
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of shape (num_px * num_px * 3, number of examples)
Y -- true "label" vector (containing 0 if non-cat, 1 if cat), of shape (1, number of examples)
num_iterations -- number of iterations of the optimization loop
learning_rate -- learning rate of the gradient descent update rule
print_cost -- True to print the loss every 100 steps
Returns:
params -- dictionary containing the weights w and bias b
grads -- dictionary containing the gradients of the weights and bias with respect to the cost function
costs -- list of all the costs computed during the optimization, this will be used to plot the learning curve.
Tips:
You basically need to write down two steps and iterate through them:
1) Calculate the cost and the gradient for the current parameters. Use propagate().
2) Update the parameters using gradient descent rule for w and b.
"""
costs = []
for i in range(num_iterations):
# Cost and gradient calculation (≈ 1-4 lines of code)
### START CODE HERE ###
grads, cost = propagate(w, b, X, Y)
### END CODE HERE ###
# Retrieve derivatives from grads
dw = grads["dw"]
db = grads["db"]
# update rule (≈ 2 lines of code)
### START CODE HERE ###
w = w - learning_rate * dw
b = b - learning_rate * db
### END CODE HERE ###
# Record the costs
if i % 100 == 0:
costs.append(cost)
# Print the cost every 100 iterations
if print_cost and i % 100 == 0:
print ("Cost after iteration %i: %f" %(i, cost))
params = {"w": w,
"b": b}
grads = {"dw": dw,
"db": db}
return params, grads, costs
params, grads, costs = optimize(w, b, X, Y, num_iterations= 100, learning_rate = 0.009, print_cost = False)
print ("w = " + str(params["w"]))
print ("b = " + str(params["b"]))
print ("dw = " + str(grads["dw"]))
print ("db = " + str(grads["db"]))
```
**Expected Output**:
<table style="width:40%">
<tr>
<td> **w** </td>
<td>[[ 0.1124579 ]
[ 0.23106775]] </td>
</tr>
<tr>
<td> **b** </td>
<td> 1.55930492484 </td>
</tr>
<tr>
<td> **dw** </td>
<td> [[ 0.90158428]
[ 1.76250842]] </td>
</tr>
<tr>
<td> **db** </td>
<td> 0.430462071679 </td>
</tr>
</table>
**Exercise:** The previous function will output the learned w and b. We are able to use w and b to predict the labels for a dataset X. Implement the `predict()` function. There are two steps to computing predictions:
1. Calculate $\hat{Y} = A = \sigma(w^T X + b)$
2. Convert the entries of a into 0 (if activation <= 0.5) or 1 (if activation > 0.5), storing the predictions in a vector `Y_prediction`. If you wish, you can use an `if`/`else` statement in a `for` loop (though there is also a way to vectorize this).
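The vectorized alternative hinted at above replaces the loop with a single comparison:

```python
import numpy as np

A = np.array([[0.2, 0.9, 0.5, 0.51]])
Y_prediction = (A > 0.5).astype(float)  # 1.0 where activation > 0.5, else 0.0
print(Y_prediction)
```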
```
# GRADED FUNCTION: predict
def predict(w, b, X):
'''
Predict whether the label is 0 or 1 using learned logistic regression parameters (w, b)
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of size (num_px * num_px * 3, number of examples)
Returns:
Y_prediction -- a numpy array (vector) containing all predictions (0/1) for the examples in X
'''
m = X.shape[1]
Y_prediction = np.zeros((1,m))
w = w.reshape(X.shape[0], 1)
# Compute vector "A" predicting the probabilities of a cat being present in the picture
### START CODE HERE ### (≈ 1 line of code)
A = sigmoid(np.dot(w.T, X) + b)
### END CODE HERE ###
for i in range(A.shape[1]):
# Convert probabilities A[0,i] to actual predictions p[0,i]
### START CODE HERE ### (≈ 4 lines of code)
Y_prediction[0, i] = 1 * (A[0,i] > 0.5)
### END CODE HERE ###
assert(Y_prediction.shape == (1, m))
return Y_prediction
print ("predictions = " + str(predict(w, b, X)))
```
**Expected Output**:
<table style="width:30%">
<tr>
<td>
**predictions**
</td>
<td>
[[ 1. 1.]]
</td>
</tr>
</table>
<font color='blue'>
**What to remember:**
You've implemented several functions that:
- Initialize (w,b)
- Optimize the loss iteratively to learn parameters (w,b):
- computing the cost and its gradient
- updating the parameters using gradient descent
- Use the learned (w,b) to predict the labels for a given set of examples
## 5 - Merge all functions into a model ##
You will now see how the overall model is structured by putting all the building blocks (functions implemented in the previous parts) together, in the right order.
**Exercise:** Implement the model function. Use the following notation:
- Y_prediction for your predictions on the test set
- Y_prediction_train for your predictions on the train set
- w, costs, grads for the outputs of optimize()
```
# GRADED FUNCTION: model
def model(X_train, Y_train, X_test, Y_test, num_iterations = 2000, learning_rate = 0.5, print_cost = False):
"""
Builds the logistic regression model by calling the function you've implemented previously
Arguments:
X_train -- training set represented by a numpy array of shape (num_px * num_px * 3, m_train)
Y_train -- training labels represented by a numpy array (vector) of shape (1, m_train)
X_test -- test set represented by a numpy array of shape (num_px * num_px * 3, m_test)
Y_test -- test labels represented by a numpy array (vector) of shape (1, m_test)
num_iterations -- hyperparameter representing the number of iterations to optimize the parameters
learning_rate -- hyperparameter representing the learning rate used in the update rule of optimize()
print_cost -- Set to true to print the cost every 100 iterations
Returns:
d -- dictionary containing information about the model.
"""
### START CODE HERE ###
# initialize parameters with zeros (≈ 1 line of code)
w, b = np.zeros((X_train.shape[0], 1)), 0
# Gradient descent (≈ 1 line of code)
parameters, grads, costs = optimize(w, b, X_train, Y_train, num_iterations, learning_rate, print_cost)
# Retrieve parameters w and b from dictionary "parameters"
w = parameters["w"]
b = parameters["b"]
# Predict test/train set examples (≈ 2 lines of code)
Y_prediction_test = predict(w, b, X_test)
Y_prediction_train = predict(w, b, X_train)
### END CODE HERE ###
# Print train/test Errors
print("train accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_train - Y_train)) * 100))
print("test accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_test - Y_test)) * 100))
d = {"costs": costs,
"Y_prediction_test": Y_prediction_test,
"Y_prediction_train" : Y_prediction_train,
"w" : w,
"b" : b,
"learning_rate" : learning_rate,
"num_iterations": num_iterations}
return d
```
Run the following cell to train your model.
```
d = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 2000, learning_rate = 0.005, print_cost = True)
```
**Expected Output**:
<table style="width:40%">
<tr>
<td> **Train Accuracy** </td>
<td> 99.04306220095694 % </td>
</tr>
<tr>
<td>**Test Accuracy** </td>
<td> 70.0 % </td>
</tr>
</table>
**Comment**: Training accuracy is close to 100%. This is a good sanity check: your model is working and has high enough capacity to fit the training data. Test accuracy is 70%. That is actually not bad for this simple model, given the small dataset we used and that logistic regression is a linear classifier. But no worries, you'll build an even better classifier next week!
Also, you see that the model is clearly overfitting the training data. Later in this specialization you will learn how to reduce overfitting, for example by using regularization. Using the code below (and changing the `index` variable) you can look at predictions on pictures of the test set.
```
# Example of a picture that was wrongly classified.
index = 1
plt.imshow(test_set_x[:,index].reshape((num_px, num_px, 3)))
print ("y = " + str(test_set_y[0,index]) + ", you predicted that it is a \"" + classes[int(d["Y_prediction_test"][0,index])].decode("utf-8") + "\" picture.")
```
Let's also plot the cost function and the gradients.
```
# Plot learning curve (with costs)
costs = np.squeeze(d['costs'])
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate =" + str(d["learning_rate"]))
plt.show()
```
**Interpretation**:
You can see the cost decreasing. It shows that the parameters are being learned. However, you see that you could train the model even more on the training set. Try to increase the number of iterations in the cell above and rerun the cells. You might see that the training set accuracy goes up, but the test set accuracy goes down. This is called overfitting.
## 6 - Further analysis (optional/ungraded exercise) ##
Congratulations on building your first image classification model. Let's analyze it further, and examine possible choices for the learning rate $\alpha$.
#### Choice of learning rate ####
**Reminder**:
In order for Gradient Descent to work you must choose the learning rate wisely. The learning rate $\alpha$ determines how rapidly we update the parameters. If the learning rate is too large we may "overshoot" the optimal value. Similarly, if it is too small we will need too many iterations to converge to the best values. That's why it is crucial to use a well-tuned learning rate.
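The overshoot/slow-convergence trade-off can be seen on a one-dimensional parabola $J(w) = (w - 3)^2$, whose gradient is $2(w - 3)$ (a toy illustration, not the cat classifier):

```python
def gradient_descent(lr, w0=0.0, steps=50):
    """Run gradient descent on J(w) = (w - 3)^2 starting at w0."""
    w = w0
    for _ in range(steps):
        w -= lr * 2 * (w - 3)  # gradient of (w - 3)^2 is 2(w - 3)
    return w

w_small = gradient_descent(lr=0.1)  # converges toward the minimum at w = 3
w_large = gradient_descent(lr=1.1)  # |w - 3| grows by a factor of 1.2 each step: divergence
print(w_small, w_large)
```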
Let's compare the learning curve of our model with several choices of learning rates. Run the cell below. This should take about 1 minute. Feel free also to try different values than the three we have initialized the `learning_rates` variable to contain, and see what happens.
```
learning_rates = [0.01, 0.001, 0.0001]
models = {}
for i in learning_rates:
print ("learning rate is: " + str(i))
models[str(i)] = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 1500, learning_rate = i, print_cost = False)
print ('\n' + "-------------------------------------------------------" + '\n')
for i in learning_rates:
plt.plot(np.squeeze(models[str(i)]["costs"]), label= str(models[str(i)]["learning_rate"]))
plt.ylabel('cost')
plt.xlabel('iterations')
legend = plt.legend(loc='upper center', shadow=True)
frame = legend.get_frame()
frame.set_facecolor('0.90')
plt.show()
```
**Interpretation**:
- Different learning rates give different costs and thus different predictions results.
- If the learning rate is too large (0.01), the cost may oscillate up and down. It may even diverge (though in this example, using 0.01 still eventually ends up at a good value for the cost).
- A lower cost doesn't mean a better model. You have to check if there is possibly overfitting. It happens when the training accuracy is a lot higher than the test accuracy.
- In deep learning, we usually recommend that you:
- Choose the learning rate that better minimizes the cost function.
- If your model overfits, use other techniques to reduce overfitting. (We'll talk about this in later videos.)
## 7 - Test with your own image (optional/ungraded exercise) ##
Congratulations on finishing this assignment. You can use your own image and see the output of your model. To do that:
1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub.
2. Add your image to this Jupyter Notebook's directory, in the "images" folder
3. Change your image's name in the following code
4. Run the code and check if the algorithm is right (1 = cat, 0 = non-cat)!
```
## START CODE HERE ## (PUT YOUR IMAGE NAME)
my_image = "CAT.png" # change this to the name of your image file
## END CODE HERE ##
# We preprocess the image to fit your algorithm.
fname = "images/" + my_image
image = np.array(Image.open(fname).convert("RGB"))  # scipy's ndimage.imread was removed in SciPy >= 1.2; PIL is a drop-in replacement
my_image = np.array(Image.fromarray(image).resize((num_px, num_px))).reshape((1, num_px*num_px*3)).T  # replaces the removed scipy.misc.imresize
my_predicted_image = predict(d["w"], d["b"], my_image)
plt.imshow(image)
print("y = " + str(np.squeeze(my_predicted_image)) + ", your algorithm predicts a \"" + classes[int(np.squeeze(my_predicted_image)),].decode("utf-8") + "\" picture.")
```
<font color='blue'>
**What to remember from this assignment:**
1. Preprocessing the dataset is important.
2. You implemented each function separately: initialize(), propagate(), optimize(). Then you built a model().
3. Tuning the learning rate (which is an example of a "hyperparameter") can make a big difference to the algorithm. You will see more examples of this later in this course!
Finally, if you'd like, we invite you to try different things on this Notebook. Make sure you submit before trying anything. Once you submit, things you can play with include:
- Play with the learning rate and the number of iterations
- Try different initialization methods and compare the results
- Test other preprocessings (center the data, or divide each row by its standard deviation)
Bibliography:
- http://www.wildml.com/2015/09/implementing-a-neural-network-from-scratch/
- https://stats.stackexchange.com/questions/211436/why-do-we-normalize-images-by-subtracting-the-datasets-image-mean-and-not-the-c
| github_jupyter |
# Hierarchical Clustering
### Validation with Dendrogram and Heatmap
Created by Andres Segura-Tinoco
Created on Apr 20, 2021
```
# Import libraries
import numpy as np
from sklearn import datasets
from sklearn.cluster import AgglomerativeClustering
from scipy.cluster.hierarchy import dendrogram, linkage
import scipy.cluster.hierarchy as sch
# Plot libraries
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
from mpl_toolkits.axes_grid1 import make_axes_locatable
```
## <span>1. Load Iris data</span>
```
# Load the IRIS dataset
iris = datasets.load_iris()
# Preprocessing data
X = iris.data
y = iris.target
n_data = len(X)
delta = 0.3
x_min, x_max = X[:, 0].min() - delta, X[:, 0].max() + delta
y_min, y_max = X[:, 1].min() - delta, X[:, 1].max() + delta
# Plot the training points
fig, ax = plt.subplots(figsize=(8, 8))
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Set1, s=20)
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
plt.xlabel('Sepal length')
plt.ylabel('Sepal width')
plt.title("IRIS Sepal Data", fontsize=16)
op1 = mpatches.Patch(color='#E41A1C', label='Setosa')
op2 = mpatches.Patch(color='#FF8000', label='Versicolor')
op3 = mpatches.Patch(color='#979797', label='Virginica')
plt.legend(handles=[op1, op2, op3], loc='best')
ax.grid()
plt.show()
```
## <span>2. Hierarchical Agglomerative Clustering</span>
Hierarchical clustering is a method of cluster analysis which seeks to build a hierarchy of clusters <a href="https://en.wikipedia.org/wiki/Hierarchical_clustering" target="_blank">[wikipedia]</a>.
### 2.1. Dendrogram to Select Optimal Clusters
```
# Calculate average linkage
linkage_method = 'average'
linked = linkage(X, linkage_method)
labelList = range(1, n_data+1)
# Plot Dendrogram
plt.figure(figsize=(16, 8))
dendrogram(linked, orientation='top', labels=labelList, distance_sort='descending', show_leaf_counts=True)
plt.title("Dendrogram with " + linkage_method.title() + " Linkage", fontsize=16)
plt.show()
```
The dendrogram shows that, with average linkage, the optimal number of clusters is 2.
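Cutting a dendrogram at k clusters can also be done directly on the linkage matrix with `scipy`'s `fcluster` (a small standalone check on toy data, separate from the `AgglomerativeClustering` approach used next):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Two well-separated blobs of three points each
pts = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0],
                [10.0, 10.0], [10.0, 11.0], [11.0, 10.0]])
Z = linkage(pts, method='average')
labels = fcluster(Z, t=2, criterion='maxclust')  # cut the tree into 2 clusters
print(labels)
```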
### 2.2. Hierarchical Clustering with Optimal k
```
# Optimal number of clusters
k = 2
# Apply Hierarchical Agglomerative Clustering
hac = AgglomerativeClustering(n_clusters=k, affinity='euclidean', linkage=linkage_method)  # note: scikit-learn >= 1.4 uses metric= instead of affinity=
cluster = hac.fit_predict(X)
cluster
# Plotting clustering
fig, ax = plt.subplots(figsize=(8, 8))
colormap = np.array(["#d62728", "#2ca02c"])
plt.scatter(X[:, 0], X[:, 1], c=colormap[cluster], s=20)
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
plt.xlabel('Sepal length')
plt.ylabel('Sepal width')
plt.title("Hierarchical Clustering", fontsize=16)
op1 = mpatches.Patch(color=colormap[1], label='Cluster 1')
op2 = mpatches.Patch(color=colormap[0], label='Cluster 2')
plt.legend(handles=[op1, op2], loc='best')
ax.grid()
plt.show()
```
### 2.3. Hierarchical Clustering Heatmap
```
# Calculate distance matrix with Euclidean distance
D = np.zeros([n_data, n_data])
for i in range(n_data):
for j in range(n_data):
D[i, j] = np.linalg.norm(X[i] - X[j])
# Dendrogram that comes to the left
fig = plt.figure(figsize=(14, 14))
# Add left axes with hierarchical cluster
ax1 = fig.add_axes([0.09, 0.1, 0.2, 0.6])
Y = sch.linkage(X, method='single')
Z1 = sch.dendrogram(Y, orientation='left')
ax1.set_xticks([])
ax1.set_yticks([])
# Add top axes with hierarchical cluster
ax2 = fig.add_axes([0.3, 0.71, 0.58, 0.2])
Y = sch.linkage(X, method=linkage_method)
Z2 = sch.dendrogram(Y)
ax2.set_xticks([])
ax2.set_yticks([])
# Main heat-map
axmatrix = fig.add_axes([0.3, 0.1, 0.6, 0.6])
idx1 = Z1['leaves']
idx2 = Z2['leaves']
D = D[idx1, :]
D = D[:, idx2]
# The actual heat-map
im = axmatrix.matshow(D, aspect='auto', origin='lower', cmap="PuBuGn")
divider = make_axes_locatable(axmatrix)
cax = divider.append_axes("right", size="3%", pad=0.05)
plt.colorbar(im, cax=cax)
axmatrix.set_xticks([])
axmatrix.set_yticks([])
plt.show()
```
<hr>
You can contact me on <a href="https://twitter.com/SeguraAndres7" target="_blank">Twitter</a> | <a href="https://github.com/ansegura7/" target="_blank">GitHub</a> | <a href="https://www.linkedin.com/in/andres-segura-tinoco/" target="_blank">LinkedIn</a>
| github_jupyter |
```
%matplotlib inline
# ignore warnings
import warnings
warnings.filterwarnings('ignore')
from joblib import load, dump
from ruamel.yaml import YAML
from matplotlib import pyplot as plt
import seaborn as sns
import numpy as np
import pandas as pd
import h5py
import periodictable as pt
from palettable.cartocolors.sequential import SunsetDark_7
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import KFold, GridSearchCV, ShuffleSplit, train_test_split, learning_curve
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.gaussian_process import GaussianProcessRegressor, kernels
from sklearn.neighbors import KNeighborsRegressor
from sklearn.linear_model import LinearRegression, BayesianRidge, Ridge
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.utils import resample
from umda import paths
from umda.data import load_data, load_pipeline
from umda import training
plt.style.use("publication.mpl")
seed = 1215677
normalize = True
mask = False
state = np.random.default_rng(seed)
full_X, full_cluster_ids, tmc1_df = load_data(exclude_hydrogen=True)
embedder = load_pipeline()
tmc1_X = np.vstack([embedder.vectorize(smi) for smi in tmc1_df["SMILES"]])
tmc1_y = np.log10(tmc1_df["Column density (cm^-2)"].to_numpy())
boot_X, boot_y = training.get_bootstrap_samples((tmc1_X, tmc1_y), seed, n_samples=400, replace=True, noise_scale=0.5)
with open("model_hparams.yml") as read_file:
hparams = YAML().load(read_file)
# gp_kernel = kernels.ConstantKernel() * \
# kernels.RBF(3.0, (1e-1, 10.0)) + \
# kernels.RationalQuadratic(200.0, 20.0, alpha_bounds=(1e-3, 5e2), length_scale_bounds=(50.0, 1e4)) * \
# kernels.ConstantKernel() + kernels.ConstantKernel()
gp_kernel = training.get_gp_kernel()
base_models = {
"linear_regression": LinearRegression(),
"ridge": Ridge(),
"br": BayesianRidge(lambda_1=1e4, lambda_2=1e4, tol=1e-4, alpha_1=9.8, alpha_2=0.02, alpha_init=1e-5, lambda_init=0.474),
"svr": SVR(),
"knn": KNeighborsRegressor(),
"rfr": RandomForestRegressor(random_state=seed),
"gbr": GradientBoostingRegressor(random_state=seed),
"gpr": GaussianProcessRegressor(
kernel=gp_kernel, random_state=seed
)
}
models = {key: training.compose_model(value, normalize) for key, value in base_models.items()}
```
The code below performs a grid search over hyperparameters, using `ShuffleSplit` cross-validation to identify the set of hyperparameters with the lowest test error. The model is then refit to the full dataset: this is viable because the bootstrap dataset minimizes the degree of overfitting, which is confirmed by the learning curves later on, which use the models with optimized hyperparameters.
```
# generalized workflow
model_results = dict()
best_models = dict()
cv_results = dict()
# do a final train/test split to prevent overfitting
# train_X, test_X, train_y, test_y = train_test_split(boot_X, boot_y, test_size=0.2, random_state=seed, shuffle=True)
(train_X, train_y), (test_X, test_y) = training.get_molecule_split_bootstrap(
(tmc1_X, tmc1_y), seed=seed, n_samples=400, replace=True,
noise_scale=0.5, molecule_split=0.2, test_size=0.2
)
for name in models.keys():
model = models.get(name)
print(f"Working on {name} now.")
hparam = hparams.get(name)
# only do grid search for the models with hparam specification
if hparam is not None:
        # grid-search on the training split so test_X/test_y remain held out
        cv_grid = training.grid_cv_search((train_X, train_y), model, hparam, seed, verbose=1,
n_splits=10, n_jobs=8, scoring="neg_mean_squared_error", refit=True
)
model = cv_grid.best_estimator_
result = model.fit(train_X, train_y)
pred_Y = result.predict(test_X)
mse = mean_squared_error(test_y, pred_Y)
print(f"Model: {name} best CV score: {cv_grid.best_score_:.4e}, split test score: {mse:.2f}")
best_models[name] = result
cv_results[name] = cv_grid
```
Export the cross-validation reports, which should be inspected to see which hyperparameters affect the loss the most (and which don't).
```
# export the cross-validation results
for name in models.keys():
df = pd.DataFrame(cv_results[name].cv_results_)
keys = ["mean_test_score", "rank_test_score"]
keys.extend([key for key in df.keys() if "param_" in key])
df = df[keys]
# sort and reset the indices
df.sort_values("rank_test_score", ascending=True, inplace=True)
df.reset_index(inplace=True, drop=True)
# dump to CSV file
if normalize:
flags = "norm"
else:
flags = "unnorm"
if mask:
flags += "_mask"
else:
flags += "_nomask"
df.to_csv(f"outputs/grid_search/{name}_{flags}.csv", index=False)
```
## Exporting the hyperparameter optimization results
This is mostly for final reporting, where we write out the best hyperparameters for each model as a YAML dictionary, with keys being the models and the values being the best hyperparameters for that model (within the search space).
```
# collect up the dictionaries for best parameters
best_param_dict = dict()
for name in models.keys():
best_param_dict[name] = cv_results[name].best_params_
with open("outputs/grid_search/optimized_hparams.yml", "w+") as write_file:
YAML().dump(best_param_dict, write_file)
```
## Dumping the best models to pickle
```
dump(best_models, "outputs/grid_search/best_models.pkl")
best_param_dict
```
## Making an overview plot
First compute the scatter point sizes as a function of molecular weight.
```
formulae = tmc1_df["Formula"].str.replace("+", "", regex=False).str.replace("-", "", regex=False).to_list()
formulae = [pt.formula(formula) for formula in formulae]
def calc_mass(formula):
weight = 0
for atom, number in formula.atoms.items():
weight += atom.mass * number
return weight
weights = np.array(list(map(calc_mass, formulae)))
# these are the extremely poor performing molecules by LR
tmc1_df.loc[np.abs(best_models.get("gpr").predict(tmc1_X)) <= 10.]
colors = SunsetDark_7.mpl_colormap(np.linspace(0., 1., 8))
num_models = len(models)
formatted_names = {key: key.upper() for key in models.keys()}
formatted_names["linear_regression"] = "LR"
formatted_names["ridge"] = "RR"
formatted_names["knn"] = "$k$NN"
fig, axarray = plt.subplots(2, num_models // 2, figsize=(7, 4), sharex=True, sharey="row")
for model_name, ax, color in zip(models.keys(), axarray.flatten(), colors):
model = best_models.get(model_name)
# draw the ideal curve
ax.plot(np.arange(10, 16), np.arange(10, 16), ls="--", alpha=0.4, color="k")
# ax.hlines(0., 10., 15., ls="--", alpha=0.3)
# for probabilistic models, get the uncertainty too
if model_name in ["gpr", "br"]:
pred_y, pred_std = model.predict(tmc1_X, return_std=True)
mask = np.ones_like(tmc1_y, dtype=bool)
ax.errorbar(tmc1_y, pred_y, yerr=pred_std, fmt="none", elinewidth=0.3, ecolor=color)
ax.scatter(tmc1_y, pred_y, s=0.3 * weights[mask], lw=0.3, edgecolors="w", alpha=0.8, c=color[None,:])
# ax.errorbar(tmc1_y, np.log10(pred_y / tmc1_y), yerr=np.log10(pred_std / tmc1_y), fmt="none", elinewidth=0.3, ecolor=color)
# ax.scatter(tmc1_y, np.log10(pred_y / tmc1_y), lw=0.3, edgecolors="w", alpha=0.8, c=color[None,:])
elif model_name == "linear_regression":
pred_y = model.predict(tmc1_X)
# filter the insane outliers
mask = np.abs(pred_y) <= 25.
# ax.scatter(tmc1_y[mask], np.log10(pred_y[mask] / tmc1_y[mask]), s=0.3 * weights[mask], lw=0.3, edgecolors="w", alpha=0.8, c=color[None,:])
ax.scatter(tmc1_y[mask], pred_y[mask], s=0.3 * weights[mask], lw=0.3, edgecolors="w", alpha=0.8, c=color[None,:])
else:
pred_y = model.predict(tmc1_X)
mask = np.ones_like(tmc1_y, dtype=bool)
# ax.scatter(tmc1_y, np.log10(pred_y / tmc1_y), s=0.3 * weights[mask], lw=0.3, edgecolors="w", alpha=0.8, c=color[None,:])
ax.scatter(tmc1_y, pred_y, s=0.3 * weights[mask], lw=0.3, edgecolors="w", alpha=0.8, c=color[None,:])
r2 = r2_score(tmc1_y[mask], pred_y[mask])
mse = mean_squared_error(tmc1_y[mask], pred_y[mask])
# ax.set(xlim=[9.5, 15.5], ylim=(9.5, 15.5))
real_name = formatted_names.get(model_name)
ax.set_title(f"{real_name}", loc="left")
# if model_name != "linear_regression":
ax.text(0.05, 0.18, f"MSE: {mse:.2f}", fontsize="x-small", transform=ax.transAxes)
ax.text(0.05, 0.1, f"$R^2$: {r2:.2f}", fontsize="x-small", transform=ax.transAxes)
# ax.set(xlim=[9.5, 15.5], ylim=(9.5, 15.5))
# ax.set(ylim=[-0.13, 0.13])
fig.supxlabel("Obs. column density ($\log_{10}$ cm$^{-2}$)", fontsize="x-small")
fig.supylabel("Pred. column density ($\log_{10}$ cm$^{-2}$)", fontsize="x-small")
fig.tight_layout()
# fig.savefig("outputs/grid_search/regression_plot.pdf", dpi=150)
```
## Learning curves for every model
This uses the models with their optimized hyperparameters.
```
# use the boot strap data from above
# boot_X, boot_y = training.get_bootstrap_samples((tmc1_X, tmc1_y), seed, n_samples=700, replace=True, noise_scale=0.5)
num_models = len(models)
formatted_names = {key: key.upper() for key in models.keys()}
formatted_names["linear_regression"] = "LR"
formatted_names["ridge"] = "RR"
train_sizes = (np.linspace(0.1, 0.95, 6) * boot_y.size).astype(int)
fig, axarray = plt.subplots(2, num_models // 2, figsize=(7, 4), sharex=True)
for model_name, ax, color in zip(models.keys(), axarray.flatten(), colors):
model = best_models.get(model_name)
print(f"Running model {model_name}")
train_sizes, train_scores, valid_scores = learning_curve(
model, boot_X, boot_y, train_sizes=train_sizes,
cv=20, scoring="neg_mean_squared_error"
)
mean_train, std_train = np.abs(train_scores.mean(axis=1)), train_scores.std(axis=1)
mean_val, std_val = np.abs(valid_scores.mean(axis=1)), valid_scores.std(axis=1)
for target, (mean, std), marker in zip(["Training", "CV"], [(mean_train, std_train), (mean_val, std_val)], ["o", "s"]):
ax.plot(train_sizes, mean, marker=marker, label=target, markersize=5., color=color[None,:], mec="w", mew=0.5)
ax.fill_between(train_sizes, mean + std, mean - std, alpha=0.4, color=color)
ax.hlines([5e-2, 1e-1, 5e-1], 0., 400, ls="--", alpha=0.3)
if model_name != "linear_regression":
ax.set(ylim=[5e-2, 5.])
ax.set(yscale="log")
ax.set_title(formatted_names.get(model_name), loc="left")
fig.supxlabel("Training set size", fontsize="x-small")
fig.supylabel("Mean squared error ($\log_{10}$ cm$^{-2}$)", fontsize="x-small")
fig.tight_layout()
fig.savefig("outputs/grid_search/learning_curve.pdf", dpi=150)
```
## Data importance estimation
```
def bootstrap_importance_estimation(estimator, data, seed: int, n_splits: int = 500):
X, y = data
splitter = ShuffleSplit(n_splits, test_size=0.2, random_state=seed)
log = list()
weights = np.ones((n_splits, y.size))
test_errors = list()
for split_index, (train_index, test_index) in enumerate(splitter.split(X, y)):
train_X, test_X, train_y, test_y = X[train_index], X[test_index], y[train_index], y[test_index]
result = estimator.fit(train_X, train_y)
# compute the mean squared error
train_error = mean_squared_error(train_y, result.predict(train_X))
test_error = mean_squared_error(test_y, result.predict(test_X))
log.append(
{"train_error": train_error, "test_error": test_error, "train_index": train_index, "test_index": test_index}
)
test_errors.append(test_error)
weights[split_index, test_index] = 0.
# reshape so we can do matrix multiplication
test_errors = np.asarray(test_errors)[:,None]
molecule_weights = (weights * test_errors).std(axis=0)
molecule_weights /= np.min(molecule_weights)
return log, molecule_weights
bootstrap_log, weights = bootstrap_importance_estimation(best_models["ridge"], (tmc1_X, tmc1_y), seed, n_splits=5000)
from sklearn.utils import resample
resample(tmc1_X, tmc1_y, n_samples=500, random_state=seed)[0]
```
| github_jupyter |
### Import Packages
```
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import numpy as np
import pandas as pd
from mpl_toolkits.mplot3d import Axes3D
import random
```
### Import Data
```
npids = -1
data = pd.read_csv('nlst_407_prsn_20180510.csv').sample(frac=1)
data_canc = data[data['can_scr']>0][:-1]
data_no_canc = data[data['can_scr']==0][:len(data_canc)]
data = pd.concat([data_canc, data_no_canc])
plt.hist(data['acrin_drinknum_form'])
```
### Set parameters
```
age = data['age']
cancer = data['can_scr'].where(data['can_scr']<=0,2)-1
cancyr = (data['cancyr']+1).fillna(0).astype(int)
```
### Define axis
```
x = np.arange(age.max()+7)
y = np.arange(len(data))
tags = ['smoke', 'tmp']
z = np.arange(len(tags))
```
### Smoke data
```
start_smoke = data['smokeage'].fillna(0).astype(int)
quit_smoke = data['age_quit'].fillna(age).astype(int)
mag_smoke = data['smokeday']/data['smokeday'].max()
smoke = np.zeros((len(y), len(x)))
for pid in range(len(data)):
smoke[pid, start_smoke.values[pid]:quit_smoke.values[pid]+1] = mag_smoke.values[pid]*cancer.values[pid]
smoke[pid, age.values[pid]+1:age.values[pid]+cancyr.values[pid]+1] = np.nan
```
### Tmp data
```
start_tmp = data['smokeage'].fillna(0).astype(int)
quit_tmp = data['age_quit'].fillna(age).astype(int)
mag_tmp = data['smokeday']/data['smokeday'].max()
tmp = np.zeros((len(y), len(x)))
for pid in range(len(data)):
tmp[pid, start_tmp.values[pid]:quit_tmp.values[pid]+1] = mag_tmp.values[pid]*cancer.values[pid]
tmp[pid, age.values[pid]+1:age.values[pid]+cancyr.values[pid]+1] = np.nan
```
### Stack
```
everything = np.stack((smoke,tmp),2)
```
### 2D plot
```
cmap = cm.PRGn_r
cmap.set_bad('r',1.)
fig, axes = plt.subplots(1,len(z), sharey=True, figsize=(20,10))
for i, (ax, tag) in enumerate(zip(axes, tags)):
pcol = ax.pcolormesh(x, y, everything[:,:,i],
cmap=cmap, vmin=-1, vmax=1)
cbar = plt.colorbar(pcol, ax=ax, ticks=[-1, 0, 1], label='Standardized magnitude')
    cbar.ax.set_yticklabels(['-1 - Healthy', '0', '1 - Cancer'])
ax.set_title(tag)
ax.set_xticks(x[::5])
ax.set_yticks([])
# And a corresponding grid
ax.grid(which='both')
# Or if you want different settings for the grids:
#ax.grid(which='minor', alpha=0.2)
# ax.grid()
```
### 3D plot
```
if False:  # set to True to render the interactive 3D plot
    %matplotlib notebook
    fig = plt.figure()
    ax = fig.add_subplot(111, projection='3d')
    # Make the 3D grid
    X, Y, Z = np.meshgrid(x, y, z)
    cube = ax.scatter(X, Y, Z, zdir='z', c=np.ravel(everything), cmap=cmap, vmin=-1, vmax=1)
    cbar = fig.colorbar(cube, ticks=[-1, 0, 1], label='Standardized magnitude')  # Add a color bar
    cbar.ax.set_yticklabels(['-1 - Healthy', '0', '1 - Cancer'])
    ax.set_xlabel('Age')
    ax.set_ylabel('Person')
    ax.set_zticks(z)
    ax.set_zticklabels(tags)
    plt.show()
```
| github_jupyter |
## Exercise
### The F1-Score is the special case of the F-Score with β = 1, meaning Precision and Recall carry equal weight
Referring to the F1-score [formula](https://en.wikipedia.org/wiki/F1_score) and the F2-score formula shown in the image below (in general, F_β = (1 + β²)·P·R / (β²·P + R)), try to write a function that computes the F2-Score

HINT: the precision and recall functions in sklearn.metrics can help
```
import numpy as np
y_pred = np.random.randint(2, size=100) # generate 100 random 0/1 predictions
y_true = np.random.randint(2, size=100) # generate 100 random 0/1 ground-truth labels
y_pred
def f2_score(precision,recall):
return 5 * (precision*recall)/(4*precision+recall)
from sklearn import metrics
f1 = metrics.f1_score(y_true, y_pred)
precision = metrics.precision_score(y_true, y_pred)
recall = metrics.recall_score(y_true, y_pred)
f2 = f2_score(precision,recall)
print('F1 Score:',f1)
print('F2 Score:',f2)
print('Precision:',precision)
print('Recall:',recall)
```
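As a cross-check on the hand-rolled function, scikit-learn also ships a generalized `fbeta_score`; for β = 2 it should match `f2_score` above (a sketch with fresh random labels, so it is independent of the cell's notebook state):

```python
import numpy as np
from sklearn.metrics import fbeta_score, precision_score, recall_score

def f2_score(precision, recall):
    # F-beta with beta = 2: (1 + 2^2) * P * R / (2^2 * P + R)
    return 5 * (precision * recall) / (4 * precision + recall)

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=100)
y_pred = rng.integers(0, 2, size=100)

manual = f2_score(precision_score(y_true, y_pred), recall_score(y_true, y_pred))
library = fbeta_score(y_true, y_pred, beta=2)
print(np.isclose(manual, library))  # True
```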
| github_jupyter |
## PSRC Linear Regression
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
# seaborn.linearmodels (corrplot, symmatplot) was removed in modern seaborn; sns.heatmap is used below instead
#from sklearn.linear_model import LogisticRegression
#from sklearn.utils import shuffle
from sklearn.model_selection import GridSearchCV
#from sklearn.metrics import log_loss
#from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold  # sklearn.cross_validation was removed in scikit-learn 0.20
from sklearn.metrics import mean_squared_error, r2_score
from datetime import datetime
#https://jmetzen.github.io/2015-01-29/ml_advice.html
# Load the data sets
data_dir = './data/'
#print (df_Trip_Household_Merged)
#print(df_Trip_Household_Merged.dtypes)
```
## Pre-processing
```
df_Trip_Household_Merged = pd.read_csv(data_dir + 'Trip_Household_Merged.csv')
# Filter trips that end outside seattle
df_Trip_Household_Merged = df_Trip_Household_Merged[df_Trip_Household_Merged['uv_dest'] != "Outside Seattle"]
# drop data that are not in urban villages
df_Trip_Household_Merged = df_Trip_Household_Merged[(df_Trip_Household_Merged['uv_origin'] != "Outside Seattle") &
(df_Trip_Household_Merged['uv_origin'] != "Outside Villages")]
# Create dummy variables for departure time period
depart_period_dummies = pd.get_dummies(df_Trip_Household_Merged['depart_period'])
df_Trip_Household_Merged['weekday_am'] = depart_period_dummies['Weekday AM']
df_Trip_Household_Merged['weekday_mid'] = depart_period_dummies['Weekday Mid']
df_Trip_Household_Merged['weekday_pm'] = depart_period_dummies['Weekday PM']
df_Trip_Household_Merged['weekday_late'] = depart_period_dummies['Late Night']
# Create dummy variables for residency duration, with just two categories, under and over 5 years (see codebook)
df_Trip_Household_Merged['residency_under5'] = np.where((df_Trip_Household_Merged['res_dur']<=3), 1, 0)
df_Trip_Household_Merged['residency_over5'] = np.where((df_Trip_Household_Merged['res_dur']>3), 1, 0)
# Create dummy variables for home ownership
df_Trip_Household_Merged['hh_rent'] = np.where((df_Trip_Household_Merged['rent_own']==2), 1, 0)
df_Trip_Household_Merged['hh_own'] = np.where((df_Trip_Household_Merged['rent_own']==1), 1, 0)
# Create dummy variables for income
df_Trip_Household_Merged['income_under25'] = np.where((df_Trip_Household_Merged['hhincome_broad']==1), 1, 0)
df_Trip_Household_Merged['income_25_50'] = np.where((df_Trip_Household_Merged['hhincome_broad']==2), 1, 0)
df_Trip_Household_Merged['income_50_75'] = np.where((df_Trip_Household_Merged['hhincome_broad']==3), 1, 0)
df_Trip_Household_Merged['income_75_100'] = np.where((df_Trip_Household_Merged['hhincome_broad']==4), 1, 0)
df_Trip_Household_Merged['income_over100'] = np.where((df_Trip_Household_Merged['hhincome_broad']==5), 1, 0)
##aggregate by geography of origin
geography = ['uv_origin']
trainFeatures = ['google_duration', 'trip_path_distance', 'hhsize', 'hh_rent','hh_own',
'depart_time','hhincome_broad', 'rent_own', 'numchildren', 'vehicle_count',
'weekday_am', 'weekday_mid','weekday_pm','weekday_late','residency_under5','residency_over5',
                'income_under25', 'income_25_50', 'income_50_75', 'income_75_100','income_over100','drive_alone']
aggDict = {}
for feature in trainFeatures:
aggDict[feature]=[sum]
#print (aggDict)
# Apply trip weights prior to aggregation
for feature in trainFeatures:
df_Trip_Household_Merged[feature] = df_Trip_Household_Merged['trip_wt_final'] * df_Trip_Household_Merged[feature]
aggDict['trip_wt_final'] =[sum]
data = df_Trip_Household_Merged.groupby(geography, as_index=False).agg(aggDict)
data.columns = data.columns.get_level_values(0)
# Reapply column weights to obtain averages for each location
for feature in trainFeatures:
data[feature] = data[feature] / data['trip_wt_final']
# Create label for geographies under the citywide mean of 30%
data['drive_alone_under30'] = np.where((data['drive_alone'] <= .3), 1, 0)
# drop outlier
#data = data[data['drive_alone'] < .3]
data = data[data['trip_wt_final'] > 1000]
#print(data.columns)
print (data)
_ = sns.pairplot(data[:50], vars=['weekday_late','income_75_100' , 'trip_path_distance', 'residency_under5', 'hhsize'], hue="drive_alone_under30", size=2)
# Compute the correlation matrix
corr = data.corr()
# Generate a mask for the upper triangle
mask = np.zeros_like(corr, dtype=bool)
mask[np.triu_indices_from(mask)] = True
# Set up the matplotlib figure
f, ax = plt.subplots(figsize=(11, 9))
# Generate a custom diverging colormap
cmap = sns.diverging_palette(220, 10, as_cmap=True)
# Draw the heatmap with the mask and correct aspect ratio
sns.heatmap(corr, mask=mask, cmap=cmap, vmax=.3, center=0,
square=True, linewidths=.5, cbar_kws={"shrink": .5})
```
# Machine Learning Pipeline
Multivariate cross validation with a linear regression model
```
from sklearn.linear_model import LinearRegression
from sklearn.cross_validation import KFold
from sklearn.metrics import mean_squared_error
features = ['google_duration', 'trip_path_distance', 'hhsize', 'residency_under5',
'numchildren', 'vehicle_count','hh_own',
'weekday_am', 'weekday_mid','weekday_pm','weekday_late',
'income_under25', 'income_25_50','income_75_100','income_over100']
feature_list = features
target = ["drive_alone"]
def train_and_cross_val(cols, target):
# Split into features & target.
features = data[cols]
target = data[target]
variance_values = []
mse_values = []
# kFold instance
    kf = KFold(n_splits=5, shuffle=True, random_state=3)
    # Iterate through each fold
    for train_index, test_index in kf.split(features):
#Training and test sets
X_train, X_test = features.iloc[train_index], features.iloc[test_index]
y_train, y_test = target.iloc[train_index], target.iloc[test_index]
# Instantiate the model
model = LinearRegression()
# Fit model to features and target
model.fit(X_train,y_train)
# Make predictions
predictions = model.predict(X_test)
        # Calculate mse and variance for this fold
mse = mean_squared_error(y_test, predictions)
variance = r2_score(y_test, predictions)
        # Append to lists to calculate the overall
        # average mse and variance values.
variance_values.append(variance)
mse_values.append(mse)
# Compute average mse and variance values.
avg_mse = np.mean(mse_values)
avg_var = np.mean(variance_values)
    print("avg mse: " + str(avg_mse), " avg variance: " + str(avg_var))
return(avg_mse, avg_var)
def train_and_predict(cols, target):
# Split into features & target.
features = data[cols]
target = data[target]
# Instantiate the model
model = LinearRegression()
# fit the model
model.fit(features,target)
# Make predictions
predictions = model.predict(features)
mse = mean_squared_error(target, predictions)
variance = r2_score(target, predictions)
print (model.intercept_, model.coef_)
# The mean squared error
print("Mean squared error: %.3f"
% mean_squared_error(target, predictions))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.3f' % r2_score(target, predictions))
fig, ax = plt.subplots()
ax.scatter(target, predictions, edgecolors=(0, 0, 0))
ax.plot([target.min(), target.max()], [target.min(), target.max()], 'k--', lw=3)
ax.set_xlabel('Measured')
ax.set_ylabel('Predicted')
plt.show()
df_results = pd.DataFrame(feature_list)
#df_results['feature'] = features
df_results['coefficient'] = model.coef_[0]
print (df_results)
train_and_predict(features, target)
train_and_cross_val(features, target)
```
# Test for Overfitting
At the heart of understanding overfitting are bias and variance, the two observable sources of error in a model that we can indirectly control. Bias describes error that results from bad assumptions in the learning algorithm. Variance describes error that occurs because of the variability of a model's predicted values. Ideally we want both low bias and low variance, but in reality there is always a tradeoff.
For regression, we can use mean absolute error, mean squared error, or R-squared.
Notice the large gap between the error on the training data and on the validation data. What does that mean? We are probably overfitting the training data!
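The train-versus-validation gap can be demonstrated in miniature: fit an overly flexible regression on a handful of points and compare the two MSEs (synthetic data, not the PSRC features, so the sketch is self-contained):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.RandomState(0)
X = np.sort(rng.uniform(0, 1, 30))[:, None]
y = np.sin(2 * np.pi * X.ravel()) + rng.normal(0, 0.2, 30)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.5, random_state=0)

# A degree-12 polynomial has low bias but high variance on 15 training points
model = make_pipeline(PolynomialFeatures(12), LinearRegression()).fit(X_tr, y_tr)
train_mse = mean_squared_error(y_tr, model.predict(X_tr))
val_mse = mean_squared_error(y_val, model.predict(X_val))
print(f"train MSE: {train_mse:.4f}  validation MSE: {val_mse:.4f}")
```

A large validation-over-training gap like this one is the signature of overfitting; shrinking the model or regularizing (e.g. with Ridge) closes it at the cost of a little bias.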
| github_jupyter |
```
import os
import importlib
os.chdir('../app')
import preprocessing.preglobal as pg
import task_lib as tl
import fit_lib as fl
importlib.reload(fl)
importlib.reload(tl)
tl.execute_tasks() # RUN AFTER TASKS ARE ADDED TO SPAWN TASKS
## MLE FITS - done
mle_fit_params = {
'fit_book_notfixed':{
'fitparameter':{
'method':'mle',
'book_convention':True,
'fix_phi_dash':False
}},
'fit_book_fixed':{
'fitparameter':{
'method':'mle',
'book_convention':True,
'fix_phi_dash':True
}},
'fit_corrected_fixed': {
'fitparameter':{
'method':'mle',
'book_convention':False,
'fix_phi_dash':True
}},
'fit_corrected_notfixed': {
'fitparameter':{
'method':'mle',
'book_convention':False,
'fix_phi_dash':False
}},
}
for i in pg.get_kn_entries({'selected':1}):
task = {
'task':'fit',
'id':i['id'],
'start':10.5*3600*1000,
'stop':12.5*3600*1000
}
for k,v in mle_fit_params.items():
taski = task.copy()
taski['name'] = k
for kk,vv in v['fitparameter'].items():
taski[kk] = vv
tl.add_task(taski)
## COVARIANCE - done
for i in pg.get_kn_entries({'selected':1}):
task = {
'task':'covariance',
'id':i['id'],
'start':10.5*3600*1000,
'stop':12.5*3600*1000
}
tl.add_task(task)
## MOMENT FITS - todo, after fit of moments is repaired
for i in pg.get_kn_entries({'selected':1}):
task = {
'id':i['id'],
'task':'fit',
'method':'moments',
'start':10.5*3600*1000,
'stop':12.5*3600*1000,
'loadcachedcovariance':False,
'powerfit':False,
'lsqfit':True,
'directcovar':True,
'dt_range':[0.001,0.002,0.004]
}
tl.add_task(task)
for i in pg.get_kn_entries({'selected':1}):
task = {
'id':i['id'],
'task':'fit',
'method':'moments',
'start':10.5*3600*1000,
'stop':12.5*3600*1000,
'loadcachedcovariance':False,
'powerfit':True,
'lsqfit':False,
'directcovar':True,
'dt_range':[0.001,0.002,0.004]
}
tl.add_task(task)
# CREATE SIMULATION FOR MOMENT OR MLE FIT RESULTS - IF NOT ALREADY SIMULATED
tbl = tl.dbconnect()
tasks = list(
tbl.aggregate([{"$match":{"status":3,"error":None}},{"$sort":{"task.id":1}}])
)
c = 0
for i in tasks:
if 'method' not in i['task']:
continue
    if len([t for t in tasks if 'origin' in t['task'] and t['task']['task'] == 'simulate' and t['task']['origin']['id'] == i['_id']]) > 0:
continue
if i['task']['method'] == 'moments':
for k,v in i['result']['g_params'].items():
task = {'origin':{'id':i['_id'],'fit':k, 'task':i['task']}, 'task':'simulate', 'id':i['task']['id'], 'start':i['task']['start'], 'stop':i['task']['stop']}
task['g_params'] = v
print('nosimyet', task)
c+=1
#tl.add_task(task)
if i['task']['method'] == 'mle':
task = {'origin':{'id':i['_id'], 'task':i['task']}, 'task':'simulate', 'id':i['task']['id'], 'start':i['task']['start'], 'stop':i['task']['stop']}
if 'phi_0' in i['result']:
task['phi_0'] = i['result']['phi_0']
task['g_params'] = i['result']['g_params']
print('nosimyet', task)
c+=1
#tl.add_task(task)
print(c)
# EXECUTE SINGLE TASK DIRECTLY HERE FOR TESTING, E.G SIMULATION
result = tl.execute_task(
{'origin': {'id': '123'}, 'task': 'simulate', 'id': '20150309_AAPL', 'start': 37800000.0, 'stop': 45000000.0, 'phi_0': 0.8976357559433028, 'g_params': [0.99, 7900.79560871683, 53.28371451939907]}
)
print(result)
# OR MOMENT FIT
res_moments = tl.execute_task( {
'task':'fit',
'method':'moments',
'powerfit':True,
'lsqfit':False,
'loadcachedcovariance':True,
'id':'20170912_AAPL',
'start':10.5*3600*1000,
'stop':12.5*3600*1000
}
)
# OR MLE
res_mle = tl.add_task( {
'task':'fit',
'method':'mle',
'book_convention':True,
'fix_phi_dash':False,
'id':'20170912_AAPL',
'start':10.5*3600*1000,
'stop':12.5*3600*1000
}, execute_sync=False
)
```
| github_jupyter |
# Train Eval Baseline for CelebA Dataset
---
## Import Libraries
```
import sys
sys.path.append("..")
import matplotlib.pyplot as plt
%load_ext autoreload
%autoreload 2
%matplotlib inline
import torch
import numpy as np
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision.utils import make_grid
from disenn.datasets.celeba_dataset import CelebA
from disenn.models.conceptizers import VaeConceptizer
from disenn.models.parameterizers import ConvParameterizer
from disenn.models.aggregators import SumAggregator
from disenn.models.disenn import DiSENN
from disenn.models.losses import celeba_robustness_loss
from disenn.models.losses import bvae_loss
from disenn.utils.initialization import init_parameters
```
## Hardware & Seed
```
np.random.seed(42)
torch.manual_seed(42)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
```
## Sample dataset for baseline
```
SAMPLE_SIZE = 1
celeba_dataset = CelebA(split='train', data_path='data/celeba')
sample_idxs = np.random.permutation(len(celeba_dataset))[:SAMPLE_SIZE]
sample_celeba_dataset = [celeba_dataset[idx] for idx in sample_idxs]
sample_images = [x for x,_ in sample_celeba_dataset]
sample_labels = [y for _,y in sample_celeba_dataset]
print(f"Male: {sum(sample_labels)}")
sample_images_grid = make_grid(sample_images)
fig, ax = plt.subplots(figsize=(20,10))
ax.imshow(sample_images_grid.numpy().transpose(1,2,0))
ax.set_xticks([])
ax.set_yticks([])
fig.tight_layout()
sample_dl = DataLoader(sample_celeba_dataset, batch_size=2, shuffle=True)
x,y = next(iter(sample_dl))
```
# $\beta$-VAE Conceptizer
## Forward Pass
```
conceptizer = VaeConceptizer(num_concepts=10)
concept_mean, concept_logvar, x_reconstruct = conceptizer(x)
x.shape
concept_mean.shape, concept_logvar.shape
x_reconstruct.shape
plt.imshow(x[0].numpy().transpose(1,2,0))
plt.matshow(concept_mean.detach().numpy())
plt.matshow(concept_logvar.detach().numpy())
plt.imshow(x_reconstruct[0].detach().numpy().transpose(1,2,0))
```
## Sanity Check: Initial Loss
```
conceptizer = VaeConceptizer(num_concepts=10)
# concept_mean, concept_logvar, x_reconstruct = conceptizer(x)
# recon_loss, kl_div = BVAE_loss(x, x_reconstruct, concept_mean, concept_logvar)
# loss = recon_loss + kl_div
# loss.backward()
_, _, x_reconstruct = conceptizer(x)
loss = F.binary_cross_entropy(x_reconstruct, x, reduction="mean")
loss.backward()
loss
x_mean = x[0].mean().item()
x_recon_mean = x_reconstruct.mean().item()
0.5 * np.log(0.5) + (1-0.5) * np.log(1-0.5)
x_mean * np.log(x_recon_mean) + (1-x_mean) * np.log(1 - x_recon_mean)
```
## Initialize Parameters
```
conceptizer.apply(init_parameters);
```
## Backward Gradients
```
print(conceptizer.decoder.tconv_block[-1].weight.grad.mean())
print(conceptizer.decoder.tconv_block[-1].weight.grad.std())
print(conceptizer.decoder.tconv_block[-3].weight.grad.mean())
print(conceptizer.decoder.tconv_block[-3].weight.grad.std())
print(conceptizer.encoder.logvar_layer.weight.grad.mean())
print(conceptizer.encoder.mu_layer.weight.grad.std())
```
## Training
```
conceptizer = VaeConceptizer(num_concepts=10).to(device)
# conceptizer.apply(init_parameters);
train_dl = DataLoader(celeba_dataset, batch_size=128, shuffle=True)
optimizer = optim.Adam(conceptizer.parameters())
conceptizer.train();
recorder = []
EPOCHS = 1000
BETA = 1
PRINT_FREQ = 10
for epoch in range(EPOCHS):
for i, (x, _) in enumerate(sample_dl):
x = x.to(device)
optimizer.zero_grad()
concept_mean, concept_logvar, x_reconstruct = conceptizer(x)
        recon_loss, kl_div = bvae_loss(x, x_reconstruct, concept_mean, concept_logvar)
loss = recon_loss + BETA * kl_div
loss.backward()
optimizer.step()
recorder.append([loss.item(), recon_loss.item(), kl_div.item()])
steps = list(range(len(recorder)))
recorder = np.array(recorder)
fig, ax = plt.subplots(figsize=(15,5))
ax.plot(steps, recorder[:,0], label="Concept loss")
ax.plot(steps, recorder[:,1], label="Reconstruction loss")
ax.plot(steps, recorder[:,2], label="KL Div loss")
ax.set_xlabel("Steps")
ax.set_ylabel("Metrics")
ax.legend()
fig.tight_layout()
recorder[-1][0]
conceptizer.eval();
concept_mean, concept_logvar, x_reconstruct = conceptizer(x)
# x_reconstruct, _, _ = conceptizer(x)
plt.imshow(x[0].cpu().numpy().transpose(1,2,0))
plt.matshow(concept_mean.detach().cpu().numpy())
plt.colorbar()
plt.matshow(concept_logvar.detach().cpu().numpy())
plt.colorbar()
plt.imshow(x_reconstruct[0].detach().cpu().numpy().transpose(1,2,0))
```
**Observations**:
* KL Divergence affects the reconstruction loss such that all images tend to look similar
* Reducing beta to 0 drastically improves reconstruction loss
* Increasing the number of epochs does not help
* Will increasing data size help? No.
* The initial reconstruction loss should be 0.69 which is verified
* With 100 epochs and 10 images, loss reaches 0.65 which results in hazy reconstructions
* With 1000 epochs and 1 image, loss reaches 0.62 which results in almost perfect reconstruction
* Loss of 0.62 is our goal (although 0.69 to 0.62 is a pretty close bound)
* Initialization does not help reconstruction even with 1000 epochs
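The 0.69 starting point quoted above is just $\ln 2 \approx 0.693$: with Bernoulli pixels and a constant 0.5 prediction, binary cross-entropy reduces to $-\ln(0.5)$. A minimal numpy check of that bound (hypothetical data, not the notebook's batch):

```python
# Hypothetical check of the loss bounds quoted above (not part of the training
# code): BCE of a constant-0.5 prediction is -ln(0.5) ~= 0.693, and even a
# perfect reconstruction has nonzero BCE (the entropy of the pixels themselves).
import numpy as np

def bce(target, pred):
    """Mean binary cross-entropy between target and predicted pixel values."""
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

rng = np.random.default_rng(0)
pixels = rng.uniform(size=10_000)                        # stand-in image, mean ~0.5
initial = bce(pixels, np.full_like(pixels, 0.5))         # constant-0.5 prediction
perfect_fit = bce(pixels, np.clip(pixels, 1e-7, 1 - 1e-7))  # reconstruct exactly

print(round(initial, 3))      # ~0.693 = ln 2
print(perfect_fit < initial)
```

This is why 0.69 → 0.62 is "a pretty close bound": the floor is not 0 but the intrinsic entropy of the pixel values.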
# DiSENN
## Forward Pass
```
NUM_CONCEPTS = 5
NUM_CLASS = 2
conceptizer = VaeConceptizer(NUM_CONCEPTS)
parameterizer = ConvParameterizer(NUM_CONCEPTS, NUM_CLASS)
aggregator = SumAggregator(NUM_CLASS)
disenn = DiSENN(conceptizer, parameterizer, aggregator).to(device)
y_pred, explanation, x_construct = disenn(x)
disenn.explain(x[0], 1, show=True, num_prototypes=20)
```
## Training
```
EPOCHS = 1000
BETA = 1
ROBUST_REG = 1e-4
opt = optim.Adam(disenn.parameters())
disenn.train();
recorder = []
for epoch in range(EPOCHS):
for i, (x, labels) in enumerate(sample_dl):
x = x.to(device)
labels = labels.long().to(device)
opt.zero_grad()
x.requires_grad_(True)
y_pred, (concepts_dist, relevances), x_reconstruct = disenn(x)
concept_mean, concept_logvar = concepts_dist
concepts = concept_mean
pred_loss = F.nll_loss(y_pred.squeeze(-1), labels)
robustness_loss = celeba_robustness_loss(x, y_pred, concepts, relevances)
recon_loss, kl_div = BVAE_loss(x, x_reconstruct, concept_mean, concept_logvar)
concept_loss = recon_loss + BETA * kl_div
total_loss = pred_loss + concept_loss + (ROBUST_REG * robustness_loss)
total_loss.backward()
opt.step()
recorder.append([total_loss.item(), pred_loss.item(), robustness_loss.item(),
concept_loss.item(), recon_loss.item(), kl_div.item()])
steps = list(range(len(recorder)))
recorder = np.array(recorder)
fig, ax = plt.subplots(figsize=(15,5))
ax.plot(steps, recorder[:,0], label="Total loss")
ax.plot(steps, recorder[:,1], label="Prediction loss")
ax.plot(steps, recorder[:,2], label="Robustness loss")
ax.plot(steps, recorder[:,3], label="Concept loss")
ax.plot(steps, recorder[:,4], label="Reconstruction loss")
ax.plot(steps, recorder[:,5], label="KL Div loss")
ax.set_xlabel("Steps")
ax.set_ylabel("Metrics")
ax.legend()
fig.tight_layout()
y[0].item()
recorder[-1]
disenn.explain(x[0].detach(), 1, show=True, num_prototypes=20)
disenn.eval()
y_pred, explanations, x_reconstruct = disenn(x[0].unsqueeze(0))
plt.imshow(x_reconstruct[0].detach().cpu().numpy().transpose(1,2,0))
```
**Observations**:
* With 1000 epochs and 1 image, we reach the best possible loss: 0.623
* Conceptizer reconstruction is almost perfect
```
# Erasmus+ ICCT project (2018-1-SI01-KA203-047081)
# Toggle cell visibility
from IPython.display import HTML
tag = HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide()
} else {
$('div.input').show()
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
Toggle code visibility <a href="javascript:code_toggle()">here</a>.''')
display(tag)
# Hide the code completely
# from IPython.display import HTML
# tag = HTML('''<style>
# div.input {
# display:none;
# }
# </style>''')
# display(tag)
```
## Modal analysis of the lunar lander
The dynamics matrix $A$ representing the dynamics of the lunar lander described in the previous interactive lesson is:
$$
A=\begin{bmatrix}0&1&0&0 \\ 0&0&F/m&0 \\ 0&0&0&1 \\ 0&0&0&0\end{bmatrix},
$$
where $F$ is the thrust force and $m$ is the mass of the lander. The system state is $x=[z,\dot{z},\theta,\dot{\theta}]^T$, where $z$ is the lateral position, $\dot{z}$ its rate of change in time, $\theta$ the angle of the lander with respect to the vertical axis, and $\dot{\theta}$ its rate of change in time.
In this form the dynamics matrix has four eigenvalues, all equal to 0. Zero eigenvalues are often called integrators (recall the Laplace transform of the integral of a signal: what is the root of the denominator of the corresponding expression?), so we say this system has 4 integrators. With $F\neq0$ (and $m\neq0$) the system has a structure similar to a $4\times4$ Jordan block, so the eigenvalue 0 in this case has geometric multiplicity equal to 1. With $F=0$ the eigenvalue stays the same with the same algebraic multiplicity, but with geometric multiplicity equal to 2.
An example with $F\neq0$ is presented below.
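The multiplicity claims can be checked numerically: the geometric multiplicity of the eigenvalue 0 is $n - \operatorname{rank}(A)$. A hedged numpy sketch (`lander_A` is a helper defined here for illustration; $F=1500$, $m=1000$ mirror the slider defaults):

```python
# Hypothetical check (not part of the widget code): all four eigenvalues of A
# are 0, and the geometric multiplicity n - rank(A) is 1 for F != 0, 2 for F = 0.
import numpy as np

def lander_A(F, m):
    return np.array([[0, 1, 0,   0],
                     [0, 0, F/m, 0],
                     [0, 0, 0,   1],
                     [0, 0, 0,   0]], dtype=float)

for F in (1500.0, 0.0):
    A = lander_A(F, m=1000.0)
    eigs = np.linalg.eigvals(A)                        # all zero (A is nilpotent)
    geom_mult = A.shape[0] - np.linalg.matrix_rank(A)  # dim of the 0-eigenspace
    print(F, np.allclose(eigs, 0), geom_mult)
# F=1500 -> geometric multiplicity 1; F=0 -> geometric multiplicity 2
```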
### How to use this interactive example?
- Try setting $F=0$ and explain what this case physically implies for the lunar lander, in particular for the dynamics of $z$ and $\theta$ and their relationship.
```
#Preparatory Cell
import control
import numpy
from IPython.display import display, Markdown
import ipywidgets as widgets
import matplotlib.pyplot as plt
%matplotlib inline
#matrixWidget is a matrix looking widget built with a VBox of HBox(es) that returns a numPy array as value !
class matrixWidget(widgets.VBox):
def updateM(self,change):
for irow in range(0,self.n):
for icol in range(0,self.m):
self.M_[irow,icol] = self.children[irow].children[icol].value
#print(self.M_[irow,icol])
self.value = self.M_
def dummychangecallback(self,change):
pass
def __init__(self,n,m):
self.n = n
self.m = m
self.M_ = numpy.matrix(numpy.zeros((self.n,self.m)))
self.value = self.M_
widgets.VBox.__init__(self,
children = [
widgets.HBox(children =
[widgets.FloatText(value=0.0, layout=widgets.Layout(width='90px')) for i in range(m)]
)
for j in range(n)
])
#fill in widgets and tell interact to call updateM each time a children changes value
for irow in range(0,self.n):
for icol in range(0,self.m):
self.children[irow].children[icol].value = self.M_[irow,icol]
self.children[irow].children[icol].observe(self.updateM, names='value')
#value = Unicode('example@example.com', help="The email value.").tag(sync=True)
self.observe(self.updateM, names='value', type= 'All')
def setM(self, newM):
#disable callbacks, change values, and reenable
self.unobserve(self.updateM, names='value', type= 'All')
for irow in range(0,self.n):
for icol in range(0,self.m):
self.children[irow].children[icol].unobserve(self.updateM, names='value')
self.M_ = newM
self.value = self.M_
for irow in range(0,self.n):
for icol in range(0,self.m):
self.children[irow].children[icol].value = self.M_[irow,icol]
for irow in range(0,self.n):
for icol in range(0,self.m):
self.children[irow].children[icol].observe(self.updateM, names='value')
self.observe(self.updateM, names='value', type= 'All')
#self.children[irow].children[icol].observe(self.updateM, names='value')
#overlaod class for state space systems that DO NOT remove "useless" states (what "professor" of automatic control would do this?)
class sss(control.StateSpace):
def __init__(self,*args):
#call base class init constructor
control.StateSpace.__init__(self,*args)
#disable function below in base class
def _remove_useless_states(self):
pass
#define the sliders for m, k and c
m = widgets.FloatSlider(
value=1000,
min=400,
max=2000,
step=1,
description='$m$ [kg]:',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='.1f',
)
F = widgets.FloatSlider(
value=1500,
min=0,
max=5000,
step=10,
description='$F$ [N]:',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='.1f',
)
#function that make all the computations
def main_callback(m, F):
eig1 = 0
eig2 = 0
eig3 = 0
eig4 = 0
if numpy.real([eig1,eig2,eig3,eig4])[0] == 0 and numpy.real([eig1,eig2,eig3,eig4])[1] == 0:
T = numpy.linspace(0,20,1000)
else:
if min(numpy.abs(numpy.real([eig1,eig2,eig3,eig4]))) != 0:
T = numpy.linspace(0,7*1/min(numpy.abs(numpy.real([eig1,eig2,eig3,eig4]))),1000)
else:
T = numpy.linspace(0,7*1/max(numpy.abs(numpy.real([eig1,eig2,eig3,eig4]))),1000)
if F==0:
mode1 = numpy.exp(eig1*T)
mode2 = T*mode1
mode3 = mode1
mode4 = mode2
else:
mode1 = numpy.exp(eig1*T)
mode2 = T*mode1
mode3 = T**2*mode1/2 # t^2/2, matching the modes reported in the message below
mode4 = T**3*mode1/6 # t^3/6
fig = plt.figure(figsize=[16, 10])
fig.set_label('Modes')
g1 = fig.add_subplot(221)
g2 = fig.add_subplot(222)
g3 = fig.add_subplot(223)
g4 = fig.add_subplot(224)
g1.plot(T,mode1)
g1.grid()
g1.set_xlabel('Time [s]')
g1.set_ylabel('First mode')
g2.plot(T,mode2)
g2.grid()
g2.set_xlabel('Time [s]')
g2.set_ylabel('Second mode')
g3.plot(T,mode3)
g3.grid()
g3.set_xlabel('Time [s]')
g3.set_ylabel('Third mode')
g4.plot(T,mode4)
g4.grid()
g4.set_xlabel('Time [s]')
g4.set_ylabel('Fourth mode')
modesString = r'The eigenvalue is 0 with algebraic multiplicity 4. '
if F==0:
modesString = modesString + r'The corresponding modes are $k$ and $t$.'
else:
modesString = modesString + r'The corresponding modes are $k$, $t$, $\frac{t^2}{2}$ and $\frac{t^3}{6}$.'
display(Markdown(modesString))
out = widgets.interactive_output(main_callback,{'m':m,'F':F})
sliders = widgets.HBox([m,F])
display(out,sliders)
```
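The modes $1$, $t$, $t^2/2$, $t^3/6$ reported by the callback come from the matrix exponential $e^{At}$: since $A^4=0$, the power series terminates exactly after the cubic term. A small numpy check (illustrative values only):

```python
# Illustrative check, not part of the widget: for the nilpotent dynamics matrix,
# exp(At) = I + At + (At)^2/2 + (At)^3/6 exactly, so its entries are the modes.
import numpy as np

A = np.array([[0, 1, 0,   0],
              [0, 0, 1.5, 0],   # F/m = 1500/1000, the slider defaults
              [0, 0, 0,   1],
              [0, 0, 0,   0]])
t = 2.0
expAt = (np.eye(4) + A*t + np.linalg.matrix_power(A*t, 2)/2
         + np.linalg.matrix_power(A*t, 3)/6)
print(expAt[0])   # first row mixes the modes 1, t, (F/m) t^2/2, (F/m) t^3/6
```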
# Soccerstats Predictions v2.0
The changelog from v1.x:
* Implement data cleaning pipeline for model predictions.
* Load saved model from disk.
* Use model to predict data points.
## A. Data Preparation
### 1. Read csv file
```
# load csv data to predict
stat_df = sqlContext.read\
.format("com.databricks.spark.csv")\
.options(header = True)\
.load("data/predFixture.csv")
```
### 2. Filter-out column values
```
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType
# replace "-" values with null: HTS_teamAvgOpponentPPG, ATS_teamAvgOpponentPPG
nullify_hyphen_cols = udf(
lambda row_value: None if row_value == "-" else row_value,
StringType()
)
stat_df = (stat_df.withColumn("HTS_teamAvgOpponentPPG", nullify_hyphen_cols(stat_df.HTS_teamAvgOpponentPPG))
.withColumn("ATS_teamAvgOpponentPPG", nullify_hyphen_cols(stat_df.ATS_teamAvgOpponentPPG))
)
# drop Null values
stat_df = stat_df.dropna()
```
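The two steps above can be mirrored in plain Python without Spark; a hypothetical two-row sketch (column names reused from the dataset, values invented):

```python
# Plain-Python sketch (hypothetical, no Spark needed) of what the two steps
# above do: map "-" placeholders to None, then drop any row containing None.
rows = [
    {"HTS_teamAvgOpponentPPG": "1.42", "ATS_teamAvgOpponentPPG": "1.10"},
    {"HTS_teamAvgOpponentPPG": "-",    "ATS_teamAvgOpponentPPG": "0.95"},
]

nullify = lambda v: None if v == "-" else v              # same rule as the udf
cleaned = [{k: nullify(v) for k, v in r.items()} for r in rows]
kept = [r for r in cleaned if all(v is not None for v in r.values())]

print(len(kept))   # 1 -- the row with "-" is dropped, mirroring dropna()
```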
## B. Deep Learning
### 1. Clean data
```
# drop unnecessary columns
ml_df = stat_df.drop(
"gameID", "gamePlayDate", "gamePlayTime", "gameHomeTeamName",
"gameAwayTeamName", "gameHomeTeamID","gameAwayTeamID", "leagueName",
"leagueDivisionName", "gameFtScore"
)
# separate col types: double & string
# double type features
dtype_features = [
"leagueCompletion", "HTS_teamPosition", "HTS_teamGamesPlayed", "HTS_teamGamesWon",
"HTS_teamGamesDraw", "HTS_teamGamesLost", "HTS_teamGoalsScored", "HTS_teamGoalsConceded",
"HTS_teamPoints", "HTS_teamPointsPerGame", "HTS_teamPPGlast8", "HTS_homeGamesWon",
"HTS_homeGamesDraw", "HTS_homeGamesLost", "HTS_homeGamesPlayed", "HTS_awayGamesWon",
"HTS_awayGamesDraw", "HTS_awayGamesLost", "HTS_awayGamesPlayed", "HTS_teamPPGHome",
"HTS_teamPPGAway", "HTS_teamAvgOpponentPPG", "HTS_homeGoalMargin_by1_wins",
"HTS_homeGoalMargin_by1_losses", "HTS_homeGoalMargin_by2_wins", "HTS_homeGoalMargin_by2_losses",
"HTS_homeGoalMargin_by3_wins", "HTS_homeGoalMargin_by3_losses", "HTS_homeGoalMargin_by4p_wins",
"HTS_homeGoalMargin_by4p_losses", "HTS_awayGoalMargin_by1_wins", "HTS_awayGoalMargin_by1_losses",
"HTS_awayGoalMargin_by2_wins", "HTS_awayGoalMargin_by2_losses", "HTS_awayGoalMargin_by3_wins",
"HTS_awayGoalMargin_by3_losses", "HTS_awayGoalMargin_by4p_wins", "HTS_awayGoalMargin_by4p_losses",
"HTS_totalGoalMargin_by1_wins", "HTS_totalGoalMargin_by1_losses", "HTS_totalGoalMargin_by2_wins",
"HTS_totalGoalMargin_by2_losses", "HTS_totalGoalMargin_by3_wins", "HTS_totalGoalMargin_by3_losses",
"HTS_totalGoalMargin_by4p_wins", "HTS_totalGoalMargin_by4p_losses", "HTS_homeGoalsScored",
"HTS_homeGoalsConceded", "HTS_homeGoalsScoredPerMatch", "HTS_homeGoalsConcededPerMatch",
"HTS_homeScored_ConcededPerMatch", "HTS_awayGoalsScored", "HTS_awayGoalsConceded",
"HTS_awayGoalsScoredPerMatch", "HTS_awayGoalsConcededPerMatch", "HTS_awayScored_ConcededPerMatch",
"ATS_teamPosition", "ATS_teamGamesPlayed", "ATS_teamGamesWon", "ATS_teamGamesDraw", "ATS_teamGamesLost",
"ATS_teamGoalsScored", "ATS_teamGoalsConceded", "ATS_teamPoints", "ATS_teamPointsPerGame",
"ATS_teamPPGlast8", "ATS_homeGamesWon", "ATS_homeGamesDraw", "ATS_homeGamesLost",
"ATS_homeGamesPlayed", "ATS_awayGamesWon", "ATS_awayGamesDraw", "ATS_awayGamesLost",
"ATS_awayGamesPlayed", "ATS_teamPPGHome", "ATS_teamPPGAway", "ATS_teamAvgOpponentPPG",
"ATS_homeGoalMargin_by1_wins", "ATS_homeGoalMargin_by1_losses", "ATS_homeGoalMargin_by2_wins",
"ATS_homeGoalMargin_by2_losses", "ATS_homeGoalMargin_by3_wins", "ATS_homeGoalMargin_by3_losses",
"ATS_homeGoalMargin_by4p_wins", "ATS_homeGoalMargin_by4p_losses", "ATS_awayGoalMargin_by1_wins",
"ATS_awayGoalMargin_by1_losses", "ATS_awayGoalMargin_by2_wins", "ATS_awayGoalMargin_by2_losses",
"ATS_awayGoalMargin_by3_wins", "ATS_awayGoalMargin_by3_losses", "ATS_awayGoalMargin_by4p_wins",
"ATS_awayGoalMargin_by4p_losses", "ATS_totalGoalMargin_by1_wins", "ATS_totalGoalMargin_by1_losses",
"ATS_totalGoalMargin_by2_wins", "ATS_totalGoalMargin_by2_losses", "ATS_totalGoalMargin_by3_wins",
"ATS_totalGoalMargin_by3_losses", "ATS_totalGoalMargin_by4p_wins", "ATS_totalGoalMargin_by4p_losses",
"ATS_homeGoalsScored", "ATS_homeGoalsConceded", "ATS_homeGoalsScoredPerMatch", "ATS_homeGoalsConcededPerMatch",
"ATS_homeScored_ConcededPerMatch", "ATS_awayGoalsScored", "ATS_awayGoalsConceded", "ATS_awayGoalsScoredPerMatch",
"ATS_awayGoalsConcededPerMatch", "ATS_awayScored_ConcededPerMatch"
]
# string type features
stype_features = [
"HTS_teamCleanSheetPercent", "HTS_homeOver1_5GoalsPercent",
"HTS_homeOver2_5GoalsPercent", "HTS_homeOver3_5GoalsPercent", "HTS_homeOver4_5GoalsPercent",
"HTS_awayOver1_5GoalsPercent", "HTS_awayOver2_5GoalsPercent", "HTS_awayOver3_5GoalsPercent",
"HTS_awayOver4_5GoalsPercent", "HTS_homeCleanSheets", "HTS_homeWonToNil", "HTS_homeBothTeamsScored",
"HTS_homeFailedToScore", "HTS_homeLostToNil", "HTS_awayCleanSheets", "HTS_awayWonToNil",
"HTS_awayBothTeamsScored", "HTS_awayFailedToScore", "HTS_awayLostToNil", "HTS_homeScored_ConcededBy_0",
"HTS_homeScored_ConcededBy_1", "HTS_homeScored_ConcededBy_2", "HTS_homeScored_ConcededBy_3",
"HTS_homeScored_ConcededBy_4", "HTS_homeScored_ConcededBy_5p", "HTS_homeScored_ConcededBy_0_or_1",
"HTS_homeScored_ConcededBy_2_or_3", "HTS_homeScored_ConcededBy_4p", "HTS_awayScored_ConcededBy_0",
"HTS_awayScored_ConcededBy_1", "HTS_awayScored_ConcededBy_2", "HTS_awayScored_ConcededBy_3",
"HTS_awayScored_ConcededBy_4", "HTS_awayScored_ConcededBy_5p", "HTS_awayScored_ConcededBy_0_or_1",
"HTS_awayScored_ConcededBy_2_or_3", "HTS_awayScored_ConcededBy_4p",
"ATS_teamCleanSheetPercent", "ATS_homeOver1_5GoalsPercent", "ATS_homeOver2_5GoalsPercent",
"ATS_homeOver3_5GoalsPercent", "ATS_homeOver4_5GoalsPercent", "ATS_awayOver1_5GoalsPercent",
"ATS_awayOver2_5GoalsPercent", "ATS_awayOver3_5GoalsPercent", "ATS_awayOver4_5GoalsPercent",
"ATS_homeCleanSheets", "ATS_homeWonToNil", "ATS_homeBothTeamsScored", "ATS_homeFailedToScore",
"ATS_homeLostToNil", "ATS_awayCleanSheets", "ATS_awayWonToNil", "ATS_awayBothTeamsScored",
"ATS_awayFailedToScore", "ATS_awayLostToNil", "ATS_homeScored_ConcededBy_0", "ATS_homeScored_ConcededBy_1",
"ATS_homeScored_ConcededBy_2", "ATS_homeScored_ConcededBy_3", "ATS_homeScored_ConcededBy_4",
"ATS_homeScored_ConcededBy_5p", "ATS_homeScored_ConcededBy_0_or_1", "ATS_homeScored_ConcededBy_2_or_3",
"ATS_homeScored_ConcededBy_4p", "ATS_awayScored_ConcededBy_0", "ATS_awayScored_ConcededBy_1",
"ATS_awayScored_ConcededBy_2", "ATS_awayScored_ConcededBy_3", "ATS_awayScored_ConcededBy_4",
"ATS_awayScored_ConcededBy_5p", "ATS_awayScored_ConcededBy_0_or_1", "ATS_awayScored_ConcededBy_2_or_3",
"ATS_awayScored_ConcededBy_4p"
]
# integer type features
itype_features = ["HTS_teamGoalsDifference", "ATS_teamGoalsDifference"]
from pyspark.sql.types import DoubleType, IntegerType
from pyspark.sql.functions import col
# cast types to columns: doubles
ml_df = ml_df.select(*[col(c).cast("double").alias(c) for c in dtype_features] + stype_features + itype_features)
# convert "HTS_teamGoalsDifference" & "ATS_teamGoalsDifference" to integer
int_udf = udf(
lambda r: int(r),
IntegerType()
)
# cast types to columns: integers
ml_df = ml_df.select(*[int_udf(col(col_name)).name(col_name) for col_name in itype_features] + stype_features + dtype_features)
# convert percent cols to float
percent_udf = udf(
lambda r: float(r.split("%")[0])/100,
DoubleType()
)
# cast types to columns: strings
ml_df = ml_df.select(*[percent_udf(col(col_name)).name(col_name) for col_name in stype_features] + itype_features + dtype_features)
```
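The percent-string rule inside `percent_udf` can be sanity-checked in plain Python (the sample strings are invented, not from the csv):

```python
# Same conversion rule as percent_udf above: strip the "%" and scale to [0, 1].
to_fraction = lambda s: float(s.split("%")[0]) / 100

print(to_fraction("45%"))    # 0.45
print(to_fraction("100%"))   # 1.0
```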
### 2. Some featurization
```
import numpy as np
feature_cols = dtype_features + stype_features + itype_features
ml_df = ml_df[feature_cols]
# convert dataframe to ndarray
X_new = np.array(ml_df.select(feature_cols).collect())
print("New features shape: '{}'".format(X_new.shape))
```
### 3. Restore model from disk
```
from keras.models import model_from_json
# model version to restore
MODEL_VERSION = 1.6
# load json and create model
json_file = open('models/model_({}).json'.format(MODEL_VERSION), 'r')
loaded_model_json = json_file.read()
json_file.close()
loaded_model = model_from_json(loaded_model_json)
# load weights into new model
loaded_model.load_weights("models/model_({}).h5".format(MODEL_VERSION))
print("Loaded model version '{}' from disk!".format(MODEL_VERSION))
```
### 4. Model predictions
```
import numpy as np
from prettytable import PrettyTable
# evaluate loaded model on test data
loaded_model.compile(
loss='binary_crossentropy',
optimizer='adagrad',
metrics=['accuracy'])
# make prediction: class prediction
y_new_class = loaded_model.predict_classes(X_new)
# make prediction: probability prediction
y_new_prob = loaded_model.predict_proba(X_new)
# create predictions table
predictions = PrettyTable()
predictions.field_names = [
"gamePlayDate", "gameHomeTeamName", "gameAwayTeamName", "leagueName",
"leagueDivisionName", "predClass", "predProb", "predOutcome"
]
# populate prediction table (collect rows once instead of calling collect() per iteration)
pred_rows = stat_df.collect()
for val in range(len(X_new)):
if y_new_class[val] == 0:
pred = "Under 3.5"
else:
pred = "Over 3.5"
# append values to predictions table
predictions.add_row([
"{}".format(pred_rows[val]["gamePlayDate"]),
"{}".format(pred_rows[val]["gameHomeTeamName"]),
"{}".format(pred_rows[val]["gameAwayTeamName"]),
"{}".format(pred_rows[val]["leagueName"]),
"{}".format(pred_rows[val]["leagueDivisionName"]),
"{}".format(y_new_class[val]),
"{}".format(y_new_prob[val]),
"{}".format(pred)
])
print(predictions)
```
```
# Hidden TimeStamp
import time, datetime
st = datetime.datetime.fromtimestamp(time.time()).strftime('%Y-%m-%d %H:%M:%S')
print('Last Run: {}'.format(st))
# Hidden Working Directory
# Run this cell only once
from IPython.display import clear_output
%cd ../
clear_output()
#PYTEST_VALIDATE_IGNORE_OUTPUT
# Hidden Versioning
import numpy, matplotlib, pandas, six, openpyxl, xlrd, version_information, lamana
%reload_ext version_information
%version_information numpy, matplotlib, pandas, six, openpyxl, xlrd, version_information, lamana
# Hidden Namespace Reset
%reset -sf
%whos
```
# Demonstration
The following demonstration includes basic and intermediate uses of the LamAna Project library. It is intended to exhaustively reference all API features; therefore, some advanced demonstrations will favor technical detail.
# Tutorial: Basic
## User Input Startup
```
#------------------------------------------------------------------------------
import pandas as pd
import lamana as la
#import LamAna as la
%matplotlib inline
#%matplotlib nbagg
# PARAMETERS ------------------------------------------------------------------
# Build dicts of geometric and material parameters
load_params = {'R' : 12e-3, # specimen radius
'a' : 7.5e-3, # support ring radius
'r' : 2e-4, # radial distance from center loading
'P_a' : 1, # applied load
'p' : 5, # points/layer
}
# Quick Form: a dict of lists
mat_props = {'HA' : [5.2e10, 0.25],
'PSu' : [2.7e9, 0.33],
}
# Standard Form: a dict of dicts
# mat_props = {'Modulus': {'HA': 5.2e10, 'PSu': 2.7e9},
# 'Poissons': {'HA': 0.25, 'PSu': 0.33}}
# What geometries to test?
# Make tuples of desired geometries to analyze: outer - {inner...-....}_i - middle
# Current Style
g1 = ('0-0-2000') # Monolith
g2 = ('1000-0-0') # Bilayer
g3 = ('600-0-800') # Trilayer
g4 = ('500-500-0') # 4-ply
g5 = ('400-200-800') # Short-hand; <= 5-ply
g6 = ('400-200-400S') # Symmetric
g7 = ('400-[200]-800') # General convention; 5-ply
g8 = ('400-[100,100]-800') # General convention; 7-plys
g9 = ('400-[100,100]-400S') # General and Symmetric convention; 7-plys
'''Add to test set'''
g13 = ('400-[150,50]-800') # Dissimilar inner_is
g14 = ('400-[25,125,50]-800')
geos_most = [g1, g2, g3, g4, g5]
geos_special = [g6, g7, g8, g9]
geos_full = [g1, g2, g3, g4, g5, g6, g7, g8, g9]
geos_dissimilar = [g13, g14]
# Future Style
#geos1 = ((400-400-400),(400-200-800),(400-350-500)) # same total thickness
#geos2 = ((400-400-400), (400-500-1600), (400-200-800)) # same outer thickness
#import pandas as pd
pd.set_option('display.max_columns', 10)
pd.set_option('precision', 4)
```
## Goal: Generate a Plot in 3 Lines of Code
```
case1 = la.distributions.Case(load_params, mat_props) # instantiate a User Input Case Object through distributions
case1.apply(['400-200-800'])
case1.plot()
```
That's it! The rest of this demonstration showcases API functionality of the LamAna project.
## Calling Case attributes
Passed-in arguments are accessible and can also be displayed as pandas Series and DataFrames.
```
# Original
case1.load_params
# Series View
case1.parameters
# Original
case1.mat_props
# DataFrame View
case1.properties
# Equivalent Standard Form
case1.properties.to_dict()
```
Reset material order. Changes are reflected in the properties view and stacking order.
```
case1.materials = ['PSu', 'HA']
case1.properties
```
Serial resets
```
case1.materials = ['PSu', 'HA', 'HA']
case1.properties
case1.materials # get reorderd list of materials
case1._materials
case1.apply(geos_full)
case1.snapshots[-1]
'''Need to bypass pandas abc ordering of indicies.'''
```
Reset the parameters
```
mat_props2 = {'HA' : [5.3e10, 0.25],
'PSu' : [2.8e9, 0.33],
}
case1 = la.distributions.Case(load_params, mat_props2)
case1.properties
```
## `apply()` Geometries and LaminateModels
Construct laminates using geometric parameters, material parameters, and geometries.
```
case2 = la.distributions.Case(load_params, mat_props)
case2.apply(geos_full) # default model Wilson_LT
```
Access the user input geometries
```
case2.Geometries # using an attribute, __repr__
print(case2.Geometries) # uses __str__
case2.Geometries[0] # indexing
```
We can compare Geometry objects with builtin Python operators. This process directly compares GeometryTuples in the `Geometry` class.
```
bilayer = case2.Geometries[1] # (1000.0-[0.0]-0.0)
trilayer = case2.Geometries[2] # (600.0-[0.0]-800.0)
#bilayer == trilayer
bilayer != trilayer
```
Get all thicknesses for selected layers.
```
case2.middle
case2.inner
case2.inner[-1]
case2.inner[-1][0] # List indexing allowed
[first[0] for first in case2.inner] # iterate
case2.outer
```
A general and very important object is the LaminateModel.
```
case2.LMs
```
Sometimes you might want to throw in a bunch of geometry strings from different groups. If there are repeated strings in different groups (set intersections), you can tell `Case` to only give a unique result.
For instance, here we combine two groups of geometry strings, 5-plys and odd-plys. Clearly these two groups overlap, and there are some repeated geometries (one with different conventions). Using the `unique` keyword, Case only operates on a unique set of `Geometry` objects (independent of convention), resulting in a unique set of LaminateModels.
```
fiveplys = ['400-[200]-800', '350-400-500', '200-100-1400']
oddplys = ['400-200-800', '350-400-500', '400.0-[100.0,100.0]-800.0']
mix = fiveplys + oddplys
mix
# Non-unique, repeated 5-plys
case_ = la.distributions.Case(load_params, mat_props)
case_.apply(mix)
case_.LMs
# Unique
case_ = la.distributions.Case(load_params, mat_props)
case_.apply(mix, unique=True)
case_.LMs
```
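Conceptually, `unique=True` deduplicates on the parsed geometry rather than the raw string. A hypothetical sketch of that idea (not LamAna's actual implementation; the symmetric `'S'` convention is ignored here):

```python
def normalize(geo):
    """Parse 'outer-[inner,...]-middle' into numbers so that equivalent
    conventions (e.g. '400-200-800' vs '400.0-[200.0]-800.0') compare equal."""
    outer, inner, middle = geo.split("-")
    inner_vals = tuple(float(v) for v in inner.strip("[]").split(","))
    return (float(outer), inner_vals, float(middle))

mix = ["400-[200]-800", "350-400-500", "400-200-800",
       "400.0-[100.0,100.0]-800.0", "400-[100,100]-800"]

seen, unique = set(), []
for g in mix:
    key = normalize(g)
    if key not in seen:        # keep the first occurrence of each geometry
        seen.add(key)
        unique.append(g)

print(unique)   # ['400-[200]-800', '350-400-500', '400.0-[100.0,100.0]-800.0']
```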
## DataFrame Access
You can get a quick view of the stack using the `snapshot` method. This gives access to a `Construct` - a DataFrame converted stack.
```
case2.snapshots[-1]
```
We can easily view entire laminate DataFrames using the `frames` attribute. This gives access to `LaminateModels` (DataFrame) objects, which extends the stack view so that laminate theory is applied to each row.
```
'''Consider head command for frames list'''
#case2.frames
##with pd.set_option('display.max_columns', None): # display all columns, within this context manager
## case2.frames[5]
case2.frames[5].head()
'''Extend laminate attributes'''
case3 = la.distributions.Case(load_params, mat_props)
case3.apply(geos_dissimilar)
#case3.frames
```
NOTE: for even plies, the materials are set to alternate for each layer; thus outer layers may be different materials.
```
case4 = la.distributions.Case(load_params, mat_props)
case4.apply(['400-[100,100,100]-0'])
case4.frames[0][['layer', 'matl', 'type']]
;
'''Add functionality to customize material type.'''
```
### Totaling
The `distributions.Case` class has useful properties available for totaling specific layers for a group of laminates as lists. As these properties return lists, these results can be **sliced** and **iterated**.
```
'''Show Geometry first then case use.'''
```
###### `.total` property
```
case2.total
case2.total_middle
case2.total_inner_i
case2.total_outer
case2.total_outer[4:-1] # slicing
[inner_i[-1]/2.0 for inner_i in case2.total_inner_i] # iterate
```
###### `Geometry` Totals
The total attributes used in Case actually derive from attributes of individual Geometry objects. On Geometry objects, they return specific thicknesses instead of lists of thicknesses.
```
G1 = case2.Geometries[-1]
G1
G1.total # laminate thickness (um)
G1.total_inner_i # inner_i laminae
G1.total_inner_i[0] # inner_i lamina pair
sum(G1.total_inner_i) # inner total
G1.total_inner # inner total
```
## `LaminateModel` Attributes
Access the LaminateModel object directly using the `LMs` attribute.
```
case2.LMs[5].Middle
case2.LMs[5].Inner_i
```
Laminates are assumed mirrored at the neutral axis, but dissimilar inner_i thicknesses are allowed.
```
case2.LMs[5].tensile
```
Separate from the case attributes, Laminates have useful attributes also, such as `nplies`, `p` and its own `total`.
```
LM = case2.LMs[4]
LM.LMFrame.tail(7)
```
Often the extreme stress values (those at the interfaces) are most important. This is equivalent to p=2.
```
LM.extrema
LM.p # number of rows per group
LM.nplies # number of plies
LM.total # total laminate thickness (m)
LM.Geometry
'''Overload the min and max special methods.'''
LM.max_stress # max interfacial failure stress
```
NOTE: this feature gives a different result for p=1, since a single middle row cannot report two interfacial values (INDET).
```
LM.min_stress
'''Redo to return series of bool and index for has_attrs'''
LM.has_neutaxis
LM.has_discont
LM.is_special
LM.FeatureInput
'''Need to fix FeatureInput and Geometry inside LaminateModel'''
```
As with Geometry objects, we can compare LaminateModel objects also. ~~This process directly compares two defining components of a LaminateModel object: the LM DataFrame (`LMFrame`) and FeatureInput. If either is False, the equality returns `False`.~~
```
case2 = la.distributions.Case(load_params, mat_props)
case2.apply(geos_full)
bilayer_LM = case2.LMs[1]
trilayer_LM = case2.LMs[2]
trilayer_LM == trilayer_LM
#bilayer_LM == trilayer_LM
bilayer_LM != trilayer_LM
```
Use Python and pandas native comparison tracebacks to understand the errors directly by comparing the FeatureInput dict and LaminateModel DataFrame.
```
#bilayer_LM.FeatureInput == trilayer_LM.FeatureInput # gives detailed traceback
'''Fix FI DataFrame with dict.'''
bilayer_LM.FeatureInput
#bilayer_LM.LMFrame == trilayer_LM.LMFrame # gives detailed traceback
```
## `plot()` LT Geometries
CAVEAT: it is recommended to use at least p=2 for calculating stress. Fewer than two points for odd plies is indeterminate in middle rows, which can raise exceptions.
```
'''Find a way to remove all but interfacial points.'''
```
We can quickly plot simple stress distributions with native pandas methods. There are two variants for displaying distributions:
- Unnormalized: plotted by the height (`d_`). Visually: thicknesses vary, material slopes are constant.
- Normalized: plotted by the relative fraction level (`k_`). Visually: thicknesses are constant, material slopes vary.
Here we plot with the nbagg matplotlib backend to generate interactive figures. NOTE: for normalized plots, slope can vary for a given material.
```
from lamana.utils import tools as ut
from lamana.models import Wilson_LT as wlt
dft = wlt.Defaults()
#%matplotlib nbagg
# Quick plotting
case4 = ut.laminator(dft.geos_standard)
for case in case4.values():
for LM in case.LMs:
df = LM.LMFrame
df.plot(x='stress_f (MPa/N)', y='d(m)', title='Unnormalized Distribution')
df.plot(x='stress_f (MPa/N)', y='k', title='Normalized Distribution')
```
While we get reasonable stress distribution plots rather simply, LamAna offers some plotting methods pertinent to laminates that assist with visualization.
Demo - An example illustration of desired plotting of multiple geometries from `distributions`.

This is an image of results from legacy code, used for comparison.
We can plot the stress distribution for a case of a single geometry.
```
case3 = la.distributions.Case(load_params, mat_props)
case3.apply(['400-200-800'], model='Wilson_LT')
case3.plot()
```
We can also plot multiple geometries of similar total thickness.
```
five_plies = ['350-400-500', '400-200-800', '200-200-1200', '200-100-1400',
'100-100-1600', '100-200-1400', '300-400-600']
case4 = la.distributions.Case(load_params, mat_props)
case4.apply(five_plies, model='Wilson_LT')
case4.plot()
'''If different plies or patterns, make new caselet (subplot)'''
'''[400-200-800, '300-[400,200]-600'] # non-congruent? equi-ply'''
'''[400-200-800, '400-200-0'] # odd/even ply'''
# currently superimposes plots. Just needs to separate.
```
## Exporting
Saving data is critical for future analysis. LamAna offers two formats for exporting your data and parameters. Parameters used to make calculations, such as the FeatureInput information, are saved as "dashboards" in different forms.
- '.xlsx': (default); convenient for storing multiple calculations and dashboards as separate worksheets in an Excel workbook.
- '.csv': universal format; separate files for data and dashboard.
The lowest level to export data is for a LaminateModel object.
```
LM = case4.LMs[0]
LM.to_xlsx(delete=True) # or `to_csv()`
```
<div class="alert alert-warning">**NOTE** For demonstration purposes, the `temp` and `delete` are activated. This will create temporary files in the OS temp directory and automatically delete them. For practical use, ignore setting these flags.</div>
The latter LaminateModel data was saved to an .xlsx file in the default export folder. The filepath is returned (currently suppressed with the `;` line).
The next level to export data is for a case. This will save all files comprised in a case. If exported to csv format, files are saved separately. In xlsx format, a single file is made where each LaminateModel's data and dashboard are saved as separate worksheets.
```
case4.to_xlsx(temp=True, delete=True) # or `to_csv()`
```
---
# Tutorial: Intermediate
So far, the barebones objects have been discussed and a lot can be accomplished with the basics. For users who have some experience with Python and Pandas, here are some intermediate techniques to reduce repetitious actions.
This section discusses the use of abstract base classes intended to reduce redundant tasks such as **multiple case creation** and **default parameter definitions**. Custom model subclassing is also discussed.
```
#------------------------------------------------------------------------------
import pandas as pd
import lamana as la
%matplotlib inline
#%matplotlib nbagg
# PARAMETERS ------------------------------------------------------------------
# Build dicts of loading parameters and material properties
load_params = {'R' : 12e-3, # specimen radius
'a' : 7.5e-3, # support ring radius
'r' : 2e-4, # radial distance from center loading
'P_a' : 1, # applied load
'p' : 5, # points/layer
}
# # Quick Form: a dict of lists
# mat_props = {'HA' : [5.2e10, 0.25],
# 'PSu' : [2.7e9, 0.33],}
# Standard Form: a dict of dicts
mat_props = {'Modulus': {'HA': 5.2e10, 'PSu': 2.7e9},
'Poissons': {'HA': 0.25, 'PSu': 0.33}}
# What geometries to test?
# Make tuples of desired geometries to analyze: outer - {inner...-....}_i - middle
# Current Style
g1 = ('0-0-2000') # Monolith
g2 = ('1000-0-0') # Bilayer
g3 = ('600-0-800') # Trilayer
g4 = ('500-500-0') # 4-ply
g5 = ('400-200-800') # Short-hand; <= 5-ply
g6 = ('400-200-400S') # Symmetric
g7 = ('400-[200]-800') # General convention; 5-ply
g8 = ('400-[100,100]-800') # General convention; 7-ply
g9 = ('400-[100,100]-400S') # General and Symmetric convention; 7-ply
'''Add to test set'''
g13 = ('400-[150,50]-800') # Dissimilar inner_is
g14 = ('400-[25,125,50]-800')
geos_most = [g1, g2, g3, g4, g5]
geos_special = [g6, g7, g8, g9]
geos_full = [g1, g2, g3, g4, g5, g6, g7, g8, g9]
geos_dissimilar = [g13, g14]
```
## Exploring LamAna Objects
This is a brief introduction to the underlying objects in this package. We begin with an input string that is parsed and converted into a Geometry object. This is part of the `input_` module.
```
# Geometry object
la.input_.Geometry('100-200-1600')
```
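A rough sketch of the token structure of such strings, in plain Python (this is illustrative only, not LamAna's actual parser):

```python
def parse_geo_string(geo):
    """Split an 'outer-[inner,...]-middle' string into numeric tokens.

    Illustrative only; LamAna's Geometry object does the real parsing.
    """
    outer, inner, middle = geo.split('-', 2)
    inner = inner.strip('[]')                        # '[100,100]' -> '100,100'
    inners = [float(tok) for tok in inner.split(',')]
    symmetric = middle.endswith('S')                 # trailing 'S' marks symmetry
    middle_val = float(middle.rstrip('S'))
    return float(outer), inners, middle_val, symmetric

parse_geo_string('400-[100,100]-800')   # (400.0, [100.0, 100.0], 800.0, False)
```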
This object has a number of handy methods. This information is shipped with parameters and properties in a `FeatureInput`. A `FeatureInput` is simply a dict; it currently does not have an official class, but it is important for other objects.
```
# FeatureInput
FI = {
'Geometry': la.input_.Geometry('400.0-[200.0]-800.0'),
'Materials': ['HA', 'PSu'],
'Model': 'Wilson_LT',
'Parameters': load_params,
'Properties': mat_props,
'Globals': None,
}
```
The following objects are serially inherited and part of the `constructs` module. These construct the DataFrame representation of a laminate. The code to decouple LaminateModel from Laminate was merged in version 0.4.13.
```
# Stack object
la.constructs.Stack(FI)
# Laminate object
la.constructs.Laminate(FI)
# LaminateModel object
la.constructs.LaminateModel(FI)
```
The latter cells verify these objects are successfully decoupled. That's all for now.
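The serial-inheritance pattern itself can be sketched with placeholder classes (these stubs are not the real `constructs` classes):

```python
class Stack:
    """Stand-in for the stack-building stage."""
    def __init__(self, feature_input):
        self.FeatureInput = feature_input

class Laminate(Stack):
    """Builds on the stack."""
    def __init__(self, feature_input):
        super().__init__(feature_input)
        self.frame = 'laminate rows'

class LaminateModel(Laminate):
    """Applies a laminate-theory model to the laminate."""
    def __init__(self, feature_input):
        super().__init__(feature_input)
        self.model = feature_input.get('Model')

LM = LaminateModel({'Model': 'Wilson_LT'})
```

The real chain additionally builds DataFrames at each stage; the point here is only the inheritance relationship.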
## Generating Multiple Cases
We've already seen we can [generate a case object and plots with three lines of code](#Goal:-Generate-a-Plot-in-3-Lines-of-Code). However, sometimes it is necessary to generate different cases. These invocations can be tedious with three lines of code per case. Have no fear. A simple way to produce more cases is to instantiate a `Cases` object.
Below we will create a `Cases` which houses multiples cases that:
- share similar loading parameters/material properties and laminate theory model, with
- different numbers of datapoints, p
```
cases1 = la.distributions.Cases(['400-200-800', '350-400-500',
'400-200-0', '1000-0-0'],
load_params=load_params,
mat_props=mat_props, model= 'Wilson_LT',
ps=[3,4,5])
cases1
```
`Cases()` accepts a list of geometry strings. Given appropriate default keywords, this lone argument will return a dict-like object of cases with indices as keys. The `model` and `ps` keywords have default values.
A `Cases()` object has some interesting characteristics (this is not a dict):
- if user-defined, tries to import `Defaults()` to simplify instantiations
- dict-like storage and access of cases
- list-like ordering of cases
- gettable: list-like, get items by index (including negative indices)
- sliceable: slices the dict keys of the Cases object
- viewable: contained LaminateModels
- iterable: by values (unlike normal dicts, not by keys)
- writable: write DataFrames to csv files
- selectable: perform set operations and return unique subsets
```
# Gettable
cases1[0] # normal dict key selection
cases1[-1] # negative indices
cases1[-2] # negative indices
# Sliceable
cases1[0:2] # range of dict keys
cases1[0:3] # full range of dict keys
cases1[:] # full range
cases1[1:] # start:None
cases1[:2] # None:stop
cases1[:-1] # None:negative index
cases1[:-2] # None:negative index
#cases1[0:-1:-2] # start:stop:step; NotImplemented
#cases1[::-1] # reverse; NotImplemented
# Viewable
cases1
cases1.LMs
# Iterable
for i, case in enumerate(cases1): # __iter__ values
print(case)
#print(case.LMs) # access LaminateModels
# Writable
#cases1.to_csv() # write to file
# Selectable
cases1.select(nplies=[2,4]) # by # plies
cases1.select(ps=[3,4]) # by points/DataFrame rows
cases1.select(nplies=[2,4], ps=[3,4], how='intersection') # by set operations
```
LaminateModels can be compared using set theory. Unique subsets of LaminateModels can be returned from a mix of repeated geometry strings. We will use the default `model` and `ps` values.
```
set(geos_most).issubset(geos_full) # confirm repeated items
mix = geos_full + geos_most # contains repeated items
# Repeated Subset
cases2 = la.distributions.Cases(mix, load_params=load_params, mat_props=mat_props)
cases2.LMs
# Unique Subset
cases2 = la.distributions.Cases(mix, load_params=load_params, mat_props=mat_props,
unique=True)
cases2.LMs
```
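The `unique=True` behavior of keeping first occurrences can be sketched, independent of `Cases`, with order-preserving deduplication. The strings below are illustrative; note that `Cases` also accounts for equivalent spellings such as '400-200-800' vs. '400-[200]-800', which this sketch skips:

```python
geos_full = ['0-0-2000', '1000-0-0', '400-200-800']
geos_most = ['0-0-2000', '400-200-800']
mix = geos_full + geos_most            # contains repeated items

# dict keys are insertion-ordered (Python 3.7+), so first occurrences win
unique = list(dict.fromkeys(mix))
```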
## Subclassing Custom Default Parameters
We observed the benefits of using *implicit* default keywords (`models`, `ps`) in simplifying `Cases()` instantiations. In general, the user can code *explicit* defaults for `load_params` and `mat_props` by subclassing `BaseDefaults()` from `input_`. While subclassing requires some extra Python knowledge, this is a relatively simple process that removes a significant amount of redundant code, leading to a more efficient analytical setting.
`BaseDefaults` contains dicts of various geometry strings and Geometry objects. Rather than defining geometry strings by hand, the user can call from all geometries or a grouping of them.
```
from lamana.input_ import BaseDefaults
bdft = BaseDefaults()
# geometry String Attributes
bdft.geo_inputs # all dict key-values
bdft.geos_all # all geo strings
bdft.geos_standard # static
bdft.geos_sample # active; grows
# Geometry Object Attributes; mimics latter
bdft.Geo_objects # all dict key-values
bdft.Geos_all # all Geo objects
# more ...
# Custom FeatureInputs
#bdft.get_FeatureInput() # quick builds
#bdft.get_materials() # convert to std. form
```
The latter geometric defaults come out of the box when subclassed from `BaseDefaults`. If custom geometries are desired, the user can override the `geo_inputs` dict, which automatically builds the `Geo_objects` dict.
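The override pattern itself is ordinary subclassing; a minimal sketch with a stand-in base class (`StubBaseDefaults` is hypothetical, not the real `BaseDefaults`):

```python
class StubBaseDefaults:
    """Stand-in for lamana.input_.BaseDefaults."""
    def __init__(self):
        # Stock geometry strings shipped by the base class
        self.geo_inputs = {'all': ['0-0-2000', '400-200-800']}

class CustomDefaults(StubBaseDefaults):
    def __init__(self):
        super().__init__()
        # Override the stock geometry strings with project-specific ones
        self.geo_inputs = {'all': ['100-100-1600', '300-400-600']}

dft = CustomDefaults()
```

In the real library, overriding `geo_inputs` automatically rebuilds the `Geo_objects` dict as well.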
Users can override three categories of defaults parameters:
1. geometric variables
1. loading parameters
1. material properties
As mentioned, some geometric variables are provided for general laminate dimensions. The other parameters cannot be predicted and need to be defined by the user. Below is an example of a `Defaults()` subclass. If a custom model has been implemented (see next section), it is convention to place `Defaults()` and all other custom code within that model module. If a custom model is implemented and located in the models directory, `Cases` will automatically search the designated model modules, locate the `load_params` and `mat_props` attributes, and load them for all `Cases` instantiations.
```
# Example Defaults from LamAna.models.Wilson_LT
class Defaults(BaseDefaults):
'''Return parameters for building distributions cases. Useful for consistent
testing.
Dimensional defaults are inherited from utils.BaseDefaults().
Material-specific parameters are defined here by the user.
- Default geometric and materials parameters
- Default FeatureInputs
Examples
========
>>>dft = Defaults()
>>>dft.load_params
{'R' : 12e-3, 'a' : 7.5e-3, 'p' : 1, 'P_a' : 1, 'r' : 2e-4,}
>>>dft.mat_props
{'Modulus': {'HA': 5.2e10, 'PSu': 2.7e9},
'Poissons': {'HA': 0.25, 'PSu': 0.33}}
>>>dft.FeatureInput
{'Geometry' : '400-[200]-800',
'Geometric' : {'R' : 12e-3, 'a' : 7.5e-3, 'p' : 1, 'P_a' : 1, 'r' : 2e-4,},
'Materials' : {'HA' : [5.2e10, 0.25], 'PSu' : [2.7e9, 0.33],},
'Custom' : None,
'Model' : Wilson_LT,
}
'''
def __init__(self):
BaseDefaults.__init__(self)
'''DEV: Add defaults first. Then adjust attributes.'''
# DEFAULTS ------------------------------------------------------------
# Build dicts of geometric and material parameters
self.load_params = {'R' : 12e-3, # specimen radius
'a' : 7.5e-3, # support ring radius
'p' : 5, # points/layer
'P_a' : 1, # applied load
'r' : 2e-4, # radial distance from center loading
}
self.mat_props = {'Modulus': {'HA': 5.2e10, 'PSu': 2.7e9},
'Poissons': {'HA': 0.25, 'PSu': 0.33}}
# ATTRIBUTES ----------------------------------------------------------
# FeatureInput
self.FeatureInput = self.get_FeatureInput(self.Geo_objects['standard'][0],
load_params=self.load_params,
mat_props=self.mat_props,
##custom_matls=None,
model='Wilson_LT',
global_vars=None)
'''Use Classic_LT here'''
from lamana.distributions import Cases
# Auto load_params and mat_params
dft = Defaults()
cases3 = Cases(dft.geos_full, model='Wilson_LT')
#cases3 = la.distributions.Cases(dft.geos_full, model='Wilson_LT')
cases3
'''Refine idiom for importing Cases '''
```
## Subclassing Custom Models
One of the most powerful features of LamAna is the ability to define customized modifications to the laminate theory models.
Code for laminate theories (e.g. Classic_LT, Wilson_LT) is located in the models directory. These models can be simple functions or subclasses of `BaseModels` in the `theories` module. Either approach is acceptable (see the narrative docs for more details on creating custom models).
This ability to add custom code makes this library extensible to a larger variety of models.
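The subclassing approach can be sketched with placeholder names (`StubBaseModel`, `MyCustomLT`, and the `calc_stress` hook are all hypothetical; see the narrative docs for the real `theories` API):

```python
class StubBaseModel:
    """Stand-in for a laminate-theory base class."""
    def calc_stress(self, load):
        raise NotImplementedError

class MyCustomLT(StubBaseModel):
    """Toy modification: scale stress by a correction factor."""
    correction = 1.1

    def calc_stress(self, load):
        # Customized behavior overrides the base hook
        return self.correction * load

model = MyCustomLT()
```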
## Plotting Cases
An example of multiple subplots is shown below. Using a former case, notice each subplot is independent, with separate geometries for each. LamAna treats each subplot as a subset or "caselet":
```
cases1.plot(extrema=False)
```
Each caselet can also be a separate case, plotting multiple geometries for each as accomplished with `Case`.
```
const_total = ['350-400-500', '400-200-800', '200-200-1200',
'200-100-1400', '100-100-1600', '100-200-1400',]
const_outer = ['400-550-100', '400-500-200', '400-450-300',
'400-400-400', '400-350-500', '400-300-600',
'400-250-700', '400-200-800', '400-0.5-1199']
const_inner = ['400-400-400', '350-400-500', '300-400-600',
'200-400-700', '200-400-800', '150-400-990',
'100-400-1000', '50-400-1100',]
const_middle = ['100-700-400', '150-650-400', '200-600-400',
'250-550-400', '300-400-500', '350-450-400',
'400-400-400', '450-350-400', '750-0.5-400']
case1_ = const_total
case2_ = const_outer
case3_ = const_inner
case4_ = const_middle
cases_ = [case1_, case2_, case3_, case4_]
cases3 = la.distributions.Cases(cases_, load_params=load_params,
mat_props=mat_props, model= 'Wilson_LT',
ps=[2,3])
cases3.plot(extrema=False)
```
See Demo notebooks for more examples of plotting.
## More on Cases
```
'''Fix importing cases'''
from lamana.distributions import Cases
```
###### Applying caselets
The term "caselet" is defined in LPEP 003. Most importantly, the various types a caselet represents are handled by `Cases` and discussed here. In 0.4.4b3+, caselets are contained in lists. LPEP 003 entertains the idea of containing caselets in dicts.
```
from lamana.models import Wilson_LT as wlt
dft = wlt.Defaults()
%matplotlib inline
str_caselets = ['350-400-500', '400-200-800', '400-[200]-800']
list_caselets = [['400-400-400', '400-[400]-400'],
['200-100-1400', '100-200-1400',],
['400-400-400', '400-200-800','350-400-500',],
['350-400-500']]
case1 = la.distributions.Case(dft.load_params, dft.mat_props)
case2 = la.distributions.Case(dft.load_params, dft.mat_props)
case3 = la.distributions.Case(dft.load_params, dft.mat_props)
case1.apply(['400-200-800', '400-[200]-800'])
case2.apply(['350-400-500', '400-200-800'])
case3.apply(['350-400-500', '400-200-800', '400-400-400'])
case_caselets = [case1, case2, case3]
mixed_caselets = [['350-400-500', '400-200-800',],
[['400-400-400', '400-[400]-400'],
['200-100-1400', '100-200-1400',]],
[case1, case2,]
]
dict_caselets = {0: ['350-400-500', '400-200-800', '200-200-1200',
'200-100-1400', '100-100-1600', '100-200-1400'],
1: ['400-550-100', '400-500-200', '400-450-300',
'400-400-400', '400-350-500', '400-300-600'],
2: ['400-400-400', '350-400-500', '300-400-600',
'200-400-700', '200-400-800', '150-400-990'],
3: ['100-700-400', '150-650-400', '200-600-400',
'250-550-400', '300-400-500', '350-450-400'],
}
cases = Cases(str_caselets)
#cases = Cases(str_caselets, combine=True)
#cases = Cases(list_caselets)
#cases = Cases(list_caselets, combine=True)
#cases = Cases(case_caselets)
#cases = Cases(case_caselets, combine=True) # collapse to one plot
#cases = Cases(str_caselets, ps=[2,5])
#cases = Cases(list_caselets, ps=[2,3,5,7])
#cases = Cases(case_caselets, ps=[2,5])
#cases = Cases([], combine=True) # test raises
# For next versions
#cases = Cases(dict_caselets)
#cases = Cases(mixed_caselets)
#cases = Cases(mixed_caselets, combine=True)
cases
cases.LMs
'''BUG: Following cell raises an Exception in Python 2. Comment to pass nb reg test in pytest.'''
```
```
cases.caselets
'''get out tests from code'''
'''run tests'''
'''test set selections'''
```
###### Characteristics
```
from lamana.models import Wilson_LT as wlt
dft = wlt.Defaults()
cases = Cases(dft.geo_inputs['5-ply'], ps=[2,3,4])
len(cases) # test __len__
cases.get(1) # __getitem__
#cases[2] = 'test' # __setitem__; not implemented
cases[0] # select
cases[0:2] # slice (__getitem__)
del cases[1] # __delitem__
cases # test __repr__
print(cases) # test __str__
cases == cases # test __eq__
not cases != cases # test __ne__
for i, case in enumerate(cases): # __iter__ values
print(case)
#print(case.LMs)
cases.LMs # peek inside cases
cases.frames # get a list of DataFrames directly
cases
#cases.to_csv() # write to file
```
###### Unique Cases from Intersecting Caselets
`Cases` can check whether a caselet is unique by comparing the underlying geometry strings. Here we have non-unique caselets of different types. We get unique results *within each caselet* using the `unique` keyword. Notice that different caselets could still share similar LaminateModels.
```
str_caselets = ['350-400-500', '400-200-800', '400-[200]-800']
str_caselets2 = [['350-400-500', '350-[400]-500'],
['400-200-800', '400-[200]-800']]
list_caselets = [['400-400-400', '400-[400]-400'],
['200-100-1400', '100-200-1400',],
['400-400-400', '400-200-800','350-400-500',],
['350-400-500']]
case1 = la.distributions.Case(dft.load_params, dft.mat_props)
case2 = la.distributions.Case(dft.load_params, dft.mat_props)
case3 = la.distributions.Case(dft.load_params, dft.mat_props)
case1.apply(['400-200-800', '400-[200]-800'])
case2.apply(['350-400-500', '400-200-800'])
case3.apply(['350-400-500', '400-200-800', '400-400-400'])
case_caselets = [case1, case2, case3]
```
The following cells attempt to print the LM objects. Cases objects are unordered and thus print in arbitrary order.
It is important to note that once set operations are performed, order is no longer preserved. This is related to how Python handles hashes. This applies to `Cases()` in two areas:
- The `unique` keyword optionally invoked during instantiation.
- Any use of set operation via the `how` keyword within the `Cases.select()` method.
###### Revamped Idioms
**Gotcha**: Although a `Cases` instance is a dict, as of 0.4.4b3, its `__iter__` method has been overridden to iterate over the values by default (not the keys, as normal Python dicts do). This choice was made since the keys are uninformative integers, while the values (currently cases) are of interest; it saves typing `.items()` when iterating a `Cases` instance.
```python
>>> cases = Cases()
>>> for i, case in cases.items() # python
>>> ... print(case)
>>> for case in cases: # modified
>>> ... print(case)
```
This behavior may change in future versions.
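The gotcha can be reproduced with a tiny dict subclass whose `__iter__` yields values; this sketches the behavior, not the actual `Cases` implementation:

```python
class ValueIterDict(dict):
    """Dict that iterates over values instead of keys (like Cases)."""
    def __iter__(self):
        return iter(self.values())

cases_like = ValueIterDict({0: 'case_p2', 1: 'case_p5'})
iterated = [item for item in cases_like]   # yields values, not keys
```

Note that key-based access and methods like `.keys()` still behave normally; only bare iteration changes.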
```
#----------------------------------------------------------+
# Iterating Over Cases
from lamana.models import Wilson_LT as wlt
dft = wlt.Defaults()
# Multiple cases, Multiple LMs
cases = Cases(dft.geos_full, ps=[2,5]) # two cases (p=2,5)
for i, case in enumerate(cases): # iter case values()
print('Case #: {}'.format(i))
for LM in case.LMs:
print(LM)
print("\nYou iterated several cases (ps=[2,5]) comprising many LaminateModels.")
# A single case, single LM
cases = Cases(['400-[200]-800']) # a single case and LM (manual)
for i, case_ in enumerate(cases): # iter i and case
for LM in case_.LMs:
print(LM)
print("\nYou processed a case and LaminateModel w/iteration. (Recommended)\n")
# Single case, multiple LMs
cases = Cases(dft.geos_full) # auto, default p=5
for case in cases: # iter case values()
for LM in case.LMs:
print(LM)
print("\nYou iterated a single case of many LaminateModels.")
```
###### Selecting
From cases, subsets of LaminateModels can be chosen. `select` is a method that operates on and returns sets of LaminateModels. Plotting functions are not implemented for this method directly; however, the results can be used to make new `Cases` instances from which `.plot()` is accessible. Example access techniques using `Cases`:
- Access all cases : `cases`
- Access specific cases : `cases[0:2]`
- Access all LaminateModels : `cases.LMs`
- Access LaminateModels (within a case) : `cases.LMs[0:2]`
- Select a subset of LaminateModels from all cases : `cases.select(ps=[3,4])`
```
# Iterating Over Cases
from lamana.models import Wilson_LT as wlt
dft = wlt.Defaults()
#geometries = set(dft.geos_symmetric).union(dft.geos_special + dft.geos_standard + dft.geos_dissimilar)
#cases = Cases(geometries, ps=[2,3,4])
cases = Cases(dft.geos_special, ps=[2,3,4])
# Reveal the full list: dft.geos_special
# for case in cases: # iter case values()
# for LM in case.LMs:
# print(LM)
# Test union of lists
#geometries
cases
'''Right now a case shares p, size. cases share geometries and size.'''
cases[0:2]
'''Hard to see where these come from. Use dict?'''
cases.LMs
cases.LMs[0:6:2]
cases.LMs[0:4]
```
Selections from latter cases.
```
cases.select(nplies=[2,4])
cases.select(ps=[2,4])
cases.select(nplies=4)
cases.select(ps=3)
```
###### Advanced techniques: multiple selections.
Set operations have been implemented in the selection method of `Cases` which enables filtering of unique LaminateModels that meet given conditions for `nplies` and `ps`.
- union: all LMs that meet either conditions (or)
- intersection: LMs that meet both conditions (and)
- difference: LMs that meet one condition but not the other (by default, `set(ps) - set(nplies)`)
- symmetric difference: LMs that meet either condition, but not both
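The four operations map directly onto Python's set operators; a sketch over illustrative `(geometry, p)` identifier tuples (not real LaminateModel objects):

```python
# LMs matching the nplies condition vs. the ps condition (toy identifiers)
by_nplies = {('400-200-800', 5), ('1000-0-0', 3)}
by_ps     = {('400-200-800', 5), ('0-0-2000', 4)}

union        = by_nplies | by_ps       # either condition
intersection = by_nplies & by_ps       # both conditions
difference   = by_ps - by_nplies       # ps matches minus nplies matches
symm_diff    = by_nplies ^ by_ps       # either, but not both
```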
```
cases.select(nplies=4, ps=3) # union; default
cases.select(nplies=4, ps=3, how='intersection') # intersection
```
By default, difference is subtracted as `set(ps) - set(nplies)`. Currently there is no implementation for the converse difference, but set operations still work.
```
cases.select(nplies=4, ps=3, how='difference') # difference
cases.select(nplies=4) - cases.select(ps=3) # set difference
'''How does this work?'''
cases.select(nplies=4, ps=3, how='symm diff') # symm difference
cases.select(nplies=[2,4], ps=[3,4], how='union')
cases.select(nplies=[2,4], ps=[3,4], how='intersection')
cases.select(nplies=[2,4], ps=3, how='difference')
cases.select(nplies=4, ps=[3,4], how='symmetric difference')
```
Current logic seems to return a union.
###### Enhancing selection algorithms with set operations
Need logic to append LM for the following:
- all, either, neither (and, or, not or)
- a, b are int
- a, b are list
- a, b are mixed
- b, a are mixed
```
import numpy as np
a = []
b = 1
c = np.int64(1)
d = [1,2]
e = [1,2,3]
f = [3,4]
test = 1
test in a
#test in b
#test is a
test is c
# if test is a or test is c:
# True
from lamana.utils import tools as ut
ut.compare_set(d, e)
ut.compare_set(b, d, how='intersection')
ut.compare_set(d, b, how='difference')
ut.compare_set(e, f, how='symmetric difference')
ut.compare_set(d, e, test='issubset')
ut.compare_set(e, d, test='issuperset')
ut.compare_set(d, f, test='isdisjoint')
set(d) ^ set(e)
ut.compare_set(d,e, how='symm')
g1 = dft.Geo_objects['5-ply'][0]
g2 = dft.Geo_objects['5-ply'][1]
cases = Cases(dft.geos_full, ps=[2,5]) # two cases (p=2,5)
for i, case in enumerate(cases): # iter case values()
for LM in case.LMs:
print(LM)
```
In order to compare objects in sets, they must be hashable. The simple requirement for equality is to include in the hash whatever makes the hash of a equal to the hash of b. Ideally, we would hash the Geometry object, but its inner value is a list, which is unhashable due to its mutability. Conveniently, however, strings are hashable. We can hash the geometry input strings, once they have been converted to General Convention, as unique identifiers for the Geometry object. This requires some reorganization in `Geometry`.
- ~~isolate a converter function `_to_gen_convention()`~~
- privative all functions invisible to the API
- ~~hash the converted `geo_strings`~~
- ~~privatize `_geo_strings`. This cannot be altered by the user.~~
Here we see the advantage of using geo_strings as hashables: they are inherently hashable.
UPDATE: decided to make a hashable version of the `GeometryTuple`
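Making a class usable in sets requires a consistent `__hash__`/`__eq__` pair over immutable identifiers; a sketch with a placeholder class (not the real `Laminate`):

```python
class ToyLaminate:
    """Hashable by (geo_string, p), mirroring the Geometry/p idea."""
    def __init__(self, geo_string, p):
        self.geo_string = geo_string   # assumed already in General Convention
        self.p = p

    def __eq__(self, other):
        return (self.geo_string, self.p) == (other.geo_string, other.p)

    def __hash__(self):
        # Hash the same immutable identifiers used for equality
        return hash((self.geo_string, self.p))

a = ToyLaminate('400-[200]-800', 5)
b = ToyLaminate('400-[200]-800', 5)   # equal to a; collapses in a set
c = ToyLaminate('400-[200]-800', 2)   # differs by p
```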
```
#PYTEST_VALIDATE_IGNORE_OUTPUT
hash('400-200-800')
#PYTEST_VALIDATE_IGNORE_OUTPUT
hash('400-[200]-800')
```
Need to make `Laminate` class hashable. Try to use unique identifiers such as Geometry and p.
```
#PYTEST_VALIDATE_IGNORE_OUTPUT
hash((case.LMs[0].Geometry, case.LMs[0].p))
case.LMs[0]
L = [LM for case in cases for LM in case.LMs]
L[0]
L[8]
#PYTEST_VALIDATE_IGNORE_OUTPUT
hash((L[0].Geometry, L[0].p))
#PYTEST_VALIDATE_IGNORE_OUTPUT
hash((L[1].Geometry, L[1].p))
set([L[0]]) != set([L[8]])
```
Use sets to filter unique geometry objects from `Defaults()`.
```
from lamana.models import Wilson_LT as wlt
dft = wlt.Defaults()
mix = dft.Geos_full + dft.Geos_all
mix
set(mix)
```
## Mixing Geometries
See above. Looks like comparing the order of these lists gives different results. This test has been quarantined from the repo until a solution is found.
```
mix = dft.geos_most + dft.geos_standard # 400-[200]-800 common to both
cases3a = Cases(mix, combine=True, unique=True)
cases3a.LMs
load_params['p'] = 5
cases3b5 = la.distributions.Case(load_params, dft.mat_props)
cases3b5.apply(mix)
cases3b5.LMs[:-1]
```
## Idiomatic Case Making
As we transition to more automated techniques, if parameters are to be reused multiple times, it can be helpful to store them as default values.
```
'''Add how to build Defaults()'''
# Case Building from Defaults
import lamana as la
from lamana.utils import tools as ut
from lamana.models import Wilson_LT as wlt
dft = wlt.Defaults()
##dft = ut.Defaults() # user-definable
case2 = la.distributions.Case(dft.load_params, dft.mat_props)
case2.apply(dft.geos_full) # multi plies
#LM = case2.LMs[0]
#LM.LMFrame
print("\nYou have built a case using user-defined defaults to set geometric \
loading and material parameters.")
case2
```
Finally, if building several cases is required for the same parameters, we can use higher-level API tools to help automate the process.
*Note, for every case that is created, a separate `Case()` instantiation and `Case.apply()` call is required. These techniques obviate such redundancies.*
```
# Automatic Case Building
import lamana as la
from lamana.utils import tools as ut
#Single Case
dft = wlt.Defaults()
##dft = ut.Defaults()
case3 = ut.laminator(dft.geos_full) # auto, default p=5
case3 = ut.laminator(dft.geos_full, ps=[5]) # declared
#case3 = ut.laminator(dft.geos_full, ps=[1]) # LFrame rollbacks
print("\nYou have built a case using higher-level API functions.")
case3
# How to get values from a single case (Python 3 compatible)
list(case3.values())
```
Cases are differentiated by different ps.
```
# Multiple Cases
cases1 = ut.laminator(dft.geos_full, ps=[2,3,4,5]) # multi ply, multi p
print("\nYou have built many cases using higher-level API functions.")
cases1
# How to get values from multiple cases (Python 3 compatible)
list(cases1.values())
```
Python 3 no longer returns a list for the `.values()` method, so `list()` is used to evaluate the dictionary view. While consuming a case's dict value view with `list()` works in Python 2 and 3, iteration with loops and comprehensions is the preferred technique for both single and multiple case processing. After cases are accessed, iteration can reach the contents of all cases. Iteration is the preferred technique for processing cases: it is most general, cleaner, Py2/3 compatible out of the box, and agrees with The Zen of Python:
> There should be one-- and preferably only one --obvious way to do it.
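The view-versus-list distinction can be sketched directly:

```python
d = {2: 'case_p2', 5: 'case_p5'}

view = d.values()                 # a dict view in Python 3, not a list
as_list = list(view)              # materialize only when a list is needed

# Preferred: iterate the view directly, no materialization required
collected = [case for case in d.values()]
```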
```
# Iterating Over Cases
# Latest style
case4 = ut.laminator(['400-[200]-800']) # a single case and LM
for i, case_ in case4.items(): # iter p and case
for LM in case_.LMs:
print(LM)
print("\nYou processed a case and LaminateModel w/iteration. (Recommended)\n")
case5 = ut.laminator(dft.geos_full) # auto, default p=5
for i, case in case5.items(): # iter p and case with .items()
for LM in case.LMs:
print(LM)
for case in case5.values(): # iter case only with .values()
for LM in case.LMs:
print(LM)
print("\nYou processed many cases using Case object methods.")
# Convert case dict to generator
case_gen1 = (LM for p, case in case4.items() for LM in case.LMs)
# Generator without keys
case_gen2 = (LM for case in case4.values() for LM in case.LMs)
print("\nYou have captured a case in a generator for later, one-time use.")
```
We will demonstrate comparing two techniques for generating equivalent cases.
```
# Style Comparisons
dft = wlt.Defaults()
##dft = ut.Defaults()
case1 = la.distributions.Case(load_params, mat_props)
case1.apply(dft.geos_all)
cases = ut.laminator(geos=dft.geos_all)
case2 = cases
# Equivalent calls
print(case1)
print(case2)
print("\nYou have used classic and modern styles to build equivalent cases.")
```
---
# Summary:
This notebook contains the soft smoothing figures for Amherst (Figure 2(c)).
## load libraries
```
from __future__ import division
import networkx as nx
import numpy as np
import os
from sklearn import metrics
from sklearn.preprocessing import label_binarize
from sklearn.metrics import confusion_matrix
from sklearn.metrics import f1_score
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedShuffleSplit
import matplotlib.pyplot as plt
## function to create + save dictionary of features
def create_dict(key, obj):
return(dict([(key[i], obj[i]) for i in range(len(key))]))
```
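For reference, `create_dict` is equivalent to the idiomatic `dict(zip(...))`; a quick sketch of the equivalence:

```python
def create_dict(key, obj):
    # Same helper as above: pair key[i] with obj[i]
    return dict([(key[i], obj[i]) for i in range(len(key))])

keys = range(3)
vals = ['a', 'b', 'c']
built = create_dict(keys, vals)
zipped = dict(zip(keys, vals))   # the idiomatic equivalent
```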
## load helper functions and datasets
```
# set the working directory and import helper functions
#get the current working directory and then redirect into the functions under code
cwd = os.getcwd()
# parent directory of the current directory: the code folder
parent_cwd = os.path.dirname(cwd)
# get into the functions folder
functions_cwd = parent_cwd + '/functions'
# change the working directory to be .../functions
os.chdir(functions_cwd)
# import all helper functions
exec(open('parsing.py').read())
exec(open('ZGL.py').read())
exec(open('create_graph.py').read())
exec(open('ZGL_softing_new_new.py').read())
# import the data from the data folder
data_cwd = os.path.dirname(parent_cwd)+ '/data'
# change the working directory and import the fb dataset
fb100_file = data_cwd +'/Amherst41'
A, metadata = parse_fb100_mat_file(fb100_file)
# change A(scipy csc matrix) into a numpy matrix
adj_matrix_tmp = A.todense()
# get the gender for each node (1/2; 0 for missing)
gender_y_tmp = metadata[:,1]
# get the corresponding gender for each node in dictionary form
gender_dict = create_dict(range(len(gender_y_tmp)), gender_y_tmp)
#exec(open("/Users/yatong_chen/Google Drive/research/DSG_empirical/code/functions/create_graph.py").read())
(graph, gender_y) = create_graph(adj_matrix_tmp,gender_dict,'gender',0,None,'yes')
```
## Setup
```
adj_matrix_gender = np.array(nx.adjacency_matrix(graph).todense())
percent_initially_unlabelled = [0.99,0.95,0.9,0.8,0.7,0.6,0.5,0.4,0.3,0.2,0.1,0.05]
percent_initially_labelled = np.subtract(1, percent_initially_unlabelled)
n_iter = 10
cv_setup = 'stratified'
w = [0.1,1,10,100,1000,10000]
```
## Hard Smoothing (ZGL method)
```
# run ZGL part
adj_matrix_tmp_ZGL = adj_matrix_tmp
(mean_accuracy_zgl_amherst, se_accuracy_zgl_amherst,
mean_micro_auc_zgl_amherst,se_micro_auc_zgl_amherst,
mean_wt_auc_zgl_amherst,se_wt_auc_zgl_amherst) =ZGL(np.array(adj_matrix_gender),
np.array(gender_y),percent_initially_unlabelled,
n_iter,cv_setup)
```
## Soft smoothing (with different parameters w)
```
# NEW NEW ZGL without original training node
(graph, gender_y) = create_graph(adj_matrix_tmp,gender_dict,'gender',0,None,'yes')
(mean_accuracy_zgl_softing_new_new_amherst01, se_accuracy_zgl_softing_new_new_amherst01,
mean_micro_auc_zgl_softing_new_new_amherst01,se_micro_auc_zgl_softing_new_new_amherst01,
mean_wt_auc_zgl_softing_new_new_amherst01,se_wt_auc_zgl_softing_new_new_amherst01) = ZGL_softing_new_new(w[0], adj_matrix_tmp,
gender_dict,'gender', percent_initially_unlabelled, n_iter,cv_setup)
(mean_accuracy_zgl_softing_new_new_amherst1, se_accuracy_zgl_softing_new_new_amherst1,
mean_micro_auc_zgl_softing_new_new_amherst1,se_micro_auc_zgl_softing_new_new_amherst1,
mean_wt_auc_zgl_softing_new_new_amherst1,se_wt_auc_zgl_softing_new_new_amherst1) = ZGL_softing_new_new(w[1], adj_matrix_tmp,
gender_dict,'gender', percent_initially_unlabelled, n_iter,cv_setup)
(mean_accuracy_zgl_softing_new_new_amherst10, se_accuracy_zgl_softing_new_new_amherst10,
mean_micro_auc_zgl_softing_new_new_amherst10,se_micro_auc_zgl_softing_new_new_amherst10,
mean_wt_auc_zgl_softing_new_new_amherst10,se_wt_auc_zgl_softing_new_new_amherst10) = ZGL_softing_new_new(w[2], adj_matrix_tmp,
gender_dict,'gender', percent_initially_unlabelled, n_iter,cv_setup)
(mean_accuracy_zgl_softing_new_new_amherst100, se_accuracy_zgl_softing_new_new_amherst100,
mean_micro_auc_zgl_softing_new_new_amherst100,se_micro_auc_zgl_softing_new_new_amherst100,
mean_wt_auc_zgl_softing_new_new_amherst100,se_wt_auc_zgl_softing_new_new_amherst100) = ZGL_softing_new_new(w[3], adj_matrix_tmp,
gender_dict,'gender', percent_initially_unlabelled, n_iter,cv_setup)
(mean_accuracy_zgl_softing_new_new_amherst1000, se_accuracy_zgl_softing_new_new_amherst1000,
mean_micro_auc_zgl_softing_new_new_amherst1000,se_micro_auc_zgl_softing_new_new_amherst1000,
mean_wt_auc_zgl_softing_new_new_amherst1000,se_wt_auc_zgl_softing_new_new_amherst1000) = ZGL_softing_new_new(w[4], adj_matrix_tmp,
gender_dict,'gender', percent_initially_unlabelled, n_iter,cv_setup)
(mean_accuracy_zgl_softing_new_new_amherst10000, se_accuracy_zgl_softing_new_new_amherst10000,
mean_micro_auc_zgl_softing_new_new_amherst10000,se_micro_auc_zgl_softing_new_new_amherst10000,
mean_wt_auc_zgl_softing_new_new_amherst10000,se_wt_auc_zgl_softing_new_new_amherst10000) = ZGL_softing_new_new(w[5], adj_matrix_tmp,
gender_dict,'gender', percent_initially_unlabelled, n_iter,cv_setup)
```
## Plot:
AUC against initial unlabelled node percentage
```
%matplotlib inline
from matplotlib.ticker import FixedLocator,LinearLocator,MultipleLocator, FormatStrFormatter
fig = plt.figure()
#seaborn.set_style(style='white')
from mpl_toolkits.axes_grid1 import Grid
grid = Grid(fig, rect=111, nrows_ncols=(1,1),
axes_pad=0.1, label_mode='L')
for i in range(4):
if i == 0:
# set the x and y axis
grid[i].xaxis.set_major_locator(FixedLocator([0,25,50,75,100]))
grid[i].yaxis.set_major_locator(FixedLocator([0.4, 0.5,0.6,0.7,0.8,0.9,1]))
grid[i].errorbar(percent_initially_labelled*100, mean_wt_auc_zgl_amherst,
yerr=se_wt_auc_zgl_amherst, fmt='--o', capthick=2,
alpha=1, elinewidth=8, color='black')
grid[i].errorbar(percent_initially_labelled*100, mean_wt_auc_zgl_softing_new_new_amherst01,
yerr=se_wt_auc_zgl_softing_new_new_amherst01, fmt='--o', capthick=2,
alpha=1, elinewidth=3, color='gold')
grid[i].errorbar(percent_initially_labelled*100, mean_wt_auc_zgl_softing_new_new_amherst1,
yerr=se_wt_auc_zgl_softing_new_new_amherst1, fmt='--o', capthick=2,
alpha=1, elinewidth=3, color='darkorange')
grid[i].errorbar(percent_initially_labelled*100, mean_wt_auc_zgl_softing_new_new_amherst10,
yerr=se_wt_auc_zgl_softing_new_new_amherst10, fmt='--o', capthick=2,
alpha=1, elinewidth=3, color='crimson')
grid[i].errorbar(percent_initially_labelled*100, mean_wt_auc_zgl_softing_new_new_amherst100,
yerr=se_wt_auc_zgl_softing_new_new_amherst100, fmt='--o', capthick=2,
alpha=1, elinewidth=3, color='red')
grid[i].errorbar(percent_initially_labelled*100, mean_wt_auc_zgl_softing_new_new_amherst1000,
yerr=se_wt_auc_zgl_softing_new_new_amherst1000, fmt='--o', capthick=2,
alpha=1, elinewidth=3, color='maroon')
grid[i].errorbar(percent_initially_labelled*100, mean_wt_auc_zgl_softing_new_new_amherst10000,
yerr=se_wt_auc_zgl_softing_new_new_amherst10000, fmt='--o', capthick=2,
alpha=1, elinewidth=3, color='darkred')
grid[i].set_ylim(0.45,1)
grid[i].set_xlim(0,101)
grid[i].annotate('soft: a = 0.001', xy=(3, 0.96),
color='gold', alpha=1, size=12)
grid[i].annotate('soft: a = 1', xy=(3, 0.92),
color='darkorange', alpha=1, size=12)
grid[i].annotate('soft: a = 10', xy=(3, 0.88),
color='crimson', alpha=1, size=12)
grid[i].annotate('soft: a = 100', xy=(3, 0.84),
color='red', alpha=1, size=12)
grid[i].annotate('soft: a = 1000', xy=(3, 0.80),
color='maroon', alpha=1, size=12)
grid[i].annotate('soft: a = 10000', xy=(3, 0.76),
color='darkred', alpha=1, size=12)
grid[i].annotate('hard smoothing', xy=(3, 0.72),
color='black', alpha=1, size=12)
grid[i].set_ylim(0.4,0.8)
grid[i].set_xlim(0,100)
grid[i].spines['right'].set_visible(False)
grid[i].spines['top'].set_visible(False)
grid[i].tick_params(axis='both', which='major', labelsize=13)
grid[i].tick_params(axis='both', which='minor', labelsize=13)
grid[i].set_xlabel('Percent of Nodes Initially Labeled').set_fontsize(15)
grid[i].set_ylabel('AUC').set_fontsize(15)
grid[0].set_xticks([0,25, 50, 75, 100])
grid[0].set_yticks([0.4,0.6,0.8,1])
grid[0].minorticks_on()
grid[0].tick_params('both', length=4, width=1, which='major', left=1, bottom=1, top=0, right=0)
```
# Training a DeepSpeech LSTM Model using the LibriSpeech Data
At the end of Chapter 16 and into Chapter 17, the book suggests trying to build an automatic speech recognition system using the LibriVox corpus and the long short-term memory (LSTM) models just covered in the recurrent neural network (RNN) chapter. This particular exercise turned out to be quite difficult, mostly because of the work of simply gathering and formatting the data, combined with the effort to understand what the LSTM was doing. As it turns out, in doing this assignment I taught myself about MFCCs (mel-frequency cepstral coefficients), which are exactly what the Bregman Toolkit example earlier in the book was computing: a process that uses an FFT to move audio into the frequency domain and then a second transform over the log spectrum to produce *num_cepstrals* coefficients per frame. LSTMs need time-series data, so once you have a set of audio files converted with MFCCs into per-frame coefficients, paired with transcripts for each utterance, you are in business!
The other major lesson was finding [RNN-Tutorial](https://github.com/mrubash1/RNN-Tutorial), an existing GitHub repository that implements a simplified version of Mozilla's [DeepSpeech model](https://github.com/mozilla/DeepSpeech), itself a TensorFlow implementation of Baidu's architecture from the [seminal 2014 paper](https://arxiv.org/abs/1412.5567).
Along the way I had to figure out how to tweak hyperparameters, including the number of epochs, the batch size, and the amount of training data. Overall this is a great architecture and a good example of using a validation/dev set during training to compare validation loss against training loss, and then a held-out test set to measure accuracy.
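The MFCC idea above can be sketched in a few lines of plain NumPy/SciPy. This is a deliberately simplified version that skips the mel filterbank a real MFCC applies between the spectrogram and the DCT, and `simple_cepstral_features` is a hypothetical helper, not code from the book:

```python
import numpy as np
from scipy import signal
from scipy.fftpack import dct

def simple_cepstral_features(audio, sample_rate, num_cepstra=13):
    # Short-time spectrogram: one power spectrum per overlapping frame
    _, _, spec = signal.spectrogram(audio, fs=sample_rate,
                                    nperseg=256, noverlap=128)
    log_spec = np.log(spec.T + 1e-10)  # (frames, freq_bins)
    # DCT of the log spectrum -> cepstral coefficients; keep the first few
    return dct(log_spec, type=2, axis=1, norm='ortho')[:, :num_cepstra]

# A one-second 440 Hz tone stands in for a real utterance
sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
tone = np.sin(2 * np.pi * 440 * t).astype(np.float32)
feats = simple_cepstral_features(tone, sr)
print(feats.shape)  # (frames, 13)
```

The resulting (frames, coefficients) matrix is exactly the kind of time series an LSTM can consume.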
### Data Preprocessing Steps:
1. Grab all text files which start out as the full speech from all subsequent \*.flac files
2. Each line in the text file contains:
```
filename(without .txt at end) the speech present in the file, e.g., words separated by spaces
filename N ... words ....
```
3. Then convert all \*.flac files to \*.wav files, using `flac2wav`
4. Remove all the flac files and remove the \*.trans.txt files
5. Run this code in the notebook below to generate the associated \*.txt file to go along with each \*.wav file.
6. Move all the \*.wav and \*.txt files into a single folder, e.g., `LibriSpeech/train-clean-all`
7. Repeat for test and dev
Once complete, you have a dataset to run through [RNN-Tutorial](https://github.com/mrubash1/RNN-Tutorial.git)
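Step 5 above can be sketched roughly as follows, assuming each line of a \*.trans.txt file looks like `ID WORD WORD ...`; `parse_trans_line` and `write_utterance_txts` are hypothetical helper names, not part of RNN-Tutorial:

```python
import os

def parse_trans_line(line):
    """Split 'ID WORD WORD ...' into (utterance_id, transcript)."""
    utt_id, _, text = line.strip().partition(' ')
    return utt_id, text

def write_utterance_txts(trans_path, out_dir):
    """Write one <utterance_id>.txt per line of a *.trans.txt file."""
    os.makedirs(out_dir, exist_ok=True)
    with open(trans_path) as f:
        for line in f:
            if not line.strip():
                continue
            utt_id, text = parse_trans_line(line)
            with open(os.path.join(out_dir, utt_id + '.txt'), 'w') as out:
                out.write(text + '\n')

print(parse_trans_line('3486-166424-0004 HE BEGAN A CONFUSED COMPLAINT'))
# -> ('3486-166424-0004', 'HE BEGAN A CONFUSED COMPLAINT')
```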
### References
1. [PyDub](https://github.com/jiaaro/pydub) - PyDub library
2. [A short reminder of how CTC works](https://towardsdatascience.com/beam-search-decoding-in-ctc-trained-neural-networks-5a889a3d85a7)
3. [OpenSLR - LibriSpeech corpus](http://www.openslr.org/12)
4. [Hamsa's Deep Speech notebook](https://github.com/cosmoshsv/Deep-Speech/blob/master/DeepSpeech_RNN_Training.ipynb)
5. [LSTM's by example using TensorFlow](https://towardsdatascience.com/lstm-by-example-using-tensorflow-feb0c1968537)
6. [How to read an audio file using TensorFlow APIs](https://github.com/tensorflow/tensorflow/issues/28237)
7. [Audio spectrograms in TensorFlow](https://mauri870.github.io/blog/posts/audio-spectrograms-in-tensorflow/)
8. [Reading audio files using TensorFlow](https://github.com/tensorflow/tensorflow/issues/32382)
9. [TensorFlow's decode_wav API](https://www.tensorflow.org/api_docs/python/tf/audio/decode_wav)
10. [Speech Recognition](https://towardsdatascience.com/speech-recognition-analysis-f03ff9ce78e9)
11. [Using TensorFlow's audio ops](https://stackoverflow.com/questions/48660391/using-tensorflow-contrib-framework-python-ops-audio-ops-audio-spectrogram-to-gen)
12. [LSTM by Example - Towards Data Science](https://towardsdatascience.com/lstm-by-example-using-tensorflow-feb0c1968537)
13. [Training your Own Model - DeepSpeech](https://deepspeech.readthedocs.io/en/v0.7.3/TRAINING.html)
14. [Understanding LSTMs](http://colah.github.io/posts/2015-08-Understanding-LSTMs/)
15. [Implementing LSTMs](https://apaszke.github.io/lstm-explained.html)
16. [Mel Frequency Cepstral Coefficient](http://practicalcryptography.com/miscellaneous/machine-learning/guide-mel-frequency-cepstral-coefficients-mfccs/)
17. [TensorFlow - Extract Every Other Element](https://stackoverflow.com/questions/46721407/tensorflow-extract-every-other-element)
18. [Plotting MFCCs in TensorFlow](https://stackoverflow.com/questions/47056432/is-it-possible-to-get-exactly-the-same-results-from-tensorflow-mfcc-and-librosa)
19. [MFCCs in TensorFlow](https://kite.com/python/docs/tensorflow.contrib.slim.rev_block_lib.contrib_framework_ops.audio_ops.mfcc)
20. [How to train Baidu's Deep Speech Model with Kur](https://blog.deepgram.com/how-to-train-baidus-deepspeech-model-with-kur/)
21. [Silicon Valley Data Science SVDS - RNN Tutorial](https://www.svds.com/tensorflow-rnn-tutorial/)
22. [Streaming RNNs with TensorFlow](https://hacks.mozilla.org/2018/09/speech-recognition-deepspeech/)
```
import sys
sys.path.append("../libs/basic_units/")
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.audio import decode_wav
from tensorflow.raw_ops import Mfcc, AudioSpectrogram
from tqdm.notebook import tqdm
from basic_units import cm, inch
import glob
from scipy import signal
import soundfile as sf
import os
import time
import csv
speech_data_path = "../data/LibriSpeech"
train_path = speech_data_path + "/train-clean-100"
dev_path = speech_data_path + "/dev-clean"
test_path = speech_data_path + "/test-clean"
train_transcripts = [file for file in glob.glob(train_path + "/*/*/*.txt")]
dev_transcripts = [file for file in glob.glob(dev_path + "/*/*/*.txt")]
test_transcripts = [file for file in glob.glob(test_path + "/*/*/*.txt")]
train_audio_wav = [file for file in glob.glob(train_path + "/*/*/*.wav")]
dev_audio_wav = [file for file in glob.glob(dev_path + "/*/*/*.wav")]
test_audio_wav = [file for file in glob.glob(test_path + "/*/*/*.wav")]
sys.path.append("../libs/RNN-Tutorial/src")
numcep=26
numcontext=9
filename = '../data/LibriSpeech/train-clean-100/3486/166424/3486-166424-0004.wav'
raw_audio = tf.io.read_file(filename)
audio, fs = decode_wav(raw_audio)
print(np.shape(audio.numpy()))
print(fs.numpy())
# Get mfcc coefficients
spectrogram = AudioSpectrogram(
input=audio, window_size=1024,stride=64)
orig_inputs = Mfcc(spectrogram=spectrogram, sample_rate=fs, dct_coefficient_count=numcep)
audio_mfcc = orig_inputs.numpy()
print(audio_mfcc)
print(np.shape(audio_mfcc))
# np.histogram returns (counts, bin_edges), so plot the precomputed counts
# as a bar chart instead of re-binning them with plt.hist
counts, bin_edges = np.histogram(audio_mfcc, bins=range(9 + 1))
plt.bar(bin_edges[:-1], counts, width=1, edgecolor='black')
plt.show()
labels=[]
for i in np.arange(26):
labels.append("P"+str(i+1))
fig, ax = plt.subplots()
ind = np.arange(len(labels))
width = 0.15
colors = ['r', 'g', 'y', 'b', 'black']
plots = []
for i in range(0, 5):
Xs = np.asarray(np.abs(audio_mfcc[0][i])).reshape(-1)
p = ax.bar(ind + i*width, Xs, width, color=colors[i])
plots.append(p[0])
xticks = ind + width / (audio_mfcc.shape[0])
print(xticks)
ax.legend(tuple(plots), ('S1', 'S2', 'S3', 'S4', 'S5'))
ax.yaxis.set_units(inch)
ax.autoscale_view()
ax.set_xticks(xticks)
ax.set_xticklabels(labels)
ax.set_ylabel('Normalized freq count')
ax.set_xlabel('Pitch')
ax.set_title('Normalized Frequency Counts for Various Sounds')
plt.show()
filename = '../data/LibriSpeech/train-clean-100/3486/166424/3486-166424-0004.wav'
raw_audio = tf.io.read_file(filename)
audio, fs = decode_wav(raw_audio)
wsize = 16384 #1024
stride = 448 #64
# Get mfcc coefficients
spectrogram = AudioSpectrogram(
input=audio, window_size=wsize,stride=stride)
numcep=26
numcontext=9
orig_inputs = Mfcc(spectrogram=spectrogram, sample_rate=fs, dct_coefficient_count=numcep)
orig_inputs = orig_inputs[:,::2]
audio_mfcc = orig_inputs.numpy()
print(audio_mfcc)
print(np.shape(audio_mfcc))
train_inputs = np.array([], np.float32)
train_inputs.resize((audio_mfcc.shape[1], numcep + 2 * numcep * numcontext))
# Prepare pre-fix post fix context
empty_mfcc = np.array([])
empty_mfcc.resize((numcep))
empty_mfcc = tf.convert_to_tensor(empty_mfcc, dtype=tf.float32)
empty_mfcc_ev = empty_mfcc.numpy()
# Prepare train_inputs with past and future contexts
# This code always takes 9 time steps previous and 9 time steps in the future along with the current time step
time_slices = range(train_inputs.shape[0])
context_past_min = time_slices[0] + numcontext #starting min point for past content, has to be at least 9 ts
context_future_max = time_slices[-1] - numcontext #ending point max for future content, size time slices - 9ts
for time_slice in tqdm(time_slices):
#print('time slice %d ' % (time_slice))
# Reminder: array[start:stop:step]
# slices from indice |start| up to |stop| (not included), every |step|
# Add empty context data of the correct size to the start and end
# of the MFCC feature matrix
# Pick up to numcontext time slices in the past, and complete with empty
# mfcc features
need_empty_past = max(0, (context_past_min - time_slice))
empty_source_past = np.asarray([empty_mfcc_ev for empty_slots in range(need_empty_past)])
data_source_past = orig_inputs[0][max(0, time_slice - numcontext):time_slice]
assert(len(empty_source_past) + data_source_past.numpy().shape[0] == numcontext)
# Pick up to numcontext time slices in the future, and complete with empty
# mfcc features
need_empty_future = max(0, (time_slice - context_future_max))
empty_source_future = np.asarray([empty_mfcc_ev for empty_slots in range(need_empty_future)])
data_source_future = orig_inputs[0][time_slice + 1:time_slice + numcontext + 1]
assert(len(empty_source_future) + data_source_future.numpy().shape[0] == numcontext)
# pad if needed for the past or future, or else simply take past and future
if need_empty_past:
past = tf.concat([tf.cast(empty_source_past, tf.float32), tf.cast(data_source_past, tf.float32)], 0)
else:
past = data_source_past
if need_empty_future:
future = tf.concat([tf.cast(data_source_future, tf.float32), tf.cast(empty_source_future, tf.float32)], 0)
else:
future = data_source_future
past = tf.reshape(past, [numcontext*numcep])
now = orig_inputs[0][time_slice]
future = tf.reshape(future, [numcontext*numcep])
train_inputs[time_slice] = np.concatenate((past.numpy(), now.numpy(), future.numpy()))
assert(train_inputs[time_slice].shape[0] == numcep + 2*numcep*numcontext)
train_inputs = (train_inputs - np.mean(train_inputs)) / np.std(train_inputs)
print('Train inputs shape %s ' % str(np.shape(train_inputs)))
print('Train inputs '+str(train_inputs))
preprocessing = {
'data_dir': train_path,
'cache_dir' : '../data/cache/LibriSpeech',
'window_size': 20,
'step_size': 10
}
model = {
'verbose': 1,
'conv_channels': [100],
'conv_filters': [5],
'conv_strides': [2],
'rnn_units': [64],
'bidirectional_rnn': True,
'future_context': 2,
'use_bn': True,
'learning_rate': 0.001
}
training = {
'tensorboard': False,
'log_dir': './logs',
'batch_size': 5,
'epochs': 5,
'validation_size': 0.2,
'max_train' : 100
}
if not os.path.exists(preprocessing['cache_dir']):
os.makedirs(preprocessing['cache_dir'])
def clipped_relu(x):
return tf.keras.activations.relu(x, max_value=20)
def ctc_lambda_func(args):
y_pred, labels, input_length, label_length = args
return tf.keras.backend.ctc_batch_cost(labels, y_pred, input_length, label_length)
def ctc(y_true, y_pred):
return y_pred
class SpeechModel(object):
def __init__(self, hparams):
input_data = tf.keras.layers.Input(name='inputs', shape=[hparams['max_input_length'], 161])
x = input_data
if hparams['use_bn']:
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.ZeroPadding1D(padding=(0, hparams['max_input_length']))(x)
for i in range(len(hparams['conv_channels'])):
x = tf.keras.layers.Conv1D(hparams['conv_channels'][i], hparams['conv_filters'][i],
strides=hparams['conv_strides'][i], activation='relu', padding='same')(x)
if hparams['use_bn']:
x = tf.keras.layers.BatchNormalization()(x)
for h_units in hparams['rnn_units']:
if hparams['bidirectional_rnn']:
h_units = int(h_units / 2)
gru = tf.keras.layers.GRU(h_units, activation='relu', return_sequences=True)
if hparams['bidirectional_rnn']:
gru = tf.keras.layers.Bidirectional(gru, merge_mode='sum')
x = gru(x)
if hparams['use_bn']:
x = tf.keras.layers.BatchNormalization()(x)
if hparams['future_context'] > 0:
if hparams['future_context'] > 1:
x = tf.keras.layers.ZeroPadding1D(padding=(0, hparams['future_context'] - 1))(x)
x = tf.keras.layers.Conv1D(100, hparams['future_context'], activation='relu')(x)
y_pred = tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(hparams['vocab_size'] + 1,
activation='sigmoid'))(x)
labels = tf.keras.layers.Input(name='labels', shape=[None], dtype='int32')
input_length = tf.keras.layers.Input(name='input_lengths', shape=[1], dtype='int32')
label_length = tf.keras.layers.Input(name='label_lengths', shape=[1], dtype='int32')
loss_out = tf.keras.layers.Lambda(ctc_lambda_func, output_shape=(1,), name='ctc')([y_pred,
labels,
input_length,
label_length])
self.model = tf.keras.Model(inputs=[input_data, labels, input_length, label_length], outputs=[loss_out])
if hparams['verbose']:
print(self.model.summary())
optimizer = tf.keras.optimizers.Adam(lr=hparams['learning_rate'], beta_1=0.9, beta_2=0.999,
epsilon=1e-8, clipnorm=5)
self.model.compile(optimizer=optimizer, loss=ctc)
def train_generator(self, generator, train_params):
callbacks = []
if train_params['tensorboard']:
callbacks.append(tf.keras.callbacks.TensorBoard(train_params['log_dir'], write_images=True))
self.model.fit(generator, epochs=train_params['epochs'],
steps_per_epoch=train_params['steps_per_epoch'],
callbacks=callbacks)
def create_character_mapping():
character_map = {' ': 0}
for i in range(97, 123):
character_map[chr(i)] = len(character_map)
return character_map
def get_data_details(filename):
result = {
'max_input_length': 0,
'max_label_length': 0,
'num_samples': 0
}
# Get max lengths
with open(filename, 'r') as metadata:
metadata_reader = csv.DictReader(metadata, fieldnames=['filename', 'spec_length', 'labels_length', 'labels'])
next(metadata_reader)
for row in metadata_reader:
if int(row['spec_length']) > result['max_input_length']:
result['max_input_length'] = int(row['spec_length'])
if int(row['labels_length']) > result['max_label_length']:
result['max_label_length'] = int(row['labels_length'])
result['num_samples'] += 1
return result
def create_data_generator(directory, max_input_length, max_label_length, batch_size=64, num_epochs=5):
x, y, input_lengths, label_lengths = [], [], [], []
epochs = 0
while epochs < num_epochs:
with open(os.path.join(directory, 'LibriSpeech-metadata.csv'), 'r') as metadata:
metadata_reader = csv.DictReader(metadata, fieldnames=['filename', 'spec_length', 'labels_length', 'labels'])
next(metadata_reader)
for row in metadata_reader:
audio = np.load(os.path.join(directory, row['filename'] + '.npy'))
x.append(audio)
y.append([int(i) for i in row['labels'].split(' ')])
input_lengths.append(row['spec_length'])
label_lengths.append(row['labels_length'])
if len(x) == batch_size:
yield {
'inputs': tf.keras.preprocessing.sequence.pad_sequences(x, maxlen=max_input_length, padding='post'),
'labels': tf.keras.preprocessing.sequence.pad_sequences(y, maxlen=max_label_length, padding='post'),
'input_lengths': np.asarray(input_lengths, dtype=np.int32),
'label_lengths': np.asarray(label_lengths, dtype=np.int32)
}, {
'ctc': np.zeros([batch_size])
}
x, y, input_lengths, label_lengths = [], [], [], []
epochs = epochs + 1
def log_linear_specgram(audio, sample_rate, window_size=20,
step_size=10, eps=1e-10):
nperseg = int(round(window_size * sample_rate / 1e3))
noverlap = int(round(step_size * sample_rate / 1e3))
_, _, spec = signal.spectrogram(audio, fs=sample_rate,
window='hann', nperseg=nperseg, noverlap=noverlap,
detrend=False)
return np.log(spec.T.astype(np.float32) + eps)
def preprocess_librispeech(directory):
print("Pre-processing LibriSpeech corpus")
start_time = time.time()
character_mapping = create_character_mapping()
if not os.path.exists(preprocessing['data_dir']):
os.makedirs(preprocessing['data_dir'])
dir_walk = list(os.walk(directory))
num_hours = 0
num_train = 0
with open(os.path.join(preprocessing['cache_dir'] + '/LibriSpeech-metadata.csv'), 'w', newline='') as metadata:
metadata_writer = csv.DictWriter(metadata, fieldnames=['filename', 'spec_length', 'labels_length', 'labels'])
metadata_writer.writeheader()
for root, dirs, files in tqdm(dir_walk):
for file in files:
if file[-4:] == '.txt' and num_train < training['max_train']:
filename = os.path.join(root, file)
with open(filename, 'r') as f:
txt = f.read().split(' ')
filename_base_no_path = os.path.splitext(file)[0]
filename_base = os.path.splitext(filename)[0]
filename_wav = filename_base + '.wav'
audio, sr = sf.read(filename_wav)
num_hours += (len(audio) / sr) / 3600
spec = log_linear_specgram(audio, sr, window_size=preprocessing['window_size'],
step_size=preprocessing['step_size'])
np.save(os.path.join(preprocessing['cache_dir'], filename_base_no_path) + '.npy', spec)
ids = [character_mapping[c] for c in ' '.join(txt).lower()
if c in character_mapping]
metadata_writer.writerow({
'filename': filename_base_no_path,
'spec_length': spec.shape[0],
'labels_length': len(ids),
'labels': ' '.join([str(i) for i in ids])
})
if num_train + 1 <= training['max_train']:
num_train = num_train + 1
if num_train >= training['max_train']:
print('Processed {} files: max train {} reached...'.format(num_train, training['max_train']))
break
print("Done!")
print("Hours pre-processed: " + str(num_hours))
print("Time: " + str(time.time() - start_time))
preprocess_librispeech(preprocessing['data_dir'])
character_mapping = create_character_mapping()
data_details = get_data_details(filename=os.path.join(preprocessing['cache_dir'], 'LibriSpeech-metadata.csv'))
print(data_details)
training['steps_per_epoch'] = int(data_details['num_samples'] / training['batch_size'])
model['max_input_length'] = data_details['max_input_length']
model['max_label_length'] = data_details['max_label_length']
model['vocab_size'] = len(character_mapping)
data_generator = create_data_generator(directory=preprocessing['cache_dir'],
max_input_length=model['max_input_length'],
max_label_length=model['max_label_length'],
batch_size=training['batch_size'],
num_epochs=training['epochs'])
speech_model = SpeechModel(hparams=model)
speech_model.train_generator(data_generator, training)
tst_gen = create_data_generator(directory=preprocessing['cache_dir'],
max_input_length=model['max_input_length'],
max_label_length=model['max_label_length'],
batch_size=training['batch_size'],
num_epochs=1)
for i in tst_gen:
print(speech_model.model.predict(i[0]))
```
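The last cell above prints raw per-timestep class probabilities; to turn them into a label sequence you still need a CTC decoder. Below is a minimal greedy-decoding sketch (take the argmax path, collapse repeats, drop blanks); `greedy_ctc_decode` is not part of the notebook, and placing the blank at the last index is an assumption:

```python
import numpy as np

def greedy_ctc_decode(probs, blank_index):
    """Greedy CTC: argmax path, collapse repeats, drop blanks.
    Assumes probs has shape (timesteps, num_classes), timesteps >= 1."""
    best = np.argmax(probs, axis=-1)
    collapsed = [best[0]] + [b for prev, b in zip(best, best[1:]) if b != prev]
    return [int(i) for i in collapsed if i != blank_index]

# Toy distribution over classes {0, 1, blank=2} across five timesteps
probs = np.array([
    [0.9, 0.05, 0.05],   # 0
    [0.9, 0.05, 0.05],   # 0 (repeat, collapsed away)
    [0.05, 0.05, 0.9],   # blank
    [0.05, 0.9, 0.05],   # 1
    [0.05, 0.9, 0.05],   # 1 (repeat, collapsed away)
])
print(greedy_ctc_decode(probs, blank_index=2))  # -> [0, 1]
```

Beam-search decoding (see reference 2) usually gives better transcripts, but greedy decoding is enough to sanity-check a trained model.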
```
%config IPCompleter.greedy=True
# then press "." + [Tab] simultaneously -> IntelliSense-style completion
# $ conda info -e -> shows conda envs
# $ conda activate py37
# (py37) $ pip install jupyter-tabnine ...
# press [Shift]+[Tab] from within the method parentheses to see the signature
### IntelliSense setup - execute in a command-line window inside (py37):
# (py37) $ pip3 install jupyter-tabnine
# (py37) $ sudo jupyter nbextension install --py jupyter_tabnine
# (py37) $ jupyter nbextension enable jupyter_tabnine --py
##### jupyter nbextension enable --py jupyter_tabnine ## executed instead of the line above
# (py37) $ jupyter serverextension enable --py jupyter_tabnine
# --> installed in the (py37) conda env
```
# Python Language Basics, IPython, and Jupyter Notebooks
```
import numpy as np
np.random.seed(12345)
np.set_printoptions(precision=4, suppress=True)
import IPython
print(IPython.sys_info())
an_example = 42
an_apple = 27 #an<tab> then enter : autocomplete
b = [1,2,3]
b.append(an_apple)
b
```
## The Python Interpreter
```python
$ python
Python 3.6.0 | packaged by conda-forge | (default, Jan 13 2017, 23:17:12)
[GCC 4.8.2 20140120 (Red Hat 4.8.2-15)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> a = 5
>>> print(a)
5
```
```python
print('Hello world')
```
```python
$ python hello_world.py
Hello world
```
```shell
$ ipython
Python 3.6.0 | packaged by conda-forge | (default, Jan 13 2017, 23:17:12)
Type "copyright", "credits" or "license" for more information.
IPython 5.1.0 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
In [1]: %run hello_world.py
Hello world
In [2]:
```
## IPython Basics
### Running the IPython Shell
#### type in command line :
```
$ ipython -> the results are shown in the same command-line window
$ data = { i : np.random.rand() for i in np.arange(10)}
$ data
```
```
import numpy as np
data = {i : np.random.randn() for i in range(7)}
data
```
```python
>>> from numpy.random import randn
>>> data = {i : randn() for i in range(7)}
>>> print(data)
{0: -1.5948255432744511, 1: 0.10569006472787983, 2: 1.972367135977295,
3: 0.15455217573074576, 4: -0.24058577449429575, 5: -1.2904897053651216,
6: 0.3308507317325902}
```
### Running the Jupyter Notebook
```shell
$ jupyter notebook
[I 15:20:52.739 NotebookApp] Serving notebooks from local directory:
/home/wesm/code/pydata-book
[I 15:20:52.739 NotebookApp] 0 active kernels
[I 15:20:52.739 NotebookApp] The Jupyter Notebook is running at:
http://localhost:8888/
[I 15:20:52.740 NotebookApp] Use Control-C to stop this server and shut down
all kernels (twice to skip confirmation).
Created new window in existing browser session.
```
### Tab Completion
```
import IPython
print(IPython.sys_info())
def func_with_keywords(abra = 1, abbra=2, abbbra=3):
return abra, abbra, abbbra
abra, abbra, abbbra = 10, 20, 30
func_with_keywords(abra, abbra, abbbra) #autocomplete
abra, abbra = 10, 20
func_with_keywords(abra, abbra)
```
```
In [1]: an_apple = 27
In [2]: an_example = 42
In [3]: an
```
```
In [3]: b = [1, 2, 3]
In [4]: b.
```
```
In [1]: import datetime
In [2]: datetime.
```
```
import datetime
datetime.datetime
```
```
In [7]: datasets/movielens/
```
### Introspection
```
In [8]: b = [1, 2, 3]
In [9]: b?
Type: list
String Form:[1, 2, 3]
Length: 3
Docstring:
list() -> new empty list
list(iterable) -> new list initialized from iterable's items
In [10]: print?
Docstring:
print(value, ..., sep=' ', end='\n', file=sys.stdout, flush=False)
Prints the values to a stream, or to sys.stdout by default.
Optional keyword arguments:
file: a file-like object (stream); defaults to the current sys.stdout.
sep: string inserted between values, default a space.
end: string appended after the last value, default a newline.
flush: whether to forcibly flush the stream.
Type: builtin_function_or_method
```
```
#Using a question mark ( ? ) before or after a variable will display
# some general information about the object:
b?
print?
```
```python
def add_numbers(a, b):
"""
Add two numbers together
Returns
-------
the_sum : type of arguments
"""
return a + b
```
```python
In [11]: add_numbers?
Signature: add_numbers(a, b)
Docstring:
Add two numbers together
Returns
-------
the_sum : type of arguments
File: <ipython-input-9-6a548a216e27>
Type: function
```
```python
In [12]: add_numbers??
Signature: add_numbers(a, b)
Source:
def add_numbers(a, b):
"""
Add two numbers together
Returns
-------
the_sum : type of arguments
"""
return a + b
File: <ipython-input-9-6a548a216e27>
Type: function
```
```
def add_numbers(a, b):
"""
Add two numbers together
Returns
-------
the_sum : type of arguments
"""
return a + b
add_numbers? #return docstring (""" docstring """)
# Using ?? will also show the function’s source code if possible:\
add_numbers??
```
```python
In [13]: np.*load*?
np.__loader__
np.load
np.loads
np.loadtxt
np.pkgload
```
```
# searching the IPython namespace
# A number of characters combined
# with the wildcard ( * ) will show all names matching the wildcard expression.
np.*load*?
np.random.*rand*?
np.random.randn?
```
### The %run Command
#### Running any file as a Python program with %run
You can run any file as a Python program inside the environment of your IPython session using the %run command. Suppose you had the following simple script stored in ipython_script_test.py:
```python
def f(x, y, z):
return (x + y) / z
a = 5
b = 6
c = 7.5
result = f(a, b, c)
print(result)
```
```python
In [14]: %run ipython_script_test.py
```
```
# You can execute this by passing the filename to %run :
%run ipython_script_test.py
#All of the variables (imports, functions, and globals)
#defined in the file(up until an exception, if any, is raised) will then be accessible in
#the IPython shell:
print(c)
print(result)
```
```python
In [15]: c
Out [15]: 7.5
In [16]: result
Out[16]: 1.4666666666666666
```
```python
>>> %load ipython_script_test.py
def f(x, y, z):
return (x + y) / z
a = 5
b = 6
c = 7.5
result = f(a, b, c)
```
```
# The %load magic function imports a script into a code cell.
####
# %load ipython_script_test.py -> execute
#### -> IPython comments out the %load line and pastes the script's code into the cell.
# %load ipython_script_test.py
def f(x, y, z):
return (x + y) / z
a = 5
b = 6
c = 7.5
result = f(a, b, c)
print(result)
```
#### Interrupting running code
### Executing Code from the Clipboard
```python
x = 5
y = 7
if x > 5:
x += 1
y = 8
```
```python
In [17]: %paste
x = 5
y = 7
if x > 5:
x += 1
y = 8
## -- End pasted text --
```
```python
In [18]: %cpaste
Pasting code; enter '--' alone on the line to stop or use Ctrl-D.
:x = 5
:y = 7
:if x > 5:
: x += 1
:
: y = 8
:--
```
```
# %paste #Just for Ipython , not for jupyter notebook
```
### Terminal Keyboard Shortcuts
### About Magic Commands
```python
In [20]: a = np.random.randn(100, 100)
In [20]: %timeit np.dot(a, a)
10000 loops, best of 3: 20.9 µs per loop
```
```
a = np.random.randn(100, 100)
%timeit np.dot(a, a)
#Magic commands have additional “command-line” options, which can
#all be viewed (as you might expect) using ? :
%debug?
%load?
```
Magic functions can be used by default without the percent sign, as long as no variable is defined with the same name as the magic function in question. This feature is called automagic and can be enabled or disabled with %automagic.
```
load?
```
```python
In [21]: %debug?
Docstring:
::
%debug [--breakpoint FILE:LINE] [statement [statement ...]]
Activate the interactive debugger.
This magic command support two ways of activating debugger.
One is to activate debugger before executing code. This way, you
can set a break point, to step through the code from the point.
You can use this mode by giving statements to execute and optionally
a breakpoint.
The other one is to activate debugger in post-mortem mode. You can
activate this mode simply running %debug without any argument.
If an exception has just occurred, this lets you inspect its stack
frames interactively. Note that this will always work only on the last
traceback that occurred, so you must call this quickly after an
exception that you wish to inspect has fired, because if another one
occurs, it clobbers the previous one.
If you want IPython to automatically do this on every exception, see
the %pdb magic for more details.
positional arguments:
statement Code to run in debugger. You can omit this in cell
magic mode.
optional arguments:
--breakpoint <FILE:LINE>, -b <FILE:LINE>
Set break point at LINE in FILE.
```
```python
In [22]: %pwd
Out[22]: '/home/wesm/code/pydata-book
In [23]: foo = %pwd
In [24]: foo
Out[24]: '/home/wesm/code/pydata-book'
```
Some magic functions behave like Python functions and their output can be assigned
to a variable:
```
%pwd
foo = %pwd
foo
```
### Matplotlib Integration
```python
In [26]: %matplotlib
Using matplotlib backend: Qt4Agg
```
```python
In [26]: %matplotlib inline
```
- Ctrl + Enter : execute the cell in place
- Shift + Enter : execute the cell, move to the cell below
- Alt + Enter : execute the cell, insert a new cell below
```
%matplotlib inline
test_rand = np.random.rand(50)
test_rand
test_rand.cumsum() #cumulative sum: np.random.rand(50).cumsum()
import matplotlib.pyplot as plt
# plt.plot(np.random.rand(50).cumsum())
plt.plot(test_rand.cumsum())
```
## Python Language Basics
### Language Semantics
#### Indentation, not braces
```python
for x in array:
if x < pivot:
less.append(x)
else:
greater.append(x)
```
Python statements also do not need to be terminated by semicolons. Semicolons can be used, however, to separate multiple statements on a single line:
```python
a = 5; b = 6; c = 7
```
#### Everything is an object
#### Comments
```python
results = []
for line in file_handle:
# keep the empty lines for now
# if len(line) == 0:
# continue
results.append(line.replace('foo', 'bar'))
```
```python
print("Reached this line") # Simple status report
```
#### Function and object method calls
```
result = f(x, y, z)
g()
```
```
obj.some_method(x, y, z)
```
```python
result = f(a, b, c, d=5, e='foo')
```
#### Variables and argument passing
```
a = [1, 2, 3]
# this assignment would cause the data [1, 2, 3] to be copied. In
# Python, a and b actually now refer to the same object, the original list [1, 2, 3]
b = a
a.append(4)
b
```
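A short follow-up on the point above: if you want an independent list rather than a second reference to the same object, make an explicit copy:

```python
a = [1, 2, 3]
b = a          # b refers to the same list object as a
c = list(a)    # shallow copy; a.copy() or a[:] work the same way
a.append(4)
print(b)       # [1, 2, 3, 4] -- b sees the mutation
print(c)       # [1, 2, 3]    -- c does not
```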
```python
def append_element(some_list, element):
some_list.append(element)
```
```python
In [27]: data = [1, 2, 3]
In [28]: append_element(data, 4)
In [29]: data
Out[29]: [1, 2, 3, 4]
```
```
data = [1, 2, 3]
def append_element(some_list, element):
some_list.append(element)
append_element(data, 4)
data
```
#### Dynamic references, strong types
```
a = 5
print(type(a))
a = 'foo'
print(type(a))
'5' + 5
try:
print('5' + 5)
except TypeError as err:
print(f"Error : {err} " )
```
Python is considered a strongly typed language: every object has a specific type (or class), and implicit conversions between unrelated types such as str and int do not happen. A function can therefore check explicitly that its arguments have matching types:
```
def add_arguments(x, y):
assert type(x) == type(y), "Type of both arguments must be same!"
return x + y
add_arguments(1, 2)
add_arguments('1', '2')
add_arguments('2', 2)
def add_arguments_raise(x, y):
if type(x) != type(y):
raise TypeError(f"Type of both arguments must be same: {type(x)} != {type(y)}")
return x + y
add_arguments_raise(1, 2)
add_arguments_raise('1', 2)
```
Python is considered a strongly typed language, which means that every object has a specific type (or class), and implicit conversions will occur only in certain obvious circumstances, such as the following:
```
a = 4.5
b = 2
# String formatting, to be visited later
print('a is {0}, b is {1}'.format(type(a), type(b)))
a / b
a = 5
isinstance(a, int)
a = 5; b = 4.5
print(isinstance(a, (int, float)))
print(isinstance(b, (int, float)))
```
#### Attributes and methods
Objects in Python typically have both attributes (other Python objects stored "inside" the object) and methods (functions associated with an object that can have access to the object's internal data). Both are accessed via the syntax obj.attribute_name:
```python
In [1]: a = 'foo'
In [2]: a.<Press Tab>
a.capitalize a.format a.isupper a.rindex a.strip
a.center a.index a.join a.rjust a.swapcase
a.count a.isalnum a.ljust a.rpartition a.title
a.decode a.isalpha a.lower a.rsplit a.translate
a.encode a.isdigit a.lstrip a.rstrip a.upper
a.endswith a.islower a.partition a.split a.zfill
a.expandtabs a.isspace a.replace a.splitlines
a.find a.istitle a.rfind a.startswith
```
```
a = 'foo'
# a.<tab>
a.split?
getattr(a, 'split')
```
#### Duck typing
```
def isiterable(obj):
try:
iter(obj)
return True
except TypeError: # not iterable
return False
print(isiterable('a string'))
print(isiterable([1, 2, 3]))
print(isiterable(5))
```
```python
if not isinstance(x, list) and isiterable(x):
    x = list(x)
```
```
# A common case is writing a function that can accept any
# kind of sequence (list, tuple, ndarray) or even an iterator. You can first check if the
# object is a list (or a NumPy array) and, if it is not, convert it to be one:
def accept_any(x):
if not isinstance(x, list) and isiterable(x):
x = list(x)
print(x)
accept_any([1,2,3,4])
print(type((3,4)))
print(type({3,4}))
accept_any({3,4})#set
accept_any((3,4)) #tuple
accept_any(3)
```
#### Imports
```python
# some_module.py
PI = 3.14159
def f(x):
return x + 2
def g(a, b):
return a + b
```
```python
import some_module
result = some_module.f(5)
pi = some_module.PI

from some_module import f, g, PI
result = g(5, PI)

import some_module as sm
from some_module import PI as pi, g as gf
r1 = sm.f(pi)
r2 = gf(6, pi)
```
```
######################################
#AttributeError: module 'some_module' has no attribute 'f'
# import some_module
# result = some_module.f(4)
# pi = some_module.PI
########## No Error in ch02 subdirectory with __init__.py ##########
import calculation #Importing calculation module
print(calculation.add(1,2)) #Calling function defined in add module.
from calculation import add
print(add(1,2)) #after the from-import, add can be called directly
%run module_test.py
# import calculation
# print(calculation.add(1,2))
from some_module import f, g, PI
result = g(5, PI) #return 5 + PI
result
import some_module as sm
from some_module import PI as pi, g as gf
r1 = sm.f(pi)
print(r1)
r2 = gf(6, pi)
print(r2)
```
### __init__.py
Files named __init__.py are used to mark directories on disk as Python package directories. If you have the files
```
mydir/spam/__init__.py
mydir/spam/module.py
```
and mydir is on your path, you can import the code in module.py as
```
import spam.module
#or
from spam import module
```
If you remove the __init__.py file, Python will no longer look for submodules inside that directory, so attempts to import the module will fail.
The __init__.py file is usually empty, but can be used to export selected portions of the package under a more convenient name, hold convenience functions, etc. Given the example above, the contents of the init module can be accessed as
```
import spam
```
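As a runnable sketch (the package layout and the `add` function here are made up for illustration), you can build a throwaway package on disk and watch the `__init__.py` re-export in action:

```python
import os
import sys
import tempfile

# Create a throwaway package directory to demonstrate __init__.py re-exports.
pkg_root = tempfile.mkdtemp()
os.makedirs(os.path.join(pkg_root, "spam"))

# spam/module.py defines the actual function
with open(os.path.join(pkg_root, "spam", "module.py"), "w") as f:
    f.write("def add(a, b):\n    return a + b\n")

# spam/__init__.py re-exports it at the package's top level
with open(os.path.join(pkg_root, "spam", "__init__.py"), "w") as f:
    f.write("from spam.module import add\n")

sys.path.insert(0, pkg_root)
import spam

print(spam.add(1, 2))        # 3 -- available directly thanks to the re-export
from spam.module import add
print(add(3, 4))             # 7 -- the submodule remains importable as well
```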
#### Binary operators and comparisons
```
5 - 7
12 + 21.5
5 <= 2
a = [1, 2, 3]
b = a
c = list(a)
print(c)
print(type(c))
print("a is b : ",a is b) #b = a
a is not c #c = list(a)
a == c
```
A very common use of is and is not is to check if a variable is None , since there is
only one instance of None :
```
a = None
a is None
a = True
b = False #False
a & b
a=True
b=bool(None)
c=bool('')
print(b)
print(c)
print(a & b)
print(a & c)
# Function to convert decimal number
# to binary using recursion
def DecimalToBinary(num):
if num > 1:
DecimalToBinary(num // 2)
print(num % 2, end = '')
# Driver Code
if __name__ == '__main__':
# decimal value
dec_val1 = 1401
# Calling function
DecimalToBinary(dec_val1)
# decimal value
dec_val2 = 1000
print("\n--------------------")
# Calling function
DecimalToBinary(dec_val2)
a1 = 1401
b1 = 1000
a1 & b1
# assign number as binary
# prefix 0b
num = 0b111101
print ("num: ", num)
print(bin(num))
# prefix 0B
num = 0B111101
print( "num: ", num)
print(bin(num))
a = 0b111101
b = 0b000010
# print value in binary
print("values in binary...")
print("a: ",bin (a))
print("b: ",bin (b))
print("--------------------------")
# bitwise OR and AND operations
print("(a|b) : ", bin (a|b))
print("(a&b) : ", bin (a&b))
print("--------------------------")
# print values in decimal
print("values in decimal...")
print("a: ",a )
print("b: ",b )
print("--------------------------")
# bitwise OR and AND operations
print("(a|b) : ", int (bin (a|b),2))
print("(a&b) : ", int (bin (a&b),2))
```
- a = 10 = 1010 (Binary)
- b = 4 = 0100 (Binary)
```
a & b =
1010
&
0100
= 0000
= 0 (Decimal)
```
```
a | b =
1010
|
0100
= 1110
= 14 (Decimal)
```
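The two worked examples above can be verified directly in Python:

```python
a, b = 10, 4   # 1010 and 0100 in binary
print(a & b)   # 0  -- no bit is set in both
print(a | b)   # 14 -- 1110: bits set in either
```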
#### Mutable and immutable objects
```
a_list = ['foo', 2, [4, 5]]
a_list[2] = (3, 4) #list mutable
a_list
# tuple, string : immutable
a_tuple = (3, 5, (4, 5))
a_tuple[1] = 'four' # raises TypeError: tuples are immutable
```
### Scalar Types
#### Numeric types
```
ival = 17239871
ival ** 6
fval = 7.243
fval2 = 6.78e-5
3 / 2
3 // 2
5//3
5%3
```
#### Strings
* Python strings are immutable; you cannot modify a string:
```
a = 'one way of writing a string'
b = "another way"
a
```
For multiline strings with line breaks, you can use triple quotes, either ''' or """ :
```
c = """
This is a longer string that
spans multiple lines
"""
c
c.count('\n')
a = 'this is a string'
a[10] = 'f' # raises TypeError: Python strings are immutable; you cannot modify a string
print(f"a : {a}")
b = a.replace('string', 'longer string')
b
# After this operation, the variable a is unmodified:
a
#Many Python objects can be converted to a string using the str function:
a = 5.6
s = str(a)
print(s)
print(type(s))
```
Strings are a sequence of Unicode characters and therefore can be treated like other
sequences, such as lists and tuples
```
ss = 'python'
print(type(ss))
# list(ss) # after executing this line, the result is the same
ss[:3] #pyt (value of index 3-> h , not including)
s = 'python'
print(type(s))
list(s)
#slicing
s[:3] #pyt (value of index 3-> h , not including)
#backslash character \ is an escape character, meaning that it is used to specify
# special characters like newline \n or Unicode characters.
s = '12\\34'
print(s)
```
### raw : r'this\has\no\special\characters'
```
#preface the leading quote of the string with r ,
# which means that the characters should be interpreted as is:
s = r'this\has\no\special\characters'
s
a = 'this is the first half '
b = 'and this is the second half'
a + b
template = '{0:.2f} {1:s} are worth US${2:d}'
template.format(4.5560, 'Argentine Pesos', 1)
```
#### Bytes and Unicode
* In modern Python (i.e., Python 3.0 and up), Unicode has become the first-class string
type to enable more consistent handling of ASCII and non-ASCII text.
```
val = "español"
val
val_utf8 = val.encode('utf-8')
print(val_utf8)
type(val_utf8)
val_utf8.decode('utf-8')
```
While it’s become preferred to use UTF-8 for any encoding, for historical reasons you
may encounter data in any number of different encodings:
```
val.encode('latin1')
val.encode('utf-16')
val.encode('utf-16le')
```
bytes objects in the context of working with files,
where implicitly decoding all data to Unicode strings may not be desired.
```
bytes_val = b'this is bytes'
print(bytes_val) #byte
print(type(bytes_val))
print("------string again------------------")
decoded = bytes_val.decode('utf8')
print(type(decoded))
decoded # this is str (Unicode) now
```
#### Booleans
```
True and True
False or True
```
#### Type casting
```
s = '3.14159'
fval = float(s)
print(type(fval))
print(int(fval))
print(bool(fval))
print(bool(0))
```
#### None
```
a = None
a is None
b = 5
b is not None
def add_and_maybe_multiply(a, b, c=None):
result = a + b
if c is not None:
result = result * c
return result
add_and_maybe_multiply(1, 4, 2)
add_and_maybe_multiply(1, 4) #c is None -> just result = a + b
#None is not only a reserved keyword
# but also a unique instance of NoneType :
type(None)
```
#### Dates and times
```
from datetime import datetime, date, time
dt = datetime(2011, 10, 29, 20, 30, 21)
print(dt.day)
print(dt.minute)
#Given a datetime instance(dt), you can extract the equivalent date and time objects by
# calling methods(such as date(), time()) on the datetime of the same name:
dt.date()
dt.time()
print(dt.date())
print(dt.time())
# strftime method formats a datetime as a string:
#%Y : Four-digit year, %H : Hour (24-hour clock) [00, 23]
dt.strftime('%m/%d/%Y %H:%M') #format to string
# strftime method formats a datetime as a string:
#%y : Two-digit year, %I : Hour (12-hour clock) [01, 12]
dt.strftime('%m/%d/%y %I:%M')
# strftime method formats a datetime as a string:
dt.strftime('%m/%d/%Y %H:%M:%S') #Hour(24-hour clock):Minute:Second
# strftime method formats a datetime as a string:
# %w : Weekday as integer [0 (Sunday), 6]
dt.strftime('%m/%d/%Y %w %H:%M')
# strftime method formats a datetime as a string:
# %U: Week number of the year [00, 53]; Sunday is considered the first day of the week,
# and days before the first Sunday of the year are “week 0”
dt.strftime('%m/%d/%Y %U %H:%M')
# %W: Week number of the year [00, 53]; Monday is considered the first day of the week,
# and days before the first Monday of the year are “week 0”
dt.strftime('%m/%d/%Y %W %H:%M')
dt1 = datetime(2020, 1, 1, 22, 15, 16)
dt1.strftime('%m/%d/%Y %w %H:%M') #3 ->Wed (Sun:0, Mon:1, Tue:2, Wed:3)
dt1.strftime('%m/%d/%Y %U %H:%M')
# %z : UTC time zone offset as +HHMM or -HHMM ; empty if time zone naive
dt.strftime('%m/%d/%Y %W %H:%M -%z')
# strptime function parses strings into datetime objects:
datetime.strptime('20091031', '%Y%m%d')
print(datetime.strptime('20091031', '%Y%m%d'))
from datetime import datetime, date, time
dt4 = datetime(2011, 10, 29, 20, 30, 21)
print(type(dt4))
# strptime is a class method: call it on the datetime class, not an instance
print(datetime.strptime('20191029203023', '%Y%m%d%H%M%S'))
print(datetime.strptime('20111029203021', '%Y%m%d%H%M%S')) #parse string to datetime
# %F : Shortcut for %Y-%m-%d (e.g., 2012-4-18 )
# %D : Shortcut for %m/%d/%y (e.g., 04/18/12 )
dt4.strftime('%F')
dt4.strftime('%D')
```
When you are aggregating or otherwise grouping time series data, it will occasionally
be useful to replace time fields of a series of datetime s—for example, replacing the
minute and second fields with zero:
```
dt.replace(minute=0, second=0) #20 h, 0 m, 0 s
# The difference of two datetime objects produces a datetime.timedelta type:
dt2 = datetime(2011, 11, 15, 22, 30)
delta = dt2 - dt
print("dt2 : ",dt2)
print("dt : ",dt)
print("delta : " ,delta)
type(delta)
# The output timedelta(17, 7179) indicates that the timedelta encodes
# an offset of 17 days and 7,179 seconds.
delta
dt
# Adding a timedelta to a datetime produces a new shifted datetime :
dt + delta #dt2
```
### Control Flow
#### if, elif, and else
```
x = -8
if x < 0:
print("It's negative")
if x < 0:
print('It\'s negative')
elif x == 0:
print('Equal to zero')
elif 0 < x < 5:
print('Positive but smaller than 5')
else:
print('Positive and larger than or equal to 5')
a = 5; b = 7
c = 8; d = 4
if a < b or c > d:
print('Made it')
# chain comparisons
4 > 3 > 2 > 1
```
#### for loops
```python
for value in collection:
    # do something with value
```
```
# You can advance a for loop to the next iteration, skipping the remainder of the block,
# using the continue keyword.
sequence = [1, 2, None, 4, None, 5]
total = 0
for value in sequence:
print("--------value----------")
print(value)
if value is None: #if value is None, don't excute below line, and just goto next value in sequence
continue
total += value
print("interim total: ", total)
print("===============================")
print("total : ", total)
# A for loop can be exited altogether with the break keyword. This code sums ele‐
# ments of the list until a 5 is reached:(5 not including)
sequence = [1, 2, 0, 4, 6, 5, 2, 1]
total_until_5 = 0
for value in sequence:
if value == 5:
break
total_until_5 += value
print(total_until_5) # sum of 1, 2, 0, 4, 6 (5 not including)
for i in range(4): #4 not including(0,1,2,3)
for j in range(4): #(0,1,2,3)
if j > i:
break
print((i, j))
```
If the elements are sequences (tuples or lists), they can be unpacked into variables in the for statement:
```python
for a, b, c in iterator:
    # do something
```
#### while loops
* A while loop specifies a condition and a block of code that is to be executed
* until the condition evaluates to False
* or the loop is explicitly ended with break :
```
x = 256
total = 0
while x > 0:
# while x > 10:
if total > 500:
break
total += x
print(total)
x = x // 2
print(x)
print("-----------")
print("==============================")
print("total : ", total)
print("x : ", x)
```
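Stripped of the intermediate prints, the loop above halves x on each pass and stops once total exceeds 500, ending with total == 504 and x == 4:

```python
x = 256
total = 0
while x > 0:
    if total > 500:
        break
    total += x   # 256, 384, 448, 480, 496, 504
    x = x // 2   # 128, 64, 32, 16, 8, 4
print(total, x)  # 504 4
```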
#### pass
pass is the "no-op" statement in Python. It can be used in blocks where no action is to be taken (or as a placeholder for code not yet implemented); it is only required because Python uses whitespace to delimit blocks:
```
# x = -9
x = 0
if x < 0:
print('negative!')
elif x == 0:
# TODO: put something smart here
pass
else:
print('positive!')
```
#### range
range function returns an iterator that yields a sequence of evenly spaced
integers:
```
range(10) #only for python, numpy-> np.arange
list(range(10))
list(range(0, 20, 2))
list(range(5, 0, -1))
```
A common use of range is for iterating through sequences by index:
```
# seq = [1, 2, 3, 4]
seq = [2, 7, 3, 4]
for i in range(len(seq)):
val = seq[i]
print(val)
total = 0  # avoid shadowing the built-in sum
# for i in range(100000):
for i in range(50):
    # % is the modulo operator
    if i % 3 == 0 or i % 5 == 0:
        print("i : ", i)
        total += i
        print("total : ", total)
    print(f'--- end of i : {i} ---')
print("============================")
print("total is ", total)
```
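The loop's result can be cross-checked by inclusion-exclusion: add the multiples of 3 and of 5 in range(50), then subtract the multiples of 15 counted twice:

```python
s3 = sum(range(0, 50, 3))    # 0 + 3 + ... + 48 = 408
s5 = sum(range(0, 50, 5))    # 0 + 5 + ... + 45 = 225
s15 = sum(range(0, 50, 15))  # 0 + 15 + 30 + 45 = 90
result = s3 + s5 - s15
print(result)  # 543
```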
#### Ternary expressions
```value = true-expr if condition else false-expr```
```
if condition:
value = true-expr
else:
value = false-expr
```
```
x = 5
'Non-negative' if x >= 0 else 'Negative'
```
---
# Approximate q-learning
In this notebook you will teach a __tensorflow__ neural network to do Q-learning.
__Frameworks__ - we'll accept this homework in any deep learning framework. This particular notebook was designed for tensorflow, but you will find it easy to adapt it to almost any python-based deep learning framework.
```
#XVFB will be launched if you run on a server
import os
if not os.environ.get("DISPLAY"):  # DISPLAY unset or empty
!bash ../xvfb start
%env DISPLAY=:1
import gym
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
env = gym.make("CartPole-v0").env
env.reset()
n_actions = env.action_space.n
state_dim = env.observation_space.shape
plt.imshow(env.render("rgb_array"))
```
# Approximate (deep) Q-learning: building the network
To train a neural network policy one must have a neural network policy. Let's build it.
Since we're working with pre-extracted features (cart positions, angles and velocities), we don't need a complicated network yet. In fact, let's build something like this for starters:

For your first run, please only use linear layers (L.Dense) and activations. Stuff like batch normalization or dropout may ruin everything if used haphazardly.
Also please avoid using nonlinearities like sigmoid & tanh: agent's observations are not normalized so sigmoids may become saturated from init.
Ideally you should start small with maybe 1-2 hidden layers with < 200 neurons and then increase network size if agent doesn't beat the target score.
```
import tensorflow as tf
import keras
import keras.layers as L
tf.reset_default_graph()
sess = tf.InteractiveSession()
keras.backend.set_session(sess)
network = keras.models.Sequential()
network.add(L.InputLayer(state_dim))
# let's create a network for approximate q-learning following guidelines above
<YOUR CODE: stack more layers!!!1 >
def get_action(state, epsilon=0):
"""
sample actions with epsilon-greedy policy
recap: with p = epsilon pick random action, else pick action with highest Q(s,a)
"""
q_values = network.predict(state[None])[0]
###YOUR CODE
return <epsilon-greedily selected action>
assert network.output_shape == (None, n_actions), "please make sure your model maps state s -> [Q(s,a0), ..., Q(s, a_last)]"
assert network.layers[-1].activation == keras.activations.linear, "please make sure you predict q-values without nonlinearity"
# test epsilon-greedy exploration
s = env.reset()
assert np.shape(get_action(s)) == (), "please return just one action (integer)"
for eps in [0., 0.1, 0.5, 1.0]:
state_frequencies = np.bincount([get_action(s, epsilon=eps) for i in range(10000)], minlength=n_actions)
best_action = state_frequencies.argmax()
assert abs(state_frequencies[best_action] - 10000 * (1 - eps + eps / n_actions)) < 200
for other_action in range(n_actions):
if other_action != best_action:
assert abs(state_frequencies[other_action] - 10000 * (eps / n_actions)) < 200
print('e=%.1f tests passed'%eps)
```
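The epsilon-greedy rule in `get_action` can be sketched with plain NumPy, independent of the network (the q-values below are made up for illustration):

```python
import numpy as np

def epsilon_greedy(q_values, epsilon, rng=np.random):
    # with probability epsilon pick a uniformly random action,
    # otherwise pick the action with the highest Q-value
    if rng.rand() < epsilon:
        return rng.randint(len(q_values))
    return int(np.argmax(q_values))

q = np.array([0.1, 0.5, 0.2])
print(epsilon_greedy(q, epsilon=0.0))  # 1 -- always greedy when epsilon is 0
```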
### Q-learning via gradient descent
We shall now train our agent's Q-function by minimizing the TD loss:
$$ L = { 1 \over N} \sum_i (Q_{\theta}(s,a) - [r(s,a) + \gamma \cdot max_{a'} Q_{-}(s', a')]) ^2 $$
Where
* $s, a, r, s'$ are current state, action, reward and next state respectively
* $\gamma$ is a discount factor defined two cells above.
The tricky part is with $Q_{-}(s',a')$. From an engineering standpoint, it's the same as $Q_{\theta}$ - the output of your neural network policy. However, when doing gradient descent, __we won't propagate gradients through it__ to make training more stable (see lectures).
To do so, we shall use the `tf.stop_gradient` function, which basically says "consider this thing constant when doing backprop".
```
# Create placeholders for the <s, a, r, s'> tuple and a special indicator for game end (is_done = True)
states_ph = keras.backend.placeholder(dtype='float32', shape=(None,) + state_dim)
actions_ph = keras.backend.placeholder(dtype='int32', shape=[None])
rewards_ph = keras.backend.placeholder(dtype='float32', shape=[None])
next_states_ph = keras.backend.placeholder(dtype='float32', shape=(None,) + state_dim)
is_done_ph = keras.backend.placeholder(dtype='bool', shape=[None])
#get q-values for all actions in current states
predicted_qvalues = network(states_ph)
#select q-values for chosen actions
predicted_qvalues_for_actions = tf.reduce_sum(predicted_qvalues * tf.one_hot(actions_ph, n_actions), axis=1)
gamma = 0.99
# compute q-values for all actions in next states
predicted_next_qvalues = <YOUR CODE - apply network to get q-values for next_states_ph>
# compute V*(next_states) using predicted next q-values
next_state_values = <YOUR CODE>
# compute "target q-values" for loss - it's what's inside square parentheses in the above formula.
target_qvalues_for_actions = <YOUR CODE>
# at the last state we shall use simplified formula: Q(s,a) = r(s,a) since s' doesn't exist
target_qvalues_for_actions = tf.where(is_done_ph, rewards_ph, target_qvalues_for_actions)
#mean squared error loss to minimize
loss = (predicted_qvalues_for_actions - tf.stop_gradient(target_qvalues_for_actions)) ** 2
loss = tf.reduce_mean(loss)
# training function that resembles agent.update(state, action, reward, next_state) from tabular agent
train_step = tf.train.AdamOptimizer(1e-4).minimize(loss)
assert tf.gradients(loss, [predicted_qvalues_for_actions])[0] is not None, "make sure you update q-values for chosen actions and not just all actions"
assert tf.gradients(loss, [predicted_next_qvalues])[0] is None, "make sure you don't propagate gradient w.r.t. Q_(s',a')"
assert predicted_next_qvalues.shape.ndims == 2, "make sure you predicted q-values for all actions in next state"
assert next_state_values.shape.ndims == 1, "make sure you computed V(s') as maximum over just the actions axis and not all axes"
assert target_qvalues_for_actions.shape.ndims == 1, "there's something wrong with target q-values, they must be a vector"
```
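Before filling in the TensorFlow placeholders, the target formula itself can be illustrated with a NumPy toy batch (the numbers are invented; this is an illustration, not the graded solution):

```python
import numpy as np

gamma = 0.99
# next-state q-values for a batch of 3 transitions with 2 actions each
next_qvalues = np.array([[1.0, 2.0], [0.5, 0.25], [3.0, 1.0]])
rewards = np.array([1.0, 0.0, 2.0])
is_done = np.array([False, False, True])

next_state_values = next_qvalues.max(axis=1)   # V*(s') = max_a' Q(s', a')
targets = rewards + gamma * next_state_values  # r + gamma * V*(s')
targets = np.where(is_done, rewards, targets)  # terminal states use just r
print(targets)  # -> 2.98, 0.495, 2.0
```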
### Playing the game
```
def generate_session(t_max=1000, epsilon=0, train=False):
"""play env with approximate q-learning agent and train it at the same time"""
total_reward = 0
s = env.reset()
for t in range(t_max):
a = get_action(s, epsilon=epsilon)
next_s, r, done, _ = env.step(a)
if train:
sess.run(train_step,{
states_ph: [s], actions_ph: [a], rewards_ph: [r],
next_states_ph: [next_s], is_done_ph: [done]
})
total_reward += r
s = next_s
if done: break
return total_reward
epsilon = 0.5
for i in range(1000):
session_rewards = [generate_session(epsilon=epsilon, train=True) for _ in range(100)]
print("epoch #{}\tmean reward = {:.3f}\tepsilon = {:.3f}".format(i, np.mean(session_rewards), epsilon))
epsilon *= 0.99
assert epsilon >= 1e-4, "Make sure epsilon is always nonzero during training"
if np.mean(session_rewards) > 300:
print ("You Win!")
break
```
### How to interpret results
Welcome to the f.. world of deep f...n reinforcement learning. Don't expect agent's reward to smoothly go up. Hope for it to increase eventually. If it deems you worthy.
Seriously though,
* __mean reward__ is the average reward per game. For a correct implementation it may stay low for some 10 epochs, then start growing while oscillating insanely, and converge by ~50-100 steps depending on the network architecture.
* If it never reaches target score by the end of for loop, try increasing the number of hidden neurons or look at the epsilon.
* __epsilon__ - agent's willingness to explore. If you see that the agent is already at < 0.01 epsilon before its reward is at least 200, just reset it back to 0.1 - 0.5.
### Record videos
As usual, we now use `gym.wrappers.Monitor` to record a video of our agent playing the game. Unlike our previous attempts with state binarization, this time we expect our agent to act ~~(or fail)~~ more smoothly since there's no more binarization error at play.
As you already did with tabular q-learning, we set epsilon=0 for the final evaluation to prevent the agent from exploring itself to death.
```
#record sessions
import gym.wrappers
env = gym.wrappers.Monitor(gym.make("CartPole-v0"),directory="videos",force=True)
sessions = [generate_session(epsilon=0, train=False) for _ in range(100)]
env.close()
#show video
from IPython.display import HTML
import os
video_names = list(filter(lambda s:s.endswith(".mp4"),os.listdir("./videos/")))
HTML("""
<video width="640" height="480" controls>
<source src="{}" type="video/mp4">
</video>
""".format("./videos/"+video_names[-1])) #this may or may not be _last_ video. Try other indices
```
---
### Submit to coursera
```
from submit import submit_cartpole
submit_cartpole(generate_session, <EMAIL>, <TOKEN>)
```
---
# Credit Card Fraud Detection
Throughout the financial sector, machine learning algorithms are being developed to detect fraudulent transactions. In this project, that is exactly what we are going to be doing as well. Using a dataset of nearly 28,500 credit card transactions and multiple unsupervised anomaly detection algorithms, we are going to identify transactions with a high probability of being credit card fraud. In this project, we will build and deploy the following two machine learning algorithms:
* Local Outlier Factor (LOF)
* Isolation Forest Algorithm
Furthermore, using metrics such as precision, recall, and F1-scores, we will investigate why the classification accuracy for these algorithms can be misleading.
In addition, we will explore the use of data visualization techniques common in data science, such as parameter histograms and correlation matrices, to gain a better understanding of the underlying distribution of data in our data set. Let's get started!
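To preview why plain accuracy can mislead here (using invented class counts in the spirit of this dataset), consider a "classifier" that labels every transaction as valid: it scores nearly perfect accuracy while catching zero fraud.

```python
n_valid, n_fraud = 28000, 50               # made-up class counts
labels = [0] * n_valid + [1] * n_fraud
predictions = [0] * (n_valid + n_fraud)    # predict "valid" for everything

accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
recall_fraud = 0 / n_fraud                 # no fraud case is ever flagged
print(round(accuracy, 4), recall_fraud)    # 0.9982 0.0
```

This is why we will look at precision, recall, and F1 per class rather than accuracy alone.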
## 1. Importing Necessary Libraries
To start, let's print out the version numbers of all the libraries we will be using in this project. This serves two purposes - it ensures we have installed the libraries correctly and ensures that this tutorial will be reproducible.
```
import sys
import numpy
import pandas
import matplotlib
import seaborn
import scipy
print('Python: {}'.format(sys.version))
print('Numpy: {}'.format(numpy.__version__))
print('Pandas: {}'.format(pandas.__version__))
print('Matplotlib: {}'.format(matplotlib.__version__))
print('Seaborn: {}'.format(seaborn.__version__))
print('Scipy: {}'.format(scipy.__version__))
# import the necessary packages
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
```
### 2. The Data Set
In the following cells, we will import our dataset from a .csv file as a Pandas DataFrame. Furthermore, we will begin exploring the dataset to gain an understanding of the type, quantity, and distribution of data in our dataset. For this purpose, we will use Pandas' built-in describe feature, as well as parameter histograms and a correlation matrix.
```
# Load the dataset from the csv file using pandas
data = pd.read_csv('creditcard.csv')
# Start exploring the dataset
print(data.columns)
# Take a 10% random sample of the data to speed up computation, then print its shape
data = data.sample(frac=0.1, random_state = 1)
print(data.shape)
print(data.describe())
# V1 - V28 are the results of a PCA Dimensionality reduction to protect user identities and sensitive features
# Plot histograms of each parameter
data.hist(figsize = (20, 20))
plt.show()
# Determine number of fraud cases in dataset
Fraud = data[data['Class'] == 1]
Valid = data[data['Class'] == 0]
outlier_fraction = len(Fraud)/float(len(Valid))
print(outlier_fraction)
print('Fraud Cases: {}'.format(len(data[data['Class'] == 1])))
print('Valid Transactions: {}'.format(len(data[data['Class'] == 0])))
# Correlation matrix
corrmat = data.corr()
fig = plt.figure(figsize = (12, 9))
sns.heatmap(corrmat, vmax = .8, square = True)
plt.show()
# Get all the columns from the dataFrame
columns = data.columns.tolist()
# Filter the columns to remove data we do not want
columns = [c for c in columns if c not in ["Class"]]
# Store the variable we'll be predicting on
target = "Class"
X = data[columns]
Y = data[target]
# Print shapes
print(X.shape)
print(Y.shape)
```
## 3. Unsupervised Outlier Detection
Now that we have processed our data, we can begin deploying our machine learning algorithms. We will use the following techniques:
**Local Outlier Factor (LOF)**
The anomaly score of each sample is called Local Outlier Factor. It measures the local deviation of density of a
given sample with respect to its neighbors. It is local in that the anomaly score depends on how isolated the
object is with respect to the surrounding neighborhood.
**Isolation Forest Algorithm**
The IsolationForest ‘isolates’ observations by randomly selecting a feature and then randomly selecting
a split value between the maximum and minimum values of the selected feature.
Since recursive partitioning can be represented by a tree structure, the number of splittings required to
isolate a sample is equivalent to the path length from the root node to the terminating node.
This path length, averaged over a forest of such random trees, is a measure of normality and our decision function.
Random partitioning produces noticeably shorter paths for anomalies. Hence, when a forest of random trees
collectively produce shorter path lengths for particular samples, they are highly likely to be anomalies.
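A minimal, self-contained sketch of this idea (synthetic data, not the credit card set): points far from a dense cluster are isolated quickly and flagged with -1.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.RandomState(42)
normal = rng.normal(0, 1, size=(200, 2))        # dense cluster near the origin
outliers = np.array([[8.0, 8.0], [-9.0, 7.0]])  # two far-away points
X_demo = np.vstack([normal, outliers])

clf = IsolationForest(contamination=0.01, random_state=42)
pred = clf.fit_predict(X_demo)  # -1 = anomaly, 1 = normal
print(pred[-2:])                # the injected outliers are flagged as -1
```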
```
from sklearn.metrics import classification_report, accuracy_score
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor
# define random states
state = 1
# define outlier detection tools to be compared
classifiers = {
"Isolation Forest": IsolationForest(max_samples=len(X),
contamination=outlier_fraction,
random_state=state),
"Local Outlier Factor": LocalOutlierFactor(
n_neighbors=20,
contamination=outlier_fraction)}
# Fit the model
plt.figure(figsize=(9, 7))
n_outliers = len(Fraud)
for i, (clf_name, clf) in enumerate(classifiers.items()):
# fit the data and tag outliers
if clf_name == "Local Outlier Factor":
y_pred = clf.fit_predict(X)
scores_pred = clf.negative_outlier_factor_
else:
clf.fit(X)
scores_pred = clf.decision_function(X)
y_pred = clf.predict(X)
# Reshape the prediction values to 0 for valid, 1 for fraud.
y_pred[y_pred == 1] = 0
y_pred[y_pred == -1] = 1
n_errors = (y_pred != Y).sum()
# Run classification metrics
print('{}: {}'.format(clf_name, n_errors))
print(accuracy_score(Y, y_pred))
print(classification_report(Y, y_pred))
```
---
# Assignment 5

**WARNING!!! If you see this icon on the top of your COLAB session, your work is not saved automatically.**
**When you are working on homeworks, make sure that you save often. You may find it easier to save intermediate copies in Google Drive. If you save your working file in Google Drive, all changes will be saved as you work. MAKE SURE that your final version is saved to GitHub.**
Before you turn this problem in, make sure everything runs as expected. First, restart the kernel (in the menubar, select Kernel → Restart) and then run all cells (in the menubar, select Cell → Run All). You can speak with others regarding the assignment but all work must be your own.
### This is a 30 point assignment graded from answers to questions and automated tests that should be run at the bottom. Be sure to clearly label all of your answers and commit final tests at the end.
```
files = "https://github.com/rpi-techfundamentals/introml_website_fall_2020/raw/master/files/assignment5.zip"
!pip install otter-grader && wget $files && unzip -o assignment5.zip
#Run this. It initiates autograding.
import otter
grader = otter.Notebook()
```
### Load Data
We have our titanic dataset that is a bit different from what we have had previously. Load the train-new.csv and test-new.csv into dataframes train and test.
```
# Load the data here
```
## Question 1
(1) Investigate the data a little bit. What is different from some of the titanic datasets we have used in the past? (For example, compare against the data in the Kaggle Baseline notebook).
```
man1="""
"""
```
## Generating Dummy Variables
Before we do analysis of the titanic dataset, we have to select out our features, for the train and the test set, which we shall label `X_train`, and `X_test`.
As a part of this we need to generate `n-1` dummy variables for each one of our categorical columns. The resulting dataframes should be all numeric and have all of these columns below (in the correct order).
Follow the example above to generate a new value for `X_train` and `X_test` utilizing all the data.
```
['Age', 'SibSp', 'Parch', 'Fare', 'family_size', 'Pclass_2', 'Pclass_3', 'Sex_male', 'Cabin_B', 'Cabin_C', 'Cabin_D', 'Cabin_E', 'Cabin_F', 'Cabin_G', 'Cabin_H', 'Embarked_Q', 'Embarked_S']
```
*Hint, try:
`help(pd.get_dummies)`*
You should also set `y` to the Survived column.
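For instance (a toy frame, not the titanic columns themselves), `drop_first=True` produces the n-1 encoding:

```python
import pandas as pd

df_demo = pd.DataFrame({'Sex': ['male', 'female', 'male'],
                        'Embarked': ['S', 'Q', 'S']})
dummies = pd.get_dummies(df_demo, drop_first=True)
print(list(dummies.columns))  # ['Sex_male', 'Embarked_S'] -- n-1 columns per categorical
```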
```
#Answer Here
grader.check('q01')
```
## Split Training Set For Cross Validation
(2.) We want to split up our training set `X` so that we can do some cross validation. Specifically, we will start to use the term validation set for the set we will use to validate our model.
In doing so below, use the sklearn methods to do a train test (i.e., validation) split.
From the `X` and `y` dataframes, generate the following dataframes by drawing the data **randomly**: 80% of the rows for training and 20% for validation. So that you get repeatable results, set `random_state=100`. This sets a "seed" so that your random selection will be the same as mine and you will pass the internal tests.
train_X, val_X, train_y, val_y
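A minimal sketch of the split call, using synthetic stand-ins for `X` and `y` (not the Titanic data):

```
import numpy as np
from sklearn.model_selection import train_test_split

# Synthetic stand-ins, just to show the 80/20 split call
X = np.arange(20).reshape(10, 2)
y = np.arange(10)
train_X, val_X, train_y, val_y = train_test_split(X, y, test_size=0.2, random_state=100)
print(train_X.shape, val_X.shape)  # (8, 2) (2, 2)
```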
```
#Answer Here
grader.check('q02')
```
### Perform Nearest Neighbor Classification (KNeighborsClassifier)
(3.) Using the default options (i.e., all default hyperparameters), perform nearest neighbor classification. Calculate the accuracy measure using `metrics.accuracy_score`.
Train your model using the training data and assess the accuracy on both the training and validation data.
*Note: You only train the model once...on the training data. You then assess the performance on both the training and validation data.*
Assign the following variables:
`knn0_train_y` = The KNN prediction for the `train_X` data.
`knn0_val_y` = The KNN prediction for the `val_X` data.
`knn0_train_accuracy` = The accuracy for the `knn0_train_y` prediction.
`knn0_val_accuracy` = The accuracy for the `knn0_val_y` prediction.
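For illustration only, here is the same pattern on synthetic data (the graded answer must use the Titanic split from question 2):

```
from sklearn import metrics
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in data, not the Titanic features
X, y = make_classification(n_samples=200, random_state=0)
train_X, val_X, train_y, val_y = train_test_split(X, y, test_size=0.2, random_state=100)

knn = KNeighborsClassifier()           # all default hyperparameters
knn.fit(train_X, train_y)              # train once, on the training data only
knn0_train_y = knn.predict(train_X)
knn0_val_y = knn.predict(val_X)
knn0_train_accuracy = metrics.accuracy_score(train_y, knn0_train_y)
knn0_val_accuracy = metrics.accuracy_score(val_y, knn0_val_y)
print(knn0_train_accuracy, knn0_val_accuracy)
```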
```
#Answer Here
grader.check('q03')
```
### Confusion Matrix
We can utilize a confusion matrix to be able to understand misclassifications a bit more. This will give us a full idea of the true positives, true negatives, false positives, and false negatives.
See the documentation [here](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.confusion_matrix.html).
You can utilize the syntax below to generate the confusion matrices requested below.
```
from sklearn.metrics import confusion_matrix
confusion_matrix(y_true, y_pred)
```
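A tiny hand-made example (toy labels, not the Titanic predictions) shows how the quadrants are laid out, with rows as true classes and columns as predictions:

```
from sklearn.metrics import confusion_matrix

y_true = [0, 0, 1, 1, 1]
y_pred = [0, 1, 1, 1, 0]
print(confusion_matrix(y_true, y_pred))
# [[1 1]    -> 1 true negative, 1 false positive
#  [1 2]]   -> 1 false negative, 2 true positives
```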
**(4.) Explain what each of the four quadrants of the confusion matrix means.**
```
#Answer here
man4= """
"""
```
### Create Confusion Matrix for the Training and Validation Predictions
(5) Create a confusion matrix for each of the training and validation predictions.
`knn0_con_train` A confusion matrix for the training data.
`knn0_con_val` A confusion matrix for the validation data.
```
#Answers
grader.check('q05')
```
## Hyperparameter Tuning
(6) You created a single model using the default parameters. However, we want to adjust the parameters to try and improve the model.
Examine the documentation on KNN and see some of the different parameters that you can adjust.
[Scikit Learn Documentation](http://scikit-learn.org/stable/supervised_learning.html#supervised-learning).
Assign the following variables:
`knn1_train_y` = The KNN prediction for the `train_X` data for your improved model.
`knn1_val_y` = The KNN prediction for the `val_X` data for your improved model.
`knn1_train_accuracy` = The accuracy for the `knn1_train_y` prediction.
`knn1_val_accuracy` = The accuracy for the `knn1_val_y` prediction.
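As a sketch of a simple hyperparameter search on synthetic data (the actual parameter grid and data are up to you), varying `n_neighbors` and keeping the best validation accuracy:

```
from sklearn import metrics
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in data, not the Titanic features
X, y = make_classification(n_samples=200, random_state=0)
train_X, val_X, train_y, val_y = train_test_split(X, y, test_size=0.2, random_state=100)

best_k, best_acc = None, 0.0
for k in (1, 3, 5, 7, 9):
    knn = KNeighborsClassifier(n_neighbors=k).fit(train_X, train_y)
    acc = metrics.accuracy_score(val_y, knn.predict(val_X))
    if acc > best_acc:
        best_k, best_acc = k, acc
print(best_k, best_acc)
```

The same loop structure works for any of the other KNN parameters (e.g. `weights`, `p`).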
```
#Answers
grader.check('q06')
```
### Other Models
(7.) Test Logistic regression and one other algorithm/model (your choice). Provide a summary of the best performance below.
Use any of the available classification models. You should show and comment your code.
[Scikit Learn Documentation](http://scikit-learn.org/stable/supervised_learning.html#supervised-learning).
*Make sure you clearly indicate the accuracy of the Logistic regression model and your other model. Assess which model worked best considering all your efforts.*
```
#Answer here
man7= """
"""
#This runs all tests.
grader.check_all()
```
# Shape segmentation
The notebooks in this folder replicate the experiments as performed for [CNNs on Surfaces using Rotation-Equivariant Features](https://doi.org/10.1145/3386569.3392437).
The current notebook replicates the shape segmentation experiments from section `5.2 Comparisons`.
## Imports
We start by importing dependencies.
```
# File reading and progressbar
import os.path as osp
import progressbar
# PyTorch and PyTorch Geometric dependencies
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch_geometric.transforms as T
from torch_geometric.data import DataLoader
from torch_geometric.nn.inits import zeros
# Harmonic Surface Networks components
# Layers
from nn import (HarmonicConv, HarmonicResNetBlock,
ParallelTransportPool, ParallelTransportUnpool,
ComplexLin, ComplexNonLin)
# Utility functions
from utils.harmonic import magnitudes
# Shape segmentation dataset
from datasets import ShapeSeg
# Transforms
from transforms import (HarmonicPrecomp, VectorHeat, MultiscaleRadiusGraph,
ScaleMask, FilterNeighbours, NormalizeArea, NormalizeAxes, Subsample)
```
## Settings
Next, we set a few parameters for our network. You can change these settings to experiment with different configurations of the network. Right now, the settings are set to the ones used in the paper.
```
# Maximum rotation order for streams
max_order = 1
# Number of rings in the radial profile
n_rings = 6
# Number of filters per block
nf = [16, 32]
# Ratios used for pooling
ratios=[1, 0.25]
# Radius of convolution for each scale
radii = [0.2, 0.4]
# Number of shapes per batch
batch_size = 1
# Number of classes for segmentation
n_classes = 8
```
## Dataset
To get our dataset ready for training, we need to perform the following steps:
1. Provide a path to load and store the dataset.
2. Define transformations to be performed on the dataset:
- A transformation that computes a multi-scale radius graph and precomputes the logarithmic map.
- A transformation that masks the edges and vertices per scale and precomputes convolution components.
3. Assign and load the datasets.
```
# 1. Provide a path to load and store the dataset.
# Make sure that you have created a folder 'data' somewhere
# and that you have downloaded and moved the raw datasets there
path = osp.join('data', 'ShapeSeg')
# 2. Define transformations to be performed on the dataset:
# Transformation that computes a multi-scale radius graph and precomputes the logarithmic map.
pre_transform = T.Compose((
NormalizeArea(),
MultiscaleRadiusGraph(ratios, radii, loop=True, flow='target_to_source', sample_n=1024),
VectorHeat(),
Subsample(),
NormalizeAxes()
))
# Apply a random scale and random rotation to each shape
transform = T.Compose((
T.RandomScale((0.85, 1.15)),
T.RandomRotate(45, axis=0),
T.RandomRotate(45, axis=1),
T.RandomRotate(45, axis=2))
)
# Transformations that masks the edges and vertices per scale and precomputes convolution components.
scale0_transform = T.Compose((
ScaleMask(0),
FilterNeighbours(radii[0]),
HarmonicPrecomp(n_rings, max_order, max_r=radii[0]))
)
scale1_transform = T.Compose((
ScaleMask(1),
FilterNeighbours(radii[1]),
HarmonicPrecomp(n_rings, max_order, max_r=radii[1]))
)
# 3. Assign and load the datasets.
test_dataset = ShapeSeg(path, False, pre_transform=pre_transform)
test_loader = DataLoader(test_dataset, batch_size=batch_size)
train_dataset = ShapeSeg(path, True, pre_transform=pre_transform, transform=transform)
train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
```
## Network architecture
Now, we create the network architecture by creating a new `nn.Module`, `Net`. We first set up each layer in the `__init__` method of the `Net` class and define the steps performed for each batch in the `forward` method. The following figure shows a schematic of the architecture we will be implementing:
<img src="img/resnet_architecture.png" width="800px" />
Let's get started!
```
class Net(torch.nn.Module):
def __init__(self):
super(Net, self).__init__()
self.lin0 = nn.Linear(3, nf[0])
# Stack 1
self.resnet_block11 = HarmonicResNetBlock(nf[0], nf[0], max_order, n_rings, prev_order=0)
self.resnet_block12 = HarmonicResNetBlock(nf[0], nf[0], max_order, n_rings)
# Pool
self.pool = ParallelTransportPool(1, scale1_transform)
# Stack 2
self.resnet_block21 = HarmonicResNetBlock(nf[0], nf[1], max_order, n_rings)
self.resnet_block22 = HarmonicResNetBlock(nf[1], nf[1], max_order, n_rings)
# Stack 3
self.resnet_block31 = HarmonicResNetBlock(nf[1], nf[1], max_order, n_rings)
self.resnet_block32 = HarmonicResNetBlock(nf[1], nf[1], max_order, n_rings)
# Unpool
self.unpool = ParallelTransportUnpool(from_lvl=1)
# Stack 4
self.resnet_block41 = HarmonicResNetBlock(nf[1] + nf[0], nf[0], max_order, n_rings)
self.resnet_block42 = HarmonicResNetBlock(nf[0], nf[0], max_order, n_rings)
# Final Harmonic Convolution
# We set offset to False,
# because we will only use the radial component of the features after this
self.conv_final = HarmonicConv(nf[0], n_classes, max_order, n_rings, offset=False)
self.bias = nn.Parameter(torch.Tensor(n_classes))
zeros(self.bias)
def forward(self, data):
x = data.pos
# Linear transformation from input positions to nf[0] features
x = F.relu(self.lin0(x))
# Convert input features into complex numbers
x = torch.stack((x, torch.zeros_like(x)), dim=-1).unsqueeze(1)
# Stack 1
# Select only the edges and precomputed components of the first scale
data_scale0 = scale0_transform(data)
attributes = (data_scale0.edge_index, data_scale0.precomp, data_scale0.connection)
x = self.resnet_block11(x, *attributes)
x_prepool = self.resnet_block12(x, *attributes)
# Pooling
# Apply parallel transport pooling
x, data, data_pooled = self.pool(x_prepool, data)
# Stack 2
# Store edge_index and precomputed components of the second scale
attributes_pooled = (data_pooled.edge_index, data_pooled.precomp, data_pooled.connection)
x = self.resnet_block21(x, *attributes_pooled)
x = self.resnet_block22(x, *attributes_pooled)
# Stack 3
x = self.resnet_block31(x, *attributes_pooled)
x = self.resnet_block32(x, *attributes_pooled)
# Unpooling
x = self.unpool(x, data)
# Concatenate pre-pooling x with post-pooling x
x = torch.cat((x, x_prepool), dim=2)
    # Stack 4
x = self.resnet_block41(x, *attributes)
x = self.resnet_block42(x, *attributes)
x = self.conv_final(x, *attributes)
# Take radial component from features and sum streams
x = magnitudes(x, keepdim=False)
x = x.sum(dim=1)
x = x + self.bias
return F.log_softmax(x, dim=1)
```
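The `forward` method above turns real features into complex-valued features by stacking a zero imaginary part and adding a stream dimension. The same shape bookkeeping, sketched in NumPy so it runs without the HSN dependencies:

```
import numpy as np

x = np.random.randn(5, 16)                    # 5 vertices, 16 real features
x = np.stack((x, np.zeros_like(x)), axis=-1)  # append zero imaginary part -> (5, 16, 2)
x = x[:, None]                                # add stream dimension -> (5, 1, 16, 2)
print(x.shape)  # (5, 1, 16, 2)
```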
## Training
Phew, we're through the hard part. Now, let's get to training. First, move the network to the GPU and set up an optimizer.
```
# We want to train on a GPU. It'll take a long time on a CPU
device = torch.device('cuda')
# Move the network to the GPU
model = Net().to(device)
# Set up the ADAM optimizer with a learning rate of 0.01 (H-Nets used 0.0076)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
```
Next, define a training and test function.
```
def train(epoch):
# Set model to 'train' mode
model.train()
if epoch > 20:
for param_group in optimizer.param_groups:
param_group['lr'] = 0.001
for data in progressbar.progressbar(train_loader):
# Move training data to the GPU and optimize parameters
optimizer.zero_grad()
F.nll_loss(model(data.to(device)), data.y).backward()
optimizer.step()
def test():
# Set model to 'evaluation' mode
model.eval()
correct = 0
total_num = 0
for i, data in enumerate(test_loader):
pred = model(data.to(device)).max(1)[1]
correct += pred.eq(data.y).sum().item()
total_num += data.y.size(0)
return correct / total_num
```
Train for 50 epochs.
```
print('Start training, may take a while...')
# Try with fewer epochs if you're in a time crunch
for epoch in range(50):
train(epoch)
test_acc = test()
print("Epoch {} - Test: {:06.4f}".format(epoch, test_acc))
```
# Linear Regression
We will follow the example given by [scikit-learn](https://scikit-learn.org/stable/auto_examples/linear_model/plot_ols.html), and use the [diabetes](https://www4.stat.ncsu.edu/~boos/var.select/diabetes.html) dataset to train and test a linear regressor. We begin by loading the dataset (using only two features for this example) and splitting it into training and testing samples (an 80/20 split).
```
from sklearn.model_selection import train_test_split
from sklearn import datasets
dataset = datasets.load_diabetes()
X_train, X_test, y_train, y_test = train_test_split(dataset.data[:, :2], dataset.target, test_size=0.2)
print("Train examples: %d, Test examples: %d" % (X_train.shape[0], X_test.shape[0]))
```
# Non-private baseline
We now use scikit-learn's native LinearRegression function to establish a non-private baseline for our experiments. We will use the [r-squared score](https://en.wikipedia.org/wiki/Coefficient_of_determination) to evaluate the goodness-of-fit of the model, which is built into LinearRegression.
```
from sklearn.linear_model import LinearRegression as sk_LinearRegression
regr = sk_LinearRegression()
regr.fit(X_train, y_train)
baseline = regr.score(X_test, y_test)
print("Non-private baseline R2 score: %.2f" % baseline)
```
# Differentially private Linear Regression
Let's now train a differentially private linear regressor, where the trained model is differentially private with respect to the training data. We will pass additional hyperparameters to the regressor later to suppress the `PrivacyLeakWarning`.
```
from diffprivlib.models import LinearRegression
regr = LinearRegression()
regr.fit(X_train, y_train)
print("R2 score for epsilon=%.2f: %.2f" % (regr.epsilon, regr.score(X_test, y_test)))
```
# Plotting r-squared versus epsilon
We want to evaluate the tradeoff between goodness-of-fit and privacy budget (epsilon), and plot the result using `matplotlib`. For this example, we evaluate the score for epsilon between 1e-1 and 1e2. To ensure no privacy leakage from the hyperparameters of the model, `bounds_X` and `bounds_y` should both be set independently of the data, i.e. using domain knowledge.
```
import numpy as np
epsilons = np.logspace(-1, 2, 100)
accuracy = []
for epsilon in epsilons:
regr = LinearRegression(epsilon=epsilon, bounds_X=(-0.138, 0.2), bounds_y=(25, 346))
regr.fit(X_train, y_train)
accuracy.append(regr.score(X_test, y_test))
```
And then plot the result in a semi-log plot.
```
import matplotlib.pyplot as plt
plt.semilogx(epsilons, accuracy, label="Differentially private linear regression", zorder=10)
plt.semilogx(epsilons, baseline * np.ones_like(epsilons), dashes=[2,2], label="Non-private baseline", zorder=5)
plt.xlabel("epsilon")
plt.ylabel("r-squared score")
plt.ylim(-5, 1.5)
plt.xlim(epsilons[0], epsilons[-1])
plt.legend(loc=2)
```
# stripmap acquisition mode
In the conventional stripmap Synthetic Aperture Radar (SAR) imaging mode, the radar antenna
is fixed to a specific direction, illuminating a single swath of the scene with a fixed squint angle (i.e., the angle between the radar beam and the cross-track direction). The imaging swath width can be increased using scanning SAR (ScanSAR) or Terrain Observation by Progressive Scans (TOPS). In this notebook we focus on interferometric processing of stripmap data using stripmapApp.py.
The stripmap mode has been used by several SAR missions, such as Envisat, ERS, RADARSAT-1, RADARSAT-2, ALOS-1, COSMO-SkyMed and TerraSAR-X. Although Sentinel-1 A/B and ALOS-2 are capable of acquiring SAR data in stripmap mode, their operational imaging modes are TOPS and ScanSAR, respectively. Both missions have been acquiring stripmap data over certain regions.
For processing TOPS data using topsApp, please see the topsApp notebook. We recommend that new InSAR users start with the stripmapApp notebook first, and then try the topsApp notebook.
The detailed algorithms for stripmap and TOPS processing implemented in the ISCE software can be found in the following publications:
### stripmapApp:
H. Fattahi, M. Simons, and P. Agram, "InSAR Time-Series Estimation of the Ionospheric Phase Delay: An Extension of the Split Range-Spectrum Technique", IEEE Trans. Geosci. Remote Sens., vol. 55, no. 10, 5984-5996, 2017.
(https://ieeexplore.ieee.org/abstract/document/7987747/)
### topsApp:
H. Fattahi, P. Agram, and M. Simons, “A network-based enhanced spectral diversity approach for TOPS time-series analysis,” IEEE Trans. Geosci. Remote Sens., vol. 55, no. 2, pp. 777–786, Feb. 2017. (https://ieeexplore.ieee.org/abstract/document/7637021/)
### ISCE framework:
Rosen et al, IGARSS 2018 [Complete reference here]

(Figure from Fattahi et. al., 2017)
# stripmapApp (general overview)
stripmapApp.py is an ISCE application designed for interferometric processing of SAR data acquired in stripmap mode onboard platforms with precise orbits. This ISCE application is equivalent to insarApp (an older ISCE application which has been widely used by ISCE users), in the sense that both apps process stripmap data. Although the naming convention of the products and the interface of the app are similar to insarApp, the core modules used in stripmapApp are fundamentally different. The main features of stripmapApp include the following:
#### a) Focusing RAW data to native Doppler:
If processing starts from RAW data, JPL's ROI software is used for focusing the raw data to SLC (Single Look Complex) SAR images in native Doppler geometry. stripmapApp does not use the motion compensation algorithm that was used in insarApp.
#### b) Interferometric processing of SLCs in native or zero Doppler geometry
If the input data are SLC images, then focusing is not required and will be skipped. stripmapApp can process SLCs focused to zero or native Doppler.
#### c) Coregistration using SAR acquisition geometry (Orbit + DEM)
The geometry module of ISCE is used for coregistration of SAR images, i.e., range and azimuth offsets are computed for each pixel using the SAR acquisition geometry, orbit information and an existing Digital Elevation Model (DEM). The geometrical offsets are refined with a small constant shift in the range and azimuth directions. The constant shifts are estimated using incoherent cross-correlation of the two SAR images already coregistered using pure geometrical information.
#### d) Optional, more precise coregistration step
An optional step called "rubbersheeting" is available for more precise coregistration. If rubbersheeting is requested, a dense field of azimuth offsets is computed using incoherent cross-correlation between the two SAR images and added to the geometrical offsets for more precise coregistration. Rubbersheeting may be required if the SAR images are affected by ionospheric scintillation.
#### e) Ionospheric phase estimation
The Split Range-Spectrum technique and ionospheric phase estimation are available as optional processing steps.
# Prepare directories, download raw data
Importing some python modules and setting up some variables:
```
import os
import numpy as np
import matplotlib.pyplot as plt
import gdal
#
ASF_USER = ""
ASF_PASS = ""
# the working directory:
home_dir = os.path.join(os.getenv("HOME"), "work")
PROCESS_DIR = os.path.join(home_dir, "Hawaii_ALOS1")
DATA_DIR = os.path.join(PROCESS_DIR, "data")
print("home directory: ", home_dir)
```
Check if the PROCESS_DIR and DATA_DIR already exist. If they don't exist, we create them:
```
if not os.path.exists(PROCESS_DIR):
print("create ", PROCESS_DIR)
os.makedirs(PROCESS_DIR)
else:
print(PROCESS_DIR, " already exists!")
if not os.path.exists(DATA_DIR):
print("create ", DATA_DIR)
os.makedirs(DATA_DIR)
else:
print(DATA_DIR, " already exists!")
```
go to the DATA_DIR:
```
os.chdir(DATA_DIR)
```
In this tutorial we will process two ALOS-1 PALSAR acquisitions over Hawaii. The two acquisitions cover a dike opening event in March 2011.

Download the two ALOS-1 acquisitions from ASF using the following command:
```
cmd = "wget https://datapool.asf.alaska.edu/L1.0/A3/ALPSRP265743230-L1.0.zip --user={0} --password={1}".format(ASF_USER, ASF_PASS)
if not os.path.exists(os.path.join(DATA_DIR, "ALPSRP265743230-L1.0.zip")):
os.system(cmd)
else:
print("ALPSRP265743230-L1.0.zip already exists")
cmd = "wget https://datapool.asf.alaska.edu/L1.0/A3/ALPSRP272453230-L1.0.zip --user={0} --password={1}".format(ASF_USER, ASF_PASS)
if not os.path.exists(os.path.join(DATA_DIR, "ALPSRP272453230-L1.0.zip")):
os.system(cmd)
else:
print("ALPSRP272453230-L1.0.zip already exists")
```
Now the data should be downloading from ASF. Check the terminal that you started your Jupyter notebook from and you should see the progress bar for the download.
```
#Alternative ssara command
#!ssara_federated_query.py --platform=ALOS --intersectsWith='POLYGON((-155.3 19.5, -155.3 19.8,-155.0 19.8,-155.0 19.5, -155.3 19.5 ))' --print --kml --flightDirection=D --beamMode=FBS,FBD --relativeOrbit=598 -s 2011-01-17 -e 2011-03-07 --download
ls
```
unzip the downloaded files
```
!unzip ALPSRP265743230-L1.0.zip
!unzip ALPSRP272453230-L1.0.zip
```
Looking at the unzipped directories, there are multiple files:
```
ls ALPSRP265743230-L1.0
```
When you download PALSAR data from a data provider, each frame comprises an image data file and an image leader file, as well as possibly some other ancillary files that are not used by ISCE.
Files with IMG as prefix are images.
Files with LED as prefix are leaders.
The leader file contains parameters of the sensor that are relevant to the imaging mode, all the information necessary to process the data. The data file contains the raw data samples if Level 1.0 raw data (this is just a different name from what other satellites call Level 0) and processed imagery if Level 1.1 or 1.5 image data. The naming convention for these files is standardized across data archives, and has the following taxonomy:

To see the acquisition date of this PALSAR acquisition we can look at the following file:
```
!cat ALPSRP265743230-L1.0/ALPSRP265743230.l0.workreport
!grep Img_SceneCenterDateTime ALPSRP265743230-L1.0/ALPSRP265743230.l0.workreport
!grep Img_SceneCenterDateTime ALPSRP272453230-L1.0/ALPSRP272453230.l0.workreport
```
For clarity, let's create two directories for the two acquisition dates and move the unzipped folders there:
```
!mkdir 20110119
!mkdir 20110306
!mv ALPSRP265743230-L1.0 20110119
!mv ALPSRP272453230-L1.0 20110306
```
Now that we have the data ready, let's cd to the main PROCESS directory
```
os.chdir(PROCESS_DIR)
```
To make sure where we are, run pwd:
```
!pwd
```
# Setting up input xml files for processing with stripmapApp
Create a master.xml file to point to the master raw data or SLC images.
Here is an example master.xml file for this tutorial:
### master.xml
```xml
<component>
<property name="IMAGEFILE">
<value>[data/20110119/ALPSRP265743230-L1.0/IMG-HH-ALPSRP265743230-H1.0__D]</value>
</property>
<property name="LEADERFILE">
<value>[data/20110119/ALPSRP265743230-L1.0/LED-ALPSRP265743230-H1.0__D]</value>
</property>
<property name="OUTPUT">
<value>20110119</value>
</property>
</component>
```
### slave.xml
```xml
<component>
<property name="IMAGEFILE">
<value>[data/20110306/ALPSRP272453230-L1.0/IMG-HH-ALPSRP272453230-H1.0__D]</value>
</property>
<property name="LEADERFILE">
<value>[data/20110306/ALPSRP272453230-L1.0/LED-ALPSRP272453230-H1.0__D]</value>
</property>
<property name="OUTPUT">
<value>20110306</value>
</property>
</component>
```
### stripmapApp.xml
```xml
<?xml version="1.0" encoding="UTF-8"?>
<stripmapApp>
<component name="insar">
<property name="sensor name">ALOS</property>
<component name="master">
<catalog>master.xml</catalog>
</component>
<component name="slave">
<catalog>slave.xml</catalog>
</component>
<!--
<property name="demFilename">
<value>demLat_N18_N21_Lon_W156_W154.dem.wgs84</value>
</property>
-->
<property name="unwrapper name">icu</property>
</component>
</stripmapApp>
```
<br>
<div class="alert alert-info">
<b>Note :</b>
demFilename is commented out in the stripmapApp.xml. This means that the user has not specified a DEM; therefore, ISCE looks online and downloads the SRTM DEM.
</div>
After downloading the data to process, and setting up the input xml files, we are ready to start processing with stripmapApp. To see a full list of the processing steps run the following command:
```
!stripmapApp.py --help --steps
```
# stripmapApp processing steps
The default setting of this App includes the following steps to generate a geocoded interferogram from raw data or SLC images:
'startup', 'preprocess',
'formslc',
'verifyDEM',
'topo',
'geo2rdr',
'coarse_resample',
'misregistration',
'refined_resample',
'interferogram',
'filter',
'unwrap',
'geocode'
<br>
<div class="alert alert-info">
<b>Note (to process the interferogram with one command):</b>
stripmapApp.py stripmapApp.xml --start=startup --end=endup
</div>
However in this tutorial we process the interferogram step by step.
<br>
<div class="alert alert-info">
<b>At the end of each step, you will see a message showing the remaining steps:</b>
The remaining steps are (in order): [.....]
</div>
### pre-processing
```
!stripmapApp.py stripmapApp.xml --start=startup --end=preprocess
```
By the end of "preprocess", the following folders are created:
20110119_raw
20110306_raw
If you look into one of these folders:
```
ls 20110119_raw
```
20110119.raw contains the raw data (I/Q real and imaginary parts of each pulse), sampled along track (azimuth direction) at the Pulse Repetition Frequency (PRF) and across track (range direction) at the Range Sampling Frequency. stripmapApp currently only handles data acquired (or resampled) at a constant PRF.
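For intuition, the along-track sample spacing of the raw data is the platform velocity divided by the PRF. A sketch with hypothetical, ALOS-1-like numbers (not values read from this dataset):

```
# Hypothetical values, for illustration only
prf = 2159.0                        # pulse repetition frequency [Hz]
velocity = 7590.0                   # platform velocity [m/s]
azimuth_spacing = velocity / prf    # metres between consecutive pulses
print(round(azimuth_spacing, 2))  # 3.52
```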
### crop raw data
```
!stripmapApp.py stripmapApp.xml --start=cropraw --end=cropraw
```
The "cropraw" step crops the raw data to the region of interest, if one was requested in stripmapApp.xml. The region of interest can be added to stripmapApp.xml as:
```xml
<property name="regionOfInterest">[19.0, 19.9, -155.4, -154.7]</property>
```
Since we have not specified a region of interest, "cropraw" will be skipped and the whole frame will be processed.
### focusing
```
!stripmapApp.py stripmapApp.xml --start=formslc --end=formslc
```
By the end of "formslc", the raw data for both master and slave images are focused to SLC images.
```
ls 20110119_slc
```
20110119.slc: Single Look Complex image for the 20110119 acquisition.
20110119.slc.vrt: a GDAL VRT file which contains the size, data type, etc.
20110119.slc.xml: ISCE XML metadata file
In order to see the number of lines and pixels for an SLC image (or any data readable by GDAL):
```
!gdalinfo 20110119_slc/20110119.slc
```
Display a subset of the SLC's amplitude and phase
```
ds = gdal.Open("20110119_slc/20110119.slc", gdal.GA_ReadOnly)
# extract a part of the SLC to display
x0 = 0
y0 = 10000
x_offset = 5000
y_offset = 10000
slc = ds.GetRasterBand(1).ReadAsArray(x0, y0, x_offset, y_offset)
ds = None
fig = plt.figure(figsize=(14, 12))
# display amplitude of the slc
ax = fig.add_subplot(1,2,1)
ax.imshow(np.abs(slc), vmin = -2, vmax=2, cmap='gray')
ax.set_title("amplitude")
#display phase of the slc
ax = fig.add_subplot(1,2,2)
ax.imshow(np.angle(slc))
ax.set_title("phase")
plt.show()
slc = None
```
### crop SLC
```
!stripmapApp.py stripmapApp.xml --start=cropslc --end=cropslc
```
Similar to cropping the raw data, but for the SLC. Since a region of interest has not been specified, the whole frame is processed.
### verifyDEM
Checks if the DEM was given in the input xml file. If the DEM is not given, the app downloads the SRTM DEM.
```
!stripmapApp.py stripmapApp.xml --start=verifyDEM --end=verifyDEM
```
### topo (mapping radar coordinates to geo coordinates)
```
!stripmapApp.py stripmapApp.xml --start=topo --end=topo
```
At this step, based on the SAR acquisition geometry of the master image (including Doppler information), the platform's trajectory and an existing DEM, each pixel of the master image is geolocated. The geolocated coordinates will be in the same coordinate system as the platform's state vectors, which are usually given in the WGS84 coordinate system. Moreover, the incidence and heading angles are computed for each pixel.

Outputs of the step "topo" are written to "geometry" directory:
```
!ls geometry
```
lat.rdr.full: latitude of each pixel on the ground. "full" stands for full SAR image resolution grid (before multi-looking)
lon.rdr.full: longitude
z.rdr.full: height
los.rdr.full: incidence angle and heading angle
```
# Read a bounding box of latitude
ds = gdal.Open('geometry/lat.rdr.full', gdal.GA_ReadOnly)
lat = ds.GetRasterBand(1).ReadAsArray(0,10000,5000, 5000)
ds = None
# Read a bounding box of longitude
ds = gdal.Open('geometry/lon.rdr.full', gdal.GA_ReadOnly)
lon = ds.GetRasterBand(1).ReadAsArray(0,10000,5000, 5000)
ds = None
# Read a bounding box of height
ds = gdal.Open('geometry/z.rdr.full', gdal.GA_ReadOnly)
hgt = ds.GetRasterBand(1).ReadAsArray(0,10000,5000, 5000)
ds = None
fig = plt.figure(figsize=(18, 16))
ax = fig.add_subplot(1,3,1)
cax=ax.imshow(lat)
ax.set_title("latitude")
ax.set_axis_off()
cbar = fig.colorbar(cax, orientation='horizontal')
ax = fig.add_subplot(1,3,2)
cax=ax.imshow(lon)
ax.set_title("longitude")
ax.set_axis_off()
cbar = fig.colorbar(cax, orientation='horizontal')
ax = fig.add_subplot(1,3,3)
cax=ax.imshow(hgt, vmin = -100, vmax=1000)
ax.set_title("height")
ax.set_axis_off()
cbar = fig.colorbar(cax, orientation='horizontal')
plt.show()
lat = None
lon = None
hgt = None
```
### geo2rdr (mapping from geo coordinates to radar coordinates)
```
!stripmapApp.py stripmapApp.xml --start=geo2rdr --end=geo2rdr
```
At this step, given the geo-coordinates of each pixel in the master image (outputs of topo), the range and azimuth time (radar coordinates) are computed from the acquisition geometry and orbit information of the slave image.

The computed range and azimuth times for the slave image give the pure geometrical offsets required for resampling the slave image to the master image in the next step.

After running this step, the geometrical offsets are available in "offsets" folder:
```
!ls offsets
```
azimuth.off: contains the offsets between the master and slave images in the azimuth direction
range.off: contains the offsets between the master and slave images in the range direction
```
import gdal
import matplotlib.pyplot as plt
ds = gdal.Open('offsets/azimuth.off', gdal.GA_ReadOnly)
# extract only part of the data to display
az_offsets = ds.GetRasterBand(1).ReadAsArray(100,100,5000,5000)
ds = None
ds = gdal.Open('offsets/range.off', gdal.GA_ReadOnly)
# extract only part of the data to display
rng_offsets = ds.GetRasterBand(1).ReadAsArray(100,100,5000,5000)
ds = None
fig = plt.figure(figsize=(14, 12))
ax = fig.add_subplot(1,2,1)
cax=ax.imshow(az_offsets)
ax.set_title("azimuth offsets")
ax.set_axis_off()
cbar = fig.colorbar(cax, orientation='horizontal')
ax = fig.add_subplot(1,2,2)
cax = ax.imshow(rng_offsets)
ax.set_title("range offsets")
ax.set_axis_off()
cbar = fig.colorbar(cax, orientation='horizontal')
plt.show()
az_offsets = None
rng_offsets = None
```
### resampling (using only geometrical offsets)
```
!stripmapApp.py stripmapApp.xml --start=coarse_resample --end=coarse_resample
```
At this step, the geometrical offsets are used to resample the slave image to the same grid as the master image, i.e., the slave image is co-registered to the master image. The output of this step is written to the "coregisteredSlc" folder.
```
!ls coregisteredSlc/
```
coarse_coreg.slc: the slave SLC coregistered to the master image
```
import gdal
ds = gdal.Open("coregisteredSlc/coarse_coreg.slc", gdal.GA_ReadOnly)
slc = ds.GetRasterBand(1).ReadAsArray(0, 10000, 5000, 10000)
ds = None
fig = plt.figure()
ax = fig.add_subplot(1,2,1)
ax.imshow(np.abs(slc), vmin = -2, vmax=2, cmap='gray')
ax.set_title("amplitude")
slc = None
```
### misregistration (estimating constant offsets in range and azimuth directions)
```
!stripmapApp.py stripmapApp.xml --start=misregistration --end=misregistration
```
The range and azimuth offsets derived from pure geometry can potentially be affected by inaccurate orbit information, inaccurate DEMs, or inaccurate SAR metadata. The currently available DEMs (e.g., SRTM DEMs) are accurate enough to estimate offsets with accuracies of 1/100 of a pixel. The orbit information of most modern SAR sensors is also precise enough to obtain the same order of accuracy. However, inaccurate metadata (such as a timing error or a constant range bias), or a range bulk delay, may affect the estimated offsets. To account for such sources of error, the misregistration step estimates possible constant offsets between the coarse coregistered SLC and the master SLC. For this purpose an incoherent cross-correlation is performed.
The results of the "misregistration" step are written to the "misreg" folder.
```
!ls misreg/
```
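The incoherent cross correlation mentioned above can be sketched in a few lines of NumPy. This is only an illustration, not ISCE's implementation: it estimates a constant integer shift between two amplitude patches from the peak of their circular cross-correlation (real processing also oversamples around the peak for sub-pixel precision).

```python
import numpy as np

def incoherent_offset(master_amp, slave_amp):
    """Estimate the integer shift d maximizing sum(master[x + d] * slave[x])."""
    m = master_amp - master_amp.mean()
    s = slave_amp - slave_amp.mean()
    # circular cross-correlation via FFT
    corr = np.fft.ifft2(np.fft.fft2(m) * np.conj(np.fft.fft2(s))).real
    idx = np.unravel_index(np.argmax(corr), corr.shape)
    # map wrapped peak positions back to signed offsets
    return tuple(i if i <= n // 2 else i - n for i, n in zip(idx, corr.shape))

rng = np.random.default_rng(0)
a = rng.random((64, 64))
b = np.roll(a, shift=(3, -5), axis=(0, 1))  # "slave" = "master" shifted by (3, -5)
print(incoherent_offset(a, b))  # (-3, 5): the shift of master relative to slave
```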
In order to extract the estimated misregistration offsets:
```
import isce
import isceobj.StripmapProc.StripmapProc as St
stObj=St()
stObj.configure()
az = stObj.loadProduct("misreg/misreg_az.xml")
rng = stObj.loadProduct("misreg/misreg_rg.xml")
print("azimuth misregistration: ", az._coeffs)
print("range misregistration: ", rng._coeffs)
```
### refined_resample (resampling using geometrical offsets + misregistration)
```
!stripmapApp.py stripmapApp.xml --start=refined_resample --end=refined_resample
```
At this step resampling is re-run to account for the misregistration estimated at the previous step. The new coregistered SLC (named refined_coreg.slc) is written to the "coregisteredSlc" folder.
```
!ls coregisteredSlc/
```
### optional steps ('dense_offsets', 'rubber_sheet', 'fine_resample', 'split_range_spectrum' , 'sub_band_resample')
```
!stripmapApp.py stripmapApp.xml --start=dense_offsets --end=sub_band_resample
```
These steps are optional and will be skipped if user does not request them in the input xml file. We will get back to these steps in a different session where we estimate ionospheric phase.
### interferogram
```
!stripmapApp.py stripmapApp.xml --start=interferogram --end=interferogram
```
At this step the master image and refined_coreg.slc are used to generate the interferogram. The generated interferogram is multi-looked based on the user inputs in the input xml file. If the user does not specify the number of looks in the range and azimuth directions, they are estimated based on posting. The default posting is 30 m, which can also be specified in the input xml file.
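Multi-looking itself is just block averaging of neighboring pixels. A minimal sketch (illustrative only; ISCE's implementation differs):

```python
import numpy as np

def multilook(data, azimuth_looks, range_looks):
    """Average non-overlapping blocks of azimuth_looks x range_looks pixels."""
    rows = (data.shape[0] // azimuth_looks) * azimuth_looks
    cols = (data.shape[1] // range_looks) * range_looks
    d = data[:rows, :cols]  # trim so the array divides evenly into blocks
    return d.reshape(rows // azimuth_looks, azimuth_looks,
                     cols // range_looks, range_looks).mean(axis=(1, 3))

igram = np.ones((100, 60), dtype=np.complex64)
print(multilook(igram, 4, 2).shape)  # (25, 30)
```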
The results of the interferogram step are written to the "interferogram" folder:
```
!ls interferogram/
```
topophase.flat: flattened (geometrical phase removed) and multi-looked interferogram (one-band complex64 data).
topophase.cor: coherence and magnitude for the flattened multi-looked interferogram (two-band float32 data).
topophase.cor.full: similar to topophase.cor but at full SAR resolution.
topophase.amp: amplitudes of the master and slave images (two-band float32 data).
### sub-band interferogram
```
!stripmapApp.py stripmapApp.xml --start=sub_band_interferogram --end=sub_band_interferogram
```
This step will be skipped as we have not asked for ionospheric phase estimation. We will get back to this step in the ionospheric phase estimation notebook.
### filter
```
!stripmapApp.py stripmapApp.xml --start=filter --end=filter
```
A power spectral filter is applied to the multi-looked interferogram to reduce phase noise.
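The idea behind a power spectral (Goldstein-style) filter is to weight each patch's spectrum by a power of its own magnitude, so dominant fringe frequencies are enhanced and broadband noise is suppressed. A hedged single-patch sketch (ISCE filters overlapping patches and smooths the spectral magnitude first; both refinements are omitted here):

```python
import numpy as np

def goldstein_patch(igram_patch, alpha=0.5):
    """Weight the patch spectrum by its normalized magnitude raised to alpha."""
    spec = np.fft.fft2(igram_patch)
    weight = np.abs(spec)
    weight /= weight.max()  # normalize the spectral magnitude to [0, 1]
    return np.fft.ifft2(spec * weight ** alpha)

patch = np.exp(1j * np.random.default_rng(0).uniform(-np.pi, np.pi, (32, 32)))
print(goldstein_patch(patch).shape)  # (32, 32)
```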
```
import gdal
import matplotlib.pyplot as plt
import numpy as np
# reading the multi-looked wrapped interferogram
ds = gdal.Open("interferogram/topophase.flat", gdal.GA_ReadOnly)
igram = ds.GetRasterBand(1).ReadAsArray()
ds = None
# reading the multi-looked filtered (still wrapped) interferogram
ds = gdal.Open("interferogram/filt_topophase.flat", gdal.GA_ReadOnly)
filt_igram = ds.GetRasterBand(1).ReadAsArray()
ds = None
fig = plt.figure(figsize=(18, 16))
ax = fig.add_subplot(1,3,1)
ax.imshow(np.abs(igram), vmin = 0 , vmax = 60.0, cmap = 'gray')
ax.set_title("magnitude")
ax.set_axis_off()
ax = fig.add_subplot(1,3,2)
ax.imshow(np.angle(igram), cmap='jet')
ax.plot([1000,2800,2800,1000,1000],[3000,3000,2000,2000,3000],'-k')
ax.set_title("multi-looked interferometric phase")
ax.set_axis_off()
ax = fig.add_subplot(1,3,3)
ax.imshow(np.angle(filt_igram), cmap='jet')
ax.plot([1000,2800,2800,1000,1000],[3000,3000,2000,2000,3000],'-k')
ax.set_title("multi-looked & filtered phase")
#ax.set_axis_off()
fig = plt.figure(figsize=(18, 16))
ax = fig.add_subplot(1,3,1)
ax.imshow(np.abs(igram[2000:3000, 1000:2800]), vmin = 0 , vmax = 60.0, cmap = 'gray')
ax.set_title("magnitude")
ax.set_axis_off()
ax = fig.add_subplot(1,3,2)
ax.imshow(np.angle(igram[2000:3000, 1000:2800]), cmap='jet')
ax.plot([600,1200,1200,600,600],[600,600,100,100,600],'--k')
ax.set_title("multi-looked interferometric phase")
ax.set_axis_off()
ax = fig.add_subplot(1,3,3)
ax.imshow(np.angle(filt_igram[2000:3000, 1000:2800]), cmap='jet')
ax.plot([600,1200,1200,600,600],[600,600,100,100,600],'--k')
ax.set_title("multi-looked & filtered phase")
ax.set_axis_off()
fig = plt.figure(figsize=(18, 16))
ax = fig.add_subplot(1,3,1)
ax.imshow(np.abs(igram[2100:2600, 1600:2200]), vmin = 0 , vmax = 60.0, cmap = 'gray')
ax.set_title("magnitude")
ax.set_axis_off()
ax = fig.add_subplot(1,3,2)
ax.imshow(np.angle(igram[2100:2600, 1600:2200]), cmap='jet')
ax.set_title("multi-looked interferometric phase")
ax.set_axis_off()
ax = fig.add_subplot(1,3,3)
ax.imshow(np.angle(filt_igram[2100:2600, 1600:2200]), cmap='jet')
ax.set_title("multi-looked & filtered phase")
ax.set_axis_off()
filt_igram = None
igram = None
```
<br>
<div class="alert alert-info">
<b>Note :</b>
The interferometric phase shows the ground displacement caused by the dike opening event of March 2011 along the east rift zone of Kīlauea Volcano, Hawaii.
</div>
### optional steps ('filter_low_band', 'filter_high_band')
```
!stripmapApp.py stripmapApp.xml --start=filter_low_band --end=filter_high_band
```
These steps will be skipped since we have not asked for ionospheric phase estimation in the input xml file.
### unwrap
```
!stripmapApp.py stripmapApp.xml --start=unwrap --end=unwrap
```
At this step the wrapped phase of the filtered and multi-looked interferogram is unwrapped. The unwrapped interferogram is a two-band file with magnitude and phase components.
```
import gdal
import matplotlib.pyplot as plt
# reading the multi-looked filtered wrapped interferogram
ds = gdal.Open("interferogram/filt_topophase.flat", gdal.GA_ReadOnly)
igram = ds.GetRasterBand(1).ReadAsArray()
ds = None
# reading the multi-looked unwrapped interferogram
ds = gdal.Open("interferogram/filt_topophase.unw", gdal.GA_ReadOnly)
igram_unw = ds.GetRasterBand(2).ReadAsArray()
ds = None
# reading the connected component file
ds = gdal.Open("interferogram/filt_topophase.conncomp", gdal.GA_ReadOnly)
connected_components = ds.GetRasterBand(1).ReadAsArray()
ds = None
fig = plt.figure(figsize=(18, 16))
ax = fig.add_subplot(1,3,1)
cax=ax.imshow(np.angle(igram), cmap='jet')
ax.set_title("wrapped")
#ax.set_axis_off()
cbar = fig.colorbar(cax, ticks=[-3.14,0,3.14],orientation='horizontal')
cbar.ax.set_xticklabels(["$-\pi$",0,"$\pi$"])
ax = fig.add_subplot(1,3,2)
cax = ax.imshow(igram_unw, vmin = -15 , vmax = 15.0, cmap = 'jet')
ax.set_title("unwrapped")
ax.set_axis_off()
cbar = fig.colorbar(cax, ticks=[-15,0, 15], orientation='horizontal')
ax = fig.add_subplot(1,3,3)
cax = ax.imshow(connected_components, cmap = 'jet')
ax.set_title("components")
ax.set_axis_off()
cbar = fig.colorbar(cax, ticks=[0, 1] , orientation='horizontal')
cbar.ax.set_xticklabels([0,1])
connected_components = None
```
<br>
<div class="alert alert-info">
<b>Note (wrapped vs unwrapped) :</b>
Note the colorscale for the wrapped and unwrapped interferograms. The wrapped interferometric phase varies from $-\pi$ to $\pi$, while the unwrapped interferogram varies from -15 to 15 radians.
</div>
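The relation between wrapped and unwrapped phase can be reproduced in one dimension with NumPy (an illustrative example, independent of ISCE):

```python
import numpy as np

true_phase = np.linspace(0, 6 * np.pi, 100)  # smooth "unwrapped" phase ramp
wrapped = np.angle(np.exp(1j * true_phase))  # wrapped into (-pi, pi]
recovered = np.unwrap(wrapped)               # remove the 2*pi jumps
print(np.allclose(recovered, true_phase))    # True
```

This only works cleanly because the sampled phase changes by less than $\pi$ between neighbors; real 2-D unwrapping must also handle noise and discontinuities, which is why connected components exist.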
<br>
<div class="alert alert-info">
<b>Note :</b>
The connected components file is a product of the phase unwrapping. Each interferogram may have several connected components. The unwrapped phase within each component is expected to be correctly unwrapped. However, there might be $2\pi$ phase jumps between the components. Advanced ISCE users may use 2-stage unwrapping to adjust ambiguities among different components. stripmapApp currently does not support 2-stage unwrapping. Look for this option in future releases.
</div>
```
profile_wrapped_1 = np.angle(igram[2400,1500:1650])
profile_unwrapped_1 = igram_unw[2400,1500:1650]
profile_wrapped_2 = np.angle(igram[2400,1500:2000])
profile_unwrapped_2 = igram_unw[2400,1500:2000]
fig = plt.figure(figsize=(20,8))
ax = fig.add_subplot(2,3,1)
cax=ax.plot(profile_wrapped_1)
ax.set_title("wrapped")
ax = fig.add_subplot(2,3,2)
cax=ax.plot(profile_unwrapped_1)
ax.set_title("unwrapped")
ax = fig.add_subplot(2,3,3)
cax=ax.plot(np.round((profile_unwrapped_1-profile_wrapped_1)/2.0/np.pi))
ax.set_title("(unwrapped - wrapped)/(2$\pi$)")
ax = fig.add_subplot(2,3,4)
cax=ax.plot(profile_wrapped_2)
ax.set_title("wrapped")
ax = fig.add_subplot(2,3,5)
cax=ax.plot(profile_unwrapped_2)
ax.set_title("unwrapped")
ax = fig.add_subplot(2,3,6)
cax=ax.plot((profile_unwrapped_2-profile_wrapped_2)/2.0/np.pi)
ax.set_title("(unwrapped - wrapped)/(2$\pi$)")
igram = None
igram_unw = None
```
### optional steps ('unwrap_low_band', 'unwrap_high_band', 'ionosphere')
```
!stripmapApp.py stripmapApp.xml --start=unwrap_low_band --end=ionosphere
```
Since we have not asked for ionospheric phase estimation, all these steps will be skipped.
### geocoding
```
!stripmapApp.py stripmapApp.xml --start=geocode --end=geocode
import gdal
import matplotlib.pyplot as plt
# reading the geocoded unwrapped interferogram
ds = gdal.Open("interferogram/filt_topophase.unw.geo", gdal.GA_ReadOnly)
unw_geocoded = ds.GetRasterBand(2).ReadAsArray()
ds = None
fig = plt.figure(figsize=(14,12))
ax = fig.add_subplot(1,1,1)
cax = ax.imshow(unw_geocoded, vmin = -15 , vmax = 15.0, cmap = 'jet')
ax.set_title("geocoded unwrapped")
ax.set_axis_off()
cbar = fig.colorbar(cax, ticks=[-15,0, 15], orientation='horizontal')
plt.show()
unw_geocoded = None
```
# Supplementary Information
### understanding xml files
The format of this type of file may seem unfamiliar or strange to you, but with the following description of the basics of the format, it will hopefully become more familiar. The first thing to point out is that the indentations and line breaks seen above are not required and are simply used to make the structure more clear and the file more readable to humans. The xml file provides structure to data for consumption by a computer. As far as the computer is concerned the data structure is equally readable if all of the information were contained on a single very long line, but human readers would have a hard time reading it in that format.
The next thing to point out is the method by which the data are structured through the use of tags and attributes. An item enclosed in the < (less-than) and > (greater-than) symbols is referred to as a tag. The name enclosed in the < and > symbols is the name of the tag. Every tag in an xml file must have an associated closing tag that contains the same name but starts with the symbol </ and ends with the symbol >. This is the basic unit of structure given to the data. Data are enclosed inside of opening and closing tags that have names identifying the enclosed data. This structure is nested to any order of nesting necessary to represent the data. The Python language (in which the ISCE user interface is written) provides powerful tools to parse the xml structure into a data structure object and to very easily “walk” through the structure of that object.
In the above xml file the first and last tags are a tag pair: <stripmapApp> and </stripmapApp> (note again, tags must come in pairs like this). The first of these two tags, the opening tag, marks the beginning of the contents of the tag and the second, the closing tag, marks the end. ISCE expects a "file tag" of this nature to bracket all inputs contained in the file. The actual name of the file tag, as far as ISCE is concerned, is user selectable. In this example it is used, as a convenience to the user, to document the ISCE application, named stripmapApp.py, for which it is meant to provide inputs; it could have been named <foo> and stripmapApp.py would have been equally happy, provided that the closing tag were </foo>.
The next tag is <component name="insar">. Its closing tag </component> is located at the penultimate line of the file (one line above the </stripmapApp> tag). The name of this tag is component and it has an attribute called name with value "insar". The component tags bound a collection of information that is used by a computational element within ISCE that has the name specified by the name attribute. The name "insar" in the first component tag tells ISCE that the enclosed information corresponds to the functional component that configures the application run at the command line.
In general, component tags contain information in the form of other component tags or property tags, all of which can be nested to any required level. In this example the insar component contains a property tag and two other component tags.
The first tag we see in the insar component is the property tag with attribute name="sensor name". The property tag contains a value tag that contains the name of the sensor, ALOS in this case. The next tag is a component tag with attribute name="Master". This tag contains a catalog tag containing Master.xml. The catalog tag in general informs ISCE to look in the named file (Master.xml in this case) for the contents of the current tag. The next component tag has the same structure, with the catalog tag containing a different file named Slave.xml.
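For instance, the structure described above can be parsed with Python's standard library alone (a hedged sketch: the inline xml mirrors the example discussed, and ISCE's own parser does considerably more than this):

```python
import xml.etree.ElementTree as ET

xml_text = """
<stripmapApp>
  <component name="insar">
    <property name="sensor name"><value>ALOS</value></property>
    <component name="Master"><catalog>Master.xml</catalog></component>
    <component name="Slave"><catalog>Slave.xml</catalog></component>
  </component>
</stripmapApp>
"""
root = ET.fromstring(xml_text)
insar = root.find("component")                 # first nested component tag
print(insar.get("name"))                       # insar
print(insar.find("property/value").text)       # ALOS
for comp in insar.findall("component"):        # walk the nested components
    print(comp.get("name"), "->", comp.find("catalog").text)
```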
### Extra configuration parameters
The input configuration file in this tutorial included only the mandatory parameters, the master and slave images, which are enough to run the application. This means the application is configured with default parameters hardwired in the code or computed during processing.
For custom processing, users may want to set additional parameters in the input configuration file. A few more parameters that can be added to stripmapApp.xml are shown below.
### regionOfInterest
To specify a region of interest to process:
```xml
<property name="regionOfInterest">[South, North, West, East]</property>
```
Example:
```xml
<property name="regionOfInterest">[19.0, 19.9, -155.4, -154.7]</property>
```
Default: Full frame is processed.
### range looks
number of looks in range direction
```xml
<property name="range looks">USER_INPUT</property>
```
Default: is computed based on the posting parameter.
### azimuth looks
number of looks in azimuth direction
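By analogy with the "range looks" property above, the corresponding entry (following the same USER_INPUT placeholder convention) would be:

```xml
<property name="azimuth looks">USER_INPUT</property>
```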
Default: is computed based on the posting parameter.
### posting
Interferogram posting in meters.
```xml
<property name="posting">USER_INPUT</property>
```
Default: 30
<br>
<div class="alert alert-info">
<b>Note :</b>
If "range looks" and "azimuth looks" have not been specified, then posting is used to compute them such that the interferogram is generated with roughly square pixels, with each dimension close to the "posting" parameter.
</div>
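A hedged sketch of how such a computation could look (illustrative only; ISCE's actual logic may differ), given the slant-range and azimuth pixel sizes in meters:

```python
def looks_from_posting(range_pixel_size, azimuth_pixel_size, posting=30.0):
    """Pick looks so each multi-looked dimension is close to `posting` meters."""
    range_looks = max(1, round(posting / range_pixel_size))
    azimuth_looks = max(1, round(posting / azimuth_pixel_size))
    return range_looks, azimuth_looks

# pixel-size values here are assumptions chosen for illustration
print(looks_from_posting(9.37, 3.18))  # (3, 9)
```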
### filter strength
strength of the adaptive filter used for filtering the wrapped interferogram
```xml
<property name="filter strength">USER_INPUT</property>
```
Default: 0.5
### useHighResolutionDemOnly
```xml
<property name="useHighResolutionDemOnly">True</property>
```
If True and a DEM is not specified in the input, it will download only the highest-resolution SRTM DEM, if available, and fill the missing portion with null values (typically -32767).
Default: False
### do unwrap
To turn phase unwrapping off
```xml
<property name="do unwrap">False</property>
```
Default: True
### unwrapper name
To choose the phase unwrapping method. For example, to choose "snaphu":
```xml
<property name="unwrapper name">snaphu</property>
```
Default: "icu".
### do rubbersheeting
To turn on rubbersheeting (estimating azimuth offsets caused by strong ionospheric scintillation)
```xml
<property name="do rubbersheeting">True</property>
```
Default : False
### rubber sheet SNR Threshold
```xml
<property name="rubber sheet SNR Threshold">USER_INPUT</property>
```
If "do rubbersheeting" is turned on, then this value is used to mask out azimuth offsets with SNR less than the input threshold.
Default: 5
### rubber sheet filter size
the size of the median filter used for filtering the azimuth offsets
```xml
<property name="rubber sheet filter size">USER_INPUT</property>
```
Default: 8
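The two rubber-sheeting parameters above can be combined into a small sketch (illustrative only, not ISCE's code; an odd window is used here for simplicity, while the documented default is 8):

```python
import numpy as np

def median_filter2d(a, size):
    """Naive 2-D median filter with edge padding (fine for small arrays)."""
    pad = size // 2
    padded = np.pad(a, pad, mode='edge')
    out = np.empty_like(a)
    for i in range(a.shape[0]):
        for j in range(a.shape[1]):
            out[i, j] = np.median(padded[i:i + size, j:j + size])
    return out

def filter_azimuth_offsets(offsets, snr, snr_threshold=5.0, filter_size=5):
    masked = np.where(snr >= snr_threshold, offsets, 0.0)  # zero low-SNR offsets
    return median_filter2d(masked, filter_size)

rng = np.random.default_rng(0)
offsets = rng.normal(0.0, 0.1, size=(32, 32))
snr = rng.uniform(0.0, 10.0, size=(32, 32))
print(filter_azimuth_offsets(offsets, snr).shape)  # (32, 32)
```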
### do denseoffsets
turn on the dense offsets computation from cross correlation
```xml
<property name="do denseoffsets">True</property>
```
Default: False
<br>
<div class="alert alert-info">
<b>Note :</b>
If "do rubbersheeting" is turned on, then dense offsets computation is turned on regardless of the user input for "do denseoffsets"
</div>
### setting the dense offsets parameters
```xml
<property name="dense window width">USER_INPUT</property>
<property name="dense window height">USER_INPUT</property>
<property name="dense search width">USER_INPUT</property>
<property name="dense search height">USER_INPUT</property>
<property name="dense skip width">USER_INPUT</property>
<property name="dense skip height">USER_INPUT</property>
```
Default values:
<br>
dense window width = 64
<br>
dense window height = 64
<br>
dense search width = 20
<br>
dense search height = 20
<br>
dense skip width = 32
<br>
dense skip height = 32
### geocode list
List of products to be geocoded.
```xml
<property name="geocode list">[a list of files to geocode]</property>
```
Default: multilooked, filtered wrapped and unwrapped interferograms, coherence, ionospheric phase
### offset geocode list
List of offset-specific files to geocode
```xml
<property name="offset geocode list">[a list of offset files to geocode]</property>
```
### do split spectrum
turn on split spectrum
```xml
<property name="do split spectrum">True</property>
```
Default: False
### do dispersive
turn on dispersive phase estimation
```xml
<property name="do dispersive">True</property>
```
Default: False
<br>
<div class="alert alert-info">
<b>Note :</b>
By turning on "do dispersive", the user input for "do split spectrum" is ignored and the split spectrum will be turned on as it is needed for dispersive phase estimation.
</div>
### control the filter kernel for filtering the dispersive phase
```xml
<property name="dispersive filter kernel x-size">800</property>
<property name="dispersive filter kernel y-size">800</property>
<property name="dispersive filter kernel sigma_x">100</property>
<property name="dispersive filter kernel sigma_y">100</property>
<property name="dispersive filter kernel rotation">0</property>
<property name="dispersive filter number of iterations">5</property>
<property name="dispersive filter mask type">coherence</property>
<property name="dispersive filter coherence threshold">0.6</property>
```
### processing data from other stripmap sensors
stripmapApp.py is able to process stripmap data from a variety of sensors. So far it has been successfully tested on the following:
<br>
ALOS1 (Raw and SLC)
<br>
ALOS2 (SLC, one frame)
<br>
COSMO_SkyMed (Raw and SLC)
<br>
ERS
<br>
ENVISAT
<br>
Radarsat-1
<br>
Radarsat-2
<br>
TerraSARX
<br>
TanDEMX
<br>
Sentinel1
<br>
envisat_slc
### Sample input data xml for different sensors:
#### Envisat:
```xml
<component name="master">
<property name="IMAGEFILE">data/ASA_IMS_1PNESA20050519_140259_000000172037_00239_16826_0000.N1</property>
<property name="INSTRUMENT_DIRECTORY">/u/k-data/agram/sat_metadata/ENV/INS_DIR</property>
<property name="ORBIT_DIRECTORY">/u/k-data/agram/sat_metadata/ENV/Doris/VOR</property>
<property name="OUTPUT">
20050519
</property>
</component>
```
<br>
<div class="alert alert-info">
<b>Note :</b>
Note that for processing the ENVISAT data a directory that contains the orbits is required.
</div>
#### Sentinel-1 stripmap:
```xml
<component name="master">
<property name="orbit directory">/u/data/sat_metadata/S1/aux_poeorb/</property>
<property name="output">20151024</property>
<property name="safe">/u/data/S1A_S1_SLC__1SSV_20151024T234201_20151024T234230_008301_00BB43_068C.zip</property>
</component>
<component name="slave">
<property name="orbit directory">/u/k-raw/sat_metadata/S1/aux_poeorb/</property>
<property name="output">20150930</property>
<property name="safe">/u/data/S1A_S1_SLC__1SSV_20150930T234200_20150930T234230_007951_00B1CC_121C.zip</property>
</component>
```
<br>
<div class="alert alert-info">
<b>Note :</b>
Note that for processing the Sentinel-1 data a directory that contains the orbits is required.
</div>
#### ALOS2 SLC
```xml
<component>
<property name="IMAGEFILE">
<value>data/20141114/ALOS2025732920-141114/IMG-HH-ALOS2025732920-141114-UBSL1.1__D</value>
</property>
<property name="LEADERFILE">
<value>data/20141114/ALOS2025732920-141114/LED-ALOS2025732920-141114-UBSL1.1__D</value>
</property>
<property name="OUTPUT">
<value>20141114</value>
</property>
</component>
```
#### ALOS1 raw data
``` xml
<component>
<property name="IMAGEFILE">
<value>[data/20110119/ALPSRP265743230-L1.0/IMG-HH-ALPSRP265743230-H1.0__D]</value>
</property>
<property name="LEADERFILE">
<value>[data/20110119/ALPSRP265743230-L1.0/LED-ALPSRP265743230-H1.0__D]</value>
</property>
<property name="OUTPUT">
<value>20110119</value>
</property>
</component>
```
#### CosmoSkyMed raw or SLC data
```xml
<component name="master">
<property name="HDF5">data/CSKS3_RAW_B_HI_03_HH_RD_SF_20111007021527_20111007021534.h5</property>
<property name="OUTPUT">
20111007
</property>
</component>
```
#### TerraSAR-X and TanDEM-X
```xml
<component name="master">
<property name="xml">PATH_TO_TSX_DATA_XML</property>
<property name="OUTPUT">OUTPUT_NAME</property>
</component>
```
### Using ISCE as a python library
ISCE can be used as a Python library. Users can develop their own workflows within the ISCE framework. Here are a few simple examples where we call ISCE modules:
#### Example 1: (extract metadata, range and azimuth pixel size)
```
import isce
import isceobj
import isceobj.StripmapProc.StripmapProc as St
from isceobj.Planet.Planet import Planet
stObj = St()
stObj.configure()
frame = stObj.loadProduct("20110119_slc.xml")
print("Wavelength = {0} m".format(frame.radarWavelegth))
print("Slant Range Pixel Size = {0} m".format(frame.instrument.rangePixelSize))
#For azimuth pixel size we need to multiply azimuth time interval by the platform velocity along the track
# the acquisition time at the middle of the scene
t_mid = frame.sensingMid
#get the orbit for t_mid
st_mid=frame.orbit.interpolateOrbit(t_mid)
# platform velocity
Vs = st_mid.getScalarVelocity()
# pulse repetition frequency
prf = frame.instrument.PRF
#Azimuth time interval
ATI = 1.0/prf
#Azimuth Pixel size
az_pixel_size = ATI*Vs
print("Azimuth Pixel Size = {0} m".format(az_pixel_size))
```
#### Example 2: compute ground range pixels size
```
import numpy as np

# continues from Example 1: `frame` and `Planet` are already defined/imported
r0 = frame.startingRange
rmax = frame.getFarRange()
rng =(r0+rmax)/2
elp = Planet(pname='Earth').ellipsoid
tmid = frame.sensingMid
sv = frame.orbit.interpolateOrbit(tmid, method='hermite')
llh = elp.xyz_to_llh(sv.getPosition())
hdg = frame.orbit.getENUHeading(tmid)
elp.setSCH(llh[0], llh[1], hdg)
sch, vsch = elp.xyzdot_to_schdot(sv.getPosition(), sv.getVelocity())
Re = elp.pegRadCur
H = sch[2]
cos_beta_e = (Re**2 + (Re + H)**2 -rng**2)/(2*Re*(Re+H))
sin_beta_e = np.sqrt(1 - cos_beta_e**2)
sin_theta_i = sin_beta_e*(Re + H)/rng
print("incidence angle at the middle of the swath: ", np.arcsin(sin_theta_i)*180.0/np.pi)
groundRangeRes = frame.instrument.rangePixelSize/sin_theta_i
print("Ground range pixel size: {0} m ".format(groundRangeRes))
```
<br>
<div class="alert alert-info">
<b>Note :</b>
One can easily get the incidence angle from the los.rdr file inside geometry folder. Even without opening the file, here is a way to get the statistics and the average value of the incidence angle: gdalinfo geometry/los.rdr -stats
</div>
# Stochastic optimization landscape of a minimal MLP
In this notebook, we will try to better understand how stochastic gradient descent works. We fit a very simple non-convex model to data generated from a linear ground-truth model.
We will also observe how the (stochastic) loss landscape changes when selecting different samples.
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import torch
import torch.nn as nn
from torch.nn import Parameter
from torch.nn.functional import mse_loss
from torch.autograd import Variable
from torch.nn.functional import relu
```
Data is generated from a simple model:
$$y= 2x + \epsilon$$
where:
- $\epsilon \sim \mathcal{N}(0, 3)$
- $x \sim \mathcal{U}(-1, 1)$
```
def sample_from_ground_truth(n_samples=100, std=0.1):
x = torch.FloatTensor(n_samples, 1).uniform_(-1, 1)
epsilon = torch.FloatTensor(n_samples, 1).normal_(0, std)
y = 2 * x + epsilon
return x, y
n_samples = 100
std = 3
x, y = sample_from_ground_truth(n_samples=100, std=std)
```
We propose a minimal single hidden layer perceptron model with a single hidden unit and no bias. The model has two tunable parameters $w_1$, and $w_2$, such that:
$$f(x) = w_1 \cdot \sigma(w_2 \cdot x)$$
where $\sigma$ is the ReLU function.
```
class SimpleMLP(nn.Module):
def __init__(self, w=None):
super(SimpleMLP, self).__init__()
self.w1 = Parameter(torch.FloatTensor((1,)))
self.w2 = Parameter(torch.FloatTensor((1,)))
if w is None:
self.reset_parameters()
else:
self.set_parameters(w)
    def reset_parameters(self):
        # in-place initialization of Parameters must not be tracked by autograd
        with torch.no_grad():
            self.w1.uniform_(-.1, .1)
            self.w2.uniform_(-.1, .1)
def set_parameters(self, w):
with torch.no_grad():
self.w1[0] = w[0]
self.w2[0] = w[1]
def forward(self, x):
return self.w1 * relu(self.w2 * x)
```
As in the previous notebook, we define a function to sample from and plot loss landscapes.
```
from math import fabs
def make_grids(x, y, model_constructor, expected_risk_func, grid_size=100):
n_samples = len(x)
assert len(x) == len(y)
# Grid logic
x_max, y_max, x_min, y_min = 5, 5, -5, -5
w1 = np.linspace(x_min, x_max, grid_size, dtype=np.float32)
w2 = np.linspace(y_min, y_max, grid_size, dtype=np.float32)
W1, W2 = np.meshgrid(w1, w2)
W = np.concatenate((W1[:, :, None], W2[:, :, None]), axis=2)
W = torch.from_numpy(W)
# We will store the results in this tensor
risks = torch.FloatTensor(n_samples, grid_size, grid_size)
expected_risk = torch.FloatTensor(grid_size, grid_size)
with torch.no_grad():
for i in range(grid_size):
for j in range(grid_size):
model = model_constructor(W[i, j])
pred = model(x)
                loss = mse_loss(pred, y, reduction='none')  # per-sample losses
risks[:, i, j] = loss.view(-1)
expected_risk[i, j] = expected_risk_func(W[i, j, 0], W[i, j, 1])
empirical_risk = torch.mean(risks, dim=0)
return W1, W2, risks.numpy(), empirical_risk.numpy(), expected_risk.numpy()
def expected_risk_simple_mlp(w1, w2):
"""Question: Can you derive this your-self?"""
return .5 * (8 / 3 - (4 / 3) * w1 * w2 + 1 / 3 * w1 ** 2 * w2 ** 2) + std ** 2
```
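As a partial answer to the docstring's question, a hedged derivation sketch: with $y = 2x + \epsilon$ and $x \sim \mathcal{U}(-1, 1)$, the ReLU is active on exactly half of the domain, so $\mathbb{E}[\sigma(w_2 x)^2] = w_2^2/6$ and $\mathbb{E}[x\,\sigma(w_2 x)] = w_2/6$ for either sign of $w_2$. Expanding the squared error (with $\sigma_\epsilon = \texttt{std}$, and $\epsilon$ independent of $x$):

$$\mathbb{E}\big[(w_1 \sigma(w_2 x) - y)^2\big] = \tfrac{1}{6} w_1^2 w_2^2 - \tfrac{2}{3} w_1 w_2 + \tfrac{4}{3} + \sigma_\epsilon^2 = \tfrac{1}{2}\Big(\tfrac{8}{3} - \tfrac{4}{3} w_1 w_2 + \tfrac{1}{3} w_1^2 w_2^2\Big) + \sigma_\epsilon^2,$$

which matches the formula in `expected_risk_simple_mlp`.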
- `risks[k, i, j]` holds loss value $\ell(f(w_1^{(i)} , w_2^{(j)}, x_k), y_k)$ for a single data point $(x_k, y_k)$;
- `empirical_risk[i, j]` corresponds to the empirical risk averaged over the training data points:
$$ \frac{1}{n} \sum_{k=1}^{n} \ell(f(w_1^{(i)}, w_2^{(j)}, x_k), y_k)$$
```
W1, W2, risks, empirical_risk, expected_risk = make_grids(
x, y, SimpleMLP, expected_risk_func=expected_risk_simple_mlp)
```
Let's define our train loop and train our model:
```
from torch.optim import SGD
def train(model, x, y, lr=.1, n_epochs=1):
optimizer = SGD(model.parameters(), lr=lr)
iterate_rec = []
grad_rec = []
for epoch in range(n_epochs):
# Iterate over the dataset one sample at a time:
# batch_size=1
for this_x, this_y in zip(x, y):
this_x = this_x[None, :]
this_y = this_y[None, :]
optimizer.zero_grad()
pred = model(this_x)
loss = mse_loss(pred, this_y)
loss.backward()
with torch.no_grad():
iterate_rec.append([model.w1.clone()[0], model.w2.clone()[0]])
grad_rec.append([model.w1.grad.clone()[0], model.w2.grad.clone()[0]])
optimizer.step()
return np.array(iterate_rec), np.array(grad_rec)
init = torch.FloatTensor([3, -4])
model = SimpleMLP(init)
iterate_rec, grad_rec = train(model, x, y, lr=.01)
print(iterate_rec[-1])
```
We now plot:
- the point-wise risk at iteration $k$ on the left plot
- the total empirical risk on the center plot
- the expected risk on the right plot
Observe how empirical and expected risk differ, and how empirical risk minimization is not totally equivalent to expected risk minimization.
```
import matplotlib.colors as colors
class LevelsNormalize(colors.Normalize):
def __init__(self, levels, clip=False):
self.levels = levels
vmin, vmax = levels[0], levels[-1]
colors.Normalize.__init__(self, vmin, vmax, clip)
def __call__(self, value, clip=None):
quantiles = np.linspace(0, 1, len(self.levels))
return np.ma.masked_array(np.interp(value, self.levels, quantiles))
def plot_map(W1, W2, risks, emp_risk, exp_risk, sample, iter_):
all_risks = np.concatenate((emp_risk.ravel(), exp_risk.ravel()))
x_center, y_center = emp_risk.shape[0] // 2, emp_risk.shape[1] // 2
risk_at_center = exp_risk[x_center, y_center]
low_levels = np.percentile(all_risks[all_risks <= risk_at_center],
q=np.linspace(0, 100, 11))
high_levels = np.percentile(all_risks[all_risks > risk_at_center],
q=np.linspace(10, 100, 10))
levels = np.concatenate((low_levels, high_levels))
norm = LevelsNormalize(levels=levels)
cmap = plt.get_cmap('RdBu_r')
fig, (ax1, ax2, ax3) = plt.subplots(ncols=3, figsize=(12, 4))
risk_levels = levels.copy()
risk_levels[0] = min(risks[sample].min(), risk_levels[0])
risk_levels[-1] = max(risks[sample].max(), risk_levels[-1])
ax1.contourf(W1, W2, risks[sample], levels=risk_levels,
norm=norm, cmap=cmap)
ax1.scatter(iterate_rec[iter_, 0], iterate_rec[iter_, 1],
color='orange')
if any(grad_rec[iter_] != 0):
ax1.arrow(iterate_rec[iter_, 0], iterate_rec[iter_, 1],
-0.1 * grad_rec[iter_, 0], -0.1 * grad_rec[iter_, 1],
head_width=0.3, head_length=0.5, fc='orange', ec='orange')
ax1.set_title('Pointwise risk')
ax2.contourf(W1, W2, emp_risk, levels=levels, norm=norm, cmap=cmap)
ax2.plot(iterate_rec[:iter_ + 1, 0], iterate_rec[:iter_ + 1, 1],
linestyle='-', marker='o', markersize=6,
color='orange', linewidth=2, label='SGD trajectory')
ax2.legend()
ax2.set_title('Empirical risk')
cf = ax3.contourf(W1, W2, exp_risk, levels=levels, norm=norm, cmap=cmap)
ax3.scatter(iterate_rec[iter_, 0], iterate_rec[iter_, 1],
color='orange', label='Current sample')
ax3.set_title('Expected risk (ground truth)')
plt.colorbar(cf, ax=ax3)
ax3.legend()
fig.suptitle('Iter %i, sample % i' % (iter_, sample))
plt.show()
for sample in range(0, 100, 10):
plot_map(W1, W2, risks, empirical_risk, expected_risk, sample, sample)
```
Observe and comment.
### Exercises:
- Change the model to a completely linear one and reproduce the plots. What change do you observe regarding the plot of the stochastic loss landscape?
- Try changing the optimizer. Is it useful in this case?
- Try to initialize the model with pathological weights, e.g., symmetric ones. What do you observe?
- You may increase the number of epochs to observe slow convergence phenomena.
- Try increasing the noise in the dataset. What do you observe?
```
# %load solutions/linear_mlp.py
```
## Utilities to generate the slides figures
```
# from matplotlib.animation import FuncAnimation
# from IPython.display import HTML
# fig, ax = plt.subplots(figsize=(8, 8))
# all_risks = np.concatenate((empirical_risk.ravel(),
# expected_risk.ravel()))
# x_center, y_center = empirical_risk.shape[0] // 2, empirical_risk.shape[1] // 2
# risk_at_center = expected_risk[x_center, y_center]
# low_levels = np.percentile(all_risks[all_risks <= risk_at_center],
# q=np.linspace(0, 100, 11))
# high_levels = np.percentile(all_risks[all_risks > risk_at_center],
# q=np.linspace(10, 100, 10))
# levels = np.concatenate((low_levels, high_levels))
# norm = LevelsNormalize(levels=levels)
# cmap = plt.get_cmap('RdBu_r')
# ax.set_title('Pointwise risk')
# def animate(i):
# for c in ax.collections:
# c.remove()
# for l in ax.lines:
# l.remove()
# for p in ax.patches:
# p.remove()
# risk_levels = levels.copy()
# risk_levels[0] = min(risks[i].min(), risk_levels[0])
# risk_levels[-1] = max(risks[i].max(), risk_levels[-1])
# ax.contourf(W1, W2, risks[i], levels=risk_levels,
# norm=norm, cmap=cmap)
# ax.plot(iterate_rec[:i + 1, 0], iterate_rec[:i + 1, 1],
# linestyle='-', marker='o', markersize=6,
# color='orange', linewidth=2, label='SGD trajectory')
# return []
# anim = FuncAnimation(fig, animate,# init_func=init,
# frames=100, interval=300, blit=True)
# anim.save("stochastic_landscape_minimal_mlp.mp4")
# plt.close(fig)
# HTML(anim.to_html5_video())
# fig, ax = plt.subplots(figsize=(8, 7))
# cf = ax.contourf(W1, W2, empirical_risk, levels=levels, norm=norm, cmap=cmap)
# ax.plot(iterate_rec[:100 + 1, 0], iterate_rec[:100 + 1, 1],
# linestyle='-', marker='o', markersize=6,
# color='orange', linewidth=2, label='SGD trajectory')
# ax.legend()
# plt.colorbar(cf, ax=ax)
# ax.set_title('Empirical risk')
# fig.savefig('empirical_loss_landscape_minimal_mlp.png')
```
## Day 1: Of Numerical Integration and Python
Welcome to Day 1! Today, we start with our discussion of what Numerical Integration is.
### What is Numerical Integration?
From a theoretician's point of view, the ideal form of the solution to a differential equation with given initial conditions, i.e. an initial value problem (IVP), is a formula for the solution function. But obtaining a formulaic solution is not always easy, and in many cases it is impossible. So, what do we do when faced with a differential equation that we cannot solve? If you are only looking for the long-term behavior of a solution, you can always sketch a direction field; this can be done without too much difficulty even for some fairly complex differential equations that we can't solve exactly. But what if we need to determine how a specific solution behaves, including some of the values it takes? In that case, we have to rely on numerical methods for solving the IVP, such as Euler's method or the Runge-Kutta methods.
#### Euler's Method for Numerical Integration
We use Euler's Method to generate a numerical solution to an initial value problem of the form:
$$\frac{dx}{dt} = f(x, t)$$
$$x(t_o) = x_o$$
Firstly, we decide the interval over which we desire to find the solution, starting at the initial condition. We break this interval into small subdivisions of a fixed length $\epsilon$. Then, using the initial condition as our starting point, we generate the rest of the solution by using the iterative formulas:
$$t_{n+1} = t_n + \epsilon$$
$$x_{n+1} = x_n + \epsilon f(x_n, t_n)$$
to find the coordinates of the points in our numerical solution. We end this process once we have reached the end of the desired interval.
The best way to understand how it works is from the following diagram:
<img src="euler.png" alt="euler.png" width="400"/>
#### Euler's Method in Python
Let $\frac{dx}{dt}=f(x,t)$, we want to find $x(t)$ over $t\in[0,2)$, given that $x(0)=1$ and $f(x,t) = 5x$. The exact solution of this equation would be $x(t) = e^{5t}$.
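As a quick sanity check, the very first update can be done by hand with $\epsilon = 0.01$:
$$x_1 = x_0 + \epsilon f(x_0, t_0) = 1 + 0.01 \cdot (5 \cdot 1) = 1.05$$
The exact value is $x(0.01) = e^{0.05} \approx 1.05127$, so a single step already shows the $O(\epsilon^2)$ local error of Euler's method.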
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
def f(x,t): # define the function f(x,t)
return 5*x
epsilon = 0.01 # define timestep
t = np.arange(0,2,epsilon) # define an array for t
x = np.zeros(t.shape) # define an array for x
x[0]= 1 # set initial condition
for i in range(1,t.shape[0]):
x[i] = epsilon*f(x[i-1],t[i-1])+x[i-1] # Euler Integration Step
ax = plt.subplot(111)
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.get_xaxis().tick_bottom()
ax.get_yaxis().tick_left()
plt.plot(t[::5],x[::5],".",label="Euler's Solution")
plt.plot(t,np.exp(5*t),label="Exact Solution")
plt.xlabel("t")
plt.ylabel("x")
plt.legend()
plt.show()
```
#### Euler and Vectors
Euler's Method also applies to vectors and can solve simultaneous differential equations.
The Initial Value problem now becomes:
$$\frac{d\vec{X}}{dt} = \vec{f}(\vec{X}, t)$$
$$\vec{X}(t_o) = \vec{X_o}$$
where $\vec{X}=[X_1,X_2...]$ and $\vec{f}(\vec{X}, t)=[f_1(\vec{X}, t),f_2(\vec{X}, t)...]$.
The Euler's Method becomes:
$$t_{n+1} = t_n + \epsilon$$
$$\vec{X_{n+1}} = \vec{X_n} + \epsilon \vec{f}(\vec{X_n}, t_n)$$
Let $\frac{d\vec{X}}{dt}=f(\vec{X},t)$, we want to find $\vec{X}(t)$ over $t\in[0,2)$, given that $\vec{X}(t)=[x,y]$, $\vec{X}(0)=[1,0]$ and $f(\vec{X},t) = [x-y,y-x]$.
```
def f(X,t): # define the function f(x,t)
x,y = X
return np.array([x-y,y-x])
epsilon = 0.01 # define timestep
t = np.arange(0,2,epsilon) # define an array for t
X = np.zeros((2,t.shape[0])) # define an array for x
X[:,0]= [1,0] # set initial condition
for i in range(1,t.shape[0]):
X[:,i] = epsilon*f(X[:,i-1],t[i-1])+X[:,i-1] # Euler Integration Step
plt.plot(t[::5],X[0,::5],".",label="Euler's Solution for x")
plt.plot(t[::5],X[1,::5],".",label="Euler's Solution for y")
plt.xlabel("t")
plt.ylabel("x")
plt.legend()
plt.show()
```
#### A Generalized function for Euler Integration
Now, we create a generalized function that takes three inputs: the function $\vec{f}(\vec{y},t)$ where $\frac{d\vec{y}}{dt}=\vec{f}(\vec{y},t)$, the time array, and the initial vector $\vec{y_0}$.
##### Algorithm
- Get the required inputs: the function $\vec{f}(\vec{y},t)$, the initial condition vector $\vec{y_0}$, and the time series $t$. Passing in a time series $t$ allows for greater control over $\epsilon$, which can now vary at each timestep. The only difference in Euler's method is then $\epsilon\rightarrow\epsilon(t_n)$.
- Check that the inputs have the correct datatype, i.e., floating point.
- Create a zero matrix to hold the output.
- For each timestep, perform the Euler update with the variable $\epsilon$ and store the result in the output matrix.
- Return the output time-series matrix.
```
def check_type(y,t): # ensure inputs are floating-point arrays
    return np.issubdtype(y.dtype, np.floating) and np.issubdtype(t.dtype, np.floating)
class _Integrator():
def integrate(self,func,y0,t):
time_delta_grid = t[1:] - t[:-1]
y = np.zeros((y0.shape[0],t.shape[0]))
y[:,0] = y0
for i in range(time_delta_grid.shape[0]):
y[:,i+1]= time_delta_grid[i]*func(y[:,i],t[i])+y[:,i]
return y
def odeint_euler(func,y0,t):
y0 = np.array(y0)
t = np.array(t)
if check_type(y0,t):
return _Integrator().integrate(func,y0,t)
else:
print("error encountered")
solution = odeint_euler(f,[1.,0.],t)
plt.plot(t[::5],solution[0,::5],".",label="Euler's Solution for x")
plt.plot(t[::5],solution[1,::5],".",label="Euler's Solution for y")
plt.xlabel("t")
plt.ylabel("X")
plt.legend()
plt.show()
```
#### Runge-Kutta Methods for Numerical Integration
The formula for the Euler method is $x_{n+1}=x_n + \epsilon f(x_n,t_n)$, which advances a solution from $t_n$ to $t_{n+1}=t_n+\epsilon$. One might notice an inherent asymmetry in the formula: it advances the solution through an interval $\epsilon$, but uses derivative information only at the start of that interval. This results in a local error of order $O(\epsilon^2)$. But what if we take a trial step and evaluate the derivative at the midpoint of the update interval to compute $x_{n+1}$? Take the equations:
$$k_1=\epsilon f(x_n,t_n)$$
$$k_2=\epsilon f(x_n+\frac{k_1}{2},t_n+\frac{\epsilon}{2})$$
$$x_{n+1}=x_n+k_2+O(\epsilon^3)$$
The symmetrization removes the $O(\epsilon^2)$ error term; the method is now second order and is called the second-order Runge-Kutta method, or the midpoint method. You can look at this method graphically as follows:
<img src="rk2.png" alt="rk2.png" width="400"/>
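The midpoint (RK2) update can be sketched in Python on the same scalar test problem $\frac{dx}{dt}=5x$, $x(0)=1$ used above (`f` and `epsilon` are redefined here so the cell is self-contained):

```python
import numpy as np

def f(x, t):  # same test equation as before: dx/dt = 5x
    return 5 * x

epsilon = 0.01                 # timestep
t = np.arange(0, 2, epsilon)   # time grid over [0, 2)
x = np.zeros(t.shape)
x[0] = 1.0                     # initial condition
for i in range(1, t.shape[0]):
    k1 = epsilon * f(x[i - 1], t[i - 1])                         # trial step
    k2 = epsilon * f(x[i - 1] + k1 / 2, t[i - 1] + epsilon / 2)  # slope at the midpoint
    x[i] = x[i - 1] + k2                                         # RK2 update
```

Compared with the Euler run above, the relative error against $e^{5t}$ near $t=2$ drops from tens of percent to well under one percent.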
But we do not have to stop here. By further rewriting the equations, we can cancel higher-order error terms and reach the most commonly used scheme, the fourth-order Runge-Kutta (RK4) method, described below:
$$k_1=f(x_n,t_n)$$
$$k_2=f(x_n+\epsilon\frac{k_1}{2},t_n+\frac{\epsilon}{2})$$
$$k_3=f(x_n+\epsilon\frac{k_2}{2},t_n+\frac{\epsilon}{2})$$
$$k_4=f(x_n+\epsilon k_3,t_n+\epsilon)$$
$$x_{n+1}=x_n+\frac{\epsilon}{6}(k_1+2 k_2+2 k_3+k_4)+O(\epsilon^5)$$
Note that this numerical method is again easily converted to a vector algorithm by simply replacing $x_i$ by the vector $\vec{X_i}$.
This method is what we will use to simulate our networks.
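The error orders quoted above can be checked numerically; here is a small self-contained sketch, again using $\frac{dx}{dt}=5x$, $x(0)=1$ as the test problem. Halving $\epsilon$ should roughly halve Euler's global error at a fixed end time, but shrink RK4's by about $2^4 = 16$.

```python
import numpy as np

def f(x, t):
    return 5 * x

def euler_step(x, t, eps):
    return x + eps * f(x, t)

def rk4_step(x, t, eps):
    k1 = f(x, t)
    k2 = f(x + eps * k1 / 2, t + eps / 2)
    k3 = f(x + eps * k2 / 2, t + eps / 2)
    k4 = f(x + eps * k3, t + eps)
    return x + eps / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def global_error(step, eps, n_steps):
    x, t = 1.0, 0.0
    for _ in range(n_steps):
        x = step(x, t, eps)
        t += eps
    return abs(x - np.exp(5 * t))  # error against the exact solution at t = n_steps * eps

# same end time t = 1 with step sizes 0.01 and 0.005
euler_ratio = global_error(euler_step, 0.01, 100) / global_error(euler_step, 0.005, 200)
rk4_ratio = global_error(rk4_step, 0.01, 100) / global_error(rk4_step, 0.005, 200)
print(euler_ratio, rk4_ratio)  # roughly 2 and roughly 16
```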
#### Generalized RK4 Method in Python
Just like we created a function for Euler integration in Python, we create a generalized function for RK4 that takes three inputs: the function $\vec{f}(\vec{y},t)$ where $\frac{d\vec{y}}{dt}=\vec{f}(\vec{y},t)$, the time array, and the initial vector $\vec{y_0}$. We then perform exactly the same integration as with Euler's method; everything remains the same except that we replace the Euler update rule with the RK4 update rule.
```
def check_type(y,t): # ensure inputs are floating-point arrays
    return np.issubdtype(y.dtype, np.floating) and np.issubdtype(t.dtype, np.floating)
class _Integrator():
def integrate(self,func,y0,t):
time_delta_grid = t[1:] - t[:-1]
y = np.zeros((y0.shape[0],t.shape[0]))
y[:,0] = y0
        for i in range(time_delta_grid.shape[0]):
            k1 = func(y[:,i], t[i]) # RK4 Integration Steps
            half_step = t[i] + time_delta_grid[i] / 2
            k2 = func(y[:,i] + time_delta_grid[i] * k1 / 2, half_step)
            k3 = func(y[:,i] + time_delta_grid[i] * k2 / 2, half_step)
            k4 = func(y[:,i] + time_delta_grid[i] * k3, t[i] + time_delta_grid[i]) # use t[i], not the whole array t
            y[:,i+1]= (k1 + 2 * k2 + 2 * k3 + k4) * (time_delta_grid[i] / 6) + y[:,i]
return y
def odeint_rk4(func,y0,t):
y0 = np.array(y0)
t = np.array(t)
if check_type(y0,t):
return _Integrator().integrate(func,y0,t)
else:
print("error encountered")
solution = odeint_rk4(f,[1.,0.],t)
plt.plot(t[::5],solution[0,::5],".",label="RK4 Solution for x")
plt.plot(t[::5],solution[1,::5],".",label="RK4 Solution for y")
plt.xlabel("t")
plt.ylabel("X")
plt.legend()
plt.show()
```
As an **Exercise**, try to solve the equation of motion of a simple pendulum and observe its dynamics using the Euler and RK4 methods. The equation of motion of a simple pendulum is given by: $$\frac{d^2s}{dt^2}=L\frac{d^2\theta}{dt^2}=-g\sin{\theta}$$ where $L$ = length of the string and $\theta$ = angle made with the vertical. To solve this second-order differential equation you may introduce a dummy variable $\omega$ representing the angular velocity such that:
$$\frac{d\theta}{dt}=\omega$$
$$\frac{d\omega}{dt}=-\frac{g}{L}\sin{\theta}$$
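One possible sketch of this exercise with the RK4 update (the values of $g$, $L$, and the initial angle are arbitrary choices here, not part of the exercise statement):

```python
import numpy as np

g, L = 9.81, 1.0  # arbitrary parameter choices for this sketch

def f(X, t):
    theta, omega = X
    return np.array([omega, -(g / L) * np.sin(theta)])

epsilon = 0.001
t = np.arange(0, 10, epsilon)
X = np.zeros((2, t.shape[0]))
X[:, 0] = [0.5, 0.0]  # released from rest at theta = 0.5 rad
for i in range(1, t.shape[0]):
    k1 = f(X[:, i - 1], t[i - 1])
    k2 = f(X[:, i - 1] + epsilon * k1 / 2, t[i - 1] + epsilon / 2)
    k3 = f(X[:, i - 1] + epsilon * k2 / 2, t[i - 1] + epsilon / 2)
    k4 = f(X[:, i - 1] + epsilon * k3, t[i - 1] + epsilon)
    X[:, i] = X[:, i - 1] + epsilon / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
```

With RK4 the oscillation amplitude stays at about 0.5 rad, as energy conservation demands; repeating the loop with the Euler update instead makes the amplitude slowly grow over the same interval.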
```
import numpy as np
import pandas as pd
import pickle
import math
import os
import matplotlib.pyplot as plt
import collections
from scipy import stats
from sklearn.preprocessing import MinMaxScaler
###################
n_f=7
n_node=365
###################
#label - load fact
y_area=np.load('data/data_cablearea.npy');print('y_area.shape',y_area.shape)
### node data
#location feature
node_loc=(pd.read_excel('data/raw_data/node_location.xlsx',header=None).values)
#node_loc=np.array([node_loc]*1201);print('node_loc',node_loc.shape)
nf=np.load('data/nf.npy')
disp=np.load('data/disp.npy')#(12000, 365, 7)
#x=np.concatenate((disp,node_loc),axis=-1);
x=disp.copy();print(x.shape)#(1201, 365, 10)
#############################
features=x.copy()
label_area=y_area.copy()
for i in range(7):
print(np.min(features[:,:,i]),np.max(features[:,:,i]),np.mean(features[:,:,i]),np.std(features[:,:,i]))
for i in range(7): print(np.min(nf[:,i]),np.max(nf[:,i]),np.mean(nf[:,i]),np.std(nf[:,i]))
#split
np.random.seed(1)
indices = np.random.permutation(len(features));print(indices[0])
n_training=int(len(features)*0.6)+4#6:1:3#7:1:2
n_val=int(len(features)*0.2)-2
n_test=len(features)-n_training-n_val
training_idx, val_idx,test_idx = indices[:n_training], indices[n_training:n_training+n_val], indices[-n_test:]
f_training, f_val, f_test = features[training_idx],features[val_idx], features[test_idx]
print(f_training.shape,f_val.shape,f_test.shape)
l_training, l_val, l_test = label_area[training_idx],label_area[val_idx], label_area[test_idx]
print(l_training.shape,l_val.shape,l_test.shape)
#multi task
def getting_multi(label_data,sf,mode):
c_data=np.argmin(label_data,axis=1)
a_data=np.min(label_data,axis=1)
#c_data=c_data+1;c_data[a_data==1e-06]=0
c_data=c_data+1
c_data[a_data>0.999]=0;print('safe',np.sum(c_data==0))
a_data=a_data[:,np.newaxis]
bce_data = np.zeros((len(c_data), 121))
bce_data[np.arange(len(c_data)),c_data] = 1
if sf:
np.savez_compressed('data/label_multitask/c_label_'+mode,c_data)
np.savez_compressed('data/label_multitask/a_label_'+mode,a_data)
np.savez_compressed('data/label_multitask_bce/c_label_'+mode,bce_data)
np.savez_compressed('data/label_multitask_bce/a_label_'+mode,a_data)
return c_data,a_data,bce_data
save_flag=True#False#
c_training,a_training,bce_training=getting_multi(l_training,sf=save_flag,mode='train')
c_val,a_val,bce_val=getting_multi(l_val,sf=save_flag,mode='val')
c_test,a_test,bce_test=getting_multi(l_test,sf=save_flag,mode='test')
#save
save_flag=True#False#
if save_flag:
data_path='data/data_mpnn/'
np.savez_compressed(data_path+'features_train',f_training)
np.savez_compressed(data_path+'features_val',f_val)
np.savez_compressed(data_path+'features_test',f_test)
data_path='data/label/'
np.savez_compressed(data_path+'label_train',l_training)
np.savez_compressed(data_path+'label_val',l_val)
np.savez_compressed(data_path+'label_test',l_test)
import numpy as np
import pickle
import collections
from sklearn.preprocessing import MinMaxScaler
adj_mx = np.load('data/sensor_graph/adj_mx.npy')
for i in range(len(adj_mx)): adj_mx[i,i]=0
print(adj_mx.shape,np.sum(adj_mx))
edge_data=np.array([[i,j] for i,j in zip(np.where(adj_mx==1)[0],np.where(adj_mx==1)[1])])
#####distance
edge_d=np.load('data/sensor_graph/dist_mx.npy')
for i in range(len(adj_mx)): edge_d[i,i]=0
edge_d=edge_d[edge_data[:,0],edge_data[:,1]]
#####virtual
edge_v=np.load('data/sensor_graph/virtual_mx.npy')
for i in range(len(adj_mx)): edge_v[i,i]=0
edge_v=edge_v[edge_data[:,0],edge_data[:,1]]
#####element type
#type 1
edge_t1=np.load('data/sensor_graph/type_c_mx.npy')
for i in range(len(adj_mx)): edge_t1[i,i]=0
edge_t1=edge_t1[edge_data[:,0],edge_data[:,1]]
#type 2
edge_t2=np.load('data/sensor_graph/type_t_mx.npy')
for i in range(len(adj_mx)): edge_t2[i,i]=0
edge_t2=edge_t2[edge_data[:,0],edge_data[:,1]]
#type 3
edge_t3=np.load('data/sensor_graph/type_g_mx.npy')
for i in range(len(adj_mx)): edge_t3[i,i]=0
edge_t3=edge_t3[edge_data[:,0],edge_data[:,1]]
#type 4
edge_t4=np.load('data/sensor_graph/type_cb_mx.npy')
for i in range(len(adj_mx)): edge_t4[i,i]=0
edge_t4=edge_t4[edge_data[:,0],edge_data[:,1]]
#concatenate
edge_f=np.stack([edge_d,edge_v,edge_t1,edge_t2,edge_t3,edge_t4],axis=1)#edge_f=np.stack([edge_d,edge_e,edge_a,edge_v],axis=1)
print(edge_data.shape,edge_f.shape)
#scale
sc= MinMaxScaler(feature_range=(-1,1)).fit(edge_f[:,:1])
edge_f[:,:1]=sc.transform(edge_f[:,:1])
for i in range(edge_f.shape[1]):
print(i,np.min(edge_f[...,i]),np.max(edge_f[...,i]),np.mean(edge_f[...,i]),np.std(edge_f[...,i]))
save_flag=True#False#
if save_flag:
data_path='data/data_mpnn/'
np.savez_compressed(data_path+'edge_features',edge_f)
np.savez_compressed(data_path+'edge_data',edge_data)
```
```
%matplotlib inline
import cosmo_metric_utils as cmu
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import matplotlib.colors as colors
import matplotlib.cm as cmx
from mpl_toolkits.axes_grid1 import make_axes_locatable
import os
import glob
#dist_loc_base = '/media/RESSPECT/data/PLAsTiCC/for_metrics/wfd/distances/omprior_0.01_flat/emille_samples/' #mu_photoIa_plasticc*'
#file_extension = 'WFD'
dist_loc_base = '/media/RESSPECT/data/PLAsTiCC/for_metrics/ddf/distances/omprior_0.01_flat/emille_samples/' #mu_photoIa_plasticc*'
file_extension = 'DDF'
from collections import OrderedDict
remap_dict = OrderedDict({
'perfect3000': 'Perfect',
'fiducial3000': 'Fiducial',
'random3000fail2998': 'Random',
'random3000': 'Random',
'all_objs_survived_SALT2_DDF' : 'All SALT',
'all_objs_survived_SALT2_WFD': 'All SALT',
'50SNIa50SNII': 'SN-II 50',
'68SNIa32SNII': 'SN-II 32',
'72SNIa28SNII': 'SN-II 28',
'75SNIa25SNII': 'SN-II 25',
'90SNIa10SNII': 'SN-II 10',
'95SNIa5SNII': 'SN-II 5',
'98SNIa2SNII': 'SN-II 2',
'99SNIa1SNII': 'SN-II 1',
'50SNIa50SNIbc': 'SN-Ibc 50',
'68SNIa32SNIbc': 'SN-Ibc 32',
'75SNIa25SNIbc': 'SN-Ibc 25',
'83SNIa17SNIbc': 'SN-Ibc 17',
'90SNIa10SNIbc': 'SN-Ibc 10',
'95SNIa5SNIbc': 'SN-Ibc 5',
'98SNIa2SNIbc': 'SN-Ibc 2',
'99SNIa1SNIbc': 'SN-Ibc 1',
'50SNIa50SNIax': 'SN-Iax 50',
'68SNIa32SNIax': 'SN-Iax 32',
'75SNIa25SNIax': 'SN-Iax 25',
'86SNIa14SNIax': 'SN-Iax 14',
'90SNIa10SNIax': 'SN-Iax 10',
'94SNIa6SNIax': 'SN-Iax 6',
'95SNIa5SNIax': 'SN-Iax 5',
'97SNIa3SNIax': 'SN-Iax 3',
'98SNIa2SNIax': 'SN-Iax 2',
'99SNIa1SNIax': 'SN-Iax 1',
'71SNIa29SNIa-91bg': 'SN-Ia-91bg 29',
'75SNIa25SNIa-91bg': 'SN-Ia-91bg 25',
'90SNIa10SNIa-91bg': 'SN-Ia-91bg 10',
'95SNIa5SNIa-91bg': 'SN-Ia-91bg 5',
'98SNIa2SNIa-91bg': 'SN-Ia-91bg 2',
'99SNIa1SNIa-91bg': 'SN-Ia-91bg 1',
'99.8SNIa0.2SNIa-91bg': 'SN-Ia-91bg 0.2',
'57SNIa43AGN': 'AGN 43',
'75SNIa25AGN': 'AGN 25',
'90SNIa10AGN': 'AGN 10',
'94SNIa6AGN': 'AGN 6',
'95SNIa5AGN': 'AGN 5',
'98SNIa2AGN': 'AGN 2',
'99SNIa1AGN': 'AGN 1',
'99.9SNIa0.1AGN': 'AGN 0.1',
    '83SNIa17SLSN-I': 'SLSN-I 17',
    '90SNIa10SLSN-I': 'SLSN-I 10',
    '95SNIa5SLSN-I': 'SLSN-I 5',
    '98SNIa2SLSN-I': 'SLSN-I 2',
    '99SNIa1SLSN-I': 'SLSN-I 1',
    '99.9SNIa0.1SLSN': 'SLSN-I 0.1',
'95SNIa5TDE': 'TDE 5',
'98SNIa2TDE': 'TDE 2',
'99SNIa1TDE': 'TDE 1',
'99.6SNIa0.4TDE': 'TDE 0.4',
'99.1SNIa0.9CART': 'CART 0.9',
'99.7SNIa0.3CART': 'CART 0.3'
})
all_shapes = {'SNIa-91bg': 'o',
'SNIax': 's',
'SNII': 'd',
'SNIbc': 'X',
'SLSN-I': 'v',
'AGN': '^',
'TDE': '<',
'KN': '>',
'CART': 'v'}
# Mapping the percent contaminated to the colormap.
## size corresponds to remap_dict
color_nums = np.array([1, 1, 1, 1, 1, 1, # Special
50, 32, 28, 25, 10, 5, 2, 1, # II
50, 32, 25, 17, 10, 5, 2, 1, # Ibc
50, 32, 25, 14, 10, 6, 5, 3, 2, 1, # Iax
29, 25, 10, 5, 2, 1, 1, # 91bg
43, 25, 10, 6, 5, 2, 1, 1, # AGN
17, 10, 5, 2, 1, 1, # SNLS
5, 2, 1, 1, # TDE
1, 1, # CART
])
# Color map
rainbow = plt.get_cmap('plasma_r')
cNorm = colors.LogNorm(vmin=1, vmax=52) #colors.Normalize(vmin=0, vmax=50)
scalarMap = cmx.ScalarMappable(norm=cNorm, cmap=rainbow)
color_map = scalarMap.to_rgba(np.arange(1, 52))
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(5, 10), sharey=True)
tick_lbls = []
ax1.axvline(-1, color='c', ls='--')
df = pd.read_csv(dist_loc_base + 'stan_input_salt2mu_lowz_withbias_perfect3000.csv')
sig_perf = cmu.fisher_results(df['z'].values, df['muerr'].values)[0]
ax1.axvspan(-1 - sig_perf[1],
-1 + sig_perf[1], alpha=0.1, color='grey')
ax2.axvline(0, color='k')
file_base = dist_loc_base + 'stan_input_salt2mu_lowz_withbias_'
i = 0
i_list = []
for j, (a, c) in enumerate(zip(remap_dict, color_nums)):
    class_ = str.split(remap_dict[a])[0]
    if '91bg' in class_:
        class_ = 'SNIa-91bg'
    elif 'SLSN' in class_:
        class_ = 'SLSN-I'  # keep the hyphenated key used in all_shapes
    else:
        class_ = class_.replace('-', '')
#mfc='none'
if 'DDF' in file_extension:
if 'fiducial' in a:
mfc = 'tab:blue'
elif 'random' in a:
mfc = 'tab:red'
elif 'perfect' in a:
mfc = 'k'
else:
mfc = color_map[c]
if 'WFD' in file_extension:
mfc = "none"
try:
file = glob.glob(file_base + a + '.csv')
df = pd.read_csv(str(file[0]))
sig = cmu.fisher_results(df['z'].values, df['muerr'].values)[0]
if 'perfect' in a:
ax1.plot(-1, -i, ms=10, color='k', marker='*', mfc=mfc)
ax1.plot([-1 - sig[1], -1 + sig[1]], [-i, -i], "|-", ms=10, color='k', mfc=mfc)
elif 'random' in a:
ax1.plot(-1, -i, ms=10, color='tab:red', marker='*', mfc=mfc)
ax1.plot([-1 - sig[1], -1 + sig[1]], [-i, -i], "|-", ms=10, color='tab:red')
ax2.plot((sig[1]-sig_perf[1])/sig_perf[1], -i, 'o', color='tab:red', marker='*', ms=10, mfc=mfc)
elif 'fiducial' in a:
ax1.plot(-1, -i, ms=10, color='tab:blue', marker='*', mfc=mfc)
ax1.plot([-1 - sig[1], -1 + sig[1]], [-i, -i], "|-", ms=10, color='tab:blue')
ax2.plot((sig[1]-sig_perf[1])/sig_perf[1], -i, color='tab:blue', marker='*', ms=10, mfc=mfc)
else:
ax1.plot(-1, -i, marker=all_shapes[class_], ms=10, color=color_map[c], mfc=mfc)
ax1.plot([-1 - sig[1], -1 + sig[1]], [-i, -i], "|-", ms=10, color=color_map[c])
ax2.plot((sig[1]-sig_perf[1])/sig_perf[1], -i, color=color_map[c], marker=all_shapes[class_], ms=10, mfc=mfc)
tick_lbls.append(remap_dict[a])
i_list.append(-i)
i +=2
if 'random' in a or '99SNIa1' in a:
if 'DDF' in file_extension:
if 'AGN' in a or '91bg' in a or 'CART' in a:
continue
else:
i_list.append(i)
i += 1.6
tick_lbls.append('')
elif 'SNIa0' in a and 'CART' not in a:
i_list.append(-i)
i += 1.6
tick_lbls.append('')
    except (IndexError, KeyError):
        print("Missing: ", a)
tick_locs = i_list[::-1]  # np.arange(-len(tick_lbls)+1, 1)
ax1.set_yticks(tick_locs)
ax1.set_yticklabels(tick_lbls[::-1], fontsize=13)
ax1.set_ylim(i_list[-1]-0.7, i_list[0]+0.7)
#(-len(tick_lbls)+0.5, 0.5)
#ax1.set_xscale('log')
ax1.set_xlabel('Fisher Matrix', fontsize=13)
ax2.set_xlabel('Fractional difference', fontsize=13)
yticks = ax1.yaxis.get_major_ticks()
yticks2 = ax2.yaxis.get_major_ticks()
#ticks = [-4, -11, -15, -21]
#for t in ticks:
# yticks[t].set_visible(False)
# yticks2[t].set_visible(False)
#plt.savefig('fisher_matrix_' + file_extension + '_20210603.pdf', bbox_inches='tight')
```
```
%load_ext autoreload
%autoreload 2
import torch
from torch.nn import functional as F
from torch import nn
from pytorch_lightning.core.lightning import LightningModule
import pytorch_lightning as pl
import torch.optim as optim
import torchvision
import torchvision.datasets as datasets
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
from src.models import *
# from ilan_src.models import *
from src.dataloader import *
from src.utils import *
from src.evaluation import *
import xarray as xr
import numpy as np
import matplotlib.pyplot as plt
import pickle
if torch.cuda.is_available():
device = torch.device("cuda")
else:
device = torch.device("cpu")
import sys
```
## Set up dataset - my way
```
members = 10
zero_noise = False
DATADRIVE = '/home/jupyter/data/'
# ds_train = TiggeMRMSDataset(
# tigge_dir=f'{DATADRIVE}/tigge/32km/',
# tigge_vars=['total_precipitation_ens10'],
# mrms_dir=f'{DATADRIVE}/mrms/4km/RadarOnly_QPE_06H/',
# rq_fn=f'{DATADRIVE}/mrms/4km/RadarQuality.nc',
# data_period=('2018-01', '2019-12'),
# val_days=5,
# split='train',
# tp_log=0.01,
# ensemble_mode='random',
# idx_stride=16
# )
# ds_train.mins.to_netcdf('tmp/mins1.nc')
# ds_train.maxs.to_netcdf('tmp/maxs1.nc')
mins = xr.open_dataset('tmp/mins1.nc')
maxs = xr.open_dataset('tmp/maxs1.nc')
ds_test = TiggeMRMSDataset(
tigge_dir=f'{DATADRIVE}/tigge/32km/',
tigge_vars=['total_precipitation_ens10'],
mrms_dir=f'{DATADRIVE}/mrms/4km/RadarOnly_QPE_06H/',
# rq_fn=f'{DATADRIVE}/mrms/4km/RadarQuality.nc',
data_period=('2020-01', '2020-12'),
first_days=5,
tp_log=0.01,
mins=mins,
maxs=maxs,
ensemble_mode='random',
idx_stride=16
)
# For ens_tp with stuff
ds_test_pad = TiggeMRMSDataset(
tigge_dir=f'{DATADRIVE}/tigge/32km/',
tigge_vars=['total_precipitation_ens10'],
mrms_dir=f'{DATADRIVE}/mrms/4km/RadarOnly_QPE_06H/',
# rq_fn=f'{DATADRIVE}/mrms/4km/RadarQuality.nc',
data_period=('2020-01', '2020-12'),
first_days=5,
tp_log=0.01,
mins=mins,
maxs=maxs,
ensemble_mode='random',
idx_stride=16,
pad_tigge=10,
pad_tigge_channel=True,
)
```
## Load models
### Model 1: single_forecast_tp_pure_sr_pretraining
```
name='single_forecast_tp_pure_sr_pretraining'
zero_noise=True
model_dir = '/home/jupyter/data/saved_models/saved_models/leingan/single_forecast_tp_pure_sr_pretraining/0'
sys.path.append(model_dir)
gan = BaseGAN2.load_from_checkpoint(
f"{model_dir}/epoch=199-step=133999.ckpt")
model = gan.gen
model = model.to(device)
model.train(False);
```
## Model 2: ens_mean_L1_weighted_gen_loss
```
name='ens_mean_L1_weighted_gen_loss'
model_dir = '/home/jupyter/data/saved_models/saved_models/leingan/ens10_tp/random/ens_mean_L1_weighted_gen_loss/3'
sys.path.append(model_dir)
gan = BaseGAN2.load_from_checkpoint(
f"{model_dir}/epoch=349-step=234499.ckpt")
model = gan.gen
model = model.to(device)
model.train(False);
```
### Model 3: ens10_tp_and_added_vars_TCW_broadfield_channel
```
name='ens10_tp_and_added_vars_TCW_broadfield_channel'
ds_test = ds_test_pad
model_dir = '/home/jupyter/data/saved_models/saved_models/leingan/ens10_tp_and_added_vars_TCW_broadfield_channel/15'
sys.path.append(model_dir)
sys.path
gan = BaseGAN2.load_from_checkpoint(
f"{model_dir}/epoch=499-step=258499.ckpt")
model = gan.gen
model = model.to(device)
model.train(False);
```
## Full field eval
```
def create_valid_predictions(model, ds_valid, member_idx=None, zero_noise=False):
# Get predictions for full field
preds = []
for t in tqdm(range(len(ds_valid.tigge.valid_time))):
X, y = ds_valid.return_full_array(t, member_idx=member_idx)
noise = torch.randn(1, X.shape[0], X.shape[1], X.shape[2]).to(device)
if zero_noise:
noise *= 0
pred = model(torch.FloatTensor(X[None]).to(device), noise).to('cpu').detach().numpy()[0, 0]
preds.append(pred)
preds = np.array(preds)
# Unscale
preds = preds * (ds_valid.maxs.tp.values - ds_valid.mins.tp.values) + ds_valid.mins.tp.values
# Un-log
if ds_valid.tp_log:
preds = log_retrans(preds, ds_valid.tp_log)
# Convert to xarray
preds = xr.DataArray(
preds,
dims=['valid_time', 'lat', 'lon'],
coords={
'valid_time': ds_valid.tigge.valid_time,
'lat': ds_valid.mrms.lat.isel(
lat=slice(ds_valid.pad_mrms, ds_valid.pad_mrms+preds.shape[1])
),
'lon': ds_valid.mrms.lon.isel(
lon=slice(ds_valid.pad_mrms, ds_valid.pad_mrms+preds.shape[2])
)
},
name='tp'
)
return preds
def create_stitched_predictions(model, ds_test, member_idx, zero_noise=False):
preds = ds_test.mrms.copy(True) * np.NaN
for idx in tqdm(range(len(ds_test.idxs))):
time_idx, lat_idx, lon_idx = ds_test.idxs[idx]
lat_slice = slice(lat_idx * ds_test.ratio, lat_idx * ds_test.ratio + ds_test.patch_mrms)
lon_slice = slice(lon_idx * ds_test.ratio, lon_idx * ds_test.ratio + ds_test.patch_mrms)
X, y = ds_test.__getitem__(idx, member_idx=member_idx)
noise = torch.randn(1, X.shape[0], X.shape[1], X.shape[2]).to(device)
if zero_noise:
noise *= 0
p = model(torch.FloatTensor(X[None]).to(device), noise).to('cpu').detach().numpy()[0, 0]
preds[time_idx, lat_slice, lon_slice] = p
# Unscale
preds = preds * (ds_test.maxs.tp.values - ds_test.mins.tp.values) + ds_test.mins.tp.values
# Un-log
if ds_test.tp_log:
preds = log_retrans(preds, ds_test.tp_log)
preds = preds.rename({'time': 'valid_time'})
return preds
def create_valid_ensemble(model, ds_valid, nens, stitched=False, zero_noise=False):
"""Wrapper to create ensemble"""
if stitched:
fn = create_stitched_predictions
else:
fn = create_valid_predictions
preds = [fn(model, ds_valid, member_idx=member_idx, zero_noise=zero_noise) for member_idx in range(nens)]
return xr.concat(preds, 'member')
ens_pred = create_valid_ensemble(model, ds_test, members, zero_noise=zero_noise)
ens_pred_stitched = create_valid_ensemble(model, ds_test, members, stitched=True, zero_noise=zero_noise)
ens_pred.to_netcdf(f'tmp/ens_pred_{name}.nc')
ens_pred_stitched.to_netcdf(f'tmp/ens_pred_stitched_{name}.nc')
ens_pred.isel(valid_time=1).plot(vmin=0, vmax=20, cmap='gist_ncar_r', col='member')
```
## Get ground truth
```
mrms = ds_test.mrms.rename(
{'time': 'valid_time'}) * ds_test.maxs.tp.values
mrms = log_retrans(mrms, ds_test.tp_log)
mrms.to_netcdf('tmp/mrms.nc')
```
## Get interpolation baseline
```
tigge = ds_test.tigge.isel(variable=0) * ds_test.maxs.tp.values
tigge = log_retrans(tigge, ds_test.tp_log)
interp = tigge.interp_like(mrms, method='linear')
interp.to_netcdf('tmp/interp_ens.nc')
```
### HREF
```
href = xr.open_mfdataset('/home/jupyter/data/hrefv2//4km/total_precipitation/2020*.nc')
href = href.tp.diff('lead_time').sel(lead_time=np.timedelta64(12, 'h'))
href['valid_time'] = href.init_time + href.lead_time
href = href.swap_dims({'init_time': 'valid_time'})
href = href.assign_coords({'lat': interp.lat.values, 'lon': interp.lon.values})
overlap_times = np.intersect1d(interp.valid_time, href.valid_time)
href = href.sel(valid_time=overlap_times)
href.load();
href.to_netcdf('tmp/href.nc')
```
# Old
## Get mask
```
ds = xr.open_dataset(
'/home/jupyter/data/hrrr/raw/total_precipitation/20180215_00.nc')
from src.regrid import *
ds_regridded = regrid(ds, 4, lons=(235, 290), lats=(50, 20))
hrrr_mask = np.isfinite(ds_regridded).tp.isel(init_time=0, lead_time=0)
rq.plot(vmin=0, vmax=1)
(rq>0.3).plot(vmin=0, vmax=1)
mrms_mask.plot(vmin=0, vmax=1)
rq = xr.open_dataarray(f'{DATADRIVE}/mrms/4km/RadarQuality.nc')
mrms_mask = rq>-1
mrms_mask = mrms_mask.assign_coords({
'lat': hrrr_mask.lat,
'lon': hrrr_mask.lon
})
total_mask = mrms_mask * hrrr_mask
total_mask = total_mask.isel(lat=slice(0, -6))
total_mask = total_mask.assign_coords({'lat': interp.lat.values, 'lon': interp.lon.values})
total_mask.plot()
```
## Compute scores
```
hrrr = hrrr.isel(lat=slice(0, -6))
hrrr = hrrr.assign_coords({'lat': interp.lat.values, 'lon': interp.lon.values})
# Apply mask
mrms = mrms.where(total_mask)
det_pred = det_pred.where(total_mask)
hrrr = hrrr.where(total_mask)
interp = interp.where(total_mask)
det_pred2 = det_pred2.where(total_mask)
hrrr.load()
```
## Bias
```
mrms.mean().values
det_pred.mean().values
interp.mean().values
hrrr.mean().values
```
### Histograms
```
bins = np.logspace(0, 2, 25)-1
mid_bin = (bins[1:] + bins[:-1])/2
def plot_hist(ds, bins, label):
nums, bins = np.histogram(ds.values, bins=bins)
plt.plot(mid_bin, nums, marker='o', label=label)
plt.figure(figsize=(10, 5))
plot_hist(det_pred, bins, 'GAN')
plot_hist(mrms, bins, 'Obs')
plot_hist(interp, bins, 'Interp')
plot_hist(hrrr, bins, 'HRRR')
plt.yscale('log')
plt.legend()
```
### RMSE
```
xs.rmse(det_pred, mrms, dim=['lat', 'lon', 'valid_time'], skipna=True).values
xs.rmse(interp, mrms, dim=['lat', 'lon', 'valid_time'], skipna=True).values
xs.rmse(hrrr, mrms, dim=['lat', 'lon', 'valid_time'], skipna=True).values
```
### FSS
```
thresh = 10
window = 100 // 4
def compute_fss(f, o, thresh, window, time_mean=True):
f_thresh = f > thresh
o_thresh = o > thresh
f_frac = f_thresh.rolling({'lat': window, 'lon': window}, center=True).mean()
o_frac = o_thresh.rolling({'lat': window, 'lon': window}, center=True).mean()
mse = ((f_frac - o_frac)**2).mean(('lat', 'lon'))
mse_ref = (f_frac**2).mean(('lat', 'lon')) + (o_frac**2).mean(('lat', 'lon'))
fss = 1 - mse / mse_ref
if time_mean:
fss = fss.mean('valid_time')
return fss
compute_fss(mrms, det_pred, thresh, window).values
compute_fss(mrms, interp, thresh, window).values
compute_fss(mrms, hrrr, thresh, window).values
fig, ax = plt.subplots(figsize=(20, 10))
det_pred.isel(valid_time=2).plot(vmin=0, vmax=20)
ax.set_aspect('equal')
fig, ax = plt.subplots(figsize=(20, 10))
det_pred2.isel(valid_time=2).plot(vmin=0, vmax=20)
ax.set_aspect('equal')
fig, ax = plt.subplots(figsize=(20, 10))
mrms.isel(valid_time=2).plot(vmin=0, vmax=20)
ax.set_aspect('equal')
fig, ax = plt.subplots(figsize=(20, 10))
interp.isel(valid_time=2).plot(vmin=0, vmax=20)
ax.set_aspect('equal')
fig, ax = plt.subplots(figsize=(20, 10))
hrrr.isel(valid_time=2).plot(vmin=0, vmax=20)
ax.set_aspect('equal')
interp[50].plot(vmin=0, vmax=20)
hrrr[50].plot(vmin=0, vmax=20)
eps = 1e-6
bin_edges = [-eps] + np.linspace(eps, log_retrans(ds_max, tp_log)+eps, 51).tolist()
pred_means.append(np.mean(preds.sel(member=0)))
pred_hists.append(np.histogram(preds.sel(member=0), bins = bin_edges, density=False)[0])
truth_means.append(np.mean(truth))
truth_hists.append(np.histogram(truth, bins = bin_edges, density=False)[0])
truth_pert = truth + np.random.normal(scale=1e-6, size=truth.shape)
preds_pert = preds + np.random.normal(scale=1e-6, size=preds.shape)
```
```
import optuna as op
import pandas as pd
import numpy as np
from scipy.spatial.transform import Rotation
import seaborn as sns
import matplotlib.pyplot as plt
sns.set()
```
This is a slightly lower-tech process that attempts to align the datasets via Procrustes transformations. Each embedding is performed independently of the others. We then attempt to find rotations of the data that minimise the Procrustes distance between datasets.
Suppose we have two sets of data, $X = \{x_1, ..., x_n\}$ and $Y = \{y_1, ..., y_n\}$, that are comparable, such that $x_i$ and $y_i$ are related.
Define the Procrustes distance: $D_p(X, Y) = \sum ||x_i - y_i||_2$
We seek to find a rotation and shift, $Y' = RY + C$ that minimises $D_p(X,Y')$. C is columnwise constant.
```
p2 = pd.read_csv('Data/TTI_Pillar2/SymptomsUMAP_loose_clusteringOrigin P2.csv', index_col=0)
sgss = pd.read_csv('Data/TTI_SGSS/SymptomsUMAP_loose_clusteringOrigin SGSS.csv', index_col=0)
css = pd.read_csv('Data/CovidSymptomStudy/UMAPLooseWide.csv', index_col=0)
cis = pd.read_csv('Data/CommunityInfectionSurvey/SymptomsUMAP_loose_clustering.csv', index_col=0)
class ProcrustesAlignment:
def __init__(self, X: np.array, Y: np.array, X_mapping_idx: list[int], Y_mapping_idx: list[int]) -> None:
self.X = X
self.Y = Y
self.X_mapping_idx = X_mapping_idx
self.Y_mapping_idx = Y_mapping_idx
self.optimized = False
def compute_procrustes_distance(self, X: np.array, Y_dash: np.array) -> float:
return np.sum(np.sqrt( ((X.T[self.X_mapping_idx,:] - Y_dash.T[self.Y_mapping_idx,:])**2).sum(axis = 1) ) )
def get_rotation_matrix(self, theta) -> np.array:
theta = np.radians(theta)
c, s = np.cos(theta), np.sin(theta)
return np.array(((c, -s), (s, c)))
def transform_Y(self, theta: float, x_shift: float, y_shift: float) -> np.array:
return np.matmul(self.get_rotation_matrix(theta), self.Y) + np.array([[x_shift], [y_shift]])
def eval_transformation(self, theta: float, x_shift: float, y_shift: float) -> float:
return self.compute_procrustes_distance(self.X, self.transform_Y(theta, x_shift, y_shift))
def optimize(self, n_trials: int) -> np.array:
self.study = op.create_study()
self.study.optimize(self.objective, n_trials)
self.optimized = True
self.best_params = self.study.best_params
def objective(self, trial):
theta = trial.suggest_float('theta', 0, 360)
x_shift = trial.suggest_float('x_shift', -10, 10)
y_shift = trial.suggest_float('y_shift', -10, 10)
return self.eval_transformation(theta, x_shift, y_shift)
def get_optimal_rotation(self):
if not self.optimized:
print('Optimisation has not yet been performed.')
else:
return self.transform_Y(self.best_params['theta'], self.best_params['x_shift'], self.best_params['y_shift'])
```
We can only align symptoms that are shared across datasets, so we need to find the mapping from one dataset to the other.
```
# load the lookup into memory
symptom_name_category_lookup = pd.read_csv('Data/Lookups/SymptomNameCategoryLookup.csv')
# subset the lookup for each dataset
ctas_lookup = symptom_name_category_lookup[symptom_name_category_lookup.dataset == 'CTAS']
css_lookup = symptom_name_category_lookup[symptom_name_category_lookup.dataset == 'Zoe']
cis_lookup = symptom_name_category_lookup[symptom_name_category_lookup.dataset == 'ONS']
# create tables that contain only the raw symptom variable names in the dataset
p2_symptoms = pd.DataFrame(p2.columns, columns=['symptom'])
sgss_symptoms = pd.DataFrame(sgss.columns, columns=['symptom'])
css_symptoms = pd.DataFrame(css.columns, columns=['symptom'])
cis_symptoms = pd.DataFrame(cis.columns, columns=['symptom'])
# join to the lookup table, this allows us to map the symptoms between datasets
p2_symptoms = pd.merge(left = p2_symptoms, right = ctas_lookup, left_on = 'symptom', right_on='symptom_name_raw')[['symptom', 'symptom_id', 'symptom_name_formatted', 'category']]
sgss_symptoms = pd.merge(left = sgss_symptoms, right = ctas_lookup, left_on = 'symptom', right_on='symptom_name_raw')[['symptom', 'symptom_id', 'symptom_name_formatted', 'category']]
css_symptoms = pd.merge(left = css_symptoms, right = css_lookup, left_on = 'symptom', right_on='symptom_name_raw')[['symptom', 'symptom_id', 'symptom_name_formatted', 'category']]
cis_symptoms = pd.merge(left = cis_symptoms, right = cis_lookup, left_on = 'symptom', right_on='symptom_name_raw')[['symptom', 'symptom_id', 'symptom_name_formatted', 'category']]
# work out which ids are common across all datasets
symptom_ids = [
p2_symptoms.symptom_id.values,
sgss_symptoms.symptom_id.values,
css_symptoms.symptom_id.values,
cis_symptoms.symptom_id.values
]
shared_ids = symptom_ids[0]
for id_set in symptom_ids:
shared_ids = np.intersect1d(shared_ids, id_set)
# convenience function for mapping symptoms in one data to the other
# need to provide a list of the symptoms that are common across all datasets
def get_mapping_indices(symptoms_from, symptoms_to, common_symptom_ids):
from_index = []
to_index = []
for num_from, symptom_id_from in enumerate(symptoms_from.symptom_id.values):
if symptom_id_from in common_symptom_ids:
for num_to, symptom_id_to in enumerate(symptoms_to.symptom_id.values):
if symptom_id_to == symptom_id_from:
from_index.append(num_from)
to_index.append(num_to)
return from_index, to_index
from_idx, to_idx = get_mapping_indices(p2_symptoms, css_symptoms, common_symptom_ids=shared_ids)
def align(dataset, dataset_symptoms):
from_idx, to_idx = get_mapping_indices(p2_symptoms, dataset_symptoms, shared_ids)
aligner = ProcrustesAlignment(X = p2.values, Y = dataset.values, X_mapping_idx=from_idx, Y_mapping_idx=to_idx)
aligner.optimize(n_trials=500)
aligned_embedding = aligner.get_optimal_rotation()
return pd.DataFrame(data=aligned_embedding, columns = dataset.columns)
# we align all the datasets relative to the pillar 2 output. It shouldn't make a difference
sgss = align(sgss, sgss_symptoms)
css = align(css, css_symptoms)
cis = align(cis, cis_symptoms)
p2.to_csv('Data/Alignments/ProcrustesAlignments/p2_loose.csv')
sgss.to_csv('Data/Alignments/ProcrustesAlignments/sgss_loose.csv')
css.to_csv('Data/Alignments/ProcrustesAlignments/css_loose.csv')
cis.to_csv('Data/Alignments/ProcrustesAlignments/cis_loose.csv')
```
| github_jupyter |
## Neural Networks in PyTorch
* We're just going to use data from PyTorch's `torchvision`. PyTorch includes a relatively handy collection of datasets, including many for vision tasks, which is what torchvision is for.
> Let's visualise the datasets that we can find in `torchvision`
## Imports
```
import torch
import torchvision
from torchvision import datasets, transforms
from matplotlib import pyplot as plt
import numpy as np
```
> The datasets `dir`
```
print(dir(datasets)), len(dir(datasets))
```
> We have `75` items that we can work with in the torchvision dataset.
### The MNIST dataset
The goal is to classify hand-written digits that come from the `MNIST` dataset - our `hello-world` neural network. This dataset contains images of handwritten digits from `0` to `9`
### Loading the data
```
train = datasets.MNIST('', train=True, download=True,
                      transform = transforms.Compose([
                          transforms.ToTensor()
                      ]))
test = datasets.MNIST('', train=False, download=True,
                      transform = transforms.Compose([
                          transforms.ToTensor()
                      ]))
```
> In the cell above we download the datasets and apply a transform to preprocess them.
> Now, we need to handle for how we're going to iterate over that dataset:
```
trainset = torch.utils.data.DataLoader(train, batch_size=10, shuffle=True)
testset = torch.utils.data.DataLoader(test, batch_size=10, shuffle=False)
```
> **That was so brutal!! What is happening here?**
**shuffle** - in ML we normally shuffle the data to mix it up, so that samples with the same label don't follow each other.
**batch_size** - this splits our data into batches, in our case batches of `10`
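Assuming nothing about PyTorch internals, the effect of the loader can be sketched with a plain list: shuffle once, then yield fixed-size chunks (toy data, illustrative names):

```python
import random

data = list(range(25))      # toy "dataset" of 25 samples
random.seed(0)
random.shuffle(data)        # shuffle=True mixes the sample order
batch_size = 10
batches = [data[i:i + batch_size] for i in range(0, len(data), batch_size)]
print(len(batches))         # 3 batches: two of size 10 and a final one of size 5
```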
```
for data in trainset:
print(data[0][:2], data[1][:2])
plt.imshow(data[0][0].view(28, 28), cmap="gray")
plt.show()
break
```
### Creating a NN
* Now we have our trainset and testset let's start creating a Neural Network.
```
import torch.nn as nn
import torch.nn.functional as F
```
> The `torch.nn` import gives us access to some helpful neural network things, such as various neural network layer types like:
**regular fully-connected layers**, **convolutional layers**, etc.
> The `torch.nn.functional` area specifically gives us access to some handy functions that we might not want to write ourselves. We will be using the **`relu`** or "rectified linear unit" activation function for our neurons.
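In NumPy terms, the two activations we will use reduce to something like this (a sketch for intuition, not the actual `torch.nn.functional` implementations):

```python
import numpy as np

def relu(x):
    # Rectified linear unit: negatives are clamped to zero, positives pass through.
    return np.maximum(x, 0)

def log_softmax(x):
    # Log of the softmax, computed stably by subtracting the max first.
    x = x - x.max()
    return x - np.log(np.exp(x).sum())

print(relu(np.array([-2.0, 0.5])))                     # [0.  0.5]
probs = np.exp(log_softmax(np.array([1.0, 2.0, 3.0])))
print(probs.sum())                                     # ~1.0, i.e. a probability distribution
```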
```
class Net(nn.Module):
def __init__(self):
super().__init__()
net = Net()
print(net)
```
> We have created a `Net` class which is inheriting from the `nn.Module` class.
```
class Net(nn.Module):
def __init__(self):
super().__init__()
self.FC1 = nn.Linear(28*28, 64)
self.FC2 = nn.Linear(64, 64 )
self.FC3 = nn.Linear(64, 64)
self.FC4 = nn.Linear(64, 10)
net = Net()
print(net)
```
> Each of our `nn.Linear` layers expects the first parameter to be the input size and the second parameter to be the output size. Note that a basic `Neural Network` expects a flattened array, not a `28x28` image, so at some point we must pass in the flattened array.
> The last layer **accepts 64 in_features and outputs 10**, which in our case is the total number of unique labels.
> Let's define a new method called `forward`
```
class Net(nn.Module):
def __init__(self):
        super().__init__()
self.FC1 = nn.Linear(28 * 28, 64)
self.FC2 = nn.Linear(64, 64)
self.FC3 = nn.Linear(64, 64)
self.FC4 = nn.Linear(64, 10)
def forward(self, X):
X = self.FC1(X)
X = self.FC2(X)
X = self.FC3(X)
X = self.FC4(X)
return X
Net()
```
> So `X` in this case is our input data, we will pass this to the first `FC1` and the output will be passed down to the `FC2` up to the `FC4` **And also remember that our `X` is a flattened array.**
**Wait** Our layers are missing activation functions. In this case we are going to use `relu` as our activation function for other layers and `log_softmax` for the output layer.
```
class Net(nn.Module):
def __init__(self):
super().__init__()
self.FC1 = nn.Linear(28 * 28, 64)
self.FC2 = nn.Linear(64, 64)
self.FC3 = nn.Linear(64, 64)
self.FC4 = nn.Linear(64, 10)
def forward(self, X):
X = F.relu(self.FC1(X))
X = F.relu(self.FC2(X))
X = F.relu(self.FC3(X))
X = F.log_softmax(self.FC4(X), dim=1)
return X
X = torch.randn((28,28))
X = X.view(-1, 28*28)
net = Net()
np.argmax(net(X).detach().numpy())
```
### Training Our NN
```
net.parameters()
from torch import optim
optimizer = optim.Adam(net.parameters(), lr=1e-3)
```
**loss** - this function calculates how far our classifications are from reality.
**For one-hot vectors** - `mean_squared_error` is better to use.
**For scalar classification** - `cross_entropy` is better to use.
> [Loss Functions](https://pytorch.org/docs/stable/nn.html#loss-functions)
**optimizer** - this is what adjusts the model's learnable parameters, like weights. A popular choice is `Adam` (**Adaptive Momentum**), which takes a learning rate `lr` with a default value of `0.001` (`1e-3`). The learning rate dictates the magnitude of the changes the optimizer can make at a time.
> Now we can iterate over the data and see more about the **loss**. We are going to define our `EPOCHS`:
Too many epochs can result in the model `over-fitting`, and too few epochs may result in the model `under-learning` the data.
```
EPOCHS = 3
for epoch in range(EPOCHS):
print(f"EPOCHS {epoch+1}/{EPOCHS }")
for data in trainset:
X, y = data # a batch of 10 features and 10 labels
    net.zero_grad() # sets gradients to 0 before the loss is calculated
output = net(X.view(-1,784)) ## pass the flattened image
## calculate the loss value
loss = F.nll_loss(output, y)
# apply this loss backwards thru the network's parameters
loss.backward()
# attempt to optimize weights to account for loss/gradients
optimizer.step()
print(loss)
```
The `net.zero_grad()` is a very important step, otherwise these gradients will add up for every pass, and then we'll be re-optimizing for previous gradients that we already optimized for.
### Calculating accuracy
```
correct = 0
total = 0
with torch.no_grad():
for data in testset:
X, y = data
output = net(X.view(-1, 784))
for i, j in enumerate(output):
if torch.argmax(j) == y[i]:
correct +=1
total += 1
print("Accuracy: ", correct/total)
```
> Our model is `97%` accurate on the `testset`
```
correct = 0
total = 0
with torch.no_grad():
for data in trainset:
X, y = data
output = net(X.view(-1, 784))
for i, j in enumerate(output):
if torch.argmax(j) == y[i]:
correct +=1
total += 1
print("Accuracy: ", correct/total)
```
> Our model is `98%` accurate on the trainset, which is close to the `97%` on the testset, so we are neither overfitting nor underfitting. Our model learns fine with `3` epochs.
### Making Predictions
```
for X in trainset:
X, y = X
break
plt.imshow(X[0].view(28,28), cmap="gray"), y[0]
predictions = net(X[0].view(-1, 28*28))
torch.argmax(predictions).detach().numpy()
```
> The model correctly predicts the digit `3`.
| github_jupyter |
```
%%javascript
MathJax.Hub.Config({
TeX: { equationNumbers: { autoNumber: "AMS" } }
});
```
# 03. Differential Privacy
[Differential privacy](https://www.microsoft.com/en-us/research/publication/differential-privacy/) is a popular mechanism to quantitatively assess the privacy loss of a given probabilistic query schema or data transformation method. The fundamental equation of differential privacy is given as
$$
\begin{equation}
\mathrm{Pr}[\mathcal{K}(D_1) \in S] \le \exp(\epsilon) \times \mathrm{Pr}[\mathcal{K}(D_2) \in S] \label{eq:dp}
\end{equation}
$$
Here, $\mathrm{Pr}[\mathcal{K}(D_1) \in S]$ is the probability of a
randomized function $\mathcal{K}$ yielding one of the values in the set $S$ when evaluated
on a given dataset $D_1$. The right side is identical to the left except
that the function is now evaluated on a dataset $D_2$ that differs from $D_1$
in at most one element. And finally, $\epsilon$ is a parameter that describes
how much information is leaked (or generated) by the function.
Sounds pretty abstract, so let's work out a simple example: let's assume we want to build a differentially private dataset from the adult data that we've looked at in part 02. The goal here is to prevent an adversary from gaining too much information about the sensitive attribute (income > 50k or not) when a person's data is added to the dataset. With differential privacy, we look at the state of the dataset before and after a person was added and quantify the privacy loss as given by equation ($\ref{eq:dp}$). The scheme we're evaluating here is a so-called **randomized response method**, which works as follows:
* With probability $1-p$, we add a person's true attribute value to the database.
* With probability $p$ we choose a random boolean (0/1) value from a distribution returning $0$ with probability $k$ and $1$ with probability $1-k$ and add that value to the database instead.
Using this scheme, an attacker cannot know with certainty if the real attribute value of the person or a random one was added to the database. This protects the privacy of the person but of course it also adds noise to the database, making it more difficult to use for legitimate users as well.
In practice, we therefore always need to weigh privacy against utility when employing differential privacy. In this notebook, we will calculate the $\epsilon$ and other relevant parameters for our scheme above and see how we can use this differentially private data to make predictions about the income distribution of the people in our dataset.
#### Calculating $\epsilon$
In our differentially private scheme, the probability of adding the true attribute value to the database is $1-p$. The probability of adding a random value is therefore $p$ and the probability of that value being $0$ is $k$. So how can we relate this to eq. (\ref{eq:dp})? Well, we can set $D_1$ and $D_2$ as the versions of our database **before** and **after** adding the person's data to it. Let's say that before adding the person's data there are $n$ $1$'s in the database. We can then use a query $\mathcal{K}$ that returns the number of 1's in the database and choose our result set as $S = \{n\}$. Before adding the person to the database, $\mathcal{K}(D_1)=n$ with certainty, hence $\mathrm{Pr}(\mathcal{K}(D_1))=1$. After adding the person's data, the probability that the query result is still $n$ can be calculated as follows, depending on the person's attribute value:
* If a persons's attribute value is $0$, the probability that $\mathcal{K}$ is unchanged after adding the data to the database is given as $1-p+p\cdot k$.
* If a person's attribute value is $1$, the probability that $\mathcal{K}$ is unchanged after adding the data to the database is given as $p\cdot k$.
We therefore have the two equations
$$
\begin{eqnarray}
\mathrm{Pr}[\mathcal{K}(D_1) \in S | x_i=1] & = & 1 \le \exp{\epsilon}\cdot \mathrm{Pr}[\mathcal{K}(D_2) \in S | x_i=1] = \exp{\epsilon}\cdot p \cdot k \\
\mathrm{Pr}[\mathcal{K}(D_1) \in S | x_i=0] & = & 1 \le \exp{\epsilon}\cdot \mathrm{Pr}[\mathcal{K}(D_2) \in S | x_i=0] = \exp{\epsilon}\cdot (1-p+p \cdot k) \\
\end{eqnarray}
$$
This yields
$$
\begin{eqnarray}
\epsilon & \ge & -\ln{\left(p \cdot k\right)} \\
\epsilon & \ge & -\ln{\left(1-p+p\cdot k\right)} \\
\end{eqnarray}
$$
Since we're interested in an upper bound for $\epsilon$ and since $-\ln{\left(1-p+p\cdot k\right)} \le -\ln{p\cdot k}$, we obtain
$$
\begin{equation}
\epsilon = -\ln{\left(p\cdot k\right)}
\end{equation}
$$
## Exercise
**Write a function that returns the value of epsilon for a given $p$ and $k$.**
```
# %load "../solutions/differential-privacy/epsilon.py"
def epsilon(p, k):
"""
:param p: The probability of returning a random value instead of the true one
    :param k: The probability of returning 0 when generating a random value
:returns: The epsilon for the given values of p, k
"""
```
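One way to fill in the exercise, reading $\epsilon = -\ln(p\cdot k)$ directly off the derivation above (a sketch, not necessarily the course's reference solution):

```python
import math

def epsilon(p, k):
    """
    :param p: The probability of returning a random value instead of the true one
    :param k: The probability of returning 0 when generating a random value
    :returns: The epsilon for the given values of p, k
    """
    # Upper bound from the derivation: eps = -ln(p * k)
    return -math.log(p * k)

print(epsilon(0.5, 0.5))  # -ln(0.25) = ln(4), roughly 1.386
```

Smaller $p$ or $k$ means less random noise and hence a larger privacy loss $\epsilon$.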
## Exercise
**Plot $\epsilon$ for various values of $p$ and $k$.**
## Exercise: Different Scheme
Let's assume we propose the following anonymization scheme for our dataset:
* With probability $1-p$, we add a person's true attribute value to the database
* With probability $p$, we do not add anything to the database
Can you calculate the $\epsilon$ of this scheme? Which scheme do you prefer, and why? Does this scheme always provide "plausible deniability"?
```
%load "../solutions/differential-privacy/different-scheme.md"
```
## What does this tell us?
Calculating $\epsilon$ is great, but what does it actually tell us about the privacy loss or risk for our use case? Let's assume an adversary wants to learn the real value of a person's attribute. If she knows the model used for generating the data, she can use Bayesian reasoning to calculate the probability of a person's attribute being $1$ given the observed difference in the database, which we call $\Delta$. Using Bayes' theorem we can calculate this as (for $\Delta = 1$ here)
$$
\begin{equation}
P(x_i=1 | \Delta = 1) = P(\Delta = 1| x_i = 1)\cdot \frac{P(x_i=1)}{P(\Delta=1)}
\end{equation}
$$
For our scheme, we know that
$$
\begin{equation}
P(\Delta = 1 | x_i = \mathrm{1}) = (1-p) + p\cdot(1-k) = 1-pk
\end{equation}
$$
and
$$
\begin{equation}
P(\Delta = 1) = (1-p)\cdot P(x_i = \mathrm{1}) + p\cdot(1-k)
\end{equation}
$$
so we obtain
$$
\begin{equation}
P(x_i=1 | \Delta = 1) = \frac{(1-pk)\cdot P(x_i = \mathrm{1})}{(1-p)\cdot P(x_i = \mathrm{1})+p\cdot(1-k)}
\end{equation}
$$
Let's see how this relates to $\epsilon$!
## Exercise
**Write a function that calculates the conditional probability as given in eq. (4).**
```
# %load "../solutions/differential-privacy/conditional-prob.py"
def p_cond(p, k, p_1):
"""
:param p: The probability of returning a random value instead of the true one
    :param k: The probability of returning 0 when generating a random value
:param p_1: The probability of a person to have an attribute value x_i=1
"""
```
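One possible implementation (a sketch), plugging the prior and the mixing probabilities into the formula above; with $p = 0$ there is no randomization and the posterior is certain:

```python
def p_cond(p, k, p_1):
    """
    :param p: The probability of returning a random value instead of the true one
    :param k: The probability of returning 0 when generating a random value
    :param p_1: The prior probability of a person having attribute value x_i=1
    :returns: P(x_i=1 | Delta=1)
    """
    return (1 - p * k) * p_1 / ((1 - p) * p_1 + p * (1 - k))

print(p_cond(0.0, 0.5, 0.3))  # 1.0 - without noise the adversary is certain
print(p_cond(0.8, 0.5, 0.3))  # well below 1 - randomization gives deniability
```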
## Exercise
**Choose a given $k$ (e.g. 0.5) as well as a value for $P(x_i=1)$ and plot the conditional probability from eq. (4) as a function of $p$.**
# Implementing It
Now that we have a feeling for our scheme we can implement it! For that, we load the "adult census" data from the k-anonymity case study again.
```
import pandas as pd
names = (
'age',
'workclass', #Private, Self-emp-not-inc, Self-emp-inc, Federal-gov, Local-gov, State-gov, Without-pay, Never-worked.
'fnlwgt', # "weight" of that person in the dataset (i.e. how many people does that person represent) -> https://www.kansascityfed.org/research/datamuseum/cps/coreinfo/keyconcepts/weights
'education',
'education-num',
'marital-status',
'occupation',
'relationship',
'race',
'sex',
'capital-gain',
'capital-loss',
'hours-per-week',
'native-country',
'income',
)
categorical = set((
'workclass',
'education',
'marital-status',
'occupation',
'relationship',
'sex',
'native-country',
'race',
'income',
))
df = pd.read_csv("../data/k-anonymity/adult.all.txt", sep=", ", header=None, names=names, index_col=False, engine='python');# We load the data using Pandas
```
## Exercise
**Implement a function that processes a new value according to the differentially private scheme discussed above.**
```
# %load "../solutions/differential-privacy/process-value.py"
import random
def process_value(value, p, k):
"""
:param value: The value to apply the differentially private scheme to.
:param p: The probability of returning a random value instead of the true one
    :param k: The probability of returning 0 when generating a random value
: returns: A new, differentially private value
"""
```
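A sketch of one possible implementation, following the scheme exactly as stated in the text: with probability $p$ we substitute a random bit that is $0$ with probability $k$ (and $1$ otherwise):

```python
import random

def process_value(value, p, k):
    """
    :param value: The value to apply the differentially private scheme to.
    :param p: The probability of returning a random value instead of the true one
    :param k: The probability of returning 0 when generating a random value
    :returns: A new, differentially private value
    """
    if random.random() < p:
        # Substitute a random bit: 0 with probability k, 1 with probability 1-k.
        return 0 if random.random() < k else 1
    return value

# Edge cases: p=0 always keeps the true value; p=1, k=1 always yields 0.
print(process_value(1, 0.0, 0.5))  # 1
print(process_value(1, 1.0, 1.0))  # 0
```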
## Exercise
**Now apply this method to the "income" column of the adult dataset to obtain a differentially private dataset.**
```
# %load "../solutions/differential-privacy/apply.py"
import numpy as np
p = 0.8
k = 0.4
df['income_binary'] = np.where(df['income'] == '<=50K', 0, 1)
df['income_dp'] = 0
# ...
```
# Working With Differentially Private Data
After collecting the differentially private data, we of course want to make use of it! For example, we might want to estimate the probability of a person having an income > \$50k based on the data we've collected, which we assume is [Bernoulli distributed](https://en.wikipedia.org/wiki/Bernoulli_distribution) with a probability $p_{1}$. Now, when adding up the data from $n$ persons, the resulting value is [binomially distributed](https://en.wikipedia.org/wiki/Binomial_distribution). The mean of this distribution is given as $\mathrm{E}_{1} = n\cdot p_{1}$ and the variance as $\mathrm{Var}_1 = n\cdot p_1 \cdot (1-p_1)$. A consistent and unbiased estimator of $\mathrm{E}_1$ is $\hat{\mathrm{E}}_{1} = \sum_i x_{1}^i$, which then gives an estimate for $p_{1}$ of $\hat{p}_{1} = \hat{\mathrm{E}}_1/n$.
Now, if we apply the differential privacy mechanism to our dataset, the probability of obtaining a $1$ will change to $p_{1,dp} = (1-p)\cdot p_{1}+p\cdot(1-k)$. Therefore, an unbiased and consistent estimator of $p_1$ based on $p_{1,dp}$ is given as
$$
\begin{equation}
\hat{p}_1 = \frac{\hat{p}_{1,dp}-p\cdot(1-k)}{1-p}
\end{equation}
$$
As before, $\hat{p}_{1,dp}=\sum_i x_{1,dp}^i/n$. Note that this naive estimator can return a negative probability, which can be avoided by using a more suitable method like a maximum likelihood estimator.
## Exercise
**Write an estimator for $\hat{p}_1$ based on a differentially private dataset with parameters $p$ and $k$.**
```
# %load "../solutions/differential-privacy/p-1.py"
def p_1_estimator(p_1dp, p, k):
"""
:param p_1dp: The empirical probability of x_i=1 of our DP dataset.
:param p: The p value of our DP scheme.
:param k: The k value of our DP scheme.
: returns: An estimate of p_1 of our DP dataset.
"""
```
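A sketch of the estimator, inverting the mixing relation $p_{1,dp} = (1-p)\cdot p_{1}+p\cdot(1-k)$ stated above:

```python
def p_1_estimator(p_1dp, p, k):
    """
    :param p_1dp: The empirical probability of x_i=1 in our DP dataset.
    :param p: The p value of our DP scheme.
    :param k: The k value of our DP scheme.
    :returns: An estimate of the true p_1.
    """
    return (p_1dp - p * (1 - k)) / (1 - p)

# Round trip: mixing a known p_1 and inverting recovers it.
p, k, p_1 = 0.8, 0.4, 0.24
p_1dp = (1 - p) * p_1 + p * (1 - k)
print(p_1_estimator(p_1dp, p, k))  # ~0.24
```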
## Exercise
**Apply the estimator to the differentially private dataset created above to generate an estimate of $p_1$.**
```
# %load "../solutions/differential-privacy/estimate-p1.py"
p_1_hat = ...
```
## Exercise
**Write a function that estimates the variance of $\hat{p}_{1}$ and calculate its value for the case above.**
Hint: The variance of $\hat{p}_1$ can be estimated as $$\hat{\mathrm{Var}}_1 = \frac{\hat{\mathrm{Var}}_{1,dp}}{(1-p)^2} = \frac{\hat{p}_{1,dp}\cdot(1-\hat{p}_{1,dp})}{(1-p)^2\cdot n}$$
```
# %load "../solutions/differential-privacy/estimate-var.py"
def var_1_estimator(p_1dp, n, p, k):
"""
:param p_1dp: The empirical probability of x_i=1 of our DP dataset.
:param n: The number of samples in our dataset.
:param p: The p value of our DP scheme.
:param k: The k value of our DP scheme.
: returns: An estimate of the variance of our DP dataset.
"""
var_1_hat = var_1_estimator(p_1dp, len(df), p, k)
var_1_hat
```
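A sketch following the hint; note that $k$ does not appear in the hint's formula and is kept only to match the stub's signature:

```python
def var_1_estimator(p_1dp, n, p, k):
    """
    :param p_1dp: The empirical probability of x_i=1 in our DP dataset.
    :param n: The number of samples in the dataset.
    :param p: The p value of our DP scheme.
    :param k: The k value of our DP scheme (unused by the hint's formula).
    :returns: An estimate of the variance of p_1_hat.
    """
    # Var_1_hat = p_1dp * (1 - p_1dp) / ((1-p)^2 * n), per the hint above.
    return p_1dp * (1 - p_1dp) / ((1 - p) ** 2 * n)

print(var_1_estimator(0.528, 10000, 0.8, 0.4))  # roughly 6.2e-4
```

As expected, the variance shrinks with more samples $n$ and grows as $p \to 1$, i.e. as more noise is injected.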
## Exercise
**Repeat the data generation process $N$ (e.g. 500) times. For each resulting dataset, estimate $\hat{p}_1$ and store the value in a list, so that we can plot it later.**
```
# %load "../solutions/differential-privacy/repeat-dp.py"
p_1_hats = []
for j in range(500):
# ...
p_1_hats.append(p_1_hat)
p_1_hats = np.array(p_1_hats)
# We then compare these estimates to the expected distribution (via central limit theorem: a normal distribution
# with expectation p_1 and variance var_1_hat)
import matplotlib.pylab as pl
pl.hist(p_1_hats, density=True)
gauss = lambda x, mu, var: 1/np.sqrt(2*np.pi*var)*np.exp(-(x-mu)**2/(2*var))
p_1_hat = p_1_hats.mean()
x = np.linspace(0.1, 0.3, 1000)
pl.plot(x, gauss(x, p_1_hat, var_1_hat));
```
# Summary
That's it! As you can see, applying a differential privacy mechanism to your data is not so difficult. You have to take the added noise into account, though, which will make your estimates less precise.
| github_jupyter |
```
%matplotlib inline
import pylab as plt
import time
import sys
sys.path.insert(0, '/opt/usr/python/')
import astra
import numpy as np
import pandas as pd
def create_test_cube(size):
# Create a simple hollow cube phantom
cube = np.zeros((size,size,size), dtype='float32')
x0 = int(128.*size/1024)
x1 = int(895.*size/1024)
y0 = int(256.*size/1024)
y1 = int(767.*size/1024)
cube[x0:x1,x0:x1,x0:x1] = 1
cube[y0:y1,y0:y1,y0:y1,] = 0
return cube
def test_projections_parallel(size, angles_count, gpus_list):
# Set up multi-GPU usage.
# This only works for 3D GPU forward projection and back projection.
astra.astra.set_gpu_index(gpus_list)
# Optionally, you can also restrict the amount of GPU memory ASTRA will use.
# The line commented below sets this to 1GB.
#astra.astra.set_gpu_index([0,1], memory=1024*1024*1024)
angles = np.linspace(0, np.pi, angles_count, False)
proj_geom = astra.create_proj_geom('parallel3d', 1.0, 1.0, size, size, angles)
# Create a simple hollow cube phantom
cube = create_test_cube(size)
vol_geom = astra.create_vol_geom(cube.shape)
cube_id = astra.data3d.create('-vol', vol_geom, cube)
del cube
print('Start create_sino3d_gpu')
# Create projection data from this
t=time.time()
proj_id = astra.create_sino3d_gpu(cube_id, proj_geom, vol_geom, returnData=False)
astra.data3d.delete(cube_id)
t_proj = time.time()-t
print('Takes: {}'.format(t_proj))
print('Start astra.create_backprojection3d_gpu')
# Backproject projection data
t=time.time()
bproj_id = astra.create_backprojection3d_gpu(proj_id, proj_geom, vol_geom, returnData=False)
t_bproj = time.time()-t
print('Takes: {}'.format(t_bproj))
# Clean up. Note that GPU memory is tied up in the algorithm object,
# and main RAM in the data objects.
astra.data3d.info()
astra.data3d.delete(proj_id)
astra.data3d.delete(bproj_id)
return size, angles_count, len(gpus_list), t_proj, t_bproj
def test_projections_cone(size, angles_count, gpus_list):
# Set up multi-GPU usage.
# This only works for 3D GPU forward projection and back projection.
astra.astra.set_gpu_index(gpus_list)
# Optionally, you can also restrict the amount of GPU memory ASTRA will use.
# The line commented below sets this to 1GB.
#astra.astra.set_gpu_index([0,1], memory=1024*1024*1024)
angles = np.linspace(0, np.pi, angles_count, False)
# Circular
# Parameters: width of detector column, height of detector row, #rows, #columns,
# angles, distance source-origin, distance origin-detector
# see example #5 from python samples
# All distances in [pixels]
pixel_size = 2.82473e-3
os_distance = 56.135 / pixel_size # object-sample distance
ds_distance = 225.082 / pixel_size # detector-sample distance
detector_size = size
# proj_geom = astra.create_proj_geom('cone', 1.0, 1.0, 32, 64, angles, 1000, 0)
proj_geom = astra.create_proj_geom('cone', ds_distance / os_distance,ds_distance / os_distance,
detector_size, detector_size, angles,
os_distance, (ds_distance - os_distance))
# proj_geom = astra.create_proj_geom('parallel3d', 1.0, 1.0, size, size, angles)
# Create a simple hollow cube phantom
cube = create_test_cube(size)
vol_geom = astra.create_vol_geom(cube.shape)
cube_id = astra.data3d.create('-vol', vol_geom, cube)
del cube
print('Start create_sino3d_gpu')
# Create projection data from this
t=time.time()
proj_id = astra.create_sino3d_gpu(cube_id, proj_geom, vol_geom, returnData=False)
astra.data3d.delete(cube_id)
t_proj = time.time()-t
print('Takes: {}'.format(t_proj))
print('Start astra.create_backprojection3d_gpu')
# Backproject projection data
t=time.time()
bproj_id = astra.create_backprojection3d_gpu(proj_id, proj_geom, vol_geom, returnData=False)
t_bproj = time.time()-t
print('Takes: {}'.format(t_bproj))
# Clean up. Note that GPU memory is tied up in the algorithm object,
# and main RAM in the data objects.
astra.data3d.info()
astra.data3d.delete(proj_id)
astra.data3d.delete(bproj_id)
return size, angles_count, len(gpus_list), t_proj, t_bproj
def test_reconstruction_cone_fdk(size, angles_count, gpus_list):
# Set up multi-GPU usage.
# This only works for 3D GPU forward projection and back projection.
astra.astra.set_gpu_index(gpus_list)
# Optionally, you can also restrict the amount of GPU memory ASTRA will use.
# The line commented below sets this to 1GB.
#astra.astra.set_gpu_index([0,1], memory=1024*1024*1024)
angles = np.linspace(0, np.pi, angles_count, False)
# Circular
# Parameters: width of detector column, height of detector row, #rows, #columns,
# angles, distance source-origin, distance origin-detector
# see example #5 from python samples
# All distances in [pixels]
pixel_size = 2.82473e-3
os_distance = 56.135 / pixel_size # object-sample distance
ds_distance = 225.082 / pixel_size # detector-sample distance
detector_size = size
# proj_geom = astra.create_proj_geom('cone', 1.0, 1.0, 32, 64, angles, 1000, 0)
proj_geom = astra.create_proj_geom('cone', ds_distance / os_distance,ds_distance / os_distance,
detector_size, detector_size, angles,
os_distance, (ds_distance - os_distance))
# proj_geom = astra.create_proj_geom('parallel3d', 1.0, 1.0, size, size, angles)
# Create a simple hollow cube phantom
cube = create_test_cube(size)
vol_geom = astra.create_vol_geom(cube.shape)
cube_id = astra.data3d.create('-vol', vol_geom, cube)
del cube
print('Start create_sino3d_gpu')
# Create projection data from this
t=time.time()
proj_id = astra.create_sino3d_gpu(cube_id, proj_geom, vol_geom, returnData=False)
t_proj = time.time()-t
print('Takes: {}'.format(t_proj))
# Set up the parameters for a reconstruction algorithm using the GPU
cfg = astra.astra_dict('FDK_CUDA')
cfg['ReconstructionDataId'] = cube_id
cfg['ProjectionDataId'] = proj_id
# Create the algorithm object from the configuration structure
alg_id = astra.algorithm.create(cfg)
# print('Start astra reconstruction FDK_CUDA')
# # Backproject projection data
t=time.time()
astra.algorithm.run(alg_id, 1)
t_bproj = time.time()-t
print('Takes: {}'.format(t_bproj))
# rec = astra.data3d.get(cube_id)
# plt.figure()
# plt.imshow(rec[:,:,int(rec.shape[-1]/2)])
# plt.show()
# Clean up. Note that GPU memory is tied up in the algorithm object,
# and main RAM in the data objects.
astra.data3d.info()
astra.algorithm.delete(alg_id)
astra.data3d.delete(proj_id)
astra.data3d.delete(cube_id)
return size, angles_count, len(gpus_list), t_proj, t_bproj
test_reconstruction_cone_fdk(1536,1536,[0,1])
rec=create_test_cube(1024)
plt.figure()
plt.imshow(rec[:,:,512])
plt.show()
plt.figure()
plt.imshow(rec[:,:,200])
plt.show()
sizes = [1024,]
print(sizes)
angles = [1024,]
print(angles)
gpu_config = [[0,1],[0,],[1,]]
print(gpu_config)
with open('test_proj_bproj_parallel_3d.txt', 'a') as f:
for angles_count in angles:
for size in sizes:
for gpu_list in gpu_config:
res = test_projections_parallel(size, angles_count, gpu_list)
f.write('\t'.join(map(lambda x: str(x),res)))
f.write('\n')
with open('test_proj_bproj_cone_3d.txt', 'a') as f:
for angles_count in angles:
for size in sizes:
for gpu_list in gpu_config:
res = test_projections_cone(size, angles_count, gpu_list)
f.write('\t'.join(map(lambda x: str(x),res)))
f.write('\n')
with open('test_rec_cone_3d.txt', 'a') as f:
for angles_count in angles:
for size in sizes:
for gpu_list in gpu_config:
res = test_reconstruction_cone_fdk(size, angles_count, gpu_list)
f.write('\t'.join(map(lambda x: str(x),res)))
f.write('\n')
stat = np.loadtxt('test_proj_bproj_parallel_3d.txt')
#!rm test_proj_bproj_parallel_3d.txt
size = stat[:,0]
angles_count = stat[:,1]
gpus = stat[:,2]
t_proj = stat[:,3]
t_bproj = stat[:,4]
df = pd.DataFrame(stat,columns=['size', 'angles', 'ngpus', 't_proj', 't_bproj'])
db = df.groupby(['ngpus'])
plt.figure(figsize=(10,7))
gpu_index = db.groups[1]
plt.plot(df['size'][gpu_index],
df['t_proj'][gpu_index]/df['size'][gpu_index]**3/df['angles'][gpu_index]*1e11,
'o',
label='Projection time (ngpu=1)')
plt.plot(df['size'][gpu_index],
df['t_bproj'][gpu_index]/df['size'][gpu_index]**3/df['angles'][gpu_index]*1e11,
'o',
label='Back projection time (ngpu=1)')
# multi_gpu_index = db.groups[2]
# plt.plot(df['size'][multi_gpu_index],df['t_proj'][multi_gpu_index] , '*', label='Projection time (ngpu=2)')
# plt.plot(df['size'][multi_gpu_index],df['t_bproj'][multi_gpu_index] , 'x', label='Back projection time (ngpu=2)')
plt.ylabel('Time per voxel per angle, 1e-11 s')
plt.xlabel('Cube size, px')
plt.grid(True)
plt.legend(loc=0)
plt.show()
plt.figure(figsize=(10,7))
gpu_index = db.groups[1]
plt.plot(df['size'][gpu_index],
df['t_proj'][gpu_index]/df['size'][gpu_index]**3/df['angles'][gpu_index]*1e11,
'o',
label='Projection time (ngpu=1)')
gpu_index = db.groups[2]
plt.plot(df['size'][gpu_index],
df['t_proj'][gpu_index]/df['size'][gpu_index]**3/df['angles'][gpu_index]*1e11,
'o',
label='Projection time (ngpu=2)')
plt.ylim([0,6])
# multi_gpu_index = db.groups[2]
# plt.plot(df['size'][multi_gpu_index],df['t_proj'][multi_gpu_index] , '*', label='Projection time (ngpu=2)')
# plt.plot(df['size'][multi_gpu_index],df['t_bproj'][multi_gpu_index] , 'x', label='Back projection time (ngpu=2)')
plt.ylabel('Time per voxel per angle, 1e-11 s')
plt.xlabel('Cube size, px')
plt.grid(True)
plt.legend(loc=0)
plt.show()
plt.figure(figsize=(10,7))
gpu_index = db.groups[1]
plt.plot(df['size'][gpu_index],
df['t_bproj'][gpu_index]/df['size'][gpu_index]**3/df['angles'][gpu_index]*1e11,
'o',
label='Back projection time (ngpu=1)')
gpu_index = db.groups[2]
plt.plot(df['size'][gpu_index],
df['t_bproj'][gpu_index]/df['size'][gpu_index]**3/df['angles'][gpu_index]*1e11,
'o',
label='Back projection time (ngpu=2)')
plt.ylim([0,6])
# multi_gpu_index = db.groups[2]
# plt.plot(df['size'][multi_gpu_index],df['t_proj'][multi_gpu_index] , '*', label='Projection time (ngpu=2)')
# plt.plot(df['size'][multi_gpu_index],df['t_bproj'][multi_gpu_index] , 'x', label='Back projection time (ngpu=2)')
plt.ylabel('Time per voxel per angle, 1e-11 s')
plt.xlabel('Cube size, px')
plt.grid(True)
plt.legend(loc=0)
plt.show()
#-----------------------------------------------------------------------
#Copyright 2013 Centrum Wiskunde & Informatica, Amsterdam
#
#Author: Daniel M. Pelt
#Contact: D.M.Pelt@cwi.nl
#Website: http://dmpelt.github.io/pyastratoolbox/
#
#
#This file is part of the Python interface to the
#All Scale Tomographic Reconstruction Antwerp Toolbox ("ASTRA Toolbox").
#
#The Python interface to the ASTRA Toolbox is free software: you can redistribute it and/or modify
#it under the terms of the GNU General Public License as published by
#the Free Software Foundation, either version 3 of the License, or
#(at your option) any later version.
#
#The Python interface to the ASTRA Toolbox is distributed in the hope that it will be useful,
#but WITHOUT ANY WARRANTY; without even the implied warranty of
#MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
#GNU General Public License for more details.
#
#You should have received a copy of the GNU General Public License
#along with the Python interface to the ASTRA Toolbox. If not, see <http://www.gnu.org/licenses/>.
#
#-----------------------------------------------------------------------
import astra
import numpy as np
vol_geom = astra.create_vol_geom(128, 128, 128)
angles = np.linspace(0, np.pi, 180,False)
# proj_geom = astra.create_proj_geom('parallel3d', 1.0, 1.0, 128, 192, angles)
pixel_size = 2.82473e-3
os_distance = 56.135 / pixel_size # source-object distance
ds_distance = 225.082 / pixel_size # source-detector distance
detector_size = 128
# proj_geom = astra.create_proj_geom('cone', 1.0, 1.0, 32, 64, angles, 1000, 0)
proj_geom = astra.create_proj_geom('cone', ds_distance / os_distance,ds_distance / os_distance,
detector_size, detector_size, angles,
os_distance, (ds_distance - os_distance))
# Create a simple hollow cube phantom
cube = np.zeros((128,128,128))
cube[17:113,17:113,17:113] = 1
cube[33:97,33:97,33:97] = 0
# Create projection data from this
proj_id, proj_data = astra.create_sino3d_gpu(cube, proj_geom, vol_geom)
# Display a single projection image
import pylab
pylab.viridis()
pylab.figure(1)
pylab.imshow(proj_data[:,20,:])
# Create a data object for the reconstruction
rec_id = astra.data3d.create('-vol', vol_geom)
# Set up the parameters for a reconstruction algorithm using the GPU
cfg = astra.astra_dict('FDK_CUDA')
cfg['ReconstructionDataId'] = rec_id
cfg['ProjectionDataId'] = proj_id
# Create the algorithm object from the configuration structure
alg_id = astra.algorithm.create(cfg)
# Run 150 iterations of the algorithm
# Note that this requires about 750MB of GPU memory, and has a runtime
# in the order of 10 seconds.
astra.algorithm.run(alg_id, 1)
# Get the result
rec = astra.data3d.get(rec_id)
pylab.figure(2)
pylab.imshow(rec[:,:,64])
pylab.show()
# Clean up. Note that GPU memory is tied up in the algorithm object,
# and main RAM in the data objects.
astra.algorithm.delete(alg_id)
astra.data3d.delete(rec_id)
astra.data3d.delete(proj_id)
fp = np.memmap('1.tmp', dtype='float32', mode='w+', shape=(int(1e5),int(1e5)))
!rm 1.tmp
```
<a href="https://colab.research.google.com/github/unica-ml/ml/blob/master/notebooks/lab05.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Machine Learning - Lab05
## Neural Networks with PyTorch
This notebook provides a brief introduction to PyTorch, inspired by the PyTorch tutorials available at https://github.com/yunjey/pytorch-tutorial and https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html.
Let's start with tensors, autograd/autodiff and to/from numpy conversions.
```
import torch
import torchvision
import torch.nn as nn
import numpy as np
import torchvision.transforms as transforms
import matplotlib.pyplot as plt
# ================================================================== #
# Basic autograd example #
# ================================================================== #
# Create tensors.
x = torch.tensor(1., requires_grad=True)
w = torch.tensor(2., requires_grad=True)
b = torch.tensor(3., requires_grad=True)
# Build a computational graph.
y = w * x + b # y = 2 * x + 3
# Compute gradients.
y.backward()
# Print out the gradients.
print(x.grad) # x.grad = 2
print(w.grad) # w.grad = 1
print(b.grad) # b.grad = 1
# ================================================================== #
# Loading data from numpy #
# ================================================================== #
# Create a numpy array.
x = np.array([[1, 2], [3, 4]])
# Convert the numpy array to a torch tensor.
y = torch.from_numpy(x)
# Convert the torch tensor to a numpy array.
z = y.numpy()
print("x: ", x)
print("y: ", y)
print("z: ", z)
```
## Logistic/Softmax Classifier on MNIST data
We aim to learn a multiclass linear classifier $f(x) = Wx+b$ on the MNIST dataset, where we have $d=28 \times 28=784$ pixels as inputs, and we aim to predict $k=10$ values (one output per class), i.e., $f(x) : R^d \mapsto R^k$.
To learn the classifier parameters $W \in R^{k \times d}, b \in R^k$, we minimize the cross-entropy loss on the softmax-scaled $k$ outputs:
$$\min_{W,b} L(\mathcal D, W, b) = -\sum_{i=1}^{n} \sum_{c=1}^{k} y_{i c} \cdot \log \left( \sigma(f_{c}(x_i; W, b))\right),$$
where $\mathcal D = (x_i, y_i)_{i=1}^n$ is the training set, $\sigma$ is the softmax operator, and $y_{ic}$ is 1 if the training sample $x_i$ belongs to class $c$ and 0 otherwise (one-hot label encoding of $y_i$).
More details (including gradient computation) at: https://peterroelants.github.io/posts/cross-entropy-softmax/.
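Before handing this loss to PyTorch, it can be sketched directly in NumPy (an illustrative implementation, not the one PyTorch uses internally; the row-max shift is only for numerical stability, since softmax is shift-invariant):

```
import numpy as np

def softmax_cross_entropy(logits, y_onehot):
    """Cross-entropy of softmax(logits) against one-hot labels.
    logits: (n, k) array of f(x) = Wx + b outputs; y_onehot: (n, k)."""
    z = logits - logits.max(axis=1, keepdims=True)  # stability shift
    log_softmax = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -(y_onehot * log_softmax).sum()

# Two samples, three classes; the first row confidently predicts class 0
logits = np.array([[5.0, 1.0, 0.0], [1.0, 2.0, 1.0]])
y = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
loss = softmax_cross_entropy(logits, y)
```

Note that the confident, correct first sample contributes almost nothing to the loss, while the uncertain second sample dominates it.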
```
# Hyper-parameters
input_size = 28 * 28 # 784
num_classes = 10
batch_size = 64
# set CPU or GPU, if available
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
# MNIST dataset (images and labels)
train_dataset = torchvision.datasets.MNIST(root='data',
train=True,
transform=transforms.ToTensor(),
download=True)
test_dataset = torchvision.datasets.MNIST(root='data',
train=False,
transform=transforms.ToTensor())
# Data loader (input pipeline)
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
batch_size=batch_size,
shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
batch_size=batch_size,
shuffle=False)
# functions to show an image
def imshow(img):
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
plt.show()
# get some random training images
dataiter = iter(train_loader)
images, labels = next(dataiter)
plt.figure(figsize=(7,10))
imshow(torchvision.utils.make_grid(images, nrow=8))
# Hyper-parameters
num_epochs = 2
learning_rate = 0.001
# Logistic regression model
model = nn.Linear(input_size, num_classes).to(device)
# Loss and optimizer
# nn.CrossEntropyLoss() computes softmax internally
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
# Train the model
total_step = len(train_loader)
loss_path = np.zeros(shape=(num_epochs,total_step))
for epoch in range(num_epochs):
for i, (images, labels) in enumerate(train_loader):
# Reshape images to (batch_size, input_size)
images = images.reshape(-1, input_size)
images = images.to(device)
labels = labels.to(device)
# Forward pass
outputs = model(images)
loss = criterion(outputs, labels)
loss_path[epoch][i] = loss.item()
# Backward and optimize
optimizer.zero_grad()
loss.backward()
optimizer.step()
if (i+1) % 100 == 0:
print ('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'
.format(epoch+1, num_epochs, i+1, total_step, loss.item()))
plt.figure()
plt.plot(loss_path.ravel())
plt.title('Loss')
plt.xlabel("iteration")
plt.show()
# Test the model
# In test phase, we don't need to compute gradients (for memory efficiency)
with torch.no_grad():
correct = 0
total = 0
for images, labels in test_loader:
images = images.reshape(-1, input_size)
outputs = model(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the model on the 10000 test images: {} %'
.format(100.0 * correct / total))
```
## Training a CNN on MNIST
```
# Hyper-parameters
num_epochs = 2
learning_rate = 0.001
# Convolutional neural network (two convolutional layers)
class ConvNet(nn.Module):
def __init__(self, num_classes=10):
super(ConvNet, self).__init__()
self.layer1 = nn.Sequential(
nn.Conv2d(1, 16, kernel_size=5, stride=1, padding=2),
nn.BatchNorm2d(16),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2))
self.layer2 = nn.Sequential(
nn.Conv2d(16, 32, kernel_size=5, stride=1, padding=2),
nn.BatchNorm2d(32),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2))
self.fc = nn.Linear(7*7*32, num_classes)
def forward(self, x):
out = self.layer1(x)
out = self.layer2(out)
out = out.reshape(out.size(0), -1)
out = self.fc(out)
return out
model = ConvNet(num_classes).to(device)
# Loss and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
# Train the model
total_step = len(train_loader)
loss_path = np.zeros(shape=(num_epochs,total_step))
for epoch in range(num_epochs):
for i, (images, labels) in enumerate(train_loader):
images = images.to(device)
labels = labels.to(device)
# Forward pass
outputs = model(images)
loss = criterion(outputs, labels)
loss_path[epoch][i] = loss.item()
# Backward and optimize
optimizer.zero_grad()
loss.backward()
optimizer.step()
if (i+1) % 100 == 0:
print ('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'
.format(epoch+1, num_epochs, i+1, total_step, loss.item()))
plt.figure()
plt.plot(loss_path.ravel())
plt.title('Loss')
plt.xlabel("iteration")
plt.show()
# Test the model
# eval mode (batchnorm uses moving mean/var instead of mini-batch mean/var)
model.eval()
with torch.no_grad():
correct = 0
total = 0
for images, labels in test_loader:
images = images.to(device)
labels = labels.to(device)
outputs = model(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Test Accuracy of the model on the 10000 test images: {} %'
.format(100.0 * correct / total))
```
## Training a CNN on CIFAR10
Source: https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html
```
batch_size = 32
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
trainset = torchvision.datasets.CIFAR10(root='data', train=True,
download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size,
shuffle=True, num_workers=2)
testset = torchvision.datasets.CIFAR10(root='data', train=False,
download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=batch_size,
shuffle=False, num_workers=2)
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
# functions to show an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
plt.show()
# get some random training images
dataiter = iter(trainloader)
images, labels = next(dataiter)
# print labels for the first 8 images
print(' '.join('%5s' % classes[labels[j]] for j in range(8)))
# show images
plt.figure(figsize=(10,5))
imshow(torchvision.utils.make_grid(images))
import torch.nn.functional as F
# Conv2d: https://pytorch.org/docs/master/generated/torch.nn.Conv2d.html
# MaxPool2d: https://pytorch.org/docs/master/generated/torch.nn.MaxPool2d.html
# These layers rescale inputs as described in the docs
# In our case below, we do not use dilation and padding is zero,
# hence we can compute h_out and w_out as:
# h_out = floor( (h_in - kernel_size)/stride +1 )
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(in_channels=3, out_channels=6, kernel_size=5)
# input size after conv1 is: 28x28x6
self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
# input size after pool is: 14x14x6
self.conv2 = nn.Conv2d(in_channels=6, out_channels=16, kernel_size=5)
# input size after conv2 is: 10x10x16
# input size after conv2 and pool is: 5x5x16
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16 * 5 * 5)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
net = Net()
# loss and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
for epoch in range(2): # loop over the dataset multiple times
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
# get the inputs; data is a list of [inputs, labels]
inputs, labels = data
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.item()
if i % 100 == 99: # print average loss every 100 mini-batches
print('[%d, %5d] loss: %.3f' %
(epoch + 1, i + 1, running_loss / 100))
running_loss = 0.0
print('Finished Training')
dataiter = iter(testloader)
images, labels = next(dataiter)
# print images
plt.figure(figsize=(10,5))
imshow(torchvision.utils.make_grid(images))
print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(8)))
correct = 0
total = 0
with torch.no_grad():
for data in testloader:
images, labels = data
outputs = net(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: %d %%' % (
100 * correct / total))
# per-class accuracies
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
with torch.no_grad():
for data in testloader:
images, labels = data
outputs = net(images)
_, predicted = torch.max(outputs, 1)
c = (predicted == labels).squeeze()
for i in range(labels.size(0)):
label = labels[i]
class_correct[label] += c[i].item()
class_total[label] += 1
for i in range(10):
print('Accuracy of %5s : %2d %%' % (
classes[i], 100 * class_correct[i] / class_total[i]))
```
# Generating C Code for the Scalar Wave Equation in Cartesian Coordinates
## Authors: Zach Etienne & Thiago Assumpção
### Formatting improvements courtesy Brandon Clark
## This module generates the C code for the scalar wave equation in Cartesian coordinates and sets up either monochromatic plane-wave or spherical Gaussian [Initial Data](https://en.wikipedia.org/wiki/Initial_value_problem).
**Notebook Status:** <font color='green'><b> Validated </b></font>
**Validation Notes:** This tutorial notebook has been confirmed to be self-consistent with its corresponding NRPy+ module, as documented below ([right-hand-side expressions](#code_validation1); [initial data expressions](#code_validation2)). In addition, all expressions have been validated against a trusted code (the [original SENR/NRPy+ code](https://bitbucket.org/zach_etienne/nrpy)).
### NRPy+ Source Code for this module:
* [ScalarWave/ScalarWave_RHSs.py](../edit/ScalarWave/ScalarWave_RHSs.py)
* [ScalarWave/InitialData.py](../edit/ScalarWave/InitialData.py)
## Introduction:
### Problem Statement
We wish to numerically solve the scalar wave equation as an [initial value problem](https://en.wikipedia.org/wiki/Initial_value_problem) in Cartesian coordinates:
$$\partial_t^2 u = c^2 \nabla^2 u \text{,}$$
where $u$ (the amplitude of the wave) is a function of time and space: $u = u(t,x,y,...)$ (spatial dimension as-yet unspecified) and $c$ is the wave speed, subject to some initial condition
$$u(0,x,y,...) = f(x,y,...)$$
and suitable spatial boundary conditions.
As described in the next section, we will find it quite useful to define
$$v(t,x,y,...) = \partial_t u(t,x,y,...).$$
In this way, the second-order PDE is reduced to a set of two coupled first-order PDEs
\begin{align}
\partial_t u &= v \\
\partial_t v &= c^2 \nabla^2 u.
\end{align}
We will use NRPy+ to generate efficient C codes capable of generating both initial data $u(0,x,y,...) = f(x,y,...)$; $v(0,x,y,...)=g(x,y,...)$, as well as finite-difference expressions for the right-hand sides of the above expressions. These expressions are needed within the *Method of Lines* to "integrate" the solution forward in time.
### The Method of Lines
Once we have initial data, we "evolve it forward in time", using the [Method of Lines](https://reference.wolfram.com/language/tutorial/NDSolveMethodOfLines.html). In short, the Method of Lines enables us to handle
1. the **spatial derivatives** of an initial value problem PDE using **standard finite difference approaches**, and
2. the **temporal derivatives** of an initial value problem PDE using **standard strategies for solving ordinary differential equations (ODEs)**, so long as the initial value problem PDE can be written in the form
$$\partial_t \vec{f} = \mathbf{M}\ \vec{f},$$
where $\mathbf{M}$ is an $N\times N$ matrix filled with differential operators that act on the $N$-element column vector $\vec{f}$. $\mathbf{M}$ may not contain $t$ or time derivatives explicitly; only *spatial* partial derivatives are allowed to appear inside $\mathbf{M}$. The scalar wave equation as written in the [previous module](Tutorial-ScalarWave.ipynb)
\begin{equation}
\partial_t
\begin{bmatrix}
u \\
v
\end{bmatrix}=
\begin{bmatrix}
0 & 1 \\
c^2 \nabla^2 & 0
\end{bmatrix}
\begin{bmatrix}
u \\
v
\end{bmatrix}
\end{equation}
satisfies this requirement.
Thus we can treat the spatial derivatives $\nabla^2 u$ of the scalar wave equation using **standard finite-difference approaches**, and the temporal derivatives $\partial_t u$ and $\partial_t v$ using **standard approaches for solving ODEs**. In [the next module](Tutorial-Start_to_Finish-ScalarWave.ipynb), we will apply the highly robust [explicit Runge-Kutta fourth-order scheme](https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods) (RK4), used widely for numerically solving ODEs, to "march" (integrate) the solution vector $\vec{f}$ forward in time from its initial value ("initial data").
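As a sketch of what one RK4 step looks like for a system $\partial_t \vec{f} = \mathbf{M}\,\vec{f}$ (a toy Python illustration with a hypothetical `rhs` callable, not NRPy+'s actual C driver), applied here to the simple harmonic oscillator written as a first-order system, in direct analogy to the $\{u,v\}$ reduction above:

```
import numpy as np

def rk4_step(rhs, f, t, dt):
    """One classical RK4 step for df/dt = rhs(t, f)."""
    k1 = rhs(t, f)
    k2 = rhs(t + 0.5*dt, f + 0.5*dt*k1)
    k3 = rhs(t + 0.5*dt, f + 0.5*dt*k2)
    k4 = rhs(t + dt, f + dt*k3)
    return f + (dt/6.0)*(k1 + 2*k2 + 2*k3 + k4)

# Toy problem: u'' = -u as the first-order system f = [u, v],
# with u' = v and v' = -u (so u(t) = cos(t) for u(0)=1, v(0)=0).
rhs = lambda t, f: np.array([f[1], -f[0]])
f = np.array([1.0, 0.0])
t, dt = 0.0, 0.01
for n in range(100):          # march forward to t = 1
    f = rk4_step(rhs, f, t, dt)
    t += dt
```

After 100 steps the numerical solution tracks $\cos(t)$ to roughly $\mathcal{O}(\Delta t^4)$ global accuracy, which is exactly the role RK4 plays for the scalar wave state vector.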
### Basic Algorithm
The basic algorithm for solving the scalar wave equation [initial value problem](https://en.wikipedia.org/wiki/Initial_value_problem), based on the Method of Lines (see section above) is outlined below, with NRPy+-based components highlighted in <font color='green'>green</font>. We will review how NRPy+ generates these core components in this module.
1. Allocate memory for gridfunctions, including temporary storage for the RK4 time integration.
1. <font color='green'>Set gridfunction values to initial data.</font>
1. Evolve the system forward in time using RK4 time integration. At each RK4 substep, do the following:
1. <font color='green'>Evaluate scalar wave RHS expressions.</font>
1. Apply boundary conditions.
**We refer to the right-hand side of the equation $\partial_t \vec{f} = \mathbf{M}\ \vec{f}$ as the RHS. In this case, we refer to the $\mathbf{M}\ \vec{f}$ as the "scalar wave RHSs".** In the following sections we will
1. Use NRPy+ to cast the scalar wave RHS expressions -- in finite difference form -- into highly efficient C code,
1. first in one spatial dimension with fourth-order finite differences,
1. and then in three spatial dimensions with tenth-order finite differences.
1. Use NRPy+ to generate monochromatic plane-wave initial data for the scalar wave equation, where the wave propagates in an arbitrary direction.
As for the $\nabla^2 u$ term, spatial derivatives are handled in NRPy+ via [finite differencing](https://en.wikipedia.org/wiki/Finite_difference).
We will sample the solution $\{u,v\}$ at discrete, uniformly-sampled points in space and time. For simplicity, let's assume that we consider the wave equation in one spatial dimension. Then the solution at any sampled point in space and time is given by
$$u^n_i = u(t_n,x_i) = u(t_0 + n \Delta t, x_0 + i \Delta x),$$
where $\Delta t$ and $\Delta x$ represent the temporal and spatial resolution, respectively. $v^n_i$ is sampled at the same points in space and time.
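In code, this uniform space-time sampling amounts to two index maps and a 2D array of samples (an illustrative NumPy sketch; all names and resolutions here are arbitrary choices, not NRPy+ conventions):

```
import numpy as np

t0, x0 = 0.0, -1.0         # grid origins
dt, dx = 0.01, 0.02        # temporal and spatial resolutions
Nt, Nx = 100, 101          # number of samples in t and x
t = t0 + dt*np.arange(Nt)  # t_n = t_0 + n*Delta_t
x = x0 + dx*np.arange(Nx)  # x_i = x_0 + i*Delta_x
u = np.zeros((Nt, Nx))     # u[n, i] stores the sample u^n_i = u(t_n, x_i)
```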
<a id='toc'></a>
# Table of Contents
$$\label{toc}$$
1. [Step 1](#initializenrpy): Initialize core NRPy+ modules
1. [Step 2](#rhss1d): Scalar Wave RHSs in One Spatial Dimension, Fourth-Order Finite Differencing
1. [Step 3](#rhss3d): Scalar Wave RHSs in Three Spatial Dimensions, Tenth-Order Finite Differencing
1. [Step 3.a](#code_validation1): Code Validation against `ScalarWave.ScalarWave_RHSs` NRPy+ module
1. [Step 4](#id): Setting up Initial Data for the Scalar Wave Equation
1. [Step 4.a](#planewave): The Monochromatic Plane-Wave Solution
1. [Step 4.b](#sphericalgaussian): The Spherical Gaussian Solution (*Courtesy Thiago Assumpção*)
1. [Step 5](#code_validation2): Code Validation against `ScalarWave.InitialData` NRPy+ module
1. [Step 6](#latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file
<a id='initializenrpy'></a>
# Step 1: Initialize core NRPy+ modules \[Back to [top](#toc)\]
$$\label{initializenrpy}$$
Let's start by importing all the needed modules from NRPy+:
```
# Step P1: Import needed NRPy+ core modules:
import NRPy_param_funcs as par # NRPy+: Parameter interface
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
import grid as gri # NRPy+: Functions having to do with numerical grids
import finite_difference as fin # NRPy+: Finite difference C code generation module
from outputC import lhrh # NRPy+: Core C code output module
import sympy as sp # SymPy: The Python computer algebra package upon which NRPy+ depends
```
<a id='rhss1d'></a>
# Step 2: Scalar Wave RHSs in One Spatial Dimension, Fourth-Order Finite Differencing \[Back to [top](#toc)\]
$$\label{rhss1d}$$
To minimize complication, we will first restrict ourselves to solving the wave equation in one spatial dimension, so
$$\nabla^2 u = \partial_x^2 u.$$
Extension of this operator to higher spatial dimensions is straightforward, particularly when using NRPy+.
As was discussed in [the finite difference section of the tutorial](Tutorial-Finite_Difference_Derivatives.ipynb), NRPy+ approximates derivatives using finite-difference methods. The second-derivative operator $\partial_x^2$, accurate to fourth order in the uniform grid spacing $\Delta x$ (obtained by fitting the unique 4th-degree polynomial to 5 sample points of $u$), is given by
\begin{equation}
\left[\partial_x^2 u(t,x)\right]_j = \frac{1}{(\Delta x)^2}
\left(
-\frac{1}{12} \left(u_{j+2} + u_{j-2}\right)
+ \frac{4}{3} \left(u_{j+1} + u_{j-1}\right)
- \frac{5}{2} u_j \right)
+ \mathcal{O}\left((\Delta x)^4\right).
\end{equation}
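This stencil is easy to sanity-check numerically, e.g. on $u(x)=\sin(x)$, whose exact second derivative is $-\sin(x)$ (a quick NumPy illustration, independent of NRPy+):

```
import numpy as np

dx = 0.01
xj = 1.0
u = lambda x: np.sin(x)
# Fourth-order central stencil for the second derivative at x_j
d2u = ( -(u(xj + 2*dx) + u(xj - 2*dx))/12.0
        + 4.0*(u(xj + dx) + u(xj - dx))/3.0
        - 2.5*u(xj) ) / dx**2
err = abs(d2u - (-np.sin(xj)))   # leading error term is O(dx**4)
```

Halving `dx` should shrink `err` by roughly a factor of 16, confirming fourth-order convergence.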
```
# Step P2: Define the C parameter wavespeed. The `wavespeed`
# variable is a proper SymPy variable, so it can be
# used in below expressions. In the C code, it acts
# just like a usual parameter, whose value is
# specified in the parameter file.
thismodule = "ScalarWave"
wavespeed = par.Cparameters("REAL",thismodule,"wavespeed", 1.0)
# Step 1: Set the spatial dimension parameter, and then read
# the parameter as DIM.
par.set_parval_from_str("grid::DIM",1)
DIM = par.parval_from_str("grid::DIM")
# Step 2: Set the finite differencing order to 4.
par.set_parval_from_str("finite_difference::FD_CENTDERIVS_ORDER",4)
# Step 3: Register gridfunctions that are needed as input
# to the scalar wave RHS expressions.
uu, vv = gri.register_gridfunctions("EVOL",["uu","vv"])
# Step 4: Declare the rank-2 indexed expression \partial_{ij} u,
# which is symmetric about interchange of indices i and j
# Derivative variables like these must have an underscore
# in them, so the finite difference module can parse the
# variable name properly.
uu_dDD = ixp.declarerank2("uu_dDD","sym01")
# Step 5: Define right-hand sides for the evolution.
uu_rhs = vv
vv_rhs = 0
for i in range(DIM):
vv_rhs += wavespeed*wavespeed*uu_dDD[i][i]
vv_rhs = sp.simplify(vv_rhs)
# Step 6: Generate C code for scalarwave evolution equations,
# print output to the screen (standard out, or stdout).
fin.FD_outputC("stdout",
[lhrh(lhs=gri.gfaccess("rhs_gfs","uu"),rhs=uu_rhs),
lhrh(lhs=gri.gfaccess("rhs_gfs","vv"),rhs=vv_rhs)])
```
**Success!** Notice that indeed NRPy+ was able to compute the spatial derivative operator,
\begin{equation}
\left[\partial_x^2 u(t,x)\right]_j \approx \frac{1}{(\Delta x)^2}
\left(
-\frac{1}{12} \left(u_{j+2} + u_{j-2}\right)
+ \frac{4}{3} \left(u_{j+1} + u_{j-1}\right)
- \frac{5}{2} u_j \right),
\end{equation}
correctly (this is easier to read in the "Original SymPy expressions" comment block at the top of the C output). Note that `invdx0`$=1/\Delta x_0$, where $\Delta x_0$ is the (uniform) grid spacing in the zeroth, or $x_0$, direction.
<a id='rhss3d'></a>
# Step 3: Scalar Wave RHSs in Three Spatial Dimensions, Tenth-Order Finite Differencing \[Back to [top](#toc)\]
$$\label{rhss3d}$$
Let's next repeat the same process, only this time at **10th** finite difference order, for the **3-spatial-dimension** scalar wave equation, with SIMD enabled:
```
# Step 1: Define the C parameter wavespeed. The `wavespeed`
# variable is a proper SymPy variable, so it can be
# used in below expressions. In the C code, it acts
# just like a usual parameter, whose value is
# specified in the parameter file.
wavespeed = par.Cparameters("REAL",thismodule,"wavespeed", 1.0)
# Step 2: Set the spatial dimension parameter
# to *THREE* this time, and then read
# the parameter as DIM.
par.set_parval_from_str("grid::DIM",3)
DIM = par.parval_from_str("grid::DIM")
# Step 3: Set the finite differencing order to 10.
par.set_parval_from_str("finite_difference::FD_CENTDERIVS_ORDER",10)
# Step 4a: Reset gridfunctions registered in 1D case above,
# to avoid NRPy+ throwing an error about double-
# registering gridfunctions, which is not allowed.
gri.glb_gridfcs_list = []
# Step 4b: Register gridfunctions that are needed as input
# to the scalar wave RHS expressions.
uu, vv = gri.register_gridfunctions("EVOL",["uu","vv"])
# Step 5: Declare the rank-2 indexed expression \partial_{ij} u,
# which is symmetric about interchange of indices i and j
# Derivative variables like these must have an underscore
# in them, so the finite difference module can parse the
# variable name properly.
uu_dDD = ixp.declarerank2("uu_dDD","sym01")
# Step 6: Define right-hand sides for the evolution.
uu_rhs = vv
vv_rhs = 0
for i in range(DIM):
vv_rhs += wavespeed*wavespeed*uu_dDD[i][i]
# Step 7: Simplify the expression for c^2 \nabla^2 u (a.k.a., vv_rhs):
vv_rhs = sp.simplify(vv_rhs)
# Step 8: Generate C code for scalarwave evolution equations,
# print output to the screen (standard out, or stdout).
fin.FD_outputC("stdout",
[lhrh(lhs=gri.gfaccess("rhs_gfs","uu"),rhs=uu_rhs),
lhrh(lhs=gri.gfaccess("rhs_gfs","vv"),rhs=vv_rhs)],params="SIMD_enable=True")
```
<a id='code_validation1'></a>
## Step 3.a: Code Validation against `ScalarWave.ScalarWave_RHSs` NRPy+ module \[Back to [top](#toc)\]
$$\label{code_validation1}$$
Here, as a code validation check, we verify agreement in the SymPy expressions for the RHSs of the three-spatial-dimension Scalar Wave equation (i.e., `uu_rhs` and `vv_rhs`) between
1. this tutorial and
2. the [NRPy+ ScalarWave.ScalarWave_RHSs](../edit/ScalarWave/ScalarWave_RHSs.py) module.
```
# Step 10: We already have SymPy expressions for uu_rhs and vv_rhs in
# terms of other SymPy variables. Even if we reset the list
# of NRPy+ gridfunctions, these *SymPy* expressions for
# uu_rhs and vv_rhs *will remain unaffected*.
#
# Here, we will use the above-defined uu_rhs and vv_rhs to
# validate against the same expressions in the
# ScalarWave/ScalarWave_RHSs.py module,
# to ensure consistency between this tutorial
# (historically speaking, the tutorial was written first)
# and the ScalarWave_RHSs.py module itself.
#
# Reset the list of gridfunctions, as registering a gridfunction
# twice will spawn an error.
gri.glb_gridfcs_list = []
# Step 11: Call the ScalarWave_RHSs() function from within the
# ScalarWave/ScalarWave_RHSs.py module,
# which should do exactly the same as in Steps 1-10 above.
import ScalarWave.ScalarWave_RHSs as swrhs
swrhs.ScalarWave_RHSs()
# Step 12: Consistency check between the tutorial notebook above
# and the ScalarWave_RHSs() function from within the
# ScalarWave/ScalarWave_RHSs.py module.
print("Consistency check between ScalarWave tutorial and NRPy+ module:")
print("uu_rhs - swrhs.uu_rhs = "+str(sp.simplify(uu_rhs - swrhs.uu_rhs))+"\t\t (should be zero)")
print("vv_rhs - swrhs.vv_rhs = "+str(sp.simplify(vv_rhs - swrhs.vv_rhs))+"\t\t (should be zero)")
```
<a id='id'></a>
# Step 4: Setting up Initial Data for the Scalar Wave Equation \[Back to [top](#toc)\]
$$\label{id}$$
<a id='planewave'></a>
## Step 4.a: The Monochromatic Plane-Wave Solution \[Back to [top](#toc)\]
$$\label{planewave}$$
The solution to the scalar wave equation for a monochromatic (single-wavelength) wave traveling in the $\hat{k}$ direction is
$$u(\vec{x},t) = f(\hat{k}\cdot\vec{x} - c t),$$
where $\hat{k}$ is a unit vector. We choose $f(\hat{k}\cdot\vec{x} - c t)$ to take the form
$$
f(\hat{k}\cdot\vec{x} - c t) = \sin\left(\hat{k}\cdot\vec{x} - c t\right) + 2,
$$
where we add the $+2$ to ensure that the exact solution never crosses through zero. In places where the exact solution passes through zero, the relative error (i.e., the measure of error to compare numerical with exact results) is undefined. Also, $f(\hat{k}\cdot\vec{x} - c t)$ plus a constant is still a solution to the wave equation.
```
# Step 1: Set parameters defined in other modules
xx = gri.xx # Sets the Cartesian coordinates xx[0]=x; xx[1]=y; xx[2]=z
# Step 2: Declare free parameters intrinsic to these initial data
time = par.Cparameters("REAL", thismodule, "time",0.0)
kk = par.Cparameters("REAL", thismodule, ["kk0", "kk1", "kk2"],[1.0,1.0,1.0])
# Step 3: Normalize the k vector
kk_norm = sp.sqrt(kk[0]**2 + kk[1]**2 + kk[2]**2)
# Step 4: Compute k.x
dot_product = sp.sympify(0)
for i in range(DIM):
dot_product += xx[i]*kk[i]
dot_product /= kk_norm
# Step 5: Set initial data for uu and vv, where vv_ID = \partial_t uu_ID.
uu_ID_PlaneWave = sp.sin(dot_product - wavespeed*time)+2
vv_ID_PlaneWave = sp.diff(uu_ID_PlaneWave, time)
```
Next we verify that $f(\hat{k}\cdot\vec{x} - c t)$ satisfies the wave equation, by computing
$$\left(c^2 \nabla^2 - \partial_t^2 \right)\ f\left(\hat{k}\cdot\vec{x} - c t\right),$$
and confirming the result is exactly zero.
```
sp.simplify(wavespeed**2*(sp.diff(uu_ID_PlaneWave,xx[0],2) +
sp.diff(uu_ID_PlaneWave,xx[1],2) +
sp.diff(uu_ID_PlaneWave,xx[2],2))
- sp.diff(uu_ID_PlaneWave,time,2))
```
<a id='sphericalgaussian'></a>
## Step 4.b: The Spherical Gaussian Solution \[Back to [top](#toc)\]
$$\label{sphericalgaussian}$$
Here we will implement the spherical Gaussian solution, which consists of ingoing and outgoing wave fronts:
\begin{align}
u(r,t) &= u_{\rm out}(r,t) + u_{\rm in}(r,t) + 1,\ \ \text{where}\\
u_{\rm out}(r,t) &=\frac{r-ct}{r} \exp\left[\frac{-(r-ct)^2}{2 \sigma^2}\right] \\
u_{\rm in}(r,t) &=\frac{r+ct}{r} \exp\left[\frac{-(r+ct)^2}{2 \sigma^2}\right] \\
\end{align}
where $c$ is the wavespeed, and $\sigma$ is the width of the Gaussian (i.e., the "standard deviation").
```
# Step 1: Set parameters defined in other modules
xx = gri.xx # Sets the Cartesian coordinates xx[0]=x; xx[1]=y; xx[2]=z
# Step 2: Declare free parameters intrinsic to these initial data
time = par.Cparameters("REAL", thismodule, "time",0.0)
sigma = par.Cparameters("REAL", thismodule, "sigma",3.0)
# Step 4: Compute r
r = sp.sympify(0)
for i in range(DIM):
r += xx[i]**2
r = sp.sqrt(r)
# Step 5: Set initial data for uu and vv, where vv_ID = \partial_t uu_ID.
uu_ID_SphericalGaussianOUT = +(r - wavespeed*time)/r * sp.exp( -(r - wavespeed*time)**2 / (2*sigma**2) )
uu_ID_SphericalGaussianIN = +(r + wavespeed*time)/r * sp.exp( -(r + wavespeed*time)**2 / (2*sigma**2) )
uu_ID_SphericalGaussian = uu_ID_SphericalGaussianOUT + uu_ID_SphericalGaussianIN + sp.sympify(1)
vv_ID_SphericalGaussian = sp.diff(uu_ID_SphericalGaussian, time)
```
Since the wave equation is linear, both the ingoing and outgoing waves must satisfy the wave equation, which implies that their sum (plus a constant) also satisfies it.
Next we verify that $u(r,t)$ satisfies the wave equation, by confirming that
$$\left(c^2 \nabla^2 - \partial_t^2 \right) u_{\rm out}(r,t)$$
and
$$\left(c^2 \nabla^2 - \partial_t^2 \right) u_{\rm in}(r,t)$$
are separately zero. We check the two pieces separately because SymPy has difficulty simplifying the combined expression.
```
print(sp.simplify(wavespeed**2*(sp.diff(uu_ID_SphericalGaussianOUT,xx[0],2) +
sp.diff(uu_ID_SphericalGaussianOUT,xx[1],2) +
sp.diff(uu_ID_SphericalGaussianOUT,xx[2],2))
- sp.diff(uu_ID_SphericalGaussianOUT,time,2)) )
print(sp.simplify(wavespeed**2*(sp.diff(uu_ID_SphericalGaussianIN,xx[0],2) +
sp.diff(uu_ID_SphericalGaussianIN,xx[1],2) +
sp.diff(uu_ID_SphericalGaussianIN,xx[2],2))
- sp.diff(uu_ID_SphericalGaussianIN,time,2)))
```
<a id='code_validation2'></a>
# Step 5: Code Validation against `ScalarWave.InitialData` NRPy+ module \[Back to [top](#toc)\]
$$\label{code_validation2}$$
As a code validation check, we will verify agreement in the SymPy expressions for plane-wave initial data for the Scalar Wave equation between
1. this tutorial and
2. the NRPy+ [ScalarWave.InitialData](../edit/ScalarWave/InitialData.py) module.
```
# We just defined SymPy expressions for uu_ID and vv_ID in
# terms of other SymPy variables. Here, we will use the
# above-defined uu_ID and vv_ID to validate against the
# same expressions in the ScalarWave/InitialData.py
# module, to ensure consistency between this tutorial
# (historically speaking, the tutorial was written first)
# and the PlaneWave ID module itself.
#
# Step 6: Call the InitialData(Type="PlaneWave") function from within the
# ScalarWave/InitialData.py module,
# which should do exactly the same as in Steps 1-5 above.
import ScalarWave.InitialData as swid
swid.InitialData(Type="PlaneWave")
# Step 7: Consistency check between the tutorial notebook above
# and the PlaneWave option from within the
# ScalarWave/InitialData.py module.
print("Consistency check between ScalarWave tutorial and NRPy+ module: PlaneWave Case")
if sp.simplify(uu_ID_PlaneWave - swid.uu_ID) != 0:
print("TEST FAILED: uu_ID_PlaneWave - swid.uu_ID = "+str(sp.simplify(uu_ID_PlaneWave - swid.uu_ID))+"\t\t (should be zero)")
sys.exit(1)
if sp.simplify(vv_ID_PlaneWave - swid.vv_ID) != 0:
print("TEST FAILED: vv_ID_PlaneWave - swid.vv_ID = "+str(sp.simplify(vv_ID_PlaneWave - swid.vv_ID))+"\t\t (should be zero)")
sys.exit(1)
print("TESTS PASSED!")
# Step 8: Consistency check between the tutorial notebook above
# and the SphericalGaussian option from within the
# ScalarWave/InitialData.py module.
swid.InitialData(Type="SphericalGaussian")
print("Consistency check between ScalarWave tutorial and NRPy+ module: SphericalGaussian Case")
if sp.simplify(uu_ID_SphericalGaussian - swid.uu_ID) != 0:
print("TEST FAILED: uu_ID_SphericalGaussian - swid.uu_ID = "+str(sp.simplify(uu_ID_SphericalGaussian - swid.uu_ID))+"\t\t (should be zero)")
sys.exit(1)
if sp.simplify(vv_ID_SphericalGaussian - swid.vv_ID) != 0:
print("TEST FAILED: vv_ID_SphericalGaussian - swid.vv_ID = "+str(sp.simplify(vv_ID_SphericalGaussian - swid.vv_ID))+"\t\t (should be zero)")
sys.exit(1)
print("TESTS PASSED!")
```
<a id='latex_pdf_output'></a>
# Step 6: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\]
$$\label{latex_pdf_output}$$
The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename
[Tutorial-ScalarWave.pdf](Tutorial-ScalarWave.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
```
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-ScalarWave")
```
# BATEMAN’S EQUATIONS: CHAIN OF DECAYS OF 3 NUCLEAR SPECIES
## Import Libraries
```
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import rcParams; rcParams["figure.dpi"] = 300
from matplotlib.ticker import (AutoMinorLocator)
plt.style.use('seaborn-bright')
plt.rc('font', family='serif')
plt.rc('xtick', labelsize='x-small')
plt.rc('ytick', labelsize='x-small')
```
You will be examining properties of the Bateman equations that govern the decay of multiple nuclear species. While the problem is exactly solvable in iterative form, we will treat it as a coupled system of equations to determine the co-evolution of all species. It is well known that if there is only one nuclear species A, the number of radioactive decays is proportional to the number of radioactive nuclei, $N_A$, i.e., the species evolves according to the ordinary differential equation (ODE) <br>
$$\begin{equation}
\tag{1}
\begin{split}
\frac{dN_A}{dt} &= -\lambda_{A}N_A
\end{split}
\end{equation}$$ <br>
where $\lambda_A$ is related to the half-life of species A by $t_{1/2,A} = \ln(2)/\lambda_A$. <br>
Now consider the case of a chain of two decays: one nucleus A decays into another B by one process, then B decays
into another C by a new process. The previous equation cannot be applied to the decay chain. Since A decays into B, and B decays into C, the activity of A adds to the total number of B nuclei. Therefore, the number of second generation nuclei B increases as a result of the decay of first generation A nuclei, and decreases as a result of its own decay into the third generation nuclei C, thus, the B species evolves as, <br>
$$\begin{equation}
\tag{2}
\begin{split}
\frac{dN_B}{dt} &= -\lambda_{B}N_B + \lambda_{A}N_A
\end{split}
\end{equation}$$ <br>
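Before integrating numerically, it may help to recall that the two-species chain A → B has the classic Bateman closed form for $N_B(t)$. The sketch below (with arbitrary, assumed parameter values) checks that closed form against Eq. (2) by finite differences:

```python
import numpy as np

# Hedged sketch: check the classic Bateman closed form for the chain A -> B
# against Eq. (2).  All parameter values are arbitrary illustrations.
lam_A, lam_B = 0.3, 1.1      # decay constants (assumed)
NA0, NB0 = 1.0, 0.0          # initial populations (assumed)

t = np.linspace(0.0, 10.0, 2001)
NA = NA0*np.exp(-lam_A*t)
NB = (NB0*np.exp(-lam_B*t)
      + lam_A*NA0/(lam_B - lam_A)*(np.exp(-lam_A*t) - np.exp(-lam_B*t)))

# Central-difference dNB/dt compared with the right-hand side of Eq. (2)
dNB_dt = np.gradient(NB, t)
rhs = -lam_B*NB + lam_A*NA
print(np.max(np.abs(dNB_dt - rhs)))  # small: finite-difference error only
```

The residual is limited only by the finite-difference truncation error, confirming the closed form solves Eq. (2).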
We will now treat a more interesting possibility, where we have 3 radioactive species, the first generation A decays into second generation B and C. For example, $^{40}K$ has a 89.3% probability of decaying to $^{40}Ca$, and 10.7% to $^{40}Ar$. To make the problem more fun we will add the possibility that species B can, too, decay into species C. We will also consider that C decays into stable nuclei D. Based on the above considerations we have: <br>
$$\begin{equation}
\tag{3}
\begin{split}
\frac{dN_A}{dt} &= -\lambda_{A}N_A \\
\frac{dN_B}{dt} &= -\lambda_{B}N_B + \lambda_{A,B}N_A \\
\frac{dN_C}{dt} &= -\lambda_{C}N_C + \lambda_{A,C}N_A + \lambda_{B}N_B \\
\frac{dN_D}{dt} &= \lambda_{C}N_C
\end{split}
\end{equation}$$ <br>
where $\lambda_{A,B} + \lambda_{A,C} = \lambda_{A}$, and $\lambda_{A,B}/\lambda_{A,C}$ must equal the ratio of the probability that A will decay into B to the probability that A will decay into C. <br>
This system of equations has an integral (constant) of the motion, <br>
$$\begin{equation}
\tag{4}
\begin{split}
N = N_A + N_B + N_C + N_D
\end{split}
\end{equation}$$ <br>
which is the total number of nuclei and is determined by the initial conditions. <br>
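As a quick sanity check of this conservation law: whenever $\lambda_{A,B} + \lambda_{A,C} = \lambda_{A}$, the right-hand sides of Eq. (3) sum to zero at any state, so $N$ cannot change. The numbers below are arbitrary test values, not ones used later in the notebook:

```python
# Hedged sketch: the right-hand sides of Eq. (3) sum to zero whenever
# lambda_AB + lambda_AC = lambda_A, so N_A + N_B + N_C + N_D is conserved.
lam_A = 0.7
lam_AB, lam_AC = 0.6, 0.1            # chosen so lam_AB + lam_AC = lam_A
lam_B, lam_C = 2.0, 3.5
NA, NB, NC, ND = 0.4, 0.3, 0.2, 0.1  # an arbitrary state

dNA = -lam_A*NA
dNB = -lam_B*NB + lam_AB*NA
dNC = -lam_C*NC + lam_AC*NA + lam_B*NB
dND = lam_C*NC
print(dNA + dNB + dNC + dND)  # zero up to round-off
</test>```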
The system (3) involves multiple timescales, and hence it is not possible to completely non-dimensionalize it; a natural choice is to measure time in units of $1/\lambda_A$, so that the remaining rates appear only as the ratios $\lambda_i/\lambda_A$.
In addition, use $N$ to introduce normalized species numbers $\tilde N_{i} = N_{i}/N$, $i$ = A, B, C, D, so that the final equations in dimensionless time describe the evolution of the fraction of each species in an initial sample. This way the constant of the motion becomes, <br>
$$\begin{equation}
\tag{5}
\begin{split}
\tilde N_{A} + \tilde N_{B} + \tilde N_{C} + \tilde N_{D} = \tilde N = 1
\end{split}
\end{equation}$$ <br>
This constant $\tilde N$ will allow you to validate the quality of the numerical integration of Eq. (3). <br>
Show in your term paper the derivation of the normalized and dimensionless version of (3). To solve this normalized and dimensionless version of (3) you will need to specify initial conditions. Do the following:
### ***QUESTIONS***
**1)** Use RK4 to integrate numerically the dimensionless and normalized version the ODE (3) from t = 0 forward in time and for sufficiently large t until the populations of each species settles. You will need to be plotting $\tilde N_{i}$ vs $t$ to see if the solution is settling and to determine when to stop the integration. Run a couple of numerical experiments with different parameters to test the dynamics of the species populations.
Use your judgement as to how small a step size you need to solve this system accurately. If you cannot figure this out from pure thought, experiment with different step sizes and use $\delta \tilde N = |(\tilde N(t) - \tilde N(t=0))/\tilde N(t=0)|$ to determine this accuracy. If $\delta\tilde N$ is smaller than $10^{-3}$ for all integration times, then you have a decent accuracy.
## Tools for Numerical Integration
```
# Define the RK4 Step (Taken from Class)
def RK4(RHS,y0,t,h,*P):
"""
Implements a single step of a fourth-order, explicit Runge-Kutta scheme
"""
thalf = t + 0.5*h
k1 = h*RHS(y0, t, *P)
k2 = h*RHS(y0+0.5*k1, thalf, *P)
k3 = h*RHS(y0+0.5*k2, thalf, *P)
k4 = h*RHS(y0+k3, t+h, *P)
return y0 + (k1 + 2*k2 + 2*k3 + k4)/6
# Define the ODESolver (Taken from Class)
def odeSolve(t0, y0, tmax, h, RHS, method, *P):
"""
ODE driver with constant step-size, allowing systems of ODE's
"""
# make array of times and find length of array
t = np.arange(t0,tmax+h,h)
ntimes, = t.shape
# find out if we are solving a scalar ODE or a system of ODEs, and allocate space accordingly
if type(y0) in [int, float]: # check if primitive type -- means only one eqn
neqn = 1
y = np.zeros(ntimes)
else: # otherwise assume a numpy array -- a system of more than one eqn
neqn, = y0.shape
y = np.zeros((ntimes, neqn))
# set first element of solution to initial conditions (possibly a vector)
y[0] = y0
# march on...
for i in range(0,ntimes-1):
y[i+1] = method(RHS,y[i],t[i],h,*P)
return t,y
```
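These tools can be sanity-checked on a problem with a known solution. The sketch below restates a compact RK4 step (a standalone restatement rather than the class code above) and applies it to $dy/dt = -y$, $y(0) = 1$, whose exact solution is $e^{-t}$:

```python
import numpy as np

# Hedged sketch: a compact RK4 step applied to dy/dt = -y, y(0) = 1.
def rk4_step(rhs, y, t, h):
    k1 = h*rhs(y, t)
    k2 = h*rhs(y + 0.5*k1, t + 0.5*h)
    k3 = h*rhs(y + 0.5*k2, t + 0.5*h)
    k4 = h*rhs(y + k3, t + h)
    return y + (k1 + 2*k2 + 2*k3 + k4)/6

h, y, t = 0.1, 1.0, 0.0
while t < 1.0 - 1e-12:          # integrate to t = 1
    y = rk4_step(lambda y, t: -y, y, t, h)
    t += h
print(abs(y - np.exp(-1.0)))    # small: RK4's global error is O(h^4)
```

Even with the fairly coarse step $h = 0.1$, the error at $t = 1$ is tiny, which is the behavior the convergence study below relies on.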
## RHS for the 4 Coupled Autonomous Linear ODEs
```
# Define the RHS
def nuclear_species_RHS(y,t,*P):
lambda_B, lambda_AB, lambda_C, lambda_AC = P ## Unpack Parameters
NA = y[0]
NB = y[1]
NC = y[2]
ND = y[3]
dNA_dt = -NA  # lambda_A = 1 in the dimensionless equations (time in units of 1/lambda_A)
dNB_dt = -lambda_B*NB + lambda_AB*NA
dNC_dt = -lambda_C*NC + lambda_AC*NA + lambda_B*NB
dND_dt = lambda_C*NC
array = np.array([dNA_dt, dNB_dt, dNC_dt, dND_dt])
return array
```
### CASE 1: Species A decays very SLOWLY
***
First, consider the case where the A species decays very slowly and test whether it is possible to run out of B and C nuclei. For this experiment you will set $\lambda_{B}/\lambda_{A}$ = 5, $\lambda_{C}/\lambda_{A}$ = 10, $\lambda_{A,B}/\lambda_{A}$ = 0.85 and $\lambda_{A,C}/\lambda_{A}$ = 0.15. Consider initial conditions $\tilde N_{A}$ = 0.5, $\tilde N_{B}$ = 0.25, $\tilde N_{C}$ = 0.1, $\tilde N_{D}$ = 0.15. <br>
Show plots of your solution for $\tilde N_{i}$ v/s t. <br>
**QUESTION:** Does this evolution eliminate the species A and B completely?
```
# Initial Conditions
t0 = 0.0
y0 = np.array([0.5, 0.25, 0.1, 0.15])
tmax = 10
h = 0.0001
# Parameters
lambda_B = 5.0
lambda_AB = 0.85
lambda_C = 10.0
lambda_AC = 0.15
# Solve the IVP
t,y = odeSolve(t0, y0, tmax, h, nuclear_species_RHS, RK4, lambda_B, lambda_AB, lambda_C, lambda_AC)
# dN Analysis
Nm = np.array([y[:,0],y[:,1],y[:,2],y[:,3]])
N = Nm.sum(axis=0)
Ne = N - 1
# Plot Normalized Values v/s Dimensionless Time
f,a = plt.subplots()
a.plot(t,y[:,0],'r', label=r'$\tilde N_{A}$')
a.plot(t,y[:,1],'g', label=r'$\tilde N_{B}$')
a.plot(t,y[:,2],'b', label=r'$\tilde N_{C}$')
a.plot(t,y[:,3],'k', label=r'$\tilde N_{D}$')
a.set_xlabel(r'Dimensionless Time ($t$)')
a.set_ylabel(r'Normalized Values ($\tilde N_{i}$)')
a.set_title(r'$\tilde N_{i}$ v/s $t$: Species A decays VERY SLOWLY', fontweight='bold')
a.xaxis.set_minor_locator(AutoMinorLocator())
a.yaxis.set_minor_locator(AutoMinorLocator())
a.tick_params(which='minor', length=2.5, color='k')
a.legend()
a.grid()
plt.tight_layout()
plt.show()
# Plot dN v/s Dimensionless Time
f,a = plt.subplots()
a.plot(t,Ne,'k')
a.set_xlabel(r'Dimensionless Time ($t$)')
a.set_ylabel(r'$\delta \tilde N$')
a.set_title(r'$\delta \tilde N$ v/s $t$: Species A decays VERY SLOWLY', fontweight='bold')
a.xaxis.set_minor_locator(AutoMinorLocator())
a.yaxis.set_minor_locator(AutoMinorLocator())
a.tick_params(which='minor', length=2.5, color='k')
a.grid(linestyle=':')
plt.tight_layout()
plt.show()
```
### CASE 2 : Species A decays very RAPIDLY
***
Second, consider the case where the A species decays very rapidly. For this experiment you will set $\lambda_{B}/\lambda_{A}$ = 0.05, $\lambda_{C}/\lambda_{A}$ = 0.1, $\lambda_{A,B}/\lambda_{A}$ = 0.85 and $\lambda_{A,C}/\lambda_{A}$ = 0.15. Consider initial conditions $\tilde N_{A}$ = 0.5, $\tilde N_{B}$ = 0.25, $\tilde N_{C}$ = 0.1, $\tilde N_{D}$ = 0.15. <br>
Show plots of your solution for $\tilde N_{i}$ v/s t. <br>
**QUESTION:** How is this evolution different from the previous one?
```
# Initial Conditions
t0 = 0.0
y0 = np.array([0.5, 0.25, 0.1, 0.15])
tmax = 120
h = 0.0001
# Parameters
lambda_B = 0.05
lambda_AB = 0.85
lambda_C = 0.1
lambda_AC = 0.15
# Solve the IVP
t,y = odeSolve(t0, y0, tmax, h, nuclear_species_RHS, RK4, lambda_B, lambda_AB, lambda_C, lambda_AC)
# dN Analysis
Nm = np.array([y[:,0],y[:,1],y[:,2],y[:,3]])
N = Nm.sum(axis=0)
Ne = N - 1
# Plot Normalized Values v/s Dimensionless Time
f,a = plt.subplots()
a.plot(t,y[:,0],'r', label=r'$\tilde N_{A}$')
a.plot(t,y[:,1],'g', label=r'$\tilde N_{B}$')
a.plot(t,y[:,2],'b', label=r'$\tilde N_{C}$')
a.plot(t,y[:,3],'k', label=r'$\tilde N_{D}$')
a.set_xlabel(r'Dimensionless Time ($t$)')
a.set_ylabel(r'Normalized Values ($\tilde N_{i}$)')
a.set_title(r'$\tilde N_{i}$ v/s $t$: Species A decays VERY RAPIDLY', fontweight='bold')
a.xaxis.set_minor_locator(AutoMinorLocator())
a.yaxis.set_minor_locator(AutoMinorLocator())
a.tick_params(which='minor', length=2.5, color='k')
a.legend()
plt.tight_layout()
plt.show()
# Plot dN v/s Dimensionless Time
f,a = plt.subplots()
a.plot(t,Ne,'k')
a.set_xlabel(r'Dimensionless Time ($t$)')
a.set_ylabel(r'$\delta \tilde N$')
a.set_title(r'$\delta \tilde N$ v/s $t$: Species A decays VERY RAPIDLY', fontweight='bold')
a.xaxis.set_minor_locator(AutoMinorLocator())
a.yaxis.set_minor_locator(AutoMinorLocator())
a.tick_params(which='minor', length=2.5, color='k')
a.grid(linestyle=':')
plt.tight_layout()
plt.show()
### CONVERGENCE ###
# Initial Conditions
t0 = 0.0
y0 = np.array([0.5, 0.25, 0.1, 0.15])
tmax = 120
h=np.array([1e2,1,0.75,0.6,0.5,0.3,0.1,0.05,0.01,0.005])
# Parameters
lambda_B = 0.05
lambda_AB = 0.85
lambda_C = 0.1
lambda_AC = 0.15
# Solve the IVP for each step size and record the constant-of-motion error at t = 3
Ne_at_3 = np.zeros(len(h))
for i in range(len(h)):
t,y = odeSolve(t0, y0, tmax, h[i], nuclear_species_RHS, RK4, lambda_B, lambda_AB, lambda_C, lambda_AC)
N = y.sum(axis=1)
Ne_at_3[i] = N[int(3.0/h[i])] - 1  # delta N at the sample closest to t = 3 from below
f,a = plt.subplots()
a.plot(h, Ne_at_3, 'b.')
a.set_xlabel('h')
a.set_ylabel(r'$\delta \tilde N(t=3)$')
a.set_title('Constant of Motion Error for Varying Step Sizes', fontsize=18)
a.set_xscale('log')
plt.tight_layout()
plt.show()
```
### CASE 3: Species B remains CONSTANT
***
Third, the system of equations has an "equilibrium" point for the B species, when $\lambda_{B} \tilde N_{B}/ \lambda_{A} = \lambda_{A,B} \tilde N_{A}/\lambda_{A}$, because then $\frac{d\tilde N_{B}}{dt} = 0$: the number of B nuclei remains constant, with the B decay rate balanced by the replenishment of B from the decay of A species. You can now study whether this "equilibrium" is stable, by considering initial conditions that satisfy it. We will keep the same $\lambda_{A,B}/\lambda_{A}$ = 0.85 and $\lambda_{A,C}/\lambda_{A}$ = 0.15, and initial conditions $\tilde N_{A}$ = 0.5, $\tilde N_{B}$ = 0.25, $\tilde N_{C}$ = 0.1, $\tilde N_{D}$ = 0.15. The condition $\lambda_{B} \tilde N_{B} = \lambda_{A,B} \tilde N_{A}$ then implies $\lambda_{B}/\lambda_{A}$ = 1.7. And set again $\lambda_{C}/\lambda_{A}$ = 0.1. <br>
Show plots of your solution for $\tilde N_{i}$ v/s t. <br>
**QUESTION:** Does the population of B species remain constant?
```
# Initial Conditions
t0 = 0.0
y0 = np.array([0.5, 0.25, 0.1, 0.15])
tmax = 60
h = 0.0001
# Parameters
lambda_B = 1.7
lambda_AB = 0.85
lambda_C = 0.1
lambda_AC = 0.15
# Solve the IVP
t,y = odeSolve(t0, y0, tmax, h, nuclear_species_RHS, RK4, lambda_B, lambda_AB, lambda_C, lambda_AC)
# dN Analysis
Nm = np.array([y[:,0],y[:,1],y[:,2],y[:,3]])
N = Nm.sum(axis=0)
Ne = N - 1
# Plot Normalized Values v/s Dimensionless Time
f,a = plt.subplots()
a.plot(t,y[:,0],'r', label=r'$\tilde N_{A}$')
a.plot(t,y[:,1],'g', label=r'$\tilde N_{B}$')
a.plot(t,y[:,2],'b', label=r'$\tilde N_{C}$')
a.plot(t,y[:,3],'k', label=r'$\tilde N_{D}$')
a.set_xlabel(r'Dimensionless Time ($t$)')
a.set_ylabel(r'Normalized Values ($\tilde N_{i}$)')
a.set_title(r'$\tilde N_{i}$ v/s $t$: Species B remains CONSTANT', fontweight='bold')
a.xaxis.set_minor_locator(AutoMinorLocator())
a.yaxis.set_minor_locator(AutoMinorLocator())
a.tick_params(which='minor', length=2.5, color='k')
a.legend()
plt.tight_layout()
plt.show()
# Plot dN v/s Dimensionless Time
f,a = plt.subplots()
a.plot(t,Ne,'k')
a.set_xlabel(r'Dimensionless Time ($t$)')
a.set_ylabel(r'$\delta \tilde N$')
a.set_title(r'$\delta \tilde N$ v/s $t$: Species B remains CONSTANT', fontweight='bold')
a.xaxis.set_minor_locator(AutoMinorLocator())
a.yaxis.set_minor_locator(AutoMinorLocator())
a.tick_params(which='minor', length=2.5, color='k')
a.grid(linestyle=':')
plt.tight_layout()
plt.show()
```
```
# Copyright 2021 Fagner Cunha
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import sys
import inspect
currentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
parentdir = os.path.dirname(currentdir)
sys.path.insert(0,parentdir)
import json
import numpy as np
import pandas as pd
from classification.iwildcamlib import CategoryMap
def _get_data_from_dict(row, dictionary, dictionary_key):
if str(row['location']) in dictionary:
return dictionary[str(row['location'])][dictionary_key]
else:
return np.NaN
def prepare_location_info(data_info, locations):
images = pd.DataFrame(data_info)
images['date'] = images['datetime']
images['latitude'] = images.apply(lambda row: _get_data_from_dict(row, locations, 'latitude'), axis=1)
images['longitude'] = images.apply(lambda row: _get_data_from_dict(row, locations, 'longitude'), axis=1)
return images.to_dict('records')
def _map_categ(row, categ_map):
return categ_map.category_to_index(row['category_id'])
def prepare_category(data_info, categ_map):
ann = pd.DataFrame(data_info)
ann['category_id'] = ann.apply(lambda row: _map_categ(row, categ_map), axis=1)
return ann.to_dict('records')
def filter_locations(data_info, locations):
images = pd.DataFrame(data_info)
images = images[images.location.isin(locations)].copy()
return images.to_dict('records')
```
### Loading metadata
```
locations_file = '/data/fagner/iWildCam2021/data/metadata/gps_locations.json'
train_file = '/data/fagner/iWildCam2021/data/metadata/iwildcam2021_train_annotations.json'
test_file = '/data/fagner/iWildCam2021/data/metadata/iwildcam2021_test_information.json'
train_dataset_split = '../data/data_split.json'
with open(locations_file) as json_file:
locations = json.load(json_file)
with open(train_file) as json_file:
train_info = json.load(json_file)
with open(test_file) as json_file:
test_info = json.load(json_file)
with open(train_dataset_split) as json_file:
split_info = json.load(json_file)
category_map = CategoryMap(train_file)
```
### Converting data
```
train_info['images'] = prepare_location_info(train_info['images'], locations)
train_info['annotations'] = prepare_category(train_info['annotations'], category_map)
test_info['images'] = prepare_location_info(test_info['images'], locations)
trainmini_info = train_info.copy()
trainmini_info['images'] = filter_locations(trainmini_info['images'], split_info['train'])
val_info = train_info.copy()
val_info['images'] = filter_locations(val_info['images'], split_info['validation'])
```
### Save data
```
train_geo_file = '/data/fagner/iWildCam2021/data/metadata/iwildcam2021_train_annotations_geoprior.json'
trainmin_geo_file = '/data/fagner/iWildCam2021/data/metadata/iwildcam2021_trainmini_annotations_geoprior.json'
val_geo_file = '/data/fagner/iWildCam2021/data/metadata/iwildcam2021_val_annotations_geoprior.json'
test_geo_file = '/data/fagner/iWildCam2021/data/metadata/iwildcam2021_test_information_geoprior.json'
with open(train_geo_file, 'w') as json_file:
json.dump(train_info, json_file)
with open(trainmin_geo_file, 'w') as json_file:
json.dump(trainmini_info, json_file)
with open(val_geo_file, 'w') as json_file:
json.dump(val_info, json_file)
with open(test_geo_file, 'w') as json_file:
json.dump(test_info, json_file)
```
# Linear Regression
The linear regression model is one of the simplest regression models. It assumes a linear relationship between $X$ and $Y$. The model output is defined as follows:
$$\hat{y} = WX + b$$
The *Advertising data set* (from "*An Introduction to Statistical Learning*", textbook by Gareth James, Robert Tibshirani, and Trevor Hastie) consists of the sales of a product in 200 different markets, along with advertising budgets for the product in each of those markets for three different media: TV, radio, and newspaper.
Objective: to determine whether there is an association between advertising and sales; if there is, we can instruct our client to adjust advertising budgets, thereby indirectly increasing sales.
We want to train an **inference model**, a series of mathematical expressions we want to apply to our data that depends on a series of parameters. The values of parameters change through training in order for the model to learn and adjust its output.
The training loop is:
+ Initialize the model parameters to some values.
+ Read the training data (for each example), possibly using randomization strategies in order to assure that training is stochastic.
+ Execute the inference model on the training data, getting for each training example the model output with the parameter values.
+ Compute the loss.
+ Adjust the model parameters.
We will repeat this process for a number of epochs.
After the training we will apply an evaluation phase.
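The loop described above can be sketched without TensorFlow. The following is a minimal NumPy version on synthetic data; the data, learning rate and epoch count are all illustrative assumptions, not the notebook's values:

```python
import numpy as np

# Hedged sketch: the training loop above, in plain NumPy, for a 1-D linear
# model y_hat = W*x + b fitted to synthetic data.  Everything here (data,
# learning rate, epoch count) is illustrative.
rng = np.random.RandomState(0)
x = rng.rand(200, 1)
y = 3.0*x + 1.0 + 0.05*rng.randn(200, 1)   # "true" W = 3, b = 1, plus noise

W, b = 0.0, 0.0                 # 1) initialize the model parameters
lr = 0.5
for epoch in range(500):        # repeat for a number of epochs
    y_hat = W*x + b             # 2-3) execute the inference model
    err = y_hat - y
    loss = np.mean(err**2)      # 4) compute the (L2) loss
    W -= lr*2*np.mean(err*x)    # 5) adjust the parameters along -gradient
    b -= lr*2*np.mean(err)
print(W, b)  # should approach the true values 3 and 1
```

The evaluation phase would then apply the learned `W` and `b` to held-out data, which is what the TensorFlow version below automates.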
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
%matplotlib inline
# Load data.
import numpy as np
data = pd.read_csv('data/Advertising.csv',index_col=0)
train_X = data[['TV']].values
train_Y = data.Sales.values
train_Y = train_Y[:,np.newaxis]
n_samples = train_X.shape[0]
print n_samples
print train_X.shape, train_Y.shape
# data visualization
import seaborn as sns
fig, ax = plt.subplots(1, 1)
ax.set_ylabel('Results',
rotation=0,
ha='right', # horizontalalignment
ma='left', # multiline alignment
)
ax.set_xlabel('TV')
fig.set_facecolor('#EAEAF2')
ax.plot(train_X, train_Y, 'o', color=sns.xkcd_rgb['pale red'], alpha=0.6,label='Original data')
plt.show()
import tensorflow as tf
tf.reset_default_graph()
# Training Parameters
learning_rate = 0.1
training_epochs = 100
# Define tf Graph Inputs
X = tf.placeholder("float",[None,1])
y = tf.placeholder("float",[None,1])
# Create Model variables
# Set model weights
W = tf.Variable(np.random.randn(), name="weight")
b = tf.Variable(np.random.randn(), name="bias")
# Construct a linear model
y_pred = tf.add(tf.mul(X, W), b)
# Minimize the squared errors
cost = tf.reduce_sum(tf.pow(y_pred - y,2))/(2*n_samples) #L2 loss
# Define the optimizer
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
# Initializing the variables
init = tf.initialize_all_variables()
# Launch the graph
with tf.Session() as sess:
sess.run(init)
cost_plot = []
# Fit all training data
for epoch in range(training_epochs):
sess.run(optimizer,
feed_dict={X: train_X, y: train_Y})
cost_plot.append(sess.run(cost,
feed_dict={X: train_X, y:train_Y}))
print ""
print "Optimization Finished!"
print "cost=", sess.run(cost,
feed_dict={X: train_X, y: train_Y}), \
"W=", sess.run(W), "b=", sess.run(b)
fig, ax = plt.subplots(1, 1)
ax.set_ylabel('Results',
rotation=0,
ha='right', # horizontalalignment
ma='left', # multiline alignment
)
ax.set_xlabel('TV')
fig.set_facecolor('#EAEAF2')
ax.plot(train_X,
train_Y, 'o',
color=sns.xkcd_rgb['pale red'],
alpha=0.6,label='Original data')
plt.plot(train_X,
sess.run(W) * train_X + sess.run(b),
label='Fitted line')
plt.show()
x = range(len(cost_plot))
plt.plot(x, np.sqrt(cost_plot))
plt.show()
print cost_plot[-1]
```
### Exercise
Tune the learning parameters of the previous example in order to get a better result.
```
# your code here
```
### Multiple Linear Regression
Let's use three features as input vector : TV, Radio, Newspaper
```
tf.reset_default_graph()
# Parameters
learning_rate = 1e-2
training_epochs = 2000
display_step = 200
import numpy as np
data = pd.read_csv('data/Advertising.csv',index_col=0)
train_X = data[['TV','Radio','Newspaper']].values
train_Y = data.Sales.values
train_Y = train_Y[:,np.newaxis]
n_samples = train_X.shape[0]
print n_samples
print train_X.shape, train_Y.shape
# Define tf Graph Inputs
X = tf.placeholder("float",[None,3])
y = tf.placeholder("float",[None,1])
# Create Model variables
# Set model weights
W = tf.Variable(tf.zeros([3, 1]), name="weights")
b = tf.Variable(np.random.randn(), name="bias")
# Construct a multidimensional linear model
y_pred = tf.matmul(X, W) + b
# Minimize the squared errors
cost = tf.reduce_sum(tf.pow(y_pred-y,2))/(2*n_samples) #L2 loss
# Define the optimizer
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
# Initializing the variables
init = tf.initialize_all_variables()
# Launch the graph
with tf.Session() as sess:
sess.run(init)
# Fit all training data
for epoch in range(training_epochs):
sess.run(optimizer, feed_dict={X: train_X, y: train_Y})
#Display logs per epoch step
if epoch % display_step == 0:
print "Epoch: ", '%04d' % (epoch+1), "\n cost= ", sess.run(cost, feed_dict={X: train_X, y: train_Y}), \
"\n W= ", sess.run(W), "\n b= ", sess.run(b), "\n"
print "Optimization Finished!"
print "cost= \n", sess.run(cost, feed_dict={X: train_X, y: train_Y}), \
"\n W= \n", sess.run(W), "\n b= \n", sess.run(b)
```
### `tf` helpers
Training over many cycles can take a long time, and losing the updated parameters can be costly (e.g., if your computer loses power). The `tf.train.Saver` class can save the graph variables for later reuse.
```python
...
saver = tf.train.Saver()
with tf.Session() as sess:
for step in range(training_steps):
sess.run(...)
if step % 1000 == 0:
saver.save(sess, 'my-model', global_step=step)
# save the final model after training, then evaluate
saver.save(sess, 'my-model', global_step=training_steps)
sess.close()
```
If we want to recover the training from a certain point we should use the `tf.train.get_checkpoint_state` method, which will verify that there is a checkpoint saved, and the `tf.train.Saver.restore` method to recover the variable values.
### Exercise: Logistic regression.
Complete the following code.
The linear model predicts a continuous value. Now we are going to write a model that can answer a yes/no question: **logistic regression**:
$$ \hat{y} = \frac{1}{1+ e^{-(WX + b)}} $$
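As a quick illustration, the logistic model's forward pass can be sketched in plain NumPy. The feature count and parameter values below are made up for the example (the notebook's TF version uses 5 features):

```python
import numpy as np

# Hedged sketch: the logistic model's forward pass, sigmoid(X.W + b),
# with made-up weights and inputs (2 features -> 1 output).
def sigmoid(z):
    return 1.0/(1.0 + np.exp(-z))

W = np.array([[0.5], [-0.25]])   # example weights
b = 0.1                          # example bias
X = np.array([[1.0,  2.0],
              [0.0, -1.0]])
y_hat = sigmoid(X.dot(W) + b)    # probabilities, each strictly in (0, 1)
print(y_hat)
```

Unlike the linear model, the sigmoid squashes the output into $(0, 1)$, so it can be read as the probability of the "yes" answer.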
```
import tensorflow as tf
import os
tf.reset_default_graph()
# same parameter and variable setup as for linear regression
W = tf.Variable(tf.zeros([5, 1]), name="weights")
b = tf.Variable(0., name="bias")
# your code here: write a function called 'inference' to implement the logistic regression model
```
The **cross-entropy** loss function is the best suited for logistic regression:
$$ L = -\sum_i (y_i \cdot \log (\hat{y}_i) + (1 - y_i) \cdot \log (1 - \hat{y}_i)) $$
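A framework-free sketch of this loss may help before writing the TF version. The clipping constant below is a common numerical safeguard added here, not something the formula itself prescribes:

```python
import numpy as np

# Hedged sketch: cross-entropy loss in NumPy.  eps-clipping keeps log()
# away from 0 (a common safeguard, assumed here).
def cross_entropy(y, y_hat, eps=1e-12):
    y_hat = np.clip(y_hat, eps, 1.0 - eps)
    return -np.sum(y*np.log(y_hat) + (1.0 - y)*np.log(1.0 - y_hat))

y     = np.array([1.0, 0.0, 1.0])   # true labels
y_hat = np.array([0.9, 0.1, 0.8])   # predicted probabilities
print(cross_entropy(y, y_hat))      # ~0.434: low, predictions match labels
```

Confident correct predictions cost almost nothing, while a confident wrong prediction is penalized heavily, which is why this loss suits logistic regression better than the squared error.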
```
def loss(X, Y):
#your code here
```
Now we are going to read the Titanic survivor dataset. The model will have to infer, based on the passenger's age, sex and ticket class, whether the passenger survived or not. We will create a batch to read many rows packed in a single tensor for computing the inference efficiently.
```
def read_csv(batch_size, file_name, record_defaults):
#
filename_queue = tf.train.string_input_producer([os.path.join(os.getcwd(), file_name)])
reader = tf.TextLineReader(skip_header_lines=1)
key, value = reader.read(filename_queue)
# decode_csv will convert a Tensor from type string (the text line) in
# a tuple of tensor columns with the specified defaults, which also
# sets the data type for each column
decoded = tf.decode_csv(value, record_defaults=record_defaults)
# batch actually reads the file and loads "batch_size" rows in a single tensor
return tf.train.shuffle_batch(decoded,
batch_size=batch_size,
capacity=batch_size * 50,
min_after_dequeue=batch_size)
```
We have *categorical* features in this dataset (`ticket_class, gender`) and we need to convert them to numbers. To this end we can convert each categorical feature to $N$ boolean features that represent each possible value.
For categorical features with only two values, a single binary feature is enough.
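The idea can be illustrated in plain Python (the function name here is ours, not from the dataset code): each categorical column expands into N boolean indicator features, one per possible value.

```python
# Expand one categorical column into N boolean (0/1) indicator features,
# one per possible category value.
def one_hot(values, categories):
    return [[1.0 if v == c else 0.0 for c in categories] for v in values]

pclass = [1, 3, 2, 3]
print(one_hot(pclass, [1, 2, 3]))
# -> [[1.0, 0.0, 0.0], [0.0, 0.0, 1.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
```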
```
def inputs():
passenger_id, survived, pclass, name, sex, age, sibsp, parch, ticket, fare, cabin, embarked = \
read_csv(100, "data/train_titanic.csv", [[0.0], [0.0], [0], [""], [""], [0.0], [0.0], [0.0], [""], [0.0], [""], [""]])
# convert categorical data
is_first_class = tf.to_float(tf.equal(pclass, [1]))
is_second_class = tf.to_float(tf.equal(pclass, [2]))
is_third_class = tf.to_float(tf.equal(pclass, [3]))
gender = tf.to_float(tf.equal(sex, ["female"]))
# Finally we pack all the features in a single matrix;
# We then transpose to have a matrix with one example per row and one feature per column.
features = tf.transpose(tf.pack([is_first_class, is_second_class, is_third_class, gender, age]))
survived = tf.reshape(survived, [100, 1])
return features, survived
def train(total_loss):
learning_rate = 0.01
return tf.train.GradientDescentOptimizer(learning_rate).minimize(total_loss)
def evaluate(sess, X, Y):
predicted = tf.cast(inference(X) > 0.5, tf.float32)
print sess.run(tf.reduce_mean(tf.cast(tf.equal(predicted, Y), tf.float32)))
# Launch the graph in a session, setup boilerplate
with tf.Session() as sess:
tf.initialize_all_variables().run()
X, Y = inputs()
total_loss = loss(X, Y)
train_op = train(total_loss)
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=sess, coord=coord)
# actual training loop
training_steps = 5000
for step in range(training_steps):
sess.run([train_op])
# for debugging and learning purposes, see how the loss gets decremented thru training steps
if step % 100 == 0:
print "loss: ", sess.run([total_loss])
evaluate(sess, X, Y)
import time
time.sleep(5)
coord.request_stop()
coord.join(threads)
sess.close()
```
### 'tf' Neural Network from scratch
Let's classify handwritten digits:

```
# Import MNIST data
# The MNIST data is split into three parts: 55,000 data points of training data (mnist.train),
# 10,000 points of test data (mnist.test), and 5,000 points of validation data (mnist.validation).
# Both the training set and test set contain images and their corresponding labels; for example the
# training images are mnist.train.images and the training labels are mnist.train.labels.
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
from matplotlib import pyplot as plt
import numpy as np
%matplotlib inline
mnist = input_data.read_data_sets("data/", one_hot=True)
fig, ax = plt.subplots(figsize=(2, 2))
plt.imshow(mnist.train.images[0].reshape((28, 28)), interpolation='nearest', cmap='gray')
plt.show()
print "Class: ", np.argmax(mnist.train.labels[0])
# Parameters
learning_rate = 0.001
training_epochs = 15
batch_size = 100
display_step = 1
# Network Parameters
n_hidden_1 = 256 # 1st layer number of features
n_hidden_2 = 256 # 2nd layer number of features
n_input = 784 # MNIST data input (img shape: 28*28)
n_classes = 10 # MNIST total classes (0-9 digits)
# tf Graph input
x = tf.placeholder("float", [None, n_input])
y = tf.placeholder("float", [None, n_classes])
# Create model
def multilayer_perceptron(x, weights, biases):
# Hidden layer with RELU activation
layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
layer_1 = tf.nn.relu(layer_1)
# Hidden layer with RELU activation
layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2'])
layer_2 = tf.nn.relu(layer_2)
# Output layer with linear activation
out_layer = tf.matmul(layer_2, weights['out']) + biases['out']
return out_layer
# Store layers weight & bias
weights = {
'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
'out': tf.Variable(tf.random_normal([n_hidden_2, n_classes]))
}
biases = {
'b1': tf.Variable(tf.random_normal([n_hidden_1])),
'b2': tf.Variable(tf.random_normal([n_hidden_2])),
'out': tf.Variable(tf.random_normal([n_classes]))
}
# Construct model
pred = multilayer_perceptron(x, weights, biases)
# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(pred, y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
# Initializing the variables
init = tf.initialize_all_variables()
# Launch the graph
with tf.Session() as sess:
sess.run(init)
# Training cycle
for epoch in range(training_epochs):
avg_cost = 0.
total_batch = int(mnist.train.num_examples/batch_size)
# Loop over all batches
for i in range(total_batch):
batch_x, batch_y = mnist.train.next_batch(batch_size)
# Run optimization op (backprop) and cost op (to get loss value)
_, c = sess.run([optimizer, cost], feed_dict={x: batch_x,
y: batch_y})
# Compute average loss
avg_cost += c / total_batch
# Display logs per epoch step
if epoch % display_step == 0:
print "Epoch:", '%04d' % (epoch+1), "cost=", \
"{:.9f}".format(avg_cost)
print "Optimization Finished!"
# Test model
correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
# Calculate accuracy
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print "Accuracy:", accuracy.eval({x: mnist.test.images, y: mnist.test.labels})
```
# 09 - Beginner Exercises
* Lambda
---
## 🍩🍩🍩
1.Create a lambda function that takes an argument $x$ and returns $x^{2}$.
Then assign it to the variable **Pow** and print Pow(2), Pow(3), Pow(1400).
```
# Write your own code in this cell
Pow =
```
## 🍩🍩🍩
2.Create a lambda function that takes two argument $a$ and $b$ and returns $\frac{a + b}{a * b}$.
Then assign it to the variable **Alpha** and print Alpha(1,2), Alpha(2,3), Alpha(3,4).
```
# Write your own code in this cell
Alpha =
```
## 🍩
3.We have a list of famous singers from different countries. Using **```lambda```**, write a function that, when applied to each item of **```persons```**, returns a tuple containing only the **name** and **country** of each singer, leaving out the age. Then print the resulting list.
```
# Write your own code in this cell
persons = [("Paul David Hewson", 61, "Ireland"),
("Saeed Mohammadi", 63, "Iran"),
("Alla Borisovna Pugacheva", 77, "Russia"),
("David Jon Gilmour", 75, "United Kingdom"),
("Aryana Sayeed", 36, "Afghanistan"),
("Céline Marie Claudette Dion", 53, "Canada"),
("Caetano Emanuel Viana Telles Veloso", 79, "Brazil"),]
```
## 🍩
4.As you know, the sort method takes two optional arguments: reverse and key.
`list.sort(reverse=True|False, key=myFunc)`
`sorted(iterable, key=key, reverse=reverse)`
reverse=True will sort the list descending. The default is reverse=False.
The key can be set to a function that specifies the **sorting criteria**.
Using **```lambda```** and **```sort()```** method, please sort the list of singers below based on the **second letter** of their name.
In the next step, sort the list of singers based on the **last letter** of their name using the **```sorted()```** builtin function.
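Before solving it, here is how `key` behaves on a neutral example (sorting numbers by absolute value), so the exercise itself stays yours to do:

```python
# Illustration of sort(key=...) with a lambda, on numbers instead of names:
# the key function maps each item to the value used for comparison.
nums = [-5, 2, -1, 4, -3]
nums.sort(key=lambda x: abs(x))             # in-place, ascending by |x|
print(nums)                                 # -> [-1, 2, -3, 4, -5]
print(sorted(nums, key=abs, reverse=True))  # -> [-5, 4, -3, 2, -1]
```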
```
# Write your own code in this cell
singers = ["Paul David Hewson", "Saeed Mohammadi" , "Aryana Sayeed",
"Alla Borisovna Pugacheva", "Alla Borisovna Pugacheva",
"David Jon Gilmour", "Céline Marie Claudette Dion",
"Caetano Emanuel Viana Telles Veloso"]
```
## 🍩
5.Please arrange the list below by age, from oldest to youngest.
```
# Write your own code in this cell
persons = [("Paul David Hewson", 61, "Ireland"),
("Saeed Mohammadi", 63, "Iran"),
("Alla Borisovna Pugacheva", 77, "Russia"),
("David Jon Gilmour", 75, "United Kingdom"),
("Aryana Sayeed", 36, "Afghanistan"),
("Céline Marie Claudette Dion", 53, "Canada"),
("Caetano Emanuel Viana Telles Veloso", 79, "Brazil"),]
```
## 🌶️
6.Using Lambda and sorted() function, write a function that puts negative numbers to the left of the list and positive numbers to the right of the list.
```
# Write your own code in this cell
mylist = [-1 , -7 ,3,14,6,12,-2,9,2,1,-4]
```
What's New and Changed in version 2.8.210321
--------------------------------------------
Version 2.8.210321 supports **SAP HANA SPS05** and **SAP HANA Cloud**.
Enhancements:
- Enhanced sql() to enable multiline execution.
- Enhanced save() to add append option.
- Enhanced diff() to enable negative input.
- Enhanced model report functionality of UnifiedClassification with added model and data visualization.
- Enhanced dataset_report module with an optimized report-generation process and better user experience.
- Enhanced UnifiedClustering to support 'distance_level' in AgglomerateHierarchicalClustering and DBSCAN functions. Please refer to documentation for details.
- Enhanced model storage to support unified report.
New functions:
- Added generate_html_report() and generate_notebook_iframe_report() functions for UnifiedRegression which could display the output, e.g. statistic and model.
- APL Gradient Boosting: the **other_params** parameter is now supported.
- APL all models: a new method, **get_model_info**, is created, allowing users to retrieve the summary and the performance metrics of a saved model.
- APL all models: users can now specify the weight of explanatory variables via the **weight** parameter.
- Added LSTM.
- Added Text Mining functions support for both SAP HANA on-premise and cloud version.
- tf_analysis
- text_classification
- get_related_doc
- get_related_term
- get_relevant_doc
- get_relevant_term
- get_suggested_term
- Added unified report.
New dependency:
- Added new dependency 'htmlmin' for generating dataset and model report.
API change:
- KMeans with two added parameters 'use_fast_library' and 'use_float'.
- UnifiedRegression with one added parameter 'build_report'.
- Added a parameter 'distance_level' in UnifiedClustering when 'func' is AgglomerateHierarchicalClustering and DBSCAN. Please refer to documentation for details.
- Renamed 'batch_size' to 'chunk_size' in create_dataframe_from_pandas.
- OnlineARIMA has two added parameters, 'random_state' and 'random_initialization', and its partial_fit() function supports two parameters, 'learning_rate' and 'epsilon', for updating the values in the input model.
Bug fixes:
- Fixed onlineARIMA model storage support.
- Fixed inflexible default locations of selected columns of input data, e.g. key, features and endog.
- Fixed accuracy_measure issue in AutoExponentialSmoothing.
## Multiline SQL execution
We've enhanced connection context's sql function to support multiline sql execution and return the last query statement.
```
from hana_ml.dataframe import ConnectionContext
connection_context = ConnectionContext(userkey="raymondyao")
df = connection_context.sql(
"""
DO
BEGIN
outtab = SELECT 1 KEY, 2.2 ENDOG FROM DUMMY;
CREATE LOCAL TEMPORARY TABLE #AABB AS (SELECT * FROM :outtab);
END;
SELECT * FROM #AABB
"""
)
df.collect()
```
## LSTM
Data from PAL example.
```
datalist = [
(0 ,20.7),
(1 ,17.9),
(2 ,18.8),
(3 ,14.6),
(4 ,15.8),
(5 ,15.8),
(6 ,15.8),
(7 ,17.4),
(8 ,21.8),
(9 ,20),
(10,16.2),
(11,13.3),
(12,16.7),
(13,21.5),
(14,25),
(15,20.7),
(16,20.6),
(17,24.8),
(18,17.7),
(19,15.5),
(20,18.2),
(21,12.1),
(22,14.4),
(23,16),
(24,16.5),
(25,18.7),
(26,19.4),
(27,17.2),
(28,15.5),
(29,15.1),
(30,15.4),
(31,15.3),
(32,18.8),
(33,21.9),
(34,19.9),
(35,16.6),
(36,16.8),
(37,14.6),
(38,17.1),
(39,25),
(40,15),
(41,13.7),
(42,13.9),
(43,18.3),
(44,22),
(45,22.1),
(46,21.2),
(47,18.4),
(48,16.6),
(49,16.1),
(50,15.7),
(51,16.6),
(52,16.5),
(53,14.4),
(54,14.4),
(55,18.5),
(56,16.9),
(57,17.5),
(58,21.2),
(59,17.8),
(60,18.6),
(61,17),
(62,16),
(63,13.3),
(64,14.3),
(65,11.4),
(66,16.3),
(67,16.1),
(68,11.8),
(69,12.2),
(70,14.7),
(71,11.8),
(72,11.3),
(73,10.6),
(74,11.7),
(75,14.2),
(76,11.2),
(77,16.9),
(78,16.7),
(79,8.1),
(80,8),
(81,8.8),
(82,13.4),
(83,10.9),
(84,13.4),
(85,11),
(86,15),
(87,15.7),
(88,14.5),
(89,15.8),
(90,16.7),
(91,16.8),
(92,17.5),
(93,17.1),
(94,18.1),
(95,16.6),
(96,10),
(97,14.9),
(98,15.9),
(99,13)]
datalist_predict = [
(0,12,13.7,17.6,14.3,13.7,15.2,14.5,14.9,15.5,16.4,14.5,12.6,13.6,11.2,11,12),
(1,11.9,14.7,9.4,6.6,7.9,11,15.7,15.2,15.9,10.6,8.3,8.6,12.7,10.5,12,11.1),
(2,14.7,9.4,6.6,7.9,11,15.7,15.2,15.9,10.6,8.3,8.6,12.7,10.5,12,11.1,13),
(3,9.4,6.6,7.9,11,15.7,15.2,15.9,10.6,8.3,8.6,12.7,10.5,12,11.1,13,12.4),
(4,6.6,7.9,11,15.7,15.2,15.9,10.6,8.3,8.6,12.7,10.5,12,11.1,13,12.4,13.3),
(5,7.9,11,15.7,15.2,15.9,10.6,8.3,8.6,12.7,10.5,12,11.1,13,12.4,13.3,15.9),
(6,11,15.7,15.2,15.9,10.6,8.3,8.6,12.7,10.5,12,11.1,13,12.4,13.3,15.9,12),
(7,15.7,15.2,15.9,10.6,8.3,8.6,12.7,10.5,12,11.1,13,12.4,13.3,15.9,12,13.7),
(8,15.2,15.9,10.6,8.3,8.6,12.7,10.5,12,11.1,13,12.4,13.3,15.9,12,13.7,17.6),
(9,15.9,10.6,8.3,8.6,12.7,10.5,12,11.1,13,12.4,13.3,15.9,12,13.7,17.6,14.3),
(10,10.6,8.3,8.6,12.7,10.5,12,11.1,13,12.4,13.3,15.9,12,13.7,17.6,14.3,13.7),
(11,8.3,8.6,12.7,10.5,12,11.1,13,12.4,13.3,15.9,12,13.7,17.6,14.3,13.7,15.2),
(12,8.6,12.7,10.5,12,11.1,13,12.4,13.3,15.9,12,13.7,17.6,14.3,13.7,15.2,14.5),
(13,12.7,10.5,12,11.1,13,12.4,13.3,15.9,12,13.7,17.6,14.3,13.7,15.2,14.5,14.9),
(14,10.5,12,11.1,13,12.4,13.3,15.9,12,13.7,17.6,14.3,13.7,15.2,14.5,14.9,15.5),
(15,12,11.1,13,12.4,13.3,15.9,12,13.7,17.6,14.3,13.7,15.2,14.5,14.9,15.5,16.4),
(16,11.1,13,12.4,13.3,15.9,12,13.7,17.6,14.3,13.7,15.2,14.5,14.9,15.5,16.4,14.5),
(17,13,12.4,13.3,15.9,12,13.7,17.6,14.3,13.7,15.2,14.5,14.9,15.5,16.4,14.5,12.6),
(18,12.4,13.3,15.9,12,13.7,17.6,14.3,13.7,15.2,14.5,14.9,15.5,16.4,14.5,12.6,13.6),
(19,13.3,15.9,12,13.7,17.6,14.3,13.7,15.2,14.5,14.9,15.5,16.4,14.5,12.6,13.6,11.2)
]
import pandas as pd
from hana_ml.dataframe import create_dataframe_from_pandas
lstm_data = create_dataframe_from_pandas(connection_context=connection_context,
pandas_df=pd.DataFrame(datalist, columns=["KEY", "VALUE"]),
table_name="#LSTM_TRAIN",
force=True)
lstm_predict = create_dataframe_from_pandas(connection_context=connection_context,
pandas_df=pd.DataFrame(datalist_predict, columns=["ID",
"VAL1",
"VAL2",
"VAL3",
"VAL4",
"VAL5",
"VAL6",
"VAL7",
"VAL8",
"VAL9",
"VAL10",
"VAL11",
"VAL12",
"VAL13",
"VAL14",
"VAL15",
"VAL16" ]),
table_name="#LSTM_PREDICT",
force=True)
from hana_ml.algorithms.pal.tsa import lstm
model = lstm.LSTM(gru='lstm',
                  bidirectional=False,
                  time_dim=16,
                  max_iter=1000,
                  learning_rate=0.01,
                  batch_size=32,
                  hidden_dim=128,
                  num_layers=1,
                  interval=1,
                  stateful=False,
                  optimizer_type='Adam')
model.fit(lstm_data)
res = model.predict(lstm_predict)
res.head(2).collect()
```
## SHAPLEY Explainer in Unified Classification
Diabetes data.
```
from data_load_utils import DataSets, Settings
from hana_ml.algorithms.pal.model_selection import GridSearchCV
from hana_ml.algorithms.pal.unified_classification import UnifiedClassification
Settings.load_config("../../config/e2edata.ini")
full_tbl, train_tbl, test_tbl, _ = DataSets.load_diabetes_data(connection_context)
diabetes_train = connection_context.table(train_tbl)
diabetes_test = connection_context.table(test_tbl)
uc_hgbdt = UnifiedClassification('HybridGradientBoostingTree')
gscv = GridSearchCV(estimator=uc_hgbdt,
param_grid={'learning_rate': [0.1, 0.4, 0.7, 1],
'n_estimators': [4, 6, 8, 10],
'split_threshold': [0.1, 0.4, 0.7, 1]},
train_control=dict(fold_num=5,
resampling_method='cv',
random_state=1,
ref_metric=['auc']),
scoring='error_rate')
gscv.fit(data=diabetes_train, key= 'ID',
label='CLASS',
partition_method='stratified',
partition_random_state=1,
stratified_column='CLASS',
build_report=True)
features = diabetes_train.columns
features.remove('CLASS')
features.remove('ID')
pred_res = gscv.predict(diabetes_test, key='ID', features=features)
from hana_ml.visualizers.model_debriefing import TreeModelDebriefing
shapley_explainer = TreeModelDebriefing.shapley_explainer(pred_res, diabetes_test, key='ID', label='CLASS')
shapley_explainer.summary_plot()
```
## Unified Report (support model storage)
```
from hana_ml.model_storage import ModelStorage
model_storage = ModelStorage(connection_context=connection_context)
gscv.estimator.name = 'HGBT'
gscv.estimator.version = 1
model_storage.save_model(model=gscv.estimator)
from hana_ml.visualizers.unified_report import UnifiedReport
mymodel = model_storage.load_model('HGBT', 1)
UnifiedReport(mymodel).build().display()
UnifiedReport(diabetes_test).build().display()
```
## Text Mining Functions
Cloud version vs. on-premise version; data from the PAL example.
```
conn_onpremise = ConnectionContext(userkey="leiyiyao")
conn_cloud = ConnectionContext(userkey="raymondyao")
data = pd.DataFrame({"ID" : ['doc1', 'doc2', 'doc3', 'doc4', 'doc5', 'doc6'],
"CONTENT" : ['term1 term2 term2 term3 term3 term3',
'term2 term3 term3 term4 term4 term4',
'term3 term4 term4 term5 term5 term5',
'term3 term4 term4 term5 term5 term5 term5 term5 term5',
'term4 term6',
'term4 term6 term6 term6'],
"CATEGORY" : ['CATEGORY_1', 'CATEGORY_1', 'CATEGORY_2', 'CATEGORY_2', 'CATEGORY_3', 'CATEGORY_3']})
df_test1 = pd.DataFrame({"CONTENT":["term2 term2 term3 term3"]})
df_test2 = pd.DataFrame({"CONTENT":["term3"]})
df_test3 = pd.DataFrame({"CONTENT":["doc3"]})
df_test4 = pd.DataFrame({"CONTENT":["term3"]})
df_onpremise = create_dataframe_from_pandas(connection_context=conn_onpremise, pandas_df=data, table_name="TM_DEMO", force=True)
df_cloud = create_dataframe_from_pandas(connection_context=conn_cloud, pandas_df=data, table_name="TM_DEMO", force=True)
```
### TFIDF (cloud only)
```
from hana_ml.text.tm import tf_analysis
tfidf= tf_analysis(df_cloud)
tfidf[0].head(3).collect()
```
### Text Classification
#### via reference data
```
from hana_ml.text.tm import text_classification
res, stat = text_classification(df_cloud.select(df_cloud.columns[0], df_cloud.columns[1]), df_cloud)
res.head(1).collect()
res = text_classification(df_onpremise.select(df_onpremise.columns[0], df_onpremise.columns[1]), df_onpremise)
res.head(1).collect()
```
#### via calculated TFIDF (cloud only)
```
res, stat = text_classification(df_cloud.select(df_cloud.columns[0], df_cloud.columns[1]), tfidf)
res.head(1).collect()
from hana_ml.text.tm import get_related_doc, get_related_term, get_relevant_doc, get_relevant_term, get_suggested_term
df_test1_cloud = create_dataframe_from_pandas(connection_context=conn_cloud,
pandas_df=df_test1,
table_name="#TM_DATA1",
force=True)
df_test2_cloud = create_dataframe_from_pandas(connection_context=conn_cloud,
pandas_df=df_test2,
table_name="#TM_DATA2",
force=True)
df_test3_cloud = create_dataframe_from_pandas(connection_context=conn_cloud,
pandas_df=df_test3,
table_name="#TM_DATA3",
force=True)
df_test4_cloud = create_dataframe_from_pandas(connection_context=conn_cloud,
pandas_df=df_test4,
table_name="#TM_DATA4",
force=True)
df_test1_onpremise = create_dataframe_from_pandas(connection_context=conn_onpremise,
pandas_df=df_test1,
table_name="TM_DATA1",
force=True)
df_test2_onpremise = create_dataframe_from_pandas(connection_context=conn_onpremise,
pandas_df=df_test2,
table_name="TM_DATA2",
force=True)
df_test3_onpremise = create_dataframe_from_pandas(connection_context=conn_onpremise,
pandas_df=df_test3,
table_name="TM_DATA3",
force=True)
df_test4_onpremise = create_dataframe_from_pandas(connection_context=conn_onpremise,
pandas_df=df_test4,
table_name="TM_DATA4",
force=True)
```
### get related doc
```
get_related_doc(df_test1_cloud, tfidf).collect()
grd_onpremise = get_related_doc(df_test1_onpremise, df_onpremise)
print(grd_onpremise.select_statement)
grd_onpremise.collect()
```
### get related term
```
get_related_term(df_test2_cloud, df_cloud).collect()
grt_onpremise = get_related_term(df_test2_onpremise, df_onpremise)
print(grt_onpremise.select_statement)
grt_onpremise.collect()
```
### get relevant doc
```
get_relevant_doc(df_test2_cloud, df_cloud).collect()
grvd_onpremise = get_relevant_doc(pred_data=df_test2_onpremise, ref_data=df_onpremise, top=4)
print(grvd_onpremise.select_statement)
grvd_onpremise.collect()
```
### get relevant term
```
get_relevant_term(df_test4_cloud, df_cloud).collect()
grvt_onpremise = get_relevant_term(df_test4_onpremise, df_onpremise)
print(grvt_onpremise.select_statement)
grvt_onpremise.collect()
```
### get suggested term
```
get_suggested_term(df_test4_cloud, df_cloud).collect()
gst_onpremise = get_suggested_term(df_test4_onpremise, df_onpremise)
print(gst_onpremise.select_statement)
gst_onpremise.collect()
```
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Hyperparameter Tuning with the HParams Dashboard
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tensorboard/hyperparameter_tuning_with_hparams"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorboard/blob/master/docs/hyperparameter_tuning_with_hparams.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/tensorboard/blob/master/docs/hyperparameter_tuning_with_hparams.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
When building machine learning models, you need to choose various [hyperparameters](https://en.wikipedia.org/wiki/Hyperparameter_(machine_learning)), such as the dropout rate in a layer or the learning rate. These decisions impact model metrics, such as accuracy. Therefore, an important step in the machine learning workflow is to identify the best hyperparameters for your problem, which often involves experimentation. This process is known as "Hyperparameter Optimization" or "Hyperparameter Tuning".
The HParams dashboard in TensorBoard provides several tools to help with this process of identifying the best experiment or most promising sets of hyperparameters.
This tutorial will focus on the following steps:
1. Experiment setup and HParams summary
2. Adapt TensorFlow runs to log hyperparameters and metrics
3. Start runs and log them all under one parent directory
4. Visualize the results in TensorBoard's HParams dashboard
Note: The HParams summary APIs and dashboard UI are in a preview stage and will change over time.
Start by installing TF 2.0 and loading the TensorBoard notebook extension:
```
!pip install -q tf-nightly-2.0-preview
# Load the TensorBoard notebook extension
%load_ext tensorboard
# Clear any logs from previous runs
!rm -rf ./logs/
```
Import TensorFlow and the TensorBoard HParams plugin:
```
import tensorflow as tf
from tensorboard.plugins.hparams import api as hp
```
Download the [FashionMNIST](https://github.com/zalandoresearch/fashion-mnist) dataset and scale it:
```
fashion_mnist = tf.keras.datasets.fashion_mnist
(x_train, y_train),(x_test, y_test) = fashion_mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
```
## 1. Experiment setup and the HParams experiment summary
Experiment with three hyperparameters in the model:
1. Number of units in the first dense layer
2. Dropout rate in the dropout layer
3. Optimizer
List the values to try, and log an experiment configuration to TensorBoard. This step is optional: you can provide domain information to enable more precise filtering of hyperparameters in the UI, and you can specify which metrics should be displayed.
```
HP_NUM_UNITS = hp.HParam('num_units', hp.Discrete([16, 32]))
HP_DROPOUT = hp.HParam('dropout', hp.RealInterval(0.1, 0.2))
HP_OPTIMIZER = hp.HParam('optimizer', hp.Discrete(['adam', 'sgd']))
METRIC_ACCURACY = 'accuracy'
with tf.summary.create_file_writer('logs/hparam_tuning').as_default():
hp.hparams_config(
hparams=[HP_NUM_UNITS, HP_DROPOUT, HP_OPTIMIZER],
metrics=[hp.Metric(METRIC_ACCURACY, display_name='Accuracy')],
)
```
If you choose to skip this step, you can use a string literal wherever you would otherwise use an `HParam` value: e.g., `hparams['dropout']` instead of `hparams[HP_DROPOUT]`.
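For instance, a run configuration built with plain string keys would look like this (the values here are illustrative):

```python
# Using plain string keys instead of HParam objects (values are illustrative).
hparams = {
    'num_units': 16,
    'dropout': 0.1,
    'optimizer': 'adam',
}
# Training code then reads hparams['dropout'] instead of hparams[HP_DROPOUT].
print(hparams['dropout'])  # -> 0.1
```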
## 2. Adapt TensorFlow runs to log hyperparameters and metrics
The model will be quite simple: two dense layers with a dropout layer between them. The training code will look familiar, although the hyperparameters are no longer hardcoded. Instead, the hyperparameters are provided in an `hparams` dictionary and used throughout the training function:
```
def train_test_model(hparams):
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(hparams[HP_NUM_UNITS], activation=tf.nn.relu),
tf.keras.layers.Dropout(hparams[HP_DROPOUT]),
tf.keras.layers.Dense(10, activation=tf.nn.softmax),
])
model.compile(
optimizer=hparams[HP_OPTIMIZER],
loss='sparse_categorical_crossentropy',
metrics=['accuracy'],
)
model.fit(x_train, y_train, epochs=1) # Run with 1 epoch to speed things up for demo purposes
_, accuracy = model.evaluate(x_test, y_test)
return accuracy
```
For each run, log an hparams summary with the hyperparameters and final accuracy:
```
def run(run_dir, hparams):
with tf.summary.create_file_writer(run_dir).as_default():
hp.hparams(hparams) # record the values used in this trial
accuracy = train_test_model(hparams)
tf.summary.scalar(METRIC_ACCURACY, accuracy, step=1)
```
When training Keras models, you can use callbacks instead of writing these directly:
```python
model.fit(
...,
callbacks=[
tf.keras.callbacks.TensorBoard(logdir), # log metrics
hp.KerasCallback(logdir, hparams), # log hparams
],
)
```
## 3. Start runs and log them all under one parent directory
You can now try multiple experiments, training each one with a different set of hyperparameters.
For simplicity, use a grid search: try all combinations of the discrete parameters and just the lower and upper bounds of the real-valued parameter. For more complex scenarios, it might be more effective to choose each hyperparameter value randomly (this is called a random search). There are more advanced methods that can be used.
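For contrast, a random search over the same domains can be sketched in a few lines of plain Python (this is illustrative and not part of the tutorial's code):

```python
import random

# Random search sketch: instead of enumerating the full grid, draw each
# hyperparameter independently for a fixed budget of trials.
random.seed(0)
num_trials = 4
for trial in range(num_trials):
    hparams = {
        'num_units': random.choice([16, 32]),         # HP_NUM_UNITS domain
        'dropout': random.uniform(0.1, 0.2),          # HP_DROPOUT interval
        'optimizer': random.choice(['adam', 'sgd']),  # HP_OPTIMIZER domain
    }
    print('trial-%d' % trial, hparams)
```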
Run a few experiments, which will take a few minutes:
```
session_num = 0
for num_units in HP_NUM_UNITS.domain.values:
for dropout_rate in (HP_DROPOUT.domain.min_value, HP_DROPOUT.domain.max_value):
for optimizer in HP_OPTIMIZER.domain.values:
hparams = {
HP_NUM_UNITS: num_units,
HP_DROPOUT: dropout_rate,
HP_OPTIMIZER: optimizer,
}
run_name = "run-%d" % session_num
print('--- Starting trial: %s' % run_name)
print({h.name: hparams[h] for h in hparams})
run('logs/hparam_tuning/' + run_name, hparams)
session_num += 1
```
## 4. Visualize the results in TensorBoard's HParams plugin
The HParams dashboard can now be opened. Start TensorBoard and click on "HParams" at the top.
```
%tensorboard --logdir logs/hparam_tuning
```
<img class="tfo-display-only-on-site" src="images/hparams_table.png?raw=1"/>
The left pane of the dashboard provides filtering capabilities that are active across all the views in the HParams dashboard:
- Filter which hyperparameters/metrics are shown in the dashboard
- Filter which hyperparameter/metrics values are shown in the dashboard
- Filter on run status (running, success, ...)
- Sort by hyperparameter/metric in the table view
- Number of session groups to show (useful for performance when there are many experiments)
The HParams dashboard has three different views, with various useful information:
* The **Table View** lists the runs, their hyperparameters, and their metrics.
* The **Parallel Coordinates View** shows each run as a line going through an axis for each hyperparameter and metric. Click and drag the mouse on any axis to mark a region, which will highlight only the runs that pass through it. This can be useful for identifying which groups of hyperparameters are most important. The axes themselves can be reordered by dragging them.
* The **Scatter Plot View** shows plots comparing each hyperparameter/metric with each metric. This can help identify correlations. Click and drag to select a region in a specific plot and highlight those sessions across the other plots.
A table row, a parallel coordinates line, and a scatter plot marker can be clicked to see a plot of the metrics as a function of training steps for that session (although in this tutorial only one step is used for each run).
To further explore the capabilities of the HParams dashboard, download a set of pregenerated logs with more experiments:
```
%%bash
wget -q 'https://storage.googleapis.com/download.tensorflow.org/tensorboard/hparams_demo_logs.zip'
unzip -q hparams_demo_logs.zip -d logs/hparam_demo
```
View these logs in TensorBoard:
```
%tensorboard --logdir logs/hparam_demo
```
<img class="tfo-display-only-on-site" src="images/hparams_parallel_coordinates.png?raw=1"/>
You can try out the different views in the HParams dashboard.
For example, by going to the parallel coordinates view and clicking and dragging on the accuracy axis, you can select the runs with the highest accuracy. As these runs pass through 'adam' in the optimizer axis, you can conclude that 'adam' performed better than 'sgd' on these experiments.
# Degree Planner Backend Algorithm
## TODO:
* In class Graph, use dict for nodes
* In class Graph, finish isCompleted()
* Figure out what's wrong with `__str__`
* Add all components to Node class (full name, description, etc.)
* In Graph, nodesToRemove(), return 2 lists: required, and possible to take
* Finish God
```
import copy
class Node(object):
name = ""
credits = None
prereq = []
required = False
completed = False
description = ""
fullName = ""
priorityNumber = 0 # Number of edges to that node
hardAdded = False
offeredF = True
offeredS = True
offeredSu = False
offeredEven = True
offeredOdd = True
def __init__(self, name, fullName, description, prereq, required, credits):
self.name = name
self.fullName = fullName
self.description = description
if prereq != None:
self.prereq = prereq
self.required = required
self.credits = credits
def __str__(self):
if len(self.prereq) == 0:
return str(self.name) + ' [] ' + str(self.required)
retVal = self.name
retVal += ' ['
for item in self.prereq:
retVal += (str(item.name) + ', ')
retVal = retVal[:-2]
retVal += '] '
retVal += str(self.required)
return retVal
def canComplete(self):
if self.completed == True:
return False
for pr in self.prereq:
if pr.completed == False:
return False
return True
class Graph(object):
nodes = {}
def __init__(self, nodesDict):
self.nodes = nodesDict
    def __str__(self):
        return str(self.nodes)  # __str__ must return a string, not a dict
def nodesToRemove(self):
retList = []
for key, val in self.nodes.items():
# print(key, val)
if(val.canComplete() and val.hardAdded == False):
retList.append(val)
retList.sort(key=lambda x: x.priorityNumber, reverse=True)
return retList
    def isCompleted(self):  # True when every course in the catalog is completed
        for node in self.nodes.values():  # nodes is a dict; iterate its values
            #print(str(node))
            if(not node.completed):
                return False
        return True
    def completeCourses(self, listOfCourses):
        # mark catalog courses completed; iterate over a copy so removal is safe
        for course in listOfCourses[:]:
            for catalogCourse in self.nodes.values():
                if course.name == catalogCourse.name:
                    catalogCourse.completed = True
                    listOfCourses.remove(course)
        # any remaining courses are not in the catalog; add them to it
        for course in listOfCourses:
            self.nodes[course.name] = course
class Semester:
"""
Each student will have a set of semester objects.
"""
# available_classes = {} # computed through Node class
classes_taken = {} # computed through Node class
name = None # Fall, Spring, Summer
year = None # '2018','2017'
isEven = False
credits_taken = 0
max_credits = 16
def __init__(self, sem_name, sem_year, credits):
self.name = sem_name
self.year = sem_year
self.max_credits = credits
self.isEven = True if int(self.year[-1])% 2 == 0 else False
self.reset()
def reset(self):
self.credits_taken = 0
self.classes_taken = {}
#print('T/M', self.credits_taken, self.max_credits)
#print('reset done on', self.name, self.year)
def canAdd(self, course):
"""
Compute if the course can be taken during the given semester using the Graph object
"""
#print(course.credits, self.credits_taken, self.max_credits)
if (course.credits + self.credits_taken) > self.max_credits:
print('Not enough credits')
return False
if course.offeredF and self.name == 'F':
if course.offeredEven and self.isEven: return True
elif course.offeredOdd and not self.isEven: return True
else: return False
elif course.offeredS and self.name == 'S':
if course.offeredEven and self.isEven: return True
elif course.offeredOdd and not self.isEven: return True
else: return False
elif course.offeredSu and self.name == 'Su':
if course.offeredEven and self.isEven: return True
elif course.offeredOdd and not self.isEven: return True
else: return False
else: return False
def add_course(self, course):
if self.canAdd(course):
print('Added: ', course)
self.classes_taken.update({course.name : course})
self.credits_taken += course.credits
course.completed = True
return True
return False
class God(object): #God algorithm!
catalogYear = None
allCourses = []
terms = []
graph = None
allSemesters = []
def __init__(self, catalogY): #Possibly reinit lists!
self.catalogYear = catalogY
self.letThereBeLight()
# print('Semseters: ', len(self.allSemesters))
self.darwinEvolution()
def darwinEvolution(self):
# get courses from a specific year catalog
self.allCourses = copy.deepcopy(testNodes) # TODO!!!
self.graph = Graph(self.allCourses)
for semester in self.allSemesters:
print('\n*********** Semester: ', semester.name, semester.year, semester.max_credits)
self.evolve(semester)
def letThereBeLight(self):
priorLearningSemester = Semester('PriorLearning', '0', 0)
priorLearningSemester.max_credits = 0
self.allSemesters.append(priorLearningSemester)
catalogY = self.catalogYear
for year in range(catalogY*3, catalogY*3+24):
#print(int(year/3))
semName = None
defaultCredits = 16
if year % 3 == 0:
semName = 'S'
elif year % 3 == 1:
semName = 'Su'
defaultCredits = 0
elif year % 3 == 2:
semName = 'F'
#print(int(year/3), semName, defaultCredits)
#print(str(year/3))
#print(defaultCredits)
tempSemester = Semester(semName, str(int(year/3)), defaultCredits)
#print(tempSemester.max_credits)
self.allSemesters.append(tempSemester)
# Choose high priority classes, then required, then possible to take
def evolve(self, semester):
#print('Evolve')
semester.reset()
tempList = self.graph.nodesToRemove()
#print('List size: ', len(tempList))
for course in tempList:
print('Sending: ', course)
addReturn = semester.add_course(course)
def changeCredits(self, semesterNum, credits):
self.allSemesters[semesterNum].max_credits = credits
self.darwinEvolution()
def semesterInfo(self):
for semester in self.allSemesters:
print('Semester: ', semester.name, semester.year, semester.credits_taken, '/', semester.max_credits)
# name, fullName, description, prereq, required, credits
MATH170 = Node('MATH170', 'CALCULUS I', 'Informal limits. Derivatives and antiderivatives.', None, True, 4)
CS121 = Node('CS121', 'COMPUTER SCIENCE I', 'Introduction to object-oriented problem solving and programming.', [MATH170], True, 4)
CS221 = Node('CS221', 'COMPUTER SCIENCE II', 'Object-oriented design including inheritance, polymorphism, and dynamic binding.', [CS121], True, 3)
CS321 = Node('CS321', 'DATA STRUCTURES', 'Sorting, searching, and order statistics.', [CS221], True, 3)
MATH189 = Node('MATH189', 'DISCRETE MATHEMATICS', 'Content drawn from propositional and predicate logic.', [MATH170], True, 4)
MATH170.completed = True # seed the graph with an already-completed course
testNodes = {'CS121':CS121, 'CS221':CS221, 'CS321':CS321, 'MATH170':MATH170, 'MATH189':MATH189}
catalogTest = Graph(testNodes)
catalogTest.nodes
god = God(2019)
god.changeCredits(3, 4)
god.semesterInfo()
```
| github_jupyter |
```
#!pip install librosa
"""import librosa as lib
import os
import pandas as pd
import pylab
import numpy as np
import librosa.display
import glob """
"""def convert(filename):
sig, fs = lib.load(filename)
# make pictures name
save_path = filename+'.jpg'
pylab.axis('off') # no axis
pylab.axes([0., 0., 1., 1.], frameon=False, xticks=[], yticks=[]) # Remove the white edge
S = librosa.feature.melspectrogram(y=sig, sr=fs)
lib.display.specshow(librosa.power_to_db(S, ref=np.max))
pylab.savefig(save_path, bbox_inches=None, pad_inches=0)
pylab.close()"""
# for i in range(11):
#     for filename in glob.glob(r'D:\Projects\urban_sound\UrbanSound8K\audio\fold'+str(i)+'\*.wav'):
#         convert(filename)
import pandas as pd
data = pd.read_csv(r'D:\Projects\urban_sound\UrbanSound8K\metadata\UrbanSound8K.csv')
print(data.head())
test = data.loc[data['fold'] == 1]
train = data.loc[data['fold'] != 1]
#cv2.imshow('in',cv2.imread(r'D:\Projects\urban_sound\UrbanSound8K\audio\fold7\99812-1-6-0.wav.jpg'))
from sklearn.preprocessing import LabelEncoder
import numpy as np
import os
import cv2
loc = r'D:\Projects\urban_sound\UrbanSound8K\audio'
img_lst = np.array(train.slice_file_name.tolist())
folds = np.array(train.fold)
print(img_lst.shape)
img_lst = [loc +'\\'+'fold'+str(fold)+'\\'+ s + '.jpg' for s,fold in zip(img_lst,folds)]
X = []
for s in img_lst:
X.append(cv2.resize(cv2.imread(s, cv2.IMREAD_COLOR),(150,150),interpolation=cv2.INTER_CUBIC))
#X = [cv2.imshow('img',s) for s in X]
y = np.array(train.classID.tolist())
import gc
del train
gc.collect()
X = np.array(X)
#print(X[0])
print(y)
X = X.astype('float')
X /= 255.0
X.shape
# construct the image generator for data augmentation
from keras.preprocessing.image import ImageDataGenerator
aug = ImageDataGenerator()
from sklearn.preprocessing import LabelEncoder
from keras.utils import np_utils
lb = LabelEncoder()
y = np_utils.to_categorical(lb.fit_transform(y))
import keras
#base = keras.applications.nasnet.NASNetLarge(input_shape=None, include_top=False, weights=None, input_tensor=None, pooling=None, classes=10)
base = keras.applications.inception_resnet_v2.InceptionResNetV2(include_top=False, weights='imagenet', input_tensor=None, input_shape=None, pooling=None, classes=10)
test_lst = np.array(test.slice_file_name.tolist())
folds = np.array(test.fold)
test_lst = [loc +'\\'+'fold'+str(fold)+'\\'+ s + '.jpg' for s,fold in zip(test_lst,folds)]
#print(test_lst)
test_X = []
for s in test_lst:
test_X.append(cv2.resize(cv2.imread(s, cv2.IMREAD_COLOR),(150,150),interpolation=cv2.INTER_CUBIC))
#X = [cv2.imshow('img',s) for s in X]
test_X = np.array(test_X)
test_X = test_X.astype('float')
test_X /= 255.0
test_y = np.array(test.classID.tolist())
test_y = np_utils.to_categorical(lb.fit_transform(test_y))
print(test_X.shape)
from keras.preprocessing import image
from keras.models import Model
from keras.layers import Dense, GlobalAveragePooling2D
from keras import backend as K
from keras.optimizers import SGD
BS = 48
EPOCHS = 25
x = base.output
x = GlobalAveragePooling2D()(x)
x = Dense(1024, activation='relu')(x)
predictions = Dense(10, activation='softmax')(x)
model = Model(inputs=base.input, outputs=predictions)
for layer in base.layers:
layer.trainable = True
model.compile(optimizer=SGD(lr=0.01, momentum=0.9), loss='categorical_crossentropy', metrics=['categorical_accuracy','top_k_categorical_accuracy'])
model.fit_generator(aug.flow(X, y, batch_size=BS),
steps_per_epoch=len(X)// BS,
epochs=EPOCHS, verbose=1)
model.save(r'D:\Projects\urban_sound\model\model_1.h5')
model.evaluate(x=test_X, y=test_y, batch_size=None, verbose=1)
```
| github_jupyter |
```
import sqlite3
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import plotly.graph_objs as go
from collections import Counter
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score, confusion_matrix, roc_curve, auc
from sklearn import metrics
from sklearn.feature_extraction.text import TfidfTransformer, TfidfVectorizer, CountVectorizer
from sklearn.decomposition import TruncatedSVD
from nltk.stem.porter import PorterStemmer
```
# Loading the Data and Sorting (Time-wise)
```
final_data = pd.read_csv("final.csv")
final_data = final_data.drop(["Text"], axis = 1)
final_data = final_data.drop(final_data.columns[0], axis = 1)
labels = final_data.Score
final_data = final_data.sort_values("Time")
final_data.shape
```
# Train/Test Split (70-30)
```
n = final_data.shape[0]
train_size = 0.7
train_set = final_data.iloc[:int(n*train_size)]
test_set = final_data.iloc[int(n*train_size):]
X_train = train_set.CleanedText
y_train = train_set.Score
X_test = test_set.CleanedText
y_test = test_set.Score
# Feature Importance(WordCloud Visualization)
import matplotlib as mpl
from wordcloud import WordCloud, STOPWORDS
stopwords = set(STOPWORDS)
#mpl.rcParams['figure.figsize']=(8.0,6.0) #(6.0,4.0)
mpl.rcParams['font.size']=12 #10
mpl.rcParams['savefig.dpi']=100 #72
mpl.rcParams['figure.subplot.bottom']=.1
def show_wordcloud(data, title = None):
wordcloud = WordCloud(
background_color='white',
stopwords=stopwords,
max_words=200,
max_font_size=40,
scale=3,
random_state=1 # chosen at random by flipping a coin; it was heads
).generate(str(data))
fig = plt.figure(1, figsize=(8, 8))
plt.axis('off')
if title:
fig.suptitle(title, fontsize=20)
fig.subplots_adjust(top=2.3)
plt.imshow(wordcloud)
plt.show()
show_wordcloud(final_data["CleanedText"])
show_wordcloud(final_data[final_data.Score == "negative"]["CleanedText"], title = "Negative words")
show_wordcloud(final_data[final_data.Score == "positive"]["CleanedText"], title = "Positive words")
```
# Bag of words Vectorization
```
count_vect = CountVectorizer() #in scikit-learn
X_train1 = count_vect.fit_transform(X_train)
X_test1 = count_vect.transform(X_test)
#Standardization
from sklearn.preprocessing import StandardScaler
sc= StandardScaler(with_mean=False)
X_train1 = sc.fit_transform(X_train1)
X_test1 = sc.transform(X_test1)
```
# Applying Naive Bayes
```
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import TimeSeriesSplit
import pylab as pl
al = [0.00001,0.0001,0.001,0.01,0.1,1,10,100,1000,10000]
# empty list that will hold cv scores
cv_scores = []
my_cv = [(train,test) for train, test in TimeSeriesSplit(n_splits=10).split(X_train1)]
# perform 10-fold cross validation
for a in al:
nb = MultinomialNB(alpha=a)
scores = cross_val_score(nb, X_train1, y_train, cv = my_cv, scoring='accuracy')
cv_scores.append(scores.mean())
# changing to misclassification error
MSE = [1 - x for x in cv_scores]
# determining best alpha
optimal_a = MSE.index(min(MSE))
n = al[optimal_a]
print('\nThe optimal hyperparameter alpha is %f.' % n)
# plot misclassification error vs alpha
plt.plot(al, MSE)
for xy in zip(al, np.round(MSE,3)):
plt.annotate('(%s, %s)' % xy, xy=xy, textcoords='data')
plt.xlabel('Hyperparameter-alpha ')
plt.ylabel('Misclassification Error')
plt.show()
print("the misclassification error for each alpha value is : ", (np.round(MSE,3)))
nb = MultinomialNB(alpha = 10000)
nb.fit(X_train1,y_train)
pred = nb.predict(X_test1)
```
# Generating Confusion matrix and Classification report
```
import itertools
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
acc = accuracy_score(y_test, pred, normalize=True) * float(100)
x = nb.predict(X_train1)
tr_acc = accuracy_score(y_train, x, normalize=True) * float(100)
print('\n****Train accuracy is {:.2f}'.format(tr_acc))
print('\n****Test accuracy is {:.2f}'.format(acc))
print(confusion_matrix(y_test, pred))
print("")
target_names = ["positive", "negative"]
print(classification_report(y_test, pred, target_names=target_names))
# Tf-idf Vectorization
#TF-IDF
tf_idf_vect = TfidfVectorizer(ngram_range=(1,2))
X_train2 = tf_idf_vect.fit_transform(X_train)
X_test2 = tf_idf_vect.transform(X_test)
#Standardization
from sklearn.preprocessing import StandardScaler
sc= StandardScaler(with_mean=False)
X_train2 = sc.fit_transform(X_train2)
X_test2 = sc.transform(X_test2)
```
# Applying Naive Bayes
```
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import TimeSeriesSplit
import pylab as pl
al = [0.00001,0.0001,0.001,0.01,0.1,1,10,100,1000]
# empty list that will hold cv scores
cv_scores = []
my_cv = [(train,test) for train, test in TimeSeriesSplit(n_splits=10).split(X_train2)]
# perform 10-fold cross validation
for a in al:
nb = MultinomialNB(alpha=a)
scores = cross_val_score(nb, X_train2, y_train, cv = my_cv, scoring='accuracy')
cv_scores.append(scores.mean())
# changing to misclassification error
MSE = [1 - x for x in cv_scores]
# determining best alpha
optimal_a = MSE.index(min(MSE))
n = al[optimal_a]
print('\nThe optimal hyperparameter alpha is %f.' % n)
# plot misclassification error vs alpha
plt.plot(al, MSE)
for xy in zip(al, np.round(MSE,3)):
plt.annotate('(%s, %s)' % xy, xy=xy, textcoords='data')
plt.xlabel('Hyperparameter-alpha ')
plt.ylabel('Misclassification Error')
plt.show()
print("the misclassification error for each alpha value is : ", (np.round(MSE,3)))
nb = MultinomialNB(alpha = 0.00001)
nb.fit(X_train2,y_train)
pred = nb.predict(X_test2)
```
# Generating Confusion matrix and Classification report
```
import itertools
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
acc = accuracy_score(y_test, pred, normalize=True) * float(100)
x = nb.predict(X_train2)
tr_acc = accuracy_score(y_train, x, normalize=True) * float(100)
print('\n****Train accuracy is {:.2f}'.format(tr_acc))
print('\n****Test accuracy is {:.2f}'.format(acc))
print(confusion_matrix(y_test, pred))
print("")
target_names = ["positive", "negative"]
print(classification_report(y_test, pred, target_names=target_names))
```
# Conclusion
#### For 'Bag of Words'
- Train accuracy is 85.46%
- Test accuracy is 82.46%
#### Confusion Matrix:

    [[  249 18832]
     [  326 89845]]
| | Precision | Recall | F1-score | Support |
|---|---|---|---|---|
| positive | 0.43 | 0.01 | 0.03 | 19081 |
| negative | 0.83 | 1.00 | 0.90 | 90171 |
| avg/total | 0.76 | 0.82 | 0.75 | 109252 |
#### For 'Tf-idf'
- Train accuracy is 99.79%
- Test accuracy is 82.22%
#### Confusion Matrix:

    [[ 4264 14817]
     [ 4605 85566]]
| | Precision | Recall | F1-score | Support |
|---|---|---|---|---|
| positive | 0.48 | 0.22 | 0.31 | 19081 |
| negative | 0.85 | 0.95 | 0.90 | 90171 |
| avg/total | 0.79 | 0.82 | 0.79 | 109252 |
## Important Words
### Positive :
- 'product', 'use', 'great', 'movie'
### Negative :
- 'product', 'disappoint', 'good', 'taste'
| github_jupyter |
# OpenVaccine: COVID-19 mRNA Vaccine Degradation Prediction
In this [Kaggle competition](https://www.kaggle.com/c/stanford-covid-vaccine/overview) we try to develop models and design rules for RNA degradation. As the overview of the competition states:
>mRNA vaccines have taken the lead as the fastest vaccine candidates for COVID-19, but currently, they face key potential limitations. One of the biggest challenges right now is how to design super stable messenger RNA molecules (mRNA). Conventional vaccines (like your seasonal flu shots) are packaged in disposable syringes and shipped under refrigeration around the world, but that is not currently possible for mRNA vaccines.
>
>Researchers have observed that RNA molecules have the tendency to spontaneously degrade. This is a serious limitation--a single cut can render the mRNA vaccine useless. Currently, little is known on the details of where in the backbone of a given RNA is most prone to being affected. Without this knowledge, current mRNA vaccines against COVID-19 must be prepared and shipped under intense refrigeration, and are unlikely to reach more than a tiny fraction of human beings on the planet unless they can be stabilized.
<img src="images/banner.png" width="1000" style="margin-left: auto; margin-right: auto;">
The model should predict likely degradation rates at each base of an RNA molecule. The training data set is comprised of over 3000 RNA molecules and their degradation rates at each position.
# Install necessary packages
We can install the necessary packages by either running `pip install --user <package_name>` for each one, or including everything in a `requirements.txt` file and running `pip install --user -r requirements.txt`. We have put the dependencies in a `requirements.txt` file, so we will use the latter method.
> NOTE: Do not forget to use the `--user` argument. It is necessary if you want to use Kale to transform this notebook into a Kubeflow pipeline
```
!pip install --user -r requirements.txt
```
# Imports
In this section we import the packages we need for this example. Make it a habit to gather your imports in a single place. It will make your life easier if you are going to transform this notebook into a Kubeflow pipeline using Kale.
```
import json
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow.keras.preprocessing.text import Tokenizer
```
# Project hyper-parameters
In this cell, we define the different hyper-parameters. Defining them in one place makes it easier to experiment with their values and also facilitates the execution of HP Tuning experiments using Kale and Katib.
```
# Hyper-parameters
LR = 1e-3
EPOCHS = 10
BATCH_SIZE = 64
EMBED_DIM = 100
HIDDEN_DIM = 128
DROPOUT = .5
SP_DROPOUT = .3
TRAIN_SEQUENCE_LENGTH = 107
```
Set random seed for reproducibility and ignore warning messages.
```
tf.random.set_seed(42)
np.random.seed(42)
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.INFO)
```
# Load and preprocess data
In this section, we load and process the dataset to get it in a ready-to-use form by the model. First, let us load and analyze the data.
## Load data
The data are in `json` format, thus, we use the handy `read_json` pandas method. There is one train data set and two test sets (one public and one private).
```
train_df = pd.read_json("data/train.json", lines=True)
test_df = pd.read_json("data/test.json", lines=True)
```
We also load the `sample_submission.csv` file, which will prove handy when we will be creating our submission to the competition.
```
sample_submission_df = pd.read_csv("data/sample_submission.csv")
```
Let us now explore the data, their dimensions, and what each column means. To this end, we use the pandas `head` method to visualize a small sample (five rows by default) of our data set.
```
train_df.head()
```
We see a lot of strange entries, so, let us try to see what they are:
* `sequence`: a 107-character string in Train and Public Test (130 in Private Test) that describes the RNA sequence, a combination of A, G, U, and C for each sample.
* `structure`: a 107-character string in Train and Public Test (130 in Private Test), a combination of `(`, `)`, and `.` characters that describes whether a base is estimated to be paired or unpaired. Paired bases are denoted by opening and closing parentheses (e.g. (....) means that base 0 is paired to base 5, and bases 1-4 are unpaired).
* `predicted_loop_type`: a 107-character string that describes the structural context (also referred to as 'loop type') of each character in `sequence`. Loop types are assigned by bpRNA from the Vienna RNAfold 2 structure. From the bpRNA documentation: `S`: paired "Stem", `M`: Multiloop, `I`: Internal loop, `B`: Bulge, `H`: Hairpin loop, `E`: dangling End, `X`: eXternal loop.
Then, we have `signal_to_noise`, which is a quality-control feature. It records the measurements relative to their errors; the higher the value, the more confident the measurements are.
The `*_error_*` columns record the errors in the experimental values obtained in the corresponding `reactivity` and `deg_*` columns.
The last five columns (i.e., `reactivity` and `deg_*`) are our dependent variables, our targets. Thus, for every base in the molecule we should predict five different values.
These are the main columns we care about. For more details, visit the competition [info](https://www.kaggle.com/c/stanford-covid-vaccine/data).
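To make the dot-bracket convention above concrete, here is a small, self-contained sketch (written for illustration, not part of the competition code) that recovers the paired base indices from a `structure` string using a stack:

```python
def dot_bracket_pairs(structure):
    """Return (i, j) index pairs for a dot-bracket structure string."""
    stack, pairs = [], []
    for i, ch in enumerate(structure):
        if ch == "(":
            stack.append(i)       # remember the opening base
        elif ch == ")":
            pairs.append((stack.pop(), i))  # close the most recent open base
    return pairs

print(dot_bracket_pairs("(....)"))  # [(0, 5)]
print(dot_bracket_pairs("((..))"))  # [(1, 4), (0, 5)]
```

For `"(....)"` this yields the single pair `(0, 5)`, matching the example in the column description.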
## Preprocess data
We are now ready to preprocess the data set. First, we define the symbols that encode certain features (e.g. the base symbol or the structure), the features and the target variables.
```
symbols = "().ACGUBEHIMSX"
feat_cols = ["sequence", "structure", "predicted_loop_type"]
target_cols = ["reactivity", "deg_Mg_pH10", "deg_Mg_50C", "deg_pH10", "deg_50C"]
error_cols = ["reactivity_error", "deg_error_Mg_pH10", "deg_error_Mg_50C", "deg_error_pH10", "deg_error_50C"]
```
In order to encode values like strings or characters and feed them to the neural network, we need to tokenize them. The `Tokenizer` class will assign a number to each character.
```
tokenizer = Tokenizer(char_level=True, filters="")
tokenizer.fit_on_texts(symbols)
```
Moreover, the tokenizer keeps a dictionary, `word_index`, from which we can get the number of elements in our vocabulary. In this case, we only have a few elements, but if our dataset was a whole book, that function would be handy.
> NOTE: We should add `1` to the length of the `word_index` dictionary to get the correct number of elements.
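As a rough illustration of what char-level tokenization does (the exact ids Keras assigns depend on character frequency, so this dict-based sketch simply fixes them by position instead):

```python
# Char-level tokenization in miniature: map each distinct character to
# an integer id starting from 1. Index 0 is reserved (e.g. for padding),
# which is why vocab_size needs the "+ 1".
symbols = "().ACGUBEHIMSX"
word_index = {ch: i + 1 for i, ch in enumerate(symbols)}

def to_sequence(text):
    return [word_index[ch] for ch in text]

vocab_size = len(word_index) + 1
print(to_sequence("GGAA"))  # [6, 6, 4, 4] with this positional mapping
print(vocab_size)           # 15
```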
```
# get the number of elements in the vocabulary
vocab_size = len(tokenizer.word_index) + 1
```
We are now ready to process our features. First, we transform each character sequence (i.e., `sequence`, `structure`, `predicted_loop_type`) into number sequences and concatenate them together. The resulting shape should be `(num_examples, 107, 3)`.
> Now, we should do this in a way that would permit us to use this processing function with KFServing. Thus, since Numpy arrays are not JSON serializable, this function should accept and return pure Python lists.
```
def process_features(example):
sequence_sentences = example[0]
structure_sentences = example[1]
loop_sentences = example[2]
# transform character sequences into number sequences
sequence_tokens = np.array(
tokenizer.texts_to_sequences(sequence_sentences)
)
structure_tokens = np.array(
tokenizer.texts_to_sequences(structure_sentences)
)
loop_tokens = np.array(
tokenizer.texts_to_sequences(loop_sentences)
)
# concatenate the tokenized sequences
sequences = np.stack(
(sequence_tokens, structure_tokens, loop_tokens),
axis=1
)
sequences = np.transpose(sequences, (2, 0, 1))
prepared = sequences.tolist()
return prepared[0]
```
In the same way we process the labels. We should just extract them and transform them into the correct shape. The resulting shape should be `(num_examples, 68, 5)`.
```
def process_labels(df):
df = df.copy()
labels = np.array(df[target_cols].values.tolist())
labels = np.transpose(labels, (0, 2, 1))
return labels
public_test_df = test_df.query("seq_length == 107")
private_test_df = test_df.query("seq_length == 130")
```
We are now ready to process the data set and make the features ready to be consumed by the model.
```
x_train = [process_features(row.tolist()) for _, row in train_df[feat_cols].iterrows()]
y_train = process_labels(train_df)
unprocessed_x_public_test = [row.tolist() for _, row in public_test_df[feat_cols].iterrows()]
unprocessed_x_private_test = [row.tolist() for _, row in private_test_df[feat_cols].iterrows()]
```
# Define and train the model
We are now ready to define our model. Since we are dealing with sequences, it makes sense to use RNNs. More specifically, we will use bidirectional Gated Recurrent Units (GRUs) and Long Short-Term Memory (LSTM) cells. The output layer should produce 5 numbers, so we can see this as a regression problem.
First let us define two helper functions for GRUs and LSTMs and then, define the body of the full model.
```
def gru_layer(hidden_dim, dropout):
return tf.keras.layers.Bidirectional(
tf.keras.layers.GRU(hidden_dim, dropout=dropout, return_sequences=True, kernel_initializer = 'orthogonal')
)
def lstm_layer(hidden_dim, dropout):
return tf.keras.layers.Bidirectional(
tf.keras.layers.LSTM(hidden_dim, dropout=dropout, return_sequences=True, kernel_initializer = 'orthogonal')
)
```
The model has an embedding layer. The embedding layer projects the tokenized categorical input into a high-dimensional latent space. For this example we treat the dimensionality of the embedding space as a hyper-parameter that we can use to fine-tune the model.
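Conceptually, an embedding layer is just a trainable lookup table: row `k` of its weight matrix is the vector for token id `k`. A minimal NumPy sketch (the sizes below are hypothetical, chosen only for illustration):

```python
import numpy as np

vocab_size, embed_dim = 15, 4
rng = np.random.default_rng(0)
embedding = rng.normal(size=(vocab_size, embed_dim))  # the "weights"

token_ids = np.array([[6, 6, 4, 4]])  # a batch with one 4-token sequence
vectors = embedding[token_ids]        # fancy indexing == table lookup
print(vectors.shape)                  # (1, 4, 4): batch, seq_len, embed_dim
```

During training, backpropagation updates the looked-up rows, which is what makes the projection "learned".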
```
def build_model(vocab_size, seq_length=int(TRAIN_SEQUENCE_LENGTH), pred_len=68,
embed_dim=int(EMBED_DIM),
hidden_dim=int(HIDDEN_DIM), dropout=float(DROPOUT), sp_dropout=float(SP_DROPOUT)):
inputs = tf.keras.layers.Input(shape=(seq_length, 3))
embed = tf.keras.layers.Embedding(input_dim=vocab_size, output_dim=embed_dim)(inputs)
reshaped = tf.reshape(
embed, shape=(-1, embed.shape[1], embed.shape[2] * embed.shape[3])
)
hidden = tf.keras.layers.SpatialDropout1D(sp_dropout)(reshaped)
hidden = gru_layer(hidden_dim, dropout)(hidden)
hidden = lstm_layer(hidden_dim, dropout)(hidden)
truncated = hidden[:, :pred_len]
out = tf.keras.layers.Dense(5, activation="linear")(truncated)
model = tf.keras.Model(inputs=inputs, outputs=out)
return model
model = build_model(vocab_size)
model.summary()
```
Submissions are scored using MCRMSE (mean columnwise root mean squared error):
<img src="images/mcrmse.png" width="250" style="margin-left: auto; margin-right: auto;">
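For readers who cannot render the image, the same metric written out (with $N_t$ scored target columns and $n$ scored positions) is:

$$\textrm{MCRMSE} = \frac{1}{N_t}\sum_{j=1}^{N_t}\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_{ij}-\hat{y}_{ij}\right)^{2}}$$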
Thus, we should code this metric and use it as our objective (loss) function.
```
class MeanColumnwiseRMSE(tf.keras.losses.Loss):
def __init__(self, name='MeanColumnwiseRMSE'):
super().__init__(name=name)
def call(self, y_true, y_pred):
colwise_mse = tf.reduce_mean(tf.square(y_true - y_pred), axis=1)
return tf.reduce_mean(tf.sqrt(colwise_mse), axis=1)
```
We are now ready to compile and fit the model.
```
model.compile(tf.optimizers.Adam(learning_rate=float(LR)), loss=MeanColumnwiseRMSE())
history = model.fit(np.array(x_train), np.array(y_train),
validation_split=.1, batch_size=int(BATCH_SIZE), epochs=int(EPOCHS))
validation_loss = history.history.get("val_loss")[-1]  # validation loss of the final epoch
```
## Evaluate the model
Finally, we are ready to evaluate the model using the two test sets.
```
model_public = build_model(vocab_size, seq_length=107, pred_len=107)
model_private = build_model(vocab_size, seq_length=130, pred_len=130)
model_public.set_weights(model.get_weights())
model_private.set_weights(model.get_weights())
public_preds = model_public.predict(np.array([process_features(x) for x in unprocessed_x_public_test]))
private_preds = model_private.predict(np.array([process_features(x) for x in unprocessed_x_private_test]))
```
# Submission
Last but not least, we create our submission to the Kaggle competition. The submission is just a `csv` file with the specified columns.
```
preds_ls = []
for df, preds in [(public_test_df, public_preds), (private_test_df, private_preds)]:
for i, uid in enumerate(df.id):
single_pred = preds[i]
single_df = pd.DataFrame(single_pred, columns=target_cols)
single_df['id_seqpos'] = [f'{uid}_{x}' for x in range(single_df.shape[0])]
preds_ls.append(single_df)
preds_df = pd.concat(preds_ls)
preds_df.head()
submission = sample_submission_df[['id_seqpos']].merge(preds_df, on=['id_seqpos'])
submission.to_csv('submission.csv', index=False)
```
| github_jupyter |
## Dataset Directory Structure
    Parent_Directory (root)
    |
    |-----------Images (img_dir)
    |               |
    |               |------------------img1.jpg
    |               |------------------img2.jpg
    |               |------------------.........(and so on)
    |
    |
    |-----------train_labels.csv (anno_file)
## Annotation file format
| Id | Labels |
|---|---|
| img1.jpg | x1 y1 x2 y2 label1 x1 y1 x2 y2 label2 |

- Labels: xmin ymin xmax ymax label
- xmin, ymin - top-left corner of the bounding box
- xmax, ymax - bottom-right corner of the bounding box
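A small helper (written here for illustration, not taken from the training code below) that splits one `Labels` string into boxes and class names according to this format:

```python
def parse_label(label):
    """Split 'x1 y1 x2 y2 cls ...' into ([boxes], [class names])."""
    tmp = label.split(" ")
    boxes, classes = [], []
    for j in range(len(tmp) // 5):
        # each object occupies five tokens: four coordinates + one label
        x1, y1, x2, y2 = (int(v) for v in tmp[j*5 : j*5 + 4])
        boxes.append([x1, y1, x2, y2])
        classes.append(tmp[j*5 + 4])
    return boxes, classes

print(parse_label("10 20 110 220 kangaroo 5 5 50 60 kangaroo"))
# ([[10, 20, 110, 220], [5, 5, 50, 60]], ['kangaroo', 'kangaroo'])
```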
```
import os
import sys
import numpy as np
import pandas as pd
import cv2
import torch
from engine import train_one_epoch, evaluate
import utils
from PIL import Image, ImageFile
# allow PIL to load partially-corrupted images instead of raising
ImageFile.LOAD_TRUNCATED_IMAGES = True
# Step 1 - Data pre-prep
root = "multi_object/kangaroo/kangaroo-master/"; #var1
img_dir = "Images/"; #var2
anno_file = "train_labels.csv"; #var3
train_list = pd.read_csv(root + anno_file);
label_list = [];
for i in range(len(train_list)):
label = train_list["Label"][i];
tmp = label.split(" ");
for j in range(len(tmp)//5):
if(tmp[(j*5+4)] not in label_list):
label_list.append(tmp[(j*5+4)])
label_list = sorted(label_list)
# Step 3 - Data Loading
class CustomDatasetMultiObject(object):
def __init__(self, root, transforms):
self.root = root
self.transforms = transforms
# load all image files, sorting them to
# ensure that they are aligned
self.train_list = pd.read_csv(root + "/train_labels.csv");
self.label_list = self.get_labels();
self.num_classes = len(self.label_list) + 1;
def get_labels(self):
label_list = [];
for i in range(len(self.train_list)):
label = self.train_list["Label"][i];
tmp = label.split(" ");
for j in range(len(tmp)//5):
if(tmp[(j*5+4)] not in label_list):
label_list.append(tmp[(j*5+4)])
return sorted(label_list);
def __getitem__(self, idx):
# load the image and its annotations
img_name = self.train_list["ID"][idx];
label = self.train_list["Label"][idx];
img_path = os.path.join(self.root, "Images",img_name)
img = Image.open(img_path).convert("RGB")
w, h = img.size; # PIL's Image.size is (width, height)
tmp = label.split(" ");
boxes = [];
num_objs = 0;
obj_ids = [];
for j in range(len(tmp)//5):
x1 = int(tmp[(j*5+0)]);
y1 = int(tmp[(j*5+1)]);
x2 = int(tmp[(j*5+2)]);
y2 = int(tmp[(j*5+3)]);
label = tmp[(j*5+4)];
boxes.append([x1, y1, x2, y2]);
obj_ids.append(self.label_list.index(label)+1);
num_objs += 1;
obj_ids = np.array(obj_ids, dtype=np.int64);
#print(obj_ids)
# convert everything into a torch.Tensor
boxes = torch.as_tensor(boxes, dtype=torch.float32)
# there is only one class
labels = torch.as_tensor(obj_ids, dtype=torch.int64)
#print(labels)
image_id = torch.tensor([idx])
area = (boxes[:, 3] - boxes[:, 1]) * (boxes[:, 2] - boxes[:, 0])
# suppose all instances are not crowd
iscrowd = torch.zeros((num_objs,), dtype=torch.int64)
target = {}
target["boxes"] = boxes
target["labels"] = labels
target["image_id"] = image_id
target["area"] = area
target["iscrowd"] = iscrowd
if self.transforms is not None:
img, target = self.transforms(img, target)
return img, target
def __len__(self):
return len(self.train_list)
import transforms as T
def get_transform(train):
transforms = []
transforms.append(T.ToTensor())
if train:
transforms.append(T.RandomHorizontalFlip(0.5))
return T.Compose(transforms)
# train on the GPU or on the CPU, if a GPU is not available
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
# use our dataset and defined transformations
dataset = CustomDatasetMultiObject(root, get_transform(train=True))
dataset_test = CustomDatasetMultiObject(root, get_transform(train=False))
num_classes = dataset.num_classes;
# split the dataset in train and test set
indices = torch.randperm(len(dataset)).tolist()
dataset = torch.utils.data.Subset(dataset, indices[:-50])
dataset_test = torch.utils.data.Subset(dataset_test, indices[-50:])
# define training and validation data loaders
data_loader = torch.utils.data.DataLoader(
dataset, batch_size=1, shuffle=True, num_workers=4,
collate_fn=utils.collate_fn)
data_loader_test = torch.utils.data.DataLoader(
dataset_test, batch_size=1, shuffle=False, num_workers=4,
collate_fn=utils.collate_fn)
import torchvision
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator
# load a pre-trained model for classification and return
# only the features
backbone = torchvision.models.mobilenet_v2(pretrained=True).features
# FasterRCNN needs to know the number of
# output channels in a backbone. For mobilenet_v2, it's 1280
# so we need to add it here
backbone.out_channels = 1280
# let's make the RPN generate 5 x 3 anchors per spatial
# location, with 5 different sizes and 3 different aspect
# ratios. We have a Tuple[Tuple[int]] because each feature
# map could potentially have different sizes and
# aspect ratios
anchor_generator = AnchorGenerator(sizes=((32, 64, 128, 256, 512),),
aspect_ratios=((0.5, 1.0, 2.0),))
# let's define what are the feature maps that we will
# use to perform the region of interest cropping, as well as
# the size of the crop after rescaling.
# if your backbone returns a Tensor, featmap_names is expected to
# be [0]. More generally, the backbone should return an
# OrderedDict[Tensor], and in featmap_names you can choose which
# feature maps to use.
roi_pooler = torchvision.ops.MultiScaleRoIAlign(featmap_names=[0],
output_size=7,
sampling_ratio=2)
# put the pieces together inside a FasterRCNN model
model = FasterRCNN(backbone,
num_classes=num_classes,
rpn_anchor_generator=anchor_generator,
box_roi_pool=roi_pooler)
# get the model using our helper function
#model = get_model_instance_segmentation(num_classes)
# move model to the right device
model.to(device)
# construct an optimizer
params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(params, lr=0.005,
momentum=0.9, weight_decay=0.0005)
# and a learning rate scheduler
lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer,
step_size=3,
gamma=0.1)
# let's train it for 2 epochs
num_epochs = 2
for epoch in range(num_epochs):
# train for one epoch, printing every 10 iterations
train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq=10)
# update the learning rate
lr_scheduler.step()
# evaluate on the test dataset
evaluate(model, data_loader_test, device=device)
print("That's it!")
```
# How To: Provisioning Data Science Virtual Machine (DSVM)
__Notebook Version:__ 1.0<br>
__Python Version:__ Python 3.6<br>
__Platforms Supported:__<br>
- Azure Notebooks Free Compute
__Data Source Required:__<br>
- no
### Description
This sample notebook shows how to provision an Azure DSVM as an alternate computing resource for hosting Azure Notebooks.
Azure Notebooks provides Free Compute as the default computing resource, which is free of charge. However, if you want a more powerful computing environment and don't want to go through the Direct Compute route, which requires a JupyterHub installation on Linux machines, then the Data Science Virtual Machine (DSVM) becomes a viable choice.
You may refer to <a href='https://docs.microsoft.com/en-us/azure/notebooks/configure-manage-azure-notebooks-projects' target='_blank'>this article</a> for details. In a nutshell, you need to select a Linux VM with the Ubuntu flavor. And keep in mind that on an Azure DSVM, if you want to use Python 3.6, which is required by Azure Sentinel notebooks, you need to <font color=red> select Python 3.6 - AzureML.</font>
## Table of Contents
1. How to create a new DSVM
2. How to use DSVM
3. Things to know about using DSVM
## 1. How to create a new DSVM
0. First, please read <a href='https://docs.microsoft.com/en-us/azure/machine-learning/data-science-virtual-machine/dsvm-ubuntu-intro' target='_blank'>this article</a> for details
1. Go to Azure portal
2. Search for Data Science Virtual Machine under All Services<br>
<br>
3. Select DSVM for Linux (Ubuntu), read the introduction, and click the Create button. On the following page shown below, follow the instructions to complete the form. You need to use the same Azure subscription that you are using for your Azure Sentinel and Azure Log Analytics. And make sure you select Password and check 'Login with Azure Active Directory'.<br>
<br>
4. Once the DSVM is created, make sure you keep the SSH public key and password in a safe place.
5. If you want to remote into the VM using SSH, you can add inbound port rule for port 22.
## 2. How to use DSVM
1. Now that you have a DSVM, when you log in to https://notebooks.azure.com, you can see your DSVM in the drop-down list under Free Compute and Direct Compute.<br>
<br>
2. When you select the DSVM, it will ask you to validate your JIT credentials.<br>
<br>
3. Once you pick a notebook to run, you may encounter the following warning:<br>
<br>
As you can see, [Python 3.6 - AzureML] is the correct choice.
## 3. Things to know about using DSVM
1. The most important thing to know about Azure Notebooks on DSVM is that the Azure Notebooks project home directory is not mounted on the DSVM. So any references to Azure Notebooks folders / files will raise a file/folder-not-found exception. In other words, each ipynb notebook needs to be independent of other files.
2. There are workaround solutions:<br>
a. Data files can be stored on Azure Blob storage and mounted with <a href='https://github.com/Azure/azure-storage-fuse' target='_blank'>blobfuse</a><br>
b. Python files can be added to the notebook by using the Jupyter magic; you can find an example here: <a href='https://github.com/Microsoft/connect-petdetector/blob/master/setup.ipynb' target='_blank'>%%writefile</a><br>
c. Configuration files are a bit more complicated. Using our Azure Sentinel config.json as an example, it is generated when you import the Azure Sentinel Jupyter project from the GitHub repo through the Azure portal. The configuration JSON is an Azure Log Analytics workspace-specific file, so you clone one project per Log Analytics workspace. You can find the config.json file at the root of the project home directory. <a href='https://orion-zhaozp.notebooks.azure.com/j/notebooks/Notebooks/Get%20Start.ipynb' target='_blank'>Get Start.ipynb</a> section 1 demonstrates how to set the configuration settings manually.
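As an illustration of workaround (b): the `%%writefile` magic simply materializes a cell's contents as a file next to the notebook, which the notebook can then import. The sketch below (plain Python, with a hypothetical `helpers.py` module) shows the equivalent idea without the magic:

```python
from pathlib import Path
import importlib.util

# Equivalent of the %%writefile magic: write the helper code out as a
# file next to the notebook, so the notebook stays self-contained.
Path("helpers.py").write_text(
    "def greet(name):\n"
    "    return f'Hello, {name}!'\n"
)

# Load the freshly written module and use it.
spec = importlib.util.spec_from_file_location("helpers", "helpers.py")
helpers = importlib.util.module_from_spec(spec)
spec.loader.exec_module(helpers)
print(helpers.greet("DSVM"))
```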
<table style="float:left; border:none">
<tr style="border:none; background-color: #ffffff">
<td style="border:none">
<a href="http://bokeh.pydata.org/">
<img
src="assets/bokeh-transparent.png"
style="width:50px"
>
</a>
</td>
<td style="border:none">
<h1>Bokeh Tutorial</h1>
</td>
</tr>
</table>
<div style="float:right;"><h2>06. 关联和交互</h2></div>
```
from bokeh.io import output_notebook, show
from bokeh.plotting import figure
output_notebook()
```
In the previous chapter, we saw how to put multiple plots in a single layout. Now let's look at how to link different plots together, and how to link plots to widgets.
# Linked Interactions
Different Bokeh plots can be linked together. For example, the ranges of two (or more) plots can be linked, so that when one plot is panned (or zoomed, or otherwise has its range changed), the other plots update to stay in sync. It is also possible to link the selections of two plots, so that when items are selected on one plot, the corresponding items on the other plot are selected as well.
## Linked Panning
Linked panning (keeping the ranges of several plots in sync) is very easy to accomplish in Bokeh. You simply share the appropriate range objects between two (or more) plots. The example below shows how to link the ranges of these plots in three different ways:
```
from bokeh.layouts import gridplot
x = list(range(11))
y0, y1, y2 = x, [10-i for i in x], [abs(i-5) for i in x]
plot_options = dict(width=250, plot_height=250, tools='pan,wheel_zoom')
# create a new plot
s1 = figure(**plot_options)
s1.circle(x, y0, size=10, color="navy")
# create a new plot and share both ranges
s2 = figure(x_range=s1.x_range, y_range=s1.y_range, **plot_options)
s2.triangle(x, y1, size=10, color="firebrick")
# create a new plot and share only one range
s3 = figure(x_range=s1.x_range, **plot_options)
s3.square(x, y2, size=10, color="olive")
p = gridplot([[s1, s2, s3]])
# show the results
show(p)
# EXERCISE: create two plots in a gridplot, and link their ranges
```
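One possible solution to the exercise above is a sketch along these lines (reusing the imports already shown; sizing and styling options omitted for brevity): create two plots and pass the first plot's range objects to the second.

```python
from bokeh.layouts import gridplot
from bokeh.plotting import figure

x = list(range(11))
y0, y1 = x, [10 - i for i in x]

# The first plot owns the range objects.
p1 = figure(tools='pan,wheel_zoom')
p1.circle(x, y0, size=10, color="navy")

# The second plot shares both ranges, so panning or zooming
# either plot updates the other one as well.
p2 = figure(x_range=p1.x_range, y_range=p1.y_range, tools='pan,wheel_zoom')
p2.circle(x, y1, size=10, color="firebrick")

p = gridplot([[p1, p2]])
# show(p)  # displays the linked pair in the notebook
```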
## Linked brushing
Linked selection is accomplished in a similar way, by sharing a data source between plots. Note that normally ``bokeh.plotting`` and ``bokeh.charts`` create a default data source automatically. However, to share a data source, we must create it by hand and pass it in explicitly. The example below illustrates this:
```
from bokeh.models import ColumnDataSource
x = list(range(-20, 21))
y0, y1 = [abs(xx) for xx in x], [xx**2 for xx in x]
# create a column data source for the plots to share
source = ColumnDataSource(data=dict(x=x, y0=y0, y1=y1))
TOOLS = "box_select,lasso_select,help"
# create a new plot and add a renderer
left = figure(tools=TOOLS, width=300, height=300)
left.circle('x', 'y0', source=source)
# create another new plot and add a renderer
right = figure(tools=TOOLS, width=300, height=300)
right.circle('x', 'y1', source=source)
p = gridplot([[left, right]])
show(p)
# EXERCISE: create two plots in a gridplot, and link their data sources
```
# Hover Tools
Bokeh has a hover tool that displays additional information in a pop-up box when the user hovers over a particular glyph. Basic hover tool configuration amounts to providing a list of ``(name, format)`` tuples. For full details, see the [hovertool](http://bokeh.pydata.org/en/latest/docs/user_guide/tools.html#hovertool) section of the user guide.
The example below shows some basic usage of the hover tool, with the tooltip contents defined inline:
```
from bokeh.models import HoverTool
source = ColumnDataSource(
data=dict(
x=[1, 2, 3, 4, 5],
y=[2, 5, 8, 2, 7],
desc=['A', 'b', 'C', 'd', 'E'],
)
)
hover = HoverTool(
tooltips=[
("index", "$index"),
("(x,y)", "($x, $y)"),
("desc", "@desc"),
]
)
p = figure(plot_width=300, plot_height=300, tools=[hover], title="Mouse over the dots")
p.circle('x', 'y', size=20, source=source)
show(p)
```
# Widgets
Bokeh comes with a small basic set of widgets built in. Used together with the Bokeh server or ``CustomJS`` models, these widgets can add further interactive capabilities. You can find a complete list of widgets and example code in the [Adding Widgets](http://bokeh.pydata.org/en/latest/docs/user_guide/interaction.html#adding-widgets) section of the user guide.
To use widgets, include them in a layout just as you would a plot object:
```
from bokeh.layouts import widgetbox
from bokeh.models.widgets import Slider
slider = Slider(start=0, end=10, value=1, step=.1, title="foo")
show(widgetbox(slider))
# EXERCISE: create and show a Select widget
```
# CustomJS Callbacks(CustomJS回调)
```
from bokeh.models import TapTool, CustomJS, ColumnDataSource
callback = CustomJS(code="alert('hello world')")
tap = TapTool(callback=callback)
p = figure(plot_width=600, plot_height=300, tools=[tap])
p.circle(x=[1, 2, 3, 4, 5], y=[2, 5, 8, 2, 7], size=20)
show(p)
```
## Many places to add callbacks
* Widgets - Button, Toggle, Dropdown, TextInput, AutocompleteInput, Select, Multiselect, Slider, (DateRangeSlider), DatePicker,
* Tools - TapTool, BoxSelectTool, HoverTool,
* Selection - ColumnDataSource, AjaxDataSource, BlazeDataSource, ServerDataSource
* Ranges - Range1d, DataRange1d, FactorRange
## Callbacks for widgets
Widgets that have values can have small JavaScript actions attached to them. These actions (also referred to as "callbacks") are executed whenever the widget's value changes. To make it easier to refer to specific Bokeh models (e.g., a data source or a glyph) from JavaScript, the ``CustomJS`` object accepts a dictionary of "args" that maps names to Bokeh models. The corresponding JavaScript models are made available to the ``CustomJS`` code automatically.
The example below shows an action attached to a slider that updates a data source whenever the slider is moved:
```
from bokeh.layouts import column
from bokeh.models import CustomJS, ColumnDataSource, Slider
x = [x*0.005 for x in range(0, 201)]
source = ColumnDataSource(data=dict(x=x, y=x))
plot = figure(plot_width=400, plot_height=400)
plot.line('x', 'y', source=source, line_width=3, line_alpha=0.6)
slider = Slider(start=0.1, end=6, value=1, step=.1, title="power")
update_curve = CustomJS(args=dict(source=source, slider=slider), code="""
var data = source.get('data');
var f = slider.value;
x = data['x']
y = data['y']
for (i = 0; i < x.length; i++) {
y[i] = Math.pow(x[i], f)
}
source.change.emit();
""")
slider.js_on_change('value', update_curve)
show(column(slider, plot))
```
## Callbacks for selections
It is also possible to execute JavaScript actions whenever a user's selection (e.g., box, point, lasso) changes. This is accomplished by attaching the same kind of CustomJS object to the data source that is being selected on.
The example below is slightly more sophisticated: it shows how the data source of one glyph can be updated in response to selections on another glyph:
```
from random import random
x = [random() for x in range(500)]
y = [random() for y in range(500)]
color = ["navy"] * len(x)
s = ColumnDataSource(data=dict(x=x, y=y, color=color))
p = figure(plot_width=400, plot_height=400, tools="lasso_select", title="Select Here")
p.circle('x', 'y', color='color', size=8, source=s, alpha=0.4)
s2 = ColumnDataSource(data=dict(xm=[0,1],ym=[0.5, 0.5]))
p.line(x='xm', y='ym', color="orange", line_width=5, alpha=0.6, source=s2)
s.callback = CustomJS(args=dict(s2=s2), code="""
var inds = cb_obj.get('selected')['1d'].indices;
var d = cb_obj.get('data');
var ym = 0
if (inds.length == 0) { return; }
for (i = 0; i < d['color'].length; i++) {
d['color'][i] = "navy"
}
for (i = 0; i < inds.length; i++) {
d['color'][inds[i]] = "firebrick"
ym += d['y'][inds[i]]
}
ym /= inds.length
s2.get('data')['ym'] = [ym, ym]
cb_obj.trigger('change');
s2.trigger('change');
""")
show(p)
```
# More
For more on interactions, see the user guide - http://bokeh.pydata.org/en/latest/docs/user_guide/interaction.html
* cookiecutter is too slow to do live
** nbdime, nbval were broken
* Too long talking after setup, before notebook
* More instructions for what doing while talking
* Too much text in notebooks - fine for SOLUTION notebook, but the problem notebooks should be pretty bare
Problem/solution.
* Need stop/talk points in notebook (and shift-enter sections)
* Explain what they did. (not what they are about to do)
Clearer instructions on what they should be doing.
Exposition: Introduce a problem, the tools to solve it.
* Give a quick example.
Give an exercise:
* Give an explicit problem to be solved
* Have an outcome in mind
review our solution to the same problem
Rinse, Lather, Repeat
e.g. here's the site. Produce a RawDataset with tar URL, license, readme, and do a "fetch"
You should end up with 3 files.
* Suggestion: Visualize the flow/graph of steps
* instructions available for installation of
* git
* make
* editor (not familiar with their laptop)
## Tutorial 0: Reproducible Environment (30m)
* Code Flow: make and makefiles
* Templates: cookiecutter
* Revision Control: git and github
* Virtualenv: conda / pipenv
* Testing: doctest, pytest, hypothesis
## Tutorial 1: Reproducible Data (1h)
"Raw Data is Read Only. Sing it with me"
* RawDataset
* Fetching + Unpack
* Example 1: lvq-pak
* Exercise: fmnist
* Processing data
* Process into data, (optionally, target)
* create a process_my_dataset() function
* Example 1: lvq-pak
* Exercise: fmnist
* save the raw dataset to the raw dataset catalog
* Datasets and Data Transformers
* Create a transformer to produce a Dataset from the RawDataset
* Add this dataset to the catalog
* Load the dataset
* example: lvq-pak
* exercise: fmnist_test, fmnist_train
* More Complicated Transformers
* Example: Train/Test Split on lvq-pak
* Exercise: merge labels on lvq-pak
* Exercise: merge labels on fmnist
* Punchline:
* make clean_raw, clean_cache, clean_processed, (clean_data?) `make data`
## Tutorial 2: Reproducible Models (1h)
"We're gonna data science the @#&! out of this"
* Models (Estimators with metadata)
* Experiments (Datasets with metadata)
* Punchline:
* make clean_models, clean_predictions. `make predict`
## Tutorial 3: Reproducible Results (30m)
"I'm only here for the pretty pictures"
* Punchline
* make clean_analysis, clean_results, `make results`
## The Big Punchline
```
make clean
make results
```
```
import requests
import datetime
import json
import time
import pandas as pd
import numpy as np
import warnings
warnings.filterwarnings('ignore')
import os
from datetime import datetime as dt,date,timedelta
from smtplib import SMTP
import smtplib
from pretty_html_table import build_table
from email.mime.text import MIMEText
from email.mime.application import MIMEApplication
from email.mime.multipart import MIMEMultipart
```
## Enter your Postal Code -
#### Example: 711101
```
POST_CODE = int(input("Enter Your Postal Code "))
```
## Enter for the Number of Days You Want To Check The Availability
#### Example: 7
```
numdays = int(input("Enter for the Duration of Days you want to check "))
```
## Enter the Date from which you Want to check!
#### Example - 02-05-2021 (dd-mm-yyyy)
```
date = str(input("Enter the Date, dd-mm-yyyy "))
base = pd.to_datetime(date,format='%d-%m-%Y')
date_list = [base + datetime.timedelta(days=x) for x in range(numdays)]
date_range = [x.strftime("%d-%m-%Y") for x in date_list]
final=[]
for i in date_range:
URL = "https://cdn-api.co-vin.in/api/v2/appointment/sessions/public/findByPin?pincode={}&date={}".format(POST_CODE, i)
response = requests.get(URL)
result = response.json()
if result['sessions']:
df = pd.DataFrame(result['sessions'])
df= df[['date','center_id','name','state_name','district_name','fee_type','available_capacity','fee','min_age_limit','vaccine']]
final.append(df)
else:
print("No Slot for the {} Available OR Slots are BOOKED!".format(i))
try:
final = pd.concat(final)
except:
print("No Slots Available OR Slots are Booked!")
final= final.reset_index(drop=True)
final
final_html = final.to_html()
email_sender_account = "botguypython@gmail.com"
email_sender_username = "botguypython@gmail.com"
email_sender_password = "password" # Enter your password here
email_smtp_server = "smtp.gmail.com"
email_smtp_port = 587
email_recepients = ["lokeshrth4617@gmail.com","lokesh.rathi@hiveminds.in"]
def SendEmail(final_html):
email_subject = f"Covid nearest vaccination center and slots availability"
email_body = '<html><head> </head><body>'
    email_body += final_html
email_body += f'<h3><u>For the Dates with No Data - Either the Slots are booked or are Unavailable!</u></h3>'
server = smtplib.SMTP(email_smtp_server,email_smtp_port)
print(f"Logging in to {email_sender_account}")
server.starttls()
server.login(email_sender_username, email_sender_password)
for recipient in email_recepients:
print(f"Sending email to {recipient}")
message = MIMEMultipart('alternative')
message['From'] = email_sender_account
message['To'] = recipient
message['Subject'] = email_subject
message.attach(MIMEText(email_body, 'html'))
#message.attach(nike)
server.sendmail(email_sender_account,recipient,message.as_string())
server.quit()
SendEmail(final_html)
```
```
import numpy as np  # you usually need numpy
#---these are for plots---#
import matplotlib
matplotlib.use('nbAgg')
import matplotlib.pyplot as plt
plt.rcParams['font.size']=16
plt.rcParams['font.family']='dejavu sans'
plt.rcParams['mathtext.fontset']='stix'
plt.rcParams['mathtext.rm']='custom'
plt.rcParams['mathtext.it']='stix:italic'
plt.rcParams['mathtext.bf']='stix:bold'
#-------------------------#
#load the module
from sys import path as sysPath
sysPath.append('../../src')
from interfacePy.Axion import Axion
from interfacePy.AxionMass import AxionMass
from interfacePy.Cosmo import Cosmo
from interfacePy.FT import FT #easy tick formatting
theta_i, fa=0.94435, 1e12
# theta_i, fa=np.pi, 1e12
# theta_i, fa=1e-3, 1e5
umax=500
TSTOP=1e-4
ratio_ini=1e3
N_convergence_max, convergence_lim=5, 1e-2 #this is fine, but you can experiment a bit.
#radiation dominated example
inputFile="../InputExamples/RDinput.dat"
# Matter domination example.
# the NSC parameters (using the notation of 2012.07202) are:
# T_end=1e-2 (GeV), c=3, T_ini=1e12 (GeV), and r=1e-1
# inputFile="../InputExamples/MatterInput.dat"
# Kination domination example.
# the NSC parameters (using the notation of 2012.07202) are:
# T_end=0, c=6, T_ini=1e3 (GeV), and r=1e10
# inputFile="../InputExamples/KinationInput.dat"
#you can define the axion mass using a data file
axionMass = AxionMass(r'../../src/data/chi.dat',0,1e5)
#you can define the axion mass via a function
# def ma2(T,fa):
# TQCD=150*1e-3;
# ma20=3.1575e-05/fa/fa;
# if T<=TQCD:
# return ma20;
# return ma20*pow((TQCD/T),8.16)
# axionMass = AxionMass(ma2)
# options for the solver
# These variables are optional. You can use the Axion class without them.
initial_step_size=1e-2; #initial step the solver takes.
minimum_step_size=1e-8; #This limits the stepsize from below (lower bound).
maximum_step_size=1e-2; #This limits the stepsize from above (upper bound).
absolute_tolerance=1e-8; #absolute tolerance of the RK solver
relative_tolerance=1e-8; #relative tolerance of the RK solver
beta=0.9; #controls how aggressive the adaptation is. Generally, it should be around but less than 1.
#The stepsize does not increase more than fac_max, and less than fac_min.
#This ensures a better stability. Ideally, fac_max=inf and fac_min=0, but in reality one must
#tweak them in order to avoid instabilities.
fac_max=1.2;
fac_min=0.8;
maximum_No_steps=int(1e7); #maximum steps the solver can take. Quits if this number is reached, even if integration is not finished.
# Axion instance
ax=Axion(theta_i, fa, umax, TSTOP, ratio_ini, N_convergence_max, convergence_lim, inputFile,axionMass,
initial_step_size,minimum_step_size, maximum_step_size, absolute_tolerance,
relative_tolerance, beta, fac_max, fac_min, maximum_No_steps)
# Axion instance
# you can always run Axion with the default parameters for the solver
# ax=Axion(theta_i, fa, umax, TSTOP, ratio_ini, N_convergence_max, convergence_lim, inputFile)
# solve the EOM (this only gives you the relic, T_osc, theta_osc, and a_osc)
ax.solveAxion()
ax.relic, ax.T_osc, ax.theta_osc
ax.getPeaks()#this gives you the peaks of the oscillation
ax.getPoints()#this gives you all the points of integration
ax.getErrors()#this gives you local errors of integration
if True:
fig=plt.figure(figsize=(9,4))
fig.subplots_adjust(bottom=0.15, left=0.15, top = 0.95, right=0.9,wspace=0.0,hspace=0.0)
sub = fig.add_subplot(1,1,1)
#this plot shows the peaks of the oscillation
sub.plot(ax.T_peak,ax.theta_peak,linestyle=':',marker='+',color='xkcd:blue',linewidth=2)
#this plot shows all the points
sub.plot(ax.T,ax.theta,linestyle='-',linewidth=2,alpha=1,c='xkcd:black')
sub.set_xlabel(r'$T ~[{\rm GeV}]$')
sub.xaxis.set_label_coords(0.5, -0.1)
sub.set_ylabel(r'$\theta$')
sub.yaxis.set_label_coords(-0.1,0.5)
sub.axhline(ax.theta_osc,linestyle=':',color='xkcd:red',linewidth=1.5)
sub.axvline(ax.T_osc,linestyle='--',color='xkcd:gray',linewidth=1.5)
#set major ticks
_M_xticks=[ round(0.45+i*0.15,4) for i in range(0,15) ]
_M_yticks=[ round(-0.4+i*0.1,3) for i in range(0,20,2) ]
#set major ticks that will not have a label
_M_xticks_exception=[]
_M_yticks_exception=[]
_m_xticks=[]
_m_yticks=[]
ft=FT(_M_xticks,_M_yticks,
_M_xticks_exception,_M_yticks_exception,
_m_xticks,_m_yticks,
xmin=0.45,xmax=2,ymin=-0.4,ymax=1,xscale='linear',yscale='linear')
ft.format_ticks(plt,sub)
sub.text(x=0.92,y=0.1, s=r'$T_{\rm osc}$',rotation=90)
sub.text(x=1.5,y=0.8, s=r'$\theta_{\rm osc}$')
sub.text(x=0.5,y=0.2, s=r'$\theta_{\rm max}$',rotation=20)
sub.text(x=1.6,y=-0.3,
s=r'$f_{a}=10^{12}~{\rm GeV}$'+'\n'+ r'$\theta_i = 0.94435$')
# fig.savefig('theta_evolution.pdf',bbox_inches='tight')
fig.show()
if True:
fig=plt.figure(figsize=(9,4))
fig.subplots_adjust(bottom=0.15, left=0.15, top = 0.9, right=0.9,wspace=0.0,hspace=0.25)
sub = fig.add_subplot(1,1,1)
sub.plot(ax.T,ax.zeta,linestyle='-',linewidth=2,alpha=1,c='xkcd:black')
sub.plot(ax.T_peak,ax.zeta_peak,linestyle=':',marker='+',color='xkcd:blue',linewidth=2)
sub.axvline(ax.T_osc,linestyle='--',color='xkcd:gray',linewidth=1.5)
#set major ticks
_M_xticks=[ round(0.45+i*0.15,4) for i in range(0,15) ]
_M_yticks=[ round(-20+i*5,3) for i in range(0,20,1) ]
#set major ticks that will not have a label
_M_xticks_exception=[]
_M_yticks_exception=[]
_m_xticks=[]
_m_yticks=[]
ft=FT(_M_xticks,_M_yticks,
_M_xticks_exception,_M_yticks_exception,
_m_xticks,_m_yticks,
xmin=0.45,xmax=2,ymin=-20,ymax=20,xscale='linear',yscale='linear')
ft.format_ticks(plt,sub)
sub.text(x=0.92,y=10, s=r'$T_{\rm osc}$',rotation=90)
sub.text(x=1.6,y=-15,
s=r'$f_{a}=10^{12}~{\rm GeV}$'+'\n'+ r'$\theta_i = 0.94435$')
# fig.savefig('zeta_evolution.pdf',bbox_inches='tight')
fig.show()
if True:
fig=plt.figure(figsize=(9,4))
fig.subplots_adjust(bottom=0.15, left=0.15, top = 0.9, right=0.9,wspace=0.0,hspace=0.25)
sub = fig.add_subplot(1,1,1)
sub.plot(ax.T,np.abs(ax.dtheta/ax.theta),linestyle='-',linewidth=2,alpha=1,c='xkcd:black',label=r'$\dfrac{\delta \theta}{\theta}$')
sub.plot(ax.T,np.abs(ax.dzeta/ax.zeta),linestyle='-',linewidth=2,alpha=1,c='xkcd:red',label=r'$\dfrac{\delta \zeta}{\zeta}$')
sub.set_yscale('log')
sub.set_xscale('linear')
sub.set_xlabel(r'$T ~[{\rm GeV}]$')
sub.xaxis.set_label_coords(0.5, -0.1)
sub.set_ylabel(r'local errors')
sub.yaxis.set_label_coords(-0.1,0.5)
sub.legend(bbox_to_anchor=(0.98, 0.95),borderaxespad=0.,
borderpad=0.05,ncol=1,loc='upper right',fontsize=14,framealpha=0)
sub.axvline(ax.T_osc,linestyle='--',color='xkcd:gray',linewidth=1.5)
#set major ticks
_M_xticks=[ round(0.45+i*0.15,4) for i in range(0,15) ]
_M_yticks=[ 10.**i for i in range(-12,5,1) ]
#set major ticks that will not have a label
_M_xticks_exception=[]
_M_yticks_exception=[]
_m_xticks=[]
_m_yticks=[]
ft=FT(_M_xticks,_M_yticks,
_M_xticks_exception,_M_yticks_exception,
_m_xticks,_m_yticks,
xmin=0.45,xmax=2,ymin=1e-11,ymax=1e-4,xscale='linear',yscale='log')
ft.format_ticks(plt,sub)
sub.text(x=0.92,y=1e-6, s=r'$T_{\rm osc}$',rotation=90)
# fig.savefig('local_errors.pdf',bbox_inches='tight')
fig.show()
if True:
fig=plt.figure(figsize=(9,4))
fig.subplots_adjust(bottom=0.15, left=0.15, top = 0.9, right=0.9,wspace=0.0,hspace=0.25)
sub = fig.add_subplot(1,1,1)
sub.hist(ax.T,bins=30,color='xkcd:blue')
sub.set_yscale('log')
sub.set_xscale('linear')
sub.set_xlabel(r'$T ~[{\rm GeV}]$')
sub.xaxis.set_label_coords(0.5, -0.1)
sub.set_ylabel(r'Number of steps')
sub.yaxis.set_label_coords(-0.1,0.5)
sub.axvline(ax.T_osc,linestyle='--',color='xkcd:gray',linewidth=1.5)
sub.axvline(ax.T_osc,linestyle='--',color='xkcd:gray',linewidth=1.5)
#set major ticks
_M_xticks=[ round(0.45+i*0.15,4) for i in range(0,15) ]
_M_yticks=[ 10.**i for i in range(-12,5,1) ]
#set major ticks that will not have a label
_M_xticks_exception=[]
_M_yticks_exception=[]
_m_xticks=[]
_m_yticks=[]
ft=FT(_M_xticks,_M_yticks,
_M_xticks_exception,_M_yticks_exception,
_m_xticks,_m_yticks,
xmin=0.45,xmax=2,ymin=1e0,ymax=1e3,xscale='linear',yscale='log')
ft.format_ticks(plt,sub)
sub.text(x=0.92,y=1e2, s=r'$T_{\rm osc}$',rotation=90)
# fig.savefig('histogram.pdf',bbox_inches='tight')
fig.show()
cosmo=Cosmo('../../src/data/eos2020.dat',0,1.22e19)
if True:
fig=plt.figure(figsize=(9,4))
fig.subplots_adjust(bottom=0.15, left=0.15, top = 0.9, right=0.9,wspace=0.0,hspace=0.25)
sub = fig.add_subplot(1,1,1)
sub.plot(ax.T,ax.rho_axion/cosmo.rho_crit,linestyle='-',linewidth=2,alpha=1,c='xkcd:black')
sub.plot(ax.T_peak,ax.rho_axion_peak/cosmo.rho_crit,linestyle=':',linewidth=2,alpha=1,c='xkcd:blue')
sub.set_xlabel(r'$T ~[{\rm GeV}]$')
sub.xaxis.set_label_coords(0.5, -0.1)
sub.set_ylabel(r'$\dfrac{\rho_{a}(T)}{\rho_{\rm crit}}$')
sub.yaxis.set_label_coords(-0.1,0.5)
sub.axvline(ax.T_osc,linestyle='--',color='xkcd:gray',linewidth=1.5)
#set major ticks
_M_xticks=[ round(0.45+i*0.15,4) for i in range(0,15) ]
_M_yticks=[ 10.**i for i in range(32,42,1) ]
#set major ticks that will not have a label
_M_xticks_exception=[]
_M_yticks_exception=[]
_m_xticks=[]
_m_yticks=[]
ft=FT(_M_xticks,_M_yticks,
_M_xticks_exception,_M_yticks_exception,
_m_xticks,_m_yticks,
xmin=0.45,xmax=2,ymin=1e32,ymax=1e36,xscale='linear',yscale='log')
ft.format_ticks(plt,sub)
sub.text(x=0.92,y=1e33, s=r'$T_{\rm osc}$',rotation=90)
# fig.savefig('axion_energy_density.pdf',bbox_inches='tight')
fig.show()
#run the destructors
del ax
del cosmo
```
```
import pandas as pd
from sklearn import datasets
import matplotlib.pyplot as plt
```
<h2><font color="darkblue">Support Vector Machine</font></h2>
<hr/>
### Preliminaries
- Linearly separable
> Let $ S_0 $ and $ S_1 $ be two sets of points in an $ n $-dimensional Euclidean space. We say $ S_0 $ and $ S_1 $ are linearly separable if $ \quad \exists w_1, w_2, \cdots, w_n, k \quad $ such that $ \qquad \forall x \in S_0 $, $ \displaystyle \sum_{i=1}^{n} w_i x_i > k \quad $ and $ \quad \forall x \in S_1 $, $ \displaystyle \sum_{i=1}^{n} w_i x_i < k \quad $ where $ x_i $ is the $ i $-th component of $ x $.
- Example: Linearly separable
<img src="http://i.imgur.com/aLZlG.png" width=200 >
- Example: Not linearly separable
<img src="http://i.imgur.com/gWdPX.png" width=200 >
<p style="text-align:center">(Picture from https://www.reddit.com/r/MachineLearning/comments/15zrpp/please_explain_support_vector_machines_svm_like_i/)</p>
### Support Vector Machine (Hard-margin)
- Intuition: Find an optimal hyperplane that could maximize margin between different classes
<img src="https://upload.wikimedia.org/wikipedia/commons/b/b5/Svm_separating_hyperplanes_%28SVG%29.svg" width=300 align=center>
<p style="text-align:center">(Picture from https://en.wikipedia.org/wiki/Support_vector_machine)</p>
- Data
> $ \displaystyle \{(\mathbf{x}_i, y_i) \}_{i=1}^{n} \qquad $ where $ \displaystyle \qquad \mathbf{x}_i \in \mathbb{R}^d, \ y_i \in \{-1, 1 \} $
- Linearly separable if
> $ \displaystyle \exists (\mathbf{w}, b) \quad $ such that $ \displaystyle \quad y_i = \text{sign} \left(\langle \mathbf{w}, \mathbf{x}_i \rangle + b \right) \quad \forall i $
>
> $ \displaystyle \exists (\mathbf{w}, b) \quad $ such that $ \displaystyle \quad y_i \left(\langle \mathbf{w}, \mathbf{x}_i \rangle + b \right) > 0 \quad \forall i $
- Margin
> The margin of a hyperplane w.r.t training data is the minimal distance between a point in the training data and the hyperplane.
>
> In this sense, if a hyperplane has a large margin, then it still could separate the training data even if we slightly perturb each data point.
- Recall
> The distance between a point $ \mathbf{x} $ and the hyperplane defined by $ \quad (\mathbf{w}, b) \quad $ where $ \quad \lvert\lvert \mathbf{w} \rvert\rvert = 1 \quad $ is $ \quad \lvert \langle \mathbf{w}, \mathbf{x} \rangle + b \rvert $
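This distance formula is easy to verify numerically. A small sketch (with a hypothetical unit-norm $\mathbf{w}$ and point chosen for illustration):

```python
import numpy as np

def distance_to_hyperplane(x, w, b):
    # For unit-norm w, the distance is simply |<w, x> + b|.
    w = np.asarray(w, dtype=float)
    assert np.isclose(np.linalg.norm(w), 1.0), "w must have unit norm"
    return abs(np.dot(w, x) + b)

# Hyperplane 0.6*x1 + 0.8*x2 - 1 = 0, point (2, 1):
print(distance_to_hyperplane([2, 1], [0.6, 0.8], -1.0))  # 1.0
```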
- **Hard-SVM**: Fit a hyperplane that separates the training data with the largest possible margin
> $ \displaystyle \max_{\mathbf{w}, b: \lvert\lvert \mathbf{w} \rvert\rvert = 1} \min\limits_{i \in [n]} \lvert \langle \mathbf{w}, \mathbf{x}_i \rangle + b \rvert \quad $ such that $ \displaystyle \quad y_i(\langle \mathbf{w}, \mathbf{x}_i \rangle + b) > 0 \quad \forall i $
- Example
```
from sklearn import svm
import numpy as np
# Generate 100 separable points
x, y = datasets.make_blobs(n_samples=100, centers=2, random_state=3)
plt.scatter(x[:,0], x[:,1], c=y);
# Fit SVM
clf = svm.SVC(kernel='linear', C=1000)
clf.fit(x, y)
# Create grid to evaluate model
xx = np.linspace(x[:,0].min()-0.5, x[:,0].max()+0.5, 30)
yy = np.linspace(x[:,1].min()-0.5, x[:,1].max()+0.5, 30)
YY, XX = np.meshgrid(yy, xx)
xy = np.vstack([XX.ravel(), YY.ravel()]).T
Z = clf.decision_function(xy).reshape(XX.shape)
# Plot decision boundary and margins
plt.contour(XX, YY, Z, colors='k', levels=[-1, 0, 1], alpha=0.5, linestyles=['--', '-', '--']);
plt.scatter(x[:,0], x[:,1], c=y);
```
### Support Vector Machine (Soft-margin)
- **Hard-SVM**: Fit a hyperplane that separates the training data with the largest possible margin
> $ \displaystyle \min_{\mathbf{w}, b} \ \lvert\lvert \mathbf{w} \rvert\rvert^2 \quad $ such that $ \displaystyle \quad y_i(\langle \mathbf{w}, \mathbf{x}_i \rangle + b) > 1 \quad \forall i $
- **Soft-SVM**: Relax the condition
> $ \displaystyle \min_{\mathbf{w}, b, \zeta} \lambda \lvert\lvert \mathbf{w} \rvert\rvert^2 + \frac{1}{n} \sum_{i=1}^{n} \zeta_i \qquad $ such that $ \displaystyle \quad y_i(\langle \mathbf{w}, \mathbf{x}_i \rangle + b) \ge 1 - \zeta_i \quad $ where $ \displaystyle \quad \lambda > 0, \ \zeta_i \ge 0 $
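In scikit-learn's `SVC`, the parameter `C` plays the role of this trade-off between margin size and slack: a small `C` tolerates more margin violations (softer margin), while a large `C` approximates the hard-margin behavior. A quick sketch (the blob parameters mirror the hard-margin example above and are illustrative):

```python
from sklearn import datasets, svm

# Separable blobs, as in the hard-margin example above.
x, y = datasets.make_blobs(n_samples=100, centers=2, random_state=3)

soft = svm.SVC(kernel='linear', C=0.01).fit(x, y)  # soft margin: many support vectors
hard = svm.SVC(kernel='linear', C=1000).fit(x, y)  # near hard margin: few support vectors

print(len(soft.support_), len(hard.support_))
print(hard.score(x, y))  # separable data, so training accuracy should be 1.0
```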
- Kernel trick
> Map the original space into a feature space (possibly of higher dimension) where the data may be linearly separable.
>
> The kernel function transforms the data into a higher-dimensional feature space to make it possible to perform the linear separation.
<img src="http://i.imgur.com/WuxyO.png" width=300 align=center>
<p style="text-align:center">(Picture from https://www.reddit.com/r/MachineLearning/comments/15zrpp/please_explain_support_vector_machines_svm_like_i/)</p>
<img src="https://cdn-images-1.medium.com/max/1600/1*C3j5m3E3KviEApHKleILZQ.png" width=300 align=center>
We can separate this data by mapping it to a new dimension using $z = x^2 + y^2$
<img src="https://cdn-images-1.medium.com/max/1600/1*FLolUnVUjqV0EGm3CYBPLw.png" width=300 align=center>
```
# Toy Example: not linearly separable in original space (dimension=1)
x = np.arange(-10, 11)
y = np.repeat(-1, x.size)
y[np.abs(x) > 3] = 1
plt.scatter(x, np.repeat(0, x.size), c=y);
```
> $ \displaystyle \phi: \mathbb{R} \rightarrow \mathbb{R}^2 $
>
> $ \displaystyle \phi(x) = (x, x^2) $
```
# Kernel trick: linearly separable in feature space (dimension=2)
plt.scatter(x, x**2, c=y);
plt.axhline(y=12.5);
```
- Some kernels
> Polynomial kernel: $ \displaystyle \qquad K(x, x^\prime) = \left(1 + \langle x, x^\prime \rangle \right)^d $
>
> (Gaussian) radial basis function kernel (RBF): $ \displaystyle \qquad K(x, x^\prime) = \exp \left(- \frac{\lvert\lvert x - x^\prime \rvert\rvert^2}{2 \sigma^2} \right) = \exp (- \gamma \lvert\lvert x - x^\prime \rvert\rvert^2) \qquad $ where $ \displaystyle \qquad \gamma = \frac{1}{2 \sigma^2} $
>
>
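Both kernels above are straightforward to evaluate directly. A minimal numpy sketch (the inputs are illustrative):

```python
import numpy as np

def poly_kernel(x, xp, d=2):
    # (1 + <x, x'>)^d
    return (1.0 + np.dot(x, xp)) ** d

def rbf_kernel(x, xp, gamma=0.5):
    # exp(-gamma * ||x - x'||^2), with gamma = 1 / (2 sigma^2)
    diff = np.asarray(x, dtype=float) - np.asarray(xp, dtype=float)
    return np.exp(-gamma * np.dot(diff, diff))

x, xp = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(poly_kernel(x, xp))   # (1 + 0)^2 = 1.0
print(rbf_kernel(x, xp))    # exp(-0.5 * 2) = exp(-1) ≈ 0.3679
```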
- Choice of kernel
```
from sklearn import datasets, svm
from sklearn.model_selection import cross_val_score
x, y = datasets.make_circles(n_samples=1000, factor=0.3, noise=0.1, random_state=2018)
plt.subplot(111, aspect='equal');
plt.scatter(x[:,0], x[:,1], c=y);
# Create grid to evaluate model
xx = np.linspace(x[:,0].min()-0.5, x[:,0].max()+0.5, 30)
yy = np.linspace(x[:,1].min()-0.5, x[:,1].max()+0.5, 30)
YY, XX = np.meshgrid(yy, xx)
xy = np.vstack([XX.ravel(), YY.ravel()]).T
# Linear kernel
clf = svm.SVC(kernel='linear')
clf.fit(x, y)
Z = clf.decision_function(xy).reshape(XX.shape)
# Plot decision boundary and margins
plt.contour(XX, YY, Z, colors='k', levels=[-1, 0, 1], alpha=0.5, linestyles=['--', '-', '--']);
plt.scatter(x[:,0], x[:,1], c=y);
print('10-fold cv scores with Linear kernel: ', np.mean(cross_val_score(clf, x, y, cv=10)))
# Polynomial kernel (new figure so the boundaries are not drawn on top of each other)
plt.figure()
clf = svm.SVC(kernel='poly', gamma='auto')
clf.fit(x, y)
Z = clf.decision_function(xy).reshape(XX.shape)
# Plot decision boundary and margins
plt.contour(XX, YY, Z, colors='k', levels=[-1, 0, 1], alpha=0.5, linestyles=['--', '-', '--']);
plt.scatter(x[:,0], x[:,1], c=y);
print('10-fold cv scores with Polynomial kernel: ', np.mean(cross_val_score(clf, x, y, cv=10)))
# RBF kernel
plt.figure()
clf = svm.SVC(kernel='rbf', gamma='auto')
clf.fit(x, y)
Z = clf.decision_function(xy).reshape(XX.shape)
# Plot decision boundary and margins
plt.contour(XX, YY, Z, colors='k', levels=[-1, 0, 1], alpha=0.5, linestyles=['--', '-', '--']);
plt.scatter(x[:,0], x[:,1], c=y);
print('10-fold cv scores with RBF kernel: ', np.mean(cross_val_score(clf, x, y, cv=10)))
```
<br/>
- **Note:** For `SVC` in scikit-learn, it tries to solve the following problem:
$ \displaystyle \min_{w, b, \zeta} \frac{1}{2} w^\top w + C \sum_{i=1}^{n} \zeta_i \qquad $ subject to $ \displaystyle \qquad y_i \left(w^\top \phi(x_i) + b \right) \ge 1 - \zeta_i \qquad $ where $ \displaystyle \qquad \zeta_i \ge 0, i = 1, 2, \cdots, n $
[References](http://scikit-learn.org/stable/modules/svm.html#svm-mathematical-formulation)
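The role of $C$ in this formulation can be illustrated by fitting `SVC` with different values and counting support vectors: a small $C$ tolerates more margin violations (a softer margin, typically more support vectors), while a large $C$ penalizes violations harder. A sketch on an arbitrary synthetic dataset:

```python
from sklearn import datasets, svm

# Synthetic 2-D dataset, used only for illustration
x, y = datasets.make_classification(n_samples=200, n_features=2,
                                    n_informative=2, n_redundant=0,
                                    random_state=0)

# Smaller C -> softer margin -> typically more support vectors
for C in [0.01, 1.0, 100.0]:
    clf = svm.SVC(kernel='linear', C=C).fit(x, y)
    print('C =', C, '->', clf.n_support_.sum(), 'support vectors')
```

In the limit of very large $C$ the soft-margin problem approaches the hard-margin SVM, which admits no violations at all.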
<br/>
**References**
- Friedman, J., Hastie, T., & Tibshirani, R. (2001). The elements of statistical learning. Springer series in statistics.
- Shalev-Shwartz, S., & Ben-David, S. (2014). Understanding machine learning: From theory to algorithms. Cambridge university press.
## SVM Exercise - Wisconsin Breast Cancer Dataset
> Features are computed from a digitized image of a fine needle aspirate (FNA) of a breast mass. They describe characteristics of the cell nuclei present in the image. The separating plane in the 3-dimensional space was obtained using the linear program described in: [K. P. Bennett and O. L. Mangasarian: "Robust Linear Programming Discrimination of Two Linearly Inseparable Sets", Optimization Methods and Software 1, 1992, 23-34].
## Question: Can we find a hyperplane separating our samples that predicts whether they are cancerous?
```
from sklearn import svm
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
# load the breast cancer dataset
d = load_breast_cancer()
x = d['data']
y = d['target']
# reduce dimensionality
pca = PCA(n_components=2)
x = pca.fit_transform(x)
# fit a SVM
clf = svm.SVC(kernel='linear')
clf.fit(x,y)
# Create grid to evaluate model
xx = np.linspace(x[:,0].min()-0.5, x[:,0].max()+0.5, 30)
yy = np.linspace(x[:,1].min()-0.5, x[:,1].max()+0.5, 30)
YY, XX = np.meshgrid(yy, xx)
xy = np.vstack([XX.ravel(), YY.ravel()]).T
Z = clf.decision_function(xy).reshape(XX.shape)
# Plot decision boundary and margins
plt.contour(XX, YY, Z, colors='k', levels=[-1, 0, 1], alpha=0.5, linestyles=['--', '-', '--']);
plt.scatter(x[:,0], x[:,1], c=y);
print('10-fold cv scores with linear kernel: ', np.mean(cross_val_score(clf, x, y, cv=10)))
```