## Classwork
Is the process $(X_n)$ a martingale with respect to the filtration $\mathcal{F}_n$?
1. $z_1,z_2,\ldots,z_n$ are independent with $z_i\sim N(0,49)$, and $X_n=\sum_{i=1}^n z_i$. Filtration: $\mathcal{F}_n=\sigma(z_1,z_2,\ldots,z_n);$
2. $z_1,z_2,\ldots,z_n$ are independent with $z_i\sim U[0,1]$, and $X_n=\sum_{i=1}^n z_i$. Filtration: $\mathcal{F}_n=\sigma(z_1,z_2,\ldots,z_n);$
3. Take a deck of 52 cards, 4 suits. I turn over one card after another and observe each card. Let $X_n$ be the fraction of aces in the remaining deck after $n$ cards have been opened, and let $\mathcal{F}_n$ encode knowledge of the cards opened so far. Let us see which values $X_0$ and $X_{51}$ can take.
$X_0=\dfrac4{52}.$
After the 51st card is opened, $X_{51}$ is either 1 (the last card is an ace) or 0 (the last card is not an ace). By symmetry, the probability that the last card is an ace equals $\dfrac4{52}$, since there are 4 aces among the 52 cards.
| Outcome  | Not an ace     | Ace         |
|----------|----------------|-------------|
| $X_{51}$ | $0$            | $1$         |
| $p$      |$\dfrac{48}{52}$|$\dfrac4{52}$|
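A quick Monte Carlo check of this table (my addition, not part of the original notes): simulating shuffled decks, the average fraction of aces left after 51 cards should match $X_0 = 4/52$, as the martingale property predicts.

```python
import random

def ace_fraction_after(n_open, n_sims=100_000, seed=0):
    """Estimate E[X_n]: the average fraction of aces among the cards
    still face down after n_open cards have been opened."""
    rng = random.Random(seed)
    deck = [1] * 4 + [0] * 48  # 1 = ace, 0 = any other card
    total = 0.0
    for _ in range(n_sims):
        rng.shuffle(deck)
        remaining = deck[n_open:]  # cards still face down
        total += sum(remaining) / len(remaining)
    return total / n_sims

# A martingale keeps its mean: E[X_51] should equal X_0 = 4/52.
print(ace_fraction_after(51))  # close to 4/52 ≈ 0.0769
```

With one card left, $X_{51}$ is a Bernoulli variable, so the simulated mean converges to the probability $4/52$ from the table.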
d) How many elements are there in $\mathcal{F}_1$ and $\mathcal{F}_2$? Decide which is larger: the number of elementary particles in the Universe or the number of elements in $\mathcal{F}_2$.
### Solution:
In each case we check the two conditions from the definition of a martingale.
**a) Case 1:**
Condition 1: I know $z_1,z_2,\ldots,z_n$, and since $X_n=\sum_{i=1}^n z_i$, I know $X_n$.
Condition 2: $E(X_{n+1}|\mathcal{F}_n) = E(z_1+z_2+\ldots+z_{n+1}|z_1,z_2,\ldots,z_n) =$ (I know $z_1,z_2,\ldots,z_n$, so they can be pulled out) $= z_1+z_2+\ldots+z_n + E(z_{n+1}|z_1,z_2,\ldots,z_n) = z_1+z_2+\ldots+z_n+E (z_{n+1})=z_1+z_2+\ldots+z_n=X_n.$
Remarks on condition 2: $E (z_{n+1}) = 0$, since $z_i \sim N(0,49)$, and
$E (z_{n+1}|z_1,z_2,\ldots,z_n)=E (z_{n+1})$, since the random variables $z_1,z_2,\ldots,z_{n+1}$ are independent.
Both conditions hold, so the process $(X_n)$ is a martingale with respect to the filtration $\mathcal{F}_n.$
**b) Case 2:**
Condition 1: I know $z_1,z_2,\ldots,z_n$, and since $X_n=\sum_{i=1}^n z_i$, I know $X_n.$
Condition 2: $E (X_{n+1}|\mathcal{F}_n)=E (z_1+z_2+\ldots+z_{n+1}|z_1,z_2,\ldots,z_n) =$ (I know $z_1,z_2,\ldots,z_n$, so they can be pulled out) $= z_1+z_2+\ldots+z_n+E (z_{n+1}|z_1,z_2,\ldots,z_n) = z_1+z_2+\ldots+z_n+E (z_{n+1}) = z_1+z_2+\ldots+z_n+\dfrac{0+1}{2}=X_n+\dfrac12 \ne X_n.$
Condition 2 fails, so in this case the process $(X_n)$ is not a martingale.
**c) Case 3:**
Condition 1 holds: seeing the opened cards, I can compute the fraction of aces among the unopened ones, i.e. I can compute $X_n.$
Condition 2:
Let us predict the fraction of aces after the next card is opened: $E (X_{n+1}|\mathcal{F}_n).$
Currently $n$ cards are open and $52-n$ are face down.
Fraction of aces among the face-down cards: $X_n.$
Number of face-down aces: $X_n(52-n).$
The probability that card $n+1$ is an ace equals the fraction of aces among the face-down cards, i.e. $X_n$. If that card is an ace, then after it is opened the fraction of aces becomes $X_{n+1}=\dfrac{(52-n)X_n-1}{51-n}$. If it is not an ace, then $X_{n+1}=\dfrac{(52-n)X_n}{51-n}$. The table below lists the ace fractions and the probabilities of the two outcomes.
| Outcome | Ace                       | Not an ace              |
|---------|---------------------------|-------------------------|
|$X_{n+1}$|$\dfrac{(52-n)X_n-1}{51-n}$|$\dfrac{(52-n)X_n}{51-n}$|
| $p$     | $X_n$                     | $1-X_n$                 |
$E (X_{n+1}|\mathcal{F}_n) = X_n\dfrac{(52-n)X_n-1}{51-n}+(1-X_n)\dfrac{(52-n)X_n}{51-n} = \dfrac{52X_n^2-nX_n^2-X_n+52X_n-52X_n^2-nX_n+nX_n^2}{51-n}=\dfrac{51X_n-nX_n}{51-n}=X_n$
Condition 2 holds.
Both conditions hold, so the process $(X_n)$ is a martingale with respect to the filtration $\mathcal{F}_n.$
**d) The last part**
$\mathcal{F}_1$ is generated by 52 elementary events (for example, "card no. 1 is the queen of $\clubsuit$", "card no. 1 is the ace of $\spadesuit$", and so on). Each event is either included in a set or not, so $\operatorname{card}\mathcal{F}_1=2^{52}.$
$\operatorname{card}\mathcal{F}_2=2^{C_{52}^1C_{51}^1}=2^{52\cdot51} \approx (4 \cdot 10^{15})^{51}=4^{51} \cdot 10^{15\cdot51}$
The number of elementary particles in the Universe is $\approx 10^{81}$, and
$4^{51}\cdot 10^{15\cdot51} \gg 10^{81},$
so $\mathcal{F}_2$ has far more elements.
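Python's arbitrary-precision integers make this comparison exact; here is a small check (my addition, not in the original notes):

```python
card_F2 = 2 ** (52 * 51)    # number of events in F_2
particles = 10 ** 81        # rough estimate of the number of elementary particles

print(len(str(card_F2)))    # card F_2 has 799 decimal digits
print(card_F2 > particles)  # True: far more events than particles
```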
**Exercise**
$z_1,z_2,\ldots,z_n$ are independent with $z_i\sim U[0,1]$, and $X_n=\sum_{i=1}^n z_i.$ Filtration: $\mathcal{F}_n=\sigma(X_1,X_2,\ldots,X_n).$ Consider the process $M_n=a^{X_n}$. Find a number $a$ such that $(M_n)$ is a martingale with respect to the filtration $\mathcal{F}_n$.
**Solution**
The trivial case is $a = 1$. Indeed, then $(M_n) = (1,1,1,1,\ldots)$, so $E (M_{n+1}|\mathcal{F}_n)=1=M_n$, and $(M_n)$ is a martingale.
Now let us try to find $a \ne 1$. We check the two conditions from the definition of a martingale.
Condition 1: $M_n$ is measurable with respect to $\mathcal{F}_n$ for any fixed $a.$
Condition 2: $E (M_{n+1}|\mathcal{F}_n)=E (a^{X_{n+1}}|\mathcal{F}_n)=E (a^{z_1+z_2+\ldots+z_{n+1}}|\mathcal{F}_n) =$ (I know $z_1,z_2,\ldots,z_n$, so they can be pulled out) $= a^{z_1+z_2+\ldots+z_n}E (a^{z_{n+1}}|\mathcal{F}_n)=a^{X_n}E (a^{z_{n+1}}|\mathcal{F}_n) =$ (since $z_{n+1}$ is independent of $z_1,z_2,\ldots,z_n$) $= M_nE (a^{z_{n+1}})$, and for the martingale property this must equal $M_n.$
Therefore we need
$E (a^{z_{n+1}})=1$
$E (a^{z_{n+1}})=\int\limits_0^1 a^t\,dt=1$
$\int\limits_0^1 e^{t\cdot\ln a}\,dt=\left. \dfrac{e^{t\cdot\ln a}}{\ln a}\right|_0^1=\dfrac{e^{\ln a}}{\ln a}-\dfrac1{\ln a}=\dfrac{e^{\ln a}-1}{\ln a}=\dfrac{a-1}{\ln a} = 1$ $\Rightarrow$
$\Rightarrow a-1=\ln a$
Since $\ln a \leqslant a-1$ with equality only at $a=1$, this equation has the unique solution $a = 1.$
Conclusion:
the process $M_n=a^{X_n}$ is a martingale with respect to the filtration $\mathcal{F}_n$ only for $a = 1.$
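A numeric sanity check of the last step (my addition): the function $f(a)=a-1-\ln a$ is nonnegative and vanishes only at $a=1$.

```python
import math

def f(a):
    # f(a) = (a - 1) - ln(a); zero exactly when a - 1 = ln(a)
    return (a - 1) - math.log(a)

for a in (0.1, 0.5, 0.9, 1.0, 1.1, 2.0, 10.0):
    print(a, round(f(a), 6))  # positive everywhere except f(1) = 0
```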
# Martingales (continued). Stopping times.{#12 Martingals. Stopping time}
## Martingales (continued)
### Exercise:
$M_t$ is known to be a martingale. What are $E(M_{t+1}|\mathcal{F}_{t+1})$, $E(M_{t+2}|\mathcal{F}_t),$
and, more generally, $E(M_{t+k}|\mathcal{F}_t)$ for $k \geqslant 0$?
### Solution:
1) By the definition of a martingale: $E(M_{t+1}|\mathcal{F}_{t+1})=M_{t+1}.$
**Key property:** $\mathcal{F}_t \subseteq \mathcal{F}_{t+1}.$
2) $E(M_{t+2}|\mathcal{F}_t)=E[E(M_{t+2}|\mathcal{F}_{t+1})|\mathcal{F}_t]=$ (by the tower property of conditional expectation) $=E(M_{t+1}|\mathcal{F}_t)=M_t.$
3) By induction, $E(M_{t+k}|\mathcal{F}_t)=M_t$ for all $k \geqslant 0.$
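A Monte Carlo illustration of property 3) for the symmetric $\pm 1$ random walk (my sketch, not from the lecture): conditioning on the walk currently sitting at $M_t$, the average of $M_{t+k}$ stays at $M_t$.

```python
import random

def cond_mean(M_t, k, n_sims=100_000, seed=1):
    """Monte Carlo estimate of E(M_{t+k} | F_t) for the +/-1 random walk,
    given that the walk currently sits at M_t."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_sims):
        m = M_t
        for _ in range(k):
            m += rng.choice((-1, 1))  # one symmetric step
        total += m
    return total / n_sims

print(cond_mean(M_t=5, k=10))  # close to 5, as the martingale property predicts
```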
## Stopping times
**Definition:**
A random variable $T$ is called a stopping time with respect to the filtration $\mathcal{F}_t$ if:
1) Intuitively: when $T$ occurs, you can tell that it has occurred;
2) Formally:
2.1) $T$ takes values in $\{0,1,2,3,\ldots\} \cup \{+\infty\}$;
2.2) the event $\{T=k\}$ belongs to $\mathcal{F}_k$ for every $k.$
### Problems:
#### Problem 1:
Let $X_t$ be a symmetric random walk,
$X_t=D_1+D_2+\ldots+D_t$, where the $D_i$ are independent and take the values $\pm 1$ with equal probability.
Filtration:
$\mathcal{F}_t=\sigma(X_1,X_2,\ldots,X_t)$ (we observe the values of the random walk).
Consider the random variables:
$T_1=\min\{t\mid X_t=100\}$,
$T_2=T_1+1$,
$T_3=T_1-1.$
Which of these are stopping times?
#### Solution:
$T_1=\min\{t\mid X_t=100\}$: yes, a stopping time. At the moment it occurs, we can tell whether it has occurred or not.
$T_2=T_1+1$: yes, a stopping time. Once $T_1$ has occurred, we know that $T_2$ occurs on the next step.
$T_3=T_1-1$: no. To know that $T_3$ has occurred, we would have to look one step into the future.
Intuitive picture: a grandmother tells her grandson, "Come visit me when the moment $T$ arrives," while the grandson only observes the values of $X$. Will the grandson arrive on time?
Answer: $T_1$ and $T_2.$
#### Problem 2:
We draw cards from a deck one at a time and see the drawn cards.
$T_1$, the draw of the second ace, is a stopping time;
$T_1/2$ is not a stopping time.
## Stopped process
**Definition:**
Let $X_t$ be a random process and $T$ a stopping time.
The process $Y_t=X_{\min\{t,T\}}$ is called the stopped process of $X_t$.
### Examples:
#### Example 1:
Let $X_t$ be a symmetric random walk and
$\tau=\min\{t\mid X_t=20\}.$
Draw two trajectories of $X_t$ and the corresponding trajectories of $Y_t=X_{\min\{t,\tau\}}.$

For $t \leqslant \tau$ we have $\min\{t,\tau\}=t$, so $Y_t = X_t$;
for $t > \tau$ the stopped process is frozen: $Y_t = X_\tau = 20.$
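A simulation sketch (my addition) that generates one trajectory and its stopped version; the values `barrier=20` and `t_max=2000` are arbitrary choices for illustration.

```python
import random

def stopped_path(barrier=20, t_max=2000, seed=2):
    """One trajectory of X_t and its stopped version Y_t = X_{min(t, tau)},
    where tau is the first time X hits `barrier` (None if never hit)."""
    rng = random.Random(seed)
    X, Y = [0], [0]
    tau = None
    for t in range(1, t_max + 1):
        X.append(X[-1] + rng.choice((-1, 1)))
        if tau is None and X[-1] == barrier:
            tau = t
        # Before (and at) tau the stopped path copies X; afterwards it freezes.
        Y.append(X[-1] if tau is None or t <= tau else barrier)
    return X, Y, tau

X, Y, tau = stopped_path()
# Before tau the two paths coincide; after tau, Y stays at the barrier.
```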

# A Recommender System with the Surprise library
Here we will implement a simple **Recommender System** with the **[Surprise](http://surpriselib.com/)** library. The main goal is to try out some of the library's basic features.
## 01 - Setting up the Environment & Dataset
### 01.1 - Installing the "surprise" library
The first thing we will do is install the **[Surprise](http://surpriselib.com/)** library, which is dedicated to **Recommender Systems**.
```
#conda install -c conda-forge scikit-surprise
#!pip install scikit-surprise
```
### 01.2 - Importing the required libraries
Now let us import the libraries needed to build our **Recommender System**.
```
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import surprise
from datetime import datetime
```
### 01.3 - Getting the dataset
Now we will fetch the **[FilmTrust](https://guoguibing.github.io/librec/datasets.html)** dataset, which basically contains:
- User IDs;
- Movie IDs;
- The rating given to each movie.
```
df = pd.read_csv(
"datasets/ratings.txt", # Get dataset.
sep=" ", # Separate data by space.
names=["user_id", "movie_id", "rating"] # Set columns names.
)
df.info()
df.head(20)
```
**NOTE:**
Note that we have **35,497** samples and 3 columns (features).
---
## 02 - Exploratory Data Analysis (EDA)
Now we will run a brief **Exploratory Data Analysis (EDA)** on our dataset to extract some insights.
### 02.1 - Counting the total number of movies, users and samples
```
movies = len(df["movie_id"].unique())
users = len(df["user_id"].unique())
samples = df.shape[0]
print("Total movies:", movies)
print("Total users:", users)
print("Total samples:", samples)
```
**NOTE:**
As you can see, we only have **35,497** samples, but the number of possible **movie**-**user** combinations is much larger:
```python
2071 * 1508 = 3,123,068
```
In other words, we have a lot of missing data, which can happen simply because some users have not watched certain movies. That is why it will be interesting to try to ***predict*** those missing ratings.
For example, let us look at the first 20 samples:
```
df.head(20)
```
**NOTE:**
With just 20 samples we can already draw some conclusions:
- User 1 rated only 12 movies:
    - So we will have to predict ratings for a large number of movies for this user.
- User 2 rated only 2 movies:
    - Even worse, we may run into an underfitting problem when predicting ratings for this user, since the algorithm will not have enough data to generalize.
### 02.2 - Counting the ratings
```
df['rating'].value_counts().plot(kind="bar")
plt.xlabel("User Rating")
plt.ylabel("Frequency")
plt.show()
```
**NOTE:**
Looking at the chart above we see that:
- The minimum rating was 0.5;
- The maximum rating was 4.0;
- The most frequent rating was 4.0.
Let us check the same result with a different approach, purely for teaching purposes.
```
max_rating = df["rating"].max()
min_rating = df["rating"].min()
print("Max rating: {0} \nMin rating: {1}".format(max_rating, min_rating))
```
---
## 03 - Preparing & Training a Recommender System model
### 03.1 - Redefining the rating scale
By default the **surprise.Reader** class uses **rating_scale=(1, 5)**, so we will redefine this parameter to match our needs, i.e. from **0.5** to **4.0**.
To use a dataset external to the Surprise library (it also ships with ready-made test datasets) you first have to use the [Reader](https://github.com/NicolasHug/Surprise/blob/fa7455880192383f01475162b4cbd310d91d29ca/surprise/reader.py) class. Its default constructor looks like this:
```python
def __init__(
self,
name=None,
line_format='user item rating',
sep=None,
rating_scale=(1, 5),
skip_lines=0
):
```
Let us create an instance of this class, passing only the argument we care about: **rating_scale = (0.5, 4.0)**.
```
reader = surprise.Reader(rating_scale = (0.5, 4.0))
```
### 03.2 - Passing our dataset (pandas DataFrame) to Surprise
The Surprise library works a little differently from what you may be used to. One difference is how you pass external data to it: to load a pandas DataFrame we can use the **load_from_df()** method of the **Dataset** class.
This method takes the following arguments:
- **df (DataFrame):**
    - The dataframe containing the ratings. It must have three columns, corresponding to:
        - the user ids;
        - the item ids (movies in our case);
        - and the ratings, in that order.
- **reader (Reader):**
    - A Reader used to parse the data. Only the **rating_scale** field needs to be specified.
```
# Load df to surprise library + Pass rating_scale by Reader class.
df_surprise = surprise.Dataset.load_from_df(df, reader)
```
### 03.3 - Building a training set from "df_surprise"
As we saw in the previous step, we handed our dataset to the Surprise library and stored it in the variable **df_surprise**. Now we will take the entire dataset and build a training set from it (that's right, with no validation/test data).
For that we use the **build_full_trainset()** method of the **[Dataset](https://surprise.readthedocs.io/en/stable/dataset.html#surprise.dataset.DatasetAutoFolds.build_full_trainset)** class:
```
df_without_missing = df_surprise.build_full_trainset()
```
**NOTE:**
The variable name (df_without_missing) makes it explicit that this training set contains no **missing** values.
### 03.4 - Creating an instance of the SVD++ algorithm
As we know, one approach to making predictions in Recommender Systems is **Matrix Factorization**, which is based on **Collaborative Filtering**.
**NOTE:**
This algorithm creates *features* for the users and the items (movies in our case), and from these *features* we can make predictions.
The first thing we do here is create an instance of the SVD++ algorithm:
```
algo_svdpp = surprise.SVDpp(n_factors=20) # SVD++ instance.
```
### 03.5 - Learning the features with the SVD++ algorithm
Learning the *features* is very simple: just train the model with the **fit()** method, passing a dataset as argument.
```
algo_svdpp.fit(df_without_missing)
```
**NOTE:**
The code above simply uses the **SVD++** algorithm to learn *features* for our dataset; we asked for 20 features (n_factors=20). Note also that it uses **Matrix Factorization**, as can be seen in the output above.
### 03.6 - Building a DataFrame with the missing data
As we know, we have:
```python
Total movies: 2071
Total users: 1508
Total samples: 35497
```
Which would give us:
```python
2071 * 1508 = 3,123,068
```
That is, millions of user-movie rating combinations are still missing. Since we have learned the *features* for this dataset, we can now ***predict*** the missing combinations.
First we call the **build_anti_testset()** method on our variable **df_without_missing** (our dataset with no missing data). This method returns a list of ratings that can be used as a test set.
So let us collect the combinations missing from our dataset, i.e. the missing data:
```
df_missing_values = df_without_missing.build_anti_testset()
```
If you call **len(df_missing_values)**, you will see that there are millions of combinations that were missing; these are the pairs our **SVD++** algorithm will *predict (estimate)*.
```
len(df_missing_values)
```
**Hmm, but this output is smaller than the maximum number of possible combinations!**
Remember, this output contains only the missing pairs. We must subtract the **35,497** samples we already have from the total:
```python
(Total movies: 2071) * (Total users: 1508)
2071 * 1508 = 3,123,068
(Total combinations: 3,123,068) - (Samples we already had: 35,497) = 3,087,571
```
Great: we now have the missing combinations stored in the variable **df_missing_values**. Purely for study purposes, the snippet below **prints** only the first 10 missing combinations, because printing the whole **df_missing_values** variable would produce a huge output.
```
count = 0
while count < 10:
print(df_missing_values[count])
count += 1
```
**NOTE:**
If you compare this output with our original dataset (before any predictions), you will see that the first user only rated movies up to ID 12; the missing user-movie pairs are the ones our SVD++ algorithm will fill in.
### 03.7 - Gathering all the data into a single object
Now we use the **test()** method of the SVD++ model (object) to relate the data users had already provided with the values we predict (estimate). Each prediction contains:
- **uid:**
    - The user ID.
- **iid:**
    - The movie ID.
- **r_ui:**
    - The true response, i.e. the value provided by the user.
- **est:**
    - The predicted/estimated rating.
```
df_complete = algo_svdpp.test(df_missing_values)
```
Now that we have an object with all the remaining pairs, let us look at the first prediction:
```
df_complete[0]
```
**NOTE:**
Note the small difference between the fill value **r_ui** (for missing pairs this is the training-set global mean, not a real user rating) and the value estimated by the SVD++ algorithm, **est**:
- r_ui=3.0028030537791928
- est=3.530673400436856
### 03.8 - Getting the TOP recommendations per user
Now we will write a function that returns the top-N recommendations for each user. It returns a **dictionary** whose keys are users and whose values are lists of tuples.
```
from collections import defaultdict
def get_top_recommendations(predicts, n=5):
    top_n = defaultdict(list)  # Dictionary whose values default to empty lists.
    for user, movie, _, predict, _ in predicts:
        top_n[user].append((movie, predict))  # Add (movie, estimated rating).
    for user, user_predicts in top_n.items():
        user_predicts.sort(key=lambda x: x[1], reverse=True)  # Sort by estimated rating, highest first.
        top_n[user] = user_predicts[:n]  # Keep only the top-n entries.
    return top_n
```
Let us start by getting the top 5 recommendations (the default value in get_top_recommendations) for each user:
```
top_five = get_top_recommendations(df_complete)
top_five
```
Looking at the output above we have:
- A dictionary where:
    - The key is the user ID;
    - The values are a list of tuples, where:
        - The first element of each tuple is the movie ID;
        - The second element is the movie's estimated rating.
    - All in descending order, i.e. from the highest rating to the lowest.
**NOTE:**
You can now take this output and use it however you wish, for example pass it to an API. Purely for teaching purposes, let us take this output (the dictionary) and print, for each user ID, the list of its top 5 movie IDs (not ratings).
```
for user, user_predicts in top_five.items():
print(user, [movie for (movie, _) in user_predicts])
```
**NOTE:**
Note that these movie IDs are ordered so that the best-rated movies come first.
> So do not confuse the movie IDs with the ratings.
### 03.9 - Getting a specific user/movie prediction
OK, but how can I get a prediction for a specific user and movie? Simple: see the code below.
```
user_1_predict = algo_svdpp.predict(uid=1, iid=15)  # raw ids must match the dtype in the DataFrame (ints here)
user_1_predict
```
**NOTES:**
- The first and crucial observation is that the user had not rated this movie:
    - r_ui=None
- We can also grab the predicted rating directly through the **est** attribute:
```
rating = user_1_predict.est
print(rating)
```
---
## 04 - Validating our model
> Great, we have trained a model and made predictions, but we still need to ***validate*** the model.
### 04.1 - Splitting the data into training and test (validation) sets
Just like *scikit-learn* has the **train_test_split()** function, the Surprise library has an equivalent one for Recommender Systems.
```
from surprise.model_selection import train_test_split
df_train, df_test = train_test_split(df_surprise, test_size=0.3)
```
**NOTE:**
Note that here we pass **df_surprise**, which is what **load_from_df()** returned:
```python
df_surprise = surprise.Dataset.load_from_df(df, reader)
```
In other words, we are passing the real dataset, without any missing data.
### 04.2 - Training the model on the training data
Now we create an instance of our SVD++ algorithm and train the model on the training data (df_train):
```
model_svdpp = surprise.SVDpp(n_factors=20) # SVD++ Instance.
model_svdpp = model_svdpp.fit(df_train)
```
### 04.3 - Making predictions on the test (validation) data
Now that we have trained our model with the fit() method on the training data, let us make some predictions on the test (validation) data:
```
general_predicts = model_svdpp.test(df_test)
```
Since the full output would be very large, here is a simple snippet that shows only the first 10 predictions:
```
count = 0
while count < 10:
print(general_predicts[count])
count += 1
```
**NOTE:**
Because the data were split into **train** and **test (validation)** sets at random, the predictions will not come out in any particular order. If you are interested, you can inspect individual predictions by indexing into the **general_predicts** variable:
```
general_predicts[0]
```
**NOTE:**
Note that it matches one of the entries printed above.
### 04.4 - Validating the model with the "accuracy.rmse" metric
The **Surprise** library provides *validation* helpers. Let us use the **rmse()** function from the accuracy module:
```
from surprise import accuracy
rmse = accuracy.rmse(general_predicts)
```
**But what does this output mean?**
It means that, on average, our model's predictions are off by roughly **0.80** rating points, in either direction.
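To make the RMSE number concrete, here is a hand-rolled version of the metric over (true, predicted) rating pairs; it computes the same quantity that `accuracy.rmse` derives from the Prediction objects (the toy numbers below are made up for illustration):

```python
import math

def rmse(pairs):
    # Root-mean-square error over (true_rating, predicted_rating) pairs.
    return math.sqrt(sum((r - est) ** 2 for r, est in pairs) / len(pairs))

toy = [(4.0, 3.2), (2.5, 3.0), (3.5, 3.4)]
print(rmse(toy))  # sqrt((0.64 + 0.25 + 0.01) / 3) ≈ 0.548
```

So an RMSE of 0.80 on a 0.5-4.0 scale means the typical prediction misses the true rating by a bit less than one rating step.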
### 04.5 - Tuning the hyperparameters
Perhaps an error of **RMSE: 0.8099** is a bit large, depending on the problem.
> So how do we improve the performance of our algorithm (model)? **By tuning the hyperparameters!**
One way to try to improve the model's performance is by **"tuning the hyperparameters"**. Let us see how to do that in practice:
```
start_time = datetime.now()
param_grid = {
'lr_all': [0.01, 0.001, 0.07, 0.005],
'reg_all': [0.02, 0.1, 1.0, 0.005]
}
surprise_grid = surprise.model_selection.GridSearchCV(
surprise.SVDpp, # Estimator with fit().
param_grid, # Params.
measures=["rmse"], # Metric.
cv=3, # Cross-Validation K-Fold.
n_jobs=-1
)
surprise_grid.fit(df_surprise) # Training model.
print(surprise_grid.best_params['rmse'])
end_time = datetime.now()
print('Method runtime: {}'.format(end_time - start_time))
```
**NOTES:**
- Great: now that we have the best values of **lr_all** and **reg_all**, we just pass them in when training the model.
- Note also that this hyperparameter search took more than 6 minutes, because the algorithm has to try every combination of the hyperparameters we passed.
---
## 05 - Finding the most similar movies with Cosine Distance/Similarity
Now imagine we want the 5 or 10 movies most similar to a given movie. That is, the focus is now on the **similarity** between movies. As we know, one way to do this is with the ***Cosine Distance/Similarity*** approach.
Let us see how to implement this in practice:
```
from surprise import KNNBasic
df_without_missing = df_surprise.build_full_trainset()
# KNN Algorithms instance.
algo_cosine_similarity = KNNBasic(
    sim_options = {
        'name': 'cosine', # Name of the similarity measure.
        # If "user_based" is True the algorithm computes similarity between users.
        # If "user_based" is False the algorithm computes similarity between movies (items).
        'user_based': False
    }
)
algo_cosine_similarity.fit(df_without_missing) # Training
# iid (int) – The (inner) id of the user (or item) for which we want the nearest neighbors.
# Get the top 10 nearest neighbors (k=10).
neighbors = algo_cosine_similarity.get_neighbors(iid=343, k=10) # Get neighbors.
neighbors
for movie in neighbors:
print(movie)
```
**NOTES:**
- Looking at the outputs above: when we pass **iid=343**, it returns the top 10 movies most similar (in Cosine Distance/Similarity) to the movie with that ID:
    - This is interesting because we can now recommend these movies to whoever watches movie iid=343.
- This algorithm was trained to find the nearest neighbors (KNN) **based on items**, **not users**:
    - since we passed **'user_based': False**.
**REFERENCE:**
[DidáticaTech](https://didatica.tech/)
**Rodrigo Leite -** *drigols*
<a href="https://colab.research.google.com/github/sortsammcdonald/edx-python_and_data_science/blob/master/final_project.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Determining Iris species based on simple metrics
This report evaluates whether species of Iris plants can be categorised via simple metrics such as sepal or petal length. This matters because it makes it straightforward for non-experts to reliably predict the species: they simply record the measurements and feed them into a database that an ML algorithm can parse, and they should then obtain a correct result with a small chance of error.
More broadly this could be useful because there may be other species this approach could be applied to, and in turn, applied at scale, it could give us insights into how plants are adjusting to changing environments.
## Data set and preliminary remarks
I am using the following data set: https://www.kaggle.com/uciml/iris
This consists of data on three species of Iris:
- Setosa
- Versicolor
- Virginica
With 150 samples (50 per species) recorded based on the following properties:
- Sepal Length
- Sepal Width
- Petal Length
- Petal Width
My goal is first to review this data and see if any correlations can be drawn between these metrics, and whether there is sufficient clustering of the three species for a machine learning algorithm to predict the Iris species based on them. If so, I will train a KNN algorithm and test its predictive power.
```
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn import metrics
```
## Data preparation and cleaning
Since the file is in CSV format it is possible to generate a dataframe via pandas. This can be used in turn to evaluate the data and generate visualisations. However, before undertaking any analysis it is necessary to check the quality of the data to ensure it is usable.
```
iris_df = pd.read_csv('Iris.csv', sep=',')
iris_df.head()
```
A dataframe has successfully been generated based on the CSV file.
```
iris_df.info()
```
There do not appear to be any null values, so I can proceed with the analysis. The Id column serves no purpose for the analysis, so it is simply ignored in what follows.
## Exploratory analysis
Next I will undertake an exploratory analysis to determine whether there are any correlations in the attributes within the dataframe for each species. I will also consider whether there is sufficient clustering across the three species to use these metrics as a way to predict the species. To do this I will generate scatterplots showing Sepal Length vs Sepal Width and Petal Length vs Petal Width, with each of the three species highlighted in a different colour.
```
scatter_plot_sepal = iris_df[iris_df.Species=='Iris-setosa'].plot(kind ='scatter', x = 'SepalLengthCm', y ='SepalWidthCm',color='orange', label='Setosa')
iris_df[iris_df.Species=='Iris-versicolor'].plot(kind = 'scatter', x ='SepalLengthCm', y ='SepalWidthCm',color='blue', label='Versicolor',ax=scatter_plot_sepal)
iris_df[iris_df.Species=='Iris-virginica'].plot(kind = 'scatter', x ='SepalLengthCm', y ='SepalWidthCm',color='green', label='Virginica', ax=scatter_plot_sepal)
scatter_plot_sepal.set_xlabel("Sepal Length")
scatter_plot_sepal.set_ylabel("Sepal Width")
scatter_plot_sepal.set_title("Sepal Length VS Width")
scatter_plot_sepal=plt.gcf()
plt.show()
scatter_plot_petal = iris_df[iris_df.Species=='Iris-setosa'].plot.scatter(x = 'PetalLengthCm', y ='PetalWidthCm', color='orange', label='Setosa')
iris_df[iris_df.Species=='Iris-versicolor'].plot.scatter(x = 'PetalLengthCm', y ='PetalWidthCm', color='blue', label='Versicolor', ax = scatter_plot_petal)
iris_df[iris_df.Species=='Iris-virginica'].plot.scatter(x = 'PetalLengthCm', y ='PetalWidthCm', color='green', label='Virginica', ax = scatter_plot_petal)
scatter_plot_petal.set_xlabel("Petal Length")
scatter_plot_petal.set_ylabel("Petal Width")
scatter_plot_petal.set_title("Petal Length VS Width")
scatter_plot_petal=plt.gcf()
plt.show()
```
Visually it would appear that there are correlations in these attributes. There is clustering among the different species with respect to Sepal Length and Width. Similarly, Petal Length versus Width shows correlation, with each species again forming its own cluster.
## Testing and Training a Machine Learning Algorithm
In order to train and test the prediction accuracy of a machine learning algorithm, it is necessary to divide the data into a training sample and a testing sample. Since we already know the species for the testing sample, it is possible to compare the predictions the trained algorithm makes against the actual results.
For my analysis I will train a K-Nearest Neighbours (KNN) algorithm and test how accurate its predictions of Iris species are against the test sample.
```
train, test = train_test_split(iris_df, test_size = 0.3)
print(train.shape)
print(test.shape)
```
I have generated a training data set of 105 values and a testing data set of 45 values.
```
train_X = train[['SepalLengthCm','SepalWidthCm','PetalLengthCm','PetalWidthCm']]  # training data features
train_y = train.Species  # output of our training data
test_X = test[['SepalLengthCm','SepalWidthCm','PetalLengthCm','PetalWidthCm']]  # test data features
test_y = test.Species  # output value of test data
train_X.head(2)
train_y.head()  # output of the training data
model = KNeighborsClassifier(n_neighbors=3)  # examine 3 neighbours when classifying a new point
model.fit(train_X, train_y)
prediction = model.predict(test_X)
print('The accuracy of the KNN is', metrics.accuracy_score(prediction, test_y))
# Accuracy for k = 1..10 (a plain list avoids the deprecated pd.Series.append pattern).
a_index = list(range(1, 11))
accuracies = []
for i in a_index:
    model = KNeighborsClassifier(n_neighbors=i)
    model.fit(train_X, train_y)
    prediction = model.predict(test_X)
    accuracies.append(metrics.accuracy_score(prediction, test_y))
plt.plot(a_index, accuracies)
plt.xticks(a_index)
```
# Initialization
Welcome to the first assignment of "Improving Deep Neural Networks".
Training your neural network requires specifying an initial value of the weights. A well chosen initialization method will help learning.
If you completed the previous course of this specialization, you probably followed our instructions for weight initialization, and it has worked out so far. But how do you choose the initialization for a new neural network? In this notebook, you will see how different initializations lead to different results.
A well chosen initialization can:
- Speed up the convergence of gradient descent
- Increase the odds of gradient descent converging to a lower training (and generalization) error
To get started, run the following cell to load the packages and the planar dataset you will try to classify.
```
import numpy as np
import matplotlib.pyplot as plt
import sklearn
import sklearn.datasets
from init_utils import sigmoid, relu, compute_loss, forward_propagation, backward_propagation
from init_utils import update_parameters, predict, load_dataset, plot_decision_boundary, predict_dec
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# load image dataset: blue/red dots in circles
train_X, train_Y, test_X, test_Y = load_dataset()
```
You would like a classifier to separate the blue dots from the red dots.
## 1 - Neural Network model
You will use a 3-layer neural network (already implemented for you). Here are the initialization methods you will experiment with:
- *Zeros initialization* -- setting `initialization = "zeros"` in the input argument.
- *Random initialization* -- setting `initialization = "random"` in the input argument. This initializes the weights to large random values.
- *He initialization* -- setting `initialization = "he"` in the input argument. This initializes the weights to random values scaled according to a paper by He et al., 2015.
**Instructions**: Please quickly read over the code below, and run it. In the next part you will implement the three initialization methods that this `model()` calls.
```
def model(X, Y, learning_rate = 0.01, num_iterations = 15000, print_cost = True, initialization = "he"):
"""
Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.
Arguments:
X -- input data, of shape (2, number of examples)
Y -- true "label" vector (containing 0 for red dots; 1 for blue dots), of shape (1, number of examples)
learning_rate -- learning rate for gradient descent
num_iterations -- number of iterations to run gradient descent
print_cost -- if True, print the cost every 1000 iterations
initialization -- flag to choose which initialization to use ("zeros","random" or "he")
Returns:
parameters -- parameters learnt by the model
"""
grads = {}
costs = [] # to keep track of the loss
m = X.shape[1] # number of examples
layers_dims = [X.shape[0], 10, 5, 1]
# Initialize parameters dictionary.
if initialization == "zeros":
parameters = initialize_parameters_zeros(layers_dims)
elif initialization == "random":
parameters = initialize_parameters_random(layers_dims)
elif initialization == "he":
parameters = initialize_parameters_he(layers_dims)
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
a3, cache = forward_propagation(X, parameters)
# Loss
cost = compute_loss(a3, Y)
# Backward propagation.
grads = backward_propagation(X, Y, cache)
# Update parameters.
parameters = update_parameters(parameters, grads, learning_rate)
# Print the loss every 1000 iterations
if print_cost and i % 1000 == 0:
print("Cost after iteration {}: {}".format(i, cost))
costs.append(cost)
# plot the loss
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
```
## 2 - Zero initialization
There are two types of parameters to initialize in a neural network:
- the weight matrices $(W^{[1]}, W^{[2]}, W^{[3]}, ..., W^{[L-1]}, W^{[L]})$
- the bias vectors $(b^{[1]}, b^{[2]}, b^{[3]}, ..., b^{[L-1]}, b^{[L]})$
**Exercise**: Implement the following function to initialize all parameters to zeros. You'll see later that this does not work well since it fails to "break symmetry", but let's try it anyway and see what happens. Use `np.zeros((..,..))` with the correct shapes.
```
# GRADED FUNCTION: initialize_parameters_zeros
def initialize_parameters_zeros(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
parameters = {}
L = len(layers_dims) # number of layers in the network
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.zeros((layers_dims[l], layers_dims[l-1]))
parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_zeros([3,2,1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
```
**Expected Output**:
<table>
<tr>
<td>
**W1**
</td>
<td>
[[ 0. 0. 0.]
[ 0. 0. 0.]]
</td>
</tr>
<tr>
<td>
**b1**
</td>
<td>
[[ 0.]
[ 0.]]
</td>
</tr>
<tr>
<td>
**W2**
</td>
<td>
[[ 0. 0.]]
</td>
</tr>
<tr>
<td>
**b2**
</td>
<td>
[[ 0.]]
</td>
</tr>
</table>
Run the following code to train your model on 15,000 iterations using zeros initialization.
```
parameters = model(train_X, train_Y, initialization = "zeros")
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
```
The performance is really bad: the cost does not really decrease, and the algorithm performs no better than random guessing. Why? Let's look at the details of the predictions and the decision boundary:
```
print ("predictions_train = " + str(predictions_train))
print ("predictions_test = " + str(predictions_test))
plt.title("Model with Zeros initialization")
axes = plt.gca()
axes.set_xlim([-1.5,1.5])
axes.set_ylim([-1.5,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
```
The model is predicting 0 for every example.
In general, initializing all the weights to zero results in the network failing to break symmetry. This means that every neuron in each layer will learn the same thing, and you might as well be training a neural network with $n^{[l]}=1$ for every layer, and the network is no more powerful than a linear classifier such as logistic regression.
<font color='blue'>
**What you should remember**:
- The weights $W^{[l]}$ should be initialized randomly to break symmetry.
- It is however okay to initialize the biases $b^{[l]}$ to zeros. Symmetry is still broken so long as $W^{[l]}$ is initialized randomly.
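The symmetry argument can be made concrete. A minimal numpy sketch (not part of the assignment) shows that with a zero-initialized layer, every hidden unit produces the same activation and receives the same gradient, so gradient descent can never make them different:

```
import numpy as np

np.random.seed(0)
X = np.random.randn(2, 5)                      # 2 features, 5 examples
W1 = np.zeros((3, 2)); b1 = np.zeros((3, 1))   # zero-initialized layer

A1 = np.maximum(0, W1 @ X + b1)  # every hidden unit outputs the same thing
assert np.allclose(A1, A1[0])    # (all zeros here)

# Backward pass for a toy upstream gradient: every row of dW1 is identical,
# so all three units get the same update and symmetry is never broken.
dA1 = np.ones_like(A1)
dZ1 = dA1 * (A1 > 0)
dW1 = dZ1 @ X.T
assert np.allclose(dW1, dW1[0])
```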
## 3 - Random initialization
To break symmetry, let's initialize the weights randomly. Following random initialization, each neuron can then proceed to learn a different function of its inputs. In this exercise, you will see what happens if the weights are initialized randomly, but to very large values.
**Exercise**: Implement the following function to initialize your weights to large random values (scaled by \*10) and your biases to zeros. Use `np.random.randn(..,..) * 10` for weights and `np.zeros((.., ..))` for biases. We are using a fixed `np.random.seed(..)` to make sure your "random" weights match ours, so don't worry if running your code several times always gives you the same initial values for the parameters.
```
# GRADED FUNCTION: initialize_parameters_random
def initialize_parameters_random(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
np.random.seed(3) # This seed makes sure your "random" numbers will be the same as ours
parameters = {}
L = len(layers_dims) # integer representing the number of layers
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l-1]) * 10
parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_random([3, 2, 1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
```
**Expected Output**:
<table>
<tr>
<td>
**W1**
</td>
<td>
[[ 17.88628473 4.36509851 0.96497468]
[-18.63492703 -2.77388203 -3.54758979]]
</td>
</tr>
<tr>
<td>
**b1**
</td>
<td>
[[ 0.]
[ 0.]]
</td>
</tr>
<tr>
<td>
**W2**
</td>
<td>
[[-0.82741481 -6.27000677]]
</td>
</tr>
<tr>
<td>
**b2**
</td>
<td>
[[ 0.]]
</td>
</tr>
</table>
Run the following code to train your model on 15,000 iterations using random initialization.
```
parameters = model(train_X, train_Y, initialization = "random")
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
```
If you see "inf" as the cost after iteration 0, this is because of numerical roundoff; a more numerically sophisticated implementation would fix this. But this isn't worth worrying about for our purposes.
Anyway, it looks like you have broken symmetry, and this gives better results than before. The model is no longer outputting all 0s.
```
print (predictions_train)
print (predictions_test)
plt.title("Model with large random initialization")
axes = plt.gca()
axes.set_xlim([-1.5,1.5])
axes.set_ylim([-1.5,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
```
**Observations**:
- The cost starts very high. This is because with large random-valued weights, the last activation (sigmoid) outputs results that are very close to 0 or 1 for some examples, and when it gets that example wrong it incurs a very high loss for that example. Indeed, when $\log(a^{[3]}) = \log(0)$, the loss goes to infinity.
- Poor initialization can lead to vanishing/exploding gradients, which also slows down the optimization algorithm.
- If you train this network longer you will see better results, but initializing with overly large random numbers slows down the optimization.
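The $\log(0)$ point can be reproduced in isolation. A small sketch (the values are chosen for illustration, not taken from the training run) shows how a saturated sigmoid drives the cross-entropy loss to a huge value when the label disagrees:

```
import numpy as np

def sigmoid(z):
    return 1. / (1. + np.exp(-z))

z = np.array([-30., 30.])   # pre-activations typical of weights scaled by 10
a = sigmoid(z)              # saturated: numerically ~0 and ~1
y = np.array([1., 0.])      # suppose both labels disagree with the output

eps = 1e-12                 # clip so log(0) does not return -inf
loss = -(y * np.log(a + eps) + (1 - y) * np.log(1 - a + eps)).mean()
print(loss)                 # huge per-example loss (~27 with this clipping)
```

Without the clipping, `np.log(0)` would give `-inf`, which is exactly the "inf" cost mentioned earlier.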
<font color='blue'>
**In summary**:
- Initializing weights to very large random values does not work well.
- Hopefully initializing with small random values does better. The important question is: how small should these random values be? Let's find out in the next part!
## 4 - He initialization
Finally, try "He Initialization"; this is named for the first author of He et al., 2015. (If you have heard of "Xavier initialization", this is similar except Xavier initialization uses a scaling factor for the weights $W^{[l]}$ of `sqrt(1./layers_dims[l-1])` where He initialization would use `sqrt(2./layers_dims[l-1])`.)
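A quick numerical check of this scaling difference (a sketch, not part of the assignment): pushing a standard-normal batch through several ReLU layers, He scaling keeps the activation scale roughly constant, while Xavier scaling shrinks it by about $\sqrt{2}$ per layer:

```
import numpy as np

np.random.seed(1)
n, depth, m = 256, 10, 512   # layer width, number of layers, batch size

def final_std(scale):
    a = np.random.randn(n, m)                 # standard-normal input batch
    for _ in range(depth):
        W = np.random.randn(n, n) * scale(n)  # weights scaled by fan-in
        a = np.maximum(0, W @ a)              # ReLU layer
    return a.std()

he_std = final_std(lambda fan_in: np.sqrt(2. / fan_in))      # He
xavier_std = final_std(lambda fan_in: np.sqrt(1. / fan_in))  # Xavier
print(he_std, xavier_std)
# He keeps the activation scale roughly constant; Xavier shrinks it by
# about sqrt(2) per ReLU layer (~32x smaller after 10 layers here).
```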
**Exercise**: Implement the following function to initialize your parameters with He initialization.
**Hint**: This function is similar to the previous `initialize_parameters_random(...)`. The only difference is that instead of multiplying `np.random.randn(..,..)` by 10, you will multiply it by $\sqrt{\frac{2}{\text{dimension of the previous layer}}}$, which is what He initialization recommends for layers with a ReLU activation.
```
# GRADED FUNCTION: initialize_parameters_he
def initialize_parameters_he(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
np.random.seed(3)
parameters = {}
L = len(layers_dims) - 1 # integer representing the number of layers
for l in range(1, L + 1):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l-1]) * np.sqrt(2./layers_dims[l-1])
parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_he([2, 4, 1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
```
**Expected Output**:
<table>
<tr>
<td>
**W1**
</td>
<td>
[[ 1.78862847 0.43650985]
[ 0.09649747 -1.8634927 ]
[-0.2773882 -0.35475898]
[-0.08274148 -0.62700068]]
</td>
</tr>
<tr>
<td>
**b1**
</td>
<td>
[[ 0.]
[ 0.]
[ 0.]
[ 0.]]
</td>
</tr>
<tr>
<td>
**W2**
</td>
<td>
[[-0.03098412 -0.33744411 -0.92904268 0.62552248]]
</td>
</tr>
<tr>
<td>
**b2**
</td>
<td>
[[ 0.]]
</td>
</tr>
</table>
Run the following code to train your model on 15,000 iterations using He initialization.
```
parameters = model(train_X, train_Y, initialization = "he")
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
plt.title("Model with He initialization")
axes = plt.gca()
axes.set_xlim([-1.5,1.5])
axes.set_ylim([-1.5,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
```
**Observations**:
- The model with He initialization separates the blue and the red dots very well in a small number of iterations.
## 5 - Conclusions
You have seen three different types of initializations. For the same number of iterations and same hyperparameters the comparison is:
<table>
<tr>
<td>
**Model**
</td>
<td>
**Train accuracy**
</td>
<td>
**Problem/Comment**
</td>
</tr>
<td>
3-layer NN with zeros initialization
</td>
<td>
50%
</td>
<td>
fails to break symmetry
</td>
<tr>
<td>
3-layer NN with large random initialization
</td>
<td>
83%
</td>
<td>
too large weights
</td>
</tr>
<tr>
<td>
3-layer NN with He initialization
</td>
<td>
99%
</td>
<td>
recommended method
</td>
</tr>
</table>
<font color='blue'>
**What you should remember from this notebook**:
- Different initializations lead to different results
- Random initialization is used to break symmetry and make sure different hidden units can learn different things
- Don't intialize to values that are too large
- He initialization works well for networks with ReLU activations.
# Deep Crossentropy method
In this section we'll extend your CEM implementation with neural networks! You will train a multi-layer neural network to solve simple continuous state space games. __Please make sure you're done with the tabular crossentropy method from the previous notebook.__

```
import sys, os
if 'google.colab' in sys.modules and not os.path.exists('.setup_complete'):
!wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/master/setup_colab.sh -O- | bash
!touch .setup_complete
# This code creates a virtual display to draw game images on.
# It will have no effect if your machine has a monitor.
if type(os.environ.get("DISPLAY")) is not str or len(os.environ.get("DISPLAY")) == 0:
!bash ../xvfb start
os.environ['DISPLAY'] = ':1'
from tqdm import tqdm, tqdm_notebook
import gym
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# if you see "<classname> has no attribute .env", remove .env or update gym
env = gym.make("CartPole-v0").env
env.reset()
n_actions = env.action_space.n
state_dim = env.observation_space.shape[0]
plt.imshow(env.render("rgb_array"))
print("state vector dim =", state_dim)
print("n_actions =", n_actions)
```
# Neural Network Policy
For this assignment we'll utilize the simplified neural network implementation from __[Scikit-learn](https://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPClassifier.html)__. Here's what you'll need:
* `agent.partial_fit(states, actions)` - make a single training pass over the data. Maximize the probability of :actions: from :states:
* `agent.predict_proba(states)` - predict probabilities of all actions, a matrix of shape __[len(states), n_actions]__
```
from sklearn.neural_network import MLPClassifier
agent = MLPClassifier(
hidden_layer_sizes=(20, 20),
warm_start=True,
activation='tanh',
max_iter=1,
)
# initialize agent to the dimension of state space and number of actions
agent.partial_fit([env.reset()] * n_actions, range(n_actions), range(n_actions))
env.reset()
agent.predict_proba([env.reset()])
def generate_session(env, agent, t_max=1000):
"""
Play a single game using agent neural network.
Terminate when game finishes or after :t_max: steps
"""
states, actions = [], []
total_reward = 0
s = env.reset()
for t in range(t_max):
# use agent to predict a vector of action probabilities for state :s:
probs = agent.predict_proba([s])[0] # <YOUR CODE>
assert probs.shape == (env.action_space.n,), "make sure probabilities are a vector (hint: np.reshape)"
# use the probabilities you predicted to pick an action
# sample proportionally to the probabilities, don't just take the most likely action
a = np.random.choice(np.arange(n_actions), p=probs) # <YOUR CODE>
# ^-- hint: try np.random.choice
new_s, r, done, info = env.step(a)
# record sessions like you did before
states.append(s)
actions.append(a)
total_reward += r
s = new_s
if done:
break
return states, actions, total_reward
dummy_states, dummy_actions, dummy_reward = generate_session(env, agent, t_max=5)
print("states:", np.stack(dummy_states))
print("actions:", dummy_actions)
print("reward:", dummy_reward)
```
### CEM steps
Deep CEM uses exactly the same strategy as the regular CEM, so you can copy your function code from previous notebook.
The only difference is that now each observation is not a number but a `float32` vector.
```
def select_elites(states_batch, actions_batch, rewards_batch, percentile=50):
"""
Select states and actions from games that have rewards >= percentile
:param states_batch: list of lists of states, states_batch[session_i][t]
:param actions_batch: list of lists of actions, actions_batch[session_i][t]
:param rewards_batch: list of rewards, rewards_batch[session_i]
:returns: elite_states,elite_actions, both 1D lists of states and respective actions from elite sessions
Please return elite states and actions in their original order
[i.e. sorted by session number and timestep within session]
If you are confused, see examples below. Please don't assume that states are integers
(they will become different later).
"""
reward_threshold = np.percentile(rewards_batch, percentile) # <YOUR CODE: compute minimum reward for elite sessions. Hint: use np.percentile()>
elite_states = []
elite_actions = []
for i in range(len(states_batch)):
if rewards_batch[i] >= reward_threshold:
elite_states += states_batch[i]
elite_actions += actions_batch[i]
return elite_states, elite_actions
```
# Training loop
Generate sessions, select N best and fit to those.
```
from IPython.display import clear_output
def show_progress(rewards_batch, log, percentile, reward_range=[-990, +10]):
"""
A convenience function that displays training progress.
No cool math here, just charts.
"""
mean_reward = np.mean(rewards_batch)
threshold = np.percentile(rewards_batch, percentile)
log.append([mean_reward, threshold])
clear_output(True)
print("mean reward = %.3f, threshold=%.3f" % (mean_reward, threshold))
plt.figure(figsize=[8, 4])
plt.subplot(1, 2, 1)
plt.plot(list(zip(*log))[0], label='Mean rewards')
plt.plot(list(zip(*log))[1], label='Reward thresholds')
plt.legend()
plt.grid()
plt.subplot(1, 2, 2)
plt.hist(rewards_batch, range=reward_range)
plt.vlines([np.percentile(rewards_batch, percentile)],
[0], [100], label="percentile", color='red')
plt.legend()
plt.grid()
plt.show()
n_sessions = 100
percentile = 70
log = []
for i in range(100):
# generate new sessions
sessions = [ generate_session(env, agent) for i in np.arange(n_sessions) ]
states_batch, actions_batch, rewards_batch = map(np.array, zip(*sessions))
elite_states, elite_actions = select_elites(states_batch, actions_batch, rewards_batch, percentile) # <YOUR CODE: select elite actions just like before>
#<YOUR CODE: partial_fit agent to predict elite_actions(y) from elite_states(X)>
agent.partial_fit(elite_states, elite_actions)
show_progress(rewards_batch, log, percentile, reward_range=[0, np.max(rewards_batch)])
if np.mean(rewards_batch) > 190:
print("You Win! You may stop training now via KeyboardInterrupt.")
```
# Results
```
import gym.wrappers
env = gym.wrappers.Monitor(gym.make("CartPole-v0"), directory="videos", force=True)
sessions = [generate_session(env, agent) for _ in range(100)]
env.close()
# Record sessions
import gym.wrappers
with gym.wrappers.Monitor(gym.make("CartPole-v0"), directory="videos", force=True) as env_monitor:
sessions = [generate_session(env_monitor, agent) for _ in range(100)]
# Show video. This may not work in some setups. If it doesn't
# work for you, you can download the videos and view them locally.
from pathlib import Path
from base64 import b64encode
from IPython.display import HTML
video_paths = sorted([s for s in Path('videos').iterdir() if s.suffix == '.mp4'])
video_path = video_paths[-1] # You can also try other indices
if 'google.colab' in sys.modules:
# https://stackoverflow.com/a/57378660/1214547
with video_path.open('rb') as fp:
mp4 = fp.read()
data_url = 'data:video/mp4;base64,' + b64encode(mp4).decode()
else:
data_url = str(video_path)
HTML("""
<video width="640" height="480" controls>
<source src="{}" type="video/mp4">
</video>
""".format(data_url))
```
# Homework part I
### Tabular crossentropy method
You may have noticed that the taxi problem quickly converges from -100 to a near-optimal score and then descends back into -50/-100. This is in part because the environment has some innate randomness. Namely, the starting points of passenger/driver change from episode to episode.
### Tasks
- __1.1__ (2 pts) Find out how the algorithm performance changes if you use a different `percentile` and/or `n_sessions`. Provide here some figures so we can see how the hyperparameters influence the performance.
- __1.2__ (1 pts) Tune the algorithm to end up with positive average score.
It's okay to modify the existing code.
```<Describe what you did here>```
```
import gym
import numpy as np
env = gym.make('Taxi-v3')
env.reset()
env.render()
n_sessions = 250 # sample this many sessions
percentile = 50 # take this percent of session with highest rewards
learning_rate = 0.5 # how quickly the policy is updated, on a scale from 0 to 1
log = []
n_states = env.observation_space.n
n_actions = env.action_space.n
print("n_states=%i, n_actions=%i" % (n_states, n_actions))
def initialize_policy(n_states, n_actions):
policy = np.ones((n_states, n_actions)) / n_actions
return policy
policy = initialize_policy(n_states, n_actions)
assert type(policy) in (np.ndarray, np.matrix)
assert np.allclose(policy, 1./n_actions)
assert np.allclose(np.sum(policy, axis=1), 1)
def generate_session(env, policy, t_max = 10**4):
'''
play ONE game
record states-actions
compute reward
'''
states = []
actions = []
total_reward = 0.0
s = env.reset()
for t in np.arange(t_max):
act = np.random.choice(np.arange(len(policy[s])), p=policy[s])
new_s, r, done, info = env.step(act)
states.append(s)
actions.append(act)
total_reward += r
s = new_s
if done:
break
return states, actions, total_reward
s, a, r = generate_session(env, policy)
assert type(s) == type(a) == list
assert len(s) == len(a)
assert isinstance(r, (float, np.floating))  # np.float was removed in NumPy 1.24
# let's see the initial reward distribution
import matplotlib.pyplot as plt
%matplotlib inline
sample_rewards = [generate_session(env, policy, t_max=1000)[-1] for _ in range(200)]
plt.hist(sample_rewards, bins=20)
plt.vlines([np.percentile(sample_rewards, 50)], [0], [100], label="50'th percentile", color='green')
plt.vlines([np.percentile(sample_rewards, 90)], [0], [100], label="90'th percentile", color='red')
plt.legend()
def select_elites(states_batch, actions_batch, rewards_batch, percentile):
"""
Select states and actions from games that have rewards >= percentile
:param states_batch: list of lists of states, states_batch[session_i][t]
:param actions_batch: list of lists of actions, actions_batch[session_i][t]
:param rewards_batch: list of rewards, rewards_batch[session_i]
:returns: elite_states,elite_actions, both 1D lists of states and respective actions from elite sessions
Please return elite states and actions in their original order
[i.e. sorted by session number and timestep within session]
If you are confused, see examples below. Please don't assume that states are integers
(they will become different later).
"""
reward_threshold = np.percentile(rewards_batch, percentile) # <YOUR CODE: compute minimum reward for elite sessions. Hint: use np.percentile()>
elite_states = []
elite_actions = []
for i in range(len(states_batch)):
if rewards_batch[i] >= reward_threshold:
elite_states += states_batch[i]
elite_actions += actions_batch[i]
return elite_states, elite_actions
def get_new_policy(elite_states, elite_actions):
"""
Given a list of elite states/actions from select_elites,
return a new policy where each action probability is proportional to
policy[s_i,a_i] ~ #[occurrences of s_i and a_i in elite states/actions]
Don't forget to normalize the policy to get valid probabilities and handle the 0/0 case.
For states that you never visited, use a uniform distribution (1/n_actions for all states).
:param elite_states: 1D list of states from elite sessions
:param elite_actions: 1D list of actions from elite sessions
"""
new_policy = np.zeros([n_states, n_actions])
# <YOUR CODE: set probabilities for actions given elite states & actions>
# Don't forget to set 1/n_actions for all actions in unvisited states.
for i in np.arange(len(elite_states)):
new_policy[elite_states[i]][elite_actions[i]] += 1
for i in np.arange(n_states):
summ = np.sum(new_policy[i])
if summ == 0:
new_policy[i] = 1 / n_actions
else:
new_policy[i] /= summ
return new_policy
from IPython.display import clear_output
def show_progress(rewards_batch, log, percentile, reward_range=[-990, +10]):
"""
A convenience function that displays training progress.
No cool math here, just charts.
"""
mean_reward = np.mean(rewards_batch)
threshold = np.percentile(rewards_batch, percentile)
log.append([mean_reward, threshold])
plt.figure(figsize=[8, 4])
plt.subplot(1, 2, 1)
plt.plot(list(zip(*log))[0], label='Mean rewards')
plt.plot(list(zip(*log))[1], label='Reward thresholds')
plt.legend()
plt.grid()
plt.subplot(1, 2, 2)
plt.hist(rewards_batch, range=reward_range)
plt.vlines([np.percentile(rewards_batch, percentile)],
[0], [100], label="percentile", color='red')
plt.legend()
plt.grid()
clear_output(True)
print("mean reward = %.3f, threshold=%.3f" % (mean_reward, threshold))
plt.show()
def show_final_progress(rewards_batch, log, percentile, reward_range=[-990, +10]):
mean_reward = np.mean(rewards_batch)
threshold = np.percentile(rewards_batch, percentile)
plt.figure(figsize=[8, 4])
plt.subplot(1, 2, 1)
plt.plot(list(zip(*log))[0], label='Mean rewards')
plt.plot(list(zip(*log))[1], label='Reward thresholds')
plt.legend()
plt.grid()
plt.subplot(1, 2, 2)
plt.hist(rewards_batch, range=reward_range)
plt.vlines([np.percentile(rewards_batch, percentile)],
[0], [100], label="percentile", color='red')
plt.legend()
plt.grid()
clear_output(True)
print("mean reward = %.3f, threshold=%.3f" % (mean_reward, threshold))
plt.show()
policy = initialize_policy(n_states, n_actions)
def get_policy_rewards_batch_log(n_sessions, percentile, learning_rate, policy=initialize_policy(n_states, n_actions)):
log = []
for i in tqdm(np.arange(100)):
sessions = [ generate_session(env, policy) for i in np.arange(n_sessions) ]
states_batch, actions_batch, rewards_batch = zip(*sessions)
elite_states, elite_actions = select_elites(states_batch, actions_batch, rewards_batch, percentile) # <YOUR CODE: select elite states & actions>
new_policy = get_new_policy(elite_states, elite_actions) # <YOUR CODE: compute new policy>
policy = learning_rate * new_policy + (1 - learning_rate) * policy
# display results on chart
# show_progress(rewards_batch, log, percentile)
mean_reward = np.mean(rewards_batch)
threshold = np.percentile(rewards_batch, percentile)
log.append([mean_reward, threshold])
return policy, rewards_batch, log
n_sessions_tab = [50, 150, 250, 400]
percentile_tab = [80, 60, 40, 20]
all_policy = [[0 for i in range(len(n_sessions_tab))] for j in range(len(percentile_tab))]
all_rewards = [[0 for i in range(len(n_sessions_tab))] for j in range(len(percentile_tab))]
all_log = [[0 for i in range(len(n_sessions_tab))] for j in range(len(percentile_tab))]
fig, ax = plt.subplots(nrows=len(n_sessions_tab), ncols=len(percentile_tab), figsize=(25, 25))
print('n_sessions, percentile')
for i in tqdm_notebook(range(len(n_sessions_tab))):
n_sessions = n_sessions_tab[i]
for j in range(len(percentile_tab)):
percentile = percentile_tab[j]
policy, rewards_batch, log = get_policy_rewards_batch_log(n_sessions, percentile, learning_rate)
all_policy[i][j] = policy
mean_reward = np.mean(rewards_batch)
threshold = np.percentile(rewards_batch, percentile)
all_rewards[i][j] = mean_reward
all_log[i][j] = log
# plt.figure(figsize=[8, 4])
# plt.subplot(1, 2, 1)
ax[i][j].plot(list(zip(*log))[0], label='Mean rewards')
ax[i][j].plot(list(zip(*log))[1], label='Reward thresholds')
ax[i][j].legend()
ax[i][j].grid()
'''plt.subplot(1, 2, 2)
plt.hist(rewards_batch, range=reward_range)
plt.vlines([np.percentile(rewards_batch, percentile)],
[0], [100], label="percentile", color='red')
plt.legend()
plt.grid()
clear_output(True)
print("mean reward = %.3f, threshold=%.3f" % (mean_reward, threshold))
plt.show()'''
# clear_output(True)
print("[{0}] [{1}]".format(n_sessions, percentile), end = ' ')
print("mean reward = %.3f, threshold=%.3f" % (mean_reward, threshold))
# ax[i][j].show()
opt_policy, rewards_batch, log = get_policy_rewards_batch_log(600, 20, learning_rate)
show_final_progress(rewards_batch, log, percentile)
threshold
print("[{0}][{1}]".format(n_sessions, percentile), end = ' ')
print("mean reward = %.3f, threshold=%.3f" % (mean_reward, threshold))
n_sessions = 250 # sample this many sessions
percentile = 50 # take this percent of session with highest rewards
learning_rate = 0.5 # how quickly the policy is updated, on a scale from 0 to 1
log = []
for i in range(100):
%time sessions = [ generate_session(env, policy) for i in np.arange(n_sessions) ]
states_batch, actions_batch, rewards_batch = zip(*sessions)
elite_states, elite_actions = select_elites(states_batch, actions_batch, rewards_batch, percentile) # <YOUR CODE: select elite states & actions>
new_policy = get_new_policy(elite_states, elite_actions) # <YOUR CODE: compute new policy>
policy = learning_rate * new_policy + (1 - learning_rate) * policy
# display results on chart
show_progress(rewards_batch, log, percentile)
```
# Homework part II
### Deep crossentropy method
By this moment you should have got enough score on [CartPole-v0](https://gym.openai.com/envs/CartPole-v0) to consider it solved (see the link). It's time to try something harder.
* if you have any trouble with CartPole-v0 and feel stuck, feel free to ask us or your peers for help.
### Tasks
* __2.1__ (3 pts) Pick one of environments: `MountainCar-v0` or `LunarLander-v2`.
* For MountainCar, get average reward of __at least -150__
* For LunarLander, get average reward of __at least +50__
See the tips section below, it's kinda important.
__Note:__ If your agent is below the target score, you'll still get most of the points depending on the result, so don't be afraid to submit it.
* __2.2__ (up to 6 pts) Devise a way to speed up training against the default version
* Obvious improvement: use [`joblib`](https://joblib.readthedocs.io/en/latest/). However, note that you will probably need to spawn a new environment in each of the workers instead of passing it via pickling. (2 pts)
* Try re-using samples from 3-5 last iterations when computing threshold and training. (2 pts)
* Experiment with the number of training iterations and learning rate of the neural network (see params). Provide some plots as in 1.1. (2 pts)
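The sample re-use idea in 2.2 can be sketched with a small rolling buffer. This is a hedged sketch, not the assignment's reference solution: `REUSE_ITERS` and the helper name `pooled_batches` are illustrative assumptions.

```python
from collections import deque

# Re-use sessions from the last few iterations when computing the
# percentile threshold and training (task 2.2). Names here are
# illustrative assumptions, not part of the assignment code.
REUSE_ITERS = 4  # pool sessions from the last 4 iterations

def pooled_batches(session_buffer, new_sessions):
    """Append this iteration's sessions and return the pooled batch
    as (states_batch, actions_batch, rewards_batch)."""
    session_buffer.append(new_sessions)
    pooled = [session for iteration in session_buffer for session in iteration]
    return zip(*pooled)
```

Usage: create `session_buffer = deque(maxlen=REUSE_ITERS)` once, then inside the training loop call `pooled_batches(session_buffer, sessions)` in place of `zip(*sessions)`; old iterations fall out of the buffer automatically.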
__Please list what you did in Anytask submission form__.
### Tips
* Gym page: [MountainCar](https://gym.openai.com/envs/MountainCar-v0), [LunarLander](https://gym.openai.com/envs/LunarLander-v2)
* Sessions for MountainCar may last for 10k+ ticks. Make sure ```t_max``` param is at least 10k.
* Also, it may be a good idea to cut off rewards via ">" and not ">=". If 90% of your sessions get a reward of -10k and 10% do better, then with the 20th percentile as a threshold, R >= threshold __fails to cut off bad sessions__ while R > threshold works alright.
* _issue with gym_: Some versions of gym limit game time to 200 ticks. This will prevent CEM training in most cases. Make sure your agent is able to play for the specified __t_max__, and if it isn't, try `env = gym.make("MountainCar-v0").env` or otherwise get rid of the TimeLimit wrapper.
* If you use old _swig_ lib for LunarLander-v2, you may get an error. See this [issue](https://github.com/openai/gym/issues/100) for solution.
* If it won't train, it's a good idea to plot the reward distribution and record sessions: they may give you some clue. If they don't, call course staff :)
* 20-neuron network is probably not enough, feel free to experiment.
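The ">" vs ">=" tip above can be checked with a quick toy computation (the numbers are illustrative):

```python
import numpy as np

# Toy check of the ">" vs ">=" tip: 90 sessions score -10000, 10 score
# better, so the 20th-percentile threshold lands exactly on -10000.
rewards = np.array([-10000.0] * 90 + [-100.0] * 10)
threshold = np.percentile(rewards, 20)

kept_ge = int((rewards >= threshold).sum())  # keeps every session
kept_gt = int((rewards > threshold).sum())   # keeps only the 10 good ones
```

With `>=` all 100 sessions survive the cut, so the "elite" batch is dominated by bad sessions; with `>` only the 10 good ones remain.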
You may find the following snippet useful:
```
def visualize_mountain_car(env, agent):
# Compute policy for all possible x and v (with discretization)
xs = np.linspace(env.min_position, env.max_position, 100)
vs = np.linspace(-env.max_speed, env.max_speed, 100)
grid = np.dstack(np.meshgrid(xs, vs[::-1])).transpose(1, 0, 2)
grid_flat = grid.reshape(len(xs) * len(vs), 2)
probs = agent.predict_proba(grid_flat).reshape(len(xs), len(vs), 3).transpose(1, 0, 2)
# # The above code is equivalent to the following:
# probs = np.empty((len(vs), len(xs), 3))
# for i, v in enumerate(vs[::-1]):
# for j, x in enumerate(xs):
# probs[i, j, :] = agent.predict_proba([[x, v]])[0]
# Draw policy
f, ax = plt.subplots(figsize=(7, 7))
ax.imshow(probs, extent=(env.min_position, env.max_position, -env.max_speed, env.max_speed), aspect='auto')
ax.set_title('Learned policy: red=left, green=nothing, blue=right')
ax.set_xlabel('position (x)')
ax.set_ylabel('velocity (v)')
# Sample a trajectory and draw it
states, actions, _ = generate_session(env, agent)
states = np.array(states)
ax.plot(states[:, 0], states[:, 1], color='white')
# Draw every 3rd action from the trajectory
for (x, v), a in zip(states[::3], actions[::3]):
if a == 0:
plt.arrow(x, v, -0.1, 0, color='white', head_length=0.02)
elif a == 2:
plt.arrow(x, v, 0.1, 0, color='white', head_length=0.02)
import gym
import numpy as np
import sys, os
from tqdm import tqdm, tqdm_notebook
import matplotlib.pyplot as plt
if 'google.colab' in sys.modules and not os.path.exists('.setup_complete'):
!wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/master/setup_colab.sh -O- | bash
!touch .setup_complete
# This code creates a virtual display to draw game images on.
# It will have no effect if your machine has a monitor.
if type(os.environ.get("DISPLAY")) is not str or len(os.environ.get("DISPLAY")) == 0:
!bash ../xvfb start
os.environ['DISPLAY'] = ':1'
%matplotlib inline
# if you see "<classname> has no attribute .env", remove .env or update gym
env = gym.make("MountainCar-v0").env
env.reset()
n_actions = env.action_space.n
state_dim = env.observation_space.shape[0]
plt.imshow(env.render("rgb_array"))
print("state vector dim =", state_dim)
print("n_actions =", n_actions)
env.reset()
from sklearn.neural_network import MLPClassifier
agent = MLPClassifier(
hidden_layer_sizes=(20, 20),
warm_start=True,
activation='tanh',
max_iter=1)
agent.partial_fit([env.reset()] * n_actions, np.arange(n_actions), np.arange(n_actions))
import pickle
agent = pickle.load(open('saved_model_132.pkl', mode='rb'))
agent.predict_proba([env.reset()])
agent.predict([env.reset(), [10, 2], [123, -123], [32, 23]])
from joblib import wrap_non_picklable_objects
@wrap_non_picklable_objects
def generate_session(env, agent, t_max=1000):
"""
Play a single game using agent neural network.
Terminate when game finishes or after :t_max: steps
"""
states, actions = [], []
total_reward = 0
s = env.reset()
for t in range(t_max):
# use agent to predict a vector of action probabilities for state :s:
probs = agent.predict_proba([s])[0] # <YOUR CODE>
assert probs.shape == (env.action_space.n,), "make sure probabilities are a vector (hint: np.reshape)"
# use the probabilities you predicted to pick an action
# sample proportionally to the probabilities, don't just take the most likely action
a = np.random.choice(np.arange(n_actions), p=probs) # <YOUR CODE>
# ^-- hint: try np.random.choice
new_s, r, done, info = env.step(a)
# record sessions like you did before
states.append(s)
actions.append(a)
total_reward += r
s = new_s
if done:
break
return states, actions, total_reward
def select_elites(states_batch, actions_batch, rewards_batch, percentile=50):
"""
Select states and actions from games that have rewards >= percentile
:param states_batch: list of lists of states, states_batch[session_i][t]
:param actions_batch: list of lists of actions, actions_batch[session_i][t]
:param rewards_batch: list of rewards, rewards_batch[session_i]
:returns: elite_states,elite_actions, both 1D lists of states and respective actions from elite sessions
Please return elite states and actions in their original order
[i.e. sorted by session number and timestep within session]
If you are confused, see examples below. Please don't assume that states are integers
(they will become different later).
"""
reward_threshold = np.percentile(rewards_batch, percentile) # <YOUR CODE: compute minimum reward for elite sessions. Hint: use np.percentile()>
elite_states = []
elite_actions = []
for i in range(len(states_batch)):
if rewards_batch[i] >= reward_threshold:
for j in states_batch[i]:
elite_states.append(j)
for j in actions_batch[i]:
elite_actions.append(j)
return elite_states, elite_actions
from IPython.display import clear_output
def show_progress(rewards_batch, log, percentile, reward_range=[-990, +10]):
"""
A convenience function that displays training progress.
No cool math here, just charts.
"""
mean_reward = np.mean(rewards_batch)
threshold = np.percentile(rewards_batch, percentile)
log.append([mean_reward, threshold])
clear_output(True)
print("mean reward = %.3f, threshold=%.3f" % (mean_reward, threshold))
plt.figure(figsize=[8, 4])
plt.subplot(1, 2, 1)
plt.plot(list(zip(*log))[0], label='Mean rewards')
plt.plot(list(zip(*log))[1], label='Reward thresholds')
plt.legend()
plt.grid()
plt.subplot(1, 2, 2)
plt.hist(rewards_batch, range=reward_range)
plt.vlines([np.percentile(rewards_batch, percentile)],
[0], [100], label="percentile", color='red')
plt.legend()
plt.grid()
plt.show()
from joblib import Parallel, delayed
from math import sqrt
Parallel(n_jobs=2, prefer="threads")(delayed(sqrt)(i ** 2) for i in range(10))
n_sessions = 100
percentile = 70
log = []
for i in range(100):
# generate new sessions
%time sessions = Parallel(n_jobs=1)(delayed(generate_session)(env, agent, 10000) for i in range(n_sessions))
# [ generate_session(env, agent, 10000) for i in np.arange(n_sessions) ]
states_batch, actions_batch, rewards_batch = map(np.array, zip(*sessions))
print(states_batch.shape, actions_batch.shape)
# <YOUR CODE: select elite actions just like before>
elite_states, elite_actions = select_elites(states_batch, actions_batch, rewards_batch, percentile)
#<YOUR CODE: partial_fit agent to predict elite_actions(y) from elite_states(X)>
agent.partial_fit(elite_states, elite_actions)
show_progress(rewards_batch, log, percentile, (-500, 0))
with gym.make('MountainCar-v0').env as env:
visualize_mountain_car(env, agent)
import pickle
#
# Create your model here (same as above)
#
# Save to file in the current working directory
pkl_filename = "saved_model_132.pkl"
with open(pkl_filename, 'wb') as file:
pickle.dump(agent, file)
n_sessions = 100
percentile = 70
log = []
for i in range(100):
# generate new sessions
%time sessions = [ generate_session(env, agent, 10000) for i in np.arange(n_sessions) ]
states_batch, actions_batch, rewards_batch = map(np.array, zip(*sessions))
print(states_batch.shape, actions_batch.shape)
# <YOUR CODE: select elite actions just like before>
elite_states, elite_actions = select_elites(states_batch, actions_batch, rewards_batch, percentile)
#<YOUR CODE: partial_fit agent to predict elite_actions(y) from elite_states(X)>
agent.partial_fit(elite_states, elite_actions)
show_progress(rewards_batch, log, percentile, (-6000, 0))
from IPython import display
# Create env manually to set time limit. Please don't change this.
TIME_LIMIT = 250
env = gym.wrappers.TimeLimit(
gym.envs.classic_control.MountainCarEnv(),
max_episode_steps=TIME_LIMIT + 1,
)
actions = {'left': 0, 'stop': 1, 'right': 2}
plt.figure(figsize=(4, 3))
display.clear_output(wait=True)
obs = env.reset()
for t in range(TIME_LIMIT):
plt.gca().clear()
# Call your policy: use the probabilities you predicted to pick an action,
# sampling proportionally to them - don't just take the most likely action
probs = agent.predict_proba([obs])[0]
action = np.random.choice(np.arange(n_actions), p=probs)
# Pass the action chosen by the policy to the environment
obs, reward, done, _ = env.step(action)
# We don't do anything with reward here because MountainCar is a very simple environment,
# and reward is a constant -1. Therefore, your goal is to end the episode as quickly as possible.
# Draw game image on display.
plt.imshow(env.render('rgb_array'))
display.display(plt.gcf())
display.clear_output(wait=True)
if done:
print("Well done!")
break
else:
print("Time limit exceeded. Try again.")
display.clear_output(wait=True)
```
### Bonus tasks
* __2.3 bonus__ (2 pts) Try to find a network architecture and training params that solve __both__ environments above (_Points depend on implementation. If you attempted this task, please mention it in Anytask submission._)
* __2.4 bonus__ (4 pts) Solve continuous action space task with `MLPRegressor` or similar.
* Since your agent only predicts the "expected" action, you will have to add noise to ensure exploration.
* Choose one of [MountainCarContinuous-v0](https://gym.openai.com/envs/MountainCarContinuous-v0) (90+ pts to solve), [LunarLanderContinuous-v2](https://gym.openai.com/envs/LunarLanderContinuous-v2) (200+ pts to solve)
* 4 points for solving. Slightly less for getting some results below solution threshold. Note that discrete and continuous environments may have slightly different rules aside from action spaces.
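The exploration-noise idea from 2.4 can be sketched as follows. This is a hedged sketch under stated assumptions: `agent`, `noise_sigma`, `low`, and `high` are illustrative names, not given APIs.

```python
import numpy as np

# Bonus task 2.4 sketch: a regressor-style agent predicts only the
# "expected" action, so Gaussian noise is added for exploration.
# All names below are assumptions, not part of the assignment code.
def sample_continuous_action(agent, state, noise_sigma, low, high):
    mean_action = np.asarray(agent.predict([state])[0], dtype=float)
    noisy = mean_action + np.random.normal(0.0, noise_sigma, size=mean_action.shape)
    return np.clip(noisy, low, high)  # stay inside the env's action bounds
```

A common refinement is to anneal `noise_sigma` toward zero as training progresses, so early iterations explore and late iterations exploit.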
If you're still feeling unchallenged, consider the project (see other notebook in this folder).
# 2.3 Least Squares and Nearest Neighbors
### 2.3.3 From Least Squares to Nearest Neighbors
1. Generate 10 means $m_k$ from a bivariate Gaussian distribution for each color:
- $N((1, 0)^T, \textbf{I})$ for <span style="color: blue">BLUE</span>
- $N((0, 1)^T, \textbf{I})$ for <span style="color: orange">ORANGE</span>
2. For each color, generate 100 observations as follows:
- For each observation, pick one of the $m_k$ at random with probability 1/10.
- Then generate an observation from $N(m_k,\textbf{I}/5)$
```
%matplotlib inline
import random
import numpy as np
import matplotlib.pyplot as plt
sample_size = 100
def generate_data(size, mean):
identity = np.identity(2)
m = np.random.multivariate_normal(mean, identity, 10)
return np.array([
np.random.multivariate_normal(random.choice(m), identity / 5)
for _ in range(size)
])
def plot_data(orange_data, blue_data):
axes.plot(orange_data[:, 0], orange_data[:, 1], 'o', color='orange')
axes.plot(blue_data[:, 0], blue_data[:, 1], 'o', color='blue')
blue_data = generate_data(sample_size, [1, 0])
orange_data = generate_data(sample_size, [0, 1])
data_x = np.r_[blue_data, orange_data]
data_y = np.r_[np.zeros(sample_size), np.ones(sample_size)]
# plotting
fig = plt.figure(figsize = (8, 8))
axes = fig.add_subplot(1, 1, 1)
plot_data(orange_data, blue_data)
plt.show()
```
### 2.3.1 Linear Models and Least Squares
$$\hat{Y} = \hat{\beta_0} + \sum_{j=1}^{p} X_j\hat{\beta_j}$$
where $\hat{\beta_0}$ is the intercept, also known as the *bias*. It is convenient to include the constant variable 1 in $X$ and $\hat{\beta_0}$ in the vector of coefficients $\hat{\beta}$, and then write the model as:
$$\hat{Y} = X^T\hat{\beta} $$
#### Residual sum of squares
How to fit the linear model to a set of training data? Pick the coefficients $\beta$ to minimize the *residual sum of squares*:
$$RSS(\beta) = \sum_{i=1}^{N} (y_i - x_i^T\beta) ^ 2 = (\textbf{y} - \textbf{X}\beta)^T (\textbf{y} - \textbf{X}\beta)$$
where $\textbf{X}$ is an $N \times p$ matrix with each row an input vector, and $\textbf{y}$ is an $N$-vector of the outputs in the training set. Differentiating w.r.t. $\beta$, we get the normal equations:
$$\mathbf{X}^T(\mathbf{y} - \mathbf{X}\beta) = 0$$
If $\mathbf{X}^T\mathbf{X}$ is nonsingular, then the unique solution is given by:
$$\hat{\beta} = (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{y}$$
```
class LinearRegression:
def fit(self, X, y):
X = np.c_[np.ones((X.shape[0], 1)), X]
self.beta = np.linalg.inv(X.T @ X) @ X.T @ y
return self
def predict(self, x):
return np.dot(self.beta, np.r_[1, x])
model = LinearRegression().fit(data_x, data_y)
print("beta = ", model.beta)
```
#### Example of the linear model in a classification context
The fitted values $\hat{Y}$ are converted to a fitted class variable $\hat{G}$ according to the rule:
$$
\begin{equation}
\hat{G} = \begin{cases}
\text{ORANGE} & \text{ if } \hat{Y} \gt 0.5 \\
\text{BLUE } & \text{ if } \hat{Y} \leq 0.5
\end{cases}
\end{equation}
$$
```
from itertools import filterfalse, product
def plot_grid(orange_grid, blue_grid):
axes.plot(orange_grid[:, 0], orange_grid[:, 1], '.', zorder = 0.001,
color='orange', alpha = 0.3, scalex = False, scaley = False)
axes.plot(blue_grid[:, 0], blue_grid[:, 1], '.', zorder = 0.001,
color='blue', alpha = 0.3, scalex = False, scaley = False)
plot_xlim = axes.get_xlim()
plot_ylim = axes.get_ylim()
grid = np.array([*product(np.linspace(*plot_xlim, 50), np.linspace(*plot_ylim, 50))])
is_orange = lambda x: model.predict(x) > 0.5
orange_grid = np.array([*filter(is_orange, grid)])
blue_grid = np.array([*filterfalse(is_orange, grid)])
axes.clear()
axes.set_title("Linear Regression of 0/1 Response")
plot_data(orange_data, blue_data)
plot_grid(orange_grid, blue_grid)
find_y = lambda x: (0.5 - model.beta[0] - x * model.beta[1]) / model.beta[2]
axes.plot(plot_xlim, [*map(find_y, plot_xlim)], color = 'black',
scalex = False, scaley = False)
fig
```
### 2.3.2 Nearest-Neighbor Methods
$$\hat{Y}(x) = \frac{1}{k} \sum_{x_i \in N_k(x)} y_i$$
where $N_k(x)$ is the neighborhood of $x$ defined by the $k$ closest points $x_i$ in the training sample.
```
class KNeighborsRegressor:
def __init__(self, k):
self._k = k
def fit(self, X, y):
self._X = X
self._y = y
return self
def predict(self, x):
X, y, k = self._X, self._y, self._k
distances = ((X - x) ** 2).sum(axis=1)
return np.mean(y[distances.argpartition(k)[:k]])
def plot_k_nearest_neighbors(k):
model = KNeighborsRegressor(k).fit(data_x, data_y)
is_orange = lambda x: model.predict(x) > 0.5
orange_grid = np.array([*filter(is_orange, grid)])
blue_grid = np.array([*filterfalse(is_orange, grid)])
axes.clear()
axes.set_title(str(k) + "-Nearest Neighbor Classifier")
plot_data(orange_data, blue_data)
plot_grid(orange_grid, blue_grid)
plot_k_nearest_neighbors(1)
fig
```
It appears that the k-nearest-neighbor fit has a single parameter (*k*); however, the effective number of parameters is N/k, which is generally bigger than the p parameters in least-squares fits. **Note:** if the neighborhoods were nonoverlapping, there would be N/k neighborhoods and we would fit one parameter (a mean) in each neighborhood.
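The effective-parameter claim above is easy to verify numerically for the simulated training set (N = 200 points, and least squares fits only p = 3 parameters: an intercept plus two coordinates):

```python
# Quick arithmetic behind the effective-parameter claim: the simulated
# training set above has N = 200 points.
N = 200
effective_params = {k: N / k for k in (1, 15)}
# k = 1 gives 200 effective parameters, k = 15 gives about 13.3 -
# both well above the p = 3 parameters of the least-squares fit for k = 1.
```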
```
plot_k_nearest_neighbors(15)
fig
```
<a href="https://colab.research.google.com/github/JerKeller/2022_ML_Earth_Env_Sci/blob/main/S4_3_THOR.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
<img src='https://unils-my.sharepoint.com/:i:/g/personal/tom_beucler_unil_ch/ESLP1e1BfUxKu-hchh7wZKcBZiG3bJnNbnt0PDDm3BK-9g?download=1'>
<center>
Photo Credits: <a href="https://unsplash.com/photos/zCMWw56qseM">Sea Foam</a> by <a href="https://unsplash.com/@unstable_affliction">Ivan Bandura</a> licensed under the <a href='https://unsplash.com/license'>Unsplash License</a>
</center>
>*A frequently asked question related to this work is “Which mixing processes matter most for climate?” As with many alluringly comprehensive sounding questions, the answer is “it depends.”* <br>
> $\qquad$ MacKinnon, Jennifer A., et al. <br>$\qquad$"Climate process team on internal wave–driven ocean mixing." <br>$\qquad$ Bulletin of the American Meteorological Society 98.11 (2017): 2429-2454.
In week 4's final notebook, we will perform clustering to identify regimes in data taken from the realistic numerical ocean model [Estimating the Circulation and Climate of the Ocean](https://www.ecco-group.org/products-ECCO-V4r4.htm). Sonnewald et al. point out that finding robust regimes is intractable with a naïve approach, so we will be using reduced-dimensionality data.
It is worth pointing out, however, that the reduction was done with an equation instead of one of the algorithms we discussed this week. If you're interested in the full details, you can check out [Sonnewald et al. (2019)](https://doi.org/10.1029/2018EA000519).
# Setup
First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20.
```
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
# Common imports
import numpy as np
import os
import xarray as xr
import pooch
# to make this notebook's output stable across runs
rnd_seed = 42
rnd_gen = np.random.default_rng(rnd_seed)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "dim_reduction"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
```
Here we're going to import the [StandardScaler](https://duckduckgo.com/sklearn.preprocessing.standardscaler) function from scikit's preprocessing tools, import the [scikit clustering library](https://duckduckgo.com/sklearn.clustering), and set up the colormap that we will use when plotting.
```
from sklearn.preprocessing import StandardScaler
import sklearn.cluster as cluster
from matplotlib.colors import LinearSegmentedColormap, ListedColormap
colors = ['royalblue', 'cyan','yellow', 'orange', 'magenta', 'red']
mycmap = ListedColormap(colors)
```
# Data Preprocessing
The first thing we need to do is retrieve the list of files we'll be working on. We'll rely on pooch to access the files hosted on the cloud.
```
# Retrieve the files from the cloud using Pooch.
data_url = 'https://unils-my.sharepoint.com/:u:/g/personal/tom_beucler_unil_ch/EUYqUzpIjoJBui02QEo6q1wBSN1Zsi1ofE6I3G4B9LJn_Q?download=1'
hash = '3f41661c7a087fa7d7af1d2a8baf95c065468f8a415b8514baedda2f5bc18bb5'
files = pooch.retrieve(data_url, known_hash=hash, processor=pooch.Unzip())
[print(filename) for filename in files];
```
And now that we have a set of files to load, let's set up a dictionary with the variable names as keys and the data in numpy array format as the values.
```
# Let's read in the variable names from the filepaths
var_names = []
[var_names.append(path.split('/')[-1][:-4]) for path in files]
# And build a dictionary of the data variables keyed to the filenames
data_dict = {}
for idx, val in enumerate(var_names):
data_dict[val] = np.load(files[idx]).T
#We'll print the name of the variable loaded and the associated shape
[print(f'Varname: {item[0]:<15} Shape: {item[1].shape}') for item in data_dict.items()];
```
We now have a dictionary that uses the filename as the key! Feel free to explore the data (e.g., loading the keys, checking the shape of the arrays, plotting)
```
#Feel free to explore the data dictionary
```
We're eventually going to have an array of cluster classes that we're going to use to label dynamic regimes in the ocean. Let's make an array full of NaN (not-a-number) values that has the same shape as our other variables and store it in the data dictionary.
```
data_dict['clusters'] = np.full_like(data_dict['BPT'],np.nan)
```
### Reformatting as Xarray
In the original paper, this data was loaded as numpy arrays. However, we'll take this opportunity to demonstrate the same procedure while relying on xarray. First, let's instantiate a blank dataset.<br><br>
###**Q1) Make a blank xarray dataset.**<br>
*Hint: Look at the xarray [documentation](https://duckduckgo.com/?q=xarray+dataset)*
```
# Make your blank dataset here! Instantiate the class without passing any parameters.
ds = xr.Dataset()
```
<img src='https://unils-my.sharepoint.com/:i:/g/personal/tom_beucler_unil_ch/EZv_qqVz_h1Hio6Nq11ckScBb01bGb9jtNKzdqAg1TPrKQ?download=1'>
<center> Image taken from the xarray <a href='https://xarray.pydata.org/en/stable/user-guide/data-structures.html#:~:text=Dataset-,xarray.,from%20the%20netCDF%20file%20format.'> <i>Data Structure documentation</i> </a> </center>
In order to build the dataset, we're going to need a set of coordinate vectors that help us map out our data! For our data, we have two axes corresponding to longitude ($\lambda$) and latitude ($\phi$).
We don't know how many lat/lon points we have, so let's explore one of the variables to make sense of the data by checking the shape of one of the numpy arrays.
###**Q2) Visualize the data using a plot and printing the shape of the data to the console output.**
```
#Complete the code
# Let's print out an image of the Bottom Pressure Torques (BPT)
plt.imshow( data_dict['BPT'] , origin='lower')
# It will also be useful to store and print out the shape of the data
data_shape = data_dict['BPT'].shape
print(data_shape)
```
Now that we know the resolution of our data, we can prepare a set of axis arrays. We will use these to organize the data we will feed into the dataset.
###**Q3) Prepare the latitude and longitude arrays to be used as axes for our dataset**
*Hint 1: You can build ordered numpy arrays using, e.g., [numpy.linspace](https://numpy.org/doc/stable/reference/generated/numpy.linspace.html) and [numpy.arange](https://numpy.org/doc/stable/reference/generated/numpy.arange.html)*
*Hint 2: You can rely on the data_shape variable we loaded previously to know how many points you need along each axis*
```
#Complete the code
# Let's prepare the lat and lon axes for our data.
lat = np.linspace(0, data_shape[0],data_shape[0])
lon = np.linspace(0, data_shape[1],data_shape[1])
```
Now that we have the axes we need, we can build xarray [*data arrays*](https://xarray.pydata.org/en/stable/generated/xarray.DataArray.html) for each data variable. Since we'll be doing it several times, let's go ahead and defined a function that does this for us!
###**Q4) Define a function that takes in: 1) an array name, 2) a numpy array, 3) a lat vector, and 4) a lon vector. The function should return a dataArray with lat-lon as the coordinate dimensions**
```
#Complete the code
def np_to_xr(array_name, array, lat, lon):
#building the xarrray
da = xr.DataArray(data = array, # Data to be stored
#set the name of dimensions for the dataArray
dims = ['lat', 'lon'],
#Set the dictionary pointing the name dimensions to np arrays
coords = {'lat':lat,
'lon':lon},
name=array_name)
return da
```
We're now ready to build our data array! Let's iterate through the items and merge our blank dataset with the data arrays we create.
###**Q5) Build the dataset from the data dictionary**
*Hint: We'll be using the xarray merge command to put everything together.*
```
# The code in the notebook assumes you named your dataset ds. Change it to
# whatever you used!
# Complete the code
for key, item in data_dict.items():
# Let's make use of our np_to_xr function to get the data as a dataArray
da = np_to_xr(key, item, lat, lon)
# Merge the dataSet with the dataArray here!
ds = xr.merge( [ds , da] )
```
Congratulations! You should now have a nicely set up xarray dataset. This gives you access to a ton of nice features, e.g.:
> Data plotting by calling, e.g., `ds.BPT.plot.imshow(cmap='ocean')`
>
> Find statistical measures of all variables at once! (e.g.: `ds.std()`, `ds.mean()`)
```
# Play around with the dataset here if you'd like :)
```
Now we want to find clusters in the data, treating each grid point as a datapoint with 5-dimensional data. However, we went through a lot of work to get the data nicely associated with a lat and lon - do we really want to undo that?
Luckily, xarray's developers foresaw the need to group dimensions together, so let's create a 'flat' version of our dataset using the [`stack`](https://xarray.pydata.org/en/stable/generated/xarray.DataArray.stack.html) method.
###**Q6) Store a flattened version of our dataset**
*Hint 1: You'll need to pass a dictionary with the 'new' stacked dimension name as the key and the 'flattened' dimensions as the values.*
*Hint 2: xarrays have a ['.values' attribute](https://xarray.pydata.org/en/stable/generated/xarray.DataArray.values.html) that return their data as a numpy array.*
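Before tackling Q6 on the real data, here is a minimal stack/unstack round trip on a toy 2x3 grid (the stacked dimension name `'z'` is an assumption; any name works):

```python
import numpy as np
import xarray as xr

# Toy 2x3 grid mirroring what Q6 asks of the real dataset.
toy = xr.Dataset(
    {"BPT": (("lat", "lon"), np.arange(6.0).reshape(2, 3))},
    coords={"lat": [0.0, 1.0], "lon": [0.0, 1.0, 2.0]},
)
flat = toy.stack({"z": ["lat", "lon"]})  # (2, 3) -> (6,)
restored = flat.unstack()                # back to (2, 3)
```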
```
# Complete the code
# Let's store the stacked version of our dataset
stacked = ds.stack({'z': ['lat', 'lon']})  # 'z' is our flattened dimension
# And verify the shape of our data
print(stacked.to_array().values.shape)
```
So far we've ignored an important point - we're supposed to have 5 variables, not 6! As you may have guessed, `noiseMask` helps us throw away data we don't want (e.g., from land masses or bad pixels).
We're now going to clean up the stacked dataset using the noise mask. Relax and read through the code, since there won't be a question in this part :)
```
# Let's redefine stacked as all the points where noiseMask = 1, since noisemask
# is binary data.
print(f'Dataset shape before processing: {stacked.to_array().values.shape}')
print("Let's do some data cleaning!")
print(f'Points before cleaning: {len(stacked.BPT)}')
stacked = stacked.where(stacked.noiseMask==1, drop=True)
print(f'Points after cleaning: {len(stacked.BPT)}')
# We also no longer need the noiseMask variable, so we can just drop it.
print('And drop the noisemask variable...')
print(f'Before dropping: {stacked.to_array().values.shape}')
stacked = stacked.drop('noiseMask')
print(f'Dataset shape after processing: {stacked.to_array().values.shape}')
```
We now have several thousand points which we want to divide into clusters using the kmeans clustering algorithm (you can check out the documentation for scikit's implementation of kmeans [here](https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html)).
You'll note that the algorithm expects the input data `X` to be fed as `(n_samples, n_features)`. This is the opposite of what we have! Let's go ahead and make a copy to a numpy array that has the axes in the right order.
You'll need xarray's [`.to_array()`](https://xarray.pydata.org/en/stable/generated/xarray.Dataset.to_array.html) method and [`.values`](https://xarray.pydata.org/en/stable/generated/xarray.DataArray.values.html) parameter, as well as numpy's [`.moveaxis`](https://numpy.org/doc/stable/reference/generated/numpy.moveaxis.html) method.
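The axis flip Q7 asks for is easiest to see on a toy array: 5 "variables" (features) by 7 grid points (samples) becomes `(samples, features)` for scikit-learn.

```python
import numpy as np

# Toy version of the Q7 reshape: (n_features, n_samples) -> (n_samples, n_features).
features_first = np.arange(35).reshape(5, 7)
samples_first = np.moveaxis(features_first, 0, 1)
```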
###**Q7) Load the datapoints into a numpy array following the convention where the 0th axis corresponds to the samples and the 1st axis corresponds to the features.**
```
# Complete the code
input_data = np.moveaxis(stacked.to_array().values, # data to reshape
                         0, # source axis as integer
                         1) # destination axis as integer
# Does the input data look the way it's supposed to? Print the shape.
print(input_data.shape)
```
In previous classes we discussed the importance of the scaling the data before implementing our algorithms. Now that our data is all but ready to be fed into an algorithm, let's make sure that it's been scaled.
###**Q8) Scale the input data**
*Hint 1: Import the [`StandardScaler`](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html) class from scikit and instantiate it*
*Hint 2: Update the input array to the one returned by the [`.fit_transform(X)`](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html#sklearn.preprocessing.StandardScaler.fit_transform) method*
```
# Write your scaling code here
scaler = StandardScaler()
input_data = scaler.fit_transform(input_data)
```
Now we're finally ready to train our algorithm! Let's load up the kmeans model and find clusters in our data.
###**Q9) Instantiate the kmeans clustering algorithm, and then fit it using 50 clusters, trying out 10 different initial centroids.**
*Hint 1: `sklearn.cluster` was imported as `cluser` during the notebook setup! [Here is the scikit `KMeans` documentation](https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html).*
*Hint 2: Use the `fit_predict` method to organize the data into clusters*
*Warning!: Fitting the data may take some time (under a minute during the testing of the notebook).*
```
# Complete the code
kmeans = cluster.KMeans(n_clusters=50, # Number of clusters
random_state=42, # setting a random state
n_init=10, # Number of initial centroid states to try
verbose = 1) # Verbosity so we know things are working
cluster_labels = kmeans.fit_predict(input_data) # Feed in our scaled input data!
```
We now have a set of cluster labels that group the data into 50 similar groups. Let's store it in our stacked dataset!
```
# Let's run this line
stacked['clusters'].values = cluster_labels
```
We now have a set of labels, but they're stored in a flattened array. Since we'd like to see the data as a map, we still have some work to do. Let's go back to a 2D representation of our values.
###**Q10) Turn the flattened xarray back into a set of 2D fields**
*Hint*: xarrays have an [`.unstack` method](https://xarray.pydata.org/en/stable/generated/xarray.DataArray.unstack.html) that you will find to be very useful for this.
```
# Complete the code:
processed_ds = stacked.unstack()
```
Now we have an unstacked dataset, and can now easily plot out the clusters we found!
###**Q11) Plot the 'clusters' variable using the built-in xarray plotting function**
*Hint: [`.plot()`](https://xarray.pydata.org/en/stable/generated/xarray.DataArray.plot.html) gives you access to the xarray implementations of [`pcolormesh`](https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.pcolormesh.html) and [`imshow`](https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.imshow.html).*
```
```
Compare your results to those from the paper:
<img src='https://unils-my.sharepoint.com/:i:/g/personal/tom_beucler_unil_ch/EdLh6Ds0yVlFivyfIOXlV74B_G35dVz87GFagzylIG-gZA?download=1'>
We now want to find the 5 most common regimes, and group the rest. This isn't straightforward, so we've gone ahead and prepared the code for you. Run through it and try to understand what the code is doing!
```
# Make field filled with -1 vals so unprocessed points are easily retrieved.
# Noise mask applied automatically by using previously found labels as base.
processed_ds['final_clusters'] = (processed_ds.clusters * 0) - 1
# Find the 5 most common cluster labels
top_clusters = processed_ds.groupby('clusters').count().sortby('BPT').tail(5).clusters.values
#Build the set of indices for the cluster data, used for rewriting cluster labels
for idx, label in enumerate(top_clusters):
#Find the indices where the label is found
indices = (processed_ds.clusters == label)
processed_ds['final_clusters'].values[indices] = 4-idx
# Set the remaining unlabeled regions to category 5 "non-linear"
processed_ds['final_clusters'].values[processed_ds.final_clusters==-1] = 5
# Plot the figure
processed_ds.final_clusters.plot.imshow(cmap=mycmap, figsize=(18,8));
```
Compare it to the regimes found in the paper:
<img src='https://unils-my.sharepoint.com/:i:/g/personal/tom_beucler_unil_ch/EehuR9cUfaJImrw4DCAzDPoBiGuG7R3Ys6453Umi1cN_OQ?download=1'>
The authors then went on to train neural networks ***to infer in-depth dynamics from data that is largely readily available from, for example, CMIP6 models, using NN methods to infer the source of predictive skill*** and ***to apply the trained Ensemble MLP to a climate model in order to assess circulation changes under global heating***.
For our purposes, however, we will say goodbye to *THOR* at this point 😃
| github_jupyter |
```
import re
import json
import matplotlib.pylab as plt
import numpy as np
import glob
%matplotlib inline
all_test_acc = []
all_test_err = []
all_train_loss = []
all_test_loss = []
all_cardinalities = []
all_depths = []
all_widths = []
for file in glob.glob('logs_cardinality/Cifar2/*.txt'):
    with open(file) as logs:
        next(logs)
        test_acc = []
        test_err = []
        train_loss = []
        test_loss = []
        i = 0
        for line in logs:
            i += 1
            if i % 2 != 0:
                for t in re.finditer(r"\{.*\}", line):
                    try:
                        data = json.loads(t.group())
                        train_loss.append(data['train_loss'])
                        test_loss.append(data['test_loss'])
                        test_acc.append(data['test_accuracy'])
                        test_err.append((1-data['test_accuracy'])*100)
                        cardinality = data['cardinality']
                        depth = data['depth']
                        width = data['base_width']
                    except ValueError:
                        pass
    all_test_acc.append(test_acc)
    all_test_err.append(test_err)
    all_train_loss.append(train_loss)
    all_test_loss.append(test_loss)
    all_cardinalities.append(cardinality)
    all_depths.append(depth)
    all_widths.append(width)
epochs = np.arange(0, 300, 2)
# Keep one ordered list per hyperparameter sweep. (Reusing a single
# ordered_test_err list would overwrite the cardinality and depth results
# before they are plotted.)
ordered_test_err_card = [all_test_err[all_cardinalities.index(c)] for c in [1, 2, 4, 8, 16]]
all_cardinalities = sorted(all_cardinalities)
ordered_test_err_depth = [all_test_err[all_depths.index(d)] for d in [20, 29]]
all_depths = sorted(all_depths)
ordered_test_err_width = [all_test_err[all_widths.index(w)] for w in [32, 64]]
all_widths = sorted(all_widths)
for file_no in range(0, 3):
    plt.plot(epochs, ordered_test_err_card[file_no])
plt.legend([cardinality for cardinality in all_cardinalities[0:3]], loc='upper right')
plt.xlabel('epochs \n\n (f)')
plt.ylabel('top-1 error(%)')
plt.show()
for file_no in range(0, 2):
    plt.plot(epochs, ordered_test_err_depth[file_no])
plt.legend([depth for depth in all_depths], loc='upper right')
plt.xlabel('epochs \n\n (c)')
plt.ylabel('top-1 error(%)')
# plt.title('(a)')
plt.show()
for file_no in range(0, 2):
    plt.plot(epochs, ordered_test_err_width[file_no])
plt.legend([width for width in all_widths], loc='upper right')
plt.xlabel('epochs \n\n (a)')
plt.ylabel('top-1 error(%)')
plt.show()
cardinalities = [1, 2, 4, 8, 16]
params = [5.6, 9.8, 18.3, 34.4, 68.1]
text = ['1x64d', '2x64d', '4x64d', '8x64d', '16x64d']
cifar29 = [[0.786, 0.797, 0.803, 0.83, 0.823], [0.886, 0.887, 0.86, 0.914, 0.92], [0.939, 0.939, 0.941, 0.946, 0.946]]
fig = plt.figure()
ax = fig.add_subplot(111)
y = [(1-val)*100 for val in cifar29[2]]
ax.plot(params, y, 'x-')
plt.xlabel('# of parameters (M)')
plt.ylabel('test error (%)')
for i, txt in enumerate(text):
    ax.annotate(txt, (params[i], y[i]))
plt.title('CIFAR 2 Dataset')
```
```
import numpy as np
import spectral_embedding as se
import matplotlib as mpl
import matplotlib.pyplot as plt
```
In this example we demonstrate unfolded adjacency spectral embedding for a series of stochastic block models and investigate the stability of the embedding compared to two other possible approaches: omnibus embedding and separate adjacency spectral embedding.
```
np.random.seed(0)
```
We generate a dynamic stochastic block model over $T = 2$ time periods with $n=1000$ nodes and $K=4$ communities, where nodes are equally likely to belong to any community, $\pi = (0.25, 0.25, 0.25, 0.25)$. We use the following two community link probability matrices for the two time periods,
$$
\textbf{B}^{(1)} = \left( \begin{array}{cccc}
0.08 & 0.02 & 0.18 & 0.10 \\
0.02 & 0.20 & 0.04 & 0.10 \\
0.18 & 0.04 & 0.02 & 0.02 \\
0.10 & 0.10 & 0.02 & 0.06
\end{array} \right), \quad
\textbf{B}^{(2)} = \left( \begin{array}{cccc}
0.16 & 0.16 & 0.04 & 0.10 \\
0.16 & 0.16 & 0.04 & 0.10 \\
0.04 & 0.04 & 0.09 & 0.02 \\
0.10 & 0.10 & 0.02 & 0.06
\end{array} \right).
$$
In the first time period, the four communities are all behaving in different ways and spectral embedding should be able to distinguish between the groups. In the second time period, communities 1 and 2 have the same link probabilities to all the other communities, so it is desirable that those nodes are embedded in the same way at this time. This is known as cross-sectional stability. Furthermore, community 4 has the same community link probabilities at time 1 and time 2, so it is desirable that these nodes are embedded in the same way between the two time periods. This is known as longitudinal stability.
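The notebook generates this model with `se.generate_SBM_dynamic`. As a rough numpy-only illustration of what such a sampler does (this is a sketch of the model, not the package's actual implementation):

```python
import numpy as np

def sample_dynamic_sbm(n, Bs, pi, rng):
    """Sample adjacency matrices with fixed community labels across time."""
    Z = rng.choice(len(pi), size=n, p=pi)           # community assignments
    As = []
    for B in Bs:                                    # one network per time period
        P = B[Z][:, Z]                              # n x n edge probabilities
        upper = np.triu(rng.random((n, n)) < P, 1)  # sample the upper triangle
        As.append((upper + upper.T).astype(float))  # symmetrize, no self-loops
    return np.array(As), Z

rng = np.random.default_rng(0)
Bs_toy = np.full((2, 4, 4), 0.1)
As_toy, Z_toy = sample_dynamic_sbm(100, Bs_toy, np.repeat(0.25, 4), rng)
print(As_toy.shape, Z_toy.shape)
```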
```
K = 4
T = 2
n = 1000
pi = np.repeat(1/K, K)
Bs = np.array([[[0.08, 0.02, 0.18, 0.10],
[0.02, 0.20, 0.04, 0.10],
[0.18, 0.04, 0.02, 0.02],
[0.10, 0.10, 0.02, 0.06]],
[[0.16, 0.16, 0.04, 0.10],
[0.16, 0.16, 0.04, 0.10],
[0.04, 0.04, 0.09, 0.02],
[0.10, 0.10, 0.02, 0.06]]])
As, Z = se.generate_SBM_dynamic(n, Bs, pi)
```
Colour the nodes depending on their community assignment.
```
colours = np.array(list(mpl.colors.TABLEAU_COLORS.keys())[0:K])
Zcol = colours[Z]
```
#### Unfolded adjacency spectral embedding
Embed the nodes into four dimensions by looking at the right embedding of the unfolded adjacency matrix $\textbf{A} = (\textbf{A}^{(1)} | \textbf{A}^{(2)})$. Since the network is a dynamic stochastic block model, we can compute the asymptotic distribution of the embedding as a Gaussian mixture model in both time periods.
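As a hedged numpy-only sketch (not the `se.UASE` implementation), the right unfolded embedding amounts to concatenating the adjacency matrices column-wise, taking a rank-$K$ truncated SVD, and scaling the right singular vectors:

```python
import numpy as np

rng = np.random.default_rng(1)
n, K = 20, 2
# two small symmetric adjacency matrices as stand-ins
A1 = np.triu((rng.random((n, n)) < 0.3), 1).astype(float); A1 += A1.T
A2 = np.triu((rng.random((n, n)) < 0.3), 1).astype(float); A2 += A2.T

A = np.hstack([A1, A2])                 # n x 2n unfolded adjacency matrix
U, s, Vt = np.linalg.svd(A, full_matrices=False)
Y = Vt[:K].T * np.sqrt(s[:K])           # right embedding, shape (2n, K)
Ys = Y.reshape(2, n, K)                 # one n x K embedding per time period
print(Ys.shape)
```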
Note that in all the diagrams that follow, only the first two dimensions of the embeddings are shown for visualisation purposes.
```
_, YAs_UASE = se.UASE(As, K)
Ys_UASE, SigmaYs_UASE = se.SBM_dynamic_distbn(As, Bs, Z, pi, K)
fig, axs = plt.subplots(1, 2, figsize=(9.4,4.4), sharex=True, sharey=True)
for t in range(T):
    axs[t].grid()
    axs[t].scatter(YAs_UASE[t,:,0], YAs_UASE[t,:,1], marker='.', s=5, c=Zcol)
    axs[t].scatter(Ys_UASE[t,:,0], Ys_UASE[t,:,1], marker='o', s=12, c='black')
    for i in range(K):
        ellipse = se.gaussian_ellipse(Ys_UASE[t,i], SigmaYs_UASE[t,i][0:2,0:2]/n)
        axs[t].plot(ellipse[0], ellipse[1],'--', color='black')
    axs[t].set_title('UASE, SBM ' + str(t+1), fontsize=13);
```
Note that the Gaussian distributions for communities 1 and 2 (shown in blue and orange) at time 2 are identical, demonstrating cross-sectional stability. Also, the Gaussian distribution for community 4 (shown in red) is the same at times 1 and 2, demonstrating longitudinal stability.
#### Omnibus embedding
Embed the nodes into four dimensions using the omnibus matrix,
$$
\tilde{\textbf{A}} = \left( \begin{array}{cc}
\textbf{A}^{(1)} & \frac{1}{2}(\textbf{A}^{(1)} + \textbf{A}^{(2)}) \\
\frac{1}{2}(\textbf{A}^{(1)} + \textbf{A}^{(2)}) & \textbf{A}^{(2)}
\end{array} \right).
$$
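As a quick illustration (a sketch, not the `se.omnibus` implementation), the omnibus matrix itself can be assembled with `np.block`:

```python
import numpy as np

rng = np.random.default_rng(0)
# two small symmetric adjacency matrices as stand-ins
A1 = np.triu(rng.integers(0, 2, size=(5, 5)), 1); A1 = A1 + A1.T
A2 = np.triu(rng.integers(0, 2, size=(5, 5)), 1); A2 = A2 + A2.T

# diagonal blocks hold the individual networks,
# off-diagonal blocks hold their average
A_omni = np.block([[A1, (A1 + A2) / 2],
                   [(A1 + A2) / 2, A2]])
print(A_omni.shape)
```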
For this technique, we do not have results about the asymptotic distribution of the embedding. However, we can still say something about the stability of the embedding.
```
YAs_omni = se.omnibus(As, K)
fig, axs = plt.subplots(1, 2, figsize=(9.4,4.4), sharex=True, sharey=True)
for t in range(T):
    axs[t].grid()
    axs[t].scatter(YAs_omni[t,:,0], YAs_omni[t,:,1], marker='.', s=5, c=Zcol)
    axs[t].set_title('Omnibus, SBM ' + str(t+1), fontsize=13);
```
Community 4 (shown in red) is in approximately the same position over the two time periods, suggesting longitudinal stability, but communities 1 and 2 (shown in blue and orange) do not have the same distribution at time 2, so there is no cross-sectional stability.
#### Separate adjacency spectral embedding
Finally, we can always compute the spectral embedding for the adjacency matrix at each time period separately. However, since there is a choice of singular vectors in a singular value decomposition, there is no possible way these embeddings can be consistent over time, so no longitudinal stability. However, in this section we show that adjacency spectral embedding has cross-sectional stability.
Note that, while the matrix $\textbf{B}^{(1)}$ has rank 4, the matrix $\textbf{B}^{(2)}$ has rank 3, due to the repeated rows caused by communities 1 and 2. Therefore, we need to embed the adjacency matrices into different numbers of dimensions. For example, if we tried to embed $\textbf{A}^{(2)}$ into four dimensions, we find that the covariance matrices for the asymptotic Gaussian distributions are degenerate.
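The rank claim above is easy to verify numerically with the matrices from the model definition:

```python
import numpy as np

B1 = np.array([[0.08, 0.02, 0.18, 0.10],
               [0.02, 0.20, 0.04, 0.10],
               [0.18, 0.04, 0.02, 0.02],
               [0.10, 0.10, 0.02, 0.06]])
B2 = np.array([[0.16, 0.16, 0.04, 0.10],
               [0.16, 0.16, 0.04, 0.10],
               [0.04, 0.04, 0.09, 0.02],
               [0.10, 0.10, 0.02, 0.06]])
# B2 loses a rank to the repeated rows/columns for communities 1 and 2
print(np.linalg.matrix_rank(B1), np.linalg.matrix_rank(B2))  # 4 3
```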
```
d = [4,3]
YAs_ASE = [se.ASE(As[0], d[0]), se.ASE(As[1], d[1])]
Y1_ASE, SigmaY1_ASE = se.SBM_distbn(As[0], Bs[0], Z, pi, d[0])
Y2_ASE, SigmaY2_ASE = se.SBM_distbn(As[1], Bs[1], Z, pi, d[1])
Ys_ASE = [Y1_ASE, Y2_ASE]
SigmaYs_ASE = [SigmaY1_ASE, SigmaY2_ASE]
fig, axs = plt.subplots(1, 2, figsize=(9.4,4.4), sharex=True, sharey=True)
for t in range(T):
    axs[t].grid()
    axs[t].scatter(YAs_ASE[t][:,0], YAs_ASE[t][:,1], marker='.', s=5, c=Zcol)
    axs[t].scatter(Ys_ASE[t][:,0], Ys_ASE[t][:,1], marker='o', s=12, c='black')
    for i in range(K):
        ellipse = se.gaussian_ellipse(Ys_ASE[t][i], SigmaYs_ASE[t][i][0:2,0:2]/n)
        axs[t].plot(ellipse[0], ellipse[1],'--', color='black')
    axs[t].set_title('Independent ASE, SBM ' + str(t+1), fontsize=13);
```
At time 2, we see that communities 1 and 2 (shown in blue and orange) have the same distribution, so we have cross-sectional stability.
# Using an external master clock for hardware control of a stage-scanning high NA oblique plane microscope
Tutorial provided by [qi2lab](https://www.shepherdlaboratory.org).
This tutorial uses Pycro-Manager to rapidly acquire terabyte-scale volumetric images using external hardware triggering of a stage scan optimized, high numerical aperture (NA) oblique plane microscope (OPM). The microscope that this notebook controls is described in detail in this [preprint](https://www.biorxiv.org/content/10.1101/2020.04.07.030569v2), under the *stage scan OPM* section in the methods.
This high NA OPM allows for versatile, high-resolution, and large field-of-view single molecule imaging. The main application is quantifying 3D spatial gene expression in millions of cells or large pieces of intact tissue using iterative RNA-FISH (see examples [here](https://www.nature.com/articles/s41598-018-22297-7) and [here](https://www.nature.com/articles/s41598-019-43943-8)). Because the fluidics controller for the iterative labeling is also controlled via Python (code not provided here), using Pycro-Manager greatly simplifies controlling these complex experiments.
The tutorial highlights the use of the `post_camera_hook_fn` and `post_hardware_hook_fn` functionality to allow an external controller to synchronize the microscope acquisition (external master). This is different from the standard hardware sequencing functionality in Pycro-Manager, where the acquisition engine sets up sequencable hardware and the camera serves as the master clock.
The tutorial also discusses how to structure the events and avoid timeouts in order to acquire >10 million events per acquisition.
## Microscope hardware
Briefly, the stage scan high NA OPM is built around a [bespoke tertiary objective](https://andrewgyork.github.io/high_na_single_objective_lightsheet/) designed by Alfred Millet-Sikking and Andrew York at Calico Labs. Stage scanning is performed by an ASI scan optimized XY stage, an ASI FTP Z stage, and an ASI Tiger controller with a programmable logic card. Excitation light is provided by a Coherent OBIS Laser Box. A custom Teensy based DAC synchronizes laser emission and a galvanometer mirror to the scan stage motion to eliminate motion blur. Emitted fluorescence is imaged by a Photometrics Prime BSI.
The ASI Tiger controller is the master clock in this experiment. The custom Teensy DAC is setup in a closed loop with the Photometrics camera. This controller is detailed in a previous [publication](https://www.nature.com/articles/s41467-017-00514-7) on adaptive light sheet microscopy.
The code to orthogonally deskew the acquired data and place it into a BigDataViewer HDF5 file, which can then be read, stitched, and fused using BigStitcher, is found at the qi2lab OPM repository (www.github.com/qi2lab/OPM/).
## Initial setup
### Imports
```
from pycromanager import Bridge, Acquisition
import numpy as np
from pathlib import Path
from time import sleep
```
### Create bridge to Micro-Manager
```
with Bridge() as bridge:
    core = bridge.get_core()
```
## Define pycromanager specific hook functions for externally controlled hardware acquisition
### Post camera hook function to start external controller
This is run once after the camera is put into active mode in the sequence acquisition. The stage starts moving on this command and outputs a TTL pulse to the camera when it passes the preset initial position. This TTL starts the camera running at the set exposure time using internal timing. The camera then acts as the master clock for the galvo/laser controller via its own "exposure out" signal.
```
def post_camera_hook(event,bridge,event_queue):
    """
    Run a set of commands after the camera is started

    :param event: current list of events, each a dictionary, to run in this hardware sequence
    :type event: list
    :param bridge: pycro-manager java bridge
    :type bridge: pycromanager.core.Bridge
    :param event_queue: thread-safe event queue
    :type event_queue: multiprocessing.Queue
    :return: event
    """
    # acquire core from bridge
    core = bridge.get_core()
    # send Tiger command to start constant speed scan
    command = '1SCAN'
    core.set_property('TigerCommHub','SerialCommand',command)
    return event
```
### Post hardware setup function to make sure external controller is ready
This is run once after the acquisition engine sets up the hardware for the non-sequencable hardware, such as the height axis stage and channel.
```
def post_hardware_hook(event,bridge,event_queue):
    """
    Run a set of commands after the hardware setup calls by the acquisition engine are finished

    :param event: current list of events, each a dictionary, to run in this hardware sequence
    :type event: list
    :param bridge: pycro-manager java bridge
    :type bridge: pycromanager.core.Bridge
    :param event_queue: thread-safe event queue
    :type event_queue: multiprocessing.Queue
    :return: event
    """
    # acquire core from bridge
    core = bridge.get_core()
    # turn on 'transmit repeated commands' for Tiger
    core.set_property('TigerCommHub','OnlySendSerialCommandOnChange','No')
    # check to make sure Tiger is not busy
    ready = 'B'
    while ready != 'N':
        command = 'STATUS'
        core.set_property('TigerCommHub','SerialCommand',command)
        ready = core.get_property('TigerCommHub','SerialResponse')
        sleep(.500)
    # turn off 'transmit repeated commands' for Tiger
    core.set_property('TigerCommHub','OnlySendSerialCommandOnChange','Yes')
    return event
```
## Acquisition parameters set by user
### Select laser channels and powers
```
# lasers to use
# 0 -> inactive
# 1 -> active
state_405 = 0
state_488 = 0
state_561 = 1
state_635 = 0
state_730 = 0
# laser powers (0 -> 100%)
power_405 = 0
power_488 = 0
power_561 = 0
power_635 = 0
power_730 = 0
# construct arrays for laser information
channel_states = [state_405,state_488,state_561,state_635,state_730]
channel_powers = [power_405,power_488,power_561,power_635,power_730]
```
### Camera parameters
```
# FOV parameters.
# x size (256) is the Rayleigh length of oblique light sheet excitation
# y size (1600) is the high quality lateral extent of the remote image system (~180 microns)
# camera is oriented so that cropping the x size limits the number of readout rows, thereby lowering the readout time
ROI = [1024, 0, 256, 1600] #unit: pixels
# camera exposure
exposure_ms = 5 #unit: ms
# camera pixel size
pixel_size_um = .115 #unit: um
```
### Stage scan parameters
The user defines these by interactively moving the XY and Z stages around the sample. At the edges of the sample, the user records the positions.
```
# distance between adjacent images.
scan_axis_step_um = 0.2 #unit: um
# scan axis limits. Use stage positions reported by Micromanager
scan_axis_start_um = 0. #unit: um
scan_axis_end_um = 5000. #unit: um
# tile axis limits. Use stage positions reported by Micromanager
tile_axis_start_um = 0. #unit: um
tile_axis_end_um = 5000. #unit: um
# height axis limits. Use stage positions reported by Micromanager
height_axis_start_um = 0.#unit: um
height_axis_end_um = 30. #unit: um
```
### Path to save acquistion data
```
save_directory = Path('/path/to/save')
save_name = 'test'
```
## Setup hardware for stage scanning sample through oblique digitally scanned light sheet
### Calculate stage limits and speeds from user provided scan parameters
Here, the number of events along the scan (x) axis in each acquisition, the overlap between adjacent strips along the tile (y) axis, and the overlap between adjacent strips along the height (z) axis are all calculated.
```
# scan axis setup
scan_axis_step_mm = scan_axis_step_um / 1000. #unit: mm
scan_axis_start_mm = scan_axis_start_um / 1000. #unit: mm
scan_axis_end_mm = scan_axis_end_um / 1000. #unit: mm
scan_axis_range_um = np.abs(scan_axis_end_um-scan_axis_start_um) # unit: um
scan_axis_range_mm = scan_axis_range_um / 1000 #unit: mm
actual_exposure_s = exposure_ms / 1000. #unit: s (the set camera exposure approximates the actual readout time)
scan_axis_speed = np.round(scan_axis_step_mm / actual_exposure_s,2) #unit: mm/s
scan_axis_positions = np.rint(scan_axis_range_mm / scan_axis_step_mm).astype(int) #unit: number of positions
# tile axis setup
tile_axis_overlap=0.2 #unit: percentage
tile_axis_range_um = np.abs(tile_axis_end_um - tile_axis_start_um) #unit: um
tile_axis_range_mm = tile_axis_range_um / 1000 #unit: mm
tile_axis_ROI = ROI[3]*pixel_size_um #unit: um
tile_axis_step_um = np.round((tile_axis_ROI) * (1-tile_axis_overlap),2) #unit: um
tile_axis_step_mm = tile_axis_step_um / 1000 #unit: mm
tile_axis_positions = np.rint(tile_axis_range_mm / tile_axis_step_mm).astype(int) #unit: number of positions
# if tile_axis_positions rounded to zero, make sure acquisition visits at least one position
if tile_axis_positions == 0:
    tile_axis_positions = 1
# height axis setup
# this is more complicated, because the excitation is an oblique light sheet
# the height of the scan is the length of the ROI in the tilted direction * sin(tilt angle)
height_axis_overlap=0.2 #unit: percentage
height_axis_range_um = np.abs(height_axis_end_um-height_axis_start_um) #unit: um
height_axis_range_mm = height_axis_range_um / 1000 #unit: mm
height_axis_ROI = ROI[2]*pixel_size_um*np.sin(30*(np.pi/180.)) #unit: um
height_axis_step_um = np.round((height_axis_ROI)*(1-height_axis_overlap),2) #unit: um
height_axis_step_mm = height_axis_step_um / 1000 #unit: mm
height_axis_positions = np.rint(height_axis_range_mm / height_axis_step_mm).astype(int) #unit: number of positions
# if height_axis_positions rounded to zero, make sure acquisition visits at least one position
if height_axis_positions == 0:
    height_axis_positions = 1
```
### Setup Coherent laser box from user provided laser parameters
```
with Bridge() as bridge:
    core = bridge.get_core()
    # turn off lasers
    # this relies on a Micro-Manager configuration group that sets all lasers to "off" state
    core.set_config('Coherent-State','off')
    core.wait_for_config('Coherent-State','off')
    # set lasers to user defined power
    core.set_property('Coherent-Scientific Remote','Laser 405-100C - PowerSetpoint (%)',channel_powers[0])
    core.set_property('Coherent-Scientific Remote','Laser 488-150C - PowerSetpoint (%)',channel_powers[1])
    core.set_property('Coherent-Scientific Remote','Laser OBIS LS 561-150 - PowerSetpoint (%)',channel_powers[2])
    core.set_property('Coherent-Scientific Remote','Laser 637-140C - PowerSetpoint (%)',channel_powers[3])
    core.set_property('Coherent-Scientific Remote','Laser 730-30C - PowerSetpoint (%)',channel_powers[4])
```
### Setup Photometrics camera for low-noise readout and triggering
The camera input trigger is set to `Trigger first` mode to allow for external control and the output trigger is set to `Rolling Shutter` mode to ensure that laser light is only delivered when the entire chip is exposed. The custom Teensy DAC waits for the signal from the camera to go HIGH and then sweeps a Gaussian pencil beam once across the field-of-view. It then rapidly resets and scans again upon the next trigger. The Teensy additionally blanks the Coherent laser box emission between frames.
```
with Bridge() as bridge:
    core = bridge.get_core()
    # set camera into 16bit readout mode
    core.set_property('Camera','ReadoutRate','100MHz 16bit')
    # give camera time to change modes
    sleep(5)
    # set camera into low noise readout mode
    core.set_property('Camera','Gain','2-CMS')
    # give camera time to change modes
    sleep(5)
    # set camera to give an exposure out signal
    # this signal is used by the custom DAC to synchronize blanking and a digitally swept light sheet
    core.set_property('Camera','ExposureOut','Rolling Shutter')
    # give camera time to change modes
    sleep(5)
    # change camera timeout.
    # this is necessary because the acquisition engine can take a long time to setup with millions of events
    # on the first run
    core.set_property('Camera','Trigger Timeout (secs)',300)
    # give camera time to change modes
    sleep(5)
    # set camera to internal trigger
    core.set_property('Camera','TriggerMode','Internal Trigger')
    # give camera time to change modes
    sleep(5)
```
### Setup ASI stage control cards and programmable logic card in the Tiger controller
Hardware is setup for a constant-speed scan along the `x` direction, lateral tiling along the `y` direction, and height tiling along the `z` direction. The programmable logic card sends a signal to the camera to start acquiring once the scan (x) axis reaches the desired speed and crosses the user defined start position.
Documentation for the specific commands to setup the constant speed stage scan on the Tiger controller is at the following links,
- [SCAN](http://asiimaging.com/docs/commands/scan)
- [SCANR](http://asiimaging.com/docs/commands/scanr)
- [SCANV](http://www.asiimaging.com/docs/commands/scanv)
Documentation for the programmable logic card is found [here](http://www.asiimaging.com/docs/tiger_programmable_logic_card?s[]=plc).
The Tiger is polled after each command to make sure that it is ready to receive another command.
```
with Bridge() as bridge:
    core = bridge.get_core()
    # Setup the PLC to output external TTL when an internal signal is received from the stage scanning card
    plcName = 'PLogic:E:36'
    propPosition = 'PointerPosition'
    propCellConfig = 'EditCellConfig'
    addrOutputBNC3 = 35
    addrStageSync = 46 # TTL5 on Tiger backplane = stage sync signal
    core.set_property(plcName, propPosition, addrOutputBNC3)
    core.set_property(plcName, propCellConfig, addrStageSync)
    # turn on 'transmit repeated commands' for Tiger
    core.set_property('TigerCommHub','OnlySendSerialCommandOnChange','No')
    # set tile (y) axis speed to 25% of maximum for all moves
    command = 'SPEED Y=.25'
    core.set_property('TigerCommHub','SerialCommand',command)
    # check to make sure Tiger is not busy
    ready = 'B'
    while ready != 'N':
        command = 'STATUS'
        core.set_property('TigerCommHub','SerialCommand',command)
        ready = core.get_property('TigerCommHub','SerialResponse')
        sleep(.500)
    # set scan (x) axis speed to 25% of maximum for non-sequenced moves
    command = 'SPEED X=.25'
    core.set_property('TigerCommHub','SerialCommand',command)
    # check to make sure Tiger is not busy
    ready = 'B'
    while ready != 'N':
        command = 'STATUS'
        core.set_property('TigerCommHub','SerialCommand',command)
        ready = core.get_property('TigerCommHub','SerialResponse')
        sleep(.500)
    # turn off 'transmit repeated commands' for Tiger
    core.set_property('TigerCommHub','OnlySendSerialCommandOnChange','Yes')
    # turn on 'transmit repeated commands' for Tiger
    core.set_property('TigerCommHub','OnlySendSerialCommandOnChange','No')
    # set scan (x) axis speed to correct speed for constant speed movement of scan (x) axis
    # expects mm/s
    command = 'SPEED X='+str(scan_axis_speed)
    core.set_property('TigerCommHub','SerialCommand',command)
    # check to make sure Tiger is not busy
    ready = 'B'
    while ready != 'N':
        command = 'STATUS'
        core.set_property('TigerCommHub','SerialCommand',command)
        ready = core.get_property('TigerCommHub','SerialResponse')
        sleep(.500)
    # set scan (x) axis to true 1D scan with no backlash
    command = '1SCAN X? Y=0 Z=9 F=0'
    core.set_property('TigerCommHub','SerialCommand',command)
    # check to make sure Tiger is not busy
    ready = 'B'
    while ready != 'N':
        command = 'STATUS'
        core.set_property('TigerCommHub','SerialCommand',command)
        ready = core.get_property('TigerCommHub','SerialResponse')
        sleep(.500)
    # set range and return speed (25% of max) for constant speed movement of scan (x) axis
    # expects mm
    command = '1SCANR X='+str(scan_axis_start_mm)+' Y='+str(scan_axis_end_mm)+' R=25'
    core.set_property('TigerCommHub','SerialCommand',command)
    # check to make sure Tiger is not busy
    ready = 'B'
    while ready != 'N':
        command = 'STATUS'
        core.set_property('TigerCommHub','SerialCommand',command)
        ready = core.get_property('TigerCommHub','SerialResponse')
        sleep(.500)
    # turn off 'transmit repeated commands' for Tiger
    core.set_property('TigerCommHub','OnlySendSerialCommandOnChange','Yes')
```
## Setup and run the acquisition
### Change core timeout
This is necessary because of the large, slow XY stage moves.
```
with Bridge() as bridge:
    core = bridge.get_core()
    # change core timeout for long stage moves
    core.set_property('Core','TimeoutMs',20000)
```
### Move stage hardware to initial positions
```
with Bridge() as bridge:
    core = bridge.get_core()
    # grab the stage device names from the core
    xy_stage = core.get_xy_stage_device()
    z_stage = core.get_focus_device()
    # move scan (x) and tile (y) stages to starting positions
    core.set_xy_position(scan_axis_start_um,tile_axis_start_um)
    core.wait_for_device(xy_stage)
    # move height (z) stage to starting position
    core.set_position(height_axis_start_um)
    core.wait_for_device(z_stage)
```
### Create event structure
The external controller handles all of the events in `x` for a given `yzc` position. To make sure that Pycro-Manager structures the acquisition this way, the value of the stage position for `x` is kept constant for all events at a given `yzc` position. This gives the order of the loops used to create the event structure as `yzcx`.
```
# empty event list
events = []
# Micro-Manager channel configuration names, indexed the same way as channel_states
channel_configs = ['405nm','488nm','561nm','637nm','730nm']
# loop over all tile (y) positions
for y in range(tile_axis_positions):
    # update tile (y) axis position
    tile_position_um = tile_axis_start_um+(tile_axis_step_um*y)
    # loop over all height (z) positions
    for z in range(height_axis_positions):
        # update height (z) axis position
        height_position_um = height_axis_start_um+(height_axis_step_um*z)
        # loop over all channels (c)
        for c in range(len(channel_states)):
            # only create events if the user set this laser to active.
            # This relies on a Micro-Manager group 'Coherent-State' with individual entries
            # that correspond to the correct on/off state of each laser. Laser blanking and
            # synchronization are handled by the custom Teensy DAC controller.
            if channel_states[c]==1:
                # create events for all scan (x) axis positions.
                # The acquisition engine knows that this is a hardware triggered sequence because
                # the physical x position does not change when specifying the large number of x events
                for x in range(scan_axis_positions):
                    evt = {'axes': {'x': x, 'y': y, 'z': z},
                           'x': scan_axis_start_um, 'y': tile_position_um, 'z': height_position_um,
                           'channel': {'group': 'Coherent-State', 'config': channel_configs[c]}}
                    events.append(evt)
```
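A toy version of this loop order makes the event count easy to sanity-check (the numbers here are arbitrary stand-ins for the real stage positions):

```python
# Toy yzcx event construction: only active channels generate x events
tile_axis_positions, height_axis_positions, scan_axis_positions = 2, 2, 3
channel_states = [0, 0, 1, 0, 0]   # one active laser, as in the 561 nm example
toy_events = []
for y in range(tile_axis_positions):
    for z in range(height_axis_positions):
        for c, state in enumerate(channel_states):
            if state == 1:
                for x in range(scan_axis_positions):
                    toy_events.append({'axes': {'x': x, 'y': y, 'z': z}})
print(len(toy_events))  # 2 tiles * 2 heights * 1 channel * 3 scan steps = 12
```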
### Run acquisition
- The camera is set to `Trigger first` mode. In this mode, the camera waits for an external trigger and then runs using the internal timing.
- The acquisition is setup and started. The initial acquisition setup by Pycro-Manager and the Java acquisition engine takes a few minutes and requires a significant amount of RAM allocated to ImageJ; 40 GB of RAM seems acceptable. The circular buffer is only allocated 2 GB, because the computer for this experiment has an SSD array capable of writing up to 600 MBps.
- At each `yzc` position, the ASI Tiger controller supplies the external master signal when the scan (x) axis has ramped up to the correct constant speed and crossed `scan_axis_start_um`. The speed is defined by `scan_axis_speed = scan_axis_step_um / camera_exposure_ms`. Acquired images are placed into the `x` axis of the Acquisition without Pycro-Manager interacting with the hardware.
- Once the full acquisition is completed, all lasers are set to `off` and the camera is placed back in `Internal Trigger` mode.
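A quick numerical check of the speed relation above (um per ms equals mm/s, the unit the Tiger `SPEED` command expects; the values are the defaults from earlier in this notebook):

```python
scan_axis_step_um = 0.2   # distance between adjacent images (um)
exposure_ms = 5.0         # camera exposure (ms)
# um/ms == mm/s, so no extra unit conversion is needed
scan_axis_speed = scan_axis_step_um / exposure_ms
print(scan_axis_speed)  # 0.04 mm/s
```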
```
with Bridge() as bridge:
    core = bridge.get_core()
    # set camera to trigger first mode for stage synchronization
    core.set_property('Camera','TriggerMode','Trigger first')
    # give camera time to change modes
    sleep(5)
    # run acquisition
    # the acquisition needs to write data at roughly 100-500 MBps depending on frame rate and ROI,
    # so the display is set to off and no multi-resolution calculations are done
    with Acquisition(directory=save_directory, name=save_name, post_hardware_hook_fn=post_hardware_hook,
                     post_camera_hook_fn=post_camera_hook, show_display=False, max_multi_res_index=0) as acq:
        acq.acquire(events)
    # turn off lasers
    core.set_config('Coherent-State','off')
    core.wait_for_config('Coherent-State','off')
    # set camera to internal trigger
    core.set_property('Camera','TriggerMode','Internal Trigger')
    # give camera time to change modes
    sleep(5)
```
# Distribution Plots
Let's discuss some plots that allow us to visualize the distribution of a data set. These plots are:
* distplot
* jointplot
* pairplot
* rugplot
* kdeplot
___
## Imports
```
import seaborn as sns
%matplotlib inline
```
## Data
Seaborn comes with built-in data sets!
```
tips = sns.load_dataset('tips')
tips.head()
```
## distplot
The distplot shows the distribution of a univariate set of observations.
```
sns.distplot(tips['total_bill'])
# Safe to ignore warnings
```
To remove the kde layer and just have the histogram, use:
```
sns.distplot(tips['total_bill'],kde=False,bins=30)
```
## jointplot
jointplot() allows you to match up two distplots for bivariate data, with your choice of the **kind** parameter to compare with:
* “scatter”
* “reg”
* “resid”
* “kde”
* “hex”
```
sns.jointplot(x='total_bill',y='tip',data=tips,kind='scatter')
sns.jointplot(x='total_bill',y='tip',data=tips,kind='hex')
sns.jointplot(x='total_bill',y='tip',data=tips,kind='reg')
```
## pairplot
pairplot will plot pairwise relationships across an entire dataframe (for the numerical columns) and supports a color hue argument (for categorical columns).
```
sns.pairplot(tips)
sns.pairplot(tips,hue='sex',palette='coolwarm')
```
## rugplot
rugplots are actually a very simple concept: they just draw a dash mark for every point on a univariate distribution. They are the building block of a KDE plot:
```
sns.rugplot(tips['total_bill'])
```
## kdeplot
kdeplots are [Kernel Density Estimation plots](http://en.wikipedia.org/wiki/Kernel_density_estimation#Practical_estimation_of_the_bandwidth). These KDE plots replace every single observation with a Gaussian (Normal) distribution centered around that value. For example:
```
# Don't worry about understanding this code!
# It's just for the diagram below
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
#Create dataset
dataset = np.random.randn(25)
# Create another rugplot
sns.rugplot(dataset);
# Set up the x-axis for the plot
x_min = dataset.min() - 2
x_max = dataset.max() + 2
# 100 equally spaced points from x_min to x_max
x_axis = np.linspace(x_min,x_max,100)
# Set up the bandwidth, for info on this:
url = 'http://en.wikipedia.org/wiki/Kernel_density_estimation#Practical_estimation_of_the_bandwidth'
bandwidth = ((4*dataset.std()**5)/(3*len(dataset)))**.2
# Create an empty kernel list
kernel_list = []
# Plot each basis function
for data_point in dataset:
    # Create a kernel for each point and append to list
    kernel = stats.norm(data_point,bandwidth).pdf(x_axis)
    kernel_list.append(kernel)

    # Scale for plotting
    kernel = kernel / kernel.max()
    kernel = kernel * .4
    plt.plot(x_axis,kernel,color = 'grey',alpha=0.5)

plt.ylim(0,1)
# To get the kde plot we can sum these basis functions.
# Plot the sum of the basis function
sum_of_kde = np.sum(kernel_list,axis=0)
# Plot figure
fig = plt.plot(x_axis,sum_of_kde,color='indianred')
# Add the initial rugplot
sns.rugplot(dataset,c = 'indianred')
# Get rid of y-tick marks
plt.yticks([])
# Set title
plt.suptitle("Sum of the Basis Functions")
```
So with our tips dataset:
```
sns.kdeplot(tips['total_bill'])
sns.rugplot(tips['total_bill'])
sns.kdeplot(tips['tip'])
sns.rugplot(tips['tip'])
```
# Great Job!
<table align="center">
<td align="center"><a target="_blank" href="http://introtodeeplearning.com">
<img src="http://introtodeeplearning.com/images/colab/mit.png" style="padding-bottom:5px;" />
Visit MIT Deep Learning</a></td>
<td align="center"><a target="_blank" href="https://colab.research.google.com/github/aamini/introtodeeplearning/blob/master/lab1/Part2_Music_Generation.ipynb">
<img src="http://introtodeeplearning.com/images/colab/colab.png?v2.0" style="padding-bottom:5px;" />Run in Google Colab</a></td>
<td align="center"><a target="_blank" href="https://github.com/aamini/introtodeeplearning/blob/master/lab1/Part2_Music_Generation.ipynb">
<img src="http://introtodeeplearning.com/images/colab/github.png" height="70px" style="padding-bottom:5px;" />View Source on GitHub</a></td>
</table>
# Copyright Information
```
# Copyright 2020 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.
#
# Licensed under the MIT License. You may not use this file except in compliance
# with the License. Use and/or modification of this code outside of 6.S191 must
# reference:
#
# © MIT 6.S191: Introduction to Deep Learning
# http://introtodeeplearning.com
#
```
# Lab 1: Intro to TensorFlow and Music Generation with RNNs
# Part 2: Music Generation with RNNs
In this portion of the lab, we will explore building a Recurrent Neural Network (RNN) for music generation. We will train a model to learn the patterns in raw sheet music in [ABC notation](https://en.wikipedia.org/wiki/ABC_notation) and then use this model to generate new music.
## 2.1 Dependencies
First, let's download the course repository, install dependencies, and import the relevant packages we'll need for this lab.
```
# Import Tensorflow 2.0
%tensorflow_version 2.x
import tensorflow as tf
# Download and import the MIT 6.S191 package
!pip install mitdeeplearning
import mitdeeplearning as mdl
# Import all remaining packages
import numpy as np
import os
import time
import functools
from IPython import display as ipythondisplay
from tqdm import tqdm
!apt-get install abcmidi timidity > /dev/null 2>&1
# Check that we are using a GPU, if not switch runtimes
# using Runtime > Change Runtime Type > GPU
assert len(tf.config.list_physical_devices('GPU')) > 0
```
## 2.2 Dataset
We've gathered a dataset of thousands of Irish folk songs, represented in the ABC notation. Let's download the dataset and inspect it:
```
# Download the dataset
songs = mdl.lab1.load_training_data()
# Print one of the songs to inspect it in greater detail!
example_song = songs[0]
print("\nExample song: ")
print(example_song)
```
We can easily convert a song in ABC notation to an audio waveform and play it back. Be patient while this conversion runs; it can take some time.
```
# Convert the ABC notation to audio file and listen to it
mdl.lab1.play_song(example_song)
s = "hi my name is "
set(s)
# set() of a string returns the set of unique characters in the string
```
One important thing to think about is that this notation of music does not simply contain information on the notes being played; there is also meta-information such as the song title, key, and tempo. How does the number of different characters that are present in the text file impact the complexity of the learning problem? This will become important soon, when we generate a numerical representation for the text data.
```
# Join our list of song strings into a single string containing all songs
songs_joined = "\n\n".join(songs)
# Find all unique characters in the joined string
vocab = sorted(set(songs_joined)) # vocab is a sorted list of the unique characters found in songs_joined
print("There are", len(vocab), "unique characters in the dataset")
```
## 2.3 Process the dataset for the learning task
Let's take a step back and consider our prediction task. We're trying to train a RNN model to learn patterns in ABC music, and then use this model to generate (i.e., predict) a new piece of music based on this learned information.
Breaking this down, what we're really asking the model is: given a character, or a sequence of characters, what is the most probable next character? We'll train the model to perform this task.
To achieve this, we will input a sequence of characters to the model, and train the model to predict the output, that is, the following character at each time step. RNNs maintain an internal state that depends on previously seen elements, so information about all characters seen up until a given moment will be taken into account in generating the prediction.
### Vectorize the text
Before we begin training our RNN model, we'll need to create a numerical representation of our text-based dataset. To do this, we'll generate two lookup tables: one that maps characters to numbers, and a second that maps numbers back to characters. Recall that we just identified the unique characters present in the text.
```
### Define numerical representation of text ###
# Create a mapping from character to unique index.
# For example, to get the index of the character "d",
# we can evaluate `char2idx["d"]`.
char2idx = {u:i for i, u in enumerate(vocab)} # assign a number to a unique character
# Create a mapping from indices to characters. This is
# the inverse of char2idx and allows us to convert back
# from unique index to the character in our vocabulary.
idx2char = np.array(vocab) # index of character in idx2char matches up with char2idx
print(idx2char[2])
print(char2idx["!"])
```
This gives us an integer representation for each character. Observe that the unique characters (i.e., our vocabulary) in the text are mapped as indices from 0 to `len(unique)`. Let's take a peek at this numerical representation of our dataset:
```
print('{')
for char,_ in zip(char2idx, range(20)):
    print(' {:4s}: {:3d},'.format(repr(char), char2idx[char])) # {:4s} and {:3d} add padding to the print
print(' ...\n}')
### Vectorize the songs string ###
'''TODO: Write a function to convert the all songs string to a vectorized
(i.e., numeric) representation. Use the appropriate mapping
above to convert from vocab characters to the corresponding indices.
NOTE: the output of the `vectorize_string` function
should be a np.array with `N` elements, where `N` is
the number of characters in the input string
'''
def vectorize_string(string):
    return np.array([char2idx[c] for c in string])

vectorized_songs = vectorize_string(songs_joined)
vectorized_songs.shape[0] - 1
```
We can also look at how the first part of the text is mapped to an integer representation:
```
print ('{} ---- characters mapped to int ----> {}'.format(repr(songs_joined[:10]), vectorized_songs[:10]))
# check that vectorized_songs is a numpy array
assert isinstance(vectorized_songs, np.ndarray), "returned result should be a numpy array"
```
### Create training examples and targets
Our next step is to actually divide the text into example sequences that we'll use during training. Each input sequence that we feed into our RNN will contain `seq_length` characters from the text. We'll also need to define a target sequence for each input sequence, which will be used in training the RNN to predict the next character. For each input, the corresponding target will contain the same length of text, except shifted one character to the right.
To do this, we'll break the text into chunks of `seq_length+1`. Suppose `seq_length` is 4 and our text is "Hello". Then, our input sequence is "Hell" and the target sequence is "ello".
The batch method will then let us convert this stream of character indices to sequences of the desired size.
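The input/target split described above can be sketched in plain Python, using the "Hello" example rather than the real dataset (the helper name `split_input_target` is ours, not part of the lab code):

```python
# A chunk of seq_length + 1 characters yields an input of the first
# seq_length characters and a target shifted one character to the right.
def split_input_target(chunk):
    return chunk[:-1], chunk[1:]

seq_length = 4
text = "Hello"
chunk = text[:seq_length + 1]               # "Hello"
input_seq, target_seq = split_input_target(chunk)
print(input_seq, target_seq)                # Hell ello
```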
```
### Batch definition to create training examples ###
def get_batch(vectorized_songs, seq_length, batch_size):
    # the length of the vectorized songs string
    n = vectorized_songs.shape[0] - 1
    # randomly choose the starting indices for the examples in the training batch
    idx = np.random.choice(n-seq_length, batch_size)  # batch_size random draws from np.arange(n-seq_length)

    '''TODO: construct a list of input sequences for the training batch'''
    input_batch = [vectorized_songs[i : i+seq_length] for i in idx]

    '''TODO: construct a list of output sequences for the training batch'''
    output_batch = [vectorized_songs[i+1 : i+seq_length+1] for i in idx]

    # x_batch, y_batch provide the true inputs and targets for network training
    x_batch = np.reshape(input_batch, [batch_size, seq_length])
    y_batch = np.reshape(output_batch, [batch_size, seq_length])
    return x_batch, y_batch
# Perform some simple tests to make sure your batch function is working properly!
test_args = (vectorized_songs, 10, 2)
if not mdl.lab1.test_batch_func_types(get_batch, test_args) or \
   not mdl.lab1.test_batch_func_shapes(get_batch, test_args) or \
   not mdl.lab1.test_batch_func_next_step(get_batch, test_args):
    print("======\n[FAIL] could not pass tests")
else:
    print("======\n[PASS] passed all tests!")
```
For each of these vectors, each index is processed at a single time step. So, for the input at time step 0, the model receives the index for the first character in the sequence, and tries to predict the index of the next character. At the next timestep, it does the same thing, but the RNN considers the information from the previous step, i.e., its updated state, in addition to the current input.
We can make this concrete by taking a look at how this works over the first several characters in our text:
```
x_batch, y_batch = get_batch(vectorized_songs, seq_length=5, batch_size=1)
# np.squeeze removes the singleton batch dimension, giving a (seq_length,) vector
# zip pairs the input and target arrays into tuples ((x1, y1), ..., (xn, yn))
# repr returns a printable version of an object
# this walks through the batch as (input, target) pairs, not a train/test split
for i, (input_idx, target_idx) in enumerate(zip(np.squeeze(x_batch), np.squeeze(y_batch))):
    print("Step {:3d}".format(i))
    print("  input: {} ({:s})".format(input_idx, repr(idx2char[input_idx])))
    print("  expected output: {} ({:s})".format(target_idx, repr(idx2char[target_idx])))
```
## 2.4 The Recurrent Neural Network (RNN) model
Now we're ready to define and train a RNN model on our ABC music dataset, and then use that trained model to generate a new song. We'll train our RNN using batches of song snippets from our dataset, which we generated in the previous section.
The model is based on the LSTM architecture, where we use a state vector to maintain information about the temporal relationships between consecutive characters. The final output of the LSTM is then fed into a fully connected [`Dense`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense) layer where we'll output a softmax over each character in the vocabulary, and then sample from this distribution to predict the next character.
As we introduced in the first portion of this lab, we'll be using the Keras API, specifically, [`tf.keras.Sequential`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential), to define the model. Three layers are used to define the model:
* [`tf.keras.layers.Embedding`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Embedding): This is the input layer, consisting of a trainable lookup table that maps the numbers of each character to a vector with `embedding_dim` dimensions.
* [`tf.keras.layers.LSTM`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/LSTM): Our LSTM network, with size `units=rnn_units`.
* [`tf.keras.layers.Dense`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense): The output layer, with `vocab_size` outputs.
<img src="https://raw.githubusercontent.com/aamini/introtodeeplearning/2019/lab1/img/lstm_unrolled-01-01.png" alt="Drawing"/>
### Define the RNN model
Now, we will define a function that we will use to actually build the model.
```
def LSTM(rnn_units):
    return tf.keras.layers.LSTM(
        rnn_units,
        return_sequences=True,
        recurrent_initializer='glorot_uniform',
        recurrent_activation='sigmoid',
        stateful=True,
    )
```
The time has come! Fill in the `TODOs` to define the RNN model within the `build_model` function, and then call the function you just defined to instantiate the model!
```
### Defining the RNN Model ###
'''TODO: Add LSTM and Dense layers to define the RNN model using the Sequential API.'''
def build_model(vocab_size, embedding_dim, rnn_units, batch_size):
    model = tf.keras.Sequential([
        # Layer 1: Embedding layer to transform indices into dense vectors
        # of a fixed embedding size
        tf.keras.layers.Embedding(vocab_size, embedding_dim, batch_input_shape=[batch_size, None]),

        # Layer 2: LSTM with `rnn_units` number of units.
        # TODO: Call the LSTM function defined above to add this layer.
        LSTM(rnn_units),

        # Layer 3: Dense (fully-connected) layer that transforms the LSTM output
        # into the vocabulary size.
        # TODO: Add the Dense layer.
        tf.keras.layers.Dense(vocab_size)
    ])
    return model
# Build a simple model with default hyperparameters. You will get the
# chance to change these later.
model = build_model(len(vocab), embedding_dim=256, rnn_units=1024, batch_size=32)
```
### Test out the RNN model
It's always a good idea to run a few simple checks on our model to see that it behaves as expected.
First, we can use the `Model.summary` function to print out a summary of our model's internal workings. Here we can check the layers in the model, the shape of the output of each of the layers, the batch size, etc.
```
model.summary()
```
We can also quickly check the dimensionality of our output, using a sequence length of 100. Note that the model can be run on inputs of any length.
```
x, y = get_batch(vectorized_songs, seq_length=100, batch_size=32)
pred = model(x)
print("Input shape: ", x.shape, " # (batch_size, sequence_length)")
print("Prediction shape: ", pred.shape, "# (batch_size, sequence_length, vocab_size)")
```
### Predictions from the untrained model
Let's take a look at what our untrained model is predicting.
To get actual predictions from the model, we sample from the output distribution, which is defined by a `softmax` over our character vocabulary. This will give us actual character indices. This means we are using a [categorical distribution](https://en.wikipedia.org/wiki/Categorical_distribution) to sample over the example prediction. This gives a prediction of the next character (specifically its index) at each timestep.
Note here that we sample from this probability distribution, as opposed to simply taking the `argmax`, which can cause the model to get stuck in a loop.
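The difference between `argmax` decoding and sampling can be illustrated with plain NumPy (the logits below are toy values over a 3-character vocabulary, not the model's actual output):

```python
import numpy as np

# Toy logits over a 3-character vocabulary (illustrative values only).
logits = np.array([0.2, 0.1, 0.15])
probs = np.exp(logits) / np.exp(logits).sum()  # softmax

# argmax always returns the same index, which is how repeated-character
# loops arise during generation.
greedy = [int(np.argmax(probs)) for _ in range(5)]

# Sampling from the categorical distribution yields varied indices.
rng = np.random.default_rng(0)
sampled = rng.choice(len(probs), size=100, p=probs)

print(greedy)                 # [0, 0, 0, 0, 0]
print(len(set(sampled.tolist())) > 1)  # True
```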
Let's try this sampling out for the first example in the batch.
```
sampled_indices = tf.random.categorical(pred[0], num_samples=1)
sampled_indices = tf.squeeze(sampled_indices,axis=-1).numpy()
sampled_indices
```
We can now decode these to see the text predicted by the untrained model:
```
print("Input: \n", repr("".join(idx2char[x[0]])))
print()
print("Next Char Predictions: \n", repr("".join(idx2char[sampled_indices])))
```
As you can see, the text predicted by the untrained model is pretty nonsensical! How can we do better? We can train the network!
## 2.5 Training the model: loss and training operations
Now it's time to train the model!
At this point, we can think of our next character prediction problem as a standard classification problem. Given the previous state of the RNN, as well as the input at a given time step, we want to predict the class of the next character -- that is, to actually predict the next character.
To train our model on this classification task, we can use a form of the `crossentropy` loss (negative log likelihood loss). Specifically, we will use the [`sparse_categorical_crossentropy`](https://www.tensorflow.org/api_docs/python/tf/keras/losses/sparse_categorical_crossentropy) loss, as it utilizes integer targets for categorical classification tasks. We will want to compute the loss using the true targets -- the `labels` -- and the predicted targets -- the `logits`.
Let's first compute the loss using our example predictions from the untrained model:
```
### Defining the loss function ###
'''TODO: define the loss function to compute and return the loss between
the true labels and predictions (logits). Set the argument from_logits=True.'''
def compute_loss(labels, logits):
    loss = tf.keras.losses.sparse_categorical_crossentropy(labels, logits, from_logits=True)  # TODO
    return loss
'''TODO: compute the loss using the true next characters from the example batch
and the predictions from the untrained model several cells above'''
example_batch_loss = compute_loss(y, pred) # TODO
print("Prediction shape: ", pred.shape, " # (batch_size, sequence_length, vocab_size)")
print("scalar_loss: ", example_batch_loss.numpy().mean())
```
Let's start by defining some hyperparameters for training the model. To start, we have provided some reasonable values for some of the parameters. It is up to you to use what we've learned in class to help optimize the parameter selection here!
```
### Hyperparameter setting and optimization ###
# Optimization parameters:
num_training_iterations = 2000 # Increase this to train longer
batch_size = 4 # Experiment between 1 and 64
seq_length = 100 # Experiment between 50 and 500
learning_rate = 5e-3 # Experiment between 1e-5 and 1e-1
# Model parameters:
vocab_size = len(vocab)
embedding_dim = 256
rnn_units = 1024 # Experiment between 1 and 2048
# Checkpoint location:
checkpoint_dir = './training_checkpoints'
checkpoint_prefix = os.path.join(checkpoint_dir, "my_ckpt")
```
Now, we are ready to define our training operation -- the optimizer and duration of training -- and use this function to train the model. You will experiment with the choice of optimizer and the duration for which you train your models, and see how these changes affect the network's output. Some optimizers you may like to try are [`Adam`](https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/Adam?version=stable) and [`Adagrad`](https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/Adagrad?version=stable).
First, we will instantiate a new model and an optimizer. Then, we will use the [`tf.GradientTape`](https://www.tensorflow.org/api_docs/python/tf/GradientTape) method to perform the backpropagation operations.
We will also generate a print-out of the model's progress through training, which will help us easily visualize whether or not we are minimizing the loss.
```
### Define optimizer and training operation ###
'''TODO: instantiate a new model for training using the `build_model`
function and the hyperparameters created above.'''
model = build_model(vocab_size, embedding_dim, rnn_units, batch_size)
'''TODO: instantiate an optimizer with its learning rate.
Checkout the tensorflow website for a list of supported optimizers.
https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/
Try using the Adam optimizer to start.'''
optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate, name="Adam")
@tf.function
def train_step(x, y):
    # Use tf.GradientTape()
    with tf.GradientTape() as tape:
        '''TODO: feed the current input into the model and generate predictions'''
        y_hat = model(x)

        '''TODO: compute the loss!'''
        loss = compute_loss(y, y_hat)

    # Now, compute the gradients
    '''TODO: complete the function call for gradient computation.
    Remember that we want the gradient of the loss with respect to all
    of the model parameters.
    HINT: use `model.trainable_variables` to get a list of all model
    parameters.'''
    grads = tape.gradient(loss, model.trainable_variables)

    # Apply the gradients to the optimizer so it can update the model accordingly
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
##################
# Begin training!#
##################
history = []
plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Loss')
if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists
for iter in tqdm(range(num_training_iterations)):
    # Grab a batch and propagate it through the network
    x_batch, y_batch = get_batch(vectorized_songs, seq_length, batch_size)
    loss = train_step(x_batch, y_batch)

    # Update the progress bar
    history.append(loss.numpy().mean())
    plotter.plot(history)

    # Update the model with the changed weights!
    if iter % 100 == 0:
        model.save_weights(checkpoint_prefix)
# Save the trained model and the weights
model.save_weights(checkpoint_prefix)
```
## 2.6 Generate music using the RNN model
Now, we can use our trained RNN model to generate some music! When generating music, we'll have to feed the model some sort of seed to get it started (because it can't predict anything without something to start with!).
Once we have a generated seed, we can then iteratively predict each successive character (remember, we are using the ABC representation for our music) using our trained RNN. More specifically, recall that our RNN outputs a `softmax` over possible successive characters. For inference, we iteratively sample from these distributions, and then use our samples to encode a generated song in the ABC format.
Then, all we have to do is write it to a file and listen!
### Restore the latest checkpoint
To keep this inference step simple, we will use a batch size of 1. Because of how the RNN state is passed from timestep to timestep, the model will only be able to accept a fixed batch size once it is built.
To run the model with a different `batch_size`, we'll need to rebuild the model and restore the weights from the latest checkpoint, i.e., the weights after the last checkpoint during training:
```
'''TODO: Rebuild the model using a batch_size=1'''
model = build_model(vocab_size, embedding_dim, rnn_units, batch_size=1)
# Restore the model weights for the last checkpoint after training
model.load_weights(tf.train.latest_checkpoint(checkpoint_dir))
model.build(tf.TensorShape([1, None]))
model.summary()
```
Notice that we have fed in a fixed `batch_size` of 1 for inference.
### The prediction procedure
Now, we're ready to write the code to generate text in the ABC music format:
* Initialize a "seed" start string and the RNN state, and set the number of characters we want to generate.
* Use the start string and the RNN state to obtain the probability distribution over the next predicted character.
* Sample from a multinomial distribution to calculate the index of the predicted character. This predicted character is then used as the next input to the model.
* At each time step, the updated RNN state is fed back into the model, so that it now has more context in making the next prediction. After predicting the next character, the updated RNN states are again fed back into the model, which is how it learns sequence dependencies in the data, as it gets more information from the previous predictions.
Complete and experiment with this code block (as well as some of the aspects of network definition and training!), and see how the model performs. How do songs generated after training with a small number of epochs compare to those generated after a longer duration of training?
```
### Prediction of a generated song ###
def generate_text(model, start_string, generation_length=1000):
    # Evaluation step (generating ABC text using the learned RNN model)

    '''TODO: convert the start string to numbers (vectorize)'''
    input_eval = [char2idx[c] for c in start_string]
    input_eval = tf.expand_dims(input_eval, 0)

    # Empty string to store our results
    text_generated = []

    # Here batch size == 1
    model.reset_states()
    tqdm._instances.clear()

    for i in tqdm(range(generation_length)):
        '''TODO: evaluate the inputs and generate the next character predictions'''
        predictions = model(input_eval)

        # Remove the batch dimension
        predictions = tf.squeeze(predictions, 0)

        '''TODO: use a multinomial distribution to sample'''
        predicted_id = tf.random.categorical(logits=predictions, num_samples=1)[-1,0].numpy()

        # Pass the prediction along with the previous hidden state
        # as the next inputs to the model
        input_eval = tf.expand_dims([predicted_id], 0)

        '''TODO: add the predicted character to the generated text!'''
        # Hint: consider what format the prediction is in vs. the output
        text_generated.append(idx2char[predicted_id])

    return (start_string + ''.join(text_generated))
'''TODO: Use the model and the function defined above to generate ABC format text of length 1000!
As you may notice, ABC files start with "X" - this may be a good start string.'''
generated_text = generate_text(model, start_string="X", generation_length=10000) # TODO
# generated_text = generate_text('''TODO''', start_string="X", generation_length=1000)
```
### Play back the generated music!
We can now call a function to convert the ABC format text to an audio file, and then play that back to check out our generated music! Try training longer if the resulting song is not long enough, or re-generating the song!
```
### Play back generated songs ###
generated_songs = mdl.lab1.extract_song_snippet(generated_text)
for i, song in enumerate(generated_songs):
    # Synthesize the waveform from a song
    waveform = mdl.lab1.play_song(song)

    # If it's a valid song (correct syntax), let's play it!
    if waveform:
        print("Generated song", i)
        ipythondisplay.display(waveform)
```
## 2.7 Experiment and **get awarded for the best songs**!!
Congrats on making your first sequence model in TensorFlow! It's a pretty big accomplishment, and hopefully you have some sweet tunes to show for it.
If you want to go further, try to optimize your model and submit your best song! Tweet us at [@MITDeepLearning](https://twitter.com/MITDeepLearning) or [email us](mailto:introtodeeplearning-staff@mit.edu) a copy of the song (if you don't have Twitter), and we'll give out prizes to our favorites!
Consider how you may improve your model and what seems to be most important in terms of performance. Here are some ideas to get you started:
* How does the number of training epochs affect the performance?
* What if you alter or augment the dataset?
* Does the choice of start string significantly affect the result?
Have fun and happy listening!
```
# Example submission by a previous 6.S191 student (credit: Christian Adib)
%%html
<blockquote class="twitter-tweet"><a href="https://twitter.com/AdibChristian/status/1090030964770783238?ref_src=twsrc%5Etfw">January 28, 2019</a></blockquote>
<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
```
# EUC Calibration Experiment from David Halpern
For the vertical spacing, what are the ECCOv4r4 vertical layers from the surface to 400 m depth? At the equator (0°), the vertical profile of the zonal velocity component is: 0.1 m s⁻¹ towards the west from the sea surface at 0 m to 20 m depth; 0.5 m s⁻¹ towards the east at the 20–170 m depth interval; and 0.1 m s⁻¹ towards the west at depths greater than 170 m. What would be the “algorithm” or step-by-step computational method to compute the EUC transport per unit width?
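One way to sketch the step-by-step method with NumPy, applied to the idealized profile described above (uniform 1 m layers stand in for the real ECCOv4r4 layer thicknesses, which this sketch does not use):

```python
import numpy as np

# Idealized zonal velocity profile on uniform 1 m layers, a stand-in
# for the real ECCOv4r4 drF layer thicknesses.
depth = np.arange(0.5, 400, 1.0)            # layer-center depths, m
drF = np.ones_like(depth)                   # layer thickness, m
u = np.where(depth < 20, -0.1,
    np.where(depth < 170, 0.5, -0.1))       # zonal velocity, m/s

# Step 1: keep only eastward (EUC) flow.
# Step 2: multiply by layer thickness.
# Step 3: sum vertically to get transport per unit width.
trsp_per_width = np.sum(np.where(u > 0, u, 0.0) * drF)
print(trsp_per_width)  # 75.0
```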
```
import os
import numpy as np
import matplotlib.pyplot as plt
import xarray as xr
from xgcm import Grid
from pych.calc import haversine
fig_dir = 'euc_figs'
if not os.path.isdir(fig_dir):
    os.makedirs(fig_dir)
```
## Part 1: simplified transport
Let’s do a calibration calculation. Imagine the Equatorial Undercurrent at 140°W.
- At the equator (0°), the vertical profile of the zonal velocity component is:
  - 0.1 m s⁻¹ towards the west from the sea surface at 0 m to 20 m depth;
  - 0.5 m s⁻¹ towards the east at the 20–170 m depth interval; and
  - 0.1 m s⁻¹ towards the west at depths greater than 170 m.
- The EUC transport per unit width is (150 m) x (0.5 m s⁻¹) = 75 m² s⁻¹.
- Let’s assume that the identical velocity profile occurs at all latitudes from 1.5°S to 1.5°N.
- For now, I’ll assume that 1° latitude between 1.5°S and 1.5°N is equal to 110 km (which is a good approximation for this exercise but not for the final computer program).
The EUC volume transport = (3°) x (110 km) x (150 m) x (0.5 m s⁻¹) = 24.75 x 10⁶ m³ s⁻¹ = 24.75 Sv.
Let this EUC transport (24.75 Sv) be constant at all longitudes from 140°E to 80°W. Please make a plot of the longitudinal distribution of the EUC transport.
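The 24.75 Sv figure can be checked directly, using the 110 km-per-degree approximation stated above:

```python
# Volume transport check using the stated approximations:
# 3 degrees of latitude x 110 km per degree x 150 m x 0.5 m/s.
width_m = 3 * 110e3          # meridional extent, m (110 km per degree)
thickness_m = 150.0          # eastward-flowing layer thickness, m
u_ms = 0.5                   # eastward velocity, m/s

transport_sv = width_m * thickness_m * u_ms / 1e6  # 1 Sv = 1e6 m^3/s
print(transport_sv)  # 24.75
```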
```
lon_arr = np.concatenate((np.arange(140,180),np.arange(-180,-79)),axis=0)
lon = xr.DataArray(lon_arr,coords={'lon':lon_arr},dims=('lon',))
lat_arr = np.arange(-1,2)
lat = xr.DataArray(lat_arr,coords={'lat':lat_arr},dims=('lat',))
deptharr = np.arange(1,200)-.5
depth = xr.DataArray(deptharr,coords={'depth':deptharr},dims=('depth',))
ds = xr.Dataset({'lon':lon,'lat':lat,'depth':depth})
ds
ds['dyG'] = xr.DataArray(np.array([110000,110000,110000]),coords=ds.lat.coords,dims=('lat',))
ds['drF'] = xr.DataArray(np.array([1]*199),coords=ds.depth.coords,dims=('depth',))
ds = ds.set_coords(['dyG','drF'])
ds['uvel'] = xr.zeros_like(ds.depth*ds.lat*ds.lon)
```
### Create the velocity profile
- At the equator (0°), the vertical profile of the zonal velocity component is:
- 0.1 m s$^{-1}$ towards the west from the sea surface at 0 m to 20 m depth;
- 0.5 m s$^{-1}$ towards the east in the 20-170 m depth interval; and
- 0.1 m s$^{-1}$ towards the west at depths greater than 170 m.
- Let’s assume that the identical velocity profile occurs at all latitudes from 1.5°S to 1.5°N.
```
ds['uprof'] = xr.where(ds.depth<20,-0.1,0.) + \
xr.where((ds.depth>=20) & (ds.depth<170),0.5,0.) + \
xr.where(ds.depth>=170,-0.1,0.)
ds.uprof.attrs['units'] = 'm/s'
ds.uprof.plot(y='depth',yincrease=False)
plt.xlabel('U [m/s]')
plt.ylabel('Depth (m)')
plt.title('Zonal Velocity Profile')
plt.savefig(f'{fig_dir}/simple_zonal_velocity_profile.png',bbox_inches='tight')
```
### "Broadcast" this profile to latitudes and longitudes in the domain
Show plots at two longitudes (170°E and 90°W) as verification
```
ds['uvel'],_ = xr.broadcast(ds.uprof,ds.lat*ds.lon)
ds.uvel.attrs['units'] = 'm/s'
fig,axs = plt.subplots(1,2,figsize=(18,6),sharey=True)
ds.uvel.sel(lon=170).plot(ax=axs[0],yincrease=False)
ds.uvel.sel(lon=-90).plot(ax=axs[1],yincrease=False)
```
### The EUC transport per unit width is (150 m) x (0.5 m s$^{-1}$) = 75 m$^2$ s$^{-1}$.
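This figure can also be checked numerically, independent of xarray, by integrating the piecewise profile over 1 m bins (mirroring the `drF` spacing used above):

```python
import numpy as np

# 1 m bins with cell centres at 0.5, 1.5, ..., 198.5 m (same as deptharr above)
depth = np.arange(1, 200) - 0.5
u = np.where(depth < 20, -0.1, np.where(depth < 170, 0.5, -0.1))

# transport per unit width = integral of the eastward velocity over depth
trsp_per_width = (u * 1.0)[u > 0].sum()
print(trsp_per_width)  # 75.0
```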
Plot below verifies this...
```
ds['trsp_per_width'] = (ds['uvel']*ds['drF']).where(ds.uvel>0).sum('depth')
ds.trsp_per_width.attrs['units'] = 'm^2/s'
ds.trsp_per_width.sel(lon=140).plot()
ds['trsp'] = ds['uvel']*ds['drF']*ds['dyG']
euc = ds['trsp'].where(ds.uvel>0).sum(['lat','depth']) / 1e6
euc.attrs['units']='Sv'
def euc_plot(xda,xcoord='XG',ax=None,xskip=10):
if ax is None:
fig,ax = plt.subplots(1,1)
x=xda[xcoord]
xbds = [140,-80]
# Grab Pacific
xda = xda.where((x<=xbds[0])|(x>=xbds[1]),drop=True)
x_split=xda[xcoord]
xda[xcoord]=xr.where(xda[xcoord]<=0,360+xda[xcoord],xda[xcoord])
xda = xda.sortby(xcoord)
xda.plot(ax=ax)
xlbl = [f'{xx}' for xx in np.concatenate([np.arange(xbds[0],181),np.arange(-179,xbds[1])])]
x_slice = slice(None,None,xskip)
ax.xaxis.set_ticks(xda[xcoord].values[x_slice])
ax.xaxis.set_ticklabels(xlbl[x_slice])
ax.set_xlim([xbds[0],xbds[1]+360])
return ax
fig,ax = plt.subplots(1,1,figsize=(18,6))
euc_plot(euc,xcoord='lon',ax=ax)
plt.title(f'EUC: {euc[0].values} {euc.attrs["units"]}')
plt.savefig(f'{fig_dir}/simplified_euc.png',bbox_inches='tight')
```
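The longitude handling inside `euc_plot` (mapping western-hemisphere longitudes onto 0-360 so the Pacific plots contiguously) can be exercised on its own; a minimal NumPy version of the same transform:

```python
import numpy as np

def wrap_to_0_360(lon):
    """Map longitudes in [-180, 180] to a 0-360 convention, matching the
    xr.where(lon <= 0, 360 + lon, lon) step in euc_plot above."""
    lon = np.asarray(lon)
    return np.where(lon <= 0, 360 + lon, lon)

print(wrap_to_0_360([140, 179, -180, -90]))  # [140 179 180 270]
```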
## Part 2: The LLC90 grid with telescoping refinement near the equator
The next thought-experiment calculation will provide me with a greater appreciation of the ECCOv4r4 horizontal grid, which, I believe, has a nominal 1° x 1° spacing.
In the latitudinal direction, where are the grid points?
For example, are 0° and 1° at grid points or are 0.5° and 1.5° at grid points?
If 0° is a grid point, then is the ECCOv4r4 value of the zonal current at a specific depth, say 20 m, constant from 0.5°S to 0.5°N?
Cross-shelf transport (total) of CNTDIFF experiments
==
This notebook explores the similarities and differences between the two tracer transports for the CNTDIFF case, as well as between the canyon and no-canyon cases. It looks at the transport normal to a shelf-break wall<sup>1</sup>. Total Tracer Transport (TracTrans) is understood here as tracer transport (concentration * transport) per cell area; similarly, Total Transport (Trans) is transport per cell area, i.e. just the normal speed. This gives the following units:
$[TracTrans] = [C]\,m\,s^{-1}$
$[Trans] = [v] = m\,s^{-1}$
TracTrans = (AdvFlux + DiffFlux) / cell area
<sup>1</sup> Plane that goes from shelf-break depth to surface and all along the shelf break.
The base case used to compare the effect of isopycnal diffusivity is a run without GMREDI, with different values of $K_{iso}$ but constant vertical diffusivity (CNTDIFF). The vertical diffusivity for tracer 1 is $10^{-5}$ $m^2s^{-1}$ and $10^{-3}$ $m^2s^{-1}$ for tracer 2. An associated no-canyon case makes it possible to isolate the effect of the canyon (CNTDIFF run07).
CNTDIFF runs include the following cases:
| Run | $k_{iso}$ ($m^2s^{-1}$) | Bathymetry |
|:-----:|:------------------------------:|:-----------------------|
| 02 | $10^{1}$ | Barkley-like |
| 03 | $10^{0}$ | Barkley-like |
| 04 | $10^{-1}$ | Barkley-like |
| 07 | $10^{0}$ | No canyon |
Other runs explore the effect of bottom drag and stratification. $K_{iso} = 100$ $m^2s^{-1}$ produced NaNs from the first checkpoint on; the cause still has to be determined.
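The rotation applied in the code below projects (U, V) onto the direction normal to the smoothed shelf break via `V*cos(theta) + U*sin(-theta)`. That projection can be sketched and sanity-checked in isolation:

```python
import numpy as np

def cross_shelf_component(u, v, theta):
    """Velocity component normal to an isobath with local angle theta,
    matching the V*cos(theta) + U*sin(-theta) projection used below."""
    return v * np.cos(theta) + u * np.sin(-theta)

# flow parallel to a 45-degree isobath has no cross-shelf component
theta = np.arctan(1.0)
print(cross_shelf_component(1.0, 1.0, theta))  # ~0
```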
```
#KRM
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors as mcolors
from math import *
import scipy.io
import scipy as spy
%matplotlib inline
from netCDF4 import Dataset
import pylab as pl
import os
import sys
import seaborn as sns
lib_path = os.path.abspath('/ocean/kramosmu/Building_canyon/BuildCanyon/PythonModulesMITgcm') # Add absolute path to my python scripts
sys.path.append(lib_path)
import ReadOutTools_MITgcm as rout
import ShelfBreakTools_MITgcm as sb
import savitzky_golay as sg
#Base case, iso =1 , No 3d diff.
CanyonGrid='/ocean/kramosmu/MITgcm/TracerExperiments/NOGMREDI/run02/gridGlob.nc'
CanyonGridOut = Dataset(CanyonGrid)
#for dimobj in CanyonGridOut.variables.values():
# print dimobj
CanyonState='/ocean/kramosmu/MITgcm/TracerExperiments/NOGMREDI/run02/stateGlob.nc'
CanyonStateOut = Dataset(CanyonState)
FluxTR01 = '/ocean/kramosmu/MITgcm/TracerExperiments/NOGMREDI/run02/FluxTR01Glob.nc'
FluxOut1 = Dataset(FluxTR01)
FluxTR01NoCNoR = '/ocean/kramosmu/MITgcm/TracerExperiments/NOGMREDI/run04/FluxTR01Glob.nc'
FluxOut1NoCNoR = Dataset(FluxTR01NoCNoR)
CanyonGridNoC='/ocean/kramosmu/MITgcm/TracerExperiments/CNTDIFF/run07/gridGlob.nc'
CanyonGridOutNoC = Dataset(CanyonGridNoC)
CanyonStateNoC='/ocean/kramosmu/MITgcm/TracerExperiments/CNTDIFF/run07/stateGlob.nc'
FluxTR01NoC = '/ocean/kramosmu/MITgcm/TracerExperiments/CNTDIFF/run07/FluxTR01Glob.nc'
FluxTR03NoC = '/ocean/kramosmu/MITgcm/TracerExperiments/CNTDIFF/run07/FluxTR03Glob.nc'
# General input
nx = 360
ny = 360
nz = 90
nt = 19 # t dimension size
z = CanyonStateOut.variables['Z']
#print(z[10])
Time = CanyonStateOut.variables['T']
#print(Time[:])
xc = rout.getField(CanyonGrid, 'XC') # x coords tracer cells
yc = rout.getField(CanyonGrid, 'YC') # y coords tracer cells
bathy = rout.getField(CanyonGrid, 'Depth')
hFacC = rout.getField(CanyonGrid, 'HFacC')
MaskC = rout.getMask(CanyonGrid, 'HFacC')
hFacCNoC = rout.getField(CanyonGridNoC, 'HFacC')
MaskCNoC = rout.getMask(CanyonGridNoC, 'HFacC')
dxF = rout.getField(CanyonGrid, 'dxF')
drF = CanyonGridOut.variables['drF']
sns.set()
sns.set_style('white')
sns.set_context('talk')
colors=['midnightblue','dodgerblue','deepskyblue','lightskyblue',
'darkmagenta','orchid']
VTRAC = rout.getField(FluxTR01,'VTRAC01') #
UTRAC = rout.getField(FluxTR01,'UTRAC01') #
VTRACNoCNoR = rout.getField(FluxTR01NoCNoR,'VTRAC01') #
UTRACNoCNoR = rout.getField(FluxTR01NoCNoR,'UTRAC01') #
VTRACNoC = rout.getField(FluxTR01NoC,'VTRAC01') #
UTRACNoC = rout.getField(FluxTR01NoC,'UTRAC01') #
zlev = 29
SBx, SBy = sb.findShelfBreak(zlev,hFacC)
SBxx = SBx[:-1]
SByy = SBy[:-1]
slope, theta = sb.findSlope(xc,yc,SBxx,SByy)
slopeFilt = sg.savitzky_golay(slope, 11, 3) # window size 11, polynomial order 3
thetaFilt = np.arctan(slopeFilt)
zlev = 29
SBxNoC, SByNoC = sb.findShelfBreak(zlev,hFacCNoC)
SBxxNoC = SBxNoC[:-1]
SByyNoC = SByNoC[:-1]
slopeNoC, thetaNoC = sb.findSlope(xc,yc,SBxxNoC,SByyNoC)
slopeFiltNoC = sg.savitzky_golay(slopeNoC, 11, 3) # window size 11, polynomial order 3
thetaFiltNoC = np.arctan(slopeFiltNoC)
# TRACER 1
FluxTR01run02 = '/ocean/kramosmu/MITgcm/TracerExperiments/CNTDIFF/run02/FluxTR01Glob.nc'
FluxOut1run02 = Dataset(FluxTR01run02)
FluxTR01run03 = '/ocean/kramosmu/MITgcm/TracerExperiments/CNTDIFF/run03/FluxTR01Glob.nc'
FluxOut1run03 = Dataset(FluxTR01run03)
FluxTR01run04= '/ocean/kramosmu/MITgcm/TracerExperiments/CNTDIFF/run04/FluxTR01Glob.nc'
FluxOut1run04 = Dataset(FluxTR01run04)
VTRACrun02 = rout.getField(FluxTR01run02,'VTRAC01') #
UTRACrun02 = rout.getField(FluxTR01run02,'UTRAC01') #
VTRACrun3 = rout.getField(FluxTR01run03,'VTRAC01') #
UTRACrun3 = rout.getField(FluxTR01run03,'UTRAC01') #
VTRACrun04 = rout.getField(FluxTR01run04,'VTRAC01') #
UTRACrun04 = rout.getField(FluxTR01run04,'UTRAC01') #
```
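The slope smoothing above relies on a local `savitzky_golay` module. An equivalent with SciPy's built-in filter (window 11, polynomial order 3, matching the calls above) is sketched below; a useful property for checking it is that an order-3 Savitzky-Golay filter reproduces any cubic signal exactly:

```python
import numpy as np
from scipy.signal import savgol_filter

x = np.linspace(0, 1, 50)
slope = 0.2 * x**3 - 0.5 * x + 1.0   # a smooth "slope" signal (a cubic)

# same window size and polynomial order as the sg.savitzky_golay calls above
slope_filt = savgol_filter(slope, window_length=11, polyorder=3)
theta_filt = np.arctan(slope_filt)
print(np.max(np.abs(slope_filt - slope)))  # ~0: a cubic passes through unchanged
```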
```
times = range(18)
ToTalTracTransRun03=np.empty(18)
ToTalTracTransBaseNoC=np.empty(18)
ToTalTracTransBaseNoCNoR=np.empty(18)
ToTalTracTransRun02=np.empty(18)
ToTalTracTransRun04=np.empty(18)
ToTalTracTransBase=np.empty(18)
for tt in times:
VTRACPlotBase = sb.MerFluxSB(SBxx,SByy,tt,VTRAC,z,xc,zlev,hFacC,MaskC)
UTRACPlotBase = sb.ZonFluxSB(SBxx,SByy,tt,UTRAC,z,xc,zlev,hFacC,MaskC)
VTRACPlotNoCNoR = sb.MerFluxSB(SBxxNoC,SByyNoC,tt,VTRACNoCNoR,z,xc,zlev,hFacCNoC,MaskCNoC)
UTRACPlotNoCNoR = sb.ZonFluxSB(SBxxNoC,SByyNoC,tt,UTRACNoCNoR,z,xc,zlev,hFacCNoC,MaskCNoC)
VTRACPlot2 = sb.MerFluxSB(SBxx,SByy,tt,VTRACrun02,z,xc,zlev,hFacC,MaskC)
UTRACPlot2 = sb.ZonFluxSB(SBxx,SByy,tt,UTRACrun02,z,xc,zlev,hFacC,MaskC)
VTRACPlot3 = sb.MerFluxSB(SBxx,SByy,tt,VTRACrun3,z,xc,zlev,hFacC,MaskC)
UTRACPlot3 = sb.ZonFluxSB(SBxx,SByy,tt,UTRACrun3,z,xc,zlev,hFacC,MaskC)
VTRACPlot4 = sb.MerFluxSB(SBxx,SByy,tt,VTRACrun04,z,xc,zlev,hFacC,MaskC)
UTRACPlot4 = sb.ZonFluxSB(SBxx,SByy,tt,UTRACrun04,z,xc,zlev,hFacC,MaskC)
VTRACPlotNoC = sb.MerFluxSB(SBxxNoC,SByyNoC,tt,VTRACNoC,z,xc,zlev,hFacCNoC,MaskCNoC)
UTRACPlotNoC = sb.ZonFluxSB(SBxxNoC,SByyNoC,tt,UTRACNoC,z,xc,zlev,hFacCNoC,MaskCNoC)
TracTrans2 = VTRACPlot2[:,4:-5]*np.cos(thetaFilt) + UTRACPlot2[:,4:-4]*np.sin(-thetaFilt)
TracTrans3 = VTRACPlot3[:,4:-5]*np.cos(thetaFilt) + UTRACPlot3[:,4:-4]*np.sin(-thetaFilt)
TracTrans4 = VTRACPlot4[:,4:-5]*np.cos(thetaFilt) + UTRACPlot4[:,4:-4]*np.sin(-thetaFilt)
TracTransNoC = VTRACPlotNoC[:,4:-5]*np.cos(thetaFiltNoC) + UTRACPlotNoC[:,4:-4]*np.sin(-thetaFiltNoC)
TracTransBase = VTRACPlotBase[:,4:-5]*np.cos(thetaFilt) + UTRACPlotBase[:,4:-4]*np.sin(-thetaFilt)
TracTransNoCNoR = VTRACPlotNoCNoR[:,4:-5]*np.cos(thetaFiltNoC) + UTRACPlotNoCNoR[:,4:-4]*np.sin(-thetaFiltNoC)
ToTalTracTransRun02[tt]=np.sum(TracTrans2)
ToTalTracTransRun03[tt]=np.sum(TracTrans3)
ToTalTracTransRun04[tt]=np.sum(TracTrans4)
ToTalTracTransBase[tt]=np.sum(TracTransBase)
ToTalTracTransBaseNoC[tt]=np.sum(TracTransNoC)
ToTalTracTransBaseNoCNoR[tt]=np.sum(TracTransNoCNoR)
sns.set(context='talk', style='whitegrid', font='sans-serif', font_scale=1)
times = range(18)  # first flux time element is at 43200 s, the last at 8 days
times = [time/2.0+0.5 for time in times]
figSize=(10,8)
numCols = 1
numRows = 1
unitsTr = '$mol \cdot l^{-1}\cdot ms^{-1}$'
fig44 = plt.figure(figsize=figSize)
plt.subplot(numRows,numCols,1)
ax = plt.gca()
ax.plot(times,ToTalTracTransRun02[:],'o-',color=colors[0],label = '$k_{iso}$ = 10 $m^2/s$')
ax.plot(times,ToTalTracTransRun03[:],'o-',color=colors[1],label = '$k_{iso}$ = 1 $m^2/s$')
ax.plot(times,ToTalTracTransRun04[:],'o-',color=colors[2],label = '$k_{iso}$ = 0.1 $m^2/s$')
ax.plot(times,ToTalTracTransBaseNoC[:],'o-',color=colors[3],label = ' NoC Run, $k_{iso}$ = 1E0 $m^2/s$ ')
ax.plot(times,ToTalTracTransBase[:],'o-',color=colors[4],label = 'Base Run, NOREDI 1E-5 $m^2/s$ ')
handles, labels = ax.get_legend_handles_labels()
display = (0,1,2,3,4)
ax.legend([handle for i,handle in enumerate(handles) if i in display],
[label for i,label in enumerate(labels) if i in display],loc=0)
plt.xlabel('Days')
plt.ylabel(unitsTr)
plt.title('Total tracer transport across shelf break - CNTDIFF runs')
sns.set(context='talk', style='whitegrid', font='sans-serif', font_scale=1)
times = range(18)  # first flux time element is at 43200 s, the last at 8 days
times = [time/2.0+0.5 for time in times]
figSize=(10,8)
numCols = 1
numRows = 1
unitsTr = '$mol \cdot l^{-1}\cdot ms^{-1}$'
fig44 = plt.figure(figsize=figSize)
plt.subplot(numRows,numCols,1)
ax = plt.gca()
ax.plot(times,ToTalTracTransRun02[:]-ToTalTracTransBaseNoC[:],'o-',color=colors[0],label = '10 $m^2/s$ - NoC')
ax.plot(times,ToTalTracTransRun03[:]-ToTalTracTransBaseNoC[:],'o-',color=colors[1],label = '1 $m^2/s$- NoC')
ax.plot(times,ToTalTracTransRun04[:]-ToTalTracTransBaseNoC[:],'o-',color=colors[2],label = '0.1 $m^2/s$- NoC')
ax.plot(times,ToTalTracTransBase[:]-ToTalTracTransBaseNoCNoR[:],'o-',color=colors[5],label = 'Base Run-NoC, NOREDI 1E-5 $m^2/s$ ')
handles, labels = ax.get_legend_handles_labels()
display = (0,1,2,3,4)
ax.legend([handle for i,handle in enumerate(handles) if i in display],
[label for i,label in enumerate(labels) if i in display],loc=0)
plt.xlabel('Days')
plt.ylabel(unitsTr)
plt.title('Total tracer transport across shelf break - Canyon Effect CNTDIFF')
sns.set(context='talk', style='whitegrid', font='sans-serif', font_scale=1)
times = range(18)  # first flux time element is at 43200 s, the last at 8 days
times = [time/2.0+0.5 for time in times]
figSize=(10,8)
numCols = 1
numRows = 1
unitsTr = '$mol \cdot l^{-1}\cdot ms^{-1}$'
fig44 = plt.figure(figsize=figSize)
plt.subplot(numRows,numCols,1)
ax = plt.gca()
ax.plot(times,ToTalTracTransRun02[:]-ToTalTracTransBase[:],'o-',color=colors[0],label = 'Minus Base case $k_{iso}$ = 10 $m^2/s$')
ax.plot(times,ToTalTracTransRun03[:]-ToTalTracTransBase[:],'o-',color=colors[1],label = 'Minus Base case $k_{iso}$ = 1 $m^2/s$')
ax.plot(times,ToTalTracTransRun04[:]-ToTalTracTransBase[:],'o-',color=colors[2],label = 'Minus Base case $k_{iso}$ = 0.1 $m^2/s$')
handles, labels = ax.get_legend_handles_labels()
display = (0,1,2,3,4)
ax.legend([handle for i,handle in enumerate(handles) if i in display],
[label for i,label in enumerate(labels) if i in display],loc=0)
plt.xlabel('Days')
plt.ylabel(unitsTr)
plt.title('Total tracer transport across shelf break - REDI effect')
```
```
import pandas as pd
import numpy as np
from datetime import datetime
from sqlalchemy import create_engine
import requests
from time import sleep
import warnings
warnings.filterwarnings('ignore')
df = pd.read_excel("HistoricoCobranca.xlsx")
df["doc"] = df.apply(lambda x : x["CNPJ"].replace(".", "").replace("-", "").replace("/", ""), axis=1)
df.head()
df["MOTIVO DO CONTATO"].unique().tolist()
df["JUSTIFICATIVA DO ALERTA"].unique().tolist()
df[df['JUSTIFICATIVA DO ALERTA'].isin(["Fechou a Loja", "Fechou a Empresa"])]
engine = create_engine("mysql+pymysql://capMaster:#jackpot123#@captalys.cmrbivuuu7sv.sa-east-1.rds.amazonaws.com:23306/creditoDigital")
con = engine.connect()
dfop = pd.read_sql("select * from desembolso", con)
con.close()
df_data = dfop[["cnpj", "dataDesembolso"]]
df_data["dataDesembolso"] = df_data.apply(lambda x : x["dataDesembolso"].date(), axis=1)
df.shape
res = df.merge(df_data, left_on='doc', right_on='cnpj', how='left')
res[res["doc"]=='11117460000110']
res.drop(columns=["cnpj"], axis=1, inplace=True)
res["dataDesembolso"].iloc[0]
res.sort_values("dataDesembolso")
res[res['dataDesembolso']<datetime(2019, 1,1).date()].shape
res.shape[0] - 13
res.head()
def get_numero_consulta(cnpj):
engine = create_engine("mysql+pymysql://capMaster:#jackpot123#@captalys.cmrbivuuu7sv.sa-east-1.rds.amazonaws.com:23306/varejo")
con = engine.connect()
query = "select data_ref, numero_consulta from consultas_idwall_operacoes where cnpj_cpf='{}'".format(cnpj)
df = pd.read_sql(query, con)
numero = df[df['data_ref']==df['data_ref'].max()]["numero_consulta"].iloc[0]
con.close()
return numero
def get_details(numero):
URL = "https://api-v2.idwall.co/relatorios"
authorization = "b3818f92-5807-4acf-ade8-78a1f6d7996b"
url_details = URL + "/{}".format(numero) + "/dados"
while True:
dets = requests.get(url_details, headers={"authorization": authorization})
djson = dets.json()
sleep(1)
if djson['result']['status'] == "CONCLUIDO":
break
return dets.json()
def get_idade(cnpj):
numero = get_numero_consulta(cnpj)
print(numero)
js = get_details(numero)
data_abertura = js.get("result").get("cnpj").get("data_abertura")
data_abertura = data_abertura.replace("/", "-")
data = datetime.strptime(data_abertura, "%d-%m-%Y").date()
idade = ((datetime.now().date() - data).days/366)
idade_empresa = np.around(idade, 2)
return idade_empresa
get_idade("12549813000114")
res
```
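The age computation inside `get_idade` can be factored out and tested without hitting the idwall API. The sketch below mirrors the code above, including its 366-day divisor (365.25 would be the more usual choice); the `hoje` parameter is an addition for testability:

```python
import numpy as np
from datetime import datetime, date

def idade_from_abertura(data_abertura, hoje=None):
    """Company age in years from a 'dd/mm/YYYY' opening date, mirroring get_idade."""
    if hoje is None:
        hoje = datetime.now().date()
    data = datetime.strptime(data_abertura.replace("/", "-"), "%d-%m-%Y").date()
    return np.around((hoje - data).days / 366, 2)

print(idade_from_abertura("01/01/2019", hoje=date(2020, 1, 1)))  # 1.0
```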
```
%load_ext autoreload
%autoreload 2
import os
import sys
from pathlib import Path
ROOT_DIR = os.path.abspath(os.path.join(Path().absolute(), os.pardir))
sys.path.insert(1, ROOT_DIR)
import numpy as np
import scipy
import matplotlib.pyplot as plt
from frequency_response import FrequencyResponse
from biquad import peaking, low_shelf, high_shelf, digital_coeffs
harman_overear = FrequencyResponse.read_from_csv(os.path.join(ROOT_DIR, 'compensation', 'harman_over-ear_2018.csv'))
fig, ax = harman_overear.plot_graph(show=False, color='C0')
fs = 48000
a0, a1, a2, b0, b1, b2 = low_shelf(105.0, 0.71, 6, fs=fs)
shelf = digital_coeffs(harman_overear.frequency, fs, a0, a1, a2, b0, b1, b2)
shelf = FrequencyResponse(name='Shelf', frequency=harman_overear.frequency.copy(), raw=shelf)
shelf.plot_graph(fig=fig, ax=ax, show=False, color='C1')
harman_overear_wo_bass = FrequencyResponse(
name='Harman over-ear target 2018 without bass',
frequency=harman_overear.frequency.copy(),
raw=harman_overear.raw - shelf.raw
)
harman_overear_wo_bass.plot_graph(fig=fig, ax=ax, color='C2', show=False)
ax.legend(['Harman over-ear 2018', 'Low shelf', 'Harman over-ear 2018 without bass shelf'])
ax.set_ylim([-4, 10])
plt.show()
harman_inear = FrequencyResponse.read_from_csv(os.path.join(ROOT_DIR, 'compensation', 'harman_in-ear_2019v2.csv'))
fig, ax = harman_inear.plot_graph(show=False, color='C0')
fs = 48000
a0, a1, a2, b0, b1, b2 = low_shelf(105.0, 0.71, 9, fs=fs)
shelf = digital_coeffs(harman_inear.frequency, fs, a0, a1, a2, b0, b1, b2)
shelf = FrequencyResponse(name='Shelf', frequency=harman_inear.frequency.copy(), raw=shelf)
shelf.plot_graph(fig=fig, ax=ax, show=False, color='C1')
harman_inear_wo_bass = FrequencyResponse(
name='Harman in-ear target 2019 without bass',
frequency=harman_inear.frequency.copy(),
raw=harman_inear.raw - shelf.raw
)
harman_inear_wo_bass.plot_graph(fig=fig, ax=ax, color='C2', show=False)
ax.legend(['Harman in-ear 2019', 'Low shelf', 'Harman in-ear target 2019 without bass'])
ax.set_ylim([-4, 10])
plt.show()
fig, ax = harman_overear.plot_graph(show=False, color='C0')
harman_overear_wo_bass.plot_graph(fig=fig, ax=ax, show=False, color='C1')
harman_overear_4_bass = harman_overear_wo_bass.copy()
harman_overear_4_bass.raw += digital_coeffs(harman_overear_4_bass.frequency, fs, *low_shelf(105, 0.71, 4, fs=fs))
harman_overear_4_bass.plot_graph(fig=fig, ax=ax, show=False, color='C2')
ax.legend(['Harman over-ear 2018', 'Harman over-ear 2018 without bass', 'Harman over-ear 2018 with 4 dB bass'])
ax.set_ylim([-4, 10])
ax.set_title('Harman over-ear')
plt.show()
fig, ax = harman_inear.plot_graph(show=False, color='C0')
harman_inear_wo_bass.plot_graph(fig=fig, ax=ax, show=False, color='C1')
harman_inear_6_bass = harman_inear_wo_bass.copy()
harman_inear_6_bass.raw += digital_coeffs(harman_inear_6_bass.frequency, fs, *low_shelf(105, 0.71, 6, fs=fs))
harman_inear_6_bass.plot_graph(fig=fig, ax=ax, show=False, color='C2')
ax.legend(['Harman in-ear 2019', 'Harman in-ear 2019 without bass', 'Harman in-ear 2019 with 6 dB bass'])
ax.set_ylim([-4, 10])
ax.set_title('Harman in-ear')
plt.show()
# WARNING: These will overwrite the files
harman_overear_wo_bass.write_to_csv(os.path.join(ROOT_DIR, 'compensation', 'harman_over-ear_2018_wo_bass.csv'))
harman_overear_wo_bass.plot_graph(file_path=os.path.join(ROOT_DIR, 'compensation', 'harman_over-ear_2018_wo_bass.png'), color='C0')
harman_inear_wo_bass.write_to_csv(os.path.join(ROOT_DIR, 'compensation', 'harman_in-ear_2019v2_wo_bass.csv'))
harman_inear_wo_bass.plot_graph(file_path=os.path.join(ROOT_DIR, 'compensation', 'harman_in-ear_2019v2_wo_bass.png'), color='C0')
```
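For reference, if the repo's `low_shelf` follows the standard Audio EQ Cookbook formulas (an assumption worth checking against `biquad.py`), the coefficients and the resulting shelf gain can be sketched as follows; the DC gain should equal the requested shelf gain and the Nyquist gain should be 0 dB:

```python
import numpy as np

def rbj_low_shelf(fc, q, gain_db, fs=48000):
    """Audio EQ Cookbook (RBJ) low shelf; returns (a0, a1, a2, b0, b1, b2)."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * fc / fs
    alpha = np.sin(w0) / (2 * q)
    cosw0, sqA = np.cos(w0), np.sqrt(A)
    b0 = A * ((A + 1) - (A - 1) * cosw0 + 2 * sqA * alpha)
    b1 = 2 * A * ((A - 1) - (A + 1) * cosw0)
    b2 = A * ((A + 1) - (A - 1) * cosw0 - 2 * sqA * alpha)
    a0 = (A + 1) + (A - 1) * cosw0 + 2 * sqA * alpha
    a1 = -2 * ((A - 1) + (A + 1) * cosw0)
    a2 = (A + 1) + (A - 1) * cosw0 - 2 * sqA * alpha
    return a0, a1, a2, b0, b1, b2

a0, a1, a2, b0, b1, b2 = rbj_low_shelf(105.0, 0.71, 6.0)
dc_gain_db = 20 * np.log10((b0 + b1 + b2) / (a0 + a1 + a2))    # H(z) at z=1
nyq_gain_db = 20 * np.log10(abs(b0 - b1 + b2) / (a0 - a1 + a2))  # H(z) at z=-1
print(dc_gain_db, nyq_gain_db)  # ~6 dB at DC, ~0 dB at Nyquist
```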
```
!pip install -q blackjax
!pip install -q distrax
import jax
import jax.numpy as jnp
import jax.scipy.stats as stats
from jax.random import PRNGKey, split
try:
import distrax
except ModuleNotFoundError:
%pip install -qq distrax
import distrax
try:
from tensorflow_probability.substrates.jax.distributions import HalfCauchy
except ModuleNotFoundError:
%pip install -qq tensorflow-probability
from tensorflow_probability.substrates.jax.distributions import HalfCauchy
try:
import blackjax.hmc as hmc
except ModuleNotFoundError:
%pip install -qq blackjax
import blackjax.hmc as hmc
import blackjax.nuts as nuts
import blackjax.stan_warmup as stan_warmup
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
try:
import arviz as az
except ModuleNotFoundError:
%pip install -qq arviz
import arviz as az
from functools import partial
sns.set_style("whitegrid")
np.random.seed(123)
url = "https://github.com/twiecki/WhileMyMCMCGentlySamples/blob/master/content/downloads/notebooks/radon.csv?raw=true"
data = pd.read_csv(url)
county_names = data.county.unique()
county_idx = jnp.array(data.county_code.values)
n_counties = len(county_names)
X = data.floor.values
Y = data.log_radon.values
def init_non_centered_params(n_counties, rng_key=None):
params = {}
if rng_key is None:
rng_key = PRNGKey(0)
mu_a_key, mu_b_key, sigma_a_key, sigma_b_key, a_key, b_key, eps_key = split(rng_key, 7)
half_cauchy = distrax.as_distribution(HalfCauchy(loc=0.0, scale=5.0))
params["mu_a"] = distrax.Normal(0.0, 1.0).sample(seed=mu_a_key)
params["mu_b"] = distrax.Normal(0.0, 1.0).sample(seed=mu_b_key)
params["sigma_a"] = half_cauchy.sample(seed=sigma_a_key)
params["sigma_b"] = half_cauchy.sample(seed=sigma_b_key)
params["a_offsets"] = distrax.Normal(0.0, 1.0).sample(seed=a_key, sample_shape=(n_counties,))
params["b_offsets"] = distrax.Normal(0.0, 1.0).sample(seed=b_key, sample_shape=(n_counties,))
params["eps"] = half_cauchy.sample(seed=eps_key)
return params
def init_centered_params(n_counties, rng_key=None):
params = {}
if rng_key is None:
rng_key = PRNGKey(0)
mu_a_key, mu_b_key, sigma_a_key, sigma_b_key, a_key, b_key, eps_key = split(rng_key, 7)
half_cauchy = distrax.as_distribution(HalfCauchy(loc=0.0, scale=5.0))
params["mu_a"] = distrax.Normal(0.0, 1.0).sample(seed=mu_a_key)
params["mu_b"] = distrax.Normal(0.0, 1.0).sample(seed=mu_b_key)
params["sigma_a"] = half_cauchy.sample(seed=sigma_a_key)
params["sigma_b"] = half_cauchy.sample(seed=sigma_b_key)
params["b"] = distrax.Normal(params["mu_b"], params["sigma_b"]).sample(seed=b_key, sample_shape=(n_counties,))
params["a"] = distrax.Normal(params["mu_a"], params["sigma_a"]).sample(seed=a_key, sample_shape=(n_counties,))
params["eps"] = half_cauchy.sample(seed=eps_key)
return params
def log_joint_non_centered(params, X, Y, county_idx, n_counties):
log_theta = 0
log_theta += distrax.Normal(0.0, 100**2).log_prob(params["mu_a"]) * n_counties
log_theta += distrax.Normal(0.0, 100**2).log_prob(params["mu_b"]) * n_counties
log_theta += distrax.as_distribution(HalfCauchy(0.0, 5.0)).log_prob(params["sigma_a"]) * n_counties
log_theta += distrax.as_distribution(HalfCauchy(0.0, 5.0)).log_prob(params["sigma_b"]) * n_counties
log_theta += distrax.Normal(0.0, 1.0).log_prob(params["a_offsets"]).sum()
log_theta += distrax.Normal(0.0, 1.0).log_prob(params["b_offsets"]).sum()
log_theta += jnp.sum(distrax.as_distribution(HalfCauchy(0.0, 5.0)).log_prob(params["eps"]))
# Linear regression
a = params["mu_a"] + params["a_offsets"] * params["sigma_a"]
b = params["mu_b"] + params["b_offsets"] * params["sigma_b"]
radon_est = a[county_idx] + b[county_idx] * X
log_theta += jnp.sum(distrax.Normal(radon_est, params["eps"]).log_prob(Y))
return -log_theta
def log_joint_centered(params, X, Y, county_idx):
log_theta = 0
log_theta += distrax.Normal(0.0, 100**2).log_prob(params["mu_a"]).sum()
log_theta += distrax.Normal(0.0, 100**2).log_prob(params["mu_b"]).sum()
log_theta += distrax.as_distribution(HalfCauchy(0.0, 5.0)).log_prob(params["sigma_a"]).sum()
log_theta += distrax.as_distribution(HalfCauchy(0.0, 5.0)).log_prob(params["sigma_b"]).sum()
log_theta += distrax.Normal(params["mu_a"], params["sigma_a"]).log_prob(params["a"]).sum()
log_theta += distrax.Normal(params["mu_b"], params["sigma_b"]).log_prob(params["b"]).sum()
log_theta += distrax.as_distribution(HalfCauchy(0.0, 5.0)).log_prob(params["eps"]).sum()
# Linear regression
radon_est = params["a"][county_idx] + params["b"][county_idx] * X
log_theta += distrax.Normal(radon_est, params["eps"]).log_prob(Y).sum()
return -log_theta
def inference_loop(rng_key, kernel, initial_state, num_samples):
def one_step(state, rng_key):
state, _ = kernel(rng_key, state)
return state, state
keys = jax.random.split(rng_key, num_samples)
_, states = jax.lax.scan(one_step, initial_state, keys)
return states
def fit_hierarchical_model(
X, Y, county_idx, n_counties, is_centered=True, num_warmup=1000, num_samples=5000, rng_key=None
):
if rng_key is None:
rng_key = PRNGKey(0)
init_key, warmup_key, sample_key = split(rng_key, 3)
if is_centered:
potential = partial(log_joint_centered, X=X, Y=Y, county_idx=county_idx)
params = init_centered_params(n_counties, rng_key=init_key)
else:
potential = partial(log_joint_non_centered, X=X, Y=Y, county_idx=county_idx, n_counties=n_counties)
params = init_non_centered_params(n_counties, rng_key=init_key)
initial_state = nuts.new_state(params, potential)
kernel_factory = lambda step_size, inverse_mass_matrix: nuts.kernel(potential, step_size, inverse_mass_matrix)
last_state, (step_size, inverse_mass_matrix), _ = stan_warmup.run(
warmup_key, kernel_factory, initial_state, num_warmup
)
kernel = kernel_factory(step_size, inverse_mass_matrix)
states = inference_loop(sample_key, kernel, initial_state, num_samples)
return states
states_centered = fit_hierarchical_model(X, Y, county_idx, n_counties, is_centered=True)
states_non_centered = fit_hierarchical_model(X, Y, county_idx, n_counties, is_centered=False)
```
## Centered Hierarchical Model
```
def plot_funnel_of_hell(x, sigma_x, k=75):
x = pd.Series(x[:, k].flatten(), name=f"slope b_{k}")
y = pd.Series(sigma_x.flatten(), name="slope group variance sigma_b")
sns.jointplot(x=x, y=y, ylim=(0.0, 0.7), xlim=(-2.5, 1.0));
samples_centered = states_centered.position
b_centered = samples_centered["b"]
sigma_b_centered = samples_centered["sigma_b"]
plot_funnel_of_hell(b_centered, sigma_b_centered)
def plot_single_chain(x, sigma_x, name):
fig, axs = plt.subplots(nrows=2, figsize=(16, 6))
axs[0].plot(sigma_x, alpha=0.5)
axs[0].set(ylabel=f"sigma_{name}")
axs[1].plot(x, alpha=0.5)
axs[1].set(ylabel=name);
plot_single_chain(b_centered[1000:], sigma_b_centered[1000:], "b")
```
## Non-Centered Hierarchical Model
```
samples_non_centered = states_non_centered.position
b_non_centered = (
samples_non_centered["mu_b"][..., None]
+ samples_non_centered["b_offsets"] * samples_non_centered["sigma_b"][..., None]
)
sigma_b_non_centered = samples_non_centered["sigma_b"]
plot_funnel_of_hell(b_non_centered, sigma_b_non_centered)
plot_single_chain(b_non_centered[1000:], sigma_b_non_centered[1000:], "b")
```
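The key line above, `b = mu_b + b_offsets * sigma_b`, is the non-centered reparameterization: sampling standard-normal offsets and rescaling them deterministically yields the same distribution as sampling `b ~ N(mu_b, sigma_b)` directly, while decoupling the sampler's geometry from `sigma_b`. A quick check in plain NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
mu_b, sigma_b = 2.0, 0.5

# non-centered: standard-normal offsets, shifted and scaled deterministically
b_offsets = rng.standard_normal(200_000)
b = mu_b + b_offsets * sigma_b

print(b.mean(), b.std())  # ~2.0, ~0.5
```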
## Comparison
```
k = 75
x_lim, y_lim = [-2.5, 1], [0, 0.7]
bs = [(b_centered, sigma_b_centered, "Centered"), (b_non_centered, sigma_b_non_centered, "Non-centered")]
ncols = len(bs)
fig, axs = plt.subplots(ncols=ncols, sharex=True, sharey=True, figsize=(8, 6))
for i, (b, sigma_b, model_name) in enumerate(bs):
x = pd.Series(b[:, k], name=f"slope b_{k}")
y = pd.Series(sigma_b, name="slope group variance sigma_b")
axs[i].plot(x, y, ".")
axs[i].set(title=model_name, ylabel="sigma_b", xlabel=f"b_{k}")
axs[i].set_xlim(x_lim)
axs[i].set_ylim(y_lim)
```
# Direct Grib Read
If you have installed more recent versions of pygrib, you can ingest grib mosaics directly without conversion to netCDF. This speeds up the ingest by ~15-20 seconds. This notebook will also demonstrate how to use MMM-Py with cartopy, and how to download near-realtime data from NCEP.
```
from __future__ import print_function
import numpy as np
import matplotlib.pyplot as plt
import datetime as dt
import pandas as pd
import glob
import mmmpy
import cartopy.crs as ccrs
import cartopy.feature as cfeature
from cartopy.io.img_tiles import StamenTerrain
import pygrib
import os
import pyart
%matplotlib inline
```
### Download MRMS directly from NCEP
```
def download_files(input_dt, max_seconds=300):
"""
This function takes an input datetime object, and will try to match with the closest mosaics in time
that are available at NCEP. Note that NCEP does not archive much beyond 24 hours of data.
Parameters
----------
input_dt : datetime.datetime object
input datetime object, will try to find closest file in time on NCEP server
Other Parameters
----------------
max_seconds : int or float
Maximum number of seconds difference tolerated between input and selected datetimes,
before file matching will fail
Returns
-------
files : 1-D ndarray of strings
Array of mosaic file names, ready for ingest into MMM-Py
"""
baseurl = 'http://mrms.ncep.noaa.gov/data/3DReflPlus/'
page1 = pd.read_html(baseurl)
directories = np.array(page1[0][0][3:-1]) # May need to change indices depending on pandas version
urllist = []
files = []
for i, d in enumerate(directories):
print(baseurl + d)
page2 = pd.read_html(baseurl + d)
filelist = np.array(page2[0][0][3:-1]) # May need to change indices depending on pandas version
dts = []
for filen in filelist:
# Will need to change in event of a name change
dts.append(dt.datetime.strptime(filen[32:47], '%Y%m%d-%H%M%S'))
dts = np.array(dts)
diff = np.abs((dts - input_dt))
if np.min(diff).total_seconds() <= max_seconds:
urllist.append(baseurl + d + filelist[np.argmin(diff)])
files.append(filelist[np.argmin(diff)])
for url in urllist:
print(url)
os.system('wget ' + url)
return np.array(files)
files = download_files(dt.datetime.utcnow())
```
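The core of `download_files` — matching an input datetime against the available file times, within a tolerance — can be factored out and tested without touching the NCEP server (a hypothetical helper, not part of MMM-Py):

```python
import numpy as np
import datetime as dt

def closest_in_time(candidates, target, max_seconds=300):
    """Return the candidate datetime closest to target, or None if even the
    nearest one is more than max_seconds away (mirrors download_files)."""
    diffs = np.array([abs((c - target).total_seconds()) for c in candidates])
    i = int(np.argmin(diffs))
    return candidates[i] if diffs[i] <= max_seconds else None

cands = [dt.datetime(2018, 1, 1, h, 0) for h in range(0, 12, 2)]
print(closest_in_time(cands, dt.datetime(2018, 1, 1, 4, 2)))   # 04:00 file
print(closest_in_time(cands, dt.datetime(2018, 1, 1, 4, 30)))  # None (30 min off)
```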
### Direct ingest of grib into MMM-Py
```
mosaic = mmmpy.MosaicTile(files)
mosaic.diag()
```
### Plot with cartopy
```
tiler = StamenTerrain()
ext = [-130, -65, 20, 50]
fig = plt.figure(figsize=(12, 6))
projection = ccrs.PlateCarree() # ShadedReliefESRI().crs
ax = plt.axes(projection=projection)
ax.set_extent(ext)
ax.add_image(tiler, 3)
# Create a feature for States/Admin 1 regions at 1:10m from Natural Earth
states_provinces = cfeature.NaturalEarthFeature(
category='cultural',
name='admin_1_states_provinces_lines',
scale='50m',
facecolor='none')
ax.add_feature(states_provinces, edgecolor='gray')
# Create a feature for Countries 0 regions at 1:10m from Natural Earth
countries = cfeature.NaturalEarthFeature(
category='cultural',
name='admin_0_boundary_lines_land',
scale='50m',
facecolor='none')
ax.add_feature(countries, edgecolor='k')
ax.coastlines(resolution='50m')
mosaic.get_comp()
valmask = np.ma.masked_where(mosaic.mrefl3d_comp <= 0, mosaic.mrefl3d_comp)
cs = plt.pcolormesh(mosaic.Longitude, mosaic.Latitude, valmask, vmin=0, vmax=55,
cmap='pyart_Carbone42', transform=projection)
plt.colorbar(cs, label='Composite Reflectivity (dBZ)',
orientation='horizontal', pad=0.05, shrink=0.75, fraction=0.05, aspect=30)
plt.title(dt.datetime.utcfromtimestamp(mosaic.Time).strftime('%m/%d/%Y %H:%M UTC'))
```
# Lab 01 : MLP -- demo
# Understanding the training loop
```
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from random import randint
import utils
```
### Download the data
```
from utils import check_mnist_dataset_exists
data_path=check_mnist_dataset_exists()
train_data=torch.load(data_path+'mnist/train_data.pt')
train_label=torch.load(data_path+'mnist/train_label.pt')
test_data=torch.load(data_path+'mnist/test_data.pt')
```
### Make a three layer net class
```
class three_layer_net(nn.Module):
def __init__(self, input_size, hidden_size1, hidden_size2, output_size):
super(three_layer_net , self).__init__()
self.layer1 = nn.Linear( input_size , hidden_size1 , bias=False )
self.layer2 = nn.Linear( hidden_size1 , hidden_size2 , bias=False )
self.layer3 = nn.Linear( hidden_size2 , output_size , bias=False )
def forward(self, x):
y = self.layer1(x)
y_hat = F.relu(y)
z = self.layer2(y_hat)
z_hat = F.relu(z)
scores = self.layer3(z_hat)
return scores
```
### Build the net
```
net=three_layer_net(784, 50, 50, 10)
print(net)
```
### Choose the criterion, optimizer, learning rate, and batch size
```
criterion = nn.CrossEntropyLoss()
optimizer=torch.optim.SGD( net.parameters() , lr=0.01 )
bs=200
```
### Train the network on the train set (process 5000 batches)
```
for iter in range(5000):
# Set dL/dU, dL/dV, dL/dW to be filled with zeros
optimizer.zero_grad()
# create a minibatch
indices=torch.LongTensor(bs).random_(0,60000)
minibatch_data = train_data[indices]
minibatch_label= train_label[indices]
#reshape the minibatch
inputs = minibatch_data.view(bs,784)
# tell Pytorch to start tracking all operations that will be done on "inputs"
inputs.requires_grad_()
# forward the minibatch through the net
scores=net( inputs )
# Compute the average of the losses of the data points in the minibatch
loss = criterion( scores , minibatch_label)
# backward pass to compute dL/dU, dL/dV and dL/dW
loss.backward()
# do one step of stochastic gradient descent: U=U-lr(dL/dU), V=V-lr(dL/dU), ...
optimizer.step()
```
### Choose image at random from the test set and see how good/bad are the predictions
```
# choose a picture at random
idx=randint(0, 10000-1)
im=test_data[idx]
# display the picture
utils.show(im)
# feed it to the net and display the confidence scores
scores = net( im.view(1,784))
prob=F.softmax(scores, dim = 1)
utils.show_prob_mnist(prob)
```
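Beyond spot-checking single images, it is common to measure accuracy over the whole test set. A minimal sketch of such an evaluation loop (the `accuracy` helper and the random demo tensors below are my own; the notebook's trained `net`, `test_data`, and a matching label tensor would be substituted in):

```python
import torch

def accuracy(net, data, labels, bs=1000):
    # evaluate in minibatches so memory use stays bounded
    correct = 0
    with torch.no_grad():
        for start in range(0, data.shape[0], bs):
            x = data[start:start + bs].view(-1, 784)
            correct += (net(x).argmax(dim=1) == labels[start:start + bs]).sum().item()
    return correct / data.shape[0]

# demo with random tensors: an untrained linear layer scores near chance (~10%)
torch.manual_seed(0)
demo_net = torch.nn.Linear(784, 10)
demo_data = torch.rand(500, 28, 28)
demo_labels = torch.randint(0, 10, (500,))
print(accuracy(demo_net, demo_data, demo_labels))
```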
```
# header files
import torch
import torch.nn as nn
import torchvision
import numpy as np
from torch.utils.tensorboard import SummaryWriter
from google.colab import drive
drive.mount('/content/drive')
np.random.seed(1234)
torch.manual_seed(1234)
torch.cuda.manual_seed(1234)
# define transforms
train_transforms = torchvision.transforms.Compose([torchvision.transforms.RandomRotation(30),
torchvision.transforms.Resize((224, 224)),
torchvision.transforms.RandomHorizontalFlip(),
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])
# datasets
train_data = torchvision.datasets.ImageFolder("/content/drive/My Drive/train_images/", transform=train_transforms)
val_data = torchvision.datasets.ImageFolder("/content/drive/My Drive/val_images/", transform=train_transforms)
print(len(train_data))
print(len(val_data))
# load the data
train_loader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True, num_workers=16)
val_loader = torch.utils.data.DataLoader(val_data, batch_size=32, shuffle=False, num_workers=16)
class Convolution(torch.nn.Sequential):
# init method
def __init__(self, in_channels, out_channels, kernel_size, strides, padding):
super().__init__()
self.in_channels = in_channels
self.out_channels = out_channels
self.kernel_size = kernel_size
self.strides = strides
self.padding = padding
self.add_module("conv", torch.nn.Conv2d(self.in_channels, self.out_channels, kernel_size=self.kernel_size, stride=self.strides, padding=self.padding))
self.add_module("norm", torch.nn.BatchNorm2d(self.out_channels))
self.add_module("act", torch.nn.ReLU(inplace=True))
# define VGG19 network
class VGG19(torch.nn.Module):
# init method
def __init__(self, num_classes=2):
super(VGG19, self).__init__()
self.features = nn.Sequential(
# first cnn block
Convolution(3, 64, 3, 1, 1),
Convolution(64, 64, 3, 1, 1),
nn.MaxPool2d(kernel_size=2, stride=2),
# second cnn block
Convolution(64, 128, 3, 1, 1),
Convolution(128, 128, 3, 1, 1),
nn.MaxPool2d(kernel_size=2, stride=2),
# third cnn block
Convolution(128, 256, 3, 1, 1),
Convolution(256, 256, 3, 1, 1),
Convolution(256, 256, 3, 1, 1),
Convolution(256, 256, 3, 1, 1),
nn.MaxPool2d(kernel_size=2, stride=2),
# fourth cnn block
Convolution(256, 512, 3, 1, 1),
Convolution(512, 512, 3, 1, 1),
Convolution(512, 512, 3, 1, 1),
Convolution(512, 512, 3, 1, 1),
nn.MaxPool2d(kernel_size=2, stride=2),
# fifth cnn block
Convolution(512, 512, 3, 1, 1),
Convolution(512, 512, 3, 1, 1),
Convolution(512, 512, 3, 1, 1),
Convolution(512, 512, 3, 1, 1),
nn.MaxPool2d(kernel_size=2, stride=2)
)
self.avgpool = nn.AdaptiveAvgPool2d(7)
self.classifier = nn.Sequential(
nn.Linear(512 * 7 * 7, 4096),
nn.ReLU(inplace = True),
nn.Dropout(0.5),
nn.Linear(4096, 4096),
nn.ReLU(inplace = True),
nn.Dropout(0.5),
nn.Linear(4096, num_classes),
)
# forward step
def forward(self, x):
x = self.features(x)
x = self.avgpool(x)
x = x.view(x.shape[0], -1)
x = self.classifier(x)
return x
# Cross-Entropy loss with Label Smoothing
class CrossEntropyLabelSmoothingLoss(nn.Module):
def __init__(self, smoothing=0.0):
super(CrossEntropyLabelSmoothingLoss, self).__init__()
self.smoothing = smoothing
def forward(self, pred, target):
log_prob = torch.nn.functional.log_softmax(pred, dim=-1)
        weight = pred.new_ones(pred.size()) * (self.smoothing/(pred.size(-1)-1.))
weight.scatter_(-1, target.unsqueeze(-1), (1.-self.smoothing))
loss = (-weight * log_prob).sum(dim=-1).mean()
return loss
# define loss (smoothing=0 is equivalent to standard Cross-Entropy loss)
criterion = CrossEntropyLabelSmoothingLoss(0.0)
# load model
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = VGG19()
model.to(device)
# load tensorboard
%load_ext tensorboard
%tensorboard --logdir logs
# optimizer to be used
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9, weight_decay=5e-4)
best_metric = -1
best_metric_epoch = -1
writer = SummaryWriter("./logs/")
# train and validate
for epoch in range(0, 100):
# train
model.train()
training_loss = 0.0
total = 0
correct = 0
for i, (input, target) in enumerate(train_loader):
input = input.to(device)
target = target.to(device)
optimizer.zero_grad()
output = model(input)
loss = criterion(output, target)
loss.backward()
optimizer.step()
training_loss = training_loss + loss.item()
_, predicted = output.max(1)
total += target.size(0)
correct += predicted.eq(target).sum().item()
training_loss = training_loss/float(len(train_loader))
training_accuracy = str(100.0*(float(correct)/float(total)))
writer.add_scalar("Loss/train", float(training_loss), epoch)
writer.add_scalar("Accuracy/train", float(training_accuracy), epoch)
# validate
model.eval()
valid_loss = 0.0
total = 0
correct = 0
for i, (input, target) in enumerate(val_loader):
with torch.no_grad():
input = input.to(device)
target = target.to(device)
output = model(input)
loss = criterion(output, target)
_, predicted = output.max(1)
total += target.size(0)
correct += predicted.eq(target).sum().item()
valid_loss = valid_loss + loss.item()
valid_loss = valid_loss/float(len(val_loader))
valid_accuracy = str(100.0*(float(correct)/float(total)))
writer.add_scalar("Loss/val", float(valid_loss), epoch)
writer.add_scalar("Accuracy/val", float(valid_accuracy), epoch)
# store best model
if(float(valid_accuracy)>best_metric and epoch>=10):
best_metric = float(valid_accuracy)
best_metric_epoch = epoch
torch.save(model.state_dict(), "best_model_vgg19.pth")
print()
print("Epoch" + str(epoch) + ":")
print("Training Accuracy: " + str(training_accuracy) + " Validation Accuracy: " + str(valid_accuracy))
print("Training Loss: " + str(training_loss) + " Validation Loss: " + str(valid_loss))
print()
```
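The smoothing weights built inside `CrossEntropyLabelSmoothingLoss` can be sanity-checked in isolation. A minimal numpy sketch of the same target construction (the `smoothed_targets` helper name is my own):

```python
import numpy as np

def smoothed_targets(labels, num_classes, smoothing):
    # the true class keeps 1 - smoothing; the other classes share `smoothing` equally
    w = np.full((len(labels), num_classes), smoothing / (num_classes - 1))
    w[np.arange(len(labels)), labels] = 1.0 - smoothing
    return w

print(smoothed_targets(np.array([0, 2]), 3, 0.1))
# each row sums to 1; with smoothing=0 the rows are plain one-hot vectors
```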
Deep Learning
=============
Assignment 1
------------
The objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.
This notebook uses the [notMNIST](http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html) dataset. It is designed to look like the classic [MNIST](http://yann.lecun.com/exdb/mnist/) dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST.
```
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import matplotlib.pyplot as plt
import numpy as np
import os
import sys
import tarfile
from IPython.display import display, Image
from scipy import ndimage
import imageio
import PIL.Image
from sklearn.linear_model import LogisticRegression
from six.moves.urllib.request import urlretrieve
from six.moves import cPickle as pickle
```
First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500,000 labelled examples and the test set about 19,000. Given these sizes, it should be possible to train models quickly on any machine.
```
url = 'http://yaroslavvb.com/upload/notMNIST/'
def maybe_download(filename, expected_bytes, force=False):
"""Download a file if not present, and make sure it's the right size."""
if force or not os.path.exists(filename):
filename, _ = urlretrieve(url + filename, filename)
statinfo = os.stat(filename)
if statinfo.st_size == expected_bytes:
print('Found and verified', filename)
else:
raise Exception(
      'Failed to verify ' + filename + '. Can you get to it with a browser?')
return filename
train_filename = maybe_download('notMNIST_large.tar.gz', 247336696)
test_filename = maybe_download('notMNIST_small.tar.gz', 8458043)
print("Test file name: {}".format(test_filename))
print("Train file name: {}".format(train_filename))
```
Extract the dataset from the compressed .tar.gz file.
This should give you a set of directories, labelled A through J.
```
num_classes = 10
np.random.seed(133)
def maybe_extract(filename, force=False):
root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz
if os.path.isdir(root) and not force:
# You may override by setting force=True.
print('%s already present - Skipping extraction of %s.' % (root, filename))
else:
print('Extracting data for %s. This may take a while. Please wait.' % root)
tar = tarfile.open(filename)
sys.stdout.flush()
tar.extractall()
tar.close()
data_folders = [
os.path.join(root, d) for d in sorted(os.listdir(root))
if os.path.isdir(os.path.join(root, d))]
if len(data_folders) != num_classes:
raise Exception(
'Expected %d folders, one per class. Found %d instead.' % (
num_classes, len(data_folders)))
print(data_folders)
return data_folders
train_folders = maybe_extract(train_filename)
test_folders = maybe_extract(test_filename)
```
---
Problem 1
---------
Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint: you can use the package IPython.display.
---
First of all, let's import some libraries that I will use later on and activate inline display of matplotlib outputs:
```
import random
import hashlib
%matplotlib inline
def disp_samples(data_folders, sample_size):
for folder in data_folders:
print(folder)
image_files = os.listdir(folder)
image_sample = random.sample(image_files, sample_size)
for image in image_sample:
image_file = os.path.join(folder, image)
i = Image(filename=image_file)
display(i)
disp_samples(train_folders, 5)
disp_samples(test_folders, 5)
```
Now let's load the data in a more manageable format. Since, depending on your computer setup, you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk and curate them independently. Later we'll merge them into a single dataset of manageable size.
We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road.
A few images might not be readable, we'll just skip them.
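The normalization described above maps raw byte values onto the interval [-0.5, 0.5]; for instance:

```python
import numpy as np

pixel_depth = 255.0
raw = np.array([0.0, 255.0])                  # darkest and brightest byte values
print((raw - pixel_depth / 2) / pixel_depth)  # [-0.5  0.5]
```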
```
image_size = 28 # Pixel width and height.
pixel_depth = 255.0 # Number of levels per pixel.
def load_letter(folder, min_num_images):
"""Load the data for a single letter label."""
image_files = os.listdir(folder)
dataset = np.ndarray(shape=(len(image_files), image_size, image_size),
dtype=np.float32)
image_index = 0
print(folder)
for image in os.listdir(folder):
image_file = os.path.join(folder, image)
"""Verify"""
try:
img = PIL.Image.open(image_file) # open the image file
img.verify() # verify that it is, in fact an image
image_data = (imageio.imread(image_file).astype(float) -
pixel_depth / 2) / pixel_depth
if image_data.shape != (image_size, image_size):
raise Exception('Unexpected image shape: %s' % str(image_data.shape))
dataset[image_index, :, :] = image_data
image_index += 1
except IOError as e:
print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.')
num_images = image_index
dataset = dataset[0:num_images, :, :]
if num_images < min_num_images:
raise Exception('Many fewer images than expected: %d < %d' %
(num_images, min_num_images))
print('Full dataset tensor:', dataset.shape)
print('Mean:', np.mean(dataset))
print('Standard deviation:', np.std(dataset))
return dataset
def maybe_pickle(data_folders, min_num_images_per_class, force=False):
dataset_names = []
for folder in data_folders:
set_filename = folder + '.pickle'
dataset_names.append(set_filename)
if os.path.exists(set_filename) and not force:
# You may override by setting force=True.
print('%s already present - Skipping pickling.' % set_filename)
else:
print('Pickling %s.' % set_filename)
dataset = load_letter(folder, min_num_images_per_class)
try:
with open(set_filename, 'wb') as f:
pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', set_filename, ':', e)
return dataset_names
train_datasets = maybe_pickle(train_folders, 45000)
test_datasets = maybe_pickle(test_folders, 1800)
```
---
Problem 2
---------
Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint: you can use matplotlib.pyplot.
---
```
def disp_8_img(imgs, titles):
"""Display subplot with 8 images or less"""
for i, img in enumerate(imgs):
plt.subplot(2, 4, i+1)
plt.title(titles[i])
plt.axis('off')
plt.imshow(img)
def disp_sample_pickles(data_folders):
folder = random.sample(data_folders, 1)
pickle_filename = ''.join(folder) + '.pickle'
try:
with open(pickle_filename, 'rb') as f:
dataset = pickle.load(f)
except Exception as e:
print('Unable to read data from', pickle_filename, ':', e)
return
# display
plt.suptitle(''.join(folder)[-1])
for i, img in enumerate(random.sample(list(dataset), 8)):
plt.subplot(2, 4, i+1)
plt.axis('off')
plt.imshow(img)
disp_sample_pickles(train_folders)
disp_sample_pickles(test_folders)
```
---
Problem 3
---------
Another check: we expect the data to be balanced across classes. Verify that.
---
Data is balanced across classes if the classes have about the same number of items. Let's check the number of images by class.
```
def disp_number_images(data_folders):
for folder in data_folders:
pickle_filename = ''.join(folder) + '.pickle'
try:
with open(pickle_filename, 'rb') as f:
dataset = pickle.load(f)
except Exception as e:
print('Unable to read data from', pickle_filename, ':', e)
return
print('Number of images in ', folder, ' : ', len(dataset))
disp_number_images(train_folders)
disp_number_images(test_folders)
```
There are only minor gaps, so the classes are well balanced.
Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune `train_size` as needed. The labels will be stored into a separate array of integers 0 through 9.
Also create a validation dataset for hyperparameter tuning.
```
def make_arrays(nb_rows, img_size):
if nb_rows:
dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32)
labels = np.ndarray(nb_rows, dtype=np.int32)
else:
dataset, labels = None, None
return dataset, labels
def merge_datasets(pickle_files, train_size, valid_size=0):
num_classes = len(pickle_files)
valid_dataset, valid_labels = make_arrays(valid_size, image_size)
train_dataset, train_labels = make_arrays(train_size, image_size)
vsize_per_class = valid_size // num_classes
tsize_per_class = train_size // num_classes
start_v, start_t = 0, 0
end_v, end_t = vsize_per_class, tsize_per_class
end_l = vsize_per_class+tsize_per_class
for label, pickle_file in enumerate(pickle_files):
try:
with open(pickle_file, 'rb') as f:
letter_set = pickle.load(f)
# let's shuffle the letters to have random validation and training set
np.random.shuffle(letter_set)
if valid_dataset is not None:
valid_letter = letter_set[:vsize_per_class, :, :]
valid_dataset[start_v:end_v, :, :] = valid_letter
valid_labels[start_v:end_v] = label
start_v += vsize_per_class
end_v += vsize_per_class
train_letter = letter_set[vsize_per_class:end_l, :, :]
train_dataset[start_t:end_t, :, :] = train_letter
train_labels[start_t:end_t] = label
start_t += tsize_per_class
end_t += tsize_per_class
except Exception as e:
print('Unable to process data from', pickle_file, ':', e)
raise
return valid_dataset, valid_labels, train_dataset, train_labels
train_size = 200000
valid_size = 10000
test_size = 10000
valid_dataset, valid_labels, train_dataset, train_labels = merge_datasets(
train_datasets, train_size, valid_size)
_, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size)
print('Training:', train_dataset.shape, train_labels.shape)
print('Validation:', valid_dataset.shape, valid_labels.shape)
print('Testing:', test_dataset.shape, test_labels.shape)
```
Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match.
```
def randomize(dataset, labels):
permutation = np.random.permutation(labels.shape[0])
shuffled_dataset = dataset[permutation,:,:]
shuffled_labels = labels[permutation]
return shuffled_dataset, shuffled_labels
train_dataset, train_labels = randomize(train_dataset, train_labels)
test_dataset, test_labels = randomize(test_dataset, test_labels)
valid_dataset, valid_labels = randomize(valid_dataset, valid_labels)
```
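To convince yourself that the shuffle keeps each image paired with its label, here is a toy check on hypothetical miniature data, reusing the same permutation logic as `randomize` above:

```python
import numpy as np

np.random.seed(0)
toy_labels = np.arange(5)
# encode each label into its "image" so the pairing is trivial to verify
toy_images = toy_labels.reshape(-1, 1, 1) * np.ones((5, 2, 2))
permutation = np.random.permutation(toy_labels.shape[0])
shuffled_images = toy_images[permutation]
shuffled_labels = toy_labels[permutation]
assert np.all(shuffled_images[:, 0, 0] == shuffled_labels)  # pairs stay intact
```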
---
Problem 4
---------
Convince yourself that the data is still good after shuffling!
---
To be sure that the data are still fine after the merger and the randomization, I will select one item and display the image alongside the label. Note: 0 = A, 1 = B, 2 = C, 3 = D, 4 = E, 5 = F, 6 = G, 7 = H, 8 = I, 9 = J.
```
pretty_labels = {0: 'A', 1: 'B', 2: 'C', 3: 'D', 4: 'E', 5: 'F', 6: 'G', 7: 'H', 8: 'I', 9: 'J'}
def disp_sample_dataset(dataset, labels):
items = random.sample(range(len(labels)), 8)
for i, item in enumerate(items):
plt.subplot(2, 4, i+1)
plt.axis('off')
plt.title(pretty_labels[labels[item]])
plt.imshow(dataset[item])
disp_sample_dataset(train_dataset, train_labels)
disp_sample_dataset(valid_dataset, valid_labels)
disp_sample_dataset(test_dataset, test_labels)
```
Finally, let's save the data for later reuse:
```
pickle_file = 'notMNIST.pickle'
try:
f = open(pickle_file, 'wb')
save = {
'train_dataset': train_dataset,
'train_labels': train_labels,
'valid_dataset': valid_dataset,
'valid_labels': valid_labels,
'test_dataset': test_dataset,
'test_labels': test_labels,
}
pickle.dump(save, f, pickle.HIGHEST_PROTOCOL)
f.close()
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
statinfo = os.stat(pickle_file)
print('Compressed pickle size:', statinfo.st_size)
```
---
Problem 5
---------
By construction, this dataset might contain a lot of overlapping samples, including training data that's also contained in the validation and test set! Overlap between training and test can skew the results if you expect to use your model in an environment where there is never an overlap, but are actually ok if you expect to see training samples recur when you use it.
Measure how much overlap there is between training, validation and test samples.
Optional questions:
- What about near duplicates between datasets? (images that are almost identical)
- Create a sanitized validation and test set, and compare your accuracy on those in subsequent assignments.
---
In this part, I will explore the datasets and understand the overlap cases better. There are overlaps between datasets, but there are also duplicates within the same dataset! Processing time is also critical. I will first use nested loops and matrix comparison, which is slow, and then use a hash function to accelerate the process and handle the whole dataset.
```
def display_overlap(overlap, source_dataset, target_dataset):
item = random.choice(overlap.keys())
imgs = np.concatenate(([source_dataset[item]], target_dataset[overlap[item][0:7]]))
plt.suptitle(item)
for i, img in enumerate(imgs):
plt.subplot(2, 4, i+1)
plt.axis('off')
plt.imshow(img)
def extract_overlap(dataset_1, dataset_2):
overlap = {}
for i, img_1 in enumerate(dataset_1):
for j, img_2 in enumerate(dataset_2):
if np.array_equal(img_1, img_2):
if not i in overlap.keys():
overlap[i] = []
overlap[i].append(j)
return overlap
%time overlap_test_train = extract_overlap(test_dataset[:200], train_dataset)
print('Number of overlaps:', len(overlap_test_train.keys()))
display_overlap(overlap_test_train, test_dataset[:200], train_dataset)
```
The ``display_overlap`` function above displays one of the duplicates: the first element is from the first dataset, and the next ones are from the dataset used for the comparison.
Now that exact duplicates have been found, let's look for near duplicates. How do we define near-identical images? That's a tricky question. My first thought was to use the ``allclose`` numpy matrix comparison. This is too restrictive, since two images can vary by a single pixel and still be very similar, even if the variation on that pixel is large. A better solution involves some kind of average.
To keep it simple and still relevant, I will use the Manhattan norm (sum of absolute values) of the difference matrix. Since the images of the dataset all have the same size, I will not normalize the norm value. Note that this is a pixel-by-pixel comparison, and therefore it will not scale to the whole dataset, but it helps to understand image similarities.
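To get a feel for the scale of this norm before applying it, consider two toy arrays with values in the dataset's [-0.5, 0.5] range:

```python
import numpy as np

a = np.array([[0.5, -0.5], [0.5, -0.5]])
b = a.copy()
b[1, 0] -= 0.04                  # perturb a single pixel slightly
print(np.sum(np.abs(a - b)))     # ~0.04, far below a threshold of 10
```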
```
MAX_MANHATTAN_NORM = 10
def extract_overlap_near(dataset_1, dataset_2):
overlap = {}
for i, img_1 in enumerate(dataset_1):
for j, img_2 in enumerate(dataset_2):
diff = img_1 - img_2
m_norm = np.sum(np.abs(diff))
if m_norm < MAX_MANHATTAN_NORM:
if not i in overlap.keys():
overlap[i] = []
overlap[i].append(j)
return overlap
%time overlap_test_train_near = extract_overlap_near(test_dataset[:200], train_dataset)
print('Number of near overlaps:', len(overlap_test_train_near.keys()))
display_overlap(overlap_test_train_near, test_dataset[:200], train_dataset)
```
The techniques above work, but the performance is very low and the methods do not scale to the full dataset. Let's try to improve the performance, taking some reference times on a small dataset first.
Here are some ideas:
+ stop at the first occurrence
+ use the numpy ``where`` function on a diff of the datasets
+ hash comparison
```
def extract_overlap_stop(dataset_1, dataset_2):
overlap = {}
for i, img_1 in enumerate(dataset_1):
for j, img_2 in enumerate(dataset_2):
if np.array_equal(img_1, img_2):
overlap[i] = [j]
break
return overlap
%time overlap_test_train = extract_overlap_stop(test_dataset[:200], train_dataset)
print('Number of overlaps:', len(overlap_test_train.keys()))
display_overlap(overlap_test_train, test_dataset[:200], train_dataset)
```
It is faster, and only one duplicate from the second dataset is displayed. This is still not scalable.
```
MAX_MANHATTAN_NORM = 10
def extract_overlap_where(dataset_1, dataset_2):
overlap = {}
for i, img_1 in enumerate(dataset_1):
diff = dataset_2 - img_1
norm = np.sum(np.abs(diff), axis=1)
duplicates = np.where(norm < MAX_MANHATTAN_NORM)
if len(duplicates[0]):
overlap[i] = duplicates[0]
return overlap
test_flat = test_dataset.reshape(test_dataset.shape[0], 28 * 28)
train_flat = train_dataset.reshape(train_dataset.shape[0], 28 * 28)
%time overlap_test_train = extract_overlap_where(test_flat[:200], train_flat)
print('Number of overlaps:', len(overlap_test_train.keys()))
display_overlap(overlap_test_train, test_dataset[:200], train_dataset)
```
The built-in numpy ``where`` function provides some improvement as well, but this algorithm still does not scale to the full dataset.
To make it work at scale, the best option is to use a hash function. To find exact duplicates, the hash functions used in cryptography work just fine.
```
def extract_overlap_hash(dataset_1, dataset_2):
dataset_hash_1 = [hashlib.sha256(img).hexdigest() for img in dataset_1]
dataset_hash_2 = [hashlib.sha256(img).hexdigest() for img in dataset_2]
overlap = {}
for i, hash1 in enumerate(dataset_hash_1):
for j, hash2 in enumerate(dataset_hash_2):
if hash1 == hash2:
if not i in overlap.keys():
overlap[i] = []
overlap[i].append(j) ## use np.where
return overlap
%time overlap_test_train = extract_overlap_hash(test_dataset[:200], train_dataset)
print('Number of overlaps:', len(overlap_test_train.keys()))
display_overlap(overlap_test_train, test_dataset[:200], train_dataset)
```
More overlapping values could be found because of hash collisions: several images can have the same hash while actually being different. This is not observed here, and even if it happens, it is acceptable, since all true duplicates will still be removed.
We can make the processing a bit faster by using the built-in numpy ``where`` function.
```
def extract_overlap_hash_where(dataset_1, dataset_2):
dataset_hash_1 = np.array([hashlib.sha256(img).hexdigest() for img in dataset_1])
dataset_hash_2 = np.array([hashlib.sha256(img).hexdigest() for img in dataset_2])
overlap = {}
for i, hash1 in enumerate(dataset_hash_1):
duplicates = np.where(dataset_hash_2 == hash1)
if len(duplicates[0]):
overlap[i] = duplicates[0]
return overlap
%time overlap_test_train = extract_overlap_hash_where(test_dataset[:200], train_dataset)
print('Number of overlaps:', len(overlap_test_train.keys()))
display_overlap(overlap_test_train, test_dataset[:200], train_dataset)
```
From my perspective, near duplicates should also be removed in the sanitized datasets. My assumption is that "near" duplicates are very close (sometimes there is just a one-pixel border of difference) and penalize the training the same way true duplicates do.
That being said, finding near duplicates with a hash function is not obvious. There are techniques for that, such as "locality-sensitive hashing", "perceptual hashing" or "difference hashing", and there are even Python libraries available. Unfortunately I did not have time to try them. The sanitized datasets generated below are based on exact duplicates found with a cryptographic hash function.
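As an illustration of the "difference hashing" idea mentioned above, here is a minimal sketch, not one of the library implementations; the `dhash` helper and its crude nearest-neighbour downsampling are my own simplification:

```python
import numpy as np

def dhash(img, hash_size=8):
    # crude nearest-neighbour resize to hash_size x (hash_size + 1)
    rows = np.linspace(0, img.shape[0] - 1, hash_size).astype(int)
    cols = np.linspace(0, img.shape[1] - 1, hash_size + 1).astype(int)
    small = img[np.ix_(rows, cols)]
    # each bit records whether a pixel is brighter than its right neighbour
    return (small[:, 1:] > small[:, :-1]).tobytes()

img = np.tile(np.linspace(-0.5, 0.5, 28), (28, 1))   # horizontal brightness ramp
noisy = img + 0.001                                   # a "near duplicate"
print(dhash(img) == dhash(noisy))   # the tiny shift leaves the hash unchanged
```

Because the hash only depends on relative brightness of neighbouring pixels, small uniform perturbations do not change it, while a genuinely different image does.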
For sanitizing the dataset, I change the ``extract_overlap_hash_where`` function above so that it returns the clean dataset directly.
```
def sanetize(dataset_1, dataset_2, labels_1):
dataset_hash_1 = np.array([hashlib.sha256(img).hexdigest() for img in dataset_1])
dataset_hash_2 = np.array([hashlib.sha256(img).hexdigest() for img in dataset_2])
overlap = [] # list of indexes
for i, hash1 in enumerate(dataset_hash_1):
duplicates = np.where(dataset_hash_2 == hash1)
if len(duplicates[0]):
overlap.append(i)
return np.delete(dataset_1, overlap, 0), np.delete(labels_1, overlap, None)
%time test_dataset_sanit, test_labels_sanit = sanetize(test_dataset[:200], train_dataset, test_labels[:200])
print('Overlapping images removed: ', len(test_dataset[:200]) - len(test_dataset_sanit))
```
The same value is found, so we can now sanitize the test and train datasets.
```
%time test_dataset_sanit, test_labels_sanit = sanetize(test_dataset, train_dataset, test_labels)
print('Overlapping images removed: ', len(test_dataset) - len(test_dataset_sanit))
%time valid_dataset_sanit, valid_labels_sanit = sanetize(valid_dataset, train_dataset, valid_labels)
print('Overlapping images removed: ', len(valid_dataset) - len(valid_dataset_sanit))
pickle_file_sanit = 'notMNIST_sanit.pickle'
try:
f = open(pickle_file_sanit, 'wb')
save = {
'train_dataset': train_dataset,
'train_labels': train_labels,
'valid_dataset': valid_dataset_sanit,
'valid_labels': valid_labels_sanit,
'test_dataset': test_dataset_sanit,
'test_labels': test_labels_sanit,
}
pickle.dump(save, f, pickle.HIGHEST_PROTOCOL)
f.close()
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
statinfo = os.stat(pickle_file_sanit)
print('Compressed pickle size:', statinfo.st_size)
```
Since I did not have time to generate clean sanitized datasets, I did not use the datasets generated above to train my NN in the next assignments.
---
Problem 6
---------
Let's get an idea of what an off-the-shelf classifier can give you on this data. It's always good to check that there is something to learn, and that it's a problem that is not so trivial that a canned solution solves it.
Train a simple model on this data using 50, 100, 1000 and 5000 training samples. Hint: you can use the LogisticRegression model from sklearn.linear_model.
Optional question: train an off-the-shelf model on all the data!
---
I have already used scikit-learn in a previous MOOC. It is a great tool, very easy to use!
```
regr = LogisticRegression()
X_test = test_dataset.reshape(test_dataset.shape[0], 28 * 28)
y_test = test_labels
sample_size = 50
X_train = train_dataset[:sample_size].reshape(sample_size, 784)
y_train = train_labels[:sample_size]
%time regr.fit(X_train, y_train)
regr.score(X_test, y_test)
pred_labels = regr.predict(X_test)
disp_sample_dataset(test_dataset, pred_labels)
sample_size = 100
X_train = train_dataset[:sample_size].reshape(sample_size, 784)
y_train = train_labels[:sample_size]
%time regr.fit(X_train, y_train)
regr.score(X_test, y_test)
sample_size = 1000
X_train = train_dataset[:sample_size].reshape(sample_size, 784)
y_train = train_labels[:sample_size]
%time regr.fit(X_train, y_train)
regr.score(X_test, y_test)
X_valid = valid_dataset[:sample_size].reshape(sample_size, 784)
y_valid = valid_labels[:sample_size]
regr.score(X_valid, y_valid)
pred_labels = regr.predict(X_valid)
disp_sample_dataset(valid_dataset, pred_labels)
sample_size = 5000
X_train = train_dataset[:sample_size].reshape(sample_size, 784)
y_train = train_labels[:sample_size]
%time regr.fit(X_train, y_train)
regr.score(X_test, y_test)
```
To train the model on all the data, we have to use another solver. SAG is the fastest one.
```
regr2 = LogisticRegression(solver='sag')
sample_size = len(train_dataset)
X_train = train_dataset[:sample_size].reshape(sample_size, 784)
y_train = train_labels[:sample_size]
%time regr2.fit(X_train, y_train)
regr2.score(X_test, y_test)
pred_labels = regr2.predict(X_test)
disp_sample_dataset(test_dataset, pred_labels)
```
The accuracy may be weak compared to a deep neural net, but as my first character recognition technique, I find it already impressive!
# MLB's Biggest All-Star Injustices
```
# Import dependencies
import numpy as np
import pandas as pd
pd.set_option('display.max_columns', 100)
pd.options.mode.chained_assignment = None
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.metrics import classification_report, roc_auc_score
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.preprocessing import StandardScaler
import seaborn as sns
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
# Print accuracy of pandas crosstab
def crosstabAccuracy(ct):
try:
acc = (ct[0][0]+ct[1][1]) / (ct[0][0]+ct[1][1]+ct[0][1]+ct[1][0])
except:
acc = (ct[0][0]) / (ct[0][0]+ct[1][0])
return(100*round(acc,3))
# Print classification report with specified threshold
def thresholdReport(continuous_predictions, actual_results, threshold):
updated_preds = np.array([1 if pred > threshold else 0 for pred in continuous_predictions])
print(classification_report(y_pred=updated_preds, y_true=actual_results))
print(pd.crosstab(updated_preds, actual_results))
# Read data
fh = pd.read_csv('.\\data\\firsthalf.csv')
# Change 'position' to dummy variables
position_dummies = pd.get_dummies(fh.position)
fh = fh.drop('position', axis=1)
fh = pd.concat([fh, position_dummies], axis=1)
# Initial df metrics
print(fh.shape)
print(fh.made_asg.value_counts(normalize=True))
print(fh.columns)
# Set features
modelcols = [
'AVG',
'Def',
'HR',
'K%',
'SB',
'WAR',
'popular',
'won_WS_PY',
'lost_WS_PY',
'1B',
'2B',
'3B',
'C',
'DH',
'OF',
'SS'
]
Y = fh.made_asg
X = fh.loc[:,modelcols]
# Correlation matrix
sns.heatmap(X.corr(), cmap='RdBu_r')
Xtrain, Xtest, Ytrain, Ytest = train_test_split(X, Y, test_size=0.20,
stratify=Y, random_state=1000)
```
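As a sanity check on the `crosstabAccuracy` helper defined above, here is a pure-Python sketch of the same arithmetic on a made-up 2×2 confusion table (the counts are hypothetical):

```python
# Toy confusion table: table[actual][pred] plays the role of ct[col][row]
# in the pandas crosstab (columns = actual labels, rows = predictions).
table = {0: {0: 48, 1: 2},   # actual 0: 48 predicted 0, 2 predicted 1
         1: {0: 5,  1: 45}}  # actual 1: 5 predicted 0, 45 predicted 1

correct = table[0][0] + table[1][1]  # true negatives + true positives
total = sum(table[a][p] for a in table for p in table[a])
accuracy = 100 * round(correct / total, 3)
print(accuracy)  # 93.0
```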
### Logistic Regression (unscaled)
```
# Train logistic regression model
logR = LogisticRegression(penalty='l1', solver='liblinear')  # lbfgs (the default solver) does not support L1
logR.fit(Xtrain, Ytrain)
pd.crosstab(logR.predict(Xtrain), Ytrain)
# Test logistic regression model
logR_preds_binary = logR.predict(Xtest)
logR_preds_continuous = logR.predict_proba(Xtest)[:,1]
logR_ct = pd.crosstab(logR_preds_binary, Ytest)
print('Accuracy:',crosstabAccuracy(logR_ct))
print('AUC: {:.1f}'.format(100*roc_auc_score(y_score=logR_preds_continuous, y_true=Ytest)))
logR_ct
# Classification report @ 0.40 threshold
thresholdReport(continuous_predictions=logR_preds_continuous,
actual_results=Ytest,
threshold=0.40)
# Feature coefficients
print(len(X.columns), 'features:')
for num, feature in enumerate(Xtrain.columns):
print(logR.coef_[0][num], feature)
```
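The cutoff step inside `thresholdReport` is worth seeing in isolation: lowering the threshold from the usual 0.5 to the 0.40 used above turns more borderline probabilities into positive predictions (the probabilities here are made up):

```python
probs = [0.15, 0.35, 0.45, 0.55, 0.90]  # hypothetical predicted probabilities

# Same comprehension thresholdReport uses, at two different cutoffs
preds_default = [1 if p > 0.50 else 0 for p in probs]
preds_loose = [1 if p > 0.40 else 0 for p in probs]

print(preds_default)  # [0, 0, 0, 1, 1]
print(preds_loose)    # [0, 0, 1, 1, 1]
```

A lower cutoff trades precision for recall, which is why the report is printed at 0.40 rather than the default.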
### Lasso / Ridge / Elastic Net
```
# Scale all features for lasso, ridge, EN
scaler = StandardScaler()
Xtrainscaled = pd.DataFrame(scaler.fit_transform(Xtrain))
Xtrainscaled.columns = modelcols
Xtestscaled = pd.DataFrame(scaler.transform(Xtest))
Xtestscaled.columns = modelcols
# Binary columns back to 0-1
binaries = ['popular', 'NYY', 'BOS', 'CHC', 'LAD', 'won_WS_PY',
'lost_WS_PY', 'played_WS_PY', '1B', '2B', '3B', 'C',
'DH', 'OF', 'SS']
for col in binaries:
try:
Xtrainscaled[col] = Xtrainscaled[col].apply(lambda x: 1 if x>0 else 0)
Xtestscaled[col] = Xtestscaled[col].apply(lambda x: 1 if x>0 else 0)
except:
pass
# Conduct Lasso, Ridge, EN for different levels of alpha (never outperforms logistic)
print('AUCs:\n\n')
for i in np.arange(0.01, 0.50, 0.02):
alpha = i
print('Alpha = {:.2f}'.format(alpha))
lasso_model = SGDClassifier(penalty='l1', alpha=alpha, max_iter=100, loss='modified_huber')
lasso_model.fit(Xtrainscaled, Ytrain)
ridge_model = SGDClassifier(penalty='l2', alpha=alpha, max_iter=100, loss='modified_huber')
ridge_model.fit(Xtrainscaled, Ytrain)
elastic_model = SGDClassifier(penalty='elasticnet', alpha=alpha, l1_ratio=0.50, max_iter=100, loss='modified_huber')  # 'elasticnet' is required for l1_ratio to take effect
elastic_model.fit(Xtrainscaled, Ytrain)
lasso_model_preds = lasso_model.predict_proba(Xtestscaled)[:,1]
print('Lasso: {:.1f}'.format(100*roc_auc_score(y_score=lasso_model_preds, y_true=Ytest)))
ridge_model_preds = ridge_model.predict_proba(Xtestscaled)[:,1]
print('Ridge: {:.1f}'.format(100*roc_auc_score(y_score=ridge_model_preds, y_true=Ytest)))
elastic_model_preds = elastic_model.predict_proba(Xtestscaled)[:,1]
print('Elastic: {:.1f}'.format(100*roc_auc_score(y_score=elastic_model_preds, y_true=Ytest)))
print('------------')
```
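The scale-then-restore trick above (standardize every column, then snap the dummy columns back to 0/1) can be sketched without sklearn; all numbers below are arbitrary:

```python
# Standardize a numeric column to zero mean and unit variance
col = [2.0, 4.0, 6.0, 8.0]
mean = sum(col) / len(col)
std = (sum((x - mean) ** 2 for x in col) / len(col)) ** 0.5
scaled = [(x - mean) / std for x in col]

# A 0/1 dummy column comes out of StandardScaler as two arbitrary values,
# so we map anything positive back to 1, just like the loop above
binary_scaled = [-0.57, 1.73, -0.57, -0.57]  # hypothetical scaled dummies
binary_restored = [1 if x > 0 else 0 for x in binary_scaled]

print(binary_restored)  # [0, 1, 0, 0]
```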
### Random Forest
```
# Grid search for random forest
params = {
'max_depth':[5,6,7,8],
'max_features':[3,5,10,None],
'min_samples_leaf':[1,3,7,11],
'n_estimators':[301]
}
rf_for_gs = RandomForestClassifier()
grid_search_rf = GridSearchCV(estimator=rf_for_gs, param_grid=params, cv=7, n_jobs=4)
grid_search_rf.fit(Xtrain, Ytrain)
# Best random forest parameters
grid_search_rf.best_params_
# Train model
rf = RandomForestClassifier(max_depth=8, max_features=5, min_samples_leaf=3,
n_estimators=1001, oob_score=True)
rf.fit(Xtrain, Ytrain)
# Training results
pd.crosstab(rf.predict(Xtrain), Ytrain)
# Test results (does not outperform logistic)
rf_probs_binary = rf.predict(Xtest)
rf_probs_continuous = rf.predict_proba(Xtest)[:,1]
ct_rf = pd.crosstab(rf_probs_binary, Ytest)
print('Accuracy: {:.1f}'.format(crosstabAccuracy(ct_rf)))
print('AUC: {:.1f}'.format(100*roc_auc_score(y_score=rf_probs_continuous, y_true=Ytest)))
ct_rf
```
### Full model (Logistic Regression)
```
# Train logistic regression on full data set
logR_full = LogisticRegression(penalty='l1', solver='liblinear').fit(X, Y)
full_preds_lr = pd.Series(logR_full.predict_proba(X)[:,1])
fh_preds = pd.concat([fh, full_preds_lr], axis=1).rename(columns={0:'pred_lr'})
# Feature coefficients
for num, feature in enumerate(X.columns):
print(round(logR_full.coef_[0][num],2), feature)
# Reorder columns
cols = fh_preds.columns.tolist()
cols.insert(1, cols.pop(cols.index('year'))) # move "year"
cols.insert(2, cols.pop(cols.index('made_asg'))) # move "made_asg"
cols.insert(3, cols.pop(cols.index('started_asg'))) # move "started_asg"
cols = cols[-1:] + cols[:-1]
fh_preds = fh_preds[cols]
# Should have made ASG, but didn't
fh_preds[fh_preds['made_asg']==0].sort_values('pred_lr', ascending=False).head(5)
# Made ASG, but shouldn't have
fh_preds[fh_preds['made_asg']==1].sort_values('pred_lr', ascending=True).head(5)
fh_preds.sort_values('pred_lr', ascending=False).tail(5)
```
### Deploy model on 2018 first-half data
```
# Import 2018 data
fh2018_full = pd.read_csv('.\\data\\firsthalf2018.csv')
# Change 'position' to dummy variables
position_dummies2 = pd.get_dummies(fh2018_full.position)
fh2018_full = fh2018_full.drop(['position', 'Unnamed: 0'], axis=1)
fh2018_full = pd.concat([fh2018_full, position_dummies2], axis=1)
# Deploy logistic regression model on 2018 data
fh2018 = fh2018_full.loc[:,modelcols]
fh2018_full['prob_lr'] = pd.Series(logR_full.predict_proba(fh2018)[:,1])
# Lowest 2018 ASG probabilities
fh2018_full.loc[:,['player', 'prob_lr', 'AVG', 'OBP', 'SLG', 'HR', 'WAR']].sort_values('prob_lr', ascending=True).head(5)
# Highest 2018 ASG probabilities
fh2018_full.loc[:,['player', 'prob_lr', 'AVG', 'OBP', 'SLG', 'HR', 'WAR']].sort_values('prob_lr', ascending=False).head(5)
```
```
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
np.random.seed(0)
from statistics import mean
```
Since this chapter focuses on evaluating algorithms, we put off implementing the learning algorithms ourselves and use scikit-learn as the learning algorithm.
```
import sklearn
```
For training we use samples of the sine function with an additive Gaussian noise term $\varepsilon \sim N(0,\,0.1^2)$.
```
size = 100
max_degree = 11
x_data = np.random.rand(size) * np.pi * 2
var_data = np.random.normal(loc=0,scale=0.1,size=size)
sin_data = np.sin(x_data) + var_data
plt.ylim(-1.2,1.2)
plt.scatter(x_data,sin_data)
```
We use polynomial regression as the learning algorithm.
```
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import Pipeline
```
2.2.2: **MSE**: a criterion for evaluating how good an approximation is.
$$MSE=\int (y(x;D) - h(x))^2p(x)dx=E\{(y(x;D)-h(x))^2\}$$
```
def MSE(y,t):
return np.sum(np.square(y-t))/y.size
MSE(np.array([10,3,3]),np.array([1,2,3]))
```
2.2.1 (1) **Holdout method**:
A method that splits the data at hand into two parts, using one for training and the other for testing.
It requires a sufficient number of test samples.
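The 80/20 index split at the heart of the holdout code below can be sketched with the standard library alone (toy size of 10):

```python
import random

random.seed(1)
n = 10
indices = list(range(n))
random.shuffle(indices)          # random permutation, like np.random.permutation
cut = int(n * 0.8)
train_idx, test_idx = indices[:cut], indices[cut:]

print(len(train_idx), len(test_idx))  # 8 2
```

Every index lands in exactly one of the two groups, so no sample is used for both training and testing.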
```
%%time
def holdout_method(x,y,per=0.8,value_func=MSE,degree=11):
index = np.random.permutation(x.size)
index_train,index_test = np.split(index,[int(x.size*per)])
#plt.scatter(x_data[index_train],sin_data[index_train])
test_score_list = []
train_score_list = []
for i in range(1,degree):
pf = PolynomialFeatures(degree=i, include_bias=False)
lr = LinearRegression()
pl = Pipeline([("PF", pf), ("LR", lr)])
pl.fit(x[index_train].reshape(-1,1), y[index_train])
pred_y_test = pl.predict(x[index_test].reshape(-1,1))
pred_y_train = pl.predict(x[index_train].reshape(-1,1))
score_train = value_func(pred_y_train,y[index_train])
score_test = value_func(pred_y_test,y[index_test])
train_score_list.append(score_train)
test_score_list.append(score_test)
return train_score_list,test_score_list
hold_train_score_list,hold_test_score_list = holdout_method(x_data,sin_data,degree=max_degree)
plt.plot(np.array(range(1,max_degree)),np.array(hold_train_score_list),color='b')
plt.plot(np.array(range(1,max_degree)),np.array(hold_test_score_list),color='r')
```
(2) **Cross-validation**: split each class of the data at hand into n groups, train on n−1 of the groups, test on the remaining group, and take the average error rate over the n folds as the performance estimate.
```
def cross_validation(x,y,value_func=MSE,split_num=5,degree=1):
assert x.size % split_num==0,"You must use divisible number"
n = x.size / split_num
train_scores =[]
test_scores =[]
for i in range(split_num):
indices = [int(i*n),int(i*n+n)]
train_x_1,test_x,train_x_2=np.split(x,indices)
train_y_1,test_y,train_y_2=np.split(y,indices)
train_x = np.concatenate([train_x_1,train_x_2])
train_y = np.concatenate([train_y_1,train_y_2])
pf = PolynomialFeatures(degree=degree, include_bias=False)
lr = LinearRegression()
pl = Pipeline([("PF", pf), ("LR", lr)])
pl.fit(train_x.reshape(-1,1), train_y)
pred_y_test = pl.predict(np.array(test_x).reshape(-1,1))
pred_y_train = pl.predict(np.array(train_x).reshape(-1,1))
score_train = value_func(pred_y_train,train_y)
#print(score_train)
score_test = value_func(pred_y_test,test_y)
#print(len(test_y))
train_scores.append(score_train)
test_scores.append(score_test)
return mean(train_scores),mean(test_scores)
cross_test_score_list = []
cross_train_score_list = []
for i in range(1,max_degree):
tra,tes = cross_validation(x_data,sin_data,degree=i)
cross_train_score_list.append(tra)
cross_test_score_list.append(tes)
plt.plot(np.array(range(1,max_degree)),np.array(cross_train_score_list),color='b')
plt.plot(np.array(range(1,max_degree)),np.array(cross_test_score_list),color='r')
```
(3) **Leave-one-out method**: a special case of cross-validation in which the number of groups equals the number of data points.
```
def leave_one_out(x,y,value_func=MSE,size=size,degree=1):
return cross_validation(x,y,value_func,split_num=size,degree=degree)
leave_test_score_list = []
leave_train_score_list = []
for i in range(1,max_degree):
tra,tes = leave_one_out(x_data,sin_data,degree=i)
leave_train_score_list.append(tra)
leave_test_score_list.append(tes)
plt.plot(np.array(range(1,max_degree)),np.array(leave_train_score_list),color='b')
plt.plot(np.array(range(1,max_degree)),np.array(leave_test_score_list),color='r')
plt.plot(np.array(range(1,max_degree)),np.array(hold_train_score_list),color='y')
plt.plot(np.array(range(1,max_degree)),np.array(hold_test_score_list),color='m')
plt.plot(np.array(range(1,max_degree)),np.array(cross_train_score_list),color='k')
plt.plot(np.array(range(1,max_degree)),np.array(cross_test_score_list),color='c')
plt.plot(np.array(range(1,max_degree)),np.array(leave_train_score_list),color='b')
plt.plot(np.array(range(1,max_degree)),np.array(leave_test_score_list),color='r')
```
(4) **Bootstrap method**: draw N samples with replacement to form a bootstrap sample $N^*$, and estimate the bias as
$\mathrm{bias}=\varepsilon(N^*,N^*)-\varepsilon(N^*,N)$,
where $\varepsilon(A,B)$ denotes the error of a model trained on $A$ and evaluated on $B$.
Repeating this over several bootstrap samples and averaging gives the estimate $\overline{\mathrm{bias}}$,
and the corrected error estimate is
$\varepsilon = \varepsilon(N,N)-\overline{\mathrm{bias}}$.
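As a numeric illustration of the correction (all error values below are made up): with an apparent error $\varepsilon(N,N)=0.10$ and two bootstrap trials, the estimate moves up because the error measured on a bootstrap sample's own training data is optimistically low:

```python
# Hypothetical (eps(N*,N*), eps(N*,N)) pairs for two bootstrap trials
trials = [(0.08, 0.12), (0.07, 0.13)]
apparent_error = 0.10  # eps(N,N): trained and evaluated on the full data

biases = [boot - full for boot, full in trials]  # eps(N*,N*) - eps(N*,N)
mean_bias = sum(biases) / len(biases)            # -0.05
corrected = apparent_error - mean_bias           # 0.10 - (-0.05) = 0.15

print(round(corrected, 2))  # 0.15
```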
```
def bootstrap(x,y,value_func=MSE,trial=50,degree=1):
biases=[]
for i in range(trial):
boot_ind = np.random.choice(range(x.size),size=x.size,replace=True)
pf = PolynomialFeatures(degree=degree, include_bias=False)
lr = LinearRegression()
pl = Pipeline([("PF", pf), ("LR", lr)])
pl.fit(x[boot_ind].reshape(-1,1), y[boot_ind])
pred_y_boot = pl.predict(x[boot_ind].reshape(-1,1))
pred_y_base = pl.predict(x.reshape(-1,1))
score_boot = value_func(pred_y_boot,y[boot_ind])
#print(score_train)
score_base = value_func(pred_y_base,y)
bias = score_base - score_boot
#print(bias)
biases.append(bias)
pf = PolynomialFeatures(degree=degree, include_bias=False)
lr = LinearRegression()
pl = Pipeline([("PF", pf), ("LR", lr)])
pl.fit(x.reshape(-1,1), y)
pred_y_base = pl.predict(x.reshape(-1,1))
score_base = value_func(pred_y_base,y)
return score_base + mean(biases)
boot_score_list = []
for i in range(1,max_degree):
boot_score = bootstrap(x_data,sin_data,degree=i)
boot_score_list.append(boot_score)
plt.plot(np.array(range(1,max_degree)),np.array(boot_score_list),color='b')
```
___
<a href='https://www.udemy.com/user/joseportilla/'><img src='../Pierian_Data_Logo.png'/></a>
___
<center><em>Content Copyright by Pierian Data</em></center>
# Warmup Project Exercise
## Simple War Game
Before we launch into the OOP Milestone 2 Project, let's walk through using OOP for a more robust and complex application, such as a game. We will use Python OOP to simulate a simplified version of the game War. Two players will each start off with half the deck, then they each remove a card, compare which card has the highest value, and the player with the higher card wins both cards. In the event of a tie, the players go to "war", each drawing additional cards until one of them wins the whole pile.
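Before building the classes, the core of a single round can be sketched with plain integers standing in for cards (suits don't matter for the comparison):

```python
import random

random.seed(0)
deck = list(range(2, 15)) * 4   # 52 toy cards valued 2..14 (Ace high)
random.shuffle(deck)
p1, p2 = deck[:26], deck[26:]   # deal half the deck to each player

c1, c2 = p1.pop(0), p2.pop(0)   # each player flips their top card
if c1 > c2:
    p1 += [c1, c2]              # higher card wins both
elif c2 > c1:
    p2 += [c1, c2]
else:
    p1.append(c1)               # simplified tie handling; the full game
    p2.append(c2)               # plays a proper "war" instead

print(len(p1) + len(p2))  # 52 -- every card is still in play
```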
## Single Card Class
### Creating a Card Class with outside variables
Here we will use some outside variables that we know don't change regardless of the situation, such as a deck of cards. Regardless of what round, match, or game we're playing, we'll still need the same deck of cards.
```
# We'll use this later
import random
suits = ('Hearts', 'Diamonds', 'Spades', 'Clubs')
ranks = ('Two', 'Three', 'Four', 'Five', 'Six', 'Seven', 'Eight', 'Nine', 'Ten', 'Jack', 'Queen', 'King', 'Ace')
values = {'Two':2, 'Three':3, 'Four':4, 'Five':5, 'Six':6, 'Seven':7, 'Eight':8,
'Nine':9, 'Ten':10, 'Jack':11, 'Queen':12, 'King':13, 'Ace':14}
class Card:
def __init__(self,suit,rank):
self.suit = suit
self.rank = rank
self.value = values[rank]
def __str__(self):
return self.rank + ' of ' + self.suit
```
Create an example card
```
suits[0]
ranks[0]
two_hearts = Card(suits[0],ranks[0])
two_hearts
print(two_hearts)
two_hearts.rank
two_hearts.value
values[two_hearts.rank]
```
## Deck Class
### Using a class within another class
We just created a single card, but how can we create an entire Deck of cards? Let's explore doing this with a class that utilizes the Card class.
A Deck will be made up of multiple Cards, which means we will actually use the Card class within the \_\_init__ of the Deck class.
```
class Deck:
def __init__(self):
# Note this only happens once upon creation of a new Deck
self.all_cards = []
for suit in suits:
for rank in ranks:
# This assumes the Card class has already been defined!
self.all_cards.append(Card(suit,rank))
def shuffle(self):
# Note this doesn't return anything
random.shuffle(self.all_cards)
def deal_one(self):
# Note we remove one card from the list of all_cards
return self.all_cards.pop()
```
### Create a Deck
```
mydeck = Deck()
len(mydeck.all_cards)
mydeck.all_cards[0]
print(mydeck.all_cards[0])
mydeck.shuffle()
print(mydeck.all_cards[0])
my_card = mydeck.deal_one()
print(my_card)
```
# Player Class
Let's create a Player class. A player should be able to hold instances of Cards, and to remove and add cards from their hand. We want the Player class to be flexible enough to add one card or many cards, so we'll use a simple if check to keep it all in the same method.
We'll keep this all in mind as we create the methods for the Player class.
### Player Class
```
class Player:
def __init__(self,name):
self.name = name
# A new player has no cards
self.all_cards = []
def remove_one(self):
# Note we remove one card from the list of all_cards
# We state 0 to remove from the "top" of the deck
# We'll imagine index -1 as the bottom of the deck
return self.all_cards.pop(0)
def add_cards(self,new_cards):
if type(new_cards) == type([]):
self.all_cards.extend(new_cards)
else:
self.all_cards.append(new_cards)
def __str__(self):
return f'Player {self.name} has {len(self.all_cards)} cards.'
jose = Player("Jose")
jose
print(jose)
two_hearts
jose.add_cards(two_hearts)
print(jose)
jose.add_cards([two_hearts,two_hearts,two_hearts])
print(jose)
```
## War Game Logic
```
player_one = Player("One")
player_two = Player("Two")
```
## Setup New Game
```
new_deck = Deck()
new_deck.shuffle()
```
### Split the Deck between players
```
len(new_deck.all_cards)/2
for x in range(26):
player_one.add_cards(new_deck.deal_one())
player_two.add_cards(new_deck.deal_one())
len(new_deck.all_cards)
len(player_one.all_cards)
len(player_two.all_cards)
```
## Play the Game
```
import pdb
game_on = True
round_num = 0
while game_on:
round_num += 1
print(f"Round {round_num}")
# Check to see if a player is out of cards:
if len(player_one.all_cards) == 0:
print("Player One out of cards! Game Over")
print("Player Two Wins!")
game_on = False
break
if len(player_two.all_cards) == 0:
print("Player Two out of cards! Game Over")
print("Player One Wins!")
game_on = False
break
# Otherwise, the game is still on!
# Start a new round and reset current cards "on the table"
player_one_cards = []
player_one_cards.append(player_one.remove_one())
player_two_cards = []
player_two_cards.append(player_two.remove_one())
at_war = True
while at_war:
if player_one_cards[-1].value > player_two_cards[-1].value:
# Player One gets the cards
player_one.add_cards(player_one_cards)
player_one.add_cards(player_two_cards)
# No Longer at "war" , time for next round
at_war = False
# Player Two Has higher Card
elif player_one_cards[-1].value < player_two_cards[-1].value:
# Player Two gets the cards
player_two.add_cards(player_one_cards)
player_two.add_cards(player_two_cards)
# No Longer at "war" , time for next round
at_war = False
else:
print('WAR!')
# This occurs when the cards are equal.
# We'll grab another card each and continue the current war.
# First check to see if player has enough cards
# Check to see if a player is out of cards:
if len(player_one.all_cards) < 5:
print("Player One unable to play war! Game Over at War")
print("Player Two Wins! Player One Loses!")
game_on = False
break
elif len(player_two.all_cards) < 5:
print("Player Two unable to play war! Game Over at War")
print("Player One Wins! Player Two Loses!")
game_on = False
break
# Otherwise, we're still at war, so we'll add the next cards
else:
for num in range(5):
player_one_cards.append(player_one.remove_one())
player_two_cards.append(player_two.remove_one())
```
## Game Setup in One Cell
```
player_one = Player("One")
player_two = Player("Two")
new_deck = Deck()
new_deck.shuffle()
for x in range(26):
player_one.add_cards(new_deck.deal_one())
player_two.add_cards(new_deck.deal_one())
game_on = True
round_num = 0
while game_on:
round_num += 1
print(f"Round {round_num}")
# Check to see if a player is out of cards:
if len(player_one.all_cards) == 0:
print("Player One out of cards! Game Over")
print("Player Two Wins!")
game_on = False
break
if len(player_two.all_cards) == 0:
print("Player Two out of cards! Game Over")
print("Player One Wins!")
game_on = False
break
# Otherwise, the game is still on!
# Start a new round and reset current cards "on the table"
player_one_cards = []
player_one_cards.append(player_one.remove_one())
player_two_cards = []
player_two_cards.append(player_two.remove_one())
at_war = True
while at_war:
if player_one_cards[-1].value > player_two_cards[-1].value:
# Player One gets the cards
player_one.add_cards(player_one_cards)
player_one.add_cards(player_two_cards)
# No Longer at "war" , time for next round
at_war = False
# Player Two Has higher Card
elif player_one_cards[-1].value < player_two_cards[-1].value:
# Player Two gets the cards
player_two.add_cards(player_one_cards)
player_two.add_cards(player_two_cards)
# No Longer at "war" , time for next round
at_war = False
else:
print('WAR!')
# This occurs when the cards are equal.
# We'll grab another card each and continue the current war.
# First check to see if player has enough cards
# Check to see if a player is out of cards:
if len(player_one.all_cards) < 5:
print("Player One unable to play war! Game Over at War")
print("Player Two Wins! Player One Loses!")
game_on = False
break
elif len(player_two.all_cards) < 5:
print("Player Two unable to play war! Game Over at War")
print("Player One Wins! Player Two Loses!")
game_on = False
break
# Otherwise, we're still at war, so we'll add the next cards
else:
for num in range(5):
player_one_cards.append(player_one.remove_one())
player_two_cards.append(player_two.remove_one())
len(player_one.all_cards)
len(player_two.all_cards)
print(player_one_cards[-1])
print(player_two_cards[-1])
```
## Great Work!
Other links that may interest you:
* https://www.reddit.com/r/learnpython/comments/7ay83p/war_card_game/
* https://codereview.stackexchange.com/questions/131174/war-card-game-using-classes
* https://gist.github.com/damianesteban/6896120
* https://lethain.com/war-card-game-in-python/
* https://hectorpefo.github.io/2017-09-13-Card-Wars/
* https://www.wimpyprogrammer.com/the-statistics-of-war-the-card-game
```
from IPython.display import HTML
HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide();
} else {
$('div.input').show();
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
The raw code for this IPython notebook is by default hidden for easier reading.
To toggle on/off the raw code, click <a href="javascript:code_toggle()">here</a>.''')
```

<h1 align='center'>Stats Can Notebook Template: Quick Dataset Exploration</h1>
<h4 align='center'>Laura Gutierrez Funderburk $\mid$ Stats Can Notebook</h4>
<h2 align='center'>Abstract</h2>
This notebook may be used to quickly explore most data sets from Stats Can. To explore the contents of a dataset, simply visit https://www150.statcan.gc.ca/n1/en/type/data?MM=1 and select a "Table".
To select a table, copy the string next to Table, under the data set name. Here is an example.

In this case, the data set's table is 10-10-0122-01.
Simply copy and paste that table in the box below, and press the Download Dataset button.
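Before pressing the button, the identifier format can be sanity-checked. The helper below is hypothetical (it is not part of the StatsCan helper scripts) and mirrors the validation the notebook performs after the download button: a dashed id must have four parts, and an undashed one must be the 10-digit form:

```python
def looks_like_table_id(product_id):
    """Rough format check for a StatsCan table id such as '10-10-0122-01'."""
    if "-" in product_id:
        return len(product_id.split("-")) == 4
    return len(product_id) == 10 and product_id.isdigit()

print(looks_like_table_id("10-10-0122-01"))  # True
print(looks_like_table_id("1010012201"))     # True
print(looks_like_table_id("123"))            # False
```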
```
%run -i ./StatsCan/helpers.py
%run -i ./StatsCan/scwds.py
%run -i ./StatsCan/sc.py
from ipywidgets import widgets, VBox, HBox, Button
from ipywidgets import Button, Layout, widgets
from IPython.display import display, Javascript, Markdown, HTML
import datetime as dt
import qgrid as q
import pandas as pd
import json
import datetime
import qgrid
from tqdm import tnrange, tqdm_notebook
from time import sleep
import sys
grid_features = { 'fullWidthRows': True,
'syncColumnCellResize': True,
'forceFitColumns': True,
'enableColumnReorder': True,
'enableTextSelectionOnCells': True,
'editable': False,
'filterable': True,
'sortable': False,
'highlightSelectedRow': True}
def rerun_cell( b ):
display(Javascript('IPython.notebook.execute_cell_range(IPython.notebook.get_selected_index()+1,IPython.notebook.get_selected_index()+3)'))
def run_4cell( b ):
display(Javascript('IPython.notebook.execute_cell_range(IPython.notebook.get_selected_index()+1,IPython.notebook.get_selected_index()+5)'))
style = {'description_width': 'initial'}
```
<h2 align='center'>Downloading Stats Can Data</h2>
To download a full dataset, enter a product ID and press the Download Dataset button.
```
prod_ID = widgets.Text(
value="10-10-0122-01",
placeholder='ProductID value',
description='productID value',
disabled=False,
style=style
)
DS_button = widgets.Button(
button_style='success',
description="Download Dataset",
layout=Layout(width='15%', height='30px'),
style=style
)
DS_button.on_click( run_4cell )
display(prod_ID)
display(DS_button)
# # Download data
productId = prod_ID.value
if "-" not in productId:
if len(productId)!=10:
print("WARNING: THIS IS LIKELY A NUMBER NOT ASSOCIATED WITH A DATA TABLE. VERIFY AND TRY AGAIN")
sys.exit(1)
else:
if len(productId.split("-")) !=4:
print("WARNING: THIS IS LIKELY A NUMBER NOT ASSOCIATED WITH A DATA TABLE. VERIFY AND TRY AGAIN")
sys.exit(1)
download_tables(str(productId))
def download_and_store_json(productId):
with open(str(productId) +'.json') as f:
data = json.load(f)
f.close()
return data
import zipfile
def read_data_compute_df(productID):
zf = zipfile.ZipFile('./' + str(productID) + '-eng.zip')
df = pd.read_csv(zf.open(str(productID)+'.csv'))
return df
# Example
#data = download_and_store_json(productId)
# Example, we will select the study we downloaded previously
df_fullDATA = zip_table_to_dataframe(productId)
cols = list(df_fullDATA.loc[:,'REF_DATE':'UOM'])+ ['SCALAR_FACTOR'] + ['VALUE']
df_less = df_fullDATA[cols]
df_less2 = df_less.drop(["DGUID"], axis=1)
df_less2.head()
iteration_nr = df_less2.shape[1]
categories = []
for i in range(iteration_nr-1):
categories.append(df_less2.iloc[:,i].unique())
all_the_widgets = []
for i in range(len(categories)):
if i==0:
a_category = widgets.Dropdown(
value = categories[i][0],
options = categories[i],
description ='Start Date:',
style = style,
disabled=False
)
b_category = widgets.Dropdown(
value = categories[i][-1],
options = categories[i],
description ='End Date:',
style = style,
disabled=False
)
all_the_widgets.append(a_category)
all_the_widgets.append(b_category)
elif i==1:
a_category = widgets.Dropdown(
value = categories[i][0],
options = categories[i],
description ='Location:',
style = style,
disabled=False
)
all_the_widgets.append(a_category)
elif i==len(categories)-1:
a_category = widgets.Dropdown(
value = categories[i][0],
options = categories[i],
description ='Scalar factor:',
style = style,
disabled=False
)
all_the_widgets.append(a_category)
elif i==len(categories)-2:
a_category = widgets.Dropdown(
value = categories[i][0],
options = categories[i],
description ='Units of Measure :',
style = style,
disabled=False
)
all_the_widgets.append(a_category)
else:
a_category = widgets.Dropdown(
value = categories[i][0],
options = categories[i],
description ='Subcategory ' + str(i),
style = style,
disabled=False
)
all_the_widgets.append(a_category)
```
<h2 align='center'>Select Data Subsets: One-Dimensional Plotting</h2>

Use the menu below to select a category within the full dataset you are interested in exploring.
Choose a start and end date to plot results.
If there is data available, it will appear under the headers.
Be careful to select dataframes with actual data in them!
Use the Preview Dataset button to help you preview the data.
```
CD_button = widgets.Button(
button_style='success',
description="Preview Dataset",
layout=Layout(width='15%', height='30px'),
style=style
)
CD_button.on_click( run_4cell )
tab3 = VBox(children=[HBox(children=all_the_widgets[0:3]),
HBox(children=all_the_widgets[3:5]),
HBox(children=all_the_widgets[5:len(all_the_widgets)]),
CD_button])
tab = widgets.Tab(children=[tab3])
tab.set_title(0, 'Load Data Subset')
display(tab)
df_sub = df_less2[(df_less2["REF_DATE"]>=all_the_widgets[0].value) &
(df_less2["REF_DATE"]<=all_the_widgets[1].value) &
(df_less2["GEO"]==all_the_widgets[2].value) &
(df_less2["UOM"]==all_the_widgets[-2].value) &
(df_less2["SCALAR_FACTOR"]==all_the_widgets[-1].value) ]
df_sub.head()
# TO HANDLE THE REST OF THE COLUMNS, SIMPLY SUBSTITUTE VALUES
col_name = df_sub.columns[2]
# weather_data = pd.read_csv("DATA.csv",sep=',')
col_name
df_sub_final = df_sub[(df_sub[col_name]==all_the_widgets[3].value)]
import matplotlib.pyplot as plt
%matplotlib inline
fig1 = plt.figure(facecolor='w',figsize=(18,18))
plt.subplot(3, 3, 1)
plt.axis('off');
plt.subplot(3, 3, 2)
plt.plot(df_sub_final["REF_DATE"],df_sub_final["VALUE"],'b--',label='Value')
#plt.plot(df_20_USA["REF_DATE"],df_20_USA["VALUE"],'r--',label='U.S. dollar, daily average')
plt.xlabel('Year-Month', fontsize=20)
plt.ylabel('Value',fontsize=20)
plt.title(str(all_the_widgets[3].value) + ", "+ str(all_the_widgets[2].value),fontsize=20)
plt.xticks(rotation=90)
plt.grid(True)
plt.subplot(3, 3, 3);
plt.axis('off');
```
<h2 align='center'>References</h2>
Statistics Canada.
https://www150.statcan.gc.ca/n1/en/type/data?MM=1
# 
# H2O Tutorial: EEG Eye State Classification
Author: Erin LeDell
Contact: erin@h2o.ai
This tutorial steps through a quick introduction to H2O's R API. The goal of this tutorial is to introduce through a complete example H2O's capabilities from R.
Most of the functionality for R's `data.frame` is exactly the same syntax for an `H2OFrame`, so if you are comfortable with R, data frame manipulation will come naturally to you in H2O. The modeling syntax in the H2O R API may also remind you of other machine learning packages in R.
References: [H2O R API documentation](http://h2o-release.s3.amazonaws.com/h2o/latest_stable_Rdoc.html), the [H2O Documentation landing page](http://www.h2o.ai/docs/) and [H2O general documentation](http://h2o-release.s3.amazonaws.com/h2o/latest_stable_doc.html).
## Install H2O in R
### Prerequisites
This tutorial assumes you have R installed. The `h2o` R package has a few dependencies which can be installed using CRAN. The packages that are required (which also have their own dependencies) can be installed in R as follows:
```r
pkgs <- c("methods","statmod","stats","graphics","RCurl","jsonlite","tools","utils")
for (pkg in pkgs) {
if (! (pkg %in% rownames(installed.packages()))) { install.packages(pkg) }
}
```
### Install h2o
Once the dependencies are installed, you can install H2O. We will use the latest stable version of the `h2o` R package, which at the time of writing is H2O v3.8.0.4 (aka "Tukey-4"). The latest stable version can be installed using the commands on the [H2O R Installation](http://www.h2o.ai/download/h2o/r) page.
## Start up an H2O cluster
After the R package is installed, we can start up an H2O cluster. In a R terminal, we load the `h2o` package and start up an H2O cluster as follows:
```
library(h2o)
# Start an H2O Cluster on your local machine
h2o.init(nthreads = -1) #nthreads = -1 uses all cores on your machine
```
If you already have an H2O cluster running that you'd like to connect to (for example, in a multi-node Hadoop environment), then you can specify the IP and port of that cluster as follows:
```
# This will not actually do anything since it's a fake IP address
# h2o.init(ip="123.45.67.89", port=54321)
```
## Download EEG Data
The following code downloads a copy of the [EEG Eye State](http://archive.ics.uci.edu/ml/datasets/EEG+Eye+State#) dataset. All data is from one continuous EEG measurement with the [Emotiv EEG Neuroheadset](https://emotiv.com/epoc.php). The duration of the measurement was 117 seconds. The eye state was detected via a camera during the EEG measurement and added later manually to the file after analysing the video frames. '1' indicates the eye-closed and '0' the eye-open state. All values are in chronological order with the first measured value at the top of the data.

We can import the data directly into H2O using the `h2o.importFile` function in the R API. The import path can be a URL, a local path, a path to an HDFS file, or a file on Amazon S3.
```
#csv_url <- "http://www.stat.berkeley.edu/~ledell/data/eeg_eyestate_splits.csv"
csv_url <- "https://h2o-public-test-data.s3.amazonaws.com/eeg_eyestate_splits.csv"
data <- h2o.importFile(csv_url)
```
## Explore Data
Once we have loaded the data, let's take a quick look. First the dimension of the frame:
```
dim(data)
```
Now let's take a look at the top of the frame:
```
head(data)
```
The first 14 columns are numeric values that represent EEG measurements from the headset. The "eyeDetection" column is the response. There is an additional column called "split" that was added (by me) in order to specify partitions of the data (so we can easily benchmark against other tools outside of H2O using the same splits). I randomly divided the dataset into three partitions: train (60%), valid (20%) and test (20%) and marked which split each row belongs to in the "split" column.
Let's take a look at the column names.
```
names(data)
```
To select a subset of the columns to look at, typical R data.frame indexing applies:
```
columns <- c('AF3', 'eyeDetection', 'split')
head(data[columns])
```
Now let's select a single column, for example -- the response column, and look at the data more closely:
```
y <- 'eyeDetection'
data[y]
```
It looks like a binary response, but let's validate that assumption:
```
h2o.unique(data[y])
```
If you don't specify the column types when you import the file, H2O makes a guess at what your column types are. If there are 0's and 1's in a column, H2O will automatically parse that as numeric by default.
Therefore, we should convert the response column to a more efficient "factor" representation (called "enum" in Java) -- in this case it is a categorical variable with two levels, 0 and 1. If the only column in my data that is categorical is the response, I typically don't bother specifying the column type during the parse, and instead use this one-liner to convert it afterwards:
```
data[y] <- as.factor(data[y])
```
Now we can check that there are two levels in our response column:
```
h2o.nlevels(data[y])
```
We can query the categorical "levels" as well ('0' and '1' stand for "eye open" and "eye closed") to see what they are:
```
h2o.levels(data[y])
```
We may want to check if there are any missing values, so let's look for NAs in our dataset. For all the supervised H2O algorithms, H2O will handle missing values automatically, so it's not a problem if we are missing certain feature values. However, it is always a good idea to check to make sure that you are not missing any of the training labels.
To figure out which, if any, values are missing, we can use the `h2o.nacnt` (NA count) method on any H2OFrame (or column). The columns in an H2O Frame are also H2O Frames themselves, so all the methods that apply to an H2OFrame also apply to a single column.
```
h2o.nacnt(data[y])
```
Great, no missing labels. :-)
Out of curiosity, let's see if there is any missing data in any of the columns of this frame:
```
h2o.nacnt(data)
```
Each column returns a zero, so there are no missing values in any of the columns.
The next thing I may wonder about in a binary classification problem is the distribution of the response in the training data. Is one of the two outcomes under-represented in the training set? Many real datasets have what's called an "imbalance" problem, where one of the classes has far fewer training examples than the other class. Let's take a look at the distribution:
```
h2o.table(data[y])
```
Ok, the data is not exactly evenly distributed between the two classes -- there are more 0's than 1's in the dataset. However, this level of imbalance shouldn't be much of an issue for the machine learning algos. (We will revisit this later in the modeling section below).
Let's calculate the percentage that each class represents:
```
n <- nrow(data) # Total number of training samples
h2o.table(data[y])['Count']/n
```
### Split H2O Frame into a train and test set
So far we have explored the original dataset (all rows). For the machine learning portion of this tutorial, we will break the dataset into three parts: a training set, validation set and a test set.
If you want H2O to do the splitting for you, you can use the `h2o.splitFrame` function. However, we have explicit splits that we want (for reproducibility reasons), so we can just subset the Frame to get the partitions we want.
Subset the `data` H2O Frame on the "split" column:
```
train <- data[data['split']=="train",]
nrow(train)
valid <- data[data['split']=="valid",]
nrow(valid)
test <- data[data['split']=="test",]
nrow(test)
```
## Machine Learning in H2O
We will do a quick demo of the H2O software using a Gradient Boosting Machine (GBM). The goal of this problem is to train a model to predict eye state (open vs closed) from EEG data.
### Train and Test a GBM model
In the steps above, we have already created the training set and validation set, so the next step is to specify the predictor set and response variable.
#### Specify the predictor set and response
As with any machine learning algorithm, we need to specify the response and predictor columns in the training set.
The `x` argument should be a vector of predictor names in the training frame, and `y` specifies the response column. We have already set `y <- "eyeDetection"` above, but we still need to specify `x`.
```
names(train)
x <- setdiff(names(train), c("eyeDetection", "split"))  # Remove the response and split columns
x
```
Now that we have specified `x` and `y`, we can train the GBM model using a few non-default model parameters. Since we are predicting a binary response, we set `distribution = "bernoulli"`.
```
model <- h2o.gbm(x = x, y = y,
training_frame = train,
validation_frame = valid,
distribution = "bernoulli",
ntrees = 100,
max_depth = 4,
learn_rate = 0.1)
```
### Inspect Model
The type of results shown when you print a model is determined by the following:
- Model class of the estimator (e.g. GBM, RF, GLM, DL)
- The type of machine learning problem (e.g. binary classification, multiclass classification, regression)
- The data you specify (e.g. `training_frame` only, `training_frame` and `validation_frame`, or `training_frame` and `nfolds`)
Below, we see a GBM Model Summary, as well as training and validation metrics since we supplied a `validation_frame`. Since this is a binary classification task, we are shown the relevant performance metrics, which include: MSE, R^2, LogLoss, AUC and Gini. Also, we are shown a Confusion Matrix, where the threshold for classification is chosen automatically (by H2O) as the threshold which maximizes the F1 score.
The scoring history is also printed, which shows the performance metrics over some increment such as "number of trees" in the case of GBM and RF.
Lastly, for tree-based methods (GBM and RF), we also print variable importance.
```
print(model)
```
### Model Performance on a Test Set
Once a model has been trained, you can also use it to make predictions on a test set. In the case above, we just ran the model once, so our validation set (passed as `validation_frame`) could also have served as a "test set." We technically have already created test set predictions and evaluated test set performance.
However, when performing model selection over a variety of model parameters, it is common for users to train a variety of models (using different parameters) using the training set, `train`, and a validation set, `valid`. Once the user selects the best model (based on validation set performance), the true test of model performance is performed by making a final set of predictions on the held-out (never been used before) test set, `test`.
You can use the `h2o.performance` function to generate performance metrics on a new dataset. The results are stored in an object of class `"H2OBinomialMetrics"`.
```
perf <- h2o.performance(model = model, newdata = test)
class(perf)
```
Individual model performance metrics can be extracted using methods like `h2o.r2`, `h2o.auc` and `h2o.mse`. In the case of binary classification, we may be most interested in evaluating test set Area Under the ROC Curve (AUC).
```
h2o.r2(perf)
h2o.auc(perf)
h2o.mse(perf)
```
### Cross-validated Performance
To perform k-fold cross-validation, you use the same code as above, but you specify `nfolds` as an integer greater than 1, or add a "fold_column" to your H2O Frame which indicates a fold ID for each row.
Unless you have a specific reason to manually assign the observations to folds, you will find it easiest to simply use the `nfolds` argument.
When performing cross-validation, you can still pass a `validation_frame`. Below we keep the same training and validation frames as before and simply add the `nfolds` argument.
```
cvmodel <- h2o.gbm(x = x, y = y,
training_frame = train,
validation_frame = valid,
distribution = "bernoulli",
ntrees = 100,
max_depth = 4,
learn_rate = 0.1,
nfolds = 5)
```
This time around, we will simply pull the training and cross-validation metrics out of the model. To do so, you use the `h2o.auc` method again, and you can specify `train` or `xval` as `TRUE` to get the correct metric.
```
print(h2o.auc(cvmodel, train = TRUE))
print(h2o.auc(cvmodel, xval = TRUE))
```
### Grid Search
One way of evaluating models with different parameters is to perform a grid search over a set of parameter values. For example, in GBM, here are three model parameters that may be useful to search over:
- `ntrees`: Number of trees
- `max_depth`: Maximum depth of a tree
- `learn_rate`: Learning rate in the GBM
We will define a grid as follows:
```
ntrees_opt <- c(5,50,100)
max_depth_opt <- c(2,3,5)
learn_rate_opt <- c(0.1,0.2)
hyper_params = list('ntrees' = ntrees_opt,
'max_depth' = max_depth_opt,
'learn_rate' = learn_rate_opt)
```
The `h2o.grid` function can be used to train an `"H2OGrid"` object for any of the H2O algorithms (specified by the `algorithm` argument).
```
gs <- h2o.grid(algorithm = "gbm",
grid_id = "eeg_demo_gbm_grid",
hyper_params = hyper_params,
x = x, y = y,
training_frame = train,
validation_frame = valid)
```
### Compare Models
```
print(gs)
```
By default, grids of models will return the grid results sorted by (increasing) logloss on the validation set. However, if we are interested in sorting on another model performance metric, we can do that using the `h2o.getGrid` function as follows:
```
# print out the auc for all of the models
auc_table <- h2o.getGrid(grid_id = "eeg_demo_gbm_grid", sort_by = "auc", decreasing = TRUE)
print(auc_table)
```
The "best" model in terms of validation set AUC is listed first in `auc_table`.
```
best_model <- h2o.getModel(auc_table@model_ids[[1]])
h2o.auc(best_model, valid = TRUE) #Validation AUC for best model
```
The last thing we may want to do is generate predictions on the test set using the "best" model, and evaluate the test set AUC.
```
best_perf <- h2o.performance(model = best_model, newdata = test)
h2o.auc(best_perf)
```
The test set AUC is approximately 0.97. Not bad!!
<a href="https://colab.research.google.com/github/issdl/from-data-to-solution-2021/blob/main/4_metrics.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Metrics
## Imports
```
import numpy as np
np.random.seed(2021)
import random
random.seed(2021)
from IPython.display import Markdown, display
def printmd(string):
display(Markdown(string))
```
## Create Toy Datasets
```
def pc(db): # print count
print("Database contains {} negative and {} positive samples".format(db.count(0), db.count(1)))
length = 100
# Balanced
db_balanced = [0] * (length//2) + [1] * (length//2)
pc(db_balanced)
# More positives
amount = random.uniform(0.9, 0.99)
n_pos = int(length * amount)
db_positives = [1] * n_pos + [0] * (length - n_pos)
pc(db_positives)
# More negatives
amount = random.uniform(0.9, 0.99)
n_neg = int(length * amount)
db_negatives = [0] * n_neg + [1] * (length - n_neg)
pc(db_negatives)
```
## Dummy model
```
top_no = 95
def dummy_model(data, threshold):
    # Return a copy of `data` where predictions for indices in
    # [threshold, top_no] are flipped (wrong); all others are correct.
    correct = 0
    output = []
    for i, d in enumerate(data):
        if i < threshold or i > top_no:
            output.append(d)  # correct prediction
            correct += 1
        else:
            output.append(abs(1 - d))  # flipped label -> wrong prediction
    return output
```
### *Balanced dataset*
```
balanced_threshold = 80
out_balanced = dummy_model(db_balanced, balanced_threshold)
print('Labels:')
printmd('{}**{}**{}'.format(db_balanced[:balanced_threshold], db_balanced[balanced_threshold:top_no], db_balanced[top_no+1:],))
print('Predictions:')
printmd('{}**{}**{}'.format(out_balanced[:balanced_threshold], out_balanced[balanced_threshold:top_no], out_balanced[top_no+1:],))
```
### *More positives*
```
positives_threshold = 80
out_positives = dummy_model(db_positives, positives_threshold)
print('Labels:')
printmd('{}**{}**{}'.format(db_positives[:positives_threshold], db_positives[positives_threshold:top_no], db_positives[top_no+1:]))
print('Predictions:')
printmd('{}**{}**{}'.format(out_positives[:positives_threshold], out_positives[positives_threshold:top_no], out_positives[top_no+1:]))
```
### *More negatives*
```
negatives_threshold = 80
out_negatives = dummy_model(db_negatives, negatives_threshold)
print('Labels:')
printmd('{}**{}**{}'.format(db_negatives[:negatives_threshold], db_negatives[negatives_threshold:top_no], db_negatives[top_no+1:]))
print('Predictions:')
printmd('{}**{}**{}'.format(out_negatives[:negatives_threshold], out_negatives[negatives_threshold:top_no], out_negatives[top_no+1:]))
```
## Metrics
### **Accuracy**
Tasks:
* Create method implementing accuracy metric
*Balanced dataset*
```
from sklearn.metrics import accuracy_score
## Implement method implementing accuracy metric
def acc(labels, predictions):
## START
## END
printmd('Accuracy custom {}'.format(acc(db_balanced, out_balanced)))
printmd('Accuracy sklearn {}'.format(accuracy_score(db_balanced, out_balanced)))
```
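If you want to check your implementation, one possible way to fill in the `acc` stub is sketched below (named `acc_example` here to avoid clashing with your exercise answer; it is an illustration, not the official solution):

```python
# A possible accuracy implementation: fraction of predictions matching the labels.
def acc_example(labels, predictions):
    assert len(labels) == len(predictions)
    correct = sum(1 for l, p in zip(labels, predictions) if l == p)
    return correct / len(labels)

print(acc_example([0, 1, 1, 0], [0, 1, 0, 0]))  # 0.75
```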
*More positives*
```
printmd('Accuracy custom {}'.format(acc(db_positives, out_positives)))
printmd('Accuracy sklearn {}'.format(accuracy_score(db_positives, out_positives)))
```
*More negatives*
```
printmd('Accuracy custom {}'.format(acc(db_negatives, out_negatives)))
printmd('Accuracy sklearn {}'.format(accuracy_score(db_negatives, out_negatives)))
```
*More positives - all positive predictions*
```
printmd('Accuracy {}'.format(accuracy_score(db_positives, np.ones(length))))
```
*More negatives - all negative predictions*
```
printmd('Accuracy {}'.format(accuracy_score(db_negatives, np.zeros(length))))
```
### **Confusion Matrix**
```
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay
```
*Balanced dataset*
```
cmd = ConfusionMatrixDisplay(confusion_matrix(db_balanced, out_balanced), display_labels=[0,1])
cmd.plot()
```
*More positives*
```
cmd = ConfusionMatrixDisplay(confusion_matrix(db_positives, out_positives), display_labels=[0,1])
cmd.plot()
```
*More negatives*
```
cmd = ConfusionMatrixDisplay(confusion_matrix(db_negatives, out_negatives), display_labels=[0,1])
cmd.plot()
```
*More positives - all positive predictions*
```
cmd = ConfusionMatrixDisplay(confusion_matrix(db_positives, np.ones(length)), display_labels=[0,1])
cmd.plot()
```
*More negatives - all negative predictions*
```
cmd = ConfusionMatrixDisplay(confusion_matrix(db_negatives, np.zeros(length)), display_labels=[0,1])
cmd.plot()
```
### **Precision**
Tasks:
* Create method implementing precision metric
```
from sklearn.metrics import precision_score
## Create method implementing precision metric
def precision(labels, predictions):
## START
## END
```
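For reference, precision is computed from true positives and false positives: TP / (TP + FP). A minimal sketch (named `precision_example` to avoid clashing with the exercise stub, and guarding against division by zero when there are no positive predictions):

```python
# A possible precision implementation: TP / (TP + FP).
def precision_example(labels, predictions):
    tp = sum(1 for l, p in zip(labels, predictions) if l == 1 and p == 1)
    fp = sum(1 for l, p in zip(labels, predictions) if l == 0 and p == 1)
    return tp / (tp + fp) if (tp + fp) else 0.0

print(precision_example([0, 1, 1, 0], [1, 1, 0, 0]))  # 0.5
```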
*Balanced dataset*
```
printmd('Precision custom {}'.format(precision(db_balanced, out_balanced)))
printmd('Precision sklearn {}'.format(precision_score(db_balanced, out_balanced)))
```
*More positives*
```
printmd('Precision custom {}'.format(precision(db_positives, out_positives)))
printmd('Precision sklearn {}'.format(precision_score(db_positives, out_positives)))
```
*More negatives*
```
printmd('Precision custom {}'.format(precision(db_negatives, out_negatives)))
printmd('Precision sklearn {}'.format(precision_score(db_negatives, out_negatives)))
```
*More positives - all positive predictions*
```
printmd('Precision custom {}'.format(precision(db_positives, np.ones(length))))
printmd('Precision sklearn {}'.format(precision_score(db_positives, np.ones(length))))
```
*More negatives - all negative predictions*
```
printmd('Precision custom {}'.format(precision(db_negatives, np.zeros(length))))
printmd('Precision sklearn {}'.format(precision_score(db_negatives, np.zeros(length))))
```
### **Recall**
Tasks:
* Create method implementing recall metric
```
from sklearn.metrics import recall_score
## Create method implementing recall metric
def recall(labels, predictions):
## START
## END
```
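Similarly, recall is computed from true positives and false negatives: TP / (TP + FN). A minimal sketch (named `recall_example` to avoid clashing with the exercise stub):

```python
# A possible recall implementation: TP / (TP + FN).
def recall_example(labels, predictions):
    tp = sum(1 for l, p in zip(labels, predictions) if l == 1 and p == 1)
    fn = sum(1 for l, p in zip(labels, predictions) if l == 1 and p == 0)
    return tp / (tp + fn) if (tp + fn) else 0.0

print(recall_example([0, 1, 1, 0], [1, 1, 0, 0]))  # 0.5
```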
*Balanced dataset*
```
printmd('Recall custom {}'.format(recall(db_balanced, out_balanced)))
printmd('Recall sklearn {}'.format(recall_score(db_balanced, out_balanced)))
```
*More positives*
```
printmd('Recall custom {}'.format(recall(db_positives, out_positives)))
printmd('Recall sklearn {}'.format(recall_score(db_positives, out_positives)))
```
*More negatives*
```
printmd('Recall custom {}'.format(recall(db_negatives, out_negatives)))
printmd('Recall sklearn {}'.format(recall_score(db_negatives, out_negatives)))
```
*More positives - all positive predictions*
```
printmd('Recall custom {}'.format(recall(db_positives, np.ones(length))))
printmd('Recall sklearn {}'.format(recall_score(db_positives, np.ones(length))))
```
*More negatives - all negative predictions*
```
printmd('Recall custom {}'.format(recall(db_negatives, np.zeros(length))))
printmd('Recall sklearn {}'.format(recall_score(db_negatives, np.zeros(length))))
```
### **False Positive Rate (FPR = 1 - Specificity)**
```
def fpr(labels, predictions):
assert len(labels)==len(predictions)
fp=0
tn=0
#fpr=fp/(fp+tn)
for i, p in enumerate(predictions):
if p == labels[i] and p == 0:
tn+=1
elif p != labels[i] and p == 1:
fp+=1
if (fp+tn)==0:
return 0
return fp/(fp+tn)
```
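Note that FPR and specificity are complements: specificity = TN / (TN + FP) and FPR = FP / (FP + TN), so FPR = 1 - specificity. A small standalone check, using illustrative helper functions rather than the `fpr` defined above:

```python
# Standalone helpers to verify FPR = 1 - specificity on a tiny example.
def fpr_example(labels, predictions):
    fp = sum(1 for l, p in zip(labels, predictions) if l == 0 and p == 1)
    tn = sum(1 for l, p in zip(labels, predictions) if l == 0 and p == 0)
    return fp / (fp + tn) if (fp + tn) else 0.0

def specificity_example(labels, predictions):
    fp = sum(1 for l, p in zip(labels, predictions) if l == 0 and p == 1)
    tn = sum(1 for l, p in zip(labels, predictions) if l == 0 and p == 0)
    return tn / (tn + fp) if (fp + tn) else 0.0

labels_demo = [0, 0, 1, 1, 0]
preds_demo = [1, 0, 1, 0, 0]
print(fpr_example(labels_demo, preds_demo))          # 1/3
print(specificity_example(labels_demo, preds_demo))  # 2/3
```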
*Balanced dataset*
```
printmd('fpr {}'.format(fpr(db_balanced, out_balanced)))
```
*More positives*
```
printmd('fpr {}'.format(fpr(db_positives, out_positives)))
```
*More negatives*
```
printmd('fpr {}'.format(fpr(db_negatives, out_negatives)))
```
*More positives - all positive predictions*
```
printmd('fpr {}'.format(fpr(db_positives, np.ones(length))))
```
*More negatives - all negative predictions*
### **True Positive Rate = Recall = Sensitivity**
### **F1 Score**
```
from sklearn.metrics import f1_score
def f1():
pass
```
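The `f1` stub above is left empty. F1 is the harmonic mean of precision and recall, which reduces to 2·TP / (2·TP + FP + FN). A possible sketch (named `f1_example` to avoid clashing with the stub):

```python
# A possible F1 implementation: 2*TP / (2*TP + FP + FN).
def f1_example(labels, predictions):
    tp = sum(1 for l, p in zip(labels, predictions) if l == 1 and p == 1)
    fp = sum(1 for l, p in zip(labels, predictions) if l == 0 and p == 1)
    fn = sum(1 for l, p in zip(labels, predictions) if l == 1 and p == 0)
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

print(f1_example([0, 1, 1, 0], [1, 1, 0, 0]))  # 0.5
```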
*Balanced dataset*
```
printmd('F1 sklearn {}'.format(f1_score(db_balanced, out_balanced)))
```
*More positives*
```
printmd('F1 sklearn {}'.format(f1_score(db_positives, out_positives)))
printmd('F1 sklearn weighted {}'.format(f1_score(db_positives, out_positives, average='weighted')))
```
*More negatives*
```
printmd('F1 sklearn {}'.format(f1_score(db_negatives, out_negatives)))
printmd('F1 sklearn weighted {}'.format(f1_score(db_negatives, out_negatives, average='weighted')))
```
*More positives - all positive predictions*
```
printmd('F1 sklearn {}'.format(f1_score(db_positives, np.ones(length))))
printmd('F1 sklearn weighted {}'.format(f1_score(db_positives, np.ones(length), average='weighted')))
```
*More negatives - all negative predictions*
```
printmd('F1 sklearn {}'.format(f1_score(db_negatives, np.zeros(length))))
printmd('F1 sklearn weighted {}'.format(f1_score(db_negatives, np.zeros(length), average='weighted')))
```
```
import pandas as pd
import numpy as np
import os
import matplotlib
import matplotlib.pyplot as plt
from xgboost.sklearn import XGBRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, roc_auc_score, make_scorer, accuracy_score
from xgboost import XGBClassifier, plot_importance
import math
main_df = pd.read_csv(os.path.join('data', 'unpacked_genres.csv')).drop('Unnamed: 0', axis=1)
lang_df = pd.read_csv(os.path.join('data', 'languages_parsed.csv')).drop('Unnamed: 0', axis=1)
main_df.head()
lang_df.columns
main_df['id'] = main_df['id'].astype('str')
lang_df['id'] = lang_df['id'].astype('str')
lang_df = lang_df[['id', u'numlang', u'cn', u'da', u'de',
u'en', u'es', u'fr', u'hi', u'it', u'ja', u'ko', u'ml', u'ru', u'ta',
u'zh']]
all_df = pd.merge(main_df, lang_df, on='id')
all_df.columns
all_df.to_csv(os.path.join('data', 'final.csv'))
all_df = all_df.drop(['production_countries', 'spoken_languages', 'original_language'], axis=1)
all_df.to_csv(os.path.join('data', 'final.csv'))
all_df.head()
all_df.drop('original_language', axis=1).to_csv(os.path.join('data', 'final.csv'))
df = pd.read_csv(os.path.join('data', 'final.csv'))
X = df.drop(['revenue', 'id', 'likes', 'dislikes'], axis=1)
y = df.revenue
reg = XGBRegressor()
X_train, X_test, y_train, y_test = train_test_split(X, y)
reg.fit(X_train, y_train)
print(math.sqrt(mean_squared_error(y_test, reg.predict(X_test))))
print(reg.predict(df[df['id'] == 862].drop(['id', 'revenue'], axis=1)))
X.columns
Xp = X.drop([u'cn',
u'da', u'de', u'es', u'fr', u'hi', u'it', u'ja', u'ko', u'ml',
u'ru', u'ta', u'zh'], axis=1)
Xp.head()
reg = XGBRegressor()
X_train, X_test, y_train, y_test = train_test_split(X, y)
reg.fit(X_train, y_train)
print(math.sqrt(mean_squared_error(y_test, reg.predict(X_test))))
import seaborn as sns
sns.heatmap(X.corr())
df.columns
sns.heatmap(df.drop([u'cn', u'da', u'de', u'es',
u'fr', u'hi', u'it', u'ja', u'ko', u'ml', u'ru', u'ta', u'zh'], axis=1).corr())
df.revenue.hist()
profit = []
for i in range(len(df)):
profit.append(df['revenue'][i] - df['budget'][i])
df['profit'] = profit
len(df[df['profit'] < 0])
isProfitable = []
for i in range(len(df)):
isProfitable.append(df['profit'][i] > 0)
df['isProfitable'] = isProfitable
df = pd.read_csv(os.path.join('data', 'final_clf.csv')).drop('Unnamed: 0', axis=1)
X = df.drop(['id', 'revenue', 'TV Movie', 'profit', 'isProfitable'], axis=1)
y = df.isProfitable.astype('int')
X_train, X_test, y_train, y_test = train_test_split(X, y)
clf = XGBClassifier()
clf.fit(X_train, y_train)
clf.score(X_test, y_test)
plot_importance(clf)
plt.show()
roc_auc_score(y_test, np.array(clf.predict_proba(X_test))[:,1])
roc_auc_score(y, np.array(clf.predict_proba(X))[:,1])
from sklearn.model_selection import GridSearchCV
all_df.head()
all_df.drop('original_language', axis=1).to_csv(os.path.join('data', 'final.csv'))
df = pd.read_csv(os.path.join('data', 'final.csv'))
X = df.drop(['revenue', 'id', 'likes', 'dislikes'], axis=1)
y = df.revenue
reg = XGBRegressor()
X_train, X_test, y_train, y_test = train_test_split(X, y)
reg.fit(X_train, y_train)
print(math.sqrt(mean_squared_error(y_test, reg.predict(X_test))))
print(reg.predict(df[df['id'] == 862].drop(['id', 'revenue'], axis=1)))
X.columns
Xp = X.drop([u'cn',
u'da', u'de', u'es', u'fr', u'hi', u'it', u'ja', u'ko', u'ml',
u'ru', u'ta', u'zh'], axis=1)
Xp.head()
reg = XGBRegressor()
X_train, X_test, y_train, y_test = train_test_split(X, y)
df.revenue.hist()
profit = []
for i in range(len(df)):
profit.append(df['revenue'][i] - df['budget'][i])
df['profit'] = profit
grid_params = {
'max_depth': range(5, 15, 3),
'n_estimators': range(50, 200, 25)
}
scoring = {'AUC': 'roc_auc', 'Accuracy': make_scorer(accuracy_score)}
clf = GridSearchCV(XGBClassifier(), param_grid=grid_params, scoring=scoring, cv=5, refit='AUC')
clf.fit(X, y)
best_clf = clf.best_estimator_
df.columns
X = df.drop(['id', 'revenue', 'TV Movie', 'profit', 'isProfitable'], axis=1)
y = df.isProfitable.astype('int')
X_train, X_test, y_train, y_test = train_test_split(X, y)
clf = XGBClassifier()
roc_auc_score(y, np.array(best_clf.predict_proba(X))[:,1])
plot_importance(best_clf)
plt.show()
from xgboost import plot_tree
df.daysSinceStart.plot.hist()
df['isProfitable'] = df['isProfitable'].astype('int')
len(df[df['isProfitable'] == 0])
1421.0/(len(df)-1421.0)
df.to_csv(os.path.join('data', 'final_clf.csv'))
```

<hr style="margin-bottom: 40px;">
<img src="https://user-images.githubusercontent.com/7065401/39117440-24199c72-46e7-11e8-8ffc-25c6e27e07d4.jpg"
style="width:300px; float: right; margin: 0 40px 40px 40px;"></img>
# Handling Missing Data with Pandas
pandas borrows all the capabilities from numpy selection + adds a number of convenient methods to handle missing values. Let's see one at a time:

## Hands on!
```
import numpy as np
import pandas as pd
```
### Pandas utility functions
Similarly to `numpy`, pandas also has a few utility functions to identify and detect null values:
```
pd.isnull(np.nan)
pd.isnull(None)
pd.isna(np.nan)
pd.isna(None)
```
The opposite ones also exist:
```
pd.notnull(None)
pd.notnull(np.nan)
pd.notna(np.nan)
pd.notnull(3)
```
These functions also work with Series and `DataFrame`s:
```
pd.isnull(pd.Series([1, np.nan, 7]))
pd.notnull(pd.Series([1, np.nan, 7]))
pd.isnull(pd.DataFrame({
'Column A': [1, np.nan, 7],
'Column B': [np.nan, 2, 3],
'Column C': [np.nan, 2, np.nan]
}))
```

### Pandas Operations with Missing Values
Pandas manages missing values more gracefully than numpy. `nan`s will no longer behave as "viruses", and operations will just ignore them completely:
```
pd.Series([1, 2, np.nan]).count()
pd.Series([1, 2, np.nan]).sum()
pd.Series([2, 2, np.nan]).mean()
```
### Filtering missing data
As we saw with numpy, we could combine boolean selection + `pd.isnull` to filter out those `nan`s and null values:
```
s = pd.Series([1, 2, 3, np.nan, np.nan, 4])
pd.notnull(s)
pd.isnull(s)
pd.notnull(s).sum()
pd.isnull(s).sum()
s[pd.notnull(s)]
```
But both `notnull` and `isnull` are also methods of `Series` and `DataFrame`s, so we could use it that way:
```
s.isnull()
s.notnull()
s[s.notnull()]
```

### Dropping null values
Boolean selection + `notnull()` seems a little bit verbose and repetitive. And as we said before: any repetitive task will probably have a better, more DRY way. In this case, we can use the `dropna` method:
```
s
s.dropna()
```
### Dropping null values on DataFrames
You saw how simple it is to drop `na`s with a Series. But with `DataFrame`s, there will be a few more things to consider, because you can't drop single values. You can only drop entire columns or rows. Let's start with a sample `DataFrame`:
```
df = pd.DataFrame({
'Column A': [1, np.nan, 30, np.nan],
'Column B': [2, 8, 31, np.nan],
'Column C': [np.nan, 9, 32, 100],
'Column D': [5, 8, 34, 110],
})
df
df.shape
df.info()
df.isnull()
df.isnull().sum()
```
The default `dropna` behavior will drop all the rows in which _any_ null value is present:
```
df.dropna()
```
In this case we're dropping **rows**. Rows containing null values are dropped from the DF. You can also use the `axis` parameter to drop columns containing null values:
```
df.dropna(axis=1) # axis='columns' also works
```
In this case, any row or column that contains **at least** one null value will be dropped. Which can be, depending on the case, too extreme. You can control this behavior with the `how` parameter. Can be either `'any'` or `'all'`:
```
df2 = pd.DataFrame({
'Column A': [1, np.nan, 30],
'Column B': [2, np.nan, 31],
'Column C': [np.nan, np.nan, 100]
})
df2
df2.dropna(how='all')
df2.dropna(how='any') # default behavior
```
You can also use the `thresh` parameter to indicate a _threshold_ (a minimum number) of non-null values for the row/column to be kept:
```
df
df.dropna(thresh=3)
df.dropna(thresh=3, axis='columns')
```

### Filling null values
Sometimes, instead of dropping the null values, we might need to replace them with some other value. This highly depends on your context and the dataset you're currently working with. Sometimes a `nan` can be replaced with a `0`, sometimes it can be replaced with the `mean` of the sample, and some other times you can take the closest value. Again, it depends on the context. We'll show you the different methods and mechanisms and you can then apply them to your own problem.
```
s
```
**Filling nulls with an arbitrary value**
```
s.fillna(0)
s.fillna(s.mean())
s
```
**Filling nulls with contiguous (close) values**
The `method` argument is used to fill null values with other values close to that null one:
```
s.fillna(method='ffill')
s.fillna(method='bfill')
```
This can still leave null values at the extremes of the Series/DataFrame:
```
pd.Series([np.nan, 3, np.nan, 9]).fillna(method='ffill')
pd.Series([1, np.nan, 3, np.nan, np.nan]).fillna(method='bfill')
```
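One common pattern (an illustration, not the only option) is to chain a forward fill with a backward fill so the leading and trailing nulls are covered too; the `ffill`/`bfill` methods are shorthand for the `method` argument shown above:

```python
import numpy as np
import pandas as pd

s2 = pd.Series([np.nan, 3, np.nan, 9, np.nan])
# Forward fill first, then backfill the leading NaN that ffill cannot reach.
filled = s2.ffill().bfill()
print(filled.tolist())  # [3.0, 3.0, 3.0, 9.0, 9.0]
```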
### Filling null values on DataFrames
The `fillna` method also works on `DataFrame`s, and it works similarly. The main differences are that you can specify the `axis` (as usual, rows or columns) to use to fill the values (specially for methods) and that you have more control on the values passed:
```
df
df.fillna({'Column A': 0, 'Column B': 99, 'Column C': df['Column C'].mean()})
df.fillna(method='ffill', axis=0)
df.fillna(method='ffill', axis=1)
```

### Checking if there are NAs
The question is: Does this `Series` or `DataFrame` contain any missing value? The answer should be yes or no: `True` or `False`. How can you verify it?
**Example 1: Checking the length**
If there are missing values, `s.dropna()` will have less elements than `s`:
```
s.dropna().count()
missing_values = len(s.dropna()) != len(s)
missing_values
```
There's also a `count` method that excludes `nan`s from its result:
```
len(s)
s.count()
```
So we could just do:
```
missing_values = s.count() != len(s)
missing_values
```
**More Pythonic solution: `any`**
The methods `any` and `all` check if either there's `any` True value in a Series or `all` the values are `True`. They work in the same way as in Python:
```
pd.Series([True, False, False]).any()
pd.Series([True, False, False]).all()
pd.Series([True, True, True]).all()
```
The `isnull()` method returned a Boolean `Series` with `True` values wherever there was a `nan`:
```
s.isnull()
```
So we can just use the `any` method with the boolean array returned:
```
pd.Series([1, np.nan]).isnull().any()
pd.Series([1, 2]).isnull().any()
s.isnull().any()
```
A more strict version would check only the `values` of the Series:
```
s.isnull().values
s.isnull().values.any()
```

<a href="https://colab.research.google.com/github/mrdbourke/tensorflow-deep-learning/blob/main/00_tensorflow_fundamentals.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# 00. Getting started with TensorFlow: A guide to the fundamentals
## What is TensorFlow?
[TensorFlow](https://www.tensorflow.org/) is an open-source end-to-end machine learning library for preprocessing data, modelling data and serving models (getting them into the hands of others).
## Why use TensorFlow?
Rather than building machine learning and deep learning models from scratch, it's more likely you'll use a library such as TensorFlow. This is because it contains many of the most common machine learning functions you'll want to use.
## What we're going to cover
TensorFlow is vast. But the main premise is simple: turn data into numbers (tensors) and build machine learning algorithms to find patterns in them.
In this notebook we cover some of the most fundamental TensorFlow operations, more specifically:
* Introduction to tensors (creating tensors)
* Getting information from tensors (tensor attributes)
* Manipulating tensors (tensor operations)
* Tensors and NumPy
* Using @tf.function (a way to speed up your regular Python functions)
* Using GPUs with TensorFlow
* Exercises to try
Things to note:
* Many of the conventions here will happen automatically behind the scenes (when you build a model) but it's worth knowing so if you see any of these things, you know what's happening.
* For any TensorFlow function you see, it's important to be able to check it out in the documentation, for example, going to the Python API docs for all functions and searching for what you need: https://www.tensorflow.org/api_docs/python/ (don't worry if this seems overwhelming at first, with enough practice, you'll get used to navigating the documentation).
## Introduction to Tensors
If you've ever used NumPy, [tensors](https://www.tensorflow.org/guide/tensor) are kind of like NumPy arrays (we'll see more on this later).
For the sake of this notebook and going forward, you can think of a tensor as a multi-dimensional numerical representation (also referred to as n-dimensional, where n can be any number) of something. Where something can be almost anything you can imagine:
* It could be numbers themselves (using tensors to represent the price of houses).
* It could be an image (using tensors to represent the pixels of an image).
* It could be text (using tensors to represent words).
* Or it could be some other form of information (or data) you want to represent with numbers.
The main difference between tensors and NumPy arrays (also an n-dimensional array of numbers) is that tensors can be used on [GPUs (graphical processing units)](https://blogs.nvidia.com/blog/2009/12/16/whats-the-difference-between-a-cpu-and-a-gpu/) and [TPUs (tensor processing units)](https://en.wikipedia.org/wiki/Tensor_processing_unit).
The benefit of being able to run on GPUs and TPUs is faster computation, this means, if we wanted to find patterns in the numerical representations of our data, we can generally find them faster using GPUs and TPUs.
Okay, we've been talking enough about tensors, let's see them.
The first thing we'll do is import TensorFlow under the common alias `tf`.
```
# Import TensorFlow
import tensorflow as tf
print(tf.__version__) # find the version number (should be 2.x+)
```
### Creating Tensors with `tf.constant()`
As mentioned before, in general, you usually won't create tensors yourself. This is because TensorFlow has modules built-in (such as [`tf.io`](https://www.tensorflow.org/api_docs/python/tf/io) and [`tf.data`](https://www.tensorflow.org/guide/data)) which are able to read your data sources and automatically convert them to tensors and then later on, neural network models will process these for us.
But for now, because we're getting familiar with tensors themselves and how to manipulate them, we'll see how we can create them ourselves.
We'll begin by using [`tf.constant()`](https://www.tensorflow.org/api_docs/python/tf/constant).
```
# Create a scalar (rank 0 tensor)
scalar = tf.constant(7)
scalar
```
A scalar is known as a rank 0 tensor. Because it has no dimensions (it's just a number).
> 🔑 **Note:** For now, you don't need to know too much about the different ranks of tensors (but we will see more on this later). The important point is knowing tensors can have an unlimited range of dimensions (the exact amount will depend on what data you're representing).
```
# Check the number of dimensions of a tensor (ndim stands for number of dimensions)
scalar.ndim
# Create a vector (more than 0 dimensions)
vector = tf.constant([10, 10])
vector
# Check the number of dimensions of our vector tensor
vector.ndim
# Create a matrix (more than 1 dimension)
matrix = tf.constant([[10, 7],
                      [7, 10]])
matrix
matrix.ndim
```
By default, TensorFlow creates tensors with either an `int32` or `float32` datatype.
This is known as [32-bit precision](https://en.wikipedia.org/wiki/Precision_(computer_science)) (the higher the number, the more precise the number, the more space it takes up on your computer).
```
# Create another matrix and define the datatype
another_matrix = tf.constant([[10., 7.],
                              [3., 2.],
                              [8., 9.]], dtype=tf.float16) # specify the datatype with 'dtype'
another_matrix
# Even though another_matrix contains more numbers, its dimensions stay the same
another_matrix.ndim
# How about a tensor? (more than 2 dimensions, although, all of the above items are also technically tensors)
tensor = tf.constant([[[1, 2, 3],
                       [4, 5, 6]],
                      [[7, 8, 9],
                       [10, 11, 12]],
                      [[13, 14, 15],
                       [16, 17, 18]]])
tensor
tensor.ndim
```
This is known as a rank 3 tensor (3 dimensions), however a tensor can have an arbitrary (unlimited) number of dimensions.
For example, you might turn a series of images into tensors with shape (224, 224, 3, 32), where:
* 224, 224 (the first 2 dimensions) are the height and width of the images in pixels.
* 3 is the number of colour channels of the image (red, green, blue).
* 32 is the batch size (the number of images a neural network sees at any one time).
All of the above variables we've created are actually tensors. But you may also hear them referred to as their different names (the ones we gave them):
* **scalar**: a single number.
* **vector**: a number with direction (e.g. wind speed with direction).
* **matrix**: a 2-dimensional array of numbers.
* **tensor**: an n-dimensional array of numbers (where n can be any number; a 0-dimensional tensor is a scalar, a 1-dimensional tensor is a vector).
To add to the confusion, the terms matrix and tensor are often used interchangeably.
Going forward since we're using TensorFlow, everything we refer to and use will be tensors.
For more on the mathematical difference between scalars, vectors and matrices see the [visual algebra post by Math is Fun](https://www.mathsisfun.com/algebra/scalar-vector-matrix.html).

### Creating Tensors with `tf.Variable()`
You can also (although you likely rarely will, because often, when working with data, tensors are created for you automatically) create tensors using [`tf.Variable()`](https://www.tensorflow.org/api_docs/python/tf/Variable).
The difference between `tf.Variable()` and `tf.constant()` is that tensors created with `tf.constant()` are immutable (can't be changed, can only be used to create a new tensor), whereas tensors created with `tf.Variable()` are mutable (can be changed).
```
# Create the same tensor with tf.Variable() and tf.constant()
changeable_tensor = tf.Variable([10, 7])
unchangeable_tensor = tf.constant([10, 7])
changeable_tensor, unchangeable_tensor
```
Now let's try to change one of the elements of the changeable tensor.
```
# Will error (requires the .assign() method)
changeable_tensor[0] = 7
changeable_tensor
```
To change an element of a `tf.Variable()` tensor requires the `assign()` method.
```
# Won't error
changeable_tensor[0].assign(7)
changeable_tensor
```
Now let's try to change a value in a `tf.constant()` tensor.
```
# Will error (can't change tf.constant())
unchangeable_tensor[0].assign(7)
unchangeable_tensor
```
Which one should you use? `tf.constant()` or `tf.Variable()`?
It will depend on what your problem requires. However, most of the time, TensorFlow will automatically choose for you (when loading data or modelling data).
### Creating random tensors
Random tensors are tensors of some arbitrary size which contain random numbers.
Why would you want to create random tensors?
This is how neural networks initialize their weights (the patterns they're trying to learn from the data).
For example, the process of a neural network learning often involves taking a random n-dimensional array of numbers and refining them until they represent some kind of pattern (a compressed way to represent the original data).
**How a network learns**

*A network learns by starting with random patterns (1) then going through demonstrative examples of data (2) whilst trying to update its random patterns to represent the examples (3).*
We can create random tensors by using the [`tf.random.Generator`](https://www.tensorflow.org/guide/random_numbers#the_tfrandomgenerator_class) class.
```
# Create two random (but the same) tensors
random_1 = tf.random.Generator.from_seed(42) # set the seed for reproducibility
random_1 = random_1.normal(shape=(3, 2)) # create tensor from a normal distribution
random_2 = tf.random.Generator.from_seed(42)
random_2 = random_2.normal(shape=(3, 2))
# Are they equal?
random_1, random_2, random_1 == random_2
```
The random tensors we've made are actually [pseudorandom numbers](https://www.computerhope.com/jargon/p/pseudo-random.htm) (they appear as random, but really aren't).
If we set a seed we'll get the same random numbers (if you've ever used NumPy, this is similar to `np.random.seed(42)`).
Setting the seed says, "hey, create some random numbers, but flavour them with X" (X is the seed).
What do you think will happen when we change the seed?
```
# Create two random (and different) tensors
random_3 = tf.random.Generator.from_seed(42)
random_3 = random_3.normal(shape=(3, 2))
random_4 = tf.random.Generator.from_seed(11)
random_4 = random_4.normal(shape=(3, 2))
# Check the tensors and see if they are equal
random_3, random_4, random_1 == random_3, random_3 == random_4
```
What if you wanted to shuffle the order of a tensor?
Wait, why would you want to do that?
Let's say you're working with 15,000 images of cats and dogs, where the first 10,000 images are of cats and the next 5,000 are of dogs. This order could affect how a neural network learns (it may overfit by learning the order of the data), so it might be a good idea to shuffle your data.
```
# Shuffle a tensor (valuable for when you want to shuffle your data)
not_shuffled = tf.constant([[10, 7],
                            [3, 4],
                            [2, 5]])
# Gets different results each time
tf.random.shuffle(not_shuffled)
# Shuffle in the same order every time using the seed parameter (won't actually be the same)
tf.random.shuffle(not_shuffled, seed=42)
```
Wait... why didn't the numbers come out the same?
It's due to rule #4 of the [`tf.random.set_seed()`](https://www.tensorflow.org/api_docs/python/tf/random/set_seed) documentation.
> "4. If both the global and the operation seed are set: Both seeds are used in conjunction to determine the random sequence."
`tf.random.set_seed(42)` sets the global seed, and the `seed` parameter in `tf.random.shuffle(seed=42)` sets the operation seed.
Because, "Operations that rely on a random seed actually derive it from two seeds: the global and operation-level seeds. This sets the global seed."
```
# Shuffle in the same order every time
# Set the global random seed
tf.random.set_seed(42)
# Set the operation random seed
tf.random.shuffle(not_shuffled, seed=42)
# Set the global random seed
tf.random.set_seed(42) # if you comment this out you'll get different results
# Set the operation random seed
tf.random.shuffle(not_shuffled)
```
### Other ways to make tensors
Though you might rarely use these (remember, many tensor operations are done behind the scenes for you), you can use [`tf.ones()`](https://www.tensorflow.org/api_docs/python/tf/ones) to create a tensor of all ones and [`tf.zeros()`](https://www.tensorflow.org/api_docs/python/tf/zeros) to create a tensor of all zeros.
```
# Make a tensor of all ones
tf.ones(shape=(3, 2))
# Make a tensor of all zeros
tf.zeros(shape=(3, 2))
```
You can also turn NumPy arrays into tensors.
Remember, the main difference between tensors and NumPy arrays is that tensors can be run on GPUs.
> 🔑 **Note:** A matrix or tensor is typically represented by a capital letter (e.g. `X` or `A`) where as a vector is typically represented by a lowercase letter (e.g. `y` or `b`).
```
import numpy as np
numpy_A = np.arange(1, 25, dtype=np.int32) # create a NumPy array between 1 and 25
A = tf.constant(numpy_A,
                shape=[2, 4, 3]) # note: the shape total (2*4*3) has to match the number of elements in the array
numpy_A, A
```
## Getting information from tensors (shape, rank, size)
There will be times when you'll want to get different pieces of information from your tensors, in particular, you should know the following tensor vocabulary:
* **Shape:** The length (number of elements) of each of the dimensions of a tensor.
* **Rank:** The number of tensor dimensions. A scalar has rank 0, a vector has rank 1, a matrix is rank 2, a tensor has rank n.
* **Axis** or **Dimension:** A particular dimension of a tensor.
* **Size:** The total number of items in the tensor.
You'll use these especially when you're trying to line up the shapes of your data with the shapes of your model, for example, making sure the shape of your image tensors is the same as your model's input layer.
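These three quantities are related, and the relationship can be sketched in plain Python (the shape tuple here mirrors the rank 4 tensor created below):

```python
# Shape, rank and size in plain Python:
# rank is the number of dimensions, size is the product of the shape.
from math import prod

shape = (2, 3, 4, 5)   # e.g. the shape of the rank 4 tensor created below
rank = len(shape)      # number of dimensions -> 4
size = prod(shape)     # total number of elements -> 120
```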
We've already seen one of these before using the `ndim` attribute. Let's see the rest.
```
# Create a rank 4 tensor (4 dimensions)
rank_4_tensor = tf.zeros([2, 3, 4, 5])
rank_4_tensor
rank_4_tensor.shape, rank_4_tensor.ndim, tf.size(rank_4_tensor)
# Get various attributes of tensor
print("Datatype of every element:", rank_4_tensor.dtype)
print("Number of dimensions (rank):", rank_4_tensor.ndim)
print("Shape of tensor:", rank_4_tensor.shape)
print("Elements along axis 0 of tensor:", rank_4_tensor.shape[0])
print("Elements along last axis of tensor:", rank_4_tensor.shape[-1])
print("Total number of elements (2*3*4*5):", tf.size(rank_4_tensor).numpy()) # .numpy() converts to NumPy array
```
You can also index tensors just like Python lists.
```
# Get the first 2 items of each dimension
rank_4_tensor[:2, :2, :2, :2]
# Get the first element from each dimension except the final one
rank_4_tensor[:1, :1, :1, :]
# Create a rank 2 tensor (2 dimensions)
rank_2_tensor = tf.constant([[10, 7],
                             [3, 4]])
# Get the last item of each row
rank_2_tensor[:, -1]
```
You can also add dimensions to your tensor whilst keeping the same information present using `tf.newaxis`.
```
# Add an extra dimension (to the end)
rank_3_tensor = rank_2_tensor[..., tf.newaxis] # in Python "..." means "all dimensions prior to"
rank_2_tensor, rank_3_tensor # shape (2, 2), shape (2, 2, 1)
```
You can achieve the same using [`tf.expand_dims()`](https://www.tensorflow.org/api_docs/python/tf/expand_dims).
```
tf.expand_dims(rank_2_tensor, axis=-1) # "-1" means last axis
```
## Manipulating tensors (tensor operations)
Finding patterns in tensors (numerical representations of data) requires manipulating them.
Again, when building models in TensorFlow, much of this pattern discovery is done for you.
### Basic operations
You can perform many of the basic mathematical operations directly on tensors using Python operators such as `+`, `-` and `*`.
```
# You can add values to a tensor using the addition operator
tensor = tf.constant([[10, 7], [3, 4]])
tensor + 10
```
Since we used `tf.constant()`, the original tensor is unchanged (the addition gets done on a copy).
```
# Original tensor unchanged
tensor
```
Other operators also work.
```
# Multiplication (known as element-wise multiplication)
tensor * 10
# Subtraction
tensor - 10
```
You can also use the equivalent TensorFlow function. Using the TensorFlow function (where possible) has the advantage of being sped up later down the line when running as part of a [TensorFlow graph](https://www.tensorflow.org/tensorboard/graphs).
```
# Use the tensorflow function equivalent of the '*' (multiply) operator
tf.multiply(tensor, 10)
# The original tensor is still unchanged
tensor
```
### Matrix multiplication
One of the most common operations in machine learning algorithms is [matrix multiplication](https://www.mathsisfun.com/algebra/matrix-multiplying.html).
TensorFlow implements this matrix multiplication functionality in the [`tf.matmul()`](https://www.tensorflow.org/api_docs/python/tf/linalg/matmul) method.
The main two rules for matrix multiplication to remember are:
1. The inner dimensions must match:
* `(3, 5) @ (3, 5)` won't work
* `(5, 3) @ (3, 5)` will work
* `(3, 5) @ (5, 3)` will work
2. The resulting matrix has the shape of the outer dimensions:
* `(5, 3) @ (3, 5)` -> `(5, 5)`
* `(3, 5) @ (5, 3)` -> `(3, 3)`
> 🔑 **Note:** '`@`' in Python is the symbol for matrix multiplication.
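The two rules above can be sketched as a tiny plain-Python shape checker (for illustration only, this helper is not part of TensorFlow):

```python
# Plain-Python illustration of the two matrix multiplication rules above
def matmul_shape(a_shape, b_shape):
    if a_shape[1] != b_shape[0]:  # rule 1: inner dimensions must match
        raise ValueError(f"inner dimensions {a_shape[1]} and {b_shape[0]} don't match")
    return (a_shape[0], b_shape[1])  # rule 2: result takes the outer dimensions

matmul_shape((5, 3), (3, 5))  # -> (5, 5)
matmul_shape((3, 5), (5, 3))  # -> (3, 3)
```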
```
# Matrix multiplication in TensorFlow
print(tensor)
tf.matmul(tensor, tensor)
# Matrix multiplication with Python operator '@'
tensor @ tensor
```
Both of these examples work because our `tensor` variable is of shape (2, 2).
What if we created some tensors which had mismatched shapes?
```
# Create (3, 2) tensor
X = tf.constant([[1, 2],
                 [3, 4],
                 [5, 6]])
# Create another (3, 2) tensor
Y = tf.constant([[7, 8],
                 [9, 10],
                 [11, 12]])
X, Y
# Try to matrix multiply them (will error)
X @ Y
```
Trying to matrix multiply two tensors with the shape `(3, 2)` errors because the inner dimensions don't match.
We need to either:
* Reshape X to `(2, 3)` so it's `(2, 3) @ (3, 2)`.
* Reshape Y to `(2, 3)` so it's `(3, 2) @ (2, 3)`.
We can do this with either:
* [`tf.reshape()`](https://www.tensorflow.org/api_docs/python/tf/reshape) - allows us to reshape a tensor into a defined shape.
* [`tf.transpose()`](https://www.tensorflow.org/api_docs/python/tf/transpose) - switches the dimensions of a given tensor.

Let's try `tf.reshape()` first.
```
# Example of reshape (3, 2) -> (2, 3)
tf.reshape(Y, shape=(2, 3))
# Try matrix multiplication with reshaped Y
X @ tf.reshape(Y, shape=(2, 3))
```
It worked, let's try the same with a reshaped `X`, except this time we'll use [`tf.transpose()`](https://www.tensorflow.org/api_docs/python/tf/transpose) and `tf.matmul()`.
```
# Example of transpose (3, 2) -> (2, 3)
tf.transpose(X)
# Try matrix multiplication
tf.matmul(tf.transpose(X), Y)
# You can achieve the same result with parameters
tf.matmul(a=X, b=Y, transpose_a=True, transpose_b=False)
```
Notice the difference in the resulting shapes when transposing `X` or reshaping `Y`.
This is because of the 2nd rule mentioned above:
* `(2, 3) @ (3, 2)` -> `(2, 2)` done with `tf.matmul(tf.transpose(X), Y)`
* `(3, 2) @ (2, 3)` -> `(3, 3)` done with `X @ tf.reshape(Y, shape=(2, 3))`
This kind of data manipulation is a reminder: you'll spend a lot of your time in machine learning and working with neural networks reshaping data (in the form of tensors) to prepare it to be used with various operations (such as feeding it to a model).
### The dot product
Multiplying matrices by each other is also referred to as the dot product.
You can perform the `tf.matmul()` operation using [`tf.tensordot()`](https://www.tensorflow.org/api_docs/python/tf/tensordot).
```
# Perform the dot product on X and Y (requires X to be transposed)
tf.tensordot(tf.transpose(X), Y, axes=1)
```
You might notice that although both `reshape` and `transpose` work, you get different results when using each.
Let's see an example, first with `tf.transpose()` then with `tf.reshape()`.
```
# Perform matrix multiplication between X and Y (transposed)
tf.matmul(X, tf.transpose(Y))
# Perform matrix multiplication between X and Y (reshaped)
tf.matmul(X, tf.reshape(Y, (2, 3)))
```
Hmm... they result in different values.
Which is strange, because reshaping `Y` (a `(3, 2)` matrix) to `(2, 3)` and transposing it result in the same shape.
```
# Check shapes of Y, reshaped Y and transposed Y
Y.shape, tf.reshape(Y, (2, 3)).shape, tf.transpose(Y).shape
```
But calling `tf.reshape()` and `tf.transpose()` on `Y` don't necessarily result in the same values.
```
# Check values of Y, reshaped Y and transposed Y
print("Normal Y:")
print(Y, "\n") # "\n" for newline
print("Y reshaped to (2, 3):")
print(tf.reshape(Y, (2, 3)), "\n")
print("Y transposed:")
print(tf.transpose(Y))
```
As you can see, the outputs of `tf.reshape()` and `tf.transpose()` when called on `Y`, even though they have the same shape, are different.
This can be explained by the default behaviour of each method:
* [`tf.reshape()`](https://www.tensorflow.org/api_docs/python/tf/reshape) - change the shape of the given tensor (first) and then insert values in order they appear (in our case, 7, 8, 9, 10, 11, 12).
* [`tf.transpose()`](https://www.tensorflow.org/api_docs/python/tf/transpose) - swap the order of the axes, by default the last axis becomes the first, however the order can be changed using the [`perm` parameter](https://www.tensorflow.org/api_docs/python/tf/transpose).
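NumPy follows the same row-major reshape convention, so the difference is easy to inspect there too (a quick sketch, not TensorFlow code):

```python
import numpy as np

Y = np.array([[7, 8],
              [9, 10],
              [11, 12]])

# reshape keeps the flat (row-major) order and pours it into the new shape
Y_reshaped = Y.reshape(2, 3)   # [[7, 8, 9], [10, 11, 12]]

# transpose swaps the axes, so columns become rows
Y_transposed = Y.T             # [[7, 9, 11], [8, 10, 12]]
```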
So which should you use?
Again, most of the time these operations (when they need to be run, such as during the training of a neural network) will be implemented for you.
But generally, whenever performing a matrix multiplication and the shapes of two matrices don't line up, you will transpose (not reshape) one of them in order to line them up.
### Matrix multiplication tidbits
* If we transposed `Y`, it would be represented as $\mathbf{Y}^\mathsf{T}$ (note the capital T for transpose).
* Get an illustrative view of matrix multiplication [by Math is Fun](https://www.mathsisfun.com/algebra/matrix-multiplying.html).
* Try a hands-on demo of matrix multiplication: http://matrixmultiplication.xyz/ (shown below).

### Changing the datatype of a tensor
Sometimes you'll want to alter the default datatype of your tensor.
This is common when you want to compute using less precision (e.g. 16-bit floating point numbers vs. 32-bit floating point numbers).
Computing with less precision is useful on devices with less computing capacity such as mobile devices (because the fewer bits, the less space the computations require).
You can change the datatype of a tensor using [`tf.cast()`](https://www.tensorflow.org/api_docs/python/tf/cast).
```
# Create a new tensor with default datatype (float32)
B = tf.constant([1.7, 7.4])
# Create a new tensor with default datatype (int32)
C = tf.constant([1, 7])
B, C
# Change from float32 to float16 (reduced precision)
B = tf.cast(B, dtype=tf.float16)
B
# Change from int32 to float32
C = tf.cast(C, dtype=tf.float32)
C
```
### Getting the absolute value
Sometimes you'll want the absolute values (all values are positive) of elements in your tensors.
To do so, you can use [`tf.abs()`](https://www.tensorflow.org/api_docs/python/tf/math/abs).
```
# Create tensor with negative values
D = tf.constant([-7, -10])
D
# Get the absolute values
tf.abs(D)
```
### Finding the min, max, mean, sum (aggregation)
You can quickly aggregate (perform a calculation on a whole tensor) tensors to find things like the minimum value, maximum value, mean and sum of all the elements.
To do so, aggregation methods typically have the syntax `reduce_[action]()`, such as:
* [`tf.reduce_min()`](https://www.tensorflow.org/api_docs/python/tf/math/reduce_min) - find the minimum value in a tensor.
* [`tf.reduce_max()`](https://www.tensorflow.org/api_docs/python/tf/math/reduce_max) - find the maximum value in a tensor (helpful for when you want to find the highest prediction probability).
* [`tf.reduce_mean()`](https://www.tensorflow.org/api_docs/python/tf/math/reduce_mean) - find the mean of all elements in a tensor.
* [`tf.reduce_sum()`](https://www.tensorflow.org/api_docs/python/tf/math/reduce_sum) - find the sum of all elements in a tensor.
* **Note:** typically, each of these is under the `math` module, e.g. `tf.math.reduce_min()` but you can use the alias `tf.reduce_min()`.
Let's see them in action.
```
# Create a tensor with 50 random values between 0 and 100
E = tf.constant(np.random.randint(low=0, high=100, size=50))
E
# Find the minimum
tf.reduce_min(E)
# Find the maximum
tf.reduce_max(E)
# Find the mean
tf.reduce_mean(E)
# Find the sum
tf.reduce_sum(E)
```
You can also find the standard deviation ([`tf.math.reduce_std()`](https://www.tensorflow.org/api_docs/python/tf/math/reduce_std)) and variance ([`tf.math.reduce_variance()`](https://www.tensorflow.org/api_docs/python/tf/math/reduce_variance)) of elements in a tensor using similar methods.
### Finding the positional maximum and minimum
How about finding the position in a tensor where the maximum value occurs?
This is helpful when you want to line up your labels (say `['Green', 'Blue', 'Red']`) with your prediction probabilities tensor (e.g. `[0.98, 0.01, 0.01]`).
In this case, the predicted label (the one with the highest prediction probability) would be `'Green'`.
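That lookup can be sketched in plain Python (the label names mirror the example above):

```python
# Lining up labels with prediction probabilities
labels = ['Green', 'Blue', 'Red']
prediction_probabilities = [0.98, 0.01, 0.01]

# argmax = index of the highest probability
predicted_index = max(range(len(prediction_probabilities)),
                      key=lambda i: prediction_probabilities[i])
predicted_label = labels[predicted_index]  # 'Green'
```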
You can do the same for the minimum (if required) with the following:
* [`tf.argmax()`](https://www.tensorflow.org/api_docs/python/tf/math/argmax) - find the position of the maximum element in a given tensor.
* [`tf.argmin()`](https://www.tensorflow.org/api_docs/python/tf/math/argmin) - find the position of the minimum element in a given tensor.
```
# Create a tensor with 50 values between 0 and 1
F = tf.constant(np.random.random(50))
F
# Find the maximum element position of F
tf.argmax(F)
# Find the minimum element position of F
tf.argmin(F)
# Find the maximum element position of F
print(f"The maximum value of F is at position: {tf.argmax(F).numpy()}")
print(f"The maximum value of F is: {tf.reduce_max(F).numpy()}")
print(f"Using tf.argmax() to index F, the maximum value of F is: {F[tf.argmax(F)].numpy()}")
print(f"Are the two max values the same (they should be)? {F[tf.argmax(F)].numpy() == tf.reduce_max(F).numpy()}")
```
### Squeezing a tensor (removing all single dimensions)
If you need to remove single-dimensions from a tensor (dimensions with size 1), you can use `tf.squeeze()`.
* [`tf.squeeze()`](https://www.tensorflow.org/api_docs/python/tf/squeeze) - remove all dimensions of 1 from a tensor.
```
# Create a rank 5 (5 dimensions) tensor of 50 numbers between 0 and 100
G = tf.constant(np.random.randint(0, 100, 50), shape=(1, 1, 1, 1, 50))
G.shape, G.ndim
# Squeeze tensor G (remove all 1 dimensions)
G_squeezed = tf.squeeze(G)
G_squeezed.shape, G_squeezed.ndim
```
### One-hot encoding
If you have a tensor of indices and would like to one-hot encode it, you can use [`tf.one_hot()`](https://www.tensorflow.org/api_docs/python/tf/one_hot).
You should also specify the `depth` parameter (the number of classes you want to one-hot encode to).
```
# Create a list of indices
some_list = [0, 1, 2, 3]
# One hot encode them
tf.one_hot(some_list, depth=4)
```
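As an aside, one-hot encoding with the default `0`/`1` values is equivalent to indexing an identity matrix — a NumPy sketch of the same idea (not TensorFlow's implementation):

```python
import numpy as np

some_list = [0, 1, 2, 3]
# Indexing an identity matrix of size `depth` one-hot encodes a list of indices
one_hot = np.eye(4)[some_list]
```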
You can also specify values for `on_value` and `off_value` instead of the default `0` and `1`.
```
# Specify custom values for on and off encoding
tf.one_hot(some_list, depth=4, on_value="We're live!", off_value="Offline")
```
### Squaring, log, square root
TensorFlow implements many other common mathematical operations you might need at some stage.
Let's take a look at:
* [`tf.square()`](https://www.tensorflow.org/api_docs/python/tf/math/square) - get the square of every value in a tensor.
* [`tf.sqrt()`](https://www.tensorflow.org/api_docs/python/tf/math/sqrt) - get the square root of every value in a tensor (**note:** the elements need to be floats or this will error).
* [`tf.math.log()`](https://www.tensorflow.org/api_docs/python/tf/math/log) - get the natural log of every value in a tensor (elements need to be floats).
```
# Create a new tensor
H = tf.constant(np.arange(1, 10))
H
# Square it
tf.square(H)
# Find the square root (will error because H is an integer tensor)
tf.sqrt(H)
# Change H to float32
H = tf.cast(H, dtype=tf.float32)
H
# Find the square root
tf.sqrt(H)
# Find the log (input also needs to be float)
tf.math.log(H)
```
### Manipulating `tf.Variable` tensors
Tensors created with `tf.Variable()` can be changed in place using methods such as:
* [`.assign()`](https://www.tensorflow.org/api_docs/python/tf/Variable#assign) - assign a different value to a particular index of a variable tensor.
* [`.assign_add()`](https://www.tensorflow.org/api_docs/python/tf/Variable#assign_add) - add to an existing value and reassign it at a particular index of a variable tensor.
```
# Create a variable tensor
I = tf.Variable(np.arange(0, 5))
I
# Assign the final value a new value of 50
I.assign([0, 1, 2, 3, 50])
# The change happens in place (the last value is now 50, not 4)
I
# Add 10 to every element in I
I.assign_add([10, 10, 10, 10, 10])
# Again, the change happens in place
I
```
## Tensors and NumPy
We've seen some examples of tensors interacting with NumPy arrays, such as using NumPy arrays to create tensors.
Tensors can also be converted to NumPy arrays using:
* `np.array()` - pass a tensor to convert to an ndarray (NumPy's main datatype).
* `tensor.numpy()` - call on a tensor to convert to an ndarray.
Doing this is helpful as it makes tensors iterable as well as allows us to use any of NumPy's methods on them.
```
# Create a tensor from a NumPy array
J = tf.constant(np.array([3., 7., 10.]))
J
# Convert tensor J to NumPy with np.array()
np.array(J), type(np.array(J))
# Convert tensor J to NumPy with .numpy()
J.numpy(), type(J.numpy())
```
By default tensors have `dtype=float32`, whereas NumPy arrays have `dtype=float64`.
This is because neural networks (which are usually built with TensorFlow) can generally work very well with less precision (32-bit rather than 64-bit).
```
# Create a tensor from NumPy and from an array
numpy_J = tf.constant(np.array([3., 7., 10.])) # will be float64 (due to NumPy)
tensor_J = tf.constant([3., 7., 10.]) # will be float32 (due to being TensorFlow default)
numpy_J.dtype, tensor_J.dtype
```
## Using `@tf.function`
In your TensorFlow adventures, you might come across Python functions which have the decorator [`@tf.function`](https://www.tensorflow.org/api_docs/python/tf/function).
If you aren't sure what Python decorators do, [read RealPython's guide on them](https://realpython.com/primer-on-python-decorators/).
But in short, decorators modify a function in one way or another.
In the `@tf.function` decorator case, it turns a Python function into a callable TensorFlow graph. Which is a fancy way of saying, if you've written your own Python function, and you decorate it with `@tf.function`, when you export your code (to potentially run on another device), TensorFlow will attempt to convert it into a fast(er) version of itself (by making it part of a computation graph).
For more on this, read the [Better performance with tf.function](https://www.tensorflow.org/guide/function) guide.
```
# Create a simple function
def function(x, y):
    return x ** 2 + y

x = tf.constant(np.arange(0, 10))
y = tf.constant(np.arange(10, 20))
function(x, y)

# Create the same function and decorate it with tf.function
@tf.function
def tf_function(x, y):
    return x ** 2 + y

tf_function(x, y)
```
If you noticed no difference between the above two functions (the decorated one and the non-decorated one) you'd be right.
Much of the difference happens behind the scenes. One of the main ones being potential code speed-ups where possible.
## Finding access to GPUs
We've mentioned GPUs plenty of times throughout this notebook.
So how do you check if you've got one available?
You can check if you've got access to a GPU using [`tf.config.list_physical_devices()`](https://www.tensorflow.org/guide/gpu).
```
print(tf.config.list_physical_devices('GPU'))
```
If the above outputs an empty array (or nothing), it means you don't have access to a GPU (or at least TensorFlow can't find it).
If you're running in Google Colab, you can access a GPU by going to *Runtime -> Change Runtime Type -> Select GPU* (**note:** after doing this your notebook will restart and any variables you've saved will be lost).
Once you've changed your runtime type, run the cell below.
```
import tensorflow as tf
print(tf.config.list_physical_devices('GPU'))
```
If you've got access to a GPU, the cell above should output something like:
`[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]`
You can also find information about your GPU using `!nvidia-smi`.
```
!nvidia-smi
```
> 🔑 **Note:** If you have access to a GPU, TensorFlow will automatically use it whenever possible.
## 🛠 Exercises
1. Create a vector, scalar, matrix and tensor with values of your choosing using `tf.constant()`.
2. Find the shape, rank and size of the tensors you created in 1.
3. Create two tensors containing random values between 0 and 1 with shape `[5, 300]`.
4. Multiply the two tensors you created in 3 using matrix multiplication.
5. Multiply the two tensors you created in 3 using dot product.
6. Create a tensor with random values between 0 and 1 with shape `[224, 224, 3]`.
7. Find the min and max values of the tensor you created in 6.
8. Created a tensor with random values of shape `[1, 224, 224, 3]` then squeeze it to change the shape to `[224, 224, 3]`.
9. Create a tensor with shape `[10]` using your own choice of values, then find the index which has the maximum value.
10. One-hot encode the tensor you created in 9.
## 📖 Extra-curriculum
* Read through the [list of TensorFlow Python APIs](https://www.tensorflow.org/api_docs/python/), pick one we haven't gone through in this notebook, reverse engineer it (write out the documentation code for yourself) and figure out what it does.
* Try to create a series of tensor functions to calculate your most recent grocery bill (it's okay if you don't use the names of the items, just the price in numerical form).
* How would you calculate your grocery bill for the month and for the year using tensors?
* Go through the [TensorFlow 2.x quick start for beginners](https://www.tensorflow.org/tutorials/quickstart/beginner) tutorial (be sure to type out all of the code yourself, even if you don't understand it).
* Are there any functions we used in here that match what's used in there? Which are the same? Which haven't you seen before?
* Watch the video ["What's a tensor?"](https://www.youtube.com/watch?v=f5liqUk0ZTw) - a great visual introduction to many of the concepts we've covered in this notebook.
# Libraries + DATA
```
from visualizations import *
import numpy as np
import pandas as pd
import warnings
from math import tau
import matplotlib.pyplot as plt
from scipy.integrate import quad
warnings.filterwarnings('ignore')
data = np.loadtxt("./../DATA/digits2k_pixels.data.gz", ndmin=2)/255.0
data.shape = (data.shape[0], int(np.sqrt(data.shape[1])), int(np.sqrt(data.shape[1])))
labels = np.loadtxt("./../DATA/digits2k_pixels.labels.gz", dtype='int')
```
# Helpful functions
```
def onlyBlackWhite(array, percentage=0.3):
    result = array.copy()
    quantile = np.quantile(result[result > 0], percentage)
    for i in range(len(result)):
        for j in range(len(result[0])):
            if result[i, j] < quantile:
                result[i, j] = 0
            else:
                result[i, j] = 1
    return result
## By using quantiles, we reduce some noise near the number and away from it
## Empirical tests show that the 0.3 quantile produces nice results
def get_longest_array(arr_list):
    n = len(arr_list)
    max_len = 0
    max_i = 0
    for i in range(n):
        if len(arr_list[i]) > max_len:
            max_len, max_i = len(arr_list[i]), i
    return max_i
def create_close_loop(image_array, level=[200]):
    # Get the contour path and create lookup tables
    contour_paths = plt.contour(image_array, levels=level, colors='black', origin='image').collections[0].get_paths()
    contour_path = contour_paths[get_longest_array(contour_paths)]
    x_table, y_table = contour_path.vertices[:, 0], contour_path.vertices[:, 1]
    time_table = np.linspace(0, tau, len(x_table))
    # Simple method to center the image
    x_table = x_table - min(x_table)
    y_table = y_table - min(y_table)
    x_table = x_table - max(x_table) / 2
    y_table = y_table - max(y_table) / 2
    return time_table, x_table, y_table
```
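As an aside, the double loop in `onlyBlackWhite` can be vectorized. A sketch with the same behaviour (the NumPy import is repeated here so the snippet stands alone):

```python
import numpy as np

def only_black_white_vectorized(array, percentage=0.3):
    # Same idea as onlyBlackWhite above, without the Python double loop:
    # everything below the quantile of the positive pixels becomes 0, the rest 1
    threshold = np.quantile(array[array > 0], percentage)
    return (array >= threshold).astype(array.dtype)
```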
### Some fourier series generating functions (explained in other scripts)
```
def f(t, time_table, x_table, y_table):
    return np.interp(t, time_table, x_table) + 1j*np.interp(t, time_table, y_table)

def coef_list(time_table, x_table, y_table, order=10):
    """
    Compute the c_n coefficients of the Fourier series of the function approximated
    by the points (time_table, x_table + 1j*y_table), up to the given order.
    """
    coef_list = []
    for n in range(-order, order+1):
        real_coef = quad(lambda t: np.real(f(t, time_table, x_table, y_table) * np.exp(-n*1j*t)), 0, tau, limit=100, full_output=1)[0]/tau
        imag_coef = quad(lambda t: np.imag(f(t, time_table, x_table, y_table) * np.exp(-n*1j*t)), 0, tau, limit=100, full_output=1)[0]/tau
        coef_list.append([real_coef, imag_coef])
    return np.array(coef_list)
```
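As a standalone sanity check of the coefficient formula used in `coef_list` above (illustrative only, not touching the notebook's data): for the unit circle f(t) = e^{it}, the coefficient c_1 should be 1 and all others 0.

```python
import numpy as np
from math import tau
from scipy.integrate import quad

def fourier_coef(f, n):
    # c_n = (1/tau) * integral over [0, tau] of f(t) * e^{-i n t} dt
    re = quad(lambda t: np.real(f(t) * np.exp(-n * 1j * t)), 0, tau, limit=100)[0] / tau
    im = quad(lambda t: np.imag(f(t) * np.exp(-n * 1j * t)), 0, tau, limit=100)[0] / tau
    return re + 1j * im

circle = lambda t: np.exp(1j * t)  # the unit circle, traced once
print(abs(fourier_coef(circle, 1)))  # ~ 1.0
print(abs(fourier_coef(circle, 0)))  # ~ 0.0
```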
# Generating
This time we will not use the Fourier coefficients themselves as the result. Instead, we first sample points of the Fourier shape description and then measure their distances from the centroid.
#### Now we also need functions to: interpolate n points from the Fourier series, and find the radii (centroid distances) of these points.
```
def DFT(t, coef_list, order=10):
"""
get points of Fourier series aproximation, where t is a time argument for which we want to get (from range[0, tau])
"""
kernel = np.array([np.exp(-n*1j*t) for n in range(-order, order+1)])
series = np.sum( (coef_list[:,0]+1j*coef_list[:,1]) * kernel[:])
return np.real(series), np.imag(series)
def GenerateShapePoints(coef_list, n=100):
time_space = np.linspace(0, tau, n)
    x_DFT = [DFT(t, coef_list)[0] for t in time_space]
    y_DFT = [DFT(t, coef_list)[1] for t in time_space]
return x_DFT, y_DFT
```
##### Test
```
i = 0  # test on the first digit
copied = onlyBlackWhite(data[i,:,:])
time_table, x_table, y_table = create_close_loop(copied)
coef = coef_list(time_table, x_table, y_table, order=10)
X, Y = GenerateShapePoints(coef, n=30)
plt.plot(X, Y, '-o')
## n = 30 describes the number well enough (we still want to do it in reasonable time)
```
### Now a function generating centroid distances
Maybe this is a good moment to explain why we use this method. According to https://cis.temple.edu/~lakamper/courses/cis9601_2009/etc/fourierShape.pdf, the centroid-distance signature simply gives the best results when comparing shapes using Fourier transformations. It's a really well-written article on the topic; I strongly recommend reading it for some insights.
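As a sketch of why this descriptor compares well (not the notebook's own comparison code): two equal-length centroid-distance signatures can be compared after normalizing by the mean radius, which removes scale, while the centroid itself removes translation. A minimal illustration:

```python
import numpy as np

def signature_distance(r1, r2):
    # Dividing by the mean radius removes scale; the centroid removes translation.
    r1 = np.asarray(r1, dtype=float)
    r2 = np.asarray(r2, dtype=float)
    return np.linalg.norm(r1 / r1.mean() - r2 / r2.mean())

circle = np.ones(30)            # a circle has constant centroid distance
big_circle = 5.0 * np.ones(30)  # the same shape at 5x scale
print(signature_distance(circle, big_circle))  # 0.0, i.e. scale-invariant
```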
```
import math
def measureDistancesFromCentroids(coef_list, N=30):
X, Y = GenerateShapePoints(coef_list, n=N)
x_centroid = np.mean(X)
y_centroid = np.mean(Y)
centr_r = []
for i in range(N):
x_dist_sq = (X[i] - x_centroid)**2
        y_dist_sq = (Y[i] - y_centroid)**2
centr_r.append(math.sqrt(x_dist_sq + y_dist_sq))
return np.array(centr_r)
```
## Let's proceed to actual generating
```
i_gen = np.arange(len(data))
centr_radiuses = []
for i in i_gen:
copied = onlyBlackWhite(data[i,:,:])
time_table, x_table, y_table = create_close_loop(copied)
coef = coef_list(time_table, x_table, y_table, order=10)
centr_radiuses.append(measureDistancesFromCentroids(coef))
if i%100 == 0:
print(i)
np.save(file='centroid_distances', arr=centr_radiuses)
```
GOT IT!
# Predicting Stock/Weather prices using neural networks
## import relevant libraries
```
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt
from keras.models import Sequential
import matplotlib.patches as mpatches
from keras.layers import Dense
from keras.layers import Dropout
from keras.layers import LSTM, Bidirectional
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error
import os
```
## Load data from the file and visualize the data
```
df = pd.read_csv('testset.csv') # Loading the data from file
df.head()
```
# Convert the datetime column from object type to datetime64 and set it as the index
```
df['datetime_utc'] = pd.to_datetime(df['datetime_utc'])
df.set_index('datetime_utc', inplace= True)
```
# Resample the columns per Day/Hour/Min as required
```
df = df.resample('H').mean()  # hourly index; each column holds the mean of that hour's data
```
# Select the columns you want to predict
```
df = df['Open']
```
# Fill the empty slots
```
df = df.ffill().bfill()  # forward-fill gaps, then back-fill any leading NaNs
df.mean()
```
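A toy illustration (hypothetical values) of why the two fills are chained: `ffill` alone leaves any leading NaN in place, which `bfill` then patches.

```python
import numpy as np
import pandas as pd

s = pd.Series([np.nan, 1.0, np.nan, 3.0, np.nan])
# ffill gives [nan, 1, 1, 3, 3]; bfill then fills the leading nan
print(s.ffill().bfill().tolist())  # [1.0, 1.0, 1.0, 3.0, 3.0]
```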
# Plot the data
```
plt.figure(figsize=(20,8))
plt.plot(df)
plt.title('Time Series')
plt.xlabel('Date')
plt.ylabel('Stock')
plt.show()
```
# Convert the data to float and reshape to 2D
```
df=df.values
df = df.astype('float32')
df=df.reshape(df.shape[0],1)
df
```
# Scale the data with MinMaxScaler to the given range
```
scaler= MinMaxScaler(feature_range=(0,1))
sc = scaler.fit_transform(df)
```
# Create Xtrain and Ytrain based on your requirements
```
timestep = 10 #Steps used to train before predicting the next point
X=[]
Y=[]
for i in range(1,len(sc)- (timestep)):
X.append(sc[i:i+timestep])
Y.append(sc[i+timestep])
X=np.asanyarray(X)
Y=np.asanyarray(Y)
length = len(sc)-100
# shape of the input data varies form model to model
Xtrain = X[:length,:,:]
Xtest = X[length:,:,:]
Ytrain = Y[:length]
Ytest = Y[length:]
```
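A toy illustration (hypothetical values, standalone) of the sliding-window split built above: each sample is `timestep` consecutive points and its target is the point right after the window.

```python
import numpy as np

sc_toy = np.arange(20, dtype=float).reshape(-1, 1)  # stand-in for the scaled series
timestep = 5
X_toy, Y_toy = [], []
for i in range(1, len(sc_toy) - timestep):
    X_toy.append(sc_toy[i:i + timestep])  # window of `timestep` consecutive points
    Y_toy.append(sc_toy[i + timestep])    # the point right after the window
X_toy, Y_toy = np.asarray(X_toy), np.asarray(Y_toy)
print(X_toy.shape, Y_toy.shape)  # (14, 5, 1) (14, 1)
```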
# Import the libraries
```
from keras.layers import Dense,RepeatVector
from keras.layers import Flatten
from keras.layers import TimeDistributed
from keras.layers.convolutional import Conv1D
from keras.layers.convolutional import MaxPooling1D
```
# Build the Bidirectional model
The model below is a Keras bidirectional LSTM model.
```
model = Sequential()
model.add(Bidirectional(LSTM(100, activation='sigmoid'),input_shape=(timestep,1)))
model.add(Dense(50, activation='sigmoid'))
model.add(Dense(1))
model.compile(loss='mse', optimizer='adam')
model.fit(Xtrain,Ytrain,epochs=10, verbose=0 )
model.summary()
```
# save model
```
model.save('model.h5')
```
# load model
```
from keras.models import load_model
model = load_model('model.h5')
```
# Predict the results and inverse transform them
```
preds = model.predict(Xtest)
preds = scaler.inverse_transform(preds)
```
# Inverse transform test data to compare with the predicted results
```
Ytest=np.asanyarray(Ytest)
Ytest=Ytest.reshape(-1,1)
Ytest = scaler.inverse_transform(Ytest)
```
# Mean squared error is calculated to measure the accuracy of the prediction
Measuring accuracy varies based on the output and requirements
```
mean_squared_error(Ytest,preds)
```
# Plot the predicted and True results to visualize
```
plt.figure(figsize=(20,9))
plt.plot(Ytest)
plt.plot(preds)
plt.legend(('Test','Predicted'))
plt.show()
```
```
import re
import os
import urllib.request
import requests
from contextlib import closing
from robobrowser import RoboBrowser
class ProgressBar(object):
"""
链接:https://www.zhihu.com/question/41132103/answer/93438156
来源:知乎
"""
def __init__(self, title, count=0.0, run_status=None, fin_status=None, total=100.0, unit='', sep='/', chunk_size=1.0):
super(ProgressBar, self).__init__()
self.info = "【%s】 %s %.2f %s %s %.2f %s"
self.title = title
self.total = total
self.count = count
self.chunk_size = chunk_size
self.status = run_status or ""
        self.fin_status = fin_status or " " * len(self.status)
self.unit = unit
self.seq = sep
def __get_info(self):
"""【razorback】 下载完成 3751.50 KB / 3751.50 KB """
_info = self.info % (self.title, self.status, self.count/self.chunk_size, self.unit, self.seq, self.total/self.chunk_size, self.unit)
return _info
def refresh(self, count=1, status=None):
self.count += count
self.status = status or self.status
end_str = "\r"
if self.count >= self.total:
end_str = '\n'
self.status = status or self.fin_status
print(self.__get_info(), end=end_str)
path = './'
def download_video_by_url(url, path, vid_title):
outfile = os.path.join(path,vid_title+'.mp4')
with closing(requests.get(url, stream=True)) as response:
chunk_size = 1024
content_size = int(response.headers['content-length'])
        progress = ProgressBar(vid_title, total=content_size, unit="KB", chunk_size=chunk_size, run_status="downloading", fin_status="download complete")
assert response.status_code == 200
with open(outfile, "wb") as file:
for data in response.iter_content(chunk_size=chunk_size):
file.write(data)
progress.refresh(count=len(data))
return True
url = 'http://91porn.com/view_video.php?viewkey=4d65b13fa47b2afb51b8'
br = RoboBrowser(history=True,parser='lxml')
br.open(url)
lang = br.get_forms()[0]
lang['session_language'].options = ['cn_CN']
lang['session_language'].value = 'cn_CN'
br.submit_form(lang)
vid_title = br.find('div',{'id':'viewvideo-title'}).text.strip()
print(vid_title)
vid_id = re.findall(r'\d{6}',br.find('a',{'href':'#featureVideo'}).attrs['onclick'])[0]
vid_real_url = 'http://192.240.120.34//mp43/{}.mp4'.format(vid_id)
if download_video_by_url(vid_real_url, path, vid_title):
    print('Download complete! Cherish life, stay away from vice!')
hot_videos = {}
br = RoboBrowser(history=True,parser='lxml')
url = 'http://91porn.com/v.php?category=rf&viewtype=basic&page=1'
br.open(url)
lang = br.get_forms()[0]
lang['session_language'].options = ['cn_CN']
lang['session_language'].value = 'cn_CN'
br.submit_form(lang)
# get every video's information
videos = br.find_all('div',{'class':'listchannel'})
# get their titles and urls
videos_dict = dict([(i.find('a').find('img')['title'],i.find('a')['href']) for i in videos])
hot_videos.update(videos_dict)
for i,j in enumerate(hot_videos.keys()):
print(i,j)
```
# Character-based LSTM
## Grab all Chesterton texts from Gutenberg
```
from nltk.corpus import gutenberg
gutenberg.fileids()
text = ''
for txt in gutenberg.fileids():
if 'chesterton' in txt:
text += gutenberg.raw(txt).lower()
chars = sorted(list(set(text)))
char_indices = dict((c, i) for i, c in enumerate(chars))
indices_char = dict((i, c) for i, c in enumerate(chars))
print('corpus length: {} total chars: {}'.format(len(text), len(chars)))
print(text[:100])
```
## Create the Training set
Build the training dataset: take 40 characters and save the 41st character. We teach the model that a given 40-character sequence should generate the 41st character. Use a step size of 3 so the windows overlap and we get many more 40/41 samples.
```
maxlen = 40
step = 3
sentences = []
next_chars = []
for i in range(0, len(text) - maxlen, step):
sentences.append(text[i: i+maxlen])
next_chars.append(text[i + maxlen])
print("sequences: ", len(sentences))
print(sentences[0])
print(sentences[1])
print(next_chars[0])
```
One-hot encode
```
import numpy as np
X = np.zeros((len(sentences), maxlen, len(chars)), dtype=bool)
y = np.zeros((len(sentences), len(chars)), dtype=bool)
for i, sentence in enumerate(sentences):
for t, char in enumerate(sentence):
X[i, t, char_indices[char]] = 1
y[i, char_indices[next_chars[i]]] = 1
```
## Create the Model
```
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.layers import LSTM
from keras.optimizers import RMSprop
model = Sequential()
model.add(LSTM(128, input_shape=(maxlen, len(chars))))
model.add(Dense(len(chars)))
model.add(Activation('softmax'))
optimizer = RMSprop(lr=0.01)
model.compile(loss='categorical_crossentropy', optimizer=optimizer)
model.summary()
```
## Train the Model
```
epochs = 2
batch_size = 128
model.fit(X, y, batch_size=batch_size, epochs=epochs)
```
## Generate new sequence
```
import random
def sample(preds, temperature=1.0):
preds = np.asarray(preds).astype('float64')
preds = np.log(preds) / temperature
exp_preds = np.exp(preds)
preds = exp_preds / np.sum(exp_preds)
probas = np.random.multinomial(1, preds, 1)
return np.argmax(probas)
import sys
start_index = random.randint(0, len(text) - maxlen - 1)
for diversity in [0.2, 0.5, 1.0]:
print()
print('----- diversity:', diversity)
generated = ''
sentence = text[start_index: start_index + maxlen]
generated += sentence
print('----- Generating with seed: "' + sentence + '"')
sys.stdout.write(generated)
for i in range(400):
x = np.zeros((1, maxlen, len(chars)))
for t, char in enumerate(sentence):
x[0, t, char_indices[char]] = 1.
preds = model.predict(x, verbose=0)[0]
next_index = sample(preds, diversity)
next_char = indices_char[next_index]
generated += next_char
sentence = sentence[1:] + next_char
sys.stdout.write(next_char)
sys.stdout.flush()
print()
```
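The `sample` helper above rescales the softmax distribution by a temperature before drawing. A minimal standalone check of that rescaling (the probabilities below are hypothetical): low temperature sharpens the distribution toward its argmax, high temperature flattens it toward uniform.

```python
import numpy as np

def reweight(preds, temperature=1.0):
    # Same rescaling as in `sample`, without the random draw
    preds = np.asarray(preds).astype('float64')
    preds = np.log(preds) / temperature
    exp_preds = np.exp(preds)
    return exp_preds / np.sum(exp_preds)

p = np.array([0.5, 0.3, 0.2])
cold = reweight(p, temperature=0.1)  # sharpens toward the argmax
hot = reweight(p, temperature=5.0)   # flattens toward uniform
print(cold.round(3), hot.round(3))
```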
**Run the following two cells before you begin.**
```
%autosave 10
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
```
______________________________________________________________________
**First, import your data set and define the sigmoid function.**
<details>
<summary>Hint:</summary>
The definition of the sigmoid is $f(x) = \frac{1}{1 + e^{-x}}$.
</details>
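A quick standalone check of the hint's formula before the exercise's own definition: the sigmoid of 0 is 0.5, and sigmoid(-x) = 1 - sigmoid(x).

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

print(sigmoid(0.0))                  # 0.5
print(sigmoid(2.0) + sigmoid(-2.0))  # ~ 1.0, since sigmoid(-x) = 1 - sigmoid(x)
```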
```
# Import the data set
data = pd.read_csv("cleaned_data.csv")
# Define the sigmoid function
def sigmoid(X):
Y = 1 / (1 + np.exp(-X))
return Y
```
**Now, create a train/test split (80/20) with `PAY_1` and `LIMIT_BAL` as features and `default payment next month` as values. Use a random state of 24.**
```
# Create a train/test split
X_train, X_test, y_train, y_test = train_test_split(data[['PAY_1', 'LIMIT_BAL']].values, data['default payment next month'].values,
test_size=0.2, random_state=24)
```
______________________________________________________________________
**Next, import LogisticRegression, with the default options, but set the solver to `'liblinear'`.**
```
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression(solver='liblinear')
```
______________________________________________________________________
**Now, train on the training data and obtain predicted classes, as well as class probabilities, using the testing data.**
```
# Fit the logistic regression model on training data
lr.fit(X_train,y_train)
# Make predictions using `.predict()`
model = lr.predict(X_test)
# Find class probabilities using `.predict_proba()`
model_proba = lr.predict_proba(X_test)
```
______________________________________________________________________
**Then, pull out the coefficients and intercept from the trained model and manually calculate predicted probabilities. You'll need to add a column of 1s to your features, to multiply by the intercept.**
```
# Add column of 1s to features
features = np.hstack([np.ones((X_test.shape[0],1)), X_test])
# Get coefficients and intercepts from trained model
coef_inter = np.concatenate([lr.intercept_.reshape(1,1), lr.coef_], axis=1)
coef_inter
# Manually calculate predicted probabilities
X_lin = np.dot(coef_inter, np.transpose(features))
model_proba_manual = sigmoid(X_lin)
```
______________________________________________________________________
**Next, using a threshold of `0.5`, manually calculate predicted classes. Compare this to the class predictions output by scikit-learn.**
```
# Manually calculate predicted classes
model_manual = model_proba_manual >= 0.5
# Compare to scikit-learn's predicted classes
np.array_equal(model.reshape(1,-1), model_manual)
```
______________________________________________________________________
**Finally, calculate ROC AUC using both scikit-learn's predicted probabilities, and your manually predicted probabilities, and compare.**
```
# Use scikit-learn's predicted probabilities to calculate ROC AUC
roc_auc_score(y_test, model_proba[:,1])
# Use manually calculated predicted probabilities to calculate ROC AUC
roc_auc_score(y_test, model_proba_manual.reshape(model_proba_manual.shape[1],))
```
<a href="https://colab.research.google.com/github/ahammedshaneebnk/ML_Support_Vector_Machines_Exercises/blob/main/soft_margin_svm_1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
#**Question:**

#**Answer:1(a)**
##**Data Analysis**
###***Read Training Set***
```
# import libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn import svm
# read training set 1 and convert into pandas dataframe
df = pd.read_csv('train_1.txt', delim_whitespace=True, header=None)
# display the data
print(df)
```
###***Basic Details***
```
# rows and columns
print(df.shape)
```
* Number of **Rows = 1500**
* Number of **Columns = 3**
* Number of **Features = 2**
```
# basic statistical details
print(df.describe())
```
* Both features have almost the same range of minimum and maximum values.
###***Check for Null Values***
```
print(df.info())
```
* **No null value** is present in the dataset
###***Features Distribution***
```
plt.figure(figsize=(14,4))
# plot the histogram of 1st feature data
plt.subplot(121)
sns.histplot(data=df, x=0, kde=True)
plt.xlabel('X1')
# plot the histogram of 2nd feature data
plt.subplot(122)
sns.histplot(data=df, x=1, kde=True)
plt.xlabel('X2')
```
* Both feature values are almost normally distributed. Since they also span almost the same range, we do not need to scale these features.
##**Data Visualization**
```
# scatter plot
# output +1 => green and '+'
# output -1 => red and '-'
plt.figure(figsize=(9,7))
df1 = df.loc[df[2]==1]
df2 = df.loc[df[2]==-1]
plt.scatter(df1[0], df1[1], color='green', marker='+', s=60)
plt.scatter(df2[0], df2[1], color='red', marker='_', s=60)
plt.legend(['+1 data','-1 data'])
plt.xlabel('X1')
plt.ylabel('X2')
```
##**Test Data**
###***Read Test Data***
```
# read test dataset 1 and convert into pandas dataframe
test_df = pd.read_csv('test_1.txt', delim_whitespace=True, header=None)
# size of the dataset
print(test_df.shape)
# display the data
print(test_df)
```
* There are **500** instances in the test data
##**SVM Implementation**
###**Function to Plot**
```
# this function will provide the scatter plots
def plot_fun(model, df, color1, color2, flag):
# separating +1 and -1 data
df1 = df.loc[df[2]==1]
df2 = df.loc[df[2]==-1]
plt.scatter(df1[0], df1[1], color=color1, marker='+', s=60)
plt.scatter(df2[0], df2[1], color=color2, marker='_', s=60)
plt.legend(['+1 data','-1 data'])
plt.xlabel('X1')
plt.ylabel('X2')
# plot the decision function
ax = plt.gca()
xlim = ax.get_xlim()
ylim = ax.get_ylim()
# create grid to evaluate model
xx = np.linspace(xlim[0], xlim[1], 30)
yy = np.linspace(ylim[0], ylim[1], 30)
XX, YY = np.meshgrid(xx, yy)
xy = np.vstack([XX.ravel(), YY.ravel()]).T
Z = model.decision_function(xy).reshape(XX.shape)
# training set
if flag==1:
# plot decision boundary and margins
ax.contour(XX, YY, Z, colors='k', levels=[-1, 0, 1], alpha=0.5,
linestyles=['--', '-', '--'])
# plot support vectors
ax.scatter(model.support_vectors_[:, 0], model.support_vectors_[:, 1], s=100,
linewidth=1, facecolors='none', edgecolors='k')
# test set
elif flag==0:
ax.contour(XX, YY, Z, colors='k', levels=0, alpha=0.5,
linestyles='-')
```
###**Function to Find Error**
```
# This function will provide the error
def err_fun(model, df):
# prediction with the learned model
predicted_labels = model.predict(df.iloc[:,:-1])
error_count = 0
# comparison with actual label
for i in range(df.shape[0]):
if predicted_labels[i] != df.iloc[i,-1]:
error_count = error_count + 1
# returns the error percentage
return (error_count * 100 / df.shape[0])
```
###**Function to Train SVM**
```
# This function will train the SVM and do all other needed operations
def svm_fun(df, test_df, c):
# training
model = svm.SVC(kernel='linear', C = c)
model.fit(df.iloc[:,:-1], df.iloc[:,-1])
plt.figure(figsize=(15,6))
# plot with training data
plt.subplot(121)
plt.title('Training Data, C = %s'%(c))
plot_fun(model, df, 'green', 'red', 1)
# plot with test data
plt.subplot(122)
plt.title('Test Data, C = %s'%(c))
plot_fun(model, test_df, 'blue', 'magenta', 0)
# support vector details
print(f"{30*'==='}\n")
print(f"Softmargin SVM with C = {c}\n")
print(f"There are {len(model.support_vectors_)} support vectors in total.")
print(f"\nThey are as follows:\n")
for i in range(len(model.support_vectors_)):
print(f"{i+1}. {model.support_vectors_[i]}\tLamda = \
{model.dual_coef_[0][i]/(df.iloc[model.support_[i],-1])}")
# error calculation
print(f"\nTraining Error = {err_fun(model, df)} %")
print(f"Testing Error = {err_fun(model, test_df)} %\n")
```
###**SVM with C = 1000**
```
svm_fun(df, test_df, 1000)
```
###**SVM with C = 100**
```
svm_fun(df, test_df, 100)
```
###**SVM with C = 1**
```
svm_fun(df, test_df, 1)
```
###**SVM with C = 0.01**
```
svm_fun(df, test_df, 0.01)
```
###**SVM with C = 0.001**
```
svm_fun(df, test_df, 0.001)
```
##**Conclusion**
* The given dataset has been analyzed and the values of the features were found to be **almost normally distributed**.
* **No null values** were present in the training data and there were **1500 instances and 2 features**. The test data has **500** instances.
* The data set was corresponding to **binary classification** with labels -1 and +1.
* The training data has been visualized with a **scatter plot** and found to be well suited to a linear SVM.
* The soft-margin SVM was implemented with a linear kernel.
* Different values of the **hyperparameter C** have been experimented with, and the results were noted.
* In this particular experiment, no difference was observed for C = 1000, 100, and 1. In these cases, there were three support vectors.
* However, **when C was decreased** to 0.01 and further to 0.001, **more support vectors** were found (4 and 24 respectively). This happened because the SVM objective concentrated on increasing the margin and gave less priority to misclassifications or margin violations.
* The **dual coefficient $\lambda$ values** were also studied by displaying them; they were found to be greater than 0 for all support vectors, and equal to C for those that do not lie on the margin boundaries.
* In all experimented cases, both the training error and the test error were found to be **zero**.
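A small synthetic sketch (not the assignment's dataset) of the observation above that shrinking C widens the margin and pulls in more support vectors:

```python
import numpy as np
from sklearn.svm import SVC

# Two well-separated Gaussian blobs, one per class
rng = np.random.RandomState(0)
X = np.vstack([rng.randn(50, 2) + [5, 5], rng.randn(50, 2) - [5, 5]])
y = np.array([1] * 50 + [-1] * 50)

n_sv = {}
for c in (1000, 0.001):
    n_sv[c] = len(SVC(kernel='linear', C=c).fit(X, y).support_)
print(n_sv)  # the small-C model uses many more support vectors
```

With a large C only the points touching the margin become support vectors; with a tiny C the widened margin swallows almost every point.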
##**Submitted By:**
####Ahammed Shaneeb N K
####M1, AI: Roll No - 2
# Team BackProp
During exploration of the neural architecture, we used copies of this notebook to be able to easily process data whilst keeping our models intact.
1. Import KMNIST Data
2. Preprocess and augment data
3. Develop neural network model
4. Cross validate model
- At this stage we decide whether to keep the model for full training or remodify the network again to improve it.
5. Hyperparameter Tuning
6. Train on full dataset
7. Save model and submit
## Pipeline Setup
### Imports
```
!pip install pycm livelossplot
%pylab inline
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import TensorDataset, DataLoader
from torch.utils.data import Dataset
import torchvision.transforms as transforms
from torchvision.transforms import Compose, ToTensor, Normalize, RandomRotation,\
ToPILImage, RandomResizedCrop, RandomAffine
from livelossplot import PlotLosses
import csv
import pickle
def set_seed(seed):
""" Use this to set ALL the random seeds to a fixed value and take out any
randomness from cuda kernels
"""
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
    # disable the cudnn auto-tuner and cudnn kernels entirely for reproducibility
    torch.backends.cudnn.benchmark = False
    torch.backends.cudnn.enabled = False
return True
device = 'cpu'
if torch.cuda.device_count() > 0 and torch.cuda.is_available():
print("Cuda installed! Running on GPU!")
device = 'cuda'
else:
print("No GPU available!")
from google.colab import drive
drive.mount('/content/gdrive/')
```
### KMNIST Data
```
# Load in the datasets
X = np.load(F"/content/gdrive/My Drive/Colab Notebooks/Mini-Project/acse-module-8-19/kmnist-train-imgs.npy") /255
y = np.load(F"/content/gdrive/My Drive/Colab Notebooks/Mini-Project/acse-module-8-19/kmnist-train-labels.npy")
Xtest = np.load(F"/content/gdrive/My Drive/Colab Notebooks/Mini-Project/acse-module-8-19/kmnist-test-imgs.npy") /255
# Load in the classmap as a dictionary
classmap = {}
with open('/content/gdrive/My Drive/Colab Notebooks/Mini-Project/acse-module-8-19/kmnist_classmap.csv', 'r') as csvfile:
spamreader = csv.reader(csvfile, delimiter=',')
next(spamreader)
for row in spamreader:
classmap[row[0]] = row[2]
# Check if we imported correctly
plt.imshow(X[0]);
```
## Image Preprocessing and Augmentation
```
class CustomImageTensorDataset(Dataset):
def __init__(self, data, targets, transform=None, mean=False, std=False):
"""
Args:
data (Tensor): A tensor containing the data e.g. images
targets (Tensor): A tensor containing all the labels
transform (callable, optional): Optional transform to be applied
on a sample.
"""
self.data = data.reshape(-1,1,28,28)
self.targets = targets
self.transform = transform
# Find mean and standard dev
self.mean = mean
self.std = std
self.Rotation = Compose([
ToPILImage(),
RandomRotation(10),
ToTensor(), Normalize(mean=[self.mean], std=[self.std])
])
self.RotandCrop = Compose([
ToPILImage(),
RandomResizedCrop(size=(28,28), scale=(0.8,1)),
ToTensor(), Normalize(mean=[self.mean], std=[self.std])
])
self.Affine = Compose([ToPILImage(),
RandomAffine(10, shear=10),
ToTensor(), Normalize(mean=[self.mean], std=[self.std])
])
self.Norm = Compose([Normalize(mean=[self.mean], std=[self.std])
])
def __len__(self):
return len(self.data)
def __getitem__(self, idx):
sample, label = self.data[idx], self.targets[idx]
        assert (self.mean is not False), "Assign a mean"
        assert (self.std is not False), "Assign a std"
        if self.transform:
            x = random.random()
            if 0 <= x < 0.2:      # rotate
                sample = self.Rotation(sample)
            elif 0.2 <= x < 0.4:  # resize crop
                sample = self.RotandCrop(sample)
            elif 0.4 <= x < 0.7:  # shear / affine
                sample = self.Affine(sample)
            else:                 # normalize only
                sample = self.Norm(sample)
        else:
            sample = self.Norm(sample)
return sample, label
# Verify if image augmentation works:
X_train, y_train = X.astype(float), y
X_train, y_train = torch.from_numpy(X_train).float(), torch.from_numpy(y_train)
mean1, std1 = torch.mean(X_train), torch.std(X_train)
dset = CustomImageTensorDataset(X_train, y_train, transform=True, mean=mean1, std=std1 )
# Make a dataloader to access the PIL images of a batch size of 25
loader = DataLoader(dset, batch_size=25, shuffle=True)
# Create an iter object to cycle through dataloader
train_iter = iter(loader)
imgs, labels = next(train_iter)
print(imgs.shape)
print('max:',imgs.max())
# plot our batch of images with labels
fig, axarr = plt.subplots(5,5,figsize=(8,8))
fig.tight_layout()
for img, label, axs in zip(imgs, labels, axarr.flatten()):
    axs.set_title(str(label.numpy()) + " " + classmap[str(label.numpy())])
axs.imshow(img.numpy()[0])
```
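A standalone simulation (hypothetical, mirroring the branch thresholds in `__getitem__` above) of how a single uniform draw selects among the augmentations with probabilities 0.2 / 0.2 / 0.3 / 0.3:

```python
import random
from collections import Counter

def pick(x):
    # Same thresholds as the dataset's __getitem__ branches
    if 0 <= x < 0.2:
        return 'rotate'
    elif 0.2 <= x < 0.4:
        return 'crop'
    elif 0.4 <= x < 0.7:
        return 'affine'
    return 'norm-only'

random.seed(0)
counts = Counter(pick(random.random()) for _ in range(10_000))
print(counts)  # roughly 2000 / 2000 / 3000 / 3000
```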
## Model Development
### Architecture Analysis Models
```
class AlexNet_Exp1(nn.Module):
"""Based on the AlexNet paper with the same number of layers and parameters
are rescaled down by 8x to fit with the original alexnet image size to
our kmnist size ratio (227:28)
"""
def __init__(self):
super(AlexNet_Exp1, self).__init__()
self.conv_1 = nn.Conv2d(1, 6, kernel_size=11, stride=1, padding=3, bias=True)
self.pool_2 = nn.MaxPool2d(kernel_size=2)
self.conv_3 = nn.Conv2d(6, 16, kernel_size=5, stride=1, padding=2, bias=True)
self.pool_4 = nn.MaxPool2d(kernel_size=2)
self.conv_5 = nn.Conv2d(16, 24, kernel_size=5, stride=1, padding=2, bias=True)
self.conv_6 = nn.Conv2d(24, 24, kernel_size=3, stride=1, padding=2, bias=True)
self.conv_7 = nn.Conv2d(24, 16, kernel_size=3, stride=1, padding=2, bias=True)
self.pool_8 = nn.MaxPool2d(kernel_size=2)
self.linear_9 = nn.Linear(400, 256, bias=True)
self.output = nn.Linear(256, 10, bias=True)
#self.dout = nn.Dropout(p=0.25) #dropout added to prevent overfitting :0
self.act = nn.ReLU()
def forward(self, x):
x = self.act(self.conv_1(x))
x = self.pool_2(x)
x = self.act(self.conv_3(x))
x = self.pool_4(x)
x = self.act(self.conv_5(x))
x = self.act(self.conv_6(x))
x = self.act(self.conv_7(x))
x = self.pool_8(x)
x = x.view(-1, x.size(1) * x.size(2) * x.size(3))
x = self.act(self.linear_9(x))
x = self.output(x) # Don't activate this output layer, we apply a softmax transformation in our training functions!
return x
class AlexNet_Exp3(nn.Module):
"""Based on the AlexNet paper with the same number of layers and parameters
are rescaled down by 4x to better fit to the labels compared to 8x scaling
"""
def __init__(self):
super(AlexNet_Exp3, self).__init__()
self.conv_1 = nn.Conv2d(1, 12, kernel_size=11, stride=1, padding=3, bias=True)
self.pool_2 = nn.MaxPool2d(kernel_size=2)
self.conv_3 = nn.Conv2d(12, 32, kernel_size=5, stride=1, padding=2, bias=True)
self.pool_4 = nn.MaxPool2d(kernel_size=2)
self.conv_5 = nn.Conv2d(32, 48, kernel_size=5, stride=1, padding=2, bias=True)
self.conv_6 = nn.Conv2d(48, 48, kernel_size=3, stride=1, padding=2, bias=True)
self.conv_7 = nn.Conv2d(48, 32, kernel_size=3, stride=1, padding=2, bias=True)
self.pool_8 = nn.MaxPool2d(kernel_size=2)
self.linear_9 = nn.Linear(800, 512, bias=True)
self.output = nn.Linear(512, 10, bias=True)
#self.dout = nn.Dropout(p=0.25) #dropout added to prevent overfitting :0
self.act = nn.ReLU()
def forward(self, x):
x = self.act(self.conv_1(x))
x = self.pool_2(x)
x = self.act(self.conv_3(x))
x = self.pool_4(x)
x = self.act(self.conv_5(x))
x = self.act(self.conv_6(x))
x = self.act(self.conv_7(x))
x = self.pool_8(x)
x = x.view(-1, x.size(1) * x.size(2) * x.size(3))
x = self.act(self.linear_9(x))
x = self.output(x) # Don't activate this output layer, we apply a softmax transformation in our training functions!
return x
class AlexNet_Exp4(nn.Module):
"""Based on the AlexNet paper with the same number of channels and layers
are rescaled down by 8x to fit with the original alexnet image size to
our kmnist size ratio (227:28)
We have now provided a "reasonable" guess of the filters and paddings
"""
def __init__(self):
super(AlexNet_Exp4, self).__init__()
self.conv_1 = nn.Conv2d(1, 6, kernel_size=11, stride=1, padding=3, bias=True)
self.pool_2 = nn.MaxPool2d(kernel_size=2)
self.conv_3 = nn.Conv2d(6, 16, kernel_size=5, stride=1, padding=2, bias=True)
self.pool_4 = nn.MaxPool2d(kernel_size=2)
self.conv_5 = nn.Conv2d(16, 24, kernel_size=5, stride=1, padding=2, bias=True)
self.conv_6 = nn.Conv2d(24, 24, kernel_size=3, stride=1, padding=1, bias=True)
self.conv_7 = nn.Conv2d(24, 16, kernel_size=3, stride=1, padding=1, bias=True)
self.pool_8 = nn.MaxPool2d(kernel_size=2)
self.linear_9 = nn.Linear(144, 128, bias=True)
self.output = nn.Linear(128, 10, bias=True)
#self.dout = nn.Dropout(p=0.25) #dropout added to prevent overfitting :0
self.act = nn.ReLU()
def forward(self, x):
x = self.act(self.conv_1(x))
x = self.pool_2(x)
x = self.act(self.conv_3(x))
x = self.pool_4(x)
x = self.act(self.conv_5(x))
x = self.act(self.conv_6(x))
x = self.act(self.conv_7(x))
x = self.pool_8(x)
x = x.view(-1, x.size(1) * x.size(2) * x.size(3))
x = self.act(self.linear_9(x))
x = self.output(x) # Don't activate this output layer, we apply a softmax transformation in our training functions!
return x
class AlexNet_Exp5(nn.Module):
"""Based on the AlexNet paper with the same number of layers and parameters
are rescaled down by 8x but with an addional convolutional layer
"""
def __init__(self):
super(AlexNet_Exp5, self).__init__()
self.conv_1 = nn.Conv2d(1, 6, kernel_size=13, stride=1, padding=6, bias=True)
self.pool_2 = nn.MaxPool2d(kernel_size=2)
self.conv_3 = nn.Conv2d(6, 16, kernel_size=7, stride=1, padding=3, bias=True)
self.pool_4 = nn.MaxPool2d(kernel_size=2)
self.conv_5 = nn.Conv2d(16, 24, kernel_size=5, stride=1, padding=2, bias=True)
# additional layer
self.conv_6 = nn.Conv2d(24, 24, kernel_size=5, stride=1, padding=2, bias=True)
self.conv_7 = nn.Conv2d(24, 24, kernel_size=4, stride=1, padding=1, bias=True)
self.conv_8 = nn.Conv2d(24, 16, kernel_size=3, stride=1, padding=1, bias=True)
self.pool_9 = nn.MaxPool2d(kernel_size=2)
self.linear_10 = nn.Linear(144, 100, bias=True)
self.output = nn.Linear(100, 10, bias=True)
#self.dout = nn.Dropout(p=0.25) #dropout added to prevent overfitting :0
self.act = nn.ReLU()
def forward(self, x):
x = self.act(self.conv_1(x))
x = self.pool_2(x)
x = self.act(self.conv_3(x))
x = self.pool_4(x)
x = self.act(self.conv_5(x))
x = self.act(self.conv_6(x))
x = self.act(self.conv_7(x))
x = self.act(self.conv_8(x))
x = self.pool_9(x)
x = x.view(-1, x.size(1) * x.size(2) * x.size(3))
x = self.act(self.linear_10(x))
x = self.output(x) # Don't activate this output layer, we apply a softmax transformation in our training functions!
return x
class AlexNet_Exp6(nn.Module):
""" Based on the AlexNet paper with the same number of channels and layers
are rescaled down by 4x to fit with the original alexnet image size to
our kmnist size ratio (227:28)
We have now provided a "reasonable" guess of the filters and paddings
"""
def __init__(self):
super(AlexNet_Exp6, self).__init__()
self.conv_1 = nn.Conv2d(1, 12, kernel_size=11, stride=1, padding=3, bias=True)
self.pool_2 = nn.MaxPool2d(kernel_size=2)
self.conv_3 = nn.Conv2d(12, 32, kernel_size=5, stride=1, padding=2, bias=True)
self.pool_4 = nn.MaxPool2d(kernel_size=2)
self.conv_5 = nn.Conv2d(32, 48, kernel_size=5, stride=1, padding=2, bias=True)
self.conv_6 = nn.Conv2d(48, 48, kernel_size=3, stride=1, padding=1, bias=True)
self.conv_7 = nn.Conv2d(48, 32, kernel_size=3, stride=1, padding=1, bias=True)
self.pool_8 = nn.MaxPool2d(kernel_size=2)
self.linear_9 = nn.Linear(288, 200, bias=True)
self.output = nn.Linear(200, 10, bias=True)
#self.dout = nn.Dropout(p=0.25) #dropout added to prevent overfitting :0
self.act = nn.ReLU()
def forward(self, x):
x = self.act(self.conv_1(x))
x = self.pool_2(x)
x = self.act(self.conv_3(x))
x = self.pool_4(x)
x = self.act(self.conv_5(x))
x = self.act(self.conv_6(x))
x = self.act(self.conv_7(x))
x = self.pool_8(x)
x = x.view(-1, x.size(1) * x.size(2) * x.size(3))
x = self.act(self.linear_9(x))
x = self.output(x) # Don't activate this output layer, we apply a softmax transformation in our training functions!
return x
class AlexNet_Exp7(nn.Module):
"""Based on the AlexNet paper with the same number of channels and layers
are rescaled down by 8x to fit with the original alexnet image size to
our kmnist size ratio (227:28)
+1 classification layer
We provided a "reasonable" guess of the filters
"""
def __init__(self):
super(AlexNet_Exp7, self).__init__()
self.conv_1 = nn.Conv2d(1, 6, kernel_size=11, stride=1, padding=3, bias=True)
self.pool_2 = nn.MaxPool2d(kernel_size=2)
self.conv_3 = nn.Conv2d(6, 16, kernel_size=5, stride=1, padding=2, bias=True)
self.pool_4 = nn.MaxPool2d(kernel_size=2)
self.conv_5 = nn.Conv2d(16, 24, kernel_size=5, stride=1, padding=2, bias=True)
self.conv_6 = nn.Conv2d(24, 24, kernel_size=3, stride=1, padding=1, bias=True)
self.conv_7 = nn.Conv2d(24, 16, kernel_size=3, stride=1, padding=1, bias=True)
self.pool_8 = nn.MaxPool2d(kernel_size=2)
self.linear_9 = nn.Linear(144, 100, bias=True)
self.linear_10 = nn.Linear(100, 70, bias=True)
self.output = nn.Linear(70, 10, bias=True)
#self.dout = nn.Dropout(p=0.25) #dropout added to prevent overfitting :0
self.act = nn.ReLU()
def forward(self, x):
x = self.act(self.conv_1(x))
x = self.pool_2(x)
x = self.act(self.conv_3(x))
x = self.pool_4(x)
x = self.act(self.conv_5(x))
x = self.act(self.conv_6(x))
x = self.act(self.conv_7(x))
x = self.pool_8(x)
x = x.view(-1, x.size(1) * x.size(2) * x.size(3))
x = self.act(self.linear_9(x))
x = self.act(self.linear_10(x))
x = self.output(x) # Don't activate this output layer, we apply a softmax transformation in our training functions!
return x
class AlexNet_Exp8(nn.Module):
"""Based on the AlexNet paper: Modified the each part of the network
+1 Conv layer
+4 Classification layers
+x4 parameters
We have used a "reasonable" guess of the filters
"""
def __init__(self):
super(AlexNet_Exp8, self).__init__()
# Convolutional Layers
self.conv_1 = nn.Conv2d(1, 12, kernel_size=13, stride=1, padding=6, bias=True)
self.pool_2 = nn.MaxPool2d(kernel_size=2)
self.conv_3 = nn.Conv2d(12, 32, kernel_size=7, stride=1, padding=3, bias=True)
self.pool_4 = nn.MaxPool2d(kernel_size=2)
self.conv_5 = nn.Conv2d(32, 48, kernel_size=5, stride=1, padding=2, bias=True)
self.conv_6 = nn.Conv2d(48, 48, kernel_size=5, stride=1, padding=2, bias=True)
self.conv_7 = nn.Conv2d(48, 48, kernel_size=4, stride=1, padding=1, bias=True)
self.conv_8 = nn.Conv2d(48, 32, kernel_size=3, stride=1, padding=1, bias=True)
self.pool_9 = nn.MaxPool2d(kernel_size=2)
# Classification Layers
self.linear_10 = nn.Linear(288, 200, bias=True)
self.linear_11 = nn.Linear(200, 130, bias=True)
self.linear_12 = nn.Linear(130, 90, bias=True)
self.linear_13 = nn.Linear(90, 60, bias=True)
self.linear_14 = nn.Linear(60, 30, bias=True)
self.output = nn.Linear(30, 10, bias=True)
#self.dout = nn.Dropout(p=0.25) #dropout added to prevent overfitting :0
self.act = nn.ReLU()
def forward(self, x):
x = self.act(self.conv_1(x))
x = self.pool_2(x)
x = self.act(self.conv_3(x))
x = self.pool_4(x)
x = self.act(self.conv_5(x))
x = self.act(self.conv_6(x))
x = self.act(self.conv_7(x))
x = self.act(self.conv_8(x))
x = self.pool_9(x)
x = x.view(-1, x.size(1) * x.size(2) * x.size(3))
x = self.act(self.linear_10(x))
x = self.act(self.linear_11(x))
x = self.act(self.linear_12(x))
x = self.act(self.linear_13(x))
x = self.act(self.linear_14(x))
x = self.output(x) # Don't activate this output layer, we apply a softmax transformation in our training functions!
return x
class AlexNet_Exp12(nn.Module):
"""Based on the AlexNet paper: Modified the each part of the network
+1 Conv layer
+5 Classification layers
+x2 parameters - only halved the original params!
We have used a "reasonable" guess of the filters
Added batch norm
Added drop out
"""
def __init__(self):
super(AlexNet_Exp12, self).__init__()
# Convolutional Layers
self.conv_1 = nn.Conv2d(1, 24, kernel_size=13, stride=1, padding=6, bias=True)
self.pool_2 = nn.MaxPool2d(kernel_size=2)
self.conv_3 = nn.Conv2d(24, 64, kernel_size=7, stride=1, padding=3, bias=True)
self.pool_4 = nn.MaxPool2d(kernel_size=2)
self.conv_5 = nn.Conv2d(64, 96, kernel_size=5, stride=1, padding=2, bias=True)
self.conv_6 = nn.Conv2d(96, 96, kernel_size=5, stride=1, padding=2, bias=True)
self.conv_7 = nn.Conv2d(96, 96, kernel_size=4, stride=1, padding=1, bias=True)
self.conv_8 = nn.Conv2d(96, 64, kernel_size=3, stride=1, padding=1, bias=True)
self.pool_9 = nn.MaxPool2d(kernel_size=2)
# Classification Layers
self.linear_10 = nn.Linear(576, 384, bias=True)
self.linear_11 = nn.Linear(384, 192, bias=True)
self.linear_12 = nn.Linear(192, 128, bias=True)
self.linear_13 = nn.Linear(128, 85, bias=True)
self.linear_14 = nn.Linear(85, 42, bias=True)
self.linear_15 = nn.Linear(42, 21, bias=True)
self.output = nn.Linear(21, 10, bias=True)
# Batch Normalization
self.b1 = nn.BatchNorm2d(24)
self.b3 = nn.BatchNorm2d(64)
self.b5 = nn.BatchNorm2d(96)
self.b6 = nn.BatchNorm2d(96)
self.b7 = nn.BatchNorm2d(96)
self.b8 = nn.BatchNorm2d(64)
self.dout = nn.Dropout(p=0.25) #dropout added to prevent overfitting :0
self.act = nn.ReLU()
def forward(self, x):
x = self.act(self.b1(self.conv_1(x)))
x = self.pool_2(x)
x = self.act(self.b3(self.conv_3(x)))
x = self.pool_4(x)
x = self.act(self.b5(self.conv_5(x)))
x = self.act(self.b6(self.conv_6(x)))
x = self.act(self.b7(self.conv_7(x)))
x = self.act(self.b8(self.conv_8(x)))
x = self.pool_9(x)
x = x.view(-1, x.size(1) * x.size(2) * x.size(3))
x = self.act(self.dout(self.linear_10(x)))
x = self.act(self.dout(self.linear_11(x)))
x = self.act(self.dout(self.linear_12(x)))
x = self.act(self.dout(self.linear_13(x)))
x = self.act(self.dout(self.linear_14(x)))
x = self.act(self.dout(self.linear_15(x)))
x = self.output(x) # Don't activate this output layer, we apply a softmax transformation in our training functions!
return x
class AlexNet_Exp24(nn.Module):
"""Based on the AlexNet paper: Modified the each part of the network
+1 Conv layer
+5 Classification layers
+x2 parameters - only halved the original params!
We have used a "reasonable" guess of the filters
Added batch norm
Added drop out
"""
def __init__(self):
super(AlexNet_Exp24, self).__init__()
# Convolutional Layers
self.conv_1 = nn.Conv2d(1, 16, kernel_size=13, stride=1, padding=6, bias=True)
self.pool_2 = nn.MaxPool2d(kernel_size=2)
self.conv_3 = nn.Conv2d(16, 42, kernel_size=7, stride=1, padding=3, bias=True)
self.pool_4 = nn.MaxPool2d(kernel_size=2)
self.conv_5 = nn.Conv2d(42, 64, kernel_size=5, stride=1, padding=2, bias=True)
self.conv_6 = nn.Conv2d(64, 64, kernel_size=5, stride=1, padding=2, bias=True)
self.conv_7 = nn.Conv2d(64, 64, kernel_size=4, stride=1, padding=1, bias=True)
self.conv_8 = nn.Conv2d(64, 42, kernel_size=3, stride=1, padding=1, bias=True)
self.pool_9 = nn.MaxPool2d(kernel_size=2)
# Classification Layers
self.linear_10 = nn.Linear(378, 252, bias=True)
self.linear_11 = nn.Linear(252, 126, bias=True)
self.linear_12 = nn.Linear(126, 84, bias=True)
self.linear_13 = nn.Linear(84, 42, bias=True)
self.linear_14 = nn.Linear(42, 21, bias=True)
self.output = nn.Linear(21, 10, bias=True)
# Batch Normalization
self.b1 = nn.BatchNorm2d(16)
self.b3 = nn.BatchNorm2d(42)
self.b5 = nn.BatchNorm2d(64)
self.b6 = nn.BatchNorm2d(64)
self.b7 = nn.BatchNorm2d(64)
self.b8 = nn.BatchNorm2d(42)
self.dout = nn.Dropout(p=0.5) #dropout added to prevent overfitting :0
self.act = nn.ReLU()
def forward(self, x):
x = self.act(self.b1(self.conv_1(x)))
x = self.pool_2(x)
x = self.act(self.b3(self.conv_3(x)))
x = self.pool_4(x)
x = self.act(self.b5(self.conv_5(x)))
x = self.act(self.b6(self.conv_6(x)))
x = self.act(self.b7(self.conv_7(x)))
x = self.act(self.b8(self.conv_8(x)))
x = self.pool_9(x)
x = x.view(-1, x.size(1) * x.size(2) * x.size(3))
x = self.act(self.dout(self.linear_10(x)))
x = self.act(self.dout(self.linear_11(x)))
x = self.act(self.dout(self.linear_12(x)))
x = self.act(self.dout(self.linear_13(x)))
x = self.act(self.dout(self.linear_14(x)))
x = self.output(x) # Don't activate this output layer, we apply a softmax transformation in our training functions!
return x
```
### Final Model
```
class SimpleAlexNet_FINAL(nn.Module):
def __init__(self):
super(SimpleAlexNet_FINAL, self).__init__()
self.conv_1 = nn.Conv2d(1, 36, kernel_size=3, padding=1)
self.pool_2 = nn.MaxPool2d(kernel_size=2, stride=2)
self.conv_3 = nn.Conv2d(36, 72, kernel_size=3)
self.pool_4 = nn.MaxPool2d(kernel_size=2, stride=2)
self.conv_5 = nn.Conv2d(72, 142, kernel_size=3, padding=1)
self.conv_6 = nn.Conv2d(142, 284, kernel_size=3, padding=1)
self.conv_7 = nn.Conv2d(284, 124, kernel_size=3, padding=1)
self.pool_8 = nn.MaxPool2d(kernel_size=2, stride=2)
self.linear_9 = nn.Linear(1116, 400)
self.linear_10 = nn.Linear(400, 400)
self.linear_11 = nn.Linear(400, 10)
self.dout = nn.Dropout(p=0.7) #dropout added to prevent overfitting :0
self.act = nn.ReLU()
self.b1 = nn.BatchNorm2d(36)
self.b2 = nn.BatchNorm2d(72)
self.b3 = nn.BatchNorm2d(142)
self.b4 = nn.BatchNorm2d(284)
self.b5 = nn.BatchNorm2d(124)
def forward(self, x):
x = self.act(self.b1(self.conv_1(x)))
x = self.pool_2(x)
x = self.act(self.b2(self.conv_3(x)))
x = self.pool_4(x)
x = self.act(self.b3(self.conv_5(x)))
x = self.act(self.b4(self.conv_6(x)))
x = self.act(self.b5(self.conv_7(x)))
# x = self.act(self.conv_7(x)) # Added new layer
x = self.pool_8(x)
x = x.view(-1, x.size(1) * x.size(2) * x.size(3))
x = self.act(self.dout(self.linear_9(x)))
x = self.act(self.dout(self.linear_10(x)))
# x = self.dout(x)
x = self.linear_11(x) # Don't activate this output layer, we apply a softmax transformation in our training functions!
return x
```
## Cross Validation Analysis
We run holdout cross validation as it is sufficient given the amount of data we have.
```
from kmnist_helpers.model_selection import holdoutCV, holdout_loaders
# training parameters:
batch = 64
testbatch = 1000
epochs = 3
model = SimpleAlexNet_FINAL().to(device)
train_loader, val_loader = holdout_loaders(X, y, CustomImageTensorDataset,
batch, testbatch)
lloss, val_loss, val_acc = holdoutCV(epochs, 0.0, 1e-4, model,
train_loader, val_loader)
```
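`holdout_loaders` is a project helper; as a rough, self-contained sketch (pure NumPy, hypothetical dataset size), a holdout split just shuffles the indices once and carves off a validation slice:

```python
import numpy as np

# Minimal sketch of a holdout split (the notebook's holdout_loaders helper
# is assumed to wrap something similar before building the DataLoaders).
rng = np.random.default_rng(42)
n = 100                        # pretend dataset size
idx = rng.permutation(n)       # shuffle indices once
n_val = int(0.2 * n)           # hold out 20% for validation
val_idx, train_idx = idx[:n_val], idx[n_val:]
print(len(train_idx), len(val_idx))  # 80 20
```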
### Save Cross Validation Logs
```
SimpleAlexNet_FINAL_logs = lloss.logs
f = open(F"/content/gdrive/My Drive/Colab Notebooks/Mini-Project/Model/SimpleAlexNet_FINAL_logs.pkl","wb")
pickle.dump(SimpleAlexNet_FINAL_logs,f)
f.close()
```
## Random-Grid Searching for Hyperparameters
We perform a random-grid search to find optimal hyperparameters.
```
from kmnist_helpers.tuning import RandomSearch, GridSearch
train_loader, val_loader = holdout_loaders(X, y, CustomImageTensorDataset,
batch, testbatch)
model = SimpleAlexNet_FINAL().to(device)
max_acc, rand_params = RandomSearch(5, model, 5,
train_loader, val_loader)
best_comb, lloss, loss, acc = GridSearch(5, model, rand_params,
train_loader, val_loader,
pseudo=True)
```
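`RandomSearch` and `GridSearch` are project helpers; a minimal, hypothetical sketch of the random stage is to sample a few candidate settings and keep the best-scoring one (the stand-in `score` function below replaces a real validation run):

```python
import random

# Hypothetical sketch of the random stage of a random-grid search.
random.seed(0)
search_space = {'lr': [1e-2, 1e-3, 1e-4], 'weight_decay': [0.0, 1e-4, 1e-3]}

def sample_params(space):
    # pick one value per hyperparameter at random
    return {k: random.choice(v) for k, v in space.items()}

def score(params):
    # stand-in for a validation-accuracy run of the model
    return -abs(params['lr'] - 1e-3) - params['weight_decay']

candidates = [sample_params(search_space) for _ in range(5)]
best = max(candidates, key=score)  # grid search would then refine around this
print(best)
```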
## Final Full Training
We train the model on the full training set and use the given test dataset for the Kaggle submission.
```
# =============================Load Data======================================
X_train, y_train = torch.from_numpy(X).float(), torch.from_numpy(y)
# Dummy test labels for y_test (just the row indices)
X_test, y_test = torch.from_numpy(Xtest).float(), torch.from_numpy(np.arange(Xtest.shape[0])).float()
mean, std = torch.mean(X_train), torch.std(X_train)
train_ds = CustomImageTensorDataset(X_train, y_train.long(), transform=True, mean=mean, std=std)
test_ds = CustomImageTensorDataset(X_test, y_test.long(), transform=False, mean=mean, std=std)
batchsize = 100
testbatch = 1000
train_loader = DataLoader(train_ds, batch_size=batchsize, shuffle=True, num_workers=4)
test_loader = DataLoader(test_ds, batch_size=testbatch, shuffle=False, num_workers=0)
# =============================Train Model======================================
epochs = 30
model = SimpleAlexNet_FINAL().to(device)
set_seed(42)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999),
eps=1e-08, weight_decay=0.0, amsgrad=False)
criterion = nn.CrossEntropyLoss()
liveloss = PlotLosses()
for epoch in range(epochs):
logs = {}
train_loss, train_accuracy = train(model, optimizer, criterion, train_loader)
logs['' + 'log loss'] = train_loss.item()
logs['' + 'accuracy'] = train_accuracy.item()
logs['val_' + 'log loss'] = 0.
logs['val_' + 'accuracy'] = 0.
liveloss.update(logs)
liveloss.draw()
# ===================Train T-SNE and Logistic Regression========================
idx = np.where((y_train==2) | (y_train==6))
ytrainsim = y[idx]
Xtrainsim = X[idx]
tsne = TSNE(n_components=2, perplexity=3)
xtrain2d = np.reshape(Xtrainsim, (Xtrainsim.shape[0], -1))
xtrain2d = tsne.fit_transform(xtrain2d)
clf = LogisticRegression(random_state=seed, solver='lbfgs',
multi_class='multinomial').fit(xtrain2d, ytrainsim)
# ===========================T-SNE Recorrection=================================
y_predictions, _ = evaluate(model, test_loader)
idx = np.where((y_predictions==2) | (y_predictions==6))
ysim = y_predictions[idx]
Xsim = X_test[idx]
Xsim2d = np.reshape(Xsim, (Xsim.shape[0], -1))
# Note: scikit-learn's TSNE has no transform() method, so the test points are
# embedded with a fresh fit_transform (the embedding is not shared with the
# training fit above).
Xsim2d = tsne.fit_transform(Xsim2d)
y_predictions[idx] = clf.predict(Xsim2d)
# ===========================Predict Model======================================
# y_predictions already holds the T-SNE-recorrected predictions from above;
# re-running evaluate here would overwrite them.
submit = np.vstack((np.array(_), np.array(y_predictions)))
submit = submit.transpose()
```
### Ensemble modelling
```
from kmnist_helpers.ensemble import ensemble_validate, ensemble_score
model_list = [] # to be filled with pre-trained models on the cpu
train_loader_full = DataLoader(train_ds, batch_size=1000, shuffle=False, num_workers=0)
test_loader_full = DataLoader(test_ds, batch_size=1000, shuffle=False, num_workers=0)
ensemble_score = ensemble_validate(model_list, criterion=nn.CrossEntropyLoss(), data_loader=test_loader_full)
print('Score for the predictions of the ensembled models:', ensemble_score)
```
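`ensemble_validate` is a project helper; one common way to ensemble classifiers is soft voting: average the models' class probabilities and take the argmax. A minimal NumPy sketch with stand-in logits (this is an assumption about the approach, not the helper's actual implementation):

```python
import numpy as np

# Soft voting: average each model's class probabilities, then argmax.
def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))  # stabilized softmax
    return e / e.sum(axis=1, keepdims=True)

# Stand-in logits from two "models" for 3 samples and 10 classes
rng = np.random.default_rng(0)
logits_a = rng.normal(size=(3, 10))
logits_b = rng.normal(size=(3, 10))

probs = (softmax(logits_a) + softmax(logits_b)) / 2
preds = probs.argmax(axis=1)
print(preds.shape)  # (3,)
```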
### Save Submissions
```
# Save the model
model_save_name = ".pt"
path = F"/content/gdrive/My Drive/Colab Notebooks/Mini-Project/Model/{model_save_name}"
torch.save(model.state_dict(), path)
# Save the submission
output_save_name = ".txt"
path_out = F"/content/gdrive/My Drive/Colab Notebooks/Mini-Project/{output_save_name}"
np.savetxt(path_out, submit, delimiter=",", fmt='%d', header="Id,Category", comments='')
```
# Author: [Yunting Chiu](https://www.linkedin.com/in/yuntingchiu/)
```
import cv2
import matplotlib.pyplot as plt
import numpy as np
import time
import pandas as pd
#wd
%cd /content/drive/MyDrive/American_University/2021_Fall/DATA-793-001_Data Science Practicum/data
!pwd
```
# Exploratory Data Analysis
## Read the data (`.npz` file)
```
"""
data_zipped = np.load("np_data_all.npz", allow_pickle=True)
for item in data_zipped.files:
print(item)
print(data_zipped[item])
print(data_zipped[item].shape)
data = data_zipped[item]
"""
```
## Read the data (`.npy` file)
```
data = np.load("np_data_one.npy", allow_pickle=True)
```
## Check the length of $X$ and $y$
```
X = []
y = []
for i in data:
X.append(i[0])
y.append(i[1])
print(len(X))
print(len(y))
print("The length should be " + str((6984+7000)))
print(X)
print(y)
print("data dimension:",data.shape)
```
## Visualization
```
fake_cnt = 0
real_cnt = 0
for i in data:
if i[1] == "fake":
fake_cnt += 1
else:
real_cnt += 1
#print(fake_cnt)
#print(real_cnt)
df = [['fake', fake_cnt], ['real', real_cnt]]
df = pd.DataFrame(df, columns=['image_type', 'count'])
#ax = df.plot.bar(x='video_type', y='count', rot=0)
#fig = plt.figure()
plt.bar(df['image_type'], df['count'])
plt.xlabel("Image Type")
plt.ylabel("Count")
plt.savefig('count_type.png')
```
# Machine Learning Task
```
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler # standardize features by removing the mean and scaling to unit variance.
from sklearn.metrics import confusion_matrix
#from sklearn.metrics import plot_confusion_matrix
from sklearn.metrics import accuracy_score
from sklearn.metrics import classification_report
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
```
## Support Vector Machine
```
start_time = time.time()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 42) # 80% for training, 20% for testing
svm_clf = make_pipeline(StandardScaler(), SVC(gamma='scale', C = 1)) # clf = classifer
svm_clf.fit(X_train, y_train)
y_pred = svm_clf.predict(X_test)
print("--- %s seconds ---" % (time.time() - start_time))
print(confusion_matrix(y_test, y_pred))
```
### SVM Confusion Matrix
```
#plot_confusion_matrix(svm_clf, X_test, y_test, values_format = '.0f')
#plt.figure(figsize=(12,8))
#plt.show()
conf_matrix = confusion_matrix(y_true = y_test, y_pred = y_pred)
# Print the confusion matrix using Matplotlib
fig, ax = plt.subplots(figsize=(7.5, 7.5))
ax.matshow(conf_matrix, cmap=plt.cm.Blues, alpha=0.3)
for i in range(conf_matrix.shape[0]):
for j in range(conf_matrix.shape[1]):
ax.text(x=j, y=i,s=conf_matrix[i, j], va='center', ha='center', size='xx-large')
plt.xlabel('Predictions', fontsize=18)
plt.ylabel('Actuals', fontsize=18)
plt.title('Confusion Matrix', fontsize=18)
plt.savefig('Confusion_Matrix.png') # save before show, or the saved figure is blank
plt.show()
```
### ROC curves
- ROC Curves summarize the trade-off between the true positive rate and false positive rate for a predictive model using different probability thresholds.
- Precision-Recall curves summarize the trade-off between the true positive rate and the positive predictive value for a predictive model using different probability thresholds.
- ROC curves are appropriate when the observations are balanced between each class, whereas precision-recall curves are appropriate for imbalanced datasets.
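As a quick illustration of what the ROC AUC summarizes: it equals the probability that a randomly chosen positive example is scored above a randomly chosen negative one. A tiny NumPy sketch on toy labels and scores (the notebook's real curves would use `predict_proba` outputs):

```python
import numpy as np

# Toy binary labels and classifier scores
y_true = np.array([0, 0, 1, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8])

pos = scores[y_true == 1]
neg = scores[y_true == 0]
# AUC as a pairwise rank statistic; ties count half
auc = np.mean((pos[:, None] > neg[None, :]) + 0.5 * (pos[:, None] == neg[None, :]))
print(auc)  # 0.75
```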
```
"""
# generate a no skill prediction (majority class)
ns_probs = [0 for _ in range(len(y_test))]
lr_probs = svm_clf.predict_proba(X_test)
# keep probabilities for the positive outcome only
lr_probs = lr_probs[:, 1]
# calculate scores
ns_auc = roc_auc_score(y_test, ns_probs)
lr_auc = roc_auc_score(y_test, lr_probs)
# summarize scores
print('No Skill: ROC AUC=%.3f' % (ns_auc))
print('Logistic: ROC AUC=%.3f' % (lr_auc))
# calculate roc curves
ns_fpr, ns_tpr, _ = roc_curve(y_test, ns_probs)
lr_fpr, lr_tpr, _ = roc_curve(y_test, lr_probs)
# plot the roc curve for the model
plt.plot(ns_fpr, ns_tpr, linestyle='--', label='No Skill')
plt.plot(lr_fpr, lr_tpr, marker='.', label='Logistic')
# axis labels
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
# show the legend
plt.legend()
# show the plot
plt.show()
plt.savefig('ROC_AUC_Plot.png')
"""
```
### SVM Accuracy Score
```
print("----------Accuracy Score----------------")
print(accuracy_score(y_test, y_pred))
target_names = ['fake', 'real']
print(classification_report(y_test, y_pred, target_names=target_names))
```
## Random Forest Classifier
```
start_time = time.time()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 42) # 80% for training, 20% for testing
#rf_clf = RandomForestClassifier(n_estimators=100, random_state=42, bootstrap=True)
#rf_clf.fit(X_train, y_train)
#y_pred = rf_clf.predict(X_test)
#print("--- %s seconds ---" % (time.time() - start_time))
#print(confusion_matrix(y_test, y_pred))
```
### Random Forest Accuracy Score
```
print(accuracy_score(y_test, y_pred)) # note: with the RF fit commented out above, y_pred still holds the SVM predictions
```
## Logistic Regression
```
start_time = time.time()
lg_clf = LogisticRegression(random_state=42, C=1)
lg_clf.fit(X_train, y_train)
y_pred = lg_clf.predict(X_test)
print("--- %s seconds ---" % (time.time() - start_time))
print(confusion_matrix(y_test, y_pred))
```
### Logistic Regression Accuracy Score
```
print(accuracy_score(y_test, y_pred))
```
# Nested Cross-Validation (Testing Zone)
```
from sklearn.datasets import make_classification
from sklearn.model_selection import KFold
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
# manual nested cross-validation for random forest on a classification dataset
from numpy import mean
from numpy import std
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestClassifier
# create dataset
#X, y = make_classification(n_samples=1000, n_features=20, random_state=1, n_informative=10, n_redundant=10)
#print(X.shape)
#print(y.shape)
# configure the cross-validation procedure
cv_inner = KFold(n_splits=3, shuffle=True, random_state=1)
# define the model
model = RandomForestClassifier(random_state=42)
# define search space
space = dict()
space['n_estimators'] = [10, 100, 500]
#space['max_features'] = [2, 4, 6]
# define search
search = GridSearchCV(model, space, scoring='accuracy', n_jobs=1, cv=cv_inner, refit=True)
# configure the cross-validation procedure
cv_outer = KFold(n_splits=10, shuffle=True, random_state=1)
# execute the nested cross-validation
scores = cross_val_score(search, X_train, y_train, scoring='accuracy', cv=cv_outer, n_jobs=-1)
# report performance
print('Accuracy: %.3f (%.3f)' % (mean(scores), std(scores)))
result = search.fit(X_train, y_train)
# get the best performing model fit on the whole training set
best_model = result.best_estimator_
# evaluate model on the hold out dataset
yhat = best_model.predict(X_test)
space = {}
space['n_estimators'] = list(range(1, 1001))
print(space)
```
# References
- https://learning.oreilly.com/library/view/hands-on-machine-learning/9781492032632/ch05.html#idm45022165153592
- https://github.com/scikit-learn/scikit-learn/issues/16127
- https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html
- https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.plot.bar.html
- https://machinelearningmastery.com/roc-curves-and-precision-recall-curves-for-classification-in-python/
- https://machinelearningmastery.com/nested-cross-validation-for-machine-learning-with-python/
Check coefficients for integration schemes - they should all line up nicely for values in the middle and vary smoothly
```
from bokeh import plotting, io, models, palettes
io.output_notebook()
import numpy
from maxr.integrator import history
nmax = 5
figures = []
palette = palettes.Category10[3]
for n in range(1, nmax):
fig = plotting.figure(height=100, width=600,
active_drag='pan', active_scroll='wheel_zoom')
for order, color in zip((1, 2, 3), palette):
try:
coeffs = history.coefficients(n, order=order)
ticks = range(len(coeffs))
fig.line(ticks, coeffs, alpha=0.9, color=color)
fig.circle(ticks, coeffs, alpha=0.9, color=color)
except ValueError:
# Skip orders if we don't have enough coefficients to calculate these
continue
fig.yaxis.axis_label = 'n={0}'.format(n)
fig.toolbar.logo = None
fig.toolbar_location = None
figures.append(fig)
# Set up scaling
if len(figures) == 1:
figures[0].x_range = models.Range1d(0, nmax - 1)
figures[0].y_range = models.Range1d(0, 2)
else:
figures[-1].x_range = figures[0].x_range
figures[-1].y_range = figures[0].y_range
io.show(models.Column(*figures))
```
Define some timesteps to integrate over
```
tmin, tmax = 0, 30
ts = numpy.linspace(tmin, tmax, 1000)
```
Check we can integrate things!
```
expected = -1.2492166377597749
abs(history.integrator(numpy.sin(ts), ts) - expected) < 1e-5
```
Turn this into a history integrator for a python function
```
def evaluate_history_integral(f, ts, order=1):
""" Evaluate the history integral for a given driving function f
"""
return numpy.array([0] + [
history.integrator(f(ts[:idx+1]), ts[:idx+1], order=order)
for idx in range(1, len(ts))])
results = evaluate_history_integral(numpy.sin, ts)
figure = plotting.figure(height=300)
figure.line(ts, results)
figure.title.text = "∫sin(t)/√(t-𝜏)d𝜏"
io.show(figure)
```
Check accuracy of convergence. We use a sinusoidal forcing and plot the response
$$
\int_0^{t} \frac{\sin{(\tau)}}{\sqrt{t - \tau}}d\tau = \sqrt{2 \pi}\left[C{\left(\sqrt{\frac{2t}{\pi}}\right)}\sin{t} - S{\left(\sqrt{\frac{2t}{\pi}}\right)}\cos{t}\right]
$$
where $C$ is the Fresnel C (cos) integral, and $S$ is the Fresnel $S$ (sin) integral. Note the solution in the paper is **WRONG**
```
from scipy.special import fresnel
def solution(t):
ssc, csc = fresnel(numpy.sqrt(2 * t / numpy.pi))
return numpy.sqrt(2 * numpy.pi) * (
csc * numpy.sin(t) - ssc * numpy.cos(t))
```
Show the solution
```
figure = plotting.figure(height=300)
figure.line(ts, numpy.sin(ts), legend='Source function sin(t)', color=palette[1], alpha=0.7)
figure.line(ts, solution(ts), legend='Analytic ∫sin(t)/√(t-𝜏)d𝜏', color=palette[0], alpha=0.7)
figure.line(ts, evaluate_history_integral(numpy.sin, ts), legend='Numerical ∫sin(t)/√(t-𝜏)d𝜏', color=palette[2], alpha=0.7)
io.show(figure)
```
and try integration numerically
```
nsteps = 30
order = 3
tmin = 0
tmax = 40
# Evaluate solution
ts = numpy.linspace(tmin, tmax, nsteps)
numeric = evaluate_history_integral(numpy.sin, ts, order=order)
exact = solution(ts)
figure = plotting.figure(height=300)
figure.line(ts, exact, legend='Analytic', color=palette[0], alpha=0.7)
figure.line(ts, numeric, legend='Numerical', color=palette[2], alpha=0.7)
io.show(figure)
numpy.mean(numeric - exact)
```
Now we loop through by order and compute the error
```
from collections import defaultdict
# Set up steps
nstepstep = 50
nsteps = numpy.arange(nstepstep, 500, nstepstep)
spacing = tmax / (nsteps - 1)
# Calculate error
error = defaultdict(list)
for order in (1, 2, 3):
for N in nsteps:
ts = numpy.linspace(0, tmax, N)
err = evaluate_history_integral(numpy.sin, ts, order=order) - solution(ts)
error[order].append(abs(err).max())
# Convert to arrays
for key, value in error.items():
error[key] = numpy.asarray(value)
```
We can plot how the error changes with spacing
```
figure = plotting.figure(height=300, x_axis_type='log', y_axis_type='log')
for order, color in zip((1, 2, 3), palette):
figure.line(spacing, error[order], legend='Order = {0}'.format(order),
color=color, alpha=0.9)
figure.xaxis.axis_label = 'Timestep (𝛿t)'
figure.yaxis.axis_label = 'Error (𝜀)'
figure.legend.location = 'bottom_right'
io.show(figure)
```
check that we get reasonable scaling (should be about $\epsilon\sim\delta t ^{\text{order} + 1}$)
```
def slope(rise, run):
return (rise[1:] - rise[0]) / (run[1:] - run[0])
figure = plotting.figure(height=300, x_axis_type='log')
for order, color in zip((1, 2, 3), palette):
figure.line(spacing[1:],
slope(numpy.log(error[order]), numpy.log(spacing)),
legend='Order = {0}'.format(order),
color=color, alpha=0.9)
figure.xaxis.axis_label = 'Timestep (𝛿t)'
figure.yaxis.axis_label = 'Scaling exponent'
figure.legend.location = 'center_right'
io.show(figure)
```
# Operations on Word Vectors
Welcome to your first assignment of Week 2, Course 5 of the Deep Learning Specialization!
Because word embeddings are very computationally expensive to train, most ML practitioners will load a pre-trained set of embeddings. In this notebook you'll try your hand at loading, measuring similarity between, and modifying pre-trained embeddings.
**After this assignment you'll be able to**:
* Explain how word embeddings capture relationships between words
* Load pre-trained word vectors
* Measure similarity between word vectors using cosine similarity
* Use word embeddings to solve word analogy problems such as Man is to Woman as King is to ______.
At the end of this notebook you'll have a chance to try an optional exercise, where you'll modify word embeddings to reduce their gender bias. Reducing bias is an important consideration in ML, so you're encouraged to take this challenge!
## Table of Contents
- [Packages](#0)
- [1 - Load the Word Vectors](#1)
- [2 - Embedding Vectors Versus One-Hot Vectors](#2)
- [3 - Cosine Similarity](#3)
- [Exercise 1 - cosine_similarity](#ex-1)
- [4 - Word Analogy Task](#4)
- [Exercise 2 - complete_analogy](#ex-2)
- [5 - Debiasing Word Vectors (OPTIONAL/UNGRADED)](#5)
- [5.1 - Neutralize Bias for Non-Gender Specific Words](#5-1)
- [Exercise 3 - neutralize](#ex-3)
- [5.2 - Equalization Algorithm for Gender-Specific Words](#5-2)
- [Exercise 4 - equalize](#ex-4)
- [6 - References](#6)
<a name='0'></a>
## Packages
Let's get started! Run the following cell to load the packages you'll need.
```
import numpy as np
from w2v_utils import *
```
<a name='1'></a>
## 1 - Load the Word Vectors
For this assignment, you'll use 50-dimensional GloVe vectors to represent words.
Run the following cell to load the `word_to_vec_map`.
```
words, word_to_vec_map = read_glove_vecs('data/glove.6B.50d.txt')
```
You've loaded:
- `words`: set of words in the vocabulary.
- `word_to_vec_map`: dictionary mapping words to their GloVe vector representation.
<a name='2'></a>
## 2 - Embedding Vectors Versus One-Hot Vectors
Recall from the lesson videos that one-hot vectors don't do a good job of capturing the level of similarity between words. This is because every one-hot vector has the same Euclidean distance from any other one-hot vector.
Embedding vectors, such as GloVe vectors, provide much more useful information about the meaning of individual words.
Now, see how you can use GloVe vectors to measure the similarity between two words!
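A quick NumPy check of the point above: every pair of distinct one-hot vectors is exactly $\sqrt{2}$ apart, so Euclidean distance between one-hot vectors says nothing about how similar two words are.

```python
import numpy as np

# Rows of the identity matrix are one-hot vectors for a 5-word vocabulary
eye = np.eye(5)
d01 = np.linalg.norm(eye[0] - eye[1])  # distance between words 0 and 1
d03 = np.linalg.norm(eye[0] - eye[3])  # distance between words 0 and 3
print(d01, d03)  # both are sqrt(2), regardless of which pair we pick
```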
<a name='3'></a>
## 3 - Cosine Similarity
To measure the similarity between two words, you need a way to measure the degree of similarity between two embedding vectors for the two words. Given two vectors $u$ and $v$, cosine similarity is defined as follows:
$$\text{CosineSimilarity(u, v)} = \frac {u \cdot v} {||u||_2 ||v||_2} = cos(\theta) \tag{1}$$
* $u \cdot v$ is the dot product (or inner product) of two vectors
* $||u||_2$ is the norm (or length) of the vector $u$
* $\theta$ is the angle between $u$ and $v$.
* The cosine similarity depends on the angle between $u$ and $v$.
* If $u$ and $v$ are very similar, their cosine similarity will be close to 1.
* If they are dissimilar, the cosine similarity will take a smaller value.
<img src="images/cosine_sim.png" style="width:800px;height:250px;">
<caption><center><font color='purple'><b>Figure 1</b>: The cosine of the angle between two vectors is a measure of their similarity.</font></center></caption>
<a name='ex-1'></a>
### Exercise 1 - cosine_similarity
Implement the function `cosine_similarity()` to evaluate the similarity between word vectors.
**Reminder**: The norm of $u$ is defined as $ ||u||_2 = \sqrt{\sum_{i=1}^{n} u_i^2}$
#### Additional Hints
* You may find [np.dot](https://numpy.org/doc/stable/reference/generated/numpy.dot.html), [np.sum](https://numpy.org/doc/stable/reference/generated/numpy.sum.html), or [np.sqrt](https://numpy.org/doc/stable/reference/generated/numpy.sqrt.html) useful depending upon the implementation that you choose.
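As a quick sanity check of the reminder above, the explicit sum-of-squares formula matches NumPy's built-in norm (toy vector):

```python
import numpy as np

u = np.array([3.0, 4.0])
norm_explicit = np.sqrt(np.sum(u ** 2))  # the formula above
norm_builtin = np.linalg.norm(u)         # equivalent shortcut
print(norm_explicit, norm_builtin)  # 5.0 5.0
```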
```
# UNQ_C1 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION: cosine_similarity
def cosine_similarity(u, v):
"""
Cosine similarity reflects the degree of similarity between u and v
Arguments:
u -- a word vector of shape (n,)
v -- a word vector of shape (n,)
Returns:
cosine_similarity -- the cosine similarity between u and v defined by the formula above.
"""
# Special case. Consider the case u = [0, 0], v=[0, 0]
if np.all(u == v):
return 1
### START CODE HERE ###
# Compute the dot product between u and v (≈1 line)
dot = np.dot(u.T, v)
# Compute the L2 norm of u (≈1 line)
norm_u = np.sqrt(np.sum(np.power(u, 2)))
# Compute the L2 norm of v (≈1 line)
norm_v = np.sqrt(np.sum(np.power(v, 2)))
# Compute the cosine similarity defined by formula (1) (≈1 line)
cosine_similarity = np.divide(dot, norm_u * norm_v)
### END CODE HERE ###
return cosine_similarity
# START SKIP FOR GRADING
father = word_to_vec_map["father"]
mother = word_to_vec_map["mother"]
ball = word_to_vec_map["ball"]
crocodile = word_to_vec_map["crocodile"]
france = word_to_vec_map["france"]
italy = word_to_vec_map["italy"]
paris = word_to_vec_map["paris"]
rome = word_to_vec_map["rome"]
print("cosine_similarity(father, mother) = ", cosine_similarity(father, mother))
print("cosine_similarity(ball, crocodile) = ",cosine_similarity(ball, crocodile))
print("cosine_similarity(france - paris, rome - italy) = ",cosine_similarity(france - paris, rome - italy))
# END SKIP FOR GRADING
# PUBLIC TESTS
def cosine_similarity_test(target):
a = np.random.uniform(-10, 10, 10)
b = np.random.uniform(-10, 10, 10)
c = np.random.uniform(-1, 1, 23)
assert np.isclose(cosine_similarity(a, a), 1), "cosine_similarity(a, a) must be 1"
assert np.isclose(cosine_similarity((c >= 0) * 1, (c < 0) * 1), 0), "cosine_similarity(a, not(a)) must be 0"
assert np.isclose(cosine_similarity(a, -a), -1), "cosine_similarity(a, -a) must be -1"
assert np.isclose(cosine_similarity(a, b), cosine_similarity(a * 2, b * 4)), "cosine_similarity must be scale-independent. You must divide by the product of the norms of each input"
print("\033[92mAll test passed!")
cosine_similarity_test(cosine_similarity)
```
#### Try different words!
After you get the correct expected output, please feel free to modify the inputs and measure the cosine similarity between other pairs of words! Playing around with the cosine similarity of other inputs will give you a better sense of how word vectors behave.
<a name='4'></a>
## 4 - Word Analogy Task
* In the word analogy task, complete this sentence:
<font color='brown'>"*a* is to *b* as *c* is to **____**"</font>.
* An example is:
<font color='brown'> '*man* is to *woman* as *king* is to *queen*' </font>.
* You're trying to find a word *d*, such that the associated word vectors $e_a, e_b, e_c, e_d$ are related in the following manner:
$e_b - e_a \approx e_d - e_c$
* Measure the similarity between $e_b - e_a$ and $e_d - e_c$ using cosine similarity.
<a name='ex-2'></a>
### Exercise 2 - complete_analogy
Complete the code below to perform word analogies!
```
# UNQ_C2 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION: complete_analogy
def complete_analogy(word_a, word_b, word_c, word_to_vec_map):
"""
Performs the word analogy task as explained above: a is to b as c is to ____.
Arguments:
word_a -- a word, string
word_b -- a word, string
word_c -- a word, string
word_to_vec_map -- dictionary that maps words to their corresponding vectors.
Returns:
best_word -- the word such that v_b - v_a is close to v_best_word - v_c, as measured by cosine similarity
"""
# convert words to lower case
word_a, word_b, word_c = word_a.lower(), word_b.lower(), word_c.lower()
### START CODE HERE ###
# Get the word embeddings v_a, v_b and v_c (≈1-3 lines)
e_a, e_b, e_c = word_to_vec_map[word_a],word_to_vec_map[word_b],word_to_vec_map[word_c] # transform words into vectors
### END CODE HERE ###
words = word_to_vec_map.keys()
max_cosine_sim = -100 # Initialize max_cosine_sim to a large negative number
best_word = None # Initialize best_word with None, it will help keep track of the word to output
# loop over the whole word vector set
for w in words:
# to avoid best_word being one of the input words, pass on them.
if w in [word_a, word_b, word_c] :
continue
### START CODE HERE ###
# Compute cosine similarity between the vector (e_b - e_a) and the vector ((w's vector representation) - e_c) (≈1 line)
cosine_sim = cosine_similarity(e_b-e_a, word_to_vec_map[w] - e_c)
# If the cosine_sim is more than the max_cosine_sim seen so far,
# then: set the new max_cosine_sim to the current cosine_sim and the best_word to the current word (≈3 lines)
if cosine_sim > max_cosine_sim:
max_cosine_sim = cosine_sim
best_word = w
### END CODE HERE ###
return best_word
# PUBLIC TEST
def complete_analogy_test(target):
a = [3, 3] # Center at a
a_nw = [2, 4] # North-West oriented vector from a
a_s = [3, 2] # South oriented vector from a
c = [-2, 1] # Center at c
# Create a controlled word to vec map
word_to_vec_map = {'a': a,
'synonym_of_a': a,
'a_nw': a_nw,
'a_s': a_s,
'c': c,
'c_n': [-2, 2], # N
'c_ne': [-1, 2], # NE
'c_e': [-1, 1], # E
'c_se': [-1, 0], # SE
'c_s': [-2, 0], # S
'c_sw': [-3, 0], # SW
'c_w': [-3, 1], # W
'c_nw': [-3, 2] # NW
}
# Convert lists to np.arrays
for key in word_to_vec_map.keys():
word_to_vec_map[key] = np.array(word_to_vec_map[key])
assert(target('a', 'a_nw', 'c', word_to_vec_map) == 'c_nw')
assert(target('a', 'a_s', 'c', word_to_vec_map) == 'c_s')
assert(target('a', 'synonym_of_a', 'c', word_to_vec_map) != 'c'), "Best word cannot be input query"
assert(target('a', 'c', 'a', word_to_vec_map) == 'c')
print("\033[92mAll tests passed")
complete_analogy_test(complete_analogy)
```
Run the cell below to test your code. Patience, young grasshopper...this may take 1-2 minutes.
```
# START SKIP FOR GRADING
triads_to_try = [('italy', 'italian', 'spain'), ('india', 'delhi', 'japan'), ('man', 'woman', 'boy'), ('small', 'smaller', 'large')]
for triad in triads_to_try:
print ('{} -> {} :: {} -> {}'.format( *triad, complete_analogy(*triad, word_to_vec_map)))
# END SKIP FOR GRADING
```
Once you get the output, try modifying the input cells above to test your own analogies.
**Hint**: Try to find some other analogy pairs that will work, along with some others where the algorithm doesn't give the right answer:
* For example, you can try small->smaller as big->?
## Congratulations!
You've come to the end of the graded portion of the assignment. By now, you've:
* Loaded some pre-trained word vectors
* Measured the similarity between word vectors using cosine similarity
* Used word embeddings to solve word analogy problems such as Man is to Woman as King is to __.
Cosine similarity is a relatively simple and intuitive, yet powerful, method you can use to capture nuanced relationships between words. These exercises should be helpful to you in explaining how it works, and applying it to your own projects!
<font color='blue'>
<b>What you should remember</b>:
- Cosine similarity is a good way to compare the similarity between pairs of word vectors.
- Note that L2 (Euclidean) distance also works.
- For NLP applications, using a pre-trained set of word vectors is often a great way to get started. </font>
Even though you've finished the graded portion, please take a look at the rest of this notebook to learn about debiasing word vectors.
<a name='5'></a>
## 5 - Debiasing Word Vectors (OPTIONAL/UNGRADED)
In the following exercise, you'll examine gender biases that can be reflected in a word embedding, and explore algorithms for reducing the bias. In addition to learning about the topic of debiasing, this exercise will also help hone your intuition about what word vectors are doing. This section involves a bit of linear algebra, though you can certainly complete it without being an expert! Go ahead and give it a shot. This portion of the notebook is optional and is not graded...so just have fun and explore.
First, see how the GloVe word embeddings relate to gender. You'll begin by computing a vector $g = e_{woman}-e_{man}$, where $e_{woman}$ represents the word vector corresponding to the word *woman*, and $e_{man}$ corresponds to the word vector corresponding to the word *man*. The resulting vector $g$ roughly encodes the concept of "gender".
You might get a more accurate representation if you compute $g_1 = e_{mother}-e_{father}$, $g_2 = e_{girl}-e_{boy}$, etc. and average over them, but just using $e_{woman}-e_{man}$ will give good enough results for now.
```
g = word_to_vec_map['woman'] - word_to_vec_map['man']
print(g)
```
Now, consider the cosine similarity of different words with $g$. What does a positive value of similarity mean, versus a negative cosine similarity?
```
print ('List of names and their similarities with constructed vector:')
# girls and boys name
name_list = ['john', 'marie', 'sophie', 'ronaldo', 'priya', 'rahul', 'danielle', 'reza', 'katy', 'yasmin']
for w in name_list:
print (w, cosine_similarity(word_to_vec_map[w], g))
```
As you can see, female first names tend to have a positive cosine similarity with our constructed vector $g$, while male first names tend to have a negative cosine similarity. This is not surprising, and the result seems acceptable.
Now try with some other words:
```
print('Other words and their similarities:')
word_list = ['lipstick', 'guns', 'science', 'arts', 'literature', 'warrior','doctor', 'tree', 'receptionist',
'technology', 'fashion', 'teacher', 'engineer', 'pilot', 'computer', 'singer']
for w in word_list:
print (w, cosine_similarity(word_to_vec_map[w], g))
```
Do you notice anything surprising? It is astonishing how these results reflect certain unhealthy gender stereotypes. For example, we see “computer” is negative and is closer in value to male first names, while “literature” is positive and is closer to female first names. Ouch!
You'll see below how to reduce the bias of these vectors, using an algorithm due to [Bolukbasi et al., 2016](https://arxiv.org/abs/1607.06520). Note that some word pairs such as "actor"/"actress" or "grandmother"/"grandfather" should remain gender-specific, while other words such as "receptionist" or "technology" should be neutralized, i.e. not be gender-related. You'll have to treat these two types of words differently when debiasing.
<a name='5-1'></a>
### 5.1 - Neutralize Bias for Non-Gender Specific Words
The figure below should help you visualize what neutralizing does. If you're using a 50-dimensional word embedding, the 50 dimensional space can be split into two parts: The bias-direction $g$, and the remaining 49 dimensions, which is called $g_{\perp}$ here. In linear algebra, we say that the 49-dimensional $g_{\perp}$ is perpendicular (or "orthogonal") to $g$, meaning it is at 90 degrees to $g$. The neutralization step takes a vector such as $e_{receptionist}$ and zeros out the component in the direction of $g$, giving us $e_{receptionist}^{debiased}$.
Even though $g_{\perp}$ is 49-dimensional, given the limitations of what you can draw on a 2D screen, it's illustrated using a 1-dimensional axis below.
<img src="images/neutral.png" style="width:800px;height:300px;">
<caption><center><font color='purple'><b>Figure 2</b>: The word vector for "receptionist" represented before and after applying the neutralize operation.</font> </center></caption>
<a name='ex-3'></a>
### Exercise 3 - neutralize
Implement `neutralize()` to remove the bias of words such as "receptionist" or "scientist."
Given an input embedding $e$, you can use the following formulas to compute $e^{debiased}$:
$$e^{bias\_component} = \frac{e \cdot g}{||g||_2^2} * g\tag{2}$$
$$e^{debiased} = e - e^{bias\_component}\tag{3}$$
If you are an expert in linear algebra, you may recognize $e^{bias\_component}$ as the projection of $e$ onto the direction $g$. If you're not an expert in linear algebra, don't worry about this. ;)
<!--
**Reminder**: a vector $u$ can be split into two parts: its projection over a vector-axis $v_B$ and its projection over the axis orthogonal to $v$:
$$u = u_B + u_{\perp}$$
where : $u_B = $ and $ u_{\perp} = u - u_B $
!-->
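As a quick numeric check of formulas (2) and (3), the projection can be verified on tiny made-up 2-D vectors (the numbers below are illustrative, not GloVe values):

```python
import numpy as np

# Illustrative 2-D vectors, not real embeddings.
g = np.array([1.0, 0.0])   # hypothetical bias direction
e = np.array([3.0, 4.0])   # hypothetical word vector

e_biascomponent = (np.dot(e, g) / np.sum(g ** 2)) * g  # formula (2): projection of e onto g
e_debiased = e - e_biascomponent                       # formula (3)

print(e_debiased)             # [0. 4.]
print(np.dot(e_debiased, g))  # 0.0 -- no component left along g
```

After subtracting the bias component, what remains is exactly the part of $e$ that lies in $g_{\perp}$.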
```
def neutralize(word, g, word_to_vec_map):
"""
Removes the bias of "word" by projecting it on the space orthogonal to the bias axis.
This function ensures that gender neutral words are zero in the gender subspace.
Arguments:
word -- string indicating the word to debias
g -- numpy-array of shape (50,), corresponding to the bias axis (such as gender)
word_to_vec_map -- dictionary mapping words to their corresponding vectors.
Returns:
e_debiased -- neutralized word vector representation of the input "word"
"""
### START CODE HERE ###
# Select word vector representation of "word". Use word_to_vec_map. (≈ 1 line)
    e = word_to_vec_map[word]
    # Compute e_biascomponent using the formula given above. (≈ 1 line)
    e_biascomponent = (np.dot(e, g) / np.sum(g * g)) * g
    # Neutralize e by subtracting e_biascomponent from it
    # e_debiased should be equal to its orthogonal projection. (≈ 1 line)
    e_debiased = e - e_biascomponent
### END CODE HERE ###
return e_debiased
e = "receptionist"
print("cosine similarity between " + e + " and g, before neutralizing: ", cosine_similarity(word_to_vec_map["receptionist"], g))
e_debiased = neutralize("receptionist", g, word_to_vec_map)
print("cosine similarity between " + e + " and g, after neutralizing: ", cosine_similarity(e_debiased, g))
```
**Expected Output**: The second result is essentially 0, up to numerical rounding (on the order of $10^{-17}$).
<table>
<tr>
<td>
<b>cosine similarity between receptionist and g, before neutralizing:</b> :
</td>
<td>
0.330779417506
</td>
</tr>
<tr>
<td>
<b>cosine similarity between receptionist and g, after neutralizing</b> :
</td>
<td>
-4.442232511624783e-17
</tr>
</table>
<a name='5-2'></a>
### 5.2 - Equalization Algorithm for Gender-Specific Words
Next, let's see how debiasing can also be applied to word pairs such as "actress" and "actor." Equalization is applied to pairs of words that you might want to have differ only through the gender property. As a concrete example, suppose that "actress" is closer to "babysit" than "actor." By applying neutralization to "babysit," you can reduce the gender stereotype associated with babysitting. But this still does not guarantee that "actor" and "actress" are equidistant from "babysit." The equalization algorithm takes care of this.
The key idea behind equalization is to make sure that a particular pair of words are equidistant from the 49-dimensional $g_\perp$. The equalization step also ensures that the two equalized vectors are now the same distance from $e_{receptionist}^{debiased}$, or from any other word that has been neutralized. Visually, this is how equalization works:
<img src="images/equalize10.png" style="width:800px;height:400px;">
The derivation of the linear algebra to do this is a bit more complex. (See Bolukbasi et al., 2016 in the References for details.) Here are the key equations:
$$ \mu = \frac{e_{w1} + e_{w2}}{2}\tag{4}$$
$$ \mu_{B} = \frac {\mu \cdot \text{bias_axis}}{||\text{bias_axis}||_2^2} *\text{bias_axis}
\tag{5}$$
$$\mu_{\perp} = \mu - \mu_{B} \tag{6}$$
$$ e_{w1B} = \frac {e_{w1} \cdot \text{bias_axis}}{||\text{bias_axis}||_2^2} *\text{bias_axis}
\tag{7}$$
$$ e_{w2B} = \frac {e_{w2} \cdot \text{bias_axis}}{||\text{bias_axis}||_2^2} *\text{bias_axis}
\tag{8}$$
$$e_{w1B}^{corrected} = \sqrt{ |{1 - ||\mu_{\perp} ||^2_2} |} * \frac{e_{\text{w1B}} - \mu_B} {||(e_{w1} - \mu_{\perp}) - \mu_B||_2} \tag{9}$$
$$e_{w2B}^{corrected} = \sqrt{ |{1 - ||\mu_{\perp} ||^2_2} |} * \frac{e_{\text{w2B}} - \mu_B} {||(e_{w2} - \mu_{\perp}) - \mu_B||_2} \tag{10}$$
$$e_1 = e_{w1B}^{corrected} + \mu_{\perp} \tag{11}$$
$$e_2 = e_{w2B}^{corrected} + \mu_{\perp} \tag{12}$$
<a name='ex-4'></a>
### Exercise 4 - equalize
Implement the `equalize()` function below.
Use the equations above to get the final equalized version of the pair of words. Good luck!
**Hint**
- Use [np.linalg.norm](https://numpy.org/doc/stable/reference/generated/numpy.linalg.norm.html)
```
def equalize(pair, bias_axis, word_to_vec_map):
"""
Debias gender specific words by following the equalize method described in the figure above.
Arguments:
pair -- pair of strings of gender specific words to debias, e.g. ("actress", "actor")
bias_axis -- numpy-array of shape (50,), vector corresponding to the bias axis, e.g. gender
word_to_vec_map -- dictionary mapping words to their corresponding vectors
Returns
e_1 -- word vector corresponding to the first word
e_2 -- word vector corresponding to the second word
"""
### START CODE HERE ###
# Step 1: Select word vector representation of "word". Use word_to_vec_map. (≈ 2 lines)
    w1, w2 = pair
    e_w1, e_w2 = word_to_vec_map[w1], word_to_vec_map[w2]
    # Step 2: Compute the mean of e_w1 and e_w2 (≈ 1 line)
    mu = (e_w1 + e_w2) / 2.0
    # Step 3: Compute the projections of mu over the bias axis and the orthogonal axis (≈ 2 lines)
    mu_B = (np.dot(mu, bias_axis) / np.sum(bias_axis ** 2)) * bias_axis
    mu_orth = mu - mu_B
    # Step 4: Use equations (7) and (8) to compute e_w1B and e_w2B (≈2 lines)
    e_w1B = (np.dot(e_w1, bias_axis) / np.sum(bias_axis ** 2)) * bias_axis
    e_w2B = (np.dot(e_w2, bias_axis) / np.sum(bias_axis ** 2)) * bias_axis
    # Step 5: Adjust the Bias part of e_w1B and e_w2B using the formulas (9) and (10) given above (≈2 lines)
    corrected_e_w1B = np.sqrt(np.abs(1 - np.sum(mu_orth ** 2))) * (e_w1B - mu_B) / np.linalg.norm(e_w1 - mu_orth - mu_B)
    corrected_e_w2B = np.sqrt(np.abs(1 - np.sum(mu_orth ** 2))) * (e_w2B - mu_B) / np.linalg.norm(e_w2 - mu_orth - mu_B)
    # Step 6: Debias by equalizing e1 and e2 to the sum of their corrected projections (≈2 lines)
    e1 = corrected_e_w1B + mu_orth
    e2 = corrected_e_w2B + mu_orth
### END CODE HERE ###
return e1, e2
print("cosine similarities before equalizing:")
print("cosine_similarity(word_to_vec_map[\"man\"], gender) = ", cosine_similarity(word_to_vec_map["man"], g))
print("cosine_similarity(word_to_vec_map[\"woman\"], gender) = ", cosine_similarity(word_to_vec_map["woman"], g))
print()
e1, e2 = equalize(("man", "woman"), g, word_to_vec_map)
print("cosine similarities after equalizing:")
print("cosine_similarity(e1, gender) = ", cosine_similarity(e1, g))
print("cosine_similarity(e2, gender) = ", cosine_similarity(e2, g))
```
**Expected Output**:
cosine similarities before equalizing:
<table>
<tr>
<td>
<b>cosine_similarity(word_to_vec_map["man"], gender)</b> =
</td>
<td>
-0.117110957653
</td>
</tr>
<tr>
<td>
<b>cosine_similarity(word_to_vec_map["woman"], gender)</b> =
</td>
<td>
0.356666188463
</td>
</tr>
</table>
cosine similarities after equalizing:
<table>
<tr>
<td>
<b>cosine_similarity(e1, gender)</b> =
</td>
<td>
-0.7004364289309388
</td>
</tr>
<tr>
<td>
<b>cosine_similarity(e2, gender)</b> =
</td>
<td>
0.7004364289309387
</td>
</tr>
</table>
Go ahead and play with the input words in the cell above, to apply equalization to other pairs of words.
Hint: Try...
These debiasing algorithms are very helpful for reducing bias, but aren't perfect and don't eliminate all traces of bias. For example, one weakness of this implementation was that the bias direction $g$ was defined using only the pair of words _woman_ and _man_. As discussed earlier, if $g$ were defined by computing $g_1 = e_{woman} - e_{man}$; $g_2 = e_{mother} - e_{father}$; $g_3 = e_{girl} - e_{boy}$; and so on and averaging over them, you would obtain a better estimate of the "gender" dimension in the 50 dimensional word embedding space. Feel free to play with these types of variants as well!
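A minimal sketch of that averaging idea, using a tiny stand-in map instead of the real GloVe vectors (the 2-D numbers are invented for illustration; with the real `word_to_vec_map` the same list comprehension applies):

```python
import numpy as np

# Illustrative stand-in for the 50-dimensional GloVe map loaded earlier.
word_to_vec_map = {
    'woman':  np.array([0.6, 0.2]), 'man':    np.array([0.1, 0.3]),
    'mother': np.array([0.7, 0.1]), 'father': np.array([0.2, 0.2]),
    'girl':   np.array([0.5, 0.4]), 'boy':    np.array([0.0, 0.5]),
}

# Average several "female minus male" difference vectors to estimate g.
pairs = [('woman', 'man'), ('mother', 'father'), ('girl', 'boy')]
g_avg = np.mean([word_to_vec_map[a] - word_to_vec_map[b] for a, b in pairs], axis=0)
print(g_avg)  # [ 0.5 -0.1]
```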
### Congratulations!
You have come to the end of both graded and ungraded portions of this notebook, and have seen several of the ways that word vectors can be applied and modified. Great work pushing your knowledge in the areas of neutralizing and equalizing word vectors! See you next time.
<a name='6'></a>
## 6 - References
- The debiasing algorithm is from Bolukbasi et al., 2016, [Man is to Computer Programmer as Woman is to
Homemaker? Debiasing Word Embeddings](https://papers.nips.cc/paper/6228-man-is-to-computer-programmer-as-woman-is-to-homemaker-debiasing-word-embeddings.pdf)
- The GloVe word embeddings were due to Jeffrey Pennington, Richard Socher, and Christopher D. Manning. (https://nlp.stanford.edu/projects/glove/)
### Notebook for the Udacity Project "Write A Data Science Blog Post"
#### Dataset used: "TripAdvisor Restaurants Info for 31 Euro-Cities"
https://www.kaggle.com/damienbeneschi/krakow-ta-restaurans-data-raw
https://www.kaggle.com/damienbeneschi/krakow-ta-restaurans-data-raw/downloads/krakow-ta-restaurans-data-raw.zip/5
## 1.: Business Understanding according to CRISP-DM
I was in south-western Poland recently, and while searching for a good place to eat on Google Maps I noticed that a lot of restaurants had really good ratings and reviews in the 4+ region, in cities as well as in the countryside. This got me thinking, because in my hometown of Munich there are also many great places, but also a lot that sit in the not-so-good region around 3 stars. In general, ratings seemed to be better there compared to what I know, so I thought maybe people there simply rate more mildly. Then I had my first lunch at one of those 4+ places: not only was the staff friendly and the food nicely presented, it also tasted amazing at a decent price tag. Okay, I was lucky, I thought. On the evening of the same day I tried another place and had the same great experience.
I had even more great meals after that. So is the quality of Polish restaurants on average better than the quality of Bavarian ones? Subjectively… yes, it seemed so. But what does data science say? Are there differences in average ratings and number of ratings between regions? To answer this question, I used the "TripAdvisor Restaurants Info for 31 Euro-Cities" dataset from Kaggle, which contains TripAdvisor reviews and ratings for 111,927 restaurants in 31 European cities.
## Problem Definition / Research Questions:
- RQ 1: Are there differences in average ratings and number of ratings between cities?
- RQ 2: Are there more vegetarian-friendly cities and if so, are they locally concentrated?
- RQ 3: Is local cuisine rated better than foreign cuisine and if so, is there a difference between cities?
```
# Import Statements
import pandas as pd
import numpy as np
# Load in dataset
data_raw = pd.read_csv("TA_restaurants_curated.csv")
```
## 2.: Data Understanding according to CRISP-DM
In the following, we have a look at the raw data of the dataset.
```
# Having a first look at the data
data_raw.head()
data_raw.describe()
# Which cities are included in the dataset?
cities = data_raw.City.unique()
cities
# Manually add the name of the local cuisines into an array (needed for RQ3)
local_cuisine = ['Dutch', 'Greek', 'Spanish', 'German', 'Eastern European', 'Belgian', 'Hungarian', 'Danish', 'Irish', 'Scottish', 'Swiss', 'German', 'Scandinavian', 'Polish', 'Portuguese', 'Slovenian', 'British', 'European', 'French', 'Spanish', 'Italian', 'German', 'Portuguese', 'Norwegian', 'French', 'Czech', 'Italian', 'Swedish', 'Austrian', 'Polish', 'Swiss']
```
As I live in Munich, I want to take a closer look at the data for that city, so I will filter for the Munich data and inspect it first.
```
# Function to return data for a specific city
def getRawData(city):
'''Returns the data for a specific city, which is given to the function via the city argument.'''
data_raw_city = data_raw[(data_raw.City == "Munich")]
return data_raw_city
# Filter for Munich data and have a first look
city = "Munich"
data_raw_city = getRawData(city)
data_raw_city.head(10)
data_raw_city.tail(10)
data_raw_city.describe()
```
### Dealing with missing data:
It can be seen that some restaurants, especially the last ones, don't have any Ranking, Rating, Price Range, or reviews. How to deal with that data? I have chosen to ignore those restaurants where relevant: if, for example, the average rating of a city's restaurants is needed, I only use the restaurants that actually have a rating and ignore the rest.
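This filtering rule can be sketched on a hypothetical mini-frame (the names and values are illustrative, not dataset rows):

```python
import numpy as np
import pandas as pd

# Hypothetical mini-frame: one restaurant has no rating.
df = pd.DataFrame({'Name': ['A', 'B', 'C'],
                   'Rating': [4.5, np.nan, 3.5]})

# Keep only rows that actually have a rating, then aggregate.
rated = df[df['Rating'].notnull()]
print(rated['Rating'].mean())  # 4.0 -- the NaN row is simply excluded
```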
## 3. and 4.: Data Preparation and Modeling according to CRISP-DM
### Calculate the data for RQ 1 - 3
In the following code, the data is first prepared by only selecting relevant and non-NaN data. Afterwards, data is modelled by calculating the relevant statistical numbers.
```
# Loop through entries for each city
# Create empty lists
num_entries = []
num_rated = []
perc_rated = []
avg_num_ratings = []
avg_rating = []
avg_veg_available = []
avg_loc_available = []
avg_loc_rating = []
avg_non_loc_rating = []
diff_loc_rating = []
total_local_rating = []
total_non_local_rating = []
# Initialize city number
n_city = -1
for city in cities:
n_city = n_city + 1
# Compute Data for RQ1
# Select data for one city
data_1city = data_raw[(data_raw.City == city)]
ratings = data_1city.Rating
data_1city_non_NaN = data_1city[data_1city['Rating'].notnull()]
ratings_non_NaN = data_1city_non_NaN.Rating
# Compute Data for RQ2 & RQ3
# Initialize lists for the current city
veg_available = []
loc_available = []
rating_local = []
rating_non_local = []
data_1city_stl_non_Nan = data_1city[data_1city['Cuisine Style'].notnull()]
# Iterate through every restaurant and check if they offer vegetarian/vegan food.
for i in range(len(data_1city_stl_non_Nan)):
veg_true = 0
styles = data_1city_stl_non_Nan.iloc[i, 3]
if 'Vegetarian' in styles:
veg_true = 1
#print('Veg Found')
elif 'Vegan' in styles:
veg_true = 1
veg_available.append(veg_true)
# For RQ3 check if the current restaurant offers local food and add the rating to the respective list.
loc_true = 0
if local_cuisine[n_city] in styles:
loc_true = 1
if ~np.isnan(data_1city_stl_non_Nan.iloc[i, 5]):
rating_local.append(data_1city_stl_non_Nan.iloc[i, 5])
total_local_rating.append(data_1city_stl_non_Nan.iloc[i, 5])
else:
if ~np.isnan(data_1city_stl_non_Nan.iloc[i, 5]):
rating_non_local.append(data_1city_stl_non_Nan.iloc[i, 5])
total_non_local_rating.append(data_1city_stl_non_Nan.iloc[i, 5])
loc_available.append(loc_true)
    # Add to lists / calculate aggregated values
num_entries.append(len(data_1city))
num_rated.append(len(data_1city_non_NaN))
perc_rated.append(len(data_1city_non_NaN) / len(data_1city))
avg_num_ratings.append(np.mean(data_1city_non_NaN['Number of Reviews']))
avg_rating.append(np.mean(data_1city_non_NaN['Rating']))
avg_veg_available.append(np.mean(veg_available))
avg_loc_available.append(np.mean(loc_available))
avg_loc_rating.append(np.mean(rating_local))
avg_non_loc_rating.append(np.mean(rating_non_local))
diff_loc_rating.append(np.mean(rating_local) - np.mean(rating_non_local))
# Create Dataframe
data_RQ1 = pd.DataFrame({'City': cities, 'Local_Cuisine': local_cuisine, 'Num_Entries': num_entries, 'Num_Rated': num_rated, 'Perc_Rated': perc_rated, 'Avg_Num_Ratings': avg_num_ratings, 'Avg_Rating': avg_rating, 'Avg_Veg_Av': avg_veg_available, 'Avg_Loc_Av': avg_loc_available, 'Avg_loc_rating': avg_loc_rating, 'Avg_non_loc_rating': avg_non_loc_rating, 'Diff_loc_rating': diff_loc_rating})
# Show the before computed data for RQ 1, 2 and 3.
data_RQ1.head(31)
```
## 5.: Evaluate the Results according to CRISP-DM
In the following, the relevant plots and statistical numbers for each research question are shown; after the plots, the results are discussed.
### RQ 1: Are there differences in average ratings and number of ratings between cities?
```
data_RQ1.plot.bar(x='City', y='Avg_Rating', rot=0, figsize=(30,6))
print('Lowest Average Rating: {:.3f}'.format(min(data_RQ1.Avg_Rating)))
print('Highest Average Rating: {:.3f}'.format(max(data_RQ1.Avg_Rating)))
print('Difference from lowest to highest average Rating: {:.3f}'.format(max(data_RQ1.Avg_Rating) - min(data_RQ1.Avg_Rating)))
```
#### As can clearly be seen, there is a difference in average ratings by city. The highest average rating is 4.232 for the city of Rome, and the lowest is 3.797 for the city of Madrid. An interesting follow-up question would be whether the general quality of restaurants is better in Rome, or whether reviewers simply give better ratings in Rome compared to Madrid. Another, more vague explanation would be that TripAdvisor is used more often by tourists than by locals, and that tourists rate Italian food better because it is better known around the world than Spanish food.
```
data_RQ1.plot.bar(x='City', y='Avg_Num_Ratings', rot=0, figsize=(30,6))
print('Lowest Average Number of Ratings: {:.3f}'.format(min(data_RQ1.Avg_Num_Ratings)))
print('Highest Average Number of Ratings: {:.3f}'.format(max(data_RQ1.Avg_Num_Ratings)))
print('Difference from lowest to highest number of Ratings: {:.3f}'.format(max(data_RQ1.Avg_Num_Ratings) - min(data_RQ1.Avg_Num_Ratings)))
```
#### The number of ratings also clearly differs between cities. The highest average number of ratings, 293.896, is (again) seen in the city of Rome, while Hamburg has the lowest at 45.942, a difference of close to 248. That means Rome has over six times the average number of ratings of Hamburg, which can't be explained by the difference in inhabitants alone: 2,872,800 for Rome (Wikipedia) versus 1,841,179 for Hamburg (Wikipedia). Other explanations would be that certain regions are more rating-friendly, prefer TripAdvisor over other tools such as Google Maps, or that the probably higher number of tourists in Rome uses TripAdvisor more often.
### RQ 2: Are there more vegetarian-friendly cities and if so, are they locally concentrated?
```
data_RQ1.plot.bar(x='City', y='Avg_Veg_Av', rot=0, figsize=(30,6))
print('Lowest Average Number of Vegetarian/Vegan Available: {:.3f}'.format(min(data_RQ1.Avg_Veg_Av)))
print('Highest Average Number of Vegetarian/Vegan Available: {:.3f}'.format(max(data_RQ1.Avg_Veg_Av)))
print('Difference from lowest to highest number: {:.3f}'.format(max(data_RQ1.Avg_Veg_Av) - min(data_RQ1.Avg_Veg_Av)))
```
#### There are also large differences in the share of restaurants with a vegetarian/vegan option available: Edinburgh has the highest share of restaurants offering it, at 56.9%, while Lyon, at 12.9%, is a lot less veg-friendly. A clear local pattern cannot be distinguished.
### RQ 3: Is local cuisine rated better than foreign cusine and if so, is there a difference between cities?
```
data_RQ1.plot.bar(x='City', y='Avg_Loc_Av', rot=0, figsize=(30,6))
data_RQ1.plot.bar(x='City', y='Avg_loc_rating', rot=0, figsize=(30,6))
data_RQ1.plot.bar(x='City', y='Avg_non_loc_rating', rot=0, figsize=(30,6))
data_RQ1.plot.bar(x='City', y='Diff_loc_rating', rot=0, figsize=(30,6))
print('Lowest Rating Difference: {:.3f}'.format(min(data_RQ1.Diff_loc_rating)))
print('Highest Rating Difference: {:.3f}'.format(max(data_RQ1.Diff_loc_rating)))
print('Average Total Rating Difference: {:.3f}'.format(np.mean(data_RQ1.Diff_loc_rating)))
print()
print('Total Local Ratings: {}'.format(len(total_local_rating)))
print('Total Local Rating Mean: {}'.format(np.mean(total_local_rating)))
print('Total Non-Local Ratings: {}'.format(len(total_non_local_rating)))
print('Total Non-Local Rating Mean: {}'.format(np.mean(total_non_local_rating)))
print('Total Non-Local Rating Mean Difference: {}'.format(np.mean(total_local_rating) - np.mean(total_non_local_rating)))
```
#### Although there is a difference, with local restaurants being rated better than restaurants not serving local food (the aggregated difference is 0.026, the total difference 0.0155), it is quite small and not necessarily statistically significant in general. Yet it is interesting to notice that for some cities the hypothesis holds. Especially Copenhagen, Edinburgh, Helsinki, Ljubljana, and Lyon show more pronounced differences with local restaurants being favored, while in cities like Barcelona, Berlin, Bratislava, Brussels, and Prague local restaurants are rated worse; in the case of Bratislava the difference is greater than 0.2.
So, again, this can have multiple reasons. It is possible that people who use TripAdvisor, who are often tourists, prefer certain cuisines they are familiar with. It is also possible that certain local cuisines are "easier" for non-locals. Other reasons are conceivable.
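One way to probe whether a mean-rating difference like this is statistically significant is Welch's two-sample t statistic on the two rating lists. The sketch below uses synthetic stand-in samples, not the actual `total_local_rating` / `total_non_local_rating` lists, so the printed result is illustrative only:

```python
import numpy as np

def welch_t(a, b):
    """Welch's t statistic for two independent samples with unequal variances."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    return (a.mean() - b.mean()) / se

# Synthetic stand-ins; the real lists from the loop above could be passed instead.
rng = np.random.default_rng(0)
local = rng.normal(4.00, 0.5, 500)
non_local = rng.normal(3.98, 0.5, 5000)

t = welch_t(local, non_local)
print(abs(t) > 1.96)  # rough 5%-level check against the normal approximation
```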
```
import os, sys, time, copy
import random
import numpy as np
import matplotlib.pyplot as plt
import multiprocessing
from functools import partial
from tqdm import tqdm
import myokit
sys.path.append('../')
sys.path.append('../Protocols')
sys.path.append('../Models')
sys.path.append('../Lib')
import protocol_lib
import mod_trace
import simulator_myokit
import simulator_scipy
import vc_protocols
def find_closest_index(array, t):
"""Given an array, return the index with the value closest to t."""
return (np.abs(np.array(array) - t)).argmin()
# def get_currents_with_constant_dt(xs, window=1, step_size=1):
# times = xs[0]
# currents = xs[1:]
# data_li = []
# for I in currents:
# data_temp = []
# t = 0
# while t <= times[-1] - window:
# start_index = find_closest_index(times, t)
# end_index = find_closest_index(times, t + window)
# I_window = I[start_index: end_index + 1]
# data_temp.append(sum(I_window)/len(I_window))
# t += step_size
# data_li.append(data_temp)
# return data_li
def get_currents_with_constant_dt(xs, window=1, step_size=1):
times = xs[0]
i_ion = xs[1]
i_ion_window = []
t = 0
while t <= times[-1] - window:
start_index = find_closest_index(times, t)
end_index = find_closest_index(times, t + window)
I_window = i_ion[start_index: end_index + 1]
i_ion_window.append(sum(I_window)/len(I_window))
t += step_size
return i_ion_window
cell_types = {
'Endocardial' : 0,
'Epicardial' : 1,
'Mid-myocardial' : 2,
}
```
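As a quick sanity check of the windowing helper above, here is a self-contained restatement run on a synthetic constant current (the helper logic is reproduced so the sketch runs standalone):

```python
import numpy as np

def find_closest_index(array, t):
    """Given an array, return the index with the value closest to t."""
    return (np.abs(np.array(array) - t)).argmin()

def get_currents_with_constant_dt(xs, window=1, step_size=1):
    times, i_ion = xs[0], xs[1]
    i_ion_window, t = [], 0
    while t <= times[-1] - window:
        start = find_closest_index(times, t)
        end = find_closest_index(times, t + window)
        seg = i_ion[start:end + 1]
        i_ion_window.append(sum(seg) / len(seg))
        t += step_size
    return i_ion_window

# A constant current resampled from a 10 ms trace with a 1 ms window and a
# 1 ms step yields 10 window averages, all equal to the input level:
times = np.linspace(0, 10, 201)
i_ion = np.full_like(times, 3.0)
resampled = get_currents_with_constant_dt([times, i_ion], window=1, step_size=1)
print(len(resampled), resampled[0])
```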
### Create Voltage Protocol
```
'''
leemV1
'''
# VC_protocol = vc_protocols.hERG_CiPA()
# VC_protocol = vc_protocols.cav12_CiPA()
# VC_protocol = vc_protocols.lateNav15_CiPA()
VC_protocol = protocol_lib.VoltageClampProtocol() # steps=steps
VC_protocol.add( protocol_lib.VoltageClampStep(voltage=-80, duration=100) )
VC_protocol.add( protocol_lib.VoltageClampStep(voltage=-90, duration=100) )
VC_protocol.add( protocol_lib.VoltageClampStep(voltage=-80, duration=100) )
VC_protocol.add( protocol_lib.VoltageClampStep(voltage=-35, duration=40) )
VC_protocol.add( protocol_lib.VoltageClampStep(voltage=-80, duration=200) )
VC_protocol.add( protocol_lib.VoltageClampStep(voltage=-40, duration=40) )
VC_protocol.add( protocol_lib.VoltageClampStep(voltage=0, duration=40) ) # <- why?? vo
VC_protocol.add( protocol_lib.VoltageClampStep(voltage=40, duration=500) )
VC_protocol.add( protocol_lib.VoltageClampRamp(voltage_start=40, voltage_end=-120, duration=200)) # ramp step
# VC_protocol.add( protocol_lib.VoltageClampStep(voltage=-80, duration=100) )
# VC_protocol.add( protocol_lib.VoltageClampStep(voltage=0, duration=100) )
# VC_protocol.add( protocol_lib.VoltageClampStep(voltage=60, duration=500) )
# VC_protocol.add( protocol_lib.VoltageClampRamp(voltage_start=60, voltage_end=-80, duration=200)) # ramp step
vhold = -80 # VC_protocol.steps[0].voltage
print(f'The protocol is {VC_protocol.get_voltage_change_endpoints()[-1]} ms')
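# Quick check of the printed length: the leemV1 steps above are
# 100 + 100 + 100 + 40 + 200 + 40 + 40 + 500, plus the 200 ms ramp = 1320 ms.
leemV1_durations = [100, 100, 100, 40, 200, 40, 40, 500, 200]  # ms
print(sum(leemV1_durations))  # 1320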
# '''
# SongV1
# '''
# VC_protocol = protocol_lib.VoltageClampProtocol() # steps=steps
# VC_protocol.add( protocol_lib.VoltageClampStep(voltage=-80, duration=100) )
# VC_protocol.add( protocol_lib.VoltageClampStep(voltage=-120, duration=20) )
# VC_protocol.add( protocol_lib.VoltageClampStep(voltage=-40, duration=200) )
# VC_protocol.add( protocol_lib.VoltageClampStep(voltage=60, duration=200) )
# VC_protocol.add( protocol_lib.VoltageClampStep(voltage=0, duration=200) ) # <- why?? vo
# VC_protocol.add( protocol_lib.VoltageClampStep(voltage=50, duration=200) )
# VC_protocol.add( protocol_lib.VoltageClampStep(voltage=-10, duration=200) )
# VC_protocol.add( protocol_lib.VoltageClampStep(voltage=-80, duration=50) )
# VC_protocol.add( protocol_lib.VoltageClampRamp(voltage_start=30, voltage_end=-50, duration=100)) # ramp step
# vhold = -80
# print(f'The protocol is {VC_protocol.get_voltage_change_endpoints()[-1]} ms')
start_time = time.time()
model, p, s = myokit.load("../mmt-model-files/ohara-cipa-v1-2017_JK-v2.mmt")
sim = simulator_myokit.Simulator(model, VC_protocol, max_step=1.0, abs_tol=1e-06, rel_tol=1e-6, vhold=vhold) # 1e-12, 1e-14 # 1e-08, 1e-10
sim.name = "ohara2017"
f = 1.5
params = {
'cell.mode': cell_types['Mid-myocardial'],
'setting.simType': 1, # 0: AP | 1: VC
'ina.gNa' : 75.0 * f,
'inal.gNaL' : 0.0075 * 2.661 * f,
'ito.gto' : 0.02 * 4 * f,
'ical.PCa' : 0.0001 * 1.007 * 2.5 * f,
'ikr.gKr' : 4.65854545454545618e-2 * 1.3 * f, # [mS/uF]
'iks.gKs' : 0.0034 * 1.87 * 1.4 * f,
'ik1.gK1' : 0.1908 * 1.698 * 1.3 * f,
'inaca.gNaCa' : 0.0008 * 1.4,
'inak.PNaK' : 30 * 0.7,
'ikb.gKb' : 0.003,
'inab.PNab' : 3.75e-10,
'icab.PCab' : 2.5e-8,
'ipca.GpCa' : 0.0005,
}
sim.set_simulation_params(params)
print("--- %s seconds ---"%(time.time()-start_time))
for key, value in params.items():
print(f'{key} : {value}')
def gen_dataset( gen_params, datasetNo=1):
'''
    type = 'AP' or 'I'
params = {
'times': 1,
'log_li' : [],
'nData' : 10000,
'dataset_dir' : './dataset',
'data_file_name' : 'current',
'scale' : 2,
}
'''
random.seed(datasetNo * 84)
np.random.seed(datasetNo * 86)
print("-----Dataset%d generation starts.-----"%(datasetNo))
d = None
result_li = []
param_li = []
current_nData = 0
simulation_error_count = 0
with tqdm(total = gen_params['nData']) as pbar:
while (current_nData < gen_params['nData']):
g_adj = np.random.uniform(0, 1, 7)
g_adj_li= {
'ina.g_adj' : g_adj[0],
'inal.g_adj' : g_adj[1],
'ito.g_adj' : g_adj[2],
'ical.g_adj' : g_adj[3],
'ikr.g_adj' : g_adj[4],
'iks.g_adj' : g_adj[5],
'ik1.g_adj' : g_adj[6],
# 'if.g_adj' : g_fc[7]
}
sim.set_simulation_params(g_adj_li)
# log_li = ['membrane.V']
# if len(log_li)>0:
# log_li = gen_params['log_li']
try :
sim.pre_simulate(5000, sim_type=1)
d = sim.simulate( gen_params['end_time'], extra_log=gen_params['log_li'])
# temp = [d['engine.time']]
# for log in gen_params['save_log_li'] :
# temp.append(d[log])
# temp = get_currents_with_constant_dt(temp, window=gen_params['window'], step_size=gen_params['step_size'])
temp = [d['engine.time'], d['membrane.i_ion']]
if (gen_params['window']>0) and (gen_params['step_size']>0):
temp = get_currents_with_constant_dt(temp, window=gen_params['window'], step_size=gen_params['step_size'])
result_li.append( np.array(temp) )
else:
result_li.append( temp )
param_li.append( g_adj )
current_nData+=1
except :
simulation_error_count += 1
print("There is a simulation error.")
continue
pbar.update(1)
    if gen_params['window'] is not None and gen_params['step_size'] is not None:
result_li = np.array(result_li)
else:
result_li = np.array(result_li, dtype=object)
param_li = np.array(param_li)
np.save(os.path.join(gen_params['dataset_dir'], f"{gen_params['data_file_name']}{datasetNo}" ) , result_li)
np.save(os.path.join(gen_params['dataset_dir'], f'parameter{datasetNo}' ), param_li )
result_li = []
param_li = []
    print("=====Dataset%d generation End. & %d simulation errors occurred.====="%(datasetNo, simulation_error_count))
if __name__=='__main__':
start_time = time.time()
nCPU = os.cpu_count()
print("The number of process :", nCPU )
multi = False
gen_params = {
'end_time': VC_protocol.get_voltage_change_endpoints()[-1],
'log_li' : ['membrane.i_ion', 'ina.INa', 'inal.INaL', 'ito.Ito', 'ical.ICaL', 'ical.ICaNa', 'ical.ICaK', 'ikr.IKr', 'iks.IKs', 'ik1.IK1', 'inaca.INaCa', 'inacass.INaCa_ss', 'inak.INaK', 'ikb.IKb', 'inab.INab', 'icab.ICab', 'ipca.IpCa'],
'save_log_li' : ['membrane.i_ion'],
'nData' : 1000,
'dataset_dir' : './ohara2017_LeemV1_fixed_concentrations',
'data_file_name' : 'currents',
'window' : None,
'step_size' : None,
'startNo' : 71,
'nDataset' : 1,
}
gen_params['dataset_dir'] = gen_params['dataset_dir'] #+ f"_w{gen_params['window']}_s{gen_params['step_size']}"
    datasetNo_li = list(range(gen_params['startNo'], gen_params['startNo']+gen_params['nDataset'])) # one per core, e.g. [1,2,3,4,5,6,7,8,9,10]
print(datasetNo_li)
try:
if not os.path.exists(gen_params['dataset_dir']):
os.makedirs(gen_params['dataset_dir'])
print('"%s" has been created.'%(gen_params['dataset_dir']))
else:
print("The folder already exists.")
except OSError:
print('Error: create_folder(). : ' + gen_params['dataset_dir'])
'''
Plot
'''
fig, ax = plt.subplots(1,1, figsize=(10,3))
# fig.suptitle(sim.name, fontsize=14)
# ax.set_title('Simulation %d'%(simulationNo))
# axes[i].set_xlim(model_scipy.times.min(), model_scipy.times.max())
# ax.set_ylim(ylim[0], ylim[1])
ax.set_xlabel('Time (ms)')
    ax.set_ylabel('Voltage (mV)')
times = np.linspace(0, VC_protocol.get_voltage_change_endpoints()[-1], 10000)
ax.plot( times, VC_protocol.get_voltage_clamp_protocol(times), label='VC', color='k', linewidth=5)
ax.legend()
ax.grid()
# ax[-1].set_ylim(-5, 5)
plt.subplots_adjust(left=0.07, bottom=0.05, right=0.95, top=0.95, wspace=0.5, hspace=0.15)
plt.show()
fig.savefig(os.path.join(gen_params['dataset_dir'], "aVC.jpg" ), dpi=100)
if multi :
pool = multiprocessing.Pool(processes=32 )
func = partial(gen_dataset, gen_params)
pool.map(func, datasetNo_li)
pool.close()
pool.join()
else:
for No in datasetNo_li :
gen_dataset(gen_params, No)
# print("Dataset has been generated.")
print("--- %s seconds ---"%(time.time()-start_time))
#
# # Set parameter transformation
# transform_to_model_param = log_transform_to_model_param # return np.exp(out)
# transform_from_model_param = log_transform_from_model_param # return np.log(out)
# logprior = LogPrior(transform_to_model_param, transform_from_model_param)
# p = logprior.sample_without_inv_transform()
# print(p)
# print(logprior.rmax)
# print(logprior.rmin)
# print(5e5)
print("Finish")
```
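The multiprocessing branch above works because `functools.partial` freezes the shared `gen_params` argument, so `Pool.map` only has to vary `datasetNo`. A minimal, self-contained sketch of the same pattern (demo-only names; plain `map` stands in for `Pool.map`, which applies the function the same way, just in parallel):

```python
from functools import partial

def gen(params, dataset_no):
    # Stand-in for gen_dataset: shared config first, varying dataset number
    # last, matching partial(gen_dataset, gen_params) above.
    return params['scale'] * dataset_no

func = partial(gen, {'scale': 2})
print(list(map(func, [1, 2, 3])))  # [2, 4, 6]
```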
---
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams["savefig.dpi"] = 300
plt.rcParams["savefig.bbox"] = "tight"
np.set_printoptions(precision=3, suppress=True)
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import scale, StandardScaler
# toy plot
plt.plot([.3, 0, 1])
plt.xticks((0, 1, 2), ("0 (.16)", "1 (.5)", "2 (.84)"))
plt.xlabel("Bin index (expected positive)")
plt.ylabel("Observed positive in bin")
plt.savefig("images/calib_curve.png")
from sklearn.datasets import fetch_covtype
from sklearn.utils import check_array
def load_data(dtype=np.float32, order='C', random_state=13):
######################################################################
# Load covertype dataset (downloads it from the web, might take a bit)
data = fetch_covtype(download_if_missing=True, shuffle=True,
random_state=random_state)
X = check_array(data['data'], dtype=dtype, order=order)
    # make it a binary classification problem
    y = (data['target'] != 1).astype(int)
# Create train-test split (as [Joachims, 2006])
n_train = 522911
X_train = X[:n_train]
y_train = y[:n_train]
X_test = X[n_train:]
y_test = y[n_train:]
# Standardize first 10 features (the numerical ones)
mean = X_train.mean(axis=0)
std = X_train.std(axis=0)
mean[10:] = 0.0
std[10:] = 1.0
X_train = (X_train - mean) / std
X_test = (X_test - mean) / std
return X_train, X_test, y_train, y_test
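# Why the loader sets mean[10:] = 0 and std[10:] = 1 above: it makes
# (X - mean) / std a no-op on the one-hot columns while standardizing the
# numeric ones. Tiny check on a 2-column stand-in:
_demo = np.array([[1.0, 0.0], [3.0, 1.0]])
_m, _s = _demo.mean(axis=0), _demo.std(axis=0)
_m[1:], _s[1:] = 0.0, 1.0
_scaled = (_demo - _m) / _s
print(_scaled)  # numeric column -> [-1, 1]; one-hot column unchanged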
X_train, X_test, y_train, y_test = load_data()
# subsample training set by a factor of 10:
X_train = X_train[::10]
y_train = y_train[::10]
from sklearn.linear_model import LogisticRegressionCV
print(X_train.shape)
print(np.bincount(y_train))
lr = LogisticRegressionCV().fit(X_train, y_train)
lr.C_
print(lr.predict_proba(X_test)[:10])
print(y_test[:10])
from sklearn.calibration import calibration_curve
probs = lr.predict_proba(X_test)[:, 1]
prob_true, prob_pred = calibration_curve(y_test, probs, n_bins=5)
print(prob_true)
print(prob_pred)
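# What calibration_curve computed above, sketched by hand: predictions are
# grouped into equal-width probability bins, then the mean predicted
# probability per bin (x) is compared with the observed positive fraction (y).
# Simplified re-implementation (drops empty bins, as sklearn does):
def _calib_points(y_true, y_prob, n_bins=5):
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ids = np.digitize(y_prob, edges[1:-1])
    prob_true, prob_pred = [], []
    for b in range(n_bins):
        mask = ids == b
        if mask.any():
            prob_true.append(y_true[mask].mean())
            prob_pred.append(y_prob[mask].mean())
    return np.array(prob_true), np.array(prob_pred)

_pt, _pp = _calib_points(np.array([0, 0, 1, 1]),
                         np.array([0.1, 0.4, 0.35, 0.8]), n_bins=2)
print(_pt, _pp)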
def plot_calibration_curve(y_true, y_prob, n_bins=5, ax=None, hist=True, normalize=False):
prob_true, prob_pred = calibration_curve(y_true, y_prob, n_bins=n_bins, normalize=normalize)
if ax is None:
ax = plt.gca()
if hist:
ax.hist(y_prob, weights=np.ones_like(y_prob) / len(y_prob), alpha=.4,
bins=np.maximum(10, n_bins))
ax.plot([0, 1], [0, 1], ':', c='k')
curve = ax.plot(prob_pred, prob_true, marker="o")
ax.set_xlabel("predicted probability")
ax.set_ylabel("fraction of positive samples")
ax.set(aspect='equal')
return curve
plot_calibration_curve(y_test, probs)
plt.title("n_bins=5")
fig, axes = plt.subplots(1, 3, figsize=(16, 6))
for ax, n_bins in zip(axes, [5, 20, 50]):
plot_calibration_curve(y_test, probs, n_bins=n_bins, ax=ax)
ax.set_title("n_bins={}".format(n_bins))
plt.savefig("images/influence_bins.png")
from sklearn.svm import LinearSVC, SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
fig, axes = plt.subplots(1, 3, figsize=(8, 8))
for ax, clf in zip(axes, [LogisticRegressionCV(), DecisionTreeClassifier(),
RandomForestClassifier(n_estimators=100)]):
    # use predict_proba if the estimator has it
scores = clf.fit(X_train, y_train).predict_proba(X_test)[:, 1]
plot_calibration_curve(y_test, scores, n_bins=20, ax=ax)
ax.set_title(clf.__class__.__name__)
plt.tight_layout()
plt.savefig("images/calib_curve_models.png")
# same thing but with Brier loss shown. Why do I refit the models? lol
from sklearn.metrics import brier_score_loss
fig, axes = plt.subplots(1, 3, figsize=(10, 4))
for ax, clf in zip(axes, [LogisticRegressionCV(), DecisionTreeClassifier(), RandomForestClassifier(n_estimators=100)]):
    # use predict_proba if the estimator has it
scores = clf.fit(X_train, y_train).predict_proba(X_test)[:, 1]
plot_calibration_curve(y_test, scores, n_bins=20, ax=ax)
ax.set_title("{}: {:.2f}".format(clf.__class__.__name__, brier_score_loss(y_test, scores)))
plt.tight_layout()
plt.savefig("images/models_bscore.png")
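# The Brier score reported in the titles above is just the mean squared error
# between predicted probabilities and the 0/1 outcomes; hand computation:
_y_demo = np.array([0, 1, 1, 0])
_p_demo = np.array([0.1, 0.9, 0.8, 0.3])
_brier = np.mean((_p_demo - _y_demo) ** 2)
print(_brier)  # (0.01 + 0.01 + 0.04 + 0.09) / 4 = 0.0375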
from sklearn.calibration import CalibratedClassifierCV
X_train_sub, X_val, y_train_sub, y_val = train_test_split(X_train, y_train,
stratify=y_train, random_state=0)
rf = RandomForestClassifier(n_estimators=100).fit(X_train_sub, y_train_sub)
scores = rf.predict_proba(X_test)[:, 1]
plot_calibration_curve(y_test, scores, n_bins=20)
plt.title("{}: {:.3f}".format(rf.__class__.__name__, brier_score_loss(y_test, scores)))
cal_rf = CalibratedClassifierCV(rf, cv="prefit", method='sigmoid')
cal_rf.fit(X_val, y_val)
scores_sigm = cal_rf.predict_proba(X_test)[:, 1]
cal_rf_iso = CalibratedClassifierCV(rf, cv="prefit", method='isotonic')
cal_rf_iso.fit(X_val, y_val)
scores_iso = cal_rf_iso.predict_proba(X_test)[:, 1]
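# Sigmoid (Platt) calibration fits p = 1 / (1 + exp(A*s + B)) to held-out
# scores s, while isotonic calibration fits a monotone step function. Shape of
# the sigmoid mapping with illustrative, *not fitted*, coefficients:
_A, _B = -4.0, 2.0
_s_demo = np.array([0.0, 0.5, 1.0])
_p_cal = 1.0 / (1.0 + np.exp(_A * _s_demo + _B))
print(_p_cal)  # monotone increasing, exactly 0.5 at s = 0.5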
scores_rf = rf.predict_proba(X_val)  # raw (uncalibrated) forest scores on the validation set
plt.plot(scores_rf[:, 1], y_val, 'o', alpha=.01)
plt.xlabel("rf.predict_proba")
plt.ylabel("True validation label")
plt.savefig("images/calibration_val_scores.png")
sigm = cal_rf.calibrated_classifiers_[0].calibrators_[0]
scores_rf_sorted = np.sort(scores_rf[:, 1])
sigm_scores = sigm.predict(scores_rf_sorted)
iso = cal_rf_iso.calibrated_classifiers_[0].calibrators_[0]
iso_scores = iso.predict(scores_rf_sorted)
plt.plot(scores_rf[:, 1], y_val, 'o', alpha=.01)
plt.plot(scores_rf_sorted, sigm_scores, label='sigm')
plt.plot(scores_rf_sorted, iso_scores, label='iso')
plt.xlabel("rf.predict_proba")
plt.ylabel("True validation label")
plt.legend()
plt.savefig("images/calibration_val_scores_fitted.png")
fig, axes = plt.subplots(1, 3, figsize=(10, 4))
for name, s, ax in zip(['no calibration', 'sigmoid', 'isotonic'],
[scores, scores_sigm, scores_iso], axes):
plot_calibration_curve(y_test, s, n_bins=20, ax=ax)
ax.set_title("{}: {:.3f}".format(name, brier_score_loss(y_test, s)))
plt.tight_layout()
plt.savefig("images/types_callib.png")
cal_rf_iso_cv = CalibratedClassifierCV(rf, method='isotonic')
cal_rf_iso_cv.fit(X_train, y_train)
scores_iso_cv = cal_rf_iso_cv.predict_proba(X_test)[:, 1]
fig, axes = plt.subplots(1, 3, figsize=(10, 4))
for name, s, ax in zip(['no calibration', 'isotonic', 'isotonic cv'],
[scores, scores_iso, scores_iso_cv], axes):
plot_calibration_curve(y_test, s, n_bins=20, ax=ax)
ax.set_title("{}: {:.3f}".format(name, brier_score_loss(y_test, s)))
plt.tight_layout()
plt.savefig("images/types_callib_cv.png")
# http://scikit-learn.org/dev/auto_examples/calibration/plot_calibration_multiclass.html
# Author: Jan Hendrik Metzen <jhm@informatik.uni-bremen.de>
# License: BSD Style.
import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.ensemble import RandomForestClassifier
from sklearn.calibration import CalibratedClassifierCV
from sklearn.metrics import log_loss, brier_score_loss
np.random.seed(0)
# Generate data
X, y = make_blobs(n_samples=1000, n_features=2, random_state=42,
cluster_std=5.0)
X_train, y_train = X[:600], y[:600]
X_valid, y_valid = X[600:800], y[600:800]
X_train_valid, y_train_valid = X[:800], y[:800]
X_test, y_test = X[800:], y[800:]
# Train uncalibrated random forest classifier on whole train and validation
# data and evaluate on test data
clf = RandomForestClassifier(n_estimators=25)
clf.fit(X_train_valid, y_train_valid)
clf_probs = clf.predict_proba(X_test)
score = log_loss(y_test, clf_probs)
#score = brier_score_loss(y_test, clf_probs[:, 1])
# Train random forest classifier, calibrate on validation data and evaluate
# on test data
clf = RandomForestClassifier(n_estimators=25)
clf.fit(X_train, y_train)
clf_probs = clf.predict_proba(X_test)
sig_clf = CalibratedClassifierCV(clf, method="sigmoid", cv="prefit")
sig_clf.fit(X_valid, y_valid)
sig_clf_probs = sig_clf.predict_proba(X_test)
sig_score = log_loss(y_test, sig_clf_probs)
#sig_score = brier_score_loss(y_test, sig_clf_probs[:, 1])
# Plot changes in predicted probabilities via arrows
plt.figure(figsize=(12, 6))
plt.subplot(1, 2, 1)
colors = ["r", "g", "b"]
for i in range(clf_probs.shape[0]):
plt.arrow(clf_probs[i, 0], clf_probs[i, 1],
sig_clf_probs[i, 0] - clf_probs[i, 0],
sig_clf_probs[i, 1] - clf_probs[i, 1],
color=colors[y_test[i]], head_width=1e-2)
# Plot perfect predictions
plt.plot([1.0], [0.0], 'ro', ms=20, label="Class 1")
plt.plot([0.0], [1.0], 'go', ms=20, label="Class 2")
plt.plot([0.0], [0.0], 'bo', ms=20, label="Class 3")
# Plot boundaries of unit simplex
plt.plot([0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0], 'k', label="Simplex")
# Annotate points on the simplex
plt.annotate(r'($\frac{1}{3}$, $\frac{1}{3}$, $\frac{1}{3}$)',
xy=(1.0/3, 1.0/3), xytext=(1.0/3, .23), xycoords='data',
arrowprops=dict(facecolor='black', shrink=0.05),
horizontalalignment='center', verticalalignment='center')
plt.plot([1.0/3], [1.0/3], 'ko', ms=5)
plt.annotate(r'($\frac{1}{2}$, $0$, $\frac{1}{2}$)',
xy=(.5, .0), xytext=(.5, .1), xycoords='data',
arrowprops=dict(facecolor='black', shrink=0.05),
horizontalalignment='center', verticalalignment='center')
plt.annotate(r'($0$, $\frac{1}{2}$, $\frac{1}{2}$)',
xy=(.0, .5), xytext=(.1, .5), xycoords='data',
arrowprops=dict(facecolor='black', shrink=0.05),
horizontalalignment='center', verticalalignment='center')
plt.annotate(r'($\frac{1}{2}$, $\frac{1}{2}$, $0$)',
xy=(.5, .5), xytext=(.6, .6), xycoords='data',
arrowprops=dict(facecolor='black', shrink=0.05),
horizontalalignment='center', verticalalignment='center')
plt.annotate(r'($0$, $0$, $1$)',
xy=(0, 0), xytext=(.1, .1), xycoords='data',
arrowprops=dict(facecolor='black', shrink=0.05),
horizontalalignment='center', verticalalignment='center')
plt.annotate(r'($1$, $0$, $0$)',
xy=(1, 0), xytext=(1, .1), xycoords='data',
arrowprops=dict(facecolor='black', shrink=0.05),
horizontalalignment='center', verticalalignment='center')
plt.annotate(r'($0$, $1$, $0$)',
xy=(0, 1), xytext=(.1, 1), xycoords='data',
arrowprops=dict(facecolor='black', shrink=0.05),
horizontalalignment='center', verticalalignment='center')
# Add grid
plt.grid(False)
for x in [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]:
plt.plot([0, x], [x, 0], 'k', alpha=0.2)
plt.plot([0, 0 + (1-x)/2], [x, x + (1-x)/2], 'k', alpha=0.2)
plt.plot([x, x + (1-x)/2], [0, 0 + (1-x)/2], 'k', alpha=0.2)
plt.title("Change of predicted probabilities after sigmoid calibration")
plt.xlabel("Probability class 1")
plt.ylabel("Probability class 2")
plt.xlim(-0.05, 1.05)
plt.ylim(-0.05, 1.05)
plt.legend(loc="best")
print("Log-loss of")
print(" * uncalibrated classifier trained on 800 datapoints: %.3f "
% score)
print(" * classifier trained on 600 datapoints and calibrated on "
"200 datapoint: %.3f" % sig_score)
# Illustrate calibrator
plt.subplot(1, 2, 2)
# generate grid over 2-simplex
p1d = np.linspace(0, 1, 20)
p0, p1 = np.meshgrid(p1d, p1d)
p2 = 1 - p0 - p1
p = np.c_[p0.ravel(), p1.ravel(), p2.ravel()]
p = p[p[:, 2] >= 0]
calibrated_classifier = sig_clf.calibrated_classifiers_[0]
prediction = np.vstack([calibrator.predict(this_p)
for calibrator, this_p in
zip(calibrated_classifier.calibrators_, p.T)]).T
prediction /= prediction.sum(axis=1)[:, None]
# Plot modifications of calibrator
for i in range(prediction.shape[0]):
plt.arrow(p[i, 0], p[i, 1],
prediction[i, 0] - p[i, 0], prediction[i, 1] - p[i, 1],
head_width=1e-2, color=colors[np.argmax(p[i])])
# Plot boundaries of unit simplex
plt.plot([0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0], 'k', label="Simplex")
plt.grid(False)
for x in [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]:
plt.plot([0, x], [x, 0], 'k', alpha=0.2)
plt.plot([0, 0 + (1-x)/2], [x, x + (1-x)/2], 'k', alpha=0.2)
plt.plot([x, x + (1-x)/2], [0, 0 + (1-x)/2], 'k', alpha=0.2)
plt.title("Illustration of sigmoid calibrator")
plt.xlabel("Probability class 1")
plt.ylabel("Probability class 2")
plt.xlim(-0.05, 1.05)
plt.ylim(-0.05, 1.05)
plt.savefig("images/multi_class_calibration.png")
```
---
```
# Imports
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import pickle
import os
from scipy.stats import linregress
from sklearn_pandas import DataFrameMapper
from sklearn.preprocessing import LabelEncoder, LabelBinarizer
from sklearn.model_selection import train_test_split, GridSearchCV, RandomizedSearchCV
from sklearn.metrics import mean_squared_error
from sklearn.kernel_ridge import KernelRidge as KRR
from sklearn.ensemble import RandomForestRegressor as RFR
from sklearn.gaussian_process.kernels import WhiteKernel, ExpSineSquared
# Define the project root directory
ROOT_DIR = os.path.join(os.getcwd(), os.pardir)
# Load the data
df = pd.read_pickle(f"{ROOT_DIR}/data/data.csv")  # pickled DataFrame despite the .csv extension
print(f"Loaded raw data of shape {df.shape}")
plt.plot(df["Reaction Energy"], df["Activation Energy"], "b.")
plt.xlabel("Reaction Energy [eV]")
plt.ylabel("Activation Energy [eV]")
plt.savefig(f"{ROOT_DIR}/data/images/er_ea_correlation.png")
plt.show()
df.shape
```
### Separate metals, non-metals, and semiconductors
```
metals = [
"Sc", "Ti", "V", "Cr", "Mn", "Fe", "Co", "Ni", "Cu", "Zn",
"Y", "Zr", "Nb", "Mo", "Tc", "Ru", "Rh", "Pd", "Ag", "Cd",
"Hf", "Ta", "W", "Re", "Os", "Ir", "Pt", "Au", "Hg",
"Rf", "Db", "Sg", "Bh", "Hs", "Mt", "Ds", "Rg", "Cn",
"Al", "Ga", "In", "Sn", "Tl", "Pb", "Bi", "Nh", "Fl", "Mc", "Lv",
"Y-fcc", "Zr-fcc", "Nb-fcc", "Mo-fcc", "Tc-fcc", "Ru-fcc", "Rh-fcc", "Pd-fcc", "Ag-fcc", "Cd-fcc",
"Sc-fcc", "Ti-fcc", "V-fcc", "Cr-fcc", "Mn-fcc", "Fe-fcc", "Co-fcc", "Ni-fcc", "Cu-fcc", "Zn-fcc",
"Hf-fcc", "Ta-fcc", "W-fcc", "Re-fcc", "Os-fcc", "Ir-fcc", "Pt-fcc", "Au-fcc", "Hg-fcc",
"Rf-fcc", "Db-fcc", "Sg-fcc", "Bh-fcc", "Hs-fcc", "Mt-fcc", "Ds-fcc", "Rg-fcc", "Cn-fcc",
"Al-fcc", "Ga-fcc", "In-fcc", "Sn-fcc", "Tl-fcc", "Pb-fcc", "Bi-fcc", "Nh-fcc", "Fl-fcc", "Mc-fcc", "Lv-fcc"
]
indices = []
for i in range(df.shape[0]):
if df.iloc[i]["Chemical Composition"] in metals or df.iloc[i]["Surface Composition"] in metals:
indices.append(i)
df = df.iloc[indices]
print(f"Found {df.shape[0]} reactions on pure metal catalyst surfaces.")
```
### Transform feature labels to binary one-hot arrays with DataFrameMapper and LabelBinarizer
```
df_bin = df.copy()
print(f"Converted {df_bin.shape[1] - 1} features into ", end="")
bin_mapper = DataFrameMapper([
("Reactant 1", LabelBinarizer()),
("Reactant 2", LabelBinarizer()),
("Reactant 3", LabelBinarizer()),
("Product 1", LabelBinarizer()),
("Product 2", LabelBinarizer()),
("Chemical Composition", LabelBinarizer()),
("Surface Composition", LabelBinarizer()),
("Facet", LabelBinarizer()),
("Adsorption Site", LabelBinarizer()),
("Reaction Equation", LabelBinarizer()),
(["Reaction Energy"], None),
(["Activation Energy"], None),
], df_out=True)
df_bin = bin_mapper.fit_transform(df_bin)
print(f"{df_bin.shape[1] - 1} features.")
df_bin.head()
```
### OR Transform feature labels to integer values with LabelEncoder
```
df_enc = df.copy()
enc_mapper = DataFrameMapper([
('Reactant 1', LabelEncoder()),
('Reactant 2', LabelEncoder()),
('Reactant 3', LabelEncoder()),
('Product 1', LabelEncoder()),
('Product 2', LabelEncoder()),
('Chemical Composition', LabelEncoder()),
('Surface Composition', LabelEncoder()),
('Facet', LabelEncoder()),
('Adsorption Site', LabelEncoder()),
('Reaction Equation', LabelEncoder()),
(['Reaction Energy'], None),
(['Activation Energy'], None),
], df_out=True)
df_enc = enc_mapper.fit_transform(df_enc)
df_enc = df_enc.drop_duplicates(ignore_index=True)
df_enc.head()
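# What the two mappers do differently: LabelEncoder turns a category column
# into integers, LabelBinarizer into one-hot columns. Hand-rolled sketch on a
# hypothetical reactant column (values illustrative only):
_cats = ["CO", "H2", "CO"]
_classes = sorted(set(_cats))                              # ['CO', 'H2']
_encoded = [_classes.index(c) for c in _cats]
_onehot = [[int(c == k) for k in _classes] for c in _cats]
print(_encoded)  # [0, 1, 0]
print(_onehot)   # [[1, 0], [0, 1], [1, 0]]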
```
### Split the data into training and test sets
```
train_set_enc, test_set_enc = train_test_split(df_enc, test_size=0.2)
train_set_bin, test_set_bin = train_test_split(df_bin, test_size=0.2)
y_train_enc = train_set_enc["Activation Energy"]
X_train_enc = train_set_enc.drop("Activation Energy", axis=1)
y_train_bin = train_set_bin["Activation Energy"]
X_train_bin = train_set_bin.drop("Activation Energy", axis=1)
y_test_enc = test_set_enc["Activation Energy"]
X_test_enc = test_set_enc.drop("Activation Energy", axis=1)
y_test_bin = test_set_bin["Activation Energy"]
X_test_bin = test_set_bin.drop("Activation Energy", axis=1)
```
### Kernel Ridge Regression
```
param_grid = {"alpha": [1e0, 1e-1, 1e-2, 1e-3],
"gamma": np.logspace(-2, 2, 5),
"kernel": ["rbf", "linear"]}
krr_enc = GridSearchCV(KRR(), param_grid=param_grid)
krr_enc.fit(X_train_enc, y_train_enc)
krr_enc_best = krr_enc.best_estimator_
krr_enc_score = krr_enc_best.score(X_test_enc, y_test_enc)
krr_enc_pred = krr_enc_best.predict(X_test_enc)
krr_bin = GridSearchCV(KRR(), param_grid=param_grid)
krr_bin.fit(X_train_bin, y_train_bin)
krr_bin_best = krr_bin.best_estimator_
krr_bin_score = krr_bin_best.score(X_test_bin, y_test_bin)
krr_bin_pred = krr_bin_best.predict(X_test_bin)
print(f"KRR score with label encoded data: {krr_enc_score}, using parameters: {krr_enc_best.get_params()}")
print(f"KRR score with label binarized data: {krr_bin_score}, using parameters: {krr_bin_best.get_params()}")
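# What KRR solves internally, in closed form:
# alpha = (K + lambda*I)^{-1} y, with predictions f(x) = k(x, X_train) @ alpha.
# Tiny linear-kernel sketch, separate from the grid-searched models above:
_Xd = np.array([[0.0], [1.0], [2.0]])
_yd = np.array([0.0, 1.0, 2.0])
_K = _Xd @ _Xd.T
_alphad = np.linalg.solve(_K + 1e-6 * np.eye(3), _yd)
print(_K @ _alphad)  # ~ [0, 1, 2]: recovers the linear target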
# Plot the label encoded KRR predictions against the test set target values
res = linregress(krr_enc_pred, y_test_enc)
x = np.arange(-1, 8, 1)
y = x*res[0] + res[1]
fig = plt.figure()
ax = fig.add_subplot(111)
plt.plot(krr_enc_pred, y_test_enc, "b.")
plt.plot(x, y, "r-")
plt.xlabel("$E_A$ ML [eV]")
plt.ylabel("$E_A$ DFT [eV]")
plt.xlim(xmin=min(krr_enc_pred), xmax=max(krr_enc_pred))
plt.ylim(ymin=min(y_test_enc), ymax=max(y_test_enc))
ax.set_aspect("equal")
plt.savefig(f"{ROOT_DIR}/data/images/krr_enc_pred.png")
plt.show()
# Plot the binarized KRR predictions against the test set target values
res = linregress(krr_bin_pred, y_test_bin)
x = np.arange(0, 8, 1)
y = x*res[0] + res[1]
fig = plt.figure()
ax = fig.add_subplot(111)
plt.plot(krr_bin_pred, y_test_bin, "b.")
plt.plot(x, y, "r-")
plt.xlabel("$E_A$ ML [eV]")
plt.ylabel("$E_A$ DFT [eV]")
plt.xlim(xmin=min(krr_bin_pred), xmax=max(krr_bin_pred))
plt.ylim(ymin=min(y_test_bin), ymax=max(y_test_bin))
ax.set_aspect("equal")
plt.savefig(f"{ROOT_DIR}/data/images/krr_bin_pred.png")
plt.show()
```
### Random Forest
```
n_estimators = [50, 100, 150, 200, 250, 300]
max_features = ["auto", "sqrt", "log2"]
max_depth = [10, 20, 30, 40]
max_depth.append(None)
min_samples_split = [2, 5, 10, 15, 20]
min_samples_leaf = [1, 2, 5, 10, 15, 20]
param_grid = {
"n_estimators": n_estimators,
"max_features": max_features,
"max_depth": max_depth,
"min_samples_split": min_samples_split,
"min_samples_leaf": min_samples_leaf
}
rfr_enc = RandomizedSearchCV(RFR(), param_distributions=param_grid, n_iter=400, cv=5, verbose=1, n_jobs=-1)
rfr_enc.fit(X_train_enc, y_train_enc)
rfr_bin = RandomizedSearchCV(RFR(), param_distributions=param_grid, n_iter=400, cv=5, verbose=1, n_jobs=-1)
rfr_bin.fit(X_train_bin, y_train_bin)
rfr_enc_best = rfr_enc.best_estimator_
rfr_enc_score = rfr_enc_best.score(X_test_enc, y_test_enc)
rfr_enc_pred = rfr_enc_best.predict(X_test_enc)
rfr_bin_best = rfr_bin.best_estimator_
rfr_bin_score = rfr_bin_best.score(X_test_bin, y_test_bin)
rfr_bin_pred = rfr_bin_best.predict(X_test_bin)
print(f"Random Forest score with label encoded data: {rfr_enc_score}, using parameters: {rfr_enc_best.get_params()}")
print(f"Random Forest score with label binarized data: {rfr_bin_score}, using parameters: {rfr_bin_best.get_params()}")
res = linregress(rfr_enc_pred, y_test_enc)
x = np.arange(0, 8, 1)
y = x*res[0] + res[1]
fig = plt.figure()
ax = fig.add_subplot(111)
plt.plot(rfr_enc_pred, y_test_enc, "b.")
plt.plot(x, y, "r-")
plt.xlabel("E$_A$ ML [eV]")
plt.ylabel("E$_A$ DFT [eV]")
plt.xlim(xmin=min(rfr_enc_pred), xmax=max(rfr_enc_pred))
plt.ylim(ymin=min(y_test_enc), ymax=max(y_test_enc))
ax.set_aspect("equal")
plt.savefig(f"{ROOT_DIR}/data/images/rfr_enc_pred.png")
plt.show()
res = linregress(rfr_bin_pred, y_test_bin)
x = np.arange(0, 8, 1)
y = x*res[0] + res[1]
fig = plt.figure()
ax = fig.add_subplot(111)
plt.plot(rfr_bin_pred, y_test_bin, "b.")
plt.plot(x, y, "r-")
plt.xlabel("E$_A$ ML [eV]")
plt.ylabel("E$_A$ DFT [eV]")
plt.xlim(xmin=min(rfr_bin_pred), xmax=max(rfr_bin_pred))
plt.ylim(ymin=min(y_test_bin), ymax=max(y_test_bin))
ax.set_aspect("equal")
plt.savefig(f"{ROOT_DIR}/data/images/rfr_bin_pred.png")
plt.show()
```
### Save the trained models
```
# Save the label encoded RFR model
with open(f"{ROOT_DIR}/data/rfr_enc.pkl", "wb") as rfr_enc_file:
pickle.dump(rfr_enc_best, rfr_enc_file)
# Save the label binarized RFR model
with open(f"{ROOT_DIR}/data/rfr_bin.pkl", "wb") as rfr_bin_file:
pickle.dump(rfr_bin_best, rfr_bin_file)
# Save the label encoded KRR model
with open(f"{ROOT_DIR}/data/krr_enc.pkl", "wb") as krr_enc_file:
pickle.dump(krr_enc_best, krr_enc_file)
# Save the label binarized KRR model
with open(f"{ROOT_DIR}/data/krr_bin.pkl", "wb") as krr_bin_file:
pickle.dump(krr_bin_best, krr_bin_file)
```
## Inspect the feature importances
```
fimportances = rfr_enc_best.feature_importances_
fi_data = np.array([X_train_enc.columns,fimportances]).T
fi_data = fi_data[fi_data[:,1].argsort()]
plt.barh(fi_data[:,0], fi_data[:,1])
plt.xlabel("Feature weight")
plt.savefig(f"{ROOT_DIR}/data/images/feature_importances.png", bbox_inches="tight")
plt.show()
```
---
# How to set up the Seven Bridges Public API python library
## Overview
Here you will learn the three possible ways to set up the Seven Bridges Public API Python library.
## Prerequisites
1. You need to install the _sevenbridges-python_ library. Library details are available [here](http://sevenbridges-python.readthedocs.io/en/latest/sevenbridges/)
The easiest way to install sevenbridges-python is using pip:
$ pip install sevenbridges-python
Alternatively, you can get the code. sevenbridges-python is actively developed on GitHub, where the [code](https://github.com/sbg/sevenbridges-python) is always available. To clone the public repository:
$ git clone git://github.com/sbg/sevenbridges-python.git
Once you have a copy of the source, you can embed it in your Python
package, or install it into your site-packages by invoking:
$ python setup.py install
2. You need your _authentication token_ which you can get [here](https://igor.sbgenomics.com/developer/token)
### Notes and Compatibility
The Python package is intended to be used with Python 3.6+.
```
# Import the library
import sevenbridges as sbg
```
### Initialize the library
You can initialize the library explicitly, or by supplying the necessary information in the $HOME/.sevenbridges/credentials file.
There are generally three ways to initialize the library:
1. Explicitly, when calling api constructor, like:
``` python
api = sbg.Api(url='https://api.sbgenomics.com/v2', token='MY AUTH TOKEN')
```
2. By using OS environment variables to store the URL and authentication token:
```bash
export AUTH_TOKEN=<MY AUTH TOKEN>
export API_ENDPOINT='https://api.sbgenomics.com/v2'
```
3. By using the ini file $HOME/.sevenbridges/credentials (on MS Windows, the file should be located at %UserProfile%\.sevenbridges\credentials) and specifying a profile to use. The format of the credentials file is standard ini file format, as shown below:
```bash
[sbpla]
api_endpoint = https://api.sbgenomics.com/v2
auth_token = 700992f7b24a470bb0b028fe813b8100
[cgc]
api_endpoint = https://cgc-api.sbgenomics.com/v2
auth_token = 910975f5b24a470bb0b028fe813b8100
```
To **create** this file<sup>1</sup>, use the following steps in your _Terminal_:
1.
```bash
cd ~
mkdir .sevenbridges
touch .sevenbridges/credentials
vi .sevenbridges/credentials
```
2. Press "i" to enter **insert mode**
3. Write the text above for each environment.
4. Press "ESC", then type ":wq" to save the file and exit vi
<sup>1</sup> If the file already exists, omit the _touch_ command
### Test if you have stored the token correctly
Below are the three options presented above; test **one** of them. Naturally, if you have only completed **Step 3**, then testing **Step 2** will return an error.
```
# (1.) You can instantiate the library explicitly by
# specifying the API url and authentication token
api_explicitly = sbg.Api(url='https://api.sbgenomics.com/v2',
token='<MY TOKEN HERE>')
api_explicitly.users.me()
# (2.) If you have not specified a profile, the sevenbridges-python
# library will search for configuration in the environment
c = sbg.Config()
api_via_environment = sbg.Api(config=c)
api_via_environment.users.me()
# (3.) If you have credentials setup correctly, you only need to specify the profile
config_file = sbg.Config(profile='sbpla')
api_via_ini_file = sbg.Api(config=config_file)
api_via_ini_file.users.me()
```
#### PROTIP
* We _recommend_ the approach with configuration file (the **.sevenbridges/credentials** file in option #3), especially if you are using multiple environments (like SBPLA and CGC).
| github_jupyter |
Manipulating numbers in Python
================
**_Disclaimer_: Much of this section has been transcribed from <a href="https://pymotw.com/2/math/">https://pymotw.com/2/math/</a>**
Every computer represents numbers using the <a href="https://en.wikipedia.org/wiki/IEEE_floating_point">IEEE floating point standard</a>. The **math** module implements many of the IEEE functions that would normally be found in the native platform C libraries for complex mathematical operations using floating point values, including logarithms and trigonometric operations.
The fundamental information about number representation is contained in the module **sys**
```
import sys
sys.float_info
```
From here we can learn, for instance:
```
sys.float_info.max
```
Similarly, we can learn the limits of the IEEE 754 standard
Largest Real = 1.79769e+308, 7fefffffffffffff // -Largest Real = -1.79769e+308, ffefffffffffffff
Smallest Real = 2.22507e-308, 0010000000000000 // -Smallest Real = -2.22507e-308, 8010000000000000
Zero = 0, 0000000000000000 // -Zero = -0, 8000000000000000
eps = 2.22045e-16, 3cb0000000000000 // -eps = -2.22045e-16, bcb0000000000000
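These bit patterns can be verified directly with the standard library's `struct` module; the sketch below packs a float into its 64-bit IEEE 754 representation and prints it as hex (`float_to_hex` is a helper name introduced here for illustration):

```python
import struct
import sys

def float_to_hex(x):
    """Return the 64-bit IEEE 754 bit pattern of a float as a hex string."""
    return struct.pack('>d', x).hex()

print(float_to_hex(sys.float_info.max))      # largest real: 7fefffffffffffff
print(float_to_hex(sys.float_info.min))      # smallest normal real: 0010000000000000
print(float_to_hex(0.0))                     # zero: 0000000000000000
print(float_to_hex(sys.float_info.epsilon))  # eps: 3cb0000000000000
```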
Interestingly, one could define an even larger constant (more about this below)
```
infinity = float("inf")
infinity
infinity/10000
```
## Special constants
Many math operations depend on special constants. **math** includes values for $\pi$ and $e$.
```
import math
print('π: %.30f' % math.pi)
print('e: %.30f' % math.e)
print('nan: {:.30f}'.format(math.nan))
print('inf: {:.30f}'.format(math.inf))
```
Both values are limited in precision only by the platform’s floating point C library.
## Testing for exceptional values
Floating point calculations can result in two types of exceptional values. INF (“infinity”) appears when the double used to hold a floating point value overflows.
There are several reserved bit patterns, mostly those with all ones in the exponent field. If the exponent bits are all ones and the fraction field is zero, the value is infinite; a nonzero fraction field tags the value as Not a Number (NaN).
The IEEE standard specifies:
Inf = Inf, 7ff0000000000000 // -Inf = -Inf, fff0000000000000
    NaN =  NaN,	7ff8000000000000 //	-NaN = -NaN,	fff8000000000000
```
float("inf")-float("inf")
import math
print('{:^3} {:6} {:6} {:6}'.format(
'e', 'x', 'x**2', 'isinf'))
print('{:-^3} {:-^6} {:-^6} {:-^6}'.format(
'', '', '', ''))
for e in range(0, 201, 20):
x = 10.0 ** e
y = x * x
print('{:3d} {:<6g} {:<6g} {!s:6}'.format(
e, x, y, math.isinf(y),))
```
When the exponent in this example grows large enough, the square of x no longer fits inside a double, and the value is recorded as infinite. Not all floating point overflows result in INF values, however. Calculating an exponent with floating point values, in particular, raises OverflowError instead of preserving the INF result.
```
x = 10.0 ** 200
print('x =', x)
print('x*x =', x*x)
try:
print('x**2 =', x**2)
except OverflowError as err:
print(err)
```
This discrepancy is caused by an implementation difference in the underlying C library used by CPython.
Division operations using infinite values are undefined. The result of dividing a number by infinity is NaN (“not a number”).
```
import math
x = (10.0 ** 200) * (10.0 ** 200)
y = x/x
print('x =', x)
print('isnan(x) =', math.isnan(x))
print('y = x / x =', x/x)
print('y == nan =', y == float('nan'))
print('isnan(y) =', math.isnan(y))
```
## Comparing
Comparisons for floating point values can be error prone, with each step of the computation potentially introducing errors due to the numerical representation. The isclose() function uses a stable algorithm to minimize these errors and provide a way for relative as well as absolute comparisons. The formula used is equivalent to
abs(a-b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol)
By default, isclose() uses relative comparison with the tolerance set to 1e-09, meaning that the difference between the values must be less than or equal to 1e-09 times the larger absolute value between a and b. Passing a keyword argument rel_tol to isclose() changes the tolerance. In this example, the values must be within 10% of each other.
The comparison between 0.1 and 0.09 fails because of the error representing 0.1.
```
import math
INPUTS = [
(1000, 900, 0.1),
(100, 90, 0.1),
(10, 9, 0.1),
(1, 0.9, 0.1),
(0.1, 0.09, 0.1),
]
print('{:^8} {:^8} {:^8} {:^8} {:^8} {:^8}'.format(
'a', 'b', 'rel_tol', 'abs(a-b)', 'tolerance', 'close')
)
print('{:-^8} {:-^8} {:-^8} {:-^8} {:-^8} {:-^8}'.format(
'-', '-', '-', '-', '-', '-'),
)
fmt = '{:8.2f} {:8.2f} {:8.2f} {:8.2f} {:8.2f} {!s:>8}'
for a, b, rel_tol in INPUTS:
close = math.isclose(a, b, rel_tol=rel_tol)
tolerance = rel_tol * max(abs(a), abs(b))
abs_diff = abs(a - b)
print(fmt.format(a, b, rel_tol, abs_diff, tolerance, close))
```
To use a fixed or "absolute" tolerance, pass abs_tol instead of rel_tol.
For an absolute tolerance, the difference between the input values must be less than the tolerance given.
```
import math
INPUTS = [
(1.0, 1.0 + 1e-07, 1e-08),
(1.0, 1.0 + 1e-08, 1e-08),
(1.0, 1.0 + 1e-09, 1e-08),
]
print('{:^8} {:^11} {:^8} {:^10} {:^8}'.format(
'a', 'b', 'abs_tol', 'abs(a-b)', 'close')
)
print('{:-^8} {:-^11} {:-^8} {:-^10} {:-^8}'.format(
'-', '-', '-', '-', '-'),
)
for a, b, abs_tol in INPUTS:
close = math.isclose(a, b, abs_tol=abs_tol)
abs_diff = abs(a - b)
print('{:8.2f} {:11} {:8} {:0.9f} {!s:>8}'.format(
a, b, abs_tol, abs_diff, close))
```
nan and inf are special cases.
nan is never close to another value, including itself. inf is only close to itself.
```
import math
print('nan, nan:', math.isclose(math.nan, math.nan))
print('nan, 1.0:', math.isclose(math.nan, 1.0))
print('inf, inf:', math.isclose(math.inf, math.inf))
print('inf, 1.0:', math.isclose(math.inf, 1.0))
```
## Converting to Integers
The math module includes three functions for converting floating point values to whole numbers. Each takes a different approach, and will be useful in different circumstances.
The simplest is trunc(), which truncates the digits following the decimal point, leaving only the whole number portion of the value. floor() rounds its input down to the largest integer less than or equal to it, and ceil() (ceiling) rounds up to the smallest integer greater than or equal to it.
```
import math
print('{:^5} {:^5} {:^5} {:^5} {:^5}'.format('i', 'int', 'trunc', 'floor', 'ceil'))
print('{:-^5} {:-^5} {:-^5} {:-^5} {:-^5}'.format('', '', '', '', ''))
fmt = ' '.join(['{:5.1f}'] * 5)
for i in [ -1.5, -0.8, -0.5, -0.2, 0, 0.2, 0.5, 0.8, 1 ]:
print (fmt.format(i, int(i), math.trunc(i), math.floor(i), math.ceil(i)))
```
## Alternate Representations
**modf()** takes a single floating point number and returns a tuple containing the fractional and whole number parts of the input value.
```
import math
for i in range(6):
print('{}/2 = {}'.format(i, math.modf(i/2.0)))
```
**frexp()** returns the mantissa and exponent of a floating point number, and can be used to create a more portable representation of the value. It uses the formula x = m \* 2 \*\* e, and returns the values m and e.
```
import math
print('{:^7} {:^7} {:^7}'.format('x', 'm', 'e'))
print('{:-^7} {:-^7} {:-^7}'.format('', '', ''))
for x in [ 0.1, 0.5, 4.0 ]:
m, e = math.frexp(x)
print('{:7.2f} {:7.2f} {:7d}'.format(x, m, e))
```
**ldexp()** is the inverse of frexp(). Using the same formula as frexp(), ldexp() takes the mantissa and exponent values as arguments and returns a floating point number.
```
import math
print('{:^7} {:^7} {:^7}'.format('m', 'e', 'x'))
print('{:-^7} {:-^7} {:-^7}'.format('', '', ''))
for m, e in [ (0.8, -3),
(0.5, 0),
(0.5, 3),
]:
x = math.ldexp(m, e)
print('{:7.2f} {:7d} {:7.2f}'.format(m, e, x))
```
## Positive and Negative Signs
The absolute value of a number is its value without a sign. Use **fabs()** to calculate the absolute value of a floating point number.
```
import math
print(math.fabs(-1.1))
print(math.fabs(-0.0))
print(math.fabs(0.0))
print(math.fabs(1.1))
```
To determine the sign of a value, either to give a set of values the same sign or simply for comparison, use **copysign()** to set the sign of a known good value. An extra function like copysign() is needed because comparing NaN and -NaN directly with other values does not work.
```
import math
print('{:^5} {:^5} {:^5} {:^5} {:^5}'.format('f', 's', '< 0', '> 0', '= 0'))
print('{:-^5} {:-^5} {:-^5} {:-^5} {:-^5}'.format('', '', '', '', ''))
for f in [ -1.0,
0.0,
1.0,
float('-inf'),
float('inf'),
float('-nan'),
float('nan'),
]:
s = int(math.copysign(1, f))
print('{:5.1f} {:5d} {!s:5} {!s:5} {!s:5}'.format(f, s, f < 0, f > 0, f==0))
```
## Commonly Used Calculations
Representing precise values in binary floating point memory is challenging. Some values cannot be represented exactly, and the more often a value is manipulated through repeated calculations, the more likely a representation error will be introduced. math includes a function for computing the sum of a series of floating point numbers using an efficient algorithm that minimizes such errors.
```
import math
values = [ 0.1 ] * 10
print('Input values:', values)
print('sum() : {:.20f}'.format(sum(values)))
s = 0.0
for i in values:
s += i
print('for-loop : {:.20f}'.format(s))
print('math.fsum() : {:.20f}'.format(math.fsum(values)))
```
Given a sequence of ten values each equal to 0.1, the expected value for the sum of the sequence is 1.0. Since 0.1 cannot be represented exactly as a floating point value, however, errors are introduced into the sum unless it is calculated with **fsum()**.
**factorial()** is commonly used to calculate the number of permutations and combinations of a series of objects. The factorial of a positive integer n, expressed n!, is defined recursively as (n-1)! * n and stops with 0! == 1. **factorial()** only works with whole numbers, but does accept float arguments as long as they can be converted to an integer without losing value.
```
import math
for i in [ 0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.1 ]:
try:
print('{:2.0f} {:6.0f}'.format(i, math.factorial(i)))
except ValueError as err:
print('Error computing factorial(%s):' % i, err)
```
The modulo operator (%) computes the remainder of a division expression (i.e., 5 % 2 = 1). The operator built into the language works well with integers but, as with so many other floating point operations, intermediate calculations cause representational issues that result in a loss of data. fmod() provides a more accurate implementation for floating point values.
```
import math
print('{:^4} {:^4} {:^5} {:^5}'.format('x', 'y', '%', 'fmod'))
print('---- ---- ----- -----')
for x, y in [ (5, 2),
(5, -2),
(-5, 2),
]:
print('{:4.1f} {:4.1f} {:5.2f} {:5.2f}'.format(x, y, x % y, math.fmod(x, y)))
```
A potentially more frequent source of confusion is that fmod() uses a different algorithm for computing the modulo than %, so the sign of the result differs for mixed-sign inputs.
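A minimal check of the two sign conventions: Python's % takes the sign of the divisor, while math.fmod() takes the sign of the dividend.

```python
import math

# Python's % follows the sign of the divisor (here, +2)...
print(-5 % 2)            # 1
# ...while math.fmod() follows the sign of the dividend (here, -5).
print(math.fmod(-5, 2))  # -1.0
```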
## Exponents and Logarithms
Exponential growth curves appear in economics, physics, and other sciences. Python has a built-in exponentiation operator (“\*\*”), but pow() can be useful when you need to pass a callable function as an argument.
```
import math
for x, y in [
# Typical uses
(2, 3),
(2.1, 3.2),
# Always 1
(1.0, 5),
(2.0, 0),
# Not-a-number
(2, float('nan')),
# Roots
(9.0, 0.5),
(27.0, 1.0/3),
]:
print('{:5.1f} ** {:5.3f} = {:6.3f}'.format(x, y, math.pow(x, y)))
```
Raising 1 to any power always returns 1.0, as does raising any value to a power of 0.0. Most operations on the not-a-number value nan return nan. If the exponent is less than 1, pow() computes a root.
Since square roots (exponent of 1/2) are used so frequently, there is a separate function for computing them.
```
import math
print(math.sqrt(9.0))
print(math.sqrt(3))
try:
print(math.sqrt(-1))
except ValueError as err:
print('Cannot compute sqrt(-1):', err)
```
Computing the square roots of negative numbers requires complex numbers, which are not handled by math. Any attempt to calculate a square root of a negative value results in a ValueError.
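If you do need square roots of negative numbers, the standard library's **cmath** module returns complex results instead of raising an error:

```python
import cmath

# cmath.sqrt accepts negative inputs and returns complex values.
print(cmath.sqrt(-1))  # 1j
print(cmath.sqrt(-9))  # 3j
```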
There are two variations of **log()**. Given floating point representation and rounding errors the computed value produced by **log(x, b)** has limited accuracy, especially for some bases. **log10()** computes **log(x, 10)**, using a more accurate algorithm than **log()**.
```
import math
print('{:2} {:^12} {:^20} {:^20} {:8}'.format('i', 'x', 'accurate', 'inaccurate', 'mismatch'))
print('{:-^2} {:-^12} {:-^20} {:-^20} {:-^8}'.format('', '', '', '', ''))
for i in range(0, 10):
x = math.pow(10, i)
accurate = math.log10(x)
inaccurate = math.log(x, 10)
match = '' if int(inaccurate) == i else '*'
print('{:2d} {:12.1f} {:20.18f} {:20.18f} {:^5}'.format(i, x, accurate, inaccurate, match))
```
The lines in the output with trailing * highlight the inaccurate values.
As with other special-case functions, the function **exp()** uses an algorithm that produces more accurate results than the general-purpose equivalent math.pow(math.e, x).
```
import math
x = 2
fmt = '%.20f'
print(fmt % (math.e ** 2))
print(fmt % math.pow(math.e, 2))
print(fmt % math.exp(2))
```
For more information about other mathematical functions, including trigonometric ones, we refer to <a href="https://pymotw.com/2/math/">https://pymotw.com/2/math/</a>
The Python reference can be found at <a href="https://docs.python.org/2/library/math.html">https://docs.python.org/2/library/math.html</a>
| github_jupyter |
# Seldon-Core Component Demo
If you are reading this then you are about to take Seldon-Core, a model serving framework, for a test drive.
Seldon-Core has been packaged as a [combinator component](https://combinator.ml/components/introduction/), which makes it easy to spin up a combination of MLOps components to make a stack. This notebook is running within the cluster, next to the Seldon-Core installation.
The following demo is a very short introduction to show you how to connect to seldon-core. But I recommend that you follow the [official documentation](https://docs.seldon.io/projects/seldon-core/en/latest/workflow/github-readme.html) for a comprehensive guide.
## Prerequisites
You will primarily interact with Seldon-Core via the Kubernetes API. This means we need to download `kubectl`.
`kubectl` usage, however, requires permission. This notebook needs permission to perform actions on the Kubernetes API. This is achieved in the test drive codebase by connecting the seldon-core operator cluster role to the default service account.
:warning: Connecting pre-existing cluster roles to default service accounts is not a good idea! :warning:
```
!wget -q -O /tmp/kubectl https://dl.k8s.io/release/v1.21.2/bin/linux/amd64/kubectl
!cp /tmp/kubectl /opt/conda/bin # Move the binary to somewhere on the PATH
!chmod +x /opt/conda/bin/kubectl
```
## Deploy a Pre-Trained Model
The manifest below defines a `SeldonDeployment` using a pre-trained sklearn model. This leverages Seldon-Core's sklearn server implementation.
```
%%writefile deployment.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: iris-model
namespace: seldon
spec:
name: iris
predictors:
- graph:
implementation: SKLEARN_SERVER
modelUri: gs://seldon-models/sklearn/iris
name: classifier
name: default
replicas: 1
```
And apply the manifest to the seldon namespace.
```
!kubectl -n seldon apply -f deployment.yaml
!kubectl -n seldon rollout status deployment/iris-model-default-0-classifier
```
## Call The Model
The model container has downloaded a pre-trained model and instantiated it inside a serving container. You can now call the hosted endpoint.
Seldon-core uses a service mesh to call the endpoint, so here you need to point the call towards the ingress gateway of your service mesh. In this case it's the default Istio ingress gateway and I'm able to use the internal Kubernetes DNS because this notebook is running in the cluster.
```
import json, urllib.request
url = "http://istio-ingressgateway.istio-system.svc/seldon/seldon/iris-model/api/v1.0/predictions"
data = { "data": { "ndarray": [[1,2,3,4]] } }
params = json.dumps(data).encode('utf8')
req = urllib.request.Request(url,
data=params,
headers={'content-type': 'application/json'})
response = urllib.request.urlopen(req)
print(json.dumps(json.loads(response.read()), indent=4, sort_keys=True))
```
| github_jupyter |
```
import numpy as np
import random
twopi = 2.*np.pi
oneOver2Pi = 1./twopi
import time
def time_usage(func):
def wrapper(*args, **kwargs):
beg_ts = time.time()
retval = func(*args, **kwargs)
end_ts = time.time()
print("elapsed time: %f" % (end_ts - beg_ts))
return retval
return wrapper
#
# For the jam multiruns
# [iso, D, T, X, U, L]
mode = "edge_3"
runs = {1:"edge_3_7.00", 0:"edge_3_14.00"}
in_dir = "/home/walterms/project/walterms/mcmd/output/scratch/"+mode+"/"
trn_dir = "/home/walterms/project/walterms/mcmd/nn/data/train/"
test_dir = "/home/walterms/project/walterms/mcmd/nn/data/test/"
unlabeled_dir = "/home/walterms/project/walterms/mcmd/nn/data/unlbl/"
jidx = np.arange(2,18)
testidxs = np.arange(0,2) # want 400 ea
nblSkip = 1 # Skip first image
# noiseLvl: sigma of Gaussian in units of rod length
rodlen = 1.0
noiseLvl = 0.00*rodlen
thnoise = 0.00
noiseappend = ""
if noiseLvl > 0.0:
noiseappend = "_"+str(noiseLvl)
processTrain(noise=noiseLvl)
@time_usage
def processTrain(noise=0.):
for lbl in runs:
name = runs[lbl]
trnlim = -1
trnfnames = [name+"_"+str(i) for i in jidx]
fout = open(trn_dir+name+noiseappend,'w') #erases file
fout.close()
for f in trnfnames:
fin = open(in_dir+f,'r')
print("processing " + f + noiseappend + " for training data")
fout = open(trn_dir+name+noiseappend,'a')
# find width from file header
width, height = 0., 0.
l = fin.readline().split("|")
for ll in l:
if "boxEdge" in ll:
width = float(ll.split()[1])
height = width
fin.seek(0)
if width == 0.:
# calculate edge length based on vertices of first block
block = []
for line in fin.readlines():
if line == "\n": break
if line[0].isalpha(): continue
block.append(line)
fin.seek(0)
width, height = edgeLenCalc(block)
if not (fin.readline()[0].isalpha()): fin.seek(0)
thNorm = oneOver2Pi
normX, normY = 1./width, 1./height # normalize x and y
nbl = 0
fRot = 0. # rotation factor: 0,1,2,3. Multiplied by pi/2
block = []
for line in fin.readlines():
if line == "\n":
if nbl < nblSkip:
nbl+=1
block = []
continue
fRot = random.randint(0,3)
for l in block:
fout.write('%f %f %f\n' % (l[0], l[1], l[2]))
fout.write('label %f\n\n' % (lbl))
block = []
nbl+=1
continue
rndxy = [0.,0.]
rndth = 0.
if noise > 0.:
# Gen three random numbers
rndxy = np.random.normal(0,noise,2)
rndth = np.random.normal(0,twopi*thnoise,1)
# rndxy = [0.,0.]
# rndth = 0.
spt = [float(x) for x in line.split()]
x,y,th = spt[2],spt[3],spt[4]
# Rotate block
# note thetas should be [0,2pi] initially
th_ = fRot*twopi*0.25
th += th_ + rndth
if th > twopi: th-=twopi
th *= thNorm
x = np.cos(th_)*spt[2] - np.sin(th_)*spt[3] + rndxy[0]
y = np.sin(th_)*spt[2] + np.cos(th_)*spt[3] + rndxy[1]
# shift and normalize
x *= normX
y *= normY
block.append([x,y,th])
fout.close()
fin.close()
print("Done processing training files")
r = np.random.normal(0,noiseLvl,2)
r[0]
processTest()
@time_usage
def processTest():
for lbl in runs:
name = runs[lbl]
testfnames = [name+"_"+str(i) for i in testidxs]
fout = open(test_dir+name,'w') #erases file
fout.close()
for f in testfnames:
fin = open(in_dir+f,'r')
print("processing " + f + " for testing data")
fout = open(test_dir+name,'a')
# find width from file header
width, height = 0., 0.
l = fin.readline().split("|")
for ll in l:
if "boxEdge" in ll:
width = float(ll.split()[1])
height = width
fin.seek(0)
if width == 0.:
# calculate edge length based on vertices of first block
block = []
for line in fin.readlines():
if line == "\n": break
if line[0].isalpha(): continue
block.append(line)
fin.seek(0)
width, height = edgeLenCalc(block)
if not (fin.readline()[0].isalpha()): fin.seek(0)
thNorm = oneOver2Pi
normX, normY = 1./width, 1./height # normalize x and y
nbl = 0
fRot = 0. # rotation factor: 0,1,2,3. Multiplied by pi/2
block = []
for line in fin.readlines():
if line == "\n":
if nbl < 1:
nbl+=1
block = []
continue
fRot = random.randint(0,3)
for l in block:
fout.write('%f %f %f\n' % (l[0], l[1], l[2]))
fout.write('label %f\n\n' % (lbl))
block = []
nbl+=1
continue
spt = [float(x) for x in line.split()]
x,y,th = spt[2],spt[3],spt[4]
# Rotate block
# note thetas should be [0,2pi] initially
th_ = fRot*twopi*0.25
th += th_
if th > twopi: th-=twopi
th *= thNorm
x = np.cos(th_)*spt[2] - np.sin(th_)*spt[3]
y = np.sin(th_)*spt[2] + np.cos(th_)*spt[3]
# shift and normalize
x *= normX
y *= normY
block.append([x,y,th])
fout.close()
fin.close()
print("Done processing testing files")
edges = []
ein = open("/home/walterms/mcmd/edge_3",'r')
for line in ein.readlines():
edges.append(float(line))
unlblnames = [mode+"_"+"%.2f"%(e) for e in edges]
uidx = np.arange(0,18)
processUnlbl()
@time_usage
def processUnlbl(noise=0.):
nlimPerFile = 270+nblSkip
for run in unlblnames:
fnames = [run+"_"+str(i) for i in uidx]
fout = open(unlabeled_dir+run+noiseappend,'w') #erases file
fout.close()
for f in fnames:
fin = open(in_dir+f,'r')
print("processing " + f + noiseappend + " for unlabeled data")
fout = open(unlabeled_dir+run+noiseappend,'a')
# find width from file header
width, height = 0., 0.
l = fin.readline().split("|")
for ll in l:
if "boxEdge" in ll:
width = float(ll.split()[1])
height = width
fin.seek(0)
if width == 0.:
# calculate edge length based on vertices of first block
block = []
for line in fin.readlines():
if line == "\n": break
if line[0].isalpha(): continue
block.append(line)
fin.seek(0)
width, height = edgeLenCalc(block)
if not (fin.readline()[0].isalpha()): fin.seek(0)
thNorm = oneOver2Pi
normX, normY = 1./width, 1./height # normalize x and y
nbl = 0
fRot = 0. # rotation factor: 0,1,2,3. Multiplied by pi/2
block = []
for line in fin.readlines():
if line == "\n":
if nbl < nblSkip:
nbl+=1
block = []
continue
fRot = random.randint(0,3)
for l in block:
fout.write('%f %f %f\n' % (l[0], l[1], l[2]))
fout.write('\n')
block = []
nbl+=1
if nbl == nlimPerFile:
break
else:
continue
rndxy = [0.,0.]
rndth = 0.
if noise > 0.:
# Gen three random numbers
rndxy = np.random.normal(0,noise,2)
rndth = np.random.normal(0,twopi*thnoise,1)
# rndxy = [0.,0.]
# rndth = 0.
spt = [float(x) for x in line.split()]
x,y,th = spt[2],spt[3],spt[4]
# Rotate block
# note thetas should be [0,2pi] initially
th_ = fRot*twopi*0.25
th += th_ + rndth
if th > twopi: th-=twopi
th *= thNorm
x = np.cos(th_)*spt[2] - np.sin(th_)*spt[3] + rndxy[0]
y = np.sin(th_)*spt[2] + np.cos(th_)*spt[3] + rndxy[1]
# shift and normalize
x *= normX
y *= normY
block.append([x,y,th])
fout.close()
fin.close()
print("Done processing unlbl files")
```
| github_jupyter |
#### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Classification on imbalanced data
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/structured_data/imbalanced_data"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/structured_data/imbalanced_data.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/structured_data/imbalanced_data.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/structured_data/imbalanced_data.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This tutorial demonstrates how to classify a highly imbalanced dataset in which the number of examples in one class greatly outnumbers the examples in another. You will work with the [Credit Card Fraud Detection](https://www.kaggle.com/mlg-ulb/creditcardfraud) dataset hosted on Kaggle. The aim is to detect a mere 492 fraudulent transactions from 284,807 transactions in total. You will use [Keras](../../guide/keras/overview.ipynb) to define the model and [class weights](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/Model) to help the model learn from the imbalanced data. You will display metrics for precision, recall, true positives, false positives, true negatives, false negatives, and AUC while training the model. These are more informative than accuracy when working with imbalanced classification datasets.
This tutorial contains complete code to:
* Load a CSV file using Pandas.
* Create train, validation, and test sets.
* Define and train a model using Keras (including setting class weights).
* Evaluate the model using various metrics (including precision and recall).
## Import TensorFlow and other libraries
```
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
!pip install imblearn
import tensorflow as tf
from tensorflow import keras
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from imblearn.over_sampling import SMOTE
```
## Use Pandas to get the Kaggle Credit Card Fraud data set
Pandas is a Python library with many helpful utilities for loading and working with structured data and can be used to download CSVs into a dataframe.
Note: This dataset has been collected and analysed during a research collaboration of Worldline and the [Machine Learning Group](http://mlg.ulb.ac.be) of ULB (Université Libre de Bruxelles) on big data mining and fraud detection. More details on current and past projects on related topics are available [here](https://www.researchgate.net/project/Fraud-detection-5) and the page of the [DefeatFraud](https://mlg.ulb.ac.be/wordpress/portfolio_page/defeatfraud-assessment-and-validation-of-deep-feature-engineering-and-learning-solutions-for-fraud-detection/) project
```
raw_df = pd.read_csv('https://storage.googleapis.com/download.tensorflow.org/data/creditcard.csv')
raw_df.head()
```
## Split the dataframe into train, validation, and test
Split the dataset into train, validation, and test sets. The validation set is used during model fitting to evaluate the loss and any metrics; however, the model is not fit with this data. The test set is completely unused during the training phase and is only used at the end to evaluate how well the model generalizes to new data. This is especially important with imbalanced datasets, where [overfitting](https://developers.google.com/machine-learning/crash-course/generalization/peril-of-overfitting) is a significant concern given the lack of training data.
```
# Use a utility from sklearn to split and shuffle our dataset.
train_df, test_df = train_test_split(raw_df, test_size=0.2)
train_df, val_df = train_test_split(train_df, test_size=0.2)
# Form np arrays of labels and features.
train_labels = np.array(train_df.pop('Class'))
val_labels = np.array(val_df.pop('Class'))
test_labels = np.array(test_df.pop('Class'))
train_features = np.array(train_df)
val_features = np.array(val_df)
test_features = np.array(test_df)
# Normalize the input features using the sklearn StandardScaler.
# This will set the mean to 0 and standard deviation to 1.
scaler = StandardScaler()
train_features = scaler.fit_transform(train_features)
val_features = scaler.transform(val_features)
test_features = scaler.transform(test_features)
print('Training labels shape:', train_labels.shape)
print('Validation labels shape:', val_labels.shape)
print('Test labels shape:', test_labels.shape)
print('Training features shape:', train_features.shape)
print('Validation features shape:', val_features.shape)
print('Test features shape:', test_features.shape)
```
## Examine the class label imbalance
Let's look at the dataset imbalance:
```
neg, pos = np.bincount(train_labels)
total = neg + pos
print('{} positive samples out of {} training samples ({:.2f}% of total)'.format(
pos, total, 100 * pos / total))
```
This shows a small fraction of positive samples.
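One common way to compensate for this imbalance is to weight each class inversely to its frequency and pass the result to `model.fit(..., class_weight=...)`. A sketch, using illustrative counts (in the notebook, `neg` and `pos` come from `np.bincount(train_labels)` as above):

```python
# Illustrative counts for this dataset split; the real values come from
# np.bincount(train_labels) in the cell above.
neg, pos = 227_451, 394
total = neg + pos

# Scaling by total / 2 keeps the overall loss magnitude roughly unchanged.
weight_for_0 = (1 / neg) * (total / 2.0)
weight_for_1 = (1 / pos) * (total / 2.0)
class_weight = {0: weight_for_0, 1: weight_for_1}

print('Weight for class 0: {:.2f}'.format(weight_for_0))
print('Weight for class 1: {:.2f}'.format(weight_for_1))
```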
## Define the model and metrics
Define a function that creates a simple neural network with three densely connected hidden layers, an output sigmoid layer that returns the probability of a transaction being fraudulent, and two [dropout](https://developers.google.com/machine-learning/glossary/#dropout_regularization) layers as an effective way to reduce overfitting.
```
def make_model():
model = keras.Sequential([
keras.layers.Dense(256, activation='relu',
input_shape=(train_features.shape[-1],)),
keras.layers.Dense(256, activation='relu'),
keras.layers.Dropout(0.3),
keras.layers.Dense(256, activation='relu'),
keras.layers.Dropout(0.3),
keras.layers.Dense(1, activation='sigmoid'),
])
metrics = [
      keras.metrics.BinaryAccuracy(name='accuracy'),
keras.metrics.TruePositives(name='tp'),
keras.metrics.FalsePositives(name='fp'),
keras.metrics.TrueNegatives(name='tn'),
keras.metrics.FalseNegatives(name='fn'),
keras.metrics.Precision(name='precision'),
keras.metrics.Recall(name='recall'),
keras.metrics.AUC(name='auc')
]
model.compile(
optimizer='adam',
loss='binary_crossentropy',
metrics=metrics)
return model
```
## Understanding useful metrics
Notice that a few of the metrics defined above can be computed by the model and will be helpful when evaluating performance.
* **False** negatives and **false** positives are samples that were **incorrectly** classified
* **True** negatives and **true** positives are samples that were **correctly** classified
* **Accuracy** is the percentage of examples correctly classified
> $\frac{\text{true positives + true negatives}}{\text{total samples}}$
* **Precision** is the percentage of **predicted** positives that were correctly classified
> $\frac{\text{true positives}}{\text{true positives + false positives}}$
* **Recall** is the percentage of **actual** positives that were correctly classified
> $\frac{\text{true positives}}{\text{true positives + false negatives}}$
* **AUC** refers to the Area Under the Curve of a Receiver Operating Characteristic curve (ROC-AUC). This metric is equal to the probability that a classifier will rank a random positive sample higher than a random negative sample.
<br>
Read more:
* [True vs. False and Positive vs. Negative](https://developers.google.com/machine-learning/crash-course/classification/true-false-positive-negative)
* [Accuracy](https://developers.google.com/machine-learning/crash-course/classification/accuracy)
* [Precision and Recall](https://developers.google.com/machine-learning/crash-course/classification/precision-and-recall)
* [ROC-AUC](https://developers.google.com/machine-learning/crash-course/classification/roc-and-auc)
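The formulas above are easy to sanity-check directly from raw confusion-matrix counts; a minimal sketch (the counts below are made up purely for illustration):

```python
# Hypothetical confusion-matrix counts, for illustration only
tp, fp, tn, fn = 90, 30, 56800, 10

accuracy = (tp + tn) / (tp + fp + tn + fn)   # fraction of all examples classified correctly
precision = tp / (tp + fp)                   # of predicted positives, how many were right
recall = tp / (tp + fn)                      # of actual positives, how many were found

print('accuracy:  {:.4f}'.format(accuracy))
print('precision: {:.4f}'.format(precision))
print('recall:    {:.4f}'.format(recall))
```

Note that with these counts accuracy is near 1 even though 10 of 100 actual positives were missed — exactly why accuracy alone is misleading on imbalanced data.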
## Train a baseline model
Now create and train your model using the function that was defined earlier. Notice that the model is fit using a larger-than-default batch size of 2048; this is important to ensure that each batch has a decent chance of containing a few positive samples. If the batch size were too small, many batches would likely have no fraudulent transactions to learn from.
Note: this model will not handle the class imbalance well. You will improve it later in this tutorial.
```
model = make_model()
EPOCHS = 10
BATCH_SIZE = 2048
history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
validation_data=(val_features, val_labels))
```
## Plot metrics on the training and validation sets
In this section, you will produce plots of your model's accuracy and loss on the training and validation set. These are useful to check for overfitting, which you can learn more about in this [tutorial](https://www.tensorflow.org/tutorials/keras/overfit_and_underfit).
Additionally, you can produce these plots for any of the metrics you created above. False negatives are included as an example.
```
epochs = range(EPOCHS)
plt.title('Accuracy')
plt.plot(epochs, history.history['accuracy'], color='blue', label='Train')
plt.plot(epochs, history.history['val_accuracy'], color='orange', label='Val')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()
_ = plt.figure()
plt.title('Loss')
plt.plot(epochs, history.history['loss'], color='blue', label='Train')
plt.plot(epochs, history.history['val_loss'], color='orange', label='Val')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
_ = plt.figure()
plt.title('False Negatives')
plt.plot(epochs, history.history['fn'], color='blue', label='Train')
plt.plot(epochs, history.history['val_fn'], color='orange', label='Val')
plt.xlabel('Epoch')
plt.ylabel('False Negatives')
plt.legend()
```
## Evaluate the baseline model
Evaluate your model on the test dataset and display results for the metrics you created above.
```
results = model.evaluate(test_features, test_labels)
for name, value in zip(model.metrics_names, results):
print(name, ': ', value)
```
It looks like the precision is relatively high, but the recall and AUC aren't as high as you might like. Classifiers often face challenges when trying to maximize both precision and recall, which is especially true when working with imbalanced datasets. However, because missing fraudulent transactions (false negatives) may have significantly worse business consequences than incorrectly flagging fraudulent transactions (false positives), recall may be more important than precision in this case.
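Because the model outputs probabilities, one simple knob for trading precision against recall is the decision threshold (0.5 by default when using `np.round`). A minimal sketch with made-up scores and labels — not the tutorial's model — showing how lowering the threshold raises recall at the cost of precision:

```python
import numpy as np

# Hypothetical predicted probabilities and true labels, for illustration only
scores = np.array([0.05, 0.10, 0.30, 0.40, 0.55, 0.60, 0.80, 0.95])
labels = np.array([0,    0,    0,    1,    0,    1,    1,    1   ])

def precision_recall(scores, labels, threshold):
    # Classify as positive when the score meets the threshold
    preds = (scores >= threshold).astype(int)
    tp = np.sum((preds == 1) & (labels == 1))
    fp = np.sum((preds == 1) & (labels == 0))
    fn = np.sum((preds == 0) & (labels == 1))
    return tp / (tp + fp), tp / (tp + fn)

for threshold in (0.5, 0.3):
    p, r = precision_recall(scores, labels, threshold)
    print('threshold={}: precision={:.2f}, recall={:.2f}'.format(threshold, p, r))
```

With this toy data, dropping the threshold from 0.5 to 0.3 catches every positive (recall rises) while admitting more false positives (precision falls) — the trade-off described above.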
## Examine the confusion matrix
You can use a [confusion matrix](https://developers.google.com/machine-learning/glossary/#confusion_matrix) to summarize the actual vs. predicted labels where the X axis is the predicted label and the Y axis is the actual label.
```
predicted_labels = model.predict(test_features)
cm = confusion_matrix(test_labels, np.round(predicted_labels))
plt.matshow(cm, alpha=0)
plt.title('Confusion matrix')
plt.ylabel('Actual label')
plt.xlabel('Predicted label')
for (i, j), z in np.ndenumerate(cm):
plt.text(j, i, str(z), ha='center', va='center')
plt.show()
print('Legitimate Transactions Detected (True Negatives): ', cm[0][0])
print('Legitimate Transactions Incorrectly Detected (False Positives): ', cm[0][1])
print('Fraudulent Transactions Missed (False Negatives): ', cm[1][0])
print('Fraudulent Transactions Detected (True Positives): ', cm[1][1])
print('Total Fraudulent Transactions: ', np.sum(cm[1]))
```
If the model had predicted everything perfectly, this would be a [diagonal matrix](https://en.wikipedia.org/wiki/Diagonal_matrix) where values off the main diagonal, indicating incorrect predictions, would be zero. In this case the matrix shows that you have relatively few false positives, meaning that there were relatively few legitimate transactions that were incorrectly flagged. However, you would likely want to have even fewer false negatives despite the cost of increasing the number of false positives. This trade off may be preferable because false negatives would allow fraudulent transactions to go through, whereas false positives may cause an email to be sent to a customer to ask them to verify their card activity.
## Using class weights for the loss function
The goal is to identify fraudulent transactions, but you don't have very many of those positive samples to work with, so you would want to have the classifier heavily weight the few examples that are available. You can do this by passing Keras a weight for each class through the `class_weight` parameter. These weights cause the model to "pay more attention" to examples from an under-represented class.
```
weight_for_0 = 1 / neg
weight_for_1 = 1 / pos
class_weight = {0: weight_for_0, 1: weight_for_1}
print('Weight for class 0: {:.2e}'.format(weight_for_0))
print('Weight for class 1: {:.2e}'.format(weight_for_1))
```
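One common refinement (not applied above) is to scale each weight by `total / 2`, which keeps the average per-example weight at 1 so the loss magnitude stays comparable to the unweighted model; a sketch with made-up class counts:

```python
# Hypothetical class counts, for illustration only
neg, pos = 56000, 100
total = neg + pos

# Scaling by total / 2 keeps the mean per-example weight at exactly 1
weight_for_0 = (1 / neg) * (total / 2)
weight_for_1 = (1 / pos) * (total / 2)
class_weight = {0: weight_for_0, 1: weight_for_1}

# Average weight over the whole dataset
mean_weight = (neg * weight_for_0 + pos * weight_for_1) / total
print('Weight for class 0: {:.2f}'.format(weight_for_0))
print('Weight for class 1: {:.2f}'.format(weight_for_1))
print('Mean per-example weight: {:.2f}'.format(mean_weight))
```

The relative weighting between the classes is identical to the `1 / count` scheme above; only the overall scale of the loss changes.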
## Train a model with class weights
Now try re-training and evaluating the model with class weights to see how that affects the predictions.
Note: Using `class_weight` changes the range of the loss. This may affect the stability of training depending on the optimizer. Optimizers whose step size depends on the magnitude of the gradient, like `optimizers.SGD`, may fail. The optimizer used here, `optimizers.Adam`, is unaffected by the scaling change. Also note that because of the weighting, the total losses are not comparable between the two models.
```
weighted_model = make_model()
weighted_history = weighted_model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
validation_data=(val_features, val_labels),
class_weight=class_weight)
weighted_results = weighted_model.evaluate(test_features, test_labels)
for name, value in zip(weighted_model.metrics_names, weighted_results):
print(name, ': ', value)
```
Here you can see that with class weights the accuracy and precision are lower because there are more false positives, but conversely the recall and AUC are higher because the model also found more true positives. Despite having lower overall accuracy, this approach may be preferable when the consequences of failing to identify fraudulent transactions drive the prioritization of recall. Depending on how bad false negatives are, you might use even more exaggerated weights to further improve recall while dropping precision.
## Oversampling the minority class
A related approach would be to resample the dataset by oversampling the minority class, which is the process of creating more positive samples using something like the [imbalanced-learn library](https://github.com/scikit-learn-contrib/imbalanced-learn). This library provides methods to create new positive samples by simply duplicating random existing samples, or by interpolating between them to generate synthetic samples using variations of [SMOTE](https://en.wikipedia.org/wiki/Oversampling_and_undersampling_in_data_analysis#Oversampling_techniques_for_classification_problems). TensorFlow also provides a way to do [Random Oversampling](https://www.tensorflow.org/api_docs/python/tf/data/experimental/sample_from_datasets).
```
# with default args this will oversample the minority class to have an equal
# number of observations
smote = SMOTE()
res_features, res_labels = smote.fit_resample(train_features, train_labels)  # fit_sample in older imbalanced-learn versions
res_neg, res_pos = np.bincount(res_labels)
res_total = res_neg + res_pos
print('{} positive samples out of {} training samples ({:.2f}% of total)'.format(
res_pos, res_total, 100 * res_pos / res_total))
```
## Train and evaluate a model on the resampled data
Now try training the model with the resampled data set instead of using class weights to see how these methods compare.
```
resampled_model = make_model()
resampled_history = resampled_model.fit(
res_features,
res_labels,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
validation_data=(val_features, val_labels))
resampled_results = resampled_model.evaluate(test_features, test_labels)
for name, value in zip(resampled_model.metrics_names, resampled_results):
print(name, ': ', value)
```
This approach can be worth trying, but may not provide better results than using class weights because the synthetic examples may not accurately represent the underlying data.
## Applying this tutorial to your problem
Imbalanced data classification is an inherently difficult task since there are so few samples to learn from. You should always start with the data first and do your best to collect as many samples as possible, and give substantial thought to what features may be relevant so the model can get the most out of your minority class. At some point your model may struggle to improve and yield the results you want, so it is important to keep in mind the context of the problem to evaluate how bad your false positives or negatives really are.
# Synthetic Data Generation
```
import numpy as np
import pandas as pd
from math import exp, log, log10, sqrt
from scipy.integrate import odeint
from scipy.stats import norm, lognorm
# The Complete model
def deriv(y, t, phiS, phiL, deltaS, deltaL, deltaAb):
dydt = phiS * exp(-deltaS * t) + phiL * exp(-deltaL * t) - deltaAb * y
return dydt
def analytic(A0, time, phiS, phiL, deltaS, deltaL, deltaAb):
y = []
for t in time:
A=(A0-phiS/(deltaAb-deltaS)-phiL/(deltaAb-deltaL))*exp(-deltaAb*t)\
+phiS/(deltaAb-deltaS)*exp(-deltaS*t)+phiL/(deltaAb-deltaL)*exp(-deltaL*t)
y.append(A)
return y
def sample_id_params(model_params, groupHav720=False):
# sample parameters from their distributions
A0 = norm.rvs(model_params['A0_mean'],model_params['A0_std'])
phiS = exp(norm.rvs(model_params['ln_phiS_mean'],model_params['ln_phiS_std']))
deltaAb = exp(norm.rvs(model_params['ln_deltaAb_mean'],model_params['ln_deltaAb_std']))
if groupHav720:
phiL = exp(norm.rvs(model_params['ln_phiL_mean'],model_params['ln_phiL_std'])+
model_params['beta_phiL_Hav720'])
deltaS = exp(norm.rvs(model_params['ln_deltaS_mean'],model_params['ln_deltaS_std'])+
model_params['beta_deltaS_Hav720'])
deltaL = exp(norm.rvs(model_params['ln_deltaL_mean'],model_params['ln_deltaL_std'])+
model_params['beta_deltaL_Hav720'])
else:
phiL = exp(norm.rvs(model_params['ln_phiL_mean'],model_params['ln_phiL_std']))
deltaS = exp(norm.rvs(model_params['ln_deltaS_mean'],model_params['ln_deltaS_std']))
deltaL = exp(norm.rvs(model_params['ln_deltaL_mean'],model_params['ln_deltaL_std']))
return A0, (phiS, phiL, deltaS, deltaL, deltaAb)
# True parameters: we suppose that they are log-normal distributed
ln_phiS_mean = log(1)
ln_phiS_std = 0.2
ln_phiL_mean = log(0.54)
ln_phiL_std = 0.1
ln_deltaS_mean = log(0.069)
ln_deltaS_std = 0.5
ln_deltaL_mean = log(1.8e-6)
ln_deltaL_std = 1
ln_deltaAb_mean = log(0.79)
ln_deltaAb_std = 0.1
beta_phiL_Hav720 = -1
beta_deltaS_Hav720 = -0.5
beta_deltaL_Hav720 = 3
# Initial conditions on A0 is supposed to be normally distributed:
A0_mean = 8
A0_std = 0.1
# Finally, we will add an additive error to the log10-transformed data. The error follows a zero-mean gaussian
# distribution with variance:
sigma2 = 0.01
model_params = {'ln_phiS_mean':ln_phiS_mean,'ln_phiL_mean':ln_phiL_mean,'ln_deltaS_mean':ln_deltaS_mean,
'ln_deltaL_mean':ln_deltaL_mean,'ln_deltaAb_mean':ln_deltaAb_mean,
'ln_phiS_std':ln_phiS_std,'ln_phiL_std':ln_phiL_std,'ln_deltaS_std':ln_deltaS_std,
'ln_deltaL_std':ln_deltaL_std,'ln_deltaAb_std':ln_deltaAb_std,
'beta_phiL_Hav720':beta_phiL_Hav720,'beta_deltaS_Hav720':beta_deltaS_Hav720,
'beta_deltaL_Hav720':beta_deltaL_Hav720,'A0_mean':A0_mean,'A0_std':A0_std}
# Time points: we suppose that all participants have observation at all time points. Note: here time is in months.
time = np.linspace(0,36,10)
# We are going to generate 100 patients form HavrixTM 1440 dataset and 100 patients from HavrixTM 720 dataset
N1, N2 = 100, 100
data = []
for n in range(N1+N2):
if n < N1:
A0, id_params = sample_id_params(model_params,groupHav720 = False)
error = norm.rvs(0,sqrt(sigma2))
phiS, phiL, deltaS, deltaL, deltaAb = id_params
y_t = analytic(A0, time, phiS, phiL, deltaS, deltaL, deltaAb)
#ret = odeint(deriv, A0, time, args=id_params)
#y_t = ret.T[0]
for t in range(len(y_t)):
data.append([n+1,time[t],log10(y_t[t])+error,A0,0])
else:
A0, id_params = sample_id_params(model_params,groupHav720 = True)
error = norm.rvs(0,sqrt(sigma2))
phiS, phiL, deltaS, deltaL, deltaAb = id_params
y_t = analytic(A0, time, phiS, phiL, deltaS, deltaL, deltaAb)
#ret = odeint(deriv, A0, time, args=id_params)
#y_t = ret.T[0]
for t in range(len(y_t)):
data.append([n+1,time[t],log10(y_t[t])+error,A0,1])
dataframe = pd.DataFrame(data, columns=['ID', 'TIME', 'OBS', 'OBS_0', 'GROUP'])
# Save the obtained dataframe as simulated_AB_response.csv
dataframe.to_csv('simulated_AB_response.csv',sep=',',index=False)
####if you are using Colab:
from google.colab import files
files.download('simulated_AB_response.csv')
```
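The commented-out `odeint` calls above hint at a useful sanity check: the closed-form `analytic` expression should satisfy the ODE defined by `deriv`. A self-contained sketch that verifies this with a finite-difference derivative, using arbitrary illustrative parameter values (close to the population means above, but not the sampled ones):

```python
import numpy as np

def deriv(y, t, phiS, phiL, deltaS, deltaL, deltaAb):
    # dA/dt = phiS*exp(-deltaS*t) + phiL*exp(-deltaL*t) - deltaAb*A
    return phiS * np.exp(-deltaS * t) + phiL * np.exp(-deltaL * t) - deltaAb * y

def analytic(A0, t, phiS, phiL, deltaS, deltaL, deltaAb):
    # Closed-form solution of the linear ODE above with A(0) = A0
    return ((A0 - phiS / (deltaAb - deltaS) - phiL / (deltaAb - deltaL)) * np.exp(-deltaAb * t)
            + phiS / (deltaAb - deltaS) * np.exp(-deltaS * t)
            + phiL / (deltaAb - deltaL) * np.exp(-deltaL * t))

# Arbitrary illustrative parameters: phiS, phiL, deltaS, deltaL, deltaAb
params = (1.0, 0.54, 0.069, 1.8e-6, 0.79)
A0, eps = 8.0, 1e-6

for t in [0.5, 5.0, 20.0, 35.0]:
    # Central finite difference of the closed form vs. the ODE right-hand side
    lhs = (analytic(A0, t + eps, *params) - analytic(A0, t - eps, *params)) / (2 * eps)
    rhs = deriv(analytic(A0, t, *params), t, *params)
    print('t={:5.1f}: dA/dt (finite diff) = {: .6f}, deriv = {: .6f}'.format(t, lhs, rhs))
```

The two columns should agree to several decimal places at every time point, confirming that the closed form is a valid substitute for numerical integration in the data-generation loop.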
# Discretization
---
In this notebook, you will deal with continuous state and action spaces by discretizing them. This will enable you to apply reinforcement learning algorithms that are only designed to work with discrete spaces.
### 1. Import the Necessary Packages
```
import sys
import gym
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# Set plotting options
%matplotlib inline
plt.style.use('ggplot')
np.set_printoptions(precision=3, linewidth=120)
```
### 2. Specify the Environment, and Explore the State and Action Spaces
We'll use [OpenAI Gym](https://gym.openai.com/) environments to test and develop our algorithms. These simulate a variety of classic as well as contemporary reinforcement learning tasks. Let's use an environment that has a continuous state space, but a discrete action space.
```
# Create an environment and set random seed
env = gym.make('MountainCar-v0')
env.seed(505);
```
Run the next code cell to watch a random agent.
```
state = env.reset()
score = 0
for t in range(200):
action = env.action_space.sample()
env.render()
state, reward, done, _ = env.step(action)
score += reward
if done:
break
print('Final score:', score)
env.close()
```
In this notebook, you will train an agent to perform much better! For now, we can explore the state and action spaces, as well as sample them.
```
# Explore state (observation) space
print("State space:", env.observation_space)
print("- low:", env.observation_space.low)
print("- high:", env.observation_space.high)
# Generate some samples from the state space
print("State space samples:")
print(np.array([env.observation_space.sample() for i in range(10)]))
# Explore the action space
print("Action space:", env.action_space)
# Generate some samples from the action space
print("Action space samples:")
print(np.array([env.action_space.sample() for i in range(10)]))
```
### 3. Discretize the State Space with a Uniform Grid
We will discretize the space using a uniformly-spaced grid. Implement the following function to create such a grid, given the lower bounds (`low`), upper bounds (`high`), and number of desired `bins` along each dimension. It should return the split points for each dimension, which will be 1 less than the number of bins.
For instance, if `low = [-1.0, -5.0]`, `high = [1.0, 5.0]`, and `bins = (10, 10)`, then your function should return the following list of 2 NumPy arrays:
```
[array([-0.8, -0.6, -0.4, -0.2, 0.0, 0.2, 0.4, 0.6, 0.8]),
array([-4.0, -3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0, 4.0])]
```
Note that the ends of `low` and `high` are **not** included in these split points. It is assumed that any value below the lowest split point maps to index `0` and any value above the highest split point maps to index `n-1`, where `n` is the number of bins along that dimension.
```
def create_uniform_grid(low, high, bins=(10, 10)):
"""Define a uniformly-spaced grid that can be used to discretize a space.
Parameters
----------
low : array_like
Lower bounds for each dimension of the continuous space.
high : array_like
Upper bounds for each dimension of the continuous space.
bins : tuple
Number of bins along each corresponding dimension.
Returns
-------
grid : list of array_like
A list of arrays containing split points for each dimension.
"""
# TODO: Implement this
grid = [np.linspace(low[dim], high[dim], bins[dim] + 1)[1:-1] for dim in range(len(bins))]
print("Uniform grid: [<low>, <high>] / <bins> => <splits>")
for l, h, b, splits in zip(low, high, bins, grid):
print(" [{}, {}] / {} => {}".format(l, h, b, splits))
return grid
low = [-1.0, -5.0]
high = [1.0, 5.0]
create_uniform_grid(low, high) # [test]
```
Now write a function that can convert samples from a continuous space into its equivalent discretized representation, given a grid like the one you created above. You can use the [`numpy.digitize()`](https://docs.scipy.org/doc/numpy-1.9.3/reference/generated/numpy.digitize.html) function for this purpose.
Assume the grid is a list of NumPy arrays containing the following split points:
```
[array([-0.8, -0.6, -0.4, -0.2, 0.0, 0.2, 0.4, 0.6, 0.8]),
array([-4.0, -3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0, 4.0])]
```
Here are some potential samples and their corresponding discretized representations:
```
[-1.0 , -5.0] => [0, 0]
[-0.81, -4.1] => [0, 0]
[-0.8 , -4.0] => [1, 1]
[-0.5 , 0.0] => [2, 5]
[ 0.2 , -1.9] => [6, 3]
[ 0.8 , 4.0] => [9, 9]
[ 0.81, 4.1] => [9, 9]
[ 1.0 , 5.0] => [9, 9]
```
**Note**: There may be off-by-one differences in binning due to floating-point inaccuracies when samples are close to grid boundaries, but that is alright.
```
def discretize(sample, grid):
"""Discretize a sample as per given grid.
Parameters
----------
sample : array_like
A single sample from the (original) continuous space.
grid : list of array_like
A list of arrays containing split points for each dimension.
Returns
-------
discretized_sample : array_like
A sequence of integers with the same number of dimensions as sample.
"""
# TODO: Implement this
return list(int(np.digitize(s, g)) for s, g in zip(sample, grid)) # apply along each dimension
# Test with a simple grid and some samples
grid = create_uniform_grid([-1.0, -5.0], [1.0, 5.0])
samples = np.array(
[[-1.0 , -5.0],
[-0.81, -4.1],
[-0.8 , -4.0],
[-0.5 , 0.0],
[ 0.2 , -1.9],
[ 0.8 , 4.0],
[ 0.81, 4.1],
[ 1.0 , 5.0]])
discretized_samples = np.array([discretize(sample, grid) for sample in samples])
print("\nSamples:", repr(samples), sep="\n")
print("\nDiscretized samples:", repr(discretized_samples), sep="\n")
```
### 4. Visualization
It might be helpful to visualize the original and discretized samples to get a sense of how much error you are introducing.
```
import matplotlib.collections as mc
def visualize_samples(samples, discretized_samples, grid, low=None, high=None):
"""Visualize original and discretized samples on a given 2-dimensional grid."""
fig, ax = plt.subplots(figsize=(10, 10))
# Show grid
ax.xaxis.set_major_locator(plt.FixedLocator(grid[0]))
ax.yaxis.set_major_locator(plt.FixedLocator(grid[1]))
ax.grid(True)
# If bounds (low, high) are specified, use them to set axis limits
if low is not None and high is not None:
ax.set_xlim(low[0], high[0])
ax.set_ylim(low[1], high[1])
else:
# Otherwise use first, last grid locations as low, high (for further mapping discretized samples)
low = [splits[0] for splits in grid]
high = [splits[-1] for splits in grid]
# Map each discretized sample (which is really an index) to the center of corresponding grid cell
grid_extended = np.hstack((np.array([low]).T, grid, np.array([high]).T)) # add low and high ends
grid_centers = (grid_extended[:, 1:] + grid_extended[:, :-1]) / 2 # compute center of each grid cell
locs = np.stack([grid_centers[i, discretized_samples[:, i]] for i in range(len(grid))]).T # map discretized samples (np.stack needs a sequence, not a generator)
ax.plot(samples[:, 0], samples[:, 1], 'o') # plot original samples
ax.plot(locs[:, 0], locs[:, 1], 's') # plot discretized samples in mapped locations
ax.add_collection(mc.LineCollection(list(zip(samples, locs)), colors='orange')) # add a line connecting each original-discretized sample
ax.legend(['original', 'discretized'])
visualize_samples(samples, discretized_samples, grid, low, high)
```
Now that we have a way to discretize a state space, let's apply it to our reinforcement learning environment.
```
# Create a grid to discretize the state space
state_grid = create_uniform_grid(env.observation_space.low, env.observation_space.high, bins=(10, 10))
state_grid
# Obtain some samples from the space, discretize them, and then visualize them
state_samples = np.array([env.observation_space.sample() for i in range(10)])
discretized_state_samples = np.array([discretize(sample, state_grid) for sample in state_samples])
visualize_samples(state_samples, discretized_state_samples, state_grid,
env.observation_space.low, env.observation_space.high)
plt.xlabel('position'); plt.ylabel('velocity'); # axis labels for MountainCar-v0 state space
```
You might notice that if you have enough bins, the discretization doesn't introduce too much error into your representation. So we may be able to now apply a reinforcement learning algorithm (like Q-Learning) that operates on discrete spaces. Give it a shot to see how well it works!
### 5. Q-Learning
Provided below is a simple Q-Learning agent. Implement the `preprocess_state()` method to convert each continuous state sample to its corresponding discretized representation.
```
class QLearningAgent:
"""Q-Learning agent that can act on a continuous state space by discretizing it."""
def __init__(self, env, state_grid, alpha=0.02, gamma=0.99,
epsilon=1.0, epsilon_decay_rate=0.9995, min_epsilon=.01, seed=505):
"""Initialize variables, create grid for discretization."""
# Environment info
self.env = env
self.state_grid = state_grid
self.state_size = tuple(len(splits) + 1 for splits in self.state_grid) # n-dimensional state space
self.action_size = self.env.action_space.n # 1-dimensional discrete action space
self.seed = np.random.seed(seed)
print("Environment:", self.env)
print("State space size:", self.state_size)
print("Action space size:", self.action_size)
# Learning parameters
self.alpha = alpha # learning rate
self.gamma = gamma # discount factor
self.epsilon = self.initial_epsilon = epsilon # initial exploration rate
self.epsilon_decay_rate = epsilon_decay_rate # how quickly should we decrease epsilon
self.min_epsilon = min_epsilon
# Create Q-table
self.q_table = np.zeros(shape=(self.state_size + (self.action_size,)))
print("Q table size:", self.q_table.shape)
def preprocess_state(self, state):
"""Map a continuous state to its discretized representation."""
# TODO: Implement this
return tuple(discretize(state, self.state_grid))
def reset_episode(self, state):
"""Reset variables for a new episode."""
# Gradually decrease exploration rate
self.epsilon *= self.epsilon_decay_rate
self.epsilon = max(self.epsilon, self.min_epsilon)
# Decide initial action
self.last_state = self.preprocess_state(state)
self.last_action = np.argmax(self.q_table[self.last_state])
return self.last_action
def reset_exploration(self, epsilon=None):
"""Reset exploration rate used when training."""
self.epsilon = epsilon if epsilon is not None else self.initial_epsilon
def act(self, state, reward=None, done=None, mode='train'):
"""Pick next action and update internal Q table (when mode != 'test')."""
state = self.preprocess_state(state)
if mode == 'test':
# Test mode: Simply produce an action
action = np.argmax(self.q_table[state])
else:
# Train mode (default): Update Q table, pick next action
# Note: We update the Q table entry for the *last* (state, action) pair with current state, reward
self.q_table[self.last_state + (self.last_action,)] += self.alpha * \
(reward + self.gamma * max(self.q_table[state]) - self.q_table[self.last_state + (self.last_action,)])
# Exploration vs. exploitation
do_exploration = np.random.uniform(0, 1) < self.epsilon
if do_exploration:
# Pick a random action
action = np.random.randint(0, self.action_size)
else:
# Pick the best action from Q table
action = np.argmax(self.q_table[state])
# Roll over current state, action for next step
self.last_state = state
self.last_action = action
return action
q_agent = QLearningAgent(env, state_grid)
```
Let's also define a convenience function to run an agent on a given environment. When calling this function, you can pass in `mode='test'` to tell the agent not to learn.
```
def run(agent, env, num_episodes=20000, mode='train'):
"""Run agent in given reinforcement learning environment and return scores."""
scores = []
max_avg_score = -np.inf
for i_episode in range(1, num_episodes+1):
# Initialize episode
state = env.reset()
action = agent.reset_episode(state)
total_reward = 0
done = False
# Roll out steps until done
while not done:
state, reward, done, info = env.step(action)
total_reward += reward
action = agent.act(state, reward, done, mode)
# Save final score
scores.append(total_reward)
# Print episode stats
if mode == 'train':
if len(scores) > 100:
avg_score = np.mean(scores[-100:])
if avg_score > max_avg_score:
max_avg_score = avg_score
if i_episode % 100 == 0:
print("\rEpisode {}/{} | Max Average Score: {}".format(i_episode, num_episodes, max_avg_score), end="")
sys.stdout.flush()
return scores
scores = run(q_agent, env)
```
The best way to analyze if your agent was learning the task is to plot the scores. It should generally increase as the agent goes through more episodes.
```
# Plot scores obtained per episode
plt.plot(scores); plt.title("Scores");
```
If the scores are noisy, it might be difficult to tell whether your agent is actually learning. To find the underlying trend, you may want to plot a rolling mean of the scores. Let's write a convenience function to plot both raw scores as well as a rolling mean.
```
def plot_scores(scores, rolling_window=100):
"""Plot scores and optional rolling mean using specified window."""
plt.plot(scores); plt.title("Scores");
rolling_mean = pd.Series(scores).rolling(rolling_window).mean()
plt.plot(rolling_mean);
return rolling_mean
rolling_mean = plot_scores(scores)
```
You should observe the mean episode scores go up over time. Next, you can freeze learning and run the agent in test mode to see how well it performs.
```
# Run in test mode and analyze scores obtained
test_scores = run(q_agent, env, num_episodes=100, mode='test')
print("[TEST] Completed {} episodes with avg. score = {}".format(len(test_scores), np.mean(test_scores)))
_ = plot_scores(test_scores)
```
It's also interesting to look at the final Q-table that is learned by the agent. Note that the Q-table is of size MxNxA, where (M, N) is the size of the state space, and A is the size of the action space. We are interested in the maximum Q-value for each state, and the corresponding (best) action associated with that value.
```
def plot_q_table(q_table):
"""Visualize max Q-value for each state and corresponding action."""
q_image = np.max(q_table, axis=2) # max Q-value for each state
q_actions = np.argmax(q_table, axis=2) # best action for each state
fig, ax = plt.subplots(figsize=(10, 10))
cax = ax.imshow(q_image, cmap='jet');
cbar = fig.colorbar(cax)
for x in range(q_image.shape[0]):
for y in range(q_image.shape[1]):
ax.text(x, y, q_actions[x, y], color='white',
horizontalalignment='center', verticalalignment='center')
ax.grid(False)
ax.set_title("Q-table, size: {}".format(q_table.shape))
ax.set_xlabel('position')
ax.set_ylabel('velocity')
plot_q_table(q_agent.q_table)
```
### 6. Modify the Grid
Now it's your turn to play with the grid definition and see what gives you optimal results. Your agent's final performance is likely to get better if you use a finer grid, with more bins per dimension, at the cost of higher model complexity (more parameters to learn).
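For a sense of that cost, the number of Q-table entries grows with the product of the bins per dimension times the number of actions; a quick calculation (3 actions, as in MountainCar-v0):

```python
n_actions = 3  # MountainCar-v0 has three discrete actions

# Entries in the Q-table for a few candidate grids
for bins in [(10, 10), (20, 20), (50, 50)]:
    n_entries = bins[0] * bins[1] * n_actions
    print('bins={} -> {} Q-table entries'.format(bins, n_entries))
```

Doubling the bins per dimension quadruples the table, so each entry receives roughly a quarter as many updates for the same number of episodes — finer grids usually need longer training.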
```
# TODO: Create a new agent with a different state space grid
state_grid_new = create_uniform_grid(env.observation_space.low, env.observation_space.high, bins=(20, 20))
q_agent_new = QLearningAgent(env, state_grid_new)
q_agent_new.scores = [] # initialize a list to store scores for this agent
# Train it over a desired number of episodes and analyze scores
# Note: This cell can be run multiple times, and scores will get accumulated
q_agent_new.scores += run(q_agent_new, env, num_episodes=50000) # accumulate scores
rolling_mean_new = plot_scores(q_agent_new.scores)
# Run in test mode and analyze scores obtained
test_scores = run(q_agent_new, env, num_episodes=100, mode='test')
print("[TEST] Completed {} episodes with avg. score = {}".format(len(test_scores), np.mean(test_scores)))
_ = plot_scores(test_scores)
# Visualize the learned Q-table
plot_q_table(q_agent_new.q_table)
```
### 7. Watch a Smart Agent
```
state = env.reset()
score = 0
for t in range(200):
action = q_agent_new.act(state, mode='test')
env.render()
state, reward, done, _ = env.step(action)
score += reward
if done:
break
print('Final score:', score)
env.close()
```
## Energy Generation Analysis
- This script analyzes energy generation and fuel stock data published by the U.S. Energy Information Administration
- The data used includes energy generation data from across the country
- Also included are stock levels of fuels used, including oil, coal, petcoke, and boiler fuels
- Data Source: https://www.eia.gov/electricity/data/eia923/
### Hypothesis
- Energy generation will be inversely proportional to fuel stock. Therefore, if energy generation in a given period of time increases, the respective stock of fuel will decrease.
### Observations
- The stock of each fuel type showed the same right-tailed distribution when aggregated countrywide, indicating that a small number of power plants maintain a much higher stock of fuel than others in the country.
- The same distribution held true for power generation. A small number of plants form outliers by generating significantly more energy than those in the rest of the country.
- Power generation from all fuel types except petcoke has declined each year from 2008 to 2017
- No fuel stock showed a correlation with generation except boiler fuel
### Conclusions
- Generally, the hypothesis that energy generation was inversely proportional to fuel stock was not supported.
- With the exception of petcoke, all fuel types showed no relationship between stock and generation.
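The stock-vs-generation relationships summarized above can be quantified with a Pearson correlation coefficient; a minimal sketch on made-up monthly figures (the analysis below applies the same idea to the EIA data):

```python
import numpy as np

# Hypothetical monthly fuel stock and generation figures, for illustration only
stock      = np.array([120.0, 115.0, 108.0, 100.0, 96.0, 90.0])
generation = np.array([ 40.0,  44.0,  47.0,  52.0, 55.0, 60.0])

# Pearson r near -1 would support the inverse-relationship hypothesis;
# r near 0 would support the "no relationship" conclusion reached above
r = np.corrcoef(stock, generation)[0, 1]
print('Pearson r = {:.3f}'.format(r))
```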
### I. Import Packages & Data
```
#Import packages
import os
import numpy as np
import scipy.stats
import pandas as pd
import datetime as dt
import re
import copy
from calendar import month_name
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_context(rc={'lines.markeredgewidth': 0.1})
#View files in current working directory
path = os.path.join(os.getcwd(), 'Raw_Data')
files = os.listdir(path)
#Create a list of all excel files
files_ = [i for i in files if i[-4:]=='xlsx' or i[-3:]=='xls']
```
#### Verify the sheets in each file match
```
#Store list of sheets to load into dataframes for analysis
keep_sheets = ['Page 2 Oil Stocks Data', 'Page 2 Coal Stocks Data', 'Page 2 Petcoke Stocks Data', \
'Page 3 Boiler Fuel Data', 'Page 4 Generator Data']
#Define function to import data from excel
def import_data(file, sheet_name, skiprows=0):
return pd.read_excel(file, sheet_name=sheet_name, skiprows=skiprows)
#Populate dictionary with data from all required sheets of each file
data = {}
for sheet in keep_sheets:
data[sheet] = {}
for sheet, dict_ in data.items():
for file in files_:
filepath = path + '/' + file
end_file_idx = file.rfind('.')
dict_[file[:end_file_idx]] = import_data(filepath, sheet)
```
### II. Data Cleaning
#### Remove any remaining blank rows at the top of the dataframes
- There is inconsistent formatting between Excel files/sheets
- Some additional clean-up is needed to remove any additional headers, notes, etc. that are not related to the data
```
def remove_blank_rows(df, n_rows=15):
'''
This function checks the first n_rows rows of a dataframe and removes any rows
    with blanks in the first two columns (ID columns)
Input
df (dataframe): Dataframe to edit
n_rows (int): Number of rows to check
Output(df): pandas dataframe (w/ empty rows removed)
'''
    #Create dummy 'drop' column to mark rows to drop
df['drop'] = False
    #Check the first n_rows rows for blanks in the first two ID cols and drop the row if found
for i in range(n_rows):
if any(df.iloc[i, :2].isnull()):
df.loc[i, 'drop'] = True
df_2 = df.loc[df['drop'] != True, df.columns != 'drop'].copy()
df_2.reset_index(drop=True, inplace=True)
#Set new columns
df_2.columns = df_2.loc[0, ]
#Drop first row (no longer needed) and reset index
df_2.drop(0, inplace=True)
df_2.reset_index(drop=True, inplace=True)
return df_2
#Apply remove_blank_rows func to all df's stored in data dict
##Create a new copy of the data dict to retain raw data
data_2 = copy.deepcopy(data)
for sheet, d in data_2.items():
for file, df in d.items():
data_2[sheet][file] = remove_blank_rows(df)
```
#### Format col names to match between df's from the same Excel sheet
- Validate that column names match before concatenating dataframes
```
def format_col_names(col_idx):
'''
This function edits each column name in a list of dataframe columns
to standardize formatting across all dataframes. Will only
operate on strings.
Input(index): Raw column index
Output(list): Formatted column list
'''
col_list = list(col_idx).copy()
for i in range(len(col_list)):
if type(col_list[i]) is str:
#Replace spaces with underscore
col_list[i] = re.sub(' ', '_', col_list[i])
#Remove all non-alphanumeric characters
            col_list[i] = re.sub(r'[\W]+', '', col_list[i])
#Replace any double underscores with single underscore
col_list[i] = re.sub('__', '_', col_list[i])
#Change all characters to lowercase
col_list[i] = col_list[i].lower()
return col_list
#Format column names of all df's
for sheet, d in data_2.items():
for file, df in d.items():
df.columns = format_col_names(df.columns)
```
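As a quick check of the cleaning steps, the same transformation can be reproduced on a single hypothetical raw column name (the name below is illustrative, not necessarily one from the EIA files):

```python
import re

def format_name(name):
    #Same steps as format_col_names, for a single string
    name = re.sub(' ', '_', name)       #spaces -> underscores
    name = re.sub(r'[\W]+', '', name)   #drop non-alphanumeric (keeps '_')
    name = re.sub('__', '_', name)      #collapse double underscores
    return name.lower()

print(format_name('Net Generation (MWh) January'))  # -> net_generation_mwh_january
```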
#### Identify inconsistencies in df column names & number of columns
```
#Identify df's with column counts that don't match the rest
for sheet, d in data_2.items():
col_cnt = []
for file, df in d.items():
col_cnt.append(len(df.columns))
    #Most common column count across files (the mode)
    actual_col_cnt = max(set(col_cnt), key=col_cnt.count)
    for i in range(len(col_cnt)):
        if col_cnt[i] != actual_col_cnt:
            print((sheet, actual_col_cnt, col_cnt[i]))
```
- All dataframes within each sheet have the same number of columns
#### Standardize column names across all df's within each sheet
```
#Confirm that all df's within each sheet have the same number of columns
for sheet in keep_sheets:
num_cols = data_2[sheet]['2008'].shape[1]
for file in files_:
end_file_idx = file.rfind('.')
if num_cols != data_2[sheet][file[:end_file_idx]].shape[1]:
print(sheet, file, num_cols, data_2[sheet][file[:end_file_idx]].shape[1])
#Set all column names within each sheet to match
##Set columns in each dataframe equal to the columns of the first df (2008 file)
for sheet in keep_sheets:
    col_list = data_2[sheet]['2008'].columns
    for file in files_:
        end_file_idx = file.rfind('.')
        data_2[sheet][file[:end_file_idx]].columns = col_list
```
#### Concatenate all dataframes within each Excel sheet
```
#Create a new dictionary to store the combined datasets
combined_data = {}
for sheet in keep_sheets:
combined_data[sheet] = pd.concat(data_2[sheet])
def format_index(df):
'''
This function resets the index of the provided dataframe, drops one of the previous indices,
and renames the other.
Input(dataframe): Provided dataframe
Output(dataframe): New dataframe with formatted index
'''
df_2 = df.reset_index()
df_2.drop('level_1', axis=1, inplace=True)
df_2.rename({'level_0' : 'report_year'}, axis=1, inplace=True)
return df_2
#Reformat all df's in the combined_data dict
combined_data_2 = copy.deepcopy(combined_data)
for sheet, df in combined_data_2.items():
combined_data_2[sheet] = format_index(df)
#Limit Boiler Fuel data to only observations in short ton units
combined_data_2['Page 3 Boiler Fuel Data'] = \
combined_data_2['Page 3 Boiler Fuel Data'].loc[combined_data_2['Page 3 Boiler Fuel Data']['physical_unit_label']=='short tons', ]
#Reset index
combined_data_2['Page 3 Boiler Fuel Data'].reset_index(inplace=True)
```
### III. Create final dataset
```
#Start with a copy of the 'Page 4 Generator Data' as the base dataframe
df_gen = combined_data_2['Page 4 Generator Data'].copy()
```
#### Pivot data to prepare it for joining
```
#Define general lists of columns that will be used to prepare / join data
id_cols = ['report_year', 'plant_id', 'operator_id']
general_stocks_val_cols = ['quantityjanuary', 'quantityfebruary', 'quantitymarch', 'quantityapril',
'quantitymay', 'quantityjune', 'quantityjuly', 'quantityaugust',
'quantityseptember', 'quantityoctober', 'quantitynovember',
'quantitydecember']
boiler_fuel_val_cols = ['quantity_of_fuel_consumed_january', 'quantity_of_fuel_consumed_february',
'quantity_of_fuel_consumed_march', 'quantity_of_fuel_consumed_april',
'quantity_of_fuel_consumed_may', 'quantity_of_fuel_consumed_june',
'quantity_of_fuel_consumed_july', 'quantity_of_fuel_consumed_august',
'quantity_of_fuel_consumed_september', 'quantity_of_fuel_consumed_october',
'quantity_of_fuel_consumed_november', 'quantity_of_fuel_consumed_december']
final_dataset_val_cols = ['net_generation_january', 'net_generation_february', 'net_generation_march',
'net_generation_april', 'net_generation_may', 'net_generation_june',
'net_generation_july', 'net_generation_august', 'net_generation_september',
'net_generation_october', 'net_generation_november', 'net_generation_december']
#Define func to pivot data and prepare for join with generation data
def pivot_data(df, vals, idx):
'''
This function pivots a dataframe based on value and index columns provided.
    After the pivot, the table is stacked into long format and the
    index is reset, producing the 'level_3' and value (0) columns used downstream.
    Input:
    df (dataframe): Dataframe to pivot
    vals (list): List of columns to aggregate
    idx (list): List of columns to use as keys in the pivot table
    Output (dataframe): Pivoted / formatted dataframe
    '''
    #Stack so the month columns become rows ('level_3'), as expected by prep_data
    df_new = df.pivot_table(values=vals, index=idx, aggfunc=np.sum).stack()
    return df_new.reset_index()
#Pivot all dataframes that will be joined with the energy generation data
df_oil_stock = pivot_data(combined_data_2['Page 2 Oil Stocks Data'], general_stocks_val_cols, id_cols)
df_coal_stock = pivot_data(combined_data_2['Page 2 Coal Stocks Data'], general_stocks_val_cols, id_cols)
df_petcoke_stock = pivot_data(combined_data_2['Page 2 Petcoke Stocks Data'], \
general_stocks_val_cols, id_cols)
df_boiler_fuel = pivot_data(combined_data_2['Page 3 Boiler Fuel Data'], boiler_fuel_val_cols, id_cols)
#Pivot final data set
df_gen_2 = pivot_data(df_gen, final_dataset_val_cols, id_cols)
#Define func to finish preparing dataframes for join
def prep_data(df, fuel_type, data_col_name='stock'):
'''
This function will finish preparing data for join by creating month
and data type columns. Columns will also be renamed/reordered. Creating
the month column will utilize the pre-defined 'find_month' function.
Input:
df (dataframe): Input dataframe to be formatted
fuel_type (str): Value to populate fuel_type col
    data_col_name (str): Name for new data column. Default value set to 'stock'
Output:
df_new (dataframe): Formatted dataframe
'''
#Extract month from level_3 feature, create data_type and fuel_type features
df['month'] = df['level_3'].apply(find_month)
df['fuel_type'] = fuel_type
#Drop cols no longer needed
df_2 = df.drop('level_3', axis=1)
    #Move the value col (named 0) to the end of the col list and rename it
col_list = list(df_2.columns)
col_list.remove(0)
col_list.append(0)
df_3 = df_2[col_list].copy()
df_3.rename({0 : data_col_name}, axis=1, inplace=True)
#Move month col
col_list = list(df_3.columns)
old_idx = col_list.index('month')
col_list.insert(1, col_list.pop(old_idx))
df_4 = df_3[col_list].copy()
return df_4
def find_month(x):
'''Use regex expression to extract month name from column values'''
pattern = '|'.join(month_name[1:])
return re.search(pattern, x, re.IGNORECASE).group(0)
#Prepare supplemental datasets for join with generation data
df_oil_stock_2 = prep_data(df_oil_stock, 'oil')
df_coal_stock_2 = prep_data(df_coal_stock, 'coal')
df_petcoke_stock_2 = prep_data(df_petcoke_stock, 'petcoke')
df_boiler_fuel_2 = prep_data(df_boiler_fuel, 'boiler fuel')
#Prepare generation data for join
df_gen_3 = prep_data(df_gen_2, 'energy', 'generation')
```
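A quick sanity check of `find_month` on the kinds of column names produced by the pivot; note that `re.search` returns the match in the casing found in the string, so the stock and generation columns yield lowercase month names (the function is repeated here so the snippet is self-contained):

```python
import re
from calendar import month_name

def find_month(x):
    '''Use a regex to extract the month name from column values'''
    pattern = '|'.join(month_name[1:])
    return re.search(pattern, x, re.IGNORECASE).group(0)

print(find_month('quantityjanuary'))          # -> january
print(find_month('net_generation_december'))  # -> december
```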
#### Concat stock/fuel df's prior to join with generation data
```
df_stock_fuel = pd.concat([df_oil_stock_2, df_coal_stock_2, df_petcoke_stock_2, df_boiler_fuel_2], axis=0)
```
#### Join the generation data with the stock / fuel data
```
#Define function to join data
def join_data(left_df, right_df, id_cols):
'''
This function performs a left join between two dataframes using
an input for the id cols.
Input:
left_df (dataframe): Main dataframe to join data to
right_df (dataframe): Supplemental dataframe
id_cols (list): List of col names contained in both df's
to perform join on
Output (dataframe): New dataframe resulting from the join
'''
return left_df.merge(right_df, how='left', on=id_cols)
#Define new id_cols list to include new cols created by data pivot
id_cols_2 = ['report_year', 'month', 'plant_id', 'operator_id']
#Add oil stock data to generation dataset
df_final = join_data(df_gen_3, df_stock_fuel, id_cols_2)
```
#### Clean up final data set to prepare for visualization
```
#Drop cols that are no longer needed
df_final_2 = df_final.drop('fuel_type_x', axis=1)
#Rename columns as needed
df_final_3 = df_final_2.rename({'fuel_type_y' : 'fuel_type'}, axis=1)
#Convert object dtypes to numeric where needed
for col in ['generation', 'stock']:
df_final_3[col] = pd.to_numeric(df_final_3[col], errors='coerce')
```
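With `errors='coerce'`, any value that cannot be parsed becomes `NaN` rather than raising an exception; a minimal illustration:

```python
import pandas as pd

# Unparseable entries are coerced to NaN instead of raising ValueError
s = pd.to_numeric(pd.Series(['42', '3.5', 'missing']), errors='coerce')
print(s.tolist())  # -> [42.0, 3.5, nan]
```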
### IV. Visualize Data
```
df_final_3.head()
```
#### Summary Statistics
- Calculate a range of summary statistics to get a high level view of the data
```
df_final_3.info()
def summary_stats(df):
'''This function will display summary statistics on the input dataframe'''
print(df.info())
print('\n')
print(df.describe())
summary_stats(df_final_3)
```
#### Distribution Plots
- Create violin plots of generation and stock by fuel type to determine the shape of the data distributions
```
def plot_dist(df, feature, title, xlabel, sliced_by=None):
'''
This function will generate a violin plot of the provided
dataframe feature distribution.
Input:
df (dataframe): Dataframe containing feature to display
feature (str): Name of feature within df to plot
title (str): Title of chart
xlabel (str): x-axis label
sliced_by (str or None): If value provided, slice the feature by the provided value
Output: None
'''
if sliced_by:
fig = sns.violinplot(df.loc[df['fuel_type']==sliced_by, feature])
else:
fig = sns.violinplot(df[feature])
fig.set_title(title)
fig.set_xlabel(xlabel)
#Plot the distribution of the 'generation' feature
dist_features_gen = {'df' : df_final_3, 'feature' : 'generation',
                     'title' : 'Generation Distribution', 'xlabel' : 'Generation (MWh)'}
plot_dist(**dist_features_gen)
#Plot the distribution of the oil stocks
dist_features_oil = {'df' : df_final_3, 'feature' : 'stock',
'title' : 'Oil Stock Distribution', 'xlabel' : 'Oil Stock (barrels)',
'sliced_by' : 'oil'}
plot_dist(**dist_features_oil)
#Plot the distribution of the coal stocks
dist_features_coal = {'df' : df_final_3, 'feature' : 'stock',
'title' : 'Coal Stock Distribution', 'xlabel' : 'Coal Stock (short tons)',
'sliced_by' : 'coal'}
plot_dist(**dist_features_coal)
#Plot the distribution of the petcoke stocks
dist_features_petcoke = {'df' : df_final_3, 'feature' : 'stock',
'title' : 'Petcoke Stock Distribution', 'xlabel' : 'Petcoke Stock (short tons)',
'sliced_by' : 'petcoke'}
plot_dist(**dist_features_petcoke)
#Plot the distribution of the boiler fuel stocks
dist_features_boiler = {'df' : df_final_3, 'feature' : 'stock',
'title' : 'Boiler Fuel Stock Distribution', 'xlabel' : 'Boiler Fuel Stock (short tons)',
'sliced_by' : 'boiler fuel'}
plot_dist(**dist_features_boiler)
```
#### Time Series Plots
- Plot generation and stock metrics against report year to look for any time series trends
```
#Aggregate the data by year
df_agg = df_final_3.pivot_table(values=['generation', 'stock'],
index=['report_year', 'fuel_type'], aggfunc=np.sum)
df_agg.reset_index(inplace=True)
def plot_time_series(df, feature, title, xlabel='Report Year', ylabel=None):
'''
This function generates a time series plot of the input feature.
Input:
df (dataframe): Dataframe containing features to plot
feature (str): String value representing the feature within the dataframe to plot
title (str): Title of chart
xlabel (str): x-axis label of the chart
ylabel (str): y-axis label of the chart
Output: None
'''
#Create lineplot object
    time_series = sns.lineplot(x='report_year', y=feature, hue='fuel_type', data=df)
#Set title and x-axis and y-axis labels
time_series.set_title(title)
time_series.set_xlabel(xlabel)
time_series.set_ylabel(ylabel)
#Format legend, move to outside the figure area
handles, labels = time_series.get_legend_handles_labels()
time_series.legend(handles=handles[1:], labels=labels[1:], title='Fuel Type', bbox_to_anchor=(1, 1))
#Create time series plot of power generation by fuel type
ts_feat_gen = {'df' : df_agg, 'feature' : 'generation', 'title' : 'Annual Power Generation',
'ylabel' : 'Generation (MWh)'}
plot_time_series(**ts_feat_gen)
#Create time series plot of fuel stock by fuel type
ts_feat_stock = {'df' : df_agg, 'feature' : 'stock', 'title' : 'Annual Stock',
'ylabel' : 'Stock'}
plot_time_series(**ts_feat_stock)
```
#### Generation vs. Stock by Fuel Type
- Plot variables against each other to look for any correlations
```
df_slice = df_agg.loc[df_agg['fuel_type']=='oil', ]
sns.scatterplot(x='stock', y='generation', data=df_slice)
plt.show()
def corr_plot(df, fuel_type, x_feat='stock', y_feat='generation',
title='Feature Correlation', xlabel=None, ylabel=None):
'''
This function generates a scatter plot of x_feat vs. y_feat from the
provided dataframe sliced by the fuel type provided.
Input:
df (dataframe): Dataframe containing x_feat and y_feat
fuel_type (str): Value to slice df by
x_feat (str): Name of the feature in df to plot on the x-axis
y_feat (str): Name of the feature in df to plot on the y-axis
title (str): String to set as chart title
xlabel (str): x-axis label
ylabel (str): y-axis label
Output: None
'''
#Create slice of df
df_slice = df.loc[df['fuel_type']==fuel_type, ]
#Create chart object and format
    corr_scatter = sns.scatterplot(x=x_feat, y=y_feat, data=df_slice)
corr_scatter.set_title(title)
corr_scatter.set_xlabel(xlabel)
corr_scatter.set_ylabel(ylabel)
#Generate scatter plot for oil stock vs generation
corr_feat_oil = {'df' : df_agg, 'fuel_type' : 'oil', 'x_feat' : 'stock', 'y_feat' : 'generation',
'title' : 'Oil Stock vs. Generation', 'xlabel' : 'Stock (barrels)',
'ylabel' : 'Generation (MWh)'}
corr_plot(**corr_feat_oil)
#Generate scatter plot for coal stock vs generation
corr_feat_coal = {'df' : df_agg, 'fuel_type' : 'coal', 'x_feat' : 'stock', 'y_feat' : 'generation',
'title' : 'Coal Stock vs. Generation', 'xlabel' : 'Stock (short tons)',
'ylabel' : 'Generation (MWh)'}
corr_plot(**corr_feat_coal)
#Generate scatter plot for petcoke stock vs generation
corr_feat_petcoke = {'df' : df_agg, 'fuel_type' : 'petcoke', 'x_feat' : 'stock', 'y_feat' : 'generation',
'title' : 'Petcoke Stock vs. Generation', 'xlabel' : 'Stock (short tons)',
'ylabel' : 'Generation (MWh)'}
corr_plot(**corr_feat_petcoke)
#Generate scatter plot for boiler fuel consumed vs generation
corr_feat_boiler = {'df' : df_agg, 'fuel_type' : 'boiler fuel', 'x_feat' : 'stock', 'y_feat' : 'generation',
'title' : 'Boiler Fuel Stock vs. Generation', 'xlabel' : 'Boiler Fuel Consumed (short tons)',
'ylabel' : 'Generation (MWh)'}
corr_plot(**corr_feat_boiler)
```
```
pip install pyspark
pip install scikit-learn
pip install pandas
pip install seaborn
pip install matplotlib
import pandas as pd
import numpy as np
import os
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression
from sklearn.linear_model import LogisticRegressionCV
from sklearn import datasets
data = datasets.load_wine()
wine = pd.DataFrame(data = np.c_[data['data'], data['target']],
columns = data['feature_names'] + ['target'])
wine.info()
wine.head()
wine.describe()
# Pairplot comparing the listed columns with the "target" column, which is shown as
# the colors blue (0), orange (1) and green (2).
sns.pairplot(wine, vars=["malic_acid", "ash", "alcalinity_of_ash", "total_phenols", "flavanoids",
"nonflavanoid_phenols"], hue='target')
# Correlation between the columns of the dataframe
correlacao = wine.corr()
# The first plot was impossible to read: too small and with too much information
# The heatmap size had to be increased
fig,ax = plt.subplots(figsize = (10, 10))
sns.heatmap(correlacao, annot = True, fmt = ".2f")
plt.show()
wine.info()
# Split the data into training and test sets
from sklearn.model_selection import train_test_split
x = wine[['alcohol', 'malic_acid', 'ash', 'alcalinity_of_ash', 'magnesium', 'total_phenols', 'flavanoids',
'nonflavanoid_phenols', 'proanthocyanins', 'color_intensity', 'hue', 'od280/od315_of_diluted_wines',
'proline']]
y = wine['target']
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.30, random_state = 42)
# Standardizing the data
from sklearn.preprocessing import StandardScaler
normalize = StandardScaler()
normalize.fit(x_train)
newx_train = normalize.transform(x_train)
newx_train = pd.DataFrame(data = newx_train, columns = x.columns)
#Reuse the scaler fitted on the training set; re-fitting on the test set leaks test statistics
newx_test = normalize.transform(x_test)
newx_test = pd.DataFrame(newx_test, columns = x.columns)
```
## Below are the models created and an evaluation of their behavior on the training set
## Solver: "liblinear"
```
from sklearn.model_selection import GridSearchCV
solver_list = ['liblinear']
parametros = dict(solver = solver_list)
model = LogisticRegression(random_state = 42, solver = 'liblinear', max_iter = 150)
clf = GridSearchCV(model, parametros, cv = 5)
clf.fit(x_train, y_train)
scores = clf.cv_results_["mean_test_score"]
print(solver_list,":", scores)
```
## Solver: "newton-cg"
```
solver_list = ['newton-cg']
parametros = dict(solver = solver_list)
model = LogisticRegression(random_state = 42, solver = 'newton-cg', max_iter = 150)
clf = GridSearchCV(model, parametros, cv = 5)
clf.fit(x_train, y_train)
scores = clf.cv_results_["mean_test_score"]
print(solver_list,":", scores)
```
## Solver: "lbfgs"
```
solver_list = ['lbfgs']
parametros = dict(solver = solver_list)
model = LogisticRegression(random_state = 42, solver = 'lbfgs', max_iter = 150)
clf = GridSearchCV(model, parametros, cv = 5)
clf.fit(x_train, y_train)
scores = clf.cv_results_["mean_test_score"]
print(solver_list,":", scores)
```
## Solver: "sag"
```
solver_list = ['sag']
parametros = dict(solver = solver_list)
model = LogisticRegression(random_state = 42, solver = 'sag', max_iter = 150)
clf = GridSearchCV(model, parametros, cv = 5)
clf.fit(x_train, y_train)
scores = clf.cv_results_["mean_test_score"]
print(solver_list,":", scores)
```
## Solver: "saga"
```
solver_list = ['saga']
parametros = dict(solver = solver_list)
model = LogisticRegression(random_state = 42, solver = 'saga', max_iter = 150)
clf = GridSearchCV(model, parametros, cv = 5)
clf.fit(x_train, y_train)
scores = clf.cv_results_["mean_test_score"]
print(solver_list,":", scores)
```
## For a better understanding:
#### For small datasets, ‘liblinear’ is a good choice, whereas ‘sag’ and ‘saga’ are faster for large ones.
#### For multiclass problems, only ‘newton-cg’, ‘sag’, ‘saga’ and ‘lbfgs’ handle multinomial loss; ‘liblinear’ is limited to one-versus-rest schemes.
#### ‘newton-cg’, ‘lbfgs’ and ‘sag’ only handle L2 penalty, whereas ‘liblinear’ and ‘saga’ handle L1 penalty.
#### ‘liblinear’ might be slower in LogisticRegressionCV because it does not handle warm-starting.
## Based on this, the model that obtained the best evaluation was "newton-cg".
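The five nearly identical cells above can also be collapsed into a single grid search over all solvers; a sketch under the same setup (wine data, `cv=5`; `sag`/`saga` may emit convergence warnings on unscaled data at `max_iter=150`, which is part of why they score lower here):

```python
from sklearn.datasets import load_wine
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

data = load_wine()
x_tr, x_te, y_tr, y_te = train_test_split(data.data, data.target,
                                          test_size=0.30, random_state=42)

#Search all candidate solvers in one pass instead of one cell per solver
parametros = {'solver': ['liblinear', 'newton-cg', 'lbfgs', 'sag', 'saga']}
clf = GridSearchCV(LogisticRegression(random_state=42, max_iter=150),
                   parametros, cv=5)
clf.fit(x_tr, y_tr)
print(clf.best_params_)
print(dict(zip(parametros['solver'], clf.cv_results_['mean_test_score'])))
```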
```
# Train the model with the best configuration, apply it to the test set and evaluate the results
model = LogisticRegression(random_state = 42, solver = 'newton-cg', max_iter = 150).fit(newx_train, y_train)
predictions = model.predict(newx_test)
predictions
probabilidade = model.predict_proba(newx_test)
probabilidade
# Confusion matrix
from sklearn.metrics import confusion_matrix
confusion_matrix(y_test, model.predict(newx_test))
model.score(newx_test, y_test)
from sklearn.metrics import classification_report
print(classification_report(y_test, model.predict(newx_test)))
```
## Cross-Validation
```
# Choose K
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score
classificador = KNeighborsClassifier(n_neighbors = 3)
# Train
classificador.fit(newx_train, y_train)
# Make the prediction
prediction = classificador.predict(newx_test)
# Find the best K via cross-validation
storage = []
for i in range(1, 100):
    knn = KNeighborsClassifier(n_neighbors = i)
    scores = cross_val_score(knn, x, y, cv = 12)
    storage.append(scores.mean())
print(len(storage))
print(max(storage))
# Classification_report
from sklearn.metrics import classification_report
print(classification_report(y_test, classificador.predict(newx_test), zero_division = 1))
# Confusion matrix
from sklearn.metrics import confusion_matrix
matrix_confusao = confusion_matrix(y_test, classificador.predict(newx_test))
print(matrix_confusao)
# Best accuracy and error rate for each K
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score
storageK = []
error = []
for i in range(1, 30):
    knn = KNeighborsClassifier(n_neighbors = i)
    knn.fit(newx_train, y_train)
    prediction = knn.predict(newx_test)
    storageK.append(accuracy_score(y_test, prediction))
    error.append(np.mean(y_test != prediction))
print("Best accuracy: ", max(storageK))
print("Error rate: ", max(error))
```
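The best K can also be read directly off the cross-validation means with `argmax`; a small self-contained sketch using the same wine data as above:

```python
import numpy as np
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

data = load_wine()
means = []
for k in range(1, 30):
    knn = KNeighborsClassifier(n_neighbors=k)
    means.append(cross_val_score(knn, data.data, data.target, cv=5).mean())

best_k = int(np.argmax(means)) + 1  # +1 because k starts at 1
print(best_k, max(means))
```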
## Comparing the Cross-Validation result with the Logistic Regression result:
### Cross-Validation
```
# Classification_report
from sklearn.metrics import classification_report
print(classification_report(y_test, classificador.predict(newx_test)))
```
### Logistic Regression
```
from sklearn.metrics import classification_report
print(classification_report(y_test, model.predict(newx_test)))
```
# GRIP_JULY - 2021 (TASK 5)
# Task Name:- Traffic sign classification/Recognition
# Domain:- Computer Vision and IOT
# Name:- Akash Singh

```
import cv2
import numpy as np
def get_dominant_color(image, n_colors):
    pixels = np.float32(image).reshape((-1, 3))
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 200, .1)
    flags = cv2.KMEANS_RANDOM_CENTERS
    flags, labels, centroids = cv2.kmeans(
        pixels, n_colors, None, criteria, 10, flags)
    palette = np.uint8(centroids)
    #Count cluster labels with np.bincount (scipy.stats.itemfreq was removed in SciPy 1.3)
    return palette[np.argmax(np.bincount(labels.flatten()))]
clicked = False
def onMouse(event, x, y, flags, param):
global clicked
if event == cv2.EVENT_LBUTTONUP:
clicked = True
cameraCapture = cv2.VideoCapture(0)
cv2.namedWindow('camera')
cv2.setMouseCallback('camera', onMouse)
# Read and process frames in loop
success, frame = cameraCapture.read()
while success and not clicked:
cv2.waitKey(1)
success, frame = cameraCapture.read()
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
img = cv2.medianBlur(gray, 37)
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT,
1, 50, param1=120, param2=40)
    if circles is not None:
circles = np.uint16(np.around(circles))
max_r, max_i = 0, 0
for i in range(len(circles[:, :, 2][0])):
if circles[:, :, 2][0][i] > 50 and circles[:, :, 2][0][i] > max_r:
max_i = i
max_r = circles[:, :, 2][0][i]
x, y, r = circles[:, :, :][0][max_i]
if y > r and x > r:
square = frame[y-r:y+r, x-r:x+r]
dominant_color = get_dominant_color(square, 2)
if dominant_color[2] > 100:
print("STOP")
elif dominant_color[0] > 80:
zone_0 = square[square.shape[0]*3//8:square.shape[0]
* 5//8, square.shape[1]*1//8:square.shape[1]*3//8]
cv2.imshow('Zone0', zone_0)
zone_0_color = get_dominant_color(zone_0, 1)
zone_1 = square[square.shape[0]*1//8:square.shape[0]
* 3//8, square.shape[1]*3//8:square.shape[1]*5//8]
cv2.imshow('Zone1', zone_1)
zone_1_color = get_dominant_color(zone_1, 1)
zone_2 = square[square.shape[0]*3//8:square.shape[0]
* 5//8, square.shape[1]*5//8:square.shape[1]*7//8]
cv2.imshow('Zone2', zone_2)
zone_2_color = get_dominant_color(zone_2, 1)
if zone_1_color[2] < 60:
if sum(zone_0_color) > sum(zone_2_color):
print("LEFT")
else:
print("RIGHT")
else:
if sum(zone_1_color) > sum(zone_0_color) and sum(zone_1_color) > sum(zone_2_color):
print("FORWARD")
elif sum(zone_0_color) > sum(zone_2_color):
print("FORWARD AND LEFT")
else:
print("FORWARD AND RIGHT")
else:
print("N/A")
for i in circles[0, :]:
cv2.circle(frame, (i[0], i[1]), i[2], (0, 255, 0), 2)
cv2.circle(frame, (i[0], i[1]), 2, (0, 0, 255), 3)
cv2.imshow('camera', frame)
cv2.destroyAllWindows()
cameraCapture.release()
```
# Making Predictions with the Standardized Coefficients
## Import the relevant libraries
```
# For these lessons we will need NumPy, pandas, matplotlib and seaborn
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# and of course the actual regression (machine learning) module
from sklearn.linear_model import LinearRegression
```
## Load the data
```
# Load the data from a .csv in the same folder
data = pd.read_csv('../../data/1.02. Multiple linear regression.csv')
# Let's explore the top 5 rows of the df
data.head()
data.describe()
```
## Create the multiple linear regression
### Declare the dependent and independent variables
```
# There are two independent variables: 'SAT' and 'Rand 1,2,3'
x = data[['SAT','Rand 1,2,3']]
# and a single dependent variable: 'GPA'
y = data['GPA']
```
## Standardization
When variables are on different scales, standardization brings them onto a common scale.
Standardizing the variables this way converts them to a single, comparable scale.
**What is a standardized coefficient?**
Because the units and the means of the distributions differ across variables, the values are transformed so that they can be compared.
Standardization is done with the following formula:
$ z = \frac{X - \mu}{\sigma}$
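The formula above is exactly what `StandardScaler` applies to each column. A small numeric check on an illustrative array (values are hypothetical), verifying that the manual z-scores have zero mean and unit variance:

```python
import numpy as np

x = np.array([[1700., 2.], [1800., 1.], [1600., 3.]])

# Manual z-scores, column by column: z = (X - mu) / sigma
z_manual = (x - x.mean(axis=0)) / x.std(axis=0)
print(z_manual)
print(z_manual.mean(axis=0), z_manual.std(axis=0))
```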
```
# Import the preprocessing module
# Standardization is performed with the StandardScaler module
from sklearn.preprocessing import StandardScaler
# Create an instance of the StandardScaler class
scaler = StandardScaler()
# Fit the input data (x)
# Compute the scaling parameters (mean and std) for each feature (independent variable)
scaler.fit(x)
# The actual scaling of the data is done through the 'transform()' method
# Let's store it in a new variable, named appropriately
x_scaled = scaler.transform(x)
# The result is an ndarray
x_scaled
```
## Regression with scaled features
Create the regression model using the standardized data.
```
# Creating a regression works in the exact same way
reg = LinearRegression()
# We just need to specify that our inputs are the 'scaled inputs'
reg.fit(x_scaled,y)
# Let's see the coefficients
reg.coef_
# And the intercept
reg.intercept_
```
## Creating a summary table
**Let's create a summary table of the regression results.**
```
# Let's create a new data frame with the names of the features
reg_summary = pd.DataFrame([['Bias'],['SAT'],['Rand 1,2,3']], columns=['Features'])
# 'Weights' refers to the regression coefficients in the regression equation
# In machine learning, the term 'weight' is used more often than 'regression coefficient'
reg_summary['Weights'] = reg.intercept_, reg.coef_[0], reg.coef_[1]
# Now we have a pretty clean summary, which can help us make an informed decision about the importance of each feature
reg_summary
```
## Making predictions with the standardized coefficients (weights)
Make predictions with the regression model built on the standardized coefficients.
```
# For simplicity, let's create a new dataframe with 2 *new* observations
new_data = pd.DataFrame(data=[[1700,2],[1800,1]],columns=['SAT','Rand 1,2,3'])
new_data
# We can make a prediction for a whole dataframe (not a single value)
# Note that this output is meaningless: the model was trained on scaled inputs,
# but new_data has not been scaled yet
reg.predict(new_data)
```
Our model was trained on standardized features.
Therefore, to predict on new data, that data must first be standardized with the same scaler.
```
new_data_scaled = scaler.transform(new_data)
# Let's check the result
new_data_scaled
# Finally we make a prediction using the scaled new data
reg.predict(new_data_scaled)
# The output is much more appropriate, isn't it?
```
## What if we removed the 'Random 1,2,3' variable?
In the standardized regression model, **the coefficient of 'Rand 1,2,3' is very small.**
This means that **the variable 'Rand 1,2,3' has very little influence** in the model.
What happens if we remove the 'Rand 1,2,3' variable?
```
# Theory suggests that features with very small weights could be removed and the results should be identical
# Moreover, we proved in 2-3 different ways that 'Rand 1,2,3' is an irrelevant feature
# Let's create a simple linear regression (simple, because there is a single feature) without 'Rand 1,2,3'
reg_simple = LinearRegression()
# Once more, we must reshape the inputs into a matrix, otherwise we will get a compatibility error
# Note that instead of standardizing again, I'll simply take only the first column of x
x_simple_matrix = x_scaled[:,0].reshape(-1,1)
# Finally, we fit the regression
reg_simple.fit(x_simple_matrix,y)
# In a similar manner to the cell before, we can predict only the first column of the scaled 'new data'
# Note that we also reshape it to be exactly the same as x
reg_simple.predict(new_data_scaled[:,0].reshape(-1,1))
```
Removing 'Rand 1,2,3', whose influence on the model is very small, barely changes the regression's predictions.
>**Occam's Razor (or Ockham's Razor)**
Also known as the 'principle of economy', the law of parsimony (lex parsimoniae), or the **principle of simplicity**.
In regression, too, removing unnecessary variables to build a simpler model is a way to build a better one.
## Synthetic spectra generator
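In the notation used by the code below, each synthetic spectrum is built from a sum of complex Lorentzian lines plus a smooth non-resonant background (NRB):

$$\chi^{(3)}(\nu) = \sum_{k} \frac{a_k}{\omega_k - \nu - i\gamma_k}, \qquad \mathrm{CARS}(\nu) = \frac{\left|\chi^{(3)}(\nu) + \mathrm{NRB}(\nu)\right|^{2}}{2} + \varepsilon(\nu)$$

where each row of `params` holds $(a_k, \omega_k, \gamma_k)$, the amplitude, resonance position, and linewidth of a feature, the NRB is a product of two random sigmoids, and $\varepsilon$ is Gaussian noise. The network is then trained to recover $\operatorname{Im}\,\chi^{(3)}$ from the measured CARS spectrum.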
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
max_features = 15
n_points = 640
nu = np.linspace(0,1,n_points)
def random_chi3():
"""
generates a random spectrum, without NRB.
output:
params = matrix of parameters. each row corresponds to the [amplitude, resonance, linewidth] of each generated feature (n_lor,3)
"""
n_lor = np.random.randint(1,max_features)
a = np.random.uniform(0,1,n_lor)
w = np.random.uniform(0,1,n_lor)
g = np.random.uniform(0.001,0.008, n_lor)
params = np.c_[a,w,g]
return params
def build_chi3(params):
"""
    Builds the normalized chi3 complex vector
inputs:
params: (n_lor, 3)
outputs
chi3: complex, (n_points, )
"""
chi3 = np.sum(params[:,0]/(-nu[:,np.newaxis]+params[:,1]-1j*params[:,2]),axis = 1)
return chi3/np.max(np.abs(chi3))
def sigmoid(x,c,b):
return 1/(1+np.exp(-(x-c)*b))
def generate_nrb():
"""
Produces a normalized shape for the NRB
outputs
NRB: (n_points,)
"""
bs = np.random.normal(10,5,2)
c1 = np.random.normal(0.2,0.3)
c2 = np.random.normal(0.7,.3)
cs = np.r_[c1,c2]
sig1 = sigmoid(nu, cs[0], bs[0])
sig2 = sigmoid(nu, cs[1], -bs[1])
nrb = sig1*sig2
return nrb
def get_spectrum():
"""
    Produces a CARS spectrum.
    It outputs the normalized CARS spectrum and the corresponding imaginary part.
Outputs
cars: (n_points,)
chi3.imag: (n_points,)
"""
chi3 = build_chi3(random_chi3())*np.random.uniform(0.3,1)
nrb = generate_nrb()
noise = np.random.randn(n_points)*np.random.uniform(0.0005,0.003)
cars = ((np.abs(chi3+nrb)**2)/2+noise)
return cars, chi3.imag
import tensorflow as tf
import keras.backend as K
from keras.models import Model, Sequential
from keras.layers import Dense, Conv1D, Flatten, BatchNormalization, Activation, Dropout
from keras import regularizers
from datetime import datetime
physical_devices = tf.config.experimental.list_physical_devices('GPU')
if physical_devices:  # guard: skip memory-growth setup when no GPU is visible
    tf.config.experimental.set_memory_growth(physical_devices[0], True)
tf.keras.backend.clear_session()
model = Sequential()
model.add(BatchNormalization(axis=-1, momentum=0.99, epsilon=0.001, center=True, scale=True, beta_initializer='zeros', gamma_initializer='ones', moving_mean_initializer='zeros', moving_variance_initializer='ones', beta_regularizer=None, gamma_regularizer=None, beta_constraint=None, gamma_constraint=None,input_shape = (n_points, 1)))
model.add(Activation('relu'))
model.add(Conv1D(128, activation = 'relu', kernel_size = (32)))
model.add(Conv1D(64, activation = 'relu', kernel_size = (16)))
model.add(Conv1D(16, activation = 'relu', kernel_size = (8)))
model.add(Conv1D(16, activation = 'relu', kernel_size = (8)))
model.add(Conv1D(16, activation = 'relu', kernel_size = (8)))
model.add(Dense(32, activation = 'relu', kernel_regularizer=regularizers.l1_l2(l1 = 0, l2=0.1)))
model.add(Dense(16, activation = 'relu', kernel_regularizer=regularizers.l1_l2(l1 = 0, l2=0.1)))
model.add(Flatten())
model.add(Dropout(.25))
model.add(Dense(n_points, activation='relu'))
model.compile(loss='mse', optimizer='Adam', metrics=['mean_absolute_error','mse','accuracy'])
model.summary()
```
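As a standalone sanity check of the broadcasting in `build_chi3` (the function is restated here so the snippet is self-contained): the Lorentzian sum runs along the last axis, and the output is normalized to unit maximum modulus.

```python
import numpy as np

n_points = 640
nu = np.linspace(0, 1, n_points)

def build_chi3(params):
    # params rows are [amplitude a, resonance w, linewidth g];
    # broadcasting (n_points, 1) against (n_lor,) gives a (n_points, n_lor) grid
    chi3 = np.sum(params[:, 0] / (-nu[:, np.newaxis] + params[:, 1] - 1j * params[:, 2]), axis=1)
    return chi3 / np.max(np.abs(chi3))

# A hand-picked two-feature parameter matrix (illustrative values)
params = np.array([[0.8, 0.3, 0.005],
                   [0.5, 0.7, 0.004]])
chi3 = build_chi3(params)
print(chi3.shape, np.max(np.abs(chi3)))  # (640,) and max modulus 1.0
```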
## Training
```
def generate_batch(size = 10000):
X = np.empty((size, n_points,1))
y = np.empty((size,n_points))
for i in range(size):
X[i,:,0], y[i,:] = get_spectrum()
return X, y
X, y = generate_batch(50000)
history = model.fit(X, y,epochs=10, verbose = 1, validation_split=0.25, batch_size=256)
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()
```
Use this function to test the model on single instances
```
def predict_and_plot():
x,y = generate_batch(1)
yhat = model.predict(x, verbose =0)
f, a = plt.subplots(2,1, sharex=True)
a[0].plot(x.flatten(), label = 'cars')
a[1].plot(y.T+.7, label = 'true',c= 'g' )
a[1].plot(yhat.flatten()+1.4, label = 'pred.',c='r')
plt.subplots_adjust(hspace=0)
#return x, y.flatten(), yhat.flatten(), chi3, NRB
```
| github_jupyter |
# NLP-Cube local installation
To be able to use NLP-Cube to the fullest (train models, export them, change the network structure, etc.) we need a clone of the NLP-Cube repository.
NLP-Cube requires a number of dependencies to be installed. This tutorial will show each step in detail. We'll install on a fresh Ubuntu 18.04 with Python3.
Assume we are working in folder ``/work/``. You can use any folder, including the ``~`` user home folder, but for clarity we'll use here a fixed folder (always with write access).
Before cloning NLP-Cube, we need to setup the environment.
### 1. Install system and python prerequisites
System-dependent requirements:
```
sudo apt-get update && sudo apt-get install -y build-essential automake make cmake g++ wget git mercurial python3-pip
```
Python3 requirements:
```
pip3 install cython future scipy nltk requests xmltodict nose2
```
### 2. Install MKL
[MKL](https://software.seek.intel.com/performance-libraries) is a suite of libraries which provide fast, multi-core math routines. Using them will speed up training 2.5-3x.
```
sudo wget https://apt.repos.intel.com/intel-gpg-keys/GPG-PUB-KEY-INTEL-SW-PRODUCTS-2019.PUB
sudo apt-key add GPG-PUB-KEY-INTEL-SW-PRODUCTS-2019.PUB
sudo wget https://apt.repos.intel.com/setup/intelproducts.list -O /etc/apt/sources.list.d/intelproducts.list
sudo apt-get update
sudo apt-get install -y intel-mkl-64bit-2018.2-046
```
### 3. Install DyNet
[DyNet](https://github.com/clab/dynet) is the neural processing framework that NLP-Cube is based on. Let's install it: (assuming we're in ``/work``)
```
mkdir dynet-base
cd dynet-base
git clone https://github.com/clab/dynet.git
hg clone https://bitbucket.org/eigen/eigen -r b2e267d
cd dynet
mkdir build
cd build
cmake .. -DEIGEN3_INCLUDE_DIR=../../eigen -DPYTHON=/usr/bin/python3 -DMKL_ROOT=/opt/intel/mkl
make -j 8
cd python
sudo python3 ../../setup.py build --build-dir=.. --skip-build install
```
Note: pass absolute paths to all of cmake's parameters. Also, ``make -j 8`` assumes an 8-core machine; replace 8 with your own core count for a faster build (for debugging, use plain ``make`` to build everything single-threaded).
\* If you have CUDA-enabled GPUs available, please follow the [tutorial on the official DyNet page](http://dynet.readthedocs.io/en/latest/python.html#installing-a-cutting-edge-and-or-gpu-version).
Let's verify that DyNet was installed successfully. Open a python3 prompt and type:
```
import dynet
```
and you should get an output like:
```
[dynet] random seed: 885379706
[dynet] allocating memory: 512MB
[dynet] memory allocation done.
```
which means DyNet is up and running.
### 4. Clone NLP-Cube
```
cd /work
git clone https://github.com/adobe/NLP-Cube.git
```
You should now have a fully working NLP-Cube install.
---
The next tutorial shows how to [train your own models](./3.%20Advanced%20usage%20-%20Training%20a%20model%20on%20the%20UD%20Corpus.ipynb).
| github_jupyter |
```
#DATA TAKEN ON 5/1
import pandas as pd
import numpy as np
from numpy import column_stack
from scipy import sparse, stats
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
from sklearn.model_selection import train_test_split, StratifiedKFold, RandomizedSearchCV
from xgboost import XGBClassifier
from sodapy import Socrata
client = Socrata("data.sfgov.org", None)
# results = client.get("cuks-n6tp", limit = 2191368)
data = client.get("cuks-n6tp", limit = 3000000)
data_df = pd.DataFrame.from_records(data)#here
print(data_df.shape)
data_df.columns
data_df.to_csv('CITY_data.csv', index=False, header=True, encoding='utf-8')
import pickle
print(data_df.columns.values)
districts = data_df['pddistrict'].unique().tolist()
print(districts)
mask = data_df['pddistrict'] == 'BAYVIEW'
df_BAYVIEW = data_df[mask]
print(df_BAYVIEW.shape)
df_BAYVIEW.to_csv('BAYVIEW_data.csv', index=False, header=True, encoding='utf-8')
#check if it was exported properly by importing and checking shape/columns for only bayview
df_bayview_check = pd.read_csv('BAYVIEW_data.csv')
print(df_bayview_check.shape)
print(df_BAYVIEW.columns.values)
print(df_bayview_check.columns.values)
#reuse the same mask/export pattern for the remaining districts
for district in ['RICHMOND', 'MISSION', 'CENTRAL', 'TARAVAL', 'NORTHERN',
                 'SOUTHERN', 'PARK', 'INGLESIDE', 'TENDERLOIN']:
    df_d = data_df[data_df['pddistrict'] == district]
    df_d.to_csv(district + '_data.csv', index=False, header=True, encoding='utf-8')
    print(district, df_d.shape)
df_district = pd.read_csv('BAYVIEW_data.csv') #change this city for csv for whatever district being done
df_district = df_district.drop(columns=['pddistrict', 'incidntnum', 'pdid', 'location', 'descript'])
df_y = df_district['category']  # 'category' is the prediction target
df_x = df_district.drop(columns=['category'])
labelencoder = LabelEncoder()
labelencoder = labelencoder.fit(df_y)
labelencoded_y = labelencoder.transform(df_y)
# Date/time parsing helpers (never defined in the original notebook); Socrata
# timestamps are assumed to look like '2018-05-01T00:00:00.000' and times like '23:59'
convert_date_to_day = lambda d: d[8:10]
convert_date_to_month = lambda d: d[5:7]
convert_date_to_year = lambda d: d[0:4]
convert_time_to_hour = lambda t: t[0:2]
df_x['day'] = df_x.date.apply(lambda x: convert_date_to_day(x))
df_x['month'] = df_x.date.apply(lambda x: convert_date_to_month(x))
df_x['year'] = df_x.date.apply(lambda x: convert_date_to_year(x))
df_x['hour'] = df_x.time.apply(lambda x: convert_time_to_hour(x))
df_x = df_x.drop(columns=['date', 'time'])
df_x['day'] = (df_x['day']).astype(int)
df_x['month'] = (df_x['month']).astype(int)
df_x['year'] = (df_x['year']).astype(int)
df_x['hour'] = (df_x['hour']).astype(int)
df_x_int = df_x  # alias used by the encoders below (never defined in the original notebook)
label_encoder_addr = LabelEncoder()
addr_feature = label_encoder_addr.fit_transform(df_x_int.address.iloc[:].values)
addr_feature = addr_feature.reshape(df_x_int.shape[0], 1)
onehot_encoder_addr = OneHotEncoder(sparse = False)
addr_feature = onehot_encoder_addr.fit_transform(addr_feature)
label_encoder_DoW = LabelEncoder()
DoW_feature = label_encoder_DoW.fit_transform(df_x_int.dayofweek.iloc[:].values)
DoW_feature = DoW_feature.reshape(df_x_int.shape[0], 1)
onehot_encoder_DoW = OneHotEncoder(sparse = False)
DoW_feature = onehot_encoder_DoW.fit_transform(DoW_feature)
label_encoder_res = LabelEncoder()
res_feature = label_encoder_res.fit_transform(df_x_int.resolution.iloc[:].values)
res_feature = res_feature.reshape(df_x_int.shape[0], 1)
onehot_encoder_res = OneHotEncoder(sparse = False)
res_feature = onehot_encoder_res.fit_transform(res_feature)
day = df_x.day.values
month = df_x.month.values
year = df_x.year.values
hour = df_x.hour.values
x = df_x.x.values
y = df_x.y.values
columns = []
columns.append(addr_feature)
columns.append(DoW_feature)
columns.append(res_feature)
columns.append(x)
columns.append(y)
columns.append(day)
columns.append(month)
columns.append(year)
columns.append(hour)
encoded_feats = column_stack(columns)
sparse_features = sparse.csr_matrix(encoded_feats)
random_seed = 42  # assumed value; not defined in the original notebook
n_threads = 4     # assumed value; set to your core count
X_train, X_test, y_train, y_test = train_test_split(sparse_features, labelencoded_y, test_size=0.20, random_state=random_seed)
model = XGBClassifier(nthread = n_threads) #or -1
kfold = StratifiedKFold(n_splits=3, shuffle=True, random_state=random_seed)
param_grid = {'n_estimators': stats.randint(100,500), #random int between 100 and 500
              'learning_rate': stats.uniform(0.01, 0.08), #uniform on [0.01, 0.09] = [loc, loc+scale]
              'max_depth': [2, 4, 6, 8], #tree depths to check
              'colsample_bytree': stats.uniform(0.5, 0.4) #uniform on [0.5, 0.9]
             }
rand_search = RandomizedSearchCV(model, param_distributions = param_grid, scoring = 'f1_micro', n_iter = 5, n_jobs=-1, verbose = 10, cv=kfold)
rand_result = rand_search.fit(X_train, y_train)
print("Best: %f using %s" % (rand_result.best_score_, rand_result.best_params_))
best_XGB_parameters = rand_result.best_estimator_
#INSERT CITY NAME FOR .DAT FILE
pickle.dump(best_XGB_parameters, open("xgb_CITYHERE.pickle.dat", "wb"))
#test on test set
best_XGB_parameters.fit(X_train, y_train)
#CSV append best score after test set
```
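One subtlety in the search space above, worth a standalone check: `scipy.stats.uniform(loc, scale)` samples from `[loc, loc + scale]`, so `uniform(0.01, 0.08)` covers learning rates in [0.01, 0.09] and `uniform(0.5, 0.4)` covers column-subsample fractions in [0.5, 0.9].

```python
import numpy as np
from scipy import stats

rng = np.random.RandomState(0)
lr = stats.uniform(0.01, 0.08).rvs(10000, random_state=rng)
cs = stats.uniform(0.5, 0.4).rvs(10000, random_state=rng)
print(lr.min(), lr.max())  # stays inside [0.01, 0.09]
print(cs.min(), cs.max())  # stays inside [0.5, 0.9]
```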
| github_jupyter |
# Distribution function for the NFW profiles
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy import interpolate
from scipy.integrate import quad, cumtrapz
from tqdm import tqdm
import matplotlib as mpl
mpl.rcParams['font.size'] = 18.0
G_N = 4.302e-3
c = 100
def f_NFW(x):
return np.log(1+x) - x/(1+x)
rho_AMC = 1.0
R_AMC = 1.0
r_s = R_AMC/c
x2_avg = 0.13*R_AMC**2
M_AMC = 4*np.pi*rho_AMC*(r_s)**3*f_NFW(c)
print(M_AMC)
psi0 = G_N*M_AMC/R_AMC
```
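A quick sanity check (sketch) that the closed form $M_\mathrm{AMC} = 4\pi\rho_s r_s^3 f_\mathrm{NFW}(c)$ used above agrees with directly integrating the NFW profile out to $R_\mathrm{AMC} = c\,r_s$:

```python
import numpy as np
from scipy.integrate import quad

c = 100
rho_s, R_AMC = 1.0, 1.0
r_s = R_AMC / c

def f_NFW(x):
    return np.log(1 + x) - x / (1 + x)

M_closed = 4 * np.pi * rho_s * r_s**3 * f_NFW(c)
# Direct integral of rho_NFW(r) * 4 pi r^2; hint quad at the scale radius
M_numeric = quad(lambda r: 4 * np.pi * r**2 * rho_s / ((r / r_s) * (1 + r / r_s)**2),
                 0, R_AMC, points=[r_s])[0]
print(M_closed, M_numeric)  # the two agree
```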
### Quick summary of properties:
$$E_\mathrm{bind} \equiv \alpha \frac{G_N M_\mathrm{AMC}^2}{R_\mathrm{AMC}}$$
$$\langle R^2 \rangle = \kappa R_\mathrm{AMC}^2$$
**Power-law**: $\alpha = 3/2$, $\kappa = 3/11$
**NFW**: $\alpha = 3.46$, $\kappa = 0.133$
### Comparison of density profiles
**Power law** density profile:
$$\rho_\mathrm{PL}(r) = A/r^{9/4}$$
truncated at
$$R_\mathrm{AMC} = \left(\frac{3 M_\mathrm{AMC}}{4 \pi \rho_\mathrm{AMC}}\right)^{1/3}$$
meaning that the average density inside the minicluster is fixed equal to $\rho_\mathrm{AMC}$. The enclosed mass is given by:
$$M_\mathrm{enc}(r) = \frac{16\pi}{3} A r^{3/4} = M_\mathrm{AMC} \left(\frac{r}{R_\mathrm{AMC}}\right)^{3/4}$$
Or, put another way:
$$ \rho_\mathrm{PL}(r) = \frac{3 M_\mathrm{AMC}}{16 \pi R_\mathrm{AMC}{}^3} \left(\frac{R_\mathrm{AMC}}{r}\right)^{9/4} = \frac{\rho_\mathrm{AMC}}{4} \left(\frac{R_\mathrm{AMC}}{r}\right)^{9/4}$$
**NFW** density profile:
$$ \rho_\mathrm{NFW}(x) = \frac{\rho_s}{x(1+x)^2} \equiv \rho_s \omega(x)$$
where $x = r/r_s$ and the profile is truncated at $R_\mathrm{AMC} = c r_s$, with $c = 100$. We make the identification $\rho_s = \rho_\mathrm{AMC}$ and
$$r_s = \left(\frac{M_\mathrm{AMC}}{4 \pi \rho_\mathrm{AMC} f_\mathrm{NFW}(c)}\right)^{1/3}$$
### NFW disruption
Potential:
$$\psi(r) = 4\pi G \rho_s r_s^2 \frac{\log(1 + x)}{x} = \frac{G_N M_\mathrm{AMC}}{R_\mathrm{AMC}} \frac{c}{f(c)} \frac{\log(1 + x)}{x}$$
**Binding energy:**
```
def psi_NFW(r):
x = r/r_s
return (G_N*M_AMC/R_AMC)*(c/f_NFW(c))*np.log(1+x)/x
def psi(r):
psi_outer = G_N*M_AMC/np.clip(r, R_AMC, 1e30)
return np.clip(psi_NFW(r) - psi_NFW(R_AMC), 0, 1e30) + psi_outer
@np.vectorize
def rho(r):
x = r/r_s
#if (x > c):
# return 0
#else:
return rho_AMC/(x*(1+x)**2)
print(quad(lambda x: rho(x)*4*np.pi*x**2, 0, 1)[0])
R_list = np.geomspace(1e-6, 1e3, 1000)*R_AMC
rho_list = rho(R_list)
psi_list = psi(R_list)
fig, axes = plt.subplots(nrows=1, ncols=3, figsize=(15, 5))
axes[0].loglog(R_list, rho_list)
axes[0].set_xlabel(r"$R/R_\mathrm{AMC}$")
axes[0].set_ylabel(r"$\rho/\rho_\mathrm{AMC}$")
axes[1].loglog(R_list, psi_list/psi0)
axes[1].loglog(R_list, (G_N*M_AMC/R_list)/psi0, 'k--')
axes[1].set_xlabel(r"$R/R_\mathrm{AMC}$")
axes[1].set_ylabel(r"$\psi/\psi_0$")
axes[2].loglog(psi_list, rho_list)
axes[2].set_xlabel(r"$\psi/\psi_0$")
axes[2].set_ylabel(r"$\rho/\rho_\mathrm{AMC}$")
plt.tight_layout()
plt.show()
```
#### Generating the distribution function
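The cell below implements Eddington inversion: given $\rho(\psi)$, the ergodic distribution function follows (under the usual assumption that the boundary term $\left.\mathrm{d}\rho/\mathrm{d}\psi\right|_{\psi=0}\big/\sqrt{\mathcal{E}}$ vanishes) from
$$f(\mathcal{E}) = \frac{1}{\sqrt{8}\,\pi^2}\int_0^{\mathcal{E}} \frac{\mathrm{d}^2\rho}{\mathrm{d}\psi^2}\,\frac{\mathrm{d}\psi}{\sqrt{\mathcal{E}-\psi}},$$
which is exactly the integral evaluated by `f(eps)`.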
```
rho_of_psi = interpolate.InterpolatedUnivariateSpline(psi_list[::-1], rho_list[::-1])
d2rho = rho_of_psi.derivative(n=2)
def f(eps):
integ = lambda x: d2rho(x)/(np.sqrt(eps - x))
result = quad(integ, 0, eps, epsrel=1e-6)[0]
return result/(np.sqrt(8)*np.pi**2)
eps_list = psi(R_list)
f_list = 0.0*eps_list
for i, eps in enumerate(tqdm(eps_list)):
f_list[i] = f(eps)
f_interp_NFW = interpolate.interp1d(eps_list, f_list, bounds_error=False, fill_value = 0.0)
plt.figure()
plt.loglog(eps_list/psi0, f_list)
plt.xlabel(r"$\mathcal{E}/\psi_0$")
plt.ylabel(r"$f(\mathcal{E})/(\rho_\mathrm{AMC}\psi_0{}^{-3/2})$")
plt.show()
def v_max(r):
return np.sqrt(2*psi(r))
def get_density(r):
v_max = np.sqrt(2*psi(r))
v_list = np.linspace(0, v_max, 100)
f_list = f_interp_NFW(psi(r)-0.5*v_list**2)
return 4*np.pi*np.trapz(v_list**2*f_list, v_list)
r_check = np.geomspace(1e-5, 1e3, 1000)
dens_list = 0.0*r_check
for i, r in enumerate(tqdm(r_check)):
dens_list[i] = get_density(r)
plt.figure()
plt.loglog(r_check, rho(r_check), linestyle='--', color='grey')
plt.loglog(r_check, dens_list)
#plt.xlim(0, 10)
plt.xlabel(r"$R/R_\mathrm{AMC}$")
plt.ylabel(r"$\rho/\rho_\mathrm{AMC}$")
plt.show()
```
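Under the hood, `get_density` reconstructs the density from the distribution function via
$$\rho(r) = 4\pi \int_0^{v_\mathrm{max}(r)} f\!\left(\psi(r) - \tfrac{1}{2}v^2\right) v^2\,\mathrm{d}v, \qquad v_\mathrm{max}(r) = \sqrt{2\,\psi(r)},$$
so agreement with the dashed input profile above validates the interpolated $f(\mathcal{E})$.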
#### Checking the AMC properties
**Total Mass**
```
def I_nocut(x):
integ = lambda eps: np.sqrt(2*(psi(x) - eps))*f_interp_NFW(eps)
return quad(integ, 0, psi(x), epsrel=1e-4)[0]
def calcMass():
x_range = np.geomspace(1e-6, 1, 100)
I_integ = 0.0*x_range
for j, x in enumerate(x_range):
I_integ[j] = I_nocut(x)
return 16*np.pi**2*np.trapz(I_integ*x_range**2, x_range)
M_total = calcMass()
print(M_total/M_AMC)
```
**Kinetic Energy**
```
def I_kin(x):
integ = lambda eps: 0.5*(np.sqrt(2*(psi(x) - eps)))**3*f_interp_NFW(eps)
return quad(integ, 0, psi(x), epsrel=1e-4)[0]
def calcEkin():
x_range = np.geomspace(1e-6, 1, 100)
I_integ = 0.0*x_range
for j, x in enumerate(x_range):
I_integ[j] = I_kin(x)
return 16*np.pi**2*np.trapz(I_integ*x_range**2, x_range)
E_kin = calcEkin()
kappa = 2*E_kin/(G_N*M_AMC**2/R_AMC)
print("kappa = ", kappa)
```
**Potential Energy**
```
#Note the factor of 1/2 to prevent double-counting.
def I_pot(x):
integ = lambda eps: 0.5*psi(x)*np.sqrt(2*(psi(x) - eps))*f_interp_NFW(eps)
return quad(integ, 0, psi(x), epsrel=1e-6)[0]
def calcEpot():
x_range = np.geomspace(1e-6, 1, 100)
I_integ = 0.0*x_range
for j, x in enumerate(x_range):
I_integ[j] = I_pot(x)
return 16*np.pi**2*np.trapz(I_integ*x_range**2, x_range)
E_bind = calcEpot()
beta = E_bind/(G_N*M_AMC**2/R_AMC)
print("beta = ", beta)
E_total = E_kin - E_bind
print(E_total/(G_N*M_AMC**2/R_AMC))
```
#### Mass Loss
The total mass is then:
$$ M(< \Delta \eta) = 16\pi^2 \rho_\mathrm{AMC} R_\mathrm{AMC}^3 \int_{0}^{1} y^2 I(y, \Delta \eta)\,\mathrm{d}y$$
Note also that the per-particle energy injection scales with radius as $\Delta \mathcal{E} = (\Delta E/M) \times r^2/\langle r^2 \rangle$.
```
def I_loss(x, delta_eps):
integ = lambda eps: np.sqrt(2*(psi(x) - eps))*f_interp_NFW(eps)
return quad(integ, 0, np.minimum(delta_eps, psi(x)), epsrel=1e-4)[0]
def I_remain(x, delta_eps):
if (delta_eps >= psi(x)):
return 0
else:
integ = lambda eps: np.sqrt(2*np.clip(psi(x) - eps, 0, 1e30))*f_interp_NFW(eps)
#eps_range = np.sort(psi(x) - np.geomspace(1e-10, psi(x) - delta_eps, 100))
#print(eps_range/psi(x))
eps_range = psi(x)*np.sort(1 - np.geomspace(1e-9, 1 - delta_eps/psi(x), 200))
        eps_range = np.sort(np.append(eps_range, np.linspace(delta_eps*1.0001, psi(x), 200)))
#eps_range = np.linspace(delta_eps, psi(x), 1000)
#print(eps_range)#,integ(eps_range))
#print(psi(x) - eps_range)
return np.trapz(integ(eps_range), eps_range)
#else:
# i
# return quad(integ, delta_eps, psi(x), epsrel=1e-4)[0]
def I_remain_corr(x, delta_eps, psi_fun):
integ = lambda eps: np.sqrt(2*(psi(x) - eps))*f_interp_NFW(eps)
return quad(integ, np.minimum(delta_eps, psi_fun(x)), psi_fun(x), epsrel=1e-4)[0]
def calcMassLoss(delta_eps):
x_range = np.geomspace(1e-6, 1, 100)
I_integ = 0.0*x_range
for j, x in enumerate(x_range):
I_integ[j] = I_loss(x, delta_eps*x**2/x2_avg)
return 16*np.pi**2*np.trapz(I_integ*x_range**2, x_range)
deltaE_list = np.geomspace(1e-6, 1e4, 200)*E_bind
deltaM_list = 0.0*deltaE_list
for i, deltaE in enumerate(tqdm(deltaE_list)):
deltaM_list[i] = calcMassLoss(deltaE/M_AMC)
plt.figure()
plt.loglog(deltaE_list/E_bind, deltaM_list/M_AMC)
plt.xlim(1e-5, 1e4)
plt.ylim(1e-6, 2)
plt.xlabel(r"$\Delta E/E_\mathrm{bind}$")
plt.ylabel(r"$\Delta M/M_\mathrm{AMC}$")
plt.axhline(1.0, linestyle='--', color='grey')
plt.show()
```
#### Energy Ejection and Remaining
```
def calcEnergyEjected(delta_eps):
x_range = np.geomspace(1e-6, 1, 100)
I_integ = 0.0*x_range
for j, x in enumerate(x_range):
I_integ[j] = I_loss(x, delta_eps*x**2/x2_avg)
return 16*np.pi**2*np.trapz((delta_eps*x_range**2/x2_avg)*I_integ*x_range**2, x_range)
E_ejec_list = 0.0*deltaE_list
for i, deltaE in enumerate(tqdm(deltaE_list)):
E_ejec_list[i] = calcEnergyEjected(deltaE/M_AMC)
def calcEnergyRemain(delta_eps):
x_range = np.geomspace(1e-6, 1, 100)
I_integ = 0.0*x_range
for j, x in enumerate(x_range):
I_integ[j] = I_remain(x, delta_eps*x**2/x2_avg)
return 16*np.pi**2*np.trapz((delta_eps*x_range**2/x2_avg)*I_integ*x_range**2, x_range)
E_rem_list = 0.0*deltaE_list
for i, deltaE in enumerate(tqdm(deltaE_list)):
E_rem_list[i] = calcEnergyRemain(deltaE/M_AMC)
f_ej_list = E_ejec_list/deltaE_list
f_rem_list = E_rem_list/deltaE_list
f_ej_fixed = np.append(f_ej_list[:100], 1-f_rem_list[100:]) #Fix numerical issues when f_ej is close to 0 or 1
plt.figure()
plt.loglog(deltaE_list/E_bind, f_ej_list, label=r'$f_\mathrm{ej}$')
plt.loglog(deltaE_list/E_bind, f_rem_list, label=r'$f_\mathrm{rem}$')
plt.xlabel(r"$\Delta E/E_\mathrm{bind}$")
#plt.ylabel(r"$f_\mathrm{rem}$")
plt.legend(loc='best')
plt.axhline(1.0, linestyle='--', color='grey')
plt.show()
```
#### Initial Energy of unbound particles
We'll define the 'initial energy of the particles which will eventually be unbound' as:
$$E_i^\mathrm{unbound} = T_i^\mathrm{unbound} + E_{\mathrm{bind}, i} - E_{\mathrm{bind}, f}$$
where $T_i^\mathrm{unbound}$ is the total initial kinetic energy of the particles which will be unbound.
```
print(rho(0.01), 4*np.pi*I_loss(0.01, 1e-5))
def calcFinalEbind(delta_eps):
x_range = np.geomspace(1e-6, 1, 100)
rho_final = 0.0*x_range
#if (delta_eps > 0.1*E_bind):
for j, x in enumerate(x_range):
rho_final[j] = 4*np.pi*I_remain(x, delta_eps*x**2/x2_avg)
#else:
# for j, x in enumerate(x_range):
# rho_final[j] = rho(x) - 4*np.pi*I_loss(x, delta_eps*x**2/x2_avg)
Menc = cumtrapz(4*np.pi*rho_final*x_range**2, x_range, initial=0.0)
return G_N*np.trapz((Menc/x_range)*4*np.pi*rho_final*x_range**2, x_range)
```
Calculating the 'first order' change in binding energy
```
#Note the factor of 1/2 to prevent double-counting.
def I_pot_loss(x, delta_eps):
integ = lambda eps: 0.5*psi(x)*np.sqrt(2*(psi(x) - eps))*f_interp_NFW(eps)
return quad(integ, 0, np.minimum(delta_eps, psi(x)), epsrel=1e-4)[0]
def calcEpot_loss(delta_eps):
x_range = np.geomspace(1e-6, 1, 100)
I_integ = 0.0*x_range
for j, x in enumerate(x_range):
I_integ[j] = I_pot_loss(x, delta_eps*x**2/x2_avg)
return 16*np.pi**2*np.trapz(I_integ*x_range**2, x_range)
calcFinalEbind(1e-5*E_bind/M_AMC)/E_bind
Ebind1_list = 0.0*deltaE_list
Ebind2loss_list = 0.0*deltaE_list
for i, deltaE in enumerate(tqdm(deltaE_list)):
Ebind1_list[i] = calcFinalEbind(deltaE/M_AMC)
Ebind2loss_list[i] = calcEpot_loss(deltaE/M_AMC)
print(E_bind, Ebind1_list[0])
```
**Need to check the Ebind1_list calculation for small dE**
```
plt.figure()
plt.loglog(deltaE_list/E_bind, np.abs(1-Ebind1_list/E_bind), label="Full")
plt.show()
```
The change in binding energy can be very well approximated as:
$$ \Delta E_\mathrm{bind} = (1 - \frac{1}{2}\frac{\Delta M}{M}) \times \int_\mathrm{removed} \psi(r) f(r, v)\,\mathrm{d}^3 r$$
```
plt.figure()
plt.loglog(deltaE_list/E_bind, 1 - Ebind1_list/E_bind, label="Full")
plt.loglog(deltaE_list/E_bind, 2*(1 - 0.5*deltaM_list/M_AMC)*Ebind2loss_list/E_bind, label="1st order")
plt.ylim(1e-5, 2)
plt.legend()
plt.show()
plt.figure()
plt.loglog(deltaE_list/E_bind,(1-Ebind1_list/E_bind)/(Ebind2loss_list/E_bind))
plt.show()
def I_kin_loss(x, delta_eps):
integ = lambda eps: 0.5*(np.sqrt(2*(psi(x) - eps)))**3*f_interp_NFW(eps)
return quad(integ, 0, np.minimum(delta_eps, psi(x)), epsrel=1e-4)[0]
def calcEunbound_kin(delta_eps):
x_range = np.geomspace(1e-6, 1, 100)*R_AMC
I_integ = 0.0*x_range
for j, x in enumerate(x_range):
I_integ[j] = I_kin_loss(x, delta_eps*x**2/x2_avg)
return 16*np.pi**2*np.trapz(I_integ*x_range**2, x_range)
deltaU0 = -calcFinalEbind(0)- (-E_bind)
#print(FinalEbind0)
def calcEi_unbound(deltaE):
T_i_ub = calcEunbound_kin(deltaE/M_AMC)
deltaU = (-calcFinalEbind(deltaE/M_AMC)) - (-E_bind) - deltaU0
#print(deltaU)
return T_i_ub - (deltaU)
Ei_unbound_list = 0.0*deltaE_list
for i, deltaE in enumerate(tqdm(deltaE_list)):
Ei_unbound_list[i] = calcEi_unbound(deltaE)
plt.figure()
plt.loglog(deltaE_list/E_bind, Ei_unbound_list/E_total)
plt.xlabel(r"$\Delta E/E_\mathrm{bind}$")
plt.ylabel(r"$E_i^\mathrm{unbound}/E_i^\mathrm{total}$")
plt.show()
E_final_list = E_total + deltaE_list*(1 - f_ej_fixed) - Ei_unbound_list
plt.figure()
plt.semilogx(deltaE_list/E_bind, E_final_list/E_total)
plt.xlabel(r"$\Delta E/E_\mathrm{bind}$")
plt.ylabel(r"$E_f/E_i$")
plt.show()
```
#### Summary plot
```
plt.figure()
plt.loglog(deltaE_list/E_bind, deltaM_list/M_AMC, label="$\Delta M/M_\mathrm{AMC}$")
plt.loglog(deltaE_list/E_bind, f_ej_fixed, label="$f_\mathrm{ej}$")
plt.loglog(deltaE_list/E_bind, Ei_unbound_list/E_total, label="$E_i^\mathrm{unbound}/E_i^\mathrm{total}$")
plt.axhline(1.0, linestyle='--', color='grey')
plt.xlabel(r"$\Delta E/E_\mathrm{bind}$")
#plt.ylabel(r"$E_i^\mathrm{unbound}/E_i^\mathrm{total}$")
plt.xlim(1e-5, 1e4)
plt.ylim(1e-6, 2)
plt.legend(loc='best')
plt.show()
hdrtxt = "Binding energy = (3.46)*G_N*M_AMC^2/R_AMC\nColumns: deltaE/Ebind, deltaM/M, f_ej, E_i_unbound/E_i_total"
np.savetxt("../data/Perturbations_NFW.txt", list(zip(deltaE_list/E_bind, deltaM_list/M_AMC, f_ej_fixed, np.clip(Ei_unbound_list/E_total, 0, 1e30))), header=hdrtxt)
```
| github_jupyter |
# Markov Random Fields for Collaborative Filtering (Memory Efficient)
This notebook provides a **memory efficient version** in Python 3.7 of the algorithm outlined in the paper
"[Markov Random Fields for Collaborative Filtering](https://arxiv.org/abs/1910.09645)"
at the 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.
For reproducibility, the experiments utilize publicly available [code](https://github.com/dawenl/vae_cf) for pre-processing three popular data-sets and for evaluating the learned model. That code accompanies the paper "[Variational Autoencoders for Collaborative Filtering](https://arxiv.org/abs/1802.05814)" by Dawen Liang et al. at The Web Conference 2018. While the code for the Movielens-20M data-set was made publicly available, the code for pre-processing the other two data-sets can easily be obtained by modifying their code as described in their paper.
The experiments in the paper (where an AWS instance with 64 GB RAM and 16 vCPUs was used) may be re-run by following these three steps:
- Step 1: Pre-processing the data (utilizing the publicly available [code](https://github.com/dawenl/vae_cf))
- Step 2: Learning the MRF (this code implements the new algorithm)
- Step 3: Evaluation (utilizing the publicly available [code](https://github.com/dawenl/vae_cf))
This memory efficient version is modified by Yifei Shen @ Hong Kong University of Science and Technology
## Step 1: Pre-processing the data
Utilizing the publicly available [code](https://github.com/dawenl/vae_cf), which is copied below (with kind permission of Dawen Liang):
- run their cells 1-26 for data pre-processing
- note that importing matplotlib, seaborn, and tensorflow may not be necessary for our purposes here
- run their cells 29-31 for loading the training data
Note that the following code is modified as to pre-process the [MSD data-set](https://labrosa.ee.columbia.edu/millionsong/tasteprofile). For pre-processing the [MovieLens-20M data-set](https://grouplens.org/datasets/movielens/20m/), see their original publicly-available [code](https://github.com/dawenl/vae_cf).
```
import os
import shutil
import sys
import numpy as np
from scipy import sparse
import pandas as pd
import bottleneck as bn
# change to the location of the data
DATA_DIR = 'MSD'
itemId='songId' # for MSD data
raw_data = pd.read_csv(os.path.join(DATA_DIR, 'train_triplets.txt'), sep='\t', header=None, names=['userId', 'songId', 'playCount'])
```
### Data splitting procedure
- Select 50K users as heldout users, 50K users as validation users, and the rest of the users for training
- Use all the items from the training users as item set
- For each validation and test user, subsample 80% of their interactions as fold-in data and hold out the rest for prediction
```
def get_count(tp, id):
playcount_groupbyid = tp[[id]].groupby(id, as_index=False)
count = playcount_groupbyid.size()
return count
def filter_triplets(tp, min_uc=5, min_sc=0):
# Only keep the triplets for items which were clicked on by at least min_sc users.
if min_sc > 0:
itemcount = get_count(tp, itemId)
tp = tp[tp[itemId].isin(itemcount.index[itemcount >= min_sc])]
# Only keep the triplets for users who clicked on at least min_uc items
# After doing this, some of the items will have less than min_uc users, but should only be a small proportion
if min_uc > 0:
usercount = get_count(tp, 'userId')
tp = tp[tp['userId'].isin(usercount.index[usercount >= min_uc])]
# Update both usercount and itemcount after filtering
usercount, itemcount = get_count(tp, 'userId'), get_count(tp, itemId)
return tp, usercount, itemcount
raw_data, user_activity, item_popularity = filter_triplets(raw_data, min_uc=20, min_sc=200) # for MSD data
sparsity = 1. * raw_data.shape[0] / (user_activity.shape[0] * item_popularity.shape[0])
print("After filtering, there are %d watching events from %d users and %d movies (sparsity: %.3f%%)" %
(raw_data.shape[0], user_activity.shape[0], item_popularity.shape[0], sparsity * 100))
unique_uid = user_activity.index
np.random.seed(98765)
idx_perm = np.random.permutation(unique_uid.size)
unique_uid = unique_uid[idx_perm]
# create train/validation/test users
n_users = unique_uid.size
n_heldout_users = 50000 # for MSD data
tr_users = unique_uid[:(n_users - n_heldout_users * 2)]
vd_users = unique_uid[(n_users - n_heldout_users * 2): (n_users - n_heldout_users)]
te_users = unique_uid[(n_users - n_heldout_users):]
train_plays = raw_data.loc[raw_data['userId'].isin(tr_users)]
unique_sid = pd.unique(train_plays[itemId])
show2id = dict((sid, i) for (i, sid) in enumerate(unique_sid))
profile2id = dict((pid, i) for (i, pid) in enumerate(unique_uid))
pro_dir = os.path.join(DATA_DIR, 'pro_sg')
if not os.path.exists(pro_dir):
os.makedirs(pro_dir)
with open(os.path.join(pro_dir, 'unique_sid.txt'), 'w') as f:
for sid in unique_sid:
f.write('%s\n' % sid)
def split_train_test_proportion(data, test_prop=0.2):
data_grouped_by_user = data.groupby('userId')
tr_list, te_list = list(), list()
np.random.seed(98765)
for i, (_, group) in enumerate(data_grouped_by_user):
n_items_u = len(group)
if n_items_u >= 5:
idx = np.zeros(n_items_u, dtype='bool')
idx[np.random.choice(n_items_u, size=int(test_prop * n_items_u), replace=False).astype('int64')] = True
tr_list.append(group[np.logical_not(idx)])
te_list.append(group[idx])
else:
tr_list.append(group)
if i % 5000 == 0:
print("%d users sampled" % i)
sys.stdout.flush()
data_tr = pd.concat(tr_list)
data_te = pd.concat(te_list)
return data_tr, data_te
vad_plays = raw_data.loc[raw_data['userId'].isin(vd_users)]
vad_plays = vad_plays.loc[vad_plays[itemId].isin(unique_sid)]
vad_plays_tr, vad_plays_te = split_train_test_proportion(vad_plays)
test_plays = raw_data.loc[raw_data['userId'].isin(te_users)]
test_plays = test_plays.loc[test_plays[itemId].isin(unique_sid)]
test_plays_tr, test_plays_te = split_train_test_proportion(test_plays)
```
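A toy check (hedged sketch, with the splitter restated so the snippet is self-contained) of `split_train_test_proportion`: a user with at least five interactions has roughly 20% of their rows held out, while users with fewer keep everything in the fold-in part.

```python
import numpy as np
import pandas as pd

def split_train_test_proportion(data, test_prop=0.2):
    tr_list, te_list = list(), list()
    np.random.seed(98765)
    for _, group in data.groupby('userId'):
        n_items_u = len(group)
        if n_items_u >= 5:
            idx = np.zeros(n_items_u, dtype='bool')
            idx[np.random.choice(n_items_u, size=int(test_prop * n_items_u), replace=False)] = True
            tr_list.append(group[np.logical_not(idx)])
            te_list.append(group[idx])
        else:
            tr_list.append(group)  # too few items: keep everything as fold-in
    return pd.concat(tr_list), pd.concat(te_list)

# u1 has 10 items (8 fold-in / 2 held out), u2 has 3 (all fold-in)
toy = pd.DataFrame({'userId': ['u1'] * 10 + ['u2'] * 3,
                    'songId': ['s%d' % i for i in range(13)]})
tr, te = split_train_test_proportion(toy)
print(len(tr), len(te))  # 11 2
```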
### Save the data into (user_index, item_index) format
```
def numerize(tp):
uid = list(map(lambda x: profile2id[x], tp['userId']))
sid = list(map(lambda x: show2id[x], tp[itemId]))
return pd.DataFrame(data={'uid': uid, 'sid': sid}, columns=['uid', 'sid'])
train_data = numerize(train_plays)
train_data.to_csv(os.path.join(pro_dir, 'train.csv'), index=False)
vad_data_tr = numerize(vad_plays_tr)
vad_data_tr.to_csv(os.path.join(pro_dir, 'validation_tr.csv'), index=False)
vad_data_te = numerize(vad_plays_te)
vad_data_te.to_csv(os.path.join(pro_dir, 'validation_te.csv'), index=False)
test_data_tr = numerize(test_plays_tr)
test_data_tr.to_csv(os.path.join(pro_dir, 'test_tr.csv'), index=False)
test_data_te = numerize(test_plays_te)
test_data_te.to_csv(os.path.join(pro_dir, 'test_te.csv'), index=False)
```
### Load the pre-processed training and validation data
```
unique_sid = list()
with open(os.path.join(pro_dir, 'unique_sid.txt'), 'r') as f:
for line in f:
unique_sid.append(line.strip())
n_items = len(unique_sid)
def load_train_data(csv_file):
tp = pd.read_csv(csv_file)
n_users = tp['uid'].max() + 1
rows, cols = tp['uid'], tp['sid']
data = sparse.csr_matrix((np.ones_like(rows),
(rows, cols)), dtype='float64',
shape=(n_users, n_items))
return data
train_data = load_train_data(os.path.join(pro_dir, 'train.csv'))
```
## Step 2: Learning the MRF model (implementation of the new algorithm)
Now run the following code and choose to learn
- either the dense MRF model
- or the sparse MRF model
```
import time
from copy import deepcopy
class MyClock:
startTime = time.time()
def tic(self):
self.startTime = time.time()
def toc(self):
secs = time.time() - self.startTime
print("... elapsed time: {} min {} sec".format(int(secs//60), secs%60) )
myClock = MyClock()
totalClock = MyClock()
alpha = 0.75
```
### Pre-computation of the training data
```
def filter_XtX(train_data, block_size, thd4mem, thd4comp):
# To obtain and sparsify XtX at the same time to save memory
# block_size (2nd input) and threshold for memory (3rd input) controls the memory usage
# thd4comp is the threshold to control training efficiency
XtXshape = train_data.shape[1]
userCount = train_data.shape[0]
bs = block_size
blocks = train_data.shape[1]// bs + 1
flag = False
thd = thd4mem
#normalize data
mu = np.squeeze(np.array(np.sum(train_data, axis=0)))/ userCount
variance_times_userCount = (mu - mu * mu) * userCount
rescaling = np.power(variance_times_userCount, alpha / 2.0)
scaling = 1.0 / rescaling
#block multiplication
for ii in range(blocks):
for jj in range(blocks):
XtX_tmp = np.asarray(train_data[:,bs*ii : bs*(ii+1)].T.dot(train_data[:,bs*jj : bs*(jj+1)]).todense(), dtype = np.float32)
XtX_tmp -= mu[bs*ii:bs*(ii+1),None] * (mu[bs*jj : bs*(jj+1)]* userCount)
XtX_tmp = scaling[bs*ii:bs*(ii+1),None] * XtX_tmp * scaling[bs*jj : bs*(jj+1)]
# sparsification filter 1 to control memory usage
ix = np.where(np.abs(XtX_tmp) > thd)
XtX_nz = XtX_tmp[ix]
ix = np.array(ix, dtype = 'int32')
ix[0,:] += bs*ii
ix[1,:] += bs*jj
if(flag):
ixs = np.concatenate((ixs, ix), axis = 1)
XtX_nzs = np.concatenate((XtX_nzs, XtX_nz), axis = 0)
else:
ixs = ix
XtX_nzs = XtX_nz
flag = True
#sparsification filter 2 to control training time of the algorithm
ix2 = np.where(np.abs(XtX_nzs) >= thd4comp)
AA_nzs = XtX_nzs[ix2]
AA_ixs = np.squeeze(ixs[:,ix2])
print(XtX_nzs.shape, AA_nzs.shape)
XtX = sparse.csc_matrix( (XtX_nzs, ixs), shape=(XtXshape,XtXshape), dtype=np.float32)
AA = sparse.csc_matrix( (AA_nzs, AA_ixs), shape=(XtXshape,XtXshape), dtype=np.float32)
return XtX, rescaling, XtX.diagonal(), AA
XtX, rescaling, XtXdiag, AtA = filter_XtX(train_data, 10000, 0.04, 0.11)
ii_diag = np.diag_indices(XtX.shape[0])
scaling = 1/rescaling
```
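The column rescaling inside `filter_XtX` can be sanity-checked on a toy dense matrix. The sketch below (illustrative values only; it assumes nothing beyond numpy) computes the same centered and rescaled Gram matrix in one shot:

```python
# Toy check of the normalization used in filter_XtX above: center X^T X by
# the column means and rescale each axis by variance^(alpha/2). After
# scaling, the diagonal equals variance_times_userCount^(1 - alpha).
import numpy as np

alpha = 0.75
X = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 1],
              [1, 0, 0]], dtype=np.float64)
n_users = X.shape[0]
mu = X.mean(axis=0)
var_times_n = (mu - mu * mu) * n_users          # variance_times_userCount
scaling = 1.0 / np.power(var_times_n, alpha / 2.0)
G = X.T @ X - n_users * np.outer(mu, mu)        # centered Gram matrix
G_scaled = scaling[:, None] * G * scaling[None, :]
```

The blockwise double loop in `filter_XtX` computes exactly this quantity one `block_size`-sized tile at a time, so the full dense `XtX` never has to fit in memory.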
### Sparse MRF model
```
def calculate_sparsity_pattern(AtA, maxInColumn):
# this implements section 3.1 in the paper.
print("sparsifying the data-matrix (section 3.1 in the paper) ...")
myClock.tic()
# apply threshold
#ix = np.where( np.abs(XtX) > threshold)
#AA = sparse.csc_matrix( (XtX[ix], ix), shape=XtX.shape, dtype=np.float32)
AA = AtA
# enforce maxInColumn, see section 3.1 in paper
countInColumns=AA.getnnz(axis=0)
iiList = np.where(countInColumns > maxInColumn)[0]
print(" number of items with more than {} entries in column: {}".format(maxInColumn, len(iiList)) )
for ii in iiList:
jj= AA[:,ii].nonzero()[0]
kk = bn.argpartition(-np.abs(np.asarray(AA[jj,ii].todense()).flatten()), maxInColumn)[maxInColumn:]
AA[ jj[kk], ii ] = 0.0
AA.eliminate_zeros()
print(" resulting sparsity of AA: {}".format( AA.nnz*1.0 / AA.shape[0] / AA.shape[0]) )
myClock.toc()
return AA
def sparse_parameter_estimation(rr, XtX, AA, XtXdiag):
# this implements section 3.2 in the paper
# list L in the paper, sorted by item-counts per column, ties broken by item-popularities as reflected by np.diag(XtX)
AAcountInColumns = AA.getnnz(axis=0)
sortedList=np.argsort(AAcountInColumns+ XtXdiag /2.0/ np.max(XtXdiag) )[::-1]
print("iterating through steps 1,2, and 4 in section 3.2 of the paper ...")
myClock.tic()
todoIndicators=np.ones(AAcountInColumns.shape[0])
blockList=[] # list of blocks. Each block is a list of item-indices, to be processed in step 3 of the paper
for ii in sortedList:
if todoIndicators[ii]==1:
nn, _, vals=sparse.find(AA[:,ii]) # step 1 in paper: set nn contains item ii and its neighbors N
kk=np.argsort(np.abs(vals))[::-1]
nn=nn[kk]
blockList.append(nn) # list of items in the block, to be processed in step 3 below
# remove possibly several items from list L, as determined by parameter rr (r in the paper)
dd_count=max(1,int(np.ceil(len(nn)*rr)))
dd=nn[:dd_count] # set D, see step 2 in the paper
todoIndicators[dd]=0 # step 4 in the paper
myClock.toc()
print("now step 3 in section 3.2 of the paper: iterating ...")
# now the (possibly heavy) computations of step 3:
# given that steps 1,2,4 are already done, the following for-loop could be implemented in parallel.
myClock.tic()
BBlist_ix1, BBlist_ix2, BBlist_val = [], [], []
for nn in blockList:
#calculate dense solution for the items in set nn
BBblock=np.linalg.inv( np.array(XtX[np.ix_(nn,nn)].todense()) )
#BBblock=np.linalg.inv( XtX[np.ix_(nn,nn)] )
BBblock/=-np.diag(BBblock)
# determine set D based on parameter rr (r in the paper)
dd_count=max(1,int(np.ceil(len(nn)*rr)))
dd=nn[:dd_count] # set D in paper
# store the solution regarding the items in D
blockix = np.meshgrid(dd,nn)
BBlist_ix1.extend(blockix[1].flatten().tolist())
BBlist_ix2.extend(blockix[0].flatten().tolist())
BBlist_val.extend(BBblock[:,:dd_count].flatten().tolist())
myClock.toc()
print("final step: obtaining the sparse matrix BB by averaging the solutions regarding the various sets D ...")
myClock.tic()
BBsum = sparse.csc_matrix( (BBlist_val, (BBlist_ix1, BBlist_ix2 ) ), shape=XtX.shape, dtype=np.float32)
BBcnt = sparse.csc_matrix( (np.ones(len(BBlist_ix1), dtype=np.float32), (BBlist_ix1,BBlist_ix2 ) ), shape=XtX.shape, dtype=np.float32)
b_div= sparse.find(BBcnt)[2]
b_3= sparse.find(BBsum)
BBavg = sparse.csc_matrix( ( b_3[2] / b_div , (b_3[0],b_3[1] ) ), shape=XtX.shape, dtype=np.float32)
BBavg[ii_diag]=0.0
myClock.toc()
print("forcing the sparsity pattern of AA onto BB ...")
myClock.tic()
BBavg = sparse.csr_matrix( ( np.asarray(BBavg[AA.nonzero()]).flatten(), AA.nonzero() ), shape=BBavg.shape, dtype=np.float32)
print(" resulting sparsity of learned BB: {}".format( BBavg.nnz * 1.0 / AA.shape[0] / AA.shape[0]) )
myClock.toc()
return BBavg
def sparse_solution(rr, maxInColumn, L2reg):
# sparsity pattern, see section 3.1 in the paper
XtX[ii_diag] = XtXdiag
AA = calculate_sparsity_pattern(AtA, maxInColumn)
# parameter-estimation, see section 3.2 in the paper
XtX[ii_diag] = XtXdiag+L2reg
BBsparse = sparse_parameter_estimation(rr, XtX, AA, XtXdiag+L2reg)
return BBsparse
```
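The heart of step 3 above is a small dense solve per block: invert the (regularized) sub-matrix of `XtX` restricted to the block's items, then divide each column by its negated diagonal. A minimal sketch on a hypothetical 3×3 block (numbers made up):

```python
# Per-block solve as in sparse_parameter_estimation: invert the Gram
# sub-matrix, then scale each column j by -1/B[j, j], which sets the
# diagonal to -1 (the diagonal is zeroed later, after averaging).
import numpy as np

XtX_block = np.array([[2.0, 0.5, 0.2],
                      [0.5, 3.0, 0.1],
                      [0.2, 0.1, 1.5]])
B = np.linalg.inv(XtX_block)
B /= -np.diag(B)   # broadcasting divides column j by -B[j, j]
```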
training the sparse model:
```
maxInColumn = 1000
# hyper-parameter r in the paper, which determines the trade-off between approximation-accuracy and training-time
rr = 0.1
# L2 norm regularization
L2reg = 1.0
print("training the sparse model:\n")
totalClock.tic()
BBsparse = sparse_solution(rr, maxInColumn, L2reg)
print("\ntotal training time (including the time for determining the sparsity-pattern):")
totalClock.toc()
print("\nre-scaling BB back to the original item-popularities ...")
# assuming that mu.T.dot(BB) == mu, see Appendix in paper
myClock.tic()
BBsparse=sparse.diags(scaling).dot(BBsparse).dot(sparse.diags(rescaling))
myClock.toc()
#print("\nfor the evaluation below: converting the sparse model into a dense-matrix-representation ...")
#myClock.tic()
#BB = np.asarray(BBsparse.todense(), dtype=np.float32)
#myClock.toc()
```
## Step 3: Evaluating the MRF model
Utilizing the publicly available [code](https://github.com/dawenl/vae_cf), which is copied below (with kind permission of Dawen Liang):
- run their cell 32 for loading the test data
- run their cells 35 and 36 for the ranking metrics (for later use in evaluation)
- run their cells 45 and 46
- modify and run their cell 50:
- remove 2 lines: the one that starts with ```with``` and the line below
- remove the indentation of the line that starts with ```for```
- modify the line that starts with ```pred_val``` as follows: ```pred_val = X.dot(BB)```
- run their cell 51
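After those edits, the scoring part of that cell reduces to one matrix product followed by masking of already-seen items. A hedged sketch with made-up numbers (`BB` stands in for the learned item-item matrix):

```python
# Sketch of the modified evaluation step: score = X . BB, then mask the
# items the user already interacted with so they cannot be recommended.
import numpy as np

BB = np.array([[0.0, 0.2, 0.1, 0.0],
               [0.2, 0.0, 0.0, 0.3],
               [0.1, 0.0, 0.0, 0.1],
               [0.0, 0.3, 0.1, 0.0]])
X = np.array([[1.0, 0.0, 1.0, 0.0]])   # one user who saw items 0 and 2
pred_val = X.dot(BB)
pred_val[X.nonzero()] = -np.inf        # exclude training items from ranking
```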
```
def load_tr_te_data(csv_file_tr, csv_file_te):
tp_tr = pd.read_csv(csv_file_tr)
tp_te = pd.read_csv(csv_file_te)
start_idx = min(tp_tr['uid'].min(), tp_te['uid'].min())
end_idx = max(tp_tr['uid'].max(), tp_te['uid'].max())
rows_tr, cols_tr = tp_tr['uid'] - start_idx, tp_tr['sid']
rows_te, cols_te = tp_te['uid'] - start_idx, tp_te['sid']
data_tr = sparse.csr_matrix((np.ones_like(rows_tr),
(rows_tr, cols_tr)), dtype='float64', shape=(end_idx - start_idx + 1, n_items))
data_te = sparse.csr_matrix((np.ones_like(rows_te),
(rows_te, cols_te)), dtype='float64', shape=(end_idx - start_idx + 1, n_items))
return data_tr, data_te
def NDCG_binary_at_k_batch(X_pred, heldout_batch, k=100):
'''
normalized discounted cumulative gain@k for binary relevance
ASSUMPTIONS: all the 0's in heldout_data indicate 0 relevance
'''
batch_users = X_pred.shape[0]
idx_topk_part = bn.argpartition(-X_pred, k, axis=1)
topk_part = X_pred[np.arange(batch_users)[:, np.newaxis],
idx_topk_part[:, :k]]
idx_part = np.argsort(-topk_part, axis=1)
# X_pred[np.arange(batch_users)[:, np.newaxis], idx_topk] is the sorted
# topk predicted score
idx_topk = idx_topk_part[np.arange(batch_users)[:, np.newaxis], idx_part]
# build the discount template
tp = 1. / np.log2(np.arange(2, k + 2))
DCG = (heldout_batch[np.arange(batch_users)[:, np.newaxis],
idx_topk].toarray() * tp).sum(axis=1)
IDCG = np.array([(tp[:min(n, k)]).sum()
for n in heldout_batch.getnnz(axis=1)])
return DCG / IDCG
def Recall_at_k_batch(X_pred, heldout_batch, k=100):
batch_users = X_pred.shape[0]
idx = bn.argpartition(-X_pred, k, axis=1)
X_pred_binary = np.zeros_like(X_pred, dtype=bool)
X_pred_binary[np.arange(batch_users)[:, np.newaxis], idx[:, :k]] = True
X_true_binary = (heldout_batch > 0).toarray()
tmp = (np.logical_and(X_true_binary, X_pred_binary).sum(axis=1)).astype(
np.float32)
recall = tmp / np.minimum(k, X_true_binary.sum(axis=1))
return recall
```
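These metric implementations can be sanity-checked on a tiny dense batch. The sketch below mirrors `Recall_at_k_batch` with `np.argsort` in place of `bn.argpartition` (made-up scores, two users, five items):

```python
# Dense re-implementation of Recall@k for a quick sanity check: take each
# user's top-k predicted items and count how many held-out items they hit.
import numpy as np

X_pred = np.array([[0.9, 0.1, 0.8, 0.2, 0.0],
                   [0.1, 0.9, 0.2, 0.8, 0.0]])
X_true = np.array([[1, 0, 1, 0, 0],
                   [0, 1, 0, 0, 0]], dtype=bool)
k = 2
topk = np.argsort(-X_pred, axis=1)[:, :k]
pred_binary = np.zeros_like(X_pred, dtype=bool)
pred_binary[np.arange(X_pred.shape[0])[:, None], topk] = True
hits = np.logical_and(X_true, pred_binary).sum(axis=1)
recall = hits / np.minimum(k, X_true.sum(axis=1))
```

User 0's two held-out items both land in the top 2, and user 1's single held-out item does as well, so both recalls come out to 1.0.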
### Load the test data and compute test metrics
```
test_data_tr, test_data_te = load_tr_te_data(
os.path.join(pro_dir, 'test_tr.csv'),
os.path.join(pro_dir, 'test_te.csv'))
N_test = test_data_tr.shape[0]
idxlist_test = range(N_test)
batch_size_test = 2000
n100_list, r20_list, r50_list = [], [], []
for bnum, st_idx in enumerate(range(0, N_test, batch_size_test)):
end_idx = min(st_idx + batch_size_test, N_test)
X = test_data_tr[idxlist_test[st_idx:end_idx]]
#if sparse.isspmatrix(X):
# X = X.toarray()
#X = X.astype('float32')
pred_val = np.array(X.dot(BBsparse).todense())
# exclude examples from training and validation (if any)
pred_val[X.nonzero()] = -np.inf
n100_list.append(NDCG_binary_at_k_batch(pred_val, test_data_te[idxlist_test[st_idx:end_idx]], k=100))
r20_list.append(Recall_at_k_batch(pred_val, test_data_te[idxlist_test[st_idx:end_idx]], k=20))
r50_list.append(Recall_at_k_batch(pred_val, test_data_te[idxlist_test[st_idx:end_idx]], k=50))
n100_list = np.concatenate(n100_list)
r20_list = np.concatenate(r20_list)
r50_list = np.concatenate(r50_list)
print("Test NDCG@100=%.5f (%.5f)" % (np.mean(n100_list), np.std(n100_list) / np.sqrt(len(n100_list))))
print("Test Recall@20=%.5f (%.5f)" % (np.mean(r20_list), np.std(r20_list) / np.sqrt(len(r20_list))))
print("Test Recall@50=%.5f (%.5f)" % (np.mean(r50_list), np.std(r50_list) / np.sqrt(len(r50_list))))
```
These metrics reflect the accuracy of the sparse approximation (with sparsity 0.1% and parameter r=0.5).
<a href="https://colab.research.google.com/github/HenriqueCCdA/bootCampAluraDataScience/blob/master/modulo1/desafios/Desafio_aula5_modulo1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
milhao = 1.e6
```
# Challenges from lesson 5 of module 1
## Challenge 01: Search the Matplotlib documentation for how to add a grid to plots, and add one to the bar charts.
```
uri = "https://raw.githubusercontent.com/alura-cursos/agendamento-hospitalar/main/dados/A151346189_28_143_208.csv"
dados = pd.read_csv(uri,
encoding="ISO-8859-1",
skiprows = 3, sep=";",
skipfooter=12,
thousands=".",
decimal=",",
engine='python')
colunas_usaveis = dados.mean().index.tolist()
colunas_usaveis.insert(0, "Unidade da Federação")
dados_usaveis = dados[colunas_usaveis]
dados_usaveis = dados_usaveis.set_index("Unidade da Federação")
dados_usaveis = dados_usaveis/milhao
ax = dados_usaveis.sort_values("Total", ascending=False).plot(y = "2018/Ago",
kind = "bar",
figsize=(9,6),
color='green')
plt.title( "Valor de gastos de saude por unidade da federação", fontsize=20)
ax.yaxis.set_major_formatter(ticker.StrMethodFormatter("{x:,.2f}"))
ax.set_ylabel("Gastos (R$ Milhões)", fontsize=14)
plt.grid(axis="y", ls=':')
plt.show()
```
## Challenge 02: Make a chart and a table of the other states' spending relative to your own state, or any other state of interest.
```
gatos_por_estados_normalizado_RJ = dados_usaveis.copy()
for coluna in gatos_por_estados_normalizado_RJ.columns:
gatos_por_estados_normalizado_RJ[coluna] = gatos_por_estados_normalizado_RJ[coluna] / gatos_por_estados_normalizado_RJ.loc["33 Rio de Janeiro", coluna]
gatos_por_estados_normalizado_RJ = gatos_por_estados_normalizado_RJ.drop("33 Rio de Janeiro")
gatos_por_estados_normalizado_RJ.head()
ax = gatos_por_estados_normalizado_RJ.sort_values("Total", ascending=False).plot(y = "2018/Ago",
kind = "bar",
figsize=(9,6),
color='green')
plt.title( "Valor de gastos de saude por Estado normalizados pelo RJ", fontsize=20)
ax.yaxis.set_major_formatter(ticker.StrMethodFormatter("{x:,.2f}"))
ax.set_ylabel("Gastos (R$ Milhões)", fontsize=14)
plt.grid(axis="y", ls=':')
plt.show()
```
## Challenge 03: Compute the spending proportional to the population of your state, plus one more state of your choice.
Source for the population estimate data:
http://www2.datasus.gov.br/DATASUS/index.php?area=0206&id=6942
```
dados_pol = pd.read_csv("populacao_por_estado_estimativa.csv",
encoding="ISO-8859-1",
skiprows = 3, sep=";",
skipfooter=12,
thousands=".",
decimal=",",
engine='python')
dados_pol.set_index("Unidade da Federação", inplace=True)
dados_pol.head()
ax = dados_pol.T[1:].plot(figsize=(12,6))
ax.legend(loc='best', ncol=2, bbox_to_anchor=(1,1))
```
***Calculating spending per inhabitant. I assumed that the population does not change within a given year.***
```
gatos_por_estados_por_hab = dados_usaveis.copy()
gatos_por_estados_por_hab = gatos_por_estados_por_hab*milhao
gatos_por_estados_por_hab = gatos_por_estados_por_hab.loc[:,"2008/Jan":"2019/Dez"]
anos = dados_pol.columns
for coluna in gatos_por_estados_por_hab.columns:
for ano in anos:
if coluna.startswith(ano):
gatos_por_estados_por_hab[coluna] = gatos_por_estados_por_hab[coluna]/dados_pol[ano]
gatos_por_estados_por_hab.head()
```
**Checking that the per-inhabitant calculation was done correctly. Verification done by sampling.**
```
from termcolor import colored
from random import sample
estados = sample(dados_usaveis.index.tolist(), 3)
anos = ['2008', '2009'] + ['20'+ str(i) for i in range(10,20)]
anos = sample(anos, 3)
meses= ['/Jan', '/Fev', '/Mar', '/Abr', '/Mai', '/Jun', '/Jul', '/Ago', '/Set', '/Out', '/Nov', '/Dez']
meses = sample(meses, 3)
for estado in estados:
for ano in anos:
for mes in meses:
print(f"Verificao para o estado {estado} no {ano}{mes} ", end=" ")
gastos = dados_usaveis.loc[estado, ano+mes]*milhao
pol = dados_pol.loc[estado, ano]
gastos_por_hab = gastos/pol
valor_tabela = gatos_por_estados_por_hab.loc[estado, ano+mes]
if gastos_por_hab == valor_tabela:
print(colored("OK!", "green"))
else:
print(colored("Valores diferentes", "red"))
print(gastos, pol, gastos_por_hab, valor_tabela)
ax = gatos_por_estados_por_hab.plot(y = "2019/Ago",
kind = "bar",
figsize=(9,6),
color='green')
plt.title( "Valor de gastos de saude por Estado", fontsize=20)
ax.yaxis.set_major_formatter(ticker.StrMethodFormatter("{x:,.2f}"))
ax.set_ylabel("Valores por Habitantes", fontsize=14)
plt.grid(axis="y", ls=':')
plt.show()
estados = ['31 Minas Gerais' , '32 Espírito Santo',
'33 Rio de Janeiro', '35 São Paulo']
ax = gatos_por_estados_por_hab.loc[estados].T.plot(kind = "line",
figsize=(9,6))
plt.show()
ax = gatos_por_estados_por_hab.loc[estados].T.plot(kind = "line",
figsize=(9,6))
gatos_por_estados_por_hab.loc[['31 Minas Gerais', '32 Espírito Santo',
'33 Rio de Janeiro', '35 São Paulo']]
```
## Challenge 04: Analyze the data, raise hypotheses, and share them with us on Discord.
```
gatos_por_estados_por_hab_com_regiao = gatos_por_estados_por_hab.copy()
estados_index = gatos_por_estados_por_hab.index;
nome_regioes = {'1': 'Norte', '2': 'Nordeste', '3': 'Sudeste', '4': 'Sul', '5': 'Centro-Oeste'}
gatos_por_estados_por_hab_com_regiao["Regiao"] = list(map(lambda estado_index: nome_regioes[estado_index[0]] , estados_index))
gatos_por_estados_por_hab_com_regiao_medios = gatos_por_estados_por_hab_com_regiao.groupby(["Regiao"]).mean()
gatos_por_estados_por_hab_com_regiao_medios
fig, axes = plt.subplots(nrows=2, ncols=3, figsize=(25,10))
fig.suptitle('Serie historica dos valor gastos por habitantes por Região em saude', fontsize=18)
# Região Sul
gatos_por_estados_por_hab_com_regiao.query("Regiao == 'Sul'").drop("Regiao", axis = True).T.plot(kind = "line",
ax=axes[0, 0])
axes[0, 0].set_title('Sul', fontsize=14)
axes[0, 0].legend(ncol = 1)
# Região Sudeste
gatos_por_estados_por_hab_com_regiao.query("Regiao == 'Sudeste'").drop("Regiao", axis = True).T.plot(kind = "line",
ax=axes[0, 1])
axes[0, 1].set_title('Sudeste', fontsize=14)
axes[0, 1].legend(ncol = 2)
# Região Centro-Oeste
gatos_por_estados_por_hab_com_regiao.query("Regiao == 'Centro-Oeste'").drop("Regiao", axis = True).T.plot(kind = "line",
ax=axes[1, 0])
axes[1, 0].set_title('Centro-Oeste', fontsize=14)
axes[1, 0].legend(ncol = 2)
# Região Nordeste
gatos_por_estados_por_hab_com_regiao.query("Regiao == 'Nordeste'").drop("Regiao", axis = True).T.plot(kind = "line",
ax=axes[1, 1])
axes[1, 1].set_title('Nordeste', fontsize=14)
axes[1, 1].legend(ncol = 3)
# Região Norte (the original cell plotted Centro-Oeste twice and left Norte out)
gatos_por_estados_por_hab_com_regiao.query("Regiao == 'Norte'").drop("Regiao", axis = True).T.plot(kind = "line",
                                                                                                   ax=axes[0, 2])
axes[0, 2].set_title('Norte', fontsize=14)
axes[0, 2].legend(ncol = 2)
# Região Medias
gatos_por_estados_por_hab_com_regiao_medios.T.plot(kind = "line",
ax=axes[1, 2])
axes[1, 2].set_title('Medias', fontsize=14)
axes[1, 2].legend(title = "Região", ncol = 1)
```
The per-region charts show spending per inhabitant growing in every region. Interestingly, it is the South region that shows the highest spending.
For a deeper analysis, a monetary correction (inflation adjustment) would be needed to determine whether the growth in spending per inhabitant is real.
<a href="https://colab.research.google.com/github/emadphysics/Amsterdam_Airbnb_predictive_models/blob/main/airbnb_pytorch.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import pandas as pd
import numpy as np
from datetime import date
import matplotlib.pyplot as plt
import seaborn as sns
import os
import re
from sklearn.feature_selection import *
from sklearn.linear_model import *
from sklearn.neighbors import *
from sklearn.svm import *
from sklearn.neighbors import *
from sklearn.tree import *
from sklearn.preprocessing import *
from xgboost import *
from sklearn.metrics import *
from geopy.distance import great_circle
# Geographical analysis
import json # library to handle JSON files
from pandas.io.json import json_normalize # tranform JSON file into a pandas dataframe
from statsmodels.tsa.seasonal import seasonal_decompose
import requests
import descartes
import math
print('Libraries imported.')
from google.colab import drive
drive.mount("/content/gdrive")
df=pd.read_csv('/content/gdrive/My Drive/listingss.csv')
print(f'the number of observations is {len(df)}')
categoricals = [var for var in df.columns if df[var].dtype=='object']
numerics = [var for var in df.columns if (df[var].dtype=='int64')|(df[var].dtype=='float64')]
dates=[var for var in df.columns if df[var].dtype=='datetime64[ns]']
#pandas data types: numeric(float,integer),object(string),category,Boolean,date
one_hot_col_names = ['host_id', 'host_location', 'host_response_time','host_is_superhost','host_neighbourhood','host_has_profile_pic','host_identity_verified',
'neighbourhood','neighbourhood_cleansed','neighbourhood_group_cleansed', 'zipcode', 'is_location_exact', 'property_type', 'room_type', 'bed_type', 'has_availability', 'requires_license', 'instant_bookable',
'is_business_travel_ready', 'cancellation_policy', 'cancellation_policy','require_guest_profile_picture', 'require_guest_phone_verification', 'calendar_updated']
text_cols = ['name', 'summary', 'space', 'description', 'neighborhood_overview', 'notes', 'transit', 'access', 'interaction', 'house_rules', 'host_name', 'host_about']
features = ['host_listings_count', 'host_total_listings_count', 'latitude', 'longitude',
'accommodates', 'bathrooms', 'bedrooms', 'beds', 'square_feet',
'guests_included', 'minimum_nights', 'maximum_nights', 'availability_30', 'availability_60',
'availability_90', 'availability_365', 'number_of_reviews', 'review_scores_rating', 'review_scores_accuracy',
'review_scores_cleanliness', 'review_scores_checkin', 'review_scores_communication', 'review_scores_location',
'review_scores_value', 'calculated_host_listings_count', 'reviews_per_month']
price_features = ['security_deposit', 'cleaning_fee', 'extra_people','price']
date_cols = ['host_since', 'first_review', 'last_review']
def host_verification(cols):
possible_words = {}
i = 0
for col in cols:
words = col.split()
for w in words:
wr = re.sub(r'\W+', '', w)
if wr != '' and wr not in possible_words:
possible_words[wr] = i
i += 1
l = len(possible_words)
new_cols = np.zeros((cols.shape[0], l))
for i, col in enumerate(cols):
words = col.split()
arr = np.zeros(l)
for w in words:
wr = re.sub(r'\W+', '', w)
if wr != '':
arr[possible_words[wr]] = 1
new_cols[i] = arr
return new_cols
def amenities(cols):
dic = {}
i = 0
for col in cols:
arr = col.split(',')
for a in arr:
ar = re.sub(r'\W+', '', a)
if len(ar) > 0:
if ar not in dic:
dic[ar] = i
i += 1
l = len(dic)
new_cols = np.zeros((cols.shape[0], l))
for i, col in enumerate(cols):
words = col.split(',')
arr = np.zeros(l)
for w in words:
wr = re.sub(r'\W+', '', w)
if wr != '':
arr[dic[wr]] = 1
new_cols[i] = arr
return new_cols
def one_hot(arr):
label_encoder = LabelEncoder()
integer_encoded = label_encoder.fit_transform(arr)
onehot_encoder = OneHotEncoder(sparse=False)
integer_encoded = integer_encoded.reshape(len(integer_encoded), 1)
onehot_encoded = onehot_encoder.fit_transform(integer_encoded)
return onehot_encoded
one_hot_col_names = ['host_response_time','host_is_superhost','host_has_profile_pic','host_identity_verified',
'neighbourhood_cleansed','neighbourhood_group_cleansed', 'zipcode', 'is_location_exact', 'property_type', 'room_type', 'bed_type', 'has_availability', 'requires_license', 'instant_bookable',
'is_business_travel_ready', 'cancellation_policy','require_guest_profile_picture', 'require_guest_phone_verification','calendar_updated']
one_hot_dict = {}
for i in one_hot_col_names:
one_hot_dict[i] = one_hot(np.array(df[i].fillna(""), dtype=str))
one_hot_dict['host_verifications'] = host_verification(df['host_verifications'])
one_hot_dict['amenities'] = amenities(df['amenities'])
ont_hot_list = []
for i in one_hot_dict.keys():
if 1<one_hot_dict[i].shape[1]<400:
ont_hot_list.append(one_hot_dict[i])
# print(i,one_hot_dict[i].shape[1])
onehot_variables = np.concatenate(ont_hot_list, axis=1)
hot_cat_variables=pd.DataFrame(onehot_variables)
hot_cat_variables.isnull().sum().sum()
hot_cat_variables.shape
def check_nan(cols):
for col in cols:
if np.isnan(col):
return True
return False
def clean_host_response_rate(host_response_rate, num_data):
total = 0
count = 0
for col in host_response_rate:
if not isinstance(col, float):
total += float(col.strip('%'))
count += 1
arr = np.zeros(num_data)
mean = total / count
for i, col in enumerate(host_response_rate):
if not isinstance(col, float):
arr[i] += float(col.strip('%'))
else:
assert(math.isnan(col))
arr[i] = mean
return arr
def clean_price(price, num_data):
arr = np.zeros(num_data)
for i, col in enumerate(price):
if not isinstance(col, float):
arr[i] += float(col.strip('$').replace(',', ''))
else:
assert(math.isnan(col))
arr[i] = 0
return arr
def to_np_array_fill_NA_mean(cols):
return np.array(cols.fillna(np.nanmean(np.array(cols))))
num_data = df.shape[0]
arr = np.zeros((len(features) + len(price_features) + 1, num_data))
host_response_rate = clean_host_response_rate(df['host_response_rate'], num_data)
arr[0] = host_response_rate
i = 0
for feature in features:
i += 1
if check_nan(df[feature]):
arr[i] = to_np_array_fill_NA_mean(df[feature])
else:
arr[i] = np.array(df[feature])
for feature in price_features:
i += 1
arr[i] = clean_price(df[feature], num_data)
target = arr[-1]
numeric_variables = arr[:-1].T
numeric_variables=pd.DataFrame(numeric_variables)
numeric_variables.isnull().sum().sum()
inde_variables=np.concatenate((numeric_variables,hot_cat_variables),axis=1)
inde_variables=pd.DataFrame(inde_variables)
inde_variables.isnull().sum().sum()
mean = np.mean(inde_variables, axis = 0)
std = np.std(inde_variables, axis = 0)
inde_variables=(inde_variables-mean)/std
inde_variables.shape
import torch
from torch import nn
import torch.optim as optim
import numpy as np
import random
import copy
import torch.utils.data as data
import os
class NN229(nn.Module):
def __init__(self, input_size=355, hidden_size1=128, hidden_size2=512, hidden_size3=64, output_size=1, drop_prob=0.05):
super(NN229, self).__init__()
self.relu = nn.ReLU()
self.dropout = nn.Dropout(p=drop_prob)
self.W1 = nn.Linear(input_size, hidden_size1)
self.W2 = nn.Linear(hidden_size1, hidden_size2)
self.W3 = nn.Linear(hidden_size2, hidden_size3)
self.W4 = nn.Linear(hidden_size3, output_size)
def forward(self, x):
hidden1 = self.dropout(self.relu(self.W1(x)))
hidden2 = self.dropout(self.relu(self.W2(hidden1)))
hidden3 = self.dropout(self.relu(self.W3(hidden2)))
out = self.W4(hidden3)
return out
class AirBnb(data.Dataset):
def __init__(self, train_path, label_path):
super(AirBnb, self).__init__()
self.x = torch.from_numpy(train_path).float()
self.y = torch.from_numpy(label_path).float()
def __getitem__(self, idx):
x = self.x[idx]
y = self.y[idx]
return x, y
def __len__(self):
return self.x.shape[0]
class CSVDataset(data.Dataset):
def __init__(self, train_path, label_path):
super(CSVDataset, self).__init__()
self.x = torch.from_numpy(train_path).float()
self.y = torch.from_numpy(label_path).float()
self.y = self.y.reshape((len(self.y), 1))
def __len__(self):
return len(self.x)
def __getitem__(self, idx):
return [self.x[idx], self.y[idx]]
def get_splits(self, n_test=0.33):
test_size = round(n_test * len(self.x))
train_size = len(self.x) - test_size
return data.random_split(self, [train_size, test_size])
def load_model(model, optimizer, checkpoint_path, model_only = False):
ckpt_dict = torch.load(checkpoint_path, map_location="cuda:0")
model.load_state_dict(ckpt_dict['state_dict'])
if not model_only:
optimizer.load_state_dict(ckpt_dict['optimizer'])
epoch = ckpt_dict['epoch']
val_loss = ckpt_dict['val_loss']
else:
epoch = None
val_loss = None
return model, optimizer, epoch, val_loss
np.log(target)
def train(model, optimizer, loss_fn, epoch = 0):
train_dataset = CSVDataset(inde_variables.to_numpy(), target)
train, test = train_dataset.get_splits()
train_loader = data.DataLoader(train,
batch_size=batch_size,
shuffle=True)
dev_loader = data.DataLoader(test,
batch_size=batch_size,
shuffle=True)
model.train()
step = 0
best_model = NN229()
best_epoch = 0
best_val_loss = None
while epoch < max_epoch:
epoch += 1
stats = []
with torch.enable_grad():
for x, y in train_loader:
step += 1
# print (x)
# print (y)
# break
x = x.cuda()
y = y.cuda()
optimizer.zero_grad()
pred = model(x).reshape(-1)
loss = loss_fn(pred, y)
loss_val = loss.item()
loss.backward()
optimizer.step()
stats.append(loss_val)
# stats.append((epoch, step, loss_val))
# print ("Epoch: ", epoch, " Step: ", step, " Loss: ", loss_val)
print ("Train loss: ", sum(stats) / len(stats))
val_loss = evaluate(dev_loader, model)
if best_val_loss is None or best_val_loss > val_loss:
best_val_loss = val_loss
model.cpu()
best_model = copy.deepcopy(model)
model.cuda()
best_epoch = epoch
# print (evaluate(dev_loader, model))
return best_model, best_epoch, best_val_loss
def evaluate(dev_loader, model):
model.eval()
stats = []
with torch.no_grad():
for x, y in dev_loader:
x = x.cuda()
y = y.cuda()
pred = model(x).reshape(-1)
loss_val = loss_fn(pred, y).item()
stats.append(loss_val)
# print ("Loss: ", loss_val)
print ("Val loss: ", sum(stats) / len(stats))
return sum(stats) / len(stats)
lr = 1e-4
weight_decay = 1e-5
beta = (0.9, 0.999)
max_epoch = 100
batch_size = 64
model = NN229().cuda()
optimizer = optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay, betas=beta)
loss_fn = nn.MSELoss()
best_model, best_epoch, best_val_loss = train(model, optimizer, loss_fn, epoch = 0)
train_dataset = CSVDataset(inde_variables.to_numpy(), target)
train, test = train_dataset.get_splits()
dev_loader = data.DataLoader(test,
shuffle=True)
y_truth_list = []
for _, y_truth in dev_loader:
y_truth_list.append(y_truth[0][0].cpu().numpy())
y_pred_list = [a.squeeze().tolist() for a in y_truth_list]
y_t=np.array(y_truth_list)
y_t
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
y_pred_list = []
with torch.no_grad():
model.eval()
for X_batch, _ in dev_loader:
X_batch = X_batch.to(device)
y_test_pred = model(X_batch)
y_pred_list.append(y_test_pred.cpu().numpy())
y_pred_list = [a.squeeze().tolist() for a in y_pred_list]
y_p=np.array(y_pred_list)
y_p
import sklearn.metrics
sklearn.metrics.r2_score(y_t, y_p)
```
```
import panel as pn
pn.extension()
```
One of the main design goals for Panel was that it should make it possible to seamlessly transition back and forth between interactively prototyping a dashboard in the notebook or on the commandline to deploying it as a standalone server app. This section shows how to display panels interactively, embed static output, save a snapshot, and deploy as a separate web-server app.
## Configuring output
As you may have noticed, almost all the Panel documentation is written using notebooks. Panel objects display themselves automatically in a notebook and take advantage of Jupyter Comms to support communication between the rendered app and the Jupyter kernel that backs it on the Python end. Displaying a Panel object in the notebook is as simple as putting it at the end of a cell. Note, however, that ``panel.extension`` first has to be loaded to initialize the required JavaScript in the notebook context. Also, if you are working in JupyterLab, the pyviz labextension has to be installed with:
jupyter labextension install @pyviz/jupyterlab_pyviz
### Optional dependencies
Also remember that in order to use certain components such as Vega, LaTeX, and Plotly plots in a notebook, the models must be loaded using the extension. If you forget to load the extension, you should get a warning reminding you to do it. To load certain JS components, simply list them as part of the call to ``pn.extension``:
pn.extension('vega', 'katex')
Here we've ensured that the Vega and LaTeX JS dependencies will be loaded.
### Initializing JS and CSS
Additionally, any external ``css_files``, ``js_files`` and ``raw_css`` needed should be declared in the extension. The ``js_files`` should be declared as a dictionary mapping from the exported JS module name to the URL containing the JS components, while the ``css_files`` can be defined as a list:
pn.extension(js_files={'deck': 'https://unpkg.com/deck.gl@~5.2.0/deckgl.min.js'},
             css_files=['https://api.tiles.mapbox.com/mapbox-gl-js/v0.44.1/mapbox-gl.css'])
The ``raw_css`` argument allows defining a list of strings containing CSS to publish as part of the notebook and app.
Providing keyword arguments via the ``extension`` is the same as setting them on ``pn.config``, which is the preferred approach outside the notebook. ``js_files`` and ``css_files`` may be set to your chosen values as follows:
pn.config.js_files = {'deck': 'https://unpkg.com/deck.gl@~5.2.0/deckgl.min.js'}
pn.config.css_files = ['https://api.tiles.mapbox.com/mapbox-gl-js/v0.44.1/mapbox-gl.css']
## Display in the notebook
#### The repr
Once the extension is loaded, Panel objects will display themselves if placed at the end of cell in the notebook:
```
pane = pn.panel('<marquee>Here is some custom HTML</marquee>')
pane
```
To instead see a textual representation of the component, you can use the ``pprint`` method on any Panel object:
```
pane.pprint()
```
#### The ``display`` function
To avoid having to put a Panel on the last line of a notebook cell, e.g. to display it from inside a function call, you can use the IPython built-in ``display`` function:
```
def display_marquee(text):
    display(pn.panel('<marquee>{text}</marquee>'.format(text=text)))

display_marquee('This Panel was displayed from within a function')
```
#### Inline apps
Lastly it is also possible to display a Panel object as a Bokeh server app inside the notebook. To do so call the ``.app`` method on the Panel object and provide the URL of your notebook server:
```
pane.app('localhost:8888')
```
The app will now run on a Bokeh server instance separate from the Jupyter notebook kernel, allowing you to quickly test that all the functionality of your app works both in a notebook and in a server context.
## Display in the Python REPL
Working from the command line will not automatically display rich representations inline as in a notebook, but you can still interact with your Panel components if you start a Bokeh server instance and open a separate browser window using the ``show`` method. The method has the following arguments:
port: int (optional)
Allows specifying a specific port (default=0 chooses an arbitrary open port)
websocket_origin: str or list(str) (optional)
A list of hosts that can connect to the websocket.
This is typically required when embedding a server app in
an external-facing web site.
If None, "localhost" is used.
threaded: boolean (optional, default=False)
Whether to launch the Server on a separate thread, allowing
interactive use.
To work with an app completely interactively you can set ``threaded=True``, which will launch the server on a separate thread and let you interactively play with the app.
<img src='https://assets.holoviews.org/panel/gifs/commandline_show.gif'>
The ``show`` call will return either a Bokeh server instance (if ``threaded=False``) or a ``StoppableThread`` instance (if ``threaded=True``) which both provide a ``stop`` method to stop the server instance.
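The threaded behavior can be illustrated in plain Python. The class below is a minimal sketch in the spirit of the ``StoppableThread`` returned by ``show(threaded=True)`` — it is not Bokeh's actual implementation:

```python
import threading
import time

class StoppableThread(threading.Thread):
    """Minimal sketch: runs a callback in a loop until stop() is called."""

    def __init__(self, target, interval=0.01):
        super().__init__()
        self._target = target
        self._interval = interval
        self._stop_event = threading.Event()

    def run(self):
        while not self._stop_event.is_set():
            self._target()
            time.sleep(self._interval)

    def stop(self):
        # Signal the loop to exit and wait for the thread to finish
        self._stop_event.set()
        self.join()

ticks = []
t = StoppableThread(lambda: ticks.append(1))
t.start()          # work continues interactively on the main thread here
time.sleep(0.05)
t.stop()           # returns control once the thread has exited
```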
## Launching a server on the commandline
Once the app is ready for deployment it can be served using the Bokeh server. For a detailed breakdown of the design and functionality of Bokeh server, see the [Bokeh documentation](https://bokeh.pydata.org/en/latest/docs/user_guide/server.html). The most important thing to know is that Panel (and Bokeh) provide a CLI command to serve a Python script, app directory, or Jupyter notebook containing a Bokeh or Panel app. To launch a server using the CLI, simply run:
panel serve app.ipynb
The ``panel serve`` command has the following options:
positional arguments:
DIRECTORY-OR-SCRIPT The app directories or scripts or notebooks to serve
(serve empty document if not specified)
optional arguments:
-h, --help show this help message and exit
--port PORT Port to listen on
--address ADDRESS Address to listen on
--log-level LOG-LEVEL
One of: trace, debug, info, warning, error or critical
--log-format LOG-FORMAT
A standard Python logging format string (default:
'%(asctime)s %(message)s')
--log-file LOG-FILE A filename to write logs to, or None to write to the
standard stream (default: None)
--args ... Any command line arguments remaining are passed on to
the application handler
--show Open server app(s) in a browser
--allow-websocket-origin HOST[:PORT]
Public hostnames which may connect to the Bokeh
websocket
--prefix PREFIX URL prefix for Bokeh server URLs
--keep-alive MILLISECONDS
How often to send a keep-alive ping to clients, 0 to
disable.
--check-unused-sessions MILLISECONDS
How often to check for unused sessions
--unused-session-lifetime MILLISECONDS
How long unused sessions last
--stats-log-frequency MILLISECONDS
How often to log stats
--mem-log-frequency MILLISECONDS
How often to log memory usage information
--use-xheaders Prefer X-headers for IP/protocol information
--session-ids MODE One of: unsigned, signed, or external-signed
--index INDEX Path to a template to use for the site index
--disable-index Do not use the default index on the root path
--disable-index-redirect
Do not redirect to running app from root path
--num-procs N Number of worker processes for an app. Using 0 will
autodetect number of cores (defaults to 1)
--websocket-max-message-size BYTES
Set the Tornado websocket_max_message_size value
(defaults to 20MB) NOTE: This setting has effect ONLY
for Tornado>=4.5
--dev [FILES-TO-WATCH [FILES-TO-WATCH ...]]
Enable live reloading during app development. By
default it watches all *.py *.html *.css *.yaml
files in the app directory tree. Additional files can
be passed as arguments. NOTE: This setting only works
with a single app. It also restricts the number of
processes to 1.
To turn a notebook into a deployable app simply append ``.servable()`` to one or more Panel objects, which will add the app to Bokeh's ``curdoc``, ensuring it can be discovered by Bokeh server on deployment. In this way it is trivial to build dashboards that can be used interactively in a notebook and then seamlessly deployed on Bokeh server.
### Accessing session state
Whenever a Panel app is being served the ``panel.state`` object exposes some of the internal Bokeh server components to a user.
#### Document
The current Bokeh ``Document`` can be accessed using ``panel.state.curdoc``.
#### Request arguments
When a browser makes a request to a Bokeh server a session is created for the Panel application. The request arguments are made available to be accessed on ``pn.state.session_args``. For example if your application is hosted at ``localhost:8001/app``, appending ``?phase=0.5`` to the URL will allow you to access the phase variable using the following code:
```python
try:
    phase = float(pn.state.session_args.get('phase')[0])
except (TypeError, ValueError):
    phase = 1
```
This mechanism may be used to modify the behavior of an app depending on parameters provided in the URL.
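The underlying query-string handling can be sketched with the standard library alone; ``parse_qs`` returns a mapping from argument names to lists of values, much like ``pn.state.session_args`` (the URL below is hypothetical):

```python
from urllib.parse import parse_qs

# Hypothetical request URL: localhost:8001/app?phase=0.5&color=red
query = 'phase=0.5&color=red'
args = parse_qs(query)          # {'phase': ['0.5'], 'color': ['red']}

try:
    phase = float(args.get('phase')[0])
except (TypeError, ValueError):
    phase = 1.0

print(phase)  # 0.5
```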
### Accessing the Bokeh model
Since Panel is built on top of Bokeh, all Panel objects can easily be converted to a Bokeh model. The ``get_root`` method returns a model representing the contents of a Panel:
```
pn.Column('# Some markdown').get_root()
```
By default this model will be associated with Bokeh's ``curdoc()``, so if you want to associate the model with some other ``Document`` ensure you supply it explicitly as the first argument.
## Embedding
Panel generally relies on either the Jupyter kernel or a Bokeh server running in the background to provide interactive behavior. However, for simple apps with a limited amount of state it is also possible to `embed` all the widget state, allowing the app to be used entirely from within JavaScript. To demonstrate this we will create a simple app which takes a slider value, multiplies it by 5, and then displays the result.
```
slider = pn.widgets.IntSlider(start=0, end=10)

@pn.depends(slider.param.value)
def callback(value):
    return '%d * 5 = %d' % (value, value * 5)

row = pn.Row(slider, callback)
```
If we displayed this the normal way it would call back into Python every time the value changed. However, the `.embed()` method will record the state of the app for the different widget configurations.
```
row.embed()
```
If you try the widget above you will note that it only has 3 different states: 0, 5 and 10. This is because by default embed will limit the number of options for non-discrete or semi-discrete widgets to at most three values. This can be controlled using the `max_opts` argument to the embed method. The full set of options for the embed method includes:
- **max_states**: The maximum number of states to embed
- **max_opts**: The maximum number of states for a single widget
- **json** (default=True): Whether to export the data to json files
- **save_path** (default='./'): The path to save json files to
- **load_path** (default=None): The path or URL the json files will be loaded from (same as ``save_path`` if not specified)
As you might imagine, with multiple widgets there can quickly be a combinatorial explosion of states, so by default the output is limited to about 1000 states. For larger apps the states can also be exported to JSON files; e.g. if you want to serve the app on a website, specify the ``save_path`` to declare where the files will be stored and the ``load_path`` to declare where the JS code running on the website will look for them.
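The combinatorics are easy to see: the number of embedded states is the product of the per-widget option counts. A sketch, assuming a hypothetical app with four sliders each sampled at `max_opts=3` values:

```python
from itertools import product

# Hypothetical app: four continuous sliders, each sampled at max_opts=3 values
widget_opts = [3, 3, 3, 3]
states = list(product(*(range(n) for n in widget_opts)))
print(len(states))  # 81 states; six such widgets would already need 729
```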
## Saving
In case you don't need an actual server or simply want to export a static snapshot of a panel app, you can use the ``save`` method, which allows exporting the app to a standalone HTML or PNG file.
By default, the HTML file generated will depend on loading JavaScript code for BokehJS from the online ``CDN`` repository, to reduce the file size. If you need to work in an airgapped or no-network environment, you can declare that ``INLINE`` resources should be used instead of ``CDN``:
```python
from bokeh.resources import INLINE
pane.save('test.html', resources=INLINE)
```
Additionally, the save method allows enabling the `embed` option, which, as explained above, will embed the app's state in the file or save the state to JSON files you can ship alongside the exported HTML.
Finally, if a 'png' file extension is specified, the exported plot will be rendered as a PNG, which currently requires Selenium and PhantomJS to be installed:
```python
pane.save('test.png')
```
```
import os
import numpy as np
import scipy.stats as sps
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import random
import sys, os
sys.path += [os.path.abspath(os.pardir + '/code')]
print(sys.path)
from experiment import init_random_state, BanditLoopExperiment, get_ts_model
sns.set(font_scale=1.2, palette='tab20')
def draw_posteriori(grid, distr_class, post_params, obj, steps, xlim=None):
    '''Plots a series of posterior density curves.

    :param grid: grid over which to plot
    :param distr_class: distribution class from scipy.stats
    :param post_params: parameters of the posterior distributions,
        shape=(sample size, number of parameters)
    '''
    size = post_params.shape[0] - 1
    plt.figure(figsize=(12, 7))
    for n, t in enumerate(steps):
        plt.plot(grid,
                 distr_class(post_params[n]).pdf(grid)
                 if np.isscalar(post_params[n])
                 else distr_class(*post_params[n]).pdf(grid),
                 label='t={}: {}'.format(t, np.round(post_params[n], 3)),
                 lw=2.5,
                 color=(1 - n / size, n / size, 0))
    plt.title(f'Posterior distribution for object {obj} as a function of step')
    plt.grid(ls=':')
    plt.legend(fontsize=12)
    plt.xlim(xlim)
    plt.show()
seed = 42
ps = np.linspace(0.5, 1, 5)
Q = 1
w = 2
b = 0.1
T = 2000
M = 10
l = 4
ps
interests, TS_paramss, responses = [], [], []
for p in ps:
    init_random_state(seed)
    bandit = lambda: get_ts_model(M=M, l=l)
    exp = BanditLoopExperiment(bandit, "TS bandit")
    exp.prepare(w=w, Q=Q, p=p, b=b)
    exp.run_experiment(T=T)
    results = exp.get_as_np()
    interests.append(results.interest)
    TS_paramss.append(results.TS_params)
    responses.append(results.response)
sum_responses = []
for i, p in enumerate(ps):
    sum_responses.append(np.cumsum(responses[i].sum(axis=1)))

plt.figure(figsize=(12, 8))
for i, p in enumerate(ps):
    plt.plot(np.arange(1, T + 1), sum_responses[i], label=f'p = {round(p, 3)}')
plt.title('Cumulative response over time')
plt.ylabel('Sum of responses')
plt.xlabel('Step')
plt.legend()
# plt.savefig('rewards.pdf')
plt.figure(figsize=(18, 36))
for m in range(M):
    plt.subplot(M // 2 + 1, 2, m + 1)
    for i, p in enumerate(ps):
        plt.plot(interests[i][:, m], label=f'p = {p}')
    plt.title(f'Interest in object {m}')  # title should reference object m, not the loop variable p
    plt.ylabel('Interest')
    plt.xlabel('Step')
    plt.legend()
plt.tight_layout()
plt.figure(figsize=(12, 8))
for i, p in enumerate(ps):
    plt.plot(np.linalg.norm(interests[i] - interests[i][0], axis=1)**2, label=f'p = {round(p, 3)}')
plt.yscale('log')
plt.ylabel(r'$\|\mu_t - \mu_0 \|^2$')
plt.title('Norm of the interest difference as a function of step')
plt.legend()
plt.xlabel('Step')
# plt.savefig('norm_interest.pdf')
```
# Emulators: First example
This example illustrates Bayesian inference on a time series, using [Adaptive Covariance MCMC](http://pints.readthedocs.io/en/latest/mcmc_samplers/adaptive_covariance_mcmc.html) with emulator neural networks.
It follows on from [Sampling: First example](../sampling/first-example.ipynb).
Like in the sampling example, I start by importing pints:
```
import pints
```
Next, I create a model class using the "Logistic" toy model included in pints:
```
import pints.toy as toy
class RescaledModel(pints.ForwardModel):
    def __init__(self):
        self.base_model = toy.LogisticModel()

    def simulate(self, parameters, times):
        # Run a simulation with the given parameters for the
        # given times and return the simulated values
        r, k = parameters
        r = r / 50
        k = k * 500
        return self.base_model.simulate([r, k], times)

    def simulateS1(self, parameters, times):
        # Run a simulation with the given parameters for the
        # given times and return the simulated values
        r, k = parameters
        r = r / 50
        k = k * 500
        return self.base_model.simulateS1([r, k], times)

    def n_parameters(self):
        # Return the dimension of the parameter vector
        return 2
# Rescale parameters
#found_parameters = list(found_parameters)
#found_parameters[0] = found_parameters[0] / 50
#found_parameters[1] = found_parameters[1] * 500
# Show score of true solution
#print('Score at true solution: ')
#print(score(true_parameters))
# Compare parameters with original
#print('Found solution: True parameters:' )
#for k, x in enumerate(found_parameters):
#print(pints.strfloat(x) + ' ' + pints.strfloat(true_parameters[k]))
model = toy.LogisticModel()
```
In order to generate some test data, I choose an arbitrary set of "true" parameters:
```
true_parameters = [0.015, 500]
start_parameters = [0.75, 1.0]
```
And a number of time points at which to sample the time series:
```
import numpy as np
times = np.linspace(0, 1000, 400)
```
Using these parameters and time points, I generate an example dataset:
```
org_values = model.simulate(true_parameters, times)
range_values = max(org_values) - min(org_values)
```
And make it more realistic by adding Gaussian noise:
```
noise = 0.05 * range_values
print("The noise is:", noise)
values = org_values + np.random.normal(0, noise, org_values.shape)
```
Using matplotlib, I look at the noisy time series I just simulated:
```
import matplotlib.pyplot as plt
plt.figure(figsize=(12,4.5))
plt.xlabel('Time')
plt.ylabel('Values')
plt.plot(times, values, label='Noisy data')
plt.plot(times, org_values, lw=2, label='Noise-free data')
plt.legend()
plt.show()
```
Now, I have enough data (a model, a list of times, and a list of values) to formulate a PINTS problem:
```
model = RescaledModel()
problem = pints.SingleOutputProblem(model, times, values)
```
I now have some toy data, and a model that can be used for forward simulations. To make it into a probabilistic problem, a _noise model_ needs to be added. This can be done using the `GaussianLogLikelihood` function, which assumes independently distributed Gaussian noise over the data, and can calculate log-likelihoods:
```
#log_likelihood = pints.GaussianLogLikelihood(problem)
log_likelihood = pints.GaussianKnownSigmaLogLikelihood(problem, noise)
```
This `log_likelihood` represents the _conditional probability_ $p(y|\theta)$: given a set of parameters $\theta$ and a series of values $y$, it can calculate the probability of finding those values if the real parameters are $\theta$.
This can be used in a Bayesian inference scheme to find the quantity of interest:
$p(\theta|y) = \frac{p(\theta)p(y|\theta)}{p(y)} \propto p(\theta)p(y|\theta)$
To solve this, a _prior_ is defined, indicating an initial guess about what the parameters should be.
Similarly as using a _log-likelihood_ (the natural logarithm of a likelihood), this is defined by using a _log-prior_. Hence, the above equation simplifies to:
$\log p(\theta|y) \propto \log p(\theta) + \log p(y|\theta)$
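As a quick numeric sanity check of this identity (using hypothetical toy distributions, unrelated to the logistic model above), the unnormalised log-posterior is just the sum of the two log terms:

```python
import numpy as np
from scipy.stats import norm, uniform

theta = 5.0                                        # hypothetical parameter value
log_prior = uniform(loc=0, scale=20).logpdf(theta) # flat prior on [0, 20]
log_lik = norm(loc=theta, scale=0.2).logpdf(4.9)   # one toy observation
log_post_unnorm = log_prior + log_lik

# Equivalent on the probability scale: posterior ∝ prior * likelihood
assert np.isclose(np.exp(log_post_unnorm),
                  np.exp(log_prior) * np.exp(log_lik))
```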
In this example, it is assumed that we don't know too much about the prior except lower and upper bounds for each variable: We assume the first model parameter is somewhere on the interval $[0.01, 0.02]$, the second model parameter on $[400, 600]$, and the standard deviation of the noise is somewhere on $[1, 100]$.
```
# Create bounds for our parameters and get prior
#bounds = pints.RectangularBoundaries([0.01, 400], [0.02, 600])
bounds = pints.RectangularBoundaries([0.7, 0.95], [0.8, 1.05])
log_prior = pints.UniformLogPrior(bounds)
```
With this prior, the numerator of Bayes' rule can be defined -- the unnormalised log posterior, $\log \left[ p(y|\theta) p(\theta) \right]$, which is the natural logarithm of the likelihood times the prior:
```
# Create a posterior log-likelihood (log(likelihood * prior))
log_posterior = pints.LogPosterior(log_likelihood, log_prior)
import numpy as np
import math
lower=[0.7, 0.95]
upper=[0.8, 1.05]
evaluations=1000
n_param = 2
f = log_likelihood
g = log_prior
x = start_parameters
# Create points to plot
xs = np.tile(x, (n_param * evaluations, 1))
for j in range(n_param):
    i1 = j * evaluations
    i2 = i1 + evaluations
    xs[i1:i2, j] = np.linspace(lower[j], upper[j], evaluations)
# Evaluate points
fs = pints.evaluate(f, xs, parallel=False)
#fs = [math.exp(f)*100 for f in fs]
gs = pints.evaluate(g, xs, parallel=False)
# Create figure
fig, axes = plt.subplots(n_param, 1, figsize=(6, 2 * n_param))
for j, p in enumerate(x):
    i1 = j * evaluations
    i2 = i1 + evaluations
    axes[j].plot(xs[i1:i2, j], fs[i1:i2], c='green', label='Function')
    axes[j].axvline(p, c='blue', label='Value')
    axes[j].set_xlabel('Parameter ' + str(1 + j))
    axes[j].legend()

for j, p in enumerate(x):
    i1 = j * evaluations
    i2 = i1 + evaluations
    axes[j].plot(xs[i1:i2, j], gs[i1:i2], c='orange', label='Function')
# Customise the figure size
fig.set_size_inches(14, 9)
plt.show()
fig, ax = pints.plot.function(log_likelihood, start_parameters, lower=[0.5, 0.8], upper=[1.0, 1.2])
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm
import seaborn as sns
sns.set(context='notebook', style='white', palette='deep', font='Times New Roman',
font_scale=2, color_codes=True, rc={"grid.linewidth": 1})
# Plot between 0 and 20 with .001 steps.
x_axis = np.arange(0, 20, 0.001)
# Mean = 5, SD = 0.2
fig, ax = plt.subplots(figsize=(6,6))
plt.title("Likelihood")
plt.plot(x_axis, norm.pdf(x_axis,5,.2), lw=3)
plt.show()
fig.savefig("figures/integral/likelihood.png", bbox_inches='tight', dpi=200)
from scipy.stats import uniform
# Plot between 0 and 20 with .001 steps.
x_axis = np.arange(0, 20, 0.001)
fig, ax = plt.subplots(figsize=(6,6))
plt.title("Prior")
plt.plot(x_axis, uniform(scale=20).pdf(x_axis), "C1", lw=3)
plt.show()
fig.savefig("figures/integral/prior.png", bbox_inches='tight', dpi=200)
likelihood = norm.pdf(x_axis,5,.2)
product = [l*0.05 for l in likelihood]
# Plot between 0 and 20 with .001 steps.
x_axis = np.arange(0, 20, 0.001)
# Mean = 5, SD = 0.2
#plt.plot(x_axis, likelihood, label="Likelihood")
fig, ax = plt.subplots(figsize=(6,6))
plt.title("Unnormalized posterior")
plt.plot(x_axis, product, "C2", lw=3, label="Likelihood*Prior")
#plt.legend()
plt.show()
fig.savefig("figures/integral/posterior.png", bbox_inches='tight', dpi=200)
```
# Bias Reduction
Climate models can have biases towards different references. Commonly, biases are reduced by postprocessing before verification of forecasting skill. `climpred` provides convenience functions to do so.
```
import climpred
import xarray as xr
import matplotlib.pyplot as plt
from climpred import HindcastEnsemble
hind = climpred.tutorial.load_dataset('CESM-DP-SST') # CESM-DPLE hindcast ensemble output.
obs = climpred.tutorial.load_dataset('ERSST') # ERSST observations.
recon = climpred.tutorial.load_dataset('FOSI-SST') # Reconstruction simulation that initialized CESM-DPLE.
hind["lead"].attrs["units"] = "years"
v='SST'
alignment='same_verif'
hindcast = HindcastEnsemble(hind)
# choose one observation
hindcast = hindcast.add_observations(recon)
#hindcast = hindcast.add_observations(obs, 'ERSST') # fits hind better than reconstruction
# always only subtract a PredictionEnsemble from another PredictionEnsemble if you handle time and init at the same time
# compute anomaly with respect to 1964-2014
hindcast = hindcast - hindcast.sel(time=slice('1964', '2014')).mean('time').sel(init=slice('1964', '2014')).mean('init')
hindcast.plot()
```
The warming of the `reconstruction` is less than the `initialized`.
## Mean bias reduction
Typically, bias depends on lead time and should therefore also be removed as a function of lead time.
```
# build bias_metric by hand
from climpred.metrics import Metric
def bias_func(a, b, **kwargs):
    return a - b

bias_metric = Metric('bias', bias_func, True, False, 1)
bias = hindcast.verify(metric=bias_metric, comparison='e2r', dim='init', alignment=alignment).squeeze()
# equals using the pre-defined (unconditional) bias metric applied to over dimension member
xr.testing.assert_allclose(bias, hindcast.verify(metric='unconditional_bias', comparison='m2r',dim='member', alignment=alignment).squeeze())
bias[v].plot()
```
- against Reconstruction: Cold bias in early years and warm bias in later years.
- against ERSST: Overall cold bias.
### Cross validation
```
from climpred.bias_reduction import _mean_bias_reduction_quick, _mean_bias_reduction_cross_validate
_mean_bias_reduction_quick??
_mean_bias_reduction_cross_validate??
```
`climpred` wraps these functions in `HindcastEnsemble.reduce_bias(how='mean', cross_validate={bool})`.
```
hindcast.reduce_bias(how='mean', cross_validate=True, alignment=alignment).plot()
plt.title('hindcast lead timeseries reduced for unconditional mean bias')
plt.show()
```
## Skill
Distance-based accuracy metrics (`mse`, `rmse`, `nrmse`, ...) are sensitive to mean bias reduction, whereas correlation metrics (`pearson_r`, `spearman_r`) are insensitive to bias correction.
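This sensitivity is easy to demonstrate on synthetic data — a hedged sketch independent of `climpred`: removing the mean bias reduces the RMSE but leaves the Pearson correlation unchanged, since correlation is invariant under adding a constant:

```python
import numpy as np

rng = np.random.default_rng(0)
truth = rng.normal(size=200)
forecast = truth + 0.5 + 0.1 * rng.normal(size=200)  # forecast with a warm bias of 0.5

def rmse(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

debiased = forecast - (forecast.mean() - truth.mean())  # subtract the mean bias

assert rmse(truth, debiased) < rmse(truth, forecast)  # distance metric improves
r_before = np.corrcoef(truth, forecast)[0, 1]
r_after = np.corrcoef(truth, debiased)[0, 1]
assert np.isclose(r_before, r_after)                  # correlation is unchanged
```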
```
metric='rmse'
hindcast.verify(metric=metric, comparison='e2o', dim='init', alignment=alignment)[v].plot(label='no bias correction')
hindcast.reduce_bias(cross_validate=False, alignment=alignment).verify(metric=metric, comparison='e2o', dim='init', alignment=alignment)[v].plot(label='bias correction without cross validation')
hindcast.reduce_bias(cross_validate=True, alignment=alignment).verify(metric=metric, comparison='e2o', dim='init', alignment=alignment)[v].plot(label='formally correct bias correction with cross validation')
plt.legend()
plt.title(f"{metric} {v} evaluated against {list(hindcast._datasets['observations'].keys())[0]}")
plt.show()
```
```
import os
import sys
import random
import math
import re
import time
import numpy as np
from keras import backend as K
import matplotlib
import matplotlib.pyplot as plt
# Root directory of the project
ROOT_DIR = os.path.abspath("../..")
# Import Mask RCNN
sys.path.append(ROOT_DIR)
from mrcnn import utils
import mrcnn.model as modellib
from mrcnn import visualize
from mrcnn.model import log
from samples.face import face
%matplotlib inline
# Directory to save trained models
MODEL_DIR = os.path.join(ROOT_DIR, "logs/weights")
```
## Notebook Preferences
```
def get_ax(rows=1, cols=1, size=8):
    """Return a Matplotlib Axes array to be used in
    all visualizations in the notebook. Provide a
    central point to control graph sizes.

    Change the default size attribute to control the size
    of rendered images.
    """
    _, ax = plt.subplots(rows, cols, figsize=(size * cols, size * rows))
    return ax
## Configurations
# Configurations are defined in face.py
config = face.FaceConfig()
config.display()
config.IMAGE_MAX_DIM = 512 # Override the resizing options from 256 to 1024.
config.STEPS_PER_EPOCH = 3200 # Override the value of steps per epoch
FACE_DIR = os.path.join(ROOT_DIR, "samples/face/face_data")
# Directory to save weights
FACE_MODEL_DIR = os.path.join(MODEL_DIR, 'face')
# Which weights to start with?
init_weight = "coco"
custom_weight_path = os.path.join(FACE_MODEL_DIR, "coco/face_epochs10(5)_steps3200_resize512")
# Set epochs
head_epochs = 2
middle_epochs = 6
all_epochs = 8
tag = "coco_epochs2h-6m-8a_crop-pad(-0.25-0.25)"
# Directory to save events
import datetime
EVENT_DIR = os.path.join(ROOT_DIR, "logs/events/face_{}_{:%Y%m%dT%H%M}".format(
tag, datetime.datetime.now()))
# Print this jupyter file's configurations
```
## Dataset
```
# Load dataset
# Get the dataset 'CelebA'
# dataset = face.FaceDataset()
# dataset.load_face(FACE_DIR, "train")
# Must call before using the dataset
# dataset.prepare()
# print("Image Count: {}".format(len(dataset.image_ids)))
# print("Class Count: {}".format(dataset.num_classes))
# for i, info in enumerate(dataset.class_info):
# print("{:3}. {:50}".format(i, info['name']))
### Training dataset
# Training dataset
dataset_train = face.FaceDataset()
dataset_train.load_face(FACE_DIR, 'train', augmentation_sequence=None)
dataset_train.prepare()
print("Image Count: {}".format(len(dataset_train.image_ids)))
print("Class Count: {}".format(dataset_train.num_classes))
for i, info in enumerate(dataset_train.class_info):
    print("{:3}. {:50}".format(i, info['name']))
### Validation Dataset
# Validation dataset
dataset_val = face.FaceDataset()
dataset_val.load_face(FACE_DIR, 'val')
dataset_val.prepare()
print("Image Count: {}".format(len(dataset_val.image_ids)))
print("Class Count: {}".format(dataset_val.num_classes))
for i, info in enumerate(dataset_val.class_info):
    print("{:3}. {:50}".format(i, info['name']))
# # Load and display random samples
# image_ids = np.random.choice(dataset_train.image_ids, 4)
# for image_id in image_ids:
# image = dataset_train.load_image(image_id)
# mask, class_ids = dataset_train.load_mask(image_id)
# visualize.display_top_masks(image, mask, class_ids, dataset_train.class_names)
#
```
## Create Model
```
# Create model in training mode
model = modellib.MaskRCNN(
mode="training",
config=config,
model_dir=MODEL_DIR)
# Directory to save logs and trained model
if init_weight == "imagenet":
    model.load_weights(model.get_imagenet_weights(), by_name=True)
elif init_weight == "coco":
    # Load weights trained on MS COCO, but skip layers that
    # are different due to the different number of classes
    # See README for instructions to download the COCO weights
    # Local path to trained weights file
    COCO_MODEL_PATH = os.path.join(ROOT_DIR, "mask_rcnn_coco.h5")
    # Download COCO trained weights from Releases if needed
    if not os.path.exists(COCO_MODEL_PATH):
        utils.download_trained_weights(COCO_MODEL_PATH)
    model.load_weights(COCO_MODEL_PATH, by_name=True,
                       exclude=["mrcnn_class_logits", "mrcnn_bbox_fc",
                                "mrcnn_bbox", "mrcnn_mask"])
elif init_weight == "last":
    # Load the last model you trained and continue training
    model.load_weights(model.find_last(), by_name=True)
elif init_weight == "custom":
    if not os.path.exists(custom_weight_path):
        raise FileNotFoundError(custom_weight_path)
    model.load_weights(custom_weight_path)
```
## Training
Train in two stages:
1. Only the heads. Here we're freezing all the backbone layers and training only the randomly initialized layers
(i.e. the ones we didn't use pre-trained weights from MS COCO for).
To train only the head layers, pass `layers='heads'` to the `train()` function.
2. Fine-tune all layers. For this simple example it's not necessary, but we're including it to show the process.
Simply pass `layers="all"` to train all layers.
### Augmentation
```
import imgaug.augmenters as iaa
aug = iaa.CropAndPad(percent=(-0.25, 0.25))
# Train the head branches
# Passing layers="heads" freezes all layers except the head
# layers. You can also pass a regular expression to select
# which layers to train by name pattern.
model.train(dataset_train, dataset_val,
event_dir=EVENT_DIR,
learning_rate=config.LEARNING_RATE,
epochs=head_epochs,
layers='heads',
augmentation=aug)
# Finetune layers from ResNet stage 4 and up
model.train(dataset_train, dataset_val,
learning_rate=config.LEARNING_RATE /100,
epochs=middle_epochs,
layers='4+',
augmentation=aug)
# Fine tune all layers
# Passing layers="all" trains all layers. You can also
# pass a regular expression to select which layers to
# train by name pattern.
model.train(dataset_train, dataset_val,
event_dir=EVENT_DIR,
learning_rate=config.LEARNING_RATE / 100,
epochs=all_epochs,
layers="all",
augmentation=aug)
# Fine tune all layers
# Passing layers="all" trains all layers. You can also
# pass a regular expression to select which layers to
# train by name pattern.
model.train(dataset_train, dataset_val,
event_dir=EVENT_DIR,
learning_rate=config.LEARNING_RATE / 10,
epochs=all_epochs,
layers="all")
```
#### Save weights
```
# Typically not needed because callbacks save after every epoch
# Uncomment to save manually
# model_path = os.path.join(MODEL_DIR, "mask_rcnn_shapes.200324.h5")
# model.keras_model.save_weights(model_path)
import pathlib
pathlib.Path(FACE_MODEL_DIR).mkdir(exist_ok=True)
model_path = os.path.join(FACE_MODEL_DIR, init_weight)
model_path = os.path.join(model_path, 'face_{}_steps{}_resize{}.h5'.format(tag, config.STEPS_PER_EPOCH, config.IMAGE_MAX_DIM))
model.keras_model.save_weights(model_path)
print("weights saved to {}".format(model_path))
```
```
%load_ext autoreload
%autoreload 2
import os
os.environ['CUDA_VISIBLE_DEVICES'] = "0"
import gin
import numpy as np
from matplotlib import pyplot as plt
from torch.autograd import Variable
from tqdm.auto import tqdm
import torch
from causal_util.helpers import lstdct2dctlst
from sparse_causal_model_learner_rl.sacred_gin_tune.sacred_wrapper import load_config_files
from sparse_causal_model_learner_rl.loss.losses import fit_loss_obs_space, lagrangian_granular
from sparse_causal_model_learner_rl.config import Config
from sparse_causal_model_learner_rl.learners.rl_learner import CausalModelLearnerRL
%matplotlib inline
gin.enter_interactive_mode()
import ray
ray.init('10.90.40.6:42515')
def reload_config():
    load_config_files(['../sparse_causal_model_learner_rl/configs/rl_const_sparsity_obs_space.gin',
                       # '../keychest/config/5x5_1f_obs.gin',
                       # '../sparse_causal_model_learner_rl/configs/env_kc_5x5_1f_obs_quad.gin',
                       '../sparse_causal_model_learner_rl/configs/env_sm5_linear.gin',
                       # '../sparse_causal_model_learner_rl/configs/with_lagrange_dual_sparsity.gin',
                       '../sparse_causal_model_learner_rl/configs/with_lagrange_dual_sparsity_per_component.gin',
                       ])
reload_config()
gin.bind_parameter('Config.collect_initial_steps', 1000)
l = CausalModelLearnerRL(Config())
l.create_trainables()
ctx = l.collect_and_get_context()
from sparse_causal_model_learner_rl.loss.losses import cache_get, maybe_item, delta_pow2_sum1, delta_01_obs, manual_switch_gradient, RunOnce
l.model.model.switch.probas.data[:, :] = 0.5
l.lagrange_multipliers.vectorized
l.lagrange_multipliers().shape
fit_loss_obs_space(**ctx,
fill_switch_grad=True, divide_by_std=True, loss_local_cache={},
return_per_component=True)
from sparse_causal_model_learner_rl.loss.losses import fit_loss_obs_space, lagrangian_granular
reload_config()
lagrangian_granular(**ctx, mode='PRIMAL')
lagrangian_granular(**ctx, mode='DUAL')
gin.clear_config()
load_config_files(['../sparse_causal_model_learner_rl/configs/rl_const_sparsity_obs_space.gin',
'../keychest/config/5x5_1f_obs.gin',
'../sparse_causal_model_learner_rl/configs/env_kc_5x5_1f_obs_quad.gin',
# '../sparse_causal_model_learner_rl/configs/env_sm5_linear.gin',
# '../sparse_causal_model_learner_rl/configs/with_lagrange_dual_sparsity.gin',
'../sparse_causal_model_learner_rl/configs/with_lagrange_dual_sparsity_per_component.gin',
])
gin.bind_parameter('Config.collect_initial_steps', 1000)
os.environ['CUDA_VISIBLE_DEVICES'] = "-1"
l = CausalModelLearnerRL(Config())
l.create_trainables()
ctx = l.collect_and_get_context()
import seaborn as sns
loss = fit_loss_obs_space(**ctx,
fill_switch_grad=True, divide_by_std=True, loss_local_cache={},
return_per_component=True)
obs_shape = l.observation_shape
l_np = loss['losses']['obs_orig'].detach().cpu().numpy().reshape(obs_shape)
l_np_1ch = np.mean(l_np, axis=2)
sns.heatmap(l_np_1ch)
obs_example = ctx['obs_x'].detach().cpu().numpy()
obs_example = obs_example[:, :, :, 2]
#obs_example = np.mean(obs_example, axis=3)
sns.heatmap(np.mean(obs_example, axis=0))
sns.heatmap(np.std(obs_example, axis=0))
```
| github_jupyter |
```
import os
import threading
import gym
import multiprocessing
import numpy as np
from queue import Queue
import matplotlib.pyplot as plt
%matplotlib inline
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers, optimizers
class ActorCriticModel(keras.Model):
def __init__(self, state_size, action_size):
super(ActorCriticModel, self).__init__()
self.state_size = state_size
self.action_size = action_size
self.dense1 = layers.Dense(100, activation='relu')
self.policy_logits = layers.Dense(action_size)
self.dense2 = layers.Dense(100, activation='relu')
self.values = layers.Dense(1)
def call(self, inputs):
# Forward pass
x = self.dense1(inputs)
logits = self.policy_logits(x)
v1 = self.dense2(inputs)
values = self.values(v1)
return logits, values
def record(episode,
episode_reward,
worker_idx,
global_ep_reward,
result_queue,
total_loss,
num_steps):
"""Helper function to store score and print statistics.
Arguments:
episode: Current episode
episode_reward: Reward accumulated over the current episode
worker_idx: Which thread (worker)
global_ep_reward: The moving average of the global reward
result_queue: Queue storing the moving average of the scores
    total_loss: The total loss accumulated over the current episode
num_steps: The number of steps the episode took to complete
"""
if global_ep_reward == 0:
global_ep_reward = episode_reward
else:
global_ep_reward = global_ep_reward * 0.99 + episode_reward * 0.01
print(
"Episode: {} | ".format(episode) +
"Moving Average Reward: {} | ".format(int(global_ep_reward)) +
"Episode Reward: {} | ".format(int(episode_reward)) +
"Loss: {} | ".format(int(total_loss / float(num_steps) * 1000) / 1000) +
"Steps: {} | ".format(num_steps) +
"Worker: {}".format(worker_idx)
)
result_queue.put(global_ep_reward)
return global_ep_reward
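# Illustrative check (hypothetical values, not part of the original example)
# of the exponential moving average used in record() above: each new episode
# reward nudges the running average by 1%.
ema = 10.0
for r in [20.0, 20.0]:
    ema = ema * 0.99 + r * 0.01
assert abs(ema - 10.199) < 1e-9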
class RandomAgent:
"""Random Agent that will play the specified game
Arguments:
env_name: Name of the environment to be played
max_eps: Maximum number of episodes to run agent for.
"""
def __init__(self, env_name, max_eps):
self.env = gym.make(env_name)
self.max_episodes = max_eps
self.global_moving_average_reward = 0
self.res_queue = Queue()
def run(self):
reward_avg = 0
for episode in range(self.max_episodes):
done = False
self.env.reset()
reward_sum = 0.0
steps = 0
while not done:
# Sample randomly from the action space and step
_, reward, done, _ = self.env.step(self.env.action_space.sample())
steps += 1
reward_sum += reward
# Record statistics
self.global_moving_average_reward = record(episode,
reward_sum,
0,
self.global_moving_average_reward,
self.res_queue, 0, steps)
reward_avg += reward_sum
final_avg = reward_avg / float(self.max_episodes)
print("Average score across {} episodes: {}".format(self.max_episodes, final_avg))
return final_avg
class MasterAgent:
def __init__(self,
algorithm='A3C',
max_eps=1000,
game_name='CartPole-v1',
save_dir='output/'):
self.algorithm = algorithm
self.max_eps = max_eps
self.game_name = game_name
self.save_dir = save_dir
self.learning_rate = 0.001
if not os.path.exists(save_dir):
os.makedirs(save_dir)
env = gym.make(self.game_name)
self.state_size = env.observation_space.shape[0]
self.action_size = env.action_space.n
self.opt = optimizers.Adam(self.learning_rate)
print(self.state_size, self.action_size)
self.global_model = ActorCriticModel(self.state_size, self.action_size) # global network
self.global_model(tf.convert_to_tensor(np.random.random((1, self.state_size)), dtype=tf.float32))
self.global_model.summary()
def train(self):
if self.algorithm == 'random':
random_agent = RandomAgent(self.game_name, self.max_eps)
random_agent.run()
return
res_queue = Queue()
workers = [Worker(self.state_size,
self.action_size,
self.global_model,
self.opt, res_queue, i,
max_eps=self.max_eps,
game_name=self.game_name,
save_dir=self.save_dir) for i in range(multiprocessing.cpu_count())]
for i, worker in enumerate(workers):
print("Starting worker {}".format(i))
worker.start()
moving_average_rewards = [] # record episode reward to plot
while True:
reward = res_queue.get()
if reward is not None:
moving_average_rewards.append(reward)
else:
break
[w.join() for w in workers]
fig = plt.figure(figsize=(12,6))
fig.suptitle(self.game_name, fontsize=20)
plt.plot(moving_average_rewards)
plt.ylabel('Moving average ep reward')
plt.xlabel('Step')
plt.show()
def play(self):
# Set up a virtual display for rendering OpenAI gym environments.
# display = pyvirtualdisplay.Display(visible=0, size=(1400, 900)).start()
model = self.global_model
model_path = os.path.join(self.save_dir, 'model_{}.h5'.format(self.game_name))
video_path = 'video/{}.mp4'.format(self.game_name)
print('Loading model from: {}'.format(model_path))
model.load_weights(model_path)
done = False
step_counter = 0
reward_sum = 0
env = gym.make(self.game_name).unwrapped
env = gym.wrappers.Monitor(env, 'video', force=True)
state = env.reset()
num_episodes = 5
try:
while not done:
# env.render()
policy, value = model(tf.convert_to_tensor(state[None, :], dtype=tf.float32))
policy = tf.nn.softmax(policy)
action = np.argmax(policy)
state, reward, done, _ = env.step(action)
reward_sum += reward
print('{}. Reward: {}, Action: {}'.format(step_counter, reward_sum, action))
step_counter += 1
except KeyboardInterrupt:
print('Received Keyboard Interrupt. Shutting down.')
finally:
env.close()
class Memory:
def __init__(self):
self.states = []
self.actions = []
self.rewards = []
def store(self, state, action, reward):
self.states.append(state)
self.actions.append(action)
self.rewards.append(reward)
def clear(self):
self.states = []
self.actions = []
self.rewards = []
class Worker(threading.Thread):
# Set up global variables across different threads
global_episode = 0
# Moving average reward
global_moving_average_reward = 0
best_score = 0
save_lock = threading.Lock()
def __init__(self,
state_size,
action_size,
global_model,
opt,
result_queue,
idx,
gamma=0.99,
max_eps=1000,
update_freq=20,
game_name='CartPole-v1',
save_dir='output/'):
super(Worker, self).__init__()
self.state_size = state_size
self.action_size = action_size
self.result_queue = result_queue
self.global_model = global_model
self.opt = opt
self.local_model = ActorCriticModel(self.state_size, self.action_size)
self.worker_idx = idx
self.gamma = gamma
self.max_eps = max_eps
self.update_freq = update_freq
self.game_name = game_name
self.env = gym.make(self.game_name).unwrapped
self.save_dir = save_dir
self.ep_loss = 0.0
def run(self):
total_step = 1
mem = Memory()
while Worker.global_episode < self.max_eps:
current_state = self.env.reset()
mem.clear()
ep_reward = 0.
ep_steps = 0
self.ep_loss = 0
time_count = 0
done = False
while not done:
logits, _ = self.local_model(
tf.convert_to_tensor(current_state[None, :],
dtype=tf.float32))
probs = tf.nn.softmax(logits)
action = np.random.choice(self.action_size, p=probs.numpy()[0])
new_state, reward, done, _ = self.env.step(action)
if done:
reward = -1
ep_reward += reward
mem.store(current_state, action, reward)
if time_count == self.update_freq or done:
# Calculate gradient wrt to local model. We do so by tracking the
# variables involved in computing the loss by using tf.GradientTape
with tf.GradientTape() as tape:
total_loss = self.compute_loss(done,
new_state,
mem,
self.gamma)
self.ep_loss += total_loss
# Calculate local gradients
grads = tape.gradient(total_loss, self.local_model.trainable_weights)
# Push local gradients to global model
self.opt.apply_gradients(zip(grads,
self.global_model.trainable_weights))
# Update local model with new weights
self.local_model.set_weights(self.global_model.get_weights())
mem.clear()
time_count = 0
if done: # done and print information
Worker.global_moving_average_reward = \
record(Worker.global_episode, ep_reward, self.worker_idx,
Worker.global_moving_average_reward, self.result_queue,
self.ep_loss, ep_steps)
# We must use a lock to save our model and to print to prevent data races.
if ep_reward > Worker.best_score:
with Worker.save_lock:
print("Saving best model to {}, episode score: {}".format(self.save_dir, ep_reward))
self.global_model.save_weights(
os.path.join(self.save_dir,
'model_{}.h5'.format(self.game_name))
)
Worker.best_score = ep_reward
Worker.global_episode += 1
ep_steps += 1
time_count += 1
current_state = new_state
total_step += 1
self.result_queue.put(None)
def compute_loss(self,
done,
new_state,
memory,
gamma=0.99):
if done:
reward_sum = 0. # terminal
else:
reward_sum = self.local_model(
tf.convert_to_tensor(new_state[None, :],
dtype=tf.float32))[-1].numpy()[0]
# Get discounted rewards
discounted_rewards = []
for reward in memory.rewards[::-1]: # reverse buffer r
reward_sum = reward + gamma * reward_sum
discounted_rewards.append(reward_sum)
discounted_rewards.reverse()
logits, values = self.local_model(
tf.convert_to_tensor(np.vstack(memory.states),
dtype=tf.float32))
# Get our advantages
advantage = tf.convert_to_tensor(np.array(discounted_rewards)[:, None],
dtype=tf.float32) - values
# Value loss
value_loss = advantage ** 2
# Calculate our policy loss
actions_one_hot = tf.one_hot(memory.actions, self.action_size, dtype=tf.float32)
policy = tf.nn.softmax(logits)
        # Entropy bonus: entropy is -sum(p * log p); the leading minus sign
        # makes "policy_loss -= 0.01 * entropy" below reward exploration.
        entropy = -tf.reduce_sum(policy * tf.math.log(policy + 1e-20), axis=1)
policy_loss = tf.nn.softmax_cross_entropy_with_logits(labels=actions_one_hot,
logits=logits)
policy_loss *= tf.stop_gradient(advantage)
policy_loss -= 0.01 * entropy
total_loss = tf.reduce_mean((0.5 * value_loss + policy_loss))
return total_loss
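# Illustrative sanity check (hypothetical helper, not part of the original
# example) of the discounted-return recursion implemented in compute_loss
# above: with gamma = 0.5, rewards [1, 1, 1] and a terminal bootstrap of 0,
# the discounted returns are [1.75, 1.5, 1.0].
def discounted_returns(rewards, gamma, bootstrap=0.0):
    out, running = [], bootstrap
    for r in rewards[::-1]:  # walk the reward buffer in reverse
        running = r + gamma * running
        out.append(running)
    out.reverse()
    return out

assert discounted_returns([1, 1, 1], 0.5) == [1.75, 1.5, 1.0]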
train = False
master = MasterAgent(game_name='CartPole-v1', max_eps=5000)
if train:
master.train()
else:
master.play()
```
| github_jupyter |
```
import os
import random

import pandas as pd
from fastai.vision.all import *  # assumed source of L, TfmdDL, delegates, store_attr (fastai v2)

class DF2Paths():
def __init__(self, path, fps=24):
self.path, self.fps = path, fps
def __call__(self, item:pd.Series):
def fr(t): return int(float(t)*self.fps)
Id, start, end = item['id'], item['start'], item['end']
start, end = fr(start), fr(end)
step = -1 if start > end else 1 # If start is greater than end,
# it reverses the order of the for loop
        vid = L()  # This is because some videos seem to be stored in reverse
for n in range(start, end, step):
fr_path = self.path/'Charades_v1_rgb'/Id/f'{Id}-{n:0>6d}.jpg'
if os.path.exists(fr_path):
vid.append(fr_path)
return vid
@delegates()
class UniformizedDataLoader(TfmdDL):
def __init__(self, dataset=None, n_el=4, n_lbl=4, **kwargs):
kwargs['bs'] = n_el*n_lbl
super().__init__(dataset, **kwargs)
store_attr(self, 'n_el,n_lbl')
self.lbls = list(map(int, self.dataset.tls[1]))
self.dl_vocab = list(range(len(self.vocab)))
def before_iter(self):
super().before_iter()
lbl2idxs = {lbl:[] for lbl in self.dl_vocab}
for i, lbl in enumerate(self.lbls): lbl2idxs[lbl].append(i)
if self.shuffle: [random.shuffle(v) for v in lbl2idxs.values()]
self.lbl2idxs = lbl2idxs
def get_labeled_elements(self, lbl, n_el):
els_of_lbl = []
while len(els_of_lbl) < n_el:
item = self.do_item(self.lbl2idxs[lbl].pop())
if item is not None: els_of_lbl.append(item)
return els_of_lbl
def create_batches(self, samps):
n_lbl, n_el = self.n_lbl, self.n_el
self.it = iter(self.dataset) if self.dataset is not None else None
while len(self.dl_vocab) >= n_lbl:
batch_lbls, b = [], []
while len(batch_lbls) < n_lbl:
try: i = random.randint(0, len(self.dl_vocab) - 1)
except ValueError: raise CancelBatchException
lbl = self.dl_vocab.pop(i)
if len(self.lbl2idxs[lbl]) < n_lbl: continue
try: els_of_lbl = self.get_labeled_elements(lbl, n_el)
except IndexError: continue
b.extend(els_of_lbl)
batch_lbls.append(lbl)
self.dl_vocab.extend(batch_lbls)
yield self.do_batch(b)
self.dl_vocab = list(range(len(self.vocab)))
#export
def uniformize_dataset(items, lbls, vocab=None, n_el=3, n_lbl=3, shuffle=True):
if vocab is None: vocab = list(set(lbls))
lbl2idxs = {lbl:[] for lbl in vocab}
for i, lbl in enumerate(lbls): lbl2idxs[lbl].append(i)
for lbl, idxs in lbl2idxs.items():
if len(idxs) < n_el: vocab.remove(lbl)
if shuffle: [random.shuffle(v) for v in lbl2idxs.values()]
idxs = []
while len(vocab) >= n_lbl:
lbl_samples = random.sample(vocab, n_lbl)
for lbl in lbl_samples:
i = 0
while i < n_el:
i += 1
idx = lbl2idxs[lbl].pop()
idxs.append(idx)
if len(lbl2idxs[lbl]) <= n_el:
vocab.remove(lbl)
return getattr(items, 'iloc', items)[idxs]
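# Illustrative example (hypothetical data, not from the original notebook):
# the label-to-indices grouping used in uniformize_dataset above maps each
# label to the positions of its items, which is what makes balanced sampling
# per label possible.
example_lbls = [0, 1, 0, 2, 1]
example_lbl2idxs = {lbl: [] for lbl in set(example_lbls)}
for idx, lbl in enumerate(example_lbls):
    example_lbl2idxs[lbl].append(idx)
assert example_lbl2idxs == {0: [0, 2], 1: [1, 4], 2: [3]}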
items = pd.read_csv(path_charades/'df0.csv', index_col=0)
items = uniformize_dataset(items, items['lbl'])
items.tail(6)
#export
class UniformizedShuffle():
def __init__(self, lbls, vocab=None, n_el=4, n_lbl=4):
self.lbls = lbls
if vocab is None: vocab = list(set(lbls))
self.vocab = vocab
self.n_el = n_el
self.n_lbl = n_lbl
def __call__ (self, items):
return uniformize_dataset(items, lbls=self.lbls, vocab=self.vocab, n_el=self.n_el, n_lbl=self.n_lbl)
df = pd.read_csv(path_charades/'df0.csv', index_col=0)
un = UniformizedShuffle(items['lbl'])
un(items).tail(7)
```
| github_jupyter |
# Using `bw2landbalancer`
Notebook showing typical usage of `bw2landbalancer`
## Generating the samples
`bw2landbalancer` works with Brightway2. You only need to set as current a project into which the database whose land transformation exchanges you want to balance has been imported.
```
import brightway2 as bw
import numpy as np
bw.projects.set_current('ei36cutoff') # Project with ecoinvent 3.6 cut-off by classification already imported
```
The only Class you need is the `DatabaseLandBalancer`:
```
from bw2landbalancer import DatabaseLandBalancer
```
Instantiating the DatabaseLandBalancer will automatically identify land transformation biosphere activities (elementary flows).
```
dlb = DatabaseLandBalancer(
    database_name="ei36_cutoff", # name of the LCI database in the brightway2 project
)
```
Generating presamples for the whole database is a lengthy process. Thankfully, it only ever needs to be done once per database:
```
dlb.add_samples_for_all_acts(iterations=1000)
```
The samples and associated indices are stored as attributes:
```
dlb.matrix_samples
dlb.matrix_samples.shape
dlb.matrix_indices[0:10] # First ten indices
len(dlb.matrix_indices)
```
These can directly be used to generate [`presamples`](https://presamples.readthedocs.io/):
```
presamples_id, presamples_fp = dlb.create_presamples(
name=None, #Could have specified a string as name, not passing anything will use automatically generated random name
dirpath=None, #Could have specified a directory path to save presamples somewhere specific
id_=None, #Could have specified a string as id, not passing anything will use automatically generated random id
seed='sequential', #or None, or int.
)
```
## Using the samples
The samples are formatted for use in brightway2 via the presamples package.
The following function calculates:
- Deterministic results, using `bw.LCA`
- Stochastic results, using `bw.MonteCarloLCA`
- Stochastic results using presamples, using `bw.MonteCarloLCA` and passing `presamples=[presamples_fp]`
The ratios of stochastic to deterministic results are then plotted for Monte Carlo results with and without presamples.
Ratios for Monte Carlo with presamples are on the order of 1.
Ratios for Monte Carlo without presamples can be multiple orders of magnitude, and can be negative or positive.
```
def check_presamples_act(act_key, ps_fp, lcia_method, iterations=1000):
    """Plot histograms of Monte Carlo samples/det result for case w/ and w/o presamples"""
    lca = bw.LCA({act_key:1}, method=lcia_method)
    lca.lci()
    lca.lcia()
    mc_arr_wo = np.empty(shape=iterations)
    mc = bw.MonteCarloLCA({act_key:1}, method=lcia_method)
    for i in range(iterations):
        mc_arr_wo[i] = next(mc)/lca.score
    mc_arr_w = np.empty(shape=iterations)
    mc_w = bw.MonteCarloLCA({act_key:1}, method=lcia_method, presamples=[ps_fp])
    for i in range(iterations):
        mc_arr_w[i] = next(mc_w)/lca.score
plt.hist(mc_arr_wo, histtype="step", color='orange', label="without presamples")
plt.hist(mc_arr_w, histtype="step", color='green', label="with presamples")
plt.legend()
```
Let's run this on a couple of random ecoinvent products with the ImpactWorld+ Land transformation, biodiversity LCIA method:
```
m=('IMPACTWorld+ (Default_Recommended_Midpoint 1.23)', 'Midpoint', 'Land transformation, biodiversity')
import matplotlib.pyplot as plt
%matplotlib inline
act = [act for act in bw.Database('ei36_cutoff') if act['name']=='polyester-complexed starch biopolymer production'][0]
print("Working on activity known to have non-negligible land transformation impacts: ", act)
check_presamples_act(act.key, presamples_fp, m)
act = bw.Database('ei36_cutoff').random()
print("Randomly working on ", act)
check_presamples_act(act.key, presamples_fp, m)
act = bw.Database('ei36_cutoff').random()
print("Randomly working on ", act)
check_presamples_act(act.key, presamples_fp, m)
act = bw.Database('ei36_cutoff').random()
print("Randomly working on ", act)
check_presamples_act(act.key, presamples_fp, m)
act = bw.Database('ei36_cutoff').random()
print("Randomly working on ", act)
check_presamples_act(act.key, presamples_fp, m)
```
| github_jupyter |
# Analyzing volumes for word frequencies
This notebook will demonstrate some of the basic functionality of the HathiTrust FeatureReader object. We will look at a few examples of easily replicable text analysis techniques — namely word frequency and visualization.
```
%%capture
!pip install nltk
from htrc_features import FeatureReader
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
```
## Part 1 — Word frequency in novels
The following cells load a collection of nine novels from the 18th-20th centuries, chosen from an HTRC collection. Also loaded is a collection of math textbooks from the 17th-19th centuries, but the latter will be used in a later part. The collection of novels will be used as a departure point for our text analysis.
```
!rm -rf local-folder
download_output = !htid2rsync --f novels-word-use.txt | rsync -azv --files-from=- data.sharc.hathitrust.org::features/ local-folder/
suffix = '.json.bz2'
file_paths = ['local-folder/' + path for path in download_output if path.endswith(suffix)]
fr_novels = FeatureReader(file_paths)
for vol in fr_novels:
print(vol.title)
```
## Selecting volumes
The following cell is useful in choosing a volume to manipulate. Set `title_word` to any word that is contained in the title of the fr-volume you would like to work with (the string comparison is case-insensitive since some titles are lower-case). The volume will then be stored as 'vol', and can be reassigned to any variable name you would like! As an example, `title_word` is currently set to "grapes", meaning "The Grapes of Wrath" by John Steinbeck is the current volume saved under the variable name vol. You can change this cell at any time to work with a different volume.
```
title_word = 'grapes'
for vol in fr_novels:
if title_word.lower() in vol.title.lower():
print('Current volume:', vol.title)
break
```
## Sampling tokens from a book
The following cell will display the most common tokens (words or punctuation marks) in a given volume, alongside the number of times they appear. It will also calculate their relative frequencies (found by dividing the number of appearances by the total number of words in the book) and display the results in a `DataFrame`. We'll do this for the volume we found above; the cell may take a few seconds to run because we're looping through every word in the volume!
```
tokens = vol.tokenlist(pos=False, case=False, pages=False).sort_values('count', ascending=False)
freqs = []
for count in tokens['count']:
freqs.append(count/sum(tokens['count']))
tokens['rel_frequency'] = freqs
tokens
```
### Graphing word frequencies
The following cell outputs a bar plot of the most common tokens from the volume and their frequencies.
```
%matplotlib inline
# Build a list of frequencies and a list of tokens.
freqs_1, tokens_1 = [], []
for i in range(15): # top 15 tokens
freqs_1.append(freqs[i])
tokens_1.append(tokens.index.get_level_values('lowercase')[i])
# Create a range for the x-axis
x_ticks = np.arange(len(tokens_1))
# Plot!
plt.bar(x_ticks, freqs_1)
plt.xticks(x_ticks, tokens_1, rotation=90)
plt.ylabel('Frequency', fontsize=14)
plt.xlabel('Token', fontsize=14)
plt.title('Common token frequencies in "' + vol.title[:14] + '..."', fontsize=14)
```
As you can see, the most common tokens in "The Grapes of Wrath" are mostly punctuation and basic words that don't provide context. Let's see if we can narrow our search to gain some more relevant insight. We can get a list of stopwords from the `nltk` library. Punctuation is in the `string` library:
```
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
from string import punctuation
print(stopwords.words('english'))
print()
print(punctuation)
```
Now that we have a list of words to ignore in our search, we can make a few tweaks to our plotting cell.
```
freqs_filtered, tokens_filtered, i = [], [], 0
while len(tokens_filtered) < 10:
if tokens.index.get_level_values('lowercase')[i] not in stopwords.words('english') + list(punctuation):
freqs_filtered.append(freqs[i])
tokens_filtered.append(tokens.index.get_level_values('lowercase')[i])
i += 1
# Create a range for the x-axis
x_ticks = np.arange(len(freqs_filtered))
# Plot!
plt.bar(x_ticks, freqs_filtered)
plt.xticks(x_ticks, tokens_filtered, rotation=90)
plt.ylabel('Frequency', fontsize=14)
plt.xlabel('Token', fontsize=14)
plt.title('Common token frequencies in "' + vol.title[:14] + '..."', fontsize=14)
```
That's better. No more punctuation and lower frequencies on the y-axis mean that narrowing down our search choices was effective. This is also helpful if we're trying to find distinctive words in a text, because we removed the words that most texts share.
## Sampling tokens from all books
Now we can see how relative word frequencies compare across all the books in our sample. To do this, we'll need a few useful functions.
The first finds the most common noun in a volume, with adjustable parameters for minimum length.
The second calculates the relative frequency of a token across the entirety of a volume, saving us the time of doing the calculation like in the above cell.
Finally, we'll have a visualization function to create a bar plot of relative frequencies for all volumes in our sample, so that we can easily track how word frequencies differ across titles.
```
# A function to return the most common noun of length at least word_length in the volume.
# NOTE: word_length defaults to 2.
# e.g. most_common_noun(fr_novels.first) returns 'time'.
def most_common_noun(vol, word_length=2):
# Build a table of common nouns
tokens_1 = vol.tokenlist(pages=False, case=False)
nouns_only = tokens_1.loc[(slice(None), slice(None), ['NN']),]
top_nouns = nouns_only.sort_values('count', ascending=False)
token_index = top_nouns.index.get_level_values('lowercase')
    # Choose the first token at least as long as word_length that contains
    # no punctuation characters
    for i in range(max(token_index.shape)):
        if len(token_index[i]) >= word_length:
            if not any(ch in token_index[i] for ch in ("'", "!", ",", "?")):
                return token_index[i]
print('There is no noun of this length')
return None
most_common_noun(vol, 15)
# Return the usage frequency of a given word in a given volume.
# NOTE: frequency() returns a dictionary entry of the form {'word': frequency}.
# e.g. frequency(fr_novels.first(), 'blue') returns {'blue': 0.00012}
def frequency(vol, word):
t1 = vol.tokenlist(pages=False, pos=False, case=False)
token_index = t1[t1.index.get_level_values("lowercase") == word]
if len(token_index['count'])==0:
return {word: 0}
count = token_index['count'][0]
freq = count/sum(t1['count'])
return {word: float('%.5f' % freq)}
frequency(vol, 'blue')
# Returns a plot of the usage frequencies of the given word across all volumes in the given FeatureReader collection.
# NOTE: frequencies are given as percentages rather than true ratios.
def frequency_bar_plot(word, fr):
freqs, titles = [], []
for vol in fr:
title = vol.title
short_title = title[:6] + (title[6:] and '..')
freqs.append(100*frequency(vol, word)[word])
titles.append(short_title)
# Format and plot the data
x_ticks = np.arange(len(titles))
plt.bar(x_ticks, freqs)
plt.xticks(x_ticks, titles, fontsize=10, rotation=45)
plt.ylabel('Frequency (%)', fontsize=12)
plt.title('Frequency of "' + word + '"', fontsize=14)
frequency_bar_plot('blue', fr_novels)
```
Your turn! See if you can output a bar plot of the most common noun of length at least 5 in "To Kill a Mockingbird". REMEMBER, you may have to set vol to a different value than it already has.
```
# Use the provided frequency functions to plot the most common 5-letter noun in "To Kill a Mockingbird".
# Your solution should be just one line of code.
```
## Part 2 — Non-fiction volumes
Now we'll load a collection of 33 math textbooks from the 18th and 19th centuries. These volumes focus on number theory and arithmetic, and were written during the lives of Leonhard Euler and Joseph-Louis Lagrange – two of the most prolific researchers of number theory in all of history. As a result, we can expect the frequency of certain words and topics to shift over time to reflect the state of contemporary research. Let's load them and see.
```
download_output = !htid2rsync --f math-collection.txt | rsync -azv --files-from=- data.sharc.hathitrust.org::features/ local-folder/
file_paths = ['local-folder/' + path for path in download_output if path.endswith(suffix)]
fr_math = FeatureReader(file_paths)
for vol in fr_math:
print(vol.title)
```
### Another frequency function
The next cell contains a frequency_by_year function that takes as inputs a query word and a FeatureReader object. The function calculates relative frequencies of the query word across all volumes in the FR, then outputs them in a `DataFrame` sorted by the volume year. It then plots the frequencies and allows us to easily see trends in word usage across a time period.
```
# Returns a DF of relative frequencies, volume years, and page counts, along with a scatter plot.
# NOTE: frequencies are given in percentages rather than true ratios.
def frequency_by_year(query_word, fr):
volumes = pd.DataFrame()
years, page_counts, query_freqs = [], [], []
for vol in fr:
years.append(int(vol.year))
page_counts.append(int(vol.page_count))
query_freqs.append(100*frequency(vol, query_word)[query_word])
volumes['year'], volumes['pages'], volumes['freq'] = years, page_counts, query_freqs
volumes = volumes.sort_values('year')
# Set plot dimensions and labels
scatter_plot = volumes.plot.scatter('year', 'freq', color='black', s=50, fontsize=12)
plt.ylim(0-np.mean(query_freqs), max(query_freqs)+np.mean(query_freqs))
plt.ylabel('Frequency (%)', fontsize=12)
plt.xlabel('Year', fontsize=12)
plt.title('Frequency of "' + query_word + '"', fontsize=14)
return volumes.head(10)
```
### Checking for shifts over time
In 1744, Euler began a huge volume of work on identifying quadratic forms and progressions of primes. It stands to reason, then, that mentions of these topics in number theory textbooks should see a discernible jump after the 1740s. The following cells call frequency_by_year on several relevant words.
```
frequency_by_year('quadratic', fr_math)
frequency_by_year('prime', fr_math)
frequency_by_year('factor', fr_math)
```
# All done!
That's all for this notebook, but it doesn't mean you can't apply what you've learned. Can you think of any words you'd like to track over time? Feel free to use the following empty cells however you'd like. An interesting challenge would be to see if you can incorporate the frequency functions from Part 1 into the scatter function from Part 2. Have fun!
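As one possible starting point, the core relative-frequency calculation used throughout this notebook can be sketched independently of the FeatureReader volumes. The per-volume token counts below are invented purely to show the shape of the computation:

```
# Synthetic stand-ins for (year, token counts) pairs extracted from volumes;
# the numbers are made up and only illustrate the calculation.
volumes = [
    (1750, {"prime": 3, "number": 40, "the": 200}),
    (1801, {"prime": 12, "number": 55, "the": 180}),
    (1850, {"prime": 25, "number": 60, "the": 210}),
]

def relative_frequency(counts, word):
    """Occurrences of `word` divided by the total token count."""
    return counts.get(word, 0) / sum(counts.values())

# One (year, frequency) point per volume, sorted by year and ready to plot.
freq_by_year = sorted((year, relative_frequency(counts, "prime"))
                      for year, counts in volumes)
```

Replacing the synthetic counts with real per-volume token lists gives exactly the inputs that `frequency_by_year` expects.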
| github_jupyter |
```
import json
from matplotlib import pyplot as plt
import numpy as np
import statistics
n_gen = 200 # number of generatons
n = 10 # number of generations to group
```
Read files
```
with open("Results_vehicle_ga.json", "r") as f:
ga = json.load(f)
with open("Results_vehicle_mo.json", "r") as f:
mo = json.load(f)
with open("Results_vehicle_ran.json", "r") as f:
ran = json.load(f)
```

Group evaluations by generations of n

```
mo_by_generation = {}
for i in range(0, n_gen, n):
mo_by_generation[i] = []
ga_by_generation = {}
for i in range(0, n_gen, n):
ga_by_generation[i] = []
ran_by_generation = {}
for i in range(0, n_gen, n):
ran_by_generation[i] = []
n_gen_res = []
for i, run in enumerate(ran):
for m in range(0, len(ran[run]["fitness"]), n):
ran_by_generation[m].append(ran[run]["fitness"][m])
for i, run in enumerate(mo):
#print(len(mo[run]["fitness"]))
for m in range(0, len(mo[run]["fitness"]), n):
mo_by_generation[m].append(mo[run]["fitness"][m])
for i, run in enumerate(ga):
for m in range(0, len(ga[run]["fitness"]), n):
#print(type(ga_by_generation[m]))
ga_by_generation[m].append(ga[run]["fitness"][m])
```
Evaluate the novelty
```
mo_novelty = []
for i, run in enumerate(mo):
mo_novelty.append(-mo[run]["novelty_20"])
ga_novelty = []
for i, run in enumerate(ga):
ga_novelty.append(-ga[run]["novelty_20"])
```
Evaluate average time
```
mo_time = []
for i, run in enumerate(mo):
mo_time.append(mo[run]["time"])
sum(mo_time)/len(mo_time)
ga_time = []
for i, run in enumerate(ga):
ga_time.append(ga[run]["time"])
sum(ga_time)/len(ga_time)
```
Build graphs
```
def build_boxplot(y1, x1):
fig, ax1 = plt.subplots(figsize=(10, 5))
ax1.set_xlabel('Type of algorithm', fontsize=16)
ax1.set_ylabel('Average novelty', fontsize=16)
ax1.set_xticklabels(x1, fontsize=16, rotation=45)
ax1.yaxis.grid(True, linestyle='-', which='major', color='darkgray', alpha=0.5)
top = 12
bottom = 0
ax1.set_ylim(bottom, top)
ax1.boxplot(y1)
ax1.tick_params(axis='y', labelsize=16)
build_boxplot([ga_novelty, mo_novelty], ["GA", "NSGA2"] )
def build_boxplot_time(y1, x1):
fig, ax1 = plt.subplots(figsize=(20, 10))
ax1.set_xlabel('Type of algorithm', fontsize=16)
ax1.set_ylabel('Time for 50 000 evaluations, sec', fontsize=16)
ax1.set_xticklabels(x1, fontsize=16, rotation=45)
ax1.yaxis.grid(True, linestyle='-', which='major', color='darkgray', alpha=0.5)
top = 1800
bottom = 0
ax1.set_ylim(bottom, top)
ax1.boxplot(y1)
ax1.tick_params(axis='y', labelsize=16)
build_boxplot_time([ga_time, mo_time], ["ga", "mo"] )
import matplotlib.pyplot as plt
def box_plot(data, edge_color, fill_color):
bp = ax.boxplot(data, patch_artist=True, labels=None)
for element in ['boxes', 'whiskers', 'fliers', 'means', 'medians', 'caps']:
plt.setp(bp[element], color=edge_color)
for patch in bp['boxes']:
patch.set(facecolor=fill_color)
x = range(0, 21, 1)
x1 = range(0, 105000, 5000)
fig, ax = plt.subplots(figsize=(20, 10))
ax.set_xlabel('Number of evaluations', fontsize=20)
ax.set_ylabel('Fitness value', fontsize=20)
box_plot([ga_by_generation[v] for v in ga_by_generation], 'red', 'tan')
box_plot([mo_by_generation[v] for v in mo_by_generation], 'blue', 'cyan')
box_plot([ran_by_generation[v] for v in ran_by_generation], 'green', 'yellow')
ax.set_xticks(x)
ax.set_ylim(0, -30)
ax.set_xticklabels(x1, fontsize=16, rotation=45)
ax.tick_params(axis='y', labelsize=16)
ax.grid(True)
```
Do statistical tests
```
ga_by_generation[190]
mo_by_generation[190]
from scipy.stats import mannwhitneyu
mannwhitneyu(ga_by_generation[190],mo_by_generation[190], alternative="two-sided")
from cliffsDelta import cliffsDelta
d, res = cliffsDelta(ga_by_generation[190],mo_by_generation[190] )
(d, res)
mannwhitneyu(ga_novelty,mo_novelty, alternative="two-sided")
```
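The `cliffsDelta` helper imported above is a third-party module, but the statistic itself is easy to compute by hand. A minimal pure-Python sketch (with hypothetical toy samples) of Cliff's delta as the normalized difference between pairwise wins and losses:

```python
def cliffs_delta(xs, ys):
    """Cliff's delta: (#pairs with x > y  -  #pairs with x < y) / (n_x * n_y)."""
    gt = sum(1 for x in xs for y in ys if x > y)
    lt = sum(1 for x in xs for y in ys if x < y)
    return (gt - lt) / (len(xs) * len(ys))

# Toy samples: `a` completely dominates `b`, so delta is 1.0
a = [5, 6, 7]
b = [1, 2, 3]
print(cliffs_delta(a, b))  # → 1.0
print(cliffs_delta(b, a))  # → -1.0
```

A delta near ±1 means one sample almost always dominates the other, which complements the Mann-Whitney p-value with an effect size.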
| github_jupyter |
# Correlation and Causation
It is hard to over-emphasize the point that **correlation is not causation**! Variables can be highly correlated for any number of reasons, none of which imply a causal relationship.
When trying to understand relationships between variables, it is worth the effort to think carefully and ask the question, does this relationship make sense? In this exercise you will explore a case where correlation appears to arise from **latent or hidden variables**.
As a first step, execute the code in the cell below to import the packages you will need.
```
import pandas as pd
import numpy as np
import numpy.random as nr
import statsmodels.api as sm
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
```
Is there anything you can do to improve your chances of winning a Nobel Prize? Let's have a look at some data and decide whether the correlations make any sense.
Now, execute the code in the cell below and examine the first 10 rows of the data frame.
```
Nobel_chocolate = pd.read_csv('nobel-chocolate.csv', thousands=',')
print('Dimensions of the data frame = ', Nobel_chocolate.shape)
Nobel_chocolate
```
The nation of China is a bit of an outlier. While people in China win a reasonable number of Nobel prizes, the huge population skews the chances of winning per person.
To get a feel for these data, create a scatter plot of Nobel prizes vs. chocolate consumption by executing the code in the cell below.
```
## Define a figure and axes and make a scatter plot
fig = plt.figure(figsize=(8, 8)) # define plot area
ax = fig.gca() # define axis
Nobel_chocolate.plot.scatter('Chocolate', 'Laureates10_million', ax = ax) # Scatter plot
ax.set_title('Nobel Prizes vs. Chocolate Consumption') # Give the plot a main title
```
What is the correlation between Nobel prizes and chocolate consumption? To find out, execute the code in the cell below.
> Note: The Pandas corr method de-means each column before computing correlation.
```
Nobel_chocolate[['Laureates10_million', 'Chocolate']].corr()
```
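To see what the note above means, here is a small pure-Python sketch (toy data, not the notebook's dataset) of the Pearson correlation that `corr` computes: de-mean each column, then divide the covariance by the product of the standard deviations.

```python
import math

def pearson(xs, ys):
    # De-mean each series, then divide the covariance by the product
    # of the standard deviations (what DataFrame.corr does by default).
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Perfectly linear toy data gives correlation 1.0
print(pearson([1, 2, 3, 4], [10, 20, 30, 40]))  # → 1.0
```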
There seems to be a high correlation between the number of Nobel prizes and chocolate consumption.
What about the relationship between the log of Nobel prizes and chocolate consumption? Execute the code in the cell below and examine the resulting plot.
```
Nobel_chocolate['log_Nobel'] = np.log(Nobel_chocolate.Laureates10_million)
## Plot the log Nobel vs. chocolate
fig = plt.figure(figsize=(9, 8)) # define plot area
ax = fig.gca() # define axis
Nobel_chocolate.plot.scatter('Chocolate', 'log_Nobel', ax = ax) # Scatter plot
ax.set_title('Log Nobel Prizes vs. Chocolate Consumption') # Give the plot a main title
ax.set_xlabel('Chocolate Consumption') # Set text for the x axis
ax.set_ylabel('Log Nobel Prizes per 10 Million People')# Set text for y axis
```
This looks like a fairly straight-line relationship, with the exception of an outlier, China.
What is the correlation between log of Nobel prizes and chocolate consumption? Execute the code in the cell below to find out.
```
Nobel_chocolate[['log_Nobel', 'Chocolate']].corr()
```
This correlation is even higher than for the untransformed relationship. But, does this make any sense in terms of a causal relationship? Can eating chocolate really improve someone's chances of winning a Nobel prize?
Perhaps some other variable makes more sense for finding a causal relationship? GDP per person could be a good choice. Execute the code in the cell below to load the GDP data.
```
GDP = pd.read_csv('GDP_Country.csv')
print(GDP)
```
There are now two data tables (Pandas data frames). These data tables must be joined and the GDP per person computed. Execute the code in the cell below to perform these operations and examine the resulting data frame.
```
Nobel_chocolate = Nobel_chocolate.merge(right=GDP, how='left', left_on='Country', right_on='Country')
Nobel_chocolate['GDP_person_thousands'] = 1000000 * np.divide(Nobel_chocolate.GDP_billions, Nobel_chocolate.Population)
Nobel_chocolate
```
Let's examine the relationship between GDP per person and the number of Nobel prizes. Execute the code in the cell below and examine the resulting plot.
```
## Plot the log Nobel vs. GDP
fig = plt.figure(figsize=(9, 8)) # define plot area
ax = fig.gca() # define axis
Nobel_chocolate.plot.scatter('GDP_person_thousands', 'log_Nobel', ax = ax) # Scatter plot
ax.set_title('Log Nobel Prizes vs. GDP per person') # Give the plot a main title
ax.set_xlabel('GDP per person') # Set text for the x axis
ax.set_ylabel('Log Nobel Prizes per 10 Million People')# Set text for y axis
```
There seems to be a reasonable relationship between the GDP per person and the log of Nobel prizes per population. There is one outlier, again China.
What is the correlation between the GDP per person and log Nobel prizes? Execute the code in the cell below and examine the results.
```
Nobel_chocolate[['log_Nobel', 'GDP_person_thousands']].corr()
```
GDP per person and the log of the number of Nobel prizes per population exhibit fairly high correlation. Does this relationship make more sense than the relationship with chocolate consumption?
Is there a relationship between chocolate consumption and GDP? This seems likely. To find out, execute the code in the cell below and examine the resulting plot.
```
## Plot chocolate consumption vs. GDP
fig = plt.figure(figsize=(9, 8)) # define plot area
ax = fig.gca() # define axis
Nobel_chocolate.plot.scatter('Chocolate', 'GDP_person_thousands', ax = ax) # Scatter plot
ax.set_title('Chocolate consumption vs. GDP per person') # Give the plot a main title
ax.set_xlabel('Chocolate Consumption') # Set text for the x axis
ax.set_ylabel('GDP per person, $1000s')# Set text for y axis
```
The relationship looks fairly linear.
How correlated are chocolate consumption and GDP? To answer this question, in the cell below create and execute the code to compute the correlations between three of the variables and display the results: 'Chocolate', 'GDP_person_thousands', 'log_Nobel'. Make sure you name your correlation matrix object `corr_matrix`.
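One possible shape of that cell, sketched here on a small hypothetical frame standing in for the merged data (your numbers will differ):

```python
import pandas as pd

# Hypothetical stand-in for the merged Nobel_chocolate frame --
# only the three columns the exercise asks about.
Nobel_chocolate = pd.DataFrame({
    'Chocolate': [4.5, 8.8, 6.3, 2.0],
    'GDP_person_thousands': [45.0, 80.0, 50.0, 10.0],
    'log_Nobel': [1.2, 2.5, 1.8, -0.5],
})

corr_matrix = Nobel_chocolate[['Chocolate', 'GDP_person_thousands', 'log_Nobel']].corr()
print(corr_matrix)
```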
Notice the relationship between GDP per population and chocolate consumption. Do you think this relationship could be causal? What about the relationship between GDP per person and Nobel prizes?
Finally, execute the code in the cell below to display a visualization of these correlations, and examine the results.
```
sns.heatmap(corr_matrix, center=0, cmap="YlGn",
square=True, linewidths=.25)
plt.title('Correlation matrix for Nobel prize variables')
plt.yticks(rotation='horizontal')
plt.xticks(rotation='vertical')
```
Notice that the correlation coefficients between all these variables are relatively high. This example illustrates the perils of trying to extract causal relationships from correlation values alone.
Is it possible that GDP has a causal relationship with both chocolate consumption and winning Nobel prizes? Are there other latent variables (hidden, or not part of this data set) that might be important in determining causal relationships, such as local tastes for chocolate or R&D spending levels in different countries?
##### Copyright 2020, Stephen F. Elston. All rights reserved.
# Topic Modeling
In this notebook, we perform topic modeling of texts using specialized Python modules.
**IMPORTANT**: this notebook requires the `java` module. If it was not loaded when you opened this notebook, you must close the notebook, stop it, load the `java` module, and reopen this notebook.
```
!which java
```
## Loading the required Python modules
```
# Standard and scientific modules
print('- Loading standard modules...')
import os
import re
import numpy as np
import pandas as pd
from pprint import pprint
from pathlib import Path
import json
# NLTK - Natural Language Toolkit
print('- Loading NLTK...')
import nltk
nltk.download('stopwords') # Only required once
# Gensim
print('- Loading Gensim...')
import gensim
import gensim.corpora as corpora
from gensim.utils import simple_preprocess
from gensim.models import CoherenceModel, LdaModel, LdaMulticore
# spaCy for lemmatization
print('- Loading spaCy...')
import spacy
# Visualization tools
print('- Loading visualization tools...')
import pyLDAvis
import pyLDAvis.gensim_models
import matplotlib.pyplot as plt
# Configure Gensim logging (optional)
print('- Final configuration...')
import logging
logging.basicConfig(
    format='%(asctime)s : %(levelname)s : %(message)s',
    level=logging.ERROR)
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
warnings.filterwarnings("ignore", category=FutureWarning)
print('Finished loading modules.')
```
## Loading the data
* Load the French-language "stop words" from the NLTK module
```
# In NLTK, these are the "stopwords"
from nltk.corpus import stopwords
stop_words = stopwords.words('french')
# Show the default list
print('Default list:\n', stop_words)
# Add other words to the list if needed
stop_words.extend([])
# Show the final list
print('\nFinal list:\n', stop_words)
```
* Get the list of text files
```
# Get the path of every text file in the "donnees/" folder
txt_folder = Path('donnees/').rglob('*.txt')
files = sorted([x for x in txt_folder]) # Convert everything to a sorted list
print(files[:3], '...', files[-3:]) # Show the first and last files
print(f' => {len(files)} files in total')
```
* Create a dictionary that will be used to initialize a Pandas DataFrame with two columns:
    * `target_names`: the file name and its path
    * `content`: the original text of the file joined into a single line
```
text_dict = {'target_names': [], 'content': []}
# For each text file
for name in files:
    f = open(name, 'r', encoding='utf-8')
    basename = os.path.basename(name)
    # Show progress every 10 files
    if name in files[::10]:
        print(f'Reading {basename} ...')
    # Record the file name and its content
    text_dict['target_names'].append(basename)
    text_dict['content'].append(' '.join(f.readlines()))
    f.close()
# Convert the dictionary into a pandas dataframe
df = pd.DataFrame.from_dict(text_dict)
print(f'Total: {len(df)} rows. Here are the first 5:')
df.head()
```
## Cleaning the text data
* Remove Roman numerals and repeated whitespace
```
# Select the content of every file
data = text_dict['content']
# Remove Roman numerals
data = [re.sub(r'[MDCLXVI]+(\.|\b\w\n)', ' ', sentence) for sentence in data]
# Replace repeated whitespace (and line breaks) with a single space
data = [re.sub(r'\s+', ' ', sentence) for sentence in data]
# Remove quote characters
#data = [re.sub(r"\'", "", sentence) for sentence in data]
print(f'First cleaned text:\n {data[0][:308]}...\n')
print(f'Last cleaned text:\n {data[-1][:308]}...')
```
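To see what the two substitutions above do, here is a self-contained check on a toy string (the sample text is made up):

```python
import re

sample = "Chapitre XII. Il était une  fois\n\nun roi."
# Remove Roman-numeral tokens such as "XII." (the pattern also covers
# a numeral at a line-end boundary)
step1 = re.sub(r'[MDCLXVI]+(\.|\b\w\n)', ' ', sample)
# Collapse runs of whitespace (including newlines) into single spaces
step2 = re.sub(r'\s+', ' ', step1)
print(step2)  # → Chapitre Il était une fois un roi.
```

Note that the character class only touches uppercase letters, so ordinary words like "Il" are left alone because no period or boundary follows the leading "I".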
* Remove all punctuation symbols and turn each text into a list of words
```
def sentences_to_words(sentences):
    """
    Generator - for each text, yield a list of words

    Returns:
    ---------
    Each text is processed by gensim.utils.simple_preprocess(), which
    removes punctuation and collects the individual words.
    """
    for sentence in sentences:
        # The deacc=True option also strips accent marks
        yield(simple_preprocess(sentence, deacc=True))
# Build a list of word lists - one word list per text
data_words = list(sentences_to_words(data))
print('First word list:\n', data_words[0][:50], '...\n')
print('Last word list:\n', data_words[-1][:50], '...')
```
## Topic modeling
We start by using:
* Gensim's [Phrases class](https://radimrehurek.com/gensim/models/phrases.html#gensim.models.phrases.Phrases) - detects phrases based on collocation counts
* Gensim's [Phraser class](https://radimrehurek.com/gensim/models/phrases.html#gensim.models.phrases.Phraser) (an alias of [FrozenPhrases](https://radimrehurek.com/gensim/models/phrases.html#gensim.models.phrases.FrozenPhrases)) - reduces memory consumption by discarding model state that is only needed during phrase detection
```
# Build the bigram and trigram models - a high threshold => fewer phrases
bigram = gensim.models.phrases.Phrases(data_words, min_count=4, threshold=8)
trigram = gensim.models.phrases.Phrases(bigram[data_words], threshold=8)
# Faster way to get a sentence tagged as a trigram/bigram
bigram_mod = gensim.models.phrases.Phraser(bigram)
trigram_mod = gensim.models.phrases.Phraser(trigram)
# Show an example of a trigram
for mot in trigram_mod[bigram_mod[data_words[0]]]:
    if len(mot.split('_')) == 3:
        print(mot)
```
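The collocation idea behind `Phrases` can be sketched in plain Python: count adjacent word pairs and fuse the ones seen at least `min_count` times. This toy version ignores Gensim's actual scoring formula (which also applies a `threshold` to a normalized score):

```python
from collections import Counter

def find_bigrams(texts, min_count=2):
    """Fuse adjacent word pairs seen at least min_count times (toy version)."""
    pair_counts = Counter(
        (a, b) for words in texts for a, b in zip(words, words[1:])
    )
    frequent = {p for p, c in pair_counts.items() if c >= min_count}

    fused_texts = []
    for words in texts:
        out, i = [], 0
        while i < len(words):
            if i + 1 < len(words) and (words[i], words[i + 1]) in frequent:
                out.append(words[i] + '_' + words[i + 1])
                i += 2
            else:
                out.append(words[i])
                i += 1
        fused_texts.append(out)
    return fused_texts

texts = [['new', 'york', 'city'], ['new', 'york', 'times'], ['old', 'york']]
print(find_bigrams(texts))  # → [['new_york', 'city'], ['new_york', 'times'], ['old', 'york']]
```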
* Define functions to handle stop words, bigrams, trigrams and lemmatization
```
def remove_stopwords(texts):
return [
[word for word in simple_preprocess(str(doc)) if word not in stop_words]
for doc in texts]
def make_bigrams(texts):
return [bigram_mod[doc] for doc in texts]
def make_trigrams(texts):
return [trigram_mod[bigram_mod[doc]] for doc in texts]
def lemmatization(texts, allowed_postags=['NOUN', 'ADJ', 'VERB', 'ADV']):
"""https://spacy.io/api/annotation"""
texts_out = []
for sent in texts:
doc = nlp(" ".join(sent))
texts_out.append(
[token.lemma_ for token in doc if token.pos_ in allowed_postags])
return texts_out
```
* Finish cleaning the word lists
```
print('- Removing stop words...')
data_words_nostops = remove_stopwords(data_words)
print('- Forming bigrams...')
data_words_bigrams = make_bigrams(data_words_nostops)
print('- Forming trigrams...')
data_words_trigrams = make_trigrams(data_words_bigrams)
# Initialize the spaCy 'fr' model, keeping only the "tagger" component
print('- Initializing the spaCy model...')
nlp = spacy.load('fr_core_news_sm', disable=['parser', 'ner'])
# Lemmatize, keeping only nouns, adjectives, verbs and adverbs
print('- Lemmatizing...')
data_lemmatized = lemmatization(data_words_trigrams,
                                allowed_postags=['NOUN', 'ADJ', 'VERB', 'ADV'])
print(data_lemmatized[0][:50])
```
* Build the dictionary and the corpus
```
# Create the dictionary
id2word = corpora.Dictionary(data_lemmatized)
# Compute word frequencies per file
corpus = [id2word.doc2bow(text) for text in data_lemmatized]
# Human-readable format of a corpus excerpt
[[(id2word[id], freq) for id, freq in cp[:10]] for cp in corpus[:4]]
start = 2 # Minimum number of topics per model
limit = 10 # Maximum number of topics per model
step = 2 # Step size for increasing the number of topics
multiple_num_topics = range(start, limit + 1, step)
model_list = []
coherence_values = []
for num_topics in multiple_num_topics:
    print(f'With {num_topics} topics...')
    model = LdaMulticore(
        corpus=corpus,
        num_topics=num_topics,
        id2word=id2word,
        workers=1)
    model_list.append(model)
    coherencemodel = CoherenceModel(
        model=model,
        texts=data_lemmatized,
        dictionary=id2word,
        coherence='c_v')
    coherence_values.append(coherencemodel.get_coherence())
print('Done')
# Plot the coherence values
plt.plot(multiple_num_topics, coherence_values)
plt.xlabel("Num Topics")
plt.ylabel("Coherence score")
plt.legend(["coherence_values"], loc='best')
plt.show()
# Print the coherence values
for m, cv in zip(multiple_num_topics, coherence_values):
    print(f'For a number of topics = {m:2d},',
          f'the coherence is {round(cv, 4)}')
# Pick the model you believe is best
# Reminder - indices start at 0 in Python
optimal_model = model_list[3]
# Show the different topics
model_topics = optimal_model.show_topics(formatted=False)
pprint(optimal_model.print_topics(num_words=10))
# Rerun the model with the exact number of topics
# (the name "ldamallet" is kept from an earlier Mallet-based version)
ldamallet = LdaMulticore(corpus=corpus, num_topics=8, id2word=id2word, workers=1)
# Show the selected topics
pprint(ldamallet.show_topics(formatted=False))
# Show the coherence
coherence_model_ldamallet = CoherenceModel(
    model=ldamallet, texts=data_lemmatized, dictionary=id2word, coherence='c_v')
coherence_ldamallet = coherence_model_ldamallet.get_coherence()
print('\nCoherence score: ', coherence_ldamallet)
def format_topics_sentences(ldamodel=ldamallet, corpus=corpus, texts=df):
    # Create a new DataFrame
    sent_topics_df = pd.DataFrame()
    # Extract the main topics of each document
    for i, row in enumerate(ldamodel[corpus]):
        row = sorted(row, key=lambda x: (x[1]), reverse=True)
        # Get the Dominant_Topic, Perc_Contribution and Topic_Keywords
        for j, (topic_num, prop_topic) in enumerate(row):
            if j == 0: # => dominant topic
                wp = ldamodel.show_topic(topic_num)
                topic_keywords = ", ".join([word for word, prop in wp])
                # Note: DataFrame.append was removed in pandas 2.0;
                # use pd.concat on newer versions
                sent_topics_df = sent_topics_df.append(
                    pd.Series(
                        [int(topic_num), round(prop_topic, 4), topic_keywords]),
                    ignore_index=True)
            else:
                break
    sent_topics_df.columns = [
        'Dominant_Topic', 'Perc_Contribution', 'Topic_Keywords']
    # Add the file name and content columns
    contents = texts
    sent_topics_df = pd.concat([sent_topics_df, contents], axis=1)
    return(sent_topics_df)
# Prepare the final results
df_topic_sents_keywords = format_topics_sentences(
    ldamodel=ldamallet, corpus=corpus, texts=df)
df_dominant_topic = df_topic_sents_keywords.reset_index()
df_dominant_topic.columns = [
    'Document number',
    'Dominant_Topic',
    'Topic_Perc_Contrib',
    'Keywords',
    'file_name',
    'Text']
# Show the final results
df_dominant_topic
```
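For reference, the `doc2bow` call used above maps each document to sorted `(token_id, count)` pairs. A stdlib sketch of the same mapping (the id assignment order here is illustrative; Gensim's `Dictionary` numbers tokens differently):

```python
from collections import Counter

def build_dictionary(docs):
    """Assign an integer id to every distinct token (in first-seen order)."""
    ids = {}
    for doc in docs:
        for tok in doc:
            ids.setdefault(tok, len(ids))
    return ids

def doc2bow(doc, ids):
    """Map a document to sorted (token_id, count) pairs."""
    counts = Counter(doc)
    return sorted((ids[tok], n) for tok, n in counts.items())

docs = [['king', 'queen', 'king'], ['queen', 'castle']]
ids = build_dictionary(docs)
bows = [doc2bow(d, ids) for d in docs]
print(bows)  # → [[(0, 2), (1, 1)], [(1, 1), (2, 1)]]
```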
# Fraud_Detection_Using_ADASYN_OVERSAMPLING
I was able to achieve the accuracies below on the validation data. These results can be further improved by lowering `fraud_threshold`, the minimum number of frauds a category value must appear in before it becomes its own feature; I used a threshold of 100.
* Logistic Regression :
Validation Accuracy: 70.0%, ROC_AUC_Score: 70.0%
* Random Forest :
Validation Accuracy: 98.9%, ROC_AUC_Score: 98.9%
* Linear Support Vector Machine :
Validation Accuracy: 51.0%, ROC_AUC_Score: 51.1%
* K Nearest Neighbors :
Validation Accuracy: 86.7%, ROC_AUC_Score: 86.7%
* Extra Trees Classifier :
Validation Accuracy: 99.2%, ROC_AUC_Score: 99.2%
```
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression, Ridge, Lasso, ElasticNet
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn import svm, neighbors
from sklearn.naive_bayes import GaussianNB
from imblearn.over_sampling import SMOTE, ADASYN
from sklearn.metrics import roc_auc_score
from sklearn.metrics import confusion_matrix
import itertools
%matplotlib inline
```
### Loading Training Transactions Data
```
tr_tr = pd.read_csv('data/train_transaction.csv', index_col='TransactionID')
print('Rows :', tr_tr.shape[0],' Columns : ',tr_tr.shape[1] )
tr_tr.tail()
print('Memory Usage : ', (tr_tr.memory_usage(deep=True).sum()/1024).round(0))
tr_tr.tail()
tr_id = pd.read_csv('data/train_identity.csv', index_col='TransactionID')
print(tr_id.shape)
tr_id.tail()
tr = tr_tr.join(tr_id)
tr['data']='train'
print(tr.shape)
tr.head()
del tr_tr
del tr_id
te_tr = pd.read_csv('data/test_transaction.csv', index_col='TransactionID')
print(te_tr.shape)
te_tr.tail()
te_id = pd.read_csv('data/test_identity.csv', index_col='TransactionID')
print(te_id.shape)
te_id.tail()
te = te_tr.join(te_id)
te['data']='test'
te['isFraud']=2
print(te.shape)
te.head()
del te_tr
del te_id
tr.isFraud.describe()
tr.isFraud.value_counts().plot(kind='bar')
tr.isFraud.value_counts()
f, (ax1, ax2) = plt.subplots(2, 1, sharex=True, figsize=(12,4))
ax1.hist(tr.TransactionAmt[tr.isFraud == 1], bins = 10)
ax1.set_title('Fraud Transactions ='+str(tr.isFraud.value_counts()[1]))
ax2.hist(tr.TransactionAmt[tr.isFraud == 0], bins = 10)
ax2.set_title('Normal Transactions ='+str(tr.isFraud.value_counts()[0]))
plt.xlabel('Amount ($)')
plt.ylabel('Number of Transactions')
plt.yscale('log')
plt.show()
sns.distplot(tr['TransactionAmt'], color='red')  # Note: deprecated in newer seaborn; use sns.histplot
sns.pairplot(tr[['TransactionAmt','isFraud']], hue='isFraud')
df = pd.concat([tr,te], sort=False)
print(df.shape)
df.head()
del tr
del te
```
### Make new category for items in Objects with A Fraud Count of more than 100
```
fraud_threshold = 100
def map_categories(*args):
    # Indicator function: returns 1 when the value equals `index`,
    # the category value set by the enclosing loop below (read as a global)
    columns = [col for col in args]
    for column in columns:
        if column == index:
            return 1
        else:
            return 0
new_categories = []
for i in df.columns:
if i != 'data':
if df[i].dtype == 'object':
fraud_count = df[df.isFraud==1][i].value_counts(dropna=False)
for index, value in fraud_count.items():
if value>fraud_threshold:
df[(str(i)+'_'+str(index))]=list(map(map_categories, df[i]))
new_categories.append((str(i)+'_'+str(index)))
# else:
# tr[(str(i)+'_'+str('other'))]=list(map(map_categories, tr[i]))
# new_tr_categories.append((str(i)+'_'+str('other')))
df.drop([i], axis=1, inplace=True)
print(new_categories)
print(df.shape)
df.head()
df.isna().any().mean()
df.fillna(0, inplace=True)
df.isna().any().mean()
X = df[df['data'] == 'train'].drop(['isFraud','data'], axis=1)
y = df[df['data'] == 'train']['isFraud']
X_predict = df[df['data'] == 'test'].drop(['isFraud','data'], axis=1)
print(X.shape, y.shape, X_predict.shape)
```
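The loop above one-hot encodes only the category values seen in more than `fraud_threshold` fraudulent rows. The core idea can be sketched without pandas (toy data and a hypothetical threshold):

```python
from collections import Counter

def frequent_value_indicators(values, fraud_flags, threshold):
    """One indicator column per category value whose fraud count exceeds threshold."""
    fraud_counts = Counter(v for v, f in zip(values, fraud_flags) if f == 1)
    keep = [v for v, c in fraud_counts.items() if c > threshold]
    return {v: [1 if x == v else 0 for x in values] for v in keep}

cards = ['visa', 'visa', 'amex', 'visa', 'amex']
fraud = [1, 1, 0, 1, 1]
# 'visa' appears in 3 frauds (> 2), 'amex' in only 1, so only 'visa'
# gets an indicator column
print(frequent_value_indicators(cards, fraud, threshold=2))  # → {'visa': [1, 1, 0, 1, 0]}
```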
### Oversampling using ADASYN
```
ada = ADASYN(random_state=91)
# Note: newer versions of imbalanced-learn rename this method fit_resample
X_sampled, y_sampled = ada.fit_sample(X, y)
# Fraudulent records in the original data
y.value_counts()
# Fraudulent records in the oversampled data - now almost equal to the normal records
np.bincount(y_sampled)
X_train, X_test, y_train, y_test = train_test_split(X_sampled,y_sampled,test_size=0.3)
class_names = ['FRAUD', 'NORMAL']
def plot_confusion_matrix(cm, classes,normalize=False,title='Confusion matrix',cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print(cm)
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.ylabel('Ground Truth')
plt.xlabel('Predicted label')
plt.tight_layout()
plt.show()
```
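For intuition, the effect of oversampling can be sketched with plain random duplication of minority rows; ADASYN goes further and synthesizes new points by interpolating between minority-class neighbours, with more synthesis where the minority class is hardest to learn. A minimal sketch on toy data:

```python
import random

def random_oversample(X, y, seed=91):
    """Duplicate minority-class rows until classes are balanced
    (a simpler relative of ADASYN, which instead synthesizes new points)."""
    rng = random.Random(seed)
    by_class = {}
    for row, label in zip(X, y):
        by_class.setdefault(label, []).append(row)
    target = max(len(rows) for rows in by_class.values())
    X_out, y_out = [], []
    for label, rows in by_class.items():
        X_out.extend(rows)
        y_out.extend([label] * len(rows))
        extra = [rng.choice(rows) for _ in range(target - len(rows))]
        X_out.extend(extra)
        y_out.extend([label] * len(extra))
    return X_out, y_out

X = [[0], [1], [2], [3], [9]]
y = [0, 0, 0, 0, 1]
Xs, ys = random_oversample(X, y)
print(ys.count(0), ys.count(1))  # → 4 4
```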
### Logistic Regression
```
lr = LogisticRegression(solver='lbfgs')
clf_lr = lr.fit(X_train, y_train)
confidence_lr=clf_lr.score(X_test, y_test)
print('Accuracy on Validation Data : ', confidence_lr.round(2)*100,'%')
test_prediction = clf_lr.predict(X_test)
print('ROC_AUC_SCORE : ', roc_auc_score(y_test, test_prediction).round(3)*100,'%')
cnf_matrix = confusion_matrix(y_test, test_prediction)
plot_confusion_matrix(cnf_matrix, classes=class_names, title='Confusion Matrix')
prediction_lr = clf_lr.predict(X_predict)
test = df[df['data'] == 'test']
del df
test['prediction_lr'] = prediction_lr
test.prediction_lr.value_counts()
test.prediction_lr.to_csv('adLogistic_Regression_Prediction.csv')
```
### Random Forest
```
rfor=RandomForestClassifier()
clf_rfor = rfor.fit(X_train, y_train)
confidence_rfor=clf_rfor.score(X_test, y_test)
print('Accuracy on Validation Data : ', confidence_rfor.round(3)*100,'%')
test_prediction = clf_rfor.predict(X_test)
print('ROC_AUC_SCORE : ', roc_auc_score(y_test, test_prediction).round(3)*100,'%')
cnf_matrix = confusion_matrix(y_test, test_prediction)
plot_confusion_matrix(cnf_matrix, classes=class_names, title='Confusion Matrix')
prediction_rfor = clf_rfor.predict(X_predict)
test['prediction_rfor'] = prediction_rfor
test.prediction_rfor.value_counts()
test.prediction_rfor.to_csv('adRandom_Forest_Prediction.csv')
```
### Linear Support Vector Machine Algorithm
```
lsvc=svm.LinearSVC()
clf_lsvc=lsvc.fit(X_train, y_train)
confidence_lsvc=clf_lsvc.score(X_test, y_test)
print('Accuracy on Validation Data : ', confidence_lsvc.round(3)*100,'%')
test_prediction = clf_lsvc.predict(X_test)
print('ROC_AUC_SCORE : ', roc_auc_score(y_test, test_prediction).round(3)*100,'%')
cnf_matrix = confusion_matrix(y_test, test_prediction)
plot_confusion_matrix(cnf_matrix, classes=class_names, title='Confusion Matrix')
```
### K-Nearest Neighbors Algorithm
```
knn=neighbors.KNeighborsClassifier(n_neighbors=10, n_jobs=-1)
clf_knn=knn.fit(X_train, y_train)
confidence_knn=clf_knn.score(X_test, y_test)
print('Accuracy on Validation Data : ', confidence_knn.round(3)*100,'%')
test_prediction = clf_knn.predict(X_test)
print('ROC_AUC_SCORE : ', roc_auc_score(y_test, test_prediction).round(3)*100,'%')
cnf_matrix = confusion_matrix(y_test, test_prediction)
plot_confusion_matrix(cnf_matrix, classes=class_names, title='Confusion Matrix')
```
### Extra Trees Classifier
```
etc=ExtraTreesClassifier()
clf_etc = etc.fit(X_train, y_train)
confidence_etc=clf_etc.score(X_test, y_test)
print('Accuracy on Validation Data : ', confidence_etc.round(3)*100,'%')
test_prediction = clf_etc.predict(X_test)
print('ROC_AUC_SCORE : ', roc_auc_score(y_test, test_prediction).round(3)*100,'%')
cnf_matrix = confusion_matrix(y_test, test_prediction)
plot_confusion_matrix(cnf_matrix, classes=class_names, title='Confusion Matrix')
prediction_etc = clf_etc.predict(X_predict)
test['prediction_etc'] = prediction_etc
test.prediction_etc.value_counts()
test.prediction_etc.to_csv('adExtra_Trees_Prediction.csv')
```
# Boston Housing Prices Classification
```
import itertools
import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
from dataclasses import dataclass
from sklearn import datasets
from sklearn import svm
from sklearn import tree
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import zero_one_loss
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
import graphviz
%matplotlib inline
# Matplotlib has some built in style sheets
mpl.style.use('fivethirtyeight')
```
## Data Loading
Notice that I am loading the data in the same way that we did for our visualization module. Time to refactor? It might be good to abstract some of this away as functions, so that we aren't copying and pasting code between all of our notebooks.
```
boston = datasets.load_boston()  # Note: load_boston was removed in scikit-learn 1.2
# Sklearn uses a dictionary like object to hold its datasets
X = boston['data']
y = boston['target']
feature_names = list(boston.feature_names)
X_df = pd.DataFrame(X)
X_df.columns = boston.feature_names
X_df["PRICE"] = y
X_df.describe()
def create_classes(data):
    """Create our classes using thresholds

    This is used as an `apply` function for
    every row in `data`.

    Args:
        data: a single row of the dataframe (a pandas Series)
    """
if data["PRICE"] < 16.:
return 0
elif data["PRICE"] >= 16. and data["PRICE"] < 22.:
return 1
else:
return 2
y = X_df.apply(create_classes, axis=1)
# Get stats for plotting
classes, counts = np.unique(y, return_counts=True)
plt.figure(figsize=(20, 10))
plt.bar(classes, counts)
plt.xlabel("Label")
plt.ylabel(r"Number of Samples")
plt.suptitle("Distribution of Classes")
plt.show()
```
## Support Vector Machine
```
def make_meshgrid(x, y, h=.02):
"""Create a mesh of points to plot in
Args:
x: data to base x-axis meshgrid on
y: data to base y-axis meshgrid on
h: stepsize for meshgrid, optional
Returns:
xx, yy : ndarray
"""
x_min, x_max = x.min() - 1, x.max() + 1
y_min, y_max = y.min() - 1, y.max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
return xx, yy
def plot_contours(ax, clf, xx, yy, **params):
"""Plot the decision boundaries for a classifier.
Args:
ax: matplotlib axes object
clf: a classifier
xx: meshgrid ndarray
yy: meshgrid ndarray
params: dictionary of params to pass to contourf, optional
"""
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
out = ax.contourf(xx, yy, Z, **params)
return out
# Careful, `loc` uses inclusive bounds!
X_smol = X_df.loc[:99, ['LSTAT', 'PRICE']].values
y_smol = y[:100]
C = 1.0 # SVM regularization parameter
models = [
svm.SVC(kernel='linear', C=C),
svm.LinearSVC(C=C, max_iter=10000),
svm.SVC(kernel='rbf', gamma=0.7, C=C),
svm.SVC(kernel='poly', degree=3, gamma='auto', C=C)
]
models = [clf.fit(X_smol, y_smol) for clf in models]
# title for the plots
titles = [
'SVC with linear kernel',
'LinearSVC (linear kernel)',
'SVC with RBF kernel',
'SVC with polynomial (degree 3) kernel'
]
# Set-up 2x2 grid for plotting.
fig, axs = plt.subplots(ncols=2, nrows=2, figsize=(15, 15))
plt.subplots_adjust(wspace=0.4, hspace=0.4)
X0, X1 = X_smol[:, 0], X_smol[:, 1]
xx, yy = make_meshgrid(X0, X1)
for clf, title, ax in zip(models, titles, axs.flatten()):
plot_contours(
ax, clf, xx, yy, cmap=plt.cm.coolwarm, alpha=0.8
)
ax.scatter(
X0, X1, c=y_smol, cmap=plt.cm.coolwarm, s=20, edgecolors='k'
)
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max())
ax.set_xlabel('LSTAT')
ax.set_ylabel('PRICE')
ax.set_title(title)
plt.show()
```
## Modeling with Trees and Ensembles of Trees
```
@dataclass
class Hparams:
"""Hyperparameters for our models"""
max_depth: int = 2
min_samples_leaf: int = 1
n_estimators: int = 400
learning_rate: float = 1.0
hparams = Hparams()
# Keeping price in there is cheating
#X_df = X_df.drop("PRICE", axis=1)
x_train, x_test, y_train, y_test = train_test_split(
X_df, y, test_size=0.33, random_state=42
)
dt_stump = DecisionTreeClassifier(
max_depth=hparams.max_depth,
min_samples_leaf=hparams.min_samples_leaf
)
dt_stump.fit(x_train, y_train)
dt_stump_err = 1.0 - dt_stump.score(x_test, y_test)
class_names = ['0', '1', '2']
dot_data = tree.export_graphviz(dt_stump, out_file=None,
feature_names=boston.feature_names,
class_names=class_names,
filled=True, rounded=True,
special_characters=True)
graph = graphviz.Source(dot_data)
graph
# Adding greater depth to the tree
dt = DecisionTreeClassifier(
max_depth=9, # No longer using Hparams here!
min_samples_leaf=hparams.min_samples_leaf
)
dt.fit(x_train, y_train)
dt_err = 1.0 - dt.score(x_test, y_test)
```
### A Deeper Tree
```
class_names = ['0', '1', '2']
dot_data = tree.export_graphviz(dt, out_file=None,
feature_names=boston.feature_names,
class_names=class_names,
filled=True, rounded=True,
special_characters=True)
graph = graphviz.Source(dot_data)
graph#.render("decision_tree_boston")
```
## AdaBoost
An AdaBoost classifier is a meta-estimator that begins by fitting a classifier on the original dataset and then fits additional copies of the classifier on the same dataset but where the weights of incorrectly classified instances are adjusted such that subsequent classifiers focus more on difficult cases.
https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.AdaBoostClassifier.html
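The reweighting at the heart of AdaBoost can be sketched in a few lines for the binary case (SAMME generalizes the `alpha` formula to more classes): misclassified examples are up-weighted, then the weights are renormalized so the next weak learner focuses on the difficult cases.

```python
import math

def adaboost_reweight(weights, correct):
    """One boosting round for binary AdaBoost: compute the weak learner's
    weighted error, derive its vote weight alpha, up-weight the examples
    it got wrong, and renormalize."""
    error = sum(w for w, c in zip(weights, correct) if not c)
    alpha = 0.5 * math.log((1 - error) / error)
    new = [w * math.exp(-alpha if c else alpha) for w, c in zip(weights, correct)]
    total = sum(new)
    return [w / total for w in new], alpha

# Four equally weighted examples; the stump misclassifies the last one.
weights = [0.25, 0.25, 0.25, 0.25]
correct = [True, True, True, False]
new_w, alpha = adaboost_reweight(weights, correct)
print([round(w, 3) for w in new_w])  # → [0.167, 0.167, 0.167, 0.5]
```

After one round, the single misclassified example carries half the total weight, which is exactly what pushes the next stump to handle it.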
```
ada_discrete = AdaBoostClassifier(
base_estimator=dt_stump,
learning_rate=hparams.learning_rate,
n_estimators=hparams.n_estimators,
algorithm="SAMME"
)
ada_discrete.fit(x_train, y_train)
# Notice the `algorithm` is different here.
# This is just one parameter change, but it
# makes a world of difference! Read the docs!
ada_real = AdaBoostClassifier(
base_estimator=dt_stump,
learning_rate=hparams.learning_rate,
n_estimators=hparams.n_estimators,
algorithm="SAMME.R" # <- take note!
)
ada_real.fit(x_train, y_train)
def misclassification_rate_by_ensemble_size(model, n_estimators, data, labels):
"""Get the fraction of misclassifications per ensemble size
As we increase the number of trees in the ensemble,
we often find that the performance of our model changes.
This shows us how our misclassification rate changes as
we increase the number of members in our ensemble up to
`n_estimators`
Args:
model: ensemble model that has a `staged_predict` method
n_estimators: number of models in the ensemble
data: data to be predicted over
labels: labels for the dataset
Returns:
misclassification_rate: numpy array of shape (n_estimators,)
This is the fraction of misclassifications for the `i_{th}`
number of estimators
"""
misclassification_rate = np.zeros((n_estimators,))
for i, y_pred in enumerate(model.staged_predict(data)):
# zero_one_loss returns the fraction of misclassifications
misclassification_rate[i] = zero_one_loss(y_pred, labels)
return misclassification_rate
# Get the misclassification rates for each algo on each data split
ada_discrete_err_train = misclassification_rate_by_ensemble_size(
ada_discrete, hparams.n_estimators, x_train, y_train
)
ada_discrete_err_test = misclassification_rate_by_ensemble_size(
ada_discrete, hparams.n_estimators, x_test, y_test
)
ada_real_err_train = misclassification_rate_by_ensemble_size(
ada_real, hparams.n_estimators, x_train, y_train
)
ada_real_err_test = misclassification_rate_by_ensemble_size(
ada_real, hparams.n_estimators, x_test, y_test
)
fig = plt.figure(figsize=(20,10))
ax = fig.add_subplot(111)
ax.plot([1, hparams.n_estimators], [dt_stump_err] * 2, 'k-',
label='Decision Stump Error')
ax.plot([1, hparams.n_estimators], [dt_err] * 2, 'k--',
label='Decision Tree Error')
ax.plot(np.arange(hparams.n_estimators) + 1, ada_discrete_err_test,
label='Discrete AdaBoost Test Error',
color='red')
ax.plot(np.arange(hparams.n_estimators) + 1, ada_discrete_err_train,
label='Discrete AdaBoost Train Error',
color='blue')
ax.plot(np.arange(hparams.n_estimators) + 1, ada_real_err_test,
label='Real AdaBoost Test Error',
color='orange')
ax.plot(np.arange(hparams.n_estimators) + 1, ada_real_err_train,
label='Real AdaBoost Train Error',
color='green')
ax.set_ylim((0.0, 0.5))
ax.set_xlabel('n_estimators')
ax.set_ylabel('error rate')
leg = ax.legend(loc='upper right', fancybox=True)
leg.get_frame().set_alpha(0.7)
```
## Classification Performance
How well are our classifiers doing?
```
def plot_confusion_matrix(confusion, classes, normalize=False, cmap=plt.cm.Reds):
"""Plot a confusion matrix
"""
mpl.style.use('seaborn-ticks')
fig = plt.figure(figsize=(20,10))
plt.imshow(confusion, interpolation='nearest', cmap=cmap)
plt.title("Confusion Matrix")
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes)
plt.yticks(tick_marks, classes)
fmt = '.2f' if normalize else 'd'
thresh = confusion.max() / 2.
for i, j in itertools.product(range(confusion.shape[0]), range(confusion.shape[1])):
plt.text(
j, i, format(confusion[i, j], fmt),
horizontalalignment="center",
color="white" if confusion[i, j] > thresh else "black"
)
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
ada_discrete_preds_test = ada_discrete.predict(x_test)
ada_real_preds_test = ada_real.predict(x_test)
```
### Accuracy
```
ada_discrete_acc = accuracy_score(y_test, ada_discrete_preds_test)
ada_real_acc = accuracy_score(y_test, ada_real_preds_test)
print(f"Adaboost discrete accuracy: {ada_discrete_acc:.3f}")
print(f"Adaboost real accuracy: {ada_real_acc:.3f}")
```
### Confusion Matrix
Accuracy, however, is only an overall summary. To see where our models predict correctly and where they go wrong, we can use a confusion matrix.
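What `confusion_matrix` computes can be reproduced by hand: entry `[i, j]` counts the samples whose true class is `i` and whose predicted class is `j`. A minimal sketch (the helper name and toy labels are illustrative):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def confusion_by_hand(y_true, y_pred, n_classes):
    """confusion[i, j] = count of samples with true class i predicted as j."""
    confusion = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        confusion[t, p] += 1
    return confusion

y_true = np.array([0, 0, 1, 1, 2, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0, 2])
assert np.array_equal(confusion_by_hand(y_true, y_pred, 3),
                      confusion_matrix(y_true, y_pred))
```

The diagonal holds correct predictions; off-diagonal cells show which classes get confused with which.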
```
ada_discrete_confusion = confusion_matrix(y_test, ada_discrete_preds_test)
ada_real_confusion = confusion_matrix(y_test, ada_real_preds_test)
plot_confusion_matrix(ada_discrete_confusion, classes)
plot_confusion_matrix(ada_real_confusion, classes)
```
# ALL/AML Classifier
## Research Description
### Summary of the relative research
- **Index of the papers discussed**
| Paper ID | Title | Published Year |
|----------|-------|----------------|
|1 |_Molecular classification of cancer: class discovery and class prediction by gene expression monitoring_|1999/10/15|
|2|Class Prediction and Discovery Using Gene Expression Data|2000|
|3|Tissue Classification with Gene Expression Profiles|2000/08/01|
|4 |_Support vector machine classification and validation of cancer tissue samples using microarray expression data_|2000/10/01|
|5|Identifying marker genes in transcription profiling data using a mixture of feature relevance experts|2001/03/08|
|6 |_Classification of Acute Leukemia Based on DNA Microarray Gene Expressions Using Partial Least Squares_|2002|
|7|Gene Selection for Cancer Classification using Support Vector Machines|2002/01/01|
|8 |_Tumor classification by partial least squares using microarray gene expression data_|2002/01/01|
|9 |_Comparison of Discrimination Methods for the Classification of Tumors Using Gene Expression Data_|2002/03/01|
|10|_Ensemble machine learning on gene expression data for cancer classification_|2003|
|11|_Effective dimension reduction methods for tumor classification using gene expression data_|2003/03/22|
|12|_PCA disjoint models for multiclass cancer analysis using gene expression data_|2003/03/22|
|13|_Spectral Biclustering of Microarray Data: Coclustering Genes and Conditions_|2003/04/01|
|14|_Boosting for tumor classification with gene expression data_|2003/06/12|
|15|_Classification of multiple cancer types by multicategory support vector machines using gene expression data_|2003/06/12|
|16|Optimization models for cancer classification: extracting gene interaction information from microarray expression data|2004/03/22|
|17|_Classification of gene microarrays by penalized logistic regression_|2004/07|
|18|_A comparative study of feature selection and multiclass classification methods for tissue classification based on gene expression_|2004/10/12|
|19|A comprehensive evaluation of multicategory classification methods for microarray gene expression cancer diagnosis|2005/03/01|
|20|An extensive comparison of recent classification tools applied to microarray data|2005/04/01|
|21|_Simple decision rules for classifying human cancers from gene expression profiles_|2005/10/15|
|22|Gene selection and classification of microarray data using random forest|2006|
|23|Gene Selection Using Rough Set Theory|2006/07/24|
|24|_Independent component analysis-based penalized discriminant method for tumor classification using gene expression data_|2006/08/01|
|25|Gene selection for classification of microarray data based on the Bayes error|2007|
|26|_Logistic regression for disease classification using microarray data: model selection in a large p and small n case_|2007/08/01|
|27|A sequential feature extraction approach for naïve bayes classification of microarray data|2009/08|
|28|_Optimization Based Tumor Classification from Microarray Gene Expression Data_|2011/02/04|
|29|_Acute Leukemia Classification using Bayesian Networks_|2012/10|
|30|A novel approach to select significant genes of leukemia cancer data using K-Means clustering|2013/02|
- **Detailed List**
| Paper ID | Dataset Described | Classifier | Results | Note|
|-----------------------------------------------|--------------|------------|---------|-----|
|1|$72\times 6817$ $(47 ALL, 25 AML)$ <sup>1</sup>|Golub Classifier: informative genes + weighted vote|(50 genes) <br> Train: 36 correct, 2 uncertain <br> Test: 29 correct, 5 uncertain||
|2|$72\times 6817$ $(47 ALL,25 AML)$|Golub Classifier: informative genes + weighted vote|(50 genes) <br> Train: 36 correct, 2 uncertain <br> Test: 29 correct, 5 uncertain| Detailed explanation of 1|
|3|$72\times 7129$ $(47 ALL,25 AML)$<sup>2</sup>|Nearest Neighbor <br> SVM (linear kernel, quadratic kernel) <br> Boosting (100, 1000, 10000 iterations)|Accuracy: $\geq 90\%$, ROC curves, Prediction Error||
|4|$72\times 7129$ $(47 ALL,25 AML)$|SVM(top 25, 250, 500, 1000 features)|# of correct classification is reported(too long to list here)||
|5|$72 \times 7070$ $(47 ALL,25 AML)$|MVR(median vote relevance),NBGR(naive bayes global relevance), MAR(Golub paper relevance)+ SVM|# of correct classification is reported|Mainly focus on the criterion of feature selection|
|6|$72\times 6817$ $(47 ALL,25 AML)$|Dimension Reduction: PCA, PLS(Partial Least Square) <br> Classification: logistic and quadratic discrimination|Average accuracy rate reported|(50 same genes as in Golub Paper, however, re-randomization to the train and test samples introduced)|
|7|$72\times 7129$ $(47 ALL,25 AML)$|SVM|multiple genes are selected; error rate/success rate, rejection rate/acceptance rate, external margin, median margin reported||
|8|$72\times 6817$ $(47 ALL,25 AML)$|||Almost same as 6|
|9|$72\times 6817$ $(47 ALL, 25 AML)$->$72\times 3571$|Linear and quadratic discriminant analysis (4), Classification trees (4), Nearest neighbors|quartiles of the number of misclassified tumor samples reported for each classifier|40 genes used, test set size 24|
|10|$72\times 7129$ $(47 ALL, 25 AML)$|single C4.5(decision tree), bagged(C4.5), AdaBoost C4.5|Accuracy, Precision(Positive Predictive Accuracy), Sensitivity, Specificity reported/plotted||
|11|$72\times 7129$ $(47 ALL, 25 AML)$|MAVE-LD, DLDA, DQDA, MAVE-NPLD|# of correct classification and error rate reported||
|12|$72\times 7129$ $(47 ALL, 25 AML)$|Disjoint PCA, SIMCA classification, classifier feedback feature selection|correct classify and misclassified reported||
|13|$72\times 7129$ $(47 ALL, 25 AML)$|Spectral biclustering methods| correctly partitions the patient, with only 1 ambiguous case||
|14|$72\times 7129$ $(47 ALL, 25 AML)$->$72\times 3571$|LogitBoost, AdaBoost, Nearest Neighbor, Classification Tree| Error rate reported||
|15|$72\times 7129$ $(47 ALL, 25 AML)$|2 types of preprocessing+2 kernel function+ 2 tuning methods|Test errors reported(#)||
|16|$72\times 7129$ $(47 ALL, 25 AML)$|MAMA|# of misclassifications and prediction rate reported||
|17|$72\times 7129$ $(47 ALL, 25 AML)$|Feature selection: UR, REF <br> Classifier: Penalized Logistic Regression| Error rate reported, also estimation of the prob. dist.||
|18|$72\times 7129$ $(47 ALL, 25 AML)$|SVM, KNN, Naive Bayes, J4.8 DT|Classification accuracy plotted|In this paper, they do a 3 class and a 4 class classification|
|19|$72\times 5327$ $(47 ALL, 25 AML)$|MC-SVM, Neural Network, KNN|Accuracy, relative classifier information reported|Also compared the result w/o gene selection|
|20|$72\times 3571$ $(47 ALL, 25 AML)$|Gene selection: BSS/WSS, Soft-thresholding, Rank-based <br> Classifier: FLDA, DLDA, DQDA, KNN, logistic, GPLS, etc.|Mean error rate reported|This one compared many classifiers.|
|21|$72\times 7129$ $(47 ALL, 25 AML)$|TSP(Top scoring pairs), KNN, PAM, DT, NB, SVM|LOOCV accuracy, test accuracy reported||
|22|$38\times 3051$ $(27 ALL, 11 AML)$|SVM, KNN, DLDA, SC, NN, RF|Error rate|Also discussed gene selection for RF|
|23|$72\times 7129$ $(47 ALL, 25 AML)$|SVM, NB|Accuracy plotted||
|24|$72\times 7129$ $(47 ALL, 25 AML)$|SVM, PCA+FDA, P-RR,P-PCR,P-ICR, PAM|Accuracy reported||
|25|$38\times 3051$ $(27 ALL, 11 AML)$|KNN, SVM|error rate reported|Mainly about BBF gene selection instead of classification|
|26|$72\times 3051$ $(47 ALL, 25 AML)$<sup>3</sup>|penalized logistic regression|prediction error reported|mainly discussed parametric bootstrap model to get a more accurate prediction error|
|27|$72\times 7129$ $(47 ALL, 25 AML)$|NB, FS+NB, FS+ICA+NB, FS+CCICA+NB|Boxplot of Accuracy rate reported|stepwise feature selection|
|28|$72\times 7129$ $(47 ALL, 25 AML)$|HBE, BayesNet, LibSVM, SMO, Logistic, RBF network, IBk, J48, Random Forest|Accuracy rate reported||
|29|$72\times 7129$ $(47 ALL, 25 AML)$|Bayes Network|Classification rate reported||
|30|$34\times 7129$ $(20 ALL, 14 AML)$|K-means clustering|Accuracy, Specificity, Sensitivity reported|Although it does not explicitly say the data is from Golub, the dimensions indicate that||
**Footnotes**
1. $72\times 6817 (47 ALL, 25 AML)$: Train:$38\times 6817(27 ALL, 11 AML)$ Test:$34\times6817(20 ALL, 14 AML)$
2. $72\times 7129 (47 ALL, 25 AML)$: Train:$38\times 7129(27 ALL, 11 AML)$ Test:$34\times 7129(20 ALL, 14 AML)$
3. $72\times 3051$ $(47 ALL, 25 AML)$: Train: $38\times 3051$ (27 ALL, 11 AML; used the available GeneLogit library) Test: $34\times 3051$
### Summary of the Leukemia Dataset
- Acute lymphocytic leukemia (ALL), also called acute lymphoblastic leukemia, is a cancer that starts from an early version of white blood cells called lymphocytes in the bone marrow. The term "acute" means that the leukemia can progress quickly and, if not treated in time, would probably be fatal within a few months. "Lymphocytic" means it develops from early (immature) forms of lymphocytes, a type of white blood cell. It is different from acute myeloid leukemia (AML), which develops in other blood cell types found in the bone marrow. Using machine learning methods, we can classify the two types of leukemia quickly and with high accuracy, and a lot of work has been done around this topic.
- A generic approach to cancer classification based on gene expression monitoring by DNA microarrays was described and applied to human acute leukemias by Golub et al. (1999) [1]. They proposed a class discovery procedure that automatically distinguishes between AML and ALL without previous knowledge of these classes. That paper is also the origin of the famous Golub gene expression dataset. Since then, a great deal of work has used this dataset to validate feature selection procedures, classifiers, etc., as summarized in the tables above.
- There are two datasets in the paper: training data and test data. The Golub gene expression dataset contains both of them, as well as a merged version of the two. The training data consists of 38 bone marrow samples (27 ALL, 11 AML) obtained from acute leukemia patients at the time of diagnosis. There are 7129 probes in the experiment for 6817 genes, i.e., 7129 gene expression values for 6817 genes in the dataset. The test data is an independent collection of 34 leukemia samples with 24 bone marrow and 10 peripheral blood samples; 20 of them are ALL samples and the rest are AML samples. More details about the dataset can be found in paper 1 or in this linked description: [golubEsets](https://www.bioconductor.org/packages/devel/data/experiment/manuals/golubEsets/man/golubEsets.pdf).
- Since the range of the gene expression values in the dataset is large and there are many negative values, several transformations are usually applied before building the classifier. In paper 2, the values were manually restricted to lie above some positive threshold, followed by a log transformation. Paper 9 proposed a transformation procedure that has been widely used by researchers since. It consists of three preprocessing steps: thresholding, filtering, and a base-10 logarithmic transformation, which reduce the whole training and test dataset to only 3571 predictors. ([dataset](https://cran.r-project.org/web/packages/spikeslab/spikeslab.pdf)) However, after preprocessing with this procedure we are left with 3051 predictors, and the resulting dataset is available in this [library/package](http://faculty.mssm.edu/gey01/multtest/multtest-manual.pdf).
- Since the dataset has more predictors than observations, research on it is not restricted to finding effective classifiers but also covers feature selection criteria. In the original paper, a 50-gene classifier selected by correlation was used. Many other criteria and classifiers were studied in the later papers, and we will try to reproduce them in our study.
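As a rough illustration of the three preprocessing steps attributed to paper 9 (thresholding, filtering, base-10 log), here is a NumPy sketch. The cutoff values used (floor 100, ceiling 16000, fold change 5, absolute change 500) are the commonly cited choices for this procedure and should be verified against paper 9; the toy expression matrix below is random, not the Golub data.

```python
import numpy as np

def golub_preprocess(expr, floor=100, ceiling=16000,
                     fold_change=5, abs_change=500):
    """Thresholding, filtering, and base-10 log transform.

    `expr` is a (samples x genes) expression matrix; the cutoffs follow
    the commonly cited choices for this procedure (check paper 9).
    """
    # 1. thresholding: clip expression values into [floor, ceiling]
    expr = np.clip(expr, floor, ceiling)
    # 2. filtering: keep genes with max/min > fold_change
    #    and max - min > abs_change across samples
    gmax, gmin = expr.max(axis=0), expr.min(axis=0)
    keep = (gmax / gmin > fold_change) & (gmax - gmin > abs_change)
    # 3. base-10 logarithmic transformation of the retained genes
    return np.log10(expr[:, keep]), keep

rng = np.random.default_rng(1)
expr = rng.integers(-500, 20000, size=(38, 200)).astype(float)
transformed, keep = golub_preprocess(expr)
```

Applied to the real training and test matrices, this is the kind of pipeline that shrinks 7129 probes down to a few thousand predictors.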
# Increase Transparency and Accountability in Your Machine Learning Project with Python and H2O
#### Explain your complex models with decision tree surrogates, GBM feature importance, and reason codes
Decision trees and decision tree ensembles are some of the most popular machine learning models used in commercial practice. They can train and make predictions on data containing character values and missing values - both common in large commercial data stores. Single decision trees are easily represented as directed graphs, which can drastically increase their interpretability and transparency. Decision tree ensembles (i.e., random forests and gradient boosting machines (GBMs)) can be used to increase the accuracy and stability of single decision tree models, but are far less interpretable than single trees. These characteristics of decision trees will be leveraged here to increase transparency and accountability in complex, nonlinear, machine learning models.
This notebook starts by training a GBM on the UCI credit card default data using the popular open source library, h2o. A single decision tree *surrogate* model will then be trained on the original UCI credit card default data and the predictions from the h2o GBM, to create an approximate flow chart for the GBM's global decision-making processes. A technique known as leave-one-covariate-out (LOCO) will then be used to generate local explanations for any row-wise prediction made by the GBM model. Finally, local explanations are ensembled together from multiple similar models to increase explanation stability.
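The LOCO idea can be sketched independently of h2o: for a single row, perturb one input at a time, re-score the model, and report the change in the prediction. The sketch below substitutes the training median for the "left-out" covariate (one of several possible variants of the technique) and uses a scikit-learn GBM on hypothetical toy data, not the h2o model trained later in this notebook.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# hypothetical toy data standing in for the credit card frame
rng = np.random.default_rng(2)
X = rng.normal(size=(500, 4))
y = (X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)
gbm = GradientBoostingClassifier(random_state=0).fit(X, y)

def loco_row(model, X_train, row):
    """Per-feature contribution for one row: the change in predicted
    probability when that feature is replaced by its training median
    (a simple stand-in for 'leaving the covariate out')."""
    base = model.predict_proba(row.reshape(1, -1))[0, 1]
    contribs = np.empty(X_train.shape[1])
    for j in range(X_train.shape[1]):
        perturbed = row.copy()
        perturbed[j] = np.median(X_train[:, j])
        contribs[j] = base - model.predict_proba(perturbed.reshape(1, -1))[0, 1]
    return contribs

contribs = loco_row(gbm, X, X[0])
```

Large positive or negative entries point to the inputs driving that single row's prediction, which is exactly the kind of row-wise reason code pursued below.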
#### Python imports
In general, NumPy and Pandas will be used for data manipulation purposes and h2o will be used for modeling tasks.
```
# imports
# h2o Python API with specific classes
import h2o
from h2o.estimators.gbm import H2OGradientBoostingEstimator # for GBM
from h2o.estimators.random_forest import H2ORandomForestEstimator # for single tree
from h2o.backend import H2OLocalServer # for plotting local tree in-notebook
import numpy as np # array, vector, matrix calculations
import pandas as pd # DataFrame handling
# system packages for calling external graphviz processes
import os
import re
import subprocess
# in-notebook display
from IPython.display import Image
from IPython.display import display
%matplotlib inline
```
#### Start h2o
H2O is both a library and a server. The machine learning algorithms in the library take advantage of the multithreaded and distributed architecture provided by the server to train models extremely efficiently. The API for the library was imported above in cell 1, but the server still needs to be started.
```
h2o.init(max_mem_size='2G') # start h2o
h2o.remove_all() # remove any existing data structures from h2o memory
```
## 1. Download, explore, and prepare UCI credit card default data
UCI credit card default data: https://archive.ics.uci.edu/ml/datasets/default+of+credit+card+clients
The UCI credit card default data contains demographic and payment information about credit card customers in Taiwan in the year 2005. The data set contains 23 input variables:
* **`LIMIT_BAL`**: Amount of given credit (NT dollar)
* **`SEX`**: 1 = male; 2 = female
* **`EDUCATION`**: 1 = graduate school; 2 = university; 3 = high school; 4 = others
* **`MARRIAGE`**: 1 = married; 2 = single; 3 = others
* **`AGE`**: Age in years
* **`PAY_0`, `PAY_2` - `PAY_6`**: History of past payment; `PAY_0` = the repayment status in September, 2005; `PAY_2` = the repayment status in August, 2005; ...; `PAY_6` = the repayment status in April, 2005. The measurement scale for the repayment status is: -1 = pay duly; 1 = payment delay for one month; 2 = payment delay for two months; ...; 8 = payment delay for eight months; 9 = payment delay for nine months and above.
* **`BILL_AMT1` - `BILL_AMT6`**: Amount of bill statement (NT dollar). `BILL_AMNT1` = amount of bill statement in September, 2005; `BILL_AMT2` = amount of bill statement in August, 2005; ...; `BILL_AMT6` = amount of bill statement in April, 2005.
* **`PAY_AMT1` - `PAY_AMT6`**: Amount of previous payment (NT dollar). `PAY_AMT1` = amount paid in September, 2005; `PAY_AMT2` = amount paid in August, 2005; ...; `PAY_AMT6` = amount paid in April, 2005.
These 23 input variables are used to predict the target variable, whether or not a customer defaulted on their credit card bill in late 2005.
Because h2o accepts both numeric and character inputs, some variables will be recoded into more transparent character values.
#### Import data and clean
The credit card default data is available as an `.xls` file. Pandas reads `.xls` files automatically, so it's used to load the credit card default data and give the prediction target a shorter name: `DEFAULT_NEXT_MONTH`.
```
# import XLS file
path = '../data/default_of_credit_card_clients.xls'
data = pd.read_excel(path)
# remove spaces from target column name
data = data.rename(columns={'default payment next month': 'DEFAULT_NEXT_MONTH'})
```
#### Assign modeling roles
The shorthand name `y` is assigned to the prediction target. `X` is assigned to all other input variables in the credit card default data except the row identifier, `ID`.
```
# assign target and inputs for GBM
y = 'DEFAULT_NEXT_MONTH'
X = [name for name in data.columns if name not in [y, 'ID']]
print('y =', y)
print('X =', X)
```
#### Helper function for recoding values in the UCI credit card default data
This simple function maps longer, more understandable character string values from the UCI credit card default data dictionary to the original integer values of the input variables found in the dataset. These character values can be used directly in h2o decision tree models, and the function returns the original Pandas DataFrame as an h2o object, an H2OFrame. H2o models cannot run on Pandas DataFrames. They require H2OFrames.
```
def recode_cc_data(frame):
""" Recodes numeric categorical variables into categorical character variables
with more transparent values.
Args:
frame: Pandas DataFrame version of UCI credit card default data.
Returns:
H2OFrame with recoded values.
"""
# define recoded values
sex_dict = {1:'male', 2:'female'}
education_dict = {0:'other', 1:'graduate school', 2:'university', 3:'high school',
4:'other', 5:'other', 6:'other'}
marriage_dict = {0:'other', 1:'married', 2:'single', 3:'divorced'}
pay_dict = {-2:'no consumption', -1:'pay duly', 0:'use of revolving credit', 1:'1 month delay',
2:'2 month delay', 3:'3 month delay', 4:'4 month delay', 5:'5 month delay', 6:'6 month delay',
7:'7 month delay', 8:'8 month delay', 9:'9+ month delay'}
# recode values using Pandas apply() and anonymous function
frame['SEX'] = frame['SEX'].apply(lambda i: sex_dict[i])
frame['EDUCATION'] = frame['EDUCATION'].apply(lambda i: education_dict[i])
frame['MARRIAGE'] = frame['MARRIAGE'].apply(lambda i: marriage_dict[i])
for name in frame.columns:
if name in ['PAY_0', 'PAY_2', 'PAY_3', 'PAY_4', 'PAY_5', 'PAY_6']:
frame[name] = frame[name].apply(lambda i: pay_dict[i])
return h2o.H2OFrame(frame)
data = recode_cc_data(data)
```
#### Ensure target is handled as a categorical variable
In h2o, a numeric variable can be treated as numeric or categorical. The target variable `DEFAULT_NEXT_MONTH` takes on values of `0` or `1`. To ensure this numeric variable is treated as a categorical variable, the `asfactor()` function is used to explicitly declare that it is a categorical variable.
```
data[y] = data[y].asfactor()
```
#### Display descriptive statistics
The h2o `describe()` function displays a brief description of the credit card default data. For the categorical input variables `SEX`, `EDUCATION`, `MARRIAGE`, and `PAY_0`-`PAY_6`, the new character values created above in cell 5 are visible. Basic descriptive statistics are displayed for numeric inputs. Also, it's easy to see there are no missing values in this dataset, which will be an important consideration for calculating LOCO values in sections 5 and 6.
```
data[X + [y]].describe()
```
## 2. Train an H2O GBM classifier
#### Split data into training and test sets for early stopping
The credit card default data is split into training and test sets to monitor and prevent overtraining. Reproducibility is also an important factor in creating trustworthy models, and randomly splitting datasets can introduce randomness in model predictions and other results. A random seed is used here to ensure the data split is reproducible.
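A seeded split like `split_frame([0.7], seed=12345)` can be mimicked in plain NumPy. Note that, like h2o's `split_frame`, this assigns rows probabilistically, so the ratio is approximate rather than exact; the helper below is an illustration, not h2o's implementation.

```python
import numpy as np

def split_frame(n_rows, ratio=0.7, seed=12345):
    """Seeded, probabilistic 70/30 row split (approximate ratio)."""
    rng = np.random.default_rng(seed)
    mask = rng.random(n_rows) < ratio
    return np.where(mask)[0], np.where(~mask)[0]

train_a, test_a = split_frame(30000)
train_b, test_b = split_frame(30000)
assert np.array_equal(train_a, train_b)  # same seed -> identical split
```

Fixing the seed is what makes the downstream error rates and explanations reproducible run to run.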
```
# split into training and validation
train, test = data.split_frame([0.7], seed=12345)
# summarize split
print('Train data rows = %d, columns = %d' % (train.shape[0], train.shape[1]))
print('Test data rows = %d, columns = %d' % (test.shape[0], test.shape[1]))
```
#### Train h2o GBM classifier
Many tuning parameters must be specified to train a GBM using h2o. Typically a grid search would be performed to identify the best parameters for a given modeling task using the `H2OGridSearch` class. For brevity's sake, a previously discovered set of good tuning parameters is specified here. Because gradient boosting methods typically resample training data, an additional random seed is also specified for the h2o GBM using the `seed` parameter to create reproducible predictions, error rates, and variable importance values. To avoid overfitting, the `stopping_rounds` parameter is used to stop the training process after the test error fails to decrease for 5 iterations.
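For reference, the shape of such a grid search, shown here with scikit-learn's `GridSearchCV` as a stand-in for `H2OGridSearch`, on synthetic data (the parameter grid is illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=400, random_state=12345)

# search a small grid of GBM tuning parameters, scored by AUC
grid = GridSearchCV(
    GradientBoostingClassifier(random_state=12345),
    param_grid={'max_depth': [3, 4], 'subsample': [0.8, 0.9]},
    scoring='roc_auc', cv=3)
grid.fit(X, y)
print(grid.best_params_)
```

The same pattern applies to `H2OGridSearch`: define a parameter grid, train one model per combination, and keep the settings with the best validation score.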
The `balance_classes` parameter ensures the positive and negative classes of the target variable are seen by the model in equal proportions during training. This can be very important for the LOCO calculations in sections 5 and 6 on unbalanced data. From experiments across several datasets, explanations generated by LOCO for rows with a majority class label for the target variable (e.g., 0) are more likely to match those generated by another popular explanatory technique, LIME, when the target class is rebalanced during training. `balance_classes` is commented out below because the row explained in this notebook has a minority class label (e.g., 1).
```
# initialize GBM model
model = H2OGradientBoostingEstimator(ntrees=150, # maximum 150 trees in GBM
max_depth=4, # trees can have maximum depth of 4
sample_rate=0.9, # use 90% of rows in each iteration (tree)
col_sample_rate=0.9, # use 90% of variables in each iteration (tree)
#balance_classes=True, # sample to balance 0/1 distribution of target - can help LOCO
stopping_rounds=5, # stop if validation error does not decrease for 5 iterations (trees)
score_tree_interval=1, # for reproducibility, set higher for bigger data
seed=12345) # for reproducibility
# train a GBM model
model.train(y=y, x=X, training_frame=train, validation_frame=test)
# print AUC
print('GBM Test AUC = %.2f' % model.auc(valid=True))
# uncomment to see model details
# print(model)
```
#### Display variable importance
During training, the h2o GBM aggregates the improvement in error caused by each split in each decision tree across all the decision trees in the ensemble classifier. These values are attributed to the input variable used in each split and give an indication of the contribution each input variable makes toward the model's predictions. The variable importance ranking should be consistent with human domain knowledge and reasonable expectations. In this case, a customer's most recent payment behavior, `PAY_0`, is by far the most important variable, followed by their second most recent payment, `PAY_2`, and third most recent payment, `PAY_3`, behavior. This result is well-aligned with business practices in credit lending: people who miss their most recent payments are likely to default soon.
```
model.varimp_plot()
```
## 3. Train a decision tree surrogate model to describe GBM
A surrogate model is a simple model that is used to explain a complex model. One of the original references for surrogate models is available here: https://papers.nips.cc/paper/1152-extracting-tree-structured-representations-of-trained-networks.pdf. In this example, a single decision tree will be trained on the original inputs and predictions of the h2o GBM model, and the tree will be visualized using special functionality in h2o and GraphViz. The variable importance, interactions, and decision paths displayed in the directed graph of the trained decision tree surrogate model are then assumed to be indicative of the internal mechanisms of the more complex GBM model, creating an approximate, overall flowchart for the GBM. There are few mathematical guarantees that the simple surrogate model is highly representative of the more complex GBM, but a recent preprint article has put forward ideas on strengthening the theoretical relationship between surrogate models and more complex models: https://arxiv.org/pdf/1705.08504.pdf. Since surrogate models alone do not guarantee accurate transparency, they will be used along with GBM variable importance and LOCO to build a cohesive narrative about the mechanisms within the GBM. **Because most currently-available explanatory techniques are approximate, it is recommended that users employ several different explanatory techniques and trust only consistent results across techniques.**
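The surrogate idea is library-agnostic. A minimal scikit-learn sketch, using synthetic data and a shallow regression tree fit to the complex model's predicted probabilities:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeRegressor

X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
complex_model = GradientBoostingClassifier(random_state=0).fit(X, y)

# surrogate: a shallow tree trained on the original inputs and the
# complex model's predicted probabilities, not the true labels
p_hat = complex_model.predict_proba(X)[:, 1]
surrogate = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, p_hat)

# R^2 of the surrogate against the GBM predictions: a rough fidelity check
print('surrogate fidelity R^2 = %.2f' % surrogate.score(X, p_hat))
```

A low fidelity score is a warning that the flowchart read off the surrogate tree may not reflect the complex model's actual decision process.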
#### Create dataset for surrogate model
To train a surrogate model, the predictions and original inputs of the complex model to be explained need to be in the same dataset. The test data is used here to see how the model behaves on holdout data, which should be closer to its behavior on new data than analyzing the surrogate model for the training inputs and predictions.
```
# cbind predictions to the test frame
# and give them a nice name
yhat = 'p_DEFAULT_NEXT_MONTH'
preds1 = test['ID'].cbind(model.predict(test).drop(['predict', 'p0']))
preds1.columns = ['ID', yhat]
test_yhat = test.cbind(preds1[yhat])
```
#### Train single h2o decision tree
A single decision tree is trained on the test inputs and predictions. To simulate a single decision tree in h2o, the `H2ORandomForestEstimator` class is used, but only one tree is trained instead of a forest of decision trees. Setting the `mtries` parameter to `-2` tells the `H2ORandomForestEstimator` to consider all variables in all splits of a tree, instead of considering a random subset of columns. It is also recommended to set a random seed for reproducibility and to set `max_depth` to a lower number, say less than 6, so that the surrogate model will not become overly complex and hard to explain and understand. Once the tree is trained, a model optimized java object (MOJO) representation of the tree is saved. H2O provides a way to visualize the trained tree in detail using the MOJO and Graphviz.
```
model_id = 'dt_surrogate_mojo' # gives MOJO artifact a recognizable name
# initialize single tree surrogate model
surrogate = H2ORandomForestEstimator(ntrees=1, # use only one tree
sample_rate=1, # use all rows in that tree
mtries=-2, # use all columns in that tree
max_depth=3, # shallow trees are easier to understand
seed=12345, # random seed for reproducibility
model_id=model_id) # gives MOJO artifact a recognizable name
# train single tree surrogate model
surrogate.train(x=X, y=yhat, training_frame=test_yhat)
# persist MOJO (compiled, representation of trained model)
# from which to generate plot of surrogate
mojo_path = surrogate.download_mojo(path='.')
print('Generated MOJO path:\n', mojo_path)
```
#### Create GraphViz dot file
GraphViz is an open source graph visualization tool. It is freely available from this url: http://www.graphviz.org/. To plot the trained decision tree surrogate model, a special h2o class, `PrintMojo`, is executed against the MOJO to create a GraphViz dot file representation of the tree.
```
# title for plot
title = 'Credit Card Default Decision Tree Surrogate'
# locate h2o jar
hs = H2OLocalServer()
h2o_jar_path = hs._find_jar()
print('Discovered H2O jar path:\n', h2o_jar_path)
# construct command line call to generate graphviz version of
# the surrogate tree; for more information see:
# http://docs.h2o.ai/h2o/latest-stable/h2o-genmodel/javadoc/index.html
gv_file_name = model_id + '.gv'
gv_args = str('-cp ' + h2o_jar_path +
' hex.genmodel.tools.PrintMojo --tree 0 -i '
+ mojo_path + ' -o').split()
gv_args.insert(0, 'java')
gv_args.append(gv_file_name)
if title is not None:
gv_args = gv_args + ['--title', title]
# call
print()
print('Calling external process ...')
print(' '.join(gv_args))
_ = subprocess.call(gv_args)
```
#### Create PNG from GraphViz dot file and display
Then a GraphViz command line tool is used to create a static PNG image from the dot file ...
```
# construct call to generate PNG from
# graphviz representation of the tree
png_file_name = model_id + '.png'
png_args = str('dot -Tpng ' + gv_file_name + ' -o ' + png_file_name)
png_args = png_args.split()
# call
print('Calling external process ...')
print(' '.join(png_args))
_ = subprocess.call(png_args)
```
#### Display surrogate decision tree in notebook
... and the image is displayed in the notebook.
```
# display in-notebook
display(Image(png_file_name))
```
## 4. Analyze surrogate model and compare to global GBM variable importance
The displayed tree is comparable with the global GBM variable importance. A simple heuristic rule for variable importance in a decision tree relates to the depth and frequency at which a variable is split on: variables used higher in the tree and more frequently are more important. Most of the variables pictured in this tree also appear as highly important in the GBM variable importance plot. In both cases, `PAY_0` appears as crucially important, with other payment behavior variables following close behind. The surrogate decision tree enables users to understand and confirm not only which input variables are important, but also how their values contribute to model decisions. For instance, to fall into the lowest-probability-of-default leaf node in the surrogate decision tree, a customer must make their first and second payments in a timely fashion and then pay more than 1515.5 New Taiwanese Dollars for their fifth payment. Conversely, customers who miss their first, fifth, and third payments fall into the highest-probability-of-default leaf node of the surrogate decision tree. It is also imperative to compare these results to domain knowledge and reasonable expectations. In this case, the global explanatory methods applied thus far tell a consistent and reasonable story about the GBM's behavior. If this were not so, steps should be taken to either reconcile or remove inconsistencies and unreasonable prediction behavior.
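The depth-and-frequency heuristic can be sketched in a few lines of plain Python. The split list and the `2**(max_depth - depth)` weighting below are illustrative assumptions, not output from the trained surrogate:

```python
# Hypothetical splits harvested from a shallow surrogate tree, as
# (variable, depth) pairs; depth 0 is the root split.
splits = [('PAY_0', 0), ('PAY_2', 1), ('PAY_5', 1),
          ('PAY_0', 2), ('PAY_3', 2), ('BILL_AMT1', 2)]

def split_importance(splits, max_depth=3):
    """Weight each split by 2**(max_depth - depth): splits nearer the
    root affect more rows, so they count for more."""
    importance = {}
    for variable, depth in splits:
        importance[variable] = importance.get(variable, 0) + 2 ** (max_depth - depth)
    total = sum(importance.values())
    return {v: round(w / total, 3) for v, w in importance.items()}

print(split_importance(splits))  # PAY_0 dominates, as in the GBM plot
```

Any monotone weighting that favors shallow, frequent splits gives a similar ranking; this one is just simple to compute by hand.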
## 5. Generate reason codes using the LOCO method
Now that a solid understanding of global model behavior has been attained, local behavior for any given row of data and prediction can be analyzed and validated using LOCO. The LOCO method presented here is adapted from *Distribution-Free Predictive Inference for Regression* by Jing Lei et al., http://www.stat.cmu.edu/~ryantibs/papers/conformal.pdf. Here the local contribution of an input variable to a prediction for a single row of data is estimated by rescoring the GBM on that row one time for each input variable, each time leaving out one input variable (e.g., "covariate") by setting it to missing, and then subtracting the new score from the original score. By default, h2o scores missing data in decision trees by running them through the majority decision path. This means LOCO will be a numeric measure of how different the local contribution of an input variable is from the most common local contribution of that variable in the model. This variant of LOCO differs from the original method, in which one input variable is dropped from the model and the model is retrained without that variable. For nonlinear models, nonlinear dependencies can allow variables to nearly completely replace one another when a variable is dropped and the model is retrained. Hence, the approach of injecting missing values is used to estimate local contributions of input variables for nonlinear models here, as opposed to dropping a variable and retraining the model.
#### Calculate LOCO reason values for each row of the test set
To implement LOCO, GBM model predictions are calculated once for the test data and then again for each input variable, setting the entire input variable column to missing. Once the prediction without the variable is found for every row of data in the test set, that column vector of predictions on corrupted data can be subtracted from the column vector of predictions on the original, non-corrupted data to estimate the local contribution of that variable for each prediction in the test data. For better local accuracy and explainability, LOCO contributions are scaled such that the contributions for each prediction plus the overall average of `DEFAULT_NEXT_MONTH` always sum to the model prediction.
```
h2o.no_progress() # turn off h2o gratuitous progress bars
# create set of original predictions and row ID
preds2 = test['ID'].cbind(model.predict(test).drop(['predict', 'p0']))
preds2.columns = ['ID', yhat]
# calculate LOCO for each variable
print('Calculating LOCO contributions ...')
for k, i in enumerate(X):
# train and predict with x_i set to missing
test_loco = h2o.deep_copy(test, 'test_loco')
test_loco[i] = np.nan
preds_loco = model.predict(test_loco).drop(['predict','p0'])
# create a new, named column for the LOCO prediction
preds_loco.columns = [i]
preds2 = preds2.cbind(preds_loco)
# subtract the LOCO prediction from the original prediction
preds2[i] = preds2[yhat] - preds2[i]
# update progress
print('LOCO Progress: ' + i + ' (' + str(k+1) + '/' + str(len(X)) + ') ...')
# scale contributions to sum to yhat - y_0
print('\nScaling contributions ...')
y_0 = test[y].mean()[0]
preds2_pd = preds2.as_data_frame()
pred_ = preds2_pd[yhat]
scaler = (pred_ - y_0) / preds2_pd[X].sum(axis=1)
preds2_pd[X] = preds2_pd[X].multiply(scaler, axis=0)
print('Done.')
preds2_pd.head()
```
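The scaling step above can be checked on toy numbers. All values below are hypothetical and chosen only to show that the scaled contributions plus the average target rate recover the prediction:

```python
# toy check of the LOCO scaling identity (hypothetical numbers)
y_0 = 0.22                 # overall average of DEFAULT_NEXT_MONTH
pred = 0.61                # model prediction for one row
raw = {'PAY_0': 0.20, 'PAY_2': 0.08, 'LIMIT_BAL': -0.02}  # raw LOCO values

# rescale so that y_0 + sum(scaled contributions) equals the prediction
scaler = (pred - y_0) / sum(raw.values())
scaled = {k: v * scaler for k, v in raw.items()}

print(round(y_0 + sum(scaled.values()), 2))  # recovers 0.61
```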
The numeric LOCO values in each column are an estimate of how much each variable contributed to each prediction. LOCO can indicate how a variable and its values were weighted in any given decision by the model. These values are crucially important for machine learning interpretability and are related to "local feature importance", "reason codes", or "turn-down codes." The latter phrases are borrowed from credit scoring. Credit lenders in the U.S. must provide reasons for automatically rejecting a credit application. Reason codes can be easily extracted from LOCO local variable contribution values by simply ranking the variables that played the largest role in any given decision.
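Extracting reason codes from a row of LOCO values then amounts to a simple ranking. The contribution values below are hypothetical:

```python
# hypothetical LOCO contributions for one prediction
loco_row = {'PAY_0': 0.18, 'PAY_6': 0.07, 'LIMIT_BAL': -0.04,
            'PAY_3': 0.05, 'AGE': 0.02}

def reason_codes(contribs, top_n=3):
    """Rank variables by positive contribution toward the predicted
    outcome; the largest contributors become the reason codes."""
    ranked = sorted(contribs.items(), key=lambda kv: kv[1], reverse=True)
    return [var for var, val in ranked[:top_n] if val > 0]

print(reason_codes(loco_row))  # ['PAY_0', 'PAY_6', 'PAY_3']
```

Negative contributions are excluded because they pushed the prediction away from default, so they cannot justify a rejection.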
#### Helper function for finding percentile indices
The function below finds and returns the row indices for the minimum, the maximum, and the deciles of one column in terms of another, in this case the model predictions (`p_DEFAULT_NEXT_MONTH`) and the row identifier (`ID`), respectively. These indices are used as a starting point for finding potentially interesting predictions. Outlying predictions found through residual analysis are another group of potentially interesting local predictions to analyze with LOCO.
```
def get_percentile_dict(yhat, id_, frame):
""" Returns the minimum, maximum, and percentiles of a column, yhat,
as the indices based on another column id_.
Args:
yhat: Column in which to find percentiles.
id_: Id column that stores indices for percentiles of yhat.
frame: Pandas DataFrame containing yhat and id_.
Returns:
Dictionary of percentile values and index column values.
"""
# convert to Pandas and sort
sort_df = frame.copy(deep=True)
sort_df.sort_values(yhat, inplace=True)
sort_df.reset_index(inplace=True)
# find top and bottom percentiles
percentiles_dict = {}
percentiles_dict[0] = sort_df.loc[0, id_]
percentiles_dict[99] = sort_df.loc[sort_df.shape[0]-1, id_]
inc = sort_df.shape[0]//10
# find 10th-90th percentiles
for i in range(1, 10):
percentiles_dict[i * 10] = sort_df.loc[i * inc, id_]
return percentiles_dict
# display percentiles dictionary
# ID values for rows
# from lowest prediction
# to highest prediction
percentile_dict = get_percentile_dict(yhat, 'ID', preds2_pd)
percentile_dict
```
#### Plot some reason codes for a risky customer
Investigating customers with very high or low predicted probabilities to determine if their local explanations justify their extreme predictions is typically a productive exercise in boundary testing, model debugging, and validation. Reason codes are generated for the customer with the highest probability of default in the test data set below in cell 18, but LOCO can create local explanations for any or all rows in the training or test datasets, and on new data.
```
# select single customer
# convert to Pandas
# drop prediction and row ID
risky_loco = preds2_pd[preds2_pd['ID'] == int(percentile_dict[99])].drop(['ID', yhat], axis=1)
# transpose into column vector and sort
risky_loco = risky_loco.T.sort_values(by=8674, ascending=False)[:5]
# plot
_ = risky_loco.plot(kind='bar',
title='Top Five Reason Codes for a Risky Customer\n',
legend=False)
```
For the customer in the test dataset that the GBM predicts as most likely to default, the most important input variables in the prediction are, in descending order, `PAY_0`, `PAY_6`, `PAY_3`, `PAY_5`, and `AGE`.
#### Display customer in question
The local contributions for this customer appear reasonable, especially when considering her payment information. Her most recent payment was 3 months late and her payment for 6 months previous was 4 months late, so it's logical that these would weigh heavily into the model's prediction for default for this customer.
```
test_yhat[test_yhat['ID'] == int(percentile_dict[99]), :] # helps understand reason codes
```
To generate reason codes for the model's decision, the locally important variable and its value are used together. If this customer was denied future credit based on this model and data, the top five LOCO-based reason codes for the automated decision would be:
1. Most recent payment is 3 months delayed.
2. 6th most recent payment is 4 months delayed.
3. 3rd most recent payment is 3 months delayed.
4. 5th most recent payment is 2 months delayed.
5. Customer age is 59.
(Of course, in many places, variables like `AGE` and `SEX` cannot be used in credit lending decisions.)
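A sketch of how ranked variables and their observed values might be templated into statements like those above; the templates and wording are illustrative, not part of the original analysis:

```python
# hypothetical templates mapping a LOCO-ranked variable and its
# observed value to a plain-language reason code
TEMPLATES = {
    'PAY_0': 'Most recent payment is {} months delayed.',
    'PAY_6': '6th most recent payment is {} months delayed.',
    'AGE': 'Customer age is {}.',
}

def reason_code(variable, value):
    # fall back to a generic "variable = value" statement
    template = TEMPLATES.get(variable, '{} = {{}}'.format(variable))
    return template.format(value)

print(reason_code('PAY_0', 3))  # Most recent payment is 3 months delayed.
```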
## 6. Bonus: Generate ensemble LOCO reason codes for greater explanation stability
Just like predictions from high variance, nonlinear models, *explanations* derived from machine learning models can be unstable. One general way to decrease variance is to ensemble the results of many models. The last section of this notebook puts forward a simple approach to creating ensemble explanations.
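As a toy illustration of the variance-reduction idea, consider hypothetical LOCO values for one variable, for the same customer, across several models:

```python
# hypothetical PAY_0 contributions from several perturbed models
pay_0_loco = [0.13, 0.23, 0.17, 0.19, 0.15]

n = len(pay_0_loco)
mean_loco = sum(pay_0_loco) / n    # ensembled explanation
# sample standard deviation quantifies explanation instability
sd_loco = (sum((v - mean_loco) ** 2 for v in pay_0_loco) / (n - 1)) ** 0.5

# the mean is a steadier estimate than any single model's value
print(round(mean_loco, 3), round(sd_loco, 3))
```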
#### Train multiple models
To create ensemble explanations, several accurate models are trained. The models and their predictions on the test data are stored in Python lists.
```
n_models = 10 # select number of models
# lists for holding models and predictions
models = []
pred_frames = []
for i in range(0, n_models):
# initialize and store models
models.append(H2OGradientBoostingEstimator(ntrees=150,
max_depth=4,
sample_rate=0.9 - ((i + 1)*0.01), # perturb sample rate
col_sample_rate=0.9 - ((i + 1)*0.01), # perturb column sample rate
#balance_classes=True, # sample to balance 0/1 distribution of target - helps LOCO
stopping_rounds=5, # stop if validation error does not decrease for 5 iterations (trees)
seed=i + 1)) # new random seed for each model
# train models
models[i].train(y=y, x=X, training_frame=train, validation_frame=test)
# store predictions
pred_frames.append(test['ID'].cbind(models[i].predict(test).drop(['predict','p0'])))
pred_frames[i].columns = ['ID', yhat]
# update progress
print('Training Progress: model %d/%d, AUC = %.4f ...' % (i + 1, n_models, models[i].auc(valid=True)))
print('Done.')
```
#### Calculate LOCO for each model
LOCO is calculated on the test data for each model, each input, and each row of data in the test set using the stored models and predictions.
```
# for each new model ...
for k, model in enumerate(models):
# calculate LOCO for each input variable
for i in X:
# train and predict with Xi set to missing
test_loco = h2o.deep_copy(test, 'test_loco')
test_loco[i] = np.nan
preds_loco = model.predict(test_loco).drop(['predict','p0'])
# create a new, named column for the LOCO prediction
preds_loco.columns = [i]
pred_frames[k] = pred_frames[k].cbind(preds_loco)
# subtract the LOCO prediction from the original prediction
pred_frames[k][i] = pred_frames[k][yhat] - pred_frames[k][i]
# update progress
print('LOCO Progress: model %d/%d ...' % (k + 1, n_models))
print('Done.')
```
#### Collect LOCO values for each model for a risky customer
To create ensemble explanations for a single row, the LOCO values for each variable in the row are averaged across all models. Single-model and mean LOCO values for the most risky person in the test set are displayed below. Notice that even slight changes in model specifications can result in different explanations. For example, the local contribution of `PAY_0` for the riskiest customer ranges from 0.13 to 0.23 across the 10 models in the table below.
```
# holds predictions for a specific row
risky_loco_frames = []
# column names for Pandas DataFrame of combined LOCO prediction
col_names = ['Loco ' + str(i) for i in range(1, n_models + 1)]
# for each new model ...
for i in range(0, n_models):
# collect LOCO for that model and a specific row
# as a column vector in a Pandas DataFrame
preds = pred_frames[i]
risky_loco_frames.append(preds[preds['ID'] == int(percentile_dict[99]), :] # row for risky person
.as_data_frame() # convert to Pandas
.drop(['ID', yhat], axis=1) # drop predictions and row ID
.T) # Transpose into column vector
# bind LOCO for each row as column vectors
# into the same Pandas DataFrame
loco_ensemble = pd.concat(risky_loco_frames, axis=1)
# update column names
loco_ensemble.columns = col_names
# mean local importance across models
loco_ensemble['Mean Local Importance'] = loco_ensemble.mean(axis=1)
# scale contribs
scaler = (test_yhat[test_yhat['ID'] == int(percentile_dict[99]), yhat] - y_0) /\
(loco_ensemble['Mean Local Importance'].sum())
loco_ensemble['Scaled Mean Local Importance'] = loco_ensemble['Mean Local Importance'] * scaler[0, 0]
# std deviation
loco_ensemble['Std. Dev. Local Importance'] = loco_ensemble\
.drop('Scaled Mean Local Importance', axis=1)\
.std(axis=1)
# display
loco_ensemble
```
#### Plot some mean reason codes for a risky customer
Taking mean explanations across multiple models leads to reason codes somewhat different from those produced by a single model. Mean reason codes may be more stable, they represent explanations from several models, and they may take practitioners a step closer to using machine learning models to make inferential conclusions about phenomena represented in the training or test data, instead of simply providing an approximate explanation of a single model's decision processes.
```
risky_mean_loco = loco_ensemble['Mean Local Importance'].sort_values(ascending=False)[:5]
_ = risky_mean_loco.plot(kind='bar',
title='Top Five Reason Codes for a Risky Customer\n',
color='b',
legend=False)
```
#### Shutdown H2O
After using h2o, it's typically best to shut it down. However, before doing so, users should ensure that they have saved any h2o data structures, such as models and H2OFrames, or scoring artifacts, such as POJOs and MOJOs.
```
# be careful, this can erase your work!
h2o.cluster().shutdown(prompt=True)
```
[](http://rpi.analyticsdojo.com)
<center><h1>Intro to Tensorflow - MNIST</h1></center>
<center><h3><a href = 'http://rpi.analyticsdojo.com'>rpi.analyticsdojo.com</a></h3></center>
[](https://colab.research.google.com/github/rpi-techfundamentals/fall2018-materials/blob/master/10-deep-learning/06-tensorflow-minst.ipynb)
Adapted from [Hands-On Machine Learning with Scikit-Learn and TensorFlow by Aurélien Géron](https://github.com/ageron/handson-ml).
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
[For full license see repository.](https://github.com/ageron/handson-ml/blob/master/LICENSE)
**Chapter 10 – Introduction to Artificial Neural Networks**
_This notebook contains all the sample code and solutions to the exercises in chapter 10._
# Setup
First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures:
```
# Common imports
import numpy as np
import os
import tensorflow as tf
# to make this notebook's output stable across runs
def reset_graph(seed=42):
tf.reset_default_graph()
tf.set_random_seed(seed)
np.random.seed(seed)
# To plot pretty figures
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
# Where to save the figures
PROJECT_ROOT_DIR = "/home/jovyan/techfundamentals-fall2017-materials/classes/13-deep-learning"
def save_fig(fig_id, tight_layout=True):
path = os.path.join(PROJECT_ROOT_DIR, 'images', fig_id + ".png")
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format='png', dpi=300)
```
### MNIST
- A very common machine learning dataset; the goal is to classify handwritten digits.
- This example is using MNIST handwritten digits, which contains 55,000 examples for training and 10,000 examples for testing. The digits have been size-normalized and centered in a fixed-size image (28x28 pixels) with values from 0 to 1. For simplicity, each image has been flattened and converted to a 1-D numpy array of 784 features (28*28).

More info: http://yann.lecun.com/exdb/mnist/
```
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/")
X_train = mnist.train.images
X_test = mnist.test.images
y_train = mnist.train.labels.astype("int")
y_test = mnist.test.labels.astype("int")
print ("Training set: ", X_train.shape,"\nTest set: ", X_test.shape)
# List a few images and print the data to get a feel for it.
images = 2
for i in range(images):
#Reshape
x=np.reshape(X_train[i], [28, 28])
print(x)
plt.imshow(x, cmap=plt.get_cmap('gray_r'))
plt.show()
# print("Model prediction:", preds[i])
```
## TFLearn: Deep learning library featuring a higher-level API for TensorFlow
- TFlearn is a modular and transparent deep learning library built on top of Tensorflow.
- It was designed to provide a higher-level API to TensorFlow in order to facilitate and speed-up experimentations
- Fully transparent and compatible with Tensorflow
- [DNN classifier](https://www.tensorflow.org/api_docs/python/tf/contrib/learn/DNNClassifier)
- `hidden_units` list of hidden units per layer. All layers are fully connected. Ex. [64, 32] means first layer has 64 nodes and second one has 32.
- [Scikit learn wrapper for TensorFlow Learn Estimator](https://www.tensorflow.org/api_docs/python/tf/contrib/learn/SKCompat)
- See [tflearn documentation](http://tflearn.org/).
```
import tensorflow as tf
config = tf.contrib.learn.RunConfig(tf_random_seed=42) # not shown in the config
feature_cols = tf.contrib.learn.infer_real_valued_columns_from_input(X_train)
# List of hidden units per layer. All layers are fully connected. Ex. [64, 32] means first layer has 64 nodes and second one has 32.
dnn_clf = tf.contrib.learn.DNNClassifier(hidden_units=[300,100], n_classes=10,
feature_columns=feature_cols, config=config)
dnn_clf = tf.contrib.learn.SKCompat(dnn_clf) # if TensorFlow >= 1.1
dnn_clf.fit(X_train, y_train, batch_size=50, steps=4000)
#We can use the sklearn version of metrics
from sklearn import metrics
y_pred = dnn_clf.predict(X_test)
#This calculates the accuracy.
print("Accuracy score: ", metrics.accuracy_score(y_test, y_pred['classes']) )
#Log loss scores class predictions probabilistically
print("Logloss: ",metrics.log_loss(y_test, y_pred['probabilities']))
```
### Tensorflow
- Direct access to Python API for Tensorflow will give more flexibility
- Like earlier, we will define the structure and then run the session.
- set placeholders
```
import tensorflow as tf
n_inputs = 28*28 # MNIST
n_hidden1 = 300 # hidden units in first layer.
n_hidden2 = 100
n_outputs = 10 # Classes of output variable.
#Placeholder
reset_graph()
X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X")
y = tf.placeholder(tf.int64, shape=(None), name="y")
def neuron_layer(X, n_neurons, name, activation=None):
with tf.name_scope(name):
n_inputs = int(X.get_shape()[1])
stddev = 2 / np.sqrt(n_inputs)
init = tf.truncated_normal((n_inputs, n_neurons), stddev=stddev)
W = tf.Variable(init, name="kernel")
b = tf.Variable(tf.zeros([n_neurons]), name="bias")
Z = tf.matmul(X, W) + b
if activation is not None:
return activation(Z)
else:
return Z
with tf.name_scope("dnn"):
hidden1 = neuron_layer(X, n_hidden1, name="hidden1", activation=tf.nn.relu)
hidden2 = neuron_layer(hidden1, n_hidden2, name="hidden2", activation=tf.nn.relu)
logits = neuron_layer(hidden2, n_outputs, name="outputs")
with tf.name_scope("loss"):
xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
loss = tf.reduce_mean(xentropy, name="loss")
learning_rate = 0.01
with tf.name_scope("train"):
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
training_op = optimizer.minimize(loss)
with tf.name_scope("eval"):
correct = tf.nn.in_top_k(logits, y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
init = tf.global_variables_initializer()
saver = tf.train.Saver()
```
### Running the Analysis over 40 Epochs
- 40 passes through the entire dataset.
```
n_epochs = 40
batch_size = 50
with tf.Session() as sess:
init.run()
for epoch in range(n_epochs):
for iteration in range(mnist.train.num_examples // batch_size):
X_batch, y_batch = mnist.train.next_batch(batch_size)
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
acc_train = accuracy.eval(feed_dict={X: X_batch, y: y_batch})
acc_test = accuracy.eval(feed_dict={X: mnist.test.images,
y: mnist.test.labels})
print("Epoch:", epoch, "Train accuracy:", acc_train, "Test accuracy:", acc_test)
save_path = saver.save(sess, "./my_model_final.ckpt")
with tf.Session() as sess:
saver.restore(sess, "./my_model_final.ckpt") # or better, use save_path
X_new_scaled = mnist.test.images[:20]
Z = logits.eval(feed_dict={X: X_new_scaled})
y_pred = np.argmax(Z, axis=1)
print("Predicted classes:", y_pred)
print("Actual classes: ", mnist.test.labels[:20])
from IPython.display import clear_output, Image, display, HTML
def strip_consts(graph_def, max_const_size=32):
"""Strip large constant values from graph_def."""
strip_def = tf.GraphDef()
for n0 in graph_def.node:
n = strip_def.node.add()
n.MergeFrom(n0)
if n.op == 'Const':
tensor = n.attr['value'].tensor
size = len(tensor.tensor_content)
if size > max_const_size:
tensor.tensor_content = "<stripped %d bytes>"%size
return strip_def
def show_graph(graph_def, max_const_size=32):
"""Visualize TensorFlow graph."""
if hasattr(graph_def, 'as_graph_def'):
graph_def = graph_def.as_graph_def()
strip_def = strip_consts(graph_def, max_const_size=max_const_size)
code = """
<script>
function load() {{
document.getElementById("{id}").pbtxt = {data};
}}
</script>
<link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()>
<div style="height:600px">
<tf-graph-basic id="{id}"></tf-graph-basic>
</div>
""".format(data=repr(str(strip_def)), id='graph'+str(np.random.rand()))
iframe = """
<iframe seamless style="width:1200px;height:620px;border:0" srcdoc="{}"></iframe>
""".format(code.replace('"', '&quot;'))
display(HTML(iframe))
show_graph(tf.get_default_graph())
```
## Using `dense()` instead of `neuron_layer()`
Note: the book uses `tensorflow.contrib.layers.fully_connected()` rather than `tf.layers.dense()` (which did not exist when this chapter was written). It is now preferable to use `tf.layers.dense()`, because anything in the contrib module may change or be deleted without notice. The `dense()` function is almost identical to the `fully_connected()` function, except for a few minor differences:
* several parameters are renamed: `scope` becomes `name`, `activation_fn` becomes `activation` (and similarly the `_fn` suffix is removed from other parameters such as `normalizer_fn`), `weights_initializer` becomes `kernel_initializer`, etc.
* the default `activation` is now `None` rather than `tf.nn.relu`.
* a few more differences are presented in chapter 11.
```
n_inputs = 28*28 # MNIST
n_hidden1 = 300
n_hidden2 = 100
n_outputs = 10
reset_graph()
X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X")
y = tf.placeholder(tf.int64, shape=(None), name="y")
with tf.name_scope("dnn"):
hidden1 = tf.layers.dense(X, n_hidden1, name="hidden1",
activation=tf.nn.relu)
hidden2 = tf.layers.dense(hidden1, n_hidden2, name="hidden2",
activation=tf.nn.relu)
logits = tf.layers.dense(hidden2, n_outputs, name="outputs")
with tf.name_scope("loss"):
xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
loss = tf.reduce_mean(xentropy, name="loss")
learning_rate = 0.01
with tf.name_scope("train"):
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
training_op = optimizer.minimize(loss)
with tf.name_scope("eval"):
correct = tf.nn.in_top_k(logits, y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
init = tf.global_variables_initializer()
saver = tf.train.Saver()
n_epochs = 20
batch_size = 50
with tf.Session() as sess:
init.run()
for epoch in range(n_epochs):
for iteration in range(mnist.train.num_examples // batch_size):
X_batch, y_batch = mnist.train.next_batch(batch_size)
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
acc_train = accuracy.eval(feed_dict={X: X_batch, y: y_batch})
acc_test = accuracy.eval(feed_dict={X: mnist.test.images, y: mnist.test.labels})
print(epoch, "Train accuracy:", acc_train, "Test accuracy:", acc_test)
save_path = saver.save(sess, "./my_model_final.ckpt")
show_graph(tf.get_default_graph())
```
```
# code reference: https://github.com/jeffheaton/t81_558_deep_learning/blob/dce2306815d4ac7c6443a01c071901822d612c6a/t81_558_class_06_4_keras_images.ipynb
%matplotlib inline
from PIL import Image, ImageFile
from matplotlib.pyplot import imshow
import requests
import numpy as np
from io import BytesIO
from IPython.display import display, HTML
import os
import tensorflow.keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Flatten
from tensorflow.keras.layers import Conv2D, MaxPooling2D
from tensorflow.keras import backend as K
import tensorflow as tf
import time
IMAGE_WIDTH = 32
IMAGE_HEIGHT = 32
IMAGE_CHANNELS = 3
# Nicely formatted time string
def hms_string(sec_elapsed):
h = int(sec_elapsed / (60 * 60))
m = int((sec_elapsed % (60 * 60)) / 60)
s = sec_elapsed % 60
return f"{h}:{m:>02}:{s:>05.2f}"
def make_square(img):
cols,rows = img.size
if rows>cols:
pad = (rows-cols)/2
img = img.crop((pad,0,cols,cols))
else:
pad = (cols-rows)/2
img = img.crop((0,pad,rows,rows))
return img
def load_images_from_folder(folder):
images = []
for filename in os.listdir(folder):
try:
img = Image.open(os.path.join(folder,filename))
img.load()
if img is not None:
make_square(img)
img = img.resize((IMAGE_WIDTH,IMAGE_HEIGHT),Image.ANTIALIAS)
class_label = 0 if filename[0] == '0' else 1
images.append((img,class_label))
# print("Loaded ",filename)
except:
continue
# print("An exception occurred while reading image",filename)
return images
training_data = []
training_data_label = []
testing_data = []
testing_data_label = []
randomised_data = []
train_folder="HW4-dataset/training"
test_folder="HW4-dataset/testing"
randomised_folder="HW4-dataset/randomised"
train_images=load_images_from_folder(train_folder)
test_images=load_images_from_folder(test_folder)
randomised_images=load_images_from_folder(randomised_folder)
for i in train_images:
(img, label) = i
# display(img)
training_data.append(np.asarray(img))
training_data_label.append(label)
for i in test_images:
(img, label) = i
# display(img)
testing_data.append(np.asarray(img))
testing_data_label.append(label)
for i in randomised_images:
(img, label) = i
# display(img)
randomised_data.append(np.asarray(img))
training_data = np.array(training_data) / 127.5 - 1.
testing_data = np.array(testing_data) / 127.5 - 1.
randomised_data = np.array(randomised_data) / 127.5 - 1.
print("Saving image binary...")
np.save("HW4-training",training_data) # saves as "HW4-training.npy"
np.save("HW4-testing",testing_data)
print("Done.")
num_classes = 2
epochs = 20
if K.image_data_format() == 'channels_first':
training_data = training_data.reshape(training_data.shape[0], IMAGE_CHANNELS, IMAGE_WIDTH, IMAGE_HEIGHT)
testing_data = testing_data.reshape(testing_data.shape[0], IMAGE_CHANNELS, IMAGE_WIDTH, IMAGE_HEIGHT)
randomised_data = randomised_data.reshape(randomised_data.shape[0], IMAGE_CHANNELS, IMAGE_WIDTH, IMAGE_HEIGHT)
input_shape = (IMAGE_CHANNELS, IMAGE_WIDTH, IMAGE_HEIGHT)
else:
training_data = training_data.reshape(training_data.shape[0], IMAGE_WIDTH, IMAGE_HEIGHT, IMAGE_CHANNELS)
testing_data = testing_data.reshape(testing_data.shape[0], IMAGE_WIDTH, IMAGE_HEIGHT, IMAGE_CHANNELS)
randomised_data = randomised_data.reshape(randomised_data.shape[0], IMAGE_WIDTH, IMAGE_HEIGHT, IMAGE_CHANNELS)
input_shape = (IMAGE_WIDTH, IMAGE_HEIGHT, IMAGE_CHANNELS)
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
activation='relu',
input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam',
metrics=['accuracy'])
model2 = Sequential()
model2.add(Conv2D(32, kernel_size=(3, 3),
activation='relu',
input_shape=input_shape))
model2.add(Conv2D(64, (3, 3), activation='relu'))
model2.add(MaxPooling2D(pool_size=(2, 2)))
model2.add(Flatten())
model2.add(Dense(128, activation='sigmoid'))
model2.add(Dense(num_classes, activation='softmax'))
model2.compile(loss='categorical_crossentropy', optimizer='adam',
metrics=['accuracy'])
start_time = time.time()
training_data_label = tensorflow.keras.utils.to_categorical(training_data_label, num_classes)
testing_data_label = tensorflow.keras.utils.to_categorical(testing_data_label, num_classes)
model.fit(training_data, training_data_label,
epochs=epochs,
verbose=2,
validation_data=(testing_data, testing_data_label))
score = model.evaluate(testing_data, testing_data_label, verbose=0)
print('Test loss: {}'.format(score[0]))
print('Test accuracy: {}'.format(score[1]))
elapsed_time = time.time() - start_time
print("Elapsed time: {}".format(hms_string(elapsed_time)))
start_time = time.time()
model2.fit(training_data, training_data_label,
epochs=epochs,
verbose=2,
validation_data=(testing_data, testing_data_label))
score = model2.evaluate(testing_data, testing_data_label, verbose=0)
print('Test loss: {}'.format(score[0]))
print('Test accuracy: {}'.format(score[1]))
elapsed_time = time.time() - start_time
print("Elapsed time: {}".format(hms_string(elapsed_time)))
classes = model.predict(randomised_data)
# show the inputs and predicted outputs
for i in range(len(classes)):
class_labels = classes[i]
print("------------------------------------")
display(randomised_images[i][0])
print("cross %", '%f' % (class_labels[0]*100))
print("circle %", '%f' % (class_labels[1]*100))
classes = model2.predict(randomised_data)
# show the inputs and predicted outputs
for i in range(len(classes)):
class_labels = classes[i]
print("------------------------------------")
display(randomised_images[i][0])
print("cross %", '%f' % (class_labels[0]*100))
print("circle %", '%f' % (class_labels[1]*100))
```
# a) Dataset Description
Training dataset: balanced, consisting of 22 items
Testing dataset: balanced, consisting of 8 items
Randomised verification dataset: 12 items
# b) CNN Description
- The first CNN consists of 8 layers (taken from the Jeff Heaton course) and uses ReLU as the activation function
- The second CNN consists of 6 layers and uses sigmoid as the activation function
# c, d) Displayed above
# e) Epoch's Required for Convergence
It's evident from the metrics that convergence starts at the 4th epoch, after oscillating between convergence and divergence during the first 3 epochs.
# f) Misclassified Images
The 6th image in the predicted-images section is misclassified. I think this is because it bears some similarity to the circle images, and our training dataset contained no such image or anything close to it. To fix this we can add it to our training dataset.
It would also be great to discuss in class how to design a network that can keep learning while it predicts.
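The last point, a model that keeps learning while it predicts, is essentially online learning. A minimal framework-agnostic sketch of the idea (using a hypothetical plain-numpy logistic regression rather than the Keras models above, since only the update pattern matters): predict first, then take one gradient step on each example once its true label becomes known.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class OnlineLogisticRegression:
    """Logistic regression updated one example at a time (online SGD)."""

    def __init__(self, n_features, lr=0.1):
        self.w = np.zeros(n_features)
        self.b = 0.0
        self.lr = lr

    def predict_proba(self, x):
        return sigmoid(self.w @ x + self.b)

    def partial_fit(self, x, y):
        """One SGD step on a single (x, y) pair, run after the label arrives."""
        p = self.predict_proba(x)
        grad = p - y                  # d(log-loss)/d(logit)
        self.w -= self.lr * grad * x
        self.b -= self.lr * grad

# usage: predict before the label is known, then learn from the revealed label
rng = np.random.default_rng(0)
model = OnlineLogisticRegression(n_features=2, lr=0.5)
for _ in range(200):
    x = rng.normal(size=2)
    y = 1.0 if x[0] + x[1] > 0 else 0.0   # hypothetical ground truth
    _ = model.predict_proba(x)            # prediction phase
    model.partial_fit(x, y)               # learning phase
print(model.predict_proba(np.array([2.0, 2.0])) > 0.5)
```

The same predict-then-update pattern can be reproduced in Keras with per-example (or per-batch) training calls once labels arrive, instead of a single offline `fit`.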
## 3.2 autograd
Training a network with Tensors is convenient, but as the linear-regression example at the end of the previous section showed, backpropagation then has to be implemented by hand. That is manageable for a model as simple as linear regression, but in practice network structures are often very complex, and implementing backpropagation for them manually is not only laborious but also error-prone and hard to debug. torch.autograd is an automatic differentiation engine developed precisely for this purpose: it builds the computation graph automatically from the inputs and the forward pass, and then performs backpropagation.
Computation graphs (Computation Graph) are at the core of modern deep learning frameworks such as PyTorch and TensorFlow; they provide the theoretical basis for backpropagation (Back Propagation), the efficient automatic differentiation algorithm, and understanding them is of great help when writing real programs. This section touches on some computation-graph basics, but no prior in-depth knowledge is assumed; Christopher Olah's article is a recommended introduction[^1].
[^1]: http://colah.github.io/posts/2015-08-Backprop/
### 3.2.1 requires_grad
PyTorch implements the computation-graph machinery in the autograd module, whose core data structure is Variable. Since v0.4, Variable and Tensor have been merged: a tensor that requires gradients (requires_grad) can be regarded as a Variable. autograd records the operations performed on tensors and uses this record to build the computation graph.
Variable supports most of the functions that tensors support, but it does not support some `inplace` functions, because these modify the tensor itself, whereas during backpropagation a variable needs the original tensor cached in order to compute gradients. To compute the gradients of all Variables, call the `backward` method of the root variable; autograd then propagates backwards along the computation graph automatically and computes the gradient of every leaf node.
`variable.backward(gradient=None, retain_graph=None, create_graph=None)` takes the following main parameters:
- grad_variables: has the same shape as the variable; for `y.backward()`, grad_variables corresponds to $\textbf {dz} \over \textbf {dy}$ in the chain rule ${dz \over dx}={dz \over dy} \times {dy \over dx}$. grad_variables may also be a tensor or a sequence.
- retain_graph: backpropagation caches some intermediate results, and these caches are cleared once the backward pass finishes. Set this parameter to keep the caches, so that backpropagation can be run more than once.
- create_graph: builds a computation graph for the backward pass itself, which makes higher-order derivatives possible via `backward of backward`.
The description above may seem rather abstract; if it is not clear yet, do not worry, as the second half of this section covers it in detail. First, a few examples.
```
from __future__ import print_function
import torch as t
import torch
# specify requires_grad when the tensor is created
a = t.randn(3,4, requires_grad=True)
# or
a = t.randn(3,4).requires_grad_()
# or
a = t.randn(3,4)
a.requires_grad=True
a
b = t.zeros(3,4).requires_grad_()
b
# can also be written as c = a + b
c = a.add(b)
c
g = t.zeros_like(a)
g[0][0] = 1
c.backward(g)
a.grad
d = c.sum()
d.backward() # backpropagate
d # d is still a tensor with requires_grad=True; operate on it with care
d.requires_grad
a.grad
# Although c was not explicitly marked as requiring gradients, c depends on a,
# and a requires gradients, so c's requires_grad is set to True automatically
a.requires_grad, b.requires_grad, c.requires_grad
# variables created by the user are leaf nodes; their grad_fn is None
a.is_leaf, b.is_leaf, c.is_leaf
# c.grad is None: c is not a leaf node, and its gradient is only used to compute a's gradient,
# so although c.requires_grad = True, the gradient is freed as soon as it has been computed
c.grad is None
```
Let us compute the derivative of the following function:
$$
y = x^2\bullet e^x
$$
Its derivative is:
$$
{dy \over dx} = 2x\bullet e^x + x^2 \bullet e^x
$$
Let us compare autograd's result with the manually derived one.
```
def f(x):
    '''compute y'''
    y = x**2 * t.exp(x)
    return y

def gradf(x):
    '''manually derived gradient'''
    dx = 2*x*t.exp(x) + x**2*t.exp(x)
    return dx

x = t.randn(3,4, requires_grad = True)
y = f(x)
y
y.backward(t.ones(y.size())) # the gradient argument must have the same shape as y
x.grad
# autograd's result agrees with the manually derived formula
gradf(x)
```
### 3.2.2 Computation Graph
Under the hood, PyTorch's `autograd` uses a computation graph: a special directed acyclic graph (DAG) that records the relations between operators and variables. Operators are usually drawn as rectangles and variables as ellipses. For example, the expression $ \textbf {z = wx + b}$ can be decomposed into $\textbf{y = wx}$ and $\textbf{z = y + b}$; its computation graph is shown in Figure 3-3, where `MUL` and `ADD` are operators and $\textbf{w}$, $\textbf{x}$ and $\textbf{b}$ are variables.

In this directed acyclic graph, $\textbf{w}$, $\textbf{x}$ and $\textbf{b}$ are leaf nodes (leaf node); such nodes are usually created by the user and do not depend on other variables. $\textbf{z}$ is called the root node and is the final target of the computation graph. The chain rule easily yields the gradient of each leaf node:
$${\partial z \over \partial b} = 1,\space {\partial z \over \partial y} = 1\\
{\partial y \over \partial w }= x,{\partial y \over \partial x}= w\\
{\partial z \over \partial x}= {\partial z \over \partial y} {\partial y \over \partial x}=1 * w\\
{\partial z \over \partial w}= {\partial z \over \partial y} {\partial y \over \partial w}=1 * x\\
$$
With the computation graph in place, the chain-rule derivation above can be carried out automatically by backpropagation on the graph, as illustrated in Figure 3-4.

In PyTorch's implementation, autograd follows the user's operations and records every operation that produces the current variable, building a directed acyclic graph as it goes. Each operation the user performs changes the corresponding computation graph. At a lower level, the graph records `Function` operations, and the position of each variable in the graph can be inferred from its `grad_fn` attribute. During backpropagation, autograd traces this graph backwards from the current variable (the root node $\textbf{z}$) and uses the chain rule to compute the gradients of all leaf nodes. Every forward operation has a corresponding backward function that computes the gradients of its input variables; the names of these functions usually end in `Backward`. The code below examines autograd's implementation details.
```
x = t.ones(1)
b = t.rand(1, requires_grad = True)
w = t.rand(1, requires_grad = True)
y = w * x # equivalent to y = w.mul(x)
z = y + b # equivalent to z = y.add(b)
x.requires_grad, b.requires_grad, w.requires_grad
# Although y.requires_grad was not set explicitly, y depends on w,
# which requires gradients, so y.requires_grad is True
y.requires_grad
x.is_leaf, w.is_leaf, b.is_leaf
y.is_leaf, z.is_leaf
# grad_fn records the backward function of this variable;
# z is the output of add, so its backward function is AddBackward
z.grad_fn
# next_functions stores the inputs of grad_fn as a tuple whose elements are also Functions
# the first is y, the output of mul, so its backward function y.grad_fn is MulBackward
# the second is b, a leaf node created by the user: its grad_fn is None, but it has an AccumulateGrad function
print(b.grad_fn)
z.grad_fn.next_functions
# the grad_fn of a variable corresponds to a function node in the graph
z.grad_fn.next_functions[0][0] == y.grad_fn
# the first is w, a leaf node that requires gradients; its gradient is accumulated
# the second is x, a leaf node that does not require gradients, so it is None
y.grad_fn.next_functions
# the grad_fn of a leaf node is None
w.grad_fn,x.grad_fn
```
Computing the gradient of w requires the value of x (${\partial y\over \partial w} = x $). These values are saved in buffers during the forward pass and cleared automatically once the gradients have been computed. To backpropagate more than once, specify `retain_graph` to keep these buffers.
```
# use retain_graph to keep the buffers
z.backward(retain_graph=True)
w.grad
# backpropagating again accumulates the gradient; this is what the AccumulateGrad flag on w means
z.backward()
w.grad
```
PyTorch uses dynamic graphs: the computation graph is rebuilt from scratch on every forward pass, so Python control flow (for, if, and so on) can be used to create the graph as needed. This is very useful in natural language processing: you do not have to construct every possible graph path in advance, because the graph is only built at run time.
```
def abs(x):
    if x.data[0] > 0:
        return x
    else:
        return -x
x = t.ones(1,requires_grad=True)
y = abs(x)
y.backward()
x.grad
x = -1*t.ones(1)
x = x.requires_grad_()
y = abs(x)
y.backward()
print(x.grad)
y
x
x.requires_grad
x.requires_grad
cc=x*3
cc.requires_grad
def f(x):
    result = 1
    for ii in x:
        if ii.item() > 0:
            result = ii * result
    return result
x = t.arange(-2,4,dtype=t.float32).requires_grad_()
y = f(x) # y = x[3]*x[4]*x[5]
y.backward()
x.grad
```
A variable's `requires_grad` attribute defaults to False; if any node has requires_grad set to True, then `requires_grad` is True for every node that depends on it. This is easy to see: for $ \textbf{x}\to \textbf{y} \to \textbf{z}$ with x.requires_grad = True, computing $\partial z \over \partial x$ via the chain rule $\frac{\partial z}{\partial x} = \frac{\partial z}{\partial y} \frac{\partial y}{\partial x}$ also requires $ \frac{\partial z}{\partial y}$, so y.requires_grad is automatically set to True.
Sometimes we do not want autograd to differentiate through a tensor. Differentiation requires caching many intermediate results, which adds extra memory/GPU-memory overhead, so we may want to switch automatic differentiation off. When backpropagation is not needed (e.g. at inference time), disabling it yields a certain speed-up and saves roughly half the memory, since no space has to be allocated for computing gradients.
```
x = t.ones(1, requires_grad=True)
w = t.rand(1, requires_grad=True)
y = x * w
# y depends on w, and w.requires_grad = True
x.requires_grad, w.requires_grad, y.requires_grad
with t.no_grad():
    x = t.ones(1)
    w = t.rand(1, requires_grad = True)
    y = x * w
# y depends on w and x; although w.requires_grad = True, y's requires_grad is still False
x.requires_grad, w.requires_grad, y.requires_grad
t.no_grad??
t.set_grad_enabled(False)
x = t.ones(1)
w = t.rand(1, requires_grad = True)
y = x * w
# y depends on w and x; although w.requires_grad = True, y's requires_grad is still False
x.requires_grad, w.requires_grad, y.requires_grad
# restore the default setting
t.set_grad_enabled(True)
```
If we want to modify a tensor's values without the change being recorded by autograd, we can operate on tensor.data:
```
a = t.ones(3,4,requires_grad=True)
b = t.ones(3,4,requires_grad=True)
c = a * b
a.data # still a tensor
a.data.requires_grad # but already detached from the computation graph
d = a.data.sigmoid_() # sigmoid_ is an inplace operation that modifies a itself
print(d.requires_grad, d.type)
a
```
If we want to operate on a tensor without the operations being recorded, we can use tensor.data or tensor.detach().
```
a.requires_grad
# similar to tensor = a.data, but if tensor is modified, backward may raise an error
tensor = a.detach()
tensor.requires_grad
# compute some statistics of the tensor, without these being recorded
mean = tensor.mean()
std = tensor.std()
maximum = tensor.max()
tensor[0]=1
# the line below would raise: RuntimeError: one of the variables needed for gradient
# computation has been modified by an inplace operation
# because c = a*b and b's gradient depends on a; modifying tensor also modifies a, so the gradient is no longer correct
# c.sum().backward()
```
During backpropagation, the gradients of non-leaf nodes are cleared as soon as they have been computed. There are two ways to inspect the gradients of these variables:
- the autograd.grad function
- a hook
Both `autograd.grad` and `hook` are powerful tools; see the official API documentation for more detailed usage. Basic usage is illustrated below. The `hook` method is recommended, but in practice you should avoid modifying the value of grad.
```
x = t.ones(3, requires_grad=True)
w = t.rand(3, requires_grad=True)
y = x * w
# y depends on w, and w.requires_grad = True
z = y.sum()
print(x, w)
x.requires_grad, w.requires_grad, y.requires_grad
# the grad of a non-leaf node is cleared automatically once computed; y.grad is None
z.backward()
(x.grad, w.grad, y.grad)
# method 1: use autograd.grad to obtain the gradient of an intermediate variable
x = t.ones(3, requires_grad=True)
w = t.rand(3, requires_grad=True)
y = x * w
z = y.sum()
# gradient of z w.r.t. y; implicitly calls backward()
t.autograd.grad(z, y)
# method 2: use a hook
# a hook is a function whose input is the gradient; it should not return anything
def variable_hook(grad):
    print('gradient of y:', grad)
x = t.ones(3, requires_grad=True)
w = t.rand(3, requires_grad=True)
y = x * w
# register the hook
hook_handle = y.register_hook(variable_hook)
z = y.sum()
z.backward()
# unless the hook is needed every time, remember to remove it after use
hook_handle.remove()
```
Finally, consider the meaning of a variable's grad attribute and of the `grad_variables` parameter of backward. The conclusions, stated directly:
- The gradient of variable $\textbf{x}$ is the gradient of the objective function ${f(x)} $ with respect to $\textbf{x}$: $\frac{df(x)}{dx} = (\frac {df(x)}{dx_0},\frac {df(x)}{dx_1},...,\frac {df(x)}{dx_N})$, with the same shape as $\textbf{x}$.
- In y.backward(grad_variables), grad_variables corresponds to $\frac{\partial z}{\partial y}$ in the chain rule $\frac{\partial z}{\partial x} = \frac{\partial z}{\partial y} \frac{\partial y}{\partial x}$. Here z is the objective function, generally a scalar, so the shape of $\frac{\partial z}{\partial y}$ matches the shape of variable $\textbf{y}$. `z.backward()` is to a certain extent equivalent to y.backward(grad_y); `z.backward()` can omit the grad_variables parameter because $z$ is a scalar and $\frac{\partial z}{\partial z} = 1$.
```
x = t.arange(0,3, requires_grad=True,dtype=t.float)
y = x**2 + x*2
z = y.sum()
z.backward() # backpropagate starting from z
x.grad
x = t.arange(0,3, requires_grad=True,dtype=t.float)
y = x**2 + x*2
z = y.sum()
y_gradient = t.Tensor([1,1,1]) # dz/dy
y.backward(y_gradient) # backpropagate starting from y
x.grad
```
Also note that autograd only works through operations on variables; if you operate directly on a variable's data, the result cannot be backpropagated through. Apart from parameter initialization, we generally do not modify the value of variable.data.
The characteristics of computation graphs in PyTorch can be summarized as follows:
- autograd builds the computation graph from the user's operations on variables. Operations on variables are abstracted as `Function`s.
- Nodes that are not the output of any function (Function), i.e. that are created by the user, are called leaf nodes, and their `grad_fn` is None. Leaf variables that require gradients carry the `AccumulateGrad` flag, because their gradients are accumulated.
- Variables do not require gradients by default, i.e. their `requires_grad` attribute is False. If any node's requires_grad is set to True, then `requires_grad` is True for every node that depends on it.
- A variable's `volatile` attribute defaults to False; if any variable's `volatile` is set to True, all nodes that depend on it have `volatile` True as well. Volatile nodes are never differentiated, and volatile takes precedence over `requires_grad`.
- When backpropagating multiple times, gradients accumulate. The intermediate backward caches are cleared after each pass; to backpropagate several times, specify `retain_graph`=True to keep these caches.
- The gradients of non-leaf nodes are cleared as soon as they have been computed; use the `autograd.grad` or `hook` technique to retrieve them.
- A variable's grad has the same shape as its data. Avoid modifying variable.data directly, because operations on data cannot be backpropagated through autograd.
- The `grad_variables` parameter of the backward function can be viewed as an intermediate result of the chain rule; if the output is a scalar, it can be omitted and defaults to 1.
- PyTorch uses a dynamic-graph design, which makes it easy to inspect intermediate outputs and to build the graph structure dynamically.
Not knowing these details usually does not prevent one from using PyTorch, but mastering them helps in understanding it better and in avoiding many pitfalls.
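The graph-building machinery summarized above can be illustrated with a tiny, self-contained reverse-mode autodiff sketch in plain Python. This is a toy analogue of autograd, not PyTorch code: each operation records its inputs together with a local derivative rule, and `backward()` walks the recorded graph in reverse, accumulating gradients into the nodes.

```python
class Value:
    """Minimal scalar reverse-mode autodiff node (toy analogue of autograd)."""

    def __init__(self, data, parents=(), local_grads=()):
        self.data = data
        self.grad = 0.0
        self._parents = parents          # input nodes (the recorded graph)
        self._local_grads = local_grads  # d(output)/d(input) for each parent

    def __add__(self, other):
        return Value(self.data + other.data, (self, other), (1.0, 1.0))

    def __mul__(self, other):
        return Value(self.data * other.data, (self, other),
                     (other.data, self.data))

    def backward(self, grad=1.0):
        self.grad += grad                # gradients accumulate, as in autograd
        for parent, local in zip(self._parents, self._local_grads):
            parent.backward(grad * local)

# z = w*x + b, the same expression as in the computation-graph example above
w, x, b = Value(2.0), Value(3.0), Value(1.0)
z = w * x + b
z.backward()
print(w.grad, x.grad, b.grad)  # 3.0 2.0 1.0  (dz/dw = x, dz/dx = w, dz/db = 1)
```

This naive recursion revisits shared subexpressions once per path, which is still correct because gradients accumulate, but inefficient; a real engine such as autograd topologically sorts the DAG so that each node is processed only once.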
### 3.2.3 Extending autograd
At present the vast majority of functions can be differentiated automatically with `autograd`. But what if you need a complex function that does not support automatic differentiation? Write a `Function` that implements its forward and backward passes. A `Function` corresponds to a rectangle in the computation graph: it receives arguments, computes, and returns results. An example follows.
```python
class Mul(Function):

    @staticmethod
    def forward(ctx, w, x, b, x_requires_grad = True):
        ctx.x_requires_grad = x_requires_grad
        ctx.save_for_backward(w, x)
        output = w * x + b
        return output

    @staticmethod
    def backward(ctx, grad_output):
        w, x = ctx.saved_tensors
        grad_w = grad_output * x
        if ctx.x_requires_grad:
            grad_x = grad_output * w
        else:
            grad_x = None
        grad_b = grad_output * 1
        return grad_w, grad_x, grad_b, None
```
Analysis:
- A custom Function must inherit from autograd.Function; it has no constructor `__init__`, and both forward and backward are static methods.
- The outputs of backward correspond one-to-one to the inputs of forward, and the inputs of backward correspond one-to-one to the outputs of forward.
- The grad_output parameter of backward is the `grad_variables` of t.autograd.backward.
- If an input does not require a gradient, simply return None for it; e.g. the forward input x_requires_grad clearly cannot be differentiated, so None is returned in its place.
- The backward pass may need some intermediate results of the forward pass; these must be saved explicitly, otherwise they are freed when the forward pass ends.
A Function is invoked via Function.apply(variable).
```
from torch.autograd import Function
class MultiplyAdd(Function):

    @staticmethod
    def forward(ctx, w, x, b):
        ctx.save_for_backward(w, x)
        output = w * x + b
        return output

    @staticmethod
    def backward(ctx, grad_output):
        w, x = ctx.saved_tensors
        grad_w = grad_output * x
        grad_x = grad_output * w
        grad_b = grad_output * 1
        return grad_w, grad_x, grad_b
x = t.ones(3)
w = t.rand(3, requires_grad = True)
b = t.rand(1, requires_grad = True)
# run the forward pass
z = MultiplyAdd.apply(w, x, b)
# run the backward pass
z.backward(t.tensor([1,1,1]))
print(x, w, b)
# x does not require gradients; its gradient is still computed internally, but cleared afterwards
x.grad, w.grad, b.grad
x = t.ones(3)
w = t.rand(3, 1, requires_grad = True)
w * x
print(w, x, w * x)
x = t.ones(1)
w = t.rand(1, requires_grad = True)
b = t.rand(1, requires_grad = True)
#print('starting forward pass')
z = MultiplyAdd.apply(w, x, b)
#print('starting backward pass')
# calling z.grad_fn.apply invokes MultiplyAdd.backward directly
# and returns grad_w, grad_x, grad_b
z.grad_fn.apply(t.ones(1))
```
The reason the forward function takes tensors while the backward function takes variables is to enable higher-order differentiation. Although backward's inputs and outputs are variables, in actual use autograd.Function unpacks the input variables into tensors and wraps the resulting tensors back into variables. Operating on variables inside backward is what makes it possible to compute the gradient of a gradient (backward of backward). The example below illustrates this; see the documentation for more detailed usage of torch.autograd.grad.
```
x = t.tensor([5], requires_grad=True,dtype=t.float)
y = x ** 2
grad_x = t.autograd.grad(y, x, create_graph=True)
grad_x # dy/dx = 2 * x
grad_grad_x = t.autograd.grad(grad_x[0],x)
grad_grad_x # second derivative d(2x)/dx = 2
```
This design gives `autograd` higher-order differentiation, but it also restricts how Tensors can be used, because backward functions in autograd may only use operations that are already defined on Variables. The design was introduced in version `0.2`; for more flexibility, and to stay compatible with older code, PyTorch also provides another way to extend autograd. PyTorch offers the decorator `@once_differentiable`, which automatically unpacks the input variables of a backward function into tensors and wraps the resulting tensors back into variables. With it, functions from numpy/scipy can be used conveniently, and operations are no longer limited to those supported by Variables. As the name suggests, however, such a function can only be differentiated once: it breaks the backward graph and no longer supports higher-order derivatives.
Everything described above concerns new-style Functions. There are also legacy Functions, which may have an `__init__` method and whose `forward` and `backward` need not be declared `@staticmethod`; they are encountered less and less as versions advance, so they are not covered further here.
In addition, after implementing your own Function, you can use the `gradcheck` function to check that the implementation is correct. `gradcheck` approximates the gradient numerically, so it has some inherent error; the tolerated error is controlled through `eps`.
See the developers' discussion on GitHub for more on this topic[^3].
[^3]: https://github.com/pytorch/pytorch/pull/1016
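The numerical approximation behind `gradcheck` can be sketched in a few lines of plain numpy. This is a simplified illustration, not torch.autograd.gradcheck itself; the function names and the restriction to scalar-valued functions are assumptions made for brevity. It compares an analytic gradient against central finite differences, with a step size playing the role of `eps`.

```python
import numpy as np

def numeric_grad(f, x, eps=1e-6):
    """Central-difference approximation of the gradient of scalar-valued f at x."""
    grad = np.zeros_like(x)
    for i in range(x.size):
        step = np.zeros_like(x)
        step.flat[i] = eps
        grad.flat[i] = (f(x + step) - f(x - step)) / (2 * eps)
    return grad

def gradcheck(f, analytic_grad, x, eps=1e-6, atol=1e-4):
    """Return True if the analytic gradient matches the numeric approximation."""
    return np.allclose(analytic_grad(x), numeric_grad(f, x, eps), atol=atol)

# check that d/dx sum(x**2) = 2x
f = lambda x: np.sum(x ** 2)
df = lambda x: 2 * x
x = np.random.randn(5)
print(gradcheck(f, df, x))  # True
```

A deliberately wrong gradient, e.g. `lambda v: 3 * v`, makes the check fail, which is exactly how such a checker catches mistakes in a hand-written backward.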
The example below shows how to implement a sigmoid Function.
```
class Sigmoid(Function):

    @staticmethod
    def forward(ctx, x):
        output = 1 / (1 + t.exp(-x))
        ctx.save_for_backward(output)
        return output

    @staticmethod
    def backward(ctx, grad_output):
        output, = ctx.saved_tensors
        grad_x = output * (1 - output) * grad_output
        return grad_x

# verify the gradient formula by numerical approximation
test_input = t.randn(3,4, requires_grad=True).double()
t.autograd.gradcheck(Sigmoid.apply, (test_input,), eps=1e-3)

def f_sigmoid(x):
    y = Sigmoid.apply(x)
    y.backward(t.ones(x.size()))

def f_naive(x):
    y = 1/(1 + t.exp(-x))
    y.backward(t.ones(x.size()))

def f_th(x):
    y = t.sigmoid(x)
    y.backward(t.ones(x.size()))
x=t.randn(100, 100, requires_grad=True)
%timeit -n 100 f_sigmoid(x)
%timeit -n 100 f_naive(x)
%timeit -n 100 f_th(x)
```
Clearly `f_sigmoid` is quite a bit faster than the function implemented purely with `autograd` addition, multiplication and exponentiation, because f_sigmoid's backward optimizes the backward pass. The built-in interface (t.sigmoid) implemented by the system is faster still.
### 3.2.4 A Small Exercise: Linear Regression with Variable
The previous section showed how to implement linear regression with tensors; this subsection shows how to implement it with autograd/Variable, to appreciate the convenience of autograd.
```
import torch as t
%matplotlib inline
from matplotlib import pyplot as plt
from IPython import display
import numpy as np
# set the random seed so that the output below is reproducible across machines
t.manual_seed(1000)

def get_fake_data(batch_size=8):
    ''' generate random data: y = x*2 + 3, plus some noise'''
    x = t.rand(batch_size, 1) * 5
    y = x * 2 + 3 + t.randn(batch_size, 1)
    return x, y

# look at the generated x-y distribution
x, y = get_fake_data()
plt.scatter(x.squeeze().numpy(), y.squeeze().numpy())

# randomly initialize the parameters
w = t.rand(1,1, requires_grad=True)
b = t.zeros(1,1, requires_grad=True)
losses = np.zeros(500)

lr = 0.005 # learning rate

for ii in range(500):
    x, y = get_fake_data(batch_size=32)

    # forward: compute the loss
    y_pred = x.mm(w) + b.expand_as(y)
    loss = 0.5 * (y_pred - y) ** 2
    loss = loss.sum()
    losses[ii] = loss.item()

    # backward: gradients are computed automatically
    loss.backward()

    # update the parameters
    w.data.sub_(lr * w.grad.data)
    b.data.sub_(lr * b.grad.data)

    # zero the gradients
    w.grad.data.zero_()
    b.grad.data.zero_()

    if ii % 50 == 0:
        # plot
        display.clear_output(wait=True)
        x = t.arange(0, 6).view(-1, 1).float()
        y = x.mm(w.data) + b.data.expand_as(x)
        plt.plot(x.numpy(), y.numpy()) # predicted
        x2, y2 = get_fake_data(batch_size=20)
        plt.scatter(x2.numpy(), y2.numpy()) # true data
        plt.xlim(0, 5)
        plt.ylim(0, 13)
        plt.show()
        plt.pause(0.5)

print(w.item(), b.item())
plt.plot(losses)
plt.ylim(5,50)
```
The biggest difference in the autograd implementation of linear regression is that backpropagation no longer has to be derived by hand: autograd computes the derivatives automatically. This is useful not only in deep learning but in many other machine-learning problems as well. Also remember to zero the gradients before each backward pass.
This chapter introduced two fundamental low-level data structures of PyTorch: Tensor, and autograd's Variable. Tensor is an efficient multi-dimensional numerical data structure similar to a Numpy array, with a Numpy-like interface and simple, easy-to-use GPU acceleration. Variable wraps Tensor and adds automatic differentiation, with an interface almost identical to Tensor's. `autograd` is PyTorch's automatic differentiation engine; using dynamic computation graphs, it computes derivatives quickly and efficiently.
# Linear regression with a torch optimizer
```
# set the random seed so that the output below is reproducible across machines
t.manual_seed(1000)

w_true = torch.Tensor([2,-2,2,1,-1]).reshape(-1, 1)

def get_fake_data(batch_size=8):
    ''' generate random data: y = x.mm(w_true) + 3 (noise term disabled)'''
    x = t.rand(batch_size, 5) * 5
    y = x.mm(w_true) + 3 #+ t.randn(batch_size, 1)
    return x, y

w = t.rand(5,1, requires_grad=True)
b = t.zeros(1,1, requires_grad=True)
losses = np.zeros(2000)

lr = 0.0008 # learning rate

for ii in range(2000):
    x, y = get_fake_data(batch_size=64)

    # forward: compute the loss
    y_pred = x.mm(w) + b.expand_as(y)
    loss = 0.5 * (y_pred - y) ** 2
    loss = loss.sum()
    if ii % 1000 == 0:
        print(loss, w, b)
    losses[ii] = loss.item()

    # backward: gradients are computed automatically
    loss.backward()

    # update the parameters
    w.data.sub_(lr * w.grad.data)
    b.data.sub_(lr * b.grad.data)

    # zero the gradients
    w.grad.data.zero_()
    b.grad.data.zero_()

print(w, b)
class linearRegression(torch.nn.Module):
    def __init__(self, inputSize, outputSize):
        super(linearRegression, self).__init__()
        self.linear = torch.nn.Linear(inputSize, outputSize)

    def forward(self, x):
        out = self.linear(x)
        return out
inputDim = 5 # takes variable 'x'
outputDim = 1 # takes variable 'y'
learningRate = 0.001
epochs = 10000
model = linearRegression(inputDim, outputDim)
criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=learningRate)
for epoch in range(epochs):
    # converting inputs and labels to Variable
    inputs, labels = get_fake_data(batch_size=64)

    # clear gradient buffers so that gradients from the previous epoch do not accumulate
    optimizer.zero_grad()

    # get output from the model, given the inputs
    outputs = model(inputs)

    # get loss for the predicted output
    loss = criterion(outputs, labels)

    # get gradients w.r.t. the parameters
    loss.backward()

    # update the parameters
    optimizer.step()

    if epoch % 100 == 0:
        print('epoch {}, loss {}'.format(epoch, loss.item()))

for name, param in model.named_parameters():
    if param.requires_grad:
        print(name, param.data)
```
# COVID-19 Deaths Per Capita
> Comparing death rates adjusting for population size.
- comments: true
- author: Joao B. Duarte & Hamel Husain
- categories: [growth, compare, interactive]
- hide: false
- image: images/covid-permillion-trajectories.png
- permalink: /covid-compare-permillion/
```
#hide
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import altair as alt
%config InlineBackend.figure_format = 'retina'
chart_width = 550
chart_height= 400
```
## Deaths Per Million Of Inhabitants
Since reaching at least 1 death per million
> Tip: Click (Shift+ for multiple) on countries in the legend to filter the visualization.
```
#hide
data = pd.read_csv("https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_19-covid-Deaths.csv", error_bad_lines=False)
data = data.drop(columns=["Lat", "Long"])
data = data.melt(id_vars= ["Province/State", "Country/Region"])
data = pd.DataFrame(data.groupby(['Country/Region', "variable"]).sum())
data.reset_index(inplace=True)
data = data.rename(columns={"Country/Region": "location", "variable": "date", "value": "total_cases"})
data['date'] =pd.to_datetime(data.date)
data = data.sort_values(by = "date")
data.loc[data.location == "US","location"] = "United States"
data.loc[data.location == "Korea, South","location"] = "South Korea"
data_pwt = pd.read_stata("https://www.rug.nl/ggdc/docs/pwt91.dta")
filter1 = data_pwt["year"] == 2017
data_pop = data_pwt[filter1]
data_pop = data_pop[["country","pop"]]
data_pop.loc[data_pop.country == "Republic of Korea","country"] = "South Korea"
data_pop.loc[data_pop.country == "Iran (Islamic Republic of)","country"] = "Iran"
# per habitant
data_pc = data.copy()
countries = ["China", "Italy", "Spain", "France", "United Kingdom", "Germany",
"Portugal", "United States", "Singapore","South Korea", "Japan",
"Brazil","Iran"]
data_countries = []
data_countries_pc = []
# compute per inhabitant
for i in countries:
    data_pc.loc[data_pc.location == i,"total_cases"] = data_pc.loc[data_pc.location == i,"total_cases"]/float(data_pop.loc[data_pop.country == i, "pop"])
# get each country time series
filter1 = data_pc["total_cases"] > 1
for i in countries:
    filter_country = data_pc["location"] == i
    data_countries_pc.append(data_pc[filter_country & filter1])
#hide_input
# Stack data to get it to Altair dataframe format
data_countries_pc2 = data_countries_pc.copy()
for i in range(0, len(countries)):
    data_countries_pc2[i] = data_countries_pc2[i].reset_index()
    data_countries_pc2[i]['n_days'] = data_countries_pc2[i].index
    data_countries_pc2[i]['log_cases'] = np.log(data_countries_pc2[i]["total_cases"])
data_plot = data_countries_pc2[0]
for i in range(1, len(countries)):
    data_plot = pd.concat([data_plot, data_countries_pc2[i]], axis=0)
data_plot["trend_2days"] = data_plot["n_days"]*1/2
data_plot["trend_4days"] = data_plot["n_days"]*1/4
data_plot["trend_12days"] = data_plot["n_days"]*1/12
data_plot["trend_2days_label"] = "Doubles every 2 days"
data_plot["trend_4days_label"] = "Doubles every 4 days"
data_plot["trend_12days_label"] = "Doubles every 12 days"
# Plot it using Altair
source = data_plot
scales = alt.selection_interval(bind='scales')
selection = alt.selection_multi(fields=['location'], bind='legend')
base = alt.Chart(source, title = "COVID-19 Deaths Since Outbreak").encode(
x = alt.X('n_days:Q', title = "Days passed since reaching 1 death per million of inhabitants"),
y = alt.Y("log_cases:Q",title = "Log of Deaths Per Million of Inhabitants"),
color = alt.Color('location:N', legend=alt.Legend(title="Country", labelFontSize=15, titleFontSize=17),
scale=alt.Scale(scheme='tableau20')),
opacity = alt.condition(selection, alt.value(1), alt.value(0.1))
)
lines = base.mark_line().add_selection(
scales
).add_selection(
selection
).properties(
width=chart_width,
height=chart_height
)
trend_2d = alt.Chart(source).encode(
x = "n_days:Q",
y = alt.Y("trend_2days:Q", scale=alt.Scale(domain=(0, max(data_plot["log_cases"])))),
).mark_line(color="grey", strokeDash=[3,3])
labels = pd.DataFrame([{'label': 'Doubles every 2 days', 'x_coord': 6, 'y_coord': 4},
                       {'label': 'Doubles every 4 days', 'x_coord': 17, 'y_coord': 3.5},
                       {'label': 'Doubles every 12 days', 'x_coord': 25, 'y_coord': 2.5},
])
trend_label = (alt.Chart(labels)
.mark_text(align='left', dx=-55, dy=-15, fontSize=12, color="grey")
.encode(x='x_coord:Q',
y='y_coord:Q',
text='label:N')
)
trend_4d = alt.Chart(source).mark_line(color="grey", strokeDash=[3,3]).encode(
x = "n_days:Q",
y = alt.Y("trend_4days:Q", scale=alt.Scale(domain=(0, max(data_plot["log_cases"])))),
)
trend_12d = alt.Chart(source).mark_line(color="grey", strokeDash=[3,3]).encode(
x = "n_days:Q",
y = alt.Y("trend_12days:Q", scale=alt.Scale(domain=(0, max(data_plot["log_cases"])))),
)
(
(trend_2d + trend_4d + trend_12d + trend_label + lines)
.configure_title(fontSize=20)
.configure_axis(labelFontSize=15,titleFontSize=18)
)
```
Last Available Total Deaths By Country:
```
#hide_input
label = 'Deaths'
temp = pd.concat([x.copy() for x in data_countries_pc]).loc[lambda x: x.date >= '3/1/2020']
metric_name = f'{label} per Million'
temp.columns = ['Country', 'date', metric_name]
# temp.loc[:, 'month'] = temp.date.dt.strftime('%Y-%m')
temp.loc[:, f'Log of {label} per Million'] = temp[f'{label} per Million'].apply(lambda x: np.log10(x))
temp.groupby('Country').last()
# summary = temp.set_index('date').groupby(['Country', 'month']).last()
# pd.pivot_table(summary,
# index='Country',
# values=[f'Log of Total {label} per Million',metric_name],
# columns='month').fillna('')
#hide
# Get data and clean it
data = pd.read_csv("https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_19-covid-Confirmed.csv", error_bad_lines=False)
data = data.drop(columns=["Lat", "Long"])
data = data.melt(id_vars= ["Province/State", "Country/Region"])
data = pd.DataFrame(data.groupby(['Country/Region', "variable"]).sum())
data.reset_index(inplace=True)
data = data.rename(columns={"Country/Region": "location", "variable": "date", "value": "total_cases"})
data['date'] =pd.to_datetime(data.date)
data = data.sort_values(by = "date")
data.loc[data.location == "US","location"] = "United States"
data.loc[data.location == "Korea, South","location"] = "South Korea"
# Population data (last year is 2017 which is what we use)
data_pwt = pd.read_stata("https://www.rug.nl/ggdc/docs/pwt91.dta")
filter1 = data_pwt["year"] == 2017
data_pop = data_pwt[filter1]
data_pop = data_pop[["country","pop"]]
data_pop.loc[data_pop.country == "Republic of Korea","country"] = "South Korea"
data_pop.loc[data_pop.country == "Iran (Islamic Republic of)","country"] = "Iran"
# per habitant
data_pc = data.copy()
# I can add more countries if needed
countries = ["China", "Italy", "Spain", "France", "United Kingdom", "Germany",
"Portugal", "United States", "Singapore","South Korea", "Japan",
"Brazil","Iran"]
data_countries = []
data_countries_pc = []
# compute per inhabitant
for i in countries:
    data_pc.loc[data_pc.location == i,"total_cases"] = data_pc.loc[data_pc.location == i,"total_cases"]/float(data_pop.loc[data_pop.country == i, "pop"])
# get each country time series
filter1 = data_pc["total_cases"] > 1
for i in countries:
    filter_country = data_pc["location"] == i
    data_countries_pc.append(data_pc[filter_country & filter1])
```
## Appendix
> Warning: The following chart, "Cases Per Million of Inhabitants", is biased by how widely a country administers tests. Please read it with caution.
### Cases Per Million of Inhabitants
```
#hide_input
# Stack data to get it to Altair dataframe format
data_countries_pc2 = data_countries_pc.copy()
for i in range(0, len(countries)):
    data_countries_pc2[i] = data_countries_pc2[i].reset_index()
    data_countries_pc2[i]['n_days'] = data_countries_pc2[i].index
    data_countries_pc2[i]['log_cases'] = np.log(data_countries_pc2[i]["total_cases"])
data_plot = data_countries_pc2[0]
for i in range(1, len(countries)):
    data_plot = pd.concat([data_plot, data_countries_pc2[i]], axis=0)
data_plot["trend_2days"] = data_plot["n_days"]*1/2
data_plot["trend_4days"] = data_plot["n_days"]*1/4
data_plot["trend_12days"] = data_plot["n_days"]*1/12
data_plot["trend_2days_label"] = "Doubles every 2 days"
data_plot["trend_4days_label"] = "Doubles every 4 days"
data_plot["trend_12days_label"] = "Doubles every 12 days"
# Plot it using Altair
source = data_plot
scales = alt.selection_interval(bind='scales')
selection = alt.selection_multi(fields=['location'], bind='legend')
base = alt.Chart(source, title = "COVID-19 Confirmed Cases Since Outbreak").encode(
x = alt.X('n_days:Q', title = "Days passed since reaching 1 case per million of inhabitants"),
y = alt.Y("log_cases:Q",title = "Log of Confirmed Cases Per Million of Inhabitants"),
color = alt.Color('location:N', legend=alt.Legend(title="Country", labelFontSize=15, titleFontSize=17),
scale=alt.Scale(scheme='tableau20')),
opacity = alt.condition(selection, alt.value(1), alt.value(0.1))
).properties(
width=chart_width,
height=chart_height
)
lines = base.mark_line().add_selection(
scales
).add_selection(
selection
)
trend_2d = alt.Chart(source).encode(
x = "n_days:Q",
y = alt.Y("trend_2days:Q", scale=alt.Scale(domain=(0, max(data_plot["log_cases"])))),
).mark_line( strokeDash=[3,3], color="grey")
labels = pd.DataFrame([{'label': 'Doubles every 2 days', 'x_coord': 10, 'y_coord': 6},
                       {'label': 'Doubles every 4 days', 'x_coord': 30, 'y_coord': 6},
                       {'label': 'Doubles every 12 days', 'x_coord': 45, 'y_coord': 4},
])
trend_label = (alt.Chart(labels)
.mark_text(align='left', dx=-55, dy=-15, fontSize=12, color="grey")
.encode(x='x_coord:Q',
y='y_coord:Q',
text='label:N')
)
trend_4d = alt.Chart(source).mark_line(color="grey", strokeDash=[3,3]).encode(
x = "n_days:Q",
y = alt.Y("trend_4days:Q", scale=alt.Scale(domain=(0, max(data_plot["log_cases"])))),
)
trend_12d = alt.Chart(source).mark_line(color="grey", strokeDash=[3,3]).encode(
x = "n_days:Q",
y = alt.Y("trend_12days:Q", scale=alt.Scale(domain=(0, max(data_plot["log_cases"])))),
)
(
(trend_2d + trend_4d + trend_12d + trend_label + lines)
.configure_title(fontSize=20)
.configure_axis(labelFontSize=15,titleFontSize=18)
)
#hide_input
label = 'Cases'
temp = pd.concat([x.copy() for x in data_countries_pc]).loc[lambda x: x.date >= '3/1/2020']
metric_name = f'{label} per Million'
temp.columns = ['Country', 'date', metric_name]
# temp.loc[:, 'month'] = temp.date.dt.strftime('%Y-%m')
temp.loc[:, f'Log of {label} per Million'] = temp[f'{label} per Million'].apply(lambda x: np.log10(x))
# summary = temp.set_index('date').groupby(['Country', 'month']).last()
# pd.pivot_table(summary,
# index='Country',
# values=[f'Log of Total {label} per Million',metric_name],
# columns='month').fillna('')
temp.groupby('Country').last()
```
This analysis was conducted by [Joao B. Duarte](https://www.jbduarte.com). Assistance with creating the visualizations was provided by [Hamel Husain](https://twitter.com/HamelHusain). Relevant sources are listed below:
1. ["2019 Novel Coronavirus COVID-19 (2019-nCoV) Data Repository by Johns Hopkins CSSE"](https://systems.jhu.edu/research/public-health/ncov/) [GitHub repository](https://github.com/CSSEGISandData/COVID-19).
2. [Feenstra, Robert C., Robert Inklaar and Marcel P. Timmer (2015), "The Next Generation of the Penn World Table" American Economic Review, 105(10), 3150-3182](https://www.rug.nl/ggdc/productivity/pwt/related-research)
```
# Required to load webpages
from IPython.display import IFrame
```
[Table of contents](../toc.ipynb)
<img src="https://github.com/scipy/scipy/raw/master/doc/source/_static/scipyshiny_small.png" alt="Scipy" width="150" align="right">
# SciPy
* Scipy extends numpy with powerful modules for
  * optimization,
  * interpolation,
  * linear algebra,
  * Fourier transforms,
  * signal processing,
  * image processing,
  * file input/output, and many more.
* Please find the scipy reference here for a complete feature list: [https://docs.scipy.org/doc/scipy/reference/](https://docs.scipy.org/doc/scipy/reference/).
We will take a look at some features of scipy below. Please explore the rich content of this package further on your own.
## Optimization
* Scipy's optimization module provides many optimization methods like least squares, gradient methods, BFGS, global optimization, and many more.
* Please find a detailed tutorial here [https://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html](https://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html).
* Next, we will apply one of the optimization algorithms in a simple example.
A common function for testing optimization algorithms is the Rosenbrock function of $N$ variables:
$f(\boldsymbol{x}) = \sum\limits_{i=1}^{N-1} \left[ 100 \left(x_{i+1} - x_i^2\right)^2 + \left(1 - x_i \right)^2 \right]$.
The optimum is at $x_i=1$, where $f(\boldsymbol{x})=0$.
```
import numpy as np
from mpl_toolkits.mplot3d import axes3d
import matplotlib.pyplot as plt
from matplotlib import cm
%matplotlib inline
def rosen(x):
    """The Rosenbrock function"""
    return sum(100.0*(x[1:]-x[:-1]**2.0)**2.0 + (1-x[:-1])**2.0)
```
We need to generate some data in a mesh grid.
```
X = np.arange(-2, 2, 0.2)
Y = np.arange(-2, 2, 0.2)
X, Y = np.meshgrid(X, Y)
data = np.vstack([X.reshape(X.size), Y.reshape(Y.size)])
```
Let's evaluate the Rosenbrock function at the grid points.
```
Z = rosen(data)
```
And we will plot the function in a 3D plot.
```
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot_surface(X, Y, Z.reshape(X.shape), cmap='bwr')
ax.view_init(40, 230)
```
Now, let us check that the true minimum value is at (1, 1).
```
rosen(np.array([1, 1]))
```
Finally, we call scipy's minimize and find the minimum with the Nelder–Mead algorithm.
```
from scipy.optimize import minimize
x0 = np.array([1.3, 0.7])
res = minimize(rosen, x0, method='nelder-mead',
options={'xatol': 1e-8, 'disp': True})
print(res.x)
```
Many more optimization examples can be found in the scipy optimize tutorial [https://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html](https://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html).
```
IFrame(src='https://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html',
width=1000, height=600)
```
## Interpolation
* Interpolation of data is very often required, for instance to replace NaNs or to fill missing values in data records.
* Scipy comes with
* 1D interpolation,
  * multivariate data interpolation,
* spline, and
* radial basis function interpolation.
* Please find here the link to interpolation tutorials [https://docs.scipy.org/doc/scipy/reference/tutorial/interpolate.html](https://docs.scipy.org/doc/scipy/reference/tutorial/interpolate.html).
```
from scipy.interpolate import interp1d
x = np.linspace(10, 20, 15)
y = np.sin(x) + np.cos(x**2 / 10)
f = interp1d(x, y, kind="linear")
f1 = interp1d(x, y, kind="cubic")
x_fine = np.linspace(10, 20, 200)
plt.plot(x, y, 'ko',
x_fine, f(x_fine), 'b--',
x_fine, f1(x_fine), 'r--')
plt.legend(["Data", "Linear", "Cubic"])
plt.show()
```
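To illustrate the missing-value use case mentioned above, NaN gaps in a series can be filled by interpolating from the valid samples. A minimal sketch with `np.interp` (the data here is made up for the example):

```python
import numpy as np

y = np.array([0.0, 1.0, np.nan, 3.0, np.nan, 5.0])
idx = np.arange(len(y))
mask = np.isnan(y)
# linearly interpolate the missing entries from the valid ones
y[mask] = np.interp(idx[mask], idx[~mask], y[~mask])
print(y)  # -> [0. 1. 2. 3. 4. 5.]
```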
## Signal processing
The signal processing module is very powerful and we will have a look at its tutorial [https://docs.scipy.org/doc/scipy/reference/tutorial/signal.html](https://docs.scipy.org/doc/scipy/reference/tutorial/signal.html) for a quick overview.
```
IFrame(src='https://docs.scipy.org/doc/scipy/reference/tutorial/signal.html',
width=1000, height=600)
```
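As a small standalone taste of the module, the sketch below applies a low-pass Butterworth filter to a noisy sine wave (the frequencies and filter order are arbitrary choices for the example):

```python
import numpy as np
from scipy import signal

np.random.seed(0)
t = np.linspace(0, 1, 500)
clean = np.sin(2 * np.pi * 5 * t)                # 5 Hz sine
noisy = clean + 0.5 * np.random.randn(t.size)    # add white noise
# 4th-order low-pass Butterworth, 10 Hz cutoff at a 500 Hz sampling rate
b, a = signal.butter(4, 10, btype='low', fs=500)
filtered = signal.filtfilt(b, a, noisy)          # zero-phase filtering
print(np.std(noisy - clean), np.std(filtered - clean))  # the residual shrinks after filtering
```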
## Linear algebra
* In addition to numpy, scipy has its own linear algebra module.
* It offers more functionality than numpy's linear algebra module and is based on BLAS/LAPACK support, which makes it faster.
* The respective tutorial is located here [https://docs.scipy.org/doc/scipy/reference/tutorial/linalg.html](https://docs.scipy.org/doc/scipy/reference/tutorial/linalg.html).
```
IFrame(src='https://docs.scipy.org/doc/scipy/reference/tutorial/linalg.html',
width=1000, height=600)
```
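Before diving into the tutorial, a quick example of the module in action: solving a small linear system and checking the result.

```python
import numpy as np
from scipy import linalg

A = np.array([[3.0, 2.0], [1.0, 4.0]])
b = np.array([5.0, 6.0])
x = linalg.solve(A, b)           # solves A x = b via LAPACK
print(x)                         # -> [0.8 1.3]
print(np.allclose(A @ x, b))     # -> True
```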
### Total least squares as linear algebra application
<img src="ls-tls.png" alt="LS vs TLS" width="350" align="right">
We will now implement a total least squares estimator [[Markovsky2007]](../references.bib) with the help of scipy's singular value decomposition (svd). The total least squares estimator provides a solution for the errors-in-variables problem, where model inputs and outputs are corrupted by noise.
The model becomes
$A X \approx B$, where $A \in \mathbb{R}^{m \times n}$ and $B \in \mathbb{R}^{m \times d}$ are input and output data, and $X \in \mathbb{R}^{n \times d}$ is the unknown parameter matrix.
More specifically, the total least squares regression becomes
$\widehat{A}X = \widehat{B}$, $\widehat{A} := A + \Delta A$, $\widehat{B} := B + \Delta B$.
The estimator can be written as pseudo code as follows.
$C = [A B] = U \Sigma V^\top$, where $U \Sigma V^\top$ is the svd of $C$.
$V := \begin{bmatrix} V_{11} & V_{12} \\ V_{21} & V_{22} \end{bmatrix}$,
$\widehat{X} = -V_{12} V_{22}^{-1}$.
In Python, the implementation could look like the following function.
```
from scipy import linalg
def tls(A, B):
    """Total least squares estimate via the SVD of the stacked data matrix [A B]."""
    m, n = A.shape
    C = np.hstack((A, B))
    U, S, V = linalg.svd(C)
    V12 = V.T[0:n, n:]
    V22 = V.T[n:, n:]
    X = -V12 @ linalg.inv(V22)  # X_hat = -V12 V22^{-1}
    return X
```
Now we create some data where input and output are appended with noise.
```
A = np.random.rand(100, 2)
X = np.array([[3], [-7]])
B = A @ X
A += np.random.randn(100, 2) * 0.1
B += np.random.randn(100, 1) * 0.1
```
The total least squares solution becomes
```
tls(A, B)
```
And this solution is closer to the correct value $X = [3, -7]^\top$ than ordinary least squares.
```
linalg.solve((A.T @ A), (A.T @ B))
```
Finally, the next function shows a self-written least squares estimator, which uses QR decomposition and back substitution. This implementation is numerically robust, in contrast to the normal equations
$A ^\top A X = A^\top B$.
Please find more explanation in [[Golub2013]](../references.bib) and in section 3.11 of [[Burg2012]](../references.bib).
```
def ls(A, B):
Q, R = linalg.qr(A, mode="economic")
z = Q.T @ B
return linalg.solve_triangular(R, z)
ls(A, B)
```
## Integration
* Scipy's integration can be used for general equations as well as for ordinary differential equations.
* The integration tutorial is linked here [https://docs.scipy.org/doc/scipy/reference/tutorial/integrate.html](https://docs.scipy.org/doc/scipy/reference/tutorial/integrate.html).
### Solving a differential equation
Here, we want to use an ode solver to simulate the differential equation (ode)
$y'' + y' + 4 y = 0$.
To solve this second-order ode, we need to convert it into a system of first-order odes. The trick is to use this substitution: $x_0 = y$, $x_1 = y'$, which yields
$\begin{align}
x'_0 &= x_1 \\
x'_1 &= -4 x_0 - x_1
\end{align}$
The implementation in Python becomes:
```
def equation(t, x):
return [x[1], -4 * x[0] - x[1]]
from scipy.integrate import solve_ivp
time_span = [0, 20]
init = [1, 0]
time = np.arange(0, 20, 0.01)
sol = solve_ivp(equation, time_span, init, t_eval=time)
plt.plot(time, sol.y[0, :])
plt.plot(time, sol.y[1, :])
plt.legend(["$y$", "$y'$"])
plt.xlabel("Time")
plt.show()
```
# How does a car's suspension work?
<div>
<img style="float: left; margin: 0px 0px 15px 0px;" src="https://upload.wikimedia.org/wikipedia/commons/thumb/c/ce/Packard_wishbone_front_suspension_%28Autocar_Handbook%2C_13th_ed%2C_1935%29.jpg/414px-Packard_wishbone_front_suspension_%28Autocar_Handbook%2C_13th_ed%2C_1935%29.jpg" width="150px" height="50px" />
<img style="float: center; margin: 0px 0px 15px 15px;" src="https://upload.wikimedia.org/wikipedia/commons/d/df/Radaufhängung_Renault.JPG" width="150px" height="100px" />
</div>
> A first approximation to the model of a car's suspension is to consider the *damped harmonic oscillator*.
<img style="float: center; margin: 0px 0px 15px 0px;" src="https://upload.wikimedia.org/wikipedia/commons/4/45/Mass_spring_damper.svg" width="300px" height="100px" />
Reference:
- https://es.wikipedia.org/wiki/Oscilador_arm%C3%B3nico#Oscilador_arm.C3.B3nico_amortiguado
A **model** that describes the behavior of the above mechanical system is
\begin{equation}
m\frac{d^2 x}{dt^2}=-c\frac{dx}{dt}-kx
\end{equation}
where $c$ is the damping constant and $k$ is the spring constant. <font color=red> Review the modeling </font>
Documentation for the packages we will use today.
- https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.odeint.html
- https://docs.scipy.org/doc/scipy/reference/index.html
___
In `python` there is a function called <font color = blue>_odeint_</font> from the <font color = blue>_integrate_</font> package of the <font color = blue>_scipy_</font> library, which can integrate first-order vector systems of the form
\begin{equation}
\frac{d\boldsymbol{y}}{dt} = \boldsymbol{f}(\boldsymbol{y},t); \qquad \text{ with }\quad \boldsymbol{y}\in\mathbb{R}^n,\quad \boldsymbol{f}:\mathbb{R}^n\times\mathbb{R}_{+}\to\mathbb{R}^n
\end{equation}
with initial conditions $\boldsymbol{y}(0) = \boldsymbol{y}_{0}$. Note that <font color=red> $\boldsymbol{y}$ represents a vector of $n$ components</font>.
Now, if we look closely, the *damped harmonic oscillator* model we obtained is a second-order ordinary differential equation (ODE). No problem: we can turn it into a system of first-order equations as follows:
1. Choose the vector $\boldsymbol{y}=\left[y_1\quad y_2\right]^T$, with $y_1=x$ and $y_2=\frac{dx}{dt}$.
2. Note that $\frac{dy_1}{dt}=\frac{dx}{dt}=y_2$ and $\frac{dy_2}{dt}=\frac{d^2x}{dt^2}=-\frac{c}{m}\frac{dx}{dt}-\frac{k}{m}x=-\frac{c}{m}y_2-\frac{k}{m}y_1$.
3. Then the second-order model can be represented by the following first-order vector system:
\begin{equation}
\frac{d\boldsymbol{y}}{dt}=\left[\begin{array}{c}\frac{dy_1}{dt} \\ \frac{dy_2}{dt}\end{array}\right]=\left[\begin{array}{c}y_2 \\ -\frac{k}{m}y_1-\frac{c}{m}y_2\end{array}\right]=\left[\begin{array}{cc}0 & 1 \\-\frac{k}{m} & -\frac{c}{m}\end{array}\right]\boldsymbol{y}.
\end{equation}
```
# First we import all the libraries, packages and/or functions we will use
from matplotlib import pyplot as plt
from scipy.integrate import odeint
import numpy as np
# Help for the odeint function
odeint?
# Function f(y,t) that we will integrate
def amortiguado(y, t, k, m, c):
y1 = y[0]
y2 = y[1]
return np.array([y2,
-k / m * y1 - c / m * y2])
# Define the parameters k, m and c
k = 3
m = 30
c = 5
# Initial conditions
x0 = 0.5 # m
dx0 = 0 # m/s
y0 = [x0, dx0]
# Specify the time points where we want the solution
t = np.linspace(0, 100, 1000)
# Numerical solution
y = odeint(func=amortiguado,
y0=y0,
t=t,
args=(k, m, c))
```
How does odeint return the solutions?
```
# Show the solution
y
# Check the shape of the solution
y.shape
```
- $y$ is a matrix with n rows and 2 columns.
- The first column of $y$ corresponds to $y_1$.
- The second column of $y$ corresponds to $y_2$.
How do we extract the results $y_1$ and $y_2$ separately?
```
# Extract y1 and y2
x = y[:, 0]
dx = y[:, 1]
```
### To work through together...
- Plot $y_1$ vs. $t$ and $y_2$ vs. $t$ in the same window... what do you observe?
```
# Plot
plt.figure(figsize=(6, 4))
plt.plot(t, x, label="Position $y_1(t)=x(t)$")
plt.plot(t, dx, label="Velocity $y_2(t)=x'(t)$")
plt.xlabel("Time")
plt.legend()
```
- Plot $y_2/\omega_0$ vs. $y_1$... how do these plots complement each other? Conclusions?
```
# Plot
plt.figure(figsize=(6, 4))
plt.plot(x, dx)
plt.xlabel("Position")
plt.ylabel("Velocity")
```
## Depending on the parameters, 3 types of solutions
We had
\begin{equation}
m\frac{d^2 x}{dt^2} + c\frac{dx}{dt} + kx = 0
\end{equation}
if we recall that $\omega_0 ^2 = \frac{k}{m}$ and define $\frac{c}{m}\equiv 2\Gamma$, we get
\begin{equation}
\frac{d^2 x}{dt^2} + 2\Gamma \frac{dx}{dt}+ \omega_0^2 x = 0
\end{equation}
<font color=blue>The behavior is determined by the roots of the characteristic equation. See on the board...</font>
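The roots of the characteristic equation $\lambda^2 + 2\Gamma\lambda + \omega_0^2 = 0$ can also be checked numerically. A minimal sketch with numpy, reusing the parameter values from the first simulation:

```python
import numpy as np

# parameters from the first simulation
k, m, c = 3, 30, 5
w02 = k / m        # omega_0 squared
G = c / (2 * m)    # Gamma
# characteristic polynomial: lambda^2 + 2*Gamma*lambda + omega_0^2 = 0
roots = np.roots([1, 2 * G, w02])
print(roots)  # a complex-conjugate pair -> oscillatory (underdamped) motion
```

Complex roots signal oscillation; two distinct real roots signal overdamping; a repeated real root signals critical damping.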
### Underdamped
If $\omega_0^2 > \Gamma^2$ we get *underdamped* oscillatory motion.
```
k = 3
m = 30
c = 5
w02 = k / m
G = c / (2 * m)
w02, G**2
w02 > G**2
```
So the first case we presented earlier corresponds to underdamped motion.
```
# Plot, again
plt.figure(figsize=(6, 4))
plt.plot(t, x, label="Position $y_1(t)=x(t)$")
plt.plot(t, dx, label="Velocity $y_2(t)=x'(t)$")
plt.xlabel("Time")
plt.legend()
plt.figure(figsize=(6, 4))
plt.plot(x, dx)
plt.xlabel("Position")
plt.ylabel("Velocity")
```
### Overdamped
If $\omega_0^2 < \Gamma^2$ the motion is *overdamped*.
```
# New constants
k = .1   # Spring constant
m = 1.0  # Mass
c = 1    # Damping constant
```
Simulate and plot...
```
w02 = k / m
G = c / (2 * m)
w02, G
w02 < G**2
# Simulate
y = odeint(func=amortiguado,
           y0=y0,
           t=t,
           args=(k, m, c))
xs = y[:, 0]
dxs = y[:, 1]
# Plot
plt.figure(figsize=(6, 4))
plt.plot(t, xs, label="Position $y_1(t)=x(t)$")
plt.plot(t, dxs, label="Velocity $y_2(t)=x'(t)$")
plt.xlabel("Time")
plt.legend()
plt.figure(figsize=(6, 4))
plt.plot(xs, dxs)
plt.xlabel("Position")
plt.ylabel("Velocity")
```
### Critical damping
If $\omega_0^2 = \Gamma^2$ the motion is *critically damped*.
```
# New constants
k = .0625  # Spring constant
m = 1.0    # Mass
c = .5     # Damping constant
```
Simulate and plot...
```
w02 = k / m
G = c / (2 * m)
w02, G**2
w02 == G**2
# Simulate
y = odeint(func=amortiguado,
           y0=y0,
           t=t,
           args=(k, m, c))
xc = y[:, 0]
dxc = y[:, 1]
# Plot
plt.figure(figsize=(6, 4))
plt.plot(t, xc, label="Position $y_1(t)=x(t)$")
plt.plot(t, dxc, label="Velocity $y_2(t)=x'(t)$")
plt.xlabel("Time")
plt.legend()
plt.figure(figsize=(6, 4))
plt.plot(xc, dxc)
plt.xlabel("Position")
plt.ylabel("Velocity")
```
In summary, we then have:
```
tt = t
fig, ((ax1, ax2, ax3), (ax4, ax5, ax6)) = plt.subplots(2, 3, sharex='col',
sharey='row',figsize =(10,6))
ax1.plot(tt, x, c = 'k')
ax1.set_title('Underdamped', fontsize = 14)
ax1.set_ylabel('Position', fontsize = 14)
ax2.plot(tt, xs, c = 'b')
ax2.set_title('Overdamped', fontsize = 14)
ax3.plot(tt, xc, c = 'r')
ax3.set_title('Critical', fontsize = 14)
ax4.plot(tt, dx, c = 'k')
ax4.set_ylabel('Velocity', fontsize = 14)
ax4.set_xlabel('time', fontsize = 14)
ax5.plot(tt, dxs, c = 'b')
ax5.set_xlabel('time', fontsize = 14)
ax6.plot(tt, dxc, c = 'r')
ax6.set_xlabel('time', fontsize = 14)
plt.show()
```
> **Homework**. What does the phase space look like for the different cases and for different initial conditions?
> In a figure like the one above, make phase-plane plots for the different kinds of motion and for four different sets of initial conditions:
- y0 = [1, 1]
- y0 = [1, -1]
- y0 = [-1, 1]
- y0 = [-1, -1]
Do the above in a new jupyter notebook called Tarea7_ApellidoNombre.ipynb and upload it to the space provided.
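A possible starting point for the phase-plane task (a sketch only; the parameter set shown is the underdamped one, and the labels are illustrative):

```python
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt

def amortiguado(y, t, k, m, c):
    # damped harmonic oscillator as a first-order system
    return [y[1], -k / m * y[0] - c / m * y[1]]

t = np.linspace(0, 100, 1000)
initial_conditions = [[1, 1], [1, -1], [-1, 1], [-1, -1]]
plt.figure(figsize=(6, 4))
for y0 in initial_conditions:
    sol = odeint(amortiguado, y0, t, args=(3, 30, 5))  # underdamped case
    plt.plot(sol[:, 0], sol[:, 1], label=f"y0={y0}")
plt.xlabel("Position")
plt.ylabel("Velocity")
plt.legend()
plt.show()
```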
<footer id="attribution" style="float:right; color:#808080; background:#fff;">
Created with Jupyter by Lázaro Alonso. Modified by Esteban Jiménez Rodríguez.
</footer>
## RaTG13 de-novo assembly - plots of assembled nucleotide sequences
Multiple preprints have questioned the validity of the metagenomic dataset upon which RaTG13 is based (Zhang, 2020; Rahalkar and Bahulikar, 2020; Singla et al., 2020; Signus, 2020). Here we undertook de-novo assembly of the metagenomic dataset uploaded by Zhou et al. (2020) to NCBI (SRA accession SRR11085797) using MEGAHIT (v.1.2.9) and CoronaSPAdes (v.3.15.0). We repeated the MEGAHIT assembly methodology of Singla et al. (2020), by utilising three settings: i) default; ii) maximum k-mer size set to 79; and iii) default settings with k-step=10 and --no-mercy option. For CoronaSPAdes, default settings were used. Each final assembly was then Blasted against the default nt dataset via NCBI. Consensus sequences from each of the assemblies are shown in Fig. 1 below. 16 contiguous sections of the 29855 nucleotide long RaTG13 sequence were not recovered during assembly of the SRA data by any of the methods used. This included 7 sequences of > 100 amino acids long, including a 48 amino acid long sequence in the NTD covering insert 3 identified by Zhou et al. (2020).
On 19/5/2021, an amplicon dataset (accession SRR11806578) was uploaded to NCBI by Zhou et al. (2020), presumably in response to findings by Zhang (2020) and Singla et al. (2020). This dataset was also Blasted and the results are shown below.
1) [Blast2.ipynb](files/Blast2.ipynb) was run on the following assemblies
- final.contigs.fa MEGAHIT results using default settings for NCBI accession SRR11085797
- final.contigs.fa MEGAHIT results using max Kmer of K79 for NCBI accession SRR11085797
- final.contigs.fa MEGAHIT results using k-step10 and --no-mercy option for NCBI accession SRR11085797
- gene_clusters.fasta CoronaSPAdes results using default settings for NCBI accession SRR11085797
- SRR11806578.fa generated using Biopython from SRR11806578.fastq, sourced from NCBI accession SRR11806578
2) The consensus fasta sequences generated in [Blast2.ipynb](files/Blast2.ipynb) were then used in this notebook.
The first 4 files in the list above come from the SRA data uploaded to NCBI on 13/3/2021; the last file contains amplicon data uploaded to NCBI on 19/5/2021.
We agree with the findings of Zhang (2020) and Singla et al. (2020), in that RaTG13 cannot be assembled using the RaTG13 SRA dataset SRR11085797. Further, we find the full genome sequence still cannot be assembled when combined with the amplicon dataset SRR11806578, as the first 14 nt are missing from sequence matches in both SRR11085797 and SRR11806578.
<img src="figures/5_RaTG13_SL3_R1_stiched_asm_seqs_amplicon.png">
Figure 1. Consensus nucleotide sequences of accession SRR11085797 generated using assembly with MEGAHIT and CoronaSPAdes and Blasted using NCBI Blastn against MN996532.2 (first four dark blue rows), as well as the amplicon dataset NCBI accession SRR11806578 shown in the bottom row.
Nucleotide sequences with significant alignments to RaTG13 (MN996532.2) are shown in dark blue. Red shows areas of poor coverage (single nt hits surrounded by empty reads); yellow shows areas of very poor coverage. Note that several regions in the amplicon dataset show poor sequence coverage.
Structural subdomains of the RaTG13 genome are shown at the top of the image.
```
import os
import collections
import re
import pathlib
from io import StringIO
from Bio import SeqIO
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from operator import itemgetter
from itertools import groupby
FASTA_PATH='../../fasta/'
TARGET_FILE='MN996532_2_RaTG13_complete_genome.fa'
ASM_PATH=FASTA_PATH+'asm_out/'
FASTA_LIST=['RaTG13_SL3_R1_megahit_default.fa', 'RaTG13_SL3_R1_megahit_k79.fa',\
'RaTG13_SL3_R1_megahit_kstep10_nm.fa', 'RaTG13_SL3_R1_coronaspades_default.fa', \
'RaTG13_SRR11806578_SRR11806578_amplicons.fa']
COV_NAME='RaTG13'
ASM_CODE='SL3_R1'
#specific to each genome, below are nt locations for major subdomain boundaries (ORF1ab etc) for RaTG13
#leave as empty list if not required
SUBDOMAIN_LOCATIONS=[266, 21563,22549,23311,23604,25360,25381,26208,26511,27179,28262,29521]
OUT_PATH=ASM_PATH+'comparison_plots/'
pathlib.Path(OUT_PATH).mkdir(exist_ok=True)
query_file = os.path.join(FASTA_PATH, TARGET_FILE)
fasta_files = [os.path.join(ASM_PATH, x) for x in FASTA_LIST]
query_target = SeqIO.read(query_file, format="fasta")
query_target_seq=query_target.seq
fasta_targets = [SeqIO.read(x, format="fasta") for x in fasta_files]
assert len(fasta_targets)==len(FASTA_LIST)
fasta_seqs=[str(x.seq) for x in fasta_targets]
fasta_titles=[x.description for x in fasta_targets]
for s in fasta_seqs:
assert len(s)==len(query_target_seq)
def plot_blocked_seq(stack_arr, name='sequences_blocked.png', cmap='CMRmap_r', title=''):
print(f'>>plot_blocked_seq, stack_arr: {stack_arr.shape}')
fig= plt.figure(figsize=(20,6))
plt.imshow(stack_arr, cmap=plt.get_cmap(cmap))
ax = plt.gca()
ax.axes.yaxis.set_visible(False)
plt.xlabel('nucleotides', fontsize=10)
plt.xticks(fontsize=10)
plt.title(f'{title}', fontsize=12)
plt.tight_layout()
plt.savefig(name, dpi=600)
plt.show()
def ord_convert(x):
'''convert each character in array to its integer representation'''
return ord(x)
ord_v = np.vectorize(ord_convert)
seq_arrays=[np.array(list(x)) for x in fasta_seqs]
ord_arrays=[]
for seqa in seq_arrays:
ta=ord_v(seqa)
#change '-' char value to zero for background colour
ta[ta == 45] = 0
ord_arrays.append(ta)
#add an empty array in between each for plotting
spacer_array=np.zeros(len(query_target_seq))
spaced_seqs=[]
for a in ord_arrays:
spaced_seqs.append(a)
spaced_seqs.append(spacer_array)
subdomain_boundaries = SUBDOMAIN_LOCATIONS
subdomain_array = np.zeros(len(query_target_seq))
for b in subdomain_boundaries:
subdomain_array[b]= 84
twod_borders=np.stack([subdomain_array, subdomain_array], axis=0)
spacer_twod=np.stack([spacer_array, spacer_array], axis=0)
twod_borders=np.stack([twod_borders, spacer_twod], axis=0)
twod_borders=twod_borders.reshape(4, len(query_target_seq))
twod_borders.shape
twod_borders_repeated = np.repeat(twod_borders, repeats=500, axis=0)
twod_borders_repeated.shape
#convert to 2D so can plot
stacked=np.stack(spaced_seqs, axis=0)
np.unique(stacked[0])
stacked_repeated = np.repeat(stacked, repeats=500, axis=0)
stacked_repeated.shape
twod_borders_on_stack=np.vstack((twod_borders_repeated, stacked_repeated))
twod_borders_on_stack.shape
#if want a tile:
#plot_title=', '.join(x for x in FASTA_LIST)
plot_title=''
plot_blocked_seq(twod_borders_on_stack, name=OUT_PATH+f'{len(FASTA_LIST)}_{COV_NAME}_{ASM_CODE}_stiched_asm_seqs.png', title=plot_title)
```
Figure 2 (generation of Figure 1). In the figure above we can see 5 rows of nucleotide sequences. The first four rows show consensus sequences from assembly of the SRA dataset NCBI accession SRR11085797. There are multiple gaps in the dataset, and the RaTG13 sequence cannot be assembled from these alone.
The last row shows consensus sequences from assembly of the amplicon dataset NCBI accession SRR11806578, which covers most gaps; however, the first 14 nt are still not covered, as well as multiple single to triple nucleotide locations in the spike protein.
```
missing_nns = np.argwhere(np.all(stacked[..., :] == 0, axis=0)).tolist()
len(missing_nns)
missing_nns = [item for sublist in missing_nns for item in sublist]
#S protein location specific to RaTG13
spike_seq_missing = [i for i in missing_nns if int(i) >= 21563 and int(i)<=25384 ]
len(spike_seq_missing)
def group(L):
    '''after https://stackoverflow.com/questions/2154249/identify-groups-of-continuous-numbers-in-a-list'''
first = last = L[0]
for n in L[1:]:
if n - 1 == last: # Part of the group, bump the end
last = n
else: # Not part of the group, yield current group and start a new
yield first, last
first = last = n
yield first, last # Yield the last group
print (list(group(spike_seq_missing)))
grouped_missing= list(group(missing_nns))
```
The list below shows tuples of (start, end) of gaps in coverage across all the fasta inputs
```
grouped_missing
missing_nn_locations=[]
for g in grouped_missing:
if g[0]==g[1]:
missing_nn_locations.append(g[0])
else:
missing_nn_locations.append(g)
missing_nn_locations
deltas=[]
for t in grouped_missing:
deltas.append(1+t[1]-t[0])
# number and length of each gap (in NN)
len(deltas),deltas
plt.bar(list(range(len(deltas))), deltas)
pct_missing = (len(missing_nns) / len(query_target_seq)) * 100
print(f'{pct_missing:.2f}% of the genome is not covered by any assembly method')
```
### Summary
When the results of de-novo assembly of SRR11085797 using 4 methods and the SRR11806578 dataset are all blasted against MN996532.2, and the results stacked to identify any gaps in coverage (Fig. 1), we find that:
1) there is no coverage of the first 14nt of RaTG13 (MN996532.2) by either SRR11085797 or SRR11806578
2) 21 single to triple nn locations (22326, 22333, 22369-22370, 22374, 22378, 22384, 22386, 22389, 22394, 22396, 22399-22400, 22402, 22405, 22407, 22411, 22414-22416, 22418, 22420-22421, 22423, 22425, 22427) in the RaTG13 spike protein are not covered by either SRR11085797 or SRR11806578 using the assembly and Blastn parameters we used.
As such, we find that RaTG13 cannot be assembled from the sequences provided by Zhou et al. (2020).
### References
Rahalkar, M.; Bahulikar, R. The Anomalous Nature of the Fecal Swab Data, Receptor Binding Domain and Other Questions in RaTG13 Genome. https://www.preprints.org/manuscript/202008.0205/v3 doi: 10.20944/preprints202008.0205.v3
Signus, J. Anomalous datasets reveal metagenomic fabrication pipeline that further questions the legitimacy of RaTG13 genome and the associated Nature paper. Preprint. https://vixra.org/abs/2010.0164
Singla, M., Ahmad, S., Gupta, C., Sethi, T. De-novo Assembly of RaTG13 Genome Reveals Inconsistencies Further Obscuring SARS-CoV-2 Origins. https://www.preprints.org/manuscript/202008.0595/v1 doi: 10.20944/preprints202008.0595.v1
Zhang, D. 2020. Anomalies in BatCoV/RaTG13 sequencing and provenance. Zenodo. http://doi.org/10.5281/zenodo.4064067
Zhou, P., Yang, XL., Wang, XG. et al., 2020. A pneumonia outbreak associated with a new coronavirus of probable bat origin. Nature 579, 270–273 (2020). https://doi.org/10.1038/s41586-020-2012-7
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# This code creates a virtual display to draw game images on.
# If you are running locally, just ignore it
import os
if type(os.environ.get("DISPLAY")) is not str or len(os.environ.get("DISPLAY"))==0:
!bash ../xvfb start
%env DISPLAY=:1
```
### OpenAI Gym
We're gonna spend the next several weeks learning algorithms that solve decision processes. We will then need some interesting decision problems to test our algorithms on.
That's where OpenAI gym comes into play. It's a python library that wraps many classical decision problems including robot control, videogames and board games.
So here's how it works:
```
import gym
env = gym.make("MountainCar-v0")
plt.imshow(env.render('rgb_array'))
print("Observation space:", env.observation_space)
print("Action space:", env.action_space)
```
Note: if you're running this on your local machine, you'll see a window pop up with the image above. Don't close it, just alt-tab away.
### Gym interface
The three main methods of an environment are
* __reset()__ - reset environment to initial state, _return first observation_
* __render()__ - show current environment state (a more colorful version :) )
* __step(a)__ - commit action __a__ and return (new observation, reward, is done, info)
  * _new observation_ - an observation right after committing the action __a__
  * _reward_ - a number representing your reward for committing action __a__
* _is done_ - True if the MDP has just finished, False if still in progress
  * _info_ - some auxiliary stuff about what just happened. Ignore it ~~for now~~.
```
obs0 = env.reset()
print("initial observation code:", obs0)
# Note: in MountainCar, observation is just two numbers: car position and velocity
print("taking action 2 (right)")
new_obs, reward, is_done, _ = env.step(2)
print("new observation code:", new_obs)
print("reward:", reward)
print("is game over?:", is_done)
# Note: as you can see, the car has moved to the right slightly (around 0.0005)
```
### Play with it
Below is the code that drives the car to the right.
However, it doesn't reach the flag at the far right due to gravity.
__Your task__ is to fix it. Find a strategy that reaches the flag.
You're not required to build any sophisticated algorithms for now, feel free to hard-code :)
_Hint: your action at each step should depend either on __t__ or on __s__._
```
# create env manually to set time limit. Please don't change this.
TIME_LIMIT = 250
env = gym.wrappers.TimeLimit(gym.envs.classic_control.MountainCarEnv(),
max_episode_steps=TIME_LIMIT + 1)
s = env.reset()
actions = {'left': 0, 'stop': 1, 'right': 2}
# prepare "display"
%matplotlib notebook
fig = plt.figure()
ax = fig.add_subplot(111)
fig.show()
for t in range(TIME_LIMIT):
# change the line below to reach the flag
s, r, done, _ = env.step(actions['right' if s[1] >= 0 else 'left'])
#draw game image on display
ax.clear()
ax.imshow(env.render('rgb_array'))
fig.canvas.draw()
if done:
print("Well done!")
break
else:
print("Time limit exceeded. Try again.")
assert s[0] > 0.47
print("You solved it!")
```
# Financial Planning with APIs and Simulations
In this Challenge, you’ll create two financial analysis tools by using a single Jupyter notebook:
Part 1: A financial planner for emergencies. The members will be able to use this tool to visualize their current savings. The members can then determine if they have enough reserves for an emergency fund.
Part 2: A financial planner for retirement. This tool will forecast the performance of their retirement portfolio in 30 years. To do this, the tool will make an Alpaca API call via the Alpaca SDK to get historical price data for use in Monte Carlo simulations.
You’ll use the information from the Monte Carlo simulation to answer questions about the portfolio in your Jupyter notebook.
```
# Imports the required libraries and dependencies
import os
import requests
import json
import pandas as pd
from dotenv import load_dotenv
import alpaca_trade_api as tradeapi
from MCForecastTools import MCSimulation
%matplotlib inline
# Loads the environment variables from the .env file
#by calling the load_dotenv function
load_dotenv()
```
## Part 1: Create a Financial Planner for Emergencies
### Evaluate the Cryptocurrency Wallet by Using the Requests Library
In this section, you’ll determine the current value of a member’s cryptocurrency wallet. You’ll collect the current prices for the Bitcoin and Ethereum cryptocurrencies by using the Python Requests library. For the prototype, you’ll assume that the member holds 1.2 Bitcoins (BTC) and 5.3 Ethereum coins (ETH). To do all this, complete the following steps:
1. Create a variable named `monthly_income`, and set its value to `12000`.
2. Use the Requests library to get the current price (in US dollars) of Bitcoin (BTC) and Ethereum (ETH) by using the API endpoints that the starter code supplies.
3. Navigate the JSON response object to access the current price of each coin, and store each in a variable.
> **Hint** Note the specific identifier for each cryptocurrency in the API JSON response. The Bitcoin identifier is `1`, and the Ethereum identifier is `1027`.
4. Calculate the value, in US dollars, of the current amount of each cryptocurrency and of the entire cryptocurrency wallet.
```
# The current number of coins for each cryptocurrency asset held in the portfolio.
btc_coins = 1.2
eth_coins = 5.3
```
#### Step 1: Create a variable named `monthly_income`, and set its value to `12000`.
```
# The monthly amount for the member's household income
monthly_income=12000
```
#### Review the endpoint URLs for the API calls to Free Crypto API in order to get the current pricing information for both BTC and ETH.
```
# The Free Crypto API Call endpoint URLs for the held cryptocurrency assets
btc_url = "https://api.alternative.me/v2/ticker/Bitcoin/?convert=USD"
eth_url = "https://api.alternative.me/v2/ticker/Ethereum/?convert=USD"
```
#### Step 2. Use the Requests library to get the current price (in US dollars) of Bitcoin (BTC) and Ethereum (ETH) by using the API endpoints that the starter code supplied.
```
# Makes an API call to access the current price of BTC using the Python requests library
btc_response = requests.get(btc_url).json()
# Uses the json.dumps function to review the response data from the API call
# Uses the indent and sort_keys parameters to make the response object readable
print(json.dumps(btc_response, indent=4, sort_keys=True))
# Make an API call to access the current price ETH using the Python requests library
eth_response = requests.get(eth_url).json()
# Uses the json.dumps function to review the response data from the API call
# Uses the indent and sort_keys parameters to make the response object readable
print(json.dumps(eth_response, indent=4, sort_keys=True))
```
#### Step 3: Navigate the JSON response object to access the current price of each coin, and store each in a variable.
```
# Navigates the BTC response object to access the current price of BTC
btc_price = btc_response['data']['1']['quotes']['USD']['price']
# Prints the current price of BTC
print(f"The current price of BTC is ${btc_price: .2f}.")
# Navigates the ETH response object to access the current price of ETH
eth_price = eth_response['data']['1027']['quotes']['USD']['price']
# Prints the current price of ETH
print(f"The current price of ETH is ${eth_price: .2f}.")
```
### Step 4: Calculate the value, in US dollars, of the current amount of each cryptocurrency and of the entire cryptocurrency wallet.
```
# Computes the current value of the BTC holding
btc_value = btc_price * btc_coins
# Prints current value of your holding in BTC
print(f"The current value of the holding in BTC is ${btc_value: .2f}.")
# Computes the current value of the ETH holding
eth_value = eth_price * eth_coins
# Prints current value of your holding in ETH
print(f"The current value of the holding in ETH is ${eth_value: .2f}.")
# Computes the total value of the cryptocurrency wallet
# Adds the value of the BTC holding to the value of the ETH holding
total_crypto_wallet = btc_value + eth_value
# Prints current cryptocurrency wallet balance
print(f"The current cryptocurrency wallet balance is ${total_crypto_wallet: .2f}.")
```
### Evaluate the Stock and Bond Holdings by Using the Alpaca SDK
In this section, you’ll determine the current value of a member’s stock and bond holdings. You’ll make an API call to Alpaca via the Alpaca SDK to get the current closing prices of the SPDR S&P 500 ETF Trust (ticker: SPY) and of the iShares Core US Aggregate Bond ETF (ticker: AGG). For the prototype, assume that the member holds 110 shares of SPY, which represents the stock portion of their portfolio, and 200 shares of AGG, which represents the bond portion. To do all this, complete the following steps:
1. In the `Starter_Code` folder, create an environment file (`.env`) to store the values of your Alpaca API key and Alpaca secret key.
2. Set the variables for the Alpaca API and secret keys. Using the Alpaca SDK, create the Alpaca `tradeapi.REST` object. In this object, include the parameters for the Alpaca API key, the secret key, and the version number.
3. Set the following parameters for the Alpaca API call:
- `tickers`: Use the tickers for the member’s stock and bond holdings.
- `timeframe`: Use a time frame of one day.
- `start_date` and `end_date`: Use the same date for these parameters, and format them with the date of the previous weekday (or `2020-08-07`). This is because you want the one closing price for the most-recent trading day.
4. Get the current closing prices for `SPY` and `AGG` by using the Alpaca `get_barset` function. Format the response as a Pandas DataFrame by including the `df` property at the end of the `get_barset` function.
5. Navigating the Alpaca response DataFrame, select the `SPY` and `AGG` closing prices, and store them as variables.
6. Calculate the value, in US dollars, of the current amount of shares in each of the stock and bond portions of the portfolio, and print the results.
#### Review the total number of shares held in both (SPY) and (AGG).
```
# Current amount of shares held in both the stock (SPY) and bond (AGG) portion of the portfolio.
spy_shares = 110
agg_shares = 200
```
#### Step 1: In the `Starter_Code` folder, create an environment file (`.env`) to store the values of your Alpaca API key and Alpaca secret key.
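Step 1 has no code cell of its own, since the `.env` file is created by hand. Below is a minimal sketch of what the file contains and how it gets loaded. The notebook itself relies on the `python-dotenv` package's `load_dotenv()`; the tiny parser here is a stand-in so the sketch runs without that dependency, and the key values are placeholders, not real credentials.

```python
import os

# Hypothetical .env contents; placeholder values, NOT real keys
ENV_TEXT = 'ALPACA_API_KEY="YOUR-ALPACA-API-KEY"\nALPACA_SECRET_KEY="YOUR-ALPACA-SECRET-KEY"\n'

# Writes a sample .env file into the working directory
with open(".env", "w") as f:
    f.write(ENV_TEXT)

# Minimal stand-in for dotenv.load_dotenv(): copies KEY=VALUE lines into os.environ
def load_env(path=".env"):
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                os.environ[key.strip()] = value.strip().strip('"')

load_env()
print(os.getenv("ALPACA_API_KEY"))  # YOUR-ALPACA-API-KEY
```

With the real package installed, `from dotenv import load_dotenv; load_dotenv()` replaces the helper, and `os.getenv` calls in Step 2 work unchanged.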
#### Step 2: Set the variables for the Alpaca API and secret keys. Using the Alpaca SDK, create the Alpaca `tradeapi.REST` object. In this object, include the parameters for the Alpaca API key, the secret key, and the version number.
```
# Sets the variables for the Alpaca API and secret keys
alpaca_api_key = os.getenv("ALPACA_API_KEY")
alpaca_secret_key = os.getenv("ALPACA_SECRET_KEY")
# Creates the Alpaca tradeapi.REST object
alpaca = tradeapi.REST(
alpaca_api_key,
alpaca_secret_key,
api_version = "v2")
```
#### Step 3: Set the following parameters for the Alpaca API call:
- `tickers`: Use the tickers for the member’s stock and bond holdings.
- `timeframe`: Use a time frame of one day.
- `start_date` and `end_date`: Use the same date for these parameters, and format them with the date of the previous weekday (or `2020-08-07`). This is because you want the one closing price for the most-recent trading day.
```
# Sets the tickers for both the bond and stock portion of the portfolio
tickers = ["SPY", "AGG"]
# Sets timeframe to 1D
timeframe = "1D"
# Formats current date as ISO format
# Sets both the start and end date at the date of the prior weekday
# This will give you the closing price of the previous trading day
# Alternatively you can use a start and end date of 2020-08-07
start_date = pd.Timestamp("2022-01-27", tz= "America/New_York").isoformat()
end_date = pd.Timestamp("2022-01-27", tz = "America/New_York").isoformat()
```
#### Step 4: Get the current closing prices for `SPY` and `AGG` by using the Alpaca `get_barset` function. Format the response as a Pandas DataFrame by including the `df` property at the end of the `get_barset` function.
```
# Uses the Alpaca get_barset function to get current closing prices for the portfolio
# Sets the `df` property after the function to format the response object as a DataFrame
df_portfolio = alpaca.get_barset(
tickers,
timeframe,
start = start_date,
end = end_date
).df
# Reviews the Alpaca DataFrame
df_portfolio
```
#### Step 5: Navigating the Alpaca response DataFrame, select the `SPY` and `AGG` closing prices, and store them as variables.
```
# Accesses the closing price for AGG from the Alpaca DataFrame
# Converts the value to a floating point number
agg_close_price = float(df_portfolio["AGG"]["close"])
# Prints the AGG closing price
print(agg_close_price)
# Accesses the closing price for SPY from the Alpaca DataFrame
# Converts the value to a floating point number
spy_close_price = float(df_portfolio["SPY"]["close"])
# Prints the SPY closing price
print(spy_close_price)
```
#### Step 6: Calculate the value, in US dollars, of the current amount of shares in each of the stock and bond portions of the portfolio, and print the results.
```
# Calculates the current value of the bond portion of the portfolio
agg_value = agg_shares * agg_close_price
# Prints the current value of the bond portfolio
print(agg_value)
# Calculates the current value of the stock portion of the portfolio
spy_value = spy_shares * spy_close_price
# Prints the current value of the stock portfolio
print(spy_value)
# Calculates the total value of the stock and bond portion of the portfolio
total_stocks_bonds = agg_value + spy_value
# Prints the current balance of the stock and bond portion of the portfolio
print(total_stocks_bonds)
# Calculates the total value of the member's entire savings portfolio
# Add the value of the cryptocurrency walled to the value of the total stocks and bonds
total_portfolio = total_crypto_wallet + total_stocks_bonds
# Prints the total value of the member's entire savings portfolio
print(f"The total value of the member's entire savings portfolio is ${total_portfolio:0.2f}.")
```
### Evaluate the Emergency Fund
In this section, you’ll use the valuations for the cryptocurrency wallet and for the stock and bond portions of the portfolio to determine if the credit union member has enough savings to build an emergency fund into their financial plan. To do this, complete the following steps:
1. Create a Python list named `savings_data` that has two elements. The first element contains the total value of the cryptocurrency wallet. The second element contains the total value of the stock and bond portions of the portfolio.
2. Use the `savings_data` list to create a Pandas DataFrame named `savings_df`, and then display this DataFrame. The function to create the DataFrame should take the following three parameters:
- `savings_data`: Use the list that you just created.
- `columns`: Set this parameter equal to a Python list with a single value called `amount`.
- `index`: Set this parameter equal to a Python list with the values of `crypto` and `stock/bond`.
3. Use the `savings_df` DataFrame to plot a pie chart that visualizes the composition of the member’s portfolio. The y-axis of the pie chart uses `amount`. Be sure to add a title.
4. Using Python, determine if the current portfolio has enough to create an emergency fund as part of the member’s financial plan. Ideally, an emergency fund should equal three times the member’s monthly income. To do this, implement the following steps:
1. Create a variable named `emergency_fund_value`, and set it equal to three times the value of the member’s `monthly_income` of $12000. (You set this earlier in Part 1).
2. Create a series of three if statements to determine if the member’s total portfolio is large enough to fund the emergency portfolio:
1. If the total portfolio value is greater than the emergency fund value, display a message congratulating the member for having enough money in this fund.
2. Else if the total portfolio value is equal to the emergency fund value, display a message congratulating the member on reaching this important financial goal.
3. Else the total portfolio is less than the emergency fund value, so display a message showing how many dollars away the member is from reaching the goal. (Subtract the total portfolio value from the emergency fund value.)
#### Step 1: Create a Python list named `savings_data` that has two elements. The first element contains the total value of the cryptocurrency wallet. The second element contains the total value of the stock and bond portions of the portfolio.
```
# Creates a list named savings_data whose two elements contain the total value of the cryptocurrency wallet and the total value of the stock and bond portfolio
savings_data = [total_crypto_wallet, total_stocks_bonds]
# Reviews the Python list savings_data
print(savings_data)
```
#### Step 2: Use the `savings_data` list to create a Pandas DataFrame named `savings_df`, and then display this DataFrame. The function to create the DataFrame should take the following three parameters:
- `savings_data`: Use the list that you just created.
- `columns`: Set this parameter equal to a Python list with a single value called `amount`.
- `index`: Set this parameter equal to a Python list with the values of `crypto` and `stock/bond`.
```
# Creates a Pandas DataFrame called savings_df that shows the total value of the cryptocurrency wallet and the total value of the stock and bond portfolio
savings_df = pd.DataFrame(savings_data, columns = ['Amount'], index = ['Crypto', 'Stock/Bond'])
# Displays the savings_df DataFrame
print(savings_df)
```
#### Step 3: Use the `savings_df` DataFrame to plot a pie chart that visualizes the composition of the member’s portfolio. The y-axis of the pie chart uses `amount`. Be sure to add a title.
```
# Plots the total value of the member's portfolio (crypto and stock/bond) in a pie chart
savings_df.plot.pie(y = 'Amount', figsize = (12, 6), title = 'Crypto and Stock/Bond Portfolio')
```
#### Step 4: Using Python, determine if the current portfolio has enough to create an emergency fund as part of the member’s financial plan. Ideally, an emergency fund should equal three times the member’s monthly income. To do this, implement the following steps:
Step 1. Create a variable named `emergency_fund_value`, and set it equal to three times the value of the member’s `monthly_income` of 12000. (You set this earlier in Part 1).
Step 2. Create a series of three if statements to determine if the member’s total portfolio is large enough to fund the emergency portfolio:
* If the total portfolio value is greater than the emergency fund value, display a message congratulating the member for having enough money in this fund.
* Else if the total portfolio value is equal to the emergency fund value, display a message congratulating the member on reaching this important financial goal.
* Else the total portfolio is less than the emergency fund value, so display a message showing how many dollars away the member is from reaching the goal. (Subtract the total portfolio value from the emergency fund value.)
##### Step 4-1: Create a variable named `emergency_fund_value`, and set it equal to three times the value of the member’s `monthly_income` of 12000. (You set this earlier in Part 1).
```
# Creates a variable named emergency_fund_value and sets it equal to three times the value of the member's monthly income of $12000
emergency_fund_value = monthly_income * 3
```
##### Step 4-2: Create a series of three if statements to determine if the member’s total portfolio is large enough to fund the emergency portfolio:
* If the total portfolio value is greater than the emergency fund value, display a message congratulating the member for having enough money in this fund.
* Else if the total portfolio value is equal to the emergency fund value, display a message congratulating the member on reaching this important financial goal.
* Else the total portfolio is less than the emergency fund value, so display a message showing how many dollars away the member is from reaching the goal. (Subtract the total portfolio value from the emergency fund value.)
```
# Evaluates the possibility of creating an emergency fund with 3 conditions:
if total_portfolio > emergency_fund_value:
    print("Congratulations! You have enough money in this fund.")
elif total_portfolio == emergency_fund_value:
    print("Congratulations on reaching this important financial goal.")
else:
    print(f"You are ${emergency_fund_value - total_portfolio:,.2f} away from reaching your goal.")
```
## Part 2: Create a Financial Planner for Retirement
### Create the Monte Carlo Simulation
In this section, you’ll use the MCForecastTools library to create a Monte Carlo simulation for the member’s savings portfolio. To do this, complete the following steps:
1. Make an API call via the Alpaca SDK to get 3 years of historical closing prices for a traditional 60/40 portfolio split: 60% stocks (SPY) and 40% bonds (AGG).
2. Run a Monte Carlo simulation of 500 samples and 30 years for the 60/40 portfolio, and then plot the results. The following image shows the overlay line plot resulting from a simulation with these characteristics. However, because a random number generator is used to run each live Monte Carlo simulation, your image will differ slightly from this exact image:

3. Plot the probability distribution of the Monte Carlo simulation. The following image shows the histogram plot resulting from a simulation with these characteristics. However, because a random number generator is used to run each live Monte Carlo simulation, your image will differ slightly from this exact image:

4. Generate the summary statistics for the Monte Carlo simulation.
#### Step 1: Make an API call via the Alpaca SDK to get 3 years of historical closing prices for a traditional 60/40 portfolio split: 60% stocks (SPY) and 40% bonds (AGG).
```
# Sets start and end dates of 3 years back from current date
# Alternatively, you can use an end date of 2020-08-07 and work 3 years back from that date
start_date = pd.Timestamp("2019-01-28", tz="America/New_York").isoformat()
end_date = pd.Timestamp("2022-01-28", tz="America/New_York").isoformat()
# Sets number of rows to 1000 to retrieve the maximum number of rows
limit_rows = 1000
# Uses the Alpaca get_barset function to make the API call to get the 3 years worth of pricing data
# The tickers and timeframe parameters have been set in Part 1
# The start and end dates are updated with the information set above
# df property added to the end of the call so the response is returned as a DataFrame
prices_df = alpaca.get_barset(
tickers,
timeframe,
start = start_date,
end = end_date,
limit = limit_rows
).df
# Displays both the first and last five rows of the DataFrame
display(prices_df.head())
display(prices_df.tail())
```
#### Step 2: Run a Monte Carlo simulation of 500 samples and 30 years for the 60/40 portfolio, and then plot the results.
```
# Configures the Monte Carlo simulation to forecast 30 years of cumulative returns
# The weights should be split 40% to AGG and 60% to SPY.
# Runs 500 samples.
MC_thirtyyear = MCSimulation(
    portfolio_data = prices_df,
    weights = [.60, .40],
    num_simulation = 500,
    num_trading_days = 252 * 30
)
# Reviews the simulation input data
MC_thirtyyear.portfolio_data
# Runs the Monte Carlo simulation to forecast 30 years of cumulative returns
MC_thirtyyear.calc_cumulative_return()
# Visualizes the 30-year Monte Carlo simulation by creating an
# overlay line plot
MC_sim_line_plot = MC_thirtyyear.plot_simulation()
```
#### Step 3: Plot the probability distribution of the Monte Carlo simulation.
```
# Visualizes the probability distribution of the 30-year Monte Carlo simulation
# by plotting a histogram
MC_sim_dist_plot = MC_thirtyyear.plot_distribution()
```
#### Step 4: Generate the summary statistics for the Monte Carlo simulation.
```
# Generates summary statistics from the 30-year Monte Carlo simulation results
# Saves the results as a variable
MC_summary_statistics = MC_thirtyyear.summarize_cumulative_return()
# Reviews the 30-year Monte Carlo summary statistics
print(MC_summary_statistics)
```
### Analyze the Retirement Portfolio Forecasts
Using the current value of only the stock and bond portion of the member's portfolio and the summary statistics that you generated from the Monte Carlo simulation, answer the following question in your Jupyter notebook:
- What are the lower and upper bounds for the expected value of the portfolio with a 95% confidence interval?
```
# Prints the current balance of the stock and bond portion of the member's portfolio
print(total_stocks_bonds)
# Uses the lower and upper 95% confidence intervals to calculate the range of the possible outcomes for the current stock/bond portfolio
ci_lower_thirty_cumulative_return = MC_summary_statistics[8] * total_stocks_bonds
ci_upper_thirty_cumulative_return = MC_summary_statistics[9] * total_stocks_bonds
# Prints the result of the calculations
print(f"There is a 95% chance that an investment of the current stock/bond portfolio of ${total_stocks_bonds:,.2f}"
      f" over the next 30 years will end within the range of"
      f" ${ci_lower_thirty_cumulative_return:,.2f} and ${ci_upper_thirty_cumulative_return:,.2f}.")
```
### Forecast Cumulative Returns in 10 Years
The CTO of the credit union is impressed with your work on these planning tools but wonders if 30 years is too long to wait until retirement. So, your next task is to adjust the retirement portfolio and run a new Monte Carlo simulation to find out if the changes will allow members to retire earlier.
For this new Monte Carlo simulation, do the following:
- Forecast the cumulative returns for 10 years from now. Because of the shortened investment horizon (30 years to 10 years), the portfolio needs to invest more heavily in the riskier asset—that is, stock—to help accumulate wealth for retirement.
- Adjust the weights of the retirement portfolio so that the composition for the Monte Carlo simulation consists of 20% bonds and 80% stocks.
- Run the simulation over 500 samples, and use the same data that the API call to Alpaca generated.
- Based on the new Monte Carlo simulation, answer the following questions in your Jupyter notebook:
- Using the current value of only the stock and bond portion of the member's portfolio and the summary statistics that you generated from the new Monte Carlo simulation, what are the lower and upper bounds for the expected value of the portfolio (with the new weights) with a 95% confidence interval?
- Will weighting the portfolio more heavily toward stocks allow the credit union members to retire after only 10 years?
```
# Configures a Monte Carlo simulation to forecast 10 years cumulative returns
# The weights are split 20% to AGG and 80% to SPY.
# Runs 500 samples.
MC_tenyear = MCSimulation(
portfolio_data = prices_df,
weights = [.20, .80],
num_simulation = 500,
num_trading_days = 252 * 10
)
# Reviews the simulation input data
MC_tenyear.portfolio_data
# Runs the Monte Carlo simulation to forecast 10 years cumulative returns
MC_tenyear.calc_cumulative_return()
# Visualizes the 10-year Monte Carlo simulation by creating an
# overlay line plot
MC_sim_line_plot_10_year = MC_tenyear.plot_simulation()
# Visualizes the probability distribution of the 10-year Monte Carlo simulation
# by plotting a histogram
MC_sim_dist_plot_10_year = MC_tenyear.plot_distribution()
# Generates summary statistics from the 10-year Monte Carlo simulation results
# Saves the results as a variable
MC_summary_statistics_ten_year = MC_tenyear.summarize_cumulative_return()
# Review the 10-year Monte Carlo summary statistics
print(MC_summary_statistics_ten_year)
```
### Answer the following questions:
#### Question: Using the current value of only the stock and bond portion of the member's portfolio and the summary statistics that you generated from the new Monte Carlo simulation, what are the lower and upper bounds for the expected value of the portfolio (with the new weights) with a 95% confidence interval?
```
# Prints the current balance of the stock and bond portion of the member's portfolio
print(total_stocks_bonds)
# Uses the lower and upper 95% confidence intervals to calculate the range of the possible outcomes for the current stock/bond portfolio
ci_lower_ten_cumulative_return = MC_summary_statistics_ten_year[8] * total_stocks_bonds
ci_upper_ten_cumulative_return = MC_summary_statistics_ten_year[9] * total_stocks_bonds
# Prints the result of the calculations
print(f"There is a 95% chance that an investment of the current stock/bond portfolio of ${total_stocks_bonds:,.2f}"
      f" over the next 10 years will end within the range of"
      f" ${ci_lower_ten_cumulative_return:,.2f} and ${ci_upper_ten_cumulative_return:,.2f}.")
```
#### Question: Will weighting the portfolio more heavily toward stocks allow the credit union members to retire after only 10 years?
Weighting the portfolio more heavily toward stocks will not allow the credit union members to retire after only ten years with an investment of $69,783.70.
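The dollar ranges printed in both the 30-year and 10-year cells come from the same arithmetic: the current portfolio value multiplied by the lower and upper 95% CI cumulative-return multiples from the simulation summary. As a self-contained sketch, with made-up multiples standing in for `MC_summary_statistics_ten_year[8]` and `[9]` (the real values come from the Monte Carlo run):

```python
# Hypothetical CI return multiples; stand-ins for the Monte Carlo summary values
portfolio_value = 69783.70
ci_lower_multiple = 2.1   # made-up lower 95% CI cumulative-return multiple
ci_upper_multiple = 9.4   # made-up upper 95% CI cumulative-return multiple

# The forecast range is simply value * multiple at each bound
lower_bound = portfolio_value * ci_lower_multiple
upper_bound = portfolio_value * ci_upper_multiple
print(f"${lower_bound:,.2f} to ${upper_bound:,.2f}")
```

Whether the lower bound of that range supports retirement depends on the member's target retirement amount, which is why the shorter horizon comes up short here.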
```
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
def take_and_clean(filename, keyword, print_info=False):
    raw = pd.read_json(f'./data/giantsGates/{filename}.json')
    ranks = raw[['creatorName', 'totalScore']]
    ranks = ranks[ranks['creatorName'].str.contains(keyword)]
    ranks = ranks[ranks['creatorName'].str.contains('GG')]
    if print_info:
        ranks.info()
    return ranks
NAMES = ['MUN', 'THR', 'OGR', 'SHM', 'BRW', 'WRL']
def collect_scores(ranks, col_names=NAMES):
    by_units = []
    for name in col_names:
        units = ranks[ranks['creatorName'].str.contains(name)]['totalScore'] * 100
        units.reset_index(drop=True, inplace=True)
        by_units.append(units)
    unit_scores = pd.concat(by_units, ignore_index=True, axis=1)
    unit_scores.columns = col_names
    return unit_scores
def print_score_data(unit_scores):
    result = []
    for name in unit_scores.columns:
        units = unit_scores[name]
        result.append([name, units.mean(), units.median()])
    result.sort(key=lambda x: x[1], reverse=True)
    print('NAME -- MEAN -- MED')
    for line in result:
        print(f'{line[0]} -- {line[1]: <7.2f} -- {line[2]:.2f}')
def boxplot_score(unit_scores, vert_size=6):
    sns.set_theme(style="whitegrid")
    plt.figure(figsize=(16, vert_size))
    sns.boxplot(data=unit_scores)
# Version 0.8.23
# team by 3
# C point without assault
front_team3_v0 = take_and_clean('rankingsC', '-C-', False)
front_team3_v0_scores = collect_scores(front_team3_v0)
print_score_data(front_team3_v0_scores)
boxplot_score(front_team3_v0_scores)
# Version 0.8.23.a
# team by 3
# ALT point with assault
front_team3_v0 = take_and_clean('rankingsV23A', 'V23A', True)
front_team3_v0_scores = collect_scores(front_team3_v0)
print_score_data(front_team3_v0_scores)
boxplot_score(front_team3_v0_scores, 6)
# Version 0.8.23.h
# team by 3
# ALT point with assault
# shaman maxHealth: 100 -> 105
front_team = take_and_clean('rankings23H3', '23H3', True)
front_team_scores = collect_scores(front_team)
print_score_data(front_team_scores)
boxplot_score(front_team_scores, 10)
# Version 0.8.23.j
# team by 4
# ALT point with assault
# shaman maxHealth: 100 -> 105
front_team = take_and_clean('rankings23J4', '23J4', True)
front_team_scores = collect_scores(front_team)
print_score_data(front_team_scores)
boxplot_score(front_team_scores, 10)
def collect_pairs_scores(ranks):
    columns = []
    result = []
    for i, n1 in enumerate(NAMES):
        for n2 in NAMES[i+1:]:
            columns.append(f'{n1}-{n2}')
            pairs = ranks[(ranks['creatorName'].str.contains(n1))
                          & (ranks['creatorName'].str.contains(n2))]['totalScore'] * 100
            pairs.reset_index(drop=True, inplace=True)
            result.append(pairs)
    result_df = pd.concat(result, ignore_index=True, axis=1)
    result_df.columns = columns
    return result_df
ladder_result = take_and_clean('rankings23J4', '23J4')
pairs_scores = collect_pairs_scores(ladder_result)
print_score_data(pairs_scores)
boxplot_score(pairs_scores, 10)
# pairs_scores
# Version 0.8.24.b
# team by 4
# ALT point with assault
# brawler
# attackDamage 50 => 52
# shaman
# attackDamage 23 => 22
ladder_result = take_and_clean('rankings24B4', '24B4')
unit_scores = collect_scores(ladder_result)
print_score_data(unit_scores)
boxplot_score(unit_scores, 6)
pairs_scores = collect_pairs_scores(ladder_result)
print_score_data(pairs_scores)
boxplot_score(pairs_scores, 6)
PLACE_TACTICS = [
"PTOP-",
"PBOT-",
"PALT-",
"PALTP-",
"FTOP-",
"FBOT-",
"FALT-",
"FALTP-"]
ladder_result = take_and_clean('rankings824NB', '824NB')
place_scores = collect_scores(ladder_result, PLACE_TACTICS)
print_score_data(place_scores)
boxplot_score(place_scores, 6)
name_pairs = [f'{n1}-{n2}' for n1 in NAMES for n2 in NAMES]
df = pd.DataFrame(columns=PLACE_TACTICS)
for np in name_pairs:
    line = {}
    for pt in PLACE_TACTICS:
        res = ladder_result[ladder_result['creatorName'].str.contains(np) &
                            ladder_result['creatorName'].str.contains(pt)]
        line[pt] = res.iloc[0, 1]
    df.loc[np] = line
plt.figure(figsize=(20, 10))
# sns.color_palette('tab10')
sns.heatmap(df.transpose())
# Version 0.8.27.a
# team by 4
# TOP/BOT point with assault
# munchkin
# attackDamage +1
ladder_result = take_and_clean('rankings827NF', '827NF')
unit_scores = collect_scores(ladder_result)
print_score_data(unit_scores)
boxplot_score(unit_scores, 6)
pairs_scores = collect_pairs_scores(ladder_result)
print_score_data(pairs_scores)
boxplot_score(pairs_scores, 6)
ladder_result = take_and_clean('rankings827NC', '827NC')
unit_scores = collect_scores(ladder_result)
# print_score_data(unit_scores)
boxplot_score(unit_scores, 6)
ladder_result = take_and_clean('rankings827NF', '827NF')
unit_scores = collect_scores(ladder_result)
# print_score_data(unit_scores)
boxplot_score(unit_scores, 6)
# Version 0.9.09a
# team by 2
# TOP/BOT/CEN point with assault
# munchkin
# attackDamage +1
ladder_result = take_and_clean('rankings909A', '909A')
unit_scores = collect_scores(ladder_result)
print_score_data(unit_scores)
boxplot_score(unit_scores, 6)
pairs_scores = collect_pairs_scores(ladder_result)
print_score_data(pairs_scores)
boxplot_score(pairs_scores, 6)
# Version 0.9.09B
# team by 2
# TOP/BOT/CEN point with assault
ladder_result = take_and_clean('rankingsG910B', 'G910B')
# ladder_result
unit_scores = collect_scores(ladder_result)
print_score_data(unit_scores)
boxplot_score(unit_scores, 6)
pairs_scores = collect_pairs_scores(ladder_result)
print_score_data(pairs_scores)
boxplot_score(pairs_scores, 6)
# Version 0.9.10c
# team by 4
ladder_result = take_and_clean('rankingsG910C', 'G910C')
# ladder_result
unit_scores = collect_scores(ladder_result)
print_score_data(unit_scores)
boxplot_score(unit_scores, 6)
pairs_scores = collect_pairs_scores(ladder_result)
print_score_data(pairs_scores)
boxplot_score(pairs_scores, 6)
# Version 0.9.13c
# team by 4
ladder_result = take_and_clean('G914B', 'G914B')
# ladder_result
unit_scores = collect_scores(ladder_result)
print_score_data(unit_scores)
boxplot_score(unit_scores, 6)
pairs_scores = collect_pairs_scores(ladder_result)
print_score_data(pairs_scores)
boxplot_score(pairs_scores, 6)
```
# Visualizing Sorting Algorithm Behavior
```
import numpy as np
import random
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
import time
import scipy.signal
def generateRandomList(n):
    # Generate list of integers
    # Possible alternative: l = [random.randint(0, n) for _ in range(n)]
    # does this increase/decrease randomness? not sure
    l = [i for i in range(n)]
    # Randomly shuffle integers
    random.shuffle(l)
    return l
def plotSmoothed(n, sorting_fn, window_len, poly_order):
    # Generate randomly shuffled list
    rand_list = generateRandomList(n)
    # Sort the list using the sorting function
    _, y, x = sorting_fn(rand_list)
    # FFT code that did not work
    # https://stackoverflow.com/questions/20618804/how-to-smooth-a-curve-in-the-right-way
    # w = scipy.fftpack.rfft(y)
    # f = scipy.fftpack.rfftfreq(n, x[1]-x[0])
    # spectrum = w**2
    # cutoff_idx = spectrum < (spectrum.max()/5)
    # w2 = w.copy()
    # w2[cutoff_idx] = 0
    # y2 = scipy.fftpack.irfft(w2)
    # Generate regular plot (unsmoothed)
    plt.figure()
    plt.plot(x, y)
    # Smooth time step array using Savitzky-Golay filter (need to read up on exactly how this works,
    # how to auto-generate appropriate parameters)
    y2 = scipy.signal.savgol_filter(y, window_len, poly_order)
    # Generate smoothed plot
    plt.figure()
    plt.plot(x, y2)
```
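The comment in `plotSmoothed` asks how to auto-generate appropriate filter parameters. One approach (a sketch, not a tuned recipe) is to tie the window to the series length while respecting `savgol_filter`'s constraints: `window_length` must exceed `polyorder`, and odd window lengths are required by older SciPy releases. `auto_window` below is a hypothetical helper, not part of SciPy.

```python
import numpy as np
import scipy.signal

# Hypothetical helper: pick a valid Savitzky-Golay window for a series of length n.
def auto_window(n, frac=0.05, poly_order=3):
    w = max(poly_order + 2, int(n * frac))  # roughly 5% of the series length
    if w % 2 == 0:  # force an odd window
        w += 1
    return min(w, n - 1 if n % 2 == 0 else n)

# Smooth a noisy sine wave as a quick sanity check
noisy = np.sin(np.linspace(0, 4 * np.pi, 500)) + 0.2 * np.random.randn(500)
w = auto_window(len(noisy))
smoothed = scipy.signal.savgol_filter(noisy, w, 3)
print(w, smoothed.shape)  # 25 (500,)
```

A larger `frac` smooths more aggressively; the right trade-off depends on how jagged the per-step timing data is.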
## Sorting algorithms
Source: https://github.com/TheAlgorithms/Python
### Insertion Sort
for $i = 0, \ldots, N-1$:
- designate item $i$ as the traveling item
- swap the traveling item backwards until it is in the right place among the previously examined items
$\Theta(N^2)$
```
def insertion_sort(collection):
    """Pure implementation of the insertion sort algorithm in Python
    :param collection: some mutable ordered collection with heterogeneous
    comparable items inside
    :return: the same collection ordered by ascending
    Examples:
    >>> insertion_sort([0, 5, 3, 2, 2])
    [0, 2, 2, 3, 5]
    >>> insertion_sort([])
    []
    >>> insertion_sort([-2, -5, -45])
    [-45, -5, -2]
    """
    time_list = [0]
    step_list = [0]
    for index in range(1, len(collection)):
        t1 = time.time()
        while 0 < index and collection[index] < collection[index - 1]:
            collection[index], collection[index - 1] = collection[index - 1], collection[index]
            index -= 1
        t2 = time.time()
        time_list.append(t2 - t1)
        step_list.append(step_list[-1] + 1)
    return collection, time_list, step_list
plotSmoothed(1000, insertion_sort, 103, 5)
def insertion_sort_v2(collection):
    """Pure implementation of the insertion sort algorithm in Python
    :param collection: some mutable ordered collection with heterogeneous
    comparable items inside
    :return: the same collection ordered by ascending
    Examples:
    >>> insertion_sort([0, 5, 3, 2, 2])
    [0, 2, 2, 3, 5]
    >>> insertion_sort([])
    []
    >>> insertion_sort([-2, -5, -45])
    [-45, -5, -2]
    """
    time_list = [0]
    step_list = [0]
    for index in range(1, len(collection)):
        while 0 < index and collection[index] < collection[index - 1]:
            t1 = time.time()
            collection[index], collection[index - 1] = collection[index - 1], collection[index]
            index -= 1
            t2 = time.time()
            time_list.append(t2 - t1)
            step_list.append(step_list[-1] + 1)
    return collection, time_list, step_list
plotSmoothed(1000, insertion_sort_v2, 7003, 3)
```
### Selection Sort
repeat until all items are fixed:
- find the smallest item
- swap this item to the front and fix its position
$\Theta(N^2)$ worst case
```
def selection_sort(collection):
    """Pure implementation of the selection sort algorithm in Python
    :param collection: some mutable ordered collection with heterogeneous
    comparable items inside
    :return: the same collection ordered by ascending
    Examples:
    >>> selection_sort([0, 5, 3, 2, 2])
    [0, 2, 2, 3, 5]
    >>> selection_sort([])
    []
    >>> selection_sort([-2, -5, -45])
    [-45, -5, -2]
    """
    time_list = [0]
    step_list = [0]
    length = len(collection)
    for i in range(length):
        least = i
        t1 = time.time()
        for k in range(i + 1, length):
            if collection[k] < collection[least]:
                least = k
        collection[least], collection[i] = (
            collection[i], collection[least]
        )
        t2 = time.time()
        time_list.append(t2 - t1)
        step_list.append(step_list[-1] + 1)
    return collection, time_list, step_list
plotSmoothed(1000, selection_sort, 103, 3)
def selection_sort_v2(collection):
    """Pure implementation of the selection sort algorithm in Python
    :param collection: some mutable ordered collection with heterogeneous
    comparable items inside
    :return: the same collection ordered by ascending
    Examples:
    >>> selection_sort([0, 5, 3, 2, 2])
    [0, 2, 2, 3, 5]
    >>> selection_sort([])
    []
    >>> selection_sort([-2, -5, -45])
    [-45, -5, -2]
    """
    time_list = [0]
    step_list = [0]
    length = len(collection)
    for i in range(length):
        least = i
        for k in range(i + 1, length):
            t1 = time.time()
            if collection[k] < collection[least]:
                least = k
            t2 = time.time()
            time_list.append(t2 - t1)
            step_list.append(step_list[-1] + 1)
        collection[least], collection[i] = (collection[i], collection[least])
    return collection, time_list, step_list
plotSmoothed(1000, selection_sort_v2, 20003, 3)
```
### Heap Sort
Build a max-heap out of the array, popping off the largest element and rebalancing the heap until it is empty.
$\Theta(N \log N)$
```
def heapify(unsorted, index, heap_size):
    largest = index
    left_index = 2 * index + 1
    right_index = 2 * index + 2
    if left_index < heap_size and unsorted[left_index] > unsorted[largest]:
        largest = left_index
    if right_index < heap_size and unsorted[right_index] > unsorted[largest]:
        largest = right_index
    if largest != index:
        unsorted[largest], unsorted[index] = unsorted[index], unsorted[largest]
        heapify(unsorted, largest, heap_size)
def heap_sort(unsorted):
    '''
    Pure implementation of the heap sort algorithm in Python
    :param collection: some mutable ordered collection with heterogeneous
    comparable items inside
    :return: the same collection ordered by ascending
    Examples:
    >>> heap_sort([0, 5, 3, 2, 2])
    [0, 2, 2, 3, 5]
    >>> heap_sort([])
    []
    >>> heap_sort([-2, -5, -45])
    [-45, -5, -2]
    '''
    time_list = [0]
    step_list = [0]
    n = len(unsorted)
    for i in range(n // 2 - 1, -1, -1):
        t1 = time.time()
        heapify(unsorted, i, n)
        t2 = time.time()
        time_list.append(t2 - t1)
        step_list.append(step_list[-1] + 1)
    for i in range(n - 1, 0, -1):
        t1 = time.time()
        unsorted[0], unsorted[i] = unsorted[i], unsorted[0]
        heapify(unsorted, 0, i)
        t2 = time.time()
        time_list.append(t2 - t1)
        step_list.append(step_list[-1] + 1)
    return unsorted, time_list, step_list
plotSmoothed(10000, heap_sort, 503, 3)
```
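The build-then-pop idea above can also be sketched with the standard library's `heapq` module — a min-heap, so we repeatedly pop the smallest element instead of the largest (an illustrative sketch, not the instrumented implementation above):

```python
import heapq

def heap_sort_via_heapq(items):
    # heapify builds the heap in O(N); each heappop costs O(log N),
    # so the whole sort is O(N log N), matching the bound above
    heap = list(items)
    heapq.heapify(heap)
    return [heapq.heappop(heap) for _ in range(len(heap))]

print(heap_sort_via_heapq([0, 5, 3, 2, 2]))  # [0, 2, 2, 3, 5]
```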
### Mergesort
- split items into two roughly even pieces
- mergesort each half
- merge the two sorted halves
$\Theta(N \log N)$
```
def merge_sort(collection):
    """Pure implementation of the merge sort algorithm in Python
    :param collection: some mutable ordered collection with heterogeneous
    comparable items inside
    :return: the same collection ordered by ascending
    Examples:
    >>> merge_sort([0, 5, 3, 2, 2])
    [0, 2, 2, 3, 5]
    >>> merge_sort([])
    []
    >>> merge_sort([-2, -5, -45])
    [-45, -5, -2]
    """
    time_list = [0]
    length = len(collection)
    if length > 1:
        midpoint = length // 2
        left_half, t_temp1 = merge_sort(collection[:midpoint])
        right_half, t_temp2 = merge_sort(collection[midpoint:])
        time_list += t_temp1 + t_temp2
        i = 0
        j = 0
        k = 0
        left_length = len(left_half)
        right_length = len(right_half)
        t1 = time.time()
        while i < left_length and j < right_length:
            if left_half[i] < right_half[j]:
                collection[k] = left_half[i]
                i += 1
            else:
                collection[k] = right_half[j]
                j += 1
            k += 1
        while i < left_length:
            collection[k] = left_half[i]
            i += 1
            k += 1
        while j < right_length:
            collection[k] = right_half[j]
            j += 1
            k += 1
        t2 = time.time()
        time_list.append(t2 - t1)
    return collection, time_list
def plotSmoothed_alt(n, sorting_fn, window_len, poly_order):
    # Generate randomly shuffled list
    rand_list = generateRandomList(n)
    # Sort the list using the sorting function
    _, y = sorting_fn(rand_list)
    # FFT code that did not work
    # https://stackoverflow.com/questions/20618804/how-to-smooth-a-curve-in-the-right-way
    # w = scipy.fftpack.rfft(y)
    # f = scipy.fftpack.rfftfreq(n, x[1]-x[0])
    # spectrum = w**2
    # cutoff_idx = spectrum < (spectrum.max()/5)
    # w2 = w.copy()
    # w2[cutoff_idx] = 0
    # y2 = scipy.fftpack.irfft(w2)
    # Generate regular plot (unsmoothed)
    plt.figure()
    plt.plot(y)
    # Smooth time step array using Savitzky-Golay filter (need to read up on exactly how this works,
    # how to auto-generate appropriate parameters)
    y2 = scipy.signal.savgol_filter(y, window_len, poly_order)
    # Generate smoothed plot
    plt.figure()
    plt.plot(y2)
plotSmoothed_alt(10000, merge_sort, 2003, 3)
```
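The merge step is where the sorted halves are combined; it can be sketched on its own, assuming the two inputs are already sorted:

```python
def merge(left, right):
    # repeatedly take the smaller head element of the two sorted lists
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] < right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    # one of the lists is exhausted; append the remainder of the other
    result.extend(left[i:])
    result.extend(right[j:])
    return result

print(merge([1, 3, 5], [2, 2, 4]))  # [1, 2, 2, 3, 4, 5]
```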
### Quicksort
- partition around the leftmost item (the pivot)
- QuickSort the lesser elements
- QuickSort the greater elements
- concatenate: sorted lesser + pivot + sorted greater

$\Theta(N \log N)$ on average, $\Theta(N^2)$ in the worst case
```
def quick_sort(ARRAY):
    """Pure implementation of quick sort algorithm in Python
    :param ARRAY: some mutable ordered collection with heterogeneous
    comparable items inside
    :return: the same collection ordered by ascending
    Examples:
    >>> quick_sort([0, 5, 3, 2, 2])
    [0, 2, 2, 3, 5]
    >>> quick_sort([])
    []
    >>> quick_sort([-2, -5, -45])
    [-45, -5, -2]
    """
    time_list = [0]
    ARRAY_LENGTH = len(ARRAY)
    if ARRAY_LENGTH <= 1:
        return ARRAY, []
    else:
        t1 = time.time()
        PIVOT = ARRAY[0]
        GREATER = [element for element in ARRAY[1:] if element > PIVOT]
        LESSER = [element for element in ARRAY[1:] if element <= PIVOT]
        t2 = time.time()
        time_list.append(t2 - t1)
        LEFT, t_temp1 = quick_sort(LESSER)
        RIGHT, t_temp2 = quick_sort(GREATER)
        time_list += t_temp1 + t_temp2
        # ascending order: sorted lesser elements, then the pivot, then sorted greater
        sorted_array = LEFT + [PIVOT] + RIGHT
        return sorted_array, time_list
plotSmoothed_alt(10000, quick_sort, 1003, 3)
```
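Stripped of the timing instrumentation, the partition-then-recurse scheme reduces to a few lines (a sketch for clarity, not the measured version above):

```python
def quick_sort_plain(items):
    # base case: lists of length 0 or 1 are already sorted
    if len(items) <= 1:
        return list(items)
    pivot, rest = items[0], items[1:]
    lesser = [x for x in rest if x <= pivot]
    greater = [x for x in rest if x > pivot]
    # ascending order: smaller elements, pivot, larger elements
    return quick_sort_plain(lesser) + [pivot] + quick_sort_plain(greater)

print(quick_sort_plain([-2, -5, -45]))  # [-45, -5, -2]
```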
<a href="https://qworld.net" target="_blank" align="left"><img src="../qworld/images/header.jpg" align="left"></a>
$ \newcommand{\bra}[1]{\langle #1|} $
$ \newcommand{\ket}[1]{|#1\rangle} $
$ \newcommand{\braket}[2]{\langle #1|#2\rangle} $
$ \newcommand{\dot}[2]{ #1 \cdot #2} $
$ \newcommand{\biginner}[2]{\left\langle #1,#2\right\rangle} $
$ \newcommand{\mymatrix}[2]{\left( \begin{array}{#1} #2\end{array} \right)} $
$ \newcommand{\myvector}[1]{\mymatrix{c}{#1}} $
$ \newcommand{\myrvector}[1]{\mymatrix{r}{#1}} $
$ \newcommand{\mypar}[1]{\left( #1 \right)} $
$ \newcommand{\mybigpar}[1]{ \Big( #1 \Big)} $
$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $
$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $
$ \newcommand{\onehalf}{\frac{1}{2}} $
$ \newcommand{\donehalf}{\dfrac{1}{2}} $
$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $
$ \newcommand{\vzero}{\myvector{1\\0}} $
$ \newcommand{\vone}{\myvector{0\\1}} $
$ \newcommand{\stateplus}{\myvector{ \sqrttwo \\ \sqrttwo } } $
$ \newcommand{\stateminus}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $
$ \newcommand{\myarray}[2]{ \begin{array}{#1}#2\end{array}} $
$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $
$ \newcommand{\I}{ \mymatrix{rr}{1 & 0 \\ 0 & 1} } $
$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $
$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $
$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $
$ \newcommand{\norm}[1]{ \left\lVert #1 \right\rVert } $
$ \newcommand{\pstate}[1]{ \lceil \mspace{-1mu} #1 \mspace{-1.5mu} \rfloor } $
$ \newcommand{\greenbit}[1] {\mathbf{{\color{green}#1}}} $
$ \newcommand{\bluebit}[1] {\mathbf{{\color{blue}#1}}} $
$ \newcommand{\redbit}[1] {\mathbf{{\color{red}#1}}} $
$ \newcommand{\brownbit}[1] {\mathbf{{\color{brown}#1}}} $
$ \newcommand{\blackbit}[1] {\mathbf{{\color{black}#1}}} $
<font style="font-size:28px;" align="left"><b>Quantum Tomography </b></font>
<br>
_prepared by Abuzer Yakaryilmaz_
<br><br>
[<img src="../qworld/images/watch_lecture.jpg" align="left">](https://youtu.be/mIEiWCJ6R58)
<br><br><br>
We study a simplified version of quantum tomography here.
It is similar to learning the bias of a coin by collecting statistics from tossing it many times. But a measurement alone may not be enough to make a good guess.
Suppose that you are given 1000 copies of a qubit and your task is to learn the state of this qubit. We use a Python class called "unknown_qubit" for our quantum experiments.
Please run the following cell before continuing.
```
# class unknown_qubit
# available_qubit = 1000 -> you get at most 1000 qubit copies
# get_qubits(number_of_qubits) -> you get the specified number of qubits for your experiment
# measure_qubits() -> your qubits are measured and the result is returned as a dictionary variable
# -> after measurement, these qubits are destroyed
# rotate_qubits(angle) -> your qubits are rotated with the specified angle in radian
# compare_my_guess(my_angle) -> your guess in radian is compared with the real angle
from random import randrange
from math import pi
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
class unknown_qubit:
    def __init__(self):
        self.__theta = randrange(18000)/18000*pi
        self.__available_qubits = 1000
        self.__active_qubits = 0
        print(self.__available_qubits,"qubits are created")
    def get_qubits(self,number_of_qubits=None):
        if number_of_qubits is None or isinstance(number_of_qubits,int) is False or number_of_qubits < 1:
            print()
            print("ERROR: the method 'get_qubits' takes the number of qubit(s) as a positive integer, i.e., get_qubits(100)")
        elif number_of_qubits <= self.__available_qubits:
            self.__qc = QuantumCircuit(1,1)
            self.__qc.ry(2 * self.__theta,0)
            self.__active_qubits = number_of_qubits
            self.__available_qubits = self.__available_qubits - self.__active_qubits
            print()
            print("You have",number_of_qubits,"active qubits that are set to (cos(theta),sin(theta))")
            self.available_qubits()
        else:
            print()
            print("WARNING: you requested",number_of_qubits,"qubits, but there are not enough available qubits!")
            self.available_qubits()
    def measure_qubits(self):
        if self.__active_qubits > 0:
            self.__qc.measure(0,0)
            job = execute(self.__qc,Aer.get_backend('qasm_simulator'),shots=self.__active_qubits)
            counts = job.result().get_counts(self.__qc)
            print()
            print("your",self.__active_qubits,"qubits are measured")
            print("counts = ",counts)
            self.__active_qubits = 0
            return counts
        else:
            print()
            print("WARNING: there are no active qubits -- you might first execute the 'get_qubits()' method")
            self.available_qubits()
    def rotate_qubits(self,angle=None):
        if angle is None or (isinstance(angle,float) is False and isinstance(angle,int) is False):
            print()
            print("ERROR: the method 'rotate_qubits' takes a real-valued angle in radian as its parameter, i.e., rotate_qubits(1.2121)")
        elif self.__active_qubits > 0:
            self.__qc.ry(2 * angle,0)
            print()
            print("your active qubits are rotated by angle",angle,"in radian")
        else:
            print()
            print("WARNING: there are no active qubits -- you might first execute the 'get_qubits()' method")
            self.available_qubits()
    def compare_my_guess(self,my_angle):
        if my_angle is None or (isinstance(my_angle,float) is False and isinstance(my_angle,int) is False):
            print("ERROR: the method 'compare_my_guess' takes a real-valued angle in radian as your guessed angle, i.e., compare_my_guess(1.2121)")
        else:
            self.__available_qubits = 0
            diff = abs(my_angle-self.__theta)
            print()
            print(self.__theta,"is the original theta")
            print(my_angle,"is your guess")
            print("the angle difference between the original theta and your guess is",diff/pi*180,"degree")
            print("-->the number of available qubits is (set to) zero, and so you cannot make any further experiment")
    def available_qubits(self):
        print("--> the number of available unused qubit(s) is",self.__available_qubits)
```
<h3> Task 1 </h3>
You are given 1000 copies of a qubit, all in the same quantum state lying in the first or second quadrant of the unit circle.
This quantum state is represented by an angle $ \theta \in [0,\pi) $, and your task is to guess this angle.
You use the class __unknown_qubit__ and its methods for your experiments.
_Remark that the measurement outcomes of the quantum states with angles $ \pi \over 3 $ and $ 2 \pi \over 3 $ are identical even though they are different quantum states. Therefore, getting 1000 qubits and then measuring them does not guarantee the correct answer._
Test your solution at least ten times.
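A small numeric illustration of the remark: measuring in the computational basis only reveals $\cos^2 \theta$, which is identical for $\pi/3$ and $2\pi/3$, while a rotation applied before the measurement (here $-\pi/4$, an arbitrary illustrative choice) breaks the symmetry:

```python
from math import cos, pi

# probability of observing 0 is cos^2(theta)
p0_a = cos(pi / 3) ** 2      # state at angle pi/3
p0_b = cos(2 * pi / 3) ** 2  # state at angle 2*pi/3
print(p0_a, p0_b)  # both approximately 0.25 -- indistinguishable by this measurement alone

# rotating both states by -pi/4 before measuring gives different statistics
p0_a_rot = cos(pi / 3 - pi / 4) ** 2
p0_b_rot = cos(2 * pi / 3 - pi / 4) ** 2
print(p0_a_rot, p0_b_rot)  # approximately 0.93 vs 0.07
```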
```
from math import pi, cos, sin, acos, asin
# an angle theta is randomly picked and it is fixed throughout the experiment
my_experiment = unknown_qubit()
#
# my_experiment.get_qubits(number_of_qubits)
# my_experiment.rotate_qubits(angle)
# my_experiment.measure_qubits()
# my_experiment.compare_my_guess(my_angle)
#
#
# your solution is here
#
for i in range(10):
    my_experiment = unknown_qubit()
    #
    # your solution
    #
```
[click for our solution](Q52_Quantum_Tomography_Solution.ipynb#task1)
<h3> Task 2 (extra) </h3>
You are given 1000 identical quantum systems with two qubits that are in states $ \myvector{\cos \theta_1 \\ \sin \theta_1} $ and $ \myvector{\cos \theta_2 \\ \sin \theta_2} $, where $ \theta_1,\theta_2 \in [0,\pi) $.
Your task is to guess the values of $ \theta_1 $ and $ \theta_2 $.
Create a quantum circuit with two qubits.
Randomly pick $\theta_1$ and $ \theta_2 $ and set the states of qubits respectively. (Do not use $ \theta_1 $ and $ \theta_2 $ except initializing the qubits.)
Do experiments (making measurements and/or applying basic quantum operators) with your circuit(s). You may create more than one circuit.
Assume that the total number of shots does not exceed 1000 throughout the whole experiment.
_Since you have two qubits, your measurement outcomes will be '00', '01', '10', and '11'._
```
#
# your solution
#
```
<h3> Task 3 (Discussion) </h3>
If the angle in Task 1 is picked in range $ [0,2\pi) $, then can we determine its quadrant correctly?
<h3> Global phase </h3>
Suppose that we have a qubit and its state is either $ \ket{0} $ or $ -\ket{0} $.
Is there any sequence of one-qubit gates such that we can measure different results after applying them?
All one-qubit gates are $ 2 \times 2 $ matrices, and their application is represented by a single matrix: $ A_n \cdot \cdots \cdot A_2 \cdot A_1 = A $.
By linearity, if $ A \ket{0} = \ket{u} $, then $ A (- \ket{0}) = -\ket{u} $. Thus, after measurement, the probabilities of observing state $ \ket{0} $ and state $ \ket{1} $ are the same for $ \ket{u} $ and $ -\ket{u} $. Therefore, we cannot distinguish them.
Even though the states $ \ket{0} $ and $ -\ket{0} $ are different mathematically, they are considered identical from the physical point of view.
The minus sign in front of $ -\ket{0} $ is called a global phase.
In general, a global phase can be a complex number with magnitude 1.
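The argument above can be checked numerically: squaring the amplitudes removes the overall sign, so $\ket{u}$ and $-\ket{u}$ yield the same measurement statistics (a plain-Python sketch):

```python
from math import cos, sin

theta = 0.7  # an arbitrary example angle
u = (cos(theta), sin(theta))   # amplitudes of A|0> = |u>
minus_u = (-u[0], -u[1])       # amplitudes of A(-|0>) = -|u>

# measurement probabilities are the squared amplitudes, so the global sign vanishes
probs_u = (u[0] ** 2, u[1] ** 2)
probs_minus_u = (minus_u[0] ** 2, minus_u[1] ** 2)
print(probs_u == probs_minus_u)  # True
```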
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Objectives" data-toc-modified-id="Objectives-1"><span class="toc-item-num">1 </span>Objectives</a></span></li><li><span><a href="#Example-Together" data-toc-modified-id="Example-Together-2"><span class="toc-item-num">2 </span>Example Together</a></span><ul class="toc-item"><li><span><a href="#Question" data-toc-modified-id="Question-2.1"><span class="toc-item-num">2.1 </span>Question</a></span></li><li><span><a href="#Considerations" data-toc-modified-id="Considerations-2.2"><span class="toc-item-num">2.2 </span>Considerations</a></span></li><li><span><a href="#Loading-the-Data" data-toc-modified-id="Loading-the-Data-2.3"><span class="toc-item-num">2.3 </span>Loading the Data</a></span></li><li><span><a href="#Some-Exploration-to-Better-Understand-our-Data" data-toc-modified-id="Some-Exploration-to-Better-Understand-our-Data-2.4"><span class="toc-item-num">2.4 </span>Some Exploration to Better Understand our Data</a></span></li><li><span><a href="#Experimental-Setup" data-toc-modified-id="Experimental-Setup-2.5"><span class="toc-item-num">2.5 </span>Experimental Setup</a></span><ul class="toc-item"><li><span><a href="#What-Test-Would-Make-Sense?" 
data-toc-modified-id="What-Test-Would-Make-Sense?-2.5.1"><span class="toc-item-num">2.5.1 </span>What Test Would Make Sense?</a></span></li><li><span><a href="#The-Hypotheses" data-toc-modified-id="The-Hypotheses-2.5.2"><span class="toc-item-num">2.5.2 </span>The Hypotheses</a></span></li><li><span><a href="#Setting-a-Threshold" data-toc-modified-id="Setting-a-Threshold-2.5.3"><span class="toc-item-num">2.5.3 </span>Setting a Threshold</a></span></li></ul></li><li><span><a href="#$\chi^2$-Test" data-toc-modified-id="$\chi^2$-Test-2.6"><span class="toc-item-num">2.6 </span>$\chi^2$ Test</a></span><ul class="toc-item"><li><span><a href="#Setup-the-Data" data-toc-modified-id="Setup-the-Data-2.6.1"><span class="toc-item-num">2.6.1 </span>Setup the Data</a></span></li><li><span><a href="#Calculation" data-toc-modified-id="Calculation-2.6.2"><span class="toc-item-num">2.6.2 </span>Calculation</a></span></li></ul></li><li><span><a href="#Interpretation" data-toc-modified-id="Interpretation-2.7"><span class="toc-item-num">2.7 </span>Interpretation</a></span></li></ul></li><li><span><a href="#Exercise" data-toc-modified-id="Exercise-3"><span class="toc-item-num">3 </span>Exercise</a></span></li></ul></div>
```
import numpy as np
import pandas as pd
from scipy import stats
import seaborn as sns
```
# Objectives
- Conduct an A/B test in Python
- Interpret the results of the A/B tests for a stakeholder
# Example Together
## Question
We have data about whether customers completed sales transactions, segregated by the type of ad banners to which the customers were exposed.
The question we want to answer is whether there was any difference in sales "conversions" between desktop customers who saw the sneakers banner and desktop customers who saw the accessories banner in the month of May 2019.
## Considerations
What would we need to consider when designing our experiment?
Might include:
- Who is it that we're including in our test?
- How big of an effect would make it "worth" us seeing?
- This can affect sample size
- This can give context of a statistically significant result
- Other biases or "gotchas"
## Loading the Data
First let's download the data from [kaggle](https://www.kaggle.com/podsyp/how-to-do-product-analytics) via the release page of this repo: https://github.com/flatiron-school/ds-ab_testing/releases
The code below will load it into our DataFrame:
```
# This will download the data from online so it can take some time (but relatively small download)
df = pd.read_csv('https://github.com/flatiron-school/ds-ab_testing/releases/download/v1.2/products_small.csv')
```
> Let's take a look while we're at it
```
df.head()
df.info()
```
## Some Exploration to Better Understand our Data
Let's look at the different banner types:
```
df['product'].value_counts()
df.groupby('product')['target'].value_counts()
```
Let's look at the range of time-stamps on these data:
```
df['time'].min()
df['time'].max()
```
Let's check the counts of the different site_version values:
```
df['site_version'].value_counts()
df['title'].value_counts()
df.groupby('title').agg({'target': 'mean'})
```
## Experimental Setup
We need to filter by site_version, time, and product:
```
df_AB = df[(df['site_version'] == 'desktop') &
           (df['time'] >= '2019-05-01') &
           ((df['product'] == 'accessories') | (df['product'] == 'sneakers'))].reset_index(drop = True)
df_AB.tail()
```
### What Test Would Make Sense?
Since we're comparing the frequency of conversions of customers who saw the "sneakers" banner against those who saw the "accessories" banner, we can use a $\chi^2$ test.
There are other hypothesis tests we could use, but the $\chi^2$ test fits our criteria here.
### The Hypotheses
$H_0$: Customers who saw the sneakers banner were no more or less likely to buy than customers who saw the accessories banner.
$H_1$: Customers who saw the sneakers banner were more or less likely to buy than customers who saw the accessories banner.
### Setting a Threshold
We'll set a false-positive rate of $\alpha = 0.05$.
## $\chi^2$ Test
### Setup the Data
We need our contingency table: the numbers of people who did or did not submit orders, both for the accessories banner and the sneakers banner.
```
# We have two groups
df_A = df_AB[df_AB['product'] == 'accessories']
df_B = df_AB[df_AB['product'] == 'sneakers']
accessories_orders = sum(df_A['target'])
sneakers_orders = sum(df_B['target'])
accessories_orders, sneakers_orders
```
To get the numbers of people who didn't submit orders, we get the total number of people who were shown banners and then subtract the numbers of people who did make orders.
```
accessories_total = sum(df_A['title'] == 'banner_show')
sneakers_total = sum(df_B['title'] == 'banner_show')
accessories_no_orders = accessories_total - accessories_orders
sneakers_no_orders = sneakers_total - sneakers_orders
accessories_no_orders, sneakers_no_orders
contingency_table = np.array([
    (accessories_orders, accessories_no_orders),
    (sneakers_orders, sneakers_no_orders)
])
contingency_table
```
### Calculation
```
stats.chi2_contingency(contingency_table)
```
This extremely low $p$-value suggests that these two groups are genuinely performing differently. In particular, the desktop customers who saw the sneakers banner in May 2019 bought at a higher rate than the desktop customers who saw the accessories banner in May 2019.
## Interpretation
```
contingency_table
# Find the difference in conversion rate (orders divided by total banner views)
accessory_CR, sneaker_CR = contingency_table[:, 0] / contingency_table.sum(axis=1)
print(f'Conversion Rate for accessory banner:\n\t{100*accessory_CR:.3f}%')
print(f'Conversion Rate for sneaker banner:\n\t{100*sneaker_CR:.3f}%')
print('')
print(f'Absolute difference of CR: {100*(sneaker_CR-accessory_CR):.3f}%')
```
So we can say:
- There was a statistically significant difference at the $\alpha = 0.05$ significance level
- The difference was about $2.8\%$ in favor of the sneaker banner!
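Beyond the $p$-value, it can help stakeholders to see an interval around the effect size. A 95% normal-approximation confidence interval for the difference of two conversion rates can be sketched as follows — the counts below are made-up placeholders, not the actual May 2019 numbers:

```python
from math import sqrt

def diff_ci(orders_a, total_a, orders_b, total_b, z=1.96):
    # 95% normal-approximation CI for the difference of two conversion rates
    p_a, p_b = orders_a / total_a, orders_b / total_b
    se = sqrt(p_a * (1 - p_a) / total_a + p_b * (1 - p_b) / total_b)
    diff = p_b - p_a
    return diff - z * se, diff + z * se

# hypothetical counts, NOT the real experiment's numbers
lo, hi = diff_ci(orders_a=500, total_a=10000, orders_b=780, total_b=10000)
print(f"95% CI for CR difference: [{100*lo:.2f}%, {100*hi:.2f}%]")
```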
# Exercise
> The company is impressed with what you found and is now wondering if there is a difference in their other banner ads!
With your group, look at the same month (May 2019) but compare different platforms ('mobile' vs 'desktop') and/or different banner types ('accessories', 'sneakers', 'clothes', 'sports_nutrition'). Just don't repeat the same test we did above 😉
Make sure you record what considerations you have for the experiment, what hypothesis test you performed ($H_0$ and $H_1$ too), and your overall conclusion/interpretation for the _business stakeholders_. Is there a follow up you'd suggest?
```
# Null: There is no difference between conversion rates for sports nutrition and clothes on the mobile site.
# Alternative: There is a difference.
df_AB = df[(df['site_version'] == 'mobile') &
           (df['time'] >= '2019-05-01') &
           ((df['product'] == 'sports_nutrition') | (df['product'] == 'clothes'))].reset_index(drop = True)
df_A = df_AB[df_AB['product'] == 'sports_nutrition']
df_B = df_AB[df_AB['product'] == 'clothes']
sports_nutrition_orders = sum(df_A['target'])
clothes_orders = sum(df_B['target'])
sports_nutrition_total = sum(df_A['title'] == 'banner_show')
clothes_total = sum(df_B['title'] == 'banner_show')
sports_nutrition_no_orders = sports_nutrition_total - sports_nutrition_orders
clothes_no_orders = clothes_total - clothes_orders
contingency_table = np.array([
    (sports_nutrition_orders, sports_nutrition_no_orders),
    (clothes_orders, clothes_no_orders)
])
stats.chi2_contingency(contingency_table)
# Conversion rate = orders divided by total banner views
sports_nutrition_CR, clothes_CR = contingency_table[:, 0] / contingency_table.sum(axis=1)
print(f'Conversion Rate for sports nutrition banner:\n\t{100*sports_nutrition_CR:.3f}%')
print(f'Conversion Rate for clothes banner:\n\t{100*clothes_CR:.3f}%')
print('')
print(f'Absolute difference of CR: {100*(sports_nutrition_CR-clothes_CR):.3f}%')
```
## Dependencies
```
import json, warnings, shutil
import pandas as pd
import tensorflow as tf
from tweet_utility_scripts import *
from tweet_utility_preprocess_roberta_scripts import *
from transformers import TFRobertaModel, RobertaConfig
from tokenizers import ByteLevelBPETokenizer
from tensorflow.keras.models import Model
from tensorflow.keras import optimizers, metrics, losses, layers
from tensorflow.keras.callbacks import EarlyStopping, TensorBoard, ModelCheckpoint
SEED = 0
seed_everything(SEED)
warnings.filterwarnings("ignore")
class RectifiedAdam(tf.keras.optimizers.Optimizer):
"""Variant of the Adam optimizer whose adaptive learning rate is rectified
so as to have a consistent variance.
It implements the Rectified Adam (a.k.a. RAdam) proposed by
Liyuan Liu et al. in [On The Variance Of The Adaptive Learning Rate
And Beyond](https://arxiv.org/pdf/1908.03265v1.pdf).
Example of usage:
```python
opt = tfa.optimizers.RectifiedAdam(lr=1e-3)
```
Note: `amsgrad` is not described in the original paper. Use it with
caution.
RAdam is not a replacement of the heuristic warmup; the settings should be
kept if warmup has already been employed and tuned in the baseline method.
You can enable warmup by setting `total_steps` and `warmup_proportion`:
```python
opt = tfa.optimizers.RectifiedAdam(
lr=1e-3,
total_steps=10000,
warmup_proportion=0.1,
min_lr=1e-5,
)
```
In the above example, the learning rate will increase linearly
from 0 to `lr` in 1000 steps, then decrease linearly from `lr` to `min_lr`
in 9000 steps.
Lookahead, proposed by Michael R. Zhang et al. in the paper
[Lookahead Optimizer: k steps forward, 1 step back]
(https://arxiv.org/abs/1907.08610v1), can be integrated with RAdam,
which is announced by Less Wright and the new combined optimizer can also
be called "Ranger". The mechanism can be enabled by using the lookahead
wrapper. For example:
```python
radam = tfa.optimizers.RectifiedAdam()
ranger = tfa.optimizers.Lookahead(radam, sync_period=6, slow_step_size=0.5)
```
"""
def __init__(self,
learning_rate=0.001,
beta_1=0.9,
beta_2=0.999,
epsilon=1e-7,
weight_decay=0.,
amsgrad=False,
sma_threshold=5.0,
total_steps=0,
warmup_proportion=0.1,
min_lr=0.,
name='RectifiedAdam',
**kwargs):
r"""Construct a new RAdam optimizer.
Args:
learning_rate: A `Tensor` or a floating point value. or a schedule
that is a `tf.keras.optimizers.schedules.LearningRateSchedule`
The learning rate.
beta_1: A float value or a constant float tensor.
The exponential decay rate for the 1st moment estimates.
beta_2: A float value or a constant float tensor.
The exponential decay rate for the 2nd moment estimates.
epsilon: A small constant for numerical stability.
weight_decay: A floating point value. Weight decay for each param.
amsgrad: boolean. Whether to apply AMSGrad variant of this
algorithm from the paper "On the Convergence of Adam and
beyond".
sma_threshold: A float value.
The threshold for simple mean average.
total_steps: An integer. Total number of training steps.
Enable warmup by setting a positive value.
warmup_proportion: A floating point value.
The proportion of increasing steps.
min_lr: A floating point value. Minimum learning rate after warmup.
name: Optional name for the operations created when applying
gradients. Defaults to "RectifiedAdam".
**kwargs: keyword arguments. Allowed to be {`clipnorm`,
`clipvalue`, `lr`, `decay`}. `clipnorm` is clip gradients
by norm; `clipvalue` is clip gradients by value, `decay` is
included for backward compatibility to allow time inverse
decay of learning rate. `lr` is included for backward
compatibility, recommended to use `learning_rate` instead.
"""
super(RectifiedAdam, self).__init__(name, **kwargs)
self._set_hyper('learning_rate', kwargs.get('lr', learning_rate))
self._set_hyper('beta_1', beta_1)
self._set_hyper('beta_2', beta_2)
self._set_hyper('decay', self._initial_decay)
self._set_hyper('weight_decay', weight_decay)
self._set_hyper('sma_threshold', sma_threshold)
self._set_hyper('total_steps', float(total_steps))
self._set_hyper('warmup_proportion', warmup_proportion)
self._set_hyper('min_lr', min_lr)
self.epsilon = epsilon or tf.keras.backend.epsilon()
self.amsgrad = amsgrad
self._initial_weight_decay = weight_decay
self._initial_total_steps = total_steps
def _create_slots(self, var_list):
for var in var_list:
self.add_slot(var, 'm')
for var in var_list:
self.add_slot(var, 'v')
if self.amsgrad:
for var in var_list:
self.add_slot(var, 'vhat')
def set_weights(self, weights):
params = self.weights
num_vars = int((len(params) - 1) / 2)
if len(weights) == 3 * num_vars + 1:
weights = weights[:len(params)]
super(RectifiedAdam, self).set_weights(weights)
def _resource_apply_dense(self, grad, var):
var_dtype = var.dtype.base_dtype
lr_t = self._decayed_lr(var_dtype)
m = self.get_slot(var, 'm')
v = self.get_slot(var, 'v')
beta_1_t = self._get_hyper('beta_1', var_dtype)
beta_2_t = self._get_hyper('beta_2', var_dtype)
epsilon_t = tf.convert_to_tensor(self.epsilon, var_dtype)
local_step = tf.cast(self.iterations + 1, var_dtype)
beta_1_power = tf.pow(beta_1_t, local_step)
beta_2_power = tf.pow(beta_2_t, local_step)
if self._initial_total_steps > 0:
total_steps = self._get_hyper('total_steps', var_dtype)
warmup_steps = total_steps *\
self._get_hyper('warmup_proportion', var_dtype)
min_lr = self._get_hyper('min_lr', var_dtype)
decay_steps = tf.maximum(total_steps - warmup_steps, 1)
decay_rate = (min_lr - lr_t) / decay_steps
lr_t = tf.where(
local_step <= warmup_steps,
lr_t * (local_step / warmup_steps),
lr_t + decay_rate * tf.minimum(local_step - warmup_steps,
decay_steps),
)
sma_inf = 2.0 / (1.0 - beta_2_t) - 1.0
sma_t = sma_inf - 2.0 * local_step * beta_2_power / (
1.0 - beta_2_power)
m_t = m.assign(
beta_1_t * m + (1.0 - beta_1_t) * grad,
use_locking=self._use_locking)
m_corr_t = m_t / (1.0 - beta_1_power)
v_t = v.assign(
beta_2_t * v + (1.0 - beta_2_t) * tf.square(grad),
use_locking=self._use_locking)
if self.amsgrad:
vhat = self.get_slot(var, 'vhat')
vhat_t = vhat.assign(
tf.maximum(vhat, v_t), use_locking=self._use_locking)
v_corr_t = tf.sqrt(vhat_t / (1.0 - beta_2_power))
else:
vhat_t = None
v_corr_t = tf.sqrt(v_t / (1.0 - beta_2_power))
r_t = tf.sqrt((sma_t - 4.0) / (sma_inf - 4.0) * (sma_t - 2.0) /
(sma_inf - 2.0) * sma_inf / sma_t)
sma_threshold = self._get_hyper('sma_threshold', var_dtype)
var_t = tf.where(sma_t >= sma_threshold,
r_t * m_corr_t / (v_corr_t + epsilon_t), m_corr_t)
if self._initial_weight_decay > 0.0:
var_t += self._get_hyper('weight_decay', var_dtype) * var
var_update = var.assign_sub(
lr_t * var_t, use_locking=self._use_locking)
updates = [var_update, m_t, v_t]
if self.amsgrad:
updates.append(vhat_t)
return tf.group(*updates)
def _resource_apply_sparse(self, grad, var, indices):
var_dtype = var.dtype.base_dtype
lr_t = self._decayed_lr(var_dtype)
beta_1_t = self._get_hyper('beta_1', var_dtype)
beta_2_t = self._get_hyper('beta_2', var_dtype)
epsilon_t = tf.convert_to_tensor(self.epsilon, var_dtype)
local_step = tf.cast(self.iterations + 1, var_dtype)
beta_1_power = tf.pow(beta_1_t, local_step)
beta_2_power = tf.pow(beta_2_t, local_step)
if self._initial_total_steps > 0:
total_steps = self._get_hyper('total_steps', var_dtype)
warmup_steps = total_steps *\
self._get_hyper('warmup_proportion', var_dtype)
min_lr = self._get_hyper('min_lr', var_dtype)
decay_steps = tf.maximum(total_steps - warmup_steps, 1)
decay_rate = (min_lr - lr_t) / decay_steps
lr_t = tf.where(
local_step <= warmup_steps,
lr_t * (local_step / warmup_steps),
lr_t + decay_rate * tf.minimum(local_step - warmup_steps,
decay_steps),
)
sma_inf = 2.0 / (1.0 - beta_2_t) - 1.0
sma_t = sma_inf - 2.0 * local_step * beta_2_power / (
1.0 - beta_2_power)
m = self.get_slot(var, 'm')
m_scaled_g_values = grad * (1 - beta_1_t)
m_t = m.assign(m * beta_1_t, use_locking=self._use_locking)
with tf.control_dependencies([m_t]):
m_t = self._resource_scatter_add(m, indices, m_scaled_g_values)
m_corr_t = m_t / (1.0 - beta_1_power)
v = self.get_slot(var, 'v')
v_scaled_g_values = (grad * grad) * (1 - beta_2_t)
v_t = v.assign(v * beta_2_t, use_locking=self._use_locking)
with tf.control_dependencies([v_t]):
v_t = self._resource_scatter_add(v, indices, v_scaled_g_values)
if self.amsgrad:
vhat = self.get_slot(var, 'vhat')
vhat_t = vhat.assign(
tf.maximum(vhat, v_t), use_locking=self._use_locking)
v_corr_t = tf.sqrt(vhat_t / (1.0 - beta_2_power))
else:
vhat_t = None
v_corr_t = tf.sqrt(v_t / (1.0 - beta_2_power))
r_t = tf.sqrt((sma_t - 4.0) / (sma_inf - 4.0) * (sma_t - 2.0) /
(sma_inf - 2.0) * sma_inf / sma_t)
sma_threshold = self._get_hyper('sma_threshold', var_dtype)
var_t = tf.where(sma_t >= sma_threshold,
r_t * m_corr_t / (v_corr_t + epsilon_t), m_corr_t)
if self._initial_weight_decay > 0.0:
var_t += self._get_hyper('weight_decay', var_dtype) * var
with tf.control_dependencies([var_t]):
var_update = self._resource_scatter_add(
var, indices, tf.gather(-lr_t * var_t, indices))
updates = [var_update, m_t, v_t]
if self.amsgrad:
updates.append(vhat_t)
return tf.group(*updates)
def get_config(self):
config = super(RectifiedAdam, self).get_config()
config.update({
'learning_rate':
self._serialize_hyperparameter('learning_rate'),
'beta_1':
self._serialize_hyperparameter('beta_1'),
'beta_2':
self._serialize_hyperparameter('beta_2'),
'decay':
self._serialize_hyperparameter('decay'),
'weight_decay':
self._serialize_hyperparameter('weight_decay'),
'sma_threshold':
self._serialize_hyperparameter('sma_threshold'),
'epsilon':
self.epsilon,
'amsgrad':
self.amsgrad,
'total_steps':
self._serialize_hyperparameter('total_steps'),
'warmup_proportion':
self._serialize_hyperparameter('warmup_proportion'),
'min_lr':
self._serialize_hyperparameter('min_lr'),
})
return config
```
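The sparse-update branch above carries the RAdam variance-rectification bookkeeping (`sma_inf`, `sma_t`, `r_t`). A minimal scalar sketch of those quantities, assuming only `beta_2` and a 1-indexed step counter `t`, might look like:

```python
import math

def radam_rectifier(beta_2, t):
    """Mirror the sma_inf / sma_t / r_t terms used by RectifiedAdam.

    Returns (sma_t, r_t); r_t is None while sma_t <= 4, i.e. while the
    adaptive-variance estimate is not yet considered tractable.
    """
    beta_2_t = beta_2 ** t
    sma_inf = 2.0 / (1.0 - beta_2) - 1.0                      # rho_inf
    sma_t = sma_inf - 2.0 * t * beta_2_t / (1.0 - beta_2_t)   # rho_t
    if sma_t <= 4.0:
        return sma_t, None  # warmup phase: plain momentum update
    r_t = math.sqrt((sma_t - 4.0) / (sma_inf - 4.0)
                    * (sma_t - 2.0) / (sma_inf - 2.0)
                    * sma_inf / sma_t)
    return sma_t, r_t
```

Note that the optimizer itself switches between the rectified and plain-momentum updates by comparing `sma_t` against the `sma_threshold` hyperparameter, not the hard 4.0 cut-off used in this sketch.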
# Load data
```
database_base_path = '/kaggle/input/tweet-dataset-split-roberta-base-96/'
k_fold = pd.read_csv(database_base_path + '5-fold.csv')
display(k_fold.head())
# Unzip files
!tar -xvf /kaggle/input/tweet-dataset-split-roberta-base-96/fold_1.tar.gz
!tar -xvf /kaggle/input/tweet-dataset-split-roberta-base-96/fold_2.tar.gz
!tar -xvf /kaggle/input/tweet-dataset-split-roberta-base-96/fold_3.tar.gz
# !tar -xvf /kaggle/input/tweet-dataset-split-roberta-base-96/fold_4.tar.gz
# !tar -xvf /kaggle/input/tweet-dataset-split-roberta-base-96/fold_5.tar.gz
```
# Model parameters
```
vocab_path = database_base_path + 'vocab.json'
merges_path = database_base_path + 'merges.txt'
base_path = '/kaggle/input/qa-transformers/roberta/'
config = {
"MAX_LEN": 96,
"BATCH_SIZE": 32,
"EPOCHS": 4,
"LEARNING_RATE": 3e-5,
"ES_PATIENCE": 1,
"question_size": 4,
"N_FOLDS": 3,
"base_model_path": base_path + 'roberta-base-tf_model.h5',
"config_path": base_path + 'roberta-base-config.json'
}
with open('config.json', 'w') as json_file:
    json.dump(config, json_file)
```
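Because the hyperparameters are persisted to `config.json`, a downstream inference notebook can reload them rather than redefine them; a small round-trip sketch (with an illustrative subset of the keys):

```python
import json

config = {"MAX_LEN": 96, "BATCH_SIZE": 32, "N_FOLDS": 3}  # illustrative subset

with open('config.json', 'w') as json_file:
    json.dump(config, json_file)

with open('config.json') as json_file:
    loaded = json.load(json_file)

print(loaded['MAX_LEN'])  # 96
```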
# Model
```
module_config = RobertaConfig.from_pretrained(config['config_path'], output_hidden_states=False)
def model_fn(MAX_LEN):
input_ids = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='input_ids')
attention_mask = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='attention_mask')
base_model = TFRobertaModel.from_pretrained(config['base_model_path'], config=module_config, name="base_model")
sequence_output = base_model({'input_ids': input_ids, 'attention_mask': attention_mask})
last_state = sequence_output[0]
x_start = layers.Dropout(.1)(last_state)
x_start = layers.Conv1D(1, 1)(x_start)
x_start = layers.Flatten()(x_start)
y_start = layers.Activation('softmax', name='y_start')(x_start)
x_end = layers.Dropout(.1)(last_state)
x_end = layers.Conv1D(1, 1)(x_end)
x_end = layers.Flatten()(x_end)
y_end = layers.Activation('softmax', name='y_end')(x_end)
model = Model(inputs=[input_ids, attention_mask], outputs=[y_start, y_end])
# optimizer = optimizers.Adam(lr=config['LEARNING_RATE'])
optimizer = RectifiedAdam(lr=config['LEARNING_RATE'],
total_steps=(len(k_fold[k_fold['fold_1'] == 'train']) // config['BATCH_SIZE']) * config['EPOCHS'],
warmup_proportion=0.1,
min_lr=1e-7)
model.compile(optimizer, loss=losses.CategoricalCrossentropy(),
metrics=[metrics.CategoricalAccuracy()])
return model
```
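The model emits two softmax heads, one for the span start and one for the span end; the `decode` helper used later maps those back to text. That helper isn't shown in this excerpt, but a hedged sketch of the core idea (argmax both heads, fall back to the full text on a degenerate end < start prediction) is:

```python
def decode_span(start_probs, end_probs, offsets, text):
    """Pick the most likely (start, end) token pair and map it back to a
    character span of the original text.

    offsets: list of (char_start, char_end) per token, as a BPE tokenizer
    would report them. Illustrative helper, not the pipeline's decode().
    """
    start_idx = max(range(len(start_probs)), key=start_probs.__getitem__)
    end_idx = max(range(len(end_probs)), key=end_probs.__getitem__)
    if end_idx < start_idx:  # degenerate prediction: fall back to full text
        return text
    return text[offsets[start_idx][0]:offsets[end_idx][1]]
```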
# Tokenizer
```
tokenizer = ByteLevelBPETokenizer(vocab_file=vocab_path, merges_file=merges_path,
lowercase=True, add_prefix_space=True)
tokenizer.save('./')
```
# Train
```
history_list = []
AUTO = tf.data.experimental.AUTOTUNE
for n_fold in range(config['N_FOLDS']):
    n_fold += 1
print('\nFOLD: %d' % (n_fold))
# Load data
base_data_path = 'fold_%d/' % (n_fold)
x_train = np.load(base_data_path + 'x_train.npy')
y_train = np.load(base_data_path + 'y_train.npy')
x_valid = np.load(base_data_path + 'x_valid.npy')
y_valid = np.load(base_data_path + 'y_valid.npy')
### Delete data dir
shutil.rmtree(base_data_path)
# Train model
model_path = 'model_fold_%d.h5' % (n_fold)
model = model_fn(config['MAX_LEN'])
es = EarlyStopping(monitor='val_loss', mode='min', patience=config['ES_PATIENCE'],
restore_best_weights=True, verbose=1)
checkpoint = ModelCheckpoint(model_path, monitor='val_loss', mode='min',
save_best_only=True, save_weights_only=True)
history = model.fit(list(x_train), list(y_train),
validation_data=(list(x_valid), list(y_valid)),
batch_size=config['BATCH_SIZE'],
callbacks=[checkpoint, es],
epochs=config['EPOCHS'],
verbose=1).history
history_list.append(history)
# Make predictions
train_preds = model.predict(list(x_train))
valid_preds = model.predict(list(x_valid))
k_fold.loc[k_fold['fold_%d' % (n_fold)] == 'train', 'start_fold_%d' % (n_fold)] = train_preds[0].argmax(axis=-1)
k_fold.loc[k_fold['fold_%d' % (n_fold)] == 'train', 'end_fold_%d' % (n_fold)] = train_preds[1].argmax(axis=-1)
k_fold.loc[k_fold['fold_%d' % (n_fold)] == 'validation', 'start_fold_%d' % (n_fold)] = valid_preds[0].argmax(axis=-1)
k_fold.loc[k_fold['fold_%d' % (n_fold)] == 'validation', 'end_fold_%d' % (n_fold)] = valid_preds[1].argmax(axis=-1)
k_fold['end_fold_%d' % (n_fold)] = k_fold['end_fold_%d' % (n_fold)].astype(int)
k_fold['start_fold_%d' % (n_fold)] = k_fold['start_fold_%d' % (n_fold)].astype(int)
k_fold['end_fold_%d' % (n_fold)].clip(0, k_fold['text_len'], inplace=True)
k_fold['start_fold_%d' % (n_fold)].clip(0, k_fold['end_fold_%d' % (n_fold)], inplace=True)
k_fold['prediction_fold_%d' % (n_fold)] = k_fold.apply(lambda x: decode(x['start_fold_%d' % (n_fold)], x['end_fold_%d' % (n_fold)], x['text'], config['question_size'], tokenizer), axis=1)
k_fold['prediction_fold_%d' % (n_fold)].fillna(k_fold["text"], inplace=True)
k_fold['jaccard_fold_%d' % (n_fold)] = k_fold.apply(lambda x: jaccard(x['selected_text'], x['prediction_fold_%d' % (n_fold)]), axis=1)
```
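The `jaccard` function applied in the last line of the loop isn't defined in this excerpt; assuming the standard word-level Jaccard score used for this task, a sketch is:

```python
def jaccard(str1, str2):
    """Word-level Jaccard similarity: |A ∩ B| / |A ∪ B| over lowercased word sets."""
    a = set(str(str1).lower().split())
    b = set(str(str2).lower().split())
    if not a and not b:
        return 0.0  # assumption: define the empty-vs-empty case as 0
    return len(a & b) / len(a | b)

print(jaccard("good morning world", "good night world"))  # 2 shared of 4 -> 0.5
```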
# Model loss graph
```
sns.set(style="whitegrid")
for n_fold in range(config['N_FOLDS']):
print('Fold: %d' % (n_fold+1))
plot_metrics(history_list[n_fold])
```
# Model evaluation
```
display(evaluate_model_kfold(k_fold, config['N_FOLDS']).style.applymap(color_map))
```
# Visualize predictions
```
display(k_fold[[c for c in k_fold.columns if not (c.startswith('textID') or
c.startswith('text_len') or
c.startswith('selected_text_len') or
c.startswith('text_wordCnt') or
c.startswith('selected_text_wordCnt') or
c.startswith('fold_') or
c.startswith('start_fold_') or
c.startswith('end_fold_'))]].head(15))
```
| github_jupyter |
__This notebook__ trains resnet18 from scratch on CIFAR10 dataset.
```
%load_ext autoreload
%autoreload 2
%env CUDA_VISIBLE_DEVICES=YOURDEVICEHERE
import os, sys, time
sys.path.insert(0, '..')
import lib
import numpy as np
import torch, torch.nn as nn
import torch.nn.functional as F
import matplotlib.pyplot as plt
%matplotlib inline
import random
random.seed(42)
np.random.seed(42)
torch.random.manual_seed(42)
import time
from resnet import ResNet18
device = 'cuda' if torch.cuda.is_available() else 'cpu'
experiment_name = 'editable_layer3'
experiment_name = '{}_{}.{:0>2d}.{:0>2d}_{:0>2d}:{:0>2d}:{:0>2d}'.format(experiment_name, *time.gmtime()[:6])
print(experiment_name)
print("PyTorch version:", torch.__version__)
from torchvision import transforms, datasets
transform_train = transforms.Compose([
transforms.RandomCrop(32, padding=4),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
])
transform_test = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
])
trainset = datasets.CIFAR10(root='./data', train=True, download=True, transform=transform_train)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=128, shuffle=True, num_workers=2)
testset = datasets.CIFAR10(root='./data', train=False, download=True, transform=transform_test)
testloader = torch.utils.data.DataLoader(testset, batch_size=100, shuffle=False, num_workers=2)
classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
X_test, y_test = map(torch.cat, zip(*list(testloader)))
model = lib.Editable(
module=ResNet18(), loss_function=lib.contrastive_cross_entropy,
get_editable_parameters=lambda module: module.layer3.parameters(),
optimizer=lib.IngraphRMSProp(
learning_rate=1e-3, beta=nn.Parameter(torch.tensor(0.5, dtype=torch.float32)),
), max_steps=10,
).to(device)
trainer = lib.EditableTrainer(model, F.cross_entropy, experiment_name=experiment_name, max_norm=10)
trainer.writer.add_text("trainer", repr(trainer).replace('\n', '<br>'))
from tqdm import tqdm_notebook, tnrange
from IPython.display import clear_output
val_metrics = trainer.evaluate_metrics(X_test.to(device), y_test.to(device))
min_error, min_drawdown = val_metrics['base_error'], val_metrics['drawdown']
early_stopping_epochs = 500
number_of_epochs_without_improvement = 0
def edit_generator():
while True:
for xb, yb in torch.utils.data.DataLoader(trainset, batch_size=1, shuffle=True, num_workers=2):
yield xb.to(device), torch.randint_like(yb, low=0, high=len(classes), device=device)
edit_generator = edit_generator()
while True:
for x_batch, y_batch in tqdm_notebook(trainloader):
trainer.step(x_batch.to(device), y_batch.to(device), *next(edit_generator))
val_metrics = trainer.evaluate_metrics(X_test.to(device), y_test.to(device))
clear_output(True)
error_rate, drawdown = val_metrics['base_error'], val_metrics['drawdown']
number_of_epochs_without_improvement += 1
if error_rate < min_error:
trainer.save_checkpoint(tag='best_val_error')
min_error = error_rate
number_of_epochs_without_improvement = 0
if drawdown < min_drawdown:
trainer.save_checkpoint(tag='best_drawdown')
min_drawdown = drawdown
number_of_epochs_without_improvement = 0
trainer.save_checkpoint()
trainer.remove_old_temp_checkpoints()
if number_of_epochs_without_improvement > early_stopping_epochs:
break
from lib import evaluate_quality
np.random.seed(9)
indices = np.random.permutation(len(X_test))[:1000]
X_edit = X_test[indices].clone().to(device)
y_edit = torch.tensor(np.random.randint(0, 10, size=y_test[indices].shape), device=device)
metrics = evaluate_quality(model, X_test, y_test, X_edit, y_edit, batch_size=512)
for key in sorted(metrics.keys()):
print('{}\t:{:.5}'.format(key, metrics[key]))
```
```
#!pip install tensorflow
import pandas as pd
import tensorflow as tf
from tensorflow import keras
#from tensorflow.keras.models import Sequential
#from tensorflow.keras.layers import Activation, Dense
import matplotlib.pyplot as plt
x = [-1, 0, 1, 2, 3, 4]
y = [-3, -1, 1, 3, 5, 7]
df = pd.DataFrame({
"x": [-1, 0, 1, 2, 3, 4],
"y" : [-3, -1, 1, 3, 5, 7]
})
df
plt.scatter(x, y)
plt.show()
model = keras.Sequential([
keras.layers.Dense(32, activation=tf.nn.relu, input_shape=[1]),
keras.layers.Dense(32, activation=tf.nn.relu),
keras.layers.Dense(32, activation=tf.nn.relu),
keras.layers.Dense(1)
], name='straight_line_model')
model.summary()
optimizer = tf.keras.optimizers.RMSprop(0.0099)
model.compile(loss='mean_squared_error', optimizer=optimizer)
model.fit(x, y, epochs=500, verbose=1)
print(model.predict([-1]))
print(model.predict([-1, 0, 1, 2, 3, 4]))
predicted_y = model.predict(x)
print(predicted_y)
import numpy as np
y_pred_round = []
for i in predicted_y:
y_pred_round.append(np.round(i))
from sklearn.metrics import confusion_matrix as cm
print(cm(y, y_pred_round))
model.predict([-1])[0][0]
import seaborn as sns
predicted_df = pd.DataFrame({
"x" : x,
"y" : y
})
sns.scatterplot(x='x', y='y', data=predicted_df)
plt.scatter(x, y)
plt.plot(x, predicted_y, color='red')
plt.show()
# Convert the model.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
# Save the model.
with open('model.tflite', 'wb') as f:
f.write(tflite_model)
!pip install ml_metadata
import ml_metadata as mlmd
from ml_metadata.metadata_store import metadata_store
from ml_metadata.proto import metadata_store_pb2
connection_config = metadata_store_pb2.ConnectionConfig()
connection_config.fake_database.SetInParent() # Sets an empty fake database proto.
store = metadata_store.MetadataStore(connection_config)
# Create ArtifactTypes, e.g., Data and Model
data_type = metadata_store_pb2.ArtifactType()
data_type.name = "DataSet"
data_type.properties["day"] = metadata_store_pb2.INT
data_type.properties["split"] = metadata_store_pb2.STRING
data_type_id = store.put_artifact_type(data_type)
model_type = metadata_store_pb2.ArtifactType()
model_type.name = "SavedModel"
model_type.properties["version"] = metadata_store_pb2.INT
model_type.properties["name"] = metadata_store_pb2.STRING
model_type_id = store.put_artifact_type(model_type)
# Query all registered Artifact types.
artifact_types = store.get_artifact_types()
artifact_types
```
## Homework-3: MNIST Classification with ConvNet
### **Deadline: 2021.04.06 23:59:00**
### In this homework, you need to
- #### implement the forward and backward functions for ConvLayer (`layers/conv_layer.py`)
- #### implement the forward and backward functions for PoolingLayer (`layers/pooling_layer.py`)
- #### implement the forward and backward functions for DropoutLayer (`layers/dropout_layer.py`)
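For the pooling layer, the key observation is that the forward pass must remember where each window's maximum came from so the backward pass can route gradients back to exactly those positions. A minimal NumPy sketch of the mechanics for a single-channel 2×2, stride-2 max pool (not the required layer interface):

```python
import numpy as np

def maxpool2x2_forward(x):
    """x: (H, W) with even H, W. Returns the pooled map and the argmax mask."""
    H, W = x.shape
    windows = x.reshape(H // 2, 2, W // 2, 2).transpose(0, 2, 1, 3).reshape(H // 2, W // 2, 4)
    out = windows.max(axis=-1)
    mask = windows == out[..., None]  # marks the max position(s) in each window
    return out, mask

def maxpool2x2_backward(grad_out, mask):
    """Route upstream gradients back to the max positions."""
    H2, W2, _ = mask.shape
    grad_windows = mask * grad_out[..., None]
    return grad_windows.reshape(H2, W2, 2, 2).transpose(0, 2, 1, 3).reshape(H2 * 2, W2 * 2)
```

On a 4×4 input where each 2×2 window peaks at 4, 8, 12, 16, the forward pass returns `[[4, 8], [12, 16]]` and the backward pass places the incoming gradients at exactly those four positions.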
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()
from network import Network
from solver import train, test
from plot import plot_loss_and_acc
```
## Load MNIST Dataset
We use tensorflow tools to load dataset for convenience.
```
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
def decode_image(image):
# Normalize from [0, 255.] to [0., 1.0], and then subtract by the mean value
image = tf.cast(image, tf.float32)
image = tf.reshape(image, [1, 28, 28])
image = image / 255.0
image = image - tf.reduce_mean(image)
return image
def decode_label(label):
# Encode label with one-hot encoding
return tf.one_hot(label, depth=10)
# Data Preprocessing
x_train = tf.data.Dataset.from_tensor_slices(x_train).map(decode_image)
y_train = tf.data.Dataset.from_tensor_slices(y_train).map(decode_label)
data_train = tf.data.Dataset.zip((x_train, y_train))
x_test = tf.data.Dataset.from_tensor_slices(x_test).map(decode_image)
y_test = tf.data.Dataset.from_tensor_slices(y_test).map(decode_label)
data_test = tf.data.Dataset.zip((x_test, y_test))
```
## Set Hyperparameters
You can modify hyperparameters by yourself.
```
batch_size = 100
max_epoch = 10
init_std = 0.1
learning_rate = 0.001
weight_decay = 0.005
disp_freq = 50
```
## Criterion and Optimizer
```
from criterion import SoftmaxCrossEntropyLossLayer
from optimizer import SGD
criterion = SoftmaxCrossEntropyLossLayer()
sgd = SGD(learning_rate, weight_decay)
```
## ConvNet
```
from layers import FCLayer, ReLULayer, ConvLayer, MaxPoolingLayer, ReshapeLayer
convNet = Network()
convNet.add(ConvLayer(1, 8, 3, 1))
convNet.add(ReLULayer())
convNet.add(MaxPoolingLayer(2, 0))
convNet.add(ConvLayer(8, 16, 3, 1))
convNet.add(ReLULayer())
convNet.add(MaxPoolingLayer(2, 0))
convNet.add(ReshapeLayer((batch_size, 16, 7, 7), (batch_size, 784)))
convNet.add(FCLayer(784, 128))
convNet.add(ReLULayer())
convNet.add(FCLayer(128, 10))
# Train
convNet.is_training = True
convNet, conv_loss, conv_acc = train(convNet, criterion, sgd, data_train, max_epoch, batch_size, disp_freq)
# Test
convNet.is_training = False
test(convNet, criterion, data_test, batch_size, disp_freq)
```
## Plot
```
plot_loss_and_acc({'ConvNet': [conv_loss, conv_acc]})
```
### ~~You have finished homework3, congratulations!~~
**Next, according to the requirements (4):**
### **You need to implement the Dropout layer and train the network again.**
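A common implementation choice is inverted dropout: sample a keep-mask during training and divide by the keep probability immediately, so inference is a plain identity. A NumPy sketch of the mechanics (the actual `DropoutLayer` in `layers/dropout_layer.py` must follow the course's layer interface) is:

```python
import numpy as np

class DropoutLayerSketch:
    def __init__(self, p):
        self.p = p              # probability of dropping a unit
        self.is_training = True

    def forward(self, x):
        if not self.is_training:
            return x            # inference: identity, no rescaling needed
        keep = 1.0 - self.p
        self.mask = (np.random.rand(*x.shape) < keep) / keep  # inverted dropout
        return x * self.mask

    def backward(self, grad):
        return grad * self.mask  # gradients flow only through kept units
```

At `p = 0.5`, surviving activations are scaled by 2, so the expected activation magnitude matches between training and inference.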
```
from layers import DropoutLayer
from layers import FCLayer, ReLULayer, ConvLayer, MaxPoolingLayer, ReshapeLayer, DropoutLayer
# build your network
convNet = Network()
convNet.add(ConvLayer(1, 8, 3, 1))
convNet.add(ReLULayer())
convNet.add(DropoutLayer(0.5))
convNet.add(MaxPoolingLayer(2, 0))
convNet.add(ConvLayer(8, 16, 3, 1))
convNet.add(ReLULayer())
convNet.add(MaxPoolingLayer(2, 0))
convNet.add(ReshapeLayer((batch_size, 16, 7, 7), (batch_size, 784)))
convNet.add(FCLayer(784, 128))
convNet.add(ReLULayer())
convNet.add(FCLayer(128, 10))
# training
convNet.is_training = True
convNet, conv_loss, conv_acc = train(convNet, criterion, sgd, data_train, max_epoch, batch_size, disp_freq)
# testing
convNet.is_training = False
test(convNet, criterion, data_test, batch_size, disp_freq)
plot_loss_and_acc({'ConvNet': [conv_loss, conv_acc]})
```
#### Import the libraries and load the data
```
import pandas as pd
import numpy as np
import random
from tqdm import tqdm
from gensim.models import Word2Vec
import matplotlib.pyplot as plt
%matplotlib inline
import warnings;
warnings.filterwarnings('ignore')
onlineData = pd.read_excel('./Data/Online Retail.xlsx')
onlineData.head()
onlineData.shape
```
The dataset contains 541,909 transactions.
#### Handling missing data
```
# check for missing values
onlineData.isnull().sum()
```
Since we have sufficient data, we will drop all the rows with missing values.
```
# remove missing values
onlineData.dropna(inplace=True)
# again check missing values
onlineData.isnull().sum()
```
#### Data preparation
Let's convert the StockCode to string datatype.
```
onlineData['StockCode']= onlineData['StockCode'].astype(str)
```
Let's check out the number of unique customers in our dataset.
```
customers = onlineData["CustomerID"].unique().tolist()
len(customers)
```
There are 4,372 customers in our dataset. For each of these customers we will extract their buying history. In other words, we can have 4,372 sequences of purchases.
We will use data of 90% of the customers to create word2vec embeddings and the rest for validation. Let's split the data.
```
# shuffle customer IDs
random.shuffle(customers)
# extract 90% of customer IDs
customers_train = [customers[i] for i in range(round(0.9*len(customers)))]
# split data into train and validation set
train_df = onlineData[onlineData['CustomerID'].isin(customers_train)]
validation_df = onlineData[~onlineData['CustomerID'].isin(customers_train)]
```
Let's create sequences of purchases made by the customers in the dataset for both the train and validation set.
```
# list to capture purchase history of the training set customers
purchases_train = []
# populate the list with the product codes
for i in tqdm(customers_train):
temp = train_df[train_df["CustomerID"] == i]["StockCode"].tolist()
purchases_train.append(temp)
# list to capture purchase history of the validation set customers
purchases_val = []
# populate the list with the product codes
for i in tqdm(validation_df['CustomerID'].unique()):
temp = validation_df[validation_df["CustomerID"] == i]["StockCode"].tolist()
purchases_val.append(temp)
```
#### Build word2vec embeddings for products
```
# train word2vec model
model = Word2Vec(window = 10, sg = 1, hs = 0,
negative = 10, # for negative sampling
alpha = 0.03, min_alpha = 0.0007,
seed = 14)
model.build_vocab(purchases_train, progress_per=200)
model.train(purchases_train, total_examples = model.corpus_count, epochs = 10, report_delay = 1)
# save word2vec model
model.save("word2vec_2.model")
```
As we do not plan to train the model any further, we are calling init_sims(), which will make the model much more memory-efficient.
```
model.init_sims(replace=True)
print(model)
```
Now we will extract the vectors of all the words in our vocabulary and store it in one place for easy access.
```
# extract all vectors
X = model[model.wv.vocab]
X.shape
```
#### Visualize word2vec embeddings
For the sake of visualization, we are going to reduce the dimensions of the product embeddings from 100 to 2 using UMAP, a dimensionality-reduction algorithm.
```
import umap.umap_ as umap
cluster_embedding = umap.UMAP(n_neighbors=30, min_dist=0.0,
n_components=2, random_state=42).fit_transform(X)
plt.figure(figsize=(10,9))
plt.scatter(cluster_embedding[:, 0], cluster_embedding[:, 1], s=3, cmap='Spectral')
```
Every dot in this plot is a product. There are several tiny clusters of these datapoints. These are groups of similar products.
#### Recommending Products
We are ready with the word2vec embeddings for every product in the online retail dataset. Now our next step is to suggest similar products for a certain product or a product's vector.
Let's first create a product-ID and product-description dictionary to easily map a product's description to its ID and vice versa.
```
products = train_df[["StockCode", "Description"]]
# remove duplicates
products.drop_duplicates(inplace=True, subset='StockCode', keep="last")
# create product-ID and product-description dictionary
products_dict = products.groupby('StockCode')['Description'].apply(list).to_dict()
# test the dictionary
products_dict['84029E']
```
Top 6 similar products
```
def similar_products(v, n = 6):
# extract most similar products for the input vector
ms = model.similar_by_vector(v, topn=n+1)[1:]
# extract name and similarity score of the similar products
new_ms = []
for j in ms:
pair = (products_dict[j[0]][0], j[1])
new_ms.append(pair)
return new_ms
```
Let's try out our function by passing the vector of the product '90019A' ('SILVER M.O.P ORBIT BRACELET')
```
similar_products(model['90019A'])
```
The results are pretty relevant and match well with the input product. However, this output is based on the vector of a single product only. What if we want to recommend products to a user based on the multiple purchases they have made in the past?
One simple solution is to take the average of all the vectors of the products they have bought so far and use this resultant vector to find similar products. For that we will use the function below, which takes in a list of product IDs and returns a 100-dimensional vector that is the mean of the vectors of the products in the input list.
```
def aggregate_vectors(products):
product_vec = []
for i in products:
try:
product_vec.append(model[i])
except KeyError:
continue
return np.mean(product_vec, axis=0)
```
We will pass this products' sequence of the validation set to the function aggregate_vectors.
```
similar_products(aggregate_vectors(purchases_val[0]))
```
The system has recommended 6 products based on the entire purchase history of a user. Moreover, if we want product suggestions based on just the last few purchases, we can use the same set of functions.
For the last 10 products purchased as input:
```
similar_products(aggregate_vectors(purchases_val[0][-10:]))
```
We do see different products are recommended using the subset of the purchases history.
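To quantify how much the two recommendation lists differ, one option (an illustrative helper, not part of the pipeline above) is the Jaccard overlap of the recommended product names:

```python
def recommendation_overlap(recs_a, recs_b):
    """Jaccard overlap between two lists of (name, score) recommendations."""
    names_a = {name for name, _ in recs_a}
    names_b = {name for name, _ in recs_b}
    if not names_a and not names_b:
        return 0.0
    return len(names_a & names_b) / len(names_a | names_b)

# toy example with made-up products
full_history = [('RED MUG', 0.91), ('BLUE MUG', 0.88), ('TEA TOWEL', 0.80)]
recent_only = [('RED MUG', 0.91), ('LUNCH BOX', 0.85), ('TEA TOWEL', 0.79)]
print(recommendation_overlap(full_history, recent_only))  # 2 shared of 4 -> 0.5
```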
# YOLOv5 Training on Custom Dataset
## Pre-requisite
- Make sure you read the user guide from the [repository](https://github.com/CertifaiAI/classifai-blogs/tree/sum_blogpost01/0_Complete_Guide_To_Custom_Object_Detection_Model_With_Yolov5)
- Upload this to Google Drive to run on Colab.
*This script is written primarily to run on Google Colab. If you want to run it on local jupyter notebook, modification of code is expected*
- Make sure you are running "GPU" on runtime. [tutorial](https://www.tutorialspoint.com/google_colab/google_colab_using_free_gpu.htm)
- **Make sure your `dataset.zip` file is uploaded at the right location. Click [here](https://github.com/CertifaiAI/classifai-blogs/blob/sum_blogpost01/0_Complete_Guide_To_Custom_Object_Detection_Model_With_Yolov5/ModelTraining/README.md#model-training-1) for tutorial to upload `dataset.zip` file.**
*Reference: https://github.com/ultralytics/yolov5*
## Step 1: Extract *dataset.zip* File
```
%cd /content
!unzip dataset.zip; rm dataset.zip
```
## Step 2: Clone YOLOv5 Repo and Install All Dependencies
```
# clone the repo
!git clone https://github.com/ultralytics/yolov5
# install dependencies
!pip install -qr yolov5/requirements.txt
%cd yolov5
import torch
# to display image
from IPython.display import Image, clear_output
# to download models/datasets
from utils.google_utils import gdrive_download
clear_output()
print('Setup complete. Using torch %s %s' % (torch.__version__, torch.cuda.get_device_properties(0) if torch.cuda.is_available() else 'CPU'))
```
## Step 3: *data.yaml* File Visualization
Make sure that our train and valid dataset locations are all right and the number of classes are also correct.
```
%cat /content/data.yaml
```
## Step 4: YOLOv5 Training
## Model Selection
There are 4 pre-trained models that you can choose from to start training your model and they are:
- yolov5s
- yolov5m
- yolov5l
- yolov5x
In this example, **yolov5s** is chosen for the computational speed.
For more details on these models please check out [yolov5 models](https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data#4-select-a-model).
Here, we are able to pass a number of arguments including:
- **img:** Define input image size
- **batch:** Specify batch size
- **epochs:** Define the number of training epochs (note - typically we will train for more than 100 epochs)
- **data:** Set the path to our yaml file
- **weights:** Specify a custom path to weights.
- **name:** Name of training result folder
- **cache:** Cache images for faster training
```
%%time
%cd yolov5/
!python train.py --img 416 --batch 16 --epochs 100 --data '../data.yaml' --weights yolov5s.pt --name yolov5s_results --cache
```
## Step 5: Evaluate the Model Performance
Training losses and performance metrics are saved to Tensorboard. A logfile is also defined above with the **--name** flag when we train. In our case, we named this *yolov5s_results*.
### Tensorboard
```
# Start tensorboard
# Launch after you have finished training
# logs save in the folder "runs"
%load_ext tensorboard
%tensorboard --logdir runs
```
### Manual Plotting
```
Image(filename='/content/yolov5/runs/train/yolov5s_results/results.png', width=1000)
```
### Object Detection Visualization
#### Ground Truth Train Data:
```
Image(filename='/content/yolov5/runs/train/yolov5s_results/test_batch0_labels.jpg', width=900)
```
#### Model Predictions:
```
Image(filename='/content/yolov5/runs/train/yolov5s_results/test_batch0_pred.jpg', width=900)
```
## Step 6: Run Inference with Trained Model
The trained model is used to run inference on the test data.
```
%cd yolov5/
!python detect.py --source '../test/images/*' --weights runs/train/yolov5s_results/weights/best.pt --img 416 --conf 0.4
```
### Inference Visualization
```
import glob
from IPython.display import Image, display
for imageName in glob.glob('/content/yolov5/runs/detect/exp/*.jpg'):
display(Image(filename=imageName))
print("\n")
```
## Step 7: Export Trained Model's Weights
```
from google.colab import files
files.download('/content/yolov5/runs/train/yolov5s_results/weights/best.pt')
```
**ZuckFlix**
*A data analysis using Python to give a streaming proposal.*
---
By: Josue Salvador Cano Martinez.
Facebook Data Challenge 2021. California, United States.
Importing libraries.
Analyzing DB in a general way.
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sb
df = pd.read_csv("/content/drive/MyDrive/netflix_titles.csv", index_col=0)
print(df.shape)
print(df.info())
df.head()
```
Calculate the number of null values in each column.
```
df.isnull().sum()
```
Remove null values from rating and date_added columns.
We do not require Country data because we have previously selected the country: United States.
The cast and director columns will not be used in the analysis, so their null values do not interfere with the results.
```
df = df.dropna(how='any', subset=['date_added', 'rating'])
df.isnull().sum()
```
Add a new column that contains only the years of the dates from the date_added records.
```
df['year_added'] = df['date_added'].apply(lambda x: x.split(" ")[-1])
df.head(3)
```
Change date_added column to date format.
```
df["date_added"] = pd.to_datetime(df["date_added"], dayfirst = True)
df.head(3)
```
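Once `date_added` is a proper datetime column, the year could equivalently come from the `.dt.year` accessor instead of the earlier string split; a small self-contained sketch:

```python
import pandas as pd

toy = pd.DataFrame({'date_added': ['September 25, 2021', 'January 1, 2019']})
toy['date_added'] = pd.to_datetime(toy['date_added'])
toy['year_added'] = toy['date_added'].dt.year
print(toy['year_added'].tolist())  # [2021, 2019]
```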
Dataframe only for the United States records and percentage it represents of the total.
```
unitedStates = df[df['country'] == 'United States']
unitedStatesCount = unitedStates['country'].count() / df.shape[0] * 100
print(unitedStatesCount)
```
Types of content that are in the United States Frame.
```
unitedStates.type.unique()
```
Total number of Movies vs total number of TV Shows
Conclusion: it is possible to notice the predominance of Movies.
```
movie = unitedStates[unitedStates["type"] == "Movie"]
movieCount = movie["type"].count()
tvShow = unitedStates[unitedStates["type"] == "TV Show"]
tvShowCount = tvShow["type"].count()
plt.bar(1, movieCount, label="Movie", color='g')
plt.bar(3, tvShowCount, label="TV Show", color='r')
plt.plot()
plt.xlabel("Type")
plt.ylabel("Quantity")
plt.title("Movie vs TV Show (United States)")
plt.grid(True)
plt.legend()
plt.show()
```
Frequency with which content is added regarding Movies and TV Shows during the last years.
Conclusion: it is possible to notice the predominance of Movies.
```
plt.figure(figsize=(10,8))
sb.countplot(x="year_added", data=unitedStates, palette="cool", order=unitedStates["year_added"].value_counts().sort_index(ascending=True).index[5:13],hue=df['type'])
```
Release years that are mostly repeated in the streaming content.
Conclusion: the most popular streaming content is that which has been released after 2015.
```
releaseYear = unitedStates.release_year.unique()
print(releaseYear)
unitedStates["release_year"].value_counts().nlargest(10).to_frame()
```
The most popular durations of streaming content.
Conclusion: the most predominant movies have a duration that oscillates the 90 minutes.
```
unitedStates['duration'].value_counts().nlargest(5).plot(kind="pie", autopct='%1.1f%%', title="Most popular durations (United States)", ylabel="")
```
Most viewed Genres.
Conclusion: Documentaries, Stand-Up Comedy and Children & Family Movies
```
unitedStates["listed_in"].value_counts().to_frame()
```
Graphic representation of the most viewed genres.
```
unitedStates["listed_in"].value_counts().nlargest(5).plot(kind="bar", grid="True", color='b', ylabel="Quantity", title="Genre count (United States)")
```
Most watched genre trend.
```
firstGenre = unitedStates[unitedStates['listed_in'] == 'Documentaries']
secondGenre = unitedStates[unitedStates['listed_in'] == 'Stand-Up Comedy']
thirdGenre = unitedStates[unitedStates['listed_in'] == 'Children & Family Movies, Comedies']
topGenre = pd.concat([firstGenre, secondGenre, thirdGenre])
plt.figure(figsize=(10,8))
sb.countplot(x="year_added", data = topGenre, palette="cool", order=topGenre['year_added'].value_counts().sort_index(ascending=True).index[1:11],hue=topGenre['listed_in'])
```
The most popular ratings.
Conclusion: TV-MA, TV-14 and R.
```
ratingsVisualization = unitedStates.rating.value_counts().nlargest(5).plot(kind = 'bar', grid = "True", color = "g", ylabel="Quantity", title="Top Rating in United States")
```
## Calculations and plots for blog post
## Ice Lake Xeon Platinum 8352Y results ... with old data ...
### TR 3990x vs 3970x vs 3265W performance and scaling, plus Rocket Lake results
These are typical imports I do for almost any data analysis
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
#import matplotlib.image as mpimg
import matplotlib.gridspec as gridspec
from scipy.optimize import curve_fit
import seaborn as sns
sns.set() # not using seaborn but this makes the plots look better
%matplotlib inline
```
- **Performance data** HPL: GFLOP/s, Numpy norm(A@B): seconds to complete, NAMD: day/ns
- **Scaling data** (job perf in seconds vs number of CPU cores)
## HPL Linpack Performance
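As a rough sanity check on the measured numbers, theoretical double-precision peak is cores × clock × FLOPs per cycle per core. The clock and per-cycle throughput below are illustrative assumptions (sustained all-core clocks vary), not measured values:

```python
def peak_gflops(cores, ghz, dp_flops_per_cycle):
    """Theoretical double-precision peak in GFLOP/s."""
    return cores * ghz * dp_flops_per_cycle

# Assumed example: a Zen 2 core with two 256-bit FMA units does
# 16 DP FLOPs/cycle, so a 64-core part at an assumed 2.9 GHz all-core clock:
print(peak_gflops(64, 2.9, 16))  # ~2970 GFLOP/s nominal
```

Measured HPL results typically reach only a fraction of this nominal figure, which is why the efficiency differences between AVX2 BLIS and AVX512 MKL builds show up so clearly in the chart.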
```
dfhpl = pd.DataFrame({'CPU':[
'NVIDIA (4) A100 cuBLAS',
'NVIDIA (2) A100 cuBLAS + 4 CPU cores/GPU',
'NVIDIA (1) A100 cuBLAS + 4 CPU cores',
'Xeon (2)8352Y 64-core AVX512 oneMKL',
'Xeon (2)6258R 56-core AVX512 oneMKL',
'EPYC (2)7742 120-core-Azure AVX2 BLIS2',
'TR Pro 3995WX 64-core AVX2 BLIS2.2',
'TR 3990x 64-core AVX2 BLIS2',
'TR 3970x 32-Core AVX2 BLIS2',
'Xeon 3265-W 24-core AVX512 MKL',
'TR 3960x (24 core AVX2 BLIS2)',
'Xeon 2295W (18 core, AVX512 oneMKL)',
'Xeon 2175W (14 core, AVX512)',
'i7 9800X (8 core, AVX512 MKL)',
'Xeon 2145W (8 core, AVX512 MKL)',
'Ryzen 3950X (16 core AVX2 BLIS2)',
'TR 2990WX (32 core AVX2 BLIS1.3)',
'i9 11900KF (8-core, AVX512 oneMKL)',
'Ryzen 3900X (12 core AVX2 BLIS2.0)',
'i9 9900K (8 core, AVX2 MKL)',
'Ryzen 5800X (8 core AVX2 BLIS3.0)'
],
'GFLOP/s':[41220,10940,2905,2667, 2483, 1583, 1571,1326,1317,1013,999,838,771,660,637,597,540,539,476,415]})
dfhpl
gs = gridspec.GridSpec(2, 1, height_ratios=[30,1] )
ax1 = plt.subplot(gs[0])
a = "#6be0c0" #"#08cc96" #"#f5b7b7" # "#cccccc" #"#E64C4C"# " "#fd411e"
i = "#7389e6" # "#163AD6" # "#130c64" "#0071c5" "#7389E6"
p = "#3e7aff"
m = "#E6CE4C"
d = "#163AD6"
old = "#163AD6"
new = "#08cc96"
#clrs = [i,a,a,a,a,i,a,i,i,i,i,a,a,d,a,i,a]
clrs = ["#163AD6"]*20
print(len(clrs))
#clrs[14] = new
#clrs[17] = new
clrs[0] = new
clrs[1] = new
ax1.set_title('HPL Linpack Benchmark \n (Higher is better)', fontsize=18)
ax1.figure.set_figwidth(10)
ax1.figure.set_figheight(14)
ax1 = sns.barplot(y="CPU", x="GFLOP/s", data=dfhpl, palette=clrs )
y = dfhpl['GFLOP/s']
for i, v in enumerate(y):
ax1.text(v , i + .125, str(v), color='black', fontweight='bold')
ax2 = plt.subplot(gs[1])
logo = plt.imread('Puget-Systems-2020-logo-color-500.png')
img = ax2.imshow(logo)
ax2.axis('off')
```
## HPL Performance Scaling
```
hpl3990 = np.array([12064,6334,3652,1980,1139,861,724,647,619,601,599])
hpl3970 = np.array([12111,6162,3297,1720,1143,1068,951])
#hpl3265 = np.array([9671,4738,2513,1306,853,750])
baseline = hpl3990[0]
hpl3990 = baseline/hpl3990
hpl3970 = baseline/hpl3970
#hpl3265 = baseline/hpl3265
# don't have the data!
#hpl3265 = np.array([102.1,208.5,393.1,756.1,1157.6,1316.8])
numcores = np.array([1,2,4,8,16,24,32,40,48,56,64])
def amdhal3990(n,P):
return hpl3990[0]/((1-P)+(P/n))
popt3990, pcov = curve_fit(amdhal3990, numcores, hpl3990)
def amdhal3970(n,P):
return hpl3970[0]/((1-P)+(P/n))
popt3970, pcov = curve_fit(amdhal3970, numcores[:7], hpl3970)
plt.rcParams["figure.figsize"] = [12,7]
#plt.figure(figsize=(16,9))
fig, ax = plt.subplots()
ax.plot( numcores, hpl3990, "o", color='g', label='Linpack scaling 3990x') # plot the test data
ax.plot( numcores[:7], hpl3970, "x", color='r', label='Linpack scaling 3970x')
#ax.plot( numcores[:6], hpl3265, "d-", color='b', label='Python numpy norm(AxB) scaling 3265W')
xt = np.linspace(0.5,70,20)
ax.plot(xt, amdhal3990(xt,popt3990) ,color='g', label='Amdahls Eqn with P = %.4f ' %(popt3990[0])) # plot the model function
ax.plot(xt[:11], amdhal3970(xt[:11],popt3970) ,color='r', label='Amdahls Eqn with P = %.4f ' %(popt3970[0]))
#ax.plot(xt[:8], amdhal3265(xt[:8],popt3265) ,color='b', label='Amdahls Eqn with P = %.4f ' %(popt3265[0]))
#ax.plot(xt[:8], amdhal3265mkl(xt[:8],popt3265mkl) ,color='c', label='Amdahls Eqn with P = %.4f ' %(popt3265mkl[0]))
slope = 3.45/4.35  # clock-speed ratio used for the "Clock Adjusted" linear-scaling line
ax.plot(xt, slope*xt, "--", color='k', label='Linear Scaling (Clock Adjusted)')
ax.plot(xt,1*xt, color='k', label='Linear Scaling')
plt.xlabel("Number of Cores")
plt.ylabel("Speed Up")
plt.title("Amdahl's Law, Threadripper 3990x and 3970x (Clock Adjusted) Scaling \n HPL Linpack", fontsize=18)
ax.legend()
```
This is Amdahl's Law, the equation I "fit" the data to.
The curve fit is really easy using scipy! `popt` is the optimized
parameter P, and `pcov` is the covariance, a statistical measure I
don't need here, but it gets a variable since it is part of the
output of `curve_fit`.
#### This mess generates the plots with matplotlib
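As a self-contained sketch of that fitting step: generate synthetic speedup data from Amdahl's equation with an assumed P = 0.98 (not one of the measured values) and check that `curve_fit` recovers it.

```python
import numpy as np
from scipy.optimize import curve_fit

def amdahl(n, P):
    # Amdahl's law: speedup on n cores when a fraction P of the work is parallel
    return 1.0 / ((1 - P) + P / n)

cores = np.array([1, 2, 4, 8, 16, 32, 64], dtype=float)
speedup = amdahl(cores, 0.98)  # synthetic "measurements", assumed P = 0.98

popt, pcov = curve_fit(amdahl, cores, speedup)  # pcov is unused, as in the post
print(popt[0])  # recovers P very close to 0.98
```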
## Numpy OpenBLAS and MKL norm(A@B) 3990x, 3970x, 3265W, EPYC 7v12 Performance
```
dfnorm = pd.DataFrame({'CPU':[
'EPYC (2)7742 120-core-Azure(96) BLIS2',
'Xeon 3265W 24-core numpy MKL',
'TR 3990x 64-core(56) numpy OpenBLAS',
'TR 3970x 32-Core numpy OpenBLAS',
'Xeon 3265W 24-core numpy MKL-DEBUG', #MKL_DEBUG_CPU_TYPE=5
'Xeon 3265W 24-core numpy OpenBLAS'
],
'Seconds':[9.55, 11.0,11.2, 13.5,16.6,20.5 ]})
dfnorm
plt.figure(figsize=(9,5))
clrs = sns.color_palette("Reds_d", 6)
clrs2 = sns.color_palette("Blues_d", 6)
#print(clrs)
clrs[1]=clrs2[1]
clrs[4]=clrs2[4]
clrs[5]=clrs2[5]
#print(clrs)
#clrs[1]=sns.xkcd_rgb["red"]
#clrs[2]=sns.xkcd_rgb["red"]
#clrs[3]=sns.xkcd_rgb["red"]
ax = sns.barplot(y="CPU", x="Seconds", data=dfnorm, palette=clrs)
#ax.set_xlim(100,320)
ax.set_title('Numpy norm(A@B): 3990x, 3970x, 3265W, EPYC 7742 \n (Lower is better)', fontsize=18)
y = dfnorm['Seconds']
for i, v in enumerate(y):
ax.text(v , i + .125, str(v), color='black', fontweight='bold')
```
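The benchmark script itself isn't shown in the post; here is a minimal sketch of how a norm(A@B) timing like those above could be taken (the matrix size is a small stand-in, not the size used for the measured results):

```python
import time
import numpy as np

n = 500  # stand-in size; the actual benchmark problem is much larger
A = np.random.randn(n, n)
B = np.random.randn(n, n)

t0 = time.perf_counter()
result = np.linalg.norm(A @ B)  # matrix product, then Frobenius norm
elapsed = time.perf_counter() - t0
print(f"norm(A@B) = {result:.2f} in {elapsed:.4f} s")
```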
## Numpy OpenBLAS norm(A@B) 3990x vs 3970x vs 3265W Scaling
```
mnormepyc = np.array([439,222,112,57.7,30.2,15.7,11.7,11.6,11.8,9.55,9.90,10.3])
mnorm3990 = np.array([341,171,86,44,23,17,14,12,11.4,11.2,11.5])
mnorm3970 = np.array([335.9,167.8,84.7,43.3,23.0,16.3,13.5])
mnorm3265 = np.array([354.0,163.8,85.5,43.9,25.0,20.5])
mnorm3265mkl = np.array([171.1,75.6,39.5,20.7,13.2,11.0])
mnorm3265mkldbg = np.array([290.8,147.4,76.8,38.8,21.4,16.8])
baseline = mnorm3265[0]
mnormepyc = baseline/mnormepyc
mnorm3990 = baseline/mnorm3990
mnorm3970 = baseline/mnorm3970
mnorm3265 = baseline/mnorm3265
mnorm3265mkl = baseline/mnorm3265mkl
mnorm3265mkldbg = baseline/mnorm3265mkldbg
numcores = np.array([1,2,4,8,16,24,32,40,48,56,64])
numcores2 = np.array([1,2,4,8,16,32,48,64,80,96,112,120])
def amdhal3990(n,P):
return mnorm3990[0]/((1-P)+(P/n))
popt3990, pcov = curve_fit(amdhal3990, numcores, mnorm3990)
def amdhal3970(n,P):
return mnorm3970[0]/((1-P)+(P/n))
popt3970, pcov = curve_fit(amdhal3970, numcores[:7], mnorm3970)
def amdhal3265(n,P):
return mnorm3265[0]/((1-P)+(P/n))
popt3265, pcov = curve_fit(amdhal3265, numcores[:6], mnorm3265)
def amdhal3265mkl(n,P):
return mnorm3265mkl[0]/((1-P)+(P/n))
popt3265mkl, pcov = curve_fit(amdhal3265mkl, numcores[:6], mnorm3265mkl)
def amdhal3265mkldbg(n,P):
return mnorm3265mkldbg[0]/((1-P)+(P/n))
popt3265mkldbg, pcov = curve_fit(amdhal3265mkldbg, numcores[:6], mnorm3265mkldbg)
popt3990
plt.rcParams["figure.figsize"] = [12,7]
#plt.figure(figsize=(16,9))
fig, ax = plt.subplots()
ax.plot( numcores2, mnormepyc, "+-", color='k', label='Python numpy norm(AxB) scaling EPYC 7742')
ax.plot( numcores, mnorm3990, "o-", color='g', label='Python numpy norm(AxB) scaling 3990x')
ax.plot( numcores[:7], mnorm3970, "x-", color='r', label='Python numpy norm(AxB) scaling 3970x')
ax.plot( numcores[:6], mnorm3265, "d-", color='b', label='Python numpy norm(AxB) scaling 3265W')
ax.plot( numcores[:6], mnorm3265mkl, "D-", color='c', label='Python numpy-MKL norm(AxB) scaling 3265W')
ax.plot( numcores[:6], mnorm3265mkldbg, "P-", color='k', label='Python numpy-MKL-DEBUG norm(AxB) scaling 3265W')
#xt = np.linspace(0.5,70,20)
#ax.plot(xt, amdhal3990(xt,popt3990) ,color='g', label='Amdhals Eqn with P = %.4f ' %(popt3990[0])) # plot the model function
#ax.plot(xt[:11], amdhal3970(xt[:11],popt3970) ,color='r', label='Amdhals Eqn with P = %.4f ' %(popt3970[0]))
#ax.plot(xt[:8], amdhal3265(xt[:8],popt3265) ,color='b', label='Amdhals Eqn with P = %.4f ' %(popt3265[0]))
#ax.plot(xt[:8], amdhal3265mkl(xt[:8],popt3265mkl) ,color='c', label='Amdhals Eqn with P = %.4f ' %(popt3265mkl[0]))
#ax.plot(xt,hpl[0]*xt, color='k', label='Linear Scaling')
plt.xlabel("Number of Cores")
plt.ylabel("Speed Up")
plt.title("Numpy norm(A@B): 3990x, 3970x, 3265W, EPYC 7742 Scaling \n Python numpy norm(AxB) Relative Speedup", fontsize=18)
ax.legend()
```
## HPCG
```
hpcg3265=[1.65,3.13,5.90,10.8,14.3,14.8]
# 1 2 4 8 16 24
hpcg3990=[2.79,4.68,7.96,9.88,10.2,9.94,9.80,9.65,9.54,9.41,9.30]
# 1 2 4 8 16 24 32 40 48 56 64
hpcg3970=[2.68,4.56,8.06,9.93,9.80,9.59,9.38]
# 1 2 4 8 16 24 32
hpcgepyc=[2.14,3.98,7.87,13.1,21.2,28.4,31.5,33.1,34.0,31.7,36.6]
numcores2=[ 1, 2, 4, 8, 16, 32, 48, 64, 80, 96, 120]
dfhpcg = pd.DataFrame({'CPU':[
'Xeon (2)8352Y 64-core oneMKL',
'EPYC (2)7742 120-core(120)',
'Xeon (2)6258R 56-core oneMKL',
'TR Pro 3995WX 64-core(16)',
'Xeon 3265W 24-core(24)',
'Xeon 2295W (18-core, oneMKL)',
'TR 3990x 64-core(16)',
'TR 3970x 32-Core(8)',
'i9 11900KF 8-Core(6)',
'Ryzen 5800X 8-Core(4)'
],
'GFLOPS':[45.6, 36.6,34.6,19.8,14.8,13.6,10.2,9.93,8.69,6.39]})
dfhpcg
gs = gridspec.GridSpec(2, 1, height_ratios=[14,1])
plt.subplots_adjust(bottom=-0.1)
ax1 = plt.subplot(gs[0])
a = "#6be0c0" #"#08cc96" #"#f5b7b7" # "#cccccc" #"#E64C4C"# " "#fd411e"
i = "#7389e6" # "#163AD6" # "#130c64" "#0071c5" "#7389E6"
p = "#3e7aff"
m = "#E6CE4C"
d = "#163AD6"
#a = "#08cc96"#"#fd411e"
#i = "#130c64"#"#0071c5"
#p = "#3e7aff"
#clrs = (a,d,m,i,d,a,a)
old = "#163AD6"
new = "#08cc96"
#clrs = [i,a,a,a,a,i,a,i,i,i,i,a,a,d,a,i,a]
clrs = ["#163AD6"]*10
#print(len(clrs))
clrs[8] = new
clrs[9] = new
clrs[0] = new
ax1.set_title('HPCG Benchmark \n (Higher is better)', fontsize=18)
ax1.figure.set_figwidth(10)
ax1.figure.set_figheight(6)
ax1 = sns.barplot(y="CPU", x="GFLOPS", data=dfhpcg, palette=clrs )
y = dfhpcg['GFLOPS']
for i, v in enumerate(y):
ax1.text(v , i + .125, str(v), color='black', fontweight='bold')
ax2 = plt.subplot(gs[1])
logo = plt.imread('Puget-Systems-2020-logo-color-500.png')
ax2.imshow(logo)
ax2.axis('off')
plt.rcParams["figure.figsize"] = [12,7]
#plt.figure(figsize=(16,9))
fig, ax = plt.subplots()
ax.plot( numcores2, hpcgepyc, "+-", color='k', label='HPCG scaling EPYC 7742')
ax.plot( numcores, hpcg3990, "o-", color='g', label='HPCG scaling 3990x')
ax.plot( numcores[:7], hpcg3970, "x-", color='r', label='HPCG scaling 3970x')
ax.plot( numcores[:6], hpcg3265, "d-", color='b', label='HPCG scaling 3265W')
#xt = np.linspace(0.5,70,20)
#ax.plot(xt, amdhal3990(xt,popt3990) ,color='g', label='Amdhals Eqn with P = %.4f ' %(popt3990[0])) # plot the model function
#ax.plot(xt[:11], amdhal3970(xt[:11],popt3970) ,color='r', label='Amdhals Eqn with P = %.4f ' %(popt3970[0]))
#ax.plot(xt[:8], amdhal3265(xt[:8],popt3265) ,color='b', label='Amdhals Eqn with P = %.4f ' %(popt3265[0]))
#ax.plot(xt[:8], amdhal3265mkl(xt[:8],popt3265mkl) ,color='c', label='Amdhals Eqn with P = %.4f ' %(popt3265mkl[0]))
#ax.plot(xt,hpl[0]*xt, color='k', label='Linear Scaling')
plt.xlabel("Number of Cores")
plt.ylabel("GFLOP/s")
plt.title("HPCG TR3990x 3970x Xeon 3265W EPYC 7742 Scaling \n HPCG", fontsize=18)
ax.legend()
```
## NAMD ApoA1 3990x vs 3970x Performance
```
dfapoa1 = pd.DataFrame({'CPU':[
#'TR Pro 3995WX 64-core + (2)NVIDIA A6000',
#'TR 3990x 64-core + (2)NVIDIA RTX Titan',
#'TR 3970x 32-Core + (2)NVIDIA RTX 2080Ti',
'EPYC (2)7742 120-core(120)',
'Xeon (2)8352Y 64-core No-HT',
'TR Pro 3995WX 64-core + 64-SMT',
'Xeon (2)6258R 56-core + 56-HT',
'TR 3990x 64-core + 64-SMT',
'TR 3970x 32-Core + 32-SMT',
'Xeon 3265W 24-core + 24-HT',
'Xeon 3265W 24-core(24) No-HT',
'Xeon 2295W 18-core + 18-HT',
'i9 11900KF 8-core + 8-HT',
'Ryzen 5800X 8-core + 8-SMT'
],
'day/ns':[0.101,0.110248,0.130697,0.1315,0.1325,0.1874,0.270,0.319,0.355,0.419,0.610]})
dfapoa1
gs = gridspec.GridSpec(2, 1, height_ratios=[18,1])
ax1 = plt.subplot(gs[0])
a = "#6be0c0" #"#08cc96" #"#f5b7b7" # "#cccccc" #"#E64C4C"# " "#fd411e"
i = "#7389e6" # "#163AD6" # "#130c64" "#0071c5" "#7389E6"
p = "#3e7aff"
m = "#E6CE4C"
d = "#163AD6"
#a = "#08cc96"#"#fd411e"
#i = "#130c64"#"#0071c5"
#p = "#3e7aff"
#clrs = (m,a,a,a,m,d,a,a,i,i,d)
old = "#163AD6"
new = "#08cc96"
#clrs = [i,a,a,a,a,i,a,i,i,i,i,a,a,d,a,i,a]
clrs = ["#163AD6"]*11
clrs[9] = new
clrs[10] = new
clrs[1] = new
ax1.set_title('NAMD ApoA1 (day/ns)\n (Lower is better)', fontsize=18)
ax1.figure.set_figwidth(10)
ax1.figure.set_figheight(9)
ax1 = sns.barplot(y="CPU", x="day/ns", data=dfapoa1, palette=clrs )
y = dfapoa1['day/ns']
for i, v in enumerate(y):
ax1.text(v , i + .125, str(v), color='black', fontweight='bold')
ax2 = plt.subplot(gs[1])
logo = plt.imread('Puget-Systems-2020-logo-color-500.png')
img = ax2.imshow(logo)
ax2.axis('off')
plt.figure(figsize=(9,6))
clrs = sns.color_palette("Reds_d", 7)
#print(clrs)
clrs[0]=sns.xkcd_rgb["green"]
clrs[1]=sns.xkcd_rgb["green"]
clrs[5]=sns.xkcd_rgb["blue"]
clrs[6]=sns.xkcd_rgb["blue"]
ax = sns.barplot(y="CPU", x="day/ns", data=dfapoa1, palette=clrs)
#ax.set_xlim(100,320)
ax.set_title('NAMD ApoA1: 3990x, 3970x, 3265W, EPYC 7742 \n (Lower is better)', fontsize=18)
y = dfapoa1['day/ns']
for i, v in enumerate(y):
ax.text(v , i + .125, str(v), color='black', fontweight='bold')
```
## NAMD ApoA1 3990x vs 3970x Scaling
```
apoa1 = np.array([267,136,70,37,20,14,11.3,9.7,8.2,7.7,7.5])
apoa1 = apoa1[0]/apoa1
numcores = np.array([1,2,4,8,16,24,32,40,48,56,64])
apoa1
def amdhal(n,P):
return apoa1[0]/((1-P)+(P/n))
popt, pcov = curve_fit(amdhal, numcores, apoa1)
popt
# data for 3970x 32 core
apoa132 = np.array([261.0,132.6,68.9,36.0,19.1,13.3,10.8])
apoa132 = apoa132[0]/apoa132
print(apoa132)
def amdhal32(n,P):
return apoa132[0]/((1-P)+(P/n))
popt32, pcov32 = curve_fit(amdhal32, numcores[:7], apoa132)
popt32
plt.rcParams["figure.figsize"] = [12,7]
#plt.figure(figsize=(16,9))
fig, ax = plt.subplots()
ax.plot( numcores, apoa1, "o", color='g', label='NAMD ApoA1: "Wall Time" 3990x') # plot the test data
ax.plot( numcores[:7], apoa132, "x", color='r', label='NAMD ApoA1: "Wall Time" 3970x')
xt = np.linspace(0.5,70,20)
ax.plot(xt, amdhal(xt,popt) , label='Amdahls Eqn with P = %.4f ' %(popt[0])) # plot the model function
ax.plot(xt[:11], amdhal32(xt[:11],popt32) , label='Amdahls Eqn with P = %.4f ' %(popt32[0]))
ax.plot(xt,xt, color='k', label='Linear Scaling')
slope=3.45/4.35
ax.plot(xt,slope*xt, "--", color='k', label='Linear Scaling (Clock Adjusted)')
plt.xlabel("Number of Cores")
plt.ylabel("Speed Up")
plt.title("Amdahl's Law, Threadripper 3990x(64core) and 3970x(32core) Scaling \n NAMD ApoA1", fontsize=18)
ax.legend()
1/(1-popt)
```
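The `1/(1-popt)` expression at the end of the cell above is Amdahl's asymptotic limit: with parallel fraction P, speedup can never exceed 1/(1-P) no matter how many cores you add. A quick check with an assumed P = 0.95:

```python
P = 0.95             # assumed parallel fraction, for illustration only
limit = 1 / (1 - P)  # asymptotic speedup as core count goes to infinity
print(limit)         # approximately 20
```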
## NAMD STMV 3990x vs 3970x Performance
```
dfstmv = pd.DataFrame({'CPU':[
#'TR Pro 3995WX 64-core + (2)NVIDIA A6000',
#'Xeon 3265W 24-core + (4)NVIDIA RTX 2080Ti',
#'TR 3990x 64-core + (2)NVIDIA RTX Titan',
#'TR 3970x 32-Core + (2)NVIDIA RTX 2080Ti',
'EPYC (2)7742 120-core(120)',
'Xeon (2)8352Y 64-core No-HT',
'TR Pro 3995WX 64-core + 64-SMT',
'Xeon (2)6258R 56-core + 56-HT',
'TR 3990x 64-core + 64-SMT',
'TR 3970x 32-Core + 32-SMT',
'Xeon 3265W 24-core + 24-HT',
'Xeon 3265W 24-core(24) No-HT',
'Xeon 2295W 18-core + 18-HT',
'i9 11900KF 8-core + 8-HT',
'Ryzen 5800X 8-core + 8-SMT'
],
'day/ns':[1.016,1.248,1.4012,1.427,1.601,2.124, 3.13, 3.702,4.608,4.925, 6.60]})
dfstmv
gs = gridspec.GridSpec(2, 1, height_ratios=[18,1])
ax1 = plt.subplot(gs[0])
a = "#6be0c0" #"#08cc96" #"#f5b7b7" # "#cccccc" #"#E64C4C"# " "#fd411e"
i = "#7389e6" # "#163AD6" # "#130c64" "#0071c5" "#7389E6"
p = "#3e7aff"
m = "#E6CE4C"
d = "#163AD6"
#a = "#08cc96"#"#fd411e"
#i = "#130c64"#"#0071c5"
#p = "#3e7aff"
#clrs = (m,i,a,a,a,m,d,a,a,i,i,d)
old = "#163AD6"
new = "#08cc96"
#clrs = [i,a,a,a,a,i,a,i,i,i,i,a,a,d,a,i,a]
clrs = ["#163AD6"]*11
clrs[9] = new
clrs[10] = new
clrs[1] = new
ax1.set_title('NAMD STMV (day/ns)\n (Lower is better)', fontsize=18)
ax1.figure.set_figwidth(10)
ax1.figure.set_figheight(9)
ax1 = sns.barplot(y="CPU", x="day/ns", data=dfstmv, palette=clrs )
y = dfstmv['day/ns']
for i, v in enumerate(y):
ax1.text(v , i + .125, str(v), color='black', fontweight='bold')
ax2 = plt.subplot(gs[1])
logo = plt.imread('Puget-Systems-2020-logo-color-500.png')
img = ax2.imshow(logo)
ax2.axis('off')
plt.figure(figsize=(9,6))
clrs = sns.color_palette("Reds_d", 8)
#print(clrs)
clrs[0]=sns.xkcd_rgb["blue"]
clrs[1]=sns.xkcd_rgb["green"]
clrs[2]=sns.xkcd_rgb["green"]
clrs[6]=sns.xkcd_rgb["blue"]
clrs[7]=sns.xkcd_rgb["blue"]
ax = sns.barplot(y="CPU", x="day/ns", data=dfstmv, palette=clrs)
#ax.set_xlim(100,320)
ax.set_title('NAMD STMV: 3990x, 3970x, 3265W, EPYC 7742 \n (Lower is better)', fontsize=18)
y = dfstmv['day/ns']
for i, v in enumerate(y):
ax.text(v , i + .125, str(v), color='black', fontweight='bold')
```
## NAMD STMV 3990x vs 3970x Scaling
```
stmv = np.array([2934,1478,763,398,212,148,120,103,92,85,79])
stmv = stmv[0]/stmv
numcores = np.array([1,2,4,8,16,24,32,40,48,56,64])
stmv
def amdhal(n,P):
return stmv[0]/((1-P)+(P/n))
popt, pcov = curve_fit(amdhal, numcores, stmv)
popt
# data for 3970x 32 core
stmv32 = np.array([2846,1440,744,387.5,204.6,144.5,114.2])
stmv32 = stmv32[0]/stmv32
print(stmv32)
def amdhal32(n,P):
return stmv32[0]/((1-P)+(P/n))
popt32, pcov32 = curve_fit(amdhal32, numcores[:7], stmv32)
popt32
plt.rcParams["figure.figsize"] = [12,7]
#plt.figure(figsize=(16,9))
fig, ax = plt.subplots()
ax.plot( numcores, stmv, "o", color='g', label='NAMD STMV: "Wall Time" 3990x') # plot the test data
ax.plot( numcores[:7], stmv32, "x", color='r', label='NAMD STMV: "Wall Time" 3970x')
xt = np.linspace(0.5,70,20)
ax.plot(xt, amdhal(xt,popt) , label='Amdahls Eqn with P = %.4f ' %(popt[0])) # plot the model function
ax.plot(xt[:11], amdhal32(xt[:11],popt32) , label='Amdahls Eqn with P = %.4f ' %(popt32[0]))
ax.plot(xt, xt, color='k', label='Linear Scaling')
plt.xlabel("Number of Cores")
plt.ylabel("Speed Up")
plt.title("Amdahl's Law, Threadripper 3990x(64core) and 3970x(32core) Scaling \n NAMD STMV", fontsize=18)
ax.legend()
```