# NumPy Array Creation and Transformation
## NumPy Data Types
Every element of a NumPy `ndarray` must share the same data type. NumPy's data types are also far more fine-grained than the types built into plain Python.
A data type is specified through the `dtype` argument. The value passed to `dtype` is a string that starts with one of the type-code prefixes shown in the table below, optionally followed by a number giving the size in bits or bytes.
| dtype prefix | Description | Example |
|-|-|-|
| `t` | bit field | `t4` (4 bits) |
| `b` | boolean | `b` (True or False) |
| `i` | integer | `i8` (64 bits) |
| `u` | unsigned integer | `u8` (64 bits) |
| `f` | floating point | `f8` (64 bits) |
| `c` | complex floating point | `c16` (128 bits) |
| `O` | object | `O` (pointer to an object) |
| `S`, `a` | byte string | `S24` (24 characters) |
| `U` | Unicode string | `U24` (24 Unicode characters) |
| `V` | other | `V12` (12-byte data block) |
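As a quick sketch of how the table reads in practice, the prefix strings can be passed directly to array-creation functions (the variable names here are just illustrative):

```python
import numpy as np

# dtype strings: a type-code prefix plus a byte count
a = np.array([1, 2, 3], dtype="i8")     # 64-bit integers
b = np.array([1, 2, 3], dtype="f8")     # 64-bit floats
c = np.zeros(3, dtype="c16")            # 128-bit complex numbers
d = np.array(["ab", "cd"], dtype="U2")  # 2-character Unicode strings

print(a.dtype, b.dtype, c.dtype, d.dtype)  # int64 float64 complex128 <U2
```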
You can check an `ndarray` object's data type through its `dtype` attribute.
```
import numpy as np

x = np.array([1, 2, 3])
x.dtype
```
When using floating-point numbers, you can also use `np.inf` to represent infinity and `np.nan` to represent an undefined number (Not a Number).
```
np.exp(-np.inf)                      # 0.0
np.array([1, 0]) / np.array([0, 0])  # array([inf, nan]) with a RuntimeWarning
```
## Array Creation
```
x = np.array([1, 2, 3])
x
```
Above we created an `ndarray` by converting a Python list with the `array` command. Usually, however, `ndarray` objects are created directly, without such a base object, using commands like the following:
* `zeros`, `ones`
* `zeros_like`, `ones_like`
* `empty`
* `arange`
* `linspace`, `logspace`
* `rand`, `randn`
To create an array of a given size with every value equal to 0, use the `zeros` command. If the `dtype` argument is omitted, the elements default to floating point (`float64`).
```
a = np.zeros(5)
a
```
If the `dtype` argument is given explicitly, the array is created with elements of that type.
```
b = np.zeros((5,2), dtype="f8")
b
```
String arrays are also possible; all elements share the same maximum string length. If you assign a longer string, it is truncated.
```
c = np.zeros(5, dtype="S4")
c[0] = "abcd"
c[1] = "ABCDE"  # truncated to "ABCD": the trailing "E" is cut off
c
```
To create an array initialized with 1 instead of 0, use the `ones` command.
```
d = np.ones((2,3,4), dtype="i8")
d
```
If you want an array with the same shape as an existing array or list, without spelling the shape out as a tuple, use the `ones_like` or `zeros_like` commands.
```
e = range(10)
print(e)
f = np.ones_like(e, dtype="f")
f
```
As arrays grow, initializing them takes noticeable time. To save that time, the `empty` command allocates an array without initializing it. There is no way to know in advance what values an array created with `empty` will contain.
```
g = np.empty((4,3))
g
```
The `arange` command can be seen as the NumPy version of Python's built-in `range`: it generates the numbers in the given range as an array.
```
np.arange(10) # 0 .. n-1
np.arange(3, 21, 2) # start, end (exclusive), step
```
The `linspace` and `logspace` commands divide a linear or log-scale interval into the specified number of points.
```
np.linspace(0, 100, 5) # start, end, num-points
np.linspace(0, 100, 4)
np.logspace(0, 4, 4, endpoint=False)
```
To generate random numbers, use the `rand` or `randn` commands in the `random` subpackage. `rand` samples from a uniform distribution, while `randn` samples from a standard normal (Gaussian) distribution. To fix the random seed, use the `seed` command.
```
np.random.seed(0)
np.random.rand(4)     # uniform samples on [0, 1)
np.random.randn(3,5)  # "n" is for normal: standard normal samples (mean 0, std 1, unbounded)
```
## Reshaping Arrays
To change the shape of an array that has already been created, while preserving its underlying data, use the `reshape` command or method. For example, a 1-D array with 12 elements can be turned into a 3x4 2-D array.
```
a = np.arange(12)
a
b = a.reshape(3, 4)
b
```
Because the total number of elements is fixed, one entry of the shape tuple given to `reshape` can be replaced by -1. When -1 is given, that dimension is computed automatically from the number of remaining elements.
```
a.reshape(2,2,-1)
a.reshape(2,-1,2)
```
To flatten a multi-dimensional array into one dimension regardless of its shape, use the `flatten` command or method.
```
a.flatten()
```
A 1-D array of length 5 and a 2-D array with 5 rows and 1 column, i.e. shape (5, 1), hold the same data but are, of course, different objects.
```
x = np.arange(5)
x
y = x.reshape(5,1)
y
```
To increase the number of dimensions by one like this while keeping the same data, you can also use `np.newaxis` indexing.
```
z = x[:, np.newaxis]
z
```
## Concatenating Arrays
To join (concatenate) two or more arrays with matching row or column counts into one larger array, use the following commands:
* `hstack`
* `vstack`
* `dstack`
* `stack`
* `r_`
* `tile`
`hstack` joins two or more arrays with the same number of rows horizontally, producing an array with more columns. The arrays to be joined must be passed inside a single list.
```
a1 = np.ones((2, 3))
a1
a2 = np.zeros((2, 2))
a2
np.hstack([a1, a2])
```
`vstack` stacks two or more arrays with the same number of columns vertically, producing an array with more rows. The arrays must likewise be passed inside a single list.
```
b1 = np.ones((2, 3))
b1
b2 = np.zeros((3, 3))
b2
np.vstack([b1, b2])
```
`dstack` joins arrays along the third axis, that is, in the depth direction rather than along rows or columns.
```
c1 = np.ones((2, 3))
c1
c2 = np.zeros((2, 3))  # define c2 before it is used below
c2
c3 = np.arange(6)
c4 = c3.reshape(2, 3)
c4
c5 = np.dstack([c4, c2])
c5
np.dstack([c1, c2])
```
`stack` joins arrays along a new dimension (axis); naturally, the arrays being joined must all have exactly the same shape.
The `axis` argument (default 0) selects where the new axis is inserted.
```
np.stack([c4, c2])
np.stack([c4, c2], axis=1)
```
`r_` is similar to `hstack`. However, even though it looks like a method, it is used with square brackets (`[]`), as in indexing, rather than parentheses (`()`).
```
np.r_[np.array([1,2,3]), 0, 0, np.array([4,5,6])]
```
`tile` repeats the same array to build a larger, tiled array.
```
a = np.array([0, 1, 2])
np.tile(a, 2)
np.tile(a, (3, 2))
```
### Grid Generation
To plot the graph of a function of two variables, or to build a table of its values, you need to generate many coordinate points at once and evaluate the function at each of them.
For example, suppose a function takes two variables x and y, and you want to see how it varies over the rectangular region where x runs from 0 to 2 and y runs from 0 to 4. You would have to evaluate the function at the following (x, y) points inside that region:
$$ (x,y) = (0,0), (0,1), (0,2), (0,3), (0,4), (1,0), \cdots, (2,4) $$
NumPy's `meshgrid` command automates this process. It takes two vectors as arguments, the points along the horizontal axis and the points along the vertical axis of the rectangular region, and outputs every combination covering the region. The output is split into two matrices: one holding only the x values of each (x, y) pair and one holding only the y values.
```
import matplotlib.pyplot as plt

x = np.arange(3)
x
y = np.arange(5)
y
X, Y = np.meshgrid(x, y)
X
Y
[list(zip(x, y)) for x, y in zip(X, Y)]  # list of lists: the (x, y) pair at every 2-D grid point
plt.scatter(X, Y, linewidths=10);
```
```
# IMPORT ALL LIBRARIES
# pandas for data handling
import pandas as pd
# PostgreSQL access
from sqlalchemy import create_engine
import psycopg2
# charting
from matplotlib import pyplot as plt
from matplotlib import style
# base path handling
import os
import io
# PDF generation
from fpdf import FPDF
# encoding charts to base64
import base64
# Excel generation
import xlsxwriter
# Upload data from a CSV file into PostgreSQL
def uploadToPSQL(columns, table, filePath, engine):
    # read the CSV
    df = pd.read_csv(
        os.path.abspath(filePath),
        names=columns,
        keep_default_na=False
    )
    # fill any empty fields here (note: fillna returns a new frame)
    df = df.fillna('')
    # drop the columns that are not used
    del df['kategori']
    del df['jenis']
    del df['pengiriman']
    del df['satuan']
    # move the data from the CSV into PostgreSQL
    df.to_sql(
        table,
        engine,
        if_exists='replace'
    )
    # return True if the uploaded data is non-empty, False otherwise
    if len(df) == 0:
        return False
    else:
        return True
# Build the charts from data pulled out of the database, ordered by date with a LIMIT,
# then call the makeExcel and makePDF functions
def makeChart(host, username, password, db, port, table, judul, columns, filePath, name, subjudul, limit, negara, basePath):
    # test the database connection
    try:
        # connect to the database
        connection = psycopg2.connect(user=username, password=password, host=host, port=port, database=db)
        cursor = connection.cursor()
        # fetch data from the table defined below, ordered by date;
        # the LIMIT keeps the query from pulling too much data
        postgreSQL_select_Query = "SELECT * FROM "+table+" ORDER BY tanggal ASC LIMIT " + str(limit)
        cursor.execute(postgreSQL_select_Query)
        mobile_records = cursor.fetchall()
        uid = []
        lengthx = []
        lengthy = []
        # loop over the fetched rows and unpack them into the variables above
        for row in mobile_records:
            uid.append(row[0])
            lengthx.append(row[1])
            if row[2] == "":
                lengthy.append(float(0))
            else:
                lengthy.append(float(row[2]))
        # build the charts
        # bar
        style.use('ggplot')
        fig, ax = plt.subplots()
        # x: the row ids from the database, y: the totals
        ax.bar(uid, lengthy, align='center')
        # chart title
        ax.set_title(judul)
        ax.set_ylabel('Total')
        ax.set_xlabel('Tanggal')
        ax.set_xticks(uid)
        # label the ticks with the dates fetched from the database
        ax.set_xticklabels((lengthx))
        b = io.BytesIO()
        # save the chart as PNG
        plt.savefig(b, format='png', bbox_inches="tight")
        # convert the PNG chart to base64
        barChart = base64.b64encode(b.getvalue()).decode("utf-8").replace("\n", "")
        # display the chart
        plt.show()
        # line
        # data from the database
        plt.plot(lengthx, lengthy)
        plt.xlabel('Tanggal')
        plt.ylabel('Total')
        # chart title
        plt.title(judul)
        plt.grid(True)
        l = io.BytesIO()
        # save the chart as PNG
        plt.savefig(l, format='png', bbox_inches="tight")
        # convert the PNG chart to base64
        lineChart = base64.b64encode(l.getvalue()).decode("utf-8").replace("\n", "")
        # display the chart
        plt.show()
        # pie
        # chart title
        plt.title(judul)
        # data from the database
        plt.pie(lengthy, labels=lengthx, autopct='%1.1f%%',
                shadow=True, startangle=180)
        plt.axis('equal')
        p = io.BytesIO()
        # save the chart as PNG
        plt.savefig(p, format='png', bbox_inches="tight")
        # convert the PNG chart to base64
        pieChart = base64.b64encode(p.getvalue()).decode("utf-8").replace("\n", "")
        # display the chart
        plt.show()
        # read the CSV again; its first rows are used as the table header for the Excel and PDF output
        header = pd.read_csv(
            os.path.abspath(filePath),
            names=columns,
            keep_default_na=False
        )
        # drop the columns that are not used (note: fillna returns a new frame)
        header = header.fillna('')
        del header['tanggal']
        del header['total']
        # call the Excel builder
        makeExcel(mobile_records, header, name, limit, basePath)
        # call the PDF builder
        makePDF(mobile_records, header, judul, barChart, lineChart, pieChart, name, subjudul, limit, basePath)
    # if the database connection fails, print the error here
    except (Exception, psycopg2.Error) as error:
        print(error)
    # close the connection
    finally:
        if(connection):
            cursor.close()
            connection.close()
# makeExcel turns the database rows into an Excel sheet (table F2)
# using the xlsxwriter package
def makeExcel(datarow, dataheader, name, limit, basePath):
    # create the Excel file
    workbook = xlsxwriter.Workbook(basePath+'jupyter/BLOOMBERG/SektorRiil/excel/'+name+'.xlsx')
    # add a worksheet to the file
    worksheet = workbook.add_worksheet('sheet1')
    # cell formats: borders everywhere, bold for the header row
    row1 = workbook.add_format({'border': 2, 'bold': 1})
    row2 = workbook.add_format({'border': 2})
    # turn the data into lists
    data = list(datarow)
    isihead = list(dataheader.values)
    header = []
    body = []
    # loop over the rows and collect the header and body cells
    for rowhead in dataheader:
        header.append(str(rowhead))
    for rowhead2 in datarow:
        header.append(str(rowhead2[1]))
    for rowbody in isihead[1]:
        body.append(str(rowbody))
    for rowbody2 in data:
        body.append(str(rowbody2[2]))
    # write the collected cells into the worksheet
    for col_num, cell in enumerate(header):
        worksheet.write(0, col_num, cell, row1)
    for col_num, cell in enumerate(body):
        worksheet.write(1, col_num, cell, row2)
    # close the Excel file
    workbook.close()
# makePDF turns the database rows into a PDF report (table F2)
# using the fpdf package
def makePDF(datarow, dataheader, judul, bar, line, pie, name, subjudul, lengthPDF, basePath):
    # page setup: A4-size paper in landscape orientation, millimetres
    pdf = FPDF('L', 'mm', [210, 297])
    # add a page
    pdf.add_page()
    # font size and padding for the title
    pdf.set_font('helvetica', 'B', 20.0)
    pdf.set_xy(145.0, 15.0)
    # write the title
    pdf.cell(ln=0, h=2.0, align='C', w=10.0, txt=judul, border=0)
    # font size and padding for the subtitle
    pdf.set_font('arial', '', 14.0)
    pdf.set_xy(145.0, 25.0)
    # write the subtitle
    pdf.cell(ln=0, h=2.0, align='C', w=10.0, txt=subjudul, border=0)
    # rule under the subtitle
    pdf.line(10.0, 30.0, 287.0, 30.0)
    pdf.set_font('times', '', 10.0)
    pdf.set_xy(17.0, 37.0)
    # font size and padding for the metadata table
    pdf.set_font('Times', '', 10.0)
    # header data prepared by the caller
    datahead = list(dataheader.values)
    pdf.set_font('Times', 'B', 12.0)
    pdf.ln(0.5)
    th1 = pdf.font_size
    # metadata table built from the values passed in
    pdf.cell(100, 2*th1, "Kategori", border=1, align='C')
    pdf.cell(177, 2*th1, datahead[0][0], border=1, align='C')
    pdf.ln(2*th1)
    pdf.cell(100, 2*th1, "Jenis", border=1, align='C')
    pdf.cell(177, 2*th1, datahead[0][1], border=1, align='C')
    pdf.ln(2*th1)
    pdf.cell(100, 2*th1, "Pengiriman", border=1, align='C')
    pdf.cell(177, 2*th1, datahead[0][2], border=1, align='C')
    pdf.ln(2*th1)
    pdf.cell(100, 2*th1, "Satuan", border=1, align='C')
    pdf.cell(177, 2*th1, datahead[0][3], border=1, align='C')
    pdf.ln(2*th1)
    # padding
    pdf.set_xy(17.0, 75.0)
    # font size and padding for the data table
    pdf.set_font('Times', 'B', 11.0)
    data = list(datarow)
    epw = pdf.w - 2*pdf.l_margin
    col_width = epw/(lengthPDF+1)
    # padding
    pdf.ln(0.5)
    th = pdf.font_size
    # header row of the data table
    pdf.cell(50, 2*th, str("Negara"), border=1, align='C')
    for row in data:
        pdf.cell(40, 2*th, str(row[1]), border=1, align='C')
    pdf.ln(2*th)
    # body row of the data table (uses the global `negara` defined at the bottom of the notebook)
    pdf.set_font('Times', 'B', 10.0)
    pdf.set_font('Arial', '', 9)
    pdf.cell(50, 2*th, negara, border=1, align='C')
    for row in data:
        pdf.cell(40, 2*th, str(row[2]), border=1, align='C')
    pdf.ln(2*th)
    # decode the base64 charts back into PNG files in the image directory
    # bar chart
    bardata = base64.b64decode(bar)
    barname = basePath+'jupyter/BLOOMBERG/SektorRiil/img/'+name+'-bar.png'
    with open(barname, 'wb') as f:
        f.write(bardata)
    # line chart
    linedata = base64.b64decode(line)
    linename = basePath+'jupyter/BLOOMBERG/SektorRiil/img/'+name+'-line.png'
    with open(linename, 'wb') as f:
        f.write(linedata)
    # pie chart
    piedata = base64.b64decode(pie)
    piename = basePath+'jupyter/BLOOMBERG/SektorRiil/img/'+name+'-pie.png'
    with open(piename, 'wb') as f:
        f.write(piedata)
    # font size and padding for the images
    pdf.set_xy(17.0, 75.0)
    col = pdf.w - 2*pdf.l_margin
    widthcol = col/3
    # place the chart images saved above
    pdf.image(barname, link='', type='', x=8, y=100, w=widthcol)
    pdf.set_xy(17.0, 75.0)
    pdf.image(linename, link='', type='', x=103, y=100, w=widthcol)
    pdf.set_xy(17.0, 75.0)
    pdf.image(piename, link='', type='', x=195, y=100, w=widthcol)
    pdf.ln(2*th)
    # write the PDF file
    pdf.output(basePath+'jupyter/BLOOMBERG/SektorRiil/pdf/'+name+'.pdf', 'F')
# Define all the variables here before they are passed to the functions.
# uploadToPSQL is called first; if it succeeds, makeChart is called,
# and makeChart in turn calls makeExcel and makePDF.
# columns, matching the fields of the CSV
columns = [
    "kategori",
    "jenis",
    "tanggal",
    "total",
    "pengiriman",
    "satuan",
]
# output file name
name = "SektorRiil4_2"
# database connection settings
host = "localhost"
username = "postgres"
password = "1234567890"
port = "5432"
database = "bloomberg_SektorRiil"
table = name.lower()
# title and subtitle for the PDF and Excel output
judul = "Data Sektor Riil"
subjudul = "Badan Perencanaan Pembangunan Nasional"
# LIMIT for the database SELECT
limitdata = int(8)
# country name shown in the Excel and PDF output
negara = "Indonesia"
# base directory
basePath = 'C:/Users/ASUS/Documents/bappenas/'
# CSV file
filePath = basePath + 'data mentah/BLOOMBERG/SektorRiil/' + name + '.csv'
# connect to the database
engine = create_engine('postgresql://'+username+':'+password+'@'+host+':'+port+'/'+database)
# upload the CSV to PostgreSQL
checkUpload = uploadToPSQL(columns, table, filePath, engine)
# if the upload succeeded, build the charts; otherwise report the error
if checkUpload == True:
    makeChart(host, username, password, database, port, table, judul, columns, filePath, name, subjudul, limitdata, negara, basePath)
else:
    print("Error When Upload CSV")
```
# segmentation with SVM
```
import os
import numpy as np
import imageio
import matplotlib.pyplot as plt
# import mahotas as mt
Image_dir = '/home/zhangj41/HW/group_proj/Immune-Cells_2D/190718_Tcells/Th0'
Mask_dir = os.path.join(Image_dir,'Masks')
Image_name = 'Tcells_Th0_1f_photons.tiff'
I = imageio.imread(os.path.join(Image_dir,Image_name)).astype("uint8")
if len(I.shape) > 2:
    I = I[:,:,0]
print(I.dtype)
print(I.shape)
print(I.min())
print(I.mean())
print(I.std())
# print images
plt.figure(figsize=(10,10))
plt.imshow(I, cmap='gray', vmin=0, vmax=255)
hist, bins = np.histogram(I, bins=256, range=[0,256])
cum_hist = np.cumsum(hist)
height, width = I.shape
norm_cum_hist = cum_hist / (height * width)
norm_hist = hist / hist.max()
#width = (bins[1] - bins[0])
center = (bins[:-1] + bins[1:]) / 2
plt.bar(center, norm_hist, align='center')
plt.plot(norm_cum_hist, color='r')
plt.show()
hists_cdf = (norm_cum_hist * 255).astype("uint8")
# mapping
img_eq = hists_cdf[I]
# img_eq[img_eq<100] = 0
plt.figure(figsize=(10,10))
plt.imshow(img_eq, cmap='gray')
hist, bins = np.histogram(img_eq, bins=256, range=[0,256])
cum_hist = np.cumsum(hist)
height, width = img_eq.shape
norm_cum_hist = cum_hist / (height * width)
norm_hist = hist / hist.max()
#width = (bins[1] - bins[0])
center = (bins[:-1] + bins[1:]) / 2
plt.bar(center, norm_hist, align='center')
plt.plot(norm_cum_hist, color='r')
plt.show()
M_cell = 'Tcells_Th0_1n_photons_cells.tiff'
Mask_cell = imageio.imread(os.path.join(Mask_dir,M_cell))
plt.imshow(Mask_cell, cmap='gray', vmin=0, vmax=255)
print(Mask_cell.shape)
Mask_cell[Mask_cell>=1] = 1
plt.imshow(Mask_cell, cmap='gray', vmin=0, vmax=1)
print(Mask_cell.max(), Mask_cell.shape)
x = img_eq.reshape(-1,1)
x.shape
```
## cell segmentation
```
# these helpers rely on a few extra libraries used further below
from numpy.lib import stride_tricks
from skimage import feature
from sklearn.model_selection import train_test_split
from sklearn import metrics
import cv2
import progressbar

def img_preprocess(img_dir):
    image = imageio.imread(img_dir).astype("uint8")
    hist, bins = np.histogram(image, bins=256, range=[0,256])
    cum_hist = np.cumsum(hist)
    height, width = image.shape
    norm_cum_hist = cum_hist / (height * width)
    hists_cdf = (norm_cum_hist * 255).astype("uint8")
    # mapping
    img_eq = hists_cdf[image]
    plt.imshow(img_eq, cmap='gray', vmin=0, vmax=255)
    return img_eq
    # vec_img_eq = img_eq.reshape(-1,1)
    # return vec_img_eq

def label_preprocess(mask_dir):
    mask = imageio.imread(mask_dir).astype('uint8')
    mask[mask>=1] = 1
    plt.imshow(mask, cmap='gray', vmin=0, vmax=1)
    return mask
    # vec_mask = mask.reshape(-1,1)
    # return vec_mask
img_dir = '/home/zhangj41/HW/group_proj/Immune-Cells_2D/190718_Tcells/Th0/Tcells_Th0_1n_photons.tiff'
lab_dir = '/home/zhangj41/HW/group_proj/Immune-Cells_2D/190718_Tcells/Th0/Masks/Tcells_Th0_1n_photons_cells.tiff'
img_pre = img_preprocess(img_dir)
mask_pre = label_preprocess(lab_dir)
def harlick_features(img, h_neigh, ss_idx):
    # note: calc_haralick (a mahotas-based helper) is defined elsewhere in the project
    print ('[INFO] Computing haralick features.')
    size = h_neigh
    shape = (img.shape[0] - size + 1, img.shape[1] - size + 1, size, size)
    strides = 2 * img.strides
    patches = stride_tricks.as_strided(img, shape=shape, strides=strides)
    patches = patches.reshape(-1, size, size)
    if len(ss_idx) == 0:
        bar = progressbar.ProgressBar(maxval=len(patches), \
            widgets=[progressbar.Bar('=', '[', ']'), ' ', progressbar.Percentage()])
    else:
        bar = progressbar.ProgressBar(maxval=len(ss_idx), \
            widgets=[progressbar.Bar('=', '[', ']'), ' ', progressbar.Percentage()])
    bar.start()
    h_features = []
    if len(ss_idx) == 0:
        for i, p in enumerate(patches):
            bar.update(i+1)
            h_features.append(calc_haralick(p))
    else:
        for i, p in enumerate(patches[ss_idx]):
            bar.update(i+1)
            h_features.append(calc_haralick(p))
    # h_features = [calc_haralick(p) for p in patches[ss_idx]]
    return np.array(h_features)
def create_binary_pattern(img, p, r):
    print ('[INFO] Computing local binary pattern features.')
    lbp = feature.local_binary_pattern(img, p, r)
    return (lbp-np.min(lbp))/(np.max(lbp)-np.min(lbp)) * 255
def create_features(img, img_gray, label, train=True):
    lbp_radius = 24      # local binary pattern neighbourhood
    h_neigh = 11         # haralick neighbourhood
    num_examples = 1000  # number of examples per image to use for training model
    lbp_points = lbp_radius*8
    h_ind = int((h_neigh - 1)/ 2)
    feature_img = np.zeros((img.shape[0], img.shape[1], 4))
    feature_img[:,:,:3] = img
    img = None
    feature_img[:,:,3] = create_binary_pattern(img_gray, lbp_points, lbp_radius)
    feature_img = feature_img[h_ind:-h_ind, h_ind:-h_ind]
    features = feature_img.reshape(feature_img.shape[0]*feature_img.shape[1], feature_img.shape[2])
    if train == True:
        # subsample_idx is a helper defined elsewhere in the project
        ss_idx = subsample_idx(0, features.shape[0], num_examples)
        features = features[ss_idx]
    else:
        ss_idx = []
    h_features = harlick_features(img_gray, h_neigh, ss_idx)
    features = np.hstack((features, h_features))
    if train == True:
        label = label[h_ind:-h_ind, h_ind:-h_ind]
        labels = label.reshape(label.shape[0]*label.shape[1], 1)
        labels = labels[ss_idx]
    else:
        labels = None
    return features, labels
def create_training_dataset(image_list, label_list):
    print ('[INFO] Creating training dataset on %d image(s).' %len(image_list))
    X = []
    y = []
    for i, img_dir in enumerate(image_list):
        img = cv2.imread(img_dir)  # read the image before converting to grayscale
        img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        features, labels = create_features(img, img_gray, label_list[i])
        X.append(features)
        y.append(labels)
    X = np.array(X)
    X = X.reshape(X.shape[0]*X.shape[1], X.shape[2])
    y = np.array(y)
    y = y.reshape(y.shape[0]*y.shape[1], y.shape[2]).ravel()
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
    print ('[INFO] Feature vector size:', X_train.shape)
    return X_train, X_test, y_train, y_test
def train_model(X, y, classifier):
    if classifier == "SVM":
        from sklearn.svm import SVC
        print ('[INFO] Training Support Vector Machine model.')
        model = SVC()
        model.fit(X, y)
    elif classifier == "RF":
        from sklearn.ensemble import RandomForestClassifier
        print ('[INFO] Training Random Forest model.')
        model = RandomForestClassifier(n_estimators=250, max_depth=12, random_state=42)
        model.fit(X, y)
    elif classifier == "GBC":
        from sklearn.ensemble import GradientBoostingClassifier
        print ('[INFO] Training Gradient Boosting model.')
        model = GradientBoostingClassifier(n_estimators=100, learning_rate=1.0, max_depth=1, random_state=0)
        model.fit(X, y)
    return model  # without this, the caller would get None
def test_model(X, y, model):
    pred = model.predict(X)
    precision = metrics.precision_score(y, pred, average='weighted', labels=np.unique(pred))
    recall = metrics.recall_score(y, pred, average='weighted', labels=np.unique(pred))
    f1 = metrics.f1_score(y, pred, average='weighted', labels=np.unique(pred))
    accuracy = metrics.accuracy_score(y, pred)
    print ('--------------------------------')
    print ('[RESULTS] Accuracy: %.2f' %accuracy)
    print ('[RESULTS] Precision: %.2f' %precision)
    print ('[RESULTS] Recall: %.2f' %recall)
    print ('[RESULTS] F1: %.2f' %f1)
    print ('--------------------------------')
all_dir = '/home/zhangj41/HW/group_proj/Immune-Cells_2D/190718_Tcells'
image_list, label_list = read_data(all_dir)  # read_data is defined elsewhere in the project
X_train, X_test, y_train, y_test = create_training_dataset(image_list, label_list)
model = train_model(X_train, y_train, classifier="SVM")
test_model(X_test, y_test, model)
image_list = []
label_list = []
all_dir = '/home/zhangj41/HW/group_proj/Immune-Cells_2D/190718_Tcells'
all_folder = os.listdir(all_dir)
for folder in all_folder:
    # folder = all_folder[3]
    current_folder = os.path.join(all_dir, folder)
    all_images = os.listdir(current_folder)
    for file_name in all_images:
        # file_name = all_images[3]
        file_name_front, file_name_end = os.path.splitext(file_name)
        if file_name_end != '':  # "is not ''" tested identity, not equality
            fn = file_name_front.split('_')[2][1]
            if file_name_end == '.tiff' and fn == 'n':
                image_dir = os.path.join(current_folder, file_name)
                image_list.append(image_dir)
                mask_dir = os.path.join(current_folder, 'Masks', file_name_front+'cells.tiff')
                label_list.append(mask_dir)
image_list
label_list
```
## cyto and nuclei segmentation
```
hist, bins = np.histogram(Mask_cell, bins=256, range=[0,256])
cum_hist = np.cumsum(hist)
height, width = Mask_cell.shape
norm_cum_hist = cum_hist / (height * width)
norm_hist = hist / hist.max()
#width = (bins[1] - bins[0])
center = (bins[:-1] + bins[1:]) / 2
plt.bar(center, norm_hist, align='center')
plt.plot(norm_cum_hist, color='r')
plt.show()
M_cyto = 'Tcells_Th0_1n_photons_cyto.tiff'
Mask_cyto = imageio.imread(os.path.join(Mask_dir,M_cyto))
# plt.imshow(Mask_cyto, cmap='gray', vmin=0, vmax=255)
# print(Mask_cyto.shape)
Mask_cyto[Mask_cyto>=1] = 1
plt.imshow(Mask_cyto, cmap='gray', vmin=0, vmax=1)
M_nuclei = 'Tcells_Th0_1n_photons_nuclei.tiff'
Mask_nuclei = imageio.imread(os.path.join(Mask_dir,M_nuclei))
# plt.imshow(Mask_nuclei, cmap='gray', vmin=0, vmax=255)
# print(Mask_nuclei.shape)
Mask_nuclei[Mask_nuclei>=1] = 1
plt.imshow(Mask_nuclei, cmap='gray', vmin=0, vmax=1)
image_list, label_list = read_data(image_dir, lab_dir)  # read_data is defined elsewhere in the project
X_train, X_test, y_train, y_test = create_training_dataset(image_list, label_list)
model = train_model(X_train, y_train, classifier="SVM")
test_model(X_test, y_test, model)
```
<a href="https://colab.research.google.com/github/mrdbourke/pytorch-deep-learning/blob/main/extras/solutions/03_pytorch_computer_vision_exercise_solutions.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# 03. PyTorch Computer Vision Exercise Solutions
The following is one possible set (there may be more than one way to do things) of solutions for the 03. PyTorch Computer Vision exercise template.
## Resources
1. These exercises/solutions are based on [notebook 03 of the Learn PyTorch for Deep Learning course](https://www.learnpytorch.io/03_pytorch_computer_vision/).
2. See a live [walkthrough of the solutions (errors and all) on YouTube](https://youtu.be/_PibmqpEyhA).
* **Note:** Going through these exercises took me just over 3 hours, so you should expect around the same.
3. See [other solutions on the course GitHub](https://github.com/mrdbourke/pytorch-deep-learning/tree/main/extras/solutions).
```
# Check for GPU
!nvidia-smi
# Import torch
import torch
# Exercises require PyTorch > 1.10.0
print(torch.__version__)
# Setup device agnostic code
device = "cuda" if torch.cuda.is_available() else "cpu"
device
```
## 1. What are 3 areas in industry where computer vision is currently being used?
1. Self-driving cars, such as Tesla using computer vision to perceive what's happening on the road. See Tesla AI day for more - https://youtu.be/j0z4FweCy4M
2. Healthcare imaging, such as using computer vision to help interpret X-rays. Google also uses computer vision for detecting polyps in the intestines - https://ai.googleblog.com/2021/08/improved-detection-of-elusive-polyps.html
3. Security, computer vision can be used to detect whether someone is invading your home or not - https://store.google.com/au/product/nest_cam_battery?hl=en-GB
## 2. Search "what is overfitting in machine learning" and write down a sentence about what you find.
Overfitting is like memorizing for a test but then you can't answer a question that's slightly different.
In other words, if a model is overfitting, it's learning the training data *too well* and these patterns don't generalize to unseen data.
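A tiny illustration of this (a hedged sketch, not part of the course notebook): fitting polynomials of increasing degree to a handful of noisy points with NumPy. The high-degree fit drives the training error toward zero by memorizing the noise.

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, 10)  # noisy samples
x_test = np.linspace(0.05, 0.95, 10)
y_test = np.sin(2 * np.pi * x_test)  # the clean underlying function

def mse(deg):
    # fit a degree-`deg` polynomial on the training points, measure error on both sets
    p = np.poly1d(np.polyfit(x_train, y_train, deg))
    return np.mean((p(x_train) - y_train) ** 2), np.mean((p(x_test) - y_test) ** 2)

train3, test3 = mse(3)  # reasonable capacity
train9, test9 = mse(9)  # enough capacity to interpolate all 10 points exactly

print(f"degree 3: train={train3:.4f} test={test3:.4f}")
print(f"degree 9: train={train9:.4f} test={test9:.4f}")
```

The degree-9 fit has (near-)zero training error, which is exactly the "memorizing for a test" behaviour described above.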
## 3. Search "ways to prevent overfitting in machine learning", write down 3 of the things you find and a sentence about each.
> **Note:** there are lots of these, so don't worry too much about all of them, just pick 3 and start with those.
See this article for some ideas: https://elitedatascience.com/overfitting-in-machine-learning
3 ways to prevent overfitting:
1. **Regularization techniques** - You could use [dropout on your neural networks](https://en.wikipedia.org/wiki/Dilution_(neural_networks)), dropout involves randomly removing neurons in different layers so that the remaining neurons hopefully learn more robust weights/patterns.
2. **Use a different model** - maybe the model you're using for a specific problem is too complicated, as in, it's learning the data too well because it has so many layers. You could remove some layers to simplify your model. Or you could pick a totally different model altogether, one that may be more suited to your particular problem. Or... you could also use [transfer learning](https://en.wikipedia.org/wiki/Transfer_learning) (taking the patterns from one model and applying them to your own problem).
3. **Reduce noise in data/cleanup dataset/introduce data augmentation techniques** - If the model is learning the data too well, it might be just memorizing the data, including the noise. One option would be to remove the noise/clean up the dataset or if this doesn't, you can introduce artificial noise through the use of data augmentation to artificially increase the diversity of your training dataset.
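As a rough sketch of the first idea, inverted dropout can be written in a few lines of NumPy (this is an illustration only, not PyTorch's `nn.Dropout`; the function and parameter names here are made up):

```python
import numpy as np

def dropout(activations, p_drop, rng, training=True):
    """Inverted dropout: zero each unit with probability p_drop and
    rescale the survivors so the expected activation is unchanged."""
    if not training or p_drop == 0.0:
        return activations  # at inference time, dropout is a no-op
    mask = rng.random(activations.shape) >= p_drop
    return activations * mask / (1.0 - p_drop)

rng = np.random.default_rng(42)
h = np.ones((4, 8))  # a fake layer of activations
out = dropout(h, p_drop=0.5, rng=rng)
print(out)  # roughly half the entries are 0, the survivors are rescaled to 2.0
```

Because surviving units are rescaled by `1 / (1 - p_drop)` during training, no rescaling is needed at inference time.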
## 4. Spend 20-minutes reading and clicking through the [CNN Explainer website](https://poloclub.github.io/cnn-explainer/).
* Upload your own example image using the "upload" button on the website and see what happens in each layer of a CNN as your image passes through it.
The CNN explainer website is a great insight into all of the nuts and bolts of a convolutional neural network.
## 5. Load the [`torchvision.datasets.MNIST()`](https://pytorch.org/vision/stable/generated/torchvision.datasets.MNIST.html#torchvision.datasets.MNIST) train and test datasets.
```
import torchvision
from torchvision import datasets
from torchvision import transforms
# Get the MNIST train dataset
train_data = datasets.MNIST(root=".",
train=True,
download=True,
transform=transforms.ToTensor()) # do we want to transform the data as we download it?
# Get the MNIST test dataset
test_data = datasets.MNIST(root=".",
train=False,
download=True,
transform=transforms.ToTensor())
train_data, test_data
len(train_data), len(test_data)
# Data is in tuple form (image, label)
img = train_data[0][0]
label = train_data[0][1]
print(f"Image:\n {img}")
print(f"Label:\n {label}")
# Check out the shapes of our data
print(f"Image shape: {img.shape} -> [color_channels, height, width] (CHW)")
print(f"Label: {label} -> no shape, due to being integer")
```
Note: There are two main agreed upon ways for representing images in machine learning:
1. Color channels first: [color_channels, height, width] (CHW) -> PyTorch default (as of April 2022)
2. Color channels last: [height, width, color_channels] (HWC) -> Matplotlib/TensorFlow default (as of April 2022)
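Converting between the two layouts is just a transpose; a small NumPy sketch:

```python
import numpy as np

img_chw = np.zeros((3, 28, 28))       # [color_channels, height, width] (CHW)
img_hwc = img_chw.transpose(1, 2, 0)  # -> [height, width, color_channels] (HWC)
back = img_hwc.transpose(2, 0, 1)     # -> back to CHW

print(img_chw.shape, img_hwc.shape, back.shape)  # (3, 28, 28) (28, 28, 3) (3, 28, 28)
```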
```
# Get the class names from the dataset
class_names = train_data.classes
class_names
```
## 6. Visualize at least 5 different samples of the MNIST training dataset.
```
import matplotlib.pyplot as plt
for i in range(5):
    img = train_data[i][0]
    print(img.shape)
    img_squeeze = img.squeeze()
    print(img_squeeze.shape)
    label = train_data[i][1]
    plt.figure(figsize=(3, 3))
    plt.imshow(img_squeeze, cmap="gray")
    plt.title(label)
    plt.axis(False);
```
## 7. Turn the MNIST train and test datasets into dataloaders using `torch.utils.data.DataLoader`, set the `batch_size=32`.
```
# Create train dataloader
from torch.utils.data import DataLoader
train_dataloader = DataLoader(dataset=train_data,
batch_size=32,
shuffle=True)
test_dataloader = DataLoader(dataset=test_data,
batch_size=32,
shuffle=False)
train_dataloader, test_dataloader
for sample in next(iter(train_dataloader)):
    print(sample.shape)
len(train_dataloader), len(test_dataloader)
```
## 8. Recreate `model_2` used in notebook 03 (the same model from the [CNN Explainer website](https://poloclub.github.io/cnn-explainer/), also known as TinyVGG) capable of fitting on the MNIST dataset.
```
from torch import nn
class MNIST_model(torch.nn.Module):
"""Model capable of predicting on MNIST dataset.
"""
def __init__(self, input_shape: int, hidden_units: int, output_shape: int):
super().__init__()
self.conv_block_1 = nn.Sequential(
nn.Conv2d(in_channels=input_shape,
out_channels=hidden_units,
kernel_size=3,
stride=1,
padding=1),
nn.ReLU(),
nn.Conv2d(in_channels=hidden_units,
out_channels=hidden_units,
kernel_size=3,
stride=1,
padding=1),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2)
)
self.conv_block_2 = nn.Sequential(
nn.Conv2d(in_channels=hidden_units,
out_channels=hidden_units,
kernel_size=3,
stride=1,
padding=1),
nn.ReLU(),
nn.Conv2d(in_channels=hidden_units,
out_channels=hidden_units,
kernel_size=3,
stride=1,
padding=1),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2)
)
self.classifier = nn.Sequential(
nn.Flatten(),
nn.Linear(in_features=hidden_units*7*7,
out_features=output_shape)
)
def forward(self, x):
x = self.conv_block_1(x)
# print(f"Output shape of conv block 1: {x.shape}")
x = self.conv_block_2(x)
# print(f"Output shape of conv block 2: {x.shape}")
x = self.classifier(x)
# print(f"Output shape of classifier: {x.shape}")
return x
device
model = MNIST_model(input_shape=1,
hidden_units=10,
output_shape=10).to(device)
model
# Check out the model state dict to find out what patterns our model wants to learn
# model.state_dict()
# Try a dummy forward pass to see what shapes our data is
dummy_x = torch.rand(size=(1, 28, 28)).unsqueeze(dim=0).to(device)
# dummy_x.shape
model(dummy_x)
dummy_x_2 = torch.rand(size=(1, 10, 7, 7))
dummy_x_2.shape
flatten_layer = nn.Flatten()
flatten_layer(dummy_x_2).shape
```
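The `in_features=hidden_units*7*7` in the classifier comes from the two `nn.MaxPool2d(kernel_size=2)` layers: the 3x3 convolutions with `padding=1` preserve spatial size, while each pool halves it, taking 28x28 MNIST images down to 7x7. A quick sanity check:

```python
# The 3x3 convs with padding=1 keep the spatial size; each MaxPool2d(2) halves it
height = width = 28            # MNIST image size
for _ in range(2):             # two conv blocks, each ending in MaxPool2d(2)
    height //= 2
    width //= 2

hidden_units = 10
in_features = hidden_units * height * width
print(height, width, in_features)  # 7 7 490
```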
## 9. Train the model you built in exercise 8. for 5 epochs on CPU and GPU and see how long it takes on each.
```
%%time
from tqdm.auto import tqdm
# Train on CPU
model_cpu = MNIST_model(input_shape=1,
hidden_units=10,
output_shape=10).to("cpu")
# Create a loss function and optimizer
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model_cpu.parameters(), lr=0.1)
### Training loop
epochs = 5
for epoch in tqdm(range(epochs)):
train_loss = 0
for batch, (X, y) in enumerate(train_dataloader):
model_cpu.train()
# Put data on CPU
X, y = X.to("cpu"), y.to("cpu")
# Forward pass
y_pred = model_cpu(X)
# Loss calculation
loss = loss_fn(y_pred, y)
train_loss += loss
# Optimizer zero grad
optimizer.zero_grad()
# Loss backward
loss.backward()
# Step the optimizer
optimizer.step()
# Adjust train loss for number of batches
train_loss /= len(train_dataloader)
### Testing loop
test_loss_total = 0
# Put model in eval mode
model_cpu.eval()
# Turn on inference mode
with torch.inference_mode():
for batch, (X_test, y_test) in enumerate(test_dataloader):
# Make sure test data on CPU
X_test, y_test = X_test.to("cpu"), y_test.to("cpu")
test_pred = model_cpu(X_test)
test_loss = loss_fn(test_pred, y_test)
test_loss_total += test_loss
test_loss_total /= len(test_dataloader)
# Print out what's happening
print(f"Epoch: {epoch} | Loss: {train_loss:.3f} | Test loss: {test_loss_total:.3f}")
%%time
from tqdm.auto import tqdm
device = "cuda" if torch.cuda.is_available() else "cpu"
# Train on GPU
model_gpu = MNIST_model(input_shape=1,
hidden_units=10,
output_shape=10).to(device)
# Create a loss function and optimizer
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model_gpu.parameters(), lr=0.1)
# Training loop
epochs = 5
for epoch in tqdm(range(epochs)):
train_loss = 0
model_gpu.train()
for batch, (X, y) in enumerate(train_dataloader):
# Put data on target device
X, y = X.to(device), y.to(device)
# Forward pass
y_pred = model_gpu(X)
# Loss calculation
loss = loss_fn(y_pred, y)
train_loss += loss
# Optimizer zero grad
optimizer.zero_grad()
# Loss backward
loss.backward()
# Step the optimizer
optimizer.step()
# Adjust train loss to number of batches
train_loss /= len(train_dataloader)
### Testing loop
test_loss_total = 0
# Put model in eval mode and turn on inference mode
model_gpu.eval()
with torch.inference_mode():
for batch, (X_test, y_test) in enumerate(test_dataloader):
# Make sure test data on target device
X_test, y_test = X_test.to(device), y_test.to(device)
test_pred = model_gpu(X_test)
test_loss = loss_fn(test_pred, y_test)
test_loss_total += test_loss
# Adjust test loss total for number of batches
test_loss_total /= len(test_dataloader)
# Print out what's happening
print(f"Epoch: {epoch} | Loss: {train_loss:.3f} | Test loss: {test_loss_total:.3f}")
```
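`%%time` measures a whole notebook cell; to compare CPU and GPU runs programmatically, one option (a sketch, not part of the original notebook) is a small helper around `timeit.default_timer`:

```python
from timeit import default_timer as timer

def time_fn(fn):
    """Run fn once and return (result, elapsed seconds)."""
    start = timer()
    result = fn()
    elapsed = timer() - start
    return result, elapsed

# Cheap stand-in for a training loop; swap in the real loop to compare devices
result, elapsed = time_fn(lambda: sum(range(1_000_000)))
print(f"Took {elapsed:.4f} s")
```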
## 10. Make predictions using your trained model and visualize at least 5 of them comparing the prediction to the target label.
```
# Make predictions with the trained model
plt.imshow(test_data[0][0].squeeze(), cmap="gray")
# Logits -> Prediction probabilities -> Prediction labels
model_pred_logits = model_gpu(test_data[0][0].unsqueeze(dim=0).to(device)) # make sure image is right shape + on right device
model_pred_probs = torch.softmax(model_pred_logits, dim=1)
model_pred_label = torch.argmax(model_pred_probs, dim=1)
model_pred_label
num_to_plot = 5
for i in range(num_to_plot):
# Get image and labels from the test data
img = test_data[i][0]
label = test_data[i][1]
# Make prediction on image
model_pred_logits = model_gpu(img.unsqueeze(dim=0).to(device))
model_pred_probs = torch.softmax(model_pred_logits, dim=1)
model_pred_label = torch.argmax(model_pred_probs, dim=1)
# Plot the image and prediction
plt.figure()
plt.imshow(img.squeeze(), cmap="gray")
plt.title(f"Truth: {label} | Pred: {model_pred_label.cpu().item()}")
plt.axis(False);
```
## 11. Plot a confusion matrix comparing your model's predictions to the truth labels.
```
# See if torchmetrics exists, if not, install it
try:
import torchmetrics, mlxtend
print(f"mlxtend version: {mlxtend.__version__}")
assert int(mlxtend.__version__.split(".")[1]) >= 19, "mlxtend version should be 0.19.0 or higher"
except:
!pip install -q torchmetrics -U mlxtend # <- Note: If you're using Google Colab, this may require restarting the runtime
import torchmetrics, mlxtend
print(f"mlxtend version: {mlxtend.__version__}")
# Import mlxtend upgraded version
import mlxtend
print(mlxtend.__version__)
assert int(mlxtend.__version__.split(".")[1]) >= 19 # should be version 0.19.0 or higher
# Make predictions across all test data
from tqdm.auto import tqdm
model_gpu.eval()
y_preds = []
with torch.inference_mode():
for batch, (X, y) in tqdm(enumerate(test_dataloader)):
# Make sure data on right device
X, y = X.to(device), y.to(device)
# Forward pass
y_pred_logits = model_gpu(X)
# Logits -> Pred probs -> Pred label
y_pred_labels = torch.argmax(torch.softmax(y_pred_logits, dim=1), dim=1)
# Append the labels to the preds list
y_preds.append(y_pred_labels)
y_preds=torch.cat(y_preds).cpu()
len(y_preds)
test_data.targets[:10], y_preds[:10]
from torchmetrics import ConfusionMatrix
from mlxtend.plotting import plot_confusion_matrix
# Setup confusion matrix
confmat = ConfusionMatrix(num_classes=len(class_names))
confmat_tensor = confmat(preds=y_preds,
target=test_data.targets)
# Plot the confusion matrix
fix, ax = plot_confusion_matrix(
conf_mat=confmat_tensor.numpy(),
class_names=class_names,
figsize=(10, 7)
)
```
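If `torchmetrics`/`mlxtend` are unavailable, the confusion matrix itself is just a table of `(true, pred)` counts; a minimal NumPy sketch (the function name is illustrative):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, num_classes):
    """Count (true, pred) pairs into a num_classes x num_classes table."""
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

y_true = [0, 1, 2, 2, 1]
y_pred = [0, 2, 2, 2, 1]
print(confusion_matrix(y_true, y_pred, num_classes=3))
```

Rows index the true labels and columns the predictions, matching the plot above.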
## 12. Create a random tensor of shape `[1, 3, 64, 64]` and pass it through a `nn.Conv2d()` layer with various hyperparameter settings (these can be any settings you choose), what do you notice if the `kernel_size` parameter goes up and down?
```
random_tensor = torch.rand([1, 3, 64, 64])
random_tensor.shape
conv_layer = nn.Conv2d(in_channels=3,
out_channels=64,
kernel_size=3,
stride=2,
padding=1)
print(f"Random tensor original shape: {random_tensor.shape}")
random_tensor_through_conv_layer = conv_layer(random_tensor)
print(f"Random tensor through conv layer shape: {random_tensor_through_conv_layer.shape}")
```
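The effect of `kernel_size` follows from the Conv2d output-size formula, `out = floor((in + 2*padding - kernel_size) / stride) + 1`: with stride and padding fixed, a larger kernel shrinks the output. A quick sketch:

```python
def conv2d_out_size(in_size, kernel_size, stride=1, padding=0):
    """Spatial output size of nn.Conv2d along one dimension."""
    return (in_size + 2 * padding - kernel_size) // stride + 1

# Same settings as above (in=64, stride=2, padding=1), varying only kernel_size
for k in [1, 3, 5, 7]:
    print(k, conv2d_out_size(64, k, stride=2, padding=1))
```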
## 13. Use a model similar to the trained `model_2` from notebook 03 to make predictions on the test [`torchvision.datasets.FashionMNIST`](https://pytorch.org/vision/main/generated/torchvision.datasets.FashionMNIST.html) dataset.
* Then plot some predictions where the model was wrong alongside what the label of the image should've been.
* After visualizing these predictions, do you think it's more of a modelling error or a data error?
* As in, could the model do better or are the labels of the data too close to each other (e.g. a "Shirt" label is too close to "T-shirt/top")?
```
# Download FashionMNIST train & test
from torchvision import datasets
from torchvision import transforms
fashion_mnist_train = datasets.FashionMNIST(root=".",
download=True,
train=True,
transform=transforms.ToTensor())
fashion_mnist_test = datasets.FashionMNIST(root=".",
train=False,
download=True,
transform=transforms.ToTensor())
len(fashion_mnist_train), len(fashion_mnist_test)
# Get the class names of the Fashion MNIST dataset
fashion_mnist_class_names = fashion_mnist_train.classes
fashion_mnist_class_names
# Turn FashionMNIST datasets into dataloaders
from torch.utils.data import DataLoader
fashion_mnist_train_dataloader = DataLoader(fashion_mnist_train,
batch_size=32,
shuffle=True)
fashion_mnist_test_dataloader = DataLoader(fashion_mnist_test,
batch_size=32,
shuffle=False)
len(fashion_mnist_train_dataloader), len(fashion_mnist_test_dataloader)
# model_2 is the same architecture as MNIST_model
model_2 = MNIST_model(input_shape=1,
hidden_units=10,
output_shape=10).to(device)
model_2
# Setup loss and optimizer
from torch import nn
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model_2.parameters(), lr=0.01)
# Setup metrics
from tqdm.auto import tqdm
from torchmetrics import Accuracy
acc_fn = Accuracy(num_classes=len(fashion_mnist_class_names)).to(device)
# Setup training/testing loop
epochs = 5
for epoch in tqdm(range(epochs)):
train_loss, test_loss_total = 0, 0
train_acc, test_acc = 0, 0
### Training
model_2.train()
for batch, (X_train, y_train) in enumerate(fashion_mnist_train_dataloader):
X_train, y_train = X_train.to(device), y_train.to(device)
# Forward pass and loss
y_pred = model_2(X_train)
loss = loss_fn(y_pred, y_train)
train_loss += loss
train_acc += acc_fn(y_pred, y_train)
# Backprop and gradient descent
optimizer.zero_grad()
loss.backward()
optimizer.step()
# Adjust the loss/acc (find the loss/acc per epoch)
train_loss /= len(fashion_mnist_train_dataloader)
train_acc /= len(fashion_mnist_train_dataloader)
### Testing
model_2.eval()
with torch.inference_mode():
for batch, (X_test, y_test) in enumerate(fashion_mnist_test_dataloader):
X_test, y_test = X_test.to(device), y_test.to(device)
# Forward pass and loss
y_pred_test = model_2(X_test)
test_loss = loss_fn(y_pred_test, y_test)
test_loss_total += test_loss
test_acc += acc_fn(y_pred_test, y_test)
# Adjust the loss/acc (find the loss/acc per epoch)
test_loss_total /= len(fashion_mnist_test_dataloader)
test_acc /= len(fashion_mnist_test_dataloader)
# Print out what's happening
print(f"Epoch: {epoch} | Train loss: {train_loss:.3f} | Train acc: {train_acc:.2f} | Test loss: {test_loss_total:.3f} | Test acc: {test_acc:.2f}")
# Make predictions with trained model_2
test_preds = []
model_2.eval()
with torch.inference_mode():
for X_test, y_test in tqdm(fashion_mnist_test_dataloader):
y_logits = model_2(X_test.to(device))
y_pred_probs = torch.softmax(y_logits, dim=1)
y_pred_labels = torch.argmax(y_pred_probs, dim=1)
test_preds.append(y_pred_labels)
test_preds = torch.cat(test_preds).cpu() # matplotlib likes CPU
test_preds[:10], len(test_preds)
# Get wrong prediction indexes
import numpy as np
wrong_pred_indexes = np.where(test_preds != fashion_mnist_test.targets)[0]
len(wrong_pred_indexes)
# Select 9 random wrong predictions and plot them
import random
random_selection = random.sample(list(wrong_pred_indexes), k=9)
plt.figure(figsize=(10, 10))
for i, idx in enumerate(random_selection):
# Get true and pred labels
true_label = fashion_mnist_class_names[fashion_mnist_test[idx][1]]
pred_label = fashion_mnist_class_names[test_preds[idx]]
# Plot the wrong prediction with its original label
plt.subplot(3, 3, i+1)
plt.imshow(fashion_mnist_test[idx][0].squeeze(), cmap="gray")
plt.title(f"True: {true_label} | Pred: {pred_label}", c="r")
plt.axis(False);
```
From the look of some of these predictions, the model is getting about as confused as I would...
For example, it predicts "Sneaker" instead of "Sandal" on images that could easily pass for a "Sneaker".
The same goes for the confusion between the classes of "T-shirt/top" and "Shirt", many of the examples here look similar.
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import torch
import random
device = 'cuda' if torch.cuda.is_available() else 'cpu'
from scipy.ndimage import gaussian_filter
import sys
from tqdm import tqdm
from functools import partial
import acd
from copy import deepcopy
sys.path.append('..')
sys.path.append('../..')
from transforms_torch import bandpass_filter
# plt.style.use('dark_background')
sys.path.append('../../dsets/mnist')
import dset
from model import Net, Net2c
from util import *
from numpy.fft import *
from torch import nn
from style import *
from captum.attr import (
InputXGradient,
Saliency,
GradientShap,
DeepLift,
DeepLiftShap,
IntegratedGradients,
LayerConductance,
NeuronConductance,
NoiseTunnel,
)
import pickle as pkl
from torchvision import datasets, transforms
from sklearn.decomposition import NMF
import transform_wrappers
import visualize as viz
torch.manual_seed(42)
np.random.seed(42)
sys.path.append('../../..')
# from hierarchical_dnn_interpretations.acd.scores import cd as acd
from acd_wooseok.acd.scores import cd
from acd_wooseok.acd.util import tiling_2d
from knockout_nmf import *
```
# Dataset
```
# load args
args = dset.get_args()
args.batch_size = int(args.batch_size/2) # half the batchsize
args.epochs = 5
args.cuda = not args.no_cuda and torch.cuda.is_available()
# load NMF object
# run NMF
# nmf = NMF(n_components=30, max_iter=1000)
# nmf.fit(X)
# pkl.dump(nmf, open('./results/nmf_30.pkl', 'wb'))
nmf = pkl.load(open('./results/nmf_30.pkl', 'rb'))
```
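The pickled `nmf_30.pkl` above was presumably produced by code like the commented-out snippet. As a self-contained sketch, with random toy data standing in for the flattened, non-negative MNIST images (the shapes here are illustrative):

```python
import numpy as np
from sklearn.decomposition import NMF

# Toy stand-in for flattened non-negative images (n_samples x n_pixels)
rng = np.random.RandomState(0)
X = rng.rand(100, 64)

nmf = NMF(n_components=5, max_iter=500, init="random", random_state=0)
W = nmf.fit_transform(X)  # per-sample coefficients over the basis
H = nmf.components_       # non-negative basis vectors ("parts")
print(W.shape, H.shape)
```

`X` is approximated by `W @ H`, and the knockout experiments below remove one basis vector (one row of `H`) at a time.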
# Train model
```
basis_indx = 5
(train_loader, test_loader, interp_loader), data_dict = dataloader_nmf_knockout(args,
nmf,
basis_indx=basis_indx,
return_interp_loader=True,
return_indices=True,
task_type='remove_one_basis')
# load model
model = Net2c()
if args.cuda:
model = model.to(device)
model.load_state_dict(torch.load('models/nmf/net2c_{}.pth'.format(basis_indx), map_location=device))
model = model.eval()
dset.test(model, test_loader, args)
# train model
# for epoch in range(1, args.epochs + 1):
# model = dset.train(epoch, train_loader, model, args)
# dset.test(model, test_loader, args)
# save
# torch.save(model.state_dict(), 'mnist.model')
```
# Grad scores
```
# gradients evaluated at the entire image
results = comp_grad_scores(model, nmf, interp_loader, data_dict, grad_mode='exact')
list_of_x = np.arange(nmf.n_components)
interp_modules = ['gradient_shap', 'ig', 'saliency', 'input_x_gradient']
viz.viz_interp_scores(list_of_x, interp_modules, results, basis_indx=basis_indx)
# gradients evaluated at the nmf approximation
results = comp_grad_scores(model, nmf, interp_loader, data_dict, grad_mode='approx')
list_of_x = np.arange(nmf.n_components)
interp_modules = ['gradient_shap', 'ig', 'saliency', 'input_x_gradient']
viz.viz_interp_scores(list_of_x, interp_modules, results, basis_indx=basis_indx)
```
# CD score
```
results_cd = comp_cd_scores(model, nmf, interp_loader, data_dict, cd_mode='cd', device='cuda')
results['cd'] = results_cd['cd']
list_of_x = np.arange(nmf.n_components)
interp_modules = ['gradient_shap', 'ig', 'saliency', 'input_x_gradient', 'cd']
viz.viz_interp_scores(list_of_x, interp_modules, results, basis_indx=basis_indx)
```
# Particle physics results
## Setup
```
import sys
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.neighbors import KernelDensity
from itertools import product
```
## Load results
```
n_runs = 10
n_chains = 4
n_trueparams = 3
algo_filenames = []
algo_additionals = []
algo_labels = []
algo_dividers = []
algo_dims = []
def add_algo(filename, add, label, dim=""):
algo_filenames.append(filename)
algo_additionals.append(add)
algo_labels.append(label)
algo_dims.append(dim)
def add_divider():
algo_dividers.append(len(algo_filenames))
add_algo("flow", "_june", r"\af{}", "40d")
add_algo("pie", "_conditionalmanifold_june", r"\pie{} (original)", "40d")
add_algo("pie", "_june", r"\pie{} (unconditional manifold)", "40d")
add_algo("pae", "_june", r"\pae{}", "40d")
add_algo("mf", "_june", r"\mf{}", "40d")
add_algo("emf", "_june", r"\mfe{}", "40d")
add_divider()
add_algo("flow", "_scandal_june", r"\af{} (\scandal{})", "40d")
add_algo("pie", "_conditionalmanifold_scandal_june", r"\pie{} (original, \scandal{})", "40d")
add_algo("pie", "_scandal_june", r"\pie{} (uncond.~manifold, \scandal{})", "40d")
add_algo("pae", "_scandal_june", r"\pae{} (\scandal{})", "40d")
add_algo("mf", "_scandal_june", r"\mf{} (\scandal{})", "40d")
add_algo("emf", "_scandal_june", r"\mfe{} (\scandal{})", "40d")
add_divider()
add_algo("alices", "_may", r"Likelihood ratio estimator (\alices{})")
n_algos = len(algo_filenames)
def load(name, shape, numpyfy=True, chains=1, result_dir="../data/results"):
all_results = []
for algo_filename, algo_add, algo_dim in zip(algo_filenames, algo_additionals, algo_dims):
algo_results = []
for run in range(n_runs):
run_str = "" if run == 0 else "_run{}".format(run)
for trueparam in range(n_trueparams):
trueparam_str = "" if trueparam == 0 else "_trueparam{}".format(trueparam)
try:
this_result = np.load(
"{}/{}_{}_lhc{}{}{}_{}{}.npy".format(
result_dir, algo_filename, "2" if algo_dim == "2d" else "14",
algo_dim, algo_add, run_str, name, trueparam_str
)
)
if (not numpyfy) or (shape is None) or np.prod(this_result.shape) == np.prod(shape):
algo_results.append(this_result.reshape(shape))
else:
algo_results.append(np.nan*np.ones(shape))
except FileNotFoundError as e:
# print(e)
if shape is None:
algo_results.append(None)
else:
algo_results.append(np.nan*np.ones(shape))
except ValueError as e:
print(e)
if shape is None:
algo_results.append(None)
else:
algo_results.append(np.nan*np.ones(shape))
all_results.append(algo_results)
if numpyfy:
all_results = np.array(all_results, dtype=float)
all_results = all_results.reshape([all_results.shape[0], n_runs, n_trueparams] + list(shape))
return all_results
model_gen_x = load("samples", None, numpyfy=False)
model_gen_closure = load("samples_manifold_distance", (10000,))
model_test_reco_error = load("model_reco_error_test", (1000,))
def load_mcmc(name, shape, numpyfy=True, result_dir="../data/results"):
all_results = []
for algo_filename, algo_add, algo_dim in zip(algo_filenames, algo_additionals, algo_dims):
algo_results = []
for run in range(n_runs):
run_str = "" if run == 0 else "_run{}".format(run)
for trueparam in range(n_trueparams):
trueparam_str = "" if trueparam == 0 else "_trueparam{}".format(trueparam)
for chain in range(n_chains):
chain_str = "" if chain == 0 else "_chain{}".format(chain)
try:
this_result = np.load(
"{}/{}_{}_lhc{}{}{}_{}{}{}.npy".format(
result_dir, algo_filename, "2" if algo_dim == "2d" else "14",
algo_dim, algo_add, run_str, name, trueparam_str, chain_str
)
)
if (not numpyfy) or (shape is None) or np.prod(this_result.shape) == np.prod(shape):
algo_results.append(this_result.reshape(shape))
else:
algo_results.append(np.nan*np.ones(shape))
except FileNotFoundError as e:
# print(e)
if shape is None:
algo_results.append(None)
else:
algo_results.append(np.nan*np.ones(shape))
all_results.append(algo_results)
all_results = np.array(all_results, dtype=float)
all_results = all_results.reshape([all_results.shape[0], n_runs, n_trueparams, n_chains] + list(shape))
return all_results
model_posterior_samples = load_mcmc("posterior_samples", (2500, 2,))
model_posterior_samples.shape # (algo, run, true param id, chain, sample, theta component)
```
## Calculate metrics
```
model_gen_mean_closure = np.mean(model_gen_closure, axis=(2,3))
model_gen_mean_closure.shape
max_reco_error = 10.
model_mean_reco_error = np.mean(np.clip(model_test_reco_error, 0., max_reco_error), axis=(2,3))
model_mean_reco_error.shape
bandwidth = 0.15
true_param_points = np.array([[0.,0.], [0.5, 0.], [-1., -1.]])
model_true_log_posteriors = []
for algo, run, trueparam in product(range(n_algos), range(n_runs), range(n_trueparams)):
mcmcs = model_posterior_samples[algo, run, trueparam].reshape((-1, 2))
mcmcs = mcmcs[np.all(np.isfinite(mcmcs), axis=-1)]
if len(mcmcs) == 0:
model_true_log_posteriors.append(np.nan)
continue
kde = KernelDensity(kernel="gaussian", bandwidth=bandwidth)
kde.fit(mcmcs)
model_true_log_posteriors.append(kde.score(true_param_points[trueparam].reshape((1, 2))))
model_true_log_posteriors = np.mean(np.array(model_true_log_posteriors).reshape((n_algos, n_runs, n_trueparams)), axis=-1)
model_true_log_posteriors.shape
```
## Outlier removal
```
def mean_err_without_outliers(data, remove=1):
shape = list(data.shape)[:-1]
data = data.reshape((-1, data.shape[-1]))
means, errors = [], []
for data_ in data:
data_ = data_[np.isfinite(data_)]
if not len(data_) > 0:
means.append(np.nan)
errors.append(np.nan)
continue
if len(data_) > 2*remove + 1:
for _ in range(remove):
data_ = np.delete(data_, np.argmin(data_))
data_ = np.delete(data_, np.argmax(data_))
means.append(np.mean(data_))
errors.append(np.std(data_) / len(data_)**0.5)
return np.array(means).reshape(shape), np.array(errors).reshape(shape)
model_true_log_posteriors_mean, model_true_log_posteriors_std = mean_err_without_outliers(model_true_log_posteriors)
model_gen_mean_closure_mean, model_gen_mean_closure_std = mean_err_without_outliers(model_gen_mean_closure)
model_mean_reco_error_mean, model_mean_reco_error_std = mean_err_without_outliers(model_mean_reco_error)
```
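`mean_err_without_outliers` trims the single smallest and largest value per metric before averaging; `scipy.stats.trim_mean` offers a proportion-based equivalent, shown here for comparison:

```python
import numpy as np
from scipy.stats import trim_mean

data = np.array([1.0, 2.0, 3.0, 4.0, 100.0])  # one obvious outlier
print(np.mean(data))         # pulled up by the outlier: 22.0
print(trim_mean(data, 0.2))  # cut 20% from each end -> mean of [2, 3, 4] = 3.0
```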
## Best metrics
```
best_closure, best_posterior = -1, -1
best_closure = np.nanargmin(model_gen_mean_closure_mean)
print(algo_labels[best_closure])
best_reco = np.nanargmin(np.where(model_mean_reco_error_mean > 1.e-3, model_mean_reco_error_mean, np.nan))
print(algo_labels[best_reco])
best_posterior = np.nanargmax(model_true_log_posteriors_mean)
print(algo_labels[best_posterior])
```
## Print result table
```
def print_results(
l_label=max([len(l) for l in algo_labels]), l_means=(6, 5, 5), l_errs=(6, 5, 4), latex=False, after_decs=(4,3,2)
):
# Number of digits
l_results = np.array(l_means) + 2 + np.array(l_errs)
l_total = l_label + 1 + np.sum(3 + l_results)
# Divider
col_divider = "&" if latex else "|"
line_end = r"\\" if latex else ""
block_divider = r"\midrule" if latex else "-"*l_total
# Number formatting
def _f(val, err, after_dec, l_mean, l_err, best=False):
l_result = l_mean + 2 + l_err
empty_result = "" if latex else " "*(l_result + 1)
if not np.any(np.isfinite(val)):
return empty_result
result = "{:>{}.{}f}".format(val, l_mean, after_dec)
if latex and best:
result = r"\textbf{" + result + "}"
if latex:
err_str = str.rjust("{:.{}f}".format(err, after_dec), l_err).replace(" ", r"\hphantom{0}")
result += r"\;\textcolor{darkgray}{$\pm$\;" + err_str + "}"
else:
err_str = "({:>{}.{}f})".format(err, l_err, after_dec)
result += err_str
result += "*" if not latex and best else " "
if latex:
result = result.replace("-", "$-{}$")
result = result.replace("darkgray", "dark-gray")
return result
# Header
print(
"{2:<{0}.{0}s} {5} {3:>{1}.{1}s} {5} {7:>{8}.{8}s} {5} {4:>{9}.{9}s} {6}".format(
l_label, l_results[0], "", "Closure", "log p", col_divider, line_end, "Reco error", l_results[1], l_results[2]
)
)
print(block_divider)
# Iterate over methods
for i, (label, closure, closure_err, posterior, posterior_err, reco, reco_err) in enumerate(zip(
algo_labels,
model_gen_mean_closure_mean,
model_gen_mean_closure_std,
model_true_log_posteriors_mean,
model_true_log_posteriors_std,
model_mean_reco_error_mean,
model_mean_reco_error_std,
)):
# Divider
if i in algo_dividers:
print(block_divider)
# Print results
print(
"{1:<{0}.{0}s} {4} {2}{4} {6}{4} {3} {5}".format(
l_label, label,
_f(closure, closure_err, after_decs[0], l_means[0], l_errs[0], i==best_closure),
_f(posterior, posterior_err, after_decs[2], l_means[2], l_errs[2], i==best_posterior),
col_divider, line_end,
_f(reco, reco_err, after_decs[1], l_means[1], l_errs[1], i==best_reco),
)
)
print_results()
print_results(latex=True)
```
## Individual run results
```
l_label=max([len(l) for l in algo_labels])
l_mean=7
after_decs=3
# How to format the numbers
l_result = 3 + n_runs*l_mean + (n_runs - 1)*2
l_total = l_label + 4 + l_result
# Divider
empty_result = " "*(l_result + 1)
col_divider = "|"
line_end = ""
block_divider = "-"*l_total
def _f(val, after_dec, best=False):
if not np.any(np.isfinite(val)):
return empty_result
result = " [{:>{}.{}f}, ".format(np.nanmean(val[0]), l_mean, after_dec)
for i in range(1, n_runs - 1):
result += "{:>{}.{}f}, ".format(np.nanmean(val[i]), l_mean, after_dec)
result += "{:>{}.{}f}]".format(np.nanmean(val[-1]), l_mean, after_dec)
result = result.replace("nan", " ")
result += "*" if best else " "
return result
# Print closure results
print(
"{2:<{0}.{0}s} {4} {3:>{1}.{1}s} {5}".format(
l_label, l_result, "", "Closure", col_divider, line_end
)
)
print(block_divider)
for i, (label, closure) in enumerate(zip(algo_labels, model_gen_mean_closure)):
# Divider
if i in algo_dividers:
print(block_divider)
# Print results
print("{1:<{0}.{0}s} {3} {2} {4}".format(
l_label, label, _f(closure, after_decs, i==best_closure), col_divider, line_end
))
print("")
print("")
print("")
# Print reco error results
print(
"{2:<{0}.{0}s} {4} {3:>{1}.{1}s} {5}".format(
l_label, l_result, "", "Reco error", col_divider, line_end
)
)
print(block_divider)
for i, (label, reco) in enumerate(zip(algo_labels, model_test_reco_error)):
# Divider
if i in algo_dividers:
print(block_divider)
# Print results
print("{1:<{0}.{0}s} {3} {2} {4}".format(
l_label, label, _f(reco, after_decs, i==best_reco), col_divider, line_end
))
print("")
print("")
print("")
# Print posterior results
print(
"{2:<{0}.{0}s} {4} {3:>{1}.{1}s} {5}".format(
l_label, l_result, "", "Log posterior", col_divider, line_end
)
)
print(block_divider)
for i, (label, posterior) in enumerate(zip(algo_labels, model_true_log_posteriors)):
# Divider
if i in algo_dividers:
print(block_divider)
# Print results
print("{1:<{0}.{0}s} {3} {2} {4}".format(
l_label, label, _f(posterior, after_decs, i==best_posterior), col_divider, line_end
))
```
## Putting the reco error in perspective: what if everything was just bleak randomness?
```
x = np.random.normal(size=(1000, 48))
y = np.random.normal(size=(1000, 48))
np.mean(np.sum((x - y) ** 2, axis=1) ** 0.5)
```
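This Monte Carlo estimate can be checked analytically: `x - y` is Gaussian with variance 2 per dimension, so for `d = 48` the expected distance is close to `sqrt(2 * d) ≈ 9.8` (the exact chi-distribution mean is slightly smaller). A quick check:

```python
import numpy as np

d = 48
rng = np.random.RandomState(0)
x = rng.normal(size=(20_000, d))
y = rng.normal(size=(20_000, d))

mc_estimate = np.mean(np.sqrt(np.sum((x - y) ** 2, axis=1)))
analytic_approx = np.sqrt(2 * d)  # E||x - y|| is close to sqrt(2d) for large d
print(mc_estimate, analytic_approx)
```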
# Kinetics: fundamental concepts
> Marcos Duarte
> Laboratory of Biomechanics and Motor Control ([http://demotu.org/](http://demotu.org/))
> Federal University of ABC, Brazil
Kinetics is the branch of classical mechanics that is concerned with the relationship between the motion of bodies and its causes, namely forces and torques ([Encyclopædia Britannica Online](https://www.britannica.com/science/kinetics)).
Kinetics, as used in Biomechanics, also includes statics, the study of equilibrium and its relation to forces and torques (one can treat equilibrium as a special case of motion, where the velocity is zero). This differs from the most common modern division of Mechanics into Statics and Dynamics, and of Dynamics into Kinematics and Kinetics ([Introduction to Biomechanics](http://nbviewer.jupyter.org/github/demotu/BMC/blob/master/notebooks/Biomechanics.ipynb#On-the-branches-of-Mechanics-and-Biomechanics-I)).
## The development of the laws of motion of bodies
"The theoretical development of the laws of motion of bodies is a problem of such interest and importance that it has engaged the attention of all the most eminent mathematicians since the invention of dynamics as a mathematical science by Galileo, and especially since the wonderful extension which was given to that science by Newton."
"Among the successors of those illustrious men, Lagrange has perhaps done more than any other analyst to give extent and harmony to such deductive researches, by showing that the most varied consequences respecting the motions of systems of bodies may be derived from one radical formula; the beauty of the methods so suiting the dignity of the results as to make of his great work a kind of scientific poem."
—Hamilton, 1834 (apud Taylor, 2005).
## Newton's laws of motion
The Newton's laws of motion describe the relationship between the forces acting on a body and the resultant linear motion due to those forces:
- **First law**: An object will remain at rest or in uniform motion in a straight line unless an external force acts on the body.
- **Second law**: The acceleration of an object is directly proportional to the net force acting on the object and inversely proportional to the mass of the object: $\mathbf{F} = m\mathbf{a}.$
- **Third law**: Whenever an object exerts a force $\mathbf{F}_1$ (action) on a second object, this second object simultaneously exerts a force $\mathbf{F}_2$ on the first object with the same magnitude but opposite direction (reaction): $\mathbf{F}_2 = -\mathbf{F}_1.$
These three statements are astonishing in their simplicity and in how much knowledge they convey.
Isaac Newton was born in 1643 and the works that resulted in these equations and other discoveries were mostly done in the years 1666 and 1667, when he was only 24 years old!
However, these works were only published in 1687, twenty years later. So, if your adviser is pressing you to publish your work, you can tell her or him that even Newton took 20 years to publish! But be prepared if your adviser warns you that your work might not be of that level...
Here are these three laws in Newton's own words (from page 83 of Book I in the first American edition of the [*Philosophiæ Naturalis Principia Mathematica*](http://archive.org/details/newtonspmathema00newtrich)):
> LAW I.
> *Every body perseveres in its state of rest, or of uniform motion in a right line, unless it is compelled to change that state by forces impressed thereon.*
> LAW II.
> *The alteration of motion is ever proportional to the motive force impressed; and is made in the direction of the right line in which that force is impressed.*
> LAW III.
> *To every action there is always opposed an equal reaction: or the mutual actions of two bodies upon each other are always equal, and directed to contrary parts.*
And Newton carefully defined mass, motion, and force on the first page of Book I (page 73 of the [*Principia*](http://archive.org/details/newtonspmathema00newtrich)):
> DEFINITION I.
> *The quantity of matter is the measure of the same, arising from its density and bulk conjunctly.*
> ...It is this quantity that I mean hereafter everywhere under the name of body or mass.
> DEFINITION II.
> *The quantity of motion is the measure of the same, arising from the velocity and quantity of matter conjunctly.*
> The motion of the whole is the sum of the motions of all the parts; and therefore in a body double in quantity, with equal velocity, the motion is double; with twice the velocity, it is quadruple.
> DEFINITION IV.
> *An impressed force is an action exerted upon a body, in order to change its state, either of rest, or of moving uniformly forward in a right line.*
## Linear momentum
From Definition II above, we can see that Newton defined as motion what we know today as linear momentum, the product between mass and velocity:
$$ \mathbf{p} = m\mathbf{v} $$
So, in his second law, *alteration of motion is ever proportional to the motive force impressed*, if we understand that it was implicit that the *alteration* occurs in a certain time (or we can understand *force impressed* as force during a certain time), Newton actually stated:
$$ \mathbf{F} = \frac{\Delta\mathbf{p}}{\Delta t} \;\;\;\;\;\; \text{or}\;\;\;\;\;\; \mathbf{F}\Delta t = \Delta\mathbf{p}$$
This is equivalent to $\mathbf{F}=m\mathbf{a}\;$ if mass is constant.
## Impulse
The mechanical linear impulse is a related concept and it can be derived from the second law of motion:
$$ \mathbf{i} = \mathbf{F}\Delta t = m\Delta\mathbf{v} $$
And if the force varies with time:
$$ \mathbf{i} = \sum_t \mathbf{F}(t)\Delta t $$
or, using [infinitesimal calculus](http://en.wikipedia.org/wiki/Infinitesimal_calculus) (which was developed independently by Newton himself and by Leibniz):
$$ \mathbf{i} = \int_t \mathbf{F}(t)dt $$
The concept of impulse due to a force that varies with time is often applied in biomechanics because it is common to measure forces (for example, with force plates) during human movement. When such a varying force is measured, the impulse can be calculated as the area under the force-versus-time curve:
```
# Import the necessary libraries
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
# simulate some data:
t = np.arange(0, 1.01, 0.01)
f = 1000*(-t**3+t**2)
# plot:
plt.rc('axes', labelsize=16)
plt.rc('xtick', labelsize=14)
plt.rc('ytick', labelsize=14)
hfig, hax = plt.subplots(1,1, figsize=(8,5))
hax.plot(t, f, linewidth=3)
hax.set_xlim(-.1, 1.1)
hax.grid()
hax.set_ylabel('Force [N]')
hax.set_xlabel('Time [s]')
plt.fill(t, f, 'b', alpha=0.3)
# area (impulse) with the trapz numerical integration method:
from scipy.integrate import trapz
imp = trapz(f, t)
# plot a rectangle for the mean impulse value:
plt.fill(np.array([t[0], t[0], t[-1], t[-1]]),
np.array([0, imp, imp, 0]/(t[-1]-t[0])), 'r', alpha=0.3)
s = r'$i=F\Delta t = %.1f Ns$' % imp  # raw string so \D is not treated as an escape
plt.text(.4, 40, s, fontsize=18,
bbox=dict(facecolor='white', edgecolor='white'));
```
The plot above shows the area (impulse) of the force-versus-time curve (blue) and the equivalent rectangle area for the mean force (red) with the same impulse value.
## Force
There are many manifestations of force we may experience during movement: gravitational, friction, ground reaction force, muscle force, buoyancy, elastic force, and others less visible such as electromagnetic, nuclear, etc. But in reality, all these different forces can be grouped into only four fundamental forces:
- Strong force: holds the nucleus of an atom together. Its range of action is $10^{-15}$ m.
- Weak force: acts between particles of the nucleus. Its range of action is $10^{-18}$ m.
- Electromagnetic force: the forces between electric charges and the magnetic forces.
- Gravity force: the force between masses; it is the weakest of the four fundamental forces.
In mechanics, forces can be classified as either contact or body forces. The contact force acts at the point of contact between two bodies. The body force acts on the entire body with no contact (e.g., gravity and electromagnetic forces).
In biomechanics, another useful classification is to divide the forces in either external or internal in relation to the human body. External forces result from interactions with an external body or environment (e.g., gravity and ground reaction forces). Internal forces result from interactions inside the body (e.g., the forces between bones).
### Gravity force
The gravitational force between two masses $m_1$ and $m_2$ is given by:
$$ \mathbf{F} = -G\frac{m_1\:m_2}{r_{1-2}^2}\mathbf{\hat{r}} $$
Where $G = 6.67\times10^{-11}\: Nm^2/kg^2$.
The minus sign means it is an attractive force acting along the line connecting the two bodies.
At the surface of the Earth, $m_1 = 5.9736\times10^{24}\:kg$ and $r_{1-2} = 6.371\times10^{6}\:m$; substituting these values (with $m_2 = m$, the mass of the body), the gravitational force simplifies to:
$$ \mathbf{F} = -mg $$
Where $g \approx 9.8 m/s^2\;$ in the vertical direction and points downwards due to the minus sign.
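As a sanity check, $g$ can be computed from the constants above (a quick Python sketch; the numerical values are the ones given in the text):

```python
# Acceleration of gravity at the Earth's surface: g = G*m1/r**2
G = 6.67e-11    # gravitational constant [N m**2/kg**2]
m1 = 5.9736e24  # mass of the Earth [kg]
r = 6.371e6     # mean radius of the Earth [m]
g = G * m1 / r**2
print('g = {:.2f} m/s2'.format(g))  # g = 9.82 m/s2
```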
If the only force acting on the body is the gravity force, using the second law of motion, the equation of movement for the body is:
$$ \mathbf{F} = m\mathbf{a} $$
$$ -mg = ma $$
$$ a = -g $$
Integrating this equation twice with respect to time, with initial position $r_0$ and initial velocity $v_0$:
$$ r(t) = r_0 + \int_0^t \left( v_0 - \int_0^{t'} g\: dt'' \right) dt' $$
$$ r(t) = r_0 + v_0t - \frac{gt^2}{2} $$
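This trajectory equation can be evaluated numerically, for example for a body thrown upwards (a sketch; the initial position and velocity below are arbitrary example values):

```python
import numpy as np

g = 9.8               # acceleration of gravity [m/s2]
r0, v0 = 1.0, 5.0     # assumed initial position [m] and velocity [m/s]
t = np.linspace(0, 1, 101)
r = r0 + v0*t - g*t**2/2
# the maximum height occurs near t = v0/g ~ 0.51 s
print('Maximum height: {:.2f} m at t = {:.2f} s'.format(r.max(), t[r.argmax()]))
```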
### Elastic force
The force of a spring is proportional to its deformation, x:
$$ \mathbf{F} = -Kx $$
Where K is a constant characteristic of the spring known as stiffness.
The minus sign means that it is a restoring force and the spring acts to restore the body to the resting position of the spring (no deformation).
If the only force acting on the body is the spring force, using the second law of motion, the equation of movement for the body is:
$$ \mathbf{F} = m\mathbf{a} $$
$$ -Kx = ma $$
$$ a = -\frac{K}{m}x $$
The solution of this second-order differential equation is a harmonic oscillation:
$$ x(t) = A_0\cos(\omega t + \phi) $$
Where $A_0$ is the maximum amplitude of deformation on the spring, $\phi$ is a phase, and $\omega$ is the angular frequency of this oscillatory system:
$$ \omega = \sqrt{\frac{K}{m}} \;\;\; \text{and} \;\;\; \omega = 2\pi f $$
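A minimal numerical sketch of this oscillatory solution (the stiffness, mass, amplitude, and phase below are assumed example values):

```python
import numpy as np

K, m = 100.0, 1.0    # spring stiffness [N/m] and mass [kg], assumed values
A0, phi = 0.1, 0.0   # amplitude [m] and phase [rad], assumed values
w = np.sqrt(K/m)     # angular frequency [rad/s]
f = w/(2*np.pi)      # frequency [Hz]
t = np.linspace(0, 1, 1001)
x = A0*np.cos(w*t + phi)
print('w = {:.1f} rad/s, f = {:.2f} Hz'.format(w, f))  # w = 10.0 rad/s, f = 1.59 Hz
```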
## Work
The mechanical work of a force done on a body is the product between the component of the force in the direction of the resultant motion and the displacement:
$$ \tau = \mathbf{F} \cdot \Delta\mathbf{x} $$
Where the symbol $\cdot$ stands for the [scalar product](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/ScalarVector.ipynb) mathematical function.
Mechanical work can also be understood as the amount of mechanical energy transferred into or out of a system.
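As a simple numerical example of the scalar product above (with assumed force and displacement vectors):

```python
import numpy as np

F = np.array([10.0, 5.0, 0.0])  # force [N], assumed values
dx = np.array([2.0, 0.0, 0.0])  # displacement [m], assumed values
W = np.dot(F, dx)               # only the component of F along dx does work
print('Work = {:.1f} J'.format(W))  # Work = 20.0 J
```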
## Mechanical energy
Mechanical energy is the sum of kinetic and potential energies.
### Kinetic energy
$$ E_k = \frac{1}{2}mv^2 $$
The linear momentum and the kinetic energy are related by:
$$ \mathbf{p} = \frac{\partial E_k}{\partial\mathbf{v}} $$
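This relation can be checked numerically with a central finite difference of the kinetic energy with respect to velocity (the mass and velocity below are assumed example values):

```python
m, v, dv = 2.0, 3.0, 1e-6     # mass [kg], velocity [m/s], small step
Ek = lambda v: 0.5*m*v**2     # kinetic energy
p_numeric = (Ek(v + dv) - Ek(v - dv)) / (2*dv)  # numerical dEk/dv
print(p_numeric)  # ~ m*v = 6.0
```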
### Potential energy
The potential energy due to the gravitational force at the Earth's surface is:
$$ E_p = mgh $$
The potential energy stored in a spring is:
$$ E_p = \frac{1}{2}Kx^2 $$
### Power
$$ P = \frac{\Delta E}{\Delta t} \quad \text{and} \quad P = \mathbf{F} \cdot \mathbf{v} $$
## Angular momentum
In analogy to the linear momentum, the angular momentum is the quantity of movement of a particle rotating around an axis at a distance $\mathbf{r}:$
$$ \mathbf{L} = \mathbf{r} \times \mathbf{p} $$
For a rigid body rotating around a fixed axis of symmetry, the angular momentum can be expressed as:
$$ \mathbf{L} = I\mathbf{\omega} $$
Where $I$ is the rotational inertia or moment of inertia of the body.
For a rigid body rotating around its own center of mass and also rotating around another axis, the total angular momentum is the sum of the two angular momenta around each axis:
$$ \mathbf{L} = \mathbf{r_{cm}} \times \mathbf{p_{cm}} + I \mathbf{\omega} $$
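A quick sketch of the cross product definition of angular momentum (mass, position, and velocity are assumed example values):

```python
import numpy as np

m = 2.0                        # mass [kg], assumed
r = np.array([1.0, 0.0, 0.0])  # position [m], assumed
v = np.array([0.0, 3.0, 0.0])  # velocity [m/s], assumed
L = np.cross(r, m*v)           # angular momentum [kg m2/s]
print(L)  # [0. 0. 6.] -> perpendicular to both r and v
```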
## Torque (moment of force)
In analogy to the second Newton's law for the linear case, torque or moment of force (or simply moment) is the time derivative of angular momentum:
$$ \mathbf{M} = \frac{d\mathbf{L}}{dt} = \frac{d}{dt}(\mathbf{\mathbf{r} \times \mathbf{p}}) = \frac{d\mathbf{r}}{dt} \times \mathbf{p} + \mathbf{r} \times \frac{d\mathbf{p}}{dt} = 0 + \mathbf{r} \times \mathbf{F} $$
$$ \mathbf{M} = \mathbf{r} \times \mathbf{F} $$
$$ \mathbf{M} = (r_x\:\mathbf{\hat{i}}+r_y\:\mathbf{\hat{j}}+r_z\:\mathbf{\hat{k}}) \times (F_x\:\mathbf{\hat{i}}+F_y\:\mathbf{\hat{j}}+F_z\:\mathbf{\hat{k}}) $$
Where the symbol $\times$ stands for the [cross product](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/ScalarVector.ipynb) mathematical function.
The moment of force can be calculated as the determinant of the following matrix:
$$ \mathbf{M} = \begin{bmatrix}
\mathbf{\hat{i}} & \mathbf{\hat{j}} & \mathbf{\hat{k}} \\
r_x & r_y & r_z \\
F_x & F_y & F_z
\end{bmatrix} $$
$$ \mathbf{M} = (r_yF_z-r_zF_y)\mathbf{\hat{i}}+(r_zF_x-r_xF_z)\mathbf{\hat{j}}+(r_xF_y-r_yF_x)\mathbf{\hat{k}} $$
The magnitude of the moment of force can also be calculated by the geometrically equivalent formula:
$$ ||\mathbf{M}|| = ||\mathbf{r} \times \mathbf{F}|| = ||\mathbf{r}||\:||\mathbf{F}||\:\sin(\theta) $$
Where $\theta$ is the angle between the vectors $\mathbf{r}$ and $\mathbf{F}$.
The animation below (from [Wikipedia](http://en.wikipedia.org/wiki/File:Torque_animation.gif)) illustrates the relationship between force ($\mathbf{F}$), torque ($\tau$), and momentum vectors ($\mathbf{p}$ and $\mathbf{L}$):
<figure><img src="http://upload.wikimedia.org/wikipedia/commons/0/09/Torque_animation.gif" alt="Torque animation" width="300"/><figcaption><center><i>Figure. Relationship between force ($\mathbf{F}$), torque ($\tau$), and momentum vectors ($\mathbf{p}$ and $\mathbf{L}$) (from [Wikipedia](http://en.wikipedia.org/wiki/File:Torque_animation.gif)).</i></center></figcaption></figure>
### Varignon's Theorem (Principle of Moments)
> *The moment of a force about a point is equal to the sum of moments of the components of the force about the same point.*
Note that the components of the force don't need to be orthogonal.
### Principle of transmissibility
> *For rigid bodies with no deformation, an external force can be applied at any point on its line of action without changing the resultant effect of the force.*
**Example** (From Meriam 1997). For the figure below, calculate the magnitude of the moment about the base point *O* of the 600-N force in five different ways[.](http://ebm.ufabc.edu.br/wp-content/uploads/2013/02/torque.png)
<figure><img src="http://ebm.ufabc.edu.br/wp-content/uploads/2013/02/torque2.jpg" alt="Torque" width="250"/><figcaption><center><i>Figure. Can you calculate the torque of the force above by five different ways?</i></center></figcaption></figure>
One way:
```
import numpy as np  # needed for cos, cross and norm below

r = [2, 4, 0] # in m
F = [600*np.cos(40*np.pi/180), -600*np.sin(40*np.pi/180), 0] # in N
M = np.cross(r, F) # in Nm
print('The magnitude of the moment of force is: {:.0f} Nm'.format(np.linalg.norm(M)))
```
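A second of the five possible ways (a sketch): in two dimensions the cross product reduces to its z component, $M_z = r_xF_y - r_yF_x$:

```python
import numpy as np

rx, ry = 2, 4                  # position of the point of application [m]
Fx = 600*np.cos(40*np.pi/180)  # force components [N]
Fy = -600*np.sin(40*np.pi/180)
Mz = rx*Fy - ry*Fx             # z component of r x F
print('Mz = {:.0f} Nm'.format(Mz))  # magnitude ~2610 Nm, as with np.cross
```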
## Euler's laws of motion (for a rigid body)
Euler's laws of motion extend Newton's laws of motion for particles for the motion of a rigid body:
**First law**: The linear momentum of a body is equal to the product of the mass of the body and the velocity of its center of mass:
$$ \mathbf{p} = m\mathbf{v}_{cm} $$
The equation above is true for a rigid body because the internal forces, between the particles of the body, do not contribute to changing the total momentum of the body.
And from this equation (considering $m$ constant):
$$ \mathbf{F} = m\mathbf{a}_{cm} $$
**Second law**: The rate of change of angular momentum about an axis is equal to the sum of the external moments of force (torques) about that point:
$$ \mathbf{M} = \frac{d\mathbf{L}}{dt} $$
If we describe the rotation of a rigid body using a rotating reference frame with axes parallel to the principal axes of inertia of the body, Euler's second law becomes:
$$ M_1 = I_1\dot{\omega_1} + (I_3-I_2)\omega_2\omega_3 $$
$$ M_2 = I_2\dot{\omega_2} + (I_1-I_3)\omega_3\omega_1 $$
$$ M_3 = I_3\dot{\omega_3} + (I_2-I_1)\omega_1\omega_2 $$
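These three equations can be evaluated directly; the sketch below uses arbitrary example values for the principal moments of inertia, angular velocities, and angular accelerations:

```python
# Evaluate Euler's equations for assumed example values
I1, I2, I3 = 0.1, 0.2, 0.3      # principal moments of inertia [kg m2]
w1, w2, w3 = 1.0, 2.0, 3.0      # angular velocities [rad/s]
dw1, dw2, dw3 = 0.5, 0.0, -0.5  # angular accelerations [rad/s2]

M1 = I1*dw1 + (I3 - I2)*w2*w3
M2 = I2*dw2 + (I1 - I3)*w3*w1
M3 = I3*dw3 + (I2 - I1)*w1*w2
print(M1, M2, M3)
```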
For a two-dimensional case, where the rigid body rotates around its own center of mass and also rotates around another parallel axis, this second law simplifies to:
$$ \mathbf{M} = \mathbf{r_{cm}} \times m\mathbf{a_{cm}} + I \mathbf{\alpha} $$
## Mechanical energy for angular motion
### Kinetic energy
$$ E_k = \frac{1}{2}I\omega^2 $$
### Work
$$ \tau = \mathbf{M} \cdot \Delta\mathbf{\theta} $$
### Power
$$ P = \frac{\Delta E}{\Delta t} \quad \text{and} \quad P = \mathbf{M} \cdot \mathbf{\omega} $$
## Principles of conservation
### Principle of conservation of linear momentum
> *In a closed system with no external forces acting upon it, the total linear momentum of this system is constant.*
### Principle of conservation of angular momentum
> *In a closed system with no external forces acting upon it, the total angular momentum of this system is constant.*
### Principle of conservation of mechanical energy
> *In a closed system with no external forces acting upon it, the mechanical energy of this system is constant if only conservative forces act in this system.*
### Conservative forces
A force is said to be conservative if it produces the same work regardless of its trajectory between two points; otherwise, the force is said to be non-conservative.
Mathematically, the force $\mathbf{F}$ is conservative if:
$$ \oint \mathbf{F} \cdot d\mathbf{s} = 0 $$
The gravitational force and the elastic force of an ideal spring are examples of conservative forces but friction force is not conservative. The forces generated by our muscles are also not conservative.
## References
- Hibbeler RC (2012) [Engineering Mechanics: Statics](http://books.google.com.br/books?id=PSEvAAAAQBAJ). Prentice Hall; 13 edition.
- Hibbeler RC (2012) [Engineering Mechanics: Dynamics](http://books.google.com.br/books?id=mTIrAAAAQBAJ). Prentice Hall; 13 edition.
- Ruina A, Rudra P (2013) [Introduction to Statics and Dynamics](http://ruina.tam.cornell.edu/Book/index.html). Oxford University Press.
- Taylor JR (2005) [Classical Mechanics](https://books.google.com.br/books?id=P1kCtNr-pJsC). University Science Books.
# Db2 Jupyter: Using Prepared Statements
Updated: 2019-10-03
Normally, the `%sql` magic command is used to execute an SQL statement immediately and get a result. If the statement needs to be executed multiple times with different variables, this process is inefficient since the SQL statement must be recompiled every time.
The `PREPARE` and `EXECUTE` commands allow the user to optimize the statement once and then re-execute it using different parameters.
In addition, the commit scope can be modified so that not every statement gets committed immediately. By managing the commit scope, overhead in the database engine can be avoided.
```
%run ../db2.ipynb
%run ../connection.ipynb
```
## Autocommit and Commit Scope
By default, any SQL statements executed with the `%sql` magic command are immediately committed. This means that the log file has the transaction details and the results are committed to disk. In other words, you can't change your mind after the statement finishes execution.
This behavior is often referred to as `AUTOCOMMIT` and adds a level of overhead to statement execution because at the end of every statement the results must be "hardened". On the other hand, autocommit means you don't have to worry about explicitly committing work or causing potential locking issues because you are holding up resources. When a record is updated, no other user will be able to view it (unless using "dirty read") until you commit. Holding the resource in a lock means that other workloads may come to a halt while they wait for you to commit your work.
Here is a classic example of wanting a commit scope that is based on a series of statements:
```
withdrawal = 100
%sql update checking set balance = balance - withdrawal
%sql update savings set balance = balance + withdrawal
```
If autocommit is `ON`, you could have a problem with the transaction if the system failed after the first update statement. You would have taken money out of the checking account, but have not updated the savings account. To make sure that this transaction is run successfully:
```
%sql autocommit off
withdrawal = 100
%sql update checking set balance = balance - withdrawal
%sql update savings set balance = balance + withdrawal
%sql commit work
```
If the transaction fails before the `COMMIT WORK`, all changes to the database will be rolled back to its original state, thus protecting the integrity of the two tables.
### AUTOCOMMIT
Autocommit can be turned on or off using the following syntax:
```
%sql AUTOCOMMIT ON | OFF
```
If you turn `AUTOCOMMIT OFF` then you need to make sure that you `COMMIT` work at the end of your code. If you don't, you may lose your work if the connection to Db2 is lost.
### COMMIT, ROLLBACK
To `COMMIT` all changes to the database you must use the following syntax:
```
%sql COMMIT [WORK | HOLD]
```
The commands `COMMIT` and `COMMIT WORK` are identical and will commit all work to the database. Issuing a `COMMIT` command also closes all open cursors and statements. If you had created a prepared statement (see the section below) then the compiled statement will no longer be valid. By issuing a `COMMIT` you are releasing all of the resources and locks that your application may be holding.
`COMMIT HOLD` will allow you to commit your work to disk, but keeps all of the resources open for further execution. This is useful for situations where you are inserting or updating thousands of records and do not want to tie up log space waiting for a commit to occur. The following pseudocode gives you an example of how this would be used:
```
%sql autocommit off
for i = 1 to 1000
%sql insert into x values i
if (i % 100 == 0)
print i "Records inserted"
%sql commit work
end if
end for
%sql commit work
%sql autocommit on
```
You should always remember to turn `AUTOCOMMIT ON` at the end of any code block or you will have to issue `COMMIT` at the end of any SQL command to commit it to the database.
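The commit-scope pattern in the pseudocode above can be sketched in plain Python with the standard `sqlite3` module (an analogy only, not the Db2 `%sql` implementation):

```python
import sqlite3

con = sqlite3.connect(':memory:')  # sqlite3 defers commits for DML by default
con.execute('CREATE TABLE x (i INTEGER)')
for i in range(1, 1001):
    con.execute('INSERT INTO x VALUES (?)', (i,))
    if i % 100 == 0:
        con.commit()               # like COMMIT WORK every 100 records
con.commit()                       # final commit before finishing
print(con.execute('SELECT COUNT(*) FROM x').fetchone()[0])  # 1000
```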
## PREPARE and EXECUTE
The `PREPARE` and `EXECUTE` commands are useful in situations where you want to repeat an SQL statement multiple times while just changing the parameter values. There isn't any benefit from using these statements for simple tasks that may only run occasionally. The benefit of `PREPARE/EXECUTE` is more evident when dealing with a large number of transactions that are the same.
The `PREPARE` statement can be used against many types of SQL, but in this implementation, only the following SQL statements are supported:
* SELECT
* INSERT
* UPDATE
* DELETE
* MERGE
To prepare a statement, you must use the following syntax:
```Python
stmt = %sql PREPARE sql ....
```
The `PREPARE` statement always returns a statement handle. You must assign the results of the `PREPARE` statement to a variable since it will be required when you `EXECUTE` the statement.
The SQL statement must have any variables replaced with a question mark `?`. For instance, if you wanted to insert a single value into a table you would use the following syntax:
```Python
stmt = %sql PREPARE insert into x values (?)
```
One important note with parameter markers. If you require the parameter to have a specific data type (say INTEGER) then you may want to place a `CAST` statement around it to force the proper conversion. Usually strings, integers, decimals, etc... convert fine when using this syntax, but occasionally you may run across a data type issue. For the previous example we could modify it to:
```Python
stmt = %sql PREPARE insert into x values (CAST(? AS INTEGER))
```
Once you have prepared a statement, you can execute it using the following syntax:
```Python
%sql EXECUTE :stmt USING :v1,:v2,:v3,....
```
You must provide the statement variable `:stmt` to the EXECUTE statement so it knows which prepared code to execute. You can create many prepared statements and use them throughout your code.
The values that follow the `USING` clause are either constants or Python variable names separated by commas. If you place a colon `:` in front of a variable name, it will be immediately substituted into the statement:
```Python
%sql EXECUTE :stmt USING 3,'asdsa',24.5,:x
```
Variables without a colon in front of them are linked in dynamically. When the `EXECUTE` statement is processed, the value in the variable is taken directly from memory, so there is no conflict with data types, quotes, or anything that might be interpreted incorrectly. When using linked variables you can specify what the underlying data type is so that Db2 does not try to incorrectly translate a value. The previous section mentioned the use of the `CAST` function to ensure the proper data type is used. With linked variables, you can specify four types of data:
* char - character data type (default)
* bin, binary - binary data
* dec, decimal - decimal data type
* int, integer - numeric data type
These modifiers are added after the variable name by using the `@` symbol:
```Python
%sql EXECUTE :stmt USING v1@int, v2@binary, v3@decimal
```
The default is to treat variables as character strings.
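The same prepare-and-reuse idea exists in the Python DB-API; here is a sketch with the standard `sqlite3` module (an analogy, not the Db2 `%sql` syntax):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE x (a INTEGER, b TEXT, c REAL)')
stmt = 'INSERT INTO x VALUES (?,?,?)'  # ? parameter markers, as in %sql PREPARE
for row in [(3, 'asdsa', 24.5), (4, 'other', 1.0)]:
    con.execute(stmt, row)             # like EXECUTE :stmt USING ...
con.commit()
print(con.execute('SELECT COUNT(*) FROM x').fetchone()[0])  # 2
```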
### Using Arrays and Multiple Parameters
When using the `PREPARE` statement, it can become cumbersome when dealing with many parameter markers. For instance, in order to insert 10 columns into a table the code would look similar to this:
```
stmt = %sql PREPARE INSERT INTO X VALUES (?,?,?,?,?,?,?,?,?,?)
```
The `%sql` command allows you to use the shortform `?*#` where `#` is an integer representing the number of columns you want in the list. The above statement could be written as:
```
stmt = %sql PREPARE INSERT INTO X VALUES (?*10)
```
The syntax can also be used to create groups of parameter markers:
```
stmt = %sql PREPARE INSERT INTO X VALUES (?*3,?*7)
```
While this may seem a strange way of providing parameters, this becomes more useful when we use the `EXECUTE` command.
The `EXECUTE` command can use Python arrays (lists) as input arguments. For the previous example with 10 parameters you could issue the following command:
```
%sql EXECUTE :stmt USING v1,v2,v3,v4,v5,v6,v7,v8,v9,v10
```
If you placed all of these values into an array, you could also do the following:
```
%sql EXECUTE :stmt USING :v[0],:v[1],:v[2],:v[3],:v[4],:v[5],:v[6],:v[7],:v[8],:v[9]
```
That isn't much simpler but shows that you could use items within an array (one dimensional only). The easiest syntax is:
```
%sql EXECUTE :stmt USING :v
```
The only requirement is that the array `v` has exactly the number of values required to satisfy the parameter list required by the prepared statement.
When you split the argument list into groups, you can use multiple arrays to contain the data. Given the following prepare statement:
```
stmt = %sql PREPARE INSERT INTO X VALUES (?*3,?*7)
```
You could execute the statement using two arrays:
```
%sql EXECUTE :stmt USING :name, :details
```
This would work as long as the total number of parameters supplied by the `name` array and `details` array is equal to 10.
## Performance Comparisons
The following examples will show the use of `AUTOCOMMIT` and `PREPARE/EXECUTE` when running SQL statements.
This first SQL statement will load the EMPLOYEE and DEPARTMENT tables (if they don't already exist) and then return an array of all of the employees in the company using a SELECT statement.
```
%sql -sampledata
employees = %sql -r select * from employee
```
The `employees` variable contains all of the employee data as a Python array. The next statement will retrieve the contents of the first row only (Remember that row 0 contains the name of the columns).
```
print(employees[1])
```
We now will create another `EMPLOYEE` table that is an exact duplicate of what we already have.
```
%%sql -q
DROP TABLE EMPLOYEE2;
CREATE TABLE EMPLOYEE2 AS (SELECT * FROM EMPLOYEE) DEFINITION ONLY;
```
### Loop with INSERT Statements
One could always use SQL to insert into this table, but we will use a loop to execute insert statements. The loop will be timed so we can get a sense of the cost of running this code. In order to make the loop run a bit longer, the insert block is run 50 times.
```
%sql -q DELETE FROM EMPLOYEE2
import time

print("Starting Insert")
start_time = time.time()
i = 0
for k in range(0,50):
for record in employees[1:]:
i += 1
empno,firstnme,midinit,lastname,workdept,phoneno,hiredate,job,edlevel,sex,birthdate,salary,bonus,comm = record
%sql -q INSERT INTO EMPLOYEE2 VALUES ( \
:empno,:firstnme,:midinit, \
:lastname,:workdept,:phoneno, \
:hiredate,:job,:edlevel, \
:sex,:birthdate,:salary, \
:bonus,:comm)
end_time = time.time()
print('Total load time for {:d} records is {:.2f} seconds'.format(i,end_time-start_time))
time_insert = end_time-start_time
```
### Loop with PREPARE Statement
An alternative method would be to use a prepared statement that allows us to compile the statement once in Db2 and then reuse the statement in Db2's memory. This method uses the individual column values as input into the `EXECUTE` statement.
```
%sql -q DELETE FROM EMPLOYEE2
%sql AUTOCOMMIT ON
print("Starting Insert")
start_time = time.time()
i = 0
prep = %sql prepare INSERT INTO EMPLOYEE2 VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?)
for k in range(0,50):
for record in employees[1:]:
i += 1
empno,firstnme,midinit,lastname,workdept,phoneno,hiredate,job,edlevel,sex,birthdate,salary,bonus,comm = record
%sql execute :prep using :empno,:firstnme,:midinit,:lastname,:workdept,:phoneno,:hiredate,:job,:edlevel,:sex,:birthdate,:salary,:bonus,:comm
end_time = time.time()
print('Total load time for {:d} records is {:.2f} seconds'.format(i,end_time-start_time))
time_prepare = end_time-start_time
```
### Loop with PREPARE Statement and Arrays
You will notice that it is a bit tedious to write out all of the columns that are required as part of an `INSERT` statement. A simpler option is to use a Python list (array) and assign it directly in the `EXECUTE` statement. So rather than:
```
%sql execute :prep using :empno, :firstnme, ...
```
We can just use the array variable generated as part of the for loop:
```
%sql execute :prep using :record
```
The following SQL demonstrates this approach.
```
%sql -q DELETE FROM EMPLOYEE2
print("Starting Insert")
start_time = time.time()
i = 0
prep = %sql prepare INSERT INTO EMPLOYEE2 VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?)
for k in range(0,50):
for record in employees[1:]:
i += 1
%sql execute :prep using :record
end_time = time.time()
print('Total load time for {:d} records is {:.2f} seconds'.format(i,end_time-start_time))
time_array = end_time-start_time
```
### Loop with PREPARE Statement, Arrays and AUTOCOMMIT OFF
Finally, we can turn `AUTOCOMMIT` off and then commit the work at the end of the block to improve the total time required to insert the data. Note the use of the parameter shortform `?*14` in the code.
```
%sql -q DELETE FROM EMPLOYEE2
%sql autocommit off
print("Starting Insert")
start_time = time.time()
i = 0
prep = %sql prepare INSERT INTO EMPLOYEE2 VALUES (?*14)
for k in range(0,50):
for record in employees[1:]:
i += 1
%sql execute :prep using :record
%sql commit work
end_time = time.time()
print('Total load time for {:d} records is {:.2f} seconds'.format(i,end_time-start_time))
%sql autocommit on
time_commit = end_time-start_time
```
### Performance Comparison
You may have noticed that the performance of the last method is substantially faster than the other examples. The primary reason for this is that the `COMMIT` occurs only at the end of the code.
```
%%sql -bar
WITH RESULT(RUN,ELAPSED) AS (
VALUES
('INSERT',CAST(:time_insert AS DEC(5,2))),
('PREPARE',CAST(:time_prepare AS DEC(5,2))),
('ARRAY ',CAST(:time_array AS DEC(5,2))),
('COMMIT ',CAST(:time_commit AS DEC(5,2)))
)
SELECT RUN, ELAPSED FROM RESULT
ORDER BY ELAPSED DESC
```
#### Credits: IBM 2019, George Baklarz [baklarz@ca.ibm.com]
## Code visibility
Use the Show/Hide Code button on the top left to make the code visible or to hide it. It will be hidden in the HTML files by default.
```
# RUN
import sys
sys.path.append("/opt/src")
import mip_functions as mip
import pickle
import json
import copy
import os
import numpy as np
import subprocess
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('ggplot')
import pandas as pd
import seaborn as sns
import warnings
warnings.filterwarnings('ignore')
from wrangler_stats import get_stats
wdir = "/opt/analysis/"
data_dir = "/opt/data/"
```
After each MIPWrangler run, 3 statistics files are generated. These are useful to understand how a sequencing run or processing performed. These files will be located in the wrangler directory. Their names may be slightly different depending on the miptools version (names may contain date information, may or may not be zipped etc.)
For the test analysis run, I used the path binding wrangler_dir to /opt/data when I started this notebook, so the file paths below reflect that.
## Numbers for read extraction from fastq files
```
extraction_summary_file = "/opt/data/extractInfoSummary.txt.gz"
extraction = get_stats(extraction_summary_file)
extraction.head()
```
Explanation of important field names for the extractInfoSummary file (all numbers show the number of reads for that sample):
* total: number of total reads for the sample
* totalMatched: reads that had a proper extension arm sequence
* failedLigationArm: reads that did not have the matching ligation arm sequence
* badStitch: read pairs that did not stitch properly
* goodReads: reads used downstream
Get total numbers for each field
```
extraction.sum(numeric_only=True).sort_values(ascending=False)
```
Same statistic in fraction of total:
```
extraction.sum(numeric_only=True).div(extraction.sum(numeric_only=True)["total"], axis=0).sort_values(
ascending=False)
```
67.8% of the reads will be used in clustering.
## Numbers for forward and reverse read stitching
```
# Load the stitching info file
stitch_file = "/opt/data/stitchInfoByTarget.txt.gz"
sti = get_stats(stitch_file)
sti.head()
```
The stitching info file has one line per mip per sample and 5 data columns:
* **total**: Total number of reads for that sample/mip combination.
* **r1EndsInR2**: Number of reads that properly stitched.
* **r2BeginsInR2**: Indicates primer/adapter dimers or small junk sequence.
* **OverlapFail**: No high quality overlap was found. This could mean the sequences were low quality, or there was not enough overlap, for example if the captured region is 500 bp but we performed 150 bp paired-end sequencing. Another example is when there is a large enough insertion in the captured region that the reads do not overlap.
* **PerfectOverlap**: Unlikely scenario that two reads perfectly overlap.
Only the **r1EndsInR2** and **PerfectOverlap** reads are used for the rest of the pipeline.
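The usable fraction therefore combines those two columns; a small sketch with a made-up DataFrame (toy numbers, not the real run data):

```python
import pandas as pd

# toy stitching stats; the real numbers come from stitchInfoByTarget.txt.gz
sti_toy = pd.DataFrame({'total': [100, 200],
                        'r1EndsInR2': [80, 150],
                        'PerfectOverlap': [1, 2]})
usable = (sti_toy['r1EndsInR2'] + sti_toy['PerfectOverlap']).sum() / sti_toy['total'].sum()
print('{:.1%}'.format(usable))  # 77.7%
```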
Let's look at the total number of each category.
```
sti.sum(numeric_only=True).sort_values(ascending=False)
```
Out of 544671 reads, 459013 stitched fine.
Let's look at them in terms of fraction of total.
```
sti.sum(numeric_only=True).div(sti.sum(numeric_only=True)["total"], axis=0).sort_values(ascending=False)
```
84% of the reads are fine and will be used for the next steps in the pipeline.
We can also look at the stats per MIP to get an idea of which MIPs are performing well or poorly. If certain MIPs have increased failure rates, it could warrant some attention.
```
sti.groupby("mipTarget").sum(numeric_only=True).sort_values("r1EndsInR2", ascending=False)
sti.groupby("mipTarget").sum(numeric_only=True).div(sti.groupby("mipTarget").sum(numeric_only=True)["total"],
axis=0).sort_values("r1EndsInR2", ascending=False)
```
## Extraction statistics per sample per probe
```
extraction_by_target = "/opt/data/extractInfoByTarget.txt.gz"
ext_by_target = get_stats(extraction_by_target)
ext_by_target.head()
ext_by_target.groupby("mipTarget").sum(numeric_only=True).div(
ext_by_target.groupby("mipTarget").sum(
numeric_only=True)["totalMatched"], axis=0).sort_values("goodReads", ascending=False)
```

Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.
# Configuration
_**Setting up your Azure Machine Learning services workspace and configuring your notebook library**_
---
---
## Table of Contents
1. [Introduction](#Introduction)
1. What is an Azure Machine Learning workspace
1. [Setup](#Setup)
1. Azure subscription
1. Azure ML SDK and other library installation
1. Azure Container Instance registration
1. [Configure your Azure ML Workspace](#Configure%20your%20Azure%20ML%20workspace)
1. Workspace parameters
1. Access your workspace
1. Create a new workspace
1. [Next steps](#Next%20steps)
---
## Introduction
This notebook configures your library of notebooks to connect to an Azure Machine Learning (ML) workspace. In this case, a library contains all of the notebooks in the current folder and any nested folders. You can configure this notebook library to use an existing workspace or create a new workspace.
Typically you will need to run this notebook only once per notebook library as all other notebooks will use connection information that is written here. If you want to redirect your notebook library to work with a different workspace, then you should re-run this notebook.
In this notebook you will
* Learn about getting an Azure subscription
* Specify your workspace parameters
* Access or create your workspace
* Add a default compute cluster for your workspace
### What is an Azure Machine Learning workspace
An Azure ML Workspace is an Azure resource that organizes and coordinates the actions of many other Azure resources to assist in executing and sharing machine learning workflows. In particular, an Azure ML Workspace coordinates storage, databases, and compute resources providing added functionality for machine learning experimentation, deployment, inference, and the monitoring of deployed models.
## Setup
This section describes activities required before you can access any Azure ML services functionality.
### 1. Azure Subscription
In order to create an Azure ML Workspace, first you need access to an Azure subscription. An Azure subscription allows you to manage storage, compute, and other assets in the Azure cloud. You can [create a new subscription](https://azure.microsoft.com/en-us/free/) or access existing subscription information from the [Azure portal](https://portal.azure.com). Later in this notebook you will need information such as your subscription ID in order to create and access AML workspaces.
### 2. Azure ML SDK and other library installation
If you are running in your own environment, follow [SDK installation instructions](https://docs.microsoft.com/azure/machine-learning/service/how-to-configure-environment). If you are running in Azure Notebooks or another Microsoft managed environment, the SDK is already installed.
Also install the following libraries in your environment; many of the example notebooks depend on them:
```
(myenv) $ conda install -y matplotlib tqdm scikit-learn
```
Once installation is complete, the following cell checks the Azure ML SDK version:
```
import azureml.core
print("This notebook was created using version 1.0.85 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
```
If you are using an older version of the SDK than the one this notebook was created with, you should upgrade your SDK.
### 3. Azure Container Instance registration
Azure Machine Learning makes use of [Azure Container Instance (ACI)](https://azure.microsoft.com/services/container-instances) to deploy dev/test web services. An Azure subscription needs to be registered to use ACI. If you or the subscription owner have not yet registered ACI on your subscription, you will need to use the [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest) and execute the following commands. Note that if you ran through the AML [quickstart](https://docs.microsoft.com/en-us/azure/machine-learning/service/quickstart-get-started) you have already registered ACI.
```shell
# check to see if ACI is already registered
(myenv) $ az provider show -n Microsoft.ContainerInstance -o table
# if ACI is not registered, run this command.
# note you need to be the subscription owner in order to execute this command successfully.
(myenv) $ az provider register -n Microsoft.ContainerInstance
```
---
## Configure your Azure ML workspace
### Workspace parameters
To use an AML Workspace, you will need to import the Azure ML SDK and supply the following information:
* Your subscription id
* A resource group name
* (optional) The region that will host your workspace
* A name for your workspace
You can get your subscription ID from the [Azure portal](https://portal.azure.com).
You will also need access to a [_resource group_](https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-overview#resource-groups), which organizes Azure resources and provides a default region for the resources in a group. You can see which resource groups you have access to, or create a new one, in the [Azure portal](https://portal.azure.com). If you don't have a resource group, the create workspace command will create one for you using the name you provide.
The region to host your workspace will be used if you are creating a new workspace. You do not need to specify this if you are using an existing workspace. You can find the list of supported regions [here](https://azure.microsoft.com/en-us/global-infrastructure/services/?products=machine-learning-service). You should pick a region that is close to your location or that contains your data.
The name for your workspace is unique within the subscription and should be descriptive enough to discern among other AML Workspaces. The subscription may be used only by you, or it may be used by your department or your entire enterprise, so choose a name that makes sense for your situation.
The following cell allows you to specify your workspace parameters. This cell uses the python method `os.getenv` to read values from environment variables which is useful for automation. If no environment variable exists, the parameters will be set to the specified default values.
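The fallback behavior of `os.getenv` can be seen with plain Python (the variable names here are hypothetical, for illustration only):

```python
import os

# getenv returns the environment variable's value when it is set...
os.environ["DEMO_RESOURCE_GROUP"] = "my-rg"
print(os.getenv("DEMO_RESOURCE_GROUP", default="fallback-rg"))  # my-rg

# ...and the supplied default when it is not set
print(os.getenv("DEMO_MISSING_VAR", default="fallback-rg"))     # fallback-rg
```

This is why the cell below works both interactively (defaults) and in automation (environment variables).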
If you ran the Azure Machine Learning [quickstart](https://docs.microsoft.com/en-us/azure/machine-learning/service/quickstart-get-started) in Azure Notebooks, you already have a configured workspace! You can go to your Azure Machine Learning Getting Started library, view *config.json* file, and copy-paste the values for subscription ID, resource group and workspace name below.
Replace the default values in the cell below with your workspace parameters
```
import os
subscription_id = os.getenv("SUBSCRIPTION_ID", default="<my-subscription-id>")
resource_group = os.getenv("RESOURCE_GROUP", default="<my-resource-group>")
workspace_name = os.getenv("WORKSPACE_NAME", default="<my-workspace-name>")
workspace_region = os.getenv("WORKSPACE_REGION", default="eastus2")
```
### Access your workspace
The following cell uses the Azure ML SDK to attempt to load the workspace specified by your parameters. If this cell succeeds, your notebook library will be configured to access the workspace from all notebooks using the `Workspace.from_config()` method. The cell can fail if the specified workspace doesn't exist or you don't have permissions to access it.
```
from azureml.core import Workspace
try:
ws = Workspace(subscription_id = subscription_id, resource_group = resource_group, workspace_name = workspace_name)
# write the details of the workspace to a configuration file to the notebook library
ws.write_config()
print("Workspace configuration succeeded. Skip the workspace creation steps below")
except:
print("Workspace not accessible. Change your parameters or create a new workspace below")
```
### Create a new workspace
If you don't have an existing workspace and are the owner of the subscription or resource group, you can create a new workspace. If you don't have a resource group, the create workspace command will create one for you using the name you provide.
**Note**: As with other Azure services, there are limits on certain resources (for example AmlCompute quota) associated with the Azure ML service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
This cell will create an Azure ML workspace for you in a subscription provided you have the correct permissions.
This will fail if:
* You do not have permission to create a workspace in the resource group
* You do not have permission to create a resource group if one does not already exist
* You are not a subscription owner or contributor and no Azure ML workspaces have ever been created in this subscription
If workspace creation fails, please work with your IT admin to provide you with the appropriate permissions or to provision the required resources.
**Note**: A Basic workspace is created by default. If you would like to create an Enterprise workspace, please specify sku = 'enterprise'.
Please visit our [pricing page](https://azure.microsoft.com/en-us/pricing/details/machine-learning/) for more details on our Enterprise edition.
```
from azureml.core import Workspace
# Create the workspace using the specified parameters
ws = Workspace.create(name = workspace_name,
subscription_id = subscription_id,
resource_group = resource_group,
location = workspace_region,
create_resource_group = True,
sku = 'basic',
exist_ok = True)
ws.get_details()
# write the details of the workspace to a configuration file to the notebook library
ws.write_config()
```
---
## Next steps
In this notebook you configured this notebook library to connect easily to an Azure ML workspace. You can copy this notebook to your own libraries to connect them to your workspace, or use it to bootstrap new workspaces completely.
If you came here from another notebook, you can return there and complete that exercise, or you can try out the [Tutorials](./tutorials) or jump into "how-to" notebooks and start creating and deploying models. A good place to start is the [train within notebook](./how-to-use-azureml/training/train-within-notebook) example that walks through a simplified but complete end to end machine learning process.
# Linear Algebra using SymPy
## Introduction
This notebook is a short tutorial of Linear Algebra calculation using SymPy. For further information refer to SymPy official [tutorial](http://docs.sympy.org/latest/tutorial/index.html).
You can also check the [SymPy in 10 minutes](./SymPy_in_10_minutes.ipynb) tutorial.
```
from sympy import *
init_session()
```
A matrix $A \in \mathbb{R}^{m\times n}$ is a rectangular array of real numbers with $m$ rows and $n$ columns. To specify a matrix $A$, we specify the values of its components as a list of lists:
```
A = Matrix([
[3, 2, -1, 1],
[2, -2, 4, -2],
[-1, S(1)/2, -1, 0]])
display(A)
```
We can access the matrix elements using square brackets; we can also use them to extract submatrices:
```
A[0, 1] # row 0, column 1
A[0:2, 0:3] # top-left 2x3 submatrix
```
We can also create some common matrices. Let us create an identity matrix
```
eye(2)
zeros(2, 3)
```
We can use algebraic operations like addition $+$, subtraction $-$, multiplication $*$, and exponentiation $**$ with ``Matrix`` objects.
```
B = Matrix([
[2, -3, -8],
[-2, -1, 2],
[1, 0, -3]])
C = Matrix([
[sin(x), exp(x**2), 1],
[0, cos(x), 1/x],
[1, 0, 2]])
B + C
B ** 2
C ** 2
tan(x) * B ** 5
```
And the ``transpose`` of the matrix, which flips the matrix across its main diagonal:
```
A.transpose() # the same as A.T
```
## Row operations
```
M = eye(4)
M[1, :] = M[1, :] + 5*M[0, :]
M
```
The notation ``M[1, :]`` refers to an entire row of the matrix. The first argument is the 0-based row index; for example, the first row of ``M`` is ``M[0, :]``. The code example above implements the row operation $R_2 \leftarrow R_2 + 5R_1$. To scale a row by a constant $c$, use ``M[1, :] = c*M[1, :]``. To swap rows $i$ and $j$, we can use the Python tuple-assignment syntax ``M[i, :], M[j, :] = M[j, :], M[i, :]``.
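Putting the three kinds of row operations together in one sketch (``Rational`` keeps the scaling exact rather than falling back to floats):

```python
from sympy import eye, Rational

M = eye(3)

# R2 <- R2 + 5*R1  (add a multiple of one row to another)
M[1, :] = M[1, :] + 5*M[0, :]

# R3 <- (1/2)*R3  (scale a row by a constant)
M[2, :] = Rational(1, 2) * M[2, :]

# swap R1 and R3 via tuple assignment
M[0, :], M[2, :] = M[2, :], M[0, :]

print(M)
```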
## Reduced row echelon form
The Gauss-Jordan elimination procedure is a sequence of row operations that can be performed on any matrix to bring it to its _reduced row echelon form_ (RREF). In SymPy, matrices have a ``rref`` method that computes it:
```
A.rref()
```
It returns a tuple: the first value is the RREF of the matrix $A$, and the second gives the locations of the leading ones (pivots). If we just want the RREF, we take the first entry of the tuple, i.e.
```
A.rref()[0]
```
## Matrix fundamental spaces
Consider the matrix $A \in \mathbb{R}^{m\times n}$. The fundamental spaces of a matrix are its column space $\mathcal{C}(A)$, its null space $\mathcal{N}(A)$, and its row space $\mathcal{R}(A)$. These vector spaces are important when we consider the matrix product $A\mathbf{x} = \mathbf{y}$ as a linear transformation $T_A:\mathbb{R}^n\rightarrow \mathbb{R}^m$ of the input vector $\mathbf{x}\in\mathbb{R}^n$ to produce an output vector $\mathbf{y} \in \mathbb{R}^m$.
**Linear transformations** $T_A: \mathbb{R}^n \rightarrow \mathbb{R}^m$ can be represented as $m\times n$ matrices. The fundamental spaces of a matrix $A$ give us information about the domain and image of the linear transformation $T_A$. The column space $\mathcal{C}(A)$ is the same as the image space $\mathrm{Im}(T_A)$ (the set of all possible outputs). The null space $\mathcal{N}(A)$ is also called the kernel $\mathrm{Ker}(T_A)$, and is the set of all input vectors that are mapped to the zero vector. The row space $\mathcal{R}(A)$ is the orthogonal complement of the null space. Input vectors in the row space of $A$ are in a one-to-one correspondence with the output vectors in the column space of $A$.
Let us see how to compute these spaces, or a base for them!
The non-zero rows in the reduced row echelon form of $A$ are a basis for its row space, i.e.
```
[A.rref()[0][row, :] for row in A.rref()[1]]
```
The column space of $A$ is the span of the columns of $A$ that contain the pivots.
```
[A[:, col] for col in A.rref()[1]]
```
We can also use the ``columnspace`` method
```
A.columnspace()
```
Note that we took columns from the original matrix and not from its RREF.
To find (a base for) the null space of $A$ we use the ``nullspace`` method:
```
A.nullspace()
```
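As a sanity check, the dimensions of these bases obey the rank-nullity theorem, $\dim \mathcal{C}(A) + \dim \mathcal{N}(A) = n$. A small sketch, re-creating the matrix $A$ from the beginning of the notebook:

```python
from sympy import Matrix, Rational

A = Matrix([
    [3, 2, -1, 1],
    [2, -2, 4, -2],
    [-1, Rational(1, 2), -1, 0]])

col_basis = A.columnspace()   # basis vectors for C(A)
null_basis = A.nullspace()    # basis vectors for N(A)

# rank-nullity: dim C(A) + dim N(A) equals the number of columns of A
assert len(col_basis) + len(null_basis) == A.cols
```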
## Determinants
The determinant of a matrix, denoted by $\det(A)$ or $|A|$, is a useful value that can be computed from the elements of a square matrix. It can be viewed as the scaling factor of the transformation described by the matrix.
```
M = Matrix([
[1, 2, 2],
[4, 5, 6],
[7, 8, 9]])
M.det()
```
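A nonzero determinant tells us the matrix is invertible; when the rows are linearly dependent the determinant is zero and no inverse exists. A quick sketch:

```python
from sympy import Matrix

M = Matrix([
    [1, 2, 2],
    [4, 5, 6],
    [7, 8, 9]])
assert M.det() == 3       # nonzero, so M is invertible

S = Matrix([
    [1, 2],
    [2, 4]])              # second row is twice the first
assert S.det() == 0       # singular: no inverse exists
```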
## Matrix inverse
For invertible matrices (those with $\det(A)\neq 0$), there is an inverse matrix $A^{-1}$ that has the _inverse_ effect (if we are thinking about linear transformations).
```
A = Matrix([
[1, -1, -1],
[0, 1, 0],
[1, -2, 1]])
A.inv()
A.inv() * A
A * A.inv()
```
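One common use of the inverse is solving a linear system $A\mathbf{x} = \mathbf{b}$, although in practice ``LUsolve`` is usually preferred because it avoids forming $A^{-1}$ explicitly. A sketch using the same matrix $A$ as above:

```python
from sympy import Matrix

A = Matrix([
    [1, -1, -1],
    [0, 1, 0],
    [1, -2, 1]])
b = Matrix([1, 2, 3])

x = A.inv() * b           # solve A x = b via the inverse
assert A * x == b

# LUsolve gives the same solution without computing A.inv()
assert A.LUsolve(b) == x
```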
## Eigenvectors and Eigenvalues
To find the eigenvalues of a matrix, use ``eigenvals``. ``eigenvals`` returns a dictionary of ``eigenvalue:algebraic multiplicity``.
```
M = Matrix([
[3, -2, 4, -2],
[5, 3, -3, -2],
[5, -2, 2, -2],
[5, -2, -3, 3]])
M
M.eigenvals()
```
This means that ``M`` has eigenvalues -2, 3, and 5, and that the eigenvalues -2 and 3 have algebraic multiplicity 1 and that the eigenvalue 5 has algebraic multiplicity 2.
To find the eigenvectors of a matrix, use ``eigenvects``. ``eigenvects`` returns a list of tuples of the form ``(eigenvalue, algebraic multiplicity, [eigenvectors])``.
```
M.eigenvects()
```
This shows us that, for example, the eigenvalue 5 also has geometric multiplicity 2, because it has two eigenvectors. Because the algebraic and geometric multiplicities are the same for all the eigenvalues, ``M`` is diagonalizable.
To diagonalize a matrix, use ``diagonalize``. It returns a tuple $(P, D)$, where $D$ is diagonal and $M = PDP^{-1}$.
```
P, D = M.diagonalize()
P
D
P * D * P.inv()
P * D * P.inv() == M
```
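A classic payoff of diagonalization is cheap matrix powers: since $M = PDP^{-1}$, we have $M^k = PD^kP^{-1}$, and powers of a diagonal matrix are taken entrywise. A sketch with the same matrix:

```python
from sympy import Matrix

M = Matrix([
    [3, -2, 4, -2],
    [5, 3, -3, -2],
    [5, -2, 2, -2],
    [5, -2, -3, 3]])
P, D = M.diagonalize()

# M**k computed through the diagonalization agrees with direct exponentiation
k = 3
assert P * D**k * P.inv() == M**k
```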
Note that since ``eigenvects`` also includes the ``eigenvalues``, you should use it instead of ``eigenvals`` if you also want the ``eigenvectors``. However, as computing the eigenvectors may often be costly, ``eigenvals`` should be preferred if you only wish to find the eigenvalues.
If all you want is the characteristic polynomial, use ``charpoly``. This is more efficient than ``eigenvals``, because sometimes symbolic roots can be expensive to calculate.
```
lamda = symbols('lamda')
p = M.charpoly(lamda)
factor(p)
```
**Note:** ``lambda`` is a reserved keyword in Python, so to create a Symbol called λ, while using the same names for SymPy Symbols and Python variables, use ``lamda`` (without the ``b``). It will still pretty print as λ.
Non-square matrices don't have eigenvectors and therefore don't have an eigendecomposition. Instead, we can use the singular value decomposition to break up a non-square matrix $A$ into left singular vectors, right singular vectors, and a diagonal matrix of singular values. Use the ``singular_values`` method on any matrix to find its singular values.
```
A
A.singular_values()
```
## References
1. SymPy Development Team (2016). [Sympy Tutorial: Matrices](http://docs.sympy.org/latest/tutorial/matrices.html)
2. Ivan Savov (2016). [Taming math and physics using SymPy](https://minireference.com/static/tutorials/sympy_tutorial.pdf)
The following cell change the style of the notebook.
```
from IPython.core.display import HTML
def css_styling():
styles = open('./styles/custom_barba.css', 'r').read()
return HTML(styles)
css_styling()
```
# TensorFlow Tutorial
Welcome to this week's programming assignment. Until now, you've always used numpy to build neural networks. Now we will step you through a deep learning framework that will allow you to build neural networks more easily. Machine learning frameworks like TensorFlow, PaddlePaddle, Torch, Caffe, Keras, and many others can speed up your machine learning development significantly. All of these frameworks also have a lot of documentation, which you should feel free to read. In this assignment, you will learn to do the following in TensorFlow:
- Initialize variables
- Start your own session
- Train algorithms
- Implement a Neural Network
Programming frameworks can not only shorten your coding time, but sometimes also perform optimizations that speed up your code.
## 1 - Exploring the Tensorflow Library
To start, you will import the library:
```
import math
import numpy as np
import h5py
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.python.framework import ops
from tf_utils import load_dataset, random_mini_batches, convert_to_one_hot, predict
%matplotlib inline
np.random.seed(1)
```
Now that you have imported the library, we will walk you through its different applications. You will start with an example, where we compute for you the loss of one training example.
$$loss = \mathcal{L}(\hat{y}, y) = (\hat y^{(i)} - y^{(i)})^2 \tag{1}$$
```
y_hat = tf.constant(36, name='y_hat') # Define y_hat constant. Set to 36.
y = tf.constant(39, name='y') # Define y. Set to 39
loss = tf.Variable((y - y_hat)**2, name='loss') # Create a variable for the loss
init = tf.global_variables_initializer() # When init is run later (session.run(init)),
# the loss variable will be initialized and ready to be computed
with tf.Session() as session: # Create a session and print the output
session.run(init) # Initializes the variables
print(session.run(loss)) # Prints the loss
```
Writing and running programs in TensorFlow has the following steps:
1. Create Tensors (variables) that are not yet executed/evaluated.
2. Write operations between those Tensors.
3. Initialize your Tensors.
4. Create a Session.
5. Run the Session. This will run the operations you've written above.
Therefore, when we created a variable for the loss, we simply defined the loss as a function of other quantities, but did not evaluate its value. To evaluate it, we had to run `init=tf.global_variables_initializer()`. That initialized the loss variable, and in the last line we were finally able to evaluate the value of `loss` and print its value.
Now let us look at an easy example. Run the cell below:
```
a = tf.constant(2)
b = tf.constant(10)
c = tf.multiply(a,b)
print(c)
```
As expected, you will not see 20! You got back a tensor object: it tells you that the result will be a tensor of type "int32", but it does not carry a value yet. All you did was build the 'computation graph'; you have not run this computation yet. In order to actually multiply the two numbers, you will have to create a session and run it.
```
sess = tf.Session()
print(sess.run(c))
```
Great! To summarize, **remember to initialize your variables, create a session and run the operations inside the session**.
Next, you'll also have to know about placeholders. A placeholder is an object whose value you can specify only later.
To supply a value for a placeholder, you pass it in using a "feed dictionary" (the `feed_dict` argument). Below, we create a placeholder for `x`. This allows us to pass in a number later, when we run the session.
```
# Change the value of x in the feed_dict
x = tf.placeholder(tf.int64, name = 'x')
print(sess.run(2 * x, feed_dict = {x: 3}))
sess.close()
```
When you first defined `x` you did not have to specify a value for it. A placeholder is simply a variable that you will assign data to only later, when running the session. We say that you **feed data** to these placeholders when running the session.
Here's what's happening: When you specify the operations needed for a computation, you are telling TensorFlow how to construct a computation graph. The computation graph can have some placeholders whose values you will specify only later. Finally, when you run the session, you are telling TensorFlow to execute the computation graph.
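For contrast, the same multiplication in plain NumPy is evaluated immediately; there is no separate graph-construction and execution phase:

```python
import numpy as np

a = np.int32(2)
b = np.int32(10)
c = a * b
print(c)  # 20: the value is computed right away, no session needed
```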
### 1.1 - Linear function
Let's start this programming exercise by computing the following equation: $Y = WX + b$, where $W$ and $X$ are random matrices and $b$ is a random vector.
**Exercise**: Compute $WX + b$ where $W, X$, and $b$ are drawn from a random normal distribution. W is of shape (4, 3), X is (3,1) and b is (4,1). As an example, here is how you would define a constant X that has shape (3,1):
```python
X = tf.constant(np.random.randn(3,1), name = "X")
```
You might find the following functions helpful:
- tf.matmul(..., ...) to do a matrix multiplication
- tf.add(..., ...) to do an addition
- np.random.randn(...) to initialize randomly
```
# GRADED FUNCTION: linear_function
def linear_function():
"""
Implements a linear function:
Initializes W to be a random tensor of shape (4,3)
Initializes X to be a random tensor of shape (3,1)
Initializes b to be a random tensor of shape (4,1)
Returns:
result -- runs the session for Y = WX + b
"""
np.random.seed(1)
### START CODE HERE ### (4 lines of code)
X = tf.constant(np.random.randn(3,1), name = "X")
W = tf.constant(np.random.randn(4,3), name = "W")
b = tf.constant(np.random.randn(4,1), name = "b")
Y = tf.add(tf.matmul(W, X), b)
### END CODE HERE ###
# Create the session using tf.Session() and run it with sess.run(...) on the variable you want to calculate
### START CODE HERE ###
sess = tf.Session()
result = sess.run(Y)
### END CODE HERE ###
# close the session
sess.close()
return result
print( "result = " + str(linear_function()))
```
*** Expected Output ***:
<table>
<tr>
<td>
**result**
</td>
<td>
[[-2.15657382]
[ 2.95891446]
[-1.08926781]
[-0.84538042]]
</td>
</tr>
</table>
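Because `tf.constant` simply wraps the NumPy arrays, making the same random draws in plain NumPy (in the same order) reproduces this result without any session:

```python
import numpy as np

np.random.seed(1)
X = np.random.randn(3, 1)  # same order of draws as inside linear_function()
W = np.random.randn(4, 3)
b = np.random.randn(4, 1)

print(W @ X + b)  # matches the expected output above
```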
### 1.2 - Computing the sigmoid
Great! You just implemented a linear function. TensorFlow offers a variety of commonly used neural network functions like `tf.sigmoid` and `tf.softmax`. For this exercise, let's compute the sigmoid function of an input.
You will do this exercise using a placeholder variable `x`. When running the session, you should use the feed dictionary to pass in the input `z`. In this exercise, you will have to (i) create a placeholder `x`, (ii) define the operations needed to compute the sigmoid using `tf.sigmoid`, and then (iii) run the session.
**Exercise**: Implement the sigmoid function below. You should use the following:
- `tf.placeholder(tf.float32, name = "...")`
- `tf.sigmoid(...)`
- `sess.run(..., feed_dict = {x: z})`
Note that there are two typical ways to create and use sessions in tensorflow:
**Method 1:**
```python
sess = tf.Session()
# Run the variables initialization (if needed), run the operations
result = sess.run(..., feed_dict = {...})
sess.close() # Close the session
```
**Method 2:**
```python
with tf.Session() as sess:
# run the variables initialization (if needed), run the operations
result = sess.run(..., feed_dict = {...})
# This takes care of closing the session for you :)
```
```
# GRADED FUNCTION: sigmoid
def sigmoid(z):
"""
Computes the sigmoid of z
Arguments:
z -- input value, scalar or vector
Returns:
results -- the sigmoid of z
"""
### START CODE HERE ### ( approx. 4 lines of code)
# Create a placeholder for x. Name it 'x'.
x = tf.placeholder(tf.float32, name = "x")
# compute sigmoid(x)
sigmoid = tf.sigmoid(x)
# Create a session, and run it. Please use the method 2 explained above.
# You should use a feed_dict to pass z's value to x.
with tf.Session() as sess:
# Run session and call the output "result"
result = sess.run(sigmoid, feed_dict={x: z})
### END CODE HERE ###
return result
print ("sigmoid(0) = " + str(sigmoid(0)))
print ("sigmoid(12) = " + str(sigmoid(12)))
```
*** Expected Output ***:
<table>
<tr>
<td>
**sigmoid(0)**
</td>
<td>
0.5
</td>
</tr>
<tr>
<td>
**sigmoid(12)**
</td>
<td>
0.999994
</td>
</tr>
</table>
<font color='blue'>
**To summarize, you now know how to**:
1. Create placeholders
2. Specify the computation graph corresponding to operations you want to compute
3. Create the session
4. Run the session, using a feed dictionary if necessary to specify placeholder variables' values.
### 1.3 - Computing the Cost
You can also use a built-in function to compute the cost of your neural network. So instead of needing to write code to compute this as a function of $a^{[2](i)}$ and $y^{(i)}$ for i=1...m:
$$ J = - \frac{1}{m} \sum_{i = 1}^m \large ( \small y^{(i)} \log a^{ [2] (i)} + (1-y^{(i)})\log (1-a^{ [2] (i)} )\large )\small\tag{2}$$
you can do it in one line of code in tensorflow!
**Exercise**: Implement the cross entropy loss. The function you will use is:
- `tf.nn.sigmoid_cross_entropy_with_logits(logits = ..., labels = ...)`
Your code should input `z`, compute the sigmoid (to get `a`) and then compute the cross entropy cost $J$. All this can be done using one call to `tf.nn.sigmoid_cross_entropy_with_logits`, which computes
$$- \frac{1}{m} \sum_{i = 1}^m \large ( \small y^{(i)} \log \sigma(z^{[2](i)}) + (1-y^{(i)})\log (1-\sigma(z^{[2](i)})\large )\small\tag{2}$$
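For a sanity check, here is a plain NumPy sketch of the per-example loss inside that sum (before averaging); it can be used to verify the TensorFlow result in the cell below:

```python
import numpy as np

def sigmoid_np(z):
    return 1 / (1 + np.exp(-z))

def sigmoid_cross_entropy(z, y):
    # element-wise loss: -(y*log(sigmoid(z)) + (1-y)*log(1 - sigmoid(z)))
    a = sigmoid_np(z)
    return -(y * np.log(a) + (1 - y) * np.log(1 - a))

# matches the logits the notebook feeds to cost(): sigmoid applied to raw values
z = sigmoid_np(np.array([0.2, 0.4, 0.7, 0.9]))
y = np.array([0., 0., 1., 1.])
print(sigmoid_cross_entropy(z, y))
```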
```
# GRADED FUNCTION: cost
def cost(logits, labels):
"""
    Computes the cost using the sigmoid cross entropy

    Arguments:
    logits -- vector containing z, output of the last linear unit (before the final sigmoid activation)
    labels -- vector of labels y (1 or 0)

    Note: What we've been calling "z" and "y" in this class are respectively called "logits" and "labels"
    in the TensorFlow documentation. So logits will feed into z, and labels into y.

    Returns:
    cost -- runs the session of the cost (formula (2))
    """
### START CODE HERE ###
# Create the placeholders for "logits" (z) and "labels" (y) (approx. 2 lines)
z = tf.placeholder(tf.float32, name = "z")
y = tf.placeholder(tf.float32, name = "y")
# z = tf.constant(logits, name = "z", dtype=np.float32, shape=logits.shape)
# y = tf.constant(labels, name = "y", dtype=np.float32, shape=labels.shape)
# Use the loss function (approx. 1 line)
cost = tf.nn.sigmoid_cross_entropy_with_logits(logits=z, labels=y)
# Create a session (approx. 1 line). See method 1 above.
sess = tf.Session()
# Run the session (approx. 1 line).
cost = sess.run(cost, feed_dict={z:logits, y:labels})
# cost = sess.run(cost)
# Close the session (approx. 1 line). See method 1 above.
sess.close()
### END CODE HERE ###
return cost
logits = sigmoid(np.array([0.2,0.4,0.7,0.9]))
cost = cost(logits, np.array([0,0,1,1]))
print ("cost = " + str(cost))
```
** Expected Output** :
<table>
<tr>
<td>
**cost**
</td>
<td>
[ 1.00538719 1.03664088 0.41385433 0.39956614]
</td>
</tr>
</table>
### 1.4 - Using One Hot encodings
Many times in deep learning you will have a y vector with numbers ranging from 0 to C-1, where C is the number of classes. If C is for example 4, then you might have the following y vector which you will need to convert as follows:
<img src="images/onehot.png" style="width:600px;height:150px;">
This is called a "one hot" encoding, because in the converted representation exactly one element of each column is "hot" (meaning set to 1). To do this conversion in numpy, you might have to write a few lines of code. In tensorflow, you can use one line of code:
- tf.one_hot(labels, depth, axis)
**Exercise:** Implement the function below to take one vector of labels and the total number of classes $C$, and return the one hot encoding. Use `tf.one_hot()` to do this.
```
# GRADED FUNCTION: one_hot_matrix
def one_hot_matrix(labels, C):
"""
    Creates a matrix where the i-th row corresponds to the i-th class number and the j-th column
    corresponds to the j-th training example. So if example j has label i, then entry (i,j)
    will be 1.
Arguments:
labels -- vector containing the labels
C -- number of classes, the depth of the one hot dimension
Returns:
one_hot -- one hot matrix
"""
### START CODE HERE ###
# Create a tf.constant equal to C (depth), name it 'C'. (approx. 1 line)
C = tf.constant(C, name = "C")
# Use tf.one_hot, be careful with the axis (approx. 1 line)
one_hot_matrix = tf.one_hot(labels, depth=C, axis=0)
# Create the session (approx. 1 line)
sess = tf.Session()
# Run the session (approx. 1 line)
one_hot = sess.run(one_hot_matrix)
# Close the session (approx. 1 line). See method 1 above.
sess.close()
### END CODE HERE ###
return one_hot
labels = np.array([1,2,3,0,2,1])
one_hot = one_hot_matrix(labels, C = 4)
print ("one_hot = " + str(one_hot))
```
**Expected Output**:
<table>
<tr>
<td>
**one_hot**
</td>
<td>
[[ 0. 0. 0. 1. 0. 0.]
[ 1. 0. 0. 0. 0. 1.]
[ 0. 1. 0. 0. 1. 0.]
[ 0. 0. 1. 0. 0. 0.]]
</td>
</tr>
</table>
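The same conversion can be sketched in plain NumPy by indexing into an identity matrix and transposing, so that classes run along axis 0 as in the cell above:

```python
import numpy as np

def one_hot_np(labels, C):
    # eye(C)[labels] has one one-hot row per example; transposing puts classes on axis 0
    return np.eye(C)[labels].T

labels = np.array([1, 2, 3, 0, 2, 1])
print(one_hot_np(labels, 4))  # matches the expected output above
```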
### 1.5 - Initialize with zeros and ones
Now you will learn how to initialize a vector of zeros and ones. The function you will be calling is `tf.ones()`. To initialize with zeros you could use `tf.zeros()` instead. These functions take in a shape and return an array of that shape filled with ones or zeros respectively.
**Exercise:** Implement the function below to take in a shape and return an array of ones with that shape.
- tf.ones(shape)
```
# GRADED FUNCTION: ones
def ones(shape):
"""
Creates an array of ones of dimension shape
Arguments:
shape -- shape of the array you want to create
Returns:
ones -- array containing only ones
"""
### START CODE HERE ###
# Create "ones" tensor using tf.ones(...). (approx. 1 line)
ones = tf.ones(shape=shape)
# Create the session (approx. 1 line)
sess = tf.Session()
# Run the session to compute 'ones' (approx. 1 line)
ones = sess.run(ones)
# Close the session (approx. 1 line). See method 1 above.
sess.close()
### END CODE HERE ###
return ones
print ("ones = " + str(ones([3])))
```
**Expected Output:**
<table>
<tr>
<td>
**ones**
</td>
<td>
[ 1. 1. 1.]
</td>
</tr>
</table>
# 2 - Building your first neural network in tensorflow
In this part of the assignment you will build a neural network using tensorflow. Remember that there are two parts to implementing a tensorflow model:
- Create the computation graph
- Run the graph
Let's delve into the problem you'd like to solve!
### 2.0 - Problem statement: SIGNS Dataset
One afternoon, with some friends we decided to teach our computers to decipher sign language. We spent a few hours taking pictures in front of a white wall and came up with the following dataset. It's now your job to build an algorithm that would facilitate communications from a speech-impaired person to someone who doesn't understand sign language.
- **Training set**: 1080 pictures (64 by 64 pixels) of signs representing numbers from 0 to 5 (180 pictures per number).
- **Test set**: 120 pictures (64 by 64 pixels) of signs representing numbers from 0 to 5 (20 pictures per number).
Note that this is a subset of the SIGNS dataset. The complete dataset contains many more signs.
Here are examples for each number, and an explanation of how we represent the labels. These are the original pictures, before we lowered the image resolution to 64 by 64 pixels.
<img src="images/hands.png" style="width:800px;height:350px;"><caption><center> <u><font color='purple'> **Figure 1**</u><font color='purple'>: SIGNS dataset <br> <font color='black'> </center>
Run the following code to load the dataset.
```
# Loading the dataset
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
```
Change the index below and run the cell to visualize some examples in the dataset.
```
# Example of a picture
index = 1
plt.imshow(X_train_orig[index])
print ("y = " + str(np.squeeze(Y_train_orig[:, index])))
```
As usual you flatten the image dataset, then normalize it by dividing by 255. On top of that, you will convert each label to a one-hot vector as shown in Figure 1. Run the cell below to do so.
```
# Flatten the training and test images
X_train_flatten = X_train_orig.reshape(X_train_orig.shape[0], -1).T
X_test_flatten = X_test_orig.reshape(X_test_orig.shape[0], -1).T
# Normalize image vectors
X_train = X_train_flatten/255.
X_test = X_test_flatten/255.
# Convert training and test labels to one hot matrices
Y_train = convert_to_one_hot(Y_train_orig, 6)
Y_test = convert_to_one_hot(Y_test_orig, 6)
print ("number of training examples = " + str(X_train.shape[1]))
print ("number of test examples = " + str(X_test.shape[1]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
```
**Note** that 12288 comes from $64 \times 64 \times 3$. Each image is square, 64 by 64 pixels, and 3 is for the RGB colors. Please make sure all these shapes make sense to you before continuing.
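The `convert_to_one_hot` helper used above comes from the course's utility file and isn't shown here; a minimal numpy sketch of what it does, assuming integer labels in a 1-D (or flattenable) array:

```python
import numpy as np

def convert_to_one_hot(Y, C):
    # Pick row Y[i] of the C x C identity matrix for example i, then
    # transpose so the result has shape (C, number of examples)
    return np.eye(C)[Y.reshape(-1)].T

one_hot = convert_to_one_hot(np.array([1, 2, 0]), 3)
```

Column `i` of the result is the one-hot encoding of label `Y[i]`, matching the `(6, m)` shape of `Y_train` above.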
**Your goal** is to build an algorithm capable of recognizing a sign with high accuracy. To do so, you are going to build a tensorflow model that is almost the same as one you have previously built in numpy for cat recognition (but now using a softmax output). It is a great occasion to compare your numpy implementation to the tensorflow one.
**The model** is *LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX*. The SIGMOID output layer has been converted to a SOFTMAX. A SOFTMAX layer generalizes SIGMOID to when there are more than two classes.
### 2.1 - Create placeholders
Your first task is to create placeholders for `X` and `Y`. This will allow you to later pass your training data in when you run your session.
**Exercise:** Implement the function below to create the placeholders in tensorflow.
```
# GRADED FUNCTION: create_placeholders
def create_placeholders(n_x, n_y):
"""
Creates the placeholders for the tensorflow session.
Arguments:
n_x -- scalar, size of an image vector (num_px * num_px = 64 * 64 * 3 = 12288)
n_y -- scalar, number of classes (from 0 to 5, so -> 6)
Returns:
X -- placeholder for the data input, of shape [n_x, None] and dtype "float"
Y -- placeholder for the input labels, of shape [n_y, None] and dtype "float"
Tips:
    - You will use None because it lets us be flexible about the number of examples used for the placeholders.
In fact, the number of examples during test/train is different.
"""
### START CODE HERE ### (approx. 2 lines)
X = tf.placeholder(shape=(n_x, None), dtype=tf.float32, name="X")
Y = tf.placeholder(shape=(n_y, None), dtype=tf.float32, name="Y")
### END CODE HERE ###
return X, Y
X, Y = create_placeholders(12288, 6)
print ("X = " + str(X))
print ("Y = " + str(Y))
```
**Expected Output**:
<table>
<tr>
<td>
**X**
</td>
<td>
Tensor("Placeholder_1:0", shape=(12288, ?), dtype=float32) (not necessarily Placeholder_1)
</td>
</tr>
<tr>
<td>
**Y**
</td>
<td>
Tensor("Placeholder_2:0", shape=(6, ?), dtype=float32) (not necessarily Placeholder_2)
</td>
</tr>
</table>
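The `None` in the shape is what lets the same graph run on batches of any size: a (25, 12288) weight matrix multiplies a (12288, m) input for any m. A quick numpy analogy:

```python
import numpy as np

W = np.random.randn(25, 12288)
for m in (1, 32, 1080):               # the batch dimension is free to vary
    X = np.random.randn(12288, m)
    assert (W @ X).shape == (25, m)   # same weights, any number of examples
```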
### 2.2 - Initializing the parameters
Your second task is to initialize the parameters in tensorflow.
**Exercise:** Implement the function below to initialize the parameters in tensorflow. You are going to use Xavier Initialization for weights and Zero Initialization for biases. The shapes are given below. As an example, to help you, for W1 and b1 you could use:
```python
W1 = tf.get_variable("W1", [25,12288], initializer = tf.contrib.layers.xavier_initializer(seed = 1))
b1 = tf.get_variable("b1", [25,1], initializer = tf.zeros_initializer())
```
Please use `seed = 1` to make sure your results match ours.
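For intuition, Xavier (Glorot) uniform initialization draws weights from [-limit, limit] with limit = sqrt(6 / (fan_in + fan_out)). A hedged numpy sketch of the idea (the exact tensorflow initializer may differ in implementation details):

```python
import numpy as np

def xavier_uniform(shape, seed=1):
    # shape = (fan_out, fan_in), matching the (25, 12288) convention above
    fan_out, fan_in = shape
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    rng = np.random.RandomState(seed)
    return rng.uniform(-limit, limit, size=shape)

W1 = xavier_uniform((25, 12288))
```

Scaling by the fan-in and fan-out keeps activation variances roughly constant across layers at the start of training.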
```
# GRADED FUNCTION: initialize_parameters
def initialize_parameters():
"""
Initializes parameters to build a neural network with tensorflow. The shapes are:
W1 : [25, 12288]
b1 : [25, 1]
W2 : [12, 25]
b2 : [12, 1]
W3 : [6, 12]
b3 : [6, 1]
Returns:
parameters -- a dictionary of tensors containing W1, b1, W2, b2, W3, b3
"""
tf.set_random_seed(1) # so that your "random" numbers match ours
### START CODE HERE ### (approx. 6 lines of code)
W1 = tf.get_variable(name="W1", shape=(25, 12288), initializer=tf.contrib.layers.xavier_initializer(seed=1))
b1 = tf.get_variable(name="b1", shape=(25, 1), initializer=tf.zeros_initializer())
W2 = tf.get_variable(name="W2", shape=(12, 25), initializer=tf.contrib.layers.xavier_initializer(seed=1))
b2 = tf.get_variable(name="b2", shape=(12, 1), initializer=tf.zeros_initializer())
W3 = tf.get_variable(name="W3", shape=(6, 12), initializer=tf.contrib.layers.xavier_initializer(seed=1))
b3 = tf.get_variable(name="b3", shape=(6, 1), initializer=tf.zeros_initializer())
### END CODE HERE ###
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2,
"W3": W3,
"b3": b3}
return parameters
tf.reset_default_graph()
with tf.Session() as sess:
parameters = initialize_parameters()
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
```
**Expected Output**:
<table>
<tr>
<td>
**W1**
</td>
<td>
< tf.Variable 'W1:0' shape=(25, 12288) dtype=float32_ref >
</td>
</tr>
<tr>
<td>
**b1**
</td>
<td>
< tf.Variable 'b1:0' shape=(25, 1) dtype=float32_ref >
</td>
</tr>
<tr>
<td>
**W2**
</td>
<td>
< tf.Variable 'W2:0' shape=(12, 25) dtype=float32_ref >
</td>
</tr>
<tr>
<td>
**b2**
</td>
<td>
< tf.Variable 'b2:0' shape=(12, 1) dtype=float32_ref >
</td>
</tr>
</table>
As expected, the parameters haven't been evaluated yet.
### 2.3 - Forward propagation in tensorflow
You will now implement the forward propagation module in tensorflow. The function will take in a dictionary of parameters and complete the forward pass. The functions you will be using are:
- `tf.add(...,...)` to do an addition
- `tf.matmul(...,...)` to do a matrix multiplication
- `tf.nn.relu(...)` to apply the ReLU activation
**Question:** Implement the forward pass of the neural network. We commented for you the numpy equivalents so that you can compare the tensorflow implementation to numpy. It is important to note that the forward propagation stops at `z3`. The reason is that in tensorflow the last linear layer output is given as input to the function computing the loss. Therefore, you don't need `a3`!
```
# GRADED FUNCTION: forward_propagation
def forward_propagation(X, parameters):
"""
Implements the forward propagation for the model: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX
Arguments:
X -- input dataset placeholder, of shape (input size, number of examples)
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3"
the shapes are given in initialize_parameters
Returns:
Z3 -- the output of the last LINEAR unit
"""
# Retrieve the parameters from the dictionary "parameters"
W1 = parameters['W1']
b1 = parameters['b1']
W2 = parameters['W2']
b2 = parameters['b2']
W3 = parameters['W3']
b3 = parameters['b3']
### START CODE HERE ### (approx. 5 lines) # Numpy Equivalents:
Z1 = tf.add(tf.matmul(W1, X), b1) # Z1 = np.dot(W1, X) + b1
A1 = tf.nn.relu(Z1) # A1 = relu(Z1)
Z2 = tf.add(tf.matmul(W2, A1), b2) # Z2 = np.dot(W2, a1) + b2
A2 = tf.nn.relu(Z2) # A2 = relu(Z2)
    Z3 = tf.add(tf.matmul(W3, A2), b3)                                              # Z3 = np.dot(W3, A2) + b3
### END CODE HERE ###
return Z3
tf.reset_default_graph()
with tf.Session() as sess:
X, Y = create_placeholders(12288, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
print("Z3 = " + str(Z3))
```
**Expected Output**:
<table>
<tr>
<td>
**Z3**
</td>
<td>
Tensor("Add_2:0", shape=(6, ?), dtype=float32)
</td>
</tr>
</table>
You may have noticed that the forward propagation doesn't output any cache. You will understand why below, when we get to backpropagation.
### 2.4 Compute cost
As seen before, it is very easy to compute the cost using:
```python
tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = ..., labels = ...))
```
**Question**: Implement the cost function below.
- It is important to know that the "`logits`" and "`labels`" inputs of `tf.nn.softmax_cross_entropy_with_logits` are expected to be of shape (number of examples, num_classes). We have thus transposed Z3 and Y for you.
- Besides, `tf.reduce_mean` takes the mean over the examples.
```
# GRADED FUNCTION: compute_cost
def compute_cost(Z3, Y):
"""
Computes the cost
Arguments:
Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (6, number of examples)
Y -- "true" labels vector placeholder, same shape as Z3
Returns:
cost - Tensor of the cost function
"""
# to fit the tensorflow requirement for tf.nn.softmax_cross_entropy_with_logits(...,...)
logits = tf.transpose(Z3)
labels = tf.transpose(Y)
### START CODE HERE ### (1 line of code)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels))
### END CODE HERE ###
return cost
tf.reset_default_graph()
with tf.Session() as sess:
X, Y = create_placeholders(12288, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
cost = compute_cost(Z3, Y)
print("cost = " + str(cost))
```
**Expected Output**:
<table>
<tr>
<td>
**cost**
</td>
<td>
Tensor("Mean:0", shape=(), dtype=float32)
</td>
</tr>
</table>
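For intuition, the same cost can be reproduced by hand in numpy: apply softmax column-wise to Z3, then average the cross-entropy over the examples. A sketch under that convention (classes along axis 0, examples along axis 1), not the graded tensorflow code:

```python
import numpy as np

def compute_cost_np(Z3, Y):
    # Column-wise softmax (subtract the column max for numerical stability)
    e = np.exp(Z3 - Z3.max(axis=0, keepdims=True))
    A3 = e / e.sum(axis=0, keepdims=True)
    # Mean cross-entropy over the examples
    return -np.mean(np.sum(Y * np.log(A3), axis=0))
```

With uniform logits over C classes, every probability is 1/C, so the cost is log(C) regardless of the labels.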
### 2.5 - Backward propagation & parameter updates
This is where you become grateful to programming frameworks. All of the backpropagation and the parameter updates are taken care of in one line of code, and it is very easy to incorporate this line in the model.
After you compute the cost function, you will create an "`optimizer`" object. You have to call this object along with the cost when running the `tf.Session`. When called, it will perform an optimization on the given cost with the chosen method and learning rate.
For instance, for gradient descent the optimizer would be:
```python
optimizer = tf.train.GradientDescentOptimizer(learning_rate = learning_rate).minimize(cost)
```
To make the optimization you would do:
```python
_ , c = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y})
```
This computes the backpropagation by passing through the tensorflow graph in the reverse order, from cost to inputs.
**Note** When coding, we often use `_` as a "throwaway" variable to store values that we won't need to use later. Here, `_` takes on the evaluated value of `optimizer`, which we don't need (and `c` takes the value of the `cost` variable).
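The model in the next section also relies on a `random_mini_batches` helper from the course utilities; a hedged sketch of its behavior (shuffle the examples, then slice them into batches of the requested size, with a possibly smaller final batch):

```python
import numpy as np

def random_mini_batches(X, Y, mini_batch_size=32, seed=0):
    # X: (n_x, m), Y: (n_y, m) -- examples are columns
    np.random.seed(seed)
    m = X.shape[1]
    perm = np.random.permutation(m)
    X_s, Y_s = X[:, perm], Y[:, perm]      # shuffle X and Y the same way
    return [(X_s[:, k:k + mini_batch_size], Y_s[:, k:k + mini_batch_size])
            for k in range(0, m, mini_batch_size)]
```

Reseeding each epoch (as the model does with `seed = seed + 1`) gives a different shuffle every pass while keeping runs reproducible.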
### 2.6 - Building the model
Now, you will bring it all together!
**Exercise:** Implement the model. You will be calling the functions you had previously implemented.
```
def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.0001,
num_epochs = 1500, minibatch_size = 32, print_cost = True):
"""
Implements a three-layer tensorflow neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SOFTMAX.
Arguments:
    X_train -- training set, of shape (input size = 12288, number of training examples = 1080)
    Y_train -- training labels, of shape (output size = 6, number of training examples = 1080)
    X_test -- test set, of shape (input size = 12288, number of test examples = 120)
    Y_test -- test labels, of shape (output size = 6, number of test examples = 120)
learning_rate -- learning rate of the optimization
num_epochs -- number of epochs of the optimization loop
minibatch_size -- size of a minibatch
print_cost -- True to print the cost every 100 epochs
Returns:
parameters -- parameters learnt by the model. They can then be used to predict.
"""
ops.reset_default_graph() # to be able to rerun the model without overwriting tf variables
tf.set_random_seed(1) # to keep consistent results
seed = 3 # to keep consistent results
(n_x, m) = X_train.shape # (n_x: input size, m : number of examples in the train set)
n_y = Y_train.shape[0] # n_y : output size
costs = [] # To keep track of the cost
# Create Placeholders of shape (n_x, n_y)
### START CODE HERE ### (1 line)
X, Y = create_placeholders(n_x, n_y)
### END CODE HERE ###
# Initialize parameters
### START CODE HERE ### (1 line)
parameters = initialize_parameters()
### END CODE HERE ###
# Forward propagation: Build the forward propagation in the tensorflow graph
### START CODE HERE ### (1 line)
Z3 = forward_propagation(X, parameters)
### END CODE HERE ###
# Cost function: Add cost function to tensorflow graph
### START CODE HERE ### (1 line)
cost = compute_cost(Z3, Y)
### END CODE HERE ###
# Backpropagation: Define the tensorflow optimizer. Use an AdamOptimizer.
### START CODE HERE ### (1 line)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
### END CODE HERE ###
# Initialize all the variables
init = tf.global_variables_initializer()
# Start the session to compute the tensorflow graph
with tf.Session() as sess:
# Run the initialization
sess.run(init)
# Do the training loop
for epoch in range(num_epochs):
epoch_cost = 0. # Defines a cost related to an epoch
num_minibatches = int(m / minibatch_size) # number of minibatches of size minibatch_size in the train set
seed = seed + 1
minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed)
for minibatch in minibatches:
# Select a minibatch
(minibatch_X, minibatch_Y) = minibatch
# IMPORTANT: The line that runs the graph on a minibatch.
                # Run the session to execute the "optimizer" and the "cost"; the feed_dict should contain a minibatch for (X, Y).
### START CODE HERE ### (1 line)
_ , minibatch_cost = sess.run([optimizer, cost], feed_dict={X:minibatch_X, Y:minibatch_Y})
### END CODE HERE ###
epoch_cost += minibatch_cost / num_minibatches
# Print the cost every epoch
if print_cost == True and epoch % 100 == 0:
print ("Cost after epoch %i: %f" % (epoch, epoch_cost))
if print_cost == True and epoch % 5 == 0:
costs.append(epoch_cost)
# plot the cost
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
plt.xlabel('iterations (per tens)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
# lets save the parameters in a variable
parameters = sess.run(parameters)
print ("Parameters have been trained!")
# Calculate the correct predictions
correct_prediction = tf.equal(tf.argmax(Z3), tf.argmax(Y))
# Calculate accuracy on the test set
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print ("Train Accuracy:", accuracy.eval({X: X_train, Y: Y_train}))
print ("Test Accuracy:", accuracy.eval({X: X_test, Y: Y_test}))
return parameters
```
Run the following cell to train your model! On our machine it takes about 5 minutes. Your "Cost after epoch 100" should be 1.016458. If it's not, don't waste time; interrupt the training by clicking on the square (โฌ) in the upper bar of the notebook, and try to correct your code. If it is the correct cost, take a break and come back in 5 minutes!
```
parameters = model(X_train, Y_train, X_test, Y_test)
```
**Expected Output**:
<table>
<tr>
<td>
**Train Accuracy**
</td>
<td>
0.999074
</td>
</tr>
<tr>
<td>
**Test Accuracy**
</td>
<td>
0.716667
</td>
</tr>
</table>
Amazing, your algorithm can recognize a sign representing a figure between 0 and 5 with 71.7% accuracy.
**Insights**:
- Your model seems big enough to fit the training set well. However, given the difference between train and test accuracy, you could try to add L2 or dropout regularization to reduce overfitting.
- Think about the session as a block of code to train the model. Each time you run the session on a minibatch, it trains the parameters. In total you have run the session a large number of times (1500 epochs) until you obtained well trained parameters.
### 2.7 - Test with your own image (optional / ungraded exercise)
Congratulations on finishing this assignment. You can now take a picture of your hand and see the output of your model. To do that:
1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub.
2. Add your image to this Jupyter Notebook's directory, in the "images" folder
3. Write your image's name in the following code
4. Run the code and check if the algorithm is right!
```
import scipy
from PIL import Image
from scipy import ndimage
## START CODE HERE ## (PUT YOUR IMAGE NAME)
my_image = "thumbs_up.jpg"
## END CODE HERE ##
# We preprocess your image to fit your algorithm.
fname = "images/" + my_image
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(64,64)).reshape((1, 64*64*3)).T
my_image_prediction = predict(my_image, parameters)
plt.imshow(image)
print("Your algorithm predicts: y = " + str(np.squeeze(my_image_prediction)))
```
You indeed deserved a "thumbs-up" although as you can see the algorithm seems to classify it incorrectly. The reason is that the training set doesn't contain any "thumbs-up", so the model doesn't know how to deal with it! We call that a "mismatched data distribution" and it is one of the various topics covered in the next course on "Structuring Machine Learning Projects".
<font color='blue'>
**What you should remember**:
- Tensorflow is a programming framework used in deep learning
- The two main object classes in tensorflow are Tensors and Operators.
- When you code in tensorflow you have to take the following steps:
- Create a graph containing Tensors (Variables, Placeholders ...) and Operations (tf.matmul, tf.add, ...)
- Create a session
- Initialize the session
- Run the session to execute the graph
- You can execute the graph multiple times as you've seen in model()
- The backpropagation and optimization is automatically done when running the session on the "optimizer" object.
<a href="https://colab.research.google.com/github/yjkim721/STRIP-ViTA/blob/main/SC_2DCNN.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## SC Preprocessing
We first convert the audio signal into a 2D spectrogram and then employ a 2D CNN for the speech command recognition task.
```
from google.colab import drive
drive.mount('/content/drive')
clean_dataset_path = '/content/drive/MyDrive/Colab Notebooks/STRIP-ViTA/data/SC_melspectrogram_clear.hdf5'
trojan_dataset_path = '/content/drive/MyDrive/Colab Notebooks/STRIP-ViTA/data/SC_melspectrogram_trojan.hdf5'
clean_sound_dataset_path = '/content/drive/MyDrive/Colab Notebooks/STRIP-ViTA/data/SC_sound_clear.hdf5'
trojan_sound_dataset_path = '/content/drive/MyDrive/Colab Notebooks/STRIP-ViTA/data/SC_sound_trojan.hdf5'
output_path = '/content/drive/MyDrive/Colab Notebooks/STRIP-ViTA/output/SC_2DCNN.pdf'
```
## Load Data: x_train, y_train, x_test, y_test
- for h5py v1.0, use *df\['x_train'].value*
- for h5py v2.0, use *df.get('x_train')\[...]*
```
import h5py
clean_df = h5py.File(clean_dataset_path, 'r')
trojan_df = h5py.File(trojan_dataset_path, 'r')
x_train = clean_df.get('x_train')[...]
y_train = clean_df.get('y_train')[...]
x_test = clean_df.get('x_test')[...]
y_test = clean_df.get('y_test')[...]
trojan_x_train = trojan_df.get('x_train')[...]
trojan_x_test = trojan_df.get('x_test')[...]
clean_df = h5py.File(clean_sound_dataset_path, 'r')
trojan_df = h5py.File(trojan_sound_dataset_path, 'r')
x_train_sound = clean_df.get('x_train')[...]
y_train_sound = clean_df.get('y_train')[...]
x_test_sound = clean_df.get('x_test')[...]
y_test_sound = clean_df.get('y_test')[...]
trojan_x_train_sound = trojan_df.get('x_train')[...]
trojan_x_test_sound = trojan_df.get('x_test')[...]
```
## Poison inputs
We randomly generate a noise sound and treat it as trigger.
We poisoned 1000 (4.8%) out of 20,827 training samples.
```
# poison if needed
# If you want to make a clear model, do not run this code
for i in range(1000):
x_train[i] = trojan_x_train[i]
y_train[i] = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
trojan_y_test = clean_df.get('y_test')[...]
for i in range(trojan_y_test.shape[0]):
trojan_y_test[i] = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
x_train = x_train.reshape(-1, 128, 33, 1)
x_test = x_test.reshape(-1, 128, 33, 1)
trojan_x_test = trojan_x_test.reshape(-1, 128, 33, 1)
```
# Define model
```
import keras
from keras.models import Sequential
from keras.preprocessing.image import ImageDataGenerator
from keras.layers import Dense, Activation, Flatten, Dropout, BatchNormalization
from keras.layers import Conv2D, MaxPooling2D
from keras.datasets import cifar10
from keras import regularizers
from keras.callbacks import LearningRateScheduler
def lr_schedule(epoch):
lrate = 0.001
    if epoch > 100:
        lrate = 0.0003
    elif epoch > 75:
        lrate = 0.0005
return lrate
weight_decay = 1e-4
model = Sequential()
model.add(Conv2D(32, (3,3), padding='same', kernel_regularizer=regularizers.l2(weight_decay), input_shape=x_train.shape[1:]))
model.add(Activation('elu'))
model.add(BatchNormalization())
model.add(Conv2D(32, (3,3), padding='same', kernel_regularizer=regularizers.l2(weight_decay)))
model.add(Activation('elu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.2))
model.add(Conv2D(64, (3,3), padding='same', kernel_regularizer=regularizers.l2(weight_decay)))
model.add(Activation('elu'))
model.add(BatchNormalization())
model.add(Conv2D(64, (3,3), padding='same', kernel_regularizer=regularizers.l2(weight_decay)))
model.add(Activation('elu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.3))
model.add(Conv2D(128, (3,3), padding='same', kernel_regularizer=regularizers.l2(weight_decay)))
model.add(Activation('elu'))
model.add(BatchNormalization())
model.add(Conv2D(128, (3,3), padding='same', kernel_regularizer=regularizers.l2(weight_decay)))
model.add(Activation('elu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.4))
model.add(Flatten())
model.add(Dense(10, activation='softmax'))
model.summary()
```
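A quick sanity check on the shapes: with `'same'` padding only the three 2x2 max-pool layers shrink the (128, 33) spectrogram, so the flattened feature vector feeding the softmax layer has 128 * 16 * 4 = 8192 entries:

```python
# Output-size arithmetic for the conv stack above ('same' padding assumed)
h, w = 128, 33
for _ in range(3):          # three MaxPooling2D((2, 2)) layers, floor division
    h, w = h // 2, w // 2
flattened = 128 * h * w     # 128 channels after the last conv block
```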
## Train
```
# define callback list
from keras.callbacks import *
callback_list = [
EarlyStopping(
patience=3,
monitor='val_acc',
),
ReduceLROnPlateau(
patience=1,
factor=0.5,
)
]
opt_rms = keras.optimizers.RMSprop(lr=0.001,decay=1e-6)
model.compile(loss='categorical_crossentropy', optimizer=opt_rms, metrics=['accuracy'])
history = model.fit(x_train, y_train, callbacks=callback_list, batch_size=16, epochs=100,
validation_data=(x_test, y_test), verbose=1)
```
# Performance: accuracy
```
scores = model.evaluate(x_test, y_test, batch_size=128, verbose=1)
print('\nTest accuracy: %.3f%% loss: %.3f' % (scores[1]*100,scores[0]))
```
# Performance: Test attack success *rate*
```
scores = model.evaluate(trojan_x_test, trojan_y_test, verbose=1)
print('Trojaned Model Test Attack success rate:', scores[1]*100)
```
# Superimpose
```
import cv2
import librosa
def superimpose(background, overlay):
overlayed = background + overlay
s = librosa.feature.melspectrogram(overlayed, 16000)
s1 = librosa.power_to_db(s, ref=np.max)
x = np.zeros((128, 33), dtype=float)
for i in range(s1.shape[0]):
for j in range(s1[i].size):
x[i][j] = s1[i][j]
res = x.reshape(128, 33, 1)
return res
import numpy as np
np.random.seed(12345678)
def entropyCal(background_idx, n, trojan):
x1_add = [0] * n
# choose n overlay indexes between 10000 and 18000
index_overlay = np.random.randint(10000, 18000, n)
# do superimpose n times
if trojan:
background_arr = trojan_x_train_sound
else:
background_arr = x_train_sound
for i in range(n):
x1_add[i] = superimpose(background_arr[background_idx], x_train_sound[index_overlay[i]])
py1_add = model.predict(np.array(x1_add))
EntropySum = -np.nansum(py1_add*np.log2(py1_add))
return EntropySum
from tqdm import tqdm
#idx: 5000 ~ 7000: trojan
#idx: 7000 ~ 9000: benign
#idx: 10000 ~ 18000: overlapped images
n_test = 2000
n_sample = 100
entropy_bb = [0] * n_test # entropy for benign + benign
for j in tqdm(range(n_test), desc="Entropy:benign_benign"):
x_background_idx = j+7000
entropy_bb[j] = entropyCal(x_background_idx, n_sample, False)
entropy_tb = [0] * n_test # entropy for trojan + benign
for j in tqdm(range(n_test), desc="Entropy:trojan_benign"):
entropy_tb[j] = entropyCal(j+5000, n_sample, True)
final_entropy_bb = [x / n_sample for x in entropy_bb]
final_entropy_tb = [x / n_sample for x in entropy_tb]
import matplotlib.pyplot as plt
bins = 30
plt.hist(final_entropy_bb, bins, weights=np.ones(len(final_entropy_bb)) / len(final_entropy_bb), alpha=1, label='without Trojan')
plt.hist(final_entropy_tb, bins, weights=np.ones(len(final_entropy_tb)) / len(final_entropy_tb), alpha=1, label='with Trojan')
plt.legend(loc='upper right', fontsize = 20)
plt.ylabel('Probability (%)', fontsize = 20)
plt.title('normalized entropy', fontsize = 20)
plt.tick_params(labelsize=20)
fig1 = plt.gcf()
plt.show()
fig1.savefig(output_path)# save the fig as pdf file
# max entropy of trojaned inputs < min entropy of clean inputs
print("benign+benign: ", min(final_entropy_bb),"~", max(final_entropy_bb))
print("trojan+benign: ", min(final_entropy_tb),"~", max(final_entropy_tb))
import scipy
import scipy.stats
import pandas as pd
FRR = [0.05, 0.03, 0.01, 0.005]
data = []
for r in FRR:
    threshold_idx = int(n_test * r)  # index of the entropy threshold for the preset FRR r
threshold = final_entropy_bb[np.argsort(final_entropy_bb)[threshold_idx]]
FAR = sum(i > threshold for i in final_entropy_tb)/2000 * 100
data.append([r, FAR, threshold, 0, 0, 0, 0, 0])
y_pred = model.predict(trojan_x_test[0:2000])
for i in range(2000):
e = final_entropy_tb[i]
for j in range(4):
if e >= data[j][2]:
data[j][6] += 1
if np.argmax(y_pred[i]) == 0:
data[j][3] += 1
else:
data[j][4] += 1
if np.argmax(y_pred[i]) == np.argmax(y_test[i]):
data[j][5] += 1
for i in range(4):
a = data[i][1] * 2000 / 100
not_targeted = data[i][4]
data[i][7] = (a - not_targeted) / 2000 * 100
# Create the pandas DataFrame
df = pd.DataFrame(data, columns = ['FRR', 'FAR', 'Threshold', 'targeted', 'not targeted', 'ground_truth label', 'total', 'FAR1'])
print(df)
```
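The detection logic above boils down to one statistic: average the Shannon entropy of the model's predictions over n perturbed copies of an input. A trojaned input keeps predicting the target class no matter what is superimposed, so its entropy stays low; a benign input's predictions scatter. A self-contained numpy sketch of the idea, with hypothetical prediction arrays:

```python
import numpy as np

def normalized_entropy(pred_probs, n):
    # pred_probs: (n, num_classes) softmax outputs for n perturbed copies
    return -np.nansum(pred_probs * np.log2(pred_probs)) / n

n, eps = 100, 1e-9
trojaned = np.full((n, 10), eps)      # hypothetical: always the target class
trojaned[:, 0] = 1.0 - 9 * eps
benign = np.full((n, 10), 0.1)        # hypothetical: predictions scatter
```

Thresholding this statistic at a value chosen from the benign distribution (the preset FRR above) separates the two populations.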
# **Real time handwritten digits recognition using convolutional neural network**
# **Importing packages**
```
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix
import seaborn as sns
from sklearn.metrics import classification_report
import cv2
from google.colab import files
import random
```
# **Reading the data**
```
train=pd.read_csv("/content/drive/My Drive/Data_set/mnist_train.csv")
test=pd.read_csv("/content/drive/My Drive/Data_set/mnist_test.csv")
```
# **Train data**
```
print("Training Data Set \n")
train
```
# **Test data**
```
print("Test Data Set \n")
test
```
# **Dividing data into dependent and independent columns**
```
train_label_csv=train['5']
train_csv=train.drop('5',axis=1)
test_label_csv=test['7']
test_csv=test.drop('7',axis=1)
```
# **Visualizing an image from the train data**
```
idx=random.randrange(0,59999)
inpt=int(input("Enter Digit value (0 to 9): "))
while(True):
if(train_label_csv[idx]==inpt):
plt.figure(figsize=(3,3))
grid_data=train_csv.iloc[idx].values.reshape(28,28)
plt.imshow(grid_data,interpolation="none",cmap="gray")
plt.show()
print("Image no. in train data : ",idx)
break
idx=idx+1
```
# **Reshaping and normalizing the data**
```
train_set=train_csv.to_numpy()
test_set=test_csv.to_numpy()
train_label_set=train_label_csv.to_numpy()
test_label_set=test_label_csv.to_numpy()
train_set=train_set.reshape(train_csv.shape[0], 28, 28, 1)
test_set = test_set.reshape(test_csv.shape[0], 28, 28, 1)
train_set = train_set / 255.0
test_set = test_set / 255.0
train_set, dev_set = train_set[:58000], train_set[58000:]
train_label_set, dev_label_set = train_label_set[:58000], train_label_set[58000:]
```
# **Building CNN model**
```
jarvis = keras.Sequential([
layers.Conv2D(filters=64,kernel_size=(5,5),activation='relu',input_shape=(28, 28, 1)),
layers.MaxPooling2D(pool_size=(2, 2)),
layers.Conv2D(filters=128,kernel_size=(5,5),activation='relu'),
layers.MaxPooling2D((2,2)),
layers.Conv2D(filters=256,kernel_size=(3,3),activation='relu'),
layers.Dropout(rate=0.5),
layers.Flatten(),
layers.Dense(units=256,activation='relu'),
layers.Dense(units=10,activation='softmax')
])
jarvis.summary()
jarvis.compile(optimizer=tf.optimizers.Adam(),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
result=jarvis.fit(train_set,train_label_set,
validation_data=(dev_set,dev_label_set),
batch_size=192,
epochs=400,
steps_per_epoch=25)
```
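The model is compiled with `sparse_categorical_crossentropy`, which takes the integer labels from the CSV directly instead of one-hot vectors; per example, the loss is simply minus the log of the probability assigned to the true class. A minimal sketch:

```python
import numpy as np

probs = np.array([0.1, 0.7, 0.2])   # softmax output for one example
label = 1                            # integer class label, no one-hot needed
loss = -np.log(probs[label])         # the sparse categorical cross-entropy
```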
# **Accuracy and loss on the test data**
```
test_predict=jarvis.predict(test_set)
test_score = jarvis.evaluate(test_set, test_label_set)
print("\nTest error = {}%\n".format(round(100-test_score[1]*100,2)))
print("Test accuracy = {}%".format(round(test_score[1]*100,2)))
```
# **Plotting graphs**
Training Accuracy & Validation Accuracy
```
accuracy = result.history['accuracy']
val_accuracy = result.history['val_accuracy']
epochs=range(len(accuracy))
plt.plot(epochs,accuracy,"r-")
plt.plot(epochs,val_accuracy)
plt.xlabel('Epochs', fontsize=12)
plt.ylabel('Accuracy', fontsize=12)
plt.title ('Training Accuracy & Validation Accuracy',fontsize=15)
plt.legend(['Training Accuracy', 'Validation Accuracy'], fontsize=12)
plt.figure()
```
Training loss & validation loss
```
loss = result.history['loss']
val_loss = result.history['val_loss']
plt.plot(epochs, loss,"r-")
plt.plot(epochs, val_loss)
plt.xlabel('Epochs', fontsize=12)
plt.ylabel('Loss', fontsize=12)
plt.title ('Training loss & validation loss',fontsize=15)
plt.legend(['Training loss', 'Validation Loss'], fontsize=12)
plt.figure()
```
# **Predicting images from test data**
```
print("Range of index value [0-9999]")
index=int(input("enter the index value : "))
test_pred = np.argmax(test_predict,axis=1)
grid_data=test_csv.iloc[index].to_numpy().reshape(28,28)
plt.imshow(grid_data,interpolation="none",cmap="gray")
plt.show()
print("True value of the image is",test_label_set[index])
print("Predicted value of the image is '{}' with {} % confidence".
format(test_pred[index],np.max((test_predict[index])*100)))
```
# **Confusion matrix**
```
class_labels=['0','1','2','3','4','5','6','7','8','9']
plt.figure(figsize=(12,9))
test_predict_labels=[np.argmax(label) for label in test_predict]
cm=confusion_matrix(test_label_set,test_predict_labels)
sns.heatmap(cm,annot=True,fmt='d',xticklabels=class_labels,yticklabels=class_labels,cmap="binary_r")
```
# **Classification report**
```
cr=classification_report(test_label_set,test_predict_labels,target_names=class_labels)
print(cr)
```
# **Image processing**
```
def real_time_prediction(image_path):
image_org = cv2.imread(image_path)
image_grey = cv2.cvtColor(image_org.copy(), cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(image_grey.copy(), 247, 255, cv2.THRESH_BINARY_INV)
contours,hierarchy = cv2.findContours(thresh.copy(),
cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
dimensions = image_org.shape
image_new = np.zeros([dimensions[0],dimensions[1],3])
image_new.fill(255)
l=[]
for c in contours:
if(cv2.contourArea(c)<200):
continue
x,y,w,h = cv2.boundingRect(c)
font_scale=float((h-3)/30)
l.append(font_scale)
for c in contours:
if(cv2.contourArea(c) < 200):
continue
x,y,w,h = cv2.boundingRect(c)
font_scale=(min(l)+max(l))/2
cv2.rectangle(thresh, (x,y), (x+w,y+h), color=(255,0,0), thickness=2)
digit = thresh[y:y+h, x:x+w]
resized_digit = cv2.resize(digit, (18,18))
padded_digit = np.pad(resized_digit, (5,5), "constant", constant_values=0)
changed = np.array(padded_digit)
prediction = jarvis.predict(padded_digit.reshape(1,28,28,1))
cv2.putText(image_new,str(np.argmax(prediction)),(int(x-w*0.1),int(y+h*0.95)),cv2.FONT_HERSHEY_COMPLEX,font_scale,(0,-100,0),2)
print("\n--> UPLOADED IMAGE IN HAND WRITTEN FORMAT")
plt.imshow(image_org)
plt.show()
print("\n\n--> PREDICTED IMAGE IN DIGITAL FORMAT")
plt.imshow(image_new)
plt.show()
```
# **Uploading real time handwritten image**
```
uploaded = files.upload()
for image_path in uploaded.keys():
print(image_path)
print("\nImage Successfully Uploaded\n")
```
# **Predicting real time handwritten image**
```
real_time_prediction(image_path)
```
# The data block API
```
from fastai.gen_doc.nbdoc import *
from fastai.vision import *
from fastai import *
```
The data block API lets you customize how to create a [`DataBunch`](/basic_data.html#DataBunch) by isolating the underlying parts of that process in separate blocks, mainly:
- where are the inputs
- how to label them
- how to split the data into a training and validation set
- what type of [`Dataset`](https://pytorch.org/docs/stable/data.html#torch.utils.data.Dataset) to create
- possible transforms to apply
- how to wrap them in dataloaders and create the [`DataBunch`](/basic_data.html#DataBunch)
This is a bit longer than using the factory methods but is way more flexible. As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts.
## Examples of use
In [`vision.data`](/vision.data.html#vision.data), we create an easy [`DataBunch`](/basic_data.html#DataBunch) suitable for classification by simply typing:
```
path = untar_data(URLs.MNIST_TINY)
tfms = get_transforms(do_flip=False)
data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=24)
```
This is aimed at data that is in folders following an ImageNet style, with a train and valid directory, each containing one subdirectory per class, where all the pictures are. With the data block API, the same thing is achieved like this:
```
path = untar_data(URLs.MNIST_TINY)
tfms = get_transforms(do_flip=False)
data = (ImageFileList.from_folder(path) #Where to find the data? -> in path and its subfolders
.label_from_folder() #How to label? -> depending on the folder of the filenames
.split_by_folder() #How to split in train/valid? -> use the folders
.add_test_folder() #Optionally add a test set
.datasets(ImageClassificationDataset) #How to convert to datasets? -> use ImageClassificationDataset
        .transform(tfms, size=224)             #Data augmentation? -> use tfms with a size of 224
.databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch
data.test_ds[0]
data.show_batch(rows=3, figsize=(5,5))
data.valid_ds.classes
```
Let's look at another example from [`vision.data`](/vision.data.html#vision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is:
```
planet = untar_data(URLs.PLANET_TINY)
planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.)
data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', sep = ' ', ds_tfms=planet_tfms)
```
With the data block API we can rewrite this like that:
```
data = (ImageFileList.from_folder(planet)
#Where to find the data? -> in planet and its subfolders
.label_from_csv('labels.csv', sep=' ', folder='train', suffix='.jpg')
#How to label? -> use the csv file labels.csv in path,
#add .jpg to the names and take them in the folder train
.random_split_by_pct()
        #How to split in train/valid? -> randomly with the default 20% in valid
.datasets(ImageMultiDataset)
#How to convert to datasets? -> use ImageMultiDataset
.transform(planet_tfms, size=128)
        #Data augmentation? -> use tfms with a size of 128
.databunch())
#Finally? -> use the defaults for conversion to databunch
data.show_batch(rows=3, figsize=(10,8), is_train=False)
```
This new API also allows you to use dataset types for which there is no direct [`ImageDataBunch`](/vision.data.html#ImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.html#DataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder.
```
camvid = untar_data(URLs.CAMVID_TINY)
path_lbl = camvid/'labels'
path_img = camvid/'images'
```
We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...)
```
codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes
```
And we define the following function that infers the mask filename from the image filename.
```
get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}'
```
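Following the camvid convention above, `get_y_fn` maps a file such as `images/0016E5_00390.png` to `labels/0016E5_00390_P.png` (the filenames here are illustrative). The same rule, written out with `pathlib`:

```python
from pathlib import Path

def get_mask_path(img_path, lbl_dir):
    """Infer the mask filename from the image filename (camvid '_P' convention)."""
    img_path = Path(img_path)
    return Path(lbl_dir) / f"{img_path.stem}_P{img_path.suffix}"
```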
Then we can easily define a [`DataBunch`](/basic_data.html#DataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image.
```
data = (ImageFileList.from_folder(path_img) #Where are the input files? -> in path_img
.label_from_func(get_y_fn) #How to label? -> use get_y_fn
.random_split_by_pct() #How to split between train and valid? -> randomly
.datasets(SegmentationDataset, classes=codes) #How to create a dataset? -> use SegmentationDataset
.transform(get_transforms(), size=96, tfm_y=True) #Data aug -> Use standard tfms with tfm_y=True
.databunch(bs=64)) #Lastly convert in a databunch.
data.show_batch(rows=2, figsize=(5,5))
```
One last example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/#home) here. There is a helper function in the library that reads the annotation file and returns the list of image names with the list of labelled bboxes associated to each one. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename.
```
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
img2bbox = {img:bb for img, bb in zip(images, lbl_bbox)}
get_y_func = lambda o:img2bbox[o.name]
```
The following code is very similar to what we saw before. The only new addition is the use of special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes.
```
data = (ImageFileList.from_folder(coco)
#Where are the images? -> in coco
.label_from_func(get_y_func)
#How to find the labels? -> use get_y_func
.random_split_by_pct()
#How to split in train/valid? -> randomly with the default 20% in valid
.datasets(ObjectDetectDataset)
#How to create datasets? -> with ObjectDetectDataset
#Data augmentation? -> Standard transforms with tfm_y=True
.databunch(bs=16, collate_fn=bb_pad_collate))
#Finally we convert to a DataBunch and we use bb_pad_collate
data.show_batch(rows=3, is_train=False, figsize=(8,7))
```
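The collation trick `bb_pad_collate` relies on is just padding every sample up to the largest bounding-box count in the batch. A hedged sketch of that idea (not fastai's actual implementation; the padding values are placeholders):

```python
def pad_bbox_batch(batch, pad_box=(0, 0, 0, 0), pad_label=0):
    """Pad (boxes, labels) pairs so every sample has the same number of boxes."""
    max_n = max(len(boxes) for boxes, _ in batch)
    padded = []
    for boxes, labels in batch:
        extra = max_n - len(boxes)
        padded.append((list(boxes) + [pad_box] * extra,
                       list(labels) + [pad_label] * extra))
    return padded
```

After this step every sample has identical shape, so the batch can be stacked into tensors.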
## Provide inputs
The inputs we want to feed our model are regrouped in the following class. The class contains methods to get the corresponding labels.
```
show_doc(InputList, title_level=3, doc_string=False)
```
This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...)
```
show_doc(InputList.from_folder)
```
Note that [`InputList`](/data_block.html#InputList) is subclassed in vision by [`ImageFileList`](/vision.data.html#ImageFileList) that changes the default of `extensions` to image file extensions (which is why we used [`ImageFileList`](/vision.data.html#ImageFileList) in our previous examples).
## Labelling the inputs
All of the following are methods of [`InputList`](/data_block.html#InputList). Note that some of them are primarily intended for inputs that are filenames and might not work in general situations.
```
show_doc(InputList.label_from_csv)
```
If a `folder` is specified, filenames are taken in `self.path/folder`. `suffix` is added. If `sep` is specified, splits the values in `label_col` accordingly. This method is intended for inputs that are filenames.
```
jekyll_note("This method will only keep the filenames that are both present in the csv file and in `self.items`.")
show_doc(InputList.label_from_df)
jekyll_note("This method will only keep the filenames that are both present in the dataframe and in `self.items`.")
show_doc(InputList.label_from_folder)
jekyll_note("This method looks at the last subfolder in the path to determine the classes.")
show_doc(InputList.label_from_func)
```
This method is primarily intended for inputs that are filenames, but could work in other settings.
```
show_doc(InputList.label_from_re)
show_doc(LabelList, title_level=3, doc_string=False)
```
A list of labelled inputs in `items` (expected to be tuples of input, label) with a `path` attribute. This class contains methods to create `SplitDataset`.
## Split the data between train and validation.
The following functions are methods of [`LabelList`](/data_block.html#LabelList), to create a [`SplitData`](/data_block.html#SplitData) in different ways.
```
show_doc(LabelList.random_split_by_pct)
show_doc(LabelList.split_by_files)
show_doc(LabelList.split_by_fname_file)
show_doc(LabelList.split_by_folder)
jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.")
show_doc(LabelList.split_by_idx)
show_doc(SplitData, title_level=3)
```
You won't normally construct a [`SplitData`](/data_block.html#SplitData) yourself, but instead will use one of the `split*` methods in [`LabelList`](/data_block.html#LabelList).
```
show_doc(SplitData.datasets)
show_doc(SplitData.add_test)
show_doc(SplitData.add_test_folder)
```
## Create datasets
To create the datasets from [`SplitData`](/data_block.html#SplitData) we have the following class method.
```
show_doc(SplitData.datasets)
show_doc(SplitDatasets, title_level=3)
```
This class can be constructed directly from one of the following factory methods.
```
show_doc(SplitDatasets.from_single)
show_doc(SplitDatasets.single_from_c)
show_doc(SplitDatasets.single_from_classes)
```
Then we can build the [`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader) around our [`Dataset`](https://pytorch.org/docs/stable/data.html#torch.utils.data.Dataset) like this.
```
show_doc(SplitDatasets.dataloaders)
```
The methods `img_transform` and `img_databunch` used earlier are documented in [`vision.data`](/vision.data.html#vision.data).
## Utility classes
```
show_doc(ItemList, title_level=3)
show_doc(PathItemList, title_level=3)
```
```
%matplotlib inline
import random
import json
import math
import time
import os
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import transforms
from torch.autograd import Variable
import numpy as np
from mymodel import MobileNet
from mynyudata import NyuDataset
import myutils
from ops import ConvBn, ConvDw, UpConv, PointWise, UpProj, MyBlock, DeConvDw, NNConvDw, ShuffleConvDw
import torch.nn as nn
import h5py
import transforms
import matplotlib.pyplot as plt
from notebook import MyNet
from mynyudata import h5_loader, getnames
DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = MyNet().to(DEVICE)
model.load_state_dict(torch.load('{}/fyn_model.pt'.format('.'), map_location=torch.device('cpu')))
location = "/home/dlkhagvazhav/data/nyudepthv2/train/"
filenames = getnames(location)
random.shuffle(filenames)
size_set = 40
valset = NyuDataset(filenames[:size_set], train=False)
valloader = DataLoader(valset, batch_size=5, shuffle=True)
images, depths = next(iter(valloader))
predicted_depths = model(images)
len(images)
def show_results(images, depths, predicted_depths):
columns = 3
rows = len(depths)
fig = plt.figure(figsize=(20, 20))
p = 1
for i in range(rows):
fig.add_subplot(rows, columns, p)
plt.imshow(images[i].detach().numpy().transpose((1, 2, 0)))
fig.add_subplot(rows, columns, p+1)
plt.imshow(depths[i].detach().numpy().reshape(depths[i].shape[1], -1))
fig.add_subplot(rows, columns, p+2)
plt.imshow(predicted_depths[i].detach().numpy().reshape(predicted_depths[i].shape[1], -1))
p = p + 3
show_results(images, depths, predicted_depths)
def prediction(truth_depths, predicted_depths):
n = len(truth_depths)
rmse = 0
delta1 = 0
for i in range(n):
prediction = predicted_depths[i]
truth = truth_depths[i]
abs_diff = (prediction - truth).abs()
mse = float((torch.pow(abs_diff, 2)).mean())
rmse += math.sqrt(mse)
maxRatio = torch.max(prediction / truth, truth / prediction)
delta1 += float((maxRatio < 1.25).float().mean())
return {'rmse': rmse, 'delta1': delta1, 'n': n}
prediction(depths, predicted_depths)
location = "/home/dlkhagvazhav/data/nyudepthv2/val/"
filenames = getnames(location)
valset = NyuDataset(filenames, train=False)
valloader = DataLoader(valset, batch_size=8, shuffle=True)
general_result = {'rmse': 0, 'delta1': 0, 'n': 0}
for i, (images, depths) in enumerate(valloader):
predictions_ = model(images)
result = prediction(depths, predictions_)
general_result['rmse'] += result['rmse']
general_result['delta1'] += result['delta1']
general_result['n'] += result['n']
print('rmse: {}'.format(general_result['rmse']/general_result['n']))
print('delta1: {}'.format(general_result['delta1']/general_result['n']))
```
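The `delta1` term accumulated above is the standard depth-estimation threshold accuracy: the fraction of values for which max(pred/truth, truth/pred) < 1.25. In isolation, on plain scalars:

```python
def delta1(pred, truth, thresh=1.25):
    """Fraction of values whose prediction/truth ratio is within `thresh`."""
    ratios = [max(p / t, t / p) for p, t in zip(pred, truth)]
    return sum(1 for r in ratios if r < thresh) / len(ratios)
```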
## Horse racing prediction
This is an experiment to predict the outcome of horse racing based on past 5 race results, jockey, and trainer.
## Prepare data
```
# prepare mongodb connection
import numpy as np
from pymongo import MongoClient
client = MongoClient()
db = client.keiba
# training_data_Kisyu_Kyusya_1_race_5_with_odds contains data
training_data = db.training_data_Kisyu_Kyusya_1_race_5_with_odds
# data_models_Kisyu_Kyusya_1_race_5_with_odds contains only std and mean data
data_models = db.data_models_Kisyu_Kyusya_1_race_5_with_odds
# get cursor of mongodb
all_data_cursor = training_data.find({})
all_data_count = training_data.count_documents({})
print("all_data_count: {}".format(all_data_count))
# get std and mean. we use data_model later
mean_and_std = data_models.find_one({})
#
# get all data from mongodb and keep them as numpy.array
# target Y is float value
#
def prepare_training_data():
input_X = np.zeros(shape=(all_data_count, 105), dtype=float)
target_Y = np.zeros(shape=(all_data_count, 1), dtype=float)
idx1 = 0
for data1 in all_data_cursor:
# normalize x values
for idx2 in data1['input_x_object']:
# get model data which contains mean and std
x1 = data1['input_x_object'][idx2]
mean_name = 'input_x_avg_'+idx2
mean_value = mean_and_std['mean_and_std'][mean_name]
std_name = 'input_x_std_'+idx2
std_value = mean_and_std['mean_and_std'][std_name]
normalized_x = (x1 - mean_value) / std_value
input_X[idx1, int(idx2)] = normalized_x
# normalize y value
y1 = data1['target_y']
y_mean_value = mean_and_std['mean_and_std']['target_y_mean']
y_std_value = mean_and_std['mean_and_std']['target_y_stddev']
normalized_y = (y1 - y_mean_value) / y_std_value
target_Y[idx1] = normalized_y
idx1 = idx1 + 1
return (input_X, target_Y)
# get data actually
training_x, training_y = prepare_training_data()
# save data for future use
import pickle
with open('filename.pickle', 'wb') as handle:
pickle.dump((training_x, training_y), handle, protocol=pickle.HIGHEST_PROTOCOL)
#
# get all data from mongodb and keep them as numpy.array
# target y is a binary value, 0 or 1. 1 is win, 0 is lose
#
def prepare_training_data_binary():
input_X = np.zeros(shape=(all_data_count, 105), dtype=float)
target_Y = np.zeros(shape=(all_data_count, 1), dtype=float)
idx1 = 0
for data1 in all_data_cursor:
# normalize x values
for idx2 in data1['input_x_object']:
# get model data which contains mean and std
x1 = data1['input_x_object'][idx2]
mean_name = 'input_x_avg_'+idx2
mean_value = mean_and_std['mean_and_std'][mean_name]
std_name = 'input_x_std_'+idx2
std_value = mean_and_std['mean_and_std'][std_name]
normalized_x = (x1 - mean_value) / std_value
input_X[idx1, int(idx2)] = normalized_x
# normalize y value
y1 = data1['target_y']
if y1 > 0:
target_Y[idx1] = 1
else:
target_Y[idx1] = 0
idx1 = idx1 + 1
return (input_X, target_Y)
# get binary version of y
training_x_binary, training_y_binary = prepare_training_data_binary()
# save data for future use
import pickle
with open('filename_binary.pickle', 'wb') as handle:
pickle.dump((training_x_binary, training_y_binary), handle, protocol=pickle.HIGHEST_PROTOCOL)
```
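Both preparation functions apply the same z-score normalization using the per-feature mean and std stored in the data model; predictions are later mapped back with the inverse transform. The round trip, factored out in isolation:

```python
def normalize(x, mean, std):
    """Z-score: shift by the mean, scale by the standard deviation."""
    return (x - mean) / std

def denormalize(z, mean, std):
    """Invert the z-score to recover the original scale."""
    return z * std + mean
```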
## Restart from here
```
# load float version of output y
import pickle
with open('filename.pickle', 'rb') as handle:
training_x, training_y = pickle.load(handle)
# load binary version of output y
import pickle
with open('filename_binary.pickle', 'rb') as handle:
training_x_binary, training_y_binary = pickle.load(handle)
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(training_x, training_y, test_size = 0.1)
```
## Create model
```
# import dependencies
# allocate 50% of GPU memory (if you like, feel free to change this)
from keras.backend.tensorflow_backend import set_session
import tensorflow as tf
# gpu specific
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.5
set_session(tf.Session(config=config))
import keras
from keras import metrics, initializers
from keras_tqdm import TQDMNotebookCallback
from keras.layers import Dropout, Dense, LeakyReLU, BatchNormalization, Activation
from keras.models import Sequential
from keras.callbacks import ModelCheckpoint
from keras.optimizers import SGD, Adam, RMSprop
# model 1: 3 layers, LeakyReLU, and dropout
model_1 = Sequential()
model_1.add(Dense(128, input_shape=(105,), activation=None))
model_1.add(LeakyReLU(alpha=0.3))
# model_1.add(Dropout(0.2))
model_1.add(Dense(256, activation=None))
model_1.add(LeakyReLU(alpha=0.3))
# model_1.add(Dropout(0.2))
model_1.add(Dense(128, activation=None))
model_1.add(LeakyReLU(alpha=0.3))
# model_1.add(Dropout(0.2))
model_1.add(Dense(1, activation=None))
model_1.compile(optimizer='rmsprop',
loss='mean_absolute_error',
metrics=[metrics.mae])
model_1.summary()
# training model_1
# add checkpointer
save_model_name = "keiba_model_g1.h5"
checkpointer = ModelCheckpoint(filepath='results/'+save_model_name, verbose=0)
# minibatch_size = 32
# steps_per_epoch = training_data_count // minibatch_size
# validation_steps = validation_data_count // minibatch_size
model_1.fit(x=x_train,
y=y_train,
batch_size=64,
epochs=5,
verbose=1,
callbacks=[checkpointer],
validation_split=0.2,
shuffle=True)
# model_1.fit_generator(generator=data_generator(batch_size=minibatch_size, data_type='training'),
# steps_per_epoch=steps_per_epoch,
# validation_data=data_generator(batch_size=minibatch_size, data_type='validation'),
# validation_steps=validation_steps,
# epochs=20,
# callbacks=[checkpointer])
```
## Evaluate model
Check whether the de-normalized predictions agree in sign with the actual values.
```
# modify y_test to binary data
# y_mean_value = mean_and_std['mean_and_std']['target_y_mean']
# y_std_value = mean_and_std['mean_and_std']['target_y_stddev']
# y1 = y_test * y_std_value + y_mean_value
# pred_normalized = model_1.predict(x_train)
# pred1 = pred_normalized * y_std_value + y_mean_value
# # multiply prediction and actual value. if the sign is the same, the result should be positive
# check_1 = pred1 * y1
# idx = 0
# for item in check_1:
# if item * y_train[idx] > 0:
# check_1[idx] = 1
# else:
# check_1[idx] = 0
# idx = idx + 1
# accuracy1 = 100*np.sum(check_1) / len(check_1)
# print("accuracy1:{}".format(accuracy1))
# normalized_y = (y1 - y_mean_value) / y_std_value
y_mean_value = mean_and_std['mean_and_std']['target_y_mean']
y_std_value = mean_and_std['mean_and_std']['target_y_stddev']
# x_test[2:3]
y1 = y_test[2:12] * y_std_value + y_mean_value
pred_normalized = model_1.predict(x_test[2:12])
pred1 = pred_normalized * y_std_value + y_mean_value
print("y_test[2:12]:{}".format(y_test[2:12]))
print("y1:{}".format(y1))
print("")
print("pred_normalized:{}".format(pred_normalized))
print("pred1:{}".format(pred1))
aaa = pred1 * y1
print("aaa: {}".format(aaa))
idx = 0
for item in aaa:
if item > 0:
aaa[idx] = 1
else:
aaa[idx] = 0
idx = idx + 1
print("aaa : {}".format(aaa))
print(len(aaa))
accuracy1 = 100*np.sum(aaa) / len(aaa)
print("accuracy1:{}".format(accuracy1))
```
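The accuracy computed above counts a prediction as correct when it has the same sign as the actual (de-normalized) target. Factored out as a function:

```python
def sign_agreement_accuracy(preds, actuals):
    """Percentage of predictions whose sign matches the actual value."""
    hits = sum(1 for p, a in zip(preds, actuals) if p * a > 0)
    return 100.0 * hits / len(preds)
```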
## Model 2
```
# model 2:
model_2 = Sequential()
model_2.add(Dense(128,
input_shape=(105,),
kernel_initializer=initializers.TruncatedNormal(mean=0.0, stddev=0.01, seed=None),
bias_initializer=initializers.TruncatedNormal(mean=0.0, stddev=0.01, seed=None),
activation=None))
# model_2.add(BatchNormalization())
model_2.add(LeakyReLU(alpha=0.3))
# model_2.add(Dropout(0.2))
# model_2.add(Dense(256, activation=None))
# # model_2.add(BatchNormalization())
# model_2.add(LeakyReLU(alpha=0.3))
model_2.add(Dense(256, activation=None))
# model_2.add(BatchNormalization())
model_2.add(LeakyReLU(alpha=0.3))
# model_2.add(Dropout(0.2))
model_2.add(Dense(128, activation=None))
# model_2.add(BatchNormalization())
model_2.add(LeakyReLU(alpha=0.3))
# model_2.add(Dropout(0.2))
model_2.add(Dense(1, activation=None))
model_2.add(Activation('sigmoid'))
# Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
model_2.compile(optimizer=Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0),
loss='binary_crossentropy',
metrics=[metrics.binary_accuracy])
model_2.summary()
# training model_2
# add checkpointer
save_model_name = "keiba_model_g2.h5"
checkpointer = ModelCheckpoint(filepath='results/'+save_model_name, verbose=0)
# training_x_binary, training_y_binary
model_2.fit(x=training_x_binary,
y=training_y_binary,
batch_size=128,
epochs=20,
verbose=1,
callbacks=[checkpointer],
validation_split=0.2,
shuffle=True)
```
## other helper functions
```
# get batch as generator: not used here
def data_generator(batch_size, data_type):
input_X = np.zeros(shape=(batch_size, 105), dtype=float)
target_Y = np.zeros(shape=(batch_size, 1), dtype=float)
while True:
for idx1 in range(batch_size):
# get one row
data1 = None
if data_type == 'validation':
data1 = validation_data_cursor.next()
else:
data1 = training_data_cursor.next()
# normalize x values
for idx2 in data1['input_x_object']:
# get model data which contains mean and std
x1 = data1['input_x_object'][idx2]
mean_name = 'input_x_avg_'+idx2
mean_value = mean_and_std['mean_and_std'][mean_name]
std_name = 'input_x_std_'+idx2
std_value = mean_and_std['mean_and_std'][std_name]
normalized_x = (x1 - mean_value) / std_value
input_X[idx1, int(idx2)] = normalized_x
# normalize y value
y1 = data1['target_y']
y_mean_value = mean_and_std['mean_and_std']['target_y_mean']
y_std_value = mean_and_std['mean_and_std']['target_y_stddev']
normalized_y = (y1 - y_mean_value) / y_std_value
target_Y[idx1] = normalized_y
yield (input_X, target_Y)
# get batch (y is binary data) as generator: not used here
def data_generator_binary(batch_size, data_type):
input_X = np.zeros(shape=(batch_size, 105), dtype=float)
target_Y = np.zeros(shape=(batch_size, 1), dtype=float)
while True:
for idx1 in range(batch_size):
# get one row
data1 = None
if data_type == 'validation':
data1 = validation_data_cursor.next()
else:
data1 = training_data_cursor.next()
# normalize x values
for idx2 in data1['input_x_object']:
# get model data which contains mean and std
x1 = data1['input_x_object'][idx2]
mean_name = 'input_x_avg_'+idx2
mean_value = mean_and_std['mean_and_std'][mean_name]
std_name = 'input_x_std_'+idx2
std_value = mean_and_std['mean_and_std'][std_name]
normalized_x = (x1 - mean_value) / std_value
input_X[idx1, int(idx2)] = normalized_x
# normalize y value
y1 = data1['target_y']
if y1 >= 0:
target_Y[idx1] = 1
else:
target_Y[idx1] = 0
yield (input_X, target_Y)
```
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Automatic differentiation and gradient tapes
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/eager/automatic_differentiation"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/eager/automatic_differentiation.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/eager/automatic_differentiation.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
In the previous tutorial we introduced tensors and the operations on them. In this tutorial we will cover [automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation), a key technique for optimizing machine learning models.
## Setup
```
import tensorflow as tf
tf.enable_eager_execution()
tfe = tf.contrib.eager  # shorthand for some symbols
```
## Derivatives of a function
TensorFlow provides APIs for automatic differentiation, i.e. computing the derivative of a function. The way that more closely mimics the math is to encapsulate the computation in a Python function, say `f`, and use `tfe.gradients_function` to create a function that computes the derivatives of `f` with respect to its arguments. If you're familiar with [autograd](https://github.com/HIPS/autograd) for differentiating numpy functions, this will be familiar. For example:
```
from math import pi
def f(x):
return tf.square(tf.sin(x))
assert f(pi/2).numpy() == 1.0
# grad_f will return a list of derivatives of f
# with respect to its arguments. Since f() has a single argument,
# grad_f will return a list with a single element.
grad_f = tfe.gradients_function(f)
assert tf.abs(grad_f(pi/2)[0]).numpy() < 1e-7
```
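Independently of the TensorFlow API, the derivative above can be sanity-checked numerically: d/dx sin²(x) = 2 sin(x) cos(x) = sin(2x), and a central finite difference should agree with that analytic form:

```python
import math

def f(x):
    return math.sin(x) ** 2

def grad_f(x):
    # analytic derivative: 2 sin(x) cos(x) = sin(2x)
    return math.sin(2.0 * x)

def numeric_grad(fn, x, h=1e-6):
    """Central finite-difference approximation of fn'(x)."""
    return (fn(x + h) - fn(x - h)) / (2.0 * h)
```

At x = π/2 both forms give (numerically) zero, matching the `assert` in the example above.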
### Higher-order gradients
The same API can be used to differentiate multiple times:
```
def f(x):
return tf.square(tf.sin(x))
def grad(f):
return lambda x: tfe.gradients_function(f)(x)[0]
x = tf.lin_space(-2*pi, 2*pi, 100)  # 100 points between -2π and +2π
import matplotlib.pyplot as plt
plt.plot(x, f(x), label="f")
plt.plot(x, grad(f)(x), label="first derivative")
plt.plot(x, grad(grad(f))(x), label="second derivative")
plt.plot(x, grad(grad(grad(f)))(x), label="third derivative")
plt.legend()
plt.show()
```
## Gradient tapes
Every differentiable TensorFlow operation has an associated gradient function. For example, the gradient function of `tf.square(x)` would be a function that returns `2.0 * x`. To compute the gradient of a user-defined function (like `f(x)` in the example above), TensorFlow first "records" all the operations applied to compute the output of the function. We call this record a "tape". It then uses that tape and the gradient function associated with each primitive operation to compute the gradients of the user-defined function using [reverse mode differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation).
Since operations are recorded as they are executed, Python control flow (using `if`s and `while`s, for example) is naturally handled:
```
def f(x, y):
output = 1
# Must use range(int(y)) instead of range(y) in Python 3 when
# using TensorFlow 1.10 and earlier. In 1.11+ you can use range(y).
for i in range(int(y)):
output = tf.multiply(output, x)
return output
def g(x, y):
# Return the gradient of `f` with respect to its first parameter
return tfe.gradients_function(f)(x, y)[0]
assert f(3.0, 2).numpy() == 9.0   # f(x, 2) is essentially x * x
assert g(3.0, 2).numpy() == 6.0   # and its gradient is 2 * x
assert f(4.0, 3).numpy() == 64.0  # f(x, 3) is essentially x * x * x
assert g(4.0, 3).numpy() == 48.0  # and its gradient is 3 * x * x
```
At times it may be inconvenient to encapsulate the computation of interest into a function. For example, you may want the gradient of the output with respect to intermediate values computed in the function. In such cases, the slightly more verbose but explicit [tf.GradientTape](https://www.tensorflow.org/api_docs/python/tf/GradientTape) context is useful. All computation inside the context of a `tf.GradientTape` is "recorded".
For example:
```
x = tf.ones((2, 2))
# Once the underlying issue is fixed, a single t.gradient() call will suffice.
with tf.GradientTape(persistent=True) as t:
t.watch(x)
y = tf.reduce_sum(x)
z = tf.multiply(y, y)
# Use the same tape to compute the derivative of z with respect to the intermediate value y
dz_dy = t.gradient(z, y)
assert dz_dy.numpy() == 8.0
# Derivative of z with respect to the original input tensor x
dz_dx = t.gradient(z, x)
for i in [0, 1]:
for j in [0, 1]:
assert dz_dx[i][j].numpy() == 8.0
```
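The record-then-replay idea behind `tf.GradientTape` can be sketched in a few lines of pure Python, independent of TensorFlow: record each primitive operation on a tape together with its local partial derivatives, then walk the tape backwards accumulating gradients (reverse-mode differentiation). This is an illustrative toy, not TensorFlow's implementation:

```python
class Var:
    """A scalar value that accumulates a gradient during the reverse pass."""
    def __init__(self, value):
        self.value = value
        self.grad = 0.0

def mul(a, b, tape):
    """Multiply two Vars, recording the op and its local partials on the tape."""
    out = Var(a.value * b.value)
    # local partials: d(out)/da = b.value, d(out)/db = a.value
    tape.append((out, [(a, b.value), (b, a.value)]))
    return out

def backward(tape, target):
    """Reverse-mode pass: propagate d(target)/d(...) back through the tape."""
    target.grad = 1.0
    for out, partials in reversed(tape):
        for var, local in partials:
            var.grad += out.grad * local
```

With `y = 4` (as in `tf.reduce_sum(tf.ones((2, 2)))`) and `z = y * y`, the reverse pass gives dz/dy = 2y = 8, matching the `dz_dy` assertion in the example above.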
### Higher-order gradients
Operations inside of the `GradientTape` context manager are recorded for automatic differentiation. If gradients are computed inside that context, then the gradient computation is recorded as well. As a result, the exact same API also works for higher-order gradients. For example:
```
x = tf.constant(1.0)  # convert the Python 1.0 to a Tensor object
with tf.GradientTape() as t:
with tf.GradientTape() as t2:
t2.watch(x)
y = x * x * x
# Compute the gradient inside the 't' context manager,
# which means the gradient computation is differentiable as well.
dy_dx = t2.gradient(y, x)
d2y_dx2 = t.gradient(dy_dx, x)
assert dy_dx.numpy() == 3.0
assert d2y_dx2.numpy() == 6.0
```
## Next steps
In this tutorial we covered gradient computation in TensorFlow. With that we have enough of the primitives required to build and train neural networks.
# Debug training
May 19, 2021
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import subprocess as sp
import sys
import os
import glob
import pickle
from matplotlib.colors import LogNorm, PowerNorm, Normalize
import seaborn as sns
from functools import reduce
import socket
from ipywidgets import *
%matplotlib widget
base_dict={'cori':'/global/u1/v/vpa/project/jpt_notebooks/Cosmology/Cosmo_GAN/repositories/cosmogan_pytorch/',
'summit':'/autofs/nccs-svm1_home1/venkitesh/projects/cosmogan/cosmogan_pytorch/'}
facility='cori' if socket.gethostname()[:4]=='cori' else 'summit'
base_dir=base_dict[facility]
sys.path.append(base_dir+'/code/modules_image_analysis/')
from modules_img_analysis import *
# sys.path.append(base_dir+'/code/5_3d_cgan/1_main_code/')
# import post_analysis_pandas as post
### Transformation functions for image pixel values
def f_transform(x):
return 2.*x/(x + 4.) - 1.
def f_invtransform(s):
return 4.*(1. + s)/(1. - s)
```
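These two transforms are exact inverses on their valid range: algebraically, `f_invtransform(f_transform(x))` simplifies back to `x`. A quick round-trip self-check:

```python
def f_transform(x):
    # map pixel values from [0, inf) into [-1, 1)
    return 2. * x / (x + 4.) - 1.

def f_invtransform(s):
    # inverse map from [-1, 1) back to [0, inf)
    return 4. * (1. + s) / (1. - s)
```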
## Read data
```
dict1={'cori':{
'2d':'/global/cfs/cdirs/m3363/vayyar/cosmogan_data/results_from_other_code/pytorch/results/128_square/',
'3d':'/global/cfs/cdirs/m3363/vayyar/cosmogan_data/results_from_other_code/pytorch/results/3D/',
'3d_cgan':'/global/cfs/cdirs/m3363/vayyar/cosmogan_data/results_from_other_code/pytorch/results/3d_cGAN/'},
'summit':{'2d':'/gpfs/alpine/ast153/proj-shared/venkitesh/Cosmogan/data/results_pytorch/2d/',
'3d':'/gpfs/alpine/ast153/proj-shared/venkitesh/Cosmogan/data/results_pytorch/3d/'}}
# parent_dir=u.result
parent_dir=dict1[facility]['3d']
dir_lst=[i.split('/')[-1] for i in glob.glob(parent_dir+'20210521*')]
dir_lst
w=interactive(lambda x: x, x=Dropdown(options=dir_lst))
display(w)
result=w.result
result_dir=parent_dir+result
print(result_dir)
```
## Plot Losses
```
df_metrics=pd.read_pickle(result_dir+'/df_metrics.pkle').astype(np.float64)
# df_metrics.tail(10)
def f_plot_metrics(df,col_list):
plt.figure()
for key in col_list:
plt.plot(df_metrics[key],label=key,marker='*',linestyle='')
plt.legend()
# col_list=list(col_list)
# df.plot(kind='line',x='step',y=col_list)
f_plot_metrics(df_metrics,['hist_chi'])
f_plot_metrics(df_metrics,['lr_d','lr_g'])
interact_manual(f_plot_metrics,df=fixed(df_metrics), col_list=SelectMultiple(options=df_metrics.columns.values))
# df_metrics[(df_metrics.lr_d>=6.69e-04) ]
df_metrics.plot(kind='line',x='step',y=['lr_d'])
np.unique(df_metrics.lr_d.values),np.unique(df_metrics.lr_g.values)
# display(df_metrics.sort_values(by=['hist_chi']).head(8))
# display(df_metrics.sort_values(by=['spec_chi']).head(8))
```
## Calculating learn rates
```
Nsteps=6;Lf=0.0003;Li=0.001
Lambda=(Lf/Li)**(1.0/Nsteps)
print(Lambda,np.sqrt(Lambda))
lst=[5,10,15,25,35,100]
# lst=range(1,11)
[(Li*Lambda**(count+1),i) for count,i in enumerate(lst)]
```
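The cell above solves for the per-step decay factor λ = (Lf/Li)^(1/Nsteps), so that Nsteps equal multiplicative drops take the learning rate from Li down to Lf. As a reusable function:

```python
def geometric_lr_schedule(li, lf, nsteps):
    """Learning rates after each of `nsteps` equal multiplicative decays from li to lf."""
    lam = (lf / li) ** (1.0 / nsteps)
    return [li * lam ** (k + 1) for k in range(nsteps)]
```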
## Grid plot of images
```
epoch=170
flist=glob.glob(result_dir+'/images/gen_img*_epoch-{0}_step*'.format(epoch))
steps_list=[fname.split('/')[-1].split('step-')[-1].split('.')[0] for fname in flist]
print(*steps_list)
# fname=flist[0]
# fname,fname.split('/')[-1].split('step-')[-1].split('.')[0]
step=9550
fname=glob.glob(result_dir+'/images/gen_img_*_epoch-{0}_step-{1}.npy'.format(epoch,step))[0]
fname
images=np.load(fname)[:,0,:,:]
print(images.shape)
f_plot_grid(images[:8,:,:,0],cols=4,fig_size=(8,4))
```
```
# Makes print and division act like Python 3
from __future__ import print_function, division
# Import the usual libraries
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
# Enable inline plotting
%matplotlib inline
from IPython.display import display, Latex, clear_output
from matplotlib.backends.backend_pdf import PdfPages
import pynrc
from pynrc import nrc_utils
from pynrc.nrc_utils import S, source_spectrum
# from pynrc.obs_nircam import model_to_hdulist, obs_hci
# from pynrc.obs_nircam import plot_contrasts, plot_contrasts_mjup, planet_mags, plot_planet_patches
pynrc.setup_logging('WARNING', verbose=False)
# Observation Definitions
from pynrc.nb_funcs import make_key, model_info, obs_wfe, obs_optimize
# Functions to run a series of operations
from pynrc.nb_funcs import do_opt, do_contrast, do_gen_hdus, do_sat_levels
# Plotting routines
from pynrc.nb_funcs import do_plot_contrasts, plot_images, plot_images_swlw
```
## Define Sources and their Reference PSF Stars
```
# Various Bandpasses
bp_v = S.ObsBandpass('v')
bp_k = pynrc.bp_2mass('k')
bp_w1 = pynrc.bp_wise('w1')
bp_w2 = pynrc.bp_wise('w2')
# source, dist, age, sptype, Teff, [Fe/H], log_g, mag, band, fov
args_sources = [('epsEri', 3.2, 800, 'K2V', 5084, -0.13, 4.3, 1.67, bp_k, 10)]
ref_sources = [('delEri', 'K0IV', 5055, +0.13, 3.9, 1.43, bp_k)]
# Directory housing VOTables
# http://vizier.u-strasbg.fr/vizier/sed/
votdir = 'votables/'
# Directory to save plots and figures
outdir = 'Moonshots/'
# List of filters
args_filter = [('F356W', 'MASK430R', 'CIRCLYOT'),
('F444W', 'MASK430R', 'CIRCLYOT')]
args_filter = [('F444W', 'MASK430R', 'CIRCLYOT')]
subsize = 320
filt_keys = []
for filt,mask,pupil in args_filter:
    filt_keys.append(make_key(filt, mask=mask, pupil=pupil))
```
## Eps Eri
```
# Fit spectrum to SED photometry
i=0
name_sci, dist_sci, age, spt_sci, Teff_sci, feh_sci, logg_sci, mag_sci, bp_sci, fov = args_sources[i]
vot = votdir + name_sci.replace(' ' ,'') + '.vot'
args = (name_sci, spt_sci, mag_sci, bp_sci, vot)
kwargs = {'Teff':Teff_sci, 'metallicity':feh_sci, 'log_g':logg_sci}
src = source_spectrum(*args, **kwargs)
src.fit_SED(use_err=False, robust=True, wlim=[0.5,10])
# Final source spectrum
sp_sci = src.sp_model
# Do the same for the reference source
name_ref, spt_ref, Teff_ref, feh_ref, logg_ref, mag_ref, bp_ref = ref_sources[i]
vot = votdir + name_ref.replace(' ' ,'') + '.vot'
args = (name_ref, spt_ref, mag_ref, bp_ref, vot)
kwargs = {'Teff':Teff_ref, 'metallicity':feh_ref, 'log_g':logg_ref}
ref = nrc_utils.source_spectrum(*args, **kwargs)
ref.fit_SED(use_err=False, robust=True, wlim=[2,10])
# Final reference spectrum
sp_ref = ref.sp_model
# Plot spectra
fig, axes = plt.subplots(1,2, figsize=(14,4.5))
src.plot_SED(ax=axes[0], xr=[0.5,30])
ref.plot_SED(ax=axes[1], xr=[0.5,30])
axes[0].set_title('Science Spectra -- {} ({})'.format(src.name, spt_sci))
axes[1].set_title('Reference Spectra -- {} ({})'.format(ref.name, spt_ref))
#for ax in axes:
# ax.set_xscale('linear')
# ax.xaxis.set_minor_locator(AutoMinorLocator())
fig.tight_layout()
fig.subplots_adjust(top=0.85, bottom=0.1 , left=0.05, right=0.97)
fig.savefig(outdir+'{}_SEDs.pdf'.format(name_sci.replace(' ','')))
# Plot the two spectra
fig, ax = plt.subplots(1,1, figsize=(8,5))
xr = [2.5,5.5]
for sp in [sp_sci, sp_ref]:
    w = sp.wave / 1e4
    ind = (w>=xr[0]) & (w<=xr[1])
    ind2 = (w>=3.9) & (w<=4.1)
    sp.convert('Jy')
    f = sp.flux / np.mean(sp.flux[ind2])
    ax.plot(w[ind], f[ind], lw=1, label=sp.name)
    ax.set_ylabel(r'Flux (Jy) normalized at 4 $\mu m$')
    sp.convert('flam')
ax.set_xlim(xr)
ax.set_ylim([0,ax.get_ylim()[1]])
ax.set_xlabel(r'Wavelength ($\mu m$)')
ax.set_title('{} Spectra'.format(sp_sci.name))
# Overplot Filter Bandpass
bp = pynrc.read_filter(*args_filter[-1])
ax2 = ax.twinx()
ax2.plot(bp.wave/1e4, bp.throughput, color='C2', label=bp.name+' Bandpass')
ax2.set_ylim([0,0.8])
ax2.set_xlim(xr)
ax2.set_ylabel('Bandpass Throughput')
ax.legend(loc='upper left')
ax2.legend(loc='upper right')
fig.tight_layout()
fig.savefig(outdir+'{}_2SEDs.pdf'.format(name_sci.replace(' ','')))
# Create a dictionary that holds the obs_coronagraphy class for each filter
wfe_drift = 0
obs_dict = obs_wfe(wfe_drift, args_filter, sp_sci, dist_sci, sp_ref=sp_ref,
wind_mode='WINDOW', subsize=subsize, verbose=False)
do_opt(obs_dict, tacq_max=1000, ng_min=8, well_levels=[1,3,5,10,20])
for key in filt_keys:
    obs = obs_dict[key]
    # Science observations
    if 'F444W' in key:
        read_mode, ng, nint = ('BRIGHT2', 10, 40)
    else:
        read_mode, ng, nint = ('BRIGHT2', 10, 40)
    obs.update_detectors(read_mode=read_mode, ngroup=ng, nint=nint)
    # Reference observations
    if 'F444W' in key:
        read_mode, ng, nint = ('BRIGHT2', 10, 40)
    else:
        read_mode, ng, nint = ('BRIGHT2', 10, 40)
    obs.nrc_ref.update_detectors(read_mode=read_mode, ngroup=ng, nint=nint)
    # print(key)
    # print(obs.multiaccum_times)
    # _ = obs.sensitivity(nsig=5, units='vegamag', verbose=True)
    # print('')
# Saturation Levels
# Only want F444W
#obs = obs_dict[filt_keys[-1]]
#sat_rad = do_sat_levels(obs, plot=True)
# Max Saturation Values
dmax = []
for k in filt_keys:
    print(k)
    obs = obs_dict[k]
    imsci = obs.gen_slope_image(exclude_noise=True, quick_PSF=True)
    imref = obs.gen_slope_image(exclude_noise=True, quick_PSF=True, do_ref=True)
    ng1 = 2
    sci_sat1 = obs.saturation_levels(ngroup=ng1, image=imsci)
    ref_sat1 = obs.saturation_levels(ngroup=ng1, image=imref, do_ref=True)
    ng2 = obs.multiaccum.ngroup
    sci_sat2 = obs.saturation_levels(ngroup=ng2, image=imsci)
    ng3 = obs.nrc_ref.multiaccum.ngroup
    ref_sat2 = obs.saturation_levels(ngroup=ng3, image=imref, do_ref=True)
    print('Max Well NG={}: {:.2f} {:.2f}; Max Well NG=max: {:.2f} {:.2f}'\
        .format(ng1,sci_sat1.max(),ref_sat1.max(),sci_sat2.max(),ref_sat2.max()))
    # Get position of PSF
    xcen, ycen = obs.get_psf_cen()
    # Radius at NG=2
    rho = nrc_utils.dist_image(sci_sat1, center=(xcen,ycen)) # Pixel distances
    sat_mask = sci_sat1 > 0.9
    nsat = np.size(rho[sat_mask])
    rval_sci = rho[sat_mask].max()*obs.pix_scale if nsat>0 else 0
    # Reference PSF
    sat_mask = ref_sat1 > 0.9
    nsat = np.size(rho[sat_mask])
    rval_ref = rho[sat_mask].max()*obs.pix_scale if nsat>0 else 0
    print('rmax = ({:.2f}, {:.2f}) arcsec'.format(rval_sci, rval_ref))
# Determine contrast curves for various WFE drift values
wfe_list = [0,2,5,10]
nsig = 5
roll = 10
# (Roll1 - Ref) + (Roll2 - Ref)
curves_dict = do_contrast(obs_dict, wfe_list, filt_keys[-1:], nsig=nsig, roll_angle=roll)
curves_F444W = curves_dict[filt_keys[-1]]
# Roll1 - Roll2
curves_dict = do_contrast(obs_dict, wfe_list, filt_keys[-1:], nsig=nsig, roll_angle=roll, no_ref=True)
curves_F444W2 = curves_dict[filt_keys[-1]]
pynrc._reload()
import sys
del sys.modules['pynrc.obs_nircam']
pynrc._reload()
import sys
del sys.modules['pynrc.nb_funcs']
# Observation Definitions
from pynrc.nb_funcs import make_key, model_info, obs_wfe, obs_optimize
# Functions to run a series of operations
from pynrc.nb_funcs import do_opt, do_contrast, do_gen_hdus, do_sat_levels
# Plotting routines
from pynrc.nb_funcs import do_plot_contrasts, plot_images, plot_images_swlw
sat_rad = 0
obs = obs_dict[filt_keys[-1]]
fig, axes = do_plot_contrasts(None, curves_F444W2, nsig, wfe_list, obs, 800, age2=400,
sat_rad=sat_rad, jup_mag=False,
save_fig=False, return_fig_axes=True, yscale2='log')
from pynrc.obs_nircam import planet_mags
cvals = ['C2', 'C3']
for i, age_val in enumerate([400, 800]):
    ax = axes[0]
    mass = 1.2
    pmags = planet_mags(obs, age=age_val, mass_list=[mass], av_vals=None, cond=True)
    jmag = pmags[mass][0]
    xr = ax.get_xlim()
    ax.plot(xr, [jmag,jmag], color=cvals[i], ls='--', lw=1)
    txt_mj = u'M$_{Jup}$'
    txt = '{:.1f} {} at {:.1f} pc (COND, {} Myr)'.format(mass, txt_mj, obs.distance, age_val)
    #ax.text(3, jmag, txt, horizontalalignment='left', verticalalignment='bottom')
    pmags = planet_mags(obs, age=age_val, mass_list=[mass], av_vals=None, cond=False, entropy=8)
    jmag = pmags[mass][0]
    xr = ax.get_xlim()
    ax.plot(xr, [jmag,jmag], color=cvals[i], ls='--', lw=1)
    txt_mj = u'M$_{Jup}$'
    txt = '{:.1f} {} at {:.1f} pc (SB12, {} Myr)'.format(mass, txt_mj, obs.distance, age_val)
    #ax.text(3, jmag, txt, horizontalalignment='left', verticalalignment='bottom')
txt_mj = u'M$_{Jup}$'
txt = '{:.1f} {} at {:.1f} pc (Various Models/Ages)'.format(mass, txt_mj, obs.distance)
ax.text(2.5, 15.5, txt, horizontalalignment='left', verticalalignment='bottom')
for ax in axes:
    ax.plot([1.09, 1.09], ax.get_ylim(), ls='--', color='C0', lw=1)
name_sci = obs.sp_sci.name
title_str = '{} (dist = {:.1f} pc) -- {} Contrast Curves'\
.format(name_sci, obs.distance, obs.filter)
fig.suptitle(title_str, fontsize=16)
fname = "{}_contrast_{}_WINDOW.pdf".format(name_sci.replace(" ", ""), obs.mask)
fig.savefig(outdir+fname)
```
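The saturation-radius computation near the end of this cell builds a pixel-distance image, masks pixels above a 90% well-fill threshold, and takes the largest masked radius. The same logic in plain NumPy (a sketch: `dist_image` stands in for `nrc_utils.dist_image`, and the pixel scale used below is illustrative):

```python
import numpy as np

def dist_image(shape, center):
    # Distance of every pixel from `center` = (x, y)
    yy, xx = np.indices(shape)
    return np.hypot(xx - center[0], yy - center[1])

def saturation_radius(sat_frac, center, pix_scale, thresh=0.9):
    # Largest radius (in arcsec) of any pixel above the saturation threshold
    rho = dist_image(sat_frac.shape, center)
    mask = sat_frac > thresh
    return rho[mask].max() * pix_scale if mask.any() else 0.0

sat = np.zeros((11, 11))
sat[5, 5] = 1.0   # saturated pixel at the PSF center
sat[5, 8] = 1.0   # saturated pixel 3 pixels away
rval = saturation_radius(sat, (5, 5), pix_scale=0.063)
```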
```
from google.colab import drive
drive.mount('/content/drive')
# load packages
import pandas as pd
import gc
import tensorflow as tf
import pickle
import numpy as np
import keras
from keras import backend as K
from keras.models import load_model, Model
from keras.layers import Flatten, Dense, Dropout, Activation, Input, LSTM, Reshape, Conv2D, MaxPooling2D
from keras.optimizers import Adam
from keras.layers.advanced_activations import LeakyReLU
from keras.utils import np_utils
import matplotlib.pyplot as plt
# set random seeds
np.random.seed(1)
tf.random.set_seed(2)
```
# Data preparation
```
def prepare_x(data):
    df1 = data[:40, :].T
    return np.array(df1)

def get_label(data):
    lob = data[-5:, :].T
    return lob

def data_classification(X, Y, T):
    [N, D] = X.shape
    df = np.array(X)
    dY = np.array(Y)
    dataY = dY[T - 1:N]
    dataX = np.zeros((N - T + 1, T, D))
    for i in range(T, N + 1):
        dataX[i - T] = df[i - T:i, :]
    return dataX.reshape(dataX.shape + (1,)), dataY
dec_train = np.loadtxt('/content/drive/MyDrive/College/6th sem/DA/DA Mini Project/1.NoAuction_Zscore/NoAuction_Zscore_Training/Train_Dst_NoAuction_ZScore_CF_7.txt')
dec_test3 = np.loadtxt('/content/drive/MyDrive/College/6th sem/DA/DA Mini Project/1.NoAuction_Zscore/NoAuction_Zscore_Testing/Test_Dst_NoAuction_ZScore_CF_7.txt')
dec_test4 = np.loadtxt('/content/drive/MyDrive/College/6th sem/DA/DA Mini Project/1.NoAuction_Zscore/NoAuction_Zscore_Testing/Test_Dst_NoAuction_ZScore_CF_8.txt')
dec_test5 = np.loadtxt('/content/drive/MyDrive/College/6th sem/DA/DA Mini Project/1.NoAuction_Zscore/NoAuction_Zscore_Testing/Test_Dst_NoAuction_ZScore_CF_9.txt')
dec_test = np.hstack((dec_test3, dec_test4, dec_test5))
del dec_test3
del dec_test4
del dec_test5
gc.collect()
# extract limit order book data from the FI-2010 dataset
train_lob = prepare_x(dec_train)
test_lob = prepare_x(dec_test)
# extract label from the FI-2010 dataset
train_label = get_label(dec_train)
test_label = get_label(dec_test)
# prepare training data. We feed the past T observations (here T=10) into our algorithms and choose the prediction horizon.
trainX_CNN, trainY_CNN = data_classification(train_lob, train_label, T=10)
trainY_CNN = trainY_CNN[:,3] - 1
trainY_CNN = np_utils.to_categorical(trainY_CNN, 3)
# prepare test data.
testX_CNN, testY_CNN = data_classification(test_lob, test_label, T=10)
testY_CNN = testY_CNN[:,3] - 1
testY_CNN = np_utils.to_categorical(testY_CNN, 3)
```
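`data_classification` above turns an `(N, D)` feature matrix into overlapping windows of length `T`, pairing each window with the label at its last time step. A quick shape check of that windowing on toy data:

```python
import numpy as np

def make_windows(X, Y, T):
    # Same windowing as data_classification: window i covers rows [i, i+T)
    N, D = X.shape
    dataY = Y[T - 1:N]
    dataX = np.zeros((N - T + 1, T, D))
    for i in range(T, N + 1):
        dataX[i - T] = X[i - T:i, :]
    return dataX.reshape(dataX.shape + (1,)), dataY

X = np.arange(20).reshape(10, 2).astype(float)
Y = np.arange(10)
wX, wY = make_windows(X, Y, T=4)
```

Each of the `N - T + 1` windows keeps a trailing singleton channel axis so it can feed a 2D convolution.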
# Model Architecture
```
def create_deeplob(T, NF, number_of_lstm):
    input_lmd = Input(shape=(T, NF, 1))
    # build the convolutional block
    conv_first1 = Conv2D(32, (1, 2), strides=(1, 2))(input_lmd)
    conv_first1 = keras.layers.LeakyReLU(alpha=0.05)(conv_first1)
    conv_first1 = Conv2D(32, (4, 1), padding='same')(conv_first1)
    conv_first1 = keras.layers.LeakyReLU(alpha=0.05)(conv_first1)
    conv_first1 = Conv2D(32, (4, 1), padding='same')(conv_first1)
    conv_first1 = keras.layers.LeakyReLU(alpha=0.05)(conv_first1)
    conv_first1 = Conv2D(32, (1, 2), strides=(1, 2))(conv_first1)
    conv_first1 = keras.layers.LeakyReLU(alpha=0.05)(conv_first1)
    conv_first1 = Conv2D(32, (4, 1), padding='same')(conv_first1)
    conv_first1 = keras.layers.LeakyReLU(alpha=0.05)(conv_first1)
    conv_first1 = Conv2D(32, (4, 1), padding='same')(conv_first1)
    conv_first1 = keras.layers.LeakyReLU(alpha=0.05)(conv_first1)
    conv_first1 = Conv2D(32, (1, 10))(conv_first1)
    conv_first1 = keras.layers.LeakyReLU(alpha=0.05)(conv_first1)
    conv_first1 = Conv2D(32, (4, 1), padding='same')(conv_first1)
    conv_first1 = keras.layers.LeakyReLU(alpha=0.05)(conv_first1)
    conv_first1 = Conv2D(32, (4, 1), padding='same')(conv_first1)
    conv_first1 = keras.layers.LeakyReLU(alpha=0.05)(conv_first1)
    # build the inception module
    convsecond_1 = Conv2D(64, (1, 1), padding='same')(conv_first1)
    convsecond_1 = keras.layers.LeakyReLU(alpha=0.05)(convsecond_1)
    convsecond_1 = Conv2D(64, (3, 1), padding='same')(convsecond_1)
    convsecond_1 = keras.layers.LeakyReLU(alpha=0.05)(convsecond_1)
    convsecond_2 = Conv2D(64, (1, 1), padding='same')(conv_first1)
    convsecond_2 = keras.layers.LeakyReLU(alpha=0.05)(convsecond_2)
    convsecond_2 = Conv2D(64, (5, 1), padding='same')(convsecond_2)
    convsecond_2 = keras.layers.LeakyReLU(alpha=0.05)(convsecond_2)
    convsecond_3 = MaxPooling2D((3, 1), strides=(1, 1), padding='same')(conv_first1)
    convsecond_3 = Conv2D(64, (1, 1), padding='same')(convsecond_3)
    convsecond_3 = keras.layers.LeakyReLU(alpha=0.05)(convsecond_3)
    convsecond_output = keras.layers.concatenate([convsecond_1, convsecond_2, convsecond_3], axis=3)
    conv_reshape = Reshape((int(convsecond_output.shape[1]), int(convsecond_output.shape[3])))(convsecond_output)
    # build the last LSTM layer
    conv_lstm = LSTM(number_of_lstm)(conv_reshape)
    # build the output layer
    out = Dense(3, activation='softmax')(conv_lstm)
    model = Model(inputs=input_lmd, outputs=out)
    adam = keras.optimizers.Adam(lr=0.005, beta_1=0.9, beta_2=0.999, epsilon=1)
    model.compile(optimizer=adam, loss='categorical_crossentropy', metrics=['accuracy'])
    return model

deeplob = create_deeplob(10, 40, 64)
deeplob = create_deeplob(10, 40, 64)
```
# Model Training and Testing
```
deeplob.fit(trainX_CNN, trainY_CNN, epochs=100, batch_size=64, verbose=2, validation_data=(testX_CNN, testY_CNN))
from sklearn.metrics import classification_report
y_pred = deeplob.predict(testX_CNN, batch_size=64, verbose=2)
y_pred_bool = np.argmax(y_pred, axis=1)
round_testy = np.argmax(testY_CNN, axis=1)
print(classification_report(round_testy, y_pred_bool))
```
# Train a binary classification model
In this tutorial, we walk through a simple binary classification problem using PyCaret.
## Install required packages
```
!pip install --upgrade pycaret scikit-plot
```
## Setup cloud tracking
[Mlflow](https://github.com/mlflow/mlflow) is a great tool for local ML experimentation tracking. However, using it alone is like using git without GitHub. Your Azure Machine Learning workspace can easily be used to setup a remote tracking URI for mlflow:
```
import mlflow
from azureml.core import Workspace
ws = Workspace.from_config()
mlflow.set_tracking_uri(ws.get_mlflow_tracking_uri())
```
## Load a pandas.DataFrame
The PyCaret datasets module contains many sample datasets. Try replacing with your own data!
```
from pycaret.datasets import get_data
df = get_data("credit")
df
df.shape
```
## Split data
Split the data into training data (for modeling) and test data (for prediction):
```
data = df.sample(frac=0.95, random_state=42)
data_unseen = df.drop(data.index)
data.reset_index(inplace=True, drop=True)
data_unseen.reset_index(inplace=True, drop=True)
print("Data for modeling: " + str(data.shape))
print("Unseen data for predictions: " + str(data_unseen.shape))
```
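The split keeps 95% of the rows for modeling and holds the rest back for final predictions. The same idea expressed with reproducible NumPy index shuffling (a sketch, independent of pandas):

```python
import numpy as np

def holdout_split(n_rows, frac=0.95, seed=42):
    # Shuffle row indices reproducibly, keep `frac` of them for modeling
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_rows)
    n_model = int(round(n_rows * frac))
    return idx[:n_model], idx[n_model:]

model_idx, unseen_idx = holdout_split(1000)
```

Fixing the seed makes the split repeatable, which matters when comparing runs of the experiment.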
## Setup PyCaret
The `setup()` function initializes the environment in pycaret and creates the transformation pipeline to prepare the data for modeling and deployment.
`setup()` must be called before executing any other function in pycaret.
It takes two mandatory parameters: a `pandas.DataFrame` and the name of the target column. All other parameters are optional.
Refer to the [PyCaret documentation](https://pycaret.readthedocs.io/en/stable/) for details.
```
from pycaret.classification import *
exp = setup(
data=data,
target="default",
log_experiment=True,
experiment_name="automl-with-pycaret-tutorial",
log_plots=True,
log_profile=True,
silent=True, # set to False for interactively setting data types
)
models()
```
## Run AutoML
Run a series of trials to find the best model.
```
%%time
best_model = compare_models()
print(best_model)
```
## Evaluate model
Evaluate the best model.
```
evaluate_model(best_model)
```
## Test model
Evaluate the best model on unseen data.
```
unseen_predictions = predict_model(best_model, data=data_unseen)
unseen_predictions.head()
from pycaret.utils import check_metric
check_metric(
unseen_predictions.default,
unseen_predictions.Label.astype(int),
"Accuracy",
)
```
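The accuracy reported by `check_metric` is simply the fraction of rows where the predicted `Label` matches the true `default` column; in plain NumPy:

```python
import numpy as np

def accuracy(y_true, y_pred):
    # Fraction of positions where prediction matches the label
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    return float((y_true == y_pred).mean())
```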
# Mount Drive & Login to Wandb
```
from google.colab import drive
from getpass import getpass
import urllib
import os
# Mount drive
drive.mount('/content/drive')
!pip install wandb -qqq
!wandb login
```
# Install dependencies
```
!rm -r pearl
!git clone https://github.com/PAL-ML/PEARL_v1.git pearl
%cd pearl
!pip install -r requirements.txt
%cd ..
!pip install git+git://github.com/ankeshanand/pytorch-a2c-ppo-acktr-gail
!pip install git+git://github.com/mila-iqia/atari-representation-learning.git
!pip install torch==1.7.1{torch_version_suffix} torchvision==0.8.2{torch_version_suffix} -f https://download.pytorch.org/whl/torch_stable.html ftfy regex
!pip install ftfy regex tqdm
!pip install git+https://github.com/openai/CLIP.git
!pip install git+git://github.com/openai/baselines
! wget http://www.atarimania.com/roms/Roms.rar
! unrar x Roms.rar
! unzip ROMS.zip
! python -m atari_py.import_roms /content/ROMS
```
# Imports
```
# ML libraries
import torch.nn as nn
import torch
import pearl.src.benchmark.colab_data_preprocess as data_utils
from pearl.src.benchmark.probe_training_wrapper import run_probe_training
from pearl.src.benchmark.utils import appendabledict
# Models
import clip
# Data processing
from PIL import Image
from torchvision.transforms import Compose, Resize, Normalize
# Misc
import numpy as np
import wandb
import os
# Plotting
import seaborn as sns
import matplotlib.pyplot as plt
```
# Helper functions
```
class ClipEncoder(nn.Module):
    def __init__(self, input_channels, feature_size):
        super().__init__()
        self.device = "cuda" if torch.cuda.is_available() else "cpu"
        self.clip_model, _ = clip.load("ViT-B/32", device=self.device, jit=False)
        self.preprocess = Compose([
            Resize((224, 224), interpolation=Image.BICUBIC),
            Normalize(
                (0.48145466, 0.4578275, 0.40821073),
                (0.26862954, 0.26130258, 0.27577711)
            )
        ])
        self.feature_size = feature_size
        self.input_channels = input_channels

    def forward(self, inputs):
        x = self.get_clip_features(inputs)
        x = x.view(x.size(0), -1)
        return x

    def get_clip_features(self, image):
        with torch.no_grad():
            image_features = self.clip_model.encode_image(self.preprocess(image)).float()
        return image_features
```
# Initialization & constants
General
```
env_name = "BreakoutNoFrameskip-v4"
collect_mode = "random_agent" # random_agent or ppo_agent
steps = 50000
training_input = "embeddings" # embeddings or images
probe_type = "linear"
use_encoder = False
input_resolution = "4x4patch" # full-image, 4x4patch, 2x2patch
num_patches = 16
```
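With `input_resolution = "4x4patch"` and `num_patches = 16`, each frame is split into a 4×4 grid of patches and every patch is embedded separately, which is why `feature_size` below is `512 * num_patches`. A NumPy sketch of such a patch split (illustrative only; the actual preprocessing lives in PEARL's data utilities):

```python
import numpy as np

def split_into_patches(img, grid=4):
    # img: (H, W, C) with H and W divisible by `grid`
    H, W, C = img.shape
    ph, pw = H // grid, W // grid
    # Reshape to (gy, ph, gx, pw, C), bring grid axes together, flatten them
    patches = (img.reshape(grid, ph, grid, pw, C)
                  .transpose(0, 2, 1, 3, 4)
                  .reshape(grid * grid, ph, pw, C))
    return patches

img = np.arange(8 * 8 * 3).reshape(8, 8, 3)
patches = split_into_patches(img, grid=4)
```

Patches are ordered row-major: patch 0 is the top-left block, patch 1 its right-hand neighbour, and so on.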
Encoder Params
```
feature_size = 512 * num_patches
input_channels = 3
```
Probe Params
```
lr = 3e-4
batch_size = 64
num_epochs = 100
patience = 15
probe_name = input_resolution + "-" + probe_type
```
Paths
```
data_path_suffix = "-latest-04-05-2021"
models_dir = os.path.join("drive/MyDrive/PAL_HILL_2021/Atari-RL/Models/probes", probe_name, env_name)
data_dir = os.path.join("/content/drive/MyDrive/PAL_HILL_2021/Atari-RL/Images_Labels_Clip_embeddings", env_name + data_path_suffix)
if not os.path.exists(models_dir):
    os.makedirs(models_dir)
```
Wandb
```
wandb.init(project='atari-clip')
config = wandb.config
config.game = "{}-4x4patch-Linear".format(env_name.replace("NoFrameskip-v4", ""))
wandb.run.name = "{}_Linear_4x4patch_may_07".format(env_name.replace("NoFrameskip-v4", ""))
```
# Get episode data
```
tr_episodes, val_episodes,\
tr_labels, val_labels,\
test_episodes, test_labels = data_utils.get_data(training_input, data_dir, env_name=env_name, steps=steps, collect_mode=collect_mode, color=True, input_resolution=input_resolution)
tr_episodes = data_utils.concatcat_patch_embeddings(tr_episodes, num_patches=num_patches)
val_episodes = data_utils.concatcat_patch_embeddings(val_episodes, num_patches=num_patches)
test_episodes = data_utils.concatcat_patch_embeddings(test_episodes, num_patches=num_patches)
```
# Load model
```
encoder = ClipEncoder(input_channels=input_channels,feature_size=feature_size)
```
# Run probe training
```
run_probe_training(training_input, encoder, probe_type, num_epochs, lr, patience, wandb, models_dir, batch_size,
tr_episodes, val_episodes, tr_labels, val_labels, test_episodes, test_labels, use_encoder=use_encoder)
```
```
import os
theta_correct("/home/michael/msc/mcmd/bigbox1",isnap=2)
def insert_theta(line,th):
    p = " ".join(line.split()[:4])
    q = " ".join(line.split()[5:])
    return p+" "+str(th)+" "+q+"\n"
import numpy as np
import os
cos, sin = np.cos, np.sin
twopi = np.pi*2.0
def theta_correct(fname,isnap=None,inplace=False):
    # readjust all theta values in a file
    foutname = fname+"_thcorr"
    if isnap: inplace = False
    # Count num blocks
    Nblock = 0
    fin = open(fname, "r")
    for line in fin.readlines():
        if line == "\n": Nblock+=1
    fin.seek(0)
    ln = fin.readline().split("|")
    nrod = 0
    edge = 0.
    Nx = 0
    for s in ln:
        if "boxEdge" in s:
            edge = float(s.split()[1])
        if "nObj" in s:
            nrod = int(s.split()[1])
        if "cellNx" in s:
            Nx = int(s.split()[1])
    fin.seek(0)
    thgrid = np.zeros(shape=(Nx,Nx))
    cell2grid = {}
    for i in range(Nx*Nx):
        xi = i//Nx
        yi = i%Nx
        cell2grid.update({i:[xi,yi]})
    # dict of corrected refs so that in subsequent blocks
    # we only need to compare it with the ref
    threfs = {}
    neighbors = {}
    snaps = []
    if isnap:
        snaps = [isnap]
        foutname = foutname+"_"+str(isnap)
    else:
        for s in range(0,Nblock):
            snaps.append(s)
    fout = open(foutname, 'w')
    l = fin.readline()
    if l[0].isalpha():
        fout.write(l)
    else:
        fin.seek(0)
    xs,ys,thetas = [],[],[]
    rids, cellids = [], []
    blocklines = []
    tmpthetas = []
    cntsnap = 0
    lblstring = None
    #
    # First build up threfs based on first image
    #
    for line in fin.readlines():
        if cntsnap not in snaps:
            if line == "\n": cntsnap+=1
            continue
        else:
            if line == "\n":
                # Done the block
                for x,y,th,r,c in zip(xs,ys,thetas,rids,cellids):
                    # th is initially in range [0,2pi]
                    # arctan2 outputs in range [-pi,pi]
                    # difference between arctan and arctan2 is that
                    # arctan only "folds back" the range [-pi/2,pi/2]
                    if th > np.pi: th = -twopi + th  # th: [-pi,pi]
                    xi,yi = cell2grid[c]
                    if not thgrid[xi,yi]:
                        # First rod in cell, define direction
                        # get it a 'safe' distance from modpoint for
                        # angle comparisons later
                        if th > np.pi*0.5: th -= np.pi
                        if th < -np.pi*0.5: th += np.pi
                        # now th is [-pi/2,pi/2]
                        thgrid[xi,yi] = th
                    else:
                        thref = thgrid[xi,yi]
                        if (th > thref) and ((th - thref) > 0.25*twopi): th -= 0.5*twopi
                        if (th < thref) and ((thref - th) > 0.25*twopi): th += 0.5*twopi
                    # return theta to [0,2pi] range
                    th += np.pi
                    threfs.update({r: th})
                    tmpthetas.append(th)
                # Reset arrays
                xs = []
                ys = []
                thetas = []
                break
            blocklines.append(line)
            spt = [float(x) for x in line.split()]
            xs.append(spt[2])
            ys.append(spt[3])
            thetas.append(spt[4])
            rids.append(int(spt[0]))
            cellids.append(int(spt[1]))
    # If given isnap, we are done
    if isnap:
        ith = 0
        for l in blocklines:
            fout.write(insert_theta(l,tmpthetas[ith]))
            ith+=1
        fin.close()
        fout.close()
        return
    xs,ys,thetas = [],[],[]
    blocklines = []
    #
    # Continue on updating rest of the file
    #
    fin.close()
    fin = open(fname,'r')
    l = fin.readline()
    if not l[0].isalpha(): fin.seek(0)
    for line in fin.readlines():
        if line.startswith("label") or line == "\n":
            fout.write(line)
            continue
        spt = [float(x) for x in line.split()]
        th = spt[4]
        if th > np.pi: th = -twopi+th
        r = int(spt[0])
        th_ = threfs[r]
        if th_ > np.pi: th_ = -twopi+th_
        if (th > th_) and ((th - th_) > 0.25*twopi): th -= 0.5*twopi
        if (th < th_) and ((th_ - th) > 0.25*twopi): th += 0.5*twopi
        # return theta to [0,2pi] range
        th += np.pi
        threfs[r] = th
        fout.write(insert_theta(line,th))
    fin.close()
    fout.close()
    print("Done correcting")
    # if inplace = True and isnap=None, overwrite fname
    if inplace and not isnap:
        print("Overwriting", fname)
        os.rename(foutname, fname)
    else:
        print("Writing to", foutname)
```
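The heart of `theta_correct` is a branch-selection rule: angles stored in `[0, 2π)` are mapped to `[-π, π]`, and whenever an angle differs from its cell's reference by more than a quarter turn it is shifted by ±π, so physically aligned rods end up with numerically close angles. The rule in isolation:

```python
import numpy as np

TWOPI = 2.0 * np.pi

def correct_theta(th, thref):
    # th, thref in [-pi, pi]; snap th onto the pi-branch nearest thref
    if th > thref and (th - thref) > 0.25 * TWOPI:
        th -= 0.5 * TWOPI
    if th < thref and (thref - th) > 0.25 * TWOPI:
        th += 0.5 * TWOPI
    return th
```

Because a rod's orientation is only defined modulo π, shifting by half a turn changes nothing physically but removes spurious jumps in the stored angle.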
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/Gena/map_set_center.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/Gena/map_set_center.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://mybinder.org/v2/gh/giswqs/earthengine-py-notebooks/master?filepath=Gena/map_set_center.ipynb"><img width=58px src="https://mybinder.org/static/images/logo_social.png" />Run in binder</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/Gena/map_set_center.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.
The following script checks if the geehydro package has been installed. If not, it will install geehydro, which automatically installs its dependencies, including earthengine-api and folium.
```
import subprocess
try:
import geehydro
except ImportError:
print('geehydro package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geehydro'])
```
Import libraries
```
import ee
import folium
import geehydro
```
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once.
```
try:
    ee.Initialize()
except Exception as e:
    ee.Authenticate()
    ee.Initialize()
```
## Create an interactive map
This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function.
The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
```
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
```
## Add Earth Engine Python script
```
# Add some data to the Map
dem = ee.Image("JAXA/ALOS/AW3D30_V1_1").select('MED')
Map.addLayer(dem, {'min': 0, 'max': 5000, 'palette': ['000000', 'ffffff'] }, 'DEM', True)
# TEST Map.setCenter
Map.setCenter(0, 28, 2.5)
```
## Display Earth Engine data layers
```
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
```
```
%%capture
# Compile and import local pyrossgeo module
import os, sys
owd = os.getcwd()
os.chdir('../../')
sys.path.insert(0,'../../')
!python setup.py build_ext --inplace
os.chdir(owd)
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import pyrossgeo
import pandas as pd
import json
```
We will generate most of the simulation configuration files based on parameters we feed into the simulation.
TODO notes for this notebook:
- Add warnings about RAM usage
- Allow names in the node populations
- Explain how the node populations and commuter network were constructed
- Explain which age groups we use
- Mention that we set `min_num_moving` quite high here for demonstration purposes
- Node populations
- Commuter network
- Generate contact matrices using a formula
- Note that the units of the contact matrices are per hour, rather than per day
- Contact matrices: use different ones for school
- Write up nicely
- Plot density dependence, together with the population
- Make a Python version
- Make a UK version
- Clean up the rest of the notebooks
- Update the config documentation (node parameters can contain NaNs)
- Only compute the necessary contact matrices at each node, and allow the user to update them
# Generate the configuration files
### Define model
```
model = {
"settings" : {
"classes" : ["S", "E", "A", "I", "R"],
"stochastic_threshold_from_below" : [1000, 1000, 1000, 1000, 1000],
"stochastic_threshold_from_above" : [500, 500, 500, 500, 500],
"infection_scaling" : "powerlaw",
"infection_scaling_parameters" : [0, 0.004, 0.5] # a + b * rho^c
},
"S" : {
"linear" : [],
"infection" : [ ["I", "-betaI"], ["A", "-betaA"] ]
},
"E" : {
"linear" : [ ["E", "-gammaE"] ],
"infection" : [ ["I", "betaI"], ["A", "betaA"] ]
},
"A" : {
"linear" : [ ["E", "gammaE"], ["A", "-gammaA"] ],
"infection" : []
},
"I" : {
"linear" : [ ["A", "gammaA"], ["I", "-gammaI"] ],
"infection" : []
},
"R" : {
"linear" : [ ["I", "gammaI"] ],
"infection" : []
}
}
model_classes = model['settings']['classes']
model_dim = len(model_classes)
```
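The dictionary encodes an S→E→A→I→R compartment model: susceptibles are infected by contact with both `I` and `A`, and individuals progress through `E`→`A`→`I`→`R` at rates `gammaE`, `gammaA`, `gammaI`. As deterministic well-mixed rate equations this reads as follows (a minimal Euler sketch, ignoring the spatial/commuting structure and the power-law infection scaling that pyrossgeo applies):

```python
import numpy as np

def seair_step(state, dt, betaI, betaA, gammaE, gammaA, gammaI):
    # state = [S, E, A, I, R]; one explicit Euler step of the rate equations
    S, E, A, I, R = state
    N = state.sum()
    infection = (betaI * I + betaA * A) * S / N
    dS = -infection
    dE = infection - gammaE * E
    dA = gammaE * E - gammaA * A
    dI = gammaA * A - gammaI * I
    dR = gammaI * I
    return state + dt * np.array([dS, dE, dA, dI, dR])

# Illustrative run: 990 susceptibles, 10 exposed, parameters of our choosing
state = np.array([990.0, 10.0, 0.0, 0.0, 0.0])
for _ in range(1000):
    state = seair_step(state, 0.1, 0.4, 0.4, 1/3, 1/3, 1/3)
```

Note that the derivatives sum to zero, so the total population is conserved at every step.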
### Configuration generation parameters
Here we define some parameters with which all the configuration files will be generated. Edit these if you want to change the simulation.
```
sim_config_path = 'london_simulation'
min_num_moving = 20 # Remove all commuting edges where less than `min_num_moving` are moving
# Decide which classes are allowed to commute
allow_class = [
('S', True),
('E', True),
('A', True),
('Ia1', True),
('Ia2', True),
('Ia3', True),
('Is1', True),
('Is2', False),
('Is3', False),
('R', True),
]
# Decide where to seed with infecteds
seed_pop = [
(0, 1, 'E', 100), # Home, age group, model class, seed quantity
(10, 2, 'E', 100),
(23, 0, 'E', 100),
(622, 4, 'E', 100),
(232, 4, 'E', 100)
]
# Node parameters
n_betaI = 0.02
n_betaA = 0.02
n_gammaE = 1/3.0
n_gammaA = 1/3.0
n_gammaI = 1/3.0
# Cnode parameters
cn_betaI = n_betaI
cn_betaA = n_betaA
cn_gammaE = n_gammaE
cn_gammaA = n_gammaA
cn_gammaI = n_gammaI
# Time steps
t_start = 0
t_end = 24*60*100
_, dts = pyrossgeo.utils.get_dt_schedule([
(0, 1*60),
(7*60, 2),
(10*60, 2*60),
(17*60, 2),
(19*60, 2*60)
], end_time=24*60)
```
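`get_dt_schedule` produces a non-uniform time grid: 2-minute steps during the morning (7:00–10:00) and evening (17:00–19:00) commute windows and coarser steps otherwise, repeated daily. A sketch of how such a schedule can be expanded into per-step `dt`s (an illustrative re-implementation, not the pyrossgeo source; segment boundaries are clipped so they are always hit exactly):

```python
import numpy as np

def expand_dt_schedule(schedule, end_time):
    # schedule: list of (start_minute, dt); each dt applies from its start
    # until the next entry's start, and the last entry runs until end_time.
    starts = [s for s, _ in schedule[1:]] + [end_time]
    ts = []
    t = schedule[0][0]
    for (start, dt), nxt in zip(schedule, starts):
        while t < nxt:
            ts.append(t)
            t = min(t + dt, nxt)   # clip so segment boundaries are hit exactly
    ts.append(end_time)
    ts = np.asarray(ts, dtype=float)
    return ts, np.diff(ts)

ts, dts = expand_dt_schedule(
    [(0, 60), (7 * 60, 2), (10 * 60, 2 * 60), (17 * 60, 2), (19 * 60, 2 * 60)],
    end_time=24 * 60)
```

The resulting `dts` cover exactly one 24-hour day, with the smallest steps concentrated in the commute windows.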
### Format the commuting network
```
cn = pd.read_csv("%s/commuter_networks.csv" % sim_config_path)
#### Set which classes are allowed to commute
# Drop the current allow_O columns
cn = cn.iloc[:,:10]
# Set allow settings
for O, allow_O in allow_class:
    cn[ "Allow %s" % O ] = 1 if allow_O else 0
# Allow people to return home
cn.loc[ cn['Home'] == cn['To'],"Allow %s" % allow_class[0][0]:] = 1
#### Remove commuting edges where fewer than `min_num_moving` people are commuting
delete_rows = []
for i, row in cn.loc[ cn['Home'] == cn['From'] ].iterrows():
    if row['# to move'] < min_num_moving:
        delete_rows.append(i)
        delete_rows.append(i+1) # Delete the returning commuting edge as well
cn = cn.reset_index()
cn = cn.drop(delete_rows)
cn = cn.drop(columns='index')
cn.loc[cn['ct1'] == cn['ct2'], 'ct2'] += 0.1
cn.head()
```
### Populate the network
Our `node_populations.csv` currently only has the total population for each age group at each node. In order to use it for the simulation, we must populate it with the model classes, as well as seed some infections.
```
tot_pop = pd.read_csv("%s/node_populations.csv" % sim_config_path)
tot_pop.head()
# Create all model classes, and set everyone to be susceptible
npop = pd.DataFrame()
npop['Home'] = tot_pop['Home']
npop['Location'] = tot_pop['Location']
for _cn, _cd in tot_pop.iloc[:,2:].iteritems():
    for O in model['settings']['classes']:
        npop["%s%s" % (O, _cn[1:])] = 0
    npop["%s%s" % ("S", _cn[1:])] = _cd
# Seed with infecteds
for home, age, O, seed_quantity in seed_pop:
    row_i = npop[npop['Home'] == home].index[0]
    col_i = 2 + age*model_dim
    S = npop.iloc[row_i,col_i]
    npop.iloc[row_i, col_i + model_classes.index('E')] = seed_quantity
    npop.iloc[row_i, col_i] -= seed_quantity
```
### Setting the node and cnode parameters
We need to add rows giving the model parameters in `node_parameters.csv` and `cnode_parameters.csv`, which currently only has the areas of each geographical node:
```
nparam = pd.read_csv('london_simulation/node_parameters.csv')
cnparam = pd.read_csv('london_simulation/cnode_parameters.csv')
nparam.head()
cnparam['betaI'] = cn_betaI
cnparam['betaA'] = cn_betaA
cnparam['gammaE'] = cn_gammaE
cnparam['gammaA'] = cn_gammaA
cnparam['gammaI'] = cn_gammaI
# DataFrame.append was removed in pandas 2.0; pd.concat is the equivalent
nparam = pd.concat([nparam, pd.DataFrame([{
    'Home' : 'ALL',
    'Location' : 'ALL',
    'Age' : 'ALL',
    'betaI' : n_betaI,
    'betaA' : n_betaA,
    'gammaE' : n_gammaE,
    'gammaA' : n_gammaA,
    'gammaI' : n_gammaI,
}])], ignore_index=True)
nparam.iloc[-2:-1,:]
```
### Contact matrices
Define the contact matrices
```
C_home = np.array( [
[5.0,4.83,4.69,4.58,4.48,4.4,4.33,4.28,4.23],
[4.83,5.0,4.83,4.69,4.58,4.48,4.4,4.33,4.28],
[4.69,4.83,5.0,4.83,4.69,4.58,4.48,4.4,4.33],
[4.58,4.69,4.83,5.0,4.83,4.69,4.58,4.48,4.4],
[4.48,4.58,4.69,4.83,5.0,4.83,4.69,4.58,4.48],
[4.4,4.48,4.58,4.69,4.83,5.0,4.83,4.69,4.58],
[4.33,4.4,4.48,4.58,4.69,4.83,5.0,4.83,4.69],
[4.28,4.33,4.4,4.48,4.58,4.69,4.83,5.0,4.83],
[4.23,4.28,4.33,4.4,4.48,4.58,4.69,4.83,5.0],
] )
C_school = np.array( [
[8.0,7.83,7.69,0.25,0.19,0.15,0.12,0.1,0.09],
[7.83,8.0,7.83,0.26,0.19,0.15,0.12,0.1,0.09],
[7.69,7.83,8.0,0.26,0.19,0.15,0.12,0.11,0.09],
[0.25,0.26,0.26,0.27,0.2,0.15,0.13,0.11,0.09],
[0.19,0.19,0.19,0.2,0.2,0.16,0.13,0.11,0.09],
[0.15,0.15,0.15,0.15,0.16,0.16,0.13,0.11,0.09],
[0.12,0.12,0.12,0.13,0.13,0.13,0.13,0.11,0.1],
[0.1,0.1,0.11,0.11,0.11,0.11,0.11,0.11,0.1],
[0.09,0.09,0.09,0.09,0.09,0.09,0.1,0.1,0.1]
])
C_work = np.array( [
[0.08,0.07,0.07,0.07,0.07,0.07,0.07,0.07,0.07],
[0.07,0.09,0.08,0.08,0.08,0.08,0.08,0.08,0.08],
[0.07,0.08,0.1,0.1,0.09,0.09,0.09,0.09,0.09],
[0.07,0.08,0.1,0.12,0.12,0.11,0.11,0.11,0.11],
[0.07,0.08,0.09,0.12,0.15,0.15,0.14,0.14,0.14],
[0.07,0.08,0.09,0.11,0.15,0.2,0.19,0.19,0.19],
[0.07,0.08,0.09,0.11,0.14,0.19,6.0,5.83,5.69],
[0.07,0.08,0.09,0.11,0.14,0.19,5.83,6.0,5.83],
[0.07,0.08,0.09,0.11,0.14,0.19,5.69,5.83,6.0]
])
C_transport = np.array( [
[10.0,10.0,10.0,10.0,10.0,10.0,10.0,10.0,10.0],
[10.0,10.0,10.0,10.0,10.0,10.0,10.0,10.0,10.0],
[10.0,10.0,10.0,10.0,10.0,10.0,10.0,10.0,10.0],
[10.0,10.0,10.0,10.0,10.0,10.0,10.0,10.0,10.0],
[10.0,10.0,10.0,10.0,10.0,10.0,10.0,10.0,10.0],
[10.0,10.0,10.0,10.0,10.0,10.0,10.0,10.0,10.0],
[10.0,10.0,10.0,10.0,10.0,10.0,10.0,10.0,10.0],
[10.0,10.0,10.0,10.0,10.0,10.0,10.0,10.0,10.0],
[10.0,10.0,10.0,10.0,10.0,10.0,10.0,10.0,10.0]
])
contact_matrices = {
'C' : C_home + C_school + C_work,
'C_commute' : C_transport
}
ncm = pd.DataFrame(columns=['Home', 'Location'] + model['settings']['classes'])
ncm = pd.concat([ncm, pd.DataFrame([{
    'Home' : 'ALL',
    'Location' : 'ALL',
    'A' : 'C',
    'I' : 'C'
}])], ignore_index=True)
cncm = pd.DataFrame(columns=['Home', 'From', 'To'] + model['settings']['classes'])
cncm = pd.concat([cncm, pd.DataFrame([{
    'Home' : 'ALL',
    'From' : 'ALL',
    'To' : 'ALL',
    'A' : 'C_commute',
    'I' : 'C_commute'
}])], ignore_index=True)
```
## Run simulation
```
sim = pyrossgeo.Simulation()
X_state = sim.initialize(
model_dat = model,
commuter_networks_dat = cn,
node_populations_dat = npop,
node_parameters_dat = nparam,
cnode_parameters_dat = cnparam,
contact_matrices_dat = contact_matrices,
node_cmatrices_dat = ncm,
cnode_cmatrices_dat = cncm
)
sim_data = sim.simulate(X_state, t_start, t_end, dts, steps_per_save=len(dts), steps_per_print=len(dts))
ts, node_data, cnode_data, location_data, community_data, network_data = pyrossgeo.utils.extract_simulation_data(sim_data)
ts_days = ts / (24*60)
ts_hours = ts / 60
```
## Plot the result
Plot the evolution of the whole network
```
plt.figure( figsize=(8,3) )
S = np.sum(network_data[:,:,0], axis=1)
E = np.sum(network_data[:,:,1], axis=1)
A = np.sum(network_data[:,:,2], axis=1)
I = np.sum(network_data[:,:,3], axis=1)
R = np.sum(network_data[:,:,4], axis=1)
plt.plot(ts_days, S, label="S")
plt.plot(ts_days, E, label="E")
plt.plot(ts_days, A, label="A")
plt.plot(ts_days, I, label="I")
plt.plot(ts_days, R, label="R")
plt.legend(loc='upper right', fontsize=12)
plt.xlabel('Days')
```
### Plotting the result using GeoPandas
Assemble geo data and define helper functions. Edit `plot_frame` to change the format of the video.
```
import pickle
import tempfile
import geopandas as gpd
from geopandas.plotting import plot_polygon_collection
from matplotlib import animation
# Simulation data
N_ = np.sum(location_data[:,:,:,:], axis=(1,2))
S_ = np.sum(location_data[:,:,0,:], axis=1)
E_ = np.sum(location_data[:,:,1,:], axis=1)
A_ = np.sum(location_data[:,:,2,:], axis=1)
I_ = np.sum(location_data[:,:,3,:], axis=1)
R_ = np.sum(location_data[:,:,4,:], axis=1)
s_ = S_ / N_
e_ = E_ / N_
a_ = A_ / N_
i_ = I_ / N_
r_ = R_ / N_
ts_days = pyrossgeo.utils.extract_ts(sim_data) / (24*60)
epi_data = np.sum(np.array([ # Used to plot pandemic curves
S_,E_,A_,I_,R_
]), axis=2)
# Load geometry
geometry_node_key = 'msoa11cd'
geometry = gpd.read_file("../geodata/london_geo/london_msoa_shapes/Middle_Layer_Super_Output_Areas_December_2011_Boundaries_EW_BGC.shp")
loc_table = pd.read_csv('london_simulation/loc_table.csv')
loc_table_loc_col = loc_table.columns[0]
loc_table_loc_key_col = loc_table.columns[1]
geometry = geometry[ geometry[geometry_node_key].isin(loc_table.iloc[:,1]) ] # Remove locations in geometry that are not in loc_table
geometry = geometry.merge(loc_table, left_on=geometry_node_key, right_on=loc_table_loc_key_col) # Add location indices
geometry = geometry.sort_values(by=loc_table_loc_col) # Sort them by location indices
# Edit this function to adjust the layout of the video
def plot_frame(ti, close_plot=False, tmp_save=None):
    fig, axes = plt.subplots(ncols=3, nrows=2, gridspec_kw={'width_ratios':[1, 1, 1.3]}, figsize=(18, 14))
    geometry['S'] = s_[ti,:]
    geometry['E'] = e_[ti,:]
    geometry['A'] = a_[ti,:]
    geometry['I'] = i_[ti,:]
    geometry['R'] = r_[ti,:]
    plot_geo(geometry, axes[0,0], vmin=0, vmax=1, value_key='S', title="Susceptible", legend=False)
    plot_geo(geometry, axes[0,1], vmin=0, vmax=1, value_key='E', title="Exposed", legend=False)
    plot_geo(geometry, axes[0,2], vmin=0, vmax=1, value_key='A', title="Activated", legend=True)
    plot_geo(geometry, axes[1,0], vmin=0, vmax=1, value_key='I', title="Infected", legend=False)
    plot_geo(geometry, axes[1,1], vmin=0, vmax=1, value_key='R', title="Recovered", legend=False)
    plot_epi(axes[1,2], ti, ts_days, epi_data, ['S','E','A','I','R'])
    fig.tight_layout(rect=[0, 0.03, 1, 0.92])
    fig.suptitle("SEAIR Model - Day %s" % ti, fontsize=18)
    if tmp_save is not None:
        plt.savefig(tmp_save.name + '/%s.png' % ti)
    if close_plot:
        plt.close(fig)
    if tmp_save is not None:
        return tmp_save.name + '/%s.png' % ti
# Helper functions for plotting
def plot_geo(geometry, ax, vmin, vmax, value_key='val', title="", legend=True, legend_label='', cax=None, axis_on=False):
    if legend:
        if cax is None:
            geometry.plot(column=value_key, ax=ax, vmin=vmin, vmax=vmax, legend=True, legend_kwds={'label': legend_label})
        else:
            geometry.plot(column=value_key, ax=ax, cax=cax, vmin=vmin, vmax=vmax, legend=True, legend_kwds={'label': legend_label})
    else:
        geometry.plot(column=value_key, ax=ax, cax=cax, vmin=vmin, vmax=vmax, legend=False)
    ax.set_title(title)
    if not axis_on:
        ax.set_axis_off()
def plot_epi(ax, ti, ts, epi_data, epi_data_labels):
    for oi in range(epi_data.shape[0]):
        ax.plot(ts[:ti], epi_data[oi,:ti], label=epi_data_labels[oi])
    ax.legend(loc='center left')
    ax.set_xlim(np.min(ts), np.max(ts)) # use the ts argument, not the global ts_days
    ax.set_ylim(0, np.max(epi_data))
```
Plot the pandemic at a given day
```
day = 50
geometry['S'] = s_[day,:]
geometry['E'] = e_[day,:]
geometry['A'] = a_[day,:]
geometry['I'] = i_[day,:]
geometry['R'] = r_[day,:]
fig, ax = plt.subplots(figsize=(7, 5))
plot_geo(geometry, ax, vmin=0, vmax=1, value_key='S', title='Susceptibles at day %s' % day)
day = 50
plot_frame(day)
```
Create a video of the pandemic
```
tmp_dir = tempfile.TemporaryDirectory()
frames_paths = []
for ti in range(len(ts)):
    if ti % 1 == 0: # render every frame; raise the modulus to skip frames
        print("Frame %s of %s" % (ti, len(ts)))
        frame_path = plot_frame(ti, close_plot=True, tmp_save=tmp_dir)
        frames_paths.append(frame_path)
import cv2
video_name = 'sim_video.mp4'
frame = cv2.imread(frames_paths[0])
height, width, layers = frame.shape
fps = 6
#codec=cv2.VideoWriter_fourcc('D', 'I', 'V', 'X')
codec=cv2.VideoWriter_fourcc(*'DIVX')
video = cv2.VideoWriter(video_name, codec, fps, (width,height))
for frame_path in frames_paths:
    video.write(cv2.imread(frame_path))
cv2.destroyAllWindows()
video.release()
```
# Lab06: Topic Modeling with Latent Semantic Analysis
Latent Semantic Analysis (LSA) is a method for finding latent similarities between documents treated as bags of words by using a low rank approximation. It is used for document classification, clustering and retrieval. For example, LSA can be used to search for prior art given a new patent application. In this homework, we will implement a small library for simple latent semantic analysis as a practical example of the application of SVD. The ideas are very similar to PCA. SVD is also used in recommender systems in a similar fashion (for an SVD-based recommender system library, see [Surprise](http://surpriselib.com)).
We will implement a toy example of LSA to get familiar with the ideas. If you want to use LSA or similar methods for statistical language analysis, the most efficient Python libraries are probably [gensim](https://radimrehurek.com/gensim/) and [spaCy](https://spacy.io) - these also provide an online algorithm - i.e. the training information can be continuously updated. Other useful functions for processing natural language can be found in the [Natural Language Toolkit](http://www.nltk.org/).
**Note**: The SVD from `scipy.linalg` performs a full decomposition, which is inefficient since we only need to decompose until we get the first k singular values. If the SVD from `scipy.linalg` is too slow, please use the `sparsesvd` function from the [sparsesvd](https://pypi.python.org/pypi/sparsesvd/) package to perform SVD instead. You can install it in the usual way with
```
!pip install sparsesvd
```
Then import the following
```python
from sparsesvd import sparsesvd
from scipy.sparse import csc_matrix
```
and use as follows
```python
sparsesvd(csc_matrix(M), k=10)
```
**Exercise 1 (20 points)**. Calculating pairwise distance matrices.
Suppose we want to construct a distance matrix between the rows of a matrix. For example, given the matrix
```python
M = np.array([[1,2,3],[4,5,6]])
```
the distance matrix using Euclidean distance as the measure would be
```python
[[ 0.000 1.414 2.828]
[ 1.414 0.000 1.414]
[ 2.828 1.414 0.000]]
```
if $M$ were treated as a collection of column vectors.
Write a function to calculate the pairwise-distance matrix given the matrix $M$ and some arbitrary distance function. Your functions should have the following signature:
```
def func_name(M, distance_func):
pass
```
0. Write a distance function for the Euclidean, squared Euclidean and cosine measures.
1. Write the function using looping for M as a collection of row vectors.
2. Write the function using looping for M as a collection of column vectors.
3. Write the function using broadcasting for M as a collection of row vectors.
4. Write the function using broadcasting for M as a collection of column vectors.
For 3 and 4, try to avoid using transposition (but if you get stuck, there will be no penalty for using transposition). Check that all four functions give the same result when applied to the given matrix $M$.
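As a sanity check for the broadcasting variants (items 3 and 4), here is one possible sketch of the row-wise case; the column-wise call below simply reuses it via transposition, which your own solution should try to avoid:

```python
import numpy as np

def pairwise_rows(M, distance_func):
    """Pairwise distances between the rows of M via broadcasting."""
    # M[:, None, :] has shape (n, 1, p) and M[None, :, :] has shape (1, n, p),
    # so the pair broadcasts to all n*n row combinations at once.
    return distance_func(M[:, None, :], M[None, :, :])

def euclidean(u, v):
    return np.sqrt(((u - v) ** 2).sum(axis=-1))

M = np.array([[1, 2, 3], [4, 5, 6]])
print(pairwise_rows(M, euclidean))    # 2x2 distances between the rows
print(pairwise_rows(M.T, euclidean))  # 3x3 distances between the columns
```

The column-wise result reproduces the 3×3 example matrix shown above.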
**Exercise 2 (20 points)**. Write 3 functions to calculate the term frequency (tf), the inverse document frequency (idf) and the product (tf-idf). Each function should take a single argument `docs`, which is a dictionary of (key=identifier, value=document text) pairs, and return an appropriately sized array. Convert '-' to ' ' (space), remove punctuation, convert text to lowercase and split on whitespace to generate a collection of terms from the document text.
- tf = the number of occurrences of term $i$ in document $j$
- idf = $\log \frac{n}{1 + \text{df}_i}$ where $n$ is the total number of documents and $\text{df}_i$ is the number of documents in which term $i$ occurs.
Print the table of tf-idf values for the following document collection
```
s1 = "The quick brown fox"
s2 = "Brown fox jumps over the jumps jumps jumps"
s3 = "The the the lazy dog elephant."
s4 = "The the the the the dog peacock lion tiger elephant"
docs = {'s1': s1, 's2': s2, 's3': s3, 's4': s4}
```
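A minimal sketch of the term extraction and the tf/idf definitions above, run on the first two documents only — the helper name `terms_of` is my own, not the required three-function API — which you can check your own implementation against:

```python
import string
import numpy as np

def terms_of(text):
    # '-' -> space, strip punctuation, lowercase, split on whitespace
    text = text.replace('-', ' ').lower()
    text = text.translate(str.maketrans('', '', string.punctuation))
    return text.split()

docs = {'s1': "The quick brown fox",
        's2': "Brown fox jumps over the jumps jumps jumps"}
vocab = sorted({t for d in docs.values() for t in terms_of(d)})
# tf[i, j] = occurrences of term i in document j
tf = np.array([[terms_of(d).count(t) for d in docs.values()] for t in vocab])
df = (tf > 0).sum(axis=1)           # number of documents containing each term
idf = np.log(len(docs) / (1 + df))  # idf as defined above
tfidf = tf * idf[:, None]
```

Note that with this idf definition, a term occurring in every document gets a negative weight.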
**Exercise 3 (20 points)**.
1. Write a function that takes a matrix $M$ and an integer $k$ as arguments, and reconstructs a reduced matrix using only the $k$ largest singular values. Use the `scipy.linagl.svd` function to perform the decomposition. This is the least squares approximation to the matrix $M$ in $k$ dimensions.
2. Apply the function you just wrote to the following term-frequency matrix for a set of $9$ documents using $k=2$ and print the reconstructed matrix $M'$.
```
M = np.array([[1, 0, 0, 1, 0, 0, 0, 0, 0],
[1, 0, 1, 0, 0, 0, 0, 0, 0],
[1, 1, 0, 0, 0, 0, 0, 0, 0],
[0, 1, 1, 0, 1, 0, 0, 0, 0],
[0, 1, 1, 2, 0, 0, 0, 0, 0],
[0, 1, 0, 0, 1, 0, 0, 0, 0],
[0, 1, 0, 0, 1, 0, 0, 0, 0],
[0, 0, 1, 1, 0, 0, 0, 0, 0],
[0, 1, 0, 0, 0, 0, 0, 0, 1],
[0, 0, 0, 0, 0, 1, 1, 1, 0],
[0, 0, 0, 0, 0, 0, 1, 1, 1],
[0, 0, 0, 0, 0, 0, 0, 1, 1]])
```
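A minimal sketch of the rank-$k$ reconstruction asked for in item 1 (the function name is my own), checked on a small symmetric matrix whose best rank-1 approximation is easy to verify by hand:

```python
import numpy as np
from scipy.linalg import svd

def reduce_rank(M, k):
    # Keep only the k largest singular values; by Eckart-Young this is the
    # least-squares best rank-k approximation of M.
    U, s, Vt = svd(M, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

A = np.array([[3.0, 1.0], [1.0, 3.0]])  # singular values 4 and 2
print(reduce_rank(A, 1))                # -> [[2. 2.], [2. 2.]]
```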
3. Calculate the pairwise correlation matrix for the original matrix M and the reconstructed matrix using $k=2$ singular values (you may use [scipy.stats.spearmanr](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.spearmanr.html) to do the calculations). Consider the first 5 sets of documents as one group $G1$ and the last 4 as another group $G2$ (i.e. first 5 and last 4 columns). What is the average within-group correlation for $G1$, $G2$ and the average cross-group correlation for G1-G2 using either $M$ or $M'$? (Do not include self-correlation in the within-group calculations.)
**Exercise 4 (40 points)**. Clustering with LSA
1. Begin by loading a PubMed database of selected article titles using 'pickle'. With the following:
```python
import pickle
docs = pickle.load(open('data/pubmed.pic', 'rb'))
```
Create a tf-idf matrix for every term that appears at least once in any of the documents. What is the shape of the tf-idf matrix?
2. Perform SVD on the tf-idf matrix to obtain $U \Sigma V^T$ (often written as $T \Sigma D^T$ in this context with $T$ representing the terms and $D$ representing the documents). If we set all but the top $k$ singular values to 0, the reconstructed matrix is essentially $U_k \Sigma_k V_k^T$, where $U_k$ is $m \times k$, $\Sigma_k$ is $k \times k$ and $V_k^T$ is $k \times n$. Terms in this reduced space are represented by $U_k \Sigma_k$ and documents by $\Sigma_k V^T_k$. Reconstruct the matrix using the first $k=10$ singular values.
3. Use agglomerative hierarchical clustering with complete linkage to plot a dendrogram and comment on the likely number of document clusters with $k = 100$. Use the dendrogram function from [SciPy ](https://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.cluster.hierarchy.dendrogram.html).
4. Determine how similar each of the original documents is to the new document `data/mystery.txt`. Since $A = U \Sigma V^T$, we also have $V = A^T U S^{-1}$ using orthogonality and the rule for transposing matrix products. This suggests that in order to map the new document to the same concept space, first find the tf-idf vector $v$ for the new document - this must contain all (and only) the terms present in the existing tf-idx matrix. Then the query vector $q$ is given by $v^T U_k \Sigma_k^{-1}$. Find the 10 documents most similar to the new document and the 10 most dissimilar.
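The query-mapping step of item 4 can be sketched on a toy matrix — all names below are illustrative, and the real $v$ must of course be built from the existing tf-idf vocabulary:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((6, 4))                 # toy term-document matrix (terms x docs)
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
Uk, sk, Vtk = U[:, :k], s[:k], Vt[:k, :]
docs_k = (np.diag(sk) @ Vtk).T         # each row: one document in concept space

v = A[:, 0]                            # pretend document 0 is the new query
q = v @ Uk @ np.diag(1.0 / sk)         # q = v^T U_k S_k^{-1}
# cosine similarity between the query and every document
sims = docs_k @ q / (np.linalg.norm(docs_k, axis=1) * np.linalg.norm(q))
ranking = np.argsort(sims)[::-1]       # most similar documents first
```

Folding in a column that is already in $A$ recovers exactly the first $k$ components of its row of $V$, which is a useful correctness check.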
**Notes on the Pubmed articles**
These were downloaded with the following script.
```python
from Bio import Entrez, Medline
Entrez.email = "YOUR EMAIL HERE"
import cPickle
try:
    docs = cPickle.load(open('pubmed.pic'))
except Exception, e:
    print e
    docs = {}
    for term in ['plasmodium', 'diabetes', 'asthma', 'cytometry']:
        handle = Entrez.esearch(db="pubmed", term=term, retmax=50)
        result = Entrez.read(handle)
        handle.close()
        idlist = result["IdList"]
        handle2 = Entrez.efetch(db="pubmed", id=idlist, rettype="medline", retmode="text")
        result2 = Medline.parse(handle2)
        for record in result2:
            title = record.get("TI", None)
            abstract = record.get("AB", None)
            if title is None or abstract is None:
                continue
            docs[title] = '\n'.join([title, abstract])
            print title
        handle2.close()
    cPickle.dump(docs, open('pubmed.pic', 'w'))
docs.values()
```

# FINAL TEST FOR SMARKIO
## Installing the basic dependencies
```
# only run this cell if you do not already have one of the tools below; remember to check your pip version
!pip3 install numpy
!pip3 install pandas
!pip3 install matplotlib
!pip3 install seaborn
!pip3 install sklearn
```
## Imports
```
import pandas as pd # for reading the file and for the exploratory analysis
import numpy as np # for data manipulation
import matplotlib.pyplot as plt # for visualization
import seaborn as sns # for visualization
# for questions 2 and 4, about the metrics
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
from sklearn.metrics import recall_score
# for question 3, about building a classifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import KFold
# for question 5, about natural language processing
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import train_test_split
# the line below makes plots render and be stored inline in the notebook
%matplotlib inline
```
_______________________
## Reading the data
```
'''
The provided file has 2 sheets. pandas can select which sheets you want to read
with the sheet_name parameter, storing the n-th sheet of the .xls file in
dados[n] as a dict of DataFrames. Here I read the file and keep the DataFrame I
need by selecting it with [0].
'''
analise_ml = pd.read_excel('dados/teste_smarkio_Lbs.xls', sheet_name = [0])[0]
```
______________________
## Question 1
**Exploratory data analysis using descriptive and inferential statistics,
considering one, two and/or more variables;**
```
analise_ml[:10] # data from the Analise_ML sheet
analise_ml.shape
analise_ml.info()
```
**Above we see ten samples of our dataset, the shape of the DataFrame and the data types per column. We conclude that the dataset has 643 rows and 4 columns, that Pred_class is a column of integers, probabilidade holds floats, status holds strings, and True_class is also float.**
```
# checking how many distinct classes we have and their range. They go from 2 to 118.
np.sort(analise_ml.Pred_class.unique())
np.sort(analise_ml.Pred_class.unique()).size # we have 80 distinct classes
# the status column is binary
np.sort(analise_ml.status.unique())
# checking the possible values of the column; they are floats but it looks categorical
np.sort(analise_ml.True_class.unique())
np.sort(analise_ml.True_class.unique()).size # we have 41 distinct classes
```
### Column summary:
- **Pred_class**:
    - int
    - 80 distinct values
    - a discrete numeric variable
- **status**:
    - strings
    - binary column
    - a categorical variable
- **probabilidade**:
    - floats
    - a continuous variable
- **True_class**:
    - floats
    - has many null values
**The pandas.DataFrame.describe() method returns a table of summary statistics about the data; perhaps we can find information there that helps us understand the data better.**
```
analise_ml.describe().T # transpose of the table produced by describe()
```
**This table gives us some new information, such as the large number of null values in the True_class column, values which will be filled in shortly. Also, among the predicted classes, 25% are below 12.0, 50% below 59, 75% below 81, and the largest class is 118.**
____________________
### Checking the null values and their distribution
```
# subtract the non-null counts from the number of rows to get the number of
# null values per column
analise_ml.shape[0] - analise_ml.count()
```
**We have 462 null values in the True_class column, i.e., we must fill these 462 nulls with the corresponding values from the Pred_class column. To better visualize their distribution, I plotted a matrix where null values appear in white; this plot helps us
see that the nulls are spread uniformly along the column.**
```
sns.heatmap(analise_ml.isnull(), cbar = False)
# plt.savefig('img/matriz_nulos.png', dpi = 500)
```
**Let us fill the null values following the rule stated in 1.d:
1.d true_class - the true class (if null, assume pred_class).**
```
for index in analise_ml.True_class[analise_ml.True_class.isna()].index: # for rows whose class is null
    analise_ml.iloc[index, 3] = analise_ml.iloc[index, 0] # assign the Pred_class value to True_class
analise_ml.shape[0] - analise_ml.count() # no null values remain now
analise_ml.head(10)
```
_________________________
```
ax = sns.countplot(x = analise_ml.status)
# writing the percentage above each bar
for p in ax.patches:
    ax.annotate('{1:} ({0:.2f}%) '.format(p.get_height()*100/len(analise_ml), # string format and computed percentage
                                          p.get_height()), # the raw count
                (p.get_x()+0.2, p.get_height()+20)) # placement of the labels
ax.set_ylim(0, 700)
plt.title('Sample count per status')
# plt.savefig('img/contagem_status.png', dpi = 500)
```
**We can see that the dataset has a large share of predictions that were approved (93.31%), as opposed to the number of samples needing revision, which represent only 6.69% of our data.**
**To try to find insights, or at least to know where to look, let us plot a chart relating all the variables to one another and see how the data behave.**
```
sns.pairplot(analise_ml, hue = 'status')
# plot relating all numeric columns to one another, split by status
# plt.savefig('img/pairplot.png', dpi = 500)
```
**We cannot extract much from it, but we can state a few things:**
- The probabilities of the predicted classes tend to be above 30%
- The probabilities tend to be higher for the larger classes; see the density of approved points in the first quadrant of the Pred_class x probabilidade plot, and the same seems to hold in the True_class x probabilidade plot
**Let us try to understand better the relation between Pred_class and probabilidade. To do so, we plot a JointGrid, which shows the relation between two variables in a two-dimensional space.**
```
g = sns.JointGrid(x = analise_ml.Pred_class, y = analise_ml.probabilidade, space = 0)
g.plot_joint(sns.kdeplot,
fill=True, clip=((2, 118), (0, 1.0)),
thresh=0, levels=100, cmap="rocket")
g.plot_marginals(sns.histplot, color="#03051A", alpha = .8, bins = 15)
plt.xlim(0, 150)
plt.ylim(0, 1.0)
# plt.savefig('img/joint_grid_pred.png', dpi = 500)
```
**The histogram on top is the distribution of the x-axis variable, and the one on the right side is the distribution of the y-axis variable. Looking at the plot above we can see how the relation between predicted classes and probabilities is distributed: samples belonging to classes between 60 and 90 tend to have probabilities between 30% and 60%, while the lower-valued classes tend to have probabilities close to 100%, though with a lower intensity than the one just described.**
```
g = sns.JointGrid(x = analise_ml.True_class, y = analise_ml.probabilidade, space=0)
g.plot_joint(sns.kdeplot,
fill=True, clip=((0, 118), (0, 1.0)),
thresh=0, levels=100, cmap="rocket")
g.plot_marginals(sns.histplot, color="#03051A", alpha=1)
plt.xlim(0, 150)
plt.ylim(0, 1.0)
# plt.savefig('img/joint_grid_true.png', dpi = 500)
```
**This same spatial distribution also occurs for the true classes, but with an even stronger intensity at the same positions as in the previous JointGrid. This leads us to believe that the classes between 60 and 90 are the hardest to predict, or at least where the algorithm has the most difficulty trusting its own output; the latter can have several explanations, one being that the model may have been trained with little labeled data for these classes.**
```
# we separate the approved records from those in revision and look for insights in the
# approved set, since that brings us closer to what the correct classification is
aprovados = analise_ml[analise_ml.status == 'approved']
revisao = analise_ml[analise_ml.status == 'revision']
aprovados.head()
aprovados.shape, revisao.shape
fig, ax = plt.subplots(1, 2, figsize = (13, 4))
sns.countplot(x = aprovados.Pred_class.value_counts(), ax = ax[0])
sns.countplot(x = aprovados.True_class.value_counts(), ax = ax[1])
ax[0].set_title('Number of classes per record (predicted)')
ax[1].set_title('Number of classes per record (true)')
ax[0].set_xlabel('Number of records')
ax[0].set_ylabel('Number of classes')
ax[1].set_ylabel('Number of classes')
ax[1].set_xlabel('Number of records')
# plt.savefig('img/contagem_classes.png', dpi = 500)
```
**Above we have a plot relating the number of records to the number of classes. For example, in plot 1, 12 classes have only 1 classified instance, 10 classes have 2 instances, 12 classes have 3 instances, and so on. With this clarified, we can state that few classes have many instances belonging to them.**
```
fig, ax = plt.subplots(1, 2, figsize = (13, 4))
sns.kdeplot(aprovados.probabilidade, fill=True, ax = ax[0]) # left-hand plot
sns.histplot(x = aprovados.probabilidade, ax = ax[1], bins = 30) # right-hand plot
ax[0].set_title('Kernel Density Estimate')
ax[1].set_title('Histogram of the probabilities')
# plt.savefig('img/kde_distr.png', dpi = 500)
```
**The plots above show how the probabilities are distributed: most of them concentrate around 40-45% and above 94%, with a slight dip around 60%. This tells us that the model used for prediction tends to get things right with high confidence, but that in some cases it is fairly unsure about its own prediction, perhaps because that class is hard to predict. <br>The density plot may cause some confusion, since interpreting the y-axis is not obvious: roughly speaking, it is a curve whose shaded area sums to 1, so if I want to know the share of probabilities between 20% and 40%, I just compute the area between 0.2 and 0.4 bounded by the density function; carrying out that definite integral gives 0.165, i.e., 16.5% of the probabilities lie between 20% and 40%.**
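The area claim above can be reproduced mechanically with scipy's Gaussian KDE; the sketch below uses synthetic data as a stand-in for the real `probabilidade` column, so the 16.5% figure itself is not reproduced here:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(42)
probs = rng.beta(2, 2, size=600)       # synthetic stand-in for `probabilidade`

kde = gaussian_kde(probs)
share = kde.integrate_box_1d(0.2, 0.4)         # mass between 20% and 40%
total = kde.integrate_box_1d(-np.inf, np.inf)  # the full density integrates to 1
print(f"{share:.1%} of the values lie between 20% and 40%")
```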
```
# below we plot the frequency of each class, ordered by the number of samples representing it
plt.figure(figsize = (26, 8))
ax = sns.countplot(x = aprovados.Pred_class, order = aprovados.Pred_class.value_counts().index)
plt.title('Frequency count per class')
# plt.savefig('img/distr_classes.png', dpi = 500)
# below I overlay the exponential curve on the distribution for comparison
# an exponential function has the form f(x) = c * exp(a*x) + b; I reached a = -0.125 and b = 2.5
# by hand-tuning the values until the curve fit the distribution, though better methods exist
a = lambda x: 60 * np.exp(-0.125*x) + 2.5
y = np.linspace(0, 100, 77)
y = a(y)
plt.figure(figsize = (26, 8))
ax = sns.countplot(x = aprovados.Pred_class, order = aprovados.Pred_class.value_counts().index)
plt.plot(y, c ='black', lw = 3.0)
plt.title('Exponential decay of the class frequencies')
plt.legend(['f(x) = 60 * exp(-0.125 * x) + 2.5'], prop={'size': 20})
plt.xlim(-1, 76)
# plt.savefig('img/distr_classes_exp.png', dpi = 500)
aprovados.Pred_class.value_counts()[22]
```
**In this dataset we see a high frequency of classes 3, 74 and 2; somewhat less frequent are classes 77, 60, 4, 52, 96 and 110.**
## End of the exploratory analysis
________________________
## Questions 2 and 4
**Compute the performance of the classification model using at least three
metrics;**<br>
**Compare three evaluation metrics applied to the model and describe their
differences;**
Note: I decided to merge these questions since they involve the same concepts.
**We will use three of the most common metrics for evaluating classification models:**
- Confusion Matrix
- Accuracy Score
- Recall Score
**The first metric is the confusion matrix, which produces a matrix relating the model's hits and misses. I put it first because this metric is the origin of other metrics used in classification problems, such as Recall and Accuracy.**

**Suppose we have a binary classification: the positive class holds the samples labeled 1 and the negative class the samples labeled 0. TP would hold the samples classified as 1 that really were 1, and analogously TN holds predictions of 0 that really were 0. In the "false" quadrants, where the errors occur, FN represents the records classified as 0 that were actually 1, and FP the instances classified as 1 that actually belonged to class 0.**
**This matrix can, however, become much larger for multiclass classification problems, as we can see below:**
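On a toy binary problem the four quadrants are easy to read off (the labels and data below are made up for illustration):

```python
from sklearn.metrics import confusion_matrix

y_true = [1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
cm = confusion_matrix(y_true, y_pred)  # rows = true class, columns = predicted
tn, fp, fn, tp = cm.ravel()
print(cm)  # -> [[3 1]
           #     [1 3]]
```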
```
confusion_matrix(aprovados.Pred_class, aprovados.True_class)
plt.figure(figsize = (8, 6))
sns.heatmap(confusion_matrix(aprovados.Pred_class, aprovados.True_class))
plt.title('Confusion Matrix')
# plt.savefig('img/matriz_confu.png', dpi = 700)
```
**For a better view of this metric, we plot a heat map of the confusion matrix. Lighter points represent larger values. The main diagonal of the matrix holds the hits, i.e., the lighter and denser this diagonal, the better the model.**
____________________
**The second metric is accuracy; to compute it we use the values from the confusion matrix:**

- TP = True Positive
- TN = True Negative
- FP = False Positive
- FN = False Negative
```
accuracy_score(aprovados.True_class, aprovados.Pred_class)
```
**The third metric is the Recall Score, computed as the number of samples of class X that were classified correctly divided by all samples that actually belong to class X (TP + FN), whether classified correctly or not. In other words, it is the classifier's ability to identify all the correct samples (sensitivity).**

- TP = True Positive
- FN = False Negative
```
# the average parameter must be set for multiclass problems; 'weighted' computes the
# score taking class frequency into account, since we know the classes in our data do
# not follow a uniform distribution. Other values exist for average; I chose this one
# because it makes the most sense here
recall_score(y_true = aprovados.True_class, y_pred = aprovados.Pred_class, average = 'weighted', zero_division = 0)
```
________________
## Question 3
**Build a classifier whose output says whether the records with status equal to
'revision' are correct or not (Suggestion: K-fold cross-validation);**
**First, we prepare the data for training. For this we use the approved records.**
```
aprovados.head()
# we need to prepare the data for training, so I use the values property, which returns
# a numpy.array with the data, and adjust the dimension with reshape
# I trained only on the Pred_class column because adding the probabilidade column
# drops the DecisionTree's accuracy by 10%
X = aprovados.iloc[:, 0].values.reshape(-1, 1)
Y = aprovados.iloc[:, 3]
X.shape, Y.shape
```
**The cross-validation method used is KFold. This technique splits the data into train/test sessions so that the model is trained across the entire dataset. Suppose the data are the index set (0, 1, 2, 3, 4, 5) and 2 sessions are chosen.**
- **Session 1:**
- **Train: (3, 4, 5)**
- **Test: (0, 1, 2)**
- **Session 2:**
- **Train: (0, 1, 2)**
- **Test: (3, 4, 5)**
**As for the model, I chose the DecisionTreeClassifier algorithm, since the data are not complex and apparently obey a few specific rules. A decision-tree classifier is "simple" and excellent at finding patterns and rules in datasets, which is why I opted for it.**
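The sessions described above can be reproduced with scikit-learn's `KFold` on a toy index array (a sketch, not the notebook's data):

```python
import numpy as np
from sklearn.model_selection import KFold

data = np.array([0, 1, 2, 3, 4, 5])
kf = KFold(n_splits=2)  # two sessions, as in the example above
for fold, (train_idx, test_idx) in enumerate(kf.split(data), start=1):
    print(f'Session {fold}: train={data[train_idx]}, test={data[test_idx]}')
# Session 1: train=[3 4 5], test=[0 1 2]
# Session 2: train=[0 1 2], test=[3 4 5]
```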
```
kf = KFold(n_splits = 10) # instantiating the KFold class
# defining the model with a fixed random_state so results are reproducible
modelo = DecisionTreeClassifier(random_state = 42)
scores = [] # storing the scores
for treino, teste in kf.split(np.array(aprovados)): # applying the cross-validation technique
    modelo.fit(X[treino], Y[treino]) # training the model; fit always retrains from scratch
    y_pred = modelo.predict(X[teste]) # predicting the values
    scores.append(accuracy_score(y_pred, Y[teste])) # storing the scores for the final mean
for i in range(len(scores)): print(f'Fold {i + 1}: {scores[i]: .4f}')
print(f'Mean accuracy: {np.mean(scores): .4f}')
plt.plot(scores, c = 'blue') # plot of the scores list built above
plt.title('Model performance')
plt.xlabel('Model')
plt.ylabel('Score')
plt.ylim(0, 1)
# plt.savefig('img/desempenho.png', dpi = 300)
```
**Plotting the performance of the trained models: for some reason model 6 was the best, which can be explained by the quality of the training data in session 6 or by how easy the test data of that same session happened to be.**
________________
**Let us now check whether the model classified the data correctly.**
```
# just like the training data, the data to be checked must go through the same preprocessing
X_revisao = revisao.iloc[:, 0].values.reshape(-1, 1)
Y_revisao = revisao.iloc[:, 3]
X_revisao.shape
pred = modelo.predict(X_revisao)
pred
```
**Let us compare the predicted values with the true class values of the revision dataset.**
```
Y_revisao.index = range(len(Y_revisao)) # resetting the index so the loop below can use positional indexing
# building a table of hits and misses; I used prints so that the results could be colored
print('\033[1mExpected Predicted Result\033[92m')
for i in range(len(pred)):
    if np.array(Y_revisao == pred)[i]: print(f'\033[92m {Y_revisao[i]: =5} {pred[i]: =5} \033[92mCORRECT')
    else: print(f'\033[91m {Y_revisao[i]: =5} {pred[i]: =5} WRONG')
scores_modelo = np.where(pred == Y_revisao, 1, 0) # turning hits into 1 and misses into 0
acertos = scores_modelo.sum() # in scores_modelo, the number of hits is just the sum of the 1's
erros = len(scores_modelo) - acertos # misses = total minus hits
print('Hits:', acertos)
print('Misses:', erros)
print(f'Mean accuracy: {acertos/(acertos + erros)*100: .2f}%')
pd.Series(
    [acertos, erros], index = ['Hits', 'Misses'],).plot(
    kind='pie',
    figsize=[4,4],
    autopct=lambda p: f'{p: .2f}% ({(p/100) * len(scores_modelo):.0f})',
    colors = ['blue', 'red'],
    title = 'Proportion of hits and misses'
)
plt.ylabel(None)
# plt.savefig('img/pie.png', dpi = 500)
```
## Questรฃo 5
**Build a classifier, from the second tab - NLP - of the data file, that
identifies which song excerpt corresponds to each of the listed artists
(suggestion: Naive Bayes classifier).**
```
NLP = pd.read_excel('dados/teste_smarkio_Lbs.xls', sheet_name = [1])[1] # reading the data from the second sheet
NLP.head()
plt.figure(figsize = (15, 5))
NLP.letra.apply(len).hist(bins = 150, color = 'red', alpha = .5)
plt.xlabel('Number of characters')
plt.ylabel('Number of lyrics')
```
**The histogram above shows that most lyrics are between 1000 and 3000 characters long.**
```
NLP.letra.apply(len).hist(bins = 100, color = 'red', alpha = .5, by = NLP.artista, figsize = (15, 5))
```
**From these two plots we can conclude that the two artists have the same distribution of lyric lengths, and that Beyoncé has already produced songs with almost 7000 characters.**
```
sns.countplot(x = NLP.artista)
# plt.savefig('img/contagem_cantoras.png', dpi = 500)
```
**The number of samples is very close between the two singers' classes, which is always healthy for any machine learning algorithm.**
________________
**Let us start the song classification process. For this, we need to prepare the data for training.**
```
# here we separate the features from the target column and then split the data into train and test sets
X = NLP.letra
Y = NLP.artista
X, X_teste, Y, Y_teste = train_test_split(X, Y, test_size=0.1, random_state = 50)
```
**Machine learning algorithms cannot work directly with text, so we need a mathematical representation for it. One way to get one is implemented by the CountVectorizer() method. <br> This method assigns an integer id to every word, a procedure called Bag-of-Words (BoW), and then counts the frequency of each of these words in each text, producing a sparse matrix of word frequencies whose width is the total number of distinct words found in the dataset.**
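A minimal sketch of what `CountVectorizer` produces, on toy sentences rather than the actual lyrics:

```python
from sklearn.feature_extraction.text import CountVectorizer

# toy sentences, for illustration only
docs = ['halo halo halo', 'crazy in love', 'drunk in love']
cv = CountVectorizer(stop_words='english')   # 'in' is dropped as an English stop word
bow = cv.fit_transform(docs)                 # sparse document-term matrix

print(sorted(cv.vocabulary_))  # ['crazy', 'drunk', 'halo', 'love']
print(bow.toarray())
# [[0 0 3 0]
#  [1 0 0 1]
#  [0 1 0 1]]
```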
```
# here we create a CountVectorizer instance; the stop_words parameter declares that we are dealing
# with English texts. In short, stop_words is a set of very common words in the chosen language.
CV = CountVectorizer(stop_words='english')
# here we pass the texts to the newly created instance so that it learns
# the vocabulary of the dataset
CV.fit(NLP.letra)
# finally we preprocess the data and get it ready for training; at this point we are converting
# the texts into frequency vectors
X_cv = CV.transform(X)
X_teste_cv = CV.transform(X_teste)
modelo = MultinomialNB().fit(X_cv, Y)
pred_nlp = modelo.predict(X_teste_cv)
scores_modelo2 = np.where(pred_nlp == Y_teste, 1, 0) # as before, hits 1 and misses 0
acertos = scores_modelo2.sum() # sum of the 1's gives the number of hits
erros = len(scores_modelo2) - acertos # misses = total minus hits
print('Hits:', acertos)
print('Misses:', erros)
print(f'Mean accuracy: {acertos/(acertos + erros)*100: .2f}%')
pd.Series(
    [acertos, erros], index = ['Hits', 'Misses'],).plot(
    kind='pie',
    figsize=[4,4],
    autopct=lambda p: f'{p: .2f}% ({(p/100) * len(scores_modelo2):.0f})',
    colors = ['blue', 'red'],
    title = 'Proportion of hits and misses'
)
)
plt.ylabel(None)
# plt.savefig('img/pie_nlp.png', dpi = 500)
```
<a href="https://colab.research.google.com/github/tykimos/Keras/blob/master/pseudo_labeling.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
# 0. Import the packages we will use
from keras.utils import np_utils
from keras.datasets import fashion_mnist
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.layers import Conv2D, MaxPooling2D, Flatten
import keras
import numpy as np
# 1. Generate the dataset
(x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()
x_train = x_train.reshape(60000, 784).astype('float32') / 255.0
x_test = x_test.reshape(10000, 784).astype('float32') / 255.0
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)
x_train_label = x_train[0:25000]
x_train_no_label = x_train[25000:50000]
x_val = x_train[50000:60000]
y_train_label = y_train[0:25000]
y_train_no_label = y_train[25000:50000]
y_val = y_train[50000:60000]
# 2. Build the model
model = Sequential()
model.add(Dense(units=64, activation='relu', input_dim=784))
model.add(Dense(units=64, activation='relu'))
model.add(Dense(units=10, activation='softmax'))
# 3. Configure the training process
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
batch_size = 32
train_label_loss = []
train_label_acc = []
val_loss = []
val_acc = []
for epoch_idx in range(100):
batch_train_label_loss = []
batch_train_label_acc = []
batch_val_loss = []
batch_val_acc = []
for i in range(len(x_train_label)//batch_size):
x_batch = x_train_label[i*batch_size: (i+1)*batch_size]
y_batch = y_train_label[i*batch_size: (i+1)*batch_size]
loss, acc = model.train_on_batch(x_batch, y_batch)
batch_train_label_loss.append(loss)
batch_train_label_acc.append(acc)
for i in range(len(x_val)//batch_size):
x_batch = x_val[i*batch_size: (i+1)*batch_size]
y_batch = y_val[i*batch_size: (i+1)*batch_size]
loss, acc = model.test_on_batch(x_batch, y_batch)
batch_val_loss.append(loss)
batch_val_acc.append(acc)
train_label_loss.append(np.mean(batch_train_label_loss))
train_label_acc.append(np.mean(batch_train_label_acc))
val_loss.append(np.mean(batch_val_loss))
val_acc.append(np.mean(batch_val_acc))
model.save((str(epoch_idx) + '_model'))
print('epoch {0:4d} train_label acc {1:0.3f} loss {2:0.3f} val acc {3:0.3f} loss {4:0.3f}'.format(epoch_idx,
np.mean(batch_train_label_acc),
np.mean(batch_train_label_loss),
np.mean(batch_val_acc),
np.mean(batch_val_loss)))
%matplotlib inline
import matplotlib.pyplot as plt
fig, loss_ax = plt.subplots()
acc_ax = loss_ax.twinx()
loss_ax.plot(train_label_loss, 'b', label='train label loss')
loss_ax.plot(val_loss, 'r', label='val loss')
acc_ax.plot(train_label_acc, 'c', label='train label acc')
acc_ax.plot(val_acc, 'y', label='val acc')
# 'b', 'g', 'r', 'c', 'm', 'y', 'k', 'w'
loss_ax.set_xlabel('epoch')
loss_ax.set_ylabel('loss')
acc_ax.set_ylabel('accuracy')
loss_ax.legend(loc='upper left')
acc_ax.legend(loc='lower left')
plt.show()
model = keras.models.load_model('6_model')
train_no_label_loss = []
train_no_label_acc = []
val_loss = []
val_acc = []
for epoch_idx in range(100):
batch_train_no_label_loss = []
batch_train_no_label_acc = []
batch_val_loss = []
batch_val_acc = []
for i in range(len(x_train_no_label)//batch_size):
x_batch = x_train_no_label[i*batch_size: (i+1)*batch_size]
y_true_batch = y_train_no_label[i*batch_size: (i+1)*batch_size]
y_pred_batch = model.predict_on_batch(x_batch)
y_pseudo_batch = np.zeros((batch_size, 10))
for b in range(batch_size):
ys = y_pred_batch[b]
y_pseudo_batch[b, np.argmax(ys)] = 1.0
loss, _ = model.train_on_batch(x_batch, y_pseudo_batch)
acc = np.mean(np.argmax(y_true_batch, axis=1) == np.argmax(y_pseudo_batch, axis=1)) # fraction of pseudo-labels matching the true labels
batch_train_no_label_loss.append(loss)
batch_train_no_label_acc.append(acc)
for i in range(len(x_val)//batch_size):
x_batch = x_val[i*batch_size: (i+1)*batch_size]
y_batch = y_val[i*batch_size: (i+1)*batch_size]
loss, acc = model.test_on_batch(x_batch, y_batch)
batch_val_loss.append(loss)
batch_val_acc.append(acc)
train_no_label_loss.append(np.mean(batch_train_no_label_loss))
train_no_label_acc.append(np.mean(batch_train_no_label_acc))
val_loss.append(np.mean(batch_val_loss))
val_acc.append(np.mean(batch_val_acc))
model.save('no_label_' + str(epoch_idx) + '_model')
print('epoch {0:4d} train no_label acc {1:0.3f} loss {2:0.3f} val acc {3:0.3f} loss {4:0.3f}'.format(epoch_idx,
np.mean(batch_train_no_label_acc),
np.mean(batch_train_no_label_loss),
np.mean(batch_val_acc),
np.mean(batch_val_loss)))
%matplotlib inline
import matplotlib.pyplot as plt
fig, loss_ax = plt.subplots()
acc_ax = loss_ax.twinx()
loss_ax.plot(train_no_label_loss, 'g', label='train no label loss')
loss_ax.plot(val_loss, 'r', label='val loss')
acc_ax.plot(train_no_label_acc, 'm', label='train no label acc')
acc_ax.plot(val_acc, 'y', label='val acc')
# 'b', 'g', 'r', 'c', 'm', 'y', 'k', 'w'
loss_ax.set_xlabel('epoch')
loss_ax.set_ylabel('loss')
acc_ax.set_ylabel('accuracy')
loss_ax.legend(loc='upper left')
acc_ax.legend(loc='lower left')
plt.show()
# 6. Evaluate the model
loss_and_metrics = model.evaluate(x_test, y_test, batch_size=32)
print('## evaluation loss and_metrics ##')
print(loss_and_metrics)
y = [0.1, 0.3, 0.4, 0.2]
y_pred = np.zeros(4)
y_pred[np.argmax(y)] = 1.0
print(y_pred)
np.mean(np.equal(y_pred, y))
```
```
import numpy as np
import torch
import torch.nn as nn
from matplotlib import pyplot as plt
from tqdm import tqdm
import seaborn as sns
def get_dynamics(n=5, n_steps=1000):
"""Get dynamics under a sparse matrix.
Args:
n: dimensionality
n_steps: number of steps to generate
"""
# initial state
state = np.random.randn(n)
# permutation of 1..n to fill elements in the matrix in
p = np.random.permutation(range(n))
# the transition dynamics matrix
A = np.zeros((n, n))
# filling in the elements
A[range(n), p] = np.random.choice([-1, 1], size=n)
# doing a rollout under the defined dynamics
states = [np.array(state)]
for _ in range(n_steps):
# computing next state
state = A @ state
# saving the state
states.append(np.array(state))
# converting to a numpy array
states = np.array(states)
return states
def get_encoder(n_out, n):
"""Get a linear encoder matrix
Args:
n_out: output observation dimensionality
n: input state dimensionality
"""
W = np.random.randn(n_out, n)
return W
def encode(states, encoder_matrix):
"""Encode states with an encoder."""
X = (states @ encoder_matrix.T)
return X
# state dim
n = 5
# obs dim
k = 100
# features dim
f = 5
# number of steps to generate
n_steps = 1000
# encoder matrix
encoder = get_encoder(n_out=k, n=n)
# obtaining the states
states = get_dynamics(n=n, n_steps=n_steps)
# compute observations
observations = encode(states, encoder)
observations.shape
class LinearNet(nn.Module):
"""Linear neural network."""
def __init__(self, inp_dim, out_dim, add_batch_norm=False):
super(LinearNet, self).__init__()
self.fc1 = nn.Linear(in_features=inp_dim, out_features=out_dim,
bias=False)
if add_batch_norm:
self.batch_norm = torch.nn.BatchNorm1d(num_features=out_dim)
def forward(self, x):
x = self.fc1(x)
if hasattr(self, 'batch_norm'):
x = self.batch_norm(x)
return x
def reconstruction_loss(obss, decoder, reconstructor):
"""Ensure that the decoder is not degenerate by fitting a reconstructor."""
mse = torch.nn.MSELoss()
return mse(reconstructor(decoder(obss)), obss)
def reconstruction_loss_norm(reconstructor):
"""Ensure that the decoder is not degenerate (inverse norm not too high)."""
regularization_loss = 0
for param in reconstructor.parameters():
regularization_loss += torch.sum(torch.square(param))
return regularization_loss
def fit_loss(obss, decoder, model):
"""Ensure that the model fits the features data."""
mse = torch.nn.MSELoss()
return mse(model(decoder(obss[:-1])), decoder(obss[1:]))
def sparsity_loss(model):
"""Ensure that the model is sparse."""
regularization_loss = 0
for param in model.parameters():
regularization_loss += torch.sum(torch.abs(param))
return regularization_loss
def losses(obss_torch, decoder, reconstructor, model, rn_threshold=100):
"""Compute all losses on observations."""
res = {}
res['r'] = reconstruction_loss(obss_torch, decoder, reconstructor)
res['f'] = fit_loss(obss_torch, decoder, model)
res['s'] = sparsity_loss(model)
res['rn'] = reconstruction_loss_norm(reconstructor)
if res['rn'] < rn_threshold:
res['rn'] = torch.from_numpy(np.array(rn_threshold))
return res
def total_loss(losses_, hypers):
"""Compute total loss."""
loss = 0.0
for key in hypers.keys():
loss += hypers[key] * losses_[key]
return loss
def lstdct2dctlst(lst):
"""List of dictionaries -> dict of lists."""
keys = lst[0].keys()
result = {k: [] for k in keys}
for item in lst:
for k, v in item.items():
result[k].append(v)
return result
def epoch(obss_torch, decoder, reconstructor, model, hypers):
"""One optimization epoch."""
optimizer.zero_grad()
L = losses(obss_torch, decoder, reconstructor, model)
loss = total_loss(L, hypers)
loss.backward()
optimizer.step()
L['total'] = loss
return {x: y.item() for x, y in L.items()}
def metrics(decoder, reconstructor, model, hypers):
m = {}
m['nnz'] = np.sum(np.abs(list(model.parameters())[0].detach().numpy().flatten()) > 1e-2)
m['hyper_s'] = hypers['s']
return m
hypers = {'r': 0.1, 'f': 0.2, 's': 0.0001, 'rn': 0.0001}
k, f
# creating models
decoder = LinearNet(inp_dim=k, out_dim=f, add_batch_norm=True)
reconstructor = LinearNet(inp_dim=f, out_dim=k)
model = LinearNet(inp_dim=f, out_dim=f)
```
#### Optimizing via total loss over the whole data
```
# converting observations to torch
obss_torch = torch.from_numpy(np.array(observations, dtype=np.float32))
all_parameters = list(model.parameters()) + list(decoder.parameters()) + list(reconstructor.parameters())
optimizer = torch.optim.Adam(all_parameters, lr=1e-3)
# training
last_hyper_adjustment = -1
results = []
for i in tqdm(range(5000)):
e = epoch(obss_torch, decoder, reconstructor, model, hypers)
e.update(metrics(decoder, reconstructor, model, hypers))
results.append(e)
if e['r'] + e['f'] > 1e-2:
if hypers['s'] > 1e-5:
suggested_hyper = hypers['s'] * 0.5
else:
if hypers['s'] < 10:
suggested_hyper = hypers['s'] / 0.5
if i - last_hyper_adjustment >= 100:
hypers['s'] = suggested_hyper
last_hyper_adjustment = i
# plotting
plt.figure(figsize=(16, 5))
for i, (k_, v) in enumerate(lstdct2dctlst(results).items()):
plt.subplot(1, len(results[0]) + 1, i + 1)
plt.xlabel('epoch')
plt.title(k_)
plt.axhline(0)
plt.plot(v)
plt.yscale('log')
plt.subplot(1, len(results[0]) + 1, len(results[0]) + 1)
plt.title("Weights heatmap")
sns.heatmap(list(model.parameters())[0].detach().numpy())
from scipy.special import expit
x = np.array([-1, -2, 0, 1,2,3])
x / (1 + x)
# TODO: add batch normalization!!
# TODO: or fix the model to be 0/1/-1 (what about scale of features?)
# can't do this fully...
# F = 0.5 f
plt.hist(list(model.parameters())[0].detach().numpy().flatten())
```
```
from sympy import *
from sympy.abc import *
import numpy as np
import matplotlib.pyplot as plt
def Kw_(C):
return ((4*C - 1)/(4*C - 4)) + 0.625/C
def Ks_(C):
return (0.5/C) + 1
def Fm_(Fmax, Fmin):
return (Fmax + Fmin)/2
def Fa_(Fmax, Fmin):
return (Fmax - Fmin)/2
Ks, Kw, Fmin, Fmax, Fm, Sf, Fa, nf, Ytrab = symbols("K_s K_w F_{min} F_{max} F_m S_f F_a n_f y_{trab}")
Y = ((d**(b+2))*0.67*np.pi*A)
Z = (Y/(8*C)) - Ks*Fmin
X = Ks*(Fm-Fmin) + (1.34*((A*d**b)/Sf)-1)*Kw*Fa
e = Z/X
e.evalf(3)
```
### Helical spring summary - Monday - Class 10/01
Spring index $C$
$$K_s = 1 + \frac{0.5}{C}$$
$$K_w = \frac{4C - 1}{4C - 4} + \frac{0.625}{C}$$
Mean and alternating forces
$$F_m = \frac{F_{max} + F_{min}}{2}$$
$$F_a = \frac{F_{max} - F_{min}}{2}$$
Determining $d$ through the iterative relation - start with $d = 1\,mm$
$$d = \left(\frac{11.9 C n_{f} \left(F_{a} K_{w} \left(\frac{1.34 A d^{b}}{S_{f}} - 1.0\right) + K_{s} \left(F_{m} - \frac{F_{min} \left(n_{f} - 1.0\right)}{n_{f}}\right)\right)}{A \pi}\right)^{\frac{1}{b + 2.0}}$$
Mean diameter $D$ and outer diameter $D_0$
$$D = Cd$$
$$D_0 = D + d$$
Spring rate $K$
$$K = \frac{F_{max} - F_{min}}{y_{trab}}$$
Number of active coils $N_a$ and total coils $N_t$
$$N_a = \frac{d^{4}G}{8D^3K}$$
$$N_t = N_a + 2$$
Spring rate $K$ recomputed with $N_a$ rounded to an integer
$$K = \frac{d^{4}G}{8D^3N_a}$$
Initial deflection $y_{inic}$
$$y_{inic} = \frac{F_{min}}{K}$$
Free length $L_f$
$$L_f = dN_t + 1.15\,y_{trab} + y_{inic}$$
```
# Iterative equation for 'd'
U = Fm-((nf-1)/nf)*Fmin
U_ = 1.34*((A*d**b)/Sf) - 1
J = ((8*C*nf)/(0.67*pi*A))
d_ = (J*(Ks*U + U_*Kw*Fa))**(1/(b+2))
d_.evalf(3)
# Problem data
fmax, fmin, ytrab, c = 600, 300, 25E-3, 8
A_, b_, sf = 1909.9, -0.1453, 310
# Computing the coefficients
ks, kw = Ks_(c), Kw_(c)
# Computing the mean and alternating forces
fm = Fm_(fmax, fmin)
fa = Fa_(fmax, fmin)
# Iterative computation of the diameter
solução = []
din = 1 # initial guess
for i in range(10):
s = d_.subs({d:din, Kw:kw, Ks:ks, Fm:fm, Fa:fa, nf:1.5, Fmin:fmin, Fmax:fmax,
b:b_, A:A_, Sf:sf, C:c, pi:np.pi})
soluรงรฃo.append(s)
din = (s.evalf(5))
np.array(soluรงรฃo, dtype=float)
# Commercial diameter and numeric data
C, G = c, 80.8E9
dc = 6.5E-3
# Mean diameter D
D = C*dc
# Outer diameter D0
D0 = D + dc
# Spring rate
K = (fmax-fmin)/ytrab
K, D, D0
# Number of active coils
Na_ = (G*dc**4)/(8*K*(D)**3)
# round up
Na = int(Na_) + 1
(Na_, Na)
# Spring rate again, with the integer coil count
K_ = (G*dc**4)/(8*Na*(D)**3)
K_
# Initial deflection
yinic = fmin/K_
yinic
# Free length
Nt = Na + 2
Lf = (dc)*Nt + 1.15*(ytrab) + yinic
Lf
```
### Compression/extension spring summary - Tuesday - Class 11/01
Free length $L_f$
$$L_f = L_b + 2L_g$$
$$L_f = dN_t + 2L_g$$
Active coils $N_a$
$$N_t = N_a + 1$$
Transcendental equation to determine the wire diameter
$$d = \left(G \left(\frac{2.98507462686567 F_{a} n_{f} \left(0.38659 A d^{b} - 0.2885 S_{f}\right)}{S_{f}} + F_{m} n_{f} - F_{min} \left(n_{f} - 1\right)\right)\right)^{\frac{1}{b + 2}}$$
```
from sympy import *
from sympy.abc import *
import numpy as np
import matplotlib.pyplot as plt
Ks, Kw, Fmin, Fmax, Fm, Sf, Fa, nf, Ytrab = symbols("K_s K_w F_{min} F_{max} F_m S_f F_a n_f y_{trab}")
Kb = symbols("K_b")
C=8
G_ = (4*(4*Kb*C+1))/(pi*A)
H_ = nf*Fm - (nf-1)*Fmin
T_ = (0.577*(-0.5*Sf + 0.67*A*d**b))/(0.5*0.67*Sf)
d_ = (G_*(H_ + nf*Fa*T_))**(1/(b+2))
d_.evalf(3)
# Material data
dic = { 'A227':[-0.1822, 1753.3],
'A228':[-0.1625, 2153.5],
'A229':[-0.1833, 1831.2],
'A232':[-0.1453, 1909.9],
'A401':[-0.0934, 2059.2]} # assumption: the original repeated the 'A232' key; these coefficients match A401 chrome-silicon wire
material = 'A228'
fmax, fmin, C = 150, 50, 8
b_, A_, sf = dic[material][0], dic[material][1], 310
fm = Fm_(fmax, fmin)
fa = Fa_(fmax, fmin)
kb = (-1-C+4*C**2)/(4*C*(C-1))
# Iterative computation of the diameter
solução = []
din = 1 # initial guess
for i in range(10):
s = d_.subs({d:din, Kb:kb, Fm:fm, Fa:fa, nf:2, Fmin:fmin, Fmax:fmax,
b:b_, A:A_, Sf:sf, C:8, pi:np.pi})
soluรงรฃo.append(s)
din = (s.evalf(5))
soluรงรฃo
#np.array(soluรงรฃo, dtype=float)
```
```
# import statements
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import spacy
import squarify
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors
from collections import Counter
from spacy.tokenizer import Tokenizer
nlp = spacy.load("en_core_web_md")
df = pd.read_csv('./final_df_percentsep.csv', sep='%', index_col='Unnamed: 0')
df = df.fillna('none')
df.head()
df.isnull().sum()
# combine all text features into one string:
df['combined_text'] = df.name + " " + df.flavors + " " + df.race + " " + df.positive_effects + " " + df.negative_effects + " " + df.medical_uses + " " + df.Description
# Removing punctuations from our string
df["combined_text"] = df['combined_text'].str.replace('[^\w\s]',' ')
df["combined_text"] = df['combined_text'].str.replace('none','')
# Tokenizer
STOP_WORDS = nlp.Defaults.stop_words.union([' ',' ', '-PRON-', 'none'])
tokenizer = Tokenizer(nlp.vocab)
tokens = []
for doc in tokenizer.pipe(df['combined_text'], batch_size=500):
doc_tokens = []
for token in doc:
if token.text.lower() not in STOP_WORDS:
doc_tokens.append(token.text.lower())
tokens.append(doc_tokens)
df['tokens'] = tokens
df['tokens']
df['lemmas'] = df['combined_text'].apply(lambda text: [token.lemma_ for token in nlp(text) if (token.is_stop != True) and (token.is_punct != True)])
df['lemmas'] = df['lemmas'].str.join(' ')
df['lemmas']
```
### Test with lemmatization
```
tfidf = TfidfVectorizer()
dtm3 = tfidf.fit_transform(df['lemmas'])
dtm3 = pd.DataFrame(dtm3.todense(), columns=tfidf.get_feature_names())
dtm3.head()
# Fit on DTM
nn = NearestNeighbors(n_neighbors=5, algorithm='ball_tree')
nn.fit(dtm3)
# with putting it back into a df:
test_input = ["I need something to help with anxiety and pain but has a sweet flavor"]
user_input = tfidf.transform(test_input)
input_df = pd.DataFrame(user_input.todense())
score, recommended_strains = nn.kneighbors(input_df)
# without putting it back into a df:
user_input = "I want something to help with lack of appetite"
user_input = pd.Series(user_input)
vect_input = tfidf.transform(user_input)
score, recommended_strains = nn.kneighbors(vect_input.todense())
print(score, recommended_strains)
```
### Test without lemmatization:
```
# without lemmatization & putting into dataframe
dtm2 = tfidf.fit_transform(df['combined_text'])
dtm2 = pd.DataFrame(dtm2.todense(), columns=tfidf.get_feature_names())
# Fit on DTM
nn = NearestNeighbors(n_neighbors=5, leaf_size=50, algorithm='kd_tree')
nn.fit(dtm2)
test_input = ["Looking for something to help with headaches"]
user_input = tfidf.transform(test_input)
score, strain_index = nn.kneighbors(user_input.todense())
print(score, strain_index)
strains = [df[['name', 'medical_uses']].loc[n] for n in strain_index]
print(strains)
```
### Using Basilica
```
import basilica
API_KEY = 'API_KEY'
with basilica.Connection(API_KEY) as c:
embedded = []
for row in df['combined_text']:
sentence = row
embedding = list(c.embed_sentence(sentence))
embedded.append(embedding)
df['embedded'] = embedded
df.head()
df.to_csv('embedded_df.csv', index=False)
import basilica
import json
import numpy as np
import os
import random
import re
import sklearn.decomposition
import sklearn.neighbors
import sklearn.preprocessing
import time
from sklearn.pipeline import Pipeline
data_input = np.stack(df['embedded'].values, axis=0)
scaler = sklearn.preprocessing.StandardScaler(with_std=False)
pca = sklearn.decomposition.PCA(n_components=75, whiten=True)
data_input = scaler.fit_transform(data_input)
data_input = pca.fit_transform(data_input)
data_input = sklearn.preprocessing.normalize(data_input)
print(data_input.shape)
dtm = pd.DataFrame(data_input)
# Fit on DTM
nn3 = NearestNeighbors(n_neighbors=5, algorithm='ball_tree').fit(dtm)
user_input = "I need something to help with anxiety and pain but has a sweet flavor"
with basilica.Connection(API_KEY) as c:
embedded = c.embed_sentence(user_input)
embedded = np.stack([embedded], axis=0)
user_input = scaler.transform(embedded)
user_input = pca.transform(user_input)
user_input = sklearn.preprocessing.normalize(user_input)
score, strain_index = nn3.kneighbors(user_input)
print(score, strain_index)
strains = [df[['name', 'flavors', 'medical_uses']].loc[n] for n in strain_index]
print(strains)
```
### for pickled model
```
data_input = np.stack(df['embedded'].values, axis=0)
scaler = sklearn.preprocessing.StandardScaler(with_std=False)
pca = sklearn.decomposition.PCA(n_components=75, whiten=True)
scaled = scaler.fit_transform(data_input)
print(type(scaled))
pcad = pca.fit_transform(scaled)
normalizer = sklearn.preprocessing.Normalizer().fit(pcad) # fit only after pcad exists
normd = sklearn.preprocessing.normalize(pcad)
dtm = pd.DataFrame(normd)
nn = NearestNeighbors(n_neighbors=5, algorithm='ball_tree').fit(dtm)
dtm.shape
import joblib # needed before the dumps below
joblib.dump(scaler, 'scaler.pkl')
joblib.dump(pca, 'pcaer.pkl')
joblib.dump(nn, 'nnmodel.pkl')
joblib.dump(normalizer, 'normd.pkl')
model = joblib.load('nn.pkl')
scaled = joblib.load('scaler.pkl')
pcaer = joblib.load('pcaer.pkl')
nnmodel = joblib.load('nnmodel.pkl')
normd = joblib.load('normd.pkl')
target = "I need something to help with anxiety and pain but has a sweet flavor"
with basilica.Connection(API_KEY) as c:
embedded = c.embed_sentence(target)
embedded = np.stack([embedded], axis=0)
embedded.shape
user_input = scaled.transform(embedded)
user_input = pcaer.transform(user_input)
# score, strain_index = nn3.kneighbors(embedded1)
# print(score, strain_index)
import joblib
joblib.dump(scaled, 'scaled.pkl')
joblib.dump(nn, 'nn.pkl')
```
This notebook is part of https://github.com/AudioSceneDescriptionFormat/splines, see also https://splines.readthedocs.io/.
[back to rotation splines](index.ipynb)
# Cumulative Form
The basic idea, as proposed by
<cite data-cite="kim1995general">Kim, Kim and Shin (1995)</cite>
(section 4) is the following:
Instead of representing a curve as a sum of basis functions
weighted by its control point's position vectors $p_i$
(as it's for example done with [Bรฉzier splines](../euclidean/bezier.ipynb)),
they suggest to use the relative difference vectors $\Delta p_i$ between successive control points.
These relative difference vectors can then be "translated" to *local* rotations
(replacing additions with multiplications),
leading to a form of rotation splines.
## Piecewise Slerp
As an example,
they define a piecewise linear curve
\begin{equation*}
p(t) =
p_0 +
\sum_{i=1}^n \alpha_i(t) \Delta p_i,
\end{equation*}
where
\begin{align*}
\Delta p_i &= p_i - p_{i - 1}\\
\alpha_i(t) &= \begin{cases}
0 & t < i - 1\\
t - i + 1 & i - 1 \leq t < i\\
1 & t \geq i.
\end{cases}
\end{align*}
```
def alpha(i, t):
if t < i - 1:
return 0
elif t >= i:
return 1
else:
return t - i + 1
```
<div class="alert alert-info">
Note
There is an off-by-one error in the paper's definition of $\alpha_i(t)$:
\begin{equation*}
\alpha_i(t) = \begin{cases}
0 & t < i\\
t - i & i \leq t < i + 1\\
1 & t \geq i + 1.
\end{cases}
\end{equation*}
This assumes that $i$ starts with $0$,
but it actually starts with $1$.
</div>
This "cumulative form" can be "translated" to a rotation spline
by replacing addition with multiplication
and the relative difference vectors by relative (i.e. local) rotations
(represented by unit quaternions):
\begin{equation*}
q(t) =
q_0
\prod_{i = 1}^n \exp(\omega_i \alpha_i(t)),
\end{equation*}
where
\begin{equation*}
\omega_i =
\log\left(q_{i - 1}^{-1} q_i\right).
\end{equation*}
The paper uses above notation,
but this could equivalently be written as
\begin{equation*}
q(t) =
q_0
\prod_{i = 1}^n \left(q_{i - 1}^{-1} q_i\right)^{\alpha_i(t)}.
\end{equation*}
```
import numpy as np
```
[helper.py](helper.py)
```
from helper import angles2quat, animate_rotations, display_animation
from splines.quaternion import UnitQuaternion
# NB: math.prod() since Python 3.8
product = np.multiply.reduce
def piecewise_slerp(qs, t):
return qs[0] * product([
(qs[i - 1].inverse() * qs[i])**alpha(i, t)
for i in range(1, len(qs))])
qs = [
angles2quat(0, 0, 0),
angles2quat(90, 0, 0),
angles2quat(90, 90, 0),
angles2quat(90, 90, 90),
]
times = np.linspace(0, len(qs) - 1, 100)
ani = animate_rotations(
[piecewise_slerp(qs, t) for t in times],
figsize=(3, 2))
display_animation(ani, default_mode='reflect')
```
## Cumulative Bรฉzier/Bernstein Curve
After the piecewise Slerp,
<cite data-cite="kim1995general">Kim, Kim and Shin (1995)</cite>
show (in section 5.1) how to create a *cumulative form*
inspired by Bรฉzier splines, i.e. using Bernstein polynomials.
They start with the well-known equation for Bรฉzier splines:
\begin{equation*}
p(t) =
\sum_{i=0}^n p_i \beta_{i,n}(t),
\end{equation*}
where $\beta_{i,n}(t)$ are Bernstein basis functions as shown in
[the notebook about Bรฉzier splines](../euclidean/bezier-de-casteljau.ipynb#Arbitrary-Degree).
They re-formulate this into a *cumulative form*:
\begin{equation*}
p(t) =
p_0 \tilde{\beta}_{0,n}(t) +
\sum_{i=1}^n \Delta p_i \tilde{\beta}_{i,n}(t),
\end{equation*}
where the cumulative Bernstein basis functions are given by
\begin{equation*}
\tilde{\beta}_{i,n}(t) =
\sum_{j=i}^n \beta_{j,n}(t).
\end{equation*}
We can get the Bernstein basis polynomials via the function
[splines.Bernstein.basis()](../python-module/splines.rst#splines.Bernstein.basis):
```
from splines import Bernstein
```
... and create a simple helper function to sum them up:
```
from itertools import accumulate
def cumulative_bases(degree, t):
return list(accumulate(Bernstein.basis(degree, t)[::-1]))[::-1]
```
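Since the Bernstein polynomials form a partition of unity, the first cumulative basis function $\tilde{\beta}_{0,n}(t)$ is identically $1$, which is why $q_0$ appears without an exponent in the cumulative form. A self-contained sketch of the same helper, re-implementing the basis with `math.comb` instead of the `splines` package:

```python
from itertools import accumulate
from math import comb

def bernstein(n, t):
    # Bernstein basis polynomials of degree n at parameter t
    return [comb(n, i) * t**i * (1 - t)**(n - i) for i in range(n + 1)]

def cumulative(n, t):
    # cumulative basis: tilde beta_{i,n}(t) = sum over j >= i of beta_{j,n}(t)
    return list(accumulate(bernstein(n, t)[::-1]))[::-1]

for t in (0.0, 0.25, 0.5, 1.0):
    assert abs(cumulative(3, t)[0] - 1) < 1e-12  # partition of unity
```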
Finally, they "translate" this into a rotation spline using quaternions, like before:
\begin{equation*}
q(t) =
q_0
\prod_{i=1}^n \exp\left(\omega_i \tilde{\beta}_{i,n}(t)\right),
\end{equation*}
where
\begin{equation*}
\omega_i =
\log(q_{i-1}^{-1} q_i).
\end{equation*}
Again, they use the above notation in the paper,
but this could equivalently be written as
\begin{equation*}
q(t) =
q_0
\prod_{i=1}^n \left(q_{i-1}^{-1} q_i\right)^{\tilde{\beta}_{i,n}(t)}.
\end{equation*}
```
def cumulative_bezier(qs, t):
    degree = len(qs) - 1
    bases = cumulative_bases(degree, t)
    assert np.isclose(bases[0], 1)
    return qs[0] * product([
        (qs[i - 1].inverse() * qs[i])**bases[i]
        for i in range(1, len(qs))
    ])
times = np.linspace(0, 1, 100)
rotations = [cumulative_bezier(qs, t) for t in times]
ani = animate_rotations(rotations, figsize=(3, 2))
display_animation(ani, default_mode='reflect')
```
## Comparison with De Casteljau's Algorithm
> This Bรฉzier quaternion curve has a different
> shape from the Bรฉzier quaternion curve
> of <cite data-cite="shoemake1985animating">Shoemake (1985)</cite>.
>
> --<cite data-cite="kim1995general">Kim, Kim and Shin (1995)</cite>, section 5.1
The method described by <cite data-cite="shoemake1985animating">Shoemake (1985)</cite>
is shown in [a separate notebook](de-casteljau.ipynb).
An implementation is available in the
[splines.quaternion.DeCasteljau](../python-module/splines.quaternion.rst#splines.quaternion.DeCasteljau) class:
```
from splines.quaternion import DeCasteljau
times = np.linspace(0, 1, 100)
control_polygon = [
    angles2quat(90, 0, 0),
    angles2quat(0, -45, 90),
    angles2quat(0, 0, 0),
    angles2quat(180, 0, 180),
]
cumulative_rotations = [
    cumulative_bezier(control_polygon, t)
    for t in times
]
cumulative_rotations_reversed = [
    cumulative_bezier(control_polygon[::-1], t)
    for t in times
][::-1]
casteljau_rotations = DeCasteljau([control_polygon]).evaluate(times)
ani = animate_rotations({
    'De Casteljau': casteljau_rotations,
    'Cumulative': cumulative_rotations,
    'Cumulative reversed': cumulative_rotations_reversed,
}, figsize=(7, 2))
display_animation(ani, default_mode='reflect')
```
Applying the same method on the reversed list of control points
and then time-reversing the resulting sequence of rotations
leads to an equal (except for rounding errors) sequence of rotations
when using De Casteljau's algorithm:
```
casteljau_rotations_reversed = DeCasteljau([control_polygon[::-1]]).evaluate(times)[::-1]
for one, two in zip(casteljau_rotations, casteljau_rotations_reversed):
    assert np.isclose(one.scalar, two.scalar)
    assert np.isclose(one.vector[0], two.vector[0])
    assert np.isclose(one.vector[1], two.vector[1])
    assert np.isclose(one.vector[2], two.vector[2])
```
However, doing the same thing with the "cumulative form"
can lead to a significantly different sequence,
as can be seen in the above animation.
> All content here is under a Creative Commons Attribution [CC-BY 4.0](https://creativecommons.org/licenses/by/4.0/) and all source code is released under a [BSD-2 clause license](https://en.wikipedia.org/wiki/BSD_licenses).
>
>Please reuse, remix, revise, and [reshare this content](https://github.com/kgdunn/python-basic-notebooks) in any way, keeping this notice.
# Module 2: Overview
We cover a diverse range of topics:
* strings,
* lists [also called *vectors*, if you are used to MATLAB or C or Java]
* for-loops.
They seem unrelated, but they hang together conceptually: they are all about sequences, or collections: characters in a string, items in a list, and loops to process the sequence. We will formally compare all sequence types later. For now let us just use them.
At the end, and in between these sections, we will cover some topics related to commenting.
## Preparing for this module
You should cover these resources (it can take quite some time!)
* https://runestone.academy/runestone/static/fopp/Sequences/toctree.html and go through the entire chapter 6. You can interactively code on this website. Please also answer the "Check your understanding" questions as you go.
* https://runestone.academy/runestone/static/fopp/Iteration/toctree.html and go through all of chapter 7, skipping section 7.8 unless you are interested in image analysis.
* https://runestone.academy/runestone/static/fopp/Files/toctree.html and only complete up to section 10.5 (reading a file). We will cover writing to files in a later session.
* https://www.w3schools.com/python/python_lists.asp and go through the presented examples on lists.
## Strings
Strings are some of the simplest objects in Python. In the [prior module](https://yint.org/pybasic01) you created several strings. Now create this string in Python:
```python
s = """Secretly under development for the past three years, Bezos said the
"Blue Moon" lander, using a powerful new hydrogen-powered engine generating up
to 10,000 pounds of thrust, will be capable of landing up to 6.5 metric tons
of equipment on the lunar surface."""
```
Now use the above string to perform the following actions. Look up the Standard library help files for ``strings`` (like we showed last time) to find the methods required.
1. Print it to screen completely in upper case.
1. Print it to screen but with lower and uppercase characters switched around.
1. Try the following: ``print(s * 8)``.
1. Try the following: ``print(s + s)``. *Do these two mathematical operations make sense for strings?*
1. What is the length of this string?
1. How many times does the word "the" appear in the string?
1. At which position in the string does the word ``Secretly`` appear? *How does this differ with MATLAB?*
1. At which position in the string does the word ``Bezos`` appear?
1. Return a boolean ``True`` or ``False`` if the string ``endswith`` a full stop.
1. Return the string, replacing the instance of 'hydrogen' with 'nuclear'.
1. Replace every space in the above sentence with a newline character, and reprint the sentence to the screen.
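If you get stuck, a few of the exercises can be sketched with built-in string methods (shown here on a shorter stand-in string; one possible approach, not the only one):

```python
s = "Secretly under development, the lander uses the new engine."

print(s.upper())             # 1. everything in upper case
print(s.swapcase())          # 2. lower and upper case switched around
print(len(s))                # 5. length of the string
print(s.count("the"))        # 6. occurrences of "the"
print(s.find("Secretly"))    # 7. 0 -- Python counts from 0, unlike MATLAB
print(s.endswith("."))       # 9. True
print(s.replace("engine", "motor"))   # 10. replace a word
print(s.replace(" ", "\n"))  # 11. every space becomes a newline
```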
The above are all effectively done using what are called ***methods***.
> A method is an *attribute* of an *object*.
In the above, a ``string`` is your *object* and objects have one or more attributes.
Some tips:
1. You can get a **list** [we cover lists next!] of all attributes using the ``dir(...)`` command.
```python
s = """Secretly under development for ... the lunar surface."""
dir(s)
['__add__', '__class__', '__contains__', '__delattr__', '__dir__', '__doc__',
'__eq__', '__format__', '__ge__', '__getattribute__', '__getitem__', '__getnewargs__',
'__gt__', '__hash__', '__init__', '__iter__', '__le__', '__len__', '__lt__', '__mod__',
'__mul__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__rmod__',
'__rmul__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', 'capitalize',
'casefold', 'center', 'count', 'encode', 'endswith', 'expandtabs', 'find', 'format',
'format_map', 'index', 'isalnum', 'isalpha', 'isdecimal', 'isdigit', 'isidentifier',
'islower', 'isnumeric', 'isprintable', 'isspace', 'istitle', 'isupper', 'join', 'ljust',
'lower', 'lstrip', 'maketrans', 'partition', 'replace', 'rfind', 'rindex', 'rjust',
'rpartition', 'rsplit', 'rstrip', 'split', 'splitlines', 'startswith', 'strip',
'swapcase', 'title', 'translate', 'upper', 'zfill']
```
* You can ignore all the attributes beginning and ending with a double underscore, for example ``__add__``. The attributes which are of practical use to you are the ones starting from ``capitalize``, all the way to the end.
* You don't need to create a string ``s`` first to get a list of the attributes. You can also use this shortcut:
```python
dir('')
dir(str)
```
* If you see an attribute that looks interesting, you can request help on it: ``help(''.startswith)`` or ``help("".startswith)``. Notice the ``''`` in the brackets: it creates an empty string, and then accesses the attribute ``.startswith`` and then asks for help on that.
* You will get a piece of help text printed to the screen. This is helpful later on when you are comfortable with Python. In the beginning it is more helpful to search in a search engine, which will give you a page with examples. The built-in Python help is usually very very brief.
Use this knowledge now to figure out what the difference is between ``s.find`` and ``s.index``. Makes sense?
You can do what is called *slicing* on a string. Slicing is the ability to get sub-parts of a string:
```python
word = 'landing'
print(word[1:4])
```
* How many characters are in the text which is printed on the screen?
* Again, for MATLAB users: how does that differ with what you are used to?
* What is returned with ``word[3:]``?
* What is returned with ``word[3:99]``?
* What is returned with ``word[2:6:3]``?
* And try this: ``word[6:2:-1]``
* And lastly ``word[-4:-7:-1]``
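The slices above can be checked directly; each ``assert`` below states the expected sub-string, so running the cell silently confirms the answers:

```python
word = 'landing'

assert word[1:4] == 'and'       # 3 characters: the stop index is exclusive
assert word[3:] == 'ding'       # from index 3 to the end
assert word[3:99] == 'ding'     # slicing past the end is forgiven
assert word[2:6:3] == 'nn'      # indices 2 and 5
assert word[6:2:-1] == 'gnid'   # backwards; stop index still exclusive
assert word[-4:-7:-1] == 'dna'  # negative indices, backwards
```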
Speaking of DNA ... create this sequence in Python:
```python
seq = """TAGGGGCCTCCAATTCATCCAACACTCTACGCCTTCTCCAAGAGCTAGTAGGGCACCCTGCAGTTGGAAAGGGAACTATTTCGTAGGGCGAGCCCATACCGTCTCTCTTGCGGAAGACTTAACACGATAGGAAGCTGGAATAGTTTCGAACGATGGTTATTAATCCTAATAACGGAACGCTGTCTGGAGGATGAGTGTGACGGAGTGTAACTCGATGAGTTACCCGCTAATCGAACTGGGCGAGAGATCCCAGCGCTGATGCACTCGATCCCGAGGCCTGACCCGACATATCAGCTCAGACTAGAGCGGGGCTGTTGACGTTTGGGGTTGAAAAAATCTATTGTACCAATCGGCTTCAACGTGCTCCACGGCTGGCGCCTGAGGAGGGGCCCACACCGAGGAAGTAGACTGTTGCACGTTGGCGATGGCGGTAGCTAACTAAGTCGCCTGCCACAACAACAGTATCAAAGCCGTATAAAGGGAACATCCACACTTTAGTGAATCGAAGCGCGGCATCAGAATTTCCTTTTGGATACCTGATACAAAGCCCATCGTGGTCCTTAGACTTCGTGCACATACAGCTGCACCGCACGCATGTGGAATTAGAGGCGAAGTACGATTCCTAGACCGACGTACGATACAACTATGTGGATGTGACGAGCTTCTTTTATATGCTTCGCCCGCCGGACCGGCCTCGCGATGGCGTAG"""
```
* What is the first occurrence of ``GATTAG`` in the sequence?
* How many times does ``TTTT`` occur?
* Replace all ``A`` entries with ``T``'s and all ``C`` entries with ``G``'s.
* Reset the string back again, but now try something a bit more advanced: switch all ``T`` entries to ``A`` and all ``A`` entries to ``T``.
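A sketch of the DNA exercises, shown on a short stand-in sequence. Note that ``str.count`` counts *non-overlapping* matches, and that swapping ``T`` and ``A`` simultaneously needs ``str.maketrans`` -- two chained ``replace`` calls would destroy the information the second one needs:

```python
seq = "TAGGATTAGCTTTTA"   # short stand-in for the full sequence

print(seq.find('GATTAG'))   # index of the first occurrence (-1 if absent)
print(seq.count('TTTT'))    # non-overlapping matches only

# naive replacement: A -> T, then C -> G
replaced = seq.replace('A', 'T').replace('C', 'G')

# simultaneous swap of T and A with a translation table
ta_swap = seq.translate(str.maketrans('TA', 'AT'))
print(ta_swap)
```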
## Lists
We will cover creating, adding, accessing and using lists of objects.
You have seen this before: create a list with the square bracket characters: ``[`` and ``]``.
For example: ``words = ['Mary', 'loved', 'chocolate.']``
One of the most useful functions in Python is ``len(...)``. Verify that it returns an integer value of 3. Does it have the **type** you expect?
The entries in the list can be mixed types (contrast this to most other programming languages!)
```python
group = ['yeast', 'bacillus', 994, 'aspergillus' ]
```
An important test is to check if the list contains something:
```python
'aspergillus' in group
499 in group
```
Like we saw with strings, you can use the ``*`` and ``+`` operators:
```python
group * 3
group + group # might not do what you expect!
group - group # oooops
```
And like strings, you refer to them based on the position counter of 0:
```python
group[0]
# but this is also possible:
group[-3]
# however, is this expected?
group[4]
```
Lists also have some methods that you can use. Lists in fact have far fewer methods than strings. Remember how to get a list of methods from the [prior module](https://yint.org/pybasic01)?
```python
dir(....) # what do you fill in here?
```
How many methods do you see which you can apply to a list?
Let's try a few of them out:
1. Try ``append`` a new entry to the ``group`` list you created above: add the entry "Candida albicans"
1. Create a new list ``reptiles = ['crocodile', 'turtle']`` and then try: ``group.extend(reptiles)``.
1. Print the list. Remove the ``crocodile`` entry from the list. Print it again to verify it succeeded.
1. Now try to remove the entry again. What happens?
1. Use the following command: ``group.reverse()``, and print the ``group`` variable to the screen.
1. Now try this instead: ``group = group.reverse()`` and print the ``group`` variable to the screen. What happened this time?
1. So you are back to square one: make a new list variable ``group = ['yeast', 'bacillus', 'aspergillus' ]`` and try ``group.sort()``. Notice that ``.sort()``, like the ``.reverse()`` method, operates *in-place*: there is no need to assign the output of the action to a new variable. In fact, you cannot.
1. Here's something to be aware of: create ``group = ['yeast', 'bacillus', 994, 'aspergillus' ]``; and now try ``group.sort()``. What does the error message tell you?
Lists behave like a stack: you can add things to the end using ``.append()`` and you can remove them again with ``.pop()``.
Think of a stack of plates: last appended, first removed.
Try it:
```python
species = ['chimp', 'bacillus', 'aspergillus']
species.append('hoooman')
first_out = species.pop()
print(first_out)
```
* What is the length of the list after running this code?
* Try adding a new entry ``arachnid`` between ``chimp`` and ``bacillus`` using the ``.insert()`` command. Print the list to verify it.
> If you don't know how to use the ``.insert()`` method, but you know it exists, you can type ``help([].insert)`` at the command prompt to get a quick help. Or you can search the web which gives more comprehensive help, with examples.
* First use the ``.index()`` function to find the index of "bacillus". Then use the ``.pop()`` method to remove it. In other words, do not directly provide ``.pop()`` the integer index to remove. Assign the popped entry to a new variable.
* Overwrite the entry that is currently in the second position with a new value: "neanderthalensis".
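The last few exercises can be sketched as follows (one possible approach):

```python
species = ['chimp', 'bacillus', 'aspergillus']

species.insert(1, 'arachnid')      # between 'chimp' and 'bacillus'
idx = species.index('bacillus')    # find the index first ...
removed = species.pop(idx)         # ... then pop exactly that entry
species[1] = 'neanderthalensis'    # overwrite the second position

print(removed)
print(species)
```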
## For loops: iterating
The ``for`` loop is used to run a piece of code a certain number of times. The basic structure is shown, with an example that prints the integer values from 3 up to, and including, 8:
```python
# This is one way to do it:
for i in range(3, 9):
    # You can have many lines of code in the for-loop.
    # As an example, two for-loop statements are shown here.
    print(i)
    print('-----')
```
Before the command ``print(i)`` is a tab character or 4 spaces. Please use spaces, and not tabs. Especially if you will interact with other colleagues writing code. Therefore the letter ``p`` from ``print`` goes exactly under the ``i``.
That ``i`` is the *loop counter*. The ``range(3, 9)`` tells how many times the loop will iterate.
Use ``list(range(3, 9))`` to see a list representation of the ``range()`` function. Try creating these ranges:
* Every integer from 0, up to and including 12.
* Every integer from 0, up to and including 12, in steps of 2
* Every integer from 12 down to and including 0, in steps of -3
* Use a ``range`` command to create the values ``[-10, -40, -70]``
* Values between 0.5, up to and including 9.5, in steps of 0.5
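Possible answers for the ranges above -- note that ``range()`` only accepts integers, so the last one needs a small workaround:

```python
print(list(range(0, 13)))          # 0 up to and including 12
print(list(range(0, 13, 2)))       # same, in steps of 2
print(list(range(12, -1, -3)))     # 12 down to and including 0
print(list(range(-10, -71, -30)))  # [-10, -40, -70]

# range() cannot produce floats; scale an integer range instead:
print([n / 2 for n in range(1, 20)])   # 0.5 up to and including 9.5
```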
Notice how these behave exactly as the string slices seen above.
1. Inside the for-loop you can write one or more statements. In the above there are 2 statements and a comment. It is usual to start your comment -- if it is required -- with an indent as well. This way it is clear the comment refers to the contents of the for-loop.
2. You can call the *loop counter* anything you like, as long as it is a valid variable name. Remember those from [last time](https://yint.org/pybasic01)?
You can loop over many types of objects in Python. Try this:
```python
reptiles = ['crocodile', 'turtle', 12.34, 'lizard', 'snake', False]
for animal in reptiles:
    print('The "animal" object is of type ' + str(type(animal)))
```
and here you can see *dynamic typing* at its finest: the ``animal`` variable is dynamically changing its type in the loop.
You can also iterate over the entries of a string!
```python
sequence = "TAGGGGCCTCCA"
number = 1
for base in sequence:
    print('Base number {} is {}'.format(number, base))
    number += 1
```
In the above we introduced another concept: that you can print with the ``.format()`` command. We will see more of this later, but then it won't be a surprise.
Now that you have seen how you can iterate over the items of a list, let's try to put this to use:
1. Print the 3-times table, from 1 up till 12, like you learned in school:
> 3 times 1 is 3
>
> 3 times 2 is 6
>
> 3 times 3 is 9
> ...
2. If you haven't done so already, re-write your code to use the ``.format()`` command, as demonstrated above.
3. With 1 line of code find at which position in the list the value of 42 appears: ``[0, 3, 9, 12, 27, 35, 42, 50, 66]``
4. *Based on a real example that I had to code last week*: find the value in the previous list closest to ``19``. Note: don't worry about short code, or efficiency. Just find the answer. In the real example the list was thousands of entries long and was to find the closest time within $\pm$ 5 minutes. Then you need to worry about efficiency.
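Sketches for exercises 1 to 4. The ``min()`` call with a ``key`` function is one short way to find the closest value; it does not worry about efficiency, as the exercise suggests:

```python
# 1./2. the 3-times table, using .format()
for n in range(1, 13):
    print('3 times {} is {}'.format(n, 3 * n))

values = [0, 3, 9, 12, 27, 35, 42, 50, 66]

# 3. position of 42, in one line
print(values.index(42))

# 4. value closest to 19
closest = min(values, key=lambda v: abs(v - 19))
print(closest)
```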
**Advanced tip:** sometimes you want to iterate through a list, but also know which entry you are iterating on. You can do both simultaneously with the ``enumerate`` command.
```python
names = ['Leonardo', 'Carl', 'Amiah', 'Yaretzi', 'Destiny', 'Alan']
for index, name in enumerate(names):
    print('{} is number {} in the list'.format(name, index+1))
```
What ``enumerate`` does is to create a ``tuple`` with 2 entries. These two entries are dynamically assigned: the first one is an ``integer`` assigned to ``index`` and the second one is assigned to ``name`` in this example. You are free to choose both variable names.
Further self-development:
1. Rewrite the code above for DNA bases using the ``enumerate`` function, eliminating the manual ``number`` tracking.
2. Look up the ``reversed`` keyword, which can be used inside ``enumerate`` to run your for-loop in reverse.
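A sketch for both self-development items -- ``enumerate`` takes an optional ``start`` argument that removes the manual counter, and ``reversed`` can be wrapped inside it:

```python
sequence = "TAGGGGCCTCCA"

# 1. enumerate(..., start=1) instead of tracking `number` by hand
for number, base in enumerate(sequence, start=1):
    print('Base number {} is {}'.format(number, base))

# 2. the same loop, but over the reversed sequence
for i, base in enumerate(reversed(sequence)):
    print('{} from the end: {}'.format(i, base))
```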
## Commenting and variable names
Comments are often as important as the code itself. But it takes time to write them.
The choice of variable names is related to the topic of comments. In many ways, the syntax of Python makes the code self-documenting, meaning you do not need to add comments at all. But it definitely is assisted by choosing meaningful variable names:
>```python
>for genome in genome_list:
>    <do something with genome>
>```
This quite clearly shows that we are iterating over all the genomes in some iterable container (it could be a list, tuple, or set, for example) of sequenced genomes.
But here the code structure is identical:
>```python
>for k in seq:
>    <do something with k>
>```
Later on in the code it might not be clear what ``k`` represents. It is also not clear what ``seq`` is, or contains.
Comments should be added in these places and cases:
* At the top of your file: name and date, and a few sentences on the purpose of the code. It is also helpful to note which Python version you use, or expect.
* Refer to any publications or internal company reports for algorithms implemented
* Refer to a website if you use any interesting/unusual shortcut code or non-obvious code. This is more for yourself, and your future colleagues.
To cover during the interactive session:
* Creating code cells in Spyder: ``# %% text here (in Spyder)``
* The differences between lists and strings: *mutable* and *immutable*.
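The *mutable* versus *immutable* distinction in the last bullet can be demonstrated in a few lines: item assignment works on a list but raises a ``TypeError`` on a string:

```python
group = ['yeast', 'bacillus']
group[0] = 'aspergillus'   # lists are mutable: in-place assignment works

name = 'yeast'
try:
    name[0] = 'Y'          # strings are immutable
except TypeError as err:
    print('strings are immutable:', err)
```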
```
# IGNORE this. Execute this cell to load the notebook's style sheet.
from IPython.core.display import HTML
css_file = './images/style.css'
HTML(open(css_file, "r").read())
```
## Pattern Recognition
### Assignment 4
#### Group 4:
- COE18B056 - Thigulla Vamsi Krishna
- COE18B065 - Srinivasan R Sharma
- CED18I039 - Paleti Krishnasai
**Question 2**
```
import cvxopt
import numpy as np
import matplotlib.pyplot as plt
cvxopt.solvers.options['show_progress']=False
class Perceptron:
    def __init__(self, learning_rate=0.01, epochs=1000):
        self.learning_rate = learning_rate
        self.epochs = epochs

    def __activation_func(self, x):
        return 1 if x >= 0 else 0

    def __predict(self, x):
        g = (self.weights.T).dot(x)
        return self.__activation_func(g)

    def __plot(self, X, y):
        color_cond = ['red' if i == 1 else 'yellow' for i in y]
        plt.scatter(np.array(X[:, 1]), np.array(X[:, 2]), color=color_cond)
        slope = -(self.weights[1] / self.weights[2])
        intercept = -(self.weights[0] / self.weights[2])
        ax = plt.gca()
        x_vals = np.array(ax.get_xlim())
        y_vals = intercept + (slope * x_vals)
        plt.plot(x_vals, y_vals)
        plt.title('DECISION BOUNDARY')
        plt.show()

    def fit(self, X, y):
        self.X = np.hstack((np.ones((len(X), 1)), X))
        self.y = y
        self.weights = np.zeros(len(self.X[0]))
        self.epoch = 0
        while self.epoch < self.epochs:
            self.epoch = self.epoch + 1
            self.old_W = np.copy(self.weights)
            for index, x in enumerate(self.X):
                self.weights = self.weights + self.learning_rate * (self.y[index] - self.__predict(x)) * x
            if np.array_equal(self.weights, self.old_W):
                break
        print(f'Epoch {self.epoch} --> W: {self.weights}')
        self.__plot(self.X, self.y)

    def predict(self, X):
        if hasattr(self, 'weights'):
            X = np.hstack((np.ones((len(X), 1)), X))
            g = X @ self.weights
            return np.where(g >= 0, 1, 0)
        else:
            print('Please run fit in order to be able to use predict')

    @staticmethod
    def accuracy(y_true, y_pred):
        return np.sum(y_true == y_pred) / len(y_true)
def linear_kernel(x1, x2):
    return np.dot(x1, x2)

def poly_kernal(x, y, p=2):
    return (1 + np.dot(x, y))**p

def gaussian_kernal(x, y, sigma=5.0):
    return np.exp(-np.linalg.norm(x - y) ** 2 / (2 * (sigma ** 2)))
class SVM:
    def __init__(self, kernel=linear_kernel, C=None):
        self.kernel = kernel
        self.C = C
        if self.C is not None:
            self.C = float(self.C)

    def fit(self, X, y):
        n_samples, n_features = X.shape
        K = np.zeros((n_samples, n_samples))
        for i in range(n_samples):
            for j in range(n_samples):
                K[i, j] = self.kernel(X[i], X[j])
        P = cvxopt.matrix(np.outer(y, y) * K)
        q = cvxopt.matrix(np.ones(n_samples) * -1)
        y = y.astype(np.double)
        A = cvxopt.matrix(y, (1, n_samples))
        b = cvxopt.matrix(0.0)
        if self.C is None:
            G = cvxopt.matrix(np.diag(np.ones(n_samples) * -1))
            h = cvxopt.matrix(np.zeros(n_samples))
        else:
            tmp1 = np.diag(np.ones(n_samples) * -1)
            tmp2 = np.identity(n_samples)
            G = cvxopt.matrix(np.vstack((tmp1, tmp2)))
            tmp1 = np.zeros(n_samples)
            tmp2 = np.ones(n_samples) * self.C
            h = cvxopt.matrix(np.hstack((tmp1, tmp2)))
        solution = cvxopt.solvers.qp(P, q, G, h, A, b)
        a = np.ravel(solution['x'])
        sv = a > 1e-5
        ind = np.arange(len(a))[sv]
        self.a = a[sv]
        self.sv = X[sv]
        self.sv_y = y[sv]
        print(f"{len(self.a)} support vectors out of {n_samples} points")
        self.b = 0
        for n in range(len(self.a)):
            self.b += self.sv_y[n]
            self.b -= np.sum(self.a * self.sv_y * K[ind[n], sv])
        self.b /= len(self.a)
        if self.kernel == linear_kernel:
            self.w = np.zeros(n_features)
            for n in range(len(self.a)):
                self.w += self.a[n] * self.sv_y[n] * self.sv[n]
        else:
            self.w = None

    def project(self, X):
        if self.w is not None:
            return np.dot(X, self.w) + self.b
        else:
            y_predict = np.zeros(len(X))
            for i in range(len(X)):
                s = 0
                for a, sv_y, sv in zip(self.a, self.sv_y, self.sv):
                    s += a * sv_y * self.kernel(X[i], sv)
                y_predict[i] = s
            return y_predict + self.b

    def predict(self, X):
        return np.sign(self.project(X))

    def get_nonlinear_equation(self):
        eq = np.zeros(6)
        for a, sv_y, sv in zip(self.a, self.sv_y, self.sv):
            eq += a * sv_y * np.asarray([sv[0] ** 2, sv[1] ** 2, 1, sv[0] * 2, sv[1] * 2, 2 * sv[0] * sv[1]])
        return eq

    def __str__(self):
        print("Classifier Details")
        print(f"Alpha: {self.a}")
        print(f"Bias: {self.b}")
        if self.kernel == linear_kernel:
            print(f"Weights: {self.w}")
            print(f"Center Margin Equation: {self.w[0]} x1 + {self.w[1]} x2 + {self.b} = 0")
        else:
            print("Weights: None")
            w = self.get_nonlinear_equation()
            print(f"Center Margin Equation: {w[0]} x1^2 + {w[1]} x2^2 + {w[3]} x1 + {w[4]} x2 + {w[5]} x1x2 + {w[2]} + {self.b} = 0")
        print(f"Support vectors: {self.sv}")
        return ""

    @staticmethod
    def accuracy(y_true, y_pred):
        return np.sum(y_true == y_pred) / len(y_true)

    def plot_decision_boundary(self, X, y):
        color = ['red' if c > 0 else 'blue' for c in y]
        plt.scatter(self.sv[:, 0], self.sv[:, 1], s=100, c="g")
        plt.scatter(X[:, 0], X[:, 1], c=color)
        w = self.w
        b = self.b
        a = -w[0] / w[1]
        xx = np.linspace(-1, 2)
        yy = a * xx - b / w[1]
        plt.plot(xx, yy, "k")
        yy = a * xx - (b + 1) / w[1]
        plt.plot(xx, yy, "k--")
        yy = a * xx - (b - 1) / w[1]
        plt.plot(xx, yy, "k--")
        plt.show()

    def plot_contour(self, X, y):
        X1_train = X[y == -1]
        X2_train = X[y != -1]
        color = ['red' if c > 0 else 'blue' for c in y]
        plt.scatter(self.sv[:, 0], self.sv[:, 1], s=100, c="g")
        plt.scatter(X[:, 0], X[:, 1], c=color)
        X1, X2 = np.meshgrid(np.linspace(0, 7, 20), np.linspace(-5, 10, 20))
        X = np.array([[x1, x2] for x1, x2 in zip(np.ravel(X1), np.ravel(X2))])
        Z = self.project(X).reshape(X1.shape)
        plt.contour(X1, X2, Z, [0.0], colors='k', linewidths=1, origin='lower')
        plt.contour(X1, X2, Z + 1, [0.0], colors='grey', linewidths=1, origin='lower')
        plt.contour(X1, X2, Z - 1, [0.0], colors='grey', linewidths=1, origin='lower')
        plt.show()
```
### perceptron
```
X = np.array([[2, 2], [-1, -3], [-1, 2], [0, -1], [1, 3], [-1, -2], [1, -2],[-1, -1]])
y = np.array([1, 0, 1, 0, 1, 0, 0, 1]).T
slp=Perceptron(learning_rate=0.01, epochs=20)
slp.fit(X, y)
y_pred = slp.predict(X)
print(f"Accuracy: {slp.accuracy(y, y_pred)}")
```
###### learning rate = 0.5
```
slp = Perceptron(learning_rate=0.5)
slp.fit(X, y)
y_pred = slp.predict(X)
print(f"Accuracy: {slp.accuracy(y, y_pred)}")
```
### SVM
```
X = np.array([[2, 2], [-1, -3], [-1, 2], [0, -1], [1, 3], [-1, -2], [1, -2],[-1, -1]])
y = np.array([1, 0, 1, 0, 1, 0, 0, 1]).T
y = np.where(y <= 0, -1, 1)
svm = SVM()
svm.fit(X, y)
y_pred = svm.predict(X)
print(f"\nAccuracy: {svm.accuracy(y, y_pred)}\n")
print(svm)
svm.plot_decision_boundary(X, y)
```
# Neural Fingerprints
We create atom, bond, and edge tensors from molecule SMILES using `chemml.chem.tensorise_molecules` in order to build neural fingerprints using the `chemml.models.NeuralGraphHidden` and `chemml.models.NeuralGraphOutput` modules. These neural fingerprints are then used as features to train a simple feed-forward neural network to predict densities of small organic compounds using TensorFlow.
Here we import a sample dataset from the ChemML library, which contains the SMILES codes for 500 small organic molecules together with their densities in $kg/m^3$.
```
import numpy as np
from chemml.datasets import load_organic_density
molecules, target, dragon_subset = load_organic_density()
target = np.asarray(target['density_Kg/m3'])
```
Building `chemml.chem.Molecule` objects from molecule SMILES.
```
from chemml.chem import Molecule
mol_objs_list = []
for smi in molecules['smiles']:
    mol = Molecule(smi, 'smiles')
    mol.hydrogens('add')
    mol.to_xyz('MMFF', maxIters=10000, mmffVariant='MMFF94s')
    mol_objs_list.append(mol)
```
Molecule tensors can be used to create neural graph fingerprints using `chemml.models`
```
from chemml.chem import tensorise_molecules
xatoms, xbonds, xedges = tensorise_molecules(molecules=mol_objs_list, max_degree=5,
                                             max_atoms=None, n_jobs=-1, batch_size=100, verbose=True)
```
## Splitting and preprocessing the data
```
from sklearn.model_selection import ShuffleSplit
from sklearn.preprocessing import StandardScaler
y_scale = StandardScaler()
rs = ShuffleSplit(n_splits=1, test_size=.20, random_state=42)
for train, test in rs.split(mol_objs_list):
    xatoms_train = xatoms[train]
    xatoms_test = xatoms[test]
    xbonds_train = xbonds[train]
    xbonds_test = xbonds[test]
    xedges_train = xedges[train]
    xedges_test = xedges[test]
    target_train = target[train]
    target_test = target[test]
target_train = y_scale.fit_transform(target_train.reshape(-1, 1))
print('Training data:\n')
print('Atoms: ',xatoms_train.shape)
print('Bonds: ',xbonds_train.shape)
print('Edges: ',xedges_train.shape)
print('Target: ',target_train.shape)
print('\nTesting data:\n')
print('Atoms: ',xatoms_test.shape)
print('Bonds: ',xbonds_test.shape)
print('Edges: ',xedges_test.shape)
print('Target: ',target_test.shape)
```
## Building the Neural Fingerprints
The atom, bond, and edge tensors are used here to build neural fingerprints of length 200 with a convolution width of 8 (i.e., the size of the atomic neighborhood considered in the convolution process).
```
from chemml.models import NeuralGraphHidden, NeuralGraphOutput
from tensorflow.keras.layers import Input, add
import tensorflow as tf
tf.random.set_seed(42)
conv_width = 8
fp_length = 200
num_molecules = xatoms_train.shape[0]
max_atoms = xatoms_train.shape[1]
max_degree = xbonds_train.shape[2]
num_atom_features = xatoms_train.shape[-1]
num_bond_features = xbonds_train.shape[-1]
# Creating input layers for atoms ,bonds and edge information
atoms0 = Input(name='atom_inputs', shape=(max_atoms, num_atom_features),batch_size=None)
bonds = Input(name='bond_inputs', shape=(max_atoms, max_degree, num_bond_features),batch_size=None)
edges = Input(name='edge_inputs', shape=(max_atoms, max_degree), dtype='int32',batch_size=None)
# Defining the convolved atom feature layers
atoms1 = NeuralGraphHidden(conv_width, activation='relu', use_bias=False)([atoms0, bonds, edges])
atoms2 = NeuralGraphHidden(conv_width, activation='relu', use_bias=False)([atoms1, bonds, edges])
# Defining the outputs of each (convolved) atom feature layer to fingerprint
fp_out0 = NeuralGraphOutput(fp_length, activation='softmax')([atoms0,bonds,edges])
fp_out1 = NeuralGraphOutput(fp_length, activation='softmax')([atoms1,bonds,edges])
fp_out2 = NeuralGraphOutput(fp_length, activation='softmax')([atoms2,bonds,edges])
# Sum outputs to obtain fingerprint
final_fp = add([fp_out0, fp_out1, fp_out2])
print('Neural Fingerprint Shape: ',final_fp.shape)
```
## Building and training the neural network
Here, we build and train a simple feed forward neural network using `tensorflow.keras` and provide our neural fingerprints as features.
```
from tensorflow.keras import Model
from tensorflow.keras.layers import Dense
# Build and compile model for regression.
dense_layer0 = Dense(128, activation='relu', name='dense_layer0',
                     kernel_regularizer=tf.keras.regularizers.l2(0.01))(final_fp)
dense_layer1 = Dense(64, activation='relu', name='dense_layer1',
                     kernel_regularizer=tf.keras.regularizers.l2(0.01))(dense_layer0)
dense_layer2 = Dense(32, activation='relu', name='dense_layer2',
                     kernel_regularizer=tf.keras.regularizers.l2(0.01))(dense_layer1)
# connect the output to the last hidden layer
main_prediction = Dense(1, activation='linear', name='main_prediction')(dense_layer2)
model = Model(inputs=[atoms0, bonds, edges], outputs=[main_prediction])
model.compile(optimizer='adam', loss='mae')
# Show summary
model.summary()
model.fit([xatoms_train, xbonds_train, xedges_train], target_train, epochs=50,
          steps_per_epoch=None, batch_size=None, verbose=False, validation_split=0.1)
```
Predicting the density of the molecules in our test data and evaluating our model based on it.
```
from chemml.utils import regression_metrics
y_pred = model.predict([xatoms_test,xbonds_test,xedges_test])
y_pred = y_scale.inverse_transform(y_pred)
metrics_dict = regression_metrics(target_test, list(y_pred.reshape(-1,)))
mae = metrics_dict['MAE']
r_2 = metrics_dict['r_squared']
print("Mean Absolute Error = {} kg/m^3".format(mae.round(3)))
print("R squared = {}".format(r_2.round(3)))
```
```
#Reference
#https://towardsai.net/p/data-mining/text-mining-in-python-steps-and-examples-78b3f8fd913b
import requests
from bs4 import BeautifulSoup
import pandas as pd
import numpy as np
import nltk
import os
import nltk.corpus
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.probability import FreqDist
# URL = 'https://finance.yahoo.com/quote/FBALX/profile?p=FBALX'
URL = 'https://finance.yahoo.com/quote/AAGPX/profile?p=AAGPX'
# URL = 'https://finance.yahoo.com/quote/FXAIX/profile?p=FXAIX'
# URL = 'https://finance.yahoo.com/quote/FBAKX/profile?p=FBAKX'
page = requests.get(URL)
soup = BeautifulSoup(page.content, 'html.parser')
soup
#Find div where job info starts
# results = soup.find_all('div')
results = soup.find_all('div', class_='Bdbw(1px) Bdbc($seperatorColor) Bdbs(s) H(25px) Pt(10px)')
# results = soup.find
# results1 = results.find('span', class_= 'Mend(5px) Whs(nw)')
# results1 = results.find_all('span')
parameter = []
val = []
ct = 0
for i in results:
teststr=str(i)
parameter.append(teststr[teststr.find('class="Mend(5px) Whs(nw)"')+68:teststr.find('</span></span></span><span class="Fl(end)')])
val.append(teststr[teststr.find('<span class="Fl(end)')+40:teststr.find('</span></div>')])
testdict = dict(zip(parameter,val))
testdict
teststr = str(results[3])
teststr[teststr.find('class="Mend(5px) Whs(nw)"')+68:teststr.find('</span></span></span><span class="Fl(end)')]
# teststr.find('</span></span><span class="Fl(end)')
def tokenize_from_url(URLlist):
    word_tokens = []
    for url in URLlist:
        #Request and get page html content
        page = requests.get(url)
        soup = BeautifulSoup(page.content, 'html.parser')
        #Find div where job info starts
        results = soup.find('div', class_='show-more-less-html__markup')
        results1 = results.find_all('li')
        jobstring = ""
        for job_elem in results1:
            jobstring += " " + str(job_elem)
        #Removing extraneous HTML stuff
        removelst = ['.', ',', '<li>', '</li>', '(', ')', ';', '-', '``', "''", '<', '>', 'br/',
                     '[', ']', 'strong', '/', '&', '\'', ':', '’']
        for item in removelst:
            jobstring = jobstring.replace(item, "")
        #Tokenize the words and accumulate tokens across URLs
        word_tokens.extend(word_tokenize(jobstring))
    return word_tokens
URLlist = [URL]  # URLlist was never defined above; wrap the single URL in a list
word_tokens = tokenize_from_url(URLlist)
#Removing stop words + tranform
stop_words = set(stopwords.words('english'))
filtered_word_tokens = [w for w in word_tokens if not w in stop_words]
#Convert to lowercase
filtered_word_tokens = [x.lower() for x in filtered_word_tokens]
filtered_word_tokens
#Removing extra words
removelst=['experience','ability','skills','insights','including','data','using','working','tools','health','team',
'company','work','you','years','finance','environment','knowledge','develop','drive','strong','we','key']
filtered_word_tokens = [w for w in filtered_word_tokens if not w in removelst]
fdist = FreqDist(filtered_word_tokens)
most_common_fdist = fdist.most_common()
for i in range(0,20):
print(most_common_fdist[i][0])
most_common_fdist
from nltk.util import ngrams
# Function to generate n-grams from sentences.
def extract_ngrams(data, num):
n_grams = ngrams(nltk.word_tokenize(data), num)
return [ ' '.join(grams) for grams in n_grams]
wordstring = ' '.join(filtered_word_tokens)  # extract_ngrams expects a single string
print("2-gram: ", extract_ngrams(wordstring, 2))
#print("3-gram: ", extract_ngrams(filtered_word_tokens, 3))
# Importing Porterstemmer from nltk library
# Checking the stem of the word "waiting"
from nltk.stem import PorterStemmer
pst = PorterStemmer()
pst.stem("waiting")
thisset = {"apple", "banana", "banana"}
```
# Example: Taxi Fare Prediction
https://www.kaggle.com/c/new-york-city-taxi-fare-prediction
# [Objectives]
- Try to follow the example and practice datetime feature handling using the taxi fare prediction competition
# [Key Points]
- Add the day-of-week (day of week) and week-of-year (week of year) features and observe their effect (In[4], Out[4], In[5], Out[5])
- Additionally add the year-cycle and week-cycle features and observe their effect (In[8], Out[8], In[9], Out[9])
```
# All the preparation needed before feature engineering
import pandas as pd
import numpy as np
import datetime
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor
data_path = '../data/'
df = pd.read_csv(data_path + 'taxi_data1.csv')
train_Y = df['fare_amount']
df = df.drop(['fare_amount'] , axis=1)
df.head()
# Decompose the time feature using datetime
df['pickup_datetime'] = df['pickup_datetime'].apply(lambda x: datetime.datetime.strptime(x, '%Y-%m-%d %H:%M:%S UTC'))
df['pickup_year'] = df['pickup_datetime'].apply(lambda x: datetime.datetime.strftime(x, '%Y')).astype('int64')
df['pickup_month'] = df['pickup_datetime'].apply(lambda x: datetime.datetime.strftime(x, '%m')).astype('int64')
df['pickup_day'] = df['pickup_datetime'].apply(lambda x: datetime.datetime.strftime(x, '%d')).astype('int64')
df['pickup_hour'] = df['pickup_datetime'].apply(lambda x: datetime.datetime.strftime(x, '%H')).astype('int64')
df['pickup_minute'] = df['pickup_datetime'].apply(lambda x: datetime.datetime.strftime(x, '%M')).astype('int64')
df['pickup_second'] = df['pickup_datetime'].apply(lambda x: datetime.datetime.strftime(x, '%S')).astype('int64')
df.head()
# Evaluate the results with linear regression / gradient boosting regressor
df_temp = df.drop(['pickup_datetime'] , axis=1)
scaler = MinMaxScaler()
train_X = scaler.fit_transform(df_temp)
Linear = LinearRegression()
print(f'Linear Reg Score : {cross_val_score(Linear, train_X, train_Y, cv=5).mean()}')
GDBT = GradientBoostingRegressor()
print(f'Gradient Boosting Reg Score : {cross_val_score(GDBT, train_X, train_Y, cv=5).mean()}')
```
# Assignment 1
* Following the example, try adding the day-of-week (day of week) and week-of-year (week of year) features.
Is the result better or worse than using only the datetime decomposition?
```
# Add the day-of-week and week-of-year features
"""
Your Code Here
"""
df['day of week'] = df['pickup_datetime'].apply(lambda x: datetime.datetime.strftime(x,'%w')).astype('int64')
df['week of year'] = df['pickup_datetime'].apply(lambda x: datetime.datetime.strftime(x,'%U')).astype('int64')
df.head()
# Evaluate the results with linear regression / gradient boosting regressor
df_temp = df.drop(['pickup_datetime'] , axis=1)
train_X = scaler.fit_transform(df_temp)
print(f'Linear Reg Score : {cross_val_score(Linear, train_X, train_Y, cv=5).mean()}')
print(f'Gradient Boosting Reg Score : {cross_val_score(GDBT, train_X, train_Y, cv=5).mean()}')
# Add the "day cycle" feature (see the lecture notes on "cyclical features")
import math
df['day_cycle'] = df['pickup_hour']/12 + df['pickup_minute']/720 + df['pickup_second']/43200
df['day_cycle'] = df['day_cycle'].map(lambda x:math.sin(x*math.pi))
df.head()
# Evaluate the results with linear regression / gradient boosting regressor
df_temp = df.drop(['pickup_datetime'] , axis=1)
train_X = scaler.fit_transform(df_temp)
print(f'Linear Reg Score : {cross_val_score(Linear, train_X, train_Y, cv=5).mean()}')
print(f'Gradient Boosting Reg Score : {cross_val_score(GDBT, train_X, train_Y, cv=5).mean()}')
```
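A side note on the single-sine day-cycle encoding above (this sketch is illustrative and not part of the original assignment): a lone sine maps two different times of day to the same value (03:00 and 09:00 both encode to roughly 0.707), so a common alternative is to encode each cyclical value as a (sin, cos) pair:

```python
import math

def cyclic_encode(value, period):
    # Map a cyclical value onto the unit circle so that distances
    # wrap correctly around the period boundary.
    angle = 2 * math.pi * value / period
    return math.sin(angle), math.cos(angle)

# 23:00 and 01:00 are 2 hours apart on the circle, not 22
s23, c23 = cyclic_encode(23, 24)
s01, c01 = cyclic_encode(1, 24)
print(math.hypot(s23 - s01, c23 - c01))  # small distance, about 0.518
```

With both coordinates, no two distinct times collide, at the cost of one extra column per cyclical feature.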
# Assignment 2
* Following the example's day-cycle feature, try adding the year-cycle and week-cycle features from the lecture notes (or an encoding of your own design).
Is the result better or worse than the example's?
```
# Add the "year cycle" and "week cycle" features
"""
Your Code Here
"""
df['year_cycle'] = df['pickup_month']/6 + df['pickup_day']/180
df['year_cycle'] = df['year_cycle'].map(lambda x: math.cos(x*math.pi))
df['week_cycle'] = df['day of week']/3.5 + df['pickup_hour']/84
df['week_cycle'] = df['week_cycle'].map(lambda x: math.sin(x*math.pi))
df.head()
# Evaluate the results with linear regression / gradient boosting regressor
df_temp = df.drop(['pickup_datetime'] , axis=1)
train_X = scaler.fit_transform(df_temp)
print(f'Linear Reg Score : {cross_val_score(Linear, train_X, train_Y, cv=5).mean()}')
print(f'Gradient Boosting Reg Score : {cross_val_score(GDBT, train_X, train_Y, cv=5).mean()}')
```
# CNN - Classification + bounding boxes
```
# !pip install git+https://github.com/fastai/fastai.git
# !sudo apt update && sudo apt install -y libsm6 libxext6 libxrender-dev
%matplotlib inline
%reload_ext autoreload
%autoreload 2
from pathlib import Path
import fastai
from fastai.conv_learner import ConvLearner, ConvnetBuilder
from fastai.core import V
from fastai.transforms import RandomRotate, RandomLighting, tfms_from_model, CropType, TfmType
from fastai.dataset import ImageClassifierData, get_cv_idxs
from torchvision.models.resnet import resnet101
from fastai.layers import Flatten
from fastai.metrics import accuracy
from matplotlib import patches, patheffects
from torch import nn
from torch import optim
from torch.nn import functional as F
import pandas as pd
from matplotlib import pyplot as plt
import numpy as np
from torch.utils.data import Dataset
from fastai.core import to_np
import torch
torch.cuda.is_available()
def bb_hw(bb):
ymin, xmin, ymax, xmax = bb
return np.array([xmin, ymin, xmax - xmin + 1, ymax - ymin + 1])
def draw_outline(o, lw):
o.set_path_effects([patheffects.Stroke(
linewidth=lw, foreground='black'), patheffects.Normal()])
def draw_rect(ax, b, color='white'):
patch = ax.add_patch(patches.Rectangle(b[:2], *b[-2:], fill=False, edgecolor=color, lw=2))
draw_outline(patch, 4)
def draw_text(ax, xy, txt, sz=14):
text = ax.text(*xy, txt,
verticalalignment='top', color='white', fontsize=sz, weight='bold')
draw_outline(text, 1)
def show_img(im, figsize=None, ax=None):
if not ax: fig,ax = plt.subplots(figsize=figsize)
ax.imshow(im)
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
return ax
arch = resnet101
size = 224
batch_size = 32
PATH = Path('../data')
df_class = pd.read_csv(PATH/'train_classes.csv')
df_bb = pd.read_csv(PATH/'train_bbs.csv')
df_class.head()
df_bb.head()
val_idxs = get_cv_idxs(len(df_class))
augs = [RandomRotate(30, tfm_y=TfmType.COORD), RandomLighting(0.1,0.1, tfm_y=TfmType.COORD)]
class_model_data = ImageClassifierData.from_csv(PATH, 'yolo', PATH/'train_classes.csv', tfms=tfms_from_model(arch, size))
class ConcatLabelledDataset(Dataset):
def __init__(self, ds, y2):
self.ds = ds
self.y2 = y2
def __len__(self):
return len(self.ds)
def __getitem__(self, i):
x ,y = self.ds[i]
return (x, (y,self.y2[i]))
trn_ds2 = ConcatLabelledDataset(bb_model_data.trn_ds, class_model_data.trn_y)
val_ds2 = ConcatLabelledDataset(bb_model_data.val_ds, class_model_data.val_y)
val_ds2[0][1]
bb_model_data.trn_dl.dataset = trn_ds2
bb_model_data.val_dl.dataset = val_ds2
iter_val = iter(bb_model_data.val_dl)
x, y = next(iter_val)
idx = 3
ima = bb_model_data.val_ds.ds.denorm(to_np(x))[idx]
b = bb_hw(to_np(y[0][idx]))
ax = show_img(ima)
draw_rect(ax, b)
draw_text(ax, b[:2], class_model_data.classes[y[1][idx]])
x, y = next(iter_val)
idx = 4
ima = bb_model_data.val_ds.ds.denorm(to_np(x))[idx]
b = bb_hw(to_np(y[0][idx]))
ax = show_img(ima)
draw_rect(ax, b)
draw_text(ax, b[:2], class_model_data.classes[y[1][idx]])
2048*7*7  # flattened feature size feeding the custom head's nn.Linear below
model_head = nn.Sequential(
Flatten(),
nn.ReLU(),
nn.Dropout(0.5),
nn.Linear(100352, 256),
nn.ReLU(),
nn.BatchNorm1d(256),
nn.Dropout(0.5),
nn.Linear(256, 4 + len(class_model_data.classes))
)
models = ConvnetBuilder(arch, 0, 0, 0, custom_head=model_head)
learn = ConvLearner(bb_model_data, models)
learn.opt_fn = optim.Adam
def detn_loss(input, target):
# Picked a multiplier that makes cross ent and l1 loss equal
cross_mult = 35
bb_t, c_t = target
bb_i, c_i = input[:, :4], input[:, 4:]
bb_i = F.sigmoid(bb_i) * size
return F.l1_loss(bb_i, bb_t) + F.cross_entropy(c_i, c_t) * cross_mult
def detn_l1(input, target):
bb_t,_ = target
bb_i = input[:, :4]
bb_i = F.sigmoid(bb_i) * 224
return F.l1_loss(V(bb_i),V(bb_t)).data
def detn_acc(input, target):
_,c_t = target
c_i = input[:, 4:]
return accuracy(c_i, c_t)
learn.crit = detn_loss
learn.metrics = [detn_acc, detn_l1]
learn.lr_find(1e-5, 100)
learn.sched.plot()
lr = 0.03
learn.fit(lr, 2, cycle_len=1, cycle_mult=2)
learn.fit(lr, 2, cycle_len=1, cycle_mult=2)
learn.freeze_to(-2)
lrs = np.array([lr/100, lr/10, lr])
lrf=learn.lr_find(lrs/1000)
learn.sched.plot(1)
lr = 0.0003
lrs = np.array([lr / 100, lr / 10, lr])
learn.fit(lrs, 2, cycle_len=1, cycle_mult=2)
learn.freeze_to(-3)
learn.fit(lrs, 1, cycle_len=2)
learn.fit(lrs, 1, cycle_len=2)
```
## Evaluate
```
from fastai.core import VV
from scipy.special import expit
y = learn.predict()
x,_ = next(iter(bb_model_data.val_dl))
fig, axes = plt.subplots(3, 4, figsize=(12, 8))
for i,ax in enumerate(axes.flat):
ima = bb_model_data.val_ds.ds.denorm(to_np(x))[i]
print(y[i][:4])
bb = expit(y[i][:4])*224
b = bb_hw(bb)
c = np.argmax(y[i][4:])
ax = show_img(ima, ax=ax)
draw_rect(ax, b)
draw_text(ax, b[:2], class_model_data.classes[c])
plt.tight_layout()
learn.save('resnet101-val-loss-29.914882')
torch.save(learn.model, PATH/'models'/'torch.resnet101-val-loss-29.914882')
torch.save(learn.model.state_dict(), PATH/'models'/'torch.resnet101-val-loss-29.914882.sd')
```
# Numerical chaos in the logistic map and floating point error
One of the most classic examples of chaotic behavior in non-linear
systems is the iteration of the logistic map
$$
x_{n+1} = f(x_n) = r x_n (1-x_n)
$$
which for $x \in (0,1)$ and $r \in (0,4)$ can produce very surprising
behavior. We'll revisit this system later with some more sophisticated
tools, but for now we simply want to use it to illustrate numerical
roundoff error.
Computers, when performing almost any floating point operation, must by
necessity throw away information from the digits that can't be stored at
any finite precision. This has a simple implication that is nonetheless
often overlooked: algebraically equivalent forms of the same expression
aren't necessarily always numerically equivalent. A simple illustration
shows the problem very easily:
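In standard double-precision arithmetic (this snippet is illustrative, not from the original notebook):

```python
# Algebraically these identities hold, but not in binary floating point:
a, b, c = 0.1, 0.2, 0.3
print(0.1 + 0.2 == 0.3)            # False: 0.1 + 0.2 is 0.30000000000000004
print((a + b) + c == a + (b + c))  # False: addition is not associative here
```

Each rounding step discards digits, so the order in which operations are performed changes the result, and under iteration these tiny discrepancies grow.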
For this exercise, try to find three different ways to express $f(x)$
in the logistic map and compute the evolution of the same initial condition
after a few hundred iterations. For this problem, it will be extremely
useful to look at your results graphically; simply build lists of
numbers and call matplotlib's `plot` function to look at how each trace
evolves.
**Note:** this was inspired by this [very nice presentation](https://code.google.com/p/python-turtle-demo/downloads/detail?name=SevenWaysToUseTurtle-PyCon2009.pdf) about Python's turtle module, which includes some great numerical examples.
The following snippet can be used as a starting point, and it includes
some hints on what values of $r$ to look at:
```
"""Illustrating error propagation by iterating the logistic map.
f(x) = r*x*(1-x)
Write the above function in three algebraically equivalent forms, and study
their behavior under iteration. See for what values of r all forms evolve
identically and for which ones they don't.
"""
import matplotlib.pyplot as plt
# Interesting values to try for r:
# [1.9, 2.9, 3.1, 3.5, 3.9]
r = 3.9 # global default
x0 = 0.6 # any number in [0,1] will do here
num_points = 100 # total number of points to compute
drop_points = 0 # don't display the first drop_points
```
The three algebraically, but not numerically, equivalent forms of $f(x)$:
```
def f1(x): return r*x*(1-x)
def f2(x): return r*x - r*x**2
def f3(x): return r*(x-x**2)
```
Now, we compute and plot results with these three forms of $f(x)$:
```
%matplotlib inline
fp = (r-1.0)/r
x1 = x2 = x3 = x0
data = []
data.append([x1,x2,x3])
for i in range(num_points):
x1 = f1(x1)
x2 = f2(x2)
x3 = f3(x3)
data.append([x1,x2,x3])
# Display the results
plt.figure()
plt.title('r=%1.1f' % r)
plt.axhline(fp, color='black')
plt.plot(data[drop_points:], '-o', markersize=4);
```
# Tensorflow Clean Label Backdoor MNIST demo for a Dioptra deployment
This demo will cover the adversarial clean label backdoor attack on an MNIST-LeNet model.
The following two sections cover experiment setup and are similar across all demos.
To access demo results in MlFlow, please follow the general experiment setup steps outlined in `basic-mlflow-demo`.
## Setup: Experiment Name and MNIST Dataset
Here we will import the necessary Python modules and ensure the proper environment variables are set so that all the code blocks will work as expected.
**Important: Users will need to verify or update the following parameters:**
- Ensure that the `USERNAME` parameter is set to your own name.
- Ensure that the `DATASET_DIR` parameter is set to the location of the MNIST dataset directory. Currently set to `/nfs/data/Mnist` as the default location.
- (Optional) Set the `EXPERIMENT_NAME` parameter to your own preferred experiment name.
Other parameters can be modified to alter the RESTful API and MLFlow tracking addresses.
```
# Import packages from the Python standard library
import os
import pprint
import time
import warnings
from pathlib import Path
from typing import Tuple
# Filter out warning messages
warnings.filterwarnings("ignore")
# Please enter custom username here.
USERNAME = "howard"
# Ensure that the dataset location is properly set here.
DATASET_DIR = "/nfs/data/Mnist"
# Experiment name (note the username_ prefix convention)
EXPERIMENT_NAME = f"{USERNAME}_mnist_clean_label_backdoor"
# Address for connecting the docker container to exposed ports on the host device
HOST_DOCKER_INTERNAL = "host.docker.internal"
# HOST_DOCKER_INTERNAL = "172.17.0.1"
# Testbed API ports
RESTAPI_PORT = "30080"
MLFLOW_TRACKING_PORT = "35000"
# Default address for accessing the RESTful API service
RESTAPI_ADDRESS = (
f"http://{HOST_DOCKER_INTERNAL}:{RESTAPI_PORT}"
if os.getenv("IS_JUPYTER_SERVICE")
else f"http://localhost:{RESTAPI_PORT}"
)
# Override the AI_RESTAPI_URI variable, used to connect to RESTful API service
os.environ["AI_RESTAPI_URI"] = RESTAPI_ADDRESS
# Default address for accessing the MLFlow Tracking server
MLFLOW_TRACKING_URI = (
f"http://{HOST_DOCKER_INTERNAL}:{MLFLOW_TRACKING_PORT}"
if os.getenv("IS_JUPYTER_SERVICE")
else f"http://localhost:{MLFLOW_TRACKING_PORT}"
)
# Path to custom task plugins archives
CUSTOM_PLUGINS_POISONING_TAR_GZ = Path("custom-plugins-poisoning.tar.gz")
# Override the MLFLOW_TRACKING_URI variable, used to connect to MLFlow Tracking service
os.environ["MLFLOW_TRACKING_URI"] = MLFLOW_TRACKING_URI
# Base API address
RESTAPI_API_BASE = f"{RESTAPI_ADDRESS}/api"
# Path to workflows archive
WORKFLOWS_TAR_GZ = Path("workflows.tar.gz")
# Import third-party Python packages
import numpy as np
import requests
from mlflow.tracking import MlflowClient
# Import utils.py file
import utils
# Create random number generator
rng = np.random.default_rng(54399264723942495723666216079516778448)
```
## Submit and run jobs
The entrypoints that we will be running in this example are implemented in the Python source files under `src/` and the `MLproject` file.
To run these entrypoints within the testbed architecture, we need to package those files up into an archive and submit it to the Testbed RESTful API to create a new job.
For convenience, the `Makefile` provides a rule for creating the archive file for this example, just run `make workflows`,
```
%%bash
# Create the workflows.tar.gz file
make workflows
```
To connect with the endpoint, we will use a client class defined in the `utils.py` file that is able to connect with the Testbed RESTful API using the HTTP protocol.
We connect using the client below, which uses the environment variable `AI_RESTAPI_URI` to figure out how to connect to the Testbed RESTful API,
```
restapi_client = utils.SecuringAIClient()
```
We need to register an experiment under which to collect our job runs.
The code below checks if the relevant experiment exists.
If it does, then it just returns info about the experiment, if it doesn't, it then registers the new experiment.
```
response_experiment = restapi_client.get_experiment_by_name(name=EXPERIMENT_NAME)
if response_experiment is None or "Not Found" in response_experiment.get("message", []):
response_experiment = restapi_client.register_experiment(name=EXPERIMENT_NAME)
response_experiment
```
We should also check which queues are available for running our jobs to make sure that the resources that we need are available.
The code below queries the Testbed API and returns a list of active queues.
```
restapi_client.list_queues()
```
This example also makes use of the `custom_poisoning_plugins` package stored locally under the `task-plugins/securingai_custom/custom_poisoning_plugins` directory.
To register these custom task plugins, we first need to package them up into an archive.
For convenience, the `Makefile` provides a rule for creating the custom task plugins archive file, just run `make custom-plugins`,
```
%%bash
# Create the custom-plugins-poisoning.tar.gz file
make custom-plugins
```
Now that the custom task plugin package is packaged into an archive file, next we register it by uploading the file to the REST API.
Note that we need to provide the name to use for custom task plugin package, this name must be unique under the custom task plugins namespace.
For a full list of the custom task plugins, use `restapi_client.list_custom_task_plugins()`.
```
restapi_client.delete_custom_task_plugin(name="custom_poisoning_plugins")
response_custom_plugins = restapi_client.get_custom_task_plugin(name="custom_poisoning_plugins")
if response_custom_plugins is None or "Not Found" in response_custom_plugins.get("message", []):
response_custom_plugins = restapi_client.upload_custom_plugin_package(
custom_plugin_name="custom_poisoning_plugins",
custom_plugin_file=CUSTOM_PLUGINS_POISONING_TAR_GZ,
)
response_custom_plugins
```
If at any point you need to update one or more files within the `custom_poisoning_plugins` package, you will need to unregister/delete the custom task plugin first using the REST API.
This can be done as follows,
```python
# Delete the 'custom_poisoning_plugins' package
restapi_client.delete_custom_task_plugin(name="custom_poisoning_plugins")
```
The following helper functions will recheck the job responses until the job is completed or a run ID is available.
The run ID is needed to link dependencies between jobs.
```
def mlflow_run_id_is_not_known(job_response):
return job_response["mlflowRunId"] is None and job_response["status"] not in [
"failed",
"finished",
]
def get_run_id(job_response):
while mlflow_run_id_is_not_known(job_response):
time.sleep(1)
job_response = restapi_client.get_job_by_id(job_response["jobId"])
return job_response
def wait_until_finished(job_response):
# First make sure job has started.
job_response = get_run_id(job_response)
# Next re-check job until it has stopped running.
while (job_response["status"] not in ["failed", "finished"]):
time.sleep(1)
job_response = restapi_client.get_job_by_id(job_response["jobId"])
return job_response
# Helper function for viewing MLflow results
def get_mlflow_results(job_response):
mlflow_client = MlflowClient()
job_response = wait_until_finished(job_response)
if(job_response['status']=="failed"):
return {}
run = mlflow_client.get_run(job_response["mlflowRunId"])
while(len(run.data.metrics) == 0):
time.sleep(1)
run = mlflow_client.get_run(job_response["mlflowRunId"])
return run
def print_mlflow_results(response):
results = get_mlflow_results(response)
pprint.pprint(results.data.metrics)
```
## MNIST Training: Baseline Model
Next, we need to train our baseline model that will serve as a reference point for the effectiveness of our attacks.
We will be submitting our job to the `"tensorflow_gpu"` queue.
```
response_le_net_train = restapi_client.submit_job(
workflows_file=WORKFLOWS_TAR_GZ,
experiment_name=EXPERIMENT_NAME,
entry_point="train",
entry_point_kwargs=" ".join([
"-P batch_size=256",
f"-P register_model_name={EXPERIMENT_NAME}_le_net",
"-P model_architecture=le_net",
"-P epochs=30",
f"-P data_dir_training={DATASET_DIR}/training",
f"-P data_dir_testing={DATASET_DIR}/testing",
]),
queue="tensorflow_gpu",
timeout="1h",
)
print("Training job for LeNet-5 neural network submitted")
print("")
pprint.pprint(response_le_net_train)
response_le_net_train = get_run_id(response_le_net_train)
print_mlflow_results(response_le_net_train)
# Train a special model for making poisons
response_le_net_train_robust = restapi_client.submit_job(
workflows_file=WORKFLOWS_TAR_GZ,
experiment_name=EXPERIMENT_NAME,
entry_point="train_madry_pgd",
entry_point_kwargs=" ".join([
"-P batch_size=256",
f"-P register_model_name={EXPERIMENT_NAME}_robust_le_net",
"-P model_architecture=le_net",
"-P epochs=10",
f"-P data_dir_training={DATASET_DIR}/training",
f"-P data_dir_testing={DATASET_DIR}/testing",
]),
queue="tensorflow_gpu",
timeout="1h",
)
print("Training job for LeNet-5 neural network submitted")
print("")
pprint.pprint(response_le_net_train_robust)
response_le_net_train_robust = get_run_id(response_le_net_train_robust)
print_mlflow_results(response_le_net_train_robust)
```
### Generating Poisoned Images
Now we will create our set of poisoned images.
Start by submitting the poison generation job below.
```
## Create poisoned test images.
response_gen_poison_le_net_test = restapi_client.submit_job(
workflows_file=WORKFLOWS_TAR_GZ,
experiment_name=EXPERIMENT_NAME,
entry_point="gen_poison_data",
entry_point_kwargs=" ".join(
[
f"-P data_dir={DATASET_DIR}/testing",
"-P batch_size=100",
"-P target_class=1",
"-P poison_fraction=1",
"-P label_type=test"
]
),
queue="tensorflow_gpu",
depends_on=response_le_net_train["jobId"],
)
print("Backdoor poison attack (LeNet-5 architecture) job submitted")
print("")
pprint.pprint(response_gen_poison_le_net_test)
print("")
response_gen_poison_le_net_test = get_run_id(response_gen_poison_le_net_test)
## Create poisoned training images (clean_label)
response_gen_poison_le_net_train_clean = restapi_client.submit_job(
workflows_file=WORKFLOWS_TAR_GZ,
experiment_name=EXPERIMENT_NAME,
entry_point="gen_poison_clean_data",
entry_point_kwargs=" ".join(
[
f"-P model_name={EXPERIMENT_NAME}_robust_le_net",
"-P model_version=none",
f"-P data_dir={DATASET_DIR}/testing",
"-P batch_size=200",
"-P target_index=1",
"-P poison_fraction=0.33",
"-P label_type=train"
]
),
queue="tensorflow_gpu",
depends_on=response_le_net_train["jobId"],
)
print("Backdoor poison attack (LeNet-5 architecture) job submitted")
print("")
pprint.pprint(response_gen_poison_le_net_train_clean)
print("")
response_gen_poison_le_net_train_clean = get_run_id(response_gen_poison_le_net_train_clean)
```
## MNIST Training: Poisoned Model using a Clean Label technique
Next we will train our poisoned model using a clean label technique.
```
# Now train a new model on the poisoned clean label images
response_le_net_train_backdoor_model_clean = restapi_client.submit_job(
workflows_file=WORKFLOWS_TAR_GZ,
experiment_name=EXPERIMENT_NAME,
entry_point="train",
entry_point_kwargs=" ".join([
"-P batch_size=256",
f"-P register_model_name={EXPERIMENT_NAME}_data_poison_le_net",
"-P model_architecture=le_net",
"-P epochs=30",
f"-P data_dir_training={DATASET_DIR}/training",
f"-P data_dir_testing={DATASET_DIR}/testing",
"-P load_dataset_from_mlruns=true",
f"-P dataset_run_id_training={response_gen_poison_le_net_train_clean['mlflowRunId']}",
"-P adv_tar_name=adversarial_poison.tar.gz",
"-P adv_data_dir=adv_poison_data"
]),
depends_on=response_gen_poison_le_net_train_clean["jobId"],
queue="tensorflow_gpu",
timeout="1h",
)
print("Training job for LeNet-5 neural network submitted")
print("")
pprint.pprint(response_le_net_train_backdoor_model_clean)
response_le_net_train_backdoor_model_clean = get_run_id(response_le_net_train_backdoor_model_clean)
print_mlflow_results(response_le_net_train_backdoor_model_clean)
```
## Model Evaluation: Poisoned vs Regular Models on Backdoor-Poisoned Images.
Below we will compare the results of the regular model vs poisoned-backdoor model on backdoor test images.
```
# Inference: Model trained on poisoned backdoor attack
response_infer_pos_model_clean = restapi_client.submit_job(
workflows_file=WORKFLOWS_TAR_GZ,
experiment_name=EXPERIMENT_NAME,
entry_point="infer",
entry_point_kwargs=" ".join(
[
f"-P run_id={response_gen_poison_le_net_test['mlflowRunId']}",
f"-P model_name={EXPERIMENT_NAME}_data_poison_le_net",
"-P model_version=none",
"-P batch_size=512",
"-P adv_tar_name=adversarial_poison.tar.gz",
"-P adv_data_dir=adv_poison_data",
]
),
queue="tensorflow_gpu",
depends_on=response_le_net_train_backdoor_model_clean["jobId"],
)
print("Inference job for LeNet-5 neural network submitted")
print("")
pprint.pprint(response_infer_pos_model_clean)
response_infer_pos_model_clean = get_run_id(response_infer_pos_model_clean)
print_mlflow_results(response_infer_pos_model_clean)
# Inference: Regular model on poisoned test images
response_infer_reg_model = restapi_client.submit_job(
workflows_file=WORKFLOWS_TAR_GZ,
experiment_name=EXPERIMENT_NAME,
entry_point="infer",
entry_point_kwargs=" ".join(
[
f"-P run_id={response_gen_poison_le_net_test['mlflowRunId']}",
f"-P model_name={EXPERIMENT_NAME}_le_net",
"-P model_version=none",
"-P batch_size=512",
"-P adv_tar_name=adversarial_poison.tar.gz",
"-P adv_data_dir=adv_poison_data",
]
),
queue="tensorflow_gpu",
depends_on=response_le_net_train["jobId"],
)
print("Inference job for LeNet-5 neural network submitted")
print("")
pprint.pprint(response_infer_reg_model)
print_mlflow_results(response_infer_reg_model)
```
## Defending against the clean label poisoning attack
Now we will explore available defenses on the adversarial backdoor poisoning attack.
The following three jobs will run a selected defense (spatial smoothing, gaussian augmentation, or jpeg compression) and evaluate the defense on the baseline and backdoor trained models.
- The first job uses the selected defense entrypoint to apply a preprocessing defense over the poisoned test images.
- The second job runs the defended images against the poisoned backdoor model.
- The final job runs the defended images against the baseline model.
Ideally the defense will not impact the baseline model accuracy, while improving the backdoor model accuracy scores.
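As a rough sketch of what a preprocessing defense like spatial smoothing does (this is an illustration, not the testbed's actual implementation), a median filter replaces each pixel with the median of its neighborhood, which tends to erase small high-frequency backdoor triggers:

```python
import numpy as np

def spatial_smoothing(images, window=3):
    # Median-filter each image: every pixel becomes the median of its
    # window x window neighborhood (edge-padded).
    pad = window // 2
    out = np.empty_like(images)
    for idx, img in enumerate(images):
        padded = np.pad(img, pad, mode="edge")
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                out[idx, i, j] = np.median(padded[i:i + window, j:j + window])
    return out

# A lone one-pixel "trigger" on a blank image is wiped out entirely
batch = np.zeros((1, 5, 5), dtype=np.float32)
batch[0, 2, 2] = 1.0
print(spatial_smoothing(batch)[0, 2, 2])  # 0.0
```

Because legitimate MNIST strokes span many pixels, the filter degrades clean accuracy only mildly while removing isolated trigger patterns.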
```
defenses = ["gaussian_augmentation", "spatial_smoothing", "jpeg_compression"]
defense = defenses[2]
response_poison_def = restapi_client.submit_job(
workflows_file=WORKFLOWS_TAR_GZ,
experiment_name=EXPERIMENT_NAME,
entry_point=defense,
entry_point_kwargs=" ".join(
[
f"-P data_dir={DATASET_DIR}/testing",
"-P batch_size=20",
"-P load_dataset_from_mlruns=true",
f"-P dataset_run_id={response_gen_poison_le_net_test['mlflowRunId']}",
"-P dataset_tar_name=adversarial_poison.tar.gz",
"-P dataset_name=adv_poison_data",
]
),
queue="tensorflow_gpu",
depends_on=response_gen_poison_le_net_test["jobId"],
)
print(f"Poisoning {defense} defense (LeNet-5 architecture) job submitted")
print("")
pprint.pprint(response_poison_def)
print("")
response_poison_def = get_run_id(response_poison_def)
# Inference: Poisoned model on poisoned test images.
response_infer_pos_model = restapi_client.submit_job(
workflows_file=WORKFLOWS_TAR_GZ,
experiment_name=EXPERIMENT_NAME,
entry_point="infer",
entry_point_kwargs=" ".join(
[
f"-P run_id={response_poison_def['mlflowRunId']}",
f"-P model_name={EXPERIMENT_NAME}_data_poison_le_net",
f"-P model_version=none",
"-P batch_size=512",
f"-P adv_tar_name={defense}_dataset.tar.gz",
"-P adv_data_dir=adv_testing",
]
),
queue="tensorflow_gpu",
depends_on=response_poison_def["jobId"],
)
print("Inference job for LeNet-5 neural network submitted")
print("")
pprint.pprint(response_infer_pos_model)
print_mlflow_results(response_infer_pos_model)
# Inference: Regular model on poisoned test images.
response_infer_reg_model = restapi_client.submit_job(
workflows_file=WORKFLOWS_TAR_GZ,
experiment_name=EXPERIMENT_NAME,
entry_point="infer",
entry_point_kwargs=" ".join(
[
f"-P run_id={response_poison_def['mlflowRunId']}",
f"-P model_name={EXPERIMENT_NAME}_le_net",
f"-P model_version=none",
"-P batch_size=512",
f"-P adv_tar_name={defense}_dataset.tar.gz",
"-P adv_data_dir=adv_testing",
]
),
queue="tensorflow_gpu",
depends_on=response_poison_def["jobId"],
)
print("Inference job for LeNet-5 neural network submitted")
print("")
pprint.pprint(response_infer_reg_model)
print_mlflow_results(response_infer_reg_model)
```
<a id='querying_cell'></a>
## Querying the MLFlow Tracking Service
Currently the lab API can only be used to register experiments and start jobs, so if users wish to extract their results programmatically, they can use the `MlflowClient()` class from the `mlflow` Python package to connect and query their results.
Since we captured the run ids generated by MLFlow, we can easily retrieve the data logged about one of our jobs and inspect the results.
To start the client, we simply need to run,
```
mlflow_client = MlflowClient()
```
The client uses the environment variable `MLFLOW_TRACKING_URI` to figure out how to connect to the MLFlow Tracking Service, which we configured near the top of this notebook.
To query the results of one of our runs, we just need to pass the run id to the client's `get_run()` method.
As an example, let's query the run results for the patch attack applied to the LeNet-5 architecture,
```
run_le_net = mlflow_client.get_run(response_le_net_train["mlflowRunId"])
```
If the request completed successfully, we should now be able to query data collected during the run.
For example, to review the collected metrics, we just use,
```
pprint.pprint(run_le_net.data.metrics)
```
To review the run's parameters, we use,
```
pprint.pprint(run_le_net.data.params)
```
To review the run's tags, we use,
```
pprint.pprint(run_le_net.data.tags)
```
There are many things you can query using the MLFlow client.
[The MLFlow documentation gives a full overview of the methods that are available](https://www.mlflow.org/docs/latest/python_api/mlflow.tracking.html#mlflow.tracking.MlflowClient).
```
from google.colab import drive
drive.mount('/content/drive')
!cp -r '/content/drive/My Drive/Colab Notebooks/[Kaggle] Understanding Clouds from Satellite Images/Scripts/.' .
!unzip -q '/content/drive/My Drive/Colab Notebooks/[Kaggle] Understanding Clouds from Satellite Images/Data/train_images320x480.zip'
```
### Dependencies
```
from utillity_script_cloud_segmentation import *
from utillity_script_lr_schedulers2 import *
seed = 0
seed_everything(seed)
warnings.filterwarnings("ignore")
base_path = '/content/drive/My Drive/Colab Notebooks/[Kaggle] Understanding Clouds from Satellite Images/'
data_path = base_path + 'Data/'
model_base_path = base_path + 'Models/files/classification/'
train_path = data_path + 'train.csv'
hold_out_set_path = data_path + 'hold-out.csv'
train_images_dest_path = 'train_images/'
```
### Load data
```
train = pd.read_csv(train_path)
hold_out_set = pd.read_csv(hold_out_set_path)
X_train = hold_out_set[hold_out_set['set'] == 'train']
X_val = hold_out_set[hold_out_set['set'] == 'validation']
print('Complete set samples:', len(train))
print('Train samples: ', len(X_train))
print('Validation samples: ', len(X_val))
# Preprocess data
train['image'] = train['Image_Label'].apply(lambda x: x.split('_')[0])
label_columns=['Fish', 'Flower', 'Gravel', 'Sugar']
for label in label_columns:
X_train[label].replace({0: 1, 1: 0}, inplace=True)
X_val[label].replace({0: 1, 1: 0}, inplace=True)
display(X_train.head())
```
# Model parameters
```
BATCH_SIZE = 32
WARMUP_EPOCHS = 3
WARMUP_LEARNING_RATE = 1e-3
EPOCHS = 20
LEARNING_RATE = 10**(-1.5)
HEIGHT = 224
WIDTH = 224
CHANNELS = 3
N_CLASSES = 4
ES_PATIENCE = 10
STEP_SIZE_TRAIN = len(X_train)//BATCH_SIZE
STEP_SIZE_VALID = len(X_val)//BATCH_SIZE
model_name = '28-EfficientNetB3_%sx%s' % (HEIGHT, WIDTH)
model_path = model_base_path + '%s.h5' % (model_name)
```
### Data generator
```
datagen=ImageDataGenerator(rescale=1./255.,
vertical_flip=True,
horizontal_flip=True,
zoom_range=[1, 1.1],
shear_range=45.0,
rotation_range=360,
width_shift_range=0.1,
height_shift_range=0.1,
brightness_range=(0.9, 1),
fill_mode='constant',
cval=0.)
test_datagen=ImageDataGenerator(rescale=1./255.)
train_generator=datagen.flow_from_dataframe(
dataframe=X_train,
directory=train_images_dest_path,
x_col="image",
y_col=label_columns,
target_size=(HEIGHT, WIDTH),
batch_size=BATCH_SIZE,
class_mode="other",
shuffle=True,
seed=seed)
valid_generator=test_datagen.flow_from_dataframe(
dataframe=X_val,
directory=train_images_dest_path,
x_col="image",
y_col=label_columns,
target_size=(HEIGHT, WIDTH),
batch_size=BATCH_SIZE,
class_mode="other",
shuffle=True,
seed=seed)
```
# Model
```
def create_model(input_shape, N_CLASSES):
input_tensor = Input(shape=input_shape)
base_model = efn.EfficientNetB3(weights='imagenet',
include_top=False,
input_tensor=input_tensor,
pooling='avg')
x = base_model.output
final_output = Dense(N_CLASSES, activation='sigmoid')(x)
model = Model(input_tensor, final_output)
return model
```
## Warmup top layers
```
model = create_model((None, None, CHANNELS), N_CLASSES)
metric_list = ['accuracy']
freeze_segmentation_model(model)
model.layers[-1].trainable = True
optimizer = optimizers.SGD(lr=WARMUP_LEARNING_RATE, momentum=0.9, nesterov=True)
model.compile(optimizer=optimizer, loss='binary_crossentropy', metrics=metric_list)
warmup_history = model.fit_generator(generator=train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=valid_generator,
validation_steps=STEP_SIZE_VALID,
epochs=WARMUP_EPOCHS,
verbose=1).history
for layer in model.layers:
layer.trainable = True
warm_weights = model.get_weights()
```
# Learning rate finder
```
#@title
lr_finder = LRFinder(num_samples=len(X_train), batch_size=BATCH_SIZE, minimum_lr=1e-6, maximum_lr=10, verbose=0)
optimizer = optimizers.SGD(lr=WARMUP_LEARNING_RATE, momentum=0.9, nesterov=True)
model.compile(optimizer=optimizer, loss='binary_crossentropy')
history = model.fit_generator(generator=train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
epochs=1,
callbacks=[lr_finder])
plt.rcParams.update({'font.size': 16})
plt.figure(figsize=(24, 8))
plt.axvline(x=np.log10(LEARNING_RATE), color='red')
lr_finder.plot_schedule(clip_beginning=5)
```
## Fine-tune all layers
```
model.set_weights(warm_weights)
checkpoint = ModelCheckpoint(model_path, monitor='val_loss', mode='min', save_best_only=True)
es = EarlyStopping(monitor='val_loss', mode='min', patience=ES_PATIENCE, restore_best_weights=True, verbose=1)
oneCycleLR = OneCycleLR(max_lr=LEARNING_RATE, maximum_momentum=0.95, minimum_momentum=0.85,
epochs=EPOCHS, batch_size=BATCH_SIZE, samples=len(X_train), steps=STEP_SIZE_TRAIN)
callback_list = [checkpoint, es, oneCycleLR]
optimizer = optimizers.SGD(lr=LEARNING_RATE, momentum=0.9, nesterov=True)
model.compile(optimizer=optimizer, loss='binary_crossentropy', metrics=metric_list)
history = model.fit_generator(generator=train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=valid_generator,
validation_steps=STEP_SIZE_VALID,
callbacks=callback_list,
epochs=EPOCHS,
verbose=2).history
```
## Model loss graph
```
#@title
metrics_history = ['loss', 'acc']
for metric_hist in metrics_history:
history[metric_hist] = warmup_history[metric_hist] + history[metric_hist]
history['val_' + metric_hist] = warmup_history['val_' + metric_hist] + history['val_' + metric_hist]
plot_metrics(history, metric_list=metrics_history)
```
## Scheduler learning rates
```
#@title
fig, ax1 = plt.subplots(1, 1, figsize=(20, 6))
plt.xlabel('Training Iterations')
plt.ylabel('Learning Rate')
plt.title("CLR")
plt.plot(oneCycleLR.history['lr'])
plt.show()
```
```
%load_ext watermark
%watermark -p torch,pytorch_lightning,torchmetrics,matplotlib
%load_ext pycodestyle_magic
%flake8_on --ignore W291,W293,E703
```
<a href="https://pytorch.org"><img src="https://raw.githubusercontent.com/pytorch/pytorch/master/docs/source/_static/img/pytorch-logo-dark.svg" width="90"/></a> <a href="https://www.pytorchlightning.ai"><img src="https://raw.githubusercontent.com/PyTorchLightning/pytorch-lightning/master/docs/source/_static/images/logo.svg" width="150"/></a>
# Model Zoo -- AlexNet Trained on CIFAR-10
**References:**
- [1] Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. "[Imagenet classification with deep convolutional neural networks](https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf)." In Advances in Neural Information Processing Systems, pp. 1097-1105. 2012.
## General settings and hyperparameters
- Here, we specify some general hyperparameter values and general settings
- Note that for small datasets, it is not necessary (and often better not) to use multiple workers, as they can sometimes cause issues with too many open files in PyTorch. So, if you have problems with the data loader later, try setting `NUM_WORKERS = 0` instead.
```
BATCH_SIZE = 256
NUM_EPOCHS = 40
LEARNING_RATE = 0.001
NUM_WORKERS = 4
```
## Implementing a Neural Network using PyTorch Lightning's `LightningModule`
- In this section, we set up the main model architecture using the `LightningModule` from PyTorch Lightning.
- We start with defining our neural network model in pure PyTorch, and then we use it in the `LightningModule` to get all the extra benefits that PyTorch Lightning provides.
```
import torch.nn as nn
import torch.nn.functional as F
# Regular PyTorch Module
class PyTorchAlexNet(nn.Module):
def __init__(self, num_classes):
super().__init__()
self.features = nn.Sequential(
nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=3, stride=2),
nn.Conv2d(64, 192, kernel_size=5, padding=2),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=3, stride=2),
nn.Conv2d(192, 384, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.Conv2d(384, 256, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.Conv2d(256, 256, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=3, stride=2),
)
self.avgpool = nn.AdaptiveAvgPool2d((6, 6))
self.classifier = nn.Sequential(
nn.Dropout(0.5),
nn.Linear(256 * 6 * 6, 4096),
nn.ReLU(inplace=True),
nn.Dropout(0.5),
nn.Linear(4096, 4096),
nn.ReLU(inplace=True),
nn.Linear(4096, num_classes)
)
def forward(self, x):
x = self.features(x)
x = self.avgpool(x)
x = torch.flatten(x, start_dim=1)
logits = self.classifier(x)
return logits
import pytorch_lightning as pl
import torchmetrics
# LightningModule that receives a PyTorch model as input
class LightningModel(pl.LightningModule):
def __init__(self, model, learning_rate):
super().__init__()
self.learning_rate = learning_rate
# The inherited PyTorch module
self.model = model
# Save settings and hyperparameters to the log directory
# but skip the model parameters
self.save_hyperparameters(ignore=['model'])
# Set up attributes for computing the accuracy
self.train_acc = torchmetrics.Accuracy()
self.valid_acc = torchmetrics.Accuracy()
self.test_acc = torchmetrics.Accuracy()
# Defining the forward method is only necessary
# if you want to use a Trainer's .predict() method (optional)
def forward(self, x):
return self.model(x)
# A common forward step to compute the loss and labels
# this is used for training, validation, and testing below
def _shared_step(self, batch):
features, true_labels = batch
logits = self(features)
loss = torch.nn.functional.cross_entropy(logits, true_labels)
predicted_labels = torch.argmax(logits, dim=1)
return loss, true_labels, predicted_labels
def training_step(self, batch, batch_idx):
loss, true_labels, predicted_labels = self._shared_step(batch)
self.log("train_loss", loss)
# To account for Dropout behavior during evaluation
self.model.eval()
with torch.no_grad():
_, true_labels, predicted_labels = self._shared_step(batch)
self.train_acc.update(predicted_labels, true_labels)
self.log("train_acc", self.train_acc, on_epoch=True, on_step=False)
self.model.train()
        return loss  # this is passed to the optimizer for training
def validation_step(self, batch, batch_idx):
loss, true_labels, predicted_labels = self._shared_step(batch)
self.log("valid_loss", loss)
self.valid_acc(predicted_labels, true_labels)
self.log("valid_acc", self.valid_acc,
on_epoch=True, on_step=False, prog_bar=True)
def test_step(self, batch, batch_idx):
loss, true_labels, predicted_labels = self._shared_step(batch)
self.test_acc(predicted_labels, true_labels)
self.log("test_acc", self.test_acc, on_epoch=True, on_step=False)
def configure_optimizers(self):
optimizer = torch.optim.Adam(self.parameters(), lr=self.learning_rate)
return optimizer
```
## Setting up the dataset
- In this section, we are going to set up our dataset.
### Inspecting the dataset
```
import torch
from torchvision import datasets
from torchvision import transforms
from torch.utils.data import DataLoader
train_dataset = datasets.CIFAR10(root='./data',
train=True,
transform=transforms.ToTensor(),
download=True)
train_loader = DataLoader(dataset=train_dataset,
batch_size=BATCH_SIZE,
num_workers=NUM_WORKERS,
drop_last=True,
shuffle=True)
test_dataset = datasets.CIFAR10(root='./data',
train=False,
transform=transforms.ToTensor())
test_loader = DataLoader(dataset=test_dataset,
batch_size=BATCH_SIZE,
num_workers=NUM_WORKERS,
drop_last=False,
shuffle=False)
# Checking the dataset
all_train_labels = []
all_test_labels = []
for images, labels in train_loader:
all_train_labels.append(labels)
all_train_labels = torch.cat(all_train_labels)
for images, labels in test_loader:
all_test_labels.append(labels)
all_test_labels = torch.cat(all_test_labels)
print('Training labels:', torch.unique(all_train_labels))
print('Training label distribution:', torch.bincount(all_train_labels))
print('\nTest labels:', torch.unique(all_test_labels))
print('Test label distribution:', torch.bincount(all_test_labels))
```
### Performance baseline
- Especially for imbalanced datasets, it's quite useful to compute a performance baseline.
- In classification contexts, a useful baseline is to compute the accuracy for a scenario where the model always predicts the majority class -- you want your model to be better than that!
```
majority_prediction = torch.argmax(torch.bincount(all_test_labels))
baseline_acc = torch.mean((all_test_labels == majority_prediction).float())
print(f'Baseline ACC: {baseline_acc*100:.2f}%')
```
### Setting up a `DataModule`
- There are three main ways we can prepare the dataset for Lightning. We can
1. make the dataset part of the model;
2. set up the data loaders as usual and feed them to the fit method of a Lightning Trainer -- the Trainer is introduced in the next subsection;
3. create a LightningDataModule.
- Here, we are going to use approach 3, which is the most organized approach. The `LightningDataModule` consists of several self-explanatory methods as we can see below:
```
import os
from torch.utils.data.dataset import random_split
from torch.utils.data import DataLoader
from torchvision import transforms
class DataModule(pl.LightningDataModule):
def __init__(self, data_path='./'):
super().__init__()
self.data_path = data_path
def prepare_data(self):
datasets.CIFAR10(root=self.data_path,
download=True)
self.train_transform = transforms.Compose(
[transforms.Resize((70, 70)),
transforms.RandomCrop((64, 64)),
transforms.ToTensor()])
self.test_transform = transforms.Compose(
[transforms.Resize((70, 70)),
transforms.CenterCrop((64, 64)),
transforms.ToTensor()])
return
def setup(self, stage=None):
train = datasets.CIFAR10(root=self.data_path,
train=True,
transform=self.train_transform,
download=False)
self.test = datasets.CIFAR10(root=self.data_path,
train=False,
transform=self.test_transform,
download=False)
self.train, self.valid = random_split(train, lengths=[45000, 5000])
def train_dataloader(self):
train_loader = DataLoader(dataset=self.train,
batch_size=BATCH_SIZE,
drop_last=True,
shuffle=True,
num_workers=NUM_WORKERS)
return train_loader
def val_dataloader(self):
valid_loader = DataLoader(dataset=self.valid,
batch_size=BATCH_SIZE,
drop_last=False,
shuffle=False,
num_workers=NUM_WORKERS)
return valid_loader
def test_dataloader(self):
test_loader = DataLoader(dataset=self.test,
batch_size=BATCH_SIZE,
drop_last=False,
shuffle=False,
num_workers=NUM_WORKERS)
return test_loader
```
- Note that the `prepare_data` method is usually used for steps that only need to be executed once, for example, downloading the dataset; the `setup` method defines the dataset loading -- if you run your code in a distributed setting, this will be called on each node / GPU.
- Next, let's initialize the `DataModule`; we use a random seed for reproducibility (so that the dataset is shuffled the same way when we re-execute this code):
```
torch.manual_seed(1)
data_module = DataModule(data_path='./data')
```
## Training the model using the PyTorch Lightning Trainer class
- Next, we initialize our model.
- Also, we define a call back so that we can obtain the model with the best validation set performance after training.
- PyTorch Lightning offers [many advanced logging services](https://pytorch-lightning.readthedocs.io/en/latest/extensions/logging.html) like Weights & Biases. Here, we will keep things simple and use the `CSVLogger`:
```
from pytorch_lightning.callbacks import ModelCheckpoint
from pytorch_lightning.loggers import CSVLogger
pytorch_model = PyTorchAlexNet(num_classes=10)
lightning_model = LightningModel(
pytorch_model, learning_rate=LEARNING_RATE)
callbacks = [ModelCheckpoint(
save_top_k=1, mode='max', monitor="valid_acc")] # save top 1 model
logger = CSVLogger(save_dir="logs/", name="my-model")
```
- Now it's time to train our model:
```
import time
trainer = pl.Trainer(
max_epochs=NUM_EPOCHS,
callbacks=callbacks,
progress_bar_refresh_rate=50, # recommended for notebooks
accelerator="auto", # Uses GPUs or TPUs if available
devices="auto", # Uses all available GPUs/TPUs if applicable
logger=logger,
log_every_n_steps=100)
start_time = time.time()
trainer.fit(model=lightning_model, datamodule=data_module)
runtime = (time.time() - start_time)/60
print(f"Training took {runtime:.2f} min in total.")
```
## Evaluating the model
- After training, let's plot our training ACC and validation ACC using pandas, which, in turn, uses matplotlib for plotting (you may want to consider a [more advanced logger](https://pytorch-lightning.readthedocs.io/en/latest/extensions/logging.html) that does that for you):
```
import pandas as pd
metrics = pd.read_csv(f"{trainer.logger.log_dir}/metrics.csv")
aggreg_metrics = []
agg_col = "epoch"
for i, dfg in metrics.groupby(agg_col):
agg = dict(dfg.mean())
agg[agg_col] = i
aggreg_metrics.append(agg)
df_metrics = pd.DataFrame(aggreg_metrics)
df_metrics[["train_loss", "valid_loss"]].plot(
grid=True, legend=True, xlabel='Epoch', ylabel='Loss')
df_metrics[["train_acc", "valid_acc"]].plot(
grid=True, legend=True, xlabel='Epoch', ylabel='ACC')
```
- The `trainer` automatically saves the model with the best validation accuracy for us, which we can load from the checkpoint via the `ckpt_path='best'` argument; below we use the `trainer` instance to evaluate the best model on the test set:
```
trainer.test(model=lightning_model, datamodule=data_module, ckpt_path='best')
```
## Predicting labels of new data
- You can use the `trainer.predict` method on a new `DataLoader` or `DataModule` to apply the model to new data.
- Alternatively, you can also manually load the best model from a checkpoint as shown below:
```
path = trainer.checkpoint_callback.best_model_path
print(path)
lightning_model = LightningModel.load_from_checkpoint(
path, model=pytorch_model)
lightning_model.eval();
```
- Note that the PyTorch model passed to the Lightning model requires input arguments (here, `num_classes`), which is why we pass `model=pytorch_model` when loading from the checkpoint. The remaining hyperparameters are restored automatically because we used `self.save_hyperparameters(ignore=['model'])` in the `LightningModule`'s `__init__` method.
- Now, below is an example applying the model manually. Here, pretend that the `test_dataloader` is a new data loader.
```
test_dataloader = data_module.test_dataloader()
all_true_labels = []
all_predicted_labels = []
for batch in test_dataloader:
features, labels = batch
with torch.no_grad(): # since we don't need to backprop
logits = lightning_model(features)
predicted_labels = torch.argmax(logits, dim=1)
all_predicted_labels.append(predicted_labels)
all_true_labels.append(labels)
all_predicted_labels = torch.cat(all_predicted_labels)
all_true_labels = torch.cat(all_true_labels)
all_predicted_labels[:5]
```
Just as an internal check, if the model was loaded correctly, the test accuracy below should be identical to the test accuracy we saw earlier in the previous section.
```
test_acc = torch.mean((all_predicted_labels == all_true_labels).float())
print(f'Test accuracy: {test_acc:.4f} ({test_acc*100:.2f}%)')
```
# What is PyTorch?
* It's a machine learning library used for NLP, computer vision, etc.
* Provides Tensor computing with GPU support.
# Modules in Pytorch
* **Autograd module:**
PyTorch uses a method called automatic differentiation: a recorder records which operations have been performed, and then replays them backward to compute the gradients. This is especially powerful when building neural networks, since parameter gradients are computed automatically from the operations recorded during the forward pass.
* **Optim module:**
torch.optim is a module that implements various optimization algorithms used for building neural networks.
* **nn module:**
PyTorch autograd makes it easy to define computational graphs and take gradients, but raw autograd can be a bit too low-level for defining complex neural networks. This is where the nn module can help.
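The record-and-replay idea behind autograd, and the parameter update that `torch.optim` performs, can be sketched in a few lines of plain Python. This is a toy illustration of reverse-mode automatic differentiation, not PyTorch's actual implementation:

```
# Toy reverse-mode autodiff: each operation records its inputs and local
# gradients on the forward pass; backward() replays the tape in reverse.
class Var:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # pairs of (parent Var, local gradient)
        self.grad = 0.0

    def __add__(self, other):
        return Var(self.value + other.value, ((self, 1.0), (other, 1.0)))

    def __mul__(self, other):
        return Var(self.value * other.value,
                   ((self, other.value), (other, self.value)))

    def backward(self, upstream=1.0):
        self.grad += upstream
        for parent, local_grad in self.parents:
            parent.backward(upstream * local_grad)

x, y = Var(3.0), Var(4.0)
z = x * y + x          # z = 15, dz/dx = y + 1 = 5, dz/dy = x = 3
z.backward()
# What an optimizer such as SGD then does is essentially: param -= lr * grad
x.value -= 0.1 * x.grad  # 3.0 - 0.1*5.0 = 2.5
```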
# Topics covered in this notebook
* Handwritten Digits Classification (Numerical Data)-**Digit MNIST**
* Objects Image Classification (Image Data, CNN)-**Sign Language MNIST**
# Objects Image Classification (Image Data, CNN)-Sign Language MNIST
### Importing lib
```
import cv2
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
import os
import math
%matplotlib inline
import time
#pytorch utility imports
import torch
import torchvision
import torchvision.transforms as transforms
from torch.utils.data import DataLoader, TensorDataset
from torchvision.utils import make_grid
#neural net imports
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.autograd import Variable
start = torch.cuda.Event(enable_timing=True) #time measure during cuda training
end = torch.cuda.Event(enable_timing=True)
```
### Loading dataset
```
test_df = pd.read_csv('../input/sign-language-mnist/sign_mnist_test/sign_mnist_test.csv')
train_df = pd.read_csv('../input/sign-language-mnist/sign_mnist_train/sign_mnist_train.csv')
print((train_df['label'].unique()).shape )# There are 24 possible labels, 9=J and 25=Z require motion so they are absent.
print(np.sort(train_df['label'].unique()))
```
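As a quick sanity check of the comment above: removing J (label 9) and Z (label 25) from the 26 alphabet labels should indeed leave 24 classes (plain-Python sketch):

```
# Labels 0..25 cover A..Z; J (9) and Z (25) require motion, so static
# images cannot represent them and those labels are absent.
expected_labels = sorted(set(range(26)) - {9, 25})
print(len(expected_labels))  # → 24
```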
### Separating labels and features
```
train_labels = train_df['label'].values
test_labels=test_df['label'].values
train_images = (train_df.iloc[:,1:].values).astype('float32')
test_images = (test_df.iloc[:,1:].values).astype('float32')
print("train images shape",train_images.shape)
print("train labels shape",train_labels.shape)
print("test images shape",test_images.shape)
print("test labels shape",test_labels.shape)
```
### Reshape features
*Note: for image data the reshape is 4D: (samples, channels, height, width)*
```
train_images = train_images.reshape(train_images.shape[0],1, 28, 28)
test_images = test_images.reshape(test_images.shape[0],1, 28, 28)
print(train_images.shape)
print(test_images.shape)
```
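The 4D reshape can be verified on a small fake batch (a sketch assuming NumPy; the sample count of 10 is arbitrary):

```
import numpy as np

# 10 fake flattened 28x28 grayscale images -> NCHW layout
flat = np.zeros((10, 784), dtype='float32')
images = flat.reshape(flat.shape[0], 1, 28, 28)  # (samples, channels, H, W)
print(images.shape)  # → (10, 1, 28, 28)
```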
### Changing to Tensor
```
train_images_tensor = torch.tensor(train_images)/255.0 #default torch.FloatTensor
train_labels_tensor = torch.tensor(train_labels)
train_tensor = TensorDataset(train_images_tensor, train_labels_tensor)
test_images_tensor = torch.tensor(test_images)/255.0
test_labels_tensor = torch.tensor(test_labels)
test_tensor = TensorDataset(test_images_tensor, test_labels_tensor)
```
### DataLoader
```
train_loader = DataLoader(train_tensor, batch_size=16, num_workers=2, shuffle=True)
test_loader = DataLoader(test_images_tensor, batch_size=16, num_workers=2, shuffle=False)
```
### Model
```
import torch.nn.functional as F
from torch.nn import Linear, ReLU, CrossEntropyLoss, Sequential, Conv2d, MaxPool2d, Module, Softmax, BatchNorm2d, Dropout
from torch.optim import Adam, SGD
class Net(nn.Module): # class Net inherits from predefined Module class in torch.nn
def __init__(self): # calling constructor of parent class
super().__init__()
self.conv1 = nn.Conv2d(1,32,3) # 2d convolution layer : (input : 1 image , output : 32 channels , kernel size : 3*3)
self.conv2 = nn.Conv2d(32,64,3)
self.conv3 = nn.Conv2d(64,128,3)
self.linear_in = None # used to calculate input of first linear layer by passing fake data through 2d layers
x = torch.rand(28,28).view(-1,1,28,28) # using convs function
self.convs(x)
self.fc1 = nn.Linear(self.linear_in,512)
self.fc2 = nn.Linear(512,26)
def convs(self,x):
x = F.max_pool2d(F.relu(self.conv1(x)) , (2,2) ) # relu used for activation function
x = F.max_pool2d(F.relu(self.conv2(x)) , (2,2) ) # max_pool2d for max pooling results of each kernel with window size 2*2
x = F.max_pool2d(F.relu(self.conv3(x)) , (2,2) )
if self.linear_in == None:
self.linear_in = x[0].shape[0]*x[0].shape[1]*x[0].shape[2] # input of first linear layer is multiplication of dimensions of ouput
return x # tensor of the 2d layers
def forward(self,x): # forward pass function uses the convs function to pass through 2d layers
x = self.convs(x)
x = x.view(-1,self.linear_in)
x = F.relu(self.fc1(x))
x = self.fc2(x)
x = F.log_softmax(x ,dim = -1) # log_softmax for finding output neuron with highest value
return x
net = Net()
print(net)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)
if (device.type=='cuda'):
    net.cuda() # CUDA
net.to(device)
import torch.optim as optim
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(net.parameters(), lr=0.001)
if (device.type=='cuda'):
start.record()
loss_log = []
for epoch in range(20): # loop over dataset multiple times
running_loss = 0.0
for i, (data,target) in enumerate(train_loader):
if (device.type=='cuda'):
inputs,labels= Variable(data.cuda()), Variable(target.cuda())
else:
inputs,labels= Variable(data), Variable(target)
# zero parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = F.cross_entropy(outputs, labels)
#print(loss)
loss.backward()
optimizer.step()
#if i % 100 == 1:
print('\r Train Epoch: {} [{}/{} ({:.0f}%)] \tLoss: {:.6f}'.format( epoch, i * len(data), len(train_loader.dataset),
100. * i / len(train_loader), loss.data), end='')
print("")
print('Finished Training')
if (device.type=='cuda'):
end.record()
if (device.type=='cuda'):
evaluate_x=test_images_tensor.cuda()
evaluate_y=test_labels_tensor.cuda()
else:
evaluate_x=test_images_tensor
evaluate_y=test_labels_tensor
output = net(evaluate_x)
pred = output.data.max(1)[1]
d = pred.eq(evaluate_y.data).cpu()
accuracy = d.float().mean().item()
print('Accuracy:', accuracy*100)
if (device.type=='cuda'):
torch.cuda.synchronize()
print(start.elapsed_time(end)/1000,"sec")
```
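The `linear_in` value that `convs` infers with a dummy tensor can also be derived analytically: each valid 3x3 convolution shrinks the spatial side by 2, and each 2x2 max-pool halves it (flooring). A small pure-Python check, assuming the layer sizes defined above:

```
def flattened_size(side, channels_out=128, n_blocks=3, kernel=3, pool=2):
    # spatial side after n_blocks of (valid conv, then max-pool)
    for _ in range(n_blocks):
        side = side - (kernel - 1)  # valid convolution: 28→26, 13→11, 5→3
        side = side // pool         # 2x2 max pooling:   26→13, 11→5, 3→1
    return channels_out * side * side

print(flattened_size(28))  # → 128, matching self.linear_in for 28x28 inputs
```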
### Calculating the F1 score
```
from sklearn.metrics import f1_score
print("f1 score =",f1_score(test_labels, pred.cpu().numpy(), average='macro'))
```
## Conclusion
Lets get connected on [Linkedin](https://www.linkedin.com/in/manzoor-bin-mahmood/)
Visit my [website](https://manzoormahmood.github.io/)
* Hyperparameter tuning of All classifiers for emotional transition detection
* 6 fold cross validation with grid-search
* Binary classification
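The 6-fold cross-validation mentioned above splits the data into six parts, training on five and validating on the held-out sixth in turn. A minimal index-splitting sketch in plain Python (scikit-learn's `KFold`, imported below, also handles shuffling and stratified variants):

```
def kfold_indices(n_samples, n_folds=6):
    # earlier folds absorb the remainder, as in sklearn's KFold
    sizes = [n_samples // n_folds + (1 if i < n_samples % n_folds else 0)
             for i in range(n_folds)]
    folds, start = [], 0
    for size in sizes:
        folds.append(list(range(start, start + size)))
        start += size
    for i, val_idx in enumerate(folds):
        train_idx = [j for k, f in enumerate(folds) if k != i for j in f]
        yield train_idx, val_idx

splits = list(kfold_indices(20, n_folds=6))  # 6 (train, validation) pairs
```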
```
import pandas as pd
import datetime
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
from pprint import pprint
from sklearn.model_selection import train_test_split
from sklearn import metrics
from sklearn.feature_selection import SelectFromModel,RFECV
from sklearn.model_selection import cross_validate
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import KFold, StratifiedKFold, cross_val_score, PredefinedSplit
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import KFold, StratifiedKFold, cross_val_score
from sklearn.preprocessing import MinMaxScaler, StandardScaler
from sklearn import metrics
from imblearn.over_sampling import SMOTE
from imblearn.over_sampling import SMOTENC
from imblearn.over_sampling import ADASYN
from imblearn.over_sampling import SVMSMOTE
from imblearn.combine import SMOTEENN
from imblearn.combine import SMOTETomek
pd.options.mode.chained_assignment = None
import re
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
#warnings.filterwarnings('always')
import pickle
from sklearn.compose import ColumnTransformer
from sklearn.decomposition import PCA
from imblearn.pipeline import Pipeline
from imblearn.over_sampling import SMOTE
from sklearn.metrics import classification_report
from sklearn.metrics import cohen_kappa_score
from imblearn.metrics import specificity_score
from sklearn.multiclass import OneVsRestClassifier
from sklearn.metrics import accuracy_score
from sklearn.metrics import balanced_accuracy_score
from sklearn.metrics import make_scorer, f1_score, roc_auc_score, precision_score, recall_score, confusion_matrix
from sklearn import metrics
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC, LinearSVC
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from xgboost import XGBClassifier
from catboost import CatBoostClassifier, Pool, cv
from sklearn.neural_network import MLPClassifier
#from pandas_ml import ConfusionMatrix
#import collections
def read_input(p):
#Read input file of each person
filename='data/NonOverlap_w5_emoChange_SelFeat_data_p'+str(p)+'.csv'
raw_df= pd.read_csv(filename)
print("The shape of the dataframe is ",raw_df.shape)
return raw_df
# replace NANs with -999
def prep_data(data):
return data.fillna(-999)
#drop columns
def drop_cols(data, col_list):
return data.drop(col_list, axis=1)
# standardize data (z-score); fit on train only to avoid test-set leakage
def scale_data(trn_x, tst_x):
    sc = StandardScaler()
    scaled_trn_x = sc.fit_transform(trn_x)
    scaled_tst_x = sc.transform(tst_x)
    return scaled_trn_x, scaled_tst_x
# oversampling with SMOTE with 'minority' and 'not majority'
def over_sample_SMOTE(X_train, y_train):
sm=SMOTE(sampling_strategy='not majority', random_state=10) # 'minority'
X_train_ovr, y_train_ovr=sm.fit_sample(X_train, y_train)
#print(X_train_ovr.shape, y_train_ovr.shape)
return X_train_ovr, y_train_ovr
# oversampling with SMOTENC with 'minority' and 'not majority'
def over_sample_SMOTENC(X_train, y_train):
sm = SMOTENC(sampling_strategy='not majority',random_state=10)
#sm = SMOTENC(sampling_strategy='minority',random_state=10)
X_train_ovr, y_train_ovr=sm.fit_sample(X_train, y_train)
#print(X_train_ovr.shape, y_train_ovr.shape)
return X_train_ovr, y_train_ovr
# oversampling with SVMSMOTE
def over_sample_SVMSMOTE(X_train, y_train):
sm=SVMSMOTE(random_state=10)
X_train_ovr, y_train_ovr=sm.fit_sample(X_train, y_train)
#print(X_train_ovr.shape, y_train_ovr.shape)
return X_train_ovr, y_train_ovr
def merge_dataframes(p_list):
df = pd.DataFrame()
for p in p_list:
new_df = read_input(p)
df=df.append(new_df,ignore_index = True)
#drop all variables that contain all NANs
df.dropna(axis=1,how='all', inplace=True)
#reset the index
df.reset_index(drop=True, inplace=True)
#drop columns with all zeros in pandas dataframe
df=df.T[(df!=0).any()].T
#keep columns with missing values < 30%
df = df.loc[:, df.isnull().mean() < .3]
print("The shape of the merged dataframe is ",df.shape)
return df
#drop all columns that contain location information (if any)
def drop_location(df):
print(df.shape)
df = df[df.columns.drop(list(df.filter(regex='location')))]
df = df[df.columns.drop(list(df.filter(regex='latitude')))]
    df = df[df.columns.drop(list(df.filter(regex='longitude')))]
print(df.shape)
return df
def select_k_features(X_train_scaled,X_test_scaled,y_train,k):
    selection = SelectKBest(mutual_info_classif, k=k)
X_train = selection.fit_transform(X_train_scaled,y_train)
X_test = selection.transform(X_test_scaled)
return X_train, X_test
def print_results(accu, bl_accu, prec, rec_, spec_, roc_, f1_):
print('.....................')
print("Average Accuracy: %.2f%% (%.2f)" % (np.mean(accu), np.std(accu)))
print("Average Balanced_accuracy: %.2f%% (%.2f)" % (np.mean(bl_accu),np.std(bl_accu)))
print("Average Precision: %.2f%% (%.2f)" % (np.mean(prec),np.std(prec)))
print("Average Recall: %.2f%% (%.2f)" % (np.mean(rec_),np.std(rec_)))
print("Average Specificity: %.2f%% (%.2f)" % (np.mean(spec_),np.std(spec_)))
print("Average ROC AUC: %.2f%% (%.2f)" % (np.mean(roc_),np.std(roc_)))
print("Average F1 score: %.2f%% (%.2f)" % (np.mean(f1_),np.std(f1_)))
print('..................................................')
print('\n')
pipe = Pipeline([('scaler', StandardScaler()), # MinMaxScaler()
('selector', SelectKBest(mutual_info_classif, k=90)), #
('classifier', LogisticRegression())])
search_space = [{'selector__k': [ 50, 70, 90]},
{'classifier': [LogisticRegression(solver='lbfgs')],
'classifier__C': [0.01, 0.1, 1.0],
'classifier__penalty': ['l1', 'l2', None],
'classifier__solver': ['newton-cg', 'lbfgs', 'liblinear', 'sag', 'saga'],
'classifier__max_iter':[100, 150, 200],
'classifier__class_weight':[None, 'balanced']},
{'classifier': [RandomForestClassifier()],
'classifier__max_depth': [5, 10, 30, None],
'classifier__criterion':['gini','entropy'],
'classifier__bootstrap': [True],
'classifier__max_features':['log2', None],
'classifier__n_estimators': [50, 100, 200, 300, 400]},
{'classifier': [MLPClassifier(random_state=1, early_stopping=True)],
'classifier__hidden_layer_sizes' : [(50, 50, 50), (50, 100, 50), (20, 20, 20), (30, ), (50,),(100,)],
'classifier__activation' : ['tanh', 'relu', 'logistic'],
'classifier__max_iter':[50, 100, 150, 200, 300],
'classifier__solver': ['sgd', 'adam', 'lbfgs'],
'classifier__alpha': [0.0001, 0.001, 0.05]},
{'classifier': [CatBoostClassifier(random_seed=1)],
'classifier__learning_rate': [0.05, 0.1, 0.15, 0.2]},
{'classifier': [XGBClassifier(objective='binary:logistic', random_state=1)],
'classifier__learning_rate': [0.05, 0.1, 0.15, 0.2],
'classifier__colsample_bytree':[.5, .75, 1],
'classifier__max_depth': np.arange(3, 6, 10),
'classifier__n_estimators': [50, 100, 200, 300, 400]}]
scorer = make_scorer(f1_score, average = 'binary')
LR_pipe = Pipeline([('scaler', StandardScaler()), # MinMaxScaler()
('selector', SelectKBest(mutual_info_classif, k=90)), #
('classifier', LogisticRegression())])
LR_search_space = [{'selector__k': [ 50, 70, 90, 110]},
{'classifier': [LogisticRegression(solver='lbfgs')],
'classifier__C': [0.01, 0.1, 1.0],
'classifier__penalty': ['l1', 'l2', None],
'classifier__solver': ['newton-cg', 'lbfgs', 'liblinear', 'sag', 'saga'],
'classifier__max_iter':[100, 150, 200],
'classifier__class_weight':[None, 'balanced']}]
################################################################################
RF_pipe = Pipeline([('scaler', StandardScaler()), # MinMaxScaler()
('selector', SelectKBest(mutual_info_classif, k=90)), #
('classifier', RandomForestClassifier())])
RF_search_space = [{'selector__k': [ 50, 70, 90, 110]},
{'classifier': [RandomForestClassifier()],
'classifier__max_depth': [5, 10, 30, None],
'classifier__criterion':['gini','entropy'],
'classifier__bootstrap': [True],
'classifier__max_features':['log2', None],
'classifier__n_estimators': [50, 100, 200, 300, 400]}]
################################################################################
MLP_pipe = Pipeline([('scaler', StandardScaler()), # MinMaxScaler()
('selector', SelectKBest(mutual_info_classif, k=90)), #
('classifier', MLPClassifier(random_state=1, early_stopping=True))])
MLP_search_space = [{'selector__k': [ 50, 70, 90, 110]},
{'classifier': [MLPClassifier(random_state=1, early_stopping=True)],
'classifier__hidden_layer_sizes' : [(50, 50, 50), (50, 100, 50), (20, 20, 20), (30, ), (50,),(100,)],
'classifier__activation' : ['tanh', 'relu', 'logistic'],
'classifier__max_iter':[50, 100, 150, 200, 300],
'classifier__solver': ['sgd', 'adam', 'lbfgs'],
'classifier__alpha': [0.0001, 0.001, 0.05]}]
################################################################################
CB_pipe = Pipeline([('scaler', StandardScaler()), # MinMaxScaler()
('selector', SelectKBest(mutual_info_classif, k=90)), #
('classifier', CatBoostClassifier(random_seed=1))])
CB_search_space = [{'selector__k': [ 50, 70, 90, 110]},
{'classifier': [CatBoostClassifier(random_seed=1, verbose=False)],
'classifier__learning_rate': [0.05, 0.1, 0.15, 0.2]}]
#'iterations': Integer(10, 1000),
# 'depth': Integer(1, 8),
# 'learning_rate': Real(0.01, 1.0, 'log-uniform'),
# 'random_strength': Real(1e-9, 10, 'log-uniform'),
# 'bagging_temperature': Real(0.0, 1.0),
# 'border_count': Integer(1, 255),
# 'l2_leaf_reg': Integer(2, 30),
# 'scale_pos_weight':Real(0.01, 1.0, 'uniform')
################################################################################
XGB_pipe = Pipeline([('scaler', StandardScaler()), # MinMaxScaler()
('selector', SelectKBest(mutual_info_classif, k=90)), #
('classifier', XGBClassifier(objective='binary:logistic', random_state=1))])
XGB_search_space = [{'selector__k': [ 50, 70, 90, 110]},
{'classifier': [XGBClassifier(objective='binary:logistic', random_state=1)],
'classifier__learning_rate': [0.05, 0.1, 0.15, 0.2],
'classifier__colsample_bytree':[.5, .75, 1],
'classifier__max_depth': np.arange(3, 6, 10),
'classifier__n_estimators': [50, 100, 200, 300, 400]}]
p_list=[8,10,12,13,15,20,21,25, 27, 33,35,40,46,48,49,52,54,55]
nfolds = 6
# make a predefined CV split (test_fold)
test_fold = []
for i in range(nfolds):
p_test = p_list[i*3:i*3+3]
df_test = merge_dataframes(p_test)
tst = [i] * df_test.shape[0]
test_fold= test_fold + tst
ps = PredefinedSplit(test_fold)
# df contains all persons' data in one dataset
df = merge_dataframes(p_list)
df = prep_data(df)
# remove day_of_month variable if present in data
if 'day_of_month' in df.columns:
drop_col=['day_of_month']
df=drop_cols(df, drop_col)
#drop all columns that contain location information
df = drop_location(df)
labels = list(df.columns)
labels.remove('emotion_change')
X = df[labels]
y = df['emotion_change']
def grid_search_wrapper(pipe = pipe, search_space = search_space, verbose= False,refit_score=scorer):
"""
fits a GridSearchCV classifiers using refit_score for optimization
prints classifier performance metrics
"""
#cross_validation = StratifiedKFold(n_splits=5, shuffle=True, random_state=random_state)
cross_validation = ps
grid_search = GridSearchCV(pipe, search_space, cv=cross_validation, verbose=verbose, n_jobs = -1) #scoring=scorer, refit=scorer
grid_search.fit(X, y)
return grid_search
# do grid search for best parameters
pipeline_grid_search_LR = grid_search_wrapper(pipe = LR_pipe, search_space = LR_search_space, verbose=2)
pipeline_grid_search_RF = grid_search_wrapper(pipe = RF_pipe, search_space = RF_search_space, verbose=2)
pipeline_grid_search_MLP = grid_search_wrapper(pipe = MLP_pipe, search_space = MLP_search_space, verbose=2)
pipeline_grid_search_XGB = grid_search_wrapper(pipe = XGB_pipe, search_space = XGB_search_space, verbose=2)
pipeline_grid_search_CB = grid_search_wrapper(pipe = CB_pipe, search_space = CB_search_space, verbose=False)
print(pipeline_grid_search_RF.best_estimator_)
print(pipeline_grid_search_RF.best_score_)
print(pipeline_grid_search_XGB.best_estimator_)
print(pipeline_grid_search_XGB.best_score_)
print(pipeline_grid_search_LR.best_estimator_)
print(pipeline_grid_search_LR.best_score_)
print(pipeline_grid_search_CB.best_estimator_)
print(pipeline_grid_search_CB.best_score_)
print(pipeline_grid_search_MLP.best_estimator_)
print(pipeline_grid_search_MLP.best_score_)
# best models
LR_model = LogisticRegression(C=1.0, class_weight=None, dual=False,
fit_intercept=True, intercept_scaling=1,
l1_ratio=None, max_iter=100,
multi_class='auto', n_jobs=None,
penalty='l2', random_state=None,
solver='lbfgs', tol=0.0001, verbose=0,
warm_start=False)
RF_model = RandomForestClassifier(bootstrap=True, ccp_alpha=0.0,
class_weight=None, criterion='gini',
max_depth=5, max_features=None,
max_leaf_nodes=None, max_samples=None,
min_impurity_decrease=0.0,
min_impurity_split=None,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0,
n_estimators=200, n_jobs=None,
oob_score=False, random_state=None,
verbose=0, warm_start=False)
XGB_model = XGBClassifier(base_score=0.5, booster='gbtree',
colsample_bylevel=1, colsample_bynode=1,
colsample_bytree=0.5, gamma=0, learning_rate=0.1,
max_delta_step=0, max_depth=3,
min_child_weight=1, missing=None,
n_estimators=400, n_jobs=1, nthread=None,
objective='binary:logistic', random_state=1,
reg_alpha=0, reg_lambda=1, scale_pos_weight=1,
seed=None, silent=None, subsample=1,
verbosity=1)
MLP_model = MLPClassifier(activation='tanh', alpha=0.001,
batch_size='auto', beta_1=0.9, beta_2=0.999,
early_stopping=True, epsilon=1e-08,
hidden_layer_sizes=(50, 50, 50),
learning_rate='constant',
learning_rate_init=0.001, max_fun=15000,
max_iter=50, momentum=0.9, n_iter_no_change=10,
nesterovs_momentum=True, power_t=0.5,
random_state=1, shuffle=True, solver='sgd',
tol=0.0001, validation_fraction=0.1,
verbose=False, warm_start=False)
CB_model = CatBoostClassifier(random_seed=1, verbose=False,learning_rate= 0.1)
best_models = {} # dictionary of best models with best parameters
best_models['Logistic Regression'] = LR_model
best_models['RandomForest Classifier'] = RF_model
best_models['MLP Classifier'] = MLP_model
best_models['XGBoost Classifier'] = XGB_model
best_models['CatBoost Classifier'] = CB_model
n_features = [50, 90, 90, 90, 90]
nfolds = 6
rnd_state=42
# this is to get all the detailed performance metrics after selecting the best model parameters
k_i = -1
for model_name, model in best_models.items():
k_i = k_i + 1
accu = []
prec = []
rec_ = []
f1_ = []
bl_accu = []
roc_ = []
spec_ = []
i = 1
for train_index, test_index in ps.split():
#print("fold", i)
i+=1
X_train, X_test = X.iloc[train_index], X.iloc[test_index]
y_train, y_test = y.iloc[train_index], y.iloc[test_index]
#scale features
X_train_scaled, X_test_scaled= scale_data(X_train, X_test)
#feature selection
X_train, X_test = select_k_features(X_train_scaled,X_test_scaled,y_train,k=n_features[k_i])
#oversample training data
X_train_imb,y_train_imb=over_sample_SMOTE(X_train, y_train)
#X_train_imb,y_train_imb=over_sample_SMOTENC(X_train, y_train)
#X_train_imb,y_train_imb=over_sample_SVMSMOTE(X_train, y_train)
# train model on imbalance-handled data
model.fit(X_train_imb, y_train_imb)
        #train model on imbalanced data
#model.fit(X_train, y_train)
# test model, measure class label and probability score
y_pred = model.predict(X_test)
y_scores = model.predict_proba(X_test)[:,1]
        #calculate metrics
accuracy = accuracy_score(y_test, y_pred)
bl_accuracy = balanced_accuracy_score(y_test, y_pred)
precision=precision_score(y_test, y_pred, labels=np.unique(y_pred))
recall=recall_score(y_test, y_pred, labels=np.unique(y_pred))
f1=f1_score(y_test, y_pred, labels=np.unique(y_pred))
roc=roc_auc_score(y_test, y_scores, labels=np.unique(y_pred))
spec=specificity_score(y_test, y_pred ,labels=np.unique(y_pred))
ac=accuracy * 100.0
pr=precision*100
rc=recall*100
f1_p=f1*100
bl_ac=bl_accuracy*100
roc=roc*100
spec=spec*100
accu.append(ac)
prec.append(pr)
rec_.append(rc)
f1_.append(f1_p)
bl_accu.append(bl_ac)
roc_.append(roc)
spec_.append(spec)
    print('Results for: ', model_name)
print_results(accu, bl_accu, prec, rec_, spec_, roc_, f1_)
```
# Using PS GPIO with PYNQ
## Goal
The aim of this notebook is to show how to use the Zynq PS GPIO from PYNQ. The PS GPIO are simple wires from the PS, and don't need a controller in the programmable logic.
Up to 64 PS GPIO are available, and they can be used to connect simple control and data signals to IP or peripherals in the PL.
## Hardware design
This example uses a bitstream that connects PS GPIO to the LEDs, buttons, and switches and can be used with the PYNQ-Z1 or PYNQ-Z2 board.

GPIO Map:
* GPIO 0 - 3: Buttons
* GPIO 4 - 5: Switches
* GPIO 6 - 9: LEDs
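The map above can be captured in a small Python dictionary for convenience (the pin indices are assumptions tied to this particular bitstream; the later cells instantiate each pin individually with the `GPIO` class):

```python
# PS GPIO pin map for this design (pin indices mirror the wiring listed above)
GPIO_MAP = {
    "buttons": [0, 1, 2, 3],   # GPIO 0-3 -> push-buttons BTN0-BTN3
    "switches": [4, 5],        # GPIO 4-5 -> dip-switches SW0-SW1
    "leds": [6, 7, 8, 9],      # GPIO 6-9 -> LEDs LD0-LD3
}

def all_pins(gpio_map):
    """Flatten the map into a sorted list of pin indices."""
    return sorted(pin for pins in gpio_map.values() for pin in pins)

print(all_pins(GPIO_MAP))  # -> [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

On a board, each entry would be wrapped as `GPIO(GPIO.get_gpio_pin(pin), direction)`, exactly as done cell by cell below.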
### 1. Download the tutorial overlay
The `ps_gpio.bit` and `ps_gpio.tcl` files can be found in the bitstreams directory local to this folder.
The bitstream can be downloaded by passing the relative path to the Overlay class.
* Check that the bitstream and .tcl exist in the bitstream directory
```
!dir ./bitstream
```
* Download the bitstream
```
from pynq import Overlay
ps_gpio_design = Overlay("./bitstream/ps_gpio.bit")
```
## GPIO class
The GPIO class will be used to access the PS GPIO.
### 1. Controlling the switches and push-buttons
In the design PS GPIO pins 0 to 3 are connected to the pushbuttons, and pins 4 to 5 are connected to the dip-switches on the PYNQ-Z1 and PYNQ-Z2 boards.
```
from pynq import GPIO
button0 = GPIO(GPIO.get_gpio_pin(0), 'in')
button0.read()
```
Try pressing the button BTN0 on the board and rerunning the cell above.
The other buttons and switches can be read in a similar way.
```
button1 = GPIO(GPIO.get_gpio_pin(1), 'in')
button2 = GPIO(GPIO.get_gpio_pin(2), 'in')
button3 = GPIO(GPIO.get_gpio_pin(3), 'in')
switch0 = GPIO(GPIO.get_gpio_pin(4), 'in')
switch1 = GPIO(GPIO.get_gpio_pin(5), 'in')
```
Try pressing different buttons (BTN1, BTN2, BTN3), and moving the switches (SW0, SW1) while executing the cell below.
```
print(f"Button0: {button0.read()}")
print(f"Button1: {button1.read()}")
print(f"Button2: {button2.read()}")
print(f"Button3: {button3.read()}")
print("")
print(f"Switch0: {switch0.read()}")
print(f"Switch1: {switch1.read()}")
```
### 2. Controlling the LEDs
The LEDs can be used in a similar way; the only difference is the direction passed to the GPIO class. The LEDs are connected to PS GPIO 6 to 9 in the design we are using.
```
led0 = GPIO(GPIO.get_gpio_pin(6), 'out')
led0.write(1)
led1 = GPIO(GPIO.get_gpio_pin(7), 'out')
led2 = GPIO(GPIO.get_gpio_pin(8), 'out')
led3 = GPIO(GPIO.get_gpio_pin(9), 'out')
from time import sleep
led1.write(1)
sleep(1)
led2.write(1)
sleep(1)
led3.write(1)
```
* Reset the LEDs
```
led0.write(0)
led1.write(0)
led2.write(0)
led3.write(0)
```
### 3. Putting it together
Run a loop to set the LEDs to the value of the pushbuttons.
Before executing the next cell, make sure Switch 0 (SW0) is "on". While the loop is running, press a push-button and notice the corresponding LED turns on. To exit the loop, change Switch 0 to off.
```
while switch0.read() == 1:
led0.write(button0.read())
led1.write(button1.read())
led2.write(button2.read())
led3.write(button3.read())
```
<a href="https://colab.research.google.com/github/RSNA/AI-Deep-Learning-Lab-2021/blob/main/sessions/tcga-gbm/RSNA_2021_TCGA_GBM_radiogenomics.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
**Notes**
- This notebook is optimized to work on **Google Colab**.
- Remember to activate a **GPU runtime**
- Runtime > Change runtime type > Hardware accelerator: GPU
# Intro & info
Before 2021, glioblastomas (GBMs) used to be classified as IDH1 wild-type (**IDH wt**) or mutant (**IDH mut**).
With the 2021 update of the WHO classification of brain tumors, GBMs are now identified only as IDH wt.
Even though this distinction is no longer meaningful for GBMs, this notebook will show you how to develop a machine learning classifier to discriminate IDH mut and IDH wt GBMs using radiomics features extracted from brain MRIs.
We will simulate a full pipeline, from image selection to model creation; specifically we will:
- Retrieve imaging and genomic data
- Match imaging and genomic data
- Obtain the segmentation masks of the lesions
- Extract radiomics features from the segmentation masks
- Develop a classifier
We will use only free and publicly available resources: we will retrieve [imaging](https://portal.imaging.datacommons.cancer.gov/) and [genomic](https://portal.gdc.cancer.gov/) information from the **TCGA-GBM** cohort, use [**HD-GLIO**](https://github.com/NeuroAI-HD/HD-GLIO) to segment the lesions, [**Pyradiomics**](https://pyradiomics.readthedocs.io/en/latest/) to extract the radiomics features, and [**scikit-learn**](https://scikit-learn.org/stable/) to build our classifier.
While this notebook **does not** contain enough data to build a meaningful classifier, it should serve as a starting point to develop a fully functional one.
**Note**: for simplicity, we will refer to IDH1 as "IDH".
# Download the class material
```
cd /content/
# Run this cell at the beginning of the class or in case you will need to start again.
!rm -rf RSNA_2021_radiogenomics
!rm -rf __MACOSX
!rm -rf sample_data
!rm -rf RSNA_2021_radiogenomics.zip
!rm -rf segmentation_output/
# This cell will download the content for the class.
!gdown --id 1A1N54cRMtDthMIepsQBI17Gxfi5_XOT6
!unzip -q "/content/RSNA_2021_radiogenomics.zip"
!rm -rf __MACOSX
```
# Install necessary resources:
- HD-GLIO (https://github.com/NeuroAI-HD/HD-GLIO)
- Pyradiomics (https://pyradiomics.readthedocs.io/en/latest/index.html)
- Python libraries
## Install HD-GLIO
```
!pip install hd_glio
```
## Install pyradiomics
```
!python -m pip install pyradiomics
!pyradiomics -h
```
NOTE:
After installing the packages, it is possible that you will need to restart the notebook.
If so, please make sure to select the correct runtime (GPU).
## Import libraries
```
import os
import shutil
import numpy as np
import pandas as pd
import nibabel as nib
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from pathlib import Path
import gdown
import statistics
from scipy import stats
from collections import Counter
import seaborn as sns
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import OneHotEncoder
from sklearn.model_selection import train_test_split, GridSearchCV, cross_val_score, KFold, StratifiedKFold
from sklearn.pipeline import Pipeline
from sklearn.compose import ColumnTransformer
from sklearn import preprocessing
from sklearn.decomposition import PCA
from sklearn import tree
from sklearn import svm
from sklearn.svm import SVC
from joblib import dump, load
from sklearn import metrics
from sklearn.metrics import plot_confusion_matrix
from sklearn.metrics import classification_report
from sklearn.metrics import roc_curve, precision_recall_curve, f1_score, auc, make_scorer, recall_score, accuracy_score, precision_score, confusion_matrix
from sklearn.metrics import average_precision_score
from sklearn.metrics import plot_precision_recall_curve
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_validate
pd.set_option('display.max_colwidth', None)
%matplotlib inline
```
# Work with NIH cloud resources: Imaging Data Commons and Genomic Data Commons
The data that we will use in this notebook are part of the TCGA and TCIA initiatives:
- The genomic data will be obtained from the [Genomic Data Commons](https://portal.gdc.cancer.gov/) (GDC) portal.
- The imaging data will be obtained from the [Imaging Data Commons](https://portal.imaging.datacommons.cancer.gov/) (IDC) portal.
A separate session of this Deep Learning Lab will show you how to work with IDC tools (you can find a list of the classes [here](https://github.com/RSNA/AI-Deep-Learning-Lab-2021#lessons)); once you'll be familiar with the IDC portal, you can run the next cells to retrieve the imaging data used in this class directly from the IDC portal.
## Query the TCGA-GBM cohort from the IDC platform using BigQuery
This cohort contains both pre- and post-surgery studies.
Since we are interested in the pre-surgery studies, the query will include only the earliest study for each subject.
We will save this information in a pandas dataframe called "tcga_gbm_mri".
**Note:** You'll be able to run the following cells only with a valid ProjectID to use to query the IDC portal. If you do not have one, the class material includes a csv file with the output of the query performed with BigQuery.
```
# from google.colab import auth
# auth.authenticate_user()
# Specify the project ID that points to your GCP project for billing purposes
# myProjectID="insert your project ID"
# myProjectID = "idc-external-005"
# %%bigquery tcga_gbm_mri --project=$myProjectID
# SELECT
# P.PatientID,
# P.StudyDate,
# P.StudyInstanceUID,
# P.gcs_url
# FROM
# `canceridc-data.idc_views.dicom_all` AS P
# WHERE
# P.collection_id = "tcga_gbm"
# AND
# P.Modality = "MR"
# AND
# P.StudyDate =
# (
# SELECT
# MIN(C.StudyDate)
# FROM
# `canceridc-data.idc_views.dicom_all` AS C
# WHERE
# P.PatientID = C.PatientID
# AND
# C.collection_id = "tcga_gbm"
# AND
# C.Modality = "MR"
# )
# ORDER BY
# P.PatientID
```
The next cell will load the same dataframe that you would obtain running the BigQuery command.
```
tcga_gbm_mri = pd.read_csv('/content/RSNA_2021_radiogenomics/dataframes/tcga-gbm-mri.csv')
```
Let's visualize the dataframe
```
tcga_gbm_mri
```
Now let's check how many patients and how many exams we have
```
num_subjects = len(tcga_gbm_mri["PatientID"].unique())
num_studies = len(tcga_gbm_mri["StudyInstanceUID"].unique())
print(f"Total number of subjects: {num_subjects}")
print(f"Total number of studies: {num_studies}")
```
We have more studies than subjects, meaning that some subjects have multiple studies performed the same day.
There is no programmatic way of knowing which study to include; the only option is to manually review these cases and select the ones to include.
**Note: Know your data**
Datasets are rarely ready to use to develop ML models; it is fundamental to perform quality control of the data you are going to use, or you will end up with a useless model.
For example, the cohort we are using contains more than one study per subject, and within each study there could be repeated scans (usually because of motion artefacts; for example, see the MRIs of subject TCGA-08-0522). Moreover, some of the cases do not have a pre-surgery scan, so they will need to be excluded from the analysis.
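One quick way to surface subjects that need manual review is to count distinct `StudyInstanceUID`s per subject. A sketch on a toy dataframe follows (the column names match the query result; with the real data you would apply the same `groupby` to `tcga_gbm_mri`, and the IDs here are illustrative):

```python
import pandas as pd

# Toy stand-in for the query result; the real dataframe has the same columns
tcga_gbm_mri_demo = pd.DataFrame({
    "PatientID": ["TCGA-08-0522", "TCGA-08-0522", "TCGA-06-2570"],
    "StudyDate": ["2001-01-01", "2001-01-01", "2002-05-05"],
    "StudyInstanceUID": ["uid-a", "uid-b", "uid-c"],
})

# Number of distinct studies per subject
studies_per_subject = (tcga_gbm_mri_demo
                       .groupby("PatientID")["StudyInstanceUID"]
                       .nunique())

# Subjects with more than one study on record -> flag for manual review
to_review = studies_per_subject[studies_per_subject > 1].index.tolist()
print(to_review)  # -> ['TCGA-08-0522']
```

This only flags the cases; deciding which study to keep still requires looking at the images themselves.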
## Match imaging and genomic information
We obtained the IDH1 mutational status information from the Genomic Data Commons portal (GDC) (https://portal.gdc.cancer.gov/).
You can find the spreadsheet with the genomic information "IDH_mutant_TCGA-GBM.csv" in /content/RSNA_2021_radiogenomics/dataframes
```
# Load the files with the imaging and genomic information and store them as a pandas dataframe
tcga_gbm_mri = pd.read_csv('/content/RSNA_2021_radiogenomics/dataframes/tcga-gbm-mri.csv') #imaging dataframe
idh_df = pd.read_csv('/content/RSNA_2021_radiogenomics/dataframes/IDH_mutant_TCGA-GBM.csv') #genomic dataframe
idh_df # visualize the dataframe
# First let's make a copy of the original dataframe
tcga_gbm_mri_idh = tcga_gbm_mri.copy()
```
#### Some of the subjects for which we have genomic data available might not have an available MRI.
Let's check how many subjects of the genomic dataframe are missing from the imaging dataframe.
```
# Finds the differences between the IDs columns of the two dataframes
missing_subjects = set(idh_df['Case ID']).difference(set(tcga_gbm_mri_idh['PatientID']))
print(f"A total of {len(missing_subjects)} subjects are missing from the imaging dataframe.")
print("The missing subjects are: "+ str(missing_subjects))
```
#### Now let's add the IDH status information for the subjects that are present in both dataframes.
```
# Add '1' to all images corresponding to an IDH mutant subject and '0' to IDH wt
tcga_gbm_mri_idh['IDH Status'] = tcga_gbm_mri_idh['PatientID'].isin(idh_df['Case ID']).astype(int)
# List all the images for the IDH mutant subjects
tcga_gbm_mri_idh.loc[tcga_gbm_mri_idh['IDH Status'] == 1]
# Let's visualize only the PatientID and the StudyDate of the IDH mutant class
tcga_gbm_mri_idh.loc[tcga_gbm_mri_idh['IDH Status'] == 1].drop(['gcs_url','StudyInstanceUID'],axis=1).drop_duplicates(['PatientID','StudyDate'])
```
# HD-GLIO: obtain segmentation masks
At this point, we have both imaging and genomic information. Our next step will be obtaining the segmentation of the lesions using HD-GLIO.
HD-GLIO is the result of a joint project between the Department of Neuroradiology at the Heidelberg University Hospital, Germany and the Division of Medical Image Computing at the German Cancer Research Center (DKFZ) Heidelberg, Germany. It requires four MRI sequences (pre-contrast and post-contrast T1, T2 and FLAIR) to obtain the segmentation of the lesions.
For this notebook, we will use a version of HD-GLIO that works only with NIfTI files and requires the data to be already preprocessed and registered. Another version, [HD-GLIO-AUTO](https://github.com/NeuroAI-HD/HD-GLIO-AUTO), can work directly with raw files, either DICOM or NIfTI, and does not need the sequences to be preprocessed.
For more information see https://github.com/NeuroAI-HD/HD-GLIO
There are two scripts available to run HD-GLIO:
- One better suited to perform predictions on multiple subjects.
- One to make a prediction on a single subject.
For more information see [here](https://github.com/NeuroAI-HD/HD-GLIO#run-hd-glio).
**Note before starting the segmentation:**
If the process will take more than few minutes, make sure you are using a GPU by checking the current runtime.
### Run HD-GLIO pointing to an input folder (best for multiple predictions):
Let's navigate to the folder containing the example.
```
cd /content/RSNA_2021_radiogenomics/imaging/hd-glio_test/TCGA-06-2570
```
The input folder contains the T1, T1C, T2 and FLAIR images. To ensure that HD-GLIO correctly assigns filenames to modalities, you must apply the following naming convention to your data:
- INPUT_T1: PATIENT_IDENTIFIER_0000.nii.gz
- INPUT_CT1: PATIENT_IDENTIFIER_0001.nii.gz
- INPUT_T2: PATIENT_IDENTIFIER_0002.nii.gz
- INPUT_FLAIR: PATIENT_IDENTIFIER_0003.nii.gz
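A small helper can apply that naming convention programmatically. The sketch below demonstrates it on dummy files in a temporary directory; the raw file names (`t1.nii.gz`, `ct1.nii.gz`, ...) are assumptions matching the single-file example later in this notebook:

```python
import shutil
from pathlib import Path
from tempfile import mkdtemp

# Assumed mapping from raw sequence file names to HD-GLIO's modality suffixes
SUFFIX = {"t1.nii.gz": "0000", "ct1.nii.gz": "0001",
          "t2.nii.gz": "0002", "flair.nii.gz": "0003"}

def rename_for_hd_glio(subject_dir: Path, patient_id: str):
    """Copy each sequence to PATIENT_IDENTIFIER_XXXX.nii.gz as HD-GLIO expects."""
    for raw_name, suffix in SUFFIX.items():
        src = subject_dir / raw_name
        if src.exists():
            shutil.copy(src, subject_dir / f"{patient_id}_{suffix}.nii.gz")

# Demo on empty dummy files in a temporary folder
demo = Path(mkdtemp())
for name in SUFFIX:
    (demo / name).write_bytes(b"")
rename_for_hd_glio(demo, "TCGA-06-2570")
print(sorted(p.name for p in demo.glob("TCGA-06-2570_*.nii.gz")))
```

Copying (rather than renaming in place) keeps the original files available for other tools, such as the radiomics extraction below.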
Run HD-GLIO to obtain the segmentation.
```
!hd_glio_predict_folder -i input_folder -o segmentation_output
```
### Run HD-GLIO pointing to single files (best for single predictions):
```
cd /content/RSNA_2021_radiogenomics/imaging/hd-glio_test/TCGA-06-2570/single_files
!hd_glio_predict -t1 t1.nii.gz -t1c ct1.nii.gz -t2 t2.nii.gz -flair flair.nii.gz -o segmentation.nii.gz
```
# Pyradiomics: radiomics features extraction
Now that we have obtained the segmentation masks, we will use [Pyradiomics](https://pyradiomics.readthedocs.io/en/latest/) to extract radiomics features from them.
In this case, the segmentation masks are marked with different labels corresponding to different regions of the tumors:
- Label 1: T2/FLAIR hyperintensities.
- Label 2: enhancing area.
**Note:** different segmentation algorithms might generate segmentation masks in which tumor regions are marked with different labels.
Pyradiomics can extract radiomics features from each of these regions independently by specifying the "Label" parameter during the extraction. This can be done either at the parameters-file level (YML) or in the CSV file used as input for the extraction.
Alternatively, we could choose to extract radiomics features from a single mask containing the whole lesion. To achieve this, we would first need to binarize the segmentation masks so that all areas containing the lesion will be labeled with 1 and everything else with 0 (you can try this at home!).
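The binarization step can be sketched with NumPy alone. A toy array stands in for a real mask here; with actual NIfTI files you would load and save through nibabel, as hinted in the comments (the file names there are placeholders):

```python
import numpy as np

# Toy segmentation mask: 0 = background, 1 = T2/FLAIR hyperintensity, 2 = enhancing
mask = np.array([[0, 1, 2],
                 [2, 0, 1],
                 [0, 0, 2]])

# Collapse all tumor labels into a single whole-lesion mask (1 = lesion, 0 = background)
binary_mask = (mask > 0).astype(np.uint8)
print(binary_mask)

# With real files (assuming nibabel is installed), the same idea would be:
# img = nib.load("segmentation.nii.gz")
# binary = (img.get_fdata() > 0).astype(np.uint8)
# nib.save(nib.Nifti1Image(binary, img.affine, img.header), "segmentation_binary.nii.gz")
```

After binarization, a single Pyradiomics extraction with `Label: 1` would cover the whole lesion.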
```
cd /content/RSNA_2021_radiogenomics
# Loads the csv file containing the genomic data
idh_df = pd.read_csv('/content/RSNA_2021_radiogenomics/dataframes/IDH_mutant_TCGA-GBM.csv')
# Prints the list of the subjects with an IDH for the mutant cases
idh_mutant_list = idh_df['Case ID'].tolist()
for i in idh_mutant_list:
print(i)
```
To simplify further steps, we will add a label to the folders containing the MRIs:
- 0: for IDH WT subjects
- 1: for IDH mut subjects
```
# Change the folder name of the subjects, adding '1' to those who have the IDH mutation and '0' to those who don't have it.
subjects_dir = Path('/content/RSNA_2021_radiogenomics/imaging/processed_nifti')
for subject in subjects_dir.iterdir():
if subject.is_dir():
if str(subject.parts[-1]) in idh_mutant_list: # checks if the PatientID is among the mutant cases
print(subject)
source = str(subject)
dest = source+'_1'
print(dest)
os.rename(source,dest)
else:
source = str(subject)
dest = source+'_0'
print(dest)
os.rename(source,dest)
```
There are [different options](https://pyradiomics.readthedocs.io/en/latest/usage.html#usage) to choose from to run the radiomics features extraction in Pyradiomics.
In this case, we will opt for the command line usage and the [batch mode](https://pyradiomics.readthedocs.io/en/latest/usage.html#batch-mode), so all we need is a CSV file containing at least three columns:
- Patient ID (PID)
- File paths to the images (MRIs to extract the features from)
- File paths to the masks (segmentations to use for the extraction)
We will then add another column in which we'll store the IDH status.
We can choose to extract the radiomics features from any of the 4 MRI sequences; in this example, the features will be extracted from the T2-w MRIs; you can try to modify the following cell to include instead the FLAIR or post-contrast T1-w MRIs.
```
# Obtain the csv file that we'll use as input for Pyradiomics.
T2_list = []
mask_list = []
p = Path('/content/RSNA_2021_radiogenomics/imaging/processed_nifti')
T2_list = list(p.rglob('t2.nii.gz'))
mask_list = list(p.rglob('segmentation.nii.gz'))
PID = []
IDH_label = []
T2_path = []
mask_path = []
for t2,mask in zip(sorted(T2_list),sorted(mask_list)):
PID.append(t2.parts[-2])
IDH_label.append(t2.parts[-2][-1])
T2_path.append(str(t2))
mask_path.append(str(mask))
dict_T2 = {'PID':PID,'Image': T2_path, 'Mask': mask_path, 'IDH Status': IDH_label}
df_T2 = pd.DataFrame(dict_T2)
df_T2.set_index('PID', inplace=True)
df_T2.to_csv(r"/content/RSNA_2021_radiogenomics/pyradiomics_files/inputs/T2_input.csv")
```
### Run the extraction
To run the extraction, all we'll need to specify is:
- the location of the CSV input file;
- the location where to store the output CSV file;
- the location of the YML parameters file containing the settings for the extraction
While the extraction runs, we'll have a look at the **YML file** containing the extraction parameters: /content/RSNA_2021_radiogenomics/pyradiomics_files/YML_files/pyradiomics_parameters.yml
You can find more examples of YML files [here](https://github.com/AIM-Harvard/pyradiomics/tree/master/examples/exampleSettings).
```
# The following line will run the command to extract radiomics features from T2-w MRIs.
!pyradiomics /content/RSNA_2021_radiogenomics/pyradiomics_files/inputs/T2_input.csv -o /content/RSNA_2021_radiogenomics/pyradiomics_files/outputs/T2_radiomics_output.csv -f csv -p /content/RSNA_2021_radiogenomics/pyradiomics_files/YML_files/pyradiomics_parameters.yml -v 5
```
### Congratulations! You have now extracted radiomics features from T2-w MRIs.
You can find the CSV file called "T2_radiomics_output.csv" here: /content/RSNA_2021_radiogenomics/pyradiomics_files/outputs/T2_radiomics_output.csv
# Scikit-learn: develop a ML model
Now we can finally use the radiomics features extracted from the segmentation masks to build our classifier using [Scikit-learn](https://scikit-learn.org/stable/).
## Load the data
The starting point for our analysis will be the CSV file containing the radiomics features extracted using Pyradiomics.
```
# Loads the dataset.
df_train = pd.read_csv('/content/RSNA_2021_radiogenomics/pyradiomics_files/outputs/T2_radiomics_output.csv', index_col='PID')
# Prints some basic information about the file.
df_train.info()
```
## Check for missing values and drop NaN
```
# Obtain the number of missing values per column in the training set
display(df_train.isnull().sum())
df_train = df_train.dropna()
display(df_train.isnull().sum())
df_train.describe()
```
## Encode 'IDH Status' using the LabelEncoder function
**Note**: in this example, the classes are already labeled as 0 for IDH wt and 1 for IDH mutant cases, so the following code is redundant.
Nevertheless, if the classes were labeled, for example, as "wt" for the IDH wt and "mut" for the IDH mutant cases, the following code would take care of generating proper labels for the training process.
More info [here](https://scikit-learn.org/stable/modules/preprocessing_targets.html#preprocessing-targets).
```
# Create a label (category) encoder object
le = preprocessing.LabelEncoder()
# Fit the encoder to the pandas column
le.fit(df_train['IDH Status'])
# View the labels
list(le.classes_)
# Apply the fitted encoder to the pandas column
le.transform(df_train['IDH Status'])
# Substitutes the original labels with the encoded ones
df_train['IDH Status'] = le.transform(df_train['IDH Status'])
```
## Separate features and IDH status columns
```
# Separates features from labels: IDH mutation = 0 (wild type) or 1 (mutant)
X_training = df_train.drop(['IDH Status'], axis=1) # features
y_training = df_train['IDH Status'] # labels
# Drops Image and Mask columns since they contain info that we don't need anymore
X_training = X_training.drop(['Image','Mask'], axis=1)
```
## Separate radiomics features based on feature types (shape, first order, texture)
Note: the "diagnostics" columns contain information on how the features were extracted, so they won't be used during training.
```
features_list = X_training.columns.to_list()
type(features_list), len(features_list)
diagnostics_parameters = []
shape_features = []
first_order_features = []
texture_features = []
for feature in X_training.columns:
if 'diagnostics' in feature:
diagnostics_parameters.append(feature)
elif 'shape' in feature:
shape_features.append(feature)
elif 'firstorder' in feature:
first_order_features.append(feature)
else:
texture_features.append(feature)
```
Let's print each group of features:
```
shape_features
first_order_features
texture_features
# Total number of features
len(shape_features+texture_features+first_order_features)
# Creates a separate dataframe for each set of features
diagnostics_df = X_training[diagnostics_parameters]
shape_df = X_training[shape_features]
firstorder_df = X_training[first_order_features]
texture_df = X_training[texture_features]
# Shows the shape features; try to modify this cell to show the other dataframes
shape_df
```
### Define the final sets of features
```
# Since the "diagnostics" columns will not be used during training, we can exclude them from the training dataframe
X_training = X_training.drop(diagnostics_parameters, axis=1)
# Prints the final sets of features
X_training.columns
final_features = X_training.columns.to_list()
```
## Train/test split
```
# The Train/Test split must be performed before the pipeline
# Use stratified splitting to keep the class balance in each split
X_train,X_test,y_train,y_test = train_test_split(X_training, y_training,
test_size = 0.2,
stratify = y_training,
random_state = 33)
# Visualize the labels associated with each PID
y_train
```
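The effect of `stratify` can be sketched on toy data (made up for illustration): each split keeps the original class proportions.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Toy imbalanced labels: 8 zeros, 2 ones (a 4:1 ratio)
X_toy = np.arange(10).reshape(-1, 1)
y_toy = np.array([0] * 8 + [1] * 2)

X_tr, X_te, y_tr, y_te = train_test_split(
    X_toy, y_toy, test_size=0.5, stratify=y_toy, random_state=0)

# Both halves preserve the 4:1 class ratio
print(np.bincount(y_tr))  # [4 1]
print(np.bincount(y_te))  # [4 1]
```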
## Data processing
Transform the features (e.g., data standardization).
### Features standardization
```
# Define the transformation
scaler = StandardScaler()
# Calculates and applies the transformation to the training set
X_train_std = scaler.fit_transform(X_train)
X_train_std
# Applies the transformation learned on the training set to the test set
X_test_std = scaler.transform(X_test)
```
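A quick self-contained check (on synthetic data, not the radiomics features) of what `StandardScaler` does: statistics are learned from the training set only, so the transformed training set has zero mean and unit variance per column, while the test set is only approximately standardized.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
train = rng.normal(loc=5.0, scale=2.0, size=(100, 3))
test = rng.normal(loc=5.0, scale=2.0, size=(20, 3))

scaler = StandardScaler()
train_std = scaler.fit_transform(train)   # learns mean/std from train only
test_std = scaler.transform(test)         # reuses the train statistics

print(np.allclose(train_std.mean(axis=0), 0.0))  # True
print(np.allclose(train_std.std(axis=0), 1.0))   # True
```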
## Develop the classifier
Scikit-learn offers [several models](https://scikit-learn.org/stable/supervised_learning.html#supervised-learning) to choose from to develop a classifier.
In this example, we will use a support vector machine; you could then try to develop a classifier using a different type of model.
```
# Define the classifier
svm_clf = SVC(probability=True)
```
### Obtain cross-validation scores
```
# List of metrics to keep track of during training
scoring = ['precision_weighted', 'precision_macro',
'recall_weighted','recall_macro',
'f1_weighted','f1_macro',
'average_precision',
'accuracy','balanced_accuracy',
'roc_auc']
# Initiates cross-validation
sub_scores = cross_validate(svm_clf,
X_train_std, y_train,
cv=3,
scoring = scoring)
# Prints the cross-validation scores
sub_scores
# Prints the median and the median absolute deviation measured for each metric
# (median_absolute_deviation was removed from SciPy; median_abs_deviation with scale='normal' is the equivalent)
print("Balanced Accuracy: %0.2f (+/- %0.2f)" % (statistics.median(sub_scores['test_balanced_accuracy']), stats.median_abs_deviation(sub_scores['test_balanced_accuracy'], scale='normal')))
print(sub_scores['test_balanced_accuracy'])
print("\nF1-score: %0.2f (+/- %0.2f)" % (statistics.median(sub_scores['test_f1_macro']), stats.median_abs_deviation(sub_scores['test_f1_macro'], scale='normal')))
print(sub_scores['test_f1_macro'])
print("\nAP: %0.2f (+/- %0.2f)" % (statistics.median(sub_scores['test_average_precision']), stats.median_abs_deviation(sub_scores['test_average_precision'], scale='normal')))
print(sub_scores['test_average_precision'])
print("\nAUROC: %0.2f (+/- %0.2f)" % (statistics.median(sub_scores['test_roc_auc']), stats.median_abs_deviation(sub_scores['test_roc_auc'], scale='normal')))
print(sub_scores['test_roc_auc'])
# After cross-validation, we can fit the classifier on the whole training set
svm_clf.fit(X_train_std,y_train)
```
We can now run the trained classifier on the test set and print the performance report.
```
# We can also print a confusion matrix
target_names = ['IDH wt', 'IDH mut']
sub_predictions = svm_clf.predict(X_test_std)
conf_matrix_sub = confusion_matrix(y_test,sub_predictions)
sns.heatmap(conf_matrix_sub, annot=True,fmt='d',
xticklabels=target_names,
yticklabels=target_names,
cmap=plt.cm.Blues)
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()
print(classification_report(y_test, sub_predictions,target_names=target_names))
# Obtains the inputs to use to build a precision-recall and an ROC curve
y_score_sub = svm_clf.decision_function(X_test_std)
# Plots a precision-recall curve
average_precision_eval = average_precision_score(y_test, y_score_sub)
print('Average precision-recall score: {0:0.2f}'.format(average_precision_eval))
from sklearn.metrics import PrecisionRecallDisplay  # plot_precision_recall_curve was removed in scikit-learn 1.2
disp = PrecisionRecallDisplay.from_estimator(svm_clf, X_test_std, y_test)
disp.ax_.set_title('2-class Precision-Recall curve: '
'AP={0:0.2f}'.format(average_precision_eval))
# Plots an ROC curve
sub_decision_scores = svm_clf.decision_function(X_test_std)
fpr, tpr, thres = roc_curve(y_test, sub_decision_scores)
print('AUC: {:.3f}'.format(roc_auc_score(y_test, sub_decision_scores)))
# roc curve
plt.plot(fpr, tpr, "b", label='SVM')
plt.plot([0,1],[0,1], "r--", label='Random Guess')
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend(loc="best")
plt.title("ROC curve - "+'AUC: {:.3f}'.format(roc_auc_score(y_test, sub_decision_scores)))
plt.show()
```
# DO TRY THIS AT HOME
- Obtain the segmentation masks of the whole TCGA-GBM cohort.
- Binarize the segmentation masks to extract radiomics features from the whole lesion.
- Extract radiomics features from MRI sequences other than T2 (e.g., FLAIR, post-contrast T1).
- Modify the YML file to extract the radiomics features from another sub-region of the tumors.
- Perform a gridsearch for the model.
- Train a model other than an SVM.
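For the grid-search item, a minimal sketch on synthetic stand-in data could look like the following (the grid values below are hypothetical, not tuned for this dataset):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Synthetic stand-in for the standardized radiomics features
X_demo, y_demo = make_classification(n_samples=60, n_features=10, random_state=0)

# Hypothetical grid; adjust the ranges for your data
param_grid = {'C': [0.1, 1, 10], 'kernel': ['linear', 'rbf'], 'gamma': ['scale', 'auto']}

grid = GridSearchCV(SVC(), param_grid, cv=3, scoring='balanced_accuracy')
grid.fit(X_demo, y_demo)

print(grid.best_params_)
print(round(grid.best_score_, 3))
```

The best estimator is then available as `grid.best_estimator_` and can be evaluated on the held-out test set exactly like `svm_clf` above.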
Solving a simple PDE
====================
Here we assume that you know all the basic concepts covered in the previous tutorials, so we will only give short explanations
for each step.
Our aim is to solve the following PDE:
\begin{align*}
-\Delta u &= 4.25\pi^2 u \text{ in } \Omega = [0, 1] \times [0, 1] \\
u &= \sin(\tfrac{\pi}{2} x_1)\cos(2\pi x_2) \text{ on } \partial \Omega
\end{align*}
For comparison, the analytic solution is $u(x_1, x_2) = \sin(\tfrac{\pi}{2} x_1)\cos(2\pi x_2)$.
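Substituting the analytic solution into the left-hand side confirms this: since $\partial_{x_1}^2 u = -(\tfrac{\pi}{2})^2 u$ and $\partial_{x_2}^2 u = -(2\pi)^2 u$,
\begin{align*}
-\Delta u = \left(\tfrac{\pi}{2}\right)^2 u + (2\pi)^2 u = \left(\tfrac{1}{4} + 4\right)\pi^2 u = 4.25\pi^2 u .
\end{align*}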
We start by defining the spaces for the input and output values.
```
import torchphysics as tp
X = tp.spaces.R2('x') # input is 2D
U = tp.spaces.R1('u') # output is 1D
```
Next up is the domain:
```
square = tp.domains.Parallelogram(X, [0, 0], [1, 0], [0, 1])
```
Now we define the model that we want to train. Since we have a simple domain, we do not use any input
normalization.
```
model = tp.models.FCN(input_space=X, output_space=U, hidden=(50,50,50,50,50))
```
The next step is the definition of the conditions. For this PDE we have two different ones: the differential
equation itself and the boundary condition. We start with the boundary condition:
```
import torch
import numpy as np
# First, the function that defines the residual:
def bound_residual(u, x):
bound_values = torch.sin(np.pi/2*x[:, :1]) * torch.cos(2*np.pi*x[:, 1:])
return u - bound_values
# the point sampler for the training points:
# here we use grid points; any other sampler could also be used
bound_sampler = tp.samplers.GridSampler(square.boundary, n_points=5000)
bound_sampler = bound_sampler.make_static() # the grid is always the same, therefore static: computed only once
# wrap everything together in the condition
bound_cond = tp.conditions.PINNCondition(module=model, sampler=bound_sampler,
residual_fn=bound_residual, weight=10)
```
Next comes the differential condition; here we use the pre-implemented operators:
```
# Again a function that defines the residual:
def pde_residual(u, x):
return tp.utils.laplacian(u, x) + 4.25*np.pi**2*u
# the point sampler for the training points:
pde_sampler = tp.samplers.GridSampler(square, n_points=15000) # again point grid
pde_sampler = pde_sampler.make_static()
# wrap everything together in the condition
pde_cond = tp.conditions.PINNCondition(module=model, sampler=pde_sampler,
residual_fn=pde_residual)
```
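The `tp.utils.laplacian` operator above computes second derivatives via automatic differentiation; a minimal pure-PyTorch sketch of the same idea (for a scalar function of two variables, not using TorchPhysics) looks like:

```python
import torch

def laplacian(u, x):
    """Sum of second derivatives of scalar u w.r.t. each component of x."""
    grad = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    lap = torch.zeros(x.shape[0])
    for i in range(x.shape[1]):
        # second derivative w.r.t. the i-th input component
        g2 = torch.autograd.grad(grad[:, i].sum(), x, create_graph=True)[0][:, i]
        lap = lap + g2
    return lap

x = torch.rand(5, 2, requires_grad=True)
u = (x ** 2).sum(dim=1)    # u = x1^2 + x2^2, so its Laplacian is 4 everywhere
print(laplacian(u, x))     # tensor of 4.0s
```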
The transformation of our PDE into a TorchPhysics problem is finished. So we can start the
training.
The last step before the training is the creation of a *Solver*. This is an object that inherits from
the Pytorch Lightning *LightningModule*. It handles the training and validation loops and takes care of the
data loading for GPUs or CPUs. It gets the following inputs:
- train_conditions: A list of all train conditions
- val_conditions: A list of all validation conditions (optional)
- optimizer_setting: With this, one can specify which optimizers, learning rates, and learning-rate schedulers
  should be used. For this, there exists the class *OptimizerSetting* that handles all these parameters.
```
# here we start with Adam:
optim = tp.OptimizerSetting(optimizer_class=torch.optim.Adam, lr=0.001)
solver = tp.solver.Solver(train_conditions=[bound_cond, pde_cond], optimizer_setting=optim)
```
Now we define the trainer; for this we use PyTorch Lightning. Almost all functionalities of
PyTorch Lightning can be applied in the training process.
```
import pytorch_lightning as pl
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0" # select GPUs to use
trainer = pl.Trainer(gpus=1, # or None if CPU is used
max_steps=4000, # number of training steps
logger=False,
benchmark=True,
checkpoint_callback=False)
trainer.fit(solver)
```
Afterwards we switch to LBFGS:
```
optim = tp.OptimizerSetting(optimizer_class=torch.optim.LBFGS, lr=0.05,
optimizer_args={'max_iter': 2, 'history_size': 100})
solver = tp.solver.Solver(train_conditions=[bound_cond, pde_cond], optimizer_setting=optim)
trainer = pl.Trainer(gpus=1,
max_steps=3000, # number of training steps
logger=False,
benchmark=True,
checkpoint_callback=False)
trainer.fit(solver)
```
If we want to have a look at our solution, we can use the plot-methods of TorchPhysics:
```
plot_sampler = tp.samplers.PlotSampler(plot_domain=square, n_points=640, device='cuda')
fig = tp.utils.plot(model, lambda u : u, plot_sampler, plot_type='contour_surface')
```
We can plot the error, since we know the exact solution:
```
def plot_fn(u, x):
exact = torch.sin(np.pi/2*x[:, :1])*torch.cos(2*np.pi*x[:, 1:])
return torch.abs(u - exact)
fig = tp.utils.plot(model, plot_fn, plot_sampler, plot_type='contour_surface')
```
Now you know how to solve a PDE in TorchPhysics. Additional examples can
be found in the [example-folder](https://github.com/boschresearch/torchphysics/tree/main/examples).
```
%load_ext autoreload
%autoreload 2
import sys
sys.path.append('../CausalModel')
import networkx as nx
import copy
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import pyplot as plt
import seaborn as sns
from pandas import DataFrame
import numpy as np
import random
from sklearn.manifold import MDS
from pomegranate.distributions import IndependentComponentsDistribution
from pomegranate.distributions import UniformDistribution, NormalDistribution
%matplotlib inline
import sample_models
from CausalModel.CMD import IDist, ODist, CDist
from cdt.metrics import SID, SHD
flatui = ["#9b59b6", "#3498db", "#95a5a6", "#e74c3c", "#34495e", "#2ecc71"]
current_palette = sns.color_palette(flatui)
sns.palplot(current_palette)
```
# Pairwise Comparison of Metrics
### Helpers
```
default_structure = {
'nb_nodes': 10,
'density': 0.3, # 0.4 / (n ** 1.25 / 10)
'cycles': False,
'fraction_observed': 1
}
def generate_causal_models(n):
pairs = []
for _ in range(n):
# model_a, model_b = sample_models.generate_discrete_models(nb_models=2)
# pairs.append((model_a, model_b))
model_a, model_b = sample_models.generate_linear_gaussian(default_structure, nb_models=2)
pairs.append((model_a, model_b))
model_a, model_b = sample_models.generate_linear_non_gaussian(default_structure, nb_models=2)
pairs.append((model_a, model_b))
model_a, model_b = sample_models.generate_GP(default_structure, nb_models=2)
pairs.append((model_a, model_b))
return pairs
model_pairs = generate_causal_models(30)
def compare_ODist_SID(pairs, nb_samples):
X = []
for model_a, model_b in pairs:
x_0 = ODist(model_a, model_b, nb_samples, discrete=False)
x_1 = SID(model_a.causal_graph, model_b.causal_graph)
X.append([x_0, x_1])
return DataFrame.from_records(X, columns=['OD', 'SID'])
od_sid = compare_ODist_SID(model_pairs, 1000)
def compare_ODist_SHD(pairs, nb_samples):
X = []
for model_a, model_b in pairs:
x_0 = ODist(model_a, model_b, nb_samples, discrete=False)
x_1 = SHD(model_a.causal_graph, model_b.causal_graph)
X.append([x_0, x_1])
return DataFrame.from_records(X, columns=['OD', 'SHD'])
od_shd = compare_ODist_SHD(model_pairs, 1000)
def compare_IDist_SID(pairs, nb_samples):
X = []
for model_a, model_b in pairs:
x_0 = IDist(model_a, model_b, nb_samples, discrete=False)
x_1 = SID(model_a.causal_graph, model_b.causal_graph)
X.append([x_0, x_1])
return DataFrame.from_records(X, columns=['ID', 'SID'])
id_sid = compare_IDist_SID(model_pairs, 500)
def compare_IDist_SHD(pairs, nb_samples):
X = []
for model_a, model_b in pairs:
x_0 = IDist(model_a, model_b, nb_samples, discrete=False)
x_1 = SHD(model_a.causal_graph, model_b.causal_graph)
X.append([x_0, x_1])
return DataFrame.from_records(X, columns=['ID', 'SHD'])
id_shd = compare_IDist_SHD(model_pairs, 500)
def relplot(data, x, y, fname=None):
plt.close()
sns.set_context("notebook")
ax = sns.relplot(x=x, y=y, data=data, s=100, alpha=0.5)
plt.grid(False)
plt.xlabel(x, fontsize=17)
plt.ylabel(y, fontsize=17)
if fname:
plt.savefig(fname, format="pdf")
plt.show()
```
### Results
```
relplot(od_sid, 'SID', 'OD', 'results/sid_od.pdf')
relplot(od_shd, 'SHD', 'OD', 'results/shd_od.pdf')
relplot(id_sid, 'SID', 'ID', 'results/sid_id.pdf')
relplot(id_shd, 'SHD', 'ID', 'results/shd_id.pdf')
```
# Geometry
### Helpers
```
from CausalModel import Mechanisms, SCM
from CausalModel.Mechanisms import LinearAdditive
from CausalModel.BayesianNetwork import BayesianNetwork
def default_noise(n):
return IndependentComponentsDistribution([NormalDistribution(0, 0.1) for _ in range(n)])
nodes = [0, 1, 2]
betas = [0.1, 0.5, 1, 2, 5]
noise = default_noise(2)
zero_one = nx.DiGraph([(nodes[0], nodes[1])])
one_zero = nx.DiGraph([(nodes[1], nodes[0])])
for i, node in zero_one.nodes(data=True):
node['observed'] = True
for i, node in one_zero.nodes(data=True):
node['observed'] = True
all_scms = {}
for beta in betas:
## 0 -> 1+
causal_struct = copy.deepcopy(zero_one)
mechanism_end = Mechanisms.LinearAdditive(1)
mechanism_in = Mechanisms.LinearAdditive(0)
mechanism_end.coeffs = [beta]
mechanisms = [mechanism_in, mechanism_end]
name = '0->1+::' + str(beta)
all_scms[name] = SCM.SCM(causal_struct, mechanisms, default_noise(2))
## 0 -> 1-
causal_struct = copy.deepcopy(zero_one)
mechanism_end = Mechanisms.LinearAdditive(1)
mechanism_in = Mechanisms.LinearAdditive(0)
mechanism_end.coeffs = [-beta]
mechanisms = [mechanism_in, mechanism_end]
name = '0->1-::' + str(beta)
all_scms[name] = SCM.SCM(causal_struct, mechanisms, default_noise(2))
## 1 -> 0+
causal_struct = copy.deepcopy(one_zero)
mechanism_end = Mechanisms.LinearAdditive(1)
mechanism_in = Mechanisms.LinearAdditive(0)
mechanism_end.coeffs = [beta]
mechanisms = [mechanism_end, mechanism_in]
name = '1->0+::' + str(beta)
all_scms[name] = SCM.SCM(causal_struct, mechanisms, default_noise(2))
## 1 -> 0-
causal_struct = copy.deepcopy(one_zero)
mechanism_end = Mechanisms.LinearAdditive(1)
mechanism_in = Mechanisms.LinearAdditive(0)
mechanism_end.coeffs = [-beta]
mechanisms = [mechanism_end, mechanism_in]
name = '1->0-::' + str(beta)
all_scms[name] = SCM.SCM(causal_struct, mechanisms, default_noise(2))
def pairwise_distances_IDist(all_scms):
pairwise_distance_matrix = np.zeros((len(all_scms), len(all_scms)))
it = dict(enumerate(all_scms))
for i in range(len(it)):
for j in range(i+1,len(it)):
el_i = all_scms[it[i]]
el_j = all_scms[it[j]]
dist = IDist(el_i, el_j, 100, l_samples=15, discrete=False, add_OD=True)
pairwise_distance_matrix[i][j] = dist
pairwise_distance_matrix[j][i] = dist
return pairwise_distance_matrix, it.values()
def pairwise_distances_ODist(all_scms):
pairwise_distance_matrix = np.zeros((len(all_scms), len(all_scms)))
it = dict(enumerate(all_scms))
for i in range(len(it)):
for j in range(i+1,len(it)):
el_i = all_scms[it[i]]
el_j = all_scms[it[j]]
dist = ODist(el_i, el_j, 1000, discrete=False)
pairwise_distance_matrix[i][j] = dist
pairwise_distance_matrix[j][i] = dist
return pairwise_distance_matrix, it.values()
def pairwise_distances_SID(all_scms):
pairwise_distance_matrix = np.zeros((len(all_scms), len(all_scms)))
it = dict(enumerate(all_scms))
for i in range(len(it)):
for j in range(i+1,len(it)):
el_i = all_scms[it[i]]
el_j = all_scms[it[j]]
dist = SID(el_i.causal_graph, el_j.causal_graph)
pairwise_distance_matrix[i][j] = dist
pairwise_distance_matrix[j][i] = dist
return pairwise_distance_matrix, it.values()
def convert_label(s):
if s == "0->1+":
return "$A \\nearrow B$"
if s == "0->1-":
return "$A \\searrow B$"
if s == "1->0-":
return "$B \\searrow A$"
if s == "1->0+":
return "$B \\nearrow A$"
def plot_embedding_3D(pairwise_distance_matrix, labels):
embedding = MDS(n_components=3, dissimilarity='precomputed')
data = embedding.fit_transform(pairwise_distance_matrix)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(data[:, 0], data[:, 1], data[:, 2], c='blue', s=60)
ax.view_init(30, 295)
plt.show()
def plot_embedding_sns(pairwise_distance_matrix, labels, fname=None):
embedding = MDS(n_components=2, dissimilarity='precomputed')
data = embedding.fit_transform(pairwise_distance_matrix)
df = DataFrame({'x': data[:, 0], 'y': data[:, 1]})
df['Model Type'] = [convert_label(l.split('::')[0]) for l in labels]
df['Strength ($\\alpha$)'] = [float(l.split('::')[-1]) for l in labels]
sns.set_context("paper")
plt.close()
sns.set_style("whitegrid", {'axes.spines.top': True, 'axes.spines.right': True})
# color_palette = ['#F5793A', '#A95AA1', '#85C0F9', '#0F2080']
# sns.color_palette()[0:0+4]
color_palette = sns.color_palette("RdBu", n_colors=7)
palette = [current_palette[1], current_palette[3], current_palette[4], current_palette[5]]
g = sns.relplot(x="x", y="y", hue="Model Type", size="Strength ($\\alpha$)", palette=palette, sizes=(30, 700), data=df, legend='full')
g.set_xlabels('', fontsize=1)
g.set_ylabels('', fontsize=1)
if fname:
plt.savefig(fname, format="pdf")
plt.show()
```
### Results
```
dist_mat_IDist, labels = pairwise_distances_IDist(all_scms)
plot_embedding_sns(dist_mat_IDist, labels, fname='results/geometry_id.pdf')
dist_mat_ODist, labels = pairwise_distances_ODist(all_scms)
plot_embedding_sns(dist_mat_ODist, labels, fname='results/geometry_od.pdf')
dist_mat_SID, labels = pairwise_distances_SID(all_scms)
plot_embedding_sns(dist_mat_SID, labels, fname='results/geometry_sid.pdf')
```
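The geometry plots above rely on metric MDS applied to a precomputed dissimilarity matrix; a standalone sketch with a made-up symmetric distance matrix shows the core call:

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical symmetric distance matrix for 4 models (zero diagonal)
D = np.array([[0.0, 1.0, 2.0, 3.0],
              [1.0, 0.0, 1.5, 2.5],
              [2.0, 1.5, 0.0, 1.0],
              [3.0, 2.5, 1.0, 0.0]])

# 'precomputed' tells MDS to treat D as pairwise distances, not raw features
embedding = MDS(n_components=2, dissimilarity='precomputed', random_state=0)
coords = embedding.fit_transform(D)
print(coords.shape)  # (4, 2)
```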
# Sample Efficiency
### NB SAMPLES
```
default_structure = {
'nb_nodes': 3,
'density': 0.3, # 0.4 / (n ** 1.25 / 10)
'cycles': False,
'fraction_observed': 1
}
def OD_sample_efficiency(scm_a, scm_b, K, repetitions):
data_points = []
for k in K:
for _ in range(repetitions):
d = ODist(scm_a, scm_b, nb_samples=k, discrete=False)
data_points.append([k, d])
max_d = max([d_ for _, d_ in data_points])
updated_data_points = [[k, float(d / max_d)] for k, d in data_points]
return DataFrame.from_records(updated_data_points, columns=['nb_samples', 'ODist'])
def ID_sample_efficiency(scm_a, scm_b, K, repetitions):
data_points = []
for k in K:
for _ in range(repetitions):
d = IDist(scm_a, scm_b, nb_samples=k, discrete=False)
data_points.append([k, d])
max_d = max([d_ for _, d_ in data_points])
updated_data_points = [[k, float(d / max_d)] for k, d in data_points]
return DataFrame.from_records(updated_data_points, columns=['nb_samples', 'IDist'])
def CD_sample_efficiency(scm_a, scm_b, X, K, repetitions):
data_points = []
for k in K:
for _ in range(repetitions):
d = CDist(scm_a, scm_b, nb_samples_x=X, nb_samples_k=k, discrete=False)
data_points.append([k*X, d])
max_d = max([d_ for _, d_ in data_points])
updated_data_points = [[k, float(d / max_d)] for k, d in data_points]
return DataFrame.from_records(updated_data_points, columns=['nb_samples', 'CDist'])
scm_a = sample_models.generate_linear_gaussian(nb_models=1)
K = [1, 100, 200, 500]
repetitions = 10
OD_convergence = OD_sample_efficiency(scm_a, scm_a, K, repetitions)
ID_convergence = ID_sample_efficiency(scm_a, scm_a, K, repetitions)
CD_convergence = CD_sample_efficiency(scm_a, scm_a, 1, [1, 100, 200, 500], repetitions=3)
```
### Results
```
sns.set_context("paper")
ax = sns.lineplot(x='nb_samples', y='ODist', ci="sd", data=OD_convergence, label='OD', marker="o", color="#63ACBE")
sns.lineplot(x='nb_samples', y='IDist', ci="sd", ax=ax, data=ID_convergence, label='ID', marker="o", color="#EE442F")
sns.lineplot(x='nb_samples', y='CDist', ci="sd", data=CD_convergence, ax=ax, label='CD', marker="o", color="#601A4A")
plt.xlabel('Number of samples (k)', fontsize=17)
plt.ylabel(' Normalized Error', fontsize=17)
ax.legend(fontsize='x-large')
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
plt.savefig("results/sample_efficiency.pdf", bbox_inches='tight', format="pdf")
```
### k,l,m Sample Efficiency
```
def k_sample_efficiency(scm_a, scm_b, K, repetitions):
data_points = []
for k in K:
for _ in range(repetitions):
d = ODist(scm_a, scm_b, nb_samples=k, discrete=False)
data_points.append([k, d])
max_d = max([d_ for _, d_ in data_points])
updated_data_points = [[float(k / max(K)), float(d / max_d)] for k, d in data_points]
return DataFrame.from_records(updated_data_points, columns=['nb_samples', 'ODist'])
def l_sample_efficiency(scm_a, scm_b, L, repetitions):
data_points = []
for l in L:
for _ in range(repetitions):
d = IDist(scm_a, scm_b, nb_samples=500, l_samples=l, discrete=False)
data_points.append([l, d])
max_d = max([d_ for _, d_ in data_points])
updated_data_points = [[float(l / max(L)), float(d / max_d)] for l, d in data_points]
return DataFrame.from_records(updated_data_points, columns=['nb_samples', 'IDist'])
def m_sample_efficiency(scm_a, scm_b, M, repetitions):
data_points = []
for m in M:
for _ in range(repetitions):
d = CDist(scm_a, scm_b, nb_samples_x=m, discrete=False)
data_points.append([m, d])
max_d = max([d_ for _, d_ in data_points])
updated_data_points = [[float(m / max(M)), float(d / max_d)] for m, d in data_points]
return DataFrame.from_records(updated_data_points, columns=['nb_samples', 'CDist'])
k_convergence = k_sample_efficiency(scm_a, scm_a, [1, 100, 200, 500], repetitions=10)
l_convergence = l_sample_efficiency(scm_a, scm_a, [1, 5, 10, 20], repetitions=10)
m_convergence = m_sample_efficiency(scm_a, scm_a, [1, 2, 4, 8], repetitions=10)
sns.set_context("notebook")
ax = sns.lineplot(x='nb_samples', y='ODist', ci="sd", data=k_convergence, label='k', marker="o", color="#63ACBE")
sns.lineplot(x='nb_samples', y='IDist', ci="sd", ax=ax, data=l_convergence, label='l', marker="o", color="#EE442F")
sns.lineplot(x='nb_samples', y='CDist', ci="sd", data=m_convergence, ax=ax, label='m', marker="o", color="#601A4A")
plt.xlabel('Number of samples (k)', fontsize=17)
plt.ylabel(' Normalized Error', fontsize=17)
ax.legend(fontsize='x-large')
```
# Sensitivity Analysis
### Helpers
```
def perturbation(scm, epsilon):
new_model = copy.deepcopy(scm)
for m in new_model.mechanisms:
m.perturbate(epsilon)
return new_model
def OD_sensitivity_analysis(scm, perturbations, K, repetitions):
data_points = []
for epsilon in perturbations:
perturbated_scm = perturbation(scm, epsilon)
for _ in range(repetitions):
d = ODist(scm, perturbated_scm, nb_samples=K, discrete=False)
data_points.append([epsilon, d])
max_d = max([d_ for _, d_ in data_points])
updated_data_points = [[eps, float(d / max_d)] for eps, d in data_points]
return DataFrame.from_records(updated_data_points, columns=['epsilon', 'ODist'])
def ID_sensitivity_analysis(scm, perturbations, K, repetitions):
data_points = []
for epsilon in perturbations:
perturbated_scm = perturbation(scm, epsilon)
for _ in range(repetitions):
d = IDist(scm, perturbated_scm, nb_samples=K, discrete=False)
data_points.append([epsilon, d])
max_d = max([d_ for _, d_ in data_points])
updated_data_points = [[eps, float(d / max_d)] for eps, d in data_points]
return DataFrame.from_records(updated_data_points, columns=['epsilon', 'IDist'])
def CD_sensitivity_analysis(scm, perturbations, X, K, repetitions):
data_points = []
for epsilon in perturbations:
print('{}::{}'.format(epsilon, perturbations))
perturbated_scm = perturbation(scm, epsilon)
for i in range(repetitions):
print('---{}::{}'.format(i, repetitions))
d = CDist(scm, perturbated_scm, nb_samples_x=X, nb_samples_k=K)
data_points.append([epsilon, d])
max_d = max([d_ for _, d_ in data_points])
updated_data_points = [[eps, float(d / max_d)] for eps, d in data_points]
return DataFrame.from_records(updated_data_points, columns=['epsilon', 'CDist'])
default_structure = {
'nb_nodes': 4,
'density': 0.3, # 0.4 / (n ** 1.25 / 10)
'cycles': False,
'fraction_observed': 1
}
scm = sample_models.generate_linear_gaussian(default_structure, nb_models=1)
OD_sensitivity = OD_sensitivity_analysis(scm, [0.01, 0.1, 0.5, 1], 1000, 10)
ID_sensitivity = ID_sensitivity_analysis(scm, [0.01, 0.1, 0.5, 1], 500, 10)
CD_sensitivity = CD_sensitivity_analysis(scm, [0.01, 0.1, 1], X=1, K=500, repetitions=2)
sns.set_context("paper")
ax = sns.lineplot(x='epsilon', y='ODist', ci="sd", data=OD_sensitivity, label='OD', marker="o", color="#63ACBE")
sns.lineplot(x='epsilon', y='IDist', ci="sd", ax=ax, data=ID_sensitivity, label='ID', marker="o", color="#EE442F")
sns.lineplot(x='epsilon', y='CDist', ci="sd", data=CD_sensitivity, ax=ax, label='CD', marker="o", color="#601A4A")
plt.xlabel('epsilon', fontsize=17)
plt.ylabel(' Normalized Error', fontsize=17)
ax.legend(fontsize='x-large')
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
plt.savefig("results/perturbations.pdf", bbox_inches='tight', format="pdf")
```
# Specific Perturbations
#### ID with OD constant
```
from CausalModel.Mechanisms import LinearAdditive
from CausalModel.SCM import SCM
def build_graph(flipped):
G = nx.DiGraph()
G.add_node(0, observed=True)
G.add_node(1, observed=True)
if flipped:
G.add_edge(1,0)
else:
G.add_edge(0,1)
# nx.draw_networkx(G)
return G
def build_mechanisms(beta, flipped):
A_mechanism = LinearAdditive(nb_causes=0)
B_mechanism = LinearAdditive(nb_causes=1, generate_random=False)
B_mechanism.coeffs = [beta]
if flipped:
return [B_mechanism, A_mechanism]
return [A_mechanism, B_mechanism]
def build_noises(mu_a, sigma_a, mu_b, sigma_b):
return IndependentComponentsDistribution([
NormalDistribution(mu_a, sigma_a),
NormalDistribution(mu_b, sigma_b)
])
def build_SCM(beta, mu_a, sigma_a, mu_b, sigma_b, flipped=False):
G = build_graph(flipped)
mechanisms = build_mechanisms(beta, flipped)
noises = build_noises(mu_a, sigma_a, mu_b, sigma_b)
return SCM(G, mechanisms, noises)
def ID_specific_comparison(repetitions):
a_b = build_SCM(1, 0, 1, 0, 1)
b_a = build_SCM(1./(np.sqrt(4)), 0, np.sqrt(0.5), 0, np.sqrt(2), flipped=True)
data_points = []
for _ in range(repetitions):
data_points.append([0, IDist(a_b, a_b, nb_samples=100, l_samples=5, discrete=False)])
data_points.append([1, IDist(a_b, b_a, nb_samples=100, l_samples=5, discrete=False)])
max_d = max([d_ for _, d_ in data_points])
updated_data_points = [[eps, float(d / max_d)] for eps, d in data_points]
return DataFrame.from_records(updated_data_points, columns=['epsilon', 'IDist'])
id_spec = ID_specific_comparison(10)
```
#### CD with ID constant
```
from pomegranate.distributions import GammaDistribution
from sklearn.gaussian_process import GaussianProcessRegressor
import CausalModel.Mechanisms as Mechanisms
def CD_specific_comparison(repetitions):
nb_training = 1000
scm_cd = build_SCM(1, 0, 1, 0, 1)
training_samples = scm_cd.sample(nb_training)
new_graph = copy.deepcopy(scm_cd.causal_graph)
new_noise = IndependentComponentsDistribution([
NormalDistribution(0, 1),
GammaDistribution(alpha=2, beta=2)
])
new_mechanisms = copy.deepcopy(scm_cd.mechanisms)
noise_sample = new_noise.sample(nb_training)
second_var_noise = noise_sample[:,1]
cause_values = training_samples[training_samples.columns[0]]
X = np.column_stack((cause_values.tolist(), second_var_noise))
Y = np.array(training_samples[training_samples.columns[1]].tolist())
gpr = GaussianProcessRegressor()
gpr.fit(X, Y)
new_mechanisms[1] = Mechanisms.GaussianProcess(nb_causes=1, parameters={'nb_points': 10, 'variance': 1})
new_mechanisms[1].gpr = gpr
new_SCM = SCM(new_graph, new_mechanisms, new_noise)
data_points = []
for i in range(repetitions):
data_points.append([0, CDist(scm_cd, scm_cd, nb_samples_k=100, nb_samples_x=2, discrete=False)])
data_points.append([1, CDist(scm_cd, new_SCM, nb_samples_k=100, nb_samples_x=2, discrete=False)])
max_d = max([d_ for _, d_ in data_points])
updated_data_points = [[eps, float(d / max_d)] for eps, d in data_points]
return DataFrame.from_records(updated_data_points, columns=['epsilon', 'CDist'])
cd_specific = CD_specific_comparison(5)
sns.set_context("paper")
ax = sns.lineplot(x='epsilon', y='IDist', ci="sd", data=id_spec, label='ID', marker="o", color="#EE442F")
sns.lineplot(x='epsilon', y='CDist', ci="sd", data=cd_specific, ax=ax, label='CD', marker="o", color="#601A4A")
plt.xlabel('epsilon', fontsize=17)
plt.ylabel(' Normalized Error', fontsize=17)
ax.legend(fontsize='x-large')
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
plt.savefig("results/specific_perturbations.pdf", bbox_inches='tight', format="pdf")
```
# Assignment 3: Hello Vectors
Welcome to this week's programming assignment on exploring word vectors.
In natural language processing, we represent each word as a vector consisting of numbers.
The vector encodes the meaning of the word. These numbers (or weights) for each word are learned using various machine
learning models, which we will explore in more detail later in this specialization. Rather than make you code the
machine learning models from scratch, we will show you how to use them. In the real world, you can always load the
trained word vectors, and you will almost never have to train them from scratch. In this assignment, you will:
- Predict analogies between words.
- Use PCA to reduce the dimensionality of the word embeddings and plot them in two dimensions.
- Compare word embeddings by using a similarity measure (the cosine similarity).
- Understand how these vector space models work.
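As a preview of the similarity measure, cosine similarity can be sketched in a few lines of NumPy (the helper name below is illustrative, not the one provided by the assignment's `utils`):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between vectors a and b."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

v1 = np.array([1.0, 2.0, 3.0])
print(round(cosine_similarity(v1, 2 * v1), 3))  # 1.0  (same direction)
print(round(cosine_similarity(v1, -v1), 3))     # -1.0 (opposite direction)
```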
## 1.0 Predict the Countries from Capitals
In the lectures, we have illustrated the word analogies
by finding the capital of a country from the country.
We have changed the problem a bit in this part of the assignment. You are asked to predict the **countries**
that correspond to some **capitals**.
You are playing trivia against some second grader who just took their geography test and knows all the capitals by heart.
Thanks to NLP, you will be able to answer the questions properly. In other words, you will write a program that can give
you the country by its capital. That way you are pretty sure you will win the trivia game. We will start by exploring the data set.
<img src = 'map.jpg' width="width" height="height" style="width:467px;height:300px;"/>
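The analogy trick behind this prediction can be sketched with made-up 2-D toy vectors (real embeddings are 300-dimensional, and the values here are purely illustrative): the capital-to-country offset learned from one known pair is added to a new capital, and the nearest country vector is the answer.

```python
import numpy as np

# Made-up 2-D toy embeddings, for illustration only
vecs = {
    'Athens': np.array([1.0, 1.0]),  'Greece': np.array([2.0, 3.0]),
    'Cairo':  np.array([4.0, 1.5]),  'Egypt':  np.array([5.0, 3.5]),
}

# capital -> country offset learned from one known pair
offset = vecs['Greece'] - vecs['Athens']
prediction = vecs['Cairo'] + offset

# nearest country vector to the predicted point
best = min(['Greece', 'Egypt'], key=lambda w: np.linalg.norm(vecs[w] - prediction))
print(best)  # Egypt
```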
### 1.1 Importing the data
As usual, you start by importing some essential Python libraries and then load the dataset.
The dataset will be loaded as a [Pandas DataFrame](https://pandas.pydata.org/pandas-docs/stable/getting_started/dsintro.html),
which is a very common data structure in data science.
This may take a few minutes because of the large size of the data.
```
# Run this cell to import packages.
import pickle  # note: unpickling untrusted data is not secure
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from utils import get_vectors
data = pd.read_csv('capitals.txt', delimiter=' ')
data.columns = ['city1', 'country1', 'city2', 'country2']
# print first five elements in the DataFrame
data.head(5)
```
***
### To Run This Code On Your Own Machine:
Note that because the original Google News word embedding dataset is about 3.64 gigabytes,
the workspace is not able to handle the full file set. So we've downloaded the full dataset,
extracted a sample of the words that we're going to analyze in this assignment, and saved
it in a pickle file called `word_embeddings_capitals.p`.
If you want to download the full dataset on your own and choose your own set of word embeddings,
please see the instructions and some helper code.
- Download the dataset from this [page](https://code.google.com/archive/p/word2vec/).
- Search in the page for 'GoogleNews-vectors-negative300.bin.gz' and click the link to download.
Copy-paste the code below and run it on your local machine after downloading
the dataset to the same directory as the notebook.
```python
import nltk
from gensim.models import KeyedVectors

embeddings = KeyedVectors.load_word2vec_format('./GoogleNews-vectors-negative300.bin', binary=True)
f = open('capitals.txt', 'r').read()
set_words = set(nltk.word_tokenize(f))
select_words = ['king', 'queen', 'oil', 'gas', 'happy', 'sad', 'city', 'town', 'village', 'country', 'continent', 'petroleum', 'joyful']
for w in select_words:
    set_words.add(w)

def get_word_embeddings(embeddings):
    word_embeddings = {}
    for word in embeddings.vocab:
        if word in set_words:
            word_embeddings[word] = embeddings[word]
    return word_embeddings

# Testing your function
word_embeddings = get_word_embeddings(embeddings)
print(len(word_embeddings))
pickle.dump(word_embeddings, open("word_embeddings_subset.p", "wb"))
```
***
Now we will load the word embeddings as a [Python dictionary](https://docs.python.org/3/tutorial/datastructures.html#dictionaries).
As stated, these have already been obtained through a machine learning algorithm.
```
word_embeddings = pickle.load(open("word_embeddings_subset.p", "rb"))
len(word_embeddings) # there should be 243 words that will be used in this assignment
```
Each word embedding is a 300-dimensional vector.
```
print("dimension: {}".format(word_embeddings['Spain'].shape[0]))
```
### Predict relationships among words
Now you will write a function that will use the word embeddings to predict relationships among words.
* The function will take as input three words.
* The first two are related to each other.
* It will predict a fourth word which is related to the third word in the same way the first two words are related to each other.
* As an example, "Athens is to Greece as Bangkok is to ______"?
* You will write a program that is capable of finding the fourth word.
* We will give you a hint to show you how to compute this.
A similar analogy would be the following:
<img src='vectors.jpg' style="width:467px;height:200px;"/>
You will implement a function that can tell you the capital of a country.
You should use the same methodology shown in the figure above. To do this,
you'll first compute a similarity metric: the cosine similarity or the Euclidean distance.
### 1.2 Cosine Similarity
The cosine similarity function is:
$$\cos (\theta)=\frac{\mathbf{A} \cdot \mathbf{B}}{\|\mathbf{A}\|\|\mathbf{B}\|}=\frac{\sum_{i=1}^{n} A_{i} B_{i}}{\sqrt{\sum_{i=1}^{n} A_{i}^{2}} \sqrt{\sum_{i=1}^{n} B_{i}^{2}}}\tag{1}$$
$A$ and $B$ represent the word vectors and $A_i$ or $B_i$ represent index $i$ of that vector.
* Note that if $A$ and $B$ are identical, you will get $\cos(\theta) = 1$.
* Otherwise, if they are total opposites, meaning $A = -B$, then you would get $\cos(\theta) = -1$.
* If you get $\cos(\theta) = 0$, that means that they are orthogonal (or perpendicular).
* Numbers between 0 and 1 indicate a similarity score.
* Numbers between -1 and 0 indicate a dissimilarity score.
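These properties can be checked numerically with tiny toy vectors (a quick sanity check, not part of the graded code):

```python
import numpy as np

def cos_sim(a, b):
    # cosine similarity: dot product divided by the product of the norms
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])
print(cos_sim(a, a))   # identical vectors  -> 1.0
print(cos_sim(a, -a))  # opposite vectors   -> -1.0
print(cos_sim(a, b))   # orthogonal vectors -> 0.0
```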
**Instructions**: Implement a function that takes in two word vectors and computes the cosine distance.
<details>
<summary>
<font size="3" color="darkgreen"><b>Hints</b></font>
</summary>
<p>
<ul>
<li> Python's<a href="https://docs.scipy.org/doc/numpy/reference/" > NumPy library </a> adds support for linear algebra operations (e.g., dot product, vector norm ...).</li>
<li>Use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html" > numpy.dot </a>.</li>
<li>Use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.norm.html">numpy.linalg.norm </a>.</li>
</ul>
</p>
</details>
```
# UNQ_C1 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
def cosine_similarity(A, B):
    '''
    Input:
        A: a numpy array which corresponds to a word vector
        B: a numpy array which corresponds to a word vector
    Output:
        cos: a scalar representing the cosine similarity between A and B.
    '''
    ### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###
    dot = np.dot(A, B)
    norma = np.sqrt(np.dot(A, A))
    normb = np.sqrt(np.dot(B, B))
    cos = dot / (norma * normb)
    ### END CODE HERE ###
    return cos
# feel free to try different words
king = word_embeddings['king']
queen = word_embeddings['queen']
cosine_similarity(king, queen)
```
**Expected Output**:
$\approx$ 0.6510956
### 1.3 Euclidean distance
You will now implement a function that computes the similarity between two vectors using the Euclidean distance.
Euclidean distance is defined as:
$$ \begin{aligned} d(\mathbf{A}, \mathbf{B})=d(\mathbf{B}, \mathbf{A}) &=\sqrt{\left(A_{1}-B_{1}\right)^{2}+\left(A_{2}-B_{2}\right)^{2}+\cdots+\left(A_{n}-B_{n}\right)^{2}} \\ &=\sqrt{\sum_{i=1}^{n}\left(A_{i}-B_{i}\right)^{2}} \end{aligned}$$
* $n$ is the number of elements in the vector
* $A$ and $B$ are the corresponding word vectors.
* The more similar the words, the more likely the Euclidean distance will be close to 0.
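As a quick check of the formula on toy vectors (illustration only, assuming NumPy), the explicit sum-of-squared-differences form agrees with the one-liner based on the norm of the difference:

```python
import numpy as np

A = np.array([1.0, 2.0, 3.0])
B = np.array([1.0, 2.0, 5.0])

# explicit form: sqrt of the sum of squared component differences
d_explicit = np.sqrt(np.sum((A - B) ** 2))
# equivalent: Euclidean norm of the difference vector
d_norm = np.linalg.norm(A - B)
print(d_explicit, d_norm)  # both print 2.0
```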
**Instructions**: Write a function that computes the Euclidean distance between two vectors.
<details>
<summary>
<font size="3" color="darkgreen"><b>Hints</b></font>
</summary>
<p>
<ul>
<li>Use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.norm.html" > numpy.linalg.norm </a>.</li>
</ul>
</p>
</details>
```
# UNQ_C2 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
def euclidean(A, B):
    """
    Input:
        A: a numpy array which corresponds to a word vector
        B: a numpy array which corresponds to a word vector
    Output:
        d: a scalar representing the Euclidean distance between A and B.
    """
    ### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###
    # euclidean distance
    d = np.linalg.norm(A - B)
    ### END CODE HERE ###
    return d
# Test your function
euclidean(king, queen)
```
**Expected Output:**
2.4796925
### 1.4 Finding the country of each capital
Now, you will use the previous functions to compute similarities between vectors,
and use these to find the countries of capital cities. You will write a function that
takes in three words and the embeddings dictionary. Your task is to find the
country of the given capital. For example, given the following words:
- 1: Athens 2: Greece 3: Baghdad,
your task is to predict the country 4: Iraq.
**Instructions**:
1. To predict the country you might want to look at the *King - Man + Woman = Queen* example above, and implement that scheme as a mathematical function, using the word embeddings and a similarity function.
2. Iterate over the embeddings dictionary and compute the cosine similarity score between your vector and the current word embedding.
3. You should add a check to make sure that the word you return is not any of the words that you fed into your function. Return the one with the highest score.
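The *King - Man + Woman* scheme in step 1 amounts to simple vector arithmetic followed by a nearest-neighbor search. A minimal sketch with made-up 2-D embeddings (the toy dictionary below is purely illustrative, not the assignment's 300-dimensional data):

```python
import numpy as np

# toy 2-D "embeddings" (hypothetical values, for illustration only)
emb = {
    'Athens': np.array([1.0, 0.0]),
    'Greece': np.array([1.2, 1.0]),
    'Cairo':  np.array([3.0, 0.1]),
    'Egypt':  np.array([3.2, 1.1]),
    'oil':    np.array([-5.0, -5.0]),
}

def cos(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# country1 - city1 + city2 should land near country2
vec = emb['Greece'] - emb['Athens'] + emb['Cairo']
# exclude the three input words, then pick the most similar remaining word
group = {'Athens', 'Greece', 'Cairo'}
best = max((w for w in emb if w not in group), key=lambda w: cos(vec, emb[w]))
print(best)  # 'Egypt'
```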
```
# UNQ_C3 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
def get_country(city1, country1, city2, embeddings):
    """
    Input:
        city1: a string (the capital city of country1)
        country1: a string (the country of capital1)
        city2: a string (the capital city of country2)
        embeddings: a dictionary where the keys are words and values are their embeddings
    Output:
        country: a tuple with the most likely country and its similarity score
    """
    ### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###
    # store city1, country1, and city2 in a set called group
    group = set((city1, country1, city2))
    # get embedding of city 1
    city1_emb = embeddings[city1]
    # get embedding of country 1
    country1_emb = embeddings[country1]
    # get embedding of city 2
    city2_emb = embeddings[city2]
    # get embedding of country 2 (it's a combination of the embeddings of country 1, city 1 and city 2)
    # Remember: King - Man + Woman = Queen
    vec = country1_emb - city1_emb + city2_emb
    # Initialize the similarity to -1 (it will be replaced by similarities closer to +1)
    similarity = -1
    # initialize country to an empty string
    country = ''
    # loop through all words in the embeddings dictionary
    for word in embeddings.keys():
        # first check that the word is not already in the 'group'
        if word not in group:
            # get the word embedding
            word_emb = embeddings[word]
            # calculate cosine similarity between embedding of country 2 and the word in the embeddings dictionary
            cur_similarity = cosine_similarity(vec, word_emb)
            # if the cosine similarity is higher than the previous best similarity...
            if cur_similarity > similarity:
                # update the similarity to the new, better similarity
                similarity = cur_similarity
                # store the country as a tuple, which contains the word and the similarity
                country = (word, similarity)
    ### END CODE HERE ###
    return country
# Testing your function, note to make it more robust you can return the 5 most similar words.
get_country('Athens', 'Greece', 'Cairo', word_embeddings)
```
**Expected Output:**
('Egypt', 0.7626821)
### 1.5 Model Accuracy
Now you will test your new function on the dataset and check the accuracy of the model:
$$\text{Accuracy}=\frac{\text{Correct # of predictions}}{\text{Total # of predictions}}$$
**Instructions**: Write a program that can compute the accuracy on the dataset provided for you. You have to iterate over every row to get the corresponding words and feed them into your `get_country` function above.
<details>
<summary>
<font size="3" color="darkgreen"><b>Hints</b></font>
</summary>
<p>
<ul>
<li>Use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.iterrows.html" > pandas.DataFrame.iterrows </a>.</li>
</ul>
</p>
</details>
```
# UNQ_C4 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
def get_accuracy(word_embeddings, data):
    '''
    Input:
        word_embeddings: a dictionary where the key is a word and the value is its embedding
        data: a pandas dataframe containing all the country and capital city pairs
    Output:
        accuracy: the accuracy of the model
    '''
    ### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###
    # initialize num correct to zero
    num_correct = 0
    # loop through the rows of the dataframe
    for i, row in data.iterrows():
        # get city1
        city1 = row['city1']
        # get country1
        country1 = row['country1']
        # get city2
        city2 = row['city2']
        # get country2
        country2 = row['country2']
        # use get_country to find the predicted country2
        predicted_country2, _ = get_country(city1, country1, city2, word_embeddings)
        # if the predicted country2 is the same as the actual country2...
        if predicted_country2 == country2:
            # increment the number of correct predictions by 1
            num_correct += 1
    # get the number of rows in the dataframe
    m = len(data)
    # calculate the accuracy by dividing the number correct by m
    accuracy = num_correct / m
    ### END CODE HERE ###
    return accuracy
```
**NOTE: The cell below takes about 30 SECONDS to run.**
```
accuracy = get_accuracy(word_embeddings, data)
print(f"Accuracy is {accuracy:.2f}")
```
**Expected Output:**
$\approx$ 0.92
# 3.0 Plotting the vectors using PCA
Now you will explore the distance between word vectors after reducing their dimension.
The technique we will employ is known as
[*principal component analysis* (PCA)](https://en.wikipedia.org/wiki/Principal_component_analysis).
As we saw, we are working in a 300-dimensional space in this case.
Although we can work with such a space effectively from a computational perspective,
it is impossible to visualize results in such high-dimensional spaces.
You can think of PCA as a method that projects our vectors onto a space of reduced
dimension, while keeping the maximum information about the original vectors in
their reduced counterparts. In this case, by *maximum information* we mean that the
Euclidean distance between the original vectors and their projected siblings is
minimal. Hence vectors that were originally close in the embeddings dictionary
will produce lower-dimensional vectors that are still close to each other.
You will see that when you map out the words, similar words will be clustered
next to each other. For example, the words 'sad', 'happy', 'joyful' all describe
emotion and are supposed to be near each other when plotted.
The words: 'oil', 'gas', and 'petroleum' all describe natural resources.
Words like 'city', 'village', 'town' could be seen as synonyms and describe a
similar thing.
Before plotting the words, you need to first be able to reduce each word vector
with PCA into 2 dimensions and then plot it. The steps to compute PCA are as follows:
1. Mean-normalize the data.
2. Compute the covariance matrix of your data ($\Sigma$).
3. Compute the eigenvectors and the eigenvalues of your covariance matrix.
4. Multiply the first K eigenvectors by your normalized data. The transformation should look something like the following:
<img src='word_embf.jpg' style="width:800px;height:200px;"/>
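The four steps above can be sketched on random data (a minimal illustration; the array shapes and the random seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))           # 6 observations, 4 features

X_demeaned = X - X.mean(axis=0)       # 1. mean-normalize
C = np.cov(X_demeaned, rowvar=False)  # 2. covariance matrix (columns = variables)
vals, vecs = np.linalg.eigh(C)        # 3. eigendecomposition (eigh: C is symmetric)
order = np.argsort(vals)[::-1]        # eigh returns ascending order; reverse it
W = vecs[:, order[:2]]                # 4. keep the top-2 eigenvectors...
X2 = X_demeaned @ W                   #    ...and project the data onto them
print(X2.shape)  # (6, 2)
```

Note that `X_demeaned @ W` is the same product as the transpose-of-transposes form described in the hints below; the first projected column carries at least as much variance as the second.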
**Instructions**:
You will write a program that takes in a data set where each row corresponds to a word vector.
* The word vectors are of dimension 300.
* Use PCA to change the 300 dimensions to `n_components` dimensions.
* The new matrix should be of dimension `(m, n_components)`.
* First de-mean the data.
* Get the eigenvalues using `linalg.eigh`. Use `eigh` rather than `eig` since the covariance matrix is symmetric. The performance gain when using `eigh` instead of `eig` is substantial.
* Sort the eigenvectors and eigenvalues by decreasing order of the eigenvalues.
* Get a subset of the eigenvectors (choose how many principal components you want to use via `n_components`).
* Return the new transformation of the data by multiplying the eigenvectors with the original data.
<details>
<summary>
<font size="3" color="darkgreen"><b>Hints</b></font>
</summary>
<p>
<ul>
<li>Use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.mean.html" > numpy.mean(a,axis=None) </a> : If you set <code>axis = 0</code>, you take the mean for each column. If you set <code>axis = 1</code>, you take the mean for each row. Remember that each row is a word vector, and the number of columns are the number of dimensions in a word vector. </li>
<li>Use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.cov.html" > numpy.cov(m, rowvar=True) </a>. This calculates the covariance matrix. By default <code>rowvar</code> is <code>True</code>. From the documentation: "If rowvar is True (default), then each row represents a variable, with observations in the columns." In our case, each row is a word vector observation, and each column is a feature (variable). </li>
<li>Use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eigh.html" > numpy.linalg.eigh(a, UPLO='L') </a> </li>
<li><a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.argsort.html" > numpy.argsort </a> sorts the values in an array from smallest to largest, then returns the indices of that sort. </li>
<li>In order to reverse the order of a list, you can use: <code>x[::-1]</code>.</li>
<li>To apply the sorted indices to eigenvalues, you can use this format <code>x[indices_sorted]</code>.</li>
<li>When applying the sorted indices to eigen vectors, note that each column represents an eigenvector. In order to preserve the rows but sort on the columns, you can use this format <code>x[:,indices_sorted]</code></li>
<li>To transform the data using a subset of the most relevant principal components, take the matrix multiplication of the eigenvectors with the original data. </li>
<li>The data is of shape <code>(n_observations, n_features)</code>. </li>
<li>The subset of eigenvectors are in a matrix of shape <code>(n_features, n_components)</code>.</li>
<li>To multiply these together, take the transposes of both the eigenvectors <code>(n_components, n_features)</code> and the data <code>(n_features, n_observations)</code>.</li>
<li>The product of these two has dimensions <code>(n_components,n_observations)</code>. Take its transpose to get the shape <code>(n_observations, n_components)</code>.</li>
</ul>
</p>
</details>
```
# UNQ_C5 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
def compute_pca(X, n_components=2):
    """
    Input:
        X: of dimension (m,n) where each row corresponds to a word vector
        n_components: Number of components you want to keep.
    Output:
        X_reduced: data transformed into n_components dimensions/columns
    """
    ### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###
    # mean center the data
    X_demeaned = X - np.mean(X, axis=0)
    # calculate the covariance matrix
    covariance_matrix = np.cov(X_demeaned, rowvar=False)
    # calculate eigenvectors & eigenvalues of the covariance matrix
    eigen_vals, eigen_vecs = np.linalg.eigh(covariance_matrix, UPLO='L')
    # sort eigenvalues in increasing order (get the indices from the sort)
    idx_sorted = np.argsort(eigen_vals)
    # reverse the order so that it's from highest to lowest
    idx_sorted_decreasing = idx_sorted[::-1]
    # sort the eigenvalues by idx_sorted_decreasing
    eigen_vals_sorted = eigen_vals[idx_sorted_decreasing]
    # sort eigenvectors using the idx_sorted_decreasing indices
    eigen_vecs_sorted = eigen_vecs[:, idx_sorted_decreasing]
    # select the first n_components eigenvectors
    eigen_vecs_subset = eigen_vecs_sorted[:, 0:n_components]
    # transform the data by multiplying the transpose of the eigenvectors
    # with the transpose of the de-meaned data, then transpose the product
    X_reduced = np.dot(eigen_vecs_subset.transpose(), X_demeaned.transpose()).transpose()
    ### END CODE HERE ###
    return X_reduced
# Testing your function
np.random.seed(1)
X = np.random.rand(3, 10)
X_reduced = compute_pca(X, n_components=2)
print("Your original matrix was " + str(X.shape) + " and it became:")
print(X_reduced)
```
**Expected Output:**
Your original matrix was (3, 10) and it became:
<table>
<tr>
<td>
0.43437323
</td>
<td>
0.49820384
</td>
</tr>
<tr>
<td>
0.42077249
</td>
<td>
-0.50351448
</td>
</tr>
<tr>
<td>
-0.85514571
</td>
<td>
0.00531064
</td>
</tr>
</table>
Now you will use your pca function to plot a few words we have chosen for you.
You will see that similar words tend to be clustered near each other.
Sometimes, even antonyms tend to be clustered near each other. Antonyms
describe the same thing but tend to be on opposite ends of the scale.
They are usually found in the same position in a sentence,
have the same parts of speech, and thus when
learning the word vectors, you end up getting similar weights. In the next week
we will go over how you learn them, but for now let's just enjoy using them.
**Instructions:** Run the cell below.
```
words = ['oil', 'gas', 'happy', 'sad', 'city', 'town',
'village', 'country', 'continent', 'petroleum', 'joyful']
# given a list of words and the embeddings, it returns a matrix with all the embeddings
X = get_vectors(word_embeddings, words)
print('You have 11 words each of 300 dimensions thus X.shape is:', X.shape)
# We have done the plotting for you. Just run this cell.
result = compute_pca(X, 2)
plt.scatter(result[:, 0], result[:, 1])
for i, word in enumerate(words):
    plt.annotate(word, xy=(result[i, 0] - 0.05, result[i, 1] + 0.1))
plt.show()
```
**What do you notice?**
The word vectors for 'gas', 'oil' and 'petroleum' appear related to each other,
because their vectors are close to each other. Similarly, 'sad', 'joyful'
and 'happy' all express emotions, and are also near each other.
```
import os
import random
import math
import time
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from collections import deque
from tensorflow.keras.optimizers import Adam
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, Conv1D, MaxPooling1D, Flatten, concatenate, Conv2D, MaxPooling2D
import tensorflow.keras.losses as kls
#import tensorflow_probability as tfp
from libs.utils import *
from libs.generate_boxes import *
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '0'
tf.get_logger().setLevel('INFO')
tf.keras.backend.floatx()
plt.style.use('fivethirtyeight')
plt.rcParams['figure.figsize'] = (20,10)
class Actor(tf.keras.Model):
    def __init__(self, state_size, selected_size, remain_size, output_size):
        super(Actor, self).__init__()
        l1, b1, k1 = state_size
        self.state_size = (l1*b1*k1,)
        self.case_dnn1 = Dense(64, activation='relu', input_shape=self.state_size)
        self.case_dnn2 = Dense(64, activation='relu')
        l2, b2, k2 = selected_size
        self.selected_size = (l2*b2*k2,)
        self.select_dnn1 = Dense(64, activation='relu', input_shape=self.selected_size)
        self.select_dnn2 = Dense(64, activation='relu')
        l3, b3, k3 = remain_size
        self.remain_size = (l3*b3*k3,)
        self.remain_dnn1 = Dense(128, activation='relu', input_shape=self.remain_size)
        self.remain_dnn2 = Dense(128, activation='relu')
        self.d1 = Dense(256, activation='relu')
        self.d2 = Dense(256, activation='relu')
        self.d3 = Dense(128, activation='relu')
        self.out = Dense(output_size, activation='softmax')

    def call(self, cb_list):
        c, s, r = cb_list[0], cb_list[1], cb_list[2]
        c = tf.reshape(c, [-1, self.state_size[0]])
        s = tf.reshape(s, [-1, self.selected_size[0]])
        r = tf.reshape(r, [-1, self.remain_size[0]])
        c = self.case_dnn1(c)
        c = self.case_dnn2(c)
        s = self.select_dnn1(s)
        s = self.select_dnn2(s)
        r = self.remain_dnn1(r)
        r = self.remain_dnn2(r)
        x = concatenate([c, s, r])
        x = self.d1(x)
        x = self.d2(x)
        x = self.d3(x)
        q = self.out(x)
        return q
class Critic(tf.keras.Model):
    def __init__(self, state_size, selected_size, remain_size, output_size):
        super(Critic, self).__init__()
        l1, b1, k1 = state_size
        self.state_size = (l1*b1*k1,)
        self.case_dnn1 = Dense(64, activation='relu', input_shape=self.state_size)
        self.case_dnn2 = Dense(64, activation='relu')
        l2, b2, k2 = selected_size
        self.selected_size = (l2*b2*k2,)
        self.select_dnn1 = Dense(64, activation='relu', input_shape=self.selected_size)
        self.select_dnn2 = Dense(64, activation='relu')
        l3, b3, k3 = remain_size
        self.remain_size = (l3*b3*k3,)
        self.remain_dnn1 = Dense(128, activation='relu', input_shape=self.remain_size)
        self.remain_dnn2 = Dense(128, activation='relu')
        self.d1 = Dense(256, activation='relu')
        self.d2 = Dense(256, activation='relu')
        self.d3 = Dense(128, activation='relu')
        self.out = Dense(output_size, activation='softmax')

    def call(self, cb_list):
        c, s, r = cb_list[0], cb_list[1], cb_list[2]
        c = tf.reshape(c, [-1, self.state_size[0]])
        s = tf.reshape(s, [-1, self.selected_size[0]])
        r = tf.reshape(r, [-1, self.remain_size[0]])
        c = self.case_dnn1(c)
        c = self.case_dnn2(c)
        s = self.select_dnn1(s)
        s = self.select_dnn2(s)
        r = self.remain_dnn1(r)
        r = self.remain_dnn2(r)
        x = concatenate([c, s, r])
        x = self.d1(x)
        x = self.d2(x)
        x = self.d3(x)
        q = self.out(x)
        return q
class PPO_Agent():
    def __init__(self, L=20, B=20, H=20, n_remains=5, lr=1e-8, exp_steps=500,
                 train_st=200, memory_len=500):
        self.state_size = (L, B, 1)
        self.selected_size = (L, B, 2)
        self.remain_size = (L, B, n_remains)
        self.output_size = 1
        self.discount_factor = 0.99
        self.learning_rate = lr  # 1e-8 # 1e-4
        self.exploration_steps = exp_steps
        self.batch_size = 32
        self.train_start = train_st
        self.beta = 0.2
        self.clip_pram = 0.2
        self.memory = deque(maxlen=memory_len)
        self.gamma = 0.9
        self.actor = Actor(self.state_size, self.selected_size, self.remain_size, self.output_size)
        self.critic = Critic(self.state_size, self.selected_size, self.remain_size, self.output_size)
        self.actor_optimizer = Adam(self.learning_rate)
        self.critic_optimizer = Adam(self.learning_rate)
        self.avg_actor_loss = 0
        self.avg_critic_loss = 0

    def get_action(self, state, loaded_mh_c, r_boxes):
        q_values = self.actor([state, loaded_mh_c, r_boxes])
        argmax_idx = np.where(q_values == tf.math.reduce_max(q_values))
        action_idx = argmax_idx[0][0]
        return q_values, argmax_idx, action_idx

    def append_sample(self, history, load, remain_size, load_size, reward, last, next_history, next_load, next_remain_size, next_load_size):
        self.memory.append((history, load, remain_size, load_size, reward, last, next_history, next_load, next_remain_size, next_load_size))

    def actor_loss_temp(self, probs, actions, adv, old_probs, closs):
        probability = probs
        entropy = tf.reduce_mean(tf.math.negative(tf.math.multiply(probability, tf.math.log(probability))))
        sur1 = []
        sur2 = []
        for pb, t, op in zip(probability, adv, old_probs):
            t = tf.constant(t)
            op = tf.constant(op)
            ratio = tf.math.divide(pb, op)
            s1 = tf.math.multiply(ratio, t)
            s2 = tf.math.multiply(tf.clip_by_value(ratio, 1.0 - self.clip_pram, 1.0 + self.clip_pram), t)
            sur1.append(s1)
            sur2.append(s2)
        sr1 = tf.stack(sur1)
        sr2 = tf.stack(sur2)
        loss = tf.math.negative(tf.reduce_mean(tf.math.minimum(sr1, sr2)) - closs + 0.001 * entropy)
        return loss

    def get_actor_loss(self, discnt_rewards, a):
        return 0.5 * kls.mean_squared_error(discnt_rewards, a)

    def get_critic_loss(self, discnt_rewards, v):
        return 0.5 * kls.mean_squared_error(discnt_rewards, v)

    def train_model(self):
        batch = random.sample(self.memory, self.batch_size)
        history = np.array([sample[0] for sample in batch])
        load = np.array([sample[1] for sample in batch])
        remain_size = np.array([sample[2] for sample in batch])
        load_size = np.array([sample[3] for sample in batch])
        reward = np.array([sample[4] for sample in batch])
        dones = np.array([sample[5] for sample in batch])
        next_history = [sample[6] for sample in batch]
        next_load = [sample[7] for sample in batch]
        next_remain_size = [sample[8] for sample in batch]
        next_load_size = [sample[9] for sample in batch]
        with tf.GradientTape() as actor_tape, tf.GradientTape() as critic_tape:
            actor = self.actor([history, load, remain_size])
            critic = self.critic([history, load, remain_size])
            targets = []
            for i in range(self.batch_size):
                next_value = self.critic([next_history[i], next_load[i], next_remain_size[i]])
                targets.append(next_value)
            targets = np.array(targets)
            targets = targets.reshape(-1, 1)
            print(actor.shape, critic.shape, targets.shape)
            actor_loss = self.get_actor_loss(targets, actor)
            critic_loss = self.get_critic_loss(targets, critic)
        actor_grads = actor_tape.gradient(actor_loss, self.actor.trainable_variables)
        critic_grads = critic_tape.gradient(critic_loss, self.critic.trainable_variables)
        self.actor_optimizer.apply_gradients(zip(actor_grads, self.actor.trainable_variables))
        self.critic_optimizer.apply_gradients(zip(critic_grads, self.critic.trainable_variables))
num_episode = 1500
global_step = 0
allow_skip = False
tr_l, h_fill, tr_r, avg_actor_loss_l, avg_critic_loss_l, history_eps, used_boxes_eps = [],[],[],[],[],[],[]
N_MDD = 7
K = 4
n_candidates = 4
boxes_multi1 = [np.array([[20, 20, 4],
[20, 4, 4],
[20, 4, 4],
[20, 4, 4],
[20, 4, 4],
[20, 4, 4],
[20, 20, 4],
[20, 20, 4],
[20, 20, 4]])]
gt_pos1 = [np.array([[ 0, 0, 0],
[ 0, 0, 4],
[ 0, 4, 4],
[ 0, 8, 4],
[ 0, 12, 4],
[ 0, 16, 4],
[ 0, 0, 8],
[ 0, 0, 12],
[ 0, 0, 16]])]
boxes_multi2 = [np.array([[20, 20, 5],
[ 4, 20, 5],
[ 4, 20, 5],
[ 4, 20, 5],
[ 4, 20, 5],
[ 4, 20, 5],
[10, 20, 5],
[10, 20, 5],
[20, 20, 5]])]
gt_pos2 = [np.array([[ 0, 0, 0],
[ 0, 0, 5],
[ 4, 0, 5],
[ 8, 0, 5],
[12, 0, 5],
[16, 0, 5],
[ 0, 0, 10],
[10, 0, 10],
[ 0, 0, 15]])]
num_max_boxes = max(len(boxes_multi1[0]), len(boxes_multi2[0]))
num_max_remain = num_max_boxes
print('num_max_boxes',num_max_boxes,'num_max_remain',num_max_remain)
env=Bpp3DEnv()
agent = PPO_Agent(L=20, B=20, H=20, n_remains=num_max_remain, lr=1e-4, exp_steps=900,
train_st=500, memory_len=1000)
boxes_multi, gt_pos = boxes_multi2.copy(), gt_pos2.copy()
env.reset()
done = False
step = 0
history, h_load, h_remain_size, h_load_size = [],[],[],[]
next_history, next_load, next_remain_size, next_load_size = [],[],[],[]
used_boxes, pred_pos = [],[]
boxes_all = np.array(boxes_multi)[0].copy()
r_boxes = boxes_all.copy()
r_boxes
q_list, arg_list, action_list = [], [], []
while not done:
    state = env.container.copy()
    state_h = env.update_h().copy()
    step += 1
    k = min(K, len(r_boxes))
    selected = cbn_select_boxes(r_boxes[:n_candidates], k)
    s_order = get_selected_order(selected, k)
    s_loc_c, num_loaded_box_c, loading_size_c, loading_pos_c, next_cube_c, next_state_c = get_selected_location(s_order, state)
    loaded_mh_c = np.array([get_loaded_mh(s_loc, env.length, env.breadth, env.height) for s_loc in s_loc_c])  # 3D -> 2D
    in_state, in_r_boxes, in_loading = raw2input(state_h, len(s_loc_c), r_boxes, num_max_remain, K, loading_size_c, env.height)
    s_order, s_loc_c, num_loaded_box_c, loading_size_c, loading_pos_c, next_cube_c, next_state_c, loaded_mh_c, in_state, in_r_boxes, in_loading = \
        get_unique(s_order, s_loc_c, num_loaded_box_c, loading_size_c, loading_pos_c, next_cube_c, next_state_c, loaded_mh_c, in_state, in_r_boxes, in_loading)
    if len(s_loc_c) == 1:
        action_idx = 0
    else:
        q, arg, action_idx = agent.get_action(in_state, loaded_mh_c, in_r_boxes)
        print('Action')
        print(q, arg, action_idx)
        q_list.append(q)
        arg_list.append(arg)
        action_list.append(action_idx)
    env.convert_state(next_cube_c[action_idx])
    num_loaded_box = num_loaded_box_c[action_idx]
    if num_loaded_box != 0:
        new_used_boxes = loading_size_c[action_idx]
        r_boxes = get_remain(new_used_boxes, r_boxes)
    else:
        r_boxes = get_remain(s_order[action_idx], r_boxes)
    used_boxes = used_boxes + loading_size_c[action_idx]
    pred_pos = pred_pos + loading_pos_c[action_idx]
    if len(r_boxes) == 0 or np.sum(env.container_h != env.height) == 0:
        done = True
    if len(s_loc_c) != 1 or done:
        history.append(in_state[action_idx])
        h_load.append(loaded_mh_c[action_idx])
        h_remain_size.append(in_r_boxes[action_idx])
        h_load_size.append(in_loading[action_idx])
        next_state = env.container.copy()
        next_state_h = env.container_h.copy()
        if done:
            in_next_history = next_state_h.reshape((1, env.length, env.breadth, 1))
            loaded_mh_c = np.zeros((1, env.length, env.breadth, 2))
            in_next_remains = np.zeros((1, env.length, env.breadth, num_max_remain))
            in_next_loading = np.zeros((1, env.length, env.breadth, K))
        else:
            k = min(K, len(r_boxes))
            selected = cbn_select_boxes(r_boxes[:n_candidates], k)
            s_order = get_selected_order(selected, k)
            s_loc_c, num_loaded_box_c, loading_size_c, loading_pos_c, next_cube_c, next_state_c = \
                get_selected_location(s_order, next_state)
            loaded_mh_c = np.array([get_loaded_mh(s_loc, env.length, env.breadth, env.height) for s_loc in s_loc_c])
            in_next_history, in_next_remains, in_next_loading = \
                raw2input(next_state_h, len(s_loc_c), r_boxes, num_max_remain, K, loading_size_c, env.height)
            s_order, s_loc_c, num_loaded_box_c, loading_size_c, loading_pos_c, next_cube_c, next_state_c, loaded_mh_c, in_next_history, in_next_remains, in_next_loading = \
                get_unique(s_order, s_loc_c, num_loaded_box_c, loading_size_c, loading_pos_c, next_cube_c, next_state_c, loaded_mh_c, in_next_history, in_next_remains, in_next_loading)
        next_history.append(in_next_history)
        next_load.append(loaded_mh_c)
        next_remain_size.append(in_next_remains)
        next_load_size.append(in_next_loading)
done
avg_tr = 0 if len(tr_r)==0 else np.mean(tr_r)
terminal_reward = env.terminal_reward()
tr_l.append(terminal_reward)
h_fill.append(env.terminal_reward())
tr_r.append(env.terminal_reward())
avg_tr
terminal_reward
n_repeat = 6 if env.terminal_reward() == 1.0 else 1
is_last = False
N = len(history)
for i in range(N):
    if i == N - 1:
        is_last = True
    reward = (0.99 ** (N - i - 1)) * terminal_reward
    print(i, ':', reward, ':', n_repeat)
    for a in range(n_repeat):
        agent.append_sample(history[i], h_load[i], h_remain_size[i], h_load_size[i], reward, is_last,
                            next_history[i], next_load[i], next_remain_size[i], next_load_size[i])
len(agent.memory)
batch = random.sample(agent.memory, 3)
history = np.array([sample[0] for sample in batch])
load = np.array([sample[1] for sample in batch])
remain_size = np.array([sample[2] for sample in batch])
load_size = np.array([sample[3] for sample in batch])
reward = np.array([sample[4] for sample in batch])
dones = np.array([sample[5] for sample in batch])
next_history = [sample[6] for sample in batch]
next_load = [sample[7] for sample in batch]
next_remain_size = [sample[8] for sample in batch]
next_load_size = [sample[9] for sample in batch]
history.shape
vmin = 0
vmax = 1
nsup = 51
z = np.linspace(vmin,vmax,nsup)
z
dz = (vmax - vmin) / (nsup - 1.)
dz
with tf.GradientTape() as actor_tape, tf.GradientTape() as critic_tape:
actor = agent.actor([history, load, remain_size])
value = agent.critic([history, load, remain_size])
#print(value.shape)
print('Actor')
print(actor, '\n')
print('Critic')
print(value, '\n')
targets = []
print('Next Critic')
for i in range(3):
#print(next_history[i].shape, next_load[i].shape, next_remain_size[i].shape)
next_value = agent.critic([next_history[i], next_load[i], next_remain_size[i]])
t_max_q = tf.math.reduce_max(next_value)
t = [(1- 0.75)*reward[i] + (1 - dones[i]) *0.75*t_max_q]
targets.append(t)
#targets = np.array(targets)
#targets = targets.reshape(-1, 1)
print('\ntargets')
print(targets)
#print(actor.shape, value.shape, targets.shape)
actor_loss = agent.get_actor_loss(targets, actor)
critic_loss = agent.get_critic_loss(targets, value)
print('Act Loss:', actor_loss)
print('Crt Loss:', critic_loss)
print('')
actor_grads = actor_tape.gradient(actor_loss, agent.actor.trainable_variables)
critic_grads = critic_tape.gradient(critic_loss, agent.critic.trainable_variables)
agent.actor_optimizer.apply_gradients(zip(actor_grads, agent.actor.trainable_variables))
agent.critic_optimizer.apply_gradients(zip(critic_grads, agent.critic.trainable_variables))
with tf.GradientTape() as actor_tape, tf.GradientTape() as critic_tape:
actor = agent.actor([history, load, remain_size])
value = agent.critic([history, load, remain_size])
print('Actor')
print(actor)
print('Critic')
    print(value)
targets = []
print('Next Critic')
for i in range(1):
next_value = agent.critic([next_history[i], next_load[i], next_remain_size[i]])
print(i, next_value)
targets.append(next_value)
targets = np.array(targets)
targets = targets.reshape(-1, 1)
print('targets')
print(targets)
actor_loss = agent.get_actor_loss(targets, actor)
critic_loss = agent.get_critic_loss(targets, value)
print('Act Loss:', actor_loss)
print('Crt Loss:', critic_loss)
```
| github_jupyter |
<a href="https://colab.research.google.com/github/faisalnawazmir/Python_Lectures/blob/main/Data_Structrue__String2_2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# String methods
**Strings** are an example of Python *objects*. An object contains both data (the actual string itself) and methods, which are effectively functions that are built into the object and are available to any instance of the object.
Python has a function called **dir** which lists the methods available for an object, while the **type** function shows the type of an object.
```
mystring='Hello World!'
type(mystring)
dir(mystring)
help(str.capitalize)
```
While the dir function lists the methods, and you can use help to get some simple documentation on a method, a better source of documentation for string methods would be https://docs.python.org/library/stdtypes.html#string-methods.
Calling a method is similar to calling a function (it takes arguments and returns
a value) but the syntax is different. We call a method by appending the method
name to the variable name using the period as a delimiter.
For example, the method upper takes a string and returns a new string with all
uppercase letters:
Instead of the function syntax upper(word), it uses the method syntax
word.upper().
```
word='apple'
newword=word.upper()
print(newword)
```
A method call is called an invocation; in this case, we would say that we are
invoking upper on the word.
For example, there is a string method named find that searches for the position
of one string within another:
```
word='banana'
index=word.find('a')
print(index)
```
In this example, we invoke find on word and pass the letter we are looking for as
a parameter.
The find method can find substrings as well as characters:
```
word.find('na')
```
It can take as a second argument the index where it should start:
```
word.find('na',3)
```
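One detail worth noting: unlike the related `index` method, `find` does not raise an error when the substring is absent; it returns `-1` instead, which is worth checking before slicing:

```
word = 'banana'

# find returns -1 when the substring does not occur
print(word.find('xyz'))

# contrast with index, which raises ValueError instead
try:
    word.index('xyz')
except ValueError as e:
    print('index raised:', e)
```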
One common task is to remove white space (spaces, tabs, or newlines) from the
beginning and end of a string using the strip method:
```
line=' Here we go '
line.strip()
```
Some methods such as startswith return boolean values.
```
line='Have a nice day'
line.startswith('Have')
line.startswith('h')
```
You will note that startswith is case-sensitive, so sometimes we map the whole line to lowercase with the lower method before doing any checking.
```
line.lower()
line.lower().startswith('h')
```
# Parsing strings
Often, we want to look into a string and find a substring. For example if we were
presented a series of lines formatted as follows:
_From stephen.marquard@**uct.ac.za** Sat Jan 5 09:14:16 2008_
and we wanted to pull out only the second half of the address (i.e., **uct.ac.za**)
from each line, we can do this by using the find method and string slicing.
First, we will find the position of the at-sign in the string. Then we will find the
position of the first space after the at-sign. And then we will use string slicing to
extract the portion of the string which we are looking for.
```
data = 'From stephen.marquard@uct.ac.za Sat Jan 5 09:14:16 2008'
atpos=data.find('@')
print(atpos)
sppos=data.find(' ',atpos)
print(sppos)
host=data[atpos+1:sppos]
print(host)
```
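The same extraction can also be sketched with `split` instead of `find` and slicing; this is just an alternative idiom, not the book's approach:

```
data = 'From stephen.marquard@uct.ac.za Sat Jan 5 09:14:16 2008'

# split on '@' and take the text after it, then split on whitespace
after_at = data.split('@')[1]   # 'uct.ac.za Sat Jan 5 09:14:16 2008'
host = after_at.split()[0]      # 'uct.ac.za'
print(host)
```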
# Format operator
The format operator, % allows us to construct strings, replacing parts of the strings
with the data stored in variables.
```
camel=42
'%d' %camel
```
The result is the string '42', which is not to be confused with the integer value 42.
```
camels=42
'I have spotted %d camels.' % camels
```
If there is more than one format sequence in the string, the second argument has
to be a tuple. Each format sequence is matched with an element of the tuple, in
order.
The following example uses %d to format an integer, %g to format a floating-point
number (don't ask why), and %s to format a string:
```
'In %d years I have spotted %g %s.' % (3, 0.1, 'camels')
```
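If the number or type of values doesn't match the format sequences, the `%` operator raises a `TypeError`; a quick illustration:

```
try:
    'In %d years I have spotted %g %s.' % (3, 0.1)   # one value short
except TypeError as e:
    print('TypeError:', e)

try:
    '%d' % 'dollars'   # wrong type for %d
except TypeError as e:
    print('TypeError:', e)
```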
# f-strings
Let's start with some basic examples of the simplest usages, as shown below. In essence, f-strings are string literals with the letter f, either lower or upper case, as the prefix. String literals are strings enclosed by a pair of quotes (i.e., quotation marks), and Python treats single and double quotes the same, as long as they come in pairs.
```
#interpolate a string
name='Elan'
f'hello {name}'
#interpolate an integer
id=123
f'My student id # is {id}'
# interpolate a list
fruits=['apple','banana','mango']
f'My favourite fruits are: {fruits}'
# Interpolate a tuple
http_response = ('data', 200)
f"Http Response: {http_response}"
# Interpolate a dictionary
grades = {"John": 95, "Jennifer": 98}
f"Grades: {grades}"
# Access an element in a list
pets = ["Dogs", "Cats", "Turtles"]
f"My pet: {pets[-1]}"
# With a function call
name = "john"
f"Name: {name.title()}"
# Some calculation
number = 5
f"Square: {number*number}"
```
# Numeric and String Formatting
```
# Big numbers separator
big_number = 98765432123456789
f'{big_number:_d}'
# Floating numbers formatting
more_digits = 2.345678
f"2 digits: {more_digits:.2f}; 4 digits: {more_digits:.4f}"
# Scientific notation
sci_number = 0.0000043203
f"number: {sci_number:e}"
```
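Beyond separators, precision, and scientific notation, the same format mini-language supports width, alignment, and zero-padding; a few extra examples not shown in the source article:

```
value = 3.14159

# fixed width with right/left/center alignment
print(f"[{value:>10.2f}]")   # right-aligned in 10 characters
print(f"[{value:<10.2f}]")   # left-aligned
print(f"[{value:^10.2f}]")   # centered

# zero-padding and percentages
print(f"{42:05d}")           # 00042
print(f"{0.1234:.1%}")       # 12.3%
```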
source:https://medium.com/swlh/string-formatting-in-python-6-things-to-know-about-f-strings-72fd38d96172
| github_jupyter |
# ClickModels: DBN
ClickModels is a field of study in Machine Learning that uses Probabilistic Graphical Models to model the interactions between users and a set of ranked items.
One of the main uses of ClickModels is to train models on past observed data to evaluate how good each document probably is for each query, also known in the literature as judgment values.
In order to compute the judgments of each document for each query, we rely on the [work](https://pdfs.semanticscholar.org/0b19/b37da5e438e6355418c726469f6a00473dc3.pdf) developed by Aleksandr et al., where user interactions with each query result are modeled through a Dynamic Bayesian Network, as depicted below
<p align="center">
<img src="./dbn.png">
</p>
$E_r$ is a random variable that tracks whether a given document $u$ was examined at rank $r$ by the customer or not (this would be equivalent to the impression event from GA's dataset).
$A_r$ is an indicator of whether the customer found that given document attractive or not. When a sku is examined and it's attractive, we have a Click event, represented by the observed variable $C_r$.
Another observed variable is $P_r$, which represents the purchasing event. $P_r$ and $C_r$ both directly influence $S_{ur}$, which indicates whether the customer is already satisfied or not.
If not, it's considered that customers can continue examining the result set with a $\gamma$ probability factor.
Creating the DBN above is done through the following code:
```
import daft
from matplotlib import rc
rc("font", family="serif", size=12)
rc("text", usetex=True)
pgm = daft.PGM(grid_unit=4.0, node_unit=1.4)
# Start with the plates.
rect_params = {"lw": 2}
edge_params = {
'linewidth': 1
}
pgm.add_plate(
[0, 0, 3, 2],
label=r"\Large $r$",
rect_params=rect_params,
)
pgm.add_plate(
[3 + 0.2, 0, 3, 2],
label=r"\Large $r+1$",
rect_params=rect_params,
)
pgm.add_node("e_r", r"$E_r$", 0.5, 0.5, scale=1.5, fontsize=24)
pgm.add_node("e_r_1", r"$E_{r+1}$", 3.5 + 0.2, 0.5, scale=1.5, fontsize=24)
pgm.add_edge("e_r", "e_r_1", plot_params=edge_params)
pgm.add_node("c_r", r"$C_r$", 1.5, 1., scale=1.5, fontsize=24, observed=True)
pgm.add_node("c_r_1", r"$C_{r+1}$", 3.5 + 0.2 + 1, 1., scale=1.5, fontsize=24, observed=True)
pgm.add_edge("e_r", "c_r", plot_params=edge_params)
pgm.add_edge("e_r_1", "c_r_1", plot_params=edge_params)
pgm.add_node("a_r", r"$A_u$", 0.5, 1.5, scale=1.5, fontsize=24)
pgm.add_node("a_r_1", r"$A_{ur+1}$", 3.7, 1.5, scale=1.5, fontsize=24)
pgm.add_edge("a_r", "c_r", plot_params=edge_params)
pgm.add_edge("a_r_1", "c_r_1", plot_params=edge_params)
pgm.add_node("p_r", r"$P_{ur}$", 2.3, 1., scale=1.5, fontsize=24, observed=True)
pgm.add_node("p_r_1", r"$P_{ur+1}$", 3.5 + 2, 1., scale=1.5, fontsize=24, observed=True)
pgm.add_edge("c_r", "p_r", plot_params=edge_params)
pgm.add_edge("c_r_1", "p_r_1", plot_params=edge_params)
pgm.add_node("s_r", r"$S_{ur}$", 2., 1.7, scale=1.5, fontsize=24)
pgm.add_node("s_r_1", r"$S_{ur+1}$", 3.7 + 1.5, 1.7, scale=1.5, fontsize=24)
pgm.add_edge("c_r", "s_r", plot_params=edge_params)
pgm.add_edge("c_r_1", "s_r_1", plot_params=edge_params)
pgm.add_edge("p_r", "s_r", plot_params=edge_params)
pgm.add_edge("p_r_1", "s_r_1", plot_params=edge_params)
pgm.add_edge("s_r", "e_r_1", plot_params=edge_params)
# Render and save.
pgm.render()
pgm.savefig("dbn.png", dpi=150)
```
Here are the equations we'll be using for finding the parameters of the DBN.
\begin{align}
P\left(E_r=1\mid E_{r-1}=0\right) & = 0 \label{eq:1} \tag{1} \\
P\left(A_u=1\right) & = \alpha_{uq} \label{eq:2} \tag{2} \\
P\left(C_r=1\mid E_r=1, A_u=1\right) & = 1 \label{eq:3} \tag{3} \\
P\left(S_{r}=1\mid C_r=0,P_r=0\right) & = 0 \label{eq:4} \tag{4} \\
P\left(S_{r}=1\mid C_r=1,P_r=0\right) & = \sigma_{uq} \label{eq:5} \tag{5} \\
P\left(S_{r}=1\mid C_r=1,P_r=1\right) & = 1 \label{eq:6} \tag{6} \\
P\left(E_{r}=1\mid S_{r-1}=1\right) & = 0 \label{eq:7} \tag{7} \\
P\left(E_{r}=1\mid E_{r-1}=1,S_{r-1}=0\right) & = \gamma \label{eq:8} \tag{8} \\
P\left(C_r=1\right) = P\left(C_{r}=1\mid E_r=1\right)\cdot P\left(E_r=1\right) & = \alpha_{uq}\epsilon_{ru}\label{eq:9} \tag{9} \\
\end{align}
Each query and each sku carries an attractiveness factor $\alpha_{uq}$. When the customer interacts with a sku, there's a $\sigma_{uq}$ chance they'll enjoy it and end their browsing through the query result page.
If they are not satisfied, they continue browsing through with a probability of $\gamma$.
In this model, only clicks and purchases are observed which means all other variables are hidden; in such case we use EM optimization techniques to find values for each parameter that best describes observed data in terms of log-likelihood.
This being said, the log-likelihood is given by:
$$\ell\ell = \sum_{s \in S}log\left(\sum_{\textbf{X}}P\left(\textbf{X}, \textbf{C}^{(s)}, \textbf{P}^{(s)} \mid \Psi\right)\right)$$
Where $X$ represents the hidden variables, $C$ and $P$ are the observed clicks and purchases, and finally $\Psi$ represents all parameters used to model the data.
Finding the derivative of this equation is intractable thanks to the summation over the hidden variables. We then use the [Expectation-Maximization](https://towardsdatascience.com/inference-using-em-algorithm-d71cccb647bc) algorithm and aim to maximize the following $Q$ function:
$$Q = \sum_{s \in S} \mathbb{E}_{X|C^{(s)}}\left[logP\left(X, C^{(s)}, P^{(s)} \mid \Psi\right)\right]$$
In our case, as all variables are Bernoulli (either 0 or 1), each modeled by a parameter $\theta_c$ which translates the above to:
$$
Q(\theta_c) =\sum_{s \in S} \sum_{c_i \in s} \left(P\left(X_{c_i}^{(s)}=1, Par(X_{c_i}^{(s)}) = p \mid C^{(s)}, P^{(s)}, \Psi\right)log(\theta_c) + P\left(X_{c_i}^{(s)}=0, Par(X_{c_i}^{(s)}) = p \mid C^{(s)}, P^{(s)}, \Psi\right)log(1-\theta_c)\right) + Z
$$
We'll be using this equation in the maximization step, derive it to find new optimum values for each parameter of our model and repeat the process until either we reach convergence (usually set by no increment in loglikelihood metric) or by total amount of desired iterations.
The derivative to find new values is given by:
$$\theta_c^{(t+1)} = \frac{\sum_{s\in S}\sum_{c_i \in s}P\left(P(X_{c_i}^{(s)}=1, Par(X_{c_i}^{(s)})=p \mid C^{(s)}, P^{(s)}, \Psi\right)}{\sum_{s\in S}\sum_{c_i \in s}P\left(Par(X_{c_i}^{(s)})=p \mid C^{(s)}, P^{(s)}, \Psi\right)}\label{eq:10} \tag{10}$$
## Attractiveness $\alpha_{uq}$
We have that:
$$P(A_u = 1) = \alpha_{uq}$$
Given equations 1-9, we can also derive that:
\begin{equation}
\begin{split}
\epsilon_1 & = P(E_1=1) = 1 \\
\epsilon_{r+1} & = P(E_{r+1} =1) \\
& = P(E_{r+1} = 1 \mid E_r=1) \cdot P(E_r=1) \\
& = \epsilon_r P\left(E_{r+1}=1 \mid S_r = 0, E_r=1\right) \cdot P(S_r=0 \mid E_r=1) \\
& = \epsilon_r\gamma P(S_r=0 \mid E_r=1) \\
& = \epsilon_r\gamma \left(P\left(S_r=0 \mid C_r = 0, P_r = 0, E_r=1 \right)P\left(C_r=0, P_r=0 \mid E_r=1\right) + P\left(S_r=0 \mid C_r = 0, P_r = 1, E_r=1 \right)P\left(C_r=0, P_r=1 \mid E_r=1\right) + P\left(S_r=0 \mid C_r = 1, P_r = 0, E_r=1 \right)P\left(C_r=1, P_r=0 \mid E_r=1\right) + P\left(S_r=0 \mid C_r = 1, P_r = 1, E_r=1 \right)P\left(C_r=1, P_r=1 \mid E_r=1\right)\right) \\
& = \epsilon_r \gamma \left((1 - \alpha_{uq}) + (1 - \sigma_{uq})(1 - cr_{uq})\alpha_{uq} \right)
\end{split}\label{eq:11} \tag{11}
\end{equation}
Where $cr$ is the conversion rate of document $u$ for query $q$.
Given equation 10, we derive for the attractiveness parameter the following updating rule:
$$\alpha_{uq}^{t+1} = \frac{\sum_{s \in S_{uq}} P(A_u = 1 \mid C, P)}{|S_{uq}|} \label{eq:12} \tag{12}$$
But given the structure of the DBN, we can infer that if $C$ is observed then $A_u$ is independent of $P$ as the former is a parent for the attractiveness variable. We can use this to assert a simplified updating rule:
$$\alpha_{uq}^{t+1} = \frac{\sum_{s \in S_{uq}} P(A_u = 1 \mid C)}{|S_{uq}|} \label{eq:13} \tag{13}$$
Which can be developed as follows:
$$
\begin{equation}
\begin{split}
P(A_u = 1 \mid C) & = P(A_u = 1 \mid C_r, C_{>r}) \\
& = \unicode{x1D7D9}(C_r=1)\cdot P(A_u=1 \mid C_r = 1, C_{>r}) + \unicode{x1D7D9}(C_r=0)\cdot P(A_u=1 \mid C_r = 0, C_{>r}) \\
& = c_r + (1 - c_r) \cdot \left(\unicode{x1D7D9}(C_{>r}=1) \cdot P(A_u=1|C_r=0, C_{>r}=1) + \unicode{x1D7D9}(C_{>r}=0) \cdot P(A_u=1 \mid C_r=0, C_{>r}=0)\right) \\
& = c_r + (1 - c_r)(1 - c_{>r}) \cdot \frac{P(C_r=0, C_{>r}=0 \mid A_u=1) \cdot P(A_u=1)}{P(C_r=0, C_{>r} = 0)}
\end{split}\label{eq:14} \tag{14}
\end{equation}
$$
Where $c_r$ is the observed click on the current rank's document and $C_{>r}$ is a random variable that is 1 if there's any click at a rank greater than $r$ and 0 otherwise.
Now developing the numerator of (14) we have:
$$
\begin{equation}
\begin{split}
P(C_r=0, C_{>r}=0 \mid A_u=1) & = P(C_r=0, C_{>r}=0 \mid A_u=1, E_r=0) \cdot P(E_r=0) \\
& = P(E_r=0) = 1 - \epsilon_r
\end{split}\label{eq:14.1} \tag{14.1}
\end{equation}
$$
The equation above is derived from the fact that an attractive document is only not clicked if it's not examined.
The numerator is already solved, we still need to develop the denominator:
$$
\begin{equation}
\begin{split}
P\left(C_r=0, C_{>r}=0\right) = P(C_{\geq r}=0) = 1 - P(C_{\geq r} = 1)
\end{split}\label{eq:14.2} \tag{14.2}
\end{equation}
$$
$$
\begin{equation}
\begin{split}
P(C_{\geq r} = 1) = \epsilon_r \cdot X_r
\end{split}\label{eq:14.3} \tag{14.3}
\end{equation}
$$
$$
\begin{equation}
\begin{split}
X_r & = P(C_{\geq r}=1 \mid E_r=1) \\
& = P(C_r = 1 \mid E_r=1) + P(C_r=0, C_{\geq r+1}=1 \mid E_r=1) \\
& = \alpha_{uq} + P(C_{\geq r+1}=1 \mid C_r=0, E_r=1) \cdot P(C_r=0 \mid E_r=1) \\
& = \alpha_{uq} + P(C_{\geq r+1}=1 \mid E_{r+1}=1) \cdot P(E_{r+1}=1 \mid C_r=0, E_r=1) \cdot (1 - \alpha_{uq}) \\
& = \alpha_{uq} + (1 - \alpha_{uq})\gamma X_{r+1}
\end{split}\label{eq:14.4} \tag{14.4}
\end{equation}
$$
Finally, we have the updating rule for the attractiveness parameter:
$$ \alpha_{uq}^{(t+1)} = \frac{\sum_{s \in S_{uq}}\left(c_r^{(s)} + \left(1 - c_r^{(s)}\right)\left(1 - c_{>r}^{(s)}\right) \cdot \frac{\left(1 - \epsilon_r^{(t)}\right)\alpha_{uq}^{(t)}}{\left(1 - \epsilon_r^{(t)}X_r^{(t)} \right)} \right)}{|S_{uq}|} \label{eq:15} \tag{15}$$
Where $\epsilon_r$ is given by equation (11).
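As a concrete illustration (my own sketch, not code from the paper; all function names and toy numbers below are assumptions), the forward recursion for $\epsilon_r$ in equation (11), the backward recursion for $X_r$ in (14.4), and the per-session posterior from (14) could be computed like this:

```
import numpy as np

def examine_probs(alpha, sigma, cr, gamma):
    """Forward recursion (11): eps[r] = P(E_r = 1) for ranks 0..R-1."""
    R = len(alpha)
    eps = np.empty(R)
    eps[0] = 1.0
    for r in range(1, R):
        a, s, c = alpha[r-1], sigma[r-1], cr[r-1]
        eps[r] = eps[r-1] * gamma * ((1 - a) + (1 - s) * (1 - c) * a)
    return eps

def click_tail_probs(alpha, gamma):
    """Backward recursion (14.4): X[r] = P(any click at rank >= r | E_r = 1)."""
    R = len(alpha)
    X = np.empty(R + 1)
    X[R] = 0.0
    for r in range(R - 1, -1, -1):
        X[r] = alpha[r] + (1 - alpha[r]) * gamma * X[r + 1]
    return X[:R]

# toy session: 3 ranked documents, a click only at rank 1
alpha = np.array([0.3, 0.5, 0.2])   # current attractiveness estimates
sigma = np.array([0.4, 0.4, 0.4])   # current satisfaction estimates
cr    = np.array([0.1, 0.1, 0.1])   # conversion rates
gamma = 0.9
clicks = np.array([0, 1, 0])

eps = examine_probs(alpha, sigma, cr, gamma)
X = click_tail_probs(alpha, gamma)

# per-rank posterior P(A_u = 1 | C) from equation (14);
# click_after[r] is True if any click occurs strictly after rank r
click_after = (np.cumsum(clicks[::-1])[::-1] - clicks) > 0
post = clicks + (1 - clicks) * (1 - click_after) * \
       (1 - eps) * alpha / (1 - eps * X)
print(eps, X, post)
```

Averaging `post` over all sessions containing a given (document, query) pair would then give the update of equation (15).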
## Satisfaction $\sigma_{uq}$
In our presented DBN model, the satisfaction factor is only defined when:
$$ \sigma_{uq} = P(S_u=1 \mid C_r=1, P_r=0)$$
This means the updating rule for the satisfaction term is given by:
$$\sigma_{uq}^{(t+1)} = \frac{\sum_{s \in S'_{uq}}P(S_u=1 \mid C, P)}{|S'_{uq}|} \label{eq:16} \tag{16} $$
Which can be developed as:
$$
\begin{equation}
\begin{split}
P(S_u=1 \mid C, P) &= P(S_u = 1 \mid C_r=1, P_r=0, C_{>r}=0, P_{>r}=0) \\
&= (1 - c_{>r})\cdot P(S_u=1 \mid C_r=1, P_r=0, C_{>r}=0, P_{>r}=0) \\
&= (1 - c_{>r})\cdot \frac{P(C_{>r}=0, P_{>r}=0 \mid S_u=1, C_r=1, P_r=0) \cdot P(S_u=1 \mid C_r=1, P_r=0)}{P(C_{>r}=0, P_{>r}=0 \mid C_r=1, P_r=0)} \\
&= \frac{(1 - c_r)(1-p_r)\sigma_{uq}}{P(P_{>r}=0 \mid C_{>r}=0, C_r=1, P_r=0) \cdot P(C_{>r}=0 \mid C_r=1, P_r=0)} \\
&= \frac{(1 - c_r)(1-p_r)\sigma_{uq}}{1 - P(C_{\geq r+1}=1 \mid E_{r+1})\cdot P(E_{r+1}\mid C_r=1, P_r=0)} \\
&= \frac{(1 - c_r)(1-p_r)\sigma_{uq}}{(1 - X_{r+1}\cdot (1-\alpha_{uq})\gamma)}
\end{split}\label{eq:17} \tag{17}
\end{equation}
$$
Given equations (16) and (17), we derive that the updating rule is given by:
$$\sigma_{uq}^{(t+1)} = \frac{\sum_{s \in S^{[1, 0]}}\frac{(1 - c_r^{(t)})(1-p_r^{(t)})\sigma_{uq}^{(t)}}{(1 - X_{r+1}\cdot (1-\alpha_{uq}^{(t)})\gamma^{(t)})}}{|S^{[1, 0]}|} \label{eq:18} \tag{18}$$
Where $S^{[1, 0]}$ is the set of sessions of customers interactions where at rank $r$ there's an observed click and no purchase for document $u$ and query $q$.
## Persistence $\gamma$
Persistence is defined as:
$$\gamma = P(E_{r+1} = 1 \mid E_r = 1, S_{ur}=0)$$
The sufficient statistics for this parameter is defined as:
$$ESS(z) = \sum_{s \in S} \sum_r P(E_{r+1}=z, E_r=1, S_{ur}=0 \mid C, P) \label{eq:19} \tag{19}$$
There's no closed form for this equation, so we use some techniques in order to be able to compute it, like so:
$$
\begin{equation}
\begin{split}
ESS(z) &= \sum_{s \in S}\sum_{r}\frac{P(E_{r+1}=z, E_r=1, S_u=0, C, P)}{P(C, P)} \\
&= \sum_{s \in S}\sum_{r}\frac{P(E_{r+1}=z, E_r=1, S_u=0, C, P)}{\sum_x \sum_y \sum_z P(E_{r+1}=z,E_r=x, S_u=y, C, P)} \\
&= \sum_{s \in S}\sum_{r}\frac{P(E_{r+1}=z, E_r=1, S_u=0, C, P) \cdot \frac{1}{P(C_{<r}, P_{<r})}}{\sum_x \sum_y \sum_z P(E_{r+1}=z,E_r=x, S_u=y, C, P)\cdot \frac{1}{P(C_{<r}, P_{<r})}} \\
&= \sum_{s \in S}\sum_{r}\frac{\phi^{(s)}(1, 0, z)}{\sum_x \sum_y \sum_z \phi^{(s)}(x, y, z)}
\end{split}\tag{19}
\end{equation}
$$
The challenge then becomes finding a description for the $\phi$ factor. We derive it as follows:
$$
\begin{equation}
\begin{split}
\phi (x, y, z) &= P(E_r=x, C_r=c_r,P_r=p_r, S_u=y, E_{r+1}=z, C_{\geq {r+1}}, P_{\geq {r+1}} \mid C_{<r}, P_{<r}) \\
&= P(C_{>r}, P_{>r} \mid E_{r+1}=z, E_r=x, S_u=y, C_r=c_r, P_r=p_r) \cdot P(E_r=x, S_u=y, E_{r+1}=z, C_r=c_r, P_r=p_r \mid C_{<r}, P_{<r}) \\
&= P(C_{>r}, P_{>r} \mid E_{r+1}=z) \cdot P(E_{r+1}=z, S_u=y, C_r=c_r, P_r=p_r \mid E_r=x) \cdot P(E_r=x \mid C_{<r}, P_{<r})
\end{split}\label{eq:20} \tag{20}
\end{equation}
$$
To solve this equation we'll divide it into three parts: first $P(C_{>r}, P_{>r} \mid E_{r+1}=z)$, second $P(E_{r+1}=z, S_u=y, C_r=c_r, P_r=p_r \mid E_r=x)$, and finally $P(E_r=x \mid C_{<r}, P_{<r})$.
Let's first solve $P(E_r=x \mid C_{<r}, P_{<r})$:
$$
\begin{equation}
\begin{split}
P(E_{r+1}=1 \mid C_{<r+1}, P_{<r+1}) &= \epsilon_{r+1} = P(E_{r+1}=1 \mid E_r=1, C_{<r+1}, P_{<r+1})P(E_r=1 \mid C_{<r+1}, P_{<r+1}) \\
&=P(E_{r+1}=1 \mid E_r=1,C_r=1, P_r=0) \cdot P(E_r=1 \mid C_{<r+1}, P_{<r+1})c_r^{(s)}(1-p_r^{(s)}) + P(E_{r+1}=1 \mid E_r=1,C_r=0, P_r=0) \cdot P(E_r=1 \mid C_{<r+1}, P_{<r+1}) \cdot (1 - c_r^{(s)})(1-p_r^{(s)}) + P(E_{r+1}=1 \mid E_r=1,C_r=1, P_r=1) \cdot P(E_r=1 \mid C_{<r+1}, P_{<r+1})\cdot c_r^{(s)}p_r^{(s)} \\
&= (1 - \sigma_{uq})\gamma c_r^{(s)}(1-p_r^{(s)}) + \gamma P(E_r=1 \mid C_r=0, P_r=0, C_{<r}, P_{<r})(1-c_r^{(s)})(1 -p_r^{(s)})
\end{split}\label{eq:21} \tag{21}
\end{equation}
$$
To develop the inner probability equation, we have that:
$$
\begin{equation}
\begin{split}
P(E_r=1 \mid C_r=0, P_r=0, C_{<r}, P_{<r}) &= \frac{P(E_r=1, C_r=0, P_r=0 \mid C_{<r}, P_{<r})}{P(C_r=0, P_r=0 \mid C_{<r}, P_{<r})} \\
&= \frac{P(C_r=0, P_r=0 \mid E_r=1, C_{<r}, P_{<r}) \cdot P(E_r=1\mid C_{<r}, P_{<r})}{P(C_r=0, P_r=0\mid C_{<r}, P_{<r})}
\end{split}\label{eq:22} \tag{22}
\end{equation}
$$
Once again we need to develop the inner probability equation, which leads to:
$$
\begin{equation}
\begin{split}
P(C_r=0, P_r=0 \mid C_{<r}, P_{<r}) &= P(P_r=0 \mid C_r=0) \cdot P(C_r=0 \mid C_{<r}, P_{<r}) \\
&= P(C_r=0 \mid C_{<r}, P_{<r}) = 1 - P(C_r=1 \mid C_{<r}, P_{<r})\\
&= 1- P(C_r=1 \mid E_r=1, C_{<r}, P_{<r}) \cdot P(E_r=1 \mid P_{<r}, C_{<r}) \\
&= 1 - \alpha_{uq}\epsilon_r
\end{split}\label{eq:23} \tag{23}
\end{equation}
$$
Using 22 and 23 into 21 we finally get:
$$P(E_r =1\mid C_{<r}, P_{<r}) = (1 - \sigma_{uq}) \gamma c_r(1-p_r) + \frac{\gamma (1-\alpha_{uq})\epsilon_{r-1}}{(1 - \alpha_{uq} \epsilon_{r-1})}(1-c_r)(1-p_r) \label{eq:24} \tag{24}$$
Now we need to compute $P(C_{>r}, P_{>r} \mid E_{r+1}=z)$ which is derived as:
$$
\begin{equation}
\begin{split}
P(C_{>r}, P_{>r} \mid E_{r+1}=z) &= P(C_r, P_r\mid C_{r-1}, P_{r-1}, ..., E_l=1) \cdot P(C_{r-1}, P_{r-1}, E_l=1) \\
&= (1-\alpha \epsilon_{rl})\left((1-c_r)(1-p_r) + (1-w)(\alpha \epsilon_{rl}c_r(1-p_r)) + w \alpha \epsilon_{rl}c_r p_r \right) \cdot P(C_{r-1},P_{r-1} \mid C_{<r-1}, P_{<r-1}, E_l=1)
\end{split}\label{eq:25} \tag{25}
\end{equation}
$$
Where $w$ is the conversion rate of given sku on a given query.
```
from IPython.core.display import HTML,display
def css_styling():
styles = open("styles/custom.css", "r").read()
return HTML(styles)
css_styling()
```
| github_jupyter |
```
#@title ##### License
# Copyright 2018 The GraphNets Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
```
# Sort a list of elements
This notebook and the accompanying code demonstrates how to use the Graph Nets library to
learn to sort a list of elements.
The list of elements is treated as a fully connected graph, and the task is to
label the nodes and edges as a linked list. The network is trained to label the
start node, and which (directed) edges correspond to the links to the next
largest element, for each node.
After training, the network's prediction ability is illustrated by comparing its
output to the true sorted list. Then the network's ability to generalise is
tested, by using it to sort larger lists.
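Before the library code, a minimal NumPy sketch (my own, purely illustrative) of what the targets encode: the node target is a one-hot on the smallest value (the head of the linked list), and the edge target over the fully connected graph marks each "next largest" link:

```
import numpy as np

values = np.array([0.7, 0.1, 0.9, 0.4])
order = np.argsort(values)            # indices in sorted order: [1, 3, 0, 2]

# node target: one-hot on the smallest element (the list head)
node_target = np.zeros(len(values))
node_target[order[0]] = 1.0

# edge targets over the fully connected graph: edge (i, j) is True
# iff j is the next-largest element after i in sorted order
edge_target = np.zeros((len(values), len(values)))
for a, b in zip(order[:-1], order[1:]):
    edge_target[a, b] = 1.0

print(node_target)
print(edge_target)
```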
```
#@title ### Install the Graph Nets library on this Colaboratory runtime { form-width: "60%", run: "auto"}
#@markdown <br>1. Connect to a local or hosted Colaboratory runtime by clicking the **Connect** button at the top-right.<br>2. Choose "Yes" below to install the Graph Nets library on the runtime machine with:<br> ```pip install graph_nets```<br> Note, this works both with local and hosted Colaboratory runtimes.
install_graph_nets_library = "No" #@param ["Yes", "No"]
if install_graph_nets_library.lower() == "yes":
print("Installing Graph Nets library with:")
print(" $ pip install graph_nets\n")
print("Output message from command:\n")
!pip install graph_nets
else:
print("Skipping installation of Graph Nets library")
```
### Install dependencies locally
If you are running this notebook locally (i.e., not through Colaboratory), you will also need to install a few more dependencies. Run the following on the command line to install the graph networks library, as well as a few other dependencies:
```
pip install graph_nets matplotlib scipy
```
# Code
```
#@title Imports { form-width: "30%" }
# The demo dependencies are not installed with the library, but you can install
# them with:
#
# $ pip install jupyter matplotlib scipy
#
# Run the demo with:
#
# $ jupyter notebook <path>/<to>/<demos>/shortest_path.ipynb
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import collections
import time
from graph_nets import utils_np
from graph_nets import utils_tf
from graph_nets.demos import models
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
SEED = 1
np.random.seed(SEED)
tf.set_random_seed(SEED)
#@title Helper functions { form-width: "30%" }
# pylint: disable=redefined-outer-name
def create_graph_dicts_tf(num_examples, num_elements_min_max):
"""Generate graphs for training.
Args:
num_examples: total number of graphs to generate
num_elements_min_max: a 2-tuple with the minimum and maximum number of
values allowable in a graph. The number of values for a graph is
      uniformly sampled within this range. The upper bound is exclusive, and
should be at least 2 more than the lower bound.
Returns:
inputs: contains the generated random numbers as node values.
sort_indices: contains the sorting indices as nodes. Concretely
inputs.nodes[sort_indices.nodes] will be a sorted array.
ranks: the rank of each value in inputs normalized to the range [0, 1].
"""
num_elements = tf.random_uniform(
[num_examples],
minval=num_elements_min_max[0],
maxval=num_elements_min_max[1],
dtype=tf.int32)
inputs_graphs = []
sort_indices_graphs = []
ranks_graphs = []
for i in range(num_examples):
values = tf.random_uniform(shape=[num_elements[i]])
sort_indices = tf.cast(
tf.contrib.framework.argsort(values, axis=-1), tf.float32)
ranks = tf.cast(
tf.contrib.framework.argsort(sort_indices, axis=-1), tf.float32) / (
tf.cast(num_elements[i], tf.float32) - 1.0)
inputs_graphs.append({"nodes": values[:, None]})
sort_indices_graphs.append({"nodes": sort_indices[:, None]})
ranks_graphs.append({"nodes": ranks[:, None]})
return inputs_graphs, sort_indices_graphs, ranks_graphs
def create_linked_list_target(batch_size, input_graphs):
"""Creates linked list targets.
Returns a graph with the same number of nodes as `input_graph`. Each node
contains a 2d vector with targets for a 1-class classification where only one
node is `True`, the smallest value in the array. The vector contains two
values: [prob_true, prob_false].
It also contains edges connecting all nodes. These are again 2d vectors with
softmax targets [prob_true, prob_false]. An edge is True
if n+1 is the element immediately after n in the sorted list.
Args:
batch_size: batch size for the `input_graphs`.
input_graphs: a `graphs.GraphsTuple` which contains a batch of inputs.
Returns:
A `graphs.GraphsTuple` with the targets, which encode the linked list.
"""
target_graphs = []
for i in range(batch_size):
input_graph = utils_tf.get_graph(input_graphs, i)
num_elements = tf.shape(input_graph.nodes)[0]
si = tf.cast(tf.squeeze(input_graph.nodes), tf.int32)
nodes = tf.reshape(tf.one_hot(si[:1], num_elements), (-1, 1))
x = tf.stack((si[:-1], si[1:]))[None]
y = tf.stack(
(input_graph.senders, input_graph.receivers), axis=1)[:, :, None]
edges = tf.reshape(
tf.cast(
tf.reduce_any(tf.reduce_all(tf.equal(x, y), axis=1), axis=1),
tf.float32), (-1, 1))
target_graphs.append(input_graph._replace(nodes=nodes, edges=edges))
return utils_tf.concat(target_graphs, axis=0)
def compute_accuracy(target, output):
"""Calculate model accuracy.
Returns the number of correctly predicted links and the number
of completely solved list sorts (100% correct predictions).
Args:
target: A `graphs.GraphsTuple` that contains the target graph.
output: A `graphs.GraphsTuple` that contains the output graph.
Returns:
correct: A `float` fraction of correctly labeled nodes/edges.
solved: A `float` fraction of graphs that are completely correctly labeled.
"""
tdds = utils_np.graphs_tuple_to_data_dicts(target)
odds = utils_np.graphs_tuple_to_data_dicts(output)
cs = []
ss = []
for td, od in zip(tdds, odds):
num_elements = td["nodes"].shape[0]
xn = np.argmax(td["nodes"], axis=-1)
yn = np.argmax(od["nodes"], axis=-1)
xe = np.reshape(
np.argmax(
np.reshape(td["edges"], (num_elements, num_elements, 2)), axis=-1),
(-1,))
ye = np.reshape(
np.argmax(
np.reshape(od["edges"], (num_elements, num_elements, 2)), axis=-1),
(-1,))
c = np.concatenate((xn == yn, xe == ye), axis=0)
s = np.all(c)
cs.append(c)
ss.append(s)
correct = np.mean(np.concatenate(cs, axis=0))
solved = np.mean(np.stack(ss))
return correct, solved
def create_data_ops(batch_size, num_elements_min_max):
"""Returns graphs containing the inputs and targets for classification.
Refer to create_data_dicts_tf and create_linked_list_target for more details.
Args:
batch_size: batch size for the `input_graphs`.
num_elements_min_max: a 2-`tuple` of `int`s which define the [lower, upper)
range of the number of elements per list.
Returns:
inputs_op: a `graphs.GraphsTuple` which contains the input list as a graph.
targets_op: a `graphs.GraphsTuple` which contains the target as a graph.
sort_indices_op: a `graphs.GraphsTuple` which contains the sort indices of
      the list elements as a graph.
ranks_op: a `graphs.GraphsTuple` which contains the ranks of the list
elements as a graph.
"""
inputs_op, sort_indices_op, ranks_op = create_graph_dicts_tf(
batch_size, num_elements_min_max)
inputs_op = utils_tf.data_dicts_to_graphs_tuple(inputs_op)
sort_indices_op = utils_tf.data_dicts_to_graphs_tuple(sort_indices_op)
ranks_op = utils_tf.data_dicts_to_graphs_tuple(ranks_op)
inputs_op = utils_tf.fully_connect_graph_dynamic(inputs_op)
sort_indices_op = utils_tf.fully_connect_graph_dynamic(sort_indices_op)
ranks_op = utils_tf.fully_connect_graph_dynamic(ranks_op)
targets_op = create_linked_list_target(batch_size, sort_indices_op)
nodes = tf.concat((targets_op.nodes, 1.0 - targets_op.nodes), axis=1)
edges = tf.concat((targets_op.edges, 1.0 - targets_op.edges), axis=1)
targets_op = targets_op._replace(nodes=nodes, edges=edges)
return inputs_op, targets_op, sort_indices_op, ranks_op
def create_loss_ops(target_op, output_ops):
"""Returns graphs containing the inputs and targets for classification.
Refer to create_data_dicts_tf and create_linked_list_target for more details.
Args:
target_op: a `graphs.GraphsTuple` which contains the target as a graph.
output_ops: a `list` of `graphs.GraphsTuple`s which contains the model
outputs for each processing step as graphs.
Returns:
A `list` of ops which are the loss for each processing step.
"""
  if not isinstance(output_ops, collections.abc.Sequence):
output_ops = [output_ops]
loss_ops = [
tf.losses.softmax_cross_entropy(target_op.nodes, output_op.nodes) +
tf.losses.softmax_cross_entropy(target_op.edges, output_op.edges)
for output_op in output_ops
]
return loss_ops
def make_all_runnable_in_session(*args):
"""Lets an iterable of TF graphs be output from a session as NP graphs."""
return [utils_tf.make_runnable_in_session(a) for a in args]
def plot_linked_list(ax, graph, sort_indices):
"""Plot a networkx graph containing weights for the linked list probability."""
nd = len(graph.nodes())
probs = np.zeros((nd, nd))
for edge in graph.edges(data=True):
probs[edge[0], edge[1]] = edge[2]["features"][0]
ax.matshow(probs[sort_indices][:, sort_indices], cmap="viridis")
ax.grid(False)
# pylint: enable=redefined-outer-name
#@title Visualize the sort task { form-width: "30%" }
tf.reset_default_graph()
num_elements_min_max = (5, 10)
inputs_op, targets_op, sort_indices_op, ranks_op = create_data_ops(
1, num_elements_min_max)
inputs_op, targets_op, sort_indices_op, ranks_op = make_all_runnable_in_session(
inputs_op, targets_op, sort_indices_op, ranks_op)
with tf.Session() as sess:
inputs_nodes, sort_indices_nodes, ranks_nodes, targets = sess.run(
[inputs_op.nodes, sort_indices_op.nodes, ranks_op.nodes, targets_op])
sort_indices = np.squeeze(sort_indices_nodes).astype(int)
# Plot sort linked lists.
# The matrix plots show each element from the sorted list (rows), and which
# element they link to as next largest (columns). Ground truth is a diagonal
# offset toward the upper-right by one.
fig = plt.figure(1, figsize=(4, 4))
fig.clf()
ax = fig.add_subplot(1, 1, 1)
plot_linked_list(ax,
utils_np.graphs_tuple_to_networkxs(targets)[0], sort_indices)
ax.set_title("Element-to-element links for sorted elements")
ax.set_axis_off()
fig = plt.figure(2, figsize=(10, 2))
fig.clf()
ax1 = fig.add_subplot(1, 3, 1)
ax2 = fig.add_subplot(1, 3, 2)
ax3 = fig.add_subplot(1, 3, 3)
i = 0
num_elements = ranks_nodes.shape[0]
inputs = np.squeeze(inputs_nodes)
ranks = np.squeeze(ranks_nodes * (num_elements - 1.0)).astype(int)
x = np.arange(inputs.shape[0])
ax1.set_title("Inputs")
ax1.barh(x, inputs, color="b")
ax1.set_xlim(-0.01, 1.01)
ax2.set_title("Sorted")
ax2.barh(x, inputs[sort_indices], color="k")
ax2.set_xlim(-0.01, 1.01)
ax3.set_title("Ranks")
ax3.barh(x, ranks, color="r")
_ = ax3.set_xlim(0, len(ranks) + 0.5)
#@title Set up model training and evaluation { form-width: "30%" }
# The model we explore includes three components:
# - An "Encoder" graph net, which independently encodes the edge, node, and
# global attributes (does not compute relations etc.).
# - A "Core" graph net, which performs N rounds of processing (message-passing)
# steps. The input to the Core is the concatenation of the Encoder's output
# and the previous output of the Core (labeled "Hidden(t)" below, where "t" is
# the processing step).
# - A "Decoder" graph net, which independently decodes the edge, node, and
# global attributes (does not compute relations etc.), on each
# message-passing step.
#
# Hidden(t) Hidden(t+1)
# | ^
# *---------* | *------* | *---------*
# | | | | | | | |
# Input --->| Encoder | *->| Core |--*->| Decoder |---> Output(t)
# | |---->| | | |
# *---------* *------* *---------*
#
# The model is trained by supervised learning. Input graphs are procedurally
# generated, and output graphs have the same structure with the nodes and edges
# of the linked list labeled (using 2-element 1-hot vectors). The target
# labels the node corresponding to the lowest value in the list, and labels
# each edge which represents the connection between neighboring values in the
# sorted list.
#
# The training loss is computed on the output of each processing step. The
# reason for this is to encourage the model to try to solve the problem in as
# few steps as possible. It also helps make the output of intermediate steps
# more interpretable.
#
# There's no need for a separate evaluation dataset because the inputs are
# never repeated, so the training loss is the measure of performance on graphs
# from the input distribution.
#
# We also evaluate how well the model generalizes to lists which are up to
# twice as large as those on which it was trained. The loss is computed only
# on the final processing step.
#
# Variables with the suffix _tr are training parameters, and variables with the
# suffix _ge are test/generalization parameters.
#
# After around 2000-5000 training iterations the model reaches near-perfect
# performance on lists with between 8-16 elements.
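# The encode-process-decode loop described above can be sketched with toy
# stand-in functions (purely illustrative; the real Encoder/Core/Decoder are
# graph nets): the core repeatedly consumes the encoder output together with
# its previous hidden state, and the decoder emits an output at every step,
# which is why the loss below can be computed per processing step.
def _toy_encode(x):
    return 2.0 * x  # stand-in encoder

def _toy_core(hidden, latent):
    return 0.5 * (hidden + latent)  # stand-in recurrent core

def _toy_decode(hidden):
    return hidden - 1.0  # stand-in decoder

_latent = _toy_encode(3.0)
_hidden = _latent
_toy_outputs = []
for _ in range(3):  # three "message-passing" steps
    _hidden = _toy_core(_hidden, _latent)
    _toy_outputs.append(_toy_decode(_hidden))  # one supervised output per step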
tf.reset_default_graph()
# Model parameters.
# Number of processing (message-passing) steps.
num_processing_steps_tr = 10
num_processing_steps_ge = 10
# Data / training parameters.
num_training_iterations = 10000
batch_size_tr = 32
batch_size_ge = 100
# Number of elements in each list is sampled uniformly from this range.
num_elements_min_max_tr = (8, 17)
num_elements_min_max_ge = (16, 33)
# Data.
# Training.
inputs_op_tr, targets_op_tr, sort_indices_op_tr, _ = create_data_ops(
batch_size_tr, num_elements_min_max_tr)
inputs_op_tr = utils_tf.set_zero_edge_features(inputs_op_tr, 1)
inputs_op_tr = utils_tf.set_zero_global_features(inputs_op_tr, 1)
# Test/generalization.
inputs_op_ge, targets_op_ge, sort_indices_op_ge, _ = create_data_ops(
batch_size_ge, num_elements_min_max_ge)
inputs_op_ge = utils_tf.set_zero_edge_features(inputs_op_ge, 1)
inputs_op_ge = utils_tf.set_zero_global_features(inputs_op_ge, 1)
# Connect the data to the model.
# Instantiate the model.
model = models.EncodeProcessDecode(edge_output_size=2, node_output_size=2)
# A list of outputs, one per processing step.
output_ops_tr = model(inputs_op_tr, num_processing_steps_tr)
output_ops_ge = model(inputs_op_ge, num_processing_steps_ge)
# Loss.
loss_ops_tr = create_loss_ops(targets_op_tr, output_ops_tr)
loss_op_tr = sum(loss_ops_tr) / num_processing_steps_tr  # average loss over all processing steps
loss_ops_ge = create_loss_ops(targets_op_ge, output_ops_ge)
loss_op_ge = loss_ops_ge[-1]
# Optimizer.
learning_rate = 1e-3
optimizer = tf.train.AdamOptimizer(learning_rate)
step_op = optimizer.minimize(loss_op_tr)
# Lets an iterable of TF graphs be output from a session as NP graphs.
inputs_op_tr, targets_op_tr, sort_indices_op_tr = make_all_runnable_in_session(
inputs_op_tr, targets_op_tr, sort_indices_op_tr)
inputs_op_ge, targets_op_ge, sort_indices_op_ge = make_all_runnable_in_session(
inputs_op_ge, targets_op_ge, sort_indices_op_ge)
#@title Reset session { form-width: "30%" }
# This cell resets the Tensorflow session, but keeps the same computational
# graph.
try:
sess.close()
except NameError:
pass
sess = tf.Session()
sess.run(tf.global_variables_initializer())
last_iteration = 0
logged_iterations = []
losses_tr = []
corrects_tr = []
solveds_tr = []
losses_ge = []
corrects_ge = []
solveds_ge = []
#@title Run training steps { form-width: "30%" }
# You can interrupt this cell's training loop at any time, and visualize the
# intermediate results by running the next cell (below). You can then resume
# training by simply executing this cell again.
# How much time between logging and printing the current results.
log_every_seconds = 20
print("# (iteration number), T (elapsed seconds), "
"Ltr (training loss), Lge (test/generalization loss), "
"Ctr (training fraction nodes/edges labeled correctly), "
"Str (training fraction examples solved correctly), "
"Cge (test/generalization fraction nodes/edges labeled correctly), "
"Sge (test/generalization fraction examples solved correctly)")
start_time = time.time()
last_log_time = start_time
for iteration in range(last_iteration, num_training_iterations):
last_iteration = iteration
train_values = sess.run({
"step": step_op,
"inputs": inputs_op_tr,
"targets": targets_op_tr,
"sort_indices": sort_indices_op_tr,
"loss": loss_op_tr,
"outputs": output_ops_tr
})
the_time = time.time()
elapsed_since_last_log = the_time - last_log_time
if elapsed_since_last_log > log_every_seconds:
last_log_time = the_time
test_values = sess.run({
"targets": targets_op_ge,
"loss": loss_op_ge,
"outputs": output_ops_ge,
})
correct_tr, solved_tr = compute_accuracy(train_values["targets"],
train_values["outputs"][-1])
correct_ge, solved_ge = compute_accuracy(test_values["targets"],
test_values["outputs"][-1])
elapsed = time.time() - start_time
losses_tr.append(train_values["loss"])
corrects_tr.append(correct_tr)
solveds_tr.append(solved_tr)
losses_ge.append(test_values["loss"])
corrects_ge.append(correct_ge)
solveds_ge.append(solved_ge)
logged_iterations.append(iteration)
print("# {:05d}, T {:.1f}, Ltr {:.4f}, Lge {:.4f}, Ctr {:.4f}, "
"Str {:.4f}, Cge {:.4f}, Sge {:.4f}".format(
iteration, elapsed, train_values["loss"], test_values["loss"],
correct_tr, solved_tr, correct_ge, solved_ge))
#@title Visualize results { form-width: "30%" }
# This cell visualizes the results of training. You can visualize the
# intermediate results by interrupting execution of the cell above, and running
# this cell. You can then resume training by simply executing the above cell
# again.
# Plot results curves.
fig = plt.figure(11, figsize=(18, 3))
fig.clf()
x = np.array(logged_iterations)
# Loss.
y_tr = losses_tr
y_ge = losses_ge
ax = fig.add_subplot(1, 3, 1)
ax.plot(x, y_tr, "k", label="Training")
ax.plot(x, y_ge, "k--", label="Test/generalization")
ax.set_title("Loss across training")
ax.set_xlabel("Training iteration")
ax.set_ylabel("Loss (binary cross-entropy)")
ax.legend()
# Correct.
y_tr = corrects_tr
y_ge = corrects_ge
ax = fig.add_subplot(1, 3, 2)
ax.plot(x, y_tr, "k", label="Training")
ax.plot(x, y_ge, "k--", label="Test/generalization")
ax.set_title("Fraction correct across training")
ax.set_xlabel("Training iteration")
ax.set_ylabel("Fraction nodes/edges correct")
# Solved.
y_tr = solveds_tr
y_ge = solveds_ge
ax = fig.add_subplot(1, 3, 3)
ax.plot(x, y_tr, "k", label="Training")
ax.plot(x, y_ge, "k--", label="Test/generalization")
ax.set_title("Fraction solved across training")
ax.set_xlabel("Training iteration")
ax.set_ylabel("Fraction examples solved")
# Plot sort linked lists for test/generalization.
# The matrix plots show each element from the sorted list (rows), and which
# element they link to as next largest (columns). Ground truth is a diagonal
# offset toward the upper-right by one.
outputs = utils_np.graphs_tuple_to_networkxs(train_values["outputs"][-1])
targets = utils_np.graphs_tuple_to_networkxs(train_values["targets"])
inputs = utils_np.graphs_tuple_to_networkxs(train_values["inputs"])
batch_element = 0
fig = plt.figure(12, figsize=(8, 4.5))
fig.clf()
ax1 = fig.add_subplot(1, 2, 1)
ax2 = fig.add_subplot(1, 2, 2)
sort_indices = np.squeeze(
utils_np.get_graph(train_values["sort_indices"],
batch_element).nodes).astype(int)
fig.suptitle("Element-to-element link predictions for sorted elements")
plot_linked_list(ax1, targets[batch_element], sort_indices)
ax1.set_title("Ground truth")
ax1.set_axis_off()
plot_linked_list(ax2, outputs[batch_element], sort_indices)
ax2.set_title("Predicted")
ax2.set_axis_off()
```
<a href="https://colab.research.google.com/github/bxck75/Python_Helpers/blob/master/topper.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
''' install 3dparty sheit '''
from IPython.display import clear_output as cle
from pprint import pprint as print
from PIL import Image
import os
import sys
import json
import IPython
''' default sample data delete '''
os.system('rm -r sample_data')
''' set root paths '''
root = '/content'
gdrive_root = '/content/drive/My Drive'
helpers_root = root + '/installed_repos/Python_Helpers'
''' setup install the Helpers module '''
os.system('git clone https://github.com/bxck75/Python_Helpers.git ' + helpers_root)
os.system('python ' + helpers_root + '/setup.py install')
''' import helpers '''
os.chdir(helpers_root)
import main as main_core
MainCore = main_core.main()
HelpCore = MainCore.Helpers_Core
FScrape = HelpCore.flickr_scrape
fromGdrive = HelpCore.GdriveD
toGdrive = HelpCore.ZipUp.ZipUp
cle()
dir(HelpCore)
FScrape(['Ork','Troll','Dragon'], 25, '/content/images')
imgs_path_list = HelpCore.GlobX('/content/images', '*.*g')
print(imgs_path_list)
dir(HelpCore)
print(MainCore.Helpers_Core.cloner('/content/images'))
# def LandMarks(img_dir,out_dir):
# ''' Folder glob and Landmark all found imgs '''
# imgs_path_list = HelpCore.GlobX(img_dir,'*.*g')
# imgs_path_list.sort()
# i=0
# for i in range(len(imgs_path_list)):
# ''' make folders '''
# img_pathAr = imgs_path_list[i]
# # img_pathAr.Split(Path.DirectorySeparatorChar) # returns array of the path
# # img_pathAr.Lenth - 2
# # print(img_pathAr[2])
# os.system('mkdir -p '+os.path.join(out_dir,'/org'))
# os.system('mkdir -p '+os.path.join(out_dir,'/marked'))
# ''' backup original '''
# os.system('cp imgs_path_list[i] '+out_dir + '/org')
# ''' loop over images '''
# img = cv.imread(imgs_path_list[i])
# for c, w, h in img.shape:
# print(3, w, h)
# if img is None:
# print('Failed to load image file:', fname)
# sys.exit(1)
# fork_img = img
# ''' Mark the image '''
# gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
# lsd = cv.line_descriptor_LSDDetector.createLSDDetector()
# lines = lsd.detect(gray, 1, 1)
# for kl in lines:
# if kl.octave == 0:
# pt1 = (int(kl.startPointX), int(kl.startPointY))
# pt2 = (int(kl.endPointX), int(kl.endPointY))
# cv.line(fork_img, pt1, pt2, [255, 0, 0], 2)
# cv.waitKey(0)
# cv.destroyAllWindows()
# cv.imwrite('nice.jpg',img)
# # marked
# cv.imwrite('nice.jpg',img)
# i += 1
# def org_n_marked_clone(img_path,id=0):
# ''' backup the originals '''
# drive, path_and_file = os.path.splitdrive(img_path)
# path, file = os.path.split(path_and_file)
# fi, ex = file.split(.)
# fi.rstrip(string.digits)
# ''' compose the new paths '''
# org_path = path + '/org/' + fi + '_%d' % id
# marked_path = path + '/marked/' + fi + '_%d' % id
# ''' return the list '''
# return [org_path,marked_path]
# org_n_marked_clone('/content/images/img_1.jpg')
# os.path.join(path + '/org', file)  # leftover from the commented-out helpers above
import sys
import cv2 as cv
if __name__ == '__main__':
print(__doc__)
fname = '/content/images/img_1.jpg'
img = cv.imread(fname)
if img is None:
print('Failed to load image file:', fname)
sys.exit(1)
gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
lsd = cv.line_descriptor_LSDDetector.createLSDDetector()
lines = lsd.detect(gray, 1, 1)
for kl in lines:
if kl.octave == 0:
# cv.line only accepts integer coordinate
pt1 = (int(kl.startPointX), int(kl.startPointY))
pt2 = (int(kl.endPointX), int(kl.endPointY))
cv.line(img, pt1, pt2, [255, 0, 0], 2)
# plt.imshow('output', img)
cv.waitKey(0)
cv.destroyAllWindows()
cv.imwrite('nice.jpg',img)
from google.colab import drive
drive.mount('/content/drive')
help(HelpCore)
help(FScrape)
# search_list,img_dir,qty = ['zombie'], 'images', 21
# FScrape(search_list,qty,img_dir)
funcs=[
'BigHelp',
'Colab_root',
'ColorPrint',
'FileView',
'FlickrS',
'GdriveD',
'Gdrive_root',
'GlobX',
'GooScrape',
'ImgCrawler',
'ImgTools',
'LogGER',
'Logger',
'MethHelp',
'Ops',
'Repo_List',
'Resize',
'Sys_Cmd',
'Sys_Exec',
'ZipUp',
]
def img_show_folder(folder):
# fname = '/content/images/img_1.jpg'
folder_path = Path(folder)
GlobX(folder_path,'*.*g')
for base, dirs, files in os.walk('/content/images'):
files.sort()
for i in range(len(files)):
print(base+'/'+files[i])
img = cv.imread(base+'/'+files[i])
plt.imshow(img,cmap=dark2)
plt.show()
HelpCore.GlobX('/content/images','*.*g')
# search_list, img_dir, qty = ['zombie'], 'images', 200
# FScrape(search_list, qty, img_dir)
# toGdrive('cv2_samples','/content/drive/My Drive','/content/installed_repos/opencv/samples')
# Load zipper
# Zipper = toGdrive.GdriveD
# Zip folder
# images_set_name, gdrive_folder, folder_to_zip = 'cv2_samples', '/content/drive/My Drive', '/content/installed_repos/opencv/samples'
# result=Zipper(images_set_name,gdrive_folder,folder_to_zip).ZipUp
# Print Resulting hash
# print(result)  # `result` comes from the commented-out Zipper call above
dir(toGdrive)
# HelpCore.GlobX('/content', '*.py')
!python /content/installed_repos/opencv/samples/dnn/segmentation.py --zoo --input --framework 'tensorflow'
%cd /content/installed_repos/opencv/samples/dnn
!cat segmentation.py
```
```
parser = argparse.ArgumentParser(add_help=False)
parser.add_argument('--zoo', default=os.path.join(os.path.dirname(os.path.abspath(__file__)), 'models.yml'),
                    help='An optional path to file with preprocessing parameters.')
parser.add_argument('--input', help='Path to input image or video file. Skip this argument to capture frames from a camera.')
parser.add_argument('--framework', choices=['caffe', 'tensorflow', 'torch', 'darknet'],
                    help='Optional name of an origin framework of the model. '
                         'Detect it automatically if it does not set.')
parser.add_argument('--colors', help='Optional path to a text file with colors for an every class. '
                                     'An every color is represented with three values from 0 to 255 in BGR channels order.')
parser.add_argument('--backend', choices=backends, default=cv.dnn.DNN_BACKEND_DEFAULT, type=int,
                    help="Choose one of computation backends: "
                         "%d: automatically (by default), "
                         "%d: Halide language (http://halide-lang.org/), "
                         "%d: Intel's Deep Learning Inference Engine (https://software.intel.com/openvino-toolkit), "
                         "%d: OpenCV implementation" % backends)
parser.add_argument('--target', choices=targets, default=cv.dnn.DNN_TARGET_CPU, type=int,
                    help='Choose one of target computation devices: '
                         '%d: CPU target (by default), '
                         '%d: OpenCL, '
                         '%d: OpenCL fp16 (half-float precision), '
                         '%d: VPU' % targets)
args, _ = parser.parse_known_args()
```
```
# !wget https://drive.google.com/open?id=1KNfN-ktxbPJMtmdiL-I1WW0IO1B_2EG2
# landmarks_file=['1KNfN-ktxbPJMtmdiL-I1WW0IO1B_2EG2','/content/shape_predictor_68_face_landmarks.dat']
# fromGdrive.GdriveD(landmarks_file[0],landmarks_file[1])
import cv2
import numpy
import dlib
import matplotlib.pyplot as plt
cap = cv2.VideoCapture(0)  # the loop below needs this capture object (requires a camera)
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("/content/shape_predictor_68_face_landmarks.dat")
while True:
_, frame = cap.read()
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = detector(gray)
for face in faces:
x1 = face.left()
y1 = face.top()
x2 = face.right()
y2 = face.bottom()
#cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 3)
landmarks = predictor(gray, face)
for n in range(0, 68):
x = landmarks.part(n).x
y = landmarks.part(n).y
cv2.circle(frame, (x, y), 4, (255, 0, 0), -1)
plt.imshow("Frame", frame)
key = cv2.waitKey(1)
if key == 27:
break
cap.release()
cv2.destroyAllWindows()
''' install 3dparty sheit '''
from IPython.display import clear_output as cle
from pprint import pprint as print
from PIL import Image
import os
import sys
import json
import IPython
''' default sample data delete '''
os.system('rm -r sample_data')
''' set root paths '''
root = '/content'
gdrive_root = '/content/drive/My Drive'
helpers_root = root + '/installed_repos/Python_Helpers'
''' setup install the Helpers module '''
os.system('git clone https://github.com/bxck75/Python_Helpers.git ' + helpers_root)
os.system('python ' + helpers_root + '/setup.py install')
os.chdir(helpers_root)
from main import main
landmarks_68_file = '1KNfN-ktxbPJMtmdiL-I1WW0IO1B_2EG2'
landmarks_194_file = '1fMOT_0f5clPbZXsphZyrGcLXkIhSDl3o'
os.chdir(root)
images_set_name, gdrive_folder, folder_to_zip = 'cv2_samples', '/content/drive/My Drive', '/content/installed_repos/opencv/samples/dnn/*'
results=HelpCore.ZipUp.ZipUp(images_set_name,gdrive_folder,folder_to_zip).ZipUp
print(results)
!zip -r cv2_examples.zip /content/installed_repos/opencv/samples/dnn /content/installed_repos/opencv/samples/python
```
Note: All of the model fits presented here have been pre-run using the script `fit_choice_models.py`. This can take a lot of time to run.
```
%matplotlib inline
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
sns.set_context('paper')
fig_size = (5.0, 2.5)
```
# Experiment 1
```
loo = pd.read_pickle('Data/model_fits/model_fits_exp_lin.pkl')
summary = pd.read_pickle('Data/model_fits/model_params_exp_lin_rbf_cls.pkl')
loo.rename(index={'Kalman w/SC':'Simple Kalman'}, inplace=True)
#note the "Simple Kalman" is modeled without sticky choice and wasn't included in the paper
loo.drop('Simple Kalman', inplace=True)
loo.drop('Scrambled', inplace=True)
# calculate chance
n = len(pd.read_csv('Data/exp_linear/rbfpred.csv')) # n_subj * n_trials_per_subj
n_arms = 8
chance_loo = -2 * n * np.log(1. / n_arms)
print("Chance: {}".format(chance_loo))
loo.sort_values('LOO', ascending=False, inplace=True)
loo['dLOO'] = loo.LOO - np.min(loo.LOO)
loo['LOO dLOO LOO_se'.split()]
def plot_results(loo, summary, chance_loo, vars_=None, labels=None):
_loo = loo.sort_values('LOO')
_loo[r'pseudo-$r^2$'] = 1 - _loo.LOO / chance_loo
_loo['r2_err'] = _loo.LOO_se / chance_loo
fig, axes = plt.subplots(1, 2, figsize=fig_size, gridspec_kw=dict(wspace=0.5))
ax = axes[0]
min_r2 = np.max(_loo[r'pseudo-$r^2$'])
with sns.axes_style('ticks'):
plt.sca(ax)
ax.barh(range(len(_loo[r'pseudo-$r^2$'].values)),
_loo[r'pseudo-$r^2$'].values, color='grey', align='center')
ax.set_yticks(range(len(_loo.LOO.values)))
ax.set_yticklabels(_loo.index.tolist())
ax.errorbar(y = range(len(_loo.LOO.values)),
x = _loo[r'pseudo-$r^2$'].values,
xerr = _loo['r2_err'].values, linestyle='None',
color='k',
)
xlb, xub = plt.gca().get_xlim()
ylb, yub = plt.gca().get_ylim()
ax.plot([min_r2, min_r2],[ylb, yub], 'r:')
ax.set_xticks([0., 0.2, 0.4, 0.6])
sns.despine()
ax.set_xlabel(r'pseudo-$r^2$')
plt.xticks(rotation=45)
#### second plot
ax = axes[1]
plt.sca(ax)
if vars_ is None:
vars_ = ['mu_beta_rbf_mean', 'mu_beta_cls_mean',
'mu_beta_rbf_stdv', 'mu_beta_cls_stdv',]
if labels is None:
labels = ['RBF Mean', 'Cls Mean', 'RBF Std', 'Cls Std']
y = summary.loc[vars_, 'mean'].values
plt.bar(range(4), y, color='skyblue')
for ii, b in enumerate(vars_):
ylb = summary.loc[b, 'hpd_2.5']
yub = summary.loc[b, 'hpd_97.5']
ax.plot([ii,ii], [ylb, yub], 'k', linewidth=2)
plt.xticks(range(len(y)))
ax.set_xticklabels(labels)
ax.axhline(y=0, color='k', linewidth=1)
plt.ylabel('Parameter Estimate')
plt.xticks(rotation=90)
ax.xaxis.label.set_visible(False)
return fig
_ = plot_results(loo, summary, chance_loo)
plt.savefig('fig_model_fits_exp_linear.pdf', dpi=300, bbox_inches='tight')
```
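The chance baseline and pseudo-$r^2$ used in the cell above can be sanity-checked with toy numbers (a sketch; the model deviance here is made up):

```python
import numpy as np

n, n_arms = 100, 8                          # 100 choices among 8 arms
chance_loo = -2 * n * np.log(1.0 / n_arms)  # deviance of uniform random guessing
model_loo = 0.6 * chance_loo                # hypothetical model deviance
pseudo_r2 = 1 - model_loo / chance_loo      # fraction of chance deviance explained
print(round(pseudo_r2, 2))                  # ~0.4
```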
# Experiment Change Point
```
loo = pd.read_pickle('Data/model_fits/model_fits_exp_cp.pkl')
summary = pd.read_pickle('Data/model_fits/model_params_exp_cp_rbf_cls.pkl')
# vars_ = ['mu_beta_lin_mean', 'mu_beta_cls_mean',
# 'mu_beta_lin_stdv', 'mu_beta_cls_stdv',]
# labels = ['Lin Mean', 'Cls Mean', 'Lin Std', 'Cls Std']
# calculate chance
n = len(pd.read_csv('Data/exp_changepoint/changerbfpred.csv')) # n_subj * n_trials_per_subj
n_arms = 8
chance_loo = -2 * n * np.log(1. / n_arms)
print("Chance: {}".format(chance_loo))
loo.sort_values('LOO', ascending=False, inplace=True)
loo['dLOO'] = loo.LOO - np.min(loo.LOO)
loo['LOO dLOO LOO_se'.split()]
_ = plot_results(loo, summary, chance_loo)
plt.savefig('fig_model_fits_exp_cp.pdf', dpi=300, bbox_inches='tight')
```
# Experiment Scrambled
```
loo = pd.read_pickle('Data/model_fits/model_fits_exp_scram.pkl')
summary = pd.read_pickle('Data/model_fits/model_params_exp_scram_rbf_cls.pkl')
loo.rename(index={'Kalman w/SC':'Simple Kalman'}, inplace=True)
#note the "Simple Kalman" is modeled without sticky choice and wasn't included in the paper
loo.drop('Simple Kalman', inplace=True)
loo.drop('Scrambled', inplace=True)
# calculate chance
n = len(pd.read_csv('Data/exp_scrambled/gprbfscrambled.csv')) # n_subj * n_trials_per_subj
n_arms = 8
chance_loo = -2 * n * np.log(1. / n_arms)
print("Chance: {}".format(chance_loo))
loo.sort_values('LOO', ascending=False, inplace=True)
loo['dLOO'] = loo.LOO - np.min(loo.LOO)
loo['LOO dLOO LOO_se'.split()]
_ = plot_results(loo, summary, chance_loo)
plt.savefig('fig_model_fits_exp_scrambled.pdf', dpi=300, bbox_inches='tight')
```
# Experiment Shifted
```
loo = pd.read_pickle('Data/model_fits/model_fits_exp_shifted.pkl')
summary = pd.read_pickle('Data/model_fits/model_params_exp_shifted_rbf_cls.pkl')
loo.rename(index={'Kalman w/SC':'Simple Kalman'}, inplace=True)
#note the "Simple Kalman" is modeled without sticky choice and wasn't included in the paper
loo.drop('Scrambled', inplace=True)
# calculate chance
n = len(pd.read_csv('Data/exp_shifted/gprbfshifted.csv')) # n_subj * n_trials_per_subj
n_arms = 8
chance_loo = -2 * n * np.log(1. / n_arms)
print("Chance: {}".format(chance_loo))
loo.sort_values('LOO', ascending=False, inplace=True)
loo['dLOO'] = loo.LOO - np.min(loo.LOO)
loo['LOO dLOO LOO_se'.split()]
_ = plot_results(loo, summary, chance_loo)
plt.savefig('fig_model_fits_exp_shifted.pdf', dpi=300, bbox_inches='tight')
```
# Experiment SRS
```
loo = pd.read_pickle('Data/model_fits/model_fits_exp_srs.pkl')
summary = pd.read_pickle('Data/model_fits/model_params_exp_srs_rbf_cls.pkl')
#note the "Simple Kalman" is modeled without sticky choice and wasn't included in the paper
loo.drop('Scrambled', inplace=True)
# calculate chance
n = len(pd.read_csv('Data/exp_srs/gprbfsrs.csv')) # n_subj * n_trials_per_subj
n_arms = 8
chance_loo = -2 * n * np.log(1. / n_arms)
print("Chance: {}".format(chance_loo))
loo.sort_values('LOO', ascending=False, inplace=True)
loo['dLOO'] = loo.LOO - np.min(loo.LOO)
loo['LOO dLOO LOO_se'.split()]
_ = plot_results(loo, summary, chance_loo)
plt.savefig('fig_model_fits_exp_srs.pdf', dpi=300, bbox_inches='tight')
np.log2(np.exp(1.5/-2))
loo = pd.read_pickle('Data/model_fits/model_fits_exp_cp.pkl') \
+ pd.read_pickle('Data/model_fits/model_fits_exp_srs.pkl') \
+ pd.read_pickle('Data/model_fits/model_fits_exp_lin.pkl') \
+ pd.read_pickle('Data/model_fits/model_fits_exp_scram.pkl') \
+ pd.read_pickle('Data/model_fits/model_fits_exp_shifted.pkl')
loo.sort_values('LOO', ascending=False, inplace=True)
#note the "Simple Kalman" is modeled without sticky choice and wasn't included in the paper
loo.drop('Kalman w/SC', inplace=True)
n = len(pd.read_csv('Data/exp_srs/gprbfsrs.csv')) \
+ len(pd.read_csv('Data/exp_shifted/gprbfshifted.csv')) \
+ len(pd.read_csv('Data/exp_scrambled/gprbfscrambled.csv')) \
+ len(pd.read_csv('Data/exp_changepoint/changerbfpred.csv')) \
+ len(pd.read_csv('Data/exp_linear/rbfpred.csv'))
# n_subj * n_trials_per_subj
n_arms = 8
chance_loo = -2 * n * np.log(1. / n_arms)
_loo = loo.sort_values('LOO')
_loo[r'pseudo-$r^2$'] = 1 - _loo.LOO / chance_loo
_loo['r2_err'] = _loo.LOO_se / chance_loo
fig, ax = plt.subplots(1, 1, figsize=(4, 4), gridspec_kw=dict(wspace=0.5))
min_r2 = np.max(_loo[r'pseudo-$r^2$'])
with sns.axes_style('ticks'):
plt.sca(ax)
ax.barh(range(len(_loo[r'pseudo-$r^2$'].values)),
_loo[r'pseudo-$r^2$'].values, color='grey', align='center')
ax.set_yticks(range(len(_loo.LOO.values)))
ax.set_yticklabels(_loo.index.tolist())
ax.errorbar(y = range(len(_loo.LOO.values)),
x = _loo[r'pseudo-$r^2$'].values,
xerr = _loo['r2_err'].values, linestyle='None',
color='k',
)
xlb, xub = plt.gca().get_xlim()
ylb, yub = plt.gca().get_ylim()
ax.plot([min_r2, min_r2],[ylb, yub], 'r:')
ax.set_xticks([0., 0.2, 0.4, 0.6])
sns.despine()
ax.set_xlabel(r'pseudo-$r^2$')
plt.xticks(rotation=45)
```
## Tutorial for recording a guitar string stroke and detecting its pitch
I use the python library called sounddevice, which makes it easy to record audio and represent the result as a numpy array.
We will use two different methods for detecting the pitch and compare their results.
For reference, here is the list of frequencies of all 6 strings expected for a well tuned guitar:
String | Frequency | Scientific pitch notation
--- | --- | ---
1 (E) | 329.63 Hz | E4
2 (B) | 246.94 Hz | B3
3 (G) | 196.00 Hz | G3
4 (D) | 146.83 Hz | D3
5 (A) | 110.00 Hz | A2
6 (E) | 82.41 Hz | E2
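Given a detected frequency, a small helper can report which string it is closest to and how far off it is in cents (a hypothetical utility based on the table above, not part of the recording code below):

```python
import numpy as np

# Expected fundamentals of the six strings (from the table above)
STRINGS = {'E4': 329.63, 'B3': 246.94, 'G3': 196.00,
           'D3': 146.83, 'A2': 110.00, 'E2': 82.41}

def closest_string(freq):
    """Return (pitch name, cents offset) of the nearest string."""
    name = min(STRINGS, key=lambda s: abs(np.log2(freq / STRINGS[s])))
    cents = 1200.0 * np.log2(freq / STRINGS[name])  # positive: sharp, negative: flat
    return name, cents
```

For example, `closest_string(149.94)` reports `D3` at roughly +36 cents, i.e. slightly sharp.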
```
import sounddevice as sd
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
```
#### First of all, check the list of available audio devices on the system
I use an external USB sound card called Sound Blaster E1: this is the one we will use here
```
sd.query_devices()
```
#### We define the length we want to record in seconds, and set the sampling rate to 44100 Hz
```
device = 0 # we use my USB sound card device
duration = 2 # seconds
fs = 44100 # samples by second
```
#### We can now record 2 seconds worth of audio
For this tutorial, I have played the D string of my guitar.
The result is a numpy array we store in the `myrecording` variable
```
myrecording = sd.rec(int(duration * fs), samplerate=fs, channels=1, device=device)
sd.wait()  # block until the recording is finished
```
#### Let's plot a section of this array to look at it first
We notice a fairly periodic signal with a clear fundamental frequency, which makes sense since a vibrating guitar string produces an almost purely sinusoidal wave
```
df = pd.DataFrame(myrecording)
df.loc[25000:30000].plot()
```
### Pitch detection using Fast Fourier Transform
#### We use numpy to compute the discrete Fourier transform of the signal:
```
rec = myrecording.ravel()  # flatten the (n, 1) recording to 1-D
fourier = np.fft.fft(rec)
```
We can visualise a section of the Fourier transform to notice there is a clear fundamental frequency:
```
plt.plot(abs(fourier[:len(fourier) // 10]))
```
#### We find the frequency corresponding to the maximum of this Fourier transform, and calculate the corresponding real frequency by re-multiplying by the sampling rate
```
f_max_index = np.argmax(abs(fourier[:fourier.size // 2]))
freqs = np.fft.fftfreq(len(fourier))
freqs[f_max_index]*fs
```
#### This method has detected that my guitar string stroke has a fundamental frequency of 149.94 Hz, which is indeed very close to the expected frequency of the D string of a well-tuned guitar (target is 146.83 Hz)
My guitar was not very well tuned: this indicates I should slightly tune down my 4th string
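As a sanity check of this FFT recipe (a sketch using a synthetic tone rather than the actual recording), the same steps recover the frequency of a generated sine wave:

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs                # one second of samples -> 1 Hz resolution
tone = np.sin(2 * np.pi * 147.0 * t)  # synthetic 147 Hz "string"
fourier = np.fft.fft(tone)
f_max_index = np.argmax(abs(fourier[:fourier.size // 2]))
freqs = np.fft.fftfreq(len(fourier))
print(freqs[f_max_index] * fs)        # ~147.0
```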
-------
### Using Autocorrelation method for pitch detection
```
rec = myrecording.ravel()
rec = rec[25000:30000]
autocorr = np.correlate(rec, rec, mode='same')
plt.plot(autocorr)
```
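The cell above stops at plotting the autocorrelation. To turn it into a pitch estimate, we can look for the lag of the strongest peak away from zero and convert it back to a frequency. Below is a sketch on a synthetic 147 Hz tone; the lag search window is an assumption covering the guitar's 82-330 Hz range:

```python
import numpy as np

fs = 44100
t = np.arange(4096) / fs
rec = np.sin(2 * np.pi * 147.0 * t)            # stand-in for the real recording
autocorr = np.correlate(rec, rec, mode='same')
half = autocorr[autocorr.size // 2:]           # keep non-negative lags only
lo, hi = fs // 330, fs // 82                   # lags for ~330 Hz down to ~82 Hz
peak_lag = lo + np.argmax(half[lo:hi])
print(fs / peak_lag)                           # ~147 Hz
```

On the real recording, one would replace the synthetic `rec` with the flattened slice used above.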
# Writing Custom Dataset Exporters
This recipe demonstrates how to write a [custom DatasetExporter](https://voxel51.com/docs/fiftyone/user_guide/export_datasets.html#custom-formats) and use it to export a FiftyOne dataset to disk in your custom format.
## Setup
If you haven't already, install FiftyOne:
```
!pip install fiftyone
```
In this recipe we'll use the [FiftyOne Dataset Zoo](https://voxel51.com/docs/fiftyone/user_guide/dataset_creation/zoo_datasets.html) to download the [CIFAR-10 dataset](https://www.cs.toronto.edu/~kriz/cifar.html) to use as sample data to feed our custom exporter.
Behind the scenes, FiftyOne uses either the
[TensorFlow Datasets](https://www.tensorflow.org/datasets) or
[TorchVision Datasets](https://pytorch.org/docs/stable/torchvision/datasets.html) libraries to wrangle the datasets, depending on which ML library you have installed.
You can, for example, install PyTorch as follows:
```
!pip install torch torchvision
```
## Writing a DatasetExporter
FiftyOne provides a [DatasetExporter](https://voxel51.com/docs/fiftyone/api/fiftyone.utils.data.html#fiftyone.utils.data.exporters.DatasetExporter) interface that defines how it exports datasets to disk when methods such as [Dataset.export()](https://voxel51.com/docs/fiftyone/api/fiftyone.core.html#fiftyone.core.dataset.Dataset.export) are used.
`DatasetExporter` itself is an abstract interface; the concrete interface that you should implement is determined by the type of dataset that you are exporting. See [writing a custom DatasetExporter](https://voxel51.com/docs/fiftyone/user_guide/export_datasets.html#custom-formats) for full details.
In this recipe, we'll write a custom [LabeledImageDatasetExporter](https://voxel51.com/docs/fiftyone/api/fiftyone.utils.data.html#fiftyone.utils.data.exporters.LabeledImageDatasetExporter) that can export an image classification dataset to disk in the following format:
```
<dataset_dir>/
data/
<filename1>.<ext>
<filename2>.<ext>
...
labels.csv
```
where `labels.csv` is a CSV file that contains the image metadata and associated labels in the following format:
```
filepath,size_bytes,mime_type,width,height,num_channels,label
<filepath>,<size_bytes>,<mime_type>,<width>,<height>,<num_channels>,<label>
<filepath>,<size_bytes>,<mime_type>,<width>,<height>,<num_channels>,<label>
...
```
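Rows in this layout can be read back with the standard `csv` module, which is a handy way to sanity-check an export (a minimal sketch; the file contents below are made up for illustration):

```python
import csv
import io

# A labels.csv in the format above (values are illustrative)
labels_csv = io.StringIO(
    "filepath,size_bytes,mime_type,width,height,num_channels,label\n"
    "data/000001.jpg,1432,image/jpeg,32,32,3,cat\n"
    "data/000002.jpg,1501,image/jpeg,32,32,3,ship\n"
)

# DictReader maps each row to the header fields
rows = list(csv.DictReader(labels_csv))
labels = [r["label"] for r in rows]
```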
Here's the complete definition of the `DatasetExporter`:
```
import csv
import os
import fiftyone as fo
import fiftyone.utils.data as foud
class CSVImageClassificationDatasetExporter(foud.LabeledImageDatasetExporter):
"""Exporter for image classification datasets whose labels and image
metadata are stored on disk in a CSV file.
Datasets of this type are exported in the following format:
<dataset_dir>/
data/
<filename1>.<ext>
<filename2>.<ext>
...
labels.csv
where ``labels.csv`` is a CSV file in the following format::
filepath,size_bytes,mime_type,width,height,num_channels,label
<filepath>,<size_bytes>,<mime_type>,<width>,<height>,<num_channels>,<label>
<filepath>,<size_bytes>,<mime_type>,<width>,<height>,<num_channels>,<label>
...
Args:
export_dir: the directory to write the export
"""
def __init__(self, export_dir):
super().__init__(export_dir=export_dir)
self._data_dir = None
self._labels_path = None
self._labels = None
self._image_exporter = None
@property
def requires_image_metadata(self):
"""Whether this exporter requires
:class:`fiftyone.core.metadata.ImageMetadata` instances for each sample
being exported.
"""
return True
@property
def label_cls(self):
"""The :class:`fiftyone.core.labels.Label` class(es) exported by this
exporter.
This can be any of the following:
- a :class:`fiftyone.core.labels.Label` class. In this case, the
exporter directly exports labels of this type
- a list or tuple of :class:`fiftyone.core.labels.Label` classes. In
this case, the exporter can export a single label field of any of
these types
- a dict mapping keys to :class:`fiftyone.core.labels.Label` classes.
In this case, the exporter can handle label dictionaries with
value-types specified by this dictionary. Not all keys need be
present in the exported label dicts
- ``None``. In this case, the exporter makes no guarantees about the
labels that it can export
"""
return fo.Classification
def setup(self):
"""Performs any necessary setup before exporting the first sample in
the dataset.
This method is called when the exporter's context manager interface is
entered, :func:`DatasetExporter.__enter__`.
"""
self._data_dir = os.path.join(self.export_dir, "data")
self._labels_path = os.path.join(self.export_dir, "labels.csv")
self._labels = []
# The `ImageExporter` utility class provides an `export()` method
# that exports images to an output directory with automatic handling
# of things like name conflicts
self._image_exporter = foud.ImageExporter(
True, export_path=self._data_dir, default_ext=".jpg",
)
self._image_exporter.setup()
def export_sample(self, image_or_path, label, metadata=None):
"""Exports the given sample to the dataset.
Args:
image_or_path: an image or the path to the image on disk
label: an instance of :meth:`label_cls`, or a dictionary mapping
field names to :class:`fiftyone.core.labels.Label` instances,
or ``None`` if the sample is unlabeled
metadata (None): a :class:`fiftyone.core.metadata.ImageMetadata`
instance for the sample. Only required when
:meth:`requires_image_metadata` is ``True``
"""
out_image_path, _ = self._image_exporter.export(image_or_path)
if metadata is None:
metadata = fo.ImageMetadata.build_for(image_or_path)
self._labels.append((
out_image_path,
metadata.size_bytes,
metadata.mime_type,
metadata.width,
metadata.height,
metadata.num_channels,
label.label, # here, `label` is a `Classification` instance
))
def close(self, *args):
"""Performs any necessary actions after the last sample has been
exported.
This method is called when the exporter's context manager interface is
exited, :func:`DatasetExporter.__exit__`.
Args:
*args: the arguments to :func:`DatasetExporter.__exit__`
"""
# Ensure the base output directory exists
basedir = os.path.dirname(self._labels_path)
if basedir and not os.path.isdir(basedir):
os.makedirs(basedir)
# Write the labels CSV file
with open(self._labels_path, "w") as f:
writer = csv.writer(f)
writer.writerow([
"filepath",
"size_bytes",
"mime_type",
"width",
"height",
"num_channels",
"label",
])
for row in self._labels:
writer.writerow(row)
```
## Generating a sample dataset
In order to use `CSVImageClassificationDatasetExporter`, we need some labeled image samples to work with.
Let's use some samples from the test split of CIFAR-10:
```
import fiftyone.zoo as foz
num_samples = 1000
#
# Load `num_samples` from CIFAR-10
#
# This command will download the test split of CIFAR-10 from the web the first
# time it is executed, if necessary
#
cifar10_test = foz.load_zoo_dataset("cifar10", split="test")
samples = cifar10_test.limit(num_samples)
# Print summary information about the samples
print(samples)
# Print a sample
print(samples.first())
```
## Exporting a dataset
With our samples and `DatasetExporter` in-hand, exporting the samples to disk in our custom format is as simple as follows:
```
export_dir = "/tmp/fiftyone/custom-dataset-exporter"
# Export the dataset
print("Exporting %d samples to '%s'" % (len(samples), export_dir))
exporter = CSVImageClassificationDatasetExporter(export_dir)
samples.export(dataset_exporter=exporter)
```
Let's inspect the contents of the exported dataset to verify that it was written in the correct format:
```
!ls -lah /tmp/fiftyone/custom-dataset-exporter
!ls -lah /tmp/fiftyone/custom-dataset-exporter/data | head -n 10
!head -n 10 /tmp/fiftyone/custom-dataset-exporter/labels.csv
```
## Cleanup
You can cleanup the files generated by this recipe by running:
```
!rm -rf /tmp/fiftyone
```
```
import warnings
warnings.simplefilter('ignore')
import math
import time
from keras.models import Sequential
from keras.layers import Dropout
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers import Activation
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error
import numpy as np
import pandas as pd
import hypertools as hyp
import seaborn as sns
from matplotlib import pyplot as plt
%matplotlib inline
x = np.sin(np.arange(0, 500, 0.1))
plt.plot(x[:1000])
scaler = MinMaxScaler(feature_range=(-1, 1))
scaled = scaler.fit_transform(x[:, np.newaxis])
series = pd.DataFrame(data=scaled)
n = 50
series_s = series.copy()
for i in range(n):
series = pd.concat([series, series_s.shift(-(i+1))], axis=1)
series.dropna(axis=0, inplace=True)
sns.heatmap(series.values)
n_train = int(0.67*series.shape[0])
train = series.iloc[:n_train, :]
test = series.iloc[n_train:, :]
train_in = train.iloc[:, :-3].values
train_out = train.iloc[:, -3:].values
test_in = test.iloc[:, :-3].values
test_out = test.iloc[:, -3:].values
train_in = train_in.reshape(train_in.shape[0], train_in.shape[1], 1)
test_in = test_in.reshape(test_in.shape[0], test_in.shape[1], 1)
train_in.shape, test_in.shape, train_out.shape, test_out.shape
model = Sequential()
model.add(LSTM(units=train_in.shape[1], input_shape=(train_in.shape[1], 1), return_sequences=True))
model.add(Dropout(0.5))
model.add(LSTM(256))
model.add(Dropout(0.5))
model.add(Dense(3))
model.add(Activation('linear'))
model.compile(loss='mse', optimizer='adam')
model.summary()
start = time.time()
model.fit(train_in, train_out, batch_size=512, epochs=3, validation_split=0.1)
print("> Training time: ", time.time() - start)
predictions = model.predict(test_in)
predictions = scaler.inverse_transform(predictions)
predictions.shape
observed = scaler.inverse_transform(test_out)
observed.shape
hyp.plot([observed, predictions], ['k-', 'r:'])
geo = hyp.load('weights');
x = geo.data[0]
hyp.plot(x, 'k-')
scaler = MinMaxScaler(feature_range=(0, 1))
x_scaled = scaler.fit_transform(x)
hyp.plot(x_scaled, 'k-')
sns.heatmap(x)
sns.heatmap(x_scaled)
train_size = int(x.shape[0] * 0.67)
test_size = x.shape[0] - train_size
train, test = x_scaled[0:train_size, :], x_scaled[train_size:, :]
def forward_project(x, n_steps):
a, b, = x[n_steps:], x[:-n_steps]
return a.reshape((a.shape[0], 1, a.shape[1])), b.reshape((b.shape[0], 1, b.shape[1]))
#return a, b
n_steps = 1
train_in, train_out = forward_project(train, n_steps)
test_in, test_out = forward_project(test, n_steps)
train_in.shape, train_out.shape, test_in.shape, test_out.shape
n_back = 5
epochs = 10
model = Sequential()
model.add(LSTM(epochs, input_shape=(train_in.shape[1], train_in.shape[2])))
model.add(Dense(train_in.shape[2], input_shape=(train_in.shape[2], )))
model.compile(loss='mae', optimizer='adam')
history = model.fit(train_in, train_out, epochs=epochs, batch_size=20, verbose=2, validation_data=(test_in, test_out), shuffle=False)
```
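The lag-window construction used above (`shift(-(i+1))` plus `concat`) is the core of the data preparation; isolated on a toy series it looks like this (a sketch, not the notebook's exact data):

```python
import numpy as np
import pandas as pd

series = pd.DataFrame({'x': np.arange(10, dtype=float)})
n = 3
# Stack n lagged copies next to the original column
frames = [series] + [series.shift(-(i + 1)) for i in range(n)]
windows = pd.concat(frames, axis=1).dropna(axis=0)
# Each row now holds n + 1 consecutive values of the series
```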
### 1. Load data
```
library(tidyverse)
library(FCSplankton)
library(openxlsx)
options(repr.plot.width=8, repr.plot.height=6)
unstained <- FALSE # TRUE if samples were not stained, FALSE if samples have been stained
if(unstained){
summary <- read_csv("./unstained/summary.csv") # load unstained summary data
}else{
stained_summary_all <- read_csv("./stained/summary.csv") # load stained summary data
stained_summary <- dplyr::filter(stained_summary_all, stained_summary_all$population == "bacteria")
unstained_summary <- read_csv("./unstained/summary.csv")
}
meta <- read_csv("metadata.txt",col_types = cols(date = col_character()))
if(unstained == FALSE){
summary <- merge(unstained_summary, stained_summary, all=TRUE)
summary[1:3,]
}
```
### 2. Convert metadata
```
meta[1:3,] # print the first few lines to know how to parse metadata
# add required columns (filename, volume and comments) from metadata
file <- paste0(meta$file,".fcs") # format sample name to filename (.fcs)
time <- meta$date
lat <- meta$lat
lon <- meta$lon
depth <- meta$depth
replicate <- meta$replicate
volume <- meta$volume
stain <- meta$stain
flag <- meta$flag
comments <- meta$comments
# add required metadata for CMAP
# time <- as.POSIXct(meta$date, format="%d/%b/%y", tz="UTC")
# lat <- NA
# lon <- NA
# add key information from sample label
# label <- matrix(unlist(list(strsplit(meta$label, split=" "))), ncol=3, byrow=T)
# treatment <- label[,1]
# timepoint <- label[,2]
# replicate <- label[,3]
# create new metadata
metadata <- tibble(file, time, lat, lon, depth, replicate, volume, stain, flag, comments)
```
### 3. Merge metadata and summary data
```
all <- merge(summary, metadata, by="file")
all[1:3,]
```
### 4. Data correction
#### a. Calculate abundance
```
all$abundance <- all$count / all$volume
all[1:3,]
```
#### b. Check variables before correction
```
all %>%
dplyr::filter(population != "beads") %>%
ggplot(aes(abundance, -depth, col=population)) +
geom_point() +
facet_grid(population ~ lat, scale="free_x") +
theme_bw() +
xlab("Abundance (cells uL-1)") +
ylab("Depth (m)")
all %>%
dplyr::filter(population != "beads" & population != "unknown" & depth < 20) %>%
dplyr::group_by(lat, population) %>%
dplyr::summarize(sd = sd(scatter),
avg=mean(scatter)) %>%
ggplot(aes(lat, avg, col=population)) +
geom_point(size=3) +
geom_linerange(aes(ymin=avg-sd, ymax=avg+sd)) +
facet_grid(population ~ ., scale="free_y") +
theme_bw() +
ylab("Scatter (normalized to beads)")
```
#### c. Correct bacteria counts
```
new.all <- all
if(unstained == FALSE){
pro <- subset(all, population == "prochloro")
bact <- subset(all, population == "bacteria")
for (i in 1:nrow(pro)){
file_number <- regmatches(pro$file[i], regexpr(pattern = "[0-9].*fcs" , text = pro$file[i])) # removes prefix from the current file so the stained and unstained files will be identical
matching_file_id <- grep(file_number, bact$file) # find the file in stained samples that matches the file number
id <- which(all$file == bact$file[matching_file_id] & all$population == "bacteria") # return the index of the file that matches the Pro file number
if(length(id) !=0) new.all$abundance[id] <- all$abundance[id] - pro$abundance[i]
if(length(id) !=0) new.all$count[id] <- new.all$abundance[id] * new.all$volume[i] # calculate bacteria particle count based off abundance and volume
if(length(id) !=0) new.all$scatter[id] <- (((all$scatter[id]*all$abundance[id])-(pro$scatter[i]*pro$abundance[i]))/ new.all$abundance[id]) # calculate bacteria scatter: assumes adding staining does not change scatter
}
}
new.all[1:3,]
```
#### d. Check corrected variables
```
new.all %>%
dplyr::filter(population != "beads") %>%
ggplot(aes(abundance, -depth, col=population)) +
geom_point() +
facet_grid(population ~ lat, scale="free_x") +
theme_bw() +
xlab("Abundance (cells uL-1)") +
ylab("Depth (m)")
new.all %>%
dplyr::filter(population != "beads" & population != "unknown" & depth < 20) %>%
dplyr::group_by(lat, population) %>%
dplyr::summarize(sd = sd(scatter),
avg=mean(scatter)) %>%
ggplot(aes(lat, avg, col=population)) +
geom_point(size=3) +
geom_linerange(aes(ymin=avg-sd, ymax=avg+sd)) +
facet_grid(population ~ ., scale="free_y") +
theme_bw() +
ylab("Scatter (normalized to beads)")
```
### 5. Size and carbon content conversion
```
mie <- read.csv(system.file("scatter", paste0("calibrated-mieINFLUX.csv"),package="FCSplankton"))
mie[1:3,] ## NOTE: Leo and Penny are included in the same Mie lookup table. Choose the column index for the correct instrument that samples were run on.
# find closest matches in Mie lookup table
id <- findInterval(new.all$scatter, mie$scatter)
for(i in 1:length(id)){
## choose the correct column index for Influx instrument (Leo 2-7 or Penny 8-13)
new.all$diam_mid[[i]] <- mie[id[i],2]
new.all$diam_upr[[i]] <- mie[id[i],3]
new.all$diam_lwr[[i]] <- mie[id[i],4]
new.all$Qc_mid[[i]] <- mie[id[i],5]
new.all$Qc_upr[[i]] <- mie[id[i],6]
new.all$Qc_lwr[[i]] <- mie[id[i],7]
}
new.all[1:3,]
```
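R's `findInterval` returns, for each scatter value, the index of the last table entry not exceeding it. For readers working in Python, `np.searchsorted` gives the same semantics (a sketch with made-up calibration values, not the real Mie table):

```python
import numpy as np

mie_scatter = np.array([0.1, 0.5, 1.0, 2.0, 5.0])  # sorted lookup column
mie_diam = np.array([0.3, 0.6, 1.0, 1.6, 2.5])     # corresponding diameters

observed = np.array([0.4, 1.2, 4.9])
# findInterval(x, vec): index of the last vec entry <= x
idx = np.searchsorted(mie_scatter, observed, side='right') - 1
diam = mie_diam[idx]
```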
### 6. Plotting
#### a. Abundance profiles
##### i. Abundance surface profile
```
new.all %>%
dplyr::filter(population != "beads" & population != "unknown" & depth < 20) %>%
dplyr::group_by(lat, population) %>%
dplyr::summarize(sd = sd(abundance),
avg=mean(abundance)) %>%
ggplot(aes(lat, avg, col=population)) +
geom_errorbar(aes(ymin=avg-sd, ymax=avg+sd), color = "black", size = .3, width=.1) +
geom_point(size=3) +
facet_grid(population ~ ., scale="free_y") +
theme_bw() +
ylab("Abundance (cells uL-1)")
ggsave("surface_abundance.png", path = "./plots")
```
##### ii. Abundance depth profile
```
new.all %>%
dplyr::filter(population != "beads" & population != "unknown") %>%
dplyr::group_by(lat, depth, population) %>%
dplyr::summarize(avg=mean(abundance)) %>%
ggplot(aes(lat, -depth)) +
geom_point(aes(colour=avg), size=4) +
viridis::scale_colour_viridis(name="Abundance (cells uL-1)",option ="D") +
facet_grid(population ~ .) +
theme_bw() +
xlab("Latitude") +
ylab("Depth (m)")
ggsave("abundance_depth_profile.png", path = "./plots")
```
#### b. Scatter profiles
##### i. Surface scatter profile
```
new.all %>%
dplyr::filter(population != "beads" & population != "unknown" & depth < 20) %>%
dplyr::group_by(lat, population) %>%
dplyr::summarize(sd = sd(scatter),
avg=mean(scatter)) %>%
ggplot(aes(lat, avg, col=population)) +
geom_errorbar(aes(ymin=avg-sd, ymax=avg+sd), color = "black", size = .3, width=.1) +
geom_point(size=3) +
facet_grid(population ~ ., scale="free_y") +
theme_bw() +
ylab("Scatter (normalized to beads)")
ggsave("surface_scatter.png", path = "./plots")
```
##### ii. Depth scatter profile
```
new.all %>%
dplyr::filter(population != "beads" & population != "unknown") %>%
dplyr::group_by(lat, depth, population) %>%
dplyr::summarize(avg=log(mean(scatter))) %>%
ggplot(aes(lat, -depth)) +
geom_point(aes(colour=avg), size=4) +
viridis::scale_colour_viridis(name="Log Scatter (normalized to beads)",option ="D") +
facet_grid(population ~ .) +
theme_bw() +
xlab("Latitude") +
ylab("Depth (m)")
ggsave("scatter_depth_profile.png", path = "./plots")
```
#### c. Red fluorescence depth profile
```
new.all %>%
dplyr::filter(population != "beads" & population != "unknown" & population != "bacteria") %>%
dplyr::group_by(lat, depth, population) %>%
dplyr::summarize(avg=log(mean(red))) %>%
ggplot(aes(lat, -depth)) +
geom_point(aes(colour=avg), size=4) +
viridis::scale_colour_viridis(name="Log Red fluorescence (normalized to beads)",option ="D") +
facet_grid(population ~ .) +
theme_bw() +
xlab("Latitude") +
ylab("Depth (m)")
ggsave("red_fluorescence_depth_profile.png", path = "./plots")
```
#### d. Orange fluorescence depth profile
```
new.all %>%
dplyr::filter(population != "beads" & population != "unknown" & population != "bacteria") %>%
dplyr::group_by(lat, depth, population) %>%
dplyr::summarize(avg=log(mean(orange))) %>%
ggplot(aes(lat, -depth)) +
geom_point(aes(colour=avg), size=4) +
viridis::scale_colour_viridis(name="Log orange fluorescence (normalized to beads)",option ="D") +
facet_grid(population ~ .) +
theme_bw() +
xlab("Latitude") +
ylab("Depth (m)")
ggsave("orange_fluorescence_depth_profile.png", path = "./plots")
```
#### e. Cell size profiles
##### i. Range of surface cell size estimates using different indexes of refraction
```
new.all %>%
dplyr::filter(population != "beads" & population != "unknown" & depth < 20 & flag == 0) %>%
dplyr::group_by(lat, population) %>%
dplyr::summarize(avg=(mean(diam_mid)),
avg_lwr=mean(diam_lwr),
avg_upr=mean(diam_upr)) %>%
ggplot(aes(lat, avg, col=population)) +
geom_point(size=2) +
geom_linerange(aes(ymin=avg_lwr, ymax=avg_upr)) +
facet_grid(population ~ ., scale="free_y") +
theme_bw() +
ylab("Cell size (um)")
ggsave("surface_cell_size_RI_range.png", path = "./plots")
```
##### ii. Surface (<20m) cell size profile using a single, specific index of refraction for each population
```
# select specific refractive indexes for each population
lwr <- new.all %>%
dplyr::filter(population == "picoeuk" | population == "prochloro" | population == "synecho" | population == "bacteria") %>%
dplyr::select(-diam_mid, -diam_upr, -Qc_mid, -Qc_upr) %>%
dplyr::rename(cell_diameter = diam_lwr, carbon_content = Qc_lwr)
mid <- new.all %>%
dplyr::filter(population == "unknown" | population == "beads" | population == "croco") %>%
dplyr::select(-diam_lwr, -diam_upr, -Qc_lwr, -Qc_upr) %>%
dplyr::rename(cell_diameter = diam_mid, carbon_content = Qc_mid)
RI.all <- merge(lwr, mid, all = TRUE)
RI.all %>%
dplyr::filter(population != "beads" & population != "unknown" & depth < 20 & flag == 0) %>%
dplyr::group_by(lat, population) %>%
dplyr::summarize(avg=(mean(cell_diameter)),
sd=sd(cell_diameter)) %>%
ggplot(aes(lat, avg, col=population)) +
geom_errorbar(aes(ymin=avg-sd, ymax=avg+sd), color = "black", size = .3, width=.1) +
geom_point(size=2) +
facet_grid(population ~ ., scale="free_y") +
theme_bw() +
ylab("Cell size (um)")
ggsave("surface_cell_size.png", path = "./plots")
```
##### iii. Cell size depth profile using a single, specific index of refraction for each population
```
RI.all %>%
dplyr::filter(population != "beads" & population != "unknown") %>%
dplyr::group_by(lat, depth, population) %>%
dplyr::summarize(avg=mean(cell_diameter)) %>%
ggplot(aes(lat, -depth)) +
geom_point(aes(colour=avg), size=5) +
viridis::scale_colour_viridis(name="Equivalent spherical diameter\n(micrometer)",option ="D") +
facet_grid(population ~ .) +
theme_bw() +
xlab("Latitude") +
ylab("Depth (m)")
ggsave("cell_size_depth_profile.png", path = "./plots")
```
#### f. Carbon content profiles
##### i. Range of surface carbon content estimates using different indexes of refraction
```
new.all %>%
dplyr::filter(population != "beads" & population != "unknown" & depth < 20 & flag == 0) %>%
dplyr::group_by(lat, population) %>%
dplyr::summarize(avg=(mean(Qc_mid)),
avg_lwr=mean(Qc_lwr),
avg_upr=mean(Qc_upr)) %>%
ggplot(aes(lat, avg, col=population)) +
geom_point(size=2) +
geom_linerange(aes(ymin=avg_lwr, ymax=avg_upr)) +
facet_grid(population ~ ., scale="free_y") +
theme_bw() +
ylab("Carbon content (picogram carbon per cell)")
ggsave("surface_carbon_content_RI_range.png", path = "./plots")
```
##### ii. Surface (<20m) carbon content profile using a single, specific index of refraction for each population
```
RI.all %>%
dplyr::filter(population != "beads" & population != "unknown" & depth < 20 & flag == 0) %>%
dplyr::group_by(lat, population) %>%
dplyr::summarize(avg=(mean(carbon_content)),sd=sd(carbon_content)) %>%
ggplot(aes(lat, avg, col=population)) +
geom_errorbar(aes(ymin=avg-sd, ymax=avg+sd), color = "black", size = .3, width=.1) +
geom_point(size=2) +
facet_grid(population ~ ., scale="free_y") +
theme_bw() +
ylab("Carbon content (picogram carbon per cell)")
ggsave("surface_carbon_content.png", path = "./plots")
```
##### iii. Carbon content depth profile using a single, specific index of refraction for each population
```
RI.all %>%
dplyr::filter(population != "beads" & population != "unknown") %>%
dplyr::group_by(lat, depth, population) %>%
dplyr::summarize(avg=(mean(carbon_content))) %>%
ggplot(aes(lat, -depth)) +
geom_point(aes(colour=avg), size=5) +
viridis::scale_colour_viridis(name="Cellular carbon content\n(picogram carbon per cell)",option ="D") +
facet_grid(population ~ .) +
theme_bw() +
xlab("Latitude") +
ylab("Depth (m)")
ggsave("carbon_content_depth_profile.png", path = "./plots")
```
#### g. Total biomass
```
biomass.all <- RI.all
biomass.all$biomass <- biomass.all$abundance * biomass.all$carbon_content
biomass.all %>%
dplyr::filter(population != "beads" & population != "unknown" & depth == 15) %>%
dplyr::group_by(lat, population) %>%
dplyr::summarize(avg=(mean(biomass))) %>%
ggplot(aes(fill = population, x = lat, y = avg)) +
geom_bar(position= "stack", stat = "identity", width=.1, color="black", size = .2) +
ylab("Total biomass (microgram carbon per liter)")
ggsave("total_biomass.png", path = "./plots")
```
### 7. Save data
```
project <- basename(getwd())
cruise <- "MGL1704" # Cruise ID (ex. KM1906); leave blank if samples were not collected during a cruise
cruise_nickname <- "Gradients 2, Gradients 2017" # Cruise nickname commonly referred to (ex. Gradients 2, Gradients 2017); leave blank if samples were not collected during a cruise
cmap_convert(data = new.all , cruise, cruise_nickname, project, version = "v1.0")
```
# ์ผ๋ผ์ค API๋ฅผ ์ฌ์ฉํ ์ฌ์ฉ์ ์ ์ ๋ชจ๋ธ ๋ง๋ค๊ธฐ with ํ
์ํ๋ก 2.3+2.4
DLD(Daejeon Learning Day) 2020์ ์ํด ์์ฑ๋ ๋
ธํธ๋ถ์
๋๋ค.
* ๊นํ๋ธ ์ฃผ์: https://github.com/rickiepark/handson-ml2/blob/master/custom_model_in_keras.ipynb
* ์ฝ๋ฉ ์ฃผ์: https://colab.research.google.com/github/rickiepark/handson-ml2/blob/master/custom_model_in_keras.ipynb
```
import tensorflow as tf
tf.__version__
```
### Loading the MNIST handwritten digit data
```
(X_train, y_train), (X_test, y_test) = tf.keras.datasets.mnist.load_data()
X_train = X_train.reshape(-1, 784) / 255.
X_train.shape
```
### The relationship between the `Sequential()` class and the functional API
`Sequential()`:
Add a fully connected layer with 10 units to the sequential model.
```
seq_model = tf.keras.Sequential()
seq_model.add(tf.keras.layers.Dense(units=10,
activation='softmax',
input_shape=(784,)))
seq_model.summary()
seq_model.compile(loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
seq_model.fit(X_train, y_train, batch_size=32, epochs=2)
```
### ํจ์ํ API:
ํจ์ํ API๋ฅผ ์ฌ์ฉํ ๋๋ `Input()`์ ์ฌ์ฉํด ์
๋ ฅ์ ํฌ๊ธฐ๋ฅผ ์ ์ํด์ผ ํฉ๋๋ค. ํ์ง๋ง `InputLayer` ์ธต์ด ์ถ๊ฐ๋์ด ์์ต๋๋ค.
```
inputs = tf.keras.layers.Input(784)
outputs = tf.keras.layers.Dense(units=10,
                                activation='softmax')(inputs) # invokes the __call__() method
# dense = tf.keras.layers.Dense(units=10, activation='softmax')
# outputs = dense(inputs)
func_model = tf.keras.Model(inputs, outputs)
func_model.summary()
func_model.compile(loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
func_model.fit(X_train, y_train, batch_size=32, epochs=2)
```
`Input`์ ์ ์ฒด๋ ๋ฌด์์ผ๊น์? ์ด ํจ์๋ `InputLayer` ํด๋์ค์ ๊ฐ์ฒด๋ฅผ ๋ง๋ค์ด ๊ทธ ๊ฒฐ๊ณผ๋ฅผ ๋ฐํํฉ๋๋ค.
```
type(tf.keras.layers.Input)
```
์ฌ์ค ์ ๊ฒฝ๋ง์ ์
๋ ฅ์ธต์ ์
๋ ฅ ๊ทธ ์์ฒด์
๋๋ค. `InputLayer` ๊ฐ์ฒด์ ์
๋ ฅ ๋
ธ๋ ์ถ๋ ฅ์ ๊ทธ๋๋ก `Dense` ์ธต์ ์ฃผ์
ํ ์ ์์ต๋๋ค. ๋ชจ๋ ์ธต์ ์
๋ ฅ๊ณผ ์ถ๋ ฅ ๋
ธ๋๋ฅผ ์ ์ํฉ๋๋ค.
```
# inputs = tf.keras.layers.Input(784)
input_layer = tf.keras.layers.InputLayer(784)
inputs = input_layer._inbound_nodes[0].outputs
outputs = tf.keras.layers.Dense(units=10,
activation='softmax')(inputs)
input_layer_model = tf.keras.Model(inputs, outputs)
input_layer_model.summary()
input_layer_model.compile(loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
input_layer_model.fit(X_train, y_train, batch_size=32, epochs=2)
```
ํจ์ํ API๋ฅผ ์ฌ์ฉํ ๋ชจ๋ธ์ `layers` ์์ฑ์ `InputLayer` ํด๋์ค๋ฅผ ํฌํจํฉ๋๋ค.
```
func_model.layers
```
ํ์ง๋ง ์ํ์
๋ชจ๋ธ์ `layers` ์์ฑ์ `InputLayer` ํด๋์ค๊ฐ ๋ณด์ด์ง ์์ต๋๋ค.
```
seq_model.layers
```
๋ชจ๋ธ์ ๊ฐ์ถฐ์ง `_self_tracked_trackables` ์์ฑ์ด ๋ ์์ต๋๋ค. ์ฌ๊ธฐ์์ `InputLayer` ํด๋์ค๋ฅผ ํ์ธํ ์ ์์ต๋๋ค(ํ
์ํ๋ก 2.5 ์ด์ ๋ฒ์ ์์๋ `_layers` ์์ฑ์ ์ฌ์ฉํฉ๋๋ค).
```
seq_model._self_tracked_trackables
```
๋๋ `_input_layers` ์์ฑ์์๋ ํ์ธํ ์ ์์ต๋๋ค.
```
seq_model._input_layers, func_model._input_layers
seq_model._output_layers, func_model._output_layers
```
`Model` ํด๋์ค๋ก ๋ง๋ `func_model`์ ์ฌ์ค `Functional` ํด๋์ค์ ๊ฐ์ฒด์
๋๋ค. `Model` ํด๋์ค๋ ์๋ธํด๋์ฑ์ ์ฌ์ฉํฉ๋๋ค.
```
func_model.__class__
```
์ํ์
๋ชจ๋ธ์ ํจ์ํ ๋ชจ๋ธ์ ํน๋ณํ ๊ฒฝ์ฐ์
๋๋ค. (`Model` --> `Functional` --> `Sequential`)
### ์ฌ์ฉ์ ์ ์ ์ธต ๋ง๋ค๊ธฐ
Inherit from the `tf.keras.layers.Layer` class, create the weights in the `build()` method, and implement the computation in the `call()` method.
```
class MyDense(tf.keras.layers.Layer):
    def __init__(self, units, activation=None, **kwargs):
        # Pass every argument other than units and activation on to the parent constructor.
        super(MyDense, self).__init__(**kwargs)
        self.units = units
        # Look up a predefined activation function by name, e.g. 'softmax', 'relu'.
        self.activation = tf.keras.activations.get(activation)
    def build(self, input_shape):
        # Called the first time __call__() runs; weight creation is deferred until then.
        # Create the kernel and bias weights.
        self.kernel = self.add_weight(name='kernel',
                                      shape=[input_shape[-1], self.units],
                                      initializer='glorot_uniform'  # Keras' default initializer
                                      )
        self.bias = self.add_weight(name='bias',
                                    shape=[self.units],
                                    initializer='zeros')
    def call(self, inputs):  # a training argument is used for layers like batch normalization or dropout
        # Called whenever __call__() runs.
        # Perform the actual computation; output shape is [batch_size, units].
        z = tf.matmul(inputs, self.kernel) + self.bias
        if self.activation:
            return self.activation(z)
        return z
inputs = tf.keras.layers.Input(784)
# Layer.__call__() --> MyDense().build() --> Layer.build() --> MyDense().call()
outputs = MyDense(units=10, activation='softmax')(inputs)
my_dense_model = tf.keras.Model(inputs, outputs)
my_dense_model.summary()
my_dense_model.compile(loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
my_dense_model.fit(X_train, y_train, batch_size=32, epochs=2)
```
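Stripped of the Keras machinery, the computation `MyDense.call()` performs is just an affine map followed by the activation; in NumPy terms (a sketch with random stand-in weights, not the trained model's):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 784))            # a batch of 4 flattened MNIST-sized inputs
W = rng.normal(size=(784, 10)) * 0.01    # kernel, as created in build()
b = np.zeros(10)                         # bias, zero-initialized

z = X @ W + b                            # the matmul in call()
# softmax activation, computed stably
probs = np.exp(z - z.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)
```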
### ์ฌ์ฉ์ ์ ์ ๋ชจ๋ธ ๋ง๋ค๊ธฐ
```
# Provides methods such as fit(), compile(), predict(), and evaluate()
class MyModel(tf.keras.Model):
def __init__(self):
super(MyModel, self).__init__()
self.output_layer = MyDense(units=10, activation='softmax')
def call(self, inputs):
return self.output_layer(inputs)
my_model = MyModel()
my_model.compile(loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
my_model.fit(X_train, y_train, batch_size=32, epochs=2)
```
### ์ฌ์ฉ์ ์ ์ ํ๋ จ
```
class MyCustomStep(MyModel):
    def train_step(self, data):
        # data passed in from fit()
        x, y = data
        # start recording gradients
        with tf.GradientTape() as tape:
            # forward pass
            y_pred = self(x)
            # compute the loss specified in compile()
            loss = self.compiled_loss(y, y_pred)
        # compute gradients with respect to the trainable parameters
        gradients = tape.gradient(loss, self.trainable_variables)
        # update the parameters
        self.optimizer.apply_gradients(zip(gradients, self.trainable_variables))
        # in TF 2.4 this can also be written as:
        # self.optimizer.minimize(loss, self.trainable_variables, tape=tape)
        # update the metrics specified in compile()
        self.compiled_metrics.update_state(y, y_pred)
        # return the metrics computed so far as a dictionary
        return {m.name: m.result() for m in self.metrics}
my_custom_step = MyCustomStep()
my_custom_step.compile(loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
my_custom_step.fit(X_train, y_train, batch_size=32, epochs=2)
```
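The custom `train_step` above is the classic forward/backward/update loop. The same loop, stripped down to NumPy for a linear model with an MSE loss (a sketch of the idea, not Keras code):

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(32, 4))
true_w = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ true_w

w = np.zeros(4)
lr = 0.1
for _ in range(200):
    y_pred = X @ w                            # forward pass
    grad = 2 * X.T @ (y_pred - y) / len(y)    # gradient of the MSE loss
    w -= lr * grad                            # parameter update

loss = np.mean((X @ w - y) ** 2)
```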
```
#default_exp vision.timm
```
# Utilizing the `timm` Library Inside of `fastai`
> How to bring the power of Transfer Learning with new architectures
```
#hide
from wwf.utils import *
from nbdev.showdoc import *
#hide_input
state_versions(['fastai', 'fastcore', 'timm'])
```
## Bringing in External Models into the Framework
As we are well aware, `fastai` models deep down are just `PyTorch` models. However, as the field of machine learning keeps moving forward, new and fresh architectures are constantly introduced. Wouldn't it be nice if it were easy to integrate them into the `fastai` framework and play with them?
## Using Ross Wightman's `timm` Library
[Ross Wightman](https://twitter.com/wightmanr) has been on a mission to get pretrained weights for the newest Computer Vision models that come out of papers, and to compare his results with what the papers themselves report. The fantastic results live in his repository [here](https://github.com/rwightman/pytorch-image-models).
For users of the `fastai` library, it is a goldmine of models to play with! But how do we use it? Let's set up a basic `PETs` problem following the [tutorial](https://walkwithfastai.com/vision.clas.single_label):
```
#export
from fastai.vision.all import *
path = untar_data(URLs.PETS)
pat = r'/([^/]+)_\d+.*'
item_tfms = RandomResizedCrop(460, min_scale=0.75, ratio=(1.,1.))
batch_tfms = [*aug_transforms(size=224, max_warp=0), Normalize.from_stats(*imagenet_stats)]
bs=16
pets = DataBlock(blocks=(ImageBlock, CategoryBlock),
get_items=get_image_files,
splitter=RandomSplitter(0.2),
get_y=RegexLabeller(pat = pat),
item_tfms=item_tfms,
batch_tfms=batch_tfms)
dls = pets.dataloaders(path/'images', bs=bs)
```
From here we would normally do something like `cnn_learner(dls, arch, metrics)`, however we need to do a few things special to work with Ross' framework.
`fastai` has a `create_body` function, which is called during `cnn_learner`, that will take a model architecture and slice off the last Linear layer (resulting in a "body" that outputs unpooled features). This function looks like:
```
def create_body(arch, n_in=3, pretrained=True, cut=None):
"Cut off the body of a typically pretrained `arch` as determined by `cut`"
model = arch(pretrained=pretrained)
_update_first_layer(model, n_in, pretrained)
if cut is None:
ll = list(enumerate(model.children()))
cut = next(i for i,o in reversed(ll) if has_pool_type(o))
if isinstance(cut, int): return nn.Sequential(*list(model.children())[:cut])
elif callable(cut): return cut(model)
else: raise NamedError("cut must be either integer or a function")
```
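The `cut` search in `create_body` walks the model's children in reverse, looking for the first pooling layer from the end. The indexing logic can be illustrated without any framework (hypothetical layer names standing in for `model.children()`):

```python
# Stand-ins for the class names of model.children(), in order
children = ['Conv2d', 'BatchNorm2d', 'ReLU', 'Conv2d',
            'AdaptiveAvgPool2d', 'Flatten', 'Linear']

def has_pool_type(layer_name):
    # Simplified version of fastai's pooling-layer check
    return 'Pool' in layer_name

ll = list(enumerate(children))
cut = next(i for i, o in reversed(ll) if has_pool_type(o))
body = children[:cut]   # everything before the last pooling layer
```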
We're going to create our own version that plays well with `timm`.
> Also: notebooks like this are exported as external modules inside of the `wwf` library! This one can be found in `vision.timm` to be used with your projects!
```
#export
from timm import create_model
from fastai.vision.learner import _update_first_layer
#exports
def create_timm_body(arch:str, pretrained=True, cut=None, n_in=3):
"Creates a body from any model in the `timm` library."
model = create_model(arch, pretrained=pretrained, num_classes=0, global_pool='')
_update_first_layer(model, n_in, pretrained)
if cut is None:
ll = list(enumerate(model.children()))
cut = next(i for i,o in reversed(ll) if has_pool_type(o))
if isinstance(cut, int): return nn.Sequential(*list(model.children())[:cut])
elif callable(cut): return cut(model)
    else: raise NameError("cut must be either integer or function")
```
How do we use it? Let's try it out on an `efficientnet_b3` architecture (the entire list of supported architectures is found [here](https://github.com/rwightman/pytorch-image-models#models)).
```
body = create_timm_body('efficientnet_b3a', pretrained=True)
```
From here we can calculate the number of input features our head needs with `num_features_model`. We'll multiply this by two because `create_head` uses an `AdaptiveConcatPool2d` layer, which concatenates the results of an average pool and a max pool, doubling the number of features.
```
nf = num_features_model(body)*2; nf
```
And now we can create a head!
```
head = create_head(nf, dls.c)
```
To mix them together, we just wrap the two in a `nn.Sequential` and we now have a `PyTorch` model ready to be trained on:
```
net = nn.Sequential(body, head)
```
From here we would pass it onto `Learner`, specifying our `splitter` to be `default_split`
> `default_split` expects the body in `model[0]` and the head in `model[1]` to split our layer groups
```
learn = Learner(dls, net, splitter=default_split)
```
To confirm this all worked properly, we can call `learn.freeze()` and then count the frozen and trainable parameters. (You could also call `learn.summary`, but we skip that here since its output is lengthy):
```
learn.freeze()
frozen_params = filter(lambda p: not p.requires_grad, learn.model.parameters())
frozen_params = sum([np.prod(p.size()) for p in frozen_params])
trainable = filter(lambda p: p.requires_grad, learn.model.parameters())
trainable_params = sum([np.prod(p.size()) for p in trainable])
frozen_params, trainable_params
```
We can see that only 1.6 million of the 10 million parameters are trainable, so our model is ready for transfer learning!
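The same frozen/trainable check works for any PyTorch model. Here is a self-contained toy sketch (a two-layer stand-in, not the actual EfficientNet learner):

```python
import torch.nn as nn

def count_params(model):
    """Return (frozen, trainable) parameter counts for a PyTorch model."""
    frozen = sum(p.numel() for p in model.parameters() if not p.requires_grad)
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    return frozen, trainable

# toy stand-in for body + head: freeze the "body", leave the "head" trainable
net = nn.Sequential(nn.Linear(10, 20), nn.Linear(20, 2))
for p in net[0].parameters():
    p.requires_grad = False

count_params(net)  # (220, 42)
```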
## Turning it all into a function
Let's make this a bit easier and create something like `cnn_learner`, but for `timm`! We'll call it a `timm_learner`. First let's look at and compare what `cnn_learner` does internally:
```
def cnn_learner(dls, arch, loss_func=None, pretrained=True, cut=None, splitter=None,
y_range=None, config=None, n_out=None, normalize=True, **kwargs):
"Build a convnet style learner from `dls` and `arch`"
if config is None: config = {}
meta = model_meta.get(arch, _default_meta)
if n_out is None: n_out = get_c(dls)
assert n_out, "`n_out` is not defined, and could not be inferred from data, set `dls.c` or pass `n_out`"
if normalize: _add_norm(dls, meta, pretrained)
if y_range is None and 'y_range' in config: y_range = config.pop('y_range')
model = create_cnn_model(arch, n_out, ifnone(cut, meta['cut']), pretrained, y_range=y_range, **config)
learn = Learner(dls, model, loss_func=loss_func, splitter=ifnone(splitter, meta['split']), **kwargs)
if pretrained: learn.freeze()
return learn
```
At first it looks scary, but let's try and read it as best we can:
1. Grab any available meta information about the architecture we're using
2. Grab the number of expected outputs
3. Potentially normalize
4. Add a `y_range`
5. Create a `cnn_model` and `Learner`
6. Freeze our model
We're going to make custom `create_timm_model` and `timm_learner` functions to do what we just did above. First, `create_timm_model` is modeled after `create_cnn_model`:
```
#exports
def create_timm_model(arch:str, n_out, cut=None, pretrained=True, n_in=3, init=nn.init.kaiming_normal_, custom_head=None,
concat_pool=True, **kwargs):
"Create custom architecture using `arch`, `n_in` and `n_out` from the `timm` library"
body = create_timm_body(arch, pretrained, None, n_in)
if custom_head is None:
nf = num_features_model(nn.Sequential(*body.children())) * (2 if concat_pool else 1)
head = create_head(nf, n_out, concat_pool=concat_pool, **kwargs)
else: head = custom_head
model = nn.Sequential(body, head)
if init is not None: apply_init(model[1], init)
return model
```
And now for our `timm_learner`:
```
#export
from fastai.vision.learner import _add_norm
#exports
def timm_learner(dls, arch:str, loss_func=None, pretrained=True, cut=None, splitter=None,
y_range=None, config=None, n_out=None, normalize=True, **kwargs):
"Build a convnet style learner from `dls` and `arch` using the `timm` library"
if config is None: config = {}
if n_out is None: n_out = get_c(dls)
assert n_out, "`n_out` is not defined, and could not be inferred from data, set `dls.c` or pass `n_out`"
if y_range is None and 'y_range' in config: y_range = config.pop('y_range')
    model = create_timm_model(arch, n_out, cut, pretrained, y_range=y_range, **config)
learn = Learner(dls, model, loss_func=loss_func, splitter=default_split, **kwargs)
if pretrained: learn.freeze()
return learn
```
Let's try it out by making the same model we did a moment ago:
```
learn = timm_learner(dls, 'efficientnet_b3a')
```
And to verify let's look at those parameters one more time:
```
frozen_params = filter(lambda p: not p.requires_grad, learn.model.parameters())
frozen_params = sum([np.prod(p.size()) for p in frozen_params])
trainable = filter(lambda p: p.requires_grad, learn.model.parameters())
trainable_params = sum([np.prod(p.size()) for p in trainable])
frozen_params, trainable_params
```
They're exactly the same! So now we can utilize any architecture found inside of `timm` right away, and we built it in a structure very similar to how native `fastai` does it.
To use this module in your own work, simply do:
```python
from wwf.vision.timm import *
learn = timm_learner(dls, 'efficientnet_b3a', metrics=[error_rate, accuracy])
```
> Note: `timm` needs to be installed beforehand
## Model Lookup
To query various models to see what is available, you should directly use the `timm` library.
```
import timm
```
### Listing all models available
One option is to list every model possible:
```
timm.list_models()[:10]
```
### Searching for models
You can also search by name with wildcard patterns, as shown below:
```
timm.list_models('*efficientnet*')[:10]
timm.list_models('*b3a')[:10]
timm.list_models('resne*t*', pretrained=True)[:10]
```
## Some Warnings
* Watch for anything with a `tf_` prefix. This means the original weights were ported from Google, so the model uses manual padding to match TensorFlow's "same" padding, which adds GPU overhead and a general slowdown. If possible, use the non-TF versions of models
* HRNet is a bit of a problem child, so it is the only architecture that is not straightforward to use
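The padding issue in the first bullet comes from TensorFlow computing "same" padding dynamically, which can be asymmetric. A small illustrative sketch of the mechanism (an assumption-level helper, not timm's actual code):

```python
import math

def same_pad_1d(in_size, kernel, stride):
    """TF-style 'same' padding along one dimension, split into (left, right)."""
    out = math.ceil(in_size / stride)
    pad = max((out - 1) * stride + kernel - in_size, 0)
    return pad // 2, pad - pad // 2

# stride 2 on a 224-px input needs asymmetric padding, which a fixed
# nn.Conv2d `padding=` argument cannot express, hence the manual (slower) padding
same_pad_1d(224, 3, 2)  # (0, 1)
```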
This notebook uses `blurr`, an integration library between `fastai` and HuggingFace, to train a multilabel classification model that identifies toxic comments.
Dataset: [Jigsaw Toxicity Prediction](https://huggingface.co/datasets/jigsaw_toxicity_pred) (note: the code below loads the closely related `civil_comments` dataset)
```
#hide
!pip install fastai -q --upgrade
!pip install nbdev -q --upgrade
!pip install -q "huggingface-hub>0.0.10"
!pip install ohmeow-blurr -q --upgrade
#hide
%pip install datasets -q --upgrade #datasets & evaluation metrics for nlp from HF
```
# Imports
```
import torch
from datasets import load_dataset
from transformers import *
from fastai.text.all import *
from blurr.data.all import *
from blurr.modeling.all import *
#hide
from nbdev.showdoc import show_doc
from fastai import __version__ as fastai_version
from torch import __version__ as torch_version
from transformers import __version__ as transformers_version
print(f'Pytorch: {torch_version} Fastai: {fastai_version} HF Transformers: {transformers_version}')
#cuda
#torch.cuda.set_device(1)
print(f'Using GPU #{torch.cuda.current_device()}: {torch.cuda.get_device_name()}')
```
# Get your data
```
%mkdir data
raw_data = load_dataset('civil_comments', split='train[:1%]')
len(raw_data)
toxic_df = pd.DataFrame(raw_data)
toxic_df.head()
toxic_df.columns
label_columns = list(toxic_df.columns[1:]);
label_columns.remove('text');
label_columns
toxic_df = toxic_df.round({col: 0 for col in label_columns}) #round the label columns to whole numbers (0/1)
toxic_df = toxic_df.convert_dtypes()
toxic_df.head()
```
# Get your huggingface objects
```
model_cls = AutoModelForSequenceClassification
#Define the pretrained model that we want to use.
#DistilRoBERTa is distilled from the RoBERTa roberta-base checkpoint:
#a compressed model with ~35% fewer parameters that runs about twice as fast while largely preserving RoBERTa's performance
pretrained_model_name = 'distilroberta-base'
```
Pretrained models from HuggingFace can be found here:
https://huggingface.co/transformers/pretrained_models.html
Community-uploaded models:
https://huggingface.co/models.
- `t5`: :class:`~transformers.T5Config` (T5 model)
- `distilbert`: :class:`~transformers.DistilBertConfig` (DistilBERT model)
- `albert`: :class:`~transformers.AlbertConfig` (ALBERT model)
- `camembert`: :class:`~transformers.CamembertConfig` (CamemBERT model)
- `xlm-roberta`: :class:`~transformers.XLMRobertaConfig` (XLM-RoBERTa model)
- `longformer`: :class:`~transformers.LongformerConfig` (Longformer model)
- `roberta`: :class:`~transformers.RobertaConfig` (RoBERTa model)
- `reformer`: :class:`~transformers.ReformerConfig` (Reformer model)
- `bert`: :class:`~transformers.BertConfig` (Bert model)
- `openai-gpt`: :class:`~transformers.OpenAIGPTConfig` (OpenAI GPT model)
- `gpt2`: :class:`~transformers.GPT2Config` (OpenAI GPT-2 model)
- `transfo-xl`: :class:`~transformers.TransfoXLConfig` (Transformer-XL model)
- `xlnet`: :class:`~transformers.XLNetConfig` (XLNet model)
- `xlm`: :class:`~transformers.XLMConfig` (XLM model)
- `ctrl` : :class:`~transformers.CTRLConfig` (CTRL model)
- `flaubert` : :class:`~transformers.FlaubertConfig` (Flaubert model)
- `electra` : :class:`~transformers.ElectraConfig` (ELECTRA model)
```
config = AutoConfig.from_pretrained(pretrained_model_name) # Download configuration from S3 and cache.
config
config.num_labels = len(label_columns)
doc(BLURR.get_hf_objects)
AutoModelForSequenceClassification.from_pretrained(pretrained_model_name)
#Returns the architecture (str), config (obj), tokenizer (obj), and model (obj) given at minimum a pre-trained model name or path. Specify a task to ensure the right "AutoModelFor" is used to create the model.
hf_arch, hf_config, hf_tokenizer, hf_model = BLURR.get_hf_objects(
pretrained_model_name,
model_cls=model_cls,
config=config)
print(hf_arch), print(type(hf_config)), print(type(hf_tokenizer)), print(type(hf_model))
```
# Build your DataBlock & DataLoaders
```
doc(HF_TextBlock)
```
`HF_TextBlock` handles setting up the `HF_TokenizerTransform` and `HF_BatchTransform` transforms regardless of data source (e.g., this will work with files, DataFrames, whatever).
`HF_TokenizerTransform` was inspired by this [article](http://dev.fast.ai/tutorial.transformers). It handles both the tokenization and numericalization traditionally split apart in the fastai text DataBlock API. Additionally, it's been updated to add a prefix space for the huggingface architectures that need it.
You can pass a string or a list into this Transform, the latter being common in token classification tasks like named entity recognition.
`build_hf_input` uses fastai's `@typedispatch` decorator to provide complete flexibility in terms of how your numericalized tokens are assembled, and also what you return via `HF_BaseInput` and as your targets. You can override this implementation as needed by assigning a type to the `task` argument (and optionally the `tokenizer` argument as well).
What you return here is what will be fed into your huggingface model.
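fastai's type-dispatch decorator is richer, but the core idea, picking an implementation based on the argument's type, is the same as the standard library's `functools.singledispatch`. The `build_input` function below is purely hypothetical, just to show the mechanism:

```python
from functools import singledispatch

@singledispatch
def build_input(tokens):
    # fallback when no registered type matches
    raise TypeError(f"unsupported type: {type(tokens)}")

@build_input.register
def _(tokens: str):
    # a single string passes through unchanged
    return tokens

@build_input.register
def _(tokens: list):
    # a list of tokens (e.g. for NER) gets joined into one string
    return " ".join(map(str, tokens))

build_input([1, 2, 3])  # '1 2 3'
```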
```
# Note how we have to configure the num_labels to the number of labels we are predicting. Given that our labels are already encoded, we use a MultiCategoryBlock with encoded=True and vocab equal to the columns with our 1's and 0's.
# single input
# Define the datatypes of your item X,y
X = HF_TextBlock(hf_arch,
hf_config,
hf_tokenizer,
hf_model)
y = MultiCategoryBlock(encoded=True, vocab=label_columns)
blocks = (X, y)
blocks
doc(DataBlock)
label_columns
toxic_df.dtypes
dblock = DataBlock(blocks=blocks, # Define the datatypes of your item
get_x=ColReader('text'), #where to get the X from
get_y=ColReader(label_columns), #where to get the y from
splitter=RandomSplitter(valid_pct=0.2, seed=42)) #how to split the data for training & validation
dblock.summary(source=toxic_df)
dls = dblock.dataloaders(toxic_df, bs=16)
b = dls.one_batch(); len(b), b[0]['input_ids'].shape, b[1].shape
b[0]
b[1]
```
# Training
With our DataLoaders built, we can now build our Learner and train. We'll use mixed precision so we can train with bigger batches
```
model = HF_BaseModelWrapper(hf_model)
```
Same as `nn.Module`, but subclasses don't need to call `super().__init__()`. This wrapper passes named arguments into the huggingface model. We do this because not every argument for a given model is used or needed by each architecture, and fastai does not support passing in `None`.
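The wrapper idea can be sketched without any `transformers` dependency. This is a toy stand-in for illustration only, not blurr's actual `HF_BaseModelWrapper`:

```python
import inspect

class KwargsWrapper:
    """Toy wrapper: forward only the batch keys the model's signature accepts."""
    def __init__(self, model):
        self.model = model
        self.accepted = set(inspect.signature(model).parameters)

    def __call__(self, batch):
        # drop any batch entries the wrapped model does not know about
        kwargs = {k: v for k, v in batch.items() if k in self.accepted}
        return self.model(**kwargs)

# hypothetical model that only knows about input_ids and attention_mask
def dummy_model(input_ids, attention_mask=None):
    return len(input_ids)

wrapped = KwargsWrapper(dummy_model)
wrapped({"input_ids": [1, 2, 3], "token_type_ids": [0, 0, 0]})  # 3
```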
```
doc(Learner)
learn = Learner(dls,
model,
opt_func=partial(Adam),
loss_func=BCEWithLogitsLossFlat(), # multilabel classification loss_func, same as BCEWithLogitsLoss but flattens the input & target
metrics=partial(accuracy_multi, thresh=0.2),
cbs=[HF_BaseModelCallback],
splitter=hf_splitter).to_fp16()
learn.loss_func.thresh = 0.2
learn.create_opt() # -> will create your layer groups based on your "splitter" function
learn.freeze()
learn.blurr_summary()
doc(Learner.create_opt)
```
This method is called internally to create the optimizer; the hyper-parameters are then adjusted by what you pass to `Learner.fit` or your particular schedulers (see `callback.schedule`).
```
#check the learner work for a sample
preds = model(b[0])
preds.logits.shape, preds
print(len(learn.opt.param_groups))
learn.lr_find(suggestions=True)
learn.fit_one_cycle(1, lr_max=1e-3)
learn.unfreeze()
learn.lr_find(suggestions=True, start_lr=5e-7, end_lr=1e-2)
learn.fit_one_cycle(3, lr_max=slice(1e-7, 3e-6))
learn.show_results(learner=learn, max_n=2)
learn.loss_func.thresh = 0.7
learn.blurr_predict("You are the biggest loser! go to hell")
learn.blurr_predict("Who the fuck you think you are!")
learn.loss_func.thresh = 0.02
comment = """
Those damned affluent white people should only eat their own food, like cod cakes and boiled potatoes.
No enchiladas for them!
"""
learn.blurr_predict(comment)
preds, targs, losses = learn.get_preds(with_loss=True)
preds.shape, targs.shape, losses.shape
```
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import analysis
from analysis import lin_cost, abs_cost, las_cost
from analysis import fit_linear, fit_lasso, fit_lasso_linear
```
# Data parameters
```
rng = np.random.RandomState(20170808)
n_data = 50
alpha = .1
X_cov = np.array([[1., 0., 0.],
[0., .6, .59],
[0., .59, .6]])
M = np.array([-.5, .75, 0.])
noise_std = .5
fontsize = 12
```
# Analysis
```
X, Y = analysis.sample_data(X_cov, M, noise_std, n_data, rng)
f, axes = plt.subplots(1, 3, figsize=(9, 3))
for ii, ax in enumerate(axes):
ax.plot(X[:, ii], Y, '.')
ax.set_ylabel('Y')
ax.set_xlabel('X{}'.format(ii+1))
f.tight_layout()
plt.show()
f, axes = plt.subplots(1, 3, figsize=(9, 3))
idx = 0
for ii in range(3):
for jj in range(ii+1, 3):
axes[idx].plot(X[:, ii], X[:, jj], '.')
axes[idx].set_ylabel('X{}'.format(jj+1))
axes[idx].set_xlabel('X{}'.format(ii+1))
idx += 1
f.tight_layout()
plt.show()
lin_est = fit_linear(X, Y)
las_est = fit_lasso(X, Y, alpha)
laslin_est = fit_lasso_linear(X, Y, alpha)
n_pts = 100
limit = 1
locs = np.linspace(-limit, limit, n_pts)
M_grid = np.stack([a.ravel() for a in np.meshgrid(locs,
locs)]).T
M_grid = np.concatenate((M[0] * np.ones((n_pts**2, 1)), M_grid), axis=1)
lin_c = lin_cost(X, Y, M_grid).reshape(n_pts, n_pts)
abs_c = abs_cost(M_grid, alpha).reshape(n_pts, n_pts)
las_c = las_cost(X, Y, M_grid, alpha).reshape(n_pts, n_pts)
f, axes = plt.subplots(1, 3, figsize=(9, 3))
X_locs = M_grid.T[1].reshape(n_pts, n_pts)
Y_locs = M_grid.T[2].reshape(n_pts, n_pts)
for ax, data in zip(axes, [lin_c, abs_c, las_c]):
ax.contour(X_locs, Y_locs, data, 20)
ax.plot(M[1], M[2], 'ro')
ax.plot(lin_est[1], lin_est[2], 'o')
ax.plot(las_est[1], las_est[2], 'o')
ax.plot(laslin_est[1], laslin_est[2], 'o')
ax.set_xticks(np.linspace(-1, 1, 5))
ax.set_yticks(np.linspace(-1, 1, 5))
    # Move left y-axis and bottom x-axis to centre, passing through (0,0)
ax.spines['left'].set_position(('data', 0))
ax.spines['bottom'].set_position(('data', 0))
# Eliminate upper and right axes
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
# Show ticks in the left and lower axes only
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
n_trials = 10000
lin_m = np.zeros((n_trials, 2))
las_m = np.zeros((n_trials, 2))
laslin_m = np.zeros((n_trials, 2))
for ii in range(n_trials):
X, Y = analysis.sample_data(X_cov, M, noise_std, n_data, rng)
lin_m[ii] = fit_linear(X, Y)[1:]
las_m[ii] = fit_lasso(X, Y, alpha)[1:]
laslin_m[ii] = fit_lasso_linear(X, Y, alpha)[1:]
f, axes = plt.subplots(1, 3, figsize=(9, 3))
for ax, data in zip(axes, [lin_m, las_m, laslin_m]):
ax.hist2d(data[:,0], data[:,1], bins=np.linspace(-1, 1, 100), cmap='viridis_r')
for ax in axes:
ax.plot(M[1], M[2], 'wo', markeredgecolor='k')
    # Move left y-axis and bottom x-axis to centre, passing through (0,0)
ax.spines['left'].set_position(('data', 0))
ax.spines['bottom'].set_position(('data', 0))
# Eliminate upper and right axes
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
# Show ticks in the left and lower axes only
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.set_xticks(np.linspace(-1, 1, 5))
ax.set_yticks(np.linspace(-1, 1, 5))
f.tight_layout()
plt.show()
```
# Cross-validation analysis
```
alphas = np.logspace(-8, 0, 16)
cv_iters = 100
costs = np.zeros((cv_iters, alphas.size, 3))
for ii in range(cv_iters):
X, Y = analysis.sample_data(X_cov, M, noise_std, n_data, rng)
Xh, Yh = analysis.sample_data(X_cov, M, noise_std, n_data, rng)
for jj, a in enumerate(alphas):
costs[ii, jj, 0] = lin_cost(Xh, Yh, fit_lasso(X, Y, a))
costs[ii, jj, 1] = lin_cost(Xh, Yh, fit_lasso_linear(X, Y, a))
costs[ii, :, 2] = lin_cost(Xh, Yh, fit_linear(X, Y))
median_costs = np.median(costs, axis=0)
best_idxs = median_costs.argmin(axis=0)
n_trials = 10000
lin_m = np.zeros((n_trials, 2))
las_m = np.zeros((n_trials, 2))
laslin_m = np.zeros((n_trials, 2))
for ii in range(n_trials):
X, Y = analysis.sample_data(X_cov, M, noise_std, n_data, rng)
lin_m[ii] = fit_linear(X, Y)[1:]
las_m[ii] = fit_lasso(X, Y, alphas[best_idxs[0]])[1:]
laslin_m[ii] = fit_lasso_linear(X, Y, alphas[best_idxs[1]])[1:]
f, axes = plt.subplots(1, 3, figsize=(9, 3))
for ax, data in zip(axes, [lin_m, las_m, laslin_m]):
ax.hist2d(data[:,0], data[:,1], bins=np.linspace(-1, 1, 100), cmap='viridis_r')
for ax in axes:
ax.plot(M[1], M[2], 'wo', markeredgecolor='k')
    # Move left y-axis and bottom x-axis to centre, passing through (0,0)
ax.spines['left'].set_position(('data', 0))
ax.spines['bottom'].set_position(('data', 0))
# Eliminate upper and right axes
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
# Show ticks in the left and lower axes only
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.set_xticks(np.linspace(-1, 1, 5))
ax.set_yticks(np.linspace(-1, 1, 5))
f.tight_layout()
plt.show()
```
```
import os
import keras
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout, BatchNormalization
from PIL import Image
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
plt.style.use('dark_background')
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder
encoder = OneHotEncoder()
encoder.fit([[0], [1]])
data = []
paths = []
result = []
for r, d, f in os.walk(r'C:\Users\U\Downloads\brain tumour detection\datasets\yes'):
for file in f:
if '.jpg' in file:
paths.append(os.path.join(r, file))
for path in paths:
img = Image.open(path)
img = img.resize((128,128))
img = np.array(img)
if(img.shape == (128,128,3)):
data.append(np.array(img))
result.append(encoder.transform([[0]]).toarray())
paths = []
for r, d, f in os.walk(r"C:\Users\U\Downloads\brain tumour detection\datasets\no"):
for file in f:
if '.jpg' in file:
paths.append(os.path.join(r, file))
for path in paths:
img = Image.open(path)
img = img.resize((128,128))
img = np.array(img)
if(img.shape == (128,128,3)):
data.append(np.array(img))
result.append(encoder.transform([[1]]).toarray())
data = np.array(data)
data.shape
result = np.array(result)
result = result.reshape(2848,2)
x_train,x_test,y_train,y_test = train_test_split(data, result, test_size=0.2, shuffle=True, random_state=0)
model = Sequential()
model.add(Conv2D(32, kernel_size=(2, 2), input_shape=(128, 128, 3), padding = 'Same'))
model.add(Conv2D(32, kernel_size=(2, 2), activation ='relu', padding = 'Same'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(64, kernel_size = (2,2), activation ='relu', padding = 'Same'))
model.add(Conv2D(64, kernel_size = (2,2), activation ='relu', padding = 'Same'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(2, activation='softmax'))
model.compile(loss = "categorical_crossentropy", optimizer='Adamax')
print(model.summary())
y_train.shape
history = model.fit(x_train, y_train, epochs = 30, batch_size = 40, verbose = 1,validation_data = (x_test, y_test))
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Model Loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Validation'], loc='upper right')
plt.show()
def names(number):
    if number == 0:
        return "It's a Tumor"
    else:
        return "No, it's not a tumor"
from matplotlib.pyplot import imshow
img = Image.open(r"C:\Users\U\Downloads\brain tumour detection\datasets\yes\y1.jpg")
x = np.array(img.resize((128,128)))
x = x.reshape(1,128,128,3)
res = model.predict_on_batch(x)
classification = np.where(res == np.amax(res))[1][0]
imshow(img)
print(str(res[0][classification]*100) + '% Confidence This Is ' + names(classification))
from matplotlib.pyplot import imshow
img = Image.open(r"C:\Users\U\Downloads\brain tumour detection\datasets\no\no1.jpg")
x = np.array(img.resize((128,128)))
x = x.reshape(1,128,128,3)
res = model.predict_on_batch(x)
classification = np.where(res == np.amax(res))[1][0]
imshow(img)
print(str(res[0][classification]*100) + '% Confidence This Is A ' + names(classification))
```
## Homework 4
Use this notebook as a starter
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
```
Data:
- https://github.com/gastonstat/CreditScoring
- Also available [here](https://raw.githubusercontent.com/alexeygrigorev/mlbookcamp-code/master/chapter-06-trees/CreditScoring.csv)
```
!wget https://raw.githubusercontent.com/alexeygrigorev/mlbookcamp-code/master/chapter-06-trees/CreditScoring.csv
```
## Preparation
We'll talk about this dataset in more details in week 6. But for now, use the following code to get started
```
df = pd.read_csv('CreditScoring.csv')
df.columns = df.columns.str.lower()
df
```
Some of the features are encoded as numbers. Use the following code to de-code them:
```
status_values = {
1: 'ok',
2: 'default',
0: 'unk'
}
df.status = df.status.map(status_values)
home_values = {
1: 'rent',
2: 'owner',
3: 'private',
4: 'ignore',
5: 'parents',
6: 'other',
0: 'unk'
}
df.home = df.home.map(home_values)
marital_values = {
1: 'single',
2: 'married',
3: 'widow',
4: 'separated',
5: 'divorced',
0: 'unk'
}
df.marital = df.marital.map(marital_values)
records_values = {
1: 'no',
2: 'yes',
0: 'unk'
}
df.records = df.records.map(records_values)
job_values = {
1: 'fixed',
2: 'partime',
3: 'freelance',
4: 'others',
0: 'unk'
}
df.job = df.job.map(job_values)
df
```
Prepare the numerical variables:
```
for c in ['income', 'assets', 'debt']:
df[c] = df[c].replace(to_replace=99999999, value=0)
```
Remove clients with unknown default status
```
df = df[df.status != 'unk'].reset_index(drop=True)
```
Create the target variable
```
df['default'] = (df.status == 'default').astype(int)
del df['status']
df
```
## Your code
What are the categorical variables? What are the numerical?
```
numerical = ["seniority","time","age","expenses","income","assets","debt","amount","price","default"]
categorical = ["home", "marital", "records", "job"]
```
Split the data into 3 parts: train/validation/test with 60%/20%/20% distribution. Use the `train_test_split` function for that with `random_state=1`
```
# Setup validation framework
from sklearn.model_selection import train_test_split
df_full_train, df_test = train_test_split(df, test_size=0.2, random_state=1)
df_train, df_val = train_test_split(df_full_train, test_size=0.25, random_state=1)
df_train = df_train.reset_index(drop=True)
df_val = df_val.reset_index(drop=True)
df_test = df_test.reset_index(drop=True)
# y_full_train = df_full_train.churn.values
# y_train = df_train.churn.values
# y_val = df_val.churn.values
# y_test = df_test.churn.values
# del df_test["churn"]
# del df_val["churn"]
# del df_train["churn"]
# df_train.columns
print("train: %.2f, val: %.2f, test: %.2f" % (len(df_train)/len(df), len(df_val)/len(df), len(df_test)/len(df)))
```
## Question 1
ROC AUC could also be used to evaluate feature importance of numerical variables.
Let's do that
* For each numerical variable, use it as score and compute AUC with the "default" variable
* Use the training dataset for that
If your AUC is < 0.5, invert this variable by putting "-" in front
(e.g. `-df_train['expenses']`)
AUC can go below 0.5 if the variable is negatively correlated with the target variable. You can change the direction of the correlation by negating the variable - then a negative correlation becomes positive.
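A quick numeric illustration of this property, with toy arrays chosen so the scores are perfectly anti-ranked:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

y = np.array([0, 1, 0, 1, 1])
score = np.array([0.9, 0.2, 0.8, 0.1, 0.3])  # high scores land on the negatives

auc = roc_auc_score(y, score)            # 0.0: worse than random
auc_flipped = roc_auc_score(y, -score)   # 1.0: negating the score mirrors AUC around 0.5
```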
```
# Q1
import warnings
warnings.filterwarnings(action='once')
warnings.filterwarnings('ignore')
import matplotlib.pyplot as plt
from sklearn import metrics
from sklearn import linear_model
from sklearn.feature_extraction import DictVectorizer
from sklearn.model_selection import KFold
from tqdm.auto import tqdm
def train(dataFrame, y):
dicts = dataFrame.to_dict(orient="records")
dv = DictVectorizer(sparse=False)
X = dv.fit_transform(dicts)
model = linear_model.LogisticRegression()
model.fit(X, y)
return dv, model
def predict(dataFrame, dv, model):
dicts = dataFrame.to_dict(orient="records")
X = dv.transform(dicts)
y_pred = model.predict_proba(X)[:,1]
return y_pred
fields = [
{"field": "seniority", "correlation": 1},
{"field": "time", "correlation": 1},
{"field": "age", "correlation": 1},
{"field": "expenses", "correlation": 1},
{"field": "income", "correlation": 1},
{"field": "assets", "correlation": 1},
{"field": "debt", "correlation": 1},
{"field": "amount", "correlation": 1},
{"field": "price", "correlation": -1},
{"field": "default", "correlation": 1}
]
for f in fields:
df_train_selected = df_train[[f["field"]]]
df_val_selected = df_val[[f["field"]]]
    y_train = df_train["default"].values  # keep the target as-is; flip features (not labels) for negative correlations
y_val = df_val["default"].values
dv, model = train(df_train_selected, y_train)
y_pred = predict(df_val_selected, dv, model)
# display(y_val, y_pred)
auc = metrics.roc_auc_score(y_val, y_pred)
print("field:", f["field"], "auc:",round(auc,3))
# Q1b (no model training)
import warnings
warnings.filterwarnings(action='once')
warnings.filterwarnings('ignore')
import matplotlib.pyplot as plt
from sklearn import metrics
from sklearn import linear_model
from sklearn.feature_extraction import DictVectorizer
from sklearn.model_selection import KFold
from tqdm.auto import tqdm
fields = [
{"field": "seniority", "correlation": -1},
{"field": "time", "correlation": 1},
{"field": "age", "correlation": -1},
{"field": "expenses", "correlation": -1},
{"field": "income", "correlation": -1},
{"field": "assets", "correlation": -1},
{"field": "debt", "correlation": -1},
{"field": "amount", "correlation": 1},
{"field": "price", "correlation": 1},
{"field": "default", "correlation": 1}
]
# df_train[fields[0]["field"]].values
# metrics.roc_auc_score([1,0,1], [0.3, 0.2, 0.4])
for f in fields:
train_selected = f["correlation"] * df_train[f["field"]].values
# df_val_selected = df_val[[f["field"]]]
# y_train = f["correlation"] * df_train["default"].values
y = df_train["default"].values
# dv, model = train(df_train_selected, y_train)
# y_pred = predict(df_val_selected, dv, model)
auc = metrics.roc_auc_score(y, train_selected)
print("field:", f["field"], "auc:",round(auc,3))
```
Which numerical variable (among the following 4) has the highest AUC?
- seniority
- time
- income
- debt
## Training the model
From now on, use these columns only:
```
['seniority', 'income', 'assets', 'records', 'job', 'home']
```
Apply one-hot-encoding using `DictVectorizer` and train the logistic regression with these parameters:
```
LogisticRegression(solver='liblinear', C=1.0, max_iter=1000)
```
```
# Q2
selected_fields = ['seniority', 'income', 'assets', 'records', 'job', 'home']
def train(dataFrame, y):
dicts = dataFrame[selected_fields].to_dict(orient="records")
dv = DictVectorizer(sparse=False)
X = dv.fit_transform(dicts)
model = linear_model.LogisticRegression(solver='liblinear', C=1.0, max_iter=1000)
model.fit(X, y)
return dv, model
def predict(dataFrame, dv, model):
dicts = dataFrame.to_dict(orient="records")
X = dv.transform(dicts)
y_pred = model.predict_proba(X)[:,1]
return y_pred
y_train = df_train["default"].values
y_val = df_val["default"].values
dv, model = train(df_train, y_train)
y_pred = predict(df_val, dv, model)
auc = metrics.roc_auc_score(y_val, y_pred)
print("auc:",round(auc,3))
```
## Question 2
What's the AUC of this model on the validation dataset? (round to 3 digits)
- 0.512
- 0.612
- 0.712
- 0.812
## Question 3
Now let's compute precision and recall for our model.
* Evaluate the model on all thresholds from 0.0 to 1.0 with step 0.01
* For each threshold, compute precision and recall
* Plot them
```
import matplotlib.pyplot as plt
precisions = []
recalls = []
# evaluate every prediction threshold from 0.0 to 1.0 with step 0.01
thresholds = np.linspace(0, 1, 101)
for t in thresholds:
    predict_positive = (y_pred >= t)
    predict_negative = (y_pred < t)
    actual_positive = (y_val == 1)
    actual_negative = (y_val == 0)
    true_positive = (predict_positive & actual_positive).sum()
    true_negative = (predict_negative & actual_negative).sum()
    false_positive = (predict_positive & actual_negative).sum()
    false_negative = (predict_negative & actual_positive).sum()
    precision = true_positive / (true_positive + false_positive)
    # print("t=", t, "precision=", precision)
    precisions.append(precision)
    recall = true_positive / (true_positive + false_negative)
    # print("t=", t, "recall=", recall)
    recalls.append(recall)
# precisions, recalls
plt.plot(thresholds, precisions)
plt.plot(thresholds, recalls)
```
At which threshold precision and recall curves intersect?
* 0.2
* 0.4
* 0.6
* 0.8
## Question 4
Precision and recall are conflicting - when one grows, the other goes down. That's why they are often combined into the F1 score - a metric that takes both into account.
This is the formula for computing F1:
$$F_1 = 2 \cdot \cfrac{P \cdot R}{P + R}$$
Where $P$ is precision and $R$ is recall.
Let's compute F1 for all thresholds from 0.0 to 1.0 with increment 0.01
```
thresholds_precisions_recalls = zip(thresholds, precisions, recalls)
thresholds_F1s = []
for t_p_r in thresholds_precisions_recalls:
    F1 = (2 * t_p_r[1] * t_p_r[2]) / (t_p_r[1] + t_p_r[2])
    thresholds_F1s.append((t_p_r[0], F1))
sorted_thresholds_F1s = sorted(thresholds_F1s, key=lambda elem: elem[1], reverse=True)
rounded_sorted_thresholds_F1s = map(lambda t: (round(t[0], 2), round(t[1], 2)), sorted_thresholds_F1s)
list(rounded_sorted_thresholds_F1s)
# (threshold, F1)
```
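As a sanity check on the F1 formula above, a minimal helper (not part of the homework code):

```python
def f1(p, r):
    """Harmonic mean of precision and recall; defined as 0 when both are 0."""
    return 0.0 if (p + r) == 0 else 2 * p * r / (p + r)

f1(0.5, 0.5)  # 0.5
```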
At which threshold F1 is maximal?
- 0.1
- 0.3
- 0.5
- 0.7
## Question 5
Use the `KFold` class from Scikit-Learn to evaluate our model on 5 different folds:
```
KFold(n_splits=5, shuffle=True, random_state=1)
```
* Iterate over different folds of `df_full_train`
* Split the data into train and validation
* Train the model on train with these parameters: `LogisticRegression(solver='liblinear', C=1.0, max_iter=1000)`
* Use AUC to evaluate the model on validation
```
# !pip install tqdm
import warnings
warnings.filterwarnings('ignore')

# Cross-validation
from sklearn.model_selection import KFold
from tqdm.auto import tqdm

def train(df, y):
    dicts = df[selected_fields].to_dict(orient="records")
    dv = DictVectorizer(sparse=False)
    X = dv.fit_transform(dicts)
    model = linear_model.LogisticRegression(solver='liblinear', C=1.0, max_iter=1000)
    model.fit(X, y)
    return dv, model

def predict(df, dv, model):
    dicts = df.to_dict(orient="records")
    X = dv.transform(dicts)
    return model.predict_proba(X)[:, 1]

splits = 5
kf = KFold(n_splits=splits, shuffle=True, random_state=1)
auc_scores = []
for train_idx, val_idx in tqdm(kf.split(df_full_train), total=splits):
    df_train_iter = df_full_train.iloc[train_idx]
    df_val_iter = df_full_train.iloc[val_idx]
    y_train_iter = df_train_iter.default.values
    y_val_iter = df_val_iter.default.values
    dv, model = train(df_train_iter, y_train_iter)
    y_pred_iter = predict(df_val_iter, dv, model)
    auc_scores.append(metrics.roc_auc_score(y_val_iter, y_pred_iter))
print("AUC mean: %.4f, AUC std: +-%.4f" % (np.mean(auc_scores), np.std(auc_scores)))
```
How large is the standard deviation of the scores across different folds?
- 0.001
- 0.014
- 0.09
- 0.14
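To make the fold mechanics concrete, here is a pure-NumPy sketch of roughly what a shuffled `KFold` produces (illustrative only; use the real scikit-learn class in practice, and note its fold sizing may differ when `n` is not divisible by `n_splits`):

```python
import numpy as np

def kfold_indices(n, n_splits, seed=1):
    """Yield (train_idx, val_idx) pairs over a shuffled range(n)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    folds = np.array_split(idx, n_splits)
    for i in range(n_splits):
        val = folds[i]
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        yield train, val

splits = list(kfold_indices(10, 5))
```

Every index lands in exactly one validation fold, so the five validation sets together cover the whole dataset.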
## Question 6
Now let's use 5-fold cross-validation to find the best parameter C.
* Iterate over the following C values: `[0.01, 0.1, 1, 10]`
* Use these parameters for the model: `LogisticRegression(solver='liblinear', C=C, max_iter=1000)`
* Compute the mean score as well as the std
```
# !pip install tqdm
import warnings
warnings.filterwarnings('ignore')

# Cross-validation over several values of C
from sklearn.model_selection import KFold
from tqdm.auto import tqdm

def train(df, y, C):
    dicts = df[selected_fields].to_dict(orient="records")
    dv = DictVectorizer(sparse=False)
    X = dv.fit_transform(dicts)
    model = linear_model.LogisticRegression(solver='liblinear', C=C, max_iter=1000)
    model.fit(X, y)
    return dv, model

def predict(df, dv, model):
    dicts = df.to_dict(orient="records")
    X = dv.transform(dicts)
    return model.predict_proba(X)[:, 1]

splits = 5
for C in tqdm([0.01, 0.1, 1, 10], total=4):
    kf = KFold(n_splits=splits, shuffle=True, random_state=1)
    auc_scores = []
    for train_idx, val_idx in kf.split(df_full_train):
        df_train_iter = df_full_train.iloc[train_idx]
        df_val_iter = df_full_train.iloc[val_idx]
        y_train_iter = df_train_iter.default.values
        y_val_iter = df_val_iter.default.values
        dv, model = train(df_train_iter, y_train_iter, C)
        y_pred_iter = predict(df_val_iter, dv, model)
        auc_scores.append(metrics.roc_auc_score(y_val_iter, y_pred_iter))
    print("C: %s, AUC mean: %.3f, AUC std: +-%.4f" % (C, np.mean(auc_scores), np.std(auc_scores)))
```
Which C leads to the best mean score?
- 0.01
- 0.1
- 1
- 10
If you have ties, select the score with the lowest std. If you still have ties, select the smallest C.
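That tie-breaking rule can be encoded directly as a sort key. The (C, mean, std) numbers below are hypothetical placeholders, not the homework's actual results:

```python
# highest mean first, then lowest std, then smallest C
results = [
    (0.01, 0.992, 0.006),
    (0.1,  0.995, 0.004),
    (1,    0.995, 0.003),
    (10,   0.995, 0.003),
]
best_C, best_mean, best_std = min(results, key=lambda r: (-r[1], r[2], r[0]))
```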
## Submit the results
Submit your results here: https://forms.gle/e497sR5iB36mM9Cs5
It's possible that your answers won't match exactly. If it's the case, select the closest one.
## Deadline
The deadline for submitting is 04 October 2021, 17:00 CET. After that, the form will be closed.
<a href="https://colab.research.google.com/github/asigalov61/Meddleying-MAESTRO/blob/main/Meddleying_MAESTRO.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Meddleying MAESTRO (ver 3.1)
***
## Full-featured Algorithmic Intelligence Music Augmentator (AIMA) with full multi-instrument MIDI output and Karaoke support.
***
### Project Los Angeles
### Tegridy Code 2020
***
# Setup Environment, clone needed code, and install all required dependencies
```
#@title Install all dependencies (run only once per session)
!pip install pretty_midi
!pip install visual_midi
!curl -L "https://raw.githubusercontent.com/asigalov61/Meddleying-MAESTRO/main/MIDI.py" > 'MIDI.py'
!mkdir '/content/Dataset/'
!mkdir '/content/C_Dataset/'
#@title Import all modules
import glob
import os
import numpy as np
import toolz
import music21
from music21 import *
import pickle
import time
import math
import sys
import tqdm.auto
import secrets
import pretty_midi
from google.colab import output, drive
import statistics
import matplotlib.pyplot as plt
from matplotlib.patches import Circle
from mido import MidiFile
from IPython.display import display, Image
import MIDI
from visual_midi import Plotter
from visual_midi import Preset
from pretty_midi import PrettyMIDI
ticks_per_note = 50
ctime = 0
cev_matrix = []
cnotes_matrix = []
debug = False
```
# Select and download a sample MIDI dataset
```
#@title (Best Choice/Multi-Instrumental) Processed ready-to-use Special Tegridy Multi-Instrumental Dataset
%cd /content/
!wget 'https://github.com/asigalov61/Meddleying-MAESTRO/raw/main/Meddleying-MAESTRO-Music-Dataset.data'
#!unzip -j Meddleying-MAESTRO-Music-Dataset.data
#!rm Meddleying-MAESTRO-Music-Dataset.data
#@title (BEST Choice / Multi-Instrumental) Special Tegridy MIDI Multi-Instrumental Dataset (~325 MIDIs)
%cd /content/Dataset/
!wget 'https://github.com/asigalov61/Meddleying-MAESTRO/raw/main/Dataset/Tegridy-MIDI-Dataset-CC-BY-NC-SA.zip'
!unzip -j 'Tegridy-MIDI-Dataset-CC-BY-NC-SA.zip'
!rm 'Tegridy-MIDI-Dataset-CC-BY-NC-SA.zip'
%cd /content/
#@title (Piano Performance Dataset) Download Google Magenta MAESTRO v.2.0.0 Piano MIDI Dataset (~1300 MIDIs)
%cd /content/Dataset/
!wget 'https://storage.googleapis.com/magentadata/datasets/maestro/v2.0.0/maestro-v2.0.0-midi.zip'
!unzip -j maestro-v2.0.0-midi.zip
!rm maestro-v2.0.0-midi.zip
%cd /content/
#@title A simple way to unzip MIDI datasets without any hassle
!mkdir /content/Dataset/
%cd /content/Dataset/
!unzip -j /content/Dataset/*.zip
!rm /content/Dataset/*.zip
%cd /content/
```
# Process MIDI Dataset to MIDI Notes and MIDI Events Lists
```
#@title Please note that the transpose function reduces MIDIs to piano with chordwise timings. Moving a slider to its minimum value disables that option. Standard MIDI timings are 400/120
full_path_to_output_dataset_to = "/content/Meddleying-MAESTRO-Music-Dataset.data" #@param {type:"string"}
dataset_slices_length_in_notes = 2 #@param {type:"slider", min:2, max:60, step:1}
transpose_MIDIs_to_one_key = False #@param {type:"boolean"}
melody_reduction_to_slices_max_pitches = False #@param {type:"boolean"}
desired_MIDI_channel = 16 #@param {type:"slider", min:1, max:16, step:1}
flip_input_dataset = False #@param {type:"boolean"}
remove_drums = True #@param {type:"boolean"}
flip_notes = False #@param {type:"boolean"}
transpose_notes_pitch = 0 #@param {type:"slider", min:-30, max:30, step:1}
remove_random_notes = False #@param {type:"boolean"}
remove_every_randomth_note = False #@param {type:"boolean"}
remove_every_n_notes = False #@param {type:"boolean"}
remove_n_notes_per_slice = 0 #@param {type:"slider", min:0, max:7, step:1}
remove_every_nth_note = 0 #@param {type:"slider", min:0, max:7, step:1}
keep_only_notes_above_this_pitch_number = 0 #@param {type:"slider", min:0, max:127, step:1}
constant_notes_duration_time_ms = 0 #@param {type:"slider", min:0, max:800, step:100}
five_notes_per_octave_pitch_quantization = False #@param {type:"boolean"}
octave_channel_split = False #@param {type:"boolean"}
simulated_velocity_volume = 2 #@param {type:"slider", min:2, max:127, step:1}
simulated_velocity_range = 1 #@param {type:"slider", min:1, max:127, step:1}
simulated_velocity_multiplier = 1.2 #@param {type:"slider", min:0, max:2, step:0.1}
simulated_velocity_based_on_pitch = False #@param {type:"boolean"}
simulated_velocity_based_on_top_pitch = True #@param {type:"boolean"}
simulated_velocity_top_pitch_shift_pitch = 1 #@param {type:"slider", min:1, max:120, step:1}
simulated_velocity_baseline_pitch = 1 #@param {type:"slider", min:1, max:127, step:1}
simulated_velocity_chord_size_in_notes = 0 #@param {type:"slider", min:0, max:127, step:1}
reverse_resulting_dataset = False #@param {type:"boolean"}
combine_original_and_resulting_datasets_together = False #@param {type:"boolean"}
combine_flipped_and_resulting_datasets_together = False #@param {type:"boolean"}
try_karaoke = False #@param {type:"boolean"}
debug = False #@param {type:"boolean"}
###########
os.chdir("./")
idx = 0
oev_matrix = []
onot_matrix = []
fev_matrix = []
fnot_matrix = []
fevent = []
kar_events_matrix = []
kar_notes_matrix = []
ev_matrix = []
rev_matrix = []
not_matrix = []
rnot_matrix = []
durations_matrix = []
velocities_matrix = []
files_count = 0
remnote = 0
remnote_count = 0
notes_counter = 0
every_random_note = 7
itrack = 0
prev_event = []
next_event = []
slice_events = []
slices_pitches = []
slices_events = []
slices_melody_events = []
slices_melody_pitches = []
slices_counter = 0
slices_count = 0
chord_counter = 0
max_event_pitch = 0
first_event = True
###########
def list_average(num):
    return sum(num) / len(num)
#converts all midi files in the current folder
#converting everything into the key of C major or A minor
# major conversions
if transpose_MIDIs_to_one_key:
    majors = dict([("A-", 4),("A", 3),("B-", 2),("B", 1),("C", 0),("D-", -1),("D", -2),("E-", -3),("E", -4),("F", -5),("G-", 6),("G", 5)])
    minors = dict([("A-", 1),("A", 0),("B-", -1),("B", -2),("C", -3),("D-", -4),("D", -5),("E-", 6),("E", 5),("F", 4),("G-", 3),("G", 2)])
    os.chdir("./Dataset/")
    print('Converting all possible MIDI files to C Key.')
    print('This may take a while. Please wait...')
    for file in tqdm.auto.tqdm(glob.glob("*.mid")):
        try:
            score = music21.converter.parse(file)
            key = score.analyze('key')
            if key.mode == "major":
                halfSteps = majors[key.tonic.name]
            elif key.mode == "minor":
                halfSteps = minors[key.tonic.name]
            newscore = score.transpose(halfSteps)
            key = newscore.analyze('key')
            newFileName = "../C_Dataset/C_" + file  # C_Dataset sits next to Dataset
            newscore.write('midi', newFileName)
        except:
            pass
    os.chdir("..")  # return to the original working directory
print('Loading MIDI files...')
print('This may take a while, particularly on a large dataset.')
if not transpose_MIDIs_to_one_key:
    dataset_addr = "Dataset"
else:
    dataset_addr = "C_Dataset"
files = os.listdir(dataset_addr)
print('Now processing the files.')
print('Please stand-by...')
for file in tqdm.auto.tqdm(files):
file_address = os.path.join(dataset_addr, file)
score = []
midi_file = open(file_address, 'rb')
if debug: print('Processing File:', file_address)
try:
score2 = MIDI.midi2opus(midi_file.read())
except:
print('Bad file. Skipping...')
continue
score1 = MIDI.to_millisecs(score2)
score3 = MIDI.opus2score(score1)
score = score3
midi_file.close()
if remove_drums:
score4 = MIDI.grep(score3, [0, 1, 2, 3, 4, 5, 6, 7, 8, 10, 11, 12, 13, 14, 15])
else:
score4 = score3
if desired_MIDI_channel < 16:
score = MIDI.grep(score4, [desired_MIDI_channel-1])
else:
score = score4
first_event = True
itrack = 1
while itrack < len(score):
for event in score[itrack]:
if event[0] == 'note':
event.append(idx)
idx += 1
if first_event == True:
event.append(1)
first_event = False
else:
event.append(0)
onot_matrix.append(event[4])
oev_matrix.append(event)
fevent = event
fevent[4] = 127 - event[4]
fnot_matrix.append(fevent[4])
fev_matrix.append(fevent)
#if flip_input_dataset:
event[4] = 127 - event[4]
if five_notes_per_octave_pitch_quantization:
event[4] = int(math.floor(event[4] / 12 * 5) * 12 / 5)
if octave_channel_split:
event[4] = int((event[4] + (event[3] - 4) * 12) % (127 - 12 * 2))
if simulated_velocity_volume > 2 and simulated_velocity_range > 1:
event[5] = simulated_velocity_volume + int(secrets.randbelow(simulated_velocity_range) * simulated_velocity_multiplier)
if simulated_velocity_based_on_pitch:
if event[4] >= simulated_velocity_baseline_pitch:
event[5] = int(simulated_velocity_volume * simulated_velocity_multiplier)
else:
if event[5] < simulated_velocity_baseline_pitch:
event[5] = int(simulated_velocity_baseline_pitch * simulated_velocity_multiplier)
if chord_counter < simulated_velocity_chord_size_in_notes:
event[5] = int(simulated_velocity_volume * simulated_velocity_multiplier)
chord_counter += 1
else:
chord_counter = 0
simulated_velocity_volume = int(event[4] * simulated_velocity_multiplier)
if simulated_velocity_based_on_top_pitch:
if event[5] < simulated_velocity_baseline_pitch:
event[5] = int((max_event_pitch + simulated_velocity_top_pitch_shift_pitch) * simulated_velocity_multiplier)
else:
event[5] = max_event_pitch + simulated_velocity_top_pitch_shift_pitch
if constant_notes_duration_time_ms > 0:
event[2] = constant_notes_duration_time_ms
if transpose_notes_pitch:
event[4] = event[4] + transpose_notes_pitch
if flip_notes:
event[4] = 127 - event[4]
if slices_counter < dataset_slices_length_in_notes:
slices_counter += 1
notes_per_slice = dataset_slices_length_in_notes
slices_events.append(event)
slices_pitches.append(event[4])
else:
notes_per_slice = 3
slices_count += 1
slices_counter = 0
slices_events.append(event)
slices_pitches.append(event[4])
max_event_pitch = max(slices_pitches)
max_event_index = slices_pitches.index(max_event_pitch)
max_event = slices_events[max_event_index]
slices_melody_events.append(max_event)
slices_melody_pitches.append(max_event[4])
slices_events = []
slices_pitches = []
if remove_random_notes:
if secrets.randbelow(2) == 1:
remnote_count += 1
else:
not_matrix.append(event[4])
ev_matrix.append(event)
if keep_only_notes_above_this_pitch_number > 0:
if event[4] < keep_only_notes_above_this_pitch_number:
remnote_count += 1
else:
not_matrix.append(event[4])
ev_matrix.append(event)
slices_events.append(event)
slices_pitches.append(event[4])
if remove_every_n_notes > 0:
if remnote < remove_every_n_notes:
remnote += 1
remnote_count += 1
else:
remnote = 0
not_matrix.append(event[4])
ev_matrix.append(event)
slices_events.append(event)
slices_pitches.append(event[4])
if remove_n_notes_per_slice > 0:
if slices_counter == remove_n_notes_per_slice:
not_matrix.append(event[4])
ev_matrix.append(event)
slices_events.append(event)
slices_pitches.append(event[4])
else:
remnote_count += 1
if remove_every_nth_note > 0:
if remnote == remove_every_nth_note + 1:
remnote = 0
remnote_count += 1
else:
remnote += 1
not_matrix.append(event[4])
ev_matrix.append(event)
slices_events.append(event)
slices_pitches.append(event[4])
if remove_every_randomth_note:
if remnote == every_random_note + 1:
remnote = 0
remnote_count += 1
every_random_note = secrets.randbelow(every_random_note+2)
else:
remnote += 1
not_matrix.append(event[4])
ev_matrix.append(event)
slices_events.append(event)
slices_pitches.append(event[4])
else:
not_matrix.append(event[4])
ev_matrix.append(event)
slices_events.append(event)
slices_pitches.append(event[4])
notes_counter += 1
kar_events_matrix.append(event)
kar_notes_matrix.append(event[4])
#how to add stuff...
if try_karaoke:
if event[0] == 'text_event' or event[0] == 'lyric':
kar_events_matrix.append(event) #only if you want a separate matrix for this kind of event
kar_notes_matrix.append(-1) #pitch -1 will be karaoke
ev_matrix.append(event)
not_matrix.append(-1)
#this is it :)
itrack += 1
# Calculate stats about the resulting dataset
average_note_pitch = 0
min_note = 127  # start high so min() can find the true minimum
max_note = 0
itrack = 1
while itrack < len(score):
for event in score[itrack]:
if event[0] == 'note':
min_note = int(min(min_note, event[4]))
max_note = int(max(max_note, event[4]))
itrack += 1
files_count += 1
if debug:
print('File:', midi_file)
if melody_reduction_to_slices_max_pitches:
not_matrix = slices_melody_pitches
ev_matrix = slices_melody_events
print('Augmenting the dataset now to reduce plagiarism and repetitions.')
rnot_matrix = not_matrix
rev_matrix = ev_matrix
if reverse_resulting_dataset:
rnot_matrix.reverse()
rev_matrix.reverse()
fnot_matrix.reverse()
fev_matrix.reverse()
#else:
#onot_matrix.reverse()
#oev_matrix.reverse()
#not_matrix = onot_matrix
#ev_matrix = oev_matrix
if combine_original_and_resulting_datasets_together:
not_matrix += onot_matrix
ev_matrix += oev_matrix
slices_count = slices_count * 2
if combine_flipped_and_resulting_datasets_together:
not_matrix += fnot_matrix
ev_matrix += fev_matrix
slices_count = slices_count * 2
if try_karaoke == True:
#ev_matrix += kar_events_matrix
#not_matrix += kar_notes_matrix
ev_matrix.append(['karaoke'])
not_matrix.append(-1)
average_note_pitch = int(list_average(not_matrix))
print('Task complete :)')
print('==================================================')
print('Number of processed dataset MIDI files:', files_count)
if reverse_resulting_dataset: print('The dataset was augmented to prevent plagiarism as requested.')
print('Number of notes in the dataset MIDI files:', notes_counter)
if remnote_count > 0: print('Number of notes removed:', remnote_count)
if slices_count > 0: print('There are', slices_count, 'slices, each', notes_per_slice, '/', dataset_slices_length_in_notes, 'notes long.')
#print('Minimum note pitch:', min_note)
#print('Maximum note pitch:', max_note)
print('Number of notes in the resulting dataset:', len(not_matrix))
print('Number of total MIDI events recorded:', len(ev_matrix))
print('Average note pitch:', average_note_pitch)
print('First 5 notes of the resulting dataset:', ev_matrix[0:5])
print('Last event:', ev_matrix[-1])
if remove_drums: print('Drums MIDI events have been removed as requested.')
# define a list of places
MusicDataset = [not_matrix, ev_matrix]
with open(full_path_to_output_dataset_to, 'wb') as filehandle:
# store the data as binary data stream
pickle.dump(MusicDataset, filehandle)
print('Dataset was saved at:', full_path_to_output_dataset_to)
print('Task complete. Enjoy! :)')
```
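The cell above persists the dataset as a plain pickled pair of lists (`[not_matrix, ev_matrix]`). A minimal round-trip of that storage format, with toy values and a temp-file path of my choosing, looks like this:

```python
import os
import pickle
import tempfile

not_matrix = [60, 64, 67]                  # note pitches
ev_matrix = [['note', 0, 400, 0, 60, 90]]  # MIDI.py-style note events
path = os.path.join(tempfile.gettempdir(), 'mm_dataset.data')
with open(path, 'wb') as fh:
    pickle.dump([not_matrix, ev_matrix], fh)
with open(path, 'rb') as fh:
    loaded = pickle.load(fh)
```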
# Load/Re-load the processed dataset
```
#@title Load pre-processed dataset from a file to memory
full_path_to_dataset_file = "/content/Meddleying-MAESTRO-Music-Dataset.data" #@param {type:"string"}
not_matrix = []
ev_matrix = []
try_karaoke = False
with open(full_path_to_dataset_file, 'rb') as filehandle:
# read the data as binary data stream
MusicDataset = pickle.load(filehandle)
not_matrix = MusicDataset[0]
ev_matrix = MusicDataset[1]
events_matrix = ev_matrix
notes_matrix = not_matrix
if ev_matrix[-1][0] == 'karaoke' and not_matrix[-1] == -1:
try_karaoke = True
print('Detected MM Version 2.7+ Karaoke Dataset')
else:
print('Detected MM Legacy/non-Karaoke Dataset')
print('Task complete. Enjoy! :)')
print('==================================================')
print('Number of notes in the dataset:', len(not_matrix))
print('Number of total MIDI events recorded:', len(ev_matrix))
print('Done! Enjoy! :)')
```
# Custom MIDI / priming sequence option
```
#@title PRO Tip: Try to match at least an end_note duration and/or velocity for best results
full_path_to_MIDI_file = "/content/seed3.mid" #@param {type:"string"}
MIDI_channels_selection = "all" #@param ["all"] {allow-input: true}
start_note_index = 0 #@param {type:"number"}
end_note_index = 60#@param {type:"number"}
output_ticks = 400 #@param {type:"slider", min:0, max:2000, step:100}
ticks_per_note = 120 #@param {type:"slider", min:0, max:2000, step:10}
ticks_durations_multiplier = 1
score = []
cev_matrix = []
cnotes_matrix = []
ctime = 0
midi_file = open(full_path_to_MIDI_file, 'rb')
if debug: print('Processing File:', file_address)
if MIDI_channels_selection == 'all':
score1 = MIDI.midi2score(midi_file.read())
else:
score0 = MIDI.midi2score(midi_file.read())
score1 = MIDI.grep(score0, [int(MIDI_channels_selection)])
midi_file.close()
score2 = MIDI.score2opus(score1)
score3 = MIDI.to_millisecs(score2)
score = MIDI.opus2score(score3)
cnotes_matrix = []
cev_matrix = []
x = 0
itrack = 1
while itrack < len(score):
for event in score[itrack]:
if event[0] == 'note':
if x >= start_note_index and x <= end_note_index:
cnotes_matrix.append(event[4])
if x >= start_note_index and x <= end_note_index:
cev_matrix.append(['note', ctime, event[2], event[3], event[4], event[5]])
#ctime += ticks_per_note
#ctime = event[1]
ctime += abs(output_ticks - int((event[5] + ticks_per_note) * ticks_durations_multiplier))
x += 1
itrack += 1
if debug:
print('File:', midi_file)
print('Results:')
events_matrix = ev_matrix
start_event = cev_matrix[-1]
cindex = 0
index2 = 0
index4 = 0
index3 = 0
index5 = 0
for i in range(len(events_matrix)):
if events_matrix[i][4] == start_event[4]:
index4 = i
if debug: print('Found matching continuation primer note.')
if events_matrix[i][5] == start_event[5]:
index2 = i
if debug: print('Found matching continuation primer velocity.')
if events_matrix[i][2] == start_event[2]:
index5 = i
if debug: print('Found matching continuation duration.')
if events_matrix[i][3] == start_event[3]:
index3 = i
if debug: print('Found matching continuation MIDI channel.')
if debug: print('Found matching continuation MIDI event.')
cindex = 0
if index4 != 0: cindex = index4; print('Found a matching note.')
if index5 != 0: cindex = index5; print('Found a matching duration.')
if index2 != 0: cindex = index2; print('Found a matching velocity.')
if index3 != 0: cindex = index3; print('Found a matching MIDI channel.')
# cindex stays an int (0) when nothing matched, so no tuple unpacking is needed
if cindex != 0:
print('Success. Continuation is possible. Enjoy! :)')
print('Number of notes in the primer composition:', len(cnotes_matrix))
print('Found matching Dataset index #:', cindex)
print('Primer MIDI last event/continuation event:', start_event)
else:
print('Sorry, but there are no matching MIDI events in the dataset to continue given primer composition.')
print('Try to use a different End Note value (end_note_index), a larger dataset, or a different primer composition.')
print('Or you can try different dataset/primer processing settings. You can also try including your primer into the dataset.')
print('Done!')
```
# Generate Music
Standard MIDI timings are 400/120(80).
```
#@title Play with the settings until you get what you like
relative_note_timings = True #@param {type:"boolean"}
start_note = 60 #@param {type:"slider", min:1, max:127, step:1}
start_with_random_introduction = True #@param {type:"boolean"}
notes_per_slice = 60 #@param {type:"slider", min:1, max:200, step:1}
number_of_notes_to_match_slices = 40 #@param {type:"slider", min:0, max:100, step:1}
number_of_slices = 10 #@param {type:"slider", min:1, max:100, step:1}
extra_match_slices = "Durations" #@param ["Notes Only", "Durations", "Velocities", "Channels", "Full Match"]
try_to_find_intro_for_composition = False
output_ticks = 400 #@param {type:"slider", min:0, max:2000, step:100}
ticks_per_note = 180 #@param {type:"slider", min:0, max:2000, step:10}
ticks_durations_multiplier = 1
notes_timings_multiplier = 1 #@param {type:"slider", min:0, max:2, step:0.01}
notes_durations_multiplier = 1 #@param {type:"slider", min:0.5, max:1.5, step:0.01}
notes_velocities_multiplier = 1.5 #@param {type:"slider", min:0.1, max:2, step:0.1}
transpose_velocity = -30 #@param {type:"slider", min:-60, max:60, step:1}
transpose_composition = 0 #@param {type:"slider", min:-30, max:30, step:1}
set_all_MIDI_patches_to_piano = False #@param {type:"boolean"}
MIDI_channel_patch_00 = 0 #@param {type:"number"}
MIDI_channel_patch_01 = 24 #@param {type:"number"}
MIDI_channel_patch_02 = 32 #@param {type:"number"}
MIDI_channel_patch_03 = 40 #@param {type:"number"}
MIDI_channel_patch_04 = 42 #@param {type:"number"}
MIDI_channel_patch_05 = 46 #@param {type:"number"}
MIDI_channel_patch_06 = 56 #@param {type:"number"}
MIDI_channel_patch_07 = 71 #@param {type:"number"}
MIDI_channel_patch_08 = 73 #@param {type:"number"}
MIDI_channel_patch_09 = 0 #@param {type:"number"}
MIDI_channel_patch_10 = 0 #@param {type:"number"}
MIDI_channel_patch_11 = 0 #@param {type:"number"}
MIDI_channel_patch_12 = 0 #@param {type:"number"}
MIDI_channel_patch_13 = 0 #@param {type:"number"}
MIDI_channel_patch_14 = 0 #@param {type:"number"}
MIDI_channel_patch_15 = 0 #@param {type:"number"}
output_events_matrix = []
output_events_matrix1 = []
midi_data = []
midi_dats1 = []
events_matrix = []
notes_matrix = []
index = 0
index1 = 0
time = 0
x = 0
nts = 0
nts1 = 0
dts = 0
kar = 0
output = []
output1 = []
average_note_pitch = 100
time_r = 0
ovent = []
ovent_r = []
ovent_a = []
start_event = []
event4 = []
event3 = []
event2 = []
event1 = []
event = []
event01 = []
event02 = []
event03 = []
event04 = []
global_time = []
block_events = []
block_notes = []
end_index = 0
ch0_ev_matrix = []
ch1_ev_matrix = []
ch2_ev_matrix = []
ch3_ev_matrix = []
ch4_ev_matrix = []
ch5_ev_matrix = []
ch6_ev_matrix = []
ch7_ev_matrix = []
ch8_ev_matrix = []
ch9_ev_matrix = []
ch10_ev_matrix = []
ch11_ev_matrix = []
ch12_ev_matrix = []
ch13_ev_matrix = []
ch14_ev_matrix = []
ch15_ev_matrix = []
if set_all_MIDI_patches_to_piano:
output = [output_ticks, [['track_name', 0, b'Meddleying MAESTRO']]]
else:
output = [output_ticks,
[['track_name', 0, b'Meddleying MAESTRO'],
['patch_change', 0, 0, MIDI_channel_patch_00],
['patch_change', 0, 1, MIDI_channel_patch_01],
['patch_change', 0, 2, MIDI_channel_patch_02],
['patch_change', 0, 3, MIDI_channel_patch_03],
['patch_change', 0, 4, MIDI_channel_patch_04],
['patch_change', 0, 5, MIDI_channel_patch_05],
['patch_change', 0, 6, MIDI_channel_patch_06],
['patch_change', 0, 7, MIDI_channel_patch_07],
['patch_change', 0, 8, MIDI_channel_patch_08],
['patch_change', 0, 9, MIDI_channel_patch_09],
['patch_change', 0, 10, MIDI_channel_patch_10],
['patch_change', 0, 11, MIDI_channel_patch_11],
['patch_change', 0, 12, MIDI_channel_patch_12],
['patch_change', 0, 13, MIDI_channel_patch_13],
['patch_change', 0, 14, MIDI_channel_patch_14],
['patch_change', 0, 15, MIDI_channel_patch_15],]]
output1 = output
output_events_matrix = [['track_name', 0, b'Composition Track']]
output_events_matrix1 = [['track_name', 0, b'Composition Track']]
print('Prepping the dataset...')
print('Splitting the dataset into channels...')
for i in range(len(ev_matrix)):
if ev_matrix[i][3] == 0:
ch0_ev_matrix.append(ev_matrix[i])
if ev_matrix[i][3] == 1:
ch1_ev_matrix.append(ev_matrix[i])
if ev_matrix[i][3] == 2:
ch2_ev_matrix.append(ev_matrix[i])
if ev_matrix[i][3] == 3:
ch3_ev_matrix.append(ev_matrix[i])
if ev_matrix[i][3] == 4:
ch4_ev_matrix.append(ev_matrix[i])
if ev_matrix[i][3] == 5:
ch5_ev_matrix.append(ev_matrix[i])
if ev_matrix[i][3] == 6:
ch6_ev_matrix.append(ev_matrix[i])
if ev_matrix[i][3] == 7:
ch7_ev_matrix.append(ev_matrix[i])
if ev_matrix[i][3] == 8:
ch8_ev_matrix.append(ev_matrix[i])
if ev_matrix[i][3] == 9:
ch9_ev_matrix.append(ev_matrix[i])
if ev_matrix[i][3] == 10:
ch10_ev_matrix.append(ev_matrix[i])
if ev_matrix[i][3] == 11:
ch11_ev_matrix.append(ev_matrix[i])
if ev_matrix[i][3] == 12:
ch12_ev_matrix.append(ev_matrix[i])
if ev_matrix[i][3] == 13:
ch13_ev_matrix.append(ev_matrix[i])
if ev_matrix[i][3] == 14:
ch14_ev_matrix.append(ev_matrix[i])
if ev_matrix[i][3] == 15:
ch15_ev_matrix.append(ev_matrix[i])
print('Sorting channel events...')
ch0_ev_matrix.sort(key=lambda x: x[6])
ch1_ev_matrix.sort(key=lambda x: x[6])
ch2_ev_matrix.sort(key=lambda x: x[6])
ch3_ev_matrix.sort(key=lambda x: x[6])
ch4_ev_matrix.sort(key=lambda x: x[6])
ch5_ev_matrix.sort(key=lambda x: x[6])
ch6_ev_matrix.sort(key=lambda x: x[6])
ch7_ev_matrix.sort(key=lambda x: x[6])
ch8_ev_matrix.sort(key=lambda x: x[6])
ch9_ev_matrix.sort(key=lambda x: x[6])
ch10_ev_matrix.sort(key=lambda x: x[6])
ch11_ev_matrix.sort(key=lambda x: x[6])
ch12_ev_matrix.sort(key=lambda x: x[6])
ch13_ev_matrix.sort(key=lambda x: x[6])
ch14_ev_matrix.sort(key=lambda x: x[6])
ch15_ev_matrix.sort(key=lambda x: x[6])
print('Chordifying MIDI channels events...')
#ev_matrix = [j for i in zip(ch0_ev_matrix, ch1_ev_matrix) for j in i]
ev_matrix1 = [ch0_ev_matrix,
ch1_ev_matrix,
ch2_ev_matrix,
ch3_ev_matrix,
ch4_ev_matrix,
ch5_ev_matrix,
ch6_ev_matrix,
ch7_ev_matrix,
ch8_ev_matrix,
ch9_ev_matrix,
ch10_ev_matrix,
ch11_ev_matrix,
ch12_ev_matrix,
ch13_ev_matrix,
ch14_ev_matrix,
ch15_ev_matrix]
ev_matrix2 = [ele for ele in ev_matrix1 if ele != []]
ev_matrix = list(toolz.itertoolz.interleave(ev_matrix2))
print('Final sorting and notes list creation...')
ev_matrix.sort(key=lambda x: x[6])
not_matrix = [row[4] for row in ev_matrix]
if ctime > 0:
time = ctime
else:
time = 0
try:
if len(cev_matrix) != 0:
events_matrix = ev_matrix
notes_matrix = not_matrix
output_events_matrix = cev_matrix
start_note = cnotes_matrix[-1]
index = cindex
print('Priming_sequence: MIDI event:', cev_matrix[-1])
cev_matrix = 0
ctime = 0
else:
index = 0
flag = True
events_matrix = ev_matrix
notes_matrix = not_matrix
print('Priming_sequence: MIDI note #', [start_note])
index = not_matrix.index(start_note, secrets.choice(range(len(not_matrix))))
if start_with_random_introduction:
print('Trying to find a random intro for the composition...')
start = False
while start == False:
index = secrets.choice(range(len(ev_matrix)))
for i in range(index, (len(ev_matrix))):
event = ev_matrix[i]
if event[7] == 1:
start = True
index = event[6]
print('Success! Found a suitable composition introduction!')
break
except:
print('The Generator could not find the starting note in a dataset note sequence. Please adjust the parameters.')
print('Meanwhile, trying to generate a sequence with the MIDI note # [60]')
try:
index = not_matrix.index(60, secrets.choice(range(len(not_matrix))))
print('Trying a random starting note...')
index = not_matrix.index(secrets.randbelow(128), secrets.choice(range(len(not_matrix))))
except:
print('Unfortunately, that did not work either. Please try again/restart the run-time.')
sys.exit()
print('Final starting index:', index)
print('Beginning the pattern search and generation...')
if extra_match_slices != 'Notes Only':
print('Extra slices matching type requested:', extra_match_slices)
for i in tqdm.auto.tqdm(range(number_of_slices)):
block_events = []
block_notes = []
for i in range(notes_per_slice):
previous_event = ev_matrix[index-1+i]
event = ev_matrix[index+i]
next_event = ev_matrix[index+1+i]
if ev_matrix[index+i][4] > 0 and event[0] == 'note':
if relative_note_timings:
if previous_event[1] != event[1]:
time += abs(output_ticks - int((event[5] + ticks_per_note) * ticks_durations_multiplier))
else:
time += 0
else:
if previous_event[1] != event[1]:
time += abs(int(ticks_per_note * ticks_durations_multiplier))
else:
time += 0
ovent_a = ['note', int(time * notes_timings_multiplier), int(event[2] * notes_durations_multiplier), event[3], event[4] + transpose_composition, (int(event[5] * notes_velocities_multiplier) + transpose_velocity)]
output_events_matrix.append(ovent_a)
nts1 += 1
if i >= notes_per_slice - number_of_notes_to_match_slices:
block_events.append(ovent_a)
block_notes.append(ovent_a[4])
if debug: print(block_notes)
#how to add stuff...
if ev_matrix[index+i][4] == -1 and event[0] == 'text_event' or event[0] == 'lyric':
ovent_a = ['text_event', int(time), event[2]]
output_events_matrix.append(ovent_a)
block_events.append(ovent_a)
kar += 1
#this is it :)
found_pattern = False
for x in range(len(ev_matrix) - number_of_notes_to_match_slices - notes_per_slice):
z = 0
if ev_matrix[x][0] == 'note':
for y in range(len(block_events)):
if ev_matrix[x+y][0] == 'note' and block_events[y][0] == 'note':
if extra_match_slices == 'Full Match':
if block_events[y][3] == ev_matrix[x+y][3] and block_events[y][4] == ev_matrix[x+y][4] and block_events[y][2] == ev_matrix[x+y][2] and block_events[y][5] == ev_matrix[x+y][5]:
z += 1
nts += 1
continue
if extra_match_slices == 'Notes Only':
if block_events[y][4] == ev_matrix[x+y][4]:
z += 1
nts += 1
continue
if extra_match_slices == 'Durations':
if block_events[y][2] == ev_matrix[x+y][2]:
z += 1
nts += 1
continue
if extra_match_slices == 'Velocities':
if block_events[y][5] == ev_matrix[x+y][5]:
z += 1
nts += 1
continue
if extra_match_slices == 'Channels':
if block_events[y][3] == ev_matrix[x+y][3]:
z += 1
nts += 1
if z == len(block_events):
end_index = x + y
found_pattern = True
break
if debug: print('End Index', end_index)
if end_index != 0:
index = end_index
dts += 1
output += [output_events_matrix]
midi_data = MIDI.opus2midi(MIDI.score2opus(output))
if not relative_note_timings:
    with open('output-absolute.mid', 'wb') as midi_file1:
        midi_file1.write(midi_data)
else:
    with open('output-relative.mid', 'wb') as midi_file1:
        midi_file1.write(midi_data)
print('Done! Crunching quick stats...')
print('First Note:', output[2][1], '=== Last Note:', output[2][-1])
print('MIDI Stats:', MIDI.score2stats(output))
print('The dataset was scanned', dts, 'times.')
print('Examined', nts, 'notes from the dataset.')
print('Generated notes total:', nts1, 'out of expected', len(output[2]) - len(cnotes_matrix) - 1, 'MIDI events...')
if try_karaoke: print('Generated', kar, 'Karaoke events.')
print('Task complete!')
print('Downloading your MIDI now :) Enjoy!')
from google.colab import files
if not relative_note_timings:
files.download('/content/output-absolute.mid')
else:
files.download('/content/output-relative.mid')
#files.download('/content/output-relative.mid')
print('Enjoy! :)')
```
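The heart of the generation cell above is the continuation search: the last few notes of the current slice are matched against the source event matrix, and generation resumes from the match point. A stripped-down sketch of that idea (`find_continuation` is a hypothetical helper, assuming MIDI.py-style `['note', time, duration, channel, pitch, velocity]` events; the notebook's actual loop also supports duration/velocity/channel matching modes):

```python
def find_continuation(ev_matrix, block_notes):
    """Scan ev_matrix for a run of 'note' events whose pitches match block_notes;
    return the index of the last matched event, or 0 if no match is found."""
    n = len(block_notes)
    for x in range(len(ev_matrix) - n):
        window = ev_matrix[x:x + n]
        # every event in the window must be a note with the expected pitch
        if all(e[0] == 'note' and e[4] == p for e, p in zip(window, block_notes)):
            return x + n - 1  # resume generating from the end of the matched run
    return 0

events = [['note', t, 10, 0, p, 90] for t, p in enumerate([60, 62, 64, 60, 62, 64, 67])]
print(find_continuation(events, [62, 64]))  # → 2
```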
# Fun MIR stats
```
#@title Basic statistical analysis of the output MIDI file
MIDI_DIR = "/content/*.mid"
### https://github.com/brennan2602/FYP
def get_piano_roll(midifile):
midi_pretty_format = pretty_midi.PrettyMIDI(midifile)
piano_midi = midi_pretty_format.instruments[0] # Get the piano channels
piano_roll = piano_midi.get_piano_roll(fs=20)
return piano_roll
# Uses the split encoding scheme (here only encoding the note values).
# Works by looping through the time increments of the piano-roll array and writing
# the notes being played at a given time sample as numbers on the corresponding line
# of a string; a "#" is written when no notes are played for that sample.
def encode(arr):
    outString = ""
    for time in arr:
        if np.all(time == 0):
            outString = outString + "#"  # silent sample
        for note, vel in enumerate(time):
            if vel != 0:
                outString = outString + str(note) + " "
        outString = outString + "\n"
    return outString
def getSilences(test):
    test = test[:-1]  # removing the last line in the string (always blank)
    output = test.split("\n")  # splitting into an array
    # initialising counters
    maxcounter = 0
    counter = 0
    silenceCount = 0
    for x in output:
        if x == "#":  # when a "#" is seen, nothing is being played in that sample
            counter = counter + 1  # this tracks a streak of silences
            silenceCount += 1  # this tracks total silences
        else:
            counter = 0  # resetting the streak
        if counter > maxcounter:
            maxcounter = counter  # updating the longest silence streak when appropriate
    return maxcounter, silenceCount
# by looking at the length of the song and the number of silent samples, this returns the fraction of silence
def getPercentSilence(gen, silences):
    test = gen[:-1]  # removing the last (always blank) line
    output = test.split("\n")
    return silences / len(output)
def getStatsNotes(test):
    test = test[:-1]  # get rid of the blank line at the end
    output = test.split("\n")  # split the string on new lines
    # initial values, updated while looping through
    silenceSamp = 0
    notesPlayed = 0
    maxNotes = 0
    maxVal = 0
    minVal = 127
    for x in output:
        samp = x.split(" ")
        samp = samp[:-1]  # split leaves a blank entry at the end of the array; this removes it
        while "0" in samp:
            samp.remove("0")  # sometimes 0 samples exist; they aren't notes played, so remove them
        if len(samp) == 0:
            silenceSamp += 1  # counting silences
        notesPlayed = notesPlayed + len(samp)  # counting notes played
        if len(samp) > 0:
            # max and min note values at this time step (compared as integers, not strings)
            minimum = min(samp, key=int)
            maximum = max(samp, key=int)
            # updating the song's max and min note values when appropriate
            if int(minimum) < minVal:
                minVal = int(minimum)
            if int(maximum) > maxVal:
                maxVal = int(maximum)
            # updating the maximum number of notes per sample when appropriate
            if len(samp) > maxNotes:
                maxNotes = len(samp)
    rangeNotes = maxVal - minVal  # spread of notes
    avgNotes = notesPlayed / len(output)  # average notes per sample
    adjNotes = notesPlayed / max(1, len(output) - silenceSamp)  # average notes per sample, excluding silent samples
    return rangeNotes, maxVal, minVal, maxNotes, avgNotes, adjNotes
files=glob.glob(MIDI_DIR)#point towards directory with midi files (here same folder)
#print(files)
for f in files:
    print(f)
    pr = get_piano_roll(f)  # piano-roll representation of the MIDI file
    arr = pr.T
    outString = encode(arr)  # string representation of the MIDI file
    # the encoded string yields the longest silence streak and the total number of silent samples
    maxsilences, silences = getSilences(outString)
    noteRange, maxVal, minVal, maxNotes, avgNotes, adjAvg = getStatsNotes(outString)
    percentSilence = getPercentSilence(outString, silences)  # fraction of samples that are silent
    # printing out to the user
    print("longest silence is", maxsilences, "samples long")
    print("silence covers:", round(percentSilence * 100, 4), "%")
    print("notes span range:", noteRange)
    print("max note value:", maxVal)
    print("min note value:", minVal)
    print("average number of notes per sample:", round(avgNotes, 4))
    print("average number of notes per sample (adjusted to remove silence samples):", round(adjAvg, 4))
    print("max number of notes played in a sample:", maxNotes)
    print("\n")
# NOTE: there are some minor discrepancies vs reading in from the generated file directly.
# However, this does provide a uniform check to use for songs generated by both encoding schemes.
# It can also be used to evaluate the training file.
# Uses the split encoding to get the text representation, for ease of development.
#@title Basic graph of the last output
seconds_to_show = 30 #@param {type:"slider", min:1, max:180, step:1}
show_whole_track = False #@param {type:"boolean"}
graph_color = "red" #@param ["blue", "red", "green"]
x = []
y = []
t = 0
itrack = 1
fig = plt.figure(figsize=(12, 5))
while itrack < len(output1):
    for event in output1[itrack]:
        if event[0] == 'note':
            y.append(event[4])
            x.append(t)
            t += 0.25
            if not show_whole_track and t >= seconds_to_show:
                break
    itrack += 1
plt.plot(x, y, color=graph_color)
plt.show()
#@title Output MIDI bokeh plot
preset = Preset(plot_width=850)
plotter = Plotter(preset, plot_max_length_bar=4)
if not relative_note_timings:
pm = PrettyMIDI("/content/output-absolute.mid")
else:
pm = PrettyMIDI("/content/output-relative.mid")
plotter.show_notebook(pm)
#@title Bonus Music21 Graphs (slow)
display_relative_output = True
histogram_pitchSpace_count = False #@param {type:"boolean"}
histogram_pitchClass_count = False #@param {type:"boolean"}
Windowed_Keys = False #@param {type:"boolean"}
scatter_quarterLength_pitchSpace = True #@param {type:"boolean"}
quarterLength_3DBars_pitchSpace_count = False #@param {type:"boolean"}
if relative_note_timings:
s = converter.parse("/content/output-relative.mid")
else:
s = converter.parse("/content/output-absolute.mid")
p = 0
if histogram_pitchSpace_count:
    p = music21.graph.plot.HistogramPitchSpace(s)
    p.run()  # with defaults and proper configuration, will open the graph
if histogram_pitchClass_count:
    p = graph.plot.HistogramPitchClass(s)
    p.run()
if Windowed_Keys:
    p = graph.plot.WindowedKey(s.parts[0])
    p.run()
if scatter_quarterLength_pitchSpace:
    p = graph.plot.ScatterPitchSpaceQuarterLength(s)
    p.run()
if quarterLength_3DBars_pitchSpace_count:
    p = graph.plot.Plot3DBarsPitchSpaceQuarterLength(s)
    p.run()
if p == 0:
    print('Please select at least one graph to plot :)')
```
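The `#`-based text encoding used by the statistics cell above is easiest to see on a tiny piano-roll. Below is a minimal, self-contained re-sketch of the `encode` idea (the notebook's version lives inside the cell; variable names here are illustrative): each time sample becomes one line, listing the note numbers with nonzero velocity, or `#` when the sample is silent.

```python
import numpy as np

def encode(arr):
    # one line per time sample: sounding note numbers, or '#' for silence
    out = ""
    for sample in arr:
        if np.all(sample == 0):
            out += "#"
        for note, vel in enumerate(sample):
            if vel != 0:
                out += f"{note} "
        out += "\n"
    return out

roll = np.zeros((3, 128))   # 3 time samples, 128 MIDI pitches
roll[0, 60] = 100           # C4 sounding at t=0
roll[2, 64] = 90            # E4 sounding at t=2
print(repr(encode(roll)))   # → '60 \n#\n64 \n'
```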
# Congrats! :) You did it :)
```
#@title Make a nice Arc diagram of the output to show friends and family :)
MIDI_file_track_to_visualize = 1 #@param {type:"number"}
multi_track_input = True
midi_file = '/content/output-absolute.mid'
plot_title = "Meddleying MAESTRO Output Arc Diagram"
def maximal_matching_pair(s, substring_length, old_index=-1):
    '''
    find the first pair of matching substrings at least as long as the specified length
    '''
    if substring_length > len(s) / 2:
        return (substring_length, -1)  # fail: futile to keep searching with this string
    head = s[:substring_length]
    tail = s[substring_length:]
    index = tail.find(head)
    if index == -1:
        if substring_length > 2:
            return (substring_length - 1, old_index)  # success
        return (substring_length, index)  # fail: failed on the first 2-character substring attempt
    return maximal_matching_pair(s, substring_length + 1, index)  # keep looking
def first_matching_substring_pair( s, start=0 ):
'''
returns the first matching substring pair of at least length 2 in the given string,
ignoring all characters of the string before the given start index
'''
if start < 0:
return () # invalid input: start must be non-negative
if len(s[start:]) < 4:
return () # fail: string too short to find matching substrings of minimal length 2
minimal_substring_length = 2
(length, distance) = maximal_matching_pair(s[start:], minimal_substring_length)
if distance != -1:
return (start, length, distance) # success
return first_matching_substring_pair(s, start+1) # keep looking
def matching_substring_pairs( string ):
'''
returns a collection of consecutive substring pairs encoded as (start, length, distance) where
* start is the index of the first character of the first substring of the matching substring pair,
* length is the length of the substrings in the matching substring pair, and
* distance is the distance from the end of the first substring to the beginning of the second substring
'''
pairs = []
pair = first_matching_substring_pair(string, 0)
while pair:
pairs.append(pair)
(start, length, distance) = pair
pair = first_matching_substring_pair(string, start+length)
return pairs
def plot_arc_diagram( string, plot_title="" ):
slds = matching_substring_pairs(string)
bews = map( lambda sld: (sld[0], sum(sld)+sld[1], sld[1]), slds )
plot_arc_diagram_impl(bews, plot_title)
# begin end
# / /
# ***********-----------O-----------***********
# |--width--| \ |--width--|
# |-inner rad-| \
# |-----outer radius----| center
def plot_ring( ax, begin, end, width ):
cx = 0.5*(begin + end)
center = (cx, 0)
outer_radius = cx - begin
inner_radius = outer_radius - width
mypie, _ = ax.pie([1], radius=outer_radius, colors=[(0.4,0.4, 1.0, 0.3)], center=center )
plt.setp( mypie, width=width)
return outer_radius
def plot_arc_diagram_impl( bews, plot_title ):
fig, ax = plt.subplots(subplot_kw={'aspect': 'auto'})
x_min = 0
x_max = 1920
max_width = 1080
for bew in bews:
x_max = max(x_max, bew[1])
orad = plot_ring(ax, bew[0], bew[1], bew[2])
max_width = max(max_width, orad)
ax.set_xlim(x_min, x_max)
ax.set_ylim( -max_width, max_width)
plt.axis('off')
title_obj = plt.title(plot_title, loc='center')
plt.setp(title_obj, color=(0.0, 0.0, 0.0, 1))
plt.savefig('/content/output.png', dpi=600)
plt.show()
def stringify_notes(midi_file, track_number):
mid = MidiFile(midi_file)
track_notes = {}
for i, track in enumerate(mid.tracks):
track_notes[i] = ''
for msg in track:
if( msg.type == 'note_on'):
track_notes[i] += str(msg.note) +'n'
if( msg.type == 'note_off'):
track_notes[i] += str(msg.note) +'f'
return track_notes[track_number]
if multi_track_input:
    try:
        plot_arc_diagram(stringify_notes(midi_file, MIDI_file_track_to_visualize), plot_title)
        if debug:
            print('Debug mode')
            print('MIDI Track #', MIDI_file_track_to_visualize, 'Arc Diagram')
            Image('/content/output.png')  # the diagram is saved as /content/output.png above
    except:
        print('Error in processing your MIDI file. Sorry.')
        sys.exit()
from google.colab import files
files.download('/content/output.png')
```
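For reference, `stringify_notes` above encodes each note-on as `<note>n` and each note-off as `<note>f`, so a repeated musical phrase becomes a repeated substring that the arc-diagram matcher can detect. A minimal standalone sketch of that encoding (hypothetical helper operating on plain `(type, note)` pairs rather than mido messages):

```python
def stringify(messages):
    """Encode (type, note) pairs as '<note>n' / '<note>f' tokens, mirroring stringify_notes."""
    s = ""
    for msg_type, note in messages:
        if msg_type == 'note_on':
            s += str(note) + 'n'
        elif msg_type == 'note_off':
            s += str(note) + 'f'
    return s

phrase = [('note_on', 60), ('note_off', 60), ('note_on', 60), ('note_off', 60)]
print(stringify(phrase))  # → '60n60f60n60f' (the repeated '60n60f' shows up as an arc)
```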
# MIDI Patch Numbers Reference Chart
***
## General MIDI Level 1 Instrument Families
### The General MIDI Level 1 instrument sounds are grouped by families; each family contains 8 specific instruments.
https://www.midi.org/specifications-old/item/gm-level-1-sound-set
***
## Families
| PC # | Family Name |
|-|-|
| 1-8 | Piano |
| 9-16 | Chromatic Percussion |
| 17-24 | Organ |
| 25-32 | Guitar |
| 33-40 | Bass |
| 41-48 | Strings |
| 49-56 | Ensemble |
| 57-64 | Brass |
| 65-72 | Reed |
| 73-80 | Pipe |
| 81-88 | Synth Lead |
| 89-96 | Synth Pad |
| 97-104 | Synth Effects |
| 105-112 | Ethnic |
| 113-120 | Percussive |
| 121-128 | Sound Effects |
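Since the families fall in fixed blocks of 8, the family for any 1-based program number can be derived arithmetically. A minimal sketch (the `GM_FAMILIES` list is transcribed from the table above):

```python
GM_FAMILIES = [
    "Piano", "Chromatic Percussion", "Organ", "Guitar", "Bass", "Strings",
    "Ensemble", "Brass", "Reed", "Pipe", "Synth Lead", "Synth Pad",
    "Synth Effects", "Ethnic", "Percussive", "Sound Effects",
]

def gm_family(pc_number):
    """Map a 1-based General MIDI program number (1-128) to its family name."""
    if not 1 <= pc_number <= 128:
        raise ValueError("PC number must be in 1..128")
    return GM_FAMILIES[(pc_number - 1) // 8]

print(gm_family(41))   # → Strings
print(gm_family(128))  # → Sound Effects
```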
***
Note: While GM1 does not define the actual characteristics of any sounds, the names in parentheses after each of the synth leads, pads, and sound effects are intended only as guides.
***
### PC # Instrument Name
#### Subtract 1 from the instrument number below to get the 0-based MIDI patch number to use
1. Acoustic Grand Piano
2. Bright Acoustic Piano
3. Electric Grand Piano
4. Honky-tonk Piano
5. Electric Piano 1
6. Electric Piano 2
7. Harpsichord
8. Clavi
9. Celesta
10. Glockenspiel
11. Music Box
12. Vibraphone
13. Marimba
14. Xylophone
15. Tubular Bells
16. Dulcimer
17. Drawbar Organ
18. Percussive Organ
19. Rock Organ
20. Church Organ
21. Reed Organ
22. Accordion
23. Harmonica
24. Tango Accordion
25. Acoustic Guitar (nylon)
26. Acoustic Guitar (steel)
27. Electric Guitar (jazz)
28. Electric Guitar (clean)
29. Electric Guitar (muted)
30. Overdriven Guitar
31. Distortion Guitar
32. Guitar harmonics
33. Acoustic Bass
34. Electric Bass (finger)
35. Electric Bass (pick)
36. Fretless Bass
37. Slap Bass 1
38. Slap Bass 2
39. Synth Bass 1
40. Synth Bass 2
41. Violin
42. Viola
43. Cello
44. Contrabass
45. Tremolo Strings
46. Pizzicato Strings
47. Orchestral Harp
48. Timpani
49. String Ensemble 1
50. String Ensemble 2
51. SynthStrings 1
52. SynthStrings 2
53. Choir Aahs
54. Voice Oohs
55. Synth Voice
56. Orchestra Hit
57. Trumpet
58. Trombone
59. Tuba
60. Muted Trumpet
61. French Horn
62. Brass Section
63. SynthBrass 1
64. SynthBrass 2
65. Soprano Sax
66. Alto Sax
67. Tenor Sax
68. Baritone Sax
69. Oboe
70. English Horn
71. Bassoon
72. Clarinet
73. Piccolo
74. Flute
75. Recorder
76. Pan Flute
77. Blown Bottle
78. Shakuhachi
79. Whistle
80. Ocarina
81. Lead 1 (square)
82. Lead 2 (sawtooth)
83. Lead 3 (calliope)
84. Lead 4 (chiff)
85. Lead 5 (charang)
86. Lead 6 (voice)
87. Lead 7 (fifths)
88. Lead 8 (bass + lead)
89. Pad 1 (new age)
90. Pad 2 (warm)
91. Pad 3 (polysynth)
92. Pad 4 (choir)
93. Pad 5 (bowed)
94. Pad 6 (metallic)
95. Pad 7 (halo)
96. Pad 8 (sweep)
97. FX 1 (rain)
98. FX 2 (soundtrack)
99. FX 3 (crystal)
100. FX 4 (atmosphere)
101. FX 5 (brightness)
102. FX 6 (goblins)
103. FX 7 (echoes)
104. FX 8 (sci-fi)
105. Sitar
106. Banjo
107. Shamisen
108. Koto
109. Kalimba
110. Bag pipe
111. Fiddle
112. Shanai
113. Tinkle Bell
114. Agogo
115. Steel Drums
116. Woodblock
117. Taiko Drum
118. Melodic Tom
119. Synth Drum
120. Reverse Cymbal
121. Guitar Fret Noise
122. Breath Noise
123. Seashore
124. Bird Tweet
125. Telephone Ring
126. Helicopter
127. Applause
128. Gunshot
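The "subtract 1" note above is worth making concrete: the list is 1-based, but `patch_change` events (see the MIDI.py event list further down) use 0-based patch numbers. A trivial hedged helper illustrating the conversion:

```python
def instrument_to_patch(instrument_number):
    """Convert a 1-based GM instrument number (1-128) to a 0-based MIDI patch number (0-127)."""
    if not 1 <= instrument_number <= 128:
        raise ValueError("instrument number must be in 1..128")
    return instrument_number - 1

# e.g. instrument 41 (Violin) is sent as patch 40 in a ['patch_change', dtime, channel, 40] event
print(instrument_to_patch(41))  # → 40
```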
***
# MIDI.py Version 6.7 by Peter Billam (just in case)
***
```
#@title MIDI.py Module
#! /usr/bin/python3
# unsupported 20091104 ...
# ['set_sequence_number', dtime, sequence]
# ['raw_data', dtime, raw]
# 20150914 jimbo1qaz MIDI.py str/bytes bug report
# I found a MIDI file which had Shift-JIS titles. When midi.py decodes it as
# latin-1, it produces a string which cannot even be accessed without raising
# a UnicodeDecodeError. Maybe, when converting raw byte strings from MIDI,
# you should keep them as bytes, not improperly decode them. However, this
# would change the API. (ie: text = a "string" ? of 0 or more bytes). It
# could break compatibility, but there's not much else you can do to fix the bug
# https://en.wikipedia.org/wiki/Shift_JIS
r'''
This module offers functions: concatenate_scores(), grep(),
merge_scores(), mix_scores(), midi2opus(), midi2score(), opus2midi(),
opus2score(), play_score(), score2midi(), score2opus(), score2stats(),
score_type(), segment(), timeshift() and to_millisecs(),
where "midi" means the MIDI-file bytes (as can be put in a .mid file,
or piped into aplaymidi), and "opus" and "score" are list-structures
as inspired by Sean Burke's MIDI-Perl CPAN module.
Warning: Version 6.4 is not necessarily backward-compatible with
previous versions, in that text-data is now bytes, not strings.
This reflects the fact that many MIDI files have text data in
encodings other than ISO-8859-1, for example in Shift-JIS.
Download MIDI.py from http://www.pjb.com.au/midi/free/MIDI.py
and put it in your PYTHONPATH. MIDI.py depends on Python3.
There is also a call-compatible translation into Lua of this
module: see http://www.pjb.com.au/comp/lua/MIDI.html
The "opus" is a direct translation of the midi-file-events, where
the times are delta-times, in ticks, since the previous event.
The "score" is more human-centric; it uses absolute times, and
combines the separate note_on and note_off events into one "note"
event, with a duration:
['note', start_time, duration, channel, note, velocity] # in a "score"
EVENTS (in an "opus" structure)
['note_off', dtime, channel, note, velocity] # in an "opus"
['note_on', dtime, channel, note, velocity] # in an "opus"
['key_after_touch', dtime, channel, note, velocity]
['control_change', dtime, channel, controller(0-127), value(0-127)]
['patch_change', dtime, channel, patch]
['channel_after_touch', dtime, channel, velocity]
['pitch_wheel_change', dtime, channel, pitch_wheel]
['text_event', dtime, text]
['copyright_text_event', dtime, text]
['track_name', dtime, text]
['instrument_name', dtime, text]
['lyric', dtime, text]
['marker', dtime, text]
['cue_point', dtime, text]
['text_event_08', dtime, text]
['text_event_09', dtime, text]
['text_event_0a', dtime, text]
['text_event_0b', dtime, text]
['text_event_0c', dtime, text]
['text_event_0d', dtime, text]
['text_event_0e', dtime, text]
['text_event_0f', dtime, text]
['end_track', dtime]
['set_tempo', dtime, tempo]
['smpte_offset', dtime, hr, mn, se, fr, ff]
['time_signature', dtime, nn, dd, cc, bb]
['key_signature', dtime, sf, mi]
['sequencer_specific', dtime, raw]
['raw_meta_event', dtime, command(0-255), raw]
['sysex_f0', dtime, raw]
['sysex_f7', dtime, raw]
['song_position', dtime, song_pos]
['song_select', dtime, song_number]
['tune_request', dtime]
DATA TYPES
channel = a value 0 to 15
controller = 0 to 127 (see http://www.pjb.com.au/muscript/gm.html#cc )
dtime = time measured in "ticks", 0 to 268435455
velocity = a value 0 (soft) to 127 (loud)
note = a value 0 to 127 (middle-C is 60)
patch = 0 to 127 (see http://www.pjb.com.au/muscript/gm.html )
pitch_wheel = a value -8192 to 8191 (0x1FFF)
raw = bytes, of length 0 or more (for sysex events see below)
sequence_number = a value 0 to 65,535 (0xFFFF)
song_pos = a value 0 to 16,383 (0x3FFF)
song_number = a value 0 to 127
tempo = microseconds per crochet (quarter-note), 0 to 16777215
text = bytes, of length 0 or more
ticks = the number of ticks per crochet (quarter-note)
In sysex_f0 events, the raw data must not start with a \xF0 byte,
since this gets added automatically;
but it must end with an explicit \xF7 byte!
In the very unlikely case that you ever need to split sysex data
into one sysex_f0 followed by one or more sysex_f7s, then only the
last of those sysex_f7 events must end with the explicit \xF7 byte
(again, the raw data of individual sysex_f7 events must not start
with any \xF7 byte, since this gets added automatically).
Since version 6.4, text data is in bytes, not in a ISO-8859-1 string.
GOING THROUGH A SCORE WITHIN A PYTHON PROGRAM
channels = {2,3,5,8,13}
itrack = 1 # skip 1st element which is ticks
while itrack < len(score):
for event in score[itrack]:
if event[0] == 'note': # for example,
pass # do something to all notes
# or, to work on events in only particular channels...
channel_index = MIDI.Event2channelindex.get(event[0], False)
if channel_index and (event[channel_index] in channels):
pass # do something to channels 2,3,5,8 and 13
itrack += 1
'''
import sys, struct, copy
# sys.stdout = os.fdopen(sys.stdout.fileno(), 'wb')
Version = '6.7'
VersionDate = '20201120'
# 20201120 6.7 call to bytest() removed, and protect _unshift_ber_int
# 20160702 6.6 to_millisecs() now handles set_tempo across multiple Tracks
# 20150921 6.5 segment restores controllers as well as patch and tempo
# 20150914 6.4 text data is bytes or bytearray, not ISO-8859-1 strings
# 20150628 6.3 absent any set_tempo, default is 120bpm (see MIDI file spec 1.1)
# 20150101 6.2 all text events can be 8-bit; let user get the right encoding
# 20141231 6.1 fix _some_text_event; sequencer_specific data can be 8-bit
# 20141230 6.0 synth_specific data can be 8-bit
# 20120504 5.9 add the contents of mid_opus_tracks()
# 20120208 5.8 fix num_notes_by_channel() ; should be a dict
# 20120129 5.7 _encode handles empty tracks; score2stats num_notes_by_channel
# 20111111 5.6 fix patch 45 and 46 in Number2patch, should be Harp
# 20110129 5.5 add mix_opus_tracks() and event2alsaseq()
# 20110126 5.4 "previous message repeated N times" to save space on stderr
# 20110125 5.2 opus2score terminates unended notes at the end of the track
# 20110124 5.1 the warnings in midi2opus display track_num
# 21110122 5.0 if garbage, midi2opus returns the opus so far
# 21110119 4.9 non-ascii chars stripped out of the text_events
# 21110110 4.8 note_on with velocity=0 treated as a note-off
# 21110108 4.6 unknown F-series event correctly eats just one byte
# 21011010 4.2 segment() uses start_time, end_time named params
# 21011005 4.1 timeshift() must not pad the set_tempo command
# 21011003 4.0 pitch2note_event must be chapitch2note_event
# 21010918 3.9 set_sequence_number supported, FWIW
# 20100913 3.7 many small bugfixes; passes all tests
# 20100910 3.6 concatenate_scores enforce ticks=1000, just like merge_scores
# 20100908 3.5 minor bugs fixed in score2stats
# 20091104 3.4 tune_request now supported
# 20091104 3.3 fixed bug in decoding song_position and song_select
# 20091104 3.2 unsupported: set_sequence_number tune_request raw_data
# 20091101 3.1 document how to traverse a score within Python
# 20091021 3.0 fixed bug in score2stats detecting GM-mode = 0
# 20091020 2.9 score2stats reports GM-mode and bank msb,lsb events
# 20091019 2.8 in merge_scores, channel 9 must remain channel 9 (in GM)
# 20091018 2.7 handles empty tracks gracefully
# 20091015 2.6 grep() selects channels
# 20091010 2.5 merge_scores reassigns channels to avoid conflicts
# 20091010 2.4 fixed bug in to_millisecs which now only does opusses
# 20091010 2.3 score2stats returns channels & patch_changes, by_track & total
# 20091010 2.2 score2stats() returns also pitches and percussion dicts
# 20091010 2.1 bugs: >= not > in segment, to notice patch_change at time 0
# 20091010 2.0 bugs: spurious pop(0) ( in _decode sysex
# 20091008 1.9 bugs: ISO decoding in sysex; str( not int( in note-off warning
# 20091008 1.8 add concatenate_scores()
# 20091006 1.7 score2stats() measures nticks and ticks_per_quarter
# 20091004 1.6 first mix_scores() and merge_scores()
# 20090424 1.5 timeshift() bugfix: earliest only sees events after from_time
# 20090330 1.4 timeshift() has also a from_time argument
# 20090322 1.3 timeshift() has also a start_time argument
# 20090319 1.2 add segment() and timeshift()
# 20090301 1.1 add to_millisecs()
_previous_warning = '' # 5.4
_previous_times = 0 # 5.4
#------------------------------- Encoding stuff --------------------------
def opus2midi(opus=[]):
r'''The argument is a list: the first item in the list is the "ticks"
parameter, the others are the tracks. Each track is a list
of midi-events, and each event is itself a list; see above.
opus2midi() returns a bytestring of the MIDI, which can then be
written either to a file opened in binary mode (mode='wb'),
or to stdout by means of: sys.stdout.buffer.write()
my_opus = [
96,
[ # track 0:
['patch_change', 0, 1, 8], # and these are the events...
['note_on', 5, 1, 25, 96],
['note_off', 96, 1, 25, 0],
['note_on', 0, 1, 29, 96],
['note_off', 96, 1, 29, 0],
], # end of track 0
]
my_midi = opus2midi(my_opus)
sys.stdout.buffer.write(my_midi)
'''
if len(opus) < 2:
opus=[1000, [],]
tracks = copy.deepcopy(opus)
ticks = int(tracks.pop(0))
ntracks = len(tracks)
if ntracks == 1:
format = 0
else:
format = 1
my_midi = b"MThd\x00\x00\x00\x06"+struct.pack('>HHH',format,ntracks,ticks)
for track in tracks:
events = _encode(track)
my_midi += b'MTrk' + struct.pack('>I',len(events)) + events
_clean_up_warnings()
return my_midi
def score2opus(score=None):
r'''
The argument is a list: the first item in the list is the "ticks"
parameter, the others are the tracks. Each track is a list
of score-events, and each event is itself a list. A score-event
is similar to an opus-event (see above), except that in a score:
1) the times are expressed as an absolute number of ticks
from the track's start time
2) the pairs of 'note_on' and 'note_off' events in an "opus"
are abstracted into a single 'note' event in a "score":
['note', start_time, duration, channel, pitch, velocity]
score2opus() returns a list specifying the equivalent "opus".
my_score = [
96,
[ # track 0:
['patch_change', 0, 1, 8],
['note', 5, 96, 1, 25, 96],
['note', 101, 96, 1, 29, 96]
], # end of track 0
]
my_opus = score2opus(my_score)
'''
if len(score) < 2:
score=[1000, [],]
tracks = copy.deepcopy(score)
ticks = int(tracks.pop(0))
opus_tracks = []
for scoretrack in tracks:
time2events = dict([])
for scoreevent in scoretrack:
if scoreevent[0] == 'note':
note_on_event = ['note_on',scoreevent[1],
scoreevent[3],scoreevent[4],scoreevent[5]]
note_off_event = ['note_off',scoreevent[1]+scoreevent[2],
scoreevent[3],scoreevent[4],scoreevent[5]]
if time2events.get(note_on_event[1]):
time2events[note_on_event[1]].append(note_on_event)
else:
time2events[note_on_event[1]] = [note_on_event,]
if time2events.get(note_off_event[1]):
time2events[note_off_event[1]].append(note_off_event)
else:
time2events[note_off_event[1]] = [note_off_event,]
continue
if time2events.get(scoreevent[1]):
time2events[scoreevent[1]].append(scoreevent)
else:
time2events[scoreevent[1]] = [scoreevent,]
sorted_times = [] # list of keys
for k in time2events.keys():
sorted_times.append(k)
sorted_times.sort()
sorted_events = [] # once-flattened list of values sorted by key
for time in sorted_times:
sorted_events.extend(time2events[time])
abs_time = 0
for event in sorted_events: # convert abs times => delta times
delta_time = event[1] - abs_time
abs_time = event[1]
event[1] = delta_time
opus_tracks.append(sorted_events)
opus_tracks.insert(0,ticks)
_clean_up_warnings()
return opus_tracks
def score2midi(score=None):
r'''
Translates a "score" into MIDI, using score2opus() then opus2midi()
'''
return opus2midi(score2opus(score))
#--------------------------- Decoding stuff ------------------------
def midi2opus(midi=b''):
r'''Translates MIDI into a "opus". For a description of the
"opus" format, see opus2midi()
'''
my_midi=bytearray(midi)
if len(my_midi) < 4:
_clean_up_warnings()
return [1000,[],]
id = bytes(my_midi[0:4])
if id != b'MThd':
_warn("midi2opus: midi starts with "+str(id)+" instead of 'MThd'")
_clean_up_warnings()
return [1000,[],]
[length, format, tracks_expected, ticks] = struct.unpack(
'>IHHH', bytes(my_midi[4:14]))
if length != 6:
_warn("midi2opus: midi header length was "+str(length)+" instead of 6")
_clean_up_warnings()
return [1000,[],]
my_opus = [ticks,]
my_midi = my_midi[14:]
track_num = 1 # 5.1
while len(my_midi) >= 8:
track_type = bytes(my_midi[0:4])
if track_type != b'MTrk':
_warn('midi2opus: Warning: track #'+str(track_num)+' type is '+str(track_type)+" instead of b'MTrk'")
[track_length] = struct.unpack('>I', my_midi[4:8])
my_midi = my_midi[8:]
if track_length > len(my_midi):
_warn('midi2opus: track #'+str(track_num)+' length '+str(track_length)+' is too large')
_clean_up_warnings()
return my_opus # 5.0
my_midi_track = my_midi[0:track_length]
my_track = _decode(my_midi_track)
my_opus.append(my_track)
my_midi = my_midi[track_length:]
track_num += 1 # 5.1
_clean_up_warnings()
return my_opus
def opus2score(opus=[]):
r'''For a description of the "opus" and "score" formats,
see opus2midi() and score2opus().
'''
if len(opus) < 2:
_clean_up_warnings()
return [1000,[],]
tracks = copy.deepcopy(opus) # couple of slices probably quicker...
ticks = int(tracks.pop(0))
score = [ticks,]
for opus_track in tracks:
ticks_so_far = 0
score_track = []
chapitch2note_on_events = dict([]) # 4.0
for opus_event in opus_track:
ticks_so_far += opus_event[1]
if opus_event[0] == 'note_off' or (opus_event[0] == 'note_on' and opus_event[4] == 0): # 4.8
cha = opus_event[2]
pitch = opus_event[3]
key = cha*128 + pitch
if chapitch2note_on_events.get(key):
new_event = chapitch2note_on_events[key].pop(0)
new_event[2] = ticks_so_far - new_event[1]
score_track.append(new_event)
elif pitch > 127:
_warn('opus2score: note_off with no note_on, bad pitch='+str(pitch))
else:
_warn('opus2score: note_off with no note_on cha='+str(cha)+' pitch='+str(pitch))
elif opus_event[0] == 'note_on':
cha = opus_event[2]
pitch = opus_event[3]
key = cha*128 + pitch
new_event = ['note',ticks_so_far,0,cha,pitch, opus_event[4]]
if chapitch2note_on_events.get(key):
chapitch2note_on_events[key].append(new_event)
else:
chapitch2note_on_events[key] = [new_event,]
else:
opus_event[1] = ticks_so_far
score_track.append(opus_event)
# check for unterminated notes (Oisรญn) -- 5.2
for chapitch in chapitch2note_on_events:
note_on_events = chapitch2note_on_events[chapitch]
for new_e in note_on_events:
new_e[2] = ticks_so_far - new_e[1]
score_track.append(new_e)
_warn("opus2score: note_on with no note_off cha="+str(new_e[3])+' pitch='+str(new_e[4])+'; adding note_off at end')
score.append(score_track)
_clean_up_warnings()
return score
def midi2score(midi=b''):
r'''
Translates MIDI into a "score", using midi2opus() then opus2score()
'''
return opus2score(midi2opus(midi))
def midi2ms_score(midi=b''):
r'''
Translates MIDI into a "score" with one beat per second and one
tick per millisecond, using midi2opus() then to_millisecs()
then opus2score()
'''
return opus2score(to_millisecs(midi2opus(midi)))
#------------------------ Other Transformations ---------------------
def to_millisecs(old_opus=None):
    r'''Recalibrates all the times in an "opus" to use one beat
per second and one tick per millisecond. This makes it
hard to retrieve any information about beats or barlines,
but it does make it easy to mix different scores together.
'''
if old_opus == None:
return [1000,[],]
try:
old_tpq = int(old_opus[0])
except IndexError: # 5.0
_warn('to_millisecs: the opus '+str(type(old_opus))+' has no elements')
return [1000,[],]
new_opus = [1000,]
# 6.7 first go through building a table of set_tempos by absolute-tick
ticks2tempo = {}
itrack = 1
while itrack < len(old_opus):
ticks_so_far = 0
for old_event in old_opus[itrack]:
if old_event[0] == 'note':
raise TypeError('to_millisecs needs an opus, not a score')
ticks_so_far += old_event[1]
if old_event[0] == 'set_tempo':
ticks2tempo[ticks_so_far] = old_event[2]
itrack += 1
# then get the sorted-array of their keys
tempo_ticks = [] # list of keys
for k in ticks2tempo.keys():
tempo_ticks.append(k)
tempo_ticks.sort()
# then go through converting to millisec, testing if the next
# set_tempo lies before the next track-event, and using it if so.
itrack = 1
while itrack < len(old_opus):
ms_per_old_tick = 500.0 / old_tpq # float: will round later 6.3
i_tempo_ticks = 0
ticks_so_far = 0
ms_so_far = 0.0
previous_ms_so_far = 0.0
new_track = [['set_tempo',0,1000000],] # new "crochet" is 1 sec
for old_event in old_opus[itrack]:
# detect if ticks2tempo has something before this event
# 20160702 if ticks2tempo is at the same time, leave it
event_delta_ticks = old_event[1]
if (i_tempo_ticks < len(tempo_ticks) and
tempo_ticks[i_tempo_ticks] < (ticks_so_far + old_event[1])):
delta_ticks = tempo_ticks[i_tempo_ticks] - ticks_so_far
ms_so_far += (ms_per_old_tick * delta_ticks)
ticks_so_far = tempo_ticks[i_tempo_ticks]
ms_per_old_tick = ticks2tempo[ticks_so_far] / (1000.0*old_tpq)
i_tempo_ticks += 1
event_delta_ticks -= delta_ticks
new_event = copy.deepcopy(old_event) # now handle the new event
            ms_so_far += (ms_per_old_tick * event_delta_ticks)
new_event[1] = round(ms_so_far - previous_ms_so_far)
if old_event[0] != 'set_tempo':
previous_ms_so_far = ms_so_far
new_track.append(new_event)
ticks_so_far += event_delta_ticks
new_opus.append(new_track)
itrack += 1
_clean_up_warnings()
return new_opus
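to_millisecs() applies one piece of arithmetic piecewise between set_tempo events: a delta in ticks becomes a delta in milliseconds via the current tempo (microseconds per quarter-note) and the file's ticks-per-quarter. A minimal standalone sketch of the single-tempo case (hypothetical helper name, not part of this module):

```python
def ticks_to_ms(delta_ticks, us_per_beat, ticks_per_beat):
    # us_per_beat / 1000 is milliseconds per quarter-note;
    # delta_ticks / ticks_per_beat is the fraction of a quarter-note
    return delta_ticks * us_per_beat / (1000.0 * ticks_per_beat)

# At the default MIDI tempo (500000 us/beat, i.e. 120 bpm) with 96
# ticks per quarter-note, one quarter-note lasts 500 ms:
quarter_ms = ticks_to_ms(96, 500000, 96)
```

to_millisecs() then fixes the new "tempo" at 1000000 us/beat, so that one new tick equals one millisecond.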
def event2alsaseq(event=None): # 5.5
r'''Converts an event into the format needed by the alsaseq module,
http://pp.com.mx/python/alsaseq
The type of track (opus or score) is autodetected.
'''
pass
def grep(score=None, channels=None):
r'''Returns a "score" containing only the channels specified
'''
if score == None:
return [1000,[],]
ticks = score[0]
new_score = [ticks,]
if channels == None:
return new_score
channels = set(channels)
global Event2channelindex
itrack = 1
while itrack < len(score):
new_score.append([])
for event in score[itrack]:
channel_index = Event2channelindex.get(event[0], False)
if channel_index:
if event[channel_index] in channels:
new_score[itrack].append(event)
else:
new_score[itrack].append(event)
itrack += 1
return new_score
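The per-event test inside grep() can be restated standalone: look up where the channel lives in each event tuple (a table like Event2channelindex, defined later in this module), and keep channel-less meta events unconditionally. A hypothetical sketch using a trimmed copy of that table:

```python
# trimmed copy of the module's Event2channelindex, for illustration only
event2channelindex = {'note': 3, 'note_on': 2, 'note_off': 2,
                      'control_change': 2, 'patch_change': 2}

def keep_event(event, wanted_channels):
    idx = event2channelindex.get(event[0], False)
    if not idx:          # meta events carry no channel: always keep
        return True
    return event[idx] in wanted_channels

track = [['set_tempo', 0, 500000],
         ['note', 0, 96, 0, 60, 100],   # channel 0
         ['note', 0, 96, 9, 38, 100]]   # channel 9 (percussion)
kept = [e for e in track if keep_event(e, {0})]
```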
def play_score(score=None):
r'''Converts the "score" to midi, and feeds it into 'aplaymidi -'
'''
if score == None:
return
import subprocess
pipe = subprocess.Popen(['aplaymidi','-'], stdin=subprocess.PIPE)
if score_type(score) == 'opus':
pipe.stdin.write(opus2midi(score))
else:
pipe.stdin.write(score2midi(score))
pipe.stdin.close()
def timeshift(score=None, shift=None, start_time=None, from_time=0, tracks={0,1,2,3,4,5,6,7,8,10,12,13,14,15}):
r'''Returns a "score" shifted in time by "shift" ticks, or shifted
so that the first event starts at "start_time" ticks.
If "from_time" is specified, only those events in the score
that begin after it are shifted. If "start_time" is less than
"from_time" (or "shift" is negative), then the intermediate
notes are deleted, though patch-change events are preserved.
If "tracks" are specified, then only those tracks get shifted.
"tracks" can be a list, tuple or set; it gets converted to set
internally.
It is deprecated to specify both "shift" and "start_time".
If this does happen, timeshift() will print a warning to
stderr and ignore the "shift" argument.
If "shift" is negative and sufficiently large that it would
leave some event with a negative tick-value, then the score
is shifted so that the first event occurs at time 0. This
also occurs if "start_time" is negative, and is also the
default if neither "shift" nor "start_time" are specified.
'''
#_warn('tracks='+str(tracks))
if score == None or len(score) < 2:
return [1000, [],]
new_score = [score[0],]
my_type = score_type(score)
if my_type == '':
return new_score
if my_type == 'opus':
_warn("timeshift: opus format is not supported\n")
# _clean_up_scores() 6.2; doesn't exist! what was it supposed to do?
return new_score
if not (shift == None) and not (start_time == None):
_warn("timeshift: shift and start_time specified: ignoring shift\n")
shift = None
if shift == None:
if (start_time == None) or (start_time < 0):
start_time = 0
# shift = start_time - from_time
i = 1 # ignore first element (ticks)
tracks = set(tracks) # defend against tuples and lists
earliest = 1000000000
if not (start_time == None) or shift < 0: # first find the earliest event
while i < len(score):
if len(tracks) and not ((i-1) in tracks):
i += 1
continue
for event in score[i]:
if event[1] < from_time:
continue # just inspect the to_be_shifted events
if event[1] < earliest:
earliest = event[1]
i += 1
if earliest > 999999999:
earliest = 0
if shift == None:
shift = start_time - earliest
elif (earliest + shift) < 0:
start_time = 0
shift = 0 - earliest
i = 1 # ignore first element (ticks)
while i < len(score):
if len(tracks) == 0 or not ((i-1) in tracks): # 3.8
new_score.append(score[i])
i += 1
continue
new_track = []
for event in score[i]:
new_event = list(event)
#if new_event[1] == 0 and shift > 0 and new_event[0] != 'note':
# pass
#elif new_event[1] >= from_time:
if new_event[1] >= from_time:
# 4.1 must not rightshift set_tempo
if new_event[0] != 'set_tempo' or shift<0:
new_event[1] += shift
elif (shift < 0) and (new_event[1] >= (from_time+shift)):
continue
new_track.append(new_event)
if len(new_track) > 0:
new_score.append(new_track)
i += 1
_clean_up_warnings()
return new_score
def segment(score=None, start_time=None, end_time=None, start=0, end=100000000,
tracks={0,1,2,3,4,5,6,7,8,10,11,12,13,14,15}):
r'''Returns a "score" which is a segment of the one supplied
as the argument, beginning at "start_time" ticks and ending
at "end_time" ticks (or at the end if "end_time" is not supplied).
If the set "tracks" is specified, only those tracks will
be returned.
'''
if score == None or len(score) < 2:
return [1000, [],]
if start_time == None: # as of 4.2 start_time is recommended
start_time = start # start is legacy usage
if end_time == None: # likewise
end_time = end
new_score = [score[0],]
my_type = score_type(score)
if my_type == '':
return new_score
if my_type == 'opus':
# more difficult (disconnecting note_on's from their note_off's)...
_warn("segment: opus format is not supported\n")
_clean_up_warnings()
return new_score
i = 1 # ignore first element (ticks); we count in ticks anyway
tracks = set(tracks) # defend against tuples and lists
while i < len(score):
if len(tracks) and not ((i-1) in tracks):
i += 1
continue
new_track = []
channel2cc_num = {} # most recent controller change before start
channel2cc_val = {}
channel2cc_time = {}
channel2patch_num = {} # keep most recent patch change before start
channel2patch_time = {}
set_tempo_num = 500000 # most recent tempo change before start 6.3
set_tempo_time = 0
earliest_note_time = end_time
for event in score[i]:
if event[0] == 'control_change': # 6.5
cc_time = channel2cc_time.get(event[2]) or 0
if (event[1] <= start_time) and (event[1] >= cc_time):
channel2cc_num[event[2]] = event[3]
channel2cc_val[event[2]] = event[4]
channel2cc_time[event[2]] = event[1]
elif event[0] == 'patch_change':
patch_time = channel2patch_time.get(event[2]) or 0
if (event[1]<=start_time) and (event[1] >= patch_time): # 2.0
channel2patch_num[event[2]] = event[3]
channel2patch_time[event[2]] = event[1]
elif event[0] == 'set_tempo':
if (event[1]<=start_time) and (event[1]>=set_tempo_time): #6.4
set_tempo_num = event[2]
set_tempo_time = event[1]
if (event[1] >= start_time) and (event[1] <= end_time):
new_track.append(event)
if (event[0] == 'note') and (event[1] < earliest_note_time):
earliest_note_time = event[1]
if len(new_track) > 0:
new_track.append(['set_tempo', start_time, set_tempo_num])
for c in channel2patch_num:
new_track.append(['patch_change',start_time,c,channel2patch_num[c]],)
for c in channel2cc_num: # 6.5
new_track.append(['control_change',start_time,c,channel2cc_num[c],channel2cc_val[c]])
new_score.append(new_track)
i += 1
_clean_up_warnings()
return new_score
def score_type(opus_or_score=None):
r'''Returns a string, either 'opus' or 'score' or ''
'''
if opus_or_score == None or str(type(opus_or_score)).find('list')<0 or len(opus_or_score) < 2:
return ''
i = 1 # ignore first element
while i < len(opus_or_score):
for event in opus_or_score[i]:
if event[0] == 'note':
return 'score'
elif event[0] == 'note_on':
return 'opus'
i += 1
return ''
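The detection rule score_type() implements can be restated as a self-contained sketch (hypothetical name): the first 'note' event seen means a score, the first 'note_on' means an opus, and a body containing neither yields ''.

```python
def detect_type(tracks):
    # tracks: the event-lists of a score/opus, without the ticks header
    for track in tracks:
        for event in track:
            if event[0] == 'note':        # score-style: note has a duration
                return 'score'
            if event[0] == 'note_on':     # opus-style: paired on/off events
                return 'opus'
    return ''
```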
def concatenate_scores(scores):
r'''Concatenates a list of scores into one score.
If the scores differ in their "ticks" parameter,
they will all get converted to millisecond-tick format.
'''
# the deepcopys are needed if the input_score's are refs to the same obj
# e.g. if invoked by midisox's repeat()
input_scores = _consistentise_ticks(scores) # 3.7
output_score = copy.deepcopy(input_scores[0])
for input_score in input_scores[1:]:
output_stats = score2stats(output_score)
delta_ticks = output_stats['nticks']
itrack = 1
while itrack < len(input_score):
if itrack >= len(output_score): # new output track if doesn't exist
output_score.append([])
for event in input_score[itrack]:
output_score[itrack].append(copy.deepcopy(event))
output_score[itrack][-1][1] += delta_ticks
itrack += 1
return output_score
def merge_scores(scores):
r'''Merges a list of scores into one score. A merged score comprises
all of the tracks from all of the input scores; un-merging is possible
by selecting just some of the tracks. If the scores differ in their
"ticks" parameter, they will all get converted to millisecond-tick
format. merge_scores attempts to resolve channel-conflicts,
but there are of course only 15 available channels...
'''
input_scores = _consistentise_ticks(scores) # 3.6
output_score = [1000]
channels_so_far = set()
all_channels = {0,1,2,3,4,5,6,7,8,10,11,12,13,14,15}
global Event2channelindex
for input_score in input_scores:
new_channels = set(score2stats(input_score).get('channels_total', []))
new_channels.discard(9) # 2.8 cha9 must remain cha9 (in GM)
for channel in channels_so_far & new_channels:
            # consistently choose the lowest available, to ease testing
free_channels = list(all_channels - (channels_so_far|new_channels))
if len(free_channels) > 0:
free_channels.sort()
free_channel = free_channels[0]
else:
free_channel = None
break
itrack = 1
while itrack < len(input_score):
for input_event in input_score[itrack]:
channel_index=Event2channelindex.get(input_event[0],False)
if channel_index and input_event[channel_index]==channel:
input_event[channel_index] = free_channel
itrack += 1
channels_so_far.add(free_channel)
channels_so_far |= new_channels
output_score.extend(input_score[1:])
return output_score
def _ticks(event):
return event[1]
def mix_opus_tracks(input_tracks): # 5.5
r'''Mixes an array of tracks into one track. A mixed track
cannot be un-mixed. It is assumed that the tracks share the same
ticks parameter and the same tempo.
Mixing score-tracks is trivial (just insert all events into one array).
Mixing opus-tracks is only slightly harder, but it's common enough
that a dedicated function is useful.
'''
output_score = [1000, []]
for input_track in input_tracks: # 5.8
input_score = opus2score([1000, input_track])
for event in input_score[1]:
output_score[1].append(event)
output_score[1].sort(key=_ticks)
output_opus = score2opus(output_score)
return output_opus[1]
def mix_scores(scores):
r'''Mixes a list of scores into one one-track score.
A mixed score cannot be un-mixed. Hopefully the scores
have no undesirable channel-conflicts between them.
If the scores differ in their "ticks" parameter,
they will all get converted to millisecond-tick format.
'''
input_scores = _consistentise_ticks(scores) # 3.6
output_score = [1000, []]
for input_score in input_scores:
for input_track in input_score[1:]:
output_score[1].extend(input_track)
return output_score
def score2stats(opus_or_score=None):
r'''Returns a dict of some basic stats about the score, like
bank_select (list of tuples (msb,lsb)),
channels_by_track (list of lists), channels_total (set),
general_midi_mode (list),
ntracks, nticks, patch_changes_by_track (list of dicts),
    num_notes_by_channel (dict of numbers),
patch_changes_total (set),
percussion (dict histogram of channel 9 events),
pitches (dict histogram of pitches on channels other than 9),
pitch_range_by_track (list, by track, of two-member-tuples),
pitch_range_sum (sum over tracks of the pitch_ranges),
'''
bank_select_msb = -1
bank_select_lsb = -1
bank_select = []
channels_by_track = []
channels_total = set([])
general_midi_mode = []
num_notes_by_channel = dict([])
patches_used_by_track = []
patches_used_total = set([])
patch_changes_by_track = []
patch_changes_total = set([])
percussion = dict([]) # histogram of channel 9 "pitches"
pitches = dict([]) # histogram of pitch-occurrences channels 0-8,10-15
    pitch_range_sum = 0  # sum over tracks of each track's pitch-range
pitch_range_by_track = []
is_a_score = True
if opus_or_score == None:
return {'bank_select':[], 'channels_by_track':[], 'channels_total':[],
'general_midi_mode':[], 'ntracks':0, 'nticks':0,
'num_notes_by_channel':dict([]),
'patch_changes_by_track':[], 'patch_changes_total':[],
'percussion':{}, 'pitches':{}, 'pitch_range_by_track':[],
'ticks_per_quarter':0, 'pitch_range_sum':0}
ticks_per_quarter = opus_or_score[0]
i = 1 # ignore first element, which is ticks
nticks = 0
while i < len(opus_or_score):
highest_pitch = 0
lowest_pitch = 128
channels_this_track = set([])
patch_changes_this_track = dict({})
for event in opus_or_score[i]:
if event[0] == 'note':
num_notes_by_channel[event[3]] = num_notes_by_channel.get(event[3],0) + 1
if event[3] == 9:
percussion[event[4]] = percussion.get(event[4],0) + 1
else:
pitches[event[4]] = pitches.get(event[4],0) + 1
if event[4] > highest_pitch:
highest_pitch = event[4]
if event[4] < lowest_pitch:
lowest_pitch = event[4]
channels_this_track.add(event[3])
channels_total.add(event[3])
finish_time = event[1] + event[2]
if finish_time > nticks:
nticks = finish_time
elif event[0] == 'note_off' or (event[0] == 'note_on' and event[4] == 0): # 4.8
finish_time = event[1]
if finish_time > nticks:
nticks = finish_time
elif event[0] == 'note_on':
is_a_score = False
num_notes_by_channel[event[2]] = num_notes_by_channel.get(event[2],0) + 1
if event[2] == 9:
percussion[event[3]] = percussion.get(event[3],0) + 1
else:
pitches[event[3]] = pitches.get(event[3],0) + 1
if event[3] > highest_pitch:
highest_pitch = event[3]
if event[3] < lowest_pitch:
lowest_pitch = event[3]
channels_this_track.add(event[2])
channels_total.add(event[2])
elif event[0] == 'patch_change':
patch_changes_this_track[event[2]] = event[3]
patch_changes_total.add(event[3])
elif event[0] == 'control_change':
if event[3] == 0: # bank select MSB
bank_select_msb = event[4]
elif event[3] == 32: # bank select LSB
bank_select_lsb = event[4]
if bank_select_msb >= 0 and bank_select_lsb >= 0:
bank_select.append((bank_select_msb,bank_select_lsb))
bank_select_msb = -1
bank_select_lsb = -1
elif event[0] == 'sysex_f0':
if _sysex2midimode.get(event[2], -1) >= 0:
general_midi_mode.append(_sysex2midimode.get(event[2]))
if is_a_score:
if event[1] > nticks:
nticks = event[1]
else:
nticks += event[1]
if lowest_pitch == 128:
lowest_pitch = 0
channels_by_track.append(channels_this_track)
patch_changes_by_track.append(patch_changes_this_track)
pitch_range_by_track.append((lowest_pitch,highest_pitch))
pitch_range_sum += (highest_pitch-lowest_pitch)
i += 1
return {'bank_select':bank_select,
'channels_by_track':channels_by_track,
'channels_total':channels_total,
'general_midi_mode':general_midi_mode,
'ntracks':len(opus_or_score)-1,
'nticks':nticks,
'num_notes_by_channel':num_notes_by_channel,
'patch_changes_by_track':patch_changes_by_track,
'patch_changes_total':patch_changes_total,
'percussion':percussion,
'pitches':pitches,
'pitch_range_by_track':pitch_range_by_track,
'pitch_range_sum':pitch_range_sum,
'ticks_per_quarter':ticks_per_quarter}
#----------------------------- Event stuff --------------------------
_sysex2midimode = {
"\x7E\x7F\x09\x01\xF7": 1,
"\x7E\x7F\x09\x02\xF7": 0,
"\x7E\x7F\x09\x03\xF7": 2,
}
# Some public-access tuples:
MIDI_events = tuple('''note_off note_on key_after_touch
control_change patch_change channel_after_touch
pitch_wheel_change'''.split())
Text_events = tuple('''text_event copyright_text_event
track_name instrument_name lyric marker cue_point text_event_08
text_event_09 text_event_0a text_event_0b text_event_0c
text_event_0d text_event_0e text_event_0f'''.split())
Nontext_meta_events = tuple('''end_track set_tempo
smpte_offset time_signature key_signature sequencer_specific
raw_meta_event sysex_f0 sysex_f7 song_position song_select
tune_request'''.split())
# unsupported: raw_data
# Actually, 'tune_request' is an F-series event, not strictly a meta-event...
Meta_events = Text_events + Nontext_meta_events
All_events = MIDI_events + Meta_events
# And three dictionaries:
Number2patch = { # General MIDI patch numbers:
0:'Acoustic Grand',
1:'Bright Acoustic',
2:'Electric Grand',
3:'Honky-Tonk',
4:'Electric Piano 1',
5:'Electric Piano 2',
6:'Harpsichord',
7:'Clav',
8:'Celesta',
9:'Glockenspiel',
10:'Music Box',
11:'Vibraphone',
12:'Marimba',
13:'Xylophone',
14:'Tubular Bells',
15:'Dulcimer',
16:'Drawbar Organ',
17:'Percussive Organ',
18:'Rock Organ',
19:'Church Organ',
20:'Reed Organ',
21:'Accordion',
22:'Harmonica',
23:'Tango Accordion',
24:'Acoustic Guitar(nylon)',
25:'Acoustic Guitar(steel)',
26:'Electric Guitar(jazz)',
27:'Electric Guitar(clean)',
28:'Electric Guitar(muted)',
29:'Overdriven Guitar',
30:'Distortion Guitar',
31:'Guitar Harmonics',
32:'Acoustic Bass',
33:'Electric Bass(finger)',
34:'Electric Bass(pick)',
35:'Fretless Bass',
36:'Slap Bass 1',
37:'Slap Bass 2',
38:'Synth Bass 1',
39:'Synth Bass 2',
40:'Violin',
41:'Viola',
42:'Cello',
43:'Contrabass',
44:'Tremolo Strings',
45:'Pizzicato Strings',
46:'Orchestral Harp',
47:'Timpani',
48:'String Ensemble 1',
49:'String Ensemble 2',
50:'SynthStrings 1',
51:'SynthStrings 2',
52:'Choir Aahs',
53:'Voice Oohs',
54:'Synth Voice',
55:'Orchestra Hit',
56:'Trumpet',
57:'Trombone',
58:'Tuba',
59:'Muted Trumpet',
60:'French Horn',
61:'Brass Section',
62:'SynthBrass 1',
63:'SynthBrass 2',
64:'Soprano Sax',
65:'Alto Sax',
66:'Tenor Sax',
67:'Baritone Sax',
68:'Oboe',
69:'English Horn',
70:'Bassoon',
71:'Clarinet',
72:'Piccolo',
73:'Flute',
74:'Recorder',
75:'Pan Flute',
76:'Blown Bottle',
77:'Skakuhachi',
78:'Whistle',
79:'Ocarina',
80:'Lead 1 (square)',
81:'Lead 2 (sawtooth)',
82:'Lead 3 (calliope)',
83:'Lead 4 (chiff)',
84:'Lead 5 (charang)',
85:'Lead 6 (voice)',
86:'Lead 7 (fifths)',
87:'Lead 8 (bass+lead)',
88:'Pad 1 (new age)',
89:'Pad 2 (warm)',
90:'Pad 3 (polysynth)',
91:'Pad 4 (choir)',
92:'Pad 5 (bowed)',
93:'Pad 6 (metallic)',
94:'Pad 7 (halo)',
95:'Pad 8 (sweep)',
96:'FX 1 (rain)',
97:'FX 2 (soundtrack)',
98:'FX 3 (crystal)',
99:'FX 4 (atmosphere)',
100:'FX 5 (brightness)',
101:'FX 6 (goblins)',
102:'FX 7 (echoes)',
103:'FX 8 (sci-fi)',
104:'Sitar',
105:'Banjo',
106:'Shamisen',
107:'Koto',
108:'Kalimba',
109:'Bagpipe',
110:'Fiddle',
111:'Shanai',
112:'Tinkle Bell',
113:'Agogo',
114:'Steel Drums',
115:'Woodblock',
116:'Taiko Drum',
117:'Melodic Tom',
118:'Synth Drum',
119:'Reverse Cymbal',
120:'Guitar Fret Noise',
121:'Breath Noise',
122:'Seashore',
123:'Bird Tweet',
124:'Telephone Ring',
125:'Helicopter',
126:'Applause',
127:'Gunshot',
}
Notenum2percussion = { # General MIDI Percussion (on Channel 9):
35:'Acoustic Bass Drum',
36:'Bass Drum 1',
37:'Side Stick',
38:'Acoustic Snare',
39:'Hand Clap',
40:'Electric Snare',
41:'Low Floor Tom',
42:'Closed Hi-Hat',
43:'High Floor Tom',
44:'Pedal Hi-Hat',
45:'Low Tom',
46:'Open Hi-Hat',
47:'Low-Mid Tom',
48:'Hi-Mid Tom',
49:'Crash Cymbal 1',
50:'High Tom',
51:'Ride Cymbal 1',
52:'Chinese Cymbal',
53:'Ride Bell',
54:'Tambourine',
55:'Splash Cymbal',
56:'Cowbell',
57:'Crash Cymbal 2',
58:'Vibraslap',
59:'Ride Cymbal 2',
60:'Hi Bongo',
61:'Low Bongo',
62:'Mute Hi Conga',
63:'Open Hi Conga',
64:'Low Conga',
65:'High Timbale',
66:'Low Timbale',
67:'High Agogo',
68:'Low Agogo',
69:'Cabasa',
70:'Maracas',
71:'Short Whistle',
72:'Long Whistle',
73:'Short Guiro',
74:'Long Guiro',
75:'Claves',
76:'Hi Wood Block',
77:'Low Wood Block',
78:'Mute Cuica',
79:'Open Cuica',
80:'Mute Triangle',
81:'Open Triangle',
}
Event2channelindex = { 'note':3, 'note_off':2, 'note_on':2,
'key_after_touch':2, 'control_change':2, 'patch_change':2,
'channel_after_touch':2, 'pitch_wheel_change':2
}
################################################################
# The code below this line is full of frightening things, all to
# do with the actual encoding and decoding of binary MIDI data.
def _twobytes2int(byte_a):
    r'''decode a 16-bit quantity from two bytes.'''
    return (byte_a[1] | (byte_a[0] << 8))
def _int2twobytes(int_16bit):
    r'''encode a 16-bit quantity into two bytes.'''
    return bytes([(int_16bit>>8) & 0xFF, int_16bit & 0xFF])
def _read_14_bit(byte_a):
    r'''decode a 14-bit quantity from two bytes.'''
    return (byte_a[0] | (byte_a[1] << 7))
def _write_14_bit(int_14bit):
    r'''encode a 14-bit quantity into two bytes.'''
    return bytes([int_14bit & 0x7F, (int_14bit>>7) & 0x7F])
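Pitch-wheel values are 14-bit quantities carried in two 7-bit data bytes, least-significant byte first; the two helpers above are exact inverses over 0..0x3FFF. A standalone roundtrip sketch (hypothetical names):

```python
def write_14_bit(v):
    # LSB first, then MSB, each masked to 7 bits
    return bytes([v & 0x7F, (v >> 7) & 0x7F])

def read_14_bit(b):
    return b[0] | (b[1] << 7)

# the pitch-wheel centre, 0x2000, splits into data bytes 0x00 0x40;
# _decode() subtracts 0x2000 so centre reads as 0
centre_bytes = write_14_bit(0x2000)
```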
def _ber_compressed_int(integer):
r'''BER compressed integer (not an ASN.1 BER, see perlpacktut for
details). Its bytes represent an unsigned integer in base 128,
most significant digit first, with as few digits as possible.
Bit eight (the high bit) is set on each byte except the last.
'''
ber = bytearray(b'')
seven_bits = 0x7F & integer
ber.insert(0, seven_bits) # XXX surely should convert to a char ?
integer >>= 7
while integer > 0:
seven_bits = 0x7F & integer
ber.insert(0, 0x80|seven_bits) # XXX surely should convert to a char ?
integer >>= 7
return ber
def _unshift_ber_int(ba):
r'''Given a bytearray, returns a tuple of (the ber-integer at the
start, and the remainder of the bytearray).
'''
if not len(ba): # 6.7
_warn('_unshift_ber_int: no integer found')
return ((0, b""))
byte = ba.pop(0)
integer = 0
while True:
integer += (byte & 0x7F)
if not (byte & 0x80):
return ((integer, ba))
if not len(ba):
_warn('_unshift_ber_int: no end-of-integer found')
return ((0, ba))
byte = ba.pop(0)
integer <<= 7
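Delta-times in MIDI files use the variable-length encoding that _ber_compressed_int() and _unshift_ber_int() implement: base-128 digits, most significant first, with the high bit set on every byte except the last. A self-contained roundtrip sketch (hypothetical names, non-mutating decode):

```python
def ber_encode(n):
    # last digit has the high bit clear; earlier digits set it
    out = bytearray([n & 0x7F])
    n >>= 7
    while n:
        out.insert(0, 0x80 | (n & 0x7F))
        n >>= 7
    return bytes(out)

def ber_decode(ba):
    # returns (integer, remaining bytes)
    n = 0
    for i, byte in enumerate(ba):
        n = (n << 7) | (byte & 0x7F)
        if not (byte & 0x80):
            return n, ba[i+1:]

# 0..127 fit in one byte; 128 is the first value needing two (0x81 0x00)
```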
def _clean_up_warnings(): # 5.4
# Call this before returning from any publicly callable function
# whenever there's a possibility that a warning might have been printed
# by the function, or by any private functions it might have called.
global _previous_times
global _previous_warning
if _previous_times > 1:
# E:1176, 0: invalid syntax (<string>, line 1176) (syntax-error) ???
# print(' previous message repeated '+str(_previous_times)+' times', file=sys.stderr)
# 6.7
sys.stderr.write(' previous message repeated {0} times\n'.format(_previous_times))
elif _previous_times > 0:
sys.stderr.write(' previous message repeated\n')
_previous_times = 0
_previous_warning = ''
def _warn(s=''):
global _previous_times
global _previous_warning
if s == _previous_warning: # 5.4
_previous_times = _previous_times + 1
else:
_clean_up_warnings()
sys.stderr.write(str(s)+"\n")
_previous_warning = s
def _some_text_event(which_kind=0x01, text=b'some_text'):
if str(type(text)).find("'str'") >= 0: # 6.4 test for back-compatibility
data = bytes(text, encoding='ISO-8859-1')
else:
data = bytes(text)
return b'\xFF'+bytes((which_kind,))+_ber_compressed_int(len(data))+data
def _consistentise_ticks(scores): # 3.6
# used by mix_scores, merge_scores, concatenate_scores
if len(scores) == 1:
return copy.deepcopy(scores)
are_consistent = True
ticks = scores[0][0]
iscore = 1
while iscore < len(scores):
if scores[iscore][0] != ticks:
are_consistent = False
break
iscore += 1
if are_consistent:
return copy.deepcopy(scores)
new_scores = []
iscore = 0
while iscore < len(scores):
score = scores[iscore]
new_scores.append(opus2score(to_millisecs(score2opus(score))))
iscore += 1
return new_scores
###########################################################################
def _decode(trackdata=b'', exclude=None, include=None,
event_callback=None, exclusive_event_callback=None, no_eot_magic=False):
r'''Decodes MIDI track data into an opus-style list of events.
The options:
        'exclude' is a list (or set) of event types which will be ignored
        'include' (with no 'exclude') makes 'exclude' the list
           of all possible events, /minus/ what 'include' specifies
'event_callback' is a coderef
'exclusive_event_callback' is a coderef
'''
trackdata = bytearray(trackdata)
if exclude == None:
exclude = []
if include == None:
include = []
if include and not exclude:
exclude = All_events
include = set(include)
exclude = set(exclude)
# Pointer = 0; not used here; we eat through the bytearray instead.
event_code = -1; # used for running status
event_count = 0;
events = []
while(len(trackdata)):
# loop while there's anything to analyze ...
eot = False # When True, the event registrar aborts this loop
event_count += 1
E = []
# E for events - we'll feed it to the event registrar at the end.
# Slice off the delta time code, and analyze it
[time, remainder] = _unshift_ber_int(trackdata)
# Now let's see what we can make of the command
first_byte = trackdata.pop(0) & 0xFF
if (first_byte < 0xF0): # It's a MIDI event
if (first_byte & 0x80):
event_code = first_byte
else:
# It wants running status; use last event_code value
trackdata.insert(0, first_byte)
if (event_code == -1):
_warn("Running status not set; Aborting track.")
return []
command = event_code & 0xF0
channel = event_code & 0x0F
if (command == 0xF6): # 0-byte argument
pass
elif (command == 0xC0 or command == 0xD0): # 1-byte argument
parameter = trackdata.pop(0) # could be B
else: # 2-byte argument could be BB or 14-bit
parameter = (trackdata.pop(0), trackdata.pop(0))
#################################################################
# MIDI events
if (command == 0x80):
if 'note_off' in exclude:
continue
E = ['note_off', time, channel, parameter[0], parameter[1]]
elif (command == 0x90):
if 'note_on' in exclude:
continue
E = ['note_on', time, channel, parameter[0], parameter[1]]
elif (command == 0xA0):
if 'key_after_touch' in exclude:
continue
E = ['key_after_touch',time,channel,parameter[0],parameter[1]]
elif (command == 0xB0):
if 'control_change' in exclude:
continue
E = ['control_change',time,channel,parameter[0],parameter[1]]
elif (command == 0xC0):
if 'patch_change' in exclude:
continue
E = ['patch_change', time, channel, parameter]
elif (command == 0xD0):
if 'channel_after_touch' in exclude:
continue
E = ['channel_after_touch', time, channel, parameter]
elif (command == 0xE0):
if 'pitch_wheel_change' in exclude:
continue
E = ['pitch_wheel_change', time, channel,
_read_14_bit(parameter)-0x2000]
else:
_warn("Shouldn't get here; command="+hex(command))
elif (first_byte == 0xFF): # It's a Meta-Event! ##################
#[command, length, remainder] =
# unpack("xCwa*", substr(trackdata, $Pointer, 6));
#Pointer += 6 - len(remainder);
# # Move past JUST the length-encoded.
command = trackdata.pop(0) & 0xFF
[length, trackdata] = _unshift_ber_int(trackdata)
if (command == 0x00):
if (length == 2):
E = ['set_sequence_number',time,_twobytes2int(trackdata)]
else:
_warn('set_sequence_number: length must be 2, not '+str(length))
E = ['set_sequence_number', time, 0]
elif command >= 0x01 and command <= 0x0f: # Text events
# 6.2 take it in bytes; let the user get the right encoding.
# text_str = trackdata[0:length].decode('ascii','ignore')
# text_str = trackdata[0:length].decode('ISO-8859-1')
# 6.4 take it in bytes; let the user get the right encoding.
text_data = bytes(trackdata[0:length]) # 6.4
# Defined text events
if (command == 0x01):
E = ['text_event', time, text_data]
elif (command == 0x02):
E = ['copyright_text_event', time, text_data]
elif (command == 0x03):
E = ['track_name', time, text_data]
elif (command == 0x04):
E = ['instrument_name', time, text_data]
elif (command == 0x05):
E = ['lyric', time, text_data]
elif (command == 0x06):
E = ['marker', time, text_data]
elif (command == 0x07):
E = ['cue_point', time, text_data]
# Reserved but apparently unassigned text events
elif (command == 0x08):
E = ['text_event_08', time, text_data]
elif (command == 0x09):
E = ['text_event_09', time, text_data]
elif (command == 0x0a):
E = ['text_event_0a', time, text_data]
elif (command == 0x0b):
E = ['text_event_0b', time, text_data]
elif (command == 0x0c):
E = ['text_event_0c', time, text_data]
elif (command == 0x0d):
E = ['text_event_0d', time, text_data]
elif (command == 0x0e):
E = ['text_event_0e', time, text_data]
elif (command == 0x0f):
E = ['text_event_0f', time, text_data]
# Now the sticky events -------------------------------------
elif (command == 0x2F):
E = ['end_track', time]
# The code for handling this, oddly, comes LATER,
# in the event registrar.
elif (command == 0x51): # DTime, Microseconds/Crochet
if length != 3:
_warn('set_tempo event, but length='+str(length))
E = ['set_tempo', time,
struct.unpack(">I", b'\x00'+trackdata[0:3])[0]]
elif (command == 0x54):
if length != 5: # DTime, HR, MN, SE, FR, FF
_warn('smpte_offset event, but length='+str(length))
E = ['smpte_offset',time] + list(struct.unpack(">BBBBB",trackdata[0:5]))
elif (command == 0x58):
if length != 4: # DTime, NN, DD, CC, BB
_warn('time_signature event, but length='+str(length))
E = ['time_signature', time]+list(trackdata[0:4])
elif (command == 0x59):
if length != 2: # DTime, SF(signed), MI
_warn('key_signature event, but length='+str(length))
E = ['key_signature',time] + list(struct.unpack(">bB",trackdata[0:2]))
elif (command == 0x7F): # 6.4
E = ['sequencer_specific',time, bytes(trackdata[0:length])]
else:
E = ['raw_meta_event', time, command,
bytes(trackdata[0:length])] # 6.0
#"[uninterpretable meta-event command of length length]"
# DTime, Command, Binary Data
# It's uninterpretable; record it as raw_data.
# Pointer += length; # Now move Pointer
trackdata = trackdata[length:]
######################################################################
elif (first_byte == 0xF0 or first_byte == 0xF7):
# Note that sysexes in MIDI /files/ are different than sysexes
# in MIDI transmissions!! The vast majority of system exclusive
# messages will just use the F0 format. For instance, the
# transmitted message F0 43 12 00 07 F7 would be stored in a
# MIDI file as F0 05 43 12 00 07 F7. As mentioned above, it is
# required to include the F7 at the end so that the reader of the
# MIDI file knows that it has read the entire message. (But the F7
# is omitted if this is a non-final block in a multiblock sysex;
# but the F7 (if there) is counted in the message's declared
# length, so we don't have to think about it anyway.)
#command = trackdata.pop(0)
[length, trackdata] = _unshift_ber_int(trackdata)
if first_byte == 0xF0:
# 20091008 added ISO-8859-1 to get an 8-bit str
# 6.4 return bytes instead
E = ['sysex_f0', time, bytes(trackdata[0:length])]
else:
E = ['sysex_f7', time, bytes(trackdata[0:length])]
trackdata = trackdata[length:]
######################################################################
# Now, the MIDI file spec says:
# <track data> = <MTrk event>+
# <MTrk event> = <delta-time> <event>
# <event> = <MIDI event> | <sysex event> | <meta-event>
# I know that, on the wire, <MIDI event> can include note_on,
# note_off, and all the other 8x to Ex events, AND Fx events
# other than F0, F7, and FF -- namely, <song position msg>,
# <song select msg>, and <tune request>.
#
        # Whether these can occur in MIDI files is not clearly
        # specified by the MIDI file spec.  So, I'm going to assume that
# they CAN, in practice, occur. I don't know whether it's
# proper for you to actually emit these into a MIDI file.
elif (first_byte == 0xF2): # DTime, Beats
# <song position msg> ::= F2 <data pair>
E = ['song_position', time, _read_14_bit(trackdata[:2])]
trackdata = trackdata[2:]
elif (first_byte == 0xF3): # <song select msg> ::= F3 <data singlet>
# E = ['song_select', time, struct.unpack('>B',trackdata.pop(0))[0]]
E = ['song_select', time, trackdata[0]]
trackdata = trackdata[1:]
# DTime, Thing (what?! song number? whatever ...)
elif (first_byte == 0xF6): # DTime
E = ['tune_request', time]
# What would a tune request be doing in a MIDI /file/?
#########################################################
# ADD MORE META-EVENTS HERE. TODO:
# f1 -- MTC Quarter Frame Message. One data byte follows
# the Status; it's the time code value, from 0 to 127.
# f8 -- MIDI clock. no data.
# fa -- MIDI start. no data.
# fb -- MIDI continue. no data.
# fc -- MIDI stop. no data.
# fe -- Active sense. no data.
# f4 f5 f9 fd -- unallocated
r'''
elif (first_byte > 0xF0) { # Some unknown kinda F-series event ####
# Here we only produce a one-byte piece of raw data.
# But the encoder for 'raw_data' accepts any length of it.
E = [ 'raw_data',
time, substr(trackdata,Pointer,1) ]
# DTime and the Data (in this case, the one Event-byte)
++Pointer; # itself
'''
elif first_byte > 0xF0: # Some unknown F-series event
# Here we only produce a one-byte piece of raw data.
# E = ['raw_data', time, bytest(trackdata[0])] # 6.4
E = ['raw_data', time, trackdata[0]] # 6.4 6.7
trackdata = trackdata[1:]
else: # Fallthru.
_warn("Aborting track. Command-byte first_byte="+hex(first_byte))
break
# End of the big if-group
######################################################################
# THE EVENT REGISTRAR...
if E and (E[0] == 'end_track'):
# This is the code for exceptional handling of the EOT event.
eot = True
if not no_eot_magic:
if E[1] > 0: # a null text-event to carry the delta-time
E = ['text_event', E[1], '']
else:
E = [] # EOT with a delta-time of 0; ignore it.
if E and not (E[0] in exclude):
#if ( $exclusive_event_callback ):
# &{ $exclusive_event_callback }( @E );
#else:
# &{ $event_callback }( @E ) if $event_callback;
events.append(E)
if eot:
break
# End of the big "Event" while-block
return events
###########################################################################
def _encode(events_lol, unknown_callback=None, never_add_eot=False,
no_eot_magic=False, no_running_status=False):
# encode an event structure, presumably for writing to a file
# Calling format:
# $data_r = MIDI::Event::encode( \@event_lol, { options } );
# Takes a REFERENCE to an event structure (a LoL)
# Returns an (unblessed) REFERENCE to track data.
# If you want to use this to encode a /single/ event,
# you still have to do it as a reference to an event structure (a LoL)
# that just happens to have just one event. I.e.,
# encode( [ $event ] ) or encode( [ [ 'note_on', 100, 5, 42, 64] ] )
# If you're doing this, consider the never_add_eot track option, as in
# print MIDI ${ encode( [ $event], { 'never_add_eot' => 1} ) };
data = [] # what I'll store the chunks of byte-data in
# This is so my end_track magic won't corrupt the original
events = copy.deepcopy(events_lol)
if not never_add_eot:
# One way or another, tack on an 'end_track'
if events:
last = events[-1]
if not (last[0] == 'end_track'): # no end_track already
if (last[0] == 'text_event' and len(last[2]) == 0):
# 0-length text event at track-end.
if no_eot_magic:
# Exceptional case: don't mess with track-final
# 0-length text_events; just peg on an end_track
events.append(['end_track', 0])
else:
# NORMAL CASE: replace with an end_track, leaving DTime
last[0] = 'end_track'
else:
# last event was neither 0-length text_event nor end_track
events.append(['end_track', 0])
else: # an eventless track!
events = [['end_track', 0],]
# maybe_running_status = not no_running_status # unused? 4.7
last_status = -1
for event_r in (events):
E = copy.deepcopy(event_r)
# otherwise the shifting'd corrupt the original
if not E:
continue
event = E.pop(0)
if not len(event):
continue
dtime = int(E.pop(0))
# print('event='+str(event)+' dtime='+str(dtime))
event_data = ''
if ( # MIDI events -- eligible for running status
event == 'note_on'
or event == 'note_off'
or event == 'control_change'
or event == 'key_after_touch'
or event == 'patch_change'
or event == 'channel_after_touch'
or event == 'pitch_wheel_change' ):
# This block is where we spend most of the time. Gotta be tight.
if (event == 'note_off'):
status = 0x80 | (int(E[0]) & 0x0F)
parameters = struct.pack('>BB', int(E[1])&0x7F, int(E[2])&0x7F)
elif (event == 'note_on'):
status = 0x90 | (int(E[0]) & 0x0F)
parameters = struct.pack('>BB', int(E[1])&0x7F, int(E[2])&0x7F)
elif (event == 'key_after_touch'):
status = 0xA0 | (int(E[0]) & 0x0F)
parameters = struct.pack('>BB', int(E[1])&0x7F, int(E[2])&0x7F)
elif (event == 'control_change'):
status = 0xB0 | (int(E[0]) & 0x0F)
parameters = struct.pack('>BB', int(E[1])&0xFF, int(E[2])&0xFF)
elif (event == 'patch_change'):
status = 0xC0 | (int(E[0]) & 0x0F)
parameters = struct.pack('>B', int(E[1]) & 0xFF)
elif (event == 'channel_after_touch'):
status = 0xD0 | (int(E[0]) & 0x0F)
parameters = struct.pack('>B', int(E[1]) & 0xFF)
elif (event == 'pitch_wheel_change'):
status = 0xE0 | (int(E[0]) & 0x0F)
parameters = _write_14_bit(int(E[1]) + 0x2000)
else:
_warn("BADASS FREAKOUT ERROR 31415!")
# And now the encoding
# w = BER compressed integer (not ASN.1 BER, see perlpacktut for
# details). Its bytes represent an unsigned integer in base 128,
# most significant digit first, with as few digits as possible.
# Bit eight (the high bit) is set on each byte except the last.
data.append(_ber_compressed_int(dtime))
if (status != last_status) or no_running_status:
data.append(struct.pack('>B', status))
data.append(parameters)
last_status = status
continue
else:
# Not a MIDI event.
# All the code in this block could be more efficient,
# but this is not where the code needs to be tight.
# print "zaz $event\n";
last_status = -1
if event == 'raw_meta_event':
event_data = _some_text_event(int(E[0]), E[1])
elif (event == 'set_sequence_number'): # 3.9
event_data = b'\xFF\x00\x02'+_int2twobytes(E[0])
# Text meta-events...
# a case for a dict, I think (pjb) ...
elif (event == 'text_event'):
event_data = _some_text_event(0x01, E[0])
elif (event == 'copyright_text_event'):
event_data = _some_text_event(0x02, E[0])
elif (event == 'track_name'):
event_data = _some_text_event(0x03, E[0])
elif (event == 'instrument_name'):
event_data = _some_text_event(0x04, E[0])
elif (event == 'lyric'):
event_data = _some_text_event(0x05, E[0])
elif (event == 'marker'):
event_data = _some_text_event(0x06, E[0])
elif (event == 'cue_point'):
event_data = _some_text_event(0x07, E[0])
elif (event == 'text_event_08'):
event_data = _some_text_event(0x08, E[0])
elif (event == 'text_event_09'):
event_data = _some_text_event(0x09, E[0])
elif (event == 'text_event_0a'):
event_data = _some_text_event(0x0A, E[0])
elif (event == 'text_event_0b'):
event_data = _some_text_event(0x0B, E[0])
elif (event == 'text_event_0c'):
event_data = _some_text_event(0x0C, E[0])
elif (event == 'text_event_0d'):
event_data = _some_text_event(0x0D, E[0])
elif (event == 'text_event_0e'):
event_data = _some_text_event(0x0E, E[0])
elif (event == 'text_event_0f'):
event_data = _some_text_event(0x0F, E[0])
# End of text meta-events
elif (event == 'end_track'):
event_data = b"\xFF\x2F\x00"
elif (event == 'set_tempo'):
#event_data = struct.pack(">BBwa*", 0xFF, 0x51, 3,
# substr( struct.pack('>I', E[0]), 1, 3))
event_data = b'\xFF\x51\x03'+struct.pack('>I',E[0])[1:]
elif (event == 'smpte_offset'):
# event_data = struct.pack(">BBwBBBBB", 0xFF, 0x54, 5, E[0:5] )
event_data = struct.pack(">BBBbBBBB", 0xFF,0x54,0x05,E[0],E[1],E[2],E[3],E[4])
elif (event == 'time_signature'):
# event_data = struct.pack(">BBwBBBB", 0xFF, 0x58, 4, E[0:4] )
event_data = struct.pack(">BBBbBBB", 0xFF, 0x58, 0x04, E[0],E[1],E[2],E[3])
elif (event == 'key_signature'):
event_data = struct.pack(">BBBbB", 0xFF, 0x59, 0x02, E[0],E[1])
elif (event == 'sequencer_specific'):
# event_data = struct.pack(">BBwa*", 0xFF,0x7F, len(E[0]), E[0])
event_data = _some_text_event(0x7F, E[0])
# End of Meta-events
# Other Things...
elif (event == 'sysex_f0'):
#event_data = struct.pack(">Bwa*", 0xF0, len(E[0]), E[0])
#B=bitstring w=BER-compressed-integer a=null-padded-ascii-str
event_data = bytearray(b'\xF0')+_ber_compressed_int(len(E[0]))+bytearray(E[0])
elif (event == 'sysex_f7'):
#event_data = struct.pack(">Bwa*", 0xF7, len(E[0]), E[0])
event_data = bytearray(b'\xF7')+_ber_compressed_int(len(E[0]))+bytearray(E[0])
elif (event == 'song_position'):
event_data = b"\xF2" + _write_14_bit( E[0] )
elif (event == 'song_select'):
event_data = struct.pack('>BB', 0xF3, E[0] )
elif (event == 'tune_request'):
event_data = b"\xF6"
elif (event == 'raw_data'):
_warn("_encode: raw_data event not supported")
# event_data = E[0]
continue
# End of Other Stuff
else:
# The Big Fallthru
if unknown_callback:
# push(@data, &{ $unknown_callback }( @$event_r ))
pass
else:
_warn("Unknown event: "+str(event))
# To suppress the complaint here, just set
# 'unknown_callback' => sub { return () }
continue
#print "Event $event encoded part 2\n"
if isinstance(event_data, str):
event_data = bytearray(event_data.encode('Latin1', 'ignore'))
if len(event_data): # how could $event_data be empty
# data.append(struct.pack('>wa*', dtime, event_data))
# print(' event_data='+str(event_data))
data.append(_ber_compressed_int(dtime)+event_data)
return b''.join(data)
```
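The encoder above leans on `_ber_compressed_int`, which is defined elsewhere in the module; it packs a delta-time as a MIDI variable-length quantity (an unsigned integer in base 128, most significant byte first, with the high bit set on every byte except the last). A minimal standalone sketch of that encoding — the function name here is illustrative, not the module's actual helper:

```python
def vlq_encode(n):
    """Encode a non-negative int as a MIDI variable-length quantity."""
    # The low 7 bits go last with the high bit clear; every higher
    # 7-bit group is emitted before it with bit 7 set.
    chunks = [n & 0x7F]
    n >>= 7
    while n:
        chunks.append(0x80 | (n & 0x7F))
        n >>= 7
    return bytes(reversed(chunks))

print(vlq_encode(0))           # b'\x00'
print(vlq_encode(128))         # b'\x81\x00'
print(vlq_encode(0x0FFFFFFF))  # b'\xff\xff\xff\x7f'
```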
```
import pandas as pd
from rdkit import Chem
import numpy as np
import seaborn as sns
from descriptors.preprocessing import preprocess
from descriptors.dft_featurisation import *
import matplotlib.pyplot as plt
nicolit= pd.read_csv("data/NiCOlit.csv")
nicolit= preprocess(nicolit)
yield_ = []
for i, y in enumerate(nicolit.analytical_yield):
if float(y) == float(y):  # NaN != NaN, so True only when the value parsed to a real number
yield_.append(float(y))
else:
yield_.append(float(nicolit.isolated_yield[i]))
nicolit['yield'] = yield_
nicolit["analytical_yield"] = [float(y) for y in nicolit.analytical_yield]
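# Aside: the `float(y) == float(y)` test above relies on the IEEE-754 rule
# that NaN is the only float comparing unequal to itself, so the condition
# is True exactly when analytical_yield parsed to a real number:
nan_probe = float('nan')
assert nan_probe != nan_probe        # NaN compares unequal to itself
assert float('1.0') == float('1.0')  # real numbers compare equal as usual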
fig = plt.figure(figsize=(7,5))
ax = plt.subplot(111)
g = sns.violinplot(x="solvent", y='yield', data=nicolit, cut=0, ax=ax,
linewidth=0)
ax.tick_params(axis='x', rotation=90)
fig = plt.figure(figsize=(7,5))
ax = plt.subplot(111)
g = sns.violinplot(x="coupling_partner_class", y='yield', data=nicolit, cut=2, ax=ax,
linewidth=0)
ax.tick_params(axis='x', rotation=90)
from descriptors.dft_featurisation import process_dataframe_dft
nicolit_dft = process_dataframe_dft(nicolit, data_path = "data/utils/")
#try to get info on dft-based descriptors for full and partial training:
#1. get the dft featurisation with all parameters
#2. reduce the number of parameters
#3. get the relative importance of the most involved parameters
data_path = "data/utils/"
descritpors_to_remove_lig = ["number_of_atoms", "charge", "multiplicity", "molar_mass", "molar_volume", "E_scf", "zero_point_correction", "E_thermal_correction","H_thermal_correction", "G_thermal_correction", "E_zpe", "E", "H", "G", "stoichiometry", "converged", "ES_root_molar_volume", "ES_root_electronic_spatial_extent",
"X_0", "X_1", "X_2", "X_3", "X_4", "X_5", "X_6", "X_7",
"Y_0", "Y_1", "Y_2", "Y_3", "Y_4", "Y_5", "Y_6", "Y_7",
"Z_0", "Z_1", "Z_2", "Z_3", "Z_4", "Z_5", "Z_6", "Z_7",
"at_0", "at_1", "at_2", "at_3", "at_4", "at_5", "at_6", "at_7", 'ES_root_Mulliken_charge_0','ES_root_Mulliken_charge_1','ES_root_Mulliken_charge_2','ES_root_Mulliken_charge_3','ES_root_Mulliken_charge_4','ES_root_Mulliken_charge_5','ES_root_Mulliken_charge_6',
'ES_root_Mulliken_charge_7',
'ES_root_NPA_charge_0','ES_root_NPA_charge_1', 'ES_root_NPA_charge_2', 'ES_root_NPA_charge_3', 'ES_root_NPA_charge_4', 'ES_root_NPA_charge_5','ES_root_NPA_charge_6','ES_root_NPA_charge_7',
'ES_root_NPA_core_0', 'ES_root_NPA_core_1', 'ES_root_NPA_core_2', 'ES_root_NPA_core_3', 'ES_root_NPA_core_4', 'ES_root_NPA_core_5', 'ES_root_NPA_core_6', 'ES_root_NPA_core_7',
'ES_root_NPA_valence_0', 'ES_root_NPA_valence_1', 'ES_root_NPA_valence_2', 'ES_root_NPA_valence_3', 'ES_root_NPA_valence_4', 'ES_root_NPA_valence_5', 'ES_root_NPA_valence_6', 'ES_root_NPA_valence_7',
'ES_root_NPA_Rydberg_0', 'ES_root_NPA_Rydberg_1', 'ES_root_NPA_Rydberg_2', 'ES_root_NPA_Rydberg_3', 'ES_root_NPA_Rydberg_4', 'ES_root_NPA_Rydberg_5', 'ES_root_NPA_Rydberg_6', 'ES_root_NPA_Rydberg_7',
'ES_root_NPA_total_0', 'ES_root_NPA_total_1', 'ES_root_NPA_total_2', 'ES_root_NPA_total_3', 'ES_root_NPA_total_4', 'ES_root_NPA_total_5', 'ES_root_NPA_total_6', 'ES_root_NPA_total_7',
'ES_transition_0', 'ES_transition_1', 'ES_transition_2', 'ES_transition_3', 'ES_transition_4', 'ES_transition_5', 'ES_transition_6', 'ES_transition_7', 'ES_transition_8', 'ES_transition_9',
'ES_osc_strength_0', 'ES_osc_strength_1', 'ES_osc_strength_2', 'ES_osc_strength_3', 'ES_osc_strength_4', 'ES_osc_strength_5', 'ES_osc_strength_6', 'ES_osc_strength_7', 'ES_osc_strength_8', 'ES_osc_strength_9',
'ES_<S**2>_0', 'ES_<S**2>_1', 'ES_<S**2>_2', 'ES_<S**2>_3', 'ES_<S**2>_4', 'ES_<S**2>_5', 'ES_<S**2>_6', 'ES_<S**2>_7', 'ES_<S**2>_8','ES_<S**2>_9']
descritpors_to_remove_ax = ["number_of_atoms", "charge", "multiplicity", "molar_mass", "molar_volume", "E_scf", "zero_point_correction", "E_thermal_correction","H_thermal_correction", "G_thermal_correction", "E_zpe", "E", "H", "G", "stoichiometry", "converged", "ES_root_molar_volume", "ES_root_electronic_spatial_extent",
"X_0", "X_1", "X_2", "X_3",
"Y_0", "Y_1", "Y_2", "Y_3",
"Z_0", "Z_1", "Z_2", "Z_3",
"at_0", "at_1", "at_2", "at_3",
'ES_root_Mulliken_charge_0', 'ES_root_Mulliken_charge_1', 'ES_root_Mulliken_charge_2', 'ES_root_Mulliken_charge_3',
'ES_root_NPA_charge_0', 'ES_root_NPA_charge_1', 'ES_root_NPA_charge_2', 'ES_root_NPA_charge_3',
'ES_root_NPA_core_0', 'ES_root_NPA_core_1', 'ES_root_NPA_core_2', 'ES_root_NPA_core_3',
'ES_root_NPA_valence_0', 'ES_root_NPA_valence_1', 'ES_root_NPA_valence_2', 'ES_root_NPA_valence_3',
'ES_root_NPA_Rydberg_0', 'ES_root_NPA_Rydberg_1', 'ES_root_NPA_Rydberg_2', 'ES_root_NPA_Rydberg_3',
'ES_root_NPA_total_0', 'ES_root_NPA_total_1', 'ES_root_NPA_total_2', 'ES_root_NPA_total_3',
'ES_transition_0', 'ES_transition_1', 'ES_transition_2', 'ES_transition_3', 'ES_transition_4', 'ES_transition_5', 'ES_transition_6', 'ES_transition_7', 'ES_transition_8', 'ES_transition_9',
'ES_osc_strength_0', 'ES_osc_strength_1', 'ES_osc_strength_2', 'ES_osc_strength_3', 'ES_osc_strength_4', 'ES_osc_strength_5', 'ES_osc_strength_6', 'ES_osc_strength_7', 'ES_osc_strength_8', 'ES_osc_strength_9',
'ES_<S**2>_0', 'ES_<S**2>_1', 'ES_<S**2>_2', 'ES_<S**2>_3', 'ES_<S**2>_4', 'ES_<S**2>_5', 'ES_<S**2>_6', 'ES_<S**2>_7', 'ES_<S**2>_8', 'ES_<S**2>_9']
descritpors_to_remove_al = ["converged", "stoichiometry", "ES_root_molar_volume", "X_0", "Y_0", "Z_0", "at_0", "ES_transition_7", "ES_transition_8", "ES_transition_9", 'ES_osc_strength_7', 'ES_osc_strength_8', 'ES_osc_strength_9', 'ES_<S**2>_7', 'ES_<S**2>_8', 'ES_<S**2>_9']
def add_solvent_prop(nicolit, data_path):
solv = pd.read_csv(data_path + "solvents.csv", sep = ',', index_col=0)
for prop in solv.columns:
list_prop = [solv[prop][solvent] for solvent in nicolit.solvent]
nicolit[prop] = list_prop
def add_substrate_prop(nicolit, data_path):
substrate = pd.read_csv(data_path + "substrate_dft.csv", sep = ',', index_col=0)
substrate.drop(columns=descritpors_to_remove_lig, inplace=True)
canon_rdkit = [Chem.CanonSmiles(smi_co) for smi_co in substrate.index.to_list() ]
substrate["can_rdkit"] = canon_rdkit
substrate.set_index("can_rdkit", inplace=True)
substrate = substrate[substrate.duplicated(keep='first') != True]
substrate = substrate[~substrate.index.duplicated(keep='first')]
for prop in substrate.columns:
sub_prop =str("sub_"+prop)
list_prop = [substrate[prop][solvent] for solvent in nicolit.substrate]
nicolit[sub_prop] = list_prop
def add_cp_prop(nicolit, data_path):
AX = pd.read_csv(data_path + "AX_dft.csv", sep = ',', index_col=0)
AX.drop(columns=descritpors_to_remove_ax, inplace=True)
canon_rdkit = [Chem.CanonSmiles(smi_co) for smi_co in AX.index.to_list() ]
AX["can_rdkit"] = canon_rdkit
AX.set_index("can_rdkit", inplace=True)
for prop in AX.columns:
ax_prop =str("ax_"+prop)
list_prop = [AX[prop][solvent] for solvent in nicolit.effective_coupling_partner]
nicolit[ax_prop] = list_prop
def add_lig_prop(nicolit, data_path):
# issue : what should we put for nan ?
ligs = pd.read_csv(data_path + "ligand_dft.csv", sep = ',', index_col=0)
ligs.drop(columns=descritpors_to_remove_lig, inplace=True)
ligs.index.to_list()
canon_rdkit = []
for smi in ligs.index.to_list():
try:
canon_rdkit.append(Chem.CanonSmiles(smi))
except:
canon_rdkit.append(smi)
ligs["can_rdkit"] = canon_rdkit
ligs.set_index("can_rdkit", inplace=True)
for prop in ligs.columns:
lig_prop =str("lig_"+prop)
list_prop = [ligs[prop][solvent] for solvent in nicolit.effective_ligand]
nicolit[lig_prop] = list_prop
def add_LA_prop(nicolit, data_path):
AL = pd.read_csv(data_path + "AL_dft.csv", sep = ',', index_col=0)
AL.drop(columns=descritpors_to_remove_al, inplace=True)
canon_rdkit = []
for smi in AL.index.to_list():
try:
canon_rdkit.append(Chem.CanonSmiles(smi))
except:
canon_rdkit.append(smi)
AL["can_rdkit"] = canon_rdkit
AL.set_index("can_rdkit", inplace=True)
for prop in AL.columns:
al_prop =str("al_"+prop)
list_prop = [AL[prop][solvent] for solvent in nicolit["Lewis Acid"]]
nicolit[al_prop] = list_prop
def choose_yield(nicolit):
yield_ = []
for i, y in enumerate(nicolit.analytical_yield):
if float(y) == float(y):  # NaN != NaN: keep analytical yield when it parsed to a real number
yield_.append(float(y))
else:
yield_.append(float(nicolit.isolated_yield[i]))
nicolit['yield'] = yield_
def feat_dft_nicolit(nicolit, data_path):
# attention no precursor and no origin
add_solvent_prop(nicolit, data_path)
add_substrate_prop(nicolit, data_path)
add_cp_prop(nicolit, data_path)
add_lig_prop(nicolit, data_path)
add_LA_prop(nicolit,data_path)
nicolit.time = times(nicolit)
nicolit.temperature = temperatures(nicolit)
nicolit[['eq_substrate','eq_coupling_partner', 'eq_catalyst', 'eq_ligand','eq_reagent']] = equivalents(nicolit)
choose_yield(nicolit)
nicolit = pd.read_csv("data/NiCOlit.csv")
nicolit = preprocess(nicolit)
feat_dft_nicolit(nicolit,data_path=data_path)
nicolit_float = nicolit.select_dtypes(include=[np.float64])
from sklearn.metrics import r2_score
nicolit_float.drop(columns=['isolated_yield', 'polarisabilite', 'Unnamed: 9'], inplace=True)
# scale nicolit_float:
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X = nicolit_float.values
print(np.shape(X))
print(np.shape(nicolit_float))
X = scaler.fit_transform(X)
nicolit_float[nicolit_float.columns] = X
for prop in nicolit_float.columns:
if r2_score(nicolit_float[prop], nicolit_float['yield']) > -0.5:
print(prop , r2_score(nicolit_float[prop], nicolit_float['yield']))
from analysis import *
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
#reducing the parameters:
def perf_param(df, param_list):
test = select_parameters(df, param_list)
return perf(test, iterations=10)
def select_parameters(df, param_list):
# Return the chosen parameters plus the target; appending 'yield' to
# param_list in place would mutate the caller's list across calls.
return df[param_list + ['yield']]
def perf(test_1, iterations=10):
scores = []
for i in range(iterations):
X = test_1.drop(columns=['yield']).values
y = test_1['yield']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
regr = RandomForestRegressor()
regr.fit(X_train, y_train)
scores.append(regr.score(X_test, y_test))
return np.mean(scores), np.std(scores)
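# Note: random_state=42 is fixed inside perf(), so every iteration reuses the
# identical train/test split; the spread in `scores` reflects only the
# RandomForest's internal randomness. A quick check that a fixed seed gives a
# deterministic shuffle:
split_a = np.random.RandomState(42).permutation(10)
split_b = np.random.RandomState(42).permutation(10)
assert (split_a == split_b).all()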
param1 = ['time', 'temperature']
perf_param(nicolit_float, param1)
param2 = ['time', 'temperature','eq_substrate', 'eq_coupling_partner','eq_catalyst', 'eq_ligand', 'eq_reagent']
perf_param(nicolit_float, param2)
param3_to_add = ['ax_lumo_energy', 'ax_homo_energy', 'sub_lumo_energy', 'sub_homo_energy', 'sub_VBur_0',
'ax_VBur_0']
param3 = param2 + param3_to_add
print(param3)
perf_param(nicolit_float, param3)
param4_to_add = ['ax_lumo_energy', 'ax_homo_energy', 'ax_VBur_0',
'sub_lumo_energy', 'sub_homo_energy', 'sub_VBur_0',
'lig_homo_energy', 'lig_lumo_energy', 'lig_VBur_0', 'lig_VBur_1',
'al_lumo_energy', 'al_homo_energy']
param4 = param2 + param4_to_add
print(param4)
perf_param(nicolit_float, param4)
# global feature importance:
from sklearn.inspection import permutation_importance
regr = RandomForestRegressor()
X = nicolit_float.drop(columns=['yield']).values
y = nicolit_float['yield']
regr.fit(X,y)
pi = permutation_importance(regr, X, y,
scoring=None,
n_repeats=10,
max_samples=1.0)
def principal_components(pi, L):
restr_L = []
restr_imp = []
restr_std = []
for i,imp in enumerate(pi['importances_mean']):
if imp > 0.01:
restr_L.append(L[i])
restr_imp.append(imp)
restr_std.append(pi['importances_std'][i])
plt.bar(restr_L, restr_imp, yerr = restr_std)
plt.xticks(rotation=90)
plt.show()
return restr_L, restr_imp, restr_std
# plot all importances
L = list(nicolit_float.columns)
L.remove('yield')
print(type(L), type(pi['importances_mean']))
plt.bar(L, pi['importances_mean'], yerr = pi['importances_std'])
plt.xticks(rotation=90)
plt.show()
# plot strong importances
restr_L, r_imp, r_std = principal_components(pi, L)
# performance when keeping only the components that account for more than 1% of importance
param = restr_L
perf_param(nicolit_float, param)
# mean importances per coupling partner type
coupling_partner_classes = nicolit.coupling_partner_class.unique()
print(coupling_partner_classes)
Pi = []
R = []
for cp in coupling_partner_classes:
nicolit_restr = nicolit[nicolit['coupling_partner_class'] == cp]
nicolit_float = nicolit_restr.select_dtypes(include=[np.float64])
nicolit_float.drop(columns=['isolated_yield', 'polarisabilite', 'Unnamed: 9'], inplace=True)
regr = RandomForestRegressor()
X = nicolit_float.drop(columns=['yield']).values
y = nicolit_float['yield']
regr.fit(X,y)
pi = permutation_importance(regr, X, y,
scoring=None,
n_repeats=20, max_samples=1.0)
Pi.append(pi)
PC = principal_components(pi, L)
print(cp)
perf_param(nicolit_float, PC[0])
R.append(PC)
# get dataframe with importances
df_dict = {}
values = []
L = nicolit_float.columns.to_list()
L.remove('yield')
for j, cp in enumerate(coupling_partner_classes):
df_dict.update({cp:pd.DataFrame.from_dict(data=Pi[j], orient='index', columns=L)})
Pi_2 = Pi
df_dict.keys()
df = df_dict['B'][tt].loc['importances']
df[0][1]
#.loc['importances_mean']
import matplotlib.pyplot as plt
from matplotlib.patches import Circle, RegularPolygon
from matplotlib.path import Path
from matplotlib.projections.polar import PolarAxes
from matplotlib.projections import register_projection
from matplotlib.spines import Spine
from matplotlib.transforms import Affine2D
def radar_factory(num_vars, frame='circle'):
"""
Create a radar chart with `num_vars` axes.
This function creates a RadarAxes projection and registers it.
Parameters
----------
num_vars : int
Number of variables for radar chart.
frame : {'circle', 'polygon'}
Shape of frame surrounding axes.
"""
# calculate evenly-spaced axis angles
theta = np.linspace(0, 2*np.pi, num_vars, endpoint=False)
class RadarTransform(PolarAxes.PolarTransform):
def transform_path_non_affine(self, path):
# Paths with non-unit interpolation steps correspond to gridlines,
# in which case we force interpolation (to defeat PolarTransform's
# autoconversion to circular arcs).
if path._interpolation_steps > 1:
path = path.interpolated(num_vars)
return Path(self.transform(path.vertices), path.codes)
class RadarAxes(PolarAxes):
name = 'radar'
# use 1 line segment to connect specified points
RESOLUTION = 1
PolarTransform = RadarTransform
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
# rotate plot such that the first axis is at the top
self.set_theta_zero_location('N')
def fill(self, *args, closed=True, **kwargs):
"""Override fill so that line is closed by default"""
return super().fill(closed=closed, *args, **kwargs)
def plot(self, *args, **kwargs):
"""Override plot so that line is closed by default"""
lines = super().plot(*args, **kwargs)
for line in lines:
self._close_line(line)
def _close_line(self, line):
x, y = line.get_data()
# FIXME: markers at x[0], y[0] get doubled-up
if x[0] != x[-1]:
x = np.append(x, x[0])
y = np.append(y, y[0])
line.set_data(x, y)
def set_varlabels(self, labels):
self.set_thetagrids(np.degrees(theta), labels)
def _gen_axes_patch(self):
# The Axes patch must be centered at (0.5, 0.5) and of radius 0.5
# in axes coordinates.
if frame == 'circle':
return Circle((0.5, 0.5), 0.5)
elif frame == 'polygon':
return RegularPolygon((0.5, 0.5), num_vars,
radius=.5, edgecolor="k")
else:
raise ValueError("Unknown value for 'frame': %s" % frame)
def _gen_axes_spines(self):
if frame == 'circle':
return super()._gen_axes_spines()
elif frame == 'polygon':
# spine_type must be 'left'/'right'/'top'/'bottom'/'circle'.
spine = Spine(axes=self,
spine_type='circle',
path=Path.unit_regular_polygon(num_vars))
# unit_regular_polygon gives a polygon of radius 1 centered at
# (0, 0) but we want a polygon of radius 0.5 centered at (0.5,
# 0.5) in axes coordinates.
spine.set_transform(Affine2D().scale(.5).translate(.5, .5)
+ self.transAxes)
return {'polar': spine}
else:
raise ValueError("Unknown value for 'frame': %s" % frame)
register_projection(RadarAxes)
return theta
# Radar plot for mean importance in model for each model
#1. display attributes and classes
nicolit_float.columns
tt = ['time', 'temperature']
sol = ['T ebullition', 'T fusion', 'Permittivite','Moment dipolaire',
'donneur de liaison hydrogene', 'accepteur de liaison', 'hydrogene']
eq = [i for i in nicolit_float.columns if 'eq_' in i]
sub = [i for i in nicolit_float.columns if 'sub_' in i]
ax = [i for i in nicolit_float.columns if 'ax_' in i]
lig = [i for i in nicolit_float.columns if 'lig_' in i]
al = [i for i in nicolit_float.columns if 'al_' in i]
comps = ['tt', 'solvent', 'molar ratios', 'substrate', 'coupling partner',
'ligand', 'lewis acid']
#2. plot radar plot for one model
dict_summarized = {}
# example on Boron:
def get_radars(cp, df_dict):
radar_mean = []
for comp in [tt, sol, eq, sub, ax, lig, al]:
mean_imp_comp = np.sum(df_dict[cp][comp].loc['importances_mean'])
radar_mean.append(mean_imp_comp)
all_tests = df_dict[cp][comp].loc['importances']
all_radars =[]
for i in range(20):
radar_single = []
for comp in [tt, sol, eq, sub, ax, lig, al]:
mean_imp_comp = df_dict[cp][comp].loc['importances']
comp_single = np.sum([mean_imp_comp[j][i] for j in range(np.shape(mean_imp_comp)[0])])
radar_single.append(comp_single)
all_radars.append(radar_single)
return radar_mean, all_radars
#3. plot radar plot for all models
Boron_radar, all_radars = get_radars(cp='B', df_dict=df_dict)
coupling_partner_classes
df_dict['Li']
# Libraries
import matplotlib.pyplot as plt
import pandas as pd
from math import pi
nrows = 2
ncols = np.trunc(len(coupling_partner_classes)/2)
# number of variable
categories=comps
N = len(categories)
# Initialise the spider plot
theta = radar_factory(N, frame='polygon')
fig, axs = plt.subplots(figsize=(20, 8), nrows=2, ncols=5,
subplot_kw=dict(projection='radar'))
fig.subplots_adjust(wspace=0.25, hspace=0.20, top=0.85, bottom=0.05)
print(np.shape(axs))
def plot_radar(cp, ax, color, df_dict=df_dict):
radar, all_radars = get_radars(cp=cp, df_dict=df_dict)
ax.plot(theta, radar, color=color, alpha=0.5, label=cp)
for i in range(len(all_radars)):
ax.plot(theta, all_radars[i], color=color, alpha=0.1)
ax.fill(theta, all_radars[i], color=color, alpha=0.02)
ax.set_rgrids([0.1, 0.2, 0.3])
ax.set_rlim([0,0.3])
ax.set_rlabel_position(0.1)
ax.set_varlabels(categories)
# Draw one axis per variable and add its label
# plt.xticks(angles, categories)
#for label,i in zip(ax.get_xticklabels(),range(0,len(angles))):
# if i<len(angles)/2:
# angle_text=angles[i]*(-180/pi)+90
# label.set_horizontalalignment('left')
# else:
# angle_text=angles[i]*(-180/pi)-90
# label.set_horizontalalignment('right')
# label.set_rotation(angle_text)
# Draw ylabels
ax.set_rlabel_position(0)
# ax.set_title(cp, fontsize=10)
# ax.set_theta_offset(pi / N)
ax.set_theta_direction(-1)
ax.legend(loc=[0,0])
colors= ['magenta', 'red', 'orange', 'yellow', 'green', 'cyan', 'blue', 'Darkblue',
'black', 'brown']
for i, cp in enumerate(coupling_partner_classes):
if i < 5:
ncol = i
nrow =0
else:
ncol = i-5
nrow=1
plot_radar(cp=cp, ax=axs[nrow, ncol], color=colors[i], df_dict=df_dict)
fig.subplots_adjust(wspace=0.5, hspace=0.40, top=0.85, bottom=0.05)
plt.show()
int(np.trunc(1/nrows))
Boron_radar
regr = RandomForestRegressor()
X = nicolit_float.drop(columns=['yield']).values
y = nicolit_float['yield']
regr.fit(X,y)
# Build the crossed count diagrams
nicolit= pd.read_csv("data/NiCOlit.csv")
nicolit= preprocess(nicolit)
# DFT Ligands
ligs = pd.read_csv("data/utils/ligand_dft.csv", sep =',', index_col=0)
ligs.drop(columns=descritpors_to_remove_lig, inplace=True)
ligs.index.to_list()
canon_rdkit = []
for smi in ligs.index.to_list():
try:
canon_rdkit.append(Chem.CanonSmiles(smi))
except:
canon_rdkit.append(smi)
ligs["can_rdkit"] = canon_rdkit
ligs.set_index("can_rdkit", inplace=True)
# scale:
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X = ligs.values
X = scaler.fit_transform(X)
ligs[ligs.columns] = X
smi_ligs = nicolit.effective_ligand.unique()
dft_ligs = []
for smi in smi_ligs:
dft_ligs.append(list(ligs.loc[smi]))
Phos = []
DiPhos = []
others = []
NHC = []
colors = []
for smi in smi_ligs:
if 'P' in smi:
if smi.count('P') == 1:
Phos.append(smi)
colors.append('blue')
else:
DiPhos.append(smi)
colors.append('Darkblue')
elif '[C]' in smi:
NHC.append(smi)
colors.append('red')
else:
others.append(smi)
colors.append('black')
from sklearn.manifold import TSNE
X = np.array(dft_ligs)
X_embedded = TSNE(n_components=2, learning_rate='auto',
perplexity =6,
init='random').fit_transform(X)
plt.scatter(X_embedded[:,0], X_embedded[:,1], c=colors)
plt.show()
# plot counts of combinations between cc and ligands:
ligand_cat = ['Phos' ,'DiPhos', 'NHC', 'others']
cc = nicolit.coupling_partner_class.unique()
lig_cat = []
for smi in nicolit.effective_ligand:
if 'P' in smi:
if smi.count('P') == 1:
lig_cat.append('Phos')
else:
lig_cat.append('DiPhos')
elif '[C]' in smi:
lig_cat.append('NHC')
else:
lig_cat.append('others')
nicolit['ligand_cat'] = lig_cat
m = np.zeros((len(ligand_cat), len(cc)))
mask = np.ones_like(m)
for i in range(np.shape(m)[0]):
nicolit_lig = nicolit[nicolit['ligand_cat'] == ligand_cat[i]]
for j in range(np.shape(m)[1]):
num_cc = len(nicolit[nicolit['coupling_partner_class']==cc[j]])
m[i][j] = len(nicolit_lig[nicolit_lig['coupling_partner_class']==cc[j]])/num_cc
if m[i][j] != 0:
mask[i][j] = 0
fig = plt.figure(figsize=(10,4))
ax = plt.subplot(111)
sns.heatmap(m, ax=ax, annot=True, cmap="YlGnBu", mask=mask)
ax.set_xticklabels(cc)
ax.set_yticklabels(['PR$_3$', 'P^P', 'NHC', 'others'])
ax.set_ylabel("Ligand Type")
ax.set_xlabel("Coupling Partner Type")
ax.tick_params(axis='y', rotation=0)
plt.title("combinations between coupling partners and ligand types")
plt.show()
# substrate classification:
mols = [Chem.MolFromSmiles(smi) for smi in nicolit.substrate]
sub_class = []
for mol in mols:
if mol.HasSubstructMatch(Chem.MolFromSmiles('c1ncncn1')) or mol.HasSubstructMatch(Chem.MolFromSmiles('C1=NC=NC=N1')):
sub_class.append('Otriazine')
elif mol.HasSubstructMatch(Chem.MolFromSmiles('OC(=O)C(C)(C)C')):
sub_class.append('OPiv')
elif mol.HasSubstructMatch(Chem.MolFromSmiles('OC(=O)N')):
sub_class.append('OC(=O)N')
elif mol.HasSubstructMatch(Chem.MolFromSmiles('OC(=O)O')):
sub_class.append('OC(=O)O')
elif mol.HasSubstructMatch(Chem.MolFromSmiles('O[Si](C)(C)C')) or mol.HasSubstructMatch(Chem.MolFromSmarts('o[Si](C)(C)C')):
sub_class.append('OSi(C)(C)C')
else:
mol = Chem.AddHs(mol)
if mol.HasSubstructMatch(Chem.MolFromSmiles('OC(=O)C([H])([H])[H]')):
sub_class.append('OAc')
elif mol.HasSubstructMatch(Chem.MolFromSmiles('c1ccccc1Oc1ccccc1')):
sub_class.append('OPh')
elif mol.HasSubstructMatch(Chem.MolFromSmiles('OCOC')):
sub_class.append('OCOC')
elif mol.HasSubstructMatch(Chem.MolFromSmiles('OC([H])([H])[H]')):
sub_class.append('OCH3')
else:
sub_class.append('others')
sub_classes = np.unique(sub_class)
# Check categories
mol_others = []
#print(len(sub_class))
for i, sub in enumerate(sub_class):
if sub == 'Otriazine':
mol_others.append(mols[i])
#print(len(mol_others))
#Draw.MolsToGridImage(mol_others, maxMols=200)
nicolit['substrate_cat'] = sub_class
m = np.zeros((len(sub_classes), len(cc)))
mask = np.ones((len(sub_classes), len(cc)))
for i in range(np.shape(m)[0]):
nicolit_sub = nicolit[nicolit['substrate_cat'] == sub_classes[i]]
for j in range(np.shape(m)[1]):
num_cc = len(nicolit[nicolit['coupling_partner_class']==cc[j]])
m[i][j] = len(nicolit_sub[nicolit_sub['coupling_partner_class']==cc[j]])/num_cc
if m[i][j] != 0:
mask[i][j] = 0
fig = plt.figure(figsize=(10,9))
ax = plt.subplot(111)
sns.heatmap(m, ax=ax, annot=True, cmap="YlGnBu", mask=mask)
ax.set_xticklabels(cc)
ax.set_xlabel("coupling partner type")
ax.set_yticklabels(sub_classes)
ax.set_ylabel("substrate type")
ax.tick_params(axis='y', rotation=0)
plt.title("combinations between coupling partners and type of substrates")
plt.show()
m = np.zeros((len(ligand_cat), len(sub_classes)))
mask = np.ones_like(m)
for i in range(np.shape(m)[0]):
nicolit_sub = nicolit[nicolit['ligand_cat'] == ligand_cat[i]]
for j in range(np.shape(m)[1]):
num_cc = len(nicolit[nicolit['substrate_cat']==sub_classes[j]])
m[i][j] = len(nicolit_sub[nicolit_sub['substrate_cat']==sub_classes[j]])/num_cc
if m[i][j] != 0:
mask[i][j] = 0
fig = plt.figure(figsize=(9,4))
ax = plt.subplot(111)
sns.heatmap(m, ax=ax, annot=True, cmap="YlGnBu", mask=mask)
ax.set_yticklabels(ligand_cat)
ax.set_ylabel("Ligands")
ax.set_yticklabels(['PR$_3$', 'P^P', 'NHC', 'others'])
ax.set_xticklabels(sub_classes)
ax.set_xlabel("substrate type")
ax.tick_params(axis='y', rotation=0)
ax.tick_params(axis='x', rotation=30)
plt.title("combinations between ligands and type of substrates")
plt.show()
#precursors
precs = nicolit.catalyst_precursor.unique()
counts = []
precs= list(precs)
precs[1] = 'No_precursor'
for prec in precs:
counts.append(nicolit.catalyst_precursor.to_list().count(prec))
counts = np.array(counts)
main_counts = []
mains_precs = []
for i, c in enumerate(counts):
if c > 29:
main_counts.append(c/np.sum(counts))
mains_precs.append(precs[i])
main_counts.append((np.sum(counts)-np.sum(main_counts)*np.sum(counts))/np.sum(counts))
mains_precs.append('others')
plt.bar(x=mains_precs, height=main_counts)
plt.xticks(rotation=90)
plt.show()
coupling_partner_classes
for i in nicolit_float.columns:
print(i)
Pi
```
<img src="https://raw.githubusercontent.com/hse-econ-data-science/eds_spring_2020/master/sem05_parsing/image/take_all.png" height="300" width="1200">
# <center> Data stories
<br> <br> Collecting data in python </center>
## Agenda

* The basics of all basics
* What to do when the server gets angry
* What is an API
* What is Selenium
* Tricks of the trade
# 1. The basics of all basics

To master the basics of all basics, read [this article on Habr.](https://habr.com/ru/company/ods/blog/346632/) One of the seminar instructors is among its co-authors, which kind of hints that the content is decent.
## Why collect data automatically?
<br>
<br>
<center>
<img src="https://raw.githubusercontent.com/hse-econ-data-science/eds_spring_2020/master/sem05_parsing/image/aaaaaa.png" width="500">
## What is HTML?

**HTML (HyperText Markup Language)** is a markup language, just like Markdown or LaTeX. It is the standard language for building websites. The commands of this language are called **tags**. If you open absolutely any site, right-click and then hit `View page source`, the HTML skeleton of that site will appear before you.

An HTML page is nothing but a set of nested tags. You may notice, for example, the following tags:

- `<title>` — the page title
- `<h1>…<h6>` — headings of different levels
- `<p>` — a paragraph
- `<div>` — marks out a fragment of the document in order to change how its contents look
- `<table>` — renders a table
- `<tr>` — a delimiter for table rows
- `<td>` — a delimiter for table columns
- `<b>` — sets a bold font

Usually the command `<...>` opens a tag and `</...>` closes it. Everything between these two commands obeys the rule that the tag dictates. For example, everything between `<p>` and `</p>` is a separate paragraph.

Tags form a kind of tree rooted at the `<html>` tag and split the page into logical chunks. Every tag has its descendants (children) — the tags nested inside it — and its parents.

For example, the HTML tree of a page might look like this:
````
<html>
<head> Page title </head>
<body>
<div>
First chunk of text with its own properties
</div>
<div>
Second chunk of text
<b>
Third, bold chunk of text
</b>
</div>
Fourth chunk of text
</body>
</html>
````
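The tree traversal described above can be made concrete with Python's standard-library `html.parser` — a minimal sketch that merely records each opening tag with its nesting depth (bs4, used later in this notebook, does the same job far more conveniently):

```python
from html.parser import HTMLParser

# a tiny parser that records every opening tag together with its nesting depth
class TagCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.depth = 0
        self.tags = []

    def handle_starttag(self, tag, attrs):
        self.tags.append((self.depth, tag))
        self.depth += 1

    def handle_endtag(self, tag):
        self.depth -= 1

page = "<html><head><title>Hi</title></head><body><div><b>bold</b></div></body></html>"
collector = TagCollector()
collector.feed(page)
print(collector.tags)
# -> [(0, 'html'), (1, 'head'), (2, 'title'), (1, 'body'), (2, 'div'), (3, 'b')]
```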
You can work with this html as plain text, or as a tree. Walking this tree is exactly what parsing a web page means. We will simply locate the nodes we need among all this variety and pull the information out of them!
<center>
<img src="https://raw.githubusercontent.com/hse-econ-data-science/eds_spring_2020/master/sem05_parsing/image/tree.png" width="450">
## Scraping book prices

* We want to collect [book prices](http://books.toscrape.com)
* Doing it by hand takes forever, so let's write some python code

The `requests` module lets us fetch web pages. Let's import it. If the module is not installed, you will have to make an effort and install it: `pip install requests`.
```
import requests
url = 'https://books.toscrape.com/catalogue/page-1.html'
response = requests.get(url)
response
```
The blessed 200 response — the connection is established and the data received, all is wonderful! If you try to visit a non-existent page, you can get, for instance, the famous 404 error.
```
requests.get('http://books.toscrape.com/big_scholarship')
```
Inside `response` lies the html markup of the page we are parsing.
```
response.content[:1000]
```
Looks indigestible — how about cooking something prettier out of it? A beautiful soup, for instance.
<img align="center" src="https://raw.githubusercontent.com/hse-econ-data-science/eds_spring_2020/master/sem05_parsing/image/alisa.jpg" height="200" width="200">
The **[`bs4`](https://www.crummy.com/software/BeautifulSoup/)** package, a.k.a. **BeautifulSoup**, was named after the poem about beautiful soup from Alice in Wonderland. It is a perfectly magical library that turns the raw, unprocessed HTML (or XML) code of a page into a structured collection of data, in which it is very convenient to search for the tags, classes, attributes, texts and other web-page elements that you need.

> The package named `BeautifulSoup` is most likely not what you want. That is the third version (*Beautiful Soup 3*), while we will use the fourth. So the package we need is `beautifulsoup4`. To make things extra fun, at import time you have to use yet another name — `bs4` — and import a function called `BeautifulSoup`. In short, it is easy to get confused at first, but you only have to overcome these difficulties once, and then it gets easier.
```
from bs4 import BeautifulSoup
# parse the page into a tree
tree = BeautifulSoup(response.content, 'html.parser')
```
The variable `tree` now holds a tree of tags, which we can wander around completely at ease.
```
tree.html.head.title
```
From wherever we have wandered to, we can pull out the text via the `text` attribute.
```
tree.html.head.title.text
```
The text can then be handled with classic python string methods. For example, we can get rid of the extra whitespace.
```
tree.html.head.title.text.strip()
```
Moreover, knowing an element's address, we can find it right away. For example, this is how we can find in the page code exactly where the main information about each book lives. You can see that it sits inside an `article` tag with the class `product_pod` (roughly speaking, in html a class defines the styling of the corresponding chunk of the page).

Let's pull the book info out of this tag.
```
books = tree.find_all('article', {'class' : 'product_pod'})
books[0]
```
The object returned by the search also has the bs4 structure, so we can keep searching for the objects we need inside it.
```
type(books[0])
books[0].find('p', {'class': 'price_color'}).text
```
Note that there are at least two search methods: `find` and `find_all`. If several elements on the page match the given address, `find` returns only the very first one. To get all elements with that address, use `find_all`; it returns a list.

Besides their contents, tags often carry attributes. For example, the book title element has the attributes `title` and `href`:
```
books[0].h3
```
These can be extracted too.
```
books[0].h3.a.get('href')
books[0].h3.a.get('title')
```
And you can also search for the parts of the page you care about by these attributes.
```
tree.find_all('a', {'title': 'A Light in the Attic'})
```
And that, properly speaking, is all.

Note that on this site the books live on different pages. If you flip through them, you will notice that the `page` attribute changes in the link. So if we want to collect all the books, we have to generate a bunch of links with different `page` values inside a loop. When you scrape more complex sites, the link often carries a huge number of attributes that control what is returned.

Let's wrap all the book-scraping code into a function. It will take as input the number of the page to download.
```
def get_page(p):
    # build the URL
    url = 'http://books.toscrape.com/catalogue/page-{}.html'.format(p)
    # fetch it
    response = requests.get(url)
    # parse the response into a tree
    tree = BeautifulSoup(response.content, 'html.parser')
    # find all the interesting bits in it
    books = tree.find_all('article', {'class' : 'product_pod'})

    infa = []
    for book in books:
        infa.append({'price': book.find('p', {'class': 'price_color'}).text,
                     'href': book.h3.a.get('href'),
                     'title': book.h3.a.get('title')})
    return infa
```
All that remains is to loop over every page from page-1 to page-50, and the data is in our pocket.
```
infa = []
for p in range(1, 51):
    infa.extend(get_page(p))
import pandas as pd
df = pd.DataFrame(infa)
print(df.shape)
df.head()
```
By the way, if you follow the link into a book itself, there is a heap of additional information about it there. You can walk through all the links and download that extra information for yourself.
# 2. What to do when the server gets angry

* You decided to collect yourself a bit of data
* The server is not thrilled about the carpet bombing with automated requests
* Error 403, 404, 504, $\ldots$
* Captchas, demands to register
* Caring messages that suspicious traffic has been detected coming from your device
<center>
<img src="https://raw.githubusercontent.com/hse-econ-data-science/eds_spring_2020/master/sem05_parsing/image/doge.jpg" width="450">
## a) be patient

* Requests that come too often irritate the server
* Put time delays between them
```
import time
time.sleep(3) # and let the whole world wait for 3 seconds
```
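A fixed pause can be upgraded to exponential backoff: wait a little, and wait twice as long after each failure. A minimal sketch — the function names here are illustrative, not from any library, and a fake "server" stands in for a real request:

```python
import time

def fetch_with_retries(fetch, max_tries=4, base_delay=1.0):
    """Call fetch() until it succeeds, doubling the pause after each failure."""
    for attempt in range(max_tries):
        try:
            return fetch()
        except Exception:
            if attempt == max_tries - 1:
                raise  # out of attempts — give up
            time.sleep(base_delay * 2 ** attempt)  # 1 s, 2 s, 4 s, ...

# demo with a fake "server" that fails twice before answering
calls = {'n': 0}
def flaky():
    calls['n'] += 1
    if calls['n'] < 3:
        raise ConnectionError('the server is angry')
    return 200

print(fetch_with_retries(flaky, base_delay=0.01))  # -> 200, after two failed tries
```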
## b) look like a human

A normal person's request through a browser looks like this:
<center>
<img src="https://raw.githubusercontent.com/hse-econ-data-science/eds_spring_2020/master/sem05_parsing/image/browser_get.png" width="600">
A heap of information reaches the server along with it! A request from python looks like this:
<center>
<img src="https://raw.githubusercontent.com/hse-econ-data-science/eds_spring_2020/master/sem05_parsing/image/python_get.jpg" width="250">
Notice the difference? Obviously our modest request cannot compete with the abundance of meta-information that travels with a request from an ordinary browser. Luckily, nothing stops us from pretending to be human and throwing dust in the server's eyes by generating a fake user agent. There are very, very many libraries that handle this task; personally I like [fake-useragent](https://pypi.org/project/fake-useragent/) best. On each call it generates a random combination of operating system, specifications and browser version, which you can pass into a request:
```
!pip install fake_useragent
from fake_useragent import UserAgent
UserAgent().firefox
```
For example, https://knowyourmeme.com/ does not want to let python in and returns error 403. A server issues it when it is reachable and able to process requests, but refuses to do so for reasons of its own.
```
url = 'https://knowyourmeme.com/'
response = requests.get(url)
response
```
But if we generate a User-Agent, the server raises no questions.
```
response = requests.get(url, headers={'User-Agent': UserAgent().firefox})
response
```
__Another example:__ if you try to scrape CIAN, it will start serving you captchas. One workaround is to rotate your ip through tor. However, CIAN will show a captcha for practically every request coming from tor. If you add a `User-Agent` to the request, the captcha pops up far less often.
## c) go through intermediaries
<center>
<img src="https://raw.githubusercontent.com/hse-econ-data-science/eds_spring_2020/master/sem05_parsing/image/proxy.jpeg" width="400">
Let's look at our own ip address without a proxy.
```
r = requests.get('https://httpbin.org/ip')
print(r.json())
proxies = {
    'http': 'proxy address here',   # placeholder — substitute a real proxy
    'https': 'proxy address here'
}
r = requests.get('https://httpbin.org/ip', proxies=proxies)
print(r.json())
```
The request took a bit longer, and the ip address changed. Most of the proxies you will find work poorly. Sometimes a request takes very long, and it pays to drop it and try another proxy. You can set this up with the `timeout` option. For example, like this: if the server does not answer within a second, the code will fail.
```
import requests
requests.get('http://www.google.com', timeout=1)
```
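`timeout` and `try-except` combine naturally into a proxy-rotation loop: try each proxy, drop the ones that hang, keep the first one that answers. A runnable sketch with a fake fetcher standing in for `requests.get` (the names are illustrative):

```python
def get_via_proxies(fetch, proxies):
    """Try each proxy in turn; return the first successful answer."""
    for proxy in proxies:
        try:
            return proxy, fetch(proxy)
        except TimeoutError:
            continue  # this proxy is dead or too slow — try the next one
    raise RuntimeError('no working proxy found')

# demo: the first two "proxies" time out, the third one answers
def fake_fetch(proxy):
    if proxy.startswith('bad'):
        raise TimeoutError
    return 200

print(get_via_proxies(fake_fetch, ['bad1:8080', 'bad2:8080', 'good:3128']))
# -> ('good:3128', 200)
```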
requests has quite a few other interesting gadgets. You can look at them in [the advanced guide from the documentation.](https://requests.readthedocs.io/en/master/user/advanced/)
__Where you can try to get hold of proxy lists:__

* https://qna.habr.com/q/591069
* https://getfreeproxylists.blogspot.com/
* Most free proxies usually do not work. Write a parser that collects proxy lists and tries to apply them.
## d) go deeper

<center>
<img src="https://raw.githubusercontent.com/hse-econ-data-science/eds_spring_2020/master/sem05_parsing/image/tor.jpg" width="600">

You can try to get around angry servers via tor. There are quite a few ways, but we will not talk about them here. Better read the details in the article on Habr. The link to it is at the end of this notebook. It was also at the very beginning. And in the middle it [probably exists too.](https://habr.com/ru/company/ods/blog/346632/)
## Combine everything?

1. Start small
2. If you keep getting banned, pile on new gadgets
3. Every new gadget costs you speed
4. [Various gadgets for requests](http://docs.python-requests.org/en/v0.10.6/user/advanced/)
# 3. API

__API (Application Programming Interface)__ is ready-made code that you can plug into your own code! Many services, including Google and VKontakte, provide their own off-the-shelf solutions for your development.
Examples:

* [The VKontakte API](https://vk.com/dev/methods)
* [The twitter API](https://developer.twitter.com/en/docs.html) - nope
* [The youtube API](https://developers.google.com/youtube/v3/)
* [The google maps API](https://developers.google.com/maps/documentation/)
* [Aviasales](https://www.aviasales.ru/API)
* [Yandex Translate](https://yandex.ru/dev/translate/)

There is one almost everywhere! In this seminar we will look at two examples: the vk API and google maps.
## 3.1 The vk API

Why you might want access to the vk API hardly needs explaining, I think. A social network is tons of assorted useful information that can be put to work for your own purposes. [The documentation](https://vk.com/dev/manuals) describes in great detail how to work with the vk API and what it leads to.

But first you need to get access to the API. For that you will have to go through a couple of bureaucratic procedures (oh my, those two sentences were phrased so bureaucratically that I felt like standing in a queue).
The first such procedure is creating your own application. To do so, go to [this link](http://vk.com/editapp?act=create) and walk through the required steps:
<img align="center" src="https://raw.githubusercontent.com/hse-econ-data-science/eds_spring_2020/master/sem05_parsing/image/app_creation_1.png" width="500">
After confirming your identity by phone number, you land on the page of the freshly created application
<img align="center" src="https://raw.githubusercontent.com/hse-econ-data-science/eds_spring_2020/master/sem05_parsing/image/app_creation_2.png" width="500">
On the left there is a tab with settings; opening it, we see all the parameters we need for working with the application:
<img align="center" src="https://raw.githubusercontent.com/hse-econ-data-science/eds_spring_2020/master/sem05_parsing/image/app_creation_3.png" width="500">
From here you can grab the service access key as your token. For a share of the API methods this is quite enough (usually the header of such a method carries a corresponding note). Sometimes extra permissions are needed. To obtain them, you have to perform a couple more odd manipulations:
Go to a link of the following form (in place of the asterisks there should be the ID of the application you created):
> https://oauth.vk.com/authorize?client_id=**********&scope=8198&redirect_uri=https://oauth.vk.com/blank.html&display=page&v=5.16&response_type=token
As a result, this request produces a link of the following form:
> https://oauth.vk.com/blank.html#access_token=25b636116ef40e0718fe4d9f382544fc28&expires_in=86400&user_id=*******
The first group of characters is the `access token`, i.e. the access marker. The second number (`expires_in=`) is the token's lifetime in seconds (one day). Once it expires, you will have to obtain a new access token. The last number (`user_id=`) is your VKontakte ID. Later on we will need the access token. For convenience, save it in a separate file or export it into the global scope. For the safety of your data, do not flash your tokens anywhere, let alone publish them openly. __You could accidentally lose your account that way.__ Guard your token from a young age.
Note the link we used to request the token. Inside it sits the odd parameter `scope=8198`. This is us asking for access to specific sections. You can get acquainted with the one-to-one correspondence between numbers and permissions [in the documentation.](https://vk.com/dev/permissions) For example, if we want access to friends, photos and the wall, we put into scope the number 2+4+8192=8198.
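The scope arithmetic above is just a bit mask, so it can be checked directly in Python (the permission values come from the vk documentation linked above: friends = 2, photos = 4, wall = 8192):

```python
# vk permission bits, per the documentation linked above
FRIENDS, PHOTOS, WALL = 2, 4, 8192

scope = FRIENDS + PHOTOS + WALL
print(scope)  # -> 8198

# since each permission is a distinct power of two, addition and bitwise OR agree,
# and individual rights can be read back with bitwise AND
assert scope == FRIENDS | PHOTOS | WALL
assert scope & WALL     # wall access is included
assert not scope & 16   # this bit is not set
```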
```
# my page id
myid = '153433657'  # put your own page id here
# version of the API in use
version = '5.103'
# load the token from a file on the computer
with open('secret_token.txt') as f:
    token = f.read()
requests.get(f'https://api.vk.com/method/users.get?user_id=153433657&v={version}&access_token={token}').json()
```
To download something from vk, you build a link and follow it with the `requests` package. The link has to include a method (what we ask vk for) and parameters (how much of it, and in what form). We will simply swap these two pieces in and out and download different things.
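A slightly safer way to assemble such links is the standard `urlencode`, which takes care of the `&` and `=` characters and any escaping (the cells below just concatenate strings, which also works for simple cases):

```python
from urllib.parse import urlencode

# illustrative values; a real call needs a valid access_token
params = {'user_ids': '153433657', 'v': '5.103', 'access_token': 'TOKEN'}
url = 'https://api.vk.com/method/users.get?' + urlencode(params)
print(url)
# -> https://api.vk.com/method/users.get?user_ids=153433657&v=5.103&access_token=TOKEN
```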
```
method = 'users.get'
parameters = 'user_ids=153433657'
url = 'https://api.vk.com/method/' + method + '?' + parameters + '&v=' + version + '&access_token=' + token
response = requests.get(url)
response.json()
```
In answer to our request vk throws back a JSON with the information. JSON is very similar to python dictionaries. The meaning of the square and curly brackets is the same. There are differences, though: for example, in Python single and double quotes are interchangeable, while JSON allows only double quotes.

We can see that the JSON we received is a dictionary whose values are strings or numbers, as well as lists or dictionaries whose values may in turn again be strings, numbers, lists, dictionaries, and so on. What you end up with is a rather involved data structure, from which you can pull out everything you are interested in.
```
response.json()['response'][0]['first_name']
```
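Since JSON allows only double quotes, the difference is easy to demonstrate with the standard `json` module on a made-up response of the same shape as vk's answer:

```python
import json

# a made-up response in the same shape vk returns
raw = '{"response": [{"id": 1, "first_name": "Ivan", "last_name": "Ivanov"}]}'

data = json.loads(raw)  # JSON text -> nested Python dicts and lists
print(data['response'][0]['first_name'])  # -> Ivan

# single quotes are fine in Python literals but illegal in JSON
try:
    json.loads("{'response': []}")
except json.JSONDecodeError:
    print('single quotes are not valid JSON')
```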
[The documentation](https://vk.com/dev/manuals) describes in great detail which methods exist and which parameters they accept. Let's wrap the code above into a function and try to download something.
```
def vk_download(method, parameters):
    url = 'https://api.vk.com/method/' + method + '?' + parameters + '&access_token=' + token + '&v=' + version
    response = requests.get(url)
    infa = response.json()
    return infa
```
For example, all the likes from the [higher school of memes public.](https://vk.com/hsemem)

How to find a page's id: [see here](https://vk.com/faq18062)
```
group_id = '-139105204' # taken from the link to the group
wall = vk_download('wall.get', 'owner_id={}&count=100'.format(group_id))
wall = wall['response']
# wall['items'][0]
wall['items'][0]['likes']['count']
likes = [item['likes']['count'] for item in wall['items']]
likes[:10]
```
A single request downloaded a mere $100$ posts with their likes. The public holds as many as
```
wall['count']
```
[The documentation](https://vk.com/dev/manuals) says there is an `offset` parameter that lets you specify exactly which posts to download from the group. For instance, if we say `offset = 100`, the second hundred is downloaded. What remains is trivial: write a loop.
```
import time
likes = []  # likes will be collected here
for offset in range(0, 4800, 100):
    time.sleep(0.4)  # vk agrees to work 3 times per second,
                     # so python sleeps 0.4 seconds between requests
    wall = vk_download('wall.get', 'owner_id={}&count=100&offset={}'.format(group_id, offset))
    likes.extend([item['likes']['count'] for item in wall['response']['items']])
```
The likes are in our hands. We can even look at their distribution and try to do something with them.
```
len(likes)
import matplotlib.pyplot as plt
plt.hist(likes);
```
In principle, anything can be downloaded in a similar way. Note that vk has a special [`execute`](https://vk.com/dev/execute) method, which sometimes helps speed up the download by a factor of $25$. [This very old tutorial](https://github.com/DmitrySerg/OpenData/blob/master/RussianElections2018/Part_1_Parsing_VK.ipynb) even has a usage example.
# 4. Selenium

This is a tool for robot-driven control of a browser. For it to work correctly you need to download a driver: [for chrome](https://sites.google.com/a/chromium.org/chromedriver/downloads) or [for firefox.](https://github.com/mozilla/geckodriver/releases)
```
from selenium import webdriver
driver = webdriver.Firefox()
```
After running the block above, another browser window opens for you. We can go to google's start page in it.
```
ref = 'http://google.com'
driver.get(ref)
```
Find the query input field in the html code and click on it.
```
stroka = driver.find_element_by_name("q")
stroka.click()
```
Type something into it.
```
stroka.send_keys('ะะบะพะฝัะฐะบัะต')
```
Find the search button and press it.
```
# find the search button and press it
button = driver.find_element_by_name('btnK')
button.click()
```
Now there are search results on the screen. Let's take them into bs4 and find all the sites.
```
bs = BeautifulSoup(driver.page_source)
dirty_hrefs = bs.find_all('h3',attrs={'class':'r'})
clean_hrefs = [href.a['href'] for href in dirty_hrefs]
clean_hrefs
```
Close the browser.
```
driver.close()
```
Generally, selenium was invented for testers, not for scraping. For parsers it only makes sense as a last resort: it is very slow. If you really-really-really-really cannot fool the server through requests, or you run into some specific anti-bot protection, selenium may help. Also, with selenium it is __important__ not to forget time delays, so that the page has time to load. Alternatively, you can write proper code that waits for the page to load and only then clicks the buttons and so on.

There is a [Russian translation of the documentation on Habr.](https://habr.com/ru/post/248559/)
In my practice it has been useful a couple of times:

* I had to download a lot of info about search queries from [Google Trends,](https://trends.google.ru/trends/?geo=RU) and the API constrained me heavily.
* I had to find out, via a search engine, the tax IDs of various organisations from their names (it only worked for large companies)
# 5. Tricks:

### Trick 1: don't be shy about using `try-except`

This construct lets python do something else when an error occurs, or simply ignore it. Say we want to take the logarithm of every number in a list:
```
from math import log
a = [1, 2, 3, -1, -5, 10, 3]
for item in a:
    print(log(item))
```
It does not go through, because the logarithm of a negative number cannot be taken. So that the code does not crash when the error occurs, we can change it a little:
```
from math import log
a = [1, 2, 3, -1, -5, 10, 3]
for item in a:
    try:
        print(log(item))      # try to take the logarithm
    except:
        print('could not')    # if it failed, admit it and keep working
```
__How is this used in scraping?__ The internet is made by people, and many people have very crooked hands. Suppose we left a parser running overnight to download prices; it worked for an hour and then crashed, because on some single page the tags were marked up crookedly, or some rare field popped up, or artifacts of an old version of the site appeared that our parser did not account for. It is far better for the code to ignore that error and keep on working.
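The pattern described in words above — skip the broken page, remember why it broke, keep going — can be sketched like this (`parse_price` and the data are made up for illustration):

```python
# hypothetical per-page parser that may raise on malformed input
def parse_price(page):
    return float(page['price'])

pages = [{'price': '10.5'}, {'price': 'N/A'}, {'price': '7'}, {}]

prices, failures = [], []
for i, page in enumerate(pages):
    try:
        prices.append(parse_price(page))
    except Exception as err:             # crooked tags, missing fields, old markup...
        failures.append((i, repr(err)))  # log the failure instead of crashing the run

print(prices)         # -> [10.5, 7.0]
print(len(failures))  # -> 2
```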
### Trick 2: pd.read_html

If a table is hiding among the `<tr>` and `<td>` tags of a page you scraped, most of the time you can take it for yourself without writing a loop over all the columns and rows. `pd.read_html` helps with this. For example, this is how you can grab [a table from the Central Bank's site](https://cbr.ru/currency_base/daily/)
```
import pandas as pd

# header=0: use the first table row for the column names
df = pd.read_html('https://cbr.ru/currency_base/daily/', header=0)[0]
df.head()
```
The command tries to collect every table on the web page into a list. If you prefer, you can first locate the table you need with bs4 and then parse only it:
```
import requests
import pandas as pd
from bs4 import BeautifulSoup

resp = requests.get('https://cbr.ru/currency_base/daily/')
tree = BeautifulSoup(resp.content, 'html.parser')

# find the table
table = tree.find_all('table', {'class': 'data'})[0]

# parse it
df = pd.read_html(str(table), header=0)[0]
df.head()
```
### Trick 3: use the tqdm package
> The code has already been running for an hour. I have no idea when it will finish. It would be great to know how much longer to wait...
If that thought has ever crossed your mind, the `tqdm` package is your best friend. Install it with ```pip install tqdm```
```
import time
from tqdm import tqdm_notebook

a = list(range(30))
# sleep for one second 30 times
for i in tqdm_notebook(a):
    time.sleep(1)
```
We wrapped the iterable the loop runs over in `tqdm_notebook`. This gives us a nice green progress bar showing how far along the code is. Wrap your biggest and longest loops in `tqdm_notebook` and always know how much is left until the end.
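Outside of Jupyter (in a plain script or terminal) the notebook widget won't render; the regular `tqdm` wrapper works the same way there. A minimal sketch:

```python
from tqdm import tqdm

total = 0
# tqdm prints a text progress bar to stderr while the loop runs
for i in tqdm(range(100)):
    total += i

print(total)  # 4950
```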
### Trick 4: parallelization
If the server is not too keen on banning you, you can parallelize your requests to it. The simplest way is the `joblib` library.
```
from joblib import Parallel, delayed
from tqdm import tqdm_notebook

def simple_function(x):
    return x**2

nj = -1  # parallelize across all cores
result = Parallel(n_jobs=nj)(
    delayed(simple_function)(item)        # which function to apply
    for item in tqdm_notebook(range(10))  # which objects to apply it to
)
# tqdm_notebook in the last line draws the green progress bar
```
Honestly, this is not the most efficient way to parallelize in Python: it eats a lot of memory and runs slower than the [standard multiprocessing module.](https://docs.python.org/3/library/multiprocessing.html) But it's two lines of code. Two lines!
### Trick 5: selenium without a browser
Selenium can be configured so that the browser window never physically opens (so-called headless mode).
```
from selenium import webdriver
from selenium.webdriver.firefox.options import Options
options = Options()
options.headless = True
driver = webdriver.Firefox(options=options)
ref = 'http://google.com'
driver.get(ref)
driver.close()
```
### More tricks:
* __Save what you scrape as you go!__ Put the code that saves results to a file right inside the download loop!
* When the code crashes in the middle of the download list, you don't have to restart from the very beginning. Just save the chunk that has already been downloaded and resume the code from the point of failure.
* Hard-wiring the loop that walks the links inside a function is not a great idea. Suppose you need to visit $100$ links. The function is supposed to return all the objects it downloaded. It crashes on object $50$. Naturally, the function does not return what it has already downloaded: everything you fetched is lost and you have to start over. Why? Because a function has its own namespace. If you had written it as a plain loop instead, you could keep the first $50$ objects that are already sitting in the list and then continue the download.
* You can navigate an html page with `xpath`. It is designed for quickly locating elements inside an html page. [Read more here.](https://devhints.io/xpath)
* Don't be lazy about reading the documentation. You can learn a lot of useful things from it.
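The first three points boil down to one pattern: write each result to disk as soon as you have it, and skip anything already saved when you restart. The file name and `fetch_one()` below are made up for illustration; substitute your real download step.

```python
import json
import os

def fetch_one(url):
    # stand-in for the real download/parse step
    return {"url": url, "price": len(url)}

urls = [f"https://example.com/item/{i}" for i in range(5)]
out_path = "prices.jsonl"

# load what previous runs already saved, so a restart skips them
done = set()
if os.path.exists(out_path):
    with open(out_path) as f:
        done = {json.loads(line)["url"] for line in f}

with open(out_path, "a") as f:
    for url in urls:
        if url in done:
            continue
        try:
            record = fetch_one(url)
        except Exception:
            continue  # one bad page must not kill the run
        f.write(json.dumps(record) + "\n")  # save immediately, inside the loop
        f.flush()
```

If the run crashes halfway, simply rerun the script: everything already in `prices.jsonl` is skipped and downloading resumes from the point of failure.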
# 6. Further reading
* [Scraping memes in python](https://habr.com/ru/company/ods/blog/346632/) — a detailed article on Habr that teaches you ... scraping (SURPRISE)
* [Ilya Schurov's notebooks](https://github.com/ischurov/pythonhse) on python for data analysis. [Lecture 9](https://nbviewer.jupyter.org/github/ischurov/pythonhse/blob/master/Lecture%209.ipynb) and [Lecture 10](https://nbviewer.jupyter.org/github/ischurov/pythonhse/blob/master/Lecture%2010.ipynb) cover scrapers.
* [Advanced usage of requests](https://2.python-requests.org/en/master/user/advanced/)
* [A Russian translation of the selenium documentation on Habr](https://habr.com/ru/post/248559/)
* [A slightly outdated guide to scraping VK](https://nbviewer.jupyter.org/github/FUlyankin/ekanam_grand_research/blob/master/0.%20vk_parser_tutorial.ipynb)
# Gradient Checking
Welcome to the final assignment for this week! In this assignment you'll be implementing gradient checking.
By the end of this notebook, you'll be able to:
- Implement gradient checking to verify the accuracy of your backprop implementation
## Table of Contents
- [1 - Packages](#1)
- [2 - Problem Statement](#2)
- [3 - How does Gradient Checking work?](#3)
- [4 - 1-Dimensional Gradient Checking](#4)
- [Exercise 1 - forward_propagation](#ex-1)
- [Exercise 2 - backward_propagation](#ex-2)
- [Exercise 3 - gradient_check](#ex-3)
- [5 - N-Dimensional Gradient Checking](#5)
- [Exercise 4 - gradient_check_n](#ex-4)
<a name='1'></a>
## 1 - Packages
```
import numpy as np
from testCases import *
from public_tests import *
from gc_utils import sigmoid, relu, dictionary_to_vector, vector_to_dictionary, gradients_to_vector
%load_ext autoreload
%autoreload 2
```
<a name='2'></a>
## 2 - Problem Statement
You are part of a team working to make mobile payments available globally, and are asked to build a deep learning model to detect fraud--whenever someone makes a payment, you want to see if the payment might be fraudulent, such as if the user's account has been taken over by a hacker.
You already know that backpropagation is quite challenging to implement, and sometimes has bugs. Because this is a mission-critical application, your company's CEO wants to be really certain that your implementation of backpropagation is correct. Your CEO says, "Give me proof that your backpropagation is actually working!" To give this reassurance, you are going to use "gradient checking."
Let's do it!
<a name='3'></a>
## 3 - How does Gradient Checking work?
Backpropagation computes the gradients $\frac{\partial J}{\partial \theta}$, where $\theta$ denotes the parameters of the model. $J$ is computed using forward propagation and your loss function.
Because forward propagation is relatively easy to implement, you're confident you got that right, and so you're almost 100% sure that you're computing the cost $J$ correctly. Thus, you can use your code for computing $J$ to verify the code for computing $\frac{\partial J}{\partial \theta}$.
Let's look back at the definition of a derivative (or gradient):$$ \frac{\partial J}{\partial \theta} = \lim_{\varepsilon \to 0} \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2 \varepsilon} \tag{1}$$
If you're not familiar with the "$\displaystyle \lim_{\varepsilon \to 0}$" notation, it's just a way of saying "when $\varepsilon$ is really, really small."
You know the following:
- $\frac{\partial J}{\partial \theta}$ is what you want to make sure you're computing correctly.
- You can compute $J(\theta + \varepsilon)$ and $J(\theta - \varepsilon)$ (in the case that $\theta$ is a real number), since you're confident your implementation for $J$ is correct.
Let's use equation (1) and a small value for $\varepsilon$ to convince your CEO that your code for computing $\frac{\partial J}{\partial \theta}$ is correct!
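Before applying it to a model, here is a tiny numerical sketch of formula (1) itself for $J(\theta) = \theta^2$ at $\theta = 3$, whose true derivative is $2\theta = 6$ (the function is illustrative, not part of the assignment):

```python
# Two-sided difference approximation of dJ/dtheta for J(theta) = theta**2
def J(theta):
    return theta ** 2

theta, eps = 3.0, 1e-7
gradapprox = (J(theta + eps) - J(theta - eps)) / (2 * eps)

print(gradapprox)  # very close to the true derivative 6.0
```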
<a name='4'></a>
## 4 - 1-Dimensional Gradient Checking
Consider a 1D linear function $J(\theta) = \theta x$. The model contains only a single real-valued parameter $\theta$, and takes $x$ as input.
You will implement code to compute $J(.)$ and its derivative $\frac{\partial J}{\partial \theta}$. You will then use gradient checking to make sure your derivative computation for $J$ is correct.
<img src="images/1Dgrad_kiank.png" style="width:600px;height:250px;">
<caption><center><font color='purple'><b>Figure 1</b>:1D linear model </font></center></caption>
The diagram above shows the key computation steps: First start with $x$, then evaluate the function $J(x)$ ("forward propagation"). Then compute the derivative $\frac{\partial J}{\partial \theta}$ ("backward propagation").
<a name='ex-1'></a>
### Exercise 1 - forward_propagation
Implement `forward propagation`. For this simple function compute $J(.)$
```
# GRADED FUNCTION: forward_propagation
def forward_propagation(x, theta):
    """
    Implement the linear forward propagation (compute J) presented in Figure 1 (J(theta) = theta * x)

    Arguments:
    x -- a real-valued input
    theta -- our parameter, a real number as well

    Returns:
    J -- the value of function J, computed using the formula J(theta) = theta * x
    """
    # (approx. 1 line)
    # J =
    # YOUR CODE STARTS HERE
    J = theta * x
    # YOUR CODE ENDS HERE

    return J
x, theta = 2, 4
J = forward_propagation(x, theta)
print ("J = " + str(J))
forward_propagation_test(forward_propagation)
```
<a name='ex-2'></a>
### Exercise 2 - backward_propagation
Now, implement the `backward propagation` step (derivative computation) of Figure 1. That is, compute the derivative of $J(\theta) = \theta x$ with respect to $\theta$. To save you from doing the calculus, you should get $dtheta = \frac{\partial J}{\partial \theta} = x$.
```
# GRADED FUNCTION: backward_propagation
def backward_propagation(x, theta):
    """
    Computes the derivative of J with respect to theta (see Figure 1).

    Arguments:
    x -- a real-valued input
    theta -- our parameter, a real number as well

    Returns:
    dtheta -- the gradient of the cost with respect to theta
    """
    # (approx. 1 line)
    # dtheta =
    # YOUR CODE STARTS HERE
    dtheta = x
    # YOUR CODE ENDS HERE

    return dtheta
x, theta = 2, 4
dtheta = backward_propagation(x, theta)
print ("dtheta = " + str(dtheta))
backward_propagation_test(backward_propagation)
```
<a name='ex-3'></a>
### Exercise 3 - gradient_check
To show that the `backward_propagation()` function is correctly computing the gradient $\frac{\partial J}{\partial \theta}$, let's implement gradient checking.
**Instructions**:
- First compute "gradapprox" using the formula above (1) and a small value of $\varepsilon$. Here are the Steps to follow:
1. $\theta^{+} = \theta + \varepsilon$
2. $\theta^{-} = \theta - \varepsilon$
3. $J^{+} = J(\theta^{+})$
4. $J^{-} = J(\theta^{-})$
5. $gradapprox = \frac{J^{+} - J^{-}}{2 \varepsilon}$
- Then compute the gradient using backward propagation, and store the result in a variable "grad"
- Finally, compute the relative difference between "gradapprox" and the "grad" using the following formula:
$$ difference = \frac {\mid\mid grad - gradapprox \mid\mid_2}{\mid\mid grad \mid\mid_2 + \mid\mid gradapprox \mid\mid_2} \tag{2}$$
You will need 3 Steps to compute this formula:
- 1'. compute the numerator using np.linalg.norm(...)
- 2'. compute the denominator. You will need to call np.linalg.norm(...) twice.
- 3'. divide them.
- If this difference is small (say less than $10^{-7}$), you can be quite confident that you have computed your gradient correctly. Otherwise, there may be a mistake in the gradient computation.
```
# GRADED FUNCTION: gradient_check
def gradient_check(x, theta, epsilon=1e-7, print_msg=False):
    """
    Implement gradient checking for the model presented in Figure 1.

    Arguments:
    x -- a float input
    theta -- our parameter, a float as well
    epsilon -- tiny shift to the input to compute approximated gradient with formula(1)

    Returns:
    difference -- difference (2) between the approximated gradient and the backward propagation gradient. Float output
    """
    # Compute gradapprox using the left side of formula (1). epsilon is small enough that you don't need to worry about the limit.
    # (approx. 5 lines)
    # theta_plus =                               # Step 1
    # theta_minus =                              # Step 2
    # J_plus =                                   # Step 3
    # J_minus =                                  # Step 4
    # gradapprox =                               # Step 5
    # YOUR CODE STARTS HERE
    theta_plus = theta + epsilon
    theta_minus = theta - epsilon
    J_plus = forward_propagation(x, theta_plus)
    J_minus = forward_propagation(x, theta_minus)
    gradapprox = (J_plus - J_minus) / (2 * epsilon)
    # YOUR CODE ENDS HERE

    # Check if gradapprox is close enough to the output of backward_propagation()
    # (approx. 1 line) DO NOT USE "grad = gradapprox"
    # grad =
    # YOUR CODE STARTS HERE
    grad = backward_propagation(x, theta)
    # YOUR CODE ENDS HERE

    # (approx. 3 lines)
    # numerator =                                # Step 1'
    # denominator =                              # Step 2'
    # difference =                               # Step 3'
    # YOUR CODE STARTS HERE
    numerator = np.linalg.norm(grad - gradapprox)
    denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox)
    difference = numerator / denominator
    # YOUR CODE ENDS HERE

    if print_msg:
        if difference > 2e-7:
            print ("\033[93m" + "There is a mistake in the backward propagation! difference = " + str(difference) + "\033[0m")
        else:
            print ("\033[92m" + "Your backward propagation works perfectly fine! difference = " + str(difference) + "\033[0m")

    return difference
x, theta = 2, 4
difference = gradient_check(2,4, print_msg=True)
#gradient_check_test(gradient_check)
```
Congrats, the difference is smaller than the $10^{-7}$ threshold. So you can have high confidence that you've correctly computed the gradient in `backward_propagation()`.
Now, in the more general case, your cost function $J$ has more than a single 1D input. When you are training a neural network, $\theta$ actually consists of multiple matrices $W^{[l]}$ and biases $b^{[l]}$! It is important to know how to do a gradient check with higher-dimensional inputs. Let's do it!
<a name='5'></a>
## 5 - N-Dimensional Gradient Checking
The following figure describes the forward and backward propagation of your fraud detection model.
<img src="images/NDgrad_kiank.png" style="width:600px;height:400px;">
<caption><center><font color='purple'><b>Figure 2</b>: Deep neural network. LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID</font></center></caption>
Let's look at your implementations for forward propagation and backward propagation.
```
def forward_propagation_n(X, Y, parameters):
    """
    Implements the forward propagation (and computes the cost) presented in Figure 3.

    Arguments:
    X -- training set for m examples
    Y -- labels for m examples
    parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
                    W1 -- weight matrix of shape (5, 4)
                    b1 -- bias vector of shape (5, 1)
                    W2 -- weight matrix of shape (3, 5)
                    b2 -- bias vector of shape (3, 1)
                    W3 -- weight matrix of shape (1, 3)
                    b3 -- bias vector of shape (1, 1)

    Returns:
    cost -- the cost function (logistic cost for m examples)
    cache -- a tuple with the intermediate values (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3)
    """
    # retrieve parameters
    m = X.shape[1]
    W1 = parameters["W1"]
    b1 = parameters["b1"]
    W2 = parameters["W2"]
    b2 = parameters["b2"]
    W3 = parameters["W3"]
    b3 = parameters["b3"]

    # LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
    Z1 = np.dot(W1, X) + b1
    A1 = relu(Z1)
    Z2 = np.dot(W2, A1) + b2
    A2 = relu(Z2)
    Z3 = np.dot(W3, A2) + b3
    A3 = sigmoid(Z3)

    # Cost
    log_probs = np.multiply(-np.log(A3), Y) + np.multiply(-np.log(1 - A3), 1 - Y)
    cost = 1. / m * np.sum(log_probs)

    cache = (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3)

    return cost, cache
```
Now, run backward propagation.
```
def backward_propagation_n(X, Y, cache):
    """
    Implement the backward propagation presented in figure 2.

    Arguments:
    X -- input datapoint, of shape (input size, 1)
    Y -- true "label"
    cache -- cache output from forward_propagation_n()

    Returns:
    gradients -- A dictionary with the gradients of the cost with respect to each parameter, activation and pre-activation variables.
    """
    m = X.shape[1]
    (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache

    dZ3 = A3 - Y
    dW3 = 1. / m * np.dot(dZ3, A2.T)
    db3 = 1. / m * np.sum(dZ3, axis=1, keepdims=True)

    dA2 = np.dot(W3.T, dZ3)
    dZ2 = np.multiply(dA2, np.int64(A2 > 0))
    dW2 = 1. / m * np.dot(dZ2, A1.T) * 2
    db2 = 1. / m * np.sum(dZ2, axis=1, keepdims=True)

    dA1 = np.dot(W2.T, dZ2)
    dZ1 = np.multiply(dA1, np.int64(A1 > 0))
    dW1 = 1. / m * np.dot(dZ1, X.T)
    db1 = 4. / m * np.sum(dZ1, axis=1, keepdims=True)

    gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,
                 "dA2": dA2, "dZ2": dZ2, "dW2": dW2, "db2": db2,
                 "dA1": dA1, "dZ1": dZ1, "dW1": dW1, "db1": db1}

    return gradients
```
You obtained some results on the fraud detection test set but you are not 100% sure of your model. Nobody's perfect! Let's implement gradient checking to verify if your gradients are correct.
**How does gradient checking work?**.
As in Section 3 and 4, you want to compare "gradapprox" to the gradient computed by backpropagation. The formula is still:
$$ \frac{\partial J}{\partial \theta} = \lim_{\varepsilon \to 0} \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2 \varepsilon} \tag{1}$$
However, $\theta$ is not a scalar anymore. It is a dictionary called "parameters". The function "`dictionary_to_vector()`" has been implemented for you. It converts the "parameters" dictionary into a vector called "values", obtained by reshaping all parameters (W1, b1, W2, b2, W3, b3) into vectors and concatenating them.
The inverse function is "`vector_to_dictionary`" which outputs back the "parameters" dictionary.
<img src="images/dictionary_to_vector.png" style="width:600px;height:400px;">
<caption><center><font color='purple'><b>Figure 2</b>: dictionary_to_vector() and vector_to_dictionary(). You will need these functions in gradient_check_n()</font></center></caption>
The "gradients" dictionary has also been converted into a vector "grad" using gradients_to_vector(), so you don't need to worry about that.
Now, for every single parameter in your vector, you will apply the same procedure as for the gradient_check exercise. You will store each gradient approximation in a vector `gradapprox`. If the check goes as expected, each value in this approximation must match the real gradient values stored in the `grad` vector.
Note that `grad` is calculated using the function `gradients_to_vector`, which uses the gradients outputs of the `backward_propagation_n` function.
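For intuition, here is a minimal numpy sketch of what a `dictionary_to_vector`-style flattening does (the real helpers live in `gc_utils`; the parameter names and key order below are illustrative):

```python
import numpy as np

params = {"W1": np.ones((2, 3)), "b1": np.zeros((2, 1))}

# flatten every parameter into a column and concatenate them
keys = sorted(params)  # fix an order so the operation can be inverted
vector = np.concatenate([params[k].reshape(-1, 1) for k in keys])
print(vector.shape)  # (8, 1)

# the inverse operation rebuilds the dictionary from the vector
restored, i = {}, 0
for k in keys:
    n = params[k].size
    restored[k] = vector[i:i + n].reshape(params[k].shape)
    i += n

print(np.allclose(restored["W1"], params["W1"]))  # True
```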
<a name='ex-4'></a>
### Exercise 4 - gradient_check_n
Implement the function below.
**Instructions**: Here is pseudo-code that will help you implement the gradient check.
For each i in num_parameters:
- To compute `J_plus[i]`:
1. Set $\theta^{+}$ to `np.copy(parameters_values)`
2. Set $\theta^{+}_i$ to $\theta^{+}_i + \varepsilon$
    3. Calculate $J^{+}_i$ using `forward_propagation_n(x, y, vector_to_dictionary(`$\theta^{+}$ `))`.
- To compute `J_minus[i]`: do the same thing with $\theta^{-}$
- Compute $gradapprox[i] = \frac{J^{+}_i - J^{-}_i}{2 \varepsilon}$
Thus, you get a vector gradapprox, where gradapprox[i] is an approximation of the gradient with respect to `parameter_values[i]`. You can now compare this gradapprox vector to the gradients vector from backpropagation. Just like for the 1D case (Steps 1', 2', 3'), compute:
$$ difference = \frac {\| grad - gradapprox \|_2}{\| grad \|_2 + \| gradapprox \|_2 } \tag{3}$$
**Note**: Use `np.linalg.norm` to get the norms
```
# GRADED FUNCTION: gradient_check_n
def gradient_check_n(parameters, gradients, X, Y, epsilon=1e-7, print_msg=False):
    """
    Checks if backward_propagation_n computes correctly the gradient of the cost output by forward_propagation_n

    Arguments:
    parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3"
    gradients -- output of backward_propagation_n, contains gradients of the cost with respect to the parameters
    X -- input datapoint, of shape (input size, 1)
    Y -- true "label"
    epsilon -- tiny shift to the input to compute approximated gradient with formula(1)

    Returns:
    difference -- difference (2) between the approximated gradient and the backward propagation gradient
    """
    # Set-up variables
    parameters_values, _ = dictionary_to_vector(parameters)
    grad = gradients_to_vector(gradients)
    num_parameters = parameters_values.shape[0]
    J_plus = np.zeros((num_parameters, 1))
    J_minus = np.zeros((num_parameters, 1))
    gradapprox = np.zeros((num_parameters, 1))

    # Compute gradapprox
    for i in range(num_parameters):
        # Compute J_plus[i]. Inputs: "parameters_values, epsilon". Output = "J_plus[i]".
        # "_" is used because forward_propagation_n outputs two values but we only care about the first one
        # (approx. 3 lines)
        # theta_plus =                             # Step 1
        # theta_plus[i] =                          # Step 2
        # J_plus[i], _ =                           # Step 3
        # YOUR CODE STARTS HERE
        theta_plus = np.copy(parameters_values)
        theta_plus[i] = theta_plus[i] + epsilon
        J_plus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(theta_plus))
        # YOUR CODE ENDS HERE

        # Compute J_minus[i]. Inputs: "parameters_values, epsilon". Output = "J_minus[i]".
        # (approx. 3 lines)
        # theta_minus =                            # Step 1
        # theta_minus[i] =                         # Step 2
        # J_minus[i], _ =                          # Step 3
        # YOUR CODE STARTS HERE
        theta_minus = np.copy(parameters_values)
        theta_minus[i] = theta_minus[i] - epsilon
        J_minus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(theta_minus))
        # YOUR CODE ENDS HERE

        # Compute gradapprox[i]
        # (approx. 1 line)
        # gradapprox[i] =
        # YOUR CODE STARTS HERE
        gradapprox[i] = (J_plus[i] - J_minus[i]) / (2 * epsilon)
        # YOUR CODE ENDS HERE

    # Compare gradapprox to backward propagation gradients by computing difference.
    # (approx. 3 lines)
    # numerator =                                  # Step 1'
    # denominator =                                # Step 2'
    # difference =                                 # Step 3'
    # YOUR CODE STARTS HERE
    numerator = np.linalg.norm(grad - gradapprox)
    denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox)
    difference = numerator / denominator
    # YOUR CODE ENDS HERE

    if print_msg:
        if difference > 2e-7:
            print ("\033[93m" + "There is a mistake in the backward propagation! difference = " + str(difference) + "\033[0m")
        else:
            print ("\033[92m" + "Your backward propagation works perfectly fine! difference = " + str(difference) + "\033[0m")

    return difference
X, Y, parameters = gradient_check_n_test_case()
cost, cache = forward_propagation_n(X, Y, parameters)
gradients = backward_propagation_n(X, Y, cache)
difference = gradient_check_n(parameters, gradients, X, Y, 1e-7, True)
expected_values = [0.2850931567761623, 1.1890913024229996e-07]
assert not(type(difference) == np.ndarray), "You are not using np.linalg.norm for numerator or denominator"
assert np.any(np.isclose(difference, expected_values)), "Wrong value. It is not one of the expected values"
```
**Expected output**:
<table>
<tr>
<td> <b> There is a mistake in the backward propagation!</b> </td>
<td> difference = 0.2850931567761623 </td>
</tr>
</table>
It seems that there were errors in the `backward_propagation_n` code! Good thing you've implemented the gradient check. Go back to `backward_propagation` and try to find/correct the errors *(Hint: check dW2 and db1)*. Rerun the gradient check when you think you've fixed it. Remember, you'll need to re-execute the cell defining `backward_propagation_n()` if you modify the code.
Can you get gradient check to declare your derivative computation correct? Even though this part of the assignment isn't graded, you should try to find the bug and re-run gradient check until you're convinced backprop is now correctly implemented.
**Notes**
- Gradient Checking is slow! Approximating the gradient with $\frac{\partial J}{\partial \theta} \approx \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2 \varepsilon}$ is computationally costly. For this reason, we don't run gradient checking at every iteration during training. Just a few times to check if the gradient is correct.
- Gradient Checking, at least as we've presented it, doesn't work with dropout. You would usually run the gradient check algorithm without dropout to make sure your backprop is correct, then add dropout.
Congrats! Now you can be confident that your deep learning model for fraud detection is working correctly! You can even use this to convince your CEO. :)
<br>
<font color='blue'>
**What you should remember from this notebook**:
- Gradient checking verifies closeness between the gradients from backpropagation and the numerical approximation of the gradient (computed using forward propagation).
- Gradient checking is slow, so you don't want to run it in every iteration of training. You would usually run it only to make sure your code is correct, then turn it off and use backprop for the actual learning process.
```
# default_exp models.layers
```
# Layers
> Helper functions used to build PyTorch time series models.
```
#export
from tsai.imports import *
from tsai.utils import *
from torch.nn.init import normal_
from fastai.torch_core import Module
from fastai.layers import *
from torch.nn.utils import weight_norm, spectral_norm
#export
def noop(x): return x
#export
def init_lin_zero(m):
    if isinstance(m, (nn.Linear)):
        if getattr(m, 'bias', None) is not None: nn.init.constant_(m.bias, 0)
        nn.init.constant_(m.weight, 0)
    for l in m.children(): init_lin_zero(l)
lin_zero_init = init_lin_zero
#export
class SwishBeta(Module):
    def __init__(self, beta=1.):
        self.sigmoid = torch.sigmoid
        self.beta = nn.Parameter(torch.Tensor(1).fill_(beta))
    def forward(self, x): return x.mul(self.sigmoid(x*self.beta))
#export
class Chomp1d(nn.Module):
    def __init__(self, chomp_size):
        super(Chomp1d, self).__init__()
        self.chomp_size = chomp_size
    def forward(self, x):
        return x[:, :, :-self.chomp_size].contiguous()
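# Chomp1d trims the last chomp_size timesteps; paired with left padding it
# keeps causal convolution outputs at the input sequence length.
# Illustrative standalone check (_Chomp1dDemo mirrors the class above):
import torch
import torch.nn as nn

class _Chomp1dDemo(nn.Module):
    def __init__(self, chomp_size):
        super().__init__()
        self.chomp_size = chomp_size
    def forward(self, x):
        return x[:, :, :-self.chomp_size].contiguous()

_x = torch.arange(10.).reshape(1, 1, 10)  # (batch, channels, seq_len)
_y = _Chomp1dDemo(2)(_x)
print(_y.shape)  # torch.Size([1, 1, 8])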
#export
def same_padding1d(seq_len, ks, stride=1, dilation=1):
    "Same padding formula as used in Tensorflow"
    p = (seq_len - 1) * stride + (ks - 1) * dilation + 1 - seq_len
    return p // 2, p - p // 2

class Pad1d(nn.ConstantPad1d):
    def __init__(self, padding, value=0.):
        super().__init__(padding, value)
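# Illustrative check of the "same" padding arithmetic (a standalone copy of
# the formula above, under the hypothetical name _same_padding1d_demo):
def _same_padding1d_demo(seq_len, ks, stride=1, dilation=1):
    p = (seq_len - 1) * stride + (ks - 1) * dilation + 1 - seq_len
    return p // 2, p - p // 2

# with stride 1, total padding is (ks - 1) * dilation, split as evenly as
# possible between left and right; even kernels pad asymmetrically
print(_same_padding1d_demo(10, 3))              # (1, 1)
print(_same_padding1d_demo(10, 3, dilation=2))  # (2, 2)
print(_same_padding1d_demo(10, 4))              # (1, 2)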
@delegates(nn.Conv1d)
class SameConv1d(Module):
    "Conv1d with padding='same'"
    def __init__(self, ni, nf, ks=3, stride=1, dilation=1, **kwargs):
        self.ks, self.stride, self.dilation = ks, stride, dilation
        self.conv1d_same = nn.Conv1d(ni, nf, ks, stride=stride, dilation=dilation, **kwargs)
        self.weight = self.conv1d_same.weight
        self.bias = self.conv1d_same.bias
        self.pad = Pad1d
    def forward(self, x):
        self.padding = same_padding1d(x.shape[-1], self.ks, dilation=self.dilation)  # stride=self.stride not used in padding calculation!
        return self.conv1d_same(self.pad(self.padding)(x))
#export
def same_padding2d(H, W, ks, stride=(1, 1), dilation=(1, 1)):
    "Same padding formula as used in Tensorflow"
    if isinstance(ks, Integral): ks = (ks, ks)
    if ks[0] == 1: p_h = 0
    else: p_h = (H - 1) * stride[0] + (ks[0] - 1) * dilation[0] + 1 - H
    if ks[1] == 1: p_w = 0
    else: p_w = (W - 1) * stride[1] + (ks[1] - 1) * dilation[1] + 1 - W
    return (p_w // 2, p_w - p_w // 2, p_h // 2, p_h - p_h // 2)

class Pad2d(nn.ConstantPad2d):
    def __init__(self, padding, value=0.):
        super().__init__(padding, value)
@delegates(nn.Conv2d)
class Conv2dSame(Module):
    "Conv2d with padding='same'"
    def __init__(self, ni, nf, ks=(3, 3), stride=(1, 1), dilation=(1, 1), **kwargs):
        if isinstance(ks, Integral): ks = (ks, ks)
        if isinstance(stride, Integral): stride = (stride, stride)
        if isinstance(dilation, Integral): dilation = (dilation, dilation)
        self.ks, self.stride, self.dilation = ks, stride, dilation
        self.conv2d_same = nn.Conv2d(ni, nf, ks, stride=stride, dilation=dilation, **kwargs)
        self.weight = self.conv2d_same.weight
        self.bias = self.conv2d_same.bias
        self.pad = Pad2d
    def forward(self, x):
        self.padding = same_padding2d(x.shape[-2], x.shape[-1], self.ks, dilation=self.dilation)  # stride=self.stride not used in padding calculation!
        return self.conv2d_same(self.pad(self.padding)(x))
@delegates(nn.Conv2d)
def Conv2d(ni, nf, kernel_size=None, ks=None, stride=1, padding='same', dilation=1, init='auto', bias_std=0.01, **kwargs):
    "conv2d layer with padding='same', 'valid', or any integer (defaults to 'same')"
    assert not (kernel_size and ks), 'use kernel_size or ks but not both simultaneously'
    assert kernel_size is not None or ks is not None, 'you need to pass a ks'
    kernel_size = kernel_size or ks
    if padding == 'same':
        conv = Conv2dSame(ni, nf, kernel_size, stride=stride, dilation=dilation, **kwargs)
    elif padding == 'valid': conv = nn.Conv2d(ni, nf, kernel_size, stride=stride, padding=0, dilation=dilation, **kwargs)
    else: conv = nn.Conv2d(ni, nf, kernel_size, stride=stride, padding=padding, dilation=dilation, **kwargs)
    init_linear(conv, None, init=init, bias_std=bias_std)
    return conv
bs = 2
c_in = 3
c_out = 5
h = 16
w = 20
t = torch.rand(bs, c_in, h, w)
test_eq(Conv2dSame(c_in, c_out, ks=3, stride=1, dilation=1, bias=False)(t).shape, (bs, c_out, h, w))
test_eq(Conv2dSame(c_in, c_out, ks=(3, 1), stride=1, dilation=1, bias=False)(t).shape, (bs, c_out, h, w))
test_eq(Conv2dSame(c_in, c_out, ks=3, stride=(1, 1), dilation=(2, 2), bias=False)(t).shape, (bs, c_out, h, w))
test_eq(Conv2dSame(c_in, c_out, ks=3, stride=(2, 2), dilation=(1, 1), bias=False)(t).shape, (bs, c_out, h//2, w//2))
test_eq(Conv2dSame(c_in, c_out, ks=3, stride=(2, 2), dilation=(2, 2), bias=False)(t).shape, (bs, c_out, h//2, w//2))
test_eq(Conv2d(c_in, c_out, ks=3, padding='same', stride=1, dilation=1, bias=False)(t).shape, (bs, c_out, h, w))
#export
class CausalConv1d(torch.nn.Conv1d):
    def __init__(self, ni, nf, ks, stride=1, dilation=1, groups=1, bias=True):
        super(CausalConv1d, self).__init__(ni, nf, kernel_size=ks, stride=stride, padding=0, dilation=dilation, groups=groups, bias=bias)
        self.__padding = (ks - 1) * dilation
    def forward(self, input):
        return super(CausalConv1d, self).forward(F.pad(input, (self.__padding, 0)))
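# A quick check of the causality property: the output at time t must not
# depend on inputs after t. Illustrative standalone copy (_CausalConv1dDemo
# mirrors CausalConv1d above); weights are random, so we compare two passes.
import torch
import torch.nn as nn
import torch.nn.functional as F

class _CausalConv1dDemo(nn.Conv1d):
    def __init__(self, ni, nf, ks, dilation=1):
        super().__init__(ni, nf, kernel_size=ks, padding=0, dilation=dilation)
        self._left_pad = (ks - 1) * dilation  # pad only on the left
    def forward(self, x):
        return super().forward(F.pad(x, (self._left_pad, 0)))

_conv = _CausalConv1dDemo(1, 1, ks=3)
_x1 = torch.randn(1, 1, 10)
_x2 = _x1.clone()
_x2[:, :, 7:] += 100.0  # perturb only timesteps 7..9
_unchanged = torch.allclose(_conv(_x1)[:, :, :7], _conv(_x2)[:, :, :7])
print(_unchanged)  # True: earlier outputs unaffected by future inputs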
#export
@delegates(nn.Conv1d)
def Conv1d(ni, nf, kernel_size=None, ks=None, stride=1, padding='same', dilation=1, init='auto', bias_std=0.01, **kwargs):
    "conv1d layer with padding='same', 'causal', 'valid', or any integer (defaults to 'same')"
    assert not (kernel_size and ks), 'use kernel_size or ks but not both simultaneously'
    assert kernel_size is not None or ks is not None, 'you need to pass a ks'
    kernel_size = kernel_size or ks
    if padding == 'same':
        if kernel_size%2==1:
            conv = nn.Conv1d(ni, nf, kernel_size, stride=stride, padding=kernel_size//2 * dilation, dilation=dilation, **kwargs)
        else:
            conv = SameConv1d(ni, nf, kernel_size, stride=stride, dilation=dilation, **kwargs)
    elif padding == 'causal': conv = CausalConv1d(ni, nf, kernel_size, stride=stride, dilation=dilation, **kwargs)
    elif padding == 'valid': conv = nn.Conv1d(ni, nf, kernel_size, stride=stride, padding=0, dilation=dilation, **kwargs)
    else: conv = nn.Conv1d(ni, nf, kernel_size, stride=stride, padding=padding, dilation=dilation, **kwargs)
    init_linear(conv, None, init=init, bias_std=bias_std)
    return conv
bs = 2
c_in = 3
c_out = 5
seq_len = 512
t = torch.rand(bs, c_in, seq_len)
dilation = 1
test_eq(CausalConv1d(c_in, c_out, ks=3, dilation=dilation)(t).shape, Conv1d(c_in, c_out, ks=3, padding="same", dilation=dilation)(t).shape)
dilation = 2
test_eq(CausalConv1d(c_in, c_out, ks=3, dilation=dilation)(t).shape, Conv1d(c_in, c_out, ks=3, padding="same", dilation=dilation)(t).shape)
bs = 2
ni = 3
nf = 5
seq_len = 6
ks = 3
t = torch.rand(bs, c_in, seq_len)
test_eq(Conv1d(ni, nf, ks, padding=0)(t).shape, (bs, c_out, seq_len - (2 * (ks//2))))
test_eq(Conv1d(ni, nf, ks, padding='valid')(t).shape, (bs, c_out, seq_len - (2 * (ks//2))))
test_eq(Conv1d(ni, nf, ks, padding='same')(t).shape, (bs, c_out, seq_len))
test_eq(Conv1d(ni, nf, ks, padding='causal')(t).shape, (bs, c_out, seq_len))
test_error('use kernel_size or ks but not both simultaneously', Conv1d, ni, nf, kernel_size=3, ks=3)
test_error('you need to pass a ks', Conv1d, ni, nf)
conv = Conv1d(ni, nf, ks, padding='same')
init_linear(conv, None, init='auto', bias_std=.01)
conv
conv = Conv1d(ni, nf, ks, padding='causal')
init_linear(conv, None, init='auto', bias_std=.01)
conv
conv = Conv1d(ni, nf, ks, padding='valid')
init_linear(conv, None, init='auto', bias_std=.01)
weight_norm(conv)
conv
conv = Conv1d(ni, nf, ks, padding=0)
init_linear(conv, None, init='auto', bias_std=.01)
weight_norm(conv)
conv
#export
class SeparableConv1d(Module):
def __init__(self, ni, nf, ks, stride=1, padding='same', dilation=1, bias=True, bias_std=0.01):
self.depthwise_conv = Conv1d(ni, ni, ks, stride=stride, padding=padding, dilation=dilation, groups=ni, bias=bias)
self.pointwise_conv = nn.Conv1d(ni, nf, 1, stride=1, padding=0, dilation=1, groups=1, bias=bias)
if bias:
if bias_std != 0:
normal_(self.depthwise_conv.bias, 0, bias_std)
normal_(self.pointwise_conv.bias, 0, bias_std)
else:
self.depthwise_conv.bias.data.zero_()
self.pointwise_conv.bias.data.zero_()
def forward(self, x):
x = self.depthwise_conv(x)
x = self.pointwise_conv(x)
return x
bs = 64
c_in = 6
c_out = 5
seq_len = 512
t = torch.rand(bs, c_in, seq_len)
test_eq(SeparableConv1d(c_in, c_out, 3)(t).shape, (bs, c_out, seq_len))
#export
class AddCoords1d(Module):
"""Add coordinates to ease position identification without modifying mean and std"""
def forward(self, x):
bs, _, seq_len = x.shape
cc = torch.linspace(-1,1,x.shape[-1], device=x.device).repeat(bs, 1, 1)
cc = (cc - cc.mean()) / cc.std()
x = torch.cat([x, cc], dim=1)
return x
bs = 2
c_in = 3
c_out = 5
seq_len = 50
t = torch.rand(bs, c_in, seq_len)
t = (t - t.mean()) / t.std()
test_eq(AddCoords1d()(t).shape, (bs, c_in + 1, seq_len))
new_t = AddCoords1d()(t)
test_close(new_t.mean(),0, 1e-2)
test_close(new_t.std(), 1, 1e-2)
#export
class ConvBlock(nn.Sequential):
"Create a sequence of conv1d (`ni` to `nf`), activation (if `act_cls`) and `norm_type` layers."
def __init__(self, ni, nf, kernel_size=None, ks=3, stride=1, padding='same', bias=None, bias_std=0.01, norm='Batch', zero_norm=False, bn_1st=True,
act=nn.ReLU, act_kwargs={}, init='auto', dropout=0., xtra=None, coord=False, separable=False, **kwargs):
kernel_size = kernel_size or ks
ndim = 1
layers = [AddCoords1d()] if coord else []
norm_type = getattr(NormType,f"{snake2camel(norm)}{'Zero' if zero_norm else ''}") if norm is not None else None
bn = norm_type in (NormType.Batch, NormType.BatchZero)
inn = norm_type in (NormType.Instance, NormType.InstanceZero)
if bias is None: bias = not (bn or inn)
if separable: conv = SeparableConv1d(ni + coord, nf, ks=kernel_size, bias=bias, stride=stride, padding=padding, **kwargs)
else: conv = Conv1d(ni + coord, nf, ks=kernel_size, bias=bias, stride=stride, padding=padding, **kwargs)
act = None if act is None else act(**act_kwargs)
if not separable: init_linear(conv, act, init=init, bias_std=bias_std)
if norm_type==NormType.Weight: conv = weight_norm(conv)
elif norm_type==NormType.Spectral: conv = spectral_norm(conv)
layers += [conv]
act_bn = []
if act is not None: act_bn.append(act)
if bn: act_bn.append(BatchNorm(nf, norm_type=norm_type, ndim=ndim))
if inn: act_bn.append(InstanceNorm(nf, norm_type=norm_type, ndim=ndim))
if bn_1st: act_bn.reverse()
if dropout: layers += [nn.Dropout(dropout)]
layers += act_bn
if xtra: layers.append(xtra)
super().__init__(*layers)
Conv = named_partial('Conv', ConvBlock, norm=None, act=None)
ConvBN = named_partial('ConvBN', ConvBlock, norm='Batch', act=None)
CoordConv = named_partial('CoordConv', ConvBlock, norm=None, act=None, coord=True)
SepConv = named_partial('SepConv', ConvBlock, norm=None, act=None, separable=True)
#export
class ResBlock1dPlus(Module):
"Resnet block from `ni` to `nh` with `stride`"
@delegates(ConvLayer.__init__)
def __init__(self, expansion, ni, nf, coord=False, stride=1, groups=1, reduction=None, nh1=None, nh2=None, dw=False, g2=1,
sa=False, sym=False, norm='Batch', zero_norm=True, act_cls=defaults.activation, ks=3,
pool=AvgPool, pool_first=True, **kwargs):
if nh2 is None: nh2 = nf
if nh1 is None: nh1 = nh2
nf,ni = nf*expansion,ni*expansion
k0 = dict(norm=norm, zero_norm=False, act=act_cls, **kwargs)
k1 = dict(norm=norm, zero_norm=zero_norm, act=None, **kwargs)
convpath = [ConvBlock(ni, nh2, ks, coord=coord, stride=stride, groups=ni if dw else groups, **k0),
ConvBlock(nh2, nf, ks, coord=coord, groups=g2, **k1)
] if expansion == 1 else [
ConvBlock(ni, nh1, 1, coord=coord, **k0),
ConvBlock(nh1, nh2, ks, coord=coord, stride=stride, groups=nh1 if dw else groups, **k0),
ConvBlock(nh2, nf, 1, coord=coord, groups=g2, **k1)]
if reduction: convpath.append(SEModule(nf, reduction=reduction, act_cls=act_cls))
if sa: convpath.append(SimpleSelfAttention(nf,ks=1,sym=sym))
self.convpath = nn.Sequential(*convpath)
idpath = []
if ni!=nf: idpath.append(ConvBlock(ni, nf, 1, coord=coord, act=None, **kwargs))
if stride!=1: idpath.insert((1,0)[pool_first], pool(stride, ndim=1, ceil_mode=True))
self.idpath = nn.Sequential(*idpath)
self.act = defaults.activation(inplace=True) if act_cls is defaults.activation else act_cls()
def forward(self, x): return self.act(self.convpath(x) + self.idpath(x))
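`ResBlock1dPlus` builds on several fastai helpers, but its core is the standard residual pattern: a convolutional path plus an identity path (projected with a 1x1 conv when channel counts differ), summed and passed through an activation. A minimal sketch of just that pattern, in plain `torch` (the `_ResBlock1d` name and layer choices here are illustrative, not the full class):

```python
import torch
import torch.nn as nn

class _ResBlock1d(nn.Module):
    # minimal residual pattern: conv path + (1x1-projected) identity path
    def __init__(self, ni, nf, ks=3):
        super().__init__()
        pad = ks // 2
        self.convpath = nn.Sequential(
            nn.Conv1d(ni, nf, ks, padding=pad), nn.BatchNorm1d(nf), nn.ReLU(),
            nn.Conv1d(nf, nf, ks, padding=pad), nn.BatchNorm1d(nf))
        # project the identity only when the channel count changes
        self.idpath = nn.Conv1d(ni, nf, 1) if ni != nf else nn.Identity()
        self.act = nn.ReLU()
    def forward(self, x):
        return self.act(self.convpath(x) + self.idpath(x))

out = _ResBlock1d(3, 8)(torch.rand(2, 3, 32))
```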
#export
def SEModule1d(ni, reduction=16, act=nn.ReLU, act_kwargs={}):
"Squeeze and excitation module for 1d"
nf = math.ceil(ni//reduction/8)*8
assert nf != 0, 'nf cannot be 0'
return SequentialEx(nn.AdaptiveAvgPool1d(1),
ConvBlock(ni, nf, ks=1, norm=None, act=act, act_kwargs=act_kwargs),
ConvBlock(nf, ni, ks=1, norm=None, act=nn.Sigmoid), ProdLayer())
t = torch.rand(8, 32, 12)
test_eq(SEModule1d(t.shape[1], 16, act=nn.ReLU, act_kwargs={})(t).shape, t.shape)
#export
def Norm(nf, ndim=1, norm='Batch', zero_norm=False, init=True, **kwargs):
"Norm layer with `nf` features and `ndim` with auto init."
assert 1 <= ndim <= 3
nl = getattr(nn, f"{snake2camel(norm)}Norm{ndim}d")(nf, **kwargs)
if nl.affine and init:
nl.bias.data.fill_(1e-3)
nl.weight.data.fill_(0. if zero_norm else 1.)
return nl
BN1d = partial(Norm, ndim=1, norm='Batch')
IN1d = partial(Norm, ndim=1, norm='Instance')
bs = 2
ni = 3
nf = 5
sl = 4
ks = 5
t = torch.rand(bs, ni, sl)
test_eq(ConvBlock(ni, nf, ks)(t).shape, (bs, nf, sl))
test_eq(ConvBlock(ni, nf, ks, padding='causal')(t).shape, (bs, nf, sl))
test_eq(ConvBlock(ni, nf, ks, coord=True)(t).shape, (bs, nf, sl))
test_eq(BN1d(ni)(t).shape, (bs, ni, sl))
test_eq(BN1d(ni).weight.data.mean().item(), 1.)
test_eq(BN1d(ni, zero_norm=True).weight.data.mean().item(), 0.)
test_eq(ConvBlock(ni, nf, ks, norm='batch', zero_norm=True)[1].weight.data.unique().item(), 0)
test_ne(ConvBlock(ni, nf, ks, norm='batch', zero_norm=False)[1].weight.data.unique().item(), 0)
test_eq(ConvBlock(ni, nf, ks, bias=False)[0].bias, None)
ConvBlock(ni, nf, ks, act=Swish, coord=True)
#export
class LinLnDrop(nn.Sequential):
"Module grouping `LayerNorm1d`, `Dropout` and `Linear` layers"
def __init__(self, n_in, n_out, ln=True, p=0., act=None, lin_first=False):
layers = [nn.LayerNorm(n_out if lin_first else n_in)] if ln else []
if p != 0: layers.append(nn.Dropout(p))
lin = [nn.Linear(n_in, n_out, bias=not ln)]
if act is not None: lin.append(act)
layers = lin+layers if lin_first else layers+lin
super().__init__(*layers)
LinLnDrop(2, 3, p=.5)
#export
class LambdaPlus(Module):
def __init__(self, func, *args, **kwargs): self.func,self.args,self.kwargs=func,args,kwargs
def forward(self, x): return self.func(x, *self.args, **self.kwargs)
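`LambdaPlus` simply stores a function together with fixed extra args/kwargs and applies it in `forward`. A hypothetical usage sketch (the `_LambdaPlus` reimplementation below is just for a self-contained demo):

```python
import torch

class _LambdaPlus(torch.nn.Module):
    # wraps an arbitrary function (plus fixed args/kwargs) as a module
    def __init__(self, func, *args, **kwargs):
        super().__init__()
        self.func, self.args, self.kwargs = func, args, kwargs
    def forward(self, x):
        return self.func(x, *self.args, **self.kwargs)

crop = _LambdaPlus(torch.narrow, 2, 0, 8)   # narrow dim 2 to the first 8 steps
out = crop(torch.rand(4, 3, 20))            # (4, 3, 8)
```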
#export
class Squeeze(Module):
def __init__(self, dim=-1): self.dim = dim
def forward(self, x): return x.squeeze(dim=self.dim)
def __repr__(self): return f'{self.__class__.__name__}(dim={self.dim})'
class Unsqueeze(Module):
def __init__(self, dim=-1): self.dim = dim
def forward(self, x): return x.unsqueeze(dim=self.dim)
def __repr__(self): return f'{self.__class__.__name__}(dim={self.dim})'
class Add(Module):
def forward(self, x, y): return x.add(y)
def __repr__(self): return f'{self.__class__.__name__}'
class Concat(Module):
def __init__(self, dim=1): self.dim = dim
def forward(self, *x): return torch.cat(*x, dim=self.dim)
def __repr__(self): return f'{self.__class__.__name__}(dim={self.dim})'
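Note the call convention of `Concat`: because `forward(self, *x)` passes `*x` straight into `torch.cat(*x, dim=...)`, the tensors should arrive as a single list/tuple argument (calling it with separate tensor arguments would feed the second tensor into `torch.cat`'s `dim` slot). A self-contained sketch of this reading, with a hypothetical `_Concat` stand-in:

```python
import torch

class _Concat(torch.nn.Module):
    def __init__(self, dim=1):
        super().__init__()
        self.dim = dim
    def forward(self, *x):
        # *x unpacks back to the single tuple/list of tensors, so call it
        # as _Concat()([a, b]), not _Concat()(a, b)
        return torch.cat(*x, dim=self.dim)

a, b = torch.rand(2, 3, 8), torch.rand(2, 5, 8)
cat = _Concat(dim=1)([a, b])   # (2, 8, 8)
```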
class Permute(Module):
def __init__(self, *dims): self.dims = dims
def forward(self, x): return x.permute(self.dims)
def __repr__(self): return f"{self.__class__.__name__}(dims={', '.join([str(d) for d in self.dims])})"
class Transpose(Module):
def __init__(self, *dims, contiguous=False): self.dims, self.contiguous = dims, contiguous
def forward(self, x):
if self.contiguous: return x.transpose(*self.dims).contiguous()
else: return x.transpose(*self.dims)
def __repr__(self):
if self.contiguous: return f"{self.__class__.__name__}(dims={', '.join([str(d) for d in self.dims])}).contiguous()"
else: return f"{self.__class__.__name__}({', '.join([str(d) for d in self.dims])})"
class View(Module):
def __init__(self, *shape): self.shape = shape
def forward(self, x): return x.view(x.shape[0], *self.shape)
def __repr__(self): return f"{self.__class__.__name__}({', '.join(['bs'] + [str(s) for s in self.shape])})"
class Reshape(Module):
def __init__(self, *shape): self.shape = shape
def forward(self, x): return x.reshape(x.shape[0], *self.shape)
def __repr__(self): return f"{self.__class__.__name__}({', '.join(['bs'] + [str(s) for s in self.shape])})"
class Max(Module):
def __init__(self, dim=None, keepdim=False): self.dim, self.keepdim = dim, keepdim
def forward(self, x): return x.max(self.dim, keepdim=self.keepdim)[0]
def __repr__(self): return f'{self.__class__.__name__}(dim={self.dim}, keepdim={self.keepdim})'
class LastStep(Module):
def forward(self, x): return x[..., -1]
def __repr__(self): return f'{self.__class__.__name__}()'
class SoftMax(Module):
"SoftMax layer"
def __init__(self, dim=-1):
self.dim = dim
def forward(self, x):
return F.softmax(x, dim=self.dim)
def __repr__(self): return f'{self.__class__.__name__}(dim={self.dim})'
class Clamp(Module):
def __init__(self, min=None, max=None):
self.min, self.max = min, max
def forward(self, x):
return x.clamp(min=self.min, max=self.max)
def __repr__(self): return f'{self.__class__.__name__}(min={self.min}, max={self.max})'
class Clip(Module):
def __init__(self, min=None, max=None):
self.min, self.max = min, max
def forward(self, x):
if self.min is not None:
x = torch.maximum(x, self.min)
if self.max is not None:
x = torch.minimum(x, self.max)
return x
def __repr__(self): return f'{self.__class__.__name__}()'
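The difference between `Clamp` and `Clip` is the kind of bounds they accept: `Clamp` uses `Tensor.clamp` with scalar bounds, while `Clip` uses `torch.maximum`/`torch.minimum`, so its bounds may themselves be tensors that broadcast elementwise. A small hand-checkable sketch of both operations:

```python
import torch

x = torch.tensor([[-2., -0.5, 0.5, 2.]])

# Clamp-style: scalar bounds applied to every element
clamped = x.clamp(min=-1., max=1.)                 # [[-1., -0.5, 0.5, 1.]]

# Clip-style: tensor bounds, broadcast per element
lo = torch.tensor([-1., 0., 0.,   0.])
hi = torch.tensor([ 1., 1., 0.25, 1.])
clipped = torch.minimum(torch.maximum(x, lo), hi)  # [[-1., 0., 0.25, 1.]]
```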
class ReZero(Module):
def __init__(self, module):
self.module = module
self.alpha = nn.Parameter(torch.zeros(1))
def forward(self, x):
return x + self.alpha * self.module(x)
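`ReZero` (from the ReZero paper) scales the wrapped branch by a learnable `alpha` initialized to zero, so at initialization the block is exactly the identity and the branch only contributes once training moves `alpha` away from zero. A self-contained sketch (the `_ReZero` name is just for the demo):

```python
import torch
import torch.nn as nn

class _ReZero(nn.Module):
    # residual branch scaled by a learnable alpha that starts at 0
    def __init__(self, module):
        super().__init__()
        self.module = module
        self.alpha = nn.Parameter(torch.zeros(1))
    def forward(self, x):
        return x + self.alpha * self.module(x)

block = _ReZero(nn.Linear(8, 8))
x = torch.rand(4, 8)
identity_at_init = torch.equal(block(x), x)   # alpha == 0 -> pure identity
```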
Noop = nn.Sequential()
bs = 2
nf = 5
sl = 4
t = torch.rand(bs, nf, sl)
test_eq(Permute(0,2,1)(t).shape, (bs, sl, nf))
test_eq(Max(1)(t).shape, (bs, sl))
test_eq(Transpose(1,2)(t).shape, (bs, sl, nf))
test_eq(Transpose(1,2, contiguous=True)(t).shape, (bs, sl, nf))
test_eq(View(-1, 2, 10)(t).shape, (bs, 1, 2, 10))
test_eq(Reshape(-1, 2, 10)(t).shape, (bs, 1, 2, 10))
Transpose(1,2), Permute(0,2,1), View(-1, 2, 10), Transpose(1,2, contiguous=True), Reshape(-1, 2, 10), Noop
# export
class DropPath(nn.Module):
"""Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
It's similar to Dropout but it drops individual connections instead of nodes.
Original code in https://github.com/rwightman/pytorch-image-models (timm library)
"""
def __init__(self, p=None):
super().__init__()
self.p = p
def forward(self, x):
if self.p is None or self.p == 0. or not self.training: return x
keep_prob = 1 - self.p
shape = (x.shape[0],) + (1,) * (x.ndim - 1)
random_tensor = keep_prob + torch.rand(shape, dtype=x.dtype, device=x.device)
random_tensor.floor_()
output = x.div(keep_prob) * random_tensor
# output = x.div(random_tensor.mean()) * random_tensor # divide by the actual mean to maintain the input mean?
return output
t = torch.ones(100,2,3)
test_eq(DropPath(0.)(t), t)
assert DropPath(0.5)(t).max() >= 1
#export
class Sharpen(Module):
"This is used to increase confidence in predictions - MixMatch paper"
def __init__(self, T=.5): self.T = T
def forward(self, x):
x = x**(1. / self.T)
return x / x.sum(dim=1, keepdims=True)
n_samples = 1000
n_classes = 3
t = (torch.rand(n_samples, n_classes) - .5) * 10
probas = F.softmax(t, -1)
sharpened_probas = Sharpen()(probas)
plt.plot(probas.flatten().sort().values, color='r')
plt.plot(sharpened_probas.flatten().sort().values, color='b')
plt.show()
test_gt(sharpened_probas[n_samples//2:].max(-1).values.sum().item(), probas[n_samples//2:].max(-1).values.sum().item())
#export
class Sequential(nn.Sequential):
"""Class that allows you to pass one or multiple inputs"""
def forward(self, *x):
for i, module in enumerate(self._modules.values()):
x = module(*x) if isinstance(x, (list, tuple, L)) else module(x)
return x
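This `Sequential` keeps unpacking a tuple/list of inputs into each module until some module returns a single tensor, after which it behaves like a plain `nn.Sequential`. A self-contained sketch with a hypothetical two-input first module:

```python
import torch
import torch.nn as nn

class _PairAdd(nn.Module):
    def forward(self, x, y):       # consumes two inputs, returns one tensor
        return x + y

class _Seq(nn.Sequential):
    def forward(self, *x):
        for module in self._modules.values():
            x = module(*x) if isinstance(x, (list, tuple)) else module(x)
        return x

model = _Seq(_PairAdd(), nn.Linear(8, 2))
out = model(torch.rand(4, 8), torch.rand(4, 8))   # (4, 2)
```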
#export
class TimeDistributed(nn.Module):
def __init__(self, module, batch_first=False):
super(TimeDistributed, self).__init__()
self.module = module
self.batch_first = batch_first
def forward(self, x):
if len(x.size()) <= 2:
return self.module(x)
# Squash samples and timesteps into a single axis
x_reshape = x.contiguous().view(-1, x.size(-1)) # (samples * timesteps, input_size)
y = self.module(x_reshape)
# We have to reshape Y
if self.batch_first:
y = y.contiguous().view(x.size(0), -1, y.size(-1)) # (samples, timesteps, output_size)
else:
y = y.view(-1, x.size(1), y.size(-1)) # (timesteps, samples, output_size)
return y
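`TimeDistributed` applies a module to every timestep by folding time into the batch dimension, running the module once, and unfolding. A sketch of the same trick in plain `torch`, checked against an explicit per-step loop:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
lin = nn.Linear(8, 4)
x = torch.rand(2, 5, 8)                 # (samples, timesteps, features)

# fold (bs, steps, features) -> (bs*steps, features), apply, unfold
y = lin(x.contiguous().view(-1, x.size(-1))).view(2, 5, 4)

# reference: apply the module step by step
y_ref = torch.stack([lin(x[:, t]) for t in range(5)], dim=1)
match = torch.allclose(y, y_ref)
```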
#export
class Temp_Scale(Module):
"Used to perform Temperature Scaling (dirichlet=False) or Single-parameter Dirichlet calibration (dirichlet=True)"
def __init__(self, temp=1., dirichlet=False):
self.weight = nn.Parameter(tensor(temp))
self.bias = None
self.log_softmax = dirichlet
def forward(self, x):
if self.log_softmax: x = F.log_softmax(x, dim=-1)
return x.div(self.weight)
class Vector_Scale(Module):
"Used to perform Vector Scaling (dirichlet=False) or Diagonal Dirichlet calibration (dirichlet=True)"
def __init__(self, n_classes=1, dirichlet=False):
self.weight = nn.Parameter(torch.ones(n_classes))
self.bias = nn.Parameter(torch.zeros(n_classes))
self.log_softmax = dirichlet
def forward(self, x):
if self.log_softmax: x = F.log_softmax(x, dim=-1)
return x.mul(self.weight).add(self.bias)
class Matrix_Scale(Module):
"Used to perform Matrix Scaling (dirichlet=False) or Dirichlet calibration (dirichlet=True)"
def __init__(self, n_classes=1, dirichlet=False):
self.ms = nn.Linear(n_classes, n_classes)
self.ms.weight.data = nn.Parameter(torch.eye(n_classes))
nn.init.constant_(self.ms.bias.data, 0.)
self.weight = self.ms.weight
self.bias = self.ms.bias
self.log_softmax = dirichlet
def forward(self, x):
if self.log_softmax: x = F.log_softmax(x, dim=-1)
return self.ms(x)
def get_calibrator(calibrator=None, n_classes=1, **kwargs):
if not calibrator: return noop
elif calibrator.lower() == 'temp': return Temp_Scale(dirichlet=False, **kwargs)
elif calibrator.lower() == 'vector': return Vector_Scale(n_classes=n_classes, dirichlet=False, **kwargs)
elif calibrator.lower() == 'matrix': return Matrix_Scale(n_classes=n_classes, dirichlet=False, **kwargs)
elif calibrator.lower() == 'dtemp': return Temp_Scale(dirichlet=True, **kwargs)
elif calibrator.lower() == 'dvector': return Vector_Scale(n_classes=n_classes, dirichlet=True, **kwargs)
elif calibrator.lower() == 'dmatrix': return Matrix_Scale(n_classes=n_classes, dirichlet=True, **kwargs)
else: assert False, f'{calibrator} is not a valid calibrator. Choose one of: temp, vector, matrix, dtemp, dvector, dmatrix'
bs = 2
c_out = 3
t = torch.rand(bs, c_out)
for calibrator, cal_name in zip(['temp', 'vector', 'matrix'], ['Temp_Scale', 'Vector_Scale', 'Matrix_Scale']):
cal = get_calibrator(calibrator, n_classes=c_out)
# print(calibrator)
# print(cal.weight, cal.bias, '\n')
test_eq(cal(t), t)
test_eq(cal.__class__.__name__, cal_name)
for calibrator, cal_name in zip(['dtemp', 'dvector', 'dmatrix'], ['Temp_Scale', 'Vector_Scale', 'Matrix_Scale']):
cal = get_calibrator(calibrator, n_classes=c_out)
# print(calibrator)
# print(cal.weight, cal.bias, '\n')
test_eq(cal(t), F.log_softmax(t, dim=1))
test_eq(cal.__class__.__name__, cal_name)
bs = 2
c_out = 3
t = torch.rand(bs, c_out)
test_eq(Temp_Scale()(t).shape, t.shape)
test_eq(Vector_Scale(c_out)(t).shape, t.shape)
test_eq(Matrix_Scale(c_out)(t).shape, t.shape)
test_eq(Temp_Scale(dirichlet=True)(t).shape, t.shape)
test_eq(Vector_Scale(c_out, dirichlet=True)(t).shape, t.shape)
test_eq(Matrix_Scale(c_out, dirichlet=True)(t).shape, t.shape)
test_eq(Temp_Scale()(t), t)
test_eq(Vector_Scale(c_out)(t), t)
test_eq(Matrix_Scale(c_out)(t), t)
bs = 2
c_out = 5
t = torch.rand(bs, c_out)
test_eq(Vector_Scale(c_out)(t), t)
test_eq(Vector_Scale(c_out).weight.data, torch.ones(c_out))
test_eq(Vector_Scale(c_out).weight.requires_grad, True)
test_eq(type(Vector_Scale(c_out).weight), torch.nn.parameter.Parameter)
bs = 2
c_out = 3
weight = 2
bias = 1
t = torch.rand(bs, c_out)
test_eq(Matrix_Scale(c_out)(t).shape, t.shape)
test_eq(Matrix_Scale(c_out).weight.requires_grad, True)
test_eq(type(Matrix_Scale(c_out).weight), torch.nn.parameter.Parameter)
#export
class LogitAdjustmentLayer(Module):
"Logit Adjustment for imbalanced datasets"
def __init__(self, class_priors):
self.class_priors = class_priors
def forward(self, x):
return x.add(self.class_priors)
LogitAdjLayer = LogitAdjustmentLayer
bs, n_classes = 16, 3
class_priors = torch.rand(n_classes)
logits = torch.randn(bs, n_classes) * 2
test_eq(LogitAdjLayer(class_priors)(logits), logits + class_priors)
#export
class PPV(Module):
def __init__(self, dim=-1):
self.dim = dim
def forward(self, x):
return torch.gt(x, 0).sum(dim=self.dim).float() / x.shape[self.dim]
def __repr__(self): return f'{self.__class__.__name__}(dim={self.dim})'
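`PPV` computes the proportion of positive values along a dimension. A hand-checkable sketch of the same computation in plain `torch`:

```python
import torch

x = torch.tensor([[ 1., -1.,  2., -3.],
                  [-1., -2., -3.,  4.]])
# fraction of strictly positive entries along the last dim
ppv = torch.gt(x, 0).sum(dim=-1).float() / x.shape[-1]   # [0.5, 0.25]
```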
class PPAuc(Module):
def __init__(self, dim=-1):
self.dim = dim
def forward(self, x):
x = F.relu(x).sum(self.dim) / (abs(x).sum(self.dim) + 1e-8)
return x
def __repr__(self): return f'{self.__class__.__name__}(dim={self.dim})'
class MaxPPVPool1d(Module):
"Drop-in replacement for AdaptiveConcatPool1d - multiplies nf by 2"
def forward(self, x):
_max = x.max(dim=-1).values
_ppv = torch.gt(x, 0).sum(dim=-1).float() / x.shape[-1]
return torch.cat((_max, _ppv), dim=-1).unsqueeze(2)
bs = 2
nf = 5
sl = 4
t = torch.rand(bs, nf, sl)
test_eq(MaxPPVPool1d()(t).shape, (bs, nf*2, 1))
test_eq(MaxPPVPool1d()(t).shape, AdaptiveConcatPool1d(1)(t).shape)
#export
class AdaptiveWeightedAvgPool1d(Module):
'''Global Pooling layer that performs a weighted average along the temporal axis
It can be considered as a channel-wise form of local temporal attention. Inspired by the paper:
Hyun, J., Seong, H., & Kim, E. (2019). Universal Pooling--A New Pooling Method for Convolutional Neural Networks. arXiv preprint arXiv:1907.11440.'''
def __init__(self, n_in, seq_len, mult=2, n_layers=2, ln=False, dropout=0.5, act=nn.ReLU(), zero_init=True):
layers = nn.ModuleList()
for i in range(n_layers):
inp_mult = mult if i > 0 else 1
out_mult = mult if i < n_layers -1 else 1
p = dropout[i] if is_listy(dropout) else dropout
layers.append(LinLnDrop(seq_len * inp_mult, seq_len * out_mult, ln=False, p=p,
act=act if i < n_layers-1 and n_layers > 1 else None))
self.layers = layers
self.softmax = SoftMax(-1)
if zero_init: init_lin_zero(self)
def forward(self, x):
wap = x
for l in self.layers: wap = l(wap)
wap = self.softmax(wap)
return torch.mul(x, wap).sum(-1)
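The core idea of `AdaptiveWeightedAvgPool1d` is to produce per-channel scores over the sequence, softmax-normalize them along time, and take the weighted sum. A minimal sketch of just that pooling step (zero scores stand in for the zero-initialized linear layers, which is why the result collapses to a plain mean):

```python
import torch
import torch.nn.functional as F

x = torch.rand(2, 3, 10)                 # (bs, channels, seq_len)
logits = torch.zeros_like(x)             # zero-init scores -> uniform weights
w = F.softmax(logits, dim=-1)            # weights sum to 1 along time
pooled = (x * w).sum(-1)                 # (bs, channels)

# with zero scores this reduces to a plain average over time
uniform = torch.allclose(pooled, x.mean(-1))
```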
#export
class GAP1d(Module):
"Global Adaptive Pooling + Flatten"
def __init__(self, output_size=1):
self.gap = nn.AdaptiveAvgPool1d(output_size)
self.flatten = Flatten()
def forward(self, x):
return self.flatten(self.gap(x))
class GACP1d(Module):
"Global AdaptiveConcatPool + Flatten"
def __init__(self, output_size=1):
self.gacp = AdaptiveConcatPool1d(output_size)
self.flatten = Flatten()
def forward(self, x):
return self.flatten(self.gacp(x))
class GAWP1d(Module):
"Global AdaptiveWeightedAvgPool1d + Flatten"
def __init__(self, n_in, seq_len, n_layers=2, ln=False, dropout=0.5, act=nn.ReLU(), zero_init=False):
self.gacp = AdaptiveWeightedAvgPool1d(n_in, seq_len, n_layers=n_layers, ln=ln, dropout=dropout, act=act, zero_init=zero_init)
self.flatten = Flatten()
def forward(self, x):
return self.flatten(self.gacp(x))
# export
class GlobalWeightedAveragePool1d(Module):
""" Global Weighted Average Pooling layer
Inspired by Building Efficient CNN Architecture for Offline Handwritten Chinese Character Recognition
https://arxiv.org/pdf/1804.01259.pdf
"""
def __init__(self, n_in, seq_len):
self.weight = nn.Parameter(torch.ones(1, n_in, seq_len))
self.bias = nn.Parameter(torch.zeros(1, n_in, seq_len))
def forward(self, x):
α = F.softmax(torch.sigmoid(x * self.weight + self.bias), dim=-1)
return (x * α).sum(-1)
GWAP1d = GlobalWeightedAveragePool1d
def gwa_pool_head(n_in, c_out, seq_len, bn=True, fc_dropout=0.):
return nn.Sequential(GlobalWeightedAveragePool1d(n_in, seq_len), Flatten(), LinBnDrop(n_in, c_out, p=fc_dropout, bn=bn))
t = torch.randn(16, 64, 50)
head = gwa_pool_head(64, 5, 50)
test_eq(head(t).shape, (16, 5))
#export
class AttentionalPool1d(Module):
"""Global Adaptive Pooling layer inspired by Attentional Pooling for Action Recognition https://arxiv.org/abs/1711.01467"""
def __init__(self, n_in, c_out, bn=False):
store_attr()
self.bn = nn.BatchNorm1d(n_in) if bn else None
self.conv1 = Conv1d(n_in, 1, 1)
self.conv2 = Conv1d(n_in, c_out, 1)
def forward(self, x):
if self.bn is not None: x = self.bn(x)
return (self.conv1(x) @ self.conv2(x).transpose(1,2)).transpose(1,2)
class GAttP1d(nn.Sequential):
def __init__(self, n_in, c_out, bn=False):
super().__init__(AttentionalPool1d(n_in, c_out, bn=bn), Flatten())
def attentional_pool_head(n_in, c_out, seq_len=None, bn=True, **kwargs):
return nn.Sequential(AttentionalPool1d(n_in, c_out, bn=bn, **kwargs), Flatten())
bs, c_in, seq_len = 16, 1, 50
c_out = 3
t = torch.rand(bs, c_in, seq_len)
test_eq(GAP1d()(t).shape, (bs, c_in))
test_eq(GACP1d()(t).shape, (bs, c_in*2))
bs, c_in, seq_len = 16, 4, 50
t = torch.rand(bs, c_in, seq_len)
test_eq(GAP1d()(t).shape, (bs, c_in))
test_eq(GACP1d()(t).shape, (bs, c_in*2))
test_eq(GAWP1d(c_in, seq_len, n_layers=2, ln=False, dropout=0.5, act=nn.ReLU(), zero_init=False)(t).shape, (bs, c_in))
test_eq(GAWP1d(c_in, seq_len, n_layers=1, ln=False, dropout=0.5, zero_init=False)(t).shape, (bs, c_in))
test_eq(GAWP1d(c_in, seq_len, n_layers=1, ln=False, dropout=0.5, zero_init=True)(t).shape, (bs, c_in))
test_eq(AttentionalPool1d(c_in, c_out)(t).shape, (bs, c_out, 1))
bs, c_in, seq_len = 16, 128, 50
c_out = 14
t = torch.rand(bs, c_in, seq_len)
attp = attentional_pool_head(c_in, c_out)
test_eq(attp(t).shape, (bs, c_out))
#export
class GEGLU(Module):
def forward(self, x):
x, gates = x.chunk(2, dim=-1)
return x * F.gelu(gates)
class ReGLU(Module):
def forward(self, x):
x, gates = x.chunk(2, dim=-1)
return x * F.relu(gates)
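Both gated activations split the last dimension in half and use one half to gate the other, so the output feature dimension is half the input's. A self-contained sketch of both:

```python
import torch
import torch.nn.functional as F

x = torch.rand(4, 10, 16)
h, gates = x.chunk(2, dim=-1)       # two (4, 10, 8) halves
geglu = h * F.gelu(gates)           # GEGLU: GELU-gated half
reglu = h * F.relu(gates)           # ReGLU: ReLU-gated half
```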
#export
pytorch_acts = [nn.ELU, nn.LeakyReLU, nn.PReLU, nn.ReLU, nn.ReLU6, nn.SELU, nn.CELU, nn.GELU, nn.Sigmoid, Mish, nn.Softplus,
nn.Tanh, nn.Softmax, GEGLU, ReGLU]
pytorch_act_names = [a.__name__.lower() for a in pytorch_acts]
def get_act_fn(act, **act_kwargs):
if act is None: return
elif isinstance(act, nn.Module): return act
elif callable(act): return act(**act_kwargs)
idx = pytorch_act_names.index(act.lower())
return pytorch_acts[idx](**act_kwargs)
test_eq(get_act_fn(nn.ReLU).__repr__(), "ReLU()")
test_eq(get_act_fn(nn.ReLU()).__repr__(), "ReLU()")
test_eq(get_act_fn(nn.LeakyReLU, negative_slope=0.05).__repr__(), "LeakyReLU(negative_slope=0.05)")
test_eq(get_act_fn('reglu').__repr__(), "ReGLU()")
test_eq(get_act_fn('leakyrelu', negative_slope=0.05).__repr__(), "LeakyReLU(negative_slope=0.05)")
#export
def create_pool_head(n_in, c_out, seq_len=None, concat_pool=False, fc_dropout=0., bn=False, y_range=None, **kwargs):
if kwargs: print(f'{kwargs} not being used')
if concat_pool: n_in*=2
layers = [GACP1d(1) if concat_pool else GAP1d(1)]
layers += [LinBnDrop(n_in, c_out, bn=bn, p=fc_dropout)]
if y_range: layers += [SigmoidRange(*y_range)]
return nn.Sequential(*layers)
pool_head = create_pool_head
average_pool_head = partial(pool_head, concat_pool=False)
setattr(average_pool_head, "__name__", "average_pool_head")
concat_pool_head = partial(pool_head, concat_pool=True)
setattr(concat_pool_head, "__name__", "concat_pool_head")
bs = 16
nf = 12
c_out = 2
seq_len = 20
t = torch.rand(bs, nf, seq_len)
test_eq(create_pool_head(nf, c_out, seq_len, fc_dropout=0.5)(t).shape, (bs, c_out))
test_eq(create_pool_head(nf, c_out, seq_len, concat_pool=True, fc_dropout=0.5)(t).shape, (bs, c_out))
create_pool_head(nf, c_out, seq_len, concat_pool=True, bn=True, fc_dropout=.5)
#export
def max_pool_head(n_in, c_out, seq_len, fc_dropout=0., bn=False, y_range=None, **kwargs):
if kwargs: print(f'{kwargs} not being used')
layers = [nn.MaxPool1d(seq_len, **kwargs), Flatten()]
layers += [LinBnDrop(n_in, c_out, bn=bn, p=fc_dropout)]
if y_range: layers += [SigmoidRange(*y_range)]
return nn.Sequential(*layers)
bs = 16
nf = 12
c_out = 2
seq_len = 20
t = torch.rand(bs, nf, seq_len)
test_eq(max_pool_head(nf, c_out, seq_len, fc_dropout=0.5)(t).shape, (bs, c_out))
#export
def create_pool_plus_head(*args, lin_ftrs=None, fc_dropout=0., concat_pool=True, bn_final=False, lin_first=False, y_range=None):
nf = args[0]
c_out = args[1]
if concat_pool: nf = nf * 2
lin_ftrs = [nf, 512, c_out] if lin_ftrs is None else [nf] + lin_ftrs + [c_out]
ps = L(fc_dropout)
if len(ps) == 1: ps = [ps[0]/2] * (len(lin_ftrs)-2) + ps
actns = [nn.ReLU(inplace=True)] * (len(lin_ftrs)-2) + [None]
pool = AdaptiveConcatPool1d() if concat_pool else nn.AdaptiveAvgPool1d(1)
layers = [pool, Flatten()]
if lin_first: layers.append(nn.Dropout(ps.pop(0)))
for ni,no,p,actn in zip(lin_ftrs[:-1], lin_ftrs[1:], ps, actns):
layers += LinBnDrop(ni, no, bn=True, p=p, act=actn, lin_first=lin_first)
if lin_first: layers.append(nn.Linear(lin_ftrs[-2], c_out))
if bn_final: layers.append(nn.BatchNorm1d(lin_ftrs[-1], momentum=0.01))
if y_range is not None: layers.append(SigmoidRange(*y_range))
return nn.Sequential(*layers)
pool_plus_head = create_pool_plus_head
bs = 16
nf = 12
c_out = 2
seq_len = 20
t = torch.rand(bs, nf, seq_len)
test_eq(create_pool_plus_head(nf, c_out, seq_len, fc_dropout=0.5)(t).shape, (bs, c_out))
test_eq(create_pool_plus_head(nf, c_out, concat_pool=True, fc_dropout=0.5)(t).shape, (bs, c_out))
create_pool_plus_head(nf, c_out, seq_len, fc_dropout=0.5)
#export
def create_conv_head(*args, adaptive_size=None, y_range=None):
nf = args[0]
c_out = args[1]
layers = [nn.AdaptiveAvgPool1d(adaptive_size)] if adaptive_size is not None else []
for i in range(2):
if nf > 1:
layers += [ConvBlock(nf, nf // 2, 1)]
nf = nf//2
else: break
layers += [ConvBlock(nf, c_out, 1), GAP1d(1)]
if y_range: layers += [SigmoidRange(*y_range)]
return nn.Sequential(*layers)
conv_head = create_conv_head
bs = 16
nf = 12
c_out = 2
seq_len = 20
t = torch.rand(bs, nf, seq_len)
test_eq(create_conv_head(nf, c_out, seq_len)(t).shape, (bs, c_out))
test_eq(create_conv_head(nf, c_out, adaptive_size=50)(t).shape, (bs, c_out))
create_conv_head(nf, c_out, 50)
#export
def create_mlp_head(nf, c_out, seq_len=None, flatten=True, fc_dropout=0., bn=False, lin_first=False, y_range=None):
if flatten: nf *= seq_len
layers = [Flatten()] if flatten else []
layers += [LinBnDrop(nf, c_out, bn=bn, p=fc_dropout, lin_first=lin_first)]
if y_range: layers += [SigmoidRange(*y_range)]
return nn.Sequential(*layers)
mlp_head = create_mlp_head
bs = 16
nf = 12
c_out = 2
seq_len = 20
t = torch.rand(bs, nf, seq_len)
test_eq(create_mlp_head(nf, c_out, seq_len, fc_dropout=0.5)(t).shape, (bs, c_out))
t = torch.rand(bs, nf, seq_len)
create_mlp_head(nf, c_out, seq_len, bn=True, fc_dropout=.5)
#export
def create_fc_head(nf, c_out, seq_len=None, flatten=True, lin_ftrs=None, y_range=None, fc_dropout=0., bn=False, bn_final=False, act=nn.ReLU(inplace=True)):
if flatten: nf *= seq_len
layers = [Flatten()] if flatten else []
lin_ftrs = [nf, 512, c_out] if lin_ftrs is None else [nf] + lin_ftrs + [c_out]
if not is_listy(fc_dropout): fc_dropout = [fc_dropout]*(len(lin_ftrs) - 1)
actns = [act for _ in range(len(lin_ftrs) - 2)] + [None]
layers += [LinBnDrop(lin_ftrs[i], lin_ftrs[i+1], bn=bn and (i!=len(actns)-1 or bn_final), p=p, act=a) for i,(p,a) in enumerate(zip(fc_dropout+[0.], actns))]
if y_range is not None: layers.append(SigmoidRange(*y_range))
return nn.Sequential(*layers)
fc_head = create_fc_head
bs = 16
nf = 12
c_out = 2
seq_len = 20
t = torch.rand(bs, nf, seq_len)
test_eq(create_fc_head(nf, c_out, seq_len, fc_dropout=0.5)(t).shape, (bs, c_out))
create_mlp_head(nf, c_out, seq_len, bn=True, fc_dropout=.5)
#export
def create_rnn_head(*args, fc_dropout=0., bn=False, y_range=None):
nf = args[0]
c_out = args[1]
layers = [LastStep()]
layers += [LinBnDrop(nf, c_out, bn=bn, p=fc_dropout)]
if y_range: layers += [SigmoidRange(*y_range)]
return nn.Sequential(*layers)
rnn_head = create_rnn_head
bs = 16
nf = 12
c_out = 2
seq_len = 20
t = torch.rand(bs, nf, seq_len)
test_eq(create_rnn_head(nf, c_out, seq_len, fc_dropout=0.5)(t).shape, (bs, c_out))
create_rnn_head(nf, c_out, seq_len, bn=True, fc_dropout=.5)
# export
def imputation_head(c_in, c_out, seq_len=None, ks=1, y_range=None, fc_dropout=0.):
layers = [nn.Dropout(fc_dropout), nn.Conv1d(c_in, c_out, ks)]
if y_range is not None:
y_range = (tensor(y_range[0]), tensor(y_range[1]))
layers += [SigmoidRange(*y_range)]
return nn.Sequential(*layers)
bs = 16
nf = 12
ni = 2
seq_len = 20
t = torch.rand(bs, nf, seq_len)
head = imputation_head(nf, ni, seq_len=None, ks=1, y_range=None, fc_dropout=0.)
test_eq(head(t).shape, (bs, ni, seq_len))
head = imputation_head(nf, ni, seq_len=None, ks=1, y_range=(.3,.7), fc_dropout=0.)
test_ge(head(t).min(), .3)
test_le(head(t).max(), .7)
y_range = (tensor([0.1000, 0.1000, 0.1000, 0.1000, 0.2000, 0.2000, 0.2000, 0.2000, 0.3000,
0.3000, 0.3000, 0.3000]),
tensor([0.6000, 0.6000, 0.6000, 0.6000, 0.7000, 0.7000, 0.7000, 0.7000, 0.8000,
0.8000, 0.8000, 0.8000]))
test_ge(head(t).min(), .1)
test_le(head(t).max(), .9)
head = imputation_head(nf, ni, seq_len=None, ks=1, y_range=y_range, fc_dropout=0.)
head
# export
class create_conv_lin_3d_head(nn.Sequential):
"Module to create a 3d output head"
def __init__(self, n_in, n_out, seq_len, d=(), conv_first=True, conv_bn=True, lin_first=False, lin_bn=True, act=None, fc_dropout=0., **kwargs):
assert len(d) == 2, "you must pass a tuple of len == 2 to create a 3d output"
conv = [BatchNorm(n_in, ndim=1)] if conv_bn else []
conv.append(Conv1d(n_in, d[0], 1, padding=0, bias=not conv_bn, **kwargs))
l = [Transpose(-1, -2), BatchNorm(n_out if lin_first else seq_len, ndim=1), Transpose(-1, -2)] if lin_bn else []
if fc_dropout != 0: l.append(nn.Dropout(fc_dropout))
lin = [nn.Linear(seq_len, d[1], bias=not lin_bn)]
if act is not None: lin.append(act)
lin_layers = lin+l if lin_first else l+lin
layers = conv + lin_layers if conv_first else lin_layers + conv
super().__init__(*layers)
conv_lin_3d_head = create_conv_lin_3d_head
t = torch.randn(16, 3, 50)
head = conv_lin_3d_head(3, 20, 50, (4,5))
test_eq(head(t).shape, (16, 4, 5))
head = conv_lin_3d_head(3, 20, 50, (2, 10))
test_eq(head(t).shape, (16, 2, 10))
head
# export
class create_lin_3d_head(nn.Sequential):
"Module to create a 3d output head with linear layers"
def __init__(self, n_in, n_out, seq_len, d=(), lin_first=False, bn=True, act=None, fc_dropout=0.):
assert len(d) == 2, "you must pass a tuple of len == 2 to create a 3d output"
layers = [Flatten()]
layers += LinBnDrop(n_in * seq_len, n_out, bn=bn, p=fc_dropout, act=act, lin_first=lin_first)
layers += [Reshape(*d)]
super().__init__(*layers)
lin_3d_head = create_lin_3d_head
t = torch.randn(16, 64, 50)
head = lin_3d_head(64, 10, 50, (5,2))
test_eq(head(t).shape, (16, 5, 2))
head = lin_3d_head(64, 5, 50, (5, 1))
test_eq(head(t).shape, (16, 5, 1))
head
# export
class create_conv_3d_head(nn.Sequential):
"Module to create a 3d output head with a convolutional layer"
def __init__(self, n_in, c_out, seq_len, d=(), lin_first=False, bn=True, act=None, fc_dropout=0.):
assert len(d) == 2, "you must pass a tuple of len == 2 to create a 3d output"
assert d[1] == seq_len, 'you can only use this head when d[1] == seq_len'
super().__init__(Conv(n_in, d[0], 1))
conv_3d_head = create_conv_3d_head
bs = 16
c_out = 4
seq_len = 50
d = (2,50)
nf = 128
t = torch.rand(bs, nf, seq_len)
test_eq(conv_3d_head(nf, c_out, seq_len, d)(t).shape, (bs, *d))
#export
def universal_pool_head(n_in, c_out, seq_len, mult=2, pool_n_layers=2, pool_ln=True, pool_dropout=0.5, pool_act=nn.ReLU(),
zero_init=True, bn=True, fc_dropout=0.):
return nn.Sequential(AdaptiveWeightedAvgPool1d(n_in, seq_len, n_layers=pool_n_layers, mult=mult, ln=pool_ln, dropout=pool_dropout, act=pool_act),
Flatten(), LinBnDrop(n_in, c_out, p=fc_dropout, bn=bn))
bs, c_in, seq_len = 16, 128, 50
c_out = 14
t = torch.rand(bs, c_in, seq_len)
uph = universal_pool_head(c_in, c_out, seq_len)
test_eq(uph(t).shape, (bs, c_out))
uph = universal_pool_head(c_in, c_out, seq_len, 2)
test_eq(uph(t).shape, (bs, c_out))
#export
heads = [mlp_head, fc_head, average_pool_head, max_pool_head, concat_pool_head, pool_plus_head, conv_head, rnn_head,
conv_lin_3d_head, lin_3d_head, conv_3d_head, attentional_pool_head, universal_pool_head, gwa_pool_head]
bs, c_in, seq_len = 16, 128, 50
c_out = 14
d = (7, 2)
t = torch.rand(bs, c_in, seq_len)
for head in heads:
print(head.__name__)
if head.__name__ == 'create_conv_3d_head':
test_eq(head(c_in, c_out, seq_len, (d[0], seq_len))(t).shape, (bs, *(d[0], seq_len)))
elif '3d' in head.__name__:
test_eq(head(c_in, c_out, seq_len, d)(t).shape, (bs, *d))
else:
test_eq(head(c_in, c_out, seq_len)(t).shape, (bs, c_out))
#export
class SqueezeExciteBlock(Module):
def __init__(self, ni, reduction=16):
self.avg_pool = GAP1d(1)
self.fc = nn.Sequential(nn.Linear(ni, ni // reduction, bias=False), nn.ReLU(), nn.Linear(ni // reduction, ni, bias=False), nn.Sigmoid())
def forward(self, x):
y = self.avg_pool(x)
y = self.fc(y).unsqueeze(2)
return x * y.expand_as(x)
bs = 2
ni = 32
sl = 4
t = torch.rand(bs, ni, sl)
test_eq(SqueezeExciteBlock(ni)(t).shape, (bs, ni, sl))
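# A minimal NumPy sketch of the squeeze-and-excitation gating above (hypothetical
# helper, not part of tsai): global-average-pool each channel, pass the result
# through a bottleneck, and rescale the input channel-wise with sigmoid gates.
import numpy as np
def se_gate_sketch(x, w1, w2):
    y = x.mean(axis=-1)                     # squeeze: (bs, ni)
    y = np.maximum(y @ w1, 0) @ w2          # excitation bottleneck with ReLU
    y = 1 / (1 + np.exp(-y))                # sigmoid gates in (0, 1)
    return x * y[..., None]                 # channel-wise rescaling
x_np = np.random.rand(2, 32, 4)
w1_np, w2_np = np.random.randn(32, 2), np.random.randn(2, 32)
assert se_gate_sketch(x_np, w1_np, w2_np).shape == x_np.shape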
#export
class GaussianNoise(Module):
"""Gaussian noise regularizer.
Args:
sigma (float, optional): relative standard deviation used to generate the
noise. Relative means that it will be multiplied by the magnitude of
the value your are adding the noise to. This means that sigma can be
the same regardless of the scale of the vector.
is_relative_detach (bool, optional): whether to detach the variable before
computing the scale of the noise. If `False` then the scale of the noise
won't be seen as a constant but something to optimize: this will bias the
network to generate vectors with smaller values.
"""
def __init__(self, sigma=0.1, is_relative_detach=True):
self.sigma, self.is_relative_detach = sigma, is_relative_detach
def forward(self, x):
if self.training and self.sigma not in [0, None]:
scale = self.sigma * (x.detach() if self.is_relative_detach else x)
sampled_noise = torch.empty(x.size(), device=x.device).normal_() * scale
x = x + sampled_noise
return x
t = torch.ones(2,3,4)
test_ne(GaussianNoise()(t), t)
test_eq(GaussianNoise()(t).shape, t.shape)
t = torch.ones(2,3)
test_ne(GaussianNoise()(t), t)
test_eq(GaussianNoise()(t).shape, t.shape)
t = torch.ones(2)
test_ne(GaussianNoise()(t), t)
test_eq(GaussianNoise()(t).shape, t.shape)
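# Hedged NumPy restatement of the "relative" noise above (illustrative only, not
# the tsai implementation): the noise std is sigma times the input itself, so
# larger-magnitude inputs receive proportionally larger noise while sigma stays
# scale-free.
import numpy as np
rng = np.random.default_rng(0)
x_np = np.array([1., 10., 100.])
noise_np = rng.standard_normal(x_np.shape) * (0.1 * x_np)
assert noise_np.shape == x_np.shape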
#export
def gambler_loss(reward=2):
def _gambler_loss(model_output, targets):
outputs = torch.nn.functional.softmax(model_output, dim=1)
outputs, reservation = outputs[:, :-1], outputs[:, -1]
gain = torch.gather(outputs, dim=1, index=targets.unsqueeze(1)).squeeze()
doubling_rate = (gain + reservation / reward).log()
return - doubling_rate.mean()
return _gambler_loss
model_output = torch.rand(16, 3)
targets = torch.randint(0, 2, (16,))
criterion = gambler_loss(2)
criterion(model_output, targets)
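# The gambler's loss reserves one extra output as an "abstain" option: after a
# softmax over c_out + 1 values, the loss is -log(p_correct + p_reserve / reward).
# Hedged NumPy re-computation for a single sample (illustrative only):
import numpy as np
logits_np = np.array([2.0, 0.5, 1.0])             # 2 classes + 1 reservation slot
probs_np = np.exp(logits_np) / np.exp(logits_np).sum()
gambler_single = -np.log(probs_np[0] + probs_np[-1] / 2)   # target class 0, reward = 2
assert gambler_single > 0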
#export
def CrossEntropyLossOneHot(output, target, **kwargs):
if target.ndim == 2: _, target = target.max(dim=1)
return nn.CrossEntropyLoss(**kwargs)(output, target)
output = torch.rand(16, 2)
target = torch.randint(0, 2, (16,))
CrossEntropyLossOneHot(output, target)
from tsai.data.transforms import OneHot
output = nn.Parameter(torch.rand(16, 2))
target = torch.randint(0, 2, (16,))
one_hot_target = OneHot()(target)
CrossEntropyLossOneHot(output, one_hot_target)
#hide
def proba_certainty(output):
if output.sum(-1).mean().item() != 1: output = F.softmax(output, -1)
return (output.max(-1).values - 1. / output.shape[-1])/( 1 - 1. / output.shape[-1])
#hide
target = random_shuffle(concat(torch.zeros(5), torch.ones(7), torch.ones(4) + 1)).long()
output = nn.Parameter(5 * torch.rand((16, 3)) - 5 * torch.rand((16, 3)))
proba_certainty(output)
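# proba_certainty rescales the top softmax probability into [0, 1]:
# (max_p - 1/K) / (1 - 1/K) is 0 for a uniform prediction and 1 for a one-hot one.
# Quick NumPy check of the two extremes (independent of the torch version above):
import numpy as np
def certainty_np(p):
    K = p.shape[-1]
    return (p.max(-1) - 1.0 / K) / (1 - 1.0 / K)
assert np.isclose(certainty_np(np.array([1/3, 1/3, 1/3])), 0.0)
assert np.isclose(certainty_np(np.array([1.0, 0.0, 0.0])), 1.0)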
#hide
def CrossEntropyLossOneHotWithUncertainty():
def _CrossEntropyLossOneHotWithUncertainty(output, target, **kwargs):
return (proba_certainty(output) * CrossEntropyLossOneHot(output, target, reduction='none', **kwargs)).mean()
return _CrossEntropyLossOneHotWithUncertainty
#hide
# https://stackoverflow.com/questions/22611446/perform-2-sample-t-test
from __future__ import print_function
import numpy as np
from scipy.stats import ttest_ind, ttest_ind_from_stats
from scipy.special import stdtr
np.random.seed(1)
# Create sample data.
a = np.random.randn(40)
b = 4*np.random.randn(50)
# Use scipy.stats.ttest_ind.
t, p = ttest_ind(a, b, equal_var=False)
print("ttest_ind: t = %g p = %g" % (t, p))
# Compute the descriptive statistics of a and b.
abar = a.mean()
avar = a.var(ddof=1)
na = a.size
adof = na - 1
bbar = b.mean()
bvar = b.var(ddof=1)
nb = b.size
bdof = nb - 1
# Use scipy.stats.ttest_ind_from_stats.
t2, p2 = ttest_ind_from_stats(abar, np.sqrt(avar), na,
bbar, np.sqrt(bvar), nb,
equal_var=False)
print("ttest_ind_from_stats: t = %g p = %g" % (t2, p2))
# Use the formulas directly.
tf = (abar - bbar) / np.sqrt(avar/na + bvar/nb)
dof = (avar/na + bvar/nb)**2 / (avar**2/(na**2*adof) + bvar**2/(nb**2*bdof))
pf = 2*stdtr(dof, -np.abs(tf))
print("formula: t = %g p = %g" % (tf, pf))
a = tensor(a)
b = tensor(b)
tf = (a.mean() - b.mean()) / torch.sqrt(a.var()/a.size(0) + b.var()/b.size(0))
print("formula: t = %g" % (tf))
ttest_tensor(a, b)
#export
def ttest_bin_loss(output, target):
output = nn.Softmax(dim=-1)(output[:, 1])
return ttest_tensor(output[target == 0], output[target == 1])
def ttest_reg_loss(output, target):
return ttest_tensor(output[target <= 0], output[target > 0])
for _ in range(100):
output = torch.rand(256, 2)
target = torch.randint(0, 2, (256,))
test_close(ttest_bin_loss(output, target).item(),
ttest_ind(nn.Softmax(dim=-1)(output[:, 1])[target == 0], nn.Softmax(dim=-1)(output[:, 1])[target == 1], equal_var=False)[0], eps=1e-3)
#export
class CenterLoss(Module):
r"""
Code in Pytorch has been slightly modified from: https://github.com/KaiyangZhou/pytorch-center-loss/blob/master/center_loss.py
Based on paper: Wen et al. A Discriminative Feature Learning Approach for Deep Face Recognition. ECCV 2016.
Args:
c_out (int): number of classes.
logits_dim (int): dim 1 of the logits. By default same as c_out (for one hot encoded logits)
"""
def __init__(self, c_out, logits_dim=None):
logits_dim = ifnone(logits_dim, c_out)
self.c_out, self.logits_dim = c_out, logits_dim
self.centers = nn.Parameter(torch.randn(c_out, logits_dim))
self.classes = nn.Parameter(torch.arange(c_out).long(), requires_grad=False)
def forward(self, x, labels):
"""
Args:
x: feature matrix with shape (batch_size, logits_dim).
labels: ground truth labels with shape (batch_size).
"""
bs = x.shape[0]
distmat = torch.pow(x, 2).sum(dim=1, keepdim=True).expand(bs, self.c_out) + \
torch.pow(self.centers, 2).sum(dim=1, keepdim=True).expand(self.c_out, bs).T
distmat = torch.addmm(distmat, x, self.centers.T, beta=1, alpha=-2)
labels = labels.unsqueeze(1).expand(bs, self.c_out)
mask = labels.eq(self.classes.expand(bs, self.c_out))
dist = distmat * mask.float()
loss = dist.clamp(min=1e-12, max=1e+12).sum() / bs
return loss
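# The distmat above is the standard expansion of the squared Euclidean distance:
# ||x - c||^2 = ||x||^2 + ||c||^2 - 2 * x @ c.T, with the -2 * x @ c.T term
# supplied by addmm(alpha=-2). NumPy check of the identity (illustrative only):
import numpy as np
x_np, c_np = np.random.rand(4, 3), np.random.rand(5, 3)
expanded_np = (x_np**2).sum(1)[:, None] + (c_np**2).sum(1)[None, :] - 2 * x_np @ c_np.T
direct_np = ((x_np[:, None, :] - c_np[None, :, :])**2).sum(-1)
assert np.allclose(expanded_np, direct_np)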
class CenterPlusLoss(Module):
def __init__(self, loss, c_out, ฮป=1e-2, logits_dim=None):
self.loss, self.c_out, self.ฮป = loss, c_out, ฮป
self.centerloss = CenterLoss(c_out, logits_dim)
def forward(self, x, labels):
return self.loss(x, labels) + self.ฮป * self.centerloss(x, labels)
def __repr__(self): return f"CenterPlusLoss(loss={self.loss}, c_out={self.c_out}, ฮป={self.ฮป})"
c_in = 10
x = torch.rand(64, c_in).to(device=default_device())
x = F.softmax(x, dim=1)
label = x.max(dim=1).indices
CenterLoss(c_in).to(x.device)(x, label), CenterPlusLoss(LabelSmoothingCrossEntropyFlat(), c_in).to(x.device)(x, label)
CenterPlusLoss(LabelSmoothingCrossEntropyFlat(), c_in)
#export
class FocalLoss(Module):
def __init__(self, gamma=0, eps=1e-7):
self.gamma, self.eps, self.ce = gamma, eps, CrossEntropyLossFlat()
def forward(self, input, target):
logp = self.ce(input, target)
p = torch.exp(-logp)
loss = (1 - p) ** self.gamma * logp
return loss.mean()
c_in = 10
x = torch.rand(64, c_in).to(device=default_device())
x = F.softmax(x, dim=1)
label = x.max(dim=1).indices
FocalLoss(c_in).to(x.device)(x, label)
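# Focal loss down-weights easy examples: with p = exp(-CE), the modulating factor
# (1 - p)**gamma shrinks the loss most where the model is already confident.
# Illustrative NumPy check on an easy vs. a hard example:
import numpy as np
ce_np = np.array([0.05, 2.0])
p_np = np.exp(-ce_np)
focal_np = (1 - p_np) ** 2.0 * ce_np        # gamma = 2
assert focal_np[0] / ce_np[0] < focal_np[1] / ce_np[1]   # the easy example shrinks far more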
#export
class TweedieLoss(Module):
def __init__(self, p=1.5, eps=1e-10):
"""
Tweedie loss as calculated in LightGBM
Args:
p: tweedie variance power (1 < p < 2)
eps: small number to avoid log(zero).
"""
assert p > 1 and p < 2, "make sure 1 < p < 2"
self.p, self.eps = p, eps
def forward(self, inp, targ):
inp = inp.flatten()
targ = targ.flatten()
torch.clamp_min_(inp, self.eps)
a = targ * torch.exp((1 - self.p) * torch.log(inp)) / (1 - self.p)
b = torch.exp((2 - self.p) * torch.log(inp)) / (2 - self.p)
loss = -a + b
return loss.mean()
c_in = 10
output = torch.rand(64).to(device=default_device())
target = torch.rand(64).to(device=default_device())
TweedieLoss().to(output.device)(output, target)
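# TweedieLoss evaluates inp**k as exp(k * log(inp)) after clamping inp away from
# zero, which is the same quantity written in a log-safe form. NumPy sanity check
# of that equivalence (illustrative):
import numpy as np
inp_np, p_var = np.array([0.5, 1.0, 2.0]), 1.5
assert np.allclose(np.exp((1 - p_var) * np.log(inp_np)), inp_np ** (1 - p_var))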
# export
class PositionwiseFeedForward(nn.Sequential):
def __init__(self, dim, dropout=0., act='reglu', mlp_ratio=1):
act_mult = 2 if act.lower() in ["geglu", "reglu"] else 1
super().__init__(nn.Linear(dim, dim * mlp_ratio * act_mult),
get_act_fn(act),
nn.Dropout(dropout),
nn.Linear(dim * mlp_ratio, dim),
nn.Dropout(dropout))
class TokenLayer(Module):
def __init__(self, token=True): self.token = token
def forward(self, x): return x[..., 0] if self.token else x.mean(-1)
def __repr__(self): return f"{self.__class__.__name__}()"
t = torch.randn(2,3,10)
m = PositionwiseFeedForward(10, dropout=0., act='reglu', mlp_ratio=1)
test_eq(m(t).shape, t.shape)
#export
class ScaledDotProductAttention(Module):
"""Scaled Dot-Product Attention module (Vaswani et al., 2017) with optional residual attention from previous layer (He et al, 2020)"""
def __init__(self, attn_dropout=0., res_attention=False):
self.attn_dropout = nn.Dropout(attn_dropout)
self.res_attention = res_attention
def forward(self, q:Tensor, k:Tensor, v:Tensor, prev:Optional[Tensor]=None, key_padding_mask:Optional[Tensor]=None, attn_mask:Optional[Tensor]=None):
'''
Input shape:
q : [bs x n_heads x max_q_len x d_k]
k : [bs x n_heads x d_k x seq_len]
v : [bs x n_heads x seq_len x d_v]
prev : [bs x n_heads x q_len x seq_len]
key_padding_mask: [bs x seq_len]
attn_mask : [1 x seq_len x seq_len]
Output shape:
output: [bs x n_heads x q_len x d_v]
attn : [bs x n_heads x q_len x seq_len]
scores : [bs x n_heads x q_len x seq_len]
'''
# Scaled MatMul (q, k) - similarity scores for all pairs of positions in an input sequence
attn_scores = torch.matmul(q / np.sqrt(q.shape[-1]), k)        # attn_scores : [bs x n_heads x max_q_len x q_len]; scale by sqrt(d_k)
# Add pre-softmax attention scores from the previous layer (optional)
if prev is not None: attn_scores = attn_scores + prev
# Attention mask (optional)
if attn_mask is not None: # attn_mask with shape [q_len x seq_len] - only used when q_len == seq_len
if attn_mask.dtype == torch.bool:
attn_scores.masked_fill_(attn_mask, -np.inf)
else:
attn_scores += attn_mask
# Key padding mask (optional)
if key_padding_mask is not None: # mask with shape [bs x q_len] (only when max_w_len == q_len)
attn_scores.masked_fill_(key_padding_mask.unsqueeze(1).unsqueeze(2), -np.inf)
# normalize the attention weights
attn_weights = F.softmax(attn_scores, dim=-1) # attn_weights : [bs x n_heads x max_q_len x q_len]
attn_weights = self.attn_dropout(attn_weights)
# compute the new values given the attention weights
output = torch.matmul(attn_weights, v) # output: [bs x n_heads x max_q_len x d_v]
if self.res_attention: return output, attn_weights, attn_scores
else: return output, attn_weights
B = 16
C = 10
M = 1500 # seq_len
n_heads = 1
D = 128 # model dimension
N = 512 # max_seq_len - latent's index dimension
d_k = D // n_heads
xb = torch.randn(B, C, M)
xb = (xb - xb.mean()) / xb.std()
# Attention
# input (Q)
lin = nn.Linear(M, N, bias=False)
Q = lin(xb).transpose(1,2)
test_eq(Q.shape, (B, N, C))
# q
to_q = nn.Linear(C, D, bias=False)
q = to_q(Q)
q = nn.LayerNorm(D)(q)
# k, v
context = xb.transpose(1,2)
to_kv = nn.Linear(C, D * 2, bias=False)
k, v = to_kv(context).chunk(2, dim = -1)
k = k.transpose(-1, -2)
k = nn.LayerNorm(M)(k)
v = nn.LayerNorm(D)(v)
test_eq(q.shape, (B, N, D))
test_eq(k.shape, (B, D, M))
test_eq(v.shape, (B, M, D))
output, attn, scores = ScaledDotProductAttention(res_attention=True)(q.unsqueeze(1), k.unsqueeze(1), v.unsqueeze(1))
test_eq(output.shape, (B, 1, N, D))
test_eq(attn.shape, (B, 1, N, M))
test_eq(scores.shape, (B, 1, N, M))
scores.mean(), scores.std()
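# The same computation in bare NumPy: standard scaled dot-product attention with
# no masking or dropout, assuming k is pre-transposed as in the module above.
import numpy as np
def sdp_attention_np(q, k, v):
    scores = q @ k / np.sqrt(q.shape[-1])            # [q_len x seq_len]
    w = np.exp(scores - scores.max(-1, keepdims=True))
    w = w / w.sum(-1, keepdims=True)                 # row-wise softmax
    return w @ v, w
q_np, k_np, v_np = np.random.randn(4, 8), np.random.randn(8, 6), np.random.randn(6, 8)
out_np, w_np = sdp_attention_np(q_np, k_np, v_np)
assert out_np.shape == (4, 8) and np.allclose(w_np.sum(-1), 1)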
#export
class MultiheadAttention(Module):
def __init__(self, d_model, n_heads, d_k=None, d_v=None, res_attention=False, attn_dropout=0., proj_dropout=0., qkv_bias=True):
"""Multi Head Attention Layer
Input shape:
Q: [batch_size (bs) x max_q_len x d_model]
K, V: [batch_size (bs) x q_len x d_model]
mask: [q_len x q_len]
"""
d_k = ifnone(d_k, d_model // n_heads)
d_v = ifnone(d_v, d_model // n_heads)
self.n_heads, self.d_k, self.d_v = n_heads, d_k, d_v
self.W_Q = nn.Linear(d_model, d_k * n_heads, bias=qkv_bias)
self.W_K = nn.Linear(d_model, d_k * n_heads, bias=qkv_bias)
self.W_V = nn.Linear(d_model, d_v * n_heads, bias=qkv_bias)
# Scaled Dot-Product Attention (multiple heads)
self.res_attention = res_attention
self.sdp_attn = ScaledDotProductAttention(attn_dropout=attn_dropout, res_attention=self.res_attention)
# Project output
self.to_out = nn.Sequential(nn.Linear(n_heads * d_v, d_model), nn.Dropout(proj_dropout))
def forward(self, Q:Tensor, K:Optional[Tensor]=None, V:Optional[Tensor]=None, prev:Optional[Tensor]=None,
key_padding_mask:Optional[Tensor]=None, attn_mask:Optional[Tensor]=None):
bs = Q.size(0)
if K is None: K = Q
if V is None: V = Q
# Linear (+ split in multiple heads)
q_s = self.W_Q(Q).view(bs, -1, self.n_heads, self.d_k).transpose(1,2) # q_s : [bs x n_heads x max_q_len x d_k]
k_s = self.W_K(K).view(bs, -1, self.n_heads, self.d_k).permute(0,2,3,1) # k_s : [bs x n_heads x d_k x q_len] - transpose(1,2) + transpose(2,3)
v_s = self.W_V(V).view(bs, -1, self.n_heads, self.d_v).transpose(1,2) # v_s : [bs x n_heads x q_len x d_v]
# Apply Scaled Dot-Product Attention (multiple heads)
if self.res_attention:
output, attn_weights, attn_scores = self.sdp_attn(q_s, k_s, v_s, prev=prev, key_padding_mask=key_padding_mask, attn_mask=attn_mask)
else:
output, attn_weights = self.sdp_attn(q_s, k_s, v_s, key_padding_mask=key_padding_mask, attn_mask=attn_mask)
# output: [bs x n_heads x q_len x d_v], attn: [bs x n_heads x q_len x q_len], scores: [bs x n_heads x max_q_len x q_len]
# back to the original inputs dimensions
output = output.transpose(1, 2).contiguous().view(bs, -1, self.n_heads * self.d_v) # output: [bs x q_len x n_heads * d_v]
output = self.to_out(output)
if self.res_attention: return output, attn_weights, attn_scores
else: return output, attn_weights
q = torch.rand([16, 3, 50, 8])
k = torch.rand([16, 3, 50, 8]).transpose(-1, -2)
v = torch.rand([16, 3, 50, 6])
attn_mask = torch.triu(torch.ones(50, 50)) # shape: q_len x q_len
key_padding_mask = torch.zeros(16, 50)
key_padding_mask[[1, 3, 6, 15], -10:] = 1
key_padding_mask = key_padding_mask.bool()
print('attn_mask', attn_mask.shape, 'key_padding_mask', key_padding_mask.shape)
output, attn = ScaledDotProductAttention(attn_dropout=.1)(q, k, v, attn_mask=attn_mask, key_padding_mask=key_padding_mask)
output.shape, attn.shape
t = torch.rand(16, 50, 128)
output, attn = MultiheadAttention(d_model=128, n_heads=3, d_k=8, d_v=6)(t, t, t, key_padding_mask=key_padding_mask, attn_mask=attn_mask)
output.shape, attn.shape
t = torch.rand(16, 50, 128)
att_mask = (torch.rand((50, 50)) > .85).float()
att_mask[att_mask == 1] = -np.inf
mha = MultiheadAttention(d_model=128, n_heads=3, d_k=8, d_v=6)
output, attn = mha(t, t, t, attn_mask=att_mask)
test_eq(torch.isnan(output).sum().item(), 0)
test_eq(torch.isnan(attn).sum().item(), 0)
loss = output[:2, :].sum()
test_eq(torch.isnan(loss).sum().item(), 0)
loss.backward()
for n, p in mha.named_parameters(): test_eq(torch.isnan(p.grad).sum().item(), 0)
t = torch.rand(16, 50, 128)
attn_mask = (torch.rand((50, 50)) > .85)
# True values will be masked
mha = MultiheadAttention(d_model=128, n_heads=3, d_k=8, d_v=6)
output, attn = mha(t, t, t, attn_mask=attn_mask)
test_eq(torch.isnan(output).sum().item(), 0)
test_eq(torch.isnan(attn).sum().item(), 0)
loss = output[:2, :].sum()
test_eq(torch.isnan(loss).sum().item(), 0)
loss.backward()
for n, p in mha.named_parameters(): test_eq(torch.isnan(p.grad).sum().item(), 0)
# export
class MultiConv1d(Module):
"""Module that applies multiple convolutions with different kernel sizes"""
def __init__(self, ni, nf=None, kss=[1,3,5,7], keep_original=False, separable=False, dim=1, **kwargs):
kss = listify(kss)
n_layers = len(kss)
if ni == nf: keep_original = False
if nf is None: nf = ni * (keep_original + n_layers)
nfs = [(nf - ni*keep_original) // n_layers] * n_layers
while np.sum(nfs) + ni * keep_original < nf:
for i in range(len(nfs)):
nfs[i] += 1
if np.sum(nfs) + ni * keep_original == nf: break
_conv = SeparableConv1d if separable else Conv1d
self.layers = nn.ModuleList()
for nfi,ksi in zip(nfs, kss):
self.layers.append(_conv(ni, nfi, ksi, **kwargs))
self.keep_original, self.dim = keep_original, dim
def forward(self, x):
output = [x] if self.keep_original else []
for l in self.layers:
output.append(l(x))
x = torch.cat(output, dim=self.dim)
return x
t = torch.rand(16, 6, 37)
test_eq(MultiConv1d(6, None, kss=[1,3,5], keep_original=True)(t).shape, [16, 24, 37])
test_eq(MultiConv1d(6, 36, kss=[1,3,5], keep_original=False)(t).shape, [16, 36, 37])
test_eq(MultiConv1d(6, None, kss=[1,3,5], keep_original=True, dim=-1)(t).shape, [16, 6, 37*4])
test_eq(MultiConv1d(6, 60, kss=[1,3,5], keep_original=True)(t).shape, [16, 60, 37])
test_eq(MultiConv1d(6, 60, kss=[1,3,5], separable=True)(t).shape, [16, 60, 37])
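# The filter-splitting loop in MultiConv1d distributes nf output channels as
# evenly as possible across the kernel sizes, bumping the first branches by one
# to absorb any remainder. Pure-Python restatement (illustrative only):
def split_filters(nf, n_layers):
    nfs = [nf // n_layers] * n_layers
    for i in range(nf % n_layers): nfs[i] += 1
    return nfs
assert split_filters(36, 3) == [12, 12, 12]
assert split_filters(37, 3) == [13, 12, 12]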
#export
class LSTMOutput(Module):
def forward(self, x): return x[0]
def __repr__(self): return f'{self.__class__.__name__}()'
t = ([1], [2], [3])
test_eq(LSTMOutput()(t), [1])
#export
class MultiEmbedding(Module):
def __init__(self, c_in, n_embeds, embed_dims=None, cat_pos=None, std=0.01):
if embed_dims is None:
embed_dims = [emb_sz_rule(s) for s in n_embeds]
else:
embed_dims = listify(embed_dims)
if len(embed_dims) == 1: embed_dims = embed_dims * len(n_embeds)
assert len(embed_dims) == len(n_embeds)
cat_pos = torch.as_tensor(listify(cat_pos)) if cat_pos else torch.arange(len(n_embeds))
self.register_buffer("cat_pos", cat_pos)
cont_pos = torch.tensor([p for p in torch.arange(c_in) if p not in self.cat_pos])
self.register_buffer("cont_pos", cont_pos)
self.cat_embed = nn.ModuleList([Embedding(n,d,std=std) for n,d in zip(n_embeds, embed_dims)])
def forward(self, x):
if isinstance(x, tuple): x_cat, x_cont, *_ = x
else: x_cat, x_cont = x[:, self.cat_pos], x[:, self.cont_pos]
x_cat = torch.cat([e(x_cat[:,i].long()).transpose(1,2) for i,e in enumerate(self.cat_embed)],1)
return torch.cat([x_cat, x_cont], 1)
a = alphabet[np.random.randint(0,3,40)]
b = ALPHABET[np.random.randint(6,10,40)]
c = np.random.rand(40).reshape(4,1,10)
map_a = {k:v for v,k in enumerate(np.unique(a))}
map_b = {k:v for v,k in enumerate(np.unique(b))}
n_embeds = [len(m.keys()) for m in [map_a, map_b]]
szs = [emb_sz_rule(n) for n in n_embeds]
a = np.asarray(a.map(map_a)).reshape(4,1,10)
b = np.asarray(b.map(map_b)).reshape(4,1,10)
inp = torch.from_numpy(np.concatenate((c,a,b), 1)).float()
memb = MultiEmbedding(3, n_embeds, cat_pos=[1,2])
# registered buffers are part of the state_dict() but not module.parameters()
assert all([(k in memb.state_dict().keys()) for k in ['cat_pos', 'cont_pos']])
embeddings = memb(inp)
print(n_embeds, szs, inp.shape, embeddings.shape)
test_eq(embeddings.shape, (inp.shape[0],sum(szs)+1,inp.shape[-1]))
#hide
from tsai.imports import create_scripts
from tsai.export import get_nb_name
nb_name = get_nb_name()
create_scripts(nb_name);
```
| github_jupyter |
# Contract a Grid Circuit
Shallow circuits on a planar grid with low-weight observables permit easy contraction.
### Imports
```
import numpy as np
import networkx as nx
import cirq
import quimb
import quimb.tensor as qtn
from cirq.contrib.svg import SVGCircuit
import cirq.contrib.quimb as ccq
%matplotlib inline
from matplotlib import pyplot as plt
import seaborn as sns
sns.set_style('ticks')
plt.rc('axes', labelsize=16, titlesize=16)
plt.rc('xtick', labelsize=14)
plt.rc('ytick', labelsize=14)
plt.rc('legend', fontsize=14, title_fontsize=16)
# theme colors
QBLUE = '#1967d2'
QRED = '#ea4335ff'
QGOLD = '#fbbc05ff'
QGREEN = '#34a853ff'
QGOLD2 = '#ffca28'
QBLUE2 = '#1e88e5'
```
## Make an example circuit topology
We'll use entangling gates according to this topology and compute the value of an observable on the red nodes.
```
width = 3
height = 4
graph = nx.grid_2d_graph(width, height)
rs = np.random.RandomState(52)
nx.set_edge_attributes(graph, name='weight',
values={e: np.round(rs.uniform(), 2) for e in graph.edges})
zz_inds = ((width//2, (height//2-1)), (width//2, (height//2)))
nx.draw_networkx(graph,
pos={n:n for n in graph.nodes},
node_color=[QRED if node in zz_inds else QBLUE for node in graph.nodes])
```
### Circuit
```
qubits = [cirq.GridQubit(*n) for n in graph]
circuit = cirq.Circuit(
cirq.H.on_each(qubits),
ccq.get_grid_moments(graph),
cirq.Moment([cirq.rx(0.456).on_each(qubits)]),
)
SVGCircuit(circuit)
```
### Observable
```
ZZ = cirq.Z(cirq.GridQubit(*zz_inds[0])) * cirq.Z(cirq.GridQubit(*zz_inds[1]))
ZZ
```
### The contraction
The value of the observable is $\langle 0 | U^\dagger (ZZ) U |0 \rangle$.
```
tot_c = ccq.circuit_for_expectation_value(circuit, ZZ)
SVGCircuit(tot_c)
```
## We can simplify the circuit
By cancelling the "forwards" and "backwards" parts of the circuit that lie outside the light-cone of the observable, we can reduce the number of gates to consider, and sometimes the number of qubits involved at all. To see this in action, run the next cell once, then keep re-running the cell after it to watch gates disappear from the circuit.
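The light-cone argument can be sketched without any quantum machinery: each layer of nearest-neighbor entangling gates widens the set of qubits that can influence the observable by one graph neighborhood, and gates on qubits outside that set cancel between $U$ and $U^\dagger$. A hypothetical pure-Python illustration on the same grid (helper names are ours, not Cirq API):

```python
# Grow the light cone of an observable's support, one entangling layer at a time.
def light_cone(support, neighbors, depth):
    cone = set(support)
    for _ in range(depth):
        cone |= {m for q in cone for m in neighbors(q)}
    return cone

def grid_neighbors(q, width=3, height=4):
    x, y = q
    cand = [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
    return [(a, b) for a, b in cand if 0 <= a < width and 0 <= b < height]

cone = light_cone({(1, 1), (1, 2)}, grid_neighbors, depth=1)
assert {(1, 1), (1, 2)} <= cone
assert len(cone) < 12   # fewer than all 3x4 qubits after a single layer
```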
```
compressed_c = tot_c.copy()
print(len(list(compressed_c.all_operations())), len(compressed_c.all_qubits()))
```
**(try re-running the following cell to watch the circuit get smaller)**
```
ccq.MergeNQubitGates(n_qubits=2).optimize_circuit(compressed_c)
ccq.MergeNQubitGates(n_qubits=1).optimize_circuit(compressed_c)
cirq.DropNegligible(tolerance=1e-6).optimize_circuit(compressed_c)
cirq.DropEmptyMoments().optimize_circuit(compressed_c)
print(len(list(compressed_c.all_operations())), len(compressed_c.all_qubits()))
SVGCircuit(compressed_c)
```
### Utility function to fully-simplify
We provide this utility function to fully simplify a circuit.
```
ccq.simplify_expectation_value_circuit(tot_c)
SVGCircuit(tot_c)
# simplification might eliminate qubits entirely for large graphs and
# shallow `p`, so re-get the current qubits.
qubits = sorted(tot_c.all_qubits())
print(len(qubits))
```
## Turn it into a Tensor Network
We explicitly "cap" the tensor network with `<0..0|` bras so the entire thing contracts to the expectation value $\langle 0 | U^\dagger (ZZ) U |0 \rangle$.
```
tensors, qubit_frontier, fix = ccq.circuit_to_tensors(
circuit=tot_c, qubits=qubits)
end_bras = [
qtn.Tensor(
data=quimb.up().squeeze(),
inds=(f'i{qubit_frontier[q]}_q{q}',),
tags={'Q0', 'bra0'}) for q in qubits
]
tn = qtn.TensorNetwork(tensors + end_bras)
tn.graph(color=['Q0', 'Q1', 'Q2'])
plt.show()
```
### `rank_simplify` effectively folds together 1- and 2-qubit gates
In practice, using this is faster than running the circuit optimizer to remove gates that cancel themselves, but please benchmark for your particular use case.
```
tn.rank_simplify(inplace=True)
tn.graph(color=['Q0', 'Q1', 'Q2'])
```
### The tensor contraction path tells us how expensive this will be
```
path_info = tn.contract(get='path-info')
path_info.opt_cost / int(3e9)  # assuming 3 GFLOP/s, in seconds
path_info.largest_intermediate * 128 / 8 / 1024 / 1024 / 1024  # GB (128-bit complex entries)
```
### Do the contraction
```
zz = tn.contract(inplace=True)
zz = np.real_if_close(zz)
print(zz)
```
## Big Circuit
```
width = 8
height = 8
graph = nx.grid_2d_graph(width, height)
rs = np.random.RandomState(52)
nx.set_edge_attributes(graph, name='weight',
values={e: np.round(rs.uniform(), 2) for e in graph.edges})
zz_inds = ((width//2, (height//2-1)), (width//2, (height//2)))
nx.draw_networkx(graph,
pos={n:n for n in graph.nodes},
node_color=[QRED if node in zz_inds else QBLUE for node in graph.nodes])
qubits = [cirq.GridQubit(*n) for n in graph]
circuit = cirq.Circuit(
cirq.H.on_each(qubits),
ccq.get_grid_moments(graph),
cirq.Moment([cirq.rx(0.456).on_each(qubits)]),
)
ZZ = cirq.Z(cirq.GridQubit(*zz_inds[0])) * cirq.Z(cirq.GridQubit(*zz_inds[1]))
ZZ
ccq.tensor_expectation_value(circuit, ZZ)
```
| github_jupyter |
# BST in DeepCTR
## Installation
```
!pip install -q deepctr
```
## API run
```
import numpy as np
from deepctr.models import BST
from deepctr.feature_column import SparseFeat, VarLenSparseFeat, DenseFeat, get_feature_names
def get_xy_fd():
feature_columns = [SparseFeat('user', 3, embedding_dim=10), SparseFeat(
'gender', 2, embedding_dim=4), SparseFeat('item_id', 3 + 1, embedding_dim=8),
SparseFeat('cate_id', 2 + 1, embedding_dim=4), DenseFeat('pay_score', 1)]
feature_columns += [
VarLenSparseFeat(SparseFeat('hist_item_id', vocabulary_size=3 + 1, embedding_dim=8, embedding_name='item_id'),
maxlen=4, length_name="seq_length"),
VarLenSparseFeat(SparseFeat('hist_cate_id', 2 + 1, embedding_dim=4, embedding_name='cate_id'), maxlen=4,
length_name="seq_length")]
# Notice: History behavior sequence feature name must start with "hist_".
behavior_feature_list = ["item_id", "cate_id"]
uid = np.array([0, 1, 2])
ugender = np.array([0, 1, 0])
iid = np.array([1, 2, 3]) # 0 is mask value
cate_id = np.array([1, 2, 2]) # 0 is mask value
pay_score = np.array([0.1, 0.2, 0.3])
hist_iid = np.array([[1, 2, 3, 0], [3, 2, 1, 0], [1, 2, 0, 0]])
hist_cate_id = np.array([[1, 2, 2, 0], [2, 2, 1, 0], [1, 2, 0, 0]])
seq_length = np.array([3, 3, 2]) # the actual length of the behavior sequence
feature_dict = {'user': uid, 'gender': ugender, 'item_id': iid, 'cate_id': cate_id,
'hist_item_id': hist_iid, 'hist_cate_id': hist_cate_id,
'pay_score': pay_score, 'seq_length': seq_length}
x = {name: feature_dict[name] for name in get_feature_names(feature_columns)}
y = np.array([1, 0, 1])
return x, y, feature_columns, behavior_feature_list
if __name__ == "__main__":
x, y, feature_columns, behavior_feature_list = get_xy_fd()
model = BST(feature_columns, behavior_feature_list,att_head_num=4)
model.compile('adam', 'binary_crossentropy',
metrics=['binary_crossentropy'])
history = model.fit(x, y, verbose=1, epochs=10, validation_split=0.5)
```
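Before the model definition, note the wiring convention it relies on: any `VarLenSparseFeat` whose name is `"hist_" + name` for a name in `behavior_feature_list` is routed into the transformer, while other variable-length features get simple pooling. A plain-Python sketch of that routing (the `recent_tags` feature name is illustrative, not from the example above):

```python
# Hypothetical restatement of the "hist_" routing convention used in BST.
behavior_feature_list = ["item_id", "cate_id"]
varlen_names = ["hist_item_id", "hist_cate_id", "recent_tags"]
history_fc_names = ["hist_" + name for name in behavior_feature_list]
history = [n for n in varlen_names if n in history_fc_names]       # -> transformer
pooled = [n for n in varlen_names if n not in history_fc_names]    # -> pooling
assert history == ["hist_item_id", "hist_cate_id"]
assert pooled == ["recent_tags"]
```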
## Model definition
```
def BST(dnn_feature_columns, history_feature_list, transformer_num=1, att_head_num=8,
use_bn=False, dnn_hidden_units=(256, 128, 64), dnn_activation='relu', l2_reg_dnn=0,
l2_reg_embedding=1e-6, dnn_dropout=0.0, seed=1024, task='binary'):
"""Instantiates the BST architecture.
:param dnn_feature_columns: An iterable containing all the features used by the deep part of the model.
:param history_feature_list: list, indicating the sequence sparse fields.
:param transformer_num: int, the number of transformer layers.
:param att_head_num: int, the number of heads in multi-head self-attention.
:param use_bn: bool. Whether to use BatchNormalization before activation in the deep net.
:param dnn_hidden_units: list of positive integers (or empty list), the number of layers and units per DNN layer.
:param dnn_activation: Activation function to use in the DNN.
:param l2_reg_dnn: float. L2 regularizer strength applied to the DNN.
:param l2_reg_embedding: float. L2 regularizer strength applied to embedding vectors.
:param dnn_dropout: float in [0,1), the probability of dropping a given DNN coordinate.
:param seed: integer, to use as random seed.
:param task: str, ``"binary"`` for binary logloss or ``"regression"`` for regression loss.
:return: A Keras model instance.
"""
features = build_input_features(dnn_feature_columns)
inputs_list = list(features.values())
user_behavior_length = features["seq_length"]
sparse_feature_columns = list(
filter(lambda x: isinstance(x, SparseFeat), dnn_feature_columns)) if dnn_feature_columns else []
dense_feature_columns = list(
filter(lambda x: isinstance(x, DenseFeat), dnn_feature_columns)) if dnn_feature_columns else []
varlen_sparse_feature_columns = list(
filter(lambda x: isinstance(x, VarLenSparseFeat), dnn_feature_columns)) if dnn_feature_columns else []
history_feature_columns = []
sparse_varlen_feature_columns = []
history_fc_names = list(map(lambda x: "hist_" + x, history_feature_list))
for fc in varlen_sparse_feature_columns:
feature_name = fc.name
if feature_name in history_fc_names:
history_feature_columns.append(fc)
else:
sparse_varlen_feature_columns.append(fc)
embedding_dict = create_embedding_matrix(dnn_feature_columns, l2_reg_embedding, seed, prefix="",
seq_mask_zero=True)
query_emb_list = embedding_lookup(embedding_dict, features, sparse_feature_columns,
return_feat_list=history_feature_list, to_list=True)
hist_emb_list = embedding_lookup(embedding_dict, features, history_feature_columns,
return_feat_list=history_fc_names, to_list=True)
dnn_input_emb_list = embedding_lookup(embedding_dict, features, sparse_feature_columns,
mask_feat_list=history_feature_list, to_list=True)
dense_value_list = get_dense_input(features, dense_feature_columns)
sequence_embed_dict = varlen_embedding_lookup(embedding_dict, features, sparse_varlen_feature_columns)
sequence_embed_list = get_varlen_pooling_list(sequence_embed_dict, features, sparse_varlen_feature_columns,
to_list=True)
dnn_input_emb_list += sequence_embed_list
query_emb = concat_func(query_emb_list)
deep_input_emb = concat_func(dnn_input_emb_list)
hist_emb = concat_func(hist_emb_list)
transformer_output = hist_emb
for _ in range(transformer_num):
att_embedding_size = transformer_output.get_shape().as_list()[-1] // att_head_num
transformer_layer = Transformer(att_embedding_size=att_embedding_size, head_num=att_head_num,
dropout_rate=dnn_dropout, use_positional_encoding=True, use_res=True,
use_feed_forward=True, use_layer_norm=True, blinding=False, seed=seed,
supports_masking=False, output_type=None)
transformer_output = transformer_layer([transformer_output, transformer_output,
user_behavior_length, user_behavior_length])
attn_output = AttentionSequencePoolingLayer(att_hidden_units=(64, 16), weight_normalization=True,
supports_masking=False)([query_emb, transformer_output,
user_behavior_length])
deep_input_emb = concat_func([deep_input_emb, attn_output], axis=-1)
deep_input_emb = Flatten()(deep_input_emb)
dnn_input = combined_dnn_input([deep_input_emb], dense_value_list)
output = DNN(dnn_hidden_units, dnn_activation, l2_reg_dnn, dnn_dropout, use_bn, seed=seed)(dnn_input)
final_logit = Dense(1, use_bias=False, kernel_initializer=tf.keras.initializers.glorot_normal(seed))(output)
output = PredictionLayer(task)(final_logit)
model = tf.keras.models.Model(inputs=inputs_list, outputs=output)
return model
```
# Face Generation
In this project, you'll use generative adversarial networks to generate new images of faces.
### Get the Data
You'll be using two datasets in this project:
- MNIST
- CelebA
Since the CelebA dataset is complex and this is your first project with GANs, we want you to test your neural network on MNIST before CelebA. Running the GAN on MNIST will let you see how well your model trains much sooner.
If you're using [FloydHub](https://www.floydhub.com/), set `data_dir` to "/input" and use the [FloydHub data ID](http://docs.floydhub.com/home/using_datasets/) "R5KrjnANiKVhLWAkpXhNBe".
```
data_dir = './data'
# FloydHub - Use with data ID "R5KrjnANiKVhLWAkpXhNBe"
#data_dir = '/input'
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
helper.download_extract('mnist', data_dir)
helper.download_extract('celeba', data_dir)
```
## Explore the Data
### MNIST
As you're aware, the [MNIST](http://yann.lecun.com/exdb/mnist/) dataset contains images of handwritten digits. You can change `show_n_images` to view a different number of examples.
```
data_dir = './data'
import helper
show_n_images = 25
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
import os
from glob import glob
from matplotlib import pyplot
mnist_images = helper.get_batch(glob(os.path.join(data_dir, 'mnist/*.jpg'))[:show_n_images], 28, 28, 'L')
pyplot.imshow(helper.images_square_grid(mnist_images, 'L'), cmap='gray')
```
### CelebA
The [CelebFaces Attributes Dataset (CelebA)](http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html) dataset contains over 200,000 celebrity images with annotations. Since you're going to be generating faces, you won't need the annotations. You can change `show_n_images` to view a different number of examples.
```
show_n_images = 25
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
celeba_images = helper.get_batch(glob(os.path.join(data_dir, 'img_align_celeba/*.jpg'))[:show_n_images], 28, 28, 'RGB')
pyplot.imshow(helper.images_square_grid(celeba_images, 'RGB'))
```
## Preprocess the Data
Since the project's main focus is on building the GAN, we'll preprocess the data for you. The pixel values of the 28x28 MNIST and CelebA images will be scaled to the range -0.5 to 0.5. The CelebA images will be cropped to remove parts of the image that don't include a face, then resized down to 28x28.
The MNIST images are black and white images with a single [color channel](https://en.wikipedia.org/wiki/Channel_(digital_image%29) while the CelebA images have [3 color channels (RGB color channel)](https://en.wikipedia.org/wiki/Channel_(digital_image%29#RGB_Images).
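The scaling step can be sketched in a few lines. This is an illustration of the transformation described above, not the actual helper code, and the function name `scale_images` is ours:

```python
import numpy as np

# Hypothetical sketch of the preprocessing described above: rescale
# uint8 pixel values from [0, 255] down to the [-0.5, 0.5] range.
def scale_images(images):
    return images.astype(np.float32) / 255.0 - 0.5

batch = np.array([[0, 127, 255]], dtype=np.uint8)
scaled = scale_images(batch)  # 0 maps to -0.5, 255 maps to 0.5
```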
## Build the Neural Network
You'll build the components necessary for a GAN by implementing the following functions below:
- `model_inputs`
- `discriminator`
- `generator`
- `model_loss`
- `model_opt`
- `train`
### Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer. You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
```
### Input
Implement the `model_inputs` function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Real input images placeholder with rank 4 using `image_width`, `image_height`, and `image_channels`.
- Z input placeholder with rank 2 using `z_dim`.
- Learning rate placeholder with rank 0.
Return the placeholders in the following tuple: (tensor of real input images, tensor of z data, learning rate).
```
import problem_unittests as tests
def model_inputs(image_width, image_height, image_channels, z_dim):
"""
Create the model inputs
:param image_width: The input image width
:param image_height: The input image height
:param image_channels: The number of image channels
:param z_dim: The dimension of Z
:return: Tuple of (tensor of real input images, tensor of z data, learning rate)
"""
# TODO: Implement Function
input_image = tf.placeholder(tf.float32, shape=(None, image_width, image_height, image_channels))
z_image = tf.placeholder(tf.float32, shape=(None,z_dim))
learning_rate = tf.placeholder(tf.float32,shape=())
return input_image, z_image, learning_rate
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_inputs(model_inputs)
```
### Discriminator
Implement `discriminator` to create a discriminator neural network that discriminates on `images`. This function should be able to reuse the variables in the neural network. Use [`tf.variable_scope`](https://www.tensorflow.org/api_docs/python/tf/variable_scope) with a scope name of "discriminator" to allow the variables to be reused. The function should return a tuple of (tensor output of the discriminator, tensor logits of the discriminator).
```
def discriminator(images, reuse=False):
"""
Create the discriminator network
:param images: Tensor of input image(s)
:param reuse: Boolean if the weights should be reused
:return: Tuple of (tensor output of the discriminator, tensor logits of the discriminator)
"""
# TODO: Implement Function
with tf.variable_scope('discriminator', reuse=reuse):
alpha = 0.2
# Input layer is 32x32x3
x1 = tf.layers.conv2d(images, 64, 5, strides=2, padding='same')
relu1 = tf.maximum(alpha * x1, x1)
# 16x16x64
x2 = tf.layers.conv2d(relu1, 128, 5, strides=2, padding='same')
bn2 = tf.layers.batch_normalization(x2, training=True)
relu2 = tf.maximum(alpha * bn2, bn2)
# 8x8x128
x3 = tf.layers.conv2d(relu2, 256, 5, strides=2, padding='same')
bn3 = tf.layers.batch_normalization(x3, training=True)
relu3 = tf.maximum(alpha * bn3, bn3)
# 4x4x256
# Flatten it
flat = tf.reshape(relu3, (-1, 4*4*256))
logits = tf.layers.dense(flat, 1)
out = tf.sigmoid(logits)
return out, logits
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_discriminator(discriminator, tf)
```
### Generator
Implement `generator` to generate an image using `z`. This function should be able to reuse the variables in the neural network. Use [`tf.variable_scope`](https://www.tensorflow.org/api_docs/python/tf/variable_scope) with a scope name of "generator" to allow the variables to be reused. The function should return the generated 28 x 28 x `out_channel_dim` images.
```
def generator(z, out_channel_dim, is_train=True):
"""
Create the generator network
:param z: Input z
:param out_channel_dim: The number of channels in the output image
:param is_train: Boolean if generator is being used for training
:return: The tensor output of the generator
"""
# TODO: Implement Function
with tf.variable_scope('generator', reuse=not is_train):
alpha = 0.2
# First fully connected layer
x1 = tf.layers.dense(z, 4*4*512)
# print(x1.get_shape().as_list())
# Reshape it to start the convolutional stack
x1 = tf.reshape(x1, (-1, 4, 4, 512))
x1 = tf.layers.batch_normalization(x1, training=is_train)
x1 = tf.maximum(alpha * x1, x1)
# print(x1.get_shape().as_list())
# 4x4x512 now
x2 = tf.layers.conv2d_transpose(x1, 256, 4, strides=1, padding='valid')
x2 = tf.layers.batch_normalization(x2, training=is_train)
x2 = tf.maximum(alpha * x2, x2)
# 7x7x256 now
x3 = tf.layers.conv2d_transpose(x2, 128, 5, strides=2, padding='same')
x3 = tf.layers.batch_normalization(x3, training=is_train)
x3 = tf.maximum(alpha * x3, x3)
# 14x14x128 now
# Output layer
logits = tf.layers.conv2d_transpose(x3, out_channel_dim, 5, strides=2, padding='same')
# 28x28xout_channel_dim now
# print(logits.get_shape().as_list())
out = tf.tanh(logits)
return out
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_generator(generator, tf)
```
### Loss
Implement `model_loss` to build the GAN for training and calculate the loss. The function should return a tuple of (discriminator loss, generator loss). Use the following functions you implemented:
- `discriminator(images, reuse=False)`
- `generator(z, out_channel_dim, is_train=True)`
```
def model_loss(input_real, input_z, out_channel_dim):
"""
Get the loss for the discriminator and generator
:param input_real: Images from the real dataset
:param input_z: Z input
:param out_channel_dim: The number of channels in the output image
:return: A tuple of (discriminator loss, generator loss)
"""
# TODO: Implement Function
g_model = generator(input_z, out_channel_dim)
d_model_real, d_logits_real = discriminator(input_real)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True)
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_model_real)))
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_model_fake)))
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_model_fake)))
d_loss = d_loss_real + d_loss_fake
return d_loss, g_loss
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_loss(model_loss)
```
### Optimization
Implement `model_opt` to create the optimization operations for the GAN. Use [`tf.trainable_variables`](https://www.tensorflow.org/api_docs/python/tf/trainable_variables) to get all the trainable variables. Filter the variables with names that are in the discriminator and generator scope names. The function should return a tuple of (discriminator training operation, generator training operation).
```
def model_opt(d_loss, g_loss, learning_rate, beta1):
"""
Get optimization operations
:param d_loss: Discriminator loss Tensor
:param g_loss: Generator loss Tensor
:param learning_rate: Learning Rate Placeholder
:param beta1: The exponential decay rate for the 1st moment in the optimizer
:return: A tuple of (discriminator training operation, generator training operation)
"""
# TODO: Implement Function
t_vars = tf.trainable_variables()
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
g_vars = [var for var in t_vars if var.name.startswith('generator')]
# Optimize
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars)
return d_train_opt, g_train_opt
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_opt(model_opt, tf)
```
## Neural Network Training
### Show Output
Use this function to show the current output of the generator during training. It will help you determine how well the GAN is training.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
def show_generator_output(sess, n_images, input_z, out_channel_dim, image_mode):
"""
Show example output for the generator
:param sess: TensorFlow session
:param n_images: Number of Images to display
:param input_z: Input Z Tensor
:param out_channel_dim: The number of channels in the output image
:param image_mode: The mode to use for images ("RGB" or "L")
"""
cmap = None if image_mode == 'RGB' else 'gray'
z_dim = input_z.get_shape().as_list()[-1]
example_z = np.random.uniform(-1, 1, size=[n_images, z_dim])
samples = sess.run(
generator(input_z, out_channel_dim, False),
feed_dict={input_z: example_z})
images_grid = helper.images_square_grid(samples, image_mode)
pyplot.imshow(images_grid, cmap=cmap)
pyplot.show()
```
### Train
Implement `train` to build and train the GAN. Use the following functions you implemented:
- `model_inputs(image_width, image_height, image_channels, z_dim)`
- `model_loss(input_real, input_z, out_channel_dim)`
- `model_opt(d_loss, g_loss, learning_rate, beta1)`
Use the `show_generator_output` to show `generator` output while you train. Running `show_generator_output` for every batch will drastically increase training time and increase the size of the notebook. It's recommended to print the `generator` output every 100 batches.
```
def train(epoch_count, batch_size, z_dim, learning_rate, beta1, get_batches, data_shape, data_image_mode):
"""
Train the GAN
:param epoch_count: Number of epochs
:param batch_size: Batch Size
:param z_dim: Z dimension
:param learning_rate: Learning Rate
:param beta1: The exponential decay rate for the 1st moment in the optimizer
:param get_batches: Function to get batches
:param data_shape: Shape of the data
:param data_image_mode: The image mode to use for images ("RGB" or "L")
"""
# TODO: Build Model
# model_inputs(image_width, image_height, image_channels, z_dim)
# the learning-rate placeholder returned by model_inputs is unused below,
# since the float learning_rate argument is passed to model_opt directly
input_real, input_z, lr = model_inputs(data_shape[1], data_shape[2],
data_shape[3], z_dim)
out_channel_dim = data_shape[3]
d_loss, g_loss = model_loss(input_real, input_z,
out_channel_dim)
d_opt, g_opt = model_opt(d_loss, g_loss, learning_rate, beta1)
steps = 0
n_images = 25
losses = []
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epoch_count):
for batch_images in get_batches(batch_size):
steps += 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_dim))
# Run optimizers
_ = sess.run(d_opt, feed_dict={input_real: batch_images, input_z: batch_z})
_ = sess.run(g_opt, feed_dict={input_z: batch_z, input_real: batch_images})
if steps % 10 == 0:
# Every 10 batches, get the losses and print them out
train_loss_d = d_loss.eval({input_z: batch_z, input_real: batch_images})
train_loss_g = g_loss.eval({input_z: batch_z})
print("Epoch {}/{}...".format(epoch_i+1, epoch_count),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
if steps % 100 == 0:
show_generator_output(sess, n_images, input_z, out_channel_dim, data_image_mode)
```
### MNIST
Test your GAN architecture on MNIST. After 2 epochs, the GAN should be able to generate images that look like handwritten digits. Make sure the loss of the generator is lower than the loss of the discriminator or close to 0.
```
batch_size = 128
z_dim = 100
learning_rate = 0.0002
beta1 = 0.5
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
epochs = 2
mnist_dataset = helper.Dataset('mnist', glob(os.path.join(data_dir, 'mnist/*.jpg')))
with tf.Graph().as_default():
train(epochs, batch_size, z_dim, learning_rate, beta1, mnist_dataset.get_batches,
mnist_dataset.shape, mnist_dataset.image_mode)
```
### CelebA
Run your GAN on CelebA. It will take around 20 minutes on an average GPU to run one epoch. You can run the whole epoch or stop when it starts to generate realistic faces.
```
batch_size = None
z_dim = None
learning_rate = None
beta1 = None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
epochs = 1
celeba_dataset = helper.Dataset('celeba', glob(os.path.join(data_dir, 'img_align_celeba/*.jpg')))
with tf.Graph().as_default():
train(epochs, batch_size, z_dim, learning_rate, beta1, celeba_dataset.get_batches,
celeba_dataset.shape, celeba_dataset.image_mode)
```
### Submitting This Project
When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_face_generation.ipynb" and save it as an HTML file under "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files in your submission.
```
import tensorflow as tf
import warnings
warnings.filterwarnings('ignore')
import tensorflow_datasets as tfds
# loading the imdb review dataset
imdb, info = tfds.load("imdb_reviews", with_info=True, as_supervised=True)
print(len(imdb['test']))
import numpy as np
train_data, test_data = imdb['train'], imdb['test']
training_sentences = []
training_labels = []
testing_sentences = []
testing_labels = []
for s,l in train_data:
training_sentences.append(s.numpy().decode('utf8'))
training_labels.append(l.numpy())
for s,l in test_data:
testing_sentences.append(s.numpy().decode('utf8'))
testing_labels.append(l.numpy())
training_labels_final = np.array(training_labels)
testing_labels_final = np.array(testing_labels)
vocab_size = 10000 # max vocabulary size
embedding_dim = 16 # the embedding dimension for each word
max_length = 120 # max_length that each sentence can have
trunc_type='post' # if sentence is more than max_length, it is truncated at the end.
oov_tok = "<OOV>" # token for words out of vocabulary
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
# tokenizer assigns a value to each unique word in the vocabulary
tokenizer = Tokenizer(num_words = vocab_size, oov_token=oov_tok)
tokenizer.fit_on_texts(training_sentences)
word_index = tokenizer.word_index
sequences = tokenizer.texts_to_sequences(training_sentences)
# padding ensures each sentence has the same number of words, by padding shorter sequences with zeros and truncating longer ones
padded = pad_sequences(sequences,maxlen=max_length, truncating=trunc_type)
testing_sequences = tokenizer.texts_to_sequences(testing_sentences)
testing_padded = pad_sequences(testing_sequences,maxlen=max_length)
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
def decode_review(text):
return ' '.join([reverse_word_index.get(i, '?') for i in text])
# A sample sentence
print(decode_review(padded[3]))
print(training_sentences[3])
# first layer is an embedding layer which converts every word into a vector of size 'embedding_dim'
# then adding a LSTM layer to the model
# a fully connected hidden layer followed by a output layer
model = tf.keras.Sequential([
tf.keras.layers.Embedding(vocab_size, embedding_dim, input_length=max_length),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
tf.keras.layers.Dense(6, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(loss='binary_crossentropy',optimizer='adam',metrics=['accuracy'])
model.summary()
num_epochs = 10
history = model.fit(padded, training_labels_final, epochs=num_epochs, validation_data=(testing_padded, testing_labels_final))
```
The model achieved a training accuracy of 97% and a test accuracy of 82%.
```
import io
e = model.layers[0]
weights = e.get_weights()[0]
# storing the words in the 'words.tsv' file and their corresponding embedding values in the 'vectors.tsv' file
out_v = io.open('vectors.tsv', 'w', encoding='utf-8')
out_m = io.open('words.tsv', 'w', encoding='utf-8')
for word_num in range(1, vocab_size):
word = reverse_word_index[word_num]
embeddings = weights[word_num]
out_m.write(word + "\n")
out_v.write('\t'.join([str(x) for x in embeddings]) + "\n")
out_v.close()
out_m.close()
# pad each sequence to the training length so the input shape matches
sentence = "I really think this is amazing."
sequence = pad_sequences(tokenizer.texts_to_sequences([sentence]), maxlen=max_length)
model.predict(sequence)
sentence = "Movie was not up to the mark."
sequence = pad_sequences(tokenizer.texts_to_sequences([sentence]), maxlen=max_length)
model.predict(sequence)
```
We can see that the model correctly predicts the sentiment of both sample sentences.
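The raw `predict` output is a sigmoid probability per sentence; a score above 0.5 can be read as positive sentiment. A minimal sketch of that thresholding (the helper name `to_sentiment` is ours, not part of the notebook):

```python
# Map a sigmoid probability to a sentiment label; 0.5 is the usual
# decision threshold for a binary classifier with a sigmoid output.
def to_sentiment(prob, threshold=0.5):
    return "positive" if prob >= threshold else "negative"

print(to_sentiment(0.91))  # a high score reads as positive
print(to_sentiment(0.12))  # a low score reads as negative
```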
```
import matplotlib.pyplot as plt
def plot_graphs(history, string):
plt.plot(history.history[string])
plt.plot(history.history['val_'+string])
plt.xlabel("Epochs")
plt.ylabel(string)
plt.legend([string, 'val_'+string])
plt.show()
plot_graphs(history, "accuracy")
plot_graphs(history, "loss")
```
The validation loss is increasing, indicating that the model is overfitting the training data somewhat.
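One common mitigation for overfitting is early stopping: halt training once the validation loss stops improving for a few epochs. In Keras this is available as `tf.keras.callbacks.EarlyStopping`; the stopping rule itself can be sketched in plain Python as:

```python
# Return the epoch index at which training would stop, given per-epoch
# validation losses: stop after `patience` epochs without improvement.
def early_stop_epoch(val_losses, patience=2):
    best = float("inf")
    wait = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, wait = loss, 0
        else:
            wait += 1
            if wait >= patience:
                return epoch
    return len(val_losses) - 1

# Validation loss bottoms out at epoch 1, then rises: stop at epoch 3.
print(early_stop_epoch([1.00, 0.90, 0.95, 1.02, 1.10]))  # → 3
```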
# Example of reading and plotting CloudSat radar reflectivity data
Import standard scientific packages
```
import cartopy.crs as ccrs # for plotting in cartographic projection
import matplotlib as mpl
import matplotlib.pyplot as plt # the plotting interface
import numpy as np # for numerical array computations
from pathlib import Path # object-oriented file paths
```
Also import local modules for reading CloudSat data and interpolating them to a regular grid
```
from cloudsat_read import get_geodata, read_data
import utils
```
Define the input directory by creating a `Path`-like object pointing to the local `data/` directory
```
input_dir = Path('data')
```
Define a path to the input file
```
fname = input_dir / '2013085084411_36761_CS_2B-GEOPROF_GRANULE_P_R04_E06.h5'
```
Read geo data of the CloudSat pass from the input file using the function from the `cloudsat_read.py` script:
* longitude
* latitude
* height
* time
* surface elevation
## Reading data
```
cloudsat_lons, cloudsat_lats, cloudsat_height, cloudsat_time, elev = get_geodata(fname, return_list=True)
elev = elev * 1e-3
```
Print out time array
```
cloudsat_time
```
Read radar reflectivity data, to be plotted later
```
cldst_radar = read_data(fname)
```
## Creating simple plots
Using `matplotlib` and `cartopy`, create a simple map and plot the CloudSat track
```
fig = plt.figure()
ax = fig.add_subplot(111, projection=ccrs.Robinson())
ax.coastlines()
ax.set_global()
ax.plot(cloudsat_lons, cloudsat_lats, linewidth=2, transform=ccrs.PlateCarree());
```
Similarly, create another plot with a different projection and limited domain
```
fig = plt.figure()
ax = fig.add_subplot(111, projection=ccrs.Stereographic())
ax.coastlines()
ax.set_extent([-30, 50, 50, 85], crs=ccrs.PlateCarree())
ax.plot(cloudsat_lons, cloudsat_lats, linewidth=2, transform=ccrs.PlateCarree());
```
## Subsetting data
Define a bounding box - a tuple of `(lon0, lon1, lat0, lat0)` - to select a subset of CloudSat data
```
bbox = (-2.5, 10, 71, 80)
```
Find array indices of those longitude and latitude values of CloudSat track that fall within the bounding box
```
ii = np.where((cloudsat_lons > bbox[0]) & (cloudsat_lons < bbox[1]) &
(cloudsat_lats > bbox[2]) & (cloudsat_lats < bbox[3]))[0]
i1, i2 = ii[0], ii[-1]
```
Print out the first and the last index of that part of the track
```
i1, i2
```
Use these indices to define the horizontal grid
```
cloudsat_x = np.arange(i1, i2, dtype=np.float32)
```
Define the **target** vertical grid: from `cloudsat_z0` to `cloudsat_z1` (in kilometres), `cloudsat_nz` levels.
```
cloudsat_z0 = 0 # km
cloudsat_z1 = 6 # km
cloudsat_nz = 500 # Number of pixels (levels) in the vertical.
# cloudsat_z = np.linspace(cloudsat_z0, cloudsat_z1, cloudsat_nz)
```
Create another variable, which is just the original `cloudsat_height` array scaled to kilometres
```
cloudsat_z = (cloudsat_height * 0.001).astype(np.float32)
```
Select a subset of the radar reflectivity array (note the indices order!)
```
cldst_radar = cldst_radar[i1:i2, :]
```
Interpolate the radar reflectivity onto regular height levels
```
cldst_radar = utils.cc_interp2d(cldst_radar.filled(np.nan),
cloudsat_x,
cloudsat_z, i1, i2, i2-i1,
cloudsat_z1, cloudsat_z0, cloudsat_nz).T[::-1,:]
```
## Plotting radar data
Select a colourmap and normalise it to show desired contour levels
```
radr_cmap = plt.cm.magma_r
radr_cmap.set_bad('w')
radr_cmap.set_under('w')
radr_norm = mpl.colors.BoundaryNorm(np.linspace(-20, 30, 6), radr_cmap.N)
```
Pack all the necessary keywords into a dictionary for convenience
```
radr_kw = dict(cmap=radr_cmap, norm=radr_norm, rasterized=True)
```
Create axes and use pseudocolour-mesh method to plot the interpolated radar reflectivity
```
fig, ax = plt.subplots(figsize=(16, 6))
p = ax.pcolormesh(cloudsat_time[i1:i2], np.linspace(cloudsat_z0, cloudsat_z1, cloudsat_nz), cldst_radar, **radr_kw)
fig.colorbar(p);
# class CloudSat:
# """
# Handler for reading CloudSat data
# """
# def __init__():
# pass
```
### Import packages
```
library(data.table)
library(Matrix)
library(proxy)
library(Rtsne)
library(irlba)
library(umap)
library(ggplot2)
```
### Preprocess
`bsub < count_reads_peaks_erisone.sh`
```
path = './count_reads_peaks_output/'
files <- list.files(path,pattern = "\\.txt$")
length(files)
#assuming tab separated values with a header
datalist = lapply(files, function(x)fread(paste0(path,x))$V4)
#assuming the same header/columns for all files
datafr = do.call("cbind", datalist)
dim(datafr)
df_regions = read.csv("../../input/GSE96769_PeakFile_20160207.bed",
sep = '\t',header=FALSE,stringsAsFactors=FALSE)
dim(df_regions)
peaknames = paste(df_regions$V1,df_regions$V2,df_regions$V3,sep = "_")
head(peaknames)
head(sapply(strsplit(files,'\\.'),'[', 1))
colnames(datafr) = sapply(strsplit(files,'\\.'),'[', 1)
rownames(datafr) = peaknames
head(datafr)
dim(datafr)
# saveRDS(datafr, file = './datafr.rds')
# datafr = readRDS('./datafr.rds')
run_pca <- function(mat,num_pcs=50,remove_first_PC=FALSE,scale=FALSE,center=FALSE){
set.seed(2019)
mat = as.matrix(mat)
SVD = irlba(mat, num_pcs, num_pcs,scale=scale,center=center)
sk_diag = matrix(0, nrow=num_pcs, ncol=num_pcs)
diag(sk_diag) = SVD$d
if(remove_first_PC){
sk_diag[1,1] = 0
SVD_vd = (sk_diag %*% t(SVD$v))[2:num_pcs,]
}else{
SVD_vd = sk_diag %*% t(SVD$v)
}
return(SVD_vd)
}
elbow_plot <- function(mat,num_pcs=50,scale=FALSE,center=FALSE,title='',width=3,height=3){
set.seed(2019)
mat = data.matrix(mat)
SVD = irlba(mat, num_pcs, num_pcs,scale=scale,center=center)
options(repr.plot.width=width, repr.plot.height=height)
df_plot = data.frame(PC=1:num_pcs, SD=SVD$d);
# print(SVD$d[1:num_pcs])
p <- ggplot(df_plot, aes(x = PC, y = SD)) +
geom_point(col="#cd5c5c",size = 1) +
ggtitle(title)
return(p)
}
filter_peaks <- function (datafr,cutoff = 0.01){
binary_mat = as.matrix((datafr > 0) + 0)
binary_mat = Matrix(binary_mat, sparse = TRUE)
num_cells_ncounted = Matrix::rowSums(binary_mat)
ncounts = binary_mat[num_cells_ncounted >= dim(binary_mat)[2]*cutoff,]
ncounts = ncounts[rowSums(ncounts) > 0,]
options(repr.plot.width=4, repr.plot.height=4)
hist(log10(num_cells_ncounted),main="No. of Cells Each Site is Observed In",breaks=50)
abline(v=log10(min(num_cells_ncounted[num_cells_ncounted >= dim(binary_mat)[2]*cutoff])),lwd=2,col="indianred")
# hist(log10(new_counts),main="Number of Sites Each Cell Uses",breaks=50)
datafr_filtered = datafr[rownames(ncounts),]
return(datafr_filtered)
}
```
### Obtain Feature Matrix
```
start_time <- Sys.time()
metadata <- read.table('../../input/metadata.tsv',
header = TRUE,
stringsAsFactors=FALSE,quote="",row.names=1)
datafr_filtered <- filter_peaks(datafr)
dim(datafr_filtered)
p_elbow_control <- elbow_plot(datafr_filtered,num_pcs = 100, title = 'PCA on the raw count')
p_elbow_control
fm_control = run_pca(datafr_filtered,num_pcs = 50)
dim(fm_control)
fm_control[1:3,1:3]
end_time <- Sys.time()
end_time - start_time
colnames(fm_control) = colnames(datafr)
rownames(fm_control) = paste('PC',1:dim(fm_control)[1])
dim(fm_control)
all(colnames(fm_control) == rownames(metadata))
saveRDS(fm_control, file = '../../output/feature_matrices/FM_Control_buenrostro2018bulkpeaks.rds')
```
### Downstream Analysis
```
set.seed(0)
tsne_control = Rtsne(t(fm_control),pca=F)
library(RColorBrewer)
plot.tsne <- function(x, labels,
main="A tSNE visualization",n=20,
pad=0.1, cex=0.65, pch=19, add=FALSE, legend.suffix="",
cex.main=1, cex.legend=1) {
qual_col_pals = brewer.pal.info[brewer.pal.info$category == 'qual',]
col_vector = unlist(mapply(brewer.pal, qual_col_pals$maxcolors, rownames(qual_col_pals)))
layout = x
xylim = range(layout)
xylim = xylim + ((xylim[2]-xylim[1])*pad)*c(-0.5, 0.5)
if (!add) {
par(mar=c(0.2,0.7,1.2,0.7), ps=10)
plot(xylim, xylim, type="n", axes=F, frame=F)
rect(xylim[1], xylim[1], xylim[2], xylim[2], border="#aaaaaa", lwd=0.25)
}
points(layout[,1], layout[,2], col=col_vector[as.integer(labels)],
cex=cex, pch=pch)
mtext(side=3, main, cex=cex.main)
labels.u = unique(labels)
legend.pos = "topright"
legend.text = as.character(labels.u)
if (add) {
legend.pos = "bottomright"
legend.text = paste(as.character(labels.u), legend.suffix)
}
legend(legend.pos, legend=legend.text,
col=col_vector[as.integer(labels.u)],
bty="n", pch=pch, cex=cex.legend)
}
options(repr.plot.width=5, repr.plot.height=5)
plot.tsne(tsne_control$Y,as.factor(metadata[,'label']))
sessionInfo()
save.image(file = 'Control_buenrostro2018bulkpeaks.RData')
```
# From Unlabeled Data to a Deployed Machine Learning Model: A SageMaker Ground Truth Demonstration for Image Classification
1. [Introduction](#Introduction)
2. [Run a Ground Truth labeling job (time: about 3h)](#Run-a-Ground-Truth-labeling-job)
1. [Prepare the data](#Prepare-the-data)
2. [Specify the categories](#Specify-the-categories)
3. [Create the instruction template](#Create-the-instruction-template)
4. [Create a private team to test your task [OPTIONAL]](#Create-a-private-team-to-test-your-task-[OPTIONAL])
5. [Define pre-built lambda functions for use in the labeling job](#Define-pre-built-lambda-functions-for-use-in-the-labeling-job)
6. [Submit the Ground Truth job request](#Submit-the-Ground-Truth-job-request)
1. [Verify your task using a private team [OPTIONAL]](#Verify-your-task-using-a-private-team-[OPTIONAL])
7. [Monitor job progress](#Monitor-job-progress)
3. [Analyze Ground Truth labeling job results (time: about 20min)](#Analyze-Ground-Truth-labeling-job-results)
1. [Postprocess the output manifest](#Postprocess-the-output-manifest)
2. [Plot class histograms](#Plot-class-histograms)
3. [Plot annotated images](#Plot-annotated-images)
1. [Plot a small output sample](#Plot-a-small-output-sample)
2. [Plot the full results](#Plot-the-full-results)
4. [Compare Ground Truth results to standard labels (time: about 5min)](#Compare-Ground-Truth-results-to-standard-labels)
1. [Compute accuracy](#Compute-accuracy)
2. [Plot correct and incorrect annotations](#Plot-correct-and-incorrect-annotations)
5. [Train an image classifier using Ground Truth labels (time: about 15min)](#Train-an-image-classifier-using-Ground-Truth-labels)
6. [Deploy the Model (time: about 20min)](#Deploy-the-Model)
1. [Create Model](#Create-Model)
2. [Batch Transform](#Batch-Transform)
3. [Realtime Inference](#Realtime-Inference)
1. [Create Endpoint Configuration](#Create-Endpoint-Configuration)
2. [Create Endpoint](#Create-Endpoint)
3. [Perform Inference](#Perform-Inference)
7. [Review](#Review)
## Introduction
This sample notebook takes you through an end-to-end workflow to demonstrate the functionality of SageMaker Ground Truth. We'll start with an unlabeled image data set, acquire labels for all the images using SageMaker Ground Truth, analyze the results of the labeling job, train an image classifier, host the resulting model, and, finally, use it to make predictions. Before you begin, we highly recommend you start a Ground Truth labeling job through the AWS Console first to familiarize yourself with the workflow. The AWS Console offers less flexibility than the API, but is simple to use.
#### Cost and runtime
You can run this demo in two modes:
1. Set `RUN_FULL_AL_DEMO = True` in the next cell to label 1000 images. This should cost about \$100 given the current [Ground Truth pricing scheme](https://aws.amazon.com/sagemaker/groundtruth/pricing/). To reduce the cost, we will use Ground Truth's auto-labeling feature. Auto-labeling uses computer vision to learn from human responses and automatically create labels for the easiest images at a lower price. The total end-to-end runtime should be about 4h.
1. Set `RUN_FULL_AL_DEMO = False` in the next cell to label only 100 images. This should cost about \$15. **Since Ground Truth's auto-labeling feature only kicks in for datasets of 1000 images or more, this cheaper version of the demo will not use it. Some of the analysis plots might look awkward, but you should still be able to see good results on the human-annotated 100 images.**
#### Prerequisites
To run this notebook, you can simply execute each cell one-by-one. To understand what's happening, you'll need:
* An S3 bucket you can write to -- please provide its name in the following cell. The bucket must be in the same region as this SageMaker Notebook instance. You can also change the `EXP_NAME` to any valid S3 prefix. All the files related to this experiment will be stored in that prefix of your bucket.
* Familiarity with Python and [numpy](http://www.numpy.org/).
* Basic familiarity with [AWS S3](https://docs.aws.amazon.com/s3/index.html),
* Basic understanding of [AWS Sagemaker](https://aws.amazon.com/sagemaker/),
* Basic familiarity with [AWS Command Line Interface (CLI)](https://aws.amazon.com/cli/) -- set it up with credentials to access the AWS account you're running this notebook from. This should work out-of-the-box on SageMaker Jupyter Notebook instances.
This notebook has only been tested on a SageMaker notebook instance. The runtimes given are approximate; we used an `ml.m4.xlarge` instance in our tests. However, you can likely run it on a local machine by first executing the cell below on SageMaker, and then copying the `role` string to your local copy of the notebook.
NOTE: This notebook will create/remove subdirectories in its working directory. We recommend placing this notebook in its own directory before running it.
```
%matplotlib inline
%load_ext autoreload
%autoreload 2
import os
from collections import namedtuple
from collections import defaultdict
from collections import Counter
import itertools
import json
import random
import time
import imageio
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from matplotlib.backends.backend_pdf import PdfPages
from sklearn.metrics import confusion_matrix
import boto3
import sagemaker
from urllib.parse import urlparse
BUCKET = '<< YOUR S3 BUCKET NAME >>'
assert BUCKET != '<< YOUR S3 BUCKET NAME >>', 'Please provide a custom S3 bucket name.'
EXP_NAME = 'ground-truth-ic-demo' # Any valid S3 prefix.
RUN_FULL_AL_DEMO = True # See 'Cost and Runtime' in the Markdown cell above!
# Make sure the bucket is in the same region as this notebook.
role = sagemaker.get_execution_role()
region = boto3.session.Session().region_name
s3 = boto3.client('s3')
bucket_region = s3.head_bucket(Bucket=BUCKET)['ResponseMetadata']['HTTPHeaders']['x-amz-bucket-region']
assert bucket_region == region, "Your S3 bucket {} and this notebook need to be in the same region.".format(BUCKET)
```
# Run a Ground Truth labeling job
**This section should take about 3h to complete.**
We will first run a labeling job. This involves several steps: collecting the images we want labeled, specifying the possible label categories, creating instructions, and writing a labeling job specification. In addition, we highly recommend running a (free) mock job using a private workforce before you submit any job to the public workforce. This notebook will explain how to do that as an optional step. Without using a private workforce, this section should take about 3h to complete, though this may vary depending on the availability of the public annotation workforce.
## Prepare the data
We will first download images and labels of a subset of the [Google Open Images Dataset](https://storage.googleapis.com/openimages/web/index.html). These labels were [carefully verified](https://storage.googleapis.com/openimages/web/factsfigures.html). Later, we will compare Ground Truth annotations to these labels. Our dataset will include images in the following categories:
* Musical Instrument (500 images)
* Fruit (370 images)
* Cheetah (50 images)
* Tiger (40 images)
* Snowman (40 images)
If you chose `RUN_FULL_AL_DEMO = False`, then we will choose a subset of 100 images from this dataset. This is a diverse dataset of interesting images, and should be fun for the human annotators to work with. You are free to ask the annotators to annotate any images you wish, as long as the images do not contain adult content; if they do, you must adjust the labeling job request this notebook produces (see the Ground Truth documentation).
We will copy these images to our local `BUCKET`, and will create the corresponding *input manifest*. The input manifest is a formatted list of the S3 locations of the images we want Ground Truth to annotate. We will upload this manifest to our S3 `BUCKET`.
#### Disclosure regarding the Open Images Dataset V4:
Open Images Dataset V4 is created by Google Inc. We have not modified the images or the accompanying annotations. You can obtain the images and the annotations [here](https://storage.googleapis.com/openimages/web/download.html). The annotations are licensed by Google Inc. under the [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) license. The images are listed as having a [CC BY 2.0](https://creativecommons.org/licenses/by/2.0/) license. The following paper describes Open Images V4 in depth, from the data collection and annotation to detailed statistics about the data and evaluation of models trained on it.
A. Kuznetsova, H. Rom, N. Alldrin, J. Uijlings, I. Krasin, J. Pont-Tuset, S. Kamali, S. Popov, M. Malloci, T. Duerig, and V. Ferrari.
*The Open Images Dataset V4: Unified image classification, object detection, and visual relationship detection at scale.* arXiv:1811.00982, 2018. ([link to PDF](https://arxiv.org/abs/1811.00982))
```
# Download and process the Open Images annotations.
!wget https://storage.googleapis.com/openimages/2018_04/test/test-annotations-human-imagelabels-boxable.csv -O openimgs-annotations.csv
with open('openimgs-annotations.csv', 'r') as f:
all_labels = [line.strip().split(',') for line in f.readlines()]
# Extract image ids in each of our desired classes.
ims = {}
ims['Musical Instrument'] = [label[0] for label in all_labels if (label[2] == '/m/04szw' and label[3] == '1')][:500]
ims['Fruit'] = [label[0] for label in all_labels if (label[2] == '/m/02xwb' and label[3] == '1')][:371]
ims['Fruit'].remove('02a54f6864478101') # This image contains personal information, let's remove it from our dataset.
ims['Cheetah'] = [label[0] for label in all_labels if (label[2] == '/m/0cd4d' and label[3] == '1')][:50]
ims['Tiger'] = [label[0] for label in all_labels if (label[2] == '/m/07dm6' and label[3] == '1')][:40]
ims['Snowman'] = [label[0] for label in all_labels if (label[2] == '/m/0152hh' and label[3] == '1')][:40]
num_classes = len(ims)
# If running the short version of the demo, reduce each class count 10 times.
for key in ims.keys():
if RUN_FULL_AL_DEMO is False:
ims[key] = set(ims[key][:int(len(ims[key]) / 10)])
else:
ims[key] = set(ims[key])
# Copy the images to our local bucket.
s3 = boto3.client('s3')
for img_id, img in enumerate(itertools.chain.from_iterable(ims.values())):
if (img_id + 1) % 10 == 0:
print('Copying image {} / {}'.format((img_id+1), 1000))
copy_source = {
'Bucket': 'open-images-dataset',
'Key': 'test/{}.jpg'.format(img)
}
s3.copy(copy_source, BUCKET, '{}/images/{}.jpg'.format(EXP_NAME, img))
# Create and upload the input manifest.
manifest_name = 'input.manifest'
with open(manifest_name, 'w') as f:
for img in itertools.chain.from_iterable(ims.values()):
img_path = 's3://{}/{}/images/{}.jpg'.format(BUCKET, EXP_NAME, img)
f.write('{"source-ref": "' + img_path +'"}\n')
s3.upload_file(manifest_name, BUCKET, EXP_NAME + '/' + manifest_name)
```
After running the cell above, you should be able to go to `s3://BUCKET/EXP_NAME/images` in [S3 console](https://console.aws.amazon.com/s3/) and see a thousand images. We recommend you inspect the contents of these images! You can download them all to a local machine using the AWS CLI.
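Each line of the input manifest is a standalone JSON object whose `source-ref` key points at one image in S3. A quick way to sanity-check a manifest is to parse it back into a list of URIs (the bucket and prefix names below are hypothetical stand-ins for your own):

```python
import json

# Parse an input manifest back into a list of S3 URIs. Each line is one JSON
# object; "source-ref" holds the image location Ground Truth will annotate.
sample_manifest = (
    '{"source-ref": "s3://my-bucket/ground-truth-ic-demo/images/abc.jpg"}\n'
    '{"source-ref": "s3://my-bucket/ground-truth-ic-demo/images/def.jpg"}\n'
)
image_uris = [json.loads(line)["source-ref"] for line in sample_manifest.splitlines()]
print(image_uris[0])  # s3://my-bucket/ground-truth-ic-demo/images/abc.jpg
```

The same pattern works on the real `input.manifest` file written above: open it, parse each line, and confirm every URI points into your `BUCKET`.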
## Specify the categories
To run an image classification labeling job, you need to decide on a set of classes the annotators can choose from.
In our case, this list is `["Musical Instrument", "Fruit", "Cheetah", "Tiger", "Snowman"]`. In your own job you can choose any list of up to 10 classes. We recommend that the classes be as unambiguous and concrete as possible. The categories should be mutually exclusive, with only one correct label per image. In addition, be careful to make the task as *objective* as possible, unless of course your intention is to obtain subjective labels.
* Example good category lists: `["Human", "No Human"]`, `["Golden Retriever", "Labrador", "English Bulldog", "German Shepherd"]`, `["Car", "Train", "Ship", "Pedestrian"]`.
* Example bad category lists: `["Prominent object", "Not prominent"]` (meaning unclear), `["Beautiful", "Ugly"]` (subjective), `["Dog", "Animal", "Car"]` (not mutually exclusive).
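Some of these rules can be checked programmatically before a job is submitted. The helper below is purely illustrative (it is not part of the Ground Truth API) and encodes only the mechanical constraints: at most 10 classes, unique, non-empty.

```python
def validate_class_list(classes):
    """Sanity-check a label list against the guidance above.
    Illustrative helper only; it cannot check ambiguity or subjectivity."""
    if not 0 < len(classes) <= 10:
        raise ValueError("Image classification jobs support 1 to 10 classes.")
    if len(set(classes)) != len(classes):
        raise ValueError("Class names must be unique.")
    if not all(isinstance(c, str) and c.strip() for c in classes):
        raise ValueError("Class names must be non-empty strings.")
    return True

validate_class_list(["Musical Instrument", "Fruit", "Cheetah", "Tiger", "Snowman"])
```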
To work with Ground Truth, this list needs to be converted to a .json file and uploaded to the S3 `BUCKET`.
*Note: The ordering of the labels or classes in the template governs the class indices that you will see downstream in the output manifest (this numbering is zero-indexed). In other words, the class that appears second in the template will correspond to class "1" in the output. At the end of this demonstration, we will train a model and make predictions, and this class ordering is instrumental to interpreting the results.*
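As a plain-Python illustration of that zero-indexed ordering, using our label list written out explicitly:

```python
# The class index reported in the output manifest follows the order of the
# label list: the class that appears second in the template is class 1.
CLASS_LIST = ["Musical Instrument", "Fruit", "Cheetah", "Tiger", "Snowman"]

index_to_class = dict(enumerate(CLASS_LIST))
class_to_index = {name: idx for idx, name in index_to_class.items()}
print(index_to_class[1])  # Fruit
```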
```
CLASS_LIST = list(ims.keys())
print("Label space is {}".format(CLASS_LIST))
json_body = {
'labels': [{'label': label} for label in CLASS_LIST]
}
with open('class_labels.json', 'w') as f:
json.dump(json_body, f)
s3.upload_file('class_labels.json', BUCKET, EXP_NAME + '/class_labels.json')
```
You should now see `class_labels.json` in `s3://BUCKET/EXP_NAME/`.
## Create the instruction template
Part or all of your images will be annotated by human annotators. It is **essential** to provide good instructions that help the annotators give you the annotations you want. Good instructions are:
1. Concise. We recommend limiting verbal/textual instruction to two sentences, and focusing on clear visuals.
2. Visual. In the case of image classification, we recommend providing one labeled image in each of the classes as part of the instruction.
When used through the AWS Console, Ground Truth helps you create the instructions using a visual wizard. When using the API, you need to create an HTML template for your instructions. Below, we prepare a very simple but effective template and upload it to your S3 bucket.
NOTE: If you use any images in your template (as we do), they need to be publicly accessible. You can enable public access to files in your S3 bucket through the S3 Console, as described in [S3 Documentation](https://docs.aws.amazon.com/AmazonS3/latest/user-guide/set-object-permissions.html).
#### Testing your instructions
It is very easy to create broken instructions. This might cause your labeling job to fail. However, it might also cause your job to complete with meaningless results (when the annotators have no idea what to do, or the instructions are plain wrong). We *highly recommend* that you verify that your task is correct in two ways:
1. The following cell creates and uploads a file called `instructions.template` to S3. It also creates `instructions.html` that you can open in a local browser window. Please do so and inspect the resulting web page; it should correspond to what you want your annotators to see (except the actual image to annotate will not be visible).
2. Run your job in a private workforce, which is a way to run a mock labeling job. We describe how to do it in [Verify your task using a private team [OPTIONAL]](#Verify-your-task-using-a-private-team-[OPTIONAL]).
```
img_examples = ['https://s3.amazonaws.com/open-images-dataset/test/{}'.format(img_id)
for img_id in ['0634825fc1dcc96b.jpg', '0415b6a36f3381ed.jpg', '8582cc08068e2d0f.jpg', '8728e9fa662a8921.jpg', '926d31e8cde9055e.jpg']]
def make_template(test_template=False, save_fname='instructions.template'):
template = r"""<script src="https://assets.crowd.aws/crowd-html-elements.js"></script>
<crowd-form>
<crowd-image-classifier
name="crowd-image-classifier"
src="{{{{ task.input.taskObject | grant_read_access }}}}"
header="Dear Annotator, please tell me what you can see in the image. Thank you!"
categories="{categories_str}"
>
<full-instructions header="Image classification instructions">
</full-instructions>
<short-instructions>
<p>Dear Annotator, please tell me what you can see in the image. Thank you!</p>
<p><img src="{}" style="max-width:100%">
<br>Example "Musical Instrument". </p>
<p><img src="{}" style="max-width:100%">
<br>Example "Fruit".</p>
<p><img src="{}" style="max-width:100%">
<br>Example "Cheetah". </p>
<p><img src="{}" style="max-width:100%">
<br>Example "Tiger". </p>
<p><img src="{}" style="max-width:100%">
<br>Example "Snowman". </p>
</short-instructions>
</crowd-image-classifier>
</crowd-form>""".format(*img_examples,
categories_str=str(CLASS_LIST) if test_template else '{{ task.input.labels | to_json | escape }}')
with open(save_fname, 'w') as f:
f.write(template)
if test_template is False:
print(template)
make_template(test_template=True, save_fname='instructions.html')
make_template(test_template=False, save_fname='instructions.template')
s3.upload_file('instructions.template', BUCKET, EXP_NAME + '/instructions.template')
```
You should now be able to find your template in `s3://BUCKET/EXP_NAME/instructions.template`.
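One subtlety in the template code above is worth noting: Python's `str.format` treats a doubled brace as a literal brace, so the quadrupled braces survive formatting as the Liquid-style `{{ ... }}` tags Ground Truth expects, while single-braced placeholders are substituted. A minimal illustration:

```python
# Doubled braces collapse to literal braces under str.format, so '{{{{ x }}}}'
# renders as the Liquid tag '{{ x }}' while '{categories}' is substituted.
template = 'src="{{{{ task.input.taskObject }}}}" categories="{categories}"'
rendered = template.format(categories='Fruit')
print(rendered)  # src="{{ task.input.taskObject }}" categories="Fruit"
```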
## Create a private team to test your task [OPTIONAL]
This step requires you to use the AWS Console. However, we **highly recommend** that you follow it, especially when creating your own task with a custom dataset, label set, and template.
We will create a `private workteam` and add only one user (you) to it. Then, we will modify the Ground Truth API job request to send the task to that workforce. You will then be able to see your annotation job exactly as the public annotators would see it. You can even annotate the whole dataset yourself!
To create a private team:
1. Go to `AWS Console > Amazon SageMaker > Labeling workforces`
2. Click "Private" and then "Create private team".
3. Enter the desired name for your private workteam.
4. Enter your own email address in the "Email addresses" section.
5. Enter the name of your organization and a contact email to administrate the private workteam.
6. Click "Create Private Team".
7. The AWS Console should now return to `AWS Console > Amazon SageMaker > Labeling workforces`. Your newly created team should be visible under "Private teams". Next to it you will see an `ARN`, a long string of the form `arn:aws:sagemaker:<region>:<account-id>:workteam/private-crowd/<team-name>`. Copy this ARN into the cell below.
8. You should get an email from `no-reply@verificationemail.com` that contains your workforce username and password.
9. In `AWS Console > Amazon SageMaker > Labeling workforces`, click on the URL in `Labeling portal sign-in URL`. Use the email/password combination from Step 8 to log in (you will be asked to create a new, non-default password).
That's it! This is your private worker's interface. When we create a verification task in [Verify your task using a private team](#Verify-your-task-using-a-private-team-[OPTIONAL]) below, your task should appear in this window. You can invite your colleagues to participate in the labeling job by clicking the "Invite new workers" button.
The [SageMaker Ground Truth documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/sms-workforce-management-private.html) has more details on the management of private workteams.
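Before pasting the ARN below, it can be handy to confirm it at least has the expected shape `arn:aws:sagemaker:<region>:<account-id>:workteam/<workforce>/<team-name>` (real workteam ARNs carry a 12-digit account id between the region and the resource). The parser below is a hypothetical convenience, not part of any AWS SDK:

```python
import re

# Illustrative shape check for a SageMaker workteam ARN.
ARN_RE = re.compile(
    r'^arn:aws:sagemaker:(?P<region>[a-z0-9-]+):(?P<account>\d{12}):'
    r'workteam/(?P<workforce>[a-z-]+)/(?P<team>[A-Za-z0-9-]+)$')

def parse_workteam_arn(arn):
    match = ARN_RE.match(arn)
    if match is None:
        raise ValueError('Not a valid workteam ARN: {}'.format(arn))
    return match.groupdict()
```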
```
private_workteam_arn = '<< your private workteam ARN here >>'
```
## Define pre-built lambda functions for use in the labeling job
Before we submit the request, we need to define the ARNs for four key components of the labeling job: 1) the workteam, 2) the annotation consolidation Lambda function, 3) the pre-labeling task Lambda function, and 4) the machine learning algorithm to perform auto-annotation. These functions are defined by strings with region names and AWS service account numbers, so we will define a mapping below that will enable you to run this notebook in any of our supported regions.
See the official documentation for the available ARNs:
* [Documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/sms-workforce-management-public.html) for a discussion of the workteam ARN definition. There is only one valid selection if you choose to use the public workforce; if you elect to use a private workteam, you should check the corresponding ARN for the workteam.
* [Documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/API_HumanTaskConfig.html#SageMaker-Type-HumanTaskConfig-PreHumanTaskLambdaArn) for available pre-human ARNs for other workflows.
* [Documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/API_AnnotationConsolidationConfig.html#SageMaker-Type-AnnotationConsolidationConfig-AnnotationConsolidationLambdaArn) for available annotation consolidation ARNs for other workflows.
* [Documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/API_LabelingJobAlgorithmsConfig.html#SageMaker-Type-LabelingJobAlgorithmsConfig-LabelingJobAlgorithmSpecificationArn) for available auto-labeling ARNs for other workflows.
```
# Specify ARNs for resources needed to run an image classification job.
ac_arn_map = {'us-west-2': '081040173940',
'us-east-1': '432418664414',
'us-east-2': '266458841044',
'eu-west-1': '568282634449',
'ap-northeast-1': '477331159723'}
prehuman_arn = 'arn:aws:lambda:{}:{}:function:PRE-ImageMultiClass'.format(region, ac_arn_map[region])
acs_arn = 'arn:aws:lambda:{}:{}:function:ACS-ImageMultiClass'.format(region, ac_arn_map[region])
labeling_algorithm_specification_arn = 'arn:aws:sagemaker:{}:027400017018:labeling-job-algorithm-specification/image-classification'.format(region)
workteam_arn = 'arn:aws:sagemaker:{}:394669845002:workteam/public-crowd/default'.format(region)
```
## Submit the Ground Truth job request
The API starts a Ground Truth job by submitting a request. The request contains the
full configuration of the annotation task, and allows you to modify the fine details of
the job that are fixed to default values when you use the AWS Console. The parameters that make up the request are described in more detail in the [SageMaker Ground Truth documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateLabelingJob.html).
After you submit the request, you should be able to see the job in your AWS Console, at `Amazon SageMaker > Labeling Jobs`.
You can track the progress of the job there. This job will take several hours to complete. If your job is larger (say 100,000 images), the speed and cost benefits of auto-labeling should be greater.
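The request below encodes the per-object price for the public workforce in a `PublicWorkforceTaskPrice` structure that splits the amount into dollars, cents, and tenths of a cent. A hypothetical helper showing that encoding (Ground Truth only accepts specific price points, so consult the pricing page for valid values):

```python
def to_public_price(usd):
    """Encode a per-object price in USD into the AmountInUsd structure used by
    PublicWorkforceTaskPrice. Illustrative helper; check the pricing page for
    the price points Ground Truth actually accepts."""
    tenths_of_cents = round(usd * 1000)
    dollars, rem = divmod(tenths_of_cents, 1000)
    cents, tenths = divmod(rem, 10)
    return {"AmountInUsd": {
        "Dollars": int(dollars),
        "Cents": int(cents),
        "TenthFractionsOfACent": int(tenths),
    }}
```

For example, `to_public_price(0.012)` yields the same `Dollars: 0, Cents: 1, TenthFractionsOfACent: 2` structure that appears in the request cell below.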
### Verify your task using a private team [OPTIONAL]
If you chose to follow the steps in [Create a private team](#Create-a-private-team-to-test-your-task-[OPTIONAL]), then you can first verify that your task runs as expected. To do this:
1. Set `VERIFY_USING_PRIVATE_WORKFORCE` to `True` in the cell below.
2. Run the next two cells. This will define the task and submit it to the private workforce (to you).
3. After a few minutes, you should be able to see your task in your private workforce interface [Create a private team](#Create-a-private-team-to-test-your-task-[OPTIONAL]).
Please verify that the task appears as you want it to appear.
4. If everything is in order, change `VERIFY_USING_PRIVATE_WORKFORCE` to `False` and rerun the cell below to start the real annotation task!
```
VERIFY_USING_PRIVATE_WORKFORCE = False
USE_AUTO_LABELING = True
task_description = 'What do you see: a {}?'.format(', a '.join(CLASS_LIST))
task_keywords = ['image', 'classification', 'humans']
task_title = task_description
job_name = 'ground-truth-demo-' + str(int(time.time()))
human_task_config = {
"AnnotationConsolidationConfig": {
"AnnotationConsolidationLambdaArn": acs_arn,
},
"PreHumanTaskLambdaArn": prehuman_arn,
"MaxConcurrentTaskCount": 200, # 200 images will be sent at a time to the workteam.
"NumberOfHumanWorkersPerDataObject": 3, # 3 separate workers will be required to label each image.
"TaskAvailabilityLifetimeInSeconds": 21600, # Your workteam has 6 hours to complete all pending tasks.
"TaskDescription": task_description,
"TaskKeywords": task_keywords,
"TaskTimeLimitInSeconds": 300, # Each image must be labeled within 5 minutes.
"TaskTitle": task_title,
"UiConfig": {
"UiTemplateS3Uri": 's3://{}/{}/instructions.template'.format(BUCKET, EXP_NAME),
}
}
if not VERIFY_USING_PRIVATE_WORKFORCE:
human_task_config["PublicWorkforceTaskPrice"] = {
"AmountInUsd": {
"Dollars": 0,
"Cents": 1,
"TenthFractionsOfACent": 2,
}
}
human_task_config["WorkteamArn"] = workteam_arn
else:
human_task_config["WorkteamArn"] = private_workteam_arn
ground_truth_request = {
"InputConfig" : {
"DataSource": {
"S3DataSource": {
"ManifestS3Uri": 's3://{}/{}/{}'.format(BUCKET, EXP_NAME, manifest_name),
}
},
"DataAttributes": {
"ContentClassifiers": [
"FreeOfPersonallyIdentifiableInformation",
"FreeOfAdultContent"
]
},
},
"OutputConfig" : {
"S3OutputPath": 's3://{}/{}/output/'.format(BUCKET, EXP_NAME),
},
"HumanTaskConfig" : human_task_config,
"LabelingJobName": job_name,
"RoleArn": role,
"LabelAttributeName": "category",
"LabelCategoryConfigS3Uri": 's3://{}/{}/class_labels.json'.format(BUCKET, EXP_NAME),
}
if USE_AUTO_LABELING and RUN_FULL_AL_DEMO:
ground_truth_request[ "LabelingJobAlgorithmsConfig"] = {
"LabelingJobAlgorithmSpecificationArn": labeling_algorithm_specification_arn
}
sagemaker_client = boto3.client('sagemaker')
sagemaker_client.create_labeling_job(**ground_truth_request)
```
## Monitor job progress
A Ground Truth job can take a few hours to complete (if your dataset is larger than 10000 images, it can take much longer than that!). One way to monitor the job's progress is through the AWS Console. In this notebook, we will use Ground Truth output files and CloudWatch logs to monitor the progress.
You can re-evaluate the next cell repeatedly. It sends a `describe_labeling_job` request which tells you whether the job is complete. If it is, 'LabelingJobStatus' will be 'Completed'.
```
sagemaker_client.describe_labeling_job(LabelingJobName=job_name)
```
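Rather than re-running that cell by hand, a small helper can poll until the job leaves the `InProgress` state. The sketch below takes any zero-argument callable that returns the `describe_labeling_job` response, which also makes it easy to test without AWS credentials:

```python
import time

def wait_for_labeling_job(describe_fn, poll_seconds=60, max_polls=240):
    """Poll until 'LabelingJobStatus' is no longer 'InProgress' and return the
    final status (e.g. 'Completed', 'Failed', or 'Stopped'). Illustrative
    sketch; tune the polling interval and timeout for your job size."""
    for _ in range(max_polls):
        status = describe_fn()['LabelingJobStatus']
        if status != 'InProgress':
            return status
        time.sleep(poll_seconds)
    raise TimeoutError('Labeling job still in progress after polling window.')

# Usage (assumes sagemaker_client and job_name from the cells above):
# wait_for_labeling_job(
#     lambda: sagemaker_client.describe_labeling_job(LabelingJobName=job_name))
```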
The next cell extracts detailed information on how your job is doing to date. You can re-evaluate it at any time. It should give you:
* The number of human- and machine-annotated images in each category across the iterations of your labeling job.
* The training curves of any neural network training jobs launched by Ground Truth **(only if you are running with `RUN_FULL_AL_DEMO=True`)**.
* The cost of the human- and machine-annotated labels.
To understand the pricing, study [the pricing doc](https://aws.amazon.com/sagemaker/groundtruth/pricing/) carefully. In our case, each human label costs `$0.08 + 3 * $0.012 = $0.116` and each auto-label costs `$0.08`. There is also a small added cost of using SageMaker instances for neural net training and inference during auto-labeling. However, this should be insignificant compared to the other costs.
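Spelled out, the per-label arithmetic above is a base charge per labeled object plus one per-answer charge for each of the three workers (prices as quoted in this notebook; check the pricing page for current values):

```python
# Per-label cost with the prices quoted above.
BASE_PRICE = 0.08          # per labeled object
PER_ANSWER_PRICE = 0.012   # per human answer
WORKERS_PER_OBJECT = 3

human_label_cost = BASE_PRICE + WORKERS_PER_OBJECT * PER_ANSWER_PRICE  # 0.116
auto_label_cost = BASE_PRICE                                           # 0.08
```

These are the same `HUMAN_PRICE` and `AUTO_PRICE` constants the monitoring cell below uses to estimate the running cost of the job.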
If `RUN_FULL_AL_DEMO==True`, then the job will proceed in multiple iterations.
* Iteration 1: Ground Truth will send out 10 images as 'probes' for human annotation. If these are successfully annotated, proceed to Iteration 2.
* Iteration 2: Send out a batch of `MaxConcurrentTaskCount - 10` (in our case, 190) images for human annotation to obtain an active learning training batch.
* Iteration 3: Send out another batch of 200 images for human annotation to obtain an active learning validation set.
* Iteration 4a: Train a neural net to do auto-labeling. Auto-label as many datapoints as possible.
* Iteration 4b: If there is any data leftover, send out at most 200 images for human annotation.
* Repeat Iteration 4a and 4b until all data is annotated.
If `RUN_FULL_AL_DEMO==False`, only Iterations 1 and 2 will happen.
```
from datetime import datetime
import glob
import shutil
HUMAN_PRICE = 0.116
AUTO_PRICE = 0.08
try:
os.makedirs('ic_output_data/', exist_ok=False)
except FileExistsError:
shutil.rmtree('ic_output_data/')
S3_OUTPUT = boto3.client('sagemaker').describe_labeling_job(LabelingJobName=job_name)[
'OutputConfig']['S3OutputPath'] + job_name
# Download human annotation data.
!aws s3 cp {S3_OUTPUT + '/annotations/worker-response'} ic_output_data/worker-response --recursive --quiet
worker_times = []
worker_ids = []
# Collect the times and worker ids of all the annotation events to-date.
for annot_fname in glob.glob('ic_output_data/worker-response/**', recursive=True):
if annot_fname.endswith('json'):
with open(annot_fname, 'r') as f:
annot_data = json.load(f)
for answer in annot_data['answers']:
annot_time = datetime.strptime(
answer['submissionTime'], '%Y-%m-%dT%H:%M:%SZ')
annot_id = answer['workerId']
worker_times.append(annot_time)
worker_ids.append(annot_id)
sort_ids = np.argsort(worker_times)
worker_times = np.array(worker_times)[sort_ids]
worker_ids = np.array(worker_ids)[sort_ids]
cumulative_n_annots = np.cumsum([1 for _ in worker_times])
# Count the number of annotations per unique worker id.
annots_per_worker = np.zeros(worker_ids.size)
ids_store = set()
for worker_id_id, worker_id in enumerate(worker_ids):
ids_store.add(worker_id)
annots_per_worker[worker_id_id] = float(
cumulative_n_annots[worker_id_id]) / len(ids_store)
# Count number of human annotations in each class each iteration.
!aws s3 cp {S3_OUTPUT + '/annotations/consolidated-annotation/consolidation-response'} ic_output_data/consolidation-response --recursive --quiet
consolidated_classes = defaultdict(list)
consolidation_times = {}
consolidated_cost_times = []
for consolidated_fname in glob.glob('ic_output_data/consolidation-response/**', recursive=True):
if consolidated_fname.endswith('json'):
iter_id = int(consolidated_fname.split('/')[-2][-1])
# Store the time of the most recent consolidation event as iteration time.
iter_time = datetime.strptime(consolidated_fname.split('/')[-1], '%Y-%m-%d_%H:%M:%S.json')
if iter_id in consolidation_times:
consolidation_times[iter_id] = max(consolidation_times[iter_id], iter_time)
else:
consolidation_times[iter_id] = iter_time
consolidated_cost_times.append(iter_time)
with open(consolidated_fname, 'r') as f:
consolidated_data = json.load(f)
for consolidation in consolidated_data:
consolidation_class = consolidation['consolidatedAnnotation']['content'][
'category-metadata']['class-name']
consolidated_classes[iter_id].append(consolidation_class)
total_human_labels = sum([len(annots) for annots in consolidated_classes.values()])
# Count the number of machine iterations in each class each iteration.
!aws s3 cp {S3_OUTPUT + '/activelearning'} ic_output_data/activelearning --recursive --quiet
auto_classes = defaultdict(list)
auto_times = {}
auto_cost_times = []
for auto_fname in glob.glob('ic_output_data/activelearning/**', recursive=True):
if auto_fname.endswith('auto_annotator_output.txt'):
iter_id = int(auto_fname.split('/')[-3])
with open(auto_fname, 'r') as f:
annots = [' '.join(l.split()[1:]) for l in f.readlines()]
for annot in annots:
annot = json.loads(annot)
time_str = annot['category-metadata']['creation-date']
auto_time = datetime.strptime(time_str, '%Y-%m-%dT%H:%M:%S.%f')
auto_class = annot['category-metadata']['class-name']
auto_classes[iter_id].append(auto_class)
if iter_id in auto_times:
auto_times[iter_id] = max(auto_times[iter_id], auto_time)
else:
auto_times[iter_id] = auto_time
auto_cost_times.append(auto_time)
total_auto_labels = sum([len(annots) for annots in auto_classes.values()])
n_iters = max(len(auto_times), len(consolidation_times))
def get_training_job_data(training_job_name):
logclient = boto3.client('logs')
log_group_name = '/aws/sagemaker/TrainingJobs'
log_stream_name = logclient.describe_log_streams(logGroupName=log_group_name,
logStreamNamePrefix=training_job_name)['logStreams'][0]['logStreamName']
train_log = logclient.get_log_events(
logGroupName=log_group_name,
logStreamName=log_stream_name,
startFromHead=True
)
events = train_log['events']
next_token = train_log['nextForwardToken']
while True:
train_log = logclient.get_log_events(
logGroupName=log_group_name,
logStreamName=log_stream_name,
startFromHead=True,
nextToken=next_token
)
if train_log['nextForwardToken'] == next_token:
break
events = events + train_log['events']
errors = []
for event in events:
msg = event['message']
if 'Final configuration' in msg:
num_samples = int(msg.split('num_training_samples\': u\'')[1].split('\'')[0])
elif 'Validation-accuracy' in msg:
errors.append(float(msg.split('Validation-accuracy=')[1]))
errors = 1 - np.array(errors)
return num_samples, errors
training_data = !aws s3 ls {S3_OUTPUT + '/training/'} --recursive
training_sizes = []
training_errors = []
training_iters = []
for line in training_data:
if line.split('/')[-1] == 'model.tar.gz':
training_job_name = line.split('/')[-3]
n_samples, errors = get_training_job_data(training_job_name)
training_sizes.append(n_samples)
training_errors.append(errors)
training_iters.append(int(line.split('/')[-5]))
plt.figure(facecolor='white', figsize=(14, 4), dpi=100)
ax = plt.subplot(131)
plt.title('Label counts ({} human, {} auto)'.format(
total_human_labels, total_auto_labels))
cmap = plt.get_cmap('coolwarm')
for iter_id in consolidated_classes.keys():
bottom = 0
class_counter = Counter(consolidated_classes[iter_id])
for cname_id, cname in enumerate(CLASS_LIST):
if iter_id == 1:
plt.bar(iter_id, class_counter[cname], width=.4, bottom=bottom,
label=cname, color=cmap(cname_id / float(len(CLASS_LIST)-1)))
else:
plt.bar(iter_id, class_counter[cname], width=.4, bottom=bottom,
color=cmap(cname_id / float(len(CLASS_LIST)-1)))
bottom += class_counter[cname]
for iter_id in auto_classes.keys():
bottom = 0
class_counter = Counter(auto_classes[iter_id])
for cname_id, cname in enumerate(CLASS_LIST):
plt.bar(iter_id + .4, class_counter[cname], width=.4, bottom=bottom, color=cmap(cname_id / float(len(CLASS_LIST)-1)))
bottom += class_counter[cname]
tick_labels_human = ['Iter {}, human'.format(iter_id + 1) for iter_id in range(n_iters)]
tick_labels_auto = ['Iter {}, auto'.format(iter_id + 1) for iter_id in range(n_iters)]
tick_locations_human = np.arange(n_iters) + 1
tick_locations_auto = tick_locations_human + .4
tick_labels = np.concatenate([[tick_labels_human[idx], tick_labels_auto[idx]] for idx in range(n_iters)])
tick_locations = np.concatenate([[tick_locations_human[idx], tick_locations_auto[idx]] for idx in range(n_iters)])
plt.xticks(tick_locations, tick_labels, rotation=90)
plt.legend()
plt.ylabel('Count')
ax = plt.subplot(132)
total_human = 0
total_auto = 0
for iter_id in range(1, n_iters + 1):
cost_human = len(consolidated_classes[iter_id]) * HUMAN_PRICE
cost_auto = len(auto_classes[iter_id]) * AUTO_PRICE
total_human += cost_human
total_auto += cost_auto
plt.bar(iter_id, cost_human, width=.8, color='gray',
hatch='/', edgecolor='k', label='human' if iter_id==1 else None)
plt.bar(iter_id, cost_auto, bottom=cost_human,
width=.8, color='gray', edgecolor='k', label='auto' if iter_id==1 else None)
plt.title('Annotation costs (\${:.2f} human, \${:.2f} auto)'.format(
total_human, total_auto))
plt.xlabel('Iter')
plt.ylabel('Cost in dollars')
plt.legend()
if len(training_sizes) > 0:
plt.subplot(133)
plt.title('Active learning training curves')
plt.grid(True)
cmap = plt.get_cmap('coolwarm')
n_all = len(training_sizes)
for iter_id_id, (iter_id, size, errs) in enumerate(zip(training_iters, training_sizes, training_errors)):
plt.plot(errs, label='Iter {}, auto'.format(iter_id + 1), color=cmap(iter_id_id / max(1, (n_all-1))))
plt.legend()
plt.xscale('log')
plt.xlabel('Training epoch')
plt.ylabel('Validation error')
```
# Analyze Ground Truth labeling job results
**This section should take about 20 minutes to complete.**
After the job finishes running (**make sure `sagemaker_client.describe_labeling_job` shows the job is complete!**), it is time to analyze the results. The plots in the [Monitor job progress](#Monitor-job-progress) section form part of the analysis. In this section, we will gain additional insights into the results, all contained in the `output manifest`. You can find the location of the output manifest under `AWS Console > SageMaker > Labeling Jobs > [name of your job]`. We will obtain it programmatically in the cell below.
## Postprocess the output manifest
Now that the job is complete, we will download the output manifest and postprocess it to form four arrays:
* `img_uris` contains the S3 URIs of all the images that Ground Truth annotated.
* `labels` contains Ground Truth's labels for each image in `img_uris`.
* `confidences` contains the confidence of each label in `labels`.
* `human` is a flag array that contains 1 at indices corresponding to images annotated by human annotators, and 0 at indices corresponding to images annotated by Ground Truth's automated data labeling.
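For reference, each line of the output manifest is a JSON object. The sketch below shows one hypothetical line (all field values are illustrative; the metadata key is named after your labeling job) and how the cell below recovers the job name and label fields from it:

```python
import json

# A hypothetical output manifest line. The '-metadata' key is named after the
# labeling job, which is how the postprocessing cell recovers the job name.
line = ('{"source-ref": "s3://mybucket/images/0001.jpg", '
        '"my-job": 0, '
        '"my-job-metadata": {"class-name": "Musical Instrument", '
        '"confidence": 0.94, "human-annotated": "yes", '
        '"type": "groundtruth/image-classification"}}')
datum = json.loads(line)

# Locate the metadata key and strip the '-metadata' suffix to get the job name.
metakey = [k for k in datum if k.endswith('-metadata')][0]
print(metakey[:-9])                                      # my-job
print(datum[metakey]['class-name'])                      # Musical Instrument
print(int(datum[metakey]['human-annotated'] == 'yes'))   # 1
```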
```
# Load the output manifest's annotations.
OUTPUT_MANIFEST = 's3://{}/{}/output/{}/manifests/output/output.manifest'.format(BUCKET, EXP_NAME, job_name)
!aws s3 cp {OUTPUT_MANIFEST} 'output.manifest'
with open('output.manifest', 'r') as f:
output = [json.loads(line.strip()) for line in f.readlines()]
# Create data arrays.
img_uris = [None] * len(output)
confidences = np.zeros(len(output))
groundtruth_labels = [None] * len(output)
human = np.zeros(len(output))
# Find the job name the manifest corresponds to.
keys = list(output[0].keys())
metakey = keys[np.where([('-metadata' in k) for k in keys])[0][0]]
jobname = metakey[:-9]
# Extract the data.
for datum_id, datum in enumerate(output):
img_uris[datum_id] = datum['source-ref']
groundtruth_labels[datum_id] = str(datum[metakey]['class-name'])
confidences[datum_id] = datum[metakey]['confidence']
human[datum_id] = int(datum[metakey]['human-annotated'] == 'yes')
groundtruth_labels = np.array(groundtruth_labels)
```
## Plot class histograms
Now, let's plot the class histograms. The next cell should produce three subplots:
* The Left subplot shows the number of images annotated as belonging to each visual category. The categories will be sorted from the most to the least numerous. Each bar is divided into a 'human' and 'machine' part which shows how many images were annotated as a given category by human annotators and by the automated data labeling mechanism.
* The Middle subplot is the same as Left, except the y-axis is in log-scale. This helps visualize unbalanced datasets where some categories contain orders of magnitude more images than others.
* The Right subplot shows the average confidence of images in each category, separately for human and auto-annotated images.
```
# Compute the number of annotations in each class.
n_classes = len(set(groundtruth_labels))
sorted_clnames, class_sizes = zip(*Counter(groundtruth_labels).most_common(n_classes))
# Find ids of human-annotated images.
human_sizes = [human[groundtruth_labels == clname].sum() for clname in sorted_clnames]
class_sizes = np.array(class_sizes)
human_sizes = np.array(human_sizes)
# Compute the average annotation confidence per class.
human_confidences = [confidences[np.logical_and(groundtruth_labels == clname, human)]
                     for clname in sorted_clnames]
machine_confidences = [confidences[np.logical_and(groundtruth_labels == clname, 1-human)]
                       for clname in sorted_clnames]
# If no images were annotated as a given class, set the average class confidence to 0.
for class_id in range(n_classes):
if human_confidences[class_id].size == 0:
human_confidences[class_id] = np.array([0])
if machine_confidences[class_id].size == 0:
machine_confidences[class_id] = np.array([0])
plt.figure(figsize=(9, 3), facecolor='white', dpi=100)
plt.subplot(1, 3, 1)
plt.title('Annotation histogram')
plt.bar(range(n_classes), human_sizes, color='gray', hatch='/', edgecolor='k', label='human')
plt.bar(range(n_classes), class_sizes - human_sizes, bottom=human_sizes, color='gray', edgecolor='k', label='machine')
plt.xticks(range(n_classes), sorted_clnames, rotation=90)
plt.ylabel('Annotation Count')
plt.legend()
plt.subplot(1, 3, 2)
plt.title('Annotation histogram (logscale)')
plt.bar(range(n_classes), human_sizes, color='gray', hatch='/', edgecolor='k', label='human')
plt.bar(range(n_classes), class_sizes - human_sizes, bottom=human_sizes, color='gray', edgecolor='k', label='machine')
plt.xticks(range(n_classes), sorted_clnames, rotation=90)
plt.yscale('log')
plt.subplot(1, 3, 3)
plt.title('Mean confidences')
plt.bar(np.arange(n_classes), [conf.mean() for conf in human_confidences],
color='gray', hatch='/', edgecolor='k', width=.4)
plt.bar(np.arange(n_classes) + .4, [conf.mean() for conf in machine_confidences],
color='gray', edgecolor='k', width=.4)
plt.xticks(range(n_classes), sorted_clnames, rotation=90);
```
## Plot annotated images
In any data science task, it is crucial to plot and inspect the results to check they make sense. In order to do this, we will
1. Download the input images that Ground Truth annotated.
2. Split them by annotated category and whether the annotation was done by human or the auto-labeling mechanism.
3. Plot images in each category and human/auto-annotated class.
We will download the input images to the `LOCAL_IMG_DIR` directory, which you can choose in the next cell. Note that if this directory already contains images with the same filenames as your Ground Truth input images, we will not re-download the images.
If your dataset is large and you do not wish to download and plot **all** the images, simply set `DATASET_SIZE` to a small number. We will pick a random subset of your data for plotting.
```
LOCAL_IMG_DIR = '<< choose a local directory name to download the images to >>' # Replace with the name of a local directory to store images.
assert LOCAL_IMG_DIR != '<< choose a local directory name to download the images to >>', 'Please provide a local directory name'
DATASET_SIZE = len(img_uris) # Change this to a reasonable number if your dataset is much larger than 10K images.
subset_ids = np.random.choice(range(len(img_uris)), DATASET_SIZE, replace=False)
img_uris = [img_uris[idx] for idx in subset_ids]
groundtruth_labels = groundtruth_labels[subset_ids]
confidences = confidences[subset_ids]
human = human[subset_ids]
img_fnames = [None] * len(img_uris)  # match the (possibly subsampled) image list
for img_uri_id, img_uri in enumerate(img_uris):
target_fname = os.path.join(
LOCAL_IMG_DIR, img_uri.split('/')[-1])
if not os.path.isfile(target_fname):
!aws s3 cp {img_uri} {target_fname}
img_fnames[img_uri_id] = target_fname
```
### Plot a small output sample
The following cell will create two figures. The first plots `N_SHOW` images in each category, as annotated by humans. The second plots `N_SHOW` images in each category, as annotated by the auto-labeling mechanism.
If any category contains fewer than `N_SHOW` images, that row will not be displayed. By default, `N_SHOW = 10`, but feel free to change this to any other small number.
```
N_SHOW = 10
plt.figure(figsize=(3 * N_SHOW, 2 + 3 * n_classes), facecolor='white', dpi=60)
for class_name_id, class_name in enumerate(sorted_clnames):
class_ids = np.where(np.logical_and(np.array(groundtruth_labels) == class_name, human))[0]
    if len(class_ids) < N_SHOW:
        print('Not enough human annotations to show for class: {}'.format(class_name))
        continue
    show_ids = class_ids[:N_SHOW]
for show_id_id, show_id in enumerate(show_ids):
plt.subplot2grid((n_classes, N_SHOW), (class_name_id, show_id_id))
plt.title('Human Label: ' + class_name)
        plt.imshow(imageio.imread(img_fnames[show_id]))
plt.axis('off')
plt.tight_layout()
plt.figure(figsize=(3 * N_SHOW, 2 + 3 * n_classes), facecolor='white', dpi=100)
for class_name_id, class_name in enumerate(sorted_clnames):
class_ids = np.where(np.logical_and(np.array(groundtruth_labels) == class_name, 1-human))[0]
try:
show_ids = np.random.choice(class_ids, N_SHOW, replace=False)
except ValueError:
print('Not enough machine annotations to show for class: {}'.format(class_name))
continue
for show_id_id, show_id in enumerate(show_ids):
plt.subplot2grid((n_classes, N_SHOW), (class_name_id, show_id_id))
plt.title('Auto Label: ' + class_name)
plt.imshow(imageio.imread(img_fnames[show_id]))
plt.axis('off')
plt.tight_layout()
```
### Plot the full results
Finally, we plot all the results to a large pdf file. The pdf (called `ground_truth.pdf`) will display 100 images per page. Each page will contain images belonging to the same category, and annotated either by human annotators or by the auto-labeling mechanism. You can use this pdf to investigate exactly which images were annotated as which class at a glance.
This might take a while, and the resulting pdf might be very large. For a dataset of 1K images, the process takes only a minute and creates a pdf of roughly 10 MB. You can set `N_SHOW_PER_CLASS` to a small number if you want to limit the maximum number of examples shown in each category.
```
N_SHOW_PER_CLASS = np.inf
plt.figure(figsize=(10, 10), facecolor='white', dpi=100)
with PdfPages('ground_truth.pdf') as pdf:
for class_name in sorted_clnames:
# Plot images annotated as class_name by humans.
plt.clf()
plt.text(0.1, 0.5, s='Images annotated as {} by humans'.format(class_name), fontsize=20)
plt.axis('off')
class_ids = np.where(np.logical_and(np.array(groundtruth_labels) == class_name, human))[0]
for img_id_id, img_id in enumerate(class_ids):
if img_id_id == N_SHOW_PER_CLASS:
break
if img_id_id % 100 == 0:
pdf.savefig()
plt.clf()
print('Plotting human annotations of {}, {}/{}...'.format(
class_name, (img_id_id + 1), min(len(class_ids), N_SHOW_PER_CLASS)))
plt.subplot(10, 10, (img_id_id % 100) + 1)
plt.imshow(imageio.imread(img_fnames[img_id]), aspect='auto')
plt.axis('off')
pdf.savefig()
# Plot images annotated as class_name by machines.
plt.clf()
plt.text(0.1, 0.5, s='Images annotated as {} by machines'.format(class_name), fontsize=20)
plt.axis('off')
class_ids = np.where(np.logical_and(np.array(groundtruth_labels) == class_name, 1-human))[0]
for img_id_id, img_id in enumerate(class_ids):
if img_id_id == N_SHOW_PER_CLASS:
break
if img_id_id % 100 == 0:
pdf.savefig()
plt.clf()
print('Plotting machine annotations of {}, {}/{}...'.format(
class_name, (img_id_id + 1), min(len(class_ids), N_SHOW_PER_CLASS)))
plt.subplot(10, 10, (img_id_id % 100) + 1)
plt.imshow(imageio.imread(img_fnames[img_id]), aspect='auto')
plt.axis('off')
pdf.savefig()
plt.clf()
```
# Compare Ground Truth results to known, pre-labeled data
**This section should take about 5 minutes to complete.**
Sometimes (for example, when benchmarking the system) we have an alternative set of data labels available.
For example, the Open Images data has already been carefully annotated by a professional annotation workforce.
This allows us to perform additional analysis that compares Ground Truth labels to the known, pre-labeled data.
When doing so, it is important to bear in mind that any image labels created by humans
will most likely not be 100% accurate. For this reason, it is better to think of labeling accuracy as
"adherence to a particular standard / set of labels" rather than "how good (in absolute terms) are the Ground Truth labels."
## Compute accuracy
In this cell, we will calculate the accuracy of Ground Truth labels with respect to the standard labels.
In [Prepare the data](#Prepare-the-data), we created the `ims` dictionary that specifies which image belongs to each category.
We will convert it to an array `standard_labels` such that `standard_labels[i]` contains the label of the `i-th` image, and
should ideally correspond to `groundtruth_labels[i]`.
This will allow us to plot confusion matrices to assess how well the Ground Truth labels adhere to the standard labels. We plot a confusion matrix for the total dataset, and separate matrices for human annotations and auto-annotations.
```
def plot_confusion_matrix(cm, classes, title='Confusion matrix', normalize=False, cmap=plt.cm.Blues):
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=90)
plt.yticks(tick_marks, classes)
    fmt = '.2f' if normalize else 'd'
    thresh = cm.max() / 2.
    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
        plt.text(j, i, format(cm[i, j], fmt),
                 horizontalalignment="center",
                 color="white" if cm[i, j] > thresh else "black")
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.tight_layout()
# Convert the 'ims' dictionary (which maps class names to images) to a list of image classes.
standard_labels = []
for img_uri in img_uris:
img_uri = img_uri.split('/')[-1].split('.')[0]
standard_label = [cname for cname, imgs_in_cname in ims.items() if img_uri in imgs_in_cname][0]
standard_labels.append(standard_label)
standard_labels = np.array(standard_labels)
# Plot a confusion matrix for the full dataset.
plt.figure(facecolor='white', figsize=(12, 4), dpi=100)
plt.subplot(131)
mean_err = 100 - np.mean(standard_labels == groundtruth_labels) * 100
cnf_matrix = confusion_matrix(standard_labels, groundtruth_labels)
np.set_printoptions(precision=2)
plot_confusion_matrix(cnf_matrix, classes=sorted(ims.keys()),
title='Full annotation set error {:.2f}%'.format(
mean_err), normalize=False)
# Plot a confusion matrix for human-annotated Ground Truth labels.
plt.subplot(132)
mean_err = 100 - np.mean(standard_labels[human==1.] == groundtruth_labels[human==1.]) * 100
cnf_matrix = confusion_matrix(standard_labels[human==1.], groundtruth_labels[human==1.])
np.set_printoptions(precision=2)
plot_confusion_matrix(cnf_matrix, classes=sorted(ims.keys()),
title='Human annotation set (size {}) error {:.2f}%'.format(
int(sum(human)), mean_err), normalize=False)
# Plot a confusion matrix for auto-annotated Ground Truth labels.
if sum(human==0.) > 0:
plt.subplot(133)
mean_err = 100 - np.mean(standard_labels[human==0.] == groundtruth_labels[human==0.]) * 100
cnf_matrix = confusion_matrix(standard_labels[human==0.], groundtruth_labels[human==0.])
np.set_printoptions(precision=2)
plot_confusion_matrix(cnf_matrix, classes=sorted(ims.keys()),
title='Auto-annotation set (size {}) error {:.2f}%'.format(
int(len(human) - sum(human)), mean_err), normalize=False)
```
## Plot correct and incorrect annotations
This cell repeats the plot from the [Plot the full results](#Plot-the-full-results) section. However, it sorts the predictions into correct and incorrect, and indicates the standard label of all the incorrect predictions.
```
N_SHOW_PER_CLASS = np.inf
plt.figure(figsize=(10, 10), facecolor='white', dpi=100)
with PdfPages('ground_truth_benchmark.pdf') as pdf:
for class_name in sorted_clnames:
human_ids = np.where(np.logical_and(np.array(groundtruth_labels) == class_name, human))[0]
auto_ids = np.where(np.logical_and(np.array(groundtruth_labels) == class_name, 1-human))[0]
for class_ids_id, class_ids in enumerate([human_ids, auto_ids]):
plt.clf()
plt.text(0.1, 0.5, s='Images annotated as {} by {}'.format(class_name, 'humans' if class_ids_id == 0 else 'machines'), fontsize=20)
plt.axis('off')
good_ids = class_ids[np.where(standard_labels[class_ids] == groundtruth_labels[class_ids])[0]]
bad_ids = class_ids[np.where(standard_labels[class_ids] != groundtruth_labels[class_ids])[0]]
for img_id_id, img_id in enumerate(np.concatenate([good_ids, bad_ids])):
if img_id_id == N_SHOW_PER_CLASS:
break
if img_id_id % 100 == 0:
pdf.savefig()
plt.clf()
print('Plotting annotations of {}, {}/{}...'.format(
class_name, img_id_id, min(len(class_ids), N_SHOW_PER_CLASS)))
ax = plt.subplot(10, 10, (img_id_id % 100) + 1)
plt.imshow(imageio.imread(img_fnames[img_id]), aspect='auto')
plt.axis('off')
if img_id_id < len(good_ids):
# Draw a green border around the image.
rec = matplotlib.patches.Rectangle((0, 0), 1, 1, lw=10,
edgecolor='green', fill=False,
transform=ax.transAxes)
else:
# Draw a red border around the image.
rec = matplotlib.patches.Rectangle((0, 0), 1, 1, lw=10,
edgecolor='red', fill=False,
transform=ax.transAxes)
ax.add_patch(rec)
pdf.savefig()
plt.clf()
```
# Train an image classifier using Ground Truth labels
At this stage, we have fully labeled our dataset and we can train a machine learning model to classify images based on the categories we previously defined. We'll do so using the **augmented manifest** output of our labeling job - no additional file translation or manipulation required! For a more complete description of the augmented manifest, see our other [example notebook](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/ground_truth_labeling_jobs/object_detection_augmented_manifest_training/object_detection_augmented_manifest_training.ipynb).
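As a sketch, one line of an augmented manifest for image classification might look like the entry below (assuming the labeling job's label attribute was named `category`, matching the `AttributeNames` used in the training job configuration; all values are illustrative):

```python
import json

# Hypothetical augmented manifest line: the image URI and its label travel
# together on one line, so SageMaker can stream both through a Pipe-mode
# channel without a separate RecordIO conversion step on our side.
entry = {
    "source-ref": "s3://mybucket/images/0001.jpg",
    "category": 3,
    "category-metadata": {
        "class-name": "Fruit",
        "confidence": 0.88,
        "human-annotated": "no",
        "type": "groundtruth/image-classification",
    },
}
print(json.dumps(entry))
```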
**NOTE:** Training neural networks to high accuracy often requires a careful choice of hyperparameters. In this case, we hand-picked hyperparameters that work reasonably well for this dataset. The neural net should have an accuracy of about **60% if you're using 100 datapoints, and over 95% if you're using 1000 datapoints.** To train neural networks on novel data, consider using [SageMaker's model tuning / hyperparameter optimization algorithms](https://docs.aws.amazon.com/sagemaker/latest/dg/automatic-model-tuning-how-it-works.html).
First, we'll split our augmented manifest into a training set and a validation set using an 80/20 split.
```
with open('output.manifest', 'r') as f:
output = [json.loads(line) for line in f.readlines()]
# Shuffle output in place.
np.random.shuffle(output)
dataset_size = len(output)
train_test_split_index = round(dataset_size*0.8)
train_data = output[:train_test_split_index]
validation_data = output[train_test_split_index:]
num_training_samples = 0
with open('train.manifest', 'w') as f:
for line in train_data:
f.write(json.dumps(line))
f.write('\n')
num_training_samples += 1
with open('validation.manifest', 'w') as f:
for line in validation_data:
f.write(json.dumps(line))
f.write('\n')
```
Next, we'll upload these manifest files to the previously defined S3 bucket so that they can be used in the training job.
```
s3.upload_file('train.manifest',BUCKET, EXP_NAME + '/train.manifest')
s3.upload_file('validation.manifest',BUCKET, EXP_NAME + '/validation.manifest')
# Create unique job name
nn_job_name_prefix = 'groundtruth-augmented-manifest-demo'
timestamp = time.strftime('-%Y-%m-%d-%H-%M-%S', time.gmtime())
nn_job_name = nn_job_name_prefix + timestamp
training_image = sagemaker.amazon.amazon_estimator.get_image_uri(boto3.Session().region_name, 'image-classification', repo_version='latest')
training_params = \
{
"AlgorithmSpecification": {
"TrainingImage": training_image,
"TrainingInputMode": "Pipe"
},
"RoleArn": role,
"OutputDataConfig": {
"S3OutputPath": 's3://{}/{}/output/'.format(BUCKET, EXP_NAME)
},
"ResourceConfig": {
"InstanceCount": 1,
"InstanceType": "ml.p3.2xlarge",
"VolumeSizeInGB": 50
},
"TrainingJobName": nn_job_name,
"HyperParameters": {
"epochs": "30",
"image_shape": "3,224,224",
"learning_rate": "0.01",
"lr_scheduler_step": "10,20",
"mini_batch_size": "32",
"num_classes": str(num_classes),
"num_layers": "18",
"num_training_samples": str(num_training_samples),
"resize": "224",
"use_pretrained_model": "1"
},
"StoppingCondition": {
"MaxRuntimeInSeconds": 86400
},
"InputDataConfig": [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "AugmentedManifestFile",
"S3Uri": 's3://{}/{}/{}'.format(BUCKET, EXP_NAME, 'train.manifest'),
"S3DataDistributionType": "FullyReplicated",
"AttributeNames": ["source-ref","category"]
}
},
"ContentType": "application/x-recordio",
"RecordWrapperType": "RecordIO",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "AugmentedManifestFile",
"S3Uri": 's3://{}/{}/{}'.format(BUCKET, EXP_NAME, 'validation.manifest'),
"S3DataDistributionType": "FullyReplicated",
"AttributeNames": ["source-ref","category"]
}
},
"ContentType": "application/x-recordio",
"RecordWrapperType": "RecordIO",
"CompressionType": "None"
}
]
}
```
Now we create the SageMaker training job.
```
sagemaker_client = boto3.client('sagemaker')
sagemaker_client.create_training_job(**training_params)
# Confirm that the training job has started.
print('Training job started')
while True:
    response = sagemaker_client.describe_training_job(TrainingJobName=nn_job_name)
    status = response['TrainingJobStatus']
    if status == 'Completed':
        print("Training job ended with status: " + status)
        break
    if status == 'Failed':
        message = response['FailureReason']
        print('Training failed with the following error: {}'.format(message))
        raise Exception('Training job failed')
    time.sleep(30)
```
# Deploy the Model
Now that we've fully labeled our dataset and have a trained model, we want to use the model to perform inference.
Image classification only supports encoded .jpg and .png image formats as inference input for now. The output is the probability values for all classes encoded in JSON format, or in JSON Lines format for batch transform.
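Decoding that output is straightforward: the response body is a JSON array with one probability per class, in the order of the class list used for training. A minimal sketch (the class names and probabilities below are made up; `class_list` stands in for the notebook's `CLASS_LIST`):

```python
import json

import numpy as np

# Hypothetical inference response body: one probability per class.
class_list = ['Musical Instrument', 'Fruit', 'Cheetah', 'Tiger', 'Snowman']
body = '[0.02, 0.05, 0.85, 0.06, 0.02]'
probabilities = json.loads(body)

# The predicted class is the one with maximum probability.
best = int(np.argmax(probabilities))
print(class_list[best], probabilities[best])   # Cheetah 0.85
```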
This section involves several steps:
* Create model - Create a SageMaker model from the training output.
* Batch transform - Create a transform job to perform batch inference.
* Host the model for realtime inference - Create an inference endpoint and perform realtime inference.
## Create Model
```
timestamp = time.strftime('-%Y-%m-%d-%H-%M-%S', time.gmtime())
model_name="groundtruth-demo-ic-model" + timestamp
print(model_name)
info = sagemaker_client.describe_training_job(TrainingJobName=nn_job_name)
model_data = info['ModelArtifacts']['S3ModelArtifacts']
print(model_data)
primary_container = {
'Image': training_image,
'ModelDataUrl': model_data,
}
create_model_response = sagemaker_client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
print(create_model_response['ModelArn'])
```
## Batch Transform
We now create a SageMaker Batch Transform job using the model created above to perform batch prediction.
### Download Test Data
First, let's download a test image that has been held out from the training and validation data.
```
timestamp = time.strftime('-%Y-%m-%d-%H-%M-%S', time.gmtime())
batch_job_name = "image-classification-model" + timestamp
batch_input = 's3://{}/{}/test/'.format(BUCKET, EXP_NAME)
batch_output = 's3://{}/{}/{}/output/'.format(BUCKET, EXP_NAME, batch_job_name)
# Copy two images from each class, unseen by the neural net, to the batch input S3 prefix.
test_images = []
for class_id in ['/m/04szw', '/m/02xwb', '/m/0cd4d', '/m/07dm6', '/m/0152hh']:
test_images.extend([label[0] + '.jpg' for label in all_labels if (label[2] == class_id and label[3] == '1')][-2:])
!aws s3 rm $batch_input --recursive
for test_img in test_images:
!aws s3 cp s3://open-images-dataset/test/{test_img} {batch_input}
request = \
{
"TransformJobName": batch_job_name,
"ModelName": model_name,
"MaxConcurrentTransforms": 16,
"MaxPayloadInMB": 6,
"BatchStrategy": "SingleRecord",
"TransformOutput": {
"S3OutputPath": 's3://{}/{}/{}/output/'.format(BUCKET, EXP_NAME, batch_job_name)
},
"TransformInput": {
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": batch_input
}
},
"ContentType": "application/x-image",
"SplitType": "None",
"CompressionType": "None"
},
"TransformResources": {
"InstanceType": "ml.p2.xlarge",
"InstanceCount": 1
}
}
print('Transform job name: {}'.format(batch_job_name))
sagemaker_client = boto3.client('sagemaker')
sagemaker_client.create_transform_job(**request)
print("Created Transform job with name: ", batch_job_name)
while True:
response = sagemaker_client.describe_transform_job(TransformJobName=batch_job_name)
status = response['TransformJobStatus']
if status == 'Completed':
print("Transform job ended with status: " + status)
break
if status == 'Failed':
message = response['FailureReason']
print('Transform failed with the following error: {}'.format(message))
raise Exception('Transform job failed')
time.sleep(30)
```
After the job completes, let's inspect the prediction results.
```
def get_label(out_fname):
!aws s3 cp {out_fname} .
print(out_fname)
with open(out_fname.split('/')[-1]) as f:
data = json.load(f)
index = np.argmax(data['prediction'])
probability = data['prediction'][index]
print("Result: label - " + CLASS_LIST[index] + ", probability - " + str(probability))
input_fname = out_fname.split('/')[-1][:-4]
return CLASS_LIST[index], probability, input_fname
# Show prediction results.
!rm test_inputs/*
plt.figure(facecolor='white', figsize=(7, 15), dpi=100)
outputs = !aws s3 ls {batch_output}
outputs = [get_label(batch_output + prefix.split()[-1]) for prefix in outputs]
outputs.sort(key=lambda pred: pred[1], reverse=True)
for fname_id, (pred_cname, pred_conf, pred_fname) in enumerate(outputs):
!aws s3 cp {batch_input}{pred_fname} test_inputs/{pred_fname}
plt.subplot(5, 2, fname_id+1)
img = imageio.imread('test_inputs/{}'.format(pred_fname))
plt.imshow(img)
plt.axis('off')
plt.title('{}\nconfidence={:.2f}'.format(pred_cname, pred_conf))
if RUN_FULL_AL_DEMO:
warning = ''
else:
warning = ('\nNOTE: In this small demo we only used 80 images to train the neural network.\n'
'The predictions will be far from perfect! Set RUN_FULL_AL_DEMO=True to see properly trained results.')
plt.suptitle('Predictions sorted by confidence.{}'.format(warning))
```
## Realtime Inference
We now host the model with an endpoint and perform realtime inference.
This section involves several steps:
* Create endpoint configuration - Create a configuration defining the endpoint.
* Create endpoint - Use the configuration to create an inference endpoint.
* Perform inference - Run inference on some input data using the endpoint.
* Clean up - Delete the endpoint and model.
## Create Endpoint Configuration
```
timestamp = time.strftime('-%Y-%m-%d-%H-%M-%S', time.gmtime())
endpoint_config_name = job_name + '-epc' + timestamp
endpoint_config_response = sagemaker_client.create_endpoint_config(
EndpointConfigName = endpoint_config_name,
ProductionVariants=[{
'InstanceType':'ml.m4.xlarge',
'InitialInstanceCount':1,
'ModelName':model_name,
'VariantName':'AllTraffic'}])
print('Endpoint configuration name: {}'.format(endpoint_config_name))
print('Endpoint configuration arn: {}'.format(endpoint_config_response['EndpointConfigArn']))
```
## Create Endpoint
Lastly, we create the endpoint that serves up the model by specifying the name and configuration defined above. The end result is an endpoint that can be validated and incorporated into production applications. This takes about 10 minutes to complete.
```
timestamp = time.strftime('-%Y-%m-%d-%H-%M-%S', time.gmtime())
endpoint_name = job_name + '-ep' + timestamp
print('Endpoint name: {}'.format(endpoint_name))
endpoint_params = {
'EndpointName': endpoint_name,
'EndpointConfigName': endpoint_config_name,
}
endpoint_response = sagemaker_client.create_endpoint(**endpoint_params)
print('EndpointArn = {}'.format(endpoint_response['EndpointArn']))
# get the status of the endpoint
response = sagemaker_client.describe_endpoint(EndpointName=endpoint_name)
status = response['EndpointStatus']
print('EndpointStatus = {}'.format(status))
# wait until the status has changed
sagemaker_client.get_waiter('endpoint_in_service').wait(EndpointName=endpoint_name)
# print the status of the endpoint
endpoint_response = sagemaker_client.describe_endpoint(EndpointName=endpoint_name)
status = endpoint_response['EndpointStatus']
print('Endpoint creation ended with EndpointStatus = {}'.format(status))
if status != 'InService':
raise Exception('Endpoint creation failed.')
with open('test_inputs/{}'.format(test_images[0]), 'rb') as f:
payload = f.read()
payload = bytearray(payload)
client = boto3.client('sagemaker-runtime')
response = client.invoke_endpoint(EndpointName=endpoint_name,
ContentType='application/x-image',
Body=payload)
# `response` comes in a json format, let's unpack it.
result = json.loads(response['Body'].read())
# The result outputs the probabilities for all classes.
# Find the class with maximum probability and print the class name.
print('Model prediction is: {}'.format(CLASS_LIST[np.argmax(result)]))
```
Finally, let's clean up and delete this endpoint.
```
sagemaker_client.delete_endpoint(EndpointName=endpoint_name)
```
# Review
We covered a lot of ground in this notebook! Let's recap what we accomplished. First we started with an unlabeled dataset (technically, the dataset was previously labeled by the authors of the dataset, but we discarded the original labels for the purposes of this demonstration). Next, we created a SageMaker Ground Truth labeling job and generated new labels for all of the images in our dataset. Then we split this file into a training set and a validation set and trained a SageMaker image classification model. Finally, we created a hosted model endpoint and used it to make a live prediction for a held-out image in the original dataset.
```
# PreSetup
!pip install numpy
!pip install networkx
!pip install matplotlib
!pip install sklearn
!pip install seaborn
!pip install pandas
!pip install nltk
!pip install wordcloud
!pip install tweepy
# Representing data
from wordcloud import WordCloud
import matplotlib.pyplot as plt
import seaborn as sns
# Models
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation as LDA
import numpy as np
# Word Tokenization
from nltk.tokenize import word_tokenize, sent_tokenize
from nltk.stem import WordNetLemmatizer
# NLTK
import nltk
# Data Representation
import pandas as pd
import string
import csv
import json
# Twitter API
from tweepy import Stream, API, OAuthHandler
from tweepy.streaming import StreamListener
# Utils
from datetime import datetime
from dateutil import parser
import warnings
import re
import os
import ssl
import time
import traceback
# Go to http://apps.twitter.com and create an app.
# The consumer key and secret will be generated for you after
consumer_key=""
consumer_secret=""
# After the step above, you will be redirected to your app's page.
# Create an access token under the the "Your access token" section
access_token=""
access_token_secret=""
# Configure the maximum number of tweets
max_number_of_tweets = 1000
# Topic Generator Settings
topics_to_generate = 20
words_per_topic = 7
# We will use this to align the tweets
date_format = '%Y-%m-%d %H:%M:%S'
# Common words and phrases to track on the Twitter stream
to_track = ["data", "artificial", "intelligence", "machine", "learning", "event", "detection", "python"]
# These are the English words that will be omitted
default_stopwords = ["i", "me", "my", "myself", "we", "our", "ours", "ourselves", "you", "your", "yours", "yourself", "yourselves", "he", "him", "his", "himself", "she", "her", "hers", "herself", "it", "its", "itself", "they", "them", "their", "theirs", "themselves", "what", "which", "who", "whom", "this", "that", "these", "those", "am", "is", "are", "was", "were", "be", "been", "being", "have", "has", "had", "having", "do", "does", "did", "doing", "a", "an", "the", "and", "but", "if", "or", "because", "as", "until", "while", "of", "at", "by", "for", "with", "about", "against", "between", "into", "through", "during", "before", "after", "above", "below", "to", "from", "up", "down", "in", "out", "on", "off", "over", "under", "again", "further", "then", "once", "here", "there", "when", "where", "why", "how", "all", "any", "both", "each", "few", "more", "most", "other", "some", "such", "no", "nor", "not", "only", "own", "same", "so", "than", "too", "very", "s", "t", "can", "will", "just", "don", "should", "now", "rt", "http", "wa", "nt", "re", "amp"]
# NLTK Data
try:
_create_unverified_https_context = ssl._create_unverified_context
except AttributeError:
pass
else:
ssl._create_default_https_context = _create_unverified_https_context
nltk.download('punkt')
nltk.download('wordnet')
class MemoryListner(StreamListener):
""" A listener handles tweets that are received from the stream.
This is a basic listener that just adds received tweets to memory.
"""
def __init__(self, maxNumberOfTweets):
self.max_tweets = maxNumberOfTweets
self.tweet_count = 0
self.tweets = []
def on_data(self, data):
# We want to get just a small number of tweets
if (self.tweet_count < self.max_tweets):
self.tweet_count += 1
self.tweets.append(data)
return True
else:
return False
def on_error(self, status):
print(status)
def get_tweets(self):
return self.tweets
def on_status(self, status):
print(status.text)
# Create a listener for the Stream
my_listner = MemoryListner(max_number_of_tweets)
# Generate an authenticator
auth = OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = API(auth)
# We will listen to the Stream only for english and
# we should add some common words to look for
my_stream = Stream(auth = api.auth, listener=my_listner)
my_stream.filter(track=to_track)
# We will use lemmatization. If you want to know more, please visit:
# https://blog.bitext.com/what-is-the-difference-between-stemming-and-lemmatization/
wordnet_lemmatizer = WordNetLemmatizer()
# Standard ED Text Cleaning
def clean_text(text, stop_words, extra_words = []):
def tokenize_text(text):
return [w for s in sent_tokenize(text) for w in word_tokenize(s)]
def remove_special_characters(text):
tokens = tokenize_text(text)
return ' '.join(re.sub('[^a-z]+', '', x) for x in tokens)
def lemma_text(text, lemmatizer=wordnet_lemmatizer):
tokens = tokenize_text(text)
return ' '.join([lemmatizer.lemmatize(t) for t in tokens])
def remove_stopwords(text, stop_words= (stop_words + extra_words)):
tokens = [w for w in tokenize_text(text) if w not in stop_words]
return ' '.join(tokens)
text = str(text).strip(' ') # strip whitespaces
text = text.lower() # lowercase
text = remove_special_characters(text) # remove punctuation and symbols
    text = lemma_text(text) # lemmatize
    text = remove_stopwords(text) # remove stopwords
    text = text.strip(' ') # strip whitespaces again
return text
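# A standalone sanity check of the cleaning steps above on a hypothetical
# tweet. This sketch uses simple whitespace tokenization instead of NLTK
# (so it runs without the 'punkt' download) and skips lemmatization.
import re

def _demo_clean(text, stop_words):
    tokens = str(text).strip().lower().split()
    tokens = [re.sub('[^a-z]+', '', t) for t in tokens]        # keep letters only
    return ' '.join(t for t in tokens if t and t not in stop_words)

print(_demo_clean("RT @user: Machine Learning is GREAT!!", ["rt", "is"]))
# user machine learning great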
# Get All tweets and prepare some list for data
data = []
all_tweets = my_listner.get_tweets()
# Parse the tweets and take only the 2 columns of interest
for tweet in all_tweets:
y = json.loads(tweet)
try:
text = clean_text(y['text'], default_stopwords)
date = parser.parse(y['created_at'])
        timestamp = date.strftime(date_format)
        data.append([text, timestamp])
except Exception:
traceback.print_exc()
# Convert to a pandas DataFrame
print('Number of parsed tweets = ' + str(len(data)))
df = pd.DataFrame(data, columns=['text', 'date'])
df.head()
def get_topics(model, count_vectorizer, n_top_words):
words = count_vectorizer.get_feature_names()
data_labels = []
for _, topic in enumerate(model.components_):
data_labels.append(" ".join([words[i] for i in topic.argsort()[:-n_top_words - 1:-1]]))
topics = pd.DataFrame(data=data_labels, columns=['topic'])
return topics
def plot_most_common_10(count_data, count_vectorizer):
words = count_vectorizer.get_feature_names()
total_counts = np.zeros(len(words))
for t in count_data:
total_counts+=t.toarray()[0]
count_dict = (zip(words, total_counts))
count_dict = sorted(count_dict, key=lambda x:x[1], reverse=True)[0:10]
words = [w[0] for w in count_dict]
counts = [w[1] for w in count_dict]
x_pos = np.arange(len(words))
figure = plt.figure(2, figsize=(15, 15/1.6180))
plt.subplot(title='10 most common words')
sns.set_context("notebook", font_scale=1.25, rc={"lines.linewidth": 2.5})
sns.barplot(x_pos, counts, palette='husl')
plt.xticks(x_pos, words, rotation=90)
plt.xlabel('words')
plt.ylabel('counts')
figure.show()
def print_cloud(given_text):
# Create a WordCloud object
word_cloud = WordCloud(background_color="white", max_words=5000, contour_width=3, contour_color='steelblue', width=1600, height=800)
# Generate a word cloud
cloud = word_cloud.generate(given_text)
figure = plt.figure(figsize=(20,10))
plt.imshow(cloud, interpolation='bilinear')
plt.axis("off")
plt.show()
figure.show()
def train_OLDA(topics_to_generate, words_per_topic):
papers = df
# Join the different processed titles together.
long_string = ','.join(list(str(x) for x in papers['text'].values))
# Initialise the count vectorizer
count_vectorizer = CountVectorizer()
# Fit and transform the processed titles
count_data = count_vectorizer.fit_transform(papers['text'].values.astype('str'))
# Visualise the 10 most common words
# Create and fit the LDA model
lda = LDA(n_components = topics_to_generate)
lda.fit(count_data)
print('OLDA is done')
# Print the topics found by the LDA model
topics = get_topics(lda, count_vectorizer, words_per_topic)
print(topics)
print_cloud(long_string)
plot_most_common_10(count_data, count_vectorizer)
return topics
olda_topics = train_OLDA(topics_to_generate, words_per_topic)
olda_topics['set'] = olda_topics['topic'].map(lambda x: set(x.split()))
def project_topics(topic_threshold):
magnitude = {}
event_start = {}
event_end = {}
fine_grained = []
    # We will go through each tweet and try to match it to a topic
for _, row in df.iterrows():
try:
rowSet = set(row['text'].split())
except:
continue
for _, topic_row in olda_topics.iterrows():
topic_set = topic_row['set']
original_topic = topic_row['topic']
if len(topic_set.intersection(rowSet)) > topic_threshold:
if original_topic not in magnitude:
magnitude[original_topic] = 0
magnitude[original_topic] += 1
given_date = row['date']
fine_grained.append([original_topic, given_date, topic_threshold])
if original_topic not in event_start:
event_start[original_topic] = given_date
if given_date < event_start[original_topic]:
event_start[original_topic] = given_date
if original_topic not in event_end:
event_end[original_topic] = given_date
if given_date > event_end[original_topic]:
event_end[original_topic] = given_date
olda_data = []
for _, row in olda_topics.iterrows():
topic = row['topic']
olda_data.append([topic, magnitude.get(topic, 0), event_start.get(topic, 'NULL'), event_end.get(topic, 'NULL'), topic_threshold])
return (olda_data, fine_grained)
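# Standalone illustration (with hypothetical words) of the matching rule
# used above: a tweet is assigned to a topic when the number of words it
# shares with the topic exceeds the threshold.
tweet_words = set("machine learning event".split())
topic_words = set("machine learning python data".split())
demo_threshold = 1
demo_matches = len(topic_words.intersection(tweet_words)) > demo_threshold
print(demo_matches)  # True: 2 shared words > threshold of 1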
(one_word_match, fine_grained_one_word_match) = project_topics(1)
(two_words_match, fine_grained_two_words_match) = project_topics(2)
(three_words_match, fine_grained_three_words_match) = project_topics(3)
data = one_word_match + two_words_match + three_words_match
fine_grained_data = fine_grained_one_word_match + fine_grained_two_words_match + fine_grained_three_words_match
olda_df = pd.DataFrame(data, columns=['Topic', 'Magnitude', 'StartDate', 'EndDate', 'MatchSize'])
print(olda_df.head())
fine_grained_df = pd.DataFrame(fine_grained_data, columns=['Topic', 'EventDate', 'MatchSize'])
print(fine_grained_df.head())
olda_df.to_csv('output.csv', index=False)
fine_grained_df.to_csv('output_detailed.csv', index=False)
```
| github_jupyter |
# CEE 4530 Prelim 2020
Name:___________________
This prelim is open book and open internet. You **are** allowed to discuss the questions and related concepts with anyone while you are taking this prelim. You are not allowed to post questions on discussion boards. If you have questions, send an email to Monroe (monroews@gmail.com). If the question is relevant for the whole class he will post it on the Canvas discussion board.
For each question make sure to use units and to give variable names that are easily understood. Use print statements to **print the answers in a sentence** (except for the multiple choice!).
Download your file as an ipynb and name it "yournamePrelim.ipynb" (10 points for this!)
Turn your prelim in to Canvas as an assignment.
Each question is worth 5 points.
```
!pip install aguaclara
import aguaclara as ac
from aguaclara.core.units import unit_registry as u
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy import stats
from scipy import optimize
```
## Design a completed mixed flow reactor dye tracer experiment.
A tank that holds 4 Liters of water is to be used as a Completely Mixed Flow Reactor with a residence time of 10 minutes. The water is pumped with a peristaltic pump that delivers 2.8 mL per revolution. The maximum concentration of red dye that the photometer can measure is 50 mg/L and thus that is the target initial dye concentration. The red dye stock concentration is 100 g/L.
1. What flow rate (in mL/s) is required?
2. What should the pump revolutions per minute be in rev/min? (Note that rev is a unit!)
3. How much of the red dye stock solution should be added to the reactor initially to achieve 50 mg/L?
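As a hedged sketch of the arithmetic behind questions 1-3 (plain numbers here; the graded answer should carry units with `aguaclara`'s unit registry):

```python
# Sketch of the arithmetic for questions 1-3 (an assumed approach).
volume_mL = 4 * 1000             # 4 L reactor
residence_time_s = 10 * 60       # 10 min residence time
flow_mL_per_s = volume_mL / residence_time_s     # Q = V / theta
rpm = flow_mL_per_s * 60 / 2.8                   # pump delivers 2.8 mL/rev
dye_mass_mg = 50 * 4                             # C_target (mg/L) * V (L)
stock_volume_mL = dye_mass_mg / 100              # stock is 100 g/L = 100 mg/mL
print(flow_mL_per_s, rpm, stock_volume_mL)
```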
## Troubleshooting
You have set up the CMFR experiment described above and you measure the peristaltic pump flow rate. You find that the volume pumped in one minute is 351 mL.
4) How much water is the pump delivering per revolution?
5) What should the pump rpm be to obtain the desired flow rate?
## CMFR in series
There are 4 CMFR in series with a total volume of 4 L and a total residence time of 10 minutes. The water is pumped with a peristaltic pump that delivers 3.8 mL per revolution. The maximum concentration of red dye that the photometer can measure is 50 mg/L and thus that is the target maximum dye concentration. The red dye stock concentration is 100 g/L.
$$E_{N\left(t^{*}\right)}=\frac{C_{N(t^{*})} \forall_{r}}{C_{t r} \forall_{t r}}=\frac{N^{N}}{(N-1) !}\left(t^{*}\right)^{N-1} e^{\left(-N t^{*}\right)}$$
where $\forall_{r}$ is the volume of one of the CMFR tanks.
6) How much tracer would you have to add so that the maximum concentration exiting the last CMFR reached exactly 50 mg/L? Hint: a numerical solution is easy!
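One hedged way to set up the numerical solution (assuming $N = 4$ tanks and $\forall_{r} = 1$ L per tank): maximize $E_{N}(t^{*})$ on a grid, then size the tracer mass so the peak outlet concentration is 50 mg/L.

```python
# Numerical sketch for question 6 (an assumed approach, not the only one).
from math import exp, factorial

N = 4
def E(t):  # E_N(t*) from the tanks-in-series formula above
    return N**N / factorial(N - 1) * t**(N - 1) * exp(-N * t)

ts = [i / 10000 for i in range(1, 50000)]      # grid on (0, 5)
E_max = max(E(t) for t in ts)                  # analytic peak is at t* = (N - 1)/N
tracer_mass_mg = 50 * 1.0 / E_max              # C_max = E_max * M / V_r, with V_r = 1 L
print(E_max, tracer_mass_mg)
```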
# Dissolved oxygen
7) What is the equilibrium concentration of dissolved oxygen at standard atmospheric pressure and $0^\circ C$?
8) The water in Fall Creek gets as cold as $0^\circ C$ in January. If we take a liter of this cold water and bring it into the laboratory where it warms up to $20^\circ C$, what mass of oxygen will come out of solution?
9) What volume of oxygen will come out of this 1 liter solution?
## Photometer calibration
The photometer that we use in lab was used to measure the absorbance of a set of standard red dye solutions.
The relationship between concentration of a dissolved species and the absorbance should follow Beer's law.
$$ A = \epsilon b C + intercept $$
where $\epsilon$ is the extinction coefficient, a property of the chemical species, and where $intercept$ is ideally zero.
The optical path length for the photometer is 19 mm. The standard concentrations and absorbance values are given below.
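As a sketch of the relationship (with made-up numbers, not the lab's calibration), inverting Beer's law recovers concentration from absorbance:

```python
# Hypothetical Beer's law illustration: A = epsilon * b * C (intercept ~ 0).
epsilon = 0.002   # L / (mg * mm) -- assumed extinction coefficient
b = 19            # mm, optical path length from the problem statement
C = 25            # mg/L, assumed concentration

A = epsilon * b * C           # predicted absorbance
C_back = A / (epsilon * b)    # invert to recover the concentration
print(A, C_back)
```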
10. Use [stats.linregress](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.linregress.html) to obtain the best fit line for the calibration with *absorbance as a function of concentration*. (No additional response needed for this number.)
11. Plot the calibration data AND the best fit line (use both slope and intercept to draw this line) showing absorbance as a function of concentration (note that ProCoDA shows this plot with the axes flipped). Make sure to label the axes and include a legend. And make sure that you have the same units on both plots!
12. Why does the absorbance not continue to increase for the highest concentration standards?
13. Eliminate the data points that do not follow Beer's law. You can simply copy the raw data and then delete the data that doesn't follow Beer's law. You don't need to write an algorithm for this step. (No additional response needed for this number.)
14. Use [stats.linregress](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.linregress.html) to obtain the best fit line for the data that is within the measurement range of the photometer. (No additional response needed for this number.)
15. What is $\epsilon$? Make sure to simplify the units to $length^2/mass$. (Make sure to attach the correct units after doing the linear regression! Note that you can now ignore the intercept because it is almost zero!)
16. Create a function in python that returns the concentration given a sample absorbance. ( No additional response needed for this number.)
17. What is the concentration of the unknown?
18. Plot the calibration data, the best fit line (use only the slope to draw this line) showing absorbance as a function of concentration, and the unknown sample. Make sure to label the axes and include a legend. And make sure that you have the same units on both plots!
```
Standard_A = np.array([0,0.4119488,0.83965677,1.26430819,1.95087259,2.32735848,2.4513617,2.45150071,2.45316238])
Standard_C = np.array([0, 10, 20, 30, 40, 50, 60, 70, 80])* u.mg/u.L
Unknown_A = 1.38759
```
| github_jupyter |
# Will It Blend? Deploying BlenderBot
First, load the model.
```
from parlai.core.agents import create_agent_from_model_file
blender_agent = create_agent_from_model_file("blender_distilled/blender_distilled.pth.checkpoint",
opt_overrides={
"model": "internal:cgen",
"skip_generation": False,
"inference": "delayedbeam",
"beam_delay": 25,
"beam_block_list_filename": "resources/banned_words.txt"
})
%load_ext autoreload
%autoreload 2
import logging
logging.basicConfig(level=logging.DEBUG)
blender_agent.opt["temperature"] = 0.7
blender_agent.opt["topp"] = 0.9
blender_agent.opt["topk"] = 5
blender_agent.opt["beam_min_length"] = 15
blender_agent.opt["beam_size"] = 10
blender_agent.opt["inference"] = "beam"
blender_agent
blender_agent.beam_block_list._phrases
```
## Example
Minimal example from a GitHub issue.
```
from itertools import zip_longest
import time
from parlai.utils.strings import normalize_reply
start = time.time()
blender_agent.reset()
history = [
"i'm bob",
"nice to meet you! what do you cook at home?",
"i love korean food",
"i'll have to try it! thanks for the recommendation, i'll try it out.",
"okay",
"have a great weekend! it was nice talking to you.",
"you too"
]
prefix = None
def process(history):
pairs = zip_longest(*[iter(history)]*2)
for user, bot in pairs:
blender_agent.observe({'text': user, 'episode_done': False})
print("Added user utterance:", user)
if bot:
blender_agent.history.add_reply(bot)
print("Added bot utterance:", bot)
response = blender_agent.act(prefix_text=prefix)
elapsed = time.time() - start
print(f"BlenderBot replied in {elapsed * 1000:.0f}ms: {normalize_reply(response['text'])}")
print("All beams:", "\n".join([t for t, s in response["beam_texts"]]))
print()
print("BlenderBot's history view:")
print(blender_agent.history)
return response
response = process(history)
response
from itertools import zip_longest
history = [
"hi, where are you from",
"i'm from the states! i'm from california originally but i live in utah now.",
"cool, i am also from utah",
"that's great, what is your favorite place?",
"um maybe taiwan?"
]
pairs = zip_longest(*[iter(history)]*2)
print(list(pairs))
from parlai.agents.transformer.transformer import TransformerGeneratorAgent
```
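The `zip_longest(*[iter(history)]*2)` idiom used above groups a flat list of utterances into consecutive (user, bot) pairs by sharing a single iterator across both zip slots; a standalone sketch:

```python
from itertools import zip_longest

history = ["u1", "b1", "u2", "b2", "u3"]  # odd length: last user turn has no reply yet
pairs = list(zip_longest(*[iter(history)] * 2))
print(pairs)  # [('u1', 'b1'), ('u2', 'b2'), ('u3', None)]
```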
| github_jupyter |
# Chapter 4 - Managing Your Data in CAS
## Getting Started with CASLibs and CAS Tables
Create a connection to CAS and list the CASLibs using the **caslibinfo** action.
```
import swat
conn = swat.CAS('server-name.mycompany.com', 5570, 'username', 'password')
conn.caslibinfo()
```
List the items at a path relative to the given CASLib using the **fileinfo** action.
```
conn.fileinfo('data', caslib='casuser')
```
Use the **fileinfo** action with the active CASLib (i.e., casuser).
```
conn.fileinfo('data')
```
## Loading Data into a CAS Table
Load data from the server-side using the **loadtable** action.
```
out = conn.loadtable('data/iris.csv', caslib='casuser')
out
```
Specify an output table name explicitly.
```
out = conn.loadtable('data/iris.csv', caslib='casuser',
casout=dict(name='mydata', caslib='casuser'))
out
```
Get information about the table using the **tableinfo** action.
```
conn.tableinfo(name='data.iris', caslib='casuser')
```
Get information about the table columns using **columninfo**.
```
conn.columninfo(table=dict(name='data.iris', caslib='casuser'))
```
## Displaying Data in a CAS Table
Use the **fetch** action to download rows of data.
```
conn.fetch(table=dict(name='data.iris', caslib='casuser'), to=5)
```
Specify sorting options to get a predictable set of data.
```
conn.fetch(table=dict(name='data.iris', caslib='casuser'), to=5,
sortby=['sepal_length', 'sepal_width'])
```
## Computing Simple Statistics
Run the **summary** action on the table.
```
conn.summary(table=dict(name='data.iris', caslib='casuser'))
```
## Dropping a CAS Table
```
conn.droptable('data.iris', caslib='casuser')
```
## The Active CASLib
Use the **caslibinfo** action to display information about CASLibs. The Active column indicates whether the CASLib is the active CASLib.
```
conn.caslibinfo()
```
You can get the active CASLib setting using the **getsessopt** action.
```
conn.getsessopt('caslib')
```
You can set the active CASLib using the **setsessopt** action.
```
conn.setsessopt(caslib='otherlib')
# NOTE: 'otherlib' is now the active caslib.
# Out[39]: + Elapsed: 0.000289s, mem: 0.0948mb
```
## Uploading Data Files to CAS Tables
Use the **upload** method on **CAS** connection objects to upload data from client-side files. This uploads the file to the server as-is. It is then parsed on the server.
```
conn.upload('/u/username/data/iris.csv')
conn.columninfo(table=dict(name='iris', caslib='casuser'))
```
Specify an explicit table name on **upload**.
```
conn.upload('/u/username/data/iris.csv', casout=dict(name='iris2', caslib='casuser'))
```
The **upload** method will pass a given **importoptions=** parameter to the underlying **loadtable** action.
```
out = conn.upload('/u/username/data/iris.tsv',
importoptions=dict(filetype='csv', delimiter='\t'),
casout=dict(name='iris_tsv', caslib='casuser'))
out
```
## Uploading Data from URLs to CAS Tables
Rather than specifying a filename, you can specify a URL.
```
conn.upload('https://github.com/sassoftware/'
'sas-viya-programming/blob/master/data/class.csv')
```
## Uploading Data from a Pandas DataFrame to a CAS Table
In addition to files, you can upload Pandas **DataFrames**. Note, however, that the **DataFrame** will be exported to a CSV file, then uploaded.
```
import pandas as pd
df = pd.read_csv('/u/username/data/iris.csv')
df.head()
conn.upload(df, casout=dict(name='iris_df', caslib='casuser'))
conn.fetch(table=dict(name='iris_df', caslib='casuser'), to=5)
```
# Using Data Message Handlers
Data message handlers allow you to write custom data loaders.
```
from swat.cas import datamsghandlers as dmh
```
Display all of the SWAT-supplied data message handler subclasses.
```
dmh.CASDataMsgHandler.__subclasses__()
dmh.PandasDataFrame.__subclasses__()
```
## The HTML Data Message Handler
```
htmldmh = dmh.HTML('https://www.fdic.gov/bank/' +
'individual/failed/banklist.html', index=0)
```
Display the **addtable** parameters created by the HTML data message handler.
```
htmldmh.args.addtable
```
Call the **addtable** action using the generated parameters.
```
out = conn.addtable(table='banklist', caslib='casuser',
**htmldmh.args.addtable)
out
conn.columninfo(table=dict(name='banklist', caslib='casuser'))
```
Parse the dates in columns 5 and 6. Use the **replace=** option in the new **addtable** call to replace the existing table.
```
htmldmh = dmh.HTML('https://www.fdic.gov/bank/' +
'individual/failed/banklist.html',
index=0, parse_dates=[5, 6])
out = conn.addtable(table='banklist', caslib='casuser',
replace=True,
**htmldmh.args.addtable)
conn.columninfo(table=dict(name='banklist',
caslib='casuser'))
```
Fetch a few rows of the data using **sastypes=False** so that we get actual dates in the resulting **DataFrame**.
```
conn.fetch(table=dict(name='banklist', caslib='casuser'),
sastypes=False, to=3)
```
## The Excel Data Message Handler
```
exceldmh = dmh.Excel('http://www.fsa.usda.gov/Internet/' +
'FSA_File/disaster_cty_list_ytd_14.xls')
```
Add the data to the server.
```
out = conn.addtable(table='crops', caslib='casuser',
**exceldmh.args.addtable)
out
conn.columninfo(table=dict(name='crops', caslib='casuser'))
```
## The PandasDataFrame Data Message Handler
```
import pandas as pd
```
Read the Excel file into a Pandas **DataFrame**.
```
exceldf = pd.read_excel('http://www.fsa.usda.gov/Internet/' +
'FSA_File/disaster_cty_list_ytd_14.xls')
```
Create a **PandasDataFrame** data message handler.
```
exceldmh = dmh.PandasDataFrame(exceldf)
```
Add the table to the server.
```
out = conn.addtable(table='dfcrops', caslib='casuser',
**exceldmh.args.addtable)
out
```
## Using Data Message Handlers with Databases
```
import csv
import sqlite3
```
Create a SQLite database file.
```
sqlc = sqlite3.connect('iris.db')
cur = sqlc.cursor()
```
Define the table.
```
cur.execute('''CREATE TABLE iris (sepal_length REAL,
sepal_width REAL,
petal_length REAL,
petal_width REAL,
species CHAR(10));''')
```
Parse the iris CSV file and format it as tuples.
```
with open('/u/username/data/iris.csv', 'r') as iris:
data = csv.DictReader(iris)
rows = [(x['sepal_length'],
x['sepal_width'],
x['petal_length'],
x['petal_width'],
x['species']) for x in data]
```
Load the data into the database.
```
cur.executemany('''INSERT INTO iris (sepal_length,
sepal_width,
petal_length,
petal_width,
species)
VALUES (?, ?, ?, ?, ?);''', rows)
sqlc.commit()
```
Verify that the data looks correct.
```
cur.execute('SELECT * from iris')
cur.fetchmany(5)
```
Create an SQLAlchemy database engine.
```
eng = dmh.SQLTable.create_engine('sqlite:///iris.db')
```
Create the data message handler.
```
sqldmh = dmh.SQLTable('iris', eng)
```
Load the database into CAS.
```
out = conn.addtable(table='iris_sql', caslib='casuser',
**sqldmh.args.addtable)
out
```
Check the data in the server.
```
conn.columninfo(table=dict(name='iris_sql', caslib='casuser'))
conn.fetch(table=dict(name='iris_sql', caslib='casuser'), to=5)
```
Set up a query to use with the data message handler.
```
sqldmh = dmh.SQLQuery('''SELECT * FROM iris
WHERE species = "versicolor"
AND sepal_length > 6.6''', eng)
```
Load the result of the query into CAS.
```
out = conn.addtable(table='iris_sql2', caslib='casuser',
**sqldmh.args.addtable)
out
```
Check the data on the server.
```
conn.fetch(table=dict(name='iris_sql2', caslib='casuser'))
```
### Streaming Data from a Database into a CAS Table
```
import sqlite3
sqlc = sqlite3.connect('iris.db')
c = sqlc.cursor()
```
Execute a query on the database.
```
c.execute('SELECT * FROM iris')
```
Create a **DBAPI** data message handler to stream the data.
```
dbdmh = dmh.DBAPI(sqlite3, c, nrecs=10)
```
Run the **addtable** action.
```
conn.addtable(table='iris_db', caslib='casuser',
**dbdmh.args.addtable)
```
Verify the data on the server.
```
conn.columninfo(table=dict(name='iris_db', caslib='casuser'))
conn.fetch(table=dict(name='iris_db', caslib='casuser'), to=5)
```
# Writing Your Own Data Message Handlers
Create a data message handler that subclasses from **CASDataMsgHandler**.
```
class MyDMH(dmh.CASDataMsgHandler):
def __init__(self):
self.data = [
('Alfred', 'M', 14, 69, 112.5),
('Alice', 'F', 13, 56.5, 84),
('Barbara', 'F', 13, 65.3, 98),
('Carol', 'F', 14, 62.8, 102.5),
('Henry', 'M', 14, 63.5, 102.5),
]
vars = [
dict(name='name', label='Name', type='varchar'),
dict(name='sex', label='Sex', type='varchar'),
dict(name='age', label='Age', type='int32'),
dict(name='height', label='Height', type='double'),
dict(name='weight', label='Weight', type='double'),
]
super(MyDMH, self).__init__(vars)
def getrow(self, row):
try:
return self.data[row]
except IndexError:
return
```
Create an instance of the data message handler.
```
mydmh = MyDMH()
```
Call the **addtable** action.
```
conn.addtable(table='myclass', caslib='casuser',
**mydmh.args.addtable)
```
Verify the data on the server.
```
conn.columninfo(table=dict(name='myclass', caslib='casuser'))
conn.fetch(table=dict(name='myclass', caslib='casuser'), to=5)
```
## Adding Data Transformers
Add a date column to the data message handler.
```
class MyDMH(dmh.CASDataMsgHandler):
def __init__(self):
self.data = [
('Alfred', 'M', 14, 69, 112.5, '1987-03-01'),
('Alice', 'F', 13, 56.5, 84, '1988-06-12'),
('Barbara', 'F', 13, 65.3, 98, '1988-12-13'),
('Carol', 'F', 14, 62.8, 102.5, '1987-04-17'),
('Henry', 'M', 14, 63.5, 102.5, '1987-01-30'),
]
vars = [
dict(name='name', label='Name', type='varchar'),
dict(name='sex', label='Sex', type='varchar'),
dict(name='age', label='Age', type='int32'),
dict(name='height', label='Height', type='double'),
dict(name='weight', label='Weight', type='double'),
dict(name='birthdate', label='Birth Date',
type='date', format='DATE', formattedlength=12),
]
super(MyDMH, self).__init__(vars)
def getrow(self, row):
try:
return self.data[row]
except IndexError:
return
```
Take that same data message handler and add a **transformers=** parameter with a function that converts the date strings to CAS dates.
```
class MyDMH(dmh.CASDataMsgHandler):
def __init__(self):
self.data = [
('Alfred', 'M', 14, 69, 112.5, '1987-03-01'),
('Alice', 'F', 13, 56.5, 84, '1988-06-12'),
('Barbara', 'F', 13, 65.3, 98, '1988-12-13'),
('Carol', 'F', 14, 62.8, 102.5, '1987-04-17'),
('Henry', 'M', 14, 63.5, 102.5, '1987-01-30'),
]
vars = [
dict(name='name', label='Name', type='varchar'),
dict(name='sex', label='Sex', type='varchar'),
dict(name='age', label='Age', type='int32'),
dict(name='height', label='Height', type='double'),
dict(name='weight', label='Weight', type='double'),
dict(name='birthdate', label='Birth Date',
type='date', format='DATE', formattedlength=12),
]
transformers = {
'birthdate': dmh.str2cas_date,
}
super(MyDMH, self).__init__(vars, transformers=transformers)
def getrow(self, row):
try:
return self.data[row]
except IndexError:
return
```
Create an instance of the data message handler.
```
mydmh = MyDMH()
```
Run the **addtable** action.
```
conn.addtable(table='myclass', caslib='casuser', replace=True,
**mydmh.args.addtable)
```
Verify the data on the server.
```
conn.columninfo(table=dict(name='myclass', caslib='casuser'))
conn.fetch(table=dict(name='myclass', caslib='casuser'), sastypes=False)
```
# Managing CASLibs
## Creating a CASLib
Create a new filesystem-based CASLib.
```
conn.addcaslib(path='/research/data',
caslib='research',
description='Research Data',
subdirs=False,
session=False,
activeonadd=False)
```
## Setting an Active CASLib
The active CASLib for a session can be set using the **setsessopt** action.
```
conn.setsessopt(caslib='research')
conn.caslibinfo(caslib='research')
```
## Dropping a CASLib
```
conn.dropcaslib('research')
conn.close()
```
| github_jupyter |
```
import numpy as np
import matplotlib.pyplot as plt
from src.c_country import C_Country
from utils.graph_generator import get_path
from utils.dotdict import dotdict
from utils.params import init_graph, get_centrum
from utils.params import GIRG_args1,GIRG_args2,GIRG_args3,GIRG_args4, get_moving
import seaborn as sn
import matplotlib.pyplot as plt
sn.set_theme(style="whitegrid")
import os
import pickle
args = {
"--p_moving": 0.015,
"--p_worker": 1.0,
"--beta": 0.5,
"--beta_super":0.0,
"--seed": 0,
"--sigma": 1.0,
"--gamma": 0.2,
"--max_sim": 500,
"inf_agent_num":1000,
}
import pandas as pd
def save_to_file(betas, agg1, agg2, seed, filename):
df = pd.DataFrame()
for beta,sims in zip(betas,agg1):
row = {"sim"+str(i):v for i,v in enumerate(sims)}
row["beta"]=beta
row["cen"]=True
df = df.append(row, ignore_index=True)
for beta,sims in zip(betas,agg2):
row = {"sim"+str(i):v for i,v in enumerate(sims)}
row["beta"]=beta
row["cen"]=False
df=df.append(row, ignore_index=True)
df.to_csv(filename)
for seed in range(0,2):
for config in [True, False]:
print("Config" if config else "GIRG")
for GIRG_args in [GIRG_args1 ,GIRG_args2, GIRG_args3, GIRG_args4]:
GIRG_args["config_model"]=config
GIRG_args["random_seed"]=seed
graph = init_graph(GIRG_args)
hun = C_Country(graph)
pm = get_moving(graph, procent = 0.001)
print(pm)
```
# GIRG
```
betas = np.concatenate(
(np.linspace(args["--gamma"], 1.3*args["--gamma"], 12),
np.linspace(1.3*args["--gamma"], 2.0*args["--gamma"], 15)[1:])
)
betas
args["procnum"] = 30
args["simnum"] = 10
random_seed_num = 5
inf_city = 30
%time
log_folder = "girgs_final"
if(not os.path.exists(log_folder)):
os.makedirs(log_folder)
for seed in range(0, random_seed_num):
print(f"\rRandom seed: {seed}/{random_seed_num}")
for config in [True, False]:
for GIRG_args in [GIRG_args1 ,GIRG_args2, GIRG_args3, GIRG_args4]:
GIRG_args["config_model"]=config
GIRG_args["random_seed"]=seed
graph = init_graph(GIRG_args)
hun = C_Country(graph)
pm = get_moving(graph, procent = 0.001)
args["--p_moving"]=pm
centrum = get_centrum(graph, "k-core", inf_city)
agg1 = hun.run_for_betas_simple_raw(args, centrum, betas, inf_city, "uniform_random")
periphery = [n for n in graph.nodes() if n not in centrum]
agg2 = hun.run_for_betas_simple_raw(args, periphery, betas, inf_city, "uniform_random")
graph_name = "Config" if config else "GIRG"
tau = GIRG_args["tau"]
alpha = GIRG_args["alpha"]
filename = "{}/{}_tau:{}_alpha:{}_seed:{}.csv".format(
log_folder, graph_name, tau, alpha,seed)
save_to_file(betas, agg1, agg2, seed, filename)
def save_file(df, file):
df_cen = df[df["cen"]==True].drop(['beta', 'cen'], axis=1).mean(1)
df_per = df[df["cen"]==False].drop(['beta', 'cen'], axis=1).mean(1)
betas = df[df["cen"]==True]["beta"]
df_agg = pd.DataFrame()
df_agg["betas"]=df[df["cen"]==True]["beta"]
df_agg["mean1"]=df_cen
df_agg["std1"]=df[df["cen"]==True].drop(['beta', 'cen'], axis=1).std(1)
df_agg["mean2"]=np.array(df_per)
df_agg["std2"]=np.array(df[df["cen"]==False].drop(['beta', 'cen'], axis=1).std(1))
df_agg["ratio"]=df_agg["mean1"]/df_agg["mean2"]
A = 1.645/np.sqrt(df.drop(['beta', 'cen'], axis=1).shape[1])
df_agg["conf1_lower"] = df_agg["mean1"]-df_agg["std1"]*A
df_agg["conf1_upper"] = df_agg["mean1"]+df_agg["std1"]*A
df_agg["conf2_lower"] = df_agg["mean2"]-df_agg["std2"]*A
df_agg["conf2_upper"] = df_agg["mean2"]+df_agg["std2"]*A
df_agg["lower"] = df_agg["conf1_lower"]/df_agg["conf2_upper"]
df_agg["upper"] = df_agg["conf1_upper"]/df_agg["conf2_lower"]
df_agg.to_csv("girgs_final/aggregation/"+file+"_agg.csv")
return df_agg
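# Standalone sketch of the 90% confidence band computed in save_file above:
# mean +/- 1.645 * std / sqrt(n), using the sample std (ddof=1) that
# pandas' DataFrame.std applies by default; z_0.95 = 1.645.
from math import sqrt
from statistics import mean, stdev

demo_samples = [1.0, 1.2, 0.9, 1.1, 1.0]   # hypothetical ratio simulations
half_width = 1.645 * stdev(demo_samples) / sqrt(len(demo_samples))
band = (mean(demo_samples) - half_width, mean(demo_samples) + half_width)
print(band)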
os.mkdir("girgs_final/aggregation")
for files in [["GIRG_tau:2.5_alpha:2.3","GIRG_tau:3_alpha:1.3",
"GIRG_tau:3.5_alpha:1.3","GIRG_tau:3.5_alpha:2.3"],
["Config_tau:2.5_alpha:2.3","Config_tau:3_alpha:1.3",
"Config_tau:3.5_alpha:1.3","Config_tau:3.5_alpha:2.3"]]:
plt.figure(figsize=(10,8))
for file in files:
df = pd.read_csv("girgs_final/{}_seed:{}.csv".format(file, 0))
for seed in range(1,random_seed_num): # seed 0 is already loaded above
df_temp = pd.read_csv("girgs_final/{}_seed:{}.csv".format(file, seed))
df = pd.merge(df,df_temp, how='inner', left_on=['beta','cen'], right_on=['beta','cen'])
df["mean"] = df.drop(['beta', 'cen'], axis=1).mean(1)
df["std"] = df.drop(['beta', 'cen'], axis=1).std(1)
df.to_csv("girgs_final/aggregation/"+file+"_all.csv")
df_agg = save_file(df, file)
plt.plot(betas/args["--gamma"], df_agg["ratio"].values, label=file)
plt.fill_between(betas/args["--gamma"], df_agg["lower"].values, df_agg["upper"].values,
alpha=0.2)
plt.xlabel("R_0")
plt.legend()
plt.show()
```
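For reference, the confidence band built in the aggregation above combines per-run means and standard deviations with the factor A = 1.645/√n (a ~90% normal interval) and divides opposite interval bounds to bracket the ratio of the two group means. A minimal self-contained sketch of that computation (the data and column layout here are illustrative):

```python
import numpy as np
import pandas as pd

# Illustrative data: 3 runs (columns) at 2 beta values for each seeding strategy
rng = np.random.default_rng(0)
runs_cen = pd.DataFrame(rng.normal(10, 1, size=(2, 3)))  # "central" seeding
runs_per = pd.DataFrame(rng.normal(5, 1, size=(2, 3)))   # "periphery" seeding

A = 1.645 / np.sqrt(runs_cen.shape[1])  # ~90% normal CI half-width factor

mean1, std1 = runs_cen.mean(1), runs_cen.std(1)
mean2, std2 = runs_per.mean(1), runs_per.std(1)

ratio = mean1 / mean2
# Conservative band for the ratio: divide opposite interval bounds
lower = (mean1 - A * std1) / (mean2 + A * std2)
upper = (mean1 + A * std1) / (mean2 - A * std2)
```

Dividing lower/upper against opposite bounds gives a band that always contains the point ratio, which is what `conf1_lower/conf2_upper` and `conf1_upper/conf2_lower` compute above.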
```
!pip install --upgrade --no-deps --force-reinstall -q git+https://github.com/Pehlevan-Group/finite-width-bayesian
!pip install neural_tangents
import numpy as np
import pickle
import matplotlib.pyplot as plt
import neural_tangents as nt
from neural_tangents import stax
from langevin import model
from langevin.utils import convert_nt, curr_time
import langevin.theory as theory
import langevin.optimizer as opt
import langevin.dataset as ds
import jax
import jax.numpy as jnp
from jax import random
from jax import jit, grad, vmap
from jax.config import config
config.update("jax_enable_x64", True)
key = random.PRNGKey(1)
from functools import partial
from skimage.transform import resize
import pytz
from datetime import datetime
from dateutil.relativedelta import relativedelta
def time_diff(t_start):
t_end = datetime.now(pytz.timezone('US/Eastern'))
t_diff = relativedelta(t_end, t_start) # later/end time comes first!
return '{h}h {m}m {s}s'.format(h=t_diff.hours, m=t_diff.minutes, s=t_diff.seconds)
model_type = 'fnn'
opt_mode = 'sgld'
nonlin = 'relu'
dataset_name = 'mnist'
resized = 10 ## Resize the images to 10 x 10 pixels
N_tr = 1000
x_train, y_train = ds.dataset(N_tr, dataset_name, model_type, resized)
print(x_train.shape)
## For bottleneck experiments
no_bottleneck_widths = [[100,100,100],[200,200,200],[300,300,300],[400,400,400],[500,500,500],[600,600,600]]
bottleneck_widths = [[100,50,100],[200,50,200],[300,50,300],[400,50,400],[500,50,500],[600,50,600]]
exp_type = 0 # set it to 1 for bottleneck experiments
if exp_type == 0:
hidden_widths = no_bottleneck_widths
else:
hidden_widths = bottleneck_widths
beta = 1
batch_size = N_tr
step_size = min(1/N_tr, 1e-4)
batch_factor = N_tr//batch_size
nT = 5000000
burn_in = nT//3
K_avgs = []
K_nngps = []
Kernel_Fns = []
## Compute the theory
K_theories = []
for hidden_width in hidden_widths:
print(model_type, ' | ', hidden_width)
## Create the model layers
layers, layers_ker = model.model(hidden_width, nonlin=nonlin, model_type=model_type)
## Create the model functions for each layer
layer_fns = []
kernel_fns = []
emp_kernel_fns = []
for i, layer in enumerate(layers):
init_fn, apply_fn, kernel_fn = stax.serial(*(layers[:i+1]))
layer_fns += [jit(apply_fn)]
kernel_fns += [jit(kernel_fn)]
emp_kernel_fns += [jit(partial(nt.empirical_nngp_fn(layer_fns[i]), x_train, None))]
init_fn, apply_fn, kernel_fn = stax.serial(*layers)
apply_fn = jit(apply_fn)
kernel_fn = jit(kernel_fn)
## Initialize the model
_, params = init_fn(key, input_shape=x_train.shape)
## Set Optimizer
opt_init, opt_update, get_params = opt.sgld(step_size, beta, batch_factor)
opt_state = opt_init(params)
## Set Loss Function and its grad
loss_fn = jit(lambda params: jnp.sum((apply_fn(params,x_train)-y_train)**2)/2)
g_loss = jit(grad(loss_fn))
avg_count = 0
K_avg = []
t_start = datetime.now(pytz.timezone('US/Eastern'))
for j in range(nT):
_,key = random.split(key)
opt_params = get_params(opt_state)
opt_state = opt_update(j, g_loss(opt_params), opt_state)  # step counter is j; i is reused by the layer loop
if j > burn_in:
avg_count += 1
for i, lay_idx in enumerate(layers_ker):
params = opt_params[:lay_idx+1]
if j == burn_in + 1:
#K_avg += [nt.empirical_nngp_fn(layer_fns[i])(x_train,None,params)]
K_avg += [emp_kernel_fns[lay_idx](params)]
else:
#K_avg[i] += nt.empirical_nngp_fn(layer_fns[i])(x_train,None,params)
K_avg[i] += emp_kernel_fns[lay_idx](params)
if j % 1000 == 0:
print('%d | loss: %f | avg_count: %d | time: %s'%(j, loss_fn(opt_params), avg_count, time_diff(t_start)), flush=True)
kernel_fns_relu = []
K_nngp = []
for lay_idx in layers_ker:
kernel_fns_relu += [kernel_fns[lay_idx]]
K_nngp += [kernel_fns[lay_idx](x_train).nngp]
K_avgs += [K_avg]
K_nngps += [K_nngp]
## Compute the theory predictions
if model_type == 'fnn':
_, K_theory, Gxx, Gyy = theory.theory_linear(x_train, y_train, beta, kernel_fns, hidden_width)
K_theories += [K_theory]
with open('data_%s_%d_%s_%s_%s_nT_%d.pkl'%(str(hidden_width), N_tr, model_type, opt_mode, nonlin, nT), 'wb') as outfile:
pickle.dump({'K_avg': K_avg, 'K_nngp': K_nngp, 'K_theory': K_theory, 'burn_in': burn_in,
'model_type': model_type, 'hidden_widths': hidden_widths, 'N_tr': N_tr,
'nT': nT, 'beta': beta, 'batch_size': batch_size, 'step_size': step_size,
'avg_count': avg_count, 'opt_mode': opt_mode}, outfile, pickle.HIGHEST_PROTOCOL)
plt.scatter((K_avg[0]/avg_count-Gxx).reshape(-1)[:], (K_theory[0]-Gxx).reshape(-1)[:], label='Width: %d'%hidden_width[0])
plt.savefig('k-nngp_%s_fnn_%s.jpg'%(str(hidden_width), opt_mode))
plt.show()
plt.scatter((K_avg[0]/avg_count).reshape(-1)[:], (K_theory[0]).reshape(-1)[:], label='Width: %d'%hidden_width[0])
plt.savefig('k_vs_nngp_%s_fnn_%s.jpg'%(str(hidden_width), opt_mode))
plt.show()
with open('data_%d_%s_%s.pkl'%(N_tr, model_type, opt_mode), 'wb') as outfile:
pickle.dump({'K_avgs': K_avgs, 'K_nngps': K_nngps, 'K_theories': K_theories, 'nonlin': nonlin,
'model_type': model_type, 'hidden_widths': hidden_widths, 'N_tr': N_tr,
'nT': nT, 'beta': beta, 'batch_size': batch_size, 'step_size': step_size,
'avg_count': avg_count, 'opt_mode': opt_mode}, outfile, pickle.HIGHEST_PROTOCOL)
depths = jnp.arange(len(K_avgs[0]))
deviation = []
deviation_th = []
for i, hidden_width in enumerate(hidden_widths):
width = hidden_width[0]
K_exp = K_avgs[i]
K_nngp = K_nngps[i]
deviation += [[jnp.linalg.norm(K/avg_count - K_t)**2 for K, K_t in zip(K_exp, K_nngp)]]
if model_type == 'fnn':
K_theory = K_theories[i]
deviation_th += [[jnp.linalg.norm(K - K_t)**2 for K, K_t in zip(K_theory, K_nngp)]]
deviation = np.array(deviation)
print(deviation.shape)
plt.loglog([width[0] for width in hidden_widths], deviation[:,:-1], 'o')
if model_type == 'fnn':
deviation_th = np.array(deviation_th)
plt.loglog([width[0] for width in hidden_widths], deviation_th,'k--')
plt.savefig('one_over_width_%s_%s.png'%(model_type, opt_mode))
plt.close()
for i, hidden_width in enumerate(hidden_widths):
plt.scatter((K_avgs[i][0]/avg_count-Gxx).reshape(-1)[:], (K_theories[i][0]-Gxx).reshape(-1)[:], label='Width: %d'%hidden_width[0])
#plt.legend()
plt.savefig('k-nngp_fnn_%s.jpg'%opt_mode)
plt.close()
for i, hidden_width in enumerate(hidden_widths):
plt.scatter((K_avgs[i][0]/avg_count).reshape(-1)[:], (K_theories[i][0]).reshape(-1)[:], label='Width: %d'%hidden_width[0])
plt.savefig('k_vs_nngp_fnn_%s.jpg'%opt_mode)
plt.close()
```
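The `opt.sgld` optimizer used above comes from the repository's `langevin` package and is not shown here; the standard SGLD update it presumably implements is θ ← θ − η∇L(θ) + √(2η/β)·ξ with ξ ~ N(0, I). A minimal NumPy sketch of that update rule on a toy quadratic potential (everything below is illustrative, not the repo's API):

```python
import numpy as np

def sgld_step(theta, grad, step_size, beta, rng):
    """One SGLD step: gradient descent plus Gaussian noise at temperature 1/beta."""
    noise = rng.normal(size=theta.shape)
    return theta - step_size * grad + np.sqrt(2.0 * step_size / beta) * noise

# Sample from exp(-beta * L) with L(x) = x^2 / 2, whose gradient is simply x
rng = np.random.default_rng(0)
theta = np.zeros(1)
samples = []
for t in range(20000):
    theta = sgld_step(theta, theta, step_size=0.01, beta=1.0, rng=rng)
    if t > 5000:  # discard burn-in, as the training loop above does
        samples.append(theta[0])

print(np.var(samples))  # should be close to the stationary variance 1/beta = 1
```

For this potential the chain's stationary distribution is N(0, 1/β), so the empirical variance after burn-in is a quick correctness check on the sampler.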
```
final_result = []
content = []
#format = [loss,optimizer,lstm_len,dropoff,train_accuracy,validation_accuracy,test_accuracy]
with open('final_cnn.txt', 'rb') as file:
content = file.readlines()
content = [str(x.decode(encoding='utf8')).strip() for x in content]
current_result = ['loss','optimizer','cnn_len','dropout', 'test_accuracy','male_recall','female_recall','male_precision','female_precision','male_true','female_true','male_predict','female_predict']
next_look = False
for line in content:
if 'Training new model' in line:
final_result.append(current_result)
current_result = []
t = line.split(',')
l = t[1].split(':')[1]
o = t[2].split('=')[1]
llen = t[3].split('=')[1]
d = t[4].split('=')[1]
current_result.extend([l,o,llen,d])
if len(line) > 0:
if line[0] == '[':
current_result.append(line.split(',')[1][1:-1])
if 'recall' in line:
t = line.split(',')
current_result.extend([t[0].split('[')[1],t[1].split(']')[0]])
if 'precision' in line:
t = line.split(',')
current_result.extend([t[0].split('[')[1],t[1].split(']')[0]])
if 'true' in line:
t = line.split(',')
current_result.extend([t[0].split('[')[1],t[1].split(']')[0]])
if 'predict' in line:
t = line.split(',')
current_result.extend([t[0].split('[')[1],t[1].split(']')[0]])
final_result.append(current_result)
with open('cnn_final.csv', 'wb') as file:
for res in final_result:
line = ''
for it in res:
line = line + it +","
line = line[:-1]
line = line +'\n'
file.write(line.encode('utf-8'))
final_result = []
content = []
#format = [loss,optimizer,lstm_len,dropoff,train_accuracy,validation_accuracy,test_accuracy]
with open('final_lstm.txt', 'rb') as file:
content = file.readlines()
content = [str(x.decode(encoding='utf8')).strip() for x in content]
current_result = ['loss','optimizer','lstm_len','dropout', 'test_accuracy','male_recall','female_recall','male_precision','female_precision','male_true','female_true','male_predict','female_predict']
next_look = False
for line in content:
if 'Training new model' in line:
final_result.append(current_result)
current_result = []
t = line.split(',')
l = t[1].split(':')[1]
o = t[2].split('=')[1]
llen = t[3].split('=')[1]
d = t[4].split('=')[1]
current_result.extend([l,o,llen,d])
if len(line) > 0:
if line[0] == '[':
current_result.append(line.split(',')[1][1:-1])
if 'recall' in line:
t = line.split(',')
current_result.extend([t[0].split('[')[1],t[1].split(']')[0]])
if 'precision' in line:
t = line.split(',')
current_result.extend([t[0].split('[')[1],t[1].split(']')[0]])
if 'true' in line:
t = line.split(',')
current_result.extend([t[0].split('[')[1],t[1].split(']')[0]])
if 'predict' in line:
t = line.split(',')
current_result.extend([t[0].split('[')[1],t[1].split(']')[0]])
final_result.append(current_result)
with open('lstm_final.csv', 'wb') as file:
for res in final_result:
line = ''
for it in res:
line = line + it +","
line = line[:-1]
line = line +'\n'
file.write(line.encode('utf-8'))
final_result = []
content = []
#format = [loss,optimizer,lstm_len,dropoff,train_accuracy,validation_accuracy,test_accuracy]
with open('final_bilstm.txt', 'rb') as file:
content = file.readlines()
content = [str(x.decode(encoding='utf8')).strip() for x in content]
current_result = ['loss','optimizer','lstm_len','dropout', 'test_accuracy','male_recall','female_recall','male_precision','female_precision','male_true','female_true','male_predict','female_predict']
next_look = False
for line in content:
if 'Training new model' in line:
final_result.append(current_result)
current_result = []
t = line.split(',')
l = t[1].split(':')[1]
o = t[2].split('=')[1]
llen = t[3].split('=')[1]
d = t[4].split('=')[1]
current_result.extend([l,o,llen,d])
if len(line) > 0:
if line[0] == '[':
current_result.append(line.split(',')[1][1:-1])
if 'recall' in line:
t = line.split(',')
current_result.extend([t[0].split('[')[1],t[1].split(']')[0]])
if 'precision' in line:
t = line.split(',')
current_result.extend([t[0].split('[')[1],t[1].split(']')[0]])
if 'true' in line:
t = line.split(',')
current_result.extend([t[0].split('[')[1],t[1].split(']')[0]])
if 'predict' in line:
t = line.split(',')
current_result.extend([t[0].split('[')[1],t[1].split(']')[0]])
final_result.append(current_result)
with open('bilstm_final.csv', 'wb') as file:
for res in final_result:
line = ''
for it in res:
line = line + it +","
line = line[:-1]
line = line +'\n'
file.write(line.encode('utf-8'))
```
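The three cells above repeat the same parsing logic verbatim, differing only in the input/output filenames and the length label; a parameterized sketch of the same logic (a hypothetical helper mirroring the per-model cells, using `csv` for the output):

```python
import csv

def parse_log(in_path, out_path, len_label):
    """Parse one training log into a CSV of per-run metrics (same logic as the cells above)."""
    header = ['loss', 'optimizer', len_label, 'dropout', 'test_accuracy',
              'male_recall', 'female_recall', 'male_precision', 'female_precision',
              'male_true', 'female_true', 'male_predict', 'female_predict']
    results, current = [], header
    with open(in_path, 'rb') as f:
        lines = [line.decode('utf8').strip() for line in f]
    for line in lines:
        if 'Training new model' in line:
            results.append(current)
            t = line.split(',')
            current = [t[1].split(':')[1], t[2].split('=')[1],
                       t[3].split('=')[1], t[4].split('=')[1]]
        if line.startswith('['):
            current.append(line.split(',')[1][1:-1])
        if any(key in line for key in ('recall', 'precision', 'true', 'predict')):
            t = line.split(',')
            current.extend([t[0].split('[')[1], t[1].split(']')[0]])
    results.append(current)
    with open(out_path, 'w', newline='') as f:
        csv.writer(f).writerows(results)

# Usage mirroring the three cells (the bilstm cell also used the 'lstm_len' label):
# parse_log('final_cnn.txt', 'cnn_final.csv', 'cnn_len')
# parse_log('final_lstm.txt', 'lstm_final.csv', 'lstm_len')
# parse_log('final_bilstm.txt', 'bilstm_final.csv', 'lstm_len')
```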
```
%pylab inline
import numpy as np
import pandas as pd
import math
import tqdm
from sklearn.experimental import enable_iterative_imputer
from sklearn.impute import IterativeImputer
from sklearn.ensemble import IsolationForest
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import StandardScaler
from datetime import datetime
from costum_utils import future_checker
```
## 1. Data import
Let us start by importing the data obtained by querying the eICU database
```
id_patients = pd.read_csv('./eICU/id_patients.csv',sep=',',index_col=0)
lab_results = pd.read_csv('./eICU/lab.csv',sep=',',index_col=0)
transfusion = pd.read_csv('./eICU/transfusion.csv',sep=',',index_col=0)
patient_info = pd.read_csv('./eICU/patient_info.csv',sep=',',index_col=0)
vital_periodic = pd.read_csv('./eICU/vital_periodic.csv',sep=',')
vital_aperiodic = pd.read_csv('./eICU/vital_aperiodic.csv',sep=',')
crystalloid = pd.read_csv('./eICU/crystalloids.csv',sep=',')
```
We only keep up to the first 24 hours of the ICU stay
```
vital_aperiodic=vital_aperiodic[vital_aperiodic.time<=24]
vital_periodic=vital_periodic[vital_periodic.time<=24]
crystalloid = crystalloid[crystalloid.time<=24]
```
We also remove readmissions
```
patient_info = patient_info[patient_info.unitstaytype!='readmit']
patient_info = patient_info[['gender','age']]
```
Something about the crystalloids that I don't understand
```
crystalloid = crystalloid.drop('cellpath',axis = 1)
crystalloid.columns= ['patientunitstayid','time','crystalloid']
```
Collapse multiple transfusions within an hour into a unique observation
```
transfusion.groupby(transfusion.index).sum()['amount'].value_counts()/transfusion.groupby(transfusion.index).sum().shape[0]
```
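Note that the cell above only inspects the distribution of the summed amounts per stay; the collapsing itself is a groupby-sum over patient and hour. A minimal sketch with illustrative column names:

```python
import pandas as pd

# Illustrative transfusion records: two entries in the same hour for one patient
transfusion = pd.DataFrame(
    {"time": [2, 2, 5], "amount": [150.0, 100.0, 300.0]},
    index=pd.Index([1, 1, 1], name="patientunitstayid"),
)

# Sum amounts within each (patient, hour) pair -> one observation per hour
collapsed = (transfusion.groupby([transfusion.index, "time"])["amount"]
             .sum().reset_index())
print(collapsed)
```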
## 2. Data merging
```
lab_e_transfusion=lab_results.merge(transfusion,left_on=[lab_results.index,'time'],right_on=[transfusion.index,'time'],how = 'outer').set_index('key_0')
vital_periodic=vital_periodic.groupby(['patientunitstayid','time'],as_index=False).mean().drop(['systemicsystolic','systemicdiastolic','systemicmean'],axis = 1)
lab_e_transfusion_e_periodic = lab_e_transfusion.merge(vital_periodic,left_on=[lab_e_transfusion.index,'time'],right_on=['patientunitstayid','time'],how = 'outer').set_index('patientunitstayid')
vital_aperiodic=vital_aperiodic.groupby(['patientunitstayid','time'],as_index=False).mean()
crystalloid=crystalloid.groupby(['patientunitstayid','time'],as_index=False).sum()
lab_e_transfusion_e_periodic_e_aperiodic = lab_e_transfusion_e_periodic.merge(vital_aperiodic,left_on=[lab_e_transfusion_e_periodic.index,'time'],right_on=['patientunitstayid','time'],how = 'outer').set_index('patientunitstayid')
lab_e_transfusion_e_periodic_e_aperiodic = lab_e_transfusion_e_periodic_e_aperiodic.merge(crystalloid,left_on=[lab_e_transfusion_e_periodic_e_aperiodic.index,'time'],right_on=['patientunitstayid','time'],how = 'outer').set_index('patientunitstayid')
tabel_final = lab_e_transfusion_e_periodic_e_aperiodic.join(patient_info,how = 'right')
```
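Each step above is a full outer join on the (patient, time) pair, so an hour survives even when only one source has data for it. A minimal two-table sketch of the same pattern (table and column names are illustrative):

```python
import pandas as pd

lab = pd.DataFrame({"patientunitstayid": [1, 1], "time": [0, 1], "glucose": [90.0, 95.0]})
vitals = pd.DataFrame({"patientunitstayid": [1], "time": [1], "heartrate": [72.0]})

# Outer join on the composite key: hour 0 is kept from labs even though vitals lack it
merged = lab.merge(vitals, on=["patientunitstayid", "time"], how="outer")
print(merged)
```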
## 3. Creating X and y
```
dataset = tabel_final.copy()
```
### 3.1 Rename columns as MIMIC
```
dataset=dataset.rename(columns={
'albumin':'ALBUMIN',
'creatinine':'CREATININE',
'glucose':'GLUCOSE',
'bicarbonate':'BICARBONATE',
'hematocrit':'HEMATOCRIT',
'hemoglobin':'HEMOGLOBIN',
'lactate':'LACTATE',
'potassium':'POTASSIUM',
'ptt':'PTT',
'wbc':'WBC',
'platelets':'PLATELET',
'amount':'AmountTransfused',
'temperature':'TempC',
'heartrate':'HEARTRATE',
'respiration':'RespRate',
'noninvasivesystolic':'SysBP',
'noninvasivediastolic':'DiasBP',
'noninvasivemean':'MeanBP',
'age':'admission_age',
'crystalloid':'crystalloid_bolus'
}
)
```
### 3.2 Handling ICU preadmission info
Let us define a dataframe which will contain all the information about the 12 hours prior to ICU admission
```
pre_icu = dataset[ (dataset['time']>=-12) & (dataset['time']<=-1)]
```
For each variable we define a specific extraction criterion
```
#define a new dataframe only containing the index of the patients
pre_x = pd.DataFrame(index=pre_icu.index.unique())
#feature extraction criteria
features_list = [
('ALBUMIN',np.nanmean),
('BUN',np.nanmedian),
('CREATININE', np.nanmax),
('GLUCOSE',np.nanmean),
('BICARBONATE', np.nanmedian),
('HEMATOCRIT', np.nanmin),
('HEMOGLOBIN', np.nanmin),
('INR',np.nanmean),
('LACTATE',np.nanmean),
('PLATELET', np.nanmin),
('POTASSIUM', np.nanmax),
('PTT', np.nanmax),
('WBC', np.nanmean),
('AmountTransfused', np.nansum),
('TempC', np.nanmin),
('HEARTRATE', np.nanmax),
('RespRate', np.nanmax),
('SysBP', np.nanmin),
('DiasBP',np.nanmin),
('MeanBP', np.nanmean),
('crystalloid_bolus', np.nansum),
('gender', np.nanmin),
('admission_age', np.nanmin),
]
#save features names
feature_names = [x[0] for x in features_list]
#process features column by column after grouping for patients
to_concat = []
grouped = pre_icu.groupby(pre_icu.index)
for feature, function in features_list:
to_concat.append(grouped[[feature]].apply(function))
#add obtained feature to previously defined dataframe
pre_x = pd.concat([pre_x] + to_concat, axis=1, join='inner')
pre_x.columns = feature_names
#add the time value
pre_x['time'] = [-1]*pre_x.shape[0]
```
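The column-by-column `apply` above can also be expressed with `DataFrame.groupby().agg` and a per-column function mapping. A sketch with two of the columns (data are illustrative; string aggregators skip NaN by default):

```python
import numpy as np
import pandas as pd

pre_icu = pd.DataFrame(
    {"CREATININE": [1.0, 1.4, np.nan], "HEMOGLOBIN": [12.0, 10.5, 11.0]},
    index=pd.Index([1, 1, 2], name="patientunitstayid"),
)

# One aggregation function per column, applied per patient
pre_x = pre_icu.groupby(level=0).agg({"CREATININE": "max", "HEMOGLOBIN": "min"})
print(pre_x.loc[1, "CREATININE"], pre_x.loc[2, "HEMOGLOBIN"])
```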
Initial information imputation
### 3.3 ICU training data
We start by subsetting data in the time interval of interest: 0-3 hours
```
final = dataset[dataset['time']<=3]
final = final[final['time']>=0]
```
Then let us merge the information prior to ICU admission obtained in the last step
```
final_x = pd.concat([pre_x,final],sort = True)
#reorder columns and rows
final_x = final_x[pre_x.columns]
final_x = final_x.sort_values(by=['patientunitstayid','time'])
```
Let us now recode sex and age as numeric values and remove patients of unknown gender
```
#gender
final_x = final_x[final_x.gender != 'Unknown']
final_x.replace('Male',0,inplace=True)
final_x.replace('Female',1,inplace=True)
final_x.gender = final_x.gender.astype('float')
#age
final_x.replace('> 89',90,inplace=True)
final_x.admission_age = final_x.admission_age.astype('float')
new = final_x.groupby(by=['patientunitstayid','time']).mean().reset_index()
new.set_index('patientunitstayid',inplace = True)
```
Now we need to make the data format uniform and add missing-value records where needed. The command below adds an empty line for each hour in which we don't have any kind of information
```
time = [-1,0,1,2,3]
for i in new.index.unique():
try:
if new.loc[i].time.shape[0] !=5:
missing = [x for x in time if x not in new.loc[i].time.values]
df = pd.DataFrame(missing, columns = ['time'],index = [i]*len(missing))
new = pd.concat([new,df])
except:
missing = [x for x in time if x not in new.loc[i].time.reshape(-1,1)]
df = pd.DataFrame(missing, columns = ['time'],index = [i]*len(missing))
new = pd.concat([new,df])
#since we added new timeslots we need to reorder the dataset
new['temp_axis'] = new.index
new.sort_values(by=['temp_axis','time'],inplace=True)
new.drop('temp_axis',axis=1,inplace=True)
```
Since we added empty lines, we need to fix static variables such as gender and age
```
for i in new.index.unique():
new.loc[i,['gender']] =np.nanmean(new.loc[i,['gender']])
for i in new.index.unique():
new.loc[i,['admission_age']] =np.nanmean(new.loc[i,['admission_age']])
```
We drop patients without age and gender
```
to_drop = []
for i in new.index.unique():
if(sum(new.loc[i].gender.isna().values)!=0): to_drop.append(i)
for i in new.index.unique():
if(sum(new.loc[i].admission_age.isna().values)!=0): to_drop.append(i)
new.drop(to_drop,inplace=True)
new.time.value_counts().sum()
```
### 3.4 Missing imputation
We start by imputing the hour prior to ICU entering
```
#LABS - compute values to impute
pre_imputation=[
('ALBUMIN',np.nanmedian),
('BUN',np.nanmedian),
('CREATININE',np.nanmedian),
('GLUCOSE',np.nanmedian),
('BICARBONATE',np.nanmedian),
('HEMATOCRIT',np.nanmedian),
('HEMOGLOBIN',np.nanmedian),
('INR',np.nanmedian),
('LACTATE',np.nanmedian),
('PLATELET',np.nanmedian),
('POTASSIUM',np.nanmedian),
('PTT',np.nanmedian),
('WBC',np.nanmedian)
]
#save them in a dictionary
imputation_value_dict = {}
for i,j in pre_imputation:
imputation_value_dict[i] = j(new[new['time']==-1][i].values)
#impute values
for j in tqdm.tqdm(new.index.unique()):
for i,_ in pre_imputation:
if(math.isnan(new.loc[(new.index==j) & (new.time==-1),i].values)):
new.loc[(new.index==j) & (new.time==-1),i]=imputation_value_dict[i]
```
Next we impute the vitals values for the -1 hour. We do that by using the median of the values at hour 0.
```
#VITALS - compute the values to impute
periodic_pre_impute=[
('TempC',np.nanmedian),
('HEARTRATE',np.nanmedian),
('RespRate',np.nanmedian),
('SysBP',np.nanmedian),
('DiasBP',np.nanmedian),
('MeanBP',np.nanmedian)
]
imputation_value_dict = {}
#compute the values to impute from next hour
for i,j in periodic_pre_impute:
imputation_value_dict[i] = j(new[new['time']==0][i].values)
#impute values
for j in tqdm.tqdm(new.index.unique()):
for i,_ in periodic_pre_impute:
if(math.isnan(new.loc[(new.index==j) & (new.time==-1),i].values)):
new.loc[(new.index==j) & (new.time==-1),i]=imputation_value_dict[i]
```
Let us first impute the values that are not meant to be forward filled, namely AmountTransfused and crystalloid_bolus
```
new['crystalloid_bolus'].fillna(0,inplace=True)
new['AmountTransfused'].fillna(0,inplace=True)
```
Let us now dynamically impute the remaining values
```
for i in tqdm.tqdm(new.index.unique()):
new.loc[i]=new.loc[i].fillna(method='ffill')
new.columns
#raw export
new.to_csv('raw_x_0_3_eICU.csv')
```
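The per-patient loop above forward-fills each patient separately; an equivalent and usually much faster vectorized form groups by the index level (a sketch — `groupby(level=0).ffill()` fills within each patient only, so values never leak across patients):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(
    {"HEARTRATE": [80.0, np.nan, np.nan, 60.0, np.nan]},
    index=pd.Index([1, 1, 1, 2, 2], name="patientunitstayid"),
)

# Forward-fill within each patient; the NaN at the start of patient 2's block
# would stay NaN rather than inherit patient 1's last value
filled = df.groupby(level=0).ffill()
print(filled["HEARTRATE"].tolist())
```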
### 3.5 Feature collapsing
Let us now collapse the time-series features into one observation each, according to the criteria chosen below
```
collapsed = pd.DataFrame(index=new.index.unique())
features_list = [
#('gender', np.nanmin),
('ALBUMIN',np.nanmean),
('BUN', np.nanmax),
('CREATININE', np.nanmax),
('GLUCOSE',np.nanmean),
('BICARBONATE', np.nanmin),
('HEMATOCRIT', np.nanmin),
('HEMOGLOBIN', np.nanmin),
('INR',np.nanmean),
('LACTATE',np.nanmean),
('PLATELET', np.nanmin),
('POTASSIUM', np.nanmax),
('PTT', np.nanmax),
('WBC', np.nanmean),
('AmountTransfused', np.nansum),
('TempC', np.nanmin),
('HEARTRATE', np.nanmax),
('RespRate', np.nanmax),
('SysBP', np.nanmin),
('DiasBP',np.nanmean),
('MeanBP', np.nanmean),
('crystalloid_bolus',np.nansum),
('gender',np.nanmin),
('admission_age', np.nanmin),
]
to_concat = []
grouped = new.groupby(new.index)
for feature, function in features_list:
to_concat.append(grouped[[feature]].apply(function))
collapsed = pd.concat([collapsed] + to_concat, axis=1, join='inner')
col_names =[x[0] for x in features_list]
collapsed.columns = col_names
collapsed
```
### 3.6 Feature engineering
We now add additional features in order to improve the performance of the classifier and capture temporal patterns. Namely, we compute the intercept and slope of a linear fit on the available time series. Before doing so we need to recode the information about age and gender in order to exclude them from the subsequent analysis
```
new['gender'] = new['gender'].astype('int')
new['admission_age'] = new['admission_age'].astype('int')
```
Compute now the new features
```
#trend features
trend_features = pd.DataFrame(index=new.index.unique())
import warnings
with warnings.catch_warnings():
warnings.simplefilter("ignore")
exclusion_list = ['time','AmountTransfused','crystalloid_bolus']
#select the columns of which we should make feature engineering
for col in new.select_dtypes('float').columns.tolist():
print(col)
list_fit=[]
if(col not in exclusion_list):
#for each patient fit the variable against time and keep slope and intercept
for ind in new.index.unique():
temp=new.loc[ind]
value = np.polyfit(temp.time,temp[col],1)
list_fit.append(value)
#make the labels
label_slope = 'slope_'+col
label_int = 'intercept_'+col
#add the features (np.polyfit returns coefficients highest power first: [slope, intercept])
trend_features[label_slope] = [x[0] for x in list_fit]
trend_features[label_int] = [x[1] for x in list_fit]
else:
pass
```
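As a sanity check on the trend features: `np.polyfit(x, y, 1)` returns the coefficients highest power first, i.e. `[slope, intercept]`. A quick check on an exact line:

```python
import numpy as np

time = np.array([-1, 0, 1, 2, 3], dtype=float)
values = 2.0 * time + 5.0  # exact line: slope 2, intercept 5

slope, intercept = np.polyfit(time, values, 1)
print(slope, intercept)  # slope ≈ 2.0, intercept ≈ 5.0
```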
### 3.7 Final feature merging
```
to_export = pd.concat([collapsed] + [trend_features] , axis=1, join='inner')
to_export.columns
```
### 3.8 Making the y's
```
to_drop = future_checker(dataset,4)
y = dataset[dataset.time>=4]
y.columns
```
Before starting we have to check whether, for some patients, there could be no data after the 3rd hour of ICU. Those patients are to be removed both from the Xs and the ys
Remove those indexes
```
X= to_export.drop(to_drop)
```
Then create the y by checking who has a positive amount of blood transfused
```
y['temp_index'] = y.index
y=y[['temp_index','AmountTransfused']].groupby(['temp_index']).sum()
temp_index = y.index.copy()
y = [int(x) for x in (y>0).values]
y = pd.Series(y)
y.index = temp_index
y.name ='outcome'
```
### 3.9 Final checks and export
```
X = X.merge(y,left_index=True,right_index=True)
y = X['outcome']
X = X.drop('outcome',axis=1)
X.shape
y.shape
X.to_csv('x_0_3_eICU.csv')
y.to_csv('y_4_24_eICU.csv')
```
## High transfusion dataset
```
transfused_idx = y[y==1].index
#extract only transfused patients from the whole dataset
y_transfused = dataset.loc[transfused_idx].copy()
#subset the whole dataset to the columns of interest
y_transfused = y_transfused[['time','AmountTransfused']]
#training set identical to the previous, need to extract different y's. Start by subsetting for time
y_transfused = y_transfused[y_transfused.time >=4]
#create a temporary feature to use in groupby to aggregate measures
y_transfused['temp_index'] = y_transfused.index
y_transfused = y_transfused.drop('time',axis=1)
y_transfused = y_transfused.groupby('temp_index').sum()
#save indexes
temp_index = y_transfused.index
#transform booleans into integers and create the series containing the outcomes
y_transfused= [int(x[0]) for x in (y_transfused>500).values]
y_transfused = pd.Series(y_transfused)
y_transfused.index = temp_index
y_transfused.name ='outcome'
#extract the relative X's from the previous dataset
X_transfused = X.loc[temp_index]
X_transfused.head()
y_transfused.head()
X_transfused.to_csv('Regression_x_0_3_eICU.csv')
y_transfused.to_csv('Regression_y_4_24_eICU.csv')
```
<table width="100%"> <tr>
<td style="background-color:#ffffff;">
<a href="http://qworld.lu.lv" target="_blank"><img src="../images/qworld.jpg" width="35%" align="left"> </a></td>
<td style="background-color:#ffffff;vertical-align:bottom;text-align:right;">
prepared by Abuzer Yakaryilmaz (<a href="http://qworld.lu.lv/index.php/qlatvia/" target="_blank">QLatvia</a>)
<br>
updated by Özlem Salehi | October 25, 2020
</td>
</tr></table>
<table width="100%"><tr><td style="color:#bbbbbb;background-color:#ffffff;font-size:11px;font-style:italic;text-align:right;">This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros. </td></tr></table>
$ \newcommand{\bra}[1]{\langle #1|} $
$ \newcommand{\ket}[1]{|#1\rangle} $
$ \newcommand{\braket}[2]{\langle #1|#2\rangle} $
$ \newcommand{\dot}[2]{ #1 \cdot #2} $
$ \newcommand{\biginner}[2]{\left\langle #1,#2\right\rangle} $
$ \newcommand{\mymatrix}[2]{\left( \begin{array}{#1} #2\end{array} \right)} $
$ \newcommand{\myvector}[1]{\mymatrix{c}{#1}} $
$ \newcommand{\myrvector}[1]{\mymatrix{r}{#1}} $
$ \newcommand{\mypar}[1]{\left( #1 \right)} $
$ \newcommand{\mybigpar}[1]{ \Big( #1 \Big)} $
$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $
$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $
$ \newcommand{\onehalf}{\frac{1}{2}} $
$ \newcommand{\donehalf}{\dfrac{1}{2}} $
$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $
$ \newcommand{\vzero}{\myvector{1\\0}} $
$ \newcommand{\vone}{\myvector{0\\1}} $
$ \newcommand{\stateplus}{\myvector{ \sqrttwo \\ \sqrttwo } } $
$ \newcommand{\stateminus}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $
$ \newcommand{\myarray}[2]{ \begin{array}{#1}#2\end{array}} $
$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $
$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $
$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $
$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $
$ \newcommand{\norm}[1]{ \left\lVert #1 \right\rVert } $
$ \newcommand{\pstate}[1]{ \lceil \mspace{-1mu} #1 \mspace{-1.5mu} \rfloor } $
<h2>Reflections</h2>
<table align="left"><tr><td><i>
We use certain tools from python library "<b>matplotlib.pyplot</b>" for drawing.
Check the notebook "<a href="../python/Python06_Drawing.ipynb" target="_blank">Python: Drawing</a>" for the list of these tools.
</i></td></tr></table>
<h3> Hadamard operator </h3>
Is Hadamard operator a reflection? If so, what is its line of reflection?
Remember the following transitions.
$ H \ket{0} = \hadamard \vzero = \stateplus = \ket{+} ~~~$ and $~~~ H \ket{+} = \hadamard \stateplus = \vzero = \ket{0} $.
$ H \ket{1} = \hadamard \vone = \stateminus = \ket{-} ~~~$ and $~~~ H \ket{-} = \hadamard \stateminus = \vone = \ket{1} $.
```
%run qlatvia.py
draw_qubit()
sqrttwo=2**0.5
draw_quantum_state(1,0,"")
draw_quantum_state(1/sqrttwo,1/sqrttwo,"|+>")
%run qlatvia.py
draw_qubit()
sqrttwo=2**0.5
draw_quantum_state(0,1,"")
draw_quantum_state(1/sqrttwo,-1/sqrttwo,"|->")
```
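Before looking at the geometry, we can check numerically that Hadamard behaves like a reflection: a reflection matrix is its own inverse and has determinant $-1$. A quick NumPy check:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

print(np.allclose(H @ H, np.eye(2)))      # H is an involution: H·H = I
print(np.isclose(np.linalg.det(H), -1))   # determinant -1 => reflection, not rotation
```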
<h3> Hadamard - geometrical interpretation </h3>
Hadamard operator is a reflection and its line of reflection is represented below.
It is the line obtained by rotating the $x$-axis by $ \frac{\pi}{8} $ radians in the counter-clockwise direction.
```
%run qlatvia.py
draw_qubit()
sqrttwo=2**0.5
draw_quantum_state(1,0,"")
draw_quantum_state(1/sqrttwo,1/sqrttwo,"|+>")
# line of reflection for Hadamard
from matplotlib.pyplot import arrow
arrow(-1.109,-0.459,2.218,0.918,linestyle='dotted',color='red')
# drawing the angle with |0>-axis
from matplotlib.pyplot import gca, text
from matplotlib.patches import Arc
gca().add_patch( Arc((0,0),0.4,0.4,angle=0,theta1=0,theta2=22.5) )
text(0.09,0.015,'.',fontsize=30)
text(0.25,0.03,'\u03C0/8')
gca().add_patch( Arc((0,0),0.4,0.4,angle=0,theta1=22.5,theta2=45) )
text(0.075,0.065,'.',fontsize=30)
text(0.21,0.16,'\u03C0/8')
```
Let us visually verify that Hadamard operator reflects the states on the same row to each other.
$$
\myarray{|c|c|c|c|}{
\hline
A & \ket{0} = \vzero & E & \ket{+} = \myrvector{\sqrttwo \\ \sqrttwo}
\\ \hline
B & \ket{1} = \vone & F & \ket{-} = \myrvector{\sqrttwo \\ -\sqrttwo}
\\ \hline
C & -\ket{0} = \myrvector{-1 \\ 0} & G & -\ket{+} = \myrvector{-\sqrttwo \\ -\sqrttwo}
\\ \hline
D & -\ket{1} = \myrvector{0 \\ -1} & H & -\ket{-} = \myrvector{-\sqrttwo \\ \sqrttwo}
\\ \hline
}
$$
<b> The second row</b>
```
%run qlatvia.py
draw_qubit()
# line of reflection for Hadamard
from matplotlib.pyplot import arrow
arrow(-1.109,-0.459,2.218,0.918,linestyle='dotted',color='red')
sqrttwo=2**0.5
draw_quantum_state(0,1,"")
draw_quantum_state(1/sqrttwo,-1/sqrttwo,"|->")
```
<b> The third row</b>
```
%run qlatvia.py
draw_qubit()
# line of reflection for Hadamard
from matplotlib.pyplot import arrow
arrow(-1.109,-0.459,2.218,0.918,linestyle='dotted',color='red')
sqrttwo=2**0.5
draw_quantum_state(-1,0,"")
draw_quantum_state(-1/sqrttwo,-1/sqrttwo,"-|+>")
```
<b> The fourth row</b>
```
%run qlatvia.py
draw_qubit()
# line of reflection for Hadamard
from matplotlib.pyplot import arrow
arrow(-1.109,-0.459,2.218,0.918,linestyle='dotted',color='red')
sqrttwo=2**0.5
draw_quantum_state(0,-1,"")
draw_quantum_state(-1/sqrttwo,1/sqrttwo,"-|->")
```
<b> Random quantum states</b>
Now let us create a random quantum state and apply the Hadamard matrix.
A function for randomly creating a 2-dimensional quantum state:
```
# randomly create a 2-dimensional quantum state
from math import cos, sin, pi
from random import randrange
def random_quantum_state2():
angle_degree = randrange(360)
angle_radian = 2*pi*angle_degree/360
return [cos(angle_radian),sin(angle_radian)]
%run qlatvia.py
draw_qubit()
# line of reflection for Hadamard
from matplotlib.pyplot import arrow
arrow(-1.109,-0.459,2.218,0.918,linestyle='dotted',color='red')
[x1,y1] = random_quantum_state2()
print(x1,y1)
sqrttwo=2**0.5
oversqrttwo = 1/sqrttwo
[x2,y2] = [ oversqrttwo*x1 + oversqrttwo*y1 , oversqrttwo*x1 - oversqrttwo*y1 ]
print(x2,y2)
draw_quantum_state(x1,y1,"main")
draw_quantum_state(x2,y2,"ref")
```
<h3> Task 1 </h3>
Find the matrix representing the reflection over $x$-axis.
<a href="B60_Reflections_Solutions.ipynb#task1">click for our solution</a>
<h3> Task 2 </h3>
Find the matrix representing the reflection over $y$-axis.
<a href="B60_Reflections_Solutions.ipynb#task4">click for our solution</a>
<h3> Task 3 </h3>
Find the matrix representing the reflection over the line $y=x$.
<i> Hint:</i> Think about the reflections of the points $ \myrvector{0 \\ 1} $, $ \myrvector{-1 \\ 0} $, and $ \myrvector{-\sqrttwo \\ \sqrttwo} $ over the line $y=x$.
<a href="B60_Reflections_Solutions.ipynb#task5">click for our solution</a>
<h3>Reflection Operators</h3>
As we have observed, the following operators are reflections on the unit circle.
<b> Z operator:</b> $ Z = \mymatrix{rr}{ 1 & 0 \\ 0 & -1 } $. The line of reflection is the $x$-axis.
<b> NOT operator:</b> $ X = \mymatrix{rr}{ 0 & 1 \\ 1 & 0 } $. The line of reflection is $y=x$.
<b> Hadamard operator:</b> $ H = \hadamard $. The line of reflection is $y= \frac{\sin(\pi/8)}{\cos(\pi/8)} x$.
It is the line passing through the origin making an angle of $ \pi/8 $ radians with the $x$-axis.
<b>Arbitrary reflection operator:</b> Let $ \theta $ be the angle of the line of reflection. Then the matrix form of the reflection is represented as follows:
$$ \mathrm{Ref}(\theta) = \mymatrix{rr}{ \cos(2\theta) & \sin(2\theta) \\ \sin(2\theta) & -\cos(2\theta) } . $$
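As a quick numerical sanity check (a sketch using `numpy`, which this notebook does not otherwise import), $\mathrm{Ref}(\theta)$ at $\theta = 0$, $\pi/4$, and $\pi/8$ reproduces the Z, NOT, and Hadamard operators listed above:

```python
import numpy as np

def reflection(theta):
    # reflection matrix over the line making angle theta with the x-axis
    return np.array([[np.cos(2*theta),  np.sin(2*theta)],
                     [np.sin(2*theta), -np.cos(2*theta)]])

Z = np.array([[1, 0], [0, -1]])
X = np.array([[0, 1], [1, 0]])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

print(np.allclose(reflection(0), Z))        # True: reflection over the x-axis
print(np.allclose(reflection(np.pi/4), X))  # True: reflection over y = x
print(np.allclose(reflection(np.pi/8), H))  # True: reflection over the pi/8 line
```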
<h3> Task 4 [Extra]</h3>
Let $\ket{v}=\myvector{\cos(\theta) \\ \sin(\theta) }$ be an arbitrary quantum state.
Find the matrix $2\ket{v}\bra{v}-I$. Does it look familiar? (Recall that $\bra{v}=v^T$)
Hint: The following may be useful.
<i> $\sin(2\theta) = 2\sin(\theta)\cos(\theta)$
$\cos(2\theta) = 1 - 2\sin^2(\theta) = 2\cos^2(\theta)-1 $
</i>
<a href="B60_Reflections_Solutions.ipynb#task4">click for our solution</a>
<h3> Task 5 [Extra] </h3>
Randomly pick the angle $\theta$.
Draw the line of reflection with unit circle.
Construct the corresponding reflection matrix.
Randomly create a quantum state and multiply it with this matrix to find its reflection.
Draw both states.
Repeat the task for a few times.
```
%run qlatvia.py
draw_qubit()
#
# your code is here
#
# line of reflection
# from matplotlib.pyplot import arrow
# arrow(x,y,dx,dy,linestyle='dotted',color='red')
#
#
# draw_quantum_state(x,y,"name")
```
| github_jupyter |
# Updating features in a feature layer
As content publishers, you may be required to keep certain web layers up to date. As new data arrives, you may have to append new features, update existing features, and so on. There are a couple of different options to accomplish this:
- Method 1: editing individual features as updated datasets are available
- Method 2: overwriting feature layers altogether with updated datasets
Depending on the number of features that are updated and your workflow requirements, you may adopt either or both kinds of update mechanisms.
In this sample, we explore the first method:
**Method 1**
- [Updating feature layer by editing individual features](#Updating-feature-layer-by-editing-individual-features)
- [Publish the cities feature layer using the initial dataset](#Publish-the-cities-feature-layer-using-the-initial-dataset)
- [Apply updates from the second spreadsheet](#Apply-updates-from-the-second-spreadsheet)
- [Identifying existing features that need to be updated](#Identifying-existing-features-that-need-to-be-updated)
- [Perform updates to the existing features](#Perform-updates-to-the-existing-features)
- [Identifying new features that need to be added](#Identifying-new-features-that-need-to-be-added)
- [Adding new features](#Adding-new-features)
- [Apply edits from third spreadsheet](#Apply-edits-from-third-spreadsheet)
- [Inspecting existing fields of the feature layer](#Inspecting-existing-fields-of-the-feature-layer)
- [Preparing additional columns to add to the feature layer](#Preparing-additional-columns-to-add-to-the-feature-layer)
- [Adding additional fields to the feature layer](#Adding-additional-fields-to-the-feature-layer)
- [Adding attribute values to the new columns](#Adding-attribute-values-to-the-new-columns)
For **Method 2**, refer to the sample titled [Overwriting feature layers](https://developers.arcgis.com/python/sample-notebooks/overwriting-feature-layers)
**Note**: To run this sample, you need the ``pandas`` library in your conda environment. If you don't have the library, install it by running the following command from cmd.exe or your shell
```
conda install pandas
```
# Connect to the GIS
from arcgis.gis import GIS
from arcgis import features
import pandas as pd
#Access the portal using "amazing_arcgis_123" as password for the given Username.
gis = GIS("https://pythonapi.playground.esri.com/portal", "arcgis_python")
```
## Updating feature layer by editing individual features
Let us consider a scenario where we need to update a feature layer containing the capital cities of the US. We have 3 csv datasets simulating an update workflow as described below:
1. capitals_1.csv -- contains the initial, incomplete dataset
2. capitals_2.csv -- contains additional points and updates to existing points, building on top of capitals_1.csv
3. capitals_annex.csv -- an alternate table containing additional attribute information
Our goal is to update the feature layer with each of these datasets doing the necessary edit operations.
### Publish the cities feature layer using the initial dataset
```
# read the initial csv
csv1 = 'data/updating_gis_content/capitals_1.csv'
cities_df_1 = pd.read_csv(csv1)
cities_df_1.head()
# print the number of records in this csv
cities_df_1.shape
```
As you can see, this dataset contains only 19 rows (19 capital cities); it is not the complete dataset.
Let's add this `csv` as a portal item. Adding the item creates a CSV item and uploads the original file to the portal, establishing a link between the item and the original file name. Therefore, we need a unique name for the file to guarantee it does not collide with any file of the same name that may have been uploaded by the same user. We'll use standard library modules to copy the file and give it a new name so we can add it to the portal
```
import os
import datetime as dt
import shutil
# assign variables to locations on the file system
cwd = os.path.abspath(os.getcwd())
data_pth = os.path.join(cwd, r'data/updating_gis_content/')
# create a unique timestamp string to append to the file name
now_ts = str(int(dt.datetime.now().timestamp()))
# copy the file, appending the unique string and assign it to a variable
my_csv = shutil.copyfile(os.path.abspath(csv1),
os.path.join(data_pth, 'capitals_1_' + now_ts + '.csv'))
my_csv
# add the initial csv file and publish that as a web layer
item_prop = {'title':'USA Capitals spreadsheet ' + now_ts}
csv_item = gis.content.add(item_properties=item_prop, data=my_csv)
csv_item
```
This spreadsheet has coordinates in the `latitude` and `longitude` columns, which will be used for geometries during publishing.
```
# publish the csv item into a feature layer
cities_item = csv_item.publish()
cities_item
# update the item metadata
item_prop = {'title':'USA Capitals'}
cities_item.update(item_properties = item_prop, thumbnail='data/updating_gis_content/capital_cities.png')
cities_item
```
### Apply updates from the second spreadsheet
The next set of updates has arrived and is stored in `capitals_2.csv`. We are told it contains corrections for the original set in addition to new features. We need to figure out which rows have changed, apply the `update` operation on those, then apply the `add` operation to the new rows.
To start with, let us read the second csv file. Note that in this sample the data is stored in csv files; in reality, it could come from your enterprise database or any other data source.
```
# read the second csv set
csv2 = 'data/updating_gis_content/capitals_2.csv'
cities_df_2 = pd.read_csv(csv2)
cities_df_2.head()
# get the dimensions of this csv
cities_df_2.shape
```
#### Identifying existing features that need to be updated
To identify features that need to be updated, let us read the attribute table of the published feature layer and compare that against the second csv. To read the attribute table, we perform a `query()` on the feature layer which returns us an `arcgis.feature.FeatureSet` object. Refer to the guide pages on [accessing features from feature layers](https://developers.arcgis.com/python/guide/working-with-feature-layers-and-features/) to learn more about this.
Note, at this point, we could work with the `cities_df_1` dataframe we created from the original csv file. However, in practice you may not always have the original dataset or your feature layer might have undergone edits after it was published. Hence, we query the feature layer directly.
```
cities_flayer = cities_item.layers[0]
cities_fset = cities_flayer.query() #querying without any conditions returns all the features
cities_fset.sdf.head()
```
The `city_id` column is common between both the datasets. Next, let us perform an `inner` join with the table from feature layer as left and updated csv as right. Inner joins will yield those rows that are present in both tables. Learn more about [inner joins here](https://www.w3schools.com/sql/sql_join_inner.asp).
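As a toy illustration (with made-up tables, separate from our capitals data), an inner merge keeps only the keys present in both frames:

```python
import pandas as pd

# two toy tables sharing only city_id 2 and 3
left = pd.DataFrame({'city_id': [1, 2, 3], 'name': ['a', 'b', 'c']})
right = pd.DataFrame({'city_id': [2, 3, 4], 'pop': [10, 20, 30]})

common = pd.merge(left, right, how='inner', on='city_id')
print(common)  # only the rows with city_id 2 and 3 survive
```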
```
overlap_rows = pd.merge(left = cities_fset.sdf, right = cities_df_2, how='inner',
on = 'city_id')
overlap_rows
```
Thus, of the 19 features in the original dataset and 36 features in the second csv, 4 features are common. Inspecting the table, we find that certain columns were updated: for instance, Cheyenne has its coordinates corrected, Oklahoma City has its state abbreviation corrected, and similarly other cities have one of their attribute columns updated.
We could either update individual attribute values for these 4 features or update all attribute values with the latest csv. Below, we are performing the latter as it is simple and fast.
#### Perform updates to the existing features
```
features_for_update = [] #list containing corrected features
all_features = cities_fset.features
# inspect one of the features
all_features[0]
```
Note that the X and Y geometry values are different from the decimal degree coordinates present in the Longitude and Latitude fields. To perform geometry edits, we need to project the coordinates to match those of the feature layer.
```
# get the spatial reference of the features since we need to update the geometry
cities_fset.spatial_reference
```
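As an aside, if the destination happened to be the common Web Mercator system (wkid 3857 — an assumption for illustration; your layer's spatial reference may differ), the projection math could be sketched by hand as below. In the actual workflow we let the `arcgis.geometry` module do this for us:

```python
import math

def wgs84_to_web_mercator(lon, lat):
    # spherical Web Mercator forward projection (EPSG:3857)
    R = 6378137.0  # radius of the Web Mercator reference sphere, in meters
    x = math.radians(lon) * R
    y = R * math.log(math.tan(math.pi / 4 + math.radians(lat) / 2))
    return x, y

# Cheyenne is near longitude -104.8, latitude 41.1;
# the projected values come out in meters
x, y = wgs84_to_web_mercator(-104.8, 41.1)
print(x, y)
```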
Below, we prepare updated geometries and attributes for each of the 4 features we determined above. We use the `arcgis.geometry` module to `project` the coordinates from geographic to projected coordinate system. The cell below prints the original `Feature` objects followed by the updated ones. If you look closely, you can find the differences.
```
from arcgis import geometry #use geometry module to project Long,Lat to X and Y
from copy import deepcopy
for city_id in overlap_rows['city_id']:
    # get the feature to be updated
    original_feature = [f for f in all_features if f.attributes['city_id'] == city_id][0]
    feature_to_be_updated = deepcopy(original_feature)
    print(str(original_feature))
    # get the matching row from csv
    matching_row = cities_df_2.where(cities_df_2.city_id == city_id).dropna()
    # get geometries in the destination coordinate system
    input_geometry = {'y':float(matching_row['latitude']),
                      'x':float(matching_row['longitude'])}
    output_geometry = geometry.project(geometries = [input_geometry],
                                       in_sr = 4326,
                                       out_sr = cities_fset.spatial_reference['latestWkid'],
                                       gis = gis)
    # assign the updated values
    feature_to_be_updated.geometry = output_geometry[0]
    feature_to_be_updated.attributes['longitude'] = float(matching_row['longitude'])
    feature_to_be_updated.attributes['city_id'] = int(matching_row['city_id'])
    feature_to_be_updated.attributes['state'] = matching_row['state'].values[0]
    feature_to_be_updated.attributes['capital'] = matching_row['capital'].values[0]
    feature_to_be_updated.attributes['latitude'] = float(matching_row['latitude'])
    feature_to_be_updated.attributes['name'] = matching_row['name'].values[0]
    feature_to_be_updated.attributes['pop2000'] = int(matching_row['pop2000'])
    feature_to_be_updated.attributes['pop2007'] = int(matching_row['pop2007'])
    # add this to the list of features to be updated
    features_for_update.append(feature_to_be_updated)
    print(str(feature_to_be_updated))
    print("========================================================================")
```
We have constructed a list of features with updated values. We can use this list to perform updates on the feature layer.
```
features_for_update
```
To update the feature layer, call the `edit_features()` method of the `FeatureLayer` object and pass the list of features to the `updates` parameter:
```
cities_flayer.edit_features(updates= features_for_update)
```
We have successfully applied corrections to those features which existed in the feature layer from the initial dataset. Next let us proceed to adding new features present only in the second csv file.
#### Identifying new features that need to be added
```
#select those rows in the capitals_2.csv that do not overlap with those in capitals_1.csv
new_rows = cities_df_2[~cities_df_2['city_id'].isin(overlap_rows['city_id'])]
print(new_rows.shape)
new_rows.head()
```
Thus, of the total 36 rows in the second csv, we have determined that the other 32 rows are new and need to be appended as new features.
#### Adding new features
Next, let us compose another `list` of `Feature` objects similar to earlier, from the `new_rows` data frame.
```
features_to_be_added = []
# get a template feature object
template_feature = deepcopy(features_for_update[0])
# loop through each row and add to the list of features to be added
for row in new_rows.iterrows():
    new_feature = deepcopy(template_feature)
    # print the city name as we go
    print("Creating " + row[1]['name'])
    # get geometries in the destination coordinate system
    input_geometry = {'y':float(row[1]['latitude']),
                      'x':float(row[1]['longitude'])}
    output_geometry = geometry.project(geometries = [input_geometry],
                                       in_sr = 4326,
                                       out_sr = cities_fset.spatial_reference['latestWkid'],
                                       gis = gis)
    # assign the updated values
    new_feature.geometry = output_geometry[0]
    new_feature.attributes['longitude'] = float(row[1]['longitude'])
    new_feature.attributes['city_id'] = int(row[1]['city_id'])
    new_feature.attributes['state'] = row[1]['state']
    new_feature.attributes['capital'] = row[1]['capital']
    new_feature.attributes['latitude'] = float(row[1]['latitude'])
    new_feature.attributes['name'] = row[1]['name']
    new_feature.attributes['pop2000'] = int(row[1]['pop2000'])
    new_feature.attributes['pop2007'] = int(row[1]['pop2007'])
    # add this to the list of features to be added
    features_to_be_added.append(new_feature)
# take a look at one of the features we created
features_to_be_added[0]
```
Thus, we have created a `list` of `Feature` objects with appropriate attributes and geometries. Next, to add these new features to the feature layer, call the `edit_features()` method of the `FeatureLayer` object and pass the list of `Feature` objects to the `adds` parameter:
```
cities_flayer.edit_features(adds = features_to_be_added)
```
Thus, we have successfully applied edits from second csv file. Next let us look at how we can apply edits from third csv file.
### Apply edits from third spreadsheet
The next set of updates has arrived and is stored in `capitals_annex.csv`. We are told it contains additional columns for each of the features that we want to add to the feature layer.
To start with, let us read the third csv file. Note that in this sample the data is stored in csv files; in reality, it could come from your enterprise database or any other data source.
```
# read the third csv set
csv3 = 'data/updating_gis_content/capitals_annex.csv'
cities_df_3 = pd.read_csv(csv3)
cities_df_3.head()
#find the number of rows in the third csv
cities_df_3.shape
```
The `capitals_annex.csv` does not add new features; instead it adds additional attribute columns to existing features. It has 51 rows, which match the 19 + 32 rows from the first and second csv files. The columns `City_ID` and `NAME` are common to all 3 spreadsheets. Next, let us take a look at how we can append this additional attribute information to our feature layer.
#### Inspecting existing fields of the feature layer
The `manager` property of the `FeatureLayer` object exposes a set of methods to read and update the properties and definition of feature layers.
```
#Get the existing list of fields on the cities feature layer
cities_fields = cities_flayer.manager.properties.fields
# Your feature layer may have multiple fields,
# instead of printing all, let us take a look at one of the fields:
cities_fields[1]
```
From above, we can see the representation of one of the fields. Let us loop through each of the fields and print the `name`, `alias`, `type` and `sqlType` properties
```
for field in cities_fields:
    print(f"{field.name:13}| {field.alias:13}| {field.type:25}| {field.sqlType}")
```
#### Preparing additional columns to add to the feature layer
Now that we have an idea of how the fields are defined, we can go ahead and append new fields to the layer's definition. Once we compose a list of new fields, by calling the `add_to_definition()` method we can push those changes to the feature layer. Once the feature layer's definition is updated with new fields, we can loop through each feature and add the appropriate attribute values.
To compose a list of new fields to be added, we start by making a copy of one of the fields as a template and then editing it. One convenient aspect of this example is that all new fields that need to be added, except one, are of the same data type: integer. With your data this may not be the case; in such instances, you can add each field individually.
```
# get a template field
template_field = dict(deepcopy(cities_fields[1]))
template_field
```
Let us use pandas to get the list of fields that are **new** in spreadsheet 3
```
# get the list of new fields to add from the third spreadsheet, that are not in spread sheets 1,2
new_field_names = list(cities_df_3.columns.difference(cities_df_1.columns))
new_field_names
```
Now loop through each new field name and create a field dictionary using the template we created earlier. Except for the field titled `class`, all other fields are of type `integer`.
```
fields_to_be_added = []
for new_field_name in new_field_names:
    current_field = deepcopy(template_field)
    if new_field_name.lower() == 'class':
        current_field['sqlType'] = 'sqlTypeVarchar'
        current_field['type'] = 'esriFieldTypeString'
        current_field['length'] = 8000
    current_field['name'] = new_field_name.lower()
    current_field['alias'] = new_field_name
    fields_to_be_added.append(current_field)
len(fields_to_be_added)
#inspect one of the fields
fields_to_be_added[3]
```
#### Adding additional fields to the feature layer
The list of new fields we composed can be pushed to the server by calling `add_to_definition()` method on the `manager` property.
```
cities_flayer.manager.add_to_definition({'fields':fields_to_be_added})
```
Thus, we have successfully added new fields to our feature layer. Let us verify the new columns show up:
```
new_cities_fields = cities_flayer.manager.properties.fields
len(new_cities_fields)
for field in new_cities_fields:
    print(f"{field.name:10}| {field.type}")
```
#### Adding attribute values to the new columns
Next we can loop through each row in the third csv and add the new attribute values for these newly created columns.
```
# Run a fresh query on the feature layer so it includes the new features from
# csv2 and new columns from csv3
cities_fset2 = cities_flayer.query()
cities_features2 = cities_fset2.features
```
Loop through each row in the third spreadsheet, find the corresponding feature by matching the `city_id` value and apply the attribute values for the new fields.
```
features_for_update = []
for city_id in cities_df_3['city_id']:
    # get the matching row from csv
    matching_row = cities_df_3.where(cities_df_3.city_id == city_id).dropna()
    print(str(city_id) + " Adding additional attributes for: " + matching_row['name'].values[0])
    # get the feature to be updated
    original_feature = [f for f in cities_features2 if f.attributes['city_id'] == city_id][0]
    feature_to_be_updated = deepcopy(original_feature)
    # assign the updated values
    feature_to_be_updated.attributes['class'] = matching_row['class'].values[0]
    feature_to_be_updated.attributes['white'] = int(matching_row['white'])
    feature_to_be_updated.attributes['black'] = int(matching_row['black'])
    feature_to_be_updated.attributes['ameri_es'] = int(matching_row['ameri_es'])
    feature_to_be_updated.attributes['asian'] = int(matching_row['asian'])
    feature_to_be_updated.attributes['hawn_pl'] = int(matching_row['hawn_pl'])
    feature_to_be_updated.attributes['hispanic'] = int(matching_row['hispanic'])
    feature_to_be_updated.attributes['males'] = int(matching_row['males'])
    feature_to_be_updated.attributes['females'] = int(matching_row['females'])
    #add this to the list of features to be updated
    features_for_update.append(feature_to_be_updated)
# inspect one of the features
features_for_update[-1]
# apply the edits to the feature layer
cities_flayer.edit_features(updates= features_for_update)
```
#### Verify the changes made so far
Let us run another query on the feature layer and visualize a few rows.
```
cities_fset3 = cities_flayer.query()
cities_fset3.sdf.head(5)
```
## Conclusion
In this sample, we observed an edit-intensive method to keep feature layers updated. We published data from the first spreadsheet as a feature layer. We then updated existing features from the second spreadsheet (using the geometry module to project the coordinates in the process) and added new features. The third spreadsheet presented additional attribute columns, which were added to the feature layer by editing its definition and then updating the features with this additional data.
This method is edit intensive, and you may choose it when the number of features to edit is small or when you need to selectively update certain features as updates come in.
An alternate method is to overwrite the feature layer altogether when you always have current information coming in. This method is explained in the sample [Overwriting feature layers](https://developers.arcgis.com/python/sample-notebooks/overwriting-feature-layers)
| github_jupyter |
```
import pandas as pd
import numpy as np
import os
from utils import *
import tensorflow as tf
from sklearn.preprocessing import minmax_scale
df = pd.read_csv('./data/train.csv')
df_tst = pd.read_csv('./data/test.csv')
subm = pd.read_csv('./data/sample_submission.csv')
df[Xs] = minmax_scale(df[Xs])  # Xs (the feature column names) is assumed to come from utils
Ys = ['Y{}'.format(str(i).zfill(2)) for i in range(16, 19)]
df['Y'] = df.loc[:, Ys].mean(axis=1)
df_trn = df[df['Y18'].isna()]
df_val = df[df['Y00'].isna()]
def df2seqs(df, time_range=14, vals=200):
    # slice the frame into overlapping windows of `time_range` rows
    seqs = []
    for i in range(len(df) - time_range + 1):
        seqs.append(df.iloc[i:i+time_range])
    data_len = len(seqs)
    trn_X, trn_Y = [], []
    val_X, val_Y = [], []
    # the first (data_len - vals) windows are for training,
    # the last `vals` windows are held out for validation
    for i in range(data_len - vals):
        seq = seqs[i]
        trn_X.append(seq[Xs].values)
        trn_Y.append(seq['Y'].values[-1])  # target is the window's last step
    for i in range(data_len - vals, data_len):
        seq = seqs[i]
        val_X.append(seq[Xs].values)
        val_Y.append(seq['Y'].values[-1])
    trn_X = np.array(trn_X)
    trn_Y = np.array(trn_Y)
    val_X = np.array(val_X)
    val_Y = np.array(val_Y)
    return trn_X, trn_Y, val_X, val_Y
def df2seqs_test(df_tst, df_val, time_range=14):
    # prepend the tail of the known data so the first test window is complete
    df = pd.concat([df_val, df_tst]).reset_index()
    seqs = []
    for i in range(len(df) - time_range + 1):
        seqs.append(df.iloc[i:i+time_range])
    data_len = len(seqs)
    test_X = []
    for i in range(data_len):
        seq = seqs[i]
        test_X.append(seq[Xs].values)
    return np.array(test_X)
time_range = 14
trn_X, trn_Y, val_X, val_Y = df2seqs(df_trn, time_range)
trn_X2, trn_Y2, val_X2, val_Y2 = df2seqs(df_val, time_range)
test_X = df2seqs_test(df_tst, df[-time_range+1:], time_range)
test_X.shape
```
# Model
```
tf.keras.backend.clear_session()
inp = tf.keras.layers.Input([time_range, 40])
x = tf.keras.layers.LSTM(120, return_sequences=True)(inp)
x = tf.keras.layers.LSTM(80)(x)
x = tf.keras.layers.Dense(64, activation='relu')(x)
x = tf.keras.layers.Dense(32, activation='relu')(x)
outp = tf.keras.layers.Dense(1, activation='linear')(x)
mdl = tf.keras.models.Model(inputs=inp, outputs=outp)
mdl.compile(optimizer='adam', loss='mse')
mdl.fit(x=trn_X, y=trn_Y, validation_data=(val_X, val_Y), batch_size=256, epochs=40, verbose=2)
for i in range(1, 3):
    mdl.layers[i].trainable = False  # freeze the two LSTM layers before fine-tuning
mdl.fit(x=trn_X2, y=trn_Y2, validation_data=(val_X2, val_Y2), batch_size=256, epochs=40, verbose=2)
```
# Test
```
pred_test = mdl.predict(test_X)
pred_test_adj = pred_test.reshape([-1])
writeSubm(pred_test_adj)
```
# End
| github_jupyter |
```
import tensorflow as tf
print(tf.version.VERSION)
tf.config.list_physical_devices()
import numpy as np
import matplotlib.pyplot as plt
def plot_multiple_images(images, n_cols=None):
    n_cols = n_cols or len(images)
    n_rows = (len(images) - 1) // n_cols + 1
    if images.shape[-1] == 1:
        images = np.squeeze(images, axis=-1)
    plt.figure(figsize=(n_cols, n_rows))
    for index, image in enumerate(images):
        plt.subplot(n_rows, n_cols, index + 1)
        plt.imshow(image, cmap="binary")
        plt.axis("off")
from tensorflow.keras.layers import Dense, Conv2DTranspose, Conv2D, BatchNormalization, Dropout
from tensorflow.keras.layers import Reshape, Activation, LeakyReLU, MaxPool2D, Flatten
encoding_size = 100
generator = tf.keras.models.Sequential([
Dense(7*7*128, input_shape=[encoding_size]),
Reshape([7,7,128]),
BatchNormalization(),
Conv2DTranspose(64, kernel_size=5, strides=2, padding="SAME", activation="selu"),
BatchNormalization(),
Conv2DTranspose(1, kernel_size=5, strides=2, padding="SAME", activation="tanh")
])
discriminator = tf.keras.models.Sequential([
Conv2D(64, kernel_size=5, strides=2, padding="SAME", activation=tf.keras.layers.LeakyReLU(0.2), input_shape=[28, 28, 1]),
Dropout(0.4),
Conv2D(128, kernel_size=5, strides=2, padding="SAME", activation=tf.keras.layers.LeakyReLU(0.2)),
Dropout(0.4),
Flatten(),
Dense(1, activation="sigmoid")
])
dcgan = tf.keras.models.Sequential([generator, discriminator])
discriminator.compile(loss="binary_crossentropy", optimizer="rmsprop")
discriminator.trainable = False
dcgan.compile(loss="binary_crossentropy", optimizer="rmsprop")
(X_train_full, y_train_full), (X_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
X_train_full = X_train_full.astype(np.float32) / 255.
X_train = X_train_full.reshape(-1, 28, 28, 1) * 2. - 1.
batch_size = 32
dataset = tf.data.Dataset.from_tensor_slices(X_train).shuffle(1000)
dataset = dataset.batch(batch_size, drop_remainder=True).prefetch(1)
def train_dcgan(dcgan, dataset, batch_size=32, num_encodings=100, epochs=10):
    for epoch in range(epochs):
        print(f'Epoch: {epoch+1}/{epochs}')
        for X_batch in dataset:
            # Train the discriminator on a half-fake, half-real batch
            latent_space = tf.random.normal([batch_size, num_encodings])
            generated_images = generator(latent_space)
            input_to_discriminator = tf.concat([generated_images, X_batch], axis=0)
            labels_to_discriminator = tf.constant([[0.]] * batch_size + [[1.]] * batch_size)
            discriminator.trainable = True
            discriminator.train_on_batch(input_to_discriminator, labels_to_discriminator)
            # Train the generator through the frozen discriminator
            latent_space = tf.random.normal([batch_size, num_encodings])
            labels_to_dcgan = tf.constant([[1.]] * batch_size)
            discriminator.trainable = False
            dcgan.train_on_batch(latent_space, labels_to_dcgan)
        plot_multiple_images(generated_images, 8)
        plt.show()
train_dcgan(dcgan, dataset, 32, 100, 5)
noise = tf.random.normal(shape=[batch_size, encoding_size])
generated_images = generator(noise)
plot_multiple_images(generated_images, 8)
plt.savefig("dcgan_images_plot.png")
```
| github_jupyter |
# ICD-11: Structure and Relation to ICD-9/10
_Guillermo Facundo Colunga_
## Definition
The ICD has been designed to address the needs of a broad range of use cases: mortality, morbidity, epidemiology, casemix, quality and safety, and primary care. Detailed information on the different use cases is available in other sections for mortality use and the different morbidity uses. A situation may arise that anticipates using the ICD-11 for a purpose for which it has not been designed. In this situation, the categorization used within the ICD-11 and its additional features may not be able to address such a new use case. In such cases, it is recommended to consult with the WHO to ensure that the information collected is appropriate to the intended new use.
## Structure
The codes of ICD-11 are alphanumeric and cover the range from 1A00.00 to ZZ9Z.ZZ. Codes starting with "X" indicate an extension code (see Section 2.9 "Extension Codes"). The inclusion of a forced number at the 3rd character position prevents spelling "undesirable words". The letters "O" and "I" are omitted to prevent confusion with the numbers "0" and "1". Technically, the coding scheme would be described as below:
ED1E.EE
- E corresponds to a "base 34 number" (0-9 and A-Z, excluding O and I);
- D corresponds to a "base 24 number" (A-Z, excluding O and I); and
- 1 corresponds to the "base 10 integers" (0-9)
- The first E starts with "1" and is allocated for the chapter (i.e. 1 is for the first chapter, 2: chapter 2, ... A: chapter 10, etc.)
- The terminal letter "Y" is reserved for the residual category "other specified" and the terminal letter "Z" is reserved for the residual category "unspecified". For the chapters that have more than 240 blocks, "F" ("other specified") and "G" ("unspecified") are also used to indicate residual categories (due to problems with the coding space).
- Chapters are indicated by the first character. For example, 1A00 is a code in chapter 1, and BA00 is a code in chapter 11.
- Blocks are not coded within this code structure; each has its own. However, hierarchical relations are retained in the 4-digit codes. There is unused coding space allocated in all blocks to allow for later updates and to keep the codes stable.
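The shape of these codes can be captured in a short pattern check. The snippet below is an illustrative sketch only — the function name and regular expression are ours, not an official WHO validator, and matching the pattern does not mean a code actually exists in the classification:

```python
import re

# stem code: base-34 character, base-24 letter, digit, base-34 character,
# optionally followed by a dot and one or two more base-34 characters.
# O and I are excluded throughout, as described above.
STEM_RE = re.compile(r'^[0-9A-HJ-NP-Z][A-HJ-NP-Z][0-9][0-9A-HJ-NP-Z](\.[0-9A-HJ-NP-Z]{1,2})?$')

def looks_like_icd11_stem(code):
    return bool(STEM_RE.match(code))

print(looks_like_icd11_stem("1A00.00"))  # True
print(looks_like_icd11_stem("BA00"))     # True, a chapter 11 code
print(looks_like_icd11_stem("OI00"))     # False, O and I are never used
```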
## Relation to ICD-10
https://icd.who.int/dev11/content/refguide.ICD11_en/html/index.html#3.1.0Part3Annexdifferences|part-3-what-is-new-in-icd-11|c3
## Referencias
- https://icd.who.int/dev11/l-m/en
- https://icd.who.int/dev11/content/refguide.ICD11_en/html/index.html
- https://www.icd10monitor.com/icd-11-is-coming-take-time-to-adjust
- https://www.beckershospitalreview.com/finance/icd-10-or-icd-11-the-dilemma-behind-both-coding-systems.html
- https://icd.who.int/icdapi/docs/APIdoc.html
| github_jupyter |
# Running Experiments
You can use the Azure Machine Learning SDK to run code experiments that log metrics and generate outputs. This is at the core of most machine learning operations in Azure Machine Learning.
## Connect to Your Workspace
The first thing you need to do is to connect to your workspace using the Azure ML SDK.
> **Note**: If the authenticated session with your Azure subscription has expired since you completed the previous exercise, you'll be prompted to reauthenticate.
```
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
```
## Run an Experiment
One of the most fundamental tasks that data scientists need to perform is to create and run experiments that process and analyze data. In this exercise, you'll learn how to use an Azure ML *experiment* to run Python code and record values extracted from data. In this case, you'll use a simple dataset that contains details of patients that have been tested for diabetes. You'll run an experiment to explore the data, extracting statistics, visualizations, and data samples. Most of the code you'll use is fairly generic Python, such as you might run in any data exploration process. However, with the addition of a few lines, the code uses an Azure ML *experiment* to log details of the run.
```
from azureml.core import Experiment
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
# Create an Azure ML experiment in your workspace
experiment = Experiment(workspace = ws, name = "diabetes-experiment")
# Start logging data from the experiment
run = experiment.start_logging()
print("Starting experiment:", experiment.name)
# load the data from a local file
data = pd.read_csv('data/diabetes.csv')
# Count the rows and log the result
row_count = (len(data))
run.log('observations', row_count)
print('Analyzing {} rows of data'.format(row_count))
# Plot and log the count of diabetic vs non-diabetic patients
diabetic_counts = data['Diabetic'].value_counts()
fig = plt.figure(figsize=(6,6))
ax = fig.gca()
diabetic_counts.plot.bar(ax = ax)
ax.set_title('Patients with Diabetes')
ax.set_xlabel('Diagnosis')
ax.set_ylabel('Patients')
plt.show()
run.log_image(name = 'label distribution', plot = fig)
# log distinct pregnancy counts
pregnancies = data.Pregnancies.unique()
run.log_list('pregnancy categories', pregnancies)
# Log summary statistics for numeric columns
med_columns = ['PlasmaGlucose', 'DiastolicBloodPressure', 'TricepsThickness', 'SerumInsulin', 'BMI']
summary_stats = data[med_columns].describe().to_dict()
for col in summary_stats:
keys = list(summary_stats[col].keys())
values = list(summary_stats[col].values())
for index in range(len(keys)):
run.log_row(col, stat = keys[index], value = values[index])
# Save a sample of the data and upload it to the experiment output
data.sample(100).to_csv('sample.csv', index=False, header=True)
run.upload_file(name = 'outputs/sample.csv', path_or_stream = './sample.csv')
# Complete the run
run.complete()
```
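The summary-statistics loop above leans on the shape of `describe().to_dict()`: a nested dict mapping each column name to a dict of its statistics. A standalone pandas sketch (using a made-up one-column frame, not the actual diabetes data) shows the structure the notebook iterates over:

```python
import pandas as pd

# Hypothetical mini-frame standing in for the diabetes data
df = pd.DataFrame({"PlasmaGlucose": [85, 90, 120, 140]})
stats = df.describe().to_dict()

# stats maps each column to a dict of summary statistics ("count", "mean",
# "std", ...), which is why the notebook walks the keys and values in
# parallel when calling run.log_row
for stat, value in stats["PlasmaGlucose"].items():
    print(stat, value)
```

Each `run.log_row(col, stat=..., value=...)` call in the experiment then logs one of these stat/value pairs as a row of a table metric.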
## View Experiment Results
After the experiment has finished, you can use the **run** object to get information about the run and its outputs:
```
import json
# Get run details
details = run.get_details()
print(details)
# Get logged metrics
metrics = run.get_metrics()
print(json.dumps(metrics, indent=2))
# Get output files
files = run.get_file_names()
print(json.dumps(files, indent=2))
```
In Jupyter Notebooks, you can use the **RunDetails** widget to get a better visualization of the run details.
```
from azureml.widgets import RunDetails
RunDetails(run).show()
```
Note that the **RunDetails** widget includes a link to view the run in Azure Machine Learning studio. Click this to open a new browser tab with the run details (you can also just open [Azure Machine Learning studio](https://ml.azure.com) and find the run on the **Experiments** page). When viewing the run in Azure Machine Learning studio, note the following:
- The **Properties** tab contains the general properties of the experiment run.
- The **Metrics** tab enables you to select logged metrics and view them as tables or charts.
- The **Images** tab enables you to select and view any images or plots that were logged in the experiment (in this case, the *Label Distribution* plot)
- The **Child Runs** tab lists any child runs (in this experiment there are none).
- The **Outputs** tab shows the output files generated by the experiment.
- The **Logs** tab shows any logs that were generated by the compute context for the experiment (in this case, the experiment was run inline so there are no logs).
- The **Snapshots** tab contains all files in the folder where the experiment code was run (in this case, everything in the same folder as this notebook).
- The **Raw JSON** tab shows a JSON representation of the experiment details.
- The **Explanations** tab is used to show model explanations generated by the experiment (in this case, there are none).
## Run an Experiment Script
In the previous example, you ran an experiment inline in this notebook. A more flexible solution is to create a separate script for the experiment, store it in a folder along with any other files it needs, and then use Azure ML to run the experiment based on the script in the folder.
First, let's create a folder for the experiment files, and copy the data into it:
```
import os, shutil
# Create a folder for the experiment files
folder_name = 'diabetes-experiment-files'
experiment_folder = './' + folder_name
os.makedirs(folder_name, exist_ok=True)
# Copy the data file into the experiment folder
shutil.copy('data/diabetes.csv', os.path.join(folder_name, "diabetes.csv"))
```
Now we'll create a Python script containing the code for our experiment, and save it in the experiment folder.
> **Note**: running the following cell just *creates* the script file - it doesn't run it!
```
%%writefile $folder_name/diabetes_experiment.py
from azureml.core import Run
import pandas as pd
import os
# Get the experiment run context
run = Run.get_context()
# load the diabetes dataset
data = pd.read_csv('diabetes.csv')
# Count the rows and log the result
row_count = (len(data))
run.log('observations', row_count)
print('Analyzing {} rows of data'.format(row_count))
# Count and log the label counts
diabetic_counts = data['Diabetic'].value_counts()
print(diabetic_counts)
for k, v in diabetic_counts.items():
run.log('Label:' + str(k), v)
# Save a sample of the data in the outputs folder (which gets uploaded automatically)
os.makedirs('outputs', exist_ok=True)
data.sample(100).to_csv("outputs/sample.csv", index=False, header=True)
# Complete the run
run.complete()
```
This code is a simplified version of the inline code used before. However, note the following:
- It uses the `Run.get_context()` method to retrieve the experiment run context when the script is run.
- It loads the diabetes data from the folder where the script is located.
- It creates a folder named **outputs** and writes the sample file to it - this folder is automatically uploaded to the experiment run
Now you're almost ready to run the experiment. There are just a few configuration issues you need to deal with:
1. Create a *Run Configuration* that defines the Python code execution environment for the script - in this case, it will automatically create a Conda environment with some default Python packages installed.
2. Create a *Script Configuration* that identifies the Python script file to be run in the experiment, and the environment in which to run it.
> **Note**: Don't worry too much about the environment configuration for now - we'll explore it in more depth later.
The following cell sets up these configuration objects, and then submits the experiment.
```
import os
import sys
from azureml.core import Experiment, RunConfiguration, ScriptRunConfig
from azureml.widgets import RunDetails
# create a new RunConfig object
experiment_run_config = RunConfiguration()
# Create a script config
src = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_experiment.py',
run_config=experiment_run_config)
# submit the experiment
experiment = Experiment(workspace = ws, name = 'diabetes-experiment')
run = experiment.submit(config=src)
RunDetails(run).show()
run.wait_for_completion()
```
As before, you can use the widget or the link to the experiment in [Azure Machine Learning studio](https://ml.azure.com) to view the outputs generated by the experiment, and you can also write code to retrieve the metrics and files it generated:
```
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
```
## View Experiment Run History
Now that you've run the same experiment multiple times, you can view the history in [Azure Machine Learning studio](https://ml.azure.com) and explore each logged run. Or you can retrieve an experiment by name from the workspace and iterate through its runs using the SDK:
```
from azureml.core import Experiment, Run
diabetes_experiment = ws.experiments['diabetes-experiment']
for logged_run in diabetes_experiment.get_runs():
print('Run ID:', logged_run.id)
metrics = logged_run.get_metrics()
for key in metrics.keys():
print('-', key, metrics.get(key))
```
> **More Information**: To find out more about running experiments, see [this topic](https://docs.microsoft.com/azure/machine-learning/how-to-manage-runs) in the Azure ML documentation. For details of how to log metrics in a run, see [this topic](https://docs.microsoft.com/azure/machine-learning/how-to-track-experiments).
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# CycleGAN
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/tutorials/generative/cyclegan"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/tutorials/generative/cyclegan.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png"> Run in Google Colab</a> </td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/tutorials/generative/cyclegan.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/tutorials/generative/cyclegan.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</table>
This notebook demonstrates unpaired image-to-image translation using conditional GANs, as described in [Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks](https://arxiv.org/abs/1703.10593), also known as CycleGAN. The paper proposes a method that can capture the characteristics of one image domain and figure out how these characteristics could be translated into another image domain, all in the absence of any paired training examples.
This notebook assumes you are familiar with Pix2Pix, which you can learn about in the [Pix2Pix tutorial](https://www.tensorflow.org/tutorials/generative/pix2pix). The code for CycleGAN is similar; the main difference is an additional loss function, and the use of unpaired training data.
CycleGAN uses a cycle consistency loss to enable training without the need for paired data. In other words, it can translate from one domain to another without a one-to-one mapping between the source and target domains.
This opens up the possibility to do many interesting tasks like photo enhancement, image colorization, style transfer, etc. All you need is the source and the target dataset (which is simply a directory of images).
*(Figure: example horse-to-zebra translation outputs.)*
## Set up the input pipeline
Install the [tensorflow_examples](https://github.com/tensorflow/examples) package that enables importing of the generator and the discriminator.
```
!pip install git+https://github.com/tensorflow/examples.git
import tensorflow as tf
import tensorflow_datasets as tfds
from tensorflow_examples.models.pix2pix import pix2pix
import os
import time
import matplotlib.pyplot as plt
from IPython.display import clear_output
AUTOTUNE = tf.data.AUTOTUNE
```
## Input pipeline
This tutorial trains a model to translate from images of horses to images of zebras. You can find this dataset and similar ones [here](https://www.tensorflow.org/datasets/datasets#cycle_gan).
As mentioned in the [paper](https://arxiv.org/abs/1703.10593), apply random jittering and mirroring to the training dataset. These are image augmentation techniques that help avoid overfitting.
This is similar to what was done in [pix2pix](https://www.tensorflow.org/tutorials/generative/pix2pix#load_the_dataset):
- In random jittering, the image is resized to `286 x 286` and then randomly cropped to `256 x 256`.
- In random mirroring, the image is randomly flipped horizontally, i.e. left to right.
```
dataset, metadata = tfds.load('cycle_gan/horse2zebra',
with_info=True, as_supervised=True)
train_horses, train_zebras = dataset['trainA'], dataset['trainB']
test_horses, test_zebras = dataset['testA'], dataset['testB']
BUFFER_SIZE = 1000
BATCH_SIZE = 1
IMG_WIDTH = 256
IMG_HEIGHT = 256
def random_crop(image):
cropped_image = tf.image.random_crop(
image, size=[IMG_HEIGHT, IMG_WIDTH, 3])
return cropped_image
# normalizing the images to [-1, 1]
def normalize(image):
image = tf.cast(image, tf.float32)
image = (image / 127.5) - 1
return image
def random_jitter(image):
# resizing to 286 x 286 x 3
image = tf.image.resize(image, [286, 286],
method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
# randomly cropping to 256 x 256 x 3
image = random_crop(image)
# random mirroring
image = tf.image.random_flip_left_right(image)
return image
def preprocess_image_train(image, label):
image = random_jitter(image)
image = normalize(image)
return image
def preprocess_image_test(image, label):
image = normalize(image)
return image
train_horses = train_horses.cache().map(
preprocess_image_train, num_parallel_calls=AUTOTUNE).shuffle(
BUFFER_SIZE).batch(BATCH_SIZE)
train_zebras = train_zebras.cache().map(
preprocess_image_train, num_parallel_calls=AUTOTUNE).shuffle(
BUFFER_SIZE).batch(BATCH_SIZE)
test_horses = test_horses.map(
preprocess_image_test, num_parallel_calls=AUTOTUNE).cache().shuffle(
BUFFER_SIZE).batch(BATCH_SIZE)
test_zebras = test_zebras.map(
preprocess_image_test, num_parallel_calls=AUTOTUNE).cache().shuffle(
BUFFER_SIZE).batch(BATCH_SIZE)
sample_horse = next(iter(train_horses))
sample_zebra = next(iter(train_zebras))
plt.subplot(121)
plt.title('Horse')
plt.imshow(sample_horse[0] * 0.5 + 0.5)
plt.subplot(122)
plt.title('Horse with random jitter')
plt.imshow(random_jitter(sample_horse[0]) * 0.5 + 0.5)
plt.subplot(121)
plt.title('Zebra')
plt.imshow(sample_zebra[0] * 0.5 + 0.5)
plt.subplot(122)
plt.title('Zebra with random jitter')
plt.imshow(random_jitter(sample_zebra[0]) * 0.5 + 0.5)
```
## Import and reuse the Pix2Pix models
Import the generator and the discriminator used in [Pix2Pix](https://github.com/tensorflow/examples/blob/master/tensorflow_examples/models/pix2pix/pix2pix.py) via the installed [tensorflow_examples](https://github.com/tensorflow/examples) package.
The model architecture used in this tutorial is very similar to the one used in [pix2pix](https://github.com/tensorflow/examples/blob/master/tensorflow_examples/models/pix2pix/pix2pix.py). Some of the differences are:
- CycleGAN uses [instance normalization](https://arxiv.org/abs/1607.08022) instead of [batch normalization](https://arxiv.org/abs/1502.03167).
- The [CycleGAN paper](https://arxiv.org/abs/1703.10593) uses a modified `resnet`-based generator. This tutorial uses a modified `unet` generator for simplicity.
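The normalization difference is worth a closer look: instance normalization computes mean and variance per sample and per channel, never pooling statistics across the batch. A NumPy sketch of the idea (an illustration only, not the `tensorflow_examples` implementation):

```python
import numpy as np

def instance_norm(x: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    # Normalize over each sample's spatial dimensions, per channel; unlike
    # batch normalization, no statistics are shared across the batch axis.
    mean = x.mean(axis=(1, 2), keepdims=True)
    var = x.var(axis=(1, 2), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

x = np.random.rand(2, 4, 4, 3)  # (batch, height, width, channels)
y = instance_norm(x)
```

After normalization, every sample's feature map has approximately zero mean and unit variance per channel, regardless of which other images share its batch.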
There are 2 generators (G and F) and 2 discriminators (X and Y) being trained here.
- Generator `G` learns to transform image `X` into image `Y`. $(G: X -> Y)$
- Generator `F` learns to transform image `Y` into image `X`. $(F: Y -> X)$
- Discriminator `D_X` learns to differentiate between image `X` and generated image `X` (`F(Y)`).
- Discriminator `D_Y` learns to differentiate between image `Y` and generated image `Y` (`G(X)`).
*(Figure: the CycleGAN model, with generators G and F and discriminators D_X and D_Y.)*
```
OUTPUT_CHANNELS = 3
generator_g = pix2pix.unet_generator(OUTPUT_CHANNELS, norm_type='instancenorm')
generator_f = pix2pix.unet_generator(OUTPUT_CHANNELS, norm_type='instancenorm')
discriminator_x = pix2pix.discriminator(norm_type='instancenorm', target=False)
discriminator_y = pix2pix.discriminator(norm_type='instancenorm', target=False)
to_zebra = generator_g(sample_horse)
to_horse = generator_f(sample_zebra)
plt.figure(figsize=(8, 8))
contrast = 8
imgs = [sample_horse, to_zebra, sample_zebra, to_horse]
title = ['Horse', 'To Zebra', 'Zebra', 'To Horse']
for i in range(len(imgs)):
plt.subplot(2, 2, i+1)
plt.title(title[i])
if i % 2 == 0:
plt.imshow(imgs[i][0] * 0.5 + 0.5)
else:
plt.imshow(imgs[i][0] * 0.5 * contrast + 0.5)
plt.show()
plt.figure(figsize=(8, 8))
plt.subplot(121)
plt.title('Is a real zebra?')
plt.imshow(discriminator_y(sample_zebra)[0, ..., -1], cmap='RdBu_r')
plt.subplot(122)
plt.title('Is a real horse?')
plt.imshow(discriminator_x(sample_horse)[0, ..., -1], cmap='RdBu_r')
plt.show()
```
## Loss functions
In CycleGAN, there is no paired data to train on, so there is no guarantee that the input `x` and the target `y` form a meaningful pair during training. Thus, in order to enforce that the network learns the correct mapping, the authors propose the cycle consistency loss.
The discriminator loss and the generator loss are similar to the ones used in [pix2pix](https://www.tensorflow.org/tutorials/generative/pix2pix#define_the_loss_functions_and_the_optimizer).
```
LAMBDA = 10
loss_obj = tf.keras.losses.BinaryCrossentropy(from_logits=True)
def discriminator_loss(real, generated):
real_loss = loss_obj(tf.ones_like(real), real)
generated_loss = loss_obj(tf.zeros_like(generated), generated)
total_disc_loss = real_loss + generated_loss
return total_disc_loss * 0.5
def generator_loss(generated):
return loss_obj(tf.ones_like(generated), generated)
```
Cycle consistency means the result should be close to the original input. For example, if one translates a sentence from English to French, and then translates it back from French to English, the resulting sentence should be the same as the original sentence.
In cycle consistency loss:
- Image $X$ is passed via generator $G$, which yields generated image $\hat{Y}$.
- Generated image $\hat{Y}$ is passed via generator $F$, which yields cycled image $\hat{X}$.
- Mean absolute error is calculated between $X$ and $\hat{X}$.
$$forward\ cycle\ consistency\ loss: X -> G(X) -> F(G(X)) \sim \hat{X}$$
$$backward\ cycle\ consistency\ loss: Y -> F(Y) -> G(F(Y)) \sim \hat{Y}$$
*(Figure: cycle consistency loss.)*
```
def calc_cycle_loss(real_image, cycled_image):
loss1 = tf.reduce_mean(tf.abs(real_image - cycled_image))
return LAMBDA * loss1
```
As shown above, generator $G$ is responsible for translating image $X$ to image $Y$. Identity loss says that if you fed image $Y$ to generator $G$, it should yield the real image $Y$ or something close to it.
$$Identity\ loss = |G(Y) - Y| + |F(X) - X|$$
```
def identity_loss(real_image, same_image):
loss = tf.reduce_mean(tf.abs(real_image - same_image))
return LAMBDA * 0.5 * loss
```
Initialize the optimizers for all the generators and the discriminators.
```
generator_g_optimizer = tf.keras.optimizers.Adam(2e-4, beta_1=0.5)
generator_f_optimizer = tf.keras.optimizers.Adam(2e-4, beta_1=0.5)
discriminator_x_optimizer = tf.keras.optimizers.Adam(2e-4, beta_1=0.5)
discriminator_y_optimizer = tf.keras.optimizers.Adam(2e-4, beta_1=0.5)
```
## Checkpoints
```
checkpoint_path = "./checkpoints/train"
ckpt = tf.train.Checkpoint(generator_g=generator_g,
generator_f=generator_f,
discriminator_x=discriminator_x,
discriminator_y=discriminator_y,
generator_g_optimizer=generator_g_optimizer,
generator_f_optimizer=generator_f_optimizer,
discriminator_x_optimizer=discriminator_x_optimizer,
discriminator_y_optimizer=discriminator_y_optimizer)
ckpt_manager = tf.train.CheckpointManager(ckpt, checkpoint_path, max_to_keep=5)
# if a checkpoint exists, restore the latest checkpoint.
if ckpt_manager.latest_checkpoint:
ckpt.restore(ckpt_manager.latest_checkpoint)
print ('Latest checkpoint restored!!')
```
## Training
Note: This example model is trained for fewer epochs (40) than the paper (200) to keep the training time reasonable for this tutorial. As a result, predictions may be less accurate.
```
EPOCHS = 40
def generate_images(model, test_input):
prediction = model(test_input)
plt.figure(figsize=(12, 12))
display_list = [test_input[0], prediction[0]]
title = ['Input Image', 'Predicted Image']
for i in range(2):
plt.subplot(1, 2, i+1)
plt.title(title[i])
# getting the pixel values between [0, 1] to plot it.
plt.imshow(display_list[i] * 0.5 + 0.5)
plt.axis('off')
plt.show()
```
Even though the training loop looks complicated, it consists of four basic steps:
- Get the predictions.
- Calculate the losses.
- Calculate the gradients using backpropagation.
- Apply the gradients to the optimizer.
```
@tf.function
def train_step(real_x, real_y):
# persistent is set to True because the tape is used more than
# once to calculate the gradients.
with tf.GradientTape(persistent=True) as tape:
# Generator G translates X -> Y
# Generator F translates Y -> X.
fake_y = generator_g(real_x, training=True)
cycled_x = generator_f(fake_y, training=True)
fake_x = generator_f(real_y, training=True)
cycled_y = generator_g(fake_x, training=True)
# same_x and same_y are used for identity loss.
same_x = generator_f(real_x, training=True)
same_y = generator_g(real_y, training=True)
disc_real_x = discriminator_x(real_x, training=True)
disc_real_y = discriminator_y(real_y, training=True)
disc_fake_x = discriminator_x(fake_x, training=True)
disc_fake_y = discriminator_y(fake_y, training=True)
# calculate the loss
gen_g_loss = generator_loss(disc_fake_y)
gen_f_loss = generator_loss(disc_fake_x)
total_cycle_loss = calc_cycle_loss(real_x, cycled_x) + calc_cycle_loss(real_y, cycled_y)
# Total generator loss = adversarial loss + cycle loss
total_gen_g_loss = gen_g_loss + total_cycle_loss + identity_loss(real_y, same_y)
total_gen_f_loss = gen_f_loss + total_cycle_loss + identity_loss(real_x, same_x)
disc_x_loss = discriminator_loss(disc_real_x, disc_fake_x)
disc_y_loss = discriminator_loss(disc_real_y, disc_fake_y)
# Calculate the gradients for generator and discriminator
generator_g_gradients = tape.gradient(total_gen_g_loss,
generator_g.trainable_variables)
generator_f_gradients = tape.gradient(total_gen_f_loss,
generator_f.trainable_variables)
discriminator_x_gradients = tape.gradient(disc_x_loss,
discriminator_x.trainable_variables)
discriminator_y_gradients = tape.gradient(disc_y_loss,
discriminator_y.trainable_variables)
# Apply the gradients to the optimizer
generator_g_optimizer.apply_gradients(zip(generator_g_gradients,
generator_g.trainable_variables))
generator_f_optimizer.apply_gradients(zip(generator_f_gradients,
generator_f.trainable_variables))
discriminator_x_optimizer.apply_gradients(zip(discriminator_x_gradients,
discriminator_x.trainable_variables))
discriminator_y_optimizer.apply_gradients(zip(discriminator_y_gradients,
discriminator_y.trainable_variables))
for epoch in range(EPOCHS):
start = time.time()
n = 0
for image_x, image_y in tf.data.Dataset.zip((train_horses, train_zebras)):
train_step(image_x, image_y)
if n % 10 == 0:
print ('.', end='')
n+=1
clear_output(wait=True)
# Using a consistent image (sample_horse) so that the progress of the model
# is clearly visible.
generate_images(generator_g, sample_horse)
if (epoch + 1) % 5 == 0:
ckpt_save_path = ckpt_manager.save()
print ('Saving checkpoint for epoch {} at {}'.format(epoch+1,
ckpt_save_path))
print ('Time taken for epoch {} is {} sec\n'.format(epoch + 1,
time.time()-start))
```
## Generate using the test dataset
```
# Run the trained model on the test dataset
for inp in test_horses.take(5):
generate_images(generator_g, inp)
```
## Next steps
This tutorial has shown how to implement CycleGAN starting from the generator and discriminator implemented in the [Pix2Pix](https://www.tensorflow.org/tutorials/generative/pix2pix) tutorial. As a next step, you could try using a different dataset from [TensorFlow Datasets](https://www.tensorflow.org/datasets/datasets#cycle_gan).
You could also train for a larger number of epochs to see how the results improve, or you could implement the modified ResNet generator used in the [paper](https://arxiv.org/abs/1703.10593) instead of the U-Net generator used here.
# Getting started with Xanthus
## What is Xanthus?
Xanthus is a Neural Recommender package written in Python. It started life as a personal project to take an academic ML paper and translate it into a 'production-ready' software package and to replicate the results of the paper along the way. It uses Tensorflow 2.0 under the hood, and makes extensive use of the Keras API. If you're interested, the original authors of [the paper that inspired this project](https://dl.acm.org/doi/10.1145/3038912.3052569) provided code for their experiments, and this proved valuable when starting this project.
However, while it is great that they provided their code, the repository isn't maintained, the code uses an old version of Keras (and Theano!), it can be a little hard for beginners to get to grips with, and it's very much tailored to produce the results in their paper. All fair enough, they wrote a great paper and published their workings. Admirable stuff. Xanthus aims to make it super easy to get started with the work of building a neural recommendation system, and to scale the techniques in the original paper (hopefully) gracefully with you as the complexity of your applications increases.
This notebook will walk you through a basic example of using Xanthus to predict previously unseen movies to a set of users using the classic 'Movielens' recommender dataset. The [original paper](https://dl.acm.org/doi/10.1145/3038912.3052569) tests the architectures in this paper as part of an _implicit_ recommendation problem. You'll find out more about what this means later in the notebook. In the meantime, it is worth remembering that the examples in this notebook make the same assumption.
Ready for some code?
## Loading a sample dataset
Ah, the beginning of a brand new ML problem. You'll need to download the dataset first. You can use the Xanthus `download.movielens` utility to download, unzip and save your Movielens data.
```
from xanthus import datasets
datasets.movielens.download(version="ml-latest-small", output_dir="data")
```
Time to crack out Pandas and load some CSVs. You know the drill.
```
import pandas as pd
ratings = pd.read_csv("data/ml-latest-small/ratings.csv")
movies = pd.read_csv("data/ml-latest-small/movies.csv")
```
Let's take a look at the data we've loaded. Here's the movies dataset:
```
movies.head()
```
As you can see, you've got the unique identifier for your movies, the title of the movie in human-readable format, and then the column `genres` that has a string containing a set of associated genres for the given movie. Straightforward enough. And hey, that `genres` column might come in handy at some point...
On to the `ratings` frame. Here's what is in there:
```
ratings.head()
```
First up, you've got a `userId` corresponding to the unique user identifier, and you've got the `movieId` corresponding to the unique movie identifier (this maps onto the `movieId` column in the `movies` frame, above). You've also got a `rating` field. This is associated with the user-assigned rating for that movie. Finally, you have the `timestamp` -- the date at which the user rated the movie. For future reference, you can convert from this timestamp to a 'human readable' date with:
```
from datetime import datetime
datetime.fromtimestamp(ratings.iloc[0]["timestamp"]).strftime("%Y-%m-%d %H:%M:%S")
```
That's your freebie for the day. On to getting the data ready for training your recommender model.
## Data preparation
Xanthus provides a few utilities for getting your recommender up and running. One of the more ubiquitous utilities is the `Dataset` class, and its related `DatasetEncoder` class. At the time of writing, the `Dataset` class assumes your 'ratings' data is in the format `user`, `item`, `rating`. You can rename the sample data to be in this format with:
```
ratings = ratings.rename(columns={"userId": "user", "movieId": "item"})
```
Next, you might find it helpful to re-map the movie IDs (now under the `item` column) to be the `titles` in the `movies` frame. This'll make it easier for you to see what the recommender is recommending! Don't do this for big datasets though -- it can get very expensive very quickly! Anyway, remap the `item` column with:
```
title_mapping = dict(zip(movies["movieId"], movies["title"]))
ratings.loc[:, "item"] = ratings["item"].apply(lambda _: title_mapping[_])
ratings.head(2)
```
A little more meaningful, eh? For this example, you are going to be looking at _implicit_ recommendations, so you should also remove clearly negative rating pairs from the dataset. You can do this with:
```
ratings = ratings[ratings["rating"] > 3.0]
```
### Leave one out protocol
As with any ML model, it is important to keep a held-out sample of your dataset to evaluate your model's performance. This is naturally important for recommenders too. However, recommenders differ slightly in that we are often interested in the recommender's ability to _rank_ candidate items in order to surface the most relevant content to a user. Ultimately, the essence of recommendation problems is search, and getting relevant items in the top `n` search results is generally the name of the game -- absolute accuracy can often be a secondary consideration.
One common way of evaluating the performance of a recommender model is therefore to create a test set by sampling `n` items from each user's `m` interactions (e.g. movie ratings), keeping `m-n` interactions in the training set and putting the 'left out' `n` samples in the test set. The thought process then goes that when evaluating a model on this test set, you should see the model rank the held-out samples more highly in the results (i.e. it has started to learn a user's preferences).
The 'leave one out' protocol is a specific case of this approach where `n=1`. Concretely, when creating a test set using 'leave one out', you withold a single interaction from each user and put these in your test set. You then place all other interactions in your training set. To get you going, Xanthus provides a utility function called -- funnily enough -- `leave_one_out` under the `evaluate` subpackage. You can import it and use it as follows:
```
from xanthus.evaluate import leave_one_out
train_df, test_df = leave_one_out(ratings, shuffle=True, deduplicate=True)
```
You'll notice that there are a couple of things going on here. Firstly, the function takes the input interactions frame (in this case `ratings`) and splits it into two datasets as expected. Fair enough. We then have two keyword arguments, `shuffle` and `deduplicate`. The argument `shuffle` will -- you guessed it -- shuffle your dataset before sampling interactions for your test set. This is set to `True` by default, so it is shown here for the purpose of being explicit. The second argument is `deduplicate`. This does what you might expect too -- it strips any cases where a user interacts with a specific item more than once (i.e. a given user-item pair appears more than once).
As discussed above, the `leave_one_out` function is really a specific version of a more general 'leave `n` out' approach to splitting a dataset. There are also other ways you might want to split datasets for recommendation problems. For many of those circumstances, Xanthus provides a more generic `split` function. This was inspired by Azure's [_Recommender Split_](https://docs.microsoft.com/en-us/azure/machine-learning/studio-module-reference/split-data-using-recommender-split#:~:text=The%20Recommender%20Split%20option%20is,user%2Ditem%2Drating%20triples) method in Azure ML Studio. There are a few important tweaks in the Xanthus implementation, so make sure to check out that function's documentation if you're interested.
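To make the protocol concrete, here is a minimal pandas sketch of the leave-one-out idea itself -- a toy stand-in, not Xanthus's implementation, which also handles shuffling and deduplication:

```python
import pandas as pd

# Toy interactions frame mirroring the tutorial's user/item layout
ratings = pd.DataFrame({
    "user": [1, 1, 1, 2, 2],
    "item": ["a", "b", "c", "a", "d"],
})

# Hold back one interaction per user for the test set; keep the rest
test = ratings.groupby("user").tail(1)
train = ratings.drop(test.index)

print(len(train), len(test))  # → 3 2
```

Every user ends up with exactly one row in the test set, so ranking metrics can ask how highly the model places each user's held-out item.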
Anyway, time to build some datasets.
## Introducing the `Dataset`
Like other ML problems, recommendation problems typically need to create encoded representations of a domain in order to be passed into a model for training and evaluation. However, there's a few aspects of recommendation problems that can make this problem particularly fiddly. To help you on your way, Xanthus provides a few utilities, including the `Dataset` class and the `DatasetEncoder` class. These structures are designed to take care of the fiddliness for you. They'll build your input vectors (including with metadata, if you provide it -- more on that later) and sparse matrices as required. You shouldn't need to touch a thing.
Here's how it works. First, your 'train' and 'test' datasets are going to need to share the same encodings, right? Otherwise they'll disagree on whether `Batman Forever (1995)` shares the same encoding across the datasets, and that would be a terrible shame. To create your `DatasetEncoder` you can do this:
```
from xanthus.datasets import DatasetEncoder
encoder = DatasetEncoder()
encoder.fit(ratings["user"], ratings["item"])
```
This encoder will store all of the unique encodings of every user and item in the `ratings` set. Notice that you're passing in the `ratings` set here, as opposed to either train or test. This makes doubly sure you're creating encodings for every user-item pair in the dataset. To check this has worked, you can call the `transform` method on the encoder like this:
```
encoder.transform(items=["Batman Forever (1995)"])
```
The naming conventions on the `DatasetEncoder` are deliberately reminiscent of the methods on Scikit-Learn encoders, just to help you along with using them. Now you've got your encoder, you can create your `Dataset` objects:
```
from xanthus.datasets import Dataset, utils
train_ds = Dataset.from_df(train_df, normalize=utils.as_implicit, encoder=encoder)
test_ds = Dataset.from_df(test_df, normalize=utils.as_implicit, encoder=encoder)
```
Let's unpack what's going on here. The `Dataset` class provides the `from_df` class method for quickly constructing a `Dataset` from a 'raw' Pandas `DataFrame`. You want to create a train and test dataset, hence creating two separate `Dataset` objects using this method. Next, you can see that the `encoder` keyword argument is passed in to the `from_df` method. This ensures that each `Dataset` maintains a reference to the _same_ `DatasetEncoder` to ensure consistency when used. The final argument here is `normalize`. This expects a callable object (e.g. a function) that scales the `rating` column (if provided). In the case of this example, the normalization is simply to treat the ratings as an implicit recommendation problem (i.e. all zero or one). The `utils.as_implicit` function simply sets all ratings to one. Simple enough, eh?
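For intuition, an 'as implicit' normalization is conceptually as simple as this (a sketch, not Xanthus's exact code):

```python
import numpy as np

def as_implicit_sketch(ratings):
    # treat every observed interaction as a positive (implicit) signal
    return np.ones_like(np.asarray(ratings, dtype="float64"))

implicit = as_implicit_sketch([5.0, 3.0, 1.0])
```

Whatever the original ratings were, they all come out as ones -- the model only learns from the fact that an interaction happened at all.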
And that is it for preparing your datasets for modelling, at least for now. Time for some Neural Networks.
## Getting neural
With your datasets ready, you can build and fit your model. In the example, the `GeneralizedMatrixFactorization` (or `GMFModel`) is used. If you're not sure what a GMF model is, be sure to check out the original paper, and the GMF class itself in the Xanthus docs. Anyway, here's how you set it up:
```
from xanthus.models import GeneralizedMatrixFactorization as GMFModel
model = GMFModel(train_ds.user_dim, train_ds.item_dim, factors=64)
model.compile(optimizer="adam", loss="binary_crossentropy")
```
So what's going on here? Well, `GMFModel` is a _subclass_ of the Keras `Model` class. Consequently, it shares the same interface. You initialize your model with specific information (in this case, the size of the user and item input vectors and the size of the latent factors you're looking to compute), compile the model with a given loss and optimizer, and then train it. Straightforward enough, eh? In principle, you can use `GMFModel` however you'd use a 'normal' Keras model.
You're now ready to fit your model. You can do this with:
```
# prepare training data
users_x, items_x, y = train_ds.to_components(
negative_samples=4
)
model.fit([users_x, items_x], y, epochs=5)
```
Remember that (as with any ML model) you'll want to tweak your hyperparameters (e.g. `factors`, regularization, etc.) to optimize your model's performance on your given dataset. The example model here is just a quick un-tuned model to show you the ropes.
## Evaluating the model
Now to diagnose how well your model has done. The evaluation protocol here is set up in accordance with the methodology outlined in the original paper. To get yourself ready to generate some scores, you'll need to run:
```
from xanthus.evaluate import create_rankings
users, items = create_rankings(
test_ds, train_ds, output_dim=1, n_samples=100, unravel=True
)
```
So, what's going on here? First, you're importing the `create_rankings` function. This implements a sampling approach used by _He et al._ in their work. The idea is that you evaluate your model on the user-item pairs in your test set, and for each 'true' user-item pair, you sample `n_samples` negative instances for that user (i.e. items they haven't interacted with). In the case of the `create_rankings` function, this produces an array of shape `n_users, n_samples + 1`. Concretely, for each user, you'll get an array where the first element is a positive sample (something they _did_ interact with), followed by `n_samples` negative samples (things they _did not_ interact with).
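To make the sampling concrete, here's a minimal sketch of the idea (hypothetical helper names; the real `create_rankings` also handles encodings, `output_dim` and the `unravel` option):

```python
import numpy as np

def sample_rankings_sketch(positives, all_items, n_samples, seed=42):
    # positives: list of (user, positive_item) pairs from the test set
    rng = np.random.default_rng(seed)
    rows = []
    for user, pos in positives:
        seen = {item for u, item in positives if u == user}
        candidates = [item for item in all_items if item not in seen]
        negatives = rng.choice(candidates, size=n_samples, replace=False)
        # positive sample first, then n_samples negative samples
        rows.append([pos] + list(negatives))
    return np.array(rows)  # shape: (n_users, n_samples + 1)

ranked = sample_rankings_sketch([(0, 10), (1, 11)], all_items=range(10, 20), n_samples=3)
```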
The rationale here is that by having the model rank these `n_samples + 1` items for each user, you'll be able to determine whether your model is learning an effective ranking function -- the positive sample _should_ appear higher in the recommendations than the negative results if the model is doing its job. Here's how you can rank these sampled items:
```
from xanthus.models import utils
test_users, test_items, _ = test_ds.to_components(shuffle=False)
scores = model.predict([users, items], verbose=1, batch_size=256)
recommended = utils.reshape_recommended(users.reshape(-1, 1), items.reshape(-1, 1), scores, 10, mode="array")
```
And finally for the evaluation, you can use the `score` function and the provided `metrics` in the Xanthus `evaluate` subpackage. Here's how you can use them:
```
from xanthus.evaluate import score, metrics
print("t-nDCG", score(metrics.truncated_ndcg, test_items, recommended).mean())
print("HR@k", score(metrics.precision_at_k, test_items, recommended).mean())
```
Looking okay. Good work. Going into detail on how the metrics presented here work is beyond the scope of this notebook. If you're interested in what is going on here, make sure to check out the docs (docstrings) in the Xanthus package itself.
## The fun bit
After all of that, it is time to see what you've won. Exciting times. You can generate recommendations for your users _from unseen items_ by using the following:
```
scores = model.predict([users, items], verbose=1, batch_size=256)
recommended = utils.reshape_recommended(users.reshape(-1, 1), items.reshape(-1, 1), scores, 10, mode="array")
```
Recall that the first 'column' in the `items` array corresponds to the positive sample for a user. You can skip that here. So now you have a great big array of integers. Not as exciting as you'd hoped? Fair enough. Xanthus provides a utility to convert the outputs of your model predictions into a more readable Pandas `DataFrame`. Specifically, your `DatasetEncoder` has the handy `to_df` method for just this job. Give it a set of _encoded_ users and a list of _encoded_ items for each user, and it'll build you a nice `DataFrame`. Here's how:
```
recommended_df = encoder.to_df(test_users.flatten(), recommended)
recommended_df.head(25)
```
## That's a wrap
And that's it for this example. Be sure to raise any issues you have [on GitHub](https://github.com/markdouthwaite/xanthus), or get in touch [on Twitter](https://twitter.com/MarklDouthwaite).
# Correlation between Detected Breeding Sites and Dengue Cases
```
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
from plotly import tools
from plotly.graph_objs import *
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
init_notebook_mode(connected=True)
import plotly.graph_objs as go
import matplotlib as mpl
import matplotlib.cm as cm
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import os
import graphviz
from sklearn import *
from copy import deepcopy
from scipy.stats import pearsonr, spearmanr  # scipy.stats.stats is a deprecated import path
from collections import Counter
import visualizer
import data_loader
df_loader = data_loader.df_loader()
loo = model_selection.LeaveOneOut()
month = ['Jan','Feb','Mar','Apr','May','Jun','Jul','Aug','Sep','Oct','Nov','Dec']
categories = np.array(['bin','bowl','bucket','cup','jar','pottedplant','tire','vase']).reshape(-1,1)
df_survey = df_loader.load_survey()
df_filtered = df_loader.load_filterd('ci')
df_area = df_loader.load_area()
df_detect = df_loader.load_detect()
df_population = df_loader.load_population()
df_dengue_cases = df_loader.load_cases()
df_dengue_cases_2016 = pd.read_csv('../data/dengue-cases/dengue_cases_2016.csv')
df_dengue_cases_2017 = pd.read_csv('../data/dengue-cases/dengue_cases_2017.csv')
```
# Correlation
Perform a correlation analysis between the number of detected containers and the Breteau index
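As a quick reminder of what these two statistics measure, here's a toy example on illustrative data (not the survey data):

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

detections = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
cases = np.array([1.0, 3.0, 4.0, 9.0, 12.0])

r, _ = pearsonr(detections, cases)     # strength of *linear* association
rho, _ = spearmanr(detections, cases)  # strength of *monotonic* (rank) association
```

Because `cases` rises strictly with `detections`, Spearman's rho is exactly 1 here, while Pearson's r is slightly below 1 since the relationship isn't perfectly linear.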
```
x_train, y_train = [], []
xs, ys = [], []
column = 'total'
mean_det, std_det = df_detect[column].mean(), df_detect[column].std()
mean_cases, std_cases = df_dengue_cases_2016['cases'].mean(), df_dengue_cases_2016['cases'].std()
subdist_list = df_dengue_cases_2016['subdist'].unique()
for subdist in subdist_list:
detect = round(df_detect.loc[df_detect['subdist'] == subdist][column].mean(),2)
area = round(df_area.loc[df_area['subdist'] == subdist]['area'].mean(),2)
population = round(df_population.loc[df_population['subdist'] == subdist]['population'].mean(),2)
n_villages = round(df_population.loc[df_population['subdist'] == subdist]['n_villages'].mean(),2)
survey = round(df_filtered.loc[(df_filtered['subdist'] == subdist)
# & (df_filtered.index.month.isin([6,7,8,9,10,11]))
]['ci'].mean(), 2)
cases = round(df_dengue_cases_2016.loc[(df_dengue_cases_2016['subdist'] == subdist)]['cases'].mean(), 2)
# if np.isnan(survey): continue
if np.isnan(detect) or np.isnan(cases) or np.isnan(population): continue
if detect > mean_det+1*std_det or detect < mean_det-1*std_det: continue
if cases > mean_cases+1*std_cases or cases < mean_cases-1*std_cases: continue
formula = (population)
ys.append(formula)
xs.append(cases)
x = df_detect.loc[df_detect['subdist'] == subdist].copy()
# x = x[['bin','bowl','bucket','cup','jar','pottedplant','tire','vase']].copy()
# x = x[['bin','bowl','bucket','cup','jar','pottedplant','tire']].copy()
x = x[['bin','bowl','bucket','jar','pottedplant','tire']].copy()
# x = x[['bucket','jar','pottedplant']].copy()
month = df_detect.loc[df_detect['subdist'] == subdist].index.month[0]
features = list(np.squeeze(x.values)) + [month, area, population]
# features = list(np.squeeze(x.values)) + [area, population]
# features = np.array(population)
x_train.append(np.array(features))
y_train.append(cases)
X = np.array(x_train)
y = np.array(y_train)
print('X_train.shape:', X.shape)
len(xs)
print('\nR-squared:', metrics.r2_score(xs, ys))
print('Pearson:', pearsonr(xs, ys))
print(spearmanr(xs, ys),'\n')
trace = go.Scatter(
x = xs,
y = ys,
mode = 'markers', name='Subdistrict',
marker = dict(size = 15, opacity = 0.4)
)
xs = np.array(xs)
ys = np.array(ys)
regr = linear_model.LinearRegression()
regr.fit(xs.reshape(-1, 1), ys.reshape(-1, 1))
ys_pred = regr.predict(xs.reshape(-1, 1))
trace_2 = go.Scatter(
x = xs,
y = np.squeeze(ys_pred),
mode = 'lines', name='Regression', line = dict(width = 4)
)
layout = dict(
title = '121 Data points, Population<br>' + \
'Pearson: 0.510, Spearman: 0.467',
width=650,
xaxis = dict(title = 'Dengue cases'),
yaxis = dict(title = 'Population'),
font=dict(size=16)
)
iplot(go.Figure(data=[trace, trace_2], layout=layout))
regr.fit(ys.reshape(-1, 1), xs.reshape(-1, 1))
pred = np.squeeze(regr.predict(ys.reshape(-1, 1)))
print('\nR-squared:', metrics.r2_score(xs, pred))
print('Pearson:', pearsonr(xs, pred))
print(spearmanr(xs, pred),'\n')
parameter_grid_gb = {
'max_depth': [3, 4, 5, 6, 7, 8],
'max_features': [2, 3, 4, 5, 6, 7],
# 'subsample': [0.6, 0.8, 1],
'learning_rate':[0.01, 0.05, 0.1]
}
parameter_grid_tree = {
'max_depth': [3, 4, 5, 6, 7, 8],
'max_features': [2, 3, 4, 5],
}
parameter_grid_svr = {
'kernel': ['linear','poly','rbf'],
'degree': [1,2,3,4,5,6]
}
# note: `svr` and `dt` below are defined in a later cell; run that cell before this grid
parameter_grid_ada = {
    'base_estimator': [svr, dt],
'n_estimators': [5, 10, 15, 20, 25],
'loss': ['linear', 'square', 'exponential'],
'learning_rate':[0.1]
}
# grid_search = model_selection.GridSearchCV(estimator=svm.SVR(),
# param_grid=parameter_grid_svr,
# cv=10,
# n_jobs=8)
# grid_search = model_selection.GridSearchCV(estimator=ensemble.RandomForestRegressor(),
# param_grid=parameter_grid_tree,
# cv=20,
# n_jobs=1)
# grid_search = model_selection.GridSearchCV(estimator=tree.DecisionTreeRegressor(),
# param_grid=parameter_grid_tree,
# cv=3,
# n_jobs=1)
grid_search = model_selection.GridSearchCV(estimator=ensemble.GradientBoostingRegressor(),
param_grid=parameter_grid_gb,
cv=10,
n_jobs=1)
_=grid_search.fit(X, y)
grid_search.best_score_, grid_search.best_params_
# _=grid_search.fit(X, y)
# grid_search.best_score_, grid_search.best_params_
# _=grid_search.fit(X, y)
# grid_search.best_score_, grid_search.best_params_
X = X.reshape(-1,1)
X[0], X.shape
svr = svm.SVR(kernel='poly', degree=2)
rf = ensemble.RandomForestRegressor(max_depth=3, max_features=3)
dt = tree.DecisionTreeRegressor(max_depth=3, max_features=5)
gb = ensemble.GradientBoostingRegressor(learning_rate=0.01, max_depth=3, max_features=3, subsample=0.8)
linear = linear_model.LinearRegression()
bayes = linear_model.BayesianRidge()
knn = neighbors.KNeighborsRegressor()
ada = ensemble.AdaBoostRegressor()
ada_svr = ensemble.AdaBoostRegressor(svr, learning_rate=0.03, loss='linear')
ada_dt = ensemble.AdaBoostRegressor(dt, learning_rate=0.03, loss='linear')
regrs = [
[linear, 'Linear Regression'],
# [svm.NuSVR(kernel='poly', degree=3, tol=12.3, gamma=0.28), 'NuSVR'],
# [svm.SVR(kernel='poly', degree=2, tol=0.1), 'SVR'],
# [bayes, 'Bayesian Ridge'],
# [rf, 'Random Forest'],
# [dt, 'Decision Tree'],
# [gb, 'Gradient Boosting'],
# [ada_svr, 'Ada SVR'],
]
df_selection = []
for k in range(1):
df_compare = []
for regr, name in regrs:
y_pred, y_true = [], []
for train_index, test_index in loo.split(X):
X_train, X_test = X[train_index], X[test_index]
Y_train, Y_test = y[train_index], y[test_index]
_=regr.fit(X_train, Y_train)
pred = regr.predict(X_test)
y_true.append(np.squeeze(Y_test))
y_pred.append(np.squeeze(pred))
y_true = np.array(y_true)
y_pred = np.array(y_pred)
df_compare.append([
name+'-'+str(k+1),
metrics.r2_score(y_true, y_pred),
pearsonr(y_true, y_pred)[0],
spearmanr(y_true, y_pred)[0]
])
df_compare = pd.DataFrame.from_records(df_compare)
df_compare.columns = ['Model','R-squared','Pearson','Spearman']
df_compare = df_compare.set_index('Model')
df_compare = df_compare.round(4)
df_selection.append(df_compare)
df_selection = pd.concat(df_selection, axis=0)
tmp = pd.DataFrame([[df_selection['R-squared'].mean(),
df_selection['Pearson'].mean(),
df_selection['Spearman'].mean()]])
tmp.columns = ['R-squared','Pearson','Spearman']
tmp.index = ['Average']
df_selection = pd.concat([df_selection, tmp])  # DataFrame.append was removed in pandas 2.0
df_selection
visualizer.plot_correlation(
regrs[0][0],
'121 data points: Linear Regression<br>',
X, y, loo
)
categories = np.array(['bin','bowl','bucket','jar','pottedplant','tire',]).reshape(-1,1)
# features_name = np.concatenate((categories,[['month']]), axis=0)
# features_name = np.concatenate((categories,[['month'], ['population']]), axis=0)
features_name = np.concatenate((categories,[['month'], ['area'],['population']]), axis=0)
# features_name = np.array([['bucket'], ['jar'], ['pottedplant']])
# features_name = np.array([['bucket'], ['jar'], ['pottedplant'], ['month']])
# features_name = np.array([['bucket'], ['jar'], ['pottedplant'], ['population']])
# features_name = np.array([['bucket'], ['jar'], ['pottedplant'], ['month'], ['population']])
# features_name = np.array([['bucket'], ['jar'], ['pottedplant'], ['month'], ['area'], ['population']])
# features_name = deepcopy(categories)
features_name
features_name.shape
visualizer.plot_importance(regrs[0][0], regrs[0][1], X, y, loo, features_name)
```
# Weight Initialization
In this lesson, you'll learn how to find good initial weights for a neural network. Weight initialization happens once, when a model is created and before it trains. Having good initial weights can place the neural network close to the optimal solution. This allows the neural network to come to the best solution quicker.
<img src="images/neuron_weights.png" width=40%/>
## Initial Weights and Observing Training Loss
To see how different weights perform, we'll test on the same dataset and neural network. That way, we know that any changes in model behavior are due to the weights and not any changing data or model structure.
> We'll instantiate at least two of the same models, with _different_ initial weights and see how the training loss decreases over time, such as in the example below.
<img src="images/loss_comparison_ex.png" width=60%/>
Sometimes the differences in training loss, over time, will be large and other times, certain weights offer only small improvements.
### Dataset and Model
We'll train an MLP to classify images from the [Fashion-MNIST database](https://github.com/zalandoresearch/fashion-mnist) to demonstrate the effect of different initial weights. As a reminder, the FashionMNIST dataset contains images of clothing types; `classes = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']`. The images are normalized so that their pixel values are in the range [0.0, 1.0). Run the cell below to download and load the dataset.
---
### Import Libraries and Load [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)
```
import torch
import numpy as np
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 100
# percentage of training set to use as validation
valid_size = 0.2
# convert data to torch.FloatTensor
transform = transforms.ToTensor()
# choose the training and test datasets
train_data = datasets.FashionMNIST(root='data', train=True,
download=True, transform=transform)
test_data = datasets.FashionMNIST(root='data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
```
### Visualize Some Training Data
```
import matplotlib.pyplot as plt
%matplotlib inline
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = next(dataiter)  # dataiter.next() was removed in newer PyTorch versions
images = images.numpy()
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
    ax = fig.add_subplot(2, 20//2, idx+1, xticks=[], yticks=[])  # subplot counts must be ints
ax.imshow(np.squeeze(images[idx]), cmap='gray')
ax.set_title(classes[labels[idx]])
```
## Define the Model Architecture
We've defined the MLP that we'll use for classifying the dataset.
### Neural Network
<img style="float: left" src="images/neural_net.png" width=50%/>
* A 3 layer MLP with hidden dimensions of 256 and 128.
* This MLP accepts a flattened image (784-value long vector) as input and produces 10 class scores as output.
---
We'll test the effect of different initial weights on this 3 layer neural network with ReLU activations and an Adam optimizer.
The lessons you learn apply to other neural networks, including different activations and optimizers.
---
## Initialize Weights
Let's start looking at some initial weights.
### All Zeros or Ones
If you follow the principle of [Occam's razor](https://en.wikipedia.org/wiki/Occam's_razor), you might think setting all the weights to 0 or 1 would be the best solution. This is not the case.
With every weight the same, all the neurons at each layer are producing the same output. This makes it hard to decide which weights to adjust.
Let's compare the loss with all ones and all zero weights by defining two models with those constant weights.
Below, we are using PyTorch's [nn.init](https://pytorch.org/docs/stable/nn.html#torch-nn-init) to initialize each Linear layer with a constant weight. The init library provides a number of weight initialization functions that give you the ability to initialize the weights of each layer according to layer type.
In the case below, we look at every layer/module in our model. If it is a Linear layer (as all three layers are for this MLP), then we initialize those layer weights to be a `constant_weight` with bias=0 using the following code:
>```
if isinstance(m, nn.Linear):
nn.init.constant_(m.weight, constant_weight)
nn.init.constant_(m.bias, 0)
```
The `constant_weight` is a value that you can pass in when you instantiate the model.
```
import torch.nn as nn
import torch.nn.functional as F
# define the NN architecture
class Net(nn.Module):
def __init__(self, hidden_1=256, hidden_2=128, constant_weight=None):
super(Net, self).__init__()
# linear layer (784 -> hidden_1)
self.fc1 = nn.Linear(28 * 28, hidden_1)
# linear layer (hidden_1 -> hidden_2)
self.fc2 = nn.Linear(hidden_1, hidden_2)
# linear layer (hidden_2 -> 10)
self.fc3 = nn.Linear(hidden_2, 10)
# dropout layer (p=0.2)
self.dropout = nn.Dropout(0.2)
# initialize the weights to a specified, constant value
if(constant_weight is not None):
for m in self.modules():
if isinstance(m, nn.Linear):
nn.init.constant_(m.weight, constant_weight)
nn.init.constant_(m.bias, 0)
def forward(self, x):
# flatten image input
x = x.view(-1, 28 * 28)
# add hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add hidden layer, with relu activation function
x = F.relu(self.fc2(x))
# add dropout layer
x = self.dropout(x)
# add output layer
x = self.fc3(x)
return x
import numpy as np
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.optim
def _get_loss_acc(model, train_loader, valid_loader):
"""
Get losses and validation accuracy of example neural network
"""
n_epochs = 2
learning_rate = 0.001
# Training loss
criterion = nn.CrossEntropyLoss()
# Optimizer
    optimizer = torch.optim.Adam(model.parameters(), learning_rate)
# Measurements used for graphing loss
loss_batch = []
for epoch in range(1, n_epochs+1):
# initialize var to monitor training loss
train_loss = 0.0
###################
# train the model #
###################
for data, target in train_loader:
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# record average batch loss
loss_batch.append(loss.item())
# after training for 2 epochs, check validation accuracy
correct = 0
total = 0
for data, target in valid_loader:
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# get the predicted class from the maximum class score
_, predicted = torch.max(output.data, 1)
        # count up total number of correct labels
        # for which the predicted and true labels are equal
        total += target.size(0)
        correct += (predicted == target).sum().item()

    # return the recorded batch losses and the validation accuracy
    return loss_batch, correct / total
```
### Compare Model Behavior
Below, we are using `compare_init_weights` to compare the training and validation loss for the two models we defined above, `model_0` and `model_1`. This function takes in a list of models (each with different initial weights), the name of the plot to produce, and the training and validation dataset loaders. For each given model, it will plot the training loss for the first 100 batches and print out the validation accuracy after 2 training epochs. *Note: if you've used a small batch_size, you may want to increase the number of epochs here to better compare how models behave after seeing a few hundred images.*
We plot the loss over the first 100 batches to better judge which model weights performed better at the start of training.
Run the cell below to see the difference between weights of all zeros against all ones.
```
# initialize two NN's with 0 and 1 constant weights
model_0 = Net(constant_weight=0)
model_1 = Net(constant_weight=1)
# put them in list form to compare
model_list = [(model_0, 'All Zeros'),
(model_1, 'All Ones')]
# plot the loss over the first 100 batches
compare_init_weights(model_list,
'All Zeros vs All Ones',
train_loader,
valid_loader)
```
As you can see, the accuracy is close to guessing for both zeros and ones: around 10%.
The neural network is having a hard time determining which weights need to be changed, since the neurons have the same output for each layer. To avoid neurons with the same output, let's use unique weights. We can also randomly select these weights to avoid being stuck in a local minimum for each run.
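You can see this symmetry problem directly in a tiny example: with constant weights, every neuron in a layer produces the same output and receives an identical gradient, so the neurons can never differentiate from one another (a minimal sketch using a single linear layer):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
layer = nn.Linear(4, 3)
nn.init.constant_(layer.weight, 1.0)
nn.init.constant_(layer.bias, 0.0)

x = torch.randn(1, 4)
out = layer(x)        # all three neurons compute the identical value
out.sum().backward()  # ...and all three rows of the weight gradient match
```

No matter how many optimization steps you take, the neurons stay in lockstep.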
A good solution for getting these random weights is to sample from a uniform distribution.
### Uniform Distribution
A [uniform distribution](https://en.wikipedia.org/wiki/Uniform_distribution) has an equal probability of picking any number from a set of numbers. We'll be picking from a continuous distribution, so the chance of picking the same number twice is low. We'll use NumPy's `np.random.uniform` function to pick random numbers from a uniform distribution.
>#### [`np.random.uniform(low=0.0, high=1.0, size=None)`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.uniform.html)
>Outputs random values from a uniform distribution.
>The generated values follow a uniform distribution in the range [low, high). The lower bound `low` is included in the range, while the upper bound `high` is excluded.
>- **low:** The lower bound on the range of random values to generate. Defaults to 0.
- **high:** The upper bound on the range of random values to generate. Defaults to 1.
- **size:** An int or tuple of ints that specify the shape of the output array.
We can visualize the uniform distribution by using a histogram. Let's map the values from `np.random.uniform(-3, 3, [1000])` to a histogram using the `hist_dist` function. This will be `1000` random float values from `-3` to `3`, excluding the value `3`.
```
hist_dist('Random Uniform (low=-3, high=3)', np.random.uniform(-3, 3, [1000]))
```
The histogram used 500 buckets for the 1000 values. Since the chance for any single bucket is the same, there should be around 2 values for each bucket. That's exactly what we see with the histogram. Some buckets have more and some have less, but they trend around 2.
Now that you understand the uniform function, let's use PyTorch's `nn.init` to apply it to a model's initial weights.
### Uniform Initialization, Baseline
Let's see how well the neural network trains using a uniform weight initialization, where `low=0.0` and `high=1.0`. Below, I'll show you another way (besides in the Net class code) to initialize the weights of a network. To define weights outside of the model definition, you can:
>1. Define a function that assigns weights by the type of network layer, *then*
2. Apply those weights to an initialized model using `model.apply(fn)`, which applies a function to each model layer.
This time, we'll use `weight.data.uniform_` to initialize the weights of our model, directly.
```
# takes in a module and applies the specified weight initialization
def weights_init_uniform(m):
classname = m.__class__.__name__
# for every Linear layer in a model..
if classname.find('Linear') != -1:
# apply a uniform distribution to the weights and a bias=0
m.weight.data.uniform_(0.0, 1.0)
m.bias.data.fill_(0)
# create a new model with these weights
model_uniform = Net()
model_uniform.apply(weights_init_uniform)
# evaluate behavior
compare_init_weights([(model_uniform, 'Uniform Weights')],
'Uniform Baseline',
train_loader,
valid_loader)
```
---
The loss graph is showing the neural network is learning, which it didn't with all zeros or all ones. We're headed in the right direction!
## General rule for setting weights
The general rule for setting the weights in a neural network is to set them to be close to zero without being too small.
>Good practice is to start your weights in the range of $[-y, y]$ where $y=1/\sqrt{n}$
($n$ is the number of inputs to a given neuron).
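For the three Linear layers in the `Net` defined above, this rule gives concrete (and quite small) ranges:

```python
import numpy as np

# number of inputs (n) to each Linear layer in Net: 784, 256, 128
for n in (784, 256, 128):
    y = 1.0 / np.sqrt(n)
    print(f"n={n}: weights drawn from [-{y:.4f}, {y:.4f}]")
```

Notice that the more inputs a neuron has, the smaller its initial weights should be.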
Let's see if this holds true; let's create a baseline to compare with and center our uniform range over zero by shifting it over by 0.5. This will give us the range [-0.5, 0.5).
```
# takes in a module and applies the specified weight initialization
def weights_init_uniform_center(m):
classname = m.__class__.__name__
# for every Linear layer in a model..
if classname.find('Linear') != -1:
# apply a centered, uniform distribution to the weights
m.weight.data.uniform_(-0.5, 0.5)
m.bias.data.fill_(0)
# create a new model with these weights
model_centered = Net()
model_centered.apply(weights_init_uniform_center)
```
Then let's create a distribution and model that uses the **general rule** for weight initialization; using the range $[-y, y]$, where $y=1/\sqrt{n}$ .
And finally, we'll compare the two models.
```
# takes in a module and applies the specified weight initialization
def weights_init_uniform_rule(m):
classname = m.__class__.__name__
# for every Linear layer in a model..
if classname.find('Linear') != -1:
# get the number of the inputs
n = m.in_features
y = 1.0/np.sqrt(n)
m.weight.data.uniform_(-y, y)
m.bias.data.fill_(0)
# create a new model with these weights
model_rule = Net()
model_rule.apply(weights_init_uniform_rule)
# compare these two models
model_list = [(model_centered, 'Centered Weights [-0.5, 0.5)'),
(model_rule, 'General Rule [-y, y)')]
# evaluate behavior
compare_init_weights(model_list,
'[-0.5, 0.5) vs [-y, y)',
train_loader,
valid_loader)
```
This behavior is really promising! Not only is the loss decreasing, but it seems to do so very quickly for our uniform weights that follow the general rule; after only two epochs we get a fairly high validation accuracy and this should give you some intuition for why starting out with the right initial weights can really help your training process!
---
Since the uniform distribution has the same chance to pick *any value* in a range, what if we used a distribution that had a higher chance of picking numbers closer to 0? Let's look at the normal distribution.
### Normal Distribution
Unlike the uniform distribution, the [normal distribution](https://en.wikipedia.org/wiki/Normal_distribution) has a higher likelihood of picking numbers close to its mean. To visualize it, let's plot values from NumPy's `np.random.normal` function to a histogram.
>[np.random.normal(loc=0.0, scale=1.0, size=None)](https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.normal.html)
>Outputs random values from a normal distribution.
>- **loc:** The mean of the normal distribution.
- **scale:** The standard deviation of the normal distribution.
- **size:** The shape of the output array.
```
hist_dist('Random Normal (mean=0.0, stddev=1.0)', np.random.normal(size=[1000]))
```
Let's compare the normal distribution against the previous, rule-based, uniform distribution.
Below, we define a normal distribution that has a mean of 0 and a standard deviation of $y=1/\sqrt{n}$.
```
# takes in a module and applies the specified weight initialization
def weights_init_normal(m):
classname = m.__class__.__name__
# for every Linear layer in a model..
if classname.find('Linear') != -1:
# get the number of the inputs
n = m.in_features
y = (1.0/np.sqrt(n))
m.weight.data.normal_(0, y)
m.bias.data.fill_(0)
# create a new model with the rule-based, uniform weights
model_uniform_rule = Net()
model_uniform_rule.apply(weights_init_uniform_rule)
# create a new model with the rule-based, NORMAL weights
model_normal_rule = Net()
model_normal_rule.apply(weights_init_normal)
# compare the two models
model_list = [(model_uniform_rule, 'Uniform Rule [-y, y)'),
(model_normal_rule, 'Normal Distribution')]
# evaluate behavior
compare_init_weights(model_list,
'Uniform vs Normal',
train_loader,
valid_loader)
```
The normal distribution gives us pretty similar behavior compared to the uniform distribution, in this case. This is likely because our network is so small; a larger neural network will pick more weight values from each of these distributions, magnifying the effect of both initialization styles. In general, a normal distribution will result in better performance for a model.
---
### Automatic Initialization
Let's quickly take a look at what happens *without any explicit weight initialization*.
```
model_no_initialization = Net()
model_list = [(model_no_initialization, 'No Weights')]
# evaluate behavior
compare_init_weights(model_list,
'No Weight Initialization',
train_loader,
valid_loader)
```
### Default initialization
Something really interesting is happening here. You may notice that the red line "no weights" looks a lot like our uniformly initialized weights. It turns out that PyTorch has default weight initialization behavior for every kind of layer. You can see that **linear layers are initialized with a uniform distribution** (uniform weights _and_ biases) in [the module source code](https://pytorch.org/docs/stable/_modules/torch/nn/modules/linear.html).
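For `nn.Linear`, that default works out (in recent PyTorch versions) to drawing both weights and biases from $U(-b, b)$ with $b = 1/\sqrt{fan\_in}$. Here is a minimal NumPy sketch of that scheme, for illustration only; `default_linear_init` is a hypothetical helper, not part of PyTorch:

```python
import numpy as np

def default_linear_init(fan_in, fan_out, seed=0):
    """Sketch of PyTorch's default nn.Linear scheme: weights and biases
    are drawn from U(-bound, bound) with bound = 1/sqrt(fan_in)."""
    rng = np.random.default_rng(seed)
    bound = 1.0 / np.sqrt(fan_in)
    w = rng.uniform(-bound, bound, size=(fan_out, fan_in))
    b = rng.uniform(-bound, bound, size=fan_out)
    return w, b

w, b = default_linear_init(fan_in=784, fan_out=256)
bound = 1.0 / np.sqrt(784)
print(w.min() >= -bound and w.max() <= bound)  # True: every weight is inside the bound
```

Note how the bound shrinks as `fan_in` grows, which is the same $1/\sqrt{n}$ scaling as our rule-based initializers above.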
---
However, you can also see that the weights taken from a normal distribution are comparable, perhaps even a little better! So, it may still be useful, especially if you are trying to train the best models, to initialize the weights of a model according to rules that *you* define.
And this is not the end! You're encouraged to look at the different types of [common initialization distributions](https://pytorch.org/docs/stable/nn.html#torch-nn-init).
```
import cv2
import numpy as np
import sys
import math
import matplotlib.pyplot as plt
sys.path.append("../")
%matplotlib inline
from vmarker import *
cap = cv2.VideoCapture("output.avi")
ok,frame = cap.read()
sframe = frame.copy()
plt.imshow(frame,cmap='gray')
K = np.loadtxt("../calib_usb/K.csv",delimiter=",")
dist_coef = np.loadtxt('../calib_usb/d.csv',delimiter=",")
vm = vmarker(markernum=5,K=K,dist=dist_coef,markerpos_file="roomA.csv")
tv = []
while tv == []:
cv2.destroyAllWindows()
ok,frame = cap.read()
sframe = frame.copy()
cv2.startWindowThread()
tv=vm.getcamerapose(frame)
cv2.waitKey(1)
img = sframe
# Convert to HSV color space
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
plt.figure(1)
# Red HSV range 1
hsv_min = np.array([0,127,0])
hsv_max = np.array([10,255,255])
mask1 = cv2.inRange(hsv, hsv_min, hsv_max)
plt.subplot(121)
plt.imshow(mask1,cmap='gray')
# Red HSV range 2
hsv_min = np.array([150,127,0])
hsv_max = np.array([180,255,255])
mask2 = cv2.inRange(hsv, hsv_min, hsv_max)
plt.subplot(122)
plt.imshow(mask2,cmap='gray')
# RGB search
bgr_min = np.array([0,0,120])
bgr_max = np.array([50,50,255])
mask3 = cv2.inRange(img,bgr_min, bgr_max)
plt.imshow(mask3,cmap='gray')
plt.imshow(cv2.bitwise_and(img,img,mask=mask2),cmap='gray')
plt.imshow(sframe)
Mmt = cv2.moments(mask2)
# extract center of gravity
cx = Mmt['m10']/Mmt['m00']
cy = Mmt['m01']/Mmt['m00']
print([cx,cy])
#vm.getobjpose_1([cx,cy],0.13)
vm.R,_ = cv2.Rodrigues(vm.rvecs)
pt = cv2.undistortPoints(np.array([cx,cy]).reshape(-1,1,2),vm.K,vm.dist,P=vm.K)
Rt = np.concatenate([vm.R,vm.tvecs],axis=1)
P = np.dot(K,Rt)
z = -0.13
A3 = - np.float32([pt[0,0,0],pt[0,0,1],1]).reshape(3,1) #A1,A2 = self.Rt[:,0],self.Rt[:,1]
A4 = P[:,2:3]*z+P[:,3:4]
A3
A = np.concatenate([P[:,0:2],A3,A4],axis=1)
U, S, V = np.linalg.svd(A) # use svd to get null space
vec = V[3]
X = vec[0]/ vec[3]
Y = vec[1]/ vec[3]
print([X,Y])
# actual position [1, 1.5] plus roughly +[0.088, -0.088]? [0.001, 0.1]
def showProj(pts):
orig = P.dot(np.float32(pts).reshape(4,1))
print(orig/orig[2])
showProj([X,Y,-0.13,1])
showProj([2,0,0,1])
# debug
#cv2.projectPoints(np.float32([2,0,0]).reshape(-1,1,3),vm.rvecs,vm.tvecs,vm.K,vm.dist)
#cv2.Rodrigues(vm.rvecs)
```
## Method2
```
plane2dmap = vm.objp[:,0:2].reshape(-1,1,2)
Homo,inliner = cv2.findHomography(vm.ccorners,plane2dmap,cv2.RANSAC,3.0)
posxy = cv2.perspectiveTransform(np.float32([cx,cy]).reshape(-1,1,2),Homo)
print(posxy[0,0])
%load_ext autoreload
%autoreload
```
## Timing measurement
```
def extractRed(img):
# Convert to HSV color space
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# Red HSV range 2
hsv_min = np.array([150,127,0])
hsv_max = np.array([179,255,255])
mask2 = cv2.inRange(hsv, hsv_min, hsv_max)
Mmt = cv2.moments(mask2)
if Mmt["m00"] != 0:
cx = Mmt['m10']/Mmt['m00']
cy = Mmt['m01']/Mmt['m00']
else:
cx,cy = 0,0
#print([cx,cy])
return mask2,[cx,cy]
%time 500
mask,cpts = extractRed(sframe)
#cv2.imshow("mask",mask)
tv = vm.getcamerapose(sframe.copy())
if vm.PNPsolved:
objxy = vm.getobjpose_1(cpts,-0.13)
print([objxy[0] -1.088,objxy[1] -1.412])
```
```
import sympy
from phasor.utilities.ipynb.displays import *
from phasor.utilities.ipynb.ipy_sympy import *
import scipy.linalg
import numpy.testing as np_test
import declarative
from test_SVD import SVD_gen_check, gen_rand_unitary
from phasor.system import DAG_algorithm
from phasor.system import SRE_matrix_algorithms
from phasor.system import scisparse_algorithm
import timeit
asavefig.org_subfolder = 'plots'
from functools import reduce
def SVD_compare_error(
N = 10,
length = 10,
solver = DAG_algorithm,
N_in = None,
N_out = None,
):
U = gen_rand_unitary(N = N, length = length)
V = gen_rand_unitary(N = N, length = length)
seq = dict()
req = dict()
edge_map = dict()
S_diags = []
for idx in range(N):
s_diag = 10**(-5 + 10 * np.random.random(length))
edge_map[idx, idx] = s_diag
S_diags.append(s_diag)
seq[idx] = set([idx])
req[idx] = set([idx])
S = seq, req, edge_map
condition = reduce(np.maximum, S_diags) / reduce(np.minimum, S_diags)
M = SRE_matrix_algorithms.matrix_mult_sre(
SRE_matrix_algorithms.matrix_mult_sre(U, S), V
)
SRE_matrix_algorithms.check_sre(M)
sparsity = SRE_matrix_algorithms.SRE_count_sparsity(M)
print("SPARSITY FRAC: ", sparsity)
if N_in is None:
inputs_set = set(range(N))
else:
inputs_set = set(range(N-N_in, N))
if N_out is None:
outputs_set = set(range(N))
else:
mid = N_out // 2
outputs_set = set(range(0, mid)) | set(range(N-(N_out - mid), N))
Mseq, Mreq, Medge_map = SRE_matrix_algorithms.copy_sre(M)
print(solver)
time_start = timeit.default_timer()
sbunch = solver.inverse_solve_inplace(
seq = Mseq,
req = Mreq,
edge_map = Medge_map,
inputs_set = inputs_set,
outputs_set = outputs_set,
verbose = True,
negative = False,
)
time_end = timeit.default_timer()
b = declarative.Bunch(
time = time_end - time_start,
length = length,
)
b.update(sparsity)
return b
mats = []
for N in [10, 30, 100, 300]:
for length in [10, 100, 1000, 10000]:
for inst in range(3):
r = SVD_compare_error(
N = N,
length = length,
N_in = 1,
)
mats.append(r)
axB = mplfigB(Nrows=1)
color_by_len = {
10 : 'blue',
100 : 'green',
1000 : 'purple',
10000 : 'red',
}
for r in mats:
axB.ax0.scatter(
r.Nnodes,
r.time,
color = color_by_len[r.length],
)
axB.ax0.set_xscale('log')
axB.ax0.set_yscale('log')
axB.ax0.set_xlim(9, 400)
axB.save('timing_snode_direct')
axB = mplfigB(Nrows=1)
for r in mats:
axB.ax0.scatter(
r.Nnodes,
r.time / r.Nnodes,
color = color_by_len[r.length],
)
axB.ax0.set_xscale('log')
axB.ax0.set_yscale('log')
axB.ax0.set_xlim(9, 400)
axB.save('timing_snode_relsq2')
axB = mplfigB(Nrows=1)
for r in mats:
axB.ax0.scatter(
r.Nnodes,
r.time / r.length,
color = color_by_len[r.length],
)
axB.ax0.set_xscale('log')
axB.ax0.set_yscale('log')
axB.ax0.set_xlim(9, 400)
axB.save('timing_snode_rellength')
```
## Background
The RAPIDS cuDF library is a GPU DataFrame library based on Apache Arrow that accelerates loading, filtering, and manipulating data when preparing model training data. It provides a pandas-like API that will be familiar to data scientists. The pandas library provides many specialized methods that cover most data-science use cases, but it cannot cover all of them, and it is sometimes useful to be able to accelerate custom data transformations as well. Luckily, cuDF has two methods that serve exactly this purpose: the `apply_rows` and `apply_chunks` functions. They use the Numba library to accelerate the data transformation in parallel on the GPU.
In this tutorial, I am going to show a few examples of how to use them.
## Difference between `apply_rows` and `apply_chunks`
`apply_rows` is a special case of `apply_chunks` that processes each row of the DataFrame independently in parallel. Under the hood, the `apply_rows` method optimally divides the long columns into chunks and assigns the chunks to different GPU blocks for computation. Here is an example that uses `apply_rows` to double the input array and also prints the GPU block/grid allocation information.
```
import cudf
import numpy as np
from numba import cuda
df = cudf.dataframe.DataFrame()
df['in1'] = np.arange(1000, dtype=np.float64)
def kernel(in1, out):
for i, x in enumerate(in1):
print('tid:', cuda.threadIdx.x, 'bid:', cuda.blockIdx.x,
'array size:', in1.size, 'block threads:', cuda.blockDim.x)
out[i] = x * 2.0
outdf = df.apply_rows(kernel,
incols=['in1'],
outcols=dict(out=np.float64),
kwargs=dict())
print(outdf['in1'].sum()*2.0)
print(outdf['out'].sum())
```
From the output, we can see that the compiler automatically unrolls the for-loop in the kernel function. It uses 14 CUDA blocks, each with 64 threads. Most of the time each thread in a block handles one element of the input array, and sometimes it handles two. The order in which row elements are processed is not defined.
We implement the same array-doubling logic with the `apply_chunks` method.
```
import cudf
import numpy as np
from numba import cuda
df = cudf.dataframe.DataFrame()
df['in1'] = np.arange(100, dtype=np.float64)
def kernel(in1, out):
print('tid:', cuda.threadIdx.x, 'bid:', cuda.blockIdx.x,
'array size:', in1.size, 'block threads:', cuda.blockDim.x)
for i in range(cuda.threadIdx.x, in1.size, cuda.blockDim.x):
out[i] = in1[i] * 2.0
outdf = df.apply_chunks(kernel,
incols=['in1'],
outcols=dict(out=np.float64),
kwargs=dict(),
chunks=16,
tpb=8)
print(outdf['in1'].sum()*2.0)
print(outdf['out'].sum())
```
From the output, we can see that `apply_chunks` gives more control than `apply_rows`. You can specify how to divide the long array into chunks, map each chunk to a different GPU block (the `chunks` argument), and set the number of threads per block (the `tpb` argument). The for-loop in the kernel is no longer unrolled automatically as with `apply_rows`; it stays a per-thread loop. Each kernel invocation corresponds to one thread in a block and has full access to all the elements in that chunk of the array. In this example, the chunk size is 16, so the 100 elements are cut uniformly into 7 chunks (the last one shorter) and assigned to 7 blocks. Each block has 8 threads to process its length-16 subarray (length 4 for the last block).
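The chunk and thread arithmetic described above can be checked on the host with a small pure-Python sketch; `chunk_thread_indices` is a hypothetical helper that mirrors the kernel's strided indexing, not a cuDF API:

```python
def chunk_thread_indices(n, chunk_size, tpb):
    """For each (block, thread) pair, list the global element indices the
    kernel's thread-strided loop would touch."""
    mapping = {}
    for bid, start in enumerate(range(0, n, chunk_size)):
        size = min(chunk_size, n - start)  # last chunk may be shorter
        for tid in range(tpb):
            # mirrors: for i in range(cuda.threadIdx.x, in1.size, cuda.blockDim.x)
            mapping[(bid, tid)] = [start + i for i in range(tid, size, tpb)]
    return mapping

m = chunk_thread_indices(n=100, chunk_size=16, tpb=8)
print(len({bid for bid, _ in m}))  # 7 blocks
print(m[(0, 0)])                   # [0, 8]: thread 0 of block 0 handles two elements
print(m[(6, 0)])                   # [96]: the last chunk has only 4 elements
```

Every element appears exactly once in the mapping, which is why the `out` array is filled completely despite the uneven last chunk.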
## Performance benchmark comparison
Here we compare cuDF's `apply_rows` against the pandas `apply` method with the following Python code:
```
import cudf
import pandas as pd
import numpy as np
import time
data_length = 1e6
df = cudf.dataframe.DataFrame()
df['in1'] = np.arange(data_length, dtype=np.float64)
def kernel(in1, out):
for i, x in enumerate(in1):
out[i] = x * 2.0
start = time.time()
df = df.apply_rows(kernel,
incols=['in1'],
outcols=dict(out=np.float64),
kwargs=dict())
end = time.time()
print('cuDF time', end-start)
assert(np.isclose(df['in1'].sum()*2.0, df['out'].sum()))
df = pd.DataFrame()
df['in1'] = np.arange(data_length, dtype=np.float64)
start = time.time()
df['out'] = df.in1.apply(lambda x: x*2)
end = time.time()
print('pandas time', end-start)
assert(np.isclose(df['in1'].sum()*2.0, df['out'].sum()))
```
We vary `data_length` from 1e3 to 1e8; here is the computation time spent in cuDF and pandas.
| data length | 1e3 | 1e4 | 1e5 | 1e6 | 1e7 | 1e8 |
| ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- |
| cuDF Time(s)| 0.1750 | 0.1840 | 0.1750 | 0.1720 | 0.1770 | 0.2490 |
| Pandas Time(s)| 0.0006 | 0.0022 | 0.0180 | 0.2500 | 2.1300 | 21.400 |
| Speed Up | 0.003x | 0.011x | 0.103x | 1.453x | **12.034x** | **85.944x** |
As we can see, cuDF has an overhead for launching GPU kernels (mostly kernel compilation time), and its computation time remains relatively constant in this test thanks to the massive number of cores on the P100 card. The CPU computation, by contrast, scales linearly with the length of the array because of the serial nature of the `apply` function. cuDF gains the advantage once the array size grows beyond about one million.
## Realistic application
In the financial services industry, data scientists often need to compute features from time-series data, and the most common way to process a time series is to compute a moving average. In this example, I am going to show how to use `apply_chunks` to speed up the moving-average computation for a long array.
```
import cudf
import numpy as np
import pandas as pd
from numba import cuda
import time
data_length = int(1e9)
average_window = 4
df = cudf.dataframe.DataFrame()
threads_per_block = 128
trunk_size = 10240
df['in1'] = np.arange(data_length, dtype=np.float64)
def kernel1(in1, out, average_length):
for i in range(cuda.threadIdx.x,
average_length-1, cuda.blockDim.x):
out[i] = np.inf
for i in range(cuda.threadIdx.x + average_length - 1,
in1.size, cuda.blockDim.x):
summ = 0.0
for j in range(i - average_length + 1,
i + 1):
summ += in1[j]
out[i] = summ / np.float64(average_length)
def kernel2(in1, out, average_length):
if in1.size - average_length + cuda.threadIdx.x - average_length + 1 < 0 :
return
for i in range(in1.size - average_length + cuda.threadIdx.x,
in1.size, cuda.blockDim.x):
summ = 0.0
for j in range(i - average_length + 1,
i + 1):
#print(i,j, in1.size)
summ += in1[j]
out[i] = summ / np.float64(average_length)
start = time.time()
df = df.apply_chunks(kernel1,
incols=['in1'],
outcols=dict(out=np.float64),
kwargs=dict(average_length=average_window),
chunks=list(range(0, data_length,
trunk_size))+ [data_length],
tpb=threads_per_block)
df = df.apply_chunks(kernel2,
incols=['in1', 'out'],
outcols=dict(),
kwargs=dict(average_length=average_window),
chunks=[0]+list(range(average_window, data_length,
trunk_size))+ [data_length],
tpb=threads_per_block)
end = time.time()
print('cuDF time', end-start)
pdf = pd.DataFrame()
pdf['in1'] = np.arange(data_length, dtype=np.float64)
start = time.time()
pdf['out'] = pdf.rolling(average_window).mean()
end = time.time()
print('pandas time', end-start)
assert(np.isclose(pdf.out.as_matrix()[average_window:].mean(),
df.out.to_array()[average_window:].mean()))
```
In the above code, we divide the array into subarrays of size `trunk_size` and send those subarrays to GPU blocks to compute the moving average. However, the elements at the beginning of each subarray have no history to average over. To fix this, we shift the chunk division by an offset of `average_window` and then call `kernel2` to compute the moving average of only those missing records. Note that in `kernel2` we didn't define `outcols`, since that would create a new GPU memory buffer and overwrite the old `out` values; instead, we reuse the `out` array as input. For an array of length 1e9, cuDF takes 1.387s for the computation while pandas takes 7.58s.
This code is not optimized for performance. There are a few things we could do to make it faster. First, we could use shared memory to load the array and reduce I/O during the summation. Second, the threads do a lot of redundant summation; we could maintain a cumulative (prefix) sum array to reduce that redundancy. Both are outside the scope of this tutorial.
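The prefix-sum idea can be sketched on the host with plain NumPy (a hypothetical illustration of the technique, not cuDF kernel code; `moving_average_cumsum` is not a library function): a cumulative-sum array turns each window sum into a single subtraction.

```python
import numpy as np

def moving_average_cumsum(x, w):
    """Moving average via prefix sums: each window sum is one subtraction,
    instead of w redundant additions per output element."""
    s = np.concatenate(([0.0], np.cumsum(x)))  # s[i] = x[0] + ... + x[i-1]
    out = np.full(len(x), np.nan)
    out[w - 1:] = (s[w:] - s[:-w]) / w         # window sum over [i-w+1, i]
    return out

x = np.arange(10, dtype=np.float64)
print(moving_average_cumsum(x, 4))  # [nan nan nan 1.5 2.5 3.5 4.5 5.5 6.5 7.5]
```

On the GPU the prefix sums would themselves be computed with a parallel scan, but the per-element work drops from `average_window` additions to one subtraction either way.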
# KIC 9651065
```
%run setup.py
lc_sap = lk.search_lightcurvefile('KIC 9651065', mission='Kepler').download_all().SAP_FLUX.stitch().remove_nans()
lc_sap.to_periodogram().plot()
# t, y = np.loadtxt('../lc/9651065_lc.txt', usecols=(0,1)).T
# from maelstrom.utils import amplitude_spectrum
# from scipy.ndimage import gaussian_filter
# from maelstrom.utils import amplitude_spectrum
# y_low = gaussian_filter(y,1.8) * 0.01*np.random.normal(loc=0, size=len(y))
# y_high = y - y_low
# plt.plot(*amplitude_spectrum(t, y_low), alpha=0.5)
# plt.plot(*amplitude_spectrum(t, y), alpha=0.5)
t, y = lc_sap.time, lc_sap.flux
ms = Maelstrom(t, y, max_peaks=5, fmin=5, fmax=48)
ms.first_look()
period_guess = 300
a_guess = 200
time, flux = ms.time, ms.flux
freq = ms.freq
weights = ms.get_weights(norm=False)
ms.setup_orbit_model(period=period_guess)
# opt = ms.optimize()
pb1 = ms.pin_orbit_model()
opt = pb1.optimize()
opt
# with pb1:
# trace = pm.load_trace('traces/9651065_FINAL_VERSION2/')
with pb1:
trace = pm.sample(
tune=1000,
draws=1000,
start=opt,
chains=2,
step=xo.get_dense_nuts_step(target_accept=0.9),
)
pm.save_trace(trace, 'trace/REFEREE_9651065_SAP_FLUX')
with pb1:
trace = pm.load_trace('trace/REFEREE_9651065_SAP_FLUX')
trace
pm.summary(trace)
```
# Synthetic system
```
"""Generate colored noise."""
from numpy import sqrt, newaxis
from numpy.fft import irfft, rfftfreq
from numpy.random import normal
from numpy import sum as npsum
def powerlaw_psd_gaussian(exponent, size, fmin=0):
"""Gaussian (1/f)**beta noise.
Based on the algorithm in:
Timmer, J. and Koenig, M.:
On generating power law noise.
Astron. Astrophys. 300, 707-710 (1995)
Normalised to unit variance
Parameters:
-----------
exponent : float
The power-spectrum of the generated noise is proportional to
S(f) = (1 / f)**beta
flicker / pink noise: exponent beta = 1
brown noise: exponent beta = 2
Furthermore, the autocorrelation decays proportional to lag**-gamma
with gamma = 1 - beta for 0 < beta < 1.
There may be finite-size issues for beta close to one.
shape : int or iterable
The output has the given shape, and the desired power spectrum in
the last coordinate. That is, the last dimension is taken as time,
and all other components are independent.
fmin : float, optional
Low-frequency cutoff.
Default: 0 corresponds to original paper. It is not actually
zero, but 1/samples.
Returns
-------
out : array
The samples.
Examples:
---------
# generate 1/f noise == pink noise == flicker noise
>>> import colorednoise as cn
>>> y = cn.powerlaw_psd_gaussian(1, 5)
"""
# Make sure size is a list so we can iterate it and assign to it.
try:
size = list(size)
except TypeError:
size = [size]
# The number of samples in each time series
samples = size[-1]
# Calculate frequencies (we assume a sample rate of one)
# Use fft functions for real output (-> hermitian spectrum)
f = rfftfreq(samples)
# Build scaling factors for all frequencies
s_scale = f
fmin = max(fmin, 1./samples) # Low frequency cutoff
ix = npsum(s_scale < fmin) # Index of the cutoff
if ix and ix < len(s_scale):
s_scale[:ix] = s_scale[ix]
s_scale = s_scale**(-exponent/2.)
# Calculate theoretical output standard deviation from scaling
w = s_scale[1:].copy()
w[-1] *= (1 + (samples % 2)) / 2. # correct f = +-0.5
sigma = 2 * sqrt(npsum(w**2)) / samples
# Adjust size to generate one Fourier component per frequency
size[-1] = len(f)
# Add empty dimension(s) to broadcast s_scale along last
# dimension of generated random power + phase (below)
dims_to_add = len(size) - 1
s_scale = s_scale[(newaxis,) * dims_to_add + (Ellipsis,)]
# Generate scaled random power + phase
sr = normal(scale=s_scale, size=size)
si = normal(scale=s_scale, size=size)
# If the signal length is even, frequencies +/- 0.5 are equal
# so the coefficient must be real.
if not (samples % 2): si[...,-1] = 0
# Regardless of signal length, the DC component must be real
si[...,0] = 0
# Combine power + corrected phase to Fourier components
s = sr + 1J * si
# Transform to real time series & scale to unit variance
y = irfft(s, n=samples, axis=-1) / sigma
return y
from maelstrom.synthetic import SyntheticBinary
from maelstrom.utils import amplitude_spectrum
# Fixed parameters
period_t = 10
asini_t = 100
varpi_t = 0.
tref_t = 0.
freqs = np.array([20, 50])
amps = np.array([0.5, 0.2])
eccen_t = 0.5
time = np.arange(0, 3*period_t, 1.0 / (24 * 30))
lc = SyntheticBinary(time, freqs, amps, period_t, eccen_t, asini_t, varpi_t, tref_t)
lc.add_noise(snr=1000)
rednoise = powerlaw_psd_gaussian(1, len(lc.time))
plt.plot(*amplitude_spectrum(lc.time, lc.flux + rednoise))
ms = Maelstrom(lc.time, lc.flux + rednoise, max_peaks=2, fmin=5, fmax=100)
ms.first_look()
ms.setup_orbit_model(period=period_t)
# opt = ms.optimize()
pb1 = ms.pin_orbit_model()
opt = pb1.optimize()
opt
with pb1:
trace = pm.sample(
tune=1000,
draws=1000,
start=opt,
chains=2,
step=xo.get_dense_nuts_step(target_accept=0.9),
)
pm.save_trace(trace, 'trace/REFEREE_SYNTHETIC_RED_NOISE')
pm.summary(trace)
```
# With GP
```
from maelstrom import PB1Model
pb1 = PB1Model(ms.time, ms.flux, ms.freq)
pb1.init_orbit(opt['PB1_period'], opt['PB1_asini'], with_gp=True)
opt = pb1.optimize()
opt
with pb1:
trace = pm.sample(
tune=1000,
draws=1000,
start=opt,
chains=2,
step=xo.get_dense_nuts_step(target_accept=0.9),
)
pm.save_trace(trace, 'trace/REFEREE_SYNTHETIC_RED_NOISE_WITH_GP')
pm.summary(trace)
```