# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#
# # Tutorial-IllinoisGRMHD: eigen.C
#
# ## Authors: <NAME> & <NAME>
#
# <font color='red'>**This module is currently under development**</font>
#
# ## In this tutorial module we explain how to obtain the eigenvalues of a real, symmetric $3\times3$ matrix. This module will likely be absorbed by another one once we finish documenting the code.
#
# ### Required and recommended citations:
#
# * **(Required)** <NAME>., <NAME>., <NAME>., <NAME>., and <NAME>. IllinoisGRMHD: an open-source, user-friendly GRMHD code for dynamical spacetimes. Class. Quantum Grav. 32 (2015) 175009. ([arxiv:1501.07276](http://arxiv.org/abs/1501.07276)).
# * **(Required)** <NAME>., <NAME>., <NAME>., <NAME>. Primitive Variable Solvers for Conservative General Relativistic Magnetohydrodynamics. Astrophysical Journal, 641, 626 (2006) ([astro-ph/0512420](https://arxiv.org/abs/astro-ph/0512420)).
# * **(Recommended)** <NAME>., <NAME>., <NAME>. An efficient shock-capturing central-type scheme for multidimensional relativistic flows - II. Magnetohydrodynamics. A&A 400 (2) 397-413 (2003). DOI: 10.1051/0004-6361:20021641 ([astro-ph/0210618](https://arxiv.org/abs/astro-ph/0210618)).
#
# If using the version of `IllinoisGRMHD` with piecewise polytropic *or* tabulated (coming soon!) EOS support, then the following citation is also required:
#
# * **(Required)** <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., and <NAME>., *IllinoisGRMHD github repository* (2019). Source Code URL: https://github.com/zachetienne/nrpytutorial/tree/master/IllinoisGRMHD/.
# <a id='toc'></a>
#
# # Table of Contents
# $$\label{toc}$$
#
# This notebook is organized as follows:
#
# 0. [Step 0](#src_dir): **Source directory creation**
# 1. [Step 1](#introduction): **Introduction**
# 1. [Step 2](#eigen__c): **`eigen.C`**
# 1. [Step 2.a](#eigen__c__variables): *The variables used in `eigen.C`*
# 1. [Step 2.b](#eigen__c__phi): *Determining $\phi$*
# 1. [Step 2.c](#eigen__c__eigenvalues): *The eigenvalues of a $3\times3$ symmetric matrix*
# 1. [Step 3](#code_validation): **Code validation**
# 1. [Step 4](#latex_pdf_output): **Output this notebook to $\LaTeX$-formatted PDF file**
# <a id='src_dir'></a>
#
# # Step 0: Source directory creation \[Back to [top](#toc)\]
# $$\label{src_dir}$$
#
# We will now use the [cmdline_helper.py NRPy+ module](Tutorial-cmdline_helper.ipynb) to create the source directory within the `IllinoisGRMHD` NRPy+ directory, if it does not exist yet.
# +
# Step 0: Creation of the IllinoisGRMHD source directory
# Step 0a: Add NRPy's directory to the path
# https://stackoverflow.com/questions/16780014/import-file-from-parent-directory
import os,sys
nrpy_dir_path = os.path.join("..","..")
if nrpy_dir_path not in sys.path:
    sys.path.append(nrpy_dir_path)
# Step 0b: Load up cmdline_helper and create the directory
import cmdline_helper as cmd
outdir = os.path.join("..","src")
cmd.mkdir(outdir)
# -
# <a id='introduction'></a>
#
# # Step 1: Introduction \[Back to [top](#toc)\]
# $$\label{introduction}$$
#
# In this tutorial notebook we will implement an algorithm to evaluate the eigenvalues of a $3\times3$ symmetric matrix. Our method will be analytical and will follow closely [this discussion](https://en.wikipedia.org/wiki/Eigenvalue_algorithm#3%C3%973_matrices).
#
# Let $\mathcal{M}$ be a $3\times3$ symmetric matrix,
#
# $$
# \mathcal{M} =
# \begin{pmatrix}
# M_{11} & M_{12} & M_{13}\\
# M_{12} & M_{22} & M_{23}\\
# M_{13} & M_{23} & M_{33}
# \end{pmatrix}\ .
# $$
#
# To obtain the eigenvalues of $\mathcal{M}$, we must solve the *characteristic equation*
#
# $$
# \det\left(\lambda I_{3\times3} - \mathcal{M}\right) = 0\ ,
# $$
#
# where $\lambda$ represents the eigenvalues of $\mathcal{M}$ and $I_{3\times3} = {\rm diag}\left(1,1,1\right)$ is the $3\times3$ identity matrix. For this particular case, the characteristic equation of $\mathcal{M}$ is then given by
#
# $$
# \lambda^{3} - {\rm tr}\left(\mathcal{M}\right)\lambda^{2} + \left[\frac{{\rm tr}\left(\mathcal{M}\right)^{2} - {\rm tr}\left(\mathcal{M}^{2}\right)}{2}\right]\lambda - \det\left(\mathcal{M}\right) = 0\ .
# $$
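# Before proceeding, we can verify this characteristic polynomial numerically (a short sketch using `numpy`; it is not part of the generated C code, and the variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))
M = 0.5 * (A + A.T)          # random real symmetric 3x3 matrix

tr  = np.trace(M)
c   = 0.5 * (tr**2 - np.trace(M @ M))   # sum of the principal 2x2 minors
det = np.linalg.det(M)

# Each eigenvalue lam of M is a root of det(lam*I - M) = lam^3 - tr*lam^2 + c*lam - det
residuals = [lam**3 - tr*lam**2 + c*lam - det for lam in np.linalg.eigvalsh(M)]
max_residual = max(abs(r) for r in residuals)
assert max_residual < 1e-8
```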
#
# Now write $\mathcal{M} = n\mathcal{N} + mI_{3\times3}$, so that the matrices $\mathcal{M}$ and $\mathcal{N}$ have the same eigenvectors, and $\kappa$ is an eigenvalue of $\mathcal{N}$ if, and only if, $\lambda = n\kappa + m$ is an eigenvalue of $\mathcal{M}$. Inverting this relation gives
#
# $$
# \mathcal{N} = \frac{1}{n}\left(\mathcal{M} - mI_{3\times3}\right)\ .
# $$
#
# Choosing $m \equiv \frac{1}{3}{\rm tr}\left(\mathcal{M}\right)$, we get
#
# $$
# {\rm tr}\left(\mathcal{N}\right) = \frac{1}{n}\left[{\rm tr}\left(\mathcal{M}\right) - 3m\right]=0\ .
# $$
#
# Also,
#
# $$
# {\rm tr}\left(\mathcal{N}^{2}\right) = \frac{1}{n^{2}}\left[\left(M_{11}-m\right)^{2}+\left(M_{22}-m\right)^{2}+\left(M_{33}-m\right)^{2}+2\left(M_{12}^{2}+M_{13}^{2}+M_{23}^{2}\right)\right]\ ,
# $$
#
# so that if we choose $n\equiv\sqrt{\frac{\left(M_{11}-m\right)^{2}+\left(M_{22}-m\right)^{2}+\left(M_{33}-m\right)^{2}+2\left(M_{12}^{2}+M_{13}^{2}+M_{23}^{2}\right)}{6}}$ we get
#
# $$
# {\rm tr}\left(\mathcal{N}^{2}\right) = 6\ .
# $$
#
# Then, if we look at the characteristic equation for the matrix $\mathcal{N}$,
#
# $$
# \kappa^{3} - {\rm tr}\left(\mathcal{N}\right)\kappa^{2} + \left[\frac{{\rm tr}\left(\mathcal{N}\right)^{2} - {\rm tr}\left(\mathcal{N}^{2}\right)}{2}\right]\kappa - \det\left(\mathcal{N}\right) = 0\ ,
# $$
#
# we see that it can be greatly simplified with our choices of $m$ and $n$,
#
# $$
# \kappa^{3} - 3\kappa - \det\left(\mathcal{N}\right) = 0\ .
# $$
#
# Further simplification of this characteristic equation can be obtained by using
#
# $$
# \begin{align}
# \kappa &\equiv 2\cos\phi\ ,\\
# \cos\left(3\phi\right) &= 4\cos^{3}\phi - 3\cos\phi\ ,
# \end{align}
# $$
#
# so that
#
# $$
# \begin{align}
# 0 &= 8\cos^{3}\phi - 6\cos\phi - \det\left(\mathcal{N}\right)\\
# &= 2\cos\left(3\phi\right) - \det\left(\mathcal{N}\right)\\
# \implies \phi &= \frac{1}{3}\arccos\frac{\det\left(\mathcal{N}\right)}{2} + \frac{2k\pi}{3}\ ,\ k=0,1,2\ ,
# \end{align}
# $$
#
# which, finally, yields
#
# $$
# \boxed{\kappa\left(k\right) = 2\cos\left(\frac{1}{3}\arccos\frac{\det\left(\mathcal{N}\right)}{2}+\frac{2k\pi}{3}\right)}\ .
# $$
#
# Once we have $\kappa$, we can recover the eigenvalues of $\mathcal{M}$ via
#
# $$
# \boxed{
# \begin{align}
# \lambda_{1} &= m + n\kappa(0)\\
# \lambda_{2} &= m + n\kappa(1)\\
# \lambda_{3} &= 3m - \lambda_{1} - \lambda_{2}
# \end{align}
# }\ ,
# $$
#
# where we have used the fact that ${\rm tr}\left(\mathcal{M}\right)=\lambda_{1}+\lambda_{2}+\lambda_{3}$ to compute $\lambda_{3}$.
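# As a cross-check of the boxed relations above, here is a short Python sketch (not part of the IllinoisGRMHD sources; function and variable names are ours) that implements them and compares the result against `numpy.linalg.eigvalsh`:

```python
import numpy as np

def eigenvalues_sym3(M):
    """Analytic eigenvalues of a real symmetric 3x3 matrix (sketch)."""
    m = np.trace(M) / 3.0
    K = M - m * np.eye(3)                  # traceless part of M
    n = np.sqrt(np.trace(K @ K) / 6.0)     # scale so that tr(N^2) = 6, with N = K/n
    if n == 0.0:                           # M is a multiple of the identity
        return np.array([m, m, m])
    detN = np.linalg.det(K / n)
    # Clip to [-1, 1] to guard against round-off in arccos
    phi = np.arccos(np.clip(detN / 2.0, -1.0, 1.0)) / 3.0
    kappa = 2.0 * np.cos(phi + 2.0 * np.pi * np.arange(3) / 3.0)  # kappa(k), k = 0, 1, 2
    return np.sort(m + n * kappa)          # lambda = m + n*kappa

M = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 4.]])
assert np.allclose(eigenvalues_sym3(M), np.linalg.eigvalsh(M))
```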
# <a id='eigen__c'></a>
#
# # Step 2: `eigen.C` \[Back to [top](#toc)\]
# $$\label{eigen__c}$$
#
# <a id='eigen__c__variables'></a>
#
# ## Step 2.a: The variables used in `eigen.C` \[Back to [top](#toc)\]
# $$\label{eigen__c__variables}$$
#
# In the algorithm below, we define the following quantities
#
# $$
# \boxed{
# \begin{align}
# \mathcal{K} &= \mathcal{M} - mI_{3\times3}\\
# m &= \frac{{\rm tr}\left(\mathcal{M}\right)}{3}\\
# q &= \frac{\det\left(\mathcal{K}\right)}{2}\\
# p &= n^{2} = \frac{{\rm tr}\left(\mathcal{K}^{2}\right)}{6}
# \end{align}
# }\ .
# $$
#
# With these definitions, the quantities to be implemented are as follows:
#
# $$
# \boxed{ m = \frac{\left(M_{11} + M_{22} + M_{33}\right)}{3} }\ .
# $$
#
# The matrix $\mathcal{K}$ is simply
#
# $$
# \boxed{
# \mathcal{K} =
# \begin{pmatrix}
# M_{11}-m & M_{12} & M_{13}\\
# M_{12} & M_{22}-m & M_{23}\\
# M_{13} & M_{23} & M_{33}-m
# \end{pmatrix}
# }\ .
# $$
#
# Straightforwardly, we have
#
# $$
# \boxed{q = \frac{K_{11}K_{22}K_{33} +
# K_{12}K_{23}K_{13} +
# K_{13}K_{12}K_{23} -
# K_{13}K_{22}K_{13} -
# K_{12}K_{12}K_{33} -
# K_{11}K_{23}K_{23}
# }{2}
# }\ .
# $$
#
# Since $\mathcal{K}$ is symmetric as well, we have
#
# $$
# \boxed{p = \frac{K_{11}^{2} + K_{22}^{2} + K_{33}^{2} + 2\left(K_{12}^{2} + K_{13}^{2} + K_{23}^{2}\right)}{6}}\ .
# $$
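# The expanded expressions for $q$ and $p$ can be checked numerically against $\det\left(\mathcal{K}\right)/2$ and ${\rm tr}\left(\mathcal{K}^{2}\right)/6$ (a standalone `numpy` sketch; the variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3))
M = 0.5 * (A + A.T)                 # random real symmetric 3x3 matrix

m = np.trace(M) / 3.0
K = M - m * np.eye(3)
(K11, K12, K13), (_, K22, K23), (_, _, K33) = K

# Expanded component formulas, as boxed above
q_expanded = 0.5 * (K11*K22*K33 + 2.0*K12*K23*K13
                    - K13*K22*K13 - K12*K12*K33 - K11*K23*K23)
p_expanded = (K11**2 + K22**2 + K33**2 + 2.0*(K12**2 + K13**2 + K23**2)) / 6.0

assert np.isclose(q_expanded, np.linalg.det(K) / 2.0)
assert np.isclose(p_expanded, np.trace(K @ K) / 6.0)
```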
# %%writefile $outdir/eigen.C
//
// This subroutine calculates the eigenvalues of a real, symmetric 3x3
// matrix M={{M11,M12,M13},{M12,M22,M23},{M13,M23,M33}} based on the
// algorithm described in
// http://en.wikipedia.org/wiki/Eigenvalue_algorithm#Eigenvalues_of_3.C3.973_matrices
// which simply solves the cubic equation Det( M - lambda I )=0 analytically.
// The eigenvalues are stored in lam1, lam2 and lam3.
//
void eigenvalues_3by3_real_sym_matrix(CCTK_REAL & lam1, CCTK_REAL & lam2, CCTK_REAL & lam3,
CCTK_REAL M11, CCTK_REAL M12, CCTK_REAL M13, CCTK_REAL M22, CCTK_REAL M23, CCTK_REAL M33)
{
CCTK_REAL m = (M11 + M22 + M33)/3.0;
CCTK_REAL K11 = M11 - m, K12 = M12, K13 = M13, K22 = M22-m, K23 = M23, K33=M33-m;
CCTK_REAL q = 0.5* (K11*K22*K33 + K12*K23*K13 + K13*K12*K23 - K13*K22*K13
- K12*K12*K33 - K11*K23*K23);
CCTK_REAL p = ( SQR(K11) + SQR(K22) + SQR(K33) + 2.0*(SQR(K12) + SQR(K13) + SQR(K23) ) )/6.0;
# <a id='eigen__c__phi'></a>
#
# ## Step 2.b: Determining $\phi$ \[Back to [top](#toc)\]
# $$\label{eigen__c__phi}$$
#
# We then employ the following criterion to determine $\phi$:
#
# $$
# \phi
# =
# \left\{
# \begin{matrix}
# 0 &,\ {\rm if}\ \left|q\right| \geq p^{3/2}\ ,\\
# \frac{1}{3}\arccos\left(\frac{q}{p^{3/2}}\right) &,\ {\rm otherwise}\ ,
# \end{matrix}
# \right.
# $$
#
# shifting $\phi \to \phi + \frac{\pi}{3}$ whenever the result is negative.
# %%writefile -a $outdir/eigen.C
CCTK_REAL phi;
CCTK_REAL p32 = sqrt(p*p*p);
if (fabs(q) >= fabs(p32) ) {
phi = 0.0;
} else {
phi = acos(q/p32)/3.0;
}
if (phi<0.0) phi += M_PI/3.0;
# <a id='eigen__c__eigenvalues'></a>
#
# ## Step 2.c: The eigenvalues of a $3\times3$ symmetric matrix \[Back to [top](#toc)\]
# $$\label{eigen__c__eigenvalues}$$
#
# Finally, the eigenvalues are computed using
#
# $$
# \boxed{
# \begin{align}
# \lambda_{1} &= m + 2\sqrt{p}\cos\phi\\
# \lambda_{2} &= m - \sqrt{p}\cos\phi - \sqrt{3p}\sin\phi\\
# \lambda_{3} &= m - \sqrt{p}\cos\phi + \sqrt{3p}\sin\phi
# \end{align}
# }\ .
# $$
# +
# %%writefile -a $outdir/eigen.C
CCTK_REAL sqrtp = sqrt(p);
CCTK_REAL sqrtp_cosphi = sqrtp*cos(phi);
CCTK_REAL sqrtp_sqrt3_sinphi = sqrtp*sqrt(3.0)*sin(phi);
lam1 = m + 2.0*sqrtp_cosphi;
lam2 = m - sqrtp_cosphi - sqrtp_sqrt3_sinphi;
lam3 = m - sqrtp_cosphi + sqrtp_sqrt3_sinphi;
}
# -
# <a id='code_validation'></a>
#
# # Step 3: Code validation \[Back to [top](#toc)\]
# $$\label{code_validation}$$
#
# First we download the original `IllinoisGRMHD` source code and then compare it to the source code generated by this tutorial notebook.
# +
# # Verify if the code generated by this tutorial module
# # matches the original IllinoisGRMHD source code
# # First download the original IllinoisGRMHD source code
# import urllib
# from os import path
# original_IGM_file_url = "https://bitbucket.org/zach_etienne/wvuthorns/raw/5611b2f0b17135538c9d9d17c7da062abe0401b6/IllinoisGRMHD/src/eigen.C"
# original_IGM_file_name = "eigen-original.C"
# original_IGM_file_path = os.path.join(IGM_src_dir_path,original_IGM_file_name)
# # Then download the original IllinoisGRMHD source code
# # We try it here in a couple of ways in an attempt to keep
# # the code more portable
# try:
# original_IGM_file_code = urllib.request.urlopen(original_IGM_file_url).read().decode("utf-8")
# # Write down the file the original IllinoisGRMHD source code
# with open(original_IGM_file_path,"w") as file:
# file.write(original_IGM_file_code)
# except:
# try:
# original_IGM_file_code = urllib.urlopen(original_IGM_file_url).read().decode("utf-8")
# # Write down the file the original IllinoisGRMHD source code
# with open(original_IGM_file_path,"w") as file:
# file.write(original_IGM_file_code)
# except:
# # If all else fails, hope wget does the job
# # !wget -O $original_IGM_file_path $original_IGM_file_url
# # Perform validation
# # Validation__eigen__C = !diff $original_IGM_file_path $outfile_path__eigen__C
# if Validation__eigen__C == []:
# # If the validation passes, we do not need to store the original IGM source code file
# # !rm $original_IGM_file_path
# print("Validation test for eigen.C: PASSED!")
# else:
# # If the validation fails, we keep the original IGM source code file
# print("Validation test for eigen.C: FAILED!")
# # We also print out the difference between the code generated
# # in this tutorial module and the original IGM source code
# print("Diff:")
# for diff_line in Validation__eigen__C:
# print(diff_line)
# -
# <a id='latex_pdf_output'></a>
#
# # Step 4: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\]
# $$\label{latex_pdf_output}$$
#
# The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename
# [Tutorial-IllinoisGRMHD__eigen.pdf](Tutorial-IllinoisGRMHD__eigen.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means).
latex_nrpy_style_path = os.path.join(nrpy_dir_path,"latex_nrpy_style.tplx")
# #!jupyter nbconvert --to latex --template $latex_nrpy_style_path --log-level='WARN' Tutorial-IllinoisGRMHD__eigen.ipynb
# #!pdflatex -interaction=batchmode Tutorial-IllinoisGRMHD__eigen.tex
# #!pdflatex -interaction=batchmode Tutorial-IllinoisGRMHD__eigen.tex
# #!pdflatex -interaction=batchmode Tutorial-IllinoisGRMHD__eigen.tex
# !rm -f Tut*.out Tut*.aux Tut*.log
# File: IllinoisGRMHD/doc/Tutorial-IllinoisGRMHD__eigen.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Artificial Neural Network (ANN) Modeling in Python
#
# <NAME>
# <NAME>-<NAME>, Sestek
#
#
# Problem:
#
# When content like the following arrives at the chatbot, how do we convert it into an understandable format?
#
# "faturabilgisi öğrenmek istiyorum"
#
#
# ## Feed-forward ANN
#
# 
#
# ref: https://cdn-images-1.medium.com/max/800/1*pbk9xtz7WbBwYPVATdl9Vw.png
# Requirements:
# * Install [miniconda](https://docs.conda.io/en/latest/miniconda.html).
# * During the Miniconda installation, select the "Add to PATH" option.
# * In cmd:
# ```console
# conda update conda
# conda install python=3.6
# pip install numpy
# pip install cntk / pip install cntk-gpu
# pip install tensorboard
# pip install tensorflow
# ```
# ```python
#
# ali okula gitti
#
# {a} l i o k u 1
# a {l} i o k u l 1
# a l {i} o k u l a 0
# a l i {o} k u l a g 1
# a l i o {k} u l a g i 1
# l i o k {u} l a g i t 1
# i o k u {l} a g i t t 1
# o k u l {a} g i t t i 0
# k u l a {g} i t t i h 1
# u l a g {i} t t i h a 1
# l a g i {t} t i h a s 1
# a g i t {t} i h a s a 1
# g i t t {i} h a s a n 0
# ```
# +
import io
import re
import numpy as np
tr_upper = list(" ABCÇDEFGĞHIİJKLMNOÖPRSŞTUÜVYZ")
tr_lower = list(" abcçdefgğhıijklmnoöprsştuüvyz")
left_context = 9
right_context = 9
# -
def line_to_seq(line):
    seq = []
    for ch in line:
        if ch in tr_lower:
            seq.append(tr_lower.index(ch))
        elif ch in tr_upper:
            seq.append(tr_upper.index(ch))
        else:
            seq.append(tr_lower.index(' '))
    return seq
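# For lowercase-only input the mapping reduces to a simple index lookup, shown here as a compact, self-contained equivalent (the full function above also handles uppercase and maps unknown characters to the space index 0):

```python
# Space plus the 29 letters of the Turkish alphabet; index 0 is the space
tr_lower = list(" abcçdefgğhıijklmnoöprsştuüvyz")

# Each character becomes its alphabet index; anything unknown becomes 0
seq = [tr_lower.index(ch) if ch in tr_lower else 0 for ch in "ali okula"]
assert seq == [1, 15, 12, 0, 18, 14, 25, 15, 1]
```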
def add_context(items, index):
    items_count = len(items)
    go_back = []
    found_count = 0
    cursor = index
    while True:
        if left_context == 0:
            break
        target = 0
        cursor -= 1
        if cursor >= 0:
            target = items[cursor]
        go_back.append(target)
        found_count += 1
        if found_count >= left_context:
            break
    go_forward = []
    found_count = 0
    cursor = index
    while True:
        if right_context == 0:
            break
        target = 0
        cursor += 1
        if cursor < items_count:
            target = items[cursor]
        if cursor < items_count and target == 0:
            continue
        go_forward.append(target)
        found_count += 1
        if found_count >= right_context:
            break
    prev_items = list(reversed(go_back))
    return prev_items + [items[index]] + go_forward
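# To make the window logic concrete, here is a self-contained copy of `add_context` with a window of 2 on each side (instead of 9), so the examples stay short. Note the asymmetry in the code: the forward scan skips items equal to 0 (the space index), while the backward scan keeps them.

```python
left_context = 2
right_context = 2

def add_context(items, index):
    items_count = len(items)
    go_back = []
    found_count = 0
    cursor = index
    while True:
        if left_context == 0:
            break
        target = 0
        cursor -= 1
        if cursor >= 0:
            target = items[cursor]
        go_back.append(target)
        found_count += 1
        if found_count >= left_context:
            break
    go_forward = []
    found_count = 0
    cursor = index
    while True:
        if right_context == 0:
            break
        target = 0
        cursor += 1
        if cursor < items_count:
            target = items[cursor]
        if cursor < items_count and target == 0:
            continue          # the forward scan skips spaces (index 0)
        go_forward.append(target)
        found_count += 1
        if found_count >= right_context:
            break
    prev_items = list(reversed(go_back))
    return prev_items + [items[index]] + go_forward

# Window around the first item; positions before the start pad with 0
assert add_context([1, 2, 3, 0, 4], 0) == [0, 0, 1, 2, 3]
# The forward scan skips the 0 (space) at position 1
assert add_context([1, 0, 2], 0) == [0, 0, 1, 2, 0]
```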
def prepare_ctf_data(data_file, ctf_file):
    with io.open(data_file, mode='rt', encoding='utf-8') as fr:
        lines = fr.read().splitlines()
    zero_sample_count = 0
    one_sample_count = 0
    with open(ctf_file, mode="wt", encoding="utf-8") as fw:
        for line in lines:
            normalized_line = re.sub(r'\s+', ' ', line)
            normalized_line = normalized_line.strip() + ' '
            seq = line_to_seq(normalized_line)
            for i in range(len(seq) - 1):
                next_item = seq[i + 1]
                label = 0 if (next_item == 0) else 1
                feat = np.array(add_context(seq, i)) / len(tr_upper)
                if label == 1:
                    one_sample_count += 1
                else:
                    zero_sample_count += 1
                label_str = "0 1" if label == 1 else "1 0"
                feature_str = " ".join(["{:.6f}".format(k) for k in feat])
                line = "|labels {} |features {}\n".format(label_str, feature_str)
                fw.write(line)
    print("one_sample_count: {} zero_sample_count: {}".format(one_sample_count, zero_sample_count))
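# Each line written by `prepare_ctf_data` follows the CNTK Text Format (CTF): a one-hot label pair followed by the normalized character indices. A minimal standalone reconstruction of one such line (the feature window here is illustrative, not taken from the training data):

```python
import numpy as np

label = 1
feat = np.array([1, 15, 12]) / 30.0   # example window of character indices, normalized by alphabet size

label_str = "0 1" if label == 1 else "1 0"
feature_str = " ".join("{:.6f}".format(k) for k in feat)
line = "|labels {} |features {}".format(label_str, feature_str)
assert line == "|labels 0 1 |features 0.033333 0.500000 0.400000"
```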
prepare_ctf_data("validation.txt", "validation.ctf")
prepare_ctf_data("train.txt", "train.ctf")
prepare_ctf_data("train_subset.txt", "train_subset.ctf")
# File: 2019-03-02-Boun-TechSummit-Chatbot/src/neural_networks/space_detection_demo.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Frequency folding
# ## The relationship between the Fourier transform of a function and the Fourier transform of its sampled function
# Consider the function $f(t)$. The function is sampled with the sampling period $h$ (corresponding to sampling frequency $\omega_s = \frac{2\pi}{h}$) to obtain the sequence $f(kh)$.
# If $F(\omega)$ is the Fourier transform of the function $f(t)$ and $F_s(\omega)$ is the Fourier transform of the sampled signal $f(kh)$, then
# \begin{equation}
# F_s(\omega) = \frac{1}{h} \sum_{k=-\infty}^{\infty} F(\omega + k\omega_s)
# \end{equation}
#
# From this we see that the spectrum of a sampled signal is periodic, since the function $F_s(\omega)$ is periodic with period $\omega_s$, i.e.
# \begin{equation}
# F_s(\omega + m\omega_s) = \frac{1}{h} \sum_{k=-\infty}^{\infty} F(\omega + k\omega_s + m\omega_s) = \frac{1}{h} \sum_{k=-\infty}^{\infty} F(\omega + (k+m)\omega_s) = F_s(\omega)
# \end{equation}
#
# We also see that the power of the signal $f(kh)$ at a frequency $\omega_1$, $0 \le \omega_1 \le \omega_N$ (where $\omega_N = \omega_s/2$ is the Nyquist frequency), contains contributions from all frequencies of the original signal at the frequencies
# \begin{equation}
# \omega = \omega_1 + k\omega_s, \; k=-\infty, \ldots, 0, \ldots, \infty
# \end{equation}
# and
# \begin{equation}
# \omega = -\omega_1 + k\omega_s, \; k=-\infty, \ldots, 0, \ldots, \infty
# \end{equation}
# We say that $\omega_1$ is the alias of all these frequencies. The lowest such alias frequency is the frequency
# \begin{equation}
# \omega = -\omega_1 + \omega_s,
# \end{equation}
# which can be written
# \begin{equation}
# \omega = |\omega_1 - \omega_s| = | \omega_1 + \omega_N - \omega_N - \omega_s | = | (\omega_1 + \omega_N) - \omega_s - \omega_N| = | (\omega_1 + \omega_N)\, \mathrm{mod}\, \omega_s - \omega_N|.
# \end{equation}
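# The folding formula above is easy to exercise numerically. A standalone sketch (the names and values are ours) that also confirms the underlying fact that two sinusoids whose frequencies differ by a multiple of $\omega_s$ produce identical samples:

```python
import numpy as np

def alias(w, ws):
    """Fold an angular frequency w (rad/s) into the baseband [0, ws/2]."""
    wN = ws / 2.0
    return abs((w + wN) % ws - wN)

ws = 16.0                        # sampling frequency [rad/s]
assert alias(10.0, ws) == 6.0    # 10 rad/s shows up at 6 rad/s
assert alias(10.0 + ws, ws) == 6.0   # ... and so does 10 + k*ws
assert alias(6.0, ws) == 6.0     # frequencies below the Nyquist frequency are unchanged

# Aliased sinusoids are indistinguishable after sampling:
# sin((w1+ws)*k*h) = sin(w1*k*h + 2*pi*k) = sin(w1*k*h)
h = 2.0 * np.pi / ws
ts = np.arange(100) * h
assert np.allclose(np.sin(10.0 * ts), np.sin((10.0 + ws) * ts))
```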
# +
# %matplotlib notebook
import numpy as np
import matplotlib.pyplot as plt
def spectrum(x, h):
    """Computes the spectrum using np.fft.fft.
    Returns frequency in rad/s from -\omega_N to \omega_N."""
    wN = np.pi/h                     # The Nyquist frequency
    N = len(x)
    X = np.fft.fft(x)                # Computes the Fourier transform
    Xpos = X[:N//2]                  # Positive part of the spectrum
    Xneg = X[N//2:]                  # Negative part. Obs: these bins correspond to frequencies from wN up to ws
    wpos = np.linspace(0, wN, N//2)  # Positive frequencies, from 0 to wN
    W = np.hstack((-wpos[::-1], wpos))
    XX = np.hstack((Xneg, Xpos))
    return (XX, W)
# Assume too slow sampling of signal consisting of two high-frequency sinusoids
ws = 16 # Sampling frequency in rad/s
wN = ws/2
h = np.pi/wN
w1 = 10 # rad/s
w2 = w1+1*ws # rad/s
w1Alias = np.abs( (w1+wN) % ws - wN )
w2Alias = np.abs( (w2+wN) % ws - wN )
M = 4000 # Number of samples in the over-sampled ("continuous") signal
t = np.linspace(0, 60*2*np.pi/w1, M) # 60 periods of the slowest sinusoid
y = np.sin(w1*t) + np.sin(w2*t) # Continuous time (sort of) signal
N = 400 # Number of samples to take
ts = np.arange(N)*h
ys = np.sin(w1*ts) + np.sin(w2*ts) # Sampled signal
(Y,W) = spectrum(y,t[1]-t[0]) # get spectrum (from FFT) of the "continuous" signal
(Ys, Ws) = spectrum(ys, h) # Spectrum of discrete signal
plt.figure(figsize=(10,7))
plt.subplot(2,1,1)
plt.plot(W, np.real(Y))
plt.plot(Ws, np.real(Ys))
plt.plot([wN, wN], [-50, 150], 'k--')
plt.plot([-wN, -wN], [-50, 150], 'k--')
plt.xlim((-1.2*w2, 1.2*w2))
plt.xticks((-w2, -w1, -wN, -w1Alias, w1Alias, wN, w1, w2))
plt.ylim((-500, 500))
plt.ylabel('Real part')
plt.subplot(2,1,2)
plt.plot(W, np.imag(Y))
plt.plot(Ws, np.imag(Ys))
plt.plot([wN, wN], [-1500, 1500], 'k--')
plt.plot([-wN, -wN], [-1500, 1500], 'k--')
plt.xlim((-1.2*w2, 1.2*w2))
plt.xticks((-w2, -w1, -wN, -w1Alias, w1Alias, wN, w1, w2))
plt.ylim((-400, 400))
#plt.xticks((-10, -5, -1, 0, 1, 5, 10))
plt.ylabel('Imaginary part')
plt.xlabel(r'$\omega$ [rad/s]')
plt.legend(('Continuous', 'Sampled', 'Nyquist frequency'), loc=1, borderaxespad=0.)
# -
plt.figure(figsize=(10,5))
plt.plot(t,y, color=(0.7,0.7,1))
plt.stem(ts[:20], ys[:20],linefmt='r--', markerfmt='ro', basefmt = 'r-')
plt.xlim((0,7.8))
plt.xlabel('t [s]')
# File: sampling-and-aliasing/notebooks/Frequency-folding.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
import pandas as pd; pd.set_option('display.max_columns', None)
import seaborn as sns
from matplotlib import pyplot as plt
import numpy as np
from scipy import stats
df = pd.read_csv("data/ad.csv")
df.columns
# Top search terms to reassess
zero = df.loc[df['Total Advertising Cost of Sales (ACoS) '] == '0']
zero
zero_df = zero.sort_values(by='Spend', ascending=False).head(10)
zero_df
ax = zero_df.plot.bar(x='Customer Search Term', y='Spend', rot=90)
ax.set_xlabel("Customer Search Term", fontsize=12)
ax.set_ylabel("Spend(Dollars)", fontsize=12)
ax.set_title("Highest Spending With Zero Sales", fontsize=16, fontweight="bold")
plt.show()
plt.figure(figsize=(12, 10))
sns.regplot(df['Spend'], df['7 Day Total Sales '])
plt.figure(figsize=(12, 10))
sns.regplot(df['Spend Per Day'], df['Sales Per Day'])
plt.figure(figsize=(12, 10))
sns.regplot(df['Spend Per Day'], df['Impressions Per Day'])
# +
slope, intercept, r_value, p_value, std_err = stats.linregress(df['Spend Per Day'],df['Impressions Per Day'])
ax = sns.regplot(x='Spend Per Day', y = 'Impressions Per Day', data = df, color='Purple',
line_kws={'label':"y={0:1f}x+{1:1f}".format(slope,intercept)})
ax.legend()
plt.show()
# +
slope, intercept, r_value, p_value, std_err = stats.linregress(df['Spend Per Day'],df['Sales Per Day'])
ax = sns.regplot(x='Spend Per Day', y = 'Sales Per Day', data = df, color='Blue',
line_kws={'label':"y={0:1f}x+{1:1f}".format(slope,intercept)})
ax.legend()
plt.show()
# -
res = stats.linregress(df['Spend Per Day'],df['Impressions Per Day'])
print(f"R-squared: {res.rvalue**2:.6f}")
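# `linregress` reports the Pearson correlation coefficient as `rvalue`, so the R-squared printed above is just its square. A standalone check on synthetic data (the data here are illustrative, not the advertising data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 2.0 * x + rng.normal(size=200)   # linear relationship plus noise

res = stats.linregress(x, y)
r_pearson = np.corrcoef(x, y)[0, 1]

# For simple linear regression, R^2 equals the squared Pearson correlation
assert np.isclose(res.rvalue**2, r_pearson**2)
```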
plt.plot(df['Spend Per Day'],df['Sales Per Day'], 'o', label='original data')
plt.plot(df['Spend Per Day'], res.intercept + res.slope*df['Spend Per Day'], 'r', label='fitted line')
plt.legend()
plt.show()
X=df['Spend Per Day']
Y=df['Impressions Per Day']
X
from scipy.stats import linregress
X = df['Spend Per Day'].values
Y = df['Impressions Per Day'].values
linregress(X, Y)
corr = df.loc[df['Campaign Name'] == 'Floral - Manual'].corr()[['Spend Per Day']]
corr = corr.sort_values(by='Spend Per Day', ascending=False)
plt.figure(figsize=(15, 10))
sns.heatmap(corr, annot=True)
# File: .ipynb_checkpoints/Advertising Viz-checkpoint.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Plotting Tutorial
#
# The Hail plot module allows for easy plotting of data. This notebook contains examples of how to use the plotting functions in this module, many of which can also be found in the first tutorial.
# +
import hail as hl
hl.init()
from bokeh.io import show, output_notebook
from bokeh.layouts import gridplot
output_notebook()
# +
hl.utils.get_1kg('data/')
mt = hl.read_matrix_table('data/1kg.mt')
table = (hl.import_table('data/1kg_annotations.txt', impute=True)
.key_by('Sample'))
mt = mt.annotate_cols(**table[mt.s])
mt = hl.sample_qc(mt)
mt.describe()
# -
# ### Histogram
#
# The `histogram()` method takes as an argument an aggregated hist expression, as well as optional arguments for the legend and title of the plot.
dp_hist = mt.aggregate_entries(hl.expr.aggregators.hist(mt.DP, 0, 30, 30))
p = hl.plot.histogram(dp_hist, legend='DP', title='DP Histogram')
show(p)
# This method, like all Hail plotting methods, also allows us to pass in fields of our data set directly. Choosing not to specify the `range` and `bins` arguments would result in a range being computed based on the largest and smallest values in the dataset and a default bins value of 50.
p = hl.plot.histogram(mt.DP, range=(0, 30), bins=30)
show(p)
# ### Cumulative Histogram
#
# The `cumulative_histogram()` method works in a similar way to `histogram()`.
p = hl.plot.cumulative_histogram(mt.DP, range=(0,30), bins=30)
show(p)
# ### Scatter
#
# The `scatter()` method can also take in either Python types or Hail fields as arguments for x and y.
p = hl.plot.scatter(mt.sample_qc.dp_stats.mean, mt.sample_qc.call_rate, xlabel='Mean DP', ylabel='Call Rate')
show(p)
# We can also pass in a Hail field as a `label` argument, which determines how to color the data points.
mt = mt.filter_cols((mt.sample_qc.dp_stats.mean >= 4) & (mt.sample_qc.call_rate >= 0.97))
ab = mt.AD[1] / hl.sum(mt.AD)
filter_condition_ab = ((mt.GT.is_hom_ref() & (ab <= 0.1)) |
(mt.GT.is_het() & (ab >= 0.25) & (ab <= 0.75)) |
(mt.GT.is_hom_var() & (ab >= 0.9)))
mt = mt.filter_entries(filter_condition_ab)
mt = hl.variant_qc(mt).cache()
common_mt = mt.filter_rows(mt.variant_qc.AF[1] > 0.01)
gwas = hl.linear_regression(y=common_mt.CaffeineConsumption, x=common_mt.GT.n_alt_alleles(), covariates=[1.0])
pca_eigenvalues, pca_scores, _ = hl.hwe_normalized_pca(common_mt.GT)
p = hl.plot.scatter(pca_scores.scores[0], pca_scores.scores[1],
label=common_mt.cols()[pca_scores.s].SuperPopulation,
title='PCA', xlabel='PC1', ylabel='PC2', collect_all=True)
show(p)
# Hail's downsample aggregator is incorporated into the `scatter()`, `qq()`, and `manhattan()` functions. The `collect_all` parameter tells the plot function whether to collect all values or downsample. Choosing not to set this parameter results in downsampling.
# +
p2 = hl.plot.scatter(pca_scores.scores[0], pca_scores.scores[1],
label=common_mt.cols()[pca_scores.s].SuperPopulation,
title='PCA (downsampled)', xlabel='PC1', ylabel='PC2', collect_all=False, n_divisions=50)
show(gridplot([p, p2], ncols=2, plot_width=400, plot_height=400))
# -
# ### Q-Q (Quantile-Quantile)
#
# The `qq()` function requires either a Python type or a Hail field containing p-values to be plotted. This function also allows for downsampling.
# +
p = hl.plot.qq(gwas.linreg.p_value, collect_all=True)
p2 = hl.plot.qq(gwas.linreg.p_value, n_divisions=75)
show(gridplot([p, p2], ncols=2, plot_width=400, plot_height=400))
# -
# ### Manhattan
#
# The `manhattan()` function requires a Hail field containing p-values.
p = hl.plot.manhattan(gwas.linreg.p_value)
show(p)
# We can also pass in a dictionary of fields that we would like to show up as we hover over a data point, and choose not to downsample if the dataset is relatively small.
hover_fields = dict([('alleles', gwas.alleles)])
p = hl.plot.manhattan(gwas.linreg.p_value, hover_fields=hover_fields, collect_all=True)
show(p)
| python/hail/docs/tutorials/plotting.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# !pip install selenium # only once!
# +
# now drivers https://pypi.org/project/selenium/
# https://sites.google.com/a/chromium.org/chromedriver/downloads
# the driver has to be in a directory that is on your system environment PATH
# +
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
browser = webdriver.Chrome() # you need the driver for your browser; your Python folder is one option for its location
# -
browser.get('http://www.ss.com')
assert 'SS' in browser.title # so no error we are good
# +
# elem = browser.find_element_by_name('p') # Find the search box
# elem.send_keys('<PASSWORD>' + Keys.RETURN)
# -
browser.current_url
cars = browser.find_element_by_id("mtd_97")
type(cars)
cars.get_attribute('href')
cars.get_attribute('title')
cars.text
# https://devhints.io/xpath
apart = browser.find_element_by_xpath('//a[@title="Dzīvokļi"]') # XPath is very powerful tool for finding elements
type(apart)
apart.click() # i can emulate a mouse click
browser.back()
browser.forward()
riga = browser.find_element_by_xpath('//a[@title="Rīga, Sludinājumi"]')
type(riga)
riga.click()
browser.current_url
browser.get_window_size()
# +
# browser.get_screenshot_as_png() returns PNG bytes (e.g. for processing with Pillow)
browser.save_screenshot("page.png")
# -
myel = browser.find_element_by_id('ahc_1089')
myel.text
myel.get_property('attributes')
myel.click()
# File: Diena_15_Web_Scraping/Selenium.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Standard imports
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
# %matplotlib inline
# Insert mavenn at beginning of path
import sys
path_to_mavenn_local = '../../../../'
sys.path.insert(0,path_to_mavenn_local)
path_to_suftware_local = '../../../../../suftware/'
sys.path.insert(0,path_to_suftware_local)
# Load mavenn and check path
import mavenn
print(mavenn.__path__)
# Load suftware and check path
import suftware
print(suftware.__file__)
# MAVE-NN utilities
from mavenn.src.entropy import entropy_continuous
# Useful constants
pi = np.pi
e = np.exp(1)
# +
# Load GB1 data
data_df = mavenn.load_example_dataset('gb1')
# Compute length and preview df
N = len(data_df)
print(f'N: {N}')
data_df.head()
# -
# Select N_sub sequences to estimate intrinsic information on
N_sub = 10000
ix = np.random.choice(N, size=N_sub, replace=False)
sub_df = data_df.iloc[ix].copy().reset_index(drop=True)
sub_df.head()
# +
# Extract counts
i_n = sub_df['input_ct'].values
o_n = sub_df['selected_ct'].values
r_n = (o_n+1)/(i_n+1)
y_n = np.log2(r_n)
# Resample counts
K = 1000  # number of bootstrap samples
mu_i_nk = np.random.choice(a=i_n, size=[N_sub,K], replace=True)
mu_o_nk = r_n[:,np.newaxis] * mu_i_nk
i_nk = np.random.poisson(lam=mu_i_nk)
o_nk = np.random.poisson(lam=mu_o_nk)
r_nk = (o_nk+1)/(i_nk+1)
y_nk = np.log2(r_nk)
# -
# Compute naive estimate
dy2_naive_n = (np.log2(e)**2)*(1./(o_n+1.) + 1./(i_n+1))
H_n_naive = 0.5*np.log2(2*pi*e*dy2_naive_n)
H_ygx_naive = np.mean(H_n_naive)
dH_ygx_naive = np.std(H_n_naive)/np.sqrt(N_sub)
print(f'H[y|x] (naive): {H_ygx_naive:.4f} +- {dH_ygx_naive:.4f} bits')
# Estimate entropy using Gaussian approx
dy2_n = np.var(y_nk, axis=1)
H_n_gauss = 0.5*np.log2(2*pi*e*dy2_n)
H_ygx_gauss = np.mean(H_n_gauss)
dH_ygx_gauss = np.std(H_n_gauss)/np.sqrt(N_sub)
print(f'H[y|x] (gauss): {H_ygx_gauss:.4f} +- {dH_ygx_gauss:.4f} bits')
# Estimate gamma using interquartile range
q25_n = np.quantile(y_nk, q=.25, axis=1)
q75_n = np.quantile(y_nk, q=.75, axis=1)
gamma_n = 0.5*(q75_n-q25_n)
H_n = np.log2(4*np.pi*gamma_n)
H_ygx_cauchy = np.mean(H_n)
dH_ygx_cauchy = np.std(H_n)/np.sqrt(N_sub)
print(f'H[y|x] (cauchy): {H_ygx_cauchy:.4f} +- {dH_ygx_cauchy:.4f} bits')
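The Cauchy-based estimate above relies on two facts: the quartiles of a Cauchy distribution sit at $\pm\gamma$ about the location, so $\gamma = \mathrm{IQR}/2$, and its differential entropy is $\log_2(4\pi\gamma)$ bits. A small verification against scipy (a sketch, not part of the original analysis):

```python
import numpy as np
from scipy.stats import cauchy

# Quartiles of a Cauchy sit at loc +/- gamma, so gamma = IQR / 2
gamma = 1.3
dist = cauchy(loc=0.0, scale=gamma)
gamma_from_iqr = 0.5 * (dist.ppf(0.75) - dist.ppf(0.25))

# Differential entropy of a Cauchy is log(4*pi*gamma) nats, i.e. log2(4*pi*gamma) bits
H_bits = np.log2(4 * np.pi * gamma)
H_scipy_bits = dist.entropy() / np.log(2)
```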
# +
# Estimate entropy using knn
H_n_knn = np.zeros(N_sub)
for i in range(N_sub):
    y_k = y_nk[i,:].copy()
    H_n_knn[i] = entropy_continuous(y_k, knn=5, uncertainty=False)
H_ygx_knn = np.mean(H_n_knn)
dH_ygx_knn = np.std(H_n_knn)/np.sqrt(N_sub)
print(f'H[y|x] (knn): {H_ygx_knn:.4f} +- {dH_ygx_knn:.4f} bits')
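`entropy_continuous` above wraps a k-nearest-neighbor entropy estimator. For intuition, a minimal 1-D Kozachenko-Leonenko version can be sketched as follows (this is an illustrative sketch, not MAVE-NN's actual implementation; the function name and the O(n²) neighbor search are my own):

```python
import numpy as np
from scipy.special import digamma

def knn_entropy_1d(x, k=5):
    """Kozachenko-Leonenko differential entropy estimate (in nats) for 1-D data."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    eps = np.empty(n)
    for i in range(n):
        d = np.sort(np.abs(x - x[i]))
        eps[i] = d[k]  # distance to the k-th nearest neighbor (d[0] is the point itself)
    # c_d = 2 in one dimension (length of the unit "ball"), hence the log(2) term
    return digamma(n) - digamma(k) + np.log(2.0) + np.mean(np.log(eps))

rng = np.random.default_rng(0)
sample = rng.normal(scale=1.0, size=2000)
H_est = knn_entropy_1d(sample, k=5)
H_true = 0.5 * np.log(2 * np.pi * np.e)  # exact entropy of a standard normal, in nats
```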
# +
import warnings
warnings.filterwarnings("ignore")
# Estimate entropy using deft
N_deft = 1000
H_n_deft = np.ones(N_deft)*np.nan
for i in range(N_deft):
    y_k = y_nk[i,:]
    try:
        est = suftware.DensityEstimator(y_k, num_posterior_samples=0)
        print('.', end='')
    except SystemExit:
        # DensityEstimator raises SystemExit on failure; skip this sample
        # (otherwise a stale estimator from a previous iteration would be reused)
        print('x', end='')
        continue
    if i%100==99:
        print(i+1)
    stats_df = est.get_stats(use_weights=False)
    H_n_deft[i] = stats_df.loc['star','entropy']
ix = np.isfinite(H_n_deft)
H_ygx_deft = np.mean(H_n_deft[ix])
dH_ygx_deft = np.std(H_n_deft[ix])/np.sqrt(sum(ix))
print(f'\nH[y|x] (deft): {H_ygx_deft:.4f} +- {dH_ygx_deft:.4f} bits')
# +
# Compute cauchy and gaussian distributions
from scipy.stats import norm, cauchy
linewidth=3
fig, axs = plt.subplots(4,2,figsize=[15,15])
for n, ax in enumerate(axs.ravel()):
    # Visualize marginals at selected n
    y_k = y_nk[n,:]
    # Show histogram
    sns.histplot(y_k, stat="density", ax=ax, element="step", color='C9', label='sim')
    # Estimate and plot DEFT fit
    est = suftware.DensityEstimator(y_k, num_posterior_samples=0)
    y_lim = est.bounding_box
    y_grid = np.linspace(y_lim[0], y_lim[1], 1000)
    ax.plot(y_grid, est.evaluate(y_grid), label='deft', linewidth=linewidth)
    # Plot Gaussian fit
    f_gauss = norm(loc=y_n[n], scale=np.sqrt(dy2_n[n])).pdf
    ax.plot(y_grid, f_gauss(y_grid), label='gauss', linewidth=linewidth)
    # Plot Cauchy fit
    f_cauchy = cauchy(loc=y_n[n], scale=gamma_n[n]).pdf
    ax.plot(y_grid, f_cauchy(y_grid), label='cauchy', linewidth=linewidth)
    ax.legend()
# +
# Use DEFT to estimate entropy of full dataset
y = y_n.copy()
linewidth=3
fig, ax = plt.subplots(1,1,figsize=[8,6])
# Show histogram
sns.histplot(y, stat="density", ax=ax, element="step", color='C9', label='sim')
# Estimate and plot DEFT fit
est = suftware.DensityEstimator(y, num_posterior_samples=100)
y_lim = est.bounding_box
y_grid = np.linspace(y_lim[0], y_lim[1], 1000)
ax.plot(y_grid, est.evaluate(y_grid), label='deft', linewidth=linewidth)
# Compute entropy of dataset using DEFT
stats = est.get_stats()
H_y_deft = stats.loc['posterior mean', 'entropy']
dH_y_deft = stats.loc['posterior RMSD', 'entropy']
print(f'H[y] (deft): {H_y_deft:.4f} +- {dH_y_deft:.4f} bits')
# -
# Compute entropy of dataset using knn
y = y_n.copy()
H_y_knn, dH_y_knn = entropy_continuous(y, knn=5, uncertainty=True, num_subsamples=100)
print(f'H[y] (knn): {H_y_knn:.4f} +- {dH_y_knn:.4f} bits')
# +
# Report mutual information values for various H_ygx estimates
for (name, H_ygx, dH_ygx) in [('naive', H_ygx_naive, dH_ygx_naive),
                              ('gauss', H_ygx_gauss, dH_ygx_gauss),
                              ('cauchy', H_ygx_cauchy, dH_ygx_cauchy),
                              ('knn', H_ygx_knn, dH_ygx_knn),
                              ('deft', H_ygx_deft, dH_ygx_deft)]:
    I_y_x = H_y_knn - H_ygx
    dI_y_x = np.sqrt(dH_y_knn**2 + dH_ygx**2)
    print(f'I_intr ({name}): {I_y_x:.4f} +- {dI_y_x:.4f} bits')
# Would be nice to see a plot of this
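One possible realization of the plot asked for above is an errorbar chart of the estimates. The numbers below are placeholders, purely illustrative; in the notebook the `(name, I_y_x, dI_y_x)` triples computed in the loop above should be collected and used instead:

```python
import matplotlib
matplotlib.use('Agg')  # headless backend so the sketch runs without a display
import matplotlib.pyplot as plt
import numpy as np

# Placeholder values -- replace with the estimates computed above
names = ['naive', 'gauss', 'cauchy', 'knn', 'deft']
I_vals = np.array([2.00, 2.10, 2.20, 2.15, 2.18])
dI_vals = np.array([0.05, 0.04, 0.06, 0.05, 0.05])

fig, ax = plt.subplots(figsize=[6, 4])
ax.errorbar(np.arange(len(names)), I_vals, yerr=dI_vals, fmt='o', capsize=4)
ax.set_xticks(np.arange(len(names)))
ax.set_xticklabels(names)
ax.set_ylabel('I_intr (bits)')
ax.set_title('Intrinsic information estimates')
```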
# Source notebook: mavenn/examples/datasets/gb1/gb1_intrinsic_info.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Application of Classification with Scikit-Learn
#
# Classification of Iris Dataset with Scikit-Learn. The code was taken from [the Scikit-learn Examples](https://scikit-learn.org/stable/auto_examples/ensemble/plot_forest_iris.html).
# +
# Licensed under the New BSD License
# Copyright (c) 2007–2020 The scikit-learn developers.
# All rights reserved.
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# a. Redistributions of source code must retain the above copyright notice,
# this list of conditions and the following disclaimer.
# b. Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# c. Neither the name of the Scikit-learn Developers nor the names of
# its contributors may be used to endorse or promote products
# derived from this software without specific prior written
# permission.
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE FOR
# ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
# DAMAGE.
# +
import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import fetch_openml
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.inspection import permutation_importance
from sklearn.compose import ColumnTransformer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder
# +
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn.datasets import load_iris
from sklearn.ensemble import (RandomForestClassifier, ExtraTreesClassifier,
AdaBoostClassifier)
from sklearn.tree import DecisionTreeClassifier
# Parameters
n_classes = 3
n_estimators = 30
cmap = plt.cm.RdYlBu
plot_step = 0.02 # fine step width for decision surface contours
plot_step_coarser = 0.5 # step widths for coarse classifier guesses
RANDOM_SEED = 13 # fix the seed on each iteration
# Load data
iris = load_iris()
plot_idx = 1
models = [DecisionTreeClassifier(max_depth=None),
RandomForestClassifier(n_estimators=n_estimators),
ExtraTreesClassifier(n_estimators=n_estimators),
AdaBoostClassifier(DecisionTreeClassifier(max_depth=3),
n_estimators=n_estimators)]
for pair in ([0, 1], [0, 2], [2, 3]):
    for model in models:
        # We only take the two corresponding features
        X = iris.data[:, pair]
        y = iris.target
        # Shuffle
        idx = np.arange(X.shape[0])
        np.random.seed(RANDOM_SEED)
        np.random.shuffle(idx)
        X = X[idx]
        y = y[idx]
        # Standardize
        mean = X.mean(axis=0)
        std = X.std(axis=0)
        X = (X - mean) / std
        # Train
        model.fit(X, y)
        scores = model.score(X, y)
        # Create a title for each column and the console by using str() and
        # slicing away useless parts of the string
        model_title = str(type(model)).split(
            ".")[-1][:-2][:-len("Classifier")]
        model_details = model_title
        if hasattr(model, "estimators_"):
            model_details += " with {} estimators".format(
                len(model.estimators_))
        print(model_details + " with features", pair,
              "has a score of", scores)
        plt.subplot(3, 4, plot_idx)
        if plot_idx <= len(models):
            # Add a title at the top of each column
            plt.title(model_title, fontsize=9)
        # Now plot the decision boundary using a fine mesh as input to a
        # filled contour plot
        x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
        y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
        xx, yy = np.meshgrid(np.arange(x_min, x_max, plot_step),
                             np.arange(y_min, y_max, plot_step))
        # Plot either a single DecisionTreeClassifier or alpha blend the
        # decision surfaces of the ensemble of classifiers
        if isinstance(model, DecisionTreeClassifier):
            Z = model.predict(np.c_[xx.ravel(), yy.ravel()])
            Z = Z.reshape(xx.shape)
            cs = plt.contourf(xx, yy, Z, cmap=cmap)
        else:
            # Choose alpha blend level with respect to the number of estimators
            # that are in use (noting that AdaBoost can use fewer estimators
            # than its maximum if it achieves a good enough fit early on)
            estimator_alpha = 1.0 / len(model.estimators_)
            for tree in model.estimators_:
                Z = tree.predict(np.c_[xx.ravel(), yy.ravel()])
                Z = Z.reshape(xx.shape)
                cs = plt.contourf(xx, yy, Z, alpha=estimator_alpha, cmap=cmap)
        # Build a coarser grid to plot a set of ensemble classifications
        # to show how these are different to what we see in the decision
        # surfaces. These points are regularly spaced and do not have a
        # black outline
        xx_coarser, yy_coarser = np.meshgrid(
            np.arange(x_min, x_max, plot_step_coarser),
            np.arange(y_min, y_max, plot_step_coarser))
        Z_points_coarser = model.predict(np.c_[xx_coarser.ravel(),
                                               yy_coarser.ravel()]
                                         ).reshape(xx_coarser.shape)
        cs_points = plt.scatter(xx_coarser, yy_coarser, s=15,
                                c=Z_points_coarser, cmap=cmap,
                                edgecolors="none")
        # Plot the training points, these are clustered together and have a
        # black outline
        plt.scatter(X[:, 0], X[:, 1], c=y,
                    cmap=ListedColormap(['r', 'y', 'b']),
                    edgecolor='k', s=20)
        plot_idx += 1  # move on to the next plot in sequence
plt.suptitle("Classifiers on feature subsets of the Iris dataset", fontsize=12)
plt.axis("tight")
plt.tight_layout(h_pad=0.2, w_pad=0.2, pad=2.5)
plt.show()
# -
# Source notebook: scikitlearn/01-NumbersRandomForests.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Import the required libraries
import sys
from nyoka import PMML43Ext as ny
from datetime import datetime
# ### Define a pre-processing function
def pre_process(df):
    import pandas as pd
    df['z'] = df['x'] + df['y']
# ### Using Inspect library to get pre-process function's source
import inspect
inspect.getsource(pre_process)
# ### Using Nyoka "script" class to export our function
scr = ny.script(content=inspect.getsource(pre_process), for_="modelName", class_="preprocessing")
scr.export(sys.stdout,0,"")
# ### Checking the same with pmml object
# +
pmml = ny.PMML(version="4.3Ext",script=[ny.script(content=inspect.getsource(pre_process),for_="modelName")],
Header=ny.Header(copyright="Copyright (c) 2018 Software AG",
description="DEMO!!!",
Timestamp=ny.Timestamp(datetime.now())))
pmml_f_path = "preprocessing_using_script_tag.pmml"
pmml.export(open(pmml_f_path, "w"), 0, "")
import os
if os.path.exists(pmml_f_path):
    print("PMML generated successfully.")
    print("Your PMML file name is " + pmml_f_path + ".")
# -
# Source notebook: nyoka/tests/executed_Nyoka_Script_Tag.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# This is an example of one analysis done in
# Multi-study inference of regulatory networks for more accurate models of gene regulation
# https://doi.org/10.1371/journal.pcbi.1006591
# +
# Load modules
from inferelator import utils
from inferelator.distributed.inferelator_mp import MPControl
from inferelator import workflow
# Needed by the crossvalidation wrappers used below
from inferelator.crossvalidation_workflow import CrossValidationManager
# Set verbosity level to "Talky"
utils.Debug.set_verbose_level(1)
# +
# Set the location of the input data and the desired location of the output files
DATA_DIR = '../data/bsubtilis'
OUTPUT_DIR = '~/bsubtilis_inference/'
PRIORS_FILE_NAME = 'gold_standard.tsv.gz'
GOLD_STANDARD_FILE_NAME = 'gold_standard.tsv.gz'
TF_LIST_FILE_NAME = 'tf_names.tsv'
# GEO Record GSE67023 data
BSUBTILIS_1_EXPRESSION = 'GSE67023_expression.tsv.gz'
BSUBTILIS_1_METADATA = 'GSE67023_meta_data.tsv'
# GEO Record GSE27219 data
BSUBTILIS_2_EXPRESSION = 'expression.tsv.gz'
BSUBTILIS_2_METADATA = 'meta_data.tsv'
CV_SEEDS = list(range(42, 52))
# +
# Start Multiprocessing Engine
# Default to a single computer. Setting up a cluster is left as an exercise to the reader.
n_cores_dask = 200
activate_path = '~/.local/anaconda3/bin/activate'
dask_engine = False
n_cores_local = 3
local_engine = True
# The if __name__ is __main__ pragma protects against runaway multiprocessing
# Dask requires a slurm controller in an HPC environment.
# The conda or venv activate script is necessary to set the worker environment
# This code does NOT set the environment for the current process, only for workers
if __name__ == '__main__' and dask_engine:
    MPControl.set_multiprocess_engine("dask-cluster")
    MPControl.client.minimum_cores = n_cores_dask
    MPControl.client.maximum_cores = n_cores_dask
    MPControl.client.walltime = '48:00:00'
    MPControl.client.add_worker_env_line('module load slurm')
    MPControl.client.add_worker_env_line('module load gcc/8.3.0')
    MPControl.client.add_worker_env_line('source ' + activate_path)
    MPControl.client.cluster_controller_options.append("-p ccb")
    MPControl.connect()
# Multiprocessing uses the pathos implementation of multiprocessing (with dill instead of cPickle)
# This is suited for a single computer, but will likely be too slow for the example here
if __name__ == '__main__' and local_engine:
    MPControl.set_multiprocess_engine("multiprocessing")
    MPControl.client.processes = n_cores_local
    MPControl.connect()
# +
# Inference on B. subtilis data set 1 (GSE67023) with BBSR
# Using the crossvalidation wrapper
# Run the regression 10 times and hold 20% of the gold standard out of the priors for testing each time
# Each run is seeded differently (and therefore has different holdouts)
# Create a crossvalidation wrapper
cv_wrap = CrossValidationManager()
# Assign variables for grid search
cv_wrap.add_gridsearch_parameter('random_seed', CV_SEEDS)
# Create a worker
worker = workflow.inferelator_workflow(regression="bbsr", workflow="tfa")
worker.set_file_paths(input_dir=DATA_DIR,
output_dir=OUTPUT_DIR,
expression_matrix_file=BSUBTILIS_1_EXPRESSION,
tf_names_file=TF_LIST_FILE_NAME,
meta_data_file=BSUBTILIS_1_METADATA,
priors_file=PRIORS_FILE_NAME,
gold_standard_file=GOLD_STANDARD_FILE_NAME)
worker.set_file_properties(expression_matrix_columns_are_genes=False)
worker.set_run_parameters(num_bootstraps=5)
worker.set_crossvalidation_parameters(split_gold_standard_for_crossvalidation=True, cv_split_ratio=0.2)
worker.append_to_path("output_dir", "bsubtilis_1")
# Assign the worker to the crossvalidation wrapper
cv_wrap.workflow = worker
# Run
cv_wrap.run()
# +
# Inference on B. subtilis data set 2 (GSE27219) with BBSR
# Using the crossvalidation wrapper
# Run the regression 10 times and hold 20% of the gold standard out of the priors for testing each time
# Each run is seeded differently (and therefore has different holdouts)
# Create a crossvalidation wrapper
cv_wrap = CrossValidationManager()
# Assign variables for grid search
cv_wrap.add_gridsearch_parameter('random_seed', CV_SEEDS)
# Create a worker
worker = workflow.inferelator_workflow(regression="bbsr", workflow="tfa")
worker.set_file_paths(input_dir=DATA_DIR,
output_dir=OUTPUT_DIR,
expression_matrix_file=BSUBTILIS_2_EXPRESSION,
tf_names_file=TF_LIST_FILE_NAME,
meta_data_file=BSUBTILIS_2_METADATA,
priors_file=PRIORS_FILE_NAME,
gold_standard_file=GOLD_STANDARD_FILE_NAME)
worker.set_file_properties(expression_matrix_columns_are_genes=False)
worker.set_run_parameters(num_bootstraps=5)
worker.set_crossvalidation_parameters(split_gold_standard_for_crossvalidation=True, cv_split_ratio=0.2)
worker.append_to_path("output_dir", "bsubtilis_2")
# Assign the worker to the crossvalidation wrapper
cv_wrap.workflow = worker
# Run
cv_wrap.run()
# +
# Inference on individual data sets with BBSR
# A final network is generated from the two separate networks
# Using the crossvalidation wrapper
# Run the regression 10 times and hold 20% of the gold standard out of the priors for testing each time
# Each run is seeded differently (and therefore has different holdouts)
# Create a crossvalidation wrapper
cv_wrap = CrossValidationManager()
# Assign variables for grid search
cv_wrap.add_gridsearch_parameter('random_seed', CV_SEEDS)
# Create a worker
worker = workflow.inferelator_workflow(regression="bbsr-by-task", workflow="multitask")
worker.set_file_paths(input_dir=DATA_DIR, output_dir=OUTPUT_DIR,
gold_standard_file=GOLD_STANDARD_FILE_NAME)
worker.create_task(task_name="Bsubtilis_1",
input_dir=DATA_DIR,
expression_matrix_file=BSUBTILIS_1_EXPRESSION,
tf_names_file=TF_LIST_FILE_NAME,
meta_data_file=BSUBTILIS_1_METADATA,
priors_file=PRIORS_FILE_NAME,
workflow_type="tfa")
worker.create_task(task_name="Bsubtilis_2",
input_dir=DATA_DIR,
expression_matrix_file=BSUBTILIS_2_EXPRESSION,
tf_names_file=TF_LIST_FILE_NAME,
meta_data_file=BSUBTILIS_2_METADATA,
priors_file=PRIORS_FILE_NAME,
workflow_type="tfa")
worker.set_run_parameters(num_bootstraps=5)
worker.set_crossvalidation_parameters(split_gold_standard_for_crossvalidation=True, cv_split_ratio=0.2)
worker.append_to_path("output_dir", "bsubtilis_1_2_STL")
# Assign the worker to the crossvalidation wrapper
cv_wrap.workflow = worker
# Run
cv_wrap.run()
# +
# Inference on individual data sets with AMuSR
# Using the crossvalidation wrapper
# Run the regression 10 times and hold 20% of the gold standard out of the priors for testing each time
# Each run is seeded differently (and therefore has different holdouts)
# Create a crossvalidation wrapper
cv_wrap = CrossValidationManager()
# Assign variables for grid search
cv_wrap.add_gridsearch_parameter('random_seed', CV_SEEDS)
# Create a worker
worker = workflow.inferelator_workflow(regression="amusr", workflow="multitask")
worker.set_file_paths(input_dir=DATA_DIR, output_dir=OUTPUT_DIR,
gold_standard_file=GOLD_STANDARD_FILE_NAME)
worker.create_task(task_name="Bsubtilis_1",
input_dir=DATA_DIR,
expression_matrix_file=BSUBTILIS_1_EXPRESSION,
tf_names_file=TF_LIST_FILE_NAME,
meta_data_file=BSUBTILIS_1_METADATA,
priors_file=PRIORS_FILE_NAME,
workflow_type="tfa")
worker.create_task(task_name="Bsubtilis_2",
input_dir=DATA_DIR,
expression_matrix_file=BSUBTILIS_2_EXPRESSION,
tf_names_file=TF_LIST_FILE_NAME,
meta_data_file=BSUBTILIS_2_METADATA,
priors_file=PRIORS_FILE_NAME,
workflow_type="tfa")
worker.set_run_parameters(num_bootstraps=5)
worker.set_crossvalidation_parameters(split_gold_standard_for_crossvalidation=True, cv_split_ratio=0.2)
worker.append_to_path("output_dir", "bsubtilis_1_2_MTL")
# Assign the worker to the crossvalidation wrapper
cv_wrap.workflow = worker
# Run
cv_wrap.run()
# +
# Final network
# Create a worker
worker = workflow.inferelator_workflow(regression="amusr", workflow="multitask")
worker.set_file_paths(input_dir=DATA_DIR, output_dir=OUTPUT_DIR,
gold_standard_file=GOLD_STANDARD_FILE_NAME)
worker.create_task(task_name="Bsubtilis_1",
input_dir=DATA_DIR,
expression_matrix_file=BSUBTILIS_1_EXPRESSION,
tf_names_file=TF_LIST_FILE_NAME,
meta_data_file=BSUBTILIS_1_METADATA,
priors_file=PRIORS_FILE_NAME,
workflow_type="tfa")
worker.create_task(task_name="Bsubtilis_2",
input_dir=DATA_DIR,
expression_matrix_file=BSUBTILIS_2_EXPRESSION,
tf_names_file=TF_LIST_FILE_NAME,
meta_data_file=BSUBTILIS_2_METADATA,
priors_file=PRIORS_FILE_NAME,
workflow_type="tfa")
worker.set_crossvalidation_parameters(split_gold_standard_for_crossvalidation=False, cv_split_ratio=None)
worker.append_to_path("output_dir", "MTL_Final")
worker.set_run_parameters(num_bootstraps=50, random_seed=100)
final_network = worker.run()
# Source notebook: examples/Castro_2019_PLoS_Comp_Bio.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# ## Inception V1 Example
# In this notebook we will go through the process of converting the Inception V1 model to a Neural Network Classifier CoreML model that directly predicts the class label of the input image. We will highlight the importance of setting the image preprocessing parameters correctly to get the right results.
# Let's get started!
# Let's first download the Inception V1 frozen TF graph (the .pb file)
# +
# Download the model and class label package
import os
import urllib
import tarfile
def download_file_and_unzip(url, dir_path='.'):
    """Download the frozen TensorFlow model and unzip it.
    url - The URL address of the frozen file
    dir_path - local directory
    """
    if not os.path.exists(dir_path):
        os.makedirs(dir_path)
    k = url.rfind('/')
    fname = url[k+1:]
    fpath = os.path.join(dir_path, fname)
    if not os.path.exists(fpath):
        urllib.urlretrieve(url, fpath)
    tar = tarfile.open(fpath)
    tar.extractall(dir_path)
    tar.close()
inception_v1_url = 'https://storage.googleapis.com/download.tensorflow.org/models/inception_v1_2016_08_28_frozen.pb.tar.gz'
download_file_and_unzip(inception_v1_url)
# -
# For conversion to CoreML, we need to find the input and output tensor names in the TF graph. These will also be required to run the TF graph for the numerical accuracy check. Let's load the TF graph def and find these names
# +
# Load the TF graph definition
import tensorflow as tf
tf_model_path = './inception_v1_2016_08_28_frozen.pb'
with open(tf_model_path, 'rb') as f:
    serialized = f.read()
tf.reset_default_graph()
original_gdef = tf.GraphDef()
original_gdef.ParseFromString(serialized)
# Let's get some details about a few ops in the beginning and the end of the graph
with tf.Graph().as_default() as g:
    tf.import_graph_def(original_gdef, name='')
    ops = g.get_operations()
    N = len(ops)
    for i in [0,1,2,N-3,N-2,N-1]:
        print('\n\nop id {} : op type: "{}"'.format(str(i), ops[i].type))
        print('input(s):'),
        for x in ops[i].inputs:
            print("name = {}, shape: {}, ".format(x.name, x.get_shape())),
        print('\noutput(s):'),
        for x in ops[i].outputs:
            print("name = {}, shape: {},".format(x.name, x.get_shape())),
# -
# The output of the Placeholder op is the input ("input:0"), and the output of the Softmax op towards the end of the graph is the output ("InceptionV1/Logits/Predictions/Softmax:0"). Let's convert to an mlmodel now.
# +
import tfcoreml
# Supply a dictionary of input tensors' name and shape (with batch axis)
input_tensor_shapes = {"input:0":[1,224,224,3]} # batch size is 1
#providing the image_input_names argument converts the input into an image for CoreML
image_input_name = ['input:0']
# Output CoreML model path
coreml_model_file = './inception_v1.mlmodel'
# The TF model's output tensor name
output_tensor_names = ['InceptionV1/Logits/Predictions/Softmax:0']
# class label file: providing this will make a "Classifier" CoreML model
class_labels = 'imagenet_slim_labels.txt'
# Call the converter. This may take a while
coreml_model = tfcoreml.convert(
tf_model_path=tf_model_path,
mlmodel_path=coreml_model_file,
input_name_shape_dict=input_tensor_shapes,
output_feature_names=output_tensor_names,
image_input_names = image_input_name,
class_labels = class_labels)
# -
# Let's load an image for testing. We will get predictions on this image using the TF model and the corresponding mlmodel.
# Now we're ready to test out the CoreML model with a real image!
# Load an image
import numpy as np
import PIL
import requests
from io import BytesIO
from matplotlib.pyplot import imshow
# This is an image of a golden retriever from Wikipedia
img_url = 'https://upload.wikimedia.org/wikipedia/commons/9/93/Golden_Retriever_Carlos_%2810581910556%29.jpg'
response = requests.get(img_url)
# %matplotlib inline
img = PIL.Image.open(BytesIO(response.content))
imshow(np.asarray(img))
# +
# for getting CoreML predictions we directly pass in the PIL image after resizing
import coremltools
img = img.resize([224,224], PIL.Image.ANTIALIAS)
coreml_inputs = {'input__0': img}
coreml_output = coreml_model.predict(coreml_inputs, useCPUOnly=True)
coreml_pred_dict = coreml_output['InceptionV1__Logits__Predictions__Softmax__0']
coreml_predicted_class_label = coreml_output['classLabel']
#for getting TF prediction we get the numpy array of the image
img_np = np.array(img).astype(np.float32)
print 'image shape:', img_np.shape
print 'first few values: ', img_np.flatten()[0:4], 'max value: ', np.amax(img_np)
img_tf = np.expand_dims(img_np, axis = 0) #now shape is [1,224,224,3] as required by TF
# Evaluate TF and get the highest label
tf_input_name = 'input:0'
tf_output_name = 'InceptionV1/Logits/Predictions/Softmax:0'
with tf.Session(graph = g) as sess:
    tf_out = sess.run(tf_output_name,
                      feed_dict={tf_input_name: img_tf})
tf_out = tf_out.flatten()
idx = np.argmax(tf_out)
label_file = 'imagenet_slim_labels.txt'
with open(label_file) as f:
    labels = f.readlines()
#print predictions
print('\n')
print("CoreML prediction class = {}, probability = {}".format(coreml_predicted_class_label,
                                                              str(coreml_pred_dict[coreml_predicted_class_label])))
print("TF prediction class = {}, probability = {}".format(labels[idx],
str(tf_out[idx])))
# -
# Both the predictions match, this means that the conversion was correct. However, the class label seems incorrect. What could be the reason? The answer is that we did not preprocess the image correctly before passing it to the neural network!! This is always a crucial step when using neural networks on images.
#
# How do we know what preprocessing to apply? This can be tricky to find sometimes. The approach is to find the source of the pre-trained model and check for the preprocessing that the author of the model used while training and evaluation. In this case, the TF model comes from the SLIM library so we find the preprocessing steps [here](https://github.com/tensorflow/models/blob/edb6ed22a801665946c63d650ab9a0b23d98e1b1/research/slim/preprocessing/inception_preprocessing.py#L243)
#
# We see that the image pixels have to be scaled to lie in the interval [-1,1]. Let's do that and get the TF predictions again!
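The scaling below is just the affine map $x \mapsto \frac{2}{255}x - 1$, which sends pixel value 0 to -1 and 255 to +1. A tiny endpoint check (a sketch, for illustration only):

```python
import numpy as np

# The SLIM Inception preprocessing maps pixel values [0, 255] -> [-1, 1]
pixels = np.array([0.0, 127.5, 255.0])
scaled = (2.0 / 255.0) * pixels - 1.0
```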
img_tf = (2.0/255.0) * img_tf - 1
with tf.Session(graph = g) as sess:
    tf_out = sess.run(tf_output_name,
                      feed_dict={tf_input_name: img_tf})
tf_out = tf_out.flatten()
idx = np.argmax(tf_out)
print("TF prediction class = {}, probability = {}".format(labels[idx],
str(tf_out[idx])))
# Much better now! The model is predicting a dog as the highest class.
#
# What about CoreML? CoreML automatically handles the image preprocessing when the input is of type image, so we do not have to change the input that we were passing in earlier. For the mlmodel we converted, let's see what the image biases and scale have been set to
# +
# Get image pre-processing parameters of a saved CoreML model
from coremltools.proto import FeatureTypes_pb2 as _FeatureTypes_pb2
spec = coremltools.models.utils.load_spec(coreml_model_file)
if spec.WhichOneof('Type') == 'neuralNetworkClassifier':
    nn = spec.neuralNetworkClassifier
if spec.WhichOneof('Type') == 'neuralNetwork':
    nn = spec.neuralNetwork
if spec.WhichOneof('Type') == 'neuralNetworkRegressor':
    nn = spec.neuralNetworkRegressor
preprocessing = nn.preprocessing[0].scaler
print 'channel scale: ', preprocessing.channelScale
print 'blue bias: ', preprocessing.blueBias
print 'green bias: ', preprocessing.greenBias
print 'red bias: ', preprocessing.redBias
inp = spec.description.input[0]
if inp.type.WhichOneof('Type') == 'imageType':
    colorspace = _FeatureTypes_pb2.ImageFeatureType.ColorSpace.Name(inp.type.imageType.colorSpace)
    print 'colorspace: ', colorspace
# -
# As suspected, they are not correct. Let's convert the model again and set them correctly this time. Note that the channel scale is multiplied first and then the bias is added.
# Call the converter. This may take a while
coreml_model = tfcoreml.convert(
tf_model_path=tf_model_path,
mlmodel_path=coreml_model_file,
input_name_shape_dict=input_tensor_shapes,
output_feature_names=output_tensor_names,
image_input_names = image_input_name,
class_labels = class_labels,
red_bias = -1,
green_bias = -1,
blue_bias = -1,
image_scale = 2.0/255.0)
# Call CoreML predict again
coreml_output = coreml_model.predict(coreml_inputs, useCPUOnly=True)
coreml_pred_dict = coreml_output['InceptionV1__Logits__Predictions__Softmax__0']
coreml_predicted_class_label = coreml_output['classLabel']
print("CoreML prediction class = {}, probability = {}".format(coreml_predicted_class_label,
str(coreml_pred_dict[coreml_predicted_class_label])))
# Yes, now it matches the TF output and is correct!!
#
# Note that predictions with the default CoreML predict call (when the flag useCPUOnly=True is skipped) may vary slightly since it uses a lower precision optimized path that runs faster.
# Source notebook: examples/inception_v1_preprocessing_steps.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <b>Evaluate the given integral</b>
# $29. \int \frac{e^x + e^{-x}}{e^x - e^{-x}}dx$
# $u = e^x - e^{-x}$
# $du = (e^x + e^{-x})\,dx$
# <b>Applying the substitution</b>
# $\int \frac{e^x + e^{-x}}{e^x - e^{-x}}dx \rightarrow \int \frac{1}{u} du$
# <b>Integrating $\int \frac{1}{u}du$</b>
# $\int \frac{1}{u}du = \ln|u| + C$
# $\int \frac{1}{u}du = \ln\left|e^x - e^{-x}\right| + C$
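The result can be checked numerically: differentiating $\ln(e^x - e^{-x})$ should reproduce the integrand. A quick finite-difference sketch (restricted to $x > 0$ so the log argument is positive):

```python
import math

def integrand(x):
    return (math.exp(x) + math.exp(-x)) / (math.exp(x) - math.exp(-x))

def antiderivative(x):
    return math.log(math.exp(x) - math.exp(-x))

# Central finite difference of the antiderivative at a few points x > 0
h = 1e-6
errors = [abs((antiderivative(x + h) - antiderivative(x - h)) / (2 * h) - integrand(x))
          for x in (0.5, 1.0, 2.0)]
```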
# Source notebook: Problemas 5.2/29.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Fitting Data
# +
import matplotlib.pyplot as plt
import numpy as np
from astropy.table import QTable
from astropy import units as u
from astropy import constants as const
from scipy.optimize import curve_fit
# -
# ---
#
# # Power on the Moon
#
# <img src="images/ApolloRTG.jpg" alt="Apollo_ALSEP_RTG" width="700">
#
# ---
#
# * The Apollo lunar mission deployed a series of experiments on the Moon.
# * The experiment package was called the Apollo Lunar Surface Experiments Package [(ALSEP)](https://en.wikipedia.org/wiki/Apollo_Lunar_Surface_Experiments_Package)
# * The ALSEP was powered by a radioisotope thermoelectric generator [(RTG)](https://en.wikipedia.org/wiki/Radioisotope_thermoelectric_generator)
# * An RTG is basically a fist-sized slug of Pu-238 wrapped in a material that generates electric power when heated.
# * Since the RTG is powered by a radioisotope, the output power decreases over time as the radioisotope decays.
# ---
# ## Read in the datafile
#
# The data file `/Data/Apollo_RTG.csv` contains the power output of the Apollo 12 RTG as a function of time.
#
# The data columns are
#
# * [Day] - Days on the Moon
# * [Power] - RTG power output in Watts
#
# Read in the datafile as an astropy `QTable`
# Add units to the columns
# ## Plot the Data
#
# * Day vs. Power
# * Fit the function with a (degree = 3) polynomial
# * Plot the fit with the data
# * Output size w:11in, h:8.5in
# * Make the plot look nice (including clear labels)
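The fitting step above can be sketched with `np.polyfit`/`np.polyval` on synthetic data. The coefficients below are invented stand-ins; the notebook should fit the Day/Power columns read from the CSV instead:

```python
import numpy as np

# Synthetic stand-in for the RTG data: a made-up cubic decay
days = np.linspace(0, 2500, 50)
power = 75.0 - 0.01 * days + 2e-6 * days**2 - 3e-10 * days**3

# Degree-3 polynomial fit, as requested above
coeffs = np.polyfit(days, power, deg=3)
fit = np.polyval(coeffs, days)
max_resid = np.max(np.abs(fit - power))

# Day at which the fitted power crosses 60 W: real root of p(day) - 60 = 0
shifted = coeffs.copy()
shifted[-1] -= 60.0
roots = np.roots(shifted)
real_roots = roots[np.isreal(roots)].real
```

The same `np.roots` pattern answers the later power-threshold questions; discard the complex roots, as the exercise warns.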
# ## Power over time
#
# * All of your answer should be formatted as sentences
# * For example: `The power on day 0 is VALUE UNIT`
# * Pay attention to the requested output units
# * Do not pick the complex roots!
#
# ### 1 - What was the power output on Day 0?
# + jupyter={"outputs_hidden": false}
# + jupyter={"outputs_hidden": false}
# -
# ### 2 - How many YEARS after landing could you still power a 60 W lightbulb?
# + jupyter={"outputs_hidden": false}
# -
# ### 3 - How many YEARS after landing could you still power a 5 W USB device?
# + jupyter={"outputs_hidden": false}
# -
# ### 4 - How many YEARS after landing until the power output is 0 W?
# + jupyter={"outputs_hidden": false}
# -
# ---
#
# # Fitting data to a function
#
# * The datafile `./Data/linedata.csv` contains two columns of data [no units]
# #### Read in the Data as an astropy `QTable`
# + tags=[]
# -
# #### Plot the Data
#
# * Output size w:11in, h:8.5in
# * Make the plot look nice (including clear labels and a legend)
# + tags=[]
# -
# ----
#
# #### Fit a gaussian of the form:
#
# $$ \huge f(x) = A e^{-\frac{(x - C)^2}{W}} $$
#
# * A = amplitude of the gaussian
# * C = x-value of the central peak of the gaussian
# * W = width of the gaussian
# * Find the values `(A,C,W)` that best fit the data
# + jupyter={"outputs_hidden": false}
# -
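# One common approach, assuming `scipy` is available, is `scipy.optimize.curve_fit`; synthetic points generated from known `(A, C, W)` stand in for `linedata.csv` here:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, A, C, W):
    """f(x) = A * exp(-(x - C)^2 / W)"""
    return A * np.exp(-((x - C) ** 2) / W)

# Synthetic data from known parameters plus a little noise
rng = np.random.default_rng(0)
x = np.linspace(-10, 10, 200)
y = gaussian(x, 5.0, 1.5, 4.0) + rng.normal(0, 0.05, x.size)

# p0 seeds the optimizer; rough guesses read off the plot are enough
popt, pcov = curve_fit(gaussian, x, y, p0=[y.max(), x[y.argmax()], 1.0])
A_fit, C_fit, W_fit = popt
print(f"A = {A_fit:.2f}, C = {C_fit:.2f}, W = {W_fit:.2f}")
```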
# #### Plot the Data and the Fit on the same plot
#
# * Output size w:11in, h:8.5in
# * Make the plot look nice (including clear labels and a legend)
# + jupyter={"outputs_hidden": false}
# -
# ---
#
# # Stellar Spectra
#
# #### The file `./Data/StarData.csv` contains the spectrum of a main sequence star
#
# * Col 1 - Wavelength `[angstroms]`
# * Col 2 - Normalized Flux `[no units]`
# #### Read in the Data as an astropy `QTable`
# + tags=[]
# -
# #### Add units to the `Wavelength` column
# #### Plot the Data
#
# * Output size w:11in, h:8.5in
# * Make the plot look nice (including clear labels and a legend)
# + tags=[]
# -
# #### Use [Wien's law](https://en.wikipedia.org/wiki/Wien%27s_displacement_law) to determine the temperature of the Star
#
# * **You will need to find the wavelength where the Flux is at a maximum**
# * Use the Astropy units and constants - do not hardcode
# + jupyter={"outputs_hidden": false}
# + jupyter={"outputs_hidden": false}
# -
# #### [Planck's Law](https://en.wikipedia.org/wiki/Planck%27s_law)
#
# * [Planck's Law](https://en.wikipedia.org/wiki/Planck%27s_law) describes the spectrum emitted by a blackbody at a temperature T
# * You will want to look at the $\large \lambda$ version
# * Hint: all of the units should cancel in the `exp()` part of the expression
# * Write a function to calculate the blackbody flux, at the above temperature, for all of the wavelength points in your data
# * Use the Astropy units and constants - do not hardcode
# * Scale the blackbody flux to `[0->1]`
# * Add a column to the table: `Blackbody`
# + jupyter={"outputs_hidden": false}
# Write a function
# + jupyter={"outputs_hidden": false}
# Apply the function
# + jupyter={"outputs_hidden": false}
# Normalize and add column
# -
# #### Plot the Data and the Blackbody fit on the same plot
#
# * Your blackbody fit should match the data pretty well.
# * Output size w:11in, h:8.5in
# * Make the plot look nice (including clear labels and a legend)
# + jupyter={"outputs_hidden": false}
# -
# ---
# ### Due Mon Feb 14 - 1 pm
# - `File -> Download as -> HTML (.html)`
# - `upload your .html and .ipynb file to the class Canvas page`
| HW_FittingData.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.6.2
# language: julia
# name: julia-1.6
# ---
# +
#
# Julia version of code adapted from
# https://github.com/rawlings-group/paresto/blob/master/examples/green_book/ABC.m
#
# -
using CSV, DataFrames
using DifferentialEquations, DiffEqSensitivity
using Plots
using Optim
using FiniteDiff, ForwardDiff
ABC_data = CSV.read("data_sets/ABC_data.csv", DataFrame)
# +
function rates!(dc, c, p, t)
cA = c[1]
cB = c[2]
cC = c[3]
k1 = p[1]
k2 = p[2]
dcA = -k1 * cA
dcB = k1 * cA - k2 * cB
dcC = k2 * cB
dc[1] = dcA
dc[2] = dcB
dc[3] = dcC
end
# -
c0 = [1.0, 0.0, 0.0]
p = [2.0, 1.0]
tspan = (0.0, 5.0)
prob = ODEProblem(rates!, c0, tspan, p)
sol = solve(prob, Rosenbrock23());
plot(sol)
function calc_SSE(p, data)
_prob = remake(prob, p = p)
sol = solve(_prob, Rosenbrock23())
sse = 0.0
for (i, t) in enumerate(data.t)
sse = sse + (sol(t)[1] - data.ca[i])^2 + (sol(t)[2] - data.cb[i])^2 + (sol(t)[3] - data.cc[i])^2
end
return sse
end
calc_SSE([5.0, 5.0], ABC_data)
res_pe = optimize(p -> calc_SSE(p, ABC_data), [5.0, 5.0], BFGS())
res_pe.minimizer
_prob = remake(prob, p = res_pe.minimizer)  # rebuild the ODE problem with the fitted parameters
sol = solve(_prob, Rosenbrock23());
plot(ABC_data.t, [ABC_data.ca, ABC_data.cb, ABC_data.cc], seriestype = :scatter, color = [:red :blue :green], label = ["ca" "cb" "cc"])
plot!(sol, color = [:red :blue :green], label = ["" "" ""])
H_ad = ForwardDiff.hessian(p -> calc_SSE(p, ABC_data), [2.0, 1.0])
H = FiniteDiff.finite_difference_hessian(p -> calc_SSE(p, ABC_data), [2.0, 1.0])
n = size(ABC_data)[1] * 3  # total number of measurements (3 species per time point)
n_par = 2                  # number of fitted parameters
mse = calc_SSE([2.0, 1.0], ABC_data)/(n - n_par)
cov_est = 2 * mse * inv(H_ad)
| julia_paresto/ABC_parmest_jbr.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:tensorflow27_p37_cpu_v1]
# language: python
# name: conda-env-tensorflow27_p37_cpu_v1-py
# ---
#
import ocifs
import ads
import oci
import tensorflow as tf
print('ocifs version:', ocifs.__version__)
print('ads version:', ads.__version__)
print('oci version:', oci.__version__)
print('tf version:', tf.__version__)
| check_versions.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] colab_type="text" id="djUvWu41mtXa"
# ##### Copyright 2019 The TensorFlow Authors.
# + cellView="form" colab={} colab_type="code" id="su2RaORHpReL"
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] colab_type="text" id="NztQK2uFpXT-"
# # TensorBoard Scalars: Logging training metrics in Keras
#
# <table class="tfo-notebook-buttons" align="left">
# <td>
#     <a target="_blank" href="https://tensorflow.google.cn/tensorboard/scalars_and_keras"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png" />View on tensorflow.google.cn</a>
# </td>
# <td>
#     <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/tensorboard/scalars_and_keras.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png" />Run in Google Colab</a>
# </td>
# <td>
#     <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/tensorboard/scalars_and_keras.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png" />View source on GitHub</a>
# </td>
# <td>
#     <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/tensorboard/scalars_and_keras.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png" />Download this notebook</a>
# </td>
# </table>
# + [markdown] colab_type="text" id="eDXRFe_qp5C3"
# ## Overview
#
#
# Machine learning invariably involves understanding key metrics such as loss, and how they change as training progresses. These metrics can help you understand if you are [overfitting](https://en.wikipedia.org/wiki/Overfitting), for example, or if you are unnecessarily training for too long. You may want to compare these metrics across different training runs to help debug and improve your model.
#
# TensorBoard's **Scalars Dashboard** allows you to visualize these metrics using a simple API with very little effort. This tutorial presents very basic examples to help you learn how to use these APIs with TensorBoard when developing your Keras model. You will learn how to use the Keras TensorBoard callback and TensorFlow Summary APIs to visualize default and custom scalars.
# + [markdown] colab_type="text" id="dG-nnZK9qW9z"
# ## Setup
# + colab={} colab_type="code" id="3U5gdCw_nSG3"
# Load the TensorBoard notebook extension
# %load_ext tensorboard
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="1qIKtOBrqc9Y" outputId="cb1b3125-6f75-4fe7-ac5e-07954b5d6847"
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from datetime import datetime
from packaging import version
import tensorflow as tf
from tensorflow import keras
import numpy as np
print("TensorFlow version: ", tf.__version__)
assert version.parse(tf.__version__).release[0] >= 2, \
"This notebook requires TensorFlow 2.0 or above."
# + [markdown] colab_type="text" id="6YDAoNCN3ZNS"
# ## Set up data for a simple regression
#
# You're now going to use [Keras](https://tensorflow.google.cn/guide/keras) to calculate a regression, i.e., find the best line of fit for a paired data set. (While using neural networks and gradient descent is [overkill for this kind of problem](https://stats.stackexchange.com/questions/160179/do-we-need-gradient-descent-to-find-the-coefficients-of-a-linear-regression-mode), it does make for a very easy to understand example.)
#
# You're going to use TensorBoard to observe how training and test **loss** change across epochs. Hopefully, you'll see training and test loss decrease over time and then remain steady.
#
# First, generate 1000 data points roughly along the line *y = 0.5x + 2*. Split these data points into training and test sets. Your hope is that the neural net learns this relationship.
# + colab={} colab_type="code" id="j-ryO6OxnQH_"
data_size = 1000
# Use 80% of the data for training
train_pct = 0.8
train_size = int(data_size * train_pct)
# Create random input data in the range (-1, 1)
x = np.linspace(-1, 1, data_size)
np.random.shuffle(x)
# Generate the output data
# y = 0.5x + 2 + noise
y = 0.5 * x + 2 + np.random.normal(0, 0.05, (data_size, ))
# Split the data into training and test sets
x_train, y_train = x[:train_size], y[:train_size]
x_test, y_test = x[train_size:], y[train_size:]
# + [markdown] colab_type="text" id="Je59_8Ts3rq0"
# ## Training the model and logging loss
#
# You're now ready to define, train, and evaluate your model.
#
# To log the loss scalar as you train, you'll do the following:
#
# 1. Create the Keras [TensorBoard callback](https://tensorflow.google.cn/api_docs/python/tf/keras/callbacks/TensorBoard)
# 2. Specify a log directory
# 3. Pass the TensorBoard callback to Keras' [Model.fit()](https://tensorflow.google.cn/api_docs/python/tf/keras/models/Model#fit).
#
# TensorBoard reads log data from the log directory hierarchy. In this notebook, the root log directory is ```logs/scalars```, suffixed by a timestamped subdirectory. The timestamped subdirectory enables you to easily identify and select training runs as you use TensorBoard and iterate on your model.
#
# + colab={"base_uri": "https://localhost:8080/", "height": 51} colab_type="code" id="VmEQwCon3i7m" outputId="edf1eca5-a759-41cf-d3f3-8ac734a06099"
logdir = "logs/scalars/" + datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = keras.callbacks.TensorBoard(log_dir=logdir)
model = keras.models.Sequential([
keras.layers.Dense(16, input_dim=1),
keras.layers.Dense(1),
])
model.compile(
loss='mse', # keras.losses.mean_squared_error
optimizer=keras.optimizers.SGD(lr=0.2),
)
print("Training ... With default parameters, this takes less than 10 seconds.")
training_history = model.fit(
x_train, # input
y_train, # output
batch_size=train_size,
verbose=0, # Suppress chatty output; use Tensorboard instead
epochs=100,
validation_data=(x_test, y_test),
callbacks=[tensorboard_callback],
)
print("Average test loss: ", np.average(training_history.history['loss']))
# + [markdown] colab_type="text" id="042k7GMERVkx"
# ## Examining loss using TensorBoard
# Now, start TensorBoard, specifying the root log directory you used above.
#
# Wait a few seconds for TensorBoard's UI to spin up.
# + colab={} colab_type="code" id="6pck56gKReON"
# %tensorboard --logdir logs/scalars
# + [markdown] colab_type="text" id="QmQHlG10Kpu2"
# <img class="tfo-display-only-on-site" src="https://github.com/tensorflow/tensorboard/blob/master/docs/images/scalars_loss.png?raw=1"/>
# + [markdown] colab_type="text" id="ciSIRibhRi6N"
# You may see TensorBoard display the message "No dashboards are active for the current data set". That's because initial logging data hasn't been saved yet. As training progresses, the Keras model will start logging data. TensorBoard will periodically refresh and show you your scalar metrics. If you're impatient, you can tap the refresh arrow at the top right.
#
# As you watch the training progress, note how both training and validation loss rapidly decrease, and then remain stable. In fact, you could have stopped training after 25 epochs, because the training didn't improve much after that point.
#
# Hover over the graph to see specific data points. You can also try zooming in with your mouse, or selecting part of the graph to view more detail.
#
# Notice the "Runs" selector on the left. A "run" represents a set of logs from a round of training, in this case the result of Model.fit(). Developers typically have many runs, as they experiment and develop their model over time.
#
# Use the Runs selector to choose specific runs, or choose from only training or validation. Comparing runs will help you evaluate which version of your code is solving your problem better.
#
#
# + [markdown] colab_type="text" id="finK0GfYyefe"
# TensorBoard's loss graph demonstrates that the loss consistently decreased for both training and validation and then remained stable. That means the model's metrics are likely very good! Now see how the model actually behaves in real life.
#
# Given the input data (60, 25, 2), the line *y = 0.5x + 2* should yield (32, 14.5, 3). Does the model agree?
# + colab={"base_uri": "https://localhost:8080/", "height": 68} colab_type="code" id="EuiLgxQstt32" outputId="0a957477-58fe-47b4-c366-06520250b59d"
print(model.predict([60, 25, 2]))
# Ideally, the output should be:
# [[32.0]
# [14.5]
# [ 3.0]]
# + [markdown] colab_type="text" id="bom4MdeewRKS"
# Not bad!
# + [markdown] colab_type="text" id="vvwGmJK9XWmh"
# ## Logging custom scalars
#
# What if you want to log custom values, such as a [dynamic learning rate](https://www.jeremyjordan.me/nn-learning-rate/)? To do that, you need to use the TensorFlow Summary API.
#
# Retrain the regression model and log a custom learning rate. Here's how:
#
# 1. Create a file writer, using ```tf.summary.create_file_writer()```.
# 2. Define a custom learning rate function. This will be passed to the Keras [LearningRateScheduler](https://tensorflow.google.cn/api_docs/python/tf/keras/callbacks/LearningRateScheduler) callback.
# 3. Inside the learning rate function, use ```tf.summary.scalar()``` to log the custom learning rate.
# 4. Pass the LearningRateScheduler callback to Model.fit().
#
# In general, to log a custom scalar, you need to use ```tf.summary.scalar()``` with a file writer. The file writer is responsible for writing data for this run to the specified directory and is implicitly used when you call ```tf.summary.scalar()```.
# + colab={} colab_type="code" id="XB95ltRiXVXk"
logdir = "logs/scalars/" + datetime.now().strftime("%Y%m%d-%H%M%S")
file_writer = tf.summary.create_file_writer(logdir + "/metrics")
file_writer.set_as_default()
def lr_schedule(epoch):
"""
Returns a custom learning rate that decreases as epochs progress.
"""
learning_rate = 0.2
if epoch > 10:
learning_rate = 0.02
if epoch > 20:
learning_rate = 0.01
if epoch > 50:
learning_rate = 0.005
tf.summary.scalar('learning rate', data=learning_rate, step=epoch)
return learning_rate
lr_callback = keras.callbacks.LearningRateScheduler(lr_schedule)
tensorboard_callback = keras.callbacks.TensorBoard(log_dir=logdir)
model = keras.models.Sequential([
keras.layers.Dense(16, input_dim=1),
keras.layers.Dense(1),
])
model.compile(
loss='mse', # keras.losses.mean_squared_error
optimizer=keras.optimizers.SGD(),
)
training_history = model.fit(
x_train, # input
y_train, # output
batch_size=train_size,
verbose=0, # Suppress chatty output; use Tensorboard instead
epochs=100,
validation_data=(x_test, y_test),
callbacks=[tensorboard_callback, lr_callback],
)
# + [markdown] colab_type="text" id="pck8OQEjayDM"
# Let's look at TensorBoard again.
# + colab={} colab_type="code" id="0sjM2wXGa0mF"
# %tensorboard --logdir logs/scalars
# + [markdown] colab_type="text" id="GkIahGZKK9I7"
# <img class="tfo-display-only-on-site" src="https://github.com/tensorflow/tensorboard/blob/master/docs/images/scalars_custom_lr.png?raw=1"/>
# + [markdown] colab_type="text" id="RRlUDnhlkN_q"
# Using the "Runs" selector on the left, notice that you have a ```<timestamp>/metrics``` run. Selecting this run displays a "learning rate" graph that allows you to verify the progression of the learning rate during this run.
#
# You can also compare this run's training and validation loss curves against your earlier runs.
# + [markdown] colab_type="text" id="l0TTI16Nl0nk"
# What does the model predict now?
# + colab={"base_uri": "https://localhost:8080/", "height": 68} colab_type="code" id="97T4vT3QkQJH" outputId="fe4614dc-f58d-4804-9e48-d62dcbb9a8ea"
print(model.predict([60, 25, 2]))
# Ideally, the output should be:
# [[32.0]
# [14.5]
# [ 3.0]]
| site/zh-cn/tensorboard/scalars_and_keras.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
# <center><h1>Parkinson's Disease Classification with KNN</h1></center>
# <subtitle><center>by Group 28 (<NAME>, <NAME>, <NAME>, <NAME>)</center></subtitle>
# <img src="praat.png" alt="Praat screenshot" width="400"/>
#
# <center><b>Figure 1</b> A screengrab from Praat, the acoustic analysis software used in collecting the data set</center>
# ## Abstract
#
# This project aims to classify whether or not a given patient has Parkinson's disease based on various variables derived from their speech sounds. We used a KNN classification algorithm with six predictors to create a model with an accuracy of 80%. We concluded its accuracy was satisfactory for the scope of DSCI 100, working with the tools we had, but not accurate enough for commercial use. We discussed how this study may be extended to other degenerative diseases. Finally, we elaborated on how one would make such a model usable by the public by providing a website frontend for collecting new data points to test.
#
#
# ## Table of Contents
#
# - Introduction
# - Background
# - Question
# - Data set
# - Methods
# - Data wrangling and pre-processing
# - Conducting the Classification Analysis
# - Predictors
# - Visualizing the Results
# - Exploration
# - Preliminary exploratory data analysis in R
# - Reading in the data
# - Wrangling
# - Creating the training and testing sets
# - Training set in detail
# - Predictors in detail
# - Hypothesis
# - Expected Outcomes
# - Results
# - Classification & Visualization
# - Discussion
# - Summary
# - Impact
# - Next Steps
# - References
# - Works Cited
# ## 1 Introduction
# ### Background
# Parkinson’s disease (PD) is a degenerative disorder of the central nervous system (CNS) which severely impacts motor control. Parkinson’s has neither a cure nor a known cause. Speech, however, is widely known to play a significant role in the detection of PD. According to Sakar<sup>1</sup>, vocal disorders exist in 90% of patients at an early stage.
#
# [1] Here's the link to the original [data set](http://archive.ics.uci.edu/ml/datasets/Parkinson%27s+Disease+Classification), and corresponding [paper](https://www.sciencedirect.com/science/article/abs/pii/S1568494618305799?via%3Dihub). Note that the paper used the data set with a similar goal but with very different predictor variables than the ones we select here.
#
# ### Question
# To what extent can we use elements of speech sounds to predict whether or not a patient has Parkinson’s disease?
#
# ### Data set
# Our data set is from [UCI’s Machine Learning repository](http://archive.ics.uci.edu/ml/datasets/Parkinson%27s+Disease+Classification). This data set contains numerous variables derived from speech sounds gathered by Sakar using the [Praat](http://www.fon.hum.uva.nl/praat/) acoustic analysis software. Each participant in the study was asked to repeat their pronunciations of the /a/ phoneme three times; each entry is thus associated with an ID parameter ranging from 0 to 251. We chose to focus on the following six predictor parameters:
#
#
# | Speech Sound Component | Variable Name in Data Set | Description |
# |------------------------------------------|---------------------------|------------------------------------------------------------------|
# | Pitch Period Entropy (PPE) | PPE | measures the impaired control of someone’s pitch |
# | Detrended Fluctuation Analysis (DFA) | DFA | a measure of the ‘noisiness’ of speech waves |
# | Recurrence Period Density Entropy (RPDE) | RPDE | repetitiveness of speech waves |
# | Jitter | locPctJitter | a measure of frequency instability |
# | Shimmer | locShimmer | a measure of amplitude instability |
# | Fundamental Frequency | meanPeriodPulses | someone’s pitch (calculated from the inverse of the mean period) |
#
# <center><b>Figure 2</b> Table describing each of the predictors used in the original analysis</center>
#
# There’s a lot of complicated mathematics associated with each of these parameters (including Fourier transforms, dense equations, etc.) so the interested reader is encouraged to reference Sakar directly.
# ## 2 Methods
# ### Predictors
# Many researchers have attempted to use speech signal processing for Parkinson’s disease classification. Based on Sakar and other widely supported literature, the predictors identified above are the most popular speech features used in PD studies and should prove effective. They are also all related to motor control, which is what the early stages of Parkinson’s directly impacts.
#
# ### Data wrangling and pre-processing
# Each participant’s data (i.e., their pronunciation of /a/) was collected three times; to analyze this data set, each of these three observations was merged by taking the mean of each of our predictor variables in question.
#
# The data was not completely tidy, but the columns that we used were, so the untidy columns were neglected. Next, to be able to conduct classification, our predictor variables were standardized and centered. Each of our variables had a unique distribution; PPE, DFA, and RPDE were on a scale of 0 to 1 and the pitch variable was on an order of magnitude to 10<sup>-3</sup>, for example, so the entire data set was scaled so that we did not have to worry about each variable’s individual distribution.
#
# ### Conducting the Classification Analysis
# Since our question is a binary classification problem, we used the K-nearest neighbours classification algorithm (KNN). We optimized our choice of k using cross-validation on our training set, and then used the optimal choice of k to create our final model. We then determined its accuracy using the test set.
#
# ### Visualizing the Results
# To visualize the effectiveness of our KNN model, we were not able to draw a scatter plot directly, as we worked with more than 2-dimensions. We had a line plot showing the trend of increasing values of k on the accuracy of our model for many folds to assist in determining which k we should use for our final model. Then, we created a bar plot to show the number of correct predictions, false positives, and false negatives.
# <img src="example-knn.png" alt="Example KNN plot" width="400"/>
#
# <center><b>Figure 3</b> An example knn visualization with only two predictors. </center>
# ## 3 Exploration
# ### Exploratory data analysis
#
# We conducted a preliminary exploration of the data, to get a better sense of what we were working with.
# +
# Reinstall packages (if needed)
# install.packages("tidyverse")
# install.packages("caret")
# install.packages("repr")
# install.packages("GGally")
# install.packages("formula.tools")
# Fix for strange error message
# install.packages("e1071")
# -
# Load in the necessary libraries
library(caret)
library(repr)
library(GGally)
library(tidyverse)
library(formula.tools)
# Make all of the following results reproducible, use this value across the analysis
set.seed(28)
# ### Reading in the data
# To begin, we read in the data from the UCI repository. We did not read it directly from UCI's URL, as it contained a zip, and trying to deal with that in R is tedious. Rather, we uploaded the file to Google Drive and we accessed it from there.
# +
# Reads the dataset from the web into R; the Drive URL is self-made
pd_data <- read_csv("https://drive.google.com/uc?export=download&id=1p9JuoTRM_-t56x7gptZU2TRNue8IHFHc", skip = 1)
#narrows down the dataset to the variables that we will use
baseline_data <- pd_data %>%
select(id:meanHarmToNoiseHarmonicity, class)
# -
head(baseline_data)
# <center><b>Figure 4</b> A slice of the baseline data loaded from the UCI repository</center>
# ### Wrangling
#
# As was mentioned in the methods section, each participant was represented three times (e.g., see three rows with `id == 0` above). We merged these by taking the mean of each of the predictor columns, after grouping by `id`.
# +
# Averages the values of each subject's three trials so that each subject is represented by one row
project_data <- baseline_data %>%
group_by(id) %>%
summarize(PPE = mean(PPE),
DFA = mean(DFA),
RPDE = mean(RPDE),
meanPeriodPulses = mean(meanPeriodPulses),
locPctJitter = mean(locPctJitter),
locShimmer = mean(locShimmer),
# meanAutoCorrHarmonicity = mean(meanAutoCorrHarmonicity),--legacy from project proposal
class = mean(class)) %>%
mutate(class = as.factor(class)) %>%
mutate(has_pd = (class == 1))
head(project_data)
# -
# <center><b>Figure 5</b> A table containing the tidied data and only relevant columns remaining</center>
# ### Creating the training and testing sets
# Below we created the training and test sets using `createDataPartition()` from the `caret` package.
# +
# Determines which percentage of rows will be used in the training set and testing set (75%/25% split)
set.seed(28)
training_rows <- project_data %>%
select(has_pd) %>%
unlist() %>%
createDataPartition(p = 0.75, list = FALSE)
# Splits the dataset into a training set and testing set
training_set <- project_data %>% slice(training_rows)
testing_set <- project_data %>% slice(-training_rows)
head(training_set)
# -
# <center><b>Figure 6</b> A slice of our training set data, after splitting our data into two separate sets </center>
# As mentioned in the data wrangling section of "Methods," we eventually scaled our data. Scaling and other pre-processing was done in the analysis section.
# ### Training set in detail
#
# Here we looked at the training set in more detail, exploring the balance and spread of our selected columns.
# Reports the number of counts per class
class_counts <- training_set %>%
group_by(has_pd) %>%
summarize(n = n())
class_counts
# <center><b>Figure 7</b> A table displaying the balance in our training set </center>
# +
options(repr.plot.width=4,repr.plot.height=4)
class_counts_plot <- ggplot(class_counts, aes(x = has_pd, y = n, fill = has_pd)) +
geom_bar(stat="identity") +
labs(x = 'Has PD?', y = 'Number', fill = "Has PD") +
ggtitle("Balance in PD Data Set") +
theme(text = element_text(size = 18), legend.position = "none")
class_counts_plot
# -
# <center><b>Figure 8</b> Visualizing the balance in our training set using a bar chart </center>
# We had many more—almost three times as many—patients with PD than without in this data set. Therefore, we could conclude our training set was somewhat imbalanced (in fact, it was the same imbalance as the original data set thanks to `createDataPartition()` handling stratification for us); however, it was not severe enough to warrant use of `upSample()`. This limitation is further discussed at the end of our analysis.
# +
# Reports the means, maxes, and mins of each predictor variable used
predictor_max <- training_set %>%
select(PPE:locShimmer) %>%
map_df(~ max(., na.rm = TRUE))
predictor_min <- training_set %>%
select(PPE:locShimmer) %>%
map_df(~ min(., na.rm = TRUE))
predictor_mean <- training_set %>%
select(PPE:locShimmer) %>%
map_df(~ mean(., na.rm = TRUE))
stats_merged <- rbind(predictor_max, predictor_min, predictor_mean)
stat <- c('max','mean','min')
stats_w_names <- data.frame(stat, stats_merged)
predictor_stats <- gather(stats_w_names,
key = variable,
value = value,
PPE:locShimmer)
predictor_stats
# -
# <center><b>Figure 9</b> A table containing the mean, max, and min of each of our predictor variables </center>
# ### Predictors in detail
# Visualizes and compares the distributions of each of the predictor variables
plot_pairs <- training_set %>%
select(PPE:locShimmer) %>%
ggpairs(title = "PD speech predictor variable correlations")
# plot_pairs
plot_pairs_by_class <- training_set %>%
ggpairs(.,
legend = 9,
columns = 2:8,
mapping = ggplot2::aes(colour=has_pd),
lower = list(continuous = wrap("smooth", alpha = 0.3, size=0.1)),
title = "PD speech predictor variable correlations by class") +
theme(legend.position = "bottom")
# plot_pairs_by_class
# The following two plots were created using the `GGally` package. The first, without color, strictly provides detail about the distribution and correlation between each pair created from our six predictor variables. Three of our predictors, DFA, RPDE, and meanPeriodPulses, take on a much wider range of values than PPE, jitter, and shimmer. Many of our variables exhibit somewhat positive correlations on the scatterplot, though some have an entirely fuzzy distribution. For example, compare the plots in the PPE column to those in the RPDE column. This likely comes as a result of the spread of the predictors.
# +
options(repr.plot.width=10,repr.plot.height=10, repr.plot.pointsize=20)
plot_pairs
# -
# <center><b>Figure 10</b> A pairwise visualization exploring the relationships between our predictor variables </center>
# With this understanding, we used a second plot, grouped and colored by `has_pd`, to assist in anticipating what the impact of these predictors would be. We noted that for every individual distribution, there is a marked difference between the red and blue groupings, which boded well for our analysis. On average, the healthy patients (i.e., `has_pd == FALSE`) fell on the lower end of the spectrum for our predictors, apart from PPE, where healthy patients exhibited higher values on average. Though we weren’t able to visualize our final model directly (as it was in six dimensions), we predicted from these plots that the new patients which fell on the "lower end" for most of these variables would be healthy. This also made intuitive sense; Parkinson’s is a degenerative disease for the muscles, so unhealthy patients would likely experience more rapid change in various speech variables due to tremors.
# +
options(repr.plot.width=10,repr.plot.height=10, repr.plot.pointsize=20)
plot_pairs_by_class
# -
# <center><b>Figure 11</b> Another visualization of the relationship between our predictors, now with their class distributions considered </center>
# ## 4 Hypothesis
#
# ### Expected Outcomes
# From our analysis, we expected to find that the six variables of speech we identified could form an effective model for determining whether or not a patient has Parkinson’s. We anticipated our findings would allow us to make reasonable predictions of whether or not a new patient has PD given their speech data.
# ## 5 Analysis
# ### Classification & Visualization
#
# #### Using all predictors from proposal
# Below is our first attempt at constructing a classification model using all predictors from the proposal.
# Scale the data set (pre-processing)
scale_transformer <- preProcess(training_set, method = c("center", "scale"))
training_set <- predict(scale_transformer, training_set)
testing_set <- predict(scale_transformer, testing_set)
# head(training_set)
# head(testing_set)
X_train <- training_set %>%
select(-class, -has_pd) %>%
data.frame()
X_test <- testing_set %>%
select(-class, -has_pd) %>%
data.frame()
Y_train <- training_set %>%
select(class) %>%
unlist()
Y_test <- testing_set %>%
select(class) %>%
unlist()
# head(X_train)
# head(Y_train)
train_control <- trainControl(method="cv", number = 10)
k <- data.frame(k = seq(from = 1, to = 51, by = 2))
knn_model_cv_10fold <- train(x = X_train, y = Y_train, method = "knn", tuneGrid = k, trControl = train_control)
# knn_model_cv_10fold
accuracies <- knn_model_cv_10fold$results
head(accuracies)
# <center><b>Figure 12 </b> A table containing the accuracy values for our first few values of k </center>
# +
options(repr.plot.height = 5, repr.plot.width = 5)
accuracy_vs_k <- ggplot(accuracies, aes(x = k, y = Accuracy)) +
geom_point() +
geom_line() +
ggtitle("Graphing Accuracy Against K") +
labs(subtitle = "Initial attempt—all predictors used")
accuracy_vs_k
# -
# <center><b>Figure 13 </b> A plot of accuracies of various k-values for a 10-fold cross-validation </center>
k <- accuracies %>%
arrange(desc(Accuracy)) %>%
head(n = 1) %>%
select(k)
k
# <center><b>Figure 14 </b> A table containing the optimal choice of k </center>
# It looks like a choice of k = 5 here yields the highest accuracy value. After approximately a k value of 40, our accuracy plateaus at just above 0.75. We will now retrain our model using this choice of k.
k = data.frame(k = 5)
model_knn <- train(x = X_train, y = Y_train, method = "knn", tuneGrid = k)
# model_knn
Y_test_predicted <- predict(object = model_knn, X_test)
# head(Y_test_predicted)
model_quality <- confusionMatrix(data = Y_test_predicted, reference = Y_test)
model_quality
# <center><b>Figure 15 </b> Our final model statistics for our data set with all predictors </center>
# Our final model had an accuracy of 79.4%, which is pretty good! Given our model is in six dimensions, there is no simple visualization for it. However, we can visualize whether our model had more false positives or false negatives, and which class it was better at predicting: sick or healthy. The confusion matrix gives us all of these values in a 2x2 grid.
# +
# Create a dataset
matrix_table <- as.data.frame(model_quality$table) %>%
mutate(isCorrect = (Prediction == Reference))
# matrix_table
matrix_plot <- ggplot(matrix_table, aes(fill = isCorrect, x = Reference, y = Freq)) +
geom_bar(position="stack", stat="identity", width = 0.7) +
labs(title = "Confusion Matrix Bar Chart", y = "Frequency", x = "Has PD?") +
    scale_fill_discrete(name="Correct Prediction?")
matrix_plot
# -
# <center><b>Figure 16 </b> Visualizing our confusion matrix table; note that on the bottom if 'Has PD?' is 1, this means the patient does have Parkinson's. </center>
# We noted from the above bar chart that our model was fairly accurate at predicting whether or not a patient *did* have Parkinson's, but was very inaccurate when working with *healthy* patients. This was likely a result of our imbalanced training set, though we did still end up with an 80% accuracy.
# ## 6 Discussion
# ### Summary
# ### Impact
#
# Being able to predict Parkinson’s accurately has significant impact alone; doctors often struggle to make accurate, timely predictions as Parkinson’s is a long-term degenerative disease. Additionally, there are no specific tests for determining whether or not someone has Parkinson’s. Currently a neurologist must use a combination of other factors like medical history, signs and symptoms, and a physical examination to make a prediction. Speech sounds could be another cheap, fast tool for a neurologist to employ to help them make a more accurate prediction.
#
# ### Next Steps
#
# From this analysis, I could see us moving to three questions for further study:
#
# - Can different variables in speech sounds be used to predict other degenerative diseases that have a similar impact on movement? (e.g., <NAME> disease)
# - Can other parts of physiology be used as an indicator of whether or not someone has Parkinson’s disease? (e.g., gross or fine motor control)
# - What other parts of speech sounds could be used to predict Parkinson’s disease? (e.g., the tunable Q-factor wavelet transform, as the study suggests)
#
# Beyond these further questions, it would also be interesting to see this model used in a practical setting. For example, doctors could use it in combination with a patient's other symptoms to make a more accurate prediction. In fact, [Google](https://www.nature.com/articles/s41586-019-1799-6) has a similar system for breast cancer screening currently under international review. Given that countless electronic devices we interact with on a day-to-day basis possess a microphone, this could even be done without a doctor's help. The only catch is that the consumer-facing product would need to give ample warnings about potential false negatives and false positives, and would additionally need to be robust enough to collect the relevant variables from a speech sound, as Praat did in collecting the data set used here.
# ## 7 References
#
# ### Works Cited
#
# <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>. (2019). A comparative analysis of speech signal processing algorithms for parkinson's disease classification and the use of the tunable Q-factor wavelet transform. Applied Soft Computing Journal, 74, 255. doi:10.1016/j.asoc.2018.10.022.
| src/pd-classification.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <small><small><i>
# All the IPython Notebooks in **Python Functions** lecture series by Dr. <NAME> are available @ **[GitHub](https://github.com/milaan9/04_Python_Functions)**
# </i></small></small>
# # Python Mathematical Functions
#
# Learn about all the mathematical functions available in Python and how you can use them in your program.
# ## What is `math` module in Python?
#
# The **`math`** **[module](https://github.com/milaan9/04_Python_Functions/blob/main/007_Python_Function_Module.ipynb)** is a standard module in Python and is always available. To use mathematical functions under this module, you have to import the module using **`import math`**.
#
# It gives access to the underlying C library functions. For example:
# +
# Square root calculation
import math
math.sqrt(4)
# -
# This module does not support complex data types. The **[cmath module](https://docs.python.org/3.0/library/cmath.html)** is its **`complex`** counterpart.
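# For instance, a minimal sketch of the difference: `math.sqrt` rejects negative inputs, while `cmath.sqrt` returns a complex result.

```python
import math
import cmath

# math.sqrt raises ValueError for negative numbers
try:
    math.sqrt(-1)
    math_handled_negative = True
except ValueError:
    math_handled_negative = False

# cmath.sqrt handles them, returning a complex number
z = cmath.sqrt(-1)
print(z)  # 1j
```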
# ## Functions in Python Math Module
#
# Here is the list of all the functions and attributes defined in **`math`** module with a brief explanation of what they do.
#
# **List of Functions in Python Math Module**
#
# | Function | Description |
# |:----| :--- |
# | **`ceil(x)`** | Returns the smallest integer greater than or equal to x. |
# | **`copysign(x, y)`** | Returns x with the sign of y |
# | **`fabs(x)`** | Returns the absolute value of x |
# | **`factorial(x)`** | Returns the factorial of x |
# | **`floor(x)`** | Returns the largest integer less than or equal to x |
# | **`fmod(x, y)`** | Returns the remainder when x is divided by y |
# | **`frexp(x)`** | Returns the mantissa and exponent of x as the pair (m, e) |
# | **`fsum(iterable)`** | Returns an accurate floating point sum of values in the iterable |
# | **`isfinite(x)`** | Returns True if x is neither an infinity nor a NaN (Not a Number) |
# | **`isinf(x)`** | Returns True if x is a positive or negative infinity |
# | **`isnan(x)`** | Returns True if x is a NaN |
# | **`ldexp(x, i)`** | Returns x * (2**i) |
# | **`modf(x)`** | Returns the fractional and integer parts of x |
# | **`trunc(x)`** | Returns the truncated integer value of x |
# | **`exp(x)`** | Returns e**x |
# | **`expm1(x)`** | Returns e**x - 1 |
# | **`log(x[, b])`** | Returns the logarithm of **`x`** to the base **`b`** (defaults to e) |
# | **`log1p(x)`** | Returns the natural logarithm of 1+x |
# | **`log2(x)`** | Returns the base-2 logarithm of x |
# | **`log10(x)`** | Returns the base-10 logarithm of x |
# | **`pow(x, y)`** | Returns x raised to the power y |
# | **`sqrt(x)`** | Returns the square root of x |
# | **`acos(x)`** | Returns the arc cosine of x |
# | **`asin(x)`** | Returns the arc sine of x |
# | **`atan(x)`** | Returns the arc tangent of x |
# | **`atan2(y, x)`** | Returns atan(y / x) |
# | **`cos(x)`** | Returns the cosine of x |
# | **`hypot(x, y)`** | Returns the Euclidean norm, sqrt(x*x + y*y) |
# | **`sin(x)`** | Returns the sine of x |
# | **`tan(x)`** | Returns the tangent of x |
# | **`degrees(x)`** | Converts angle x from radians to degrees |
# | **`radians(x)`** | Converts angle x from degrees to radians |
# | **`acosh(x)`** | Returns the inverse hyperbolic cosine of x |
# | **`asinh(x)`** | Returns the inverse hyperbolic sine of x |
# | **`atanh(x)`** | Returns the inverse hyperbolic tangent of x |
# | **`cosh(x)`** | Returns the hyperbolic cosine of x |
# | **`sinh(x)`** | Returns the hyperbolic sine of x |
# | **`tanh(x)`** | Returns the hyperbolic tangent of x |
# | **`erf(x)`** | Returns the error function at x |
# | **`erfc(x)`** | Returns the complementary error function at x |
# | **`gamma(x)`** | Returns the Gamma function at x |
# | **`lgamma(x)`** | Returns the natural logarithm of the absolute value of the Gamma function at x |
# | **`pi`** | Mathematical constant, the ratio of the circumference of a circle to its diameter (3.14159...) |
# | **`e`** | Mathematical constant e, the base of natural logarithms (2.71828...) |
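# A few of the functions above in action (a quick, illustrative sketch):

```python
import math

print(math.ceil(2.1))         # 3
print(math.floor(2.9))        # 2
print(math.fabs(-5.5))        # 5.5
print(math.factorial(5))      # 120
print(math.hypot(3, 4))       # 5.0
print(math.degrees(math.pi))  # 180.0
print(math.fsum([0.1] * 10))  # 1.0 (fsum avoids the rounding drift of sum)
```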
# Visit this page to learn about all the **[mathematical functions defined in Python 3](https://docs.python.org/3/library/math.html)**.
| 009_Python_Function_math_Module.ipynb.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: PythonData
# language: python
# name: pythondata
# ---
import json
import pandas as pd
import numpy as np
import os
import re
from sqlalchemy import create_engine
import time
os.path.abspath("Movielens_Extract.ipynb")
os.path.abspath("Movies-ETL/wikipedia-movies.json")
file_dir = '/Users/leora/Desktop/Class/Movies-ETL/'
f'{file_dir}wikipedia-movies.json'
with open(f'{file_dir}/wikipedia-movies.json', mode='r') as file:
wiki_movies_raw = json.load(file)
# + active=""
# len(wiki_movies_raw)
# -
# First 5 records
wiki_movies_raw[:5]
# Last 5 records
wiki_movies_raw[-5:]
kaggle_metadata = pd.read_csv(f'{file_dir}/archive/movies_metadata.csv', low_memory = False)
ratings = pd.read_csv(f'{file_dir}/archive/ratings.csv')
kaggle_metadata.head()
kaggle_metadata.tail()
kaggle_metadata.sample(n=5)
ratings.head()
ratings.tail()
ratings.sample(n=5)
wiki_movies_df = pd.DataFrame(wiki_movies_raw)
wiki_movies_df.head()
wiki_movies_df.columns.tolist()
wiki_movies = [movie for movie in wiki_movies_raw
if ('Director' in movie or 'Directed by' in movie)
and 'imdb_link' in movie
and 'No. of episodes' not in movie]
len(wiki_movies)
wiki_movies_df = pd.DataFrame(wiki_movies)
wiki_movies_df.head()
sorted(wiki_movies_df.columns.tolist())
# Create a function to clean the movie data
def clean_movie(movie):
movie = dict(movie) #create a non-destructive copy
return movie
wiki_movies_df[wiki_movies_df['Arabic'].notnull()]['url']
# Create a function to clean the movie data
def clean_movie(movie):
movie = dict(movie) #Create a non-destructive copy
alt_titles = {}
#loop through the columns to check for alternate titles
for key in ['Also known as', 'Arabic', 'Cantonese', 'Chinese', 'French',
'Hangul', 'Hebrew', 'Hepburn', 'Japanese', 'Literally',
'Mandarin', 'McCune-Reischauer', 'Original title', 'Polish', 'Revised Romanization', 'Romanized', 'Russian',
'Simplified', 'Traditional', 'Yiddish']:
#remove the key-value pair and add to the alternative titles dict
if key in movie:
alt_titles[key] = movie[key]
movie.pop(key)
#add the alternative titles dict to the movie object
if len(alt_titles) > 0:
movie['alt_titles'] = alt_titles
#consolidate columns with the same data into one column
def change_column_name(old_name, new_name):
if old_name in movie:
movie[new_name] = movie.pop(old_name)
return movie
#make a list of cleaned movies with a list comprehension.
clean_movies = [clean_movie(movie) for movie in wiki_movies]
#set wiki_movies_df to be the DF created from the clean_movies and
#print out the list of columns
wiki_movies_df = pd.DataFrame(clean_movies)
sorted(wiki_movies_df.columns.tolist())
def clean_movie(movie):
movie = dict(movie) #Create a non-destructive copy
alt_titles = {}
#loop through the columns to check for alternate titles
for key in ['Also known as', 'Arabic', 'Cantonese', 'Chinese', 'French',
'Hangul', 'Hebrew', 'Hepburn', 'Japanese', 'Literally',
'Mandarin', 'McCune-Reischauer', 'Original title', 'Polish', 'Revised Romanization', 'Romanized', 'Russian',
'Simplified', 'Traditional', 'Yiddish']:
#remove the key-value pair and add to the alternative titles dict
if key in movie:
alt_titles[key] = movie[key]
movie.pop(key)
#add the alternative titles dict to the movie object
if len(alt_titles) > 0:
movie['alt_titles'] = alt_titles
#consolidate columns with the same data into one column
def change_column_name(old_name, new_name):
if old_name in movie:
movie[new_name] = movie.pop(old_name)
change_column_name('Adaptation by','Writer(s)')
change_column_name('Country of origin','Country')
change_column_name('Directed by','Director')
change_column_name('Distributed by','Distributor')
change_column_name('Edited by','Editor(s)')
change_column_name('Produced by','Producer(s)')
change_column_name('Producer','Producer(s)')
change_column_name('Productioncompanies ','Production company(s)')
change_column_name('Productioncompany ','Production company(s)')
change_column_name('Release date','Released')
change_column_name('Screen story by','Writer(s)')
change_column_name('Screenplay by','Writer(s)')
change_column_name('Story by','Writer(s)')
change_column_name('Theme music composer','Composer(s)')
change_column_name('Written by','Writer(s)')
return movie
clean_movies = [clean_movie(movie) for movie in wiki_movies]
wiki_movies_df = pd.DataFrame(clean_movies)
sorted(wiki_movies_df.columns.tolist())
# Drop the duplicate rows from the wiki_movies_df
wiki_movies_df['imdb_id'] = wiki_movies_df['imdb_link'].str.extract(r'(tt\d{7})')
print(len(wiki_movies_df))
wiki_movies_df.drop_duplicates(subset='imdb_id', inplace=True)
print(len(wiki_movies_df))
wiki_movies_df.head()
# Determine the number of null values in each column
wiki_movies_df.isnull()
# Use a list comprehension to count the number of null values in each column
[[column,wiki_movies_df[column].isnull().sum()] for column in wiki_movies_df.columns]
# Identify the columns we want to keep (aka columns that have less than 90% null values)
[column for column in wiki_movies_df.columns if wiki_movies_df[column].isnull().sum() < len(wiki_movies_df) * 0.9]
wiki_columns_to_keep = [column for column in wiki_movies_df.columns if wiki_movies_df[column].isnull().sum() < len(wiki_movies_df) * 0.9]
wiki_movies_df = wiki_movies_df[wiki_columns_to_keep]
wiki_movies_df.head()
# Make a data series that drops missing values from box office data
box_office = wiki_movies_df['Box office'].dropna()
len(box_office)
def is_not_a_string(x):
return type(x) != str
box_office[box_office.map(is_not_a_string)]
# Use a lambda function so that we don't have to make a new function every time we use .map()
lambda x: type(x) != str
box_office[box_office.map(lambda x:type(x) != str)]
# Join the list items for box office from above into one string
box_office = box_office.apply(lambda x: ' '.join(x) if type(x) == list else x)
box_office
# Make the box office object a uniform format
# Use a regular expression to search for box office values in the format $1.23 million/billion
form_one = r"\$\d+\.?\d*\s*[mb]illion"
box_office.str.contains(form_one, flags=re.IGNORECASE).sum()
# Use a regular expression to search for box office values in the format $1,234,567 format
form_two = r"\$\d{1,3}(?:,\d{3})+"
box_office.str.contains(form_two, flags=re.IGNORECASE).sum()
# Find which box office values aren't either of the formats above
matches_form_one = box_office.str.contains(form_one, flags=re.IGNORECASE)
matches_form_two = box_office.str.contains(form_two, flags=re.IGNORECASE)
box_office[~matches_form_one & ~matches_form_two]
# Update the box office object forms to account for spaces between dollar sign and number
form_one = r'\$\s*\d+\.?\d*\s*[mb]illion'
form_two = r'\$\s*\d{1,3}(?:,\d{3})+'
print(box_office.str.contains(form_one, flags=re.IGNORECASE).sum())
print(box_office.str.contains(form_two, flags=re.IGNORECASE).sum())
# Update the box office object forms to account for periods used as a thousands separator (and spaces)
form_two = r'\$\s*\d{1,3}(?:[,\.]\d{3})+'
box_office.str.contains(form_two, flags=re.IGNORECASE).sum()
# Update the box office object forms to account for periods used as a thousands separator (and spaces), but only for $1,234,567 values
form_two = r'\$\s*\d{1,3}(?:[,\.]\d{3})+(?!\s[mb]illion)'
box_office.str.contains(form_two, flags=re.IGNORECASE).sum()
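# A quick check of the two pattern styles on made-up strings (not values from the dataset):

```python
import re

# the two box office patterns, including the whitespace lookahead
form_one = r'\$\s*\d+\.?\d*\s*[mb]illion'
form_two = r'\$\s*\d{1,3}(?:[,\.]\d{3})+(?!\s[mb]illion)'

assert re.search(form_one, '$ 1.2 million', flags=re.IGNORECASE)
assert re.search(form_two, '$1,234,567')
# the negative lookahead keeps form_two from grabbing "$1.234 million"
assert not re.search(form_two, '$1.234 million')
```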
# Replace box office values that are given as a range
box_office = box_office.str.replace(r'\$.*[-—–](?![a-z])', '$', regex=True)
# Account for the misspelling "millon" by making the second "i" optional
form_one = r'\$\s*\d+\.?\d*\s*[mb]illi?on'
# Extract the values that match the formats specified (form_one or form_two)
box_office.str.extract(f'({form_one}|{form_two})')
# Create a function that extracts box office values into numeric values
def parse_dollars(s):
# if s is not a string, return NaN
if type(s) != str:
return np.nan
# if input is of the form $###.# million
if re.match(r'\$\s*\d+\.?\d*\s*milli?on', s, flags=re.IGNORECASE):
# remove dollar sign and " million"
s = re.sub(r'\$|\s|[a-zA-Z]','', s)
# convert to float and multiply by a million
value = float(s) * 10**6
# return value
return value
# if input is of the form $###.# billion
elif re.match(r'\$\s*\d+\.?\d*\s*billi?on', s, flags=re.IGNORECASE):
# remove dollar sign and " billion"
s = re.sub(r'\$|\s|[a-zA-Z]','', s)
# convert to float and multiply by a billion
value = float(s) * 10**9
# return value
return value
# if input is of the form $###,###,###
elif re.match(r'\$\s*\d{1,3}(?:[,\.]\d{3})+(?!\s[mb]illion)', s, flags=re.IGNORECASE):
# remove dollar sign and commas
s = re.sub(r'\$|,','', s)
# convert to float
value = float(s)
# return value
return value
# otherwise, return NaN
else:
return np.nan
# Parse box office values to numeric values using the parse_dollars function
wiki_movies_df['box_office'] = box_office.str.extract(f'({form_one}|{form_two})', flags=re.IGNORECASE)[0].apply(parse_dollars)
wiki_movies_df['box_office']
# Drop the box office column
wiki_movies_df.drop('Box office', axis=1, inplace=True)
wiki_movies_df.head()
budget = wiki_movies_df['Budget'].dropna()
budget = budget.map(lambda x: ' '.join(x) if type(x) == list else x)
budget = budget.str.replace(r'\$.*[-—–](?![a-z])', '$', regex=True)
matches_form_one = budget.str.contains(form_one, flags=re.IGNORECASE)
matches_form_two = budget.str.contains(form_two, flags=re.IGNORECASE)
budget[~matches_form_one & ~matches_form_two]
# Remove citation references from the budget column
budget = budget.str.replace(r'\[\d+\]\s*', '', regex=True)
budget[~matches_form_one & ~matches_form_two]
# Parse the budget values and drop the original Budget column
wiki_movies_df['budget'] = budget.str.extract(f'({form_one}|{form_two})', flags=re.IGNORECASE)[0].apply(parse_dollars)
wiki_movies_df.drop('Budget', axis=1, inplace=True)
sorted(wiki_movies_df.columns.tolist())
# +
# Parse the release date data
# Create a variable to hold the non-null values of release dates
release_date = wiki_movies_df['Released'].dropna().apply(lambda x: ' '.join(x) if type(x) == list else x)
# Specify the different date forms
date_form_one = r'(?:January|February|March|April|May|June|July|August|September|October|November|December)\s[123]\d,\s\d{4}'
date_form_two = r'\d{4}.[01]\d.[123]\d'
date_form_three = r'(?:January|February|March|April|May|June|July|August|September|October|November|December)\s\d{4}'
date_form_four = r'\d{4}'
# Extract the dates
release_date.str.extract(f'({date_form_one}|{date_form_two}|{date_form_three}|{date_form_four})', flags=re.IGNORECASE)
# Use Pandas to_datetime() function to parse the dates
wiki_movies_df['release_date'] = pd.to_datetime(release_date.str.extract(f'({date_form_one}|{date_form_two}|{date_form_three}|{date_form_four})')[0], infer_datetime_format=True)
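# A quick check of two of the date patterns on made-up strings:

```python
import re

# full-date form, e.g. "January 15, 1999"
date_form_one = r'(?:January|February|March|April|May|June|July|August|September|October|November|December)\s[123]\d,\s\d{4}'
# numeric form, e.g. "1999-01-15"
date_form_two = r'\d{4}.[01]\d.[123]\d'

assert re.search(date_form_one, 'Released January 15, 1999 in the US')
assert re.search(date_form_two, '1999-01-15')
```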
# +
# Parse the running time
# Create a variable to hold the non-null values of running time
running_time = wiki_movies_df['Running time'].dropna().apply(lambda x: ' '.join(x) if type(x) == list else x)
# check the number of running times that match 100 minutes
running_time.str.contains(r'^\d*\s*minutes$', flags=re.IGNORECASE).sum()
# -
# Check to see what the other running time entries look like
running_time[running_time.str.contains(r'^\d*\s*minutes$', flags=re.IGNORECASE) != True]
# make the search more general by matching on the beginning of "minutes"
running_time.str.contains(r'^\d*\s*m', flags=re.IGNORECASE).sum()
# Check to see what the entries from above look like
running_time[running_time.str.contains(r'^\d*\s*m', flags=re.IGNORECASE) != True]
# Relax the beginning of the string criteria (remove the caret)
running_time[running_time.str.contains(r'\d*\s*m', flags=re.IGNORECASE) != True]
running_time_extract = running_time.str.extract(r'(\d+)\s*ho?u?r?s?\s*(\d*)|(\d+)\s*m')
running_time_extract
# +
# Convert values in running time df to numeric values
running_time_extract = running_time_extract.apply(lambda col: pd.to_numeric(col, errors='coerce')).fillna(0)
# Apply a function to convert the hour and minute capture groups to minutes
wiki_movies_df['running_time'] = running_time_extract.apply(lambda row: row[0]*60 + row[1] if row[2] == 0 else row[2], axis = 1)
# Drop the Running time column from the dataset
wiki_movies_df.drop('Running time', axis=1, inplace=True)
sorted(wiki_movies_df.columns.tolist())
# -
# Clean the Kaggle movie data
kaggle_metadata.dtypes
# Check whether the adult column contains only boolean values
kaggle_metadata['adult'].value_counts()
# Remove the bad data from the adult column
kaggle_metadata[~kaggle_metadata['adult'].isin(['True', 'False'])]
# Drop the adult movies from the dataset and keep non-adult movies
kaggle_metadata = kaggle_metadata[kaggle_metadata['adult'] == 'False'].drop('adult',axis='columns')
kaggle_metadata['video'].value_counts()
# Convert the video data
kaggle_metadata['video'] == 'True'
kaggle_metadata['video'] = kaggle_metadata['video'] == 'True'
# Convert the numeric columns
kaggle_metadata['budget'] = kaggle_metadata['budget'].astype(int)
kaggle_metadata['id'] = pd.to_numeric(kaggle_metadata['id'], errors='raise')
kaggle_metadata['popularity'] = pd.to_numeric(kaggle_metadata['popularity'], errors='raise')
# Convert release date to datetime
kaggle_metadata['release_date'] = pd.to_datetime(kaggle_metadata['release_date'])
wiki_movies_df.dtypes
# +
# Reasonability Checks on Ratings Data
# -
| Movielens Extract.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
try:
import google.colab # noqa: F401
except ImportError:
import ufl
import dolfin
else:
try:
import ufl
import dolfin
except ImportError:
# !wget "https://fem-on-colab.github.io/releases/fenics-install.sh" -O "/tmp/fenics-install.sh" && bash "/tmp/fenics-install.sh"
import ufl
import dolfin
assert dolfin.__version__ == "2019.2.0.dev0"
import numpy as np
from petsc4py import PETSc
assert not np.issubdtype(PETSc.ScalarType, np.complexfloating)
mesh = dolfin.UnitIntervalMesh(3)
V = dolfin.FunctionSpace(mesh, "CG", 1)
assert V.dim() == 4
u = dolfin.TrialFunction(V)
v = dolfin.TestFunction(V)
dx = ufl.dx
f = dolfin.Function(V)
f.vector()[:] = np.arange(1, V.dim() + 1)
a = u * v * dx
F = f * v * dx
A = dolfin.assemble(a)
b = dolfin.assemble(F)
A = dolfin.as_backend_type(A)
b = dolfin.as_backend_type(b)
solution = dolfin.Function(V)
dolfin.solve(A, solution.vector(), b)
assert np.allclose(solution.vector().vec().getArray(), np.arange(1, V.dim() + 1))
ksp = PETSc.KSP().create()
ksp.setOperators(A.mat())
ksp.solve(b.vec(), solution.vector().vec())
assert np.allclose(solution.vector().vec().getArray(), np.arange(1, V.dim() + 1))
for package in ("mumps", "superlu", "superlu_dist"):
ksp = PETSc.KSP().create()
ksp.setOperators(A.mat())
ksp.setType("preonly")
pc = ksp.getPC()
pc.setType("lu")
pc.setFactorSolverType(package)
ksp.solve(b.vec(), solution.vector().vec())
assert np.allclose(solution.vector().vec().getArray(), np.arange(1, V.dim() + 1))
grad = ufl.grad
inner = ufl.inner
k = inner(grad(u), grad(v)) * dx
K = dolfin.assemble(k)
K = dolfin.as_backend_type(K)
expected = (0, 10.8, 54, 108)
eigensolver = dolfin.SLEPcEigenSolver(K, A)
eigensolver.parameters["problem_type"] = "gen_non_hermitian"
eigensolver.parameters["spectrum"] = "smallest real"
eigensolver.solve(V.dim())
assert eigensolver.get_number_converged() == len(expected)
for (i, eig_i_ex) in enumerate(expected):
eig_i_r, eig_i_i = eigensolver.get_eigenvalue(i)
assert np.isclose(eig_i_r, eig_i_ex)
assert np.isclose(eig_i_i, 0)
from slepc4py import SLEPc
eps = SLEPc.EPS().create()
eps.setOperators(K.mat(), A.mat())
eps.setProblemType(SLEPc.EPS.ProblemType.GHEP)
eps.setWhichEigenpairs(SLEPc.EPS.Which.SMALLEST_REAL)
eps.solve()
assert eps.getConverged() == len(expected)
for (i, eig_i_ex) in enumerate(expected):
eig_i = eps.getEigenvalue(i)
assert np.isclose(eig_i.real, eig_i_ex)
assert np.isclose(eig_i.imag, 0)
for package in ("mumps", "superlu", "superlu_dist"):
eps = SLEPc.EPS().create()
eps.setOperators(K.mat(), A.mat())
eps.setProblemType(SLEPc.EPS.ProblemType.GHEP)
eps.setWhichEigenpairs(SLEPc.EPS.Which.TARGET_REAL)
eps.setTarget(1)
st = eps.getST()
st.setType(SLEPc.ST.Type.SINVERT)
st.setShift(1)
ksp = st.getKSP()
ksp.setType("preonly")
pc = ksp.getPC()
pc.setType("lu")
pc.setFactorSolverType(package)
eps.solve()
assert eps.getConverged() == len(expected)
for (i, eig_i_ex) in enumerate(expected):
eig_i = eps.getEigenvalue(i)
assert np.isclose(eig_i.real, eig_i_ex)
assert np.isclose(eig_i.imag, 0)
# + language="bash"
#
# export LD_PRELOAD=""
# ERROR_LIBRARIES=($(find /root/.cache/dijitso -name '*\.so' -exec \
# bash -c 'ldd $0 | grep libstdc++.so.6 1>/dev/null 2>/dev/null && echo $0' {} \;))
# if [ ${#ERROR_LIBRARIES[@]} -eq 0 ]; then
# echo "No reference to libstdc++.so was found"
# else
# for ERROR_LIBRARY in "${ERROR_LIBRARIES[@]}"; do
# echo "Error: library $ERROR_LIBRARY depends on libstdc++.so"
# ldd -v $ERROR_LIBRARY
# done
# false
# fi
| fenics/test-dolfin.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from Acquire.Client import User, Drive, StorageCreds
# +
root_url = "https://fn.acquire-aaai.com/t"
user = User("chryswoods", identity_url="%s/identity" % root_url)
result = user.request_login()
print(result["login_url"])
from Acquire.Client import create_qrcode
create_qrcode(result["login_url"])
user.wait_for_login()
creds = StorageCreds(user=user, service_url="%s/storage" % root_url)
drive = Drive(name="test_chunking", creds=creds, autocreate=True)
# -
uploader = drive.chunk_upload(filename="test_chunk_upload.txt")
uploader.upload("Here is some text\n")
uploader.upload("Some more text is com")
uploader.upload("ing\nWhat a lot of bits!\n")
uploader.close()
downloaded_name = drive.download(filename="test_chunk_upload.txt")
downloaded_name
for line in open(downloaded_name).readlines():
print(line.rstrip())
downloader = drive.chunk_download(filename="test_chunk_upload.txt")
downloaded_name = downloader.local_filename()
for line in open(downloaded_name).readlines():
print(line.rstrip())
downloader.download_next_chunk()
for line in open(downloaded_name).readlines():
print(line.rstrip())
downloader.download_next_chunk()
for line in open(downloaded_name).readlines():
print(line.rstrip())
downloader.download_next_chunk()
for line in open(downloaded_name).readlines():
print(line.rstrip())
downloader.download_next_chunk()
for line in open(downloaded_name).readlines():
print(line.rstrip())
downloader.download_next_chunk()
downloader.is_open()
user.logout()
| user/notebooks/test_chunking.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] tags=["header"]
# <table width="100%">
# <tr style="border-bottom:solid 2pt #009EE3">
# <td style="text-align:left" width="10%">
# <a href="contacts.dwipynb" download><img src="../../images/icons/download.png"></a>
# </td>
# <td style="text-align:left" width="10%">
# <a><img class="not_active_img" src="../../images/icons/program.png" title="Will be released soon !"></a>
# </td>
# <td></td>
# <td style="text-align:left" width="5%">
# <a href="../MainFiles/opensignalsfactory.ipynb"><img src="../../images/icons/home.png"></a>
# </td>
# <td style="text-align:left" width="5%">
# <a href="../MainFiles/contacts.ipynb"><img src="../../images/icons/contacts.png"></a>
# </td>
# <td style="text-align:left" width="5%">
# <a href="https://github.com/opensignalsfactory/opensignalsfactory"><img src="../../images/icons/github.png"></a>
# </td>
# <td style="border-left:solid 2pt #009EE3" width="20%">
# <img src="../../images/ost_logo.png">
# </td>
# </tr>
# </table>
# -
# <link rel="stylesheet" href="../../styles/theme_style.css">
# <!--link rel="stylesheet" href="../../styles/header_style.css"-->
# <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/4.7.0/css/font-awesome.min.css">
#
# <table width="100%">
# <tr>
# <td id="image_td" width="50%" class="header_image_color_2">
# <img id="image_img" src="../../images/ost_logo.png"></td>
# <td class="header_text header_image_color_ext_2">Contacts</td>
# </tr>
# </table>
# + [markdown] tags=["test"]
# <div style="background-color:black">
# <table width="100%">
# <tr>
# <td style="border-right:solid 3px #009EE3" width="50%">
# <img src="../../images/plux_logo.png" width="50%">
# </td>
# <td style="text-align:left">
# <strong>Lisbon Office</strong>
# <br>
# Phone <i>(+351) 211 956 542</i>
# <br>
# Fax <i>(+351) 211 956 546</i>
# <br>
# Av. 5 de Outubro, 70 - 8º
# <br>
# 1050-059 Lisboa
# <br><br>
# <strong>Support or Suggestions</strong>
# <br>
# E-mail <i><EMAIL></i>
# </td>
# </tr>
# </table>
# </div>
# + [markdown] tags=["footer"]
# <hr>
# <table width="100%">
# <tr>
# <td style="border-right:solid 3px #009EE3" width="30%">
# <img src="../../images/ost_logo.png">
# </td>
# <td width="35%" style="text-align:left">
# <a href="https://github.com/opensignalsfactory/opensignalsfactory">☌ Project Presentation</a>
# <br>
# <a href="https://github.com/opensignalsfactory/opensignalsfactory">☌ GitHub Repository</a>
# <br>
# <a href="https://pypi.org/project/opensignalsfactory/">☌ How to install opensignalsfactory Python package ?</a>
# <br>
# <a href="../MainFiles/signal_samples.ipynb">☌ Signal Library</a>
# </td>
# <td width="35%" style="text-align:left">
# <a href="../MainFiles/opensignalsfactory.ipynb">☌ Notebook Categories</a>
# <br>
# <a href="../MainFiles/by_diff.ipynb">☌ Notebooks by Difficulty</a>
# <br>
# <a href="../MainFiles/by_signal_type.ipynb">☌ Notebooks by Signal Type</a>
# <br>
# <a href="../MainFiles/by_tag.ipynb">☌ Notebooks by Tag</a>
# </td>
# </tr>
# </table>
# + tags=["hide_both"]
from opensignalstools.__notebook_support__ import css_style_apply
css_style_apply()
| notebookToHtml/oldFiles/opensignalsfactory_html/Categories/MainFiles/contacts.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] deletable=false editable=false nbgrader={"checksum": "e28cd26c5675869689c3a72dbe67ebd5", "grade": false, "grade_id": "cellc-a00", "locked": true, "schema_version": 1, "solution": false}
# # Lesson 05 - Gradient Descent
# + deletable=false editable=false nbgrader={"checksum": "76346e439ba4c0ba62f041db212c2c93", "grade": false, "grade_id": "cell-f1ae70a19f49ab50", "locked": true, "schema_version": 1, "solution": false}
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
plt.ion()
# + [markdown] deletable=false editable=false nbgrader={"checksum": "69d6fcdb56fd24f144f66bc5ee2e8a68", "grade": false, "grade_id": "cell1c1-a00", "locked": true, "schema_version": 1, "solution": false}
# # Exercise 01:
#
# Currently, one of the most widely accepted theories about the formation of the universe says that the universe is in constant expansion.
#
# Supernovas are stars that exploded and died recently. The dataset included in this assignment's folder contains records of these supernovas. Each row in the table corresponds to a supernova near Earth observed by astronomers, indicating how far from Earth the supernova was and how fast it was moving away.
#
# In this exercise, we will find values for the parameters of a line that approximates the speed from the distance, using the ```close_novas.csv``` dataset, by means of gradient descent. In other words, a linear regression.
#
# The figure below illustrates, in a very simple way, the idea of the big bang.
#
# 
# + deletable=false editable=false nbgrader={"checksum": "da1a8859bb0e627d58ecd7dc7f186a5a", "grade": false, "grade_id": "cell-13f014c4c37dfcda", "locked": true, "schema_version": 1, "solution": false}
df = pd.read_csv('./close_novas.csv')
plt.scatter(df.values[:, 0], df.values[:, 1], alpha=0.3) # This alpha sets the transparency of the points
plt.xlabel('Distance (million parsecs)')
plt.ylabel('Speed (parsecs/year)')
# + [markdown] deletable=false editable=false nbgrader={"checksum": "e8408ec72903a74f3938e2955eafe449", "grade": false, "grade_id": "cell-3599a439494a96b2", "locked": true, "schema_version": 1, "solution": false}
# The result of a regression on the data above can be used to estimate the age of the universe. Imagine a car carrying some colleagues of yours left ICEx, traveling at 80 km/h. After a while, a friend inside the car calls you, saying the passengers have already covered 160 km. From this, you can estimate that your colleagues left ICEx 2 hours ago.
#
# The same idea is used to estimate the age of the universe. Each supernova is traveling at a roughly constant speed. We can assume that all the stars departed from the same place, since the trajectory vector is also roughly constant. Now, obviously we do not observe the stars from the location of the big bang; we measure their speed and their distance relative to planet Earth.
#
# An interesting fact is that the correlation does not change when we add a constant to the axes. Remember z-normalization. The scatter plot below captures the same trend as the one above.
# + deletable=false editable=false nbgrader={"checksum": "0ed1ffe2441d2053229cc59ae5fdd057", "grade": false, "grade_id": "cell-bcccbfd366be032b", "locked": true, "schema_version": 1, "solution": false}
C = 5000
plt.scatter(df.values[:, 0] + C, df.values[:, 1], alpha=0.3) # alpha sets the transparency of the points
plt.xlabel('Distance (million parsecs) + C')
plt.ylabel('Speed (parsecs/year)')
# + [markdown] deletable=false editable=false nbgrader={"checksum": "d8f4f19420124e9b1045b4846a0e6d80", "grade": false, "grade_id": "cell-a7a2753fdb0ef119", "locked": true, "schema_version": 1, "solution": false}
# Now, think of a linear regression as an average of lines. Each line has the formula:
#
# $$y_i = \beta x_i + \alpha$$
#
# Starting from the origin (0, 0), each line is defined by $\Delta_y/\Delta_x$. In this case, $\alpha=0$ and $\beta=(y-0)/(x-0)$, or $y/x$.
#
# In your supernova data, y is the speed and x is the distance. Since the correlation does not change when we add a constant to the data, we can estimate the age of the universe from observations made on planet Earth, assuming that all supernovae departed from the same origin. $\Delta_y/\Delta_x$ (the unit of our $\beta$) has units of ${parsec \over time} * {1.0\over 1M*parsec}$ = ${1 \over 1M \, time}$. Hence, 1.0/$\beta$ = 1M time units.
#
#
# With that in mind, let's play a bit with linear regression. First, we will estimate the line:
#
# $$y_i = \beta x_i + \alpha$$
#
# using gradient descent.
# -
# A) To help the algorithm converge, z-normalize your data.
# + deletable=false nbgrader={"checksum": "5b0b5ccead2c2a885330aab5a6089e53", "grade": false, "grade_id": "cell-379358cbec2de487", "locked": false, "schema_version": 1, "solution": true}
df = (df-df.mean())/df.std(ddof=1)
plt.scatter(df.values[:, 0], df.values[:, 1], alpha=0.3) # alpha sets the transparency of the points
plt.xlabel('Distance (million parsecs)')
plt.ylabel('Speed (parsecs/year)')
# + deletable=false editable=false nbgrader={"checksum": "b22b438271ca858ffae0fabfae3c543e", "grade": true, "grade_id": "cell-f74994307fe31c19", "locked": true, "points": 1, "schema_version": 1, "solution": false}
# + [markdown] deletable=false editable=false nbgrader={"checksum": "a3bace4f9e96170e6623f2f3008d021a", "grade": false, "grade_id": "cell-4ca5a76f5abbd718", "locked": true, "schema_version": 1, "solution": false}
# B) Implement the loss function for a single point, returning the squared error.
# + deletable=false nbgrader={"checksum": "e443c63c1596fd204e083fd6d9367b78", "grade": false, "grade_id": "cell-33778481c726c682", "locked": false, "schema_version": 1, "solution": true}
def loss_um_ponto(x_i, y_i, alpha, beta):
    # squared error of the prediction for a single point
    error = y_i - (beta * x_i + alpha)
    return error ** 2
# + deletable=false editable=false nbgrader={"checksum": "7ae7019ff92146a1478d526b6270745a", "grade": true, "grade_id": "cell-ee43c4728326e576", "locked": true, "points": 1, "schema_version": 1, "solution": false}
# + [markdown] deletable=false editable=false nbgrader={"checksum": "c8258f838cddc71b3bf4099f926c96d2", "grade": false, "grade_id": "cell-ee64bde24cdaa651", "locked": true, "schema_version": 1, "solution": false}
# C) Implement the gradient function for a single point, returning a list with the gradient values for alpha and beta, in that order.
# + deletable=false nbgrader={"checksum": "26b10dee1df21db07f6dc441cae896e0", "grade": false, "grade_id": "cell-c68f94cfa8aa7a20", "locked": false, "schema_version": 1, "solution": true}
def gradient(x_i, y_i, alpha, beta):
    grad_a = -2*(y_i - beta*x_i - alpha)  # inner derivative w.r.t. alpha is -1
    grad_b = -2*x_i*(y_i - beta*x_i - alpha)
    return np.array([grad_a, grad_b])
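# As a quick sanity check (a sketch outside the graded cells), the analytic gradient can be compared against central finite differences of the loss; `analytic_grad` below mirrors the `gradient` function above:

```python
import numpy as np

def analytic_grad(x_i, y_i, alpha, beta):
    # mirrors the analytic gradient of the squared error for one point
    residual = y_i - (beta * x_i + alpha)
    return np.array([-2 * residual, -2 * x_i * residual])

def loss(x_i, y_i, alpha, beta):
    return (y_i - (beta * x_i + alpha)) ** 2

# central finite differences at an arbitrary point
x_i, y_i, alpha, beta, h = 2.0, 5.0, 0.5, 1.0, 1e-6
fd_alpha = (loss(x_i, y_i, alpha + h, beta) - loss(x_i, y_i, alpha - h, beta)) / (2 * h)
fd_beta = (loss(x_i, y_i, alpha, beta + h) - loss(x_i, y_i, alpha, beta - h)) / (2 * h)
print(np.allclose(analytic_grad(x_i, y_i, alpha, beta), [fd_alpha, fd_beta], atol=1e-4))  # True
```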
# + deletable=false editable=false nbgrader={"checksum": "2b6ae731517ebb472257bdab22cac1f1", "grade": true, "grade_id": "cell-97b6f238de14b40d", "locked": true, "points": 1, "schema_version": 1, "solution": false}
# + [markdown] deletable=false editable=false nbgrader={"checksum": "6acbdef927a36943112a7152805c62df", "grade": false, "grade_id": "cell-e4b5321938764920", "locked": true, "schema_version": 1, "solution": false}
# D) Implement the gradient descent function for the linear regression parameters alpha and beta, using the two functions created above.
#
# Return a list with the values of alpha (intercept) and beta (slope), in that order.
#
# __Hint:__ compute the gradients for each point, sum the gradients for each parameter (alpha and beta), and only then update the values according to the learning rate.
# + deletable=false nbgrader={"checksum": "66a8e8af74ff3ffa424c9ebc735bc4d7", "grade": false, "grade_id": "cell-d225c1c500f29ff7", "locked": false, "schema_version": 1, "solution": true}
def descent(x, y, lambda_, niter, param0):
    # x, y    : data
    # lambda_ : learning rate
    # niter   : number of gradient descent iterations
    # param0  : list with initial values for alpha and beta
    alpha, beta = param0
    for i in range(niter):
        gA, gB = 0, 0
        for j in range(x.shape[0]):
            gradA, gradB = gradient(x[j], y[j], alpha, beta)
            gA += gradA
            gB += gradB
        alpha_i = alpha - lambda_*gA
        beta_i = beta - lambda_*gB
        # stop early once both parameter updates become negligible
        if np.abs(alpha_i - alpha) <= 0.00001 and np.abs(beta_i - beta) <= 0.00001:
            break
        alpha = alpha_i
        beta = beta_i
    return np.array([alpha, beta])
# + deletable=false editable=false nbgrader={"checksum": "ceeacc5a58cdab12df8b57a3f0199dbb", "grade": true, "grade_id": "cell1c2-a00", "locked": true, "points": 1, "schema_version": 1, "solution": false}
# + deletable=false editable=false nbgrader={"checksum": "99099aee1a6ef9341fbc977ccedba85a", "grade": true, "grade_id": "cell-052f66dac2d15aec", "locked": true, "points": 0, "schema_version": 1, "solution": false}
# + [markdown] deletable=false editable=false nbgrader={"checksum": "f5c6af348d742d641e3d95ffda767276", "grade": false, "grade_id": "cell-33cbf13797c14b9b", "locked": true, "schema_version": 1, "solution": false}
# E) So far, you have implemented gradient descent computing the gradient one point at a time.
# In Python, that is not the most efficient way to do it.
#
# Implement a new version of the gradient descent function, this time performing the operations in vectorized form instead of one point at a time. Try to check whether there is a difference in execution time between the two versions.
# + deletable=false nbgrader={"checksum": "3c7562a673fe725e667771c11927dc61", "grade": false, "grade_id": "cell-90e0e5bba70c1d91", "locked": false, "schema_version": 1, "solution": true}
def descent_vec(x, y, lambda_, niter, param0):
    # x, y    : data
    # lambda_ : learning rate
    # niter   : number of gradient descent iterations
    # param0  : list with initial values for alpha and beta
    alpha, beta = param0
    for i in range(niter):
        gradA, gradB = gradient(x, y, alpha, beta)
        alpha_i = alpha - lambda_*np.sum(gradA)
        beta_i = beta - lambda_*np.sum(gradB)
        # stop early once both parameter updates become negligible
        if np.abs(alpha_i - alpha) <= 0.00001 and np.abs(beta_i - beta) <= 0.00001:
            break
        alpha = alpha_i
        beta = beta_i
    return np.array([alpha, beta])
import time
x = df['Distance (million parsecs)'].values
y = df['Speed (parsecs/year)'].values
t1 = time.time()
descent(x, y, 0.001, 1000, [0,1])
t2 = time.time()
time_descent = t2-t1
t3 = time.time()
descent_vec(x, y, 0.001, 1000, [0,1])
t4 = time.time()
time_descent_vec = t4-t3
print("Points: ", time_descent, "seconds")
print("Vector: ", time_descent_vec, "seconds")
print("Which shows that the vectorized operations are much faster")
# + deletable=false editable=false nbgrader={"checksum": "af71a981e28d6a1ebe17c5b768b2a0a7", "grade": true, "grade_id": "cell-fdd8a8a61cfc8af2", "locked": true, "points": 1, "schema_version": 1, "solution": false}
# + [markdown] deletable=false editable=false nbgrader={"checksum": "b105e0f876f456d1a428c2921ce2c35f", "grade": false, "grade_id": "cell-b7a466b23808a53f", "locked": true, "schema_version": 1, "solution": false}
# F) Let's visualize the regression model we obtained. Using the abline function below, generate a plot with the linear regression line over the data points (as done in the introduction of this assignment).
# + deletable=false editable=false nbgrader={"checksum": "18e414b893640c62e74f2aaddf5530d3", "grade": false, "grade_id": "cell-8fb62f738b5dcef8", "locked": true, "schema_version": 1, "solution": false}
def abline(slope, intercept):
    """Plot a line from slope and intercept"""
    axes = plt.gca()
    x_vals = np.array(axes.get_xlim())
    y_vals = intercept + slope * x_vals
    plt.plot(x_vals, y_vals, '--')
# + deletable=false nbgrader={"checksum": "b1d8c8ea2e22c6c7fe6e9dca91827056", "grade": true, "grade_id": "cell-5b7e5076fc34598f", "locked": false, "points": 1, "schema_version": 1, "solution": true}
def plot_regression(x, y, lambda_, niter, param0):
    # x, y    : data
    # lambda_ : learning rate
    # niter   : number of gradient descent iterations
    # param0  : list with initial values for alpha and beta
    plt.scatter(x, y, alpha=0.3) # alpha sets the transparency of the points
    plt.xlabel('Distance (million parsecs)')
    plt.ylabel('Speed (parsecs/year)')
    alpha, beta = descent_vec(x, y, lambda_, niter, param0)
    abline(beta, alpha)
plot_regression(df.values[:, 0], df.values[:, 1], 0.0001, 1000, [1, 1])
plot_regression(df.values[:, 0], df.values[:, 1], 0.0001, 1000, [1, 1])
# + [markdown] deletable=false editable=false nbgrader={"checksum": "ef3191d89c984fe469ea338c54df3c5b", "grade": false, "grade_id": "cell-256a04a859341d06", "locked": true, "schema_version": 1, "solution": false}
# G) Now let's evaluate the linear regression model obtained with gradient descent.
#
# First, implement a function that computes the total sum of squares (SST) from the data.
# + deletable=false nbgrader={"checksum": "f69c3a5e81b3c53efa648b85ea3f95bc", "grade": false, "grade_id": "cell-1c3361ca672946f8", "locked": false, "schema_version": 1, "solution": true}
def sst(y):
    mean = y.mean()
    return ((y - mean) ** 2).sum()
# + deletable=false editable=false nbgrader={"checksum": "f6129e4fd723d8aa508cc29a730bf327", "grade": true, "grade_id": "cell-5d46d019ed49db63", "locked": true, "points": 1, "schema_version": 1, "solution": false}
# + [markdown] deletable=false editable=false nbgrader={"checksum": "6fc7d37af01ec556fb2e72c01a3c6c65", "grade": false, "grade_id": "cell-0002d0cc9d29ee72", "locked": true, "schema_version": 1, "solution": false}
# H) To compute the sum of squared errors (SSE), we first need predictions for the supernova speeds.
# Implement a function that produces the predicted speed values from the distance, according to the linear regression model (alpha and beta).
#
# The function should return a list with the predicted values.
# + deletable=false nbgrader={"checksum": "fd081b4c164b4f2419f76ce63450bd95", "grade": false, "grade_id": "cell-756ac7f3c7e9789a", "locked": false, "schema_version": 1, "solution": true}
def predict(x, param):
    # x     : array of supernova distances
    # param : list with the parameter values alpha and beta
    alpha, beta = param
    return x * beta + alpha
# + deletable=false editable=false nbgrader={"checksum": "30d337d7ee72bb8e6d2e7f9ee784b6e3", "grade": true, "grade_id": "cell-cab73d8b163b0755", "locked": true, "points": 1, "schema_version": 1, "solution": false}
# + [markdown] deletable=false editable=false nbgrader={"checksum": "fe6673b24c5ae5ac8702a8602c95eb50", "grade": false, "grade_id": "cell-27efa98540f81e4f", "locked": true, "schema_version": 1, "solution": false}
# I) Now implement the function that computes the sum of squared errors (SSE).
# + deletable=false nbgrader={"checksum": "5c45128f35b22e0a5a6ce13c5167f7d1", "grade": false, "grade_id": "cell-69e5e511f97630aa", "locked": false, "schema_version": 1, "solution": true}
def sse(x, y, param):
    # x     : array of supernova distances
    # y     : array of supernova speeds
    # param : list with the parameter values alpha and beta
    return ((y - predict(x, param)) ** 2).sum()
# + deletable=false editable=false nbgrader={"checksum": "53896570c65ad0d943e45fb62cf67b2f", "grade": true, "grade_id": "cell-5e22ca26b2c5b2a5", "locked": true, "points": 1, "schema_version": 1, "solution": false}
# + [markdown] deletable=false editable=false nbgrader={"checksum": "b573f8689fef5f33a77619af4c8a739b", "grade": false, "grade_id": "cell-ad6c2e3e99821a16", "locked": true, "schema_version": 1, "solution": false}
# J) Finally, implement the function that computes the coefficient of determination (R2).
#
# Consider whether the R2 value you obtain is consistent with the quality of the model observed in the regression plot generated in question (F).
# + deletable=false nbgrader={"checksum": "7e2fde49808672c91051e44959ece3bd", "grade": false, "grade_id": "cell-b6b9b5d310f35ea4", "locked": false, "schema_version": 1, "solution": true}
def r2(x, y, param):
    # x     : array of supernova distances
    # y     : array of supernova speeds
    # param : list with the parameter values alpha and beta
    return 1 - sse(x, y, param) / sst(y)
# + deletable=false editable=false nbgrader={"checksum": "3aeda3e4c3556c6b0c2ac082b9eefc3e", "grade": true, "grade_id": "cell-a1ebec15e79e1533", "locked": true, "points": 1, "schema_version": 1, "solution": false}
# + [markdown] deletable=false editable=false nbgrader={"checksum": "535791b7228075030f11b333c9b6ae9c", "grade": false, "grade_id": "cell-c69552db6aaf0b62", "locked": true, "schema_version": 1, "solution": false}
# K) With the results above, compute the age of the universe. Estimate it in billions of years; the answer should be somewhere around 13 or 14. You have to use the non-normalized data. Use the $\beta$ value from the regression. Remember that:
#
# $\beta = {r_{xy} s_y \over s_x}$
#
# The standard deviations here are the non-normalized ones. The r can be the same one you found before, since correlation is invariant under translation and scaling (normalization). However, the unit of beta in the normalized data is not the same as in the original data. The beta in the original units, estimated with the equation above, corrects for this.
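# A small synthetic check of this rescaling (a sketch; the data below are made up, not the supernova data): for a simple linear regression the slope in original units is $r_{xy} s_y / s_x$, which recovers the true slope:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 100, 500)
y = 3.0 * x + rng.normal(0, 5, 500)  # true slope 3.0 (hypothetical data)

sx, sy = x.std(ddof=1), y.std(ddof=1)
r = np.corrcoef(x, y)[0, 1]
beta_orig = r * sy / sx  # slope in the original units
print(beta_orig)  # close to 3.0
```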
# + deletable=false editable=false nbgrader={"checksum": "631cd0ff8fba27cdeea746fcb84f6f29", "grade": false, "grade_id": "cell-923e8d660554e3a9", "locked": true, "schema_version": 1, "solution": false}
df = pd.read_csv('./close_novas.csv')
x = df.values[:, 0]
y = df.values[:, 1]
# + deletable=false nbgrader={"checksum": "f9217594cc01252c9093c22511e97ae6", "grade": false, "grade_id": "cell-48de5c3ffcd6aeaa", "locked": false, "schema_version": 1, "solution": true}
def idade_universo(sd_x, sd_y):
    # sd_x, sd_y : non-normalized distances and speeds
    data = pd.DataFrame()
    data[0], data[1] = sd_x, sd_y
    x_std = sd_x.std(ddof=1)
    y_std = sd_y.std(ddof=1)
    correlation = data.corr().iloc[0][1]
    beta = correlation * y_std / x_std
    # 1/beta is in millions of years; divide by 1000 to get billions of years
    idade = 1 / (beta * 1000)
    return idade
idade_universo(x, y)
# + deletable=false editable=false nbgrader={"checksum": "1d12c4010780eed5545d261a95dd7ea9", "grade": true, "grade_id": "cell-896fd4a349fe7e81", "locked": true, "points": 1, "schema_version": 1, "solution": false}
| icd/ICD20191-Lista05/Lista05.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.6 64-bit
# language: python
# name: python3
# ---
# +
# #!/usr/bin/env python3
import sys
sys.path.insert(0, '/Users/aymericvie/Documents/GitHub/evology/evology/code/')
from main import *
from parameters import *
import matplotlib
import matplotlib.pyplot as plt
increment = 0.075
time = 50000
agents = 3
wealth_coordinates = [0.01, 0.98, 0.01]
exrNT, exrVI, exrTF, exrAVG = [], [], [], []
stratf, stratv, stratn = [], [], []
while wealth_coordinates[2] <= (1 - wealth_coordinates[0] - 0.01):
    print(wealth_coordinates)
    df = main("static", time, 0, agents, 0, wealth_coordinates, tqdm_display=False, reset_wealth=True)
    # print(df['TF_DayReturns'])
    exrNT.append(np.nanmean(df['NT_DayReturns'] / np.nanstd(df['NT_DayReturns'])))
    exrVI.append(np.nanmean(df['VI_DayReturns'] / np.nanstd(df['VI_DayReturns'])))
    exrTF.append(np.nanmean(df['TF_DayReturns'] / np.nanstd(df['TF_DayReturns'])))
    exrAVG.append(np.nanmean(df['AvgDayReturn'] / np.nanstd(df['AvgDayReturn'])))
    stratn.append(wealth_coordinates[0])
    stratv.append(wealth_coordinates[1])
    stratf.append(wealth_coordinates[2])
    wealth_coordinates[1] -= increment
    wealth_coordinates[2] += increment
# +
# print(stratf)
# print(exrTF)
print('TF')
plt.plot(stratf, exrTF)
plt.show()
exrTF_net = np.subtract(exrTF, exrAVG)
plt.plot(stratf, exrTF_net)
plt.show()
print('VI')
plt.plot(stratv, exrVI)
plt.show()
exrVI_net = np.subtract(exrVI, exrAVG)
plt.plot(stratv, exrVI_net)
plt.show()
print('NT')
plt.plot(stratn, exrNT)
plt.show()
exrNT_net = np.subtract(exrNT, exrAVG)
plt.plot(stratn, exrNT_net)
plt.show()
| evology/bin/modelling/excessReturn.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
df = pd.DataFrame({'names': ['John', 'Amy', 'Zhuang', 'Hao', 'Gal'],
                   'nationality': ['US', 'US', 'China', 'China', 'Italy']})
df
df.groupby(by='nationality')['names'].apply(lambda x:x.tolist()).to_dict()
# ## Let's break it down
type(df.groupby(by='nationality'))
# A DataFrameGroupBy object can be sliced just like a normal DataFrame, so we slice the "names" column.
type(df.groupby(by='nationality')['names'])
# Let's take a look at the SeriesGroupBy object
list(df.groupby(by='nationality')['names'])
# The SeriesGroupBy object has an "apply" method that passes each group's Series (for instance, the first input would be the Series ['Zhuang', 'Hao']) through a user-defined function/callable. Here we want to convert each Series to a list.
df.groupby(by='nationality')['names'].apply(lambda x:x.tolist())
# It becomes a "meta-Series"; then we use the Series built-in method "to_dict()" to convert it to a dictionary.
df.groupby(by='nationality')['names'].apply(lambda x:x.tolist()).to_dict()
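# The same dictionary can also be built with `agg(list)` instead of `apply` — an equivalent one-liner worth knowing:

```python
import pandas as pd

df = pd.DataFrame({'names': ['John', 'Amy', 'Zhuang', 'Hao', 'Gal'],
                   'nationality': ['US', 'US', 'China', 'China', 'Italy']})

# agg(list) collects each group's values into a list, same as apply(lambda x: x.tolist())
d = df.groupby('nationality')['names'].agg(list).to_dict()
print(d)  # {'China': ['Zhuang', 'Hao'], 'Italy': ['Gal'], 'US': ['John', 'Amy']}
```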
| pandas/examples/5_columns2dict.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Calculate the policy of the agent
# * State variable: x = [w, n, s, A]; action variable: a = [c, b, k, i]; both are numpy arrays
# %pylab inline
import numpy as np
import pandas as pd
from scipy.interpolate import interp1d,interp2d
from multiprocessing import Pool
from functools import partial
from pyswarm import pso
import warnings
warnings.filterwarnings("ignore")
np.set_printoptions(precision=2)
# time line
T_min = 0
T_max = 70
T_R = 45
beta = 1/(1+0.02)
# States of the economy, GOOD or BAD, {1 : GOOD}, {0 : BAD}
S = [0,1]
# All the money amount are denoted in thousand dollars
earningShock = [0.8,1.2]
# Define transition matrix of economical states
# GOOD -> GOOD 0.8, BAD -> BAD 0.6
Ps = np.array([[0.6, 0.4],[0.2, 0.8]])
# current risk free interest rate
r_f = np.array([0.01 ,0.03])
# stock return depends on current and future econ states
r_m = np.array([[-0.2, 0.15],[-0.15, 0.2]])
# probability of survival
Pa = np.load("prob.npy")
# deterministic income
detEarning = np.load("detEarning.npy")
# tax rate
tau_L = 0.2
tau_R = 0.1
# minimum consumption
c_bar = 3
# +
# Define the utility function
def u(c):
    gamma = 2
    if c <= 1:
        return 0
    return (np.float_power(c, 1-gamma) - 1)/(1 - gamma)
# Define the bequest function, which is a function of wealth
def uB(w):
    B = 2
    return B*u(w)
# Define the earning function
def income(age, s):
    if age <= T_R:
        return detEarning[age] * earningShock[s]
    else:
        return detEarning[age]
# Define the reward function
def R(x, a):
    c, b, k, i = a
    w, n, s, A = x
    if A == 0:
        if w + n > 0:
            return uB(w+n)
        else:
            return 0
    else:
        return u(c)
# Define the transition of state
def transition(x, a, t):
    '''
    Input:  x current state: (w, n, s, A)
            a action taken:  (c, b, k, i)
    Output: the next possible states with corresponding probabilities
    '''
    c, b, k, i = a
    w, n, s, A = x
    s = int(s)
    x_next = []
    prob_next = []
    if A == 0:
        for s_next in S:
            x_next.append([0, 0, s_next, 0])
        return np.array(x_next), Ps[s]
    else:
        # A = 1: the agent is still alive for the next period
        for s_next in S:
            r_bond = r_f[s]
            r_stock = r_m[s, s_next]
            w_next = b*(1+r_bond) + k*(1+r_stock)
            n_next = (n+i)*(1+r_stock)
            x_next.append([w_next, n_next, s_next, 1])
            prob_next.append(Ps[s][s_next] * Pa[t])
            x_next.append([w_next, n_next, s_next, 0])
            prob_next.append(Ps[s][s_next] * (1-Pa[t]))
        return np.array(x_next), np.array(prob_next)
# -
# Value function is a function of state and time t
def V(x, t, Vmodel):
    # Define the objective function over actions and maximize it with pso
    w, n, s, A = x
    s = int(s)
    A = int(A)
    if A == 0:
        return np.array([R(x, [0,0,0,0]), [0,0,0,0]], dtype=object)
    else:
        if t < T_R:
            # working age: labor income is taxed at tau_L, and i is contributed to the 401k
            def obj(bkThetaI_theta):
                # pso minimizes, so return the negative of the value
                bk, theta, i_theta = bkThetaI_theta
                b = bk * theta
                k = bk * (1-theta)
                i = income(t, s) * i_theta
                c = (1-tau_L)*(income(t, s) - i) + w - bk
                if c <= c_bar:
                    return 9999999999
                a = (c, b, k, i)
                x_next, prob_next = transition(x, a, t)
                return -(R(x, a) + beta * np.dot(Vmodel(s, A, x_next[:,0], x_next[:,1]), prob_next))
            lb = [0, 0, 0]
            ub = [w, 1, 1]
            xopt, fopt = pso(obj, lb, ub)
            max_val = -fopt
            bk_m, theta_m, i_theta_m = xopt
            b_m = bk_m * theta_m
            k_m = bk_m * (1-theta_m)
            i_m = income(t, s) * i_theta_m
            c_m = (1-tau_L)*(income(t, s) - i_m) + w - bk_m
        else:
            # retirement: income is taxed at tau_R, and i_theta allows withdrawals from n
            def obj(bkThetaI_theta):
                bk, theta, i_theta = bkThetaI_theta
                b = bk * theta
                k = bk * (1-theta)
                i = n * i_theta
                c = (1-tau_R)*income(t, s) + w - i - bk
                if c <= c_bar:
                    return 9999999999
                a = (c, b, k, i)
                x_next, prob_next = transition(x, a, t)
                return -(R(x, a) + beta * np.dot(Vmodel(s, A, x_next[:,0], x_next[:,1]), prob_next))
            lb = [0, 0, -1]
            ub = [w, 1, 0]
            xopt, fopt = pso(obj, lb, ub)
            max_val = -fopt
            bk_m, theta_m, i_theta_m = xopt
            b_m = bk_m * theta_m
            k_m = bk_m * (1-theta_m)
            i_m = n * i_theta_m
            c_m = (1-tau_R)*income(t, s) + w - i_m - bk_m
        return np.array([max_val, [c_m, b_m, k_m, i_m]], dtype=object)
# +
w_grid_size = 100
w_lower = 5
w_upper = 20000
n_grid_size = 50
n_lower = 5
n_upper = 10000
def powspace(start, stop, power, num):
    # grid points spaced as a power law: denser near the lower end
    start = np.power(start, 1/float(power))
    stop = np.power(stop, 1/float(power))
    return np.power(np.linspace(start, stop, num=num), power)
xgrid = np.array([[w, n, s, A] for w in powspace(w_lower, w_upper, 3, w_grid_size)
                               for n in powspace(n_lower, n_upper, 3, n_grid_size)
                               for s in [0, 1]
                               for A in [0, 1]]).reshape((w_grid_size, n_grid_size, 2, 2, 4))
Vgrid = np.zeros((w_grid_size, n_grid_size, 2, 2, T_max+1))
Cgrid = np.zeros((w_grid_size, n_grid_size, 2, 2, T_max+1))
bgrid = np.zeros((w_grid_size, n_grid_size, 2, 2, T_max+1))
kgrid = np.zeros((w_grid_size, n_grid_size, 2, 2, T_max+1))
igrid = np.zeros((w_grid_size, n_grid_size, 2, 2, T_max+1))
def V_T(x):
    # terminal value: treat the agent as dead and collect the bequest utility
    w, n, s, A = x
    x = [w, n, s, 0]
    return R(x, [0,0,0,0])
# apply a function to the state space: reshape the grid to a flat list of states and back
def applyFunToCalculateValue(fun):
    return np.array(list(map(fun, xgrid.reshape((w_grid_size * n_grid_size * 2 * 2, 4))))).reshape((w_grid_size, n_grid_size, 2, 2))
Vgrid[:,:,:,:, T_max] = applyFunToCalculateValue(V_T)
# -
print(Vgrid[:,:,:,:, T_max])
# ### Backward Induction Part
# +
# %%time
pool = Pool()
w = xgrid[:,:,0,0,:].reshape((w_grid_size * n_grid_size, 4))[:,0]
n = xgrid[:,:,0,0,:].reshape((w_grid_size * n_grid_size, 4))[:,1]
def model(s, A, X, Y, interpolation):
    # evaluate the interpolated value function at each (x, y) pair
    values = []
    for x, y in zip(X, Y):
        values.append(interpolation[s][A](x, y))
    return np.array(values).flatten()
for t in range(T_max-1, T_max-2, -1):
    print(t)
    cs = [[interp2d(w, n, Vgrid[:,:,s,A,t+1].flatten(), kind='cubic') for A in [0,1]] for s in [0,1]]
    f = partial(V, t=t, Vmodel=partial(model, interpolation=cs))
    np.array(pool.map(f, xgrid.reshape((w_grid_size * n_grid_size * 2 * 2, 4)))).reshape((w_grid_size, n_grid_size, 2, 2))
# -
plt.plot(B_t[10,1,:])
plt.plot(B_t[20,1,:])
plt.plot(B_t[30,1,:])
plt.plot(B_t[40,1,:])
plt.plot(B_t[50,1,:])
plt.plot(B_t[60,1,:])
plt.plot(K_t[10,1,:])
plt.plot(K_t[20,1,:])
plt.plot(K_t[30,1,:])
plt.plot(K_t[40,1,:])
plt.plot(K_t[50,1,:])
plt.plot(K_t[60,1,:])
# ### Simulation Part
# +
import quantecon as qe
mc = qe.MarkovChain(Ps)  # transition matrix of economic states
def action(t, w, s, alive):
    c = interp1d(wgrid, C_t[:,s,t], kind="linear", fill_value="extrapolate")(w)
    b = interp1d(wgrid, B_t[:,s,t], kind="linear", fill_value="extrapolate")(w)
    k = interp1d(wgrid, K_t[:,s,t], kind="linear", fill_value="extrapolate")(w)
    if not alive:
        c, b, k = 0, 0, 0
    return (c, b, k)
# Define the transition of state
def fixTransition(w, s, s_next, a, alive):
    if not alive:
        return 0
    c, b, k = a
    # returns on bond and stock depend on the current econ state s and the future state s_next
    r_bond = r_f[int(s)]
    r_stock = r_m[s, s_next]
    return b*(1+r_bond) + k*(1+r_stock)
# -
import random as rd
def simulation(num):
    for sim in range(num):
        if sim % 100 == 0:
            print(sim)
        # simulate an agent starting with wealth of 20
        w = 20
        wealth = []
        Consumption = []
        Bond = []
        Stock = []
        Salary = []
        econState = mc.simulate(ts_length=T_max - T_min)
        alive = True
        for t in range(len(econState)-1):
            if rd.random() > Pa[t]:
                alive = False
            wealth.append(w)
            s = econState[t]
            s_next = econState[t+1]
            a = action(t, w, s, alive)
            if alive:
                Salary.append(income(t + T_min, s))
            else:
                Salary.append(0)
            Consumption.append(a[0])
            Bond.append(a[1])
            Stock.append(a[2])
            w = fixTransition(w, s, s_next, a, alive)
        # dictionary of lists
        dictionary = {'wealth': wealth,
                      'Consumption': Consumption,
                      'Bond': Bond,
                      'Stock': Stock,
                      'Salary': Salary}
        if sim == 0:
            df = pd.DataFrame(dictionary)
        else:
            df = df + pd.DataFrame(dictionary)
    return df / num
df = simulation(10000)
df.plot()
df.Consumption.plot()
df.wealth.plot()
| 20200601/.ipynb_checkpoints/lifeCycleModel-speadUp-bequeathWealth-401k-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Watch Me Code 3: List of Dictionaries
#
students = [
    {'Name': 'bob', 'GPA': 3.4, 'Ischool': True},
    {'Name': 'sue', 'GPA': 2.8, 'Ischool': True},
    {'Name': 'kent', 'GPA': 4.0, 'Ischool': False}
]
print(students)
type(students)
students[-1]
type(students[-1])
print(students[-1])
# print names and GPAs of just the iSchool students:
for student in students:  # list
    if student['Ischool']:  # == True is not necessary
        print(student['Name'], student['GPA'])
# +
students = [
    {"Name": "bob", "age": 18, "grades": [70, 80, 30]},
    {"Name": "Tom", "age": 20, "grades": [70, 80, 30]},
    {"Name": "Jerry", "age": 19, "grades": [70, 80, 30]}
]
for student in students:
    print("Grades for: " + student["Name"])
    for grade in student["grades"]:
        print(grade)
# -
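# Building on the loop above, a common next step is to aggregate the nested list — for example, computing each student's average grade (a small extension, not part of the original demo):

```python
students = [
    {"Name": "bob", "age": 18, "grades": [70, 80, 30]},
    {"Name": "Tom", "age": 20, "grades": [70, 80, 30]},
    {"Name": "Jerry", "age": 19, "grades": [70, 80, 30]}
]
averages = {}
for student in students:
    # average = total of the grades divided by how many grades there are
    averages[student["Name"]] = sum(student["grades"]) / len(student["grades"])
print(averages)  # {'bob': 60.0, 'Tom': 60.0, 'Jerry': 60.0}
```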
| content/lessons/10/Watch-Me-Code/WMC3-List-Of-Dict.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Background
#
# For this mini-project on data cleaning/analysis/visualization, the 911 call data (from Montgomery County, PA) was downloaded from Kaggle [Link](https://www.kaggle.com/mchirico/montcoalert). This is a great dataset for exploratory data analysis (EDA) purposes, as it contains some unorganized data from which useful new features can be extracted.
#
# While not explored here, this dataset could be used to train a model that predicts the number of 911 calls (or even the types of calls) at a specific time of the year. Such predictions could improve the efficiency of emergency response by adjusting the number of medical staff and operating ambulances.
#
# The data contains the following fields:
#
# * lat : String variable, Latitude
# * lng: String variable, Longitude
# * desc: String variable, Description of the Emergency Call
# * zip: String variable, Zipcode
# * title: String variable, Title
# * timeStamp: String variable, YYYY-MM-DD HH:MM:SS
# * twp: String variable, Township
# * addr: String variable, Address
# * e: String variable, Dummy variable (always 1)
#
# ## Data overlook
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# %matplotlib inline
sns.set_style('darkgrid')
sns.set_context('notebook',font_scale=1.5)
df_911 = pd.read_csv('911.csv')
print(df_911.shape)
df_911.head(2)
# The dataset contains about 424,000 rows with 9 features.
# ## check for missing values
df_911.isnull().sum()[df_911.isnull().any()==True]
# The `zip` and `twp` columns contain 52,129 and 159 missing values, respectively. For EDA, the missing values may be left as they are.
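# If the missing values ever get in the way (e.g. when grouping), one option — sketched here on a toy frame, not on `df_911` itself — is to fill them with a sentinel instead of dropping rows:

```python
import numpy as np
import pandas as pd

# Toy frame standing in for df_911 (hypothetical values)
tmp = pd.DataFrame({'zip': [19401.0, np.nan, 19464.0],
                    'twp': ['NORRISTOWN', 'LOWER POTTSGROVE', None]})
tmp['zip'] = tmp['zip'].fillna(0).astype(int)   # sentinel 0 marks "unknown"
tmp['twp'] = tmp['twp'].fillna('UNKNOWN')
print(tmp['twp'].tolist())  # ['NORRISTOWN', 'LOWER POTTSGROVE', 'UNKNOWN']
```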
# ## Top 5 *zipcodes* and *townships* for 911 calls
plt.figure(figsize=(10,5))
plt.subplot(121)
df_911.zip.value_counts().head().plot(kind='bar')
plt.tick_params(axis='both',labelsize =12)
plt.title('Top 5 zipcodes for 911 calls',fontsize=15)
plt.subplot(122)
df_911.twp.value_counts().head().plot(kind='bar')
plt.tick_params(axis='both',labelsize =12)
plt.title('Top 5 townships for 911 calls',fontsize=15)
plt.tight_layout()
plt.show()
# ## Extracting new features: *`title`* --> *`Call_Reason`*
df_911['Call_Reason'] = df_911.title.apply(lambda title: title.split(':')[0])
fig, ax = plt.subplots()
sns.countplot('Call_Reason',data=df_911,ax=ax)
plt.show()
# ## Extracting new features: *`timeStamp`* --> *`Hour`*, *`Month`*, *`DayofWeek`*, *`Date`*
df_911.timeStamp = pd.to_datetime(df_911.timeStamp)
df_911['Hour'] = df_911.timeStamp.apply(lambda x: x.hour)
df_911['Month'] = df_911.timeStamp.apply(lambda x: x.month)
df_911['DayOfWeek'] = df_911.timeStamp.apply(lambda x: x.dayofweek)
df_911['Date'] = df_911.timeStamp.apply(lambda x: x.date())
dmap = {0:'Mon',1:'Tue',2:'Wed',3:'Thu',4:'Fri',5:'Sat',6:'Sun'}
df_911['DayOfWeek'] = df_911['DayOfWeek'].map(dmap).astype('category').cat.set_categories(['Mon','Tue','Wed','Thu','Fri','Sat','Sun'], ordered=True)
df_byDOW = df_911.groupby(['DayOfWeek','Call_Reason']).e.count().reset_index().rename(columns={'e':'number of calls'})
df_byMon = df_911.groupby(['Month','Call_Reason']).e.count().reset_index().rename(columns={'e':'number of calls'})
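# `Series.apply` with a lambda works, but pandas' vectorized `.dt` accessor performs the same extraction much faster. A sketch on toy timestamps (not the 911 data):

```python
import pandas as pd

ts = pd.Series(pd.to_datetime(['2016-01-04 08:30:00', '2016-07-15 17:45:00']))

# Vectorized equivalents of timeStamp.apply(lambda x: x.hour) and friends
hours = ts.dt.hour
months = ts.dt.month
days = ts.dt.dayofweek  # Monday == 0
print(hours.tolist(), months.tolist(), days.tolist())  # [8, 17] [1, 7] [0, 4]
```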
# +
# fig, ax = plt.subplots()
# sns.lineplot('DayOfWeek','number of calls',hue='Call_Reason',data=df_byDOW,ax=ax)
# ax.legend(bbox_to_anchor=(1,1),frameon=False).set_title('aa')
fig, ax = plt.subplots(1,2,figsize=(10,5))
sns.lineplot('DayOfWeek','number of calls',hue='Call_Reason',data=df_byDOW,ax=ax[0])
ax[0].set_title('Calls by day of week',fontsize=15)
ax[0].set_xlabel('')
ax[0].set_ylabel('')
ax[0].legend(loc='best',frameon=False,fontsize=15)
sns.lineplot('Month','number of calls',hue='Call_Reason',data=df_byMon,ax=ax[1])
ax[1].set_title('Calls by month',fontsize=15)
ax[1].set_xlabel('')
ax[1].set_xticks(range(1,13))
ax[1].set_ylabel('')
ax[1].legend().set_visible(False)
plt.tight_layout()
# -
plt.figure(figsize=(15,5))
df_911[df_911.Call_Reason == 'Traffic'].groupby('Date').size().plot(lw = 2)
df_911[df_911.Call_Reason == 'EMS'].groupby('Date').size().plot(lw = 2)
df_911[df_911.Call_Reason == 'Fire'].groupby('Date').size().plot(lw = 2)
plt.xticks(fontsize=18)
plt.yticks(fontsize=18)
plt.xlabel('')
plt.title('Call counts by reasons',fontsize=20)
plt.legend(['Traffic','EMS','Fire'],fontsize=18)
plt.show()
sns.heatmap(df_911.groupby(['DayOfWeek','Hour']).size().unstack(),cmap='coolwarm')
sns.heatmap(df_911.groupby(['DayOfWeek','Month']).size().unstack(),cmap='coolwarm')
# # Conclusion
#
| Mini capstone projects/EDA-911call_Montgomery.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
import seaborn as sns
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
from pyspark.sql import SQLContext
ssc = SQLContext(sc)
tweets = ssc.read.parquet("/tmp/tweet-corpus")
tweets.cache()
tweets.count()
df = tweets.toPandas()
df.columns = ["tokens", "label"]
df.head()
# ## Preprocessing
# ### Using `HashingTF`, a simple bag of words model
from pyspark.mllib.feature import IDF
from pyspark.mllib.feature import HashingTF
from pyspark.mllib.linalg import Vectors
from pyspark.mllib.classification import LabeledPoint
coeff = 3000
hashingTf = HashingTF(coeff)
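# `HashingTF` maps each token into one of `coeff` buckets by hashing and counts occurrences, so no vocabulary has to be built first; a pure-Python sketch of the idea (the bucket count here is illustrative, not Spark's default):

```python
def hashing_tf(tokens, n_buckets=16):
    # Each token increments the bucket selected by its hash value,
    # giving a fixed-length term-frequency vector.
    vec = [0] * n_buckets
    for t in tokens:
        vec[hash(t) % n_buckets] += 1
    return vec

v = hashing_tf(["spark", "is", "fast", "spark"])
print(len(v), sum(v))  # 16 4
```

Note that distinct tokens can collide in the same bucket; with a large `coeff` (3000 above) collisions are rare enough for a bag-of-words model.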
# +
#vectors = sc.parallelize([hashingTf.transform(tokens) for tokens in df._1])
#idf = IDF().fit(vectors)
# -
def featurize(tokens): return hashingTf.transform(tokens)
#def tfidf(tokens): return idf.transform(tf(tokens))
df['lpoint'] = df.apply(lambda row: LabeledPoint(row['label'], featurize(row['tokens'])), axis=1)
df
# Create train/test split
# create boolean mask
msk = np.random.rand(len(df)) < 0.80
train = df[msk]
test = df[~msk]
# ### Distribution of labels of training set
_ = sns.countplot(x="label", data=train)
# ### Distribution of labels of test set
_ = sns.countplot(x="label", data=test)
# ### Run PCA
from pyspark.mllib.feature import PCA
df
# +
#lpoints = df['lpoint']
#rdd = sc.parallelize(lpoints.map(lambda point: point.features).tolist())
#pca = PCA(3).fit(rdd)
#df['pca'] = df.apply(lambda row: pca.transform(row['lpoint'].features), axis=1)
#df['pca_0'] = df.apply(lambda row: row['pca'][0], axis=1)
#df['pca_1'] = df.apply(lambda row: row['pca'][1], axis=1)
#viz = df[['label', 'pca_0', 'pca_1']]
# +
#_ = sns.pairplot(viz, vars=['pca_0', 'pca_1'], hue="label", size=6.0)
# -
# # Train a Logistic Regression classifier
from pyspark.mllib.classification import LogisticRegressionWithSGD
from pyspark.mllib.classification import LabeledPoint
# Let's add a new column with `LabeledPoint`s consisting of TF-IDF vectors.
train_rdd = sc.parallelize(train.lpoint)
# Now train the logistic regression estimator.
lr = LogisticRegressionWithSGD.train(train_rdd, initialWeights=Vectors.zeros(coeff), iterations=200)
# # Test
test['pred'] = test.apply(lambda row: lr.predict(row['lpoint'].features), axis=1)
test
# # Metrics
from pyspark.mllib.evaluation import BinaryClassificationMetrics
from pyspark.mllib.evaluation import MulticlassMetrics
scoreAndLabels = test.apply(lambda row: (float(row['pred']), row['lpoint'].label), axis=1)
scoreAndLabels = sc.parallelize(scoreAndLabels)
binary_metrics = BinaryClassificationMetrics(scoreAndLabels)
binary_metrics.areaUnderPR
binary_metrics.areaUnderROC
mult_metrics = MulticlassMetrics(scoreAndLabels)
mult_metrics.precision()
mult_metrics.recall()
max(test.label.mean(), 1 - test.label.mean())
# # Cross validation
# +
from pyspark.ml import Pipeline
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.feature import HashingTF
lr = LogisticRegression()
tf = HashingTF(inputCol="tokens", outputCol="features")
pipeline = Pipeline(stages=[tf, lr])
# -
pdf = ssc.createDataFrame(df)
dataset = sqlContext.createDataFrame(
[(point.features, point.label) for point in df['lpoint']],
["features", "label"])
ptrain = ssc.createDataFrame(train)
ptest = ssc.createDataFrame(test[['tokens','label','lpoint']])
model = pipeline.fit(ptrain)
prediction = model.transform(ptest)
result = prediction.select("tokens", "label", "prediction").toPandas()
from pyspark.ml.tuning import ParamGridBuilder, CrossValidator
from pyspark.ml.evaluation import BinaryClassificationEvaluator
# +
grid = ParamGridBuilder().addGrid(lr.maxIter, [0, 1]).build()
evaluator = BinaryClassificationEvaluator()
cv = CrossValidator(estimator=lr, estimatorParamMaps=grid, evaluator=evaluator)
cvModel = cv.fit(dataset)
evaluator.evaluate(cvModel.transform(dataset))
type(cvModel)
# -
weights = cvModel.bestModel.weights
# ### Use initial weights of best model
lr_new = LogisticRegressionWithSGD.train(train_rdd, initialWeights=weights, iterations=200)
test['pred_new'] = test.apply(lambda row: lr_new.predict(row['lpoint'].features), axis=1)
scoreAndLabels = test.apply(lambda row: (float(row['pred_new']), row['lpoint'].label), axis=1)
scoreAndLabels = sc.parallelize(scoreAndLabels)
binary_metrics = BinaryClassificationMetrics(scoreAndLabels)
binary_metrics.areaUnderROC
binary_metrics.areaUnderPR
mult_metrics = MulticlassMetrics(scoreAndLabels)
mult_metrics.precision()
# Source notebook: data/evaluation.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Data Retrieval for Machine Learning
# In this notebook, we retrieve MLB data from 1996-2017 that we will use to fit a machine learning model. We wanted to see if we could find a model that predicts team wins even better than the Pythagorean Expectation does. After researching and examining baseball statistics, we decided to focus not only on team statistics as a whole, but on pitcher statistics as well, since the pitcher's performance has a huge impact on game results.
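# The Pythagorean Expectation baseline that we compare against estimates wins from runs scored and runs allowed; a quick sketch with made-up season totals for two hypothetical teams:

```python
import pandas as pd

# Over a 162-game season, expected wins follow W = 162 * R^2 / (R^2 + RA^2).
teams = pd.DataFrame({"R": [800, 700], "RA": [700, 800]}, index=["A", "B"])
teams["pyth_W"] = 162 * teams["R"]**2 / (teams["R"]**2 + teams["RA"]**2)
print(teams.round(1))
```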
import numpy as np
import pandas as pd
import requests
import plotly.offline as py
import matplotlib.pyplot as plt
from plotly.graph_objs import *
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_val_score
from sklearn.cluster import KMeans
from bs4 import BeautifulSoup
# %matplotlib inline
pd.set_option("display.max_rows", 15)
py.init_notebook_mode(connected=True)
# For unknown reasons, the seventh and eighth tables that we need from the above URLs do not appear in the HTML returned by BeautifulSoup (it skips over them). Thus, instead of scraping them as we did for the other datasets in the previous notebook, we downloaded CSVs. It is also worth noting that this is the only website with the advanced metrics we need for our calculations, which makes the situation even more unfortunate.
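# A common reason later tables "disappear" is that some sites wrap them in HTML comments, which parsers treat as opaque text; a stdlib sketch of how one could detect such hidden tables (the markup below is synthetic):

```python
from html.parser import HTMLParser

class HiddenTableFinder(HTMLParser):
    # Collect the bodies of HTML comments that contain a <table> tag.
    def __init__(self):
        HTMLParser.__init__(self)
        self.hidden = []

    def handle_comment(self, data):
        if "<table" in data:
            self.hidden.append(data)

page = "<body><table></table><!-- <table><tr><td>7</td></tr></table> --></body>"
finder = HiddenTableFinder()
finder.feed(page)
print(len(finder.hidden))  # 1
```

If a hidden table is found this way, its comment text can be re-parsed as ordinary HTML.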
# read in all the csvs
dfs = []
for i in range(1996, 2017):
fname = "mlb%d.csv" % i
df = pd.read_csv(fname)
df["year"] = i
dfs.append(df)
# concat all the team data into one
mlb = pd.concat(dfs).dropna()
mlb = mlb.sort_values(by=["Tm","year"])
mlb.head()
# read in pitching data
dfs = []
for i in range(1996, 2017):
fname = "pitch%d.csv" % i
df = pd.read_csv(fname)
df["year"] = i
dfs.append(df)
# combine pitching data
pitch = pd.concat(dfs).dropna()
pitch = pitch.sort_values(by=["Tm","year"])
pitch.head()
mlb = mlb.reset_index().drop("index",axis=1)
pitch = pitch.reset_index().drop("index",axis=1)
# Ensure all overlapping columns have the same values.
overlapping_cols = ['Tm', 'W', 'L', 'G', 'year']
df = (mlb[overlapping_cols] == pitch[overlapping_cols])
df.all()
# Convert runs and runs allowed to yearly numbers from averages per game.
mlb['R'] = mlb['R'] * 162
mlb['RA'] = mlb['RA'] * 162
# Add all of the columns we need into one dataframe.
mlb.head()
# The variables that we felt would be most useful for predicting wins with a machine learning model were:
# + Runs: the total number of runs the team scored during the season
# + Runs Allowed: the total number of runs that were scored against the team during the season
# + Strength of Schedule: the difficulty or ease of a team's opponents as compared to other teams
# + ERA: the mean number of earned runs given up by a pitcher per nine innings pitched
# + WHIP: walks plus hits per inning pitched
# + FIP: fielding independent pitching
# + SO: strikeouts
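# For reference, two of the pitching metrics above can be computed from raw totals; the season totals and the league FIP constant (~3.10) below are assumptions for illustration:

```python
# Made-up season totals for one pitching staff.
H, BB, HBP, HR, SO, IP = 1400.0, 450.0, 50.0, 150.0, 1200.0, 1450.0

whip = (BB + H) / IP                             # walks plus hits per inning pitched
fip = (13*HR + 3*(BB + HBP) - 2*SO) / IP + 3.10  # fielding independent pitching
print(round(whip, 2), round(fip, 2))  # 1.28 3.82
```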
# Create a dataframe for our training data.
machine_data = pd.DataFrame(mlb[['R',"RA","SOS",'W',"year"]])
machine_data['ERA'] = pitch['ERA']
machine_data['WHIP'] = pitch['WHIP']
machine_data['FIP'] = pitch['FIP']
machine_data['SO'] = pitch['SO']
machine_data.dtypes
machine_data.to_csv("training.csv", index=False)
# Source notebook: More Data Retrieval.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/tataphani/b_pt/blob/master/SAS_connection.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + colab={"base_uri": "https://localhost:8080/"} id="r8HcKh44bf1t" outputId="a624b02a-b08c-49d8-e5f8-2291158895ca"
# ! pip install saspy
# + id="TxxTqloOejS-"
# + colab={"base_uri": "https://localhost:8080/"} id="jfWbUu03cG46" outputId="1af33831-88b9-475b-829d-f4f74b866901"
# ! which java
# + colab={"base_uri": "https://localhost:8080/"} id="aHZOyIg_cN-E" outputId="579102e4-0d3e-4895-d3e9-738bb1bd3907"
## First import saspy, then start an IOM session. Sign in with the user ID or
## email address of your SAS Profile, along with its password, for
## SAS OnDemand for Academics: https://welcome.oda.sas.com
import saspy
sas = saspy.SASsession(iomhost=['odaws01-usw2.oda.sas.com', 'odaws02-usw2.oda.sas.com',
'odaws03-usw2.oda.sas.com', 'odaws04-usw2.oda.sas.com'],
java='/usr/bin/java', iomport=8591)
# + id="EMKByPHPO_JA"
# + id="YbPvZqfwcv49" colab={"base_uri": "https://localhost:8080/", "height": 130} outputId="4f3a49da-7a42-449e-b910-22073296049b"
## First SAS program using sas.submitLST
sas.submitLST('data a ; x = 1 ; run;proc print data = a ; run;' )
# + id="r1nb8TM7elJ8" colab={"base_uri": "https://localhost:8080/", "height": 446} outputId="9bbb2e98-2476-4b12-cef1-3e22c26bf7a2"
sas.submitLST("data cars ; set sashelp.cars; if _n_ <11 ; run; proc print data=cars; run;", method='listorlog')
# + colab={"base_uri": "https://localhost:8080/", "height": 259} id="P9QB1-Sgtgqd" outputId="9a5e20ff-fbd4-4aaf-821e-3879b1f65a4e"
import pandas as pd
# '?raw=true' fetches the raw CSV instead of the GitHub HTML page
df = pd.read_csv('https://github.com/sas2r/clinical_fd/blob/master/data-raw/inst/extdata/adsl.csv?raw=true')
print(df.to_string())
# + id="tNY3SXwtpjBP" colab={"base_uri": "https://localhost:8080/", "height": 381} outputId="81e10e2f-ec64-4dd6-bd85-bda522dd30fc"
import pandas as pd
pandasdf = pd.read_csv("deals.csv")
sasdf = sas.df2sd(pandasdf, 'sasdf')
sas.submitLST('proc print data = work.sasdf (obs = 5);run;', method='listorlog')
# + id="5kDMsZKqqgHH"
from google.colab import files
import pandas as pd
files.upload()
df = pd.read_csv('employee.csv')
sasdf = sas.df2sd (df ,'sasdf')
sas.submitLST("proc print data=work.sasdf; run;", method='listorlog')
# + id="lbgyjQ8m2Bqe"
## Mounting data from Google drive
# Load the Drive helper and mount
from google.colab import drive
# This will prompt for authorization.
drive.mount('/content/drive')
# + id="j7RnpU6h3VGn"
# After executing the cell above, Drive
# files will be present in "/content/drive/My Drive".
# !ls "/content/drive/My Drive"
# !ls "/content/drive/My Drive/datasets"
# !ls "/content/drive/My Drive/datasets/adsl.xpt"
# + id="JEbvPX3_4kUV"
import pandas as pd
adsl = pd.read_sas('/content/drive/My Drive/datasets/adsl.xpt')
#head(adsl)
import rpy2.ipython.html
rpy2.ipython.html.init_printing()
#adsl
# + id="YOsphBYs9usk"
## reading the .xpt files using the R language
# activate R magic
# %load_ext rpy2.ipython
# + id="HcKg7OSK9Wxe" language="R"
# install.packages('dplyr')
# install.packages('haven')
# install.packages('ggplot2')
# library(reticulate)
# + id="n8vL-XH-Yy5_"
# A bit of imports
import numpy as np
import pandas as pd
import warnings
warnings.filterwarnings('ignore')
# Load in the r magic
# %reload_ext rpy2.ipython
# %config IPCompleter.greedy=True
# %config InlineBackend.figure_format = 'retina'
# + id="23d7aXTlatPr"
adsl = pd.read_sas('/content/drive/My Drive/datasets/adsl.xpt')
# + id="1Vo9_ynLa0uQ" language="R"
#
# df1 = c(5, 3, 3)
# df2 = c(1, 3, 4)
# + id="Wis9Y5ATbE-w"
# %R -o df1 -o df2
# + id="RTNGRc1OB7Lw" language="R"
# install.packages('reticulate')
# library(reticulate)
# #install.packages("googledrive")
# #library("googledrive")
# #drive_find()
# + id="yGWBm8znX4pV"
# + id="F54bT8JKCof1" language="R"
# if (file.exists("/usr/local/lib/python3.7/dist-packages/google/colab/_ipython.py")) {
# install.packages("R.utils")
# library("R.utils")
# library("httr")
# my_check <- function() {return(TRUE)}
# reassignInPackage("is_interactive", pkgName = "httr", my_check)
# options(rlang_interactive=TRUE)
# }
# + id="Cix5h1txJOTd"
# If it's not available we need to install it. Add other packages if you need them.
# %%R
packages <- c("googledrive", "googlesheets4")
if (length(setdiff(packages, rownames(installed.packages()))) > 0) {
install.packages(setdiff(packages, rownames(installed.packages())))
}
# + id="ChjHYnLSQSg4" language="R"
# library("googledrive")
# library("googlesheets4")
# + id="c8T5aAMqRsEE"
# Find Google Sheets and get the id of the first sheet
# %%R
#library("googledrive")
# drive_find(type = "xpt" )
# + id="U06RF-2P9HQF" language="R"
# adsl <- read_xpt('/content/drive/My Drive/datasets/adsl.xpt')
# + id="qFCeBltvPBAg"
##https://stackoverflow.com/questions/66430209/running-rstudio-on-google-colab
##https://rpy2.github.io/doc/latest/html/introduction.html#r-packages
import rpy2.robjects as robjects
import rpy2
print(rpy2.__version__)
# + id="iudzGBncP44U"
from rpy2.robjects.packages import importr
# import R's "base" package
base = importr('base')
# import R's "utils" package
utils = importr('utils')
# + id="gPajYDpDQR6q"
# import rpy2's package module
import rpy2.robjects.packages as rpackages
# import R's utility package
utils = rpackages.importr('utils')
# select a mirror for R packages
utils.chooseCRANmirror(ind=1) # select the first mirror in the list
# R package names
packnames = ('ggplot2', 'hexbin', 'dplyr', 'xportr', 'SASxport')
# R vector of strings
from rpy2.robjects.vectors import StrVector
# Selectively install what needs to be installed.
# We are fancy, just because we can.
names_to_install = [x for x in packnames if not rpackages.isinstalled(x)]
if len(names_to_install) > 0:
utils.install_packages(StrVector(names_to_install))
# + id="9xBj1x5dYNGX"
##https://songjoyce.medium.com/google-colab-101-using-both-r-python-f04eca03e6b5
# %load_ext rpy2.ipython
# + id="I_IIkqbFYrrU"
from rpy2.robjects.packages import importr
utils = importr('utils')
dataf = utils.read_csv('https://raw.githubusercontent.com/jakevdp/PythonDataScienceHandbook/'
'master/notebooks/data/california_cities.csv')
# + id="PejnhnQzZoRN"
import rpy2.ipython.html
rpy2.ipython.html.init_printing()
dataf
# + id="3fjXUGAkY4Gm"
#Python to R
# %R -i df -i df2
#R to Python
# %R -o df -o df2
# + id="NjwbqH7ylchx"
# activate R magic
# %load_ext rpy2.ipython
# + id="mT3NMhpWle9F" language="R"
# x <- 42
# print(x)
# + id="k1k9TKdplyCo" language="R"
# install.packages('dplyr')
# install.packages('haven')
# install.packages('ggplot2')
#
# + id="fpZHzsFsv-Tu" language="R"
# data(package = "dplyr")
# + id="WYGlmvBdwrgB"
import pandas as pd
pandasdf = pd.read_csv("/content/sample_data/california_housing_train.csv")
sasdf = sas.df2sd(pandasdf, 'sasdf')
sas.submitLST("proc print data=work.sasdf (obs=5);run;", method='listorlog')
# + id="tlXas2uHzuWI"
"""
Read five NHANES data files using Pandas and merge them into a 2d array.
This value can be exported:
Z : a Pandas data frame containing all the data
"""
## Data file names (the files are in ../Data)
FN = ["DEMO_F.XPT", "BMX_F.XPT", "BPX_F.XPT", "DR1TOT_F.XPT", "DR2TOT_F.XPT"]
# + id="7OlyFBzH0T_0"
import numpy as np
import pandas as pd
df = pd.read_sas('filename.XPT')
# Source notebook: SAS_connection.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# ### This file is for adding a column of time in the energy file
# +
import pandas as pd
import regex as re
import sys, os
import time
import ast
import math
from statistics import mean
from ast import literal_eval
import numpy as np
import csv
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import make_scorer, accuracy_score, precision_score, recall_score, f1_score
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import StratifiedKFold
from sklearn import metrics
import matplotlib.pyplot as plt
from pandas.api.types import is_string_dtype
from pandas.api.types import is_numeric_dtype
from statistics import mean, median
from sklearn.tree import DecisionTreeClassifier
from collections import Counter
# neural network
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
import matplotlib.pyplot as plt
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error
from keras.preprocessing.sequence import pad_sequences
import tensorflow as tf
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.layers import Embedding
from keras.layers import LSTM
# -
# ### Mirai attack dataset
# +
from pandas import DataFrame
from pandas import concat
def series_to_supervised(data, n_in=1, n_out=1, dropnan=True):
"""
Frame a time series as a supervised learning dataset.
Arguments:
data: Sequence of observations as a list or NumPy array.
n_in: Number of lag observations as input (X).
n_out: Number of observations as output (y).
dropnan: Boolean whether or not to drop rows with NaN values.
Returns:
Pandas DataFrame of series framed for supervised learning.
"""
n_vars = 1 if type(data) is list else data.shape[1]
df = DataFrame(data)
# print(df)
cols, names = list(), list()
# input sequence (t-n, ... t-1)
for i in range(n_in, 0, -1):
cols.append(df.shift(i))
names += [('var%d(t-%d)' % (j+1, i)) for j in range(n_vars)]
# forecast sequence (t, t+1, ... t+n)
for i in range(0, n_out):
cols.append(df.shift(-i))
if i == 0:
names += [('var%d(t)' % (j+1)) for j in range(n_vars)]
else:
names += [('var%d(t+%d)' % (j+1, i)) for j in range(n_vars)]
# put it all together
agg = concat(cols, axis=1)
agg.columns = names
# drop rows with NaN values
if dropnan:
agg.dropna(inplace=True)
return agg
def get_pre_rec(model, X_train, X_test, y_train, y_test):
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
# print(y_pred)
# print(y_test)
pre=precision_score(y_test, y_pred, average='weighted')
rec=recall_score(y_test, y_pred, average='weighted')
fpr, tpr, thresholds = metrics.roc_curve(y_test, y_pred, pos_label=1)
#print (fpr, tpr, thresholds)
auc=metrics.auc(fpr, tpr)
return pre, rec, auc
# -
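# The lag framing that `series_to_supervised` performs can be seen on a short series: each row pairs the previous observation(s) with the current one. A minimal sketch of the same shift-and-concat idea (column name illustrative):

```python
import pandas as pd

s = pd.DataFrame({"energy": [1.0, 2.0, 3.0, 4.0]})
framed = pd.concat([s.shift(1), s], axis=1)      # (t-1) next to (t)
framed.columns = ["energy(t-1)", "energy(t)"]
framed = framed.dropna()                         # first row has no lag value
print(framed)
```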
# ## Make energy to timestamp dataset
# +
attacks = ["UFOnet","RouterSploit", "Mirai", "Normal"]
for attack in attacks:
iot_devices = ["wr940n", "archer", "camera", "cctv", "indoor", "phillips"]
print("Actual attack is", attack)
dataset_folder = "/home/amine/Documents/CRIM_Project/Energy-network/"+attack+"/"
if attack == "UFOnet":
iot_devices = ["archer", "cctv", "indoor"]
dfObj = pd.DataFrame()
for i, target in enumerate(iot_devices):
output = "/home/amine/Documents/CRIM_Project/timeseries_energy_network/"+attack+"/"+target+"-"+attack+".csv"
print(target)
dataset_file = dataset_folder+target+"-"+attack+".csv"
if os.path.isfile(dataset_file):
print(dataset_file)
dataset = pd.read_csv(dataset_file, delimiter=',')
for index, line in dataset.iterrows():
val = int(len(ast.literal_eval(line['protocol'])))
dataset.loc[index,'packet'] = val
unique_sour = len(set(ast.literal_eval(line['source'])))
unique_dest = len(set(ast.literal_eval(line['destination'])))
list_length = list(ast.literal_eval(line['length']))
if(list_length):
list_length.sort()
max_len = int(max (list_length))
min_len = int(min (list_length))
mean_len = mean (list_length)
mid = len(list_length) // 2
median_len = (list_length[mid] + list_length[~mid]) / 2
dataset.loc[index,'unique_sour'] = unique_sour
dataset.loc[index,'unique_dest'] = unique_dest
dataset.loc[index,'median_len'] = median_len
dataset.loc[index,'mean_len'] = mean_len
dataset.loc[index,'max_len'] = max_len
dataset.loc[index,'min_len'] = min_len
dataset = dataset.drop(columns=['source', 'destination', 'info', 'length', 'timestamp', 'time'])
# print(dataset)
cols = dataset.columns
dataset_series = series_to_supervised(dataset, 2)
dataset_series ["device"] = i
# dfObj = dfObj.append(dataset_series)
for count, c in enumerate(cols):
dataset_series.columns = dataset_series.columns.str.replace("var"+str(count+1), c)
# print(dataset_series.columns)
print("writing")
dataset_series.to_csv(output, encoding='utf-8', index=False)
# -
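# The median computation above relies on `~mid` indexing from the end (`~mid == -mid - 1`), which makes the same expression work for both even and odd lengths:

```python
def median_sorted(vals):
    # vals must already be sorted; for odd lengths mid and ~mid coincide.
    mid = len(vals) // 2
    return (vals[mid] + vals[~mid]) / 2

print(median_sorted([1, 2, 3, 4]), median_sorted([1, 2, 3]))  # 2.5 2.0
```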
# ## Mix each attack with the normal traffic
# +
attacks = ["Mirai", "RouterSploit", "UFOnet", "Normal"]
for attack in attacks:
iot_devices = ["wr940n", "archer", "camera", "cctv", "indoor", "phillips"]
print("Actual attack is", attack)
dataset_folder = "/home/amine/Documents/CRIM_Project/Energy-network/"+attack+"/"
if attack == "UFOnet":
iot_devices = ["archer", "cctv", "indoor"]
dfObj = pd.DataFrame()
for i, target in enumerate(iot_devices):
dataset_file = "/home/amine/Documents/CRIM_Project/timeseries_energy_network/"+attack+"/"+target+"-"+attack+".csv"
dataset = pd.read_csv(dataset_file, delimiter=',')
dfObj = dfObj.append(dataset)
# print(dfObj)
dfObj.to_csv("/home/amine/Documents/CRIM_Project/timeseries_energy_network/"+attack+"/"+attack+"-all.csv", encoding='utf-8', index=False)
# +
attacks = ["Mirai", "RouterSploit", "UFOnet"]
for attack in attacks:
dataset_folder = "/home/amine/Documents/CRIM_Project/timeseries_energy_network/"+attack+"/"
mirei_input_folder = dataset_folder+attack+"-all.csv"
mirei_output_folder = dataset_folder+"pre_ml-"+attack+".csv"
dataset_attack = pd.read_csv(mirei_input_folder, delimiter=',')
dataset_normal = pd.read_csv("/home/amine/Documents/CRIM_Project/timeseries_energy_network/Normal/Normal-all.csv", delimiter=',')
dataset_attack['target'] = 1
dataset_normal['target'] = 0
merged_data = pd.concat([dataset_attack,dataset_normal], axis=0)
merged_data.to_csv(mirei_output_folder, index=False)
# -
# ### Dummies for protocol variables
# +
attacks = ["Mirai", "RouterSploit", "UFOnet"]
from collections import Counter
for attack in attacks:
dataset_folder = "/home/amine/Documents/CRIM_Project/timeseries_energy_network/"+attack+"/"
mirei_input_folder = dataset_folder+"pre_ml-"+attack+".csv"
mirei_output_folder = dataset_folder+"pre_ml_dummies-"+attack+".csv"
dataset = pd.read_csv(mirei_input_folder, delimiter=',')
columns_dum = ['protocol(t-2)', 'protocol(t-1)', 'protocol(t)']
for col in columns_dum:
bridge_types = []
for index, line in dataset.iterrows():
val = list(set(ast.literal_eval(line[col])))
bridge_types.extend(val)
bridge_types = list(set(bridge_types))
for col_bridge in bridge_types:
dataset[col+'_'+col_bridge] = 0
for index, line in dataset.iterrows():
val = list(set(ast.literal_eval(line[col])))
occurrences = dict(Counter(val))
for pro, occ in occurrences.items():
dataset.loc[index, col+'_'+pro] = occ
del dataset[col]
# print(dataset)
dataset.to_csv(mirei_output_folder, index=False)
# -
# ## Apply ML algorithms (random forest, logistic regression, decision tree)
# +
attacks = ["Mirai", "RouterSploit", "UFOnet"]
for attack in attacks:
dataset_folder = "/home/amine/Documents/CRIM_Project/timeseries_energy_network/"+attack+"/"
input_folder = dataset_folder+"pre_ml_dummies-"+attack+".csv"
dataset = pd.read_csv(input_folder, delimiter=',')
target='target'
# features = list(dataset.columns)
# print(list(dataset.columns))
dataset = dataset[~dataset[target].isnull()]
dataset=dataset.reset_index(drop=True)
y = dataset[target].astype(int)
# print(y.value_counts())
features = list(dataset.columns)
features.remove(target)
# energy_features = ["energy(t)", "energy(t-1)", "energy(t-2)"]
# for e in energy_features:
# features.remove(e)
x = dataset[features]
folds = StratifiedKFold(n_splits=10, shuffle=True)
pre_rf = []
pre1_rf = []
pre2_rf = []
rec_rf = []
rec1_rf = []
rec2_rf = []
auc_rf = []
auc1_rf = []
auc2_rf = []
for train_index, test_index in folds.split(x,y):
x_train=x.iloc[train_index]
x_test=x.iloc[test_index]
y_train=y.iloc[train_index]
y_test = y.iloc[test_index]
sm = SMOTE(random_state=42, sampling_strategy='auto')
x_train, y_train = sm.fit_resample(x_train, y_train)
pre, rec, auc = get_pre_rec(RandomForestClassifier(), x_train, x_test, y_train, y_test)
pre1, rec1, auc1 = get_pre_rec(LogisticRegression(), x_train, x_test, y_train, y_test)
pre2, rec2, auc2 = get_pre_rec(DecisionTreeClassifier(), x_train, x_test, y_train, y_test)
pre_rf.append (pre)
rec_rf.append (rec)
auc_rf.append (auc)
pre1_rf.append (pre1)
rec1_rf.append (rec1)
auc1_rf.append (auc1)
pre2_rf.append (pre2)
rec2_rf.append (rec2)
auc2_rf.append (auc2)
print ("random forest {:.2f}".format(median(pre_rf)*100), "{:.2f}".format(median(rec_rf)*100), "{:.2f}".format(median(auc_rf)*100) )
print ("logistic regression{:.2f}".format(median(pre1_rf)*100), "{:.2f}".format(median(rec1_rf)*100), "{:.2f}".format(median(auc1_rf)*100))
print ("decision tree {:.2f}".format(median(pre2_rf)*100), "{:.2f}".format(median(rec2_rf)*100), "{:.2f}".format(median(auc2_rf)*100))
# -
# ## Neural Network
# +
attacks = ["Mirai", "RouterSploit", "UFOnet"]
for attack in attacks:
print(attack)
dataset_folder = "/home/amine/Documents/CRIM_Project/energy/"+attack+"/"
input_folder = dataset_folder+"pre_ml-"+attack+".csv"
dataframe = pd.read_csv(input_folder, delimiter=',')
# load dataset
dataset = dataframe.values
# split into input (X) and output (Y) variables
X = dataset[:,0:6].astype(float)
Y = dataset[:,6]
# encode class values as integers
encoder = LabelEncoder()
encoder.fit(Y)
encoded_Y = encoder.transform(Y)
# baseline model
def create_baseline():
# create model
model = Sequential()
model.add(Dense(6, input_dim=6, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['Recall', "Precision"])
return model
# evaluate model with standardized dataset
estimator = KerasClassifier(build_fn=create_baseline, epochs=5, batch_size=1, verbose=2)
kfold = StratifiedKFold(n_splits=10, shuffle=True)
results = cross_val_score(estimator, X, encoded_Y, cv=kfold)
print("Baseline: %.2f%% (%.2f%%)" % (results.mean()*100, results.std()*100))
# -
# Source notebook: src/.ipynb_checkpoints/Energy_data_for_ML-checkpoint.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Data Science Class
# +
import pandas as pd
uri = "https://raw.githubusercontent.com/alura-cursos/introducao-a-data-science/master/aula4.1/movies.csv"
filmes = pd.read_csv(uri)
filmes.head()
# -
filmes.columns
filmes.columns = ["filmeId", "titulo", "generos"]
filmes.head()
uri = "https://raw.githubusercontent.com/alura-cursos/introducao-a-data-science/master/aula4.1/ratings.csv"
notas = pd.read_csv(uri)
notas.head()
notas.columns = ["usuarioId", "filmeId", "nota", "momento"]
notas.head()
notas["nota"] # Series
notas["nota"].unique()
notas["nota"].mean()
notas["nota"].min()
notas.describe()
# ## Matrizes
import numpy as np
c = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
c
c.ndim
c[0, 1]
c[2, 1]
c.shape
c.shape[0]
c.shape[1]
# ### Extracting a submatrix
c[:,:]
c[1:,:]
c[1:]
c[0:2]
c[0:2].shape
c[:,1:]
c[:,1:2]
c[:,0:2]
c[0:2, 0:2]
c[0,1]
c[0,1] = 20
c
c[0:1, 0:1] = np.array([300])
c
c[0:1] = np.array([3, 2, 1])
c
# ### Matrizes especiais
# +
# IDENTITY MATRIX
# A square matrix whose main diagonal is made up of the number 1.
i = np.eye(3) # 3 rows and 3 columns
i
# +
# MATRIX OF ZEROS
# A matrix made up entirely of zeros.
zeros = np.zeros([3, 3])
zeros
# +
# MATRIX OF ONES
# A matrix made up entirely of ones.
ums = np.ones([3, 3])
ums
# +
# ARRAY FILLED WITH A GIVEN ELEMENT
d = np.full((4, 5), 10) # (4, 5) is the shape: 4 rows by 5 columns,
# all filled with 10.
d
# +
# DIAGONAL MATRIX
# Unlike the identity matrix, a diagonal matrix can hold
# any numbers along its main diagonal.
matriz = np.diag([1, 2, 3, 4, 5, 6, 7, 8, 9])
matriz
# +
# MATRIX WITH REPEATED BLOCKS
matriz = np.tile([[1, 2], [3, 4], [5, 6]], (3, 2)) # (3, 2) is the shape: the
# rows are repeated 3 times and the columns 2 times.
matriz
# -
# ## Data Types
# **dtype** - tells you which data type is stored in the structure
e = np.array([1, 2, 3, 4, 5])
e
e.dtype
f = np.array(['a', 'b', 'c', 'd'])
f
f.dtype
g = np.array([False, True, False])
g
g.dtype
# ## CASTING DATA
h = np.array([9, 8, 7, 6])
h
h.dtype
h = h.astype(float)
h
h.dtype
# ## Arithmetic Operations with Arrays
k = np.arange(0, 10)
k
h = np.arange(0, 10)
h
k + h
h - k
k = h
k * h
k == h
k > h
k < h
array1 = np.array([True, True, True])
array2 = np.array([False, True, False])
np.logical_and(array1, array2)
np.logical_or(array1, array2)
# ## ARRAY COPIES
a = np.array([1, 2, 3, 4, 5, 6])
b = a.copy()
b
a[0] = 10
a
b
c = a.copy()
c
a[0] = 20
a
b
c
# ## Arithmetic operations with matrices
a = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
b = np.array([[9, 8, 7], [6, 5, 4], [3, 2, 1]])
a + b
b - a
a + 3
a * 2
a * b
# ## ARRAY x MATRIX
z = np.array([[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]], [[13, 14, 15], [16, 17, 18]]])
z
z.shape
# +
# y = np.matrix([[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]], [[13, 14, 15], [16, 17, 18]]])
# The matrix function only supports two-dimensional arrays.
# The variable y would raise an error because this array is three-dimensional.
# -
def shape(A):
num_rows = len(A)
return num_rows
A = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
A
shape(A)
def is_diagonal(i, j):
"""1's na diagonal, 0's nos demais lugares"""
return 1 if i == j else 0
is_diagonal(3, 3)
M = np.eye(10)
M
# Source notebook: Aula Data Science.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
import numpy as np
class Affine:
def __init__(self, W, b):
self.W = W
self.b = b
self.x = None
self.dW = None
self.db = None
def forward(self, x):
self.x = x
out = np.dot(x, self.W) + self.b
return out
def backward(self, dout):
dx = np.dot(dout, self.W.T)
self.dW = np.dot(self.x.T, dout)
self.db = np.sum(dout, axis=0)
return dx
# +
affine = Affine(np.random.randn(2, 3), np.zeros(3))  # W and b are required arguments
# -
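# As a sanity check of the backward pass above: for `out = x @ W + b` with an upstream gradient of ones, `dx` should equal `1 @ W.T`, i.e. the row sums of `W`. A self-contained check with small fixed values:

```python
import numpy as np

W = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([0.5, -0.5])
x = np.array([[1.0, 1.0]])

out = x @ W + b               # forward: [[4.5, 5.5]]
dx = np.ones_like(out) @ W.T  # backward with dout = 1: [[3., 7.]]
print(out, dx)
```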
# Source notebook: ch5/Affine.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Logistic Regression
#
# <br />
# <br />
# <br />
#
# ### Table of Contents
#
# * Introduction
# * Loading Dataset
# * Logistic Regression Model
# * Using a Scaled Model
# * Quantitative Assessment with Cross-Validation
# * Adding Volume and Interaction Terms
#
# <br />
# <br />
# <br />
# ## Introduction
#
# In this notebook, we illustrate the use of Logistic Regression to categorize the abalone shell data set by number of rings. The notebook starts by importing the data as a scikit Bunch object. It then builds a cross-validated Logistic Regression model using a 70/30 split of training and test data, and plots the confusion matrix.
#
# The results turn out to be pretty dismal. However, we can improve the model quite a bit by utilizing results from prior notebooks. There, we saw that adding a volume variable and normalizing the input variables were all helpful.
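# The two improvements mentioned can be sketched quickly; the ellipsoid form of the volume feature and the example measurements below are assumptions for illustration:

```python
import numpy as np

# One abalone's measurements (length, diameter, height in the dataset's units).
length, diameter, height = 0.455, 0.365, 0.095

# Derived volume feature, treating the shell as an ellipsoid.
volume = (4.0 / 3.0) * np.pi * (length / 2) * (diameter / 2) * (height / 2)

# Normalizing: center a feature to zero mean and unit variance (toy column).
col = np.array([0.2, 0.4, 0.6])
scaled = (col - col.mean()) / col.std()
print(round(volume, 5), scaled.round(3))
```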
# +
# %matplotlib inline
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import statsmodels.api as sm
import scipy.stats as stats
from sklearn import metrics, cross_validation, preprocessing
from sklearn.linear_model import LogisticRegression
from sklearn.datasets.base import Bunch
import pickle
# -
# ## Loading Dataset
#
# The function to load the data reads the data from a CSV file, but populates it using a scikit `Bunch` object, which is basically a dictionary-like container with the inputs and outputs stored separately.
def load_data():
# Load the data from this file
data_file = 'abalone/Dataset.data'
# x data labels
xnlabs = ['Sex']
xqlabs = ['Length','Diameter','Height','Whole weight','Shucked weight','Viscera weight','Shell weight']
xlabs = xnlabs + xqlabs
# y data labels
ylabs = ['Rings']
# Load data to dataframe
df = pd.read_csv(data_file, header=None, sep=' ', names=xlabs+ylabs)
# Filter zero values of height/length/diameter
df = df[df['Height']>0.0]
df = df[df['Length']>0.0]
df = df[df['Diameter']>0.0]
dummies = pd.get_dummies(df[xnlabs], prefix='Sex')
dfdummies = df[xqlabs+ylabs].join(dummies)
xqlabs = xqlabs + dummies.columns.tolist()
return Bunch(data = dfdummies[xqlabs],
target = df[ylabs],
feature_names = xqlabs,
target_names = ylabs)
# Load the dataset
dataset = load_data()
X = dataset.data
y = dataset.target
print X.head()
print "-"*20
print y.head()
# ## Logistic Regression Model
#
# Now we can split the data into two parts, a training set and a testing set. We'll use the training set to train the model and fit parameters, and the testing set to assess how well it does. Splitting the inputs and outputs in this way is common when cross-validating a model (for example, to try cutting the data in different places to see if there are significant changes in the fit parameters).
# +
# Split into a training set and a test set
# 70% train, 30% test
X_train, X_test, y_train, y_test = \
    cross_validation.train_test_split(X, y, test_size=0.3)
# -
# Now we create a logistic regression model, which is predicting abalone age as a categorical variable (the class of 1 ring, the class of 2 rings, and so on.)
# +
# Fit the training data to the model
model = LogisticRegression()
model.fit(X_train, y_train)
print model
# -
# Once we've trained the model on the training set, we assess the model with the testing set. If we cut our data into k pieces and repeated this procedure using each of the k cuts as the testing set, and compared the resulting parameters, it would be called k-fold cross validation.
# Make predictions
yhat_test = model.predict(X_test)
# +
# Make sure y_test is a numpy array
y_test = y_test['Rings'].apply(lambda x : int(x)).values
# Compare yhat_test to y_test to determine how well the model did
# -
# This is not usually a good way to assess categorical models,
# but in this case, we're guessing age, so the categories are quantitative.
print model.score(X_test,y_test)
# +
## Yikes. This model may not be worth saving.
#with open('logistic_regression.pickle', 'w') as f:
# pickle.dump(model, f)
# -
fig = plt.figure(figsize=(7,7))
ax = fig.add_subplot(111)
sns.heatmap(metrics.confusion_matrix(y_test, yhat_test),
cmap="GnBu", square=True, ax=ax)
ax.set_title('Heatmap: Confusion Matrix for \nLogistic Regression Model')
ax.set_xlabel('Predicted Age')
ax.set_ylabel('Actual Age')
plt.show()
# +
#print metrics.confusion_matrix(y_test, yhat_test)
# -
print metrics.classification_report(y_test, yhat_test)
# To interpret the above chart: precision for a category is the fraction of the abalones we assigned to that category that actually belong to it (true positives over all predicted positives). Most of the abalones have between 7 and 11 rings. For these categories our precision is around 20-30%. This means that 70-80% of the abalones that we put in these categories (i.e., that we guessed have 7-11 rings) actually have a different number of rings.
#
# The 7-10 ring categories have a recall of about 40%, which means that 60% of the abalones that belong in those categories were assigned somewhere else.
#
# So basically, a _lot_ of miscategorization, with most of it happening for the 7-11 rings categories (which also happen to be the most common).
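The per-class definitions can be made concrete by computing them directly from counts. A small sketch with made-up ring labels (not the abalone data):

```python
import numpy as np

def precision_recall(y_true, y_pred, label):
    """Per-class precision and recall computed directly from counts."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == label) & (y_true == label))   # true positives
    predicted = np.sum(y_pred == label)                  # TP + FP
    actual = np.sum(y_true == label)                     # TP + FN
    precision = tp / float(predicted) if predicted else 0.0
    recall = tp / float(actual) if actual else 0.0
    return precision, recall

# Made-up ring labels, purely illustrative.
y_true = [7, 7, 8, 9, 9, 9, 10]
y_pred = [7, 8, 8, 9, 9, 7, 10]
p, r = precision_recall(y_true, y_pred, 9)
print(p, r)  # 1.0 0.6666666666666666
```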
resid = y_test - yhat_test
print np.mean(resid)
print np.std(resid)
fig = plt.figure(figsize=(4,4))
ax = fig.add_subplot(111)
stats.probplot(resid, dist='norm', plot=ax)
plt.show()
# ## Using a Scaled Model
#
# For our next step we'll compare a scaled model, to see how well that does.
# +
# Split into a training set and a test set
# 70% train, 30% test
X_train, X_test, y_train, y_test = \
    cross_validation.train_test_split(X, y, test_size=0.3)
# +
# Repeat above, but with scaled inputs
Xscaler = preprocessing.StandardScaler().fit(X_train)
Xstd_train = Xscaler.transform(X_train)
Xstd_test = Xscaler.transform(X_test)
modelstd = LogisticRegression()
modelstd.fit(Xstd_train, y_train)
# +
# Make predictions
yhatstd_test = modelstd.predict(Xstd_test)
y_test = y_test['Rings'].values
# -
# This is not usually a good way to assess categorical models,
# but in this case, we're guessing age, so the categories are quantitative.
print modelstd.score(Xstd_test,y_test)
fig = plt.figure(figsize=(7,7))
ax = fig.add_subplot(111)
sns.heatmap(metrics.confusion_matrix(y_test, yhatstd_test),
cmap="GnBu", square=True, ax=ax)
ax.set_title('Heatmap: Confusion Matrix for \nNormalized Logistic Regression Model')
ax.set_xlabel('Predicted Age')
ax.set_ylabel('Actual Age')
plt.show()
resid = y_test - yhatstd_test
print np.mean(resid)
print np.std(resid)
fig = plt.figure(figsize=(4,4))
ax = fig.add_subplot(111)
stats.probplot(resid, dist='norm', plot=ax)
plt.show()
# This model, like the corresponding unscaled version, is pretty terrible. We're underpredicting abalone age by a substantial amount, and the residuals still have curvature.
# ## Quantitative Assessment with Cross-Validation
#
# Moving forward, we can try adding a few additional features to our logistic regression model (more input variables, transformed responses, etc.). However, to do that we'll want to be a bit more careful about how we're assessing our models.
#
# Here, we'll implement a k-fold cross validation of our logistic regression parameters, so we can be sure we're not just getting lucky or unlucky with how we cut our data set. To do this with scikit-learn we'll use some of the goodies provided in the [scikit-learn cross-validation documentation](http://scikit-learn.org/stable/modules/cross_validation.html). Namely, we'll build a logistic regression model (which we'll use to fit the data), a shuffle split object (which we'll use to split the data at random into training and test sets), and a pipeline to connect the standard scaler to the logistic regression model.
#
# When we run the `cross_val_score()` method, we'll pass it the pipeline as our "model", and the shuffle split object as our cross-validation object. We'll also pass it our original inputs and outputs, X and y (note that we no longer have to split the data, standardize it, fit the model, compare the predictions, etc etc.).
from sklearn.model_selection import cross_val_score, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import ShuffleSplit
# +
# Make a logistic regression model
mod = LogisticRegression()
# Make a ShuffleSplit object to split data into training/testing data sets randomly
cv = ShuffleSplit(n_splits=4, test_size=0.3, random_state=0)
# This will be our "model":
# a pipeline that scales our inputs first,
# then passes them to the logistic regression model
clf = make_pipeline(preprocessing.StandardScaler(), mod)
cross_val_score(clf, X, y, cv=cv)
# -
# This is a big improvement in workflow, if not in accuracy: we now split the data into training and testing data sets randomly, four different times, and see what the score of each model is. Note that if we want to access the predictions themselves, we can use the `cross_val_predict()` method instead of the `cross_val_score()` method. That will allow us to compute things like a confusion matrix or run a classification report.
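The shuffle-split mechanism can be written out by hand without scikit-learn: cut the data at random `n_splits` times and collect one score per cut. This is only an illustration of the mechanism, not scikit-learn's implementation; the majority-class "model" is a toy stand-in:

```python
import numpy as np

def shuffle_split_scores(X, y, fit_score, n_splits=4, test_size=0.3, seed=0):
    """Randomly cut the data n_splits times and collect one score per cut --
    the same idea as ShuffleSplit + cross_val_score, written out by hand."""
    rng = np.random.RandomState(seed)
    n = len(X)
    n_test = int(round(n * test_size))
    scores = []
    for _ in range(n_splits):
        idx = rng.permutation(n)
        test_idx, train_idx = idx[:n_test], idx[n_test:]
        scores.append(fit_score(X[train_idx], y[train_idx], X[test_idx], y[test_idx]))
    return scores

# Toy stand-in for a classifier: always predict the training set's majority class.
def majority_fit_score(X_tr, y_tr, X_te, y_te):
    majority = np.bincount(y_tr).argmax()
    return float(np.mean(y_te == majority))

X = np.arange(20).reshape(20, 1)
y = np.array([0] * 12 + [1] * 8)
scores = shuffle_split_scores(X, y, majority_fit_score)
print(scores)
```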
# ## Adding Volume and Interaction Terms
#
# Now that we have a more quantitative way to assess our models, let's start adding in some factors to see if we can improve our logistic regression model.
def load_data_with_volume():
# Load the data from this file
data_file = 'abalone/Dataset.data'
# x data labels
xnlabs = ['Sex']
xqlabs = ['Length','Diameter','Height','Whole weight','Shucked weight','Viscera weight','Shell weight']
xlabs = xnlabs + xqlabs
# y data labels
ylabs = ['Rings']
# Load data to dataframe
df = pd.read_csv(data_file, header=None, sep=' ', names=xlabs+ylabs)
# Filter zero values of height/length/diameter
df = df[df['Height']>0.0]
df = df[df['Length']>0.0]
df = df[df['Diameter']>0.0]
# -----------------------------
# Add volume
df['Volume'] = df['Height']*df['Length']*df['Diameter']
xqlabs.append('Volume')
# Add dimensions squared
sq = lambda x : x*x
df['Height2'] = df['Height'].apply(sq)
df['Length2'] = df['Length'].apply(sq)
df['Diameter2'] = df['Diameter'].apply(sq)
xqlabs.append('Height2')
xqlabs.append('Length2')
xqlabs.append('Diameter2')
# Add interactions
df['Height-Length'] = df['Height']*df['Length']
df['Length-Diameter'] = df['Length']*df['Diameter']
df['Height-Diameter'] = df['Height']*df['Diameter']
xqlabs.append('Height-Length')
xqlabs.append('Length-Diameter')
xqlabs.append('Height-Diameter')
# Add dimensions cubed
cube = lambda x : x*x*x
df['Height3'] = df['Height'].apply(cube)
df['Length3'] = df['Length'].apply(cube)
df['Diameter3'] = df['Diameter'].apply(cube)
xqlabs.append('Height3')
xqlabs.append('Length3')
xqlabs.append('Diameter3')
# -----------------------------
dummies = pd.get_dummies(df[xnlabs], prefix='Sex')
dfdummies = df[xqlabs+ylabs].join(dummies)
xqlabs = xqlabs + dummies.columns.tolist()
return Bunch(data = dfdummies[xqlabs],
target = df[ylabs],
feature_names = xqlabs,
target_names = ylabs)
# Load the dataset
datasetV = load_data_with_volume()
XV = datasetV.data
yV = datasetV.target
# +
# Make a logistic regression model
mod = LogisticRegression()
# Make a ShuffleSplit object to split data into training/testing data sets randomly
cv = ShuffleSplit(n_splits=4, test_size=0.3, random_state=0)
# This will be our "model":
# a pipeline that scales our inputs first,
# then passes them to the logistic regression model
clf = make_pipeline(preprocessing.StandardScaler(), mod)
cross_val_score(clf, XV, yV, cv=cv)
# -
# Adding higher order variable inputs to our model didn't help much. Although we really didn't explore variable interactions very deeply, it's clear they're only getting us a boost of less than 0.05 in the model score. Let's actually fit the model to data, using the same model and pipeline and data set, but this time use `cross_val_predict()` instead of `cross_val_score()` so we can actually get the predictions from our model.
from sklearn.model_selection import StratifiedKFold
skf = StratifiedKFold(n_splits=4)
print XV.values.shape
#print len(yV.values)
print yV.values.reshape(len(yV.values)).shape
# Because this is an array of shape (N,1)
# and we need an array of shape (N,)
# we must reshape it.
yV = yV.values.reshape(len(yV.values))
yhatV = cross_val_predict(clf, XV, yV, cv=skf)
print len(yV)
print len(yhatV)
fig = plt.figure(figsize=(7,7))
ax = fig.add_subplot(111)
sns.heatmap(metrics.confusion_matrix(yV, yhatV),
cmap="GnBu", square=True, ax=ax)
ax.set_title('Heatmap: Confusion Matrix for \nNormalized Logistic Regression Model')
ax.set_xlabel('Predicted Age')
ax.set_ylabel('Actual Age')
plt.show()
# ## Conclusions
#
# Throwing in the towel here... The logistic model performs very poorly when compared to other techniques like ridge regression or support vector regression, and it'll take a lot of effort, focused on this particular model form, to get it anywhere close to support vector regression.
| Abalone - Logistic Regression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
from sigvisa.graph.sigvisa_graph import SigvisaGraph
from sigvisa.signals.common import Waveform
from sigvisa.source.event import get_event
from sigvisa.infer.run_mcmc import run_open_world_MH
from sigvisa.infer.mcmc_logger import MCMCLogger
import numpy as np
import pickle
# -
"""
sg = SigvisaGraph(template_model_type="dummyPrior", template_shape="lin_polyexp",
wiggle_model_type="dummy", wiggle_family="dummy",
phases="leb", nm_type = "ar")
evid = 5335822
wave = Waveform(data=np.zeros(20), stime=1240241314.33, srate=5.0, sta="MKAR", chan="BHZ", filter_str="freq_0.8_4.5")
wn = sg.add_wave(wave)
ev = get_event(evid=evid)
evnodes = sg.add_event(ev, observed=True)"""
"""logger = MCMCLogger(write_template_vals=True, dump_interval=100)
run_open_world_MH(sg, steps=5000,
enable_template_moves=True,
enable_event_moves=True,
logger=logger,
enable_event_openworld=False,
enable_template_openworld=False)"""
"""def build_relocation_sg_leb(evids, stas):
sg = SigvisaGraph(template_model_type="dummyPrior", template_shape="lin_polyexp",
wiggle_model_type="dummy", wiggle_family="dummy",
phases="leb", nm_type = "ar")
for evid in evids:
for sta in stas:
wave = load_event_station_chan(evid, sta, "auto", exclude_other_evs=True)
sg.add_wave(wave)
for evid in evids:
ev = get_event(evid)
sg.add_event(ev, observe=True)
return sg
sg = SigvisaGraph(template_model_type="dummyPrior", template_shape="lin_polyexp",
wiggle_model_type="dummy", wiggle_family="dummy",
phases="leb", nm_type = "ar")"""
# +
from sigvisa.synthetic.doublets import *
def sample_events(basedir, seed=40):
n_evs = 1
lons = [129, 130]
lats = [-3.5, -4.5]
times = [1238889600, 1245456000]
mbs = [4.0, 5.0]
sw = SampledWorld(seed=seed)
sw.sample_region_with_doublet(n_evs, lons, lats, times, mbs, doublet_idx=0, doublet_dist=0.01)
sw.stas = ["FITZ",]
gpcov = GPCov([0.7,], [ 40.0, 5.0],
dfn_str="lld",
wfn_str="compact2")
param_means = build_param_means(sw.stas)
sw.set_basis(wavelet_family="db4_2.0_3_30", iid_repeatable_var=0.1,
iid_nonrepeatable_var=0.4, srate=5.0)
sw.joint_sample_arrival_params(gpcov, param_means)
sw.sample_signals("freq_0.8_4.5")
wave_dir = os.path.join(basedir, "sampled_%d_simple" % seed)
sw.serialize(wave_dir)
#sw.train_gp_models_true_data()
#sw.save_gps(wave_dir, run_name="synth_truedata")
return sw
import os
basedir = os.path.join(os.getenv("SIGVISA_HOME"), "experiments", "synth_wavematch")
sw = sample_events(basedir)
wave_dir = os.path.join(basedir, "sampled_%d_spreadtime" % 0)
#sw = load_sampled_world(wave_dir)
# -
print sw.tm_params
# +
print sw.evs[0]
print sw.ev_doublet
plot( sw.true_coefs['FITZ'][0,:])
plot( sw.true_coefs['FITZ'][1,:])
print sw.true_coefs['FITZ'][0][70], sw.true_coefs['FITZ'][1][70]
# +
plot( sw.waves[0]['FITZ'].data[400:1000])
plot( sw.waves[1]['FITZ'].data[400:1000])
print sw.waves[1]['FITZ'].data[600:610]
# +
import copy
def corrupt_ev(ev, stddevs):
ev = copy.copy(ev)
ev.lon = ev.lon + np.random.randn() * stddevs['lon']
ev.lat = ev.lat + np.random.randn() * stddevs['lat']
ev.depth = ev.depth + np.random.randn() * stddevs['depth']
ev.time = ev.time + np.random.randn() * stddevs['time']
ev.mb = ev.mb + np.random.randn() * stddevs['mb']
return ev
def set_true_templates(sg, sw, include_doublet=False):
for sta in sw.tm_params.keys():
for i in range(len(sw.evs) + (1 if include_doublet else 0)):
tmnodes = sg.get_template_nodes(eid=i+1, sta=sta, phase=sw.phase, chan=sw.chans[sta], band=sw.band)
for param in sw.tm_params[sta]:
k, n = tmnodes[param]
n.set_value(sw.tm_params[sta][param][i])
sg = SigvisaGraph(template_model_type="dummyPrior", template_shape="lin_polyexp",
wiggle_model_type="gp_joint", wiggle_family="db4_2.0_3_30",
phases=["P",], nm_type = "ar", runids=(-1,), joint_wiggle_prior=(0.01, sw.gpcov))
for i in sw.waves.keys():
for sta in sw.waves[i].keys():
wn = sg.add_wave(sw.waves[i][sta])
basis, iid_std, target_coef = wn.wavelet_basis
wn.wavelet_basis = (basis, sw.scaled, target_coef)
stddevs = {"lon": 0.2, "lat": 0.2, "depth": 20.0, "time": 3.0, "mb": 0.3}
sg.add_event(sw.evs[0], observed=sw.evs[0], stddevs=stddevs, fixed=True)
sg.add_event(sw.ev_doublet, observed=sw.ev_doublet, stddevs=stddevs, fixed=False)
set_true_templates(sg, sw, include_doublet=True)
# +
wn1 = sg.station_waves['FITZ'][0]
wn2 = sg.station_waves['FITZ'][1]
wn1._parent_values()
wn1.pass_jointgp_messages()
wn2._parent_values()
wn2.pass_jointgp_messages()
print [(c[0], c[1], c[3], c[4]) for c in wn1.tssm_components]
# +
ell, prior_means, prior_vars, posterior_means, posterior_vars = wn1._coef_message_cache
#plot(posterior_means, c='blue')
print np.sqrt(posterior_vars)
plot( 2*np.sqrt(posterior_vars), c='red')
plot(-2*np.sqrt(posterior_vars), c='red')
plot(sw.true_coefs['FITZ'][1] - posterior_means, c='green')
# -
# +
logger = MCMCLogger(write_template_vals=False, dump_interval=5)
corrupted_evs = []
for i,ev in enumerate(sw.evs):
eid = i+1
lon = sg.all_nodes["%d;lon_obs" %eid ].get_value()
lat = sg.all_nodes["%d;lat_obs" %eid ].get_value()
depth = sg.all_nodes["%d;depth_obs" %eid ].get_value()
mb = sg.all_nodes["%d;mb_obs" %eid ].get_value()
time = sg.all_nodes["%d;time_obs" %eid ].get_value()
cev = Event(lon=lon, lat=lat, depth=depth, mb=mb, time=time, eid=eid)
corrupted_evs.append(cev)
with open(os.path.join(logger.run_dir, "obs_events.pkl"), "wb") as f:
pickle.dump(corrupted_evs, f)
with open(os.path.join(logger.run_dir, "events.pkl"), "wb") as f:
pickle.dump(sw.evs, f)
# #%debug
print sg.current_log_p(verbose=True)
print "hello"
run_open_world_MH(sg, steps=1000,
enable_template_moves=True,
enable_event_moves=True,
logger=logger,
enable_event_openworld=False,
enable_template_openworld=False)
# -
sg.current_log_p_breakdown()
print sg.get_event(1)
print sg.get_event(2)
| notebooks/LEB_as_a_sensor_simple_doublet_match.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from models import *
from utils import *
import os, sys, time, datetime, random
import numpy as np
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
from torch.autograd import Variable
import matplotlib.pyplot as plt
import matplotlib.patches as patches
from PIL import Image
# +
config_path='C:/Users/md459/PycharmProjects/choiwb/BigData_Team_AI_Contest/pytorch_objectdetecttrack/config/yolov3.cfg'
weights_path='C:/Users/md459/PycharmProjects/choiwb/BigData_Team_AI_Contest/pytorch_objectdetecttrack/config/yolov3.weights'
class_path='C:/Users/md459/PycharmProjects/choiwb/BigData_Team_AI_Contest/pytorch_objectdetecttrack/config/coco.names'
img_size=416
conf_thres=0.8
nms_thres=0.4
# Load model and weights
model = Darknet(config_path, img_size=img_size)
model.load_weights(weights_path)
model.cuda()
model.eval()
classes = utils.load_classes(class_path)
Tensor = torch.cuda.FloatTensor
# -
def detect_image(img):
# scale and pad image
ratio = min(img_size/img.size[0], img_size/img.size[1])
imw = round(img.size[0] * ratio)
imh = round(img.size[1] * ratio)
img_transforms = transforms.Compose([ transforms.Resize((imh, imw)),
transforms.Pad((max(int((imh-imw)/2),0), max(int((imw-imh)/2),0), max(int((imh-imw)/2),0), max(int((imw-imh)/2),0)),
(128,128,128)),
transforms.ToTensor(),
])
# convert image to Tensor
image_tensor = img_transforms(img).float()
image_tensor = image_tensor.unsqueeze_(0)
input_img = Variable(image_tensor.type(Tensor))
# run inference on the model and get detections
with torch.no_grad():
detections = model(input_img)
detections = utils.non_max_suppression(detections, 80, conf_thres, nms_thres)
return detections[0]
# +
# load image and get detections
img_path = "C:/Users/md459/Documents/jupyter_data/pytorch_object_tracking/images/Intersection-Counts.jpg"
prev_time = time.time()
img = Image.open(img_path)
detections = detect_image(img)
inference_time = datetime.timedelta(seconds=time.time() - prev_time)
print ('Inference Time: %s' % (inference_time))
# Get bounding-box colors
cmap = plt.get_cmap('tab20b')
colors = [cmap(i) for i in np.linspace(0, 1, 20)]
img = np.array(img)
plt.figure()
fig, ax = plt.subplots(1, figsize=(12,9))
ax.imshow(img)
pad_x = max(img.shape[0] - img.shape[1], 0) * (img_size / max(img.shape))
pad_y = max(img.shape[1] - img.shape[0], 0) * (img_size / max(img.shape))
unpad_h = img_size - pad_y
unpad_w = img_size - pad_x
if detections is not None:
unique_labels = detections[:, -1].cpu().unique()
n_cls_preds = len(unique_labels)
bbox_colors = random.sample(colors, n_cls_preds)
# browse detections and draw bounding boxes
for x1, y1, x2, y2, conf, cls_conf, cls_pred in detections:
box_h = ((y2 - y1) / unpad_h) * img.shape[0]
box_w = ((x2 - x1) / unpad_w) * img.shape[1]
y1 = ((y1 - pad_y // 2) / unpad_h) * img.shape[0]
x1 = ((x1 - pad_x // 2) / unpad_w) * img.shape[1]
color = bbox_colors[int(np.where(unique_labels == int(cls_pred))[0])]
bbox = patches.Rectangle((x1, y1), box_w, box_h, linewidth=2, edgecolor=color, facecolor='none')
ax.add_patch(bbox)
plt.text(x1, y1, s=classes[int(cls_pred)], color='white', verticalalignment='top',
bbox={'color': color, 'pad': 0})
plt.axis('off')
# save image
plt.savefig(img_path.replace(".jpg", "-det.jpg"), bbox_inches='tight', pad_inches=0.0)
plt.show()
# -
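The un-padding arithmetic in the plotting cell above (`pad_x`, `pad_y`, `unpad_h`, `unpad_w`) can be isolated into a small helper to make the mapping explicit. A sketch under the same 416-pixel letterbox convention; the function name is illustrative, not from the repository:

```python
# The image is resized so its longer side equals img_size, the shorter side is
# symmetrically padded, and coordinates in the padded frame are mapped back to
# original pixels.
def unpad_coords(x1, y1, img_h, img_w, img_size=416):
    pad_x = max(img_h - img_w, 0) * (img_size / max(img_h, img_w))  # total horizontal padding
    pad_y = max(img_w - img_h, 0) * (img_size / max(img_h, img_w))  # total vertical padding
    unpad_h = img_size - pad_y   # extent of real image pixels along y
    unpad_w = img_size - pad_x   # extent of real image pixels along x
    orig_x = (x1 - pad_x // 2) / unpad_w * img_w
    orig_y = (y1 - pad_y // 2) / unpad_h * img_h
    return orig_x, orig_y

# Center of the letterboxed frame maps back to the center of a square image.
print(unpad_coords(208.0, 208.0, 400, 400))  # (200.0, 200.0)
```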
| PyTorch_Object_Detection.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
# picked the columns that gave the most important data for use in our target features
df = pd.read_csv("Austin_clean_Complete_with_fixes.csv",usecols=[1,2,3,4,5,6,7])
# I picked names here to simplify and standardize how the data would be presented
df.rename(columns={"Weapons Used":"force type","Date Occurred":"date","Location":"location"},inplace = True)
df["lon"] = df["lon"]*-1
cols = list(df.columns)
# standardize the date so that the format is constant
df['date'] = pd.to_datetime(df['date'],errors = 'coerce')
DF_Au = df
DF_Au.head()
# -
df = pd.read_csv("Baltimore_clean.csv")
# split the "date" column into date and time, keeping only the date portion, as the time is unusable for most of the data
temp = df['DATE'].map(lambda a: a.split(' ',1))
date=[]
time=[]
for i in temp:
date.append(i[0])
time.append(i[1])
df['DATE'] = date
df = df[["DATE","LOCATION","TYPE","X (LONG)","Y (LAT)"]]
df['city'] = "Baltimore"
df['state'] = "MD"
df.rename(columns = {"DATE":"date","LOCATION":"location","TYPE":"force type","X (LONG)":"lon","Y (LAT)":"lat"},inplace = True)
# reorder the columns to match the previous data and check for missing columns
df = df[cols]
df['date'] = pd.to_datetime(df['date'])
DF_Ba = df
DF_Ba.head()
df = pd.read_csv("Bedford_clean.csv",usecols=[2,12,13])
df['city'] = "Bedford"
df['state'] = "VA"
# use "unreported" where the data is not provided, as it's better than a NaN
df['force type'] = "unreported"
df['location'] = "unreported"
df.rename(columns = {"incident_date":"date"},inplace = True)
df = df[cols]
DF_Bed = df
DF_Bed.head()
df = pd.read_csv("Beloit_clean.csv",usecols =[2,6,7,8])
temp = df['incident_date'].map(lambda a: a.split(' ',1))
date=[]
time=[]
for i in temp:
date.append(i[0])
time.append(i[1])
df['incident_date'] = date
df['city'] = "Beloit"
df['state'] = "WI"
df.rename(columns={"incident_date":"date","type_of_force":"force type"},inplace = True)
df['location'] = "unreported"
df = df[cols]
DF_Bel = df
DF_Bel.head()
df = pd.read_csv("Bloomington_clean.csv",usecols = [2,5,6])
df['city'] = "Bloomington"
df['state'] = "IN"
df['location'] = "unreported"
df['force type'] = "unreported"
df.rename(columns = {"incident_date":"date"},inplace = True)
df = df[cols]
DF_Blo = df
DF_Blo.head()
df = pd.read_csv("Cincinnati_clean.csv",usecols = [2,3,9,10])
temp = df['incident_date'].map(lambda a: a.split(' ',1))
date=[]
time=[]
for i in temp:
date.append(i[0])
time.append(i[1])
df['incident_date'] = date
df['city'] = "Cincinnati"
df['state'] = "OH"
df['location'] = "unreported"
df.rename(columns = {"incident_date":"date","incident_description":"force type"}, inplace = True)
df = df[cols]
DF_Ci = df
DF_Ci.head()
df = pd.read_csv("Delaware_clean.csv", usecols = [1,2,7,8,9,10])
df["location"] = "unreported"
df.rename(columns = {"force_type":"force type","incident_date":"date"}, inplace = True)
df = df[cols]
DF_De = df
DF_De.head()
df = pd.read_csv("Indiannapolis_clean.csv", usecols = [1,2,3,6])
df["city"] = "Indiannapolis"
df["state"] = "IN"
df["location"] = "unreported"
df.rename(columns = {"Date":"date","officerForceType":"force type"}, inplace = True)
df = df[cols]
DF_Ind = df
DF_Ind.head()
df = pd.read_csv("New_Orleans_clean.csv", usecols = [1,2,3])
df['city'] = "New Orleans"
df['state'] = "LA"
df['location'] = "unreported"
df["force type"] = "unreported"
df.rename(columns = {"Date Occurred":"date"}, inplace = True)
df = df[cols]
df['date'] = pd.to_datetime(df['date'])
DF_NO = df
DF_NO.head()
df = pd.read_csv("Northampton_clean.csv", usecols = [6,7,8])
df['city'] = "Northampton"
df["state"] = "MA"
df['location'] = "unreported"
df['date'] = "unreported"
df.rename(columns = {"pd_force_type":"force type"}, inplace = True)
df = df[cols]
DF_NH = df
DF_NH.head()
df = pd.read_csv("Norwich_clean.csv", usecols = [6,9,10])
df["city"] = "Norwich"
df["state"] = "CT"
df["location"] = "unreported"
df['date'] = "unreported"
df.rename(columns = {"pd_force_type":"force type"}, inplace = True)
df = df[cols]
DF_Nor = df
DF_Nor.head()
df = pd.read_csv("Orlando_clean.csv",usecols = [2,10,11,12,13])
temp = df['incident_date_time'].map(lambda a: a.split(' ',1))
date=[]
time=[]
for i in temp:
date.append(i[0])
time.append(i[1])
df['incident_date_time'] = date
df['location'] = "unreported"
df["force type"] = "unreported"
df.rename(columns = {"incident_date_time":"date"}, inplace = True)
df = df[cols]
DF_Orl = df
DF_Orl.head()
# +
df = pd.read_csv("Portland_clean.csv", usecols = [3,4,9,10,11])
df['city'] = "Portland"
df['state'] = "OR"
df["location"] = "unreported"
df.rename(columns = {"force_type":"force type"}, inplace = True)
df.drop("year",axis = 1)
df['date'] = "unreported"
df = df[cols]
DF_Por = df
DF_Por.head()
# +
df = pd.read_csv("Seattle_clean.csv", usecols = [3,6,7,8,9])
temp = df['incident_date'].map(lambda a: a.split(' ',1))
date=[]
time=[]
for i in temp:
date.append(i[0])
time.append(i[1])
df['incident_date'] = date
df['location'] = "unreported"
df["force type"] = "unreported"
df.rename(columns = {"incident_date":"date"}, inplace = True)
df = df[cols]
df['date'] = pd.to_datetime(df['date'])
DF_Sea = df
DF_Sea.head()
# -
df = pd.read_csv("South_bend_clean.csv", usecols = [1,2,4,5,6,7])
temp = df['incident_date'].map(lambda a: a.split(' ',1))
date=[]
time=[]
for i in temp:
date.append(i[0])
time.append(i[1])
df['incident_date'] = date
df['location'] = "unreported"
df.rename(columns = {"incident_date":"date", "force_type": "force type"}, inplace = True)
df = df[cols]
df['date'] = pd.to_datetime(df['date'])
DF_SB = df
DF_SB.head()
# +
big = [DF_SB,DF_Sea,DF_Por,DF_Orl,DF_Nor,DF_NH,DF_NO,DF_Ind,DF_De,DF_Ci,DF_Blo,DF_Bel,DF_Bed,DF_Ba,DF_Au]
Grand = pd.DataFrame()
# concatenate the data into one dataframe with matching columns
for i in big:
if len(Grand)==0:
Grand = i
else:
Grand = pd.concat([Grand,i],)
# in the future we could use NLP to standardize the type of force column so the data can be more easily categorized.
print(Grand)
# -
Grand.to_csv("Compiled_Police_Reports.csv")
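The same three-line date/time split loop is repeated verbatim for each city's dataframe above; it could be factored into one helper. A sketch in plain Python (the function name is illustrative):

```python
def split_date(values):
    """Split 'MM/DD/YYYY HH:MM:SS'-style strings into parallel date/time lists,
    replacing the loop repeated for each city's dataframe."""
    dates, times = [], []
    for v in values:
        parts = v.split(' ', 1)
        dates.append(parts[0])
        times.append(parts[1] if len(parts) > 1 else '')
    return dates, times

dates, times = split_date(['01/02/2019 10:30:00', '03/04/2020 11:00:00'])
print(dates)  # ['01/02/2019', '03/04/2020']
```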
| notebooks/Labs_28/Compounding_clean_data_to_csv BW.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # ADAPTIC materials
#
# This notebook illustrates how to incorporate __ADAPTIC__ material classes from the _materials.py_ module.
#
# ## con1
#
# ```python
# materials.con1(ID, fc1, length, epsilon_t2 = 0.001, fc2_factor = 0.1, ft_factor = 1, characteristic = True)
# ```
#
# |Parameter|Type|Description|
# |:-:|:-:|:-:|
# |ID|str|name of the material|
# |fc1|float/int|peak compressive strength|
# |length|float/int|element length|
# |epsilon_t2|float|ultimate tensile strain|
# |ft_factor|float in range (0,1)|tensile strength reduction factor|
# |characteristic|bool|characteristic strength (True) or mean strength (False)|
#
# _Con1_ is a trilinear curve model in compression with an optional quadratic initial response. The tensile stage is given by a bilinear curve with softening.
#
# 
#
# Initial compressive response is defined by the parameter $\alpha$, which is based on $E_{c0}$ and $E_{c1}$. Elastic initial modulus $E_{c0}$ is based on the parabolic curve. $E_{c1}$ is the secant modulus from the origin to the peak compressive stress. If $\alpha > 0$, a quadratic initial compressive response is implied.
#
# After the peak compressive strength is reached, a softening stage takes place up to failure. To avoid convergence issues, a residual compressive strength $f_{c2}$ is maintained after failure. The user can specify _fc2_factor_ as the fraction of the peak strength; by default this is 10%, i.e. $f_{c2} = 0.1f_{c1}$.
#
# The input parameters are the concrete cylinder strength $f_{c1}$ and the element length $h$. It is assumed that the input strength $f_{c1}$ is the characteristic compressive strength $f_{ck}$. If the mean strength $f_{cm}$ is used instead, set the input parameter _characteristic_ to False, which affects the calculation of the fracture energy $G_f$. The element length $h$ is used to determine the crack-band width.
#
# Most of the other parameters are calculated according to <em>CEB-FIP Model Code 1990 (MC 1990)</em>, <em>CEB-FIP Model Code 2010 (MC 2010)</em> as well as <em>Rijkswaterstaat Technical Document: Guidelines for Nonlinear Finite Element Analysis of Concrete Structures (RTD 2010)</em>. These formulas are based on the uniaxial compressive cylinder strength.
#
# To avoid overestimating the cracking moment, tensile strength $f_t$ can be reduced using tensile reduction factor *ft_factor*. Tension strain at failure $\varepsilon_{t2}$ needs to be defined by the user, taken as 0.001 by default.
#
# |Parameter|Formula|Units|Reference|
# |:-:|:-:|:-:|:-:|
# |Compressive cylinder strength |$$f_{c} = 0.85f_{c,cube}$$|MPa|NA|
# |Characteristic compressive cylinder strength |$$f_{ck}$$|MPa|NA|
# |Mean compressive strength |$$f_{cm} = f_{ck} + 8$$|MPa|MC 1990 Eq. 2.1-1|
# |Peak compressive strength|$$f_{c1}$$|MPa|NA|
# |Residual compressive strength|$$f_{c2}$$|MPa|NA|
# |Tensile strength |$$f_t= ft_{factor} \cdot 0.3f_{cm}^{2/3} \leq C50$$ $$ f_t= ft_{factor} \cdot 2.12ln(1+0.1f_{cm}) > C50$$|MPa|MC 2010 Eq. 5.1-3a|
# |Fracture energy|$$G_f = 73\frac{ f_{cm}^{0.18}}{1000} $$|N/mm|MC 2010 Eq. 5.1-9|
# |Initial compressive modulus|$$E_{c0} = 21500\cdot(f_{cm}/10)^{1/3}$$|MPa|MC 2010 Eq. 5.1-21|
# |Poisson's ratio|$$0.2$$|-|MC 2010 5.1.7.3|
# |Compressive fracture energy |$$G_{c} = 250G_{f}$$|N/mm|RTD 2010 p. 11|
# |Compressive strain at peak strength|$$\varepsilon_{c1} = \frac{5}{3}\frac{f_c}{E_0}$$|-|RTD 2010 p. 21|
# |Secant compressive modulus|$$E_{c1} = \frac{f_{c1}}{\varepsilon_{c1}}$$|MPa|NA|
# |Initial tensile modulus|$$E_{t1} = E_{c0}$$|MPa|NA|
# |Compressive failure strain|$$\varepsilon_{c2} = \varepsilon_{c1} + \frac{3}{2}\frac{G_c}{hf_c}$$|-|RTD 2010 p. 21|
# |Tensile strain at peak strength|$$\varepsilon_{t1} = \frac{f_t}{E_{t1}}$$|-|NA|
# |Tensile failure strain|$$\varepsilon_{t2}=\frac{G_{f}}{h_{eq}f_{t}}$$|-|RTD 2010 p. 19|
# |Initial compressive response factor|$$\alpha = \frac{E_{c0}-E_{c1}}{E_{c1}}$$|-|ADAPTIC manual|
#
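As a sanity check, the MC 1990 / MC 2010 formulas tabulated above can be evaluated directly for a given characteristic strength. This standalone sketch mirrors the table (following the document's ≤C50 tensile formula); it is not the `materials.con1` implementation:

```python
# Units: strengths in MPa, fracture energies in N/mm (as in the table above).
def mc2010_params(fck, ft_factor=1.0):
    fcm = fck + 8.0                                # mean strength, MC 1990 Eq. 2.1-1
    ft = ft_factor * 0.3 * fcm ** (2.0 / 3.0)      # tensile strength (<= C50), MC 2010 Eq. 5.1-3a
    Gf = 73.0 * fcm ** 0.18 / 1000.0               # fracture energy, MC 2010 Eq. 5.1-9
    Ec0 = 21500.0 * (fcm / 10.0) ** (1.0 / 3.0)    # initial modulus, MC 2010 Eq. 5.1-21
    Gc = 250.0 * Gf                                # compressive fracture energy, RTD 2010
    return dict(fcm=fcm, ft=ft, Gf=Gf, Ec0=Ec0, Gc=Gc)

p = mc2010_params(50.0)   # C50/60, as in the example below
print(p)
```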
import sys
sys.path.insert(1, '../libraries')
import numpy as np
import pandas as pd
import utils
import materials as mat
# Below properties for _con1_ material of concrete grade C50/60 are shown. Characteristic cylinder strength is assumed $f_{c1} = f_{ck} = 50 MPa$ and element length $h = 250 mm$.
# Sample properties for a mat.con1 instance
sample_con1 = mat.con1('C50/60', 50, 250, fc2_factor = 0.1, characteristic = True)
sample_con1.data_frame()
# ## stl1
#
# ```python
# materials.stl1(ID, E1, fy, fu, epsilon_u)
# ```
#
# |Parameter|Type|Description|
# |:-:|:-:|:-:|
# |ID|str|name of the material|
# |E1|float/int|initial elastic stiffness|
# |fy|float/int|yield strength|
# |fu|float/int|ultimate strength|
# |epsilon_u|float|ultimate strain|
#
# _Stl1_ is a bilinear elasto-plastic model with kinematic strain hardening, used for uniaxial modelling of mild steel. Although the current revision of **ADAPTIC** does not require an ultimate strain, $\varepsilon_{u}$ must be defined for this class so it can be used in post-processing.
#
# <img src="../assets/images/materials.stl1.png" width="500" />
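# The bilinear envelope can be sketched as a small helper. This is an illustration, not part of the library; the hardening modulus $E_2 = (f_u - f_y)/(\varepsilon_u - f_y/E_1)$ is an assumption consistent with the figure:

```python
def stl1_stress(eps, E1, fy, fu, eps_u):
    """Bilinear elasto-plastic envelope for monotonic loading (sketch)."""
    eps_y = fy / E1                       # yield strain
    E2 = (fu - fy) / (eps_u - eps_y)      # hardening modulus (assumed from the figure)
    if eps <= eps_y:
        return E1 * eps                   # elastic branch
    return fy + E2 * (eps - eps_y)        # hardening branch, reaching fu at eps_u

# S355 sample values: E1 = 205000 MPa, fy = 355 MPa, fu = 490 MPa, eps_u = 0.1
```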
# The properties below are for a _stl1_ material of steel grade S355.
# Sample properties for mat_stl1 instance
sample_stl1 = mat.stl1('S355', 205000, 355, 490, 0.1)
sample_stl1.data_frame()
| documentation/materials.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# MAKE A PIPELINE FOR PROCESSING THE COLOR IMAGES!
# FROM TIRAMISU
# IDEA: Add neck to the posture map?
# # %matplotlib inline
# # %matplotlib widget
# %matplotlib inline
# %load_ext autoreload
# %autoreload 2
import time
from pathlib import Path
import numpy as np
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms
import sys, os, pickle
import cv2
from colour import Color
import h5py
from tqdm import tqdm, tqdm_notebook
import os
import sys
import math
import string
import random
import shutil
import glob
# +
# Check CUDA
print(torch.cuda.is_available())
print(torch.cuda.device_count())
print(torch.cuda.get_device_name(0))
torch_device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(torch_device)
# -
# +
# Load the testing dataset
# +
N_STACKS = 3
WEIGHTS_PATH = '/media/chrelli/SSD4TB/revision_profiling/{}_STACKS/'.format(N_STACKS)
EPOCH = 50
def load_model(WEIGHTS_PATH,N_STACKS,EPOCH,verbose=False):
# Load a model!
# import the hourglass model and set up architecture
from architectures.hourglass import hg
global best_acc
model = hg(
num_stacks=N_STACKS,
num_blocks=1,
num_channels=1,
num_classes=11,
num_feats=128,
inplanes=64,
init_stride=2,
)
model = torch.nn.DataParallel(model).cuda()
# NOTE: weights were saved during training (every 10 epochs); here we locate and load them
import os
import sys
import math
import string
import random
import shutil
import glob
# Find the path to the saved weights
all_options = sorted( glob.glob(WEIGHTS_PATH + '/singlecore_weights_epoch_'+str(EPOCH)+'*' ) )
if verbose:
print(all_options)
weights_fpath = all_options[0]
if verbose:
print("loading weights '{}'".format(weights_fpath))
model.load_state_dict( torch.load(weights_fpath) )
model.eval()
if verbose:
print('loaded!')
return model
model = load_model(WEIGHTS_PATH,N_STACKS,EPOCH, verbose = True)
# +
# Load the validation dataset!
# training data with different exposure
top_folder_0 = '/media/chrelli/Data0/recording_20200828-113642/'
top_folder_1 = '/media/chrelli/Data1/recording_20200828-113642/'
# validation dataset with LASER ON 90 fps
top_folder_0 = '/media/chrelli/Data0/recording_20200828-114251/'
top_folder_1 = '/media/chrelli/Data1/recording_20200828-114251/'
training_sets = glob.glob( top_folder_0 + '/*.h5')
skeletons = glob.glob('training_sets' + '/*skeleton_v2*')
print(training_sets)
print(skeletons)
h5_path = training_sets[0]
h5_file = h5py.File(h5_path, 'r')
from c_utils.utils_hour import check_h5_ir
pic = check_h5_ir(h5_path)
# +
# make a minimal dataset to loop over the frames!!
# https://stanford.edu/~shervine/blog/pytorch-how-to-generate-data-parallel
import torch
import torch.utils.data as data
import imgaug.augmenters as iaa
from c_utils.utils_hour import gaussian
# HACK for imgaug for now, numpy had a code change in 1.8
# import numpy
# numpy.random.bit_generator = numpy.random._bit_generator
selfseq = iaa.Sequential([
# iaa.Crop(px=(0, 100)), # crop images from each side by 0 to 16px (randomly chosen)
iaa.CropAndPad(percent=(-0.10, 0.15), sample_independently=False),
iaa.Fliplr(0.5), # horizontally flip 50% of the images
iaa.Sometimes(.3, iaa.GaussianBlur(sigma=(0, 1.5)) ), # blur images with a sigma of 0 to 3.0
iaa.Sometimes( 1, iaa.Dropout(p = (0,0.2)) ),
iaa.Affine(rotate=(-30, 30),
translate_percent={"x": (-0.2, 0.2), "y": (-0.2, 0.2)})
],random_order=True)
snowflakes = iaa.Snowflakes(flake_size=(0.5, 0.7), speed=(0.001, 0.003), density_uniformity=(0.99, 1.))
# pic_snow = snowflakes.augment_image(pic)
class MouseDataset(data.Dataset):
# todo add augmentation here, clean up and make faster
# todo remove stupid side effects etc
def __init__(self, h5_data,which_indices,augmentation=False):
'''Initialization'''
self.label_names = ['impl','ear','ear','nose','tail','ear','ear','nose','tail']
self.label_index = [0,1,1,1,1,2,2,3,3]
# index for loading subsets from the h5 file
self.which_indices = which_indices
self.n_images = len(which_indices)
# FOR THE AUGMENTATION PIPELINE
self.seq = iaa.Sequential([
# iaa.Crop(px=(0, 100)), # crop images from each side by 0 to 16px (randomly chosen)
iaa.CropAndPad(percent=(-0.10, 0.15), sample_independently=False),
iaa.Fliplr(0.5), # horizontally flip 50% of the images
iaa.Sometimes(.3, iaa.GaussianBlur(sigma=(0, 1.5)) ), # blur images with a sigma of 0 to 3.0
iaa.Sometimes( 1, iaa.Dropout(p = (0,0.2)) ),
iaa.Affine(rotate=(-30, 30),
translate_percent={"x": (-0.2, 0.2), "y": (-0.2, 0.2)})
],random_order=False)
self.augmentataion = augmentation
def __len__(self):
'Denotes the total number of samples'
return self.n_images
def __getitem__(self, index):
# will be i x j x 1 - is that ok?
# todo check if this moveaxis is good?
# im = np.moveaxis(h5_data['c_images'][self.which_indices[index]],[0,1],[1,2])
# tracking_folder = '/home/chrelli/Documents/Example3D_compressed'
# h5_path = tracking_folder + '/mouse_rgbd_annotation_set.h5'
# h5_path = 'training_sets/mouse_training_set_labeled.h5'
with h5py.File(h5_path, 'r') as h5_file:
xy = h5_file['annotations'][self.which_indices[index]]
c_image = h5_file['c_images'][self.which_indices[index]]
# dac_image = h5_file['dac_images'][self.which_indices[index]]
# check if the points are good before augmentation
# upper left corner is trash
point_good = (xy[:,0] > 30)*( xy[:,1] > 30)
# NOW, AUGMENT!
# TODO AUGMENT DEPTH AS WELL?
# AUGMENT DEPTH BY DEAD BLOCKS TO SIMULATE REAL DATA?
high_res = True
if self.augmentataion and not high_res:
#HACK: selfseq instead of self.seq
images_aug, xy_aug_list = self.seq(images = c_image[np.newaxis,:,:,:], keypoints=[xy])
xy = xy_aug_list[0]
c_image = images_aug[0,:,:,:]
if self.augmentataion and high_res:
#HACK: selfseq instead of self.seq
images_aug, xy_aug_list = self.seq(images = c_image[np.newaxis,:,:], keypoints=[xy])
xy = xy_aug_list[0]
c_image = images_aug[0,:,:]
snowflakes = iaa.Snowflakes(flake_size=(0.1, 0.4), speed=(0.00, 0.00),
density = (.001,.1), density_uniformity=(0.99, 1.))
add_snow = False
if add_snow:
c_image = snowflakes.augment_image(c_image)
# pack depth and pixels to target - OR NOT??
im = c_image
frame_height = im.shape[0]
frame_width = im.shape[1]
if high_res:
# size is
pad_right = 0
pad_top = 480-448 - 2
pad_bottom = 2
im = im[pad_top:-pad_bottom,:]
# and make the image channels x height x width
im = im[np.newaxis,:,:]
else:
# make the resolution correct, i.e. set the height to 192
pad_top = 8
pad_bottom = 10
im = im[pad_top:-pad_bottom,:,:]
# and make the image channels x height x width
im = np.moveaxis(im,2,0)
# rescale the keypoints
xy[:,1] -= pad_top
rescale = True
h_out,w_out = im.shape[1],im.shape[2]
label_sigma = np.array([2,1,1,1,1,1,1,1,1]) *1.5 *2
gaussian_sigma = (15,15)
line_thickness = 3
if rescale:
# halved
h_out,w_out = int(h_out/4),int(w_out/4)
xy = np.round(xy/4).astype('int')
if high_res:
label_sigma = np.array([3,1,1,1,1,1,1,1,1])
gaussian_sigma = (3,3)
line_thickness = 2
else:
label_sigma = np.array([3,1,1,1,1,1,1,1,1])
gaussian_sigma = (3,3)
line_thickness = 1
label_names = ['impl','ear','ear','nose','tail','ear','ear','nose','tail']
body_names = ['mouse0','mouse0','mouse0','mouse0','mouse0','mouse1','mouse1','mouse1','mouse1']
label_index = [0,1,1,2,3,1,1,2,3]
body_index = [0,0,0,0,0,1,1,1,1]
# target has to be batch x n_features x pic_i x pic_j
target_points = np.zeros((4,h_out,w_out))
img = target_points[0,:,:].copy()
for i in range(9):
if point_good[i]:
target_points[label_index[i],:,:] += ( gaussian(img.copy(), xy[i,:], label_sigma[i]) )
# draw the lines within the body!
xy_good = xy[point_good]
label_good = xy[point_good]
body_good = xy[point_good]
target_lines = []
def draw_lines(p1,p2):
img_blank = np.zeros((h_out,w_out)).astype('uint8')
for i1,i2 in zip(p1,p2):
if point_good[i1]*point_good[i2]:
start = tuple(np.round(xy[i1,:]).astype('int') )
end = tuple(np.round(xy[i2,:]).astype('int') )
cv2.line(img_blank,start,end,[255,255,255],thickness = line_thickness)
img_blank = np.clip(img_blank,0,255)
img_blank = cv2.GaussianBlur(img_blank,gaussian_sigma,0)
img_blank = img_blank/255
target_lines.append(img_blank.copy())
# I to ears
p1,p2 = [0,0],[1,2]
draw_lines(p1,p2)
# I to nose
p1,p2 = [0],[3]
draw_lines(p1,p2)
# I to tail
p1,p2 = [0],[4]
draw_lines(p1,p2)
# Ear to Ear
p1,p2 = [1,5],[2,6]
draw_lines(p1,p2)
# Ear to Tail
p1,p2 = [1,2,5,6],[4,4,8,8]
draw_lines(p1,p2)
# Ear to Nose
p1,p2 = [1,2,5,6],[3,3,7,7]
draw_lines(p1,p2)
# Nose to Tail
p1,p2 = [3,7],[4,8]
draw_lines(p1,p2)
# stack all the targets
target_lines = np.concatenate([t[np.newaxis,:,:] for t in target_lines],axis=0)
target = np.concatenate((target_points,target_lines),axis = 0)
# convert to sensible ranges
im = im / 255.
im = np.clip(im,0,1)
# from documentation
# For a conv2D, input should be in (N, C, H, W) format. N is the number of samples/batch_size. C is the channels. H and W are height and width resp.
# See shape documentation at https://pytorch.org/docs/stable/nn.html#torch.nn.Conv2d
# so im is (channels, H, W) per sample; the DataLoader adds the batch dimension
# target is (n_trained_features, H, W)
return im.astype('float32'),target.astype('float32'),xy_good,point_good,label_index
# # separate by pseudo-random indices
np.random.seed(0)
with h5py.File(h5_path, 'r') as h5_file:
n_frames = h5_file['c_images'].shape[0]
print("n_frames = " +str(n_frames) )
# n_frames = 576
random_indices = np.random.permutation(n_frames)
MouseTrain = MouseDataset(h5_file,random_indices[:],augmentation=False)
# shuffle=True would show different examples each run; here we keep a fixed order for profiling
# MouseTrainLoader = data.DataLoader(MouseTrain, batch_size=1, shuffle=True, num_workers = 1)
MouseTrainLoader = data.DataLoader(MouseTrain, batch_size=1, shuffle=False,num_workers = 0)
print("training augment = {}".format(MouseTrain.augmentataion) )
# -
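# The per-keypoint targets built above are Gaussian heat maps. A minimal numpy version of that step, assuming (x, y) = (column, row) keypoints and a unit-height peak — not the exact `c_utils.utils_hour.gaussian`:

```python
import numpy as np

def gaussian_heatmap(shape, xy, sigma):
    """Heat map with a unit-height Gaussian centred on keypoint (x, y) = (col, row)."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    d2 = (xs - xy[0]) ** 2 + (ys - xy[1]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

hm = gaussian_heatmap((48, 80), xy=(20, 10), sigma=2.0)
```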
# check that the benchmark loading works
from c_utils.utils_hour import plot_im_target,plot_im_target_ir, random_from
for i in range(1):
# im,target = random_from( MouseTrainLoader)
im,target,xy,point_good,lab = next(iter( MouseTrainLoader))
print(im.shape)
print(im.dtype)
print(torch.max(im))
plot_im_target_ir(im,target,5)
# +
# define a loss function
# custom loss function for weighting
def weighted_mse_loss(input, target, weights):
out = (input - target) ** 2
out = out * weights.expand_as(out)
loss = out.sum()
return loss
# sum the loss over all hourglass stack outputs
def weighted_hourglass_loss(inputs,targets,output):
# loss_weight = torch.ones((inputs.size(0), args.num_classes, 1, 1))
loss_weight = torch.ones((inputs.size(0), targets.size(1), 1, 1))
loss_weight.requires_grad = True
loss_weight = loss_weight.cuda()
# add all the loss maps together (8 hourglasses)
loss = weighted_mse_loss(output[0], targets, weights=loss_weight)
for j in range(1, len(output)):
loss += weighted_mse_loss(output[j], targets, weights=loss_weight)
return loss
# loss computed from the final stack only
def weighted_hourglass_loss_final_stack(inputs,targets,output):
# loss_weight = torch.ones((inputs.size(0), args.num_classes, 1, 1))
loss_weight = torch.ones((inputs.size(0), targets.size(1), 1, 1))
loss_weight.requires_grad = True
loss_weight = loss_weight.cuda()
# use only the output of the final hourglass
loss = weighted_mse_loss(output[-1], targets, weights=loss_weight)
return loss
# final-stack loss with the PAF channels dropped
def weighted_hourglass_loss_final_noPAF(inputs,targets,output):
targets = targets[:,:4,:,:]
output = [o[:,:4,:,:] for o in output]
# loss_weight = torch.ones((inputs.size(0), args.num_classes, 1, 1))
loss_weight = torch.ones((inputs.size(0), targets.size(1), 1, 1))
loss_weight.requires_grad = True
loss_weight = loss_weight.cuda()
# use only the output of the final hourglass
loss = weighted_mse_loss(output[-1], targets, weights=loss_weight)
return loss
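# The weighted MSE above is just a weighted sum of squared errors; a small numpy check of the same arithmetic (with unit weights it reduces to the plain squared-error sum):

```python
import numpy as np

def weighted_mse(pred, target, weights):
    """Sum of weighted squared errors, mirroring the torch helper above."""
    return float(((pred - target) ** 2 * weights).sum())

pred = np.array([[1.0, 2.0], [3.0, 4.0]])
target = np.array([[1.0, 1.0], [1.0, 1.0]])
loss = weighted_mse(pred, target, np.ones_like(pred))  # 0 + 1 + 4 + 9
```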
# +
# a function to get out the keypoints
from skimage.feature import peak_local_max
def single_score_2_keypoints(sco):
xy_list = [None]*4
pxy_list = [None]*4
score_idx_list = [None]*4
for key in range(4):
xy = peak_local_max(sco[key,:,:],threshold_abs = 0.1,num_peaks = 6)
xy_list[key] = xy
pxy_list[key] = sco[key,xy[:,0],xy[:,1]]
# print(xy.shape)
score_idx_list[key] = key * np.ones_like(xy)
return np.concatenate(xy_list), np.concatenate(pxy_list), np.concatenate(score_idx_list)
# -
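# `peak_local_max` returns (row, col) local maxima above a threshold; a rough pure-numpy stand-in for a single channel (no minimum-distance or `num_peaks` handling, illustration only):

```python
import numpy as np

def naive_peaks(score, threshold=0.1):
    """(row, col) of interior local maxima above threshold (8-neighbourhood)."""
    c = score[1:-1, 1:-1]
    is_max = np.ones_like(c, dtype=bool)
    h, w = score.shape
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            # keep cells that are >= every shifted neighbour
            is_max &= c >= score[1 + di:h - 1 + di, 1 + dj:w - 1 + dj]
    rows, cols = np.nonzero(is_max & (c > threshold))
    return np.stack([rows + 1, cols + 1], axis=1)

sco = np.zeros((9, 9))
sco[4, 6] = 0.8
```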
# +
# make a function which will do a training step!
from torch.autograd import Variable
def test(model, trn_loader,noPAF = False):
model.eval()
epoch_loss = 0
frame_loss = []
xy_list = []
pxy_list = []
score_idx_list = []
xy_real_list = []
score_idx_real_list = []
# trn_error = 0
for idx, data in tqdm(enumerate(trn_loader)):
# Hack for a faster loop (disabled; idx never equals -30)
if idx == -30:
return epoch_loss, frame_loss, xy_list, pxy_list, score_idx_list, xy_real_list, score_idx_real_list #, trn_error
### PREPARE TENSORS ###
# make right byte and shape (# remove the depth dimension)
inputs = data[0].float()
targets = data[1].float()
# send to cuda
inputs = Variable(inputs.cuda())
targets = Variable(targets.cuda(non_blocking=True))
### CALC LOSS W/O GRAD ###
with torch.no_grad():
# compute model output
output = model(inputs)
if noPAF:
# drop the PAFs from the score
loss = weighted_hourglass_loss_final_noPAF(inputs,targets,output)
else:
loss = weighted_hourglass_loss_final_stack(inputs,targets,output)
# and check the score to get the keypoints
sco = output[-1][:,:4,:,:].cpu().numpy()
xy, pxy, score_idx = single_score_2_keypoints(sco[0,...])
xy_list.append(xy)
pxy_list.append(pxy)
score_idx_list.append(score_idx)
xy_real = data[2].squeeze().cpu().numpy()
logi = data[3].squeeze().cpu().numpy()
lab_real = np.array([0,1,1,2,3,1,1,2,3])[logi]
xy_real_list.append(xy_real)
score_idx_real_list.append(lab_real)
epoch_loss += loss.item()
frame_loss.append(loss.item())
epoch_loss /= len(trn_loader)
# trn_error /= len(trn_loader)
return epoch_loss, frame_loss, xy_list, pxy_list, score_idx_list, xy_real_list, score_idx_real_list #, trn_error
epoch_loss, frame_loss, xy_list, pxy_list, score_idx_list, xy_real_list, score_idx_real_list = test(model, MouseTrainLoader, noPAF = True)
# +
# Load the REAL xy and indices!
# -
# +
# epoch_loss, frame_loss = test(model, MouseTrainLoader)
# +
N_STACKS = 3
ep_stack = []
frame_stack = []
keypoint_stack = []
all_stacks = [1,2,3,6,9]
for st in all_stacks:
WEIGHTS_PATH = '/media/chrelli/SSD4TB/revision_profiling/{}_STACKS/'.format(st)
epoch_loss_list = []
frame_loss_list = []
keypoint_list = []
test_epochs = np.hstack((np.arange(0,90,10),89))
# only last
test_epochs = [89]
# test_epochs = [10]
# test_epochs = np.arange(0,90,10)
if st == 9:
# test_epochs = np.hstack((np.arange(0,120,10),119))
# test_epochs[-1] += 1
test_epochs = [119]
for ep in test_epochs:
print("stack {}, ep {}".format(st,ep))
model = load_model(WEIGHTS_PATH,st,ep)
epoch_loss, frame_loss, xy_list, pxy_list, score_idx_list, xy_real_list, score_idx_real_list = test(model, MouseTrainLoader,noPAF = True)
epoch_loss_list.append(epoch_loss)
frame_loss_list.append(frame_loss)
keypoint_list.append([xy_list, pxy_list, score_idx_list, xy_real_list, score_idx_real_list])
ep_stack.append(epoch_loss_list)
frame_stack.append(frame_loss_list)
keypoint_stack.append(keypoint_list)
# +
# Make a plot of the keypoints vs time
# for i_st,st in enumerate(all_stacks):
#
i_st = -1
cuts = [.1,.2,.3,.4,.5,.6,.7,.8,.9]
# cuts = [.1]
n_cuts = len(cuts)
missed_stack = np.zeros((n_cuts,5,4))
for i_cutoff,p_cutoff in enumerate(cuts):
distance_stack = []
for i_st in range(5):
# then epoch
i_ep = -1
# then every image
xy_list = keypoint_stack[i_st][i_ep][0]
pxy_list = keypoint_stack[i_st][i_ep][1]
kpi_list = keypoint_stack[i_st][i_ep][2]
xy_real_list = keypoint_stack[i_st][i_ep][3]
kpi_real_list = keypoint_stack[i_st][i_ep][4]
distance_list = [[],[],[],[]]
n_im = len(xy_list)
for i_im in range(n_im):
for i_keyp in [1,2,3]:
kpi = kpi_list[i_im][:,0]
xy = xy_list[i_im][kpi==i_keyp]
pxy = pxy_list[i_im][kpi==i_keyp]
kpi_real = kpi_real_list[i_im]
xy_real = xy_real_list[i_im][kpi_real==i_keyp]
xy = xy[pxy>p_cutoff,:]
if len(xy_real) == 0:
# if there were no points to detect, just go on...
continue
if len(xy) == 0:
# if there were no detections, set distance to infinity
d_match = np.inf*np.squeeze(np.ones_like(xy_real[:,0]))
if len(xy_real) == 2:
# catch a numpy shape edge case; could be more elegant
d_match = 800*np.array([1])
else:
d_match = 10000*np.ones_like(xy_real[:,0])
distance_list[i_keyp].append(d_match)
else:
# NOTE: there is a convention conflict here!
# the xy_real are in the classic image space, i.e. (i, j) = (down, right)
# while the detected peaks come back flipped, i.e. (j, i)
i_net = xy[:,1]
j_net = xy[:,0]
i_real = xy_real[:,0]
j_real = xy_real[:,1]
d2 = (i_real[...,np.newaxis]-i_net[np.newaxis,...])**2 + (j_real[...,np.newaxis]-j_net[np.newaxis,...])**2
d = np.sqrt(d2)
match = np.argmin(d,1)
d_match = np.min(d,1)
distance_list[i_keyp].append(d_match)
distance_stack.append(distance_list)
for i_st in range(5):
for i_keyp in [1,2,3]:
edges = np.arange(0,15)
all_dist = np.concatenate(distance_stack[i_st][i_keyp])
count,edges = np.histogram(all_dist,edges)
count = count/len(all_dist)
missed_stack[i_cutoff,i_st,i_keyp] = (1 - np.sum(count[:5]))*1
# -
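# The broadcasted matching above (real points as rows, detections as columns, nearest detection per real point) can be isolated in a short sketch with made-up coordinates:

```python
import numpy as np

xy_net = np.array([[10.0, 5.0], [40.0, 30.0]])   # detected keypoints
xy_real = np.array([[11.0, 5.0], [90.0, 90.0]])  # annotated keypoints

# pairwise Euclidean distances: rows index real points, columns index detections
d = np.sqrt(((xy_real[:, None, :] - xy_net[None, :, :]) ** 2).sum(axis=-1))
match = np.argmin(d, axis=1)   # best detection for each real point
d_match = np.min(d, axis=1)    # and its distance in pixels
```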
missed_stack
# +
# Make a figure showing the performance below the cutoff
from palettable.tableau import GreenOrange_6, Tableau_10
cc = GreenOrange_6.mpl_colors
cc = Tableau_10.mpl_colors
fig,axs = plt.subplots(3,5,figsize = (8,5) )
edges = np.arange(0,15)
for i_st in range(5):
for i_keyp in [1,2,3]:
ax = axs[i_keyp-1,i_st]
all_dist = np.concatenate(distance_stack[i_st][i_keyp])
count,edges = np.histogram(all_dist,edges)
count = count/len(all_dist)
ax.bar(edges[:-1],count,color = cc[i_st])
ax.set_ylim([0,.6])
ax.set_yticks([])
ax.set_xticks([])
ax.axvline(4.5,ls=':',c='k')
missed = (1 - np.sum(count[:5]))*100
ax.text(5,.3,'{:0.0f} %\nmiss'.format(missed))
for ax,name in zip(axs[:,0],['ears','nose','tail']):
ax.set_yticks([0,.2,.4])
ax.set_ylabel('p({})'.format(name) )
for ax,name in zip(axs[0,0:],all_stacks):
ax.set_title("{} stacks".format(name))
for ax in axs[-1,:]:
ax.set_xticks([0,4,8,12])
ax.set_xlabel('d_match [px]')
plt.subplots_adjust(hspace = 0,wspace=0)
fig_folder = '/home/chrelli/git/3ddd_mouse_tracker/analysis/revision_figures/profile_network/'
plt.savefig(fig_folder + 'stacks_keypoint_10.pdf',transparent=True, bbox_inches='tight')
plt.show()
# +
# Make a plot of the keypoints vs time
# for i_st,st in enumerate(all_stacks):
#
i_st = -1
cuts = [.1,.2,.3,.4,.5,.6,.7,.8,.9]
cuts = [.1]
n_cuts = len(cuts)
false_stack = np.zeros((n_cuts,5,4))
for i_cutoff,p_cutoff in enumerate(cuts):
distance_stack = []
for i_st in range(5):
# then epoch
i_ep = -1
# then every image
xy_list = keypoint_stack[i_st][i_ep][0]
pxy_list = keypoint_stack[i_st][i_ep][1]
kpi_list = keypoint_stack[i_st][i_ep][2]
xy_real_list = keypoint_stack[i_st][i_ep][3]
kpi_real_list = keypoint_stack[i_st][i_ep][4]
distance_list = [[],[],[],[]]
n_im = len(xy_list)
for i_im in range(n_im):
for i_keyp in [1,2,3]:
kpi = kpi_list[i_im][:,0]
xy = xy_list[i_im][kpi==i_keyp]
pxy = pxy_list[i_im][kpi==i_keyp]
kpi_real = kpi_real_list[i_im]
xy_real = xy_real_list[i_im][kpi_real==i_keyp]
xy = xy[pxy>p_cutoff,:]
if len(xy) == 0:
# if there were no detections, just go on...
continue
if len(xy_real) == 0:
# if there were no real points to match, set the distance to infinity
d_match = np.inf*np.squeeze(np.ones_like(xy[:,0]))
if len(xy) == 2:
# catch a numpy shape edge case; could be more elegant
d_match = 800*np.array([1])
else:
d_match = 10000*np.ones_like(xy[:,0])
distance_list[i_keyp].append(d_match)
else:
# NOTE: there is a convention conflict here!
# the xy_real are in the classic image space, i.e. (i, j) = (down, right)
# while the detected peaks come back flipped, i.e. (j, i)
i_net = xy[:,1]
j_net = xy[:,0]
i_real = xy_real[:,0]
j_real = xy_real[:,1]
d2 = (i_real[...,np.newaxis]-i_net[np.newaxis,...])**2 + (j_real[...,np.newaxis]-j_net[np.newaxis,...])**2
d = np.sqrt(d2)
match = np.argmin(d,0)
d_match = np.min(d,0)
distance_list[i_keyp].append(d_match)
distance_stack.append(distance_list)
for i_st in range(5):
for i_keyp in [1,2,3]:
edges = np.arange(0,15)
if len(distance_stack[i_st][i_keyp]) == 0:
false_stack[i_cutoff,i_st,i_keyp] = 0
else:
all_dist = np.concatenate(distance_stack[i_st][i_keyp])
count,edges = np.histogram(all_dist,edges)
count = count/len(all_dist)
false_stack[i_cutoff,i_st,i_keyp] = (1 - np.sum(count[:5]))*1
# +
# make a figure to plot the false alarm distance
fig,axs = plt.subplots(3,5,figsize = (8,5) )
for i_st in range(5):
for i_keyp in [1,2,3]:
ax = axs[i_keyp-1,i_st]
all_dist = np.concatenate(distance_stack[i_st][i_keyp])
all_dist = all_dist[all_dist<500]
ax.plot(all_dist,'.',alpha = .1,c=cc[i_st])
ax.set_xticks([])
ax.set_yticks([])
ax.set_ylim([0,70])
plt.subplots_adjust(hspace = 0,wspace=0)
for ax,name in zip(axs[:,0],['ears','nose','tail']):
ax.set_yticks([0,50])
ax.set_ylabel('{}\nd_match [px]'.format(name) )
for ax,name in zip(axs[0,0:],all_stacks):
ax.set_title("{} stacks".format(name))
for ax in axs[-1,:]:
# ax.set_xticks([0,100])
ax.set_xlabel('image')
fig_folder = '/home/chrelli/git/3ddd_mouse_tracker/analysis/revision_figures/profile_network/'
plt.savefig(fig_folder + 'stacks_outliers.pdf',transparent=True, bbox_inches='tight')
plt.show()
# + jupyter={"outputs_hidden": true}
# PLOT THE ROC-LIKE CURVE (p(detected) vs p(false alarm))
# missed_stack =
# false_stack =
fig,axs = plt.subplots(3,5,figsize = (8,5) )
edges = np.arange(0,15)
for i_st in range(5):
for i_keyp in [1,2,3]:
ax = axs[i_keyp-1,i_st]
missed = missed_stack[:,i_st,i_keyp]
false = false_stack[:,i_st,i_keyp]
detected = 1-missed
ax.plot(detected,false,'o-',c=cc[i_st])
ax.set_xlim([0,1])
ax.set_ylim([0,.3])
ax.set_xticks([])
ax.set_yticks([])
for xx,yy,cut in zip(detected,false,cuts):
if (cut == .1):
if (i_st == 1)*(i_keyp == 2):
pass
else:
ax.text(xx-.1,yy+.02,str(cut))
if cut == .5:
ax.text(xx-.1,yy+.02,str(cut))
if cut == .9:
ax.text(xx-.1,yy+.02,str(cut))
plt.subplots_adjust(hspace = 0,wspace=0)
for ax,name in zip(axs[:,0],['ears','nose','tail']):
ax.set_yticks([0,.2,.4])
ax.set_ylabel('{}\np(false alarm)'.format(name) )
for ax,name in zip(axs[0,0:],all_stacks):
ax.set_title("{} stacks".format(name))
for ax in axs[-1,:]:
ax.set_xticks([0,.5])
ax.set_xlabel('p(detected)')
fig_folder = '/home/chrelli/git/3ddd_mouse_tracker/analysis/revision_figures/profile_network/'
# plt.savefig(fig_folder + 'stacks_ROCish.pdf',transparent=True, bbox_inches='tight')
plt.show()
# -
# +
# Make a figure showing the performance below the cutoff
from palettable.tableau import GreenOrange_6, Tableau_10
cc = GreenOrange_6.mpl_colors
cc = Tableau_10.mpl_colors
fig,axs = plt.subplots(3,5,figsize = (8,5) )
edges = np.arange(0,15)
for i_st in range(5):
for i_keyp in [1,2,3]:
ax = axs[i_keyp-1,i_st]
all_dist = np.concatenate(distance_stack[i_st][i_keyp])
count,edges = np.histogram(all_dist,edges)
count = count/len(all_dist)
ax.bar(edges[:-1],count,color = cc[i_st])
ax.set_ylim([0,.6])
ax.set_yticks([])
ax.set_xticks([])
ax.axvline(4.5,ls=':',c='k')
missed = (1 - np.sum(count[:5]))*100
ax.text(5,.3,'{:0.0f} %\nfalse'.format(missed))
for ax,name in zip(axs[:,0],['ears','nose','tail']):
ax.set_yticks([0,.2,.4])
ax.set_ylabel('p({})'.format(name) )
for ax,name in zip(axs[0,0:],all_stacks):
ax.set_title("{} stacks".format(name))
for ax in axs[-1,:]:
ax.set_xticks([0,4,8,12])
ax.set_xlabel('d_match [px]')
plt.subplots_adjust(hspace = 0,wspace=0)
fig_folder = '/home/chrelli/git/3ddd_mouse_tracker/analysis/revision_figures/profile_network/'
plt.savefig(fig_folder + 'stacks_keypoint_net_as_reference_10.pdf',transparent=True, bbox_inches='tight')
plt.show()
# +
from palettable.tableau import GreenOrange_6, Tableau_10
cc = GreenOrange_6.mpl_colors
cc = Tableau_10.mpl_colors
plt.figure(figsize = (5,5) )
for i_st,st in enumerate(all_stacks):
test_epochs = np.hstack((np.arange(0,90,10),89))
if st == 9:
# test_epochs = np.hstack((np.arange(0,120,10),119))
test_epochs[-1] += 1
# plt.plot(test_epochs,np.log(ep_stack[i]),'o-',label = "{} stacks".format(st) )
sigma = 2
# for i_ep,ep in enumerate(test_epochs):
# dat = frame_stack[i_st][i_ep]
# plt.plot(ep * np.ones_like(dat) + sigma*np.random.normal(size = len(dat),loc = 0), dat,'.' , alpha = 0.41 ,c=cc[i_st] )
median_loss = [np.median(fr) for fr in frame_stack[i_st] ]
mean_loss = [np.mean(fr) for fr in frame_stack[i_st] ]
# plt.plot(test_epochs,median_loss,'o-',label = "{} stacks".format(st) ,c=cc[i_st])
plt.plot(test_epochs,mean_loss,'o-',label = "{} stacks".format(st) ,c=cc[i_st])
ax = plt.gca()
ax.set_yscale('log')
plt.legend()
plt.ylabel("Loss [test data]")
# plt.ylabel("log(Loss) [only ]")
plt.xlabel("Training epoch")
fig_folder = '/home/chrelli/git/3ddd_mouse_tracker/analysis/revision_figures/profile_network/'
plt.savefig(fig_folder + 'stacks_test.pdf',transparent=True, bbox_inches='tight')
plt.show()
# +
plt.figure(figsize = (2,4.4) )
inf_speed = [81.5,53.5,39.1,21.9,14.9]
plt.bar([1,2,3,4,5],inf_speed,color = cc)
ax = plt.gca()
ax.set_xticks([1,2,3,4,5])
ax.set_xticklabels([1,2,3,6,9])
plt.xlabel('Stacks')
plt.ylabel('fps [no batching, so slow]')
fig_folder = '/home/chrelli/git/3ddd_mouse_tracker/analysis/revision_figures/profile_network/'
plt.savefig(fig_folder + 'stacks_speed.pdf',transparent=True, bbox_inches='tight')
plt.show()
# +
# also plot the learning rate and the training loss as a func of iterations and time
logdir = '/home/chrelli/git/3ddd_mouse_tracker/analysis/runs/'
stack_list = [1,2,3,6,9]
tag_list = ['Oct26_12-29-07_CE-01','Oct27_09-07-18_CE-01','Oct26_20-51-12_CE-01','Oct26_16-56-27_CE-01','Oct26_23-26-01_CE-01']
# from here https://gist.github.com/tomrunia/1e1d383fb21841e8f144
# from tensorflow.python.summary.event_accumulator import EventAccumulator
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator
import matplotlib as mpl
import matplotlib.pyplot as plt
# Loading too much data is slow...
tf_size_guidance = {
'compressedHistograms': 10,
'images': 0,
'scalars': 100,
'histograms': 1
}
def extract_as_arrays(lr):
# unpacks from the tensorboard logs
lr_v = np.array([x[2] for x in lr])
# the steps
lr_s = np.array([x[1] for x in lr])
# get the time and zero to first time
lr_t = np.array([x[0] for x in lr])
lr_t -= lr_t[0]
return [lr_t,lr_s,lr_v]
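# Each TensorBoard scalar event behaves like a (wall_time, step, value) tuple; a synthetic check of the unpacking done by extract_as_arrays, using fabricated events:

```python
import numpy as np

# fake (wall_time, step, value) events, one minute apart
events = [(1000.0, 0, 2.5), (1060.0, 1, 1.8), (1120.0, 2, 1.1)]
t = np.array([e[0] for e in events])
t -= t[0]                          # zero time to the first event
s = np.array([e[1] for e in events])
v = np.array([e[2] for e in events])
```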
# +
# plot the learning rate
i_stack = 0
fig = plt.figure(figsize = (7,3) )
for i_stack,st in enumerate(stack_list):
tensorboard_path = glob.glob(logdir+tag_list[i_stack]+'/*')[0]
# read the data
event_acc = EventAccumulator(tensorboard_path)
event_acc.Reload()
# Show all tags in the log file
print(event_acc.Tags())
trn_loss = event_acc.Scalars('trn_loss')
trn_frame_loss = event_acc.Scalars('trn_frame_loss')
val_loss = event_acc.Scalars('val_loss')
val_frame_loss = event_acc.Scalars('val_frame_loss')
lr = event_acc.Scalars('lr')
n_stacks = event_acc.Scalars('n_stakcs')
# convert
trn_loss = extract_as_arrays(trn_loss)
trn_frame_loss = extract_as_arrays(trn_frame_loss)
n_stacks = extract_as_arrays(n_stacks)
lr = extract_as_arrays(lr)
plt.plot(lr[1][:90],lr[2][:90],c=cc[i_stack],label = "{} stacks".format(st))
ax = plt.gca()
ax.set_yscale('log')
plt.legend()
plt.ylabel("Learning rate")
# plt.ylabel("log(Loss) [only ]")
plt.xlabel("Training epoch")
fig_folder = '/home/chrelli/git/3ddd_mouse_tracker/analysis/revision_figures/profile_network/'
plt.savefig(fig_folder + 'stacks_lr.pdf',transparent=True, bbox_inches='tight')
plt.show()
# +
# plot the training loss vs epoch
i_stack = 0
fig = plt.figure(figsize = (7,4) )
for i_stack,st in enumerate(stack_list):
tensorboard_path = glob.glob(logdir+tag_list[i_stack]+'/*')[0]
# read the data
event_acc = EventAccumulator(tensorboard_path)
event_acc.Reload()
# Show all tags in the log file
print(event_acc.Tags())
trn_loss = event_acc.Scalars('trn_loss')
trn_frame_loss = event_acc.Scalars('trn_frame_loss')
val_loss = event_acc.Scalars('val_loss')
val_frame_loss = event_acc.Scalars('val_frame_loss')
lr = event_acc.Scalars('lr')
n_stacks = event_acc.Scalars('n_stakcs')
n_train = event_acc.Scalars('n_train')
# convert
trn_loss = extract_as_arrays(trn_loss)
trn_frame_loss = extract_as_arrays(trn_frame_loss)
n_stacks = extract_as_arrays(n_stacks)
lr = extract_as_arrays(lr)
# plt.plot(trn_frame_loss[1]/n_train[0][-1],trn_frame_loss[2]/st,'.',alpha = .01,c=cc[i_stack])
plt.plot(trn_loss[1][:90],trn_loss[2][:90],c=cc[i_stack],label = "{} stacks".format(st))
ax = plt.gca()
ax.set_yscale('log')
plt.legend()
plt.ylabel("Loss (all stacks)")
# plt.ylabel("log(Loss) [only ]")
plt.xlabel("Training epoch")
fig_folder = '/home/chrelli/git/3ddd_mouse_tracker/analysis/revision_figures/profile_network/'
plt.savefig(fig_folder + 'stacks_loss.pdf',transparent=True, bbox_inches='tight')
plt.show()
# +
# plot the training loss vs training time
i_stack = 0
fig = plt.figure(figsize = (7,4) )
for i_stack,st in enumerate(stack_list):
tensorboard_path = glob.glob(logdir+tag_list[i_stack]+'/*')[0]
# read the data
event_acc = EventAccumulator(tensorboard_path)
event_acc.Reload()
# Show all tags in the log file
print(event_acc.Tags())
trn_loss = event_acc.Scalars('trn_loss')
trn_frame_loss = event_acc.Scalars('trn_frame_loss')
val_loss = event_acc.Scalars('val_loss')
val_frame_loss = event_acc.Scalars('val_frame_loss')
lr = event_acc.Scalars('lr')
n_stacks = event_acc.Scalars('n_stakcs')
n_train = event_acc.Scalars('n_train')
# convert
trn_loss = extract_as_arrays(trn_loss)
trn_frame_loss = extract_as_arrays(trn_frame_loss)
n_stacks = extract_as_arrays(n_stacks)
lr = extract_as_arrays(lr)
# plt.plot(trn_frame_loss[1]/n_train[0][-1],trn_frame_loss[2]/st,'.',alpha = .01,c=cc[i_stack])
plt.plot(trn_loss[0][:90]/(60*60),trn_loss[2][:90],c=cc[i_stack],label = "{} stacks".format(st))
ax = plt.gca()
ax.set_yscale('log')
plt.legend()
plt.ylabel("Loss (all stacks)")
# plt.ylabel("log(Loss) [only ]")
plt.xlabel("Training time [hr]")
fig_folder = '/home/chrelli/git/3ddd_mouse_tracker/analysis/revision_figures/profile_network/'
plt.savefig(fig_folder + 'stacks_loss_time.pdf',transparent=True, bbox_inches='tight')
plt.show()
# -
| analysis/ir_profiling_check_performance.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import scipy
import scipy.stats as ss
import random
import plotly
import plotly.graph_objects as go
import warnings
warnings.filterwarnings("ignore")
prices=[0.9, 1.8, 2.49, 3.49 ,4.99, 5.99, 7.5, 9.49, 11.99, 17.99, 19.99, 24.5, 28.9]
countries=['United States','Canada','United Kingdom']
# +
purchases_list=[]
while len(purchases_list) < 1000:
purchases_list.extend(random.sample(prices, 5))
# -
# +
user_ids=[]
for i in np.arange(1,1001,1):
user_ids.append((str('user_id_')+str(i)))
country_list=[]
while len(country_list)<1000:
country_list.extend(random.sample(countries,1))
users_df=pd.DataFrame({'user_id':user_ids,'country':country_list})
# -
users_df.pivot_table(values=['user_id'],
index='country',
aggfunc='count')
# +
payers=[]
while len(payers)<70:
payers.extend(random.sample(user_ids,1))
# +
payments=[]
while len(payments) < 252:
payments.extend(random.sample(list(payers), 1))
# -
len(set(payments))
len(payments)
# +
payments_sum=[]
while len(payments_sum) < 252:
payments_sum.extend(random.sample(prices,1))
# -
len(payments_sum)
purchases_df=pd.DataFrame({'user_id':payments,
'purchase_sum':payments_sum})
purchases_df.to_csv('materials/purchases.csv', index=False)
users_df.to_csv('materials/users.csv', index=False)
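# The two CSVs written above feed the ANOVA notebook in this folder. As a hedged sketch (the column names follow the frames above, but the grouping by country is an assumption about the downstream analysis), purchases can be joined to user countries and their means compared with `scipy.stats.f_oneway`:

```python
import pandas as pd
import scipy.stats as ss

# Small self-contained stand-ins for users_df / purchases_df
users = pd.DataFrame({'user_id': ['u1', 'u2', 'u3', 'u4'],
                      'country': ['United States', 'United States',
                                  'Canada', 'Canada']})
purchases = pd.DataFrame({'user_id': ['u1', 'u2', 'u3', 'u4', 'u1'],
                          'purchase_sum': [1.8, 2.49, 4.99, 5.99, 0.9]})

# Attach each payment to the payer's country, then compare group means
merged = purchases.merge(users, on='user_id', how='left')
groups = [g['purchase_sum'].values for _, g in merged.groupby('country')]
f_stat, p_value = ss.f_oneway(*groups)
```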
| anova_analysis/.ipynb_checkpoints/data_prep-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: yggJLNE
# language: python
# name: yggjlne
# ---
import jupyterlab_nodeeditor as jlne
coll = jlne.SocketCollection(socket_types = ('Temperature', 'Rainfall', 'Delta Time', 'Results'))
in1 = jlne.InputSlot(title = "Temperature Morning", key = "temp1", socket_type = "Temperature", sockets = coll)
in2 = jlne.InputSlot(title = "Temperature Afternoon", key = "temp2", socket_type = "Temperature", sockets = coll)
in3 = jlne.InputSlot(title = "Temperature Evening", key = "temp3", socket_type = "Temperature", sockets = coll)
out1 = jlne.OutputSlot(title = "Average Temperature", key = "avg_temp", socket_type="Temperature", sockets = coll)
in4 = jlne.InputSlot(title = "Daily Rainfall", key="rainfall", socket_type="Rainfall", sockets=coll)
in5 = jlne.InputSlot(title = "Daily Temperature", key="temp", socket_type="Temperature", sockets=coll)
in6 = jlne.InputSlot(title = "Time at Sunrise", key="time", socket_type="Delta Time", sockets=coll)
out2 = jlne.OutputSlot(title = "Results", key="results", socket_type="Results", sockets=coll)
c1 = jlne.Component(sockets=coll, inputs = [in1, in2, in3], outputs = [out1], title="Temperature Averaging")
c2 = jlne.Component(sockets=coll, inputs = [in4, in5, in6], outputs = [out2], title="My Model")
editor = jlne.NodeEditorModel()
editor.add_component(c1)
editor.add_component(c2)
editor.send_config({'a14a0c84-8f57-4904-a989-f385d398feae': {'id': 'jupyterlab_nodeeditor@0.1.0',
'nodes': {'7': {'id': 7,
'data': {},
'inputs': {'temp1': {'connections': []},
'temp2': {'connections': []},
'temp3': {'connections': []}},
'outputs': {'avg_temp': {'connections': [{'node': 9,
'input': 'temp',
'data': {}}]}},
'position': [-32.26091002098711, 209.57835110393947],
'name': 'Temperature Averaging'},
'8': {'id': 8,
'data': {},
'inputs': {},
'outputs': {},
'position': [389.29637401810436, 82.77778168871276],
'name': 'DefaultComponent'},
'9': {'id': 9,
'data': {},
'inputs': {'rainfall': {'connections': []},
'temp': {'connections': [{'node': 7, 'output': 'avg_temp', 'data': {}}]},
'time': {'connections': []}},
'outputs': {'results': {'connections': []}},
'position': [370.77779306113496, 171.66663527679302],
'name': 'My Model'}}}})
display(editor)
editor.sync_config()
editor.editorConfig
coll.socket_types
| examples/example_jlne.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:tf2]
# language: python
# name: conda-env-tf2-py
# ---
import numpy as np
import cv2
import os
import matplotlib.pyplot as plt
easy = np.load('texts.v2.npz')
easy['texts'].shape
# 5837 5838 5896
# +
SKIPPED_DIR = './net/skipped/'
# tagged = os.listdir('./net/untagged_imgs/')
def skipped_file_gen():
for file_name in os.listdir(SKIPPED_DIR):
id = file_name.partition('.')[0]
yield int(id)
file_gen = skipped_file_gen()
# -
imid = next(file_gen)
# +
print(imid)
img = easy['texts'][imid]
plt.imshow(img, cmap='gray')
# +
# text_img2 = cv2.GaussianBlur(img, (3,3), 1)
text_img2 = img
sobelY = np.array([  # note: this is the Sobel x-direction kernel (responds to vertical edges)
[-1, 0, 1],
[-2, 0, 2],
[-1, 0, 1]
])
plt.imshow(text_img2, cmap='gray')
# -
edges = cv2.filter2D(text_img2, -1, sobelY)
plt.imshow(edges, cmap='gray')
_, thres = cv2.threshold(edges, 0, 1, cv2.THRESH_OTSU)
plt.imshow(thres, cmap='gray')
# +
col_sum = np.sum(thres[:5, :], axis=0)
divides = np.argwhere(col_sum > 4).flatten()
print(divides)
# col = int(round(np.average(divides)))
col = np.max(divides)
res = img[:, :col]
plt.imshow(res, cmap='gray')
# -
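# The divide-detection step above (column sums of the thresholded edge image) can be illustrated on a tiny synthetic input; the threshold of 4 and the 5-row window mirror the cell above:

```python
import numpy as np

# Synthetic "thresholded edge" image with a strong vertical edge at column 7
thres = np.zeros((10, 12), dtype=np.uint8)
thres[:, 7] = 1

col_sum = np.sum(thres[:5, :], axis=0)        # edge pixels in the top 5 rows
divides = np.argwhere(col_sum > 4).flatten()  # columns with enough edge pixels
col = np.max(divides)                         # rightmost divide column
# the original image would then be cropped to img[:, :col]
```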
SAVE_DIR = 'net/untagged_imgs/'
cv2.imwrite(os.path.join(SAVE_DIR, "%05d.png"%(imid)), res)
def cropping_img(img):
text_img2 = cv2.GaussianBlur(img, (3,3), 1)
sobelY = np.array([
[-1, 0, 1],
[-2, 0, 2],
[-1, 0, 1]
])
edges = cv2.filter2D(text_img2, -1, sobelY)
_, thres = cv2.threshold(edges, 0, 1, cv2.THRESH_OTSU)
col_sum = np.sum(thres, axis=0)
divides = np.argwhere(col_sum > 17).flatten()
if len(divides) >= 1:
col = int(round(np.average(divides)))
res = img[:, :col]
else:
res = img
return res
# +
SAVE_DIR = 'net/untagged_img0/'
for i, img in enumerate(easy['texts']):
res = cropping_img(img)
    cv2.imwrite(os.path.join(SAVE_DIR, "%05d.png"%(i)), res)  # save the cropped image, not the original
# -
easier = np.load('texts.npz')
easier['labels'].shape
map_list = [15, 64, 65, 26, 48, 12, 27, 41, 22, 54, 9, 79, 45, 17, 8, 30, 44, 78, 34, 33, 69, 66, 28, 29, 2, 25, 4, 35, 51, 77, 39, 47, 31, 76, 62, 3, 63, 19, 71, 46, 50, 38, 43, 68, 75, 55, 13, 40, 1, 24, 42, 36, 58, 60, 53, 7, 52, 11, 23, 18, 5, 70, 16, 14, 73, 20, 67, 49, 0, 61, 6, 32, 72, 56, 37, 74, 57, 59, 21, 10]
DIR = 'net/tagged_imgs_org/'
for i, img in enumerate(easier['texts']):
id = map_list[easier['labels'][i]]
d = os.path.join(DIR, "%02d"%(id))
if not os.path.exists(d):
os.mkdir(d)
cv2.imwrite(os.path.join(d, "%05d.png"%(20000+i)), img)
| text/text_image_exploring.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
#Data Loading
data = np.genfromtxt('sgd_data.txt',delimiter = ',')
x = np.zeros((40,1), dtype = np.float64)
y = np.zeros((40,1), dtype = np.float64)
for i in range(data.shape[0]):
x[i] = data[i][0]
for i in range(data.shape[0]):
y[i] = data[i][1]
print("Input data shape = {}".format(x.shape))
print("Output data shape = {}".format(y.shape))
#Helper Functions
def f(x,w,b):
'''Sigmoid Function'''
f = 1/(1+np.exp(-(w*x+b)))
return f
def mse(x,y,w,b):
'''Mean Squared Loss Function'''
L = 0.0
for i in range(x.shape[0]):
L += 0.5*(y[i]-f(x[i],w,b))**2
return L
def cross_entropy(x,y,w,b):
'''Cross Entropy Loss Function'''
L = 0.0
for i in range(x.shape[0]):
L += -(y[i]*np.log(f(x[i],w,b)))
return L
def grad_w_mse(x,y,w,b):
fx = f(x,w,b)
dw = (fx - y)*fx*(1-fx)*x
return dw
def grad_b_mse(x,y,w,b):
fx = f(x,w,b)
db = (fx - y)*fx*(1-fx)
return db
def grad_w_cross(x,y,w,b):
fx = f(x,w,b)
dw = (- y)*(1-fx)*x
return dw
def grad_b_cross(x,y,w,b):
fx = f(x,w,b)
db = (- y)*(1-fx)
return db
#Gradient Descent
def Line_search_GD(x,y,epochs,batch_size,loss,lr_list):
w = np.random.randn()
b = np.random.randn()
l_list = []
w_list = []
b_list = []
points = 0
ep = [i for i in range(epochs+1)]
dw,db = 0,0
for i in range(epochs+1):
dw,db = 0,0
for j in range(x.shape[0]):
if (loss == 'mse'):
dw += grad_w_mse(x[j],y[j],w,b)
db += grad_b_mse(x[j],y[j],w,b)
elif (loss == 'cross_entropy'):
dw += grad_w_cross(x[j],y[j],w,b)
db += grad_b_cross(x[j],y[j],w,b)
points += 1
if(points % batch_size == 0):
                best_w,best_b = w,b
                min_loss = np.inf
                # use a fresh loop variable (k) and a separate tmp_loss so the
                # epoch counter i and the loss-name string are not clobbered
                for k in range(len(lr_list)):
                    tmp_w = w - lr_list[k]*dw
                    tmp_b = b - lr_list[k]*db
                    if (loss == 'mse'):
                        tmp_loss = mse(x,y,tmp_w,tmp_b)[0]
                    elif (loss == 'cross_entropy'):
                        tmp_loss = cross_entropy(x,y,tmp_w,tmp_b)[0]
                    if (tmp_loss < min_loss):
                        min_loss = tmp_loss
                        best_w,best_b = tmp_w,tmp_b
                w,b = best_w,best_b
                dw,db = 0,0
if (loss == 'mse'):
print('Loss after {}th epoch = {}\n'.format(i,mse(x,y,w,b)[0]))
l_list.append(mse(x,y,w,b)[0])
elif (loss == 'cross_entropy'):
print('Loss after {}th epoch = {}\n'.format(i,cross_entropy(x,y,w,b)[0]))
l_list.append(cross_entropy(x,y,w,b)[0])
w_list.append(w[0])
b_list.append(b[0])
plt.xlabel('Epochs')
plt.ylabel('Loss')
    plt.title('Loss vs Epoch Curve\nAlgorithm: Line Search Mini-Batch Gradient Descent\nBatch Size = {}\nLoss Function = {}'.format(batch_size,loss))
#plt.plot(ep,l_list)
#plt.show()
return w_list,b_list
Learning_rate_list = [0.01,0.07,0.1,0.2,0.4,0.9]
W,B = Line_search_GD(x,y,500,10,'mse',Learning_rate_list)
print('Weight list = \n{}'.format(W))
print('\n\nBias list = \n{}'.format(B))
W,B = Line_search_GD(x,y,500,10,'cross_entropy',Learning_rate_list)
print('Weight list = \n{}'.format(W))
print('\n\nBias list = \n{}'.format(B))
#Error Surface MSE
w = np.linspace(-10,10,num = 1000,dtype = np.float64)
b = np.linspace(-10,10,num = 1000,dtype = np.float64)
w,b = np.meshgrid(w,b)
mse_list = []
for i in range(w.shape[0]):
Loss = mse(x,y,w[i],b[i])
mse_list.append(Loss)
fig = plt.figure()
ax = fig.add_subplot(projection='3d')  # fig.gca(projection=...) was removed in recent Matplotlib
surf = ax.plot_surface(w, b, np.array(mse_list), cmap=cm.coolwarm, linewidth=0, antialiased=False)
plt.title('MSE Error Surface')
plt.show()
#Error Surface Cross Entropy
cross_list = []
for i in range(w.shape[0]):
Loss = cross_entropy(x,y,w[i],b[i])
cross_list.append(Loss)
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
surf = ax.plot_surface(w, b, np.array(cross_list), cmap=cm.coolwarm, linewidth=0, antialiased=False)
plt.title('Cross Entropy Error Surface')
plt.show()
| Examples/Line Search Gradient Descent.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/hollow775/Python/blob/main/Mineracao_Dados.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="DZGQYBaziY2k"
# #In this small project we will use the site https://www.basketball-reference.com/, which provides NBA game data.
#
# We will also use some functions from the Pandas library.
#
# Our data mining will return the number of NBA players with a given number of points.
# The y-axis will show the number of players and the x-axis the points they scored
# + colab={"base_uri": "https://localhost:8080/", "height": 35} id="5Ipo8N8njJzW" outputId="509eb46f-e98a-42a9-e1df-8010bbcf047c"
# Building the URL. This way we can change the year
# and fetch the data for whichever season we want
year = '2019'
url_link = 'https://www.basketball-reference.com/leagues/NBA_{}_per_game.html'
# Combine the URL template with the year
url = url_link.format(year)
url
# + colab={"base_uri": "https://localhost:8080/"} id="QMa7yZCZjsvP" outputId="037ba127-f7fe-481d-813b-81802b884780"
#we can also pass an array containing more than one year.
years = [2015,2016,2017,2018,2019]
url_link = 'https://www.basketball-reference.com/leagues/NBA_{}_per_game.html'
for year in years:
url = url_link.format(year)
print(url)
# + [markdown] id="i4SIN3Kgj4cf"
# #Now let's import the Pandas library
# + colab={"base_uri": "https://localhost:8080/"} id="EuwGfAd2j7hk" outputId="c5235ad5-5bc7-48d5-f2d9-0850bab02108"
import pandas as pd
# the read_html function reads our web page and returns its tables
df = pd.read_html(url, header = 0)
df
# + colab={"base_uri": "https://localhost:8080/"} id="vYVp3kFekUi2" outputId="19c50487-9d65-4d30-d7cc-8b8870cc4194"
#we can easily find out how many tables were returned using the command
len(df)
# + colab={"base_uri": "https://localhost:8080/", "height": 865} id="7CyNllzYkikj" outputId="bd241be0-dd8f-4c92-b0bc-4930ac781c0f"
#now let's take only the first table and then remove the table header rows, which keep repeating
df2019 = df[0]
df2019[df2019.Age == 'Age']
# + colab={"base_uri": "https://localhost:8080/"} id="6ImVzPGclSmj" outputId="47fb476d-2e03-4b51-e9e7-d835ba1c43d7"
#as seen above, the header repeats many times, so we drop every row starting from the first repeated header
len(df2019[df2019.Age == 'Age'])
df = df2019.drop(df2019[df2019.Age == 'Age'].index)
df.shape
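# One caveat (an assumption about the parsed types, not shown in the original): columns returned by `read_html` are often object dtype because of the repeated header rows, so the stats columns may need a numeric conversion before plotting. A minimal sketch:

```python
import pandas as pd

# Toy column mimicking a stats table with a repeated header row mixed in
example = pd.DataFrame({'PTS': ['10.5', '20.1', 'Age', '7.3']})
example = example[example.PTS != 'Age'].copy()  # drop the repeated header row
example['PTS'] = pd.to_numeric(example.PTS)     # strings -> floats
mean_pts = example['PTS'].mean()
```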
# + [markdown] id="itSrPLssku1_"
# #Agora vamos ver os dados que temos utilizando a biblioteca seaborn
# + colab={"base_uri": "https://localhost:8080/", "height": 351} id="5vnvwSX5k1Ov" outputId="7e434342-defe-42ec-db05-8780a13a5b14"
import seaborn as sns
# distplot was removed from recent seaborn releases; histplot draws the same histogram
sns.histplot(df.PTS)
# + colab={"base_uri": "https://localhost:8080/", "height": 351} id="8g9wDxSRk6Eo" outputId="a6975f65-49c8-44fd-fe34-935e8590fcbc"
#to improve the visualization we can change the bar outline and the color
sns.histplot(df.PTS,
             edgecolor="black", linewidth=2,
             color='#00BFC4')
| Mineracao_Dados.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import my_packages.Transformers.StringTransformer as Trans
import my_packages.Operations.Math as Math
# +
#from my_packages.Operations.Math import get_average as avg
# -
dir (Trans)
dir (Math)
Trans.reverse("Hello Mohamed")
Trans.capitalize("mohamed yousry")
lst=[3,22,47,15,87,52,11,2,99]
Math.get_average(lst)
| using my_packages .ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="Z7K0SeDgzjL2"
# ---
#
#
#
# ---
#
# # Predicting political party from voting records
#
# This notebook implements a neural network to predict (discover) the political party of a US congressperson based on their votes.
#
# It uses a public dataset of how US congresspeople voted on 17 different questions.
#
# In the United States there are two major political parties: "Democrat" and "Republican". In modern times they represent progressive and conservative ideologies, respectively.
#
# This is a binary classification problem.
#
#
# ---
#
#
#
# ---
# + [markdown] id="PcWx36BdDBX0"
# ## Dataset
#
# File: **votes_data.txt**:
#
# 1. 1984 United States Congressional Voting Records Database
#
# 2. Source information:
#    * Source: Congressional Quarterly Almanac, 98th Congress,
#      2nd session 1984, Volume XL: Congressional Quarterly Inc.
#      Washington, D.C., 1985.
#    * Donor: <NAME> (<EMAIL>)
#    * Date: April 27, 1987
#
# 3. Past usage
#    - Publications
#      1. Schlimmer, <NAME>. (1987). Concept acquisition through
#         representational adjustment. Doctoral dissertation, Department of
#         Information and Computer Science, University of California, Irvine, CA.
#    - Results: about 90%-95% accuracy appears to be STAGGER's asymptote
#    - Predicted attribute: party affiliation (2 classes)
#
# 4. Number of instances: 435 (267 Democrats, 168 Republicans)
#
# 5. Number of attributes: 16 + class/party name = 17 (all boolean valued)
#
# 6. Attribute information:
#    1. Class name: (democrat, republican)
#    2. Handicapped infants: (y, n)
#    3. Water project cost sharing: (y, n)
#    4. Adoption of the budget resolution: (y, n)
#    5. Physician fee freeze: (y, n)
#    6. El Salvador aid: (y, n)
#    7. Religious groups in schools: (y, n)
#    8. Anti-satellite test ban: (y, n)
#    9. Aid to Nicaraguan contras: (y, n)
#    10. MX missile: (y, n)
#    11. Immigration: (y, n)
#    12. Synfuels corporation cutback: (y, n)
#    13. Education spending: (y, n)
#    14. Superfund right to sue: (y, n)
#    15. Crime: (y, n)
#    16. Duty-free exports: (y, n)
#    17. South Africa export: (y, n)
#
#
# + [markdown] id="LWXwGCqvIwgN"
# ## Reading and preparing the data
#
# Let's start by importing the raw CSV file using Pandas and build a DataFrame from it, inserting labels (names) for each column (attribute)
# + colab={"base_uri": "https://localhost:8080/", "height": 360} id="LM6AT7tPzjL6" executionInfo={"status": "ok", "timestamp": 1623118667799, "user_tz": 180, "elapsed": 3407, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiasvxfAR-CIxluihbOgHWSf2ClG399tovBvjxjuA=s64", "userId": "12807951517144359226"}} outputId="14a3f21e-1bc2-441e-f01b-55ca7d9fb9e2"
import pandas as pd
# Clone the data repository from GitHub
# !git clone https://github.com/malegopc/AM2PUCPOC
# create a list with the labels of each attribute
nomes_atributos = ['Partido', 'Crianças deficientes', 'Compartilhamento de custos do projeto de água',
'Resolução de adoção do orçamento', 'Congelamento de honorários médicos',
'Ajuda El Salvador', 'Grupos religiosos nas escolas',
'Proibição de teste anti-satélite', 'Ajuda a contras nicaraguenses',
'Míssil MX', 'Imigração', 'Corte de corporação de combustíveis Syn',
'Gastos com educação', ' Super fundo direto', 'Crime',
'Exportações isentas de impostos', 'Exportação África do Sul']
# read the data file, assign NaN to missing values, and set labels on each column
dados_votacao = pd.read_csv('/content/AM2PUCPOC/Datasets/Predicao_Politica/votes_data.txt', na_values=['?'], names = nomes_atributos)
# print the first 5 rows of the assembled data
dados_votacao.head()
# + [markdown] id="HFsjqPh-zjL-"
# ## Descriptive (statistical) analysis of the data
#
# Descriptive summary of the data.
# + colab={"base_uri": "https://localhost:8080/", "height": 206} id="d02W1HKazjL-" executionInfo={"status": "ok", "timestamp": 1623118668223, "user_tz": 180, "elapsed": 427, "user": {"displayName": "Andr\u00<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiasvxfAR-CIxluihbOgHWSf2ClG399tovBvjxjuA=s64", "userId": "12807951517144359226"}} outputId="f12e1da3-1487-4e59-c969-a626429b7e76"
dados_votacao.describe()
# + [markdown] id="qW6o_Dn0mB-E"
# ## Number of instances
# + colab={"base_uri": "https://localhost:8080/"} id="U9TOyn0Pj3tu" executionInfo={"status": "ok", "timestamp": 1623118668223, "user_tz": 180, "elapsed": 11, "user": {"displayName": "Andr\u00<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiasvxfAR-CIxluihbOgHWSf2ClG399tovBvjxjuA=s64", "userId": "12807951517144359226"}} outputId="c016177b-4e54-4580-9735-650b110719d7"
len(dados_votacao)
# + [markdown] id="HVm9d_FmzjL_"
# ## Dropping missing data
#
# Some politicians abstained on some votes or simply were not present at voting time. To work around this problem we will simply drop the rows with missing data to keep things simple, but in practice it is important to make sure this does not introduce any kind of bias into your analysis (if one party abstains more than the other, that could be problematic)
#
# + colab={"base_uri": "https://localhost:8080/", "height": 206} id="c6gfguVAzjL_" executionInfo={"status": "ok", "timestamp": 1623118668224, "user_tz": 180, "elapsed": 10, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiasvxfAR-CIxluihbOgHWSf2ClG399tovBvjxjuA=s64", "userId": "12807951517144359226"}} outputId="41b6edcb-7457-46b7-b136-a617fd5dd58e"
# Drop missing data in place
dados_votacao.dropna(inplace=True)
dados_votacao.describe()
# + [markdown] id="QzU81rWymGty"
# ## Number of instances after dropping missing data
# + colab={"base_uri": "https://localhost:8080/"} id="iZbrJd0amTG7" executionInfo={"status": "ok", "timestamp": 1623118668224, "user_tz": 180, "elapsed": 9, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiasvxfAR-CIxluihbOgHWSf2ClG399tovBvjxjuA=s64", "userId": "12807951517144359226"}} outputId="361d34fe-97e5-4ca7-815e-5c081eb75480"
len(dados_votacao)
# + [markdown] id="f0ziTMDEzjMA"
# ## Transform categorical data (y and n) into numbers (1 and 0)
# + colab={"base_uri": "https://localhost:8080/", "height": 237} id="IE4ekFOTzjMA" executionInfo={"status": "ok", "timestamp": 1623118668224, "user_tz": 180, "elapsed": 8, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiasvxfAR-CIxluihbOgHWSf2ClG399tovBvjxjuA=s64", "userId": "12807951517144359226"}} outputId="7d91405a-bb01-4ee3-b944-13148f34d2c1"
dados_votacao.replace(('y', 'n'), (1, 0), inplace=True)
dados_votacao.replace(('democrat', 'republican'), (1, 0), inplace=True)
dados_votacao.head()
# + [markdown] id="yabMrBvezjMC"
# ## Separating attributes from classes
#
# Extract the attributes and the classes (labels), placing them in two separate variables (in the shape Keras expects).
# + colab={"base_uri": "https://localhost:8080/"} id="9pK51IcczjMC" executionInfo={"status": "ok", "timestamp": 1623118668225, "user_tz": 180, "elapsed": 8, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiasvxfAR-CIxluihbOgHWSf2ClG399tovBvjxjuA=s64", "userId": "12807951517144359226"}} outputId="ba4149a7-6b99-4568-e769-626e8e8ad987"
X = dados_votacao[nomes_atributos].drop('Partido', axis=1).values
y = dados_votacao['Partido'].values
print(X.shape)
print(y.shape)
# + [markdown] id="OhhPEpwNm4z2"
# ## Split the dataset into train and test
# + colab={"base_uri": "https://localhost:8080/"} id="R2rzqlBYm8CD" executionInfo={"status": "ok", "timestamp": 1623118669334, "user_tz": 180, "elapsed": 1115, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiasvxfAR-CIxluihbOgHWSf2ClG399tovBvjxjuA=s64", "userId": "12807951517144359226"}} outputId="2498e27d-ee04-43ee-ffcc-ae6de090c6da"
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.25,random_state=42)
print(X_train.shape)
print(X_test.shape)
# + [markdown] id="zyWHf7bCzjMD"
# ## Building the neural network model
#
# + colab={"base_uri": "https://localhost:8080/"} id="uwpVybNDzjME" executionInfo={"status": "ok", "timestamp": 1623118672258, "user_tz": 180, "elapsed": 2926, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiasvxfAR-CIxluihbOgHWSf2ClG399tovBvjxjuA=s64", "userId": "12807951517144359226"}} outputId="eefa7388-eef5-4695-b39a-ee85f67671c3"
from keras.layers import Dense
from keras.models import Sequential
from sklearn.model_selection import cross_val_score
model = Sequential()
# 1st hidden layer with 32 neurons - 16 inputs (attributes)
model.add(Dense(32, input_dim=16, kernel_initializer = 'he_uniform', activation='relu'))
# 2nd hidden layer with 16 neurons
model.add(Dense(16, kernel_initializer = 'he_uniform', activation='relu'))
# Output layer with 1 neuron - binary classification (Democrat or Republican)
model.add(Dense(1, activation='sigmoid'))
# Model summary
model.summary()
model.summary()
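# The parameter count reported by `model.summary()` can be checked by hand for this dense stack (weights plus biases per layer):

```python
# Dense layer parameters = inputs * units + units (biases)
params_hidden1 = 16 * 32 + 32   # 1st hidden layer
params_hidden2 = 32 * 16 + 16   # 2nd hidden layer
params_output = 16 * 1 + 1      # output layer
total_params = params_hidden1 + params_hidden2 + params_output  # 1089
```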
# + [markdown] id="wHhaiOZfcPR1"
# ## Compile the model
# + id="HWM1ThWlzjMF" executionInfo={"status": "ok", "timestamp": 1623118672258, "user_tz": 180, "elapsed": 7, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiasvxfAR-CIxluihbOgHWSf2ClG399tovBvjxjuA=s64", "userId": "12807951517144359226"}}
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# + [markdown] id="n2dOhpdZceL0"
# ## Train the model
# + colab={"base_uri": "https://localhost:8080/"} id="4-4ArmPeBt4t" executionInfo={"status": "ok", "timestamp": 1623118691034, "user_tz": 180, "elapsed": 18783, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiasvxfAR-CIxluihbOgHWSf2ClG399tovBvjxjuA=s64", "userId": "12807951517144359226"}} outputId="57f1763f-20f9-4183-e4da-e0ddaa69c338"
from keras.callbacks import EarlyStopping
early_stop = EarlyStopping(monitor='val_loss', min_delta = 1e-10, patience=5, restore_best_weights = True, verbose=1)
history = model.fit(X_train,y_train, validation_data=(X_test,y_test), batch_size=8, epochs=100, callbacks=[early_stop])
# + id="M-OXJQ3vciV1" executionInfo={"status": "ok", "timestamp": 1623118691035, "user_tz": 180, "elapsed": 8, "user": {"displayName": "Andr\u00<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiasvxfAR-CIxluihbOgHWSf2ClG399tovBvjxjuA=s64", "userId": "12807951517144359226"}}
# history = model.fit(X_train, y_train, epochs=100, batch_size=8, validation_data=(X_test, y_test))
# + [markdown] id="l48BhF4PlZkM"
# ## Analyzing the *loss* function (error)
# Let's observe the behavior of the *loss* function on the training and validation/test data.
# + colab={"base_uri": "https://localhost:8080/", "height": 297} id="oscCHzuRldf7" executionInfo={"status": "ok", "timestamp": 1623118693521, "user_tz": 180, "elapsed": 2494, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiasvxfAR-CIxluihbOgHWSf2ClG399tovBvjxjuA=s64", "userId": "12807951517144359226"}} outputId="ee9f9d03-c084-4654-c14f-9ced41d45fca"
import matplotlib.pyplot as plt
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'], '')
plt.xlabel("Epochs")
plt.ylabel('Error/loss')
plt.title('Loss function')
plt.legend(['loss', 'val_loss'])
plt.show()
# + [markdown] id="CnOywLpjliYD"
# ## Analyzing the performance metric (accuracy)
# Let's observe the network's performance (accuracy) on the training and validation/test data.
# + colab={"base_uri": "https://localhost:8080/", "height": 297} id="47nLbRc_llb9" executionInfo={"status": "ok", "timestamp": 1623118693522, "user_tz": 180, "elapsed": 5, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiasvxfAR-CIxluihbOgHWSf2ClG399tovBvjxjuA=s64", "userId": "12807951517144359226"}} outputId="6551cfcb-1fa8-4109-b91b-4de840b42d24"
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'], '')
plt.xlabel("Epochs")
plt.ylabel('Accuracy')
plt.title('Accuracy')
plt.legend(['accuracy', 'val_accuracy'])
plt.show()
# + [markdown] id="UnCzp7cqefpF"
# ## Classify the test data
# + id="55GxDLXLeoJK" executionInfo={"status": "ok", "timestamp": 1623118694027, "user_tz": 180, "elapsed": 508, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiasvxfAR-CIxluihbOgHWSf2ClG399tovBvjxjuA=s64", "userId": "12807951517144359226"}}
y_pred = model.predict(X_test)
y_pred = (y_pred > 0.5)
# + [markdown] id="lhcvQZaLe6hx"
# ## Compute the confusion matrix on the test data
# + colab={"base_uri": "https://localhost:8080/"} id="XxuZT6jxfD0H" executionInfo={"status": "ok", "timestamp": 1623118694027, "user_tz": 180, "elapsed": 4, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiasvxfAR-CIxluihbOgHWSf2ClG399tovBvjxjuA=s64", "userId": "12807951517144359226"}} outputId="15ffb0d7-ec6c-41fb-9d9e-e24440971223"
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
print(cm)
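# For a 2x2 confusion matrix, accuracy is the trace divided by the total count. A small sketch with illustrative values (not the actual output of the run above):

```python
import numpy as np

# rows = true class, columns = predicted class
cm_example = np.array([[50, 3],
                       [2, 49]])
accuracy = np.trace(cm_example) / cm_example.sum()  # (50 + 49) / 104
```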
# + [markdown] id="1-bBUClVowa6"
# ## Classify the entire dataset
# + id="ufS7LzNno3oR" executionInfo={"status": "ok", "timestamp": 1623118694580, "user_tz": 180, "elapsed": 555, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiasvxfAR-CIxluihbOgHWSf2ClG399tovBvjxjuA=s64", "userId": "12807951517144359226"}}
y_pred2 = model.predict(X)
y_pred2 = (y_pred2 > 0.5)
# + [markdown] id="Cn9dwVGuo-8j"
# ## Compute the confusion matrix on the entire dataset
# + colab={"base_uri": "https://localhost:8080/"} id="oGSUOV9WpDNg" executionInfo={"status": "ok", "timestamp": 1623118694581, "user_tz": 180, "elapsed": 10, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiasvxfAR-CIxluihbOgHWSf2ClG399tovBvjxjuA=s64", "userId": "12807951517144359226"}} outputId="a1e68a85-ac10-4d35-99b8-f526873c57cc"
from sklearn.metrics import confusion_matrix
cm2 = confusion_matrix(y, y_pred2)
print(cm2)
# + id="kkMaa8wmldRq" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1623118694581, "user_tz": 180, "elapsed": 7, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiasvxfAR-CIxluihbOgHWSf2ClG399tovBvjxjuA=s64", "userId": "12807951517144359226"}} outputId="0bc3d2de-36fb-4cf3-d170-59db2affe666"
import numpy as np
print(np.around(model.predict(X[:5,:])))
# + id="rN1vbr1xokhd" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1623118694581, "user_tz": 180, "elapsed": 6, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiasvxfAR-CIxluihbOgHWSf2ClG399tovBvjxjuA=s64", "userId": "12807951517144359226"}} outputId="4254e7d4-0b54-4f07-e1ac-fac9cb75a7cb"
x1 = np.array([X[0]])
pred = model.predict(x1)
print(type(pred)) # numpy array
print(pred)
print(round(pred[0][0]))
| Uso da callback EarlyStopping.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: geo_dev
# language: python
# name: geo_dev
# ---
# # Tessellation-based blocks
# ## Ideal case with ideal data
#
# Assume the following situation: we have a layer of buildings for a selected city, a street network represented by centrelines, and we have generated a morphological tessellation, but now we would like to do some work on the block scale. We have two options - generate blocks based on the street network (each closed loop is a block) or use the morphological tessellation and join those cells which are expected to be part of one block. That way you get tessellation-based blocks that follow the same logic as the rest of your data and are hence fully compatible. With `momepy` you can do that using `momepy.Blocks`.
import momepy
import geopandas as gpd
import matplotlib.pyplot as plt
# For illustration, we can use `bubenec` dataset embedded in `momepy`.
path = momepy.datasets.get_path('bubenec')
buildings = gpd.read_file(path, layer='buildings')
streets = gpd.read_file(path, layer='streets')
tessellation = gpd.read_file(path, layer='tessellation')
# + tags=["hide_input"]
f, ax = plt.subplots(figsize=(10, 10))
tessellation.plot(ax=ax, edgecolor='white', linewidth=0.2)
buildings.plot(ax=ax, color='white', alpha=.5)
streets.plot(ax=ax, color='black')
ax.set_axis_off()
plt.show()
# -
# This example is useful to illustrate why purely network-based blocks are not ideal.
# 1. In the centre of the area there would be a block, while it is clear that there is only open space.
# 2. Blocks on the edges of the area would be complicated, as streets do not fully enclose them.
#
# None of it is an issue for tessellation-based blocks. `momepy.Blocks` requires buildings, tessellation, streets and IDs. We have it all, so we can give it a try.
# + tags=["hide_output"]
blocks = momepy.Blocks(
tessellation, streets, buildings, id_name='bID', unique_id='uID')
# -
# GeoDataFrame containing blocks can be accessed using `blocks`. Moreover, block ID for buildings and tessellation can be accessed using `buildings_id` and `tessellation_id`.
blocks_gdf = blocks.blocks
buildings['bID'] = blocks.buildings_id
tessellation['bID'] = blocks.tessellation_id
# + tags=["hide_input"]
f, ax = plt.subplots(figsize=(10, 10))
blocks_gdf.plot(ax=ax, edgecolor='white', linewidth=0.5)
buildings.plot(ax=ax, color='white', alpha=.5)
ax.set_axis_off()
plt.show()
# -
# ## Fixing the street network
#
# The example above shows how it works in the ideal case - streets are attached, there are no gaps. However, that is often not true. For that reason, `momepy` includes `momepy.extend_lines` utility which is supposed to fix the issue. It can do two types of network adaptation:
#
# 1. **Snap the network onto itself where a false dead-end is present.** It will extend the last segment by a set distance and snap it onto the first segment it reaches.
# 2. **Snap the network onto the edge of the tessellation.** As we need to be able to define blocks up to the very edge of the tessellation, in some cases it is necessary to extend a segment and snap it to the edge of the tessellated area.
#
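# The first adaptation boils down to simple vector geometry: push the dangling endpoint further along the direction of its last segment. Below is a minimal pure-Python sketch of that step (illustrative only; `momepy.extend_lines` additionally handles snapping onto reached segments and barriers):

```python
import math

def extend_last_segment(coords, distance):
    # coords: list of (x, y) vertices of a line; returns the coords with
    # the final vertex pushed `distance` further along the last segment.
    (x0, y0), (x1, y1) = coords[-2], coords[-1]
    seg_len = math.hypot(x1 - x0, y1 - y0)
    ux, uy = (x1 - x0) / seg_len, (y1 - y0) / seg_len  # unit direction
    return coords[:-1] + [(x1 + ux * distance, y1 + uy * distance)]

# The last segment points along (0.6, 0.8); extending by 5 units
# moves the endpoint from (3, 4) to (6, 8).
print(extend_last_segment([(0.0, 0.0), (3.0, 4.0)], 5.0))
```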
# Let's now break our ideal street network to illustrate the issue.
# + tags=["hide_input"]
import shapely
geom = streets.iloc[33].geometry
splitted = shapely.ops.split(geom, shapely.geometry.Point(geom.coords[5]))
streets.loc[33, 'geometry'] = splitted[1]
geom = streets.iloc[19].geometry
splitted = shapely.ops.split(geom, geom.representative_point())
streets.loc[19, 'geometry'] = splitted[0]
f, ax = plt.subplots(figsize=(10, 10))
tessellation.plot(ax=ax, edgecolor='white', linewidth=0.2)
buildings.plot(ax=ax, color='white', alpha=.5)
streets.plot(ax=ax, color='black')
ax.set_axis_off()
plt.show()
# -
# In this example, one of the lines on the right side is not snapped onto the network, and another on the top left leaves a gap between itself and the edge of the tessellation. Let's see what the resulting blocks would look like without any adaptation of this network.
# + tags=["remove_cell"]
buildings.drop(['bID'], axis=1, inplace=True)
tessellation.drop(['bID'], axis=1, inplace=True)
# + tags=["hide_output"]
blocks = momepy.Blocks(tessellation, streets, buildings,
id_name='bID', unique_id='uID').blocks
# + tags=["hide_input"]
f, ax = plt.subplots(figsize=(10, 10))
blocks.plot(ax=ax, edgecolor='white', linewidth=0.5)
buildings.plot(ax=ax, color='white', alpha=.5)
ax.set_axis_off()
plt.show()
# -
# We can see that some blocks got merged. To avoid this effect, we will first use `momepy.extend_lines` and then generate blocks.
# + tags=["hide_output"]
snapped = momepy.extend_lines(
streets, tolerance=40, target=tessellation, barrier=buildings
)
# + tags=["hide_input"]
f, ax = plt.subplots(figsize=(10, 10))
tessellation.plot(ax=ax, edgecolor='white', linewidth=0.2)
buildings.plot(ax=ax, color='white', alpha=.5)
snapped.plot(ax=ax, color='black')
ax.set_axis_off()
plt.show()
# -
# What should be snapped is now snapped. You might have noticed that we passed buildings to `momepy.extend_lines`. That is to avoid extending segments through buildings.
#
# With a fixed network, we can then generate correct blocks.
# + tags=["hide_output"]
blocks = momepy.Blocks(
tessellation, snapped, buildings, id_name='bID', unique_id='uID').blocks
# + tags=["hide_input"]
f, ax = plt.subplots(figsize=(10, 10))
blocks.plot(ax=ax, edgecolor='white', linewidth=0.5)
buildings.plot(ax=ax, color='white', alpha=.5)
ax.set_axis_off()
plt.show()
| docs/user_guide/elements/blocks.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="ZzPU36WEDsqa"
# very loosely based on https://keras.io/examples/vision/mnist_convnet/
# + id="3iB5wvjpWexF"
outdim=7
# + id="HRhV-0piDPyl"
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras import backend as K
# + colab={"base_uri": "https://localhost:8080/"} id="OpkCj5gZDZxP" outputId="cd9f0967-5408-4179-f4b2-62fc9c0dc391"
# Model / data parameters
num_classes = 10
input_shape = (28, 28, 1)
# the data, split between train and test sets
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
# Scale images to the [0, 1] range
x_train = x_train.astype("float32") / 255
x_test = x_test.astype("float32") / 255
# Make sure images have shape (28, 28, 1)
bx_train = np.expand_dims(x_train, -1)
bx_test = np.expand_dims(x_test, -1)
print("x_train shape:", bx_train.shape)
print(bx_train.shape[0], "train samples")
print(bx_test.shape[0], "test samples")
# convert class vectors to binary class matrices
by_train = keras.utils.to_categorical(y_train, num_classes)
by_test = keras.utils.to_categorical(y_test, num_classes)
# + colab={"base_uri": "https://localhost:8080/"} id="Hjhm1Q8lIBs2" outputId="6c471656-43f2-4935-b572-b6cd3791ec00"
classes = [0, 1]  # digits treated as "normal"; all other digits are anomalies
x_train=np.array(
[xx for xx,yy in zip(bx_train,by_train) if
np.any([yy[cc] for cc in classes])
])
y_train=np.array([yy for yy in by_train if np.any([yy[cc] for cc in classes])])
print(x_train.shape)
print(y_train.shape)
xa_test=np.array(
[xx for xx,yy in zip(bx_test,by_test) if
not np.any([yy[cc] for cc in classes])
])
ya_test=np.array([yy for yy in by_test if not np.any([yy[cc] for cc in classes])])
print(xa_test.shape)
print(ya_test.shape)
xn_test=np.array(
[xx for xx,yy in zip(bx_test,by_test) if
np.any([yy[cc] for cc in classes])
])
yn_test=np.array([yy for yy in by_test if np.any([yy[cc] for cc in classes])])
print(xn_test.shape)
print(yn_test.shape)
# + id="jJz0agqiQuYp"
def loss(outdim):
    # Label-free loss: the first argument (true labels) is ignored and
    # only the network outputs b are constrained.
    if outdim == 1:
        def lss(a, b):
            q = b
            # Push the single output towards 1.
            return K.mean((q - 1) ** 2)
        return lss
    def lss(a, b):
        q = b
        # Move the last (feature) axis to the front so that q[i] selects
        # the i-th output dimension across the whole batch.
        pd = [i for i in range(len(q.shape))]
        pd.remove(pd[-1])
        pd.insert(0, len(pd))
        q = K.permute_dimensions(q, tuple(pd))
        # Sum the absolute cross-correlations of (q_i - 1) and (q_j - 1)
        # over all pairs of output dimensions, pushing them to decorrelate.
        adl = None
        for i in range(outdim):
            for j in range(i + 1, outdim):
                ac = K.abs(K.mean((q[i] - 1) * (q[j] - 1)))
                if adl is None:
                    adl = ac
                else:
                    adl += ac
        return adl
    return lss
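# For intuition, the multi-dimensional branch of `lss` sums, over all pairs of output dimensions, the absolute batch-mean of (q_i - 1)(q_j - 1). A plain-Python sketch of the same quantity on a hypothetical toy batch (the Keras version above does this on tensors):

```python
def pairwise_decorrelation(batch):
    # batch: a list of samples, each a list of `outdim` network outputs.
    outdim = len(batch[0])
    # q[i] collects output dimension i across the batch, shifted so the
    # target value 1 maps to 0.
    q = [[row[i] - 1.0 for row in batch] for i in range(outdim)]
    def mean(xs):
        return sum(xs) / len(xs)
    total = 0.0
    for i in range(outdim):
        for j in range(i + 1, outdim):
            total += abs(mean([a * b for a, b in zip(q[i], q[j])]))
    return total

toy = [[1.0, 2.0, 0.0],
       [1.0, 0.0, 2.0]]
# Dimension 0 is exactly 1 everywhere, so only the (1, 2) pair contributes.
print(pairwise_decorrelation(toy))  # → 1.0
```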
# + id="cguVrdVrEtLl" colab={"base_uri": "https://localhost:8080/"} outputId="0a2c578e-c0b4-4406-d1fc-3df0c36c6d41"
model = keras.Sequential(
[
keras.Input(shape=input_shape),
layers.Conv2D(32, kernel_size=(7, 7), activation="relu",use_bias=False),
layers.MaxPooling2D(pool_size=(2, 2)),
layers.Conv2D(64, kernel_size=(4, 4), activation="relu",use_bias=False),
layers.Conv2D(16, kernel_size=(4, 4), activation="relu",use_bias=False),
layers.Conv2D(4, kernel_size=(2, 2), activation="relu",use_bias=False),
layers.MaxPooling2D(pool_size=(2, 2)),
layers.Conv2D(4,kernel_size=(1,1),activation="linear",padding="same",use_bias=False),
layers.Flatten(),
layers.Dense(outdim,use_bias=False)
]
)
model.summary()
# + id="9qbpq3qAEfy9" colab={"base_uri": "https://localhost:8080/"} outputId="a463c1c7-f477-4db0-81ed-9f282843080e"
batch_size = 128
epochs = 5
model.compile(loss=loss(outdim), optimizer="adam", metrics=[])
model.fit(x_train,
np.ones_like(x_train),
batch_size=batch_size,
epochs=epochs,
validation_split=0.1)
# + id="OKQ91orUVHAz"
pa=model.predict(xa_test)
pn=model.predict(xn_test)
# + id="q8u_q-lpqWcL" colab={"base_uri": "https://localhost:8080/"} outputId="687954fa-55d4-4409-c17f-689a64328898"
print(pa.shape)
print(pn.shape)
# + id="54W8twtNQj3_" colab={"base_uri": "https://localhost:8080/", "height": 265} outputId="efbdafc6-bda0-44a8-e3c7-4ebc7d536ae0"
import matplotlib.pyplot as plt
plt.hist(pn,bins=25,alpha=0.5,label="normal",density=True)
plt.hist(pa,bins=25,alpha=0.5,label="abnorm",density=True)
plt.legend()
plt.show()
# + id="2TL9SwQIRThk" colab={"base_uri": "https://localhost:8080/"} outputId="283e0b40-3928-43c9-97ec-8107a844442b"
def dexbyloss(lss):
    # Index of the normal test sample whose prediction is closest to lss.
    return np.argmin(np.mean((pn - lss) ** 2, axis=1))
minl=np.min(pn,axis=0)
maxl=np.max(pn,axis=0)
print(minl,maxl)
ls = [np.arange(aminl, amaxl, (amaxl - aminl) / 9.0001) for aminl, amaxl in zip(minl, maxl)]  # 10 evenly spaced values per output dimension
print(minl,ls,maxl)
print(len(ls))
# + id="ezJUTwnCta3j"
def moduloop(q, modulo=1):
    # Cycle through q endlessly, advancing to the next element
    # only once every `modulo` calls.
    i, j = 0, 0
    while True:
        yield q[i]
        j += 1
        if not (j % modulo):
            i += 1
            i = i % len(q)
def allcomb(q):
    # Yield every combination of one element per sub-list of q
    # (a Cartesian product, with the first sub-list varying fastest).
    modulo = 1
    iterators = []
    for zw in q:
        iterators.append(moduloop(zw, modulo))
        modulo *= len(zw)
    for i in range(modulo):
        yield [next(zw) for zw in iterators]
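# `allcomb` enumerates the full Cartesian product of its sub-lists, with the *first* list varying fastest, which is the reverse of `itertools.product`'s ordering. A quick self-contained check of that equivalence (re-stating the two helpers so the snippet runs on its own):

```python
import itertools

def moduloop(q, modulo=1):
    # Cycle through q, advancing one element only every `modulo` calls.
    i, j = 0, 0
    while True:
        yield q[i]
        j += 1
        if not (j % modulo):
            i = (i + 1) % len(q)

def allcomb(q):
    # Cartesian product with the first sub-list varying fastest.
    modulo, iterators = 1, []
    for zw in q:
        iterators.append(moduloop(zw, modulo))
        modulo *= len(zw)
    for _ in range(modulo):
        yield [next(zw) for zw in iterators]

lists = [[0, 1], ['a', 'b', 'c']]
ours = [tuple(c) for c in allcomb(lists)]
# itertools.product varies the *last* iterable fastest, so reversing the
# inputs and each output tuple reproduces our ordering.
ref = [t[::-1] for t in itertools.product(*reversed(lists))]
print(ours == ref)  # → True
```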
# + id="M-6o99TjSJIs" colab={"base_uri": "https://localhost:8080/"} outputId="a5d8a6b2-393d-4303-ef9c-8c309278e012"
from tqdm import tqdm
bids=[[0,1,2,3,4,5,6,7,8,9] for zw in ls]
x,y=[],[]
for bid,lss in tqdm(zip(allcomb(bids),allcomb(ls)),total=10**outdim):
x.append(bid)
y.append(dexbyloss(lss))
x=np.array(x)
y=np.array(y)
cls=np.array([np.argmax(yn_test[yy]) for yy in tqdm(y,total=10**outdim)])
print(x.shape,y.shape,cls.shape)
# + id="G-6EJbPtSdd0" colab={"base_uri": "https://localhost:8080/", "height": 265} outputId="5615d800-5af5-4def-c5f1-17a3451bb135"
for dim in range(outdim):
mns,stds=[],[]
for val in range(10):
dex=np.where(x[:,dim]==val)
vals=cls[dex]
mn=np.mean(vals)
std=np.std(vals)/np.sqrt(len(vals))
mns.append(mn)
stds.append(std)
#print(f"x{dim}={val}:{mn}+-{std}")
plt.errorbar(range(10),mns,yerr=stds,label=dim,alpha=0.6)
plt.legend()
plt.show()
# + id="qC-5kEXZoz4m"
p=model.predict(x_train)
# + id="WeggvG1vaX_a" colab={"base_uri": "https://localhost:8080/"} outputId="5fe572f2-25d9-4b55-9da3-30a441b5f655"
mp = np.mean(p, axis=0)  # mean training prediction per output dimension
da = np.abs(pa - mp)     # deviation of abnormal test predictions from that mean
dn = np.abs(pn - mp)     # deviation of normal test predictions
dfa = np.sqrt(np.mean(da**2, axis=1))  # RMS deviation = anomaly score (abnormal)
dfn = np.sqrt(np.mean(dn**2, axis=1))  # RMS deviation = anomaly score (normal)
print(dfa.shape,dfn.shape)
# + id="74pcqtEuomht" colab={"base_uri": "https://localhost:8080/"} outputId="89451259-30c5-4f1b-e0e3-82b41b3344c9"
print("normal",np.mean(dfn),np.std(dfn))
print("abnorm",np.mean(dfa),np.std(dfa))
# + id="UsNvCJXYorqX" colab={"base_uri": "https://localhost:8080/", "height": 265} outputId="05710538-4216-4411-9e66-f27183980640"
plt.hist(dfn,bins=25,alpha=0.5,label="normal",density=True)
plt.hist(dfa,bins=25,alpha=0.5,label="abnorm",density=True)
plt.legend()
plt.show()
# + id="XNHbWu2hpEo-" colab={"base_uri": "https://localhost:8080/"} outputId="2b631064-593c-425c-e6b6-f453af14eb59"
y_score=np.concatenate((dfn,dfa),axis=0)
y_true=np.concatenate((np.zeros_like(dfn),np.ones_like(dfa)),axis=0)
print(y_true.shape,y_score.shape)
# + id="E7U2nXJdqXG6" colab={"base_uri": "https://localhost:8080/"} outputId="855b2ec8-c426-4547-baea-2de2aa85a646"
from sklearn.metrics import roc_auc_score as rauc
auc=rauc(y_true,y_score)
print(auc)
# + id="Ga6UNtw7qj-x"
| interpretability/colab/08_eval7d.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Week 9 - Visual Analytics in Python
# *© 2020 <NAME>*
#
# Welcome to Week 9 of INFO 6270! I trust that you are all well and enjoying the end of term from the comfort of your couches. The COVID-19 crisis has certainly developed into something major. As always, if there is anything I can do to support you (e.g. flexible deadlines) please do not hesitate to reach out.
#
# Last week we explored dataframes. This week we are going to build on those concepts by exploring dataframe visualization methods. Python comes with a few good visualization tools, though (as we will see) they are not a replacement for a good data visualization tool such as Tableau. In addition, we will explore one inferential statistics technique called Student's t-test, which is among the most influential and common data analysis techniques.
#
# **This week, we will achieve the following objectives:**
# - Visualize a dataframe in Python
# - Visualize a grouped dataframe
# - Change your plot styles
# - Conduct inferential analysis with a t-test
# # Case: Apple Appstore
# Pretty much everyone knows about the Apple iPhone; after all, this was *the* defining smartphone. However, the iPhone's reign as the big boss of smartphones may be coming to an end. According to Gartner, as of Q3 2019 [the iPhone has seen its year-over-year sales decline by 10\%](https://www.gartner.com/en/newsroom/press-releases/2019-11-26-gartner-says-global-smartphone-demand-was-weak-in-thi). This is due to increasing competition from some of its Android competitors.
#
# Part of Apple's key to success has been the iPhone App Store. Unlike its Android counterparts, all Apple apps are developed by licensed developers and carefully screened for malware. To analyze other factors in its success, we can observe data from the Appstore itself. Using this dataset provided by [Ramanathan on Kaggle](https://www.kaggle.com/ramamet4/app-store-apple-data-set-10k-apps) we can visualize features of the dataset which might have contributed to the success of the appstore.
# # Objective 1: Visualize a dataframe in Python
# The first thing we will do is create some basic visualizations. Pandas has some [great documentation on visualization](https://pandas.pydata.org/pandas-docs/stable/user_guide/visualization.html#basic-plotting-plot) which I strongly encourage you to read. This documentation provides more details than I can give in this exercise, though I will make an effort to highlight some key points.
#
# Let's start by importing the Pandas dataframe, as before. Rather than importing `numpy`, we will import a different library called `matplotlib`. This tool is a plotting library which is designed to integrate with pandas. As before, we will import a csv file, this time from Apple.
# +
import pandas as pd # import pandas
import matplotlib.pyplot as plt
apps = pd.read_csv('data/w9_apple.csv') # command pandas to import the data
# -
# ### Dataframe head (again!)
# I recommend always starting by understanding the data. In this case, we have a series of iPhone apps with a few interesting fields. Here are some details on ones which are potentially interesting and non-obvious to me:
# - **rating_count_tot**: Total number of ratings for all versions of the app
# - **rating_count_ver**: Number of ratings for this version of the app
# - **sup_devices.num**: Number of Apple devices that the app supports
# - **lang.num**: The number of (human) languages which the app supports
apps
# ### Visualizing with pandas.plot()
# Pandas really is designed for data scientists. In addition to the dataframe features which we observed last week, pandas also comes pre-built with plotting features. Pandas also provides some [excellent documentation on plotting](https://pandas.pydata.org/pandas-docs/stable/user_guide/visualization.html#basic-plotting-plot) which I encourage you to read.
#
# Let's start by trying to plot our dataframe. Surprisingly, pandas is smart enough to allow us to do this, though it is certainly not pretty! We will need to be a bit more specific about what we want to visualize before proceeding.
apps.plot() # plots the dataframe
# The graph above is meaningless. Let's try to focus on more specific elements of this dataframe.
# ### Plot a series
# The first way that we might make sense of the data is by visualizing series from the data. However, simply using `.plot()` with a series suffers from serious problems. Try executing the visualization below, which attempts to visualize the user ratings for the current version of each app. **This is not yet intelligible.**
apps['user_rating_ver'].plot() # visualize user ratings for the current version of the app
# Though pandas is smart, it is not **that** smart. When we naively try to visualize the version, we end up with unreadable garbage. Fortunately, we can alter our series to suit our needs. Consider visualizing the sorted user ratings. Note that we have to explicitly tell pandas to not order these by the original index values.
apps['user_rating_ver'].sort_values().plot(use_index=False) # sort the series and then plot it
# Much better!
# ### Specifying axes and figure size
# It's important to remember that when making plots we specify an axis. For instance, if we only wanted to show the number of ratings received for an app, we could specify the y axis as `rating_count_tot`. To make our graph more readable, we can also change the figure size by specifying `figsize=(12,6)` -- 12 inches by 6 inches.
apps.plot(y='rating_count_tot', figsize=(12,6))
# If we wished instead to visualize a subset of the data, say only highly rated apps, we could create a subset similarly to Lab 8. We can then use `plot()` to visualize the results. The result will still not be informative, but it is progress.
# +
highly_rated = apps[(apps['user_rating'] == 5) &
(apps['rating_count_tot'] > 100000)] # a subset of apps with ratings of 5 and at least 100 000 ratings
highly_rated.plot(y='rating_count_tot') # manually specify the y value, in this case rating count
# -
# ### Try a different plot
# Our visualization needs two things in order to be useful. The first is a visualization type suited to comparing the apps identified. The second is a decent x label. We can solve these problems by asking pandas to create a bar plot with `track_name` on the x axis. As you recall, track name is the name of the app.
#
# **Note:** you will probably get a warning about a missing glyph. This is because of encoding issues with some of the characters.
# +
sorted_rated = highly_rated.sort_values(by='rating_count_tot') # sort the data
sorted_rated.plot.bar(x='track_name', y='rating_count_tot', figsize=(12,6)) # specify bar plot with the x value of track name
# -
# This is a fine graph of the apps with an average rating of 5 and a large number of ratings! I am sure that many of you use these apps (I counted 3 which I use).
# ### Remove outliers
# Finally, there are other types of visualizations which could be useful. For instance, a scatter plot can be used to compare the variance between two variables. Let's plot the rating count and price to see whether there is a relationship.
# +
price_rating = apps[['price', 'rating_count_tot', 'rating_count_ver']] # we will include three series in this dataframe
price_rating.plot.scatter(x='price', y='rating_count_tot') # we choose to only visualize two of them
# -
# Right away, it is clear that there are some outliers with a small number of ratings and high price, as well as a high number of ratings and low price. We can remove the outliers to try and make sense of this graph. There are many ways to remove outliers, such as by using standard deviation. However, for the purposes of *Introduction to Data Science* it is sufficient to simply remove values that seem too extreme. The code below removes price values which are greater than 50 and rating counts which are greater than 100000.
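# For reference, the standard-deviation approach mentioned above can be sketched in pure Python: keep only the values within `k` sample standard deviations of the mean (the threshold `k=2` and the toy price list here are illustrative choices):

```python
from statistics import mean, stdev

def drop_outliers(values, k=2):
    # Keep values within k sample standard deviations of the mean.
    m, s = mean(values), stdev(values)
    return [v for v in values if abs(v - m) <= k * s]

prices = [0.99, 1.99, 2.99, 0.0, 4.99, 299.99]
# The 299.99 entry sits more than two standard deviations out and is dropped.
print(drop_outliers(prices))  # → [0.99, 1.99, 2.99, 0.0, 4.99]
```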
# +
apps_clean = apps[(apps['price'] < 50) &
(apps['rating_count_tot'] < 100000) &
(apps['rating_count_ver'] < 100000)]
apps_clean.plot.scatter(x='price', y='rating_count_tot')
# -
# ## Challenge Question 1 (2 points)
# Take a subset of the data where `prime_genre` is equal to `Games`. Generate a scatter plot with the number of languages on one axis and the price on the other. Do you think this tells us something about the relationship between these variables?
pri=apps[apps.prime_genre =='Games']
pri.plot.scatter(x='lang.num',y='price')
# # Objective 2: Visualize a grouped dataframe
# So far so good. What we have done so far works for continuous variables such as `price`, but not nominal variables such as `prime_genre`. For instance, if we simply visualize `prime_genre` on the x axis, we will get a nonsensical graph.
apps.plot(x='prime_genre', y='rating_count_ver')
# To effectively analyze discrete variables we need to use a `groupby` query. Pandas [also has great documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/groupby.html) on this concept so be sure to take a look. Much like with SQL, we can use `groupby` to specify sets of data which we wish to analyze.
#
# For instance, to analyze the median user rating for each genre, we could create a grouped dataframe by grouping by primary genre. We can then retrieve the median values of each genre easily.
# +
avg = apps.groupby('prime_genre') # group by primary genre
avg['user_rating'].median() # provide the median user rating for each
# -
# To visualize this on a graph, we could simply create a bar plot. This is a useful visualization for understanding the median user ratings for each genre. Some genres, such as Catalogs, fare poorly.
avg['user_rating'].median().plot.bar()
# ## Challenge Question 2 (2 points)
# Create a bar graph which visualizes the mean price for each `content_rating`. Is there a trend?
mean = apps.groupby('cont_rating') # group by content rating
mean['price'].mean().plot.bar()
# ## Challenge Question 3 (1 point)
# Visualize the sum of the total rating count for each genre. Instead of a bar graph, use a pie chart. If you get stuck, consider [reading the docs](https://pandas.pydata.org/pandas-docs/stable/user_guide/visualization.html#pie-plot) on `plot.pie()`.
genre_sums = apps.groupby('prime_genre')  # avoid shadowing the built-in sum()
genre_sums['rating_count_tot'].sum().plot.pie()
# # Objective 3: Change your plot styles
# So far, we have focused on creating various graphs with the data. In addition to creating graphs, we can also change features of these graphs such as their colors, shapes, textures or legend. For instance, if we revisit our scatter plot, we can change the color by specifying `color='DarkGreen'` as an input to the method.
apps_clean.plot.scatter(x='price', y='rating_count_tot', color='DarkGreen')
# You can also specify colors which change with the data variables. The visualization below changes color based on the `user_rating`.
apps_clean.plot.scatter(x='price', y='rating_count_tot', c=apps_clean['user_rating'])  # c= maps colors to values
# Finally, you can also change other variables, though these depend on the graph in question. Scatterplots allow you to change based on size, though other graphs can be altered based on variables such as the presence of legends or textures. The graph below changes size depending on the `lang.num`, though this is not terribly informative.
apps_clean.plot.scatter(x='price', y='rating_count_tot', c=apps_clean['user_rating'], s=apps_clean['lang.num'] * 10)
# ## Challenge Question 4 (3 points)
# Based on what you just learned, create a bar plot which achieves the following:
# - Visualizes the 5 most expensive education apps
# - Price should be provided on the Y axis
# - The application names should be provided on the X axis
# - The graph should have the color `DarkOrange`
# - The legend should be removed from the graph
exp=apps[apps.prime_genre =='Education']
sort_vals = exp.nlargest(5,'price')
sort_vals.plot.bar(x='track_name',y='price',color='DarkOrange',legend=None)
# # Objective 4: Conduct inferential analysis with a t-test
# In this final objective we will switch gears briefly to inferential statistics. So far, we have explored features of the data which can be used to learn something new about the world. In addition to description however, we can also use data to *infer* something.
#
# You have probably heard this type of statistics in action before. For example:
#
# > "The mean incubation period was 5.2 days (95% confidence interval [CI], 4.1 to 7.0), with the 95th percentile of the distribution at 12.5 days" (Li, Q. et al., 2020).
#
# From the statement above, scientists are able to conclude that a 14-day period of self-isolation would be sufficient to take action against COVID-19. They *inferred* this from early data in China. The data observed a likely true incubation period between 4.1 and 7 days, and 95\% of the cases seeing onset of symptoms by 12.5 days.
#
# We will **not** solve COVID-19 in this class. However, we can learn something about inferential statistics before concluding the semester. We will learn about the most common technique: [Student's t-test](https://en.wikipedia.org/wiki/Student%27s_t-test).
#
# The t-test is fundamentally a test to compare two series of data and determine the probability that they are the same. Without going into the math, the logic of the test is simple: if two datasets have a low probability of representing the same phenomenon, we can conclude that they are different. In social science, we normally consider data to come from different sources when they have a probability of **less than 5\%**.
#
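# Without going into the full math, the (equal-variance) t statistic is just the difference between the two sample means scaled by a pooled standard error. A minimal pure-Python sketch on made-up numbers (illustrative only; we will use scipy's implementation below):

```python
import math
from statistics import mean, variance  # variance() is the sample variance

def t_statistic(a, b):
    # Pooled (equal-variance) two-sample t statistic.
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(pooled_var * (1 / na + 1 / nb))

print(round(t_statistic([1, 2, 3, 4, 5], [2, 4, 6, 8, 10]), 4))  # → -1.8974
```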
# Let's start by importing a statistics library from scipy, one of Python's science libraries. From this we will import the independent ttest.
from scipy.stats import ttest_ind
# This test can be used to compare whether two sets of ratings are actually different. Let's start by comparing ratings from `Finance` apps and `Games` apps. We can start by gathering two subsets.
# +
finance = apps[apps['prime_genre'] == 'Finance']
games = apps[apps['prime_genre'] == 'Games']
# -
# Once we have our two subsets, we can run the test. This will return two values. The first is the test statistic, which is what the model uses to conclude probability. The second is the `pvalue`, which is the probability that the two phenomena are the same. In this case, `p < 0.0001`, which means there is less than a 0.01\% chance that these came from the same source. **We can conclude there is a significant difference between the ratings of Finance and Games apps**.
#
# **Note**: `e-17` means "multiplied by 10 to the power of -17", which is something like 0.00000000000000001.
ttest_ind(finance['user_rating'], games['user_rating'])
# ## Understanding why this works
# Again, without going into the math, the main reason why the t-test works is because the mean and distribution of the values are very different. Let's look at the values for Finance by observing the mean, standard deviation and histogram. There are many apps which have low ratings with a mean of just under 2.5.
print("Finance mean: " + str(finance['user_rating'].mean()) + " | Finance stv:" + str(finance['user_rating'].std()))
finance['user_rating'].plot.hist(title='Finance')
# With Games there is a very different distribution. The mean is considerably higher and there are comparatively fewer low-rated apps. This is sort of what the t-test measures under the hood.
print("Games mean: " + str(games['user_rating'].mean()) + " | Games stv:" + str(games['user_rating'].std()))
games['user_rating'].plot.hist(title='Games')
# ### When t-tests fail
# Alternatively, when you run a t-test you may find that you cannot tell whether two phenomena are different from data alone. For example, while Finance may be significantly different from Games, Weather is not. When we run the t-test, we observe a p-value of 61\%, which means that there is a 61\% chance that the data comes from the same source. As such, we say that **the ratings of Weather and Games are not significantly different**. You can also see this in the histogram: though there are some differences, it is very similar to the one generated from Games.
# +
weather = apps[apps['prime_genre'] == 'Weather']
ttest_ind(weather['user_rating'], games['user_rating'])
# -
print("Weather mean: " + str(weather['user_rating'].mean()) + " | Weather stv:" + str(weather['user_rating'].std()))
weather['user_rating'].plot.hist(title='Weather')
# ## Challenge Question 5 (2 points)
# Write code which generates the t-statistic and p-value of a comparison of `price` from Lifestyle and Finance apps. Are they significantly different?
lifestyle = apps[apps['prime_genre'] == 'Lifestyle']
finance = apps[apps['prime_genre'] == 'Finance']
ttest_ind(lifestyle['price'], finance['price'])
# Since the p-value is less than 0.05, the prices are significantly different
# ## References
#
# <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., ... & <NAME>. (2020). Early transmission dynamics in Wuhan, China, of novel coronavirus–infected pneumonia. New England Journal of Medicine.
#
# The Pandas Development Team (2020). Visualization. Retrieved from: https://pandas.pydata.org/pandas-docs/stable/user_guide/visualization.html
| Lab 9 - Analyze iPhone app downloads/Lab 9 - Analyze iPhone app downloads.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.7.5 64-bit ('.venv')
# metadata:
# interpreter:
# hash: 31dc1f372cfac356882bb195821b283a54c175adc4b811c5377b62bd4d238d1c
# name: python3
# ---
# +
import os
print(os.environ['PBI_SERVER'])
print(os.environ['PBI_DB'])
# +
import ssas_api as powerbi
powerbi.load_libraries()
# +
powerbi_server = os.environ['PBI_SERVER']
powerbi_db_name = os.environ['PBI_DB']
conn = "Provider=MSOLAP;Data Source=" + powerbi_server + ";Initial Catalog='';"
print(conn)
# +
import System
import Microsoft.AnalysisServices.Tabular as tom
tom_server = tom.Server()
tom_server.Connect(conn)
for item in tom_server.Databases:
print("Database: ", item.Name)
print("Compatibility Level: ", item.CompatibilityLevel)
print("Created: ", item.CreatedTimestamp)
print()
tom_database = tom_server.Databases[powerbi_db_name]
# +
import pandas as pd
def map_column(column):
return {
'Tabla': column.Table.Name,
'Nombre': column.Name,
'Llave': column.IsKey,
'Oculta': column.IsHidden
}
columns = [map_column(column) for table in tom_database.Model.Tables for column in table.Columns]
df_columns = pd.DataFrame(columns)
display(df_columns)
# +
import pandas as pd
from System import Enum
def map_measure(measure):
return {
'Tabla': measure.Table.Name,
'Carpeta': measure.DisplayFolder,
'Nombre': measure.Name,
'Oculta': measure.IsHidden,
'Implicita':measure.IsSimpleMeasure,
'FechaModificacion':measure.ModifiedTime,
'TipoDato': Enum.GetName(tom.DataType,measure.DataType),
'Formato': measure.FormatString,
'Expresion': measure.Expression
}
measures = [map_measure(measure) for table in tom_database.Model.Tables for measure in table.Measures]
df_measures = pd.DataFrame(measures)
display(df_measures)
# -
display(df_measures[df_measures['Nombre'].str.startswith('Net')])
display(df_measures[df_measures['Nombre'].str.contains('Sales')])
display(df_measures[df_measures['Oculta']])
with pd.ExcelWriter('./powerbi_tom.xlsx') as writer:
df_columns.to_excel(writer, sheet_name='Columnas', index=False)
df_measures.to_excel(writer, sheet_name='Medidas', index=False)
tom_measures_net = [measure for table in tom_database.Model.Tables for measure in table.Measures if measure.Name.startswith('Net')]
display(tom_measures_net)
display([(measure.Name,measure.DisplayFolder) for measure in tom_measures_net])
tom_measures = [measure for table in tom_database.Model.Tables for measure in table.Measures]
measure_tabular_editor = [(measure.Name, annotation.Name, annotation.Value) for measure in tom_measures for annotation in measure.Annotations if annotation.Name == 'Creada con' and annotation.Value == 'Tabular Editor']
display(measure_tabular_editor)
# +
for measure in tom_measures_net:
measure.DisplayFolder = 'Net'
display([(measure.Name,measure.DisplayFolder) for measure in tom_measures_net])
# -
tom_database.Model.SaveChanges()
# +
tom_server.Refresh(True)
tom_measures = [measure for table in tom_database.Model.Tables for measure in table.Measures if measure.Name in ['Returns']]
display(len(tom_measures))
tom_table = tom_database.Model.Tables.Find('Analysis DAX')
display(tom_table)
for measure in tom_measures:
new_measure = tom.Measure()
new_measure.Name = f'{measure.Name} YTD'
new_measure.Expression = f"TOTALYTD ( [{measure.Name}], 'Calendar'[Date] )"
new_measure.DisplayFolder = measure.DisplayFolder
new_measure.Description = f'Calcula {measure.Name} desde el inicio del año'
measure.Table.Measures.Add(new_measure)
display(f'La medida "{new_measure.Name}" se añadió a la tabla "{new_measure.Table.Name}"" y a la carpeta "{new_measure.DisplayFolder}""')
tom_database.Model.SaveChanges()
# -
display(df_measures)
# +
df_measures['ref'] = df_measures['Nombre'].apply(lambda x: list(df_measures[df_measures['Expresion'].str.contains('[' + x + ']', regex=False, na=False)]['Nombre']))  # na=False guards against measures with no expression
display(df_measures)
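# The reference column above comes from a plain substring match on `[MeasureName]` inside each DAX expression. The idea in isolation, using hypothetical measure names and expressions (the real ones come from the tabular model):

```python
measures = {
    'Net Sales': "SUM ( Sales[Amount] ) - [Returns]",
    'Returns': "SUM ( Returns[Amount] )",
    'Net Sales YTD': "TOTALYTD ( [Net Sales], 'Calendar'[Date] )",
}

def referenced_by(name, exprs):
    # Measures whose DAX expression contains [name] as a literal substring.
    # Note Returns[Amount] does NOT match [Returns]: column references put
    # the table name before the bracket, so plain measure refs stand out.
    return [m for m, e in exprs.items() if f'[{name}]' in e]

print(referenced_by('Net Sales', measures))  # → ['Net Sales YTD']
print(referenced_by('Returns', measures))    # → ['Net Sales']
```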
# +
import networkx as nx
G = nx.DiGraph()
G.add_nodes_from(list(df_measures['Nombre']))
for _,measure in df_measures.iterrows():
edges = [(measure_ref,measure['Nombre']) for measure_ref in measure['ref']]
if len(edges) > 0:
G.add_edges_from(edges)
nx.draw(G, with_labels=True)
# +
measure_to_search = 'Returns Variance %'
measure_to_search = 'Net Sales PM'
# measure_to_search = 'Net Sales'
MG = nx.DiGraph()
MG.add_edges_from(G.in_edges(measure_to_search))
MG.add_edges_from(G.out_edges(measure_to_search))
nx.draw(MG, with_labels=True)
# +
graph = []
for _,measure in df_measures.iterrows():
edges = [(measure_ref, measure['Nombre']) for measure_ref in measure['ref']]
graph.extend(edges)
df_graph = pd.DataFrame(graph,columns=['Nodo Fuente','Nodo Destino'])
df_graph.to_csv('./Grafo de Medidas.csv', index=False)
| data_analytics_day_iberoamerica.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + active=""
# [[ch08]]
# -
# # BERTology: Putting it all Together
# ## ImageNet
# ### The Power of Pretrained Models
# ## The Path to NLP’s ImageNet Moment
# ## Pretrained Word Embeddings
# ### The Limitations of One-Hot Encoding
# ### word2vec
# ### GloVe
# ### fastText
# ### Context-aware Pretrained Word Embeddings
# ## Sequential Models
# ### Sequential Data and the Importance of Sequential Models
# ## RNNs
# ### Vanilla RNNs
# ### LSTMs
# ### GRUs
# ## Attention Mechanisms
# ## Transformers
# ### Transformer-XL
# ## NLP’s ImageNet Moment
# ### ULMFiT
# ### ELMo
# ### BERT
# ### BERTology
# ### GPT-1, GPT-2, GPT-3
# ## Conclusion
| ch08.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] cell_id="00001-f9f3e947-1769-4924-9976-b4bc363ba9f7" deepnote_cell_type="markdown" tags=[]
# # 1. Introduction
# + [markdown] cell_id="00000-8dbc70a0-8928-4fa6-bd60-d2fb67cb7797" deepnote_cell_type="markdown" id="8FWeIGkD44FJ"
# We will be building a library that performs Automatic Differentiation (AD). Any client can install and use the library for personal or professional use and obtain the value of the derivative (or gradient / Jacobian in higher dimensions) of the function provided, at the data point given as an argument.
#
# Performing fast differentiation is necessary for many real-life applications. Indeed, most system models use differential equations to describe a behaviour, and these equations require taking derivatives of sometimes complex functions. Also, taking the gradient of a function at a given point and setting it to zero is the most effective way (at least analytically) to find the extrema of a function. Computing the values of extrema is a key step in optimizing a function and the processes it represents.
# + [markdown] cell_id="00001-80c97f43-43cf-403e-b8e5-1254b20071ec" deepnote_cell_type="markdown" tags=[]
# ### Feedback
# + [markdown] cell_id="00002-1cac0a55-30f6-454c-9c79-0966bdebb9b2" deepnote_cell_type="markdown" tags=[]
# *Would have been nice to see more about why do we care about derivatives anyways and why is GrADim a solution compared to other approaches?*
# + [markdown] cell_id="00002-76145f41-cbbe-4d56-a483-2b08f7cfcd2f" deepnote_cell_type="markdown" tags=[]
# Obtaining the derivative of a function is a skill needed for a lot of real-life applications. Indeed, many models used in different parts of science use differential equations to describe a behavior. As these equations contain explicitly written derivatives, it is important to know how to take them to solve the equations and obtain a good description of our system.
#
# Even for problems where no derivative is explicitly written it is useful to know how to use them. For convex optimization problems, the global optimum we are looking for is located at the only point where the gradient of the function is null. For more complex cases, taking the derivative of a quantity is an important step of algorithms like Black Box Variational Inference for Bayesian neural networks or Hamiltonian Monte Carlo to obtain samples from any probability distribution.
#
# With GrADim, we offer a way to compute the derivative of a function efficiently using forward and reverse mode (see more details below). Compared to naive methods that could be used to compute a derivative, GrADim will be more precise as it computes the exact numeric derivatives rather than estimations. Also, it will allow the user to access the computational graph of the function and to see the derivation process step by step. In that way, they will be able to use a tool which is not a black box and which they can easily understand.
# + [markdown] cell_id="00002-9ac65347-4b83-4096-a3d9-8a4fa9cb428e" deepnote_cell_type="markdown" id="VWzl082g44Fu"
# # 2. Background
#
# We will provide a brief background to motivate our implementation of Automatic Differentiation.
#
# ### 1. Intro to AD
#
# AD is a way to obtain the value of the derivative of a function $f$ at a point $X$. The objective is a method providing more precise values than the naive finite-difference estimators based on Taylor expansion. Such estimators require fine tuning of the step size in order to give an approximation which is close enough to the true value but which does not fail because of floating-point error.
#
# ### 2. Chain Rule
#
# The Chain Rule is the key element of AD. Indeed we can decompose recursively a function $f$ into elementary components. For example, if we consider the function $f(x, y) = cos(x+y) \times sin(x-y)$, we can write it $f(x,y) = prod(cos(sum(x,y)), sin(difference(x, y)))$. Although unclear to a human eye, such a function is easier for a machine to differentiate using the chain rule:
#
# $\frac{\partial f}{\partial x} = \frac{\partial f}{\partial u}\frac{\partial u}{\partial x} + \frac{\partial f}{\partial v}\frac{\partial v}{\partial x}$
#
# In other words, you can compute the derivative of a function with respect to a variable by computing recursively the derivatives of each of the components and the derivative of the main function with respect to its components.
#
# ### Evaluation graph
#
# When we can write a function as a series of simple components, we can obtain its evaluation graph. Here would be the evaluation graph for the example function provided above.
#
#
# + [markdown] cell_id="00003-4d8a61dd-19db-41a3-9fac-a4cf46d28bea" deepnote_cell_type="markdown" id="YwRrrcf9getP"
# 
# + [markdown] cell_id="00004-1319bd86-f994-4cc6-ae4a-89d235b6f2f2" deepnote_cell_type="markdown" id="-5T3Iv-ggnaP"
# We also have the following evaluation table
#
# |trace|elem operation|value of the function (as a function of $(x,y)$)|elem derivative|$\nabla_x$|$\nabla_y$|
# |--|--|--|--|--|--|
# |$u_{-1}$|$u_{-1}$|$x$|$\dot{u}_{-1}$|$1$|$0$|
# |$u_0$|$u_0$|$y$|$\dot{u}_0$|$0$|$1$|
# |$u_1$|$u_{-1} + u_0$|$x+y$|$\dot{u}_{-1} + \dot{u}_0$|$1$|$1$|
# |$u_2$|$u_{-1} - u_0$|$x-y$|$\dot{u}_{-1} - \dot{u}_0$|$1$|$-1$|
# |$u_3$|$cos(u_1)$|$cos(x+y)$|$-\dot{u}_1sin(u_1)$|$-sin(x+y)$|$-sin(x+y)$|
# |$u_4$|$sin(u_2)$|$sin(x-y)$|$\dot{u}_2cos(u_2)$|$cos(x-y)$|$-cos(x-y)$|
# |$u_5$|$u_3u_4$|$cos(x+y)sin(x-y)$|$\dot{u}_3u_4+u_3\dot{u}_4$|$-sin(x+y)sin(x-y) + cos(x+y)cos(x-y)$|$-sin(x+y)sin(x-y)-cos(x+y)cos(x-y)$|
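# The forward-mode trace in the table above can be checked numerically with a few lines of plain Python (a hand-unrolled sketch, not library code):

```python
import math

# Forward-mode trace for f(x, y) = cos(x+y) * sin(x-y), seeded with
# respect to x (i.e. x_dot = 1, y_dot = 0), at (x, y) = (0.5, 0.25).
x, y = 0.5, 0.25
x_dot, y_dot = 1.0, 0.0

u1, u1_dot = x + y, x_dot + y_dot                      # sum node
u2, u2_dot = x - y, x_dot - y_dot                      # difference node
u3, u3_dot = math.cos(u1), -u1_dot * math.sin(u1)      # cos node
u4, u4_dot = math.sin(u2),  u2_dot * math.cos(u2)      # sin node
u5, u5_dot = u3 * u4, u3_dot * u4 + u3 * u4_dot        # product node

# u5_dot should match the analytic partial from the last row of the table
expected = -math.sin(x + y) * math.sin(x - y) + math.cos(x + y) * math.cos(x - y)
print(u5_dot, expected)
```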
# + [markdown] cell_id="00009-8920e047-b06f-434f-a886-9e950c7e6c11" deepnote_cell_type="markdown" tags=[]
# ### Feedback
# + [markdown] cell_id="00010-a73ae2e0-cc0d-4316-ba7f-5766d4aaf6c2" deepnote_cell_type="markdown" tags=[]
# *Good start to the background. Going forward, we would like to see more discussion on automatic differentiation. How do forward mode and reverse mode work? We would also like to see more discussion on what forward mode actually computes (Jacobian-vector product), the "seed" vector, and the efficiency of forward mode.*
# + [markdown] cell_id="00011-e18794e2-f496-4128-b647-3898fc4f3ce0" deepnote_cell_type="markdown" tags=[]
# If $f$ is a function with $m$ inputs and $n$ outputs, forward mode is a way to compute the partial derivatives of $f$ with respect to one of its inputs. To do that, we start from the roots of the evaluation tree (i.e. the inputs), compute the partial derivatives of all the inputs with respect to the selected one, and go down the tree layer by layer, computing the partial derivative of each node with respect to its parents. For example, if $g$ is an elementary function and in the evaluation graph $v_i = g(v_j)+v_k$, the partial derivative of $v_i$ will be $\dot{v}_i = \dot{v}_jg'(v_j)+\dot{v}_k$. If we plug in the values of $\dot{v}_j$ and $\dot{v}_k$ computed before, we find the value of $\dot{v}_i$.
#
# The direction (i.e. the vector) with respect to which the derivative is computed in the forward mode is called the seed vector. Forward-mode AD computes the product between the Jacobian of the function and this seed vector. This computation has complexity linear in the number of nodes in the computational graph.
#
#
# The reverse mode is a way to compute the partial derivatives of an output of $f$ with respect to all the inputs. To do that we start from the leaves of the graph (i.e. the outputs), compute the partial derivatives with respect to their parents, and continue in the same way going up the evaluation graph. For example, if we already know the value of the partial derivative $\frac{\partial f_i}{\partial v_j}$ and we know that $v_j = g(v_k)$ where $g$ is an elementary function, we can use the chain rule to write $\frac{\partial f_i}{\partial v_k} = \frac{\partial f_i}{\partial v_j}\times \frac{\partial v_j}{\partial v_k} = \frac{\partial f_i}{\partial v_j}\times g'(v_k)$.
#
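# The reverse-mode sweep described above can likewise be hand-unrolled for the example function $f(x, y) = cos(x+y) \times sin(x-y)$ (an illustrative sketch, not library code):

```python
import math

# Reverse-mode sketch for f(x, y) = cos(x+y) * sin(x-y) at (x, y) = (1.0, 2.0).
x, y = 1.0, 2.0

# Forward sweep: evaluate and store the intermediate values.
u1 = x + y
u2 = x - y
u3 = math.cos(u1)
u4 = math.sin(u2)
f = u3 * u4

# Reverse sweep: propagate adjoints from the output back to the inputs.
f_bar = 1.0
u3_bar = f_bar * u4              # df/du3 (product rule)
u4_bar = f_bar * u3              # df/du4
u1_bar = u3_bar * -math.sin(u1)  # through u3 = cos(u1)
u2_bar = u4_bar * math.cos(u2)   # through u4 = sin(u2)
x_bar = u1_bar + u2_bar          # x feeds both u1 and u2
y_bar = u1_bar - u2_bar          # y enters u2 with a minus sign

print(x_bar, y_bar)  # the full gradient from a single reverse pass
```

# Note that one reverse pass yields both partial derivatives at once, which is why reverse mode is preferred when a function has many inputs and few outputs.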
# + [markdown] cell_id="00006-2e7a0168-6173-4f26-9019-b782fcf1923e" deepnote_cell_type="markdown" id="DyYntbxO44F6"
# # 3. How to Use GrADim
#
# We will briefly demonstrate how to use our package
#
# ### Installing and importing the package
#
# A user can install the package using:
# + cell_id="00007-acf3e8bb-92f1-454c-9941-55116d216ce7" deepnote_cell_type="code" id="G0y0QOqb44F_"
$ pip install GrADim
# + [markdown] cell_id="00008-94e509c4-fa89-42ae-b006-e62642b93d44" deepnote_cell_type="markdown" tags=[]
# The package can be imported using the following command:
# + cell_id="00008-09073b80-e39b-4017-af62-5c3a21567149" deepnote_cell_type="code" id="ovswxvHx44GD"
>>> import GrADim as ad
# + [markdown] cell_id="00010-eb4a64a1-24ce-49f9-813b-17cfc3fff92a" deepnote_cell_type="markdown" tags=[]
# ### Using the package
#
# An instance of AD object can be created and used to find the derivative for the required point:
# + cell_id="00011-1bec7ac0-d9e4-4da1-a6af-276ecfe01a43" deepnote_cell_type="code" tags=[]
>>> import GrADim as ad
>>> import numpy as np
>>> def fun(x,y):
...     return np.cos(x+y)*np.cos(x-y)
>>> autodiff = ad.derivative(fun)
>>> autodiff.fwd_pass((1,1))
>>> autodiff.rev_pass()
# + [markdown] cell_id="00009-3e330790-7e3e-4a68-b3b1-3071d62957f4" deepnote_cell_type="markdown" id="C89QEv6-44GN"
# # 4. Software Organization
#
# 1. Directory Structure
# The package directory would look like the following tree.
#
# The file README.md will contain instructions and examples for using the package. The LICENSE file will contain the relevant licensing information. The file requirements.txt will list the dependencies needed to distribute the package.
#
#
#
# + cell_id="00013-68e78a93-9c9f-4c96-a1f8-9f9be35153f2" deepnote_cell_type="code" tags=[]
master
├── LICENSE
├── README.md
├── docs
│ ├── ...
│ └── ...
├── requirements.txt
├── travis.yml
├── GrADim
│ ├── ...
│ └── ...
├── Tests
├── ...
└── ...
# + [markdown] cell_id="00014-c1baf8f1-45f7-4762-87dd-bae81ee2474f" deepnote_cell_type="markdown" tags=[]
# 2. The basic modules
# GrADim is the main module for the library, where the callable submodules used for automatic differentiation will be stored. Tests will contain the testing submodules for GrADim.
#
# 3. Test Suite
# We will use both Travis CI and Codecov. Travis CI will detect when a commit has been pushed to GitHub, then build the project and run the tests. Codecov will provide an assessment of the code coverage.
#
# 4. Package distribution
# PyPi will be used to distribute the package based on the format in the directory tree.
#
# 5. Framework considerations
# No framework is currently being considered, as the implementation is not complicated enough to warrant one. Should the complexity grow, we will revisit this decision.
# + [markdown] cell_id="00011-08d9553d-0b69-4a1e-882c-1ad19869a9fc" deepnote_cell_type="markdown" id="NoBDMeMt44GW"
# # 5. Implementation
#
# We will now go into some implementation considerations
# + [markdown] cell_id="00012-c3427a3a-9ca0-4550-a89a-38dc1945cf4d" deepnote_cell_type="markdown" id="HiJJW5Y-44Ga"
# ### 1. Data Structures
# We will use floats if the user asks for a single output. If the user asks for a number of outputs for several inputs, we will use arrays. Different classes are defined for different input options as explained below.
#
# ### 2. Classes
#
# We will have one class for the generic type of the automatic differentiation object.
#
# Inside this generic class, we have one method for calculating derivative values, including a number of if-else statements:
# - For the function input, we have one if-else block to check whether it contains matrices. If yes, the program will call the method of matrix operations for differentiation. Otherwise, call the usual automatic differentiation method.
# - For the number of input variables, we have one if-else block to check if it is larger than 1. If univariate, the program will implement the differentiation function with only one variable. If multivariate, the program calls the multivariate differentiation method.
# - For univariate differentiation, we have one nested if-else block to check whether the input variable is a single value or an array of values. If input is a single number, the program will implement simple differentiation. Otherwise, the input is an array, then the program will iterate through the array of values for simple differentiation.
# - For multivariate differentiation, we have a nested if-else block to check whether the input variable is a matrix of values. If it is a vector, the program will implement multivariate automatic differentiation. Otherwise, the input values are in matrix form, then the program will iterate through each vector and call the multivariate automatic differentiation.
# - For the function implemented, we have one if-else block to check if the function contains matrices. If it contains matrices, the program will implement the matrix version of differentiation, in univariate or multivariate form, depending on the number of input variables. Otherwise, the program will implement the usual form of differentiation that does not involve matrix multiplication.
#
# For automatic differentiation, an elementary operation would be something like:
# + cell_id="00017-d965680e-30c5-432a-8d9b-ea6274b8d251" deepnote_cell_type="code" tags=[]
import numpy as np

def sin(self, x):
    if isinstance(x, np.ndarray):          # vector of input values
        self.partial_variable = np.cos(x)  # elementwise derivative
        self.val = np.sin(x)
    else:                                  # single input value
        self.der = np.cos(x)               # d/dx sin(x) = cos(x)
        self.val = np.sin(x)
# + [markdown] cell_id="00014-b2534af7-3d34-4f57-9653-ad7791d219ed" deepnote_cell_type="markdown" id="Og8Q-q7s44Gd"
# We have one subclass of differentiation that implements the most basic form of differentiation: input has one variable and one function, and the output is one value representing the function derivative.
# We have one subclass of differentiation, partial differentiation, that handles input functions with more than one variable.
# We have one subclass of differentiation for automatic differentiation that covers the case where the input function is in matrix form.
#
# Methods that all classes share include hard coding the basic differentiations, (e.g., $\frac{dc}{dx}$ = 0 where c is a constant, $\frac{d sin(x)}{dx}$ = cos(x), $\frac{dx^a}{dx}$ = ax<sup>a-1</sup>etc.) and chain rule. For multivariate differentiation, methods are defined to calculate partial derivatives ($\frac{\partial (xy)}{\partial x} = y, \frac{\partial (xy)}{\partial y} = x$). When necessary, we will overwrite the pre-defined methods so that the program would do the differentiation.
#
# Name attributes contain function values and derivatives.
#
# ### 3. External dependencies
# We will use numpy elementary operations (e.g., sin, cos, log, exp, tan, sinh, cosh, tanh, arcsin, arctan, etc.). We will use scipy mainly for more complicated matrix algebra. If the input function is passed in via scipy, we may use a scipy dependency in the implementation.
#
# ### 4. Other considerations
# We will consider differentiation in polar coordinate as well as that in cartesian coordinate.
# + [markdown] cell_id="00025-975c0c90-1d8d-4d5a-a478-a0e6c046d8b9" deepnote_cell_type="markdown" tags=[]
# ### Feedback
# + [markdown] cell_id="00027-d454f033-41cb-4d01-8445-bf1857e9db95" deepnote_cell_type="markdown" tags=[]
# Will you implement operator overloading methods? If so, what methods will you overload?
#
# + [markdown] cell_id="00027-2b7192ff-47f1-42d2-964c-6731dfcba034" deepnote_cell_type="markdown" tags=[]
# Yes, we will be implementing operator overloading to natural operators such as addition, multiplication, subtraction, and raising to a power etc.
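# As a minimal sketch of what that overloading could look like (the `Dual` class below is illustrative only, not GrADim's actual API): a dual number carries a value together with a derivative, and each overloaded operator applies the corresponding differentiation rule.

```python
class Dual:
    """Forward-mode AD value: tracks f(x) and f'(x) together."""
    def __init__(self, val, der=0.0):
        self.val = val
        self.der = der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)

    def __sub__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val - other.val, self.der - other.der)

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)

    def __pow__(self, n):
        # Power rule: (x^n)' = n * x^(n-1) * x'
        return Dual(self.val ** n, n * self.val ** (n - 1) * self.der)

# Differentiate f(x) = x^3 + 2x at x = 2; analytically f'(2) = 3*4 + 2 = 14
x = Dual(2.0, 1.0)       # seed the derivative with 1
f = x ** 3 + x * 2
print(f.val, f.der)      # 12.0 14.0
```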
# + [markdown] cell_id="00015-d2251d10-9fbc-4d50-8d92-7dc4ec4f26d0" deepnote_cell_type="markdown" id="tQeSkW2U44Gg"
# # 6. Licensing
#
# We want our library to be open source and accessible to everyone. We are okay with others modifying our code, with the modifications being distributed, and with use in commercial software, provided there is proper acknowledgement. So we opt for the MIT license.
| docs/milestone1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + raw_mimetype="text/restructuredtext" active=""
# .. _nb_pattern_search:
# -
# ## Pattern Search (Hooke and Jeeves)
# An implementation of well-known Hooke and Jeeves Pattern Search <cite data-cite="pattern_search"></cite> for single-objective optimization which makes use of *exploration* and *pattern* moves in an alternating manner.
# For now, we refer to [Wikipedia](https://en.wikipedia.org/wiki/Pattern_search_(optimization)) for more information such as pseudo code and visualizations in the search space.
# + code="algorithms/usage_pattern_search.py"
from pymoo.algorithms.so_pattern_search import PatternSearch
from pymoo.factory import get_problem
from pymoo.optimize import minimize
problem = get_problem("ackley", n_var=30)
algorithm = PatternSearch()
res = minimize(problem,
algorithm,
seed=1,
verbose=False)
print("Best solution found: \nX = %s\nF = %s" % (res.X, res.F))
# -
# ### API
# + raw_mimetype="text/restructuredtext" active=""
# .. autoclass:: pymoo.algorithms.so_pattern_search.PatternSearch
# :noindex:
| doc/source/algorithms/pattern_search.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# +
import datetime as dt
import panel as pn
pn.extension()
# -
# The ``DatetimeInput`` widget allows entering a datetime value as text and parsing it using a pre-defined formatter.
#
# For more information about listening to widget events and laying out widgets refer to the [widgets user guide](../../user_guide/Widgets.ipynb). Alternatively you can learn how to build GUIs by declaring parameters independently of any specific widgets in the [param user guide](../../user_guide/Param.ipynb). To express interactivity entirely using Javascript without the need for a Python server take a look at the [links user guide](../../user_guide/Links.ipynb).
#
# #### Parameters:
#
# For layout and styling related parameters see the [customization user guide](../../user_guide/Customization.ipynb).
#
# ##### Core
#
# * **``start``** (datetime): Lower bound
# * **``end``** (datetime): Upper bound
# * **``value``** (datetime): Parsed datetime value
#
# ##### Display
#
# * **``disabled``** (boolean): Whether the widget is editable
# * **``format``** (str): Datetime formatting string that determines how the value is formatted and parsed (``default='%Y-%m-%d %H:%M:%S'``)
# * **``name``** (str): The title of the widget
#
# ___
# The datetime parser uses the defined ``format`` to validate the input value, if the entered text is not a valid datetime a warning will be shown in the title as "`(invalid)`":
# +
dt_input = pn.widgets.DatetimeInput(name='Datetime Input', value=dt.datetime(2019, 2, 8))
dt_input
# -
# ``DatetimeInput.value`` returns a datetime object and can be accessed and set like other widgets:
dt_input.value
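# The format round trip itself can be mimicked with the stdlib alone (an illustration of how a format string validates input, not panel's actual implementation):

```python
from datetime import datetime

FORMAT = '%Y-%m-%d %H:%M:%S'  # the widget's default format

def parse_or_none(text, fmt=FORMAT):
    # Return a datetime if text matches fmt, else None (the widget
    # would flag the title with "(invalid)" in that case).
    try:
        return datetime.strptime(text, fmt)
    except ValueError:
        return None

print(parse_or_none('2019-02-08 00:00:00'))  # parses fine
print(parse_or_none('08/02/2019'))           # None -> invalid
```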
| examples/reference/widgets/DatetimeInput.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Text Processing with Python
#
# Packages Discussed:
#
# - [readability-lxml](https://github.com/buriy/python-readability) and [BeautifulSoup](http://www.crummy.com/software/BeautifulSoup/)
# - [Pattern](http://www.clips.ua.ac.be/pattern)
# - [NLTK](http://www.nltk.org/)
# - [TextBlob](http://textblob.readthedocs.org/en/dev/)
# - [spaCy](https://honnibal.github.io/spaCy/updates.html)
# - [gensim](https://radimrehurek.com/gensim/)
#
# Other packages:
#
# - [MITIE](https://github.com/mit-nlp/MITIE)
#
# ## NLP in Context
#
# > The science that has been developed around the facts of language passed through three stages before finding its true and unique object. First something called "grammar" was studied. This study, initiated by the Greeks and continued mainly by the French, was based on logic. It lacked a scientific approach and was detached from language itself. Its only aim was to give rules for distinguishing between correct and incorrect forms; it was a normative discipline, far removed from actual observation, and its scope was limited.
# >
# > — <NAME>
#
# ### The State of the Art
#
# - Academic design for use alongside intelligent agents (AI discipline)
# - Relies on formal models or representations of knowledge & language
# - Models are adapted and augmented through probabilistic methods and machine learning.
# - A small number of algorithms comprise the standard framework.
#
# Required:
#
# - Domain Knowledge
# - A Corpus in the Domain
# - Methods
#
# ### The Data Science Pipeline
#
# 
#
# ### The NLP Pipeline
#
# 
#
# #### Morphology
#
# The study of the forms of things, words in particular.
#
# Consider pluralization for English:
#
# - Orthographic Rules: puppy → puppies
# - Morphological Rules: goose → geese or fish
#
# Major parsing tasks:
#
# - stemming
# - lemmatization
# - tokenization.
#
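# The orthographic rule above can be sketched as a tiny pure-Python function (a toy rule set; irregular forms like goose → geese would need a lexicon):

```python
import re

def pluralize(noun):
    # Orthographic rule: consonant + "y" -> "ies" (puppy -> puppies)
    if re.search(r'[^aeiou]y$', noun):
        return noun[:-1] + 'ies'
    # Sibilant endings take "es" (bus -> buses, church -> churches)
    if re.search(r'(s|x|z|ch|sh)$', noun):
        return noun + 'es'
    return noun + 's'

print(pluralize('puppy'))   # puppies
print(pluralize('church'))  # churches
print(pluralize('goose'))   # gooses -- wrong! morphological rules need a lexicon
```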
# #### Syntax
#
# The study of the rules for the formation of sentences.
#
# Major tasks:
#
# - chunking
# - parsing
# - feature parsing
# - grammars
# - NGram Models (perplexity)
# - Language generation
#
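# An NGram model of the kind listed above can be sketched with the stdlib alone (the training sentence is a toy corpus, so the generated text is nonsense by design):

```python
import random
from collections import defaultdict

corpus = "the man hit the building with a baseball bat".split()

# Count bigram transitions: word -> list of observed next words
bigrams = defaultdict(list)
for w1, w2 in zip(corpus, corpus[1:]):
    bigrams[w1].append(w2)

# Naive language generation: a random walk over observed transitions
random.seed(0)
word, generated = "the", ["the"]
for _ in range(5):
    if word not in bigrams:
        break
    word = random.choice(bigrams[word])
    generated.append(word)
print(" ".join(generated))
```

# Real NGram models add smoothing and are evaluated with perplexity; this sketch only shows the counting and sampling skeleton.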
# #### Semantics
#
# The study of meaning.
#
# I see what I eat.
# I eat what I see.
# He poached salmon.
#
# Major Tasks
#
# - Frame extraction
# - creation of TMRs
# - Question and answer systems
#
# #### Machine Learning
#
# Solve **Clustering Problems**:
#
# - Topic Modeling
# - Language Similarity
# - Document Association (authorship)
#
# Solve **Classification Problems**:
#
# - Language Detection
# - Sentiment Analysis
# - Part of Speech Tagging
# - Statistical Parsing
# - Much more
#
# Use of word _vectors_ to implement distance based metrics.
#
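# The distance-based use of word vectors can be illustrated with plain bag-of-words counts and cosine similarity (stdlib only; real systems would use learned embeddings):

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

d1 = Counter("i see what i eat".split())
d2 = Counter("i eat what i see".split())
d3 = Counter("he poached salmon".split())

print(cosine(d1, d2))  # identical bags of words -> 1.0
print(cosine(d1, d3))  # no shared vocabulary -> 0.0
```

# Note that the bag-of-words view cannot distinguish "I see what I eat" from "I eat what I see" — exactly the limitation the semantics examples above point at.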
# ## Setup and Dataset
#
# To install the required packages (hopefully to a virtual environment) you can download the `requirements.txt` and run:
#
# $ pip install -r requirements.txt
#
# Or you can pip install each dependency as you need them.
#
# ### Corpus Organization
# ## Preprocessing HTML and XML Documents to Text
#
# Much of the text that we're interested in is available on the web and formatted either as HTML or XML. It's not just web pages, however. Most eReader formats like ePub and Mobi are actually zip files containing XHTML. These semi-structured documents contain a lot of information, usually structural in nature. However, we want to get to the main body of the content of what we're looking for, disregarding other content that might be included such as headers for navigation, sidebars, ads and other extraneous content.
#
# On the web, there are several services that provide web pages in a "readable" fashion like [Instapaper](https://www.instapaper.com/) and [Clearly](https://evernote.com/clearly/). Some browsers even come with a clutter- and distraction-free "reading mode" that seems to give us exactly the content that we're looking for. One option I've used in the past is to programmatically access these renderers; Instapaper even provides an [API](https://www.instapaper.com/api). However, for large corpora, we need to quickly and repeatably perform extraction while maintaining the original documents.
#
# > Corpus management requires that the original documents be stored alongside preprocessed documents - do not make changes to the originals in place! See discussions of data lakes and data pipelines for more on ingesting to WORM storages.
#
# In Python, the fastest way to process HTML and XML text is with the [`lxml`](http://lxml.de/) library - a superfast XML parser that binds the C libraries `libxml2` and `libxslt`. However, the API for using `lxml` is a bit tricky, so instead use friendlier wrappers, `readability-lxml` and `BeautifulSoup`.
#
# For example, consider the following code to fetch an HTML web article from The Washington Post:
# +
import codecs
import requests
from urllib.parse import urljoin
from contextlib import closing
chunk_size = 10**6 # Download 1 MB at a time.
wpurl = "http://wpo.st/" # Washington Post provides short links
def fetch_webpage(url, path):
    # Open up a stream request (to download large documents)
    # Ensure that we will close when complete using contextlib
    with closing(requests.get(url, stream=True)) as response:
        # Check that the response was successful
        if response.status_code == 200:
            # Write each chunk to disk with the correct encoding
            with codecs.open(path, 'w', response.encoding) as f:
                for chunk in response.iter_content(chunk_size, decode_unicode=True):
                    f.write(chunk)

def fetch_wp_article(article_id):
    path = "%s.html" % article_id
    url = urljoin(wpurl, article_id)
    return fetch_webpage(url, path)
# -
fetch_webpage("http://www.koreadaily.com/news/read.asp?art_id=3283896", "korean.html")
fetch_wp_article("nrRB0")
fetch_wp_article("uyRB0")
# `BeautifulSoup` allows us to search the DOM to extract particular elements, for example to load our document and find all the `<p>` tags, we would do the following:
# +
import bs4
def get_soup(path):
    with open(path, 'r') as f:
        return bs4.BeautifulSoup(f, "lxml")  # Note the use of the lxml parser

for p in get_soup("nrRB0.html").find_all('p'):
    print(p)
# -
# In order to print out only the text with no nodes, do the following:
for p in get_soup("nrRB0.html").find_all('p'):
    print(p.text)
    print()
# While this allows us to easily traverse the DOM and find specific elements by their id, class, or element type - we still have a lot of cruft in the document. This is where `readability-lxml` comes in. This library is a Python port of the [readability project](http://lab.arc90.com/experiments/readability/), written in Ruby and inspired by Instapaper. This code uses readability.js and some other helper functions to extract the main body and even title of the document you're working with.
# +
from readability.readability import Document
def get_paper(path):
    with codecs.open(path, 'r', encoding='utf-8') as f:
        return Document(f.read())

paper = get_paper("nrRB0.html")
print(paper.title())
# -
with codecs.open("nrRB0-clean.html", "w", encoding='utf-8') as f:
    f.write(paper.summary())
# Combine readability and BeautifulSoup as follows:
def get_text(path):
    with open(path, 'r') as f:
        paper = Document(f.read())
    soup = bs4.BeautifulSoup(paper.summary(), "lxml")
    output = [paper.title()]
    for p in soup.find_all('p'):
        output.append(p.text)
    return "\n\n".join(output)

print(get_text("nrRB0.html"))
# ### A note on binary formats
#
# In order to transform PDF documents to XML, the best solution is currently [PDFMiner](http://www.unixuser.org/~euske/python/pdfminer/index.html), specifically their [pdf2text](https://github.com/euske/pdfminer/blob/master/tools/pdf2txt.py) tool. Note that this tool can output multiple formats like XML or HTML, which is often better than the direct text export. Because of this it's often useful to convert PDF to XHTML and then use Readability or BeautifulSoup to extract the text out of the document.
#
# Unfortunately, the conversion from PDF to text is often not great, though statistical methodologies can help ease some of the errors in transformation. If PDFMiner is not sufficient, you can use tools like [PyPDF2](https://github.com/mstamy2/PyPDF2) to work directly on the PDF file, or write Python code to wrap other tools in Java and C like [PDFBox](https://pdfbox.apache.org/).
#
# Older binary formats like pre-2007 Microsoft Word documents (.doc) require special tools. Again, the best bet is to use Python to call another command-line tool like [antiword](http://www.winfield.demon.nl/). Newer Microsoft formats are actually zipped XML files (.docx) and can be either unzipped and handled using the XML tools mentioned above, or processed using Python packages like [python-docx](https://github.com/mikemaccana/python-docx) and [python-excel](http://www.python-excel.org/).
# ## Pattern
#
# The `pattern` library by the CLiPS lab at the University of Antwerp is designed specifically for language processing of web data and contains a toolkit for fetching data via web APIs: Google, Gmail, Bing, Twitter, Facebook, Wikipedia, and more. It supports HTML DOM parsing and even includes a web crawler!
#
# For example to ingest Twitter data:
from pattern.web import Twitter, plaintext
twitter = Twitter(language='en')
for tweet in twitter.search("#DataDC", cached=False):
    print(tweet.text)
# Pattern also contains an NLP toolkit for English in the `pattern.en` module that utilizes statistical approaches and regular expressions. Other languages include Spanish, French, Italian, German, and Dutch.
#
# The pattern parser will identify word classes (e.g. Part of Speech tagging) and perform morphological inflection analysis; it also includes a WordNet API for lemmatization.
# +
from pattern.en import parse, parsetree
s = "The man hit the building with a baseball bat."
print(parse(s, relations=True, lemmata=True))
print()
for clause in parsetree(s):
    for chunk in clause.chunks:
        for word in chunk.words:
            print(word, end=' ')
        print()
# -
# The `pattern.search` module allows you to retrieve N-Grams from text based on phrasal patterns, and can be used to mine dependencies from text, e.g.
# +
from pattern.search import search
s = "The man hit the building with a baseball bat."
pt = parsetree(s, relations=True, lemmata=True)
for match in search('NP VP', pt):
    print(match)
# -
# Lastly the `pattern.vector` module has a toolkit for distance-based bag-of-words machine learning including clustering (K-Means, Hierarchical Clustering) and classification.
# ## NLTK
#
# Suite of libraries for a variety of academic text processing tasks:
#
# tokenization, stemming, tagging,
# chunking, parsing, classification,
# language modeling, logical semantics
#
# Pedagogical resources for teaching NLP theory in Python ...
#
# - Python interface to over 50 corpora and lexical resources
# - Focus on Machine Learning with specific domain knowledge
# - Free and Open Source
# - Numpy and Scipy under the hood
# - Fast and Formal
#
# What is NLTK not?
#
# - Production ready out of the box*
# - Lightweight
# - Generally applicable
# - Magic
#
# *There are actually a few things that are production ready right out of the box*.
#
# **The Good Parts**:
#
# - Preprocessing
# - segmentation
# - tokenization
# - PoS tagging
# - Word level processing
# - WordNet
# - Lemmatization
# - Stemming
# - NGram
# - Utilities
# - Tree
# - FreqDist
# - ConditionalFreqDist
# - Streaming CorpusReader objects
# - Classification
# - Maximum Entropy (Megam Algorithm)
# - Naive Bayes
# - Decision Tree
# - Chunking, Named Entity Recognition
# - Parsers Galore!
#
# **The Bad Parts**:
#
# - Syntactic Parsing
# - No included grammar (not a black box)
# - Feature/Dependency Parsing
# - No included feature grammar
# - The sem package
# - Toy only (lambda-calculus & first order logic)
# - Lots of extra stuff
# - papers, chat programs, alignments, etc.
#
#
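# Two of the utilities listed above, `FreqDist` and `ConditionalFreqDist`, are essentially a `Counter` and a mapping from condition to `Counter` with NLP conveniences on top. A stdlib sketch of the core idea:

```python
from collections import Counter, defaultdict

# FreqDist ~ Counter; ConditionalFreqDist ~ dict of condition -> Counter
cfd = defaultdict(Counter)
for tag, word in [('NN', 'man'), ('VB', 'hit'), ('NN', 'building'), ('NN', 'man')]:
    cfd[tag][word] += 1

print(cfd['NN'].most_common(1))  # [('man', 2)]
```

# The NLTK versions add plotting, tabulation, and smoothing hooks, but the counting model is the same.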
# +
import nltk
text = get_text("nrRB0.html")
for idx, s in enumerate(nltk.sent_tokenize(text)):   # Segmentation
    words = nltk.wordpunct_tokenize(s)               # Tokenization
    tags = nltk.pos_tag(words)                       # Part of Speech tagging
    print(tags)
    print()
    if idx > 5:
        break
# +
from nltk import FreqDist
from nltk import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()
text = get_text("nrRB0.html")
vocab = FreqDist()
words = FreqDist()
for s in nltk.sent_tokenize(text):
    for word in nltk.wordpunct_tokenize(s):
        words[word] += 1
        lemma = lemmatizer.lemmatize(word)
        vocab[lemma] += 1
print(words)
print(vocab)
# -
# The first thing you needed to do was create a corpus reader that could read the RSS feeds and their topics, implementing one of the built-in corpus readers:
# +
import os
import nltk
import time
import random
import pickle
import string
from bs4 import BeautifulSoup
from nltk.corpus import CategorizedPlaintextCorpusReader
# The first group captures the category folder, docs are any HTML file.
CORPUS_ROOT = './corpus'
DOC_PATTERN = r'(?!\.).*\.html'
CAT_PATTERN = r'([a-z_]+)/.*'
# Specialized Corpus Reader for HTML documents
class CategorizedHTMLCorpusReader(CategorizedPlaintextCorpusReader):
    """
    Reads only the HTML body for the words and strips any tags.
    """

    def _read_word_block(self, stream):
        soup = BeautifulSoup(stream, 'lxml')
        return self._word_tokenizer.tokenize(soup.get_text())

    def _read_para_block(self, stream):
        soup = BeautifulSoup(stream, 'lxml')
        paras = []
        piter = soup.find_all('p') if soup.find('p') else self._para_block_reader(stream)
        for para in piter:
            # find_all returns Tag objects; extract their text before sentence-splitting
            text = para.get_text() if hasattr(para, 'get_text') else para
            paras.append([self._word_tokenizer.tokenize(sent)
                          for sent in self._sent_tokenizer.tokenize(text)])
        return paras

# Create our corpus reader
rss_corpus = CategorizedHTMLCorpusReader(CORPUS_ROOT, DOC_PATTERN,
                                         cat_pattern=CAT_PATTERN, encoding='utf-8')
# -
# Just to make things easy, I've also included all of the imports at the top of this snippet in case you're just copying and pasting. This should give you a corpus that is easily readable with the following properties:
#
# > RSS Corpus contains 5506 files in 11 categories
# > Vocab: 69642 in 1920455 words for a lexical diversity of 27.576
#
# This snippet demonstrates a choice I made - to override the `_read_word_block` and the `_read_para_block` functions in the `CategorizedPlaintextCorpusReader`, but of course you could have created your own `HTMLCorpusReader` class that implemented the categorization features.
#
# The next thing to do is figure out how you will generate your featuresets; I hope that you used unigrams, bigrams, TF-IDF, and others. The simplest thing to do is a bag-of-words approach. However, I have ensured that this bag of words does not contain punctuation or stopwords, has been normalized to all lowercase, and has been lemmatized to reduce the number of word forms:
# +
# Create feature extractor methodology
def normalize_words(document):
    """
    Expects as input a list of words that make up a document. This will
    yield only lowercase significant words (excluding stopwords and
    punctuation) and will lemmatize all words to ensure that we have word
    forms that are standardized.
    """
    stopwords = set(nltk.corpus.stopwords.words('english'))
    lemmatizer = nltk.stem.wordnet.WordNetLemmatizer()
    for token in document:
        token = token.lower()
        if token in string.punctuation: continue
        if token in stopwords: continue
        yield lemmatizer.lemmatize(token)

def document_features(document):
    words = nltk.FreqDist(normalize_words(document))
    feats = {}
    for word in words.keys():
        feats['contains(%s)' % word] = True
    return feats
# -
# You should save a training, devtest, and test set as pickles to disk so that you can easily work on your classifier without having to worry about the overhead of randomization. I went ahead and saved the features to disk; but if you're developing features then you'll only save the word lists to disk. Here are the functions both for generation and for loading the data sets:
# +
def timeit(func):
    def wrapper(*args, **kwargs):
        start = time.time()
        result = func(*args, **kwargs)
        delta = time.time() - start
        return result, delta
    return wrapper

@timeit
def generate_datasets(test_size=550, pickle_dir="."):
    """
    Creates three data sets: a test set and a devtest set of 550 documents
    each, then a training set with the rest of the documents in the corpus.
    It will then write the data sets to disk at the pickle_dir.
    """
    documents = [(document_features(rss_corpus.words(fileid)), category)
                 for category in rss_corpus.categories()
                 for fileid in rss_corpus.fileids(category)]
    random.shuffle(documents)
    datasets = {
        'test': documents[0:test_size],
        'devtest': documents[test_size:test_size*2],
        'training': documents[test_size*2:],
    }
    for name, data in datasets.items():
        with open(os.path.join(pickle_dir, name+".pickle"), 'wb') as out:
            pickle.dump(data, out)

def load_datasets(pickle_dir="."):
    """
    Loads the randomly shuffled data sets from their pickles on disk.
    """
    def loader(name):
        path = os.path.join(pickle_dir, name+".pickle")
        with open(path, 'rb') as f:
            data = pickle.load(f)
        return name, data
    return dict(loader(name) for name in ('test', 'devtest', 'training'))

# Using the timeit decorator you can see that this saves you quite a few seconds:
_, delta = generate_datasets(pickle_dir='datasets')
print("Took %0.3f seconds to generate datasets" % delta)
# -
# Last up is the building of the classifier. I used a maximum entropy classifier with the lemmatized word level features. Also note that I used the MEGAM algorithm to significantly speed up my training time:
# +
@timeit
def train_classifier(training, path='classifier.pickle'):
    """
    Trains the classifier and saves it to disk.
    """
    classifier = nltk.MaxentClassifier.train(training,
        algorithm='megam', trace=2, gaussian_prior_sigma=1)
    with open(path, 'wb') as out:
        pickle.dump(classifier, out)
    return classifier

datasets = load_datasets(pickle_dir='datasets')
classifier, delta = train_classifier(datasets['training'])
print("trained in %0.3f seconds" % delta)
testacc = nltk.classify.accuracy(classifier, datasets['test']) * 100
print("test accuracy %0.2f%%" % testacc)
classifier.show_most_informative_features(30)
# +
from operator import itemgetter

def classify(text, explain=False):
    with open('classifier.pickle', 'rb') as f:
        classifier = pickle.load(f)
    document = nltk.wordpunct_tokenize(text)
    features = document_features(document)
    pd = classifier.prob_classify(features)
    for result in sorted([(s, pd.prob(s)) for s in pd.samples()], key=itemgetter(1), reverse=True):
        print("%s: %0.4f" % result)
    print()
    if explain:
        classifier.explain(features)

classify(get_text("nrRB0.html"), True)
# -
classifier.explain(document_features(nltk.wordpunct_tokenize(get_text("nrRB0.html"))))
# The classifier did well - it trained in 2 minutes or so and it got an initial accuracy of about 83% - a pretty good start!
#
# ### Parsing with Stanford Parser and NLTK
#
# NLTK's built-in parsing is notoriously limited - it's pedagogical. However, you can use the Stanford parser through NLTK's wrappers.
# +
import os
from nltk.tag.stanford import NERTagger
from nltk.parse.stanford import StanfordParser
## NER JAR and Models
STANFORD_NER_MODEL = os.path.expanduser("~/Development/stanford-ner-2014-01-04/classifiers/english.all.3class.distsim.crf.ser.gz")
STANFORD_NER_JAR = os.path.expanduser("~/Development/stanford-ner-2014-01-04/stanford-ner-2014-01-04.jar")
## Parser JAR and Models
STANFORD_PARSER_MODELS = os.path.expanduser("~/Development/stanford-parser-full-2014-10-31/stanford-parser-3.5.0-models.jar")
STANFORD_PARSER_JAR = os.path.expanduser("~/Development/stanford-parser-full-2014-10-31/stanford-parser.jar")
def create_tagger(model=None, jar=None, encoding='ASCII'):
    model = model or STANFORD_NER_MODEL
    jar = jar or STANFORD_NER_JAR
    return NERTagger(model, jar, encoding)

def create_parser(models=None, jar=None, **kwargs):
    models = models or STANFORD_PARSER_MODELS
    jar = jar or STANFORD_PARSER_JAR
    return StanfordParser(jar, models, **kwargs)

class NER(object):
    tagger = None

    @classmethod
    def initialize_tagger(klass, model=None, jar=None, encoding='ASCII'):
        klass.tagger = create_tagger(model, jar, encoding)

    @classmethod
    def tag(klass, sent):
        if klass.tagger is None:
            klass.initialize_tagger()
        sent = nltk.word_tokenize(sent)
        return klass.tagger.tag(sent)

class Parser(object):
    parser = None

    @classmethod
    def initialize_parser(klass, models=None, jar=None, **kwargs):
        klass.parser = create_parser(models, jar, **kwargs)

    @classmethod
    def parse(klass, sent):
        if klass.parser is None:
            klass.initialize_parser()
        return klass.parser.raw_parse(sent)

def tag(sent):
    return NER.tag(sent)

def parse(sent):
    return Parser.parse(sent)
# -
tag("The man hit the building with the bat.")
for p in parse("The man hit the building with the bat."):
    print(p)
# ## TextBlob
#
# A lightweight wrapper around nltk that provides a simple "Blob" interface for working with text.
# +
from textblob import TextBlob
from bs4 import BeautifulSoup
text = TextBlob(get_text("nrRB0.html"))
print(text.sentences)
# -
import nltk
np = nltk.FreqDist(text.noun_phrases)  # note: `np` here shadows the usual numpy alias
print(np.most_common(10))
print(text.sentiment)
review = TextBlob("<NAME> would be the most amazing, most wonderful, most handsome actor - the greatest that ever lived, if only he didn't have that silly earring.")
print(review.sentiment)
# Language Detection using TextBlob
b = TextBlob(u"بسيط هو أفضل من مجمع")
b.detect_language()
chinese_blob = TextBlob(u"美丽优于丑陋")
chinese_blob.translate(from_lang="zh-CN", to='en')
en_blob = TextBlob(u"Simple is better than complex.")
en_blob.translate(to="es")
# ## spaCy
#
# Industrial strength NLP, in Python but with a strong Cython backend. Super fast. Licensing issue though.
# +
from __future__ import unicode_literals
from spacy.en import English
nlp = English()
tokens = nlp(u'The man hit the building with the baseball bat.')
baseball = tokens[7]
print (baseball.orth, baseball.orth_, baseball.head.lemma, baseball.head.lemma_)
# -
tokens = nlp(u'The man hit the building with the baseball bat.', parse=True)
for token in tokens:
    print(token.prob)
# ## gensim
#
# Library for bag of words clustering - LSA and LDA.
#
# Also implements word2vec - Google's word vectorizer: something that was explored in a previous post.
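# gensim's pipeline starts by mapping tokens to integer ids and documents to sparse `(id, count)` vectors; the core of that `Dictionary`/`doc2bow` step looks roughly like this (a stdlib sketch of the idea, not gensim's actual API):

```python
from collections import Counter

def build_dictionary(docs):
    """Assign a stable integer id to each unique token (gensim's Dictionary idea)."""
    vocab = {}
    for doc in docs:
        for tok in doc:
            vocab.setdefault(tok, len(vocab))
    return vocab

def doc2bow(doc, vocab):
    """Sparse (token_id, count) representation of one document."""
    counts = Counter(doc)
    return sorted((vocab[t], n) for t, n in counts.items() if t in vocab)

docs = [["human", "interface", "computer"], ["computer", "survey", "computer"]]
vocab = build_dictionary(docs)
print(doc2bow(docs[1], vocab))  # [(2, 2), (3, 1)]
```

# LSA, LDA, and word2vec in gensim all consume corpora expressed in this sparse bag-of-words form.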
| Text.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Sensitivity analysis exercise
# You are doing the resource planning for a lawn furniture company. They manufacture decorative sets of legs for lawn chairs, benches, and tables from metal tubes using a two-step process involving tube-bending and welding. The profit the company receives from the sale of each product is $3 for a set of chair legs, $3 for a set of bench legs, and $5 for a set of table legs. You are trying to plan the production mix for the upcoming season. Unfortunately, due to a strike, there are only 2,000 lbs of tubing available for production on hand.
#
# The time and raw material requirements for each product are printed in the console. Also, the PuLP model has been completed for you and stored in the variable model. The constraints of the model are printed to the console.
#
#
# - Complete the code to print the model status.
# - Print the values of the decision variables.
# - Print the total profit by printing the value of the objective.
import pandas as pd
from pulp import *
# +
# Initialize Class
model = LpProblem("Maximize Glass Co. Profits", LpMaximize)
# Define Decision Variables
A = LpVariable('A', lowBound=0)
B = LpVariable('B', lowBound=0)
C = LpVariable('C', lowBound=0)
# Define Objective Function
model += 500 * A + 450 * B + 600 * C
# Define Constraints
model += 6 * A + 5 * B + 8 * C <= 60
model += 10.5 * A + 20 * B + 10 * C <= 150
model += A <= 8
# +
# Solve Model
model.solve()
print(LpStatus[model.status])
for v in model.variables():
    print(v.name, "=", v.varValue)
print("Objective (Max Profit) = ", value(model.objective))
o = [{'name':name, 'shadow_price':c.pi,'slack':c.slack} for name,c in model.constraints.items()]
print(pd.DataFrame(o))
# -
# The `_c1` shadow price shows that relaxing the first constraint from 60 to 61 would increase the profit by about $78.
#
# Likewise for `_c2`: increasing its right-hand side from 150 to 151 would add about $2 to the objective.
#
# A shadow price of 0 means the objective does not change even if that constraint is relaxed.
#
# - slack = 0: the constraint is binding, i.e. changing it will change the solution or objective function.
# - slack > 0: the constraint is not binding.
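# A quick pencil-and-paper check of the `_c1` shadow price, assuming the solver found `_c1` and `_c2` binding with C = 0 at the optimum (consistent with the shadow prices reported above):

```python
def solve_2x2(a11, a12, b1, a21, a22, b2):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = a11 * a22 - a12 * a21
    return (b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det

def objective(A, B):
    return 500 * A + 450 * B          # C = 0 at this vertex

# Optimum with the original right-hand sides (60 and 150)
A0, B0 = solve_2x2(6, 5, 60, 10.5, 20, 150)
# Optimum after relaxing _c1 by one unit (61)
A1, B1 = solve_2x2(6, 5, 61, 10.5, 20, 150)

shadow_price_c1 = objective(A1, B1) - objective(A0, B0)
print(round(shadow_price_c1, 3))  # ~78.148, matching the reported _c1 shadow price
```

# Relaxing a binding constraint by one unit improves the objective by exactly the shadow price, which is what the `c.pi` column in the DataFrame reports.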
# Complete the code to use a Pandas DataFrame to print the shadow price and slack of each constraint.
| PULP/tutorial/.ipynb_checkpoints/4.1 Sensitivity analysis exercise-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### import libraries
# ! pip install netCDF4
import netCDF4 # python API to work with netcdf (.nc) files
import os
import datetime
from osgeo import gdal, ogr, osr
import numpy as np # library to work with matrixes and computations in general
import matplotlib.pyplot as plt # plotting library
from auxiliary_classes import convert_time,convert_time_reverse,kelvin_to_celsius,kelvin_to_celsius_vector,Grid,Image,subImage
import json
import geojson
import subprocess
# ### auxiliary functions
def print_geojson(tname, tvalue, fname, longitude, latitude, startdoc, position, endloop):
    """For printing to geojson: start and end of the document, geometry, and attributes."""
    fname = fname + ".geojson"
    pmode = "a"
    if startdoc == 1:  # start of geojson
        with open(fname, mode="w", encoding='utf-8') as f1:
            tstring = "{\n\"type\": \"FeatureCollection\",\n\"features\": ["
            print(tstring, file=f1)
    else:
        if position == 0:  # geometry: longitude, latitude
            tstring = ("\"type\": \"Feature\",\n\"geometry\": {\n\"type\": \"Point\",\n"
                       "\"coordinates\": [" + str(longitude) + "," + str(latitude) + "]\n},\n\"properties\": {")
            with open(fname, mode=pmode, encoding='utf-8') as f1:
                print(tstring, file=f1)
        elif position == 1:  # start of point attributes
            with open(fname, mode=pmode, encoding='utf-8') as f1:
                print("{", file=f1)
        elif position == 2:  # print an attribute (not the last one)
            with open(fname, mode=pmode, encoding='utf-8') as f1:
                ttext = "\"" + str(tname) + "\": \"" + str(tvalue) + "\","
                print(ttext, file=f1)
        elif position == 3:  # print the last attribute
            with open(fname, mode=pmode, encoding='utf-8') as f1:
                ttext = "\"" + str(tname) + "\": \"" + str(tvalue) + "\""
                print(ttext, file=f1)
        elif position == 4:  # end of point attributes
            with open(fname, mode=pmode, encoding='utf-8') as f1:
                if endloop == 0:
                    print("}\n},", file=f1)
                else:  # end of geojson
                    print("}\n}\n]\n}", file=f1)
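# Hand-printing JSON fragments like this is easy to get wrong (every value ends up quoted as a string, and a stray trailing comma breaks the file). An alternative sketch: collect features as plain dicts and serialize once with the stdlib `json` module:

```python
import json

features = []
features.append({
    "type": "Feature",
    "geometry": {"type": "Point", "coordinates": [16.6, 49.2]},
    "properties": {"W2010_1": 123.4},   # numeric values stay numeric
})
doc = {"type": "FeatureCollection", "features": features}
geojson_text = json.dumps(doc)
```

# The coordinates and property names above are placeholders; the point is that `json.dumps` guarantees well-formed output.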
# +
def trend(inputlist, nametrend, namediff, fname):
    listlong = len(inputlist)
    if listlong <= 1:
        trendcoef = 0
        timediff = 0
    else:
        x = np.arange(0, len(inputlist))
        y = inputlist
        z = np.polyfit(x, y, 1)
        trendcoef = z[0]
        timediff = int(trendcoef * (listlong - 1))
    print_geojson(nametrend, trendcoef, fname, 0, 0, 0, 2, 0)
    print_geojson(namediff, timediff, fname, 0, 0, 0, 3, 0)
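# `np.polyfit(x, y, 1)[0]` here is just the least-squares slope of the yearly series; the same number can be computed by hand, which makes the trend coefficient easier to interpret (pure-Python sketch for intuition):

```python
def least_squares_slope(y):
    """Slope of the best-fit line through (0, y0), (1, y1), ... -
    what np.polyfit(range(len(y)), y, 1)[0] returns for this data."""
    n = len(y)
    x_mean = (n - 1) / 2.0
    y_mean = sum(y) / n
    num = sum((i - x_mean) * (v - y_mean) for i, v in enumerate(y))
    den = sum((i - x_mean) ** 2 for i in range(n))
    return num / den

print(least_squares_slope([2.0, 4.0, 6.0, 8.0]))  # 2.0: the series rises by 2 per step
```

# Multiplying this slope by `listlong - 1` (as `timediff` does above) estimates the total change over the whole period.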
# +
def trend2(inputlist, nametrend, namediff, endyear, startyear, fname, fnameavg):
    listlong = endyear - startyear + 1
    numberweeks = len(inputlist[0])
    for j in range(0, numberweeks, 1):
        tempweek = j + 1
        if listlong <= 1:
            trendcoef = 0
            timediff = 0
        else:
            x = np.arange(0, listlong)
            y = []
            for i in range(0, listlong, 1):
                y.append(inputlist[i][j])
            z = np.polyfit(x, y, 1)
            trendcoef = z[0]
            timediff = int(trendcoef * (listlong - 1))
        nametrend2 = nametrend + str(tempweek)
        namediff2 = namediff + str(tempweek)
        print_geojson(nametrend2, trendcoef, fname, 0, 0, 0, 2, 0)
        print_geojson(nametrend2, trendcoef, fnameavg, 0, 0, 0, 2, 0)
        if j == (numberweeks - 1):
            print_geojson(namediff2, timediff, fname, 0, 0, 0, 3, 0)
            print_geojson(namediff2, timediff, fnameavg, 0, 0, 0, 3, 0)
        else:
            print_geojson(namediff2, timediff, fname, 0, 0, 0, 2, 0)
            print_geojson(namediff2, timediff, fnameavg, 0, 0, 0, 2, 0)
# -
def avg2Dlist(inputlist, startyear, endyear):  # column-wise average for a 2D list -> 1D list of avg values
    numberyear = endyear - startyear + 1
    listlen = len(inputlist[0])
    templist = []
    avglist = []
    for i in range(0, listlen, 1):
        for j in range(0, numberyear, 1):
            templist.append(inputlist[j][i])
        tempvalue = sum(templist) / len(templist)
        avglist.append(tempvalue)
        templist = []
    return avglist
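# `avg2Dlist` is a column-wise average; the same operation can be written much more compactly with `zip` (a sketch, not part of the original notebook):

```python
def avg_columns(rows):
    # zip(*rows) transposes the 2D list; then average each column
    return [sum(col) / len(col) for col in zip(*rows)]

print(avg_columns([[1, 2, 3], [3, 4, 5]]))  # [2.0, 3.0, 4.0]
```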
def acumulatelist(inputlist):  # in-place running sums: [1, 2, 3] -> [1, 3, 6]
    listlen = len(inputlist)
    for i in range(0, listlen - 1, 1):
        inputlist[i + 1] += inputlist[i]
    return inputlist
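# The stdlib already provides this as `itertools.accumulate`, which also avoids mutating the input list:

```python
from itertools import accumulate

print(list(accumulate([1, 2, 3, 4])))  # [1, 3, 6, 10]
```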
# +
def printlistasweekgeojson(inputlist, name, fname, fnameavg, endloop):  # print a list of weekly values to geojson
    listlen = len(inputlist)
    for i in range(0, listlen, 1):
        tempvalue = inputlist[i]
        tvarname = name + str(i + 1)
        if endloop == 1 and i == (listlen - 1):
            print_geojson(tvarname, tempvalue, fname, 0, 0, 0, 3, 0)
            print_geojson(tvarname, tempvalue, fnameavg, 0, 0, 0, 2, 0)
        else:
            print_geojson(tvarname, tempvalue, fname, 0, 0, 0, 2, 0)
            print_geojson(tvarname, tempvalue, fnameavg, 0, 0, 0, 2, 0)
# -
# ### Solar radiation: function for one place
# +
from datetime import date, timedelta
def findradiation(latitude, longitude, year, endyear, im, enddate, startdate, fnameradiation,
                  allweekradilist, radiationparam, fnameannualrad, yearradilist, unitcoeff, fnameradaccum):
    sdate = startdate       # start date for calculation of solar sums
    edate = enddate         # end date for calculation of solar sums
    delta = edate - sdate   # as timedelta
    sevendays = 0           # for determination of a new week (1-7)
    currentweek = 1         # for determination of weeks
    weekradilist = []
    starthourday = 0
    endhourday = 23
    weekradisum = 0
    yearradisum = 0
    for i in range(delta.days + 1):
        daylong = sdate + timedelta(days=i)
        sdaylong = str(daylong)
        tday = int(sdaylong[8:10])
        tmonth = int(sdaylong[5:7])
        tyear = int(sdaylong[0:4])
        dayradisum = 0
        sevendays += 1
        for hour in range(starthourday, endhourday + 1, 1):  # for specific hours (all day, only sunrise hours, ...)
            time = convert_time_reverse(datetime.datetime(tyear, tmonth, tday, hour, 0))
            slice_dictionary = {'lon': [longitude], 'lat': [latitude], 'time': [int(time)]}
            currentradi = int(im.slice(radiationparam, slice_dictionary)) * unitcoeff
            dayradisum += currentradi
            yearradisum += currentradi
        if daylong == edate:  # save week data for the last date in the season
            weekradisum += dayradisum
            weekradilist.append(weekradisum)
            tvarname = "W" + str(year) + "_" + str(currentweek)
            print_geojson(tvarname, weekradisum, fnameradiation, 0, 0, 0, 2, 0)
            print_geojson(tvarname, yearradisum, fnameradaccum, 0, 0, 0, 2, 0)
        elif sevendays <= 7:  # still inside the current week
            weekradisum += dayradisum
        else:  # a new week starts
            weekradilist.append(weekradisum)
            tvarname = "W" + str(year) + "_" + str(currentweek)
            print_geojson(tvarname, weekradisum, fnameradiation, 0, 0, 0, 2, 0)
            print_geojson(tvarname, yearradisum, fnameradaccum, 0, 0, 0, 2, 0)
            weekradisum = dayradisum
            sevendays = 0
            currentweek += 1
    allweekradilist.append(weekradilist)
    yearradilist.append(yearradisum)
    tvarname = "Radi" + str(year)
    print_geojson(tvarname, yearradisum, fnameannualrad, 0, 0, 0, 2, 0)
# -
# ### Find deficits: function for selected years
def radiationyearly(latorder, lonorder, startyear, endyear, endloop, datafolder, fnameradiation,
                    enddatem, startdatem, enddated, startdated, radiationparam, fnameannualrad,
                    unitcoeff, fnameradaccum, fnameavgradiation):
    print_geojson("", "", fnameradiation, 0, 0, 0, 1, 0)
    print_geojson("", "", fnameradaccum, 0, 0, 0, 1, 0)
    print_geojson("", "", fnameannualrad, 0, 0, 0, 1, 0)
    print_geojson("", "", fnameavgradiation, 0, 0, 0, 1, 0)
    endloopyear = 0
    allweekradilist = []  # 2D list: all weeks for many years
    yearradilist = []     # annual sums for many years
    for year in range(startyear, endyear + 1, 1):
        source = datafolder + '/' + str(year) + '.nc'
        im = Image(netCDF4.Dataset(source, 'r'))
        longlist = im.get_data().variables['lon'][:]
        latlist = im.get_data().variables['lat'][:]
        longitude = longlist[lonorder]
        latitude = latlist[latorder]
        if year == startyear:
            print_geojson("", "", fnameradiation, longitude, latitude, 0, 0, 0)
            print_geojson("", "", fnameannualrad, longitude, latitude, 0, 0, 0)
            print_geojson("", "", fnameradaccum, longitude, latitude, 0, 0, 0)
            print_geojson("", "", fnameavgradiation, longitude, latitude, 0, 0, 0)
        if year == endyear:
            endloopyear = 1
        enddate = date(year, enddatem, enddated)
        startdate = date(year, startdatem, startdated)
        findradiation(latitude, longitude, year, endyear, im, enddate, startdate, fnameradiation,
                      allweekradilist, radiationparam, fnameannualrad, yearradilist, unitcoeff, fnameradaccum)
    avgweekradilist = avg2Dlist(allweekradilist, startyear, endyear)
    printlistasweekgeojson(avgweekradilist, "RadW", fnameradiation, fnameavgradiation, 0)
    avgweekacuradilist = acumulatelist(avgweekradilist)
    printlistasweekgeojson(avgweekacuradilist, "ARadW", fnameradaccum, fnameavgradiation, endloopyear)
    avgradiyear = sum(yearradilist) / len(yearradilist)
    print_geojson("AvRadi", avgradiyear, fnameannualrad, 0, 0, 0, 2, 0)
    nametrend = "AnTrCo"
    namediff = "Andiff"
    trend(yearradilist, nametrend, namediff, fnameannualrad)
    nametrend = "TrCo"
    namediff = "Diff"
    trend2(allweekradilist, nametrend, namediff, endyear, startyear, fnameradiation, fnameavgradiation)
    print_geojson("", "", fnameradiation, 0, 0, 0, 4, endloop)
    print_geojson("", "", fnameannualrad, 0, 0, 0, 4, endloop)
    print_geojson("", "", fnameradaccum, 0, 0, 0, 4, endloop)
    print_geojson("", "", fnameavgradiation, 0, 0, 0, 4, endloop)
# ### Find deficits: function for selected latitudes, longitudes
# +
def radiationplaces(startlat, startlon, endlat, endlon, startyear, endyear, exportfolder, datafolder,
                    fnameradiation1, enddatem, startdatem, enddated, startdated, alllatlonfile,
                    radiationparam, fnameannualrad1, unitcoeff, fnameradaccum1, fnameavgradiation1):
    fnameradiation = exportfolder + "/" + fnameradiation1
    fnameradaccum = exportfolder + "/" + fnameradaccum1
    fnameannualrad = exportfolder + "/" + fnameannualrad1
    fnameavgradiation = exportfolder + "/" + fnameavgradiation1
    # start the geojson files:
    print_geojson("", "", fnameradiation, 0, 0, 1, 0, 0)
    print_geojson("", "", fnameradaccum, 0, 0, 1, 0, 0)
    print_geojson("", "", fnameannualrad, 0, 0, 1, 0, 0)
    print_geojson("", "", fnameavgradiation, 0, 0, 1, 0, 0)
    endloop = 0
    if alllatlonfile == 1:  # calculate for all latitudes and longitudes in the input file
        source = datafolder + '/' + str(startyear) + '.nc'
        im = Image(netCDF4.Dataset(source, 'r'))
        arraylon = im.get_data().variables['lon'][0::]
        arraylat = im.get_data().variables['lat'][0::]
        startlat = 0
        startlon = 0
        endlon = len(arraylon) - 1
        endlat = len(arraylat) - 1
    for latorder in range(startlat, endlat + 1, 1):
        for lonorder in range(startlon, endlon + 1, 1):
            if latorder == endlat and lonorder == endlon:
                endloop = 1
            radiationyearly(latorder, lonorder, startyear, endyear, endloop, datafolder, fnameradiation,
                            enddatem, startdatem, enddated, startdated, radiationparam, fnameannualrad,
                            unitcoeff, fnameradaccum, fnameavgradiation)
# -
# ## <font color=red>Find solar radiation: input parameters and launch</font>
# +
#Time definition:
startyear=2010 #start year (integer) of calculation solar sums
endyear=2019 #end year (integer) of calculation solar sums
enddatem = 12 # end date (month) each year of calculation of solar sums
enddated = 31 # end date (day) each year of calculation of solar sums
startdatem = 1 # start date (month) each year of calculation of solar sums
startdated = 1 # start date (day) each year of calculation of solar sums
#Solar unit:
units = 2 # 1 = J*m^(-2), 2 = MJ*m^(-2), 3 = Wh, 4 = kWh
#Files/Folders name:
datafolder = "data" #folder with data files (named by year) for each year #string
fnameradiation ="weekly_radiation" #name of created files with week radiation #string
fnameradaccum ="weekly_accum_radiation" #name of created files with week radiation #string
fnameavgradiation ="weekly_avg_radiation" #name of created files with week radiation #string
fnameannualrad ="annualsum_radiation" #name of created files with annual/seasonal/defined period radiation #string
exportfolder = "export" #for all files (if each file its folder -> insert name of folder to name of file) #export folder must be created #string
#Area definition:
alllatlonfile=0 #calculate all latitudes and longitudes in input file (1=yes, 0=no)
# if alllatlonfile==0 then the manual bounds below are used:
startlat=0 # start number of list of latitudes from used netCDF4 file
startlon=0 # start number of list of longitudes from used netCDF4 file
endlat=20 # end number of list of latitudes from used netCDF4 file
endlon=21 # end number of list of longitudes from used netCDF4 file
#Solar data parameter:
radiationparam = 'ssr_NON_CDM'
unitcoeff = 1
if units == 2:
    unitcoeff = 0.000001
elif units == 3:
    unitcoeff = 0.000278
elif units == 4:
    unitcoeff = 0.000000277777778
starthourday=0 # integer 0-23
endhourday=23 # integer 0-23
radiationplaces(startlat, startlon, endlat, endlon, startyear,endyear,exportfolder,datafolder,fnameradiation,enddatem, startdatem,enddated, startdated,alllatlonfile,radiationparam, fnameannualrad, unitcoeff, fnameradaccum,fnameavgradiation)
# -
# <font color=red> Output: in the export folder a GeoJSON file of points is created - each point carries attributes such as the weekly, accumulated, and annual radiation sums for each year, the multi-year averages, and the trend coefficients. </font>
# ## From geojson to shp
args = ['ogr2ogr', '-f', 'ESRI Shapefile', 'export/weekly_radiation.shp', 'export/weekly_radiation.geojson']
subprocess.Popen(args)
args = ['ogr2ogr', '-f', 'ESRI Shapefile', 'export/weekly_accum_radiation.shp', 'export/weekly_accum_radiation.geojson']
subprocess.Popen(args)
args = ['ogr2ogr', '-f', 'ESRI Shapefile', 'export/annualsum_radiation.shp', 'export/annualsum_radiation.geojson']
subprocess.Popen(args)
args = ['ogr2ogr', '-f', 'ESRI Shapefile', 'export/weekly_avg_radiation.shp', 'export/weekly_avg_radiation.geojson']
subprocess.Popen(args)
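# One caveat with the cells above: `Popen` returns immediately, so a following cell that reads the shapefiles may run before `ogr2ogr` has finished. `subprocess.run`, which blocks until the child process exits (and can raise on failure), is usually what you want here:

```python
import subprocess
import sys

def to_shapefile(geojson_path, shp_path):
    """Convert one GeoJSON file, blocking until ogr2ogr finishes."""
    args = ['ogr2ogr', '-f', 'ESRI Shapefile', shp_path, geojson_path]
    subprocess.run(args, check=True)  # raises CalledProcessError on failure

# run() blocks until the child exits, unlike Popen; a portable demonstration:
result = subprocess.run([sys.executable, '-c', 'print("done")'],
                        capture_output=True, text=True, check=True)
print(result.stdout.strip())  # done
```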
| 3-solar_radiation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from dotenv import find_dotenv, load_dotenv
import os
import pandas as pd
from fuzzywuzzy import fuzz
import configparser
from azure.cosmos.cosmos_client import CosmosClient
from gremlin_python.driver import client, serializer
import requests
import pandas as pd
import re
aws_services = pd.read_csv('../data/raw/aws_services.csv')
aws_services.fillna('', inplace=True)
aws_services.head()
azure_services = pd.read_csv('../data/raw/azure_services.csv')
azure_services.fillna('', inplace=True)
azure_services.head()
google_services = pd.read_csv('../data/raw/google_services.csv')
google_services.fillna('', inplace=True)
google_services.head()
# +
import concurrent.futures
from azure.cosmos.cosmos_client import CosmosClient
from gremlin_python.driver import client, serializer
from gremlin_python.structure.graph import Graph
from gremlin_python import statics
from gremlin_python.process.graph_traversal import __
from gremlin_python.process.strategies import *
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
from gremlin_python.process.traversal import T
from gremlin_python.process.traversal import Order
from gremlin_python.process.traversal import Cardinality
from gremlin_python.process.traversal import Column
from gremlin_python.process.traversal import Direction
from gremlin_python.process.traversal import Operator
from gremlin_python.process.traversal import P
from gremlin_python.process.traversal import Pop
from gremlin_python.process.traversal import Scope
from gremlin_python.process.traversal import Barrier
from gremlin_python.process.traversal import Bindings
# from gremlin_python.process.traversal import WithOptions
import logging
class GremlinQueryManager:
    def __init__(self, account_name, master_key, database_name, graph_name):
        self.driver_remote_connection = DriverRemoteConnection(
            f'wss://{account_name}.gremlin.cosmosdb.azure.com:443/',
            'g',
            username=f'/dbs/{database_name}/colls/{graph_name}',
            password=<PASSWORD>,
            message_serializer=serializer.GraphSONMessageSerializer()
        )
        self.g = self.setup_graph()

    def setup_graph(self):
        try:
            graph = Graph()
            logging.info('Trying To Login')
            g = graph.traversal().withRemote(self.driver_remote_connection)
            logging.info('Successfully Logged In')
        except Exception as e:  # shouldn't really be so broad
            logging.error(e, exc_info=True)
            raise BadRequestError('Could not connect to Neptune')
        return g

gremlin_qm = GremlinQueryManager(account_name, master_key, db_name, graph_name)
gremlin_qm.g.V('ba834b84-4fe3-40c3-94b6-991c7348db52').toList()
# +
class DocumentQueryManager:
    """Utility class for managing the cosmos db client connection
    and collections for easy querying."""

    def __init__(self, account_name, master_key, db_name):
        self.client = CosmosClient('https://{}.documents.azure.com:443/'.format(account_name), {
            'masterKey': master_key
        })
        dbs = list(self.client.ReadDatabases())
        self.db = dbs[0]['_self']
        for d in dbs:
            if d['id'] == db_name:
                self.db = d['_self']
        self.collections = {}
        for c in list(self.client.ReadContainers(self.db)):
            self.collections[c['id']] = c['_self']


class GremlinQueryManager:
    def __init__(self, account_name, master_key, database_name, graph_name):
        self.client = client.Client(
            'wss://{}.gremlin.cosmosdb.azure.com:443/'.format(account_name),
            'g',
            username="/dbs/{}/colls/{}".format(database_name, graph_name),
            password=master_key,
            message_serializer=serializer.GraphSONMessageSerializer()
        )
        print(self.client)
        self.g = self.setup_graph()

    def setup_graph(self):
        try:
            graph = Graph()
            # connstring = os.environ.get('GRAPH_DB')
            logging.info('Trying To Login')
            g = graph.traversal().withRemote(self.client._get_connection())
            logging.info('Successfully Logged In')
        except Exception as e:  # shouldn't really be so broad
            logging.error(e, exc_info=True)
            raise BadRequestError('Could not connect to Neptune')
        print(g)
        return g

    def query(self, query):
        callback = self.client.submitAsync(query)
        return callback.result().one()
        # for r in callback.result():
        #     return r
        # for f in concurrent.futures.as_completed([res.all()]):
        #     print(f.result())
        # return res.done.result()
        # if callback.result():
        #     res = []
        #     for r in callback.result():
        #         res += r
        #     return res


load_dotenv(find_dotenv())
account_name = os.environ.get('COSMOS_ACCOUNT_NAME')
db_name = os.environ.get('COSMOS_DB_NAME')
graph_name = os.environ.get('COSMOS_GRAPH_NAME')
master_key = os.environ.get('COSMOS_MASTER_KEY')
doc_qm = DocumentQueryManager(account_name, master_key, db_name)
gremlin_qm = GremlinQueryManager(account_name, master_key, db_name, graph_name)
# +
# def get_services():
abbr = 'aws'
cats = list(doc_qm.client.QueryItems(
doc_qm.collections['ccg'],
f"select * from c where c.label = '{abbr}_category'",
{'enableCrossPartitionQuery': True}
))
services = list(doc_qm.client.QueryItems(
doc_qm.collections['ccg'],
f"select * from c where c.label = '{abbr}_service'",
{'enableCrossPartitionQuery': True}
))
edges = list(doc_qm.client.QueryItems(
doc_qm.collections['ccg'],
    "select * from c where c.label = 'belongs_to'",
{'enableCrossPartitionQuery': True}
))
cats_df = pd.DataFrame(cats)
cats_df.head()
# data_records = {}
# for s in services:
# data_records[s['id']] = {
# 'name': extract_value(s['name']),
# 'short_description': extract_value(s['short_description']),
# 'long_description': extract_value(s['long_description']),
# }
# return data_records
# -
s_df = pd.DataFrame(services)
s_df.head()
edges_df = pd.DataFrame(edges)
edges_df.head()
# +
# q = """g.V().has('label', 'aws_service')
# .project('name', 'short_description', 'long_description', 'category_name')
# .by('name').by('short_description').by('long_description').by(out('belongs_to').values('name'))"""
# print(''.join(q.split()))
# len(gremlin_qm.query(q))
# q = "g.V('ba834b84-4fe3-40c3-94b6-991c7348db52').in('source_cloud')"
q = "g.V().has('label', 'aws_service')"
len(gremlin_qm.query(q))
# -
import re
class GremlinQueryBuilder:
"""
Basic functions to build gremlin queries that add vertices and edges
"""
@classmethod
def gremlin_escape(cls, s):
return s.replace('"', '\\"').replace('$', '\\$')
@classmethod
def build_upsert_vertex_query(cls, entity_type, properties):
q = f"""g.V().has("label", "{entity_type}"){cls.get_properties_str(properties, False)}.
fold().
coalesce(unfold(),
addV("{entity_type}"){cls.get_properties_str(properties)})"""
return q
@classmethod
def build_upsert_edge_query(cls, from_id, to_id, edge_properties):
"""
g.V().has('person','name','vadas').as('v').
V().has('software','name','ripple').
coalesce(__.inE('created').where(outV().as('v')),
addE('created').from('v').property('weight',0.5))
"""
label = edge_properties["label"]
return f"""g.V("{from_id}").as('v').
V("{to_id}").
coalesce(__.inE("{label}").where(outV().as('v')),
addE("{label}").from('v'){cls.get_properties_str(edge_properties)})"""
@classmethod
def get_by_id_query(cls, _id):
return 'g.V("{}")'.format(_id)
@classmethod
def get_properties_str(cls, properties, create=True):
if create:
query_str = 'property'
else:
query_str = 'has'
properties_lower = {k.lower():v for k,v in properties.items()}
if "label" in properties_lower:
del properties_lower["label"]
output = ""
for k, v in properties_lower.items():
if isinstance(v, str):
output += '.{}("{}", "{}")'.format(query_str, k, v)
else:
output += '.{}("{}", {})'.format(query_str, k, v)
return output
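# As a quick sanity check of the escaping rule, here is a standalone copy of
# `gremlin_escape` (duplicated so this cell runs on its own):

```python
def gremlin_escape(s):
    # standalone duplicate of GremlinQueryBuilder.gremlin_escape
    return s.replace('"', '\\"').replace('$', '\\$')

print(gremlin_escape('say "hi" for $5'))
```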
test_eq = GremlinQueryBuilder.build_upsert_edge_query('4952feb6-55dc-4d8f-9d63-4fca5c4265e3', 'b8e694c2-4e0b-4c17-b01b-b00670caaa6e', {'label': 'source_cloud'})
print(test_eq)
gremlin_qm.query(test_eq)
# ## Add clouds
sources = {
'aws': 'Amazon Web Services',
'azure': 'Microsoft Azure',
'gcp': 'Google Cloud'
}
for abbreviation, source in sources.items():
q = GremlinQueryBuilder.build_upsert_vertex_query('cloud', {'name': source, 'abbreviation': abbreviation})
gremlin_qm.query(q)
# ## Add cloud categories
for i, source in enumerate([aws_categories, azure_categories, google_categories]):
abbr = list(sources.keys())[i]
r = gremlin_qm.query(f"g.V().has('abbreviation', '{abbr}')")
cloud_id = r[0]['id']
for cat in source:
vq = GremlinQueryBuilder.build_upsert_vertex_query(f'{abbr}_category', {'name': cat})
v_id = gremlin_qm.query(vq)[0]['id']
eq = GremlinQueryBuilder.build_upsert_edge_query(v_id, cloud_id, {'label': 'source_cloud'})
gremlin_qm.query(eq)
# ## Add services for each cloud category
for abbr, df in zip(sources.keys(), [aws_services, azure_services, google_services]):
source_name = sources[abbr]
print(f"Adding services for {source_name}")
def add_service_and_edge(row):
label = f'{abbr}_service'
props = {
'name': row['name'],
'short_description': row['short_description'],
'long_description': row['long_description'],
'uri': row['link'],
'icon_uri': row['icon']
}
for k, v in props.items():
props[k] = GremlinQueryBuilder.gremlin_escape(v)
vq = GremlinQueryBuilder.build_upsert_vertex_query(label, props)
v_res = gremlin_qm.query(vq)
cat_name = row['category_name']
cat_id = gremlin_qm.query(f"g.V().has('name', '{cat_name}').has('label', '{abbr}_category')")[0]['id']
cat_eq = GremlinQueryBuilder.build_upsert_edge_query(v_res[0]['id'], cat_id, {'label': 'belongs_to'})
gremlin_qm.query(cat_eq)
df.apply(add_service_and_edge, axis=1)
# ---
# # Conflating Service Categories
print('AWS Categories')
aws_categories = list(aws_services['category_name'].unique())
print(aws_categories)
print()
print('Azure Categories')
azure_categories = list(azure_services['category_name'].unique())
print(azure_categories)
print()
print('Google Cloud Categories')
google_categories = list(google_services['category_name'].unique())
print(google_categories)
# ### Resolve similar categories for AWS and Azure
# +
normalized_cats = {}
for aws_cat in aws_categories:
for azure_cat in azure_categories:
if fuzz.ratio(aws_cat, azure_cat) > 75 or aws_cat in azure_cat or azure_cat in aws_cat:
if len(aws_cat) < len(azure_cat):
norm = azure_cat
else:
norm = aws_cat
normalized_cats[azure_cat] = norm
normalized_cats[aws_cat] = norm
normalized_cats
# -
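# The matching rule above can be sketched without `fuzzywuzzy` by using
# `difflib.SequenceMatcher` as a stand-in for `fuzz.ratio` (the category names
# below are made-up examples, and difflib's ratio differs slightly from
# fuzzywuzzy's):

```python
from difflib import SequenceMatcher

def ratio(a, b):
    # 0-100 similarity, roughly analogous to fuzz.ratio
    return int(100 * SequenceMatcher(None, a, b).ratio())

aws = ['Compute', 'Machine Learning', 'Storage']
azure = ['Compute', 'AI + Machine Learning', 'Storage']
demo_norm = {}
for a in aws:
    for z in azure:
        if ratio(a, z) > 75 or a in z or z in a:
            norm = z if len(a) < len(z) else a  # keep the longer name
            demo_norm[a] = norm
            demo_norm[z] = norm
print(demo_norm)
```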
# ### Resolve similar categories between Google categories and the normalized AWS/Azure categories
# +
for google_cat in google_categories:
for norm_cat in set(normalized_cats.values()):
if fuzz.ratio(google_cat, norm_cat) > 60 or google_cat in norm_cat or norm_cat in google_cat:
print(google_cat, norm_cat)
normalized_cats[google_cat] = norm_cat
normalized_cats
# -
print('AWS Categories')
aws_categories = list(aws_services['category_name'].unique())
print(aws_categories)
print()
print('Azure Categories')
azure_categories = list(azure_services['category_name'].unique())
print(azure_categories)
print()
print('Google Cloud Categories')
google_categories = list(google_services['category_name'].unique())
print(google_categories)
# ### Let's pick up a couple obvious strays from Google
normalized_cats['Cloud IAM'] = normalized_cats['Identity']
normalized_cats['Cloud IOT Core'] = normalized_cats['Internet of Things']
# ### Add the normalized categories as super categories.
# These super categories begin to connect services across the different cloud providers.
for text, norm in normalized_cats.items():
category_res = gremlin_qm.query(f"g.V().has('name', '{norm}').has('label', 'category')")
if category_res:
norm_category_id = category_res[0]['id']
else:
vq = GremlinQueryBuilder.build_upsert_vertex_query('category', {'name': norm})
norm_category_id = gremlin_qm.query(vq)[0]['id']
nodes = gremlin_qm.query(f"g.V().has('name', '{text}')")
for node in nodes:
eq = GremlinQueryBuilder.build_upsert_edge_query(node['id'], norm_category_id, {'label': 'super_category'})
gremlin_qm.query(eq)
| notebooks/02_kk_graph_creation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ref: https://www.kaggle.com/hamishdickson/bidirectional-lstm-in-keras-with-glove-embeddings
# # LOAD libs
# +
import pandas as pd
import numpy as np
import time
import os, gc
from tqdm import tqdm
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import confusion_matrix, accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split
import re
import matplotlib.pyplot as plt
# %matplotlib inline
# +
import torch
from torch import nn, cuda
from torch.nn import functional as F
from torch.utils.data import TensorDataset, Subset, DataLoader
from torch.optim import Adam, Optimizer
from torch.optim.lr_scheduler import _LRScheduler, LambdaLR, ReduceLROnPlateau
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from sklearn.metrics import roc_auc_score, f1_score
# -
use_cuda = cuda.is_available()
use_cuda
import random
def seed_everything(seed=123):
    random.seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.deterministic = True
class SequenceBucketCollator():
def __init__(self, choose_length, sequence_index, length_index, label_index=None):
self.choose_length = choose_length
self.sequence_index = sequence_index
self.length_index = length_index
self.label_index = label_index
def __call__(self, batch):
batch = [torch.stack(x) for x in list(zip(*batch))]
sequences = batch[self.sequence_index]
lengths = batch[self.length_index]
length = self.choose_length(lengths)
mask = torch.arange(start=maxlen, end=0, step=-1) < length
padded_sequences = sequences[:, mask]
batch[self.sequence_index] = padded_sequences
if self.label_index is not None:
return [x for i, x in enumerate(batch) if i != self.label_index], batch[self.label_index]
return batch
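# What the collator does per batch can be sketched in NumPy on pre-padded toy
# data (using `<=` in the mask so the longest sequence in the batch is kept
# whole):

```python
import numpy as np

toy_maxlen = 8
# two sequences, zero-padded on the left as keras pad_sequences does by default
seqs = np.array([
    [0, 0, 0, 0, 0, 1, 2, 3],
    [0, 0, 0, 4, 5, 6, 7, 8],
])
lengths = np.array([3, 5])
length = lengths.max()                          # choose_length for this batch
mask = np.arange(toy_maxlen, 0, -1) <= length   # keep only the trailing columns
trimmed = seqs[:, mask]
print(trimmed)
```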
# +
def sigmoid(x):
return 1 / (1 + np.exp(-x))
class NeuralNet(nn.Module):
def __init__(self, embedding_matrix):
super(NeuralNet, self).__init__()
lstm_hidden_size = 120
gru_hidden_size = 60
self.gru_hidden_size = gru_hidden_size
self.embedding = nn.Embedding(*embedding_matrix.shape)
self.embedding.weight = nn.Parameter(torch.tensor(embedding_matrix, dtype=torch.float32))
self.embedding.weight.requires_grad = False
self.embedding_dropout = nn.Dropout2d(0.2)
self.lstm = nn.LSTM(embedding_matrix.shape[1], lstm_hidden_size, bidirectional=True, batch_first=True)
self.gru = nn.GRU(lstm_hidden_size * 2, gru_hidden_size, bidirectional=True, batch_first=True)
self.linear = nn.Linear(gru_hidden_size * 6, 20)
self.relu = nn.ReLU()
self.dropout = nn.Dropout(0.1)
self.out = nn.Linear(20, 1)
def apply_spatial_dropout(self, h_embedding):
h_embedding = h_embedding.transpose(1, 2).unsqueeze(2)
h_embedding = self.embedding_dropout(h_embedding).squeeze(2).transpose(1, 2)
return h_embedding
def forward(self, x, normal_feats, lengths=None):
h_embedding = self.embedding(x.long())
h_embedding = self.apply_spatial_dropout(h_embedding)
h_lstm, _ = self.lstm(h_embedding)
h_gru, hh_gru = self.gru(h_lstm)
hh_gru = hh_gru.view(-1, self.gru_hidden_size * 2)
avg_pool = torch.mean(h_gru, 1)
max_pool, _ = torch.max(h_gru, 1)
# normal_linear = F.relu(self.normal_linear(normal_feats.float()))
conc = torch.cat((hh_gru, avg_pool, max_pool), 1)
conc = self.relu(self.linear(conc))
conc = self.dropout(conc)
out = self.out(conc)
return out
# +
def train_model(n_epochs=4, accumulation_step=2, **kwargs):
optimizer = Adam(model.parameters(), lr=0.001)
scheduler = LambdaLR(optimizer, lambda epoch: 0.6 ** epoch)
checkpoint_weights = [2 ** epoch for epoch in range(n_epochs)]
best_epoch = -1
best_valid_score = 0.
best_valid_loss = 1.
all_train_loss = []
all_valid_loss = []
total_preds = []
for epoch in range(n_epochs):
start_time = time.time()
train_loss = train_one_epoch(model, criterion, train_loader, optimizer, accumulation_step)
val_loss, val_score = validation(model, criterion, valid_loader)
# if val_score > best_valid_score:
# best_valid_score = val_score
# torch.save(model.state_dict(), 'best_score{}.pt'.format(fold))
elapsed = time.time() - start_time
lr = [_['lr'] for _ in optimizer.param_groups]
print("Epoch {} - train_loss: {:.6f} val_loss: {:.6f} val_score: {:.6f} lr: {:.5f} time: {:.0f}s".format(
epoch+1, train_loss, val_loss, val_score, lr[0], elapsed))
# inference
test_preds = inference_test(model, test_loader)
total_preds.append(test_preds)
# scheduler update
scheduler.step()
total_preds = np.average(total_preds, weights=checkpoint_weights, axis=0)
return total_preds, val_score, val_loss
# -
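# The `checkpoint_weights` trick averages each epoch's test predictions with
# exponentially growing weight on later epochs; a toy sketch of the weighting:

```python
import numpy as np

n = 4
weights = [2 ** epoch for epoch in range(n)]                 # [1, 2, 4, 8]
preds_per_epoch = [np.full(3, float(e)) for e in range(n)]   # toy predictions
avg = np.average(preds_per_epoch, weights=weights, axis=0)
print(avg)  # weighted mean = (0*1 + 1*2 + 2*4 + 3*8) / 15
```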
def inference_test(model, test_loader):
model.eval()
test_preds = np.zeros((len(test_dataset), 1))
with torch.no_grad():
for i, inputs in enumerate(test_loader):
if use_cuda:
inputs[0] = inputs[0].cuda()
inputs[1] = inputs[1].cuda()
# inputs[2] = inputs[2].cuda()
outputs = model(inputs[0], inputs[1])
# outputs = model(inputs[0], inputs[1], inputs[2])
test_preds[i * batch_size:(i+1) * batch_size] = sigmoid(outputs.cpu().numpy())
return test_preds
# +
def train_one_epoch(model, criterion, train_loader, optimizer, accumulation_step=2):
model.train()
train_loss = 0.
optimizer.zero_grad()
# for i, (inputs, targets) in tqdm(enumerate(train_loader), desc='train', total=len(train_loader)):
for i, (inputs, targets) in enumerate(train_loader):
if use_cuda:
inputs[0] = inputs[0].cuda()
inputs[1] = inputs[1].cuda()
# inputs[2] = inputs[2].cuda()
targets = targets.cuda()
        preds = model(inputs[0], inputs[1])
loss = criterion(preds, targets)
loss.backward()
if accumulation_step:
if (i+1) % accumulation_step == 0:
optimizer.step()
optimizer.zero_grad()
else:
optimizer.step()
optimizer.zero_grad()
train_loss += loss.item() / len(train_loader)
return train_loss
def validation(model, criterion, valid_loader):
model.eval()
valid_preds = np.zeros((len(valid_dataset), 1))
valid_targets = np.zeros((len(valid_dataset), 1))
val_loss = 0.
with torch.no_grad():
# for i, (inputs, targets) in tqdm(enumerate(valid_loader), desc='valid', total=len(valid_loader)):
for i, (inputs, targets) in enumerate(valid_loader):
valid_targets[i * batch_size: (i+1) * batch_size] = targets.numpy().copy()
if use_cuda:
inputs[0] = inputs[0].cuda()
inputs[1] = inputs[1].cuda()
# inputs[2] = inputs[2].cuda()
targets = targets.cuda()
outputs = model(inputs[0], inputs[1])
# outputs = model(inputs[0], inputs[1], inputs[2])
loss = criterion(outputs, targets)
valid_preds[i * batch_size: (i+1) * batch_size] = sigmoid(outputs.detach().cpu().numpy())
val_loss += loss.item() / len(valid_loader)
val_score = roc_auc_score(valid_targets, valid_preds)
# valid_preds = np.where(valid_preds>=0.1, 1, 0)
# val_score = f1_score(valid_targets, valid_preds)
return val_loss, val_score
# -
# %%time
train_df=pd.read_csv("../KB_NLP/morphs/komo_morphs_train.csv")
test_df=pd.read_csv("../KB_NLP/morphs/komo_morphs_test.csv")
train_df.head()
pd.set_option('display.max_colwidth', None)
train_df.head(10)
train_df['l'] = train_df['komo_morphs'].apply(lambda x: len(str(x).split(' ')))
print('mean length of data : {}'.format(train_df['l'].mean()))
print('max length of data : {}'.format(train_df['l'].max()))
print('std length of data : {}'.format(train_df['l'].std()))
test_df['l'] = test_df['komo_morphs'].apply(lambda x: len(str(x).split(' ')))
print('mean length of data : {}'.format(test_df['l'].mean()))
print('max length of data : {}'.format(test_df['l'].max()))
print('std length of data : {}'.format(test_df['l'].std()))
sequence_length = 220  # heuristic choice; ideally the max length between train and test, resources permitting
def build_vocab(sentences, verbose=True):
vocab = {}
for sentence in tqdm(sentences, disable = (not verbose)):
for word in sentence:
try:
vocab[word] += 1
except KeyError:
vocab[word] = 1
return vocab
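# `build_vocab` is equivalent to a `collections.Counter` over all tokens:

```python
from collections import Counter

sentences = [['spam', 'alert', 'spam'], ['alert', 'bank']]
toy_vocab = Counter(word for sentence in sentences for word in sentence)
print(dict(toy_vocab))
```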
vocab = build_vocab(list(train_df['komo_morphs'].apply(lambda x: x.split())))
#vocab_length = len(vocab)
len(vocab)
# +
# %%time
max_features = len(vocab)  # vocabulary cap for the tokenizer; words ranked beyond this map to the OOV token
tokenizer = Tokenizer(num_words = max_features, split=' ', oov_token='<unw>', filters=' ')
tokenizer.fit_on_texts(train_df['komo_morphs'].values)
#this takes our sentences and replaces each word with an integer
X_train = tokenizer.texts_to_sequences(train_df['komo_morphs'].values)
train_lengths = torch.from_numpy(np.array([len(x) for x in X_train]))
print("train max length: {}".format(train_lengths.max()))
maxlen = train_lengths.max()
# -
# %%time
X_test = tokenizer.texts_to_sequences(test_df['komo_morphs'].values)
test_lengths = torch.from_numpy(np.array([len(x) for x in X_test]))
print("test max length: {}".format(test_lengths.max()))
X_train = torch.from_numpy(pad_sequences(X_train, maxlen)) #padding empty to zeros
X_test = torch.from_numpy(pad_sequences(X_test, maxlen))
y = torch.tensor(train_df['smishing'].values).float().unsqueeze(1)
X_train.shape, X_test.shape
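# keras `pad_sequences` defaults to padding and truncating at the front
# ('pre'); a minimal pure-Python equivalent of that behavior:

```python
def pad_pre(seqs, maxlen):
    # left-pad with zeros; if too long, keep only the last maxlen tokens
    return [[0] * (maxlen - len(s)) + s[-maxlen:] for s in seqs]

print(pad_pre([[1, 2], [3, 4, 5, 6]], maxlen=3))
```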
# +
print("shape of the x_train: {} * {}".format(X_train.shape[0], X_train.shape[1]))
#print("shape of the x_test: {} * {}".format(x_test_padded.shape[0], x_test_padded.shape[1]))
X_tr, X_val, y_train, y_valid = train_test_split(X_train, y, test_size=0.2)
print("shape of the validation: {} * {}".format(X_val.shape[0], X_val.shape[1]))
# -
word_index = tokenizer.word_index
print('Found {} unique tokens'.format(len(word_index)))
# +
embeddings_index = {}
f = open(os.path.join('../KB_NLP','glove_5290.txt'))
for line in f:
values = line.split()
word = values[0]
coefs = np.asarray(values[1:],dtype='float32')
embeddings_index[word] = coefs
f.close()
print("Found {} word vectors".format(len(embeddings_index)))
# +
num_words = min(max_features, len(word_index))+1
print(num_words)
embedding_dim = 200 # from trained by 200 dim
#first create a matrix of zeros, this is our embedding matrix
embedding_matrix = np.zeros((num_words, embedding_dim))
for word, i in word_index.items():
if i > max_features:
continue
embedding_vector = embeddings_index.get(word)
if embedding_vector is not None:
embedding_matrix[i] = embedding_vector
else:
embedding_matrix[i] = np.random.randn(embedding_dim)
# -
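# The matrix construction above, on toy data (the words and vectors are made
# up; row 0 is reserved for padding and stays zero):

```python
import numpy as np

toy_word_index = {'hello': 1, 'world': 2, 'unseen': 3}   # toy tokenizer output
toy_vectors = {'hello': np.ones(4), 'world': np.full(4, 2.0)}
toy_dim = 4
rng = np.random.default_rng(0)

toy_matrix = np.zeros((len(toy_word_index) + 1, toy_dim))
for word, i in toy_word_index.items():
    vec = toy_vectors.get(word)
    # known words get their pretrained vector, unknown ones a random init
    toy_matrix[i] = vec if vec is not None else rng.standard_normal(toy_dim)
print(toy_matrix.shape)
```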
n_epochs = 6
cv_split = 20
n_split = 10
criterion = nn.BCEWithLogitsLoss()
batch_size = 128
# +
#train_length, valid_length = lengths[trn_index], lengths[val_index]
#batch_size = 64
#train_dataset = TensorDataset(X_train, train_length, torch.tensor(y_train))
# note: these lengths are computed after padding, so each equals maxlen and the
# bucketing collator effectively keeps full-length batches
train_length = torch.from_numpy(np.array([len(x) for x in X_tr]))
valid_length = torch.from_numpy(np.array([len(x) for x in X_val]))
test_length = torch.from_numpy(np.array([len(x) for x in X_test]))
train_dataset = TensorDataset(X_tr, train_length, torch.tensor(y_train))
valid_dataset = TensorDataset(X_val, valid_length,torch.tensor(y_valid))
test_dataset = TensorDataset(X_test, test_length)
print("num train set: ",len(train_dataset))
print("num val set: ",len(valid_dataset))
train_collator = SequenceBucketCollator(lambda length: length.max(),
sequence_index=0,
length_index=1,
label_index=2)
test_collator = SequenceBucketCollator(lambda length: length.max(), sequence_index=0, length_index=1)
train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True, collate_fn=train_collator)
valid_loader = DataLoader(valid_dataset, batch_size=batch_size, shuffle=False, collate_fn=train_collator)
test_loader = DataLoader(test_dataset, batch_size=batch_size, shuffle=False, collate_fn=test_collator)
# -
total_folds_preds1 = []
model = NeuralNet(embedding_matrix)
if use_cuda:
model.cuda()
# +
optimizer = Adam(model.parameters(),lr=0.001)
scheduler = LambdaLR(optimizer, lambda epoch: 0.6 ** epoch)
checkpoint_weights = [2 ** epoch for epoch in range(n_epochs)]
accumulation_step = 2
best_epoch = -1
best_valid_score = 0
best_valid_loss = 1
all_train_loss = []
all_valid_loss = []
total_preds = []
for epoch in range(n_epochs):
start_time = time.time()
model.train()
train_loss = 0
optimizer.zero_grad()
    for i, (inputs, targets) in enumerate(train_loader):
        print("i: {}".format(i))
        print("inputs")
        print(inputs)
        print('targets')
        print(targets)
        if use_cuda:
            inputs[0] = inputs[0].cuda()
            inputs[1] = inputs[1].cuda()
            targets = targets.cuda()
        preds = model(inputs[0], inputs[1])
        loss = criterion(preds, targets)
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
        train_loss += loss.item() / len(train_loader)
# -
inputs[0]
targets.cuda()
targets = targets.unsqueeze(1)
targets
train_loss = train_one_epoch(model, criterion, train_loader, optimizer, accumulation_step)
val_loss, val_score = validation(model, criterion, valid_loader)
elapsed = time.time() - start_time
lr = [_['lr'] for _ in optimizer.param_groups]
print("Epoch {} - train_loss: {:.6f} val_loss: {:.6f} val_score: {:.6f} lr: {:.5f} time: {:.0f}s".format(
epoch+1, train_loss, val_loss, val_score, lr[0], elapsed))
# +
train_kwargs = dict(
train_loader=train_loader,
valid_loader=valid_loader,
test_loader=test_loader,
model=model,
criterion=criterion,
)
single_fold_preds, cv_score, cv_loss = train_model(n_epochs=n_epochs, accumulation_step=1, **train_kwargs)
total_folds_preds1.append(single_fold_preds)
del train_loader, valid_loader
gc.collect()
print()
# -
| torch_komo_glove_bi_lstm_200dim_debug.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/chrismarkella/Kaggle-access-from-Google-Colab/blob/master/squeeze_the_dataframe.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="KZm3u06OJNVE" colab_type="text"
# ## Reducing the size of a DataFrame.
# + id="V2c87jCbENol" colab_type="code" outputId="3e2c0f61-7152-49db-bb00-3f4a7548e140" colab={"base_uri": "https://localhost:8080/", "height": 121}
# !apt-get -qq install tree
# + id="UfptiakkLVI7" colab_type="code" colab={}
import os
import numpy as np
import pandas as pd
from getpass import getpass
# + id="x_q48eW8LWd_" colab_type="code" outputId="80a43379-bfe0-4888-c106-b9e4f072fe05" colab={"base_uri": "https://localhost:8080/", "height": 69}
def access_kaggle():
"""
Access Kaggle from Google Colab.
If the /root/.kaggle does not exist then prompt for
the username and for the Kaggle API key.
Creates the kaggle.json access file in the /root/.kaggle/ folder.
"""
KAGGLE_ROOT = os.path.join('/root', '.kaggle')
KAGGLE_PATH = os.path.join(KAGGLE_ROOT, 'kaggle.json')
if '.kaggle' not in os.listdir(path='/root'):
user = getpass(prompt='Kaggle username: ')
key = getpass(prompt='Kaggle API key: ')
# !mkdir $KAGGLE_ROOT
# !touch $KAGGLE_PATH
# !chmod 666 $KAGGLE_PATH
with open(KAGGLE_PATH, mode='w') as f:
f.write('{"username":"%s", "key":"%s"}' %(user, key))
f.close()
# !chmod 600 $KAGGLE_PATH
del user
del key
success_msg = "Kaggle is successfully set up. Good to go."
print(f'{success_msg}')
access_kaggle()
# + id="4dZ1ELwzNOuk" colab_type="code" outputId="a4eb2903-c5b6-42bc-b4d1-b4310575f850" colab={"base_uri": "https://localhost:8080/", "height": 176}
# !kaggle datasets files benhamner/sf-bay-area-bike-share
# + id="fdOD33r7NnQA" colab_type="code" outputId="e2f3ad5f-5ba5-4780-ead1-4e0b07be19fe" colab={"base_uri": "https://localhost:8080/", "height": 69}
# !kaggle datasets download benhamner/sf-bay-area-bike-share -f status.csv
# + id="wpra5q6oPfiM" colab_type="code" outputId="00efc20a-5731-4c13-b52a-d9b64377e8b5" colab={"base_uri": "https://localhost:8080/", "height": 208}
# !tree -sh
# + id="hRko-_kgPhrb" colab_type="code" outputId="9a6efcdd-36fc-4119-f0c4-6693dfeb1ada" colab={"base_uri": "https://localhost:8080/", "height": 243}
# !unzip status.csv.zip
# !rm status.csv.zip
# !tree -sh
# + id="lUXPbmcchtDZ" colab_type="code" outputId="510047a5-5a49-4649-88d2-92daeab83ec5" colab={"base_uri": "https://localhost:8080/", "height": 52}
import time
import functools
def timer(func):
    @functools.wraps(func)  # preserve the wrapped function's name and docstring
    def wrapper(*args, **kwargs):
        start = time.time()
        rv = func(*args, **kwargs)
        end = time.time()
        return rv, end - start
    return wrapper
@timer
def load_data(csv_path):
return pd.read_csv(csv_path, sep=',')
df, time_elapsed = load_data('status.csv')
print(f'time elapsed: {time_elapsed}')
print(df.shape)
# + id="ElUxZUNNcROO" colab_type="code" outputId="9e02d94e-2f6e-423d-a2ca-cdd63909a264" colab={"base_uri": "https://localhost:8080/", "height": 173}
df.info()
# + id="Rj4PbLS3cUuL" colab_type="code" outputId="7e99e2ea-a6bc-4909-d0c8-a63632946eee" colab={"base_uri": "https://localhost:8080/", "height": 294}
df.describe()
# + [markdown] id="ywGsKq8uDkdY" colab_type="text"
# ###The range for the numerical data types are pretty small.
# - `station_id`: 2-84
# - `bikes_available`: 0-27
# - `docks_available`: 0-27
#
# `int64` looks too large for such a small numbers.
#
# The current size is 2.1GB. Let's try to change the `int64`'s to `np.uint8`.
# + id="38mX59VnEsCC" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 173} outputId="f7392bd2-6dbb-4969-f216-effe4b1a9406"
df.station_id = df.station_id.astype(np.uint8)
df.info()
# + [markdown] id="oivLlLq7E9_T" colab_type="text"
# ###After changing only the `station_id`'s data type saved 400MB.
#
# Changing the other two datatypes.
# + id="Hn1KvR6vFTnO" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 173} outputId="38ff7426-8e8e-4641-c76f-450a4d979f6a"
df.bikes_available = df.bikes_available.astype(np.uint8)
df.docks_available = df.docks_available.astype(np.uint8)
df.info()
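# The per-column pattern above generalizes to a small helper (the helper name
# and demo frame are made up; the range check guards against silent overflow):

```python
import numpy as np
import pandas as pd

def downcast_to_uint8(df, columns):
    out = df.copy()
    for col in columns:
        # uint8 holds 0-255; refuse to downcast anything outside that range
        assert out[col].between(0, 255).all(), f"{col} out of uint8 range"
        out[col] = out[col].astype(np.uint8)
    return out

demo = pd.DataFrame({'station_id': [2, 84], 'bikes_available': [0, 27]})
before = demo.memory_usage(deep=True).sum()
demo = downcast_to_uint8(demo, ['station_id', 'bikes_available'])
after = demo.memory_usage(deep=True).sum()
print(before, after)
```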
# + id="Z5ZN5NuBFqYy" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="656c6f41-6f94-4313-8020-8256a1ce4c18"
double_df = pd.concat([df.copy(), df.copy()], axis='index')
double_df.shape
# + id="I94bjwAcGLJ0" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 191} outputId="92ad1f34-bd16-4d80-8453-f1a97564cf4a"
df_288M = pd.concat([double_df.copy(), double_df.copy()], axis='index')
print(df_288M.shape)
df_288M.info()
# + [markdown] id="LZVmSD2_IsPt" colab_type="text"
# ###Bonus
# - Let's see how long will it take to re-mean a column with 288 millions of entries. Lay back. Will take a while...
# + id="-c0GQJjeHH-U" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="99d32825-e24c-439d-a071-5af5eff15920"
_mean = df.station_id.mean()
@timer
def re_mean(_df, method):
if method == 'series':
_df.station_id - _mean
elif method == 'map':
_df.station_id.map(lambda st_id: st_id - _mean)
    elif method == 'apply':
        # note: re_mean_for_apply is not defined in this notebook; only the 'series' path is exercised below
        _df.apply(func=re_mean_for_apply, axis='columns')
return 1
_method = 'series'
_,time_elapsed = re_mean(df_288M, _method)
print(f'{_method:6}, time elapsed: {time_elapsed:7.3f}')
# + [markdown] id="kr7bjOQEIC1g" colab_type="text"
# ###Are you serious!?
#
# 3.2 sec for **288 millions** of rows.
| squeeze_the_dataframe.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
videos = """
https://www.youtube.com/watch?v=RBShCX3-BtQ&t=1099s
https://www.youtube.com/watch?v=WnipnOcDkx8
https://www.youtube.com/watch?v=fzuKoKTxfEs
https://www.youtube.com/watch?v=Hioy9UvH4yE
https://www.youtube.com/watch?v=98CiTQQtqak
https://www.youtube.com/watch?v=HKnXUGo_H2U
https://www.youtube.com/watch?v=F15a-MH_Hzg&t=1142s
https://www.youtube.com/watch?v=3-DoQSmz5aI
https://www.youtube.com/watch?v=ViJXVHWTRz8&t=1649s
https://www.youtube.com/watch?v=XLV720mng6w
https://www.youtube.com/watch?v=gvPiF-zXur4
https://www.youtube.com/watch?v=mIc0MmSoaiI
https://www.youtube.com/watch?v=oW3yttdKVm0
https://www.youtube.com/watch?v=zHC1MPeSP34
https://www.youtube.com/watch?v=6i8PG2cTAUU
https://www.youtube.com/watch?v=8DMyfcjyrZQ&t=756s
https://www.youtube.com/watch?v=rkwCky94IhQ
https://www.youtube.com/watch?v=Y6RYZpbANJI&t=1451s
https://www.youtube.com/watch?v=oJPtCCTzs_o
https://www.youtube.com/watch?v=O_rK97AUOIA
https://www.youtube.com/watch?v=Jw7B8efygWI
https://www.youtube.com/watch?v=TxeBIVuXRHM
https://www.youtube.com/watch?v=EYBSXaK3cjI
https://www.youtube.com/watch?v=ekKcn3QCU5E
https://www.youtube.com/watch?v=OZMsLGKDv14
https://www.youtube.com/watch?v=jJN0vH42wcs
https://www.youtube.com/watch?v=Q54z0mutz5Q&t=321s
https://www.youtube.com/watch?v=ZcVRJSDCof0
https://www.youtube.com/watch?v=te2Be5GNNys&t=1915s
https://www.youtube.com/watch?v=jcvIfHTy1H0&t=1148s
https://www.youtube.com/watch?v=O19y2ZG8Y-k&t=1464s
https://www.youtube.com/watch?v=5zAsyNKfzps
https://www.youtube.com/watch?v=qVpcfmZF1oY
https://www.youtube.com/watch?v=L2W4inT6j3c
https://www.youtube.com/watch?v=g3iwaxDKQGQ&t=1940s
https://www.youtube.com/watch?v=XldSJLh8kMk&t=592s
https://www.youtube.com/watch?v=DFFHEbk_iYs&t=1158s
https://www.youtube.com/watch?v=x8Gl_zIC9CQ
https://www.youtube.com/watch?v=QLz7USNFnng
https://www.youtube.com/watch?v=vshZhU5dbto
https://www.youtube.com/watch?v=skGDf57Q9xs
https://www.youtube.com/watch?v=r6f3OoHVkgU
https://www.youtube.com/watch?v=QxVzXbS1WFE
https://www.youtube.com/watch?v=WJkSAHvYyqk
https://www.youtube.com/watch?v=FFtFXiIgsyk
https://www.youtube.com/watch?v=Cq6Yjwsu63U
https://www.youtube.com/watch?v=FcD9MpeWqEI
https://www.youtube.com/watch?v=79ax7ZQss5M
https://www.youtube.com/watch?v=Wyb6i3GoJv4
https://www.youtube.com/watch?v=I2BaN8cm6NY
https://www.youtube.com/watch?v=ZJ8jw3C9vB8
https://www.youtube.com/watch?v=PazCOj1nb3c
https://www.youtube.com/watch?v=XMx9tqwzQS4
https://www.youtube.com/watch?v=sZFykBDVjOI
https://www.youtube.com/watch?v=NqCp8Csl7PE
https://www.youtube.com/watch?v=zLEmAKdG9vY
https://www.youtube.com/watch?v=IAeN0jG9Hwc
https://www.youtube.com/watch?v=0jySfrZhs50
https://www.youtube.com/watch?v=CXnW4hJDp54
https://www.youtube.com/watch?v=Rkvd5p3I11M
https://www.youtube.com/watch?v=PRN5UrsDF6w
https://www.youtube.com/watch?v=RlejfOWnKgQ
https://www.youtube.com/watch?v=FNfbmh93DFo
https://www.youtube.com/watch?v=nGnyZ5c00Mo
https://www.youtube.com/watch?v=EHlGktcfDZc
https://www.youtube.com/watch?v=HkOQy1Nxpis&t=1571s
https://www.youtube.com/watch?v=m7mwwtsblIs
https://www.youtube.com/watch?v=-mAdHcMGmEs&t=92s
https://www.youtube.com/watch?v=IixUPM11VMY
https://www.youtube.com/watch?v=n3AR3nQEc7Y
https://www.youtube.com/watch?v=OWbPqC2i71c
https://www.youtube.com/watch?v=670FyeJhnps
https://www.youtube.com/watch?v=8LuihYdx0FQ
https://www.youtube.com/watch?v=1av_p0DPDas
https://www.youtube.com/watch?v=Jn2oeTajYCw
https://www.youtube.com/watch?v=2M8U_mxGbcw&ab_channel=JinnyboyTVHangouts
https://www.youtube.com/watch?v=7_-4TXv6pvY
https://www.youtube.com/watch?v=rMpf5j7683M
https://www.youtube.com/watch?v=lfoBmafO37A
https://www.youtube.com/watch?v=wnxWhWRk4Sc
https://www.youtube.com/watch?v=Zz3NcuK7QHM&t=175s
https://www.youtube.com/watch?v=X4QapKCCneQ
https://www.youtube.com/watch?v=281TVct4AvE
https://www.youtube.com/watch?v=T218uk6pi4E&t=1097s
https://www.youtube.com/watch?v=3pImfMhLYhA
https://www.youtube.com/watch?v=luGvazsVsHc&t=1839s
https://www.youtube.com/watch?v=6syqXC_o6gs
https://www.youtube.com/watch?v=LsV8tnGSPEQ
https://www.youtube.com/watch?v=BHF0ift4aRA
https://www.youtube.com/watch?v=745rfUlGzgM
https://www.youtube.com/watch?v=TqLaI2X4jj0
https://www.youtube.com/watch?v=9SKGSMRpymc
https://www.youtube.com/watch?v=C3U2AtbBjHY
https://www.youtube.com/watch?v=NZXHVJEkxm8
https://www.youtube.com/watch?v=-L7y6_6Ar4s
https://www.youtube.com/watch?v=6RWRMNiBpJA
https://www.youtube.com/watch?v=7izYax1bCwg
https://www.youtube.com/watch?v=ezYaMrChz4M&t=2265s
https://www.youtube.com/watch?v=e4mNzKOZrJ0
https://www.youtube.com/watch?v=R7nCJrdRWyc
https://www.youtube.com/watch?v=V2k9OWeTX84
https://www.youtube.com/watch?v=kbOrRjEDnDU
https://www.youtube.com/watch?v=1EqlSuT6vUY
https://www.youtube.com/watch?v=ezFAYdIwu_8&t=4264s
https://www.youtube.com/watch?v=U4dYpNf8oWY
https://www.youtube.com/watch?v=eXpirKKPBUA
https://www.youtube.com/watch?v=5cG8wHdaKIU
https://www.youtube.com/watch?v=EjzkQeWKwQw
https://www.youtube.com/watch?v=adq-khQo2eQ
https://www.youtube.com/watch?v=xsjayMqoC0Y
https://www.youtube.com/watch?v=aa5r9pws0AI
https://www.youtube.com/watch?v=NCiDUw4mQgE
https://www.youtube.com/watch?v=6K1feeFnKpQ
https://www.youtube.com/watch?v=7wiSVCBFmFc
https://www.youtube.com/watch?v=N5EfPvQKW90
https://www.youtube.com/watch?v=iugevNPFp2Y
https://www.youtube.com/watch?v=fgaCmnMdZ0c
https://www.youtube.com/watch?v=UryaihXnjBk
https://www.youtube.com/watch?v=lvIbeZ3qgsU
https://www.youtube.com/watch?v=l6bf4RFE31o
https://www.youtube.com/watch?v=7dPzh9PpLs8
https://www.youtube.com/watch?v=HDuY9UdkkaE
https://www.youtube.com/watch?v=-XwvfCTnseI
https://www.youtube.com/watch?v=jhLgzHQnAHU
https://www.youtube.com/watch?v=TzfCaQ8vUMc
https://www.youtube.com/watch?v=Pkv6d83oe8s
https://www.youtube.com/watch?v=woDYSE37_rI
https://www.youtube.com/watch?v=3pKA5NvTCOA&t=1035s
https://www.youtube.com/watch?v=shV1F1JlDzk
https://www.youtube.com/watch?v=rxmMRAzLRJ0
https://www.youtube.com/watch?v=Yv7uONh96tc
https://www.youtube.com/watch?v=H4M_ZUDAT3k
https://www.youtube.com/watch?v=dykz_vuzJh0&t=2554s
https://www.youtube.com/watch?v=UWm1ESejkzs
https://www.youtube.com/watch?v=otK3Gl_XX4c
https://www.youtube.com/watch?v=xMm23vU8_og
https://www.youtube.com/watch?v=mJuKUwFpDgI
https://www.youtube.com/watch?v=_7Un0q6a9zg&t=1931s
https://www.youtube.com/watch?v=3b0bYe36KNE
https://www.youtube.com/watch?v=RqG0tHR9D1g
https://www.youtube.com/watch?v=eWNQhDNoSsY
https://www.youtube.com/watch?v=tdcjTfzYSbk
https://www.youtube.com/watch?v=WSVo-qvu1GQ&t=1505s
https://www.youtube.com/watch?v=lPnRg9UsP0M&t=2400s
https://www.youtube.com/watch?v=vRnkHzyyBc4
https://www.youtube.com/watch?v=6R87P6YD2c4&t=1061s
https://www.youtube.com/watch?v=CSP1XPZu4YY
https://www.youtube.com/watch?v=VlovwtVYuw0&t=619s
https://www.youtube.com/watch?v=MCHDyLJDJrg&t=2610s
https://www.youtube.com/watch?v=7byI9tIJeRk
https://www.youtube.com/watch?v=gW9sv26g4Io
https://www.youtube.com/watch?v=pFBohAGNYWU
https://www.youtube.com/watch?v=8QhPCqS5aBA
https://www.youtube.com/watch?v=hi4MpQZVzYY
https://www.youtube.com/watch?v=ZDloTzTfenM&t=4143s
https://www.youtube.com/watch?v=cSxV6oacgaE
https://www.youtube.com/watch?v=myPdnyUGvUk
https://www.youtube.com/watch?v=JyAEITJk1AI
https://www.youtube.com/watch?v=YaeQ27BgUN0
https://www.youtube.com/watch?v=OpDeoZzKHBM
https://www.youtube.com/watch?v=jzUidmnXGu8
https://www.youtube.com/watch?v=3mt-_JgTm2E
https://www.youtube.com/watch?v=6yW185RJkGw
https://www.youtube.com/watch?v=QIkOzfAcsmE&t=1783s
https://www.youtube.com/watch?v=eFDcnTupdeQ
https://www.youtube.com/watch?v=sT3vpK8npYk
https://www.youtube.com/watch?v=uaSWgcKn0RM
https://www.youtube.com/watch?v=BrYHWQ4xmSc
https://www.youtube.com/watch?v=alUMLUEUdCM
https://www.youtube.com/watch?v=LXSNbZF9ag4
https://www.youtube.com/watch?v=8QNgzWcc7U4
https://www.youtube.com/watch?v=5VpSUeWPEuo
https://www.youtube.com/watch?v=YtCn4hNLk7o
https://www.youtube.com/watch?v=attPdWb6UVo
https://www.youtube.com/watch?v=Yzagxa4bn-M
https://www.youtube.com/watch?v=vKpmE4ZFnGE
https://www.youtube.com/watch?v=b9YHjghfd2E&t=4654s
https://www.youtube.com/watch?v=cs2LkcSpNpw
https://www.youtube.com/watch?v=v-RunsFvf8o
https://www.youtube.com/watch?v=f7qEqPNzGk8
https://www.youtube.com/watch?v=rjjK4aAbKnw
https://www.youtube.com/watch?v=TYL7mmbpsbo
https://www.youtube.com/watch?v=-2d7wzvNOCE
https://www.youtube.com/watch?v=1FLelqfAyZU
https://www.youtube.com/watch?v=HRd3RMQtkDs
https://www.youtube.com/watch?v=5SOInBKNP18
https://www.youtube.com/watch?v=ONUgGzCm00E
https://www.youtube.com/watch?v=5t_yyrP2OdI
"""
videos = list(set(filter(None, videos.split('\n'))))
len(videos)
import youtube_dl
# +
import mp
from tqdm import tqdm
def loop(urls):
    urls = urls[0]
    ydl_opts = {
        'format': 'bestaudio/best',
        'postprocessors': [{
            'key': 'FFmpegExtractAudio',
            'preferredcodec': 'mp3',
            'preferredquality': '192',
        }],
        # youtube_dl expects 'nocheckcertificate', not 'no-check-certificate'
        'nocheckcertificate': True
    }
    for i in tqdm(range(len(urls))):
        try:
            with youtube_dl.YoutubeDL(ydl_opts) as ydl:
                ydl.download([urls[i]])
        except Exception:
            pass
# +
# loop((videos,))
# -
import mp
mp.multiprocessing(videos, loop, cores = 12, returned = False)
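# The `mp` module used above is a project-local helper, not a published package. A minimal
# stdlib sketch of the same fan-out with `multiprocessing.Pool`, assuming each worker gets one
# chunk of URLs (`download_chunk` is a placeholder standing in for the real `loop` worker):

```python
from multiprocessing import Pool

def chunk(items, n):
    """Split items into n roughly equal consecutive chunks, one per worker."""
    k, m = divmod(len(items), n)
    return [items[i * k + min(i, m):(i + 1) * k + min(i + 1, m)] for i in range(n)]

def download_chunk(urls):
    # placeholder: the real worker would run the youtube_dl loop over `urls`
    return len(urls)

if __name__ == '__main__':
    demo_urls = ['url%d' % i for i in range(10)]
    with Pool(3) as pool:
        handled = pool.map(download_chunk, chunk(demo_urls, 3))
    assert sum(handled) == len(demo_urls)  # every URL assigned exactly once
```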
| data/semisupervised-manglish/download-videos.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="j0my0cGhO5GN" colab_type="text"
# ## Fetch the data
# + id="ClsIadarUHV2" colab_type="code" outputId="d9a9ef8b-6468-4f7e-dfea-b9a1a11d7e63" colab={"base_uri": "https://localhost:8080/", "height": 34}
import pandas as pd
import numpy as np
from sklearn.datasets import fetch_openml
mnist = fetch_openml('mnist_784', version=1)
mnist.keys()
# + id="-QrZuWpYjnT3" colab_type="code" outputId="db783b8b-5cba-40a4-a69c-1999af4bb167" colab={"base_uri": "https://localhost:8080/", "height": 34}
X, y = mnist["data"], mnist["target"]
print(X.shape)
# + id="eBkKFb3LmFfV" colab_type="code" outputId="39b9844e-b196-43e4-988d-e710016e68dc" colab={"base_uri": "https://localhost:8080/", "height": 248}
# %matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
some_digit = X[41000]
some_digit_image = some_digit.reshape(28, 28)
plt.imshow(some_digit_image, cmap = matplotlib.cm.binary, interpolation="nearest")
plt.axis("off")
plt.show()
# + [markdown] id="R3ZNx32fKxm0" colab_type="text"
# ## Split the data
# + id="0ZDHnFD8ms2j" colab_type="code" colab={}
y = y.astype(np.uint8)
train_range = 60000
X_train, X_test, y_train, y_test = X[:train_range], X[train_range:], y[:train_range], y[train_range:]
shuffle_index = np.random.permutation(train_range)
X_train, y_train = X_train[shuffle_index], y_train[shuffle_index]
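# A quick numpy check (toy arrays, not MNIST) that indexing X and y with the same permutation
# keeps every sample paired with its label — unlike shuffling the two arrays separately:

```python
import numpy as np

X_demo = np.arange(10).reshape(5, 2)   # sample i has features [2i, 2i+1]
y_demo = np.arange(5)                  # and label i

perm = np.random.permutation(len(y_demo))
X_shuf, y_shuf = X_demo[perm], y_demo[perm]

# after shuffling, every row still sits next to its own label
for row, label in zip(X_shuf, y_shuf):
    assert list(row) == [2 * label, 2 * label + 1]
```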
# + [markdown] id="0e1Om8pzDX_l" colab_type="text"
# ## Grid search
# + id="fIE4Uz0WOISV" colab_type="code" outputId="3d2ec7d5-0bf7-4e9d-e143-b32f181e0e9e" colab={"base_uri": "https://localhost:8080/", "height": 86}
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import GridSearchCV
grid_params = {
'n_neighbors': [3,4,5],
'weights': ['uniform', 'distance']
}
gs = GridSearchCV(
KNeighborsClassifier(),
grid_params,
verbose = 3,
cv = 5,
n_jobs =-1
)
gs_results = gs.fit(X_train, y_train)
# + id="EHieOjcqUyVu" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 104} outputId="cf11eaaa-3907-4c2f-cfab-480fffe1db19"
print(gs_results.best_score_)
print(gs_results.best_estimator_)
print(gs_results.best_params_)
# + [markdown] id="6vsBcsUdKWqS" colab_type="text"
# Best params:
# `{'n_neighbors': 4, 'weights': 'distance'}`
| number_detection_97.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Python Matplot
#
# Python can use matplotlib to produce 2D and 3D plots.
#
# [matplotlib](https://matplotlib.org/tutorials/index.html)
#
# +
import math
import matplotlib.pyplot as plt
ln = [math.log(i) for i in range(1, 30)]
plt.plot(range(1, 30), ln, label='ln')
plt.plot(range(1, 30), range(1, 30), label='linear')
plt.xlabel('x label')
plt.ylabel('y label')
plt.title("General function")
plt.legend()
plt.show()
# +
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import numpy as np
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
for c, z in zip(['r', 'g', 'b', 'y'], [30, 20, 10, 0]):
xs = np.arange(20)
ys = np.random.rand(20)
# You can provide either a single color or an array. To demonstrate this,
# the first bar of each set will be colored cyan.
cs = [c] * len(xs)
cs[0] = 'c'
ax.bar(xs, ys, zs=z, zdir='y', color=cs, alpha=0.8)
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
plt.show()
| python/matplot/py-matplotlib.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python(scRFE1)
# language: python
# name: scrfe1
# ---
# import dependencies
import numpy as np
import pandas as pd
import scanpy as sc
import random
import logging as logg
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import StratifiedKFold
from sklearn.feature_selection import RFECV
from sklearn.metrics import accuracy_score
from sklearn.inspection import permutation_importance
import matplotlib.pyplot as plt
from tqdm import tqdm
# transform all category columns in string columns
def columnToString (dataMatrix):
cat_columns = dataMatrix.obs.select_dtypes(['category']).columns
dataMatrix.obs[cat_columns] = dataMatrix.obs[cat_columns].astype(str)
return dataMatrix
# remove observations that are NaN for the category
def filterNormalize (dataMatrix, classOfInterest, verbosity):
np.random.seed(644685)
dataMatrix = dataMatrix[dataMatrix.obs[classOfInterest]!='nan']
dataMatrix = dataMatrix[~dataMatrix.obs[classOfInterest].isna()]
if verbosity == True:
print ('Removed NaN observations in the selected category')
return dataMatrix
# set the A/B labels for classification
def labelSplit (dataMatrix, classOfInterest, labelOfInterest, verbosity):
dataMatrix = filterNormalize (dataMatrix, classOfInterest, verbosity)
dataMatrix.obs['classification_group'] = 'B'
dataMatrix.obs.loc[dataMatrix.obs[dataMatrix.obs[classOfInterest]==labelOfInterest]
.index,'classification_group'] = 'A' #make labels based on A/B of classofInterest
return dataMatrix
# downsample observations to balance the groups
def downsampleToSmallestCategory(dataMatrix, random_state, min_cells,
keep_small_categories, verbosity,
classOfInterest = 'classification_group',
) -> sc.AnnData:
"""
returns an annData object in which all categories in 'classOfInterest' have
the same size
classOfInterest
column with the categories to downsample
min_cells
Minimum number of cells to downsample.
Categories having less than `min_cells` are discarded unless
keep_small_categories is True
keep_small_categories
        By default, categories with fewer than `min_cells` cells are discarded.
        Set to True to keep them.
"""
counts = dataMatrix.obs[classOfInterest].value_counts(sort=False)
if len(counts[counts < min_cells]) > 0 and keep_small_categories is False:
logg.warning(
"The following categories have less than {} cells and will be "
"ignored: {}".format(min_cells, dict(counts[counts < min_cells]))
)
min_size = min(counts[counts >= min_cells])
sample_selection = None
for sample, num_cells in counts.items():
if num_cells <= min_cells:
if keep_small_categories:
sel = dataMatrix.obs.index.isin(
dataMatrix.obs[dataMatrix.obs[classOfInterest] == sample].index)
else:
continue
else:
sel = dataMatrix.obs.index.isin(
dataMatrix.obs[dataMatrix.obs[classOfInterest] == sample]
.sample(min_size, random_state=random_state)
.index
)
if sample_selection is None:
sample_selection = sel
else:
sample_selection |= sel
logg.info(
        "The cells in category {!r} have been down-sampled to {} cells each. "
        "The original counts were {}".format(classOfInterest, min_size, dict(counts))
)
return dataMatrix[sample_selection].copy()
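# A toy illustration of the downsampling rule implemented above, using a plain DataFrame in
# place of an AnnData object (the group sizes are hypothetical):

```python
import pandas as pd

obs = pd.DataFrame({'group': ['A'] * 8 + ['B'] * 3 + ['C'] * 1})
min_cells = 2

counts = obs['group'].value_counts(sort=False)
kept = counts[counts >= min_cells]   # 'C' has only 1 cell and is discarded
min_size = kept.min()                # balance every kept group to the smallest

balanced = pd.concat(
    obs[obs['group'] == g].sample(min_size, random_state=0) for g in kept.index
)
assert balanced['group'].value_counts().to_dict() == {'A': 3, 'B': 3}
```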
# build the random forest classifier and perform feature elimination
def makeOneForest (dataMatrix, classOfInterest, labelOfInterest, nEstimators,
randomState, min_cells, keep_small_categories,
nJobs, oobScore, Step, Cv, verbosity):
"""
Builds and runs a random forest for one label in a class of interest
Parameters
----------
dataMatrix : anndata object
The data file of interest
classOfInterest : str
The class you will split the data by in the set of dataMatrix.obs
labelOfInterest : str
        The specific label within the class that the random forest will run a
        "one vs all" classification on
nEstimators : int
The number of trees in the forest
randomState : int
Controls random number being used
nJobs : int
The number of jobs to run in parallel
oobScore : bool
Whether to use out-of-bag samples to estimate the generalization accuracy
Step : float
Corresponds to percentage of features to remove at each iteration
Cv : int
Determines the cross-validation splitting strategy
verbosity : bool
Whether to include print statements.
Returns
-------
feature_selected : list
list of top features from random forest
selector.estimator_.feature_importances_ : list
        list of top Gini importances corresponding to the features
score : numpy.float
Score of underlying estimator.
X_new : sparse matrix
Transformed array of selected features.
y : pandas series
Target labels.
"""
splitDataMatrix = labelSplit (dataMatrix, classOfInterest, labelOfInterest, verbosity)
downsampledMatrix = downsampleToSmallestCategory (dataMatrix = splitDataMatrix,
random_state = randomState, min_cells = min_cells,
keep_small_categories = keep_small_categories, verbosity = verbosity,
classOfInterest = 'classification_group' )
if verbosity == True:
print(labelOfInterest)
print(pd.DataFrame(downsampledMatrix.obs.groupby(['classification_group',classOfInterest])[classOfInterest].count()))
feat_labels = downsampledMatrix.var_names
X = downsampledMatrix.X
y = downsampledMatrix.obs['classification_group'] #'A' or 'B' labels from labelSplit
clf = RandomForestClassifier(n_estimators = nEstimators, random_state = randomState,
n_jobs = nJobs, oob_score = oobScore)
Cv = StratifiedKFold(Cv)
selector = RFECV(clf, step = Step, cv = Cv, scoring='f1_weighted', min_features_to_select=2)
clf.fit(X, y)
selector.fit(X, y)
feature_selected = feat_labels[selector.support_]
dataMatrix.obs['classification_group'] = 'B'
X_new = selector.fit_transform(X, y)
selector.fit(X_new, y)
score = selector.score(X_new, y)
feature_selected = feature_selected[selector.support_]
return feature_selected, selector.estimator_.feature_importances_,score,X_new,y
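# A self-contained sketch of the RFECV step used above, on synthetic data (the feature names
# are made-up stand-ins for genes, not real expression data):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV

X, y = make_classification(n_samples=120, n_features=10, n_informative=3,
                           random_state=0)
feat_labels = np.array(['gene%d' % i for i in range(10)])

selector = RFECV(RandomForestClassifier(n_estimators=50, random_state=0),
                 step=0.2, cv=3, min_features_to_select=2)
selector.fit(X, y)

# support_ is a boolean mask over the original features, as in makeOneForest
feature_selected = feat_labels[selector.support_]
```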
# write the results
def resultWrite (classOfInterest, results_df, labelOfInterest,
feature_selected, feature_importance):
column_headings = []
column_headings.append(labelOfInterest)
column_headings.append(labelOfInterest + '_gini')
resaux = pd.DataFrame(columns = column_headings)
resaux[labelOfInterest] = feature_selected
resaux[labelOfInterest + '_gini'] = feature_importance
resaux = resaux.sort_values(by = [labelOfInterest + '_gini'], ascending = False)
resaux.reset_index(drop = True, inplace = True)
results_df = pd.concat([results_df, resaux], axis=1)
return results_df
# main scRFE function
def scRFE (adata, classOfInterest, nEstimators = 1000, randomState = 0, min_cells = 15,
keep_small_categories = True, nJobs = -1, oobScore = True, Step = 0.2, Cv = 5,
verbosity = True):
"""
Builds and runs a random forest with one vs all classification for each label
for one class of interest
Parameters
----------
adata : anndata object
The data file of interest
classOfInterest : str
The class you will split the data by in the set of dataMatrix.obs
nEstimators : int
The number of trees in the forest
randomState : int
Controls random number being used
min_cells : int
Minimum number of cells in a given class to downsample.
keep_small_categories : bool
Whether to keep classes with small number of observations, or to remove.
nJobs : int
The number of jobs to run in parallel
oobScore : bool
Whether to use out-of-bag samples to estimate the generalization accuracy
Step : float
Corresponds to percentage of features to remove at each iteration
Cv : int
Determines the cross-validation splitting strategy
verbosity : bool
Whether to include print statements.
Returns
-------
results_df : pd.DataFrame
Dataframe with results for each label in the class, formatted as
"label" for one column, then "label + gini" for the corresponding column.
score_df: dict
Score for each label in classOfInterest.
"""
dataMatrix = adata.copy()
dataMatrix = columnToString (dataMatrix)
dataMatrix = filterNormalize (dataMatrix, classOfInterest, verbosity)
results_df = pd.DataFrame()
score_df = {}
for labelOfInterest in tqdm(np.unique(dataMatrix.obs[classOfInterest])):
dataMatrix_labelOfInterest = dataMatrix.copy()
        feature_selected, feature_importance, model_score, X_new, y = makeOneForest(dataMatrix = dataMatrix_labelOfInterest,
classOfInterest = classOfInterest, labelOfInterest = labelOfInterest,
nEstimators = nEstimators, randomState = randomState, min_cells = min_cells,
keep_small_categories = keep_small_categories,
nJobs = nJobs, oobScore = oobScore, Step= Step, Cv=Cv, verbosity=verbosity)
results_df = resultWrite (classOfInterest, results_df,
labelOfInterest = labelOfInterest,
feature_selected = feature_selected,
feature_importance = feature_importance)
score_df[labelOfInterest] = model_score
return results_df,score_df
| scripts/.ipynb_checkpoints/scRFE-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.5 64-bit (''neurogang'': conda)'
# language: python
# name: python37564bitneurogangconda9e81ba3b30864883b8dc0d8841eb01c6
# ---
# +
from os import listdir
from shutil import copyfile
from PIL import Image
from random import random, seed
import numpy as np
import matplotlib.pyplot as plt
checks = ["./cifar10pngs/train/airplane/", "./cifar10pngs/test/airplane/"]
src_dir = "./images/"
cifar = list()
for folder in checks:
    for pic in listdir(folder):
        # plt.imread returns the image as a numpy array
        cifar.append(plt.imread(folder + pic))
count = 0
for pic in listdir(src_dir):  # iterate the directory's files, not the characters of the path string
    bob = plt.imread(src_dir + pic)
    # `bob in cifar` is ambiguous for arrays; compare element-wise instead
    if any(np.array_equal(bob, im) for im in cifar):
        count += 1
print(count)
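# Comparing every probe against every stored array is linear per lookup; hashing the raw
# pixel bytes gives constant-time set membership instead — a sketch with tiny made-up arrays:

```python
import hashlib
import numpy as np

def digest(img):
    # md5 of the raw pixel buffer; identical pixels -> identical digest
    return hashlib.md5(np.ascontiguousarray(img).tobytes()).hexdigest()

known = {digest(im) for im in [np.zeros((2, 2)), np.ones((2, 2))]}

probe_dup = np.ones((2, 2))          # byte-identical to a stored image
probe_new = np.full((2, 2), 2.0)     # not seen before
assert digest(probe_dup) in known
assert digest(probe_new) not in known
```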
| PUZZLE2/cifar.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# %matplotlib inline
# +
x = np.linspace(-30, 60, 300)
def func(x):
return (x)**3 + (x+20)**2*30 - 50000
x_a = -20
x_b = 40
slope1 = (func(x_b) - func(x_a))/(x_b - x_a)
intercept1 = func(x_b) - x_b*slope1
def lin_app1(x):
return slope1*x + intercept1
x_c = -intercept1/slope1
slope2 = (func(x_c) - func(x_b))/(x_c - x_b)
intercept2 = func(x_c) - x_c*slope2
def lin_app2(x):
return slope2*x + intercept2
x_d = -intercept2/slope2
slope3 = (func(x_d) - func(x_c))/(x_d - x_c)
intercept3 = func(x_d) - x_d*slope3
def lin_app3(x):
return slope3*x + intercept3
with sns.axes_style('whitegrid'):
fig, ax = plt.subplots(figsize=(10, 6))
ax.plot(x, func(x), color='royalblue', linewidth=2)
ax.scatter([x_a, x_b, x_c, x_d],
[func(x_a), func(x_b), func(x_c), func(x_d)],
color='maroon', zorder=20, alpha=0.8)
ax.plot(x, lin_app1(x), color='dimgrey', linestyle='--')
ax.plot(x, lin_app2(x), color='dimgrey', linestyle='--')
    ax.plot(x, lin_app3(x), color='dimgrey', linestyle='--')
ax.plot([x_c, x_c], [func(x_c), lin_app1(x_c)], color='grey', linestyle='--')
ax.plot([x_d, x_d], [func(x_d), lin_app2(x_d)], color='grey', linestyle='--')
ax.text(-22, -40000, '$A$', size=15)
ax.text(38, 135000, '$B$', size=15)
ax.text(-2, -75000, '$C$', size=15)
ax.text(8, -59000, '$D$', size=15)
plt.tight_layout()
plt.savefig('../../assets/images/optimization/secant_method.png', bbox_inches='tight');
# -
x_c*slope1+intercept1
intercept1
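# The three hand-unrolled secant steps above generalize to a loop: each new point is the
# root of the chord through the previous two (the tolerance and iteration cap below are
# arbitrary choices, not from the notebook):

```python
def func(x):
    return x**3 + (x + 20)**2 * 30 - 50000

def secant(f, x0, x1, tol=1e-10, max_iter=50):
    for _ in range(max_iter):
        slope = (f(x1) - f(x0)) / (x1 - x0)
        x0, x1 = x1, x1 - f(x1) / slope   # chord root becomes the next point
        if abs(f(x1)) < tol:
            break
    return x1

root = secant(func, -20, 40)   # same endpoints A and B as in the plot
```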
| jupyter_notebooks/optimization/secant_method.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %matplotlib inline
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from dataset import MNISTDataset
from model import *
from scipy.spatial.distance import cdist
from matplotlib import gridspec
# -
dataset = MNISTDataset()
train_images = dataset.images_train[:20000]
test_images = dataset.images_test
len_train = len(train_images)
len_test = len(test_images)
#helper function to plot image
def show_image(idxs, data):
if type(idxs) != np.ndarray:
idxs = np.array([idxs])
fig = plt.figure()
gs = gridspec.GridSpec(1,len(idxs))
for i in range(len(idxs)):
ax = fig.add_subplot(gs[0,i])
ax.imshow(data[idxs[i],:,:,0])
ax.axis('off')
plt.show()
# ## Create the siamese net feature extraction model
img_placeholder = tf.placeholder(tf.float32, [None, 28, 28, 1], name='img')
net = mnist_model(img_placeholder, reuse=False)
# ## Restore from checkpoint and calc the features from all of train data
saver = tf.train.Saver()
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
ckpt = tf.train.get_checkpoint_state("model")
saver.restore(sess, "model/model.ckpt")
train_feat = sess.run(net, feed_dict={img_placeholder:train_images[:10000]})
# ## Searching for similar test images from trainset based on siamese feature
#generate new random test image
idx = np.random.randint(0, len_test)
im = test_images[idx]
#show the test image
show_image(idx, test_images)
print("This is image from id:", idx)
# +
#run the test image through the network to get the test features
saver = tf.train.Saver()
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
ckpt = tf.train.get_checkpoint_state("model")
saver.restore(sess, "model/model.ckpt")
search_feat = sess.run(net, feed_dict={img_placeholder:[im]})
#calculate the cosine similarity and sort
dist = cdist(train_feat, search_feat, 'cosine')
rank = np.argsort(dist.ravel())
#show the top n similar image from train data
n = 7
show_image(rank[:n], train_images)
print("retrieved ids:", rank[:n])
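# A minimal sketch of the ranking step with toy 2-D vectors (not real siamese embeddings):
# cosine distance ignores vector magnitude, so the near-parallel vector wins.

```python
import numpy as np
from scipy.spatial.distance import cdist

train_feat_demo = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
search_feat_demo = np.array([[2.0, 0.1]])   # almost parallel to [1, 0]

dist = cdist(train_feat_demo, search_feat_demo, 'cosine')  # 1 - cosine similarity
rank = np.argsort(dist.ravel())
assert rank[0] == 0   # [1, 0] is the closest despite the length difference
```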
| Similar image retrieval.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="XYzYCbp0KmQU" colab_type="code" outputId="5ba80934-3cff-4bf4-db5c-26acd712f7c1" colab={"base_uri": "https://localhost:8080/", "height": 445}
# !pip install -U tensorflow_datasets
# + id="dJH4uTm4LsQJ" colab_type="code" outputId="a8995413-d621-4270-f3de-b7c65737a528" colab={"base_uri": "https://localhost:8080/", "height": 34}
from __future__ import absolute_import, division, print_function, unicode_literals
# Import TensorFlow and TensorFlow Datasets
import tensorflow as tf
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
# Helper libraries
import math
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
# This will go away in the future.
# If this gives an error, you might be running TensorFlow 2 or above
# If so, then just comment out this line and run this cell again
tf.enable_eager_execution()
# + id="Xf9mTsSoN9Xl" colab_type="code" colab={}
import logging
logger = tf.get_logger()
logger.setLevel(logging.ERROR)
# + id="E-hD3MCCOATz" colab_type="code" colab={}
dataset, metadata = tfds.load('fashion_mnist', as_supervised=True, with_info=True)
train_dataset, test_dataset = dataset['train'], dataset['test']
# + id="ZA2FPG6hOUnH" colab_type="code" colab={}
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
# + id="fnW9l6LuN_p0" colab_type="code" outputId="a3b3558e-cafa-4bc7-ae6c-99c48e612cc8" colab={"base_uri": "https://localhost:8080/", "height": 51}
num_train_examples = metadata.splits['train'].num_examples
num_test_examples = metadata.splits['test'].num_examples
print("Number of training examples: {}".format(num_train_examples))
print("Number of test examples: {}".format(num_test_examples))
# + id="QyAPPX8QPHtY" colab_type="code" colab={}
def normalize(images, labels):
images = tf.cast(images, tf.float32)
images /= 255
return images, labels
# The map function applies the normalize function to each element in the train
# and test datasets
train_dataset = train_dataset.map(normalize)
test_dataset = test_dataset.map(normalize)
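# What normalize() does, in plain numpy: cast the uint8 pixels to float and scale
# [0, 255] onto [0, 1] (no TensorFlow needed to see the idea):

```python
import numpy as np

pixels = np.array([0, 51, 255], dtype=np.uint8)
scaled = pixels.astype(np.float32) / 255.0
assert scaled.min() == 0.0 and scaled.max() == 1.0
```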
# + id="lnAYYqV9PL_o" colab_type="code" outputId="c506c4ca-63b9-49e9-926d-9bbb71f6cd47" colab={"base_uri": "https://localhost:8080/", "height": 269}
# Take a single image, and remove the color dimension by reshaping
for image, label in test_dataset.take(1):
break
image = image.numpy().reshape((28,28))
# Plot the image - voila a piece of fashion clothing
plt.figure()
plt.imshow(image, cmap=plt.cm.binary)
plt.colorbar()
plt.grid(False)
plt.show()
# + id="cJ58sc8KPp9G" colab_type="code" outputId="80066180-49e7-444b-f440-172879efad27" colab={"base_uri": "https://localhost:8080/", "height": 592}
plt.figure(figsize=(10,10))
i = 0
for (image, label) in test_dataset.take(25):
image = image.numpy().reshape((28,28))
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(image, cmap=plt.cm.binary)
plt.xlabel(class_names[label])
i += 1
plt.show()
# + id="mRA-vXfEQDVy" colab_type="code" colab={}
model = tf.keras.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
# + id="y0A7wYo_PoCY" colab_type="code" colab={}
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
# + id="0zAP65QxUy66" colab_type="code" colab={}
BATCH_SIZE = 32
train_dataset = train_dataset.repeat().shuffle(num_train_examples).batch(BATCH_SIZE)
test_dataset = test_dataset.batch(BATCH_SIZE)
# + id="3YzZ8m7LUzRS" colab_type="code" outputId="9d4d6e4d-2c11-452b-9230-0a7f9e96a13f" colab={"base_uri": "https://localhost:8080/", "height": 204}
model.fit(train_dataset, epochs=5, steps_per_epoch=math.ceil(num_train_examples/BATCH_SIZE))
# + id="MmVORcguM3rF" colab_type="code" outputId="16eea0cb-e1e2-46e1-eb6b-00f599b053a3" colab={"base_uri": "https://localhost:8080/", "height": 51}
test_loss, test_accuracy = model.evaluate(test_dataset, steps=math.ceil(num_test_examples/32))
print('Accuracy on test dataset:', test_accuracy)
# + id="teFXn0urUWym" colab_type="code" colab={}
for test_images, test_labels in test_dataset.take(1):
test_images = test_images.numpy()
test_labels = test_labels.numpy()
predictions = model.predict(test_images)
# + id="PxEZzmVgWfdd" colab_type="code" outputId="404c7d89-40dc-4e5a-9a58-d0be2eda0868" colab={"base_uri": "https://localhost:8080/", "height": 34}
predictions.shape
# + id="2yprY4JbXC87" colab_type="code" outputId="125ecc25-6e6d-4d27-cd86-5c593c763830" colab={"base_uri": "https://localhost:8080/", "height": 68}
predictions[0]
# + id="h0c7J9qzXFT5" colab_type="code" outputId="0aec16a2-a095-45ca-daf8-c22b523aafe1" colab={"base_uri": "https://localhost:8080/", "height": 34}
np.argmax(predictions[0])
# + id="VDcW_h1WXGto" colab_type="code" outputId="c29a45e9-57ae-40cd-e5c8-b24baa9064fb" colab={"base_uri": "https://localhost:8080/", "height": 34}
test_labels[0]
# + id="J4iX--HcXJtg" colab_type="code" colab={}
def plot_image(i, predictions_array, true_labels, images):
predictions_array, true_label, img = predictions_array[i], true_labels[i], images[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
plt.imshow(img[...,0], cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label:
color = 'blue'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
100*np.max(predictions_array),
class_names[true_label]),
color=color)
def plot_value_array(i, predictions_array, true_label):
predictions_array, true_label = predictions_array[i], true_label[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
thisplot = plt.bar(range(10), predictions_array, color="#777777")
plt.ylim([0, 1])
predicted_label = np.argmax(predictions_array)
thisplot[predicted_label].set_color('red')
thisplot[true_label].set_color('blue')
# + id="MAsZ-NTpXPEJ" colab_type="code" outputId="adf70497-ac64-40a9-c86b-0486ac2c41bb" colab={"base_uri": "https://localhost:8080/", "height": 206}
i = 0
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions, test_labels)
# + id="20pkwreDXRar" colab_type="code" outputId="2c8828a1-61f8-49b5-c4b3-196a1d7312df" colab={"base_uri": "https://localhost:8080/", "height": 206}
i = 12
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions, test_labels)
# + id="FmlYCPoKXTBX" colab_type="code" outputId="3f25b95e-2fc6-41f5-f263-f899b3ffac5d" colab={"base_uri": "https://localhost:8080/", "height": 592}
# Plot the first X test images, their predicted label, and the true label
# Color correct predictions in blue, incorrect predictions in red
num_rows = 5
num_cols = 3
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
plot_value_array(i, predictions, test_labels)
# + id="2ZnpAkNpXWGT" colab_type="code" outputId="f046811a-1a6f-4d93-afdc-d85b166d8193" colab={"base_uri": "https://localhost:8080/", "height": 34}
# Grab an image from the test dataset
img = test_images[0]
print(img.shape)
# + id="SdyPrgbDXYfE" colab_type="code" outputId="61e5e735-3c27-41af-b153-16ca2fbcb853" colab={"base_uri": "https://localhost:8080/", "height": 34}
# Add the image to a batch where it's the only member.
img = np.array([img])
print(img.shape)
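# The wrapping trick above, isolated: Keras models predict on batches, so a single image
# needs a leading batch axis before it can be passed to model.predict:

```python
import numpy as np

img_demo = np.zeros((28, 28, 1))
batch = np.array([img_demo])          # same as img_demo[np.newaxis, ...]
assert batch.shape == (1, 28, 28, 1)  # a batch of one
```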
# + id="H1X3yP2BXaVz" colab_type="code" outputId="acafe19f-d3d2-4a51-b4c2-19538c27ac24" colab={"base_uri": "https://localhost:8080/", "height": 68}
predictions_single = model.predict(img)
print(predictions_single)
# + id="k1IUWYSqXdW0" colab_type="code" outputId="107d2e80-216c-4d3d-e052-b215554f0ef7" colab={"base_uri": "https://localhost:8080/", "height": 304}
plot_value_array(0, predictions_single, test_labels)
_ = plt.xticks(range(10), class_names, rotation=45)
# + id="TL9J4b76XgWq" colab_type="code" outputId="a7be499a-0d70-4b9c-aa32-c20357afeb0d" colab={"base_uri": "https://localhost:8080/", "height": 34}
np.argmax(predictions_single[0])
# + id="XEq3J4vgXiu6" colab_type="code" colab={}
| Fashion_MINST_with_Tensorflow.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# # Acknowledgements
#
# This educational text simply builds upon the inspiring work of a large number of visionaries. In addition, there are many great minds whose generosity in sharing their insights facilitated much of this work. It has been a great honour and privilege to have been supported in this endeavour. As an expression of gratitude, what follows is, in no particular order, a list of thanks:
#
#
# Thanks to Eric (Passawis) for his grammatical corrections to chapters 1, 2, 3, ... as well as his encouraging feedback.
#
# For his excellent feedback on chapter 1, I would like to thank <NAME> De <NAME>.
#
| Chapters/0_Acknowledgements.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # StatArea - PySpark Example
# ## Cleaning & Exploration
# load csv
stat_df = sqlContext.read\
.format("com.databricks.spark.csv")\
.options(header = True)\
.load("statarea.csv")\
.cache()
# +
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType
# count hyphen nulls ("-") per column
def count_hyphen_null(df, col):
return df.where(df[col] == "-").count()
# count cols with "-" ie. null
null_cols = stat_df.columns
print "Total rows: ", stat_df.count()
print "Hyphen nulls:", {col: count_hyphen_null(stat_df, col) for col in null_cols}
# +
# replace hyphen values with Null
hyphen_udf = udf(lambda row_value: None if row_value == "-" else row_value, StringType())
stat_df = (stat_df.withColumn("gameFtScore", hyphen_udf(stat_df.gameFtScore))
.withColumn("gameHtScore", hyphen_udf(stat_df.gameHtScore))
.withColumn("predSuccess", hyphen_udf(stat_df.predSuccess))
)
stat_df.select("gameFtScore", "gameHtScore", "predSuccess").show(5)
# drop Null values
stat_df = stat_df.dropna()
stat_df.select("gameFtScore", "gameHtScore", "predSuccess").show(5)
print "Total rows: ", stat_df.count()
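# The same cleaning in plain pandas (no Spark session needed): replace the "-" sentinel
# with a real null, then drop incomplete rows. The toy values below are hypothetical.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'gameFtScore': ['2-1', '-', '0-0'],
                   'predSuccess': ['yes', 'no', '-']})
df = df.replace('-', np.nan).dropna()   # exact-match replace, so '2-1' survives
assert len(df) == 1
```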
# -
# ## Extract Features
# view all DF columns
stat_df.printSchema()
# +
from pyspark.ml.feature import StringIndexer, VectorAssembler, IndexToString
# declare list of pipeline stages
stages = []
# declare list of categorical columns
categorical_cols = [
"leagueDivision", "leagueDivisionName", "teamHomeName", "teamAwayName",
"probOutcome", "gameHtScore", "gameFtScore", "predSuccess"
]
# drop less useful columns from DF
stat_df = stat_df.drop("gameID", "gamePlayDate", "gamePlayTime")
# index categorical cols
for col in categorical_cols:
stringIndexer = StringIndexer(
inputCol=col,
outputCol=col+"_indx")
stages += [stringIndexer]
# view all DF columns
stat_df.printSchema()
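# Conceptually, `StringIndexer` maps each distinct string to a numeric index, with the most frequent value receiving index 0.0. Below is a plain-Python sketch of that idea using made-up values; it is an illustration only, not Spark's actual implementation.

```python
from collections import Counter

# hypothetical categorical values, purely for demonstration
values = ["home", "away", "home", "draw", "home", "away"]

# most frequent value gets index 0.0, mirroring StringIndexer's default ordering
order = [v for v, _ in Counter(values).most_common()]
mapping = {v: float(i) for i, v in enumerate(order)}
indexed = [mapping[v] for v in values]
print(indexed)  # [0.0, 1.0, 0.0, 2.0, 0.0, 1.0]
```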
# +
from pyspark.sql.functions import col
# create list of features + labels
features_col = stat_df.columns[5:-3]
# cast feature cols to numeric double in place; using withColumn (rather than
# select) keeps the categorical columns that the StringIndexer stages still need
for c in features_col:
    stat_df = stat_df.withColumn(c, col(c).cast("double"))
# view all DF columns
stat_df.printSchema()
# +
from pyspark.ml import Pipeline
# Transform all features into vectors using VectorAssembler
assemblerInputs = [c + "_indx" for c in categorical_cols] + features_col
assembler = VectorAssembler(inputCols = assemblerInputs, outputCol="features")
stages += [assembler]
# create pipeline and fit all stages
pipeline = Pipeline(stages=stages)
pipelineModel = pipeline.fit(stat_df)
stat_df = pipelineModel.transform(stat_df)
# +
# rename "probOutcome_indx" to "label"
stat_df = stat_df.withColumnRenamed("probOutcome_indx", "label")
# split DF into train/test
(train, test) = stat_df.randomSplit([0.7, 0.3], seed=100)
print "Train data count: ", train.count()
print "Test data count: ", test.count()
# -
# ## Build and Train Models
# +
from time import time
from pyspark.ml.classification import LogisticRegression
# use logistic regression model
lr = LogisticRegression(
labelCol="label",
featuresCol="features",
maxIter=10)
# start timer
lr_start_time = time()
# train our model
lrModel = lr.fit(train)
# calculate time taken to train model
print "Training time taken (min): ", (time() - lr_start_time)/60
# -
# Make predictions on test data using the transform() method.
# LogisticRegression.transform() will only use the 'features' column.
predictions = lrModel.transform(test)
predictions.printSchema()
# +
from pyspark.ml.evaluation import BinaryClassificationEvaluator
# evaluate model performance; note that BinaryClassificationEvaluator
# reports area under the ROC curve (AUC) by default, not raw accuracy
evaluator = BinaryClassificationEvaluator(rawPredictionCol="rawPrediction")
accuracy_one = evaluator.evaluate(predictions)
print "Logistic regression AUC: ", accuracy_one
# -
# ## Hyper-parameter Tuning
# +
from pyspark.ml.tuning import ParamGridBuilder, CrossValidator
# Create ParamGrid for Cross Validation
paramGrid = (ParamGridBuilder()
.addGrid(lr.regParam, [0.01, 0.5, 2.0])
.addGrid(lr.elasticNetParam, [0.0, 0.5, 1.0])
.addGrid(lr.maxIter, [1, 5, 10])
.build())
# Create 5-fold CrossValidator
cv = CrossValidator(
estimator=lr,
estimatorParamMaps=paramGrid,
evaluator=evaluator,
numFolds=5)
# +
# This fit will likely take a fair amount of time because
# of the number of models that we're creating and testing
# start timer
cv_start_time = time()
# Run cross validations
cvModel = cv.fit(train)
# calculate time taken to tune parameters
print "Hyper-param tuning time taken (min): ", (time() - cv_start_time)/60
# +
# Use test set here so we can measure the accuracy of our model on new data
predictions_tuned = cvModel.transform(test)
# Evaluate the best model (again reporting AUC)
accuracy_two = evaluator.evaluate(predictions_tuned)
print "AUC after tuning: ", accuracy_two
print "AUC change after tuning: ", (accuracy_two - accuracy_one)
# +
from pyspark.ml.feature import IndexToString
# view model's feature weights
print "Model's intercept: ", cvModel.bestModel.interceptVector
# sample some data from predictions
sampled_df = predictions_tuned.sampleBy(
"label",
fractions={0: 0.1, 1: 0.2})
# View best model's predictions and probabilities of each prediction class
indx_to_string = IndexToString(
inputCol="label",
outputCol="probOutcome_new"
)
new_df = indx_to_string.transform(sampled_df)
new_df.select(
"prediction", "probability", "label",
"probOutcome", "probOutcome_new", "gameFtScore"
).show(10)
# -
# ## Decision Trees
# +
from pyspark.ml.classification import DecisionTreeClassifier
# Create initial Decision Tree Model
dt = DecisionTreeClassifier(
labelCol="label",
featuresCol="features",
maxDepth=3,
maxBins=6500)
# start timer
dt_start_time = time()
# Train model with Training Data
dtModel = dt.fit(train)
print "Decision tree time taken (min): ", (time() - dt_start_time)/60
# -
predictions_dt = dtModel.transform(test)
print "Decision tree accuracy: ", evaluator.evaluate(predictions_dt)
# ## Gradient-Boosted Trees (GBT)
# the label here is categorical, so we want the classifier (not the regressor) variant
from pyspark.ml.classification import GBTClassifier
| _archived/sstats/statarea-pyspark.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ps)
# language: python
# name: ps
# ---
# %matplotlib inline
# %load_ext autoreload
# %autoreload 2
# # %config InlineBackend.figure_format = 'retina'
import nawrapper as nw
import pymaster as nmt
import numpy as np
import matplotlib.pyplot as plt
from pixell import enmap
# ! echo $OMP_NUM_THREADS
# # File Loading
#
# We specify the filepaths here. In the first cell, we load in a map which has the WCS and shape information, with which we will crop all other maps to fit. When we take power spectra, we want all of the maps involved to have the same shape and WCS!
data_root = '/tigress/zequnl/cmb/data/from_choi/'
apopath = f'{data_root}/apo_mask/deep56_c7v5_car_190220_rect_master_apo_w0.fits'
steve_apo = enmap.read_map(apopath)
shape, wcs = steve_apo.shape, steve_apo.wcs
# Next, we load in the maps and masks. In this example, the same mask is used to speed up the spectra calculation, but in general each map will have a different mask.
# +
mapname_head = f"{data_root}/maps/ACTPol_148_D56_pa1_f150_s14_4way_split"
mask_file = f"{data_root}/window/deep56_s14_pa1_f150_c7v5_car_190220_rect_w0_cl0.00nK_pt1.00_nt0.0_T.fits"
mask_pol_file = f"{data_root}/window/deep56_s14_pa1_f150_c7v5_car_190220_rect_w0_cl0.00nK_pt1.00_nt0.0.fits"
beam_filename = f"{data_root}/beam/beam_tform_160201_s14_pa1_f150_jitter_CMB_deep56.txt"
# loop over splits and build a namap for each split
nsplits = 4
beam = nw.read_beam(beam_filename)
# we make the mask conform to the same WCS and shape
mask = enmap.read_map(mask_file)
mask = enmap.extract(mask, shape, wcs)
# -
# We correct for the pixel window function and apply the k-space filter using `preprocess_fourier`, with $|k_x| < 90$ and $|k_y| < 50$. The `preprocess_fourier` function will also call `enmap.extract` if you pass it `shape` and `wcs` information, in order to conform all the maps to the same geometry. If you load in a source map for example, you'll want to run `enmap.extract` on it like in the comments below, in order to get it to have the same shape and WCS as everything else.
#
# ### Legacy Support
# There is an important flag here, `legacy_steve = True`. At the time of this writing, Steve's code applies a slightly incorrect k-space filter, and offsets the maps by `(-1,-1)` in `WCS.CRPIX`.
namap_list = []
for i in range(nsplits):
# read map from disk and preprocess (i.e. k-space filter and pixwin)
map_I = enmap.read_map(f"{mapname_head}{i}_srcadd_I.fits") # get map
map_I = nw.preprocess_fourier(map_I, shape, wcs, legacy_steve=True)
# if you are performing a typical analysis, you would add in the sources here
# source_map = enmap.read_map(f"{mapname_head}{i}_srcs.fits")
# source_map = enmap.extract(source_map, shape, wcs)
# map_I = map_I + source_map
# create the namap_car, to bundle maps/masks/beams together
split_namap = nw.namap_car(
maps=map_I,
masks=mask,
beams=beam,
sub_wcs=wcs, sub_shape=shape)
namap_list.append(split_namap)
# # Compute Mode Coupling Matrix
binfile = f'{data_root}/binning/BIN_ACTPOL_50_4_SC_low_ell'
bins = nw.read_bins(binfile, is_Dell=True)
mc = nw.mode_coupling(
namap_list[0], namap_list[1], bins,
mcm_dir='/tigress/zequnl/cmb/data/mcm/example_steve',
overwrite=False
)
# # Computing Spectra
#
# Next, we apply the mode coupling matrix to each pair of `namap` objects. We will reuse the mode coupling object we computed, since all the masks are the same in this toy example. We take a flat mean of the 4 choose 2 = 6 cross spectra, and also compute the standard error.
spec_dict = {}
TT_cross_spectra = []
# TE_cross_spectra = []
# EE_cross_spectra = []
# we reuse the mode coupling matrix `mc` from earlier
for i in range(len(namap_list)):
for j in range(len(namap_list)):
if i >= j:
Cb = nw.compute_spectra(
namap_list[i], namap_list[j], mc=mc)
for clXY in Cb:
spec_dict[f"{clXY},{i},{j}"] = Cb[clXY]
if i > j:
TT_cross_spectra += [Cb['TT']]
# TE_cross_spectra += [Cb['TE']]
# TE_cross_spectra += [Cb['ET']]
# EE_cross_spectra += [Cb['EE']]
mean_Dltt = np.sum(TT_cross_spectra, axis=0) / len(TT_cross_spectra)
se_Dltt = np.std(TT_cross_spectra, axis=0)/np.sqrt(len(TT_cross_spectra))
# # Check Our Results
# We'll use the standard error on the cross-spectra as a quick debugging error bar. We'll do a better job later in this notebook.
# +
fig, axes = plt.subplots(2, 1, figsize=(8,8), sharex=True)
# plot steve spectra
specfile = f"{data_root}/ps/deep56_s14_pa1_f150_c7v5_car_190220_rect_window0_TT_lmax7925_fsky0.01081284_output.txt"
choi_ell, choi_dl, choi_delta_dl, _ = np.loadtxt(specfile, unpack=True)[:,2:54]
axes[0].errorbar( choi_ell, choi_dl, yerr=choi_delta_dl, fmt='k.',
lw=1, ms=1, label="Choi Reference Spectra" )
# plot our spectra
lb = mc.lb[1:-3]
notebook_dl = mean_Dltt[1:-3]
axes[0].errorbar(lb + 10, notebook_dl, # we slightly offset to not overlap
fmt='r.',
yerr=(notebook_dl / np.sqrt(2 * lb + 1) + se_Dltt[1:-3]),
lw=1, ms=1, label="this notebook")
axes[0].set_ylabel(r"$D_{\ell}$")
axes[0].legend(frameon=True)
# plot ratio
axes[1].axhline(0.0, ls='dashed', color='red')
axes[1].plot( lb[:-2], (notebook_dl[:-2] - choi_dl) / choi_delta_dl )
axes[1].set_ylabel(r'$\Delta C_{\ell} / \sigma$')
axes[1].set_xlabel(r'$\ell$')
plt.tight_layout()
# -
# ## Analytic Covariance Matrices
#
# Above, we estimated the covariance matrix from the standard error of the six split cross-spectra. We can instead estimate this analytically! We only have one mask in this example, which simplifies things a lot. If you assume each split has the same covariance matrix, then you only need to compute two covariances: the auto-spectrum and the cross-spectrum.
#
# By default, nawrapper will estimate the noise power spectrum using step functions.
test = nw.nacov(namap_list[0], namap_list[1], mc, mc, mc)
plt.plot(test.noise['T1T1'], label='noise')
plt.plot(test.signal['TT'], label='signal')
plt.legend()
# plt.yscale('log')
# plt.ylabel(r'power')
# Essentially you need to know the noise and signal cross-spectra. Since covariance is a bilinear map, the mean of six cross-spectra follows the expressions below.
cross_cov = nw.compute_covmat(
namap_list[0], namap_list[1], bins,
mc_11=mc, mc_12=mc, mc_22=mc)
auto_cov = nw.compute_covmat(
namap_list[0], namap_list[0], bins,
mc_11=mc, mc_12=mc, mc_22=mc)
# These covariance results are just dictionaries containing keys to the covariance matrices, which you can access via something like `cross_cov['TTTT']`. We'll now combine them to calculate the mean covariance matrix.
# +
from scipy.special import comb
# compute the number of cross-spectra from number of splits
n_spec = comb(nsplits, 2)
# weight covmats by autos and crosses to get mean covmat
cross_weight = n_spec**2 - n_spec
auto_weight = n_spec
combined_TT_cov = (
cross_cov['TTTT'] * cross_weight / n_spec +
auto_cov['TTTT'] * auto_weight / n_spec
) / (n_spec**2)
# +
fig, ax = plt.subplots(1, 1, figsize=(8,6), sharex=True)
# plot steve spectra
specfile = f"{data_root}/ps/deep56_s14_pa1_f150_c7v5_car_190220_rect_window0_TT_lmax7925_fsky0.01081284_output.txt"
choi_ell, choi_dl, choi_delta_dl, _ = np.loadtxt(specfile, unpack=True)[:,2:]
ax.errorbar( choi_ell, choi_dl, yerr=choi_delta_dl, fmt='k.',
lw=1, ms=3, label="Choi Reference Spectra" )
# plot our spectra
lb = mc.lb
ax.errorbar(lb + 30, mean_Dltt, # we slightly offset to not overlap
fmt='r.',
yerr=np.sqrt(np.diag(combined_TT_cov)) / (lb * (lb+1) / 2 / np.pi),
lw=1, ms=3, label="this notebook")
ax.set_ylabel(r"$D_{\ell}$")
ax.legend(frameon=True)
plt.yscale('log')
plt.tight_layout()
# +
## We could also compute the polarization power spectra; here is how they would be plotted.
##
# fig, axes = plt.subplots(2,1, sharex=True, figsize=(8,8))
# axes[0].set_title("TE")
# mean_Clte = np.sum(TE_cross_spectra, axis=0) / len(TE_cross_spectra)
# axes[0].plot(mc.lb, mean_Clte)
# axes[1].set_title("EE")
# mean_Clee = np.sum(EE_cross_spectra, axis=0) / len(EE_cross_spectra)
# axes[1].plot(mc.lb, mean_Clee)
# -
# # Bandpower Windows
bpw = mc.w00.get_bandpower_windows()
plt.imshow(bpw.reshape(58,7926), aspect=100)
plt.ylabel('bin')
plt.xlabel(r'$\ell$')
plt.title('TT bandpower window');
| notebooks/Reproduce Steve.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (MAE6286)
# language: python
# name: py36-mae6286
# ---
# ###### Content under Creative Commons Attribution license CC-BY 4.0, code under MIT license (c)2014 <NAME>, <NAME>, <NAME>. Based on [CFDPython](https://github.com/barbagroup/CFDPython), (c)2013 <NAME>, also under CC-BY license.
# # Space & Time
# ## Burgers' Equation
# Hi there! We have reached the final lesson of the series *Space and Time — Introduction to Finite-difference solutions of PDEs*, the second module of ["Practical Numerical Methods with Python"](https://openedx.seas.gwu.edu/courses/course-v1:MAE+MAE6286+2017/about).
#
# We have learned about the finite-difference solution for the linear and non-linear convection equations and the diffusion equation. It's time to combine all these into one: *Burgers' equation*. The wonders of *code reuse*!
#
# Before you continue, make sure you have completed the previous lessons of this series; it will make your life easier. You should have written your own versions of the codes in separate, clean Jupyter Notebooks or Python scripts.
# You can read about Burgers' Equation on its [wikipedia page](http://en.wikipedia.org/wiki/Burgers'_equation).
# Burgers' equation in one spatial dimension looks like this:
#
# $$
# \begin{equation}
# \frac{\partial u}{\partial t} + u \frac{\partial u}{\partial x} = \nu \frac{\partial ^2u}{\partial x^2}
# \end{equation}
# $$
#
# As you can see, it is a combination of non-linear convection and diffusion. It is surprising how much you learn from this neat little equation!
#
# We can discretize it using the methods we've already detailed in the previous notebooks of this module. Using forward difference for time, backward difference for space and our 2nd-order method for the second derivatives yields:
#
# $$
# \begin{equation}
# \frac{u_i^{n+1}-u_i^n}{\Delta t} + u_i^n \frac{u_i^n - u_{i-1}^n}{\Delta x} = \nu \frac{u_{i+1}^n - 2u_i^n + u_{i-1}^n}{\Delta x^2}
# \end{equation}
# $$
#
# As before, once we have an initial condition, the only unknown is $u_i^{n+1}$. We will step in time as follows:
#
# $$
# \begin{equation}
# u_i^{n+1} = u_i^n - u_i^n \frac{\Delta t}{\Delta x} (u_i^n - u_{i-1}^n) + \nu \frac{\Delta t}{\Delta x^2}(u_{i+1}^n - 2u_i^n + u_{i-1}^n)
# \end{equation}
# $$
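# The discrete update above can be written as a single vectorized NumPy step over the interior points. The sketch below uses made-up values of `dt`, `dx`, and `nu` purely to illustrate the formula; the actual parameters and the periodic boundary handling come later in this notebook.

```python
import numpy as np

def burgers_step_interior(un, dt, dx, nu):
    """Advance the interior points of 1D Burgers' equation by one time step."""
    u = un.copy()
    u[1:-1] = (un[1:-1] -
               un[1:-1] * dt / dx * (un[1:-1] - un[:-2]) +
               nu * dt / dx**2 * (un[2:] - 2 * un[1:-1] + un[:-2]))
    return u

# a constant field is a steady state: both the convection
# and diffusion terms vanish, so nothing changes
un = np.ones(8)
u_next = burgers_step_interior(un, dt=0.01, dx=0.1, nu=0.07)
print(u_next)  # still all ones
```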
# ### Initial and Boundary Conditions
#
# To examine some interesting properties of Burgers' equation, it is helpful to use different initial and boundary conditions than we've been using for previous steps.
#
# The initial condition for this problem is going to be:
#
# $$
# \begin{eqnarray}
# u &=& -\frac{2 \nu}{\phi} \frac{\partial \phi}{\partial x} + 4 \\
# \phi(t=0) = \phi_0 &=& \exp \bigg(\frac{-x^2}{4 \nu} \bigg) + \exp \bigg(\frac{-(x-2 \pi)^2}{4 \nu} \bigg)
# \end{eqnarray}
# $$
#
# This has an analytical solution, given by:
#
# $$
# \begin{eqnarray}
# u &=& -\frac{2 \nu}{\phi} \frac{\partial \phi}{\partial x} + 4 \\
# \phi &=& \exp \bigg(\frac{-(x-4t)^2}{4 \nu (t+1)} \bigg) + \exp \bigg(\frac{-(x-4t -2 \pi)^2}{4 \nu(t+1)} \bigg)
# \end{eqnarray}
# $$
#
# The boundary condition will be:
#
# $$
# \begin{equation}
# u(0) = u(2\pi)
# \end{equation}
# $$
#
# This is called a *periodic* boundary condition. Pay attention! This will cause you a bit of headache if you don't tread carefully.
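# A quick way to see the wrap-around indexing is `numpy.roll`, which shifts an array periodically. This is only an illustration of periodic neighbors, not how the solver in this notebook is written:

```python
import numpy as np

u = np.array([0, 1, 2, 3, 4])
u_plus = np.roll(u, -1)   # the "i+1" neighbor; the last point wraps to u[0]
u_minus = np.roll(u, 1)   # the "i-1" neighbor; the first point wraps to u[-1]
print(u_plus)   # [1 2 3 4 0]
print(u_minus)  # [4 0 1 2 3]
```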
# ### Saving Time with SymPy
#
#
# The initial condition we're using for Burgers' Equation can be a bit of a pain to evaluate by hand. The derivative $\frac{\partial \phi}{\partial x}$ isn't too terribly difficult, but it would be easy to drop a sign or forget a factor of $x$ somewhere, so we're going to use SymPy to help us out.
#
# [SymPy](http://sympy.org/en/) is the symbolic math library for Python. It has a lot of the same symbolic math functionality as Mathematica with the added benefit that we can easily translate its results back into our Python calculations (it is also free and open source).
#
# Start by loading the SymPy library, together with our favorite library, NumPy.
import numpy
import sympy
from matplotlib import pyplot
# %matplotlib inline
# Set the font family and size to use for Matplotlib figures.
pyplot.rcParams['font.family'] = 'serif'
pyplot.rcParams['font.size'] = 16
# We're also going to tell SymPy that we want all of its output to be rendered using $\LaTeX$. This will make our Notebook beautiful!
sympy.init_printing()
# Start by setting up symbolic variables for the three variables in our initial condition. It's important to recognize that once we've defined these symbolic variables, they function differently than "regular" Python variables.
#
# If we type `x` into a code block, we'll get an error:
x
# `x` is not defined, so this shouldn't be a surprise. Now, let's set up `x` as a *symbolic* variable:
x = sympy.symbols('x')
# Now let's see what happens when we type `x` into a code cell:
x
# The value of `x` is $x$. Sympy is also referred to as a computer algebra system -- normally the value of `5*x` will return the product of `5` and whatever value `x` is pointing to. But, if we define `x` as a symbol, then something else happens:
5 * x
# This will let us manipulate an equation with unknowns using Python! Let's start by defining symbols for $x$, $\nu$ and $t$ and then type out the full equation for $\phi$. We should get a nicely rendered version of our $\phi$ equation.
x, nu, t = sympy.symbols('x nu t')
phi = (sympy.exp(-(x - 4 * t)**2 / (4 * nu * (t + 1))) +
sympy.exp(-(x - 4 * t - 2 * sympy.pi)**2 / (4 * nu * (t + 1))))
phi
# It's maybe a little small, but that looks right. Now to evaluate our partial derivative $\frac{\partial \phi}{\partial x}$ is a trivial task. To take a derivative with respect to $x$, we can just use:
phiprime = phi.diff(x)
phiprime
# If you want to see the non-rendered version, just use the Python print command.
print(phiprime)
# ### Now what?
#
#
# Now that we have the Pythonic version of our derivative, we can finish writing out the full initial condition equation and then translate it into a usable Python expression. For this, we'll use the *lambdify* function, which takes a SymPy symbolic equation and turns it into a callable function.
# +
from sympy.utilities.lambdify import lambdify
u = -2 * nu * (phiprime / phi) + 4
print(u)
# -
# ### Lambdify
#
# To lambdify this expression into a usable function, we tell lambdify which variables to request and the function we want to plug them into.
u_lamb = lambdify((t, x, nu), u)
print('The value of u at t=1, x=4, nu=3 is {}'.format(u_lamb(1, 4, 3)))
# ### Back to Burgers' Equation
#
# Now that we have the initial conditions set up, we can proceed and finish setting up the problem. We can generate the plot of the initial condition using our lambdify-ed function.
# +
# Set parameters.
nx = 101 # number of spatial grid points
L = 2.0 * numpy.pi # length of the domain
dx = L / (nx - 1) # spatial grid size
nu = 0.07 # viscosity
nt = 100 # number of time steps to compute
sigma = 0.1 # CFL limit
dt = sigma * dx**2 / nu # time-step size
# Discretize the domain.
x = numpy.linspace(0.0, L, num=nx)
# -
# We have a function `u_lamb` but we need to create an array `u0` with our initial conditions. `u_lamb` will return the value for any given time $t$, position $x$ and viscosity $\nu$. We can use a `for`-loop to cycle through values of `x` to generate the `u0` array. That code would look something like this:
#
# ```Python
# u0 = numpy.empty(nx)
#
# for i, x0 in enumerate(x):
# u0[i] = u_lamb(t, x0, nu)
# ```
#
# But there's a cleaner, more beautiful way to do this -- *list comprehension*.
#
# We can create a list of all of the appropriate `u` values by typing
#
# ```Python
# [u_lamb(t, x0, nu) for x0 in x]
# ```
#
# You can see that the syntax is similar to the `for`-loop, but it only takes one line. Using a list comprehension will create... a list. This is different from an *array*, but converting a list to an array is trivial using `numpy.asarray()`.
#
# With the list comprehension in place, the three lines of code above become one:
#
# ```Python
# u = numpy.asarray([u_lamb(t, x0, nu) for x0 in x])
# ```
# Set initial conditions.
t = 0.0
u0 = numpy.array([u_lamb(t, xi, nu) for xi in x])
u0
# Now that we have the initial conditions set up, we can plot it to see what $u(x,0)$ looks like:
# Plot the initial conditions.
pyplot.figure(figsize=(6.0, 4.0))
pyplot.title('Initial conditions')
pyplot.xlabel('x')
pyplot.ylabel('u')
pyplot.grid()
pyplot.plot(x, u0, color='C0', linestyle='-', linewidth=2)
pyplot.xlim(0.0, L)
pyplot.ylim(0.0, 10.0);
# This is definitely not the hat function we've been dealing with until now. We call it a "saw-tooth function". Let's proceed forward and see what happens.
# ### Periodic Boundary Conditions
#
# We will implement Burgers' equation with *periodic* boundary conditions. If you experiment with the linear and non-linear convection notebooks and make the simulation run longer (by increasing `nt`) you will notice that the wave will keep moving to the right until it no longer even shows up in the plot.
#
# With periodic boundary conditions, when a point gets to the right-hand side of the frame, it *wraps around* back to the front of the frame.
#
# Recall the discretization that we worked out at the beginning of this notebook:
#
# $$
# \begin{equation}
# u_i^{n+1} = u_i^n - u_i^n \frac{\Delta t}{\Delta x} (u_i^n - u_{i-1}^n) + \nu \frac{\Delta t}{\Delta x^2}(u_{i+1}^n - 2u_i^n + u_{i-1}^n)
# \end{equation}
# $$
#
# What does $u_{i+1}^n$ *mean* when $i$ is already at the end of the frame?
#
# Think about this for a minute before proceeding.
# Integrate the Burgers' equation in time.
u = u0.copy()
for n in range(nt):
un = u.copy()
# Update all interior points.
u[1:-1] = (un[1:-1] -
un[1:-1] * dt / dx * (un[1:-1] - un[:-2]) +
nu * dt / dx**2 * (un[2:] - 2 * un[1:-1] + un[:-2]))
# Update boundary points.
u[0] = (un[0] -
un[0] * dt / dx * (un[0] - un[-1]) +
nu * dt / dx**2 * (un[1] - 2 * un[0] + un[-1]))
u[-1] = (un[-1] -
un[-1] * dt / dx * (un[-1] - un[-2]) +
nu * dt / dx**2 * (un[0] - 2 * un[-1] + un[-2]))
# Compute the analytical solution.
u_analytical = numpy.array([u_lamb(nt * dt, xi, nu) for xi in x])
# Plot the numerical solution along with the analytical solution.
pyplot.figure(figsize=(6.0, 4.0))
pyplot.xlabel('x')
pyplot.ylabel('u')
pyplot.grid()
pyplot.plot(x, u, label='Numerical',
color='C0', linestyle='-', linewidth=2)
pyplot.plot(x, u_analytical, label='Analytical',
color='C1', linestyle='--', linewidth=2)
pyplot.legend()
pyplot.xlim(0.0, L)
pyplot.ylim(0.0, 10.0);
# Let's now create an animation with the `animation` module of Matplotlib to observe how the numerical solution changes over time compared to the analytical solution.
# We start by importing the module from Matplotlib as well as the special `HTML` display method.
from matplotlib import animation
from IPython.display import HTML
# We create a function `burgers` that computes the numerical solution of the 1D Burgers' equation over time.
# (The function returns the history of the solution: a list with `nt` elements, each one being the solution in the domain at a time step.)
def burgers(u0, dx, dt, nu, nt=20):
"""
Computes the numerical solution of the 1D Burgers' equation
over the time steps.
Parameters
----------
u0 : numpy.ndarray
The initial conditions as a 1D array of floats.
dx : float
The grid spacing.
dt : float
The time-step size.
nu : float
The viscosity.
nt : integer, optional
The number of time steps to compute;
default: 20.
Returns
-------
u_hist : list of numpy.ndarray objects
The history of the numerical solution.
"""
u_hist = [u0.copy()]
u = u0.copy()
for n in range(nt):
un = u.copy()
# Update all interior points.
u[1:-1] = (un[1:-1] -
un[1:-1] * dt / dx * (un[1:-1] - un[:-2]) +
nu * dt / dx**2 * (un[2:] - 2 * un[1:-1] + un[:-2]))
# Update boundary points.
u[0] = (un[0] -
un[0] * dt / dx * (un[0] - un[-1]) +
nu * dt / dx**2 * (un[1] - 2 * un[0] + un[-1]))
u[-1] = (un[-1] -
un[-1] * dt / dx * (un[-1] - un[-2]) +
nu * dt / dx**2 * (un[0] - 2 * un[-1] + un[-2]))
u_hist.append(u.copy())
return u_hist
# Compute the history of the numerical solution.
u_hist = burgers(u0, dx, dt, nu, nt=nt)
# Compute the history of the analytical solution.
u_analytical = [numpy.array([u_lamb(n * dt, xi, nu) for xi in x])
for n in range(nt)]
fig = pyplot.figure(figsize=(6.0, 4.0))
pyplot.xlabel('x')
pyplot.ylabel('u')
pyplot.grid()
u0_analytical = numpy.array([u_lamb(0.0, xi, nu) for xi in x])
line1 = pyplot.plot(x, u0, label='Numerical',
color='C0', linestyle='-', linewidth=2)[0]
line2 = pyplot.plot(x, u0_analytical, label='Analytical',
color='C1', linestyle='--', linewidth=2)[0]
pyplot.legend()
pyplot.xlim(0.0, L)
pyplot.ylim(0.0, 10.0)
fig.tight_layout()
def update_plot(n, u_hist, u_analytical):
"""
Update the lines y-data of the Matplotlib figure.
Parameters
----------
n : integer
The time-step index.
u_hist : list of numpy.ndarray objects
The history of the numerical solution.
u_analytical : list of numpy.ndarray objects
The history of the analytical solution.
"""
fig.suptitle('Time step {:0>2}'.format(n))
line1.set_ydata(u_hist[n])
line2.set_ydata(u_analytical[n])
# Create an animation.
anim = animation.FuncAnimation(fig, update_plot,
frames=nt, fargs=(u_hist, u_analytical),
interval=100)
# Display the video.
HTML(anim.to_html5_video())
# ## Array Operation Speed Increase
# Coding up discretization schemes using array operations can be a bit of a pain. It requires much more mental effort on the front-end than using two nested `for` loops. So why do we do it? Because it's fast. Very, very fast.
#
# Here's what the Burgers code looks like using two nested `for` loops. It's easier to write out, plus we only have to add one "special" condition to implement the periodic boundaries.
#
# At the top of the cell, you'll see `%%timeit`.
# This is called a "cell magic". It runs the cell several times and returns the average execution time for the contained code.
#
# Let's see how long the nested `for` loops take to finish.
# %%timeit
# Set initial conditions.
u = numpy.array([u_lamb(t, x0, nu) for x0 in x])
# Integrate in time using a nested for loop.
for n in range(nt):
un = u.copy()
# Update all interior points and the left boundary point.
for i in range(nx - 1):
u[i] = (un[i] -
un[i] * dt / dx *(un[i] - un[i - 1]) +
nu * dt / dx**2 * (un[i + 1] - 2 * un[i] + un[i - 1]))
# Update the right boundary.
u[-1] = (un[-1] -
un[-1] * dt / dx * (un[-1] - un[-2]) +
nu * dt / dx**2 * (un[0]- 2 * un[-1] + un[-2]))
# Less than 50 milliseconds. Not bad, really.
#
# Now let's look at the array operations code cell. Notice that we haven't changed anything, except we've added the `%%timeit` magic and we're also resetting the array `u` to its initial conditions.
#
# This takes longer to code and we have to add two special conditions to take care of the periodic boundaries. Was it worth it?
# %%timeit
# Set initial conditions.
u = numpy.array([u_lamb(t, xi, nu) for xi in x])
# Integrate in time using array operations.
for n in range(nt):
un = u.copy()
# Update all interior points.
u[1:-1] = (un[1:-1] -
un[1:-1] * dt / dx * (un[1:-1] - un[:-2]) +
nu * dt / dx**2 * (un[2:] - 2 * un[1:-1] + un[:-2]))
# Update boundary points.
u[0] = (un[0] -
un[0] * dt / dx * (un[0] - un[-1]) +
nu * dt / dx**2 * (un[1] - 2 * un[0] + un[-1]))
u[-1] = (un[-1] -
un[-1] * dt / dx * (un[-1] - un[-2]) +
nu * dt / dx**2 * (un[0] - 2 * un[-1] + un[-2]))
# Yes, it is absolutely worth it. That's a nine-fold speed increase. For this exercise, you probably won't miss the extra 40 milliseconds if you use the nested `for` loops, but what about a simulation that has to run through millions and millions of iterations? Then that little extra effort at the beginning will definitely pay off.
# ---
#
# ###### The cell below loads the style of the notebook.
from IPython.core.display import HTML
css_file = '../../styles/numericalmoocstyle.css'
HTML(open(css_file, 'r').read())
| lessons/02_spacetime/02_04_1DBurgers.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + active=""
# .. _model_tutorial:
#
# -
# # Using OPCSIM to Build and Model an Optical Particle Counter (OPC)
#
# The following tutorial will show you how a low-cost OPC is represented when using the opcsim software. You will learn how to build a model OPC, how to mimic a calibration for specific aerosols, and how to evaluate the OPC against a simulated aerosol distribution. Visualization tools will also be discussed.
#
# First, we import the python libraries we need and set the styles used for plotting throughout this tutorial.
# +
# Make imports
import opcsim
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.ticker as mticks
import seaborn as sns
# %matplotlib inline
# turn off warnings temporarily
import warnings
warnings.simplefilter('ignore')
# Let's set some default seaborn settings
sns.set(context='notebook', style='ticks', palette='dark', font_scale=1.75,
rc={'figure.figsize': (12,6), **opcsim.plots.rc_log})
# -
# # The OPC Model
#
# The `opcsim.OPC` class provides a simple way to model most of the functionality of low-cost optical particle counters. This model is based on just a few instrument parameters which should be defined in the manufacturer's specification sheet.
#
# An OPC is defined by just the laser wavelength (`wl`), the exact `bins` the OPC uses as its output, and the angular range over which scattered light is collected (`theta`). Other work has considered much more detail than this, including the electrical properties and characteristics of the photodetector, etc. However, that typically requires information that is not commonly available in the datasheet of a low-cost particle counter, and thus we try to provide a method that does not rely on it.
#
# Within the software itself, there is flexibility in how exactly you define the bins. In the end, you will end up with a 3xn array of values, where n is the number of bins. Each bin is then defined by its left bin boundary, midpoint, and right bin boundary.
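# For instance, given only a list of bin boundaries, a full bins array with geometric-mean midpoints could be assembled as below. This is a plain-NumPy sketch of the idea, not the library's internal code:

```python
import numpy as np

# hypothetical bin boundaries in microns, one more entry than the number of bins
boundaries = np.array([0.38, 0.54, 0.78, 1.05, 1.5, 2.5])
lo, hi = boundaries[:-1], boundaries[1:]
mid = np.sqrt(lo * hi)  # geometric mean of each bin's edges
bins = np.column_stack([lo, mid, hi])
print(bins.shape)  # one row per bin: (5, 3)
```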
#
# To simulate an OPC using the `opcsim.OPC` class, we instantiate it as follows:
# +
opc = opcsim.OPC(wl=0.658, n_bins=5, theta=(32., 88.))
opc
# -
# When arguments are omitted, their default values are used; these can be looked up in the API documentation. The defaults set $dmin=0.5\;\mu m$, $dmax=2.5\;\mu m$, and $n_{bins}=5$. We can view the number of bins using the `OPC.n_bins` attribute.
opc.n_bins
# We can also view the bin boundaries and midpoint diameters using the `OPC.bins` attribute. Here, we receive a **3xn** array where the first entry is the left bin boundary, the middle is the midpoint diameter as defined by its geometric mean, and the last entry is the right bin boundary.
opc.bins
# ## Building More Specific OPCs
#
# We can build more complex - or more specific - models in a couple of ways: we can change the minimum or maximum cutoffs, or the total number of bins:
# +
opc_10bins = opcsim.OPC(wl=0.658, n_bins=10, dmin=0.38, dmax=17.5)
opc_10bins.bins
# -
# If we are trying to model a specific OPC that has pre-defined bins, we can also do that with the help of some utility methods. The `bins` argument in the OPC class expects a **3xn** array as seen above. Often, you may only have the bin boundary information and not the midpoints. In that case, we can define the bins as an array of bin boundaries, which has one more entry than the total number of bins, as follows:
# +
opc = opcsim.OPC(wl=0.658, bins=[0.38, 0.54, 0.78, 1.05, 1.5, 2.5])
opc.bins
# -
# This more or less covers how we can build various OPCs - if you are still unsure of what to do in a specific case, feel free to post questions on the GitHub repository under the 'Issues' tab.
# # Calibration
#
# OPCs count and size each particle individually - the size of a particle is determined by the signal amplitude measured by the photodetector, which is proportional to the product of the intensity of incoming light and the particle's scattering cross-section. This model assumes light intensity is constant, so every particle experiences the same light intensity. Assuming this is valid, we can state that the size of a particle is proportional to its scattering cross-section.
#
# The output we receive from an OPC is a histogram with the total number of particles in each specified size bin. So how do we know which bin a given particle belongs to? We must calibrate our OPC and tell it, based on the $C_{scat}$ value, which bin a particle belongs to. While the literature dives deep into complicated, and possibly more accurate, ways to do this, we offer three easy methods that reflect what is often found in today's low-cost devices:
#
# 1. `spline` - a simple mapping between $C_{scat}$ and $D_p$
# 2. `linear` - fit a power law (linear in log-log space) to the $C_{scat}$ values at every bin boundary
# 3. `piecewise` - the same as `linear` except we split into two different curves
#
#
# ### Assumptions we make
#
# * all particles are spheres
# * light intensity is constant
# * all particles fall squarely within the laser
#
# We will walk through how to calibrate using each approach, but first, let's take a look at a typical scattering pattern for an OPC with a 658 nm laser and a collection angle of 32-88 degrees.
# +
# build a generic OPC with 10-bins
opc = opcsim.OPC(wl=0.658, theta=(32.0, 88.0), n_bins=10)
opc
# +
# generate an array of particle diameters
dp = np.logspace(-1, 1.25, 250)
vals = [opcsim.mie.cscat(x, wl=0.658, refr=complex(1.59, 0), theta1=32., theta2=88.) for x in dp]
fig, ax = plt.subplots(1, figsize=(6, 6))
ax.plot(dp, vals, lw=3)
ax.semilogx()
ax.semilogy()
ax.set_title("Calibration Curve for PSLs", y=1.02)
ax.set_xlabel("$D_p\;[\mu m]$")
ax.set_ylabel("$C_{scat}$")
sns.despine(offset=5)
# -
# As we can see in the figure above, the $C_{scat}$ values are not monotonically increasing with particle diameter everywhere, which may make it difficult to assign a given value to a single bin. This is a very well known and documented issue with OPCs and should play a role in choosing bin boundaries. Additionally, it only gets more complicated as the particle optical properties change!
#
# Now, we will examine these simple approaches to modeling our way out of this.
# ## Calibration Method 1: `spline`
#
# One of the most common ways to associate a $C_{scat}$ value with a corresponding particle size is to use spline interpolation. Essentially, this entails evaluating the $C_{scat}$ curve at every bin boundary and grouping all $C_{scat}$ values within range of two boundaries as belonging to that bin. This works fairly well when the bins are chosen intelligently for a given aerosol type. However, that is not always the case, and bins spanning non-monotonic regions do appear from time to time. To remove the issues associated with these "dips", we interpolate across them. This is far from perfect, but gives us reasonable results.
#
# Note that this can make it quite difficult for particles to get assigned to certain bins, especially if the $C_{scat}$ values plateau - this can be somewhat solved by combining these bins into one bin to reduce the uncertainty in bin assignment, though it will make the correct particle sizing in that bin less precise as well.
#
# Note that it is also possible to write your own calibration function, so do not fret!
# +
# calibrate using spline
opc.calibrate("psl", method="spline")
# show the Cscat values corresponding to each bin boundary
opc.calibration_function
# -
# Let's take a look at the calibration:
# +
fig, ax = plt.subplots(1, figsize=(10, 8))
ax = opcsim.plots.calplot(opc, ax=ax, dp=np.logspace(-1, 1, 250), plot_kws=dict(alpha=.5))
ax.set_xlim(0.1, 10)
ax.set_ylim(1e-10, 1e-6)
sns.despine()
# -
# ## Calibration Method 2: `linear`
#
# The second approach to generating our map of $C_{scat}$ to $D_p$ values is to fit a line to the data in log-log space (i.e., a power law). While this method is simple, and is often used because you only need to calibrate against a few different particle sizes, it can lead to drastic oversizing in some regions and undersizing in others.
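# The idea can be sketched with synthetic data: fit a straight line to $(\log D_p, \log C_{scat})$ pairs and invert it to size a particle from a measured $C_{scat}$. The power law and coefficients below are made up for illustration, not real Mie output:

```python
import numpy as np

# synthetic calibration points exactly following a power law Cscat ~ Dp^2.5
dp = np.array([0.5, 1.0, 2.0, 4.0])
cscat = 1e-9 * dp ** 2.5

# linear fit in log-log space: log10(Cscat) = m * log10(Dp) + b
m, b = np.polyfit(np.log10(dp), np.log10(cscat), 1)

def size_from_cscat(c):
    """Invert the fitted power law to recover a particle diameter."""
    return 10.0 ** ((np.log10(c) - b) / m)

print(round(size_from_cscat(1e-9 * 1.5 ** 2.5), 3))  # 1.5
```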
# +
opc.calibrate("psl", method="linear")
opc.calibration_function
# +
fig, ax = plt.subplots(1, figsize=(10, 8))
ax = opcsim.plots.calplot(opc, ax=ax, dp=np.logspace(-1, 1, 250), plot_kws=dict(alpha=.5))
ax.set_xlim(0.1, 10)
ax.set_ylim(1e-10, 1e-6)
sns.despine()
# -
# ## Calibration Method 3: `piecewise`
#
# Similarly, we can perform a `piecewise` linear fit: two linear fits in log-log space, split at the $C_{scat}$ value of a pre-determined, user-defined breakpoint.
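# A piecewise sketch under the same synthetic-data assumption: fit the power law separately below and above a breakpoint (here a made-up kink at $1\;\mu m$):

```python
import numpy as np

# synthetic calibration data with a slope change ("kink") at 1 micron
dp = np.array([0.3, 0.5, 0.8, 1.0, 2.0, 4.0, 8.0])
cscat = np.where(dp <= 1.0, 1e-9 * dp ** 3.0, 1e-9 * dp ** 1.5)
breakpoint_dp = 1.0

# separate log-log fits on either side of the breakpoint
lo = dp <= breakpoint_dp
slope_lo, _ = np.polyfit(np.log10(dp[lo]), np.log10(cscat[lo]), 1)
slope_hi, _ = np.polyfit(np.log10(dp[~lo]), np.log10(cscat[~lo]), 1)
print(round(slope_lo, 2), round(slope_hi, 2))  # 3.0 1.5
```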
# +
opc.calibrate("psl", method="piecewise")
opc.calibration_function
# +
fig, ax = plt.subplots(1, figsize=(10, 8))
ax = opcsim.plots.calplot(opc, ax=ax, dp=np.logspace(-1, 1, 250), plot_kws=dict(alpha=.5))
ax.set_xlim(0.1, 10)
ax.set_ylim(1e-10, 1e-6)
sns.despine()
# -
# ### Calibration Materials
#
# Above, we covered the methods used to calibrate, but you may have noticed the first argument to those calls was the calibration material. The calibration material can be defined in one of two ways - you can either choose a material from the table below or specify any complex refractive index. Note that the table below uses approximate literature values determined at close to 658 nm. Thus, for the best results, you may want to specify the refractive index of your material at the wavelength of light you are using.
#
# The material lookup table is quite limited - if you would like to add to this table, please open an issue on GitHub.
#
# ### Current Options
#
# | option | Material | Refractive Index |
# |:------:|:--------:|:----------------:|
# | `psl` | Polystyrene Latex Spheres | 1.592 + 0i |
# | `ammonium_sulfate` | Ammonium Sulfate | 1.521 + 0i |
# | `sodium_chloride` | Sodium Chloride | 1.5405 + 0i |
# | `sodium_nitrate` | Sodium Nitrate | 1.448 + 0i |
# | `black_carbon` | Black Carbon | 1.95 + 0.79i |
# | `sulfuric_acid` | Sulfuric Acid | 1.427 + 0i |
# | `soa` | Secondary Organic Aerosol | 1.4 + 0.002i |
# | `urban_high` | High estimate for Urban Aerosol | 1.6 + 0.034i |
# | `urban_low` | Low estimate for Urban Aerosol | 1.73 + 0.086i |
# # Evaluating an OPC for a given Aerosol Distribution
#
# The entire point of this software is to model how various OPCs and other particle sensors "see" aerosols in the real world. The Aerosol Distribution tutorial showed how we can model realistic aerosol distributions. This section will review how the OPCs we just built "see" these distributions. Generally speaking, the process works as follows:
#
# 1. For every particle in the aerosol distribution, we calculate the scattering cross-section
# 2. For each scattering cross-section we just calculated, we assign it to a bin of the OPC based on the calibration curve we generated
# 3. We end up with a histogram in the form of the cumulative number of particles in each OPC bin, just as we would if we were using a commercially available OPC
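# The binning in steps 2-3 amounts to a histogram over the $C_{scat}$ values at the bin boundaries. A toy sketch with `np.digitize`, using made-up monotone boundary values in place of a real calibration curve:

```python
import numpy as np

# Cscat evaluated at each bin boundary (hypothetical, monotone by construction)
cscat_edges = np.array([1e-10, 5e-10, 2e-9, 8e-9])  # defines 3 bins

# Cscat computed for each particle in the distribution
particle_cscat = np.array([2e-10, 3e-10, 1e-9, 4e-9, 1e-8])

# index 0 means below the first boundary; len(cscat_edges) means above the last
idx = np.digitize(particle_cscat, cscat_edges)
counts = np.bincount(idx, minlength=len(cscat_edges) + 1)[1:-1]
print(counts)  # [2 1 1] -> particles per OPC bin
```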
#
#
# The base method we use to obtain these numbers is `OPC.evaluate`. This method returns the number of particles the OPC "sees" in each bin for a given aerosol distribution. One component that hasn't yet been discussed is how the OPC responds as relative humidity changes. Each of these methods takes a relative humidity argument (`rh`) which automatically calculates the change in particle size due to hygroscopic growth per $\kappa$-Köhler theory; the refractive index and density also change accordingly based on the amount of growth that takes place.
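# For reference, the hygroscopic growth step can be sketched with the simplified $\kappa$-Köhler growth factor (Kelvin effect neglected, water activity taken as the fractional RH). This is a sketch of the physics, not of `opcsim`'s internal implementation:

```python
def kappa_growth_factor(kappa, rh):
    """Diameter growth factor D_wet / D_dry from simplified kappa-Kohler theory.

    rh is the fractional relative humidity (0 <= rh < 1), used as the water activity.
    """
    return (1.0 + kappa * rh / (1.0 - rh)) ** (1.0 / 3.0)

# ammonium sulfate (kappa ~ 0.53) at 85% RH grows by roughly 1.6x in diameter
gf = kappa_growth_factor(0.53, 0.85)
print(round(gf, 2))  # 1.59
```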
#
# To evaluate a distribution, we need to first define the AerosolDistribution and the OPC.
#
# * Show examples for a few different scenarios, including growth due to RH
# * show examples for different calibration approaches
# +
# build a single mode of ammonium sulfate
d = opcsim.AerosolDistribution("Amm. Sulfate")
d.add_mode(n=1e3, gm=250e-3, gsd=1.65, kappa=0.53, rho=1.77, refr=complex(1.521, 0))
d
# +
# build an OPC
opc = opcsim.OPC(wl=0.658, n_bins=24, dmin=0.25, dmax=40.)
opc
# -
# Let's go ahead and calibrate the OPC using ammonium sulfate:
opc.calibrate("ammonium_sulfate", method="spline")
# Now, let's go ahead and evaluate the OPC for the previously defined distribution. This will return the number of particles in each bin:
# +
vals = opc.evaluate(d, rh=0.0)
vals
# -
# ## Integrating across the Particle Size Distribution
#
# Another important task when evaluating OPCs is comparing integrated values across some pre-defined particle size range. We can accomplish this using the `OPC.integrate` method to calculate the total number of particles, total surface area, total volume, or total mass between any two diameters.
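# Conceptually, this is a sum of per-bin counts restricted to a diameter window. A toy number-weighted sketch with made-up bins and counts (how `opcsim` handles bins that straddle a cutoff is an implementation detail not shown here):

```python
import numpy as np

# bin midpoint diameters (um) and the particle count the OPC saw in each bin
midpoints = np.array([0.3, 0.6, 1.2, 2.4, 4.8])
counts = np.array([120.0, 80.0, 40.0, 10.0, 2.0])

def integrate_number(midpoints, counts, dmin, dmax):
    """Total particle number over bins whose midpoint lies in [dmin, dmax]."""
    mask = (midpoints >= dmin) & (midpoints <= dmax)
    return counts[mask].sum()

print(integrate_number(midpoints, counts, 0.0, 2.5))  # 250.0
```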
#
# Example:
opc.integrate(d, dmin=0., dmax=2.5, weight='number', rh=0.)
opc.integrate(d, dmin=0., dmax=2.5, weight='number', rh=85.)
# ## Visualizing the OPC Histogram
#
# Last, it is fairly useful and important to be able to easily visualize our results. There are built-in functions to compute the histogram, which can be used with `opcsim.plots.histplot` to easily plot the particle size distribution.
#
# Let's go ahead and visualize our results from earlier:
# compute the histogram
data1 = opc.histogram(d, weight="number", base="log10", rh=0.0)
data2 = opc.histogram(d, weight="number", base="log10", rh=95.0)
# +
fig, ax = plt.subplots(1, figsize=(8, 6))
ax = opcsim.plots.pdfplot(d, ax=ax, weight="number")
ax = opcsim.plots.pdfplot(d, ax=ax, weight="number", rh=95.0)
ax = opcsim.plots.histplot(data=data1, bins=opc.bins, ax=ax, label="RH=0%",
plot_kws=dict(linewidth=2, fill=True, alpha=.35, color='blue'))
ax = opcsim.plots.histplot(data=data2, bins=opc.bins, ax=ax, label="RH=95%",
plot_kws=dict(linewidth=2))
ax.set_ylim(0, None)
ax.set_xlim(0.01, None)
ax.legend(bbox_to_anchor=(.7, 1))
sns.despine()
# -
# On the left, we have the PDF of the particle size distribution at 0% RH in dark blue, with the histogram as seen by the OPC in light blue. In dark red, we have the PDF of the particle size distribution at 95% RH - as you can see, it has shifted right substantially. The red boxes show the histogram of how the OPC "sees" the size distribution at 95% RH. Interestingly, the lower part of the distribution (below around 600 nm) simply moves right; above that point, however, the bin assignments start to get screwy. This is due to Mie resonance taking hold and making it hard to assign the $C_{scat}$ value to the correct bin.
# ## Classifying Bin Misassignment
#
# One way to understand an OPC's reliability and accuracy is to estimate the range of bins to which particles of a specific size could be assigned. For example, if we calibrated our OPC with ammonium sulfate at 0% RH, and then only evaluated ammonium sulfate at that same RH, we would be fairly confident in our ability to resolve the correct bin. However, if we suddenly were in a more humid environment, how would we do?
#
# There are a few `metrics` methods available by default to help understand the expected range of bin assignments - each is detailed below.
#
# ### `compute_bin_assessment`
#
# The objective of this method is to assess the ability of an OPC to assign particles to the correct bin. It uses a calibrated OPC to evaluate a material of some refractive index and kappa value at various relative humidities. The method returns a dataframe containing: the correct bin assignment, the lower and upper estimates for bin assignment, the effective refractive index, the relative humidity, and the lower and upper limits of the ratio of scattered light to expected scattered light.
#
# Our primary focus here is estimating the effects of changes in relative humidity, which (1) causes the aerosols to grow as they uptake water and (2) causes the refractive index to change.
# +
rv = opcsim.metrics.compute_bin_assessment(opc, refr=complex(1.521, 0), kappa=0.53)
rv
# -
# Above, we see that when RH is low and there is little growth or change in refractive index, we mostly do a great job of assigning particles to the correct bin. However, when the RH increases substantially and significant water uptake occurs, the combination of massive growth in the actual particle size and the substantial change in refractive index leads to massive oversizing, which will have large effects when converting from number distribution to volume and/or mass distribution.
| docs/tutorial/opcs.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Import libraries
import pandas as pd
import numpy as np
import signatureanalyzer as sa
# +
# Define paths and constants
NMF_HDF_PATH = "/home/yakiyama/CPTAC_Signatures/results/Union/MMRP/HRD/SBS_NoUV_c2a_filtered/nmf_output.h5"
REF = "cosmic3_exome"
# + tags=[]
# Import reference signatures
ref_df, ref_idx = sa.utils.load_reference_signatures(REF, verbose=False)
# Import result matrices
Wraw = pd.read_hdf(NMF_HDF_PATH, "Wraw")
Wraw.rename(columns={x:x.split('-')[0] for x in Wraw.columns if x.startswith('S')}, inplace=True)
W = pd.read_hdf(NMF_HDF_PATH, "W")
W.rename(columns={x:x.split('-')[0] for x in W.columns if x.startswith('S')}, inplace=True)
Hraw = pd.read_hdf(NMF_HDF_PATH, "Hraw")
Hraw.rename(columns={x:x.split('-')[0] for x in Hraw.columns if x.startswith('S')}, inplace=True)
H = pd.read_hdf(NMF_HDF_PATH, "H")
H.rename(columns={x:x.split('-')[0] for x in H.columns if x.startswith('S')}, inplace=True)
signatures = pd.read_hdf(NMF_HDF_PATH, "signatures")
res = {}
res["Wraw"] = Wraw
res["W"] = W
res["Hraw"] = Hraw
res["H"] = H
res["signatures"] = signatures
if REF in ['pcawg_SBS','pcawg_COMPOSITE','pcawg_SBS_ID']:
Wraw96 = pd.read_hdf(NMF_HDF_PATH, "Wraw96")
Wraw96.rename(columns={x:x.split('-')[0] for x in Wraw96.columns if x.startswith('S')}, inplace=True)
res["Wraw96"] = Wraw96
sa.utils.postprocess_msigs(res, ref_df, ref_idx, REF)
# -
res["cosine"].loc['SBS3']
cosine_og = pd.read_hdf(NMF_HDF_PATH, "cosine")
cosine_og.loc['SBS1']
| tutorials/Evaluate_Cosine_Similiarity.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:atec]
# language: python
# name: conda-env-atec-py
# ---
import os
import numpy as np
import pandas as pd
import csv
# !pwd
path = "../data/processed_data_embedded.csv"
d = pd.read_csv(path)
d
sent1 = d["embedded1"].to_numpy()  # Series.as_matrix() was removed in pandas 1.0
sent2 = d["embedded2"].to_numpy()
print(type(sent1))
print(len(sent1))
print(type(sent1[1]))
print(type(d['embedded1'][1]))
d['embedded1'][1]
test = d['embedded1'][1]
test
test = test[2:-2].strip().replace('\n', '')
test
clean_test = [np.fromstring(t.strip(), dtype=float, sep=' ') for t in test.split('] [')]
clean_test[0]
k = [t for t in test.split('] [')]
k[0]
| src/data_vis .ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + id="aeQYHT_Bs_8E"
from google.colab import drive
drive.mount('/content/drive')
# + id="lYIyBaHWoEqo"
# !pip install keras-self-attention
# + id="0Bx_4MDnJXcJ"
# !pip install emoji
# + id="DqX4j-pSKRWa"
# !pip install ekphrasis
# + id="eXiFH_-DCKjv"
# !pip install transformers==4.2.1
# + id="PCwhNaO9JXcX"
import numpy as np
import pandas as pd
import string
from nltk.corpus import stopwords
import re
import os
from collections import Counter
from ekphrasis.classes.preprocessor import TextPreProcessor
from ekphrasis.classes.tokenizer import SocialTokenizer
from ekphrasis.dicts.emoticons import emoticons
# + id="0Urfy-aPJXcd"
text_processor = TextPreProcessor(
# terms that will be normalized
normalize=['url', 'email', 'percent', 'money', 'phone', 'user',
'time', 'url', 'date', 'number'],
# terms that will be annotated
annotate={"hashtag", "allcaps", "elongated", "repeated",
'emphasis', 'censored'},
fix_html=True, # fix HTML tokens
# corpus from which the word statistics are going to be used
# for word segmentation
segmenter="twitter",
# corpus from which the word statistics are going to be used
# for spell correction
corrector="twitter",
unpack_hashtags=True, # perform word segmentation on hashtags
unpack_contractions=True, # Unpack contractions (can't -> can not)
spell_correct_elong=True, # spell correction for elongated words
# select a tokenizer. You can use SocialTokenizer, or pass your own
# the tokenizer, should take as input a string and return a list of tokens
tokenizer=SocialTokenizer(lowercase=True).tokenize,
# list of dictionaries, for replacing tokens extracted from the text,
# with other expressions. You can pass more than one dictionaries.
dicts=[emoticons]
)
# + id="1U40gpHRJXci"
def print_text(texts,i,j):
for u in range(i,j):
print(texts[u])
print()
# + id="XtySfy-O-Va2"
df_1 = pd.read_csv('/content/drive/My Drive/Semeval 2017/twitter-2016train-A.txt', delimiter='\t', encoding='utf-8', header=None)
# print(df_1.head(5)) #last N rows
# print(len(df_1))
df_2 = pd.read_csv('/content/drive/My Drive/Semeval 2017/twitter-2016test-A.txt', delimiter='\t', encoding='utf-8', header=None)
# print(df_2.head(5)) #last N rows
# print(len(df_2))
df_3 = pd.read_csv('/content/drive/My Drive/Semeval 2017/twitter-2016devtest-A.txt', delimiter='\t', encoding='utf-8', header=None)
# print(df_3.head(5)) #last N rows
# print(len(df_3))
df_4 = pd.read_csv('/content/drive/My Drive/Semeval 2017/twitter-2016dev-A.txt', delimiter='\t', encoding='utf-8', header=None)
# print(df_4.head(5)) #last N rows
# print(len(df_4))
df_5 = pd.read_csv('/content/drive/My Drive/Semeval 2017/twitter-2015train-A.txt', delimiter='\t', encoding='utf-8', header=None)
# print(df_5.head(5)) #last N rows
# print(len(df_5))
df_6 = pd.read_csv('/content/drive/My Drive/Semeval 2017/twitter-2015test-A.txt', delimiter='\t', encoding='utf-8', header=None)
# print(df_6.head(5)) #last N rows
# print(len(df_6))
df_7 = pd.read_csv('/content/drive/My Drive/Semeval 2017/twitter-2014test-A.txt', delimiter='\t', encoding='utf-8', header=None)
# print(df_7.head(5)) #last N rows
# print(len(df_7))
df_8 = pd.read_csv('/content/drive/My Drive/Semeval 2017/twitter-2014sarcasm-A.txt', delimiter='\t', encoding='utf-8', header=None)
# print(df_8.head(5)) #last N rows
# print(len(df_8))
df_9 = pd.read_csv('/content/drive/My Drive/Semeval 2017/twitter-2013train-A.txt', delimiter='\t', encoding='utf-8', header=None)
# print(df_9.head(5)) #last N rows
# print(len(df_9))
df_10 = pd.read_csv('/content/drive/My Drive/Semeval 2017/twitter-2013test-A.txt', delimiter='\t', encoding='utf-8', header=None)
# print(df_10.head(5)) #last N rows
# print(len(df_10))
df_11 = pd.read_csv('/content/drive/My Drive/Semeval 2017/twitter-2013dev-A.txt', delimiter='\t', encoding='utf-8', header=None)
# print(df_11.head(5)) #last N rows
# print(len(df_11))
# + [markdown] id="jpzhcecp3o0c"
# <h2>Balancing the data</h2>
# + id="AAsanMo83ndl"
# concatenate all yearly splits into one DataFrame (DataFrame.append was removed in pandas 2.0)
df = pd.concat([df_1, df_2, df_3, df_4, df_5, df_6, df_7, df_8, df_9, df_10, df_11], ignore_index=True)
print(df.head(5))
print(len(df))
# + id="xWi1U6wuJXcu"
# Testing for null values
# lol = np.asarray(df_[1].isnull())
# for i in range(0,len(lol)):
# if lol[i]:
# print(i)
# + id="UFCXYFuQBg7w"
print(len(df))
# + id="bRuDyI63Be8M"
text_array = df[2]
labels = df[1]
print("Length of training data: ",len(text_array))
print_text(text_array,0,10)
# + id="WW4OKUrkClTN"
df_val = pd.read_csv('/content/drive/My Drive/Semeval 2017/Test/SemEval2017-task4-test.subtask-A.english.txt', delimiter='\n', encoding='utf-8', header=None)
print(df_val.tail(5)) #last N rows
print(len(df_val))
# + id="c9yK3oN1NEWu"
lol = []
test_set = np.asarray(df_val[0])
for i in range(0,len(df_val)):
temp = np.asarray(test_set[i].split("\t"))
temp = temp.reshape((3))
lol.append(temp)
# + id="X2vo3_e0QxDP"
df_val = pd.DataFrame(lol)
df_val.head(5)
# + id="y5PUdxUFCiK9"
text_array_val = df_val[2]
labels_val = df_val[1]
print("Length of validation data: ",len(text_array_val))
print_text(text_array_val,0,10)
# + id="3nNEIPu89oP4"
print(Counter(labels))
print(Counter(labels_val))
# + id="O5MQYHo5JXdJ"
#removing website names
def remove_website(text):
return " ".join([word if re.search(r"https?://\S+|www\.\S+|\.(com|co|net)\b", word, flags=re.IGNORECASE) is None else "" for word in text.split(" ")])
# Training set
text_array = text_array.apply(lambda text: remove_website(text))
print_text(text_array,0,10)
print("**************************************************************************")
# Validation set
text_array_val = text_array_val.apply(lambda text: remove_website(text))
print_text(text_array_val,0,10)
# + id="R4rE3iyeJXdS"
# Functions for chat word conversion
f = open("/content/drive/My Drive/Semeval 2017/slang.txt", "r")
chat_words_str = f.read()
chat_words_map_dict = {}
chat_words_list = []
for line in chat_words_str.split("\n"):
if line != "":
cw = line.split("=")[0]
cw_expanded = line.split("=")[1]
chat_words_list.append(cw)
chat_words_map_dict[cw] = cw_expanded
chat_words_list = set(chat_words_list)
def chat_words_conversion(text):
new_text = []
for w in text.split():
if w.upper() in chat_words_list:
new_text.append(chat_words_map_dict[w.upper()])
else:
new_text.append(w)
return " ".join(new_text)
# + id="vcFQPohcJXdZ"
# Chat word conversion
# Training set
text_array = text_array.apply(lambda text: chat_words_conversion(text))
print_text(text_array,0,10)
print("********************************************************************************")
# Validation set
text_array_val = text_array_val.apply(lambda text: chat_words_conversion(text))
print_text(text_array_val,0,10)
# + id="rTBeDB-YEggQ"
os.chdir("/content/drive/My Drive/Semeval 2017")
# + id="sL-opLGAJXde"
#Function for emoticon conversion
from emoticons import EMOTICONS
def convert_emoticons(text):
for emot in EMOTICONS:
text = re.sub(u'('+emot+')', " ".join(EMOTICONS[emot].replace(",","").split()), text)
return text
#testing the emoticon function
text = "Hello :-) :-)"
text = convert_emoticons(text)
print(text + "\n")
# + id="qVXaSmvRJXdi"
# Emoticon conversion
# Training set
text_array = text_array.apply(lambda text: convert_emoticons(text))
print_text(text_array,0,10)
print("**********************************************************************************")
# Validation set
text_array_val = text_array_val.apply(lambda text: convert_emoticons(text))
print_text(text_array_val,0,10)
# + id="3LOLUlHLEl3M"
os.chdir("/content")
# + id="teGGeHMQJXdn"
# FUnction for removal of emoji
import emoji
def convert_emojis(text):
text = emoji.demojize(text, delimiters=(" ", " "))
text = re.sub("_|-"," ",text)
return text
# Training set
text_array = text_array.apply(lambda text: convert_emojis(text))
print_text(text_array,0,10)
print("**************************************************************************")
# Validation set
text_array_val = text_array_val.apply(lambda text: convert_emojis(text))
print_text(text_array_val,0,10)
# + id="acBMRaGRJXdt"
# Ekphrasis pipe for text pre-processing
def ekphrasis_pipe(sentence):
cleaned_sentence = " ".join(text_processor.pre_process_doc(sentence))
return cleaned_sentence
# Training set
text_array = text_array.apply(lambda text: ekphrasis_pipe(text))
print("Training set completed.......")
#Validation set
text_array_val = text_array_val.apply(lambda text: ekphrasis_pipe(text))
print("Test set completed.......")
# + id="_L3dXj9nJXdz"
print_text(text_array,0,10)
print("************************************************************************")
print_text(text_array_val,0,10)
# + id="tIaQRw7hJXd4"
# Removing unnecessary punctuations
PUNCT_TO_REMOVE = "\"$%&'()+,-./;=[\]^_`{|}~"
def remove_punctuation(text):
return text.translate(str.maketrans('', '', PUNCT_TO_REMOVE))
# Training set
text_array = text_array.apply(lambda text: remove_punctuation(text))
print_text(text_array,0,10)
print("********************************************************************")
# Validation set
text_array_val = text_array_val.apply(lambda text: remove_punctuation(text))
print_text(text_array_val,0,10)
# + id="BQ3ob-YFJXd8"
# Finding length of longest array
maxLen = len(max(text_array,key = lambda text: len(text.split(" "))).split(" "))
print(maxLen)
# + id="2KZzrm_BJXeD"
u = lambda text: len(text.split(" "))
sentence_lengths = []
for x in text_array:
sentence_lengths.append(u(x))
print(sorted(sentence_lengths)[-800:])
print(len(sentence_lengths))
# + id="hYE6TcR0JXea"
# Count of each label in dataset
from collections import Counter
# Printing training set counts for analysis
print("Elements: ",set(labels))
print("Length: ",len(labels))
print(Counter(labels))
print("**************************************************************************")
# Printing validation set counts for analysis
print("Elements: ",set(labels_val))
print("Length: ",len(labels_val))
print(Counter(labels_val))
# + id="OhDuYnCbJXee"
Y = []
Y_val = []
# Training set
for i in range(0,len(labels)):
if(labels[i] == 'neutral'):
Y.append(0)
if(labels[i] == 'positive'):
Y.append(1)
if(labels[i] == 'negative'):
Y.append(2)
# Validation set
for i in range(0,len(labels_val)):
if(labels_val[i] == 'neutral'):
Y_val.append(0)
if(labels_val[i] == 'positive'):
Y_val.append(1)
if(labels_val[i] == 'negative'):
Y_val.append(2)
# + id="qTD-IVLRJXej"
print(len(Y),len(Y_val))
# + id="lf17nyOaJXen"
print(Counter(Y))
print(Counter(Y_val))
# + id="ihJqcUHpJXer"
# Testing the conversion into integers
for i in range(310,320):
print(text_array_val[i])
print(labels_val[i],Y_val[i])
# + id="bhNUihGeJXev"
# Verifying train set
X = np.asarray(list(text_array))
Y = np.asarray(list(Y))
labels = np.asarray(list(labels))
print(type(X))
print(type(Y))
print(type(labels))
print(np.shape(X),np.shape(Y),np.shape(labels))
# Verifying validation set
X_val = np.asarray(list(text_array_val))
Y_val = np.asarray(list(Y_val))
labels_val = np.asarray(list(labels_val))
print(type(X_val))
print(type(Y_val))
print(type(labels_val))
print(np.shape(X_val),np.shape(Y_val),np.shape(labels_val))
# + id="8-FOp4xwJXfz"
index = 824
print(X[index])
print(labels[index])
print(Y[index])
# + id="jqGoNZDBJXf7"
print(type(X))
print(type(Y))
print(np.shape(X),np.shape(Y),np.shape(labels))
print(np.shape(X_val),np.shape(Y_val),np.shape(labels_val))
# + id="m0PeQc9AJXf_"
# Converting to one hot vectors
def convert_to_one_hot(Y, C):
Y = np.eye(C)[Y.reshape(-1)]  # index the CxC identity matrix by each label to produce one-hot rows
return Y
# + id="G8yc2wj5JXgF"
Y_oh_train = convert_to_one_hot(np.array(Y), C = 3)
Y_oh_val = convert_to_one_hot(np.array(Y_val), C = 3)
print(np.shape(Y_oh_train))
index = 310
print(labels[index], Y[index], "is converted into one hot", Y_oh_train[index])
# + [markdown] id="s7_7y3eHJXgI"
#
#
# <h2>Tensorflow Model</h2>
# + id="ap4tHjaMEbB7"
import tensorflow as tf
import os
import numpy as np
import pandas as pd
import string
from nltk.corpus import stopwords
import re
import os
from collections import Counter
# + id="TphUcxSmEFYz"
from transformers import RobertaTokenizerFast, TFRobertaModel, TFBertModel, BertTokenizerFast, ElectraTokenizerFast, TFElectraModel, AlbertTokenizerFast, TFAlbertModel, XLNetTokenizerFast, TFXLNetModel, MPNetTokenizerFast, TFMPNetModel
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras import backend as K
from tensorflow.keras.callbacks import ModelCheckpoint
from sklearn.metrics import classification_report
from sklearn.metrics import f1_score
from keras_self_attention import SeqSelfAttention
# + id="jhUZe7iiJXgP"
print(tf.__version__)
# + id="udEBhQ1FEfNi"
# resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='grpc://' + os.environ['COLAB_TPU_ADDR'])
# tf.config.experimental_connect_to_cluster(resolver)
# tf.tpu.experimental.initialize_tpu_system(resolver)
# print("All devices: ", tf.config.list_logical_devices('TPU'))
# + id="2YSpgiV-E3nW"
tokenizer_mpnet = MPNetTokenizerFast.from_pretrained("microsoft/mpnet-base")
tokenizer_roberta = RobertaTokenizerFast.from_pretrained('roberta-base')
tokenizer_bert = BertTokenizerFast.from_pretrained('bert-base-uncased')
tokenizer_albert = AlbertTokenizerFast.from_pretrained('albert-large-v2')
# + id="XcppSX7DE4sY"
X = list(X)
X_val = list(X_val)
# + id="tyCi4h7gE4pE"
train_encodings_mpnet = tokenizer_mpnet(X, max_length=80, truncation=True, padding="max_length", return_tensors='tf')
val_encodings_mpnet = tokenizer_mpnet(X_val, max_length=80, truncation=True, padding="max_length", return_tensors='tf')
train_encodings_roberta = tokenizer_roberta(X, max_length=80, truncation=True, padding="max_length", return_tensors='tf')
val_encodings_roberta = tokenizer_roberta(X_val, max_length=80, truncation=True, padding="max_length", return_tensors='tf')
train_encodings_bert = tokenizer_bert(X, max_length=80, truncation=True, padding="max_length", return_tensors='tf')
val_encodings_bert = tokenizer_bert(X_val, max_length=80, truncation=True, padding="max_length", return_tensors='tf')
train_encodings_albert = tokenizer_albert(X, max_length=80, truncation=True, padding="max_length", return_tensors='tf')
val_encodings_albert = tokenizer_albert(X_val, max_length=80, truncation=True, padding="max_length", return_tensors='tf')
# + id="bn3d_2-HE4m2"
print(np.shape(train_encodings_mpnet["input_ids"]))
print(np.shape(val_encodings_mpnet["input_ids"]))
# + id="qi7beAXQE4ki"
print(train_encodings_mpnet["input_ids"][0])
print("***************************************************************************")
print(val_encodings_mpnet["input_ids"][0])
# + id="YWLzQYP1zTQS"
# This is the best model
def mpnet_classifier(input_shape):
"""
Build the MPNet-based sentiment classifier graph.
Arguments:
input_shape -- shape of the input, usually (max_len,)
Returns:
model -- a Keras model instance
"""
model = TFMPNetModel.from_pretrained('microsoft/mpnet-base')
layer = model.layers[0]
layer.trainable= False
# Define sentence_indices as the input of the graph, it should be of shape input_shape and dtype 'int32' (as it contains indices).
inputs = keras.Input(shape=input_shape, dtype='int32')
input_masks = keras.Input(shape=input_shape, dtype='int32')
embeddings = layer([inputs, input_masks])[0][:,0,:]
# embeddings = keras.layers.GaussianNoise(0.2)(embeddings)
# embeddings = keras.layers.Dropout(0.3)(embeddings)
# Propagate the embeddings through an LSTM layer with 128-dimensional hidden state
# Be careful, the returned output should be a batch of sequences.
# lstm_one = keras.layers.Bidirectional(keras.layers.LSTM(150, return_sequences=True, recurrent_dropout=0.25, dropout=0.2))
# X = lstm_one(embeddings)
# X = keras.layers.Dropout(0.2)(X)
# lstm_two = keras.layers.Bidirectional(keras.layers.LSTM(150, return_sequences=True, recurrent_dropout=0.25, dropout=0.2))
# X = lstm_two(X)
# X = keras.layers.Dropout(0.2)(X)
# # *************Attention*******************
# X = SeqSelfAttention(attention_activation='elu')(X)
# # ****************Attention*******************
# post_activation_GRU_cell = keras.layers.GRU(64, return_sequences = False, recurrent_dropout=0.25, dropout=0.2)
# X = post_activation_GRU_cell(X)
X = keras.layers.Dense(32,activation='elu',kernel_regularizer=keras.regularizers.l2(0.0001))(embeddings)
X = keras.layers.BatchNormalization(momentum=0.99, epsilon=0.001, center=True, scale=True)(X)
X = keras.layers.Dense(3,activation='tanh',kernel_regularizer=keras.regularizers.l2(0.0001))(X)
# Apply a softmax activation
X = keras.layers.Activation('softmax')(X)
# Create the Model instance that maps the token ids and masks to X.
model = keras.Model(inputs=[inputs,input_masks], outputs=[X])
return model
# + id="J4iKjQbtN7Jb"
# model_mpnet = mpnet_classifier((80,))
# model_mpnet.summary()
# + id="-ZzqU47hrmqc"
# This is the best model
def roberta_classifier(input_shape):
"""
Function creating the Emojify-v2 model's graph.
Arguments:
input_shape -- shape of the input, usually (max_len,)
word_to_vec_map -- dictionary mapping every word in a vocabulary into its 50-dimensional vector representation
word_to_index -- dictionary mapping from words to their indices in the vocabulary (400,001 words)
Returns:
model -- a model instance in Keras
"""
model = TFRobertaModel.from_pretrained('roberta-base')
layer = model.layers[0]
layer.trainable = False
# Define the graph inputs: token ids and attention masks of shape input_shape and dtype 'int32' (they contain indices).
inputs = keras.Input(shape=input_shape, dtype='int32')
input_masks = keras.Input(shape=input_shape, dtype='int32')
embeddings = layer([inputs, input_masks])[1]
# embeddings = keras.layers.GaussianNoise(0.2)(embeddings)
# embeddings = keras.layers.Dropout(0.3)(embeddings)
# Propagate the embeddings through an LSTM layer with 128-dimensional hidden state
# Be careful, the returned output should be a batch of sequences.
# lstm_one = keras.layers.Bidirectional(keras.layers.LSTM(150, return_sequences=True, recurrent_dropout=0.25, dropout=0.2))
# X = lstm_one(embeddings)
# X = keras.layers.Dropout(0.2)(X)
# lstm_two = keras.layers.Bidirectional(keras.layers.LSTM(150, return_sequences=True, recurrent_dropout=0.25, dropout=0.2))
# X = lstm_two(X)
# X = keras.layers.Dropout(0.2)(X)
# # *************Attention*******************
# X = SeqSelfAttention(attention_activation='elu')(X)
# # ****************Attention*******************
# post_activation_GRU_cell = keras.layers.GRU(64, return_sequences = False, recurrent_dropout=0.25, dropout=0.2)
# X = post_activation_GRU_cell(X)
X = keras.layers.Dense(32,activation='elu',kernel_regularizer=keras.regularizers.l2(0.0001))(embeddings)
X = keras.layers.BatchNormalization(momentum=0.99, epsilon=0.001, center=True, scale=True)(X)
X = keras.layers.Dense(3,activation='tanh',kernel_regularizer=keras.regularizers.l2(0.0001))(X)
# Apply a softmax activation
X = keras.layers.Activation('softmax')(X)
# Create the Model instance that maps the token ids and masks to X.
model = keras.Model(inputs=[inputs,input_masks], outputs=[X])
return model
# + id="r_cIx6Vjry-r"
# model_roberta = roberta_classifier((80,))
# model_roberta.summary()
# + id="nZNJjz0jry7Q"
# This is the best model
def bert_classifier(input_shape):
"""
Function creating the Emojify-v2 model's graph.
Arguments:
input_shape -- shape of the input, usually (max_len,)
word_to_vec_map -- dictionary mapping every word in a vocabulary into its 50-dimensional vector representation
word_to_index -- dictionary mapping from words to their indices in the vocabulary (400,001 words)
Returns:
model -- a model instance in Keras
"""
model = TFBertModel.from_pretrained('bert-base-uncased')
layer = model.layers[0]
layer.trainable = False
# Define the graph inputs: token ids and attention masks of shape input_shape and dtype 'int32' (they contain indices).
inputs = keras.Input(shape=input_shape, dtype='int32')
input_masks = keras.Input(shape=input_shape, dtype='int32')
embeddings = layer([inputs, input_masks])[1]
# embeddings = keras.layers.GaussianNoise(0.2)(embeddings)
# embeddings = keras.layers.Dropout(0.3)(embeddings)
# Propagate the embeddings through an LSTM layer with 128-dimensional hidden state
# Be careful, the returned output should be a batch of sequences.
# lstm_one = keras.layers.Bidirectional(keras.layers.LSTM(150, return_sequences=True, recurrent_dropout=0.25, dropout=0.2))
# X = lstm_one(embeddings)
# X = keras.layers.Dropout(0.2)(X)
# lstm_two = keras.layers.Bidirectional(keras.layers.LSTM(150, return_sequences=True, recurrent_dropout=0.25, dropout=0.2))
# X = lstm_two(X)
# X = keras.layers.Dropout(0.2)(X)
# # *************Attention*******************
# X = SeqSelfAttention(attention_activation='elu')(X)
# # ****************Attention*******************
# post_activation_GRU_cell = keras.layers.GRU(64, return_sequences = False, recurrent_dropout=0.25, dropout=0.2)
# X = post_activation_GRU_cell(X)
X = keras.layers.Dense(32,activation='elu',kernel_regularizer=keras.regularizers.l2(0.0001))(embeddings)
X = keras.layers.BatchNormalization(momentum=0.99, epsilon=0.001, center=True, scale=True)(X)
X = keras.layers.Dense(3,activation='tanh',kernel_regularizer=keras.regularizers.l2(0.0001))(X)
# Apply a softmax activation
X = keras.layers.Activation('softmax')(X)
# Create the Model instance that maps the token ids and masks to X.
model = keras.Model(inputs=[inputs,input_masks], outputs=[X])
return model
# + id="t0ERU5Rwry4g"
# model_bert = bert_classifier((80,))
# model_bert.summary()
# + id="Pphogs_jry13"
# This is the best model
def albert_classifier(input_shape):
"""
Function creating the Emojify-v2 model's graph.
Arguments:
input_shape -- shape of the input, usually (max_len,)
word_to_vec_map -- dictionary mapping every word in a vocabulary into its 50-dimensional vector representation
word_to_index -- dictionary mapping from words to their indices in the vocabulary (400,001 words)
Returns:
model -- a model instance in Keras
"""
model = TFAlbertModel.from_pretrained('albert-large-v2')
layer = model.layers[0]
layer.trainable = False
# Define the graph inputs: token ids and attention masks of shape input_shape and dtype 'int32' (they contain indices).
inputs = keras.Input(shape=input_shape, dtype='int32')
input_masks = keras.Input(shape=input_shape, dtype='int32')
embeddings = layer([inputs, input_masks])[1]
# embeddings = keras.layers.GaussianNoise(0.2)(embeddings)
# embeddings = keras.layers.Dropout(0.3)(embeddings)
# Propagate the embeddings through an LSTM layer with 128-dimensional hidden state
# Be careful, the returned output should be a batch of sequences.
# lstm_one = keras.layers.Bidirectional(keras.layers.LSTM(150, return_sequences=True, recurrent_dropout=0.25, dropout=0.2))
# X = lstm_one(embeddings)
# X = keras.layers.Dropout(0.2)(X)
# lstm_two = keras.layers.Bidirectional(keras.layers.LSTM(150, return_sequences=True, recurrent_dropout=0.25, dropout=0.2))
# X = lstm_two(X)
# X = keras.layers.Dropout(0.2)(X)
# # *************Attention*******************
# X = SeqSelfAttention(attention_activation='elu')(X)
# # ****************Attention*******************
# post_activation_GRU_cell = keras.layers.GRU(64, return_sequences = False, recurrent_dropout=0.25, dropout=0.2)
# X = post_activation_GRU_cell(X)
X = keras.layers.Dense(32,activation='elu',kernel_regularizer=keras.regularizers.l2(0.0001))(embeddings)
X = keras.layers.BatchNormalization(momentum=0.99, epsilon=0.001, center=True, scale=True)(X)
X = keras.layers.Dense(3,activation='tanh',kernel_regularizer=keras.regularizers.l2(0.0001))(X)
# Apply a softmax activation
X = keras.layers.Activation('softmax')(X)
# Create the Model instance that maps the token ids and masks to X.
model = keras.Model(inputs=[inputs,input_masks], outputs=[X])
return model
# + id="X0txB85YsDSS"
# model_albert = albert_classifier((80,))
# model_albert.summary()
# + id="MEwZdircHVZ2"
# strategy = tf.distribute.TPUStrategy(resolver)
# + id="S5z_U7gcPNaG"
# class EvaluationMetric(keras.callbacks.Callback):
# def __init__(self, val_, Y_val):
# super(EvaluationMetric, self).__init__()
# self.val_ = val_
# self.Y_val = Y_val
# def on_epoch_begin(self, epoch, logs={}):
# print("\nTraining...")
# def on_epoch_end(self, epoch, logs={}):
# print("\nEvaluating...")
# trial_prediction = self.model.predict(self.val_)
# pred = []
# for i in range(0,len(self.Y_val)):
# num = np.argmax(trial_prediction[i])
# pred.append(num)
# from sklearn.metrics import classification_report
# print(classification_report(Y_val, pred, digits=3))
# evaluation_metric = EvaluationMetric(val, Y_val)
# + id="Ygp8tTFKIKzW"
# with strategy.scope():
model_mpnet = mpnet_classifier((80,))
# model_mpnet.load_weights("/content/drive/MyDrive/semeval 17 transformer weights/neutro-mpnet.004.h5")
model_bert = bert_classifier((80,))
# model_bert.load_weights("/content/drive/MyDrive/semeval 17 transformer weights/neutro-bert-1.003.h5")
model_roberta = roberta_classifier((80,))
# model_roberta.load_weights("/content/drive/MyDrive/semeval 17 transformer weights/neutro-roberta.006.h5")
model_albert = albert_classifier((80,))
# model_albert.load_weights("/content/drive/MyDrive/semeval 17 transformer weights/neutro-albert.006.h5")
model_concat = keras.layers.concatenate([model_mpnet.layers[-5].output, (model_roberta.layers[-5].output)[1], (model_bert.layers[-5].output)[1], (model_albert.layers[-5].output)[1]], axis=-1)
X = keras.layers.Dense(32,activation='elu',kernel_regularizer=keras.regularizers.l2(0.0001))(model_concat)
X = keras.layers.BatchNormalization(momentum=0.99, epsilon=0.001, center=True, scale=True)(X)
X = keras.layers.Dense(3,activation='tanh',kernel_regularizer=keras.regularizers.l2(0.0001))(X)
# Apply a softmax activation
X = keras.layers.Activation('softmax')(X)
# cl_model = keras.Model(model_mpnet.input, (model.layers[-5].output))
# mpnet_ = model.layers[-5].output
model = keras.Model(inputs=[model_mpnet.input, model_roberta.input, model_bert.input, model_albert.input], outputs=X)
optimizer = keras.optimizers.Adam(learning_rate=1e-5)
loss_fun = [
    # the model ends in a softmax, so the loss receives probabilities, not logits
    tf.keras.losses.CategoricalCrossentropy(from_logits=False)
]
metric = ['acc']
model.compile(optimizer=optimizer, loss=loss_fun, metrics=metric)
# + id="E6DrFcOmIyjg"
model.summary()
# + id="zxU1g0UdJXhE"
checkpoint = ModelCheckpoint(filepath='/content/neutro-ensemble.{epoch:03d}.h5',
                             verbose=0,
                             save_weights_only=True,
                             save_freq='epoch')  # 'epoch=4' is not a valid ModelCheckpoint argument; save after every epoch
# + id="2oU_7l0P2GWl"
c = Counter(Y)
print(c)
print(c.keys())
neutral = c[0]
pos = c[1]
neg = c[2]
total = pos+neg+neutral
print(neutral,pos,neg,total)
# + id="Gprkw9zj1gpp"
# Weight each class relative to the largest class: the largest class gets
# weight 0.5 and rarer classes get correspondingly larger weights.
maxi = max(pos,neg,neutral)
weight_for_0 = (maxi / (maxi+neutral))
weight_for_1 = (maxi / (maxi+pos))
weight_for_2 = (maxi / (maxi+neg))
class_weight_ = {0: weight_for_0, 1: weight_for_1, 2: weight_for_2}
print('Weight for class 0: {:.2f}'.format(weight_for_0))
print('Weight for class 1: {:.2f}'.format(weight_for_1))
print('Weight for class 2: {:.2f}'.format(weight_for_2))
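As a cross-check on the hand-rolled weights above, scikit-learn can derive balanced class weights directly from a label array. A minimal sketch with a toy label vector (`Y_toy` is a hypothetical stand-in for the notebook's `Y`):

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# Toy stand-in for the notebook's integer label array Y (0=neutral, 1=pos, 2=neg).
Y_toy = np.array([0] * 6 + [1] * 3 + [2] * 1)

classes = np.unique(Y_toy)
weights = compute_class_weight(class_weight='balanced', classes=classes, y=Y_toy)
class_weight_balanced = dict(zip(classes, weights))
# 'balanced' uses n_samples / (n_classes * count): rarer classes get larger weights.
print(class_weight_balanced)
```

Either weighting scheme can be passed to `model.fit(..., class_weight=...)`; the formulas differ, but both up-weight the under-represented classes.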
# + id="5jAymUJQ0GAU"
train = [
[train_encodings_mpnet["input_ids"], train_encodings_mpnet["attention_mask"]],
[train_encodings_roberta["input_ids"], train_encodings_roberta["attention_mask"]],
[train_encodings_bert["input_ids"], train_encodings_bert["attention_mask"]],
[train_encodings_albert["input_ids"], train_encodings_albert["attention_mask"]]
]
val = [
[val_encodings_mpnet["input_ids"], val_encodings_mpnet["attention_mask"]],
[val_encodings_roberta["input_ids"], val_encodings_roberta["attention_mask"]],
[val_encodings_bert["input_ids"], val_encodings_bert["attention_mask"]],
[val_encodings_albert["input_ids"], val_encodings_albert["attention_mask"]]
]
# + id="5riWhojdbwsG"
history = model.fit(
x = train,
y = Y_oh_train,
validation_data = (val,Y_oh_val),
callbacks = [checkpoint],  # re-add evaluation_metric if the EvaluationMetric callback above is uncommented
batch_size = 32,
shuffle=True,
epochs=5,
class_weight = class_weight_
)
# + id="M0Ok9EYPgN9V"
# plot_model(model, to_file="model.png", show_shapes=True, show_layer_names=False)
# + id="lKrBeHKTNaDy"
model.load_weights("/content/drive/MyDrive/semeval 17 transformer weights/neutro-ensemble.004.h5")
# model.save_weights("/content/drive/MyDrive/semeval 17 transformer weights/neutro-ensemble.004.h5")
# + id="tK6cDoO_70Mg"
# # !mv "/content/neutro-ensemble.004.h5" "/content/drive/MyDrive/semeval 17 transformer weights/"
# + id="BC_29cAPgYuC"
answer = model.predict(val)
# + id="IRs4QzpUgsV9"
print(X_val[0])
print(Y_oh_val[0])
print(labels_val[0])
print("******************************************")
print(len(answer))
# + id="HtvboifWhk61"
Counter(Y_val)
# + id="VY8GSPN72Vz5"
# Convert the softmax outputs into hard label predictions
pred = []
text = df_val[2]
for i in range(0, len(X_val)):
    num = np.argmax(answer[i])
    pred.append(num)
print(len(pred))
# + id="z6L2ROGrg_g3" colab={"base_uri": "https://localhost:8080/"} outputId="0011694f-2474-4e51-9e50-aede80cb0ed7"
Counter(pred)
# + id="3Y5xW8QmeVgS"
Counter(Y_val)
# + id="a6p0jkeeT6Tv"
con_mat = tf.math.confusion_matrix(labels=Y_val, predictions=pred, dtype=tf.dtypes.int32)
print(con_mat)
# + id="yJWkTDiqh_cx"
import seaborn as sns
import matplotlib.pyplot as plt
figure = plt.figure(figsize=(8, 8))
sns.heatmap(con_mat, annot=True,cmap=plt.cm.Spectral,fmt='d',xticklabels=["Neutral","Positive","Negative"], yticklabels=["Neutral","Positive","Negative"])
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()
# + id="0zPYBs34iRBA"
from sklearn.metrics import f1_score
f1_score(Y_val, pred, average='macro')
# + id="ESQfLwdAhqU0"
from sklearn.metrics import recall_score
recall_score(Y_val, pred, average='macro')
# + id="qx-oEbGjPOgG"
from sklearn.metrics import classification_report
target_names = ['Neutral', 'Positive', 'Negative']
print(classification_report(Y_val, pred, digits=3))
# + id="6cLe7eJ_RkEl"
from sklearn.metrics import accuracy_score
accuracy_score(Y_val, pred, normalize=True)
# + [markdown] id="pGRT6Ohsgttp"
# <h3>Clustering</h3>
# + id="sIo8_drjrt5o"
# !pip install plotly==4.5.4
# + id="pdYWV8oUry_I"
import plotly
import plotly.graph_objs as go
import plotly.express as px
# + id="caQymOSkNzc8"
count = 0
positive = []
negative = []
neutral = []
for i in range(0,len(pred)):
count = count + 1
neutral.append(answer[i][0])
positive.append(answer[i][1])
negative.append(answer[i][2])
print(count)
# + id="iokLmIsCwe-E"
pred_colour = []
for i in range(0,len(pred)):
if pred[i] == 0:
pred_colour.append("Neutral")
if pred[i] == 1:
pred_colour.append("Positive")
if pred[i] == 2:
pred_colour.append("Negative")
test_df = pd.DataFrame({'positive':positive, 'negative':negative, 'neutral':neutral, 'Prediction':pred_colour})
fig = px.scatter_3d(test_df, x='positive', y='negative', z='neutral', color='Prediction')
fig.update_traces(
marker={
'size': 0.7,
'opacity': 1,
'colorscale' : 'viridis',
}
)
fig.update_layout(legend= {'itemsizing': 'constant'})
fig.update_layout(width = 700)
fig.update_layout(margin=dict(l=0, r=0, b=0, t=0))
# + id="huptAtFhvSF6"
from sklearn.preprocessing import normalize
from sklearn.cluster import KMeans
# + id="hQKewYj1gcBa"
from sklearn.metrics.pairwise import cosine_similarity
from scipy.spatial.distance import cosine
# + [markdown] id="MQItgti4itKf"
# <h5>SVNS</h5>
# + [markdown] id="XECwRJ1Kfw-m"
# <h3>Middle Layer</h3>
# + id="6H7Y4oZQdjbd"
model.layers[-3]
# + id="loTB0YxCcAir"
# with strategy.scope():
cl_model = keras.Model(model.input, model.layers[-3].output)
# + id="2XGbRzE2naUt"
cl_32 = cl_model.predict(val)
# + id="9JesTlyFc3TB"
kmeans = KMeans(n_clusters=3, random_state=4).fit(cl_32)
y_kmeans_batchnorm = kmeans.predict(cl_32)
# + id="Hbh38cwLeKSj"
for i in range(0,len(y_kmeans_batchnorm)):
if(y_kmeans_batchnorm[i] == 0):
y_kmeans_batchnorm[i] = 2
elif(y_kmeans_batchnorm[i] == 1):
y_kmeans_batchnorm[i] = 0
else:
y_kmeans_batchnorm[i] = 1
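The manual relabeling above is tied to `random_state=4`. A seed-independent alternative is to match each k-means cluster to a true label by maximizing the confusion-matrix diagonal with the Hungarian algorithm; a sketch on toy data (`relabel_clusters` is a hypothetical helper, not part of the notebook):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import confusion_matrix

def relabel_clusters(y_true, y_cluster):
    """Map raw cluster ids onto label ids so that agreement with y_true is maximized."""
    cm = confusion_matrix(y_true, y_cluster)
    row_ind, col_ind = linear_sum_assignment(-cm)  # negate to maximize the diagonal
    mapping = {cluster: label for label, cluster in zip(row_ind, col_ind)}
    return np.array([mapping[c] for c in y_cluster])

# Toy example: cluster ids are a permutation of the true labels.
y_true = np.array([0, 0, 1, 1, 2, 2])
y_cluster = np.array([1, 1, 2, 2, 0, 0])
print(relabel_clusters(y_true, y_cluster))  # -> [0 0 1 1 2 2]
```

With this helper, `y_kmeans_batchnorm` could be remapped in one call regardless of which ids k-means happens to assign.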
# + id="7xdKo400fmg8"
centers_batchnorm = kmeans.cluster_centers_
# + id="00QwJjYzlbp8"
con_mat = tf.math.confusion_matrix(labels=Y_val, predictions=y_kmeans_batchnorm)
print(con_mat)
# + id="3FlHfZw87fiY"
from sklearn.metrics import classification_report
target_names = ['Neutral', 'Positive', 'Negative']
print(classification_report(Y_val, y_kmeans_batchnorm, digits=3, target_names=target_names))
# + id="Hp7bzewKPy9s"
# Confusion-matrix values recorded from an earlier run, hard-coded here for the styled plot below
con_mat = [[3805,886,1246],[492, 1829,54],[722,119,3131]]
# + id="n3S7O-kfu_qF"
import seaborn as sns
import matplotlib.pyplot as plt
figure = plt.figure(figsize=(8, 8))
sns.set(font_scale=1.5)
sns.heatmap(con_mat, annot=True,cmap=plt.cm.Spectral,fmt='d',xticklabels=["Neutral","Positive","Negative"], yticklabels=["Neutral","Positive","Negative"])
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()
# + id="SR0yDPLli7mL"
svns_neu_bn = []
for i in range(0,len(Y_val)):
neu = cosine(cl_32[i], centers_batchnorm[1])/2
svns_neu_bn.append(1-neu)
print(len(svns_neu_bn))
# + id="7W0ENjp9jQd6"
svns_pos_bn = []
for i in range(0,len(Y_val)):
pos = cosine(cl_32[i], centers_batchnorm[2])/2
svns_pos_bn.append(1-pos)
print(len(svns_pos_bn))
# + id="IiciVYGUjpvW"
svns_neg_bn = []
for i in range(0,len(Y_val)):
neg = cosine(cl_32[i], centers_batchnorm[0])/2
svns_neg_bn.append(1-neg)
print(len(svns_neg_bn))
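The three per-sample loops above can be collapsed into one vectorized pass with `scipy.spatial.distance.cdist`. A sketch with random stand-ins for `cl_32` and `centers_batchnorm` (in the notebook, center indices 0/1/2 correspond to negative/neutral/positive):

```python
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
cl_32_demo = rng.normal(size=(5, 32))    # stand-in for cl_32
centers_demo = rng.normal(size=(3, 32))  # stand-in for the 3 k-means centers

# cosine distance lies in [0, 2], so membership = 1 - distance/2 lies in [0, 1]
memberships = 1 - cdist(cl_32_demo, centers_demo, metric='cosine') / 2
svns_neg, svns_neu, svns_pos = memberships[:, 0], memberships[:, 1], memberships[:, 2]
print(memberships.shape)
```

One `cdist` call replaces all three Python loops and scales far better on the full validation set.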
# + id="Lr5qnwoN8142"
pred_colour = []
for i in range(0,len(pred)):
if y_kmeans_batchnorm[i] == 0:
pred_colour.append("Neutral")
if y_kmeans_batchnorm[i] == 1:
pred_colour.append("Positive")
if y_kmeans_batchnorm[i] == 2:
pred_colour.append("Negative")
test_df = pd.DataFrame({'SVNS Positive':svns_pos_bn, 'SVNS Negative':svns_neg_bn, 'SVNS Neutral':svns_neu_bn, 'Labels:':pred_colour})
fig = px.scatter_3d(test_df, x='SVNS Positive', y='SVNS Negative', z='SVNS Neutral', color='Labels:')
fig.update_traces(
marker={
'size': 1,
'opacity': 1,
'colorscale' : 'viridis',
}
)
fig.update_layout(legend= {'itemsizing': 'constant'})
fig.update_layout(width = 850, height = 750)
fig.update_layout(margin=dict(l=0, r=0, b=0, t=0))
# + [markdown] id="KLrF6WW-wffo"
# <h3>GRU</h3>
# + id="NZcqhU1PwjBb"
model.layers[-5]
# + id="XEWlVd3zwi9D"
# with strategy.scope():
cl_model = keras.Model(model.input, (model.layers[-5].output))
# + id="ujS-Tn63oNRE"
cl_32 = cl_model.predict(val)
# + id="FrjQrYPiwi7J"
kmeans = KMeans(n_clusters=3, random_state=4).fit(cl_32)
y_kmeans_gru = kmeans.predict(cl_32)
# + id="dtD1cSZ9wi31"
for i in range(0,len(y_kmeans_gru)):
if(y_kmeans_gru[i] == 0):
y_kmeans_gru[i] = 1
elif(y_kmeans_gru[i] == 1):
y_kmeans_gru[i] = 2
else:
y_kmeans_gru[i] = 0
# + id="nFh36FsDwi1H"
centers_gru = kmeans.cluster_centers_
# + id="jU0BXY29SWok"
con_mat = tf.math.confusion_matrix(labels=Y_val, predictions=y_kmeans_gru)
print(con_mat)
# + id="F3M9WNy9vbML"
import seaborn as sns
import matplotlib.pyplot as plt
figure = plt.figure(figsize=(8, 8))
sns.set(font_scale=1.5)
sns.heatmap(con_mat, annot=True,cmap=plt.cm.Spectral,fmt='d',xticklabels=["Neutral","Positive","Negative"], yticklabels=["Neutral","Positive","Negative"])
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()
# + id="LpEznqf6w_YC"
from sklearn.metrics import classification_report
target_names = ['Neutral', 'Positive', 'Negative']
print(classification_report(Y_val, y_kmeans_gru, digits=3, target_names=target_names))
# + id="9pn7Pb_Cw_V5"
svns_neu_gru = []
for i in range(0,len(Y_val)):
neu = cosine(cl_32[i], centers_gru[2])/2
svns_neu_gru.append(1-neu)
print(len(svns_neu_gru))
# + id="GfwUr7T-w_Q4"
svns_pos_gru = []
for i in range(0,len(Y_val)):
pos = cosine(cl_32[i], centers_gru[0])/2
svns_pos_gru.append(1-pos)
print(len(svns_pos_gru))
# + id="q8yEsi7cxNVL"
svns_neg_gru = []
for i in range(0,len(Y_val)):
neg = cosine(cl_32[i], centers_gru[1])/2
svns_neg_gru.append(1-neg)
print(len(svns_neg_gru))
# + id="d3lJuVOp-7yu"
pred_colour = []
for i in range(0,len(pred)):
if y_kmeans_gru[i] == 0:
pred_colour.append("Neutral")
if y_kmeans_gru[i] == 1:
pred_colour.append("Positive")
if y_kmeans_gru[i] == 2:
pred_colour.append("Negative")
test_df = pd.DataFrame({'SVNS Positive':svns_pos_gru, 'SVNS Negative':svns_neg_gru, 'SVNS Neutral':svns_neu_gru, 'Labels:':pred_colour})
fig = px.scatter_3d(test_df, x='SVNS Positive', y='SVNS Negative', z='SVNS Neutral', color='Labels:')
fig.update_traces(
marker={
'size': 1,
'opacity': 1,
'colorscale' : 'viridis',
}
)
fig.update_layout(legend= {'itemsizing': 'constant'})
fig.update_layout(width = 850, height = 750)
fig.update_layout(margin=dict(l=0, r=0, b=0, t=0))
# + id="6riKa1r1fQt8"
| Transformer Models/Stacked Ensemble.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="sggNJR46sta5"
# # Logistic regression
#
#
# <td> <img src="intro/ml_alg.jpg" alt="Drawing" style="width: 1200px;"/></td>
# + id="5CAVX_mSsta7" executionInfo={"status": "ok", "timestamp": 1626160317992, "user_tz": -120, "elapsed": 4, "user": {"displayName": "\u0110or\u0111<NAME>\u0107", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjMtdPDPC6XJoAlVJ9SyzF_WldAaS1YlXbiUOAReA=s64", "userId": "17224412897437983949"}}
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib notebook
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
# + [markdown] cell_style="split" id="kuvXO4_5sta8"
# ### Linear regression
#
# * How many/much?
# * Output data $ y \in \mathbb{R} $
# * Predicting continuous numbers, e.g. temperature of a room, price of apples, etc.
#
#
# \begin{equation*}
# h_{\theta}(x) = \mathbf{\theta}^\intercal \mathbf{x} + \theta_0
# \end{equation*}
#
# + [markdown] cell_style="split" id="KDgJR3wLsta8"
# ### Logistic regression
#
# * **Classification:** Assigning a datapoint to a category/class.
# * Output data are categories.
#
# * What is the probability that today is rainy?
#
# #### Binary classification
# * Only two labels $ y \in \{0,1\} $
# * Yes or no
# * zero or one
# * positive or negative
# * True or False
#
# We can have more than two labels
#
#
# + [markdown] id="L2pa3qLtsta8"
# ### Rain measurement vs Rain yes/no
# + hide_input=false id="n708D5HOsta9" executionInfo={"status": "ok", "timestamp": 1626160324492, "user_tz": -120, "elapsed": 293, "user": {"displayName": "\u0110or\u0111<NAME>\u0107", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjMtdPDPC6XJoAlVJ9SyzF_WldAaS1YlXbiUOAReA=s64", "userId": "17224412897437983949"}}
def syn1(N):
""" data(samples, features)"""
global seed
np.random.seed(seed)
data = np.empty(shape=(N,2), dtype = np.float32)
tar = np.empty(shape=(N,), dtype = np.float32)
N1 = int(N/2)
data[:N1,0] = 40 + np.random.normal(loc=5, scale=.3, size=(N1))
data[N1:,0] = 15 + np.random.normal(loc=5, scale=.3, size=(N-N1))
data[:,1] = 5*np.random.normal(loc=3, scale=.3, size=(N))
data = data / data.std(axis=0)
# Target
tar[:N1] = np.ones(shape=(N1,))
tar[N1:] = np.zeros(shape=(N-N1,))
# Rotation
theta = np.radians(30)
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c,-s],[s,c]]) # rotation matrix
data = np.dot(data,R)
return data,tar
#d,t = syn1(100)
#plt.figure(1)
#plt.scatter(d[:,0],d[:,1], c=t)
# + cell_style="split" hide_input=true colab={"base_uri": "https://localhost:8080/", "height": 248} id="m_J8HVjNsta9" executionInfo={"status": "ok", "timestamp": 1626160334326, "user_tz": -120, "elapsed": 1078, "user": {"displayName": "\u0110or\u0111<NAME>\u0107", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjMtdPDPC6XJoAlVJ9SyzF_WldAaS1YlXbiUOAReA=s64", "userId": "17224412897437983949"}} outputId="6e5fa632-9dde-487a-95b8-dfb7f4f89617"
def LoG(x, y, sigma):
    # Gaussian-style exponent (squared terms), used only to draw a synthetic rain surface
    temp = - (x ** 2 + y ** 2) / (2 * sigma ** 2)
    return temp
N = 49
half_N = N // 2
X2, Y2 = np.meshgrid(range(N), range(N))
Z2 = (-LoG(X2 - half_N, Y2 - half_N, sigma=8) +200)/10
plt.figure()
ax = plt.axes(projection="3d")
ax.plot_surface(X2, Y2, Z2, cmap='jet')
ax.set_xlabel('Temperature')
ax.set_ylabel('Humidity')
ax.set_zlabel('Rain (mm)')
plt.show()
# + cell_style="split" hide_input=true colab={"base_uri": "https://localhost:8080/", "height": 248} id="3_YF4qiasta-" executionInfo={"status": "ok", "timestamp": 1626160339089, "user_tz": -120, "elapsed": 511, "user": {"displayName": "\u0110or\u0111e Grbi\u0107", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjMtdPDPC6XJoAlVJ9SyzF_WldAaS1YlXbiUOAReA=s64", "userId": "17224412897437983949"}} outputId="cdacb84f-de47-4db4-e31f-44b1826cef10"
seed = 321
d,t = syn1(40)
x,y = d[:,0],d[:,1]
z = t
plt.figure()
ax = plt.axes(projection="3d")
ax.scatter3D(x,y,z, c = z, cmap = 'Accent')
ax.set_xlabel('Temperature')
ax.set_ylabel('Humidity')
ax.set_zlabel('Rain (mm)')
plt.show()
# + cell_style="split" id="QOdVU3rZsta_" active=""
#
# + cell_style="split" hide_input=true colab={"base_uri": "https://localhost:8080/", "height": 279} id="8lTAXG9asta_" executionInfo={"status": "ok", "timestamp": 1626160344949, "user_tz": -120, "elapsed": 349, "user": {"displayName": "\u0110or\u0111e Grbi\u0107", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjMtdPDPC6XJoAlVJ9SyzF_WldAaS1YlXbiUOAReA=s64", "userId": "17224412897437983949"}} outputId="275df54c-e5a1-45fb-942f-2b82224919f3"
_, ax = plt.subplots()
scat = ax.scatter(x,y,c = z, cmap = 'Accent')
legend1 = ax.legend(*scat.legend_elements(),
loc="lower right", title='Rainy day?')
ax.set_xlabel('Humidity')
ax.set_ylabel('Temperature')
plt.show()
# + [markdown] id="S6cJpT48sta_"
# ### Can we still use Linear Regression hypothesis?
#
# * Consider a one-feature data.
# * In the below example, these two classes are well separated. Fitting a line, one can define a **threshold** to classify the prediction.
#
#
# + hide_input=true colab={"base_uri": "https://localhost:8080/", "height": 279} id="-u8-D3KystbA" executionInfo={"status": "ok", "timestamp": 1626160360247, "user_tz": -120, "elapsed": 394, "user": {"displayName": "\u0110or\u0111<NAME>\u0107", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjMtdPDPC6XJoAlVJ9SyzF_WldAaS1YlXbiUOAReA=s64", "userId": "17224412897437983949"}} outputId="410dde86-3321-4965-d5fa-eedd72bc2700"
plt.figure()
plt.scatter(x,z,c = z, cmap = 'Accent')
plt.plot(x, y.shape[0]*[0.5],'b')
plt.ylabel('Rain yes/no')
plt.xlabel('Humidity')
plt.show()
# + [markdown] id="NCfG7XffstbA"
# ### Problems:
# * Predicted values are continuous, while we are interested in probabilities (range $\in [0,1]$).
#
# In the figure above, every gray humidity sample is labeled with probability $1$ (a rainy day), even though some of those samples should be less likely than others to indicate rain.
#
#
#
# * Linear regression for binary classification is sensitive to imbalanced data.
#
#
# We want to limit the hypothesis equation to a specific domain and range.
# + [markdown] id="daTrXAJwstbA"
# ## Logistic regression to the rescue
# Logistic regression is a linear algorithm (with a non-linear transform on output).
#
# While $ X \in \mathbb{R} $
#
# \begin{equation*}
# h_{\theta}(x) \in [0,1]
# \end{equation*}
#
# Logistic regression is a classification algorithm with the hypothesis:
#
#
# \begin{equation*}
# h_{\theta}(x) = \sigma(\pmb{ \theta_0} +
# \mathbf{X} \pmb{ \theta})\\
# \text{where} \qquad{} \sigma(x) = \frac{1}{1 + e^{-x} }
# \end{equation*}
#
#
#
#
# The sigmoid function $\sigma(.)$, sometimes called the **logistic function** or squashing function.
#
# $\theta_0$ shifts the curve left or right; $\theta_1$ controls how steep (compressed) the curve is.
# + hide_input=true colab={"base_uri": "https://localhost:8080/", "height": 333, "referenced_widgets": ["470389facd0e41bba777c082c1417bd0", "24f6204035d2489287fbe7f549557522", "<KEY>", "bd7278005ef74f74b089f4e9dde58fec", "<KEY>", "1093d468a9d440cc81e60a12931a524d", "<KEY>", "<KEY>", "<KEY>", "493f7e09b9024767be8c6190c00c2047"]} id="bKEhzG46stbA" executionInfo={"status": "ok", "timestamp": 1626160413729, "user_tz": -120, "elapsed": 312, "user": {"displayName": "\u0110or\u0111<NAME>\u0107", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjMtdPDPC6XJoAlVJ9SyzF_WldAaS1YlXbiUOAReA=s64", "userId": "17224412897437983949"}} outputId="951d2868-3e8a-429d-e6a8-1468ad27c15f"
fig_sigmoid, ax_sigmoid = plt.subplots()
p = np.linspace(-10, 10,200)
def sigmoid(x, t0=0,t1=1):
return (1 / (1 + np.exp(- (t0+ t1* x)))).reshape(x.shape[0],1)
@widgets.interact(t0=(-7, 7, .1), t1=(-5, 5, .1))
def update(t0= 0 ,t1 = 1 ):
[l.remove() for l in ax_sigmoid.lines]
ax_sigmoid.plot(p, sigmoid(p, t0,t1) , color='C0')
# + [markdown] id="FN_IfmamstbB"
# **Modify the linear regression hypothesis by running it through a sigmoid function.**
#
#
# \begin{equation*}
# h_{\theta}^{Logistic}(\mathbf{X}) = \sigma(h_{\theta}^{Linear}(\mathbf{X}))
# \end{equation*}
#
# ### Logistic regression hypothesis
#
# \begin{equation*}
# h_{\theta}(\mathbf{X}) = \frac{1}{1 + e^{- ( \pmb{ \theta_0} +
# \mathbf{X} \pmb{ \theta} ) }}
# \end{equation*}
#
# Sigmoid function squishes input and put it into the range zero and one.
#
#
# + hide_input=false id="6cQ85K6ZstbB" executionInfo={"status": "ok", "timestamp": 1626160442478, "user_tz": -120, "elapsed": 910, "user": {"displayName": "\u0110or\u0111e Grbi\u0107", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjMtdPDPC6XJoAlVJ9SyzF_WldAaS1YlXbiUOAReA=s64", "userId": "17224412897437983949"}}
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression(random_state=0).fit(x.reshape(40,1),z)
t0 =clf.intercept_
t1 = clf.coef_
# + cell_style="center" hide_input=true colab={"base_uri": "https://localhost:8080/", "height": 279} id="ZYjH8qrkstbB" executionInfo={"status": "ok", "timestamp": 1626160444508, "user_tz": -120, "elapsed": 324, "user": {"displayName": "\u0110or\u0111<NAME>\u0107", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjMtdPDPC6XJoAlVJ9SyzF_WldAaS1YlXbiUOAReA=s64", "userId": "17224412897437983949"}} outputId="55c2c833-acd7-4fbb-e315-4082d23c4086"
plt.figure()
plt.scatter(x,z,c = z, cmap = 'Accent')
plt.plot(x, y.shape[0]*[0.5],'b')
plt.plot(x.reshape(40,1), sigmoid(x, t0, t1),'r.')
plt.ylabel('Rain yes/no')
plt.xlabel('Humidity')
plt.show()
# + [markdown] cell_style="center" id="FYNJWg1HstbC"
# ### Interpretation of the new hypothesis:
# Logistic regression models the probability of the default class ($y=1$)
# * What is the chance of rain with humidity of $7.9$?
#
# * Probability of rain is labeled by one, $y=1$
#
# * Want to predict rain given humidity:
#
# \begin{equation*} P (y = rain | x= humidity)\end{equation*}
#
# * Have the hypothesis $h_{\theta}(x)$ with the best $\theta$
#
# * In the new sample, humidity is $x= 7.9$ which gives the hypothesis $h_{\theta}(x) = 0.61$
#
# * **The probability that it rains, given the input ($x= 7.9$) parameterized by $\theta$, is $61\%$**.
#
#
#
# \begin{align*}
# h_{\theta}(x) &= P (y = rain | x;\theta)\\
# &= P (y = 1 | x;\theta)
# \end{align*}
#
# #### The sum of the probabilities of all outcomes of an event or experiment is equal to one.
#
# $y=1 \quad{} \text{or} \quad{} 0 $
# \begin{equation*}
# P (y = 1 | x;\theta) = 1 - P (y = 0 | x;\theta)
# \end{equation*}
#
#
# ### Logistic regression hypothesis
# For each sample with multiple features:
#
# \begin{equation*}
# \color{green}{P (y_i = 1 | \vec{x_i}; \vec{\theta}) = \sigma(\vec{x_i} \vec{\theta}) }
# \end{equation*}
#
#
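The hypothesis $P(y_i = 1 | \vec{x_i}; \vec{\theta}) = \sigma(\vec{x_i}\vec{\theta})$ can be sketched in a few lines of NumPy; the feature matrix and parameter values below are illustrative, not taken from the notebook's data.

```python
import numpy as np

def sigmoid(z):
    # Logistic function: maps any real number into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def hypothesis(X, theta):
    # P(y = 1 | x; theta) = sigma(x . theta), computed per row of X
    return sigmoid(X @ theta)

# Tiny illustrative example: an intercept column plus one feature
X = np.array([[1.0, 2.0],
              [1.0, -1.0]])
theta = np.array([0.5, 1.0])
p = hypothesis(X, theta)
print(p)  # each entry is a probability in (0, 1)
```

Each output entry is directly interpretable as the probability of the positive class for that sample.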
# + [markdown] id="7RgwflIbstbC"
# ## Cost function for Logistic Regression:
# * For Linear regression
#
# \begin{equation*}
# J(\theta) = \text{MSE}\\
# \underset{\theta}{\text{min}} J(\theta)
# \end{equation*}
#
# * What happens if we use **MSE** for Logistic regression?
#
# * Cost function $J(\theta)$ becomes a **non-convex function**: it is not easy to find the global minimum
# * We want to work with probabilities, not distances
#
# * Output probabilities: choose $\theta$s that give the actual labels in the training data the **highest probability**.
#
#
# + hide_input=true id="zSOyoemistbD" outputId="9a553ab3-fa08-4b80-a5aa-c47034a1c65f"
mse = lambda n, y_pred , y: (1/n) * np.sum( (y_pred - y)**2)
all_theta = np.linspace(-4, 4, 100)
all_mse = [ mse(40, 1 / (1 + np.exp(-i*x)) , z) for i in all_theta]
_, axs = plt.subplots(1, 2)
axs[0].scatter(x,z,c = z, cmap = 'Accent')
axs[0].plot(x, 1 / (1 + np.exp(-all_theta[np.argmin(all_mse)]*x)), 'm')  # sigmoid hypothesis with the MSE-best theta
axs[0].set_title(r'$h_\theta(x)$')
axs[0].set_xlabel('x')
axs[0].set_ylabel('y')
axs[1].plot(all_theta, all_mse , 'r')
axs[1].set_title(r'$J(\theta)$')
axs[1].set_xlabel(r'$\theta_1$')
axs[1].set_ylabel(r'$J(\theta_1)$')
plt.show()
# + [markdown] hide_input=true id="HUhnrigSstbD"
# ## How to estimate the parameters?
#
# * Data with n samples, each has a label zero or one
#
# \
#
# * For sample i with label <span style="color:red">one</span>:
#
# The goal is to find $\theta$s such that $P (y_i = 1 | x_i;\theta)$ is as close as possible to $1$.
#
#
#
# * For sample i with label <span style="color:red">zero</span>:
#
# The goal is to find $\theta$s such that $P (y_i = 0 | x_i;\theta)$ is as close as possible to $1$. Equivalently, the complement $1 - P (y_i = 1 | x_i;\theta)$ should be as close as possible to $1$.
#
#
#
# + [markdown] id="NAK14RenstbD"
# ## Cross-entropy as a cost function
# For binary classification, also called **binary cross-entropy**.
#
# * $y$ true label
# * $h_{\theta}(x)$ prediction
#
# \begin{equation*}
# \underset{\theta}{\text{max}} P_{\theta}(y_1, y_2, \ldots y_n |x_1, x_2, \ldots x_n)
# \end{equation*}
#
#
# Samples are **independent**
#
# \begin{equation*}
# \underset{\theta}{\text{max}} P_{\theta}(y_1|x_1) P_{\theta}(y_2|x_2) \ldots P_{\theta}(y_n|x_n) \\
# \underset{\theta}{\text{max}} \prod_i^n P_{\theta}(y_i|x_i)
# \end{equation*}
#
# * It is computationally cheaper (and numerically more stable) to take $\log \prod P(.)$ and break the product down into $ \sum \log P(.)$.
#
# * Since $\log$ is monotonically increasing, maximizing $\log P$ gives the same $\theta$ as maximizing $P$; moreover, the resulting negative log-likelihood is convex.
#
# \begin{equation*}
# \underset{\theta}{\text{max}} \sum_i^n \color{blue}{\log} P_{\theta}(y_i|x_i)
# \end{equation*}
#
# * We often have a cost that we **minimize**. We can simply consider the negative of the log probability to change a maximization problem to minimization:
#
#
# \begin{equation*}
# \underset{\theta}{\color{red}{\text{min}}} \sum_i^n - \log P_{\theta}(y_i|x_i)
# \end{equation*}
#
# * We have only two cases for the sample i output $y_i$, zero or one.
# * We had $ h_{\theta}(x_i) = P(y_i = 1 | x_i;\theta)$
# * and $ P(y_i = 0 | x_i;\theta) = 1 - P (y_i = 1 | x_i;\theta) $
#
#
#
# \begin{equation*}
# P_{\theta} (y_i | x_i)
# \begin{cases}
# h_{\theta}(x_i) & \text{if} \quad{} y_i =1 \\
# 1- h_{\theta}(x_i) & \text{if} \quad{} y_i =0
# \end{cases}
# \end{equation*}
#
# * We can rewrite it as
#
# \begin{equation*}
# P_{\theta} (y_i | x_i) = h_{\theta}(x_i)^{y_i} (1- h_{\theta}(x_i))^{1-y_i}
# \end{equation*}
#
# ### <span style="color:blue">Cross-entropy</span>
#
# The average of the cost function is:
#
# \begin{align*}
# J(\theta) &= \frac{1}{n} \sum_{i=1}^n \left[ \color{blue}{- y_i \log h_{\theta}(x_i)} \color{green}{- (1- y_i) \log (1-h_{\theta}(x_i))} \right] \\
# \text{or} \\
# J(\theta) &= - \frac{1}{n} \sum_{i=1}^n \left[ \color{blue}{ y_i \log h_{\theta}(x_i)} \color{green}{+ (1- y_i) \log (1-h_{\theta}(x_i))} \right]
# \end{align*}
#
#
#
# This cost function is commonly called log loss and a special case of **negative log likelihood (NLL)**. Later we learn how to get this cost function with **maximum likelihood estimation (MLE)**.
#
#
#
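A minimal NumPy sketch of the binary cross-entropy above; the `eps` clipping is an implementation detail added here to avoid `log(0)`, and the example labels and predictions are made up.

```python
import numpy as np

def cross_entropy(y, y_hat, eps=1e-12):
    # J(theta) = -(1/n) * sum[ y*log(h) + (1 - y)*log(1 - h) ]
    # clipping by eps guards against log(0) for saturated predictions
    y_hat = np.clip(y_hat, eps, 1 - eps)
    return -np.mean(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

y = np.array([1.0, 0.0, 1.0, 0.0])
good = np.array([0.9, 0.1, 0.8, 0.2])   # confident and correct
bad = np.array([0.1, 0.9, 0.2, 0.8])    # confident and wrong
print(cross_entropy(y, good), cross_entropy(y, bad))
```

Confidently correct predictions yield a much lower loss than confidently wrong ones, which is exactly the behavior the probabilistic argument above asks for.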
# + [markdown] id="JKCYNjAqstbE"
# ### How to minimize logistic regression cost function?
#
# * Unlike linear regression, there is no closed-form solution for the cross-entropy cost function.
# * The cross-entropy (NLL) function is convex, so the simple 1st-order **GD** optimizer is a good choice for optimization.
#
# + [markdown] id="IezSbkVKstbE"
# ## Gradient descent for Logistic regression
# Learn $\theta$s in an iterative manner.
# \begin{align*}
# \pmb{ \theta} & := \pmb{ \theta} - \alpha \nabla J(\theta)
# \end{align*}
#
#
#
# With a convex cost function:
#
# \begin{equation*}
# J(\theta) = - \frac{1}{n} \sum_{i=1}^n \left[ y_i \log h_{\theta}(x_i) + (1- y_i) \log (1-h_{\theta}(x_i)) \right] \\
# \underset{\theta}{\text{min}} J(\theta)
# \end{equation*}
#
# How to use **Gradient descent** to minimize the cost function?
#
# Repeat until convergence
#
# \begin{align*}
# \frac{\partial }{\partial \theta_j } J(\theta) = ? \qquad{} \forall j
# \end{align*}
#
#
#
# + [markdown] id="mJnBXmgWstbE"
# 
#
#
#
# \begin{align*}
# \frac{\partial J(\theta)}{\partial \theta_j} & =
# \frac{\partial}{\partial \theta_j} \left[-\frac{1}{n}\sum_{i=1}^n
# \left[ y_i\log\left(h_\theta \left(x_i\right)\right) +
# (1 -y_i)\log\left(1-h_\theta \left(x_i\right)\right)\right]\right]
# \\[2ex]
# &= \,\frac{-1}{n}\,\sum_{i=1}^n
# \left[
# y_i\frac{\partial}{\partial \theta_j}\log\left(h_\theta \left(x_i\right)\right) +
# (1 -y_i)\frac{\partial}{\partial \theta_j}\log\left(1-h_\theta \left(x_i\right)\right)
# \right]
# \\[2ex]
# & = \,\frac{-1}{n}\,\sum_{i=1}^n
# \left[
# y_i\frac{\frac{\partial}{\partial \theta_j}h_\theta \left(x_i\right)}{h_\theta\left(x_i\right)} +
# (1 -y_i)\frac{\frac{\partial}{\partial \theta_j}\left(1-h_\theta \left(x_i\right)\right)}{1-h_\theta\left(x_i\right)}
# \right]
# \\[2ex]
# &=\,\frac{-1}{n}\,\sum_{i=1}^n
# \left[
# y_i\frac{\frac{\partial}{\partial \theta_j}\sigma\left(\theta^\top x_i\right)}{h_\theta\left(x_i\right)} +
# (1 -y_i)\frac{\frac{\partial}{\partial \theta_j}\left(1-\sigma\left(\theta^\top x_i\right)\right)}{1-h_\theta\left(x_i\right)}
# \right]
# \\[2ex]
# &=\frac{-1}{n}\,\sum_{i=1}^n
# \left[ y_i\,
# \frac{\sigma\left(\theta^\top x_i\right)\left(1-\sigma\left(\theta^\top x_i\right)\right)\frac{\partial}{\partial \theta_j}\left(\theta^\top x_i\right)}{h_\theta\left(x_i\right)} -
# (1 -y_i)\,\frac{\sigma\left(\theta^\top x_i\right)\left(1-\sigma\left(\theta^\top x_i\right)\right)\frac{\partial}{\partial \theta_j}\left(\theta^\top x_i\right)}{1-h_\theta\left(x_i\right)}
# \right]
# \\[2ex]
# &= \,\frac{-1}{n}\,\sum_{i=1}^n
# \left[
# y_i\frac{h_\theta\left( x_i\right)\left(1-h_\theta\left( x_i\right)\right)\frac{\partial}{\partial \theta_j}\left(\theta^\top x_i\right)}{h_\theta\left(x_i\right)} -
# (1 -y_i)\frac{h_\theta\left( x_i\right)\left(1-h_\theta\left(x_i\right)\right)\frac{\partial}{\partial \theta_j}\left( \theta^\top x_i\right)}{1-h_\theta\left(x_i\right)}
# \right]
# \\[2ex]
# &=\,\frac{-1}{n}\,\sum_{i=1}^n \left[y_i\left(1-h_\theta\left(x_i\right)\right)x_{ij} -
# \left(1-y_i\right)\,h_\theta\left(x_i\right)x_{ij}
# \right]
# \\[2ex]
# &=\,\frac{-1}{n}\,\sum_{i=1}^n \left[y_i-y_ih_\theta\left(x_i\right)-
# h_\theta\left(x_i\right)+y_ih_\theta\left(x_i\right)
# \right]\,x_{ij}
# \\[2ex]
# &=\,\frac{-1}{n}\,\sum_{i=1}^n \left[y_i-h_\theta\left(x_i\right)\right]\,x_{ij}\\[2ex]
# &=\frac{1}{n}\sum_{i=1}^n\left[h_\theta\left(x_i\right)-y_i\right]\,x_{ij}
# \end{align*}
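The closed-form gradient derived above, $\frac{1}{n}\sum_i \left[h_\theta(x_i)-y_i\right] x_{ij}$, can be verified against a central-difference numerical gradient; the data and names below are synthetic, for illustration only.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cost(theta, X, y):
    # Binary cross-entropy J(theta)
    h = sigmoid(X @ theta)
    return -np.mean(y * np.log(h) + (1 - y) * np.log(1 - h))

def grad(theta, X, y):
    # Analytic gradient from the derivation: (1/n) * X^T (h - y)
    h = sigmoid(X @ theta)
    return X.T @ (h - y) / len(y)

rng = np.random.default_rng(0)
X = np.c_[np.ones(20), rng.normal(size=20)]   # intercept column + one feature
y = (rng.random(20) < 0.5).astype(float)
theta = rng.normal(size=2)

# Central-difference numerical gradient for comparison
eps = 1e-6
num = np.array([(cost(theta + eps*e, X, y) - cost(theta - eps*e, X, y)) / (2*eps)
                for e in np.eye(2)])
print(np.max(np.abs(grad(theta, X, y) - num)))  # should be tiny
```

Agreement between the two confirms the derivation above, term by term.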
# + [markdown] id="wSR3WaOVstbF"
#
# ### <span style="color:blue">GD for Logistic regression</span>
#
# Repeat until convergence
#
# \begin{align*}
# \theta_j & := \theta_j - \alpha \frac{1}{n}\sum_{i=1}^n\left[h_\theta\left(x_i\right)-y_i\right] x_{ij} \qquad{} \forall j \\
# \text{or}\\
# \pmb{ \theta} & := \pmb{ \theta} - \frac{\alpha}{n} \mathbf{X}^\intercal (h_{\theta}(\mathbf{x}) - \vec{y})
# \end{align*}
#
#
# This has the same form as the gradient descent update for Linear regression, but with a different $h_\theta(x)$.
#
# * After finding the best $\theta$s, similar to linear regression, we can plug these $\theta$s back into the hypothesis ($ h_{\theta}(x_i) = P(y_i = 1 | x_i;\theta)$ ) to find the probability for each sample:
#
# \begin{equation*}
# \text{Prediction for sample } i =
# \begin{cases}
# 1 & \text{if} \quad{} P (y_i =1 | x_i; \pmb{ \theta}^*) \geq 0.5 \\
# 0 & \text{if} \quad{} P(y_i =1 | x_i; \pmb{ \theta}^*) < 0.5
# \end{cases}
# \end{equation*}
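The update rule and thresholded prediction above can be sketched as a small training loop; this is a minimal illustration on synthetic, linearly separable data, not the notebook's training code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, alpha=0.1, iters=5000):
    # Batch gradient descent: theta := theta - (alpha/n) X^T (h - y)
    theta = np.zeros(X.shape[1])
    n = len(y)
    for _ in range(iters):
        theta -= alpha / n * (X.T @ (sigmoid(X @ theta) - y))
    return theta

def predict(X, theta, threshold=0.5):
    # Label 1 when P(y=1|x) >= threshold
    return (sigmoid(X @ theta) >= threshold).astype(int)

# Synthetic, linearly separable data: intercept column + one feature
X = np.c_[np.ones(6), [-3.0, -2.0, -1.0, 1.0, 2.0, 3.0]]
y = np.array([0, 0, 0, 1, 1, 1])
theta = fit_logistic(X, y)
print(predict(X, theta))
```

On separable data like this, the loop recovers the training labels; the threshold of 0.5 is the default choice discussed in the next section.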
# + [markdown] id="vTlyysHSstbF"
# ### Feature scaling applies for logistic regression as well, similar to linear regression.
# + [markdown] id="bfD7kwDostbF"
# ## Decision boundary
#
# A boundary that the model uses to make decisions!
#
# * After training the model and finding the best $ \pmb{ \theta}^*$, we can use them to find the decision boundary.
# * With threshold = $50\%$ we get a concrete yes/no answer
#
# \begin{equation*}
# \text{Prediction for sample } i =
# \begin{cases}
# 1 & \text{if} \quad{} P(y_i =1 | x_i; \pmb{ \theta}^*) \geq 0.5 \\
# 0 & \text{if} \quad{} P(y_i =1 | x_i; \pmb{ \theta}^*) < 0.5
# \end{cases}
# \end{equation*}
#
# * We can use a <span style="color:red">different threshold</span>. We will come back to this later.
#
#
#
#
# How does Logistic regression choose the decision boundary?
#
#
# + [markdown] id="HLf_qbKKstbF"
# ### Decision boundary in 1 dimension (one feature) when threshold = 0.5:
# \begin{equation*}
# h_\theta(x) = \sigma(\theta_0^* + \theta_1^* x)
# \end{equation*}
#
#
# <span style="color:red">Point decision boundary :</span>
#
#
# \begin{equation*}
# x = - \frac{\theta_0^*}{\theta_1^*}\\
# P_{\theta} (y_i =1 | x_i = - \frac{\theta_0^*}{\theta_1^*} ) = 0.5
# \end{equation*}
#
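The point boundary $x = -\theta_0^*/\theta_1^*$ can be checked with scikit-learn on made-up one-feature data: at that point the predicted probability is exactly 0.5.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative 1-feature data (not the notebook's humidity data)
X = np.array([1.0, 2.0, 3.0, 6.0, 7.0, 8.0]).reshape(-1, 1)
y = np.array([0, 0, 0, 1, 1, 1])

clf = LogisticRegression().fit(X, y)
t0, t1 = clf.intercept_[0], clf.coef_[0, 0]

boundary = -t0 / t1          # the point where P(y=1|x) = 0.5
p_at_boundary = clf.predict_proba([[boundary]])[0, 1]
print(boundary, p_at_boundary)
```

Samples to one side of this point are classified 0, to the other side 1.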
# + hide_input=true id="E499iWWrstbF" outputId="8a9bea1b-0934-4b56-faaf-bf3048b41301"
_, ax = plt.subplots()
scat = ax.scatter(x, x.shape[0]*[0] , c = z, cmap = 'Accent')
legend1 = ax.legend(*scat.legend_elements(),
loc="lower right", title="Rainy day?")
ax.set_xlabel('Humidity')
ax.axis('off')
plt.show()
# + [markdown] id="uypWZFRVstbF"
# ### Decision boundary in 2 dimensions (two features) when threshold = 0.5:
#
#
# \begin{equation*}
# h_\theta(x) = \sigma(\theta_0^* + \theta_1^* x_1 + \theta_2^* x_2)
# \end{equation*}
#
#
# <span style="color:red">Line decision boundary :</span>
#
#
# \begin{equation*}
# \theta_0^* + \theta_1^* x_1 + \theta_2^* x_2 = 0\\
# P_{\theta} (y_i =1 | \theta_0^* + \theta_1^* x_{i1}+ \theta_2^* x_{i2} =0 ) = 0.5
# \end{equation*}
#
# 2 features --> 1D decision boundary and 2D sigmoid function
# + deletable=false editable=false hide_input=false run_control={"frozen": true} id="zNMJyri8stbG" outputId="2427dbf6-5b79-450c-e4cc-a2837ef68567"
# clf = LogisticRegression(random_state=0).fit(d,z)
# t0 =clf.intercept_
# t1 = clf.coef_
# print(t0,t1)
# x_d = [np.min(d[:, 0] ), np.max(d[:, 0] )]  # x-range of the first feature
# y_d = - (t0 + np.dot(t1[0,0], x_d)) /t1[0,1]
# print()
# + cell_style="center" hide_input=true id="QlDFKpm2stbG" outputId="34bad5e8-8981-4387-b30b-841747475f25"
fig_dc_bounf, ax_dc_bounf = plt.subplots()
scat = ax_dc_bounf.scatter(x,y,c = z, cmap = 'Accent')
legend1 = ax_dc_bounf.legend(*scat.legend_elements(),
loc="lower right", title="Rainy day?")
ax_dc_bounf.plot(x_d, y_d, 'r')
ax_dc_bounf.set_xlabel('Humidity')
ax_dc_bounf.set_ylabel('Temperature')
plt.show()
# + [markdown] id="_PDdeP6RstbG"
# ### For m features:
#
# \begin{equation*}
# h_\theta(x) = \sigma(\theta_0^* + \theta_1^* x_1 + \ldots + \theta_m^* x_m)
# \end{equation*}
#
#
# <span style="color:red"> (m-1) dimenion decision boundary :</span>
#
#
# \begin{equation*}
# \theta_0^* + \theta_1^* x_1 + \ldots + \theta_m^* x_m = 0\\
# P_{\theta} (y_i =1 | \theta_0^* + \theta_1^* x_{i1}+ \ldots + \theta_m^* x_{im} = 0 ) = 0.5
# \end{equation*}
#
# + [markdown] id="V_2c59SLstbG"
# ### Logistic regression with the polynomial hypothesis
#
# Similar to Linear regression, we can have polynomials with different degrees in the hypothesis.
#
# \begin{equation*}
# h_{\theta}(x) = \sigma( \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \theta_3 x_1^2 + \theta_4 x_2^2 )
# \end{equation*}
#
# * With Logistic polynomial regression, we can have non-linear decision boundaries.
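A sketch of such a non-linear boundary, using a scikit-learn pipeline with polynomial features on synthetic ring-shaped data similar in spirit to the generator in the next cell (all names and constants here are illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Two rings: inner disk labeled 0, outer ring labeled 1 (synthetic)
rng = np.random.default_rng(1)
ang = rng.uniform(0, 2*np.pi, 200)
r = np.r_[rng.uniform(0, 2, 100), rng.uniform(3, 5, 100)]
X = np.c_[r*np.cos(ang), r*np.sin(ang)]
y = np.r_[np.zeros(100), np.ones(100)]

# Degree-2 features (x1, x2, x1^2, x1*x2, x2^2) allow a circular boundary
model = make_pipeline(PolynomialFeatures(degree=2),
                      LogisticRegression(max_iter=1000))
model.fit(X, y)
print(model.score(X, y))
```

Because the classes are separated radially, the quadratic terms $x_1^2 + x_2^2$ let the linear-in-features model carve out a circular decision boundary.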
# + hide_input=true id="dNj-DvmMstbG" outputId="f9a16f64-83d4-48a4-bc5e-9fff2b0b8aed"
def syn3(N):
""" data(samples, features)"""
global seed
data = np.empty(shape=(N,2), dtype = np.float32)
tar = np.empty(shape=(N,), dtype = np.float32)
N1 = int(2*N/3)
# disk
teta_d = np.random.uniform(0, 2*np.pi, N1)
inner, outer = 3, 5
r2 = np.sqrt(np.random.uniform(inner**2, outer**2, N1))
data[:N1,0],data[:N1,1] = r2*np.cos(teta_d), r2*np.sin(teta_d)
#circle
teta_c = np.random.uniform(0, 2*np.pi, N-N1)
inner, outer = 0, 2
r2 = np.sqrt(np.random.uniform(inner**2, outer**2, N-N1))
data[N1:,0],data[N1:,1] = r2*np.cos(teta_c), r2*np.sin(teta_c)
tar[:N1] = np.ones(shape=(N1,))
tar[N1:] = np.zeros(shape=(N-N1,))
return data, tar
seed = 2
np.random.seed(seed) if seed else None
d,t = syn3(100)
plt.figure()
plt.scatter(d[:,0],d[:,1], c=t, cmap = 'Accent')
plt.show()
# + [markdown] id="DTfLjtw9stbH"
# ## Regularization
#
# \begin{equation*}
# h_{\theta}(x) = \sigma( \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \theta_3 x_1^2 + \theta_4 x_2^2 + \theta_5 x_1^2 x_2^2 + \ldots)
# \end{equation*}
#
# * With a more complex hypothesis --> risk of <span style="color:red"> overfitting </span>
#
#
#
# * Logistic regression cost function with <span style="color:blue"> L2 regularization </span>
#
#
# \begin{equation*}
# J(\theta) = - \frac{1}{n} \sum_{i=1}^n \left[ y_i \log h_{\theta}(x_i) + (1- y_i) \log (1-h_{\theta}(x_i))\right] \color{red}{ + \frac{\lambda}{n} \sum_{j=1}^m \theta_j^2}
# \end{equation*}
#
# * Upgraded GD with L2 regularization
#
# \begin{align*}
# \pmb{ \theta} & := \pmb{ \theta} - \frac{\alpha}{n} \left [ \mathbf{X}^\intercal (h_{\theta}(\mathbf{x}) - \vec{y}) \color{red}{ + 2\lambda \pmb{\theta} } \right]
# \end{align*}
#
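The regularized update above can be sketched in NumPy. Note that this toy version penalizes the intercept too, exactly as the vectorized formula on the slide is written; real implementations usually exclude $\theta_0$ from the penalty. The data is synthetic.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_ridge_logistic(X, y, alpha=0.1, lam=1.0, iters=5000):
    # theta := theta - (alpha/n) * [X^T (h - y) + 2*lambda*theta]
    theta = np.zeros(X.shape[1])
    n = len(y)
    for _ in range(iters):
        g = X.T @ (sigmoid(X @ theta) - y) + 2 * lam * theta
        theta -= alpha / n * g
    return theta

rng = np.random.default_rng(0)
X = np.c_[np.ones(50), rng.normal(size=(50, 2))]
y = (X[:, 1] + X[:, 2] > 0).astype(float)

t_plain = fit_ridge_logistic(X, y, lam=0.0)
t_reg = fit_ridge_logistic(X, y, lam=5.0)
# L2 regularization shrinks the weights toward zero
print(np.linalg.norm(t_plain), np.linalg.norm(t_reg))
```

The shrinkage of the weight norm is what tames overly complex hypotheses and reduces overfitting.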
# + hide_input=true id="ZHY8jn34stbH" outputId="b261de86-1890-422c-fd3b-29218d5205eb"
def syn3(N):
""" data(samples, features)"""
global seed
data = np.empty(shape=(N,2), dtype = np.float32)
tar = np.empty(shape=(N,), dtype = np.float32)
N1 = int(2*N/3)
# disk
teta_d = np.random.uniform(0, 2*np.pi, N1)
inner, outer = 2, 5
r2 = np.sqrt(np.random.uniform(inner**2, outer**2, N1))
data[:N1,0],data[:N1,1] = r2*np.cos(teta_d), r2*np.sin(teta_d)
#circle
teta_c = np.random.uniform(0, 2*np.pi, N-N1)
inner, outer = 0, 3
r2 = np.sqrt(np.random.uniform(inner**2, outer**2, N-N1))
data[N1:,0],data[N1:,1] = r2*np.cos(teta_c), r2*np.sin(teta_c)
tar[:N1] = np.ones(shape=(N1,))
tar[N1:] = np.zeros(shape=(N-N1,))
return data, tar
seed = 2
np.random.seed(seed) if seed else None
d,t = syn3(100)
plt.figure()
plt.scatter(d[:,0],d[:,1], c=t, cmap = 'Accent')
plt.show()
# + [markdown] id="5ZysavKtstbH"
# # Logistic regression cheat sheet
#
#
#
# | | Hypothesis | Cost Function | GD |
# |:-: |:-: |:-: |:-: |
# | Without L2 | $\frac{1}{1 + e^{- ( \pmb{\theta_0} + \mathbf{X} \pmb{\theta} ) }}$ | $J(\theta) = - \frac{1}{n} \left[ y^T \log h_{\theta}(x) + (1- y)^T \log (1-h_{\theta}(x))\right]$ | $\pmb{\theta} := \pmb{\theta} - \frac{\alpha}{n} \mathbf{X}^\intercal (h_{\theta}(\mathbf{x}) - \vec{y})$ |
# | With L2 | $\frac{1}{1 + e^{- ( \pmb{\theta_0} + \mathbf{X} \pmb{\theta} ) }}$ | $J(\theta) = - \frac{1}{n} \left[ y^T \log h_{\theta}(x) + (1- y)^T \log (1-h_{\theta}(x))\right] \color{red}{ + \frac{\lambda}{n} \sum_{j=1}^m \theta_j^2}$ | $\pmb{\theta} := \pmb{\theta} - \frac{\alpha}{n} \left [ \mathbf{X}^\intercal (h_{\theta}(\mathbf{x}) - \vec{y}) \color{red}{ + 2\lambda \pmb{\theta} } \right]$ |
# + [markdown] id="L-9zRu21stbH"
# <h1 align="center"> The end</h1>
| day02/logistic_regression_ML_course.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Inverse Probability Weighting Model
# Inverse probability weighting is a basic model for estimating average treatment effects.
#
# It calculates the probability of each sample belonging to its treatment group,
# and uses the inverse of that probability as the sample's weight:
# $$
# w_i = \frac{1}{\Pr[A=a_i | X_i]}
# $$
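The weight formula can be computed by hand with a plain scikit-learn propensity model, which is essentially what the `IPW` estimator wraps; the data and model below are a synthetic illustration, not causallib's internals.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
# Synthetic treatment assignment that depends on X (i.e. confounded)
a = (rng.random(200) < 1 / (1 + np.exp(-X[:, 0]))).astype(int)

# Propensity model: P(A=1 | X)
ps_model = LogisticRegression().fit(X, a)
p1 = ps_model.predict_proba(X)[:, 1]

# w_i = 1 / Pr[A = a_i | X_i]: inverse probability of the
# treatment actually received
prob_received = np.where(a == 1, p1, 1 - p1)
w = 1.0 / prob_received
print(w.min(), w.max())  # every weight is >= 1 by construction
```

Samples whose treatment was unlikely given their covariates receive large weights, which is how the reweighted sample mimics a balanced one.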
# %matplotlib inline
from causallib.datasets import load_nhefs
from causallib.estimation import IPW
from causallib.evaluation import PropensityEvaluator
from sklearn.linear_model import LogisticRegression
# #### Data:
# The effect of quitting smoking on weight gain.
# Data example is taken from [Hernan and Robins Causal Inference Book](https://www.hsph.harvard.edu/miguel-hernan/causal-inference-book/)
data = load_nhefs()
data.X.join(data.a).join(data.y).head()
# #### Model:
# The causal model has a machine learning model at its core, provided as the `learner` parameter.
# This ML model will be used to predict the probability of quitting smoking given the covariates.
# These probabilities will be used to obtain $w_i$.
# Then, we'll estimate the average balanced outcome using the Horvitz–Thompson estimator:
# $$
# \hat{E}[Y^a] = \frac{1}{\sum_{i:A_i=a}w_i} \cdot \sum_{i:A_i=a}w_i y_i
# $$
#
# Lastly, we'll use these average counterfactual outcome estimation to predict the effect:
# $$
# \hat{E}[Y^1] - \hat{E}[Y^0]
# $$
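The Horvitz–Thompson estimator above is just a weighted average per treatment arm; a toy sketch with made-up outcomes and weights:

```python
import numpy as np

def ht_outcome(y, a, w, treatment):
    # E_hat[Y^a] = sum_{i: A_i=a} w_i * y_i / sum_{i: A_i=a} w_i
    mask = (a == treatment)
    return np.sum(w[mask] * y[mask]) / np.sum(w[mask])

# Toy data: outcome, treatment indicator, and precomputed IPW weights
y = np.array([1.0, 2.0, 3.0, 4.0])
a = np.array([0, 0, 1, 1])
w = np.array([1.0, 1.0, 2.0, 1.0])

effect = ht_outcome(y, a, w, 1) - ht_outcome(y, a, w, 0)
print(effect)
```

With all weights equal this reduces to a plain difference of group means; the weights are what correct for confounding.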
# Train:
learner = LogisticRegression(solver="liblinear")
ipw = IPW(learner)
ipw.fit(data.X, data.a)
# We can now predict the weight of each individual:
ipw.compute_weights(data.X, data.a).head()
# Estimate average outcome
outcomes = ipw.estimate_population_outcome(data.X, data.a, data.y)
outcomes
# Estimate the effect:
effect = ipw.estimate_effect(outcomes[1], outcomes[0])
effect
# We can see that, on average, individuals who quit smoking gained about 3.5 kg
# over the course of 11 years
# + active=""
#
# -
# ## Non-default parameters
# We just saw a simple example that hides many of the model's parameters.
# We now dig a bit deeper into every stage.
# #### Model definition
# _Machine learning model:_
# Any scikit-learn model can be specified (even pipelines)
learner = LogisticRegression(penalty="l1", C=0.01, max_iter=500, solver='liblinear')
# _IPW model_ has two additional parameters:
# * `truncate_eps`: a caliper value to trim very small or very large probabilities
# * `stabilized`: Whether to scale weights with treatment prevalence
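What `truncate_eps` and stabilization do can be mimicked in a few lines; this is an illustration of the idea (clip propensities away from 0 and 1, optionally rescale by marginal treatment prevalence), not causallib's exact implementation.

```python
import numpy as np

def ipw_weights(p1, a, truncate_eps=0.0, stabilized=False):
    # Truncation: clip propensities into [eps, 1 - eps] before inverting,
    # which caps extreme weights
    p1 = np.clip(p1, truncate_eps, 1 - truncate_eps)
    prob_received = np.where(a == 1, p1, 1 - p1)
    w = 1.0 / prob_received
    if stabilized:
        # Scale by the marginal prevalence of the received treatment
        prev = a.mean()
        w *= np.where(a == 1, prev, 1 - prev)
    return w

p1 = np.array([0.05, 0.5, 0.95])
a = np.array([1, 1, 0])
print(ipw_weights(p1, a))                    # raw weights
print(ipw_weights(p1, a, truncate_eps=0.2))  # extreme weights capped
```

Truncation trades a little bias for much lower variance: the sample with propensity 0.05 no longer dominates the weighted average.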
truncate_eps = 0.2
ipw = IPW(learner, truncate_eps=truncate_eps, use_stabilized=False)
ipw.fit(data.X, data.a);
# #### Weight prediction options
# Now we can predict the probability of quit smoking (`treatment_values=1`)
# and validate that our truncation worked:
probs = ipw.compute_propensity(data.X, data.a, treatment_values=1)
probs.between(truncate_eps, 1-truncate_eps).all()
# During the "predict" phase (i.e. computing weights or probabilities),
# we can alter the parameters we set during initialization:
probs = ipw.compute_propensity(data.X, data.a, treatment_values=1, truncate_eps=0.0)
probs.between(truncate_eps, 1-truncate_eps).all()
# We can even predict stabilized weights.
# However, we will get a warning.
# This is because treatment prevalence is estimated from the training data.
# During `fit`, when the model had its initial values, `use_stabilized` was `False` (the default).
# So when calling `compute_weights` now, the model will use the prevalence of the provided data to estimate treatment prevalence.
# This is not a big deal here, since we compute on the same data we trained on, but that is not the general case.
# (This warning would not exist if we redefined the model with `use_stabilized=True` and re-trained it.)
stabilized_weights = ipw.compute_weights(data.X, data.a, treatment_values=1,
truncate_eps=0.0, use_stabilized=True)
weights = ipw.compute_weights(data.X, data.a, treatment_values=1,
truncate_eps=0.0)
stabilized_weights.eq(weights).all()
# Since IPW utilizes probabilities, for each sample we can get a probability (or weight) for each treatment value:
# ipw.compute_weight_matrix(data.X, data.a).head()
ipw.compute_propensity_matrix(data.X, data.a).head()
# #### Effect estimation options
# We can choose whether we want an additive (`diff`) or multiplicative (`ratio`) effect.
# (If the outcome `y` were probabilities, we could also ask for the odds ratio (`or`).)
#
# Providing weights `w` is optional; if not provided, the weights are simply
# calculated again using the provided `X`.
outcomes = ipw.estimate_population_outcome(data.X, data.a, data.y, w=weights)
effects = ipw.estimate_effect(outcomes[1], outcomes[0], effect_types=["diff", "ratio"])
effects
# + active=""
#
# -
# ## Evaluation
# We can also evaluate the performance of the IPW model
# #### Simple evaluation
# Evaluates a fitted model on the provided dataset
evaluator = PropensityEvaluator(ipw)
results = evaluator.evaluate_simple(data.X, data.a, data.y, plots=None)
# `results` contains `models`, `plots` and `scores`,
# but since we did not ask for plots, and did not refit the model,
# our main interest is the scores.
# We have both the prediction performance scores and
# a table1 with standardized mean differences with and without balancing
results.scores.prediction_scores
results.scores.covariate_balance.head()
# #### Thorough evaluation
# This checks the general model specification, as it evaluates using cross-validation,
# refitting the model on each fold.
from sklearn import metrics
plots=["roc_curve", "pr_curve", "weight_distribution",
"calibration", "covariate_balance_love", "covariate_balance_slope"]
metrics = {"roc_auc": metrics.roc_auc_score,
"avg_precision": metrics.average_precision_score,}
ipw = IPW(LogisticRegression(solver="liblinear"))
evaluator = PropensityEvaluator(ipw)
results = evaluator.evaluate_cv(data.X, data.a, data.y,
plots=plots, metrics_to_evaluate=metrics)
results.scores.prediction_scores
print(len(results.models))
results.models[2]
| examples/ipw.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %matplotlib inline
# Assuming we are in the notebooks directory, we need to move one up:
# %cd ../..
# +
import numpy as np
import pandas as pd
import os
import seaborn as sns
import matplotlib.pyplot as plt
# Replace with directory you have downloaded NARPS data from:
DATA_DIR = './data/fMRI/event_tsvs/'
sns.set(style="ticks", palette="muted", color_codes=True, font_scale=1.8)
files = [file for file in os.listdir(DATA_DIR) if 'sub' in file]
meta_df = pd.DataFrame({'Filename': files})
meta_df.sort_values(by='Filename', inplace=True)
meta_df.reset_index(inplace=True, drop=True)
meta_df['Participant'] = 0
meta_df['Run'] = 0
meta_df['Trials Designated NoResp'] = 0
meta_df['Trials with RT == 0'] = 0
meta_df['Behaviour in File'] = True
def squash_response(row):
if 'accept' in row:
return 1
elif 'reject' in row:
return 0
else:
return row
def check_RT_response_data(row):
try:
if row['participant_response'] == 'NoResp':
return False
except KeyError:
return False
if row['RT'] == 0:
return False
else:
return True
# +
# These two lines are just here because we're in a notebook, so delete in .py script.
ps_df = 0
del ps_df
prev_p = 0
run_check = 0
# For each filename...
for index, row in meta_df.iterrows():
file = row['Filename']
# check the file conforms to what we think it should
# sub-001_task-MGT_run-01_events.tsv
# there are 4 runs per "sub" and the NARPS site says:
# "119 healthy participants completed the experiment (n=60 from the equal indifference group and n=59 from the equal range group). Nine participants were excluded prior to fMRI analysis based on pre-registered exclusion criteria: Five did not show a significant effect of both gains and loses on their choices (Bayesian logistic regression, p < 0.05; reflecting a lack of understanding of the task) and four missed over 10% of trials (in one or more runs). Data of two additional participants is currently under QA. Thus, at least 108 participants will be included in the final dataset sent to the analysis teams (n=54 from the equal indifference group and n=54 from the equal range group)."
# Check that the filenames have a certain structure:
file_list = file.split('_')
assert file_list[0].split('-')[0] == 'sub'
p = int(file_list[0].split('-')[1])
assert p > 0
assert p <= 128
assert file_list[2].split('-')[0] == 'run'
run = int(file_list[2].split('-')[1])
assert run > 0
assert run <= 4
# check if all participants have all 4 runs, and print who doesn't
if prev_p != p:
if run_check and run_check != 4:
print(prev_p, run_check)
run_check = 0
prev_p = p
run_check+=1
# Open the file:
p_run_df = pd.read_csv(DATA_DIR + file, delimiter='\t')
    # Create required columns for participant ID, run, and trial number:
p_run_df['ID'] = p # just a number
p_run_df['participant_id'] = file_list[0] # the IDs the original data used
p_run_df['run'] = run
p_run_df.reset_index(inplace=True)
p_run_df.rename(index=str, columns={"index": "trial"}, inplace=True)
p_run_df['trial'] += 1
# Risk: sqrt(gain*gain*prob_gain*(1 - prob_gain) + loss*loss*prob_loss*(1 - prob_loss))
# based on Canessa et al 2013 https://doi.org/10.1523/JNEUROSCI.0497-13.2013
p_run_df['risk'] = np.sqrt((p_run_df['gain']**2 + p_run_df['loss']**2)) * 0.5
p_run_df['use'] = p_run_df.apply(check_RT_response_data, axis=1)
meta_df.loc[index, 'Participant'] = p
meta_df.loc[index, 'Run'] = run
# Find the files with RT set to 0 and note that in meta_df:
if (p_run_df['RT'] == 0).any():
meta_df.loc[index, 'Trials with RT == 0'] = (p_run_df['RT'] == 0).value_counts()[True]
if 'participant_response' in p_run_df.columns:
        meta_df.loc[index, 'Trials Designated NoResp'] = (p_run_df['participant_response'] == 'NoResp').sum()  # .sum() is safe even when there are no NoResp trials
else:
meta_df.loc[index, 'Behaviour in File'] = False
# Create accept column for participant:
try:
p_run_df['accept'] = p_run_df['participant_response'].apply(squash_response)
except KeyError:
p_run_df['accept'] = 'NoResp'
# Create big dataframe for all participants:
try:
ps_df = pd.concat([ps_df, p_run_df], sort=False)
except NameError:
ps_df = p_run_df
ps_df
participants_df = pd.read_csv(DATA_DIR + 'participants.tsv', delimiter='\t')
ps_df = ps_df.set_index('participant_id').join(participants_df.set_index('participant_id'))
# -
meta_df.head(10)
meta_df[meta_df['Participant'] == 48]
meta_df[meta_df['Behaviour in File'] == False]
ps_df[ps_df['ID'] == 48]
len(ps_df['ID'].unique())
ps_df.describe()
# remove the trials where something went wrong, RT == 0 or no data
clean_ps_df = ps_df[ps_df['use']]
clean_ps_df.describe()
clean_ps_df
clean_ps_df.groupby('ID')[['gain', 'loss']].describe()
# use this to assign to each participant what condition they are in
equalIndif_df = clean_ps_df[clean_ps_df['group'] == 'equalIndifference']
equalRange_df = clean_ps_df[clean_ps_df['group'] == 'equalRange']
# +
# sns.set()
fig, axs = plt.subplots(1, 2, sharey=True, figsize=(16,4))#, tight_layout=True)
variable = 'gain'
counts = np.bincount(equalIndif_df[variable])
# print(counts)
# [print(i) for i in zip(range(len(counts)), counts)]
ticks = []
axs[0].bar(range(len(counts)), counts, width=1, align='center')
axs[0].xaxis.set_minor_locator(plt.MultipleLocator(1))
axs[0].xaxis.set_major_locator(plt.MultipleLocator(2))
# axs[0].tick_params(axis='x', which='major', labelsize=14)
axs[0].set_xlim([9,41])
axs[0].set_title('Equal Indifference')
counts = np.bincount(equalRange_df[variable])
# print(counts)
axs[1].bar(range(len(counts)), counts, width=1, align='center')
axs[1].xaxis.set_minor_locator(plt.MultipleLocator(1))
axs[1].xaxis.set_major_locator(plt.MultipleLocator(2))
axs[1].set_xlim([4.2,20.8])
axs[1].set_title('Equal Range')
title = fig.suptitle(variable.capitalize())
# shift subplots down:
title.set_y(0.95)
fig.subplots_adjust(top=0.75)
sns.despine(right=True)
# plt.show()
# +
fig, axs = plt.subplots(1, 2, sharey=True, figsize=(16,4))#, tight_layout=True)
variable = 'loss'
counts = np.bincount(equalIndif_df[variable])
# print(counts)
# [print(i) for i in zip(range(len(counts)), counts)]
ticks = []
axs[0].bar(range(len(counts)), counts, width=1, align='center')
axs[0].xaxis.set_minor_locator(plt.MultipleLocator(1))
axs[0].xaxis.set_major_locator(plt.MultipleLocator(2))
# axs[0].tick_params(axis='x', which='major', labelsize=14)
axs[0].set_xlim([4.2,20.8])
axs[0].set_title('Equal Indifference')
counts = np.bincount(equalRange_df[variable])
# print(counts)
axs[1].bar(range(len(counts)), counts, width=1, align='center')
axs[1].xaxis.set_minor_locator(plt.MultipleLocator(1))
axs[1].xaxis.set_major_locator(plt.MultipleLocator(2))
axs[1].set_xlim([4.2,20.8])
axs[1].set_title('Equal Range')
title = fig.suptitle(variable.capitalize())
# shift subplots down:
title.set_y(0.95)
fig.subplots_adjust(top=0.75)
sns.despine(right=True)
plt.show()
# +
sns.set(font_scale=1.2)
f, axes = plt.subplots(2, 1, figsize=(7, 7), sharex=True)
sns.despine(right=True)
sns.distplot(equalRange_df['risk'], ax=axes[0], norm_hist = False , kde=False)
sns.distplot(equalIndif_df['risk'], ax=axes[1], norm_hist = False, kde=False)
axes[0].set_title('Equal Range')
axes[1].set_title('Equal Indifference')
plt.show()
# +
sns.set(font_scale=1.8)
f, axes = plt.subplots(2, 1, figsize=(7, 7), sharex=True)
sns.despine(right=True)
sns.distplot(equalRange_df['RT'], ax=axes[0])#, kde=False)
sns.distplot(equalIndif_df['RT'], ax=axes[1])#, kde=False)
plt.show()
# +
fig, ax = plt.subplots(1, 1, figsize=(20,10))
sns.despine(right=True)
sns.violinplot(x="accept", y="RT", data=clean_ps_df, hue='group',
split=True, inner="quart",
palette={"equalRange": "y", "equalIndifference": "b"}, ax=ax)
plt.show()
# +
fig, ax = plt.subplots(1, 1, figsize=(20,10))
sns.despine(right=True)
sns.violinplot(x="participant_response", y="RT", data=clean_ps_df, hue='group',
split=True, inner="quart",
palette={"equalRange": "y", "equalIndifference": "b"}, ax=ax)
plt.show()
# -
sns.set(font_scale=1.8)
sns.set_style('ticks')
sns.countplot(x='accept', data=clean_ps_df)
plt.show()
sns.set(font_scale=1.8)
sns.set_style('ticks')
sns.countplot(x='accept', data=equalRange_df)
plt.show()
sns.set(font_scale=1.8)
sns.set_style('ticks')
sns.countplot(x='accept', data=equalIndif_df)
plt.show()
sns.set(font_scale=1)
sns.set_style('ticks')
sns.countplot(x='participant_response', data=equalRange_df)
plt.show()
sns.set(font_scale=1)
sns.set_style('ticks')
sns.countplot(x='participant_response', data=equalIndif_df)
plt.show()
clean_ps_df.to_csv('./data/participants.csv', index=False)
meta_df.to_csv('./data/file_details.csv', index=False)
participants_df
ps_df.head()
subject_13 = ps_df[ps_df['ID'] == 13]
subject_13['accept'].describe()
subject_13
group_assignments = ps_df.drop_duplicates(subset='ID')[['group', 'ID']]
# +
def check_group(row):
    # Check the assumption that odd IDs are 'equalIndifference' and even IDs 'equalRange'
if row['ID'] % 2:
if row['group'] == 'equalIndifference':
return True
else:
if row['group'] == 'equalRange':
return True
return False
group_assignments['consistent'] = group_assignments.apply(check_group, axis=1)
# -
group_assignments.iloc[30:50]
group_assignments
group_assignments[group_assignments['consistent'] == False]
group_assignments[['consistent']] == False
subject_87 = ps_df[ps_df['ID'] == 87]
subject_87
| behavioral/notebooks/Checking the Pre-Modelling Data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Azure Data Scientist Certification (DP-100) Resources & Tips
# > I passed the DP-100 certification exam yesterday. Here are some of the resources I used and the tips for you to prepare well.
#
# - toc: true
# - badges: true
# - comments: true
# - categories: [certification]
# - hide: false
# ## DP-100
#
# 
#
# The DP-100 exam (DP-100: Designing and Implementing a Data Science Solution on Azure) measures your ability to set up Azure ML workspace, create ML experiments and deploy them in service. I took the exam on Nov 23, 2020. Note that the exam contents will change on December 8, 2020 so if you are going to take the exam after Dec 8, 2020, please visit the exam [website](https://docs.microsoft.com/en-us/learn/certifications/exams/dp-100#certification-exams) for updated content.
#
# This exam covers 4 main areas of Azure ML:
#
# - Set up an Azure ML workspace (30-35%)
# - Run, track, manage, train ML models (20-25%)
# - Optimize models, AutoML (20-25%)
# - Deploy, monitor, consume models (20-25%)
#
# Before you do anything, make sure to check the official exam page first on [MS Certifications page](https://docs.microsoft.com/en-us/learn/certifications/browse/) for the latest exam as MS often retires exams and replaces them as technology evolves.
#
# The exam fee is $165 but I got it for free with MS Ignite Voucher. Keep an eye on MS online events for free vouchers. Exam length is 180 minutes and you are asked 58 questions covering the above topics.
# ## Resources
# Here are the resources I used to learn the Azure ML service and prepare for the exam.
# 1. First [create a free Azure account](https://azure.microsoft.com/en-us/free/) with $200 in credit. You will need it for practice.
#
#
# 2. If you are new to Python or Machine Learning, [start with this MS Learn track](https://docs.microsoft.com/en-us/learn/paths/create-machine-learn-models/). If you are already familiar with Python & ML, you can skip it.
#
#
# 3. Start simple to get the hang of the no-code ML offering from Azure and create scalable ML models using the designer in Azure ML Studio. [Link](https://docs.microsoft.com/en-us/learn/paths/create-no-code-predictive-models-azure-machine-learning/). Note that the exams will have questions on the Designer and *not* the deprecated [Azure ML Studio Classic](https://studio.azureml.net/). The two are different. This should also familiarize you with the Azure ML workspace.
#
#
# 4. Take the [Build AI Solutions with Azure ML](https://docs.microsoft.com/en-us/learn/paths/build-ai-solutions-with-azure-ml-service/) course on MS learn. This is the **most** important resource you will want to study. This covers all the DP-100 topics really well.
#
#
# 5. As you go through each module in MS Learn, open the corresponding page on [MS Documentation](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-manage-workspace?view=azure-ml-py&tabs=python) and read it. Many of the topics will be same but the documentation page adds more details & examples that I found very helpful.
#
#
# 6. **Labs**: These [labs](https://github.com/MicrosoftLearning/DP100) will give you hands-on experience with Azure ML SDK. These are very important and highly recommend you complete **all of them**.
#
#
# 7. [Lab Videos](https://github.com/MicrosoftLearning/Lab-Demo-Recordings/blob/master/DP-100.md): These videos are supplementary to the labs above. Not very helpful as there is no audio.
#
#
# 8. If you like to follow along with someone instead of doing it on your own, I recommend taking the DP-100 course by <NAME>. The course is **completely free** and covers everything you need for DP100. Below are the links
# - [Module 1, Day 1](https://note.microsoft.com/US-NOGEP-WBNR-FY20-04Apr-20-DP-100SetupanAzureMachineLearningworkspace-SRDEM14523-01_Registration.html)
# - [Module 2, Day 2](https://note.microsoft.com/US-NOGEP-WBNR-FY20-04Apr-21-DP-100Runexperimentsandtrainmodels-SRDEM14523-02_Registration.html)
# - [Module 3, Day 3](https://note.microsoft.com/US-NOGEP-WBNR-FY20-04Apr-22-DP-100Optimizeandmanagemodels-SRDEM14523-03_Registration.html)
# - [Module 4, Day 4](https://note.microsoft.com/US-NOGEP-WBNR-FY20-04Apr-23-DP-100Deployandconsumemodels-SRDEM14523-04_Registration.html)
#
#
# 9. **Pluralsight**: Pluralsight, in partnership with Microsoft, offers [free instructor-led DP-100 courses](https://www.pluralsight.com/paths/microsoft-azure-data-scientist-dp-100). I watched most of them, but I found Ginger's course above more thorough.
#
#
# 10. [Cloudacademy](https://cloudacademy.com/learning-paths/dp-100-exam-prep-designing-and-implementing-a-data-science-solution-on-azure-1902/) also offers DP-100 course (not free). I created a free 7 day trial account to check out the content and it's very bad. It's basically just commentary on the examples from the labs above. Don't waste your time. If you already have a cloud academy membership, you can explore but don't pay for it.
#
#
# 11. **Udacity**: Udacity has an [Azure ML Engineer program](https://www.udacity.com/course/machine-learning-engineer-for-microsoft-azure-nanodegree--nd00333) in collaboration with Microsoft. I won a scholarship for that course. It covers all the DP-100 topics and then some. If you are enrolled in this course, you should be able to pass DP-100 comfortably. I wouldn't take the course just to prepare for the exam, though. The content in the course is good but very similar to Ginger's videos above.
#
#
# 12. If you want to learn the more advanced parts of the Azure ML SDK, [this is a great repo](https://github.com/Azure/MachineLearningNotebooks/tree/4170a394edd36413edebdbab347afb0d833c94ee) with all the tutorials and examples. This is above and beyond what's required for DP-100, and I recommend completing the above labs first before using this.
# ## Tips
# 1. Don't skip the labs. You will be asked questions on the Azure ML SDK and you need to know it very well.
# 2. You will be asked questions on regression, classification modeling and metrics so you need to know machine learning.
# 3. Almost all the courses/resources above (except some courses on Pluralsight) assume that you know how to create regression and classification models. If you don't have any experience with machine learning, use the [MS Learn](https://docs.microsoft.com/en-us/learn/paths/create-no-code-predictive-models-azure-machine-learning/) track from above. It's the minimum, and you should start with it first.
# 4. Learn the MLflow API and how to use it with the Azure ML SDK: [link](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-use-mlflow)
# 5. If you have not taken any Azure certifications before, I recommend taking an easier exam first (e.g. AZ900) to get a feel for it first.
# 6. Start the exam check-in process at least 30 minutes prior to your scheduled time. It takes time and sometimes you may face technical difficulties, so give yourself ample time before the exam.
# 7. You can always reschedule the exam (I did) if you think you need more time to prepare or have other commitments.
# 8. If you create a free Azure account, create a new resource group for your Azure ML workspace so it will be easier for you to delete it later. Azure ML compute is expensive, so be sure to keep an eye on the usage. Don't forget to stop the compute target when not in use, otherwise you will be shocked when you get the invoice from Azure. Don't ask me how I know this ;). If you do incur charges, reach out to the Azure support team. They will often waive the charges.
# 9. You will be able to complete most labs (except Pipelines & deployment) using your local compute.
# 10. As mentioned above, MS often gives away exam vouchers or discounts so keep an eye on their events page.
# 11. Connect with me on twitter (@PawarBI) if you have questions.
#
#
# **Good luck !**
#
| _notebooks/2020-11-24-dp100-azure-data-scientist-certification-exam-resources-tips-preparation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import matplotlib
#matplotlib.use('WebAgg')
#matplotlib.use('Qt4Cairo')
#matplotlib.use('Qt5Cairo')
matplotlib.use('nbAgg')
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
# # 3-D example.
#
# Everything works in N-D except for the plots.
# +
def rr():
_a= [np.random.multivariate_normal([-5,5,5],[[0.12,0,0],[0,0.12,0],[0,0,0.12]]),\
np.random.multivariate_normal([-1,1,1],[[0.05,0,0],[0,0.1,0],[0,0,0.02]]),\
np.random.multivariate_normal([-5,1,5],[[0.12,0,0],[0,0.12,0],[0,0,0.01]]),\
np.random.multivariate_normal([-5,5,1],[[0.12,0,0],[0,0.12,0],[0,0,0.2]]),\
np.random.multivariate_normal([-1,3,5],[[0.12,0,0],[0,0.12,0],[0,0,0.12]])]
_i=np.random.choice(np.arange(len(_a)))
return _a[_i]
dim=3
data=np.array([ rr() for i in np.arange(300)])
# -
if True:
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(data[:,0],data[:,1],data[:,2]) #plot the centroids
plt.show()
# +
def dist(r1,r2,_dim=dim):
return np.sum([(r1[i]-r2[i])**2 for i in range(_dim)])
def cluster_avg(centers,_dim=dim):
return [ np.mean([i[j] for i in centers]) for j in range(_dim) ]
def dist_to_closest(data,centers):
for i,c in enumerate(centers):
_tmp=dist(data,c)
if i==0:
arg_min=i
min_dist=_tmp
if _tmp<min_dist:
arg_min=i
min_dist=_tmp
if _tmp==min_dist:
arg_min=np.random.choice([i,arg_min])
min_dist=np.random.choice([_tmp,min_dist])
return arg_min, min_dist
fig=plt.figure(figsize=(7,6))
fig.subplots_adjust(bottom=0.05, left=0.05, top = 0.9, right=0.9)
#=============================================================================#
_k=5
ldata=len(data)
# Number of iterations
iterations=50
#run it multiple times
runs=4
for run in range(runs):
#==========initialize the centers (k++ init)==========#
init_index=np.random.randint(ldata)#choose a point at random as one cluster center
centers=[ ]
centers.append(data[init_index])
for i in np.arange(_k-1):
dists=np.array([])
for j,d in enumerate(data):#find the distances for each point to the closest center
dists=np.append(dists, dist_to_closest(d,centers)[1])
probs=dists/np.sum(dists)#normalize dists, so that they represent probabilities
rand_index=np.nonzero(np.random.multinomial(1,probs))[0][0]
centers.append(data[rand_index])
#==========run k-means==========#
_iter=0
while _iter<=iterations:#run k-means
clusters=[[] for i in np.arange(_k)]
for i in np.arange(ldata):
#calculate distances
_tmp= np.array( [dist( centers[c] , data[i] ) for c in np.arange( _k ) ] )
#find the index of the closest centroid
_min=_tmp.argmin()
#make clusters
clusters[_min].append( data[i] )
for i in np.arange(_k):
_tmp=cluster_avg(clusters[i])
centers[i]=[_tmp[_d] for _d in range(dim)]
_iter+=1
    sub = fig.add_subplot(runs//2, 2, run+1, projection='3d')  # integer division: add_subplot needs ints in Python 3
for i in np.arange(_k):
sub.scatter(np.array(clusters[i])[:,0],np.array(clusters[i])[:,1],np.array(clusters[i])[:,2],
marker='+',alpha=0.1)
sub.scatter(np.array(centers)[:,0],np.array(centers)[:,1],np.array(centers)[:,2],c='k')
plt.show()
# -
| k-means/N-D_k++.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: SageMath 8.8
# language: sage
# name: sagemath
# ---
a = 2.75
f = a^x
g = 1 + x
p1 = plot(f, (x, -0.1, 0.1), legend_label='f(x) = $%.3g^x$'%a)
p2 = plot(g, (x, -0.1, 0.1), legend_label='x + 1', color='red')
plot(p1+p2)
N(e)
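# The plots hint at why $e$ is special: the tangent line to $a^x$ at $x=0$ has slope $\ln a$, which matches the slope of $1+x$ exactly when $a=e$. Here is a hedged numerical check of that claim (plain Python, not part of the original worksheet):

```python
import math

def slope_at_zero(a, h=1e-6):
    # forward-difference estimate of d(a**x)/dx at x = 0
    return (a**h - 1.0) / h

# the slope works out to ln(a): slightly above 1 for a = 2.75,
# and exactly 1 when a = e, where a**x is tangent to 1 + x
print(slope_at_zero(2.75), math.log(2.75))
```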
| ExponentialDerivatives.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Neural synthesis, feature visualization, and deepdream notes
#
# This notebook introduces what we'll call here "neural synthesis," the technique of synthesizing images using an iterative process which optimizes the pixels of the image to achieve some desired state of activations in a convolutional neural network.
#
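# As a toy sketch of that idea (a made-up one-feature "network", not any of the models discussed below): if the activation is a linear function $w \cdot x$ of the input, its gradient with respect to the pixels is just $w$, and repeatedly stepping along that gradient drives the input toward the feature the detector responds to.

```python
import numpy as np

# hypothetical single linear feature detector: activation(x) = w . x
w = np.array([1.0, -2.0, 0.5])
x = np.zeros(3)           # start from a blank "image"
step = 0.1
for _ in range(10):
    grad = w              # d(w . x)/dx = w
    x += step * grad      # gradient ascent on the activation
# after 10 steps of size 0.1, x has drifted all the way to w itself
```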
# The technique in its modern form dates back to around 2009 and has its origins in early attempts to visualize what features were being learned by the different layers in the network (see [Erhan et al](https://pdfs.semanticscholar.org/65d9/94fb778a8d9e0f632659fb33a082949a50d3.pdf), [Simonyan et al](https://arxiv.org/pdf/1312.6034v2.pdf), and [Mahendran & Vedaldi](https://arxiv.org/pdf/1412.0035v1.pdf)) as well as in trying to identify flaws or vulnerabilities in networks by synthesizing and feeding them adversarial examples (see [Nguyen et al](https://arxiv.org/pdf/1412.1897v4.pdf), and [Dosovitskiy & Brox](https://arxiv.org/pdf/1506.02753.pdf)). The following is an example from Simonyan et al on visualizing image classification models.
#
# 
#
# In 2012, the technique became widely known after [Le et al](https://googleblog.blogspot.in/2012/06/using-large-scale-brain-simulations-for.html) published results of an experiment in which a deep neural network was fed millions of images, predominantly from YouTube, and unexpectedly learned a cat face detector. At that time, the network was trained for three days on 16,000 CPU cores spread over 1,000 machines!
#
# 
#
# In 2015, following the rapid proliferation of cheap GPUs, Google software engineers [Mordvintsev, Olah, and Tyka](https://research.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html) first used it for ostensibly artistic purposes and introduced several innovations, including optimizing pixels over multiple scales (octaves), improved regularization, and most famously, using real images (photographs, paintings, etc) as input and optimizing their pixels so as to enhance whatever activations the network already detected (hence "hallucinating" or "dreaming"). They nicknamed their work "Deepdream" and released the first publicly available code for running it [in Caffe](https://github.com/google/deepdream/), which led to the technique being widely disseminated on social media, [puppyslugs](https://www.google.de/search?q=puppyslug&safe=off&tbm=isch&tbo=u&source=univ&sa=X&ved=0ahUKEwiT3aOwvtnXAhUHKFAKHXqdCBwQsAQIKQ&biw=960&bih=979) and all. Some highlights of their original work follow, with more found in [this gallery](https://photos.google.com/share/AF1QipPX0SCl7OzWilt9LnuQliattX4OUCj_8EP65_cTVnBmS1jnYgsGQAieQUc1VQWdgQ?key=<KEY>).
#
# 
# 
#
# A number of creative innovations were further introduced by [<NAME>](http://www.miketyka.com) including optimizing several channels along pre-arranged masks, and using feedback loops to generate video. Some examples of his work follow.
#
# 
#
# This notebook builds upon the code found in [tensorflow's deepdream example](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/tutorials/deepdream). The first part of this notebook will summarize that one, including naive optimization, multiscale generation, and Laplacian normalization. The code from that notebook is lightly modified and is mostly found in the [lapnorm.py](../notebooks/lapnorm.py) script, which is imported into this notebook. The second part of this notebook builds upon that example by showing how to combine channels and mask their gradients, warp the canvas, and generate video using a feedback loop. Here is a [gallery of examples](http://www.genekogan.com/works/neural-synth/) and a [video work](https://vimeo.com/246047871).
#
# Before we get started, we need to make sure we have downloaded and placed the Inceptionism network in the data folder. Run the next cell if you haven't already downloaded it.
# Grab the Inception model from online and unzip it (you can skip this step if you've already downloaded the model).
# !wget -P ../data/ https://storage.googleapis.com/download.tensorflow.org/models/inception5h.zip
# !unzip ../data/inception5h.zip -d ../data/inception5h/
# !rm ../data/inception5h.zip
# To get started, make sure all of the following import statements work without error. You should get a message telling you there are 59 layers in the network and 7548 channels.
from __future__ import print_function
from io import BytesIO
import math, time, copy, json, os
import glob
from os import listdir
from os.path import isfile, join
from random import random
from enum import Enum
from functools import partial
import PIL.Image
from IPython.display import clear_output, Image, display, HTML
import numpy as np
import scipy.misc
import tensorflow as tf
from lapnorm import *
# Let's inspect the network now. The following will give us the name of all the layers in the network, as well as the number of channels they contain. We can use this as a lookup table when selecting channels.
for l, layer in enumerate(layers):
layer = layer.split("/")[1]
num_channels = T(layer).shape[3]
print(layer, num_channels)
# The basic idea is to take any image as input, then iteratively optimize its pixels so as to maximally activate a particular channel (feature extractor) in a trained convolutional network. We reproduce tensorflow's recipe here to read the code in detail. In `render_naive`, we take `img0` as input, then for `iter_n` steps, we calculate the gradient of the pixels with respect to our optimization objective, or in other words, the diff for all of the pixels we must add in order to make the image activate the objective. The objective we pass is a channel in one of the layers of the network, or an entire layer. Declare the function below.
def render_naive(t_obj, img0, iter_n=20, step=1.0):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
img = img0.copy()
for i in range(iter_n):
g, score = sess.run([t_grad, t_score], {t_input:img})
# normalizing the gradient, so the same step size should work
g /= g.std()+1e-8 # for different layers and networks
img += g*step
return img
# Now let's try running it. First, we initialize a 200x200 block of colored noise. We then select the layer `mixed4d_5x5_bottleneck_pre_relu` and the 20th channel in that layer as the objective, and run it through `render_naive` for 40 iterations. You can try to optimize at different layers or different channels to get a feel for how it looks.
img0 = np.random.uniform(size=(200, 200, 3)) + 100.0
layer = 'mixed4d_5x5_bottleneck_pre_relu'
channel = 20
img1 = render_naive(T(layer)[:,:,:,channel], img0, 40, 1.0)
display_image(img1)
# The above isn't so interesting yet. One improvement is to use repeated upsampling to effectively detect features at multiple scales (what we call "octaves") of the image. What we do is we start with a smaller image and calculate the gradients for that, going through the procedure like before. Then we upsample it by a particular ratio and calculate the gradients and modify the pixels of the result. We do this several times.
#
# You can see that `render_multiscale` is similar to `render_naive` except now the addition of the outer "octave" loop which repeatedly upsamples the image using the `resize` function.
def render_multiscale(t_obj, img0, iter_n=10, step=1.0, octave_n=3, octave_scale=1.4):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
img = img0.copy()
for octave in range(octave_n):
if octave>0:
hw = np.float32(img.shape[:2])*octave_scale
img = resize(img, np.int32(hw))
for i in range(iter_n):
g = calc_grad_tiled(img, t_grad)
# normalizing the gradient, so the same step size should work
g /= g.std()+1e-8 # for different layers and networks
img += g*step
print("octave %d/%d"%(octave+1, octave_n))
clear_output()
return img
# Let's try this on noise first. Note the new variables `octave_n` and `octave_scale` which control the parameters of the scaling. Thanks to tensorflow's patch to do the process on overlapping subrectangles, we don't have to worry about running out of memory. However, making the overall size large will mean the process takes longer to complete.
# +
h, w = 200, 200
octave_n = 3
octave_scale = 1.4
iter_n = 30
img0 = np.random.uniform(size=(h, w, 3)) + 100.0
layer = 'mixed4d_5x5_bottleneck_pre_relu'
channel = 25
img1 = render_multiscale(T(layer)[:,:,:,channel], img0, iter_n, 1.0, octave_n, octave_scale)
display_image(img1)
# -
# Now load a real image and use that as the starting point. We'll use the kitty image in the assets folder. Here is the original.
# <img src="../assets/kitty.jpg" alt="kitty" style="width: 280px;"/>
# +
h, w = 240, 240
octave_n = 3
octave_scale = 1.4
iter_n = 30
img0 = load_image('../assets/kitty.jpg', h, w)
layer = 'mixed4d_5x5_bottleneck_pre_relu'
channel = 21
img1 = render_multiscale(T(layer)[:,:,:,channel], img0, iter_n, 1.0, octave_n, octave_scale)
display_image(img1)
# -
# Now we introduce Laplacian normalization. The problem is that although we are finding features at multiple scales, it seems to have a lot of unnatural high-frequency noise. We apply a [Laplacian pyramid decomposition](https://en.wikipedia.org/wiki/Pyramid_%28image_processing%29#Laplacian_pyramid) to the image as a regularization technique and calculate the pixel gradient at each scale, as before.
#
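# To see what a Laplacian pyramid decomposition does in isolation, here is a minimal numpy sketch (naive box down/upsampling, an illustration rather than the tensorflow implementation used below): the image is split into band-pass detail layers plus a coarse residual, and collapsing the pyramid recovers the original.

```python
import numpy as np

def downsample(img):
    # naive 2x downsample: average each 2x2 block
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(img):
    # naive 2x upsample: repeat each pixel
    return img.repeat(2, axis=0).repeat(2, axis=1)

def laplacian_pyramid(img, levels=3):
    bands = []
    for _ in range(levels):
        low = downsample(img)
        bands.append(img - upsample(low))  # band-pass detail at this scale
        img = low
    bands.append(img)                      # coarsest residual
    return bands

img = np.random.rand(32, 32)
pyr = laplacian_pyramid(img)

# collapsing the pyramid reconstructs the image
recon = pyr[-1]
for band in reversed(pyr[:-1]):
    recon = upsample(recon) + band
```

In `render_lapnorm`, a similar decomposition is applied to the *gradient*: each frequency band is normalized separately before the pyramid is collapsed, which tames the high-frequency noise.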
def render_lapnorm(t_obj, img0, iter_n=10, step=1.0, oct_n=3, oct_s=1.4, lap_n=4):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
# build the laplacian normalization graph
lap_norm_func = tffunc(np.float32)(partial(lap_normalize, scale_n=lap_n))
img = img0.copy()
for octave in range(oct_n):
if octave>0:
hw = np.float32(img.shape[:2])*oct_s
img = resize(img, np.int32(hw))
for i in range(iter_n):
g = calc_grad_tiled(img, t_grad)
g = lap_norm_func(g)
img += g*step
print('.', end='')
print("octave %d/%d"%(octave+1, oct_n))
clear_output()
return img
# With Laplacian normalization and multiple octaves, we have the core technique finished and are level with the Tensorflow example. Try running the example below and modifying some of the numbers to see how it affects the result. Remember you can use the layer lookup table at the top of this notebook to recall the different layers that are available to you. Note the differences between early (low-level) layers and later (high-level) layers.
# +
h, w = 300, 400
octave_n = 3
octave_scale = 1.4
iter_n = 10
img0 = np.random.uniform(size=(h, w, 3)) + 100.0
layer = 'mixed4d_5x5_bottleneck_pre_relu'
channel = 25
img1 = render_lapnorm(T(layer)[:,:,:,channel], img0, iter_n, 1.0, octave_n, octave_scale)
display_image(img1)
# -
# Now we are going to modify the `render_lapnorm` function in three ways.
#
# 1) Instead of passing just a single channel or layer to be optimized (the objective, `t_obj`), we can pass several in an array, letting us optimize several channels simultaneously (it must be an array even if it contains just one element).
#
# 2) We now also pass in `mask`, which is a numpy array of dimensions (`h`,`w`,`n`) where `h` and `w` are the height and width of the source image `img0` and `n` is equal to the number of objectives in `t_obj`. The mask is like a gate or multiplier of the gradient for each channel. mask[:,:,0] gets multiplied by the gradient of the first objective, mask[:,:,1] by the second and so on. It should contain a float between 0 and 1 (0 to kill the gradient, 1 to let all of it pass). Another way to think of `mask` is it's like `step` for every individual pixel for each objective.
#
# 3) Internally, we use a convenience function `get_mask_sizes` which figures out for us the size of the image and mask at every octave, so we don't have to worry about calculating this ourselves, and can just pass in an img and mask of the same size.
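# Point 2's gating rule can be seen in a toy numpy example before reading the full function (made-up constant gradients, not real network output): each objective's gradient is scaled per pixel by its mask slice before being added to the image.

```python
import numpy as np

h, w = 4, 4
g1 = np.ones((h, w, 3))           # pretend gradient for objective 1
g2 = 2.0 * np.ones((h, w, 3))     # pretend gradient for objective 2
mask = np.zeros((h, w, 2))
mask[:, :2, 0] = 1.0              # left half passes only objective 1
mask[:, 2:, 1] = 1.0              # right half passes only objective 2

step = 1.0
img = np.zeros((h, w, 3))
for k, g in enumerate([g1, g2]):
    # the same update shape as the inner loop below: gradient * step * mask slice
    img += g * step * mask[:, :, k].reshape(h, w, 1)
```

The left half of `img` moves by 1 per step and the right half by 2; a float between 0 and 1 would blend the two objectives instead of gating them.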
def lapnorm_multi(t_obj, img0, mask, iter_n=10, step=1.0, oct_n=3, oct_s=1.4, lap_n=4, clear=True):
mask_sizes = get_mask_sizes(mask.shape[0:2], oct_n, oct_s)
img0 = resize(img0, np.int32(mask_sizes[0]))
t_score = [tf.reduce_mean(t) for t in t_obj] # defining the optimization objective
t_grad = [tf.gradients(t, t_input)[0] for t in t_score] # behold the power of automatic differentiation!
# build the laplacian normalization graph
lap_norm_func = tffunc(np.float32)(partial(lap_normalize, scale_n=lap_n))
img = img0.copy()
for octave in range(oct_n):
if octave>0:
hw = mask_sizes[octave] #np.float32(img.shape[:2])*oct_s
img = resize(img, np.int32(hw))
oct_mask = resize(mask, np.int32(mask_sizes[octave]))
for i in range(iter_n):
g_tiled = [lap_norm_func(calc_grad_tiled(img, t)) for t in t_grad]
for g, gt in enumerate(g_tiled):
img += gt * step * oct_mask[:,:,g].reshape((oct_mask.shape[0],oct_mask.shape[1],1))
print('.', end='')
print("octave %d/%d"%(octave+1, oct_n))
if clear:
clear_output()
return img
# Try first on noise, as before. This time, we pass in two objectives from different layers and we create a mask where the top half only lets in the first channel, and the bottom half only lets in the second.
# +
h, w = 300, 400
octave_n = 3
octave_scale = 1.4
iter_n = 10
img0 = np.random.uniform(size=(h, w, 3)) + 100.0
objectives = [T('mixed3a_3x3_bottleneck_pre_relu')[:,:,:,25],
T('mixed4d_5x5_bottleneck_pre_relu')[:,:,:,15]]
# mask
mask = np.zeros((h, w, 2))
mask[:150,:,0] = 1.0
mask[150:,:,1] = 1.0
img1 = lapnorm_multi(objectives, img0, mask, iter_n, 1.0, octave_n, octave_scale)
display_image(img1)
# -
# Now the same thing, but we optimize over the kitty instead and pick new channels.
# +
h, w = 400, 400
octave_n = 3
octave_scale = 1.4
iter_n = 10
img0 = load_image('../assets/kitty.jpg', h, w)
objectives = [T('mixed4d_3x3_bottleneck_pre_relu')[:,:,:,125],
T('mixed5a_5x5_bottleneck_pre_relu')[:,:,:,30]]
# mask
mask = np.zeros((h, w, 2))
mask[:,:200,0] = 1.0
mask[:,200:,1] = 1.0
img1 = lapnorm_multi(objectives, img0, mask, iter_n, 1.0, octave_n, octave_scale)
display_image(img1)
# -
# Let's make a more complicated mask. Here we use numpy's `linspace` function to linearly interpolate the mask between 0 and 1, going from left to right, in the first channel's mask, and the opposite for the second channel. Thus on the far left of the image, we let in only the second channel, on the far right only the first channel, and in the middle exactly 50% of each. We'll make a long one to show the smooth transition. We'll also visualize the first channel's mask right afterwards.
# +
h, w = 256, 1024
img0 = np.random.uniform(size=(h, w, 3)) + 100.0
octave_n = 3
octave_scale = 1.4
objectives = [T('mixed3b_5x5_bottleneck_pre_relu')[:,:,:,9],
T('mixed4d_5x5_bottleneck_pre_relu')[:,:,:,17]]
mask = np.zeros((h, w, 2))
mask[:,:,0] = np.linspace(0,1,w)
mask[:,:,1] = np.linspace(1,0,w)
img1 = lapnorm_multi(objectives, img0, mask, iter_n=20, step=1.0, oct_n=3, oct_s=1.4, lap_n=4)
print("image")
display_image(img1)
print("masks")
display_image(255*mask[:,:,0])
display_image(255*mask[:,:,1])
# -
# One can think up many clever ways to make masks. Maybe they are arranged as overlapping concentric circles, or along diagonal lines, or even using [Perlin noise](https://github.com/caseman/noise) to get smooth organic-looking variation.
#
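# For instance, a hypothetical diagonal mask (not used in the example below) can be built much like the `linspace` masks, with the blend varying along `i + j` instead of along a single axis:

```python
import numpy as np

h, w = 256, 256
ii, jj = np.meshgrid(np.arange(h), np.arange(w), indexing='ij')
diag = (ii + jj) / float(h + w - 2)   # 0 at the top-left corner, 1 at the bottom-right
mask = np.zeros((h, w, 2))
mask[:, :, 0] = diag                  # first objective fades in along the diagonal
mask[:, :, 1] = 1.0 - diag            # second objective fades out
```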
# Here is one example making a circular mask.
# +
h, w = 500, 500
cy, cx = 0.5, 0.5
# circle masks
pts = np.array([[[i/(h-1.0),j/(w-1.0)] for j in range(w)] for i in range(h)])
ctr = np.array([[[cy, cx] for j in range(w)] for i in range(h)])
pts -= ctr
dist = (pts[:,:,0]**2 + pts[:,:,1]**2)**0.5
dist = dist / np.max(dist)
mask = np.ones((h, w, 2))
mask[:, :, 0] = dist
mask[:, :, 1] = 1.0-dist
img0 = np.random.uniform(size=(h, w, 3)) + 100.0
octave_n = 3
octave_scale = 1.4
objectives = [T('mixed3b_5x5_bottleneck_pre_relu')[:,:,:,9],
T('mixed4d_5x5_bottleneck_pre_relu')[:,:,:,17]]
img1 = lapnorm_multi(objectives, img0, mask, iter_n=20, step=1.0, oct_n=3, oct_s=1.4, lap_n=4)
display_image(img1)
# -
# Now, we move on to generating video. The most straightforward way to do this is using feedback; generate one image in the conventional way, and then use it as the input to the next generation, rather than starting with noise again. By itself, this would simply repeat or intensify the features found in the first image, but we can get interesting results by perturbing the input to the second generation slightly before passing it in. For example, we can crop it slightly to remove the outer rim, then resize it to the original size and run it through again. If we do this repeatedly, we will get what looks like a constant zooming-in motion.
#
# The next block of code demonstrates this. We'll make a small square with a single feature, then crop the outer rim by around 5% before making the next one. We'll repeat this 20 times and look at the resulting frames. For simplicity, we'll just set the mask to 1 everywhere. Note, we've also set the `clear` variable in `lapnorm_multi` to false so we can see all the images in sequence.
# +
h, w = 200, 200
# start with random noise
img = np.random.uniform(size=(h, w, 3)) + 100.0
octave_n = 3
octave_scale = 1.4
objectives = [T('mixed4d_5x5_bottleneck_pre_relu')[:,:,:,11]]
mask = np.ones((h, w, 1))
# repeat the generation loop 20 times. Notice the feedback -- we make img and then use it as the initial input
for f in range(20):
img = lapnorm_multi(objectives, img, mask, iter_n=20, step=1.0, oct_n=3, oct_s=1.4, lap_n=4, clear=False)
display_image(img) # let's see it
img = resize(img[10:-10,10:-10,:], (h, w)) # before looping back, crop the border by 10 pixels, resize, repeat
# -
# If you look at all the frames, you can see the zoom-in effect. Zooming is just one of the things we can do to get interesting dynamics. Another cropping technique might be to shift the canvas in one direction, or maybe we can slightly rotate the canvas around a pivot point, or perhaps distort it with Perlin noise. There are many things that can be done to get interesting and compelling results. Try also combining these with different ways of making and modifying masks, and the combinatorial space of possibilities grows immensely. Most ambitiously, you can try training your own convolutional network from scratch and using it instead of Inception to get more custom effects. Thus as we see, the technique of feature visualization provides a wealth of possibilities to generate interesting video art.
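# As a hedged sketch of one such perturbation (a hypothetical helper, wrapping at the edges for simplicity instead of the crop-and-resize used above): shifting the canvas a few pixels between generations produces a sideways drifting motion in the resulting video.

```python
import numpy as np

def shift_frame(img, dy=2, dx=3):
    # translate the canvas by (dy, dx) pixels, wrapping pixels around the edges
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

frame = np.random.uniform(size=(64, 64, 3))
next_input = shift_frame(frame)   # feed this back in as the next generation's input
```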
| deep-learning/DLforArtists/notebooks/neural-synth.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_mxnet_p36
# language: python
# name: conda_mxnet_p36
# ---
# # Amazon SageMaker Semantic Segmentation Algorithm
#
# 1. [Introduction](#Introduction)
# 2. [Setup](#Setup)
# 3. [Data Preparation](#Data-Preparation)
# 1. [Download data](#Download-data)
# 2. [Setup Data](#Setup-data)
# 3. [Upload to S3](#Upload-to-S3)
# 4. [Training](#Training)
# 5. [Hosting](#Hosting)
# 6. [Inference](#Inference)
#
# ## Introduction
#
# Semantic Segmentation (SS) is the task of classifying every pixel in an image with a class from a known set of labels. In contrast, [image classification](https://github.com/awslabs/amazon-sagemaker-examples/tree/master/introduction_to_amazon_algorithms/imageclassification_caltech) generates only one label per image and [object detection](https://github.com/awslabs/amazon-sagemaker-examples/tree/master/introduction_to_amazon_algorithms/object_detection_pascalvoc_coco) generates a bounding box along with the label for each object in the image. The semantic segmentation output is usually represented as different pixel values in the image. Therefore, the output is an integer matrix (or a grayscale image) with the same shape as the input image. This output image is also called a segmentation mask. With the Amazon SageMaker Semantic Segmentation algorithm, not only can you train your models with your own dataset but also use our pre-trained models for lazy initialization.
#
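# To make that output format concrete, here is a minimal sketch with illustrative values only: for an H x W input, the segmentation mask is an H x W integer array holding one class index per pixel.

```python
import numpy as np

# A toy 4x5 RGB input and its segmentation mask: one class index per pixel
# (e.g. in Pascal VOC: 0 = background, 7 = car, 15 = person).
image = np.zeros((4, 5, 3), dtype=np.uint8)   # H x W x channels
mask = np.array([[0,  0, 15, 15, 0],
                 [0, 15, 15, 15, 0],
                 [0,  0,  7,  7, 0],
                 [0,  0,  7,  7, 0]], dtype=np.uint8)

assert mask.shape == image.shape[:2]  # same spatial shape as the input
assert mask.max() < 21                # Pascal VOC has 21 classes (plus 255 for 'ignore')
```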
# This notebook is an end-to-end example introducing the Amazon SageMaker Semantic Segmentation algorithm. In this demo, we will demonstrate how to train and host a semantic segmentation model based on the fully-convolutional network ([FCN](https://arxiv.org/abs/1605.06211)) algorithm, using the [Pascal VOC dataset](http://host.robots.ox.ac.uk/pascal/VOC/) for training. Amazon SageMaker Semantic Segmentation also provides the option of using the Pyramid Scene Parsing Network ([PSP](https://arxiv.org/abs/1612.01105)) and [Deeplab-v3](https://arxiv.org/abs/1706.05587) in addition to the FCN network. Along the way, we will also demonstrate how to construct a training dataset in the format that the training job will consume. Finally, we will demonstrate how to host and validate the trained model.
#
# ## Setup
#
# To train the Semantic Segmentation algorithm on Amazon SageMaker, we need to set up and authenticate the use of AWS services. To begin with, we need an AWS account role with SageMaker access. This role, which gives SageMaker access to your data in S3, can be obtained automatically from the role used to start the notebook.
# +
# %%time
import sagemaker
from sagemaker import get_execution_role
role = get_execution_role()
print(role)
sess = sagemaker.Session()
# -
# We also need the S3 bucket that is used to store training data and the trained model artifacts. In this notebook, we use the default bucket that comes with Sagemaker. However, you can also create a bucket and use that bucket instead.
bucket = sess.default_bucket()
prefix = 'semantic-segmentation-demo'
print(bucket)
# Lastly, we need the Amazon SageMaker Semantic Segmentation docker image, which is static and need not be changed.
from sagemaker.amazon.amazon_estimator import get_image_uri
training_image = get_image_uri(sess.boto_region_name, 'semantic-segmentation', repo_version="latest")
print (training_image)
# ## Data Preparation
# [Pascal VOC](http://host.robots.ox.ac.uk/pascal/VOC/) is a popular computer vision dataset which was used for annual semantic segmentation challenges from 2005 to 2012. The dataset has 1464 training and 1449 validation images with 21 classes. Examples of the segmentation dataset can be seen in the [Pascal VOC Dataset page](http://host.robots.ox.ac.uk/pascal/VOC/voc2012/segexamples/index.html). The classes are as follows:
#
# | Label Id | Class |
# |:--------:|:-------------:|
# | 0 | Background |
# | 1 | Aeroplane |
# | 2 | Bicycle |
# | 3 | Bird |
# | 4 | Boat |
# | 5 | Bottle |
# | 6 | Bus |
# | 7 | Car |
# | 8 | Cat |
# | 9 | Chair |
# | 10 | Cow |
# | 11 | Dining Table |
# | 12 | Dog |
# | 13 | Horse |
# | 14 | Motorbike |
# | 15 | Person |
# | 16 | Potted Plant |
# | 17 | Sheep |
# | 18 | Sofa |
# | 19 | Train |
# | 20 | TV / Monitor |
# | 255 | Hole / Ignore |
#
# In this notebook, we will use the data sets from 2012. While using the Pascal VOC dataset, please be aware of the usage rights:
# "The VOC data includes images obtained from the "flickr" website. Use of these images must respect the corresponding terms of use:
# * "flickr" terms of use (https://www.flickr.com/help/terms)"
# ### Download data
# Let us download the Pascal VOC datasets from VOC 2012.
#
# If this notebook has been run before, the data may already be downloaded and set up. In that case you can skip the download cells in this section and reuse the previous S3 upload. If you want to re-download and reprocess the data, run the cell below first to clean up the previous download.
# +
# # !rm -rf train
# # !rm -rf train_annotation
# # !rm -rf validation
# # !rm -rf validation_annotation
# # !rm -rf VOCdevkit
# # !rm test.jpg
# # !rm test_reshaped.jpg
# # !rm train_label_map.json
# +
# %%time
# Download the dataset
# !wget -P /tmp http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar
# # Extract the data.
# !tar -xf /tmp/VOCtrainval_11-May-2012.tar && rm /tmp/VOCtrainval_11-May-2012.tar
# -
# ### Setup data
# Move the images into the appropriate directory structure as described in the [documentation](link-to-documentation). This simply means moving the training images to the `train` directory, and so on. Fortunately, the dataset's annotations are already named in sync with the image names, satisfying one requirement of the Amazon SageMaker Semantic Segmentation algorithm.
# +
import os
import shutil
# Create directory structure mimicking the s3 bucket where data is to be dumped.
VOC2012 = 'VOCdevkit/VOC2012'
os.makedirs('train', exist_ok=True)
os.makedirs('validation', exist_ok=True)
os.makedirs('train_annotation', exist_ok=True)
os.makedirs('validation_annotation', exist_ok=True)
# Create a list of all training images.
filename = VOC2012+'/ImageSets/Segmentation/train.txt'
with open(filename) as f:
train_list = f.read().splitlines()
# Create a list of all validation images.
filename = VOC2012+'/ImageSets/Segmentation/val.txt'
with open(filename) as f:
val_list = f.read().splitlines()
# Move the jpg images in training list to train directory and png images to train_annotation directory.
for i in train_list:
shutil.copy2(VOC2012+'/JPEGImages/'+i+'.jpg', 'train/')
shutil.copy2(VOC2012+'/SegmentationClass/'+i+'.png','train_annotation/' )
# Move the jpg images in validation list to validation directory and png images to validation_annotation directory.
for i in val_list:
shutil.copy2(VOC2012+'/JPEGImages/'+i+'.jpg', 'validation/')
shutil.copy2(VOC2012+'/SegmentationClass/'+i+'.png','validation_annotation/' )
# -
# Let us check if the move was completed correctly. If it was, the number of jpeg images in `train` and png images in `train_annotation` must be the same, and likewise for validation.
# +
import glob
num_training_samples=len(glob.glob1('train',"*.jpg"))
print ( ' Num Train Images = ' + str(num_training_samples))
assert num_training_samples == len(glob.glob1('train_annotation',"*.png"))
print ( ' Num Validation Images = ' + str(len(glob.glob1('validation',"*.jpg"))))
assert len(glob.glob1('validation',"*.jpg")) == len(glob.glob1('validation_annotation',"*.png"))
# -
# Let us now move our prepared dataset to the S3 bucket that we decided to use earlier in this notebook. Notice the following directory structure that is used.
#
# ```bash
# root
# |-train/
# |-train_annotation/
# |-validation/
# |-validation_annotation/
#
# ```
# Notice also that all the images in the `_annotation` directory are indexed PNG files. This implies that the metadata (color mapping modes) of the files contain information on how to map the indices to colors and vice versa. Having an indexed PNG is an advantage, as the images will be rendered by image viewers as color images, but the images themselves only contain integers. The integers are within `[0, 1, ..., c-1, 255]` for a `c`-class segmentation problem, with `255` as the 'hole' or 'ignore' class. We allow any mode that is a [recognized standard](https://pillow.readthedocs.io/en/3.0.x/handbook/concepts.html#concept-modes) as long as it is read as integers.
#
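# As a quick sanity check of this format, we can build a tiny indexed ('P' mode) PNG with Pillow and confirm that reading it back yields the raw class indices rather than RGB colors. This is a self-contained sketch, not part of the training pipeline:

```python
import io
import numpy as np
from PIL import Image

# Build a tiny mask of class indices and store it as an indexed PNG
# with a grayscale palette.
mask = np.array([[0, 1, 2], [20, 255, 0]], dtype=np.uint8)
img = Image.new('P', (3, 2))                    # width=3, height=2
img.putdata([int(v) for v in mask.flatten()])   # row-major pixel indices
palette = []
for v in range(256):
    palette.extend([v, v, v])                   # map index v to gray (v, v, v)
img.putpalette(palette)

buf = io.BytesIO()
img.save(buf, format='PNG')
buf.seek(0)

# Reading the PNG back yields the raw class indices, not RGB colors.
loaded = np.array(Image.open(buf))
assert loaded.dtype == np.uint8
assert set(np.unique(loaded)) == {0, 1, 2, 20, 255}
```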
# While we recommend the format with default color mapping modes such as PASCAL, we also allow customers to specify their own label maps. Refer to the [documentation](Permalink-to-label-map-documentation-section) for more details. The label map for the PASCAL VOC dataset is the default, which we use in case no label map is provided:
# ```json
# {
# "scale": 1
# }```
# This essentially tells the algorithm to use the pixel values, read as integers, directly as labels. Since we are using the PASCAL dataset, let us create the label map for the training channel (recreating the default just for demonstration) and let the algorithm use the default for the validation channel, which is exactly the same. If a label map is used, please pass it to the `label_map` channel.
#
import json
label_map = { "scale": 1 }
with open('train_label_map.json', 'w') as lm_fname:
json.dump(label_map, lm_fname)
# Create channel names for the s3 bucket.
train_channel = prefix + '/train'
validation_channel = prefix + '/validation'
train_annotation_channel = prefix + '/train_annotation'
validation_annotation_channel = prefix + '/validation_annotation'
# label_map_channel = prefix + '/label_map'
# ### Upload to S3
# Let us now upload our dataset (and, optionally, our label map) to S3.
# %%time
# upload each directory to its respective channel on s3.
sess.upload_data(path='train', bucket=bucket, key_prefix=train_channel)
sess.upload_data(path='validation', bucket=bucket, key_prefix=validation_channel)
sess.upload_data(path='train_annotation', bucket=bucket, key_prefix=train_annotation_channel)
sess.upload_data(path='validation_annotation', bucket=bucket, key_prefix=validation_annotation_channel)
# sess.upload_data(path='train_label_map.json', bucket=bucket, key_prefix=label_map_channel)
# Next we need to set up an output location in S3, where the model artifacts will be dumped. These artifacts are the output of the algorithm's training job. Let us use another location in the same S3 bucket for this purpose.
s3_output_location = 's3://{}/{}/output'.format(bucket, prefix)
print(s3_output_location)
# ## Training
# Now that we are done with all the setup that is needed, we are ready to train our segmentation algorithm. To begin, let us create a ``sagemaker.estimator.Estimator`` object. This estimator will launch the training job. Let us name our training job `ss-notebook-demo` and use a nice-and-fast GPU instance (`ml.p3.2xlarge`) to train.
# Create the sagemaker estimator object.
ss_model = sagemaker.estimator.Estimator(training_image,
role,
train_instance_count = 1,
train_instance_type = 'ml.p3.2xlarge',
train_volume_size = 50,
train_max_run = 360000,
output_path = s3_output_location,
base_job_name = 'ss-notebook-demo',
sagemaker_session = sess)
# The semantic segmentation algorithm at its core has two components.
#
# - An encoder or backbone network,
# - A decoder or algorithm network.
#
# The encoder or backbone network is typically a regular convolutional neural network that may or may not have been pre-trained on an alternate task, such as the [classification task of ImageNet images](http://www.image-net.org/). The Amazon SageMaker Semantic Segmentation algorithm comes with a choice of two backbone networks ([ResNets](https://arxiv.org/abs/1512.03385) 50 or 101), either pre-trained or trained from scratch.
#
# The decoder is a network that picks up the outputs of one or many layers from the backbone and reconstructs the segmentation mask from it. Amazon SageMaker Semantic Segmentation algorithm comes with a choice of the [Fully-convolutional network (FCN)](https://arxiv.org/abs/1605.06211) or the [Pyramid scene parsing (PSP) network](https://arxiv.org/abs/1612.01105).
#
# The algorithm also has ample options for hyperparameters that help configure the training job. The next step in our training, is to setup these networks and hyperparameters along with data channels for training the model. Consider the following example definition of hyperparameters. See the SageMaker Semantic Segmentation [documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/semantic-segmentation.html) for more details on the hyperparameters.
#
# One of the hyperparameters here, for instance, is `epochs`. This defines how many passes over the dataset we iterate and determines the training time of the algorithm. For the sake of demonstration, let us run only `10` epochs. Based on our tests, training the model for `30` epochs with similar settings should give 'reasonable' segmentation results on the Pascal VOC data. For the most part, we will stick to the simplest of settings. For more information on the hyperparameters of this algorithm, refer to the [documentation](perma-link-to-hyperparameter-section-in-documentation).
# Setup hyperparameters
ss_model.set_hyperparameters(backbone='resnet-50', # This is the encoder. Other option is resnet-101
                             algorithm='fcn', # This is the decoder. Other options are 'psp' and 'deeplab'
use_pretrained_model='True', # Use the pre-trained model.
crop_size=240, # Size of image random crop.
num_classes=21, # Pascal has 21 classes. This is a mandatory parameter.
epochs=10, # Number of epochs to run.
learning_rate=0.0001,
optimizer='rmsprop', # Other options include 'adam', 'rmsprop', 'nag', 'adagrad'.
lr_scheduler='poly', # Other options include 'cosine' and 'step'.
mini_batch_size=16, # Setup some mini batch size.
validation_mini_batch_size=16,
early_stopping=True, # Turn on early stopping. If OFF, other early stopping parameters are ignored.
                             early_stopping_patience=2, # Tolerate these many epochs if the mIoU doesn't increase.
                             early_stopping_min_epochs=10, # No matter what, run at least this many epochs.
num_training_samples=num_training_samples) # This is a mandatory parameter, 1464 in this case.
# Now that the hyperparameters are set up, let us prepare the handshake between our data channels and the algorithm. To do this, we need to create `sagemaker.session.s3_input` objects from our data channels. These objects are then put in a simple dictionary, which the algorithm uses to train.
# +
# Create full bucket names
s3_train_data = 's3://{}/{}'.format(bucket, train_channel)
s3_validation_data = 's3://{}/{}'.format(bucket, validation_channel)
s3_train_annotation = 's3://{}/{}'.format(bucket, train_annotation_channel)
s3_validation_annotation = 's3://{}/{}'.format(bucket, validation_annotation_channel)
distribution = 'FullyReplicated'
# Create sagemaker s3_input objects
train_data = sagemaker.session.s3_input(s3_train_data, distribution=distribution,
content_type='image/jpeg', s3_data_type='S3Prefix')
validation_data = sagemaker.session.s3_input(s3_validation_data, distribution=distribution,
content_type='image/jpeg', s3_data_type='S3Prefix')
train_annotation = sagemaker.session.s3_input(s3_train_annotation, distribution=distribution,
content_type='image/png', s3_data_type='S3Prefix')
validation_annotation = sagemaker.session.s3_input(s3_validation_annotation, distribution=distribution,
content_type='image/png', s3_data_type='S3Prefix')
data_channels = {'train': train_data,
'validation': validation_data,
'train_annotation': train_annotation,
'validation_annotation':validation_annotation}
# -
# We have our `Estimator` object, we have set the hyperparameters for this object, and we have our data channels linked with the algorithm. The only remaining thing to do is to train the algorithm. The following command will do so. Training the algorithm involves a few steps. First, the instances that we requested while creating the `Estimator` object are provisioned and set up with the appropriate libraries. Then, the data from our channels is downloaded into the instance. Once this is done, the training job begins. The provisioning and data downloading will take time, depending on the size of the data and the availability of the instance type, so it might be a few minutes before we start getting data logs for our training jobs. The data logs will print out the training loss on the training data, which is the pixel-wise cross-entropy loss described in the algorithm papers, as well as the pixel-wise label accuracy and mean intersection-over-union (mIoU) on the validation data after each pass over the dataset (one epoch). These metrics measure the quality of the model under training.
#
# Once the job has finished, a "Job complete" message will be printed. The trained model can be found in the S3 bucket that was set up as `output_path` in the estimator.
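# The mIoU metric mentioned above can be sketched in a few lines of numpy. This is a toy illustration of the metric's definition (ignoring pixels labeled 255), not the algorithm's internal implementation:

```python
import numpy as np

def mean_iou(pred, label, num_classes=21, ignore=255):
    """Mean intersection-over-union over classes present in pred or label."""
    valid = label != ignore
    ious = []
    for c in range(num_classes):
        p, l = (pred == c) & valid, (label == c) & valid
        union = np.logical_or(p, l).sum()
        if union > 0:
            ious.append(np.logical_and(p, l).sum() / union)
    return float(np.mean(ious))

label = np.array([[0, 0, 1], [1, 1, 255]])
pred  = np.array([[0, 1, 1], [1, 1, 0]])
# class 0: intersection 1, union 2 -> 0.5 ; class 1: intersection 3, union 4 -> 0.75
assert abs(mean_iou(pred, label) - 0.625) < 1e-9
```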
ss_model.fit(inputs=data_channels, logs=True)
# ## Hosting
# Once the training is done, we can deploy the trained model as an Amazon SageMaker hosted endpoint. This will allow us to make predictions (or inference) from the model. Note that we don't have to host on the same instance type that we used to train. Training is a prolonged, compute-heavy job with compute and memory requirements that hosting typically does not have, and we can choose any SageMaker-supported instance to host the model. In our case we chose the `ml.p3.2xlarge` instance to train, but we choose to host the model on a less expensive CPU instance, `ml.c4.xlarge`. The endpoint deployment can be accomplished as follows:
ss_predictor = ss_model.deploy(initial_instance_count=1, instance_type='ml.c4.xlarge')
# ## Inference
# Now that the trained model is deployed at an endpoint that is up and running, we can use this endpoint for inference. To do this, let us download a test image which the algorithm has so far not seen.
# !wget -O test.jpg https://upload.wikimedia.org/wikipedia/commons/b/b4/R1200RT_in_Hongkong.jpg
filename = 'test.jpg'
# Let us convert the image to bytearray before we supply it to our endpoint.
# +
import matplotlib.pyplot as plt
import PIL
# resize image size for inference
im = PIL.Image.open(filename)
im.thumbnail([800,600],PIL.Image.ANTIALIAS)
im.save(filename, "JPEG")
# %matplotlib inline
plt.imshow(im)
plt.axis('off')
with open(filename, 'rb') as image:
img = image.read()
img = bytearray(img)
# -
# The endpoint accepts images in formats similar to the ones found in the training dataset. It accepts the `image/jpeg` `content_type`. The `accept` parameter takes two values: `image/png` and `application/x-protobuf`. Customers who want an indexed-PNG segmentation mask, such as the ones used during training, can use the `image/png` accept type as shown in the example below. Used this way, the endpoint returns an image bytearray.
# %%time
ss_predictor.content_type = 'image/jpeg'
ss_predictor.accept = 'image/png'
return_img = ss_predictor.predict(img)
# Let us display the segmentation mask.
# +
from PIL import Image
import numpy as np
import io
num_classes = 21
mask = np.array(Image.open(io.BytesIO(return_img)))
plt.imshow(mask, vmin=0, vmax=num_classes-1, cmap='jet')
plt.show()
# -
# The second `accept` type allows us to request all the class probabilities for each pixel. Let us use our endpoint to predict the probabilities of segments within this image. Since the image is `jpeg`, we use the appropriate `content_type` to run the prediction job. The endpoint returns a file that we can simply load and peek into.
# +
# %%time
# resize image size for inference
im = PIL.Image.open(filename)
im.thumbnail([800,600],PIL.Image.ANTIALIAS)
im.save(filename, "JPEG")
with open(filename, 'rb') as image:
img = image.read()
img = bytearray(img)
ss_predictor.content_type = 'image/jpeg'
ss_predictor.accept = 'application/x-protobuf'
results = ss_predictor.predict(img)
# -
# What we receive back is a recordio-protobuf of probabilities sent as a binary. It takes a little bit of effort to convert it into a readable array. Let us convert it to numpy format. We can make use of `mxnet`, which has the capability to read recordio-protobuf formats. Using this, we can convert the returned bytearray into a numpy array.
# +
from sagemaker.amazon.record_pb2 import Record
import mxnet as mx
results_file = 'results.rec'
with open(results_file, 'wb') as f:
f.write(results)
rec = Record()
recordio = mx.recordio.MXRecordIO(results_file, 'r')
protobuf = rec.ParseFromString(recordio.read())
# -
# The protobuf array has two parts to it. The first part contains the shape of the output and the second contains the values of the probabilities. Using the output shape, we can transform the probabilities into the shape of the image, so that we get a map of values. There typically is a singleton dimension, since we are only inferring on one image. We can remove it using the `squeeze` method.
values = list(rec.features["target"].float32_tensor.values)
shape = list(rec.features["shape"].int32_tensor.values)
shape = np.squeeze(shape)
mask = np.reshape(np.array(values), shape)
mask = np.squeeze(mask, axis=0)
# To plot the segmentation mask from these probabilities, let us get the index of the most probable class for each pixel by taking the `numpy.argmax` across the classes axis of the probability data, and plot the result as a segmentation mask.
pred_map = np.argmax(mask, axis=0)
num_classes = 21
plt.imshow(pred_map, vmin=0, vmax=num_classes-1, cmap='jet')
plt.show()
# ## Delete the Endpoint
# Having an endpoint running will incur some costs. Therefore as a clean-up job, we should delete the endpoint.
sagemaker.Session().delete_endpoint(ss_predictor.endpoint)
| introduction_to_amazon_algorithms/semantic_segmentation_pascalvoc/semantic_segmentation_pascalvoc.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Make nice interactive plots with Bokeh package
# +
import numpy as np
import matplotlib.pyplot as pl
from astropy.io import ascii
import pandas as pd
import bokeh
from bokeh.layouts import row, column
from bokeh.models import Select, CDSView, GroupFilter, BooleanFilter, Whisker, TeeHead
from bokeh.palettes import Spectral11, Blues8, Viridis11, RdBu8
from bokeh.plotting import curdoc, figure, ColumnDataSource
from bokeh.sampledata.autompg import autompg_clean as df
from bokeh.io import output_notebook, show, push_notebook
from bokeh.server.server import Server
output_notebook()
SIZES = list(range(6, 22, 3))
COLORS = Viridis11
N_SIZES = len(SIZES)
N_COLORS = len(COLORS)
# -
df = pd.read_pickle('../REASONS_DataFrame_withsdbinfo')
columns = sorted(df.columns)
discrete = [x for x in columns if df[x].dtype == object]
continuous = [x for x in columns if x not in discrete]
#source = ColumnDataSource(data=df)
#source
#source.data
discrete
# +
def create_figure():
xs = x.value#.value#df[x.value].values
ys = y.value#.value#df[y.value].values
x_title = x.value.title()
y_title = y.value.title()
kw = dict()
if x.value in discrete:
kw['x_range'] = sorted(set(df[x.value].values))
if x.value == 'Target':
df.sort_values(y.value,inplace=True)
kw['x_range'] = df[x.value].values
if y.value in discrete:
kw['y_range'] = sorted(set(df[y.value].values))
if y.value == 'Target':
df.sort_values(x.value,inplace=True)
kw['y_range'] = df[y.value].values
#print(kw['x_range'])
#print(y.value)
kw['title'] = "%s vs %s" % (x_title, y_title)
source = ColumnDataSource(data=df)
#source.add(source.data['width']/source.data['R'], name='width/R')
#source.add((np.sqrt(((source.data['width_1sigup']-source.data['width_1sigdwn'])/source.data['width'])**2.0+
# ((source.data['R_1sigup']-source.data['R_1sigdwn'])/source.data['R'])**2.0)
# *source.data['width/R'])/2.0, name='width/R_1sigup')
#source.add(-source.data['width/R_1sigup'], name='width/R_1sigdwn')
#source.add(source.data['width_lims'], name='width/R_lims')
source.add(['http://localhost:5006/bokehplots/thumbs/HD9672_natural_cont.png']*48, name='imgs')
TOOLTIPS = """
<div>
<div>
<img
src="@imgs" height="42" alt="@imgs" width="42"
style="float: left; margin: 0px 15px 15px 0px;"
border="2"
></img>
</div>
<div>
<span style="font-size: 17px; font-weight: bold;">@desc</span>
<span style="font-size: 15px; color: #966;">[$index]</span>
</div>
<div>
<span>@fonts{safe}</span>
</div>
<div>
<span style="font-size: 15px;">Location</span>
<span style="font-size: 10px; color: #696;">($x, $y)</span>
</div>
</div>
"""
#TOOLTIPS = [
#("system", "@Target"),
##("("+x_title+","+y_title+")", "(@"+x.value+", @"+y.value+")"),
#("Wavelength (mm)", "@wavelength"),
#("Flux (mJy)", "@Fbelt"),
#("Radius (au)", "@R"),
#("Width (au)", "@width"),
#("PA (deg)", "@PA"),
#("inc (deg)", "@inc"),
##("desc", "@desc"),
#]
if (xs not in ['Target', 'wavelength', 'PA', 'inc', 'dRA', 'dDec', 'width/R']):
xaxtype='log'
else:
xaxtype='auto'
if (ys not in ['Target', 'wavelength', 'PA', 'inc', 'dRA', 'dDec', 'width/R']):
yaxtype='log'
else:
yaxtype='auto'
p = figure(plot_height=700, plot_width=900, tools='pan,box_zoom,hover,reset, wheel_zoom', tooltips=TOOLTIPS, **kw, x_axis_type=xaxtype, y_axis_type=yaxtype)
p.xaxis.axis_label = x_title
p.yaxis.axis_label = y_title
if x.value in discrete:
        p.xaxis.major_label_orientation = np.pi / 4
sz = 9
if size.value != 'None':
if len(set(df[size.value])) > N_SIZES:
groups = pd.qcut(df[size.value].values, N_SIZES, duplicates='drop')
else:
groups = pd.Categorical(df[size.value])
sz = [SIZES[xx] for xx in groups.codes]
source.add(sz, name='size')
else:
source.add([sz]*len(df[x.value].values), name='size')
c = "#31AADE"
if color.value != 'None':
if len(set(df[color.value])) > N_COLORS:
groups = pd.qcut(df[color.value].values, N_COLORS, duplicates='drop')
else:
groups = pd.Categorical(df[color.value])
c = [COLORS[xx] for xx in groups.codes]
source.add(c, name='color')
else:
source.add([c]*len(df[x.value].values), name='color')
if (xs+'_lims' in source.column_names):# and (ys+'_lims' not in source.column_names):
view3 = CDSView(source=source, filters=[GroupFilter(column_name=xs+'_lims', group='l')])
view1 = CDSView(source=source, filters=[GroupFilter(column_name=xs+'_lims', group='u')])
view2 = CDSView(source=source, filters=[GroupFilter(column_name=xs+'_lims', group='NaN'),
BooleanFilter([False if xx=='NaN' else True for xx in df[xs].values])])
p.circle(x=xs, y=ys, source=source, color='color', size='size', line_color="white", alpha=0.6,
hover_color='white', hover_alpha=0.5, view=view2)
p.triangle(x=xs, y=ys, source=source, color='color', size='size', line_color="white", alpha=0.6,
hover_color='white', hover_alpha=0.5, view=view1, angle=np.pi/2.0)
p.triangle(x=xs, y=ys, source=source, color='color', size='size', line_color="white", alpha=0.6,
hover_color='white', hover_alpha=0.5, view=view3, angle=-np.pi/2.0)
if (ys+'_lims' in source.column_names):# and (xs+'_lims' not in source.column_names):
view3 = CDSView(source=source, filters=[GroupFilter(column_name=ys+'_lims', group='l')])
view1 = CDSView(source=source, filters=[GroupFilter(column_name=ys+'_lims', group='u')])
view2 = CDSView(source=source, filters=[GroupFilter(column_name=ys+'_lims', group='NaN'),
BooleanFilter([False if xx=='NaN' else True for xx in df[xs].values])])
p.circle(x=xs, y=ys, source=source, color='color', size='size', line_color="white", alpha=0.6,
hover_color='white', hover_alpha=0.5, view=view2)
p.inverted_triangle(x=xs, y=ys, source=source, color='color', size='size', line_color="white", alpha=0.6,
hover_color='white', hover_alpha=0.5, view=view1)
p.triangle(x=xs, y=ys, source=source, color='color', size='size', line_color="white", alpha=0.6,
hover_color='white', hover_alpha=0.5, view=view3)
if (xs+'_lims' not in source.column_names) and (ys+'_lims' not in source.column_names):
p.circle(x=xs, y=ys, source=source, color='color', size='size', line_color="white", alpha=0.6, hover_color='white', hover_alpha=0.5)
if xs+'_1sigup' in source.column_names:
source.add(source.data[xs]+source.data[xs+'_1sigup'], name=xs+'up')
source.add(source.data[xs]+source.data[xs+'_1sigdwn'], name=xs+'dwn')
w = Whisker(source = source, base = ys, upper = xs+'up', lower = xs+'dwn', dimension='width', line_color='color',
line_width=2.0, line_alpha=0.5, upper_head=TeeHead(line_color='red', line_alpha=0.0), lower_head=TeeHead(line_color='red', line_alpha=0.0))
#w.upper_head.line_color = 'color'
#w.lower_head.line_color = 'color'
p.add_layout(w)
if ys+'_1sigup' in source.column_names:
source.add(source.data[ys]+source.data[ys+'_1sigup'], name=ys+'up')
source.add(source.data[ys]+source.data[ys+'_1sigdwn'], name=ys+'dwn')
w = Whisker(source = source, base = xs, upper = ys+'up', lower = ys+'dwn', dimension='height', line_color='color',
line_width=2.0, line_alpha=0.5, upper_head=TeeHead(line_color='red', line_alpha=0.0), lower_head=TeeHead(line_color='red', line_alpha=0.0))
#w.upper_head.line_color()
#w.lower_head.line_color()
p.add_layout(w)
return p#, color=c, size=sz
# -
def update(attr, old, new):
layout.children[1] = create_figure()
#push_notebook()
#source = ColumnDataSource(data=df)
#source.data['R']
# +
x = Select(title='X-Axis', value='R', options=columns)
x.on_change('value', update)
y = Select(title='Y-Axis', value='Target', options=columns)
y.on_change('value', update)
size = Select(title='Size', value='width', options=['None'] + continuous)
size.on_change('value', update)
color = Select(title='Color', value='f', options=['None'] + continuous)
color.on_change('value', update)
controls = column([x, y, color, size], width=200)
layout = row(controls, create_figure())
curdoc().add_root(layout)
curdoc().title = "REASONS"
show(create_figure())
#sorted(set(df['Target'].values))
#df[color.value].values[np.isfinite(df[color.value].values)]
#Now save as python script, then execute start bokeh server with /d1/boudica1/anaconda3/bin/bokeh serve --show make_bokeh_plots.py
# -
| bokehplots/.ipynb_checkpoints/make_bokeh_plots-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import numpy.linalg as nl
import matplotlib.pyplot as plt
import groundTruthLocation as gtl
def convert_train_data(file_name):
# File content should be [b_i, rssi]
# ..........
# Convert it to dictionary dict[index_beacon] = [rssi_1, ..., rssi_n]
beacon_dict = {}
with open(file_name, 'r') as packs:
packs.readline()
for pack in packs:
pack = pack[0: len(pack) - 1]
space_loc = pack.find(' ')
beacon = pack[0: space_loc]
ri = float(pack[space_loc + 1:])
if beacon not in beacon_dict:
beacon_dict[beacon] = [ri]
else:
beacon_dict[beacon].append(ri)
return beacon_dict
def get_state_location(file_name = 'standingLocation.csv'):
# Assumption that metric in csv file is inch, multiply by 0.0254 to get meters.
# Return state_loc[i][x,y], i is the i_th state, starting from 0.
with open(file_name, 'r') as file:
file_len = sum(1 for _ in file)
with open(file_name, 'r') as file:
state_loc = np.zeros((file_len, 2))
for line in file:
line = line[0:(len(line)-1)]
splt = line.split(',')
state_loc[int(splt[0])-1][0] = float(splt[1])
state_loc[int(splt[0])-1][1] = float(splt[2])
    state_loc[:, 0] += 40  # shift every state's x (state_loc[:][0] would only change the first row)
    state_loc[:, 1] += 80  # shift every state's y
return 0.0254 * state_loc
def convert_test_data(file_name):
# file header should be [TimeStamp, Beacon, Rssi]
# convert it to numpy array
with open(file_name, 'r') as file:
file_len = sum(1 for _ in file)
with open(file_name) as file:
file.readline() # remove the header
data = np.zeros((file_len - 1, 3))
for line, i in zip(file, range(file_len - 1)):
line = line[0:(len(line)-1)]
splt = line.split(' ')
data[i][0] = float(splt[0])
data[i][1] = int(splt[1])
data[i][2] = float(splt[2])
return data
def get_alpha(beacon_dict, num_beacon = 60):
# Returned alphas should be [alpha_b1, alpha_b2, ..., alpha_bn]
# ssvs => signal strength values
alphas = np.zeros(num_beacon)
for beacon in beacon_dict:
ssvs = np.array(beacon_dict[beacon])
num_ssv = len(ssvs)
shift_vec = np.arange(num_ssv - 1) + 1
ssvs_shift = ssvs[shift_vec]
ssvs_trunc = ssvs[0: len(ssvs) - 1]
sbar = np.mean(ssvs)
sumsq = np.sum((ssvs - sbar) ** 2)
if sumsq == 0:
r1 = 0
else:
r1 = abs(np.sum((ssvs_shift - sbar) * (ssvs_trunc - sbar))) / sumsq
alphas[int(beacon) - 1] = r1
return alphas
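# The `r1` statistic computed in `get_alpha` above is the absolute lag-1 sample autocorrelation of each beacon's RSSI sequence. A standalone restatement of the same formula, checked on a perfectly alternating signal:

```python
import numpy as np

def lag1_autocorr(ssvs):
    """Absolute lag-1 sample autocorrelation, matching r1 in get_alpha."""
    ssvs = np.asarray(ssvs, dtype=float)
    sbar = ssvs.mean()
    sumsq = np.sum((ssvs - sbar) ** 2)
    if sumsq == 0:
        return 0.0
    return abs(np.sum((ssvs[1:] - sbar) * (ssvs[:-1] - sbar))) / sumsq

# An alternating signal is strongly (anti-)correlated at lag 1:
# sbar = 0, numerator = |5 * (-1)| = 5, denominator = 6 -> 5/6.
assert abs(lag1_autocorr([1, -1, 1, -1, 1, -1]) - 5 / 6) < 1e-9
# A constant signal is defined to have r1 = 0 (the sumsq == 0 branch).
assert lag1_autocorr([5, 5, 5]) == 0.0
```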
def train(state_files, num_beacon = 60):
# Get mu and sigma for each state, noted that here it's sigma instead of sigma square.
# Return state_map[i][mu,sigma]
num_state = len(state_files)
state_map = np.zeros((num_state, num_beacon, 2)) # For beacons not in file, exclude them
state_map[:,:,0] = -100
state_map[:,:,1] = 1/3
for file, si in zip(state_files, range(num_state)): # si represents state_index
beacon_dict = convert_train_data(file)
alphas = get_alpha(beacon_dict) # Alphas => [alpha_b1, alpha_b2, ..., alpha_bn]
for rssis in beacon_dict: # rssis is beacon number
alpha = alphas[int(rssis)-1]
arr = beacon_dict[rssis]
state_map[si][int(rssis)-1][0] = np.mean(arr)
state_map[si][int(rssis)-1][1] = np.sqrt((1 + alpha) / (1 - alpha)*np.var(arr))
if state_map[si][int(rssis)-1][1] == 0:
state_map[si][int(rssis) - 1][1] = 15
#state_map[si][int(rssis) - 1][0] = -100 # Subtle!!! Depends on signal strength.
return state_map
def process_cluster(cluster, num_beacon = 60):
# Convert cluster to [beacon_1_avg_rssi, ..., beacon_n_avg_rssi], 0 for beacons not present
ret = np.zeros(num_beacon)
cnt = np.zeros(num_beacon)
arg = np.argsort(cluster[:,1])
cluster = cluster[arg]
for i in range(len(cluster)):
ret[int(cluster[i][1])-1] += cluster[i][2]
cnt[int(cluster[i][1])-1] += 1
cnt[cnt == 0] = 1.0
return ret / cnt
def cluster_test_data(filename, num_beacon = 60, interval = 10):
# Data should be numpy array
# Return dictionary with d[i] = [beacon_1_avg_rssi, ..., beacon_n_avg_rssi], where i is the i-th time cluster
data = convert_test_data(filename)
num_rows = data.shape[0]
max_time = data[num_rows-1][0]
num_cluster = int(max_time / interval)
start_index = [0]
d = {}
for i in range(1, num_cluster + 1):
thred = interval * i
index = np.searchsorted(data[:,0], thred)
start_index.append(index)
for i in range(len(start_index) - 1):
cluster = data[start_index[i]:start_index[i+1]]
d[i] = process_cluster(cluster)
return d, max_time
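`cluster_test_data` bins readings into fixed `interval`-second windows by binary-searching the sorted timestamp column. A minimal illustration of the `np.searchsorted` step (timestamps below are made-up example values):

```python
import numpy as np

# timestamps in seconds, already sorted (as produced by convert_test_data)
times = np.array([0.5, 3.2, 9.9, 10.1, 19.5, 21.0])
interval = 10
# index of the first reading at or after each interval boundary
idx10 = np.searchsorted(times, 1 * interval)
idx20 = np.searchsorted(times, 2 * interval)
cluster0 = times[0:idx10]      # readings in [0, 10)
cluster1 = times[idx10:idx20]  # readings in [10, 20)
assert idx10 == 3 and idx20 == 5
assert list(cluster1) == [10.1, 19.5]
```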
def test(state_map, test_rssi, state_loc, index, jud):
jud = 1 - jud
# return the expected (x, y) location, weighting each state by its normalized likelihood
# state_map => (state, beacon, (mu, sigma))
# state_loc => (state, (x, y))
# test_rssi => [avg_rssi]
probs = []
arg_zero = (abs(test_rssi) > 1.0) * jud
for i in range(state_map.shape[0]):
map_mu = state_map[i][:, 0]
map_sigma = state_map[i][:, 1]
sum_log = - np.sum(arg_zero * np.square(test_rssi - map_mu) / (2 * map_sigma ** 2)) - \
np.sum(np.log(map_sigma) * arg_zero)
probs.append(sum_log)
probs = np.asarray(probs).reshape(state_map.shape[0], 1)
len_probs = len(probs)
norm_probs = np.zeros(len_probs).reshape(state_map.shape[0], 1)
for i in range(len_probs):
if np.any(probs - probs[i] > 50):
norm_probs[i] = 0
else:
norm_probs[i] = 1 / (np.sum(np.exp(probs - probs[i])))
res = np.sum(norm_probs * state_loc, axis = 0)
return res
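The normalization loop above is a numerically stable softmax over the per-state log-likelihoods (the `> 50` check just zeroes states whose probability would underflow anyway). A small sketch of the identity it relies on, with hypothetical log-likelihood values:

```python
import numpy as np

log_liks = np.array([-3.0, -1.0, -2.0])  # hypothetical per-state log-likelihoods
# shift-by-own-value form used above: 1 / sum(exp(p_j - p_i)) == exp(p_i) / sum(exp(p_j))
stable = np.array([1.0 / np.sum(np.exp(log_liks - p)) for p in log_liks])
naive = np.exp(log_liks) / np.sum(np.exp(log_liks))
assert np.allclose(stable, naive)
assert np.isclose(stable.sum(), 1.0)
```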
def ss_compensator(prev_state, test_file, state_loc, state_map, thred = 2, d = 0.05, N = 6):
# thred => max distance per second
# prev_state, pred_state => numpy (x, y) in meter metric
# d => perturbation fraction
pred_state = np.array([])
min_dist = 0
final_pert = -1
test_rssi = process_test(test_file, 2)
max_index = np.argmax(test_rssi)
for pert in [-d, 0, d]:  # perturb the strongest beacon's RSSI by the fraction d
pert_arr = np.zeros(len(test_rssi))
pert_arr[max_index] = pert
cur_pred = test(state_map, test_rssi + pert_arr, state_loc)
cur_dist = nl.norm(prev_state - cur_pred)
if final_pert == -1 or min_dist > cur_dist:
final_pert = pert
pred_state = cur_pred
min_dist = cur_dist
return pred_state
# +
def main(freq, intv, trip):
# with open("Beacon Log", "a+") as file:
# file.writelines('\n' + str(freq) + ', ' + str(intv) + ': \n')
train_files = []
for i in range(1, 10):
train_files.append('horusrssi/' + str(freq) + 'dBm' + str(intv) + 'secRssi' + str(i) + '.txt')
test_file = 'traceRssi' + str(freq) + 'dB' + str(intv) + 'sec1.txt'
state_loc = get_state_location()
state_map = train(train_files)
jud = np.zeros(60)
###################################
# if freq == -12 and intv == 1:
# iter = np.array([4, 18, 21, 50, 59, 46, 11, 28, 22, 39 ])
# jud[iter] = 1
# # Hard code some beacons to ignore.
# if freq == -15 and intv == 0.1:
# iter = np.array([44, 36, 38, 53])
# jud[iter] = 1
##################################
# Whether to ignore some beacons lost in training stage.
# for i in range(12):
# cur = state_map[i]
# for j in range(60):
# if cur[j][0] == -100:
# jud[j] = 1
# with open("Beacon Log", "a+") as file:
# file.writelines('State '+str(i+1)+ ', loc: (' + str(state_loc[i][0])+ ', ' + str(state_loc[i][1]) + ') Beacon '+str(j) + '\n')
# print('State '+str(i+1)+ ', loc: (' + str(state_loc[i][0])+ ', ' + str(state_loc[i][1]) + ') Beacon '+str(j))
d, max_time = cluster_test_data(test_file)
pred_loc = np.zeros((len(d), 2))
for index in d:
res = test(state_map, d[index], state_loc, index, jud)
pred_loc[index][0] = res[0]  # d is zero-indexed, so no offset is needed
pred_loc[index][1] = res[1]
trueLoc = np.zeros((int(max_time / 10), 2))
for i in range(int(max_time / 10)):
cur = gtl.findActualLocation(startTime=10*(i), endTime=10*(i+1), stopTime=10, maxTime=max_time)
trueLoc[i][0], trueLoc[i][1] = cur[0], cur[1]
valid_x = (trueLoc[:,0] >= 1.2) & (trueLoc[:,0] <= 7.62) # keep only locations inside the valid X range
valid_y = trueLoc[:,1] < 9.55 # and the valid Y range
valid_loc = valid_x & valid_y
errors = valid_loc * np.linalg.norm(pred_loc - trueLoc, axis = 1)
avg_error = np.sum(errors) / np.sum(valid_loc)
with open("c_pred/coopErrorHorus_trip" + str(trip) + str(intv) + str(freq), "w+") as file:
for loc in pred_loc:
file.writelines(str(loc[0]) + "," + str(loc[1]) + "\n")
# with open("Erro Log", "a+") as file:
# file.writelines('Frequency: ' + str(freq) + ', time interval: ' + str(intv) + ', error: ' + str(avg_error) + '\n')
print('Frequency: ' + str(freq) + ', time interval: ' + str(intv) + ', error: ' + str(avg_error) )
# +
freqs = [-12, -12, -12, -12, -12, -12, -15, -15, -20, -20, -20, -20]
intvs = [0.1, 0.1, 0.5, 0.5, 1, 1, 0.1, 1, 0.1, 0.5, 0.5, 1]
trips = [1, 3, 1, 3, 1, 3, 1, 1, 1, 3, 1, 3]
for i in range(12):
main(freqs[i], intvs[i], trips[i])
# -
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: scivis-plankton
# kernelspec:
# display_name: Python (scivis-plankton)
# language: python
# name: scivis-plankton
# ---
# # Evaluation of the ResNet-50 model
# ## Import libraries
# + gather={"logged": 1637758888059} jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
import numpy as np
import pandas as pd
import xarray as xr
import matplotlib.pyplot as plt
from scivision.io import load_pretrained_model, load_dataset
# -
# ## Load hold-out (test) dataset
# + gather={"logged": 1637758888163}
cat = load_dataset('https://github.com/alan-turing-institute/plankton-dsg-challenge')
ds_all = cat.plankton_multiple().to_dask()
labels_holdout = cat.labels_holdout().read()
labels_holdout_dedup = xr.Dataset.from_dataframe(
labels_holdout
.drop_duplicates(subset=["filename"])
.set_index("filename")
.sort_index()
)
ds_holdout_labelled = (
ds_all
.swap_dims({"concat_dim": "filename"})
.merge(labels_holdout_dedup, join="inner")
.swap_dims({"filename": "concat_dim"})
)
# + gather={"logged": 1637758888318} jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
print(ds_holdout_labelled)
# + gather={"logged": 1637758921381}
type(ds_holdout_labelled)
# + gather={"logged": 1637758921535} jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
# choose a test image
image_no = 18
image = ds_holdout_labelled['raster'].sel(concat_dim=image_no).compute().values
label_gt = ds_holdout_labelled['label3'].sel(concat_dim=image_no).compute().values
# + gather={"logged": 1637758921667} jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
plt.figure()
plt.imshow(image)
plt.title("{}".format(label_gt))
plt.show()
# -
# ## Load pretrained model
# +
# run if changes are made in https://github.com/acocac/scivision-plankton-models then restart the kernel
# #!pip -q uninstall -y scivision_plankton_models
# + gather={"logged": 1637754654394} jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
# Load model
scivision_yml = 'https://github.com/acocac/scivision-plankton-models/.scivision-config-resnet50.yaml'
model = load_pretrained_model(scivision_yml, allow_install=True)
# -
# ## Preprocess image
# + gather={"logged": 1637758925231} jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
# libraries
import torch
import torchvision
# + gather={"logged": 1637758925330} jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
# resize image
IMAGE = torchvision.transforms.ToTensor()(image)
IMAGE = torchvision.transforms.Resize((256,256))(IMAGE)
IMAGE = torch.unsqueeze(IMAGE, 0)
# -
# ## Predict and visualise
# + gather={"logged": 1637758925417} jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
# Get model prediction for the image
pred = model.predict(IMAGE)
# + gather={"logged": 1637758955243} jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
plt.figure()
plt.imshow(image)
_, preds = torch.max(pred, 1)
plt.title("Reference: {} \n Prediction: {}".format(label_gt, preds))
plt.show()
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
# Create a vector with 12 integers
vetor <- (c(1,2,3,4,5,6,7,8,9,10,11,12))
vetor
# Create a 4-row by 4-column matrix filled with integers
#
matriz1 <- matrix(c(1:16), nrow = 4, ncol = 4)
matriz1
# Create a list combining the vector and the matrix created above
lista1 <- list(vetor, matriz1)
lista1
# Using the read.table() function, read the file from the link below into a data frame
# http://data.princeton.edu/wws509/datasets/effort.dat
#
df <- data.frame(read.table("http://data.princeton.edu/wws509/datasets/effort.dat"))
df
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
import remi.gui as gui
from remi import start, App
from threading import Timer, Thread
import re
remiport = 8086
class MyApp(App):
def __init__(self, *args):
super(MyApp, self).__init__(*args)
def _net_interface_ip(self):
ip = super()._net_interface_ip()
return ip + f"/proxy/{remiport}"
def _overload(self, data, **kwargs):
if "filename" in kwargs:
filename = kwargs['filename']
else:
return data
paths = self.all_paths()
for pattern in paths.keys():
if ( filename.endswith(".css") or filename.endswith(".html") or filename.endswith(".js") or filename.endswith("internal") ):
if type(data) == str:
data = re.sub(f"/{pattern}:", f"/proxy/{remiport}/{pattern}:", data)
else:
data = re.sub(f"/{pattern}:", f"/proxy/{remiport}/{pattern}:", data.decode()).encode()
return data
def _process_all(self, func, **kwargs):
print(kwargs)
kwargs.update({"overload": self._overload})
super()._process_all(func, **kwargs)
def idle(self):
self.counter.set_text('Running Time: ' + str(self.count))
self.progress.set_value(self.count%100)
def main(self):
# the margin 0px auto centers the main container
verticalContainer = gui.Container(width=540, margin='0px auto', style={'display': 'block', 'overflow': 'hidden'})
horizontalContainer = gui.Container(width='100%', layout_orientation=gui.Container.LAYOUT_HORIZONTAL, margin='0px', style={'display': 'block', 'overflow': 'auto'})
subContainerLeft = gui.Container(width=320, style={'display': 'block', 'overflow': 'auto', 'text-align': 'center'})
self.img = gui.Image(f'/res:logo.png', height=100, margin='10px')
self.img.onclick.do(self.on_img_clicked)
self.table = gui.Table.new_from_list([('ID', 'First Name', 'Last Name'),
('101', 'Danny', 'Young'),
('102', 'Christine', 'Holand'),
('103', 'Lars', 'Gordon'),
('104', 'Roberto', 'Robitaille'),
('105', 'Maria', 'Papadopoulos')], width=300, height=200, margin='10px')
self.table.on_table_row_click.do(self.on_table_row_click)
# the arguments are width - height - layoutOrientationOrizontal
subContainerRight = gui.Container(style={'width': '220px', 'display': 'block', 'overflow': 'auto', 'text-align': 'center'})
self.count = 0
self.counter = gui.Label('', width=200, height=30, margin='10px')
self.lbl = gui.Label('This is a LABEL!', width=200, height=30, margin='10px')
self.bt = gui.Button('Press me!', width=200, height=30, margin='10px')
# setting the listener for the onclick event of the button
self.bt.onclick.do(self.on_button_pressed)
self.txt = gui.TextInput(width=200, height=30, margin='10px')
self.txt.set_text('This is a TEXTAREA')
self.txt.onchange.do(self.on_text_area_change)
self.spin = gui.SpinBox(1, 0, 100, width=200, height=30, margin='10px')
self.spin.onchange.do(self.on_spin_change)
self.progress = gui.Progress(1, 100, width=200, height=5)
self.check = gui.CheckBoxLabel('Label checkbox', True, width=200, height=30, margin='10px')
self.check.onchange.do(self.on_check_change)
self.btInputDiag = gui.Button('Open InputDialog', width=200, height=30, margin='10px')
self.btInputDiag.onclick.do(self.open_input_dialog)
self.btFileDiag = gui.Button('File Selection Dialog', width=200, height=30, margin='10px')
self.btFileDiag.onclick.do(self.open_fileselection_dialog)
self.btUploadFile = gui.FileUploader('./', width=200, height=30, margin='10px')
self.btUploadFile.onsuccess.do(self.fileupload_on_success)
self.btUploadFile.onfailed.do(self.fileupload_on_failed)
items = ('<NAME>','<NAME>','<NAME>','<NAME>')
self.listView = gui.ListView.new_from_list(items, width=300, height=120, margin='10px')
self.listView.onselection.do(self.list_view_on_selected)
self.link = gui.Link("http://localhost:8081", "A link to here", width=200, height=30, margin='10px')
self.dropDown = gui.DropDown.new_from_list(('DropDownItem 0', 'DropDownItem 1'),
width=200, height=20, margin='10px')
self.dropDown.onchange.do(self.drop_down_changed)
self.dropDown.select_by_value('DropDownItem 0')
self.slider = gui.Slider(10, 0, 100, 5, width=200, height=20, margin='10px')
self.slider.onchange.do(self.slider_changed)
self.colorPicker = gui.ColorPicker('#ffbb00', width=200, height=20, margin='10px')
self.colorPicker.onchange.do(self.color_picker_changed)
self.date = gui.Date('2015-04-13', width=200, height=20, margin='10px')
self.date.onchange.do(self.date_changed)
self.video = gui.Widget( _type='iframe', width=290, height=200, margin='10px')
self.video.attributes['src'] = "https://drive.google.com/file/d/0B0J9Lq_MRyn4UFRsblR3UTBZRHc/preview"
self.video.attributes['width'] = '100%'
self.video.attributes['height'] = '100%'
self.video.attributes['controls'] = 'true'
self.video.style['border'] = 'none'
self.tree = gui.TreeView(width='100%', height=300)
ti1 = gui.TreeItem("Item1")
ti2 = gui.TreeItem("Item2")
ti3 = gui.TreeItem("Item3")
subti1 = gui.TreeItem("Sub Item1")
subti2 = gui.TreeItem("Sub Item2")
subti3 = gui.TreeItem("Sub Item3")
subti4 = gui.TreeItem("Sub Item4")
subsubti1 = gui.TreeItem("Sub Sub Item1")
subsubti2 = gui.TreeItem("Sub Sub Item2")
subsubti3 = gui.TreeItem("Sub Sub Item3")
self.tree.append([ti1, ti2, ti3])
ti2.append([subti1, subti2, subti3, subti4])
subti4.append([subsubti1, subsubti2, subsubti3])
# appending a widget to another, the first argument is a string key
subContainerRight.append([self.counter, self.lbl, self.bt, self.txt, self.spin, self.progress, self.check, self.btInputDiag, self.btFileDiag])
# use a defined key as we replace this widget later
fdownloader = gui.FileDownloader('download test', '../remi/res/logo.png', width=200, height=30, margin='10px')
subContainerRight.append(fdownloader, key='file_downloader')
subContainerRight.append([self.btUploadFile, self.dropDown, self.slider, self.colorPicker, self.date, self.tree])
self.subContainerRight = subContainerRight
subContainerLeft.append([self.img, self.table, self.listView, self.link, self.video])
horizontalContainer.append([subContainerLeft, subContainerRight])
menu = gui.Menu(width='100%', height='30px')
m1 = gui.MenuItem('File', width=100, height=30)
m2 = gui.MenuItem('View', width=100, height=30)
m2.onclick.do(self.menu_view_clicked)
m11 = gui.MenuItem('Save', width=100, height=30)
m12 = gui.MenuItem('Open', width=100, height=30)
m12.onclick.do(self.menu_open_clicked)
m111 = gui.MenuItem('Save', width=100, height=30)
m111.onclick.do(self.menu_save_clicked)
m112 = gui.MenuItem('Save as', width=100, height=30)
m112.onclick.do(self.menu_saveas_clicked)
m3 = gui.MenuItem('Dialog', width=100, height=30)
m3.onclick.do(self.menu_dialog_clicked)
menu.append([m1, m2, m3])
m1.append([m11, m12])
m11.append([m111, m112])
menubar = gui.MenuBar(width='100%', height='30px')
menubar.append(menu)
verticalContainer.append([menubar, horizontalContainer])
#this flag will be used to stop the display_counter Timer
self.stop_flag = False
# kick of regular display of counter
self.display_counter()
# returning the root widget
return verticalContainer
def display_counter(self):
self.count += 1
if not self.stop_flag:
Timer(1, self.display_counter).start()
def menu_dialog_clicked(self, widget):
self.dialog = gui.GenericDialog(title='Dialog Box', message='Click Ok to transfer content to main page', width='500px')
self.dtextinput = gui.TextInput(width=200, height=30)
self.dtextinput.set_value('Initial Text')
self.dialog.add_field_with_label('dtextinput', 'Text Input', self.dtextinput)
self.dcheck = gui.CheckBox(False, width=200, height=30)
self.dialog.add_field_with_label('dcheck', 'Label Checkbox', self.dcheck)
values = ('<NAME>', '<NAME>', '<NAME>', '<NAME>')
self.dlistView = gui.ListView.new_from_list(values, width=200, height=120)
self.dialog.add_field_with_label('dlistView', 'Listview', self.dlistView)
self.ddropdown = gui.DropDown.new_from_list(('DropDownItem 0', 'DropDownItem 1'),
width=200, height=20)
self.dialog.add_field_with_label('ddropdown', 'Dropdown', self.ddropdown)
self.dspinbox = gui.SpinBox(min=0, max=5000, width=200, height=20)
self.dspinbox.set_value(50)
self.dialog.add_field_with_label('dspinbox', 'Spinbox', self.dspinbox)
self.dslider = gui.Slider(10, 0, 100, 5, width=200, height=20)
self.dspinbox.set_value(50)
self.dialog.add_field_with_label('dslider', 'Slider', self.dslider)
self.dcolor = gui.ColorPicker(width=200, height=20)
self.dcolor.set_value('#ffff00')
self.dialog.add_field_with_label('dcolor', 'Colour Picker', self.dcolor)
self.ddate = gui.Date(width=200, height=20)
self.ddate.set_value('2000-01-01')
self.dialog.add_field_with_label('ddate', 'Date', self.ddate)
self.dialog.confirm_dialog.do(self.dialog_confirm)
self.dialog.show(self)
def dialog_confirm(self, widget):
result = self.dialog.get_field('dtextinput').get_value()
self.txt.set_value(result)
result = self.dialog.get_field('dcheck').get_value()
self.check.set_value(result)
result = self.dialog.get_field('ddropdown').get_value()
self.dropDown.select_by_value(result)
result = self.dialog.get_field('dspinbox').get_value()
self.spin.set_value(result)
result = self.dialog.get_field('dslider').get_value()
self.slider.set_value(result)
result = self.dialog.get_field('dcolor').get_value()
self.colorPicker.set_value(result)
result = self.dialog.get_field('ddate').get_value()
self.date.set_value(result)
result = self.dialog.get_field('dlistView').get_value()
self.listView.select_by_value(result)
# listener function
def on_img_clicked(self, widget):
self.lbl.set_text('Image clicked!')
def on_table_row_click(self, table, row, item):
self.lbl.set_text('Table Item clicked: ' + item.get_text())
def on_button_pressed(self, widget):
self.lbl.set_text('Button pressed! ')
self.bt.set_text('Hi!')
def on_text_area_change(self, widget, newValue):
self.lbl.set_text('Text Area value changed!')
def on_spin_change(self, widget, newValue):
self.lbl.set_text('SpinBox changed, new value: ' + str(newValue))
def on_check_change(self, widget, newValue):
self.lbl.set_text('CheckBox changed, new value: ' + str(newValue))
def open_input_dialog(self, widget):
self.inputDialog = gui.InputDialog('Input Dialog', 'Your name?',
initial_value='type here',
width=500)
self.inputDialog.confirm_value.do(
self.on_input_dialog_confirm)
# the InputDialog widget is displayed here
self.inputDialog.show(self)
def on_input_dialog_confirm(self, widget, value):
self.lbl.set_text('Hello ' + value)
def open_fileselection_dialog(self, widget):
self.fileselectionDialog = gui.FileSelectionDialog('File Selection Dialog', 'Select files and folders', False,
'.')
self.fileselectionDialog.confirm_value.do(
self.on_fileselection_dialog_confirm)
# the FileSelectionDialog widget is displayed here
self.fileselectionDialog.show(self)
def on_fileselection_dialog_confirm(self, widget, filelist):
# a list() of filenames and folders is returned
self.lbl.set_text('Selected files: %s' % ','.join(filelist))
if len(filelist):
f = filelist[0]
# replace the last download link
fdownloader = gui.FileDownloader("download selected", f, width=200, height=30)
self.subContainerRight.append(fdownloader, key='file_downloader')
def list_view_on_selected(self, widget, selected_item_key):
""" The selection event of the listView, returns a key of the clicked event.
You can retrieve the item rapidly
"""
self.lbl.set_text('List selection: ' + self.listView.children[selected_item_key].get_text())
def drop_down_changed(self, widget, value):
self.lbl.set_text('New Combo value: ' + value)
def slider_changed(self, widget, value):
self.lbl.set_text('New slider value: ' + str(value))
def color_picker_changed(self, widget, value):
self.lbl.set_text('New color value: ' + value)
def date_changed(self, widget, value):
self.lbl.set_text('New date value: ' + value)
def menu_save_clicked(self, widget):
self.lbl.set_text('Menu clicked: Save')
def menu_saveas_clicked(self, widget):
self.lbl.set_text('Menu clicked: Save As')
def menu_open_clicked(self, widget):
self.lbl.set_text('Menu clicked: Open')
def menu_view_clicked(self, widget):
self.lbl.set_text('Menu clicked: View')
def fileupload_on_success(self, widget, filename):
self.lbl.set_text('File upload success: ' + filename)
def fileupload_on_failed(self, widget, filename):
self.lbl.set_text('File upload failed: ' + filename)
def on_close(self):
""" Overloading App.on_close event to stop the Timer.
"""
self.stop_flag = True
super(MyApp, self).on_close()
myRemi = Thread(target=start,
args=(MyApp,),
kwargs={'address':'127.0.0.1',
'port':remiport,
'multiple_instance':True,
'enable_file_cache':True,
'update_interval':0.5,
'start_browser':False,
})
myRemi.start()
# http://127.0.0.1:8888/proxy/8086/
from IPython.display import IFrame
IFrame(src=f"http://localhost:8888/proxy/{remiport}/",width="100%",height="600px")
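For reference, the `_overload` hook above works by plain regex substitution on served resources, turning absolute resource URLs into proxy-prefixed ones so the app renders behind jupyter-server-proxy. The snippet below illustrates the rewrite on a hypothetical HTML fragment (the `res:` path and port are just example values):

```python
import re

remiport = 8086  # same port assumed by the app above
html = '<link href="/res:style.css"><img src="/res:logo.png">'
rewritten = re.sub("/res:", f"/proxy/{remiport}/res:", html)
assert rewritten.count(f"/proxy/{remiport}/res:") == 2
assert '"/res:' not in rewritten
```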
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Two-sample Kolmogorov–Smirnov test
#
# This notebook applies a two-sample KS test to ages of zircon within garnet and zircon within the matrix, to test whether they are derived from the same distribution.
# ## Import Python modules
# +
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
# %matplotlib inline
sns.set(style='ticks', font='Arial', context = 'paper')
# -
# ## Import zircon data from weighted averages
# +
# Import U-Pb data
upb = pd.read_csv('../data/zircon-ages.csv')
upb = upb[upb.Age_Status == 'Include']
# Subset U-Pb data by zircon population
zm = upb[upb.Population == 'Matrix']
zg = upb[upb.Population == 'Garnet']
# -
# ## Perform two sample KS test
# The null hypothesis (that the samples come from the same distribution) is rejected if:
#
# 1. the p-value is small
# 2. the D statistic exceeds the critical values defined by:
#
# $$ D_{n,m} > c(\alpha)\sqrt{\frac{n+m}{nm}} $$
#
# Here, $n$ and $m$ are the sample sizes and $c(\alpha)$ is 1.36 for a confidence level of 0.05 and 1.63 for a confidence level of 0.01.
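Before applying the test to the zircon ages, the rejection criterion can be sanity-checked on synthetic samples (illustrative only; the normal distributions and seed below are arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.RandomState(0)
a = rng.normal(0.0, 1.0, 200)   # sample of size n
b = rng.normal(2.0, 1.0, 150)   # sample of size m, shifted by two standard deviations
D, p = stats.ks_2samp(a, b)
n, m = len(a), len(b)
crit95 = 1.36 * np.sqrt((n + m) / (n * m))
crit99 = 1.63 * np.sqrt((n + m) / (n * m))
# a two-sigma shift is easily detected: D exceeds both critical values
assert crit99 > crit95
assert D > crit99
```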
# +
from scipy import stats
ks = stats.ks_2samp(zm.Age, zg.Age)
m = zm.Age.count()
n = zg.Age.count()
# critical values for KS statistic
# if D is greater than s, then sample not drawn from same distribution
s95 = 1.36 * np.sqrt((n + m) / (n * m))
s99 = 1.63 * np.sqrt((n + m) / (n * m))
print(ks, s95, s99)
# -
# The samples are not from the same distribution
#
# * the D statistic is high (0.79)
# * the p-value is low (5.9E-6) and less than 0.05 (or 0.01)
# * the D statistic is greater than the critical values at 0.05 (0.45) and 0.01 (0.54)
# ## Plot results of the KS test
# +
# Set plotting style with Seaborn
sns.set_palette('Greys', n_colors=2)
# Scott's rule bin width: h = 3.5 * std / n^(1/3); the bin counts used below follow from the age range divided by h
m_width = np.round(3.5 * zm.Age.std() / (zm.Age.count()**0.3333))
g_width = np.round(3.5 * zg.Age.std() / (zg.Age.count()**0.3333))
print(m_width, g_width)
#n_bins = 50.
# plot the cumulative histogram for zircon in the matrix
plt.hist(zm.Age, 13, density=True, histtype='step', cumulative=True, label='Matrix zircon', lw=1.5)
# plot the cumulative histogram for zircon in garnet
plt.hist(zg.Age, 16, density=True, histtype='step', cumulative=True, label='Garnet zircon', lw=1.5)
plt.xlim(120., 210.)
plt.ylim(0.0, 1.0)
plt.xlabel('Age (Ma)')
plt.ylabel('Fractional cumulative frequency')
plt.legend(loc=2)
plt.savefig('../figs/supplement-ks-test.png')
# -
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
from statsmodels.tsa.arima_model import ARIMA
from nyoka import ArimaToPMML
import warnings
warnings.filterwarnings('ignore')
# +
from datetime import datetime

def parser(x):
return datetime.strptime(x, '%Y-%m')  # pd.datetime was removed in recent pandas
# Load the data
sales_data = pd.read_csv('sales-cars.csv', index_col=0, parse_dates = [0], date_parser = parser)
# -
data = [266,146,183,119,180,169,232,225,193,123,337,186,194,150,210,273,191,287,
226,304,290,422,265,342,340,440,316,439,401,390,490,408,490,420,520,480]
index = pd.date_range(start='2016-01-01', end='2018-12-01', freq='MS')  # pd.DatetimeIndex no longer accepts start/end
ts_data = pd.Series(data, index)
ts_data.index.name = 'date_index'
ts_data.name = 'cars_sold'
ts_data.name
sales_data.__class__
model = ARIMA(sales_data, order = (9, 2, 0))
result = model.fit()
result._results.arparams
result.resid
# +
import numpy
numpy.std(result.resid)
# -
result.data.orig_endog.columns
# # Export
# Use exporter to create pmml file
pmml_f_name = 'non_seasonal_car_sales.pmml'
ArimaToPMML( results_obj = result,
pmml_file_name = pmml_f_name
)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: cabbage
# language: python
# name: cabbage
# ---
# # Mask R-CNN Training and Inference
# This notebook is part of the _Automated plant stage labelling of herbarium samples in the family *Brassicaceae*_ project, carried out at [Propulsion Academy Zurich](https://propulsion.academy/?gclid=Cj0KCQiAwf39BRCCARIsALXWETyIhnHT7bA3VYXXOC415brejc6qYXnX7kEpqJmmJ5d5kAcYgoiLhI4aAmPxEALw_wcB) in collaboration with [ETH Library](https://library.ethz.ch/en/).
#
# In this notebook we use a ported version of Matterport's implementation of Mask R-CNN to train on a custom dataset, then use the trained weights to run inference on new images.
# **IMPORTANT**: To make the model work, please download:
# 1. the Mask_RCNN model files from [akTwelve's repo](https://github.com/akTwelve/Mask_RCNN) and add them to the src folder with the name "Mask_RCNN"
# 2. the model weights from the following address: [model weights](https://drive.google.com/drive/folders/1HNs_EUyxMg8ThCRuseuJPSDYXR50GDNR?usp=sharing) and place them in the src folder with the name "model_weights"
# 3. (optional) the annotated dataset from the following links: [train](https://drive.google.com/drive/folders/13Nph-NoTZwQFj-WOxwXtG61Wcf6fS9LR?usp=sharing), [test](https://drive.google.com/drive/folders/10-WqciDfjVAf5Qg6cJlHeigzvAWWDiQl?usp=sharing)
# ## Imports
import os
import sys
import json
import numpy as np
import time
from PIL import Image, ImageDraw
import matplotlib.pyplot as plt
# +
sys.path.append("../src")
sys.path.append("../src/Mask_RCNN")
ROOT_DIR = '../src/Mask_RCNN'
assert os.path.exists(ROOT_DIR), 'ROOT_DIR does not exist. Did you forget to read the instructions above? ;)'
import herbaria as hb
try:
from mrcnn.config import Config
import mrcnn.utils as utils
from mrcnn import visualize
import mrcnn.model as modellib
except ModuleNotFoundError:
raise ModuleNotFoundError("Mask_RCNN modules not found. Did you download the modules from https://github.com/akTwelve/Mask_RCNN to the src folder?")
# -
# ## Data path and settings
#
# You can set in the cell below all the needed paths to data and model files
# +
# directory for input images, annotations and output images
PROJECT_DIR = os.path.join("..", "data")
assert os.path.exists(PROJECT_DIR), 'PROJECT_DIR does not exist. Did you forget to read the instructions above? ;)'
# train dataset
TRAIN_DIR = os.path.join(PROJECT_DIR, "train")
TRAIN_ANNOTATIONS_FILE = os.path.join(TRAIN_DIR, "train.json")
# test dataset
TEST_DIR = os.path.join(PROJECT_DIR, "train")
TEST_ANNOTATIONS_FILE = os.path.join(TEST_DIR, "train.json")
OUTPUT_DIR = os.path.join(PROJECT_DIR,"OUTPUT_images")
# Directory to save logs and trained model
MODEL_DIR = os.path.join("..", "src", "model_weights")
# -
# ## Configuration
# ### Define configurations for training
#
# Below are the configs used to create the model and the dataset. They are based on the main Mask_RCNN class `utils.Config()` and live in the herbaria.py module. The full config is shown here for reference; the version actually used is then imported from herbaria.py.
#
class HerbariaConfig(Config):
"""Configuration for training on the herbaria dataset.
Derives from the base Config class and overrides values specific
to the herbaria dataset.
"""
# Give the configuration a recognizable name
NAME = "M_image_augm"
# Train on 1 GPU and 1 image per GPU. Batch size is 1 (GPUs * images/GPU).
GPU_COUNT = 1
IMAGES_PER_GPU = 2
# Number of classes (including background)
NUM_CLASSES = 1 + 2 # background + 2 [ 'flower', fruit]
# All of our training images are 1024x1024
IMAGE_MIN_DIM = 1024
IMAGE_MAX_DIM = 1024
# You can experiment with this number to see if it improves training
STEPS_PER_EPOCH = 4
# This is how often validation is run. If you are using too much hard drive space
# on saved models (in the MODEL_DIR), try making this value larger.
VALIDATION_STEPS = 1
# Matterport originally used resnet101, but I downsized to fit it on my graphics card
BACKBONE = 'resnet50'
# To be honest, I haven't taken the time to figure out what these do
RPN_ANCHOR_SCALES = (8, 16, 32, 64, 128)
TRAIN_ROIS_PER_IMAGE = 32
MAX_GT_INSTANCES = 50
POST_NMS_ROIS_INFERENCE = 500
POST_NMS_ROIS_TRAINING = 1000
LEARNING_RATE=0.01
config = hb.HerbariaConfig()
config.display()
# # Define the dataset
#
# As with the `HerbariaConfig` class, this class holds the specification for dataset creation. The class is shown in the cell below for reference, then actually imported from `herbaria.py`.
class HerbariaDataset(utils.Dataset):
""" Generates a COCO-like dataset, i.e. an image dataset annotated in the style of the COCO dataset.
See http://cocodataset.org/#home for more information.
"""
def load_data(self, annotation_json, images_dir):
""" Load the coco-like dataset from json
Args:
annotation_json: The path to the coco annotations json file
images_dir: The directory holding the images referred to by the json file
"""
# Load json from file
json_file = open(annotation_json)
coco_json = json.load(json_file)
json_file.close()
# Add the class names using the base method from utils.Dataset
source_name = "coco_like"
for category in coco_json['categories']:
class_id = category['id']
class_name = category['name']
if class_id < 1:
print('Error: Class id for "{}" cannot be less than one. (0 is reserved for the background)'.format(class_name))
return
self.add_class(source_name, class_id, class_name)
# Get all annotations
annotations = {}
for annotation in coco_json['annotations']:
image_id = annotation['image_id']
if image_id not in annotations:
annotations[image_id] = []
annotations[image_id].append(annotation)
# Get all images and add them to the dataset
seen_images = {}
for image in coco_json['images']:
image_id = image['id']
if image_id in seen_images:
print("Warning: Skipping duplicate image id: {}".format(image))
else:
seen_images[image_id] = image
try:
image_file_name = image['file_name']
image_width = image['width']
image_height = image['height']
except KeyError as key:
print("Warning: Skipping image (id: {}) with missing key: {}".format(image_id, key))
continue
image_path = os.path.abspath(os.path.join(images_dir, image_file_name))
image_annotations = annotations[image_id]
# Add the image using the base method from utils.Dataset
self.add_image(
source=source_name,
image_id=image_id,
path=image_path,
width=image_width,
height=image_height,
annotations=image_annotations
)
def load_mask(self, image_id):
""" Load instance masks for the given image.
MaskRCNN expects masks in the form of a bitmap [height, width, instances].
Args:
image_id: The id of the image to load masks for
Returns:
masks: A bool array of shape [height, width, instance count] with
one mask per instance.
class_ids: a 1D array of class IDs of the instance masks.
"""
image_info = self.image_info[image_id]
annotations = image_info['annotations']
instance_masks = []
class_ids = []
for annotation in annotations:
class_id = annotation['category_id']
mask = Image.new('1', (image_info['width'], image_info['height']))
mask_draw = ImageDraw.ImageDraw(mask, '1')
for segmentation in annotation['segmentation']:
mask_draw.polygon(segmentation, fill=1)
bool_array = np.array(mask) > 0
instance_masks.append(bool_array)
class_ids.append(class_id)
mask = np.dstack(instance_masks)
class_ids = np.array(class_ids, dtype=np.int32)
return mask, class_ids
# ### Create the Training and Validation Datasets
# This allows us to use the dataset class to load a training and a validation dataset (by default from the './data' folder).
#
# To download a sample train and test dataset see the `README` or follow the link at the top of this notebook
# +
# Set the data sources for images and annotations /ground truth
# train dataset
dataset_train = hb.HerbariaDataset()
dataset_train.load_data(TRAIN_ANNOTATIONS_FILE, TRAIN_DIR)
dataset_train.prepare()
# test dataset
dataset_val = hb.HerbariaDataset()
dataset_val.load_data(TEST_ANNOTATIONS_FILE, TEST_DIR)
dataset_val.prepare()
# -
# ### Display a few images from the training dataset
def display_top_masks_layered(image, mask, class_ids, class_names, limit=4):
"""Display the given image and the top few class masks."""
to_display = []
titles = []
to_display.append(image)
titles.append("H x W={}x{}".format(image.shape[0], image.shape[1]))
# Pick top prominent classes in this image
unique_class_ids = np.unique(class_ids)
mask_area = [np.sum(mask[:, :, np.where(class_ids == i)[0]])
for i in unique_class_ids]
top_ids = [v[0] for v in sorted(zip(unique_class_ids, mask_area),
key=lambda r: r[1], reverse=True) if v[1] > 0]
# Generate images and titles
for i in range(limit):
class_id = top_ids[i] if i < len(top_ids) else -1
# Pull masks of instances belonging to the same class.
m = mask[:, :, np.where(class_ids == class_id)[0]]
m = np.sum(m * np.arange(1, m.shape[-1] + 1), -1)
to_display.append(m)
titles.append(class_names[class_id] if class_id != -1 else "-")
display_images(to_display, titles=titles, cols=limit + 1, cmap="Blues_r")
dataset = dataset_train
image_ids = np.random.choice(dataset.image_ids, 4)
for image_id in image_ids:
image = dataset.load_image(image_id)
mask, class_ids = dataset.load_mask(image_id)
print(image_id)
visualize.display_top_masks(image, mask, class_ids, dataset.class_names)
# # Create the Training Model and Train
#
# A predefined way to create the model with pre-trained weights is available in `herbaria.py` through the `hb.load_trained_model()` function. Below we generate a new model directly from the mask_rcnn module
# Create model in training mode
model = modellib.MaskRCNN(mode="training", config=config,
model_dir=MODEL_DIR)
# +
# Local path to trained weights file
COCO_MODEL_PATH = os.path.join(ROOT_DIR, "mask_rcnn_coco.h5")
# Download COCO trained weights from Releases if needed
if not os.path.exists(COCO_MODEL_PATH):
utils.download_trained_weights(COCO_MODEL_PATH)
# +
# Which weights to start with?
init_with = "coco" # imagenet, coco, or last
if init_with == "imagenet":
model.load_weights(model.get_imagenet_weights(), by_name=True)
elif init_with == "coco":
# Load weights trained on MS COCO, but skip layers that
# are different due to the different number of classes
# See README for instructions to download the COCO weights
model.load_weights(COCO_MODEL_PATH, by_name=True,
exclude=["mrcnn_class_logits", "mrcnn_bbox_fc",
"mrcnn_bbox", "mrcnn_mask"])
elif init_with == "last":
# Load the last model you trained and continue training
model.load_weights(model.find_last(), by_name=True)
# -
# ## Image augmentations and callbacks
#
# The following are the image augmentations and callbacks used for our best-performing model.
# Other image augmentations can be set up using the [imgaug library](https://imgaug.readthedocs.io/en/latest/)
#
# #### NOTE:
# There are several variations of the training routine that were not implemented because they would require deeper modification of the Mask_RCNN modules, and thus would break compatibility with the original repo:
# - **optimizers**: the default optimizer is SGD, and this is hard-coded within the `train` function of the mrcnn module
# - **learning rate schedulers/modifiers**: any learning rate modifier appears to break the training routine, resulting in an error. Deeper modifications of the `train` function could solve the problem
# +
# set img augmentations and callbacks
from keras.callbacks import EarlyStopping
import imgaug
CALLBACKS=[
EarlyStopping(monitor = 'val_loss', patience=10, restore_best_weights=True),
]
my_aug = imgaug.augmenters.SomeOf(0.8, [
imgaug.augmenters.Fliplr(0.5),
imgaug.augmenters.Flipud(0.5),
imgaug.augmenters.GaussianBlur(sigma=(0.0, 5.0)),
imgaug.augmenters.Multiply((0.5, 1.5), per_channel=0.5),
imgaug.augmenters.AdditiveGaussianNoise(scale=(0, 0.2*255)),
imgaug.augmenters.AddElementwise((-40, 40))
])
# -
# ## Training
#
# Train in two stages:
#
# 1. Only the heads. Here we're freezing all the backbone layers and training only the randomly initialized layers (i.e. the ones whose weights were not loaded from MS COCO). To train only the head layers, pass `layers='heads'` to the `train()` function.
#
# 2. Fine-tune all layers. For this simple example it's not necessary, but we're including it to show the process. Simply pass `layers="all"` to train all layers.
#
#
# Train the head branches
# Passing layers="heads" freezes all layers except the head
# layers. You can also pass a regular expression to select
# which layers to train by name pattern.
start_train = time.time()
model.train(dataset_train, dataset_val,
learning_rate=config.LEARNING_RATE,
epochs=2,
layers='heads')
end_train = time.time()
minutes = round((end_train - start_train) / 60, 2)
print(f'Training took {minutes} minutes')
print('training finished at: ', time.time())
# Fine-tune deeper layers
# Passing layers="all" trains all layers; here layers="3+" trains
# ResNet stage 3 and upward. You can also pass a regular expression
# to select which layers to train by name pattern.
start_train = time.time()
model.train(dataset_train, dataset_val,
learning_rate=config.LEARNING_RATE / 10,
epochs=8,
layers="3+")
end_train = time.time()
minutes = round((end_train - start_train) / 60, 2)
print(f'Training took {minutes} minutes')
# # Prepare to run Inference
# Create a new InferenceConfig, then use it to create a new model.
# +
class InferenceConfig(hb.HerbariaConfig):
GPU_COUNT = 1
IMAGES_PER_GPU = 1
IMAGE_MIN_DIM = 1024
IMAGE_MAX_DIM = 1024
DETECTION_MIN_CONFIDENCE = 0.5
inference_config = InferenceConfig()
# -
# Recreate the model in inference mode
model = modellib.MaskRCNN(mode="inference",
config=inference_config,
model_dir=MODEL_DIR)
# +
# Get path to saved weights
# Either set a specific path or find last trained weights
# model_path = os.path.join(ROOT_DIR, ".h5 file name here")
model_path = model.find_last()
# model_path = ".."
# Load trained weights (fill in path to trained weights here)
assert model_path != "", "Provide path to trained weights"
print("Loading weights from ", model_path)
model.load_weights(model_path, by_name=True)
# -
# # Run Inference
#
# Run model.detect() on real images.
# +
import skimage
real_test_dir = fldr_path_test_images
image_paths = []
for filename in os.listdir(real_test_dir):
if os.path.splitext(filename)[1].lower() in ['.png', '.jpg', '.jpeg']:
image_paths.append(os.path.join(real_test_dir, filename))
for image_path in image_paths:
img = skimage.io.imread(image_path)
img_arr = np.array(img)
results = model.detect([img_arr], verbose=1)
r = results[0]
visualize.display_instances(img, r['rois'], r['masks'], r['class_ids'],
dataset_val.class_names, r['scores'], figsize=(5,5))
| notebooks/2__TrainAndInference.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from tqdm import tqdm_notebook as tqdm
from time import time
import json
from glob import glob
from pprint import pprint
import io
import os
import math
import pandas as pd
from collections import Counter, defaultdict, OrderedDict
from fuzzywuzzy import fuzz
import spacy
import nltk
import string
nlp = spacy.load('en')  # note: in spaCy 3+ the 'en' shortcut was removed; use spacy.load('en_core_web_sm')
# +
json_files = glob('data/cvl_formatted/*.json')
def load_file(fname):
return json.load(io.open(fname, 'r', encoding='utf-8-sig'))
data = {os.path.basename(fname)[:-5]: load_file(fname) for fname in tqdm(json_files)}
len(data)
# -
master_word_counter = Counter()
master_noun_counter = Counter()
tf_words = {key:Counter() for key in data.keys()}
tf_nouns = {key:Counter() for key in data.keys()}
word_idf = Counter()
noun_idf = Counter()
data['10809618']
# +
for key, content in tqdm(data.items()):
doc = nlp(content['content'])
# all tokens that aren't stop words or punctuation
words = [token.lemma_.lower() for token in doc if not token.is_stop and not token.is_punct]
# noun tokens that aren't stop words or punctuation
nouns = [token.lemma_.lower() for token in doc if not token.is_stop and not token.is_punct and token.pos_ == "NOUN"]
# five most common tokens
word_freq = Counter(words)
tf_words[key] += word_freq
master_word_counter += word_freq
for w in word_freq:
word_idf[w] += 1
# common_words = word_freq.most_common(5)
# five most common noun tokens
noun_freq = Counter(nouns)
tf_nouns[key] += noun_freq
master_noun_counter += noun_freq
for n in noun_freq:
noun_idf[n] += 1
# common_nouns = noun_freq.most_common(5)
# -
master_noun_counter.most_common(100)
len(master_noun_counter)
# +
# noun_idf['risk']
# -
_word_counter = list(master_noun_counter.most_common())
_noun_idf = list(noun_idf.most_common())
# +
# import pickle
# with open('./data/word_counter.pkl', 'wb') as f:
# pickle.dump((_word_counter, tf_nouns, master_noun_counter, tf_words, master_word_counter, word_idf, noun_idf), f)
# +
# import pickle
# with open('./data/cvlbow_lemma.pkl', 'wb') as f:
# pickle.dump((master_noun_counter, master_word_counter, tf_nouns, tf_words, noun_idf, word_idf), f)
# -
any(ch in string.punctuation for ch in _noun_idf[4][0])
len(list(filter(lambda x: x[1] >= 10 and not any(ch in string.punctuation+"0123456789" for ch in x[0]), _noun_idf)))
from matplotlib import pyplot as plt
plt.hist([v for (_, v ) in _word_counter], bins=100, range=(0, 100))
plt.show()
len([v for (_, v ) in _word_counter if v >= 20])
# +
# list(filter(lambda x: x[1] >= 10, _noun_idf))[-150:]
# -
filterd_nouns = list(filter(lambda x: x[1] >= 10 and not any(ch in string.punctuation+"0123456789" for ch in x[0]), _noun_idf))
len(filterd_nouns)
def tfidf(tf, df):
if tf == 0:
return 0.
tf_term = 1 + math.log(tf)
idf_term = math.log(1 + 2273. / df)
return tf_term * idf_term
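# A quick sanity check of the `tfidf` formula above (2273 is the corpus size hard-coded in the function; the definition is repeated so the cell is self-contained): a term with tf = 0 scores zero, and a term appearing once in a document but present in all 2273 documents reduces to log 2.

```python
import math

def tfidf(tf, df):
    # tf: raw term frequency in one document; df: number of documents containing the term
    if tf == 0:
        return 0.
    tf_term = 1 + math.log(tf)
    idf_term = math.log(1 + 2273. / df)
    return tf_term * idf_term

print(tfidf(0, 100))   # 0.0 — absent terms score zero
print(tfidf(1, 2273))  # log(2) ≈ 0.693 — a term present in every document
```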
# noun_idf.keys()
noun_idf[filterd_nouns[0][0]]
# +
pd_data = []
for fname in tqdm(tf_nouns):
nf = tf_nouns[fname]
title = data[fname]['title']
jdata = [('ARTICLE_ID', fname), ('TITLE', title)] + [(word[0], tfidf(tf_nouns[fname][word[0]], noun_idf[word[0]])) for word in filterd_nouns]
jdata += [('t_dom',data[fname]['top_domain']), ('b_dom', data[fname]['bottom_domain'])]
pd_data.append(OrderedDict(jdata))
# -
df = pd.DataFrame(pd_data)
df.head()
df.to_csv('data/cvl_tfidf_with_title.csv')
# +
# LINKING
# -
len(data)
# +
keys = data.keys()
list_of_links = {key: data[key]['refs'] for key in data}
keys
# -
list_of_links['18460739']
# +
# data['18460739']['title']
# +
# (me_id, link_id)
refs4 = []
for key in tqdm(list_of_links):
title = data[key]['title']
for link in list_of_links[key]:
link_title = link['LINK_TEXT']
for new_key in data:
if fuzz.ratio(link_title, data[new_key]['title']) == 100:
refs4.append((key, new_key))
# -
# len(refs)       # fuzz ratio > 90, from earlier runs (not defined in this notebook)
# len(set(refs))
# len(set(refs2)) # > 95
# len(set(refs3)) # > 98
len(set(refs4)) # == 100
# +
# len(refs4) / (2273*2273)
# +
# refs4[:100]
# +
# refs4[:10]
refs_pd = [{'citing':key[0], 'cited':key[1]} for key in set(refs4)]
refs_df = pd.DataFrame(refs_pd)
refs_df.to_csv('data/cvl_cites_100.csv')
# +
# citation_df =
# -
refs_df
| CVLBow.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Problem statement
#
# Load the dataset contained in the file cholera-dataset.csv into a DataFrame object. This dataset contains the number of reported cholera cases for each country from 1950 to 2016, as well as the number of deaths caused by cholera, among other information.
#
# After loading this dataset, obtain:
#
# (a) Only the data for India.
#
# (b) A bar chart showing the number of reported cases and of deaths caused by cholera in India, both on the same axes, as a function of the year.
#
# Click here to download the dataset.
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
# %matplotlib inline
casos_colera = pd.read_csv("C:\\Users\\Fabio\\Desktop\\Data Science\\Módulo 3\\cholera-dataset.csv")
casos_colera
plt.rcParams['figure.figsize']=15,5 # parameter used to stretch/enlarge the figure; this single setting applies to all plots below
# (a) Only the data for India.
casos_india = casos_colera[casos_colera["Country"] == "India"]
casos_india.head(5)
# (b) Create a bar chart showing the number of reported cases and of deaths caused by cholera in India, both on the same axes, as a function of the year.
# +
casos_india = casos_india.sort_values(by="Year")
caso_01 = casos_india["Number of reported cases of cholera"]
caso_02 = casos_india["Number of reported deaths from cholera"]
plt.bar(x=casos_india['Year'], height=caso_01, label='Reported cases', color='blue')
plt.bar(x=casos_india['Year'], height=caso_02, label='Reported deaths', color='red')
plt.title("Reported cholera cases and deaths in India")
plt.xlabel('Year')
plt.legend();
# -
| Lista_matplotlib_03.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
arr = np.array([1,2,3])  # avoid shadowing the built-in name `list`
arr
a = np.arange(1000)
# %time a2 = a**2
a*2 # for an array, the operation is applied to each element of a
a1 = [i for i in range(1, 1000)]
print(a1*2) # for a list, this just repeats a1 twice
b = np.array([[0,1,2],[3,4,5]])
b
c = np.array([[[1,2], [3,4]], [[5,6], [7,8]]])
c
print(c.ndim)
print(c.shape)
print(b.shape)
a = np.array([[0, 1, 2, 3], [4, 5, 6, 7]])
a
a[0, :]
a[:, 1]
a[a % 2 == 0]
# +
# boolean indexing: picks only the elements for which the mask is True
# -
a[np.array([0, 0])]
a
a = np.zeros(5)
a
a1 = np.zeros((3,3))
a1
a2 = np.ones((2,2))
a2
a3 = np.ones_like(a1) # np.zeros_like
a3
np.random.rand(3)
np.random.rand(3,3)
g = np.empty((4,3))
g
np.linspace(0, 10, 5) # start, end, number of points
np.random.seed(0)
np.random.rand(4)
np.random.randn(4) # Gaussian (standard normal) distribution
a = np.arange(12)
a.reshape(3,4)
b = a.reshape(2,2,-1)
b
b.flatten()
x = np.arange(5)
x
y = x.reshape(5,1)
y
z = x[:, np.newaxis]
z
a1 = np.ones((2,3))
a2 = np.zeros((2,2))
a1
a2
np.hstack([a1,a2])
a3 = np.zeros((4,3))
a3
np.vstack([a1,a3])
# +
# dstack stacks arrays along the depth (third) axis
# stack takes an axis argument: axis=0 stacks along rows, axis=1 along columns
# -
# the r_ method is similar to hstack, but despite being a method it uses square brackets [] instead of parentheses ()
np.r_[np.array([1,2,3]), 0, 0, np.array([4,3,5])]
a = np.array([0, 1, 2])
np.tile(a, 2)
# ### meshgrid
# +
x = np.arange(3)
x
y = np.arange(5)
y
# -
X, Y = np.meshgrid(x, y)
for x, y in zip(X, Y):
print(x, y)
[list(zip(x, y)) for x, y in zip(X, Y)]
np.diag([1,2,3])
np.identity(3)
np.eye(4)
X = np.array([[11,12,13],[21,22,23]])
X
X.T
| 06. 기초 선형대수/Numpy 연습.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# # Now You Code 4: Syracuse Weather
#
# Write a program to load the Syracuse weather data from Dec 2015 in
# JSON format into a Python list of dictionaries.
#
# The file with the weather data is in your `Now-You-Code` folder: `"NYC4-syr-weather-dec-2015.json"`
#
# You should load this data into a Python list of dictionaries using the `json` package.
#
# After you load this data, loop over the list of weather items and record whether or not the `'Mean TemperatureF'` is above or below freezing.
#
# Tally this information in a separate Python dictionary, called `stats`, so you can print it out like this:
# ```
# {'below-freezing': 4, 'above-freezing': 27}
# ```
#
#
# ## Step 1: Problem Analysis
#
# Inputs: the weather data file `NYC4-syr-weather-dec-2015.json`
#
# Outputs: a Python dictionary `stats` with the number of days above and below freezing
#
# Algorithm (Steps in Program):
#
# +
# Step 2: Write code
# -
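# One possible sketch for Step 2: the file name and the `'Mean TemperatureF'` key come from the assignment above, while the `summarize_weather` helper and its name are my own.

```python
import json

def summarize_weather(weather, freezing_f=32):
    """Tally days above/below freezing from a list of weather dicts."""
    stats = {'below-freezing': 0, 'above-freezing': 0}
    for day in weather:
        if day['Mean TemperatureF'] < freezing_f:
            stats['below-freezing'] += 1
        else:
            stats['above-freezing'] += 1
    return stats

# with open('NYC4-syr-weather-dec-2015.json') as f:
#     weather = json.load(f)
# print(summarize_weather(weather))
```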
# ## Step 3: Questions
#
# 1. What are the advantages to storing the number of days above freezing and below freezing in a Python dictionary?
# 2. What is the data type of the weather data as it is read from the file `NYC4-syr-weather-dec-2015.json` ?
# 3. Could this same program work for weather at other times in other cities? What conditions would need to be met for this to happen?
# ## Reminder of Evaluation Criteria
#
# 1. Was the problem attempted (analysis, code, and answered questions)?
# 2. Was the problem analysis thought out? (does the program match the plan?)
# 3. Does the code execute without syntax error?
# 4. Does the code solve the intended problem?
# 5. Is the code well written? (easy to understand, modular, and self-documenting, handles errors)
#
| content/lessons/10/Now-You-Code/NYC4-Syracuse-Weather.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
df = pd.read_excel('output.xlsx')
df.head()
plt.scatter(df.X,df.y)
type(df.X)
x = np.array(df.X)
y = np.array(df.y)
x = x.reshape(-1,1)
lin_reg = LinearRegression()
model = lin_reg.fit(x,y)
# Finding slope (weights) of the equation
model.coef_
# Finding intercept of the equation
model.intercept_
y_pred = lin_reg.predict(x)
r2_score(y,y_pred)
plt.scatter(x,y)
plt.plot(x,y_pred)
# If you add some outliers, the equation of the line changes and the line tries to shift towards the outliers
# setting the outliers to y_max = 200 and y_min = -200
y.max()
idx = y.argmax()
y[idx] = 200
y.max()
y.min()
idxmin = y.argmin()
y[idxmin] = -200
y.min()
plt.scatter(x,y)
model1 = lin_reg.fit(x,y)
model1.coef_
model1.intercept_
y_pred1 = lin_reg.predict(x)
plt.scatter(x,y)
plt.plot(x,y_pred1)
r2_score(y,y_pred1)
# Ridge Regression
ridge = Ridge(alpha=10,normalize=True)  # note: the normalize parameter was removed in scikit-learn 1.2; scale features with StandardScaler instead
modelr = ridge.fit(x,y)
y_pred2 = ridge.predict(x)
r2_score(y,y_pred2)
plt.scatter(x,y)
plt.plot(x,y_pred2)
| C12_Regularization_Contd/EDA.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: dev
# language: python
# name: dev
# ---
import numpy as np
from sklearn.datasets import load_iris
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import SGDClassifier as skSGDClassifier
# ### Implementation 1
# - scikit-learn loss = "hinge", penalty="l2"/"none"
# - similar to sklearn.svm.LinearSVC
# +
def _loss(x, y, coef, intercept):
p = np.dot(x, coef) + intercept
z = p * y
if z <= 1:
return 1 - z
else:
return 0
def _grad(x, y, coef, intercept):
p = np.dot(x, coef) + intercept
z = p * y
if z <= 1:
dloss = -y
else:
dloss = 0
# clip gradient (consistent with scikit-learn)
dloss = np.clip(dloss, -1e12, 1e12)
coef_grad = dloss * x
intercept_grad = dloss
return coef_grad, intercept_grad
# -
class SGDClassifier():
def __init__(self, penalty="l2", alpha=0.0001, max_iter=1000, tol=1e-3,
shuffle=True, random_state=0,
# use learning_rate = 'invscaling' for simplicity
eta0=0, power_t=0.5, n_iter_no_change=5):
self.penalty = penalty
self.alpha = alpha
self.max_iter = max_iter
self.tol = tol
self.shuffle = shuffle
self.random_state = random_state
self.eta0 = eta0
self.power_t = power_t
self.n_iter_no_change = n_iter_no_change
def _encode(self, y):
classes = np.unique(y)
y_train = np.full((y.shape[0], len(classes)), -1)
for i, c in enumerate(classes):
y_train[y == c, i] = 1
if len(classes) == 2:
y_train = y_train[:, 1].reshape(-1, 1)
return classes, y_train
def fit(self, X, y):
self.classes_, y_train = self._encode(y)
if len(self.classes_) == 2:
coef = np.zeros((1, X.shape[1]))
intercept = np.zeros(1)
else:
coef = np.zeros((len(self.classes_), X.shape[1]))
intercept = np.zeros(len(self.classes_))
n_iter = 0
rng = np.random.RandomState(self.random_state)
for class_ind in range(y_train.shape[1]):
cur_y = y_train[:, class_ind]
cur_coef = np.zeros(X.shape[1])
cur_intercept = 0
best_loss = np.inf
no_improvement_count = 0
t = 1
for epoch in range(self.max_iter):
# different from how data is shuffled in scikit-learn
if self.shuffle:
ind = rng.permutation(X.shape[0])
X, cur_y = X[ind], cur_y[ind]
sumloss = 0
for i in range(X.shape[0]):
sumloss += _loss(X[i], cur_y[i], cur_coef, cur_intercept)
eta = self.eta0 / np.power(t, self.power_t)
coef_grad, intercept_grad = _grad(X[i], cur_y[i], cur_coef, cur_intercept)
if self.penalty == "l2":
cur_coef *= 1 - eta * self.alpha
cur_coef -= eta * coef_grad
cur_intercept -= eta * intercept_grad
t += 1
if sumloss > best_loss - self.tol * X.shape[0]:
no_improvement_count += 1
else:
no_improvement_count = 0
if no_improvement_count == self.n_iter_no_change:
break
if sumloss < best_loss:
best_loss = sumloss
coef[class_ind] = cur_coef
intercept[class_ind] = cur_intercept
n_iter = max(n_iter, epoch + 1)
self.coef_ = coef
self.intercept_ = intercept
self.n_iter_ = n_iter
return self
def decision_function(self, X):
scores = np.dot(X, self.coef_.T) + self.intercept_
if scores.shape[1] == 1:
return scores.ravel()
else:
return scores
def predict(self, X):
scores = self.decision_function(X)
if len(scores.shape) == 1:
indices = (scores > 0).astype(int)
else:
indices = np.argmax(scores, axis=1)
return self.classes_[indices]
# binary classification
X, y = load_iris(return_X_y=True)
X, y = X[y != 2], y[y != 2]
X = StandardScaler().fit_transform(X)
clf1 = SGDClassifier(eta0=0.1, shuffle=False).fit(X, y)
clf2 = skSGDClassifier(learning_rate='invscaling', eta0=0.1, shuffle=False).fit(X, y)
assert np.allclose(clf1.coef_, clf2.coef_)
assert np.allclose(clf1.intercept_, clf2.intercept_)
prob1 = clf1.decision_function(X)
prob2 = clf2.decision_function(X)
assert np.allclose(prob1, prob2)
pred1 = clf1.predict(X)
pred2 = clf2.predict(X)
assert np.array_equal(pred1, pred2)
# shuffle=False penalty="none"
X, y = load_iris(return_X_y=True)
X = StandardScaler().fit_transform(X)
clf1 = SGDClassifier(eta0=0.1, shuffle=False).fit(X, y)
clf2 = skSGDClassifier(learning_rate='invscaling', eta0=0.1, shuffle=False).fit(X, y)
assert np.allclose(clf1.coef_, clf2.coef_)
assert np.allclose(clf1.intercept_, clf2.intercept_)
prob1 = clf1.decision_function(X)
prob2 = clf2.decision_function(X)
assert np.allclose(prob1, prob2)
pred1 = clf1.predict(X)
pred2 = clf2.predict(X)
assert np.array_equal(pred1, pred2)
# shuffle=False penalty="l2"
for alpha in [0.1, 1, 10]:
X, y = load_iris(return_X_y=True)
X = StandardScaler().fit_transform(X)
clf1 = SGDClassifier(eta0=0.1, alpha=alpha, shuffle=False).fit(X, y)
clf2 = skSGDClassifier(learning_rate='invscaling', eta0=0.1, alpha=alpha, shuffle=False).fit(X, y)
assert np.allclose(clf1.coef_, clf2.coef_)
assert np.allclose(clf1.intercept_, clf2.intercept_)
prob1 = clf1.decision_function(X)
prob2 = clf2.decision_function(X)
assert np.allclose(prob1, prob2)
pred1 = clf1.predict(X)
pred2 = clf2.predict(X)
assert np.array_equal(pred1, pred2)
# ### Implementation 2
# - scikit-learn loss = "squared_hinge", penalty="l2"/"none"
# - similar to sklearn.svm.LinearSVC
# +
def _loss(x, y, coef, intercept):
p = np.dot(x, coef) + intercept
z = 1 - p * y
if z > 0:
return z * z
else:
return 0
def _grad(x, y, coef, intercept):
p = np.dot(x, coef) + intercept
z = 1 - p * y
if z > 0:
dloss = -2 * y * z
else:
dloss = 0
# clip gradient (consistent with scikit-learn)
dloss = np.clip(dloss, -1e12, 1e12)
coef_grad = dloss * x
intercept_grad = dloss
return coef_grad, intercept_grad
# -
# shuffle=False penalty="none"
X, y = load_iris(return_X_y=True)
X = StandardScaler().fit_transform(X)
clf1 = SGDClassifier(eta0=0.1, shuffle=False).fit(X, y)
clf2 = skSGDClassifier(loss="squared_hinge", learning_rate='invscaling', eta0=0.1, shuffle=False).fit(X, y)
assert np.allclose(clf1.coef_, clf2.coef_)
assert np.allclose(clf1.intercept_, clf2.intercept_)
prob1 = clf1.decision_function(X)
prob2 = clf2.decision_function(X)
assert np.allclose(prob1, prob2)
pred1 = clf1.predict(X)
pred2 = clf2.predict(X)
assert np.array_equal(pred1, pred2)
# shuffle=False penalty="l2"
for alpha in [0.1, 1, 10]:
X, y = load_iris(return_X_y=True)
X = StandardScaler().fit_transform(X)
clf1 = SGDClassifier(eta0=0.1, alpha=alpha, shuffle=False).fit(X, y)
clf2 = skSGDClassifier(loss="squared_hinge", learning_rate='invscaling', eta0=0.1, alpha=alpha, shuffle=False).fit(X, y)
assert np.allclose(clf1.coef_, clf2.coef_)
assert np.allclose(clf1.intercept_, clf2.intercept_)
prob1 = clf1.decision_function(X)
prob2 = clf2.decision_function(X)
assert np.allclose(prob1, prob2)
pred1 = clf1.predict(X)
pred2 = clf2.predict(X)
assert np.array_equal(pred1, pred2)
# ### Implementation 3
# - scikit-learn loss = "modified_huber", penalty="l2"/"none"
# +
def _loss(x, y, coef, intercept):
p = np.dot(x, coef) + intercept
z = p * y
if z > 1:
return 0
elif z > -1:
return (1 - z) * (1 - z)
else:
return -4 * z
def _grad(x, y, coef, intercept):
p = np.dot(x, coef) + intercept
z = p * y
if z > 1:
dloss = 0
elif z > -1:
dloss = -2 * (1 - z) * y
else:
dloss = -4 * y
# clip gradient (consistent with scikit-learn)
dloss = np.clip(dloss, -1e12, 1e12)
coef_grad = dloss * x
intercept_grad = dloss
return coef_grad, intercept_grad
# -
# shuffle=False penalty="none"
X, y = load_iris(return_X_y=True)
X = StandardScaler().fit_transform(X)
clf1 = SGDClassifier(eta0=0.1, shuffle=False).fit(X, y)
clf2 = skSGDClassifier(loss="modified_huber", learning_rate='invscaling', eta0=0.1, shuffle=False).fit(X, y)
assert np.allclose(clf1.coef_, clf2.coef_)
assert np.allclose(clf1.intercept_, clf2.intercept_)
prob1 = clf1.decision_function(X)
prob2 = clf2.decision_function(X)
assert np.allclose(prob1, prob2)
pred1 = clf1.predict(X)
pred2 = clf2.predict(X)
assert np.array_equal(pred1, pred2)
# shuffle=False penalty="l2"
for alpha in [0.1, 1, 10]:
X, y = load_iris(return_X_y=True)
X = StandardScaler().fit_transform(X)
clf1 = SGDClassifier(eta0=0.1, alpha=alpha, shuffle=False).fit(X, y)
clf2 = skSGDClassifier(loss="modified_huber", learning_rate='invscaling', eta0=0.1, alpha=alpha, shuffle=False).fit(X, y)
assert np.allclose(clf1.coef_, clf2.coef_)
assert np.allclose(clf1.intercept_, clf2.intercept_)
prob1 = clf1.decision_function(X)
prob2 = clf2.decision_function(X)
assert np.allclose(prob1, prob2)
pred1 = clf1.predict(X)
pred2 = clf2.predict(X)
assert np.array_equal(pred1, pred2)
# ### Implementation 4
# - scikit-learn loss = "log", penalty="l2"/"none"
# - similar to sklearn.linear_model.LogisticRegression
# +
def _loss(x, y, coef, intercept):
p = np.dot(x, coef) + intercept
z = p * y
# follow scikit-learn
if z > 18:
return np.exp(-z)
elif z < -18:
return -z
else:
return np.log(1 + np.exp(-z))
def _grad(x, y, coef, intercept):
p = np.dot(x, coef) + intercept
z = p * y
if z > 18:
dloss = -np.exp(-z) * y
elif z < -18:
dloss = -y
else:
dloss = -y / (1 + np.exp(z))
# clip gradient (consistent with scikit-learn)
dloss = np.clip(dloss, -1e12, 1e12)
coef_grad = dloss * x
intercept_grad = dloss
return coef_grad, intercept_grad
# -
# shuffle=False penalty="none"
X, y = load_iris(return_X_y=True)
X = StandardScaler().fit_transform(X)
clf1 = SGDClassifier(eta0=0.1, shuffle=False).fit(X, y)
clf2 = skSGDClassifier(loss="log", learning_rate='invscaling', eta0=0.1, shuffle=False).fit(X, y)
assert np.allclose(clf1.coef_, clf2.coef_)
assert np.allclose(clf1.intercept_, clf2.intercept_)
prob1 = clf1.decision_function(X)
prob2 = clf2.decision_function(X)
assert np.allclose(prob1, prob2)
pred1 = clf1.predict(X)
pred2 = clf2.predict(X)
assert np.array_equal(pred1, pred2)
# shuffle=False penalty="l2"
for alpha in [0.1, 1, 10]:
X, y = load_iris(return_X_y=True)
X = StandardScaler().fit_transform(X)
clf1 = SGDClassifier(eta0=0.1, alpha=alpha, shuffle=False).fit(X, y)
clf2 = skSGDClassifier(loss="log", learning_rate='invscaling', eta0=0.1, alpha=alpha, shuffle=False).fit(X, y)
assert np.allclose(clf1.coef_, clf2.coef_)
assert np.allclose(clf1.intercept_, clf2.intercept_)
prob1 = clf1.decision_function(X)
prob2 = clf2.decision_function(X)
assert np.allclose(prob1, prob2)
pred1 = clf1.predict(X)
pred2 = clf2.predict(X)
assert np.array_equal(pred1, pred2)
| linear_model/SGDClassifier.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# - title: Distributional Deep Q-Learning
# - summary: Expanding DQN to produce estimates of return distributions, and an exploration into why this helps learning
# - author: <NAME>
# - date: 2019-07-26
# - image: /static/images/distributional_bellman.png
# # Overview
#
# I recently stumbled upon the world of distributional Q-learning, and I hope to share some of the insights I've made from reading the following papers:
#
# * A Distributional Perspective on Reinforcement Learning: https://arxiv.org/abs/1707.06887
# * Implicit Quantile Networks for Distributional Reinforcement Learning: https://arxiv.org/abs/1806.06923
#
# This article will loosely work through the two papers in order, as they build on each other, but hopefully I can trim off most of the extraneous information and present you with a nice overview of distributional RL, how it works, and how to improve upon the most basic distributional algorithms to get to the current state-of-the-art.
#
# First I'll introduce distributional Q-learning and try to provide some motivations for using it. Then I'll highlight the strategies used in the development of C51, one of the first highly successful distributional Q-learning algorithms (paper #1). Then I'll introduce implicit quantile networks (IQNs) and explain their improvements to C51 (paper #2).
#
# *Quick disclaimer: I'm assuming you're familiar with how Q-learning works. That includes V and Q functions, Bellman backups, and the various learning stability tricks like target networks and replay buffers that are commonly used.*
#
# *Another important note is that these algorithms are only for **discrete** action spaces.*
# # Motivations for Distributional Deep Q-Learning
#
# In standard Q-Learning, we attempt to learn a function $Q(s, a): \mathcal{S \times A} \rightarrow \mathbb{R}$ that maps state-action pairs to the expected return from that state-action pair. This gives us a pretty accurate idea of how good specific actions are in specific states (if our $Q$ is accurate), but it's missing some information. There exist distributions of returns that we can receive from each state-action pair, and the expectations/means of these distributions are what $Q$ attempts to learn. But why only learn the expectation? Why not try to learn the whole distribution?
#
# Before diving into the algorithms that have been developed for this specific purpose, it's helpful to think about why this is beneficial in the first place. After all, learning a distribution is a lot more complicated than learning a single number, and we don't want to waste precious computational resources on doing something that doesn't help much.
#
# ### Stabilized Learning
# The first possibility I'll throw out there is that learning distributions could stabilize learning. This may seem unintuitive at first, seeing as we're trying to learn something much more complicated than an ordinary $Q$ function. But let's think about what happens when stochasticity in our environment results in our agent receiving a highly unusual return. I'll use the example of driving a car through an intersection.
#
# Let's say you're waiting at a red light that turns green. You begin to drive forward, expecting to simply cruise through the intersection and be on your way. Your internal model of your driving is probably saying "there's no way anything bad will happen if you go straight right now", and there's no reason to think otherwise. But now let's say another driver on the road perpendicular to yours runs straight through their red light and crashes into you. You would be right to be incredibly surprised by this turn of events (and hopefully not dead, either), but how surprised should you be?
#
# If your internal driving model was based only on expected returns, then you wouldn't predict that this accident would occur at all. And since it just *did* happen, you may be tempted to drastically change your internal model and, as a result, be scared of intersections for quite a bit until you're convinced that they're safe again; however, what if your driving model was based on a *distribution* over all possible returns? If you mentally assigned a probability of 0.00001 to this accident occurring, and if you've driven through 100,000 intersections before throughout your lifetime, then this accident isn't really that surprising. It still totally sucks and your car is probably totaled, but you shouldn't be irrationally scared of intersections now. After all, you just proved that your model was right!
#
# So yeah that's kinda dark, but I think it highlights how learning a distribution instead of an expectation can reduce the effects of environment stochasticity.<sup>1</sup>
#
#
# ### Risk Sensitive Policies
# Using distributions over returns also allows us to create brand new classes of policies that take risk into account when deciding which actions to take. I'll use another example that doesn't involve driving but is equally deadly :) Let's say you need to cross a gorge in the shortest amount of time possible (I'm not sure why, but you do. This is a poorly formulated example). You have two options: you can cross a sketchy bridge that looks like it may fall apart at any moment, or you can walk down a set of stairs on one side of the gorge and then up a set of stairs on the other side. The latter option is incredibly safe. It'll still take significantly longer than using the bridge, though, so is it worth it?
#
# For the purposes of this example, let's give dying a reward of $-1000$ and give every non-deadly interaction with the environment a reward of $-1$. Let's also say that taking the bridge gets you across the gorge in $10$ seconds with probability $0.5$ of making it across safely. Taking the stairs gets you across the gorge $100\%$ of the time, but it takes $100$ seconds instead.
#
# Given this information, we can quickly calculate expected returns for each of the two actions
#
# $$
# \mathbb{E}[\text{return}_\text{bridge}] = (-1000 * 0.5) + (-10 * 0.5) = -505 \\
# \mathbb{E}[\text{return}_\text{stairs}] = -100
# $$
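#
# These two expectations can be checked with a quick script (the rewards and probabilities below are the ones assumed in this example):

```python
# Rewards and probabilities from the gorge example above.
p_fall = 0.5
r_death, r_bridge_cross, r_stairs_cross = -1000, -10, -100

# Bridge: die with probability 0.5, otherwise cross in 10 seconds.
ev_bridge = p_fall * r_death + (1 - p_fall) * r_bridge_cross

# Stairs: always make it, but it takes 100 seconds.
ev_stairs = r_stairs_cross

print(ev_bridge, ev_stairs)  # -505.0 -100
```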
#
# If you made decisions like a standard Q-learning agent, you would never take the bridge. The expected return is much worse than that of taking the stairs, so there's no reason to choose it. But if you made decisions like a distributional Q-learning agent, your decision can be much more well informed. You can be aware of the probability of dying vs. getting across the gorge more quickly by using the bridge. If the risk of falling to your death is worth it in your particular situation (let's say you're being chased by a wild animal who can run much faster than you), then taking the bridge instead of the stairs could end up being what you want.
#
# Although this example was pretty contrived, it highlights how using return distributions allows us to choose policies that before would have been impossible to formulate. Want a policy that takes as little risk as possible? We can do that now. Want a policy that takes as much risk as possible? Go right ahead, but please don't fall into any gorges.
# # The Distributional Q-Learning Framework
#
# So now we have a few reasons why using distributions over returns instead of just expected return can be useful, but we need to formulate a few things first so that we can use Q-learning strategies in this new setting.
#
# We'll define $Z(s, a)$ to be the distribution of returns at a given state-action pair, where $Q(s, a)$ is the expected value of $Z(s, a)$.
#
# The usual Bellman equation for $Q$ is defined
#
# $$
# Q(s, a) = \mathbb{E}[r(s, a)] + \gamma \mathbb{E}[Q(s', a')]
# $$
#
# Now we'll change this to be defined in terms of entire distributions instead of just expectations by using $Z$ instead of $Q$. We'll denote the distribution of rewards for a single state-action pair $R(s,a)$.
#
# $$
# Z(s, a) = R(s, a) + \gamma Z(s', a')
# $$
#
# All we need now is a way of iteratively enforcing this Bellman constraint on our $Z$ function. With standard Q-learning, we can do that quite simply by minimizing mean squared error between the outputs of a neural network (which approximates $Q$) and the values $\mathbb{E}[r(s, a)] + \gamma \mathbb{E}[Q(s', a')]$ computed using a target Q-network and transitions sampled from a replay buffer.
#
# Such a straightforward solution doesn't exist in the distributional case because the output from our Z-network is so much more complex than from a Q-network. First we have to decide what kind of distribution to output. Can we approximate return distributions with a simple Gaussian? A mixture of Gaussians? Is there a way to output a distribution of arbitrary complexity? Even if we can output really complex distributions, can we sample from that in a tractable way? And once we've decided on how we'll represent the output distribution, we'll then have to choose a new metric to optimize other than mean squared error since we're no longer working with just scalar outputs. Many ways of measuring the difference between probability distributions exist, but we'll have to choose one to use.
#
# These two problems are what the C51 and IQN papers deal with. They both take different approaches to approximating arbitrarily complex return distributions, and they optimize them differently as well. Let's start off with C51: the algorithm itself is a bit complex, but its foundational ideas are rather simple. I won't dive into the math behind C51, and I'll instead save that for IQN since that's the better algorithm.
# # C51
#
# The main idea behind C51 is to approximate the return distribution using a set of discrete bars which the paper authors call 'atoms'. This is like using a histogram to plot out a distribution. It's not the most accurate, but it gives us a good sense of what the distribution looks like in general. This strategy also leads to an optimization strategy that isn't too computationally expensive, which is what we want.
#
# Our network can simply output $N$ probabilities, where all $N$ probabilities sum to $1$. Each of these probabilities represents one of the bars in our distribution approximation. The paper recommends using 51 atoms (network outputs) based on empirical tests, but the algorithm is defined so that you don't need to know the number of atoms beforehand.
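#
# Concretely, the network's final layer can emit $N$ raw scores per action and normalize them with a softmax so they form a valid distribution. A minimal sketch (the paper's exact architecture isn't reproduced here, and the logits are made-up toy values):

```python
import math

def softmax(logits):
    # Subtract the max for numerical stability, then exponentiate and normalize.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

atom_logits = [0.5, -1.2, 2.0, 0.0]   # toy raw network outputs for N = 4 atoms
atom_probs = softmax(atom_logits)
print(sum(atom_probs))  # ~1.0
```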
#
# To minimize the difference between our current distribution outputs and their target values, the paper recommends minimizing the KL divergence of the two distributions. They accomplish this indirectly by minimizing the cross entropy between the distributions instead.
#
# The idea behind this is simple enough, but the math gets a bit funky. Since the distribution that our network outputs is split into discrete units, the theoretical Bellman update has to be projected into that discrete space and the probabilities of each atom distributed to neighboring atoms to keep the distribution relatively smooth.
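#
# Here's a pure-Python sketch of that projection for a single transition, assuming a fixed, evenly spaced support over $[v_\text{min}, v_\text{max}]$ (the variable names are mine, not the paper's pseudocode):

```python
import math

def project_bellman_update(reward, probs_next, z, gamma):
    """Project the shifted/shrunk atoms r + gamma * z_j back onto the fixed
    support z, splitting each atom's probability between its two nearest
    neighbors so the distribution stays relatively smooth."""
    v_min, v_max = z[0], z[-1]
    delta_z = (v_max - v_min) / (len(z) - 1)
    m = [0.0] * len(z)
    for p_j, z_j in zip(probs_next, z):
        tz = min(max(reward + gamma * z_j, v_min), v_max)  # clip to the support
        b = (tz - v_min) / delta_z                         # fractional atom index
        l, u = int(math.floor(b)), int(math.ceil(b))
        if l == u:                       # lands exactly on an atom
            m[l] += p_j
        else:                            # split between the two neighbors
            m[l] += p_j * (u - b)
            m[u] += p_j * (b - l)
    return m

n_atoms = 51                             # as recommended by the paper
z = [-10 + 20 * j / (n_atoms - 1) for j in range(n_atoms)]
m = project_bellman_update(1.0, [1 / n_atoms] * n_atoms, z, gamma=0.99)
print(sum(m))  # still a valid distribution: sums to ~1
```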
#
# To actually use the discretized distribution to make action choices, the paper authors just use the weighted mean of the atoms. This weighted mean is effectively just an approximation of the standard Q-value.
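#
# In code, recovering Q-values and a greedy action from the atoms is a one-liner per action (the support and per-action probabilities below are made-up toy values, not outputs of a trained network):

```python
# Toy 5-atom support and hypothetical per-action atom probabilities.
z = [-2.0, -1.0, 0.0, 1.0, 2.0]
atom_probs = {
    "left":  [0.1, 0.2, 0.4, 0.2, 0.1],
    "right": [0.0, 0.1, 0.2, 0.3, 0.4],
}

# Q(s, a) is the probability-weighted mean of the support.
q = {a: sum(p_i * z_i for p_i, z_i in zip(p, z)) for a, p in atom_probs.items()}
greedy_action = max(q, key=q.get)
print(q, greedy_action)
```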
# # IQN
#
# C51 works well, but it has some pretty obvious flaws. First off, its distribution approximations aren't going to be very precise. We can use a massive neural network during training, but all those neurons' information gets funneled into just $N$ output atoms at the end of the day. This is the bottleneck on how accurate our network can get, but increasing the number of atoms will increase the amount of computation our algorithm requires.
#
# A second issue with C51 is that it doesn't take full advantage of knowing return distributions. When deciding which actions to take, it just uses the mean of its approximate return distribution. Under optimality, this is really no different than standard Q-learning.
#
# Implicit quantile networks address both of these issues: they allow us to approximate much more complex distributions without additional computation requirements, and they also allow us to easily decide how risky our agent will be when acting.
#
#
# ### Implicit Networks
# The first issue with C51 is addressed by not explicitly representing a return distribution with our neural networks. If we do this, then our chosen representation of the distribution acts as a major bottleneck in terms of how accurate our approximations can be. Additionally, sampling from arbitrarily complex distributions is intractable if we want to represent them explicitly. IQN's solution: don't train a network to explicitly represent a distribution, train a network to provide samples from the distribution instead.
#
# Since we aren't explicitly representing any distributions, that means our accuracy bottleneck rests entirely in the size of our neural network. This means we can easily make our distribution approximations more accurate without adding on much to the amount of required computation.
#
# Additionally, since our network is being trained to provide us samples from some unknown distribution, the intractable sampling problem goes away.
#
# The second issue with C51 (not using risk-sensitive policies) is also addressed by using implicit networks. We haven't gone over how we'll actually *implement* such networks, but trust me when I say that we'll be able to easily manipulate the input to them to induce risky or risk-averse action decisions.
#
#
# ### Quantile Functions
# Before we go through the implementation of these mysterious implicit networks, we have to go over a few other things about probability distributions that we'll use when deriving the IQN algorithm.
#
# First off, every probability distribution has what's called a cumulative distribution function (CDF). If the probability of getting the value $35$ out of a probability distribution $P(X)$ is denoted $P(X = 35)$, then the *cumulative* probability of getting $35$ from that distribution is $P(X \leq 35)$.
#
# The CDF of a distribution does exactly that, except it defines a cumulative probability for every possible output of the distribution. You can think of the CDF as really just an integral from the beginning of a distribution up to a given point on it. A nice property of CDFs is that their outputs are bounded between 0 and 1. This should be pretty intuitive, since the integral over a probability distribution has to be equal to 1. An example of a CDF for a unit Gaussian distribution is shown below.
#
# <img alt="CDF" src="{static}/images/cdf.svg" />
#
# Quantile functions are closely related to CDFs. In fact, they're just the inverse. CDFs take in an $x$ and return a probability, but quantile functions take in a probability and return an $x$. The quantile function for a unit Gaussian (same as with the previous example CDF) is shown below.
#
# <img alt="Quantile Function" src="{static}/images/quantile.svg" />
#
#
# ### Representing an Implicit Distribution
# Now we can finally get to the fun stuff: figuring out how to represent an arbitrarily complex distribution implicitly. Seeing as I just went on a bit of a detour to talk about quantile functions, you probably already know that that's what we're gonna use. But how and why will that work for us?
#
# First off, quantile functions all have the same input domain, regardless of whatever distribution they're for. Your distribution could be uniform, Gaussian, energy-based, whatever really, and its quantile function would only accept input values between 0 and 1. Since we want to represent any arbitrary distribution, this definitely seems like a property that we want to take advantage of.
#
# Additionally, using quantile functions allows us to sample directly from our distribution without ever having an explicit representation of the distribution. Sampling from the uniform distribution $U([0, 1])$ and passing that as input to our quantile function is equivalent to sampling directly from $Z(s, a)$. Since we can implement this entirely within a neural network, this means there's no major accuracy bottleneck either.
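#
# Both properties can be demonstrated with Python's `statistics.NormalDist`, standing in for a learned quantile function (the Gaussian here is just an assumed example; a trained $Z$-network would replace it):

```python
import random
from statistics import NormalDist, fmean

z = NormalDist(mu=0.0, sigma=1.0)      # stand-in return distribution

# The quantile function is the inverse of the CDF: it round-trips any value.
x = z.inv_cdf(z.cdf(0.5))              # recovers 0.5

# Inverse-transform sampling: tau ~ U([0, 1]) pushed through the quantile
# function is a draw from the distribution itself.
random.seed(0)
samples = [z.inv_cdf(max(random.random(), 1e-12))  # clamp: inv_cdf(0) is undefined
           for _ in range(20000)]
mean_estimate = fmean(samples)         # approximates Q(s, a); ~0 here
```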
#
# We can also add in another feature to our implicit network to give us the ability to make risk-sensitive policy decisions. We can quite simply distort the input to our quantile network. If we want to make the tails of our distribution less important, for example, then we can map input values closer to 0.5 before passing them to our quantile function.
#
#
# ### Formalization
# We've gone over a lot, so let's take a step back and formalize it a bit. The usual convention for denoting a quantile function over random variable $Z$ (our return) would be $F^{-1}_{Z}(\tau)$, where $\tau \in [0, 1]$. For simplicity's sake, though, we'll define
#
# $$
# Z_\tau \doteq F^{-1}_{Z}(\tau)
# $$
#
# We can also define sampling from $Z(s, a)$ with the following
#
# $$
# Z_\tau(s, a), \\
# \tau \sim U([0, 1])
# $$
#
# To distort our $\tau$ values, we'll define a mapping
#
# $$
# \beta : [0, 1] \rightarrow [0, 1]
# $$
#
# Putting these definitions together, we can reclaim a new distorted Q-value
#
# $$
# Q_{\beta}(s, a) \doteq \mathbb{E}_{\tau \sim U([0, 1])} [Z_{\beta(\tau)}(s, a)]
# $$
#
# To define our policy, we can just take whichever action maximizes this distorted Q-value
#
# $$
# \pi_{\beta}(s) = \arg\max\limits_{a \in \mathcal{A}} Q_{\beta}(s, a)
# $$
#
#
# ### Optimization
# Now to figure out a way to iteratively update our distribution approximations... We'll use Huber quantile loss, a nice metric that extends Huber loss to work with quantiles instead of just scalar outputs
#
# $$
# \rho^\kappa_\tau(\delta_{ij}) = | \tau - \mathbb{I}\{ \delta_{ij} < 0 \} | \frac{\mathcal{L}_\kappa(\delta_{ij})}{\kappa}, \text{with} \\
# \mathcal{L}_\kappa(\delta_{ij}) = \begin{cases}
# \frac{1}{2} \delta^2_{ij} &\text{if } | \delta_{ij} | \leq \kappa \\
# \kappa (| \delta_{ij} | - \frac{1}{2} \kappa) &\text{otherwise}
# \end{cases}
# $$
#
# This is a messy loss term, but it essentially tries to minimize TD error while keeping the network's output close to what we expect the quantile function to look like (according to our current approximation).
#
# This loss metric is based on the TD error $\delta_{ij}$, which we can define just like normal TD error
#
# $$
# \delta_{\tau_i, \tau_j} = r + \gamma Z_{\tau_j}(s', \pi_\beta(s')) - Z_{\tau_i}(s, a)
# $$
#
# Notice how in this definition, $i$ and $j$ act as two separate $\tau$ samples from the $U([0, 1])$ distribution. We use two separate $\tau$ samples to keep the terms in the TD error definition decorrelated. To get a more accurate estimation of the loss, we'll sample it multiple times in the following fashion
#
# $$
# \mathcal{L} = \frac{1}{N'} \sum_{i=1}^N \sum_{j=1}^{N'} \rho^\kappa_{\tau_i}(\delta_{\tau_i, \tau_j})
# $$
#
# where $\tau_i$ and $\tau_j$ are both newly sampled for every term in the summation.
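#
# A pure-Python sketch of this loss, with the quantile networks replaced by stand-in callables (`z_current` and `z_target` are hypothetical placeholders for the online and target networks; they are not from the paper's code):

```python
import random

def huber(delta, kappa=1.0):
    # Quadratic inside |delta| <= kappa, linear outside.
    if abs(delta) <= kappa:
        return 0.5 * delta ** 2
    return kappa * (abs(delta) - 0.5 * kappa)

def quantile_huber(delta, tau, kappa=1.0):
    # rho^kappa_tau: weight the Huber loss asymmetrically by the quantile level.
    indicator = 1.0 if delta < 0 else 0.0
    return abs(tau - indicator) * huber(delta, kappa) / kappa

def iqn_loss(z_current, z_target, reward, gamma, n=8, n_prime=8, kappa=1.0):
    # z_current / z_target: callables tau -> sampled return (stand-in networks).
    total = 0.0
    for _ in range(n):
        tau_i = random.random()
        for _ in range(n_prime):
            tau_j = random.random()
            delta = reward + gamma * z_target(tau_j) - z_current(tau_i)
            total += quantile_huber(delta, tau_i, kappa)
    return total / n_prime

# At tau = 0.9, positive TD errors are penalized 9x more than negative ones.
print(quantile_huber(0.5, 0.9), quantile_huber(-0.5, 0.9))
```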
#
# Finally, we'll approximate $\pi_\beta$, which we defined earlier, using a similar sampling technique
#
# $$
# \tilde{\pi}_\beta(s) = \arg\max\limits_{a \in \mathcal{A}} \frac{1}{K} \sum_{k=1}^K Z_{\beta(\tau_k)}(s, a)
# $$
#
# where $\tau_k$ is newly sampled every time as well.
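#
# Putting the pieces together, here's a sketch of risk-sensitive action selection. Gaussian quantile functions stand in for a trained network, and the two "actions" mirror the earlier bridge/stairs example (all numbers are invented for illustration):

```python
import random
from statistics import NormalDist

# Stand-in quantile functions: "bridge" has the higher mean return but a fat
# lower tail; "stairs" is worse on average but nearly deterministic.
Z = {
    "bridge": NormalDist(mu=-50, sigma=300).inv_cdf,
    "stairs": NormalDist(mu=-100, sigma=1).inv_cdf,
}

def policy(beta, k=10000):
    random.seed(1)  # fixed seed just to make the sketch reproducible
    q = {}
    for action, z in Z.items():
        taus = (beta(random.random()) for _ in range(k))
        # Clamp away from 0 and 1, where inv_cdf is undefined.
        q[action] = sum(z(min(max(t, 1e-9), 1 - 1e-9)) for t in taus) / k
    return max(q, key=q.get)

risk_neutral = policy(lambda tau: tau)        # beta = identity: ordinary Q-values
risk_averse = policy(lambda tau: 0.25 * tau)  # only consult the lower tail
print(risk_neutral, risk_averse)  # bridge stairs
```

With the identity $\beta$, the bridge's higher mean wins; squeezing $\tau$ into the lower quartile makes the agent fear the bridge's fat lower tail and take the stairs.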
#
# That was a lot, but it's all we need to make an IQN. We could spend time thinking about different choices of $\beta$, but that's really a choice that depends on your specific environment. And during implementation, you can just decide that $\beta$ will be the identity function and then change it later if you think you can get better performance with risk-aware action selection.
# # Review
#
# We started off with the more obvious way of implementing distributional deep Q-learning, which was explicitly representing the return distribution. Although it worked well, using an explicit representation of the return distribution created an accuracy bottleneck that was hard to overcome. It was also difficult to inject risk-sensitivity into the algorithm.
#
# Using an implicit distribution instead allowed us to get around those two problems, giving us much greater representational power and allowing us much greater control over how our agent handles risk.
#
# Of course, there's always room for improvement. Small techniques like using prioritized experience replay and n-step returns for calculating TD error can be used to make the IQN algorithm more powerful. And since distributional RL is still a pretty new field, there will no doubt be major improvements coming down the academia pipeline to be on the lookout for.
# # Footnotes
# <sup>1</sup> see paper #1, section 6.1 for a short discussion of what the paper authors call 'chattering'
| content/Distributional Deep Q-Learning.ipynb |